At ACEx2026, Dr. Tim Renick and Dr. Kathryn Crowther detailed the results and strategic implications of an innovative pilot program between the National Institute for Student Success (NISS), Georgia State University (GSU), and Studiosity.
In an era demanding both efficiency and personalized learning, GSU sought a student support solution that could scale effectively across their diverse student body. A micro-pilot took place in Spring 2025, offering Studiosity to 100 students, and was expanded for the Fall to 1,000 students across multiple programs and disciplines. GSU strategically adopted Studiosity's AI-powered writing feedback tool and foundational skills support to determine efficacy and impact.
This session unpacks:
- The decision-making process for integrating an external, AI-powered academic support service into NISS/GSU's existing student success ecosystem
- Key data demonstrating the correlation between use of this on-demand support and measurably increased student confidence and learning outcomes, particularly in gateway and foundation courses
- How the model provided equitable access to immediate feedback, acting as a force-multiplier for existing support centers and helping GSU further close achievement gaps
Featuring:
- Dr. Timothy M. Renick, Executive Director, National Institute for Student Success
- Dr. Kathryn Crowther, Director of Teaching Effectiveness & Learning Design in the Center for Excellence in Teaching, Learning and Online Education at Georgia State University
Tim Renick:
Well, good morning everyone. Thanks for being here, appreciate it. We know it's kinda towards the end of the conference. As I was saying, it's always a little disheartening to have your session scheduled while they're vacuuming in the exhibition room next door, but we appreciate you being the diehards and the dedicated. We hope to make this worth your time this morning. I'm Tim Renick. I'm the Executive Director of the National Institute for Student Success at Georgia State. I'll have Katy introduce herself in just a couple minutes. But we're going to talk this morning about using technology to deliver personalized attention to students. Specifically, we're gonna concentrate on a pilot we've been doing with Studiosity, which is using technology and AI for high-level writing assistance.

But I'm gonna start this morning by setting the stage, by pointing out that this kind of work has developed over the course of really a decade of trying to find ways to be more equitable in our delivery of supports to students at institutions that may not be particularly well-resourced. And that's a good description of Georgia State University. We're one of the largest minority-serving institutions in the country, 53,000 students across six campuses. Our main campus, which you're looking at in this picture, is in downtown Atlanta. Our claim to fame nationally is not being particularly strong in any reputational area. Historically, what we've been best noted for, maybe our most distinctive trait, is our geography, right? Our campus is bisected by a National Park Service Historic District. The Martin Luther King District slices our campus in half. My office, where I am every morning, is in a building on Auburn Avenue, a few blocks away from the brick building at the bottom of the screen. That's Ebenezer Baptist Church, where the Reverend Martin Luther King was head pastor. His grave site, his childhood home, all located right in the midst of Georgia State University.

But the sad reality is that for much of our history, we didn't live up to that legacy. I pulled these photos from the Georgia State photo archive. That's what Georgia State looked like when the Reverend King was doing his Nobel Peace Prize-winning civil rights work. And we were whites only, a segregated campus into the 1960s. So the sad irony of that: this great world leader in civil rights is preaching a couple blocks away from a campus that is the embodiment of what he's fighting against. But even much more recently than the 1960s, Georgia State was still grossly underperforming, especially for our students of color. These were our bachelor's graduation rates as of about 20 years ago. They hovered around 30%. They were even lower for students from underserved backgrounds: Black, Hispanic, low-income students. And it was so predictable that we weren't surprised year after year when we would see this pattern repeated.

And what made Georgia State's story both interesting, I hope, for your purposes, but also a good testing ground for some of the approaches we're gonna be discussing this morning, is that while Georgia State was starting at this low baseline, 30% graduation rates, we didn't have a lot of things working in our favor. In the 20 years since then, we've gone through very dramatic demographic changes that usually don't correlate to greatly improved graduation rates. So a student body that, when I arrived as a junior faculty member out of my PhD program, was 75% white, now 80% of the undergraduates self-identify as non-white.
There's no majority population at Georgia State. The largest single demographic group is Black students now, about 40 percent of the student body. Obviously that typically does not correlate to greatly improved graduation rates, nor does this second detail on the right of the screen there. We've gone through another transition over the same period where our students have gotten a lot less well-resourced financially. Before the last recession, under a third of our students were Pell eligible. That number now is around 60 percent. This spring term at Georgia State we're currently enrolling 29,000 Pell students, so 29,000 low-income students at one campus. The entire Ivy League combined this spring is enrolling about 9,800 Pell students. We're trying to educate three times the number of low-income students as the entire Ivy League. It's not gonna be business as usual. It's also not the occasion to do something we love to do in higher education, which is, if you've got a challenge, let's create a program. We'll create a special program for this group, a special cohort for that group; it'll be an African American male initiative, it'll be a Latinx initiative. That may work in some contexts, I'm not saying that's never the right approach.
I am saying when you enroll 29,000 low-income students, you don't create a program for them, right? You need to come up with strategies that scale across the whole student body. So what we did at Georgia State now, starting really about 15 years ago, was take a different approach. We began to put the focus on our own processes and systems, not because they're the only reason students were dropping out of Georgia State.
Yes, some students were dropping out because they were academically underprepared. Some students were dropping out because they were financially strapped and couldn't afford to stay enrolled. But we focused on the issues that were under our control, because these were areas we could potentially correct to improve student outcomes. So we got better with the data. We used the data to analyze our own systems, and when we found a system or a support that was failing our students, we piloted alternatives and then scaled them up across the student body. What we're talking about this morning, with regard to using adaptive approaches to improving student writing, is another example, another iteration of the same approach.

Just to very quickly give you a couple examples: 10, 15 years ago, we were having a huge problem with students switching between majors. The average bachelor's graduate of Georgia State was going through 2.6 majors before they completed their degree. Whose fault was that? We enroll mostly low-income, first-gen students. They come in, we've got 80-something undergraduate majors, and they were overwhelmed by the choices, making choices without much context and switching later on in their programs. It was a disaster for our low-income students. It was one of the reasons our data showed our low-income students were struggling. Because if you switch majors two and a half times and you have limited eligibility for your aid, you're likely to run out of eligibility before you complete any program. And so we've created a model where all of our first-year students now are in meta-major-based learning communities and get introduced to majors and careers in a structured way before they leave really the first semester of their enrollment.

We've also looked at students who were dropping out in droves from the university. We were losing about 6,000 students a year to attrition. These were students who were eligible academically to re-enroll, but didn't do so. So in 2012, we began using predictive analytics to identify early warning signs for students who were at risk, oftentimes before they themselves could identify that they were at risk, and reaching out to the students in a proactive and systematic fashion. We're now tracking every Georgia State undergraduate for over 800 risk factors every day. Last year we had over 100,000 instances where we proactively reached out to students because we saw they were registered for the wrong class, underperforming in the early weeks of a course, and so forth.

A third and final example I'll share is our use of data to deliver financial aid. We were losing about 1,000 students every semester 15 years ago because they couldn't pay their tuition and fees. Not because they were academically ineligible, not because they didn't want to be enrolled. They were registered, but they couldn't pay. And so we started a program that requires no application whatsoever, where we look for students who are close to graduating, making good academic progress, but running out of eligibility for their aid programs, and we provide them with micro-grants to keep them enrolled, proactively, no questions asked, no payback necessary. We've given over 28,000 of these grants over a little more than a decade, with over 80% of these students going on to graduate.
By the way, if a student stops out for financial reasons, and we recently completed a study of over 80,000 undergraduates through the institute I direct, their chances of completing their degree are about 25%. So these students leaving for financial reasons, intending to come back, actually rarely do so.
So these have been part of the strategy to deal with a student body that is facing risk factors across the board. Have they been impactful? Well, over the last decade, despite those demographic shifts you saw a moment ago, Georgia State is graduating 3,500 more students every year, an 84% improvement overall. But the biggest gains in degrees awarded have been for the students who were struggling the most before we put these scaled, systematic, and proactive supports in place. So we're up 84% overall; you can see from this screen much greater increases for our Black, our Pell, and our Hispanic students. Because those gains have been not only strong but disproportionately powerful for students from underserved backgrounds, it's had a profound impact on our graduation rates. They haven't improved incrementally, they've improved exponentially. For some demographic groups, they've more than doubled; they've come close to tripling. And we've had seven years in a row now where our Black, Hispanic, and low-income students are graduating at or above the rate of the student body overall. So no equity gaps, at least based on race, ethnicity, and income, when it comes to graduation rates. And this institution in the shadow of the Martin Luther King District, which was segregated into the 1960s and still grossly underperforming for our students of color a decade or so ago, now, and it's been seven years in a row as well, confers more bachelor's degrees to African American students than any other college or university in the U.S.

One last point about this approach in general is that we sometimes get asked: more students are graduating, but are they succeeding after they graduate? Are they learning anything? And we've just completed, with the Burning Glass Institute, a tracking study of over 40,000 undergraduate students from Georgia State starting 15 years ago, asking the simple question: did the students who participated in the programs I just described, the proactive advising, the micro-grants, the learning communities, the meta-majors, do better in the job market as well? And the news is extremely encouraging.
The students who benefited from these proactive, data-based interventions are twice as likely, and this is five years after graduation, to be in careers that align with their academic major. So it's created a more seamless transition for these students from college into the job market. These students are twice as likely, now eight years after enrollment, to be in leadership roles in whatever type of profession they're in. They're managers, they're directors, and so forth. Twice as likely. And, I know this is usually the bottom line or the starting point, they're earning more as well. So overall, five years out, the students are earning about $5,000 to $6,000 more if they benefited from those programs, but the bump is even stronger, $8,000 to $9,000, for our low-income students.
So this has been one of the most encouraging things about tracking the impacts after graduation: the students are benefiting long-term from these kinds of interventions, but the greatest benefits are for the students who were struggling the most before they entered college. To put it more bluntly, what we're helping to do through these interventions is close equity gaps in career outcomes. With all of that said, it sets up where we're going this morning, to talk about the ways we've tried to leverage AI.
Because at heart, what links those various interventions that I've described already together is we're trying to find ways to deliver personalized attention to students at scale. There's no debate, I think, for anybody in this room about whether that's a core value in education, right? We've always tried to give students attention that is personalized, customized to them, and to do it across the board, not just for the most elite students, or for that matter, the most vulnerable students, but do it for all students.
The problem for most of us in the room, and I count Georgia State as one of these problem cases, is we haven't had the resources to deliver that kind of personalized attention at scale. The model in the past was: if you want to deliver more personalized attention, create smaller class sizes, hire more faculty, hire more staff. And we haven't had the resources to do it. But Georgia State, starting 10 years ago, in fact this is the 10th anniversary of our scaling of AI, began to recognize, with the successes you've already heard about under our belt, that the next frontier is to use AI to deliver more personalized attention and feedback to our students.

The first example of us doing so was in 2016. It came out of a data point we uncovered from 2015. In the summer of 2015, we had what's called a summer melt rate of almost 20%. Summer melt is the percent of your incoming fall new students who never show up for classes. These are not students who you admit who go elsewhere; at Georgia State, that number is much, much higher than 20%. These are students who confirm their intent to enroll, come to orientation, register for classes, but never show once classes begin. 20% of our, actually 19% of our Fall '15 class. And so we looked at the data, and most of these students, as you're seeing on this slide, were students from underserved backgrounds: 76% non-white, 71% low-income. So we were not only losing a subset of our incoming fall class, but we were losing a subset that is really very important, in our state and nationally, to trying to level the playing field in educational outcomes. By the way, if you think that number is very, very high, read the book by Lindsay Page and Ben Castleman called Summer Melt. They look at urban school districts across the country, and they find that a 20% to 25% summer melt rate is, tragically, not uncommon.

So we asked the next question: what caused this 19% of our incoming fall class to not actually matriculate? Did they change their minds? Did they decide, after graduating high school, taking the ACT or SAT, getting into Georgia State, and coming to orientation, oh, I don't wanna go to college after all? No. What linked them together was that they had failed to navigate our bureaucracy. What we found is that there were all these administrative steps we require the students to complete - FAFSA verification, transcripts, deposits, placement exams, and so forth. What linked these students together, this 19%, is that all of them had failed to navigate one or more of those particular challenges. When we looked at the data more closely, we found the other part of it, which is that it was typically not our middle- or upper-income students who were struggling to navigate the bureaucracy. It was mostly our low-income students.

So we made two changes for the fall of 2016. The first thing we did, launched in the late spring of 2016, was create a new portal. The portal takes the students through the steps they need to complete to be ready for the first day of classes. It's color-coded; when they complete a step, it changes color. The most important advantage of this new system was it gave us better data about individual students. Because we used to look at this mass of students: we admitted all these thousands of students, and some showed up for fall, and some didn't. And that's basically all we knew.
With this new portal, we knew this particular student was fully engaged until we asked for their immunization records, and then we stopped hearing from them. So we could reach out with much more specific help. The other thing we did, and here's the link to AI, is we became one of the first schools in the country to launch an AI chatbot at scale, directed specifically at our incoming new students. These were students who were not on campus, so it was hard to reach them and communicate with them. And we launched this in May of 2016, thinking maybe we'd have 5,000 or 6,000 questions answered on the platform for our incoming students before the start of fall classes. In the four months leading up to the start of the fall term, we had over 180,000 exchanges with just our first-year students. Average response time when a student asked a question was about six seconds, and use of the chatbot was much heavier at 12 midnight than at 9 in the morning. That's not just an indictment of our business practices; it says something structurally about what we're doing wrong in higher education.
Because we're enrolling more and more low-income students, many of them have jobs, we know at Georgia State 80% of our undergraduates have jobs, but we still set up these structures like they're going to be free at 3 in the afternoon or 9 in the morning to get these issues settled. And I talk to the people who run these offices, and they're frustrated: yeah, we sent an email and said they should come to this session at 10 a.m. on a Tuesday, and they didn't show up. No kidding, they have these complicated lives, they have jobs and families and so forth. The chatbot was able to answer their questions even off-hours. So did this make a difference? It made a huge difference. We've lowered the summer melt rate at Georgia State from 19% to basically 9%. Students haven't gotten any more savvy about navigating verification requests from the federal government; we got a little better at supporting them.
And because it worked so well for our students as they were coming into the university, what we did the next year is say, okay, let's keep this tool going from the day they matriculate to the day they graduate, and let's provide support for them in all these things: I need help in chemistry, I don't know how to resolve my parking ticket, whatever the issue is. We've run randomized control trials, they're all published and out there, we have five RCTs out there right now. Every one of them shows profoundly better performance by the students in areas that are important to their success, and disproportionately strong outcomes for low-income and first-gen students.

More recently, over the last couple years, we said, we have 40,000 undergraduates using this AI tool, why don't we integrate it into the classroom? We know we've got some courses where students are really struggling and can't get the support they need, because our tutoring sessions close at a certain hour, or the students aren't available. So we started in the social sciences, with political science and economics. We worked with the instructors to have the tool be able to answer some key questions, but also to nudge the students, to let them know: you've got a quiz coming up next week. The average student needs to study two hours for the quiz. And here's a sample question, can you answer this? If not, you might need to study some more. And this began to produce significantly improved results. On a hundred-point scale, the average grades in these classes went up by about seven points overall, but for the first-gen students in the class, the gains were even stronger, 11 points, from this tool.

And much more recently, just over the last 12 months, we've been extending the use of this same AI tool in introductory math courses. We intentionally put this off for a while because we knew this would be a more difficult area to see progress. We're working with faculty in various math classes. In fact, this randomized control trial is not just at Georgia State; Georgia State and my institute are running the trial, but it's also at the University of Central Florida and Morgan State as well. And the early results are extremely positive. What we're seeing in these critical first-year math classes, which are sometimes sadly the graveyard for a lot of first-year students, they can't get through those math classes and they lose their scholarships or drop out because they get discouraged and so forth, is non-pass rates that are 17 percentage points lower than in the sections not using the tool.

That still leaves an area untouched. But we're not gonna leave it untouched this morning, because one of the other major struggling points for first-year college students, especially those from low-income and first-generation backgrounds, is getting up to speed when it comes to writing skills. And so, over the last 12 months, we've begun to use new approaches to try to help students improve their writing skills and make them more successful, not just in composition courses but in writing-intensive courses across the curriculum, especially early on in their academic careers. Katy, my colleague, has been leading a lot of this effort, and so I'm going to turn things over to her.
Kathryn Crowther:
Thank you, Tim. All right, let's see if I can get the clicker working. So, hello, everybody. Good morning. Thanks for being here on a Friday, when I know everyone's getting ready to take off. I'm Katy Crowther. I'm the Director of Teaching Effectiveness and Learning Design in the Center for Excellence in Teaching, Learning and Online Education, which is a really long acronym, CETLOE. But I'm also a Professor of English at our two-year college. You saw on the slide that we have many campuses, and part of that is our two-year system, our Perimeter College campus. I'm a Professor of English there, and I was formerly the Director of the Writing Across the Curriculum Program. I was in a session, I think it was in this room, the other day, and someone said, throwback to the 1970s, anyone remember WAC? And I was like, remember WAC? We're still doing WAC. We have a very successful, robust WAC program. And the reason I'm saying that is because when Tim approached me in our Center for Teaching and said, "Would you be interested in running a pilot of a tool that would give adaptive support in writing, an AI-supported writing feedback tool?" I said, that sounds like it's right up my alley. I'm always interested to think about ways we can do a better job of integrating writing into as many classes as possible.
Writing is, of course, a high-impact practice. We have a big strategic plan initiative at Georgia State to embed high-impact practices in all of our undergraduate core courses. So how can we get as many instructors as possible using writing in their courses when, as we know, there are a lot of barriers to that?
So the reason I was interested in picking this up is we'd already been thinking about how we can provide better access to writing support for students. And so we were looking for a tool that would do a lot of the things Tim has already been talking about. It would provide students timely access to support outside of class time. It would level the playing field for those students who couldn't be on campus during the day to go to the writing center or to access some of the other supports we have on campus.
So a tool that would provide writing support 24/7 that students could access through our learning management system, and I'll say a little bit more about how that works in a minute, that was a very exciting idea for us. Especially because it would also do something we knew we needed help with, which would be to support faculty who would like to do more writing in their classes but don't have the bandwidth or the capacity, if they have 75, 100, 200, 300, 400 students, to manage grading that many papers, and also to follow the rules of good writing pedagogy, which would be to encourage students to write multiple drafts and get feedback.
Again, running into that issue if students aren't able to take their work to the writing center: how could faculty be encouraging writing in the classroom, but doing it in a pedagogically supported way? So those were the two big reasons we were excited about a tool that would help with writing support. We also saw it as a moment to think about how we could start embedding opportunities for AI literacy in our courses. A tool like this would provide a gateway, an entrance into AI, that shows students how to work collaboratively with AI, rather than a tool that says, hey, I'll write your paper for you. A tool that would support students with the writing feedback process, and in a safe and gated environment, an environment in which their data is not being used to train the model, so they don't have to worry about those kinds of concerns. And then, of course, we wanted something that, as I said, would allow faculty members to model good pedagogy. So we were looking at a tool that would focus on iterative feedback, which is what we know is the best way to think about writing, right? It's a process. It's something that you spend, hopefully, lots of time on, and you produce multiple drafts and you get feedback and you revise. So we were looking for a tool that would support that kind of model, because we wanted the pedagogy to be at the heart of a tool that would use AI to support writing. And you can see on this slide I have a little sneak peek. Spoiler alert: the tool that we're talking about, that we piloted, is Studiosity. So what I'm gonna talk to you about is how we developed a pilot to test out this tool and see how our faculty and our students responded to using a tool that did all of these things.

So as we were thinking about this, we were thinking, well, how do we do this at a school like Georgia State? As Tim told you already, we're a very diverse, very large student body. We have courses across all disciplines, obviously, that have different needs when it comes to writing support. The way that we structured the pilot was we brought together our Center for Teaching and Learning, our Writing Across the Curriculum program, and some of our internal data people, and we had learning technologists on the job to help get students set up with the tool in our LMS, so we had a good pilot team to get going. And then we had to think about how we were going to roll this out. Who were we gonna recruit? And so we decided that we would start off with a mini pilot and move to a larger pilot once we had tested it out on a smaller crowd.

So, I'm gonna get into the details of the pilot and let you know a little bit of the data. But as we were doing this, we were also thinking about how we get faculty prepared for this. Do we train them to use the tool? Do we also train them how to think about using a writing feedback tool in terms of good writing pedagogy? We don't want them just to throw a tool at the students without thinking through what it looks like to offer students an opportunity to use a tool like this. So we had to decide about training our faculty. But our big concern here, too, was that if we were to roll out a tool like this to our entire student body and all of our faculty, we wouldn't have the capacity to train all faculty in a really robust way. So what can we model? What can we use for our pilot that would be scalable, that we could use for the entire faculty, anyone who was interested in using the tool?
So we started looking at the tool to see, well, what does it do? And how can we think about the way to get students engaged with it, test it out, and see if it's gonna help students with their writing? We have the partnership with Studiosity, and they worked together with us to get the tool embedded in our LMS. I know you can't see much on the screenshot, but I included it because they even made it match our Georgia State colors. So within our system, it looks nice with our Georgia State blue. And it does provide that type of iterative feedback, and I'm gonna go more into what that feedback looks like within the tool; it models that good pedagogy. So we were excited about this. When we looked at the tool, we thought, this is a tool that will do the things we're looking for. Now we want to test it out.

And so we designed a small pilot, very small: five faculty, 100 students. We used the tool. We did some pre- and post-assessment. Then we moved to a larger pilot. For the first pilot, we used only Writing Across the Curriculum-trained instructors, because they already knew a lot about writing pedagogy. They're already thinking about how to structure an assignment to encourage students to think about writing as a process. For the second round of the pilot, which we expanded, we used faculty who were not as familiar with that. So they really would be faculty who might think: I teach physics, I would love to have my students do more writing, I know it's a high-impact practice, I don't have any training in teaching writing, how would I use this tool? So we wanted to get feedback from faculty like that as well.
So you can see here from the numbers that we had 17 faculty, which gave us over a thousand undergraduate students enrolled in courses across about 15 disciplines total. And what we do know is that, of the students enrolled in the pilot, at least 71% engaged with the platform. So we had a good sample size to look at in terms of their data. And we were interested in both the students' interaction with the tool, what they did with it, and their perceptions of the tool, and then also other outcomes that we could say were related to the tool. So we did pre- and post-surveys. We did that with the faculty as well, because we were interested to know: did they think it was a helpful tool?
And then, a caveat here: we finished up this pilot in December, so we're actually still going through a lot of the data, and some of these numbers are preliminary. We decided to add a component to the spring pilot, which was to actually look at the writing and ask, do we see measurable improvements in the writing? So we have a crack team of WAC graduate students who have been looking at multiple iterations of the drafts that students wrote for these classes and submitted to Studiosity, looking at the feedback they got and comparing the progression of the writing to see: can we say, yes, there has been improvement in the student writing? So this is ongoing work; in fact, I had to update these slides yesterday because we had some new numbers from that assessment.

Some quick numbers. These actually came from Studiosity, because they can see everything that's happening behind the scenes. They saw that we had over 3,400 interactions with the tool over the course of the pilot. They do quick surveys after a student uses it, and 92% said that it was easy to use. That, of course, was a goal for us. We're trying to make this accessible. We're trying to make sure that students can very easily get online in their LMS, go to the tool, and get some feedback. And 80% of the engagement, no surprise, happened after business hours. I'll show you a slide in a minute that says when exactly that peak time was.

Before we get into the data, I want to backtrack for a second and say a little bit more about what students did when they used the tool. Students were encouraged to submit a draft of their writing to the tool at any stage in the process, even brainstorming notes or very early drafts. They submit the draft to the tool. It's a very simple process; it just says, upload here. They upload it, it takes a few minutes, and it generates feedback for them. They also have the opportunity to indicate where along in the process they are: is this an early draft, or is it ready to turn in? The tool will adjust the feedback based on how finished the draft is. And then the tool will give them feedback on the draft. It will identify where in the draft the issues are. Much of it is framed very positively, to give them the sort of feedback we know works for students. And most importantly, it's actually getting away from the surface-level feedback we see in most writing tools. To give the example of Grammarly: it's not just giving grammar corrections, mechanics, punctuation. It's getting into the content of the student's writing. So it's talking about structure, organization, argument, support. And when it sees an issue that it flags, it gives a suggestion. It doesn't fix it, right? It says, you know, you might want to say more here about this point that you've raised, or, you mentioned this point earlier but you don't continue it later, or, do you have any references to back this up? And it will actually give them support with their citations.
It will also point them toward resources; it has embedded videos they can watch if there's a topic coming up that they clearly need some support with. And then it will generate a PDF report that, if they want and if the instructor wants, they can upload with their draft to show: I ran this through Studiosity, I got this feedback, and I made these revisions. And then they can submit another draft once they've made those revisions and get more feedback. So some of the data I'm going to show you is how often the students engaged with the tool, and how the revisions actually supported getting better grades and having a positive interaction with the tool.

So what we found when students did this, when they went through this process of uploading their draft and getting feedback and making revisions: we saw lower DFW rates, we saw higher grades. This is just the summary, the TLDR, right, the overview. We found that it was effective for different types of documents. We had faculty across all disciplines, so students weren't just uploading a traditional five-paragraph essay. They were uploading lab reports. And we actually had a math instructor using the tool. That was interesting; we had some conversations about what math functionality the tool had. So we were interested to see, does it know what kind of document it's looking at? You have the option to select from a set menu of what type of writing you're submitting, and we found that it generally worked well for the different types of documents. We found it does promote self-revision. It had mostly positive results for students. There were some problems that we encountered; I'll talk more about those in a minute.
But more importantly, students reported in their reflections feeling supported as writers, getting a sense that revision and getting feedback were part of the process. And so they began to identify more as writers. So there was an increased confidence there in their writing.
And I read something Studiosity was talking about recently, that students now are feeling a lot of anxiety around submitting writing because they're afraid: am I gonna get busted for using this tool? Is this gonna be flagged as AI? So there was a safety in using this tool, because it was part of the LMS and it was a gated system. So it gave them that increased support and confidence.
We did also find that it helped when faculty had a little bit of training to think about structuring their assignments so the tool would be best used. And so that is something we are gonna be thinking about as we move forward.

So, just quickly, some of the numbers. You can see that for the students who used Studiosity in the pilot, and there were over 800 of them, their DFW rates were around 8%. Everybody is familiar with DFW rates, right? The D's and the F's and the withdrawal rates. And they had overall grades around a B, a 3.23. Whereas the non-Studiosity students, and these are students within the same classes, so they got all the same instruction, all the same context, et cetera, for the writing, they just didn't engage with the tool, and that was about 330 students, had a much, much higher DFW rate, 38.9%. And we can tease out some of the reasons behind that as well, if you want, in the Q&A. We also saw that the average number of interactions was between three and four. Some students used it many times; we saw as many as 26 interactions with the tool. And again, we see sort of declining usefulness of the feedback over that many interactions. But this was a pretty exciting data point.

And here's what I mentioned earlier: the most common time for students to engage with the tool was Sunday night at 11 p.m. Of course, that does speak to the fact that many times assignments are due, right, Sunday night. But it also, I think, points to a lot of the things we were talking about before, which is that students need access to things that are outside of hours, that are gonna fit their schedule: when they're not working, they've got their kids to bed, right, so they're gonna be working on their writing at that time of night. And I want to mention that Studiosity has a couple of other tools that we actually didn't have turned on for this pilot. There are options to chat with an online tutor and to get some peer support as well. And so that adds, again, more layers of access for students.

So what did the students think of it? Students liked the feedback they got. One student said, "I like how in depth the explanations are. I also like how you tell me what went right." So they liked that positive feedback. They liked the encouragement that they were developing as writers. They found it clear and easy to use. They liked that they could use the tool whenever they needed to, that it was just there. In fact, we did find students using it for other classes. We haven't told them that they couldn't. So they were using it to get writing feedback for their other classes as well.
They liked the fact that they could use it at different points in the writing process, and several of them said it sort of made them understand the writing process better.
It also promotes support seeking, and that's another feature of the tool that we appreciated: Studiosity is encouraging students to seek support themselves, to normalize seeking academic support, the idea that that's actually an academic power move. And so embedded in the tool are resources that link directly to Georgia State. We gave Studiosity links to our library, our writing studio, our counseling center, the citation guides, et cetera, so that students can see the connection between getting support in the tool and how you would get that support outside of the tool. And again, that's a positive student behavior that signals student success. And we had over 1,500 referrals made by the tool.
The faculty said, yes, actually this worked well for us. And this was the number one thing we were looking for: it aligned with what the instructors said they would have said about the student's writing. When they looked at it, they said, this would have been my feedback. "The feedback was quite largely consistent with my own evaluation. Clear, relevant, and aligned with standard academic writing expectations."
So that was good. They did comment that very often the feedback was surface level, to the extent that those were the easy things for students to fix. It did a really good job of helping sentence-level writing improvement. It also did help with organization and scaffolding. Some instructors said some of the deeper content it didn't address quite as well. But they liked the fact that students were getting multiple rounds of feedback, so they could maybe focus on different things each time. And it was a very timely intervention. It was non-punitive. It wasn't, you need to go revise your paper; it was, okay, best practices, right? We all know that we write things at the last minute, but we want our students to think that you should always be building in time ahead to get that feedback in iterative versions.

Some of the critiques. It sometimes doesn't quite get at that deeper content. The tool is built on Bloom's taxonomy, so it is focusing on those deeper levels of learning. There is a category for critical thinking. It is trying to capture the students' attempts to make good arguments. But sometimes that was not quite as easy for the tool to identify. Occasionally there was misalignment with the goals of the assignment or the genre. Actually, since we did this pilot, Studiosity has a new feature that allows you to upload the assignment and the criteria. So that does improve alignment with the actual criteria of what the professor is looking for, because they've now given the tool exactly what it is they're looking for. That probably helps correct some of that occasional incorrect or unhelpful micro-feedback. And then, of course, we have the part where students maybe look at it and go, that sounds like a big thing to revise, I'm not going to do that. So they may not actually make the revisions that the tool suggests.

And we did see, and this data comes from Studiosity, because the tool will actually give the writing a score as well, an assignment score, how over multiple submissions the score on the writing improves. So you see students doing two, three, four submissions and their writing score improving, all the way up to five submissions on this graph. That was the same assignment being submitted multiple times. Students who submit the same type of document multiple times, so perhaps over the course of the semester they do multiple lab reports, also see their grades improving on that type of document over multiple submissions. So they're getting better at doing the kind of writing they're being asked to do, which is again something we want to see.

And then we started looking at the writing itself. I mentioned that we were looking at the DFW rates and the grades, and we were looking at reflections from the faculty and the students. But we also wanted to pull some data from the actual writing. So we had the pilot faculty members submit multiple drafts of the students' writing along with their Studiosity writing reports, to see what revisions the students were making and to see if there were these measurable improvements in writing.
And what we're seeing so far, and like I said, we're still processing this data, is that 66.7% of the papers that were submitted saw an increase in the grade, which is what you would hope. And the average improvement in the grades was around five points. Some people saw a much bigger bump.
So this is the idea. This is with that second round: they've done the paper, the professor looks at it, and then, on the next round, how much more would that improve the grade? So we were happy to see that as well from an objective, as objective as writing can be, standpoint.

And then the other thing faculty said to us was, yes, this is going to help us with our workload, especially in classes where we don't have support from, you know, a GTA, or we have way too many students to be giving multiple rounds of feedback. It also allows them in the class to shift their attention to higher-level writing concerns. So instead of just saying, "Okay students, if you want to use Grammarly before you turn in your paper, that'd be great, because I really don't want to see all those spelling mistakes," they can say, "I'm really interested in the way you develop an argument in this paper," or "I want to see you using good support and structure," and they can allow that to be part of the conversation about writing, because the tool will support it. Reduced grading time, obviously, though we can't quite measure this as well, because for the pilot we asked a lot of the faculty: we asked them to turn in multiple drafts and give us reflections and feedback. There was this idea that perhaps in some cases it might cause a little bit more labor for the faculty, because they have to manage the students using the tool. We do support it a little on our end, so students could reach out to us if they had problems with the tool, and it is actually quite easy to use. So many of those issues were pretty simple to rectify.

So, to finish up, our big takeaway from this is that a tool like this is a really good step forward when we're thinking about how AI is going to impact writing. One of the people working on this project was a graduate student I work with closely, who is on the market right now and is getting her PhD in AI and writing. And she's basically looking at the way that AI is just going to completely reshape composition studies and the way we think about writing pedagogy. It's going to turn into this idea of human-machine collaboration, right? How do we work together with the machine to support our writing? But we wanna model that in the right way. We wanna make sure students think about writing as a collaborative process, where they can get support, but not something that takes over for them.
We like the fact that a tool like this builds AI literacy in a safe environment, a way to learn about AI without having to worry that you're gonna get busted for cheating or that your data's being used. And again, we very much like the fact that it allows equal access and levels the playing field for students who need support with their writing.
And so, it leads us to think more broadly about how we're gonna support writing moving forward at Georgia State, especially as we're promoting it very strongly for our High-Impact Practices Initiative. We don't wanna just say, hey everybody, writing is a high-impact practice, you should be doing it, go forth. Being able to offer support with a tool like this gives us the feeling that we could promote more writing in all kinds of classes across disciplines and know that the students were getting access to support. So with that, I'll pass it back to Tim.
Tim Renick:
Thank you. Yeah, and so I'm just going to bring us home here by reflecting on the larger issue, which is that over the last decade, we've made, I think, a lot of progress in student success. Across a subset of campuses that have adopted these approaches, we've seen, on average, and I can share these data, 18-percentage-point increases in graduation rates. At Georgia State, by the way, the number is a 25% increase in graduation rates overall.

So four years ago, we launched the National Institute for Student Success. We now have over 107 partner institutions. I know some of them are represented in this room; you may not realize it. We contract with your presidents and chancellors, and what we do is analyze your student support systems and try to offer support in the areas we've talked about this morning. So we'll spend about six months looking at your data, but specifically at the way you're delivering coaching support, advising, financial aid, registration, and so forth, to try to find ways in which these systems can be improved. That process leads to a detailed set of recommendations and, for most of our partner campuses, at least a year of implementation support, where we bring in our subject matter experts who can then help your campus peers understand how to leverage things like predictive analytics, AI, new technologies, and new approaches that, as I say, I think have been proven effective by the preponderance of the evidence and the research, but in many cases are unfamiliar to the practitioners on your campus.

So this is, I think, an emerging field. What we're doing at Georgia State is, I think, the tip of the iceberg. I know you have stories from your own campuses, but we appreciate your willingness to learn about what we've done over the last few years at Georgia State, and specifically this most recent pilot with Studiosity. We do have a few minutes for questions and comments. If you do have questions or comments, we're gonna ask you to use a microphone, because this session's being audio recorded.
Q1 Speaker:
Couple quick questions, please. Studiosity, is that a commercial AI system? And this was a pilot. Was it for a course, or were these separate writing assignments, not for a class?
Tim Renick:
Yeah, so Studiosity is a for-profit company that we contracted with, and Katy can probably be more specific about the nature of the pilot, but it was varied.
Q1 Speaker:
I just want to know: was it for a course grade, or were these students turning papers in just because they were part of the pilot?
Kathryn Crowther:
It was coursework they were already doing, and it varied from teacher to teacher. Some instructors said, you are gonna use this tool. Some of them said, you have the option to use this tool. And some students, not many, opted out; they didn't want to use the tool. That's why we saw that percentage of students who didn't use the tool. They were not graded based on whether or not they used the tool at all.
Q1 Speaker:
You said it was a gated system. What does that mean, please? And then I just have one more question.
Kathryn Crowther:
So gated means that it's internal to itself. It's not pulling data from the outside, and it's not using the student data. There are Studiosity people here; they could probably answer that question better than me.
Q1 Speaker:
Well, the last question I have is you and I are in the same system, University System of Georgia. Is this an approved system to use within our university system?
Tim Renick:
My understanding is, yes, our colleagues at Studiosity, along with purchasing and our tech team at Georgia State, had to work quite a while to get all the clearances and so forth. I think that would pave the way at Augusta and other campuses in Georgia to use this.
Q1 Speaker:
Are we one of your partners?
Tim Renick:
You are, as of this spring. You are one of the NISS partners; we're launching the diagnostic process with you this spring. So yes, we can give you advice about that.
Kathryn Crowther:
I'm happy to chat more about it.
Tim Renick:
But just to be very clear, every single assignment we've talked about with regard to writing, these were already embedded, graded assignments in the courses, and Studiosity was layered upon them. And so the faculty were not changing what they were doing. In fact, that was one of the premises: we didn't want them to change what they were doing, we wanted to see if the students could improve based on the standards that were already in place for those classes.
Q2 - Chelle Batchelor:
Hi, my name's Chelle Batchelor. I'm from Western Oregon University. I'm curious, you had mentioned a math-related tool that was used and had some positive outcomes for students in math. And then, of course, Katy presented about Studiosity. I'm curious sort of like, what was the tool that was used for math and how did it differ from Studiosity? Because, clearly, Studiosity is very much built for writing. And I'm wondering what that math project looked like.
Tim Renick:
You picked up on exactly the distinction. So the product that we used for math, it's also a commercial product, and it's the one we used for summer melt and for some of the other projects: it's a product by MainStay. And it is much more generic, not designed for writing, not designed for math. It's basically a platform that allows us to create bi-directional texting communication with our students at scale. We create a knowledge base. It can be a knowledge base about how to complete FAFSA requirements for incoming students. It can also be a knowledge base about how to get through chapter one of microeconomics. And the students use the platform, text questions in, they can get responses, we can nudge the students, and so forth. So it's a much more generic thing. Studiosity is very much a product designed to support iterative improvements in undergraduate student writing. There's a question back there, yes.
Q3 Speaker:
So it's really exciting, great work. Thank you for all you're doing. My question is related to the improvement by the students who participated versus those that did not. Was it a randomized trial or was this a voluntary engagement with the product?
Kathryn Crowther:
So in each course, and we did, to some degree, leave this at the discretion of the instructor, all the students were doing the same writing assignments. They were embedded in the course; they were the writing assignments everybody would do. The students then were told they had access to this tool. Some of the professors said, I want you to use this tool. We talked about whether you could require them to use the tool, but because some of the students might have ethical concerns about using AI, some students decided to opt out. But all students were submitting the papers for a grade; it was just that some of the students had fed their paper through the tool and gotten feedback before they submitted it.
Tim Renick:
So with Studiosity, these were not randomized control trials. That would be the next phase, and I would be very supportive of that. I think you do get some bias, because the students who are participating by their own volition might be a stronger subset of students. I will say, having now run dozens of RCTs over the years, that the results we're seeing are so strong that I don't think that alone accounts for the distinctions. In the other examples I was sharing, those were randomized control trials producing some of the data, like the 17-point drop in the math DFW rates. Those were RCTs. But this would be the next level, yes. There are moral issues of multiple sorts in running randomized control trials. My strong encouragement is, if you're gonna run them, run them early on. Because once you find something works and can be scaled, then there is a huge moral issue with withholding it from a subset of students. So we're still in that early phase with this particular approach and with Studiosity. And I would think that the next thing we'd want to do is run some RCTs.
Q4 Speaker:
I just wanted to follow up on what you just said: since this is a pilot and you're still reviewing the findings from the fall 2025 semester, based on those early findings, do you think this is something that will be incorporated into the curriculum at GSU, and what do you see for the future of it?
Tim Renick:
Yeah, it's not entirely a mystery. We've already signed on; we're working on a contract for the next year. And the approach we will take will hopefully be to continue to expand and see how it works. With all of these technologies, it's not just a matter of doing the research. It's a cultural change. Part of the cultural change is getting faculty to speak well of this and their experience. We've already had that with regard to the chatbot: we're now finding new faculty each semester adopting the chatbot in their classes, because in many cases they've talked to faculty who have used it, and it's not problematic, and it's really helpful, and so forth. So we'll want that cultural change to occur at Georgia State. The other cultural change that still needs to occur, and Katy knows this, is to convince the very top of the administration that this is the direction they need to go, our President and Provost and so forth. So we're working on getting that level of buy-in and support as well, but we do plan to advance this to the next stage.
Q5 Speaker:
Hi Katy, great study, and kudos on trying to get a handle on what's actually happening there, because it's slippery, so I really appreciate it. One question, which is not really a critique of where you are, like you said, you're in early days, but something we often see in a lot of the early AI research literature around efficacy is that studies don't follow up six months later or beyond, right, to see if the gains are enduring or more ephemeral. And we do know there are some studies showing that, if used inappropriately, it can be a sugar high: it helps boost your near-term scores, and then they follow up six months later and you don't remember. There's just a study out from MIT on coding and stuff like that that found that. So are there any plans, or maybe potentially thoughts, around following up later on to see to what extent the structure of the Studiosity support, and the really more principled approach, does stick and last for another semester, two semesters or something, so that you can feel good about building these enduring writing skills? I mean, writing, like you said, is the foundation of almost everything in any discipline or major. So, any thoughts about that, or how you might tackle getting a handle on that?
Tim Renick:
I'll mention to start that we've done exactly that in some of our math classes, where we've looked, especially as the students progress in other STEM courses, at whether their grades are showing improvement over the predicted model and so forth. I think we could do the same here. That's the type of thing we do at Georgia State: track these data. And I'll say that at the end of my comments, when I was sharing some of the career data, that's another iteration of it, because sometimes I'm at conferences and we show the graduation rates went up, and the variation on your question is, yeah, but does it benefit the students after they graduate? So it's something we very much would like to do, and we've got the data on which students used the platform and which students showed bigger gains while they were in these classes and so forth, so we certainly could do that.
Kathryn Crowther:
Yeah, and that's one thing I was trying to emphasize in our conversation: what we really like about it is that it does set students up to think about writing in a certain way. It promotes certain behaviors around writing that we see in our first-year composition classes; that's exactly what we teach, right? But we don't think, for the most part, that instructors across the rest of the disciplines, across the undergraduate experience, have the time to keep reinforcing that writing is a process: let's do multiple drafts, let's get feedback. So at least the tool does support that, and scaffolds it in the way that we would like students to think about writing moving forward. And also with that AI literacy piece, the idea that this is how I best use AI to help me with writing, right? It's this back and forth, this dialogue, this collaboration. But yes, I would really love to know if that sticks, because that is our hope, and that is what we would like to see with a tool like this.
Tim Renick:
I'm afraid we are out of time officially, but Katy and I will stick around. If you have any additional questions, please feel free to come up. Thank you very much for your participation today. We really appreciate your support.
[End]