
New research & early insight: How AI can boost learning, retention & progression

18 Dec 2025

Held on 24 November, the webinar "New research & early insight: How AI can boost learning, retention and progression" brought together a high-level panel to discuss new independent research, Ethical AI in Higher Education: Boosting Learning, Retention and Progression, authored by Dr Rebecca Mace. Experts and key policy figures discussed the implications of the findings and laid out practical recommendations for the sector.

Read the full discussion transcript from our expert panellists:

  • Chair: Professor Rebecca Bunting, Vice-Chancellor and Chief Executive, University of Bedfordshire, and Studiosity Academic Advisory Board member
  • Dr Rebecca Mace, SFHEA, Independent Researcher and Course Lead, University of Worcester
  • Nick Hillman, OBE, Director, HEPI
  • Dr Stephanie Harris, Director of Policy, Universities UK
  • Lisa Abrahams, Partnerships Development Lead, Studiosity

>> Watch the full recording here [0:44:53]
>> Access the research report here

Prof Rebecca Bunting [00:01:12] So a very warm welcome. My name's Rebecca Bunting. I'm the Vice-Chancellor of the University of Bedfordshire, but importantly for this session, I'm a member of Studiosity's UK Academic Advisory Board. I'm very pleased to welcome today's panel, who will introduce themselves in a moment. What we're talking about today is the new research by Dr Rebecca Mace, who will speak in a moment, on Ethical AI in Higher Education: Boosting Learning, Retention and Progression, which I think is very much at the heart of all our concerns in higher education at the moment, and the role of AI in that really gives us some exceptionally interesting opportunities. So let me now ask each of the panel members to introduce themselves and say a little about their relation to this work and this broader aspect of AI for learning. Shall I start with Rebecca? Beki, over to you.

Dr Rebecca Mace [00:04:55] I am Rebecca. I was an independent researcher on this project. My background is in the impact of technology on young people's learning, so I found the topic particularly interesting, and hopefully you'll find our research as fascinating as I did. It was a comment on society, on the way that we're dealing with knowledge and our relationship with knowledge, as much as on assessment and all sorts of other things.

Lisa Abrahams [00:05:32] Thank you. Thank you, Rebecca. It's nice to be welcomed alongside the rest of the panel today. I've worked with Studiosity for just coming up to eight years. When I first started, we only had two partners working with us in the UK, and a lot has changed in those eight years, now that we're in 2025. What really struck me when I first started was our mission and what we do at Studiosity. I was first in my family to go to university, so I didn't really have parents who could help me academically or who even knew anything about higher education when I went into that realm. Having support like Studiosity could have been a real lifesaver for me at that point, and I know it is for many of the students around the world who use our services: a tool which can support widening participation, access and participation, retention, student experience, and now of course AI for learning. All of those things are a real passion of mine and something I'm really proud to be involved in with Studiosity.

Dr Steph Harris [00:06:39] Thanks, Rebecca, good to see you all. I'm Steph Harris, and I'm the Director of Policy at Universities UK. As many of you will know, Universities UK is the representative body for 141 universities across the UK. As Director of Policy, it's my job to try to make sure that universities understand what's going on in government, regulators and other bodies, and, crucially, that government, regulators and other bodies understand what's going on in universities, so that we can hopefully make good and relevant policy.

Nick Hillman, OBE [00:07:22] Yes, hello. Good afternoon, everybody. Like Rebecca, I'm in Bedfordshire today, because I'm speaking to you from Cranfield University, where I'm on a visit. I'm Director of HEPI, the Higher Education Policy Institute, which works with pretty much all universities, as does Steph's organisation, of course. We're very small, but we're very interested in AI, and we work with a number of corporate partners too, including Studiosity. Our own work on AI has recently included, for example, a collection of essays that we did with the University of Southampton about how AI is affecting everything to do with universities, everything from setting strategy to teaching and learning to research and so on. We've also done consistent polling among students on how much they're using AI, now showing that over 90 percent of students are familiar with generative AI and that it's changing the way they learn, some students more than others, of course. And indeed we run a daily blog, which today is on AI: we've got Janice Kay talking about what AI means for university staff in particular [Embedding AI, not bolting it on]. So I'm here as much to learn from others as I am to expound, but this is such an important area that I'm very pleased to be here.

Prof Rebecca Bunting [00:08:38] Thank you very much, Nick. Thank you, everybody. So let's launch then into hearing a bit more about the research and hand back over to Beki.


Get the full research PDF download here

Dr Rebecca Mace [00:08:48] Thank you. So the research emerged in response to two parallel trends that we saw in higher education. First was the rapid growth of student use of generative AI; hybrid writing has now become normalised amongst vast swathes of the population. But the majority of the institutional focus has been on cheating and the detection of AI use. And I get that. I do really understand, because we're in the business of intelligence. Anything that's artificial belittles what we've done, it belittles the system we've come through, and it undermines everything that we're trying to do with students moving forward. We don't want fake. So artificial intelligence is not really in our interest; the language itself becomes complicated. However, framing AI in this way in many of our interactions with students demonstrates that we have fundamentally misunderstood how students perceive their own use of generative AI, and the reality of new ways of working and how it's showing up in the world.

"we were keen to establish whether AI tools could actually provide structured academic support without undermining higher level academic skills." 

And then secondly, there's this long-standing problem with student retention and progression in UK higher education. That's rooted in complex psychological and relational factors, such as low self-efficacy, weak academic integration, and a diminished sense of belonging, as much as or perhaps more than academic ability. So rather than viewing these two trends as distinct or discrete issues, we wanted to explore whether AI could be used intentionally and ethically to support learning and ultimately help improve academic engagement. But we were acutely aware throughout that technology gives with one hand and takes away with the other. And so we were keen to establish whether AI tools could actually provide structured academic support without undermining higher level academic skills.

So I've worked on AI feedback in relation to assessment in academic contexts for nearly 10 years, right through from GCSE students doing English literature and English language and having feedback provided on their work, through to A-levels and now higher education. Feedback is widely recognised as one of the most powerful tools for improving student achievement and engagement across all disciplines. And so this research focused specifically upon the targeted use of Studiosity for careful formative feedback.

We worked with six UK universities, drawn from high-, middle-, and lower-tariff institutions, and collected a full range of data from them. We ran student surveys, and we held focus groups and interviews with both staff and students, which explored ethical concerns, different perceptions of AI use, and the tensions between institutional policies and student practice. We combined this with analysis of over 8,000 data points from the Studiosity platform itself, which measures learning gains. In that process we assessed a range of aspects, such as grammar and language, citation, and higher-level academic writing skills such as criticality and analysis. And that was done looking at a range of assessment tasks, because we wanted it to be realistic in a university setting. So, across disciplines, we looked at report writing, essays, case studies, reflections, and speeches, all at the formative stage. And then after that, institutions shared their retention and progression data with us.

"the data on the academic writing skills showed that writing scores generally improved across the board, especially with repeated submissions"

So the combination of all of that allowed us to examine not only measurable outcomes, but really get to grips with how staff and students felt about AI in practice, in the wild. First, the academic data. The data on the academic writing skills showed that writing scores generally improved across the board, especially with repeated submissions, with the optimal number being around six. But then we thought, well, that's nice to know, but we wanted to know specifically how. So we looked at specific types of assessment and also types of feedback. Essays and scientific report writing, along with text analysis tasks, made the largest improvements. Speeches didn't do so well, but that might be down to a lack of convention, which may change across cultures and subjects: a speech might look quite different in politics to how it might in drama. What was interesting as well, drawing on previous research from the University of Bedfordshire undertaken by David Pike, was that students didn't just want feedback on the mechanics of their writing; they wanted feedback that related to elements like criticality, use of sources, and sophistication of grammar, because that's what makes the difference to their writing. It was a kind of code-switching that they needed to be able to do. They wanted to be able to write like academics; they wanted to sound like they belonged in their academic environment. They didn't just want to put full stops in the right places. So we looked at the analysis of writing in terms of criticality, and that showed steady improvements. It also showed steady improvements in language complexity.

It was interesting to see there was a more spiky pattern with citation. It's almost like students didn't really pay attention to feedback saying 'you need to cite this' or 'you haven't referenced this correctly' for a couple of submissions, and then they would add it in on the third. So this could potentially be an area where you can predict, as an institution or as a subject, where you want to input teaching and really focus on how to reference, what you should be referencing, and styles of referencing. Alternatively, you can use it to think more carefully about changing student behaviour around when they start to add referencing into their essays. So it can potentially influence student practice in helpful ways. I think everybody's been there in an academic context, thinking 'I'll put the references in later', and it's only through practice and nagging that you learn that maybe doesn't work so well.

"students who were using Studiosity had significantly higher odds of staying in university."

We also looked at the impacts on retention and progression, and that was really key; it's something that came out loud and clear from this. Data gathered on 645 students who were identified as being at particularly high risk of dropping out showed that this group of high-risk students benefited hugely. Students who were using Studiosity had significantly higher odds of staying in university. Now, this research doesn't pretend to account for every factor that might have played into that, so we can't prove direct causation, but there was a really clear positive correlation between Studiosity use and students staying on track, and the more they used it, the stronger that became. Of those identified as most likely to drop out, among students who were using around six interactions for a piece of feedback, we saw retention of between 75% and 96% at their respective institutions. Again, we can't say it's causation, but it was a very strong correlation.

So we started thinking about why that might be, and this is where some of the focus group, interview, and survey data came in. The thinking is that students found their academic confidence strengthened. They were able to understand what they needed to do in quite straightforward, manageable chunks. It felt manageable to them, and there wasn't a sense of shame in the fact that they weren't able to perform to the level that they'd wanted to. They didn't feel that they'd lost face to a lecturer; they hadn't felt like they'd been found out as not being good enough to be there, because they were dealing with a machine, essentially. So they developed their writing in tandem with developing their sense of legitimacy in that academic space, and the two things played on each other. On multiple occasions, students said that the AI helped them to articulate their ideas more clearly, and that enabled them to feel more capable rather than overwhelmed. That sense of reassurance then played a role in sustaining engagement; the tasks were broken down into smaller, more manageable chunks. Among the quotes in the study, when you read it, you'll see "it's not the ideas I struggled with, it's how I write them down" and "sometimes I struggle with ideas where I've got them in my head and I can verbalise them better than when I get them written down on paper". That was where the feedback was proving invaluable, because it enabled students to start feeling like they were able to articulate their own thoughts. If students go to a generative AI to provide them with the thoughts, it's not their words, and they were very clear in their feedback and their responses that they didn't feel that was their work, and they wanted to hand in their work. So they were viewing it as a scaffold to their learning; they were keen to own what they were handing in.

The research also found there was a mismatch between how students and institutions understood AI use, and there were conversations that were missing. In many ways, it felt a bit like a Teams call where you're not quite looking each other in the eye: conversations just passing each other by. Students were writing hybrid, taking their ideas, having conversations with a chatbot in the same way that they would have a conversation with a person, and then thinking, 'right, I've developed some ideas, I'm considering that, and now I need to put that into my own thinking and articulate it in a different space as an academic essay', with help from the Studiosity tool to write their own words and their own ideas. But they were too scared to say at any point that they'd used an AI, because they thought they would be told off or come unstuck under academic integrity rules. There was a low-trust culture beginning to develop where students feared punishment and didn't feel supported by their institutions, and so it became a game of not being caught out rather than being proactive in their use. Staff were equally finding that they were struggling to prove things one way or the other, so the conversation was all around proof and 'can you demonstrate that you actually have an understanding of the topic'. It was about pulling those sorts of things apart, and one member of staff even said that it was "the conversation that no one wants to have, that no one wanted to think about AI".

"the research really highlights the importance of clear yet nuanced AI policies. It was about transparent communication that was key."

So in conclusion, the research really highlights the importance of clear yet nuanced AI policies; transparent communication was key. And also equity of access: the tool provided by the institution was available to everybody. Everybody had fair access to it. They knew it was allowed, they knew they were able to use it, and they didn't have to hide that they'd used it. And it helped them to articulate their own thinking. So this research really reiterates that without these sorts of things, students with limited digital literacy or limited financial means end up being disproportionately penalised for using AI.

So as academics we need to stop viewing AI as a threat, step away from conversations about it being cheating, and start considering it more as an academic partner that, when guided responsibly, students can use to boost their own learning. Then the work we see at the summative stage is actually better quality, and we haven't had to worry about the nuts and bolts of the writing. But it also enables students to feel more confident that they belong at the place where they have chosen to study, that they're able to speak in the right language and write in the right way, and that they don't have to worry about their insecurities around participation.

Prof Rebecca Bunting [00:21:21] Thanks, Beki. That was a very good and concise summary of a much longer and more complex piece that we will be able to read in full in due course, when it's all formatted and available. So I've got a few questions to start with for the panel, but I'm very happy to take questions in the chat if we have some time for those towards the end. I'd like to start by thinking about policy, and perhaps come to Steph and Nick in turn. Steph, in terms of your overview of the sector and being able to see what's going on in many conversations across it, to what extent are you seeing policy and practice develop in this domain, not just about AI, but specifically about that shift to AI for learning, not just for policing what has been learnt? And I just wondered whether you could say anything about: are we going fast enough? Are we capturing this in any way?

"thinking about how students are using it in assessments and ensuring integrity of higher education assessments and qualification weren't compromised"

Dr Steph Harris [00:22:32] Well, first of all, it's a very interesting and timely question. As colleagues on the webinar will know, we recently had the post-16 Skills White Paper published by the government, which was quite interesting on this topic, actually. I'll try to resist quoting it verbatim at you; you can read it for yourself. But it had a topic at the end of the chapter on universities that was specifically about harnessing the benefits of AI, with some sub-bullet points under that. Lots of those sub-bullet points were positive. They talked about this government's data for AI strategy, which it published last week, and its AI research strategy, about workforce development, the broader workforce development in AI, and the role of higher education providers in doing some of that. But when it got on to the topic of teaching and learning, it was slightly more negative, I would say, in its tone on AI.

It talked about supporting the Office for Students (OfS) to assess the impact of artificial intelligence, thinking about how students are using it in assessments and ensuring the integrity of higher education assessments and qualifications weren't compromised. And as others have said in this webinar, clearly that's foundational, right? That's base camp. You have to make sure that's true, that the basic product is not being disrupted, and make sure you've got that right. But there was much less focus on how you build on that and how you do some of the things that Beki has really helpfully articulated, particularly thinking about how you can use AI to support student retention and progression, as well as helping students develop the skills that will secure good outcomes in the workforce later down the line. So I think it would be fair to say that the Department for Education's (DfE) thinking feels like it's at quite an early stage, and that it hasn't moved on to thinking about what some of the benefits of adopting AI might be from a policy point of view.

And the last thing I would say, and colleagues from the devolved nations will have to forgive me because this is a relatively English-focused answer, is that I think the OfS is slightly ahead of the DfE on some of this. It has been thinking about balancing that foundational approach, making sure that standards are maintained and that assessments are rigorous, with starting to think about how it might support innovation in the sector in ways that actually do improve student outcomes. There's a really helpful blog from the OfS, published back in June, that I encourage colleagues to look at, on supporting universities to experiment while making sure that they've got the right guard rails in place [Embracing innovation in higher education: our approach to artificial intelligence]. So it's a conversation that's still developing, I guess, would be my high-level summary, Rebecca.

Prof Rebecca Bunting [00:25:37] Thank you, Steph. That's really interesting. And just moving to Nick now: is this a national policy issue? Do you think the OfS should be getting its teeth into this? Or is it something that really has got to work up from the ground, and we've got to own this in the sector?

Nick Hillman, OBE [00:25:56] Well, yeah, thank you, Rebecca. I could listen to Steph all day on this, because I thought she brought coherence to actually a pretty incoherent white paper, and I think her framing of it was very useful. Equally, I think Rebecca Mace's work is very important, because this is one of those areas where we do need to evaluate what's happening on the ground to find out which AI tools are beneficial and which are working less well. And that's a long way round to answering your question, Rebecca, which is that I think it does need to be ground up to the maximum possible degree. I say that for a number of reasons. First of all, my background is working with policymakers. They are generally some of the least tech-savvy people you could ever meet, so I'm not sure you always want them setting guidelines in this area. And secondly, they tend to think of guidelines as being almost laws. The temptation, if you're a policymaker, is to make things statutory or close to statutory. And of course, this is a very fast-moving, fast-paced area. As we saw even within our own sector when some of these generative AI tools were first turned on, the initial temptation was just to say people shouldn't be using them. People just should not be using them.

"it's not bad teaching because the academic has used AI to help them just as it's not bad learning just because the students used AI to help them."

I had a journalist on the phone to me last week. She had found an example where it did look like AI was being used in quite a lazy way by academics to design and teach a course, and this journalist wanted me to say, basically, that AI was just terrible and no one should use it. And I said, no, look: if a course is badly taught and badly designed and the learning resources are poorly designed, then that is bad teaching. But it's not bad teaching because the academic has used AI to help them just as it's not bad learning just because the students used AI to help them. There are some ways in which it really expands your horizons and other ways in which it narrows them. In our recent collection of essays, I thought one of the most interesting was by Rose Luckin from the Institute of Education, where she said actually we all need to be smarter in the future. AI is so sophisticated that if we're going to get maximum benefit from it, we actually need to be smarter, so we can use these tools to their maximum advantage and get the best out of them, and not use them to encourage us to be lazier.

So I think this is such an important piece of work. We do need those conversations within the sector, curated by Universities UK or indeed companies like Studiosity, or via almost any organisation, because the process of the conversation, thinking about what those policy frameworks should be, is almost as useful as the final framework. We're learning from each other, learning from best practice here and elsewhere, and indeed from other sectors. So we do need to think about guard rails, as somebody said, but they need to be from the bottom up, they need to be based on experience, as this work is, and they need to be as sector-owned as possible. Obviously the regulator has a role, but as sector-owned as is possible.

Prof Rebecca Bunting [00:29:08] Thanks, Nick. Thank you. I want now to think about student outcomes and what Beki was saying about impact on progression, on retention, and potentially on graduate outcomes, and the extent to which AI can have an impact there. So Beki, could I ask you to say a little more about some of those findings from the research around retention, and the kind of B3 data that we're also very concerned about in the sector?

"it becomes consistent for everybody, then that makes the biggest difference for student outcomes."

Dr Rebecca Mace [00:29:43] Yes. The research showed this really strong correlation when AI can operate as an infrastructural support mechanism: it can offer 24/7 support, it's available to all students, and at a time that they need it. The time that they need it was also interesting. Some students were using it at two o'clock in the morning, so not at a time when lecturers or members of staff are on hand, and that was when it was at its best. That was when it became almost like a pastoral tool at its best, when those sorts of help desks and things were available 24/7. So it's not accidental; it's an ongoing process. It's not occasional, because it's built into university processes. It isn't then just reliant on one or two individuals who maybe are just really great at doing that in their departments, and when it becomes consistent for everybody, that makes the biggest difference for student outcomes.

"the feedback itself serves as this kind of stabilising scaffold. It's not just a correction"

So if you can connect with this academic writing, this sense of 'I don't necessarily feel like I can write in the right way', and improve that, it improves students' psychological resilience. They are able to feel more like they're participating in a space that they actually have a voice in. Because they then feel that they have a voice and can articulate themselves, it steps away from 'I don't know how to write this, perhaps I don't belong here', and moves more to 'I don't know how to write this section. Oh, okay, here's how it breaks this part down', and it's an achievable task. So the feedback itself serves as this kind of stabilising scaffold. It's not just a correction, it's not just feedback to make you better; it becomes a different type of relationship that they're having with the comments. So there is this impact at an individual level, which we can see, but that then translates out to the system level, the university institution level, in terms of retention, particularly for the students that are often hardest to reach: those who work part-time jobs and are therefore maybe not sitting down at times when university lecturers are available to help them, and the ones who may be less likely to come forward because they don't want to admit that they were worrying they shouldn't be there. They don't want to be the ones who feel like they don't belong, so they'll keep quiet until it's too late.

Prof Rebecca Bunting [00:32:14] Thanks, Beki. I think this issue of belonging, although it isn't quite an AI matter, is one we've often looked at in terms of settling in and feeling that they're okay and that they can do it and so on. But actually, not being able to speak or write the right language is an immensely rich area for research, isn't it? Because to be able to belong, you have to feel that you use the right discourse, that you can do it. And if it's an AI tool, or just an exceptionally good tutor, that can help you to do that, then that's a really important part of belonging and feeling that you're part of that group who can do this. And it's about equality in the end, isn't it?

Dr Rebecca Mace [00:33:02] So one of the things that we were keen on looking at is that it's not that copy-and-paste 'oh, this sounds clever, I'll use this'. It was the way the feedback was delivered: it was integrated into their writing and changed it using their words. It keeps pushing the student back to their text. These are your words. Come on, you've got the voice, and we want you to articulate yourself. Don't just use those words because they sound clever and fit the topic. So that was the key part of this.

Prof Rebecca Bunting [00:33:30] It's owning it, isn't it? Thank you very much, Beki. Let me move on to Lisa now, just to ask a bit more about this. Obviously it was a Studiosity product that was being looked at in the research, but I wonder what other aspects of Studiosity's work relate here, because I know that retention and student outcomes are very much at the heart of Studiosity's intentions. So I wondered if you could perhaps say a little about other impactful initiatives and how that's going?

Lisa Abrahams [00:34:07] Yes, definitely. I think from what we've just heard from Beki, we can really see that Studiosity is a great tool for students. We've seen that in a lot of the research, including a lot of the research that we do in collaboration with our partners, or that our partners do themselves into Studiosity; there's lots of that out there. And being able to provide that valuable out-of-hours support, giving students that safe space to go and get feedback and support via AI, is really crucial for all of our students that have access to Studiosity.

But the other part that I just want to spend a couple of minutes talking about is the service and how we can provide enhanced reporting and data insights to universities, giving those valuable insights which can help support that kind of whole-institutional change. As well as complete transparency over all of the feedback and anything that's been said to students, staff at universities who work with us also have access to reporting. They can look at usage patterns, they can look at which faculties are using Studiosity, and in particular they can look at time of use. As Beki mentioned earlier, we tend to see a lot of students using our service late into the evening, just because they've got to work around other bits and pieces they might be doing. And one of the things that we can really help pinpoint for universities is academic writing ability.

So we look at tracking improvements in critical thinking skills at an individual level, but also at a wider faculty level as well. Being able to identify those outliers quickly can really help universities support that retention piece in a timely manner: picking up those students who may need more help than what we can provide, and giving them that next step to support at the university. So, for example, with our academic writing ability tool we can help identify students who are at a really good level with their writing, an advanced level, but actually being able to get those students to the next level and the right level of support, so that they could go from a 2:1 up to a first, is really crucial to a lot of the universities that we work with. And then of course you've got the students who are potentially struggling and at the bottom of their writing ability, and being able to quickly identify those students means you can get those early interventions in and really pick up on them. It's that sense of belonging as well: bringing them in and understanding what it is that the university can do to help and support them.

So I just want to give you a bit of an example of that in practice at one of our partner universities: the University of Greenwich, who have partnered with Studiosity, and I have personally worked with them since 2018. Often, students at the University of Greenwich were very able; they just needed a little bit more support with their academic writing, in particular when English wasn't their first language, or if they were postgraduates, or returning to study after a long time out, or perhaps a more traditional kind of new student coming in.

"very early on in that academic year, you're able to highlight those to help support those that could be at risk of failure as well."

So academic tutors and lecturers were spending a lot of time giving feedback on written skills and the writing aspects of students' studies, and they were struggling to get the quality of work that they needed. I think Beki mentioned this earlier: you can have the best ideas and the best kind of mind, but actually being able to convey that in a written piece, whether that's an essay or a business report or a scientific report, can be a real struggle for a lot of students. So, to fast forward to 2025, what the University of Greenwich have done is give all of their students, even partnership students, access to Studiosity, and in particular AI feedback in minutes. They set up a programme called Write with Confidence, developed specifically with access and participation in mind, and also to support that retention piece. They ask students to submit part of their first assignment to Studiosity in around week five or week six of coming into university. What that was able to do was pick up those students very early on, get those early interventions to them, and help support them with their writing, as well as any other bits and pieces they might need support with. But more importantly, it can also highlight those students who did not complete that initial first review, that first assignment they were asked to do. And so very early on in that academic year, you're able to highlight and support those who could be at risk of failure as well. Alongside that, they also found that students who had used Studiosity in those very early days at university achieved a better mark: a 6% increase in grades for those students that used Studiosity. And being able to do that at scale, with all of your students at one time, is really quite crucial.

"100% of students who were involved in that pilot said they felt more confident with with their writing and with their studies after using Studiosity."

We also have other universities who've worked with us, such as the University of Greater Manchester, who just completed one of our UUK pro bono trials. Those students were very much using Studiosity to support them with block teaching, because that's a model the university is moving to. After that pilot, 100% of students who were involved in that pilot said they felt more confident with their writing and with their studies after using Studiosity. And having run that pilot, they've now rolled Studiosity out to all of their first years this academic year as well.

So there's lots and lots of research and bits and pieces that I could go on and on about, but I won't, because I know we're short on time! But yes, that's a bit of a flavour of what we can do to support universities.

Prof Rebecca Bunting [00:40:03] Thank you, Lisa. I just want to ask Beki a question about what you might call the AI literacy deficit and the risks that we face. We know about digital literacy deficits. Are we at risk of that happening all over again with AI, with a divide between students who can and do get on well with it, and those who are not used to working with AI, or haven't had that prior experience, and are cast adrift?

Dr Rebecca Mace [00:40:38] So what you tend to see with AI, and actually we're probably beginning to move through it now, but certainly when generative AI appeared on the scene, is a reverse J-curve. People encountering it for the first time were like, 'wow, this is amazing, this is the best thing I've ever seen, it can just generate all this text, isn't this fantastic?' And then the more people used it, the more they realised it wasn't quite what they were wanting, or they spent longer crafting prompts to get a response than if they'd just written it themselves in the first place. So the more you used it, the better you understood it. And that started a divide between people who had easy access to these generative AI tools and people who didn't, because the ones who hadn't got as much access remained up there thinking this is amazing, while the ones who had more access got over that quite quickly.

On top of that, we also saw a split between people who were only using it through things like Snapchat or on their WhatsApp, rather than dealing with it in an academic context. So there was a divide in how you would use it as well. And then, in general, there was this sort of shift and change in society's relationship with knowledge: how we own ideas and whose ideas are what, which correlates with how we own many things in society now. You don't own record collections anymore. You don't even own your car in the same way; lots of people pay in instalments or pay a subscription fee, or those sorts of processes. So owning anything involves a different set of engagements, and the idea of an idea being yours became more like a meme, where you can spread it thinly across, change it and shift its interpretation, and it still belongs to you. You don't have to reference all the other people.

So the way in which AI was being used has massively changed our relationship with knowledge. Students who are less sophisticated in their use were tending to get caught up in academic integrity issues because they didn't know how to navigate that landscape. They didn't know how to deal with it, and the policies that were in place ended up being punitive to that group in particular. So we were inadvertently reinforcing digital divides from a knowledge perspective.

Prof Rebecca Bunting [00:42:56] Yes, thanks, Beki. A very interesting and concerning aspect of all of this. In the last few minutes, I'd just like to come back to Nick and Steph, perhaps Nick first this time, to ask about the implications of all of this for staffing and the development of staff skills. There is that divide that we know about between what our students can do and what we can do, in many cases. Workforce skills are clearly a big thing in the white paper. How do you see this playing out for staff development, to give those core and fundamental skills and the confidence that are going to be needed?

"university staff need to be on top of all of this, and that's why events like this, and Studiosity research and indeed all sorts of other things going on are so important." 

Nick Hillman, OBE [00:43:39] Yes, well, it's a very good question, and I'm sure Steph can answer it in a more granular fashion. We know from poll after poll after poll that most people who go to higher education go primarily because they want a fulfilling career afterwards, and that means they need the skills that employers want, up-to-date skills. So that does of course mean that university staff need to be on top of all of this, and that's why events like this, and Studiosity research and indeed all sorts of other things going on are so important. And people might want to, as I say, look at our blog today from Janice Kay, which is exactly on this.

But I'll end, if I may, by passing to Steph with an anecdote. I'm here at Cranfield University today; my job partly entails travelling around the country looking at what individual universities are doing. I've just been having a very interesting chat with some of the people who run the MBA course here at Cranfield. They say a lot of the projects now being done by their students are on AI in the workplace, and the students expect the university staff teaching them on the course to have an understanding of that and fresh, up-to-date experience of it, whether that's via training or spending time themselves in industry. So I think it's absolutely critical.

The labour market is changing, employers want people with these skills, and students are coming to us demanding these skills. I know there's a huge amount of pressure on university staff, but this is another very important area where we need to stay up to date, and universities need to help their staff stay up to date, whether that's through mission groups or representative bodies or government or individual universities.

Prof Rebecca Bunting [00:45:24] Yes, absolutely. Thanks, Nick. Steph, any further thoughts on that? 

Dr Steph Harris [00:45:28] Yeah, I'll keep it very brief, but I think Nick has highlighted two things, and there are, to my mind, two distinct things to think about here. My answer is going to be about academic staff, because most of our conversation so far has been about teaching, but obviously there's a conversation to have about the entirety of the university workforce and their skills in this area. When it comes to academic staff, Nick has highlighted two things: one is about up-skilling individuals to deliver teaching and learning, and the other is about keeping up to date with what's current in the industries that some of those students might go on to work in, keeping the content they're teaching those students up to date, as well as the methods of delivery of teaching and assessment. And that, I think, is an important distinction. The sector has well-developed mechanisms, through Advance HE, the QAA and others, for thinking about how the pedagogy needs to develop, how assessment methods need to develop, and how you keep an eye on quality assurance and quality enhancement. But then there is a question for those academics in their individual fields about how they can keep up to date with where AI is taking that field, and the workforce development as well. So I think it's crucial that we think about both of those things and how we can support both of them.

Prof Rebecca Bunting [00:46:56] Thanks, Steph. Yes, absolutely. Well, look, we've come to the end of our time, so I'm just going to draw this to a close now and thank the panel. I want to thank those of you who joined us; there were lots of interesting comments and a few questions in the chat, and we will follow those up in the follow-up.

Thank you all for taking part and I hope that's provoked a lot of thinking for you and that you can take that back into your organisations for further consideration. Thank you all very much.