Held on June 28th, the "Growing Student Success with Ethical AI for Learning - A Multi-Institutional Case Study Review" webinar was an engaging discussion on the impact of AI tools on teaching, learning, and assessment. A range of topics was discussed, including responsible AI use with Studiosity+, learning opportunities through unlimited, repeated formative feedback, scalable support, keeping humans in the loop, and actionable data.
Read the full discussion transcript from our expert panellists - including answers to the many questions from the highly engaged delegates.
- Dr David Pike, Head of Digital Learning, University of Bedfordshire
- Sharon Perera, Head of Academic and Digital Skills, University of Greenwich
- Chair: Graeme Kirk, Partnerships Manager at Studiosity
>> Watch the full recording, here [1:01:02]
Graeme Kirk: Good afternoon, everyone, and welcome to the Growing Student Success with Ethical AI for Learning webinar. I'm Graeme Kirk and I'm a Partnerships Manager for UK and Europe at Studiosity, and I'll be chairing the webinar. But more importantly, it's my pleasure to welcome our guests, Dr David Pike, who is the Head of Digital Learning at the University of Bedfordshire, and Sharon Perera, who's the Head of Academic and Digital Skills at the University of Greenwich. They'll be sharing their experiences of the ethical use of AI for learning, but also their wider institutional experience of AI. And I'll be handing over to them shortly to introduce themselves.
During the webinar, we will be discussing student behaviour towards the use of AI, and AI for learning in particular versus AI for convenience and productivity. We will cover AI and academic integrity. And then thirdly, the importance of humans being involved with AI for learning - for governance and ethics, but also for the accuracy of the information students are receiving.
A quick introduction for those who are not familiar with Studiosity. Studiosity was established over 20 years ago, providing formative feedback on academic writing skills, specifically within higher education. This has been a human-based service in partnership with institutions, and we're embedded within the institution's VLE. From there, students can submit their assignments in advance of the deadline - whether it's an essay, business report or case study - and they receive their feedback typically within 4-6 hours, 365 days a year, from a specialist on their academic writing. The feedback they receive will be on their spelling and grammar, structure, choice of language and use of sources. And with Studiosity+, our new AI offering, it now includes critical thinking feedback.
"So why is Studiosity investing in AI? Studiosity's mission, at its heart, is to increase life chances for students across the globe"
So why is Studiosity investing in AI? Studiosity's mission, at its heart, is to increase life chances for students across the globe. Not just within their education, but beyond and into the workplace. And AI built from our own secure, higher education specific data has already given us the opportunity to maximise and expand on that goal, to the point where we are now world leaders in AI for learning specifically. The goal to increase life chances for students is not just within our existing partners, who with Studiosity+ can now really expand on the use of the service without limitations - and no doubt you'll hear from David some quite interesting examples, and probably from Sharon as well, of student use on that front - but it also breaks down international barriers for us as well. And from the student side, students have been asking for more access to support services, but also easier and faster access to that support. And again, AI has given us that possibility. We can now provide feedback in just under two minutes. Yep, just two minutes, 24/7, 365 days a year, on any assignment up to 8,000 words. It really opens up parity of access within institutions as well, but also the frequency with which students can use the service. I know Bedfordshire has got some crazy numbers on that front as well.
As leaders in AI for learning - and I emphasise for learning - we don't generate any content for students. We don't change students' work or change students' voices. We don't have a 'tick to accept all changes' option. Students are simply provided with feedback to learn from, to improve their writing. And as I mentioned before, it's not just for further education, it's for a future in life, going into the workplace.
A number of our partners at the moment have either fully transferred all students to Studiosity+ or are running pilots, some with all students, and some with just particular cohorts of students. We've already had some really interesting feedback, and no doubt we'll hear more from David and Sharon shortly with regards to their experience at their institutions. We've also had some great feedback from the likes of Roehampton, where they've told us that attainment and pass rates were significantly improved regardless of students' starting position. In fact, there were greater gains for students with borderline or low grades, which is obviously great to see.
And then specifically with Studiosity+, Plymouth Marjon have seen an 842% increase in participation from students. So that's a huge increase in formative feedback, which has really impacted their widening access and participation goals.
So that's about enough from me. Before I hand over to David and Sharon to make their introductions, just a couple of housekeeping points. Please do put all your questions in the Q&A, not the chat, at the bottom of your screen. If they go into the chat, there's a possibility we might miss those questions, so please make sure that you use the Q&A. We will pick up those questions at the end of the conversation. We're anticipating this to be a very popular session, with lots of questions, so it's likely it might run over. David and Sharon have kindly offered to stay on a little bit longer to answer as many questions as possible. However, if we don't get around to every single question, we will answer them and feed back after the session, offline, and we will also make the recording available for everyone as well. If you can stay, that's great. But like I say, rest assured you'll get the answers to the questions and the recording in due course after the webinar.
So now it's over to David and Sharon to make their introductions, before we kick off into the insights, starting with their experience of student behaviour towards AI.
Sharon Perera: Thank you Graeme. Hello, everyone. I'm Sharon Perera, I'm Head of Academic and Digital Skills at the University of Greenwich. And this is the department that supports all our students across the university with academic skills - that means study skills, library skills and digital skills. We also look after the Studiosity project for the university and support students with using it as part of the arsenal for supporting students.
Dr David Pike: Right. Happy Friday everyone. I'm Dr David Pike from the University of Bedfordshire. It's been very interesting talking to Sharon a bit more over the last week or so; our universities are very alike but in very different places, and at our heart we've both got the same mission, and it's about improving the life chances of our students. So we get a lot of students who come from non-traditional backgrounds - a lot of our students are not the 18 to 21 year olds you might expect; actually, a lot of our students are a lot older. So they come to us with a lot of confidence issues.
So, interestingly, my job as Head of Digital Learning - this isn't something normally expected of me to work with and deal with - but actually I've been working with Studiosity for I think probably four years now. Is that right, Graeme? It's four years, I think, isn't it, that we've been with you guys?
Graeme Kirk: Going into our fourth year.
Dr David Pike: We're getting there. I keep forgetting what year it is sometimes. But the important thing here is it's been a great help to our students, and our students need a lot of help. Confidence is a big thing, and I'll talk a bit more about that in a bit. But it's something that we've probably all experienced and, you know, reaching out to those students - they come to us, they're not sure what to do, are they doing the right thing? Coming to university is a real challenge. But actually what we really want the students to think about is, you know, what's the best version of themselves that they can become?
So a very philosophical point. But that's kind of where I come from. But with the day job, I run the university's virtual learning environment and a lot of the other digital systems, particularly with a learning focus. So that's what I do.
Graeme Kirk: Fantastic. So the first discussion point of the agenda is really looking at the student point of view towards AI. First of all, holistically, there are obviously very much two different sides of the fence: those that are using and abusing it, and those that are so nervous to use it, because they think it's wrong, that they don't touch any AI. But it would also be good to get your thoughts with regards to how students are using it in an ethical way for learning, against that ease and convenience for productivity. So, I don't know who wants to open up on that first. Sharon, if you've got some thoughts on that to start with?
Sharon Perera: Okay. Thanks, Graeme. Yeah, I think early on last year, when ChatGPT burst on the scene, we did student consultations and we talked to students about their understanding and what they were doing. And very early on, the very clear message from students was: we want to use it - even if we're not using it, we want to use it - but tell us how to use it ethically and safely. And that has been a consistent message. So for those students who engage with us, this is what they want. We know that students want to use it ethically and safely.
We do also know that students are not using it the way that we would like to see them use it. We have seen this coming up in assessments and in written work. And part of it is that I think some students probably don't know how to use it and would like to. So that's our goal: to help students to actively learn how to use it. So yeah, that's a starter for ten. I think, you know, students want to use it, but actually, you know, they don't want the teachers to use it. They don't want the teachers to use it to mark their work. So there are double standards there. I think that's consistent across all our students when we have those conversations.
"AI exploded onto the scene for us like a giant firework."
Dr David Pike: So we've had very similar issues to Sharon. It's quite interesting. AI exploded onto the scene for us like a giant firework. Fortunately, I was on annual leave at the time - and that's always a good time to get lots of panicked phone calls as people start to realise - but it's been a bit of a difficult journey for our students, because a lot of our students not only come from, you know, deprived areas, etc., but we also have a lot of overseas students. And so they're used to having somebody or something; they're experts in their own education system. And often it can be the case that, you know, repetition is the idea: you agree with what the professor says. And something like, you know, generative AI is really dangerous, right? Because suddenly it acts like an authority and it seems to know things, but it's not always right. And the biggest challenge we've had with our students is, first of all - and Sharon, you might find something very similar - I don't think a lot of our students actually knew what it was. And it's taken a long time for them to realise, but we started to realise they'd figured it out because we started to see all sorts of, and I'm sure colleagues here will see this kind of thing, really interesting types of practice - where perhaps the students might copy the prompt from the generative AI, and things like that.
So there was a lot of debate around how do we explain this to the students, and how can we help them understand that actually they need to write themselves. That's the idea, you know - this is about making an intellectual effort rather than just trying to get the answer. And of course, it's very tempting to do that if you're not confident and you're not sure, and if somebody comes along and says, this is the answer here. What we tried to do with the students - the ethics are terribly complicated to explain, and also to experience as well, and if you're not used to the way things are done in a UK university, that just makes it even worse. So one of the ways we tried to explain it to the students was: well, think about this - if you were driving down the road and you didn't think there was a police officer there with a radar gun, would you be tempted to speed? So we tried to have conversations with students where we explained that the university has rules, and you do need to stick to the rules, but it's also okay to experiment as well. And that's where it becomes very difficult, because at the same time we say, hey, we've got rules, and then, second, we say it's okay to experiment - and experimenting invariably means breaking the rules, unfortunately.
"There was a lot of debate around how do we explain this to the students, and how can we help them understand that actually they need to kind of write themselves. That's the idea, you know, this is about making an intellectual effort rather than just trying to get the answer"
So we're at the point now where we know - I know from the survey data we've got, where we've been looking at Writing Feedback Plus - that the students have it as a concern. It doesn't mean that they won't use the system. It just simply means that they say to us: we're actually a bit worried about this; I'm worried that by engaging with generative AI - oh sorry - engaging with AI rather, which is what Studiosity is, rather than generative AI, I'll somehow get into trouble. So we've kind of got ourselves a bit of a mountain to climb, really, with explaining to the students: actually, these are the kinds of behaviours that work, and these are the kinds of behaviours which you can experiment with, but don't use that in your assignment; don't do those things. So it can be really, really tricky. So I guess it's a kind of, you know - students' behaviours are such that if they don't think you're watching, and I know that sounds very Big Brother, they're likely to do things that you don't approve of or don't expect, which is always interesting, isn't it? So yeah, a key thing for us has been - and Sharon will give you some examples, I guess, as well - just explaining to students and having that difficult conversation about the difference between right and wrong.
And actually, just to say really quickly, a final point - I'll keep it short - these are not new arguments that we're having. Anybody who's been around long enough to remember first switching on Turnitin - this is just another variant of those arguments. Students will copy things from the internet; we know people do that anyway. AI is just another, much more scary, I'd say, variant of that. So there we go.
Graeme Kirk: It'll be quite interesting to hear from those online, actually, with regards to communication to students. So I know with Bedfordshire you had a line in the sand with AI 12 months ago, but you've had to change your position and communication to students on AI since then. It's a forever moving beast. So it'll be interesting to hear from those who are online, whether they've got any questions on that or anything they can input. But also from Sharon, from your perspective, you've noticed a big change in communication, a change in language around the use of AI to students - from, like I say, 12 months ago when it was very much a no, to actually finding a happy middle ground.
"So the first and foremost thing is that the university encourages students and staff to use AI. And then there are the parameters and the boundaries to work within."
Sharon Perera: Yeah. So in May 2023, we wrote our guidance for students and we explained to them, you know, in very broad brush strokes - what is AI, how they can use it. And the university published its position really quickly to give everybody a steer on how we were going to approach AI. So the first and foremost thing is that the university encourages students and staff to use AI. And then there are the parameters and the boundaries to work within. Of course, you know, these boundaries are consistent with how we would share information about ourselves anyway - so it's really about behaviour.
So as David was saying, you know, it's about behaviours; it's not about the tools, it's not about prohibiting their use. So we are looking at how we can update our guidance, but only to enhance it, because since we released it, students were saying to us, well, what about images? How do we generate images? And there is no clear guidance about this, and that's true. We are working out as a university, you know, what our university branding says about using generated images. So what we've done is really to look at our own staff capabilities.
So when we started to update our guidance, we realised: actually, does the team know, you know, does the team have the capabilities to support students in this way? And what we found is that no, not across the board. You know, some of us have very well-developed capabilities and some in the team don't. So, as a team, we looked within ourselves and put in CPD so that we could experiment, use it, and understand how students are using it. But we also have very clear guidelines for students: if they do use generative AI for their assessment, or for any assignments, they must declare the use of it, they must reference it, and they must not use work that they have not produced. So those are very, very clear guidelines for students. But what we were finding was that, although the declaration was there, the template was there, the referencing template was there, they didn't know how to use it.
So, from that understanding, what we're going to do now is actually have workshops showing students step by step how to do it. And that's really our experience of, you know, giving students the guidance versus actually taking them through it. So we've had to reframe our own understanding of how we support students. So we're going to model the use of it, and we think that by doing that we'll be getting them out of that back room where they're kind of experimenting and maybe not using it correctly, so that we can really model good practice. And that's what we want to do with our students.
Graeme Kirk: Yeah, from the Studiosity side, we've done a lot of independent research around student engagement, and students' cognitive load is so high that they just literally want to be told what to do and when to do it - they don't want any sort of ambiguity. And I know, David, you've got some examples where students are basically generating something in AI and then referencing pretty much the whole assignment - I think you mentioned that one before.
Dr David Pike: Yeah, yeah, that's a bit of a problem for us, because again - Sharon's absolutely right - it's one thing to tell the students that the rules are there, but actually, and I'm speaking just from our own institutional perspective here, you're going to have to break some rules if you want to learn, right? That's the bottom line. You're not going to learn anything by not breaking or bending a rule. But then it's really about the process of the assessment. And I think that's the thing for us.
“we wanted to show the students that as an institution, if we could use AI responsibly through Studiosity then the students could do the same thing"
What we've been working on with the students is to try and explain to them - and this is why we went down the route of wanting to use Studiosity's AI tool - that we wanted to show the students that if we, as an institution, could use AI responsibly through Studiosity, then the students could do the same thing. And there's really something to be said for - how do I describe it? I try not to sound like it's something from Star Wars, but 'to try' is the idea; that's what you actually need to do, because there are a lot of problems when you come to write. And I'll admit to everyone in the room here, and this is being recorded: although I can write, I really struggled, and it took me a long time to learn to do that.
And I'm sure it's the same for everyone else here, and the same for our students. So it's about showing them that you don't need, you know, the bad tools, let's just say - I won't mention what they are, but everyone knows what they are - and actually showing students you can rely on yourself. It's okay, you are sufficient; it's just that you have to learn a bit more. And I think that's really, really important to get across to the students as well.
And you know, you can really hear it in what Sharon's saying - just to assure them that yeah, it's okay. And what we're finding is students want a lot of reassurance: am I doing the right thing?
Sometimes, though, that does tip over into expecting that Studiosity, and colleagues in Studiosity, will answer their assessment for them. It's not about that. It's about, you know, the process of writing itself and getting better. But I think that's something important to acknowledge there. You know, for our students reassurance is key, but also showing that, as I say, as an institution, we can be responsible, and we can show you how it's done. We can show you how we are being responsible.
"we make the point to students that we don't have the writing skills, they are not fully developed - it's a process... and the pilot where we’re using Studiosity+ really helps with that”
Sharon Perera: I just want to add one point to what David is saying. I think we make the point that actually, with writing, you know, we don't have that skill fully developed, do we? You know, it's a process. And we can only work towards the better model of where we want to get to if we work through that process. And I think, you know, the pilot where we've been using Studiosity+ really helps with that.
Interestingly enough, you know, we saw that when students were first allowed to use Studiosity+ and submit multiple times, they were doing that.
But then, you know, we're now seeing that with the feedback that they get, and using that feedback, some of our tutors have been working with them on that feedback and helping them to improve. So, you know, we're learning to use this new tool that we've got, but we're also getting that message across that actually, you know, writing doesn't happen perfectly the first time round. And, you know, you've got to learn to be able to do that.
Graeme Kirk: I actually had a conversation - this is kind of new for me as well, actually - with my Canadian counterpart about one of his partners, an international university. So, like UK universities, their students have access to Studiosity, but they also have access to another writing feedback [tool] where they can actually select a tick box to accept all changes - and the university also has AI detection running in the background. And the students who used this other paid-for service for writing feedback were flagged very high on AI probability within their writing, and those who didn't use it, and used the learning feedback that they got from Studiosity, flagged zero on the AI score. So it goes back to that point: it's AI for learning, not AI to do it for you.
Dr David Pike: That's really interesting. I've seen it described as - and I can't claim this because someone else wrote it - 'cheating and learning' was the phrase I'd heard, which I thought was quite funny. But that's the thing, we tried to make it work that way, and Sharon's example was really good - that's what the team here does. They're in a different team to me, the Learning Development team, but they're more in line with what Sharon does.
But with the students, again, make it fun, right? That's what we try to do. We, you know, use silly ideas like 'did you touch this last?' - and it's a silly idea, a bit like the playground game 'it', but actually it's really quite powerful, because you say to students: did you write that yourself? Were you typing on there? Were you editing that? Just like Graeme said, I think that's the thing for me - if you're going to learn, you need to take responsibility for learning. That's what being at university is about. That's a big challenge, but it's not one that's impossible to overcome. It's a good one. And when students leave the university, that's what they're going to need out in the big wide world, because they may not have, you know, Studiosity, unfortunately, and the other systems that I won't mention the name of - they may not have those. And so being reliant on yourself, being able to do it and be confident as well - really, really important. Well, certainly from my perspective, for the students I spoke to.
"Now what we can see is, for the first time ever, I get to be able to later today go to the data in Studiosity, pick up the data about referencing, and match that against the different cohorts we've got in our university"
Graeme Kirk: That's actually a really nice segue into the second point of the webinar, which is going into the AI side of things with regards to academic integrity. That's obviously a very hot topic at the moment, because there's a lot of generation through AI, which takes away students' knowledge and students' work. So it'll be good to get your thoughts on that - again, your experience with students. What processes do you have in place, either to identify a lack of integrity, or to manage and instruct students in best practice for completing assignments?
Sharon Perera: Well, at the University of Greenwich, one of the things that my team and I have actively done over the last few years - we felt like, you know, to start with, we were always approaching academic integrity from the perspective of a deficit. So we were calling it 'avoiding plagiarism', you know, academic offences. And we were seeing quite a lot of referrals to us and students coming to us. So, two years ago, we changed it. We developed a course, we shared it across the faculties, and we called it the Academic Integrity Course. And just through that we influenced the wording - the change, the naming of it to academic integrity rather than 'avoiding plagiarism' or 'detecting plagiarism'. And last year we developed an academic integrity module. But, you know, we also wanted to show students that actually integrity isn't just for academic work, it's about you. It's about being your authentic self. So we called it Integrity Matters. And we're really proud of this project, because it's about an ethos and it's about an approach to integrity.
So we really feel that it's the culture that we want to inculcate. And it's a slow process, but that's the approach that we're taking: that, you know, students should take time to develop and nurture this academic voice, this citizenship, and that's what we use to encourage students. And we want to spread this across universities as well. So as part of that initiative, we've developed a student voices symposium that we want to share with students. And I hope that, you know, with all of the participants here, we can share our symposium widely in November so that everybody can participate in it. And we also want to make our resources available to everybody across, you know, universities and international bodies as well.
So that's the approach, really. I would love, in an ideal, you know, utopia, for students to actually realise that this culture, this value of having integrity, is something that they will choose, rather than, you know, something we wished upon them. So that's our approach. And I know that's very idealistic, but I think we've got to start somewhere. And that's the approach. It's a goal, really.
“we also wanted to show it to students that actually integrity isn't just for academic work, it's about you. It's about being your authentic self”
Dr David Pike: I'm just going to say, first of all, if you haven't seen the materials and content that Sharon's got, you're definitely missing out. You absolutely should have a look. It is very, very good - really brilliant.
Yeah. So from our perspective then, we know our students are, you know, susceptible to not doing things the right way. So what we found was that actually the best way to deal with the problem, first of all, was to be absolutely crystal clear. Now, I'll admit we probably didn't go about it in the best way, because, like a lot of institutions, the initial reaction was 'don't use generative AI'. We didn't want people to do that.
But when we kind of came around, what we had to learn to do was, a bit like what Sharon was saying earlier, first of all focus on the staff. Staff were worried about it, not sure how to react, not sure what to do, because they wanted to talk to the students and they wanted to, you know, explain the rules effectively for the students and show them what they were expecting. But that's particularly difficult, right? Because there's a lot of - forgive the generalisation - a lot of chaos in the sector about what you should and shouldn't do with generative AI. Everyone's got a slightly different idea about it. And then that gets mixed up with the term AI, and then students are not sure when they hear the term AI - they get worried that they're going to get into trouble using AI. But it's not like that. So there are a lot of clarifying conversations needed. But what we found worked best of all - now, I'll admit to you all, at the University of Bedfordshire we do have Turnitin's AI detector turned on. So that is how we're seeing, you know, the malpractice as we go through.
What we try to do with the students, though, is engage them and show them that actually, you don't really need to use these tools to write your assignment - you can write it yourself - and then to show them ways in which you could use it. So turn that negative into a positive and get out there and engage students. It's interesting, because not all the students come to this in the same way. The nurses are particularly good at ethics - I even got an ethics lesson off one student, and I thought, gosh, that's good going - so they're a bit more on the ball. But even in those areas, we do see some students who have difficulty and have kind of fallen off the path. When it came to business, though, that was very interesting, because I think students had a bit of disbelief that we knew that they were doing some things that they shouldn't be doing, right? And I'd seen all sorts of unusual words used in assignments that I'd never seen in my entire life.
So again, what we're trying to encourage the students to do is - you know, some of them fail because they'd taken their work and put it into a generative AI system and said, oh no, it rewrote it for me. Whereas we are now saying: well, you don't need those things. We've got Studiosity, you can submit your work to it, and the university is going to guarantee it's safe. But the only thing I would say to you all is, be careful with that, because even when we've made statements like that, and we think we've shifted the debate forward, what can happen is we've had some students write something with generative AI and say, well, I submitted it to Studiosity. But hang on a minute - it's about the process. If you don't start in the right place - and this is where Sharon's work is really, really good, right? - if you don't start in the right place, and you don't try and follow more or less the right path, then that's where you're going to have a lot of problems. And I think that's an important thing to get across to the students.
It's about, getting back to what we talked about before, behaviours, you know: if we can use AI responsibly, so can you. And I think that's, you know, a big part of it. And I'd rather not see the students get there - we've had loads of academics doing Vivas and things like that, and I don't want the students to get to that point. But, you know, at the moment, for us, we're still in that transitional process. We're still trying to find our feet with explaining it and doing it with students. And so this coming academic year, you know, that's when we're going to really make a big push on explaining this and trying to get that across to the students and make those messages clear: what is good and what is bad, and where's the middle ground? What can you do? I think that's the important thing to focus on - pragmatism.
"Whereas we are now saying, well, you don't need those things. We've got Studiosity, you can submit your work in it, the university is going to guarantee it's safe"
Graeme Kirk: So how does that work for the institution? Pre-AI, where someone's used a source and hasn't referenced it properly, it's very easy to identify and address. How do you approach that from a post-submission perspective? Because Studiosity's invested quite a lot of money in AI detection, but ultimately it is incredibly hard to do - it had biases against international students - so it would be interesting to hear, from an institutional perspective, how did you address that?
Sharon Perera: Okay. Well, from an institutional perspective, there is a panel of people - and I think, university colleagues, this process will be consistent across universities - so if a student has committed an academic offence, it will go to an initial investigation, and if that is upheld, or we can't come to a decision, then it will go to an academic offences panel, where there are several people, and the student will be asked to, you know, talk about what they did. So that's the process. There is a process in place.
I think what happens after is really important. And maybe, you know, what happens before - so students have the tools and an understanding; that's really important. So, you know, I talked about our Integrity Matters module. In November we will run this symposium, so we have a network and, you know, we can create a community of practice, but also illustrate to students those do's and don'ts, those pitfalls. And the way that we're doing it is through situational learning in this module, where other students experience, you know, plagiarism, or accidentally lending somebody their work - those very simple, very relatable situations.
So that's the preventative work. But then if a student has been to an academic offences panel and has been found to have committed an academic offence, then that's also important. So we are running a referral service at the moment. And it's really new, because we wanted to see how we could take the resources and translate them and use them across, you know, the whole student journey. So, when the student has been to a panel, we invite them to a workshop where we explain to them what a panel is, why it's there, and what their next steps are - what can you do? And I think that's really important as well in that process of learning.
Graeme Kirk: Just to put the same question across to David before we move on to that third point, with regards to the identification of, you know, wrongful use of generative AI by students with regards to academic integrity. Obviously, there are some glaring errors that are very easy to identify, but it's not quite as straightforward as identifying text against previously published work.
Dr David Pike: I mean, I'll give the slightly politician-style answer - it's complicated. But actually, in some ways, the majority of the cases that we saw at first were pretty obvious, right? It was just copied and pasted, and the students' approaches were not particularly sophisticated. Some students are more sophisticated than others. But what generally tends to cause an issue is where you don't have process in the assessment - where the lecturers haven't had any kind of contact with the students. So what we've said to colleagues is, well, you know, look, you can argue about how effective Turnitin's AI detector is, and AI detection generally, but it is the police officer on the side of the road. I know I used that example a little while ago, but it is the police officer on the side of the road. And actually, that's probably enough to keep students realising that somebody is checking. If you know someone's checking, you are probably slightly less likely to do the things you shouldn't.
The other way we've done it is actually doing it live in a classroom. And this is massively risky: go into a classroom, get 200 students in front of you, load up generative AI, write something with it, submit it to Turnitin and just hope that it shows up. But then again, you know, saying to the students: look, we don't want you to do this. Actually, what we'd rather you do is write your work, go to Studiosity, submit it, get it corrected. You've got tons of time - it will come back in 2 or 3 minutes, right? It's not going to take the time it took before. So, you know, we're trying to make it as easy as possible for you to get through and over the line, and also showing the academics that as well. But I think that's the thing - where colleagues are concerned, I think with our students it's generally a bit more obvious.
We're seeing more cases, we've seen more Vivas, but we're not seeing a massive spike in 'you've definitely used AI'. It's not quite like that. I suspect - and we're having a look at this with some research - that when the students are writing, they're not writing in English; they're writing in their own native language and then translating it. And the students are saying, well, those are my own words - but they're not, okay, they're actually the words of the translator. And that's what we found, because we've got a lot of students with English as a second, third, fourth and, in some cases, fifth language. And I think that's where it's kind of catching the students out: they're able to express themselves in their native language, but when you translate it, that's when we see the problems. And that's, I think, what we're seeing in the Vivas - you might say they're false positives, but they're positives in the sense that a machine had a hand in it. And that's really what we don't want students doing.
So I guess the conclusion there, really, is that our academics have got better at spotting when somebody else is writing for the student. I think that's the key thing. And if you've got academics who deliver - and Sharon will have a view on this as well - my position with academics has been: you're an academic, you're an expert, you know how these students write, you've seen lots of groups like this. If you think there's something not right there, that's the start of a conversation with the students. So it's about - it sounds strange, doesn't it - going back to talking to students about these problems and having direct contact. But of course, I acknowledge that's not easy when you've got a lot of students in the room; that can be quite difficult.
Graeme Kirk: And another brilliant segue into the third point. It's almost like you rehearsed this, David.
Dr David Pike: It is, isn't it - the cheque's in the post right?
"the importance of human role in AI for learning"
Graeme Kirk: The third point: the importance of the human role in AI for learning. We can go into some of the insights from the research you've done, but certainly from a Studiosity perspective, we see it as just part of the student support - whether that's been previously with human feedback, or the AI, linking into the tutors, linking into your learning development team. And going into the AI, our AI is trained by our specialists; our specialists now have more of a quality assurance role, where it's actually trained by humans saying 'no, that's not quite right, this is what it needs to be in terms of feedback'. So it's not AI training AI. It'll be good to hear from an institutional perspective how that hybrid model works for you with regards to the use of AI alongside that human oversight, that human intervention or that human support.
Dr David Pike: So, over the last few weeks - and this is kind of literally, I'm going to use the phrase 'cutting edge' and I'm sure somebody's going to put air quotes here, but it's cutting edge, we've literally just done this - we've been working with academic staff. Our Learning Development team have been out with academic staff, exploring the difference between human feedback with Studiosity and AI feedback. So they've been having a look at that. And also we've got survey data from the students - so if you see the other publication, which is about the original writing feedback process, that's in there; we'll probably put a link to that if you want to have a look at it. But the important thing is: humans are extremely important.
What we've seen from talking to colleagues is that they recognise the amount of extra information students are getting when we're using AI. They did worry, and we used this as an opportunity to explain to them as well - no, no, no, we've not simply cut away the humans; they're still there. They're working with the system, they're training it, they're helping it to improve. And so it's interesting, because it's not only the students that, you know, need a confidence boost or need to be reassured, or just maybe need to understand. Colleagues worry because they blur AI and generative AI, and, if you like, you know, let's split them out: AI is the useful tool, generative AI not so useful.
So for the students as well, as I say, they worry about it not being human. But on the other hand, they're now happy because they're getting the feedback very, very quickly, and they're getting a lot more precise feedback in different places in the document. So they want to know exactly, like Graeme said earlier, they want to know exactly what to change. But at least this way, unlike some of the tools you find, the students have to change it themselves and take responsibility - and that was a big thing that came across to the academics.
I can give you a bit of an idea, because we did a quick analysis. There are a lot of thoughts that have come back from colleagues about the long-term effect of having more feedback, and what they said to us as well: "I physically couldn't give that much feedback to my students. It's impossible." We've got cohorts of 500 students in some areas; in the business school the depth is more - there's more feedback there. And that's not a criticism of the human service; you know, humans can only do so much, but at least humans are still in the loop here and helping to train the system. But also what they were saying was that there's something to be said for the students' ability to act on feedback. So you still need humans in there, right? They would say: okay, I can see that's really good; I can see that if the student went through and they followed that advice, actually, yes, that would make that assignment better. But the thing they come back to is actually what they would need to do, and Sharon's already picked up on this a bit as well - it's about that human touch. It's about explaining to students and showing them examples. And that's really where we're trying to position it now.
We're saying to our learning development colleagues: it's not there to stop you doing your job. Actually, what it's really doing is showing you where you should look, because actually a lot of the work that students put in is already pretty good. That's a really good sign, right? But you wouldn't know that until you've actually looked at the data.
So I guess the concluding point is, humans are still very valuable and important, and the research, the interviews we're doing, it's just, you know, reinforcing that point. Having humans there adds that extra layer. This isn't there to take something over, it's there to augment and improve.
"And so it's interesting because it's not only the students that, you know, need a confidence boost or need to be reassured, or just maybe need to understand, colleagues worry because they put that blur between AI and generative AI and if you like, you know, let's split them out - AI is the useful tool, generative AI not so useful"
Graeme Kirk: Brilliant. And over to Sharon to conclude.
Sharon Perera: Well, you know, just to continue what David was saying about how that student interaction with the AI-generated feedback works - my colleague Peter in my team has been experimenting quite a lot, and he's posted a few of his thoughts here as well, you know, and that's been really good. His feedback is on how he gets a student to submit to Studiosity+ - and, for colleagues who are not aware, the feedback on the writing is AI-generated, and that happens in a matter of maybe 2 or 3 minutes. And when the student receives that, he can work with the student on the elements that have been thrown up, highlighted, where they can work on it. And actually, that's great, because as well as that feedback, there's a bank of resources that the student gets.
And I think, you know, our students, like David's students, come from different backgrounds. So some students are perfectly capable of looking at the feedback, working on it and just, you know, flying from there - and we get some excellent feedback from those students. And some other students require other support. So actually having this kind of model really does help. And, you know, we have seen this, and when I look at the feedback, I can see exactly a way forward: how we can continue to do this, you know, scale it up and work with the students who most need that human interaction.
Graeme Kirk: Excellent. So yeah, absolutely, we completely agree with that in terms of it needing to be part of the process - not a standalone tool, but a contributing factor to student learning. It shouldn't be left to its own devices, absolutely. So we do have a few questions that have come in, so I'll work through those. The first question:
"When a highly disruptive new technology comes into use, who should decide the right or wrong use? Shouldn't everything be reassessed, including the right or wrong use? From a discussion, it sounds like the participants are trying to match AI to existing assessment models. Perhaps those assessment models are now redundant?"
Who wants to have a go at that?
Sharon Perera: I can make a start. So the reality is, I think, you know, the advent of generative AI on the scene, and into the mix, has come sooner than we can actually respond to it. And so the reality is we are using the current assessment systems that we have, but that doesn't mean that we're not working on changing them. And I believe all universities are looking hard at how they assess their students.
But, you know, we have a process in place and it's really hard to sort of pivot overnight, you know. So the reality is we are looking at how we can support students in the current climate, you know, where they have to do these assessments, and where they may be succumbing and maybe, you know, misusing AI. So that's the situation now. But I do think that our understanding is informing us, informing our learning and how we will, you know, go forward and adapt the way we assess. So I absolutely agree that assessment models have to change. But this is what we have to do now.
Dr David Pike: I'll pick up on the interesting bit - 'what's right and wrong?' Well, that's a good question: my morals are not your ethics, and my ethics are not your morals. And it really depends on what you were used to before, because 'right' in one place is not the same as 'right' in another. But I do get the point about the assessment models. And, you know, again, I go back to that point that it's not the first time we've done this dance. We had Turnitin, and then everyone was afraid that the world was going to collapse and fall down because students had the internet and could look things up, and that's how they would cheat. And there was all the understandable moral outrage then. And I think, you know, this is the thing: AI is here to stay. But what we've got to be careful of is a kind of knee-jerk response. There's lots of guidance out there right now telling you to do all sorts of fascinating, interesting things. And I can never get the year right, but I'm sure colleagues will remember Covid, when there was tons of guidance around the best thing to do with students, the best way to do this, the best way to do that - and sometimes it just needs a bit of time to settle down.
This is the time now, though, when there are lots of opportunities for anybody who likes embracing chaos and wants to be a bit brave. But I don't think that's for everybody, because you've still got the kind of bread and butter of assessments, and I don't know about colleagues who are here, but certainly in this institution it's not that easy to simply flip and change an assessment overnight. And if you do it that quickly, you do have a danger that it may not be the best thing in the long term. It's very, very easy to react.
But what an exciting opportunity, you know - that's the other way to look at it, right? We've got a chance now to go back and take a damn good look at assessments, because the world's turned, right? And so I try to see it a bit positively. The right and wrong argument, actually - I think I'd replace that with: what's going to lead to the most, how should I say, virtuous outcome for the students, yeah? We want our students to leave university being experts in their subject, being ambassadors. How are we going to do that? If I take away the AI part, we're still fixed there on the fundamental mission of the university. How do you send those students out? How are you going to turn them into ambassadors for, essentially, your institution? But then of course we've got AI to think about as well.
Sharon Perera: I'd like to just add to that as well, David. I think it's about, you know, if we are going to be developing our assessment, how can it be authentic and related to the real world? Students will be going out and using AI in the real world, so new assessments could probably embrace the use of AI tools in the assessment. A typical one, that I think has been going on even before generative AI came on the scene, was for computer science students to actually look at strings of code, you know, that have been generated by AI, and then comment on them. That's so simple, but you can translate that across everything. But I do think that the fundamentals of integrity always remain true, you know: is this your work? And that's what students have to be able to say 'yes' to, you know.
Dr David Pike: Yeah, no, I agree. I think that's the thing - it's about, you know, the classic question: do you want to live in a world where everyone just simply cheats using AI? The world isn't going to move on. And that sounds very rule-based, but that's kind of it, isn't it? That's really what we're after, because we want the students to be good at this. But I remember as a child - and I'm sure everyone may have been through this process as well - people saying you shouldn't use a calculator because you won't learn anything, right? So again, as I say, the world's moving on. What a chance, what an opportunity, though. Like I say, because that's the thing, right? The world's moving on. And you know, the best place to be, when the world spins in a different way, is in a university, because that's what a university is all about - about thinking. And these are difficult problems, right? There are no easy answers to any of these problems.
Graeme Kirk: Excellent. So going on to the next question:
"What is your position on the model of generative text Studiosity sometimes provides as part of the feedback?"
So I think that'll be more to do with the text that's generated for students in the feedback. So, David, you've got a mere 7,000 submissions on [Studiosity] Plus to sample from.
Dr David Pike: There's a few, there's a few! I think the thing coming across in the surveys that we've got out there at the moment - we're looking at the old Writing Feedback [Studiosity classic]; when you've used both systems, AI and the original system; and when you've just used AI [Studiosity+] - is that the students don't always understand. And this is true for human feedback too: they don't always understand what some of the comments mean. But that's not Studiosity's fault - I should stress that, because I'm looking at the comments that are next to me on a piece of paper here. A lot of the time they just need a bit more context, and I think what's happening is, because the AI system puts more comments in for the students, naturally students will say 'I'm now more confused, there are more comments'. But actually it's simply a case of needing a bit of help from academic colleagues to contextualise some of those things that the system [Studiosity] is saying to students. And they're very, very common and simple things. Sometimes, you know - we get it, I'm sure we've all done it - you just want somebody to read something out for you, and you go, 'oh, I understand it now'. And it's just hearing it out loud.
Graeme Kirk: It's that humans in the loop still.
Dr David Pike: Yeah. That's it, that's it. So again humans, we're still there everyone, don't worry, humans are still important.
Sharon Perera: Our pilot is quite small, but I do know, from working with my colleagues in the team, that sometimes, you know, it can be fairly stock - you can get a stock answer because it's AI-generated. So, for example, with punctuation, you know, you choose your punctuation, and the AI-generated model can just flag up that actually this is wrong. And the student will go, 'no, this is absolutely right!' And so that human interaction is really important here as well, you know, to say: this is your voice. You choose the way that you want to express yourself. So you can override some of it - it's flagging it up as something to look at, you know. So I think it's really important to advise students on how they would act on the feedback. And it may be that they just reject that feedback and choose to stay with their own choice of punctuation.
Dr David Pike: I'm going to try and invent a new term: advisory intelligence, as opposed to artificial intelligence, because that's it. That's what it's about.
Graeme Kirk: Yeah. The students should reflect on their feedback and take on board what they feel is correct. Just to follow on from that, Peter posed the question, basically:
"Weaker students might be tempted to simply copy and paste the text rather than seeing it as a learning opportunity to improve their writing".
I'm more than happy to give a Studiosity stock response to that, but it's probably far more valuable coming from you, in your experience of the feedback that Studiosity provides and what students are able to take from it and do with it.
Dr David Pike: Actually, the way it's phrased there, I don't think it's quite that easy to do. You could copy and paste it, but no, I don't think it's that simple. I was a bit unsure about this myself, and I've actually been asked this question: what happens if a student copied and pasted the advice from Studiosity straight into the assignment? And I've had to say, well, from the examples we've looked at, you can't really do that. It's not the way it's written. It's suggesting changes - it says, here's a slightly different way you could structure that sentence - and that's probably where the generic-advice comment comes from as well. So I don't think it's a straight drop-in.
There are other things out there that do offer straight drop-ins, which I don't think are a good idea for students to use because they just don't learn anything. But at least this way you have to read it and interpret it. And it's a good thing that our students are telling us they sometimes don't understand things, because that means they're trying to engage with it rather than lifting it and copying and pasting it - because they can't. Yeah. I don't know if that made sense?
"It is a good thing that our students are saying to us that sometimes they don't understand things because that means they're trying to engage with it and then not lifting it and copying and pasting it because they can't"
Sharon Perera: It does make sense, David. And actually, you could give students that choice. Sometimes, when you are supporting a student, a tutor might suggest another way of writing something that the student might take on - so helping with refocusing or recalibrating a paragraph, I think, is acceptable. It's a matter of judgement; it's a matter of how much the student copies. I don't think there's a huge amount for the student simply to take and copy. But we are learning, and I'm happy to learn from our experiences. I can't be definitive - I don't have all of the answers on that score.
Graeme Kirk: So, two more questions. One is very quick, so I'll go with the slightly longer one first, just because we're running out of time:
"Are there any independent peer review studies being done on the robust underlying technology, the impact on students' learning?"
David?
Dr David Pike: Number one is a bit tricky because I don't work for Studiosity, so I'd have to let you answer that one. But number two - yes. I have a long queue of things on a piece of paper next to me. Graeme has seen some of the research we've been producing here that's still in draft, but that will be leaving the building very soon. We've been through an initial analysis of the change: how do students rate the service? Are they using it any differently? Has there been an impact on attainment? And yes, yes and yes to some of those questions. It's a little bit more complicated now, so watch this space for the next research project to pop out.
What we're now shifting to is students' experiences. The survey has about 210 responses and we're about to do one more push on that, so you'll see that - it's an update to the original paper that I produced. Then we've got some other fun coming up: can you use the performance data from the AI system to predict and link to student demographics? So which students should you help, or could you help the most? Or which students need the help the most? Then we can follow that through and see if that started to have an impact on grades.
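For readers curious about what that kind of analysis can look like in practice, here is a minimal, purely illustrative sketch. It is not the Bedfordshire study: the file name, feature columns, and outcome variable are all invented, and it simply links hypothetical usage and demographic data to a binary attainment outcome with a logistic regression.

```python
# Illustrative only: hypothetical columns linking AI-feedback usage and
# demographics to a binary attainment outcome.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

df = pd.read_csv("studiosity_usage.csv")           # hypothetical data extract
features = ["submissions_per_term", "feedback_actions_taken",
            "commuter_student", "first_in_family"]
X = pd.get_dummies(df[features], drop_first=True)  # encode any categorical fields
y = df["passed_unit"]                              # hypothetical binary outcome

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# How well does usage plus demographics predict the outcome?
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
# Coefficients hint at which usage patterns and student groups are most
# associated with attainment, i.e. where targeted support might help most.
print(dict(zip(X.columns, model.coef_[0])))
```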
We've also got some fun coming up if anybody likes stats: a Bradley–Terry test - pairwise comparison. We're looking at how our advisors and our Learning Development team rate students' assessments compared to the AI, so that's going to be quite interesting: seeing if there's any difference, and what the different kinds of reviewer pick up on.
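For anyone who does like the stats: a Bradley–Terry model estimates a "strength" for each item from pairwise judgements of the form "A is better than B". The sketch below is a minimal, self-contained illustration with invented essay labels and judgements - not the actual Bedfordshire analysis - showing how strengths could be fitted separately from human advisors' choices and the AI's choices, so the two resulting rankings can then be compared.

```python
from collections import defaultdict

def fit_bradley_terry(comparisons, n_iter=200, tol=1e-8):
    """comparisons: list of (winner, loser) pairs. Returns {item: strength}."""
    items = {i for pair in comparisons for i in pair}
    wins = defaultdict(int)    # total wins per item
    n_pair = defaultdict(int)  # number of comparisons per unordered pair
    for winner, loser in comparisons:
        wins[winner] += 1
        n_pair[frozenset((winner, loser))] += 1

    p = {i: 1.0 for i in items}  # start with equal strengths
    for _ in range(n_iter):
        # Standard minorise-maximise update for the Bradley-Terry model
        new_p = {}
        for i in items:
            denom = sum(n_pair[frozenset((i, j))] / (p[i] + p[j])
                        for j in items
                        if j != i and frozenset((i, j)) in n_pair)
            new_p[i] = wins[i] / denom if denom > 0 else p[i]
        total = sum(new_p.values())                       # rescale: strengths are
        new_p = {i: v / total for i, v in new_p.items()}  # only defined up to scale
        if max(abs(new_p[i] - p[i]) for i in items) < tol:
            return new_p
        p = new_p
    return p

# Invented judgements: each pair means "first essay judged stronger than second".
human_judgements = [
    ("essay_A", "essay_B"), ("essay_A", "essay_B"),
    ("essay_A", "essay_C"), ("essay_C", "essay_A"),
    ("essay_B", "essay_C"), ("essay_C", "essay_B"),
]
ai_judgements = [
    ("essay_A", "essay_B"), ("essay_B", "essay_A"),
    ("essay_A", "essay_C"), ("essay_A", "essay_C"),
    ("essay_C", "essay_B"), ("essay_B", "essay_C"),
]

print("Human-based strengths:", fit_bradley_terry(human_judgements))
print("AI-based strengths:   ", fit_bradley_terry(ai_judgements))
# Comparing the two orderings (e.g. with a rank correlation) shows how far
# human reviewers and the AI agree on which assessments are stronger.
```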
But we're also looking at a higher level across the University: when and how can we divide up the job of using Studiosity? It's called activity theory, if anyone likes that kind of thing. We're looking at how it integrates into the systems and approaches we already have, and then at how we can make stricter, more proper measurements. For us, we've been putting students into Studiosity through quite large units, so that's why I say we've now got data showing there is a difference between the students who are using Studiosity in the unit and those who are not, or who are avoiding it. That's where we're heading right now. So watch this space, because I can certainly answer question number two - it just might take me a few different papers. Keep an eye open for them.
"Can you use the performance data from the AI system to predict and link to student demographics? So which students should you help, or could you help the most? Or which students need the help the most? Then we can follow that through and see if that started to have an impact on grades"
Graeme Kirk: So in terms of the underlying technology from us: it's been developed with us, since it's our own data and our own model. No one else has access to it. It's being worked on by our specialists, so in effect they're independent themselves, and they're training the system. Beyond that, in terms of peer review, again it'll be watch this space, but at the moment it's ranking better than human feedback, which is all good to see. And then finally, the last question to wrap up, which we should be able to summarise very quickly. David, probably one for you again, because you've got it turned on:
"Do you think TII [Turnitin] is reliable as everyone else seems to have it turned off?"
Dr David Pike: Well, I thought that too, and then I went and did a presentation in Durham. I thought I was going to walk into a room full of people saying, 'God, what have you done?' But actually, quite a few people do have it turned on, in a surprising number of places - whether they'll admit it or not is a different question. As to whether it's reliable - good question. The jury's out on that one.
However, the deterrent effect is perhaps important - I know Sharon will have a view on this one. I know that if we told students we simply didn't bother to check, or didn't have at least one system to do it, our staff would go off and do lots of different things. So it has the benefit of being the one system it's all done through. But also, if students think there's an opportunity to slip their practice past a weak spot in academic integrity processes, shall we say, they might well take it. So it's unfortunate, it's difficult, and it's a big ethical question again. Watch this space for that too, because I'm just about to send a study for publication which looks at the links between student demographics and AI detection.
But if you want a real, proper answer about academic integrity, Sharon is definitely the person to talk to because she really is the expert - I'm not, I'll admit.
Graeme Kirk: I think we'll have to wrap it up there, as we've gone up to the hour and I know that both Sharon and David need to shoot off. So a big thank you to our guests today, Dr David Pike and Sharon Perera - it's been a real pleasure to host this with you. And a big thank you to Andrea and Joey, who organised this and brought it together for everyone to be able to jump in on. The session has been recorded. If there are any further questions you feel we could have answered, please do drop them to us and we can share them with David and Sharon and get back to you in due course. Otherwise, the recording and all feedback to the questions will be in your inboxes soon.
All right. Thank you everyone. Thank you. Sharon. Thank you David.
Dr David Pike: Thank you. Have a great Friday.
Sharon Perera: Take care.