A study of 10,330 university students tells educators that technical exposure to AI has not translated into skill, confidence, or trust.
While students without access to these tools face a clear disadvantage, the 81% who use LLMs to complete their degrees have moved the conversation from whether to use AI to how to use it.
In 2026, a dual frustration has emerged: students are grappling with "AI addiction" - a stressor for 40% of AI users globally - alongside the persistent threat of being wrongly accused of cheating, with AI or otherwise. Together these pressures create an urgent mandate for institutions to shift their focus (back) toward pedagogy and toward validating the genuine output of degrees.
The study finds:
1. Students believe that AI use is causing cognitive atrophy (aka "brainrot")
44% of students now believe that relying on generative AI for assignments is actively reducing their critical thinking and communication skills.
Qualitative data highlights a deep concern that "laziness has a price" (male, 34-41, South Australia). Students describe how AI "saves me time and gives quick answers" (female, 26-33, Ontario, Canada) yet short-circuits the learning process, so that "someone gradually becomes reluctant to think" (female, 34-41, university in South East England, studying from Saudi Arabia).
2. AI detection tools are creating an epidemic of student stress
Roughly 77% of students experience significant stress regarding AI detection, fearing their original work will be misidentified as machine-generated.
This "guilty until proven innocent" (male, 18-25, Canterbury, England) atmosphere creates a punitive and unsafe learning environment where students say they are prioritising using the LLM and then also "taking a course on the AI detector" (female, 18-25, New York), over demonstrating learned skills.
3. Universities are failing to provide clear AI guidance
While 58% of students report receiving some training, the remaining 42% are either unsure whether an AI policy exists or certain that there is none.
A perceived lack of institutional clarity leaves students to navigate ethical grey areas; one student shared, "most of what I know... comes from general advice online rather than formal university guidance" (male, 26-33, UK).
4. The most vulnerable learners are the most concerned about intellectual erosion
A generational divide exists: 48% of students aged 18–25 agree that AI is reducing their critical thinking, compared with 40% of those aged 26+.
These younger learners are increasingly self-critical; one admitted, "I use it to brainstorm and feel like I'm not using my brain at all" (male, 18-25, regional South Australia), and another worried, "At the end of the day, I worried that I would focus more on obtaining the right answer rather than learning the material" (female, 18-25, Ontario, Canada).
5. Technical confidence masks a deeper fear of tool over-reliance
While students are high adopters, a learning confidence gap has emerged: only 41% feel highly confident they are genuinely learning while using AI. This stems from a fear of "weak[en]ing my own brain" (male, 18-25, UAE) and from a recognition that "AI can become a crutch that people lean on and they won't even realize how dependent they become on it. I want to make sure I am still doing my absolute best to solve the problem or understand things without the use of AI, but at the same time I need to know how to utilize that tool so I am not afraid of it - because it is arguably the future" (female, 18-25, Texas, USA).
In conclusion, a pedagogy-first pivot is emerging to protect degree value.
Or: 'Learning first, AI second'.
To protect the value of a degree, university leaders are moving beyond the early-2023 AI-literacy myth that mere exposure to LLMs creates more valuable graduates.
Without guardrails to protect core skills - thinking, collaboration, communication - graduates lose agency and employability; and, as the study shows, students are increasingly aware of this loss of personal and professional value and of the risks to their futures.
Fortunately, it is now clear to leaders and educators that institutional success depends on explicit, pedagogy-aligned guardrails that build skilled thinkers first and foremost, not a production line of AI operators.