Discussion: The delusion of chatbots, for the illusion of learning.

Insights

Oct 28, 2025

Summary: The "Chatbot Delusion" - the result of placing technology before the learning process in AI for student support - has led to multi-faceted failure within institutions. To mitigate this, higher education leaders are thinking beyond prior AI assumptions to instead prioritise fit-for-purpose, student-learning solutions. This approach: a. transfers risk and compliance to a proven vendor; b. allows universities to take advantage of technology advancements quickly rather than continually re-build and re-invest; c. ensures evidence of defensible governance and pedagogy, in both inputs and outputs, for regulators; d. speeds up relevant implementation and technology acceptance by students; e. keeps institutional focus on teaching and learning; and - lastly but critically - f. ensures the ROI is about learning outcomes from the get-go.

Navigating risk, and the central question of student learning

Governance. Data. Privacy. IP Protection. All critical issues as universities navigate their approach to AI.

But there’s an even bigger, mission-critical shadow being cast across the sector’s future:

  • Are students learning?

Educators and students are worried about the loss of critical thinking skills and over-reliance on AI chatbots and their derivatives. Notably, this year the majority of university-level students expressed concern about their capacity to learn while using AI (Studiosity-YouGov Wellbeing Survey, 2025). And at the leadership level, the conversation is increasingly centred on security compliance, pedagogical compliance, and technology acceptance.

With several converging mission-critical issues looming, there is a discernible shift in how higher education is discussing, reviewing, and investing in AI.

The Chatbot Delusion: A multi-faceted failure

The "Chatbot Delusion" is the belief that a generic, conversational AI is sufficient as a learning assistant, or even a good baseline use of AI around college-level learning.

Tune into any panel, event, or podcast at the moment and you can hear how this chat-interface narrative limits both the conversation and the sector's imagination for its future with AI.

This narrow view means universities are already confronting failure on several fronts.

  1. Pedagogical Failure - The Illusion of Learning. Generic, public LLMs promote output over the learning process - that is, they skip the multi-step, iterative work required to develop critical thinking and communication skills. Tellingly, there is no evidence of student learning gain from generic LLM tools, while there is widespread concern about cognitive offloading. General AI platforms are also facing scrutiny over consumer-revenue goals that undermine pedagogy.
  2. Technology-Acceptance Failure. Limiting the AI narrative to ‘chatbots’ presents a no-win dilemma. On the one hand, unconstrained, public chatbots (LLMs) can be engaging to the point of addiction - and addiction is the enemy of learning. On the other hand, custom chatbots end up inconsistently implemented, with poor equity of access and restricted, second-best user experiences. Frustrated students simply switch to their GPT tab. As an East-Coast university leader observed in October, "Students just don't seem to engage as much in our chat tools."
  3. Project and Logistical Failure. The allure of the customised chatbot is misguided: take-up and equity of access remain low, and the burden is shifted rather than alleviated. Small-scale, custom-built chatbots push the burden onto teachers to adapt, to engage, and to maintain the technology, security, and pedagogy all at once. On the other side, non-custom, generalised LLM access typically ends with the enforcement or mitigation of policy, training, and integrity repercussions shifted to teaching and professional staff. Either approach sidesteps the institution's responsibility to support all students, at scale, and on budget.
  4. Budget Failure. The pace of technology means that well-meaning, DIY efforts to customise LLM chat interfaces are almost immediately lapped by technology improvements. This puts pressure on leaders to justify significant “3-year spend” on projects, plus the ongoing costs of maintaining an immediately outdated technology stack. Further financial risk comes in the form of lock-in with particular LLMs and their owners. Increasingly, as early disruption and knee-jerk reactions settle in the coming years, leaders are choosing defensible, robust learning infrastructure for students - the ‘normal’ approach to any new education technology adoption.


Breaking the delusion, and adopting defensible AI

Almost three years into the GPT era, the delusion is starting to lift.

This is led in part by acknowledgement from the wider technology sector that the ‘conversational chat interface’ “feels archaic” and is unlikely to be the end game for strong student experiences, with a "new wave" of tools coming. There is also a noted challenge amongst higher ed leaders of simply being fast enough to keep up.

As a result, the trend is for university leaders and educators to look to specialised AI for student feedback - unequivocally and openly designed for the problem HE is solving.

That is: nuanced approaches to writing and feedback that deliver strong, student-led experiences, align with integrity policies, and are designed first and foremost for critical thinking and learning at the post-secondary level.

This shift starts to solve the “are they learning?” crisis that has come with AI. But it also reflects universities' strength in robust due diligence (learn fast, move slow - as Prof Rose Luckin notably explains).

Student-Centric AI succeeds because it puts the learning process before the technology interface.

  • Pedagogy first: Specialised AI built just for higher education serves the learning process, with formative writing feedback, student ownership, revision cycles, and transparency for educators.
  • Proof of learning: Fit-for-purpose services have independent evidence of learning gain amongst students.
  • Lowers risk and cost: Learning-first AI enables transparency and evidence for educators and regulators, and takes on the required security controls and technology maintenance on an ongoing basis.

Many university leaders are challenging the current narrative and the delusion that AI = chatbot. For those not there yet, poor decisions now will likely leave institutions with a high price to pay in coming years - both financially and in accountability to teachers, students, and the public.

HE leaders' path forward

For institutions already effectively managing change in the AI era, with a strong teaching and learning culture, current direction looks like:

  1. Prioritise the learning process over the technology interface. Choose specialised HE solutions that support pedagogical frameworks and scaffolding.
  2. Hurry, slowly. Be wary of high-cost build projects, long project timelines (e.g. 1-3 years), or highly customised work, all of which will inevitably be surpassed in an unstable technology environment.
  3. Demand more of technology providers. Shift risk to the vendor and require proof of efficacy via real evidence of learning upfront. Opt for vendors who provide 'regulator-ready' data on student skill development and assurance of pedagogical effectiveness, and who can show that risk, maintenance, and compliance (including pedagogical compliance) are managed continually.

And for higher education, the key question - and test - remains: Are students learning?

About Studiosity

Studiosity is ethical, AI-powered writing feedback and study support for student learning. Universities partner with Studiosity to support student success at scale, achieving a 4.4x return through retention while protecting integrity and reducing risk.