We are currently conducting an unregulated experiment on the human brain. While we mandate rigorous clinical trials for every new drug, we have allowed Silicon Valley to "prescribe" digital tools to our students with zero proof of efficacy. Following Dr. Jared Cooney Horvath’s chilling warnings to the U.S. Senate regarding the cognitive decline of Gen Z, the question is no longer if technology is harming learning, but why we allowed the venture capital business model to dictate the classroom. From "expert blindness" to the dangerous removal of the cognitive friction required to think, this is an insider's look at why the current EdTech landscape is a case of pedagogical malpractice.
By Jack Goodman
For at least two decades, K-12 educators and university researchers have observed - and standardised tests have measured - incremental declines in students’ capabilities. Literacy and numeracy scores have been falling at the same time that digital technologies - first desktop computers and the internet, then mobile phones, laptops, and social media - have come to mediate students’ learning.
For the past three years, schools and universities have been overwhelmed by the tsunami of generative artificial intelligence, a technology that students quickly realised could complete their assignments in no time, with no effort, bypassing traditional academic integrity safeguards. Universities are suffering a plague of cheating as students embrace unconstrained large language models for every task, assignment, essay and exam.
"Is technology the cause or cure of this crisis? Or is it simply a contemporaneous but unrelated fact?"
Neuroscientist and academic researcher Dr. Jared Cooney Horvath has connected the dots in his new treatise/manifesto, The Digital Delusion: How Classroom Technology Harms Our Kids’ Learning - And How to Help Them Thrive Again. He concludes that educational technology - “edtech” - is the cause of our crisis and the culprit we should blame. An expert in data and statistics, as well as the brain science that underpins learning, Horvath makes a strong case that the swamp of screens our eyes flit across all day has been undermining students’ capacity to learn, a warning also sounded by the OECD.
Horvath brings a scientist’s lens to his analysis of the outcomes of tech in education. His analysis and conclusions are powerful and important. But he never asks the question: why does edtech look and perform as it does? Why is there, for example, no scientifically valid evidence base for the vast majority of software and hardware introduced into formal learning environments?
This is something society rightly insists upon - and all governments legislate - when it comes to medical safety and pharmaceutical innovations. Companies that develop new medical interventions must invest in randomised controlled trials to measure impact and to monitor side effects, even though they are time-consuming, expensive, and often frustratingly inconclusive.
"Companies that develop new medical interventions must invest in randomised controlled trials... why is there no scientifically valid evidence base for the vast majority of software introduced into formal learning?"
And that’s exactly what’s missing in information technology, including the subset known as edtech. Rigorous, evidence-based studies that establish pedagogical safety - that technologies and programs align with brain science and human learning patterns, and augment, rather than inhibit, the rate of knowledge acquisition - should be essential before technologies are introduced into educational settings.
Why aren’t they? Simply put: time, cost, and a lack of regulation.
The vast majority of technology companies - and certainly the ones Horvath writes about - were established with investments from professional venture capital (VC) funds. These investors - who sit at the heart of Silicon Valley - make their investments with myriad restrictions on their investee companies, including time-frames for execution of business plans. Most VC investors require an “exit event” - either a follow-on investment, sale of the company, or a public listing - within five to seven years.
This timeline is too short for the slow work of developing technologies collaboratively with educators, iterating designs, and then allowing for independent, rigorous evaluation of their impact on learning. Instead, tech start-ups - and more mature companies - compete to deploy their inventions with little more evidence than case studies, testimonials, and claims of record MAU - monthly active users - a bit like the blurbs on the cover of books like Horvath’s. Will you like his book just because the “actor and education advocate” Hugh Grant says it’s “Terrifying and essential reading”?
"The five-to-seven-year VC 'exit event' is insufficient for the time-consuming work of developing technologies collaboratively with educators and allowing for independent, rigorous evaluation."
And because governments have not mandated such research, the vast majority of companies developing solutions for education institutions skip that essential step of rigorously proving efficacy and safety prior to deployment. I write from experience, having spent more than three decades in edtech startups across the US and Australia, working both for VC-backed companies and for those backed by more patient capital from founders and other long-term investors.
"The vast majority of tech companies skip the essential step of rigorously proving efficacy and safety prior to deployment because governments have not mandated such research."
The second mistake Horvath makes is confusing edtech with all tech. Much of his complaint is not with software designed to sit within the teacher-student nexus but with the ubiquitous hardware and software that permeates modern society. Laptops, tablets, and smartphones, all connected to the internet by wifi and ethernet cables, have invaded educational settings, displacing traditional textbooks, notebooks, pens and paper, and fundamentally altering the learning experience.
There’s good evidence, particularly in K-12, that these technologies need to be much more carefully regulated because they work against the brain science of learning. Horvath is at his best when documenting how uncontrolled access to these “tools” is deeply detrimental to literacy and numeracy and is driving falling scores across all major national and international benchmarks.
And so we come to the current crisis around generative artificial intelligence. Again, Horvath diagnoses the problem genAI causes, particularly at universities. When students use unrestricted, foundational LLMs to complete their academic work, they are engaging in cognitive offloading, diminishing their development of higher order thinking skills, and undermining their sense of self.
“Using AI to skip the slow, sometimes tedious work of learning isn’t the key to developing higher-order skills; it’s the surest way to prevent them from emerging at all,” Horvath writes on page 117. Indeed, if university leaders want a quick study of the true consequences of the trajectory we are currently on, they should look no further than Chapter 6 of The Digital Delusion.
"Using AI to skip the slow, sometimes tedious work of learning isn’t the key to developing higher-order skills; it’s the surest way to prevent them from emerging at all."
The foundation model companies know this, which is why they have built “guided learning” and “study modes” - too-easily-disabled veneers that purportedly stop the chatbots from simply answering questions and generating text, and from undermining the effort required for true learning to occur.
Why are ChatGPT, Gemini, Copilot and the rest engineered to operate as effortless, sycophantic sentence-, paragraph- and essay-generating machines? Because that’s the fastest way to mass adoption, and the Silicon Valley mantra when it comes to market share is that first-mover advantage usually leads to a “winner-take-all” outcome. Tools are made “free” at the outset, with monetisation - in the form of ads and/or subscription fees - to come once the product is so deeply embedded and users so addicted that “switching off” isn’t an option.
These LLMs are all productivity tools, which makes them fundamentally antithetical to any educational setting where learning requires the “friction” of cognitive effort. They can be incredibly powerful when used by experts, like highly skilled academics, to augment their teaching. But as Horvath explains, “expert blindness” sets in when we forget how difficult it is to learn a skill we’ve already mastered. When university leaders don’t actively discourage students from using these tools, they “assume that the same tools they use to perform tasks efficiently will be equally useful for novices.” (page 112) Either that, or they believe that because these tools are freely accessible outside the university, it’s impossible to stop students from using them - and undermining their own learning in the process.
"Large Language Models are productivity tools, which makes them fundamentally antithetical to any educational setting where learning requires the ‘friction’ of cognitive effort."
"When university leaders don’t actively discourage students from using these tools, they assume that the same tools they use to perform tasks efficiently will be equally useful for novices."
There is a better way, and we know that not all edtech tools are detrimental to learning. Universities have the capacity - and the obligation - to lead the way in choosing tools that augment learning, are evidence-based, and align with their own mission of educating students effectively and authentically.
"Anything short of choosing evidence-based tools is accepting as inevitable the world that Silicon Valley wants to foist upon the sector."
________________________________________________________________________
About the Author:
Jack Goodman, from Sydney, Australia, is the founder of Studiosity and a frequent commentator on academic integrity around the world, including the impact of generative AI on higher education and the responsibility of institutions to prioritise student outcomes.