The UK government has developed a voracious appetite for artificial intelligence (AI), based on a promise of its apparently transformative power across myriad industries.
From prime minister Boris Johnson’s pledge to fund a £250m AI lab for the NHS, to the Department for Education’s recently launched ‘AI horizon scanning group’, AI is being lauded as a panacea for some of the most pressing issues society faces.
Education is just one of the sectors meeting AI with open arms. As Matthew Jones at Perlego argued for this title, the opportunities AI presents for closing educational accessibility gaps are exciting. In fact, educators, policymakers and investors are all being bombarded with messages about AI’s seemingly endless benefits in the classroom.
There’s just one problem with this promise, and it’s a big one: a lack of empirical evidence. While there are many passionately argued comment pieces championing AI in education (including from suppliers of edtech solutions such as myself), these are often based on personal conviction, not empirical evidence.
As schools minister Nick Gibb said in a response to a written parliamentary question on the issue, “AI is a complex, emerging area…However, the impact of these technologies in the classroom still remains largely unevidenced.”
Passion outweighing evidence
In China, where the government is reportedly ploughing billions of dollars into AI technologies for education, it is startling that one of the country’s most popular edtech suppliers, Squirrel, has offered little validation of its impact on students.
As MIT Technology Review describes, Squirrel positions itself at the forefront of AI edtech, yet seems to demonstrate its success through a “self-funded four-day study with 78 middle school students”. At first glance, this hardly constitutes an academically rigorous assessment.
This lack of validation is echoed across the edtech sector. Passion is outweighing evidence. Consider the finding that 40% of ‘AI startups’ don’t actually use AI. These companies are often led by founders positioning themselves as AI experts, yet we rarely question those self-appointed experts on the strength of their credentials. In fact, it seems that the bar to qualify as an AI expert is pretty low.
I’ve spent a decade setting up and running edtech startups, and I am both keenly interested in and excited about how AI can be used effectively in education. However, I don’t position myself as an AI expert, nor do I claim that our technology at Sparx is AI-led. Others aren’t so conservative. As such, when it comes to AI in edtech, we often see passion and headlines outweighing evidence.
My worry is that groups such as the DfE’s AI group will succumb to a growing blanket acceptance that AI is the answer to education’s challenges. Will they rely on ‘evidence’ provided by self-proclaimed ‘experts’, rather than taking a reasoned, long-term view of the real impact of these technologies?
The future of AI in education
The truth is that research is time-consuming, relatively expensive and provides few shortcuts. The Nuffield Foundation’s recent report, “Growing up digital: What do we really need to know about educating the digital generation?” echoes this:
“Any research programme will need to take account of the need to gather data from established practice, not simply at the implementation stage where technical glitches and the halo effect can both skew findings significantly. Moreover… there is a need for longitudinal studies as well as, where possible, retrospective ones.”
Evidencing impact through rigorous research isn’t about stifling innovation but about ensuring educational technologies make a measurable difference to learners and schools. This brings me to another concern: that the AI bubble is so all-encompassing that other equally exciting and viable technologies are being sidelined, to the detriment of learners.
For example, at Sparx we never claim that our technology is AI-led. While we harness innovative machine learning and algorithmic augmentation in our personalised maths learning solutions, our priority is ensuring the technology delivers, rather than basking in the halo of the ‘AI’ buzzword.
Over the last eight years, we have focused on subjecting our technology to academic-level assessment of its efficacy, with in-school testing and teacher input from day one. This approach takes time, but it is vital if we are in edtech for long-term impact, not short-term gain.
We are all aware of the effect of headline promises that aren’t backed by transparent evidence – our current political climate is a testament to that! As the DfE’s AI steering group starts its analysis, I hope that robust evidence is its priority. We can’t decide the future of our children’s education based on buzzwords, hyperbole and unqualified expertise. Without a focus on empirical and transparent evidence, this is where we’ll be headed.