
AI in training and education

The hype around AI is so intense that there is a risk we will fail to capitalise on the opportunities it affords. Nowhere is this more salient, argues David Sharp, CEO of learning provider International Workplace, than in training and education.

Let’s start with the hype. Founding DeepMind in 2010, Demis Hassabis defined its mission as solving “the problem of intelligence”, then using AI “to solve everything else”. Last year Sam Altman, CEO of OpenAI, called for $7 trillion to build the super-powerful chips needed to create AGI – artificial general intelligence, aka ‘superintelligence’. In 2023 UK Research and Innovation (UKRI) announced plans to support “moonshots”, described as “bold, ambitious, and transformative ideas” to address major global challenges. Last month Altman predicted “superintelligence will happen this year” as AI agents start to automate tasks we previously did ourselves. And Sir Keir Starmer has announced plans to “unleash AI” across the UK to boost productivity and growth.

This tidal wave of techno-solutionist hyperbole must not go unchallenged. AI threatens many of the principles western society values, like democracy, privacy and intellectual property rights. The infrastructure required to deliver AI is a threat to precious planetary resources, predicated on hazardous, low-paid work that exploits vulnerable people. And we’re spending ever more time negotiating security obstacles designed to protect us from the very applications that AI promises will set us free.

AI IN EDUCATION

These concerns are relevant to the use of AI in education, which technology providers see as a way of freeing up time for teachers and trainers and making courses more interactive and engaging for learners. OpenAI has launched a free AI training course for teachers in the US, to help them use ChatGPT to produce lesson plan content and streamline work tasks. It’s the first step in a process towards the automation of learning. The appeal is obvious: a ‘universal teacher’ with access to all the world’s knowledge and the ability to translate it into multiple languages, repackage it as content, deliver it efficiently, and objectively assess learner interaction and progress. No need for an unreliable human to deliver training or mark coursework, and no inconsistency in approach when scaling it throughout your organisation.

Is this realistic though? And if so, what will it mean for the role of the human in training and education?

The answer to the first question is no. The concept of a universal teacher mistakes information for knowledge, treating it as something finite that can be collated, analysed and measured. It is predicated on a definition of intelligence that has been captured by corporations and politicians for their own gain – a process described by academic Stephen Cave as the “fetishisation of intelligence”. If we allow corporate and state entities to present a need for greater intelligence as a problem that can be solved by technology, then it is inevitable that AI will be prescribed as the solution.

How does their concept of intelligence – artificial or otherwise – embrace what Edinburgh academic Ian Deary identifies as equally valuable human traits: creativity, wisdom, leadership, charisma, calmness and altruism, among others? Are they not also forms of intelligence? How can these traits be quantified, promoted or assessed?

A universal teacher doesn’t have access to all the knowledge in the world – just the data that’s been scraped from the (western-dominated) internet or harvested from the millions of datapoints generated by daily user interaction with applications and devices. It is not representative of everyone in the world (many of whom are individually invisible to the reach of technology) and – like humans – it presents a worldview that is incapable of being unbiased. To think otherwise fails to recognise something intrinsically human: that everyone’s lived reality is infinitely rich and that there is not – and can never be – an objective measure of intelligence.

AI & WORKPLACE TRAINING

What does that mean for employers considering using AI tools to create, deliver and track learning in the workplace? First, if you’re using AI, think about how and where your (and your workers’) data is being processed, how you can maintain control over it, and how you will consult with your workers to bring them along on your AI journey – especially where algorithmic processing of their personal data is used to make judgements about their actual or potential performance. Transparency is vital.

Second, now that AI tools make it easier than ever to create professional-looking content, don’t let your eyes fool you. User experience (UX) matters greatly, but so do the credentials that underpin learning content. Make sure you can trace the legitimacy of assets, and that reference sources are relevant and up to date.

Finally, don’t fall for the hype. AI tools can assist with some tasks but obfuscate others, and knowing the difference is key. Learner engagement data can be very valuable for employers, particularly as a contextual measure of organisational performance, helping to identify gaps that learning programmes can address. But learner engagement data is not a proxy for knowledge, competency or intelligence, which are taught and learned through lived experience and will always be partial and incomplete. We’ll always need a human for that.
