Inform interview
Robot at Digifest
©Jisc and Matt Lincoln

Rose Luckin: the AI revolution is here

Colleges and universities will be transformed over the next decade by the introduction of artificial intelligence (AI).


Rose Luckin, a professor at UCL’s Institute of Education whose visionary work spans learning sciences, educational technology and AI, sets out the best-case and worst-case scenarios for educators.

Can you set out two visions of how AI might be used in education – utopia and dystopia?

The dystopian view is one where AI takes control: it becomes the student’s tutor and dictates what they should learn, based on the data it collects about that student.

We lose the human tutor, we forget the importance of human interaction and have personalised learning delivered purely by artificially intelligent systems.

In the long run, that will be cheaper. It’s expensive to build AI systems, but once you’ve built systems that learn, they can go on skilling themselves up. They don’t take days off sick, don’t go on strike, and don’t forget things or behave inconsistently. They may be wrong, but they’re consistently wrong.

The much nicer, and realistic, vision would be a system where learners are in charge. They have their own personal AI, but it is there to help them and their tutors understand their progress; to help them collate the data that demonstrates what they’re good at, what they’ve achieved, why they can do a particular job satisfactorily, or why they should be given a place at a university.

The learners are in charge, and the AI is there to work with educators to support students to be the best learners they can be. Human educators are a much sought-after resource in this vision, because everybody will need to learn throughout their lives.

What’s standing in the way of that utopia?

Probably the biggest thing is mindsets. People think about artificial intelligence as technology; they don’t think of it as intelligence. But you can’t build a successful AI if you don’t understand what intelligence is.

So it’s about shifting people’s mindsets so that they start thinking about intelligence and how you achieve the most sophisticated intelligence, or more intelligent behaviour and action, through a blend of artificial and human intelligence.

That means a mindset change for those who are building the technologies, so they appreciate that they absolutely must work with educators if they want to develop something that’s really worthwhile. It also means changing the mindset of educators, so that they want to be part of that conversation and believe they can make a valuable contribution to the design of AI for education.

Do you think educators should be apprehensive? Do they understand that this is a technology that could replace them?

Yes. It is here and we need teachers to engage with it.

It must seem quite scary – headlines tend to be attention-grabbing, understandably. For a lot of educators it must seem like, “Oh, yet another thing I need to be able to use and take account of and, gosh, I’ve got so much going on and how am I going to do this?”

I also see a lot of denial that it’s even here – but it absolutely is. We need to help teachers become much better educated in data literacy and AI so that they can be apprehensive in the right way.

A couple of years ago, in Intelligence Unleashed, you were frustrated with the status quo and with AI’s benefits not being realised. What has changed in two years, if anything?

It’s a cliché, but there is a perfect storm: ample data, cheap computing and AI algorithms mean technology can learn very quickly.

That has now got to a tipping point. Systems can learn huge bodies of knowledge more quickly and accurately than we can and identify precisely the right piece of information in answer to a question. And that puts us in a different position.

They can’t do everything, and there are many elements of human intelligence that cannot be automated. But the bit we’ve tended to value, the bit that relates to IQ and academic exam success, is one of the bits we have managed to automate. Therefore we, as educators, need to do something about it.

Let’s return to your utopian view. How do we achieve it?

We have to make sure educators are part of the conversation, now, about how AI should be used in education.

I would say, colloquially, get educators some skin in the game. We have to get the conversation going with the tech companies.

We do it in a small way here at UCL through a project called EDUCATE, which is all about getting edtech companies talking to educators, researchers and students. A company like Century Tech, which is developing machine learning, works closely with teachers so that its technology is developed in a way that is useful to them. But we need to make that happen over and over again.

We have to persuade technology companies that, if they are developing something for the education market, they can’t just adapt something they’ve been selling elsewhere. We should have supercharged AI for education, not some business hand-me-down.

They need to look at what education needs, and that means talking to educators to understand what teaching and learning require of the technology. And I don’t just mean that they bring a teacher on to an advisory group or employ ex-teachers, but that they actually get out there and work with teachers as design partners to develop the sort of thing that can really work.

You would never get the companies that design technologies for medicine doing it without medical experts. But we get educational technology developers doing it without consulting the educator experts. Why?

Is there a responsibility on the part of the educators and budget holders to say no, we’re not going to buy this if it’s not fit for educational purposes?

Absolutely. But we have to give them the ammunition, information and understanding to feel confident about saying that. In the same way, we need to make sure we have a population educated enough that, when an AI gives a particular decision they don’t think is right, they are able to say, with good reason, “No, actually, I’m not going to trust that.” We really do need to do a big education piece about AI.

How does that education piece happen?

It was highlighted in the House of Lords AI select committee report, where the fourth principle was about everyone being entitled to be educated in how to live and work alongside AI.

It’s a mass education programme - we need everybody to understand enough to be able to protect themselves and to be able to use this technology wisely.

Andreas Schleicher, who coordinates the OECD Programme for International Student Assessment (PISA), has said that the UK is an area of concern because our curriculum is too narrow and we do too much rote learning, so we’re not preparing students effectively. And in the education ‘pillar’ of the Economist index, we finished 20th out of 25 countries in preparing children for 21st-century knowledge and skills. That’s appalling. We’ve got to a point where we have to do something.

The countries that make sure their populations are educated in this way will achieve the most when it comes to developing AI and using it wisely. We really need to see how we make radical shifts in what we expect from our education system and what we want for our students and our population. It really does need to be a shift. The description of the fourth industrial revolution is not wrong.

It is a revolution and we need to prepare people for it.