Hannah Fry, author of Hello World, an exploration of how we live our lives in the age of artificial intelligence, considers the kind of future we want – and how education can help to get us there. Hannah will be speaking at Networkshop47, 9-11 April 2019, in Nottingham.
In my book, Hello World, I tell the story of a man who trusts the algorithm in his sat nav to such an extent that he almost drives his car over a cliff. For me that sums up the really rather strange relationship we currently have with algorithms.
On the one hand, we enthusiastically adopt them and trust them far beyond what we reasonably should – I have many stories, as I’m sure we all do, of people who blindly do what a computer tells them without ever applying rational thought.
On the other hand, as soon as an algorithm or technology is shown to be flawed, we have a habit of dismissing it as complete junk. I certainly swear at my Alexa regularly. The absolutely phenomenal achievement of having a voice recognition system in my own home that can turn lights on is lost in the incredible irritation I feel when it slightly mishears the name of the lighting system.
An eye-opener in Berlin
The implications of algorithms for humans, however, go beyond having to reach over and turn our own lights on.
I discovered this in Berlin a few years ago when I gave a talk about a research project I had been working on with the Metropolitan Police, looking at the London riots of 2011.
The premise was that data and algorithms could be used to pick up on patterns in the way rioters behave, with the intention that if these patterns were detected in the future then they could be stopped much sooner – the police would be able to pre-empt the way that people were about to behave.
I gave a very optimistic overview of this research on stage, enthusiastically declaring to the audience how wonderful it was that we could use data and algorithms to, essentially, control an entire city of people.
I naively didn't stop to think that if there was one place where citizens were likely to be quite cautious about surveillance, police powers and state control, it was Berlin.
The resulting Q&A was an absolute blood bath.
That was the moment I realised that, even as a theoretical mathematician or computer scientist, I am not working in isolation and cannot simply ignore the ethical implications of my work.
If I am building algorithms, I have to really think about how they are going to be used – both now and in the future – and have to put them in the context of the humans who will be deploying them.
Ultimately, I think that we need to be much more realistic about the limitations of our technology, but also more cognisant of the benefits.
Education and educators have a crucial role to play in recognising this and in ensuring that the human, rather than the technology, is at the centre. I’d like to see these conversations come out much more into the open.
But there’s a problem with the word algorithm – people hate it. It tends to make them either fall asleep or run away screaming. This needs to change, given how pervasive and influential algorithms now are in dictating the decisions that are made about us and by us, and how there are important choices to be made about how we live with, and regulate, this technology.
What kind of future do we want?
It is very easy to imagine that there are clear-cut answers to these issues – that the future we have to look forward to is either a dystopia, where evil AI rules us having taken all our jobs, or an optimistic utopia where the technology delivers nothing but benefits. There are people shouting very loudly on both sides of that argument.
What really surprised me as I researched Hello World is that there are no clear-cut answers in any of these areas. It's all about trade-offs and sitting down and deciding what we actually want the world to look like.
We can't hide from those questions much longer. There needs to be a national debate about what we want our future to look like, and how education and educators can help deliver that.
Remember – the future doesn’t happen to us, we create it.
This blog post is based on an interview with Hannah, which will appear in the Networkshop47 magazine, available to all delegates.
Hannah is our opening plenary speaker at Networkshop47 on Tuesday 9 April at 14:00.