For artificial intelligence (AI) to work best, it needs to be complemented by human expertise and ethical judgement.
We’re currently at a moment of peak expectation with AI, but there’s a real risk that these expectations will be dashed should we subsequently discover that this technology isn’t capable of delivering all that has been promised.
This has happened with AI before, first in the seventies and again in the eighties, and we could face a third so-called ‘AI winter’ unless we act to tackle the risks that are now beginning to be recognised.
A bold first step toward a more holistic approach to data is the government’s recently announced National Data Strategy, which pledges to hire a new government chief data officer to oversee the Government Digital Service, remove barriers to cross-border data flows, and overhaul legacy IT systems.
While I see huge potential in the continuing development of more and more sophisticated AI algorithms and methods, we urgently need to attend to the practical problems of embedding them into the real world.
Evolving technology for evolving problems
AI systems are trained on data, and over time that data, and hence the systems trained on it, may no longer be an accurate reflection of the tasks they were created to perform.
For example, AI systems trained to extract information from sources such as social media will inevitably lose their accuracy over time as language use changes. It is also well known that training AI systems on historical data can introduce unwanted biases into the decisions they make.
The issue of how to ensure that the performance of AI systems is maintained at an acceptable level becomes especially important when these systems are deployed outside the research environment and applied in our daily lives.
What is needed is some kind of regulatory oversight to ensure the accuracy and safety of AI systems used in both the public and private sectors. Processes need to be put in place to routinely test the performance of these systems and certify that they are, and remain, fit for purpose.
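To make this concrete, here is a minimal sketch (my own illustration, not a process proposed in the report) of what such a routine check might look like in code: a deployed model is periodically re-evaluated on a freshly labelled sample of recent data and flagged for review if its accuracy falls below an agreed threshold. The model interface, function name and threshold value are assumptions made purely for illustration.

```python
# A minimal sketch of the kind of routine performance check described above:
# periodically re-evaluate a deployed model on a fresh, labelled sample of
# recent data and flag it for review if accuracy drops below an agreed
# 'fit for purpose' threshold. The model interface (model.predict) and the
# 0.90 threshold are illustrative assumptions, not a prescribed standard.
from sklearn.metrics import accuracy_score

CERTIFICATION_THRESHOLD = 0.90  # hypothetical acceptable accuracy level


def periodic_performance_check(model, recent_inputs, recent_labels):
    """Compare accuracy on recent data against the certified threshold."""
    predictions = model.predict(recent_inputs)
    accuracy = accuracy_score(recent_labels, predictions)
    if accuracy < CERTIFICATION_THRESHOLD:
        # In practice this would trigger retraining, recalibration or
        # withdrawal of the system pending re-certification.
        print(f"ALERT: accuracy {accuracy:.2f} is below the certified "
              f"threshold of {CERTIFICATION_THRESHOLD:.2f}; review required.")
    else:
        print(f"OK: accuracy {accuracy:.2f} meets the certified threshold.")
    return accuracy
```

In a regulatory setting, the code itself is the easy part; the harder questions are who sets the threshold, who supplies the fresh test data, and who audits the results.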
This is not a novel idea: regulatory oversight is already applied in a wide range of contexts where public safety would otherwise be at risk.
Responsible research and innovation
When it comes to deploying AI in the real world, we have to take into account its effects and potential impacts on individuals and on society as a whole. The concept of ‘Responsible Research and Innovation’ is about having the foresight to anticipate harmful effects and designing systems so as to minimise them.
It seems surprising that we haven’t already made provision for training future AI developers and researchers in how to apply this to their work, given the potential impact AI may have and how detrimental some of its consequences could be without proper oversight.
The need to review the ‘current post-16 curriculum to ensure all pupils receive a grounding in basic digital, quantitative and ethical skills necessary to ensure the effective and appropriate use of AI’ is one of the recommendations of a report I have authored in collaboration with the independent think tank Demos, with support from Jisc.
As an individual citizen I worry about the current lack of oversight and control, but also as a researcher, because I don’t think we pay enough attention to how AI is increasingly being used to influence and shape our experience of the world we live in. There is evidence that the public is also becoming more sceptical about the benefits of AI.
Confidence is key
We recently conducted a survey[1] to gauge people’s trust in the security and privacy of the Internet of Things and the ‘smart home’ in particular. We found that many people are beginning to question whether these technologies, of which AI is an increasingly important component, are trustworthy. Unless businesses take these concerns seriously, this distrust could become a real threat to commercial success.
More broadly, the private sector needs to recognise its responsibility for delivering AI products and services which people can be confident will not expose them to unwanted risks and undesirable impacts. The history of technological innovation tells us that government intervention may be necessary if this is to happen in a timely and effective way.
Rob Procter is co-author of the report Research 4.0: Research in the Age of Automation, delivered by the independent think tank Demos and supported by Jisc. The report seeks to understand what impact AI is having on the UK’s research sector and what the implications are for its future.
- [1] Trust in the smart home: Findings from a nationally representative survey in the UK, PLOS ONE: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0231615