A pathway towards responsible, ethical AI could deliver a fairer system in education
Jisc has published a new report to help universities, colleges and research institutes think about the ethics of artificial intelligence (AI), and to combat “unfairness” or “unexpected effects” for students and staff in education and research.
Working with the British and Irish Law, Education and Technology Association (BILETA), Jisc's National Centre for AI has mapped a route towards that goal, taking smaller steps through more familiar ground.
Andrew Cormack, chief regulatory adviser at Jisc, said:
“The report has been published because there's a widespread desire to do AI ‘right’, but that is often presented in the form of lengthy ‘ethics’ requirements, detached from anything that institutions or individuals have previously experienced.
“Our pathway report aims to show that, in fact, we can get a long way towards that goal - in particular identifying ideas or proposals that would be likely to fail an ethics test - by using and discussing what we already know about educational and research processes and experiences.”
This rests on two key insights: first, that the intuitions and practice of the broad education community will usually guide us towards responsible and ethical actions and away from unethical ones; and second, that discussing more familiar questions can help us uncover those intuitions and practices.
One of the key aims of the pathway is to help individual institutions identify settings where it might be appropriate for them to use AI. Throughout, the term “AI” is used in a very broad sense: the pathway should help discussions about any idea or application that depends on data or algorithms.
Andrew Cormack added:
“The outcome which we hope for is institutions and their stakeholders being confident - and having a sound basis for that confidence - about their choices of where and how to use AI.”
The pathway discusses the need to consider a wide range of educational habits and experiences, including students who learn from books or have limited access to technology, and those who are keen adopters of technology.
To develop the report, Jisc and BILETA researched ethics codes, including those from the EU High-Level Expert Group and the UK government, and identified common features. Many of these, sometimes with a change in terminology, were already familiar from Jisc's work helping educational institutions make safe use of technology.
Jisc is currently setting up an early-adopters group to identify members using AI responsibly who can act as inspiration for positive change and help others learn from their experiences.
As part of the collaboration, members of the BILETA committee provided encouragement and feedback from the initial idea for the pathway through to reviewing the final paper.
Professor Abbe Brown, chair of BILETA, said:
“BILETA is delighted to collaborate with Jisc in this valuable contribution on this important issue, and is committed to continued leadership in law, education and technology.”
Following the pathway should help institutions identify and deliver applications where AI and humans can work effectively together to reduce unfairness and ethical concerns.
Read the full report: a pathway towards responsible, ethical AI (PDF)