From concerns around student wellbeing to reports of discrimination, the higher education (HE) sector is under fire - and the urgency with which institutions address these issues may define the next few years in education.
But technology has a key role to play, so AI must be part of the conversation. When considering racial discrimination on campus, for example, it's important to look at the evidence that some AI algorithms are themselves racially discriminatory.
How can institutions know whether products are potentially discriminatory before they start using them? How can universities find out if products have been tested on a truly diverse population? And what are these products’ limitations? Problems can arise from using AI in contexts it wasn’t designed for.
Jisc is collaborating with universities, colleges, start-ups, technology companies and experts in education to build a new National Centre for AI in Tertiary Education, which will address these questions.
The centre will dig deep, asking whether products are effective and ethical. We’re working with the Institute for Ethical AI, adapting their framework for UK education environments - and every tool tested by the centre will have to meet those standards. Only if they fit with a culture of teaching and learning will they be recommended.
Fairness and trust
There are many hurdles to overcome. Broadly, I see a lack of trust in AI at the moment - especially when coupled with a suspected lack of regulation. There’s a perception that AI threatens to replace important and highly valued human elements of education with inferior robot experiences.
Then there are issues of data and transparency. Are we being spied on and monitored by AI? Is our data being used without our knowledge?
Another common concern is whether AI is making decisions we humans don't understand and can't appeal. Is it 'deciding' whether a student is doing 'well' or 'badly'? How has it come to that conclusion? How can we challenge it? And who has access to that information – peers, teachers, potential employers?
Finally, we need to ask, when an algorithm is making judgements, assessments or decisions, has it been tested on a diverse enough group to enable it to do that job properly and fairly? How does it cope with different approaches and learning styles, or with different levels of access to technology?
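To make that last question concrete, one simple check an evaluator can run is to compare how often a tool flags students from different demographic groups. The sketch below is purely illustrative – the data, group labels and the idea that the tool outputs a binary "flag" are all invented assumptions, not a description of any product the centre has tested. It computes the gap in flag rates between groups, a basic demographic-parity check.

```python
from collections import defaultdict

def flag_rate_by_group(flags, groups):
    """Fraction of students flagged by the tool, per demographic group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [flagged, total]
    for flag, group in zip(flags, groups):
        counts[group][0] += flag
        counts[group][1] += 1
    return {g: flagged / total for g, (flagged, total) in counts.items()}

def parity_gap(flags, groups):
    """Largest difference in flag rate between any two groups."""
    rates = flag_rate_by_group(flags, groups)
    return max(rates.values()) - min(rates.values())

# Invented example: 1 = student flagged as 'struggling' by a hypothetical tool
flags  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(parity_gap(flags, groups))  # group A: 0.75, group B: 0.25 -> gap 0.5
```

A large gap doesn't prove discrimination on its own – groups may genuinely differ – but it is exactly the kind of red flag that should trigger the deeper questions above before a product is adopted.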
No 'quick fix'
To answer all these questions directly, the centre runs pilots, looking for products that teachers like using and say save them time. We want to know that the AI tools we recommend augment or release human effort, rather than replace it. We want to see that student satisfaction goes up as a result of using an AI product – or that attainment rises, or that drop-out rates fall. We need to see data used in an ethical way, enabling institutions to take responsibility for their decision to use AI, and allowing them to give students control, as and where appropriate. We need to see that the humans engaged in a pilot understand the decisions the AI tool makes and can appeal them, with a clear process in place – and that the product is fair and balanced. This is about delivering tangible, ethical benefits to help the sector move forward.
It isn’t a quick fix, though. Even armed with a list of approved AI products, institutions will have to ask their own questions about what’s right for their students and their environment. That will be tough, especially as universities grapple with their responses to Covid in terms of teaching, learning, assessment and delivery of education. Sharing experiences will help.
Ethical in letter and spirit
When adopting AI in education, culture is often the hardest part. A university could follow ethics guidelines to the letter yet still fail to implement AI in a way that its students and staff are happy with. AI is there to augment the human experience, not replace it – and understanding that meaningfully means looking at both the letter and the spirit of any set of guidelines, and framing them around each unique institution.
Right now, the education sector faces huge challenges. Many universities are making significant changes in delivery and approach while also grappling with issues around discrimination and inequality. AI is a useful tool which, if used correctly, could form part of a solution. Let’s pull together and welcome progress. If we don’t, we risk seeing others – be they other countries, start-ups or corporates – reaping the greatest benefits while we stand on the sidelines.