When we look at technology in education, or edtech, there is a risk that we fall victim to magical thinking and tech solutionism.
From smart whiteboards to kids coding, it can feel like we are leaping into action because we can, without knowing why or what we’re looking to achieve.
As HEPI director Nick Hillman says on the HEPI blog, nowhere is this more true than with hot-button technologies like artificial intelligence (AI), blockchain and virtual reality.
For educators it can be instructive to ask what the real value of a new product or service is to them. I would venture to suggest that this is very rarely in the technology itself. For example, is the fact that it “uses advanced AI techniques”, or “is built on the blockchain with smart contracts”, at all relevant to the outcomes that it delivers? I don’t think so.
We should rightly be suspicious of products whose key selling point is the technology and not the outcomes. For me, the value lies in the tangible benefits, such as improved wellbeing, mental health and learning outcomes. And if it’s not heretical to say this, perhaps we should also expect our edtech to spark a little joy too?
At Jisc we work with our members and customers in a co-design process to arrive at a result that everyone can feel ownership of, and which meets a genuine need. Where we use technologies like machine learning or augmented reality, it’s as a means to an end, rather than an end in itself. It might seem like a little thing – a change of emphasis, perhaps – but really this is what separates user-led design from the breathless hype of tech solutionism.
Turing’s red flag
During the recent All-Party Parliamentary Group on Data Analytics enquiry into Data Ethics, which Jisc supported, a key concern was the potential lack of transparency about when a decision is made by a machine. A possible remedy suggested in the resulting report was a user-friendly means to show when and how a decision is taken by machine intelligence – such as a kitemark or the “Turing Red Flag” proposed by AI professor Toby Walsh.
These issues of data ethics and integrity are far more important than any one technology like AI or blockchain. That’s why we have worked with colleges and universities to develop a learning analytics code of practice, which sets down some guiding principles around the ethical use of personal data. What’s crucial here is that data is used to assist rather than automate decision making, for example in helping a personal tutor understand whether a student they are supporting looks like they are at risk of disengaging.
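To make the “assist rather than automate” distinction concrete, here is a minimal sketch of what such a check might look like in code. The field names, thresholds and weighting are entirely illustrative assumptions, not Jisc’s actual learning analytics model; the point is that the function only raises a flag for a human tutor to review, and never takes action on the student itself.

```python
from dataclasses import dataclass

@dataclass
class EngagementRecord:
    # Weekly signals a learning analytics service might collect.
    # These fields and the thresholds below are hypothetical examples.
    student_id: str
    attendance_rate: float    # fraction of sessions attended, 0.0 to 1.0
    vle_logins: int           # logins to the virtual learning environment
    submissions_missed: int   # assignments not handed in

def flag_for_tutor(record: EngagementRecord) -> bool:
    """Return True if a personal tutor should review this student.

    Deliberately a flag, not an action: the decision about whether
    and how to intervene stays with a human.
    """
    warning_signs = [
        record.attendance_rate < 0.5,
        record.vle_logins == 0,
        record.submissions_missed >= 2,
    ]
    # Two or more warning signs: surface the student to their tutor.
    return sum(warning_signs) >= 2

# Usage: the tutor sees the underlying signals and decides what to do.
at_risk = flag_for_tutor(
    EngagementRecord("s001", attendance_rate=0.4,
                     vle_logins=0, submissions_missed=1)
)
```

Keeping the output to a reviewable flag, with the contributing signals visible, is one way a system can stay on the “assist” side of the line the code of practice draws.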
Technologies with names like machine learning and robotic process automation tend to conjure up images of humanoid robots taking over from human teachers and lecturers – exciting and terrifying in equal measure. All too often, though, the reality has been something much more mundane, like swapping authentic assessments for multiple-choice questions. Little in the way of joy to behold here!
As part of Jisc’s latest round of co-design, we are specifically looking into key challenges that have emerged as part of our dialogue with the sector about Education 4.0 - how technology can transform teaching and learning for the better. For example, perhaps we can use AI to help determine when a student has achieved mastery of the subject material in a way that reduces workload and makes the whole process of assessment more resistant to essay mills and plagiarism.
In all of this, the technology itself is just another tool in the edtech toolbox – the real issue is how it’s employed, and ensuring that the data that drives it is used responsibly.
Find out more about co-design at Jisc and how you can get involved – we’d love to hear from you.
Watch our video of a virtual student called Natalie, where we've brought to life what the student experience could look like under Education 4.0.