Inform feature
On-screen analytics at Salford City College
©Jisc and Matt Lincoln

Learning analytics: ditch the humans, leave it to the machine - a Digifest debate

Richard Palmer

Computers don’t turn up to work hungover, stressed or loaded with unconscious biases, so why shouldn’t they be used for routine interventions with students? They can deal with sensitive situations as well as humans, if not better, argues Digifest debate speaker Richard Palmer.

Learning analytics is going to become ubiquitous in UK education – and with good reason. Tracking, in near-real time, individual student engagement, attainment and progression has been shown to improve the educational experience for students, leading to better grades and higher retention rates.

Learning analytics provides institutions with huge amounts of data, but the crucial point is that it is actionable data. Institutions can use it to predict which students may need support or be at risk of withdrawal, and how students can best be served. It is data that can lead to interventions.

Learning analytics and interventions

An intervention could take a number of forms. It could be sending an email or a text message to a student saying, “it doesn't look like you attended your lectures all week – is everything ok?” or “it doesn't look like you handed in your most recent piece of coursework on time – is there anything we can do to help?”
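To make that concrete, here is a minimal sketch in Python of such a rules-based trigger. The record fields, thresholds and messages are hypothetical illustrations, not drawn from any particular learning analytics product:

# A minimal sketch of a rules-based intervention trigger. All field names
# and messages are hypothetical, purely for illustration.

from typing import Optional

def intervention_message(record: dict) -> Optional[str]:
    """Return an intervention message for a student record, or None."""
    if record["lectures_attended_this_week"] == 0:
        return ("It doesn't look like you attended your lectures all week "
                "- is everything ok?")
    if not record["latest_coursework_on_time"]:
        return ("It doesn't look like you handed in your most recent piece "
                "of coursework on time - is there anything we can do to help?")
    return None  # engagement looks fine; no message needed

print(intervention_message({"lectures_attended_this_week": 0,
                            "latest_coursework_on_time": True}))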

More advanced systems offer advice based on prescriptive rather than predictive analytics. So, for example, if a learning analytics processor notices that a student's written assessments score lower than their other objective measures would suggest, even though they are doing the work and clearly have the ability, it might ask them if they know about the help available through academic writing classes. There's a really broad range of what an intervention might be.
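A sketch of that prescriptive rule, again under assumed inputs – the averaged marks and the 15-point gap that triggers the advice are illustrative assumptions:

# A sketch of the prescriptive rule described above: if written-assessment
# marks sit well below a student's marks on other objective criteria,
# suggest academic writing support. The 15-point gap is an assumption.

from typing import Optional

def writing_support_advice(written_avg: float, other_avg: float,
                           gap: float = 15.0) -> Optional[str]:
    """Suggest writing classes when written work underperforms other work."""
    if other_avg - written_avg >= gap:
        return ("Your written assessments score noticeably lower than your "
                "other work - did you know about the help available through "
                "academic writing classes?")
    return None

print(writing_support_advice(written_avg=52.0, other_avg=71.0))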

The trouble with humans...

Should the interventions prompted by learning analytics always be mediated by a human? No.

First of all, humans have a long history of believing that when certain things have always been done in one way, they should stay that way, far beyond the point where that remains necessary.

Look at the Luddite rebellions: people once believed it should always be a human being who stretched wool over looms, and now everyone agrees that was an outdated idea. So deciding that something needs to be done by a human simply because it always has been done by a human seems, at best, misguided.

Secondly, people object that the technology isn't good enough. That may, possibly, be the case right now, but it is unlikely to be the case in the future. How difficult is it to intervene with a student identified as at risk by a learning analytics processor? Is it harder than driving a car, which computers already do better than us? Is it harder than beating a world champion at poker, which we now know computers can do? Or playing chess, or landing things on comets, all of which computers do better than people? Are we saying that it's just this single aspect of human experience that is unique? That seems unlikely.

Technologies will improve. Learning analytics will become more advanced. The data that we hold about our students will become more predictive, the predictions we make will be better, and at some point institutions will decide where their cost-benefit line is and whether everything really does have to be human-mediated. I have no doubt that some universities will adopt at least partial automation of certain interventions in the not-too-distant future.

Thirdly, how good do we actually think people are? Certainly, human beings can empathise and pick up on non-verbal or even non-data-related signals from other people, but when was the last time a computer turned up to work hungover? Or stressed or worried about something – or just didn't turn up at all?

Computers aren’t intrinsically prejudiced against people of different genders, races or sexualities. According to the Harvard unconscious bias survey, 82% of people have some unconscious bias, either pro or anti black or white people; only 18% show little or no preference. And 85% have unconscious biases around body size, preferring people who are fatter or thinner. Will a computer ever be better than the perfect person? Maybe, maybe not. But, let's face it, people aren't perfect.

The downsides of machine interventions are no worse than humans doing it badly – and humans are pretty good at doing things badly. We worry about computers sending insensitively worded emails and making inappropriate interventions, but we all know human beings who are poor communicators and who are just as capable, if not more so, of being insensitive.

There's certainly a risk of sending too many emails or texts and diluting the message but that's easily fixed by appropriate development and testing. And with a computer, if you programme it properly, it does what you tell it. In contrast, no matter how good you are at writing policies, humans don't always follow them.
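One hedged illustration of the kind of development that fixes over-messaging: a simple per-student throttle. The one-message-per-seven-days window is an assumed policy, purely for illustration:

# A per-student throttle to avoid "too many emails diluting the message".
# The seven-day window is an assumed policy, not a documented recommendation.

from datetime import datetime, timedelta

last_sent = {}  # student id -> time of last intervention message

def may_send(student_id, now, window=timedelta(days=7)):
    """Allow at most one intervention message per student per window."""
    previous = last_sent.get(student_id)
    if previous is not None and now - previous < window:
        return False  # too soon; suppress to avoid diluting the message
    last_sent[student_id] = now
    return True

print(may_send("s123", datetime(2017, 3, 1)))  # True - first message
print(may_send("s123", datetime(2017, 3, 3)))  # False - within the window

Unlike a policy document, a rule like this is enforced every single time it runs – which is the point being made above about computers doing what you tell them.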

Human intervention?

The point at which a human intervenes depends entirely on how good the system is. If an institution sends an email to a student saying “we've noticed you haven't turned up all week – is everything ok?” and the student responds by email, a computer with natural language processing is fully capable of understanding the response.

If the student replies along the lines of “I was busy / I was poorly / I'm all right now, I'll be in on Monday” then there is no real need for human interaction – the computer can deal with it. But, at the same time, a computer can be programmed to know what its limitations are. So if it doesn't understand the response or sees something it is not pre-programmed to deal with in the response, such as a mention of bereavement or money worries, then that's the point to hand something on to a person.
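A deliberately simple sketch of that hand-off logic. A real system would use proper natural language processing; the keyword lists here are illustrative assumptions only:

# A simple sketch of the "know your limitations" hand-off described above.
# The keyword lists are illustrative assumptions, not a real NLP model.

REASSURING = ("busy", "poorly", "all right", "be in on monday")
ESCALATE = ("bereavement", "funeral", "money", "debt")

def triage(reply):
    """Decide whether a student's reply can be closed automatically."""
    text = reply.lower()
    if any(term in text for term in ESCALATE):
        return "hand to a person"     # sensitive topic - escalate
    if any(term in text for term in REASSURING):
        return "close automatically"  # routine reassurance - no human needed
    return "hand to a person"         # not understood - default to a human

print(triage("I was poorly but I'm all right now, I'll be in on Monday"))
print(triage("Sorry, there has been a bereavement in my family"))

Note that the default branch hands anything unrecognised to a person: erring on the side of human judgment is the safe failure mode the argument depends on.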

These systems can be programmed carefully and tested properly to know their limitations and can – and will – expand their capabilities over time.

I'm not saying we're there now, but if you think this is a sacred cow, think about the things you do today that were science fiction ten years ago.

Digifest 2017 - join the debate


This is the fourth in a series of features on topics that will be covered at this year's Digifest, which takes place on 14-15 March 2017.  

Richard Palmer will be on the panel for our debate, “learning analytics interventions should always be mediated by a human being”, which takes place on the morning of day two of Digifest. Full details of all this year's sessions can be found in the Digifest 2017 programme.

Join the conversation on Twitter using #digifest17.

The views expressed by contributors to Jisc Inform are theirs alone and not necessarily those of Jisc.
