The recently announced Online Safety Bill focuses on defining both illegal and ‘legal but harmful’ activities and specifies a nebulous ‘duty of care’ that platform providers must demonstrate should a user be subject to online harm.
There is much to unpick with this draft legislation, but throughout the document there is an implication that platforms should implement technical approaches to prevent harm.
The bill reflects the view that, because online harm takes place within the digital environment, responsibility should fall on digital providers to ensure it cannot take place.
If only it were that simple.
Let us take the example of ‘Zoombombing’ in the education sector during lockdown.
There is a view that Zoom should be able to prevent both unauthorised access to its meetings and the broadcasting of extreme material. It’s a paid-for service and users expect it to make meetings secure.
Zoom can do this, but not automatically.
Platforms like this provide effective access control and a range of in-session tools that can be used to manage and mute participants and, if necessary, eject them.
Session control can be placed solely in the hands of the host and a waiting room set up to ensure only those invited are allowed in. However, it requires knowledge on the part of the host to set up access control and manage the session. The platform cannot prevent a particular participant from misbehaving during a session.
We should recognise that, in the rush to move teaching online last year, there was little time or resource available to develop the digital skills academic staff needed to deliver online sessions.
There is, as we are often reminded, an assumption that users of digital technology have some implicit capacity to learn through osmosis. Anyone with a PIN on their mobile device will understand the need for access control on video conferencing platforms, right?!
We could consider a different scenario: a student is subject to persistent sexual harassment and hate speech on a Discord server that students have implemented to support interactions on their course.
The server is not hosted by the university, but the programme team is aware it has been set up and was, until this point, keen to encourage students to use digital platforms to interact. In this case, the student makes a complaint to the programme leader about the abuse, and asks them to intervene.
The programme leader might think that, because the university didn’t supply the platform, it’s not their problem. However, student welfare certainly is their concern.
They will need to consider whether appropriate routes were in place for disclosure and reporting, along with clear and transparent policies around sanctions for online abuse. It would be extremely unlikely that a technical solution could be found to stop this abuse.
Ransomware and phishing
There has been an increase in ransomware attacks on the education sector over the past few months, some triggered by phishing emails.
This puts pressure on sometimes poorly resourced cyber security managers, who struggle to shut down every phishing threat that lands in the inboxes of thousands of staff and students across campus networks.
However, phishing is just one method attackers use to gain credentials, which are then exploited through insecure remote access solutions. Login details could also have been obtained from previous data breaches, or from brute-force attacks that succeeded because of ineffective password policies, for example.
While there are technical applications that can help minimise the success of phishing attacks, such as multi-factor authentication, and tools to detect and filter out phishing emails, including free resources from the National Cyber Security Centre, these technical solutions often go hand-in-hand with a more human approach.
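To see why multi-factor authentication blunts a phishing attack, it helps to look at how a time-based one-time password (TOTP, RFC 6238) is generated. The sketch below is purely illustrative — the secret is the RFC test key, not anything tied to a real service — and real deployments rely on authenticator apps and server-side libraries rather than hand-rolled code.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA1)."""
    key = base64.b32decode(secret_b32)
    counter = (at if at is not None else int(time.time())) // step
    msg = struct.pack(">Q", counter)  # counter as big-endian 64-bit int
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test secret, at the fixed test time T=59 seconds:
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59))  # 287082
```

The point for phishing is the last line: a password harvested today is useless tomorrow without the current code, which changes every 30 seconds and never leaves the user’s device in reusable form.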
It is no surprise that many institutions are now sending out online training around ransomware prevention.
However, I question whether a half-hour online course is sufficient to convey the role the end user plays in cyber security and online harm. These are whole-institution issues requiring education and awareness across all stakeholders and leadership at the top levels of the organisation.
Technology can only ever serve as a tool to support the systems and processes that need to be in place to mitigate cyber risk in organisations and provide students and staff with the means to disclose abuse and gain support.
For instance, while web filtering will prevent access to illegal content on institutional networks, there are freedom of expression challenges to be met if similar tools are to be used to filter ‘legal but harmful’ content.
Tech as a tool
Monitoring tools can identify abuse taking place on networks, but they are by no means perfect. They generally trigger only on keywords or phrases and, if tuned too aggressively, will intervene too readily in innocent discourse.
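The keyword problem described above is easy to demonstrate. The sketch below is an invented, minimal example — the watch list and messages are made up, and real monitoring products are far more sophisticated — but it shows why context-free matching misfires in both directions.

```python
import re

# Invented watch list for illustration only.
WATCH_LIST = ["attack", "kill"]

def naive_flag(message):
    """Flag a message if any watched keyword appears as a substring."""
    lowered = message.lower()
    return [word for word in WATCH_LIST if word in lowered]

def word_boundary_flag(message):
    """Match whole words only, which trims some substring false positives."""
    lowered = message.lower()
    return [w for w in WATCH_LIST if re.search(r"\b" + re.escape(w) + r"\b", lowered)]

# An innocent lecture summary trips the filter on a medical term:
print(naive_flag("The seminar covered heart attack risk factors"))   # ['attack']
# A harmless joke is flagged by substring matching...
print(naive_flag("That joke absolutely killed me"))                  # ['kill']
# ...while word-boundary matching misses it entirely:
print(word_boundary_flag("That joke absolutely killed me"))          # []
```

Keywords carry no context: tighten the rules and innocent discourse is flagged; loosen them and genuine abuse slips through. Either way, a human still has to review what the tool surfaces.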
And while cyber security services, such as anti-virus software, firewalls and intrusion detection systems, will all help reduce the risk of cyber attacks, they will do little to prevent a member of staff handing over their login details via a phishing attack.
This is where thorough and regular compulsory security training for all staff and students comes in. Users are, after all, the first line of defence, and the greatest weakness.
Research that Prof Emma Bond and I conducted in 2020, which submitted Freedom of Information requests across the higher education sector relating to policies and practice around online harms, suggested that the majority of institutions do not have effective policies to address these issues, and even fewer have staff trained to recognise risks.
As higher education spaces become increasingly digital, technology alone cannot be relied upon to mitigate online risks across campus.
For more advice and information about online safety and cyber security, sign up for the Jisc security conference (9-11 November). Prof Andy Phippen will be speaking on day three of the event, which is free for Jisc members.