
Research’s AI future is nearly here – are we ready?

by Victoria Moody

The AI revolution is about to hit the research sector at scale. As we begin to explore the possibilities of AI in research and research management, it’s Jisc’s role to work with members, funders and stakeholders to develop ethical and practical ways of adapting it.


There is vast potential for AI in the research sector. We have the opportunity to shape our approach to deliver maximum value to the research community, if we begin now.

The Department for Science, Innovation and Technology recently announced the UK’s AI Foundation Model Taskforce. In the higher education sector, the Russell Group of universities has developed a new set of principles, citing work by Jisc’s national centre for AI (NCAI), to help universities ensure students and staff are ‘AI literate,’ which includes a focus on upholding academic rigour and integrity.

Jisc welcomes these developments in a rapidly changing research landscape.

There are also research programmes dedicated to AI ethics. The health sector is galvanising around the NIHR AI e-learning course for researchers, and research teams are taking the lead on developing effective AI approaches and standards.

AI technologies present a huge opportunity. The challenge for the sector is to manage their development while preserving trust in research and research integrity. Jisc will act as a trusted partner to support and guide the development of ethical AI foundation models for research and research management.

Ethical AI

The potential for AI to reduce administrative research tasks is huge, at a time when the Government has challenged the sector to reduce its bureaucracy.

AI applications could take on a vast range of research management tasks. Literature reviews, progressing grant applications, data management, curation, access and analysis, underpinning support for research assessment, demonstrating impact, and financial reporting could all be transformed.

Digital laboratory interfaces could produce multi-faceted outputs directly from an experiment in STEM research, or from highly complex digital twin technologies in the humanities. Ethical AI applications could be trained in secure settings with accompanying monitoring algorithms to verify, review and sustainably reproduce the work across many disciplines.

But the opportunities don’t stop there: ethical AI models for research could drive research standards, be cited and become sought-after research outputs in their own right, given the appropriate frameworks for their production and management.

This could create significant new markets for ‘dimensionally ethical’ AI that helps projects comply with the five dimensions of research ethics. We envisage, and aim to nurture, the development of these new markets.

Finding solutions to pressing societal challenges requires new and cross-disciplinary research approaches. If we begin now, we can establish ethical AI conditions and the frameworks to support and grow them. This will shape our approach to AI so it can deliver maximum impact in addressing complex societal and innovation challenges.

But the research sector must first build ethical foundation models that set expectations and establish the standards and conditions needed to manage them. We cannot wait until the market has developed. Most important are applications dedicated to supporting research ethics and integrity.

AI needs data

Research is a highly visible activity. Data is widely available on research collaborations, publications, open peer review practice, citations, participation on panels and at conferences, grant awards and doctoral supervision, as well as through advisory groups and research assessment processes. The researcher is at the heart of that data landscape.

The UK Government’s Research and Development (R&D) People and Culture Strategy places significant emphasis on the need for data to be brought together to support its ambitions.

However, there are ethical concerns. Primary among them is compliance with the General Data Protection Regulation (GDPR), but there are also issues of trust in terms of system design and the AI that will range over research management data.

Integrity and ethics need to be designed into any use of technology in this area if the research community is to be convinced of its sound application.

Importantly, human-led approaches and ‘algorithms as infrastructure’ can be alert to unintended consequences. They can introduce tests and balances to train and demonstrate ethical approaches.

Who better than researchers, research managers and research technical professionals to cultivate the research management AI landscape?

Getting ahead of the model

We are working with our research and innovation sector strategy forum, a group of pro vice-chancellors, to help shape our next steps. We are also exploring opportunities with Jisc’s national centre for AI to take our approach forward in discussion with the sector.

Research was historically built on the logistics of creating millions of paper outputs and the physical infrastructure needed to facilitate this, and it has grown through centuries of shaping how we do it and how we manage it.

AI has begun to change the world in exciting and unexpected ways, and it will soon transform research.

About the author

Victoria Moody
Director, higher education and research

I focus on the design, delivery and implementation of Jisc’s higher education and research strategic themes, supporting Jisc to deliver sustainable support and services across higher education and research, underpinned by diverse revenue streams and partnerships. My role involves senior engagement across Jisc, and with higher education, research and professional leaders in the UK and internationally. I’m also co-investigator and deputy director of the UK Data Service.