"The Governor View" - Generative AI

Generative artificial intelligence (AI), which is capable of processing vast amounts of information to generate human-like responses, as well as images and videos, almost instantaneously, has implications across a range of areas for which governors have oversight - from strategic planning, academic assurance and student experience to data governance and ethics, operational efficiencies, and risk management.

Since the launch of the latest version of OpenAI’s ChatGPT last autumn, universities have been trying to get to grips with the potential impact of the fast-moving and, some would argue, revolutionary technology on core university business.

Over that time, ChatGPT has signed up 180 million users worldwide and student usage of generative AI in the UK has become “normalised”. A survey by the Higher Education Policy Institute (HEPI) published earlier this month revealed that most students were using AI tools to support their studies, with 53 per cent using the software to help prepare assessments.

A key consideration for governors, then, has been how their university can best integrate generative AI into teaching, learning and assessment and, while doing so, safeguard academic integrity and reputation.

Institutions have developed policies and guidance that attempt to advise students on the acceptable use of large language models and how to avoid academic misconduct. The HEPI report suggests many have been reasonably successful: two thirds of respondents thought their institution had a ‘clear’ policy on AI use, although only a fifth of students were satisfied with the support they had received on AI.

Even with these policies, though, some governors are concerned that there are too many “grey areas” which are potentially problematic for staff and students.

“We know that people are using it but not really the extent of that use and where exactly the line is,” said a staff governor at a university in Scotland. “A student may be using it for the first draft and then altering it and saying, ‘well that’s ok, that’s not misconduct’. But our academic integrity policy says that the flow of ideas should be your own. We need to think about the types of assignments we set and what integrity means. What does it mean to use it ‘as a study tool’ and what does it mean when it is more than a tool? Where do we draw that line?”

One way of ensuring assessment integrity is through in-person, invigilated exams. Some universities are returning to traditional papers after the move to online assessments during the pandemic. Other institutions discarded exams as the main method of assessment long ago and would regard returning to them as a retrograde step. Guidance from the Quality Assurance Agency says that while unseen, in-person, invigilated exams do protect integrity, the method “would reverse much recent progress around accessibility” and is “not authentic”.

The governor of a high-tariff university thinks such a move across the sector is unlikely.

“We’ve gone to continuous assessments, writing at home, that kind of thing,” she said. “Generative AI presents a challenge because staff have to be able to confirm learning outcomes and say that ‘this person definitely knows how to critically appraise or how to formulate a research question’. Although exams might be staring us in the face as a solution to authenticity, are we going to swing back and say actually ‘we need you in the room, writing the essay without electronic devices to make sure you know this’? I don’t think so.”

She suggests that institutions should be willing to have “open and honest” conversations about the implications of AI and that students need to be part of those debates.  

At a new university in the north of England, board level discussions about how staff were going to deal with the advent of AI were held after ChatGPT had become headline news.

“We had a number of reports, through the academic board, about the requirement for students who use generative AI to reference it in their work and we’ve had a couple of updates on that,” said the chair. “It seems a sensible policy; AI is there, students are going to use it, and will use it in their careers, and it is much better to be upfront about it and have it properly referenced.”

Despite the drawing up of these kinds of policies, confusion abounds, says a student governor at a post-92 university in the Midlands. He pinpoints two areas where attention is needed: how to embed AI to make courses more future-proof, and the ethical use of AI. On the first, there is evidence that more needs to be done. The HEPI report found that only a fifth of students were satisfied with the support they had received on AI, and it recommends that institutions teach students how to use it effectively and provide AI tools for those who cannot afford them.

“We know AI is the future, so the big question is how programmes can integrate it,” said the student governor. 

On the other question - what is and what isn’t acceptable use of AI within student study and assessment - he observes that “everyone is grappling with where the boundary is”.

He added: “What we understand is that you shouldn’t be using AI to write any of your work but you can use it for research. But it’s a confusing and mixed picture across the sector. Academics themselves have different views on what is good use and what is bad use.”

The question of legitimate usage may apply as much to staff as to students. One governor cited a colleague who had used ChatGPT to help set a student assessment by “inputting the papers they wanted students to review and out popped the questions”.

“In the same conversation we are having about ‘gosh how are we going to get students to declare in taught modules and in research what they are doing’, we were talking about how it can be used to cut corners in our own work,” she said.

Rather than overarching, very general AI guidance, this governor wants to see specific guidelines covering, for example, particular subjects and postgraduate research: “Current guidance around use of AI is for teaching modules, but what does it mean for, say, writing abstracts for conferences or proposals for research projects? Is it legitimate or not?” she says. “I sit on a new research integrity sub-committee group and our first discussion is going to be about AI.”

Governors were supportive of training to help staff become “AI-literate”, a move outlined in the Russell Group’s principles on the use of generative AI tools in education.

As well as understanding the potential opportunities, staff need to be aware of the limitations and ethical issues associated with the use of AI tools, such as privacy and data considerations, the potential for bias and inaccuracy, and the misinterpretation of information.

Knowing when to apply AI tools and when not to is important. They may have the potential to lighten administrative burdens, but in forecasting, for instance, recent research has found human judgement to be more accurate than generative AI.

One governor suspected that staff were not independently using generative AI in their day-to-day work, even in situations where it could be useful or increase efficiency, because of a general stigma stemming from connotations of bad academic practice and “fake” outputs.

To improve knowledge, one board chair is enlisting the expertise of AI specialists at the university. 

“We’ve just won a big research grant for postdoctoral work on wider community understanding and use of AI and we are waiting as a board to have some interaction there,” she says.

External governors who may have experience of AI applications and software in other organisations and industries could also be a valuable resource.

“There is a richness of discussion that could happen at governing board where we could hear about companies using the various versions of AI and employers’ experiences across sectors to help us understand how we might adapt,” said a governor in Scotland.

Many universities are in the midst of rewriting digital strategies to replace the now-redundant visions that applied pre-Covid. In doing so, boards have to consider the processes, practices and procedures that need to be in place to ensure institutions can be flexible enough to harness the potential benefits of generative AI technology.

“Up to now governors have been more concerned about usage and academic integrity, as there are so many governance issues connected to that, such as reputation and regulation. But I think we now need to start thinking about the business context,” said one. “We are facing change like we have never seen, and we need to have the flexibility to adapt quickly.”

As the technology develops – “it’s a journey, not a destination”, as one governor puts it – so policies, guidance and strategies should develop. More input from organisations such as the Committee of University Chairs (CUC) on generative AI would be timely and welcome, suggested one governor.

“We are only beginning to get our heads round the pitfalls and the advantages of these advances,” she said. “I just feel we are quite soon going to be having quite different conversations about this.”
