"The Governor View" - Universities and AI

Artificial Intelligence (AI) is already a feature of higher education - from big data, predictive analytics and course management systems to virtual teaching and recruitment chatbots. 

Academics and professional services use systems that collect information about students, students use systems in their learning, and senior management teams use AI to help understand and steer their institutions. As part of that, policies and guidance have been developed on data governance, data protection and cybersecurity, as well as more forward-thinking strategies covering IT and digital.

But since the launch of ChatGPT in November 2022, higher education, along with other sectors, has been grappling with the implications of this new high-speed, sophisticated generative AI. Not only is it grabbing media headlines, it is also posing tricky questions about academic integrity and assurance, day-to-day operations and staff workload.

“There are clearly risks and opportunities,” said the governor of an alternative HE institution. “It is up to education to maximise the opportunities and minimise the risks. What we can’t do is un-invent these things. That is not a debate worth having. They are out there. Everyone can use them and they’re only going to get better.”

According to this governor, institutions need to think about the use of AI in three crucial areas – internal systems, how they teach and what they teach.

The first of these includes considering the extent to which AI should be deployed in the everyday business of the university. For instance, should it be used by staff to write reports, or instead of staff to write reports?

“Just as in other domains it will change the way we work,” he said. “I envisage it enhancing rather than replacing people. If it writes marketing copy, for instance, this will do the job of a junior marketing assistant so you might need fewer of them but there is still going to be a role in editing and finessing because you are not just going to take the output of AI and publish it.”

In the teaching space, AI provides opportunities to personalise learning more closely to individual student needs. This raises questions about the digital and IT capabilities of staff and students and has potential resource implications.

According to the governor of a new university, there is likely to be a plethora of tasks that AI can take on.

“In recruitment and marketing, data capture and data cleaning and analytics there will be things that could be done by machine,” she said. “But the next level, when you are asking questions of your data and considering how you are going to use it, that is where you need human input. In some ways, advances like ChatGPT mean we need more people with enhanced skills, insight and creativity.”

Potential shortcomings of the new technology also have to be understood, acknowledged and compensated for. Innovations need to be thoroughly user tested, cautioned one governor.

“If you are using recruitment chatbots, for instance, we need to make sure that the chat function responds appropriately to diversity,” she said. “It may assume a certain sort of student and that can be very dangerous. There needs to be thorough user testing and user voice needs to be involved in the design and the rolling out of interventions. The less you do of that the less useful it will be and could possibly have perverse effects such as discriminating against some sorts of students.”

A student governor made a similar point: “Some firms use AI to look over applications and weed out the poorer candidates and reduce the number of CVs that are read manually,” he said. “But perhaps the ‘weaker’ candidate is the widening participation candidate and exactly the person the university wants to target.”

The challenge for senior managers and the governing board, according to governors, is how to use AI productively to further the university’s purpose and the public purpose of education.

Large language model AI, such as ChatGPT, also has major implications for what universities teach. Many of the headlines devoted to AI have highlighted its risk to academic integrity and the prospect of students using AI to write assignments and passing them off as their own work. While cheating has always been a risk, the wholesale accessibility of free AI software, its speed and its ability to produce unique responses make it a very different prospect to essay mills and one where misconduct is much harder to detect. 

Many universities have accepted that generative AI will be used by students but have made clear that passing its work off as a student’s own will be treated as cheating. Some universities are taking measures to make this less likely, such as routinely selecting a random sample of students for what is essentially a viva voce.

“I do think we need to be measured about the cheating thing,” said a governor of a new university in London. “It’s not new: academics have for years been worried about plagiarism and students copying great tracts from books or cutting and pasting. And the tools to uncover AI generated text will become more sophisticated.”

A student governor said academics were already setting assessments that assume students will use generative AI, but that ask them to reference it and critically discuss it. He envisages that there will need to be some oversight of developments at board level, whether that be in setting strategic direction or as part of the new aspects of academic assurance that boards are taking on.

“I think governors will increasingly be discussing the risks that AI might pose, whether that is under a more general risk and opportunity analysis or part of a strategy aimed at designing flexible learning and new modes of teaching,” he said.

Governors generally are less concerned about the potential to cheat (none voiced the wish to reinstate traditional exams wholesale, for example) and more concerned about the need to teach students to use AI like ChatGPT effectively and responsibly.

“Students are going to need slightly different, or perhaps majorly different, sets of skills, depending on their subject,” said one governor. “A potential old-school approach is that we should just ban it because students can pass its work off as their own. I think that is the wrong approach because the reality is that when they go out into the world of work, these tools are going to be there.”

One governor likened the new software to the advent of Wikipedia: “The right approach is to say ‘yes, it’s a useful resource, but you’ve got to understand how it was created and what its limitations are and you have to use it with due care.’ Similar principles apply here.”

From a governance perspective, this could include developing strategic direction to underpin guidance to educators about setting work that uses AI software constructively.

“Faculties are going to help us find the answers but I think sharing good practice is a good idea, as is looking at other sectors and seeing how it works there,” said one governor.

A student governor from a selective university in the north of England pointed to a certain amount of uncertainty among students about what counts as “acceptable use” of generative AI.

“For international students, for instance, it can clean up the text and improve the writing,” he said. “Some are doing that but it is not totally clear if that is plagiarism or not. But once you graduate, you will be expected to use AI because it can improve the quality of work and your efficiency so why wouldn’t you use it? In a way, the question of integrity and malpractice goes deeper to questions about what are assessments for.”

Despite that uncertainty, he believes the majority of students are using generative AI on a day-to-day basis. A quick glance across the university library can reveal numerous students working with ChatGPT open on their screens.

The governor at a new university emphasises the work her institution is already doing on “authentic assessment” – which focuses on students using and applying knowledge and skills in real-life settings.

“I think there are ways around the challenges AI might pose to assessment and you have to think creatively about it and that’s a good thing,” she said. “There will need to be behaviour change and developments in academic practice, which happens all the time. I’m not minimising the impact of ChatGPT and things like it, but I think we need to keep it in perspective.”

To this end, governors need to be well informed about contemporary practice in teaching and assessment, she says.

At the moment ChatGPT is free, but it also has a paid tier. Some governors point to a future where more advanced iterations will have to be paid for. This raises questions of equality of access.

“Students who can pay for it might get better grades,” said one. “That is the kind of thing boards may have to consider: do universities then have to ensure access for all their students? It might not be next year but it could be five years down the line.”
