‘An existential crisis’: Can universities survive ChatGPT and AI?
Students are using AI to cheat and professors are struggling to keep up. If an AI can do all the research and writing, what’s the point of a degree?
It was September 2023 when solicitor Rhys Palmer received his first call from a panicked university student accused of using an artificial intelligence chatbot to cheat on their work. It was the first of many.
Less than a year earlier, OpenAI had launched ChatGPT, and students were already using it to help summarise journal articles and books, and also to write essays.
“From the first call, I immediately thought this was going to be a big issue,” Palmer says from his office in Cardiff, Wales, where he specialises in education law.
“For that particular student, the university had used artificial intelligence itself to detect plagiarism. I saw the potential flaws in that and anticipated a wave of students would have similar issues.”
As chatbot use becomes ever more widespread, what began as a trickle of issues has evolved into a profound challenge for universities. If vast numbers of students are using chatbots to write, research, code and think for them, what is the purpose of a traditional education?
Fighting cheating claims
Since that first call in 2023, Palmer has carved out a niche helping students accused of using AI to cheat on their coursework or remote examinations.
He says most of the students have been exonerated after presenting evidence such as draft essays, revision notes and previous work.
“AI cheating is … a beast of its own,” Palmer says. “Often it is the parents who call on behalf of their children. Often they feel their children were not given the right guidance or training on how they could and could not use AI.”
On other occasions, Palmer has helped students who admit to using AI avoid sanction by arguing that their university’s policies on AI were unclear or that they had mental health difficulties such as depression or anxiety.
“They come to the table saying, ‘I’ve messed up’,” he says. “In (some) instances, we get a GP letter or an expert report confirming that their judgment was impaired.”
Literacy gap
Some students report that ChatGPT is now the most common program open on laptops in university libraries. For many, it is already an essential part of daily life.
Gaspard Rouffin, 19, a third-year student studying history and German at Oxford University, uses it daily for everything from finding book suggestions to summarising long articles so he can work out whether they are worth reading in full.
For his language modules, using AI is understandably more controversial.
“I had one tutor in second year, in a (German) translation class, and she noticed that a lot of the translations were AI-generated, so she refused to mark any translations that week and told us to never do that again,” he says.
Other lecturers have been less vigilant. A third-year student at Oxford recalls a tutorial in which a fellow student was reading an essay that she felt was obviously AI-generated.
“I just knew instantly,” she says. “There was something about the syntax, the way it was constructed and the way that she (was) reading it.”
The tutor’s response? “They said: ‘Wow, that was a really great introduction, that was really well sculpted, and I really liked how precise it was.’
“I was just sitting there, thinking: ‘How can you not see that this is a product of ChatGPT?’ I think this is possibly illustrative of a literacy gap on the issue.”
Frustration builds
Research by international student accommodation company Yugo reveals that 43 per cent of British university students are using AI to proofread academic work, 33 per cent use it to help with essay structure and 31 per cent use it to simplify information.
Only 2 per cent of the 2,255 students surveyed said they used it to cheat on coursework.
However, not everyone feels positive about the software.
Claudia, 20, who is majoring in health, environment and societies, sometimes feels disadvantaged.
She says: “I can sometimes get frustrated, like in my modern languages modules, when I know for a fact I’ve written my thing from scratch and I’ve worked really hard to do it and then hear of other people who’ve got away with just using ChatGPT to write it and at the end of the day they end up with a way better mark.”
Students also fear the consequences if the chatbots let them down.
“I am scared of getting it wrong and plagiarising,” says Eva, 20, studying health and environment at University College London. Instead, she puts her own revision notes into ChatGPT and asks it to quiz her, to check her knowledge.
“Obviously, it’s a bit annoying, when you hear, ‘Oh, well, I got this grade’,” she says. “And you’re like, ‘Well, you used ChatGPT to get that’. (But) if others want to use AI now and not know the course content afterwards, it’s their problem.”
Belated reactions
Universities are somewhat belatedly scrambling to draw up new codes of conduct and to clarify how AI can be used depending on the course, module and assessment.
Approaches vary considerably. Many universities allow AI for research purposes or assistance with spelling and grammar, but others ban it altogether.
Penalties for breaching the rules can range from written warnings to expulsion.
“Universities are in the position of trying to shut the stable door after the horse has bolted,” says one assistant professor at a university in the Russell Group, an association of leading research universities in Britain.
“Our university is only just now reacting to AI by setting policies such as which assessments it cannot be used for.”
There are certain telltale signs of robot writing that the professor looks out for: “If I see work which is very heavily verbose or laden with adjectives, I tend to become suspicious.”
Some students are certainly being caught. Figures obtained by Times Higher Education show AI misconduct cases at Russell Group universities are rising as AI becomes mainstream. At the University of Sheffield, for example, there were 92 cases of suspected AI-related misconduct in 2023-24, for which 79 students were issued penalties, compared with just six suspected cases and six penalties the year before.
But Palmer says many universities have become over-reliant on detection software such as Turnitin, which compares students’ work against billions of web pages to spot potential plagiarism.
‘Similarity score’
Turnitin provides a “similarity score” – the percentage of text that matches other sources. The software company says its results should not be used in isolation.
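Turnitin’s matching method is proprietary, but the arithmetic behind a similarity score is straightforward to illustrate: break a submission into overlapping word sequences, check each against a reference corpus, and report the share that match. The Python sketch below is a toy under those assumptions, not Turnitin’s actual algorithm, and the sample sentences are invented for illustration.

```python
import re

# Toy illustration of a "similarity score": the percentage of a
# submission's text that overlaps with known sources. This is NOT
# Turnitin's proprietary algorithm; it is a sketch using five-word
# shingles and a tiny in-memory corpus (real systems index billions
# of web pages).

def shingles(text, n=5):
    """Return overlapping n-word sequences from the text, lowercased."""
    words = re.findall(r"[a-z']+", text.lower())
    return [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]

def similarity_score(submission, corpus, n=5):
    """Percentage of the submission's n-word shingles found in the corpus."""
    sub = shingles(submission, n)
    if not sub:
        return 0.0
    known = set()
    for source in corpus:
        known.update(shingles(source, n))
    matched = sum(1 for s in sub if s in known)
    return 100.0 * matched / len(sub)

# Hypothetical example: an essay that echoes part of one source.
source = ("The industrial revolution transformed patterns of work "
          "and family life across Europe.")
essay = ("As many historians argue, the industrial revolution "
         "transformed patterns of work in Britain's cities.")
print(f"Similarity: {similarity_score(essay, [source]):.0f}%")
```

Even this toy shows why such a score should not be used alone: a properly quoted and referenced passage matches the corpus just as strongly as a plagiarised one.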
“The burden of evidence falls on the university to decide whether, on the balance of probabilities, they cheated, but that burden’s quite low compared to a criminal case,” Palmer says. Biomedical and medical students appear to be more vulnerable to accusations, he believes, because they often memorise technical definitions and medical terminology directly from AI.
Andrew Stanford’s case illustrates just how knotty this problem can be.
In 2023, the University of Bath accused Stanford, 57, of using AI to cheat in his first-year examinations. He had paid £15,000 to join an MSc course in applied economics, studying remotely from his home in Thailand.
Nine months into his degree, he was told his exam answers appeared to have been formulated with AI. It is unclear how the university drew this conclusion.
Stanford insisted the paragraphs in question were his own work. He says he searched mainstream AI apps for phrasing similar to his own and could find none. However, two months later, in November 2023, he was told he had been found culpable and would have 10 per cent knocked off his assessment marks.
“I couldn’t have cared less about the 10 per cent deduction, but I did care about a record for academic misconduct,” says Stanford, who teaches maths and economics in Thailand. “It was very, very depressing for me when I had done nothing wrong. It felt like a kangaroo court.”
Solutions at hand
Stanford, who will complete his master’s at Bath this year, took his complaint to the Office of the Independent Adjudicator for Higher Education. This month he was exonerated.
The stakes for students – and universities – are huge.
“Universities face an existential crisis,” says Sir Anthony Seldon, former vice-chancellor of the University of Buckingham. But if the hurdles can be overcome, it could also be an opportunity.
“AI could be the best thing that has ever happened to education but only if we get ahead of the downsides, and the most immediate downside is cheating,” he says. “Generative AI is improving faster than the software used to detect it.
“It can be personalised to the user’s voice, which means even a really good teacher will struggle to detect it. The great majority of students are honest, but it’s really difficult, if you know that other people are cheating, for you not to do it too.”
Some academics suggest supervised exams and handwritten work as a solution. Seldon doesn’t think the answer lies in switching from coursework to more examinations because these do not teach broad enough life skills. Instead, he argues there should be more focus on seminars where students are encouraged to “think critically and collaboratively”.
Critical role
Birmingham City University head lecturer in data journalism Paul Bradshaw says AI is a “massive problem” for lecturers who traditionally have based their assessments on students’ ability to absorb text and then provide their own insights.
Yet he says it is crucial for universities to teach students how to use AI with a critical eye, learning about its benefits and shortcomings, rather than to ban its use.
“You’ve got a camp of students who won’t touch AI with a stick,” he says. “The problem for them is they’re going to be going into an employment market where they’re going to need those skills. Then you’ve got another camp who are using it but not telling anyone and don’t really know what they’re doing.
“I think we’re in a horrible situation where we’re having to adapt on the hoof and I think you’ll see a lot of mistakes made, both by students and by lecturers and by technology companies. AI has the potential to either further learning or destroy it.”
The Sunday Times
Additional reporting: Amelia Gibbins