Unis warned: chatbots taking over and return to ‘pen and paper’ is futile
A new survey of 337 Australian university students found that more than a third had used a chatbot for help with an assessment, and many did not view this as a breach of academic integrity.
Universities have been warned that resisting generative AI by returning to pen-and-paper exams is futile, and that AI detection tools will result in accusations against innocent students. The warning comes as new research shows more than one-third of students use chatbots for assistance with assessments but don’t necessarily view their actions as cheating.
The Tertiary Education Quality and Standards Agency released a document earlier this month to guide universities on “immediate” actions to address the academic integrity risks posed by generative AI, while individual institutions implement more complex, longer-term action plans.
“It is imperative to resist the urge to return entirely to conventional, ostensibly ‘AI-proof’ assessment tasks like pen and paper exams,” states the university watchdog’s advice, written by University of Queensland associate professor Jason Lodge.
“Although these could appear like a quick fix, they frequently fail to evaluate the entire range of knowledge, skills, and abilities we ask of students pursuing a higher education.”
TEQSA also advised universities to “limit (their) reliance on AI detectors”.
“Testing of these tools continually demonstrates that they are unreliable and tend to produce false positive results,” Professor Lodge wrote.
“In one example, an AI detection tool flagged The Bible as being written by ChatGPT.
“Relying on these tools will lead to some unfair accusations against innocent students while potentially missing sophisticated misuse by students who deliberately seek to avoid detection.
“At the time of writing, it is unwise to rely solely on AI detectors as a means of managing the risk to academic integrity posed by AI.”
Instead of simply reverting to pen-and-paper exams, TEQSA is advising institutions to instruct students to “show their working” when using AI tools, and to make use of oral assessments.
The document notes that it is unclear what proportion of students use AI in their studies, with estimates ranging widely from 10 per cent to 60 per cent.
But a new survey of 337 Australian university students found that more than a third of participants had used a chatbot for help with an assessment, although they did not necessarily perceive this as a breach of academic integrity.
Almost 37 per cent of students used a chatbot to find information for their assessment, to help them with analysis, to write part or all of a paper, or to solve a multiple choice quiz.
“Study results suggest that the widespread use of AI chatbots has rapidly arrived in the university context, with the majority of students having used tools such as ChatGPT – many of them to research information or to better understand a specific topic,” researchers from the University of Technology Sydney wrote in the Computers and Education: Artificial Intelligence journal.
“A non-negligible group of students reported to have used generative AI applications for analysis purposes, or to write parts of or the entire assessment. These use cases, arguably, present considerable challenges for academic integrity and risks of plagiarism.
“This viewpoint, however, is not necessarily shared by students who predominantly perceive using AI chatbots as not cheating and as not always breaching academic integrity.”
Researchers from health, media and transdisciplinary schools urged institutions to “define clear policies and guidelines about the ethical and academically honest approach to use, and integrate generative AI tools into university education and assessment”.