Exclusive

AI spawning ‘fake science’, tertiary watchdog warns

TEQSA has warned that AI can rig research results. Picture: AFP

Artificial intelligence can rig research results to spawn fake science, Australia’s universities regulator has warned, as research funding bodies ban the use of AI to assess grant applications.

The federal government’s Tertiary Education Quality and Standards Agency has called on universities to beat chatbot cheats by “transforming” student assessment.

AI poses “a risk to academic integrity” by helping students cheat or by generating fake research, TEQSA warns.

“AI has the capability to generate not just fake data, including false or doctored images, but entirely manufactured studies and journal articles,” the agency has told a Senate inquiry into the use of AI in education.

“Images generated by AI are becoming increasingly difficult to detect and can compromise the integrity of research findings. If not appropriately managed, AI has the potential to … obscure genuine research in a sea of AI-generated content and ultimately undermine the public’s trust in the scientific process.”

The agency’s warning comes as government research organisations blow the whistle on risks to national security from researchers unwittingly feeding sensitive information into AI question-and-answer engines.

The Australian Research Council, which administers almost $900m a year in federal grant funding, says “information entered online can be accessed by unspecified third parties”.

“Risks in the use of generative AI in research include IT security, intellectual integrity and property protection, biases, inaccurate or unoriginal output, and the loss of confidential information,” the council told the Senate inquiry.

“When information is entered into most commercial generative AI tools, such as ChatGPT, there is an unacceptable risk that it will enter the public domain and be accessed by other users or third parties.”

The ARC is concerned that AI may “hallucinate” by churning out factually incorrect or inappropriate content, or may steal intellectual property.

“Traditional attribution of authorship assumes the author has applied their intellect, skill and effort, and appropriately acknowledged and cited the work and ideas of others,” it states.

“When generative AI tools are used, it can become difficult to identify what work is genuinely authored by that researcher or research team, or where authors have drawn upon the work of others without acknowledgment.”

The ARC has banned its grant assessors and peer reviewers from using AI to help award research funding.

A crackdown on AI has also been ordered by the National Health and Medical Research Council, which distributes almost $900m in taxpayer funding for medical research grants.

It has told grant applicants to exercise caution when using AI to prepare funding applications, “given it may not be possible to monitor or manage the subsequent use of information entered into generative AI databases”.

The NHMRC has banned peer reviewers, who test that research is ethical and accurate, from feeding information from grant applications into AI tools to help them assess applications.

The hazards of using AI as a shortcut to award research grants have also been flagged by TEQSA, which told the Senate inquiry that AI could generate scientific error.

“The administrative burden of the scientific peer-review process may result in reviewers outsourcing the review to AI systems to either provide the reviewer with a summary or provide feedback,” it states. “AI systems do not hold the same level of detailed expertise of a human discipline expert, which may result in fewer erroneous or misleading research findings being identified prior to publication.”

TEQSA says AI is trained on historical data, which may “limit innovation, insight and discovery”.

Calling for new ways of assessing students, it says AI can already write essays, complete coding tasks and solve both worded and numerical maths problems of the kind universities have traditionally relied on to mark students.

“Without a transformation in how institutions assess student achievement of learning outcomes, there is a risk to the integrity of the system,” it says.

TEQSA predicts an AI learning loop, as teachers and lecturers use AI to write lesson plans and set assignments that students then complete using AI.

“The educator then uses AI to grade the assessment and provide feedback,” it states. “In such a situation, limited human involvement … undermines not just the educational experience but the very process of learning.”

TEQSA insists the ability to write an essay without using AI has intrinsic educational value.


Original URL: https://www.theaustralian.com.au/business/technology/ai-spawning-fake-science-tertiary-watchdog-warns/news-story/91621e573a81fb00c36b4538859de3a2