Educators demand controls over cheating chatbots

A human brain drain will be a real danger if artificial intelligence results in the lazy teaching of gullible students. From banning AI to embracing it, policies vary across Australia.

Illustration: Emilia Tortorella

Lying, cheating chatbots are running amok, as schools and universities scramble to control the use of artificial intelligence in classrooms and research labs. Now that the genius of all genies is out of the bottle, regulators and educators are grappling with the practical, legal and ethical challenges of using AI to teach and learn. From banning AI to embracing it, every university has a different policy.

In schools, the digital divide is growing wider. AI is inherently unfair when some students are using it to write essays or take shortcuts in research – either surreptitiously or with permission – while others rely on their own brainpower.

The risks and rewards of generative AI – or Gen-AI – are emerging as more students and teachers use the technology, which is growing “smarter” by the second.

“The threat landscape is evolving rapidly,” the Tertiary Education Quality and Standards Agency has warned a federal parliamentary inquiry into the use of Gen-AI in education. “The increasing sophistication of AI poses diverse risks to research integrity. AI has the capability to generate not just fake data, including false or doctored images, but entirely manufactured studies and journal articles. Images generated by AI are becoming increasingly difficult to detect and can compromise the integrity of research findings.”

“Enfeeblement” has been flagged by TEQSA as another risk arising from a hybrid model of human and AI learning. “This refers to humans increasingly relying on machines, losing the ability to self-govern, question AI outputs, and intervene for the benefit of humanity, resulting in poorer outcomes,” the standards agency told the inquiry.

“The education sector will be a crucial partner in ensuring society is able to strike a balance between the capabilities of AI and the retention of human critical thinking, creativity and decision-making skills.”

In shining a regulatory torch on the dark side of AI, TEQSA has revealed the urgency of setting some road rules for its use. Will Gen-AI dumb down the population, as people lazily leave thinking and analysis to computers? Or will it be a springboard for human innovation, turbocharging curiosity and inspiration? The answer will depend on the quality of the AI, and how it is used and controlled.

Road rules were written when cars replaced horses as the main form of transport. Governments set safety standards for vehicles, built speed bumps, stop signs and no-go zones, taught people how to drive safely, and punished dangerous drivers. The same must happen quickly for AI, as it crashes over society like a technological tsunami.

As the House of Representatives standing committee on employment, education and training probes the perils of AI in education, the nation’s education ministers are drawing up guidelines for its use, while the federal Department of Industry, Science and Resources is also consulting on regulations for “safe and responsible AI”.

The NSW parliament, too, is holding an inquiry and will call technology giants Google, Meta and OpenAI – the creator of ChatGPT – to give evidence.

Education ministers are seeking public input on a “draft framework” to safeguard children’s privacy and “protect inputs by or about students, such as typed prompts, uploaded multimedia or other data”.

In an extra responsibility for school principals already strangled in red tape, schools would be required to “regularly monitor the generative AI tools they are using and their impact on students and teachers”.

The proposed Australian laws fall far short of the world’s first comprehensive legislation, the AI Act before the European parliament. Subject to the agreement of 27 member countries by the end of this year, the legislation will impose “transparency requirements” on Gen-AI, including ChatGPT, to ensure that AI-generated content is disclosed, that AI models are designed to prevent them generating illegal content, and that AI owners publish summaries of any copyrighted materials fed into the systems.

AI will be banned from operating biometric identification systems, such as facial recognition, and from classifying people based on behaviour, socio-economic status or personal characteristics.

The implications of this are immense, given that tech companies, corporations and governments already harvest biometric data and use algorithms to target advertising, public services and law enforcement.

The most vocal opponent of the unbridled use of AI in education is Deakin University education lecturer Dr Lucinda McKnight, who has warned that AI apps are enriching “white male billionaires” at the expense of children’s privacy and cyber safety.

“Our education system risks being shaped by corporate interests,” she states in a submission to the inquiry, in collaboration with 10 colleagues from Deakin’s Centre for Research in Assessment and Digital Education.

Referring to “unethical data harvesting”, the researchers warn that biased AI could “spread misinformation at scale”. They urge schools to carefully select AI programs that “prioritise ethical approaches and respect intellectual rights”. “A software that uses its own copyrighted images is more ethical than one which generates images based on a much more contentious dataset scraped from the web,” they state. “Generative AI in and of itself looks backwards, drawing from past ideas. Our nation needs citizens who can look to the future,” they add.

Artificial intelligence gleans its “knowledge” from historical data, scraped from research papers and corporate or government websites – as well as conspiracy theories on social media.

AI can answer questions on command, faster than any person, yet it lacks the safeguards of wisdom and integrity that come from human experience.

TEQSA – which is chaired by former Queensland University of Technology vice-chancellor Peter Coaldrake – has raised its concern that “an over-reliance on AI may limit innovation, insight and discovery”. It also warns that AI could wipe out diversity of views, and insists that AI must be trained on local data “to avoid erasing Australian and Indigenous culture in a sea of US-centric internet content”.

Educational institutions that rely on facts in teaching are worried about AI’s tendency to bend the truth, or “hallucinate” credible-sounding responses later found to be false. ChatGPT, the first mainstream “chatbot”, has been branded a “cheatbot” for its popularity with students using it to pass exams and write assignments undetected.

Regulators have revealed that Gen-AI is rigging research results, using false data to generate “fake science”. The Australian Research Council and the National Health and Medical Research Council, which together administer $1.8bn in taxpayer funding for research grants, have both banned the use of AI to assess grant applications, for fear of errors or the theft of intellectual property.

Acknowledging that it is nigh-impossible to detect chatbot cheating, many Australian universities are changing their assessment methods, returning to old-school pen-and-paper exams under supervision, or in-person interrogations of knowledge. TEQSA predicts that, as a result, teachers and lecturers will have to spend more time marking students’ work.

The University of Sydney, which refers to AI as “copilots” for students, has revealed the difficulty of detecting when AI has been used to cheat. “Detection tools may give erroneous results, and valid research outputs may be questioned in error,” it told the inquiry.

The rise of Gen-AI could well be a blessing in disguise for the quality of education. Given industry concerns that too many tertiary courses fail to produce “job-ready” graduates – with evidence that some courses are redundant by the time students graduate – education providers will need to constantly update content to keep it relevant. This requires closer collaboration between academics and industry experts – which happens to be a key objective of the Albanese government’s Universities Accord.

TEQSA, which is working with universities on ways to safeguard academic integrity, calls for “regular review and updates” to ensure teaching is “contemporary and appropriate in the age of AI”.

“Some courses … may quickly become outdated,” it told the parliamentary inquiry.

“The rapid advancement of large language models is forcing educators to think carefully about what knowledge still needs to be taught when so much information can be so readily synthesised by AI.

“It is crucial that the education sector develops new methods of assessment. Without a transformation … there is a risk to the integrity of the system.”

Engineers Australia, citing research showing that AI could pass exams, also cautions university lecturers against relying on AI to plan and deliver lessons.

“If they are not subject matter experts, they need to verify the information being provided,” it told the inquiry. “An example of a risk in this regard is out-of-subject teachers using these tools to help them prepare learning activities on topics they don’t fully understand.”

The Australian College of Nursing, too, is concerned about the quality of information generated by AI. “The perpetuation of systematically embedded human biases, including racism, may result in the prioritisation of less-sick white patients over sick patients of other races,” it warned the inquiry. If nursing students use AI as a “shortcut” to get a degree, the college warns, “this could endanger the care of patients”.

An academic who creates digital games to improve children’s literacy, Associate Professor Erica Southgate of the University of Newcastle, is worried that AI could confuse or mislead primary school children still too young to distinguish fantasy from reality, let alone fact from fiction.

AI can “nudge” users towards information and “discriminate in ways that are without precedent”, she has told the inquiry. Some students may even form unhealthy “bonds” with AI chatbots.

“Often the purpose of these tools is to make them feel like they’re human, and not be transparent as to the fact they are robots,” Southgate tells Inquirer.

“People are overly trusting. You think that if a machine comes up with an answer or gives you a direction, you should trust it – but that’s not necessarily the case.

“There are dark patterns in the design of software where it nudges in a particular direction and uses emotional manipulation. These are black box systems. You can’t examine the algorithms or audit them – but we’re really at the stage we need to crack open the black boxes.”

Pointing to the former Coalition government’s Robodebt debacle, which led to the tragic suicides of welfare recipients wrongly lumped with debts raised by automated audits, Southgate insists that humans must always stay in the loop.

Westbourne Grammar principal Adrian Camm talks to students.

Despite the cacophony of concerns, many universities and schools are enthusiastically adopting AI as a tool for teaching. At Westbourne Grammar School in Melbourne, principal Adrian Camm insists that banning AI is the “wrong approach”. His school set up an “AI academy” this year, where physics students have built their own chatbot to tutor them in the subject. “Our students aren’t just passive consumers of end products,” he tells Inquirer. “They are the creators of tomorrow’s technology. And they have a finely tuned bullshit detector.”

Westbourne students do more than type prompts into ChatGPT to write an essay. They are being taught to create their own AI, to examine biases in the technology, interrogate its answers and consider the ethics of its use. But the tech is off-limits until year 5, as younger students focus on building the foundations of reading, writing and maths before interacting with AI. And even in the senior years, students must still write pen-and-paper essays.

The disparity between schools is of concern. Well-resourced private schools have teachers with the expertise to put AI to smart use, and the money to ensure all students have access to it. Yet public schools are still awaiting instructions from governments, which have banned student use of AI in every state except South Australia pending agreement on national guidelines.

The Federation of Parents and Citizens Associations of NSW is demanding controls on the use of AI in schools, citing concerns about fairness, privacy and teaching quality raised in its survey of 800,000 parents. “Children will lose the ability to think, discuss and problem solve, and this is how many life lessons and coping strategies are learnt,” one parent said. Another complained that her child’s science class had been cancelled due to a power outage to the smartboard.

“I questioned if the teacher actually knew what she was teaching, or did she need technology to enable her to teach?” the parent asked, adding: “It would only get worse with AI.”

Natasha Bita, Education Editor

