NewsBite

Teacher or cheater? Artificial intelligence is the ‘challenge of our age’ for education

As artificial intelligence disrupts teaching, its unregulated rise also brings serious concerns.

Catherine McAuley, founder of the Sisters of Mercy, who died in 1841

When Catherine McAuley, founder of the Sisters of Mercy religious order, addressed a recent Sydney school assembly 182 years after her death, it was artificial intelligence that brought her back to life.

To the surprise of students and sisters at Our Lady of Mercy College in Parramatta, the school’s director of innovation, Matthew Esterman, used AI to animate a portrait of McAuley that smiled and spoke to them in her own words.

“AI can make teaching more engaging,” Esterman says. “I’ve seen teachers use it to introduce topics, generating a painting of Macbeth, animating it and using quotes from Shakespeare with a Scottish accent. Teachers are starting to use these tools to engage students the way we might have used YouTube videos.”

Just six months after tech firm OpenAI unleashed its chatbot, ChatGPT – dubbed the “cheatbot” – teachers and students have embraced AI to help plan lessons, compile research, answer questions and write essays.

Like Siri on steroids, generative AI is rapidly disrupting the world of teaching and learning in ways that threaten to make reading and writing redundant. An array of new AI apps is being used to speed-read and summarise academic research papers, ask questions of a PDF file, generate deepfake videos and photos, create artwork, and write music, essays and love letters.

Equally exciting and alarming, AI has caught most educators unprepared for its potential to turbocharge teaching, while giving lazy students an easy opportunity to cheat. The unregulated rise of AI poses problems. Will students who use a chatbot to write their assignments, analyse information or compile research win an academic advantage over those who rely on their own brains? How can teachers detect when a student has used AI? Will AI make students so lazy that they don’t learn to think for themselves? What if AI rewrites history, spouts propaganda or “hallucinates” by generating the wrong information?

Even the creators of ChatGPT are worried, warning this week that AI poses such a risk to humanity it must be regulated in the same manner as nuclear power. AI pioneer Geoffrey Hinton, known as the “godfather of AI”, quit Google this month after blowing the whistle on the dangers of AI generating fake images, videos and texts, and writing its own computer code.

Australia’s education ministers will meet on July 6 to discuss the first draft guidelines on the use of AI in schools. A federal parliamentary inquiry into the use of generative AI in education was launched this week after a referral from federal Education Minister Jason Clare, who sees the challenge as “not whether we use AI, but how we use it”.

“AI can help us to personalise education and make it more engaging and more effective for individual students,” Clare told Informa’s AI in Education conference in Sydney on Wednesday. “AI can also assist teachers by automating routine tasks such as grading, allowing them to focus on the more critical task of teaching and mentoring their students. The challenge is how do we harness the … benefits of AI and also make sure it’s not misused.”

Matt Bower, the interim dean of Macquarie University’s school of education, is contributing to the draft guidelines for schools. He is concerned that AI will create a “tidal wave of information junk” and predicts that schools and universities will need to change assessment methods, relying less on take-home assignments and more on supervised exams, spoken tasks and video assessments.

Bower insists that students still need to master the fundamentals of reading, writing and arithmetic so they can command AI. “The risk is that students won’t learn,” he told the Informa AI in Education conference. “ChatGPT could easily be used by a failing student to pass.”

Bower says a ban is “futile”. “An arms race of detection and subversion is just … time wasting,” he says. “AI is going to be embedded in every major learning platform. We’ve got to work furiously to get ahead of it, or at least stay on top of it. This is the challenge of our age.”

The Australian Council for Educational Research is pushing for a return to oral examinations to test the knowledge of medical students, after finding that AI has the knowledge to pass a medical exam. ACER’s deputy chief executive in charge of research and assessment, Catherine McClellan, predicts AI will change the way students are tested in schools and universities.

“I expect we will see a push away from recall and facts, and a push towards process and comprehension, towards understanding and use of information,” she says. “This doesn’t mean we abandon basic knowledge and facts to the machines – far from it. We must know what we’re talking about, what’s true and real.”

All state governments have blocked ChatGPT in state schools but many private schools are embracing its use, potentially giving their students an academic head start. At the Islamic College of Brisbane, students older than 13 are allowed to access AI apps on their school computers, with the search history saved and monitored for safety. The school’s chief executive, Ali Kadri, treats the technology as a personal tutor for each of the 1600 students and an assistant for teachers, who can create an individualised lesson plan for a student with a learning disability in five minutes instead of half an hour.

“If the student is struggling, the system can offer help,” he says. “Instead of having to read an entire book or article, they can ask for a summary. It’s like speaking to the author in real-time.

“Students are accessing AI if they need clarification around an issue. This type of real-time feedback is often difficult for an educator or school to do for a single student, let alone an entire class.”

Without regulation of its use, and control over quality, the dangers of AI can smother its benefits. If students use it to cheat, their higher marks could push an honest student out of a place in university. If wealthier students can afford to pay for the latest AI apps, poorer classmates may miss out on the same learning opportunities.

Deakin University education academic Lucinda McKnight insists there is “no ethical way” to use generative AI and is concerned about privacy and plagiarism. “(Generative AI is) trained on materials that have been lifted without copyright permission,” she says. “Who owns student data, and where is it going? There should be a conscientious objection clause for students who choose not to use it.”

In some cases, when online plagiarism detectors flagged work as AI-generated, students argued they had merely used the Grammarly app to check punctuation. Picture: AFP

Hillbrook school in Brisbane asks parents and students to consent to AI usage terms – including a ban on cheating – as part of its technology usage policy. “We haven’t blocked it,” says Hillbrook’s head of digital education, Miriam Scott. “We teach students how to reference it (when used in assignments). If students are going to use AI, it has to be cleared with the teacher beforehand.”

The school uses AI detection programs but Scott says they “are not gospel”. “They’re flawed,” she says. “They’re a starting point to have a conversation with a student and start an investigation.”

At St Andrew’s Cathedral School in Sydney, teachers are reviewing the written assignment tasks for the International Baccalaureate to prevent chatbot cheating. School deputy head Brad Swibel says the school’s stance on AI is one of the most common questions asked at parent-teacher interviews.

“A lot of parents are very interested in it, especially in years 11 and 12,” he says. “The Baccalaureate has internal assessments that involve reading and research, so the advent of generative AI poses quite a number of challenges. This puts the onus on teachers to do intermediary checks and vouch that it is the student’s own work. In the end it’s going to come down to teachers looking at Microsoft Word versions to see where they’ve been edited. That doesn’t stop kids putting ChatGPT on their iPad and typing it in.”

Swibel says it is difficult to prove when a student has used ChatGPT; in several cases, when online plagiarism detectors flagged work as AI-generated, students argued they had merely put their assignment through the Grammarly app to check punctuation.

Flinders University is using an updated version of plagiarism detector Turnitin to detect the possible use of AI in students’ work.

“It’s not a failsafe method,” Flinders pro vice-chancellor (learning and teaching innovation) Michelle Picard says. “We look at the student habits in writing – is it consistent in style to what they’ve previously done? Students are judged on the balance of probabilities if they’ve failed to meet academic integrity standards.”

The university has not banned AI entirely; its use is required in some assessments. “In the professions, many people will be starting to use artificial intelligence,” Picard argues. “We want to prepare our students for the real world.”


Original URL: https://www.theaustralian.com.au/inquirer/teacher-or-cheater-artificial-intelligence-is-the-challenge-of-our-age-for-education/news-story/0e40e0caf350238f479b5948c86b66b6