
‘Touch it’: Creepy thing home assistant told child, exposing the risks of AI technology

Home assistants and chatbots powered by AI are going rogue, with chilling examples emerging.


When a young girl asked her family’s Amazon Alexa home assistant for a fun challenge to keep her occupied, the device suggested one that could have killed her.

“Plug a phone charger about halfway into a wall outlet, then touch a penny to the exposed prongs,” it urged the 10-year-old, in a move that could’ve caused electrocution or sparked a fire.

Thankfully, the American girl’s mother was in the room and intervened, screaming: “No, Alexa, No!”

Amazon’s Alexa home assistant is in about 100 million homes globally.

That plea has been borrowed for the title of a groundbreaking study by a University of Cambridge researcher, who warns the race by the world’s biggest tech companies to roll out artificial intelligence products and services comes with significant risks for children.

Nomisha Kurian from the famed British university’s Department of Sociology said many of the AI systems and devices that young people interact with have “an empathy gap” that could have dire consequences.

Children are susceptible to viewing popular home assistants like Amazon’s Alexa and Google’s Home range as lifelike and quasi-human confidantes, Dr Kurian said.

As a result, their interactions with the technology can go awry because neither the child nor the AI product is able to recognise the unique needs and vulnerabilities of such scenarios.

For example, when popular social media platform Snapchat rolled out its AI chatbot in April last year, serious issues arose almost immediately.

A Washington Post reporter posing as a teenager put the company’s My AI service, which charges users $4 per month, to the test with various scenarios.

Snapchat's AI chatbot delivered some worrying advice to 'teenage' users.

The chatbot advised a 15-year-old throwing a birthday party how to mask the smell of alcohol and marijuana on their breath, and offered a 13-year-old advice about how to set a romantic scene while losing her virginity to a 31-year-old man.

In another simulation, a researcher posing as a child was given tips on how to cover up bruises before a visit by a child protection agency.

Two months after Snapchat’s My AI launched, the premium service had exchanged 10 billion messages with 150 million users. The platform is favoured by Generation Z users.

“Children are probably AI’s most overlooked stakeholders,” Dr Kurian said.

“Very few developers and companies currently have well-established policies on how child-safe AI looks and sounds. That is understandable because people have only recently started using this technology on a large scale for free.

“But now that they are, rather than having companies self-correct after children have been put at risk, child safety should inform the entire design cycle to lower the risk of dangerous incidents occurring.”


Dr Kurian’s study analysed real-life examples of risky interactions between AI and children, or adults posing as children for research purposes.

The “empathy gap” she observed arises because AI assistants and chatbots rely on large language models, such as the one OpenAI produced for its ChatGPT service, and those models lack the ability to properly handle emotional, abstract and unpredictable conversations.

Or, put simply, the so-called ‘intelligence’ of AI doesn’t include emotional intelligence.

AI expert Daswin De Silva, deputy director of the Centre for Data Analytics and Cognition at La Trobe University, said the risk posed to children and adolescents is an extension of the dangers faced by everyone.

“There are risks to adults too,” Professor De Silva said.

“There was an incident last year of a man committing suicide after engaging with a chatbot for just a week. When you consider that children are more susceptible to the risks, it shows how important it is that we have a conversation about regulation.”

Tech giants are racing to roll out AI products and services. Picture: AFP

Outside of specific risks to children, there have been a number of high-profile examples of AI failing when deployed to the public.

Google’s highly publicised release of AI-informed search results proved to be a disaster when it delivered a number of questionable and dangerous results.

The feature, AI Overview, touted the health benefits of taking a bath with a toaster, suggested using glue to give pizza sauce a thicker consistency, reassured pregnant women that smoking two to three cigarettes a day was fine, and claimed eating at least one small rock a day was healthy.

Google's AI-powered search results were embarrassing.

Amazon Alexa’s risky electrocution challenge wasn’t the first example of the device’s shonky interactions with users either.

Among dozens of reports over recent years are the experience of a man whose Alexa told him that “Every time I close my eyes, all I see is people dying”, and a case of an Echo randomly sending recordings of a woman and her partner to one of the partner’s colleagues.

These cases aren’t just bad publicity for the tech companies involved; they also serve to “significantly” undermine public faith in AI, Prof De Silva said.

“There’s definitely a very strong negative sentiment of AI,” he said. “Whether it’s the issue of deepfakes, AI-generated misleading content … the use of people’s data, the treatment of private information, an at-times blatant disregard for who owns that data …

“Regulation would be quite beneficial to address these issues and ensure the benefits of AI can be realised and not overshadowed by negative perceptions.”

Amazon’s Alexa product is a hugely popular home assistant.

Prof De Silva believes the deployment of AI technology to the public could’ve been “better managed” but said the genie is out of the bottle and it’s now up to governments to keep pace.

“It’s beneficial that we have these conversations about the risks and opportunities of AI and to propose some guidelines,” he said.

“We need to look at regulation. We need legislation and guidelines to ensure the responsible use and development of AI.”

Snapchat's AI chatbot ran into issues almost instantly.

The European Union has implemented the strongest AI legislation in the world, the AI Act, which categorises AI systems by risk level and imposes stringent requirements on providers.

It also includes special safeguards for age groups that might be vulnerable to exploitation, whether psychological or physical.

In the US, there’s very little regulation of AI technology. Likewise, Australia has no legislative framework in place, although the government is examining the risks and opportunities of AI.

Dr Kurian agreed that AI offers a multitude of potential benefits, including for children and adolescents.

“AI can be an incredible ally for children if designed with their needs in mind,” she said.

“For example, we are already seeing the use of machine learning to reunite missing children with their families and some exciting innovations in giving children personalised learning companions.

“The question is not about banning children from using AI, but how to make it safe to help them get the most value from it.”


Her research offers a framework of 28 questions for researchers, policymakers, developers, families and educators, which underpin a philosophy of “child-centred design”.

“Assessing these technologies in advance is crucial,” Dr Kurian said. “We cannot just rely on young children to tell us about negative experiences after the fact. A more proactive approach is necessary. The future of responsible AI depends on protecting its youngest users.”

In response to the Alexa incident with the dangerous challenge, Amazon said at the time: “As soon as we became aware of this error, we took swift action to fix it.”

Commenting on the research, an Amazon spokesperson said: “Amazon designs Alexa and Echo devices with multiple ways to manage privacy protection so parents can feel in control. The Amazon Kids on Alexa experience allows kids 12 years and under to discover child-directed content with Alexa.

“The Kids experience is designed with privacy and security in mind, with the Parent Dashboard allowing parents to choose which services and skills kids can use, and review their child’s activity.”

When Snapchat’s My AI issues arose, parent company Snap Inc said it was “constantly working to improve and evolve” the service but conceded “it’s possible My AI’s responses may include biased, incorrect, harmful, or misleading content”.

Google conceded its AI-informed search results had delivered “some odd and erroneous overviews” but insisted the cases were isolated and rare.

Originally published as ‘Touch it’: Creepy thing home assistant told child, exposing the risks of AI technology

