
Privacy and security threats from AI assistants need digital response

Google has unveiled new capabilities for its automated assistant based on its growing expertise in artificial intelligence.

Google says it will identify when a machine, not a human, is calling. Picture: AFP

Last month Google unveiled remarkable new capabilities for its automated assistant, built on the company’s growing expertise in artificial intelligence.

Perhaps the most dramatic demonstration, and, judging by the deluge of commentary, the most troubling, was the ability of Google’s AI to make phone calls that imitate a human.

While we’re not there yet, you’ll soon be able to instruct an AI to use an old technology (voice calls) to make appointments and handle other interactions on your behalf — interacting with other humans or, if the receiver wants, other AIs. There’s value in that.

So, what’s the concern? An AI that sounds human compromises privacy and security. Although they’re often bundled together, privacy and security are different things.

Privacy includes the right to be left alone. AI callers violate that because of their potential to intrude. Privacy concerns also arise when information is used out of context (for instance, for gossip, price discrimination or targeted advertising). AI callers that sound human may violate privacy because they can fool people into believing they’re talking to a person. These machines may obtain information about you, then use it in ways you don’t anticipate. This could happen while you’re talking to an AI or, in the future, while an AI talks on your behalf.

However, when people talk about privacy concerns, they’re often really concerned about security. The issue isn’t targeted advertising or gossip; it’s theft and safety. Security is the state of being free from danger. Security concerns arise when information is extracted, then used in an illegal way. The most obvious example is identity theft. Imagine an AI caller that can impersonate your voice. This may be an attractive feature if it’s part of a service you control. At the same time, a stranger could use the same service to fool others into believing they’re talking to you.

The problem is that improving security may not help privacy. For instance, increased surveillance may improve security against bad actors, but at the expense of privacy. Equally, improving privacy may not help security: privacy rules that restrict the flow of information may make it harder for police to know what the bad guys are doing, making you less secure.

For AI assistants, we can make impersonation illegal and add new layers of identification checks so that identity theft is difficult. This helps security but not privacy: we’ll still get spammed by AI callers, and information you tell an AI could still be used to target advertising.

Many commentators called for Google (and others) to be required to identify when a machine, rather than a person, was calling. Google said it would provide that identification. But a person or organisation set on stealing your identity can simply disregard such rules. Even if the government mandates that an AI assistant identify itself, this protects you only if bad actors identify themselves too.

It’s time to consider protocols for AI communication across domains that improve privacy and security. Social networks have their own internal ways of authenticating who is messaging whom, but we do not have good ways to verify identity across networks. We still live in an analog world, and voice recognition, a classic analog authenticator, is no longer good enough.
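To make the idea concrete, here is a minimal sketch of what verifiable caller identification could look like: a challenge-response in which the receiver issues a random nonce and the calling AI signs it, along with its operator’s identity and an explicit machine-caller disclosure. Everything here is illustrative; the public-key directory, the field names and the choice of Ed25519 signatures are assumptions, not a description of any deployed system.

```python
# Illustrative challenge-response for verifiable AI-caller identity.
# Assumptions, not a deployed standard: operators publish Ed25519
# public keys in a trusted directory, and a calling AI signs a fresh
# nonce plus an explicit machine-caller disclosure before connecting.
import json
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

# Hypothetical directory mapping operator IDs to public keys.
DIRECTORY: dict[str, Ed25519PublicKey] = {}


def issue_challenge() -> bytes:
    """Callee generates a fresh random nonce for each incoming call."""
    return os.urandom(32)


def answer_challenge(nonce: bytes, operator_id: str,
                     key: Ed25519PrivateKey) -> bytes:
    """Calling AI signs the nonce plus the mandated machine disclosure."""
    claim = json.dumps({
        "nonce": nonce.hex(),
        "operator": operator_id,
        "is_machine": True,
    }, sort_keys=True).encode()
    # Ed25519 signatures are always 64 bytes, so prepend the signature.
    return key.sign(claim) + claim


def verify_caller(nonce: bytes, response: bytes) -> dict | None:
    """Callee checks the signature against the directory's public key."""
    signature, claim = response[:64], response[64:]
    fields = json.loads(claim)
    if fields.get("nonce") != nonce.hex():  # reject replayed responses
        return None
    public_key = DIRECTORY.get(fields.get("operator"))
    if public_key is None:  # unknown operator: treat as unverified
        return None
    try:
        public_key.verify(signature, claim)
    except InvalidSignature:
        return None
    return fields


# One round trip with a hypothetical operator.
operator_key = Ed25519PrivateKey.generate()
DIRECTORY["example-assistant-operator"] = operator_key.public_key()

nonce = issue_challenge()
response = answer_challenge(nonce, "example-assistant-operator", operator_key)
print(verify_caller(nonce, response))  # verified claim with is_machine=True
```

The property that matters is that verification works across networks: any receiver that can look up the operator’s public key can check the claim, without trusting the carrier or platform the call arrived on.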

The goal is clear: we want the benefits of AI assistants without significantly sacrificing our privacy or security. We need AI callers that can identify themselves in a verifiable manner, and agreed protocols for how AI calls should be handled.
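Once identity can be verified, the receiving side still needs a rule for what to do with the result. Below is a sketch of one possible handling policy; the categories and defaults are purely illustrative of what a user might configure, and the `claim` input is the output of a verification step such as the `verify_caller()` sketch above.

```python
# Illustrative call-handling policy built on verified caller claims.
from dataclasses import dataclass


@dataclass
class CallPolicy:
    accept_verified_machines: bool = True   # e.g. appointment bots
    screen_unverified: bool = True          # send to screening/voicemail


def handle_call(claim: dict | None, policy: CallPolicy) -> str:
    """Decide what to do with an incoming call.

    `claim` is a dict of verified caller fields, or None if the
    caller could not prove who (or what) it is.
    """
    if claim is None:
        return "screen" if policy.screen_unverified else "reject"
    if claim.get("is_machine"):
        return "accept" if policy.accept_verified_machines else "reject"
    return "accept"  # verified human caller


print(handle_call(None, CallPolicy()))                  # screen
print(handle_call({"is_machine": True}, CallPolicy()))  # accept
```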

Ajay Agrawal is the Peter Munk professor of entrepreneurship at the University of Toronto’s Rotman School of Management. Joshua Gans holds the Jeffrey S. Skoll chair in technical innovation and entrepreneurship at Rotman. Avi Goldfarb is the Ellison professor of marketing at Rotman.

HARVARD BUSINESS REVIEW

Original URL: https://www.theaustralian.com.au/business/harvard-business-review/privacy-and-security-threats-from-ai-assistants-need-digital-response/news-story/f5ca18d1672d431df6858133ac4c473b