
Clearview AI facial recognition tech probed by privacy watchdog

A secretive facial recognition tech company is now being probed by Australia’s privacy watchdog after scraping billions of our photos.


A company that scraped images from social media profiles and used them to create a facial recognition app to sell to police is under investigation by Australia’s information commissioner and its UK equivalent.

Clearview AI was founded by Australian entrepreneur and one-time model Hoan Ton-That.

A New York Times exposé of Clearview AI in January this year described his creation as a “secretive company that might end privacy as we know it”.

At Clearview’s core is a database of more than three billion images it claims to have scraped from social media sites (against the policies of those sites).

Hoan Ton-That's Clearview AI has raised millions from investors and companies with links to the far-right, the CIA, and Facebook. Picture: Amanpour and Company/YouTube

Upload a picture of a person and the app will find other pictures of them online, showing where those pictures were posted.

This means law enforcement agencies could use it to identify people suspected of committing crimes, or, as opponents of police use of facial recognition fear, wrongly identify them.

It could also potentially be used to identify unknown victims.

RELATED: ‘Performative’: police tech ban slammed

The existence of Clearview was known only to a select few before January this year. Picture: Alamy

Despite those potential benefits, an investigation has now been opened following public revelations earlier this year about the company’s existence and clients.

“The Office of the Australian Information Commissioner (OAIC) and the UK’s Information Commissioner’s Office (ICO) have opened a joint investigation into the personal information handling practices of Clearview AI Inc, focusing on the company’s use of ‘scraped’ data and biometrics of individuals,” a statement from the OAIC distributed on Thursday afternoon reads.

“The investigation highlights the importance of enforcement co-operation in protecting the personal information of Australian and UK citizens in a globalised data environment,” the OAIC added, saying “no further comment will be made while the investigation is ongoing”.

In the US, more than 600 law enforcement agencies had started using Clearview throughout 2019, according to the company.

Facial recognition technology has many worried about living under an ever-watching police state. Picture: Steven Senne/AP

RELATED: Bezos backs new tech for NSW police

In Australia it’s been used by a number of law enforcement agencies, including the Australian Federal Police and the Queensland, South Australian and Victorian police forces.

According to a Buzzfeed News investigation, Australian police had registered dozens of accounts between them and run more than 1000 searches.

In a statement released earlier this year following questions at a Parliamentary Joint Committee on Intelligence and Security hearing, the AFP explained its use of the app, qualifying that the agency had not adopted the platform or entered into any contractual arrangements with Clearview AI, and had no plans to do so.

“However, between 2 November 2019 and 22 January 2020, members of the AFP-led Australian Centre to Counter Child Exploitation (ACCCE) registered for a free trial of the Clearview AI facial recognition tool and conducted a limited pilot of the system in order to ascertain its suitability in combating child exploitation and abuse,” the AFP said.

AFP Commissioner Reece Kershaw was questioned over Clearview at a parliamentary committee earlier this year. Picture: Sean Davey.

RELATED: State’s police force labelled ‘hysterical’

The AFP had been questioned by the committee in February on whether it had used Clearview AI and why Freedom of Information requests submitted by media in relation to the software had been rejected.

A total of seven AFP officers activated a trial and conducted searches after nine invitations were sent by Clearview AI.

“These searches included images of known individuals, and unknown individuals related to current or past investigations relating to child exploitation,” the AFP said in a response to questions taken on notice, adding the trials came following suggestions from international agencies.

Multiple tests of a variety of facial recognition technologies have shown them to display race and gender biases, and it’s feared the technology could wrongly identify innocent people as criminals.

The AFP’s response also revealed the Office of the Australian Information Commissioner had issued notices to produce information under the Privacy Act, and said the AFP is fully co-operating.

It also said the FOI requests were rejected because they didn’t turn up any information at the time, and that the officers’ use of the trial was only ‘subsequently discovered’.

“You will appreciate the concern in this committee that, in the absence of existing legal framework in Australia, the thought that such facial recognition technology was being used by the Australian Federal Police would be a concern,” shadow attorney-general Mark Dreyfus said.

A number of state-based police organisations told Buzzfeed that facial recognition is “one of many capabilities” available to law enforcement while not commenting directly on Clearview.

Despite Clearview’s claims that the app had been restricted to law enforcement agencies, a New York Times report said it had also been used as a “secret plaything of the rich”, including by a billionaire grocery chain owner who, dining separately at the same upscale Manhattan restaurant, used the app to identify the man his daughter was on a date with.

Companies and individuals including US department store Macy’s, the National Basketball Association, and a sovereign wealth fund in the United Arab Emirates were reported as using the software.

Angelene Falk has been Australia’s information and privacy commissioner since 2018. Picture: Supplied

A venture capitalist backing Clearview, quoted in the same piece, said his school-aged daughters liked playing with the app.

“They like to use it on themselves and their friends to see who they look like in the world,” he said.

“It’s kind of fun for people,” he added.

