Police cop flak over facial tech

Published on 11/03/2020 | Written by Heather Wright


Facial recognition app Clearview AI

Australian law enforcement among agencies using secretive system…

A controversial AI facial recognition system, which uses more than three billion photos scraped from social media sites such as Facebook, LinkedIn and Instagram, is reportedly being used by Australian police.

Clearview AI, which bills itself as a ‘research tool used by law enforcement agencies to identify perpetrators and victims of crime’, allows users to upload a picture of someone and see any other public photos of that person, along with links to where the photos appeared.
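Clearview has not published details of how its matching works, but reverse face search systems of this kind are typically built on face embeddings: each indexed photo is reduced to a numeric vector, and an uploaded probe image is compared against the index by similarity. The sketch below is purely illustrative, using hypothetical function names and data rather than anything Clearview has disclosed.

```python
# Illustrative sketch of a generic embedding-based reverse face search.
# The gallery data and the embed_face()/detect_face() helpers referenced in
# the usage comment are hypothetical; this is not Clearview AI's pipeline.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search_gallery(probe: np.ndarray,
                   gallery: list[tuple[str, np.ndarray]],
                   top_k: int = 5) -> list[tuple[str, float]]:
    """Rank previously indexed (source_url, embedding) pairs against a probe."""
    scored = [(url, cosine_similarity(probe, emb)) for url, emb in gallery]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_k]

# Usage, assuming a hypothetical face detector and embedding model:
# probe = embed_face(detect_face("uploaded_photo.jpg"))
# matches = search_gallery(probe, indexed_gallery)  # -> [(url, score), ...]
```

In practice, a system indexing billions of photos would use an approximate nearest-neighbour index rather than the brute-force scan shown here, but the principle of returning source links ranked by similarity is the same.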

The company argues that it only searches the open web, doesn’t search any private or protected information, including private social media accounts, and is an ‘after-the-fact research tool, rather than surveillance’.

A New York Times investigation dubbed Clearview AI ‘the secretive company that might end privacy as we know it’.

That defence hasn’t stopped the company from raising the ire of many and attracting strong criticism from privacy advocates.

The technology is reportedly in use by more than 600 law enforcement agencies and, according to documents leaked to BuzzFeed News, other organisations including the FBI and US Customs and Border Protection – as well as retailers including Macy’s and Walmart.

Law enforcement customers include the Royal Canadian Mounted Police, which earlier this week said it would continue using the controversial technology, but would ‘limit’ its use.

Vermont’s Attorney General’s Office this week filed suit against the company, founded by Australian (and now US-based) Hoan Ton-That. The civil suit alleges that Clearview AI violates Vermont’s consumer protection statute and infringes on citizens’ right to privacy. Suits have also reportedly been filed against the company in Illinois and Virginia, while several US senators have asked Clearview to answer a slew of questions about the system.

Apple, meanwhile, blocked Clearview AI’s use on iPhones and suspended Clearview from its developer program at the end of February, saying the company violated the terms of its enterprise developer program. Facebook, YouTube, Twitter, LinkedIn and Venmo have all served cease and desist letters on the company.

The company shot into public prominence in January after a New York Times investigation dubbed it ‘the secretive company that might end privacy as we know it’.

At the time, Australian law enforcement denied using Clearview AI.

Now, however, a list of Clearview customers provided to BuzzFeed News has cast doubt on those denials, with the publication reporting that the Australian Federal Police, along with forces in Queensland, Victoria and South Australia, have run hundreds of searches on the system.

When iStart asked New Zealand Police whether they are using the system, or have plans to do so, it was told an Official Information Act request would be required for disclosure.

News of the potential use of Clearview in Australia comes amidst growing noise over facial recognition use. Last week the Digital Transformation Agency said the biometric – facial recognition – component of its myGovID is expected to be ready for public testing by mid-year.

Chief digital officer Peter Alexander told Senate Estimates: “We would like the biometric to be in by, say, mid-year, but we wouldn’t pressure that.

“This is about getting it right before we put it in.”

The myGovID offering would use one-to-one matching, verifying a face against an identity document such as a driver’s licence or passport.
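One-to-one matching compares a live capture against a single reference image, such as the photo on the supplied document, and accepts or rejects it against a similarity threshold, rather than searching a database of many faces as Clearview does. The sketch below illustrates that distinction only; the function names and threshold are assumptions, not details of the DTA’s implementation.

```python
# Illustrative sketch of one-to-one (1:1) face verification, as distinct from
# one-to-many (1:N) identification. The embed_face() helper and threshold are
# assumptions for illustration; this is not the DTA's myGovID implementation.
import numpy as np

MATCH_THRESHOLD = 0.6  # hypothetical cut-off; real systems tune this per model

def verify(live_embedding: np.ndarray, document_embedding: np.ndarray) -> bool:
    """Accept only if the live capture matches the single document photo."""
    similarity = float(np.dot(live_embedding, document_embedding) /
                       (np.linalg.norm(live_embedding) * np.linalg.norm(document_embedding)))
    return similarity >= MATCH_THRESHOLD

# Usage, assuming a hypothetical embedding model:
# accepted = verify(embed_face(selfie_capture), embed_face(passport_photo))
```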

The Parliamentary Joint Committee on Intelligence and Security knocked back Australian government plans for new legislation which would have enabled a range of facial recognition systems and sharing of information between jurisdictions and non-government entities.

The Clearview AI system is, at best, controversial. Mention of it is often accompanied by words like ‘dystopian’, with suggestions it will mean the end of privacy – especially given the prevalence of security cameras in cities.

Opponents also question the accuracy of the technology, noting that no independent assessments of its accuracy or bias are available, and point to ‘alarming’ reports that the company has provided its software to organisations in countries with ‘authoritarian regimes’ and poor human rights track records.

“The use of sophisticated facial recognition technology is concerning even in a democracy with strong civil liberties, but its export to certain foreign countries could enable mass surveillance and repression of minorities,” Senator Edward Markey says in a letter to Ton-That.

The company has already been the target of one data breach – the breach which saw its entire client list stolen.

 
