Microsoft: It’s time to regulate facial recognition

Published on 17/07/2018 | Written by Heather Wright


Because tech companies just can’t be trusted to do it right…

Microsoft has called for regulation of facial recognition technology – which the tech giant itself provides – in the wake of ‘deficiencies’ in the technology and increasing concerns about government use and the erosion of civil rights.

Microsoft president Brad Smith says the issues around computer-assisted facial recognition ‘call for thoughtful government regulation and for the development of norms around acceptable uses’.

“The only effective way to manage the use of technology by a government is for the government to proactively manage the use itself.”

In a blog post, Smith raises questions about potentially ‘sobering’ government uses of the technology, including tracking citizens’ movements and monitoring political rallies, saying “The only effective way to manage the use of technology by a government is for the government proactively to manage the use itself.

“If there are concerns about how a technology will be deployed more broadly across society, the only way to regulate this broad use is for the government to do so,” Smith, who is Microsoft’s chief legal officer, says.

His comments come just weeks after Microsoft came under scrutiny, with a small minority of GitHub users up in arms about Microsoft providing Azure cloud services, including ‘the ability to use deep learning capabilities to accelerate facial recognition and identification’, to US Immigration and Customs Enforcement.

Microsoft, which has a long, and often acrimonious, relationship with regulators dating back to the antitrust case of the 1990s, has since said ‘the contract in question’ isn’t being used for facial recognition.

But Microsoft isn’t alone in being caught up in negative publicity about the growing reach of technology, particularly when it comes to government use. Employees at Google, Salesforce and Amazon have all urged their bosses to dump government contracts over concerns about the technologies’ impact on human rights.

While facial recognition has been in use for a number of years – witness Facebook’s tagging suggestions for photos, or the computers and smartphones that use facial recognition rather than passwords – Smith says rapid improvements in cameras, sensors and machine learning, combined with ever-larger datasets as increasing numbers of images are stored online, and the use of cloud to connect all the data and technology, mean regulation is becoming necessary.

The use of facial recognition technologies has caused headaches for a number of companies, including Facebook, whose rollout of facial recognition tools in the EU earlier this year sparked claims by privacy and consumer groups that the tools violate privacy by not obtaining appropriate user consent. The company is also facing a class action lawsuit over its use of the technology in Illinois, US.

Meanwhile, Amazon’s push to get police departments using Rekognition, its image detection, recognition and deep learning platform, has seen more than two dozen civil rights organisations call for the tech giant to stop selling it to law enforcement.

MIT Media Lab’s Gender Shades research found that facial recognition is subject to biases, with Microsoft, IBM and Face++ systems becoming less accurate the darker the skin – and the fairer the sex. The research concluded that leading tech companies’ commercial AI systems significantly misgendered women and darker-skinned individuals, with the ‘pale male’ overrepresented in existing benchmark datasets.

Smith says regulation – informed first by a bipartisan and expert commission – is needed ‘today’ and raised several issues as ‘starting points’ for consideration, including whether organisations should be required to gain prior consent before collecting individuals’ images, and whether individuals have a right to know what photos have been collected and stored with their names and faces identified.

In the meantime, Microsoft says it will be talking with customers, academics and human rights and privacy groups, with Smith also promising the company will be more transparent about its use of the technology.

At a time when use of facial recognition is increasing across Australia and New Zealand – with Qantas just announcing it is trialling the technology at Sydney Airport and NZ Police contemplating an upgrade to their CCTV network to introduce facial recognition capabilities – the debate around Big Brother’s constant gaze is sure to intensify.
