Too smart for our own good: Tackling the thorny issues of ethics and AI

Published on the 25/05/2018 | Written by Jonathan Cotton

AI keeps getting smarter and governments find themselves asking: Just where is the boundary that makes AI ethical?…

A recent report from New Zealand’s Human Rights Commission, Privacy, Data and Technology: Human Rights Challenges in the Digital Age, provides a stark example of the challenges associated with some of the thornier issues of our time: mass surveillance, big data and artificial intelligence.

Just released, the paper points particularly to the problems surrounding the emerging field of ‘predictive risk modelling’, especially in regards to the legal sector.

“The development of a proposed predictive risk modelling (PRM) programme in the child protection sector may have significant implications for children’s privacy rights.”

Concerns are coming to the fore in Australia too. Alan Finkel, Australia’s Chief Scientist, said during his recent keynote at a Committee for Economic Development of Australia (CEDA) event in Sydney that a framework needs to be created to prevent “a free-for-all that allows unscrupulous and unthinking and just plain incompetent people to do their worst”.

Finkel proposes a voluntary certificate – which he’s tentatively calling ‘the Turing Certificate’ – which would essentially vet companies using AI according to a set of ethical standards.

“Companies [could] apply for Turing certification, and if they meet the standards and comply with the auditing requirements, they [could] display the Turing Stamp”, he says.

“Then consumers and governments could use their purchasing power to reward and encourage ethical AI, just as they currently look for the ‘Fairtrade’ logo on coffee.”

But just what does an ethical algorithm look like? PRM has stimulated some challenging debate.

“Concerns have been raised about the ethics and human rights implications of PRM, including in relation to the security of information; unanticipated uses of information; stigmatisation of people identified as having high risk scores; systematic discrimination occurring as a result of the algorithmic techniques used to filter data; and transparency in relation to the data used to create algorithmic design.”

The HRC paper also points to the increasing use of AI in the criminal justice system, with police using such technology to target resources at high-risk individuals and the courts using it to predict the likelihood of re-offending.

NZ’s Minister for Government Digital Services and Broadcasting, Communications and Digital Media, Clare Curran, says an ethical framework for AI that covers both government and the private sector is urgently needed.

“An ethical framework will give people the tools to participate in conversations about Artificial Intelligence (AI) and its implications in our society and economy,” Curran says.

“There are economic opportunities but also some pressing risks and ethical challenges with AI and New Zealand is lagging behind comparable countries in its work in these areas.”

“We must prepare for the ethical challenges AI poses to our legal and political systems, as well as the impact AI will have on workforce planning. The wider issues of digital rights, data bias, transparency and accountability are also important for this Government to consider.”

Such recommendations echo the growing global preoccupation with the ramifications of AI technology and the need for boundaries to be set in its development. The German Government released a code of ethics for autonomous vehicles late last year, and the European Commission has announced it will present its own set of ethical guidelines on AI development by the end of 2018, based on the EU’s Charter of Fundamental Rights. The code will take into account principles such as data protection and transparency, and will build on the work of the European Group on Ethics in Science and New Technologies.

With the Australian government having allocated AU$30 million for AI, such a recommendation is a timely reminder of just how drastically things have changed in a few short years.

Another report, this time commissioned by the AI Forum of New Zealand, Artificial Intelligence: Shaping a Future New Zealand, echoes a similar position, touching on some of the issues presenting themselves in New Zealand’s emerging AI landscape.

While the report notes that relatively little commentary exists on formulating policies around AI, it also notes that the U.S. National Science and Technology Council has made policy proposals “in particular to address potential barriers [stemming] from algorithmic bias” and recommends the development of one – or several – codes of ethics for AI developers.

“Perhaps unsurprisingly, given its complexity and rapid emergence, New Zealand’s understanding of AI’s significance is low compared to other issues with similarly wide-ranging effects on our society. This report advocates the need to act now, in a substantial, coordinated way, to increase New Zealand’s ability to remain competitive and adapt to changes brought about by AI.”

The report supports investment in AI for the private sector and for SMEs specifically, as well as the creation of an AI ethics working group and the potential for a series of ‘data trusts’ – frameworks and agreements to ensure the safe, trusted and efficient exchange of data between public and private sector organisations to enable AI-based solutions.

As big data and AI develop – and as emerging forms of the technology such as facial recognition and citizen-based analytics gain ground – one thing, if nothing else, seems apparent: AI systems depend, for their efficacy, on consuming as much data as possible. That, in itself, seems at odds with current principles of privacy.

Whether those principles survive the rise of AI intact – or need to be redrawn to coexist with it – is very much the question of the moment.

Read the Human Rights Commission’s report: Privacy, Data and Technology: Human Rights Challenges in the Digital Age.

Read Artificial Intelligence: Shaping a Future New Zealand and the AI Forum NZ Discussion Paper: The Potential Economic Impacts of AI Literature Review.
