Overcoming the ethical challenges of artificial intelligence

Published on 08/08/2019 | Written by Audrey William



First, ask the right questions…

As artificial intelligence (AI) becomes increasingly entwined in daily life, questions are being raised about its ethical implications. While there’s little doubt the technology will transform industries and boost productivity, what’s less clear is its impact on society generally.

Because AI involves the analysis of very large data sets, security and personal privacy are of particular concern. Data from one source can readily be combined with data from others. As a result, detailed personal profiles can be developed that, should they be misused, could cause pain and hardship.

At the same time, AI systems can be misconfigured and generate erroneous results that do not accurately reflect the underlying data. Should this go unnoticed, decisions based on these results will be flawed and could have a detrimental impact.

For this reason, having effective regulations and governance frameworks in place will become a vital part of any AI-related project. This need is already widely recognised within the IT industry.

Recent research conducted by Ecosystm found 70 percent of IT decision makers say that cybersecurity and privacy concerns are a challenge when implementing an AI solution. Of those surveyed, almost 60 percent highlighted regulation as a key challenge and one about which they are frequently questioned.

The need for better regulation
According to the Institute of Electrical and Electronics Engineers (IEEE), regulatory policies governing the use of AI and related data sets are required in a range of key areas. These include:

  • Legal accountability: These policies should cover the potential harm that could be caused by AI systems and tools, both to organisations and individuals, and the steps that will be taken to mitigate such harms.
  • Data usage transparency: Clear policies are needed to cover what data is accessed by AI systems and how it is used. Usage also needs to be regularly audited to ensure compliance and transparency (a simple audit-logging sketch follows this list).
  • Embedded values: The values of an organisation to which staff are held should also be embedded into the AI systems themselves. This helps to ensure that decisions made by the software will be in line with decisions made by humans.
  • A governance framework: Every organisation using AI systems should have in place a governance framework that ensures the processes and procedures followed by AI systems do not infringe on basic human rights.
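
To make the data usage transparency point concrete, the following minimal sketch (in Python, using hypothetical file and field names that are not drawn from the article) shows one way an AI pipeline could record every data-set access in an audit log, so that usage can later be reviewed against the organisation's policies. It is an illustration of the idea, not a prescribed implementation.

# Minimal sketch of an audit log for AI data access (hypothetical names).
# Each read of a data set by the AI pipeline is recorded with who accessed it,
# what was accessed and why, so usage can be audited for compliance later.
import csv
import datetime


def log_data_access(log_path, user, dataset, purpose):
    """Append a single audit record to a CSV log file."""
    with open(log_path, "a", newline="") as f:
        writer = csv.writer(f)
        writer.writerow([datetime.datetime.utcnow().isoformat(), user, dataset, purpose])


def load_dataset(path, user, purpose, log_path="data_access_log.csv"):
    """Load a data set for the AI system, logging the access first."""
    log_data_access(log_path, user, path, purpose)
    with open(path, newline="") as f:
        return list(csv.DictReader(f))


# Example (hypothetical): a training job records why it is reading customer data.
# rows = load_dataset("customers.csv", user="training-job", purpose="churn model training")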

Asking the right questions
To be truly effective, such regulations must be put in place at the very start of any AI project. Trying to retrofit them after a system has gone live is difficult and unlikely to provide the level of protection required.

For this reason, there needs to be close consultation between the technology vendor deploying the AI system and the organisation that will put it to use. Both parties must have a common understanding and agreement on what is ethical and right. This will reduce the chance of confidentiality and other problems arising once the system has gone live.

There are seven key questions that need to be answered before beginning an AI system deployment. They are:

  1. What types of data will be used? Clarity is needed on precisely what data sets will be analysed by the AI system. This will ensure only data that is directly relevant is used and overreaches are avoided.
  2. Is the data biased? Care must be taken to ensure data is not biased in areas such as race and gender. Any bias that exists will have implications for the results produced by the system (a basic check is sketched after this list).
  3. How will data be used and where will it be stored? Many data sets are likely to contain personal information and so being clear on exactly where they will be stored and how they will be used is vital.
  4. Who will check the analysis algorithms? AI tools use complex and powerful algorithms that evolve over time. Responsibility needs to be assigned so that their functions are regularly reviewed to ensure they are not crossing any boundaries or operating in error.
  5. Can developers work with users? To ensure proper functioning of an AI system, it’s important for the developers to work closely with those who will use the tools. This ensures the system will be designed to perform as expected while protecting confidentiality and privacy.
  6. What happens to data once it’s no longer required? Clear policies must be agreed on that cover how data sets are managed once they are no longer required by the AI system. Who will destroy them and how? After what period will this occur?
  7. Who is responsible for ongoing monitoring? An appropriate person or team must be assigned the task of regularly reviewing the AI system and its outputs to ensure the organisation is within its declared boundaries of ethics and transparency at all times.
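
As a simple illustration of question 2, the sketch below (plain Python, with hypothetical column names and made-up example records) compares the rate of positive outcomes across groups in a data set before it is fed to an AI system. A real bias audit is far more involved; a gap in these rates is a prompt for further investigation, not proof of bias.

# Minimal sketch of a pre-training bias check (hypothetical field names).
# It compares the share of positive outcomes across groups in the data set.
from collections import defaultdict


def outcome_rates_by_group(records, group_field, outcome_field):
    """Return the share of positive outcomes for each value of group_field."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for row in records:
        group = row[group_field]
        totals[group] += 1
        if row[outcome_field]:
            positives[group] += 1
    return {group: positives[group] / totals[group] for group in totals}


# Example with hypothetical loan-application records:
data = [
    {"gender": "F", "approved": True},
    {"gender": "F", "approved": False},
    {"gender": "M", "approved": True},
    {"gender": "M", "approved": True},
]
print(outcome_rates_by_group(data, "gender", "approved"))
# {'F': 0.5, 'M': 1.0} - a gap of this size would warrant closer review of the data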

Another step that should be considered is working with an organisation that specialises in all aspects of the ethics and governance relating to AI. This relationship should begin prior to the start of any implementation project.

Examples of organisations well qualified to offer assistance in this area include the Europe-based Foundation for Responsible Robotics (FRR), the US-based AI Now Institute, the Algorithmic Justice League and the Machine Intelligence Research Institute (MIRI).

By working with such organisations, thoroughly considering the issues, and asking questions before embarking on an AI project, an organisation can ensure issues around privacy and governance are fully addressed.

In this way, the organisation can harness the power of AI tools while at the same time avoiding any potential issues that might arise in the future.

ABOUT AUDREY WILLIAM//

Audrey William is principal advisor, enterprise communications, contact centre and customer experience at Ecosystm.
