Explaining explainable AI

Published on 14/04/2022 | Written by Heather Wright


Explainable AI explained

LinkedIn’s win, health’s loss?…

LinkedIn says it has logged an eight percent lift in subscriptions after deploying an explainable AI-driven recommendation system for its sales team, in what is being seen as a breakthrough in getting AI to explain its actions in a way humans can understand.

The system predicts potential churn and upsell opportunities but goes one step further in also explaining how it arrived at that conclusion.
The system, LinkedIn says, helps its sales team ‘understand and trust our modelling results because they understand the key facts that influenced the model’s score’.

The explainability aspect of AI has been an ongoing issue. Recent years have seen the use of complex algorithms and AI become increasingly common. Most of us, whether aware of it or not, are engaging with AI every day – through social media platforms personalising our feeds and identifying friends (and fake news), web search engines personalising searches, banks using it for security and fraud detection, Netflix recommendations, or a multitude of business deployments. McKinsey is forecasting AI to add US$13 trillion to the global economy by 2030.

“The desire to engender trust through current explainability approaches represents a false hope.”

But the ‘black box’ nature of AI, and its potential for bias, has raised concerns for many, and highly publicised missteps – witness Microsoft’s Tay chatbot on the rampage, the robodebt debacle or Amazon’s hiring algorithm fiasco – have done little to alleviate those concerns.

It is, essentially, a case of take it or leave it, with AI delivering accurate results through incomprehensible means. Explaining exactly why the software made a particular prediction in a particular case can be nigh on impossible, prompting Tesla and SpaceX founder Elon Musk to proclaim that ‘AI is far more dangerous than nukes’.

In the United States, the House and Senate recently reintroduced the Algorithmic Accountability Act, intended to regulate AI and hold organisations accountable for their use of algorithms and other automated systems. The Federal Trade Commission has warned for several years that AI which is not explainable could be investigated, while the EU is looking to pass the Artificial Intelligence Act, which includes a push for algorithmic transparency.

Enter the emerging field of explainable AI, or XAI. It’s an area of big investment, with both startups and large, established technology companies racing to make AI not just deliver accurate results, but explain how it arrived at those results.

Forrester says the market for responsible AI solutions will double this year as companies seek fairness and transparency.

In LinkedIn’s case, the company has been working on its solution, now part of its Responsible AI program, for two years. In a blog post this month, Jilei Yang, LinkedIn senior applied researcher for machine learning and optimisation, detailed the success of the Project Account Prioritizer program, which uses the user-facing explainable AI system CrystalCandle – previously Intellige – to identify which subscribers are likely to renew a membership, which are likely to cancel, and who is a good upsell prospect.

Previously, sales reps relied on a combination of human intelligence and a lot of time spent sifting through offline data to identify which accounts were likely to continue doing business with the company and what products they might be interested in. Identifying accounts likely to churn was a similarly huge time draw for the sales team, Yang says.

Now AI handles the analysis, and results are presented with ‘narrative insights’ explaining why an account might be an upsell prospect and highlighting the underlying trends and reasoning.
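LinkedIn hasn’t published CrystalCandle’s internals, but the rough Python sketch below illustrates the general idea behind such narrative insights: take the per-feature contributions to a churn score and turn the largest ones into a plain-language note a sales rep can read. All feature names, weights and wording here are invented for illustration and are not drawn from LinkedIn’s system.

# Hypothetical weights for a simple additive churn score
# (all names and numbers are invented for illustration).
WEIGHTS = {
    "seats_unused_pct": 1.8,       # more unused seats -> higher churn risk
    "logins_last_30d": -0.05,      # more logins -> lower churn risk
    "support_tickets_open": 1.2,   # open tickets -> higher churn risk
}

FEATURE_LABELS = {
    "seats_unused_pct": "share of unused seats",
    "logins_last_30d": "logins in the last 30 days",
    "support_tickets_open": "open support tickets",
}

def churn_contributions(account, baseline):
    """Per-feature contribution to the churn score, relative to an average account."""
    return {
        name: weight * (account[name] - baseline[name])
        for name, weight in WEIGHTS.items()
    }

def narrative_insight(account, baseline, top_n=2):
    """Render the largest contributions as a short note a sales rep can read."""
    contribs = churn_contributions(account, baseline)
    top = sorted(contribs.items(), key=lambda kv: abs(kv[1]), reverse=True)[:top_n]
    phrases = [
        f"{FEATURE_LABELS[name]} ({'raising' if value > 0 else 'lowering'} churn risk)"
        for name, value in top
    ]
    return "Key factors for this account: " + ", ".join(phrases)

baseline = {"seats_unused_pct": 0.2, "logins_last_30d": 40, "support_tickets_open": 1}
account = {"seats_unused_pct": 0.6, "logins_last_30d": 12, "support_tickets_open": 4}

# Prints a one-line explanation naming the biggest drivers of this account's score.
print(narrative_insight(account, baseline))

In a real system the contributions would typically come from a trained model’s feature attributions and the wording from templates, but the shape of the output – here is the score, and here are the facts that drove it – is the same idea.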

“These narratives give more support for sales teams to trust the prediction results and better extract meaningful insights,” Yang says.

“The combination of Project Account Prioritizer and CrystalCandle has deepened our customer value by increasing the information and speed with which our sales teams can reach out to customers having poor experience with the products, or offer additional support to those growing quickly.”

While it might be working for LinkedIn, not everyone is sold on explainable AI. In a paper published in the Lancet, the Australian Institute for Machine Learning’s Luke Oakden-Rayner, MIT computer scientist Marzyeh Ghassemi and Andrew Beam from Harvard’s Department of Epidemiology argue that the idea that explainable AI will engender trust, at least in the healthcare workforce, is a ‘false hope’.

“We believe that the desire to engender trust through current explainability approaches represents a false hope: that individual users or those affected by AI will be able to judge the quality of an AI decision by reviewing a local explanation (that is, an explanation specific to that individual decision),” their report says.

“These stakeholders might have misunderstood the capabilities of contemporary explainability techniques – they can produce broad descriptions of how the AI system works in a general sense but, for individual decisions, the explanations are unreliable or, in some instances, only offer superficial levels of explanation.”
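For readers unfamiliar with the jargon, a ‘local explanation’ is simply a per-case attribution: numbers describing how much each input pushed one individual prediction up or down. The short sketch below is not from the paper; it produces such an explanation with the open-source shap library on synthetic data (it assumes shap and scikit-learn are installed), and shows the kind of individual-decision explanation whose reliability the authors are questioning.

import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Train a simple model on synthetic data (a stand-in for a clinical model).
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# A "local explanation": per-feature attributions for one individual case.
explainer = shap.TreeExplainer(model)
attributions = explainer.shap_values(X[:1])

# These numbers look precise, but the authors argue that for a single
# decision they can be unreliable or only superficially informative.
print(attributions)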

Others suggest the technology remains too unreliable, leaving AI, for the time being at least, still working on those explanations.
