Published on 15/02/2018 | Written by Newsdesk
Advice from Gartner on how to successfully get projects across the line…
Gartner’s 2018 CIO Agenda Survey shows that while just four percent of CIOs have implemented AI, a further 46 percent have plans to do so. “Despite huge levels of interest in AI technologies, current implementations remain at quite low levels,” said Whit Andrews, Gartner research VP. “However, there is potential for strong growth as CIOs begin piloting AI programs through a combination of buy, build and outsource efforts.”
As with most emerging or unfamiliar technologies, early adopters are facing obstacles to the progress of AI in their organisations. Gartner analysts have identified the following four lessons that have emerged from these early AI projects.
- Aim low at first
“Don’t fall into the trap of primarily seeking hard outcomes, such as direct financial gains, with AI projects,” said Andrews. “In general, it’s best to start AI projects with a small scope and aim for ‘soft’ outcomes, such as process improvements, customer satisfaction or financial benchmarking.”
It’s advice we’ve heard before: last year, Xero’s Sam Daish pointed out that AI should focus first on the little things.
And, continued Gartner, expect AI projects to produce, at best, lessons that will help with subsequent, larger experiments, pilots and implementations. In some organisations, a financial target will be a requirement to start the project. “In this situation, set the target as low as possible,” said Andrews. “Think of targets in the thousands or tens of thousands of dollars, understand what you’re trying to accomplish on a small scale, and only then pursue more-dramatic benefits.”
- Focus on augmenting, not replacing, people
Historically, big technological advances have often been associated with reductions in staff headcount. While reducing labour costs is attractive to business executives, it is likely to create resistance from those whose jobs appear to be at risk, and organisations that pursue this line of thinking can miss real opportunities to use the technology effectively. “We advise our clients that the most transformational benefits of AI in the near term will arise from using it to enable employees to pursue higher-value activities,” added Andrews.
Gartner predicts that by 2020, 20 percent of organisations will dedicate workers to monitoring and guiding neural networks.
“Leave behind notions of vast teams of infinitely duplicable ‘smart agents’ able to execute tasks just like humans,” said Andrews. “It will be far more productive to engage with workers on the front line. Get them excited and engaged with the idea that AI-powered decision support can enhance and elevate the work they do every day.”
- Plan for knowledge transfer
Gartner said most organisations aren’t well prepared to implement AI. Specifically, they lack internal data science skills and plan to rely heavily on external providers to fill the gap. Fifty-three percent of organisations in the CIO survey rated their own ability to mine and exploit data as “limited”, the lowest level on the survey’s scale.
Gartner predicts that through 2022, 85 percent of AI projects will deliver erroneous outcomes due to bias in data, algorithms or the teams responsible for managing them.
“Data is the fuel for AI, so organisations need to prepare now to store and manage even larger amounts of data for AI initiatives,” said Jim Hare, Gartner research VP. “Relying mostly on external suppliers for these skills is not an ideal long-term solution. Therefore, ensure that early AI projects help transfer knowledge from external experts to your employees, and build up your organisation’s in-house capabilities before moving on to large-scale projects.”
- Choose transparent AI solutions
AI projects will often involve software or systems from external service providers. It’s important that some insight into how decisions are reached is built into any service agreement. “Whether an AI system produces the right answer is not the only concern,” said Andrews. “Executives need to understand why it is effective and offer insights into its reasoning when it’s not.”
Although it may not always be possible to explain all the details of an advanced analytical model, such as a deep neural network, it’s important to at least offer a visualisation of the potential choices and the factors behind them. In fact, in situations where decisions are subject to regulation and auditing, it may be a legal requirement to provide this kind of transparency.
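As a rough illustration of the kind of transparency being described (this sketch is not part of Gartner’s guidance; the dataset, model and scikit-learn calls are illustrative assumptions), permutation feature importance is one common way to show which inputs a trained model leans on most when making predictions:

```python
# Minimal sketch: ranking the inputs a model relies on, using
# permutation feature importance. Dataset and model are placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Illustrative data standing in for whatever the AI system is trained on.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops;
# large drops mark features the model depends on heavily.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Report the five most influential features.
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: pair[1], reverse=True)
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

In a regulated setting, a ranked factor list of this sort could be logged alongside each automated decision, giving executives and auditors a view of why the system behaved as it did even when the underlying model is hard to interpret directly.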