Published on 05/09/2019 | Written by Newsdesk
As AI development increasingly goes open source, what does it mean for efforts to regulate the technology?…
As efforts to regulate artificial intelligence heat up, the movement towards open source AI development is also gathering momentum – and adding to the debate around how AI should be regulated.
In May, Cisco open-sourced its MindMeld platform, which has been used by the likes of Starbucks to build conversational assistants. That was followed in July by Uber Technologies unveiling an open source AI engine, the Plato Research Dialogue System. A month later Microsoft open sourced a conversational AI toolkit to give chatbots ‘personality’.
While those are all on the conversational AI front – itself a growing field – AI and machine learning have seen a strong shift towards open source in recent years. Back in 2015 Google open sourced the TensorFlow library. Earlier this year AWS launched its open-source Neo-AI project. PyTorch, CNTK, MXNet and Chainer, among others, are all open source libraries offering ‘deep learning’ building blocks.
“54 percent of tech executives believe regulation of AI technology is critical.”
It’s a logical move: companies increasingly see AI capabilities as strategically important and best developed in-house, and open source libraries give their teams the building blocks to do so.
According to Gartner, more than half of organisations already have at least one AI deployment up and running (the average number of projects in place is four) and are planning to add more projects this year.
At the same time, companies – along with the wider industry and governments – are grappling with questions around the ethics of AI and whether the models being used are fair and explainable.
The 2019 Edelman Artificial Intelligence survey, conducted in conjunction with the World Economic Forum, found that 60 percent of the general population and 54 percent of tech executives believe regulation of AI technology is critical for its safe development.
And in May, 42 countries, including Australia and New Zealand, signed on to support the OECD Principles on Artificial Intelligence – effectively the world’s first intergovernmental policy guidelines for AI.
While open source may be the fastest way to deliver innovation, the shift adds to the debate around regulation. Because anyone can easily modify code from the libraries, tracking and understanding changes – and their potential impacts – in a timely manner becomes even more complex.
IBM, however, says open source is a key enabler of building trust in AI, because the code and techniques are visible to everyone.
The vendor, which has open sourced three ‘trusted’ AI toolkits in the past two years (AI Fairness 360, Adversarial Robustness 360 and AI Explainability 360), joined the Linux Foundation AI (LF AI) last month in what it says is a move to help drive the development of open source trusted AI workflows.
In a blog post last month, IBM noted that in the previous six months the overall open source AI ecosystem described in the LF AI landscape had grown from 80 to more than 170 projects, with a combined 350 million lines of code from more than 80 different organisations around the world.
“This level and pace of open source development is similar to the earliest days of Linux, blockchain, cloud, and containers development,” IBM says.
“The basic tools needed to start building fairness, robustness, and explainability into enterprise AI workflows are now available to developers and data scientists, so it’s time for IBM to team with Linux Foundation AI and other partners to work together on this important space, and build one of the foundations for trusted AI.”
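For developers curious what those toolkits look like in practice, here is a minimal sketch (not drawn from IBM’s announcement) using the open source AI Fairness 360 (aif360) Python library alongside pandas. The toy loan-approval data, column names and group definitions below are hypothetical, purely for illustration.

    # A minimal sketch of a fairness check with IBM's open source aif360 library.
    # The data and column names are hypothetical illustrations, not IBM's examples.
    import pandas as pd
    from aif360.datasets import BinaryLabelDataset
    from aif360.metrics import BinaryLabelDatasetMetric

    # Toy loan-approval data: 'sex' is the protected attribute (1 = privileged group),
    # 'label' is the outcome (1 = favourable, e.g. loan approved).
    df = pd.DataFrame({
        "sex":    [1, 1, 1, 1, 0, 0, 0, 0],
        "income": [50, 60, 55, 70, 48, 52, 58, 45],
        "label":  [1, 1, 1, 0, 1, 0, 0, 0],
    })

    dataset = BinaryLabelDataset(
        df=df,
        label_names=["label"],
        protected_attribute_names=["sex"],
        favorable_label=1,
        unfavorable_label=0,
    )

    metric = BinaryLabelDatasetMetric(
        dataset,
        privileged_groups=[{"sex": 1}],
        unprivileged_groups=[{"sex": 0}],
    )

    # Disparate impact: ratio of favourable-outcome rates between groups (1.0 = parity).
    print("Disparate impact:", metric.disparate_impact())
    # Statistical parity difference: gap in favourable-outcome rates (0.0 = parity).
    print("Statistical parity difference:", metric.statistical_parity_difference())

Checks like these are the kind of fairness measurements the LF AI ecosystem aims to make routine in enterprise AI workflows, visible to anyone because the code behind them is open.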