Is AI really going to run amok?

Published on 18/02/2021 | Written by Jonathan Cotton



And if so, who’s responsible for regulating it?…

As artificial intelligence technology evolves ever faster, surpassing human decision-making in unexpected and perhaps dangerous ways, can industry take accountability for its role in these emerging AI hazards?

A survey from software company Pega, canvassing 1,350 C-level executives from around the world, seeks to find out. The paper, entitled Future-proof 2025: A look at top tech trends, finds business leaders expecting bad things from AI technology in the near future.

According to the paper, which surveys leaders in financial services, retail, healthcare, manufacturing, telcos and the public sector, business leaders think that a lack of accountability within the private sector will likely lead to governments ‘taking over responsibility’ for AI regulation, all within the next five years.

“Sixty-five percent of respondents felt external governance was insufficient to manage AI adoption,” says the report.

“All surveyed industries see trouble ahead for AI governance and regulation that do not go far enough.”

According to the survey, the private sector will ‘fail to provide the governance necessary’ to keep artificial intelligence in check, with governments around the world ‘forced to take over within five years’.

So what’s with the feckless attitude to good AI governance? The survey shows business leaders seemingly caught between the burdens of regulatory compliance and their own limitations.

Respondents from all industry sectors described the challenges of conforming to GDPR, bank directives and other regulatory frameworks, with 27 percent of respondents saying they have no designated leader in AI governance.

Manufacturing, healthcare, and financial services all reported significant ‘AI governance gaps’ in internal leadership and formal strategies.

“This frustration with external governing frameworks actually reveals their natural limitations,” says the report, “and the urgent need and responsibility for enterprises to step up and create more comprehensive governance frameworks.”

So who should be filling this perceived AI governance leadership void? The public or private sector?

“Though the vast majority (78 percent) of respondents prefer full or equally shared responsibility for regulation, the numbers flip when asked about expectations for five years out, when 75 percent expect the government will be largely or fully responsible for governance, which is clearly far from what respondents feel is the most appropriate balance.

“Whatever the future actually brings, the stakes are high: more than half (53 percent) are concerned that external and/or government regulation will stifle their innovation.”

The research urges businesses to take better control and stronger accountability for the governance, integration, innovation and adoption of emerging technologies so they can better enact change within their organisations.

The New Zealand government has not yet developed an AI strategy, but industry body AI Forum of New Zealand has published a set of guiding principles. Australia has similarly taken a hands-off approach, developing the AI Ethics Framework, a set of voluntary AI Ethics Principles to encourage organisations using AI systems to ‘strive for the best outcomes for Australia and Australians’.

“Trust is central to the widespread acceptance and adoption of AI,” says Nicole Gillespie, professor of management at the University of Queensland.

“However, our research suggests the Australian public is ambivalent about trusting AI systems.”

The survey of over 2,500 Australians, conducted in June and July of last year, found that Australians have little confidence in commercial organisations to develop and use AI systems responsibly (37 percent reported no or low confidence).

Overwhelmingly (96 percent), Australians expected AI to be regulated and most expected external, independent oversight.

Most Australians (over 68 percent) have moderate to high confidence in the federal government and regulatory agencies to regulate and govern AI in the best interests of the public.
