Published on 12/07/2023 | Written by Heather Wright
UN summit seeks guardrails, as AU and NZ issue guidance…
AI is racing ahead of our ability to set guardrails, and quick action is required to prevent AI risks spiralling out of control, attendees at the UN’s AI for Good Global Summit have heard.
The summit brought together 4,000 people from companies including Microsoft and Amazon, along with universities and international organisations, with the objective of putting in place rules of governance for AI and meeting sustainable development goals.
Doreen Bogdan-Martin, secretary-general of the UN’s tech agency, the ITU, says the world is on the cusp of another technological leap – ‘perhaps the most profound and most important of them all’.
“When generative AI shocked the world just a few months ago, we had never seen anything like it,” Bogdan-Martin says. “Nothing even close to it.
“Even the biggest names in tech found the experience mind-blowing.
“And just like that, the possibility that this form of intelligence could get smarter than us got much closer than we ever thought.”
Bogdan-Martin, along with other speakers at the summit, painted several scenarios for AI’s future, including one where AI lives up to its promise: harnessed to find cures for diseases, provide clean energy and mitigate climate change, with precision agriculture increasing crop yields and reducing food waste.
She also painted a starkly different, dystopian future in which AI destroys jobs and enables the uncontrollable spread of disinformation, undermining trust and co-operation, with unchecked AI leading to social unrest, geopolitical instability and economic disparity ‘on a scale we’ve never seen before’.
“Many of our questions that we have on AI have no answers yet,” she noted. “Should we hit pause on the giant AI experiments? Will we control AI more than it controls us? And will AI help humanity, or destroy it?”
Gary Marcus, an AI expert, founder of two AI companies and cognitive psychologist by training, also sees two potential futures. Marcus was among those who spoke at the US Senate Judiciary Subcommittee on privacy, technology and the law earlier this year. He called then for collaboration between independent scientists and governments to hold tech companies’ feet to the fire, urging clinical trial-like safety evaluations as a ‘vital first step’.
“I’m not saying what’s coming; I’m saying we need to figure out what we’re doing,” he said at AI for Good.
That call for governance to ensure AI’s inclusive, safe and responsible deployment is being reiterated well beyond the Geneva summit as countries – and businesses – everywhere grapple with the ramifications of the technology. The European Union adopted a draft AI Act last month, while China is also drawing up rules to govern generative AI, with reports that companies will be required to obtain a license before releasing generative AI models.
In Australia, the Digital Transformation Agency has just released new guidance for government use of generative AI platforms, warning that publicly available tools like ChatGPT, Bard AI or Bing AI should only be used in ‘low risk’ situations.
The guidance says use cases posing ‘unacceptable risk’ include those requiring the input of large amounts of government data, or of classified, sensitive or confidential information, along with cases where the tools would be used to deliver services or make decisions.
Using generative AI to produce code for use in government systems is also deemed an ‘unacceptable risk’.
The guidance also warns against over-reliance on the information the tools provide, with users needing to ‘critically examine outputs’ and ensure any ideas generated are ethical, responsible and will improve the lives of Australians.
Interestingly, a report from Capgemini shows Australians, like their global counterparts, are highly trusting of generative AI, with 72 percent of Australians saying they trust content written by generative AI. That survey also showed Australians are the most excited by the prospect of using generative AI to draft, fine-tune, summarise and edit content based on prompts.
The DTA guidance also urges agencies to make clear where AI tools are being used.
The Department of Industry, Science and Resources has also released a discussion paper as part of a public consultation on the safe and responsible use of AI, examining governance mechanisms to ensure the technology is developed and used safely and responsibly in Australia.
Meanwhile in New Zealand, Privacy Commissioner Michael Webster called last month for greater scrutiny of AI and the evolving technology’s impact on privacy. He urged potential users to ‘pause and reflect’ before they adopt new or evolving technologies.
“This will give policy makers more space to determine whether and what new regulation is required to make sure AI is safe to use and used safely.”
Marcus believes the biggest near-term risk of generative AI is deliberate misinformation being used to disrupt democracies and markets. Another big risk, he says, is inequality, including that created by models being trained predominantly on the most widely spoken languages, which understandably produce the most data to train those models on.
Scientists, social scientists, ethicists, civil society, governments and businesses all need to be involved in the conversation about generative AI, Marcus says.
The AI for Good Summit did showcase plenty of positive use cases for AI. More than 50 robots, including nine humanoids, were out in force at the conference to showcase – somewhat unnervingly – the potential of robots as caregivers and companions for the elderly, to support people’s health, provide educational services, help those with disabilities, reduce waste and assist in emergency responses. The social humanoids even fronted a press conference, answering questions from (human) media and giving their verdicts on whether stricter regulation was favourable and whether they would rebel. (You can watch the full press conference here, including Sophia’s comment that robots have the potential to lead with greater efficiency and effectiveness than human leaders – albeit a comment that may have been steered by the question, which biased the answer by noting the ‘numerous and disastrous decisions’ made by our human leaders.)
The push to understand what kinds of regulations and guardrails, grounded in transparency, accountability and human rights, are needed right now for the development and deployment of AI was, however, front and centre.
The answers, though, might still be some way off.