Published on 25/01/2024 | Written by Heather Wright
Days of ‘do what you want’ coming to end…
Responsible AI is on the agenda for both Australia and New Zealand with Australia’s Federal Government proposing mandatory safeguards for high-risk AI use as part of its interim response to safe and responsible AI, while in New Zealand an industry body has launched a discussion paper as it attempts to nudge government and SMEs towards action.
“The days of self-regulation are gone,” Australia’s Ed Husic, Minister for Industry and Science, said in launching the interim response to the Safe and Responsible AI in Australia consultation last week.
“The whole let it rip, do what you want, you’re out there you can innovate with no boundary, I think we’ve passed that. Those days are gone,” he says.
He says the response, which follows a consultation that concluded last August, is targeted at the use of AI in high-risk settings where harms could be difficult to reverse, while ensuring the majority of low-risk AI continues to flourish largely unimpeded.
AI and automation have been forecast to generate up to $600 billion a year for Australia by 2030.
“The days of self-regulation are gone.”
While it’s steering clear of outright bans as it looks to safe and responsible AI as a means of boosting productivity, mandatory guardrails – read regulation – are on the cards for companies designing, developing and deploying AI in high-risk settings.
“We want to get the benefits of AI while also shoring up and fencing off the risks as much as we can and designing modern laws for modern technology,” Husic says.
An advisory group is being established to develop those regulations. Husic says the guardrails could include testing of products as they’re designed and developed, both before and after release; requirements for transparency and openness about how AI models have been designed and developed and what they do; and expectations around their performance.
“We also believe there has to be an element of accountability,” he says. “Where these models may work in ways that were not intended or are not in the way that they were advised then we do need to have some elements of accountability there.”
He says the expert panel will be stood up ‘as quickly as possible’, with the work occurring this year.
As to who will sit on the panel, Husic was even less clear.
“We haven’t necessarily landed on precisely who will be in that group yet. We will be taking advice and I’ve asked my department to start framing that up,” he says. “We’ve got to get the balance right between people from the industry, people with regulatory know-how and also… inject ethical considerations as well, so there’ll be people from civil society.”
The Australian work will align with international frameworks wherever possible to support Australian AI company growth and expansion and ensure Australia doesn’t become a regulatory AI island inhibiting investment and two-way growth – a move welcomed by the Australian Information Industry Association.
Work is also beginning with industry to adopt a voluntary AI Safety Standard, along with work to develop voluntary labelling and watermarking of AI-generated materials.
Husic says the National AI Centre will take the lead on developing the voluntary standard and will work with both industry and wider society on it.
The Australian Federal Government’s interim response has, in general, been well received.
The Tech Council of Australia welcomed the ‘risk-based and proportionate approach’ saying that providing clarity on the government approach to regulation was good for business and consumer confidence and balanced innovation with the need for safety.
“It means businesses can better plan for building, investing in and adopting AI products and services, and the public can take confidence that AI risks are being safely managed and regulated in Australia,” Tech Council CEO Kate Pounder says.
The Business Council of Australia was in agreement, with CEO Bran Black saying it was glad the government had heard the message that the approach needed to be risk-based and use existing laws and regulation where possible.
Charles Darwin University computational and AI expert associate professor Niusha Shafiabady, however, says the report misses the mark on taking AI threats seriously. Shafiabady is calling for regulations and enforcement, including regulations to mandate watermarking fake material.
“In the US government’s executive order, they specifically mention what they are implementing and what needs to be done,” Shafiabady says.
“Here we are putting our faith and the fate of the people in the hands of industry’s good faith. Sorry, but this wouldn’t work. If we don’t take the threat of technology seriously and come up with mandatory regulations, we will feel the blow as a nation.”
Australia ranked 12th out of 193 countries in the recent Oxford Insights Government AI Readiness Index 2023, which measures readiness across three key pillars – government, technology sector, and data and infrastructure – the last a pillar in which Australia placed third.
New Zealand ranked a lowlier 49th overall, with the report wryly noting that the release of just two government AI policy documents – one on genAI for the public service and a Ministry of Education offering on genAI for teachers – is ‘a welcome change for New Zealand, which still lacks a national AI strategy and ethical AI guidelines’.
The two guidelines are the first AI policy documents New Zealand’s government has released publicly, and recommend each agency develop its own AI policy – something the report notes could be problematic, urging cross-agency work to ensure government AI policies are consistent and clear. A cross-agency work programme on AI is due to report back this year.
With little action on the government front around AI for Kiwi organisations, the Artificial Intelligence Researchers Association (AIRA) has stepped into the breach with a discussion paper on responsible AI, with a particular focus on small and medium enterprise.
It’s advocating New Zealand piggyback on existing international AI regulations, framing the country’s lack of AI regulatory focus as an opportunity.
“In a dynamic global AI landscape, does a smaller economy like New Zealand benefit from crafting bespoke AI regulations or should it align with the dominant regulatory regimes? Harmonising New Zealand’s AI regulations with more stringent frameworks from larger jurisdictions could serve as a dual strategy – ensuring protection for its citizens while making its AI solutions internationally compliant and thus more exportable.”
AIRA says it’s aiming to provide practical advice for both organisations and the government with its document, which outlines common pitfalls including degradation of AI performance after deployment due to data drift, where the underlying distribution of new data differs from that of the pilot-stage data; biases; privacy concerns; and vendor risks, including limited visibility of an AI tool’s training data, fairness and accuracy, and restrictions on AI model customisation and improvement.
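The data-drift pitfall AIRA highlights can be checked for in practice. As a hypothetical illustration only – the metric choice, bin count and threshold below are common industry conventions, not recommendations from the AIRA paper – the Population Stability Index compares how a feature was distributed at pilot stage against how it looks in live data:

```python
import math
import random

def psi(reference, current, bins=10):
    """Population Stability Index: a simple data-drift measure.
    Bin edges come from the reference (pilot-stage) sample and the
    same edges are applied to the post-deployment sample."""
    lo, hi = min(reference), max(reference)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def proportions(data):
        counts = [0] * bins
        for x in data:
            # bin index = number of edges the value exceeds
            counts[sum(1 for e in edges if x > e)] += 1
        # small floor avoids log(0) for empty bins
        return [max(c / len(data), 1e-6) for c in counts]

    p, q = proportions(reference), proportions(current)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

random.seed(0)
pilot = [random.gauss(0.0, 1.0) for _ in range(5000)]       # pilot-stage data
live_ok = [random.gauss(0.0, 1.0) for _ in range(5000)]     # same distribution
live_drift = [random.gauss(0.8, 1.3) for _ in range(5000)]  # shifted distribution

print(f"PSI (no drift):   {psi(pilot, live_ok):.3f}")
print(f"PSI (with drift): {psi(pilot, live_drift):.3f}")
# A widely used rule of thumb (again, an assumption, not from the
# paper): PSI above roughly 0.2 suggests drift worth investigating.
```

A low score for the matched samples and a high one for the shifted samples is the pattern an SME would monitor for before continuing to trust a deployed model’s output.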
The AIRA Responsible AI Discussion Paper advocates implementing a Responsible AI (RAI) program using the high-level principles of human control; transparency and explainability; accountability; reliability, safety, security and privacy; and fairness and inclusiveness.
In practical terms, incorporating the principles into daily business includes having a senior executive responsible for RAI, using cross-functional teams with diverse backgrounds, encouraging RAI by design, prioritising transparency and including Māori participation.
The report notes the challenges ahead for SMEs, including lack of awareness and their lower levels of digitisation – the country ranked 27th out of 63 countries in 2022 for digitisation – and siloed data.
For government, AIRA is calling for action to fill the notable gaps in current AI regulation and for wide consultation to shape the future landscape of AI regulations.