Australia opens new round of AI consultation

Published 16/10/2024 | Written by Heather Wright

Evaluating impact on consumer law…

The Australian government has kicked off consultation on whether Australia’s ‘technology neutral’ consumer laws are fit for purpose in the age of artificial intelligence and whether AI-specific frameworks, such as an ‘AI Act’, might be required.

The review, which opened this week, is looking at whether the existing consumer law is suitable for supporting safe and responsible AI use by businesses and protecting consumers who use the technology.
The review builds on earlier work around ‘safe and responsible AI in Australia’, which raised concerns and offered suggestions on how the consumer protection framework could respond to the increased availability of AI-enabled goods and services.

The 2024-25 budget included $39.5 million over five years for the development of policies and capability to support the adoption and use of AI, including work to clarify and strengthen existing laws.

Similar reviews are being undertaken into AI in health and aged-care sector regulation and into copyright law.

The current review, which is open until 12 November 2024, is looking at a range of areas including:

  • how well adapted the ACL is to support Australian consumers and businesses to manage potential consumer law risks of AI-enabled goods and services;
  • the application of well-established ACL principles to AI-enabled goods and services;
  • the remedies available to consumers of AI-enabled goods and services under the ACL; and
  • the mechanisms for allocating liability among manufacturers and suppliers of AI-enabled goods and services.

The consultation comes hard on the heels of the close of consultation on mandatory guardrails, which put forward ten potential mandatory guardrails for AI in high-risk settings.

More than 300 submissions were received. The majority were published this week and show a clear split between the tech giants and banks, which are calling for lighter-touch requirements, and human rights organisations, peak groups, academics and media groups keen to see an EU-style AI Act.

The Australian Banking Association is among those calling for less restrictive reforms, saying regulation should be proportionate and targeted to specific risks.

“An unnecessarily high regulatory burden may have an adverse impact on innovation and productivity, limiting Australia’s ability to compete internationally,” it says in its submission, adding that well-intended regulatory reforms can often have the unintended consequence of imposing disproportionately expensive or operationally challenging obligations on small and medium-sized companies.

With malicious actors increasingly exploiting AI, banks are also making greater use of the technology to more effectively detect and combat financial crime and fraud, it notes.

The Business Council of Australia is also calling for any regulatory approach to be specifically targeted at risk, without redundant or onerous obligations.

“The technology that underlies AI may be complex, but the regulatory approach adopted need not be,” it says.

The Tech Council of Australia has also said it doesn’t support an AI Act or a single regulator for AI, saying a dedicated AI regulator is likely to result in siloed expertise and capability across government entities, and to limit the capacity to adapt and consider innovative ways to evolve the regulatory architecture and coordination mechanisms.

The Media, Entertainment and Arts Alliance (MEAA), meanwhile, is among those endorsing an ‘economy-wide AI Act’.

“This option is the only one that will deliver on the ambition to deliver safe and responsible AI in Australia…”

Australia’s AI action also includes an inquiry into the use of AI systems by the government.

The Select Committee on Adopting AI, however, has pushed back on legislation that would ban the use of AI-generated images and videos in election campaigns – despite acknowledging the potential for disinformation.

Across the Tasman, New Zealand’s Minister of Science, Innovation and Technology, Judith Collins, has acknowledged that Kiwi businesses have been slow to adopt AI, due in part to uncertainty about the future regulatory environment, and says she intends to take a ‘light-touch, proportionate and risk-based approach to AI regulation’.

That includes regulatory intervention only if the government considers it necessary to ‘unlock innovation or address acute risks’.

Rather than creating a standalone AI Act, as the EU has done, New Zealand will instead leverage existing regulations such as the Privacy Act.

The cabinet paper notes the need for New Zealand to remain connected to key international discussions that are establishing international norms for AI and be a ‘fast follower’, drawing on work done in other countries.
