Unclear rules leave NZ businesses AI-shy

Published on 19/03/2026 | Written by Heather Wright



Businesses urged to look to EU AI Act…

A lack of clear national frameworks for AI adoption is likely behind lower trust in AI and shallower use of the technology, according to an AI researcher, who is urging political parties to consider what they’re going to do with AI in the next three years – with regulation top of his wish list.

Despite widespread experimentation, Massey University applied AI researcher Athar Imtiaz says research suggests businesses are hesitant to move beyond basic use cases, with no centralised framework outlining what safe, compliant or responsible adoption looks like.

“We don’t have a centralised framework or legislation. Everything is scattered.”

Imtiaz says the absence of national guardrails is directly shaping how – and how little – companies are willing to use AI today. “The depth of usage is pretty low here, it’s very shallow adoption,” he says.

Despite strong ‘adoption’ of AI, much of the current activity is focused on basic productivity use cases – summarisation, paraphrasing, grammar checking – rather than core process automation, customer operations or high-value workflows. Without clear guidance, businesses handling PII or other sensitive datasets are choosing to limit their exposure rather than expand it.

“Organisations want to comply, but … they are apprehensive.”

Imtiaz says New Zealand’s current approach leaves businesses exposed, while a centralised framework would give organisations confidence to progress from light experimentation to deeper operational deployment – without constantly having to reinvent the wheel by creating guidelines for themselves.

“I see a lot of repetition. If we had a centralised act, companies could just refer to it and be confident. Right now, everyone is spending a lot of time thinking ‘should we use it, should we not use it, what are the effects?’”

While most government agencies have signed up to the Algorithm Charter for Aotearoa New Zealand – an initiative for the government’s data system which can also be used by business – and the Privacy Act and general human rights protections apply to some degree, those tools were never designed to regulate AI systems. “We have a good set of recommendations, but it’s recommendations, not an Act or a framework,” he says, noting the Charter is not binding.

“The Privacy Act remains important, but it does not define technical standards for model validation, dataset integrity or audit requirements. Without tailored legislation, accountability becomes fragmented.”

Importing models, importing assumptions

New Zealand companies adopting offshore AI tools face another challenge: The risk that models trained overseas may not understand local demographics, health indicators, patterns or cultural context.

Imtiaz highlights emerging use cases in the health sector, where AI tools are being trialled to categorise patient urgency. Relying on purely international models in such settings may be risky.

He says for important decisions, such as medical triage or identifying high-risk patients, “it absolutely matters what assumptions the model was trained on”, with models trained outside New Zealand lacking a social and demographic understanding of the country. “Wrongful assumptions could lead to serious issues.” Kiwi demographic and health profiles differ significantly from those of larger countries whose data dominates model training. “We have concrete data from Stats New Zealand and from Te Whatu Ora/Health New Zealand that certain demographics are more prone to certain illnesses,” he says. “You can’t guarantee an overseas-trained model understands that.”

Model providers also don’t disclose full training sets, which they treat as intellectual property.

The mitigation is model tuning – adapting base models so they reflect New Zealand-specific data. “It’s a very common thing and it is not actually even very expensive in terms of computation. We tune it to our context, put some grounding information so that it is aware of the socioeconomic and health conditions that are more common in New Zealand, then we can be sure it is categorising patients according to whatever Stats NZ or Te Whatu Ora has released.”
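One lightweight form of the grounding Imtiaz describes is simply injecting local context into the prompt before a generic model makes a call. The sketch below is illustrative only: the facts in `NZ_GROUNDING` are placeholders, not real Stats NZ or Te Whatu Ora data, and the function name is an assumption, not part of any real triage system.

```python
# Hypothetical sketch: grounding a generic model with NZ-specific context
# before a triage-style query. All facts below are placeholders.

NZ_GROUNDING = [
    "Placeholder: demographic group A has elevated prevalence of condition X.",
    "Placeholder: rural regions report longer average time-to-treatment.",
]

def build_grounded_prompt(patient_summary, grounding=NZ_GROUNDING):
    """Prepend local context so a generic model weighs NZ-specific factors."""
    context = "\n".join(f"- {fact}" for fact in grounding)
    return (
        "Use the following New Zealand context when assessing urgency:\n"
        f"{context}\n\n"
        f"Patient summary: {patient_summary}\n"
        "Respond with a triage category."
    )

prompt = build_grounded_prompt("65-year-old presenting with chest pain.")
```

Full fine-tuning on local datasets goes further than prompt grounding, but the principle is the same: make locally published statistics visible to the model rather than relying on whatever its overseas training data happened to contain.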

But tuning brings its own challenges, with New Zealand lacking any body responsible for certifying, auditing or standardising AI models, leaving organisations without a way to independently verify whether systems are safe, accurate or appropriate for local use.

He compares the situation to hardware and networking, where bodies such as the IEEE provide universally recognised technical standards. Global AI standards do exist, but once models are tuned to local data, countries need their own local standards.

“If a company develops a model, who will assure everyone that it is up to standard?”

Without such an authority, organisations are left to determine their own validation requirements, risking inconsistency and duplication across industries. Imtiaz says such a body would play a critical role in establishing trust in AI systems and supporting broader adoption.

EU AI Act provides guidance

With the government signalling a light-touch approach, and no national regime in development, Imtiaz says New Zealand businesses looking for guidance can align with the European Union’s AI Act. The AI Act sets detailed requirements for transparency, human oversight, documentation, testing and risk management for high-risk AI systems, including those used in healthcare, employment, public services and customer-facing decision processes.

In writing AI policies for organisations, Imtiaz has used the Act as a point of referral. “It’s one of the best ones out there so you can use it as a guide. It’s a big document, but they have it online so you can search it. Then modify it and adapt it to your situation.”

Australia is also doing work in this area, though it doesn’t have a central Act.

International regulatory settings shape global product design. Even without local legislation, AI tools used in New Zealand are likely to inherit EU-driven compliance features as vendors adjust to the European market.

Building sovereignty

There’s another leg to Imtiaz’s push: Sovereignty. While many people equate ‘sovereign AI’ with keeping data onshore, that’s just the foundation, with true sovereignty requiring control over the entire AI lifecycle: Data collection, storage, training, tuning, deployment and oversight.

“It’s a chain. At each step you ask, do we have sovereignty?”

Today the answer is no. Even with local data centres from Microsoft and AWS coming online (and let’s be clear, they are owned by offshore companies), Imtiaz says government agencies still rely heavily on offshore infrastructure, with data stored or processed in Australia and training models usually happening overseas as New Zealand doesn’t have enough high-performance compute.

He estimates building sovereign capability would require several hundred million dollars over the next five years, especially to establish secure compute environments and skilled workforce pipelines.

“Australia has already committed more than AU$100 million in federal funding toward AI capability and regulatory reform, alongside broader digital infrastructure investment. Singapore has invested billions across successive national AI strategies. The United Kingdom continues to fund AI research and governance capacity through its central technology portfolio.

“In contrast, New Zealand has no dedicated AI budget line and no central authority equivalent to the Department for Science, Innovation and Technology in the UK, the Ministry of Digital Development and Information in Singapore, or Japan’s Digital Agency,” he says.

While Imtiaz is realistic that sovereign control is ‘a very long-term plan’ requiring a government with ‘very distant vision’, he argues that the priority for government is straightforward: Start developing a national AI framework. “At least think about establishing a centralised AI framework, an Act.” That step alone would provide consistent expectations, reduce duplicated work and give businesses certainty.

“To give organisations confidence, we need this centralised framework… guardrails that are specific to NZ conditions. Then later on the [sovereign] chain can be implemented and improved. But first, we need the rules and regulations and the guardrails.”
