Published on 30/08/2023 | Written by Heather Wright
A/NZ AI use up, but where are the responsible use policies?…
Business leaders across Australia and New Zealand say they remain concerned about the risks involved with AI, despite being interested in the technology – and despite many saying they already use some form of AI.
Two reports from Datacom – one for New Zealand surveying 200 senior business leaders and an Australian version surveying 318 – show interest in AI is running high among business leaders.
“It almost requires the same approach as cybersecurity.”
Just two percent of Kiwi business leaders said they weren’t interested in the technology, or didn’t support it, while 17 percent said they know about it, but aren’t interested in finding out more. In Australia, just three percent said they aren’t interested in AI or don’t support it, with seven percent not interested in finding out more.
AI use within local organisations was high in Australia, at 72 percent – well in excess of the 48 percent in New Zealand who say they already use ‘some form’ of the technology. What form that usage takes isn’t revealed in the reports.
That’s well ahead of the 35 percent global adoption reported in IBM’s Global AI Adoption Index 2022 – done well before the hype around ChatGPT spurred more widespread discussion about AI. Back in May 2022 when IBM’s report was released, the company noted adoption was continuing at a ‘measured’ pace worldwide.
When ChatGPT exploded onto the market – followed by a slew of generative AI wannabes – Ernest Hemingway’s ‘gradually, then suddenly’ found a new reference point.
But leaders across both countries are professing some serious concerns around the technology, particularly on the security front (a key concern for 75 percent in NZ and 60 percent for Australia).
Safety, ethics, bias and unemployment concerns also feature.
Across both countries, Datacom’s reports suggest there’s a surprising lack of policies and procedures around AI.
In Australia, 52 percent of those surveyed say they have a staff policy around AI use and 51 percent have staff awareness training in place. Only 40 percent say they have legal guidelines and controls, and 39 percent have audit assurance and governance frameworks. Two percent admit that, despite using AI, they have none of these policies and procedures in place.
In New Zealand, just 53 percent of organisations using AI say they have policies around AI use, with 47 percent saying they have staff awareness training in place. Audit assurance (13 percent), legal guidelines (24 percent) and targets for use of AI (nine percent) lag considerably.
Karl Wright, Datacom Group CIO and CISO, says that while the 53 percent with policies around AI usage arguably lines up with the 48 percent of businesses using some form of AI, it overlooks the likelihood that employees are using publicly available online AI tools for work. Organisations need to be proactive about setting policies to manage the associated risks around business data, IP and copyright, he says.
“It almost requires the same approach as cybersecurity – clear policies and procedures to minimise risk, employee and user training to ensure they understand the role they play in protecting data, and regular audits.”
Indeed, those simple steps are among a number set out in Australia’s National AI Centre’s Implementing Australia’s AI Ethics Principles: A Selection of Responsible AI Practices and Resources.
It outlines eight AI ethics principles: Human, societal and environmental wellbeing; human-centred values; fairness; privacy protection and security; reliability and safety; transparency and explainability; contestability; and accountability.
Key practices for each of the eight principles are offered up, including impact assessments, data curation, fairness measures, pilot studies and organisational training.
“Organisations adopting Responsible AI must contextualise ethical principles such as wellbeing, fairness or transparency to each AI system they create, and carefully balance these ethical goals against the system’s business purpose,” the report says.
“This will involve making practical commitments towards designing, deploying, maintaining and using AI systems in a way that is accountable to the people the AI system interacts with, minimises the risk of negative consequences and maximises the benefits to individuals, groups and the wider society.”
Stela Solar, National AI Centre director, says data quality, privacy and security are among the top challenges organisations face in adopting AI, and many have difficulty navigating international standards and procedures when producing or implementing AI systems.
“Australian businesses have told us that understanding ethics and governance in implementing AI is lacking across organisations globally,” she says.
In response to the issues businesses are facing, the National AI Centre launched the Responsible AI Network earlier this year, bringing together institutions, domain experts, commercial organisations and practitioner communities to enable best practice.
“The Responsible AI Network provides a unique offering: Practical guidance and coaching from experts on law, standards, principles, governance, leadership and technology to ensure explainability, fairness and accountability are built into Australian AI systems,” Solar says.
Results from a National AI Centre report in June underline the lack of policies and procedures seen in the Datacom reports, with 82 percent of businesses saying they believed they were practising AI responsibly, but less than 24 percent having actual measures in place to ensure they were aligned with responsible AI practices.
Datacom’s reports also highlight another issue for companies: a lack of targets for use – the specific use cases or strategies around what the business hopes to achieve using AI.
In Australia, just 42 percent of respondents had commercial and financial targets for the use of AI. In New Zealand the figure was even more dismal: a lowly nine percent identified their business as having targets for use in place.
Tracey Cotter-Martin, Datacom associate director future and insights, says the most common questions her team hears from customers are focused on the purpose of AI and how businesses ‘should’ be using it.
“We want people to understand that AI shouldn’t be viewed as something that sits outside your business or as a tech add-on,” she says.
“How you apply AI and its purpose should be determined by your business goals.
“AI has incredible optimisation capability that can be used to supercharge your strategy by introducing pace, creating adaptability, allowing you to identify differentiation opportunities or to pinpoint risk, but it is only effective if you understand the problem you are trying to solve. AI can’t define your problem statement for you.”
The real value of AI to business lies internally. Most businesses are shockingly unproductive despite spending big dollars on consultant ‘transformations’, 75 percent of which McKinsey says ‘fail’.
By how much could the average business increase its productivity and profitability? Having witnessed a 10-fold increase in productivity in one business, I’d say the potential to make an impression is huge… but if ‘big four’ consultants can’t make it happen, how do we do it?
AI provides the answer – provided it knows what is happening within a business (i.e. gets the appropriate data) and it has a model of the business that provides context for that data. The potential is huge. AI can identify all the places where activity is redundant and can be sped up, and it can identify the exceptions that cause delays and errors.
AI, based on a model of how a business works and supported by live data from its IT systems, can literally tell you how to optimise the business.
10-fold would be great, but even doubling productivity would completely transform the NZ economy and grow the revenue pie… and take the political discussion away from tax cuts and cost cuts, which do little since at best they balance themselves out.