Courting GenAI: The good, the bad and the ugly

Published on 13/12/2023 | Written by Heather Wright



Courts get guidance, businesses fear failure…

The New Zealand judiciary has published its guidelines for generative AI use, giving it the thumbs up for tasks such as summarisation and simple administration, but warning that even with the best prompts, outputs may be inaccurate, incomplete, misleading or biased.

The guidance for court and tribunal participants comes as new research finds there’s a big gap between AI investments and worker proficiency, with nearly all organisations believing their AI initiatives will fail unless an in-house skills gap is closed.

“The gap between AI investments and investments in employee training could threaten ambitions.”

The guidelines for the courts and tribunals were developed by an AI advisory group established earlier this year, and note that generative AI in the legal context is a reality internationally and in New Zealand.

Globally, there have been cases where lawyers have been caught out using GenAI tools to produce submissions which included fictitious citations. In March, the New Zealand Law Society noted that it had been receiving requests from lawyers for cases that had been generated by submitting queries to ChatGPT.

“The cases appear real as ChatGPT has learnt what a case name and citation should look like, however, investigations by Library staff have found that the cases requested were fictitious,” it noted, warning that cases cited in ChatGPT responses could well be fake.

The New Zealand courts system is currently gearing up to transition to a fully digital platform with the first stage of the new $169 million digital case management system, Te Au Reka, due to be completed in 2025/26.

Chief Justice Dame Helen Winkelmann says the use of GenAI, such as ChatGPT, Google Bard and Bing Chat, in the Kiwi legal context is increasing.

“There is potential for GenAI – when used responsibly – to enhance access to justice by making legal knowledge and information more accessible to non-lawyers,” Winkelmann says.

The guidelines flag potential in summarising information, speech writing and administrative tasks such as drafting emails – albeit with cautions for each use case.

Areas such as legal research and analysis, however, require ‘extra care’ with the guidelines going as far as to note that using GenAI chatbots for legal analysis isn’t recommended.

“GenAI is ill-suited to legal analysis as it generates text based on probability, rather than an understanding of the text’s content or human inferences,” the guidelines note, flagging too GenAI’s inability to critically examine patterns in data, which can lead to inaccurate or biased conclusions, and the lack of a ‘neutral’ output.

For all parties, the GenAI guidelines caution the need to understand the technology and its limitations, including its inability to understand unique fact situations in a specific case, cultural and emotional needs and the broader Kiwi social and legal context.

Ensuring accountability and accuracy, upholding confidentiality, suppression and privacy, being aware of ethical issues and disclosing GenAI use are also covered in the guidelines.

In the US, many federal judges have issued standing orders governing the use of generative AI to prepare or draft court filings.

Justice Paul Radich, chair of the judiciary’s AI Advisory Group who led the development of the Kiwi guidelines, says New Zealand is among the first to develop guidelines, although all jurisdictions are ‘navigating the opportunities and challenges presented by GenAI technology’.

“It is in the interest of justice that a consistent approach to these issues is taken across judicial forums to the extent practicable, despite the differing administrative arrangements.”

The Kiwi guidance applies to all New Zealand courts, the Waitangi Tribunal and 28 other tribunals and authorities that have adopted it, with three sets of guidelines released – one for judges, judicial officers, tribunal members and support staff; another for lawyers; and a third for non-lawyers.

For many businesses, there are concerns beyond misleading and biased results to contend with. A Pluralsight report shows that while 90 percent of those surveyed say their organisation has accelerated AI initiatives in the last 12 months, 72 percent of IT practitioners and 80 percent of executives believe their organisations often invest in new technology without considering the training employees need to use it.

Perhaps more tellingly, 95 percent of executives and 94 percent of IT professionals believe their AI initiatives will fail without staff who can effectively use AI tools, suggesting the gap between AI investments and investments in employee training could threaten ambitions.

Around 1,200 decision makers and IT practitioners were surveyed for the report.
