AI in courtrooms: Legal system grapples with GenAI challenges – Sasha Borissenko

“There’s a curious irony in depending on artificial intelligence to deliver justice. The law is a slow-moving, deliberative beast, with centuries of precedent, careful interpretation and human judgment. In contrast, generative artificial intelligence (GenAI) promises speed, clarity, and access,” ChatGPT said when, in true Gonzo fashion, I asked it to write the introduction to this column.

AI tools can fabricate facts, leading to false case citations in court. Photo / 123rf

According to the Law Society’s website, ChatGPT uses a statistical model to predict the next word or phrase based on context, but will fabricate facts and sources in the absence of sufficient data.
“The cases appear real as ChatGPT has learnt what a case name and citation should look like, however, investigations by Library staff have found that the cases requested were fictitious.”
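To make that point concrete, here is a minimal toy sketch in Python. It is my own illustration, not ChatGPT’s actual mechanism (which predicts tokens with a large neural network), and every party name and citation it produces is invented. The idea it demonstrates is the one the Library staff describe: a generator that samples text matching the shape of a case citation, with no lookup against any real case database, will produce convincing fabrications.

```python
# Toy illustration only: not ChatGPT's real architecture. The point survives
# the simplification: sampling text that merely *fits the format* of a
# citation, without checking an actual case database, yields plausible-looking
# but fictitious cases.
import random

random.seed(11)  # fixed seed so the example output is reproducible

# Fragments with the right "shape" -- all party names here are invented.
PLAINTIFFS = ["Smith", "Nguyen", "Harrington", "Patel"]
DEFENDANTS = ["Acme Holdings Ltd", "Harbour City Council", "Kiwi Freight Ltd"]
COURTS = ["NZCA", "NZHC", "NZEmpC"]  # neutral-citation court codes

def fabricate_citation() -> str:
    """Assemble a citation-shaped string by sampling each slot independently.
    Nothing here verifies that the resulting case exists."""
    return (f"{random.choice(PLAINTIFFS)} v {random.choice(DEFENDANTS)} "
            f"[{random.randint(2005, 2023)}] "
            f"{random.choice(COURTS)} {random.randint(1, 900)}")

if __name__ == "__main__":
    for _ in range(3):
        print(fabricate_citation())  # convincing format, fictitious cases
```

The format is right, so the output looks real to a reader who doesn’t check; that, at toy scale, is the failure mode the Law Society is warning about.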
In 2023, US lawyers Steven Schwartz, Peter LoDuca and their firm Levidow, Levidow & Oberman were ordered to pay fines after submitting a legal brief containing six fictitious case citations generated by ChatGPT.
US District Judge P. Kevin Castel found that although there is nothing inherently improper about using AI as a tool, the lawyers acted in bad faith and made “acts of conscious avoidance and false and misleading statements to the court”.
Closer to home, in last year’s case of Wikeley v Kea Investments, the Court of Appeal noted a self-represented litigant’s use of AI in his submissions, which were withdrawn after opposing counsel brought the matter to the court’s attention.
“No further comment is necessary except to note the relevant guidance recently issued by the judiciary [...],” the decision read. [1]
Shortcuts are a no-no
Enter the Courts of New Zealand guidelines (2023) and the Law Society’s guidance from last year. While GenAI offers efficiency gains for the legally inclined, both bodies warn that misunderstanding its limits – fact-checking, ahoy! – risks misleading the courts and clients, and undermining the integrity of the entire justice system.
For lawyers, that could mean breaching professional and ethical duties under the Lawyers and Conveyancers Act 2006. For judges, a sloppy justice system with bogus information and bot-generated legal reasoning is nothing short of apocalyptic.
The privacy, confidentiality and suppression elements are where things get interesting. The sentiment is the same across the board: lawyers and judges alike are warned not to enter private, confidential, suppressed or legally privileged information into GenAI tools.
Neither lawyers nor judges are currently required to disclose their use of GenAI. In contrast, the American Bar Association has issued a formal opinion strongly advising lawyers to obtain consent from clients before using sensitive data, even within a firm’s closed system. And last year the Supreme Court of New South Wales issued a practice note partially banning GenAI use.
AI is watching
In theory, GenAI tools have privacy policies limiting them to publicly available information, but that doesn’t stop users from feeding in potentially damning material.
This year, Google’s AI reportedly named a former Act Party president convicted of sexual abuse, despite a suppression order still being in place. While no formal legal action has been taken against Google in New Zealand, the incident exposes the murky territory of AI, privacy and justice.
What’s more, Google has faced lawsuits overseas for alleged data privacy breaches, including unauthorised wiretapping and tracking of personal data [2]. Think the Matrix version of News of the World, if you will.
Ultimately, there’s the issue of who owns the data, how it’s used and who can benefit from it.
Consider the Spiga case in the Employment Court this year, which addressed non-publication orders for employee litigants (or even witnesses). If names are made public, AI’s ability to mine court decisions could lead to “blacklisting”.
An Employers and Manufacturers Association survey, submitted in evidence in the case, found 70% of employer respondents said they sometimes, often or always undertook internet searches of job candidates to see whether, for example, they’d previously been involved in employment-related litigation.
In other words, there’s scope for employers to rely on AI to predict how likely a job applicant is to challenge unlawful decisions – or any decisions, if you’re into the whole “trouble-maker” rhetoric – and deny them employment accordingly.
AI and the Wild Wild West
Which brings us to the elephant in the server room: the lacklustre regulatory framework. Last year, Science, Innovation and Technology Minister Judith Collins released a Cabinet paper that supported using AI to boost productivity and grow the economy, yet signalled no plans for standalone regulation.
The paper pointed to a 2021 Qrious survey that found only 28% of businesses felt they understood the legal and ethical issues around AI. A 2023 Verian Internet Insights survey also found more than 66% of New Zealanders were deeply concerned that AI could be misused, remain unregulated or cause harm through unintended consequences.
Despite public concerns, Collins proposed a “light-touch, proportionate, and risk-based approach” to AI regulation, under which existing legislation could cover AI use. Fast forward to February this year, when Collins released AI guidelines for the public service.
Guidelines without teeth? They do not compute. AI has entered the courtroom. The question is whether the justice system is ready for it.