A juror in a criminal trial must decide a case solely on the evidence presented in court. This has always been a core principle of our legal system. Jurors are instructed not to make their own investigations or consult outside sources, especially online. With the rise of digital technologies, this instruction now explicitly includes a prohibition on using Google or other search engines to look up information related to the case.
Why is this so important? Because the internet is full of material that might be deeply prejudicial to the accused – material that may be inaccurate, out of context, or simply inadmissible in court. If a juror encounters such content, it could distort their understanding of the evidence and undermine the presumption of innocence.
The Supreme Court has recently affirmed that the right to a fair trial is absolute – it cannot be weighed against competing rights like freedom of expression. Yet the reality is more complicated. Courts acknowledge that, despite clear directions, some jurors may still go online and find material that influences their deliberations. The Court recognises this risk and has responded with a framework for issuing takedown orders: judicial directions to online content hosts requiring them to remove or disable access to specific prejudicial content.
Not every trial will trigger a takedown order. Factors include the nature and prominence of the online content, how accessible it is, the way it is presented, whether it has been widely repeated, and the context of the trial itself. For example, a short, low-profile trial may not require any action, whereas a lengthy and highly publicised trial carries a much greater risk of juror exposure to prejudicial information.
For these orders to work, the prejudicial material must be clearly identifiable. For traditional online articles, this usually means specifying the URL – the web address. In this sense, online articles are the digital equivalent of newspaper reports. But unlike the old newspaper clipping destined for the bin, online content lingers indefinitely.
And now we face a new, more elusive threat. What happens when a juror turns not to Google but to a generative AI platform like ChatGPT? Unlike a static article with a URL, a ChatGPT response is not a fixed document. It is generated in real time, tailored to the user’s prompt, and built on an aggregation of information drawn from many online sources. The answer may vary with the wording of the question or as the AI model updates its understanding over time.
The danger is clear. A juror asking ChatGPT about a case could be fed highly prejudicial information without ever intending to seek it out. Worse, the resulting content cannot be targeted by a takedown order – it is not static, is not located at a specific web address, and cannot be identified in advance. The AI’s response is the output of probabilistic machine learning, not a retrievable article.
This represents a serious challenge to the integrity of the trial process. The Supreme Court’s framework is based on the assumption that prejudicial material exists in discrete, locatable units that can be identified and removed. Generative AI breaks that model. It introduces a new risk that is harder to detect, harder to control, and entirely outside the scope of the existing takedown regime.
In 2014, I wrote about the problem of jurors turning to Google. In 2019, I examined how takedown orders might help. But the arrival of generative AI changes the landscape. It creates a new threat to the accused’s right to a fair trial – one that is no less serious because it is difficult to see. Yet our legal system has not caught up. The law must now reckon with the risks of the self-informing juror in the age of AI.
David Harvey is a retired district court judge.