It’s anticipated that OpenAI’s web browser will include ChatGPT features built in, among other capabilities. A Kiwi, Ben Goodger, is also reportedly heavily involved in its development.
But as we dive headfirst into this new AI-fuelled future, we should demand that this new technology gets the basics right first.
Over the past 30 years the internet has opened up our world. We can connect with people and enjoy endless volumes of information with the click of a button.
It’s a scene out of the Jetsons, minus the flying cars – for now.
Traditionally, most internet searches have given the user an exhaustively long list of links to websites with varying degrees of relevant information. The user can then sort through what they find and determine what is most helpful, discarding the rest.
However, with AI (artificial intelligence) tools acting as aggregators, scraping the depths of the internet for whatever information they can find, we must ask: how reliable are their replies to our questions?
Well, the growing body of evidence suggests the answer is: not very.
While researching a story, the Herald found that Google’s AI Overview, which provides a summary in response to a user’s search prompt, confidently asserted that Jim Bolger was a Labour Prime Minister. Even more concerning, its answer cited official New Zealand Government websites as the source for this information.
Bolger spent his entire political career in Parliament with the National Party, so predictably these “sources” contained no information to support the falsehood.
This is an example of what is now commonly referred to as an AI hallucination: the system generates information that seems plausible but is entirely fabricated.
Some of these hallucinations could be relatively minor, but others could be gross misrepresentations of the world we live in and our history.
In a New York Times article, published by the Herald on Sunday earlier this year, researchers found the hallucination rate appeared to be increasing.
The newest and most powerful systems – called reasoning systems – from companies including OpenAI and Google were generating more errors, not fewer.
On one test, the hallucination rates of newer AI systems were as high as 79%. This hardly seems like a piece of technology we can or should be relying on to make sense of our world or teach others about it.
We should use AI where it can help us, and there are already basic functions it performs well. But we need to be wary of the evangelists who preach it as the answer to all our productivity and economic woes.
Why AI is having more Jim Morrison-like hallucinations has puzzled both the technology’s creators and sceptical researchers.
Perhaps it wants to please us? Perhaps it wants to give us the answers we want to hear – confirming the bias in our questions.
Perhaps it is learning to act more human?