GPT-5 is an upgraded version of the artificial intelligence technology, which promises better results. Photo / 123rf
The AI start-up said its new flagship technology was faster, more accurate and less likely to make stuff up.
ChatGPT maker OpenAI has released GPT-5, an upgraded version of the artificial intelligence technology that powers its popular chatbot, promising better results on tasks including answering health questions, writing and generating computer code.
ChatGPT has more than 700 million users every week, OpenAI has said, and people use it for help at home, work and school. The company has often claimed its technology is advancing quickly toward the point where it will be able to match or exceed humans at every possible task.
Executives said in a news briefing before GPT-5’s release that it does not perform at that level.
In a live-streamed demo, the new system showed that, like previous AI models, it can still make mistakes. When an executive asked GPT-5 to explain how the scientific principle called the Bernoulli effect applies to airplane wings, it offered a common but incorrect explanation.
All ChatGPT users will get access to GPT-5, even those using the free version. But only those with a US$200-a-month ($335) “Pro” subscription get unlimited access to the newly released system. GPT-5 will be the default mode on all versions.
Users not paying for ChatGPT will be able to have only a limited number of questions answered by GPT-5 before the chatbot switches back to an older version of OpenAI’s technology.
How will GPT-5 change ChatGPT?
GPT-5 responds to questions faster than OpenAI’s previous offerings and is less likely to “hallucinate” or make up false answers, OpenAI executives said at a news briefing before its release. It gives ChatGPT “better taste” when generating writing, said Nick Turley, who leads work on the chatbot.
OpenAI’s new AI software can also answer queries using a process dubbed reasoning that shows the user a series of messages attempting to break down a question into steps before giving its final answer. “GPT-5 is the first time that it really feels like talking to an expert, a PhD-level expert,” OpenAI CEO Sam Altman said.
Altman said GPT-5 is particularly good at generating computer programming code, a feature that has become a major selling point for OpenAI and rival AI developers and has transformed the work of programmers.
In a demo, the company showed how two paragraphs of instruction were enough to have GPT-5 create a simple website offering tutoring in French, complete with a word game and daily vocabulary tests.
Execs say ChatGPT users can now connect the app with their Google calendars and email accounts. Photo / Getty Images
Altman predicted that people without any computer science training will one day be able to quickly and easily generate any kind of software they need to help them at work or with other tasks. “This idea of software on demand will be a defining part of the new GPT-5 era,” Altman said.
Turley also claimed the upgrade made ChatGPT better at connecting with people. “The thing that’s really hard to put into words or quantify is the fact that it just feels more human,” he said.
In a livestream Thursday, OpenAI execs said ChatGPT users could now connect the app with their Google calendars and email accounts, allowing the chatbot to help people schedule activities around their existing plans.
What does it mean for an AI chatbot to ‘reason’?
GPT-5 could give many people their first encounter with AI systems that attempt to work through a user’s request step-by-step before giving a final answer.
That so-called “reasoning” process has become popular with AI companies because it can result in better answers on complex questions, particularly on math and coding tasks. Watching a chatbot generate a series of messages that read like an internal monologue can be alluring, but AI experts warn users not to confuse the technique with a peek into AI’s black box.
The self-chatter doesn’t necessarily reflect an internal process like that of a human working on a problem, but designing chatbots to create what are sometimes dubbed “chains of thought” forces the software to allocate more time and energy to a query.
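For readers who want a concrete sense of the technique, the sketch below uses OpenAI’s publicly available Python SDK to ask a model for explicit intermediate steps before a final answer. It is an assumption-laden illustration of prompt-level “chain of thought”, not a view into how GPT-5’s built-in reasoning mode works, and the model name shown is a placeholder.

```python
# Illustrative only: a rough sketch of "chain of thought" prompting, not
# OpenAI's internal reasoning mechanism. Assumes the OpenAI Python SDK is
# installed and an API key is set; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable
MODEL = "gpt-4o"   # placeholder; substitute whichever model your account can access

question = "A train travels 120 km in 1.5 hours. What is its average speed in km/h?"

# Asking for explicit intermediate steps makes the model spend more tokens,
# and therefore more time and compute, working through the problem before
# committing to a final answer.
response = client.chat.completions.create(
    model=MODEL,
    messages=[
        {"role": "system",
         "content": "Work through the problem step by step, then state the final answer on its own line."},
        {"role": "user", "content": question},
    ],
)

print(response.choices[0].message.content)
```

The visible step-by-step output this produces is the same kind of “internal monologue” users see in reasoning modes; the trade-off is extra tokens and slower responses in exchange for better accuracy on maths and coding questions.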
OpenAI released its first reasoning model in September for its paying users, but Chinese start-up DeepSeek in January released a free chatbot that made its “chain of thought” visible to users, shocking Silicon Valley and temporarily tanking American tech stocks.
The company said ChatGPT will now automatically send some queries to the “reasoning” version of GPT-5, depending on the type of conversation and complexity of the questions asked.
Is GPT-5 the ‘super intelligence’ or ‘artificial general intelligence’ OpenAI has promised?
No.
Tech leaders have for years claimed that AI is improving so fast it will soon be able to learn and perform every task humans can, as well as or better than we do. But GPT-5 does not perform at that level.
Super intelligence and artificial general intelligence, or AGI, remain ill-defined concepts because human intelligence is very different from the capabilities of computers, making comparisons tricky.
OpenAI CEO Altman has been one of the biggest proponents of the idea that AI capabilities are increasing so rapidly that they will soon revolutionise many aspects of society. “This is a significant step forward,” Altman said of GPT-5. “I would say it’s a significant fraction of the way to something very AGI-like.”
Some people have alleged that loved ones were driven to violence, delusion or psychosis by hours spent talking to ChatGPT. Photo / Getty Images
Does GPT-5 change ChatGPT’s personality?
Changes OpenAI made to ChatGPT in April triggered backlash online after examples of the chatbot appearing to flatter or manipulate users went viral. The company undid the update, saying an attempt to enhance the chatbot’s personality and make it more personalised instead led it to reinforce user beliefs in potentially dangerous ways, a phenomenon the industry calls “sycophancy”. OpenAI said it worked to reduce that tendency further in GPT-5.
As AI companies compete to keep users engaged with their chatbots, they could make them compelling in potentially harmful ways, similar to social media feeds, The Washington Post reported in May. In recent months, some people have alleged that loved ones were driven to violence, delusion or psychosis by hours spent talking to ChatGPT. Lawsuits against other AI developers claim their chatbots contributed to incidents of self-harm and suicide by teens.
OpenAI released a report on GPT-5’s capabilities and limits Thursday that said the company looked closely at the risks of psychosocial harms and worked with Microsoft to probe the new AI system. It said the reasoning version of GPT-5 could still “be improved on detecting and responding to some specific situations where someone appears to be experiencing mental or emotional distress”.
Earlier this week, OpenAI said in a blog post it was working with physicians across more than 30 countries, including psychiatrists and paediatricians, to improve how ChatGPT responds to people in moments of distress. Turley, the head of ChatGPT, said the company is not optimising ChatGPT for engagement.