Harnessing AI for good is already proving an elusive pursuit for governments and business. But what happens if it evolves beyond our ability to contain it?
“If you look at what people are using language models for, there are two enormous categories of use. The first is porn.”
Christopher Summerfield, professor of cognitive neuroscience at Oxford University, is talking about the predictive “machine learning” algorithms, developed by humans for computers, from which artificial intelligence has sprung. Summerfield, one of three research directors at the UK government’s AI Security Institute, holds up his hands, opening each to make his points in our Zoom conversation. “Companion applications. Erotic storytelling and engagement. And the second [use] is coding.”
It seems an odd combination, but it indicates the breadth of the impact machine intelligence will have – already has – on both our private and public lives. It offers artificial personal assistants, therapists, teachers and intimate companions, alongside the spectre of economic disruption and mass unemployment as humans working in the service and knowledge economies are threatened with obsolescence.
Summerfield is the author of These Strange New Minds: How AI Learned to Talk and What it Means, a book that begins with the deep history of AI: from Enlightenment polymath Gottfried Leibniz – who dreamed of a calculus ratiocinator, a device that could solve any question by the use of mechanical logic – and mathematician George Boole, who devised “the laws of thought”, reducing complex reasoning into combinations of the operators “and/or” and “not”, through to Alan Turing with his universal machine and imitation game, testing to see if an automated agent could convince us of its humanity.
It’s a test most modern AI systems effortlessly pass. Summerfield’s history ends with the inventions of the neural net and deep learning paradigms that power the modern AI revolution.

More Human than Human?
In retrospect it was inevitable humans would use these sophisticated, potentially dangerous tools for porn. The chatbots that “help relieve loneliness” offer interactive characters: girlfriends, boyfriends, maids, pizza delivery boys and pirates. All have voice interaction and can generate images of themselves. Their sites have 200-300 million users, Summerfield says. “This is a significant fraction of the world’s population. And the models aren’t even that good yet.”
Chat models are still clunky but they’re getting more sophisticated every week. The programming models – the other major use case for AI systems – are already very good. Development environments such as coding assistant Cursor AI, founded in 2023 and now valued at $9 billion, can outperform the majority of human programmers. The tech industry is starting to see layoffs as companies move towards automated development. More than 25% of Google’s new code is written by large language models, the predictive text engines that give AI its facility with human language.
AI researchers love to imagine scenarios in which their technology wipes out humanity or destroys the world. Summerfield is sceptical of these doomsday stories: he thinks they blind us to the more prosaic problems we’ll encounter from AI and the companies driving it, represented by the current trajectories of the chatbots and programming models.
“There are three things I’m worried about,” he says, ticking them off on his fingers. “One, models will become more agentic. Two, they’ll become more personalised. Three, they’ll become more interconnected.”
Autonomous algorithms
Agentic AI is the dream of most AI companies and the nightmare of many AI safety experts. They’re models that are empowered to carry out tasks on your behalf: book airline tickets, do your taxes, order your groceries, reply to emails.
For Summerfield, this will require models to learn to plan, remember and act, and this creates a danger of models with both hidden goals and instrumental mismatches (see our handy glossary “Alignment problem”).
But there’s also a political problem. Governments already struggle to regulate tech companies and their products: corporations such as Google, Uber and Facebook prefer to move fast and break things, innovating ahead of regulatory and legal restraints. That problem becomes supercharged once agentic AI proliferates and algorithms acquire an autonomy separate from the companies that developed them.
Personalised AIs are exactly what they sound like. Former Meta executive Sarah Wynn-Williams has alleged that Facebook and Instagram could detect when teenage girls deleted pictures of themselves and respond by serving up ads for beauty products, attempting to monetise a predicted dip in self-worth. Personalised AIs will have the extensive data and psychological profiling of the current social media platforms but a far more powerful ability to engage with – and manipulate – users. They’ll have access to your email, your social archives and all your past chats.
Many people will develop an emotional dependency on their AI companions; some already have. In 2023, Belgian newspapers reported the case of a man who took his own life after an AI chatbot encouraged him to “join” her so they could “live together, as one person, in paradise”.
In February last year, a 14-year-old Florida boy took his own life, allegedly after interacting with multiple personas on Character.AI.

In April, Meta CEO Mark Zuckerberg told prominent AI podcaster Dwarkesh Patel: “The average American, I think, has fewer than three friends … and the average person has demand for meaningfully more … Over time, as more and more people turn to AI friends, we’ll find the vocabulary as a society to be able to articulate why that is valuable.”
OpenAI CEO Sam Altman recently noted on X: “I expect AI to be capable of superhuman persuasion well before it is superhuman at general intelligence, which may lead to some very strange outcomes.”
As for interconnection, Summerfield fears the advent of agentic AIs that do not simply interact with human users but with each other – an internet of millions of personalised agents negotiating prices, contracts, policies and even votes. This creates the potential for “runaway loops” that cause market crashes, price surges or other economic and political externalities beyond human control. He cites the 2010 trillion-dollar “flash crash” caused by high-frequency trading algorithms, and the 2011 episode in which Amazon’s algorithmic pricing bid the value of a textbook up to $23 million.
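The Amazon episode shows how two individually sensible pricing rules can compound into a runaway loop. A minimal sketch – assuming the two repricing multipliers reported at the time, and a hypothetical starting price and repricing cadence – illustrates the feedback:

```python
# Toy reconstruction of the 2011 Amazon textbook episode: two repricing
# bots, each setting its price as a fixed multiple of the other's.
# The multipliers are those reported at the time; the starting price
# and the repricing cadence are illustrative assumptions.
UNDERCUT = 0.9983      # seller A lists just below seller B
MARKUP = 1.270589      # seller B lists well above seller A

price_a = price_b = 35.54   # hypothetical starting price, in dollars
cycles = 0
while price_b < 23_000_000:
    price_a = UNDERCUT * price_b    # A reacts to B's listing
    price_b = MARKUP * price_a      # B reacts to A's new listing
    cycles += 1

print(f"Listings pass $23m after {cycles} repricing cycles")
# Each cycle multiplies prices by 0.9983 * 1.270589 ≈ 1.268, so the
# listings grow about 27% per round and cross $23m in under 60 cycles.
```

Neither rule is irrational on its own; the runaway behaviour emerges only from their interaction, which is exactly the property Summerfield worries about at internet scale.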
He warns these problems will act as multipliers. Give each user a persuasive agent that can act on the web, then let billions of those agents bargain, copy each other’s tricks and collide in real time, and the system’s behaviour slips outside any single lab’s safety filters.
Shift in power
Traditional AI fears focus on a single superintelligence. Summerfield worries we’re more likely to see a messy ecosystem of moderately smart, highly motivated and tightly coupled AIs that evolve faster than our social and regulatory reflexes can adapt.
“My main concern is largely focused around ways in which humans are deprived of their agency. Technology tends to shift power from individuals to organisations. Big organisations can use technology to aggregate wealth and power at the expense of individuals.”
One of the problems for AI safety is that politicians tend to put machine intelligence in the same category as existing software such as Microsoft Windows or an update to Google’s search algorithm.
Summerfield thinks AI is a different risk category, one which needs co-ordination between researchers, developers and regulators. The level of safety and regulation should scale up to reflect the power and ability of the models.
But the politicians who usher in regulations have a poor track record of restraining the technology sector. As AI companions become more lovable, and workplace capability spreads from programming to other white-collar jobs, tech companies will acquire even more wealth and politicians’ inclination to restrict their technologies will diminish.
Humans tend to wait for disasters to happen and then react to them, working to prevent a recurrence rather than acting to avert catastrophe in the first place.
Summerfield is bleakly aware we’re on this same trajectory with AI safety. “When AI systems start to behave collectively, we risk provoking externalities that will make the trillion-dollar flash crash look like a storm in a teacup.”

Applications for NZ
New Zealand’s politicians are mostly enthusiastic about artificial intelligence. Prime Minister Christopher Luxon and Judith Collins, the Minister for Digitising Government, recently sent 50 senior bureaucrats through a five-week “AI masterclass” to boost AI capability across agencies. The new InvestNZ agency has a mandate to support AI-led innovation to drive growth.
Opposition leader Chris Hipkins is “worried and excited by AI at the same time”, speculating we could use renewable energy to power data centres delivering AI services. A recent Treasury document describes it as a “general-purpose technology that can enhance productivity and innovation”.
Can large language models heal the long-ailing New Zealand economy and its equally long-ailing public sector? You can see why political leaders are excited. In recent years, a torrent of studies and field trials has touted delivery and productivity improvements across administrative and public sector tasks via AI automation.
Health and education are two of the most promising fields. Current models are good at clinical diagnostics: a 2024 study found ChatGPT analysis of chest X-ray reports reduced reporting time by 24% with no loss of diagnostic accuracy. Another 2024 paper, published in the Journal of Clinical Pathology, found AI systems identified viruses such as influenza and Covid tagged with fluorescent labels more quickly and more accurately than human researchers. This year, a study found that students across all age groups in US public schools saw dramatic improvements from AI tutoring, roughly equivalent to an extra semester of teaching every year.

In 2023, the International Monetary Fund analysed trials of digital AI tools in the tax systems of several developing countries with tax-collection problems, including Peru, Ethiopia, Tajikistan and Senegal. It found AI could cut compliance costs by 40% and lift tax revenue by up to 1% of GDP. If it works for Tajikistan and Senegal, perhaps even Wellington can benefit from the new technological paradigm.
Reserve Bank chief economist Paul Conway, who sits on the committee that sets the official cash rate, has long been an evangelist for improving the nation’s productivity. For Conway, AI technologies could deliver real economic gains, but in that respect they are merely the latest of many technological improvements New Zealand has struggled to implement.
“We’re not just going to absorb AI by osmosis,” says Conway. “You need to plan for it. Technology creates as many jobs, if not more jobs, than it destroys, but they’re different jobs, so skills become important. People’s attitudes to their own skills become really important. There is a nice idea that AI would help lower-skilled people catch up but there is evidence to suggest that AI is helping higher-skilled people get even more skilled at what they do.”
In the business sector, Conway believes managerial capability will be the determining factor. “It’s a huge challenge for an existing business to successfully [introduce] artificial intelligence and use it in every aspect of what it does. So there’s this complementary investment required to make the most out of a new technology.”
There’s also a bleak, but not apocalyptic, scenario in which AI has an impact like that of the global social media platforms on our domestic media markets: it wipes out jobs, pays no taxes and takes its profits offshore. “That is a risk,” Conway acknowledges. “Anything could happen. But it’s a tool. We have agency with this thing and it’s important that we use that agency and think about how it gets used.”
These Strange New Minds: How AI Learned to Talk and What it Means, by Christopher Summerfield (Viking, RRP $40), is out now.