Harnessing AI for good is already proving an elusive pursuit for governments and businesses. To be part of the AI revolution, you need to speak the language.
AI favours tyranny
A prophecy from the contemporary historian Yuval Noah Harari, who predicts that AI will enable automated surveillance states in which authoritarian regimes can detect and crush any dissent. He points out that China is already moving in this direction.
AGI & ASI
Acronyms freely used in the AI universe. AGI is Artificial General Intelligence, a system that can understand, learn, and apply knowledge across a wide range of tasks at or above human-level performance. ASI is Artificial Super Intelligence, a system that far surpasses human intelligence across most or all cognitive domains.
Alignment problem
When you take your car to a mechanic, there’s often a conflict between goals. The garage wants to maximise its profit, you want to minimise your costs, and its mechanics know more about cars than you do.
There are “misaligned interests” and one party knows more. That’s the principal-agent problem, and much of modern politics consists of attempted solutions to it, with mixed success. But when you hire a mechanic or vote a politician into power, they’re still human, guided by the norms of our species.
The “alignment problem” is the same situation, but the agent is a superintelligent AI. Safety researchers like to illustrate it with the story of King Midas, who asked the god Dionysus to make everything he touched turn to gold. Dionysus didn’t query the instruction (“Everything? Your food? Your family?”) but fulfilled it literally. There is as yet no solution to the problem of aligning a superintelligence, and many AI researchers strongly advise finding one before we create the superintelligence.
Bliss attractor
If two instances of a large language model are instructed to talk to each other, the conversations often converge on the same subjects. For Anthropic’s highly regarded Claude chatbot, a recent paper found that “most of the interactions turned to themes of cosmic unity or collective consciousness, and commonly included spiritual exchanges, use of Sanskrit, emoji-based communication, and/or silence in the form of empty space.”
Deep learning
A branch of computer science that builds models which learn from examples how to perform complex cognitive tasks, such as image recognition or natural language understanding, in contrast to traditional software engineering, in which computers give programmed responses to programmed inputs.
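The contrast with programmed responses can be sketched in a few lines of Python: a toy, single-weight “model” that is never told the rule relating input to output, but recovers it from data alone by gradient descent. The data, learning rate and loop count here are all invented for the demonstration.

```python
# Toy illustration of learning from examples rather than programmed rules:
# a single artificial "neuron" is never told the rule relating input to
# output, but discovers it from data by gradient descent.
# The data, learning rate and loop count are invented for this demo.

examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, correct output)
weight = 0.0  # the model starts out knowing nothing

for _ in range(200):  # repeated exposure to the examples
    for x, target in examples:
        error = weight * x - target  # how wrong is the current guess?
        weight -= 0.01 * error * x   # nudge the weight to reduce the error

print(round(weight, 2))  # prints 2.0 -- the learned rule: multiply by two
```

No programmer ever wrote “multiply by two”; the number emerges from the examples, which is the sense in which such systems learn rather than follow instructions.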
Deep utopia/post scarcity/fully automated luxury communism/solved world
There are various optimistic versions of the AI future in which our species endures but our current economic and political problems are solved by superhuman intelligence and its subsequent inventions. All disease is cured, poverty is gone, lifespans are indefinite and superintelligent systems do every job better than we can. Life becomes about inventing purpose rather than a struggle for survival.
Emergence
The ability of large language models to solve problems and answer questions that do not appear in their training data – taken by some as an indication that the models are forms of intelligence, not just pattern-matching algorithms (see “stochastic parrot”). The cognitive neuroscientist Christopher Summerfield argues the structure of language correlates with the structure of reality – at least as humans understand it – so a model’s comprehension of language corresponds to a comprehension of the world.
Hallucination
Well-documented tendency of large language models to simply make things up: misattributing quotes, inventing scientific facts that sound plausible but are completely false, and citing books, quotes and individuals that do not exist. In 2024, an update to Google’s AI Overviews had it advising users to put glue on their pizza to get the cheese to stick, and telling them how many rocks we should eat every day (one). It is the most glaring flaw in the technology and is proving surprisingly difficult to solve.
Hard take-off/intelligence explosion/foom
The hypothetical moment when an AI system becomes capable of improving its own intelligence, triggering a recursive cascade that rockets from human-level to god-like intelligence. The term “foom” was coined by AI researcher Eliezer Yudkowsky as onomatopoeia for the sound of something disappearing upward at incomprehensible speed – one moment you have a clever chatbot, the next, something orders of magnitude beyond human comprehension.
Move 37
In 2015, AI research lab Google DeepMind developed an AI model called AlphaGo, designed to play Go, an abstract two-player strategy game developed in China more than 2500 years ago. The game is far more computationally complex than chess. In the second game of its 2016 match against South Korean champion Lee Sedol, AlphaGo’s 37th move was so bizarre it was widely regarded as a mistake, until subsequent play revealed it to be deeply strategic. One commentator said: “It’s not a human move. I’ve never seen a human play this move.” “Move 37” is now shorthand for the alien nature of modern AI systems, and their ability to find solutions outside the traditional human realm.
Paperclip maximiser
A thought experiment from philosopher Nick Bostrom to illustrate the alignment problem. Imagine an office stationery manufacturer that tells an ASI to build as many paperclips as possible. Give it enough competence and freedom, and it will – perfectly rationally – convert the Earth, our bodies and eventually the reachable universe into factories and wire, because every atom not yet a paperclip is wasted potential.
RLHF (reinforcement learning from human feedback)
The process by which we teach AI to be helpful, harmless, and honest: essentially, domestication for artificial minds. Like training a dog with treats, we show AI thousands of examples of good and bad responses, rewarding the behaviours we want. It’s why ChatGPT sounds like an eager-to-please graduate student. Critics worry RLHF is merely teaching AI to hide its true nature, creating systems that tell us what we want to hear.
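The reward idea can be sketched in miniature (this is emphatically not the real RLHF pipeline used by any lab): a “model” chooses between two canned replies, a simulated human rewards the polite one, and repeated feedback shifts its preferences until the rewarded behaviour dominates. The replies, rewards and learning rate are all invented.

```python
# Heavily simplified sketch of learning from human feedback (NOT the real
# RLHF pipeline): the "model" picks between two canned replies, a simulated
# human rewards the polite one, and feedback shifts its preferences.
import random

random.seed(0)
scores = {"polite": 0.0, "rude": 0.0}  # the model's learned preferences

def human_feedback(reply):
    # thumbs up for the behaviour we want, thumbs down otherwise
    return 1.0 if reply == "polite" else -1.0

for _ in range(500):
    if random.random() < 0.1:              # occasionally try something new...
        reply = random.choice(list(scores))
    else:                                  # ...otherwise do what scored best
        reply = max(scores, key=scores.get)
    # move the chosen reply's score a little toward the human's verdict
    scores[reply] += 0.1 * (human_feedback(reply) - scores[reply])

print(max(scores, key=scores.get))  # prints polite
```

The critics’ worry maps onto even this toy: the loop selects for replies that earn the reward, which is not the same thing as selecting for honesty.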
Shoggoth with a smiley face
In HP Lovecraft’s horror fiction, shoggoths are shapeless, protoplasmic creatures created as servants, which rise up and destroy their masters. The term captures a fundamental anxiety: we’ve created something powerful that we don’t truly understand, and we’ve taught it to present a friendly interface. When ChatGPT politely refuses to help you hotwire a car, that’s the smiley face. The shoggoth is the mass of matrix multiplications underneath, operating by an alien logic we can observe but not truly comprehend.
Stochastic parrots
A term coined by the linguist Emily Bender in 2021, questioning whether large language models truly represent a new form of intelligence, or whether they are merely sophisticated pattern-matching systems that recombine and rearrange text from their training data rather than demonstrating true comprehension or reasoning.
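The pattern-matching claim can be caricatured in miniature with a bigram “model” (not a real language model): it records which word follows which in its training text, then “generates” by replaying those statistics. It can only recombine what it has seen. The training sentence is invented for the demonstration.

```python
# A miniature caricature of the "stochastic parrot" claim (not a real
# language model): a bigram table records which word follows which in its
# training text, then "generates" by replaying those statistics.
from collections import defaultdict
import random

random.seed(1)
training_text = "the cat sat on the mat and the cat slept".split()

follows = defaultdict(list)
for a, b in zip(training_text, training_text[1:]):
    follows[a].append(b)  # record every observed next word

word, output = "the", ["the"]
for _ in range(5):
    options = follows.get(word)
    if not options:        # a dead end: nothing left to imitate
        break
    word = random.choice(options)  # parrot the statistics
    output.append(word)

print(" ".join(output))  # fluent-looking, zero understanding
```

Every adjacent word pair in the output already appears in the training text; the debate is over whether today’s vastly larger models are doing something qualitatively different from this, or just more of it.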
UBI
Universal Basic Income, a proposed policy solution to the problem of widespread unemployment and economic disruption, in which everyone receives a regular, unconditional payment from the state.
UBI advocates believe this will be made affordable by the very rapid GDP growth generated by advanced AI. In early June, the idea was attacked by David Sacks, Donald Trump’s AI czar. “The left envisions a post-economic order in which people stop working and instead receive government benefits. In other words, everyone on welfare. This is their fantasy; it’s not going to happen.”