Musk, never at a loss for words, opined that AI is an “existential threat” because human beings for the first time are faced with something “that is going to be far more intelligent than us”. It was a jamboree of the trite and the portentous.
These deep thinkers were all banging on about existential risk, but that contingency would only arise if the machines were endowed with something called “artificial general intelligence”, that is, cognitive abilities in software comparable or superior to human intelligence.
Such AGI systems would have intellectual capabilities as flexible and comprehensive as those of human beings, but they would be faster and better informed because they could access and process huge amounts of data at incredible speed. They would be a real potential threat, but they don’t exist.
There is not even any evidence that we are closer to creating such software than we were five or 10 years ago. There has been great progress in narrow forms of artificial intelligence, like self-driving vehicles and automated legal systems, but the only threat they pose is to jobs.
The “large language model” chatbots are trained on vast amounts of text, which makes them expert at choosing the most plausible next word. That may occasionally produce sentences containing useful new data or ideas, but there is no intellectual activity involved in the process except in the human who recognises that the output is useful.
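To make the point concrete: the core task is statistical next-word prediction. Here is a deliberately crude toy sketch in Python (a bigram counter, nothing like a real neural network, and the corpus and function names are invented for illustration) showing what “choosing the most plausible next word” means:

```python
from collections import Counter, defaultdict

# Toy illustration, not a real LLM: count which word most often
# follows each word in a tiny training text, then "predict" by
# picking the most frequent successor. Real chatbots do this at
# vastly larger scale with neural networks, but the underlying
# task -- predict the next token -- is the same.

def train_bigrams(text):
    """Count how often each word follows each other word."""
    words = text.lower().split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def most_plausible_next(counts, word):
    """Return the most frequent successor of `word`, or None if unseen."""
    followers = counts.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# Hypothetical miniature "training data".
corpus = (
    "the cat sat on the mat "
    "the cat chased the mouse "
    "the cat slept"
)
model = train_bigrams(corpus)
print(most_plausible_next(model, "the"))  # prints "cat"
```

Nothing in that process understands what a cat is; it only knows that “cat” most often followed “the” in its training data.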
There is plenty to worry about in how “smarter” computer programmes will destroy jobs (now including highly skilled jobs), and also in how easy it has become to manipulate opinion with “deepfakes” and the like. But none of that needed a high-profile conference at Bletchley Park.
So why did they all go there and wind up talking about existential threats? Well, one possibility is that the leaders of the tech giants wanted to make sure they were in on the rule-making from the start, for there will surely be new rules made about AI over the next few years.
Most of those rules will be about mundane commercial matters, not about threats to human existence. You might feel that it would be inappropriate for the people who will be making money from these commercial activities to be the ones making the rules.
On the other hand, they should certainly be involved in decisions about any existential threats arising from their new technologies, so tactically it makes more sense for them to steer the discussion in that direction. They’re not stupid, you know.