A plume of smoke rises from the site of an Israeli airstrike in the southern suburbs of Beirut's Haret Hreik neighbourhood. Israel launched fresh strikes on Iran and Lebanon. Photo / Ibrahim Amro, AFP
Suspected widespread use of AI to select targets and launch attacks on Iran raises many questions, and fears that human control of war machinery could be slipping, a leading expert said today.
The United States and Israel have carried out thousands of strikes across Iran since launching their offensive, including one that killed Iran’s Supreme Leader, Ayatollah Ali Khamenei, on Saturday, the first day of the war.
Peter Asaro, an expert on artificial intelligence and robotics, told AFP it appeared likely that the two countries had used AI to identify targets in Iran.
He pointed to what seemed to be a very short planning phase and large number of targets.
But while AI can speed things up, it also raises a host of moral and legal questions, he said.
“You can rapidly produce long lists of targets much faster than humans can do it, by automating that process,” said the associate professor of media studies at the New School in New York, who also serves as vice-chair of the Stop Killer Robots campaign.
But then “the ethical and legal question is: to what degree are those humans actually reviewing the specific targets that have been listed, verifying their legality and their value militarily before authorising?”
Loss of control?
“The desire [with] all those systems is to be able to make decisions and move faster than your enemy,” he said.
He added that the question arises: “Are you actually still in control of what’s happening?”
Discussions have been running for a decade around a possible future treaty regulating automated weapons use. Countries are due to decide later this year whether to launch full-on treaty negotiations.
But while there is no current specific treaty on AI and autonomous weapons, that does not mean these systems are operating in a legal vacuum: existing international law applies.
Speaking on the sidelines of discussions at the United Nations in Geneva, Asaro said a crucial part of the debate revolved around the selection of targets, and fears that meaningful human control could be lost.
While the “sales pitch” for using AI in warfare is typically that “these things are highly accurate and make fewer mistakes than humans”, he stressed that “we don’t actually know how these systems work”.
He pointed to how the AI runs on opaque classified systems, providing little insight into how they function and how they reach their conclusions.
There is no “easy way of evaluating the output of these systems” or determining what went wrong when mistakes are made, Asaro said.
Smoke billows from the site of an Israeli airstrike that targeted the Haret Hreik neighbourhood in Beirut's southern suburbs today. Israel launched fresh strikes on Iran and Lebanon. Photo / AFP
‘Where are the moral lines?’
“If something does go wrong, then who’s responsible?” he asked.
“How do you define this legally, where are the moral lines?”
He pointed to the case of the school in the city of Minab that was hit on Saturday, killing more than 150 people, according to Iran.