The accused Christchurch gunman would not have been allowed to livestream his terror attack on March 15 under new rules that Facebook has just introduced.
"Those restrictions, if they had been in place at the time of the Christchurch atrocity, would have prevented the terrorist from using his live account on that day," Facebook VP of global policy and communications Nick Clegg told reporters in Paris.
Clegg would not elaborate on whether the alleged gunman had previously breached Facebook's standards, citing the ongoing criminal case.
Clegg represented Facebook at today's summit, where he was one of several tech company representatives who signed up to the Christchurch Call to Action.
Seventeen countries and the European Commission also signed the call.
Facebook has come under intense criticism for its handling of the gunman's video on its platform. The company did not become aware of the video until 12 minutes after the livestream ended, and even then the notice came from police, not from its own algorithms or human moderators.
There were 1.5 million attempted uploads of the alleged gunman's video within 24 hours of his livestream, and Facebook's AI technology automatically blocked 1.2 million of them.
Users wanting to share the video changed aspects of the footage to side-step AI detection; Facebook said there were 900 different variations of the footage.
Clegg said Facebook had used its AI technology to stop the spread of the video, but that the technology was limited. That was why Facebook had announced it was putting US$7.5 million into improving it.
Yesterday Facebook said it would restrict more users who had broken certain rules from livestreaming.
Guy Rosen, Facebook's vice-president of integrity, said that before today's changes, users were blocked from Facebook only if they repeatedly violated its Community Standards.
"We will now apply a 'one strike' policy to [Facebook] Live in connection with a broader range of offences.
"From now on, anyone who violates our most serious policies will be restricted from using Live for set periods of time - for example 30 days - starting on their first offence.
"For example, someone who shares a link to a statement from a terrorist group with no context will now be immediately blocked from using Live for a set period of time."
Prime Minister Jacinda Ardern said she believed the changes announced by Facebook were a response to the Christchurch Call summit.
Whether Facebook should have made the changes sooner was a question for Facebook, she said.
"Our focus is on making sure we do everything we can to prevent this happening again. It's for companies to reflect on their own policies as to whether or not they should have or could have changed them earlier.
"What we do know is they've changed them now. The fact that the 15th of March happened, that that livestreaming was able to occur, was wrong.
"Facebook have moved to change that, and that is a positive step, but it is just a first step."
In what is believed to be a first, major tech companies Microsoft, Twitter, Facebook, Google and Amazon released a joint statement today saying they would set out concrete steps the industry would take to address the abuse of technology to spread terrorist content.
Ardern said the ultimate test of whether the call to action would be meaningful would be the changes that tech companies made.
"There is an expectation on them. They've made some early announcements. For us it's going to be whether or not that action continues over the coming months.
"Our goal has to be to try and prevent this happening anywhere ever again, in New Zealand or anywhere else in the world.
"I do think we need to see more."