US lawmakers have grilled executives from Facebook, Google's YouTube and Twitter about what the companies are doing to prevent terrorists from using their platforms to spread propaganda and recruit new followers.
The Senate's commerce, science and transportation committee hearing comes amid growing government scrutiny of how social media platforms are misused and questions about what the companies are doing to prevent it.
And it comes after November's exhaustive congressional hearings on what the companies knew — and did — about Russia's efforts to meddle with the 2016 US elections using their platforms.
According to testimony obtained by CNBC, representatives from the three tech companies said they are, among other things, targeting people likely to be swayed by extremist messages and pushing content aimed at countering that message.
"We believe that a key part of combating extremism is preventing recruitment by disrupting the underlying ideologies that drive people to commit acts of violence. That's why we support a variety of counterspeech efforts," said Monika Bickert, Facebook's head of global policy management.
Facebook is also working with universities, nongovernmental organisations and community groups around the world "to empower positive and moderate voices," she added.
As for Google-owned YouTube, the company said it will continue to use what it calls the "Redirect Method," developed by Google's Jigsaw research group, which uses what is essentially targeted advertising to send anti-terror messages to people likely to seek out extremist content.
Lawmakers acknowledged that the companies, especially Google and Facebook, have come a long way when it comes to weeding out extremist material. But they said more needs to be done.
"What have we learned about how the Russians attacked us?" Senator Bill Nelson, a Florida Democrat, asked the witnesses. "What have social media companies done to assess this threat, both individually and collectively? What have they done to address this threat? And what more do they need to do to be ready for the future?"
All three companies stressed their increasing reliance on automated systems and artificial intelligence to combat terrorism on their platforms. Facebook, for example, said 99 per cent of extremist material related to al-Qaeda and the Islamic State is detected and removed before anyone manually reports it.
But lawmakers and others noted that artificial intelligence is only good at detecting and preventing things it already knows. So AI won't help much when it comes to anticipating future social-media tactics that extremists might adopt.
Clint Watts, a terrorism expert at the Foreign Policy Research Institute, said at the hearing that Google and Facebook are ahead of Twitter when it comes to weeding out extremist content. He said that's because Twitter relies too much on technology and not enough on human threat intelligence, such as working with outside experts and officials.
He also said that "lesser educated" populations, including people around the world just getting online via mobile devices, are especially vulnerable to the social media manipulations of terrorists and authoritarians.