Meta, which owns Facebook and Instagram, followed in the footsteps of X, Elon Musk’s social platform, and turned responsibility for policing content over to users.
Unlike Meta and X, YouTube has not made public statements about relaxing its content moderation.
The online video service introduced its new policy in mid-December in training material that was reviewed by the New York Times.
For videos considered to be in the public interest, YouTube raised the threshold for how much offending content is permitted, from a quarter of a video to half.
The platform also encouraged moderators to leave up those videos, which would include city council meetings, campaign rallies and political conversations.
This distances the platform from some of its pandemic practices, such as when it removed videos of local council meetings and a discussion between Florida’s Governor, Ron DeSantis, and a panel of scientists, citing medical misinformation.
These expanded exemptions could benefit political commentators whose lengthy videos blend news coverage with opinions and claims on a variety of topics, particularly as YouTube takes on a more prominent role as a leading distributor of podcasts.
The policy also helps the video platform avoid attacks by politicians and activists frustrated by its treatment of content about the origins of Covid, the 2020 election, and Hunter Biden, former US President Joe Biden’s son.
YouTube continuously updates its guidance for content moderators on topics surfacing in the public discourse, said Nicole Bell, a company spokesperson.
It retires policies that no longer make sense, as it did in 2023 for some Covid misinformation, and strengthens policies when warranted, as it did this year to prohibit content directing people to gambling websites, according to Bell.
In the first three months of this year, YouTube removed 192,586 videos because of hateful and abusive content, a 22% increase from a year earlier.
“Recognising that the definition of ‘public interest’ is always evolving, we update our guidance for these exceptions to reflect the new types of discussion we see on the platform today,” Bell said in a statement.
She added, “Our goal remains the same: to protect free expression on YouTube while mitigating egregious harm”.
Critics say the changes by social media platforms have contributed to the rapid spread of false assertions and have the potential to increase digital hate speech.
Last year on X, a post inaccurately said: “Welfare offices in 49 states are handing out voter registration applications to illegal aliens”, according to the Centre for Countering Digital Hate, which studies misinformation and hate speech.
The post, which would have been removed before recent policy changes, was seen 74.8 million times.
For years, Meta has removed about 277 million pieces of content annually, but under the new policies, much of that content could stay up, including comments like “black people are more violent than whites”, said Imran Ahmed, the centre’s chief executive.
“What we’re seeing is a rapid race to the bottom,” he said.
The changes benefit the companies by reducing the costs of content moderation, while keeping more content online for user engagement, he added.
“This is not about free speech. It’s about advertising, amplification, and ultimately profits.”
YouTube has in the past put a priority on policing content to keep the platform safe for advertisers. It has long forbidden nudity, graphic violence and hate speech.
But the company has always given itself latitude in interpreting the rules. The policies allow a small set of videos that violate YouTube's rules to remain on the platform if they have sufficient educational, documentary, scientific, or artistic merit.
The new policies, which were outlined in the training materials, are an expansion of YouTube’s exceptions.
They build on changes made before the 2024 election, when the company began permitting clips of electoral candidates on the platform even if the candidates violated its policies, the training material said.
Previously, YouTube removed a so-called public interest video if a quarter of the content broke the platform’s rules. As of December 18, YouTube’s trust and safety officials told content moderators that half a video could break YouTube’s rules and stay online.
Other content that mentions political, social and cultural issues has also been exempted from YouTube’s usual content guidelines.
The platform determined that videos are in the public interest if creators discuss or debate elections, ideologies, movements, race, gender, sexuality, abortion, immigration, censorship, and other issues.
Megan Brown, a doctoral student at the University of Michigan who researches the online information ecosystem, said YouTube’s looser policies were a reversal from a time when it and other platforms “decided people could share political speech but they would maintain some decorum”.
She fears that YouTube’s new policy “is not a way to achieve that”.
During training on the new policy, the trust and safety team said content moderators should err on the side of keeping content up when "freedom of expression value may outweigh harm risk".
If employees had doubts about a video’s suitability, they were encouraged to take it to their superiors rather than remove it.
Case study 1: Covid vaccines
YouTube employees were presented with real examples of how the new policies had already been applied.
The platform gave a pass to a user-created video titled "RFK Jr. Delivers SLEDGEHAMMER Blows to Gene-Altering JABS", which violated YouTube's policy against medical misinformation by incorrectly claiming that Covid-19 vaccines alter people's genes.
The company’s trust and safety team decided the video shouldn’t be removed because public interest in the video “outweighs the harm risk”, the training material said. The video was deemed newsworthy because it presented contemporary news coverage of recent actions on Covid vaccines by the US Secretary of Health and Human Services, Robert F. Kennedy jnr.
The video also mentioned political figures such as US Vice-President JD Vance, Elon Musk, and Megyn Kelly, boosting its “newsworthiness”.
The video’s creator also discussed a university medical study and presented news headlines about people experiencing adverse effects from Covid vaccines, “signalling this is a highly debated topic (and a sensitive political topic)”, according to the materials.
Because the creator didn’t explicitly recommend against vaccination, YouTube decided that the video had a low risk of harm.
The video is no longer available on YouTube; it is unclear why.
Case study 2: Personal slur
Another video shared with the staff contained a slur about a transgender person.
YouTube’s trust and safety team said the 43-minute video, which discussed hearings for Trump Administration Cabinet appointees, should stay online.
It said that was because the description had only a single violation of the platform’s harassment rule forbidding a “malicious expression against an identifiable individual”.
Case study 3: Talk about violence
A video from South Korea featured two commentators talking about the country’s former President Yoon Suk Yeol.
About halfway through the more-than-three-hour video, one of the commentators said he imagined seeing Yoon turned upside down in a guillotine so that the politician “can see the knife is going down”.
The video was approved because most of it discussed Yoon’s impeachment and arrest.
In its training material, YouTube said it had also considered the risk for harm low because “the wish for execution by guillotine is not feasible”.
This article originally appeared in The New York Times.
Written by: Nico Grant and Tripp Mickle
Photographs by: Matt Chase
©2025 THE NEW YORK TIMES