In his campaign to control what ordinary Russians can learn about the war in Ukraine, Vladimir Putin has made a notable exception.
He has blocked or restricted Facebook, Twitter, and most other Western-based social media sites, but one major platform — YouTube — remains available. Any Russian with an internet connection can click onto YouTube to see videos about Putin’s lawless aggression as depicted by CNN, the BBC, or even exiled allies of imprisoned Russian dissident Alexei Navalny.
Why does YouTube get special dispensation? Part of the answer is that even before the war, it was the most popular social media site in Russia. Three-quarters of Russians active on the internet use YouTube and would resent it going dark.
“When we restrict something, we should clearly understand that our users won’t suffer,” Maksut Shadaev, Putin’s minister for digital development, explained recently.
But there’s another likely reason Putin treats YouTube differently — namely, his recognition that for years before he ordered the invasion, YouTube enabled Kremlin-controlled propaganda outlets like RT (formerly Russia Today) and Sputnik News to reach millions of viewers in the West. In 2013, YouTube even dispatched a company vice president to an RT studio to offer the network on-air congratulations for providing viewers with “authentic” content and tallying a landmark billion views on the platform.
YouTube’s duality — funneling factual news to ordinary Russians after years of facilitating Putin’s global falsehood machine — is a throughline in the platform’s influential role as the world’s dominant video-sharing venue. A new report that I coauthored for the NYU Stern Center for Business and Human Rights shows that YouTube has taken laudable steps to reduce its tendency to radicalize some users, even as it continues to allow unscrupulous actors to spread election disinformation, religious hatred and anti-vaccine conspiracies.
The report argues that while YouTube has helped intensify partisan animosities in the United States, most of its ongoing malign effects take place outside of its home market, where the company’s content moderation system struggles to interpret foreign languages and cultures. In India, YouTube’s largest market, with 450 million users, Hindu nationalists use YouTube as a weapon in their persecution of Muslims. In Brazil, where 100 million people use YouTube, right-wing President Jair Bolsonaro and his supporters have deployed the platform to undermine trust in elections and COVID-19 vaccines.
Globally, YouTube has more than 2 billion users. The most popular social media site, not only in Russia and India but also in the U.S., it generated nearly $29 billion in revenue in 2021, primarily from selling advertising. Despite this enormous presence, YouTube historically has received less outside scrutiny than platforms like Facebook and Twitter. That’s partly because, compared to data sets of text posts, large volumes of long-form videos are difficult and expensive for outside researchers to assess empirically. Another reason is that YouTube, a subsidiary of Google, provides fewer application programming interfaces than rival platforms, limiting social scientists’ ability to obtain sizable amounts of data. And YouTube sometimes remains below the media radar simply by refusing to discuss controversial issues publicly.
In some instances, YouTube has responded to problems it had a hand in creating. By using “digital fingerprints” distinctive to terrorism-recruitment videos, it has diminished Islamist incitement. In reaction to reports that its recommendation algorithm guided unwitting users toward “rabbit holes” of extremism, the platform altered its technology to suppress false and conspiratorial content — changes that appear to be working.
But platform recommendations are not the only way that users encounter extremist material on YouTube. They also can seek it out via YouTube’s powerful search engine, which is second in heft only to Google Search. And research published in 2021 by the Anti-Defamation League shows that alarming levels of exposure to extremist and other harmful content continue.
The danger lies not in the average user experience but in the ability of people inclined toward extremism to easily find what they’re looking for. The white 18-year-old accused of killing 10 African-American shoppers in a Buffalo, N.Y., grocery store in May went to YouTube to watch videos about mass shootings and police gunfights and to pick up tips on firearm use. It wasn’t until after the Buffalo massacre that YouTube removed three of the gun-related videos the alleged shooter mentioned in a diary.
Our report offers a series of recommendations for addressing such issues. It urges YouTube to provide researchers, and in some cases the public, with more information about how its currently secret algorithms rank, recommend, and remove videos. Access to this kind of data could allow social scientists to make more refined suggestions about how to root out misinformation and incitements to violence.
At the same time, YouTube should vastly increase the number of human content moderators and hire all of them as direct platform employees, rather than following the common industry practice of outsourcing the vast majority of this critical corporate function. (Google told us that it has 20,000 people working on content moderation, but it declined to specify how many of them are full-time employees and how many are hands-on reviewers focusing on YouTube.)
In light of YouTube’s inadequate self-regulation, the government now needs to step in. While President Biden has a lot on his plate, he should push Congress to enhance the Federal Trade Commission’s consumer protection authority to provide systematic oversight of the social media industry. Specifically, the FTC ought to require greater platform transparency and ensure that social media companies provide procedurally adequate content moderation. The latter obligation would entail the major platforms delivering on the promises they already make in their terms of service to protect users from a wide array of harmful content.
Given YouTube’s scale — users watch more than 1 billion hours of video daily — and the unfortunate human appetite for incendiary, bigoted and conspiratorial material, the platform is never going to be free of troubling content. But it is high time that YouTube took more responsibility for decreasing the negative side effects that its lucrative business imposes on the U.S. and societies around the world.
Paul M. Barrett, the senior research scholar and deputy director of the NYU Stern Center for Business and Human Rights, writes about social media’s effects on democracy. Find him on Twitter @authorpmbarrett.