
Hate is surging online — and social media companies are in denial. Congress can help protect users


For years, social media companies assured users, advertisers and the public that they were doing everything possible to provide a safer and friendlier experience. Even as hatred and disinformation surged on platforms such as Facebook, Twitter, Reddit, TikTok and YouTube, their CEOs insisted they were taking proactive steps to rid their platforms of the antisemitism, racism and hate that had given rise to events like Charlottesville and the Jan. 6 insurrection.

“We have absolutely no incentive to tolerate hate speech. We don’t like it, our users don’t like it, advertisers understandably don’t like it,” Nick Clegg, Meta’s president of global affairs, told CNN in 2020 when faced with the prospect of a boycott after advertisers became fed up with hate and disinformation on Facebook. “We benefit from positive human connection — not hate.”  

“Hate speech impressions down by 1/3 from pre-spike levels. Congrats to Twitter team!” Elon Musk cheered in a November 2022 Twitter post, at a time when it was clear that the opposite was true.

Now we are hearing directly from social media users that their lived experiences tell a far different story. 

New data from my team at the ADL Center for Technology and Society shows that online hate and harassment have risen to critical levels. Just over half of U.S. adults say they have experienced online hate and harassment in their lifetimes. And it’s not just adults: 51 percent of teenagers say they’ve experienced some form of online harassment in the past 12 months, compared to 36 percent the year before.

The data is especially stark for those who identify as members of a marginalized group. More than three-quarters of transgender individuals say they’ve been harassed online, with 60 percent of those reporting severe harassment such as physical threats. 

Indeed, online hate is increasing across every available metric. Jewish respondents, too, experienced a significant rise in online hate: 80 percent worry about being harassed for their religion, and many reported that they avoid identifying themselves as Jewish on social media.

This is what online hate does: it pushes people out of the conversation, impinging on their freedom of expression.

Online harassment is more than just “hurtful speech.” It ranges from being called offensive names to cyberstalking and doxing (the broadcasting of private or identifying information about an individual with the intent to cause harm). It causes emotional distress, reputational and economic harm, and withdrawal from online spaces. And it can result in real-world consequences.

Anti-trans harassment on social media has preceded bomb and death threats against hospitals that provide gender-affirming care. Mass shooters are finding inspiration in the misogynist, anti-LGBTQ+ and antisemitic content coursing through their feeds. Just last month, a white supremacist in Michigan shared on Instagram his intention to carry out an attack against a synagogue, after posting antisemitic content and glorifying past mass shootings.

How have social media executives reacted? Rather than improving abuse reporting mechanisms or enforcing their own rules against hate speech, they continue to sweep these concerns under the rug. Some platforms have even exacerbated the problem. 

Take Twitter, for example. Since Elon Musk purchased the company and took it private in October 2022, the platform has invited back users who had previously been banned. Twitter also dissolved the Trust & Safety Council, a voluntary group of civil society advisers (including ADL) tasked with helping the platform navigate issues posed by online hate. Not surprisingly, since these changes we have documented how many of the reinstated accounts are actively spreading antisemitism and inciting harassment.

In the last year, tech companies have made record layoffs to the detriment of online safety. Google shrank the team that builds moderation tools and tracks threats. Twitter dramatically reduced its safety and ethics teams. Meta’s mass layoffs decimated its trust and safety teams.

As a former Facebook employee, I have seen firsthand the harm that ignoring the spread of online hate and harassment can cause. For years, as online hate has been increasingly normalized, social media platforms have been paying lip service to trust and safety, insisting their teams have things under control. But our research makes it clear they do not. 

And unfortunately, our current laws do not incentivize these companies to prioritize protecting people from harm. Basic public safety principles apply to every other industry but are currently ignored when it comes to protecting people online. If an automaker makes faulty brakes, laws require it to recall those vehicles to fix the problem. No other industry can get away with ignoring its own safety problems the way tech companies can.

Since tech companies have proven unwilling to devote adequate resources to protecting targets of abuse, policymakers must take action. While much of the harmful speech we see online is not unlawful, there are actions government can take to both incentivize better company behavior and, where content crosses the line into inciting harm, protect people.

First, it is time to update laws that protect targets of severe harassment and digital abuse. Doxing, for example, is one form of severe online harassment that can incite violence in the physical world. And yet, there is no comprehensive federal anti-doxing law, even though this practice is becoming a more common way to target people. Some states have been making progress by passing anti-doxing and anti-swatting legislation, but Congress has a role to play, too.  

We also need to pull back the curtain on how platforms are responding to hate and harassment, how they are enforcing their own policies, and how they are living up to their promises to protect users. We need transparency about platforms’ data collection, ad targeting, and algorithmic systems to understand how online hate proliferates and how best to address it. 

We can no longer take companies at their word; we must have auditable and verifiable transparency requirements. Government has a crucial role to play in breaking open tech’s black box. We deserve to know how platforms are doing when it comes to our safety.

Lives are literally at stake.  

Yael Eisenstat is vice president at the Anti-Defamation League, where she leads the ADL Center for Technology and Society.