Google and Facebook are set for a grilling this week from lawmakers who say their platforms have fueled a rise in white nationalism and outright racism.
The tech titans are facing calls to take more action against such speech, even as they come under attack from the White House and congressional Republicans alleging that they are censoring the speech of conservatives.
At a House Judiciary Committee hearing on Tuesday, representatives from Google and Facebook will field questions about the role social media has played in allowing white nationalists to connect, organize and radicalize new recruits.
The hearing will “foster ideas about what social media companies can do to stem white nationalist propaganda and hate speech online,” according to a description from the committee.
A day later, on Wednesday, company representatives will attend a hearing on claims of tech censorship of conservatives, held by the Senate Judiciary Subcommittee on the Constitution chaired by Sen. Ted Cruz (R-Texas).
The pair of hearings will force big tech to walk a fine line as the companies pledge to crack down on harmful content while protecting users’ freedom of speech.
“The common theme between the two hearings … is the question, ‘Where are the tech companies drawing the line in their content moderation policy of what is acceptable and what is not?’ ” Emma Llansó, the director of the free expression project at the Center for Democracy and Technology (CDT), a digital rights group, told The Hill. “And they’ll be getting a lot of feedback from members of Congress about their feelings on where that line is drawn.”
The House Judiciary Committee announced the hearing on white nationalism and hate crimes shortly after the mass shooting at two New Zealand mosques last month, which was livestreamed online.
The attack intensified scrutiny of Facebook and Google’s policies on extremist content, as both companies scrambled for days to remove the video and hateful posts applauding the killer’s actions.
Rep. Doug Collins (R-Ga.), the ranking member of the House Judiciary Committee, last month told The Hill that he believes social media platforms “have a responsibility to explain their standards for blocking or banning content to Congress.”
Democrats first began calling for hearings on domestic terrorism and white extremism after the 2017 white supremacist rally in Charlottesville, Va., which left one counterprotester dead, but Rep. Jerrold Nadler (D-N.Y.) said Republicans, then in control of the House, rebuffed his efforts at the time.
On Tuesday, Democrats — led by Nadler, now the committee chairman — will flex their powers by focusing on what they see as tech’s role in fanning the flames of extremism.
A Facebook spokesperson told The Hill that its public policy director, Neil Potts, will explain the company’s new policy banning “white nationalism” and “white separatism,” which was announced last month.
Under the new policy, Facebook will ban white nationalist and white separatist content outright, a change that came a year after reports revealed that the company’s content moderation guidelines had permitted such posts while barring explicit “white supremacy.”
Potts will also emphasize the Global Internet Forum to Counter Terrorism, a 2017 initiative by the top tech companies aimed at curbing the spread of Islamic terrorist content online, as a possible model for dealing with white extremism on social media.
Google declined to share details on what Alexandria Walden, the company’s public policy and government relations counsel, will discuss at the hearing.
Other witnesses at the hearing will include Kristen Clarke, the president and executive director of the Lawyers’ Committee for Civil Rights Under Law, and Eileen Hershenov, senior vice president of policy at the Anti-Defamation League (ADL).
Both organizations have urged Facebook and YouTube to do more to deal with the deluge of bigoted content on their platforms.
“The companies that create and profit from online platforms where white supremacy is prevalent … have a responsibility to address this crisis,” Clarke will say, according to her written testimony. “We call on all online platforms to fully and fairly enforce their terms of service, and terminate purveyors of hate who violate those terms by promoting and inciting violence.”
In her opening remarks, Hershenov will point to the radicalizing role of fringe platforms like Gab and 8chan.
“Online propaganda can feed acts of violent terror, and conversely, violent terror can feed and perpetuate online propaganda,” Hershenov will say, according to a copy of her opening remarks provided to The Hill by the ADL.
On Wednesday, the focus will shift to the Senate Judiciary subcommittee. Cruz has long alleged that the Silicon Valley giants are biased against conservatives and routinely censor right-wing voices.
Representatives from Google, Facebook and Twitter will testify before the subcommittee, a source confirmed to The Hill. All three companies have rejected accusations of anti-conservative bias, arguing there is little evidence to back up the charges.
But conservatives, including President Trump, in recent weeks have ramped up their criticisms of social media companies. Trump in a tweet last month accused Facebook, Google and Twitter of being “on the side of the Radical Left Democrats.”
“Big tech behaves like the only acceptable views are those on the far left,” Cruz told The Hill earlier this month. “And any views to the contrary are suitable for censorship and silencing.”
Last week, Sen. Josh Hawley (R-Mo.) called for a third-party audit of Twitter in a letter to company CEO Jack Dorsey after an account promoting “Unplanned,” a film about an anti-abortion activist, was briefly suspended.
Across the two hearings, tech executives will have to navigate the competing concerns of lawmakers from both parties.
Llansó from the CDT said she hopes the hearings this week highlight some of the complexities of policing content on social media platforms.
“People are talking about content moderation [more] today than ever before,” Llansó said. “But there’s a pretty fundamental lack of understanding of just what it is that a major social media platform does when they run a content moderation system across their entire site.”