
Lawmakers urge tech to root out extremism after New Zealand


Lawmakers are putting pressure on social media companies to take aggressive action against white supremacists in the wake of the New Zealand massacre.

Critics note that tech companies have largely succeeded in rooting out content promoting ISIS, but question why the same efforts have not targeted other extremists online.

On Capitol Hill, Rep. Bennie Thompson (D-Miss.), the chairman of the House Homeland Security Committee, on Tuesday asked Facebook, YouTube, Twitter, and Microsoft to brief his panel on their efforts to remove violent terrorist content – including from “far-right, domestic terrorists” like the New Zealand shooter.

“Your companies tout your record in removing terrorism-related content,” Thompson wrote, noting efforts to combat ISIS and al-Qaeda-related postings. “However, the public has largely been kept in the dark regarding metrics associated with other violent extremists, including far-right violent extremists.”

The issue is gaining new attention after the shootings at two New Zealand mosques that killed 50 people.

The suspected shooter posted a white supremacist manifesto on Twitter and other social media outlets, laying out his bigoted views on Muslims and immigrants, according to New Zealand police. The shooter also uploaded a live video to Facebook of the attack, filming himself shooting dozens of worshippers at one of the mosques he targeted. Though fewer than 200 people tuned into the livestream itself, Facebook said on Monday, the video had gone viral within minutes.

Twitter, Facebook, YouTube, Instagram and other platforms scrambled to wipe the disturbing 17-minute livestream, but the video took on a life of its own. Users at one point were uploading and sharing clips as quickly as once per second, YouTube said.  

Lawmakers say the issue extends beyond the video itself and raises larger questions about how to deal with the extremist views the shooter and his supporters promoted online.

“Social media platforms – like Facebook and YouTube – can be grossly misused, allowing radicalized extremists in all forms [to] take advantage of their openness, their scale, and their reach,” Sen. Mark Warner (D-Va.), the vice chair of the Senate Intelligence Committee, said in a statement to The Hill.

“Platforms should prioritize adapting to constantly-shifting online threats, including how they understand and define dangerous or hateful content,” Warner said, encouraging tech platforms to change their practices before Congress acts.

Sen. Richard Blumenthal (D-Conn.) offered an even sharper rebuke, accusing Facebook, YouTube and other large tech companies of turning “a blind eye to hate [and] racism on their platforms for a decade.”

“We will be suffering the violent [and] divisive repercussions of Big Tech putting profits over people for years,” Blumenthal said in a statement. “Facebook [and] other platforms should be held accountable for not stopping horror, terror, and hatred—at an immediate Congressional hearing.”

In the week since the shooting, several lawmakers in the House and Senate have called for hearings on how tech companies deal with hatred and extremism on their platforms.

“We’ve seen the ways social media has amplified hate and violence online, and I support the call for hearings in the Senate to look into this incredibly serious matter,” said Sen. Amy Klobuchar (D-Minn.), a 2020 presidential contender.

The House Judiciary Committee earlier in the week confirmed it plans to hold a hearing specifically on white nationalism. Rep. Doug Collins (R-Ga.), the committee’s ranking member, said he believes online platforms “need to be more transparent with Congress and the public.”

“Social media platforms have a responsibility to explain their standards for blocking or banning content to Congress,” Collins said in a statement to The Hill.

Capitol Hill has mounted a similar push before. In 2014, lawmakers and intelligence agencies pressured companies to confront ISIS and al-Qaeda efforts to radicalize and recruit online, after outrage over a spate of beheading videos ISIS spread on social media.

In response, Facebook, Microsoft, Twitter, and YouTube formed the Global Internet Forum to Counter Terrorism (GIFCT) in 2017, an initiative aimed at curbing the spread of Islamist terrorist content online.

That coordinated effort produced a large reduction in Islamist extremist content, with companies reporting a high success rate in deleting content, often before users even see it.

Seamus Hughes, the deputy director of George Washington University’s Program on Extremism, told The Hill the GIFCT is a “reflection of the pushback … [tech companies] got on Capitol Hill, the public, in terms of terrorist content on their servers.”

Civil rights advocates and extremism experts have expressed frustration, saying they have long pushed for tech companies to launch a similar crackdown on white supremacist and other hateful content.

They say more must be done to target online hate before it leads to real-world violence.

A representative of Muslim Advocates, a civil rights group, told The Hill that she does not believe tech companies have given “enough attention to the issue of white supremacists and white nationalist groups on the platforms.”

After the violent neo-Nazi rally in Charlottesville, Va., in 2017, during which a counterprotester was killed, tech companies faced calls to change their hate speech policies, with critics noting that white supremacist groups had used online forums to organize the event.

Since then, tech companies have changed policies and published transparency reports. But experts told The Hill they have seen nothing close to the removal campaign that targeted ISIS and al-Qaeda.

Heidi Beirich, director of the Southern Poverty Law Center’s Intelligence Project, said the issue is that tech companies have “barely started” to think of online white supremacy as a form of terrorism.

“[After 2017], we saw a shift in the industry to start removing this kind of content,” Beirich said. “But it’s far from complete and it’s nothing like their universal bar on ISIS [and] al Qaeda.”

“They don’t think about white supremacy as an international terrorism problem,” she added.

Hughes said he believes the “tragic events in New Zealand are going to be in many ways a catalyst” for companies to be more proactive against white nationalist content.

The industry says it is taking the problem seriously.

Facebook said this week that GIFCT member organizations have been coordinating since the New Zealand massacre, sharing more than “800 visually-distinct videos related to the attack” in the forum’s collective terrorism database.

“This incident highlights the importance of industry cooperation regarding the range of terrorists and violent extremists operating online,” wrote Chris Sonderby, Facebook’s vice president and deputy general counsel.

All of the platforms have policies against content that promotes violence and extremism, but the guidelines are murkier when it comes to hate speech, as the platforms say they don’t want to infringe on freedom of expression.

“Hate speech and content that promotes violence have no place on YouTube,” a YouTube spokesperson said in a statement to The Hill. “Over the last few years we have heavily invested in human review teams and smart technology that helps us quickly detect, review, and remove this type of content.”

Twitter’s policies bar references to “violent events where protected groups have been the primary targets or victims,” as well as behavior that targets individuals based on “race, ethnicity, national origin or religious affiliation,” a spokesperson said in a statement to The Hill. The platform reported last year that it removed more than 205,000 accounts for making violent threats between January and June 2018.

But lawmakers say tech companies must do more.

Sen. Brian Schatz (D-Hawaii), a tough critic of the tech industry, on Sunday called out Twitter CEO Jack Dorsey after Dorsey posted a message expressing sympathy for victims of the New Zealand attack.

“It is too bad you are just another individual with a twitter account and not the person in charge of whether or not Nazis get to be here,” Schatz wrote.