Facebook, Google struggle to stop spread of terrorist content

Facebook and Google’s platforms are still home to terrorist content despite their promises to crack down on extremists using their sites.

A new report by the Digital Citizens Alliance (DCA) details how graphic images of people being burned to death in cages and thrown off buildings still reside on Facebook, Instagram and Google Plus.

A gallery of screenshots from the tech giants’ platforms included in the report shows an array of terror-related content, including violent images of beheadings and pro-ISIS propaganda.

Similar pro-terror images and videos reviewed by The Hill, some dating back to 2017, still remained on the sites as of Friday. Some several-months-old posts had been removed from Facebook at some point between Thursday and Friday.

DCA worked with the Global Intellectual Property Enforcement Center (GIPEC) to find such content. Tom Galvin, executive director of DCA, said that GIPEC’s ability to find such content using methods similar to those Silicon Valley firms claim they are using casts doubt on how serious those firms actually are about their efforts to boot terror content from their platforms.

“I think what we see is that the platforms are stuck in a loop when it comes to offensive content. They have promised to fix it, and it’s not really going away,” Galvin told The Hill on Friday. “Either their systems aren’t as good as they say collectively or it’s not the priority that they claim it is.”

DCA and GIPEC found that while previous social media use was focused on normalizing the day-to-day life of ISIS members, often with memes and pictures of Nutella and kittens, more extreme images are now shown with the intent of radicalizing members.

Galvin also said that the content of posts and hashtags shows patterns of communication in the lead-up to terror attacks.

GIPEC was able to find the content through AI tools and human review by searching for hashtags like #khilafhfr, #broadcasting, #islamic_country, #Amaqagency and #depthagency, in English and Arabic.

Facebook said in a statement that it can do better on policing such content, but touted its previous work in booting extremists from its platform.

“There is no place for terrorists or content that promotes terrorism on Facebook or Instagram, and we remove it as soon as we become aware of it,” a Facebook spokesperson said in a statement. “We know we can do more, and we’ve been making major investments to add more technology and human expertise, as well as deepen partnerships to combat this global issue.”

DCA and GIPEC’s findings come the same week that Facebook said its algorithms are catching and deleting 99.5 percent of terror content before users report it, which comes out to 1.9 million pieces of ISIS and al Qaeda content in the last quarter.

Galvin called that statistic into question, saying that it’s unclear to him whether Facebook and its peers actually know how much terror content is on their platforms, or where it resides.

To him, companies like Google and Facebook are incentivized to not do the most thorough job possible when cracking down on extremist content.

“Their business model is to collect as much information as possible. No matter what, they’ll always be in conflict with trying to correct bad content on their platform,” Galvin said.

To date, much of Google’s rhetoric on its anti-terror efforts has been focused on YouTube, which Galvin says may explain why Google Plus has become an “abandoned warehouse where ISIS comes to work.”

He argued that the company is putting fewer resources into cracking down on Google Plus because the platform has received less scrutiny and doesn’t make Google as much money.

Google did not immediately respond to a request for comment.

Galvin and DCA think that their findings signal issues with bad actors on the platforms beyond extremists. Galvin posited that the findings may foreshadow more issues with election interference, a problem that landed Facebook and Google in hot water after the 2016 presidential election.