Bill targeting social media ‘terrorist activity’ faces backlash
Opponents of a new bill that would require social media companies to police their networks for terrorist activity began pushing back on the legislation as soon as it was introduced on Tuesday.
“I’m opposed to this proposal because I believe it will undermine that collaboration and lead to less reporting of terrorist activity, not more,” Sen. Ron Wyden (D-Ore.) said in a statement.
The bill, introduced by Sens. Dianne Feinstein (D-Calif.) and Richard Burr (R-N.C.), would require social media platforms like Facebook, Twitter and YouTube to alert federal officials about online terrorist activity.
Feinstein, the Senate Intelligence Committee’s top Democrat, has been pushing for legislation to try to stall the growing use of social media by extremist groups to spread propaganda, recruit followers and plan attacks.
“We’re in a new age where terrorist groups like ISIL are using social media to reinvent how they recruit and plot attacks,” Feinstein said, using an alternative acronym for the Islamic State in Iraq and Syria (ISIS). “That information can be the key to identifying and stopping terrorist recruitment or a terrorist attack, but we need help from technology companies.”
Wyden, who throughout the year has been an outspoken opponent of another major cybersecurity bill co-sponsored by Burr and Feinstein, criticized the legislation as counterproductive.
This bill, he said, “would create a perverse incentive for companies to avoid looking for terrorist content on their own networks, because if they saw something and failed to report it they would be breaking the law, but if they stuck their heads in the sand and avoided looking for terrorist content they would be absolved of responsibility.”
The Software & Information Industry Association (SIIA) was also quick to register its opposition to the bill on Tuesday, citing privacy concerns.
“The desire to do something, particularly in the wake of recent attacks, should not lead Congress to put more innocent people under government surveillance, without any evidence it would make us safer,” Senior Vice President for Public Policy Mark MacCarthy said in a statement.
Forcing companies to report certain kinds of activity to law enforcement would “circumvent established legal methods” the government uses to demand information, the group said.
Right now, social media companies rely on the terms of service of their sites to justify the removal of ISIS-affiliated content at their own discretion, and law enforcement agencies can make removal requests.
Following last week’s deadly shooting in San Bernardino, Calif., which has been labeled a terrorist attack, both President Obama and Democratic presidential front-runner Hillary Clinton on Sunday called for social media firms to cooperate with law enforcement to limit the reach of ISIS.
“They are using websites, social media, chat rooms and other platforms to celebrate beheadings, recruit future terrorists and call for attacks,” Clinton said in remarks at the Brookings Institution. “We should work with host companies to shut them down.”
But experts are divided over the wisdom of booting terrorists off social media networks.
Many cite freedom of speech concerns. Although some proponents compare the mechanism to established policies surrounding child pornography, others note that terrorist content isn’t so well defined and could be considered political speech.
Both Wyden and SIIA hit the bill for a vague definition of what constitutes “terrorist activity” online.
Other skeptics suggest that terrorist chatter on social media is a valuable stream of intelligence.
“If every single ISIS supporter disappeared from Twitter tomorrow, it would represent a staggering loss of intelligence,” a March report from the Brookings Institution noted.