Lawsuit claims social media companies liable for Paris attack
The family of one of the victims in the 2015 Paris terrorist attack has filed a lawsuit in U.S. federal court, accusing social media companies of providing material support to the Islamic State in Iraq and Syria (ISIS).
The civil lawsuit filed earlier this week against Twitter, Google and Facebook asks the court to hold the companies liable for enhanced damages, and rule that the companies “violated, and [are] continuing to violate, the Anti-Terrorism Act.”
The lawsuit is similar to one filed against Twitter in Oakland, Calif., earlier this year. Both suits face a high hurdle because Section 230 of the Communications Decency Act, a landmark provision enacted as part of the Telecommunications Act of 1996, gives online platforms broad immunity from liability for harms done by third parties on their sites.
The suit is being brought by the father of Nohemi Gonzalez, a California State University student who died in the November Paris attack while studying abroad. She was among the 129 people killed.
The lawsuit alleges that the companies have not done enough to block the spread of terror recruitment and communication online. It also claims the companies, which are driven by digital ad sales, have profited from ISIS postings.
“For years, Defendants have knowingly permitted the terrorist group ISIS to use their social networks as a tool for spreading extremist propaganda, raising funds and attracting new recruits,” according to the lawsuit.
The tech companies pushed back on the allegations. While noting their immunity from liability for third-party content, they highlighted their efforts to quickly remove terror-related material when it is spotted and their work with outside organizations to help stop its spread.
The issue has gained wide visibility this year, and could draw even more attention after the mass shooting over the weekend in Orlando.
The White House has held high-level meetings with tech companies about combatting ISIS recruitment online. And all three companies recently signed onto a code of conduct in the European Union to quickly remove hate speech once it is spotted.
“As we stated earlier this year, violent threats and the promotion of terrorism deserve no place on Twitter and, like other social networks, our rules make that clear,” a Twitter official said in a statement. “We have teams around the world actively investigating reports of rule violations, identifying violating conduct, and working with law enforcement entities when appropriate. We believe this lawsuit is without merit.”
Twitter has relied on that third-party immunity defense in the similar case it is fighting in Oakland.
“Congress unequivocally resolved the question whether computer service providers may be held liable for harms arising from content created by third parties,” Twitter said in a March court brief in the separate case.
In statements, Facebook and Google also stressed that they try to remove terror-related content quickly after it is flagged.
“There is no place for terrorists or content that promotes or supports terrorism on Facebook, and we work aggressively to remove such content as soon as we become aware of it,” a Facebook spokesman said.
A Google spokesman added: “Our hearts go out to the victims of terrorism and their families everywhere. While we cannot comment on pending litigation, YouTube has a strong track record of taking swift action against terrorist content. We have clear policies prohibiting terrorist recruitment and content intending to incite violence and quickly remove videos violating these policies when flagged by our users. We also terminate accounts run by terrorist organizations or those that repeatedly violate our policies.”
Copyright 2023 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.