Twitter, Facebook to update hate speech moderation

Social media giants Twitter and Facebook are working on plans to update how they handle hate speech on their platforms amid mounting scrutiny from civil rights groups.

Twitter is expanding its hateful conduct policy to prohibit language that “dehumanizes people on the basis of race, ethnicity, or national origin,” the company announced Wednesday. 

Posts with such language may be removed from Twitter if reported, and users who repeatedly break the rule may have their accounts temporarily locked or suspended.

The update broadens the company’s hateful conduct policy, which already prohibited language that dehumanizes others on the basis of religion, caste, age, disability or disease.

Facebook is also updating how it handles hate speech online, a company spokesperson confirmed to The Hill.

The company is overhauling its algorithm that detects hate speech as part of a project first reported by The Washington Post on Thursday.

Over the past several months, Facebook has stopped using proactive detection technology to find a small subset of attacks: those directed at white people, Americans and men. Hate speech directed at those groups will still be removed if it is reported, according to the company.

Facebook has now made updates to focus its proactive detection technologies on hate speech that is considered the most serious.

The new system, known as the WoW project, involves reengineering Facebook’s systems to improve detection and deletion of hateful language considered “the worst of the worst,” including slurs directed at Black people, Muslims, people of more than one race, the LGBTQ community and Jews, according to internal documents reviewed by the Post.

As part of the overhaul, Facebook assigned numerical scores to forms of hate speech, weighted by perceived harm, allowing the system to prioritize policing the most severe content, the Post reported.

“We know that hate speech targeted towards underrepresented groups can be the most harmful, which is why we have focused our technology on finding the hate speech that users and experts tell us is the most serious,” Facebook spokeswoman Sally Aldous said in a statement to The Hill. “Over the past year, we’ve also updated our policies to catch more implicit hate speech, such as content depicting Blackface, stereotypes about Jewish people controlling the world, and banned Holocaust denial.”

The project is still in its early stages, the Post reported.

Civil rights groups that have been pushing the social media platforms to better address hate speech said the companies’ plans are long overdue and may still fall short.

“This is progress, but Twitter demonstrated a consequential lack of urgency in implementing the updated policy before the most fraught election cycle in modern history, despite repeated warnings by civil rights advocates and human rights organizations,” Color of Change’s vice president Arisha Hatch said in a statement.

Hatch also said Twitter has adopted a “non-committal and cavalier attitude toward transparency” and has failed to detail how content moderators are trained or how effective Twitter’s artificial intelligence is at identifying dehumanizing content.

“The jury is still out for a company with a spotty track record of policy implementation and enforcing its rules with far-right extremist users. Void of hard evidence the company will follow through, this announcement will fall into a growing category of too little, too late PR stunt offerings,” Hatch added.

Hatch told the Post she had not known about Facebook’s proposed overhaul, but after reviewing the documents at the newspaper’s request she said the project is “confirmation of what we’ve been demanding for years, an enforcement regime that takes power and historical dynamics into account.”

Sum of Us, an advocacy group, said Facebook’s proposed changes do not go far enough in regulating content. The group also addressed Facebook’s announcement Thursday that it will remove false claims about the coronavirus vaccine.

“Facebook is well aware of the harm it causes by allowing some of the most vile content to be promoted through its algorithms. Their latest move to more aggressively police anti-Black hate speech and false claims about COVID-19 vaccines shows that they have the ability to clean up their act – if they want to,” Sum of Us executive director Emma Ruby-Sachs said in a statement.

“But the platform still needs to take more responsibility across the board to regulate how quickly it allows harmful disinformation to spread, and we’ll keep putting pressure on them until they do so,” Ruby-Sachs added.

Updated at 12:23 p.m.