YouTube removed ‘tens of thousands’ of videos after New Zealand shooting
YouTube on Monday said it had removed “tens of thousands” of videos depicting last week’s mass shooting in New Zealand, which was livestreamed on Facebook and reposted millions of times across major social media platforms.
The video-sharing website said the volume of videos posted to YouTube after the attack was “unprecedented both in scale and speed,” with users uploading the gruesome videos much faster than YouTube could take them down.
“Since Friday’s horrific tragedy, we’ve removed tens of thousands of videos and terminated hundreds of accounts created to promote or glorify the shooter,” a YouTube spokesperson said in a statement to The Hill. “The volume of related videos uploaded to YouTube in the 24 hours after the attack was unprecedented both in scale and speed, at times as fast as a new upload every second.”
In the hours after the attack, YouTube went into overdrive trying to take down the footage, which the shooter apparently filmed using a GoPro camera strapped to his head. The video shows the gunman firing into crowds of worshippers at a mosque in Christchurch.
So far, at least 50 people have died and around 50 have been reported injured in the attacks on the two New Zealand mosques.
Facebook on Sunday said it removed 1.5 million videos of the attack in the first 24 hours, 1.2 million of which were blocked at upload, before any user could see them. The company did not say how many people watched the remaining 300,000 videos that slipped past its artificial intelligence (AI) tools.
In the wake of the attack, YouTube assembled a group of senior executives tasked with limiting the video’s spread, according to a report from The Washington Post.
YouTube typically requires a human reviewer to assess whether a video flagged by its AI system should be taken down, a safeguard meant to ensure that videos are not removed unfairly. But at one point the team bypassed human review entirely so that flagged videos could be taken down more quickly.
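The Post’s account implies a two-stage pipeline: an AI classifier flags suspect uploads, and a human reviewer normally confirms each takedown. Here is a minimal sketch of that flow in Python; every name, score and threshold is hypothetical, not anything YouTube has disclosed:

    from dataclasses import dataclass

    @dataclass
    class Upload:
        video_id: str
        ai_score: float  # hypothetical classifier confidence, 0.0 to 1.0

    REMOVE_THRESHOLD = 0.9       # assumed cutoff for "flagged"
    require_human_review = True  # the safeguard reportedly switched off

    def moderate(upload: Upload) -> str:
        """Keep the video, queue it for a reviewer, or remove it outright."""
        if upload.ai_score < REMOVE_THRESHOLD:
            return "keep"
        if require_human_review:
            return "queue_for_review"  # normal path: a person confirms
        return "remove"                # emergency path: AI flag alone suffices

    require_human_review = False                      # the reported change
    print(moderate(Upload("abc123", ai_score=0.97)))  # -> "remove"

Dropping the reviewer trades accuracy for speed: takedowns become instant, at the cost of the false positives the review step normally catches.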
The company decided to automatically reject any footage of the attack, an atypical move, as YouTube usually allows videos of violence to remain up if they are presented within a “newsworthy” context, like news segments from legitimate sources.
“Our teams are continuing to work around the clock to prevent violent and graphic content from spreading,” the YouTube spokesperson said. “We know there is much more work to do.”
Facebook and Twitter, which also use a mixture of human content reviewers and AI tools to police graphic content, said they were still removing copies of the video days after the attack.
“We are continuously monitoring and removing any content that depicts the tragedy, and will continue to do so in line with the Twitter Rules,” Twitter said in a statement. “We are also in close coordination with New Zealand law enforcement to help in their investigation.”
Many accounts are sidestepping the companies’ AI systems, which are trained to automatically take down flagged footage, by altering the clips slightly so that re-uploads no longer match known copies.
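One reason slight alterations work: any system that blocks re-uploads by matching exact file fingerprints fails the moment a single byte changes. A small Python illustration (the platforms’ actual fingerprinting schemes are not public; this only shows why exact matching is brittle):

    import hashlib

    original = b"...stand-in for a known clip's bytes..."
    altered = original + b"\x00"  # a trivially modified copy

    # The two digests share nothing in common, so a blocklist keyed
    # on exact hashes will not recognize the altered upload.
    print(hashlib.sha256(original).hexdigest())
    print(hashlib.sha256(altered).hexdigest())

This is why platforms also rely on perceptual fingerprints that tolerate re-encoding, cropping, or watermarking, though heavier edits can still slip past them.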
“This was a tragedy that was almost designed for the purpose of going viral,” Neal Mohan, YouTube’s chief product officer, told the Post. “We’ve made progress, but that doesn’t mean we don’t have a lot of work ahead of us, and this incident has shown that, especially in the case of more viral videos like this one, there’s more work to be done.”