A researcher is still spotting videos of the Christchurch shootings on Facebook and Instagram – two months after the New Zealand attacks that killed 51 and spurred technology companies and governments to pledge more vigilance about the spread of hate online.
As of yesterday, there were at least two videos of the shootings available on Instagram and two videos available on Facebook, according to Eric Feinberg, a researcher and chief executive of GiPEC, a cyberintelligence start-up that tracks harmful activity on tech platforms. Facebook removed all four videos yesterday afternoon after the Technology 202 reviewed them and reached out to the social network for comment.
Feinberg says he detected the videos by hunting for obvious hashtags – searching for terms such as “New Zealand mosque attack” in various languages, including Arabic. (He used similar techniques to identify posts about drugs on social media.) Feinberg says that if he can find these posts independently, it casts doubt on the efficacy of Facebook’s recent investments in its ability to moderate violent content on its platform.
“I think it’s just appalling,” Feinberg said. “They’ve been on notice that these exist.”
“You would think that with the 30,000 moderators they have that they would put a full court press on this,” he added. The teams working on safety and security at Facebook now number more than 30,000 people, and about half of them are content reviewers – a mix of full-time employees, contractors and outside companies.
Facebook has struggled to keep up with the deluge of Christchurch shooting videos posted to its website after the alleged gunman used Facebook Live to broadcast the attack in real time. Even after that video was removed, copies had already been saved and posted across a wide range of social media services, including YouTube and Twitter. Facebook says it continues to try to stamp out videos of the shooting – but it hasn’t shared data on how many of those videos it is still blocking or removing since it said it removed 1.5 million videos in the first 24 hours after the attack.
“We continue to automatically detect and prevent new uploads of this content on our platforms, using a database of more than 900 visually unique versions of this video,” Facebook spokeswoman Sally Aldous said. “When we identify isolated instances of newly edited versions of the video being uploaded, we take it down and add it to our database to prevent future uploads of the same version being shared.”
Facebook said it added the videos Feinberg flagged to that database so that future uploads of the same versions will be blocked.
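Matching databases like the one Facebook describes generally rely on perceptual hashing: a fingerprint that stays stable under small edits (re-encoding, brightness tweaks, watermarks) so near-duplicates can be caught without an exact byte match. Below is a minimal, illustrative sketch using an average-hash approach on an 8×8 grid of grayscale values; the function names and threshold are invented for illustration, and Facebook's actual fingerprinting system (which it has said uses both audio and visual matching) is not public.

```python
def average_hash(pixels):
    """Compute a 64-bit perceptual hash from an 8x8 grid of grayscale values.

    Each bit records whether the corresponding pixel is brighter than the
    frame's average, so uniform brightness shifts leave the hash unchanged.
    """
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > avg else 0)
    return bits

def hamming_distance(h1, h2):
    """Count differing bits between two hashes."""
    return bin(h1 ^ h2).count("1")

def matches_database(frame_hash, known_hashes, threshold=10):
    """Flag an upload whose hash is within `threshold` bits of a known version."""
    return any(hamming_distance(frame_hash, h) <= threshold for h in known_hashes)

# A slightly brightened copy of the same frame still matches the database:
original = [[i * 8 + j for j in range(8)] for i in range(8)]
edited = [[min(255, v + 3) for v in row] for row in original]
database = {average_hash(original)}
print(matches_database(average_hash(edited), database))  # True
```

This is also why "newly edited versions" still slip through, as the spokeswoman's statement acknowledges: an edit aggressive enough to push the fingerprint past the matching threshold produces a new "visually unique version" that must be caught and added to the database before it can be blocked automatically.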
One of the major challenges for Facebook has been the proliferation of edited variants of the video. Facebook deployed a number of techniques to find these variants, and says it is investing $7.5 million in new research projects to allow it to detect manipulated media and better distinguish between unwitting posters and intentional adversaries. The company also introduced new restrictions on Facebook Live to try to prevent other violent events from being broadcast live in the future.
Google and Twitter have also struggled to police the proliferation of the video. Feinberg said he did not find any copies of the video still available on these platforms in recent days.
Facebook announced these changes as it joined several other companies and foreign governments in signing the Christchurch Call yesterday – a global pledge to stamp out violent extremism online. Companies including Amazon, Google, Twitter and Microsoft promised to work more closely with each other and governments to prevent their sites from fostering terrorism.
The pledge is largely symbolic because the document is non-binding. The White House did not sign onto the pledge after Trump administration officials said free-speech concerns prevented them from formally approving it.
Feinberg says “there’s no teeth” in the pledge, and he believes it’s time for Congress to overhaul Section 230 of the Communications Decency Act, a legal provision that gives the technology platforms broad immunity for the content that third parties post on their sites.
“It’s not for the White House to comment, it’s for Congress to legislate Section 230,” he said.
There’s growing support in Congress for an overhaul to the law in the wake of the New Zealand attacks. House Speaker Nancy Pelosi, D-Calif., said on a recent podcast that the provision’s future “could be a question mark and in jeopardy.”