By BARBARA ORTUTAY, HALELUYA HADERO and MATT O’BRIEN, AP Technology Writers
These days, mass shooters don’t stop at planning their physical attacks. They also make marketing plans, arranging to livestream their massacres on social platforms in hopes of inciting more violence.
Sites like Twitter, Facebook and now the game-streaming platform Twitch have learned painful lessons from dealing with the violent videos that now regularly accompany such shootings. But experts are calling for a broader discussion around livestreams, including whether they should exist at all, since once such videos go online, they are almost impossible to delete completely.
The self-proclaimed white supremacist gunman who killed 10 people, most of them Black, at a Buffalo supermarket on Saturday mounted a GoPro camera on his helmet to stream his attack live on Twitch, the video game streaming platform also used by a shooter who killed two people at a synagogue in Halle, Germany, in 2019.
He had previously outlined his plan in a detailed but rambling set of online diary entries that were apparently posted publicly before the attack, although it was unclear how many people saw them. His goal: to encourage copycats and to spread his racist beliefs. After all, he was a copycat himself.
He decided not to stream on Facebook, as another mass shooter did when he killed 51 people in two mosques in Christchurch, New Zealand, three years ago. Unlike Twitch, Facebook requires users to sign up for an account to watch livestreams.
Not everything went according to the gunman’s plan, however. By most accounts, the platforms responded more quickly to stop the spread of the Buffalo video than they did after the 2019 Christchurch shooting, according to Megan Squire, a senior fellow and technology expert at the Southern Poverty Law Center.
Another Twitch user watching the live video likely flagged it to the attention of Twitch’s content moderators, she said, which would have helped Twitch pull the stream less than two minutes after the first gunshots, according to a company spokesperson. Twitch did not say how the video was flagged.
“In this case, they did pretty well,” Squire said. “The fact that the video is so hard to find right now is proof of that.”
In 2019, the Christchurch shooting was streamed live on Facebook for 17 minutes and quickly spread to other platforms. This time, the platforms seem to have coordinated better, particularly by sharing digital “signatures” of the video used to identify and delete copies.
But platforms’ algorithms can have a harder time identifying a copycat video if someone has edited it. That creates problems, as when some internet forum users remade the Buffalo video with twisted attempts at humor. Tech companies would need more sophisticated algorithms to identify such partial matches, according to Squire.
“It seems darker and more cynical,” she said of the attempts to spread the shooting video in recent days.
Twitch has more than 2.5 million viewers at any given time; nearly 8 million content creators stream video on the platform every month, according to the company. The site uses a combination of user reports, algorithms and moderators to detect and remove any violence that occurs on the platform. The company said it removed the gunman’s stream quickly, but did not share many details about what happened Saturday, including whether the stream was reported or how many people watched it live.
A Twitch spokesperson said the company shared the livestream with the Global Internet Forum to Counter Terrorism, a nonprofit group set up by tech companies to help other platforms monitor their own sites for rebroadcasts. But clips from the video still made their way onto other platforms, including the site Streamable, where they were available to be viewed by millions. A spokesperson for Hopin, the company that owns Streamable, said Monday that it was working to remove the videos and terminate the accounts of those who uploaded them.
Looking ahead, the platforms could face future moderation complications from a Texas law, reinstated by an appeals court last week, that prohibits large social media companies from “censoring” users’ viewpoints. The shooter had a viewpoint, and the law is vague enough to create a risk for platforms that moderate people like him, according to Jeff Kosseff, an associate professor of cybersecurity law at the U.S. Naval Academy. “It really puts a thumb on the scale of perpetuating this destructive cycle,” he said.
Alexa Koenig, executive director of the Human Rights Center at the University of California, Berkeley, said there has been a shift in how technology companies respond to such events. In particular, Koenig said, coordination among companies to create fingerprint repositories for extremist videos, so they can’t be re-uploaded to other platforms, is “an incredibly important development.”
A Twitch spokesman said the company will review how it responds to the gunman’s livestream.
Experts suggest that sites like Twitch could exercise more control over who can livestream and when, for example by building in delays or vetting users while banning rule-breakers. More broadly, Koenig said, “there is also a bigger societal conversation that needs to take place around the utility of livestreaming, when it is valuable and when it is not, and how we put safe norms in place around how it’s used and what happens when it’s used to harm.”
The other option, of course, is to end livestreaming altogether. But that is almost impossible to imagine, given how heavily tech companies rely on livestreams to attract and keep users engaged in order to bring in money.
Free speech, according to Koenig, is the reason technology platforms most often give for allowing this form of technology, beyond the unspoken revenue motive. But that needs to be balanced “with rights to privacy and some of the other issues that arise in this instance,” Koenig said.
Copyright 2022 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or distributed.