Hatemail 2020-10-15: Are These Social Media Takedowns Enough?
Newsletter and intel from the LaBac Hacker Collective
As we hunker down for the final weeks of Trump’s and Biden’s presidential campaigns, we would like to take a moment to provide this helpful framework for identifying disinformation, courtesy of Graphika’s Ben Nimmo (@benimmo).
The framework, called “the Four D’s,” refers to four distinct behaviors that can help reveal the motive behind a threat actor’s distribution of bad information.
Dismiss: The first, and by far the most common, technique a threat actor deploys. To delegitimize an idea or piece of information, the threat actor targets the entity or individual spreading it and says “don’t listen to ‘them’ because... [insert an insult of choice]”. The goal is to silence the person speaking out and instill distrust in them as a source.
Distort: If a threat actor encounters a fact that doesn’t suit their preferred story, they make up their own facts to retell the narrative.
Distract: If a conversation is uncomfortable or unfavorable, then someone can attempt to change the subject. This can be done by simply starting a conversation about a different topic and is a surprisingly effective tactic. Another method is to accuse an accuser of being guilty of the same thing. This technique can draw a false comparison between the critic and the one being criticized while also changing the subject.
Dismay: ‘Dismay’ refers to scaring people away from engaging seriously with an idea or piece of information. This technique is often used in policy debates: dismaying rhetoric warns of dire consequences in the hope that opponents will abandon their line of inquiry or a previously stated goal.
For more, check out Ben Nimmo’s teach-out hosted by the University of Michigan on Coursera.
Thursday, October 15 (4pm ET) - Webinar, “Adtech and the Attention Economy” with Tim Hwang and Moira Weigel. [Data & Society]
Friday, October 16 (8:30am ET) - The Silicon Harlem Virtual Summit is a timely event to focus on the role technology has in advancing humanity. [Eventbrite]
Friday, October 16 (2pm ET) - Webinar, “Lawgorithms: Everything Poverty Lawyers Need to Know about Tech, Law, and Social Justice” with Michele Gilman and Meredith Broussard. [Data & Society]
Tuesday, October 20 (2pm ET) - Webinar, “Electionland Misinformation” with Ryan McCarthy and Cristina López. [Data & Society]
Are Recent Social Media Takedowns Enough?
[Politico] [NBC News] Last week, an investigation by the Washington Post found that teenagers had been recruited by an Arizona-based marketing firm to create conservative social media content in an operation described as a “domestic troll farm.” The firm, called Rally Forge, is tied to the pro-Trump conservative group Turning Point USA. Facebook has since taken down hundreds of fake accounts made by the now-banned firm.
[Stanford Cyber Policy Center] Analyses recently released by the Stanford Cyber Policy Center take a closer look at the social media accounts associated with several state-linked operations taken down by Twitter last week. The operations are linked to entities in Cuba, Saudi Arabia, Iran, Thailand, and Russia (specifically, the Internet Research Agency).
[Washington Post] Engaging in the disturbing behavior of “digital blackface,” networks of Twitter accounts posing as Black Trump supporters have repeatedly appeared, spread content among thousands of users, and then vanished. Many of these fake accounts use identical language in their tweets, yet they have generated more than 265,000 retweets on Twitter, according to Clemson University social media researcher Darren Linvill (@DarrenLinvill), who recently started tracking these networks.
[Ars Technica] Both Twitter and Facebook faced criticism after limiting the reach of (or outright banning, in the case of Twitter) a story published by the New York Post suspected to be disinformation about former Vice President Joe Biden.
[New York Times] Research from the German Marshall Fund Digital found that engagement with “deceptive outlets” on Facebook is higher today than in the period leading up to the 2016 election — so, not the greatest news for American democracy.
Militarized Law Enforcement
[New York Times] This forensic reconstruction by the New York Times of the moments leading to the death of an Antifa activist at the hands of federal marshals in Portland sheds light on the questionable tactics and operations of a U.S. Marshals task force. The reconstruction, along with new information about the incident, raises serious questions about the accounts given by law enforcement.
[TechDirt] [Los Angeles Times] The Los Angeles Times recently busted the LAPD’s (not so) secret love affair with facial recognition tech. After literal years of department claims that it doesn't use facial recognition software, the LAPD finally admitted to using it 30,000 times since 2009. Well, they do say the first step is admitting you have a problem...
[Noema] Over 90,000 police contractors have been hired to work in the vast network of Chinese reeducation camps currently detaining Muslims. Many of the contractors are young men from the Uighur and Kazakh Muslim populations, the same groups targeted by the system. Contractors carry out surveillance duties such as monitoring cameras, performing phone inspections, and staffing face-scanning machines and metal detectors at fixed checkpoints.
On Our Radar…
[Washington Post] Political tensions are running high in tech companies, including among employees at Facebook and Google. Nitasha Tiku (@nitashatiku) writes about the unsaid rule in the presumably left-leaning tech industry: “Don’t talk politics” (which, Tiku explains, is the diplomatic way to say “Don’t be liberal”).
[Oregon Live] Tech company New Relic is one of the largest employers in Portland. As racial justice protests raged downtown, New Relic employees began to examine their own workplace’s struggle with decency.
[Orange County Register] [LA Times] First, mysterious unofficial ballot boxes began appearing in Southern California. After suspicions arose, a church hosting one of the unofficial boxes stated that the California GOP was responsible for installing and maintaining it. Now California authorities are demanding that the boxes be taken down, while the state GOP declines to take responsibility for them even as it appears to support the ballot collection.
Hate speech website: Bitchute
Who hosts: Cloudflare, NatCoWeb Corp.
Today’s site is bitchute[.]com. Bitchute is a news-media aggregation site hosting videos, memes, and comments likely too fringe to be accepted on conventional social media. The site popularizes conspiracy theories, Holocaust denial content, and racist viewpoints. Bitchute uses Cloudflare to protect its infrastructure. However, we have previously observed the website hosted on NatCoWeb Corp. infrastructure with IPs 88.214.207[.]96 and 88.214.207[.]97.