In July, the Washington Post ran a story about a viral left-wing influencer named Erica Marsh who became a poster child for conservatives as an example of liberal insanity. Marsh’s outrageous opinions enraged millions of people, sparking endless mockery, vitriol and debate. The most controversial thing about Marsh, however, is that she doesn’t seem to exist. Marsh’s obviously doctored photos, cartoonish tweets and lack of any other online presence raised suspicions that she was a fabrication of conservative or even foreign agitators trying to stir up division.
When the Post contacted Twitter about Marsh, the company suspended her account without explanation.
If you want to get a lot of attention on social media, one winning strategy is “rage baiting,” the term for inflammatory posts that goad people into engagement.
“We’re all being influenced by it, even if we think we’re in control,” said Tobias Rose-Stockwell, author of the upcoming book Outrage Machine: How Tech Amplifies Discontent, Disrupts Democracy—And What We Can Do About It. “It’s nothing new, but in the past, it was just something you’d see on TV or the radio. Now with social media, we’ve all been pulled into this broad moral play together.”
Social media algorithms are tuned to promote controversy. Rage baiting is an easy way to pursue political goals, such as advancing particular positions or even spreading discontent among your opponents. “It’s been a bonanza for political operatives, activists, and even conflict entrepreneurs,” Rose-Stockwell said. But it’s also something regular people can fall into unintentionally. “If you post about something you’re extremely angry about and it gets a huge amount of traction, then our brains will start to assume that this is what the world wants.”
The phenomenon deepens societal divisions and advances misinformation. It’s also bad for the people observing and responding to it. Rage baiting is a distraction that prevents us from engaging with what’s actually going on in the world, and thanks to the stress that comes with fury, it’s bad for your mental health.
The most effective solutions to this problem require action from social media companies and the governments that regulate them. But Rose-Stockwell said there’s a lot you can do as an individual, too.
Rage-baiting strategies fall into a few broad categories. By understanding these techniques, you can spot rage baiting when it’s happening. That will help slow down your own thinking so you don’t fall into the trap, and make it less likely that you’ll contribute to the problem by reacting or sharing the content.
Rose-Stockwell compiled a list of the most common rage-baiting techniques, and the kinds of people who use them, to watch out for.
The Motte and Bailey
The “motte and bailey” is a sort of argumentative sleight-of-hand. The offender starts with a weak, controversial position, known here as the “bailey.” When the weak argument is criticised, the debater retreats to the “motte,” a strong, uncontroversial position that’s harder to argue with. It’s a moving target that sneaks nonsense in alongside positions everyone can agree on. It goes something like this:
Bailey: “Fast food should be outlawed to fight obesity.”
Counterargument: “Shouldn’t people have the freedom to choose what they eat?”
Motte: “Don’t you think we should promote healthier eating habits? We’re facing an obesity crisis.”
Scissor Statements
Scissor statements split observers into two distinct and opposing groups. Typically, a scissor statement presents an idea that one audience will think is extreme or ridiculous, while the other finds it plausible, or at least worth considering. For example: “To truly address systemic oppression, we should implement a progressive tax on personal attractiveness.”
The goal is to make a sharp division along binary lines. In other words, there are exactly two sides to take in this debate, and they’re very different. You don’t want to be one of the morons, do you? If not, there’s only one position for you to take.
The Flaming Strawman
“If you’re against government surveillance, then you must be in favour of letting terrorists run rampant and harm innocent people. Are you sympathetic to terrorists?”
A “flaming strawman” blends together two rhetorical ideas. You might have heard of a strawman argument: essentially, you distort your opponent’s views to make them easier to criticize, magnifying, perverting, or outright fabricating their intentions. Knocking down a strawman makes it a lot more convenient to present your own stance as rational.
It becomes a “flaming” strawman when you throw in a derogatory insult for flavour.
Conflict Entrepreneurs
Motte and Baileys, Scissor Statements, and Flaming Strawmen are the favourite techniques of Conflict Entrepreneurs, aka Line Steppers. You probably know the type. Conflict Entrepreneurs exploit social, cultural, or political fault lines. They inject outrageous content into these spaces to intensify emotions and polarise the discourse. This can look like disinformation, incendiary memes, and other rhetorical techniques that attempt to divide audiences. Sometimes they do it for personal gain, but sometimes it’s just about spreading chaos.
The old saying goes there’s no such thing as bad press, but that’s even more true on social media. It doesn’t matter whether people are watching and commenting because they hate you. Engagement is engagement, and the more the better.
Context Collapse
Context collapse is when an event — benign or otherwise — is posted to social media without the necessary or accurate background information. Without that context, the audience makes assumptions and jumps into the comments to share their anger or outrage. The classic example is a video of upsetting behaviour that’s edited to remove the events that led up to it. With the context, the behaviour seems reasonable. Without it, it seems like you’re looking at a nut job.
Context Creep
Context creep is a more pernicious version of context collapse — when brand new, inaccurate context gets added to the original content. Often, this is done to explain why something is problematic and tie it to a bigger, more troubling narrative. Sometimes bad actors do this intentionally, but other times, it’s innocent people responding to context collapse and filling in the blanks with false assumptions.
The Trigger Chain
You’ve seen it before. Divisive or upsetting content makes its way onto social media. It triggers an emotional response from people who see it, who respond with their own vitriolic posts. The reaction continues as these posts trigger additional people who add their own responses to the chain. In the end, you have cascading threads of outrage, with people responding to content that’s completely divorced from the original event.
Threat Picking
There are a lot of problems in the world, but they’re not all emblematic of something bigger. Sometimes an anecdote is just an anecdote, but Threat Picking highlights a sensational risk and portrays it as proof of something much scarier while downplaying conflicting evidence. This can be used to influence public opinion, promote a particular narrative, or serve a specific agenda.
The pandemic was rife with examples of Threat Picking. In extremely rare circumstances the COVID-19 vaccine causes adverse reactions. That doesn’t mean the vaccines are dangerous.
Phantom Panic
Often the result of Threat Picking, Phantom Panic, aka moral panic or good old hysteria, takes a real or imagined anecdote and turns it into mainstream political talking points, sometimes with real-world consequences.
Pinball machines, for example, were banned in New York City from 1942 to 1976 because people thought they were a game of chance that corrupted kids by getting them hooked on gambling. The ban fell apart after pinball wizard Roger Sharpe demonstrated that it’s actually a game of skill in a much-publicised event.
Solutions for Rage Baiting
“I don’t want people to get too cynical,” Tobias Rose-Stockwell said. “It’s easy to see the faults in our enemies and not in ourselves. We’re all part of this dance, and my goal with the book is to help us reflect collectively about how we are all doing a weird thing right now.”
Collective reflection is important, but it’s also not your sole responsibility to fix the problem. Social media companies have a lot of power to step in, and there are well-established solutions. “Slowing things down on the platform level could go a huge distance in terms of helping cool us down,” Rose-Stockwell said.
In practice, that can look like adding a prompt that essentially asks “You sure about that?” when the system detects you’re about to post something derogatory. Systems like Twitter’s Community Notes, which lets users add context or corrections to inflammatory tweets, can stop rage before it starts. Social media companies could also throw a wrench in the algorithm, slowing the promotion of a piece of content that’s about to go viral to ensure it’s actually a healthy thing for the platform to promote.
Social media companies are companies, of course, and the incentives aren’t always there to push them to do the right thing. Rage baiting keeps eyes on your app, after all. The government has some tools at its disposal as well.
“On the government side of things, I do think that Section 230 is too broad,” Rose-Stockwell said, referencing the American law that protects tech companies from legal responsibility for the things users post. “I think there needs to be more liability put in to force companies to mitigate harms,” he said.