The Substack-Nazi Controversy: What's it really about?
Nazism is the flashpoint, but the Substack controversy is really about content moderation, and about where the line sits not just for Substack, but also for you as a writer and a reader.
By now, you've probably heard about Substack's Nazi controversy. Back in November, The Atlantic ran an article about extremist content on Substack, and Substack's unwillingness to deplatform or even demonetize that content. The situation simmered until December 21st, when Substack co-founder Hamish McKenzie posted a Note on Substack defending the decision to leave the Nazi publications up under the guise of freedom of expression.
Since then, the situation has deteriorated further. Big-name publications started applying pressure on Substack to respond and take action on the Nazi Substacks, with publications like Platformer doing their own research and presenting Substack with a list of Nazi-driven sites. Substack relented and took down six sites that made overt calls for violence against Jews--but only after giving its findings to another big-name publication that was more sympathetic to Substack's laissez-faire approach to content moderation. Platformer decided to leave anyway, and there's been a ton of name-calling on both sides of this ideological debate.
So, drama aside, what are the actual issues at play here?
It's not really about Nazis
It is and it isn't about Nazism. It's actually about hateful ideologies and what we think should happen with those ideologies, i.e. content moderation. The controversy is about the role a platform like Substack should play in moderating hateful content that appears on its platform. This isn't a new thing, and it's something we've litigated on the public stage again and again. It's also not as simple as "Nazis good" vs. "Nazis bad." Everyone agrees on the "Nazis bad" part of things, regardless of your approach. (By the way, if you take offense at my use of "everyone" here, the door's right there; you can just fuck right off.) The controversy comes down to what we should do about the Nazi content, not whether Nazis are bad.
Here is Substack's content moderation policy, paraphrased: we don't like hateful content, but we don't feel it's our place to silence it. We think that the "light of day" will be enough to force these hateful ideologies out of broad publication. That's not to say that anything goes: Substack draws the line at hateful content that makes explicit calls to violence. Content that crosses that threshold will be removed.
So, the controversy at its heart is this: some people think that this policy is sufficient and is working. Others think that the policy is too lax, allows too much harmful speech, and should be stricter. There's a lot wrapped up in these two lines of thinking, but I'm going to focus on them because I consider this the primary point of tension in the controversy.
"Sunlight Sanitizes the Toxicity"
Those folks who think that the existing policy is sufficient subscribe to the idea that sunlight sanitizes toxicity. The idea behind this approach is that any hateful content that floats to the surface will be buried underneath an avalanche of objections from "normal" people. It's the idea that people might be willing to express these ideas from behind a proverbial hood, but not in public, where there are real-world repercussions for supporting hateful ideologies.
This approach values freedom of expression over the psychological and physical safety of others. It still cares about psychological and physical safety, but it cares more about the ability of an individual to freely express themselves. Said another way, this approach will accept greater risk to psychological and physical safety so long as doing so means there's less risk of someone being unfairly prevented from freely expressing themselves.
"Some Speech is too Dangerous to Exist"
Those folks who believe that the content moderation policy is too lax subscribe to the idea that some speech is too dangerous to exist. These folks believe that some speech is so dangerous that just being exposed to it can warp your view of the world and radicalize you. They believe that any speech expressing hateful ideology is too much, and that we should do everything in our power to excise it from existence. That includes heavy-handed moderation tactics and rock-solid terms of use spelling out what is and isn't acceptable content.
This approach prioritizes the psychological and physical safety of others over the ability of an individual to freely express themselves. It's the value-inverse of the sunlight-sanitizes approach. Folks in the 'too dangerous to exist' camp still value freedom of individual expression but would sacrifice some of that freedom of expression if it means that there's less psychological and physical harm as a result. This approach will accept greater risk to personal freedom if it means less risk of psychological and physical harm.
Problems with both Approaches
Like most things in life, people take these ideas to extremes, especially in ideological arguments. Folks in the "Sunlight" camp downplay the fact that exposure without moderation is read as endorsement or truth by the general public, and that hateful content draws and builds momentum from that perceived legitimacy. Folks in the "Dangerous speech" camp can go too far in their crusade to never offend anyone, ever, and chill speech to the point that honest communication is no longer possible. While I consider myself "Woke," I also acknowledge that there is a very real thing I call "Toxic Wokeism" that is exactly this. All of this is compounded by the fact that both sides ignore the downsides of their own ideological approach while holding up its benefits and demonizing the other side of the continuum.
Additionally, we've seen the laissez-faire approach to content moderation fail time and time again. Twitter, Facebook, YouTube, and more all started with very lax content moderation policies, and all have skewed back toward more restrictive policies after seeing the harm those practices caused on their platforms. Even Valve, the company behind the game platform Steam, has adopted a more rigid set of content moderation rules in recent years to deal with the shifting lines of what we consider acceptable.
We can expect, no matter the outcome here, that Substack will have to adjust its policy in the future. Legislatures are eyeing Section 230 reform, and the days of platforms like Substack being able to claim it's 'not my problem' are numbered. Since 2016, there's been increasing scrutiny of the impact of lax content moderation practices on mega-platforms, and that will only intensify as time goes on.
Where do you fit in?
The reality is that we are all somewhere on this continuum, and we all have a line of what we consider acceptable and unacceptable when it comes to content moderation. Substack has set a threshold that's about as deep into the "sunlight" side of the continuum as possible. Effectively, what that means in our paradigm here is that Substack is willing to accept a greater chance of psychological and physical harm to people so that it doesn't run the risk of unfairly infringing on someone's freedom of expression.
What you can expect from Substack is that you will see content you think shouldn't exist, no matter who you are. You will see content that skirts the letter of Substack's policy but fully violates its intent, and you'll see Substack side with allowing that content to remain.
But you can also expect more freedom to express yourself. You are less likely to be brigaded by individuals attempting to censor you by flooding Substack with content moderation reports against you. You are likely to get exposure to a wide range of ideas that you wouldn't have otherwise, both good and bad.
What you need to decide, as a writer and a reader, is whether having Substack suggest some truly heinous, hateful, and disgusting content to you is worth the value you get on the other side.
This isn't the only piece
Also, remember that this issue is multi-dimensional. While I approached it from a content moderation perspective, there are other ethical judgments to be made. For example, The Atlantic, which originally published the story, has seen declining readership and has no doubt lost a great deal of business to something like Substack. Was the intent of their article to make Substack better, or to drive folks back into their arms? Does it matter?
It's also important to note that Substack materially benefits from the Nazis and other hateful ideologies on its platform. Substack takes 10% of every transaction. What role does the financial aspect play in Substack's content moderation policies, as opposed to the ethical and moral ones?
What about Byte-Sized Ethics? Where are you going?
For right now, I'm staying here. Personally, I'm not optimistic about Substack's moderation policies. In the last week, I've had anti-LGBT+ Substacks pushed at me, calling for the stoning of anyone in the LGBT+ community. Substack has pushed publications attacking progressives, saying they are destroying the fabric of the West and should be put into concentration camps. Substack has pushed articles at me about how Star Wars was ruined because :checks notes: women, and is actually about how bad wokeism is because the Empire was woke … or something. As time goes on, I expect this to get worse as more and more folks with hateful ideologies learn they can come to Substack and grow.
So why am I staying for right now? I don't think I can grow Byte-Sized Ethics the way that I want on another platform, not yet at least. So for the moment, I will stay despite Substack's policies, not because of them. I will put up with the hateful content being pushed at me for the opportunity to continue to grow.
Then, either I'll get fed up with Substack and leave, I'll grow enough that leaving won't negatively impact me, or Substack will change its policies to be more progressive and I won't have to go anywhere.
Only time will tell.
What about you? Where are you landing with the Substack-Nazi Controversy?
Tbh, I think Notes might force Substack's approach to change. A social media feed and a newsletter service require two very different moderation methods because they spread information in very different ways.