In many of the conflicts I've seen within online communities, a certain pattern of behaviour emerges. To talk about this, let's first distinguish three groups of people commonly found in such conflicts:
- The troublemakers. The trolls, the assholes, the abusers, the bigots.
- The enablers. Friends of the troublemakers, people who silently hold the same beliefs (or less silently, but with more eloquent words), people who feel entitled to knowing exactly what happened, or simply libertarians who disagree with the entire concept of community moderation in the first place.
- The exhausted, fed up with the troublemakers. They'll either silently retreat or eventually snap and explode loudly.
Community moderation and codes of conduct generally focus on the first group, which makes sense because, superficially, they are the people actively causing problems. Upon closer inspection, however, the second group causes a lot more problems than the obvious trolls, because they actively sabotage any moderation effort and cause significant collateral damage to the community as a whole.
Enabling abuse
The stereotypical script plays out as follows: after many previous incidents involving a problematic person, one incident eventually triggers a moderation action. In response, several enablers come to the defense of that person and loudly argue against the moderation action. Heated discussions flare up, dragging in other people and sucking all the air out of the space the community needs to do what it actually set out to do.
Alternatively, if the threshold for moderation action was too high, it can instead happen that a person exhausted by the whole situation snaps. This usually involves mean words and insults directed at the problematic person, and sometimes public mental breakdowns. In that case too, the enablers will come to the defense of the problematic person, reframing the outburst as attack and abuse, and thus reversing the roles of abuser and victim. They will complain to the moderators about the person who snapped, and complain about "unjust" or "unfair" treatment.
In many cases, the actual incident triggering a moderation action is minor, yet the context usually involves a long and complex history of past events. To bystanders without the full context, the action may seem disproportionate, and enablers will happily weaponize this in their opposition to the decision. They will also ask for more and more justification, in the name of "accountability" or "transparency". But no justification will ever be satisfying, because the goal was never to get clarity on the situation in the first place, but to exert social pressure against undesired moderation actions.
An important factor is the dynamic of social status. Most troublemakers don't have much standing in the community, as they tend not to be constructive and productive people overall, and social dynamics in tech communities are usually meritocratic in nature. (Productive problematic people with high status do exist, but are comparatively rare. Read more about them at The Impact of Toxic Influencers on Communities.) Enablers, however, tend to be well-known community members, and they lend their voice and clout to defend abusive behavior. This is highly problematic, because it drags conflicts that should have remained minor issues right into the heart of the community, turning them into something much more serious.
If this story repeats too often over a prolonged period of time, it can erode and reshape a community. People fed up with the bullshit will tend to (more or less silently) leave, while combative people attracted by the drama tend to join. (The latter always exist, but in a functional community they get bored and leave unless they are actually interested in what the community is doing.) This slowly turns the community into a political debate club, fueled by internal conflict.
Abuse happening outside of community spaces
By default, if a community member is being an asshole in a different corner of the internet, that is of little concern from a community moderation perspective: community moderation is scoped to what happens within the community. However, the boundaries of where the community starts and ends are of course very fuzzy, and bad actors can weaponize disagreements about where exactly the line should be drawn. The Contributor Covenant Code of Conduct, a code of conduct commonly used by tech projects, says:
> This Code of Conduct applies within all project spaces, and it also applies when an individual is representing the project or its community in public spaces.
The Covenant ignores one major point: a community member being blatantly sexist on Twitter, for example, cannot be ignored, even if the posts in question are unambiguously outside the community boundaries. Not taking action would send the wrong signal: "We are tolerating a known sexist in our space, as long as they're nice while we're looking". Marginalized people will hear such a message loud and clear, and will accordingly consider such a community less safe.
Such decisions get even harder when they involve interpersonal conflicts, things said in semi-private, or things said a long time ago. But that is not an excuse to dismiss all reports of that kind as "out of scope 🤷" (or even worse, as "we're apolitical") without further consideration.
What to do about it
The first step is to acknowledge this phenomenon and decide to expand the scope of what moderation means. It is important to note that in the long run, this will cause less work, not more, because the actual bad actors become a lot easier to deal with. So, what does this mean in practice?
- Moderation is not equally accountable to everyone. Accountability is important, but it does not mean that fringe members are entitled to know everything that happened in full detail. Of course, there should be space to discuss how well a moderation case was handled. However, it is crucial to shield that discussion from bad actors abusing the process for their own purposes.
- Act before someone snaps. When insults go flying at a person who is already on the moderation watchlist, that is a clear indication that moderation action should have happened sooner. Moderation should be proactive and not wait until problems blow up in public.
- Consider enablers of abuse in the Code of Conduct. Defending hateful behavior is rarely okay, and this should be reflected in the community rules.
- Handle reports from outside of the official community spaces. Doing this has nothing to do with "thought policing"; instead, it is necessary to keep the community healthy. At the same time, it is important to acknowledge that different rules apply here, and that every case is unique.