RE: Censorship gone awry on Reddit: the aftermath of our r/science AMA



Thanks for the thoughtful post! I'm a mod for some subs on Reddit and noticed some confusion about how the site works in your post and follow-up comments, so I thought it might be helpful to explain how Reddit handles these processes and what is specific to /r/science.

  • A removed comment is not a shadowban. Any time you have a comment removed for any reason on Reddit, it usually still shows up to you but not to others. But neither you nor /u/miserlou was shadowbanned, so no worries there! Banning and shadowbanning are very specific terms within Reddit, and neither applies in this context.

  • Automod is not specific to /r/science. It is a mechanism now built into Reddit that lets subs specify particular phrases or websites that will get a comment automatically pulled, for example racial slurs or known spam sites. It will also auto-pull really short comments (e.g. if you just say "ok") or comments with a lot of links. The link issue is not something the /r/science moderators can change, and it is something you may run into on every sub. One way around it is to message the mods when you post a link-heavy comment so it can be manually released. The amount of karma you've accumulated may also matter: low karma will get a comment auto-pulled on many big subs. (See the rough sketch just after this list.)

  • Moderators cannot change your comment or where its links point. Either the entire comment stays or we remove it; there is no in-between option.

  • Each sub has its own rules for content. You can't post cat photos in /r/dogs, for example, because the point of that sub is to curate photos of dogs. /r/science is (in)famous for its strict moderation rules: no jokes/pop culture, no hate speech, no pseudo-science, no fights, etc. Even Redditors who don't frequent the sub are well aware of this and know that if they run afoul of those rules their comment will be removed. For better or worse, this sub-by-sub set of rules is the normative culture of Reddit, and people expect to code-switch.

  • /r/science is also unique because, to enforce this strict conversational curation, they have over 1,000 moderators. Most only have permission to remove comments while only a handful have additional permissions. This helps handle hot-button posts that garner tons of racist or sexist comments (for example), but it does slow down response time for actions that require higher permissions (such as approving removed comments). Most subs simply aren't big enough to warrant that kind of team. Higher-level mods periodically review the lower-level mods' activity and strip permissions from anyone moderating in ways that aren't in line with the sub's rules. I can't tell you definitively whether any of your comments were grabbed by low-level mods vs. automod, but I don't see any rule violations, and I do see links, so automod is the logical assumption.

  • Some subs do alert users when their comments are removed but in my experience the subs with millions of users do not. This is mostly due to a volume issue. In small subs it is easy for me to give people a heads-up and manage responses. But if a post has a hundred removed comments due to actual rule violations (i.e. valid removals) it would take a lot of manpower to respond to each query or response.

  • I also see from a comment that you posted a text post to /r/science, but their rules do not allow that, and I suspect it was removed by auto-mod. Many subs do allow text posts, and figuring out where to target your content is part of getting to know platforms and their sub-cultures.
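
To make the automod bullet above a bit more concrete, here is a rough Python sketch of the kinds of checks it runs. This is only an illustration: the phrase list, link limit, karma floor, and length cutoff are all invented for the example, and real AutoModerator rules are configured per sub (I obviously can't show /r/science's actual config).

```python
import re

# Rough sketch of the kinds of checks automod runs on each new comment.
# The phrase list and thresholds below are invented for illustration;
# real AutoModerator rules are configured per subreddit and vary widely.

BANNED_PHRASES = ["example-slur", "knownspamsite.example"]  # hypothetical
MAX_LINKS = 3            # hypothetical link limit
MIN_COMMENT_KARMA = 10   # hypothetical karma floor used by many big subs
MIN_LENGTH = 10          # very short comments (e.g. just "ok") get pulled

LINK_RE = re.compile(r"https?://\S+")

def automod_decision(body: str, author_comment_karma: int) -> str:
    """Return 'remove' or 'keep'. Note there is no in-between outcome:
    the comment is either pulled whole or left whole."""
    text = body.lower()
    if any(phrase in text for phrase in BANNED_PHRASES):
        return "remove"   # matched a banned phrase or known spam site
    if len(LINK_RE.findall(body)) > MAX_LINKS:
        return "remove"   # too many links; a human mod has to approve it manually
    if author_comment_karma < MIN_COMMENT_KARMA:
        return "remove"   # low-karma account posting on a big sub
    if len(body.strip()) < MIN_LENGTH:
        return "remove"   # e.g. a bare "ok"
    return "keep"

# A link-heavy but otherwise fine answer gets pulled until a mod releases it:
print(automod_decision("Our results: " + "https://example.org/fig1 " * 5, 500))  # remove
# An ordinary comment sails through:
print(automod_decision("Fascinating result, thanks for doing this AMA!", 500))   # keep
```

The first example call is roughly your situation: nothing rule-breaking, just enough links to keep the comment pulled until a human approves it.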

All of this leads to very interesting debates for platforms from Facebook to Steem. Do sites have obligations to deal with harassment, violent threats, and illegal content? If so, how do you build that moderation into the system without censoring inappropriately?

What about less obviously problematic content? One reason /r/science says they have such strict rules is that they've worked with communications scholars who showed through peer-reviewed research that pseudoscience and/or aggressiveness in comments meaningfully impacts how readers interpret the science in the associated posts. This research is why most major science news outlets have removed their commenting sections, btw. So what is the appropriate way to cultivate discussion that doesn't feed pseudoscience and/or science dismissal?

I certainly don't have the answer. Reddit is also very frustrating in that we moderators have been begging them for better moderation tools for years. Modmail is awful. Automod is a very blunt instrument, as you discovered. It is hard to sort through notifications. The site is simply not well set up for moderation, yet that responsibility falls on the shoulders of volunteers. If your experiences frustrated you, please consider dropping an email to the Reddit admins telling them to give moderators halfway decent mod tools! :)



Thanks @liminalphase for your inside knowledge and thorough explanation. Also welcome to Steem!

I've reread my post with the information you provided in mind. It sounds like the comments that were only visible to the original poster were removed by auto-mod, triggered by hyperlinks. I am assuming auto-mod-removed comments were never publicly displayed, i.e. auto-mod pulled them immediately and not after they had already been public for some amount of time.

Given your explanation, it sounds like the comment that publicly showed "comment removed" (but showed in its entirety to the eLife_AMA account that posted it) was not pulled by auto-mod, since auto-mod would have hidden the comment immediately rather than rendering it as "comment removed" at a later time. Therefore, I'm assuming there was human intervention in this instance.

Of course, human intervention, especially from a moderator, would be surprising, as this was an officially sanctioned AMA post on r/science. In fact, it was coordinated by a Senior Press Officer at eLife in conjunction with the r/science moderators. r/science cared deeply about the popularity of these posts, so much so that they would delete other posts from around the time of the AMA to help the AMA rank highly and achieve virality. Apparently, Reddit changed the rules to prevent this delete-posts-to-favor-another-post tactic, and r/science subsequently ceased the whole AMA program.

"they have over 1,000 moderators. Most only have permission to remove comments while only a handful have additional permissions."

This could mean a low-level moderator caused the "comment removed" issue. Our study is somewhat provocative, and perhaps a moderator was abusing their role.

I believe the comment shown above, where I detail the censorship, was eventually approved by a mod but is now removed again.

Is that possible? In other words, could a comment be hidden by auto-mod, then made publicly visible by a mod, and then return to being hidden without noting "comment removed"?
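
To make the question concrete, here is a toy Python model of the visibility behavior as I understand it from your explanation. This is purely an illustration of the states involved, not Reddit's actual implementation; the sequence I'm asking about would be remove, then approve, then remove again.

```python
# Toy model of the comment-visibility states being discussed here.
# This is only how I picture it from the explanation above, not how
# Reddit is actually implemented.

class Comment:
    def __init__(self, author: str, body: str):
        self.author = author
        self.body = body
        self.removed = False          # set by automod at posting time, or by a mod later

    def remove(self) -> None:         # automod or a mod pulls the comment
        self.removed = True

    def approve(self) -> None:        # a mod manually releases it
        self.removed = False

    def visible_to(self, viewer: str) -> bool:
        # A removed comment still renders for its own author, which is why
        # a removal can look like a shadowban even though technically it isn't.
        return (not self.removed) or viewer == self.author

c = Comment("eLife_AMA", "Answer containing several links...")
c.remove()                                                   # pulled at submission (automod)
print(c.visible_to("eLife_AMA"), c.visible_to("public"))     # True False
c.approve()                                                  # later approved by a mod
print(c.visible_to("public"))                                # True
c.remove()                                                   # nothing prevents a second removal
print(c.visible_to("public"))                                # False
```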

"Any time you have a comment removed for any reason on Reddit, it usually still shows up to you but not to others. But neither you nor /u/miserlou was shadowbanned, so no worries there!"

I guess this was not a technical "shadow ban", but it certainly seems to fit the general definition of one (hiding content from the public, while pretending to its original poster that it's public).

"Each sub has its own rules for content."

Good to know, but I don't think our comments (or Miserlou's) violated any rules. In fact, our comments were in essence pre-approved, given that they were the purpose of the AMA.

"Reddit is also very frustrating in that we moderators have been begging them for better moderation tools for years."

Welcome to a platform where, if something is broken, you can potentially do something about it (beyond begging). Of course, there are many problems with Steem, but I think they are generally much more solvable, since the blockchain architecture allows permissionless innovation on top of the open database.
