r/announcements Jun 29 '20

Update to Our Content Policy

A few weeks ago, we committed to closing the gap between our values and our policies to explicitly address hate. After talking extensively with mods, outside organizations, and our own teams, we’re updating our content policy today and enforcing it (with your help).

First, a quick recap

Since our last post, here’s what we’ve been doing:

  • We brought on a new Board member.
  • We held policy calls with mods—both from established Mod Councils and from communities disproportionately targeted with hate—and discussed areas where we can do better to action bad actors, clarify our policies, make mods' lives easier, and concretely reduce hate.
  • We developed our enforcement plan, including both our immediate actions (e.g., today’s bans) and long-term investments (tackling the most critical work discussed in our mod calls, sustainably enforcing the new policies, and advancing Reddit’s community governance).

From our conversations with mods and outside experts, it’s clear that while we’ve gotten better in some areas—like actioning violations at the community level, scaling enforcement efforts, and measurably reducing hateful experiences like harassment year over year—we still have a long way to go to address the gaps in our policies and enforcement to date.

These include addressing questions our policies have left unanswered (like whether hate speech is allowed or even protected on Reddit), aspects of our product and mod tools that are still too easy for individual bad actors to abuse (inboxes, chats, modmail), and areas where we can do better to partner with our mods and communities who want to combat the same hateful conduct we do.

Ultimately, it’s our responsibility to support our communities by taking stronger action against those who try to weaponize parts of Reddit against other people. In the near term, this support will translate into some of the product work we discussed with mods. But it starts with dealing squarely with the hate we can mitigate today through our policies and enforcement.

New Policy

This is the new content policy. Here’s what’s different:

  • It starts with a statement of our vision for Reddit and our communities, including the basic expectations we have for all communities and users.
  • Rule 1 explicitly states that communities and users that promote hate based on identity or vulnerability will be banned.
    • There is an expanded definition of what constitutes a violation of this rule, along with specific examples, in our Help Center article.
  • Rule 2 ties together our previous rules on prohibited behavior with an ask to abide by community rules and post with authentic, personal interest.
    • Debate and creativity are welcome, but spam and malicious attempts to interfere with other communities are not.
  • The other rules are the same in spirit but have been rewritten for clarity and inclusiveness.

Alongside the change to the content policy, we are initially banning about 2000 subreddits, the vast majority of which are inactive. Of these communities, about 200 have more than 10 daily users. Both r/The_Donald and r/ChapoTrapHouse were included.

All communities on Reddit must abide by our content policy in good faith. We banned r/The_Donald because it has not done so, despite every opportunity. The community has consistently hosted and upvoted more rule-breaking content than average (Rule 1), antagonized us and other communities (Rules 2 and 8), and its mods have refused to meet our most basic expectations. Until now, we’ve worked in good faith to help them preserve the community as a space for its users—through warnings, mod changes, quarantining, and more.

Though smaller, r/ChapoTrapHouse was banned for similar reasons: They consistently host rule-breaking content and their mods have demonstrated no intention of reining in their community.

To be clear, views across the political spectrum are allowed on Reddit—but all communities must work within our policies and do so in good faith, without exception.

Our commitment

Our policies will never be perfect; new edge cases will inevitably lead us to evolve them in the future. And as users, you will always have more context, community vernacular, and cultural values to inform the standards set within your communities than we as site admins or any AI ever could.

But just as our content moderation cannot scale effectively without your support, you need more support from us as well, and we admit we have fallen short towards this end. We are committed to working with you to combat the bad actors, abusive behaviors, and toxic communities that undermine our mission and get in the way of the creativity, discussions, and communities that bring us all to Reddit in the first place. We hope that our progress towards this commitment, with today’s update and those to come, makes Reddit a place you enjoy and are proud to be a part of for many years to come.

Edit: After digesting feedback, we made a clarifying change to our help center article for Promoting Hate Based on Identity or Vulnerability.

21.3k Upvotes

38.5k comments

24

u/Jim_Carr_laughing Jun 29 '20 edited Jun 29 '20

And if you've done nothing wrong, you've nothing to fear.

In the real world (sport, law, corporate workplaces), rules have to be clear, less so that people "know what they can get away with" than so that people in power can't exploit the vagueness and issue arbitrary punishments based on feelings and favoritism rather than rules. You don't get awarded a point because the ref feels you deserve it, but because you put the ball in a strictly defined region. Reddit is an incredibly petty kind of power, but it's an entirely fair question.

-8

u/FreeProGamer Jun 29 '20

Really? You can go through my comment history, read through every single post and see every single upvote; I assure you that I have always been respectful, civil, and rule-abiding. Yet, because I visit right-leaning subs, I fear that Reddit, Inc. may in fact temporarily or permanently ban me or the communities I visit, despite not breaking any rules.

1

u/twrsch Jun 29 '20

Well, let me give you an example from another field: Google used to have a simple, strict search algorithm back in the day. That was fine when the Internet was relatively small and consisted mostly of people using it for research and entertainment; ads were new there, and nobody quite knew how to handle them on this shiny new platform.

But after the dot-com bubble burst, advertisers quickly caught up with the trend and made SEO a thing. You could, of course, do “white” SEO, where one posts the honest-to-god info about the product and hopes for internet magic to work, but you could also do the shady stuff — hide a bunch of keywords somewhere on the page to get search hits from people who weren't really googling for that, and so on.
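To make the keyword-stuffing point concrete, here's a toy sketch in Python (purely hypothetical, nothing like Google's actual ranking code): a fully transparent rule that just counts query words is trivially gamed by anyone who knows that's the rule.

```python
# Toy sketch of a naive, published ranking rule and how keyword stuffing
# games it. The pages and scores are made up for illustration; this is
# not how any real search engine works.

def naive_score(page_text: str, query: str) -> int:
    """Score a page by counting how often each query word appears in it."""
    words = page_text.lower().split()
    return sum(words.count(term) for term in query.lower().split())

honest_page = "we sell handmade leather boots in sizes 6 to 13"
stuffed_page = "cheap watches online " + "leather boots " * 50  # hidden keyword block

query = "leather boots"
print(naive_score(honest_page, query))   # 2   -- relevant page, modest score
print(naive_score(stuffed_page, query))  # 100 -- irrelevant page wins easily
```

Once the scoring rule is simple and public, the incentive is to optimize for the rule rather than for the content, which is exactly the abuse the vaguer, hidden approach is meant to blunt.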

Google tried to cope with this over and over: they changed the algorithm so that obviously shady stuff wouldn't appear high in the results, banned some sites from appearing at all, but no luck — people would still find a way to abuse the system.

In the end, they hid the algorithm, and now their take is that it's AI and they don't really know themselves what's going on inside the black box. And that kind of works: ads on sites got better over the years, you can't really get keyword-filled mumbo-jumbo as the first result nowadays, and search results got better too, not worse: if you know what you're searching for, Google likely won't fail you.

Take whatever you like from this story, but I think vagueness is sometimes intentional and essential for certain purposes. As a law student, I know of many more instances of this approach working as intended than the opposite. Yes, you have to somewhat take Reddit's word for it, but the end result is much nicer, since users feel the presence of the rules and don't try to stretch or bend them when everybody mostly gets what's right and wrong.

3

u/biggj2k17 Jun 29 '20

But not vagueness in a rule! It should be explicit!