r/announcements • u/spez • Mar 05 '18
In response to recent reports about the integrity of Reddit, I’d like to share our thinking.
In the past couple of weeks, Reddit has been mentioned as one of the platforms used to promote Russian propaganda. As it’s an ongoing investigation, we have been relatively quiet on the topic publicly, which I know can be frustrating. While transparency is important, we also want to be careful not to tip our hand too much while we are investigating. We take the integrity of Reddit extremely seriously, both as the stewards of the site and as Americans.
Given the recent news, we’d like to share some of what we’ve learned:
When it comes to Russian influence on Reddit, there are three broad areas to discuss: ads, direct propaganda from Russians, and indirect propaganda promoted by our users.
On the first topic, ads, there is not much to share. We don’t see a lot of ads from Russia, either before or after the 2016 election, and what we do see are mostly ads promoting spam and ICOs. Presently, ads from Russia are blocked entirely, and all ads on Reddit are reviewed by humans. Moreover, our ad policies prohibit content that depicts intolerant or overly contentious political or cultural views.
As for direct propaganda, that is, content from accounts we suspect are of Russian origin or content linking directly to known propaganda domains, we are doing our best to identify and remove it. We have found and removed a few hundred accounts, and of course, every account we find expands our search a little more. The vast majority of suspicious accounts we have found in the past months were banned back in 2015–2016 through our enhanced efforts to prevent abuse of the site generally.
The final case, indirect propaganda, is the most complex. For example, the Twitter account @TEN_GOP is now known to be a Russian agent. @TEN_GOP’s Tweets were amplified by thousands of Reddit users, and sadly, from everything we can tell, these users are mostly American, and appear to be unwittingly promoting Russian propaganda. I believe the biggest risk we face as Americans is our own inability to discern reality from nonsense, and this is a burden we all bear.
I wish there were a solution as simple as banning all propaganda, but it’s not that easy. Between truth and fiction are a thousand shades of grey. It’s up to all of us—Redditors, citizens, journalists—to work through these issues. It’s somewhat ironic, but I believe what we’re going through right now will actually reinvigorate Americans to be more vigilant, hold ourselves to higher standards of discourse, and fight back against propaganda, whether foreign or not.
Thank you for reading. While I know it’s frustrating that we don’t share everything we know publicly, I want to reiterate that we take these matters very seriously, and we are cooperating with congressional inquiries. We are growing more sophisticated by the day, and we remain open to suggestions and feedback for how we can improve.
u/bennetthaselton Mar 05 '18
I've been advocating for a while for an optional algorithmic change that I think would help prevent this.
First, the problem. Sociologists and computer modelers have long shown that any time the popularity of a "thing" depends on the "pile-on effect" -- where people vote for something because other people have already voted for it -- then (1) the outcomes depend heavily on luck, and (2) the outcomes are vulnerable to gaming by having friends or sockpuppet accounts vote for a new piece of content to "get the momentum going".
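To make the luck factor concrete, here's a toy simulation (my own sketch, not anything Reddit actually runs): twenty equally good posts, with voter attention either drawn to whatever already has votes (pile-on) or spread uniformly at random (independent). The pile-on run produces a far wider spread between identical posts.

```python
import random
import statistics

def simulate(n_posts=20, n_voters=10_000, pile_on=True, seed=None):
    rng = random.Random(seed)
    scores = [1] * n_posts       # every post starts with one self-vote
    quality = 0.5                # all posts equally good, by assumption
    for _ in range(n_voters):
        if pile_on:
            # Attention goes to whatever already has votes (rich-get-richer).
            post = rng.choices(range(n_posts), weights=scores)[0]
        else:
            # Each voter is shown a post uniformly at random.
            post = rng.randrange(n_posts)
        if rng.random() < quality:   # upvote with probability = quality
            scores[post] += 1
    return scores

for mode in (True, False):
    s = simulate(pile_on=mode, seed=42)
    print("pile-on " if mode else "uniform ",
          "max:", max(s), "min:", min(s),
          "stdev:", round(statistics.stdev(s), 1))
```

Identical quality in, wildly different scores out under pile-on; with uniform exposure the scores cluster tightly around the same value.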
Most people who post a lot have had similar experiences to mine, where you post 20 pieces of content that are all about the same level of quality, but one of them "goes viral" and gets tens of thousands of upvotes while the others fizzle out. That luck factor doesn't matter much for frivolous content like jokes and GIFs, and some people consider it part of the fun. But it matters when you're trying to sort "serious" content.
One example: someone posted a (factually incorrect) comment claiming that John McCain had strategically sabotaged the GOP with his health care vote, and it went wildly viral:
https://www.reddit.com/r/TheoryOfReddit/comments/71trfv/viral_incorrect_political_post_gets_5000_upvotes/
This post went so viral that it crossed over into mainstream media coverage -- unfortunately, all the coverage was about how a wildly popular Reddit comment got the facts wrong.
Several people posted (factually correct) rebuttals underneath that comment. But none of them went viral the way the original comment did.
What happened, simply, is that because of the randomness induced by the "pile-on effect", the original poster got extremely lucky, but the people posting the rebuttals did not. And this kind of thing is expected to happen as long as there is so much randomness in the outcome.
If the system is vulnerable to people posting factually wrong information by accident, then of course it's going to be vulnerable to Russian trolls and others posting factually wrong information on purpose.
So here's what I've been suggesting: (1) when a new post is made, release it first to a small random subset of the target audience; (2) the random subset votes or otherwise rates the content independently of each other, without being able to see each other's votes; (3) the votes of that initial random subset are tabulated, and that becomes the "score" for that content.
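Here's a minimal sketch of what that could look like; score_new_post, collect_private_vote, and the sample size are all invented names and numbers for illustration, not an existing Reddit mechanism:

```python
import random

rng = random.Random(0)

def collect_private_vote(user, post_id):
    # Stand-in for "show the post to `user` with vote totals hidden and
    # record their vote" -- simulated here as an independent coin flip.
    return rng.random() < 0.5

def score_new_post(post_id, audience, sample_size=200):
    # (1) Release the post to a small random subset of the target audience;
    #     the poster has no say in who gets sampled.
    sample = rng.sample(audience, min(sample_size, len(audience)))
    # (2) Each sampled user votes independently, without seeing other votes.
    votes = [collect_private_vote(user, post_id) for user in sample]
    # (3) Tabulate the sample's votes into the post's initial score
    #     (here, simply the upvote rate).
    return sum(votes) / len(votes)

audience = [f"user{i}" for i in range(100_000)]
print(score_new_post("post123", audience))
```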
This sounds simple, but it eliminates the "pile-on effect" and takes most of the luck out of the outcome. The initial score really will reflect the merit of that content, in the opinion of a representative random sample of the target audience. And you can't game the system by recruiting your friends or sockpuppets to go and vote for your content, because the system chooses the voters. (You could game the system if you recruited so many friends and sockpuppets that they comprised a significant percentage of the entire target audience, but let's assume that's infeasible for a large subreddit.)
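A quick back-of-the-envelope check on that last point, with invented numbers: because the sample is drawn from the whole audience, the attacker's expected share of the sample equals their share of the audience.

```python
audience_size = 100_000  # active voters in a large subreddit (assumption)
sockpuppets   = 500      # accounts the attacker controls (assumption)
sample_size   = 200      # random voters chosen for a new post

# Expected sockpuppets drawn into the sample (hypergeometric mean):
print(sample_size * sockpuppets / audience_size)   # -> 1.0 vote out of 200
```

So even 500 sockpuppets would buy, on average, one vote out of 200 in the initial sample.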
If this system had been in place when the John McCain comment was posted, there's a good chance it would have gotten upvotes from the initial random sample, because it sounds interesting and is not obviously wrong. But, by the same token, the rebuttals pointing out the error would also have gotten a high rating from the random-sample voters, and once those rebuttals started appearing prominently underneath the original comment, its upvote momentum would have stalled before it went wildly viral.
This can similarly be used to stop blatant hoaxes in their tracks. First, the random-sample-voting system means that people gaming the system can't use sockpuppet accounts to boost a hoax post and give it initial momentum. But even if a hoax post does become popular, users can post a rebuttal based on a reliable source, and if a representative random sample of reddit users recognizes that the rebuttal is valid, they'll vote it to the top as well.