How Reddit Ends.

How “moderated” online communities mirror the dystopian corporate philosophies that turn “free speech” into intrusive surveillance.

Have you noticed how Reddit presents itself as anti-censorship and pro-“free speech”, but actually functions in ways that are the opposite of free speech?

The insidious aspect of censorship is that you’ll never see what has been deleted.

Here are three ways you might encounter questions of speech and censorship during a typical week spent on Reddit:

1. Arbitrarily chosen, non-elected, unimpeachable “moderators”, who are predictably incompetent at anything resembling “moderating”.

Reddit’s original (and current) demographic is the stereotypical middle-class, cisgender, heterosexual, technologically inclined white American man.

(What about everyone else, though?)

Along with the original Reddit demographic come the biases and distortions that accompany it (e.g. implicit racism/sexism/misogyny, homophobia/transphobia, ableism, ageism, etc.).

Reddit was originally created by Alexis Ohanian and Steve Huffman, a couple of middle-class, white, computer-programming college graduates, mainly for their friends. From there, the Reddit “community” kept growing into today’s Reddit, where everyone is still called “dude”, “bro” and “man” by default. (Hint: where are all the women — half of humanity’s population…? Please ask women, and actually listen.)

This 11-minute mini-documentary is sad in retrospect. Look at what the founders of Reddit seem to believe they’re creating… although it’s really just typical Silicon Valley talking points about how technology (and money, and connections, and influence) can help “change the world”. Didn’t really turn out the way they intended, did it…?

This article (and short video) from the Daily Dot details how Reddit began with, essentially, spam, fake accounts and an “anti-censorship” stance (but again, for a very specific “community”).

In a way, Reddit was built by spam.
Here’s an interesting revelation from Reddit cofounder Steve Huffman: The social news site was built on a lie. Many hundreds of lies, to be more specific, in the form of fake user accounts that Huffman and fellow cofounder Alexis Ohanian used to populate the site in its earliest days.

“You would go to Reddit in the early days, the first couple of months and there’d be tons of fake users,” Huffman says in a video for online educator Udacity.

Through those fake accounts, Huffman and Ohanian submitted high-quality content—the type of articles they wanted read. This “set the tone” for the site as a whole, Huffman said and, at the same time, made it look populated.

“Set the tone,” indeed.

The outcome: anything that “isn’t blatantly racist” is allowed and protected under Reddit’s intentionally vague rules. This is how Reddit became one of the most toxic places on the internet, a site that refused to ban its most bigoted and hate-driven communities until 2015. (For reference, that’s a full decade after Reddit began.)

As you can see if you’ve spent any amount of time on Reddit recently, the bigots never really left. They just use the intentionally lax rules to say the same hatred-fueled things in more “civil” ways. And the moderators generally couldn’t be bothered to intervene because it’s not their day job. There’s a limit to how much unpaid volunteer labour can accomplish, and that limit tends to diminish over time.

Moderators on Reddit are also not voted on or chosen in any way by members. This effectively creates a kind of dictatorship; a subreddit’s members are the unrepresented subjects. Moderators function as totalitarian rulers, immune to criticism for anything that’s not “like, overtly racist”, as Reddit co-founder Steve Huffman says. This conveniently leaves untouched about 99.9% of bigoted and harassing behaviour experienced by people who aren’t members of Ohanian and Huffman’s “peer group”, as he astutely calls them.

2. No discovery mechanism for new subreddits.

Reddit is designed to ensure that new subreddits are invisible. This centralises power among the oldest and most popular communities — a self-perpetuating cycle that creates “super”-subreddits while starving new ones.

The predominant philosophy of “free choice” on Reddit is actually a corporate mantra that “if you don’t like it, go somewhere else.” This is fundamentally anti-democratic in that each subreddit becomes an echo chamber (“hive mind”). Dissent is suppressed by harassment, downvoted to oblivion, or bullied into silence. Or moderators will just delete your posts and comments while making noises about how it’s their playground, they’re free to make up their own rules and enforce them however they like. The Reddit admins generally pretend like they can’t do anything to intervene because “free speech”.

So if you dislike one subreddit, you can just create a new one, right?

Yes, but it’s practically guaranteed no one will ever find it, which completely defeats the purpose.

Sidebars of existing subreddits are the only way to find new subreddits. Sidebars, of course, operate at the whim of moderators who often refuse to include links to new subreddits that could become viable “competition”. This is similar to the way that corporations try to establish vertical monopolies, then strangle or absorb competitors while claiming to support “free markets” and “innovation”.

3. “Free” means “don’t expect us to lift a single finger to help you.”

On Reddit — and on “free” social media generally — selling user data is how the site makes money. Users are only profitable to the extent that we click ads, or give away private data that advertisers want to buy from the social media site’s owners.

This means that if you leave, the social media site loses nothing as long as another user signs up to take your place. It also means that they really don’t care what happens to you, as long as you (or any pair of eyeballs, really) show up tomorrow.

  • Being harassed? Too bad.
  • Moderators operating in bad faith, deleting your posts and/or comments? Who cares.
  • Simple features, like a working Block button or a way to find new subreddits, are missing or broken? Well, it’s free, what did you expect?

And this is how we ended up with our “free” dystopian social media landscape. Free speech. Free choice. We have the Reddit — and Twitter, Facebook, Tumblr, and Google — that we paid for. In a corporate capitalist world, “free” really means worse than worthless, most of the time. Every word we type and every selfie we upload powers social media, at the cost of our time, attention, privacy, emotional health and, increasingly, our freedom in the real world.

We’ve “disrupted” ourselves directly into a dystopian future-present. But at least nobody had to pay for it, right?

Not So Open That Our Brains Fall Out

What if all future social networks were built to protect against misogyny, racism, homophobia/transphobia, ableism and ageism — the way real cities use public health initiatives and epidemiology to prevent the spread of infectious disease?

The Silicon Valley version of “free speech” is both stunningly wrong and remarkably backward considering the tech industry’s pretensions at building the future through advanced software.

Keeping Minds So Open That Our Brains Fall Out

Imagine if every child in school had to “learn all sides” when confronted with the questions of physics: is gravity real? Is the Earth round? Is the Moon really made of cheese? Is fire alive? Do vaccines cause autism? Is homeopathy real? What about the Flying Spaghetti Monster?

These incredibly impactful issues don’t all belong in the realm of physics, but they do share in common the fact that, if every child had to re-prove them in each generation, society would be trapped re-answering questions that already have scientifically valid answers.

Likewise, society at large continues its progress as majority consensus changes. Are people of all genders, sexualities, skin colours and ethnicities equally human? Do all human beings have equal rights? These questions also have only one correct answer.

Silicon Valley’s corporate version of free speech, however, imagines a free speech that favours ignorance-fueled controversy over well-informed opinion.

How Free Speech Becomes Hate Speech

Fear, uncertainty, doubt and hatred instil anxiety and panic. Desire for reassurance and validation provokes feverish conversation. All the while, our earnest outpourings of heartfelt emotion are recorded, bundled and sold to all who will pay. This is why social media is “free”.

The real cost is that “free speech” is redefined to mean “anything that brings more eyeballs to our advertisers”. As we’ve seen, society itself pays the price as misinformation, disinformation, harassment and bullying dominate the collective conversation.

Bringing Our Shared Humanity Back to Social Media

What if social networks operated more like the blood-brain barrier that protects our brains from infection, or like urban epidemiological strategies that keep air and water clean, protecting children and adults from pathogens using simple scientific principles?

Leaving each person alone with a mountain of “free” tools to fight harassment by themselves is like giving a person — who hasn’t learned basic critical thinking skills — books on astronomy and astrology, then telling them that scientific fact is a matter of Likes and Retweets.

If society hopes to advance past our current social-media dark age, ideas of “free speech” will have to move beyond shouting matches over questions that have obvious answers. Hate speech has no place in our present and future world; new social networks need to start from there.

Starpunk: What if the future began with everyone, including you?

Starpunk began way back in 2015, when social networks were more innocent and forgiving places.

Or at least, they seemed to be.

This project began life as an indie sci-fi zine project. The purpose of that project was to teach people about concepts like digital human rights, user privacy — “how to prevent the dystopia you see in cyberpunk sci-fi like Blade Runner and Snow Crash”.

The core of the zine project was to use existing social networks — Reddit, Twitter, and Tumblr (never, ever Facebook) — to get contributions, subscriptions, feedback and ideas directly from fans. Our ultimate goal was to finance short films.

A theme of the project was a vision of the future. An anti-dystopian vision where everyone could live their lives with full recognition of their rights as human beings. This emphasised people who are traditionally excluded, harassed and silenced online: women, LGBT people, the “disabled”, people of all non-extremist religious beliefs (including “no religion”), and neurodiverse people.

Early in 2018, however, it became clear that the plan was hitting obstacles that all led back to a common cause: the existing social networks are poisoned. The poison is not their technology. The poison is their culture, and their culture was set very early on in the social networks’ development.

Here’s our purpose, in a question:

What if our social networks could reflect the future rather than the past?

This social network is based on a foundational set of principles. A recent term that seems to fit is “digital humanism”. To summarise really simply and quickly:

  • Women are valid, complete human beings. Harassment of women is not acceptable.
  • No one else can determine your gender for you. Gender-based harassment is not acceptable.
  • LGBT people are valid, complete human beings.
  • Older people, neurodiverse people, differently-abled people are all complete and valid.
  • Bullying and harassment are not acceptable forms of “free speech” on a social network, any more than they are in real life. “Every person for themselves” is a primitive and unnecessary burden that privileges narcissists and abusers. A thriving community does the opposite — it does not tolerate abusive behaviour, so that the other 99.9% of the community can enjoy sharing their experiences together.

So now the project has evolved from a tiny zine idea to a vision for repairing social networks to be places where all people are welcome.

From “Free” Social Media to Real-World Surveillance State: How Can We Fix It?

It all began innocently enough.

Back in 1995, the web was supposed to change everything. America Online compact discs flew through the air like frisbees. Getting online was as easy as looking in your mailbox, finding a shiny new CD from AOL and spinning it up in your computer. Within minutes, you could be online (like the rest of… America).

Soon enough, there were even free ISPs like NetZero. Information was finally free. A new model for making money emerged where nobody had to pay for access to the web. Humankind would be liberated through endless access to data.

The idea was simple and brilliant: put advertising everywhere, just like magazines and TV shows. It was a familiar approach, since we all get TV shows for free — nobody pays for cable/satellite channels on a per-show basis, right? We’ve long since been trained to put up with commercial breaks. Plus, magazines and newspapers are full of ads, right?

We’re all well-trained to accept ads as normal. We know commercials. People adore a catchy advertising jingle. Millions of people love American football’s Super Bowl more for the clever commercials than for the game itself.

But freedom via the internet isn’t what happened, is it?

And, where’s NetZero now?
And, don’t you pay for cable/satellite (and Netflix)?
And, magazine subscriptions actually aren’t free at all.

None of these products are actually free — and neither is social media.

You sign up to Facebook (Instagram is owned by Facebook) for free. Then you jump onto Twitter, too. Eventually, your friends badger you into joining Snapchat. Maybe Tumblr and Reddit as well. To make it easier to sign up everywhere, you probably create a free Gmail account and activate it using your phone number.

You start building your daily feed of awesomeness. Never be bored again. See what’s happening.

All good so far, right?

There’s been a lot of talk about how ads are evil, but you can’t see anything bad, so they probably aren’t targeting you. It’s not like you have anything to hide, and besides, it’s all free. You might even be running an ad blocker. Smart.

But here’s the real problem:

Logging out and logging back in is a hassle, and sometimes you forget. Facebook and the other social sites are watching you on every page that connects to them — comments sections, login buttons, embedded posts, and so on. They even track you when you’re not logged in.

Then they bundle your data together and sell it to advertisers, or they sell advertisers access to the data that they have about you (it’s the same thing).
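
To make the mechanics concrete, here is a rough sketch, in Python, of what a page with an embedded “Like” button effectively asks your browser to do on every visit. The domain names, the cookie value and the use of the requests library are purely illustrative assumptions; this is not any particular site’s real code.

    # Illustrative only: the request an embedded social widget triggers.
    # Domains and the cookie value below are made up for the example.
    import requests

    req = requests.Request(
        "GET",
        "https://social-network.example/embed/like-button",
        headers={
            # The Referer header reveals exactly which page you were reading,
            # even though you never clicked the widget.
            "Referer": "https://local-news.example/articles/clinic-visit-guide",
        },
        # Your browser automatically attaches the identity cookie it stored
        # the last time you logged in to (or simply visited) the social network.
        cookies={"session_id": "a1b2c3-your-persistent-identifier"},
    ).prepare()

    print(req.headers["Referer"])  # which page you were on
    print(req.headers["Cookie"])   # who you are

One such request is trivial; thousands of them, from news sites, shops and forums, add up to a detailed log of where you go and what you read.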

So how does this become really scary?

It becomes scary when you realise that you probably take your phone everywhere. Your phone can only give you service if it knows where you are. And if you stay logged into FB/Insta/Snap/etc., they know exactly where you are, too.

But again — you’re not doing anything wrong, so nothing to fear, right?

Unless, let’s say, you’re a woman traveling to and from a reproductive services clinic. Or a Latino immigrant to the United States visiting loved ones. Or a black person driving their car anywhere. A Muslim or Jewish person going to or from religious services. Or an LGBT person using a dating app. Or anyone attending a social or political protest of any kind.

Fitness and health-tracking apps — notorious for their inability to keep your data private — are now able to sense, track and report your vital signs, even as you sleep (do you ever take off your FitBit?).

What began as freedom for all has now become surveillance that, just as in the past, disproportionately harms people who were already targeted, bullied, violently harassed, and silenced.

Being “free” online isn’t just about blocking ads anymore. It’s a question of freedom in the real world now, and for many of us, it’s increasingly a matter of life and death.

This is real.

The only way to truly fix the problem is to end it where it began, and start something new. Freedom in any sane society — online or in the real world — only exists when it starts by protecting the most underserved and marginalised people.

In upcoming blog entries, this thread will continue with the following topics:

– there’s no such thing as “targeted surveillance”. Surveillance (for police, anti-terrorism, etc.) only works if you target everyone.

– society only advances when we accept certain facts as true. Gravity is real. The Earth is round. Violent hate speech and intentional misinformation have no place in an advanced civilisation.

– fake “authenticity” in social media only feeds the problem. Notice how anyone who ever mentions getting paid for their work online is accused of “selling out”, “shilling”, a “cash-grab” or worse. This means that the only way to be paid is through advertising — and advertising is surveillance. As people stop clicking ads, surveillance only digs in deeper. The cycle continues until we end up in dystopia. Or, we throw it away and create something new.

The Easiest Way to Defeat Trolls and Keep Yourself Safe on Social Media.

You’re smart. That means that you’re concerned about how social media companies are tracking you everywhere, selling your data, and basically stalking you — even when you’re not logged in.

Stalkers. Bullies. ICE agents.

The problem is far worse for traditionally oppressed populations: women, non-white people, LGBT and gender-nonconforming people.

What can we do about it?

1. Keep Yourself Safe.

Do not use social networks that are based on selling your personal data. If you have used them in the past, stop now.

Support alternative approaches that are anti-surveillance — and beyond that, only use services that do not sell your personal data for the purposes of advertising. Remember: “advertising” online is just a nicer word for “surveillance”.

Our social networking project explicitly does not want your personal data. We don’t track you, so we don’t even have the data in the first place. Your private messages are encrypted. As of 2018.09.10, we’re working on end-to-end encryption — that means that ideally, no one but you and your intended recipient will be able to decrypt the messages at all.
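
For readers who want to see what “end-to-end” means in practice, here is a minimal sketch in Python using the PyNaCl library. The library choice, the key names and the message are illustrative assumptions, not our actual implementation; the point is simply that each person’s private key stays on their own device, so the server only ever handles ciphertext.

    # Minimal public-key encryption sketch with PyNaCl (illustrative only).
    from nacl.public import PrivateKey, Box

    # Each user generates a keypair; the private key never leaves their device.
    alice_key = PrivateKey.generate()
    bob_key = PrivateKey.generate()

    # Alice encrypts a message that only Bob can read, using Bob's public key.
    to_bob = Box(alice_key, bob_key.public_key)
    ciphertext = to_bob.encrypt(b"see you at the march on Saturday")

    # The service stores and relays only `ciphertext`.
    # Bob decrypts with his own private key plus Alice's public key.
    from_alice = Box(bob_key, alice_key.public_key)
    assert from_alice.decrypt(ciphertext) == b"see you at the march on Saturday"

Designed this way, even we, holding the database, would have nothing readable to sell, leak or hand over.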

2. Defeat Trolls.

First, recognise that they’re not “trolls” anymore. Trolls can be harmless and cute. Bullies are not. Bullies intimidate and hurt vulnerable people for fun, to build an imaginary sense of power.

Here’s the easiest way to defeat bullies online. Hint: it’s the same as step one.

The Starpunk social media project does not collect your personal data or ever sell “advertising” (read: intrusive surveillance). Instead, you pay a small fee every month, upfront. You know exactly what you pay, you can easily get a refund whenever you want, and you can help us build the service you want to use.

Bullies, however, are banned. The ban matters, because it hurts: every time a user is banned, they can’t get a refund for that month. If they want to come back, they have to pay again. Every ban also doubles in length: if the first ban is for 24 hours, the next one is 48 hours long. Eventually, a bully will either disappear or just go back to Twitter and Facebook where they belong.
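
As a rough sketch of that escalation rule (the function name and numbers are illustrative, not our production code), the length of a ban simply doubles with each repeat offence:

    # Illustrative sketch of the doubling-ban rule.
    def ban_duration_hours(ban_count, base_hours=24):
        """Length of a user's Nth ban: 24h, 48h, 96h, and so on."""
        return base_hours * (2 ** (ban_count - 1))

    assert ban_duration_hours(1) == 24   # first ban: one day
    assert ban_duration_hours(2) == 48   # second ban: two days
    assert ban_duration_hours(5) == 384  # fifth ban: sixteen days

Combined with the no-refund rule, repeated harassment gets more expensive and more inconvenient every single time.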

See? There’s only one step.

Find out more about the Starpunk project on Twitter: @starpunkzine