Preventing Harassment and Ejecting Bullies From an Ethical Social Network

A young woman on Instagram who goes by the username scarebrat has been quite badly bullied over the past week.

She’s being harassed by thousands of users across social media (including Facebook and Twitter) after someone “exposed” her and “called her out” for changing her appearance in Instagram selfies.

Specifically, scarebrat has been accused of altering the shape of her eyelids in an attempt to seem “Asian”. Speculation among the social-media bullies centres on assumptions that she’s courting attention from K-pop fans whose preference runs toward women who appear Korean.

This is relevant to social network governance in regard to intent.

– Do you ban a person for problematic behaviour?

– Do you ban people for bullying a person engaging in problematic behaviour?

– Do you take no action at all under the umbrella of “free speech for all”?

Do you ban a person for problematic behaviour?

In the case of Instagram user scarebrat, she seems to be experimenting with her personal identity. This is unremarkable for people of any age, even if some consider it problematic. So it’s not grounds for banning or any other form of discipline.

This is different from behaviours that are intended to insult or antagonise others. Blackface, for example — as seen in the cosplay community — is never acceptable, given its specific use to demean and dehumanise an entire ethnicity of people. For scarebrat, at worst, she seems to be altering her eyes to appear more attractive rather than less.

“What if someone uses blackface to appear more attractive?” No well-known person in the history of Western beauty has used blackface to enhance their attractiveness. Skin lightening among the African diaspora, although varying in prevalence from place to place, is almost universally frowned upon due to its association with internalised racism.

On the other hand, Asian women in popular culture often alter their eye shape to appear more round for the sake of aesthetics. So it’s far more likely that a person who enjoys K-pop might want to look like their favourite idols by adopting similar beauty practices.

Context is always a key determinant of fact.

In this case, the problematic behaviour, in context, is a matter of personal expression rather than cynical cultural appropriation or racist intent. (Attempting to infer intent always runs the risk of mind-reading, which is cause for caution; asking the user directly would be preferable to second-hand speculation about her intentions.)

Do you ban people for bullying a person engaging in problematic behaviour?

Yes.

Posting someone’s personal information for the sake of insulting them, damaging their reputation or harassing them (their name, username, photos or likeness) is a clear violation of privacy.

In the case of scarebrat, those who posted information about her in order to harass or threaten her would have their posts on the topic deleted and would possibly be temporarily banned.

Even worse are people who posted selfies of themselves “Asian flexing on scarebrat” as if to prove their “true Asianness” at her expense — and pandering for popularity at the same time. That type of behaviour would earn a temporary ban with no need for further deliberation.

Do you take no action at all under the umbrella of “free speech for all”?

One key difference of this social network is that “free speech” comes second to a culture where harassment and bullying are strictly forbidden.

This approach elevates our members’ sense of safety and ability to express themselves without undue fear of harassment or bullying. Information posted online tends to persist; a bullying campaign can live on in web searches for years after the initial incident. To protect users from bullies and harassers, it is necessary to take action and remove offending content as the situation and the individuals or groups involved dictate. This maintains clear boundaries for community members, with enforcement that is flexible yet unambiguous and sensitive to each specific case.
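The enforcement logic described in the three questions above can be sketched as a simple decision rule. This is a hypothetical illustration, not a real moderation API; the category names and action strings are assumptions invented for clarity.

```python
from enum import Enum, auto

class Behaviour(Enum):
    PERSONAL_EXPRESSION = auto()   # e.g. experimenting with one's own appearance
    DOXXING_HARASSMENT = auto()    # posting someone's info to insult, threaten or shame
    PILE_ON_MOCKERY = auto()       # e.g. "flexing" selfies made at a target's expense

def moderate(behaviour: Behaviour) -> list:
    """Return the moderation actions for a reported behaviour (illustrative only)."""
    if behaviour is Behaviour.PERSONAL_EXPRESSION:
        return []  # identity experimentation is not grounds for discipline
    if behaviour is Behaviour.DOXXING_HARASSMENT:
        return ["delete_posts", "consider_temporary_ban"]
    if behaviour is Behaviour.PILE_ON_MOCKERY:
        return ["delete_posts", "temporary_ban"]  # no further deliberation needed
    return ["escalate_to_human_review"]  # default: a person decides, in context

assert moderate(Behaviour.PERSONAL_EXPRESSION) == []
assert moderate(Behaviour.PILE_ON_MOCKERY) == ["delete_posts", "temporary_ban"]
```

The point of the default branch is the essay’s own: context decides, so anything the rules don’t clearly cover goes to a human rather than to an automated verdict.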

Social media works exactly as its creators intended — and that’s why it keeps getting worse. | Part 3

Reddit is “the front page of the internet” — as long as you’re a cisgender, heterosexual white male age 18-49 who speaks fluent American English.

Part two of this three-part series examined how the web’s changing demographics are trapped in the false ideals of Silicon Valley. In this final entry in the series, we see how these dynamics are self-evident on Reddit, which calls itself “the front page of the internet”.

Black Panther, Ghost in the Shell 2017, and the Reddit Herd-Mind

This social media project began in March 2018, soon after the groundbreaking superhero sci-fi film Black Panther was released. If you decided to create a topic about Black Panther in Reddit’s most popular science fiction community, you’d quickly find the conversation deleted by the moderators.

Why?

Because racists couldn’t stand the idea of talking about one of the most culturally important, critically praised, financially successful films in recent history. Instead, the comments section was immediately deluged by actively and passively racist comments.

We all know how this game works by now. If blatant racism isn’t allowed, use coded language with the same meaning.

  • “On all sides” (shifting blame, also known as false equivalence).
  • “[Nazis and violent racists are] very fine people.”
  • “I’m a [white] nationalist.”

As long as it’s not blatantly racist

In relation to Black Panther, the passive-aggressive comments operated along the lines of “Black Panther isn’t sci-fi, it’s fantasy — so it doesn’t belong here.” Or, “Black Panther is a superhero movie, and superhero films aren’t really sci-fi — so it doesn’t belong here.”

The description created by Reddit’s sci-fi community itself says this: “Science Fiction, or Speculative Fiction if you prefer. Fantasy too. Beware of the Leopard.” In other words, claiming that Black Panther “doesn’t belong” because it’s a “fantasy” or “superhero — not sci-fi” is coded language for something else.

That something else is, obviously, racism.

Combine passive and active racism with Reddit’s overwhelmingly (nearly 70%) male demographics. What do you get? The typical cisgender, assumed white and heterosexual, heteronormative “dude-bro-man” effect — women, nonwhite, LGBT and gender-nonconforming people range from scarce to nonexistent.

This isn’t the first time, either. Remember the whitewashed travesty that was Ghost in the Shell 2017, directed by Rupert Sanders, inexplicably starring Scarlett Johansson as a Japanese character named Motoko Kusanagi? Any mention of whitewashing in Reddit’s Ghost in the Shell community was met with similar attempts at diversion and racist trash-talk: “But, Kusanagi was drawn to look white in the anime and manga!” “But, in the future, ethnicity might be different!” “Japanese people love Westerners (meaning: white people)!” “There’s no reason why Kusanagi would need to be Japanese at all!” “Kusanagi was a robot — so she had no ethnicity at all!” Perhaps the best excuse was that “the film’s story made her white, so it’s fine!”

Deepest sympathies to the unfortunate concept artist who thought GITS2017 was such a brilliant idea that he insisted on having his name prominently displayed on the side of a building in the film. At least he can console himself that no one went to see the movie in any case. (Except the five rabid ScarJo fans on Reddit who adore Sanders’ whitewashed, Americanised “vision” for Ghost in the Shell, of course.)

Automated Harassment

It was during these conversations that a worse fact became clear: Reddit enables users to create bots that will literally stalk other users and post automated harassing comments across the entire site. Anywhere the target posts, a bot will show up within a few minutes and reply. The “block” button is actually just a “mute” button, which allows the harasser to see and respond to all of their victim’s posts or comments — but the victim can’t see what’s being said behind their back. In other words, the apparently simple feature somehow manages to be worse than useless.
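The difference between a real block and Reddit’s mute-in-disguise can be made concrete. The sketch below is an illustrative model, not Reddit’s actual code: “mute” only filters what the victim sees, while a true block would prevent the harasser from interacting at all.

```python
class Feed:
    """Toy social feed illustrating mute-vs-block semantics (hypothetical model)."""

    def __init__(self):
        self.posts = []    # list of (author, text)
        self.mutes = {}    # viewer -> set of authors hidden from that viewer
        self.blocks = {}   # author -> set of users barred from interacting with them

    def post(self, author, text):
        self.posts.append((author, text))

    def mute(self, viewer, target):
        # What Reddit's "block" actually does: hides content from the viewer only.
        self.mutes.setdefault(viewer, set()).add(target)

    def block(self, author, target):
        # What a real block would do: bars the target from interacting.
        self.blocks.setdefault(author, set()).add(target)

    def timeline(self, viewer):
        muted = self.mutes.get(viewer, set())
        return [(a, t) for a, t in self.posts if a not in muted]

    def can_reply(self, replier, author):
        return replier not in self.blocks.get(author, set())

feed = Feed()
feed.post("harasser", "automated abuse")
feed.mute("victim", "harasser")               # Reddit-style "block"
assert feed.timeline("victim") == []          # the victim sees nothing...
assert feed.can_reply("harasser", "victim")   # ...but the abuse continues unseen

feed.block("victim", "harasser")              # a genuine block
assert not feed.can_reply("harasser", "victim")
```

In other words, a mute changes only the victim’s view of the world; the harassment (and the bot replies) carry on behind their back, which is exactly the “worse than useless” behaviour described above.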

Beyond that, the moderators of most Reddit communities are not trained in any form of conflict resolution or mediation. This means that, when it comes to the task of actually moderating conversation, the vast majority of Reddit’s moderators have absolutely no idea what they’re doing.


The internet, according to Reddit

The majority of Reddit is between 18 and 29 years old (59 percent); 91 percent are younger than fifty. This means that, much as with women and LGBT people, the site essentially ignores the existence of older users. It also virtually guarantees juvenile and childish behaviour — ableist slurs like “you must be autistic” are treated as ordinary speech, easily waved away as “just a joke; ignore the trolls and grow a thicker skin”. Many Reddit users see the upvote and downvote arrows as a kind of popularity contest, which reduces conversation to a game of clickbait-driven social conformity where the “winner” amasses the most upvote points.

Reddit may have begun as a playground for tech-minded programmers and nerds like its founders, but it has grown into an exclusive club for a middle-class, white, cisgender, heterosexual dude-bro-man “hivemind” that passively and actively shuts out diversity and dissent, or quarantines it in niche communities. The volunteer “moderators” of Reddit’s communities are no better, since they’re just random, unelected gatekeepers who often censor disagreement in tribalistic misuses of power. The moderators censor “drama” (often including healthy debate or unpopular views) by setting their subreddits to automatically delete or mark certain users’ posts as spam, or ban a user altogether with no need for explanation.

Freedom of speech — for a privileged, exclusive, self-selected few

Reddit’s sitewide administrators strenuously resist taking action on any request for help in all but the most extreme cases. Their only real rule seems to be “as long as it’s not blatantly racist” — which conveniently ignores 99.9% of harassing and abusive behaviour online, including all “non-blatant” racism for which excuses can be made.

Is that “freedom of speech” at work? No, it’s racism, homophobia, transphobia, ableism, and ageism, normalised into an everyday code of conduct. Despite its calculated public-relations gestures of shutting down the most virulently hate-oriented subreddits, bigotry is built into the fabric of the site and the type of people who are made to feel welcome there. It was almost a decade before public outcry forced Reddit to delete its worst communities; they’d have never done so themselves unless their public image and prospects for advertising revenue were at stake. And many of Reddit’s worst communities are still there; apparently they’re too profitable to get rid of (and have adapted to tiptoe past Reddit’s laughably lax rules).

Time to create a world wide web and social media for everyone

Facebook, Instagram, Twitter and Reddit will never hit the “Restart” button. Silicon Valley only cares about “growth” and revenue. This means anything that connects users with advertising — whether it’s hate speech or outright violently abusive behaviour — will never be meaningfully reduced. Fear, hatred, confusion and controversy keep people fighting, arguing, and most importantly, spending time on the site while soaking in ads. This is the true price of “free”.

If we want to change social media in a meaningful way, we have to do it ourselves. We need to do better — for the sake of our collective emotional health, our interconnected global communities, the real social world around us, and the generations to come who will grow up online. That means starting over and building an alternative that works for everyone.

Social media works exactly as its creators intended — and that’s why it keeps getting worse. | Part 2

From part one of this article, we’ve seen that as Web 1.0 became Web 2.0, the kinds of people coming online have changed with it.

Web 1.0 was mainly white, male, cisgender, heterosexual, and mostly based in North America.
Web 2.0 is increasingly nonwhite, female and male, omnigender, omnisexual, and located everywhere around the globe.

Along with this Cambrian explosion of human diversity online, an accelerating spiral of harassment, bullying, exclusion and marginalisation points to one conclusion: the cultural software running in the minds of social media’s creators and gatekeepers has abjectly failed to keep pace with the web’s increasingly diverse user demographics.

Specifically, the second generation of the web is fed up with:

  • “civility” toward Nazis and other harassing, increasingly violent idiots on Twitter
  • homophobic and transphobic policies on Facebook
  • racists (including genocidal “ethnonationalists”) on Reddit
  • entitled morons saying “just ignore the trolls” when anyone mentions their experiences of harassment
  • an internet where reporting abuse is likely to get the victim banned instead of the perpetrator
  • social network operators who repeatedly deploy ineffective solutions as propaganda and disinformation poisons public opinion and distorts conversation

Below, you’ll find real examples of how social media fails anyone who doesn’t fit its “acceptable” demographics.


Twitter

Ask any woman, nonwhite or LGBT person about their experiences on Twitter. Anyone with a considerable following will tell you about a daily stream of unsolicited sexual advances, insulting condescension, constant passive and active harassment, and often outright threatening behaviour ranging from stalking to promises of violence. Twitter’s “support” team has become famous for doing nothing to stop violent and abusive users. Most recently, the so-called “MAGA Bomber” was reported for violent threats several times by multiple people over a span of weeks. Twitter’s support team did nothing.

Prominent people who have fled Twitter:

Ruby Rose
Leslie Jones
Millie Bobby Brown
Zelda Williams

…among thousands of others who leave or simply never sign up because they know what to expect.

Twitter’s response: “oops! We’ll do better next time!” In the meantime, people are dying in the real world due to campaigns of radicalisation and weaponisation that begin online. “Ignore the trolls” is now just another tired excuse for bigoted and violent behaviour that goes mostly unpunished until after the fact.


Facebook and Instagram

Facebook

Transgender artist Chloe Sagal died by suicide on 19 June 2018 after being targeted by a gang of bullies online. When she took to Facebook to voice her suicidally depressive feelings, administrators of the site didn’t help her get assistance — they locked her account. This happened several times in the month prior to her death.

Sidebar: ironically, on Reddit, a common refrain is that Facebook “only locked her account a few times” and thus bore no responsibility — despite completely ignoring her clear signals of emotional distress and intent to self-harm.

One such comment was made by someone who self-identified as a transgender woman. Revealing her gender seemed to be an attempt at making the comment seem less vile, but it only proved the opposite. This shows how Reddit’s users are problematic collectively — not just one subset. More about that in part three of this article.

Instagram

Hopefully, you already know that Facebook owns Instagram.

So it should come as no surprise that Instagram cares even less about harassment than Facebook does.

Kate Friedman Siegel, creator of the humour-oriented “Crazy Jewish Mom” account, is terrified for her own safety. “I wouldn’t be where I am today without these platforms,” Siegel said. “But I feel the need to talk about this, because we all have to figure this out. There’s real-world implications that go beyond harassment and trolling… this has been an ongoing problem,” Siegel says. “You can get away with it for a long time, but when you’re in a moment where so much violence is happening, and the precursor to that violence is being explicitly verbalized on your platforms, you have to do better.”

Prominent people who have fled Instagram:

Loan (Kelly Marie) Tran
Daisy Ridley
Ariana Grande and Pete Davidson
Selena Gomez

…among thousands more who leave or simply never sign up. We know how the harassment and bullying games work by now.

Sidebar: not to mention how Instagram’s constantly-changing algorithms are burying individuals and small brands, making posts practically invisible without warning or reason.

As YouTube (owned by Google) tries to develop into a more television-like platform — all the easier to push more and more advertising at users — smaller creators are similarly feeling pushed out and left behind.

Destructive Self-Selection and the “Free Speech or No Speech” Slippery Slope Fallacy

Self-selection leads communities to build and reinforce social norms, as likeminded people form a majority consensus that appears “normal” and correct. For now, note how users across social media sites tend to see things in similar, sometimes naively destructive ways, misperceiving those ideas as right and true because “everyone” seems to think the same way.

Hatred and bullying coalesce within groups who perceive themselves as righteous, from GamerGate to ComicsGate to indie artists who seek social-media “clout” by becoming roving gangs of malevolent copyright trolls. Often, the ethos of “protecting the tribe from outsiders” offers a flimsy-yet-seductive rationalisation for otherwise indefensible behaviour.

At core is Silicon Valley’s self-serving evangelism surrounding the imagined value of “absolute free speech or none at all” (also known as a slippery slope fallacy), “the only cure for bad speech is more speech” and liberation through “free” social media. These corrosive self-reinforcing beliefs persist despite mounting evidence to the contrary.

We’ll examine these dynamics in more detail by focusing on Reddit in part three.

Social media works exactly as its creators intended — and that’s why it keeps getting worse. | Part 1

The internet has become an extension of real life. What happens when real life becomes an extension of the internet?

First, an analogy: the United States was founded by men who were also slaveowners, and the country was shaped from its inception to advance their interests. This is the only reason why, as recently as 2016, a candidate could become president despite decisively losing the popular vote.

The modern social media world was designed by privileged, middle-class white young men, mainly in order to get rich while amusing themselves and their friends. Now we’re seeing the outcome of amoral, greed-driven, insular technology that is exclusionary by design.

Demanding that Silicon Valley change its ways is like asking a prehistoric mastodon to evolve into a present-day elephant. If you decide to stand by and wait for the mastodon, be sure not to hold your breath — lest you share the fate of the ancient dinosaurs.

Web 1.0 versus Web 2.0

The first generation of the world wide web, “Web 1.0”, was defined by starry-eyed ideals about “freedom” and “speech”. Questions of value, both moral and financial, were only answerable by saying, “making it free (with advertising)” and “more speech! Anything less is evil censorship! More speech! More speech!”

Freedom for whom?
More speech for whom?

Web 1.0 was an era defined by dialup modems, clunky desktop monitors and primitive web browsers. Compact discs sent by postal mail to install your internet connection software. Lovingly designed personal blogs that played MIDI files in the background and garishly coloured text that blinked or scrolled across the screen. Only people who could afford relatively fast internet access — and for bloggers, those who had time to learn HTML, CSS and Photoshop — invested the requisite effort to build digital homes and identities online.

We’re now in the thick of the internet’s second generation, both in terms of users and technology. Slick handheld screens and clever web interfaces keep our eyeballs and emotions dependent on our cellphones. High-speed internet is a necessity for any teenager who wants to know what’s hot, who’s cool and maybe even become an “influencer” themselves. Blogs have given way to a variety of personally revealing profiles on apps ranging from online dating to music to selfies and political opinions.

  • Web 1.0 was mainly white, male, cisgender, heterosexual, and mostly based in North America.
  • Web 2.0 is increasingly nonwhite, female and male, omnigender, omnisexual, and located everywhere around the globe.

As Web 2.0 grows, users increasingly tune out ads. Advertising has just as rapidly transformed to become a pervasive and intrusive form of internet-wide surveillance. Big Brother and Big Friend have become indistinguishable.

Where the first generation comprised a few hundred million users, Web 2.0 encompasses billions.

Culture Is What Matters Now, Not Technology

Although this second generation of the Web has the advantage of shiny new technologies, its cultural software has woefully failed to keep up. Silicon Valley’s utopian vision of “freedom” and “speech” was designed to accommodate the minds of the middle-class, cis/het white “dudebros” who still run the largest social media sites in the world.

The rest of us are increasingly realising that their vision is for them, and them only. Their utopia was not built for all of us, and never will be. The web is becoming an extension of real life; platitudes about “absolute free speech” have failed to provide even basic protection for all but the most privileged few. Cute “trolling” among peers becomes the ugliness of bullying among strangers and real-world violence between members of differing internet tribes.

Part two of this article will explore the meanings and specific consequences of technological versus cultural software, as social media becomes an extension of real life — while transforming our lives for better and increasingly, for worse.

What if a social network could also be an internet community bank?

An “internet community bank” could become especially useful by offering a medium of exchange for members of the community.

This is similar to the in-game currency or “virtual cash” approaches used often in video games.

Our social network supports independent creators, as well as those who are traditionally excluded, marginalised and discriminated against. This includes:

  • those who buy and sell items that are either independently created or previously owned (e.g. artists; fashion designers; individuals who want to sell their used clothing in good, clean condition)
  • webcam performers; those who offer nude self-portraits to their social media followers
  • cosplayers who may offer adult cosplay and fanclubs for their work

Scenario: PersonA deposits $50 to the site, translating into 50 credits. This enables them to pay 50 credits for a skirt that PersonB has placed on sale via their page, since PersonB “only wore it once, but someone else might love it.” The credits are transferred instantly and PersonB can now buy an original piece of art created by PersonC, etc.

Once the initial deposit of funds has been made, there is no more need for dealing with payment processors, gateway services or other entities that typically discriminate against certain populations. The community becomes a self-sustaining ecosystem where members can buy and sell items via their own store pages.
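The deposit-and-transfer flow in the scenario above can be sketched as a minimal ledger. This is an illustrative sketch, not a production design: it assumes a 1:1 USD-to-credit conversion as in the scenario, and the `CommunityBank` class and member names are hypothetical.

```python
class CommunityBank:
    """Toy ledger for community credits (illustrative sketch only)."""

    def __init__(self):
        self.balances = {}  # member -> credit balance

    def deposit(self, member, usd):
        """Convert a cash deposit into site credits (assumed 1 USD = 1 credit)."""
        self.balances[member] = self.balances.get(member, 0) + usd

    def transfer(self, buyer, seller, credits):
        """Move credits instantly between members — no external payment processor."""
        if self.balances.get(buyer, 0) < credits:
            raise ValueError("insufficient credits")
        self.balances[buyer] -= credits
        self.balances[seller] = self.balances.get(seller, 0) + credits

bank = CommunityBank()
bank.deposit("PersonA", 50)              # $50 becomes 50 credits
bank.transfer("PersonA", "PersonB", 50)  # PersonA buys PersonB's skirt
bank.transfer("PersonB", "PersonC", 30)  # PersonB buys art from PersonC
assert bank.balances == {"PersonA": 0, "PersonB": 20, "PersonC": 30}
```

A real implementation would of course need persistent storage, concurrency control and withdrawal handling, but the essential point stands: once credits exist inside the community, exchanges between members require no third-party gatekeeper.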

The only content restrictions would be obvious ones, such as the following (incomplete) list:

  • real or replica weapons capable of easily damaging the human body (e.g. guns, laser pointers)
  • soiled or unclean items of any kind
  • materials that incite hatred or violence (Nazism, ethnonationalism, etc.)
  • intentional misinformation, disinformation or propaganda
  • items that promote or depict non-consensual sex acts of any kind
  • any items that depict sexualisation of people under 18 years old
  • scat (urination/”pee/squirt” is allowed; defecation is not)
  • bestiality (sexual content depicting non-human animals)
  • promotion or depiction of drug use

Through its connection to the social network, this “community banking” structure also serves as an alternative to sites like Patreon, which increasingly discriminates against everyone from therapeutic ASMR video creators to lingerie designers.

The social network could also become a useful alternative to PayPal, given that at time of writing (2018), PayPal does not allow physical transactions of “sexually oriented goods and services” outside the United States, or “digital goods or content delivered through a digital medium” at all.

It’s also worth emphasising that, in a post-surveillance world wide web, the ability to buy and sell independently will be extremely valuable for individuals and small-scale independent creators. Internet advertising is now internet surveillance, and people are starting to wake up to the need for alternatives. Individuals and indie creators are already trying to buy and sell outside the oppressive grip of Amazon and Silicon Valley; a community-based solution can only make life easier and fairer for everyone.

How Reddit Ends.

How “moderated” online communities mirror the dystopian corporate philosophies that turn “free speech” into intrusive surveillance.

Have you noticed how Reddit seems to be anti-censorship and pro-“free speech”, but actually functions in ways opposite of free speech?

The insidious aspect of censorship is that you’ll never see what has been deleted.

Here are three ways you might encounter questions of speech and censorship during a typical week spent on Reddit:

1. Arbitrarily chosen, non-elected, unimpeachable “moderators”, who are predictably incompetent at anything resembling “moderating”.

Reddit’s original (and current) demographic is the stereotypical middle-class, cisgender, heterosexual, technologically-inclined white American male/man.

(What about everyone else, though?)

Along with the original Reddit demographic come the biases and distortions that accompany it (e.g. implicit racism, sexism and misogyny, homophobia and transphobia, ableism, ageism).

Reddit was originally created by Alexis Ohanian and Steve Huffman, a couple of middle-class, white, computer-programming college graduates, mainly for their friends. From there, the Reddit “community” continued to grow until we have today’s Reddit where everyone is still called “dude”, “bro” and “man” by default. (Hint: where are all the women — half of humanity’s population…? Please ask women, and actually listen.)

This 11-minute mini-documentary is sad in retrospect. Look at what the founders of Reddit seem to believe they’re creating… although it’s really just typical Silicon Valley talking points about how technology (and money, and connections, and influence) can help “change the world”. Didn’t really turn out the way they intended, did it…?

This article (and short video) from DailyDot details how Reddit began, essentially, with spam, fake accounts and being “anti-censorship” (but again, for a very specific “community”).

In a way, Reddit was built by spam.
Here’s an interesting revelation from Reddit cofounder Steve Huffman: The social news site was built on a lie. Many hundreds of lies, to be more specific, in the form of fake user accounts that Huffman and fellow cofounder Alexis Ohanian used to populate the site in its earliest days.

“You would go to Reddit in the early days, the first couple of months and there’d be tons of fake users,” Huffman says in a video for online educator Udacity.

Through those fake accounts, Huffman and Ohanian submitted high-quality content — the type of articles they wanted read. This “set the tone” for the site as a whole, Huffman said, and at the same time made it look populated.

“Set the tone,” indeed.

The outcome: anything that “isn’t blatantly racist” is allowed and protected under Reddit’s intentionally vague rules. This is how Reddit became one of the most toxic places on the internet that refused to ban its most bigoted and hate-driven communities until 2015. (For reference, that’s a full decade after Reddit began.)

As you can see if you’ve spent any amount of time on Reddit recently, the bigots never really left. They just use the intentionally lax rules to say the same hatred-fueled things in more “civil” ways. And the moderators generally couldn’t be bothered to intervene because it’s not their day job. There’s a limit to how much unpaid volunteer labour can accomplish, and that limit tends to diminish over time.

Moderators on Reddit are also not voted on or chosen in any way by members. This effectively creates a kind of dictatorship; a subreddit’s members are the unrepresented subjects. Moderators function as totalitarian rulers, immune to criticism for anything that’s not “like, overtly racist”, as Reddit co-founder Steve Huffman puts it. This conveniently leaves untouched about 99.9% of bigoted and harassing behaviour experienced by people who aren’t members of Ohanian and Huffman’s “peer group”, as he astutely calls them.

2. No discovery mechanism for new subreddits.

Reddit is designed to ensure that new subreddits are invisible. This centralises power among the oldest and most popular communities — a self-perpetuating cycle that creates “super”-subreddits while starving new ones.

The predominant philosophy of “free choice” on Reddit is actually a corporate mantra: “if you don’t like it, go somewhere else.” This is fundamentally anti-democratic in that each subreddit becomes an echo chamber (“hive mind”). Dissent is suppressed by harassment, downvoted to oblivion, or bullied into silence. Or moderators will simply delete your posts and comments while making noises about how it’s their playground and they’re free to make up their own rules and enforce them however they like. The Reddit admins generally pretend they can’t do anything to intervene because of “free speech”.

So if you dislike one subreddit, you can just create a new one, right?

Yes, but it’s practically guaranteed no one will ever find it, which completely defeats the purpose.

Sidebars of existing subreddits are the only way to find new subreddits. Sidebars, of course, operate at the whim of moderators who often refuse to include links to new subreddits that could become viable “competition”. This is similar to the way that corporations try to establish vertical monopolies, then strangle or absorb competitors while claiming to support “free markets” and “innovation”.

3. “Free” means “don’t expect us to lift a single finger to help you.”

On Reddit — and on “free” social media generally — selling user data is how the site makes money. Users are only profitable to the extent that we click ads, or give away private data that advertisers want to buy from the social media site’s owners.

This means that if you leave, the social media site loses nothing as long as another user signs up to take your place. It also means that they really don’t care about what happens as long as you, or any pair of eyeballs really, show up tomorrow.

  • Being harassed? Too bad.
  • Moderators operating in bad faith, deleting your posts and/or comments? Who cares.
  • Simple features, like a working Block button and ways to find new subreddits, don’t really work? Well, it’s free, what did you expect?

And this is how we ended up with our “free” dystopian social media landscape. Free speech. Free choice. We have the Reddit — and Twitter, Facebook, Tumblr, and Google — that we paid for. In a corporate capitalist world, “free” really means worse than worthless, most of the time. Every word we type and selfie we upload powers social media as a sacrifice of time, attention, privacy, emotional health, and increasingly, our freedom in the real world.

We’ve “disrupted” ourselves directly into a dystopian future-present. But at least nobody had to pay for it, right?

Not So Open That Our Brains Fall Out

What if all future social networks were built to protect against misogyny, racism, homophobia/transphobia, ableism and ageism — the way real cities are planned and programmed with public health initiatives using epidemiology to prevent the viral spread of infectious disease?

The Silicon Valley version of “free speech” is both stunningly wrong and remarkably backward considering the tech industry’s pretensions at building the future through advanced software.

Keeping Minds So Open That Our Brains Fall Out

Imagine if every child in school had to “learn all sides” when confronted with the questions of physics: is gravity real? Is the Earth round? Is the Moon really made of cheese? Is fire alive? Do vaccines cause autism? Is homeopathy real? What about the Flying Spaghetti Monster?

These questions don’t all belong in the realm of physics, but they share one thing in common: if every child had to re-prove them in each generation, society would be trapped re-answering questions that already have scientifically valid answers.

Likewise, society continues its progress as majority consensus changes. Are people of all genders, sexualities, skin colours and ethnicities equally human? Do all human beings have equal rights? These questions also have only one correct answer.

Silicon Valley’s corporate version of free speech, however, imagines a free speech that favours ignorance-fueled controversy over well-informed opinion.

How Free Speech Becomes Hate Speech

Fear, uncertainty, doubt and hatred instill anxiety and panic. Desire for reassurance and validation provokes feverish conversation. All the while, our earnest outpourings of heartfelt emotion are recorded, bundled and sold to all who will pay. This is why social media is “free”.

The real cost is that “free speech” is redefined to mean “anything that brings more eyeballs to our advertisers”. As we’ve seen, society itself pays the price as misinformation, disinformation, harassment and bullying dominate the collective conversation.

Bringing Our Shared Humanity Back to Social Media

What if social networks operated more like the blood-brain barrier that protects our brains from infection, or like urban epidemiological strategies that keep air and water clean, protecting children and adults from pathogens using simple scientific principles?

Leaving each person alone with a mountain of “free” tools to fight harassment by themselves is like giving a person — who hasn’t learned basic critical thinking skills — books on astronomy and astrology, then telling them that scientific fact is a matter of Likes and Retweets.

If society hopes to advance past our current social-media dark age, ideas of “free speech” will have to move beyond shouting matches over questions that have obvious answers. Hate speech has no place in our present and future world; new social networks need to start from there.