I watched the manosphere documentary; here is how design is making things worse.

Somewhere between the happy path and edge cases, we forgot to design against harmful actors.

Title card of Louis Theroux’s Netflix documentary “Inside the Manosphere,” featuring the show’s logo and title against a dark, stylized background.
Source: Netflix

Last month, Louis Theroux’s Netflix documentary “Inside the Manosphere” was released and has generated a great deal of online discourse. Brought to public attention by other programs like Netflix’s Adolescence, the “manosphere” is a network of communities frequented almost exclusively by boys and young men, whose content is deeply misogynistic, anti-feminist, antisemitic, and more. The audience is drawn in by promises of learning how to become independently wealthy, fit, and successful with the other sex, and is then sold a series of ideas, more or less subtle, typical of so-called “red-pill” rhetoric.

Perfectly designed golden cages

As a designer, one of the things that stuck with me is how these communities thrive on, and depend on, online environments almost exclusively. This is possible thanks to social interfaces whose very nature allows such content to spread like wildfire despite its toxicity.

Watching the documentary, it quickly becomes apparent that while the algorithm is the creators’ source of fame and money, it also becomes a “golden cage”: while they brag about having escaped the 9–5, these young men must constantly generate content through multiple daily live streams, because they need a steady supply of “shocking” moments that can be repurposed into short clips on YouTube and TikTok. Since the algorithm promotes this kind of content over others, they are more likely to keep creating it, seemingly purely for views’ sake. As HSTikkyTokky tells Theroux in the documentary, “I deliberately say offensive things because that’s what gets attention and money. Algorithms reward controversy — I’m a salesman.”

A screenshot of a Kick live stream hosted by MyronGainesX. The main video features a woman with overlay text about dating dynamics, while the sidebar and live chat display a stream of misogynistic insults and racial slurs directed at her.
A screenshot of a clip from one of Myron Gaines’ livestreams, where his followers are insulting the girl in the video he is watching. Source: Kick

The issue is that, while the creators might not necessarily believe, behind closed doors, everything they say online, it is evident that their audience (sometimes as young as 13) internalizes these extreme views, which then shape real-life behaviour: in Insider’s article, Allie Chmielewski, a human-trafficking survivor and educator from Kentucky (US), says “she’s heard boys talk about how they want the world to go back to how it was in the 1950s, when women make zero decisions”.

In this article, I will cover the two main types of platforms used by the manosphere and how we, as UX practitioners, can help combat phenomena like the manosphere in digital products.

Social media platforms

Major platforms like YouTube and TikTok have banned (in some cases only temporarily) manosphere content creators, such as the Tate brothers, for violating their content policies. However, their content is far from gone: supporters constantly re-upload viral clips, keeping the material alive after the creators’ bans; much like Hydra in the Marvel movies, “cut off one head, two more shall take its place”.

Screenshot of a TikTok fan account dedicated to Andrew Tate, showing a grid of video thumbnails with manosphere-related content and a high follower count.
An example of a TikTok fan account of the most famous manosphere creator Andrew Tate. Source: MMFA

Moreover, the algorithm actively boosts content that generates the most views and reactions; this means that manosphere creators can still exist there, especially when they use more “subtle” language, also known as “dogwhistles”, that doesn’t explicitly violate platform guidelines: terms like Chad, Stacy, AWALT, or redpill.

While major platforms have struggled to ban this kind of misogynistic content, other platforms actively profit from the manosphere and let its creators thrive through long-form content: one example is Rumble, which, in the name of “authentic expression”, allows content to be reported only in cases of extreme behaviour (e.g. physical violence).

Private spaces (Telegram, Discord)

While priding itself on user safety, Telegram has become one of the platforms manosphere creators use most to sell their content and products, such as trading schemes. Because the platform promises free speech and privacy, these groups are private by nature and are therefore harder to reach and to file ban requests against. As a Telegram user, I can report a single message or send an email, but there is (apparently) no platform-level moderation of groups like this.

The process for reporting abusive content on Telegram
Source: anycontrol.app

For example, in my country, Italy, 17 million users, mostly men between 11 and 60, use private Telegram groups to share explicit pictures of their partners, spouses, and friends without their consent. The platform makes it so easy to conceal these communities that when the Italian private Facebook group “Mia moglie” (“my wife”), in which users shared explicit, non-consensual pictures and videos of their wives, was exposed and shut down, the community simply moved to Telegram. Each time one of these groups is shut down, a new one is born, thanks to a multi-step process through which group admins can easily retain their community and keep it alive. The self-destructing messages set up in these groups don’t help locate abusers either.

Screenshot of a Telegram private group chat showing Italian-language messages in which users request and share non-consensual explicit content. Sensitive material is obscured.
A screenshot of the “My wife” Telegram group with explicit messages and requests. Source: Repubblica.it

The situation on Discord is similar: users can report individual messages, and a user can accrue multiple strikes; however, manosphere groups can still exist there, since a community’s guidelines are handled directly by its admins. As long as a community is invite-only on a private server and its users aren’t using explicitly violent language or spamming, it can go on undisturbed.

Reaction only, not prevention

As we have seen so far, stopping the spread of harmful ideologies like the manosphere’s seems to fall mainly to the single, informed user; in the name of free speech, content using ambiguous terms can easily escape bans unless a user reports it (and the report actually results in the content being removed). In private spaces, unless a community is exposed to the public (as in the case of the “Mia moglie” group), this can hardly happen at all; and when it does, because someone has infiltrated the community, a new one can easily be built again.

Can UX Designers help?

While some of these issues stem from the algorithms behind recommendation engines, and are therefore outside the realm of UX, I still asked myself: how can designers help build safe online spaces that aren’t harmful to women?

The truth is, unlike in other fields such as accessibility or safety, there isn’t an “official” set of solutions and guidelines for preventing the perpetuation of gender bias and gender-based harm in digital products. To be honest, it has been tough for me to find any specific best practices in this field at all. Yet there are many tools, including some from other fields, that could help us tackle the problem from the inside.

Account for anti-personas in user research

An example of an anti-persona profile
An example of an anti-persona in fintech. Source: User Interviews

Just like one would account for potential theft when designing a store experience, we need to start actively defining the users we don’t want to design for, but against: anti-personas. According to NN/g’s definition, an anti-persona “is a representation of a user group that could misuse a product in ways that negatively impact target users and the business”; accounting for their existence and behaviour in our interface can help improve aspects such as safety and retention for our target users.

But how can we gather data on users with ill intent, who would rather stay in the shadows? While interviewing them directly can be much harder, even a simple proto-persona exercise can keep us from ignoring ill-intentioned people who want to use our platform for harmful purposes. And just as with traditional personas in UX, we can draw on data from multiple sources, such as app-store reviews, interviews with target users as well as survivors, findings in the academic literature, and so on. In our scenario, this exercise helps us define the characteristics of manosphere perpetrators and identify their goals and behaviour, so that we can stop them in their tracks.

Define the Anti-Scenario

A person looking distressed in a dark room while viewing an overflow of offensive messages and insults.
Source: York SJ University

When it comes to our daily design practice, we are used to designing for the happy path and then finding solutions for the potential unhappy path (e.g. a user posts content and the upload fails) and for edge cases (e.g. a user profile that accounts for a long name, special characters, etc.). With anti-personas, though, we are not talking about technical errors or rare occurrences, but rather about a scenario that is negative in nature due to social or behavioural factors, also known as an “anti-scenario” or worst-case scenario. A well-known example is the malicious use of Apple AirTags: originally intended for tracking the whereabouts of objects such as house keys, they are used by stalkers to secretly know where their victims are at all times.

To figure out potential anti-scenarios in the context of the manosphere, a helpful question to ask at this stage is: can the anti-persona take actions that harm women or perpetuate gender bias and inequality? These actions can be direct, like writing hateful comments under a user’s post. But indirect actions are just as harmful: last winter, X’s Grok made headlines after several users exploited the AI to “undress” female users’ pictures posted on X. Grok’s deepfake issue is so widespread that the AI is believed to be generating sexualised content (even of minors) at a rate of 190 pictures per minute, and the European Commission is currently running a formal investigation into X.

Stop the anti-scenarios from succeeding

Once our anti-personas are ready, it’s time to look at their current (or potential) user journeys to figure out how they could use the platform’s existing structure to reach their nefarious goals. The next step, unlike with personas, is to actively find ways to stop them in their tracks, or at least make their journey considerably harder. How can we achieve that?

  • In Designing Social Interfaces, Crumlish and Malone suggest sharing platform-wide guidelines for expected behaviour, as well as having founding members act as “role models” for the community. Moreover, they recommend implementing an easily discoverable and simple procedure for reporting abusive content.
  • In her book Design for Safety, Eva PenzeyMoog suggests designing to prevent the identified harm outright, or adding roadblocks that make it harder to carry out: in our case, can we create friction in the publishing of manosphere content?
A split-screen screenshot of Instagram’s anti-bullying feature. On the left, a user attempts to post the insult ‘You are so ugly and stupid’; on the right, a warning pop-up appears asking the user to rethink their comment to keep the platform a supportive place.
In 2019, Instagram rolled out an AI-powered feature that discourages users from going through with posting a mean comment or caption. Source: Instagram
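The “roadblock” idea can be sketched in code. Below is a minimal, hypothetical example, where the flagged-term list, threshold, and flow names are all my own assumptions rather than any platform’s real moderation logic: a pre-publish check that routes drafts containing known dogwhistle terms through an extra confirmation step instead of silently blocking them.

```python
# Hypothetical pre-publish "roadblock": drafts matching flagged terms
# are not blocked outright, but routed through an extra confirmation
# step that surfaces the community guidelines first.
FLAGGED_TERMS = {"chad", "stacy", "awalt", "redpill"}  # illustrative dogwhistles


def review_draft(text: str) -> str:
    """Return the next step in the publish flow for a draft post."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    if words & FLAGGED_TERMS:
        # Friction: require explicit confirmation before publishing.
        return "confirm_with_guidelines"
    return "publish"
```

In a real product, this naive keyword match would be replaced by a trained classifier plus human review, but the design principle stays the same: add friction at the moment of publishing rather than censoring invisibly.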

Nudge positive behavior

In the context of digital interfaces, nudges are contextual hints aimed at influencing our users’ behaviour. Unlike dark patterns, nudging is (ideally always) used ethically to encourage positive behaviour. For example, on Instagram, Meta added “kindness reminders” both when opening a private chat with a creator and in comment threads already displaying a certain number of negative comments. They found that, when nudged to be kinder, about 50% of users would end up editing or deleting a negative comment, thereby reducing hateful content on the app.
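As a rough sketch of how such a thread-level nudge might be triggered (the threshold and the toy “classifier” below are assumptions for illustration, not Meta’s actual implementation):

```python
# Hypothetical comment-thread nudge, loosely modelled on the described
# "kindness reminders": once a thread accumulates enough comments
# classified as negative, new commenters see a reminder before posting.
NEGATIVE_THRESHOLD = 3  # assumed value; the real trigger is not public


def is_negative(comment: str) -> bool:
    """Stand-in for a real toxicity classifier."""
    return any(word in comment.lower() for word in ("ugly", "stupid", "hate"))


def should_show_reminder(thread: list[str]) -> bool:
    """Show the kindness reminder once the thread turns hostile."""
    negatives = sum(is_negative(c) for c in thread)
    return negatives >= NEGATIVE_THRESHOLD
```

The design choice worth noting is that the nudge is contextual: it fires based on the state of the thread, not on a blanket rule, so well-behaved conversations never see it.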

Screenshot of an Instagram comment thread showing Meta’s “kindness reminder” prompt appearing as an overlay above a series of hostile comments, encouraging the user to reconsider their message before posting.
An example of nudging in a hateful IG comment thread. Source: makeuseof.com

Conclusion

When it comes to gender inequality and the perpetuation of gender bias in products and services, the spread of the manosphere is just the tip of the iceberg. As Caroline Criado Perez writes in her book Invisible Women, women are being systemically ignored in several areas of modern society, from tech to medicine, resulting in situations that range from unpleasant to, in the worst cases, outright dangerous. It therefore becomes evident that interface design isn’t neutral either, as the platforms in which the manosphere operates are working exactly as intended. But just as we have been advocating in the industry for accessible and ethical practices, it is time we advocate for digital products that prevent the spread of gender-based hate.

Further reading:

Surely if you rule the manosphere, you can be your own boss? These influencers aren’t even that, Elle Hunt, Guardian

“Inside the Manosphere” should also feature Galloway and Bartlett, Francesca Cavallo, Substack

Design For Safety, Eva PenzeyMoog

Invisible Women, Caroline Criado Perez

Technically Wrong, Sara Wachter-Boettcher

“Sexy” vs Sexist UX, Gina Taha

Women-Centric Design, Mansi Gupta

Designing Social Interfaces, Christian Crumlish & Erin Malone


I watched the manosphere documentary; here is how design is making things worse. was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
