The rise of the skeptical user in an age of synthetic media

I’ve been following the discourse around AI-generated content closely. My original hypothesis for social media was simple: IF algorithms shepherd us toward comfortable, familiar preferences, THEN they will kill nuance and critical thinking. But that’s not what seems to be happening now that synthetic media is in the equation.
The Head of Instagram, Adam Mosseri, recently noted in his essay that we are moving from “assuming what we see is real” to “starting with skepticism.” He suggests we are shifting our focus from what is being shared to who is sharing it.

Mosseri acknowledges that platforms will “get worse” at detecting AI content over time as the technology improves, yet proposes Instagram become the arbiter of authenticity through verification systems.
While platforms are scrambling for technical band-aids, I’m more interested in the byproduct: The rise of the skeptical user.
The data of distrust
We aren’t just imagining this shift; the numbers back it up:
- Trust in national news organisations has fallen from 76% in 2016 to just 56% in 2025 (Pew Research Center, 2016, 2025).
- 70% of young adults get news incidentally (stumbling upon it) rather than seeking it out to verify it (Pew Research Center, 2025).
- 59% admit they can no longer reliably tell human and AI content apart (Deloitte, 2024).
- 70% of users say AI makes it harder to trust what they see (Deloitte, 2024).
Two critical tensions for product designers
Reliable AI detection isn’t here yet
The scale of the problem is invisible until it isn’t. According to a 2026 report by Kapwing, 21% of videos shown to new YouTube users are now classified as AI slop. This low-quality, synthetic content is produced solely to exploit attention.

C2PA (the Coalition for Content Provenance and Authenticity) proposes a technical fix: cryptographically “sign” a file at the moment of capture to prove where it came from. One major obstacle is that the technology is currently viable on only a handful of devices, which could create a two-tier internet.
For example:
- You open an app like ChatGPT’s video generation on your iPhone and create a completely synthetic video: what looks like a raw, authentic moment of you reacting to the news.
- The AI-generated video gets saved to your Photos app as a standard .mp4 or .mov file with basic metadata (date, device type, maybe app name).
- You upload it directly to Instagram.
What’s missing:
- No cryptographic signature to prove it was captured by the camera sensor.
- No C2PA chain of custody showing it originated from a real lens.
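To make that gap concrete, here is a minimal Python sketch of what a C2PA-style provenance check might look like in an upload flow. The ProvenanceManifest fields, the HMAC stand-in for a device certificate, and the verify_provenance helper are hypothetical simplifications for illustration; they are not the real C2PA specification or any platform’s API.

```python
import hashlib
import hmac
from dataclasses import dataclass
from typing import Optional

# Hypothetical, simplified stand-in for a C2PA-style manifest.
# The real standard uses certificate chains and signed assertions;
# here an HMAC over the file's hash plays the role of the device signature.
@dataclass
class ProvenanceManifest:
    capture_device: str   # e.g. "camera-sensor" vs "generative-model"
    history: list[str]    # chain of custody: capture, edits, exports
    signature: bytes      # signature over the file's content hash

def sign_capture(file_bytes: bytes, device_key: bytes) -> ProvenanceManifest:
    """What a supported camera could attach at the moment of capture."""
    digest = hashlib.sha256(file_bytes).digest()
    return ProvenanceManifest(
        capture_device="camera-sensor",
        history=["captured"],
        signature=hmac.new(device_key, digest, hashlib.sha256).digest(),
    )

def verify_provenance(file_bytes: bytes,
                      manifest: Optional[ProvenanceManifest],
                      device_key: bytes) -> str:
    """Returns the label an upload flow might surface to the user."""
    if manifest is None:
        # The AI-generated .mp4 in the scenario above lands here:
        # no manifest, so nothing can be checked either way.
        return "no provenance data"
    digest = hashlib.sha256(file_bytes).digest()
    expected = hmac.new(device_key, digest, hashlib.sha256).digest()
    if hmac.compare_digest(expected, manifest.signature):
        return "verified: " + " -> ".join(manifest.history)
    return "provenance claim failed verification"

# The synthetic video from the example simply carries no manifest.
print(verify_provenance(b"<video bytes>", None, b"device-secret"))  # no provenance data
```

Note what the sketch implies: the synthetic clip isn’t flagged as fake, it simply carries nothing that can be verified, which is exactly how a two-tier internet of “verified” and “suspect by default” content could emerge.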
If creators with fancy devices have “verified real” badges, does everyone else become a suspect by default? We must be careful not to turn authenticity into a proprietary feature. Meta Verified showed us exactly what happens when paid verification turns trust into a product anyone can purchase.
The Wikipedia paradox
AI slop is accidentally training us to behave like Wikipedia editors. In his new book, The Seven Rules of Trust, Jimmy Wales argues that trust isn’t a static badge; it’s a living process. Trust is built on transparency, neutrality, and multi-source verification.
For years, social media has thrived on pluralistic ignorance. This is a psychological trap. You privately doubt a post. You wonder if it is AI, but you stay silent. You don’t publicly call it out. You don’t even like the post. Instead, you send it to a friend in a DM. We keep our skepticism in the group chat.

Before the slop era: I saw a striking news item → I shared it with a friend.
After the slop era: I see interesting content → I check the author’s credibility → I audit the profile → I (maybe) look for a second source → Then I consider it real.
Dino Ambrosi (TEDx, The Battle for Your Time) reveals 18-year-olds are on pace to spend 93% of their remaining free time staring at screens.

We could deduce that the more hours we spend in AI-saturated environments, the more of our reality is shaped by digital deception. The Wikipedia Paradox suggests a way forward. Users want the receipts. Instead of a simple “verified” badge, we should be designing for lateral reading.
“Trust us, this is a verified creator.” (Verdict-based)
“Here is this creator’s history, their linked accounts, and where this image was first seen.” (Evidence-based)
Design for skepticism, not just trust
As designers, we’re at a crossroads: Do we build systems that do the thinking for users, or systems that help users think for themselves? Morally speaking, our role isn’t to replace emerging critical behaviour with trust badges. It’s to amplify it.
How are social media platforms addressing it now?
- TikTok is applying invisible watermarking, a forensic technique that embeds a hidden signature into the video’s pixels. Even if a user screen-records a video, TikTok can still identify its origin (a toy sketch of this idea follows this list).
- Meta joined the C2PA Steering Committee in late 2024. They’re exploring provenance metadata for digital files, so users can verify a file’s origin and history.
- YouTube asks creators to disclose altered or synthetic content during upload, with penalties, including bans, for creators who mislead their audience.
- X (formerly Twitter) uses Community Notes, a bridging algorithm that only displays context that earns consensus from users with historically opposing viewpoints.
- Bluesky (AT Protocol) focuses on decentralised identity and user-selectable algorithms, allowing reputation, moderation, and AI-content labelling to be handled by independent services rather than a single platform authority.
- Mastodon (Fediverse) relies on community-run servers with local governance, where AI-generated content can be restricted, labelled, or excluded entirely through instance rules and defederation rather than global enforcement.
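As a toy illustration of the invisible-watermarking idea in the TikTok bullet above, here is a minimal least-significant-bit sketch in Python. It is a deliberate simplification: real forensic watermarks are built to survive compression, cropping, and screen recording, which this naive LSB mark would not.

```python
import numpy as np

def embed_watermark(frame: np.ndarray, signature_bits: np.ndarray) -> np.ndarray:
    """Hide signature bits in the least significant bit of the first pixels."""
    flat = frame.flatten()  # flatten() returns a copy, original frame untouched
    flat[: signature_bits.size] = (flat[: signature_bits.size] & 0xFE) | signature_bits
    return flat.reshape(frame.shape)

def extract_watermark(frame: np.ndarray, n_bits: int) -> np.ndarray:
    """Read the hidden bits back out of the pixel data."""
    return frame.flatten()[:n_bits] & 1

# A fake 8x8 grayscale "frame" and a 16-bit origin signature.
frame = np.random.randint(0, 256, size=(8, 8), dtype=np.uint8)
signature = np.random.randint(0, 2, size=16, dtype=np.uint8)

marked = embed_watermark(frame, signature)
recovered = extract_watermark(marked, signature.size)

assert np.array_equal(recovered, signature)  # invisible to a viewer, recoverable by the platform
print("signature recovered:", recovered)
```

The principle is the same as in production systems: the signature changes each pixel value by at most one, imperceptible to a viewer but recoverable by the platform that embedded it.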
It’s evident that the tech giants are trying to remediate the trust problem, but there’s a Jakob’s Law conflict: because every platform is inventing its own labels and signals, users won’t get a single, recognisable UX pattern they can trust across their entire information ecosystem.
What if we had a standardised content nutrition label?
Pick up a cereal box. The back tells you exactly what’s inside: 12g of sugar, 3g of fibre, 150 calories. The mandated food nutrition label model works, so why not bring the same transparency to what we consume online? A rough sketch of what such a label could contain follows the breakdown below.

1. Status stamp
Following the model of food labelling, the front-of-package symbol should be consistent across X, Instagram, and TikTok: is the content captured, edited, or synthetic?
2. The ingredients
- Provenance as context: Show the chain of custody rather than a verdict, e.g. a history log that presents the file’s life as a timeline.
- Source identity: A verified link to the creator or organisation, cryptographically signed.
3. Triangulation links
- Make it easier to see related posts, e.g. a “view related posts” nudge.
- Links to reputable news outlets or community notes discussing the same event.
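To make the three-part label more tangible, here is a rough Python sketch of what a standardised content label schema could contain. Every field and value name is a hypothetical illustration, not an existing standard or any platform’s data model.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

class Status(Enum):
    CAPTURED = "captured"    # came off a real sensor, unmodified
    EDITED = "edited"        # real capture with meaningful alterations
    SYNTHETIC = "synthetic"  # generated or substantially AI-produced

@dataclass
class ProvenanceEvent:
    timestamp: str  # when this step happened
    action: str     # e.g. "captured", "cropped", "ai-upscaled", "re-uploaded"
    tool: str       # device or app responsible, if known

@dataclass
class ContentLabel:
    # 1. Status stamp: the front-of-package symbol.
    status: Status
    # 2. The ingredients: chain of custody plus a signed source identity.
    history: list[ProvenanceEvent] = field(default_factory=list)
    source_identity: Optional[str] = None  # cryptographically verified creator link
    # 3. Triangulation links: routes out of the post for lateral reading.
    related_posts: list[str] = field(default_factory=list)
    external_coverage: list[str] = field(default_factory=list)

label = ContentLabel(
    status=Status.SYNTHETIC,
    history=[ProvenanceEvent("2025-11-02T09:14Z", "generated", "video model")],
    source_identity=None,
    related_posts=["app://post/abc123"],
    external_coverage=["https://example.org/fact-check"],
)
print(label.status.value, "|", len(label.history), "provenance events")
```

The design choice that matters here is that almost every field is evidence (a history, links, a signed identity) rather than a verdict, which is the lateral-reading argument above expressed as a data structure.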
If we assume users need us to tell them what’s real, we underestimate the very skills they’re rapidly developing. Hard Fork’s hosts Kevin Roose and Casey Newton are already modelling this sort of autonomy by federating their own social media network, the “Forkiverse.”
The best design won’t solve the authenticity crisis. It will give users the tools to solve it themselves.