YouTube has a new plan to deal with the wave of AI-generated content flooding its platform, and it involves you. The company is now asking viewers to rate whether a video feels like AI slop. On the surface, that sounds like a reasonable way to tackle low-quality AI content in your feed. In practice, it may cause more problems than it solves.
Humans are bad at spotting AI-generated content, and getting worse
The most basic issue with this approach is that people are not good at spotting AI-generated content, and the gap between human detection and AI capability is widening fast. Early AI content had obvious tells like robotic voices, warped hands, or unnatural-looking faces. Newer models have largely fixed those issues.
Voices now sound natural, faces are convincing, and the obvious giveaways are disappearing. The tools have clearly advanced, but casual viewers haven’t kept up. And there’s research to back this up.
A recent study on AI face detection found that people performed only slightly better than chance when asked to identify AI-generated faces. What’s more concerning is that their confidence in being able to spot AI faces was consistently higher than their actual accuracy. Research shows similar patterns elsewhere.
A study on deepfake detection found that people struggle to detect deepfakes but still believe they can, while research on AI-generated voice detection suggests AI voices are now nearly indistinguishable from real ones for the average listener.
YouTube’s own track record does not help its case. A Kapwing study found that around 21% of the first 500 videos recommended to a new account were classified as AI slop, while an investigation by The New York Times found that more than 40% of the recommended Shorts aimed at kids in a 15-minute session contained low-quality AI content.
This is content that already passed YouTube’s automated and human review systems. If those systems let so much AI slop slip through, expecting viewers to do any better seems unrealistic.
The rating system also opens the door to abuse
Even if viewers were reliable AI detectors, the new rating system is prone to abuse. Coordinated campaigns against creators are a well-documented problem on YouTube, with bad actors targeting channels through mass reporting and dislike bombing. A feature that lets users label content as AI slop hands them a new tool to exploit: rival channels, angry communities, or organized groups could flag videos regardless of whether AI was actually used.
YouTube has not explained how it will verify or weigh these ratings, leaving plenty of room for manipulation. Creators who have spent years building their audiences may now have to deal with a new risk that has little to do with the quality of their work. If the system is rolled out widely without safeguards, it could end up hurting legitimate creators as much as it targets low-quality AI content.
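To see why the verification question matters, consider what even a basic safeguard would involve. The sketch below weights each flag by the rater's track record, so a brigade of throwaway accounts counts for less than a couple of raters with a history of accurate flags. Everything in it, from the trust scores to the threshold, is invented for illustration; it says nothing about how YouTube's actual system works, because YouTube hasn't said.

```python
from collections import defaultdict

def aggregate_flags(flags, rater_trust, threshold=2.0):
    """Sum trust-weighted 'AI slop' flags per video.

    flags: list of (video_id, rater_id) tuples.
    rater_trust: dict mapping rater_id -> weight in [0, 1], e.g. based
    on how often a rater's past flags matched review outcomes.
    Returns the set of video_ids whose weighted total crosses the
    threshold and would be escalated for review. All values here are
    hypothetical placeholders, not anything YouTube has described.
    """
    scores = defaultdict(float)
    for video_id, rater_id in flags:
        # Unknown or new raters count for very little.
        scores[video_id] += rater_trust.get(rater_id, 0.1)
    return {v for v, s in scores.items() if s >= threshold}

# Ten sock-puppet accounts move the score less than two raters
# with strong track records.
flags = [("video_a", f"sock_{i}") for i in range(10)] + [
    ("video_b", "veteran_1"),
    ("video_b", "veteran_2"),
]
trust = {"veteran_1": 1.0, "veteran_2": 1.0}
print(aggregate_flags(flags, trust))  # {'video_b'}
```

Even this toy version surfaces the questions YouTube has left open: who assigns the trust scores, and what recourse creators have once their videos cross the threshold.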
And what do viewers get out of it?
Even if YouTube somehow manages to tackle abuse, there's another clear problem with the system: viewers have no real incentive to participate. Flagging AI content takes effort and requires some awareness of what AI tools are actually capable of, yet YouTube offers no clear benefit for helping spot AI slop. The platform, on the other hand, gets a cleaner feed and a steady stream of user data without giving much back in return.
There’s also a legitimate concern that nothing is stopping YouTube from using this feedback to train future AI models, potentially making AI-generated videos even harder to detect. In effect, it could turn a system meant to fight AI slop into one that helps improve it.
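The mechanics behind that concern are not exotic. As a rough, entirely hypothetical sketch, viewer flags map directly onto the labeled examples a detection model trains on, and any model that learns what gets flagged can just as easily be used to steer generated videos away from those tells. Every name and number below is invented for illustration:

```python
def build_dataset(videos, flag_counts, flag_cutoff=5):
    """Turn viewer flags into (features, label) training pairs.

    videos: dict of video_id -> feature vector (whatever a platform
    might extract from a video; the features here are placeholders).
    flag_counts: dict of video_id -> number of 'AI slop' flags.
    Label 1 = flagged as slop, 0 = not. These are exactly the labels
    a detector would train on; a generator evaluated against that
    detector then learns to avoid whatever the flags captured.
    """
    return [
        (features, int(flag_counts.get(video_id, 0) >= flag_cutoff))
        for video_id, features in videos.items()
    ]

videos = {"v1": [0.9, 0.2], "v2": [0.1, 0.8]}
flags = {"v1": 12}
print(build_dataset(videos, flags))  # [([0.9, 0.2], 1), ([0.1, 0.8], 0)]
```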
YouTube’s approach misses the mark
The new rating system is another attempt by YouTube to show it's taking the AI slop problem seriously, but the platform still isn't doing enough. It doesn't explicitly prohibit creators from posting AI-generated content, and while it requires disclosure for AI-altered or synthetic media, that rule only applies when the content is realistic enough to be mistaken for the real thing. Its monetization penalties are similarly limited, since they rely on the same detection systems that are already letting so much low-quality AI content slip through.
YouTube helped create the conditions for this problem by allowing and monetizing AI-generated content for years, and its efforts to contain it have fallen short at every turn. Outsourcing the cleanup to viewers, without explaining how their data will be used and without offering anything in return, treats them more like a free resource than a community. If YouTube is serious about tackling AI slop, it needs to own the solution rather than passing the job to the people watching.