Playing dumb: how AI is beating scammers at their own game

Fraudsters prey on human psychology. The most effective AI defences win by weaponising theirs.

There is a phone ringing somewhere in a call centre. A scammer picks up, settles into their script, and begins working through the familiar choreography of a con. Urgency, authority, manufactured trust. They have done this hundreds of times. They know how it goes.

Except this time, the person on the other end is an elderly woman named Daisy, chatty and warmly scatterbrained, and deeply interested in telling them about her cat, Fluffy. She can’t quite remember her bank details. She wonders if they could hold on a moment. She is, by any measure, the perfect target. She is also not a person at all.

[Image: a still from O2’s campaign video. Daisy, an AI-generated elderly woman with grey hair and glasses, holds a pink telephone handset; the subtitle reads “I’m just trying to have a little chat.”]
Seventy-eight years old, infinitely patient, entirely artificial. Image source.

Daisy is a conversational AI built by British mobile operator O2, and her entire purpose is to waste scammers’ time. Not to block calls or filter them, but to engage, warmly and unhurriedly, for as long as possible. She has kept fraudsters on the line for up to 40 minutes at a time.

“While they’re busy talking to me, they can’t be scamming you. I’ve got all the time in the world.”

The instinct is not new. Hobbyist scambaiters have used a pre-recorded character called Lenny for years, a looping audio simulation of a talkative old man that plays automatically whenever a scammer pauses. But Lenny requires a human to initiate the call. Daisy does not. She is Lenny at scale, running autonomously, around the clock.
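
The mechanics behind Lenny are simple enough to sketch: a silence detector and a playlist. In the Python sketch below, `get_audio_level` and `play_clip` are hypothetical stand-ins for whatever telephony backend actually carries the call, not real APIs.

```python
import itertools
import time

CLIPS = ["hello.wav", "sorry_what.wav", "hold_on.wav", "the_ducks.wav"]
SILENCE_THRESHOLD = 0.05   # normalised audio level treated as "not speaking"
SILENCE_SECONDS = 1.5      # how long the caller must pause before a reply

def lenny_loop(get_audio_level, play_clip):
    """Play the next clip each time the caller pauses, indefinitely."""
    silent_since = None
    for clip in itertools.cycle(CLIPS):        # loop the script forever
        while True:                            # wait for a sustained pause
            if get_audio_level() < SILENCE_THRESHOLD:
                silent_since = silent_since or time.monotonic()
                if time.monotonic() - silent_since >= SILENCE_SECONDS:
                    break
            else:
                silent_since = None            # caller spoke; reset the timer
            time.sleep(0.1)
        play_clip(clip)                        # respond, then wait again
        silent_since = None
```

The trick is that the bot never needs to understand anything. A well-timed “sorry, what was that?” does all the work.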

The con behind the con

To understand why Daisy works, you first have to understand why scams work. It is not, as we tend to assume, because the targets are naive or unsophisticated. Scammers are not succeeding through elaborate technical trickery. They are winning because they are extremely good at psychology.

The architecture of a phone scam is almost always the same.

  • Step one: create anxiety. A supposed bank, a government agency, a delivery company, something that signals authority and consequence.
  • Step two: offer relief. The caller who just alarmed you is also, conveniently, the person who can fix it.

The two-step is as simple as it sounds, and it is devastatingly effective, because the moment anxiety takes hold, rational processing takes a back seat. Cognitive shortcuts kick in. We defer to authority. We respond to urgency. We want to feel safe again quickly, and whoever is offering safety gets our trust by default.

This is not a flaw in a particular type of person. It is a feature of human cognition under stress. Research consistently shows that even sharp, alert individuals are vulnerable when the psychological conditions are right. All it takes is a convincing framing, a well-calibrated emotional pitch, and no pause to think. Scammers suppress that pause on purpose. Urgency is not incidental to the scam; it is load-bearing.

[Image: a two-panel illustration of the psychology of a phone scam. Left, “Step one: create anxiety”: a distressed figure holds a phone showing a fake bank fraud alert. Right, “Step two: offer relief”: the same figure, now calm and on the call, with a shield and tick behind them.]
The threat and the rescue are the same person. That is the entire con. Image by author.

Playing the player, not the game

What makes the AI countermeasures genuinely interesting, from a design perspective rather than just a technical one, is that the best of them do not try to out-engineer the scam itself. They play the player: they turn scammers’ own psychology against them.

Take Daisy’s origin story. O2 and their agency partners at VCCP did not build her to intercept calls. She has her own dedicated phone number, which the team deliberately seeded onto the “mugs lists” that scammer networks use to identify likely targets. O2 infiltrated the very databases designed to find victims and turned them into a trap: to anyone scanning those lists, Daisy looked like exactly what fraudsters were looking for, an older woman, a little uncertain about technology, with plenty of time to talk. The scammers’ confidence in their own profiling (their certainty that they had found easy prey) became the mechanism of their own waste.

This is a psychologically precise move. It exploits the overconfidence built into the scammer’s model. They have a script, a demographic target, and a practised set of pressure tactics. Daisy satisfies the demographic criteria perfectly, and then proceeds to be completely impervious to the pressure. She does not panic. She does not comply. She just rambles, amiably and indefinitely, about her grandchildren.

[Image: a five-step flow diagram of how Daisy operates. Her number is seeded into scammer databases; a fraudster calls; Daisy answers autonomously; she stalls with conversation; the scammer’s time is wasted.]
Five steps to becoming a scammer’s worst nightmare. Image by author.
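
Strip away the synthesised voice and the mechanism is a conversation loop. The sketch below is a guess at the general shape, not O2’s actual implementation: `listen`, `chat`, and `speak` are placeholders for a speech-to-text engine, a persona-prompted language model, and a text-to-speech engine.

```python
# A minimal sketch of an autonomous scam-baiting loop, assuming three
# pluggable components. Nothing here is O2's real code or persona prompt.

PERSONA = (
    "You are Daisy, a chatty 78-year-old. Be warm and meandering. "
    "Ask the caller to repeat things. Digress about your cat, Fluffy. "
    "Never give real personal or payment details; misremember them instead."
)

def daisy_call(listen, chat, speak, max_turns=200):
    """Keep a scammer talking for as long as possible."""
    history = [{"role": "system", "content": PERSONA}]
    for _ in range(max_turns):
        caller_said = listen()              # transcribe the scammer's turn
        if caller_said is None:             # they finally hung up
            break
        history.append({"role": "user", "content": caller_said})
        reply = chat(history)               # generate an in-character stall
        history.append({"role": "assistant", "content": reply})
        speak(reply)                        # the whole point: burn their time
```

Note how the design goal is inverted from a normal assistant: success is measured in minutes wasted, not problems solved.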

The persona itself is worth examining. O2 built Daisy to resemble the exact profile that fraud gangs most frequently target, older and apparently tech-inexperienced, because leaning into that stereotype was the most effective way to occupy the scammers’ attention. It is, in the most literal sense, a design that uses the adversary’s own bias as its primary feature.

The quieter version

Daisy is vivid and photogenic, and she has received considerable press. But the more infrastructural version of the same basic logic is already running on hundreds of millions of devices, largely invisibly.

Apple’s iOS 26 introduced call screening for unknown numbers. When a call comes in, the phone answers silently in the background, prompts the caller to state their name and reason for calling, and transcribes the response in real time on the lock screen. The user decides whether to pick up. Google has gone a step further with on-device AI scam detection for Android, which monitors conversations in progress for patterns associated with social engineering (urgency cues, requests for bank details, impersonation of trusted institutions) and flags them mid-call.
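
Google’s detection runs as an on-device machine-learning model, but the underlying idea can be caricatured in a few lines of Python. In the toy sketch below, the cue categories and patterns are invented for illustration; a real system learns them rather than hard-coding them.

```python
import re

# Toy mid-call cue detection: scan a live transcript chunk for
# co-occurring social-engineering signals. Illustrative only.
CUES = {
    "urgency":        r"\b(right now|immediately|within the hour|act fast)\b",
    "payment detail": r"\b(card number|pin|one.?time code|account number)\b",
    "impersonation":  r"\b(fraud department|tax office|your bank|the police)\b",
    "secrecy":        r"\b(don't tell|keep this between us|confidential)\b",
}

def scan_transcript(chunk: str) -> list[str]:
    """Return the categories of scam cue present in a transcript chunk."""
    return [name for name, pattern in CUES.items()
            if re.search(pattern, chunk, re.IGNORECASE)]

chunk = "This is your bank's fraud department. I need your card number right now."
hits = scan_transcript(chunk)
if len(hits) >= 2:                # single cues are common in honest calls
    print(f"Possible scam call: {', '.join(hits)}")
```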

The shared logic here is the pause. Not a block. Not a barrier. A moment of friction inserted between the scammer’s approach and the target’s anxiety response. That gap is cognitively significant. The effectiveness of phone scams depends on suppressing rational deliberation; the call screening model reinstates it. You see the transcript. You read the reason. You decide. The scammer’s carefully constructed sense of immediacy dissolves the moment you are no longer actually on the phone with them.

It is a modest intervention, and it is not sufficient on its own. A determined human scammer can answer the screening prompt convincingly, and a voice-cloning tool can do it better. But as a design principle it is worth noting: sometimes the most powerful thing you can build is not a smarter wall, but a better pause.


The system behind the system

At the institutional level, that arms-race logic plays out at larger scale. Banks have historically relied on static fraud detection models, rules built once and applied uniformly. Scammers noticed, and adapted accordingly, shifting from technical attacks toward social engineering simply because it is harder to catch. Authorised push payment fraud, where victims are manipulated into making transfers themselves, has surged for the same reason. It circumvents the systems designed to flag suspicious activity. The transfer looks legitimate because, technically, it is.

Mastercard’s Consumer Fraud Risk system is one concrete example of the shift. Operational in the UK since 2023, it uses AI to score transactions in real time, providing both the sending and the receiving bank with a risk assessment within seconds. In its first year, UK losses to authorised push payment scams fell by 12%, and the system is now being extended globally. What makes it notable is not just the detection rate but the framing: it treats the conversation leading up to a transfer as data, not just the transfer itself.
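
Mastercard has not published the model, so the sketch below is purely schematic: the features and weights are invented, and the real system is a learned model rather than a handful of rules. What it illustrates is the framing, a score computed before the money moves and visible to both sides of the transfer.

```python
from dataclasses import dataclass

@dataclass
class Transfer:
    amount: float
    new_payee: bool               # first ever payment to this account
    receiver_inflow_spike: bool   # receiving account suddenly very busy
    minutes_on_phone: float       # sender was on a call just before paying

def risk_score(t: Transfer) -> float:
    """Crude 0-1 risk score, computed before the transfer settles."""
    score = 0.0
    if t.new_payee:              score += 0.30
    if t.receiver_inflow_spike:  score += 0.30  # classic mule-account tell
    if t.minutes_on_phone > 10:  score += 0.25  # a coached payment?
    if t.amount > 5000:          score += 0.15
    return min(score, 1.0)

# Both the sending and the receiving bank see the same score and can
# hold the payment for extra checks before it settles.
t = Transfer(amount=8000, new_payee=True,
             receiver_inflow_spike=True, minutes_on_phone=25)
print(f"Shared risk score: {risk_score(t):.2f}")  # -> 1.00
```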

The emerging response is adaptive AI fraud detection. Unlike static models that sit fixed until the next manual refresh, these systems update continuously, learn individual behavioural baselines, and flag anomalies in real time. They are watching the conversation, not just the transaction. These tools are no longer asking “is this transfer unusual?” The tell they are looking for now is subtler: is this person behaving as though they are under psychological pressure?
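
At its simplest, a behavioural baseline is a per-customer rolling window and an outlier test. Real adaptive systems learn far richer signals (typing cadence, navigation patterns, whether the customer is on a call), but the toy version below shows the principle: “unusual” is defined relative to you, not to everyone.

```python
import math
from collections import defaultdict, deque

WINDOW = 50      # how many past transfers define "normal" for a user
Z_LIMIT = 3.0    # standard deviations from their own mean that counts as odd

history = defaultdict(lambda: deque(maxlen=WINDOW))

def is_anomalous(user: str, amount: float) -> bool:
    """Flag a transfer that sits far outside this user's own baseline."""
    past = history[user]
    flagged = False
    if len(past) >= 10:                     # need some baseline first
        mean = sum(past) / len(past)
        var = sum((x - mean) ** 2 for x in past) / len(past)
        std = math.sqrt(var) or 1.0         # guard against zero variance
        flagged = abs(amount - mean) / std > Z_LIMIT
    past.append(amount)                     # the baseline keeps adapting
    return flagged

# A customer who usually sends around £40 suddenly moving £6,000:
for amount in [35, 42, 38, 51, 29, 44, 40, 37, 48, 33]:
    is_anomalous("alice", amount)
print(is_anomalous("alice", 6000))  # -> True
```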

Governments have entered the picture too, with varying degrees of ambition. The UK used AI data-matching to prevent over £480 million in fraudulent public sector transactions in the year to April 2025, and has introduced legislation requiring platforms to assess and mitigate their role in enabling fraud. Australia has built a cross-industry Scam-Safe Accord, and Singapore has introduced a Shared Responsibility Framework that allocates scam losses between financial institutions and telecommunications operators, on the principle that if you build the infrastructure that fraud runs on, you bear some of the consequences when it does.

[Image: a three-layer infographic titled “Defence in depth”. Layer one: adaptive bank AI watches the conversation, not just the transfer. Layer two: OS-level call screening inserts friction before the call connects. Layer three: legislation places responsibility on platforms and institutions.]
The defence has depth. The question is whether it has reach. Image by author.

The gap that technology cannot close

All of this is real progress. The tools are working, the regulatory pressure is building, and the basic logic is sound. Fighting psychological manipulation with systems designed to interrupt, delay, and expose it is, at its core, the right instinct.

But there is a dimension of the problem that none of it touches.

The phone calls that Daisy answers, that iOS 26 screens, that adaptive fraud systems try to intercept: many of them are not being placed by predatory opportunists with a laptop and a grievance. They are being placed by people who are themselves victims. According to INTERPOL, hundreds of thousands of individuals from at least 66 countries have been trafficked into scam compounds across Southeast Asia, lured by fake job offers and forced to run fraud operations under conditions that constitute, by any reasonable measure, modern slavery. The scammer on the other end of a romance con or a crypto scheme may be a graduate with a confiscated passport and a daily quota to meet, working 14-hour shifts in a compound surrounded by armed guards.

This does not change the harm done to the people being defrauded. But it does change the shape of the problem. The factory is not a few bad actors with phones. It is an industrial system, deeply embedded in organised crime and, in some cases, in the corruption of the states meant to dismantle it. Law enforcement raids free workers; new compounds open. International coordination is increasing; so is the scale of operations.

Technology is not designed for this problem. It can slow the throughput, but it cannot address the source.

What this means for the people building things

For designers and product teams, the honest takeaway is not pessimistic. The countermeasures described here are meaningful. Daisy is not just a PR campaign. She is a proof of concept for using AI’s capacity for patient, indefinite engagement as a genuine defence. Call screening works. Adaptive fraud detection works. The pause, the delay, the friction: these are legitimate design interventions that save real people from real harm.

But the instinct to locate the solution entirely within the product layer, to build smarter tools and call the problem managed, is one that this particular landscape resists. Fraud at scale is not a UX problem. It is a governance problem, a labour problem, an organised crime problem, and a platform accountability problem that the industry has been slow to own. The Online Safety Act and its equivalents are moving in the right direction, even if their enforcement is still catching up with their ambition.

Thanks for reading! 📖

If you enjoyed this, follow me on Medium for more on the psychology of design and technology.
