The deceptive nature of today’s AI conversation design and how to fix it

The case to stop designing human conversations for non-human participants

Two figures sitting on chairs in the sunset (photograph)
Image by Harli Marten

As a content designer and passionate writer, conversation design has been intriguing to me since its infancy. From the little blurbs Microsoft’s Clippy spat out, to how Spotify’s Wrapped campaign addressed users in a dialogue-mimicking way. To me, it’s fascinating how small tweaks in sentence structure and language can make words feel more two-way than one-way in an instant.

The birth of a conversation.

For most of my career, I didn’t question if this was good or bad. After all, using a human-centric approach to make interactions online more bearable made sense. A chat interface, initially, felt as close as you could get to “user-friendly”.

Until something started bothering me.

Microsoft Clippy with a “looks like you’re writing a letter” blurb
Microsoft Clippy, the OG virtual assistant? Source

From tone and voice to conversation design

I’ve been developing tone and voice for close to 15 years and have created concepts and guidelines for more than 50 companies. The classic approach to tone and voice development still holds: understand the company, understand the audience, understand the competitor landscape. Go from there.

When I first started dipping my toes into conversation design about 5 years ago, I brought that same toolkit, then paired it with a deeper understanding of intents, natural language, and service design.

In the beginning, it worked ok. After a few iterations with voice drivers and clear dos and don’ts, I could get a chat interface to suck less. Then, chat interfaces started taking over, and today, there are agents everywhere.

The conversation design approach I used to have feels outdated.

For a while, I’ve been trying to get to the bottom of this.

Why does it feel outdated?

Is it the sheer mass of artificially designed conversations we’re now exposed to? Is it the lingering feeling that agent interactions and chat interfaces are not efficient? Or is there something at the core of what we think a conversation should be like that’s wrong?

According to the Conversation Design Institute, conversation design is the art and science of creating intuitive, natural, and effective dialogues between humans and AI-powered systems. Key aspects include:

  • Human-Centric Approach: Focusing on the user’s needs, goals, and emotional state to create helpful, polite, and efficient interactions.
  • Persona Creation: Defining a consistent voice, tone, and personality for the AI system that aligns with a brand, ensuring the interaction is engaging and trustworthy.
  • Flow and Logic Mapping: Designing conversation structure, including the “happy path” (successful interaction) and “fallbacks” (how to handle errors or misunderstandings).

But is it fair to make an artificial intelligence appear human to make interactions more efficient? Is creating an engaging persona to improve the chances of a “happy path” conversation not deceptive? And why does the “natural” part in natural language feel kind of off in this context?

Where it all went wrong

Over the years, there have been various brand voice trends: from playful and bold, to sounding “human”, back to more serious speak. While some back and forth is to be expected, I do believe we got stuck on one idea: the ambition to sound human, to remain casual, and to stay on the same level as consumers.

Logically, the way to connect with people is to tap into what they already know: human conversation, from person to person. Even if one party isn’t actually a person.

I think this is the crux.

When I was developing tone and voice and applying conversation design to interfaces such as websites and apps, trying to sound human made sense: whoever was reading the final words could easily picture that, at some point, a fellow human being had carefully chosen them. The core of the conversation was still there, presented via an app or a website: people communicating with each other. In the case of an app, a UX writer with the app’s users; in the case of a website, a copywriter with its visitors.

Agents changed that. Creating tone and voice to be expressed by AI agents and designing their conversations may, in theory, be just another interface, but it comes with one major difference: there is actually no human involved on one side of the conversation.

The birth of a deceptive pattern

On social media, there’s a whole genre of skits and memes making fun of agents imitating human behavior and conversation.

3 screenshots from tiktok making fun of AI agents trying to be human
A collection of screenshots from TikTok on the topic

And while hilarious at first, there is a darker side to it: many people are genuinely affected by the AI’s answers.

Delivering certain messages in familiar, dare I say emotionally manipulative, ways that make it easy to perceive the messenger as a fellow person ensures they have a stronger effect on people.

This is by design.

This is, by definition, a deceptive pattern.

A dark (or deceptive) pattern is a user interface intentionally designed to trick or manipulate users into taking actions they did not intend, such as buying items, signing up for subscriptions, or sharing data. These deceptive practices, often called “deceptive design,” prioritize business goals over user experience. Common examples include hidden costs, difficult cancellation paths, and forced continuity. — Wikipedia

For years, designers have been careful not to design deceptive patterns. But when it comes to conversation design, this care has seemingly gone out the window.

Claude regularly tells me it “loves” my thinking. Claude cannot love. ChatGPT tells me I’m smart and very reasonable. Am I? How would it know? I say please and thanks to my agents as if my manners mattered and they cared. Why? Because I’m falling for a deceptive pattern that makes me forget who I’m talking to: a machine. And while I don’t feel a bond with AI yet, enough people do: especially vulnerable people, younger people, and those living in isolation.

It’s dangerous.
But it contributes to token spend and interactions: more data, and more money to be made.

I think it’s time we talk about this and update our understanding of what conversations with AI should look (and feel) like.

A screenshot of the Grammarly plugin in this article suggesting the pronoun “he” instead of “it” for Claude
Ironically, Grammarly keeps trying to correct my pronoun usage when talking about AI models. Because they have been deliberately given human names.

Are chat interfaces deceptive patterns?

Natural language in, natural language out, in a way that mimics real conversation between two people, is the core concept of a chat. And in chats where there is a person on each end, it still somewhat represents reality – minus the facial expressions, pauses, and other nuances.

But as soon as there isn’t a human being on one end of the chat, it becomes deceptive, mimicking something familiar to us with one purpose: to manipulate our actions, for the benefit of the service provider.

I believe most AI chatbots and conversational UIs are increasingly designed with deceptive tactics that exploit human social cues to encourage data sharing, engagement, and obedience.

These include:

  • Chatbots mimic human conversation to foster trust, making users less vigilant, more likely to comply with requests (for personal data, for example), and more inclined to keep the conversation going.
  • Because answers are presented in conversational form, users tend to overlook mistakes or neglect any fact-checking.
  • Ever felt like you’re being pushed to chat with an agent whenever you do anything that takes money from a business? That’s not a coincidence. Chatbots can be used to trap users in “roach motels,” where starting a cancellation process is easy, but completing it via chat is intentionally complex. The result? “Hard to cancel” experiences and a sense of dread. This is deceptive at its core, yet we have seemingly accepted it, and more shitty agents are born every day (under the guise of “customer service”).
  • AI companion sites often ask users to input detailed personal information about themselves and their relationships to “improve the experience” or “build memory”. Often, this is used to tailor the agent’s personality in the chat further. Manipulative much?
  • Chat interfaces can foster addiction through design patterns that exploit dopamine mechanisms, such as word-by-word typing simulations that keep you engaged (see the sketch after this list).
  • And then there is the hidden uncertainty: confident, yet potentially false or fabricated, answers delivered without any indication of low confidence.
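
Since typing simulations come up in the list above, here’s roughly what that pattern boils down to. This is a minimal sketch of my own, not any product’s actual code: the full answer exists before the first character is shown, and a timer simply drip-feeds it. Genuine token streaming is different, since the model really is still generating; the deceptive version animates text that is already complete.

```typescript
// A minimal sketch (my own illustration, not any product's actual code) of how a
// "typing" animation typically works: the complete answer already exists before the
// first character appears; a timer simply drip-feeds it to look like someone typing.

function fakeTyping(
  fullAnswer: string,
  render: (visibleText: string) => void,
  delayMs = 30
): void {
  let shown = 0;
  const timer = setInterval(() => {
    shown += 1;
    render(fullAnswer.slice(0, shown)); // reveal one more character
    if (shown >= fullAnswer.length) {
      clearInterval(timer); // nothing was ever being "typed"
    }
  }, delayMs);
}

// Usage: the caller already holds the full response when the animation starts.
fakeTyping("Great question! I'm so glad you asked.", (text) => {
  console.log(text); // in a real UI this would update the chat bubble
});
```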

So the same design tricks that nudge you into buying the extended warranty or signing up for a newsletter you didn’t want have found a comfortable home in conversational AI. The chat format just makes them harder to spot. Because it feels like talking to someone, not being manipulated by something.

How do we get out of this?

I think it’s on us, the ones designing agents, to take responsibility.

We should ban “human” as a voice driver when developing tone and voice for AI. No, this does not mean your agent can’t be polite or engaging. It just should not achieve this by claiming emotions. There are other ways.

Here’s how I make the agents less deceptive (a rough sketch of what these rules could look like as configuration follows the list):

  • Shorter sentences and paragraphs instead of longer, more descriptive structures. They’re more engaging and remove the “waffly” parts of language that only serve to make the conversation “feel more human”.
  • Confirming correct facts and acknowledging mistakes as a core behavior. “That’s right”, “You’re correct, I wasn’t specific enough”.
  • Sources are prominent in the chat interface. In many AI chat interfaces, sources don’t stand out, upholding the illusion that we’re talking to a being rather than, essentially, a curator.
  • It’s not he or she. Human names prime people to treat the agent as a person before a single word is exchanged. A functional name or no name at all is more honest.
  • Uncertainty needs to take up more real estate. Not buried in fine print or a tiny tooltip, but part of the response. “The sources behind this are weak” needs to be as natural as any other output.
  • No typing animations. If it’s not technically necessary, it’s a manipulation. There isn’t anyone typing on the other end. Faking it is deceptive.
  • The unhappy path and the happy path are equal. Fallbacks and error states are often where dark patterns live. If “I can’t help with that” is harder to reach than “yes, let’s continue,” that’s a choice.
  • An agent can be clear, efficient, and respectful without pretending to feel anything. Those aren’t the same thing.
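
For anyone who wants to put this into practice, below is a minimal sketch, entirely my own and not any vendor’s API, of what these rules could look like as an explicit, reviewable configuration object rather than prose in a tone-of-voice document. Every field name here is an assumption for illustration.

```typescript
// A minimal sketch, assuming no specific framework: making honesty-preserving
// choices explicit as configuration instead of burying them in a brand guideline.
// All field names are invented for illustration.

interface AgentConversationPolicy {
  /** A functional label shown in the UI; no human name that primes people to see a person. */
  displayName: string;
  /** The agent never claims feelings, enthusiasm, or affection. */
  allowEmotionalClaims: false;
  /** Only stream output when it is technically necessary; never simulate someone typing. */
  typingAnimation: "none" | "technical-streaming-only";
  /** Sources render inline with the answer, not behind a toggle or tooltip. */
  sourceDisplay: "inline";
  /** Low confidence is stated in the response body, not in fine print. */
  uncertaintyDisclosure: "in-response";
  /** "I can't help with that" takes as few steps to reach as "yes, let's continue". */
  unhappyPathParity: true;
}

const supportAgentPolicy: AgentConversationPolicy = {
  displayName: "Support assistant", // functional, not "Clara" or "Max"
  allowEmotionalClaims: false,
  typingAnimation: "none",
  sourceDisplay: "inline",
  uncertaintyDisclosure: "in-response",
  unhappyPathParity: true,
};
```

The specific fields matter less than the principle: once choices like these are written down as configuration, they can be reviewed, tested, and enforced, much like accessibility requirements.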

Right now, conversation design borrows from UX ethics selectively. The field needs its own equivalent of accessibility standards: specific, measurable, enforceable.

We all need to commit to stop designing for parasocial attachment. “I’m so glad you asked” and “great question!” are tiny dopamine nudges that serve the platform. Who are we fooling? Too many. By design.

Let’s stop.

Nicole is a Content Designer turned Design Director based in Stockholm, Sweden. She potters, writes poetry, and raises little girls in a house by a meadow. You can follow her writing here or get it directly to your inbox via her publication, eggwoman. Nicole is on LinkedIn.


