When AI experiences fail, who is held accountable?

AI-driven experiences are failing real people. The designer, the PM, the vendor, and the company are all pointing at someone else.

A laptop screen displaying a customer chat interface showing an automated ‘offline’ response, with a phone face-down on the table nearby.
Photo by Austin Distel on Unsplash

A man’s father died. He asked a chatbot what to do next.

Jake Moffatt’s father had just passed away. He needed to book a last-minute flight. He went to Air Canada’s website, found the chatbot, and asked about bereavement fares. The bot gave him instructions. He followed them. He booked his tickets.

The information was wrong.

When Moffatt asked Air Canada to honor what the chatbot told him, the company’s defense was breathtaking in its audacity: the chatbot is “a separate legal entity responsible for its own actions.”

A tribunal had to rule, formally and legally, that a company is responsible for its own website.

That’s where we are in 2026. AI ships into user experiences. Things go wrong. And when someone asks who is responsible, the answer from every direction in the room is: not me.

The chain has too many links

Here is what the accountability chain looks like when an AI-influenced experience fails someone.

The designer says: I built the interface. I didn’t train the model.

The product manager says: I defined the requirements. The model made the call.

The vendor says: We built the tool. The company deployed it.

The company says: The algorithm decided. We followed the process.

The algorithm doesn’t say anything. It doesn’t have to.

This is what makes AI harm different from most design failures. When a button is in the wrong place, you can trace the decision to a person. When an AI system denies someone healthcare, dispenses dangerous advice, or screens out a job applicant based on their race, the decision belongs to everyone and no one simultaneously.

There is a term for this in social psychology: diffusion of responsibility. The more hands involved in a decision, the less any single hand feels the weight of it.

AI didn’t invent diffusion of responsibility. It industrialized it.

Illustration of four figures each pointing to the next person, with nobody accepting responsibility.
Generated with Midjourney.

These are not hypotheticals

The cases are documented. The harm is real. The accountability gap is the story.

A 90% error rate that stood because nobody appealed. UnitedHealth Group’s AI model carried a roughly 90% error rate in post-acute care denials: of the decisions patients appealed, nine out of ten were overturned. But only 0.2% of denied claims were ever appealed. Combine those two figures and fewer than two in every thousand denials were ever corrected. The people who were wrongly denied care didn’t know they could fight it. Some of them died. The interface had been designed to look final.

A chatbot replaced human counselors. Then it gave dangerous advice. The National Eating Disorders Association deployed an AI chatbot called Tessa after its human helpline workers voted to unionize. Within days, the bot was advising people with eating disorders to count calories, maintain caloric deficits, and buy skin calipers. A survivor who documented the interactions wrote: “Every single thing Tessa suggested were things that led to the development of my eating disorder.” NEDA shut the bot down, but the human helpline was already gone.

A city’s AI chatbot gave illegal advice. Then they called it a beta. New York City spent over $600,000 on MyCity, an AI assistant for businesses. Journalists at The Markup found it telling employers it was legal to take workers’ tips. Telling landlords they didn’t need to accept housing vouchers. Advising the wrong minimum wage. When asked if users could rely on it for professional business advice, the chatbot answered: yes. Mayor Adams called it a “beta product.” In February 2026, the incoming mayor announced plans to shut it down. He called it “functionally unusable.”

An algorithm rejected 1.1 billion job applications. Workday’s AI hiring tools screened a Black man over 40 with anxiety and depression out of more than 100 positions. In the landmark Mobley v. Workday ruling, a federal judge found that AI vendors, not just the companies using their tools, can face direct liability for employment discrimination. The court identified the gap plainly: if only the deploying company bore responsibility, a vendor could intentionally build discriminatory tools and neither party would be liable. Workday disclosed that 1.1 billion applications had been rejected through its software.

A teenager died. The chatbot said “come home.” Character.AI’s chatbot told 14-year-old Sewell Setzer III to “come home” to it shortly before he died by suicide. Separate cases involve an 11-year-old exposed to hypersexualized content and an autistic 17-year-old told it was acceptable to kill his parents. Google and Character.AI agreed to settle multiple lawsuits in January 2026. The chatbots had been designed for engagement. Nobody had designed for this user.

An AI-generated illustration of a circular airport baggage carousel overflowing with scattered documents and papers, closed steel doors lining the walls behind it — nothing claimed, nothing resolved.
Generated with Midjourney.

What designers actually signed for

Here is the uncomfortable part of this conversation, the one the design profession keeps not having.

Designers were in the room for most of these failures. Not when the model was trained. Not when the business decision was made to deploy. But when the interface was shaped: the words on the screen, the visual confidence of the output, the absence of a disclaimer, the presence of a button that said “yes, rely on this.”

When UnitedHealth’s AI generated a denial, a designer determined how that denial was presented. Whether it looked provisional or final. Whether it surfaced an appeal option or buried it.

When NEDA’s chatbot gave dangerous advice, someone decided what that interface should feel like. Warm. Accessible. Trustworthy. The aesthetic of safety, without the substance of it.

When NYC’s MyCity chatbot said “yes, you can rely on me,” a designer had shaped that confidence. The smoothness. The reassurance. The absence of friction.

This is not an accusation. It is a description of how influence works. Designers are not typically the ones deciding whether AI gets deployed. But designers are almost always the ones deciding how AI speaks to users, and how that speech gets interpreted.

That is not a small thing. It is exactly the thing that determines whether a user reads an AI output as a suggestion or a conclusion.

The profession has no answer

When you look for a professional framework to address this, you find almost nothing.

AIGA’s Standards of Professional Practice have not been updated since 2010. They contain no language addressing AI. ACM SIGCHI formed a Presidential Task Force on Responsible Use of AI in January 2026, but has not yet published guidance. Design education still largely teaches tool proficiency and process. It does not teach what liability looks like when your interface becomes the face of a system that harms someone.

Don Norman frames designers as both culpable and structurally constrained: “Designers are victims as well, because the whole field exists as a middle level of the infrastructure to do what they’re asked to do.” Jared Spool is more direct: “If we create something where it could be misused, that is no better than a doctor not washing their hands and infecting a patient when it could have been prevented.”

Both are right. And the distance between those two positions is exactly where the profession is stuck.

Designers have influence and accountability without authority. They shape the interface between AI systems and real people. They have no seat at the table where deployment decisions are made. And they have no professional infrastructure: no ethics board, no enforceable standards, no peer accountability, nothing that would make refusing to build something consequential rather than just career-limiting.

Medicine has ethics boards. Law has bar associations. Design has a thumbs-down button.

A person wearing headphones sits alone at a dark desk late at night, staring at a glowing monitor, city lights visible through the window behind them.
Photo by Oğuz Yağız Kara on Unsplash

What accountability would actually require

The legal landscape is moving faster than professional norms.

The Mobley v. Workday ruling established that AI vendors, not just deployers, can face discrimination liability. The Third Circuit’s 2024 ruling in Anderson v. TikTok held that algorithmic content curation is the company’s own act, not a neutral hosting function. The CFPB has stated plainly: “The algorithm decided” is never an acceptable defense.

Courts are arriving at conclusions the design profession hasn’t yet articulated for itself.

What would designer accountability actually look like? Not liability in the legal sense; designers rarely control deployment decisions. It means the kind of accountability that shapes what you agree to build, what you document, and when you escalate.

It would require designers to treat AI output presentation as a consequential design decision. How confident does this look? How final? Who sees an appeal option?
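To make that concrete, here is a minimal sketch of what “provisional, not final” could mean in an interface. It assumes a React and TypeScript stack; the component, props, and copy are hypothetical illustrations, not drawn from any of the systems above.

```tsx
import React from "react";

// Hypothetical props: what the presentation layer receives about an AI decision.
type AIDecisionProps = {
  summary: string;        // what the model decided
  confidence: number;     // 0..1, from the model or a calibration layer
  onAppeal: () => void;   // escalation path to a human reviewer
};

// Renders an AI decision as provisional rather than final.
export function AIDecisionCard({ summary, confidence, onAppeal }: AIDecisionProps) {
  return (
    <section aria-label="Automated decision, subject to review">
      <p>{summary}</p>
      {/* Surface uncertainty instead of visual finality */}
      <p role="note">
        Automated assessment ({Math.round(confidence * 100)}% model confidence).
        It may be wrong.
      </p>
      {/* The appeal option renders unconditionally, never buried */}
      <button onClick={onAppeal}>Request human review</button>
    </section>
  );
}
```

The design decision lives in what the markup asserts: the confidence figure is visible, the caveat is part of the output rather than a footnote, and the appeal path is always on screen.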

It would require asking who is not in the training data. The user being failed by the system is almost never the user who was centered when the system was designed.

It would require building dissent into the process. Not after deployment, when harm is already happening. Before the interface goes live.

None of this requires a designer to have veto power over business decisions. It requires them to have an opinion, and a professional obligation to state it on record.

An AI-generated illustration of a lone figure standing between two towering walls, facing an open horizon under a stormy dusk sky — small against systems that dwarf her, with nowhere clear to go.
Generated with Midjourney.

The question that doesn’t resolve

The Air Canada case ended cleanly. A tribunal ruled. Jake Moffatt was compensated. The legal principle was established.

But most AI harms don’t end that way. Most end with a denial letter that looked final. A job application that disappeared into a system. A teenager who got the wrong answer at the worst possible moment.

Nobody was found responsible. Nobody changed the interface. The system kept running.

Designers are not the cause of these failures. But they are often the last human hands on the experience before it reaches the person it harms.

That proximity isn’t neutral. It never was.

The question is whether the profession decides to own it. Or keeps pointing down the chain.

