Product ethics have never mattered more

OpenAI just struck a deal with the Pentagon. Anthropic refused. And users noticed, which tells us something important about the future of values in product design.

In the last week of February 2026, something unusual happened. A sitting US president took to social media to brand a private technology company a political threat. The Secretary of Defense designated that same company a supply chain risk, a classification previously reserved for foreign adversaries. And hours later, a rival AI lab swooped in and took the government contract that had just been refused.

A split composition illustration. The left half has a light background with a blue outlined shield and padlock. The right half is black with a white fountain pen signing a document. A horizontal line divides both sides.
One company held the line. The other signed it away. Image by author.

By the weekend, users were voting with their subscriptions. It was, by any measure, a remarkable few days.

Most coverage treated this as a political story. A tech industry spat. A Washington power play. But look at it through a design and product lens and something else comes into focus entirely. This was a story about what happens when the values embedded in a product get put under pressure, and whether the architecture holds.

The anatomy of the deal

To understand why any of this matters for designers and product teams, it helps to know what actually happened.

Anthropic, whose Claude AI model had been the only large commercial AI approved for Pentagon use, refused to sign a new contract. The sticking point was a pair of hard limits the company had built into its principles: Claude would not be used for mass domestic surveillance, and it would not be integrated into fully autonomous weapons systems. The Pentagon wanted access for “all lawful purposes.” Anthropic said no.

The response was swift and, legally speaking, unprecedented. The US government then designated Anthropic a supply chain risk. It was the first time that classification had ever been applied to an American company, and the first time it appeared to be used in retaliation for a business simply declining certain contract terms. Every federal agency was directed to cease using Anthropic’s technology.

Within hours, OpenAI had a deal. CEO Sam Altman later admitted it had been “definitely rushed” and that “the optics don’t look good.” The contract was classified, but OpenAI published a partial account, arguing that its agreement protected against the same two red lines Anthropic had insisted upon, only enforced through architecture rather than explicit contract language. The Pentagon, OpenAI argued, had agreed to follow existing law. That was assurance enough.

A horizontal timeline titled The Week That Changed Things, showing five events: Wed 25 Feb, Pentagon demands access for all lawful purposes. Fri 27 Feb, Anthropic refuses and is designated a supply chain risk. Fri 27 Feb, OpenAI announces Pentagon deal. Sat 28 Feb, public backlash begins and the QuitGPT campaign launches. Sun/Mon, Claude overtakes ChatGPT in the Apple App Store.
Five days. One very public stress test for AI ethics. Image by author.

Critics were not entirely convinced. The surveillance loophole is real and worth understanding. Under current US law, government agencies can legally purchase commercially available data from brokers, including location data, financial records and social media activity, and analyse it at scale. AI doesn’t create that loophole. It turbocharges it. Policy experts have warned that this creates a real gap: mass surveillance of Americans, conducted entirely within the law. The question of whether OpenAI’s contract closes that door, or merely gestures toward it, remains genuinely unresolved.

What is resolved is how users responded.

Values as architecture, not afterthought

There is a concept in design that is easy to say and surprisingly hard to do: values-by-design. The idea is that the ethical commitments of a product are not a policy layer you bolt on at the end, or a terms-and-conditions document users click through without reading. They are structural. They live in the decisions made at the earliest stages: what the product will and will not do, who it will and will not serve, where the lines are drawn before anyone outside the building has asked.

Anthropic’s position was, essentially, an argument for values-by-design. The red lines were not negotiating positions. They were architectural features, non-removable by design. Much like safety constraints built into physical infrastructure, they weren’t there because anyone expected them to be tested. They were there because the cost of failure is too high to leave to chance.

OpenAI’s argument was different, and it is genuinely interesting rather than simply wrong. The company contended that deployment architecture was a stronger safeguard than contract language. Cloud-only access, no edge deployment, no direct integration into weapons hardware, and OpenAI personnel kept in the loop — that was the argument. In their view, how the technology is configured matters more than what the paperwork says.

Two product stack diagrams side by side. The left, labelled values as contract, shows an ethics document floating outside the stack connected by a dotted blue line to UI, features and backend blocks. The right, labelled values as architecture, shows ethics embedded as the bottom foundation block beneath UI, features and backend.
Where values live in a product determines how long they last. Image by author.

This is not a frivolous position. Designers know that constraints built into a system are more reliable than constraints that depend on user behaviour. But there is a meaningful difference between designing a system that cannot do a harmful thing and designing a system that is contractually prohibited from doing it while still technically capable. One of those is architecture. The other is policy. And policy, as anyone who has lived through a terms-of-service update knows, can change.
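To make the distinction concrete, here is a deliberately simplified sketch in Python. The names and classes are entirely hypothetical, not either company's actual code or API; the point is only to show the difference between a capability gated by policy and a capability that was never built.

```python
# A simplified illustration of "values as policy" vs "values as architecture".
# All names are hypothetical.

# --- Values as policy: the capability exists, a setting forbids its use ---
class PolicyGatedClient:
    def __init__(self, allow_bulk_analysis: bool = False):
        # A config flag is all that stands between the product and the
        # prohibited use. A contract amendment or a quiet deploy can flip it.
        self.allow_bulk_analysis = allow_bulk_analysis

    def analyse_bulk_records(self, records: list[str]) -> list[str]:
        if not self.allow_bulk_analysis:
            raise PermissionError("Bulk analysis is prohibited by policy.")
        return [f"profile built for: {r}" for r in records]


# --- Values as architecture: the capability is never exposed at all ---
class ConstrainedClient:
    """The bulk-analysis code path simply does not exist here.

    There is no flag to flip and no clause to renegotiate; removing the
    constraint means shipping a different product.
    """

    def analyse_single_record(self, record: str) -> str:
        return f"analysis of one record: {record}"


if __name__ == "__main__":
    gated = PolicyGatedClient()        # prohibited today...
    gated.allow_bulk_analysis = True   # ...permitted once one line changes
    print(gated.analyse_bulk_records(["record A", "record B"]))

    constrained = ConstrainedClient()
    print(constrained.analyse_single_record("record A"))
    # There is no constrained.analyse_bulk_records to call.
```

The first design can drift quietly; the second can only change in ways that are visible to anyone looking at what the product can actually do.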

The distinction matters beyond this specific deal. Every product team makes decisions about what their product will and will not do. Most of those decisions are mundane. Some are not. The question of where values live in a product, whether in the code, the contract, a blog post, or nowhere in particular, is one that design teams are increasingly being asked to answer in public.

No trust in the process

Here is where the research becomes useful, because the public response to the OpenAI deal was not just an emotional reaction. It was consistent with years of data about how people actually behave when companies compromise on values they had claimed to hold.

A 2025 study by Givsly, surveying over 2,100 US adults, found that more than 88% of consumers purchase from brands that align with their personal values, and that six in ten would actively pay more for brands that reflect those values, a willingness that rises to 79% among Gen Z. These are not trivial numbers, and they are not limited to environmental or sustainability concerns. They extend to ethics, transparency and the perceived integrity of a company’s decision-making.

Mintel’s 2024 Global Consumer Trends report found a parallel pattern: consumers are increasingly affiliating themselves with brands that represent their values, and abandoning those that don’t. The social and emotional meaning of a brand, what it stands for rather than just what it does, has become a primary driver of loyalty, particularly among younger demographics who are both the heaviest AI users and the most likely to switch.

Trust in AI specifically is a separate and more acute problem. The most comprehensive global study on AI trust to date, led by Melbourne Business School in collaboration with KPMG, surveyed over 48,000 people across 47 countries. It found that although 66% of people are already using AI regularly, fewer than half are willing to trust it. More striking still: people have become less trusting and more worried about AI as adoption has increased. Usage is up. Trust is down. That is not a reassuring trajectory for an industry that depends on both.

Deloitte’s TrustID Index added a more immediate data point. Trust in company-provided generative AI fell 31% between May and July 2025 alone. Trust in agentic AI systems, those capable of acting independently rather than simply making recommendations, dropped 89% over the same period. These are not gradual declines. They are collapses.

Within days of the OpenAI announcement, Anthropic’s Claude had overtaken ChatGPT in the Apple App Store. Set against that backdrop of falling trust, the shift starts to look less like an emotional protest and more like a rational response. Users had all the proof they needed: two companies, the same pressure, very different choices. Anthropic held its position. OpenAI didn’t. Users drew their own conclusions and, quite literally, moved their business.

Four stat cards in a 2x2 grid on a light blue background. Card 1: 88% buy from values-aligned brands, Givsly 2025. Card 2: fewer than 1 in 2 trust AI despite 66% using it, KPMG/MBS 2025. Card 3: 31% drop in generative AI trust in 2 months, Deloitte 2025. Card 4: 89% drop in agentic AI trust, same period, Deloitte 2025.
The trust numbers don’t lie. Usage is climbing. Confidence isn’t. Image by author.

Mind the say-do gap

There is a complicating layer here, and it is worth being honest about it. Consumer research consistently reveals a gap between what people say they value and what they actually do. The same studies that show strong values-based purchasing intent also show that a large proportion of consumers do not follow through, particularly when switching involves friction or cost.

The Givsly research found that while 88% of Americans purchase from values-aligned brands, far fewer make active switching decisions based on ethics alone. Blue Yonder’s 2025 sustainability survey found that only 20% of consumers believe brands accurately represent their ethical commitments in their marketing, and 26% outright distrust those claims. Nearly a third of consumers have never actually switched brand loyalty toward a company they perceive as more ethical, despite saying they would.

This is the say-do gap, and it is both a challenge and an opportunity for product teams. The challenge is obvious: users have learned not to trust ethical claims made in marketing copy, mission statements and press releases, because they have been broken often enough that scepticism is the rational default. The opportunity is less obvious but more interesting. The products that genuinely close the gap, that demonstrate values through behaviour rather than just claiming them in language, stand to earn a level of loyalty that marketing cannot buy.

For AI products in particular, the bar is high. The 2024 Edelman Trust Barometer found that while 76% of respondents trust the technology sector broadly, only 30% embrace AI specifically. The gap between trust in tech companies and trust in AI suggests that users are already separating the messenger from the message. They might trust the company in general. They are not yet convinced about the technology.

What moves people from scepticism to trust, the Edelman research found, is not better marketing. It is understanding: seeing how something works, what it will and will not do, and having evidence that the company behind it means what it says. When users cannot verify that through direct observation, they look for proxies. And as the events of February 2026 showed, behaviour under pressure is the most powerful proxy of all.

A Google News panel showing multiple headlines about OpenAI’s Pentagon deal and user backlash, including reports of ChatGPT uninstalls surging 295%, over 2.5 million users boycotting ChatGPT, and Claude topping the App Store charts.
The news cycle said it all. Within days of the deal, the story had moved from policy pages to app stores. Image by author.

Building ethics into the blueprint

Most product teams will never face anything quite so dramatic. Most product teams will never have a government designation thrown at them for holding an ethical position. But the underlying dynamic is not unusual at all. Pressure to compromise on product values comes in quieter forms: a big enterprise client who wants a feature that conflicts with user privacy, a growth target that would require loosening data handling practices, a partnership that makes commercial sense but rather less ethical sense.

These are the moments that reveal whether values were ever really built in, or just written down. Ethical commitments that exist only in contract language are as durable as the relationship that contract describes. Those built into the product itself, into what it literally cannot do, are structurally more robust and far more credible to users who are paying attention. Design and product teams are well-placed to make that distinction, and increasingly cannot afford to leave it to legal or comms.

Deloitte’s ethical technology research found that an organisation’s perceived ability to honour its ethical commitments is considered critical to long-term success, and that reputational damage from ethical failure was the concern that kept leaders up at night more than almost anything else. That concern is not abstract. It has a real-world cost, measurable in App Store rankings, subscription cancellations and the kind of brand trust that takes years to build and days to lose.

The industry is at an inflection point. Governments are stress-testing AI ethics in public, in real time, in ways that would have seemed implausible a few years ago. People are watching, and increasingly, they are keeping score. For designers, none of this is new. Every product decision has always been a values decision. What’s changed is that users are arriving at the same conclusion.

There is no neutral ground here. A product either stands for something structurally, or it doesn’t. The war on values is already underway. The more interesting question is which side your product is on, and whether that answer is written into the code, or just the copy.

Thanks for reading! 📖

If you enjoyed this or found it thought-provoking, follow me on Medium for more on the psychology of design and AI.

