We didn’t mean to build this— engagement at any cost
How well-meaning designers became complicit in broken systems and why handing those same briefs to AI could prove catastrophic.

Good intentions, broken systems
A New Mexico state court hit Meta with a $375m fine on 24 March ’26 for misleading users about the safety of its platforms. For a trillion-dollar organisation this amounts to a speeding ticket. What makes the ruling a landmark is that the winning argument called the design features of Meta’s applications into question, citing them as culpable in charges that included a failure to protect minors.
The sad part is that the ruling tells us nothing we didn’t already know; it has been well documented that social media apps exacerbate addictive tendencies and can negatively affect personal behaviour. And however much tech CEOs deny it, many place strict restrictions on the apps and devices their own children are allowed to use.
So how did product designers, who pride themselves on using user research and evidence to inform decisions, allow things to reach this point? I’ve yet to meet a designer who sets out to make a knowingly harmful product; designers want to delight their users. So what drives organisations to such a place? And are we as designers complicit?
This is not a new human failing. Rutger Bregman argues in his 2020 book Humankind: A Hopeful History that individuals are fundamentally decent but are capable of doing terrible things if they believe they are doing good. As Bregman writes, “If you push people hard enough, if you poke and prod, bait and manipulate, many of us are indeed capable of doing evil. The road to hell is paved with good intentions.” This is uncomfortable to hear, but it rings true: well-meaning designers end up embedded in systems that cause real harm.
Let’s step back and look at the trajectory of big-tech experiences over the last few decades. Organisations that once championed effortless user experiences to attract customers have turned that proposition on its head. Customers locked into ring-fenced ecosystems are now exploited for profit at every turn, tolerating poorer and more costly experiences simply because switching away is so inconvenient. This is referred to as ‘platform decay’, though you may be more familiar with the more colourful term “enshittification”, coined by Cory Doctorow to describe exactly this effect.
So what is the perfect storm that allows these unintended yet reprehensible outcomes to manifest? It typically starts with the definition of success: a matrix of engagement-heavy user metrics coupled with aggressive growth and retention targets. The matrix acts as a proxy for profit, and hitting it quickly comes to outrank every other consideration. The consequential human costs of attaining these targets are not reflected in the dashboard. Targets become just numbers to reach by the end of the quarter, by designing the right levers.
“The consequential human costs of attaining these targets are not reflected in the dashboard”
The problems that “might” manifest if the targets are hit are not treated as concerns, because they’re not today’s problems. They are filed as “nice problems to have”, to be dealt with in the future, if we ever get there. But through incremental gains it doesn’t take long for those targets to appear in the product’s rear-view mirror.
With the engine running at full speed, nobody wants to slow the momentum. Financial incentives, quarterly deadlines and external market pressures all steep in a culture that ignores caution and zealously embraces the maxim “move fast and break things.” It starts to become apparent how these success matrices shape design briefs.
This is common across tech. We have seen it play out with security being deprioritised in products: doorbell cameras and web-connected children’s toys have shipped with gaping security flaws because safeguarding has always been a distant second in the success matrix. Alarms were raised only because those consequences were direct and easy to spot, unlike a toxic algorithm embedded as the core feature of a product driving billions in revenue.
The problem with creating such a space is that it does not merely allow bad outcomes to manifest; it makes their manifestation inevitable.
Where do designers sit in all of this? We aim to create product levers that move these metrics. But by focusing narrowly on lever design, we see only the quarterly targets on the horizon, put on our blinders and race for that finish line.
As we push ahead with broken briefs so closely aligned to profit, it becomes easy to justify cutting corners and to drift incrementally further from the original intent. The rewards grow with each step, until you are racing ahead on questionable practices and being pulled over by the authorities for reckless endangerment.
We need to ask ourselves: is this the right finish line? The question becomes even more pertinent when the brief is handed not to a designer but to an agent.
Black boxes briefing black boxes
If we constantly deviate because of these broken briefs, what happens when we do it very quickly and at scale? When we pass flawed briefs to AI agents, are we multiplying the problem?
AI models are non-deterministic: we don’t know exactly what they will output even when we supply identical inputs. So when one AI agent inherits a flawed brief, it passes its interpretation on to another, and so on, a series of black boxes briefing one another. Each hop can introduce a deviation, and a few agents down the chain we can find our original intent heavily diluted or simply misconstrued. It makes Nick Bostrom’s paperclip maximiser thought experiment, in which an AI tasked with making paperclips ends up converting all matter, humans included, into paperclips, feel a lot less far-fetched.

“A series of black boxes briefing one another; each hop can introduce a deviation.”
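To make that drift concrete, here is a deliberately toy sketch in Python, not a real agent framework or anyone’s actual pipeline. The interpret function is a hypothetical stand-in for a non-deterministic model call; it randomly drops or softens clauses purely so you can watch a brief erode hop by hop.

```python
import random

# Hypothetical stand-in for a model call. A real agent would send the brief to an
# LLM and get back a non-deterministic interpretation; here we simulate that by
# randomly dropping or softening clauses, purely to illustrate drift.
def interpret(brief: list[str], rng: random.Random) -> list[str]:
    interpreted = []
    for clause in brief:
        roll = rng.random()
        if roll < 0.15:
            continue  # the agent silently drops a constraint
        elif roll < 0.30:
            interpreted.append(clause + " (where practical)")  # the agent softens it
        else:
            interpreted.append(clause)  # the agent passes it on intact
    return interpreted

original_brief = [
    "maximise weekly active users",
    "do not use infinite scroll for under-16s",
    "surface a session-length reminder after 45 minutes",
]

rng = random.Random()  # unseeded: every run drifts differently, like every chain of agents
brief = original_brief
for hop in range(1, 6):
    brief = interpret(brief, rng)
    print(f"after agent {hop}: {brief}")
```

Run it a few times: sometimes the safeguards survive all five hops, sometimes they are gone by the second, and nothing in the output tells you which constraints were lost along the way.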
By design, AI seeks novel solutions, and we encourage this behaviour; we want it to push the limits of creative problem-solving. But when we combine that appetite for novelty with optimisation for engagement, and no clear ethical framework sets boundaries and constraints, we are unlikely to be prepared for the outcomes the system produces.
The stakes are raised further by the fact that building products has become cheap, so more organisations skip prototyping and go straight to testing on the live population. The user base becomes a petri dish; the moment for an ethical review is replaced by a statistically significant result.
How inhumane can such a system become? When key decisions are made along an entire agent chain, each one may be locally rationalised, but who oversees the chain as a whole?
Fast-forward Meta’s case a few years and suppose the egregiously designed features were all orchestrated and built by AI agents: no individual intended the harm, there was no malicious intent in the prompts, and no single decision acts as the lynchpin on which to hang the blame. Who is responsible? The designer who manages the agents? The business that set the aggressive metric targets? The operations team that didn’t enforce governance? Or everyone in the chain who didn’t stop it?
Who is to blame?
The tools exist. The will does not.
Ethical frameworks have been around for years. There are a number of prominent ones, such as Value Sensitive Design, the Santa Clara Ethical Toolkit, Ethics for Designers and Ethical by Design: all practical, applied toolkits for ethical design practice, and all largely ignored inside organisations.
Applying these frameworks in practice would generate costly “ethical friction”: it would likely impede growth, introduce safeguards for users that reduce prolonged engagement, and in effect dent profit. Ultimately this is not a failure of understanding or knowledge; the choice comes down to people versus profit, and time and time again profit is chosen over people.
The uncomfortable truth is that there is no commercial incentive to adopt such a framework, and we as designers are complicit.
The Meta ruling is a bellwether: enforcement is coming. Big tech has had its free lunch at users’ expense for too long, and whether it likes it or not, people are no longer going to sit idly by and be mis-sold to and exploited.
A comparable example comes from the EU, where regulators have moved beyond GDPR data policies and begun enforcing the Digital Services Act (DSA) on design-related issues. X was fined €120m over its blue checkmark, a design feature found to breach the DSA by misleading users and exposing them to more scams.
The Digital Fairness Act (DFA), currently in draft, aims to go further by explicitly targeting dark patterns, addictive design, unfair personalisation and profiling, misleading influencer marketing, unfair pricing, and problematic subscription and cancellation flows.

“Each time we choose not to apply these ethical frameworks, we are making a choice to be complicit”
With AI redefining how design operates and agents taking on more responsibility, there is an even greater risk that the ‘unintended’ consequences of unchecked design will precipitate very rapidly. Design briefs are our opportunity to set the course straight: to articulate clearly the outcomes we are seeking, alongside the outcomes and consequences we will not accept. These are not limitations; they are our red lines, the guardrails we construct to keep our users safe.
Ethical frameworks are not new; the knowledge and toolkits have been in front of us for years, and the regulation is arriving regardless. As designers, each time we choose not to apply these ethical frameworks, whether in our own designs or when briefing agents, we are making a choice to be complicit.
Good intentions are not enough. Building better requires us to act: to be more critical, to decide what to enforce, what to question and, ultimately, what to refuse.
Further reading on this topic
- Tech Leaders Can Do More to Avoid Unintended Consequences by Wired (May 2022)
- Advocating for People in a Profit-Driven World by People Nerds (Sept 2021)
- Are we all fundamentally good? Philonomist. interview with Rutger Bregman (Nov 2021)
- Is the UK falling out of love with social media? by Dan Milmo, global technology editor, The Guardian (Apr 2026)