Against cleverness

Design principles for AI in complex systems.

A painting of the Titanic Sinking
Source: Artnet.com

Today we are at the cusp of revolutions in artificial intelligence, autonomous vehicles, renewable energy, and biotechnology. Each brings extraordinary promise, but each also introduces more complexity, more interdependence, and more latent pathways to failure. That makes prudence critical. Good design recognizes what cannot be foreseen. It acknowledges the limits of prediction and control. It builds not merely for performance, but for recovery.

Design, Not Blame

When something goes wrong, our gut reaction is to blame the person involved. Their slip is sometimes called the active failure, but ascribing the outcome to the active failure alone is a mistake. It shows a lack of appreciation for systems, for latent complexity, for the reality of how things fail. This reflex is a vestige of an older worldview, one in which human vigilance and effort were assumed to be the primary safeguards against failure. We now know better.

The systems view rejects this premise entirely. A system, as we have repeated, is perfectly designed to get the results it gets. If a system produces recurring failures, the fault lies not with the operator but with the structure that shaped the operator’s choices. Good design aims not at perfect people but at ordinary people performing reliably under normal conditions.

In that spirit, good performance is not attained by mustering greater attention or exhorting people to “try harder.” Exceptional performance is achieved through exceptional design: design that shapes conditions so that the correct action and the natural action are one and the same. In a well-designed system, error is eliminated not because humans have been improved, but because the system has been made incapable of producing predictable failure.

An Integrated Philosophy of Design

Each of the following major themes offers a distinct lens on design, but together they form a coherent blueprint.

Latent Errors: The Why

Latent errors teach us why systems fail: because latent conditions accumulate, hide, and align. The Swiss Cheese Model and the resident-pathogen metaphor remind us that complexity and opaqueness invite disaster. This is the broadest, most systems-focused perspective on human error and systems design, and it provides the backdrop for all the design considerations that follow.

Reason’s Swiss Cheese Model
Source: Reason, 1997

Design decisions made today become the latent failures of tomorrow. Every shortcut, every unexamined assumption, every added layer of complexity is a pathogen waiting for the right conditions to cause harm.

The Automation Paradox: The How

This paradox shows how the design and integration of automation shapes human performance and cognition. The more trust we place in automation (or any technology), the more trust we must place in it, because reliance necessarily leaves the human actors weaker. The result is a vicious cycle that is not easy to escape.

Automation changes human capability in ways that make system failure more catastrophic. When automation works, humans deskill. When automation fails, humans cannot recover.

Rasmussen’s Conundrum: The Where

Jens Rasmussen reveals where automation excels and where it collapses. The Automation Conundrum illustrates the narrow window of optimal performance and the importance of adaptability outside that window.

Superhuman peak performance means nothing if you cannot ensure conditions stay within the narrow range where that performance is achieved.

A chart showing the Rasmussen Conundrum: Automation exceeds humans in only a tightly controlled environment
Source: Author

Together

When combined, these frameworks yield a unified principle: Design must anticipate failure, accommodate human limitations, and employ technology in ways that extend human resilience.

A Philosophy of Conservative Decision-Making

Few figures embody this philosophy better than Admiral Hyman G. Rickover, the father of the nuclear navy. Under his leadership, the US designed and built the first nuclear-powered submarines. Living and working next to a nuclear reactor carries risks most people cannot fathom. Rickover recognized that catastrophic failures in complex systems seldom arise from the final, active failure. They originate in earlier decisions: decisions about materials, oversight, testing, assumptions, priorities. His remedy was a disciplined ethic of restraint and responsibility.

Admiral Rickover on the cover of Time Magazine, Jan 1954
Source: Content.Time.Com

Rickover insisted on what he called conservative decision-making. This meant favoring the proven over the novel, the simple over the clever, the transparent over the abstract. It also favors direct accountability over distributed blame. Rickover required engineers to understand every system they touched, to foresee how it could fail, and to take personal responsibility for its performance.

This philosophy is not opposed to innovation. It is opposed to undue confidence and corner-cutting. It rejects the fantasy that more automation, more layers of protection, or more complexity can eliminate human fallibility. Instead, Rickover’s ethic aligns with Reason and Rasmussen: the best systems are designed with keen awareness of their limits, with clarity about how they function, and with resilience in the face of the unexpected.

Design and AI

Nowhere are the principles of good design more urgently needed than in systems that incorporate artificial intelligence. AI is not merely another automation layer; it is a new kind of agent inside our systems — opaque, statistical, fast, and prone to unfamiliar failure modes. It makes predictions rather than following instructions, and its logic is embedded in inscrutable data patterns rather than explicit rules. All of this magnifies the challenges highlighted by Reason, Rasmussen, and Rickover.

AI Accumulates Latent Failures

AI systems, by their very nature, accumulate latent failures. They learn from datasets we did not fully inspect, absorb correlations we did not intend, and behave in ways that are not visible or understandable from the outside. A model might perform flawlessly for months before a quiet change in data distribution causes an abrupt collapse.

This is Reason’s “resident pathogens” writ large: dormant vulnerabilities that lie hidden until the right alignment triggers failure. Every training decision, every data preprocessing choice, every architecture selection, every hyperparameter is a potential pathogen. And unlike traditional software where we can inspect the logic, AI embeds these decisions in millions of parameters that no human can comprehend.

The problem is compounded because:

  • Training data is never complete or representative
  • Correlations learned may be spurious
  • Performance on training data doesn’t guarantee real-world performance
  • Models drift as real-world conditions change
  • Failure modes are unpredictable and emergent
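
To make the drift problem concrete, here is a minimal sketch of how a quiet change in data distribution might be surfaced before it becomes an abrupt collapse. It assumes numeric features, hypothetical column names, and an arbitrary significance threshold; it illustrates the monitoring idea rather than prescribing a production monitoring system.

```python
# Minimal sketch: flag distribution drift between a reference sample
# (e.g., training-time data) and recent production inputs, per feature.
# Feature names and the p-value threshold are illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp

def drifted_features(reference: np.ndarray,
                     recent: np.ndarray,
                     names: list[str],
                     p_threshold: float = 0.01) -> list[str]:
    """Return names of columns whose recent values no longer resemble
    the reference sample, according to a two-sample KS test."""
    flagged = []
    for i, name in enumerate(names):
        result = ks_2samp(reference[:, i], recent[:, i])
        if result.pvalue < p_threshold:   # distributions differ significantly
            flagged.append(name)
    return flagged

# Hypothetical usage: columns are [age, income, tenure]; income has quietly shifted.
rng = np.random.default_rng(0)
train_sample = rng.normal(loc=[40, 60_000, 5], scale=[10, 15_000, 2], size=(5_000, 3))
live_sample = rng.normal(loc=[40, 75_000, 5], scale=[10, 15_000, 2], size=(1_000, 3))
print(drifted_features(train_sample, live_sample, ["age", "income", "tenure"]))
# Likely flags "income": the quiet shift no one would otherwise notice.
```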

AI Erodes Human-Centered Design

AI erodes the very foundations of human-centered design. Visibility, mapping, and feedback weaken when decisions emerge from statistical inference. Users cannot form an accurate mental model of a system when its internal logic is fundamentally opaque.

A well-designed traditional system has clear cause-and-effect relationships. You turn the dial, the temperature changes. You press the button, the action occurs. You can build a mental model of how it works and predict what will happen.

AI systems break this clarity. You provide input, you get output, but the relationship between them is inscrutable. Why did the AI make this recommendation? What factors did it consider? What would happen if conditions changed? These questions often have no satisfying answers.

A well-designed AI system must restore visibility through explanations, constraints, and clear domain limits so that human operators understand not just what the AI chose, but why and under what assumptions. This means:

  • Clear communication of confidence levels
  • Explanation of key factors in decisions
  • Explicit boundaries of competence
  • Graceful degradation when uncertain
  • Human-understandable reasoning paths
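
One way to make these properties concrete is for the system to return a structured decision rather than a bare answer. The sketch below is a minimal illustration; the Recommendation fields and the 0.7 confidence floor are assumptions chosen for the example, not a standard interface.

```python
# Sketch of a prediction that carries its own explanation and limits.
# All names and thresholds here are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    answer: str                       # what the AI chose
    confidence: float                 # 0.0 - 1.0, always shown to the operator
    key_factors: list[str]            # the main inputs behind the choice
    in_domain: bool                   # is the input inside the model's known competence?
    assumptions: list[str] = field(default_factory=list)

def present(rec: Recommendation, min_confidence: float = 0.7) -> str:
    """Degrade gracefully: below the confidence floor or outside the
    model's domain, hand the decision back to the human with context."""
    if not rec.in_domain:
        return "Outside the system's known domain; deferring to the operator."
    if rec.confidence < min_confidence:
        return (f"Low confidence ({rec.confidence:.0%}); treat as a suggestion only. "
                f"Factors considered: {', '.join(rec.key_factors)}")
    return (f"{rec.answer} (confidence {rec.confidence:.0%}; "
            f"based on: {', '.join(rec.key_factors)})")
```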

AI Intensifies the Automation Paradox

Rasmussen’s automation conundrum becomes particularly acute with AI. AI excels in routine, predictable environments but breaks sharply at the edges, especially when people are deliberately trying to break it. When conditions drift or the unexpected occurs, AI systems fail in ways that human operators are least prepared to correct. Meanwhile, human skill declines as more decision-making is delegated to the machine.

A sketch explaining the automation paradox. Reliance on technology leads to fast, unexpected failures
Source: Sketchplanations.com

The result is a brittle system: high-performing on ordinary days, but extremely volatile and vulnerable on extraordinary ones.

But AI makes this worse than traditional automation because:

  • AI failure modes are less predictable (it doesn’t just stop working; it confidently produces wrong answers)
  • AI operates in domains requiring judgment (not just mechanical tasks)
  • AI deskills faster (it handles tasks humans used to do cognitively, not just physically)
  • Recovery is harder (humans may not recognize AI errors without domain expertise)

AI Demands Conservative Decision-Making

Rickover’s philosophy offers the necessary counterweight. Conservative decision-making demands restraint: use AI where it is appropriate, proven, and transparent, not merely where it is impressive.

This means:

  • Favor smaller, interpretable models over unnecessarily complex ones
  • Limit autonomy in high-stakes domains
  • Maintain human accountability for every decision the system makes
  • Require understanding of failure modes before deployment
  • Choose proven approaches over novel ones
  • Insist on transparency in how decisions are made

In Rickover’s world, responsibility cannot be delegated to software. Someone must always “sign their name.” This echoes a widely circulated image, allegedly a memo from IBM in the 1970s: “A computer can never be held accountable… therefore a computer must never make a management decision.” That principle becomes even more critical with AI, where the temptation is to let the algorithm decide without human oversight, let alone accountability.

Practical Philosophy for AI Design

Taken together, these perspectives form a coherent philosophy for AI design. AI does not require us to reinvent the principles of good design. It requires us to apply them more rigorously than ever before.

1. Assume AI Will Fail

Design systems assuming AI will fail, not assuming it will work. This means:

  • Clear handoff protocols when AI reaches its limits
  • Human oversight for critical decisions
  • Fallback mechanisms that don’t depend on AI
  • Monitoring for distribution drift and performance degradation
  • Regular testing outside the training distribution
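
As a rough sketch of that posture, the wrapper below consults the AI but never depends on it. Here, ai_predict, rule_based_estimate, and notify_operator are hypothetical stand-ins for whatever model, fallback rule, and alerting channel a real system would use.

```python
# Sketch: every AI call is wrapped so that outright failure, low confidence,
# or detected drift falls back to a non-AI path and, if needed, to a human.
# ai_predict, rule_based_estimate, and notify_operator are assumed stand-ins.

def decide(features, ai_predict, rule_based_estimate, notify_operator,
           min_confidence=0.8, drift_detected=False):
    # 1. Known degradation: skip the model entirely when monitoring says so.
    if drift_detected:
        notify_operator("Input drift detected; using rule-based fallback.")
        return rule_based_estimate(features)

    # 2. Try the AI, but assume it can fail outright.
    try:
        answer, confidence = ai_predict(features)
    except Exception as exc:
        notify_operator(f"AI unavailable ({exc}); using rule-based fallback.")
        return rule_based_estimate(features)

    # 3. Clear handoff when the AI is outside its comfort zone.
    if confidence < min_confidence:
        notify_operator("Low-confidence prediction; routing to human review.")
        return None  # signals that no automated decision was made

    return answer
```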

2. Preserve Human Capability

Don’t allow AI to completely deskill human operators. This means:

  • Keeping humans in the loop for critical decisions
  • Requiring periodic manual operation of tasks
  • Training for exceptions, not just normal operation
  • Maintaining domain expertise even when AI handles routine cases

3. Demand Transparency

Insist on explainable AI for any consequential application. This means:

  • Understanding what factors influence decisions
  • Knowing the confidence level of predictions
  • Recognizing when the AI is operating outside its competence
  • Being able to audit decisions after the fact
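
One hedged illustration of the auditing point: every consequential decision is appended to a simple, human-readable log that records what the model saw, what it decided, how confident it was, and who signed off. The field names and file format below are assumptions, not a standard schema.

```python
# Sketch: an append-only, human-readable audit trail for AI decisions.
# Field names and the file format are illustrative assumptions.
import json, hashlib, datetime

def log_decision(path, model_version, inputs, decision, confidence,
                 factors, accountable_person):
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash rather than store raw inputs if they are sensitive.
        "input_digest": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "decision": decision,
        "confidence": confidence,
        "key_factors": factors,
        "accountable_person": accountable_person,  # someone always signs their name
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record
```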

4. Define Clear Boundaries

Explicitly define where AI should and shouldn’t be used. This means:

  • Clear specifications of the optimal design domain
  • Hard limits on autonomy in high-stakes situations
  • Explicit human authority for final decisions
  • Recognition that some tasks should never be fully automated
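
Boundaries are easier to enforce when they are written down as data rather than carried as tribal knowledge. The sketch below assumes a hypothetical operating envelope and a short list of actions reserved for humans; the feature names, ranges, and action names are illustrative only.

```python
# Sketch: an explicit, machine-checkable operating envelope for a model.
# Feature names, ranges, and reserved actions are hypothetical.
OPERATING_ENVELOPE = {
    "temperature_c": (-10.0, 45.0),          # conditions the model was validated for
    "speed_kmh": (0.0, 90.0),
    "visibility_m": (200.0, float("inf")),
}

HIGH_STAKES_ACTIONS = {"emergency_stop", "medication_dose"}  # never fully automated

def may_automate(inputs: dict, action: str) -> bool:
    """Allow autonomous action only inside the validated envelope,
    and never for actions reserved for explicit human authority."""
    if action in HIGH_STAKES_ACTIONS:
        return False
    for name, (low, high) in OPERATING_ENVELOPE.items():
        value = inputs.get(name)
        if value is None or not (low <= value <= high):
            return False
    return True
```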

5. Design for Recovery

Plan for what happens when AI fails, not just how it performs when it works. This means:

  • Clear error detection and signaling
  • Graceful degradation rather than catastrophic failure
  • Human-understandable system states
  • Recovery protocols that don’t require AI expertise
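
A minimal way to express this is an explicit set of plain-language system states with simple rules for stepping down between them, so an operator always knows what mode the system is in without needing AI expertise. The states and health signals below are illustrative assumptions.

```python
# Sketch: explicit system states so degradation is visible and recovery
# does not require AI expertise. States and triggers are illustrative.
from enum import Enum

class Mode(Enum):
    NORMAL = "AI assisting"
    DEGRADED = "AI paused - rule-based operation"
    MANUAL = "Human control - follow the printed checklist"

def next_mode(ai_healthy: bool, fallback_healthy: bool) -> Mode:
    """Fail downward one clear step at a time, never silently."""
    if ai_healthy:
        return Mode.NORMAL
    if fallback_healthy:
        return Mode.DEGRADED
    return Mode.MANUAL

# The operator always sees a plain-language state, e.g.:
# print(next_mode(ai_healthy=False, fallback_healthy=True).value)
# -> "AI paused - rule-based operation"
```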

6. Take Responsibility

Maintain human accountability for AI-made decisions. This means:

  • Someone is responsible for every consequential decision
  • Regular review of AI performance and errors
  • Willingness to roll back AI when it underperforms
  • Ethical guidelines that don’t hide behind algorithmic decisions

A Warning Circa 1990

In 1990, James Reason warned of grave technological dangers:

“A point has been reached in the development of technology where the greatest dangers stem not so much from the breakdown of a major component or from isolated operator errors, as from the insidious accumulation of delayed-action human failures occurring primarily within the organizational and managerial sectors.”

If that was true in an age when the internet was still ascendant, television was the dominant form of entertainment, and phones and computers were still geographically bound to the office and the home, it is exponentially more true today. Inscrutable design is a ticking time bomb for failure. AI has merely made that opacity more common, almost necessary.

An old, inscrutable nuclear control panel
Source: Taproot.com

The proliferation of software layers, automated decision-making, globalized workflows, and complex interdependencies has increased both the number of resident pathogens and the difficulty of detecting them. Many failures today arise not from dramatic mistakes but from quiet misalignments: an assumption not documented, a procedure not updated, a dataset not validated, a safeguard added without understanding what it hides.

The Designer’s New Responsibility

Designers, therefore, inherit a new responsibility. Their task is not merely to make systems functional or efficient, but to make them understandable. To build systems with fewer hidden couplings. To reduce opacity. To create clear cause-effect relationships. To design for transparency, resilience, and recovery.

This responsibility extends beyond engineering. Latent errors emerge from management decisions, organizational cultures, incentives, and expectations. The designer cannot control every upstream choice, but the designer can insist on principles like simplicity, clarity, conservatism, and recoverability that reduce the accumulation of resident pathogens.

Designing for Latent Complexity

The imperative is one of prudence, not perfection. Good design recognizes what cannot be foreseen. It acknowledges the limits of prediction and control. It builds not merely for performance, but for recovery.

A system designed in this spirit can:

  • Endure shocks without catastrophic failure
  • Adapt to inevitable new conditions and applications
  • Avoid the catastrophic alignment of latent failures
  • Preserve human agency without depending on individual heroism
  • Use technology without succumbing to its arrogance

Such systems are not just safer. They are more humane, more comprehensible, and ultimately more worthy of the trust we place in them.

The Path Forward

The future of design, especially in the age of AI, requires us to hold two truths simultaneously:

First, technology, including AI, offers genuine benefits. It can enhance human capability, reduce errors in routine tasks, reveal patterns we couldn’t see, and free us from tedious work.

Second, technology, especially AI, introduces new failure modes, new latent errors, new paradoxes that make systems more fragile precisely when they appear most capable.

The solution is not to reject technology but to deploy it with wisdom inherited from generations of systems thinking, human factors research, and hard-won lessons from failures.

This means:

  • Designing with awareness of latent failures
  • Understanding the paradox of automation
  • Respecting the conundrum of narrow performance windows
  • Applying conservative decision-making
  • Maintaining human accountability
  • Building for recovery, not just performance

The designer’s task is not only to create intelligent systems, but to ensure those systems remain understandable, bounded, recoverable, and responsible. AI magnifies both the power of good design and the consequences of poor design. It is the ultimate test of our ability to design systems that align with human judgment, human values, and human limits.

The stakes have never been higher. The principles have never been clearer. The question is whether we have the wisdom and restraint to apply them before the next catastrophic failure forces us to learn these lessons again, at a cost we can ill afford.

The future of design is not about making systems smarter. It’s about making systems wiser. Systems that know their limits, acknowledge their failures, and preserve the human capabilities that technology promises to enhance but often erodes.

That is the philosophy we must carry forward into an age where artificial intelligence will touch nearly every aspect of our lives. The question is not whether AI will be powerful but whether we will be wise enough to design it responsibly.

