Why AI forces us to embrace emergence instead of clinging to control and understanding
Anthropic CEO Dario Amodei’s recent podcast with Lex Fridman caught my attention with an observation that keeps replaying in my mind. Even as his team works to understand and interpret AI models, he acknowledged a surprising truth:
There’s no reason why [AI models] should be designed for us to understand them, right? They’re designed to operate, they’re designed to work. Just like the human brain or human biochemistry. They’re not designed for a human to open up the hatch, look inside and understand them.
This statement gave me pause. It challenges something deeply ingrained from the last generation of software design: the idea that we must fully understand how something works in order to consider it a valid solution. For years, we’ve doubled down on the belief that ‘data-driven’ design can eliminate uncertainty. The idea is that to make something effective, we first have to deconstruct it, grasp every nuance, and then carefully engineer it into existence.
But there’s an irony here: as we’ve become more obsessed with data-driven certainty, we’ve invented systems that operate beyond our ability to fully analyze or predict. AI models demonstrate that sometimes the most powerful solutions emerge from patterns we can observe but not fully explain.
AI flips the script. It wasn’t designed for human comprehension — it was designed to work. Understanding, if it comes at all, is often after the fact, something we piece together after we see that the system works. And that realization has forced me to rethink my assumptions about design and invention for this new era.
Rethinking “solutions in search of problems”
Traditional design wisdom views “solutions in search of problems” as a criticism. It runs counter to everything we’re taught about being problem-focused and user-centered. But as Anthropic’s head of product design Joel Lewenstein observed in a recent Dive Club podcast with Michael Riddering: “I’ve come to see solutions in search of a problem as not a dirty word at all… as long as you just lean into it and state your assumptions, saying ‘look, there’s the germ of something here and we’re going to explore it.’”
This isn’t about abandoning user-centered design — it’s about recognizing that with AI, understanding often emerges through exploration. The technology’s capabilities are so novel that even those working on the frontier don’t know what’s possible until they see it.
This shift isn’t limited to Anthropic; it’s happening across leading AI companies. Inspired by a recent conversation with Perplexity’s head of design Henry Modisett, Linear CEO Karri Saarinen commented, “At Perplexity they start projects by exploring LLM capabilities with very simple prototypes, even with just a command-line implementation. Only once there’s proof that the idea can work consistently, and that they can bend it to do what they want, do they start designing the experience. Normally, you start with design to explore possibilities, and the tech follows. But in this domain, or this new era of software, LLM/AI is the tool for exploration, and design comes after.”
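A capability probe like the one Saarinen describes can be surprisingly small. Here is a minimal sketch of the idea: run the same prompt several times from the command line and eyeball the consistency before any interface exists. The `fake_llm` function is a hypothetical stand-in, not a real API; you would swap in an actual model client.

```python
# Command-line capability probe: test an idea against a model
# before designing any experience around it.
# `fake_llm` is a placeholder, not a real API client.
import random

def fake_llm(prompt: str) -> str:
    """Stand-in model: returns a canned, slightly varied answer."""
    templates = [
        "Summary: '{q}' touches on three recurring themes.",
        "In short: '{q}' is best framed as a trade-off.",
    ]
    return random.choice(templates).format(q=prompt)

def probe(prompt: str, trials: int = 5) -> list[str]:
    """Run the same prompt repeatedly to gauge consistency."""
    return [fake_llm(prompt) for _ in range(trials)]

if __name__ == "__main__":
    for i, answer in enumerate(probe("why do startups fail?"), start=1):
        print(f"run {i}: {answer}")
```

The point of the exercise is the repeated runs: only when the outputs stay usefully consistent across trials does it make sense to invest in designing the experience.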
Invention comes before understanding
Historically, software design has followed a structured, step-by-step approach — one where every phase is carefully planned to produce a predictable outcome.
- Understand the problem deeply.
- Define a precise solution.
- Craft an experience that is intentional and predictable.
- Ship a finished product that behaves exactly as expected.
But this isn’t how many of the most transformative inventions have come about. If you look at breakthroughs across history, the process is always messy and often reversed:
- Have an intent — an idea of what you’re trying to achieve.
- Experiment, iterate, and push forward without much clarity.
- Uncover an unexpected breakthrough — it works, but not how you thought.
- Study the breakthrough, refine it, and only later figure out why it works.
This pattern of discovery before understanding runs deep. Alexander Fleming didn’t intend to discover penicillin — he noticed something unexpected in his experiment and followed the thread. The steam engine was a product of tinkering; Newcomen and Watt refined working models decades before scientists understood the laws of thermodynamics that made them possible. Early radio pioneers transmitted signals across great distances without fully understanding the physics of electromagnetic waves. And the use of anesthesia in surgery revolutionized medicine long before scientists figured out its precise mechanism of action.
AI amplifies this historical pattern
AI doesn’t just follow this pattern — it speeds it up.
Traditional software is deterministic; AI is probabilistic. It doesn’t follow rigid rules — it generates outputs based on patterns and likelihoods we can observe but not fully predict. The technology itself resists complete upfront understanding.
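The contrast can be made concrete in a few lines. This toy sketch (not any real model) shows the difference: a deterministic function always maps the same input to the same output, while a probabilistic one draws from a distribution you can characterize but not pin down run to run.

```python
# Toy contrast between deterministic and probabilistic behavior.
# The "temperature" knob here is illustrative, loosely echoing
# the sampling temperature used in real LLMs.
import random

def deterministic(x: int) -> int:
    # Traditional software: same input, same output, every time.
    return x * 2

def probabilistic(x: int, temperature: float = 1.0) -> int:
    # AI-style: output drawn from a distribution around a tendency.
    # Higher temperature widens the spread of likely outputs.
    return x * 2 + round(random.gauss(0, temperature * 3))

print(deterministic(5))                               # always 10
print(sorted({probabilistic(5) for _ in range(100)})) # a cluster near 10
```

You can test the probabilistic function all day and still never enumerate its outputs in advance; you can only describe their shape. That is the property legacy design methods are not built for.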
The challenge is that many designers, engineers, and product teams are still trying to apply legacy design methodologies to a technology that simply doesn’t work that way. AI doesn’t respect our craving for certainty. It doesn’t wait for us to fully understand it before showing results. And the more we try to force it into rigid, explainable, deterministic workflows, the more we suffocate its potential.
Why design struggles to let go — but why it should
This has forced me to confront my own biases. I’ve spent a decade designing software, and the instinct to make things fully understood before they exist is deeply ingrained. It’s particularly rooted in the culture of UX design — this idea that we can’t build effectively unless we first have a full grasp of what we’re making.
AI challenges this instinct and asks us to revise our beliefs. It requires us to lean into the ambiguity, to design before we fully understand, and to shape the raw materials of generative outputs as useful options emerge. As Lewenstein notes, “You can talk about AI and you can write about AI, but there’s something just so powerful about seeing a working prototype and feeling the dynamic, stochastic nature of it… seeing a website get rendered in real time, iterating on it and seeing it change in front of you — it’s just magical.” Understanding comes through doing, through making something tangible that we can respond to and refine.
Confronting this tension reminds me of the Daoist concept of Wu Wei, often translated as “effortless action” or “without force.” It’s the idea that instead of rigidly trying to control every element of a process, we should move with the natural flow of things — guiding and shaping, rather than imposing. Wu Wei isn’t passivity; it’s about working with forces rather than against them. In AI design, this means crafting interactions where users guide and shape outcomes, rather than micromanaging every detail. It’s like how a surfer harnesses a wave’s energy rather than trying to control the ocean.
Design must guide, not control
So what does it look like to design without force? Instead of suppressing the unknown, we embrace it as part of the process.
- We create affordances, not strict controls: building interfaces that guide behavior rather than dictate it. Instead of trying to expose every parameter, we need interfaces that let users navigate AI while embracing its variability.
- We prioritize steerability over explainability: giving users meaningful, intuitive ways to shape AI’s behavior without needing to understand its internals. The goal isn’t to make the black box transparent, but to make it controllable at the right level of abstraction.
- We embrace emergence: designing systems that adapt and evolve, rather than ones that are locked into rigid, pre-defined behaviors. This means creating spaces where unexpected capabilities can surface and be refined through use.
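As a sketch of what steerability without explainability might look like in practice, consider exposing a few meaningful knobs that are translated into model guidance, rather than surfacing internals. The control names and prompt template below are illustrative assumptions, not a real product's API.

```python
# Steerability sketch: expose a few user-meaningful controls and
# translate them into model guidance, instead of exposing internals.
# Knob names and the prompt template are hypothetical.
from dataclasses import dataclass

@dataclass
class SteeringControls:
    tone: str = "neutral"      # e.g. "friendly", "formal"
    length: str = "medium"     # e.g. "short", "long"
    creativity: float = 0.5    # 0.0 = conservative, 1.0 = exploratory

def build_prompt(task: str, controls: SteeringControls) -> str:
    """Turn user-facing controls into instructions for the model."""
    return (
        f"Task: {task}\n"
        f"Respond in a {controls.tone} tone, at {controls.length} length. "
        f"Creativity level: {controls.creativity:.1f}."
    )

print(build_prompt("Explain Wu Wei", SteeringControls(tone="friendly")))
```

The user never sees weights, tokens, or sampling parameters; they steer with concepts they already understand, and the system handles the translation. That is control at the right level of abstraction.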
This doesn’t mean that understanding is unimportant. But it does mean we should be wary of overprioritizing upfront understanding at the cost of progress. AI is teaching us that function can — and often must — precede full comprehension. And as designers, builders, and creative thinkers, we need to get more comfortable working in that space.
If this makes you uncomfortable, good. It means you’re seeing the shift. But if it excites you, well, my friend, you’re right where you need to be.
What might become possible if you embrace emergence instead of clinging to control?
Embracing the unknown
This shift is much bigger than a throwaway line on a podcast. It’s a reframe for how we approach design and invention in this new age. For decades, the software business has trained us to believe that predictability, explainability, and control are the highest ideals. But many of the most powerful things in the world — our brains, ecosystems, markets, and now AI models — don’t operate that way.
If you’re designing with AI, start experimenting before you demand clarity. Try tinkering with the raw materials first, then layering on design afterward — like Perplexity does. Embrace the unknown as a creative tool.
As we build in this new era, we need to ask ourselves: what happens when we stop forcing things to fit our desire for immediate understanding? What becomes possible when we embrace discovery as a design principle? And how do we shape these new, emergent systems in ways that are powerful, safe, and genuinely creative?
We may not fully understand AI yet. But if history tells us anything, that might be exactly where we need to be.
Patrick Morgan is the founder of Unknown Arts. If you enjoyed this post, subscribe to his newsletter or follow him on social media: X, LinkedIn, Bluesky.
The end of design certainty was originally published in UX Collective on Medium.