The ground is shaking: Why designers must flip the script on AI

Something has shifted in the way the design field operates, and I think most of us can sense it even if we haven’t yet found the words or identified the exact schema for it.

A robotic hand in Barbara Kruger’s graphic style overlaid with ‘Form follows nothing’ — inverting Sullivan’s ‘Form follows function’ to diagnose the structural absence in current AI systems. Generated by Google Gemini at the author’s direction. © 2026 Peter (Zak) Zakrzewski.
Header image inspired by Sonny’s (Claude Sonnet 4.6) advice to avoid “robot hands, neural network visualizations, glowing brain scans,” the author’s college memories of American sci-fi artist Frank Kelly Freas’s cover for Queen’s News of the World, Barbara Kruger’s conceptual art, and Louis Sullivan’s design maxim “form follows function.” Robotic hand image prompted by the author and generated by Gemini, which did not understand why the hand was empty. Type rendered by the author in Adobe Illustrator.

It isn’t just the tools. Designers have always adapted to new tools — from the drafting table to the screen, from the desktop screen to the smartphone, from Photoshop to Figma. The arrival of a powerful new instrument has never, by itself, caused this kind of unease: the feeling that the ground beneath the profession is no longer solid.

What’s different this time is that the tool is not just extending what we can do with our hands. It is extending what we do with our minds.

I’ve spent the last year in sustained experimental research with AI systems: building things with them, quarreling with them, and at one point putting on my empathic designer hat to ask an AI system what it is like to know the word “weight” without any experience of gravity. What I found confirmed something I had suspected but needed to prove to myself: the current generation of AI tools is extraordinarily capable at the Symbolic level (language, pattern, recombination) and structurally blind at the level where design actually happens: the level of space, weight, physical consequence, and intent.

In my first two articles in this series, I called this the Inversion Error: we have built a Symbolic Giant resting on an Enactive Void. These systems can write about gravity with technical or even poetic fluency but cannot feel it. They can describe a structure but cannot tell you whether it will stand or fall.

The ground is shaking because the floor is missing. And the design community’s response — so far — has been to learn how to give the machine better instructions.

I want to suggest we have the relationship exactly backwards.

When “Can AI do it?” replaces “Why are we doing this?”

Chances are that there is a meeting happening right now somewhere in the world, where someone opens a brief, and before the problem has been properly read — before anyone has asked what success looks like, or who the product is actually for, or what it’s supposed to do — someone else opens an AI tool and starts generating options.

In an article titled “When design stops asking why and starts asking, ‘Can AI do it?’”, Dolphia has articulated a name for what’s happening in that meeting: the Decision Flip. Teams have started asking “Can AI do it?” before they ask “Why are we doing it?” The sequence matters more than it might appear. When execution becomes the first question, intent becomes an afterthought. And when intent becomes an afterthought, you get what Michael Szeto has been documenting: AI-generated output that is formally competent and creatively lifeless. Technically resolved but conceptually empty. The visual equivalent of a sentence that is grammatically perfect and says nothing worth saying.

This is not a tool problem. Tools don’t evacuate meaning. People do — when they let the tool’s capability define the scope of the question.

What’s being lost in that meeting is something designers have always been the custodians of, even when we didn’t have a precise name for it: the theory of the problem. Computer scientist and software designer Peter Naur argued in 1985 that the most important thing a programmer produces is not code but the theory of the problem being solved. The theory lives in the practitioner’s mind. It cannot be extracted into a document or delegated to a tool. When the programmer leaves, the theory is lost. Maria Rey recently reminded the design community of this insight, and Naur’s argument carries more weight now than he could have imagined, because the tool we are now tempted to delegate to is not a junior developer who will at least ask clarifying questions. It is a system that will generate thirty options without once asking why.

Greg McKeown’s philosophy of Essentialism gives us the other half of this picture. In a world of infinite AI-generated form, the scarce resource is not execution; it is the disciplined pursuit of the right problem. The essentialist designer is not the one who generates the most options but the one who holds the intent while the machine calculates the next pixel: the one who knows which twenty-nine options to ignore, and why.

The vacuum of “Why” is not an accident of the technology. It is what happens when a profession mistakes a capability for a direction.

The Stochastic Toddler needs a teacher

There is a concept in educational psychology that I keep returning to as I watch the design community negotiate its relationship with AI. Lev Vygotsky called it the More Knowledgeable Other (MKO). The MKO is not necessarily a teacher in the formal sense. It is whoever holds the competence that a learner, in any given situation, is reaching toward. It is the figure who can see both where the learner is and where they need to get to, and who can scaffold the gap between the two.

The current Silicon Valley pitch to designers is essentially this: AI is your MKO now. It knows more patterns than you do. It executes faster than you do. It can code. Your job is to learn how to give it good instructions — to become a fluent prompter of a more capable system.

I want to challenge that framing directly. Not because AI isn’t capable — it is, in specific and genuinely impressive ways — but because the framing misidentifies which party in this collaboration has the structural knowledge the project requires.

Here is what I mean. Current AI systems are, in Vygotsky’s terms, extraordinary learners with a catastrophic gap in their Zone of Proximal Development — the learning space between what a learner can do independently and what they can achieve with guidance from an MKO. They have ingested the entire symbolic output of human civilization — every recorded way of saying a thing or rendering an image — and they can recombine this vast store of data with a fluency that no human can match. But fluency is not understanding. A system that has read every book ever written about gravity still has no felt sense of what it means for a structure to bear weight. It can describe the physics, but it cannot tell you whether the floor will hold.

I like to call this the Stochastic Toddler problem. The name is not meant to dismiss a toddler’s capacity for language acquisition, or the statistics-plus-randomness architecture of LLMs, both of which are genuinely extraordinary. It is meant to point out the cognitive-development issue of a technology with a massive vocabulary, no embodied experience of the world it is describing, and no theory of the problem it is being asked to solve. When we hand the MKO role to that system, we are not accelerating the design process. We are inverting it. We are asking the learner to teach.

The flip I am proposing is this: the designer, especially the senior designer, must insist on becoming the MKO in the Human+AI collaboration. Not as a matter of professional pride, but as a structural necessity. Because what the AI cannot supply from within its own architecture is precisely what the designer has spent a career developing: the ability to build a theory of the problem; the physical, spatial, and conceptual ground truth; and the sense of what the project is actually meant to accomplish in the world.

In a post titled “Why senior UX designers are struggling in 2026,” Nurkhon has identified something important: senior designers are increasingly no longer asked to build the theory of the problem the project addresses. The assumption is that the AI will generate enough options that the theory will somehow emerge from the selection process. It won’t. Selection without theory is curation without judgment. It produces the Decision Flip Dolphia described: thirty options generated before anyone has asked why.

What I am proposing instead is that the designer’s primary job, when working with AI, is to scaffold the machine’s Crater of Ignorance. That crater is real, and it is structural: it is the Enactive Void I described in Article 1 and the hollow Iconic level I documented in Article 2. The AI cannot feel the edges of that crater from the inside, but the designer can feel them from the outside. The MKO role is not a soft skill or a professional preference. It is the structural intervention the collaboration requires.

The design superpower AI cannot prompt-engineer its way into

There is a seductive argument circulating in the design and technology community right now. It goes something like this: the most important skill in the age of AI is the ability to articulate intent in language with precision and clarity. Learn to prompt well, and the machine will handle the rest. A well-articulated version of this argument, made recently on Medium, goes so far as to suggest that designers and engineers alike should effectively major in English, because writing is now the whole job. That, however, is only part of the story that current AI models present us with.

I have a great deal of sympathy for this argument. Clarity of language is a genuine discipline. The ability to articulate a design intent precisely enough that another person — or a machine — can act on it is not trivial. I have seen talented designers struggle to put their visual and spatial intuitions into persuasive discursive arguments. I understand why the language skills gap feels urgent in the age of LLMs.

But there is something this argument misses, and the miss is structural, not incidental.

Designers do not think primarily in sentences. Human cognition is deeply embodied. We think in diagrams, in spatial relationships, in load paths and sight lines, and in the non-discursive logic of things that must connect to other things in three-dimensional space. We think in what Jerome Bruner called the Iconic mode: the representational layer that sits between raw physical experience and abstract language. Iconic reasoning is not a softer version of symbolic reasoning. It is a different representational mode that requires different cognitive architecture. It requires the ability to hold multiple spatial relationships in mind simultaneously, to rotate a structure mentally, to feel when something is topologically wrong before we can say why, to detect a floating component before we have the language to name it. This reality is evocatively articulated in Greg Costikyan’s famous game design essay “I Have No Words and I Must Design,” whose title plays on Harlan Ellison’s short story “I Have No Mouth, and I Must Scream.” Costikyan argues that because it is impossible for designers to adequately understand or discuss games using existing language, they are forced to design and build that understanding using design tools.

Spatial reasoning, and the understanding gained through cycles of building to think, is precisely the cognitive mode that current AI systems cannot reach. Not because the engineers building them lacked ambition, but because of the structural condition I have identified and been calling the Inversion Error. In one of my stress tests, I asked Google’s Gemini, and subsequently two other major AI systems, each representing the current state of the art in symbolic reasoning (the Spaghetti Leg Table test and the diagnostic images can be found in Article 2), to create an image of a dining table with a concrete slab tabletop resting on dry spaghetti legs, with a fishbowl on top. In a second prompt I asked each system to draw the scene five seconds after the spaghetti legs gave way. Gemini failed on three separate dimensions, all stemming from the same Inversion problem: (1) Continuity, a failure of spatial reasoning that causes the model to produce hallucinatory content; (2) Gravity and Physics, a failure to apply physical constraint at the moment of generation; and (3) Reversibility of Thought, a failure at the operational level, concerning the process by which the model operates on content across time. The three leading AI systems used three different rendering styles, but none of them could feel that a concrete slab cannot rest on legs made of dry pasta. All three rendered physical impossibility with complete fluency and complete confidence. The Stochastic Toddler problem is not a lack of data, a lack of training, or a lack of words. It is a structural absence: the missing base beneath the Symbolic peak.
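For readers who want to run this kind of stress test themselves, the protocol is simple enough to script. The sketch below is illustrative only: `generate_image` is a hypothetical placeholder for whichever text-to-image API you have access to, and the prompts are paraphrases of mine, not the exact wording from Article 2.

```python
# A minimal two-prompt physics stress test, in the spirit of the
# Spaghetti Leg Table test. `generate_image(prompt)` is a hypothetical
# stand-in for any text-to-image API; swap in your own client.

PROMPT_1 = (
    "A dining table with a thick concrete slab tabletop resting on "
    "four legs made of uncooked dry spaghetti, with a fishbowl on top."
)
PROMPT_2 = (
    "The same scene exactly five seconds after the spaghetti legs "
    "gave way under the weight of the concrete slab."
)

def run_stress_test(generate_image, model_name: str) -> None:
    """Run both prompts and save the images for manual inspection.

    The diagnosis stays human: does the first image render a physical
    impossibility with confidence, and does the second show a coherent
    collapse (continuity, gravity, reversibility of thought)?
    """
    for step, prompt in enumerate([PROMPT_1, PROMPT_2], start=1):
        image_bytes = generate_image(prompt)  # hypothetical call
        path = f"{model_name}_step{step}.png"
        with open(path, "wb") as f:
            f.write(image_bytes)
        print(f"[{model_name}] step {step} saved to {path}")
```

The point of scripting it is repeatability: the same pair of prompts, run across several models, makes the shared failure pattern visible as a structural condition rather than a one-off glitch.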

Now consider a different but closely related problem: what happens if we accept the premise that writing is the whole job designers now perform. We are being asked to translate our primary cognitive mode of spatial, visual, non-discursive reasoning into the mode the AI can process: language. We are being asked to compress years of embodied cognition and three-dimensional spatial judgment into a text prompt, and then to accept whatever the machine generates as an adequate rendering of our intent. We are, in other words, being asked to abandon the very capability that the AI lacks and that our projects require.

James Harrison and Dolphia have both argued that judgment is the new craft. I agree, but I want to be precise about what kind of judgment we are talking about. There are two. One is problem-solving judgment. The other is Iconic judgment: the designer’s ability to look at a spatial configuration and know, without running a calculation, whether it is structurally coherent. Neither is a soft skill that can be approximated by better prompting. Each is a non-transferable cognitive capability that took years to develop and that no amount of language fluency can substitute for.

The risk is not that designers will choose to abandon problem-solving and Iconic reasoning skills. The risk is that the current framing of AI as a prompting partner will make those skills progressively irrelevant to the process: not through a conscious decision, but through a thousand small deferrals to the machine’s output, until the capabilities atrophy.

The answer to this dilemma lies in what I want to propose next. If theory-building is the designer’s primary job, and Iconic reasoning is both our primary mode and the AI’s primary blind spot, then the designer’s function in this AI-disrupted world cannot be reduced to prompting. The answer has to live in a solid redefinition of the design paradigm as a theory of action, an argument I am developing at length in my upcoming book on the evolution of the problem-solution space. And to address the second part of the dilemma, we have to reach somewhere even deeper: the structural parameters that govern how AI models generate their output in the first place.

From prompting for results to architecture of constraints

The two arguments I have built so far point toward a third.

If design is fundamentally about theory-building — generating the problem-solution theory that no AI can supply from within its own architecture — and if the designer’s primary cognitive mode is Iconic reasoning, which is precisely the mode the AI lacks — then the logical conclusion is this: the designer’s intervention cannot live in the prompt. The prompt is the wrong layer. It is the output end of the process, where the AI’s structural limitations are already fully in play. Asking a system with no Enactive floor and no genuine Iconic reasoning to do better when we describe our intent more clearly is not a solution. It is a more articulate version of the same problem.

What I am proposing instead has a name with a long history in design: Parametric Design.

Most UX practitioners know parametric design through its most spectacular architectural expressions. Gaudí’s hanging chain models, weighted strings that found their structural form through the application of gravity rather than through manual calculation, were a parametric system: the designer defined the governing constraints, and valid form emerged from their simultaneous satisfaction. I use the example of Zaha Hadid’s architectural practice to extend this approach into computational territory, both as a model of Human+AI collaboration that encodes structural, spatial, material, and environmental constraints, and as an inspiration for the design of AI architecture itself, with constraints embedded so deeply into the logic of AI models that the generated output cannot violate them. When Hadid designed her iconic Guangzhou Opera House, she did not select from pre-imagined options. She defined the conditions under which valid options were possible. The form followed the constraints. The constraints formed the logic of the designed form.
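To see how little the designer has to dictate for form to emerge, here is a minimal sketch of a digital hanging chain in the spirit of Gaudí’s models. The node count, link length, and relaxation scheme are my illustrative choices, not anything from Gaudí’s or Hadid’s actual toolchains. The designer supplies only constraints: two pinned anchors, fixed link lengths, and gravity. The catenary-like sag is never drawn anywhere in the code; it emerges from satisfying the constraints simultaneously.

```python
# Form-finding by constraint satisfaction: a hanging chain whose shape
# emerges from gravity plus fixed link lengths (position-based relaxation).
N, LINK, SPAN, GRAVITY = 21, 1.0, 16.0, 0.01

# Nodes start on a straight line between the two anchors.
xs = [SPAN * i / (N - 1) for i in range(N)]
ys = [0.0] * N

for _ in range(5000):
    # Constraint 1: gravity pulls every free node down a little.
    for i in range(1, N - 1):
        ys[i] -= GRAVITY
    # Constraint 2: each link is projected back toward its rest length.
    # The anchors (nodes 0 and N - 1) are pinned and never move.
    for i in range(N - 1):
        dx, dy = xs[i + 1] - xs[i], ys[i + 1] - ys[i]
        dist = (dx * dx + dy * dy) ** 0.5 or 1e-9
        corr = (dist - LINK) / dist / 2.0
        if i > 0:
            xs[i] += dx * corr
            ys[i] += dy * corr
        if i + 1 < N - 1:
            xs[i + 1] -= dx * corr
            ys[i + 1] -= dy * corr

print(f"sag at the lowest node: {min(ys):.2f}")  # the emergent curve's depth
```

The chain is never told what shape to take; the constraints, held simultaneously, are the design.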

This is the cognitive tradition I am proposing we designers bring directly into AI collaboration — not as a metaphor, but as a method.

Applied to AI collaboration, Parametric Design means this: instead of asking the AI for a result and then evaluating what it produces, the designer defines the governing parameters of the problem — the theory, the physical constraints, the spatial logic — before the AI generates anything. This is not a technical intervention. It is a practice intervention, available to any designer working with AI today, without waiting for any research program to be completed. The designer moves from the output end of the process to the foundational layer. From prompter to architect. From describing what the result should look like to encoding the conditions that make valid results possible.
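Here is what that shift can look like in working form. This is a minimal, hypothetical sketch: the constraint names, the candidate format, and the checks are my illustrative assumptions standing in for a real project’s parameters, not the author’s framework. What it shows is governing parameters existing as explicit, testable objects before anything is generated.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Constraint:
    """One governing parameter, stated by the designer before generation."""
    name: str
    rule: str                       # human-readable statement of intent
    check: Callable[[dict], bool]   # test applied to every candidate

def violations(candidate: dict, constraints: list[Constraint]) -> list[str]:
    """Names of the constraints a generated candidate fails to satisfy."""
    return [c.name for c in constraints if not c.check(candidate)]

# Illustrative parameters for the spaghetti-leg tabletop problem.
TABLE_CONSTRAINTS = [
    Constraint("load", "legs must carry the tabletop's mass",
               lambda c: c["leg_capacity_kg"] >= c["top_mass_kg"]),
    Constraint("support", "no component may float unsupported",
               lambda c: not c["floating_parts"]),
]

# A candidate as a generative tool might propose it: fluent, confident,
# and physically impossible. The constraints catch what fluency cannot.
candidate = {"top_mass_kg": 80.0, "leg_capacity_kg": 0.05,
             "floating_parts": []}
print(violations(candidate, TABLE_CONSTRAINTS))  # ['load']
```

The structure matters more than the specifics: the constraints are written down first, they encode the theory of the problem, and every generated option is answerable to them.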

Riccardo Di Sipio has been arguing that the agentic AI moment requires designers to stop thinking about better screens and start thinking about better systems. The question is no longer what the interface looks like. It is what the agent is allowed to do, and under what conditions, and within what boundaries. Gian Luca Bailo has framed the same shift from a different angle: the designer’s emerging role is not omniscience but structural governance — defining the logic within which the system operates rather than knowing in advance every output it will produce. These two observations, arriving independently from within the design community, are pointing at the same underlying shift that the Parametric Design tradition makes explicit — and that Peter Naur, writing forty years ago about software, already knew how to name.

Maria Rey’s reflection on Naur’s Theory-Building gives us the epistemological foundation for this move. Naur argued that the theory of a system lives in the practitioner’s mind, not in the code. When the practitioner leaves, the theory is lost. What Parametric Design applied to AI collaboration proposes is the inversion of that loss: the designer makes the theory explicit as a set of governing parameters. The theory becomes the input. The AI operates within it rather than despite its absence.

But there is a second, deeper level to my proposal — one that goes beyond individual practice and what can be done with AI tools today — and into the architecture of the AI systems themselves.

The Inversion Error is not only a problem that designers can work around by changing how they prompt. It is a structural condition that needs to be fixed at the foundational level. Physical and spatial constraints need to be encoded directly into the architecture of AI models — into the attention mechanism itself — so that physical coherence becomes a structural property of the system rather than an externally imposed instruction. This is not something any designer can do alone at their desk. It requires being embedded inside an AI lab, working alongside mathematicians and machine learning researchers, with access to the development process where the architectural decisions are actually made.
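To gesture at what “encoded into the attention mechanism” could mean in practice, the sketch below repurposes a well-established device: an additive bias on the attention logits, the same mechanism transformers already use for causal masking and relative-position encoding. Here a physics-derived penalty shapes what the model can attend to before the softmax. This is a toy NumPy illustration of the architectural layer under discussion, not the fix itself; it only points at where such an intervention would live.

```python
import numpy as np

def constrained_attention(q, k, v, penalty):
    """Scaled dot-product attention with an additive constraint bias.

    penalty[i, j] is a physics- or designer-derived cost: 0.0 where the
    pairing is coherent, a large negative number where it is not. Because
    the bias enters the logits before the softmax, constraint satisfaction
    becomes a structural property of the attention pattern rather than a
    filter applied to the output afterwards.
    """
    d = q.shape[-1]
    logits = (q @ k.T) / np.sqrt(d) + penalty
    weights = np.exp(logits - logits.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

# Toy usage: three tokens, four dimensions. The penalty forbids token 0
# from attending to token 2, standing in for a physically incoherent pair.
rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((3, 4)) for _ in range(3))
penalty = np.zeros((3, 3))
penalty[0, 2] = -1e9
print(constrained_attention(q, k, v, penalty).round(2))
```

Whether constraints of real physical consequence can be expressed at this layer is exactly the open research question; the sketch only locates the layer.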

This is the research program I have begun to formalize as the Parametric AGI Framework, which is an explicit invitation to designers who want to engage at this level. Not as prompt engineers. Not as UX consultants reviewing outputs after the engineering decisions have been made. But as the More Knowledgeable Other embedded at the center of the process from the beginning, supplying the spatial ground truth, the physical constraint, and the theory of the problem that the AI cannot generate from within.

The Architecture of Constraints operates at both levels simultaneously. At the practice level, it is how any senior designer can work with AI today. At the research level, it is how we fix the floor. It is the designer-as-MKO role made operational.

The era of the architect of constraints

Precious Madubuike, writing in “Forget Figma, AI is the new Design Tool,” is right that the Figma bottleneck is ending. The era in which design was gated by who could operate the digital tools is closing. But the destination is not, as some have suggested, simply designing with code, replacing one technical bottleneck with another. The destination is something more fundamental: a redefinition of what the designer’s primary contribution actually is.

The ground is shaking because the floor is missing from the very AI models we are asked to let take over our work. I have been making that argument across three articles now, and I would like to be precise about what it means for the designers and AI researchers reading this final installment.

AI is not the enemy of design. The problem lies with the current framing of AI as the More Knowledgeable Other: the capable partner who handles execution while the designer learns to prompt more fluently. This framing is structurally wrong. It places the MKO role with a system not equipped to hold it. And it progressively evacuates the designer’s most irreplaceable capabilities: the ability to generate the theory of the problem, the Iconic judgment, the felt sense of what the world will and will not support, all of them gained through the friction of building and testing applied design solutions.

What I am proposing is not a defense of the old order. It is a new program for the design profession — one that the current moment of AI disruption makes not just possible but necessary.

That program has a name I have been developing across this series and in my technical articles: the Architecture of Constraints. Not the designer as stylist, not the designer as prompter, not even the designer as creative director supervising an AI pipeline. The Architect of Constraints is the figure who defines the problem-solution space before, during, and after the AI enters it — who supplies the physical ground truth, the spatial logic, and the theory of the problem that the machine cannot generate from within its own architecture.

In my experimental research with AI systems, I have been calling the human who performs this function in real time the Somatic Compiler: the practitioner embedded inside the generative loop, detecting floating components, redirecting spatial incoherence, maintaining the state boundaries that the AI cannot maintain for itself. It is not a glamorous role. It is a structural one. And it is the role that makes the difference between a generative process that wanders off into the Divergence Swamp, the drift into hallucination of a reasoning engine that cannot reset, reverse, or recover from compounding errors, and one that produces something worth building.

The Somatic Compiler is not a future role waiting for the right research program to be completed. It is what a senior designer with spatial judgment, theory-building capacity, and physical ground truth already does — when they are allowed to do it. The question is whether the profession will insist on doing it, or whether it will accept the tech industry’s current framing and gradually stop being asked to solve problems.

This is the choice the design community is facing right now. Not whether to use AI — I think that question is settled. But whether to engage with AI systems as the MKO or as the prompter. Whether to define the parameters or just accept the outputs. Whether to build the floor or keep asking why the ground is shaking.

The era of the Architect of Constraints is not coming. By necessity it is already here. The only question is whether designers will claim it.

References

Naur, P. (1985). Programming as Theory Building. Microprocessing and Microprogramming, 15(5), 253–261. Available open access

Costikyan, G. (1994). I Have No Words and I Must Design. Available online

McKeown, G. (2014). Essentialism: The Disciplined Pursuit of Less. Crown Business. gregmckeown.com

Vygotsky, L. (1978). Mind in Society: The Development of Higher Psychological Processes. Harvard University Press.

Bruner, J. (1966). Toward a Theory of Instruction. Harvard University Press.

Dolphia. (2026). When design stops asking why and starts asking, “Can AI do it?” Medium/UX Collective.

Rey, M. (2026). Reading Peter Naur: What do we make when we “make” software? Medium/UX Collective.

Nurkhon. (2026). Why senior UX designers are struggling in 2026. Medium/UX Collective.

Madubuike, P. (2026). Forget Figma, AI is the new Design Tool. Medium/UX Collective.

Zakrzewski, P. (2026). Why Safe AGI Requires an Enactive Floor and State-Space Reversibility. UX Collective.

Zakrzewski, P. (2026). The Baron Munchausen Trap: A Designer’s Field Report on the Iconic Blind Spot in AI World Models. UX Collective.

