Democratisation, panic, quality collapse, then new norms emerging. This isn’t new terrain.
Scroll through any design discussion right now and you’ll find the same anxieties circulating. The specifics vary. Which tools to learn, whether the career ladder still holds, what counts as “distinctly human,” to name a few. But the underlying question is the same: will AI replace us, and what should we be doing about it?
The unease is palpable, and it’s not abstract. Design teams are shrinking. Generative tools are multiplying. Everyone seems to be confidently predicting a future they can’t actually see, either utopian (“AI will free us to do real creative work”) or apocalyptic (“learn to code or learn to starve”).

But what might be harder to notice is that technological disruption actually has a shape. One we’ve seen before, more than once. The printing press destabilised an entire knowledge economy. Desktop publishing sent the design profession into an existential spiral. Each time, the panic felt unprecedented. Each time, it wasn’t. What if history could help us see what’s coming? Not the specifics, but the arc.
The pattern emerges
The shape becomes clearer when we look at three moments of disruption centuries apart, but following the same trajectory.
The printing press: when knowledge broke loose
When Gutenberg’s press arrived in the mid-fifteenth century, it didn’t simply make books cheaper. It fundamentally rewired how information moved through society.
Before print, knowledge was scarce, slow, and tightly controlled. Monasteries and universities held near-monopolies on the written word. A single book could take months to copy by hand. Then, within fifty years of the first press, hundreds of European cities had printing operations churning out material. By 1500, an estimated 20 million books were in circulation. A century later, that number had climbed to 150 million.
The effects were extraordinary and contradictory. The printing press enabled the Scientific Revolution, allowing ideas to compound and spread as never before. It also enabled the witch trials. Sensationalist pamphlets about supposed witches became a profitable genre, fuelling persecution across Europe. Martin Luther’s Reformation spread through printed pamphlets; so did violent conspiracy theories about Catholics and Protestants alike.

The lesson is clear. A technology that democratises access to information doesn’t guarantee the quality or truthfulness of that information. New literacies take time to develop. The critical reading skills we now take for granted took roughly a century to emerge.
The Bauhaus closure: when suppression becomes distribution
In 1933, the Nazis forced the Bauhaus to close, denouncing its modernist principles as “degenerate” and “un-German.” It should have been the end of the most influential design school of the twentieth century.
Instead, the closure scattered its faculty across the globe. Walter Gropius and Marcel Breuer landed at Harvard. Mies van der Rohe took over architecture at the Illinois Institute of Technology in Chicago. László Moholy-Nagy founded the New Bauhaus (now the IIT Institute of Design). Josef Albers shaped a generation of American artists at Black Mountain College before moving to Yale.
The diaspora didn’t dilute Bauhaus principles; it amplified them. What had been a localised experiment in Weimar and Dessau became the foundation of modernist design education worldwide. The attempt to suppress the movement became the mechanism for its global spread.
Sometimes disruption works as distribution.

Desktop publishing: when everyone became a designer
In 1985, Aldus PageMaker arrived alongside Apple’s LaserWriter, and suddenly anyone with a Macintosh could produce professional-looking documents. The design profession collectively panicked.
The concern wasn’t entirely unfounded. Unionised typesetters and paste-up artists faced mass layoffs. Quality initially plummeted as decades of professional expertise were replaced by anyone who could afford the software. The “ransom note effect” became shorthand for amateur layouts stuffed with mismatched fonts and clip art. Desktop publishing opened design to the masses and gave us a decade of Comic Sans business cards.
But the chaos didn’t last. New standards emerged. Design education adapted. The tools that threatened to make designers obsolete ultimately expanded what designers could do, while shifting the value proposition from technical execution toward strategic thinking and taste.
By now, the rhythm should be familiar.

The democratisation paradox
Every technology that opens up a craft brings the same gift and the same curse. More people can participate, but the average quality of output drops, at least initially.
The printing press made publishing accessible beyond monasteries and universities. It also flooded Europe with pamphlets of wildly varying reliability. Desktop publishing put layout tools in the hands of anyone with a Mac. It also gave us church newsletters set in seven different typefaces. AI design tools let anyone generate a wireframe from a text prompt. They also produce work that’s competent, generic, and often indistinguishable from everything else the algorithm has seen.
This isn’t a flaw in the technology. It’s the predictable shape of access expanding faster than skill.
What’s less obvious is what happens to control. The optimistic narrative says access levels the playing field entirely, removing the barriers that kept newcomers out. The historical record suggests something messier. Those barriers don’t disappear; they relocate.
The printing press didn’t eliminate control over information; it shifted that control from monasteries to publishers, censors, and eventually newspaper proprietors. Desktop publishing didn’t remove the need for design expertise. It moved the value from technical execution (typesetting, paste-up) toward creative direction and strategic thinking. The people who thrived weren’t necessarily the ones who resisted the tools, but the ones who figured out where human discernment still mattered.
For the current moment, the issue isn’t whether AI will democratise design. That’s already underway. What’s at stake now is where the new gatekeepers will emerge, and what kind of expertise will command a premium once the baseline execution becomes trivially cheap.
If history is any guide, the answer won’t be “nowhere” and “none.” It will be somewhere we haven’t quite mapped yet.
Expertise doesn’t vanish. It migrates.
There’s a fear that accompanies every new tool: we’re about to become obsolete.
Scribes saw the printing press and assumed their livelihoods were finished. They weren’t entirely wrong. The role of the scribe as we knew it did disappear. But the underlying skills (literacy, careful attention to text, the ability to organise information) didn’t vanish. They found new homes in editing, publishing, librarianship, and scholarship.
Typesetters and paste-up artists watched desktop publishing arrive and saw the end coming. Again, not entirely off. Those specific jobs largely evaporated. But the knowledge embedded in them (understanding of layout, hierarchy, visual rhythm, what makes a page readable) moved upward into roles that required more judgment and less manual execution.
The pattern isn’t that expertise becomes worthless. It’s that expertise gets unbundled from the tasks that used to contain it.
When a tool automates the mechanical parts of a job, what remains is the sensibility that guided those mechanics in the first place. The typesetter’s eye for spacing didn’t disappear when PageMaker arrived; it became the designer’s eye for spacing, operating at a higher level of abstraction. The scribe’s careful attention to accuracy didn’t vanish; it evolved into editorial standards and quality control.
This is cold comfort if you’re currently in the “mechanical” part of the job that’s being automated. The transition is genuine and often painful. But it’s worth noting that the expertise panic, while understandable, has historically been both right and wrong simultaneously. Right that specific roles would disappear. Wrong that the underlying human capability would become worthless.
It’s not about whether your current tasks survive. Some won’t. What matters is what judgment, taste, or contextual understanding you’re currently exercising that the tools can’t replicate. That’s what migrates.

Why disruption feels personal
The cycle of panic, resistance, and eventual adaptation isn’t just historical. It’s psychological.
When a new technology threatens to automate part of your work, the brain registers it as loss. And we’re wired to feel losses more acutely than equivalent gains. Psychologists call this loss aversion, and it explains why a 20% chance of losing your job feels more urgent than a 40% chance of your job improving. The potential downside looms larger, even when the odds favour the upside.
But there’s something deeper at play for creative professionals. Design, like writing or art, tends to get tangled up with identity. It’s not just a job; it’s a way of seeing the world, a source of status, a thing that makes you you. When a tool arrives that can approximate your output, it doesn’t just threaten your income. It can threaten your sense of self.
This is why the AI conversation so often skips past practical questions (what can these tools actually do?) and lands on existential ones (where do I fit in?). The fear isn’t irrational. It’s just not entirely about what it appears to be about.
There’s also the problem of ambiguity. Humans are notoriously bad at sitting with uncertainty. We crave resolution, even false resolution. A confident prediction that turns out to be wrong is often more psychologically comfortable than an honest admission that we don’t know. This helps explain why the discourse tends to polarise into camps: the evangelists who insist AI will liberate us, and the doomsayers who insist it will destroy us. Both positions offer the comfort of certainty. Neither is particularly useful for navigating the transition.
Those who fare best in moments of disruption aren’t necessarily the ones who predict correctly. They’re the ones who can tolerate not knowing for longer than feels comfortable, while staying curious enough to keep experimenting.

The messy middle
Now on to the part of technological transitions that rarely gets the attention. As the futurist Roy Amara observed, we tend to overestimate technology’s impact in the short term and underestimate it in the long term. The hype peaks early, disillusionment follows, and the lasting transformation happens later, more quietly. The turbulence between those phases lasts longer than anyone predicts, and the outcomes depend heavily on choices made while everything still feels uncertain.
We’re in that phase now. AI tools are capable and limited in equal measure. They can generate a serviceable wireframe in seconds and completely miss the point of the product. They can produce twenty variations of a layout and have no idea which one actually solves the user’s problem. The technology is impressive and immature at the same time, which makes it difficult to know how seriously to take either the hype or the doom.
This is familiar territory. The dot-com bubble is worth revisiting here, not as a cautionary tale about unrealistic expectations, but as a reminder that these transitions have real casualties. The companies that survived the crash weren’t necessarily the most innovative or the most technically advanced. They were the ones solving genuine problems rather than chasing novelty. Plenty of good ideas died alongside the bad ones, simply because they arrived at the wrong moment or ran out of runway before the market caught up.
The same dynamic is playing out now. Some AI tools will become essential infrastructure. Others will quietly disappear once the funding dries up. Some design roles will evaporate. Others will emerge that we can’t quite imagine yet. The frustrating reality is that we can’t know in advance which is which.
What history does suggest is that the messy middle rewards a particular kind of stance: neither early adopter enthusiasm nor defensive resistance, but a willingness to experiment without over-committing. The goal isn’t to predict the future correctly. It’s to stay adaptable enough to navigate it as it unfolds.
The dust settles eventually. It just takes longer than the headlines suggest.

A framework for navigating disruption
If history offers any guidance, it’s not a prediction of what will happen. It’s a set of questions worth asking when the ground shifts beneath you.
What’s becoming accessible, and what new problems will that create?
Every tool that lowers the barrier to entry also lowers the average quality of output, at least temporarily. We’ve seen this with print, with desktop publishing, and we’re seeing it now with AI-generated work. Knowing this in advance doesn’t prevent it, but it does help you anticipate where quality and originality will become differentiators.
What expertise is being devalued, and where is it migrating to?
The specific tasks that get automated are rarely the whole story. Whether your current workflow survives intact isn’t the point. It’s which parts of your judgment and sensibility will find new containers once the mechanical work is handled.
Who benefits from the chaos before new norms emerge?
Transitions create temporary winners: those who move fastest, who exploit the gap between what’s possible and what’s understood. Some of these winners build lasting value. Others are simply capitalising on confusion. It’s worth being clear-eyed about which is which.
What did we lose last time that we’ve forgotten we lost?
This one’s harder. Every transition involves trade-offs that only become visible in hindsight. The shift to digital design brought speed and flexibility; it also eroded certain craft skills that are now rare. Not everything lost deserves mourning, but some of it does. Knowing what you value helps you decide what to protect.
These aren’t answers. They’re prompts for thinking more clearly while the dust is still in the air.
What history can and can’t tell us
We can’t know precisely where AI takes design. The specific tools, roles, and workflows of five years from now are genuinely unpredictable. Anyone who claims otherwise is selling something.
But we can recognise the shape of the journey. Democratisation followed by chaos followed by consolidation. Expertise panics that turn out to be half-right. Gatekeepers that shift rather than disappear. A messy middle that drags on, rewarding adaptability over certainty.
History doesn’t repeat, but it does rhyme. And knowing the rhyme scheme helps.
The designers who navigated desktop publishing didn’t ignore it, nor did they abandon everything they knew to chase its novelty. They recognised which parts of their judgment still mattered and found new ways to apply them.
That’s probably the move now, too. Not prediction. Adaptation. And a willingness to stay in the question a little longer than feels natural.
Thanks for reading! 📖
If you enjoyed this, follow me on Medium for more on the psychology of design and technology.
References & Credits
- The Printing Revolution in Renaissance Europe, World History Encyclopedia
- Printing press, Wikipedia
- Fake news was invented 500 years ago — we must learn from its history, Resilience
- Bauhaus: The School of Modernism, Google Arts & Culture
- Bauhaus, Wikipedia
- 7 International Examples of How the Bauhaus Lived On After 1933, ArchDaily
- Bauhaus Movement Overview, The Art Story
- Desktop publishing, Wikipedia
- Aldus PageMaker: Revolutionizing Design with Desktop Publishing in the 1980s, Novedge
- The history of DTP & prepress, Prepressure
- Loss aversion, Wikipedia
- Prospect Theory and Loss Aversion: How Users Make Decisions, Nielsen Norman Group
- Tolerance of ambiguity and uncertainty, OECD
- Amara’s Law, Wikipedia
“Disruption has a shape. Design history shows us what it is.” was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.