How complexity accumulates

How systems become risky without anyone noticing.

A Painting of Alexander Cutting the Gordian Knot
Alexander Cutting the Gordian Knot | Source: 1st-art-gallery.com

No one decides to build a fragile system. No executive convenes a meeting to discuss how best to make operations inscrutable, unreliable, brittle. No engineer sets out to create software that no one can maintain or discern later on. No organization deliberately designs processes so convoluted that they guarantee failure.

Yet fragile, incomprehensible, unmaintainable, and failure-prone systems are everywhere. They are the norm. Systems that excel and remain resilient are the exception. So how does this happen? Fragile systems didn’t arrive through dramatic decisions or catastrophic errors. They evolved. Their fragility and brittleness accumulated gradually, through a thousand small, locally rational choices that collectively created something unmanageable. Therein lies the importance of Systems Thinking: understanding the aggregate dynamics of individual choices.

Systems Thinking is the best defense we have against complexity run amok. Complexity is an emergent property of systems. It evolves not from reckless decision-making but from responding sensibly to immediate, localized needs. A new feature here, a workaround there, an exception to handle an edge case, a patch to fix an urgent problem. Each addition seems small. Each solves a real problem. Each is approved and implemented with good intentions.

But complexity isn’t an additive attribute. It multiplies.

A diagram that demonstrates that the complexity of a system (its connections) grows combinatorially with the number of components in a system
Source: Author

Take the image above. In the abstract, it does a good job of demonstrating complexity. A system of 3 nodes has just 3 unique connections. Adding 1 more node doubles the number of connections to 6. Double the system again, to 8 nodes, and the count jumps to 28. The number of connections grows combinatorially, far faster than the number of components. Somewhere along the way, the system crosses a threshold where it becomes so fraught with complexity that it creates genuine risk. And like the complexity itself, the probability of unanticipated failure climbs just as quickly.
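The growth the diagram illustrates is easy to check directly. The number of unique pairwise connections among n fully connected nodes is C(n, 2) = n(n−1)/2:

```python
from math import comb

# Unique pairwise connections among n fully connected nodes: C(n, 2) = n*(n-1)/2
for n in [3, 4, 8, 16, 32]:
    print(f"{n:>3} nodes -> {comb(n, 2):>4} connections")
```

Going from 3 nodes to 32 (roughly a 10x increase in components) takes the connection count from 3 to 496, a 165x increase.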

How Complexity Accumulates

The first step in managing complexity is understanding how it sneaks into systems despite everyone’s best efforts to keep things simple.

Feature Creep

Every user wants one more feature. Every stakeholder has a use case that isn’t quite covered. Every competitive analysis reveals something rivals offer that you don’t. The pressure to add is constant and multifaceted.

Individual feature requests seem reasonable. A customer needs the system to handle a specific edge case. A sales prospect will sign if you just add this one capability. An internal team needs special handling for their workflow.

Saying yes to each request improves the system for someone. But each addition increases the surface area of the system, manifesting in more code to maintain, more interactions to test, more documentation to write, more training to conduct, as well as an expanded Optimal Design Domain. The complexity grows combinatorially with the number of features.

Procedural Layering

Organizations respond to problems by adding procedures. A mistake occurs; a new approval process is implemented. An audit finds gaps; a new compliance check is required. A risk materializes; a new control is instituted.

Each procedure makes sense in isolation. We should approve major purchases. We should verify compliance. We should control risk. But procedures accumulate faster than they’re removed. Organizations rarely ask, “What procedure can we eliminate now that we’re adding this one?”

The result is sedimentary layers of process, each representing a response to some past problem, many now obsolete but all still in force because no one has authority or incentive to remove them.

The Theory of Constraints (ToC) identifies this as policy constraints — rules and procedures that become bottlenecks themselves. Goldratt (the creator of ToC) observed that organizations often implement policies to optimize local efficiency but never revisit them when conditions change, creating system-level dysfunction.

Informal Complexity: Workarounds

When formal systems don’t serve user needs well, people create workarounds. They copy data manually between systems. They use spreadsheets to track what the official system should track but doesn’t. They develop informal communication channels because formal ones are too slow.

In my experience as an engineer, systems designer, and consultant, this is actually how business gets done. People are entrepreneurial by nature. There’s rarely a formal meeting to decide how business operations ought to be carried out; people figure out what works, and that becomes the de facto best practice. This is great for getting things done, but it also means there is usually plenty of low-hanging fruit for optimizing these systems, often at low cost.

Workarounds are innovations at the edges. They represent localized problem-solving that keeps work flowing. But they’re also a key driver of hidden complexity. The formal system looks simple on paper, but the actual operating system includes dozens of undocumented workarounds that only certain people know about.

When those people leave, their workarounds break. When systems change, workarounds that depended on specific quirks stop working. When new people join, they don’t know the workarounds exist and make errors because the formal system doesn’t match operational reality.

Technical Debt and Infrastructure Accumulation

In software, technical debt is an explicit, widely recognized concept. Shortcuts are taken to complete a project faster, leaving behind code that should be refactored later. Often, “later” never comes, and the debt accumulates.

But technical debt exists in all systems, not just software. In manufacturing, we regularly see equipment that’s been patched repeatedly, or modules added onto instead of replaced. In organizational design, it appears as reporting lines added without rethinking the fundamental structure. It’s the sales training program that’s been updated piecemeal for every new product instead of redesigned.

Each piece of debt makes future changes harder. The code becomes harder to modify, because the internal logic has grown tangled. The manufacturing equipment becomes more fragile. The organization becomes more difficult to reorganize. The training becomes less effective. The system becomes rigid precisely when it needs to be adaptive.

Complexity Increases Faster Than Our Ability to Understand It

A system’s complexity doesn’t scale linearly with system size. A system twice as large isn’t twice as complex; it’s often four times, eight times, or more.

Interaction Effects

An interaction is the reciprocal cause-and-effect relationship between two components within a system. Understanding interactions is essential to understanding how systems behave as a whole: systems are not merely the sum of their parts. As systems grow, their complexity is amplified by these mutual and cyclical relationships; influences are reciprocal, not linear. A system with 10 components has 45 potential interaction pairs, but a system with 100 components has 4,950.

Most interactions don’t matter most of the time. But under stress, under unusual conditions, or when specific combinations occur, obscure interactions become critical. And the more interactions exist, the more likely that some will create failure modes no one anticipated. This is the principle of resident pathogens.

You cannot understand a system by understanding its parts in isolation. The interactions between parts often dominate behavior. As those interactions multiply, understanding the whole becomes exponentially harder.

Emergent Behaviors

Complex systems exhibit emergent behaviors. Emergent behaviors are system-level properties that don’t exist in any individual component. Traffic jams emerge from individual driver decisions. Market crashes emerge from individual trading behaviors. Organizational dysfunction emerges from individual departmental optimizations. Flocking birds and schooling fish are patterns that emerge from many constituents which don’t, in and of themselves, possess these properties.

A flock of birds during a yellow sunset
A Flock of birds as emergent behavior found in nature | Source: Arstechnica.com

Often spawned by localized decision making, these emergent properties are often negative (at least from the system designer’s perspective), largely because they are unintended and unexpected. And they’re nearly impossible to predict because they emerge from the interaction of many factors, not from any single cause.

A side note/rant:

This is why I’m bearish when it comes to the future of gene editing technology. Gene editing methods like CRISPR use correlations and probability to edit genes for some desired effect, such as eye color. But (1) these genes may correlate with other traits not accounted for (like the relationship between earwax type and body odor), and (2) editing multiple genes collectively can have greater unintended [read: emergent] consequences.

Cognitive Limits

Human working memory can hold roughly seven chunks of information. When a system has hundreds or thousands of interacting components, no individual can hold the complete system in their head. Understanding becomes distributed across many people, each of whom has a partial view.

This fragmentation of understanding is itself a risk. No one sees the whole. Decisions are made based on local knowledge that doesn’t account for global effects. Changes are implemented without understanding full implications. The system becomes too complex for anyone to fully reason about.

Real-World Complexity Failures

Healthcare.gov Launch (2013)

The Affordable Care Act’s federal insurance exchange launched in October 2013 and immediately collapsed. The website couldn’t handle traffic. Applications failed. Users couldn’t complete enrollment.

The now famous error screen of HealthCare.gov | Source: Medium.com

The failure wasn’t a single bug or bad decision. It was systemic complexity. Multiple contractors built different pieces. Systems needed to integrate with existing federal databases. State exchanges needed to interface with the federal system. Security requirements added layers. Compliance rules created conditional logic. Edge cases demanded special handling.

Each component worked (more or less) in isolation. But integrating them revealed cascade failures, timing issues, and interaction effects no one had anticipated. The system was too complex for anyone to fully understand, and that complexity created fragility.

Knight Capital Trading Loss (2012)

Knight Capital deployed new trading software to seven of eight servers. The eighth still ran old code. When trading began, the old code executed differently than the new code. Orders from the mixed system created erratic behavior that cost $440 million in 45 minutes.

The complexity wasn’t in the trading logic itself but in the deployment process, version control, and fail-safes (or lack thereof). Each element seemed manageable. But the interaction of partial deployment, legacy code, and automated trading created a failure mode that destroyed the company.

Boeing 787 Development Delays

Boeing’s 787 Dreamliner was years late and billions over budget, largely due to complexity in managing a global supply chain with unprecedented outsourcing. Boeing delegated entire aircraft sections to suppliers, who delegated to sub-suppliers.

A Boeing Dreamliner Production Line | Source: Seattletimes.com

The complexity wasn’t within the airplane design. It was organizational and logistical. Coordinating work across dozens of companies, ensuring interface compatibility, managing schedule dependencies, and integrating testing added complexity that multiplied with every additional partner and interface.

Detecting Accumulated Complexity

Complexity accumulates invisibly until it causes problems. How do you detect it before it becomes critical?

Warning Signs:

  1. Lengthening cycle times: Changes that used to take days now take weeks. This often signals that complexity has increased friction.
  2. Rising error rates: More defects, more support tickets, more exceptions. Complexity creates more failure modes.
  3. Knowledge silos: Only certain people can work on certain systems because they’re too complex for newcomers to learn quickly.
  4. Fear of change: Teams resist modifications because they’re not confident about side effects. “If it’s working, don’t touch it” becomes the mantra.
  5. Escalating maintenance costs: More time spent fixing and patching, less time spent building new capability.
  6. Integration problems: New features break existing functionality in unexpected ways.
  7. Documentation drift: Formal documentation no longer matches actual operation because the system has evolved through undocumented changes.

These symptoms don’t prove complexity directly, but they correlate strongly with systems that have accumulated too much of it.

Quantitative Measures:

For software systems, metrics like cyclomatic complexity, coupling metrics, and dependency graphs provide numerical complexity measures and diagnostic tools. But even non-software systems can be measured:

  • Decision tree depth: How many conditional branches exist in a process?
  • Role count: How many different roles touch a workflow?
  • Approval layers: How many sign-offs does a decision require?
  • Exception frequency: How often do standard processes require exceptions?
  • Handoff count: How many times does work transfer between people or systems?

High values don’t automatically mean there’s a problem. Complexity is sometimes necessary. But they indicate where to look for opportunities to simplify and where to triage when something breaks.
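Several of these non-software measures can be computed directly from a workflow definition. As a hedged sketch, here is one way to score a hypothetical workflow (the data format, step names, and numbers are all invented for illustration):

```python
# Hypothetical workflow definition; the shape and numbers are assumptions.
workflow = {
    "steps": ["intake", "review", "legal-check", "approval-1", "approval-2", "fulfil"],
    "roles": {"intake": "clerk", "review": "analyst", "legal-check": "counsel",
              "approval-1": "manager", "approval-2": "director", "fulfil": "clerk"},
    "approvals": ["approval-1", "approval-2"],
    "exceptions_last_quarter": 14,
    "runs_last_quarter": 120,
}

def complexity_signals(wf):
    """Compute simple complexity indicators from a workflow definition."""
    roles = set(wf["roles"].values())
    # A handoff occurs whenever consecutive steps are owned by different roles.
    handoffs = sum(1 for a, b in zip(wf["steps"], wf["steps"][1:])
                   if wf["roles"][a] != wf["roles"][b])
    return {
        "role_count": len(roles),
        "approval_layers": len(wf["approvals"]),
        "handoff_count": handoffs,
        "exception_rate": wf["exceptions_last_quarter"] / wf["runs_last_quarter"],
    }

print(complexity_signals(workflow))
```

Tracking these signals over time matters more than any single snapshot: a rising handoff count or exception rate is exactly the kind of slow drift that otherwise goes unnoticed.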

Pruning Unnecessary Complexity

Once you’ve detected complexity, how do you reduce it without breaking things that work?

1. Dependency Mapping

You can’t simplify what you don’t understand. Create visual maps of dependencies:

  • What depends on what?
  • What components interact?
  • What can be changed independently?
  • Where are the tight couplings?

An example of a dependency map
An example of a dependency map | Source: dependency-map.com

Tools exist for software (dependency analyzers, architecture visualization). For organizational systems, this might be process maps, responsibility matrices (RACI), or workflow diagrams.
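Even without specialized tooling, a first-pass dependency map can be built with plain data structures. A minimal sketch, using hypothetical component names (edges point from a component to what it depends on):

```python
# Hypothetical components; an edge c -> t means c depends on t.
deps = {
    "web":     ["auth", "orders"],
    "orders":  ["auth", "billing"],
    "billing": ["auth"],
    "auth":    [],
    "reports": ["orders"],
}

# Reverse map: who depends on each component? High fan-in signals tight coupling.
dependents = {c: [] for c in deps}
for c, targets in deps.items():
    for t in targets:
        dependents[t].append(c)

# Components nothing depends on can be changed with a small blast radius.
independent = [c for c, d in dependents.items() if not d]

print(dependents["auth"])  # every consumer of auth: touching it has wide impact
print(independent)         # safe-to-change leaves
```

Here the reverse map immediately shows that three components depend on auth, so any change there needs system-wide review, while web and reports can evolve independently.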

2. The 80/20 Analysis

Most systems exhibit Pareto distributions: 20% of features deliver 80% of value, 20% of code contains 80% of bugs, 20% of procedures handle 80% of cases.

An Example of a Pareto Distribution
An Example of a Pareto Distribution | Source: scirp.org

Identify:

  • Which features are rarely used?
  • Which procedures handle edge cases?
  • Which code paths are seldom executed?

These low-value, high-maintenance components are prime candidates for elimination. Removing them reduces surface area without significantly reducing capability.

Deming’s focus on variation reduction is relevant here. Simplification reduces sources of variation, making systems more stable and predictable. Eliminating rarely-used features eliminates rare but costly failure modes.
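The 80/20 cut can be made mechanically from usage data. A sketch with invented feature-usage counts: sort by usage, keep adding features until 80% of usage is covered, and flag the rest for removal review:

```python
# Hypothetical feature-usage counts for illustration.
usage = {"search": 4800, "checkout": 3100, "wishlist": 900, "export": 600,
         "gift-wrap": 300, "compare": 180, "bulk-edit": 90, "rss": 30}

total = sum(usage.values())
covered, core = 0, []
# Walk features from most- to least-used until 80% of usage is covered.
for feature, count in sorted(usage.items(), key=lambda kv: -kv[1]):
    core.append(feature)
    covered += count
    if covered / total >= 0.8:
        break

tail = [f for f in usage if f not in core]
print(core)  # the small set carrying most of the value
print(tail)  # low-use candidates for elimination review
```

In this invented dataset, 3 of 8 features cover over 80% of usage; the remaining five are where maintenance cost most likely outweighs value.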

3. Complexity Budgets

Treat complexity as a constrained resource, like memory or budget. Every addition must fit within the budget, which means something else might need to be removed.

This forces explicit trade-offs: “To add this feature, we need to remove three existing ones. Which should go?” The question surfaces costs that are otherwise hidden.

Netflix has reportedly limited microservices complexity this way: teams can add new services, but the total count must stay within bounds, forcing consolidation and simplification as a regular practice.
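A complexity budget can be enforced with a simple gate at change time. A minimal sketch, where the cap and the scoring scheme are assumptions, not any particular organization’s policy:

```python
# Assumed cap on total complexity score; the scoring scheme is hypothetical.
BUDGET = 100

def check_addition(current_score, addition_cost, removals_savings=0):
    """Return (fits, projected_score) for a proposed change under the budget."""
    projected = current_score + addition_cost - removals_savings
    return projected <= BUDGET, projected

# An addition alone blows the budget...
ok, projected = check_addition(current_score=95, addition_cost=12)
print(ok, projected)  # (False, 107): something must be removed first

# ...but paired with a removal, it fits.
ok, projected = check_addition(current_score=95, addition_cost=12, removals_savings=10)
print(ok, projected)  # (True, 97)
```

The value of the gate is less the arithmetic than the conversation it forces: every addition must name what it displaces.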

4. Simplification Audits

Regularly review systems specifically to identify simplification opportunities:

  • Which procedures exist because of problems that no longer occur?
  • Which features could be consolidated?
  • Which integrations could be eliminated?
  • Which exceptions could be standardized?

The Theory of Constraints teaches to focus improvement efforts on constraints. But TOC also recognizes that non-constraint resources shouldn’t be optimized to full capacity. Similarly, not every part of a system needs maximum capability. Some parts can and should be simplified, even if it means slightly reduced local performance, if it improves overall system manageability.

5. Modular Decomposition

Break complex systems into loosely coupled modules with clear interfaces. This doesn’t reduce total complexity, but it contains and partitions it.

A monolithic system with 1,000 interconnected parts is unmanageable. Ten modules of 100 parts each, with well-defined interfaces between modules, is manageable. You can understand one module deeply without needing to understand all modules.

This requires discipline in interface design: modules should interact through narrow, well-specified interfaces, not through deep coupling or shared state. When modularity is maintained, complexity within modules stays local and doesn’t ripple system-wide.

6. Standardization and Platforming

Reduce variety by standardizing components and building on common platforms. Instead of five different authentication systems, use one. Instead of three different data formats, standardize on one.

This trades flexibility for simplicity. You can’t optimize each use case perfectly, but you reduce the number of things that need to be understood, maintained, and integrated.

Standardization of processes is foundational for quality. You can’t improve what varies wildly. You can’t maintain what’s different everywhere. Standardization creates the baseline from which to build, measure, and improve.

Culture and Resisting Complexity

Technical approaches help, but the deeper challenge is cultural. Organizations must develop a bias toward simplicity, which cuts against many incentives.

Incentive Misalignments:

  • Product managers are rewarded for feature additions, not feature removals
  • Engineers are evaluated on what they build, not what they eliminate
  • Processes are added in response to visible problems; removing them is invisible work
  • Budgets reward spending, not simplification

To counter these, organizations need to:

  • Celebrate simplification: Publicize cases where removing features improved the product
  • Measure complexity explicitly: Track metrics like code complexity, process steps, and approval layers
  • Require simplification alongside addition: New features must be accompanied by the removal of old features
  • Create dedicated simplification initiatives: Not as ongoing work but as explicit projects with resources

The Power of “No”

The most important complexity control is saying no to additions. This is politically difficult — every addition has a constituency. But the cumulative cost of saying yes too often is a system that collapses under its own weight.

Saying no requires:

  • Clear criteria for what’s in scope and what’s not
  • Explicit recognition that the capacity for complexity is limited
  • Willingness to disappoint stakeholders in the service of system sustainability
  • Authority structures that can enforce boundaries

W. Edwards Deming’s point about constancy of purpose applies here. Without a consistent commitment to simplicity, complexity accumulates as each decision optimizes locally without considering global effects.

Living with Necessary Complexity

Some complexity is essential. Real-world problems are complex; solutions must match that complexity to some degree. The goal is to eliminate unnecessary complexity while managing necessary complexity effectively.

Essential Complexity:

  • Business rules that reflect genuine domain complexity
  • Integration points that correspond to real organizational boundaries
  • Features that deliver significant value to significant user populations
  • Redundancy that provides resilience

This complexity should be:

  • Explicit: Documented and understood
  • Contained: Modularized so it doesn’t leak
  • Justified: Regularly validated as still necessary

Accidental Complexity:

  • Workarounds for systems that should be fixed
  • Procedures created in response to one-time events
  • Features that sounded good but are rarely used
  • Integration patterns that evolved organically without design

This complexity should be systematically hunted and eliminated.

The distinction isn’t always clear, but the question should be constantly asked: is this complexity necessary, or did it just accumulate?

Conclusion: Complexity as Systemic Debt

Complexity is like financial debt: sometimes useful, always carrying a cost, and dangerous when it accumulates beyond your ability to service it.

Taking on debt to invest in growth can be smart. But debt that accumulates from routine operations without producing value is insidious. It constrains future options, increases risk, and eventually demands painful restructuring.

The same is true of complexity. Some complexity enables capability. But much of how complexity propagates is just accumulation, the residue of past decisions that no one has cleaned up. It makes the system fragile, expensive to maintain, and difficult to evolve.

The challenge is that complexity accumulates gradually and locally while its costs manifest globally and suddenly. Each small addition seems manageable. The cumulative effect is system failure that seems to come from nowhere but was actually built up over years of accretion.

Managing this requires:

  • Vigilance: Constantly watching for complexity creep
  • Discipline: Resisting additions and forcing eliminations
  • Understanding: Mapping and measuring complexity
  • Investment: Dedicating resources to simplification
  • Culture: Valuing simplicity as much as capability

Systems should be designed for improvement, not just for operation. This means building systems that can be understood, modified, and simplified. Systems that resist simplification, where every change risks breaking something else, have accumulated too much complexity. I also mentioned this as a key principle in designing Responsible and Humane Technology.

The risk isn’t that your system will fail catastrophically tomorrow. It’s that complexity accumulates silently until one day you discover your system has become so fragile, so opaque, and so expensive to maintain that it’s effectively unmaintainable. By then, you have few options: live with mounting failures or undertake expensive, risky restructuring.

The solution is prevention: treat complexity as debt, recognize when you’re taking it on, ensure it’s justified, and regularly pay it down. Build systems that can be simplified, not just systems that work. Create cultures that celebrate subtraction as much as addition.

“Perfection is achieved, not when there is nothing left to add, but when there is nothing left to take away” — Antoine de Saint-Exupéry

Because in the long run, the systems that survive aren’t the ones with the most features, the most procedures, or the most components. They’re the ones that remain understandable, maintainable, and adaptable. They are the ones that resisted the inexorable accumulation of unnecessary complexity.


How complexity accumulates was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
