Exploring navigation as something bigger than moving through physical and digital spaces — as a fundamental cognitive act

A few weeks ago, I read an article — A new navigation paradigm — that felt relatable, yet unsettling in a way I couldn’t fully articulate. I eventually stopped thinking about it, but the ideas lingered in the background, resurfacing in the most mundane and seemingly unrelated places.
A few weeks later, as I struggled to submit a loan application online, I realized that my frustration stemmed from more than a technical problem. I had lost the ability to navigate the system.
Shortly after, while taking an Uber ride to a friend’s place, I caught myself in a moment of reflection. I was moving through physical space with no attention to my surroundings, completely at the mercy of the driver, who was in turn following instructions from his phone.
The kind of navigation I was doing in an Uber didn’t seem all that different from my attempts to navigate the banking system. Navigation suddenly felt bigger — like a fundamental cognitive act.
In this essay, I examine navigation in four different realms: institutional, physical, digital, and semantic. By exploring the commonalities, I hope to convey what’s at stake when we eliminate the friction of navigation, and how to deliberately regain some of it.
Navigating the system
I am currently trying to get a home loan from a major Indian bank that is committed to the idea of a “digital-first” India. As a sign of this commitment, they have gone “digital-only”: home loan applications are processed exclusively online, or so I’m told. But the portal is a never-ending obstacle course, almost as if designed to test the user’s patience and persistence. It succeeds phenomenally.
I’m prompted for identification proofs as PDFs under 1MB or JPEGs under 5MB, but when I provide them, I get an error: “Oops! Something went wrong.” Of course, many things could go wrong. Underneath the digital interfaces is the cross-continent network that makes them all tick — the hundreds of hardware devices the data has to traverse, the tens of thousands of miles of wire crossing land and oceans. Distributed systems are built on the assumption that failures are ephemeral. But when failures happen consistently, it signals something systemic.

You know a problem is prevalent when it becomes material for comedy. Biswa Kalyan Rath, an Indian stand-up comedian, has a good bit about the state of “digital-first” marriage registration in India.
“Marriage, no problem. Very simple. Getting marriage registered… Big problem. It was always a problem, now there is the illusion of Digital India. There is a website. You can go and register. No problem. Fill the entire form. The “Submit” button does not work.”
I cracked up when I watched it. I’m not laughing now. I’m three weeks into the loan application, and I need it approved.
The deeper problem isn’t the bad website. It’s what it does to its users. I understand technology, but the bank’s application is a black box to me. The loan agent is an expert in the process but has proven useless; he’s as much a spectator in this as I am. I’m not merely dependent on the website; I’m trapped between technology that isn’t built well and a human who doesn’t understand it well. We’ve both lost the ability to navigate the system.
Navigating the physical world
During my childhood, when I traveled by bus to my maternal hometown — Ichalkaranji, Maharashtra’s textile hub — we could tell we had arrived by the pungent smell and rhythmic din of the cotton mills. We could tell the time remaining by the landmarks we passed — the restaurants, the trees, the market, the temples, the stadium. Knowing a destination meant knowing its spatial signature. We had to read the surroundings to know when we had arrived. Navigation was an act of observation. Even the act of describing a destination meant being deeply aware of its characteristics. We saw a place as a point in space, situated relative to other places.
Then came paper maps, which provided a way to abstract physical reality and capture its essence in symbols. They were a leap forward, but they still required us to compare the map against the territory — active interpretation. The map acted as external memory. The processing still happened in your brain. In software terms, this was the separation of storage and compute. The map only told you the layout. You had to orient yourself against it.
Then Google Maps outsourced “compute” too.
It wasn’t just paper maps on a screen. It offered a fundamentally different technology: active navigation. Active interpretation on the user’s part gave way to step-by-step guidance from the software. It not only knew the landscape; it knew your place in it. You just had to trust it. Spatial navigation became a skill you no longer needed. When Google Maps tells me to “take the next exit and stay in the second lane from the right”, I mindlessly execute.
What navigation demands from me now is not intelligence but compliance.

I’m as much a victim as a culprit of this de-skilling that has happened at scale. Due to my overreliance on technology, I don’t remember the street names in my neighborhood anymore, despite having lived in the same place for over two years. It speaks to how little I pay attention to the world around me when the blue arrow and the friendly voice are leading the way.
A 2020 longitudinal study tracked regular drivers over three years and found that people with greater lifetime GPS experience demonstrated reduced spatial memory during self-guided navigation. The longitudinal follow-up pointed to causation, not mere correlation: greater GPS use predicted a steeper decline in spatial memory.
Getting carried away
Compare this to the Uber model.
You set the destination, a driver accepts, you accept, and off you go. One of the “benefits” of Uber is that it relieves you of the mundane act of driving. Your attentional resources, the logic goes, can be better spent on “more important” needs — like scrolling social media on your phone instead of greeting the human three feet away. Thanks to self-driving cars, we’ve even eliminated that guilt, if there ever was any.
Somewhere in that scroll, you lose track of time, of the route, of your environment. You’re physically there, but mentally elsewhere. The only active role you’ve played is scheduling the pick-up and the drop-off. You’ve shipped the package that is yourself.

Navigating the digital world
Consider a typical day using digital interfaces. You log in to your computer or your phone, you find the application you need, navigate through its menus and screens, do what you want to do, close it, move on to the next app. It’s essentially an endless loop of navigation until you log off and go back to the real world.
This is akin to navigation in the pre-GPS world. There is a certain map of the world — the software’s user interface — and you have to learn it. You “onboard” yourself, and navigate it by paying attention. You are spatially aware of the world that the software has created for you. You know what capabilities it has, or where and how to find them.
We are increasingly moving towards an intent-based navigation system, or as I think of it, “Uber-ification” of the digital world. Intent-based software comes in two flavors: search-first, and AI-first.
In search-first applications, you search for keywords, say, “expense report”, and the application shows you potential actions like “View your expense reports” or “File a new expense report”. You select the one you want without knowing where it lives in the application. The geography doesn’t matter.
In AI-first applications, it’s even more extreme. You declare the intent — “File my expenses” — and the agent handles the rest. You’ve achieved your goal without ever knowing where the expense module is housed in the software, or even how to file one yourself.
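To make the contrast concrete, here is a minimal TypeScript sketch. Nothing in it comes from a real product; the action index, the routes, and the runAgent stand-in are all hypothetical. The point is that the search-first path still exposes where things live, while the AI-first path erases the geography entirely.

```typescript
// Hypothetical sketch contrasting search-first and AI-first intent handling.

type Action = { label: string; route: string };

// Search-first: keywords map to candidate actions. The user still chooses one,
// and the route hints at where the feature lives in the app's structure.
const actionIndex: Action[] = [
  { label: "View your expense reports", route: "/finance/expenses" },
  { label: "File a new expense report", route: "/finance/expenses/new" },
];

function searchActions(query: string): Action[] {
  const terms = query.toLowerCase().split(/\s+/);
  return actionIndex.filter((action) =>
    terms.every((term) => action.label.toLowerCase().includes(term))
  );
}

// AI-first: the user states an intent and an agent does the task end to end.
// runAgent is a stand-in for whatever LLM-backed agent a product might use;
// the user never learns where the expense module lives.
async function runAgent(intent: string): Promise<string> {
  return `Done: ${intent}`; // placeholder for the agent's work
}

searchActions("expense report"); // user picks a result and navigates to it
runAgent("File my expenses");    // user delegates the whole journey
```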


There’s a plethora of intent-based software now, with AI embedded in it, eager to translate your intent into outcomes. Traditional navigation is gradually converging into a ubiquitous conversational interface.
Notion AI promises to write your docs for you. Cursor and Claude promise to write your code. Glean writes your performance reviews — the very document that requires sitting with the uncomfortable, hard work of assessing, in your own words, your year, your team’s contributions, and their strengths and weaknesses. Gemini writes your emails.

Each of these smart apps makes you more productive. No navigation required. No blank page to face. No empty file with a cursor blinking. Information architecture — the design of where things live and how they connect — no longer matters. Not to you, at least. The agent navigates for you.
Beyond movement: Navigation as a metacognitive skill
Consider what happens when you navigate without GPS in an unfamiliar city. You assess your current location, identify your destination, survey available information (landmarks, street signs, smells, sounds), hypothesize about which direction to go, make decisions with incomplete information, monitor feedback as you move, adjust when you encounter obstacles, wrong turns, or dead ends, and learn from the experience, building a richer understanding of the territory.
Now consider what you do when you’re trying to understand a topic, say, climate change. You assess what you currently know, identify what you need to know, survey available information (documentation, courses, articles, real-world case studies, papers, mental models), hypothesize about how it might work, decide which path to explore first with incomplete information, monitor feedback as you learn, adjust your mental model when you hit obstacles, and learn, building a richer, more accurate understanding of the system.
Problem-solving is navigation. Learning is navigation. Decision-making is navigation. You’re moving through space — real or conceptual — through uncertainty, towards clarity, understanding, a destination, or an outcome.

This isn’t a metaphor. Herbert Simon’s work describes problem-solving as “search through a problem space.” Neuroscience has also shown that the same neural systems (the hippocampus, with its place and grid cells) support both physical and conceptual navigation. What makes something navigation isn’t whether it moves through physical space, but that it requires relational mapping, position assessment, route planning, and dynamic reorganization.
So we are navigating pretty much all the time. And it’s becoming easier with technology. So what?
Just as GPS use causes a measurable decline in our spatial memory, AI tools that navigate conceptual spaces for us pose the same risk in a different, arguably more critical, domain.
The Pharmakon and the problem with frictionless
Look at technological innovations through the lens of the pharmakon — both remedy and poison — and the remedies are clearly visible: speed, efficiency, productivity, convenience. Even when they’re not, organizations are happy to sell us on them.
The poisons are more subtle: dependency, the erosion of agency, the loss of the cognitive friction that maintains our capacity to navigate. The agent absorbs the cost of navigation, but it also becomes the locus of agency.
Two risks stand out and are worth thinking about:
The transfer of agency: By shifting the agency of navigation to the software, you end up being at the mercy of the navigator. If the AI routes you, you stop thinking about alternatives. The convenience is paid for with a little bit of your money and a lot more of your agency. With the erasure of navigation, Information Architecture gives way to Choice Architecture.
This might be fine for filing tax returns. Humanity would benefit from spending less time doing that. But what happens when the same happens with critical thinking? Comprehension? Reflection? Decision-making?
The vacuum of attention: With the time and attention you’ve regained from not navigating, you need something meaningful to do. Few people know what to do with that kind of freedom. They look to technology to fill the void. But the incentives of technology companies, more often than not, don’t align with the well-being of the user, especially when the business model and the user’s interests are at odds.
When you’re not navigating and also not driving, there’s the promise of reflecting, reading, thinking, writing, or building. But first, why not catch up on what your friends are doing? Or the Kardashians? Or millions of strangers?
It presents a dark irony: We save time by delegating things to technology, only to pour that time back into it, mindlessly scrolling.
Solutions: For victims and culprits
I want to think about solutions to this problem both as a victim (a consumer, a navigator) and as a culprit (someone who builds technology products and designs navigation).
I say culprits not because they have bad intentions, but because they often either lack clear intentions or act on misaligned incentives.
Micro-frictions as the antidote for the victim
I’m learning to be more intentional about what friction I want to eliminate and what I want to preserve. About what navigation I want to do myself versus delegate.
I use AI every day. It has largely empowered me. But I’m deliberate in its use.
I resist the temptation to start with AI-generated drafts because the value of writing lies in the struggle to articulate the original thought. It’s that struggle that clarifies what you want to say, what you want to achieve. Only then do I use AI. My process isn’t simply sequential, so the handoff isn’t a clear line. It’s a feedback loop. But the bulk of my effort goes into the pre-AI phase, ensuring that I build a vision for the artifact that I want to produce — whether it’s an essay or software. I can then direct it rather than be led by it.
For places I visit regularly, I’ve stopped using maps. I’m rebuilding the habit of paying attention to my surroundings.
I keep my phone away when I’m with my kids, both to be present with them and implicitly teach them the value of paying attention. I find pockets of time severed from technology — sipping coffee, going on walks — which builds the habit of being comfortable alone with my thoughts.
How is this all related to navigation, you ask? Intentionality brings clarity, which in turn helps you navigate effectively.
If you want to read more about this idea of introducing friction, this article from The Cut captures it well: In 2026, We Are Friction-Maxxing.
Design principles for technologists as culprits
This might sound obvious, but when designing technology products, it helps to keep users’ needs and well-being in mind.
Build robust fallback mechanisms: In the banking portal, when the “digital-only” system failed, neither the customer nor the employee could navigate an alternative path. The system had eliminated all other routes.
Before implementing AI assistance or intent-based shortcuts, think about whether the user can complete the task if the feature breaks or is unavailable. Build fallback mechanisms that allow manual navigation when automated paths fail, surface error states that help users understand what went wrong, provide enough context for users to troubleshoot independently, and maintain alternative paths to the same destination.
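As a sketch of what that posture could look like in code: the route and function names below are illustrative, not taken from the bank’s portal or any particular framework. The idea is simply that the automated path is wrapped so that failure surfaces a reason and a manual route instead of a dead end.

```typescript
// Illustrative sketch: keep a manual path reachable when the automated one fails.

type SubmitResult =
  | { ok: true }
  | { ok: false; reason: string; manualRoute: string };

async function submitWithFallback(
  automatedSubmit: () => Promise<void>
): Promise<SubmitResult> {
  try {
    await automatedSubmit();
    return { ok: true };
  } catch (error) {
    // Surface a useful error state instead of "Oops! Something went wrong",
    // and point the user at an alternative route to the same destination.
    return {
      ok: false,
      reason: error instanceof Error ? error.message : "Unknown upload error",
      manualRoute: "/loans/home/apply/manual-upload", // hypothetical fallback page
    };
  }
}
```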

Distinguish between transactional and developmental tasks: When developing automations, distinguish between transactional tasks (filing taxes, scheduling meetings, data entry) and developmental tasks (writing, learning, problem-solving, decision-making). This is similar to the idea of distinguishing between competitive artifacts and complementary artifacts.
Transactional tasks should be frictionless. These are tasks where efficiency is the primary goal, and there’s little value in the process itself. But developmental tasks benefit from good friction. The process of navigating uncertainty is often where the real value lies.
“Gym skills” is a good metaphor here: skills you keep only by continuing to exercise them, even when you no longer strictly have to. For developmental tasks, consider AI as a collaborator that augments rather than replaces the user’s cognitive work. The Speculative and Critical Design (SCD) framework lays out one of the most articulate ways to think about this.
Adjust assistance based on expertise: Design systems that adjust assistance levels based on user expertise, because effective delegation requires good judgment on the user’s part.
Beginners benefit from more guidance, explanations, and visible structure to help them build mental models of the system. Intermediate users can make effective use of shortcuts and intent-based features, as long as traditional navigation remains visible and accessible. Expert users demand customizability: creating their own navigation paths, or even disabling assistance they don’t need.
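As a hedged sketch of what expertise-tiered assistance might look like as configuration: the tiers and defaults below are assumptions for illustration, not a prescription or any product’s actual settings.

```typescript
// Illustrative sketch: assistance levels keyed to user expertise.

type Expertise = "beginner" | "intermediate" | "expert";

interface AssistanceConfig {
  showGuidedExplanations: boolean;   // explanations and visible structure
  enableIntentShortcuts: boolean;    // search-first / AI-first shortcuts
  keepClassicNavigation: boolean;    // traditional menus remain accessible
  allowDisablingAssistance: boolean; // user can turn assistance off entirely
}

const assistanceByLevel: Record<Expertise, AssistanceConfig> = {
  beginner: {
    showGuidedExplanations: true,
    enableIntentShortcuts: false,
    keepClassicNavigation: true,
    allowDisablingAssistance: false,
  },
  intermediate: {
    showGuidedExplanations: false,
    enableIntentShortcuts: true,
    keepClassicNavigation: true, // shortcuts exist, but structure stays visible
    allowDisablingAssistance: false,
  },
  expert: {
    showGuidedExplanations: false,
    enableIntentShortcuts: true,
    keepClassicNavigation: true,
    allowDisablingAssistance: true, // experts can carve their own paths
  },
};

// Usage: look up the config when rendering navigation for a given user.
const config = assistanceByLevel["intermediate"];
```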
The problem with many AI-first products is that they apply the same level of automation to everyone, preventing users from ever developing expertise. A newly minted manager using AI to completely delegate the work of crafting performance reviews for the team, for example, isn’t going to build the muscle of reflecting on the relationships with people, paying attention to their motivations, strengths, and weaknesses.
Preserve visible structure: Even with search-first or AI-first interfaces, users benefit from understanding the underlying structure. Consider visual sitemaps or navigation breadcrumbs that show users where they are in the system, contextual hints about where information lives even when delivering it through search, and progressive disclosure that reveals structure as users engage more deeply with the product.
aiuxdesign.guide has a comprehensive set of design patterns that mention some of these and more, with real-world examples.
Conclusion
There are a few pockets of life that haven’t been taken over by assistive and active technology. Building the muscle to tolerate (and design) good friction, and reclaiming (or helping others reclaim) ownership of your attention, are good starts for preparing for what’s ahead. There’s little I can do to navigate the banking system. But there are other areas where I can exercise control, both as a creator and a consumer.
Find me on LinkedIn, X, or check out my blog here.
Inspirations and references
- DOC • A new navigation paradigm
- Do AI Products Even Need Navigation? — by Andrew Sims
- Bernard Stiegler’s philosophy on how technology shapes our world | Aeon Essays
- Habitual use of GPS negatively impacts spatial memory during self-guided navigation — PMC
- Biswa Kalyan Rath | Birth Proof | Stand Up Comedy
- In 2026, We Are Friction-Maxxing
- The Theory of Problem Solving
- Organizing Conceptual Knowledge in Humans with a Grid-like Code — PMC
- Asana — Our principles for helping humanity thrive with AI
- Cognitive Artifacts — Psychology of Technology Institute
- In the AI Age, Making Things Difficult Is Deliberate | Every
- The Thinking Behind Notion AI
- The Importance of Friction in AI Systems | Mozilla Foundation
- Why AI Customer Journeys Need More Friction | HBR
- AI UX Design Patterns