Cultivating the human capabilities that matter most

Developing product discovery judgment through psychological safety, collaboration, and systematic practices.

Hero image depicting AI-human collaboration: teal circuit board patterns on the left representing AI technology merges with four coral hands reaching from the right representing human teamwork. A lightbulb with leaves and gear at center symbolizes technology and human judgment working together.
Diagram created by author using Google Gemini AI text-to-image creator

How do you develop the judgment to decide what software is worth building when AI makes building it faster and cheaper?

This article explores how to systematically strengthen discovery judgment, whether you’re building solo or as a team. For background on why judgment becomes the critical constraint when AI accelerates execution, see my previous article “When building software became easier with AI, deciding became harder.”

The answer lies not in better tools or more time, but in three interconnected capabilities: psychological safety that permits genuine learning, cross-functional collaboration that surfaces diverse perspectives, and systematic frameworks that make assumptions explicit and testable. These are the human systems that transform discovery from a sequence of tasks into a continuous learning loop where reasoning itself improves with every cycle.

Venn diagram with three overlapping circles showing how teams develop discovery judgment through Psychological Safety (permits genuine learning), Cross-Functional Collaboration (surfaces diverse perspectives), and Systematic Frameworks (makes assumptions explicit). The center intersection labeled ‘Judgment Development: Compounds Over Time’ shows these three capabilities working together.
Diagram created by author using Google Gemini AI text-to-image creator

Cross-functional teams, comprising product managers, engineers, designers, user researchers, and others, each bring unique perspectives to the discovery process. However, judgment only develops when these perspectives combine. The challenge isn’t getting better at your own function. It’s learning to reason together.

Solo founders and small teams face the same challenge from a different angle: you must develop these multiple perspectives within yourself. Product thinking, technical feasibility, design usability, business viability: you’re constantly reasoning across all these domains, switching contexts rapidly. The practices that help teams combine perspectives also help solo builders systematically develop multifaceted judgment.

“Psychological safety isn’t about being nice — it’s about being real.”
 — Amy Edmondson, The Fearless Organization (2019)

The Human Side of Discovery

Here’s the thing most organizations miss: judgment doesn’t develop through better frameworks. It develops through vulnerability.

When you admit “I was wrong about this assumption” without fear, when you kill ideas you love because evidence doesn’t support them, when you pivot after publicly committing to a direction — that’s when judgment grows.

Amy Edmondson’s research in The Fearless Organization (2019) shows that people learn faster when mistakes are treated as data rather than as deficiencies. Without psychological safety, discovery becomes performative. Interviews happen, but inconvenient findings get overlooked. Usability tests run, but only confirming feedback gets heard. Teams cherry-pick evidence that supports existing beliefs. Research on software teams confirms that psychological safety consistently ranks among the strongest predictors of team performance and innovation because it enables the learning behaviours that improve judgment over time (Obasanjo, 2017). AI acceleration exacerbates the problem, amplifying bias rather than wisdom.

Comparison diagram contrasting Performative Discovery (ignoring inconvenient findings, cherry-picking evidence, hiding mistakes, unsafe environment) versus Genuine Discovery (examining evidence objectively, seeking disconfirming data, treating mistakes as learning). An arrow labeled ‘Psychological Safety Enables the Shift’ shows the transformation between the two approaches.
Diagram created by author using Google Gemini AI text-to-image creator

I learned this the hard way. Early in my career, leading product teams, I created an environment where people said yes to my ideas, not because they were validated, but because challenging authority felt risky. A team member eventually told me privately, “We built that feature, but no one felt comfortable saying it didn’t solve the customer’s actual problem.” I recognized that courage immediately; I’d been the junior person speaking truth to power myself. I took the feedback as a gift. That moment taught me that psychological safety isn’t a nice-to-have; it’s the foundation of sound judgment and decision-making.

The challenge runs deeper than individual courage. Even in organizations that espouse flat structures, power dynamics shape which voices get heard. When product managers treat discovery ownership as exclusive rather than collaborative, engineers and designers stay silent, assuming “that’s the PM’s job.” Senior voices dominate while junior insights go unheard.

User researchers face a particular challenge: they’re trained in evidence rigour and bias detection, yet their insights often get filtered through PMs rather than informing decisions directly. When research findings contradict existing plans, psychological safety determines whether teams adjust their approach or rationalize it.

Real discovery requires cross-functional sense-making. A user researcher observes patterns in how users describe their workflow. A designer notices that users are misunderstanding the navigation flow. An engineer sees users accessing features through indirect paths. A customer success rep recalls multiple support tickets about this workflow. A product manager connects these signals to a strategic question about information architecture. None would have seen the complete picture on their own. This is collaborative judgment, different perspectives combining to form an understanding that no single function could achieve.

Diagram showing cross-functional discovery with five roles (User Researcher, Designer, Engineer, Customer Success, Product Manager) contributing different observations that converge on a shared insight: Information Architecture needs investigation. Arrows point from each role toward the center, illustrating collaborative judgment that no single function could achieve alone.
Diagram created by author using Google Gemini AI text-to-image creator

Over time, something even more valuable happens. Engineers who participate in discovery with customer success start asking different questions about architecture. Designers who review analytics with engineers propose solutions aligned with observed behaviour patterns. Product managers who examine implementation complexity develop intuition for feasibility. You’re not just contributing expertise, you’re expanding your judgment by exposure to how others think.

Expanding Beyond Your Lane

For Solo Founders and Small Teams

You might think cross-functional judgment doesn’t apply to you; you’re already wearing all the hats. But that’s precisely the challenge: switching rapidly between perspectives (technical, business, user, design) without systematic practices leads to decision fatigue and blind spots.

Solo founders often skip discovery practices because “I don’t have time” or “I am the PM and the engineer and the designer.” AI accelerates building, which means you can build the wrong thing faster. Speed without judgment wastes resources at an accelerated pace. The systematic practices below help you develop multi-perspective judgment, compensating for working alone.

For Larger Teams

Cross-functional discovery can feel uncomfortable.

As a user researcher, why should you understand technical constraints? As an engineer, why should you sit in customer conversations? As a designer, why should you know about business viability? As a product manager, why is it necessary to understand research methodology?

Because AI is commoditizing siloed expertise. It can generate code, create designs, draft requirements, and even synthesize interview transcripts.

What AI can’t replicate is integrating perspectives across functions to decide what’s worth building. That integration, that judgment, is what makes cross-functional teams valuable.

This doesn’t mean abandoning your core expertise. It means expanding your contribution beyond your functional lane.

Start small: Join one discovery conversation outside your usual domain. Ask questions. Listen to how others reason about problems. You’re not abandoning your lane; you’re learning to see the whole road.

“We can’t let AI seduce us into believing it can do our thinking for us.”
 — Brené Brown, Strong Ground (2025)

AI as Collaborator, Not Replacement

As Brené Brown discusses in her Fast Company interview (38 minutes) and explores in Strong Ground (2025), many organizations are experiencing what she calls a “collective anxiety about AI”, chasing tools without confronting the human work: vulnerability, truth-telling, and courage.

This is the challenge of discovery in the AI era: not competing with machines, but building human capacities — deep connection, deep thinking, and deep collaboration.

So what does this mean for discovery practice?

After Jobs-to-Be-Done interviews, AI can analyze transcripts in minutes: clustering themes, identifying recurring language, spotting emotional intensity, and surfacing contradictions. It can generate comprehensive assumption maps covering desirability, viability, feasibility, usability, and ethics.

But AI doesn’t know what matters. It can identify patterns but can’t feel the weight of user frustration. It can cluster themes but can’t sense when an interview reveals something genuinely novel. Recent MIT research examining hundreds of human-AI collaborations found that while these partnerships often underperformed on decision-making tasks, they showed significant gains in content creation tasks, such as generating and synthesizing new content, where humans provide contextual judgment that AI cannot replicate (Vaccaro et al., 2024). AI can’t fully contextualize insights against organizational realities, unstated strategic priorities, the team member who’s burned out, the competitive shift that’s not yet reflected in the data, or a coming regulatory change. Most critically, you decide what’s “good enough” when evidence is sufficient to proceed, when to test further, and when to abandon a direction. These threshold judgments require wisdom that can’t be automated.

A team identifies opportunities based on customer research; AI analyzes which align with strategic outcomes and flags contradictory signals. The team reviews: Does this prioritization make sense given our competitive position? Which patterns might be artifacts of the recruitment process? AI has accelerated analysis; humans make meaning from it. The team selects an opportunity, AI generates an assumption map, and humans evaluate which assumptions are riskiest, then design experiments to test them.

This continuous loop, where AI synthesizes and suggests, and humans interpret and decide, creates discoveries that are both faster and thoughtful. But this partnership works best when supported by systematic frameworks that make judgment visible and teachable.

Circular flow diagram showing 5-step AI-human collaboration loop: AI Synthesizes (analyzes patterns), Humans Interpret (determines meaning), AI Generates (creates options), Humans Evaluate (assesses context), Humans Decide (makes judgment call). Arrows flow clockwise showing continuous cycle. Center reads ‘Faster + Thoughtful Discovery.’
Diagram created by author using Google Gemini AI text-to-image creator

Making Judgment Visible

Whatever discovery frameworks you use (Jobs-to-Be-Done, Opportunity Solution Trees, Assumption Mapping, or your own approach), the key is making implicit assumptions explicit and testable. Frameworks matter not because they’re rigid processes, but because they create shared language for reasoning together.

“Frameworks matter not because they’re rigid processes, but because they create shared language for reasoning together.”

The principle: your frameworks and tools should force specificity and enable evidence-based reasoning. Jobs-to-Be-Done, for instance, requires what practitioners describe as a fundamental “mindset change,” shifting the focus from product features to the underlying jobs customers are trying to accomplish (Gecis, 2021). Teams can’t hide behind “users want better collaboration.” They must specify: What job are users hiring our product to do? How do they measure progress? What alternatives exist, and why are those inadequate?

Tools like Opportunity Solution Trees help teams visualize the path from desired outcomes through opportunities to solutions, deliberately slowing teams down to explore the opportunity space rather than jumping immediately to solutions (Corbett, 2023). Similarly, Assumption Mapping forces teams to articulate what would need to be true for their approach to succeed across the dimensions of desirability, viability, and feasibility. Making assumptions explicit surfaces disagreements before they become expensive mistakes, a critical practice given that product teams typically find nine out of ten product assumptions prove false when tested (Podzorska, 2023). Evidence tracking ensures that insights don’t get lost and maintains lineage, clarifying which opportunities are supported by evidence versus speculation.

Solo founders benefit from these frameworks in different ways. Where teams use frameworks to create shared language, solo builders use them to make implicit thinking explicit — writing down assumptions forces you to examine your own reasoning. Documenting evidence trails helps you catch your own confirmation bias. These aren’t just team collaboration tools; they’re personal judgment development practices.

What matters isn’t which specific frameworks or tools you adopt; it’s ensuring your discovery process produces the insights and evidence required for sound judgment and decision-making.

You Don’t Need Permission to Begin

Change doesn’t have to wait for mandates or executive buy-in. It can start with just one person choosing to work differently. Informal leaders, individuals without formal authority, consistently spark real change through a shared vision, reciprocal relationships, and an emphasis on process.

When your work demonstrates more explicit reasoning and better outcomes, others notice, whether “others” refers to your team, investors, customers, or future collaborators. This isn’t passive “leading by example”; it’s intentional demonstration. Become the person who consistently asks the right questions, who can articulate why a direction is worth pursuing with evidence rather than conviction. As Brené Brown observes, “Courage is contagious.” When you make your reasoning visible, admit when you’re wrong, and kill your own ideas based on evidence, that vulnerability invites others to do the same.

Here’s why starting now matters: as AI speeds up execution and lowers costs, judgment becomes exponentially more valuable. The talent market is segmenting into professionals who use AI to execute faster while thinking the same way, and professionals who use AI to amplify their judgment, competing on wisdom and strategic value. The latter group becomes increasingly valuable precisely because judgment can’t be automated.

Speed without judgment isn’t progress; it’s expensive failure at scale. When anyone can build anything, competitive advantage shifts to building the right things. The teams that thrive will be those who learn the fastest, make decisions with clarity, and discover unmet needs before their competitors do.

You can wait for your organization to recognize this shift, or you can start building this capability now, positioning yourself among the judgment-oriented professionals who will thrive in the AI era.

Start with one small practice:

If you’re building solo:

  • Document assumptions before building — create a simple “What must be true for this to work?” list.
  • After customer conversations, capture what surprised you about your own thinking, not just about customers.
  • When you pivot, document why — your future self needs to understand your reasoning.
  • Use AI to question your thinking: “Here’s what I learned, what am I missing?” However, remember that AI provides pattern-based analysis, while you provide meaning-based judgment.

If you’re working in a team:

  • Document your assumptions before your next sprint, just for yourself, in a simple document.
  • In retrospectives, add one question: “What surprised us about our own thinking?”
  • Share your reasoning when someone asks why you made a decision, including what you were uncertain about.
  • Invite one person from a different function to your next discovery conversation.
  • When you pivot based on evidence, quantify what you learned: “This assumption-mapping session revealed a critical usability flaw before we wrote code, which saved us two sprints.”

Find allies. One person practising this alone is interesting. Three people practising together across functions is a movement.

Which practice will you start with? Document assumptions? Add a reflection question to your retro? Invite someone from another function to your next discovery conversation? Drop a comment below.

Key Takeaways

  • Psychological safety and cross-functional collaboration are preconditions for the development of discovery judgment; without them, discovery becomes performative.
  • AI acts as a collaborator, accelerating synthesis while leaving meaning-making, interpretation, and final judgment to humans.
  • Frameworks and tools like Jobs-to-Be-Done and Assumption Mapping make implicit assumptions explicit and testable, creating a shared language for reasoning.
  • Individual practice scales into cultural change: one person modelling the required behaviour and validating assumptions can spark team-level learning and systemic improvement.
  • The talent market is segmenting: those who use AI to execute faster while thinking the same way, versus those who use AI to amplify their judgment and decision-making.

References

Brown, B. (2024, November 20). On Strong Ground: Brené Brown’s Lessons in Leading with Vulnerability & AI [Video]. Fast Company/YouTube. https://www.youtube.com/live/zdkKNC4gP1Y

Brown, B. (2025). Strong Ground: The Lessons of Daring Leadership, the Tenacity of Paradox, and the Wisdom of the Human Spirit. Random House.

Corbett, B. (2023, May 25). Opportunity solution trees: Everything you need to know. Bootcamp. https://medium.com/design-bootcamp/opportunity-solution-trees-everything-you-need-to-know-308f3d987d0f

Edmondson, A. C. (2019). The Fearless Organization: Creating Psychological Safety in the Workplace for Learning, Innovation and Growth. Wiley.

Gecis, Z. (2021, April 7). 8 things to use in “Jobs-To-Be-Done” framework for product development. UX Collective. https://uxdesign.cc/8-things-to-use-in-jobs-to-be-done-framework-for-product-development-4ae7c6f3c30b

Obasanjo, D. (2017, December 30). Psychological safety, risk tolerance and high-functioning software teams. HackerNoon. https://medium.com/hackernoon/psychological-safety-risk-tolerance-and-high-functioning-software-teams-75701ed23f68

Podzorska, G. (2023, March 31). Assumption mapping techniques for product discovery. Medium. https://medium.com/@gosiapodzorska/assumption-mapping-techniques-for-product-discovery-7553da1e1d6f

Vaccaro, M., Almaatouq, A., & Malone, T. W. (2024). When combinations of humans and AI are useful: A systematic review and meta-analysis. Nature Human Behaviour, 8(12), 2293–2303. https://doi.org/10.1038/s41562-024-02024-1

About Gale Robins

I help software teams and solo founders develop discovery judgment, the ability to decide what’s worth building when AI makes building easier and faster. My approach combines methods such as Jobs-to-Be-Done, assumption mapping, and double-loop learning with evidence-based reasoning to make judgment development systematic rather than accidental.

Connect: www.linkedin.com/in/galerobins


Cultivating the human capabilities that matter most was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
