Data collection is being automated. The strategic layer is wide open. The question is which side of that line you’re standing on.

The research industry is getting commoditized.
A wave of AI-powered platforms can now conduct interviews, run surveys at scale, transcribe sessions, cluster themes, and deliver summary reports faster than any human team. Tools like Maze, Dovetail, Outset AI, and Listen Labs have compressed what used to take six to eight weeks into hours.
The global AI-based research services market is projected to grow from roughly $8 billion in 2025 to over $35 billion by 2035, according to Future Market Insights:
“During the first five-year period from 2025 to 2030, the market increases from USD 7,972.1 million to USD 16,804.0 million… AI-accelerated survey analytics dominates this period with 38.3% share, as enterprises increasingly automate survey feedback and real-time opinion tracking.”
The commodity layer of research is being automated, and it is happening fast.
This is good, and it’s terrifying.
Good, because the structured, repeatable parts of research have been heading toward automation for a while. I was already using UserTesting.com for high-volume concept testing, quick-turn usability studies, and pattern recognition across large datasets. Machines are better at speed and scale. A Displayr study found that 85% of researchers said automated tools have already improved their workflow:
“What once took weeks of manual labor can now be achieved in days — or even hours — freeing researchers to focus on the strategic questions that really matter.”
Terrifying, because the industry is confusing data collection with understanding… two fundamentally different things. And if you have built your career on the gathering side of that equation, you are now competing with software that works faster, cheaper, and around the clock.
The gathering is not the knowing: Where AI tools fall short
Hot take: the hardest part of research was never gathering the data.
The hard part is making sense of what you find in the context of a business that has politics, constraints, competing priorities, and customers who say one thing and do another.
AI platforms work only from the interview data they collect themselves. They can surface the top themes from 50 conversations and present them in a tidy dashboard… Great!
What they cannot do is synthesize those themes against your:
- Analytics
- Competitive landscape
- Organizational dynamics
- And the thing your VP said in a hallway conversation last Tuesday that reframed the entire strategic direction (Yes, this happens all the time)
That synthesis layer requires judgment. It requires presence. It requires years of accumulated pattern recognition that no language model can replicate, because it depends on understanding your organization, not just your users.
Svend Brinkmann warned about the “McDonaldization” of qualitative research over a decade ago: the reduction of a craft-based discipline into standardized, repeatable processes optimized for efficiency.
AI has accelerated that trajectory. A 2025 study in Frontiers in Research Metrics and Analytics found that AI’s emphasis on structured, data-driven processes risks reinforcing positivist assumptions in qualitative work, losing the relational, situated, and cultural richness that makes qualitative research worth doing in the first place:
“Automated programmes with clear rules and formulae consistently followed each time do not work well under interpretivism’s assumptions. Researchers who use a general artificial intelligence agent only cover the rules-based epistemological spectrum of positivism.”
Lincoln and Guba said it best in 1985: The instrument of choice in qualitative research is the human. Humans are responsive to environmental cues, able to collect information at multiple levels simultaneously, and capable of exploring the unexpected. A machine can present raw material and initial patterns.
The act of meaning-making remains a uniquely human endeavor.
Where humans must excel
If the data collection layer is being commoditized, the question for every researcher becomes: what is left? The answer is everything that matters. But “everything that matters” demands specificity. Here are the five domains where human researchers are irreplaceable, and where the best practitioners are already doubling down.
1. Strategic synthesis across multiple inputs
AI gives you themes. A skilled researcher gives you a decision. The real value of research has always been the “so what should we actually do?” layer: integrating user data with business context, market dynamics, and organizational realities to produce recommendations that drive action.
This means your research readout cannot end with “here is what users said.” It must end with “here is what we should build, stop building, or invest in next, and here is why.”
If your deliverables do not connect to revenue, retention, or competitive positioning, you are delivering a report. The AI can do that now.
2. The relationship and trust layer
When organizations invest in research, they are buying judgment. They need someone who can push back on stakeholder assumptions, read a room, change a presentation on the fly, and tell a leadership team something they do not want to hear in a way they can actually hear it. This skill compounds over time. The researcher who understands a client’s organizational politics, who has built credibility with a skeptical CPO, who knows which battles to pick and which to table for next quarter, that person is not a vendor. They are a strategic partner. No platform replicates that.
3. Cross-cultural and international depth
AI interviews work well in English and a handful of other languages. The nuance of cross-cultural research, including regulatory differences, market-by-market strategy, and the things people mean but do not say, requires researchers who have spent years embedded in those markets.
No language model has lived in São Paulo, navigated regulatory frameworks in Mumbai, or understood why a French consumer’s “No” means something different than an American’s. As companies expand globally, this skill becomes a multiplier, not a nice-to-have.
4. Complex qualitative methods
Diary studies. Ethnography. Longitudinal research. Contextual inquiry in physical environments. These are immersive methods that require presence, adaptation, and the kind of rapport that only happens between people. They cannot be compressed into a chatbot interface.
And they produce the kind of deep, contextual insight that drives transformative product decisions, the kind of insight that a theme cluster in a dashboard will never surface. Even the AI knows this. When Hitch et al. asked ChatGPT to perform qualitative analysis, the model responded:
“As an AI language model, I cannot perform qualitative analysis on the data above as it requires a more nuanced understanding of context, and the ability to interpret and analyze human language in context.”
When the tool itself tells you it cannot do the work, that is worth paying attention to.
5. AI decision advisory
This is the frontier most researchers are ignoring entirely, and the one with the most upside.
As organizations deploy AI agents, copilots, and automated workflows, someone needs to figure out where those systems should and should not operate.
How do you design human-AI workflows? Where does automation create value, and where does it destroy trust? What gets delegated to an agent, and what requires a human in the loop?
These are research questions. They require researchers who understand both the technology and the humans it serves. The companies making the best AI decisions right now are not the ones with the best models. They are the ones with the best judgment about where to apply them. If you are not building fluency here, you are leaving the most strategic seat at the table empty.
The researcher’s new job description
The researchers who thrive in this landscape will be the ones who make the best decisions from data that is already abundant. This requires a shift in how you think about your role. Here is a framework for evaluating where you stand and what to do about it.
Three questions to ask yourself:
- Does my contribution on this project require judgment that depends on context a machine does not have? If yes, you are in the strategic layer. If no, you are doing work that will be automated within 18 months.
- Does my deliverable connect research findings to a business decision? If your output is a report that summarizes what users said, that is commodity work. If your output is a recommendation backed by synthesis across multiple data sources, that is strategy.
- Am I building relationships that make my judgment more valuable over time? Institutional knowledge, stakeholder trust, and cross-functional credibility are compounding assets. They are the moat.
Build the skills that sit on top of the data
Learn to read a P&L statement. Understand how your company makes pricing decisions. Shadow a sales call. Sit in on a board prep meeting. The researchers who will lead this industry are the ones who can walk into a room of executives and connect a research finding to a revenue outcome in language the CFO understands.
That is not a natural extension of most research training programs. It is the gap you need to close, and closing it is entirely within your control.
Use AI tools aggressively for the commodity layer
Do not resist the tools. Use them. Automate your transcription, your initial coding, your pattern recognition across large datasets. Let the machine handle what it handles well so you can spend more time on synthesis, strategy, and the client conversations that actually move the needle.
A 2024 study by Hitch and colleagues, indexed in PubMed Central, put it plainly: AI can augment, but not replace, the work of human researchers in producing rigorous analysis, so long as it is applied in a mindful and ethical manner.
Treat AI tools the way a skilled carpenter treats a power tool: useful for speed, dangerous if you let it do your thinking.
Start advising on AI decisions, not just user decisions
Every organization is asking some version of the same question right now: Where should we use AI, and where should we not? Researchers are uniquely positioned to answer this because the question is fundamentally about human behavior, trust, and context.
Start volunteering for AI strategy conversations. Offer to run evaluations of AI-powered workflows. Frame your expertise around the question that every executive is losing sleep over: How do we deploy this technology without alienating the people it is supposed to serve?
Position yourself as the person who turns signal into decisions
This is the through-line. Every skill on this list, every shift in how you approach your work, comes back to one distinction: Are you gathering signal, or are you turning signal into decisions? The first is being commoditized. The second is becoming more valuable every day.
If your work ends with a research readout, you are on the wrong side of that line. If your work ends with a decision that moves the business, that is, a decision that makes the business money, you are exactly where you need to be.
The opportunity
The research industry is being reorganized. That is not a threat. It is a correction.
For years, the value of research was bottlenecked by the logistics of data collection. It took weeks to recruit, schedule, conduct, transcribe, and analyze.
The practitioners who could manage that pipeline were valuable because the pipeline was hard. Now the pipeline is easy. And the people who always knew that the pipeline was not the point, that the real work starts after the data is collected, those people are about to have their moment.
The strategic layer of research is becoming more valuable, not less. Those of us who have spent years building judgment, who can synthesize across messy inputs, who can advise on the hardest questions organizations face about AI, growth, and customer strategy… we are holding the one asset that cannot be automated. And for the first time, the logistics of data collection are no longer standing between us and the work that actually matters.
This is the best time in a generation to be a researcher who thinks strategically. The noise is being filtered by machines. The signal is waiting for someone with the judgment to act on it.
The tools got smarter… the question is whether you did too.
Josh LaMar is a product growth strategy and AI decision advisor with over 20 years in product, technology, and customer strategy. He has spent more than 40,000 hours listening to customers across 19 countries on 5 continents.
References
- Future Market Insights. (2025). AI-based Research Services Market.
- Displayr. (2025). Market Research Automation: Trends, Tools, and What’s Next.
- Williams, R. T. (2024). Paradigm shifts: exploring AI’s influence on qualitative inquiry and analysis. Frontiers in Research Metrics and Analytics.
- Brinkmann, S. (2012). Qualitative research between craftsmanship and McDonaldization. Qualitative Studies, 3, 56–68.
- Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic Inquiry (pp. 193–194). Sage Publications.
- Hitch, D. et al. (2024). Artificial Intelligence Augmented Qualitative Analysis: The Way of the Future? PMC.
- Grand View Research. (2025). Artificial Intelligence Market Size Report, 2033.
Your research tools got smarter… Did you? was originally published in UX Collective on Medium.