The user research discipline as we know it is dying. After 15 years of explosive growth, mass layoffs have exposed a fundamental truth: too much UX research delivers interesting insights that don't drive business impact. Former Airbnb and Meta research leader Judd Antin reveals what went wrong and how to fix it.
Key Takeaways
- The UX research field faces a reckoning due to over-reliance on "middle-range research" that's interesting but not impactful enough for business needs
- Companies hired researchers during the zero-interest-rate era without knowing how to integrate them properly, setting them up as service functions rather than strategic partners
- Great researchers master five tools: formative research, evaluative research, survey design, applied statistics, and technical skills (SQL, dashboards, and prompt engineering)
- "User-centered performance" describes symbolic customer obsession rather than genuine learning—common when PMs ask for validation research too late in the process
- Research should be integrated from beginning to end of product development, not called in reactively as a service function
- Researchers must become more business-focused, understanding quarterly reports, OKRs, and conversion funnels to drive maximum impact
- The future belongs to researchers who can do fewer things better, focusing on macro (strategic) and micro (tactical) research while avoiding the problematic middle range
- NPS is fundamentally flawed compared to simple customer satisfaction metrics, despite widespread industry adoption
Timeline Overview
- 00:00–04:16 — Judd's Background and Alumni Network: Introduction to Judd Antin's career spanning Yahoo, Meta, and Airbnb, highlighting his mentoring of future research leaders at Figma, Notion, Slack, and other major companies
- 04:16–08:53 — Reckoning Response Analysis: Community reaction to Judd's controversial post about UX research's decline, including unexpected critiques from anti-capitalist researchers and discussions of using "reckoning" terminology
- 08:53–14:05 — Research Framework Definition: Deep dive into macro (strategic business-focused), middle-range (problematic user understanding), and micro (tactical usability) research categories and their respective business impact
- 14:05–21:10 — Integration Challenges and Solutions: Why the service-function model fails researchers and how to restructure product development to include insights from beginning to end
- 21:10–26:42 — Business-Focused Research Evolution: The critical need for researchers to understand quarterly reports, OKRs, and business metrics rather than purely user-focused empathy work
- 26:42–32:54 — User-Centered Performance Phenomenon: Detailed exploration of symbolic customer obsession, including validation-seeking behavior and executive listening sessions that prioritize optics over learning
- 32:54–44:55 — PM-Researcher Relationship Dynamics: Common tropes researchers have about product managers, including speed concerns, AB testing limitations, and the Henry Ford quote fallacy
- 44:55–51:18 — Research Impact and Recommendations: How to balance actionable insights with collaborative decision-making, avoiding the trap of researchers prescribing exact solutions
- 51:18–59:43 — Partnership and Organizational Structure: Practical advice for PMs to better leverage researchers and guidance on optimal researcher-to-team ratios based on relationship quality over coverage
- 59:43–01:06:48 — Research Empowerment and Metrics: Strategies for researchers to drive impact through skill development, business knowledge, and communication excellence, plus NPS critique and alternatives
- 01:06:48–END — Product Dogfooding Limitations: Warning about over-relying on internal product usage for insights due to fundamental differences between teams and actual users
The Broken System: Why Middle-Range Research Fails
The core problem plaguing UX research lies in what Judd Antin calls "middle-range research"—work that sits uncomfortably between strategic macro insights and tactical micro optimizations. This research category has become the default for most teams, creating a devastating combination of high interest and low business impact.
Middle-range research typically involves taking groups of users and asking about their thoughts, feelings, and behaviors with existing products. While fascinating to researchers and seemingly valuable to stakeholders, this work often yields insights that are difficult to operationalize and that invite post-hoc rationalization, with teams claiming they "already knew that."
- Macro research drives strategic business value through competitive analysis, market studies, and long-term innovation planning that aligns with business strategy and planning cycles
- Micro research delivers immediate tactical wins through usability optimization, AB test interpretation, and pixel-perfect product improvements that directly impact conversion metrics
- Middle-range research creates interesting insights about user preferences and behaviors but lacks the strategic focus of macro work or the actionable specificity of micro research
- The abundance of middle-range research stems from companies hiring researchers without clear integration strategies, defaulting to questions that feel customer-obsessed but don't drive decisions
- Teams trapped in middle-range research cycles experience the worst aspects of research: delayed timelines, obvious-seeming results, and difficulty translating insights into product improvements
The solution requires dramatically reducing middle-range research while investing more heavily in macro work tied to planning cycles and in micro optimization that produces immediate business results.
User-Centered Performance: The Theater of Customer Obsession
One of the most damaging phenomena in modern product development is what Antin terms "user-centered performance"—symbolic customer obsession that prioritizes signaling care about users over genuine learning and decision-making improvement.
This performance manifests most clearly when product managers request "quick user studies" at the end of development cycles to validate pre-made decisions. Such requests reveal no genuine interest in being wrong or changing direction; they represent checkbox exercises designed to demonstrate customer obsession to stakeholders.
- Executive listening sessions exemplify user-centered performance, where leaders want to "get close to customers" but aren't genuinely seeking insights that might change their strategic direction
- Validation-seeking research occurs when teams approach studies hoping to confirm existing beliefs rather than adopting a falsification mindset that actively seeks disconfirming evidence
- Late-stage research requests signal performance when PMs ask for user input after key decisions are locked in and changing course becomes practically impossible
- The genuine alternative involves a falsification mindset: researchers and product teams should actively seek evidence that proves their assumptions wrong rather than confirming their preferences
- True customer obsession requires vulnerability to being wrong and willingness to change direction based on user insights, not just collecting user feedback for optics
Breaking free from user-centered performance requires cultural shifts toward genuine learning orientation and research integration that can actually influence product decisions.
The Five Essential Tools of Modern Researchers
The evolution from qualitative-focused user research toward multi-method expertise represents a fundamental shift in how successful researchers create business impact. The best researchers develop proficiency across five critical tool categories rather than specializing narrowly in interview-based methods.
Formative/generative research enables forward-looking innovation and open-ended exploration through ethnographic field work and user journey understanding. Evaluative research provides tactical product optimization through usability testing and design validation.
- Survey design expertise creates scalable insights from user communities large and small, but requires rigorous methodology to avoid garbage-in-garbage-out results
- Applied statistics knowledge becomes essential for working in AB testing environments and understanding significance, confidence intervals, and experimental design principles (see the sketch after this list)
- Technical skills encompassing SQL, dashboard tools, and increasingly prompt engineering allow researchers to query their own data and interact effectively with generative AI
- Swiss army knife researchers can adapt their methodological approach based on project requirements, timeline constraints, and business questions rather than defaulting to preferred methods
- Team-based expertise allows organizations to hire specialists while ensuring collective coverage across all five tool categories through collaborative working relationships
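To make the applied-statistics tool concrete, here is a minimal Python sketch of the kind of reasoning that list item refers to: a two-proportion z-test and a confidence interval for an AB test readout. The conversion counts are hypothetical and nothing below comes from the conversation itself; it is only an illustration of significance and interval arithmetic, not a prescribed method.

```python
# Minimal sketch: two-proportion z-test for an AB test readout.
# Counts are hypothetical; the point is the significance-and-interval
# reasoning covered by the "applied statistics" tool.
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Return the z statistic and two-sided p-value for two conversion rates."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (conv_b / n_b - conv_a / n_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided
    return z, p_value

def diff_confidence_interval(conv_a, n_a, conv_b, n_b, z_crit=1.96):
    """95% confidence interval for the difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    diff = p_b - p_a
    return diff - z_crit * se, diff + z_crit * se

z, p = two_proportion_ztest(conv_a=480, n_a=10_000, conv_b=540, n_b=10_000)
lo, hi = diff_confidence_interval(480, 10_000, 540, 10_000)
print(f"z = {z:.2f}, p = {p:.3f}, 95% CI for lift: [{lo:.4f}, {hi:.4f}]")
```

A researcher comfortable with this arithmetic can read an experiment dashboard and explain what a result does and does not show without waiting on a data science partner.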
The interview process should test candidates' ability to propose multi-method approaches to open-ended research questions rather than defaulting to single methodological solutions.
Business Fluency: The Missing Link in Research Impact
The historical separation between user empathy and business success has created a generation of researchers who struggle to connect insights with revenue impact. The future belongs to researchers who can explicitly identify the overlap between user needs and business profitability.
Antin recommends that researchers start by reading quarterly reports and listening to shareholder calls to understand the language of business strategy. This foundation enables researchers to position insights within existing business frameworks rather than expecting stakeholders to translate research findings into business implications.
- OKR literacy allows researchers to connect user insights directly to measurable business outcomes and team success metrics rather than treating business goals as separate concerns
- Conversion funnel expertise enables researchers to identify specific optimization opportunities and communicate impact in terms that product and growth teams immediately understand (a minimal funnel calculation follows this list)
- Competitive intelligence integration helps researchers frame user insights within market dynamics and strategic positioning rather than operating in isolated user bubbles
- Quarterly strategy documents provide context for prioritizing research questions that align with business timing and strategic initiatives rather than purely user-driven curiosity
- Metric-driven communication transforms research presentations from interesting observations into actionable recommendations tied to specific business outcomes and success measures
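As a small illustration of the funnel fluency mentioned above, the sketch below computes step-by-step conversion rates for a hypothetical signup funnel. The stage names and counts are invented for illustration; the point is that framing a finding as "this step loses 60% of users" ties it directly to a metric product and growth teams already track.

```python
# Minimal sketch: step and cumulative conversion for a hypothetical funnel.
# Stage names and counts are illustrative, not taken from the conversation.
funnel = [
    ("visited landing page", 20_000),
    ("started signup", 8_000),
    ("completed signup", 3_200),
    ("activated (first key action)", 1_400),
]

top_of_funnel = funnel[0][1]
for (stage, count), (next_stage, next_count) in zip(funnel, funnel[1:]):
    step_rate = next_count / count
    overall_rate = next_count / top_of_funnel
    print(f"{stage} -> {next_stage}: {step_rate:.0%} step conversion, "
          f"{overall_rate:.1%} of top of funnel")
```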
This business fluency doesn't replace user empathy but creates a bridge that makes research insights more actionable and valuable to cross-functional partners.
Integration Over Service: Restructuring Research Relationships
The fundamental problem with current research practice lies not in methodology or skill gaps but in organizational structure that treats research as a reactive service function rather than an integrated strategic capability.
The broken cycle begins with researchers hired into service roles without direct integration into product development processes. Unable to influence question framing or participate in early strategic discussions, researchers end up working on less impactful projects that reinforce stereotypes about research being slow and obvious.
- Consistent relationships between researchers and product teams create opportunities for researchers to participate in decision-making from project inception rather than being called in after key parameters are set
- Shared OKRs align researchers with product success metrics rather than creating separate research goals that may not connect to business outcomes
- The integration success metric: teams should be unable to have critical decision-making meetings without their research partner present, indicating genuine influence and collaborative relationships
- Early involvement allows researchers to shape research questions that maximize business impact rather than answering questions that were poorly framed without research input
- Breaking the vicious cycle requires companies to restructure processes while researchers develop business fluency and communication skills that make integration valuable
The most successful research partnerships create situations where product teams actively seek research participation rather than viewing it as an external requirement.
The Economics of Research Hiring: Quality Over Coverage
The zero-interest-rate environment enabled companies to hire researchers without clear value propositions, creating unsustainable organizational structures that prioritized coverage over impact. The current reckoning demands more strategic approaches to research headcount and organizational design.
Rather than spreading researchers thinly across multiple product areas, successful organizations create focused partnerships that protect researcher time and maximize relationship quality. Antin recommends a "full plate" of two large projects and one small project as the optimal researcher workload.
- Relationship-based sizing determines research headcount based on the number of sustained partnerships that can be maintained rather than abstract ratios or coverage requirements
- Creating productive pain by saying no to research requests builds demand and demonstrates value more effectively than trying to serve all stakeholders equally
- Headcount growth happens most successfully when existing product partners advocate for additional research capacity based on experienced value rather than abstract arguments about research importance
- Early-stage value exists even for startups' first 10 employees, as Swiss army knife researchers can accelerate iteration and reduce founder isolation in strategic decision-making
- Trade-off clarity helps startups understand when research hiring provides more acceleration than additional engineering capacity, particularly during pivot-critical periods
The most sustainable research organizations focus on proving value through deep partnerships rather than demonstrating activity through broad coverage.
Beyond NPS: The Survey Science Reality
The widespread adoption of Net Promoter Score represents one of the most successful marketing campaigns in business metrics history, despite fundamental flaws that survey scientists have documented for years. Organizations continue using NPS primarily due to consultant and software industry promotion rather than methodological validity.
The likelihood-to-recommend question suffers from multiple design flaws: an unlabeled 0-10 scale, too many response options for mobile interfaces, and a premise about recommendation behavior that does not hold across product categories.
- Customer satisfaction metrics demonstrate better statistical properties, higher precision, and stronger correlation with business outcomes compared to NPS across most product categories
- Scale design problems include the challenge of displaying 11 response options effectively on mobile devices and users' tendency to avoid extreme scale positions without clear labeling
- Recommendation behavior assumptions fail for many product categories where users don't naturally evangelize operating systems, financial services, or enterprise software to personal networks
- Benchmarking limitations make industry NPS comparisons meaningless due to inconsistent implementation methods and idiosyncratic variations across companies and time periods
- Simple satisfaction questions like "Overall, how satisfied are you with your experience with [product]?" provide more actionable and reliable data than NPS calculations (see the sketch below)
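For contrast, here is a minimal sketch of how the two metrics are computed. The responses are hypothetical; the NPS bucketing (0-6 detractors, 7-8 passives, 9-10 promoters) follows the standard published definition, and the satisfaction score shown is a simple top-two-box share on a labeled 5-point scale, one common way to operationalize the question quoted above.

```python
# Minimal sketch contrasting NPS and a simple satisfaction score.
# Responses are hypothetical; NPS bucketing follows the standard definition.

def nps(scores_0_to_10):
    """Net Promoter Score: % promoters minus % detractors, on a -100..100 scale."""
    n = len(scores_0_to_10)
    promoters = sum(1 for s in scores_0_to_10 if s >= 9)
    detractors = sum(1 for s in scores_0_to_10 if s <= 6)
    return 100 * (promoters - detractors) / n

def csat_top_box(scores_1_to_5, threshold=4):
    """Share of respondents rating satisfaction 4 or 5 on a labeled 5-point scale."""
    return 100 * sum(1 for s in scores_1_to_5 if s >= threshold) / len(scores_1_to_5)

recommend = [10, 9, 8, 8, 7, 7, 6, 5, 9, 10]   # 0-10 likelihood to recommend
satisfaction = [5, 4, 4, 3, 4, 5, 3, 2, 5, 4]  # 1-5 overall satisfaction

print(f"NPS:  {nps(recommend):.0f}")           # collapses 11 points into 3 buckets
print(f"CSAT: {csat_top_box(satisfaction):.0f}% satisfied")
```

Note how the NPS calculation throws away information by collapsing an 11-point scale into three buckets before differencing, which is part of why simpler satisfaction measures tend to be more precise for the same sample size.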
The persistence of NPS demonstrates how marketing can override scientific evidence when supported by established consulting and software ecosystems.
The Dogfooding Trap: When Internal Usage Misleads
Product teams' reliance on internal product usage for insights represents another area where intuition-based approaches create systematic blind spots. While dogfooding provides valuable perspective, it cannot substitute for understanding external user contexts and constraints.
Product managers and teams differ fundamentally from target users in knowledge, motivation, priorities, and usage contexts. These differences create predictable biases that can only be corrected through external user research rather than increased internal usage.
The transformation of UX research requires simultaneous evolution from researchers, product teams, and organizational structures. Researchers must develop business fluency and multi-method expertise while companies must integrate insights capabilities into strategic processes rather than treating research as reactive services. The field's future depends on focusing resources on macro strategic work and micro tactical optimization while dramatically reducing the middle-range research that consumes resources without driving business impact.
Practical Implications
- Audit current research projects to eliminate middle-range work that produces interesting but non-actionable insights, focusing resources on strategic macro planning and tactical micro optimization
- Integrate researchers into product development from project inception rather than calling them in reactively, ensuring shared OKRs and consistent collaborative relationships
- Develop researcher business fluency through quarterly report analysis, OKR understanding, and conversion funnel expertise to bridge user insights with business impact
- Replace NPS with simple customer satisfaction metrics that provide more reliable and actionable feedback about user experience quality
- Structure research hiring around relationship quality and partnership depth rather than coverage ratios, protecting researcher time to enable focused high-impact work
- Adopt falsification mindsets that actively seek disconfirming evidence rather than validation of existing beliefs or assumptions
- Combine internal dogfooding with external user research to balance intuitive insights with objective understanding of user contexts and constraints