When you strip away the hype surrounding artificial intelligence, the fundamental question for builders remains: what are we actually scaling? The rules of traditional product management—rigid roadmaps, distinct engineering silos, and deterministic outcomes—are buckling under the weight of generative AI. To understand how the playbook is being rewritten, we look to the leaders at the frontier: Aatish Nayak of Harvey, the AI-native legal platform, and Sachi Shah of Sierra, the conversational AI agent platform.
These leaders are not just integrating API calls into existing software; they are restructuring how enterprises operate. From "forward deployed lawyers" to the concept of "model vibes," the strategies employed by Harvey and Sierra offer a blueprint for the next decade of technology. Here is what product leaders must rethink to survive and thrive in the AI era.
Key Takeaways
- Trust is about predictability, not perfection: Users forgive errors if they are involved in the process (the "IKEA Effect") and if the agent behaves consistently, even when it cannot solve a problem.
- The return of Forward Deployed Engineering (FDE): AI is a solutions business, not just a technology business. Embedding engineers—and domain experts—with customers is essential for discovering high-value use cases.
- Roadmaps are now 3-month sprints: Long-term planning is impossible when underlying model capabilities change overnight. Product teams must focus on "big boulders" and anticipate model lab advancements to avoid building redundant features.
- "Vibes" are a legitimate metric: While quantitative evals are necessary, developing a "model sense"—a qualitative feel for a model's personality and fit—is crucial for user retention.
- Network effects are the new moat: As models become commodities, the competitive advantage shifts to proprietary data access and deep, collaborative workflows between firms and their clients.
Moving Beyond RAG to Complex Agency
Two years ago, the industry was captivated by simple Retrieval-Augmented Generation (RAG)—essentially querying five documents to get an answer. Today, that is table stakes. The frontier has moved toward agents that can handle massive scale and execute complex actions.
For Harvey, this evolution means moving from analyzing a handful of contracts to processing tens of thousands of documents for massive mergers, such as the Netflix and Warner Bros. Discovery deal. It is no longer just about retrieval; it is about verifying information and flagging low-confidence data for human review. Similarly, Sierra has moved beyond simple Q&A to executing tangible tasks, such as tracking "time to music" for Sonos customers or processing complex shipping changes.
The shift here is from passive information retrieval to active workflow automation. The goal is to build an "IDE for lawyers" or a comprehensive customer experience (CX) platform where the AI doesn't just chat—it does work.
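The verify-then-flag pattern described above can be sketched as a confidence-gated pipeline: findings the model is sure about flow through automatically, while low-confidence ones are routed to a human reviewer. This is a minimal illustration, not Harvey's actual implementation; the threshold, field names, and sample findings are all assumptions.

```python
from dataclasses import dataclass


@dataclass
class Finding:
    document_id: str
    answer: str
    confidence: float  # model's self-reported confidence, 0.0 to 1.0

# Illustrative cutoff: findings below it go to a human review queue.
REVIEW_THRESHOLD = 0.85


def triage(findings: list[Finding]) -> tuple[list[Finding], list[Finding]]:
    """Split extracted findings into auto-accepted and human-review queues."""
    accepted = [f for f in findings if f.confidence >= REVIEW_THRESHOLD]
    needs_review = [f for f in findings if f.confidence < REVIEW_THRESHOLD]
    return accepted, needs_review


if __name__ == "__main__":
    sample = [
        Finding("contract-001", "Change-of-control clause present", 0.97),
        Finding("contract-002", "Indemnity cap: $5M", 0.62),
    ]
    accepted, needs_review = triage(sample)
    print(len(accepted), len(needs_review))  # 1 1
```

The design choice is that the system never silently emits an uncertain answer: everything below the threshold is surfaced to a person, which is what turns raw retrieval into a trustworthy workflow.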
The Renaissance of Forward Deployed Engineering
One of the most counter-intuitive trends in the AI era is the shift away from purely self-serve SaaS back toward high-touch, human-led deployment. Both Harvey and Sierra have embraced the "Forward Deployed Engineer" (FDE) model, a strategy famously popularized by Palantir.
Selling Solutions, Not Just Tech
In the current landscape, enterprises aren't just buying software; they are buying outcomes. Because prompt engineering and context management are new disciplines, customers often lack the expertise to implement these tools effectively. FDEs bridge this gap.
"People aren't buying technology, they're buying solutions... The FDE model actually works really well for just being super aligned with customers."
For Harvey, this concept has evolved into the "Forward Deployed Lawyer." These are domain experts who understand the nuances of legal work and can hack together solutions using Harvey’s agent builder. They possess the empathy to understand an associate's pain points and the technical competence to deploy a solution immediately. This creates a tight feedback loop where customer needs are instantly translated into product capabilities.
Agile Planning on Quicksand
In traditional software, a two-year roadmap provides stability. In AI, it provides a false sense of security. With model labs like OpenAI and Anthropic releasing game-changing updates frequently, product teams risk building features that become obsolete overnight.
Aatish Nayak notes that Harvey learned this the hard way. After spending a month building a deep research tool, OpenAI released a similar feature that outperformed their internal build. This necessitates a shift in strategy: product leaders must build a "theory of mind" for the model labs. If a feature—like memory or connectors—seems like a logical next step for a foundational model, it is risky for an application layer company to spend resources building it.
The 3-Month Horizon
The solution is shortening the planning horizon. Harvey and Sierra plan in three-month increments, focusing on "big boulders"—core objectives that are unlikely to change regardless of model updates. This requires a culture that embraces "thrash." Pivoting on a dime because a new model dropped isn't a failure of planning; it is a requirement for survival.
"You have to constantly re-earn product market fit... constantly shifting expectations. What you say is going to happen today is probably not the thing that's going to win the customer in three months."
Redefining Trust: The IKEA Effect
A major misconception in AI product design is that the agent must be 100% accurate to be trusted. In reality, trust comes from transparency and the absence of unpleasant surprises. If an agent is confident but wrong, trust is destroyed. If an agent admits it needs more time or asks for clarification, trust is built.
Harvey leverages the "IKEA Effect" to mitigate hallucination risks. By involving lawyers in the creation of the work product—asking them to verify terms or guide the drafting process—users feel a sense of ownership over the output. They become editors rather than just consumers, which makes them more forgiving of minor errors and more invested in the tool's success.
Safety Layers and Supervisors
On the consumer-facing side, Sierra implements rigorous "supervisor models." These are secondary AI models that monitor the conversation in real-time, checking both the user's input and the agent's output for safety, sentiment, and accuracy before a message is ever sent. This multi-model architecture ensures that even if the primary reasoning model falters, a safety layer prevents brand-damaging mistakes.
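The supervisor-model pattern above can be sketched in a few lines: a secondary check runs on both the user's inbound message and the agent's draft reply before anything is sent. In this sketch a keyword screen stands in for a real supervisor model, and the blocked topics, function names, and fallback messages are illustrative assumptions, not Sierra's architecture.

```python
# Topics the illustrative supervisor refuses to let the agent handle.
BLOCKED_TOPICS = {"medical advice", "legal advice"}


def supervisor_ok(text: str) -> bool:
    """Stand-in for a secondary model scoring safety, sentiment, and accuracy."""
    lowered = text.lower()
    return not any(topic in lowered for topic in BLOCKED_TOPICS)


def respond(user_message: str, draft_reply: str) -> str:
    # Gate 1: screen the user's input before the agent engages.
    if not supervisor_ok(user_message):
        return "I'm sorry, I can't help with that. Let me connect you to a person."
    # Gate 2: screen the agent's own draft before it reaches the user.
    if not supervisor_ok(draft_reply):
        return "Let me double-check that and get back to you."
    return draft_reply


if __name__ == "__main__":
    print(respond("Where is my order?", "Your order ships tomorrow."))
```

The key property is that the primary reasoning model never talks directly to the user; every message passes through an independent layer that can veto it, so a single model failure does not become a brand-damaging mistake.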
Model Sense and the Power of "Vibes"
While data-driven evaluations (evals) are critical for measuring latency and accuracy, they cannot capture the nuances of human interaction. This has given rise to a new product competency: "Model Sense."
Model sense is the intuitive understanding of what a specific model excels at and where it fails. It is the ability to look at a raw output and determine if the tone, structure, and "vibe" fit the use case.
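To make the contrast concrete, a quantitative eval of the kind described above can be as simple as scoring outputs against expected answers while timing the run. The stub model and test cases below are invented for illustration; what such a harness cannot grade (tone, structure, "vibe") is exactly where model sense takes over.

```python
import time


def stub_model(prompt: str) -> str:
    """Stand-in for a real model call, keyed to the illustrative eval cases."""
    return {"2+2": "4", "capital of France": "Paris"}.get(prompt, "unknown")


def run_eval(cases: list[tuple[str, str]]) -> dict:
    """Score exact-match accuracy and wall-clock latency over a set of cases."""
    correct = 0
    start = time.perf_counter()
    for prompt, expected in cases:
        if stub_model(prompt).strip().lower() == expected.lower():
            correct += 1
    elapsed = time.perf_counter() - start
    return {"accuracy": correct / len(cases), "latency_s": elapsed}


if __name__ == "__main__":
    result = run_eval([("2+2", "4"), ("capital of France", "Paris")])
    print(result["accuracy"])  # 1.0
```

A harness like this catches regressions in correctness and speed across model updates, but a response can score 1.0 here and still feel wrong for the brand, which is the gap the qualitative review fills.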
The Voice Sommelier
Sierra has taken this literally by introducing roles like "Voice Sommelier," tasked with matching the perfect voice modulation to a brand's identity. Small details drive massive engagement shifts. For instance, Harvey discovered that enabling British spelling for UK lawyers immediately increased retention. Sierra found that a Southern accent improved metrics for a client with a predominantly Southern customer base.
"The most trusted agent isn't the one that's always right. It's the one that doesn't always surprise."
These are insights that no spreadsheet or automated eval could provide. They require product managers to be deeply "AI-native," constantly testing and feeling the software themselves.
The New Moats: Data and Networks
As intelligence becomes commoditized, where does the enduring value lie? For vertical applications, the answer lies in complex, locked-away data and network effects.
Harvey finds its moat in the unsexy reality of on-premise servers. Much of the world's high-value legal and financial data is trapped in legacy systems. The company that can successfully integrate with these systems—and navigate the intense security compliance required to do so—builds a defensive wall that is difficult to breach.
Furthermore, as firms adopt these tools, network effects begin to take hold. When a Fortune 500 company standardizes on a platform, they push their law firms to use the same platform for collaboration. This transforms the product from a single-player utility into a multi-player ecosystem.
Conclusion
The era of AI requires a fundamental rethinking of product management. The separation between engineering, product, and sales is blurring into a unified "solutions" function. We are moving away from rigid long-term planning toward high-velocity adaptation, where the ability to listen to "frontier customers"—those asking for the impossible—matters more than following a standard roadmap.
For product leaders, the message is clear: You cannot outsource your understanding of the models. You must develop the intuition to judge "vibes," the humility to pivot when the landscape shifts, and the foresight to build deep integrations that foundational models cannot easily replicate.