When Boris Cherny joined Anthropic, he didn’t set out to build a terminal application that would replace the Integrated Development Environment (IDE) for thousands of engineers. He simply needed a way to test the API. Yet, that internal tool evolved into Claude Code, a product that has fundamentally shifted how developers interact with Large Language Models (LLMs).
In a revealing discussion about the development of Claude Code, Cherny outlines a philosophy that challenges traditional software engineering. From the concept of "latent demand" to the counter-intuitive decision to double down on the command line interface (CLI), the insights offer a roadmap for anyone building in the AI era. The central thesis is clear: the way we write software is changing faster than our tools, and the only way to keep up is to build for the future, not the present.
Key Takeaways
- Build for the Future Model: Don't design products around today's model limitations. Build for where the intelligence frontier will be in six months.
- Latent Demand Drives Product: Users won't adopt new behaviors; they only want easier ways to do what they are already trying to do.
- Scaffolding is Temporary: Any code written to compensate for model deficiencies should be viewed as temporary "scaffolding" to be deleted as models improve.
- The Terminal is Timeless: The CLI allows for faster prototyping and fewer distractions, proving superior to complex GUIs for agentic coding.
- The Rise of the "Builder": As productivity surges (up 150% at Anthropic), the distinction between Product Managers and Software Engineers is blurring into a generalist "Builder" role.
The Six-Month Horizon Philosophy
The most critical advice Cherny offers to founders and engineers building on LLMs is a counter-intuitive approach to product roadmap planning. In traditional software development, you build for the current constraints. In AI development, building for today's constraints ensures your product is obsolete by the time it ships.
Cherny explains that the team at Anthropic explicitly ignores the current model's weaknesses if they believe those weaknesses will be solved by the next generation of training.
> At Anthropic, the way that we thought about it is we don't build for the model of today. We build for the model six months from now... Just try to think about what is that frontier where the model is not very good at today because it's going to get good at it.
This philosophy requires a high tolerance for ambiguity. It means building features that might not work perfectly right now—like complex agentic planning or autonomous debugging—with the confidence that the underlying intelligence will catch up to the software architecture. If you optimize too heavily for today's flaws, you are wasting engineering hours on problems that are about to vanish.
Latent Demand: Discovering Features Through Behavior
One of the driving forces behind Claude Code’s feature set is the concept of "latent demand." Cherny argues that you cannot force users to adopt entirely new behaviors. Instead, successful AI products identify what users are already struggling to do and remove the friction.
A prime example of this is the "Plan Mode" in Claude Code. The team didn't invent this feature in a vacuum. They observed that users were naturally trying to get the model to "think" before coding. Users were manually prompting Claude to write specifications or discuss architecture prior to writing syntax.
> People will only do a thing that they already do. You can't get people to do a new thing... If people are doing a thing and you try to make them do a different thing, they're not going to do that. And so you just have to make the thing that they're trying to do easier.
By observing this latent demand, the team formalized "Plan Mode," allowing users to explicitly instruct the agent to architect a solution before executing it. However, consistent with the "Six-Month Horizon" philosophy, Cherny predicts that Plan Mode itself is a temporary feature. As models become more intelligent, they will automatically know when to plan and when to execute, removing the need for the user to toggle modes entirely.
The Art of Scaffolding and The Bitter Lesson
A recurring theme in the development of Claude Code is the impermanence of code. Cherny references Rich Sutton’s famous essay, "The Bitter Lesson," which posits that general methods (like scaling computation) eventually outperform human-designed heuristics. In the context of AI tools, this means that "scaffolding"—the code built to guide or constrain the model—is destined to be deleted.
Managing Context with CLAUDE.md
This principle applies directly to how developers interact with the tool. Many engineers maintain a CLAUDE.md file—a set of instructions and context for the AI. While some users create massive, complex context files, Cherny advises the opposite. He recommends keeping these files minimal and deleting them frequently.
> If you hit this [token limit], my recommendation would be delete your Claude.md and just start fresh... The capability changes with every model. And so the thing that you want is do the minimal possible thing in order to get the model on track.
As the models improve, they require less context and fewer explicit instructions to understand a codebase. Over-engineering your prompts or context files is a form of technical debt. The goal is to rely on the model's native intelligence as much as possible, only adding scaffolding when absolutely necessary.
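The article does not reproduce Cherny's own file, but a hypothetical sketch can show what "minimal" means in practice: a CLAUDE.md that records only what the model cannot infer from the codebase itself, such as build commands and house conventions. The specific commands and rules below are invented for illustration.

```markdown
# CLAUDE.md — keep this short; delete and restart when it stops helping

## Commands (examples — replace with your project's own)
- Build: `npm run build`
- Test: `npm test`

## Conventions
- TypeScript strict mode; no default exports.
- Prefer small, single-purpose modules.
```

Everything else — file layout, naming patterns, existing abstractions — is left for the model to discover on its own, consistent with the advice to lean on native intelligence rather than scaffolding.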
The Renaissance of the Terminal
In an era of spatial computing and advanced GUIs, it is ironic that one of the most advanced AI coding tools lives in the terminal. This was originally a decision born of necessity: building a CLI was the fastest way to prototype. However, it turned out to be the superior form factor for agentic coding.
The terminal offers a text-first interface that aligns naturally with how LLMs operate. It removes the distractions of complex IDE menus and allows for rapid iteration. Cherny notes that the team experimented with various UIs, including web and desktop apps, but the density and speed of the terminal proved hard to beat.
That said, building a modern experience in a 1980s environment required significant design effort. The team iterated on the "spinner" animation nearly 100 times to get the "feel" right. They even considered implementing mouse support within the terminal but abandoned it to maintain the purity and speed of the keyboard-driven workflow.
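Claude Code's actual spinner code is not shown in the article, but the underlying terminal technique is simple: redraw the same line by emitting a carriage return before each frame. Here is a minimal Python sketch of that idea; the frame glyphs and timing are illustrative, not Anthropic's.

```python
import itertools
import sys
import time

# Braille-pattern glyphs commonly used for terminal spinners (illustrative choice).
FRAMES = ["⠋", "⠙", "⠹", "⠸", "⠼", "⠴", "⠦", "⠧", "⠇", "⠏"]

def spin(message: str, seconds: float = 1.0, interval: float = 0.08) -> None:
    """Animate a spinner next to `message` by rewriting one line with '\\r'."""
    deadline = time.time() + seconds
    for frame in itertools.cycle(FRAMES):
        if time.time() >= deadline:
            break
        # '\r' returns the cursor to column 0, so the next write overdraws the line.
        sys.stdout.write(f"\r{frame} {message}")
        sys.stdout.flush()
        time.sleep(interval)
    sys.stdout.write(f"\r✓ {message}\n")

if __name__ == "__main__":
    spin("thinking", seconds=1.0)
```

Even in a toy version, the "feel" lives in small choices like frame set, interval, and the final checkmark line, which hints at why the team iterated on it so many times.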
The Shift from Software Engineer to Builder
Perhaps the most startling insight from the development of Claude Code is the impact it has had on productivity within Anthropic itself. Cherny reveals that since the introduction of the tool, productivity per engineer—measured by pull requests and commit volume—has grown by approximately 150%.
This efficiency gain is fundamentally changing the role of the engineer. The granular work of writing syntax, managing git commands, and debugging memory leaks is being offloaded to the AI. This shift is giving rise to a new archetype: the "Builder."
> I think we're going to start to see the title software engineer go away. And I think it's just going to be maybe builder, maybe product manager... The work that people do, it's not just going to be coding. It's software engineers are also going to be writing specs. They're going to be talking to users.
At Anthropic, the distinction between roles is already vanishing. Product managers, designers, and even finance team members are using Claude Code to ship software. This democratization of coding suggests a future where "technical skills" are less about syntax knowledge and more about systems thinking, scientific reasoning, and the ability to articulate a clear problem statement.
Conclusion
The development of Claude Code serves as a microcosm for the broader industry shift. We are moving away from a world where humans painstakingly write every line of code, toward a reality where humans orchestrate agents to build complex systems. For Cherny, the ultimate metric is not how sophisticated the tool is, but how much it disappears, allowing the user to focus entirely on the product they are trying to bring into existence.
As models continue to scale and capabilities expand, the tools we use today will likely look primitive in a year. But the principles of building for the future, listening to latent demand, and embracing the inevitable obsolescence of scaffolding remain the most reliable guides for navigating the AI transition.