Imagine merging 600 commits in a single day, not because you have a massive team of engineers, but because you have a fleet of AI agents working around the clock. This is the new reality for Peter Steinberger, the creator of Clawdbot and the founder of PSPDFKit. Peter is a standout developer who previously built a PDF framework used on over a billion devices—a task requiring obsessive attention to detail and manual memory management. After a three-year hiatus following burnout, he has returned to the tech scene with a workflow that looks nothing like traditional software development.
Peter’s current approach challenges the sacred cows of engineering: he no longer reads most of the code he ships, he views code reviews as "dead," and he treats Pull Requests as "Prompt Requests." In this deep dive, we explore how Peter is building the future of personal assistants, why "closing the loop" is the difference between frustrating vibe coding and effective engineering, and how the role of the software engineer is fundamentally shifting.
Key Takeaways
- Shift from Code to Architecture: Effective AI engineering requires moving from line-by-line syntax obsession to high-level system architecture and verification.
- The "Closing the Loop" Principle: AI agents become reliable only when they can compile, lint, run tests, and debug their own output without human intervention.
- Prompt Requests over Pull Requests: In an AI-driven workflow, seeing the prompt that generated the code is often higher signal than reading the code itself.
- The Rise of the "Builder": The future belongs to high-agency developers who can bridge product vision, design, and engineering, potentially reducing team sizes by up to 70%.
- CLI Supremacy: Despite the hype around protocols like MCP (Model Context Protocol), simple Command Line Interfaces (CLIs) often provide superior context management for agents.
From Pixel-Perfect C++ to "I Don't Read Code"
To understand the magnitude of Peter Steinberger's shift in perspective, one must understand his background. For over a decade, he led PSPDFKit, a company built on solving the excruciatingly difficult problem of rendering PDFs on mobile devices. This was an environment where every byte of memory mattered, and even "bikeshedding" over whitespace and code structure was treated as a signal of quality and technical-debt discipline.
However, after selling his shares and taking a three-year hiatus to recover from burnout, Peter returned to a landscape transformed by Large Language Models (LLMs). His re-entry point wasn't gradual; he dove straight into using tools like Claude and Cursor, leading to a radical realization: the friction of syntax is gone.
"I ship code I don't read... It is much more about system architecture than having to read every single line."
Peter argues that the majority of application development is simply "plumbing"—massaging data from an API into a database, then back out to a UI. By delegating this plumbing to AI, he functions less as a writer of code and more as an architect or a "human merge button." He describes the feeling not as reckless abandonment, but as managing a team of infinitely fast, occasionally silly junior engineers.
The Core Principle: Closing the Loop
Critics of AI-generated code often point to its tendency to hallucinate APIs or produce bugs. Peter acknowledges this but argues that the failure lies in the workflow, not the model. The secret to making "agentic engineering" work is **closing the loop**.
In a traditional workflow, a developer writes code, runs it, sees an error, and fixes it. In Peter’s workflow, the agent must be empowered to perform that cycle independently. You cannot simply generate code and hope it works; you must demand that the agent verify its own work.
Designing for Verification
When an agent builds a feature, it must also be able to run it. If the agent is building a CLI tool, it should spawn a subprocess to test that tool. If it encounters an error, the agent reads the error log, iterates, and fixes the code.
"To be effective with coding agents, is always like you have to close the loop. It needs to be able to debug and test itself. That's the big secret."
This necessity has actually made Peter a better architect. To allow an agent to verify its work, the system architecture must be testable and modular by default. If the architecture is too complex for an agent to test, it is likely too complex, period.
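The cycle described above can be sketched as a generate/verify/feed-back loop. Everything here is hypothetical scaffolding invented for illustration (the `stub_agent` stands in for a real coding agent); it only shows the shape of "closing the loop":

```python
import os
import subprocess
import sys
import tempfile

def run_verification(cmd):
    """Run the agent's own output (tests, linter, or the built tool)
    and capture everything it could use to fix itself."""
    proc = subprocess.run(cmd, capture_output=True, text=True)
    return proc.returncode == 0, proc.stdout + proc.stderr

def close_the_loop(generate_fix, verify_cmd, max_attempts=5):
    """Generate, verify, and feed errors back until the gate passes.
    `generate_fix(feedback)` stands in for a call to a coding agent."""
    feedback = ""
    for attempt in range(1, max_attempts + 1):
        generate_fix(feedback)              # agent writes or patches code
        ok, feedback = run_verification(verify_cmd)
        if ok:
            return attempt                  # the gate passed
    raise RuntimeError(f"gate still failing after {max_attempts} attempts:\n{feedback}")

# Toy demonstration: a stub "agent" ships a bug on its first pass,
# reads the resulting traceback, and fixes it on the second.
tool = os.path.join(tempfile.mkdtemp(), "tool.py")
attempt_log = []

def stub_agent(feedback):
    attempt_log.append(feedback)
    source = "print('ok')" if feedback else "raise ValueError('bug')"
    with open(tool, "w") as f:
        f.write(source)

attempts_needed = close_the_loop(stub_agent, [sys.executable, tool])
```

The point of the sketch is the gate, not the stub: swap `verify_cmd` for the project's real compile, lint, and test command, and the loop stays the same.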
Clawdbot: The "Human Merge Button" Experiment
Peter’s current project, Clawdbot, is a hyper-personal assistant that runs locally. It has full access to his computer, calendar, and messages, acting as a "second brain" that is proactive rather than reactive. It might wake him up if he oversleeps or suggest reaching out to a friend who is in town.
The development velocity on Clawdbot is staggering. Peter mentions merging up to 600 commits in a single day. The community engagement has exploded, with the repository earning thousands of stars in a week. To manage this, Peter utilizes a workflow that relies heavily on "weaving" code.
Weaving vs. Writing
Instead of writing a feature from scratch, Peter engages in a conversation with the AI. He asks the model to "weave" a new feature into the existing architecture. He directs the model to look at specific folders where similar problems were solved previously, using his own codebase as few-shot examples for the AI.
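One way to picture this, as a hypothetical sketch rather than Peter's actual tooling (`weave_prompt` and its parameters are invented for illustration), is a helper that assembles the feature request together with code from folders where similar problems were already solved:

```python
from pathlib import Path

def weave_prompt(feature_request, example_dirs, pattern="*.py"):
    """Assemble a 'weave' prompt: the feature request plus existing
    modules, used as few-shot examples of the codebase's own patterns."""
    parts = [
        "Weave this feature into the existing architecture:",
        feature_request,
        "Match the patterns in these existing modules:",
    ]
    for directory in example_dirs:
        for path in sorted(Path(directory).glob(pattern)):
            parts.append(f"--- {path.name} ---")
            parts.append(path.read_text())
    parts.append("Close the loop: run the tests and fix any failures before you finish.")
    return "\n".join(parts)
```

The final instruction matters as much as the examples: the prompt itself tells the agent to verify its own work.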
This creates a new paradigm for Pull Requests (PRs). In a traditional team, a PR is a request for code review. In Peter’s world, PRs are "Prompt Requests."
"I ask people to please add the prompts... I read the prompts more than I read the code because to me this is a way higher signal of like how did you get to the solution."
If the logic of the prompt is sound and the "gate" (the automated testing and verification loop) passes, the implementation details become secondary. The focus shifts from "is this variable named correctly?" to "does this feature fulfill the architectural vision?"
The Future of Engineering Teams
What does this mean for the future of software companies? Peter predicts a difficult transition for large enterprises. The strict separation of roles—product managers, designers, and engineers—is becoming a liability. The AI-enabled workflow favors the "Builder": a high-agency individual who can conceive a product, design the system, and steer the agents to build it.
Smaller, Faster Teams
Peter estimates that he could run a company like his previous one with perhaps 30% of the headcount today. This isn't just about efficiency; it's about the ability to parallelize tasks mentally. A single developer can have five different agents "cooking" different features simultaneously, switching contexts like a chess grandmaster playing multiple boards.
The Fate of the "Code Lover"
There is a divide forming in the developer community. Those who love the product—the outcome—are thriving with AI tools. However, those who love the puzzle—the manual implementation of algorithms and the craft of syntax—often struggle or feel alienated by this shift.
"I call it agentic engineering... all the mundane stuff of writing code is automated away, I can move so much faster. But also means like I have to think so much more."
Conclusion
Peter Steinberger’s workflow with Clawdbot is admittedly extreme—a "YOLO project" where an AI has root access to his machine. However, it serves as a harbinger for the industry. We are moving away from an era where the primary bottleneck was typing syntax, toward an era where the bottleneck is system design and verification.
For developers looking to adapt, the advice is clear: stop treating AI as a glorified autocomplete. Start building systems that allow agents to close the loop, verifying their own work. The future belongs to those who can direct the machine, not just those who can type the language it speaks.