Key Takeaways
- Building your own agents: Instead of purchasing external solutions, companies can build custom, high-impact agents internally using existing infrastructure like Snowflake and YAML-based semantic documentation.
- Adopt a "coding" mindset: LLMs are currently optimized for programming tasks. By framing business logic and data queries as coding tasks, you can achieve disproportionately better results.
- Embrace radical humility: Best practices in the AI space shift rapidly. What worked six months ago may already be obsolete; successful teams must be willing to "light the code on fire" and start over as model capabilities evolve.
- Optimistic locking: Move faster by removing mandatory approval gates. Allow teams to ship independently, while maintaining the ability to veto changes when necessary to manage risk.
- Software abundance: As the cost of software creation trends toward zero, the world will likely become "software heavy." Much like the rise of YouTube in the video industry, we are entering an era where the total volume of software produced will explode.
The Shift Toward Agentic Workflows
The current landscape of coding agents feels different from previous cycles. As tools evolve rapidly, the ground underneath developers is constantly shifting. Building products in this environment requires a balance of speed and a willingness to discard outdated methodologies. At Vercel, the team found that their internal data agent, DZero, was initially flawed because it relied on a rigid "tools in a loop" infrastructure. By shifting to an architecture that treats data queries like a coding task, using simple YAML files to describe business semantics, the agent became significantly more effective.
The intuition is that you have to think about what the model was trained on and what it is optimized for. For now, that is largely coding tasks, and if you can make things look like a coding task even when they're not, you get disproportionately good results.
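To make the "frame it as a coding task" idea concrete, here is a minimal sketch. The YAML content, metric names, and prompt wording are all hypothetical illustrations, not Vercel's actual semantic files: the point is only that the business semantics live in a plain, code-like document that gets handed to the model, which is then asked to write SQL rather than to call bespoke tools in a loop.

```python
# Hypothetical semantic-layer file: plain YAML describing what a business
# metric means, in terms an LLM already understands from reading code.
SEMANTICS_YAML = """\
metric: weekly_active_teams
description: distinct teams with at least one deployment in the last 7 days
table: deployments
dimensions: [team_id, created_at]
"""

def build_prompt(question: str) -> str:
    """Frame a data question as a coding task: give the model the
    semantic docs as context and ask it to write one SQL query."""
    return (
        "You are writing SQL for Snowflake.\n"
        "Business semantics:\n"
        f"{SEMANTICS_YAML}\n"
        f"Task: write one SQL query that answers: {question}\n"
    )

print(build_prompt("How many teams were active last week?"))
```

Because the model sees a familiar shape (documentation plus "write code"), it can lean on its programming training rather than on a rigid tool-calling protocol.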
Humility in the Age of AI
In the early days of agents, it is easy to become attached to the code you have built. However, the rapid advancement of models like Claude 3.5 Sonnet means that previously "best practice" architectures can become legacy almost overnight. Developers must resist the urge to over-engineer solutions. If an agent can be built with 50 lines of code rather than a complex framework, the simpler path is almost always superior. This humility is the key to remaining relevant while the underlying technology matures.
From Prototyping to Production
When Vercel launched v0 in 2023, the initial assumption was that it would be a tool for front-end engineers. Instead, it found its strongest market fit with backend engineers who possessed the technical knowledge to debug and refine the AI's output. Over time, as model intelligence improved, the platform pivoted to support full-stack applications. This evolution underscores the reality of the current AI boom: the product is rarely static.
Handling the Shadow IT Onslaught
The rise of generative AI has ushered in a new wave of "Shadow IT," where non-engineers and business stakeholders are creating their own internal applications. Rather than viewing this as a threat, forward-thinking organizations should embrace the infrastructure challenge. The goal is to make it as easy to deploy these generated applications as it is to write the code that creates them. As CTOs and developers lean back into active coding, the focus must remain on the outer loop—ensuring these agent-generated apps can run safely and reliably in production.
Infrastructure and the Philosophy of Speed
A common tension exists between the need for speed and the requirement for system reliability. At Vercel, this is managed through a philosophy of optimistic locking. Instead of requiring formal approvals for every change, the system allows individual engineers to ship, with the organization retaining the power to veto if necessary. This approach empowers developers to own their work from end to end while preventing the slow, round-trip cycles typical of legacy bureaucratic organizations.
There are no approvals. Anyone can ship anything, but they have to tell the organization that they are going to do it, and the organization can veto things.
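The phrase "optimistic locking" is borrowed from databases, and the database pattern is a useful mental model for the shipping process: writers proceed without taking a lock, and a conflict is caught at commit time rather than blocked up front. A minimal sketch of the underlying pattern (illustrative names, not any real system):

```python
# Optimistic locking: no lock is held while working; a stale version
# at commit time rejects the write, a "veto" after the fact rather
# than an approval gate before the work starts.
class Record:
    def __init__(self, value):
        self.value = value
        self.version = 0

def commit(record: Record, new_value, expected_version: int) -> bool:
    """Apply the change only if nobody else shipped in the meantime."""
    if record.version != expected_version:
        return False              # vetoed: state moved underneath us
    record.value = new_value
    record.version += 1
    return True

r = Record("v1")
seen = r.version                  # announce intent by noting current state
print(commit(r, "v2", seen))      # ships: no conflict
print(commit(r, "v3", seen))      # rejected: version has advanced
```

The organizational analogue is the same trade: everyone moves at full speed, and the rare conflict costs a redo rather than every change paying an approval tax.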
Managing Global Risk
The danger of moving fast is the potential for large-scale outages, particularly with global control plane configurations. The strategy here is not to move slowly, but to move incrementally. By deploying changes in waves—testing in single, autonomous regions before rolling out globally—the risk of a system-wide failure is mitigated. This architectural choice allows for high velocity without sacrificing the stability required by a global customer base.
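The wave-based rollout described above can be sketched in a few lines. The region identifiers, deploy hook, and health check here are hypothetical placeholders; the structure is what matters: ship to one autonomous region first, and widen only while checks stay green.

```python
# Incremental wave rollout sketch: each inner list is a wave, deployed
# together; a failed health check halts the rollout before it goes global.
WAVES = [["iad1"], ["sfo1", "fra1"], ["hnd1", "syd1", "gru1"]]  # hypothetical regions

def roll_out(deploy, healthy) -> list[str]:
    """Deploy wave by wave; return the regions successfully rolled out,
    stopping at the first wave that fails its health checks."""
    done = []
    for wave in WAVES:
        for region in wave:
            deploy(region)                       # push the change
        if not all(healthy(r) for r in wave):
            return done                          # halt: limit blast radius
        done.extend(wave)
    return done

# Usage: with all regions healthy, the rollout reaches every region.
print(roll_out(deploy=lambda r: None, healthy=lambda r: True))
```

A single unhealthy region in an early wave stops the rollout with most of the fleet untouched, which is what lets teams keep high velocity without risking a system-wide failure.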
The Future of the Software Workforce
The introduction of powerful agentic tools is fundamentally changing what it means to be an engineer. The role is shifting from pure individual contribution to something closer to management. Senior engineers are learning to lead "minions" (agents), while junior engineers—who are often more comfortable with the fluidity of AI tools—are seeing their productivity skyrocket. This transformation does not necessarily mean fewer jobs; rather, it suggests a shift in the nature of the work itself.
A Software-Heavy Future
Is the world becoming "software light" or "software heavy"? Drawing an analogy to the video industry, one could argue we were "video light" before the advent of accessible creation platforms like YouTube. Today, more people than ever are engaged in professional-grade video production. Similarly, as software becomes cheaper to create, the total demand for it will likely grow, leading to an abundance of applications. While this will require a new focus on maintenance and lifecycle management, the long-term societal outcome—a world where software is ubiquitous—will likely be a net positive for productivity and innovation.
Ultimately, the most exciting frontier for developers is not just building new things, but exploring how agents can automate the maintenance of the software we have already created. We are in the early stages of a transition that mirrors the introduction of the mainframe in the 1960s. It will be a challenging shift for some, but the trajectory points toward a future of greater complexity and deeper technological integration.