For decades, the divide between "technical" and "non-technical" product people has been a hard line. If you couldn't write code, your ability to build independent products was severely limited. That era is effectively over. Zevi Arnovitz, a Product Manager at Meta with zero traditional engineering background, is proving that the definition of a builder has fundamentally changed. By leveraging tools like Cursor and Claude, he isn't just prototyping; he is shipping revenue-generating, fully localized, complex applications on his weekends.
This isn't about simply asking a chatbot to "make an app." It is about developing a sophisticated workflow that treats Artificial Intelligence not as a tool, but as a technical co-founder. By moving beyond basic "vibe coding" and establishing rigorous architectural planning and peer review systems, non-technical leaders can now oversee production-grade engineering work.
Key Takeaways
- The "AI CTO" Mindset: Success requires treating the AI as a technical partner that challenges your assumptions, rather than a subservient code generator.
- Workflow Over Code: The secret lies in a structured process (Issue → Exploration → Plan → Execute → Review) rather than random prompting.
- The Peer Review Hack: Non-technical builders can solve the "code review" bottleneck by having different AI models (Claude, OpenAI, Gemini) review each other's work.
- Transitioning from Vibe Coding: While tools like Bolt and Lovable are excellent for starting, complex applications eventually require the granular control of Cursor and local environments.
- Continuous Learning: The most effective builders use specific prompts to turn development blockers into "learning opportunities," effectively using the AI to teach them architecture as they build.
From Vibe Coding to Serious Engineering
The current landscape of AI coding tools is often split into two categories. On one side are "vibe coding" platforms like Bolt, Lovable, and Replit. These are excellent for speed and offer a low barrier to entry, handling everything from the database to the deployment with opinionated defaults.
However, as a project grows in complexity, these opinionated "harnesses" can become limiting. If you need to implement a specific payment gateway or a complex database migration, an all-in-one agent might struggle. This is where the transition to Cursor and Claude Code happens.
Zevi describes this shift as graduating from a managed experience to full control. In Cursor, code is just text files. This allows you to leverage the specific strengths of different models—using Gemini 2.0 Flash for UI work because of its visual prowess, while using OpenAI's o1 or Claude 3.5 Sonnet for complex logic—all within the same environment.
"It’s not that you will be replaced by AI. You will be replaced by someone who’s better at using AI than you."
The "AI CTO" Workflow
The difference between a broken prototype and a shipping product is planning. Zevi has developed a system of "Slash Commands"—reusable prompts stored in his codebase—that guide the AI through a professional software development lifecycle. Instead of jumping straight to code, the AI acts as a CTO, ensuring the architecture is sound.
1. The Exploration Phase
Before writing a single line of code, the workflow begins with an exploration phase. Using a custom prompt, the AI is instructed to read the relevant files, understand the current state of the application, and importantly, challenge the user's requirements.
Just as a human engineering manager would ask clarifying questions about edge cases or database impacts, the AI is prompted not to be a "people pleaser." It must validate that the proposed feature makes architectural sense before proceeding.
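In Cursor, a slash command like this is just a reusable prompt stored as a markdown file in the repository. The filename, location, and wording below are illustrative sketches, not Zevi's actual prompts:

```markdown
<!-- .cursor/commands/explore.md — hypothetical example of an exploration prompt -->
# /explore

Before proposing any code:

1. Read the files relevant to this feature and summarize the current
   architecture in your own words.
2. List the edge cases, database impacts, and security concerns this
   change could introduce.
3. Do NOT be a people pleaser: if the requirement conflicts with the
   existing architecture, say so and propose an alternative.
4. Ask me clarifying questions before writing a single line of code.
```

Because the command lives in the codebase, it is versioned alongside the code and improves over time like any other asset.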
2. The Planning Phase
Once the approach is agreed upon, the AI generates a detailed plan in a Markdown file. This document acts as the source of truth. It outlines:
- A TL;DR of the feature.
- Critical technical decisions made.
- A step-by-step checklist of tasks.
This plan is crucial because it decouples the thinking from the coding. If you use different models for different steps (e.g., swapping to a faster model for boilerplate code), they all refer back to this central plan to maintain context.
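The plan document itself can be as simple as a markdown file that every model reads before touching code. A hypothetical skeleton (the feature, filenames, and tasks are invented for illustration):

```markdown
<!-- plans/checkout-feature.md — hypothetical plan file -->
# Feature: Hosted Checkout Flow

## TL;DR
Add a one-time-payment checkout using the payment provider's hosted page.

## Key technical decisions
- Use the provider's hosted checkout rather than a custom card form.
- Store the payment customer ID on the existing `users` table.

## Task checklist
- [ ] Add `payment_customer_id` column (migration)
- [ ] Create `/api/checkout` endpoint
- [ ] Handle the payment-completed webhook
- [ ] Add success/cancel redirect pages
```

Because the checklist is explicit, a cheaper or faster model can pick up mid-plan without re-deriving the architecture.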
3. Execution and "Composer"
With a plan in place, the execution phase begins. Using Cursor's "Composer" feature, the AI reads the plan and implements the changes across multiple files simultaneously. Because the logic was settled during the exploration and planning phases, the actual coding becomes a high-speed execution task, often completed in minutes.
Solving the Non-Technical Bottleneck: Peer Review
The single biggest fear for non-technical founders is the inability to review code. If you cannot read the syntax, how do you know if the AI wrote secure, performant, or bug-free code?
Zevi’s solution is a multi-model peer review system. He treats different AI models as different "staff engineers" with unique personalities and strengths:
- Claude: The communicative, collaborative CTO who is great at explaining concepts.
- OpenAI (o1/Codex): The brilliant but silent genius who sits in a dark room and solves the hardest logic problems without talking much.
- Gemini: The creative, chaotic artist who excels at frontend design but might make questionable architectural choices if left unsupervised.
After a feature is built, the workflow involves running a `/review` command. One model reviews the code and flags issues. Then a different model is asked to review the same code. Finally, a "Peer Review" prompt is used to make them debate the findings.
"I will take the output from one model and say: 'Dev Lead 1 says this is a bug.' The other model might get sassy and say, 'I have explained this three times, this is by design.' You let them fight it out until the code is clean."
This adversarial process catches bugs that a single model might miss and creates a system of checks and balances that mimics a human engineering team.
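The debate step described above can also be captured as a stored prompt. A hypothetical sketch (the filename and wording are illustrative):

```markdown
<!-- .cursor/commands/peer-review.md — hypothetical example -->
# /peer-review

You are Dev Lead 2. Dev Lead 1 (a different model) reviewed the same
diff and produced the findings pasted below.

For each finding, respond with exactly one of:
- AGREE: the issue is real; propose a fix.
- DISPUTE: explain why the code is correct by design.
- ESCALATE: you cannot decide without more context; state what you need.

Do not defer to Dev Lead 1 out of politeness. Findings:
{{paste Dev Lead 1's review here}}
```

Forcing a verdict per finding, rather than an open-ended summary, is what turns two agreeable assistants into an adversarial review pair.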
Turning Failures into Documentation
Even with advanced workflows, AI makes mistakes. The key to long-term velocity is not just fixing the bug, but fixing the process.
When the AI fails, the correct response is to conduct a post-mortem. You ask the model: "What in your system prompt or documentation caused you to make this mistake?"
The AI will often identify that it lacked context on a specific library or misunderstood a directory structure. You then update the project's documentation (often a `.cursorrules` file or a specialized markdown file) to prevent that specific error from happening again. Over time, the codebase becomes "AI-native," filled with context files written specifically to help the agents navigate the project effectively.
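Over many post-mortems, a rules file accumulates these corrections. The entries below are invented examples of the kind of project-specific guardrails that might result:

```markdown
<!-- .cursorrules — hypothetical entries accumulated from post-mortems -->
- All database access goes through `src/db/`; never import the ORM
  client directly in route handlers.
- Translations live in `locales/*.json`; new user-facing strings must
  be added to every locale file, never hardcoded.
- This project uses pnpm, not npm; do not edit `package-lock.json`.
```

Each rule exists because an agent once got it wrong, which is what makes the file far more valuable than generic style guidelines.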
The Collapse of Titles and the Future of Building
We are entering a period where job titles and responsibilities will begin to collapse. The distinction between a Product Manager, a Designer, and an Engineer is blurring. A PM can now fix a UI bug; an Engineer can now generate marketing copy.
For junior professionals or those looking to break into the industry, this is an unprecedented equalizer. Experience matters, but the ability to leverage these tools to ship real products creates an unfair advantage. The expectation for the next generation of product leaders isn't just to manage tickets, but to prototype, build, and validate ideas directly.
The barrier to entry has never been lower. The tools are available, the workflows are being defined, and the only remaining friction is the willingness to open the terminal and start learning.
Conclusion
Zevi’s approach demonstrates that "technical" is no longer a binary trait—it is a spectrum of resourcefulness. By combining a growth mindset with a rigorous AI workflow, anyone can build software. The goal is not to have the AI do the work for you so you can disengage, but to use the AI to amplify your capability so you can tackle problems that were previously impossible.