Are you ready to build AI apps that truly empower users? Let's dive into why current AI integrations often fall short and explore a different vision for the future. Discover how accessible "system prompts" and powerful "tools" can transform AI from a mere chatbot into a collaborative, customizable assistant that handles the work you don't want to do.
Key Takeaways
- Current AI applications often feel clunky because they bolt AI onto old software paradigms, making them feel like a chore rather than a superpower.
- The "hidden system prompt" in many AI apps limits user customization and leads to generic, unhelpful outputs, like an overly formal Gmail draft.
- Enabling users to access and edit system prompts allows them to "program" the AI with natural language, making the tool truly personal and effective.
- This shift mirrors the evolution of technology, moving from "horseless carriages" (AI on old interfaces) to entirely new AI-native software designs.
- The future of AI apps lies in user-trainable agents that act as collaborators, learning from feedback and automating repetitive tasks across various platforms.
- Developers should focus on building powerful "tools" that AI agents can leverage, allowing users to define workflows without needing to code.
- The goal is to move beyond one-size-fits-all chatbots towards highly personalized, intelligent assistants that adapt to individual needs and preferences.
Timeline Overview
- 0:00 – Intro: A brief introduction to the problem with current AI apps and the promise of user-programmable software.
- 0:52 – Why AI apps are broken: Discussion on why existing AI integrations often feel like more work than they save.
- 2:39 – The problem with Google's AI App: A specific example using Gmail's AI draft feature to highlight common issues.
- 4:00 – A better way to build AI apps: Introducing the concept of AI-native software that anticipates user needs.
- 5:27 – The hidden system prompt: Explanation of the underlying instructions given to AI models that users can't see or edit.
- 7:57 – What if you could access the system prompt?: Exploring the potential for user customization if system prompts were editable.
- 9:40 – The developer-user divide in software: An analysis of how traditional software development paradigms hinder AI app design.
- 10:48 – The "horseless carriage" metaphor: Comparing current AI apps to early automobile designs that didn't fully embrace new technology.
- 13:35 – Email reading agent demo: A demonstration of a personalized AI agent that automates email management.
- 14:34 – Everyone can be a prompt engineer: Arguing that natural language makes AI programming accessible to a broad audience.
- 16:23 – Why coding agents feel magical: Reasons why AI coding tools are exceptionally effective and demonstrate AI's true potential.
- 21:42 – Training AI like a human assistant: How AI can learn and adapt from user feedback, much like a new employee.
- 28:45 – The problem with chatbot interfaces: Criticizing the limitations of the prevalent chatbot paradigm for AI interaction.
- 29:10 – Advice for founders: Guidance for entrepreneurs on how to approach building AI-native companies.
The Current State of AI Apps: Why They Miss the Mark
Many of us have had two very different experiences with AI. On one hand, tools like Cursor and Windsurf feel incredibly powerful, allowing us to "create anything I want" and feel like a "rocket ship for the mind." But then, there's the other side: AI integrated into existing apps that feels like "more of a chore." This disconnect arises because developers are often using "old software development techniques" for new AI capabilities.
A prime example is Gmail's AI draft-writing feature. While the underlying Gemini model is "absolutely incredible," its power is "hidden behind a UI that makes it really frustrating to use." The drafts it produces are generic and don't sound like the user, forcing them to spend time editing. Worse, the prompt the user has to type is "roughly as long as the draft itself," negating any real time savings.
Unveiling the Hidden System Prompt
The core problem with many AI applications lies in the hidden system prompt. When you interact with an AI feature like Gmail's draft writer, your input is combined with a secret "system prompt" that tells the AI "who it is and what its job is." In the Gmail example, this hidden prompt likely dictates a "formal business tone and correct punctuation," which is why the output comes out so generic. Users can neither see nor edit this crucial instruction.
This approach reflects an outdated "developer-user divide." Historically, users have never seen or touched the code that developers write. AI, with its natural-language programming, changes that. Imagine if Gmail let you edit its system prompt to say, "You're Pete, a busy YC partner... keep emails as short as possible." The resulting draft – "Hi Gary, my daughter's sick with the flu so I can't come in today thanks." – would sound authentically like Pete. It lets users "explain to the AI model how I write emails in general so that I don't have to do it every single time."
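To make the mechanics concrete, here is a minimal sketch of how a hidden system prompt gets combined with the user's request, assuming an OpenAI-style chat-message format. The prompts below are illustrative guesses, not Gmail's actual instructions; in a real app the assembled messages would be sent to a chat-completion API.

```python
# Illustrative only: neither prompt is Gmail's real system prompt.

HIDDEN_SYSTEM_PROMPT = (
    "You are an email assistant. Write drafts in a formal business tone "
    "with correct punctuation."
)

# What the user might write if they could edit the system prompt themselves.
USER_SYSTEM_PROMPT = (
    "You're Pete, a busy YC partner. Keep emails as short as possible "
    "and skip formal sign-offs."
)

def build_messages(system_prompt: str, user_request: str) -> list[dict]:
    """Assemble the message list in the OpenAI-style chat format."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_request},
    ]

request = "Tell Gary I can't come in today because my daughter is sick."

# Today: the system prompt is fixed by the developer and invisible to the user.
print(build_messages(HIDDEN_SYSTEM_PROMPT, request))

# The proposal: let the user supply the system prompt, so the same request
# produces a draft that actually sounds like them.
print(build_messages(USER_SYSTEM_PROMPT, request))
```

The user's request stays the same in both cases; only the system prompt changes, which is exactly the piece today's apps keep hidden.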
Beyond the "Horseless Carriage" Metaphor
The current state of AI app design is akin to the "horseless carriage" metaphor. Early automobiles looked just like carriages with an engine bolted on, ignoring fundamental design changes needed for speed and efficiency. Similarly, many AI applications are just "slotting AI into the Gmail application" or other existing software. They ask, "How can we replace the horse and put an engine in?" This overlooks AI's true potential: automating repetitive, busy work.
The real promise of AI is not to make human work in applications slightly easier, but to offload work that "doesn't really need my full brain power." Consider an "email reading agent" that automates inbox management based on a user's customizable system prompt: "If the email is from my wife draft a reply and label it personal." This "programming" is accessible to "non-programmers" because it uses natural language, turning anyone into a "prompt engineer."
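Here is a rough sketch of what such an agent's inner loop might look like. The rule text, the classify() stub, and the print statements are all stand-ins; a real agent would call an LLM to interpret the rules and your mail provider's API to act on them.

```python
# A hypothetical email reading agent driven by a user-editable system prompt.

USER_RULES = """
You are my email assistant. For each incoming email, pick exactly one action:
- If the email is from my wife, draft a reply and label it "personal".
- If it is a newsletter, archive it.
- Otherwise, leave it in the inbox.
"""

def classify(rules: str, email: dict) -> str:
    """Stand-in for an LLM call that reads the rules plus the email and
    returns one of: 'draft_and_label', 'archive', 'leave'."""
    sender = email["from"].lower()
    if "wife@example.com" in sender:
        return "draft_and_label"
    if email.get("is_newsletter"):
        return "archive"
    return "leave"

def handle(email: dict) -> None:
    action = classify(USER_RULES, email)
    if action == "draft_and_label":
        print(f"Drafting reply to {email['from']} and labeling it 'personal'")
    elif action == "archive":
        print(f"Archiving '{email['subject']}'")
    else:
        print(f"Leaving '{email['subject']}' in the inbox")

handle({"from": "wife@example.com", "subject": "Pick up milk?"})
handle({"from": "news@weekly.dev", "subject": "Issue #42", "is_newsletter": True})
```

The important part is that USER_RULES is plain English, owned by the user, while the surrounding loop is the part the developer ships.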
Training AI Like a Human Assistant
The future of AI applications involves training AI the way you would train a human assistant. Just as you wouldn't hand a new employee a 30-page instruction manual on day one, you shouldn't expect to write the perfect prompt up front; the AI should learn gradually. It could review past work, like your old emails, to "create a draft prompt for me." Users could then provide feedback, saying "I would have phrased that this way," and the AI would "take that feedback and edit their own system prompt."
This iterative process suggests that while developers might set up the initial framework, the user ultimately "programs" the AI to their specific needs. Tools are emerging that could have an "AI system prompt writer sitting next to you," translating natural language feedback directly into system prompt adjustments. This means users likely won't "touch the system prompt" directly, but it will be "custom to them" based on their ongoing interactions.
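A minimal sketch of that feedback loop follows, with a revise_prompt() stub standing in for the LLM call that would actually rewrite the instructions ("here is my current prompt, here is the feedback, produce an improved prompt"):

```python
# Hypothetical feedback loop: user feedback gets folded back into the
# system prompt so the agent's instructions stay custom to that user.

system_prompt = "Draft emails for Pete. Keep them short."

def revise_prompt(current_prompt: str, feedback: str) -> str:
    """Stand-in for an LLM rewrite; here we simply append the feedback as a rule."""
    return f"{current_prompt}\n- {feedback}"

# The user reviews a draft and reacts in plain language...
feedback = "I would have phrased that more casually, and I never use sign-offs."

# ...and the agent edits its own system prompt in response.
system_prompt = revise_prompt(system_prompt, feedback)
print(system_prompt)
```

In this setup the user never opens the prompt editor; they just give feedback, and the prompt drifts toward their preferences over time.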
The Power of Tools for AI Agents
For AI agents to be truly useful, they need access to tools. If users are the ones writing the system prompts, what do developers focus on? Building the "tools" that these agents can use. For the email reading agent, those tools include functions for "labeling an email," "archiving an email," or "writing a draft." An email inbox, in many ways, is "a to-do list or a whole bunch of chores in life." Only a "very few emails that I write are really heartfelt original thinking"; most are "transactional." With powerful tools, an email reading agent could handle much of this, especially "across things like Slack and your calendar and your Notion or your Jira or your Linear or whatever."
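As an illustration, the tools for such an email agent might be declared in an OpenAI-style function-calling schema like the one below. The tool names and fields are assumptions, not any particular product's API; the point is that the developer defines the capabilities while the user's system prompt decides when each one gets used.

```python
# Illustrative tool definitions for an email reading agent.
# These would be passed to the model alongside the user's system prompt.

EMAIL_TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "label_email",
            "description": "Apply a label to an email.",
            "parameters": {
                "type": "object",
                "properties": {
                    "email_id": {"type": "string"},
                    "label": {"type": "string"},
                },
                "required": ["email_id", "label"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "archive_email",
            "description": "Move an email out of the inbox.",
            "parameters": {
                "type": "object",
                "properties": {"email_id": {"type": "string"}},
                "required": ["email_id"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "write_draft",
            "description": "Create a reply draft without sending it.",
            "parameters": {
                "type": "object",
                "properties": {
                    "email_id": {"type": "string"},
                    "body": {"type": "string"},
                },
                "required": ["email_id", "body"],
            },
        },
    },
]
```

Note that none of these tools send email on their own; drafting and labeling are reversible, which keeps the human in the loop while the agent does the busywork.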
Companies like Den are building "cursor for knowledge work," chaining together these "tools for agents." Imagine your boss sending a message on Slack, and an AI agent pulling terms and conditions from Google Docs, sending them to your legal team via email for review, and then publishing them on GitHub – all controlled from one place. This feels like "a rocket ship for the mind" because it "takes an agent... into something that can go off and accomplish things in the world on your behalf." This is a significant step beyond the "chatbot paradigm," which simply embedded "chat agent[s]" everywhere. The true promise of LLMs is not just producing text, but "automating work on our behalf."
Advice for Founders: Rethink Everything with AI
For founders, this is "one of the most exciting time[s] to be a founder." The key is to rethink every existing tool from the ground up with AI. The "AI native version of a lot of tools will look different from the versions we're used to using." This means going "beyond simply like embedding a chatbot inside their existing product." Instead of asking, "How do I insert AI into my tool?", founders should ask, "how would I design this tool from scratch to offload as much repetitive work from the user as possible so they can focus on what's important?"
Conclusion
The next generation of AI apps must move beyond basic chatbots to embrace user-programmability through accessible system prompts and robust tools. This shift will create truly intelligent, customized software that automates busywork and enhances human productivity, fulfilling AI's true potential.