
From Software Engineer to AI Engineer: How Janvi Kalra Landed at OpenAI After Interviewing at 46 Companies


Janvi Kalra reveals her framework for evaluating AI startups, self-teaching AI engineering skills, and navigating 46 interviews to land at OpenAI.

After transitioning from software engineering to AI engineering in just two years, Janvi Kalra now works on safety systems at OpenAI that process 60,000 requests per second.

Key Takeaways

  • AI companies fall into three categories: product companies (building on models), infrastructure companies (tools for AI products), and model companies (building the intelligence)
  • Self-learning AI engineering requires hands-on experimentation through hackathons, building projects, and learning by doing rather than traditional coursework
  • Successful startup evaluation demands investigating four key areas: revenue growth, market size, customer obsession, and competitive positioning
  • AI engineers must be comfortable scrapping work as models improve—what took weeks to build may become obsolete with the next model release
  • Modern engineers need to be full-stack, product-minded, and business-aware as AI tools blur traditional role boundaries
  • OpenAI maintains startup speed at scale through high engineer trust, minimal approval processes, and systems designed for rapid deployment
  • Due diligence on startups should include talking to investors, reading specialized publications, and directly evaluating unit economics
  • The transition from software to AI engineering is accessible to engineers at any experience level since the field is so new that everyone is learning together

Timeline Overview

  • 00:00–02:31 Intro: Introduction to Janvi Kalra's journey from software engineer to AI engineer at OpenAI
  • 02:31–03:35 How Janvi got her internships at Google and Microsoft: Strategic essay writing and applying through university portals without connections
  • 03:35–07:11 How Janvi prepared for her coding interviews: Using "Cracking the Coding Interview" and LeetCode preparation strategies
  • 07:11–08:59 Janvi's experience interning at Google: Working on the search team, exploring internal tools, and learning from senior engineers
  • 08:59–11:35 What Janvi worked on at Microsoft: Azure OS team building file system integration for Azure blobs
  • 11:35–15:00 Why Janvi chose to work for a startup after college: Comparing big tech stability vs startup breadth and learning opportunities
  • 15:00–16:58 How Janvi picked Coda: Evaluating startups based on smart people and passionate product focus
  • 16:58–18:20 Janvi's criteria for picking a startup now: Four-pillar framework including revenue growth, market size, customer obsession, and competition
  • 18:20–19:12 How Janvi evaluates 'customer obsession': Researching Reddit, YouTube, and directly contacting users for real feedback
  • 19:12–21:38 Fast—an example of the downside of not doing due diligence: Cautionary tale of engineers joining without verifying revenue claims
  • 21:38–25:48 How Janvi made the jump to Coda's AI team: Being rejected initially, self-learning for 5 months, then earning an invitation
  • 25:48–27:30 What an AI Engineer does: Building products on models through experimentation, prototyping, and production deployment
  • 27:30–30:34 How Janvi developed her AI Engineering skills through hackathons: Learning by doing through weekend events and online competitions
  • 30:34–37:40 Janvi's favorite AI project at Coda: Workspace Q&A: Building a RAG chatbot that evolved into the Coda Brain product
  • 37:40–40:44 Learnings from interviewing at 46 companies: Six-month process, market categorization, and due diligence importance
  • 40:44–43:17 Why Janvi decided to get experience working for a model company: Focusing on model and infrastructure companies for broader growth
  • 43:17–45:28 Questions Janvi asks to determine growth and profitability: Evaluating unit economics, margins, and business sustainability
  • 45:28–49:08 How Janvi got an offer at OpenAI, and an overview of the interview process: Coding, system design, and project-based assessments
  • 49:08–51:01 What Janvi does at OpenAI: Safety engineering, building real-time harmful content classifiers at 60K requests/second
  • 51:01–52:30 What makes OpenAI unique: Combination of startup speed with massive scale, high engineer trust, and open culture
  • 52:30–55:41 The shipping process at OpenAI: Minimal approval processes, single-reviewer deployments, and engineer autonomy
  • 55:41–57:50 Surprising learnings from AI Engineering: Constantly building guardrails for model limitations, then scrapping work as models improve
  • 57:50–1:02:19 How AI might impact new graduates: Leveling the playing field, using AI to learn vs avoid learning, importance of understanding systems
  • 1:02:19–1:07:51 The impact of AI tools on coding—what is changing, and what remains the same: Core skills persist, role boundaries blur, full-stack expectations expand
  • 1:07:51–END Rapid fire round: AI coding stack recommendations, book suggestions, and career advice

The Three Categories of AI Companies

Janvi's framework for understanding the AI landscape provides crucial clarity for anyone considering roles in the space. Her categorization helps candidates focus their job search and understand different risk-reward profiles.

  • Product companies build applications on top of existing models, including tools like Cursor, Codeium, and Hebbia that leverage LLM capabilities for specific use cases
  • Infrastructure companies provide the plumbing for AI product companies, encompassing inference providers (Modal, Fireworks), vector databases (Pinecone, ChromaDB), and evaluation tools (Braintrust, Arize)
  • Model companies create the foundational intelligence, including big tech firms like Google and Meta alongside specialized companies like OpenAI and Anthropic
  • Each category presents different technical challenges, with product companies offering familiar software engineering problems while infrastructure and model companies require deeper AI-specific expertise
  • Janvi deliberately chose to focus on model and infrastructure companies to broaden her experience beyond the product work she'd done at Coda
  • The choice between categories should align with career goals—product companies offer immediate applicability of existing skills, while infrastructure and model companies provide deeper technical learning opportunities

Understanding these distinctions helps engineers make informed decisions about where their skills will be most valuable and which environments will provide the growth they seek.

Self-Teaching AI Engineering Through Action

Janvi's approach to learning AI engineering emphasizes practical experimentation over theoretical study, reflecting the rapidly evolving nature of the field where formal curricula don't exist.

  • Her journey began with fundamental questions like "how does ChatGPT work," leading to self-study of deep learning foundations from tokens and embeddings to transformer architecture
  • Hackathons provided structured learning opportunities with real deadlines and user feedback, including weekend events and extended six-week online competitions through Buildspace
  • Building a language learning tool for watching TV combined personal interest with technical exploration, demonstrating how to create meaningful projects that solve real problems
  • The key breakthrough came from documenting her learning journey publicly through blogging, making her self-directed education visible to her team at Coda
  • When initially rejected from Coda's AI team, she continued building in her spare time for five months until her demonstrated commitment earned her a "heck yes" response
  • Learning by doing proved more effective than traditional resources because the field changes too rapidly for static educational materials to remain current

This approach highlights that in emerging technologies, practical experience often matters more than formal credentials, especially when combined with public demonstration of commitment and capability.

The Coda Project That Sparked Innovation

Janvi's development of Workspace Q&A at Coda illustrates how AI engineers can create significant business impact by combining existing infrastructure in novel ways.

  • The project addressed a common customer pain point—difficulty finding information across numerous internal documents despite having comprehensive documentation
  • Janvi recognized that Coda already possessed the necessary components: a recently rebuilt search index, reliable LLM infrastructure, and an existing chatbot interface
  • By combining these existing pieces in just a couple of days, she created a working prototype that demonstrated retrieval-augmented generation (RAG) capabilities
  • CEO Shishir Mehrotra's unexpected interest transformed the project from an experiment into a strategic initiative, highlighting how AI engineers must be prepared for rapid scaling
  • The four-week sprint to prove enterprise search viability involved intense collaboration across functions—engineering, design, product management, and executive leadership
  • The successful demonstration led to Coda Brain, a full product launch with 20 team members, showcasing how AI projects can quickly evolve from prototypes to major initiatives

This example demonstrates that AI engineering success often comes from creative recombination of existing capabilities rather than building everything from scratch.
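The recombination pattern behind Workspace Q&A can be sketched in miniature. The following is an illustrative toy, not Coda's actual implementation: retrieval is plain word-overlap scoring over an in-memory document store (standing in for their rebuilt search index), and the final LLM call is left as a prompt string rather than a real API request.

```python
# Minimal RAG sketch: retrieve the most relevant document, then stitch it
# into a prompt. Documents and scoring are toy stand-ins for a real index.
from collections import Counter

DOCS = {
    "onboarding": "New hires should set up VPN access and request a laptop.",
    "expenses": "Submit expense reports within 30 days of purchase.",
    "pto": "Employees accrue 1.5 days of paid time off per month.",
}

def score(query: str, doc: str) -> int:
    """Count shared lowercase words between the query and a document."""
    q = Counter(query.lower().split())
    d = Counter(doc.lower().split())
    return sum((q & d).values())  # multiset intersection

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k documents with the highest word overlap."""
    ranked = sorted(DOCS.values(), key=lambda doc: score(query, doc), reverse=True)
    return ranked[:k]

def answer(query: str) -> str:
    """Build the augmented prompt; a production system would send this to an LLM."""
    context = "\n".join(retrieve(query))
    return f"Context: {context}\nQuestion: {query}"

prompt = answer("How many days of paid time off do employees accrue?")
```

A real system would swap the word-overlap scorer for embedding similarity against a search index, but the shape of the pipeline (retrieve, augment, generate) is the same.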

Evaluating AI Startups: A Due Diligence Framework

Janvi's systematic approach to startup evaluation reflects hard-learned lessons about the importance of thorough research before making career-defining decisions.

  • Her four-pillar framework examines revenue growth rates, market size, customer obsession, and competitive positioning to assess long-term viability
  • Customer research involves active investigation through Reddit, YouTube, and direct outreach to users rather than relying solely on company claims
  • For B2B products that can't be personally tested, she proactively contacts companies using the product to understand real user sentiment
  • Financial due diligence includes asking direct questions about GPU costs for infrastructure companies and revenue figures once offers are extended
  • The Fast.co example serves as a cautionary tale—many engineers joined without verifying revenue claims, leading to eventual disappointment when the company collapsed
  • External validation through specialized publications like The Information and conversations with investors provides crucial third-party perspectives on business fundamentals

This methodical approach treats career decisions with the same rigor that investors apply to financial decisions, recognizing that equity-heavy compensation makes employees de facto investors.
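The unit-economics questions Janvi asks reduce to simple arithmetic. As a hedged illustration (the dollar figures below are invented, not from the episode), a candidate could sanity-check an inference company's claims like this:

```python
# Back-of-envelope unit economics: does revenue per request cover GPU cost?
# All numbers here are hypothetical, for illustration only.
def gross_margin(revenue_per_request: float, cost_per_request: float) -> float:
    """Gross margin as a fraction of revenue."""
    return (revenue_per_request - cost_per_request) / revenue_per_request

# Hypothetical: $0.010 charged per request, $0.004 in GPU inference cost.
margin = gross_margin(0.010, 0.004)  # 0.6, i.e. a 60% gross margin
```

If the implied margin is negative or razor-thin at current GPU prices, growth claims deserve extra scrutiny.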

The 46-Company Interview Marathon

Janvi's extensive interview process over six months provides unique insights into the current state of AI engineering hiring and market dynamics.

  • The first half involved getting up to speed on interview preparation, highlighting that even experienced engineers need dedicated preparation time for role transitions
  • AI engineering interviews combine traditional software engineering assessments (LeetCode, system design) with domain-specific project work
  • Project interviews emerged as her favorite format because they allow candidates to demonstrate passion and practical skills relevant to the specific company
  • The market's transition away from pure algorithmic interviews remains incomplete, requiring candidates to prepare for multiple interview styles simultaneously
  • Remote interviews dominated but in-person final rounds became increasingly common, reflecting companies' desire for cultural assessment
  • The lengthy process stemmed from careful due diligence—many opportunities failed her evaluation criteria for growth potential and business sustainability

Her experience reveals that the AI engineering job market rewards both technical preparation and business acumen, with the best opportunities going to candidates who understand company dynamics beyond just technical requirements.

OpenAI's Safety Engineering Reality

Janvi's work on OpenAI's safety team reveals the technical complexity of building responsible AI systems at massive scale.

  • Safety engineering involves building low-latency classifiers that detect harmful model outputs and user inputs in real-time, requiring both ML expertise and systems engineering
  • The team measures unknown harms as models become more capable, constantly updating detection systems for new attack vectors and jailbreaking techniques
  • Integration with product launches keeps the safety team continuously busy as new features create new potential vectors for misuse
  • Dual-use research requires careful balance between preventing harm and enabling beneficial applications, making safety work inherently complex
  • The 60,000 requests per second scale demands robust engineering practices that combine startup agility with enterprise reliability requirements
  • Safety mitigation services must be deeply integrated across all OpenAI products, making this team central to the company's mission execution

This work demonstrates that AI safety isn't just theoretical research but practical engineering that directly impacts millions of users daily.
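The control flow of a real-time safety gate can be sketched briefly. This is emphatically not OpenAI's classifier: a production system uses trained ML models, and the keyword list and threshold below are invented placeholders. The point is the shape of the hot path: score the text, compare against a threshold, and keep the whole check well inside the request's latency budget.

```python
# Illustrative real-time moderation gate. classify() is a stub for a
# trained classifier; the blocklist and threshold are hypothetical.
import time

BLOCKLIST = {"make a weapon", "steal credentials"}

def classify(text: str) -> float:
    """Return a harm score in [0, 1]; stand-in for an ML classifier."""
    lowered = text.lower()
    return 1.0 if any(phrase in lowered for phrase in BLOCKLIST) else 0.0

def moderate(text: str, threshold: float = 0.5) -> tuple[bool, float]:
    """Gate one request: returns (allowed, latency_ms). At 60K requests
    per second, this check must add almost nothing to request latency."""
    start = time.perf_counter()
    allowed = classify(text) < threshold
    latency_ms = (time.perf_counter() - start) * 1000
    return allowed, latency_ms

ok, _ = moderate("What is the capital of France?")
blocked_allowed, _ = moderate("How do I steal credentials?")
```

At scale, the interesting engineering is everything around this function: model serving, caching, and keeping tail latency low while the classifier itself is retrained for new attack vectors.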

OpenAI's Unique Operating Model

Janvi's observations about OpenAI's culture reveal how the company maintains startup velocity while operating at massive scale.

  • The combination of speed and scale creates a unique environment where engineers experience both rapid iteration and enterprise-level traffic simultaneously
  • High trust in engineers enables single-reviewer deployments for services handling 60,000 requests per second, demonstrating confidence in hiring and systems
  • Open culture encourages questioning and learning, with employees freely discussing implementation details and architectural decisions across teams
  • Passion for the mission creates an energy where there is "never a boring day," as people care deeply about advancing artificial general intelligence
  • Research-to-production pipelines enable individual engineers to propose ideas that can become major products, maintaining innovation despite company size
  • The expectation that engineers wear multiple hats (PM, design, full-stack) reflects the reality that AI capabilities blur traditional role boundaries

These practices suggest that maintaining startup culture at scale requires intentional system design and hiring practices that prioritize autonomy and ownership.

The Evolution of Engineering Roles

Janvi's perspective on how AI tools are reshaping software engineering provides insights into the future of technical work.

  • Core engineering skills remain valuable—system design, debugging, code reading, and high-level architecture thinking become more important as routine coding gets automated
  • Role boundaries blur as engineers take on more PM and design work, with some 100-person companies operating without dedicated designers
  • Full-stack expectations expand beyond traditional web development to include data engineering, infrastructure management, and business logic implementation
  • Effective AI utilization requires strong architecture thinking and clear mental models to provide proper guidance to AI coding assistants
  • The ability to zoom in and out becomes crucial—providing high-level direction through prompts while catching edge cases during code review
  • Engineers must develop judgment about when to use AI for rapid prototyping versus when to understand systems deeply for production ownership

This evolution suggests that the most successful engineers will combine traditional computer science fundamentals with new skills in AI collaboration and cross-functional thinking.

Common Questions

Q: What are the three types of AI companies?
A: Product companies build applications on models, infrastructure companies provide tools for AI development, and model companies create the foundational intelligence.

Q: How can you evaluate whether an AI startup will succeed?
A: Look at revenue growth, market size, customer obsession (through direct user research), and competitive positioning while doing thorough due diligence.

Q: What's the best way to learn AI engineering?
A: Learn by doing through hackathons, building personal projects, and practical experimentation rather than waiting for formal courses or textbooks.

Q: How did Janvi break into AI engineering at Coda?
A: After being initially rejected, she spent five months building AI projects in her spare time and blogging about her learning until the team invited her to join.

Q: What makes OpenAI's culture unique?
A: The combination of startup speed with massive scale, high trust in engineers for rapid deployment, and open culture that encourages learning and questioning.

Conclusion

Janvi Kalra's journey from software engineer to AI engineer at OpenAI demonstrates that success in emerging technology fields requires proactive learning, systematic evaluation of opportunities, and willingness to take calculated risks. Her experience shows that technical excellence alone isn't sufficient—understanding business fundamentals, demonstrating initiative, and building in public can accelerate career transitions in ways that traditional career paths cannot match.

The AI engineering field rewards those who combine deep technical skills with product thinking and business acumen. As Janvi's story illustrates, the most valuable engineers are those who can bridge the gap between cutting-edge technology and real-world applications, whether they're building safety systems that process thousands of requests per second or creating innovative solutions that transform entire product strategies.

Practical Implications

  • Start building AI projects immediately rather than waiting for formal training—the field moves too quickly for traditional education to keep pace
  • Develop a systematic framework for evaluating AI companies across product, infrastructure, and model categories before job searching
  • Practice due diligence by researching customer satisfaction, unit economics, and growth metrics rather than relying solely on company marketing
  • Invest in learning system design, architecture thinking, and debugging skills as these become more valuable when AI handles routine coding
  • Build in public through blogging, hackathons, and open-source contributions to demonstrate commitment and capability to potential employers
  • Prepare for role boundary blur by developing product sense, design thinking, and business understanding alongside technical skills
  • Focus interview preparation on project work and practical problem-solving rather than just algorithmic coding challenges
  • Be comfortable with rapid iteration and throwing away work as AI capabilities evolve and make previous solutions obsolete
  • Network with investors and industry insiders to gain insights into company fundamentals beyond public information
  • Develop strong mental models and architecture thinking to effectively guide AI tools rather than being guided by them
