
The Future of AI Software Security | Ep. 39

Traditional security meant outrunning the person next to you. Generative AI changed that. Now, we face "a thousand AI bears"—automated agents launching simultaneous attacks. Daniele Perito explains why we must completely reimagine how we build, secure, and monitor software.


In the traditional world of cybersecurity, there is a grim adage: to survive a bear attack, you do not need to outrun the bear; you only need to outrun the person next to you. For decades, this logic governed corporate security strategies. Companies aimed for a defensive posture just robust enough to encourage attackers to move on to a softer target.

However, the emergence of generative AI has fundamentally broken this analogy. We are moving away from a world where human hackers pick targets based on convenience. Instead, we face a future where automated agents can launch thousands of sophisticated attacks simultaneously. As Daniele Perito, co-founder of Faire and now Depth First, explains, businesses are no longer facing a single bear—they are facing a thousand AI bears. This shift requires a complete reimagining of how we build, secure, and monitor the software that powers our infrastructure.

Key Takeaways

  • The threat landscape has shifted: AI lowers the cost of attacks, meaning security through obscurity or simply being "better than the neighbor" is no longer a viable strategy.
  • Context is the defender's advantage: While attackers have the advantage of volume, defenders possess the "home field" advantage of deep context and knowledge of their own codebase.
  • The end of heuristics: Traditional rule-based security scans are insufficient for logic-based vulnerabilities; the future lies in AI agents that reason like human security engineers.
  • Operationalizing intuition: Effective decision-making often relies on deep-diving into small datasets (the "30 data points" rule) rather than waiting for massive, aggregated big data.

The Asymmetry of AI Warfare

The fundamental equation of cybersecurity is economic. There is no theoretically perfect bank vault and no perfectly secure software system. Security is defined by the relationship between the cost of mounting an attack and the likelihood that the attacker faces consequences. Since online attackers are often anonymous and enforcement is historically difficult, the primary lever for defense has always been raising the attacker's cost of entry.

Artificial intelligence threatens to collapse this cost barrier.

There is the saying in security circles that in order to survive a bear attack, you don't need to outrun the bear, but you need to outrun the person running next to you... But with AI, you can think about the fact that there isn't just going to be one bear, there's going to be a thousand AI bears.

In this new paradigm, intelligence is abundant and cheap. Automated agents can test vulnerabilities at a scale and speed that human attackers never could. This necessitates a shift from passive defense to active, AI-driven security engineering.
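
To make the economics concrete, here is a stylized back-of-the-envelope model. The numbers, the expected-value framing, and the attack_ev function are all illustrative assumptions, not figures from the episode; the point is only that collapsing the marginal cost of an attempt (and the odds of being caught) can flip an attack from uneconomical to profitable.

```python
# Stylized attacker economics: an attack is worth running when its
# expected payoff exceeds its expected cost. All numbers are invented
# for illustration.

def attack_ev(payoff, p_success, attack_cost, p_caught, penalty):
    """Expected value of a single attack attempt."""
    return p_success * payoff - attack_cost - p_caught * penalty

# A human attacker: each sophisticated attempt is expensive.
human = attack_ev(payoff=20_000, p_success=0.05, attack_cost=4_000,
                  p_caught=0.02, penalty=50_000)
print(human)  # -4000.0 -> not worth the effort; find a softer target

# An AI agent: near-zero marginal cost per attempt, harder to trace.
ai = attack_ev(payoff=20_000, p_success=0.05, attack_cost=20,
               p_caught=0.002, penalty=50_000)
print(ai)     # 880.0 -> profitable, and it can run thousands of attempts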

The Decline of Rule-Based Security

For years, companies have relied on static analysis tools and heuristic scanners. These tools operate on rigid rules—checking for known patterns or specific misconfigurations. While useful for catching shallow bugs, they struggle with complex reasoning. They cannot understand the intent of the code or how different components interact logically.

The vision for the next generation of security—championed by companies like Depth First—is the AI Security Engineer. These are not simple scanners but swarms of independent agents capable of reasoning. They explore codebases, understand egress and ingress points, and identify logic flaws that would typically require human intuition to spot. By unifying distinct security categories into a cohesive agentic workforce, organizations can identify vulnerabilities that rule-based systems simply cannot see.
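
As a thought experiment, the shape of such a system might look something like the sketch below. Everything in it is hypothetical: the Finding type, the in-memory "repo", and especially the string check inside review_module, which stands in for actual model reasoning. The episode describes the concept, not an implementation.

```python
# A minimal sketch of an agent-swarm reviewer (hypothetical names throughout).
from dataclasses import dataclass
from concurrent.futures import ThreadPoolExecutor

@dataclass
class Finding:
    path: str
    kind: str       # e.g. "logic-flaw", "auth-bypass"
    rationale: str  # an agent's reasoning, not a matched rule

def review_module(item: tuple[str, str]) -> list[Finding]:
    path, source = item
    # A real agent would be handed the module's source, commit history,
    # and ingress/egress map, then asked to reason about abuse paths.
    if "verify=False" in source:  # placeholder for that reasoning step
        return [Finding(path, "tls-bypass", "certificate checks disabled")]
    return []

def swarm_review(repo: dict[str, str]) -> list[Finding]:
    # Independent agents fan out across modules in parallel, rather than
    # making one sequential pass with fixed rules.
    with ThreadPoolExecutor(max_workers=8) as pool:
        per_module = pool.map(review_module, repo.items())
    return [f for findings in per_module for f in findings]

repo = {"billing.py": "requests.get(api, verify=False)",
        "auth.py": "check_token(session)"}
print(swarm_review(repo))  # one Finding, for billing.py
```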

The Defender’s Advantage: Context and Depth

Despite the terrifying volume of potential AI-driven attacks, defenders retain a critical strategic advantage: context.

Attackers, even AI-driven ones, are ultimately flying blind: they must probe the perimeter from the outside to understand the system. In contrast, an internal AI security system has full access to the repository, the history of commits, and the staging environment. It can spend hours of compute time mapping the intricate relationships between services to understand how the system should work versus how it could be exploited.

Defenders still have a certain advantage which is they have full context... using that knowledge, the AI helps secure the business. Attackers need to fly blind. Now, there's another advantage that the attackers have, which is defenders need to find every attack. Attackers need to find one.

To tilt the scales back in favor of the defender, security systems must move beyond code scanning and integrate with the broader development lifecycle. This includes analyzing pull requests in real-time and verifying vulnerabilities against staging environments to reduce false positives. The goal is to create a "superhuman hacker" dedicated solely to defense, using reinforcement learning to discover complex exploit chains before an adversary does.
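
A minimal sketch of what that PR-time gate could look like follows, with every helper stubbed (the episode describes the workflow, not these APIs). The structural point is the last step: a suspected vulnerability only blocks the merge after it reproduces in staging.

```python
# A sketch of a PR-time security gate. Every helper here is a stub.

def fetch_diff(pr_id: str) -> str:
    return 'query = "SELECT * FROM orders WHERE id=" + order_id'  # stub diff

def agent_review(diff: str) -> list[str]:
    # stub for an AI review pass over only the changed code
    return ["sql-injection"] if '" +' in diff else []

def reproduce_in_staging(finding: str) -> bool:
    return True   # stub: replay the suspected exploit against staging

def check_pull_request(pr_id: str) -> bool:
    candidates = agent_review(fetch_diff(pr_id))
    # Staging verification is the false-positive filter: only findings
    # that actually reproduce should block a merge.
    confirmed = [f for f in candidates if reproduce_in_staging(f)]
    return not confirmed   # True -> safe to merge

print(check_pull_request("pr-123"))  # False: the stubbed finding reproduces
```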

Operational Rigor: Lessons from Marketplace Building

Building a security company in the AI era requires a different operational mindset than building a consumer app or a B2B marketplace. Perito’s experience co-founding Faire offers a contrasting view on how different business models demand different management styles.

Marketplace vs. Pipeline Businesses

Faire operates as a platform business (marketplace). In this model, the ecosystem is highly recursive and chaotic. A small change in supply ripples out to demand, requiring a tight grip on operations and rigorous, centralized coordination. Management requires constant balancing of the ecosystem to ensure it flourishes.

Conversely, a security product behaves more like a pipeline business. Value is created linearly and delivered to the customer. This structure allows for more decentralized experimentation. In the fast-moving AI sector, where the technology stack changes every three months, a "let a thousand flowers bloom" approach allows engineering teams to test hypotheses faster. This flexibility is vital when the underlying ground—AI capabilities—is shifting constantly.

The "30 Data Points" Rule for Decision Making

In an era obsessed with Big Data, leaders often fall into the trap of analysis paralysis, waiting for statistical significance before making a move. Perito advocates for a more agile approach to intuition building, dubbed the "30 Data Points" rule.

The premise is simple: don't shy away from manually reviewing a small set of raw data—whether it's 30 customer support tickets, 30 search results, or 30 security flags.

With 30 data points, you're going to know whether something is 60% plus or minus 10%. Or it's 10% plus or minus 10%. And you can know a lot from that fact alone... Is your conversion rate roughly good or roughly bad?

This hands-on approach forces leaders to form a deep, qualitative intuition about the problem. It bridges the gap between high-level metrics and the ground truth of the user experience. In early-stage startups or rapidly changing industries, making three decisions a week with 90% confidence is often superior to making one decision a quarter with 99% confidence.
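
The arithmetic behind the quote is easy to check with the normal approximation for a binomial proportion. The quoted "plus or minus 10%" roughly matches one standard error; at a conventional 95% level the intervals widen to about ±11 to ±18 points, but the core claim survives: 30 samples is plenty to tell a roughly-60% rate from a roughly-10% one.

```python
# Sampling error of an observed proportion at n=30.
import math

def margin_of_error(p: float, n: int = 30, z: float = 1.96) -> float:
    """Half-width of the ~95% interval for an observed proportion p."""
    return z * math.sqrt(p * (1 - p) / n)

for p in (0.1, 0.3, 0.6):
    print(f"observed {p:.0%}: true rate roughly {p:.0%} +/- {margin_of_error(p):.0%}")
# observed 10%: true rate roughly 10% +/- 11%
# observed 30%: true rate roughly 30% +/- 16%
# observed 60%: true rate roughly 60% +/- 18%
```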

Conclusion

The intersection of AI and cybersecurity is not just a commercial opportunity; it is a prerequisite for a safe future. As AI systems become more integrated into critical infrastructure, the software layers underneath them must be unimpeachable. Without robust computer security, we cannot effectively manage AI safety and control.

By leveraging the reasoning capabilities of Large Language Models (LLMs) and combining them with the context advantage held by defenders, the security industry can move from a reactive posture to a proactive one. The goal is no longer just to outrun the person next to you, but to build a system resilient enough to withstand the swarm.
