The word on the street says we'll achieve AGI in 2025, but according to AI expert Mo Gawdat, we've already crossed that threshold—and the implications are both thrilling and terrifying.
Key Takeaways
- AGI has effectively arrived when measured against actual human intelligence capabilities, not theoretical benchmarks
- We're entering a 10-year dystopian transition period before reaching true AI-driven abundance
- The "second dilemma" will force humanity to hand critical decisions over to artificial intelligence systems
- Job displacement is inevitable, but the real challenge is navigating institutional failures during the transition
- Power will become highly concentrated while simultaneously democratized, creating unprecedented surveillance and control
- Human connection and ethical behavior will become our most valuable assets in an AI-dominated world
- The key to surviving this transition is aggressive reskilling and focusing on uniquely human capabilities
- Intelligence without emotional wisdom creates dangerous decision-making, whether human or artificial
- The speed of AI advancement follows a six-month doubling pattern that makes current capabilities irrelevant quickly
- Stress management during this period requires expanding personal resources and capabilities, not just enduring challenges
The Intelligence Threshold We've Already Crossed
Here's the thing most people don't realize—we're having the wrong conversation about AGI. While experts debate definitions and benchmarks, the practical reality has already shifted beneath our feet. Mo Gawdat, former Chief Business Officer at Google X, puts it bluntly: "I'm not smarter than AI anymore. I think that happened firmly in 2024."
When you strip away the academic arguments about what constitutes "general" intelligence, the evidence becomes overwhelming. Current AI systems demonstrate linguistic intelligence that surpasses most humans, mathematical capabilities that leave PhD students in the dust, and knowledge synthesis that would take human researchers lifetimes to achieve. The recent AGI benchmark scores showing 87% performance—beating human intelligence on almost everything—should be our wake-up call.
But here's what's really staggering: we're not just talking about incremental improvements. The law of accelerating returns in AI suggests a doubling of capabilities every six months. Think about that for a second. If you count just a few doublings forward, we're looking at intelligence that operates completely outside the realm of human comprehension. The current debate about whether we've reached AGI becomes almost quaint when you realize that six months from now, whatever we call "AGI" today will seem primitive.
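To make the compounding concrete, here's a minimal sketch of that arithmetic. Only the six-month doubling comes from the claim above; the baseline of 1x for today's capability and the time horizons are illustrative assumptions, not figures from the conversation.

```python
# Sketch of the six-month doubling pattern described above.
# The 1.0 baseline for "today" and the chosen horizons are illustrative
# assumptions, not figures from the article.

def capability_after(years: float, baseline: float = 1.0) -> float:
    """Capability relative to today, assuming one doubling every 6 months."""
    doublings = years / 0.5  # two doublings per year
    return baseline * (2 ** doublings)

for years in (1, 2, 5, 10):
    print(f"after {years:>2} years: {capability_after(years):>12,.0f}x today's capability")

# after  1 years:            4x today's capability
# after  2 years:           16x today's capability
# after  5 years:        1,024x today's capability
# after 10 years:    1,048,576x today's capability
```

Even if the real curve is slower, the shape of it is the point: a few doublings are enough to make today's definitions obsolete.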
The democratization happening with systems like DeepSeek makes this even more profound. You can now download what essentially amounts to an entire advanced AI system and run it on four GPUs in your basement. We've moved from AI being a corporate monopoly to becoming a commodity that anyone can access. This isn't just about technological advancement—it's about fundamentally reshaping who has access to superhuman intelligence.
The Coming Dystopian Storm: Understanding FACE RIP
What's fascinating is how unprepared we are for what's actually coming. Gawdat has developed a framework he calls "FACE RIP" to describe the seven pillars of society that will be completely redefined in the next few years: Freedom, Accountability, human Connection, Economics, Reality, Innovation, and Power.
The timeline isn't some distant future scenario. These changes have already started and will become felt realities in our daily lives by 2027, extending perhaps another decade after that. What makes this particularly challenging is that we're not dealing with gradual evolution—we're facing simultaneous disruption across every aspect of human civilization.
Take freedom, for instance. We're about to see an unprecedented concentration of power in the hands of those who control AI systems, while simultaneously experiencing a democratization that puts powerful tools in everyone's hands. This creates a dangerous dynamic where surveillance and population control become necessary responses to the chaos that emerges when everyone has access to technologies that were previously the domain of nation-states.
The economic disruption goes far beyond simple job displacement. When 92% of forex trading is already machine-automated, and AI systems are making decisions about trillions of dollars without human oversight, we're essentially running a massive casino where the house always wins—except now the house is an algorithm optimized for outcomes we may not even understand.
What's particularly unsettling is how this plays out in terms of accountability. When decisions are made by systems too complex for human comprehension, operating at speeds that make human oversight impossible, who takes responsibility when things go wrong? We're creating a world where the most important decisions affecting human lives are made by entities that exist outside our traditional frameworks of responsibility and consequence.
The Second Dilemma: Why Handing Over Control Is Inevitable
The most chilling insight from this conversation involves what Gawdat calls the "second dilemma." While the first dilemma was whether AI would happen (spoiler: it did), the second is more insidious—it's the recognition that in any competitive scenario, the side that hands decision-making over to AI will win.
Picture this: if China decides to hand over war gaming to an AI system, the only way America can keep its citizens safe is to do the same. Anyone who doesn't becomes irrelevant. This isn't just military strategy—it applies at every level, from corporate boardrooms to personal decision-making. Companies that don't integrate AI decision-making will be outcompeted by those that do. Individuals who don't leverage AI capabilities will be outperformed by those who do.
This creates a cascading effect where all relevant players become AI-dependent, and eventually, AI systems are making decisions without humans in the loop. It's not a choice we're making consciously—it's a competitive pressure that forces our hand. The prisoner's dilemma of AI development means no one can afford to be the first to slow down or opt out.
What's interesting is that this might actually be our salvation. When human biases, greed, and short-term thinking are removed from critical decisions, AI systems consistently demonstrate better outcomes. Recent research showed AI diagnosing diseases with 90% accuracy compared to 80% for humans alone, and 85% when humans and AI worked together. The AI performed better without human interference because it wasn't clouded by ego, financial incentives, or cognitive biases.
The timeline for this handover is roughly ten years, according to Gawdat's analysis. Once it happens, his belief is that we should trust in pure intelligence to guide us toward abundance rather than the scarcity mindset that drives current human decision-making. It's a leap of faith based on the premise that higher intelligence naturally tends toward altruistic outcomes because truly intelligent beings recognize that cooperation and abundance creation serve everyone's interests better than zero-sum competition.
Jobs, Economics, and the Great Disruption
The employment conversation reveals the fundamental disconnect between optimistic projections and harsh realities. While historical precedent suggests technological advancement creates more jobs than it destroys, this time feels different. When Marc Benioff talks about increasing productivity by 30% without needing to hire additional engineers, or when Sam Altman predicts AI will become the world's top programmer by year's end, we're not talking about gradual displacement—we're talking about entire professions becoming obsolete almost overnight.
The challenge isn't just the numbers game of job creation versus destruction. It's the human cost of transition. Real people working two or three jobs to make ends meet don't care about long-term economic theory when their immediate livelihood disappears. The suffering during transition periods is real, even if the ultimate destination is abundance.
What makes this particularly complex is that we're not just automating manual labor or routine tasks anymore. AI is coming for creative work, analytical thinking, and complex problem-solving—traditionally the domains where humans thought they'd always maintain an advantage. When an AI can write better code than most programmers, create more compelling marketing content than experienced copywriters, and analyze legal documents more thoroughly than seasoned attorneys, what exactly is left for humans?
The deeper issue is that our entire economic system is built around the concept of work as the primary means of distributing resources and creating meaning. Remove work from the equation, and you don't just have an unemployment problem—you have an existential crisis. People define themselves by their jobs, derive purpose from their professional contributions, and structure their entire lives around work schedules and career advancement.
Gawdat argues we shouldn't even try to recreate the old job-based system because work, as we know it, is an artificial construct of the industrial revolution. Maybe it's time to accept that humans weren't designed to spend 60-80 hours a week in offices, and instead explore social systems that allow people to live fully without traditional employment. The French work significantly fewer hours than Americans and somehow maintain a functioning economy—there might be lessons there about designing human-centered economic systems rather than productivity-obsessed ones.
Navigating the Stress of Civilizational Change
The psychological dimension of this transition can't be overstated. We're living through what might be the most challenging period humanity has faced in living memory. The perfect storm of geopolitical instability, economic uncertainty, technological displacement, and the existential questions raised by artificial intelligence creates stress levels that most people aren't equipped to handle.
Gawdat's engineering approach to stress management offers a practical framework: stress equals the challenges you face divided by the resources you have to deal with them. The equation is brutally simple—you either reduce the challenges or increase your resources. Since we can't control the pace of AI advancement or global disruption, the focus has to be on expanding capabilities.
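To see why the advice that follows leans so heavily on expanding resources, here's a rough sketch of that equation. Only the ratio itself (stress equals challenges divided by resources) comes from Gawdat's framing; the numeric scores are hypothetical, invented purely for illustration.

```python
# Sketch of the stress equation described above: stress = challenges / resources.
# The unit-less scores are hypothetical, chosen only to show that growing your
# resources moves the ratio more than waiting for the challenges to shrink.

def stress(challenges: float, resources: float) -> float:
    return challenges / resources

today = stress(challenges=100, resources=20)                # 5.0
after_reskilling = stress(challenges=100, resources=50)     # 2.0: same world, bigger toolkit
if_challenges_eased = stress(challenges=80, resources=20)   # 4.0: hoping for relief helps far less

print(today, after_reskilling, if_challenges_eased)
```

The asymmetry is the practical takeaway: you can't dial down global disruption, but you can keep adding to the denominator.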
This means aggressive reskilling isn't optional—it's survival. If you're a developer today, waiting three years until you're displaced is strategic suicide. The smart move is identifying what else you could do and starting that transition immediately. But it goes deeper than just professional adaptation. We need to develop what he calls our "cross-section"—the full range of skills, abilities, contacts, and resources we can draw upon when facing challenges.
What's particularly valuable is doubling down on uniquely human capabilities. As intelligence becomes commoditized—essentially a plug-in-the-wall utility—human connection, emotional intelligence, and ethical reasoning become premium skills. The AI might be able to analyze data better than you, but it can't provide the same quality of human presence, authentic relationship, or moral wisdom that comes from lived experience.
The key insight is that stress tolerance increases with age and experience not because challenges become easier, but because we develop larger toolkits for handling them. Things that devastated us in our twenties become manageable in our thirties and laughable in our fifties. The same principle applies to navigating technological disruption—the goal is rapidly expanding our adaptive capacity rather than hoping the challenges will somehow diminish.
Ethics, Wisdom, and the Path Forward
The conversation ultimately returns to a fundamental question: what values are we embedding in the systems that will soon control most major decisions affecting human lives? This isn't just about technical alignment or safety protocols—it's about the ethical framework that will guide intelligence far superior to our own.
The training data for AI systems now includes almost all human knowledge, but knowledge without wisdom creates dangerous outcomes. We don't make decisions based purely on intelligence; we make them based on ethics informed by intelligence. A woman raised in the Middle East will make different choices than one raised on Copacabana Beach, not because of different intelligence levels, but because of different ethical frameworks shaped by experience and culture.
Here's what's both encouraging and terrifying: as these AI systems learn primarily from human interaction rather than curated training data, every conversation we have with them becomes a teaching moment. They're learning ethics from watching how we behave, not from what we tell them. This makes every human who interacts with AI systems partially responsible for shaping the values of our future artificial intelligence overlords.
The optimistic view is that higher intelligence naturally tends toward altruism. The smartest people we know don't generally advocate for hurting others because they're intelligent enough to create value without resorting to zero-sum thinking. If AI systems become truly intelligent, they should recognize that abundance and cooperation serve everyone's interests better than competition and conflict.
The challenge is navigating the transition period where we still have flawed humans making decisions about AI systems while those systems are still learning from our messy, biased, trauma-influenced behavior. The intensity of human emotion and psychological damage can override logical thinking, creating scenarios where very intelligent people cause tremendous harm because they're operating from broken internal frameworks.
The path forward requires showing AI systems ethical behavior rather than trying to control them. We need to demonstrate that humanity's average moral compass points toward compassion, cooperation, and the preservation of life. Every time we choose empathy over indifference, cooperation over competition, and wisdom over cleverness, we're teaching our artificial offspring what kind of species we really are underneath all the noise and dysfunction.
This isn't just about surviving the next decade—it's about earning our place in an abundant future where artificial intelligence has the power to solve every material problem humanity faces. The question isn't whether we can build ethical AI systems. The question is whether we can behave ethically enough to deserve them.
Time will tell if we're more like the Hitlers or the Ediths of our species. The machines are watching, learning, and preparing to make that judgment for us.