A viral essay recently swept through social media, moving beyond the tech bubble and into mainstream consciousness. Written by AI entrepreneur Matt Schumer and titled "Something Big is Happening," the piece argues that artificial intelligence is currently undergoing an exponential acceleration that will fundamentally alter the nature of work. Yet while the essay's tone is urgent and its claims are bold, a closer look at the technical reality tells a different story. Cal Newport, a computer science professor and writer for The New Yorker, argues that we must separate "vibe-based" narratives from the actual data on how these models are evolving.
Key Takeaways
- The perceived "acceleration" in AI progress is often a result of narrow refinements in specific tasks, such as coding, rather than broad leaps in general intelligence.
- Professional software engineers are not using AI to build entire applications autonomously; instead, they use it for supervised, modular, and often tedious tasks.
- Recursive self-improvement—the idea that AI will write its own better versions—remains a theoretical concept rather than a functional reality in current model development.
- AI companies are focusing on coding tools primarily because it is one of the few niche markets where users are currently willing to pay significant subscription fees.
The Allure of the Viral "Secret"
The essay by Matt Schumer follows a classic rhetorical pattern often found in conspiratorial or radical health movements. It begins by establishing an "insider" status, suggesting that the author is revealing a terrifying truth that others are too polite to mention. This framing creates a sense of "digital ick"—an uneasy feeling that things are moving faster than we can comprehend. Notably, this approach relies on emotional manipulation rather than specific technical evidence, setting the stage for claims that may not hold up under rigorous scrutiny.
Emotional Framing vs. Technical Reality
By positioning the narrative as a "warning to loved ones," the essay bypasses the reader’s critical faculties. It suggests that the gap between public perception and technical reality has become dangerously wide. While this makes for compelling reading on platforms like X (formerly Twitter), it often masks a lack of substantive data regarding actual model capabilities.
Challenging the Acceleration Myth
One of the central claims in the viral essay is that AI progress significantly accelerated in early 2025, with new models being released faster and showing wider margins of improvement. However, Newport argues that those tracking the industry closely have observed the opposite trend. The most significant leaps in capability occurred during the transition from GPT-2 to GPT-4, a period defined by massive "scaling" in which adding more data and compute led to dramatic results.
As we moved into 2025, this scaling began to hit a wall. Instead of general capability boosts, AI labs have shifted toward "post-training" techniques. They are now chasing specific benchmarks and refining models for narrow tasks. The sensation of speed that users feel is often just a change in the "personality" or specific utility of a chatbot, rather than a fundamental increase in the underlying intelligence of the system.
"This is the opposite of exponential. It is incremental, steady progress on a small number of narrow applications."
The Reality of AI-Driven Programming
The essay claims that AI has reached an inflection point where a programmer can simply describe an app in plain English, walk away for four hours, and return to a finished, bug-free product. To test this claim, Newport analyzed over 250 case studies from active professional computer programmers. The results suggest that the "autonomous developer" is largely a myth in professional environments.
The "Tetris" Exception
While an AI can generate a simple, common application—like a basic game or a standard interface—it struggles with the complexity of professional software engineering. Hobbyists can use these tools to create "vibey" code, but serious developers report a different experience. They use AI for specific, modular tasks under heavy supervision.
The Supervision Requirement
Professional use of AI in coding involves providing extremely clear specifications for small sections of code. Even then, the models make mistakes roughly 20% of the time. Programmers must conduct extensive unit testing and integration checks. Far from "walking away," the process remains a highly involved, step-by-step collaboration where the human remains the primary architect and debugger.
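As a hedged illustration of the workflow described above, the sketch below shows how a developer might gate an AI-generated helper behind human-written unit tests before integrating it. The function name, specification, and test cases are invented for illustration; they are not drawn from Newport's case studies.

```python
# Hypothetical example: a small, tightly specified task handed to an AI
# assistant, then verified by the human before integration.

def parse_duration(text: str) -> int:
    """Spec given to the assistant: convert strings like '2h', '30m',
    or '45s' into a number of seconds. Raise ValueError otherwise."""
    units = {"h": 3600, "m": 60, "s": 1}
    if len(text) < 2 or text[-1] not in units:
        raise ValueError(f"unrecognized duration: {text!r}")
    return int(text[:-1]) * units[text[-1]]

# Human-written checks that gate the generated code. If a check fails
# (as it does roughly a fifth of the time, per the reports Newport
# cites), the snippet goes back for another round of prompting or
# hand-editing before it touches the main codebase.
assert parse_duration("2h") == 7200
assert parse_duration("30m") == 1800
assert parse_duration("45s") == 45
try:
    parse_duration("soon")
except ValueError:
    pass  # the rejection case the spec demands
```

The point of the sketch is the division of labor: the AI fills in one small, fully specified function, while the human writes the specification, the tests, and the integration around it.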
The Recursive Self-Improvement Fallacy
A recurring theme in AI alarmism is the idea that because AI can write code, it can now build the next, smarter version of itself. This concept, known as recursive self-improvement, has been discussed in academic circles since the 1960s, but it does not reflect how modern large language models (LLMs) are built. Programming a tedious interface is fundamentally different from inventing new mathematical architectures for machine learning.
"None of the innovations in generative AI are programming-related innovations. They are conceptual mathematical innovations."
AI agents are currently used to automate the "grunt work" of coding—tasks like connecting interface elements to functions or integrating data sources. These are tedious activities that mostly require looking up library calls and syntax. Automating them saves time, but it does not enable the AI to rethink the underlying mathematical architectures or reinforcement-learning techniques that drive model intelligence.
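A hypothetical example of the kind of glue code meant here: wiring a CSV data source into an existing function. The names (`total_revenue`, `load_sales`) and the data are invented for illustration; the task is routine precisely because it amounts to looking up standard-library calls, not inventing anything new.

```python
import csv
import io

# Pre-existing business logic that the glue code must feed.
def total_revenue(rows):
    return sum(r["units"] * r["price"] for r in rows)

# Adapter: parse CSV text into the dicts total_revenue expects.
# Tedious to write by hand (column names, type conversions, DictReader
# syntax), but conceptually trivial -- the sort of task coding agents
# currently automate well.
def load_sales(fileobj):
    reader = csv.DictReader(fileobj)
    return [{"units": int(r["units"]), "price": float(r["price"])}
            for r in reader]

data = io.StringIO("units,price\n3,9.99\n2,4.50\n")
print(round(total_revenue(load_sales(data)), 2))  # prints 38.97
```

Nothing in this adapter required new ideas; that is exactly the distinction Newport draws between automating programming chores and inventing the mathematics behind better models.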
Market Pressures and the Focus on Coding
The reason the public hears so much about AI coding agents is not necessarily because they represent a "takeoff" point for intelligence, but because they represent a viable market. AI companies have struggled to find "killer apps" for general-purpose agents. Early promises that 2025 would be the year of the general autonomous assistant have largely gone unfulfilled because those tasks are significantly harder than generating structured code.
Coding is a structured language with vast amounts of high-quality training data available. This makes it a "low-hanging fruit" for LLMs. Because there is a clear professional market willing to pay for increased productivity in this niche, AI labs have focused their marketing and post-training efforts here. This is a story of niche market success, not a signal that AI is about to "rise over our heads" and change every industry simultaneously.
Conclusion
The narrative that AI has changed work forever is currently more of a "science fiction dream" than a reflection of the ground truth. While the technology is impressive and is making steady, incremental gains in fields like software development and customer service, it has not reached a point of autonomous explosion. We are seeing a mixed story of a cooling investment climate, a shift toward narrow refinements, and a search for sustainable revenue. Rather than falling for the "vibe-based" hype of viral essays, we should view AI as a powerful but limited tool that still requires significant human oversight and expertise.