We're standing at the edge of the biggest shift in computing since the smartphone revolution, and Meta CTO Andrew Bosworth has some fascinating insights about where we're headed next.
Key Takeaways
- AR glasses will fundamentally change how we consume content within the next decade, moving beyond phone-centric experiences
- The AI revolution arrived years earlier than Meta expected, creating unexpected opportunities for natural interface design
- Current Ray-Ban Meta glasses represent just the beginning of a spectrum from basic always-on displays to full AR experiences
- The traditional app-based computing model might get completely inverted by AI-driven interfaces that understand user intent
- Meta's open-source AI strategy with Llama stems from both research philosophy and smart business positioning
- Hardware invention risk is manageable, but social adoption and regulatory challenges pose the biggest threats to AR's future
- We're experiencing a rare generational moment in computing interface evolution that only happens every few decades
- The transition from mobile won't happen overnight—Meta sees clearer progress in 10 years than 5 years
- AI assistants that understand physical context could eliminate the need to manually choose between different apps and services
- Developer ecosystem challenges remain significant, but AI might provide the silver bullet solution
From Phones to Faces: The Next Computing Revolution
Here's the thing about technology shifts—they don't happen because someone decides it's time. They happen because we hit the limits of what's possible with current tools, and that's exactly where we are with smartphones.
Bosworth explains it perfectly: "We really started to feel 10 years ago like the mobile phone form factor, as amazing as it was, this is 2015, was like already saturated. Like that was what it was going to be." The question then becomes obvious—what comes next when you've maxed out the greatest computing device we've ever created?
The answer, according to Meta's vision, has to do with getting closer to our bodies. "Once you get past the mobile phone, it has to be more natural in terms of how you're getting information into your body, which is, you know, obviously ideally usually through our eyes and ears."
This isn't just about convenience. It's about solving fundamental interface problems. When you can't rely on touchscreens, keyboards, or mice, you need completely different ways for humans to express their intentions to machines. That's why Meta has been betting big on face-mounted devices—they provide direct access to our primary information channels while opening up possibilities for neural interfaces down the road.
The timeline? Well, that's where things get interesting. Bosworth feels "pretty confident" about the 10-year outlook, where we'll have "a lot more ways to bring content into our viewshed than just taking out our phone." But the 5-year picture is trickier because displacing something as central as the smartphone is almost unthinkable right now.
What's fascinating is how the current Ray-Ban Meta glasses fit into this progression. They weren't originally designed as AI glasses at all. The team was six months from production when Llama 3 hit, and they realized they had to integrate AI capabilities. "The hardware isn't that different between the two, but the interactions that we enable with the person using it are so much richer now."
That's a perfect example of how breakthrough products emerge—not from a single innovation, but from the convergence of multiple technology waves hitting at the right moment.
The AI Revolution Nobody Saw Coming (This Soon)
One of the most revealing parts of Bosworth's perspective is how the AI breakthrough caught even Meta off guard. "Mark and I always believed that this AI revolution was coming. We just thought it was going to take longer. We thought we were probably still 10 years away at this point."
Instead, AI arrived as what Bosworth calls "a wonderful blessing" that solved one of their biggest challenges. Meta had been working on the hardware problems and interaction design problems for years, but AI suddenly provided a much more natural way for people to express their intentions to machines.
What makes this AI wave different from previous technology breakthroughs is its broad applicability. "Almost always when these technological breakthroughs happen, they're almost always very domain specific," Bosworth notes. "This kind of feels like, oh, everything's going to get better... every single interface that I interact with, every single problem space that I'm trying to solve are going to be made easier by virtue of this new technology."
The live AI feature in Ray-Ban Meta glasses perfectly illustrates this potential. For 30 minutes at a time, until the battery runs out, the glasses can see what you're seeing and respond to questions about your environment. Imagine standing over breakfast ingredients and asking, "Hey Meta, what are some recipes with these ingredients?" and getting immediate, contextual responses.
But here's where Bosworth's thinking gets really interesting—he believes AI might completely invert how we think about software. Right now, when you want to play music, you have to think "Do I open Spotify or do I open Tidal?" That's backwards. What you actually want is just to play music, and you shouldn't have to be responsible for orchestrating which app handles that request.
"I don't want to have to be responsible for orchestrating like what app I'm opening to do a thing. We've had to do that because that's how things were done in the entire history of digital computing."
This isn't just about wearables—it could reshape how we interact with all computing devices, including phones.
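To make the inversion concrete, here is a minimal sketch of an intent-first interface: the user states a goal and the system, not the user, resolves which service handles it. All names and the keyword matching are purely illustrative assumptions, not a real Meta API; a production system would use an AI model for the intent parsing step.

```python
# Hypothetical capability registry: intents map to services that can fulfill them.
CAPABILITIES = {
    "play_music": ["spotify", "tidal"],
    "send_message": ["whatsapp", "messenger"],
}

def parse_intent(utterance: str) -> str:
    """Toy intent extraction; a real system would use an LLM here."""
    if "play" in utterance or "music" in utterance:
        return "play_music"
    if "tell" in utterance or "message" in utterance:
        return "send_message"
    raise ValueError(f"no handler for: {utterance!r}")

def dispatch(utterance: str) -> str:
    intent = parse_intent(utterance)
    # The assistant picks a provider; the user never chooses an app.
    provider = CAPABILITIES[intent][0]
    return f"{intent} -> {provider}"

print(dispatch("play some jazz"))     # play_music -> spotify
print(dispatch("tell Sam I'm late"))  # send_message -> whatsapp
```

The point of the sketch is the shape of the interaction: the app boundary disappears behind the intent, which is exactly the inversion Bosworth is describing.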
The Orion Glimpse: When AR Finally Clicks
When Bosworth talks about Meta's Orion AR glasses, you can hear the excitement in his voice. "When you use Orion, when you use the full AR glasses, you can imagine a post-phone world, you're like, 'Oh, wow.' Like, if this was attractive enough and light enough to wear all day, I could just like this would have all the stuff I need."
What makes Orion special isn't just the display technology—it's the combination of spatial computing with AI understanding. The breakfast demo Bosworth mentions is a perfect example. You look at ingredients laid out on a counter and ask about recipes. The AI can see what you're seeing and provide contextual responses based on the physical world around you.
Initially, Orion was designed around the familiar app model we all know. You'd have calls, email, texting, games, Instagram reels—basically a spatial version of your phone. But the AI integration opens up something much more interesting: "What if the entire app model's upside down?"
Instead of you deciding to open Instagram, the device might realize you have a moment between meetings and suggest catching up on highlights from your favorite basketball team. It's proactive rather than reactive, contextual rather than manual.
The hardware challenges remain significant, though. "You can't come for the king, you best not miss. The phone is an incredible centerpiece of our lives today." The world has adapted itself to phones—even ice makers have phone apps now, as Bosworth wryly observes.
That's why he's more confident about the 10-year timeline than the 5-year one. The infrastructure, habits, and ecosystem built around phones won't disappear overnight.
Rethinking Software: The Death of Apps as We Know Them
Perhaps the most provocative idea Bosworth shares is his prediction about the app model itself. He poses a fascinating thought experiment: "If you were building a phone today, would you build an app store the way you historically built an app store? Or would you say like, hey, you as a consumer express your intention like express what you're trying to accomplish and let's see what the system can produce."
This isn't just theoretical speculation. Bosworth sees a clear path for how this transition might happen. As AI gets better at agentic reasoning, it will inevitably hit walls where users ask for something it can't do. Those failures become a goldmine for developers.
"I've got 100,000 people a day trying this problem. They're trying to use your app. They don't know they are, but they're trying to use your app. Look, here's the query stream. Like, here's what's coming through. And we're going to tell them no today. If you build these hooks, you got 100,000 people clamoring for your service."
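Bosworth's "query stream" idea can be sketched in a few lines: every time the assistant has to tell a user "no," that unmet intent gets logged, and the aggregated counts become the demand signal developers would build against. The class and method names below are hypothetical, chosen only to illustrate the mechanism.

```python
from collections import Counter

class IntentRouter:
    def __init__(self):
        self.handlers = {}      # intent -> handler function
        self.unmet = Counter()  # intent -> how many users we told "no"

    def register(self, intent, handler):
        self.handlers[intent] = handler

    def handle(self, intent, *args):
        if intent in self.handlers:
            return self.handlers[intent](*args)
        self.unmet[intent] += 1  # record the demand we couldn't serve
        return None

    def developer_report(self, top=3):
        """The 'query stream': where demand exists but no app does."""
        return self.unmet.most_common(top)

router = IntentRouter()
router.register("play_music", lambda q: f"playing {q}")
for _ in range(100_000):
    router.handle("book_dinner_reservation")
print(router.developer_report())  # [('book_dinner_reservation', 100000)]
```

In this framing, the failure log is the new app store listing: a developer who registers a handler for the most-requested unmet intent inherits 100,000 users on day one.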
It's similar to how Google's rise reshaped entire industries. Before Google, the web was organized around directories and link indexes; getting major sources to link to you was the game. Once Google took over, everything became about SEO and where you ranked in the query stream.
The AI equivalent could work the same way. Instead of competing for app store placement or brand recognition, companies would compete on performance within the AI's recommendation system. When someone asks to play music, the AI picks the service based on quality, price, latency, or availability of specific songs.
This creates what Bosworth calls "these very exciting marketplaces for functionality inside the AI." But it also "abstracts away a lot of companies' brand names, which I think is going to be very hard for an entire generation of brands."
Music services that have built their value around brand loyalty and user attachment suddenly find themselves competing purely on technical performance and pricing. That's a completely different game.
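A marketplace like the one Bosworth describes could be sketched as a scoring function over competing providers: when a user asks to play a song, the AI ranks services on measurable performance rather than brand. The provider stats and weights below are entirely made up for illustration; they stand in for whatever quality, price, and availability signals a real system would measure.

```python
# Hypothetical per-request stats for three competing music services.
providers = {
    "service_a": {"latency_ms": 120, "price": 9.99,  "has_track": True},
    "service_b": {"latency_ms": 40,  "price": 12.99, "has_track": True},
    "service_c": {"latency_ms": 30,  "price": 8.99,  "has_track": False},
}

def score(stats) -> float:
    if not stats["has_track"]:  # can't serve this request at all
        return float("-inf")
    # Lower latency and lower price both raise the score; weights are arbitrary.
    return -0.01 * stats["latency_ms"] - 0.5 * stats["price"]

def pick_provider(providers) -> str:
    return max(providers, key=lambda name: score(providers[name]))

print(pick_provider(providers))  # service_a
```

Notice what's absent from the scoring function: any notion of which logo the user recognizes. That's the abstraction of brand Bosworth warns about.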
Open Source Strategy: Commoditizing Your Complements
Meta's approach to AI development through open-source Llama models isn't just altruistic—it's strategically brilliant. The strategy comes from two converging factors that Bosworth explains clearly.
First, there's the research philosophy. Meta's Fundamental AI Research (FAIR) group has been open-source since the beginning, attracting researchers who believe "we're going to make more progress as a society working together across boundaries of individual labs than not." When Llama first launched, all models were open source anyway. "The only thing that was unusual was everything else just went closed source over time."
But the real genius is in the business strategy. As Bosworth puts it: "I believe these are going to be commodities and you want to commoditize your complements."
Meta is in a unique position: its products get better through AI, but having AI doesn't let anyone else build Meta's products. Whether it's recommendation systems for feeds and reels, or knowing which friend to suggest when you start typing a message, AI improves Meta's core offerings. But access to AI models doesn't let competitors replicate Facebook, Instagram, or WhatsApp.
"The asymmetry works in our favor," Bosworth explains. Making sure there are competitively priced AI models available helps startups, academic labs, and the entire industry—while also helping Meta as an application provider.
It's a perfect alignment of societal benefit and business advantage. Meta gets better AI through community contributions (like DeepSeek's innovations in memory architectures), while ensuring it isn't disadvantaged if AI capabilities become widely available.
The Risks That Could Derail Everything
Despite his optimism, Bosworth is refreshingly honest about the substantial risks facing this vision. He breaks them down into several categories, starting with invention risk: "There exists risk that the things that we want to build we don't have the capacity to build as a society as a species yet."
But he's actually more worried about adoption risk. "Is it considered socially acceptable? Are people willing to learn a new modality?" We all learned to type as kids and were essentially born with phones in our hands. Will people be willing to learn entirely new ways of interacting with computers?
The ecosystem risk might be even bigger. Even if the hardware works and people accept it, will developers bring the full suite of software we need to interact with modern society? "If it just does like your email and reels, that's probably not enough."
Then there are the thorny regulatory and privacy questions that Bosworth describes as "super deep" and capable of derailing everything. Consider this scenario: you're wearing always-on AR glasses that give you superhuman memory. You see someone you interviewed years ago but can't remember their name. Your glasses could instantly identify them and remind you of your previous interaction.
"Is it am I allowed to use a tool to assist me or not?" Bosworth asks. "You showed me your face. And if I was somebody with a better memory, I could remember the face... I don't have a great memory. So is it am I allowed to use a tool to assist me or not?"
These aren't just technical problems—they're fundamental questions about privacy, consent, and human augmentation that society hasn't figured out yet. Bosworth points to nuclear power as an example of how technology can get derailed for decades "for absolutely stupid reasons" because the industry "played it wrong."
The Belief That Makes the Difference
What stands out most in Bosworth's perspective is the depth of conviction behind Meta's AR efforts. After nine years of headwinds and skepticism, they're finally seeing some tailwinds, and it's clear this isn't just a business bet for them—it's a mission.
"We're true believers. Like we have actual conviction. Mark believes this is the next thing. It needs to happen and it doesn't happen for free. Like we can be the ones to do it."
Bosworth repeatedly emphasizes what he calls "the myth of technological eventualism"—the idea that important technologies will just eventually happen. "That's not how it works. You have to stop and put the money and the time and do it. Like somebody has to stop and do it."
He sees this moment as comparable to Xerox PARC, J. C. R. Licklider's work on human-computer interaction, and other foundational computing breakthroughs. "It's a rare moment. It doesn't even happen once a generation. I think it may happen every other generation or every third generation."
The stakes feel appropriately high. This isn't just about building better gadgets—it's about fundamentally rethinking how humans interact with computers. And while Bosworth acknowledges they might fail, he's clear about one thing: "We will not fail for lack of effort or belief."
Whether that conviction is enough to navigate the technical, social, and regulatory challenges ahead remains to be seen. But for a company that learned firsthand how costly it is to miss a platform shift, as Meta did in the early mobile era, it's hard not to take its AR vision seriously. The pieces are starting to come together in ways that few could have predicted even a couple of years ago.
The future of computing might not arrive exactly as planned, but it's definitely going to be interesting to watch unfold.