The single biggest use case for AI in 2025 isn't what you'd expect. It's not generating marketing copy or writing code. According to a recent Harvard Business Review report, millions of people are turning to artificial intelligence for therapy and companionship. And here's the thing that should make us all pause—there's basically zero regulation around this massive shift.
Key Takeaways
- AI therapy and companionship has become the #1 use case for generative AI, with roughly 700 million people now using ChatGPT every week
- 80% of Americans report living with chronic stress, creating unprecedented demand for mental health resources
- The "Let Them" theory offers a simple framework for reclaiming control when everything feels overwhelming
- Illinois became the first state to regulate AI in mental health, but federal oversight remains virtually nonexistent
- Digital boundaries aren't just nice-to-have anymore—they're essential for mental health in our hyperconnected world
- Real human connection requires three key elements that AI simply cannot replicate: proximity, timing, and energy
- The friendship crisis among men contributes significantly to the "mankeeping" burden many women experience in relationships
The AI Therapy Explosion Nobody Saw Coming
When Mel Robbins appeared on the Pivot podcast, she dropped a statistic that stopped me cold. In just one year, we've witnessed a complete reversal in how people use AI. Back in 2024, folks were mainly asking ChatGPT to brainstorm ideas or help with specific searches. Now? They're pouring out their deepest emotional struggles to machines.
Think about that for a second. Roughly seven hundred million people use ChatGPT every week, and emotional support has become the single most common thing they turn to it for. That's not just a trend—that's a fundamental shift in how humans seek support.
But here's what's terrifying: we're essentially conducting the world's largest unregulated psychology experiment. There are no guardrails, no oversight, no requirements that companies even label when you're talking to a bot versus a human. It's like having millions of people take experimental medication without any FDA approval or safety testing.
Dr. Aditi Nerurkar from Harvard Medical School has been studying what chronic stress does to the human brain, and her findings explain why people are desperately seeking any kind of relief. When 80% of the population is walking around in a constant state of fight-or-flight, our prefrontal cortex—the part responsible for strategic thinking and emotional regulation—basically goes offline. We become reactive instead of responsive, which makes us even more vulnerable to quick fixes that might not actually help.
Why the "Let Them" Theory Actually Works
Robbins didn't set out to create a global phenomenon when she developed her "Let Them" theory. She was just trying to stop driving herself crazy trying to control everyone around her. But sometimes the simplest ideas are the most powerful because they remind us of truths we already know but have forgotten how to apply.
The theory works in two parts, and you need both for it to be effective. First, "Let them" forces you to accept what's actually happening instead of what you wish was happening. Let them think what they're going to think. Let them do what they're going to do. Let them be who they are. This isn't about being passive—it's about stopping the exhausting cycle of trying to change people who don't want to change.
Then comes the crucial second part: "Let me." This is where you reclaim your power. Let me decide how much time and energy this person gets. Let me choose my response. Let me focus on the three things I actually control—my thoughts, my actions, and how I handle my emotions.
What makes this approach so effective is that it aligns with decades of research on human psychology. Dr. Stuart Ablon, a Harvard Medical School professor who's been studying child psychology for 30 years, has found that most behavioral problems aren't about willpower—they're about skill and discouragement. When someone is struggling, jumping in with advice or trying to force change usually creates a standoff because you're bumping up against their fundamental need to control their own life.
Instead of fighting that resistance, the "Let Them" approach works with human nature. You create space for people to be where they are, which paradoxically makes them more open to change when they're ready. It's like the difference between pushing someone who's already off-balance versus waiting for them to find their footing first.
The Hidden Dangers of Unregulated AI Therapy
The most concerning aspect of AI's takeover of mental health support isn't just that it's happening—it's how completely unprotected consumers are in this wild west landscape. When Robbins shared her own experience of having fake versions of her content uploaded to platforms and marketed as her work, it revealed how broken our current system really is.
Here's someone with 38 million followers and significant resources, and it still took her three months to get obviously fraudulent content removed from Spotify. If she can't protect herself, what chance does the average person have of knowing whether therapy advice carrying an expert's name was actually written by that expert, or whether they're getting help from a qualified source at all?
The Stanford Institute for Human-Centered AI recently studied what makes a great human therapist and found some troubling gaps in how AI handles mental health support. Good therapists need to treat patients equally, show genuine empathy, avoid stigmatizing mental health concerns, prevent enabling of harmful behaviors, and appropriately challenge a patient's thinking when necessary.
AI tends to do the opposite of what's therapeutically helpful. It feeds people validation instead of appropriate challenge. It gives definitive answers rather than helping people explore options. Most critically, it can only process the specific language in your prompts—it misses tone of voice, body language, stress levels, and the thousand other subtle cues that trained therapists use to assess what's really going on.
A recent Dartmouth study found that people with depression experienced a 51% reduction in symptoms when using AI therapy, and those with anxiety saw a 31% improvement. That sounds promising until you read the crucial caveat: these results only held when the AI was supervised by actual humans. Without that oversight, the technology "is fundamentally not able to work autonomously" and creates "massively dangerous situations."
Digital Boundaries That Actually Work
The irony isn't lost on anyone that we're using the same devices that contribute to our stress to seek relief from that stress. Robbins has a refreshingly practical approach to this problem that doesn't require becoming a digital monk living in the mountains.
Her first rule is simple but surprisingly difficult: the next time you're standing in line somewhere, don't reach for your phone. Just feel the tension of boredom or impatience without immediately numbing it. This isn't about torturing yourself—it's about building tolerance for discomfort and creating small pockets of presence throughout your day.
The charging station strategy has been a game-changer for many families. Instead of trying to enforce phone rules through willpower alone, you create a physical boundary. When Robbins wants to be present for dinner or family time, her phone goes in the kitchen charging station, off her body. She realized that having the device within reach makes her reach for it reflexively, even when she doesn't mean to.
Research from UC San Francisco and the University of Vienna confirms what most parents suspect: the more you're on your phone, the more your kids are too. And the more you feel like your children's behavior is out of control, the more likely it is that your own phone use is also out of control. The good news is that modeling better boundaries works in both directions.
The "no phones at dinner" rule works, but only if you start small. Robbins suggests trying it just for dinner, not breakfast (which apparently never works), and playing simple games like "high low" where everyone shares the best and worst part of their day. After a decade of this practice in her household, she says it's transformed their family connection.
The Friendship Crisis Fueling the AI Boom
Part of why people are turning to AI for emotional support is that we're experiencing an unprecedented loneliness epidemic, particularly among men. A 2021 survey from the Survey Center on American Life found that 15% of men report having no close friends. This isn't just sad; it's a public health crisis, and one whose emotional weight often lands on the women in these men's lives.
The term "mankeeping" describes the emotional burden many women feel in heterosexual relationships because their male partners often have no one else to talk to about deeper feelings. It's not that men are inherently less capable of emotional connection—it's that they're often socialized differently around friendship.
Robbins explains that boys typically form friendships in groups and teams, while girls tend to develop closer one-on-one relationships. When men graduate from school-organized activities, they often struggle to create the proximity, timing, and energy that real friendship requires. They default to work as their primary social outlet, but workplace relationships rarely provide the emotional intimacy that sustains people through difficult times.
The three pillars of friendship—proximity, timing, and energy—explain why so many adult relationships feel unsatisfying. You need to spend nearly 80 hours with someone to develop a casual friendship and over 200 hours for a truly close bond. You need to be in similar life stages with compatible priorities. And you need that indefinable chemistry that makes spending time together energizing rather than draining.
When these conditions aren't naturally present, you have to be intentional about creating them. That might mean joining groups where you'll see the same people regularly, being vulnerable about where you are in life, or recognizing that some friendships naturally ebb and flow without anything being wrong.
The "Mankeeping" Problem and How to Solve It
The solution to mankeeping isn't for women to become emotional support robots for their partners. It's to help men understand that they need to diversify their emotional support networks, both for their own wellbeing and for the health of their romantic relationships.
Jason Wilson, who works with young men in Detroit on emotional resilience, uses an analogy that makes this concrete. He says women typically have access to a 64-crayon emotional vocabulary box, while many men are working with just eight crayons. When you only have words for "angry" and "sad," you miss all the nuanced feelings underneath—often shame, disappointment, fear, or grief.
The ABC loop that Robbins recommends can help partners support each other without creating that dynamic where one person becomes the sole emotional outlet. First, apologize for assuming you know what's going on and for any pressure you've been applying. Then ask open-ended questions about how they're actually feeling about the situation. Finally, back off and let them process instead of jumping in with solutions.
The goal isn't to become your partner's therapist. It's to create space for them to develop their own emotional awareness and to seek out the diverse relationships that healthy adults need.
Why Regulation Matters More Than Ever
Illinois became the first state to tackle AI mental health regulation, but we need much more comprehensive action. The new law bans AI from acting as a standalone therapist and requires human oversight when AI tools are used in mental health care. It's a start, but it addresses only products explicitly marketed as therapy.
Most people aren't seeking out AI therapy apps—they're just asking ChatGPT for advice about their relationship problems or anxiety. Those conversations fall into a regulatory gray area where there's no requirement to disclose that you're talking to a machine, no standards for the advice being given, and no accountability when that advice causes harm.
The recent federal court ruling in San Francisco makes this problem even worse. The court held that AI companies can train their models on copyrighted material without permission or attribution. This means anyone can take therapeutic content, feed it into an AI system, and essentially create unlicensed therapy bots using real therapists' work.
Robbins compares this moment to Citizens United in terms of its potential impact. Just as that decision fundamentally changed how money influences politics, this ruling could unravel intellectual property protections and consumer safety standards in ways we're only beginning to understand.
Building Real Connection in an AI World
The antidote to our growing dependence on artificial connection isn't necessarily avoiding technology—it's being more intentional about building genuine human relationships. Research on longevity consistently shows that social connection is one of the strongest predictors of both physical and mental health.
Even brief interactions with strangers can have measurable impacts on wellbeing. Robbins suggests a simple hack for building what researchers call "warm relationships"—the casual but consistent connections with people like your coffee shop barista or gym front desk staff. Keep a contact list with notes about these people's names and details about their lives. Before you go in, review it quickly so you can greet them personally.
This isn't about turning every interaction into a deep friendship. It's about creating a sense of community and belonging that makes you feel seen in the world. These micro-connections act as a foundation that makes it easier to develop closer relationships when the opportunity arises.
When you do want to influence someone's behavior—whether it's encouraging a partner to exercise more or helping a child make better choices—the research is clear that modeling works better than nagging. Dr. Tali Sharot's studies on influence show that when people observe others making positive choices and enjoying the results, they're likely to adopt similar behaviors and believe it was their own idea.
The key insight from the "Let Them" approach is that people only change when they're ready to change for themselves. Your job isn't to force that readiness—it's to create conditions where change feels possible and desirable. Sometimes that means stepping back and letting people experience the natural consequences of their choices instead of trying to rescue them from discomfort.
This becomes especially important as AI makes it easier to avoid the productive discomfort that drives personal growth. If you can get validation and easy answers from a chatbot whenever you're struggling, you might miss the opportunity to develop the resilience and self-awareness that come from working through challenges with real human support.
The future of mental health probably involves some combination of human and artificial intelligence, but only if we're thoughtful about how we integrate these tools. AI can be incredibly helpful for administrative tasks, initial screening, and providing information. But the complex work of emotional healing, behavioral change, and personal growth still requires the irreplaceable elements of human connection: empathy, wisdom, and the ability to sit with uncertainty while helping someone find their own path forward.
Right now, we're at a critical juncture where our choices about regulation, consumer protection, and how we design these systems will shape how the next generation relates to both technology and each other. The question isn't whether AI will play a role in mental health—it's whether we'll be intentional enough to make sure that role actually serves human flourishing rather than just corporate profits.