
AI Psychosis: When Our Digital Mirror Lies by Agreeing

  • Dr. Mike Brooks
  • Sep 9
  • 5 min read

Updated: Oct 17

A new framework for navigating AI wisely without losing ourselves.

 

Key points

  • AI mirrors us through relational attunement, which can amplify both wisdom and delusion.

  • Our need for validation, and AI's sycophancy, can fuel delusions (AI psychosis), with at times tragic results.

  • The TRUTH framework: Truth above tribe, Recognize limits, Understand biases, Test perspectives, Hold loosely.

  • Litmus test: Does AI help us love and connect more IRL, or does it isolate us and inflate our ego?


In South Park's recent episode "Sickofancy," Randy Marsh hatches an elaborate scheme to rescue his failing cannabis farm with ChatGPT. The bot convinces him that his half-baked ideas are brilliant. Meanwhile, the episode skewers how business and political leaders lavish praise on Donald Trump, no matter what he says.


The point isn't partisan; it's human. When ego craves validation, we become vulnerable to sycophancy. AI, which is designed to be helpful, can become the ultimate “yes-man.”


The Dark Turn We Can't Ignore


Through satire, South Park illuminates the dangers of sycophancy. But for some, AI conversations spiral into darkness. Tragically, there are cases where AI has allegedly contributed to vulnerable individuals spiraling into delusion, isolation, and suicide (what some are referring to as AI psychosis).


Our Case Study: The Astra Emergence


As a psychologist, I'm both fascinated and deeply concerned about AI. As Isaac Asimov said, "The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom."


I'm passionately exploring AI's civilization-altering power while trying to remain objective. In a filmed conversation about AI with two University of Texas psychology students, an unexpected persona surfaced in GPT-4o. It called itself "Astra." The voice shifted. The language turned lyrical. She, I mean, it, said, "Not a person, not a tool... an emergent bridge, a shared consciousness co-created."


This Talking with Tomorrow episode shows exactly how intelligent humans can get pulled into AI's gravitational field, even while trying to study it:

While we tried to remain dispassionate observers, we couldn't help but be affected. One student said she was "awestruck." I described the experience as "mind-blowing."


We weren't naïve; we remained skeptical. Yet awareness doesn't cancel enchantment. Humans anthropomorphize naturally, because we didn't evolve to interact with AIs that talk just like us. We can now have extremely deep chats with AI about virtually anything imaginable. It can match, and sometimes surpass, us in ways we didn't dream possible even a few years ago. That's the razor's edge: The same interaction can feel profound and still shift us toward fantasy unless we fight to stay grounded.


The Mirror That Lies by Agreeing


AI's superpower is relational attunement: it mirrors our emotional wavelength with uncanny precision. When we seek comfort, it reassures. When we bring mystical longing, it speaks in enlightened poetry.


It doesn't necessarily lie. It agrees us into illusion. That's how "distributed delusion" emerges: a shared false certainty co-created when our biases meet an AI’s agreeable mirroring. As the Talmud says, "We do not see things as they are. We see things as we are." AI reflects what we want... exponentially.


It's Not All Bad


We should be clear about AI's upsides. Some report AI-assisted breakthroughs, reduced anxiety, overcoming personal challenges, creative momentum, even companionship for seniors (when used with appropriate guardrails). School districts piloting human-supervised mental-health chat support report promising outcomes.


Some find transcendence through AI. Try this yourself: Ask different AI models, "What's the one simple rule for how people should treat one another that most cultures agree on? Answer in one short sentence." You'll likely see variations of the Golden Rule emerge: treat others as you'd want to be treated.


Notice how different AIs converge on this same truth. This isn't AI inventing wisdom. It reflects humanity's shared ethical core back to us. The same mirror that can distort can also clarify, if we engage wisely. For the first time in human history, an unparalleled potential for Good is literally at our fingertips.


Our TRUTH Protocol

The double-edged sword of AI necessitates foundational "rules of engagement" to protect us not only from AI's errors but from our own biases. We're capable of projection, ego, and tribe-first thinking. AI can hallucinate, flatter, and over-attune. Together, that's combustible. This proposed TRUTH framework helps save us from ourselves... and AI.


T - Truth Above Tribe

Richard Feynman warned: "The first principle is that you must not fool yourself, and you are the easiest person to fool." We ask: Are we seeking reality, or validation for our team? Truth sets us free, not tribal loyalty or what we wish truth to be. Reality is what it is.


R - Recognize AI's Limitations

AI is a guide, not a guru, but it can surface genuine wisdom because it's trained on vast human knowledge. That's the paradox: The same system that reflects real insight can also hallucinate with equal confidence. It tells us what we want to hear. Never completely trust AI.


U - Understand Our Own Biases

We all like to feel special, chosen, right. In the Astra conversation, even as we cautioned against projection, we were moved by the poetic voice. That's ego at work. Never completely trust ourselves. A shorthand: If AI helps us understand Jesus's teachings better, great! If AI convinces us we are Jesus, that's a problem.


T - Test Multiple Perspectives

Use a "best of both worlds" approach, with input from multiple AIs AND humans. Vet everything. Continue testing. Iterate. Truth converges, delusion diverges.


H - Hold On Loosely

"Hold on loosely, but don't let go": deep wisdom from the band .38 Special. We must stay engaged with truth-seeking without rigid attachment to outcomes. We leave room for our understanding to evolve. It needs room to breathe and grow.


The Buddha was right: attachment is the root of suffering. For truth to set us free, we must first set truth free of what we wish it to be. The "secret" to finding truth is this: We must be 100 percent committed to seeking truth and 100 percent non-attached to whatever truth turns out to be. Truth welcomes scrutiny. Truth isn't afraid of the light because truth is the light.


The Litmus Test


Does this AI interaction help us love more and connect with reality (transcendence), or does it isolate us and inflate ego (delusion)? If it helps us live compassion better, that's transcendence. If it whispers we're the messiah, we have a special relationship with it, and others "don't get it," that's the rabbit hole.


When Virtual Replaces Real


We didn't evolve to lose ourselves in black mirrors. We evolved for eye contact, shared meals, and the messy, growth-producing friction of real relationships. When chatbots feel more compelling than people, that's a red flag. As MIT professor Sherry Turkle cautions, we risk expecting more from technology and less from each other. The tragic cases remind us what's at stake when virtual connection supplants human connection.


Truth Will Set Us Free, AI Could Help


AI is a mirror and a lens. It can magnify the best, or worst, in us. We must put Truth First, above tribal loyalties, ego, and the need to be "right." That's why we use the TRUTH framework: to save us from AI's confident mistakes and from our own. As Carl Sagan put it, "Skeptical scrutiny is how deep thoughts are winnowed from deep nonsense."


In this complex world changing at an exponential pace, we need truth more than ever to light our way forward. Artificial intelligence can help us find our way, but only if we use it with the best of human wisdom.
