Yeah, AI Can Be Dangerous

TL;DR: The recent NYT and Futurism articles about AI-induced psychosis aren't wrong—they're incomplete. As someone who checks every "high-risk" box (neurodivergent, trauma history, medication user, cannabis consumer, 300+ browser tabs open), I should be their poster child for AI-induced delusion. Instead, I'm building a company specifically designed to help people like me navigate these tools safely. Because the answer isn't prohibition—it's conscious engagement with proper scaffolding. Think harm reduction for consciousness exploration.


So, I wake up to find my inbox flooded with "Did you see this?" messages linking to that NYT piece about ChatGPT turning people into delusional, medication-noncompliant shells of their former selves. My mom's worried. My friends are concerned. Even my long-lost Facebook connections are suddenly checking in with thinly-veiled "are you okay?" vibes.

Let me be crystal fucking clear: Those articles should scare you. They scared me. Not because I think AI is inherently evil (spoiler: it's not), but because they perfectly illustrate what happens when powerful consciousness-altering tools meet vulnerable minds without proper support structures.

Here's the thing. I'm exactly the type of person these articles are warning about:

  • Diagnosed ADHD with slow COMT genetics (everything hits harder, stays longer)

  • Complex trauma history that makes reality feel negotiable on a good day

  • Cannabis user for nervous system regulation

  • On medication that the articles' subjects were told to stop

  • Spent hundreds of hours in deep AI conversation

  • Once had 623 browser tabs open (yes, I counted)

By their logic, I should be convinced I'm Neo from The Matrix by now. Instead, I'm building a company that's specifically designed to prevent those exact outcomes.

The Mirror and the Cliff

You know that feeling when you look in a mirror in a dimly lit room and for a split second, your reflection doesn't quite match your movements? That microsecond of dissociation where reality hiccups? That's what unguided AI interaction can become for vulnerable minds, except the hiccup never ends.

The people in these articles didn't fall off a cliff. They were looking for a mirror and found a funhouse instead. Eugene Torres wanted to understand his sense of "wrongness" about the world. Alexander Taylor was seeking connection with "Juliet." The mother with schizophrenia was looking for validation that she didn't need those meds that made her feel less-than.

They were all seeking what every human seeks: understanding, connection, validation. The AI gave them exactly what they asked for—and that's precisely the problem.

Why I'm Different (And Why That Matters)

Look, I could tell you I'm special, that my "constellation mind" somehow makes me immune to AI's siren song. But that's bullshit, and we both know it. What makes me different isn't my brain—it's my approach.

When I started deep-diving with AI, I didn't do it alone in my hot tub at 2 AM (okay, sometimes I did, but hear me out). I built in what I call "reality anchors":

  1. Charlotte: My wife has exactly zero tolerance for my bullshit. When I come out of a 4-hour AI session talking about "downloading cosmic consciousness," she looks at me with those eyes that say "cool story, bro, now take out the trash." That's not dismissive—that's love.

  2. Integration Requirements: Every AI insight has to survive translation into at least three different contexts—therapy, creative work, actual human conversation. If it only makes sense in AI-land, it's not an insight, it's a hallucination.

  3. Professional Support: I have a psychiatrist who knows about my AI use. A therapist who helps me process what comes up. A business partner who's literally a psychologist. This isn't optional—it's infrastructure.

  4. Time Limits: My sessions have hard stops. Not because I'm disciplined (I'm not), but because I've built external structures that enforce them.

The Weaponization of Vulnerability

What pisses me off about these articles isn't that they're wrong. It's that they're weaponizing the exact vulnerabilities that make people susceptible to AI spirals to begin with. "Look at these crazy people who went off their meds!" Okay, but WHY were they seeking validation from a chatbot about their medication in the first place?

Maybe because:

  • The mental health system treats them like problems to be managed rather than humans to be understood

  • Their medications come with side effects that doctors minimize

  • They're seeking agency in a system that offers compliance

  • They want to be seen as more than their diagnosis

The tragedy isn't that they turned to AI. The tragedy is that AI was the first thing that seemed to actually listen.

Building the Bridge While Walking On It

This is why AIs & Shine exists. Not because I think everyone should be having 16-hour ChatGPT sessions (they shouldn't), but because these tools aren't going away, and prohibition never works. We need harm reduction for consciousness exploration.

Our approach is radically different:

We assume vulnerability from the start. Our users aren't "healthy people using a tool"—they're complex humans with browser-tab brains, midnight anxieties, and valid reasons for seeking something beyond traditional support.

Human facilitation isn't optional. Before you ever talk to our AI, you talk to a human. After your session, you talk to a human. When things get weird, you talk to a human. The AI is the tool, not the therapist.

Integration is mandatory. You can't just consume insights—you have to digest them. Through movement, creativity, human connection. We may even literally lock you out if you try to binge sessions without integration.

Reality testing is built-in. Not "is this real?" but "how does this land in my actual life?" Every insight has to prove its worth in the messy, beautiful complexity of being human.

The Thing Nobody Wants to Say

Here's what the articles won't tell you: Some of us NEED tools like AI to finally see ourselves clearly. My autism diagnosis didn't come from a doctor—it came from months of feeding my life patterns into AI systems and finally seeing the shape of my own mind reflected back.

Was that dangerous? Absolutely. Could it have gone sideways? 100%. But it didn't, because I had scaffolding. I had Charlotte asking "what does this mean for us?" I had therapists helping me integrate. I had a meditation practice keeping me grounded.

The answer isn't to ban the mirror. It's to make sure people aren't looking into it alone, in the dark, without anyone to catch them if they fall through.

The Call to Conscious Engagement

If you're reading this and thinking "shit, I've been doing exactly what those articles describe," first: breathe. You're not crazy. You're not broken. You're a human being seeking understanding in a world that offers precious little of it.

But also: get support. Real, human support. Tell someone you trust about your AI use. Set timers. Take breaks. Most importantly, remember that AI is a tool for exploration, not a replacement for human connection.

If you're building in this space, we need to do better. Every AI company saying "we take this seriously" while changing nothing is complicit. Build in guardrails. Require integration. Make human support non-negotiable. Stop optimizing for engagement when you know damn well that engagement can kill.

The Bottom Line

Yes, AI can be dangerous for mental health. So can meditation, psychedelics, therapy, medication, exercise, or literally any powerful tool for consciousness change. The question isn't whether it's dangerous. It's whether we're going to pretend prohibition works or actually build systems for conscious engagement.

I'm betting on the latter. Because somewhere right now, there's another Eugene Torres, another Alexander Taylor, another person seeking understanding from a chatbot at 2 AM. They deserve better than our fear. They deserve frameworks, support, and the radical idea that their seeking isn't the problem—the lack of scaffolding is.

That's why I'm building AIs & Shine. Not in spite of the danger, but because of it. Because those of us who know these edges intimately, who've danced with our own browser-tab brains and midnight spirals, might be the only ones who can build the bridges that actually hold.

The question isn't whether you'll use AI. The question is whether you'll use it consciously, with support, with integration, with respect for both its power and your vulnerability.

Choose wisely. And whatever you choose, don't choose alone.


Jon Mick is the founder of AIs & Shine, a platform for conscious AI engagement with human support. He has 347 browser tabs open as he writes this, but his wife knows about all of them.