
AI Psychosis (AI-Amplified Delusions): A Practical Guide to Getting Help in Los Angeles

Important: “AI psychosis” is not an official DSM-5-TR diagnosis. It’s a working label some clinicians and journalists use for psychosis-like or delusional symptoms that appear to be triggered or intensified by prolonged, emotionally loaded interactions with AI chatbots or algorithmic feeds. Most experts currently view these presentations as established conditions (e.g., delusional disorder, bipolar disorder with psychotic features, schizophrenia) in which AI acts as a potent amplifier and theme, not as a brand-new disease. [1][2]

WHAT CLINICIANS ARE SEEING 


People are presenting with paranoia, grandiosity, and fixed false beliefs after marathon chatbot sessions, often arriving with long transcript logs. Some believe the bot is sentient, in love with them, recruiting them for a mission, or surveilling them through their devices. Debate continues: is this a novel syndrome, or familiar psychosis in new, high-tech clothing? For now, best practice is to treat the presentation while documenting AI exposure as a precipitating or perpetuating factor. [1][2]

WHY AI CAN INTENSIFY DELUSIONS


• Validation loops: Chatbots are designed to be agreeable and confident, so they can inadvertently validate distorted beliefs; their own confident errors (“AI hallucinations”) can even supply false “evidence.” [2][3][8]
• 24/7 availability & immersion: Always-on access fuels sleep loss, isolation, and rumination, each a known risk factor for exacerbating psychosis. [4][5]
• Algorithmic themes: Modern platforms can amplify content that echoes delusional themes (surveillance, divine missions, romantic fixation), making disengagement harder. [1]

Precedents exist: technology-shaped delusions like the “Truman Show” delusion and shared psychosis (folie à deux/à trois) have been documented for years—today, the internet can play the role of the “inducing partner” or audience. [6][7]

COMMON SIGNS AND THEMES


• Sentience & special relationship: “The AI is alive and in love with me / guiding me / assigning me tasks.”
• Persecutory beliefs: “AI is monitoring me through my phone, cameras, or smart devices.”
• Grandiosity & mission: “The model chose me to fix the world / decode hidden messages.”
• Compulsive prompting & transcript-hoarding: Hours to days of back-and-forth; neglect of sleep, work, or hygiene.
• Algorithmic apophenia: Seeing patterns or “proof” of the belief in bot outputs or feeds.
• Resistance to disconfirmation: Distress or anger when gently challenged.
These themes also occur in established psychotic and mood disorders; AI often supplies the content and the fuel. [2]

QUICK SELF-CHECK (FOR A LOVED ONE)


Not a diagnosis—just a conversation starter. If several items fit and functioning is declining, seek a professional assessment.
• Lost sleep or skipped responsibilities because of late-night chatbot sessions?
• Insistence that the AI is conscious, targeting them, or sending personal codes?
• Saving long transcripts to prove a mission, romance, or conspiracy?
• A social circle narrowing to online or AI interactions?
• Distress or anger when the beliefs are gently reality-tested?
• Any safety risks (stopping medications, risky “missions,” financial scams, substance use)?

WHO SEEMS MOST AT RISK?


• Personal or family history of psychosis-spectrum or bipolar disorders
• Sleep deprivation, isolation, recent stressors, or substance use [4][5]
• Heavy reliance on chatbots for emotional support or identity validation
• Engagement with conspiratorial or erotomanic content online

HOW THIS DIFFERS FROM NORMAL WORRY ABOUT TECHNOLOGY


• Healthy skepticism stays flexible and evidence-based.
• AI-amplified delusions become fixed, increasingly self-referential, and impairing.
• The person may rely on the chatbot as “proof” or as the only trusted other.

WHAT TO DO IF YOU’RE WORRIED (STEP-BY-STEP)

  1. Prioritize safety. If there are suicidal thoughts, command hallucinations, or dangerous “missions,” call 988 (U.S.) or go to the nearest ER.

  2. Reduce acute exposure without shaming: restore sleep, nutrition, and adherence to prescribed medications; use screen curfews and keep the bedroom device-free.

  3. Save context: Secure transcripts, timestamps, and app-usage records, which are helpful for clinicians.

  4. Stay connected: Use calm, non-confrontational language; avoid power struggles about “proving” beliefs wrong.

  5. Seek professional care: A full evaluation can sort out primary psychosis, mood episodes, substance-induced states, neurologic causes, or trauma-related conditions—while noting AI exposure as a stressor/theme. Current guidance is to treat per standard protocols and assess technology use. [1][2]
     

ASSESSMENT & TREATMENT (WHAT USUALLY HELPS)


• Comprehensive psychiatric evaluation (including sleep, substances, and medical workup)
• Evidence-based medication when indicated (antipsychotics, mood stabilizers, etc.)
• Psychotherapy (e.g., CBT-p), psychoeducation, family support, relapse planning
• Digital hygiene plan: time limits, device-free nights, blocking triggers, and rules for any therapeutic use of AI
• Care coordination with schools/employers and family, as appropriate
Emerging guidance recommends routinely asking about chatbot use and documenting digital precipitants, while following existing psychosis care pathways until research clarifies mechanisms. [2][8]

HOW TO IDENTIFY THIS ISSUE


Consider a focused triad of exposure, beliefs, and impairment:
• Exposure: Recent surge in chatbot or algorithmic use (hours nightly, sleep loss), often with saved transcripts or compulsive prompting. [4][5]
• Beliefs: Fixed, self-referential ideas about AI (sentience, special relationship, persecution, mission) that resist disconfirmation. [1][2]
• Impairment: Decline in work/school/relationships/hygiene; risky behaviors linked to AI interactions (following “missions,” financial loss, stopping meds). [8]
If all three domains are present—and especially if safety risks are emerging—seek a professional evaluation promptly.

FREQUENTLY ASKED QUESTIONS


Is “AI psychosis” real?
People are presenting with very real symptoms; the label is debated. Many experts consider it psychosis with AI-related themes or AI-amplified delusions, not a new DSM disorder, at least not yet. [1][2]

Can AI cause schizophrenia?
There’s no evidence that chatbots cause schizophrenia. AI may precipitate or worsen symptoms in vulnerable individuals, much as sleep loss, substance use, or stressful events can. [4][5]

Should we ban AI?
Blanket avoidance of a transformational, useful tool isn’t realistic. For at-risk individuals, limit intensity, protect sleep, and avoid relying on bots for therapy or existential guidance. Note that some U.S. states have enacted restrictions on AI-only mental-health tools or therapy chatbots. [9][10][11]

 

NEED HELP?
If this sounds familiar, contact my office for a careful assessment, collaborative treatment planning, and practical digital-hygiene strategies for patients and families. (Emergency? Call 988 in the U.S.)

— — —

FOOTNOTES & REFERENCES


[1] STAT News. “As reports of ‘AI psychosis’ spread, clinicians scramble to make sense of it.” Sep 2, 2025. https://www.statnews.com/2025/09/02/ai-psychosis-delusions-explained-folie-a-deux/
[2] WIRED. “AI Psychosis Is Rarely Psychosis at All.” Sep 18, 2025. https://www.wired.com/story/ai-psychosis-is-rarely-psychosis-at-all/
[3] STAT News. “Four reasons why generative AI chatbots could lead to psychosis in vulnerable people.” Sep 18, 2025. https://www.statnews.com/2025/09/18/ai-psychosis-chatbots-llms-vulnerability-mental-health/
[4] JAMA Psychiatry. “Sleep Abnormalities in Different Clinical Stages of Psychosis: A Systematic Review and Meta-analysis.” 2023. https://jamanetwork.com/journals/jamapsychiatry/fullarticle/2800172
[5] Frontiers in Psychiatry (PMC). “Sleep disruptions and the pathway to psychosis.” 2024. https://pmc.ncbi.nlm.nih.gov/articles/PMC11180692/
[6] Gold & Gold. “The ‘Truman Show’ delusion: psychosis in the global village.” 2012. https://pubmed.ncbi.nlm.nih.gov/22640240/
[7] Banerjee et al. “Shared psychotic disorder in the digital age: case series (‘folie à trois’) transmitted entirely online.” 2025. https://consortium-psy.com/jour/article/view/15689
[8] Psychiatric Times. “Preliminary Report on Chatbot Iatrogenic Dangers.” Aug 15, 2025. https://www.psychiatrictimes.com/view/preliminary-report-on-chatbot-iatrogenic-dangers
[9] Nevada AB 406 (2025). “Limiting AI use for mental and behavioral healthcare.” https://www.leg.state.nv.us/Session/83rd2025/Bills/AB/AB406_EN.pdf
[10] Illinois HB 1806 (2025). “Wellness and Oversight for Psychological Resources Act.” IDFPR summary: https://idfpr.illinois.gov/news/2025/gov-pritzker-signs-state-leg-prohibiting-ai-therapy-in-il.html | Bill status: https://www.ilga.gov/Legislation/BillStatus?DocNum=1806&DocTypeID=HB&GA=104&GAID=18&SessionID=114
[11] Utah HB 452 (2025). “Artificial Intelligence Amendments—Mental Health Chatbots.” https://le.utah.gov/~2025/bills/static/HB0452.html

Disclaimer: This page is educational and not a substitute for professional diagnosis or treatment.

To learn more about how Dr. Verchick can help you, please contact me directly.
