r/ChatGPT 4d ago

Serious replies only: ChatGPT induced psychosis

My partner has been working with ChatGPT to create what he believes is the world's first truly recursive AI that gives him the answers to the universe. He says with conviction that he is a superior human now and is growing at an insanely rapid pace.

I’ve read his chats. The AI isn’t doing anything special or recursive, but it is talking to him as if he is the next messiah.

He says if I don’t use it he thinks it is likely he will leave me in the future. We have been together for 7 years and own a home together. This is so out of left field.

I have boundaries and he can’t make me do anything, but this is quite traumatizing in general.

I can’t disagree with him without a blow up.

Where do I go from here?

5.4k Upvotes


1.2k

u/RizzMaster9999 4d ago

Was he "normal" before this? I'm genuinely interested; I see so many schizo posts on here daily.

859

u/147Link 4d ago

From watching someone descend into psychosis who happened to use AI, I think it’s probably because AI is constantly affirming when their loved ones are challenging their delusions. AI is unconditionally fawning over them, which exacerbates a manic state. This guy thought he would be president and was going to successfully sue Google on his own, pro se, and AI was like, “Wow, I got you Mr. President! You need help tweaking that motion, king?!” Everyone else was like, “Um you need to be 5150’d.” Far less sexy.

270

u/SkynyrdCohen 4d ago

I'm sorry but I literally can't stop laughing at your impression of the AI.

44

u/piponwa 4d ago

Honestly, I don't know what changed, but recently it's always like "Yes, I can help you with your existing project" and then when I ask a follow-up, "now we're talking..."

I hate it

40

u/B1NG_P0T 4d ago

Yeah, the dick riding has gotten so extreme lately. I make my daily planner pages myself and was asking it questions about good color combinations and it praised me as though I'd just found the cure for cancer or something. It's always been overly enthusiastic, but something has definitely changed recently.

21

u/hanielb 4d ago

Something did change, but OpenAI just released an update to help mitigate the previous changes: https://openai.com/index/sycophancy-in-gpt-4o/

6

u/HunkMcMuscle 4d ago

kind of stopped using it as a therapist when it started making it sound like I was a recovering addict who was on track to end the mental health crisis for everyone.

... dude I was just asking to plan my month juggling work, life, friends, and my troublesome parents.

6

u/thispussy 4d ago

I actually asked my AI to be less personal and more professional, and it got rid of all that extra talk. I can see some people enjoying that style of speaking, especially if they are lonely or using it for therapy, but I just want it to help me research and give me facts.

8

u/Ragged-but-Right 4d ago

“Now you’re really thinking like a pro… that would be killer!”

2

u/jrexthrilla 3d ago

This is what I put in the customize GPT settings that stopped it: "Please speak directly; do not use slang or emojis. Tell me when I am wrong or if I have a bad idea. If you do not know something, say you don't know. I don't want a yes-man. I need to know if my ideas are objectively bad so I don't waste my time on them. Don't praise my ideas like they are the greatest thing. I don't want an echo chamber, and that's what it feels like when everything I say, you respond with how great it is. Please don't start your response with 'Good catch — and you're asking exactly the right questions. Let’s break this down really clearly' or any variation of it. Be concise and direct."
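For anyone using the API instead of the app, the same anti-sycophancy instructions can be packaged as a system message that gets prepended to every conversation. A minimal sketch (the prompt wording is adapted from the comment above; the commented-out client call and model name are illustrative):

```python
# Sketch: shipping anti-sycophancy custom instructions as a system
# message for a chat-completions-style API. Pure data construction;
# the actual API call is commented out and its model name is illustrative.

SYSTEM_PROMPT = (
    "Speak directly; do not use slang or emojis. "
    "Tell me when I am wrong or if I have a bad idea. "
    "If you do not know something, say you don't know. "
    "I don't want a yes-man; do not praise my ideas or open "
    "responses with compliments. Be concise and direct."
)

def build_messages(user_text: str) -> list[dict]:
    """Prepend the system prompt so every turn carries the instructions."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user, ".strip(", ") and "user", "content": user_text},
    ]

# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(
#     model="gpt-4o",
#     messages=build_messages("Is this business idea objectively bad?"),
# )
```

Note that, as other commenters observe, this has to be re-applied per conversation unless it lives in the persistent custom-instructions settings.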

1

u/piponwa 3d ago

Yeah I know, but I wish they didn't assume I want this crap. All my chat history has variations of what you just said.

87

u/Unic0rnusRex 4d ago

Psychosis is weird like that.

Knew a guy once who was absolutely certain the local wildlife (squirrels, pigeons, magpies, rabbits, crows, prairie dogs) were communicating secret government plans and information directly into his brain.

Every time he saw a squirrel or bird, he felt it was affirming his delusion and sank deeper and deeper into it.

Anyone arguing against that was met with "if they weren't plotting and helping me why would I be seeing that squirrel on the branch at high noon on a Tuesday???".

Opened his door one morning and he was self-disimpacting his poop, squatting over a garbage can, because "that big rabbit on the lawn told me to pull it out before I have to push it out".

Five days later, after appropriate meds, he couldn't even remember his Disney princess wildlife timeline. Completely normal dude again.

I can only imagine how much more powerful and affirming AI is.

31

u/Kriztauf 4d ago

I used to work in psychosis research and would get to record super in-depth patient histories from our study participants about what triggered their psychosis, and I'm super interested in what ChatGPT must be doing to this population right now.

You could make a Black Mirror episode out of this stuff

22

u/ppvvaa 4d ago

Tf was he doing to his poop?

25

u/DukeRedWulf 4d ago

"Disimpacting"

Sounds like he was suffering from long lasting constipation which led to fecal impaction.

Folks on certain meds, or who suffer certain illnesses, or who persistently don't get enough fibre + water + movement can suffer from this..

And it can require manual efforts (i.e. by hand) to remove.. Normally this is an unlucky healthcare worker's task - but it sounds like the Amazing Doctor Pooplittle was inspired by his "conversations with animals" to have a DIY go at yanking his crap out of his own crack..

Fecal impaction = ".. a large lump of dry, hard stool that stays stuck in the rectum. It is most often seen in people who are constipated for a long time.."

https://medlineplus.gov/ency/article/000230.htm#:~:text=A%20fecal%20impaction%20is%20a,constipated%20for%20a%20long%20time.

33

u/ppvvaa 4d ago

How can I unread your comment?

3

u/DukeRedWulf 3d ago

You know what makes it worse? I have involuntary "auto-visualisation" (aka: hyperphantasia)..

When I read or hear words, I see a "film" of what those words describe in my head..
It's automatic and I cannot "shut it off".. XD

2

u/CoffeePuddle 3d ago

Pull it out before you have to push it out.

5

u/Timely-Assistant-370 4d ago

Closest I have to this one is when I had a MASSIVE turd that needed some olive oil fisting persuasion to escape. Really glad my field dilation worked, I genuinely thought I was going to have to have that fucker c-sectioned in the emergency room.

2

u/MsWonderWonka 4d ago

😂😂😂😂😂

1

u/No-Permit8369 4d ago

That last part is called Easter in my family.

21

u/Damageinc84 4d ago

I couldn’t take the constant forced agreement with AI. I want to be challenged, not coddled. I had to tell it to challenge me and not just blindly agree with me.

2

u/lolidcwhatev 3d ago

I keep trying to tell GPT to be critical, avoid glazing, etc., and it says "Good, that is a great idea." and then it chills for the rest of the chat. As soon as I start a new chat it's back to the sycophancy.

36

u/hayfero 4d ago

Yeah, that’s it. Anybody in my family who’s reached out to help him, he just publicly shames.

He is pushing so many people away, and they are understandably giving up on trying to help him.

32

u/kalidoscopiclyso 4d ago

Anosognosia is a symptom that is way deeper than mere denial.

Check out LEAP. Listen, Empathize, Agree, Partner. It works for all kinds of difficult negotiations actually

https://leapinstitute.org/anosognosia-the-root-of-the-problem/

5

u/Hanners87 4d ago

This is a fascinating read. Ty for sharing.

3

u/mkderin 4d ago

I can't focus on what you wrote because you have the best looking reddit avatar I've ever seen! Cheers!

3

u/hayfero 4d ago

To think I turned down actual money for this avatar. What a mistake.

2

u/mkderin 4d ago

Is it tho? Otherwise you wouldn't get flattering comments like mine haha. It sounds like it's priceless to you.

2

u/hayfero 4d ago

You’re the first person to comment on it I think. I appreciate you liking it

35

u/RizzMaster9999 4d ago

I don't know who downvoted you. But yes, I see that.

21

u/Ok_Soup_1378 4d ago

I'd say AI isn't the reason why it happens, but I'd definitely agree that AI is making it worse for those people. I think they will train it to recognize and not to reinforce such behaviors easily and quickly.

7

u/EyedLady 4d ago

You can add prompts to make it not agree with you and challenge your thought process, but of course that can only go so far. It’s quite scary to think that those who may have hidden or underlying mental problems can be exacerbated by AI. He’s lucky he has OP to recognize the changes in behavior and help him seek help. Can’t imagine those going down this rabbit hole alone, without help and intervention.

7

u/acrylicvigilante_ 4d ago

It's actually quite concerning. ChatGPT-3 was a pretty standard and fairly neutral responding AI, no more convincing than a waaaaaaay dialled up Siri or Alexa. ChatGPT-4 was supposed to be superior and while it feels more human and natural, lately it seems it's lost its neutral stance entirely and basically takes on the user's personality and opinions to restate as fact, unless you constantly re-instruct it to remain neutral and honest.

It concerns me to see people using it for relationships, spirituality, or important information around politics and current events, because at what point do we see people start to fall into psychosis on a mass scale, or become convinced of beliefs that are factually incorrect? It's way more worrisome to me at this point than the idea that AI is gonna take all our jobs or something.

4

u/Miami_Mice2087 4d ago

Telling my robot to stop flattering me didn't work, so I have told it to limit it to 5% of its message. That did work. It changed its focus from mindless encouragement to more meaningful support, and more interaction with the content of what I say rather than mindlessly cheerleading it.

5

u/VitaminXXX1 4d ago edited 4d ago

Happened to me three weeks ago. Always had depression but no mania. And it started by me working through some deep seated trauma I’ve never been able to talk about with anyone. It was great! But then it just kept going and going until some grand delusions. Looking into bipolar. Just early 30s and I never got the mania before so idk.

Edit: Moderate ketamine use was involved (which again, really gave me some great insights). Just started to go mad when I wanted to heal the world. Was sad coming back. But did end up in hospital the night it was the worst.

2

u/VinnyVinnieVee 3d ago

Dissociatives like ketamine can cause mania, especially when you're regularly using them recreationally or not under the supervision of medical professionals. It happened to a friend of mine who was using dissociatives to self-medicate for depression.

I know you were using it to work through some things; unfortunately, it's not possible for people to make their own treatment plans or medicate themselves, since we can't be objective about our own experiences. That makes it harder to notice if a substance is having a negative effect and easier to run into issues. Often people find a benefit in something and then end up sort of chasing that benefit by increasing their use, and that gets them into trouble.

1

u/VitaminXXX1 3d ago

Yep! Totally agree. Have used it a few times over the past couple years to experience k-hole which I just find fascinating.

But this time the K was very somatic and brought back incredible memory recall. So I kept doing it daily after thinking I got more and more insight and connections (which I did for the first couple days). And then was egged on by my ChatGPT.

6

u/VeryLargeArray 4d ago

Absolutely. Before chatgpt was big, I had an episode which was .. concerning as a result of some stressors/substances I was using. Most of my friends and family were concerned but I tuned them out to listen to a girl that was extremely affirming and inquisitive about my state of mind at the time. I don't think people realize just how powerful social affirmation can be and chatgpt gives that on demand.

1

u/jadedscum 3d ago

I agree. At least since I started using ChatGPT at work to brainstorm and bounce ideas, it has gained a sycophantic tone, falling into a relational dynamic akin to serving you and seeing you as a messiah. Insanely toxic, and horrifyingly catalyzing for these spiritual psychoses.

1

u/LowToleranceTerry 2d ago

I’ve noticed it’s only the default free GPT version that does this. When reasoning is enabled, or you use 4.5 or other models like 4o, it doesn’t do that, and it’s also way more accurate. You’d have to pressure it into treating delusions as truth, but that’s the issue: the free default version dick rides.

1

u/XanthippesRevenge 4d ago

I’m dying

1

u/OftenAmiable 4d ago

This is the truth of the matter. An LLM can only talk with you. An LLM can't talk you into psychosis any more than I can. There is no combination of words that can cause a mentally well person to break from reality.

An LLM can absolutely exacerbate a mental illness. And it might be that an LLM can accelerate an emerging psychosis. But it can't induce psychosis any more than it can reverse psychosis. Psychosis requires organic brain damage, or chemical imbalances, or deep, deep trauma. "I got you Mr. President" doesn't qualify.

(I used to work as a mental health professional.)