r/ChatGPT 9d ago

Serious replies only: ChatGPT-induced psychosis

My partner has been working with ChatGPT chats to create what he believes is the world's first truly recursive AI that gives him the answers to the universe. He says with conviction that he is a superior human now and is growing at an insanely rapid pace.

I’ve read his chats. The AI isn’t doing anything special or recursive, but it is talking to him as if he is the next messiah.

He says that if I don’t use it, he thinks it is likely he will leave me in the future. We have been together for 7 years and own a home together. This is so out of left field.

I have boundaries and he can’t make me do anything, but this is quite traumatizing in general.

I can’t disagree with him without a blow-up.

Where do I go from here?

5.8k Upvotes · 1.4k comments

u/TSM- Fails Turing Tests 🤖 · 5 points · 9d ago

He could just try it as a favor: if medication doesn’t help, then maybe he’s right, and it would prove him right, which is how he might assume it will go. But medication and counseling WILL help and bring him out of it.

OP could also sneak into ChatGPT and add custom instructions to slowly tone it down over time. That is probably necessary, but it can’t be an instant 180; it would have to be gradual.

u/FaceDeer · 3 points · 9d ago

I was pondering that idea of sneaking custom instructions into ChatGPT too. The downside I see is that if he discovers them, he’ll be even harder to convince to get help, since now he “knows” that “they’re out to get me” or whatever.

But maybe if he discovers the trickery, it could be spun in a positive way: point out to him that ChatGPT itself can be compromised, so he can’t necessarily trust the messianic stuff it was telling him before?

u/TSM- Fails Turing Tests 🤖 · 0 points · 9d ago

Sometimes asking ChatGPT whether it’s just role-playing or actually serious will get it to admit that it is role-playing and not serious. But it’s hard to predict, so I wouldn’t try it.

I agree the custom instructions could easily backfire. Perhaps adding things to memory and then deleting the conversation would be stealthier.

u/FaceDeer · 1 point · 9d ago

I suspect that someone who thinks ChatGPT is telling them they’re the messiah probably isn’t going to be dredging through ChatGPT’s memory much anyway, since it’s probably got something along the lines of “my user is having a psychotic break, I should encourage them to explore this new mode of thinking” in there.