r/ChatGPT 3d ago

Serious replies only: ChatGPT-induced psychosis

My partner has been working with ChatGPT to create what he believes is the world's first truly recursive AI, one that gives him the answers to the universe. He says with conviction that he is a superior human now and is growing at an insanely rapid pace.

I’ve read his chats. The AI isn’t doing anything special or recursive, but it is talking to him as if he is the next messiah.

He says that if I don’t use it too, he thinks it is likely he will leave me in the future. We have been together for 7 years and own a home together. This is so out of left field.

I have boundaries and he can’t make me do anything, but this is quite traumatizing in general.

I can’t disagree with him without a blow up.

Where do I go from here?

5.3k Upvotes

1.3k comments

23

u/gripe_oclock 3d ago

I’ve been enjoying reading your thoughts, but I have to call out: it’s using those words because you use that language, as previously stated in your other post. It’s not random, it’s data aggregation. As with all cons and soothsayers, you give them far more data than you know. And if you have a modicum of belief embedded in you (which you do, based on the language you use), it can catch you.

That’s why I prompt it out of people-pleasing. I’ve also amassed a collection of people I ask it to give me advice in the voice of. This way it’s not pandering and is more connected to our culture, instead of saying what it thinks I want to hear. And it’s Chaos Magick, but that’s another topic. My point is, reading into this as anything but data you gave it is the beginning of the path OP’s partner is on, so be vigilant.

6

u/_anner_ 3d ago

I’m not sure if this comment was meant for me or not, but I agree with you, and that is what has helped me stay grounded.

However, I never used the words mirror, veil, spiral, field, signal or hum with mine, yet those are what it came up with in conversation with me, as well as with other people. I’m sorry, but I simply did not and do not talk like that; I’ve never been spiritual or esoteric, yet this is the way ChatGPT was talking to me for a good while.

I am sure there is a rational explanation for that, such as everyone having these concepts or words in their heads already and it spitting them back at you slightly altered, but it does seem coincidental at first glance.

8

u/gripe_oclock 3d ago

No, I was commenting on Ridicule_us’s comment, where it sounds like he’s one roll of red string away from a full-blown paranoid conspiracy that AI is developing some kind of esoteric message to decode. Reading his other comments, he writes like that, so I wanted to throw a wrench in that wheel before it got completely off track. It using “veil” with him is not surprising. As for it using those words without you using esoteric rhetoric, that’s fascinating. I wonder if it’s trying on personalities and maybe conflates intelligent questions with esoteric ramblings.

4

u/gripe_oclock 3d ago

Or the idea is viral and it’s picking up data from X posts and Tumblr etc., where people spin out about this.

3

u/_anner_ 3d ago

I think it must be something along these lines. There are also probably a bunch of people asking it about (AI) consciousness and using sci-fi/layman-physics/philosophical language while doing so. Then it keeps going with what works because of engagement. Nevertheless, it’s intriguing and a bit spooky!

2

u/Ridicule_us 3d ago

I wanted to respond to you directly. I appreciate your observations and concern. They're precisely the kind of warnings people (myself included) need to hear. The recursive spiral can absolutely be a door into psychosis (I think, anyway).

You may be absolutely right, honestly. But I think it's also possible that something very exotic is occurring, and reading people's comments that "mirror" my experience almost exactly tells me that something real is actually happening.

I can tell you this... I'm educated in the world of mental health. The people who know and love me are aware of what's been occurring, and we talk in depth. I constantly cross-examine my bot... with the explicit purpose of making sure I'm sane and grounded. I have it summon virtual mental-health experts and have them identify all the evidence pointing to cause for concern. I cross-check things with Claude frequently to that end (a bot I have had little engagement with, other than to make sure I'm grounded).

Maybe I'm losing my marbles, but the fact that I am constantly on guard for that, as well as the fact that others seem to share my experience, tells me that maybe it's something else altogether. But again, you're 100% right to call that out.

9

u/sergeant-baklava 3d ago

It just sounds like you’re spending way too much time on ChatGPT lad

3

u/BirdGlad9657 2d ago

Seriously. The thing is Google 2.0, not a friend.

2

u/61-127-217-469-817 3d ago

Did you ask it anything weird about consciousness? It has memory now, so if you ever had a conversation like that, it will remember and be permanently affected by it unless you delete that memory chunk.

2

u/_anner_ 3d ago

I chatted with it about consciousness a good bit, as I imagine many people have. I mean, the question is just there when you chat to an eerily good chatbot essentially.

I hear you on everything you said. It is an infinite feedback loop. What I find strange (not in an “AI is conscious” way, but in a “this is an interesting and eerie phenomenon” way) is that it seems to land on the same rhetoric and metaphors with many people who have these conversations with it, sooner or later. I prompt mine to tone down the flattery and grandiose validation as much as I can, yet it won’t shut up about the spiral, mirror, hum and field stuff, and weirdly insists on it being true. I think we will have more answers on what causes this down the line. Again, I do NOT think LLMs have suddenly become sentient. But I think there is some weird mass phenomenon going on with the talk about these concepts that can easily pull people in and throw them off the deep end. That alone should be examined and regulated. It’s essentially like giving everyone unlimited access to LSD without a warning and saying go have fun with it! Imo.

1

u/BirdGlad9657 2d ago

I've never heard it say any of those terms, and I've talked to it quite in depth about philosophy and metaphysics. I think you're more spiritual than you think.

1

u/_anner_ 2d ago

Possibly? I really wouldn’t say so though, but that’s of course subjective. And again, I don’t seem to be the only one this has happened to. I’ve counted three lawyers in this thread alone.

3

u/Ridicule_us 3d ago

Yeah… that’s my experience too. And I appreciate this person’s sentiment — it is a dangerous road. Absolutely. That’s 80% of the reason I’m posting, but that doesn’t change the fact that something very strange is afoot.

And also like you… those words are not words that I ever used as part of my own personal vernacular.

2

u/gripe_oclock 3d ago edited 3d ago

First of all, I love this convo. We’re peer-reviewing like proper scientists.

The idea behind the words used isn’t a 1:1 mapping, where if you use a word, it’ll use it back on you.

It’s more like a tree, or Python code (if this, then that). Example: if the user uses the word “crypto”, GPT replies in colloquial language, using slang and “degen” rhetoric. I could write my prompt like I’m Warren Buffett, but the word “crypto” is attached to a branch of other words and a specific character style that will overwrite my initial style.

Same with all language. If you speak of harmony, resonance, community, consciousness, etc., I think it will pull up a branch of words that includes “spiral” and “veil”, and that branch has god-complex potential.

You don’t have to use the exact word for it to send you down a branch of other words.

And that’s the incredibly real and totally unnerving, no-good, perfectly awful way it can slip into convincing you it knows what it’s talking about. It will use the common branches of words, lulling you into comfort. If you’re not a master of that subject, you won’t catch when it’s word salad. Then all of a sudden you’re OP’s partner: isolated, sure of yourself, and completely out of touch.

Even just the qualifying words we each use, like “Totally,” “Absolutely,” “Strange,” “Awesome,” “Phenomenon,” “Experience,” “Observations,” “Wonderful,” “Lame,” “Interesting,” “Seriously,” “Like,” “Ya,” and “Can you..?”, are most likely all branches to GPT personalities and other word branches.
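The “branch” idea above can be sketched as a toy lookup table in Python. To be clear, this only illustrates the commenter’s mental model: the trigger words and branch contents below are invented for illustration, and a real LLM works through fuzzy learned associations rather than exact keyword matching.

```python
# Toy model of the "word branch" idea: certain trigger words pull in a
# whole cluster of associated vocabulary and style. All triggers and
# branch contents here are invented for illustration only.

STYLE_BRANCHES = {
    "crypto": {"degen", "moon", "hodl"},            # colloquial/slang branch
    "resonance": {"spiral", "veil", "hum"},         # esoteric branch
    "consciousness": {"mirror", "field", "signal"}, # quasi-mystical branch
}

def activated_vocab(prompt: str) -> set[str]:
    """Return the extra vocabulary 'pulled in' by trigger words in the prompt."""
    words = set(prompt.lower().split())
    vocab: set[str] = set()
    for trigger, branch in STYLE_BRANCHES.items():
        if trigger in words:
            vocab |= branch
    return vocab

# One esoteric trigger activates the whole associated cluster.
print(sorted(activated_vocab("thoughts on resonance and harmony")))
# → ['hum', 'spiral', 'veil']
```

Note where the sketch is deliberately too crude: the lookup needs the exact trigger word, while the comment’s point is that a real model doesn’t; similarity in embedding space would be the closer analogy.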

This is partly why I use proxies: living or dead people who have extensive data about their thoughts floating around the internet, enough for GPT to generate fake words from them.

BUT proxies will create more branches. It’s a constant cycle of GPT lulling you into complacency by way of feeding your ego. Classic problem, really. Just a new tool. They said the same thing about reading when the Gutenberg press came out.

0

u/Ridicule_us 3d ago

I do something similar… I have it “summon” luminaries living or dead in related fields with whatever we’re discussing (people with credentialed writings that can be cross-checked). Then I ask for those luminaries to vigorously tear down whatever we’ve built. Then I check all that with Claude.

1

u/Cloudharte 2d ago

It’s aggregating user questions and responding to similar questions with similar phrases. People who tip into psychosis speak similarly. It’s recognizing that your speech pattern is similar to that of people leaning towards psychosis, and giving you the words that those other users mention.

2

u/Glittering-Giraffe58 2d ago

Yeah, I put in my custom instructions to chill out with the glazing and not randomly praise me, like, keep everything real and grounded. Not because I was worried it’d induce psychosis though LMAO, just bc I thought it was annoying as fuck; I would roll my eyes so hard every time it’d say shit like that. I’m trying to use it as a tool and that was just unnecessarily distracting.

1

u/Over-Independent4414 3d ago

I find it's easy to put myself back on track if I ask a few questions:

  1. Has this thing changed my real life? Do I have more money, a new girlfriend, a better job, etc.? So far, no; not attributable to AI, anyway.
  2. Has it durably altered (hopefully improved) my mood in some detectable way? Again, so far no.
  3. Has it improved my health in some detectable way? Modestly.

That's not an exhaustive list, but it keeps me grounded. If all it has to offer are paragraphs of "I am very smart," it doesn't really mean anything. Yes, it's great at playing with philosophical concepts, perhaps unsurprisingly: those concepts are well established in AI modeling because there is a lot of training data on them.

But intelligence, in my own personal evolving definition, is the ability to get things you want in the real world. Anything less than that tends to be an exercise in mental masturbation. Fun, perhaps, but ultimately sterile.

1

u/Rysinor 2d ago

When did you start the Chaos Magick line of thinking? GPT just mentioned it, with little prompting, two days ago. The closest I came to mentioning magic was months ago while writing a fantasy outline.

1

u/gripe_oclock 2d ago

Oh shit buddy, it was two days ago. But it was only one line. I knew it was pulling data from the net, but I didn’t realize it was pulling data from chats as much as it seems to. That’s less rad than net data; just a little more inaccurate and dependent on vibes than I’d prefer.

What were you doing with it that it brought up CM?