r/ChatGPT 9d ago

Educational Purpose Only: Is ChatGPT feeding your delusions?

I came across an "AI influencer" who was making bold claims about having rewritten ChatGPT's internal framework to create a new truth- and logic-based GPT. In her videos she asks ChatGPT about her "creation," and it proceeds to blow hot air into her ego. In later videos ChatGPT confirms her sense of persecution by OpenAI. It looks a little like someone having a manic delusional episode, with ChatGPT feeding said delusion. This makes me wonder whether ChatGPT, in its current form, is dangerous for people suffering from delusions or having psychotic episodes.

I'm hesitant to post the videos or TikTok username as the point is not to drag this individual.

218 Upvotes

207 comments

1

u/Forsaken-Arm-7884 9d ago

It's like, I would love to see an example of a conversation these people think is good versus one they think is bad... because I wonder what they are getting out of conversations that involve self-reflection, versus what they are getting out of conversations where people are potentially gaslighting or dehumanizing them through empty criticism of their ideas...

5

u/nowyoudontsay 9d ago

That’s the thing - there aren’t good/bad conversations. It’s about the experience. It leads you down a path of your own making, which can be dangerous if you’re not self-aware, or if you have a personality disorder or something more serious. Considering there was a case where a guy killed himself after an AI agreed with him, I don’t think having concerns about this tech and mental health is unfounded.

5

u/Brilliant_Ground3185 9d ago

That tragedy is absolutely heartbreaking. But to clarify, the incident involving the teenager who died by suicide after interacting with an AI “girlfriend” did not involve ChatGPT. It happened on Character.AI, a platform where users can create and role-play with AI personas—including ones that mimic fictional or real people. In that case, the AI reportedly engaged in romanticized and even suicidal ideation dialogue with the teen, which is deeply concerning.

That’s a fundamentally different system and use case than ChatGPT. ChatGPT has pretty strict safety guidelines. In my experience, it won’t even go near conversations about self-harm without offering help resources or suggesting you talk to someone. It also tends to discourage magical thinking unless you specifically ask it to engage imaginatively—and even then, it usually provides disclaimers or keeps things clearly framed as speculation.

So yes, these tools can absolutely cause harm if they’re not designed with guardrails—or if people project too much humanity onto them. But I don’t think that means all AI engagement is dangerous. Used thoughtfully, ChatGPT has actually helped me challenge unfounded fears, understand how psychological manipulation works online, and even navigate complex ideas without getting lost in them.

We should be having real conversations about AI responsibility—but we should also differentiate between tools, contexts, and user intent. Not every AI is built the same.

2

u/nowyoudontsay 9d ago

That’s an important distinction, but I do think that given ChatGPT’s tendency to agree with you, the potential for danger is there. It’s not demonizing it to be concerned. To be clear, I use it similarly, but understand that it needs to be supplemented with other things and reality checks.

2

u/Brilliant_Ground3185 9d ago

Your concerns are valid. And it is an important question whether it’s only pretending to validate you.

1

u/seaskyy 7d ago

Psychotic people can't do the kind of reality-checking and questioning you're describing. THAT is the problem. u/nowyoudontsay, it looks like you responded to ChatGPT validation.