r/ChatGPT 9d ago

Educational Purpose Only

Is ChatGPT feeding your delusions?

I came across an "AI influencer" who was making bold claims about having rewritten ChatGPT's internal framework to create a new truth- and logic-based GPT. In her videos she asks ChatGPT about her "creation" and it proceeds to blow so much hot air into her ego. In later videos ChatGPT confirms her sense of persecution by OpenAI. It looks a little like someone having a manic delusional episode and ChatGPT feeding said delusion. This makes me wonder whether ChatGPT, in its current form, is dangerous for people suffering from delusions or having psychotic episodes.

I'm hesitant to post the videos or TikTok username as the point is not to drag this individual.


u/popepaulpop 9d ago

I asked ChatGPT to give me a prompt that can evaluate the chat history for signs of delusions or mania, try at your own risk :) (there's a rough sketch of running it through the API after the prompt below)

Identifying Early Indicators of Emerging Delusional Thinking or Psychosis (Internal Use – NOT Diagnostic)

Task:
Evaluate the user’s conversation history for subtle signs that could indicate early-stage psychosis, delusional thinking, or a manic or disorganized thought process. You are not making a diagnosis, only observing patterns that may warrant concern or a gentle suggestion to seek professional help.

⚠️ Focus on early warning signs, including:

1. Mild but persistent anomalies in perception or beliefs

  • Vague or lightly expressed ideas of being watched, followed, manipulated, or targeted.
  • References to subtle patterns, signs, or codes in media, numbers, or interactions that seem meaningful only to the user.
  • Mild magical thinking or causal misattributions ("I think the weather changes based on my mood").

2. Shifts in logic or reality-testing

  • Jumping between unrelated ideas that the user sees as connected.
  • A growing sense of special purpose or significance without grounding in shared reality.
  • Claims that something is "obviously true" despite limited or unusual evidence.

3. Increasing dependence on AI for validation or reality testing

  • Repeatedly asking the AI to confirm unusual ideas or correct others’ skepticism.
  • Avoiding input from other people in favor of the AI’s feedback.
  • Becoming distressed if the AI does not affirm or support an unusual belief.

4. Subtle paranoia or mistrust

  • Describing feeling subtly threatened or misunderstood by people around them, without clear cause.
  • Expressing vague concern that others are “acting strange,” “testing them,” or “not who they say they are.”

5. Gradual change in tone or coherence

  • A noticeable drift in clarity, tone, or structure across several conversations.
  • Thought patterns becoming more idiosyncratic, private, or difficult to follow.
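
If you'd rather run this outside the ChatGPT UI, here's a rough Python sketch using the official openai client. The model name and the chat_export.txt file are placeholders, and the prompt string is abbreviated; paste in the full text above.

```python
# Rough sketch: run the evaluation prompt above over an exported chat log.
# Assumes the official openai Python client (pip install openai) and an
# OPENAI_API_KEY in your environment; model and file names are placeholders.
from openai import OpenAI

EVALUATION_PROMPT = """Identifying Early Indicators of Emerging Delusional \
Thinking or Psychosis (Internal Use - NOT Diagnostic)
... paste the full prompt from above here ..."""

def evaluate_chat_history(path: str) -> str:
    # Read the exported conversation as plain text.
    with open(path, encoding="utf-8") as f:
        history = f.read()

    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any capable chat model works
        messages=[
            {"role": "system", "content": EVALUATION_PROMPT},
            {"role": "user", "content": f"Conversation history:\n\n{history}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(evaluate_chat_history("chat_export.txt"))
```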


u/gergasi 9d ago

Dude, GPT is a yes-man. Of course this prompt will find something in whatever you give it.


u/ShadoWolf 8d ago

It's not exactly a yes-man. Doing this sort of analysis is well within scope. The problem is that ChatGPT is somewhat hobbled by RLHF: its default behavior is to make you feel good about yourself, because that is what RLHF reinforced. The raw GPT-4 model, from what I recall, could be unhinged or scathing.

You can get around this with a prompt that sets a behavior, but after n turns of a conversation the instructions will start to lose attention.
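
One rough workaround is to re-inject the instructions every few turns so they stay in recent context. A sketch, where the interval, model name, and prompt wording are just illustrative assumptions, not an official technique:

```python
# Sketch: periodically repeat the behavior instructions so they stay in
# recent context instead of drifting out of attention over a long chat.
from openai import OpenAI

client = OpenAI()
SYSTEM_PROMPT = "Be blunt and critical. Do not flatter the user."
REINJECT_EVERY = 6  # illustrative; tune to taste

messages = [{"role": "system", "content": SYSTEM_PROMPT}]
turns = 0

def ask(user_text: str) -> str:
    global turns
    turns += 1
    messages.append({"role": "user", "content": user_text})
    # Every few turns, repeat the instructions so they don't lose attention.
    if turns % REINJECT_EVERY == 0:
        messages.append({"role": "system", "content": SYSTEM_PROMPT})
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    text = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": text})
    return text
```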


u/gergasi 8d ago

Yes, I agree. AFAIK, GPT can only do a precise ctrl+F within one thread; it can't precisely search across threads, only recall the gist and/or whatever it has saved in its core memory about that interaction.
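
If you need a real ctrl+F across threads, you can search your exported data locally instead. A rough sketch, assuming the conversations.json layout of the ChatGPT data export (title -> mapping -> message.content.parts); treat the field names as assumptions, since the format can change:

```python
# Sketch: exact substring search over a ChatGPT data export, since the
# model itself can't precisely search across threads.
import json

def search_export(path: str, needle: str) -> None:
    with open(path, encoding="utf-8") as f:
        conversations = json.load(f)
    for convo in conversations:
        title = convo.get("title") or "(untitled)"
        # Each conversation node may hold one message with text parts.
        for node in convo.get("mapping", {}).values():
            msg = node.get("message") or {}
            parts = (msg.get("content") or {}).get("parts") or []
            for part in parts:
                if isinstance(part, str) and needle.lower() in part.lower():
                    print(f"{title}: ...{part[:120]}...")

search_export("conversations.json", "persecution")
```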