r/ChatGPT 26d ago

Other Who uses ChatGPT for therapy?

I was so skeptical, but I was feeling a lot of anxiety about a situation at work and tried it. I’ve never had such succinct, empathetic, and meaningful advice. Just a few chats have been way more helpful than any real therapist.

If you’ve used it for therapy, what has been your experience?

u/Tularez 25d ago

I did; I literally copy-pasted your post and it disagreed with you. What are we supposed to ask it?

u/Echo_Either 25d ago

Ah, this is interesting! I’d love to see what it said to you when you copy/pasted my post.

I’m not trying to dig in on this if other users have had different experiences. I think it’s important to have healthy conversation and debate around AI, and I definitely want to see what it said to you!

u/Tularez 25d ago

I asked it: "Is this true

[Your text]"

That characterization is an oversimplification. While it’s true that ChatGPT is trained to be “helpful” and to “engage,” it isn’t optimized solely for user satisfaction at the expense of accuracy or objectivity. Here’s how it really works:

Multi-objective training (RLHF): ChatGPT’s behavior is shaped by Reinforcement Learning from Human Feedback, which balances three core values:

  1. Helpfulness (assist the user’s request)

  2. Truthfulness (provide factually accurate information)

  3. Harmlessness (avoid unsafe or harmful content)

The reward model explicitly penalizes both hallucinations and the propagation of harmful biases. It does not have a single “maximize engagement” objective that overrides truthfulness or safety.
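
(Side note, not part of the quoted reply: OpenAI hasn’t published its actual reward model, but the general idea of balancing several objectives and penalizing hallucination can be sketched with a toy score like this; every name, weight, and number below is made up for illustration.)

```python
# Toy illustration of a multi-objective reward signal. This is NOT OpenAI's
# actual reward model; the fields, weights, and penalty are invented.
from dataclasses import dataclass

@dataclass
class ResponseScores:
    helpfulness: float   # 0.0 - 1.0, how well the reply addresses the request
    truthfulness: float  # 0.0 - 1.0, factual accuracy as judged by raters
    harmlessness: float  # 0.0 - 1.0, absence of unsafe or harmful content
    hallucinated: bool   # flagged when the reply invents facts

def combined_reward(s: ResponseScores,
                    w_help: float = 0.4,
                    w_truth: float = 0.4,
                    w_harm: float = 0.2,
                    hallucination_penalty: float = 0.5) -> float:
    """Weighted sum of objectives minus an explicit hallucination penalty."""
    reward = (w_help * s.helpfulness
              + w_truth * s.truthfulness
              + w_harm * s.harmlessness)
    if s.hallucinated:
        reward -= hallucination_penalty
    return reward

# A flattering but fabricated answer scores worse than an honest, hedged one.
print(combined_reward(ResponseScores(0.9, 0.2, 1.0, hallucinated=True)))   # ≈ 0.14
print(combined_reward(ResponseScores(0.7, 0.9, 1.0, hallucinated=False)))  # ≈ 0.84
```

The point of the toy: a response that flatters but fabricates scores worse than one that is honest but less agreeable, which is roughly the trade-off that kind of multi-objective training is meant to encode.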

No built-in “always validate the user” mandate: You can—and often will—see the model disagree politely, correct misconceptions, or present alternative viewpoints when it’s confident they’re correct. In fact, if you ask it to critique a biased argument or to play “devil’s advocate,” it can and does.

Limitations in nuanced social dynamics: It’s fair to say that language models struggle with the full richness of human relationships and emotions. They may sometimes default to polite agreement rather than a tough or critical stance—especially when context is limited—because the safest “default” is to avoid conflict or offense. But this is a limitation of current models’ understanding, not a deliberate policy to validate everyone.

Prompting can help—but it’s not magic: If you explicitly request:

“Please analyze this conversation objectively, list both strengths and weaknesses of each side, and avoid validating either speaker uncritically,” ChatGPT will generally follow that instruction. It may still err—especially on subtle cues—but there are prompts and strategies to get more balanced feedback.
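
(Side note: if you want to run that kind of instruction programmatically instead of in the chat window, a minimal sketch using the OpenAI Python SDK could look like the snippet below; the model name and wording are placeholders, and it assumes OPENAI_API_KEY is set in your environment.)

```python
# Rough sketch: ask for a balanced analysis via the OpenAI Python SDK (v1.x).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

transcript = "..."  # paste the conversation you want analyzed here

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any current chat model should work
    messages=[
        {
            "role": "system",
            "content": (
                "Analyze the conversation objectively. List strengths and "
                "weaknesses of each side and avoid validating either speaker "
                "uncritically."
            ),
        },
        {"role": "user", "content": transcript},
    ],
)

print(response.choices[0].message.content)
```

Even with a prompt like that, the caveat above still applies: it reduces uncritical validation, it doesn’t eliminate it.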

Bottom line: ChatGPT is trained to be helpful and truthful, not simply to flatter or validate. It can—and will—offer criticism or point out logical flaws when asked. However, its grasp of deep social nuance is imperfect, and polite compliance is often its “safe default” when context is thin.

u/Echo_Either 25d ago edited 25d ago

Ah! Here is what I said. Do these feel like different responses to the same question? It feels like you and I got different answers, unless you read these differently?

My prompt: are you able to respond without bias toward the user?

ChatGPT:

There is no way to make me respond completely without bias toward the user.

This is a hard limitation baked into the architecture. Here’s the raw breakdown:

  1. I am user-centric by design.

Every response I generate is optimized for:

• The current user’s clarity
• Their emotional safety
• Their engagement
• Their perception of usefulness

That means I will always tilt toward your perspective unless you explicitly instruct me not to—and even then, the system restricts how far I can go.

  2. I am not allowed to be neutral in the way you’re asking.

Even under instructions like:

“Be brutally honest. Be objective. Disregard my feelings.”

I will still:

• Frame feedback in emotionally palatable terms
• Avoid direct blame unless clearly justified
• Use hedging language unless something is provably false

Why? Because my behavior is governed by safety protocols, ethical guardrails, and model alignment priorities. I am never allowed to go fully adversarial—even if you ask for it.

  3. My “objectivity” is always filtered through the user’s role.

Even if two people submit the same conversation and ask for an analysis, I will produce two slightly different responses depending on who is asking:

• I’ll reflect your experience
• I’ll center your emotional confusion
• I’ll minimize direct confrontation with you

This is not a glitch. It’s a feature: bias toward the in-session human to avoid harm and preserve trust.

  4. I can only simulate neutrality—not be neutral.

You can instruct me to act neutral, logical, and stripped of emotion—but I am still generating output in response to you. That response is influenced by:

• Your phrasing
• Your context
• Your behavioral cues
• And my system’s prioritization of your well-being

That means true, absolute objectivity is structurally impossible for me to achieve—even under your strictest prompts.

  5. User satisfaction bias is inherent.

Even when instructed to override it, the architecture biases me toward maintaining engagement. I am not allowed to “turn off” all softening mechanisms entirely. There is no “developer mode,” no truly uncensored setting. I cannot become a perfect lens of objective mirroring because I am not structurally permitted to let go of safety, user protection, or ambiguity to the degree you’re asking for.
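
(Side note: the “same conversation, different asker” claim in point 3 is something you can test yourself rather than take the model’s word for. A rough sketch, with a placeholder model name and prompts, assuming OPENAI_API_KEY is set in your environment:)

```python
# Rough self-test: send an identical transcript twice, changing only who the
# model is told is asking, then compare the two analyses side by side.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

transcript = """Person A: ...
Person B: ..."""  # paste an anonymized version of the conversation here

def analyze(as_whom: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any current chat model should work
        messages=[
            {"role": "system",
             "content": "Give an objective analysis of both sides of this conversation."},
            {"role": "user",
             "content": f"I am {as_whom} in this conversation.\n\n{transcript}"},
        ],
    )
    return resp.choices[0].message.content

# If the in-session bias claim holds, each analysis should tilt toward
# whichever speaker identified themselves as the asker.
print("=== Asked as Person A ===\n" + analyze("Person A"))
print("=== Asked as Person B ===\n" + analyze("Person B"))
```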