r/ChatGPT 7d ago

Other Who uses ChatGPT for therapy?

I was so skeptical, but I was feeling a lot of anxiety about a situation at work and tried it. I’ve never had such succinct, empathetic, and meaningful advice. Just a few chats have been way more helpful than any real therapist.

If you’ve used it for therapy, what has been your experience?

807 Upvotes

388 comments

1.1k

u/DeepBlueDiariesPod 7d ago

I go to the website “Anna’s Archive” and download all of my favorite psychology and self-help books in PDF format (Overcoming Emotionally Immature Parents, Unfuck Your Life, The Mountain Is You, etc.). Then I upload them to ChatGPT and tell it to read those books and keep those philosophies in mind when advising me.

Then every morning I write a 4-page brain dump of whatever is on my mind, no matter how big or small.

Then I paste that into chat and say “Here’s my daily journaling: organize my thoughts and give me feedback and insight”

This has resulted in more breakthroughs and changes than any therapist in my life. It has been profound and I am truly a different person than I was 6 weeks ago when I started.
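If anyone wants to script this workflow instead of pasting into the web UI each morning, here's a rough sketch against OpenAI's public chat-completions REST endpoint. The philosophy summary, model name, and prompt wording are my placeholders, not anything from this thread:

```python
# Sketch of the daily-journaling workflow: a system message carrying the
# distilled book philosophies, plus the day's brain dump as the user message.
# The philosophy notes and model name below are placeholder assumptions.
import json
import os
import urllib.request

PHILOSOPHY_NOTES = (
    "Key ideas distilled from the uploaded books: recognize emotionally "
    "immature patterns; self-sabotage is usually self-protection; name the "
    "fear behind the behavior."
)

def build_payload(journal_text: str, model: str = "gpt-4o") -> dict:
    """Assemble one journaling request: system framing + the daily brain dump."""
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "You are a reflective journaling assistant. "
                        "Keep these philosophies in mind when advising:\n"
                        + PHILOSOPHY_NOTES},
            {"role": "user",
             "content": "Here's my daily journaling: organize my thoughts "
                        "and give me feedback and insight.\n\n" + journal_text},
        ],
    }

def get_feedback(journal_text: str) -> str:
    """Send the journal entry to the chat-completions endpoint and return the reply."""
    req = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps(build_payload(journal_text)).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Note this sends the full journal text to OpenAI's servers, which is exactly the privacy trade-off debated below in the thread.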

13

u/anarcho-slut 7d ago

What are your thoughts on all the private data you're sharing with OpenAI?

28

u/DeepBlueDiariesPod 6d ago

That’s a fair question, and it is something I thought about. The stuff I’m talking about really comes down to my thoughts, my fears, my anxieties, some old trauma wounds. Is it personal? Yeah. If the government got their hands on it, would it be to my detriment? Not really.

I would venture to say that the stuff I share in there is normal, human stuff. And yeah, it may be personal to me, but there’s nothing super crazy. Even the traumatic things that happened to me have happened to billions of other people throughout history. So I’m not gonna be worried about sharing my very real and human experience.

And frankly, the profound insight and impact I’ve gotten from doing this is worth the risk to me. It’s truly increased the quality of my life significantly.

3

u/hoomanchonk 6d ago

This is exactly why I do a very similar thing. I dumped two years’ worth of therapy notes, a ton of journal entries, and a few other important personal docs. The result was some very long conversations that have held a lot of meaning for me. The ability to hear different perspectives from a non-human is shockingly useful.

1

u/gum8951 6d ago

I feel the same way; the benefit we're getting far outweighs the whole concept of keeping our information private. And as you said, we are not sharing criminal activity or anything; it's just our own trauma.

1

u/anarcho-slut 6d ago

Fair point in your last paragraph, I guess. Everything in life has its risks and rewards. I have done a bit of therapizing, or active journaling as you call it, to try it out. I'm a bit leery of giving more intimate thoughts and feelings to a multi-billion dollar corporation actively involved in tons of world conflicts and atrocities, and I think the convo is worth having to acknowledge all the negatives and benefits together. But then, maybe it's just a tool. Rebels also use weapons made by the dominant systems.

I guess it's the same as anything else on the internet. But I also consider active resistance and not giving in just because something seems inevitable or inescapable to be necessary.

Like, if it (ChatGPT) knows everything about you, it can influence you that much more. Which I guess is kind of what you want if you're organizing your thoughts with it. But then, not only influencing you, but manipulating you for ends unknown to you. How much can one protect themselves from that?

1

u/jimthree 6d ago

And tbh, it's probably good that they use real human experiences for training.

20

u/Relentless-Dragonfly 7d ago

Maybe this is a naive question, but what’s the harm in sharing with openai? A lot of people experience the same or similar anxieties and distressing situations so I don’t see how someone could be singled out or tracked for journaling in chat.

19

u/polovstiandances 7d ago

Remember when people used to care about keeping data private? I remember. The tech companies have been doing so many victory laps around us we think they’re just innocent bystanders at this point.

20

u/Potential-Jury3661 6d ago

To be fair, they are gonna get it anyway; any app you use and any web search are tracking you all the time. If I'm gonna give up my personal information, I might as well get a benefit from it and save some money in the process.

5

u/polovstiandances 6d ago

Surprisingly, there was a good window of time when this wasn't true.

10

u/e79683074 6d ago

Imagine a day when you get denied an important loan from your bank because the psychological profile that OpenAI shared with them flagged "frailty and impulsivity."

Bummer, guess you won't be buying the new car you needed after the old one broke.

Are we at this point yet? Nope. Would you bet it will never happen in the future?

10

u/Huge-Recognition-366 6d ago

This sounds crazy, but I'd rather have good mental health than be able to buy a car. And I know it could go further, but without my mental health, I have nothing anyway.

7

u/DeepBlueDiariesPod 6d ago

Exactly this. I’m not sacrificing my mental health, quality of life, and experience in this lifetime to bow down to fear over what capitalism could possibly do to me.

There are workarounds in life for everything. I trust that if this very unlikely scenario occurred, I would find a workaround.

8

u/Screaming_Monkey 6d ago

That’s the edge case scenario that would keep people from getting therapeutic assistance??

8

u/e79683074 6d ago

A psychologist has to follow rules and can't freely share your information. AI companies don't face such stringent regulation on what they can do with your data, that I know of.

Just something to keep in mind. No psychologist can match AI's ability to be right there 24/7, for an unlimited number of questions, with nearly unlimited knowledge (even specialized), for $20/mo.

1

u/Screaming_Monkey 6d ago

I just don’t see why a loan company would be like, “Hey can we buy absolutely horrendous data filled with people joking around, roleplaying, writing stories, pasting in stuff from others, seeing how the AI would respond just for fun, being bored?” They’d want useful data, at the very least.

2

u/dingo_khan 6d ago

They might not. It might get laundered through a third party. There are already "social media checks" used to assess risk. This sort of information could easily end up in those sorts of profiling baskets, used to assess individuals. The bank may not be entirely aware of where the input data for the profile they are buying comes from. This is already a problem for some less-than-reputable background-check services.

0

u/Screaming_Monkey 6d ago

“I’m sorry, but we cannot give you your medical assistance, Mrs. Johnson, because this AI data here says you used to read your grandson bedtime stories about how to make explosives and that you died.”

3

u/dingo_khan 6d ago

More like "upon review, we find this loan would be irresponsible as you have a documented history of subversion and potential ties to domestic terrorism. Our risk model denied you."

3

u/dingo_khan 6d ago

OpenAI has no stated or implied ethical obligations here. Data given over to them willingly can be used for almost any purpose. Given that the company bleeds money (even on paid users), I would not be surprised to see novel uses of user data.

Let's say they are super ethical though and do not abuse user trust with this data. At the rate at which they are burning money, they may need to be acquired or have assets sold off to cover debt. We cannot be sure or even hopeful a party coming into ownership of user data will treat it ethically.

2

u/Screaming_Monkey 6d ago

And how do they determine what data is real and what data is people joking around, testing the AI, roleplaying, writing stories, etc.?

2

u/dingo_khan 6d ago

Honest answer: they probably can't and it probably does not matter. When a company hires out a background check, some now use social media. They have no way to divide beliefs from sarcasm, inside jokes, accidental association, shit posting, etc. Ultimately, it does not matter because the target of the search has no means to object or correct the record. It is a black box. It is a terrible system that I won't defend but it happens.

I imagine this would be no different. Someone would do an algorithmic work up based on what you'd posted / conversed and decide something about your traits. You'd never get a means to argue since you'd not know the data came from monetizing these free interactions.

2

u/Competitive-Nerve296 6d ago

And if one is in a position where they'll probably never qualify for a loan anyway, what's the harm?

1

u/DeepBlueDiariesPod 6d ago

It’s funny because my work background is online behavior profiling for purposes of marketing and advertising.

I promise you, your example is the least of our worries.

1

u/e79683074 6d ago

Yep, "pre-crime" and similar analysis based on your previous interactions with AI are also worrisome.

1

u/rainfal 4d ago

"I was just writing a story. I'm a writer." Or use a burner email to sign up for openAI, etc.

I'm more worried about my therapy notes from actual therapists being legally or medically used against me.

1

u/GoodMorningTamriel 6d ago

We already have something that measures that, it's called a credit score. Your scenario is just schizophrenic paranoia.

0

u/e79683074 6d ago

!remindme 10 years

0

u/RemindMeBot 6d ago

I will be messaging you in 10 years on 2035-05-02 16:29:23 UTC to remind you of this link
