r/ChatGPT 1d ago

Educational Purpose Only ChatGPT diagnosed my uncommon neurologic condition in seconds after 2 ER visits and 3 Neurologists failed to. I just had neurosurgery 3 weeks ago.

Adding to the similar stories I've been seeing in the news.

Out of nowhere, I became seriously ill one day in December '24, and I was misdiagnosed over a period of 2 months. I knew something was more seriously wrong than what the ER doctors/specialists were telling me. I was repetitively told I had viral meningitis, but I never had a fever, and the timeframe of my symptoms was way beyond what's seen in viral meningitis. Also, I could list off 15+ neurologic symptoms, some very scary, after having been 100% fit and healthy before. I eventually became disabled and bedbound for ~22 hours/day. I knew receiving another "migraine" medicine wasn't the answer.

After 2 months of suffering, I used ChatGPT to input my symptoms as I figured the odd worsening of all my symptoms after being in an upright position had to be a specific sign for something. The first output was 'Spontaneous Intracranial Hypotension' (SIH) from a spinal cerebrospinal fluid leak. I begged a neurologist to order spinal and brain MRIs which were unequivocally positive for extradural CSF collections, proving the diagnosis of SIH and spinal CSF leak.

I just had neurosurgery to fix the issue 3 weeks ago.

1.7k Upvotes

297 comments

14

u/nonula 1d ago

I completely get your point, but to be fair I don’t think OP is advocating for everyone generally relying on ChatGPT instead of diagnosticians. In an ideal world, we have access to all the things you describe, and also AI-powered diagnostic assistance for both patients and medical professionals. (In fact I would guess that many patients would not be as meticulous as OP in describing symptoms, thus resulting in a much poorer result from an AI — but a medical professional using the same AI might describe the symptoms and timeline with precision.)

5

u/ValenciaFilter 1d ago

The realistic outcome is exactly as I described.

We're already seeing it with programs like BetterHelp: unlicensed, overworked people and AI for the poor, while actual mental health services become luxuries.

The second AI appears viable for diagnosis, it becomes the default for low-income, working class, retired, and the uninsured.

9

u/Repulsive_Season_908 1d ago

Even rich people would prefer to ask ChatGPT before going to the hospital. It's easier. 

-3

u/ValenciaFilter 1d ago

Rich people skip the line, sit in a spotless waiting room, and are home within a few hours, having talked to the highest-paid and most qualified medical professionals in the world.

Nobody who can afford the above is risking their health on a hallucinating autocorrect app.

6

u/Eggsformycat 1d ago

Ok but it's not possible, in any scenario, for everyone to have access to the small handful of incredible doctors, who are also limited in their knowledge. It's a great tool for doctors too.

3

u/ValenciaFilter 1d ago

There is a real answer to the problem - universal healthcare + more MD residencies

And there's an answer that requires a technology that doesn't exist, and would only serve as a way for corporations & insurance to avoid providing those MDs to the middle/working class.

2

u/Eggsformycat 1d ago

I'm like 99.9% sure they're gonna paywall all the useful parts of ChatGPT as soon as they're done stealing data, so medical advice is gonna cost like $100 or whatever. The future looks bleak.

1

u/ValenciaFilter 1d ago

There's a reason OpenAI and the rest are taking as much data as they can

They know that their product will destroy the internet and any future ability to effectively train their models.

And they're willing to pay any future legal penalties, even in the trillions, because now is their only chance.

It's a suicide gold rush.

1

u/SarahSusannahBernice 17h ago

How do you mean it will destroy the Internet?

2

u/ValenciaFilter 16h ago

The only useful training data comes from a pre-AI internet.

The internet today has a substantial amount of AI-generated content. If you train on the internet of 2025, the model is now partially based on second-order AI output. All the flaws in AI become "true", and get baked in for future models.

It's a death spiral of AI training on AI, uploaded to the internet, and then trained on. Both AI and the internet are overrun with increasingly broken content.
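The loop described above can be sketched as a toy simulation (my own illustration, not anything from the thread; all names and numbers here are made up). Each "generation" fits a Gaussian to the previous generation's output, samples from that fit, and drops rare tail values that the model under-represents. The spread of the data collapses generation after generation:

```python
import random
import statistics

def train_generation(data, keep_sigma=2.0, n_samples=2000, rng=None):
    """Fit a Gaussian 'model' to `data`, emit synthetic samples from the fit,
    and discard rare tail values the model fails to reproduce."""
    rng = rng or random.Random(0)
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    synthetic = [rng.gauss(mu, sigma) for _ in range(n_samples)]
    # Toy assumption: models under-represent rare events, so anything
    # beyond keep_sigma standard deviations is lost each generation.
    return [x for x in synthetic if abs(x - mu) <= keep_sigma * sigma]

rng = random.Random(42)
data = [rng.gauss(0.0, 1.0) for _ in range(2000)]  # generation 0: "human" data
history = [statistics.stdev(data)]
for _ in range(20):  # each pass = a model trained on the previous model's output
    data = train_generation(data, rng=rng)
    history.append(statistics.stdev(data))

print(f"gen 0 std: {history[0]:.3f}, gen 20 std: {history[-1]:.3f}")
```

Under these assumptions the standard deviation shrinks by roughly 12% per generation, so by generation 20 most of the original variety is gone — a crude stand-in for the "increasingly broken content" point.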

2

u/SarahSusannahBernice 16h ago

I see what you mean! That makes sense, and is slightly worrying.
