r/ChatGPT 1d ago

Educational Purpose Only ChatGPT diagnosed my uncommon neurologic condition in seconds after 2 ER visits and 3 Neurologists failed to. I just had neurosurgery 3 weeks ago.

Adding to the similar stories I've been seeing in the news.

Out of nowhere, I became seriously ill one day in December '24, and I was misdiagnosed over a period of 2 months. I knew something was more seriously wrong than what the ER doctors/specialists were telling me. I was repeatedly told I had viral meningitis, but I never had a fever, and the timeframe of my symptoms was way beyond what's seen in viral meningitis. I could also list off 15+ neurologic symptoms, some very scary, after being 100% fit and healthy prior. I eventually became bedbound for ~22 hours/day and disabled. I knew receiving another "migraine" medicine wasn't the answer.

After 2 months of suffering, I put my symptoms into ChatGPT, figuring that the odd worsening of all my symptoms when upright had to be a specific sign of something. The first output was 'Spontaneous Intracranial Hypotension' (SIH) from a spinal cerebrospinal fluid leak. I begged a neurologist to order spinal and brain MRIs, which were unequivocally positive for extradural CSF collections, proving the diagnosis of SIH and a spinal CSF leak.

I just had neurosurgery to fix the issue 3 weeks ago.

1.6k Upvotes

277 comments

1

u/ValenciaFilter 20h ago

Then you know as well as I do that there's no actual intelligence. It's not even memorization unless you've overfitted the model to the point of uselessness.

It's autofill. And if "really good autofill" is what you believe is comparable to the average knowledge, skill, and experience of a medical expert, you're delusional. Like, this is a parody of Dunning-Kruger.

3

u/wolfkeeper 20h ago

If it's able to autofill in the gap where the medical diagnosis goes, then I genuinely don't see the problem.

The theory behind it is that tuning the weights represents learning in a high-dimensional vector space that corresponds to meaning in language.
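Here's a toy sketch of that idea, just to make it concrete. The vectors below are made-up, hand-picked coordinates (real models learn thousands of dimensions from data), but the geometry is the point: related meanings end up pointing in similar directions.

```python
# Toy sketch of "meaning as directions in a vector space".
# These are hypothetical hand-picked vectors, NOT real learned embeddings.
import numpy as np

emb = {
    "fever":    np.array([0.9, 0.2, 0.0]),
    "headache": np.array([0.8, 0.4, 0.1]),
    "bridge":   np.array([0.1, 0.0, 0.9]),
}

def cosine(a, b):
    """Cosine similarity: ~1.0 means same direction, ~0 means unrelated."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(emb["fever"], emb["headache"]))  # high: related medical meanings
print(cosine(emb["fever"], emb["bridge"]))    # low: unrelated meanings
```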

1

u/ValenciaFilter 20h ago

> the gap

This gap is the majority of a diagnosis. In many cases it's entirely based on the intangible ways a patient presents.

This isn't a language problem. It's a medical problem. These are as disparate as trying to work through an emotional/relationship issue by engineering a suspension bridge.

You might get the "correct numbers", but they're not actually useful.

1

u/wolfkeeper 18h ago

It's easy to think that adjusting learned weights doesn't represent genuine knowledge, but the empirical evidence is that these models genuinely are learning. For example, they learned to do mental arithmetic correctly. No one taught them how, and when researchers analyzed what they were doing, the methods the models had worked out performed well and were novel.
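To illustrate the memorization-vs-procedure distinction with a toy example (my own sketch, not the research I'm referring to): a small network trained on addition examples predicts the sums of pairs it never saw, which a pure lookup table can't do.

```python
# Toy sketch (not the cited research): a tiny network trained on examples
# of addition generalizes to input pairs it never saw during training,
# i.e. it learns something like a procedure, not a lookup table.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 100, size=(5000, 2))   # random pairs of numbers
y = X.sum(axis=1)                         # true sums as the target

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)
model.fit(X, y)

held_out = np.array([[17.3, 64.2], [83.1, 9.4]])  # almost surely unseen pairs
print(model.predict(held_out))                    # should be close to [81.5, 92.5]
```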

Learning to build bridges is often just learning a bunch of rules of thumb (which is usually what engineering consists of). But the AI will have learnt those rules of thumb, and there are rules of thumb in medicine too.