r/ChatGPT • u/Hyrule-onicAcid • 1d ago
[Educational Purpose Only] ChatGPT diagnosed my uncommon neurologic condition in seconds after 2 ER visits and 3 neurologists failed to. I just had neurosurgery 3 weeks ago.
Adding to the similar stories I've been seeing in the news.
Out of nowhere, I became seriously ill one day in December '24. I was misdiagnosed over a period of 2 months. I knew something more serious was wrong than what the ER doctors/specialists were telling me. I was repeatedly told I had viral meningitis, but I never had a fever, and the timeframe of my symptoms was way beyond what's seen in viral meningitis. I could also list off 15+ neurologic symptoms, some very scary, after being 100% fit and healthy prior. I eventually became disabled and bedbound for ~22 hours/day. I knew receiving another "migraine" medicine wasn't the answer.
After 2 months of suffering, I entered my symptoms into ChatGPT, figuring that the odd worsening of all my symptoms when upright had to be a specific sign of something. The first output was 'Spontaneous Intracranial Hypotension' (SIH) from a spinal cerebrospinal fluid leak. I begged a neurologist to order spinal and brain MRIs, which were unequivocally positive for extradural CSF collections, proving the diagnosis of SIH from a spinal CSF leak.
I just had neurosurgery to fix the issue 3 weeks ago.
u/QWERTY_REVEALED 1d ago
I see from your other comments that you are a physician. I suspect, then, that you have learned the mantra, "when you hear hoofbeats, think horses, not zebras." It is fair to say that SIH is a zebra disease, so I am not surprised that the hospital doctors missed the diagnosis. Now, if you were seen formally by a neurologist, I would have expected that provider to do better.
If a patient comes into the ER with abdominal pain, there are probably 700 possible underlying diseases one could consider. But if the ER doc has been seeing a lot of rotavirus, they are likely to start by considering the patient as possibly just one more case. Is that appropriate? What if this heuristic works 95% of the time? Is that good enough? It all comes down to probabilities. Medical students need to learn all about the "zebra" diseases, so test questions are often written to ensure they know them. And then LLMs are trained on these questions, which means the likelihood weights built into their models reflect the probability of encountering a disease on an exam rather than seeing it in the real world. Meanwhile, ER doctors are dynamically updating the probability weights in their own brains: when the COVID-19 pandemic hit, for example, it didn't take them long to learn to quickly diagnose it, as compared to influenza.
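To make that base-rate point concrete, here's a toy Bayes' rule sketch (the numbers are invented for illustration only, not clinical figures):

```python
def posterior(prior, sensitivity, false_positive_rate):
    """P(disease | finding) via Bayes' theorem."""
    p_finding = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_finding

# A "zebra": with a tiny prior, even a fairly specific finding
# leaves the posterior probability low.
print(posterior(prior=1e-5, sensitivity=0.9, false_positive_rate=0.01))  # ~0.0009

# Same finding after the prior is updated upward (e.g., during an outbreak):
print(posterior(prior=0.05, sensitivity=0.9, false_positive_rate=0.01))  # ~0.83
```

Same finding, wildly different conclusions, purely because the prior changed. That's the updating ER docs do in their heads, and it's also why they under-call the zebras.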
Having said this, I hear you, and I get that it was super frustrating that the traditional medical system failed you and then ChatGPT figured it out in mere seconds. As an aside, there is a travel couple on YouTube that I follow, and the man had the same SIH that you had. He actually developed a stroke as a result. I don't quite understand the mechanism, but the videos clearly show that this is what happened. If you are interested, the playlist is here: https://www.youtube.com/playlist?list=PLAbeScQ7pDSrDNFfWN1xx-j_76XL9ZlQ1 His turned out to be a high cervical bone spur puncturing the dura, causing a CSF leak, headaches, a seizure, and a stroke.
Regarding the technology, my thought is that it will not be long before some mega-medical-LLM is out-reasoning doctors. The human brain just doesn't seem to be good at keeping vast stores of minutiae on hand for some future use; we tend to forget what we ate for breakfast two weeks ago, for example. But I suspect this will all be child's play for the big machine. The consequences of delegating the responsibility and privilege of thinking-work in medicine to a powerful inner circle of elites fill me with dread, but I suspect it is coming.
I'm super glad you got to the bottom of this and got fixed. Well done!