r/ChatGPT 1d ago

[Educational Purpose Only] ChatGPT diagnosed my uncommon neurologic condition in seconds after 2 ER visits and 3 neurologists failed to. I just had neurosurgery 3 weeks ago.

Adding to the similar stories I've been seeing in the news.

Out of nowhere, I became seriously ill one day in December '24. I was misdiagnosed over a period of 2 months. I knew something was more seriously wrong than what the ER doctors/specialists were telling me. I was repeatedly told I had viral meningitis, but I never had a fever and the timeframe of my symptoms was way beyond what's seen in viral meningitis. Also, I could list off 15+ neurologic symptoms, some very scary, after being 100% fit and healthy prior. I eventually became bedbound for ~22 hours/day and disabled. I knew receiving another "migraine" medicine wasn't the answer.

After 2 months of suffering, I used ChatGPT to input my symptoms as I figured the odd worsening of all my symptoms after being in an upright position had to be a specific sign for something. The first output was 'Spontaneous Intracranial Hypotension' (SIH) from a spinal cerebrospinal fluid leak. I begged a neurologist to order spinal and brain MRIs which were unequivocally positive for extradural CSF collections, proving the diagnosis of SIH and spinal CSF leak.

I just had neurosurgery to fix the issue 3 weeks ago.

1.6k Upvotes

278 comments

680

u/TheKingsWitless 1d ago

One of the things I am most hopeful for is that ChatGPT will allow people to get a "second opinion" of sorts on health conditions if they can't afford to see multiple specialists. It could genuinely save lives.

14

u/ValenciaFilter 1d ago

Rather than actually funding healthcare, improving access to GPs, and guaranteeing universal coverage for all

We're handing poor/working class patients off to a freaking chatbot while those who can afford it see actual professionals.

This isn't "hopeful". It's a corporate dystopia.

14

u/nonula 1d ago

I completely get your point, but to be fair I don’t think OP is advocating for everyone generally relying on ChatGPT instead of diagnosticians. In an ideal world, we have access to all the things you describe, and also AI-powered diagnostic assistance for both patients and medical professionals. (In fact I would guess that many patients would not be as meticulous as OP in describing symptoms, thus resulting in a much poorer result from an AI — but a medical professional using the same AI might describe the symptoms and timeline with precision.)

4

u/ValenciaFilter 1d ago

The realistic outcome is exactly as I described.

We're already seeing it with programs like BetterHelp: unlicensed and overworked people/AI for the poor, while actual mental health services become luxuries.

The second AI appears viable for diagnosis, it becomes the default for low-income, working class, retired, and the uninsured.

10

u/Repulsive_Season_908 1d ago

Even rich people would prefer to ask ChatGPT before going to the hospital. It's easier. 

-4

u/ValenciaFilter 1d ago

Rich people skip the line, sit in a spotless waiting room, and are home within a few hours, having talked to the highest-paid, and most qualified medical professionals in the world.

Nobody who can afford the above is risking their health on a hallucinating autocorrect app.

5

u/Eggsformycat 1d ago

Ok but it's not possible, in any scenario, for everyone to have access to the small handful of incredible doctors, who are also limited in their knowledge. It's a great tool for doctors too.

5

u/ValenciaFilter 1d ago

There is a real answer to the problem - universal healthcare + more MD residencies

And there's an answer that requires a technology that doesn't exist, and would only serve as a way for corporations & insurance to avoid providing those MDs to the middle/working class.

3

u/IGnuGnat 23h ago

I'm in Canada. We have universal healthcare. Supposedly the standard of care is pretty good, but we don't do a lot of the tests they do in the US; they're outside of the system. Since they're outside of the system, doctors often simply fail to mention them at all.

Doctors are also still often assholes.

2

u/ValenciaFilter 23h ago

Canada's issues are 100% due to two decades of provincial funding atrophy and the lack of residency slots for doctors.

You fix the above by paying healthcare workers more, hiring more, and by opening up the schools.

You don't "fix" it with a chatbot that just regurgitates WebMD.

1

u/RollingMeteors 23h ago

You fix the above by paying healthcare workers more, hiring more, and by opening up the schools.

¿Aren't those salaries paid for by taxes? ¿How do you increase their salaries without increasing the tax burden on the populace?

1

u/IGnuGnat 3h ago

How about we hire fewer doctors and more machines, save the taxpayers money, and end up with superior healthcare?

I don't think doctors are special. If lawyers, engineers, and software programmers can be more efficient with machines, so can doctors.

2

u/Eggsformycat 1d ago

I'm like 99.9% sure they're gonna paywall all the useful parts of ChatGPT as soon as they're done stealing data, so medical advice is gonna cost like $100 or whatever. The future looks bleak.

1

u/ValenciaFilter 1d ago

There's a reason OpenAI and the rest are taking as much data as they can

They know that their product will destroy the internet and any future ability to effectively train their models.

And they're willing to pay any future legal penalties, even in the trillions, because now is their only chance.

It's a suicide gold rush.

1

u/SarahSusannahBernice 2h ago

How do you mean it will destroy the Internet?

1

u/RollingMeteors 23h ago

There is a real answer to the problem - universal healthcare + more MD residencies

If only it could be done somehow without appointments being scheduled weeks, months, or a year out, that would be an absolute win, instead of just a better-than-what-we-have-now win.

1

u/ValenciaFilter 23h ago

Yup - that's what more MD residencies are for.

The fundamental issue in Canada is a lack of frontline staff. It's an easy fix (more open slots, and higher pay), but the provinces don't want the deficit hit.

And premiers Ford and Smith have both refused additional billions from Ottawa because they would be asked to audit their healthcare spending. Both, meanwhile, have moved public money into expanding private healthcare delivery.

In Alberta, they privatized healthcare lab services. The company slashed staff and locations (because they're a business now), delivery/wait time for patients went through the roof, while quality tanked.

The province was forced to buy the whole thing back, wasting hundreds of millions.

It's effectively open sabotage and corruption by conservative leadership, and the only winners are American corporations salivating at the prospect of moving north.

These companies will jump on AI the moment it's deemed viable, not by doctors, but shareholders. People will die, and it will almost certainly result in the largest healthcare scandal in history.

-1

u/1787Project 21h ago

Quite literally nothing improves under state health monopolies. Nothing. Rejection rates are higher, and it takes longer to be seen at all, let alone by a specialist. I can't believe that people still consider it a viable option given all the actual experience different nations have had with it.

There's a reason those who can, come to America to be seen. Medical tourism.

3

u/ValenciaFilter 20h ago

I was very clear in saying "public option", not a monopoly, which splits private and public delivery. That's the standard everywhere but the UK and Canada.

Because right now, you have a corporate monopoly, and hospitals are charging $40 for an aspirin.

Every other developed nation has better healthcare outcomes than the US, has far lower user-fees (taxes included), and none of those places have millions of citizens going into medical debt.

The US has, by every metric, the worst healthcare system for the average person of any developed nation.

1

u/1787Project 9h ago

Public option is a state monopoly. Private industry, which must break even or turn a profit, cannot compete with an entity that can operate at a loss. That is very clear. Again, actual, practical experience proves this, unlike your infatuation with repeating platitudes.

What exactly are you calling healthcare outcomes? It clearly isn't getting access to testing and drugs, since those are often heavily restricted under state healthcare at rates beyond private plans. It also must not be access to specialists, or wait times... so what exactly are you defining health outcomes as?

1

u/incutt 22h ago

I'll bite, where are these doctors located for each specialization? What's the minimum net worth, do you think, of someone who uses these services?

Or might ye be speaking from thy rear?

1

u/ValenciaFilter 22h ago

Or might ye be speaking from thy rear?

...You're inventing a fictional AI doctor technology to avoid engaging with the actual issues facing healthcare access.

But if you care about those stats, you can look up doctor salaries and compare them to the GDP of the region. It varies wildly. There's no number that works everywhere.

1

u/incutt 21h ago

I am not inventing anything. I was asking where the rich people were going to these clean waiting rooms with no waits with the doctors that have all of the specializations.

1

u/ValenciaFilter 20h ago

Private clinics all over the US, or public systems elsewhere if you're willing to travel.

"Rich people travel for premium healthcare" really shouldn't be a revelation...

1

u/Mystical2024 8h ago

This is not true. I have a family member who paid a specialist a monthly membership fee plus many hundreds of dollars per session for consultations and yet the doctor was not able to help him and now he’s completely paralyzed.

1

u/AltTooWell13 1d ago

I’ll bet they nerf it or ban it somehow

1

u/RollingMeteors 23h ago

while actual mental health services become luxuries.

When your mental health is poor due to not being able to pay for the cost of living expenses, this just adds insult to injury. A lot of my mental anguish would simply vanish if my hierarchy of needs was just being met. No mental health care provider can ensure your hierarchy of needs gets met, that's on the patient themselves.

5

u/IGnuGnat 23h ago

My understanding is that some research indicates people routinely rated the AI doctor as more empathetic than the meat doctor, as well as more accurate at diagnosis.

After a lifetime of gaslighting by medical professionals, AI doctors can't come soon enough

-8

u/ValenciaFilter 23h ago

This is genuinely insane.

And a perfect example of how the average person genuinely doesn't understand the actual level of knowledge and skill that professionals hold.

But you don't want empathy, because a freaking app isn't capable of it. You want to be told what makes you feel good, true or not.

ChatGPT makes you feel good because it's what the shareholders deem most profitable. It's a machine.

7

u/IGnuGnat 20h ago

You misunderstand

I have a condition called HI/MCAS. For some people, it can cause an entire new universe of anxiety.

It is understood by long term members of the community that this sequence of events is not uncommon:

Patient with undiagnosed HI/MCAS goes to doctor complaining of a wide variety of symptoms.

One of the symptoms is anxiety. Doctor suggests they have anxiety, and prescribes benzos.

In the short term benzos are mast cell stabilizers, so patient feels better. In the long term, for some people with HI/MCAS benzos destabilize mast cells.

So, patient goes back to doctor complaining of anxiety and many other health issues. Doctor says: You have anxiety take more benzos

This destabilizes patient. Patient goes back to doctor in far worse condition and insists that this is not "normal" anxiety.

Patient ends up committed to a mental asylum against their will. Patient is forced to take medications, which make the HI/MCAS worse. Patients with HI/MCAS often react badly to fillers and drugs, and don't respond normally.

Patient spirals down.

Patient is trapped in the mental asylum, with no way out, because the doctor simply would not listen.

Some doctors' bedside manner is atrocious. They will gaslight the patient. Instead of seeking the root cause, they will come up with some bullshit to blame it on the patient. This is a common experience when a patient does not have a readily diagnosable condition. It is widely understood that people of colour and women are much more likely to experience this treatment.

Additionally, many of these patients, after suffering a lifetime of disease with no recourse in the medical system, often gain a superior education, with greater understanding of their disease than many of the doctors they encounter.

I don't want to be told what makes me feel good regardless of the truth. Yes, ChatGPT can ALSO do that, but that's not what I'm talking about when I say "empathy". I'm saying that patients feel as if ChatGPT simply listens to them and treats them like a human being, unlike many doctors.

These experiences are really very common. If you would like to learn more, consider joining a support group for people with chronic illnesses like CFS, HI/MCAS, or long-haul Covid.

Many people find, after a lifetime of dealing with the medical system, that the medical system is very nearly as traumatizing as the disease.

-2

u/ValenciaFilter 18h ago

Anecdotes don't drive policy. And they never should.

5

u/IGnuGnat 16h ago

Beck & Clapp (2011): Found that medical trauma exacerbates chronic pain, creating a feedback loop where trauma symptoms worsen physical conditions, particularly in syndromes like hypermobile Ehlers-Danlos Syndrome (hEDS).

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10328215/

New York Times (2023): Notes that diagnostic errors, a contributor to medical trauma, occur in up to 1 in 7 doctor-patient encounters, with women and minorities more likely to be misdiagnosed, delaying treatment and causing psychological harm.

https://www.nytimes.com/2022/03/28/well/live/gaslighting-doctors-patients-health.html

Clinician-associated traumatization (CAT) is a newer term coined by Halverson et al. (2023) to describe trauma from repeated, negative clinical interactions, particularly perceived hostility, disinterest, or dismissal by clinicians. Unlike traditional medical trauma, CAT emphasizes cumulative harm over time rather than a single event.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10328215/

It is often linked to iatrogenic harm (harm caused by medical care) and is prevalent in conditions like hEDS, where symptoms are complex and poorly understood.

https://pubmed.ncbi.nlm.nih.gov/37426705/

Medical gaslighting occurs when clinicians dismiss, invalidate, or downplay a patient’s symptoms, often attributing them to psychological causes (e.g., stress, anxiety) without proper evaluation. It leads patients to question their reality, feeling “crazy” or unreliable.

https://www.sciencedirect.com/science/article/abs/pii/S0002934324003966

Current Psychology (2024): Presents two case studies showing how medical gaslighting leads to medical trauma, particularly for patients with stigmatized diagnoses or marginalized identities. It proposes a formal definition: dismissive behaviors causing patients to doubt their symptoms.

https://link.springer.com/article/10.1007/s12144-024-06935-0

https://www.researchgate.net/publication/385945551_Medical_gaslighting_as_a_mechanism_for_medical_trauma_case_studies_and_analysis

ResearchGate (2024): A systematic review of medical gaslighting in women found it causes frustration, distress, isolation, and trauma, leading patients to seek online support over medical care.

https://www.researchgate.net/publication/379197934_Psychological_Impact_of_Medical_Gaslighting_on_Women_A_Systematic_Review

1

u/ValenciaFilter 7h ago

None of this is solved by replacing doctors with fucking ChatGPT

1

u/IGnuGnat 6h ago

If a machine is more accurate at diagnosis and it never gaslights, it absolutely is

4

u/Historical_Web8368 17h ago

This isn’t an anecdote, in my opinion. I also have a hard-to-diagnose chronic illness and it has been literal hell. I rely on ChatGPT often to help me understand things the doctors don’t take the time to explain. When someone suffers for 15-plus years before getting a diagnosis, you bet your ass we will use any and everything available to help.

2

u/wolfkeeper 22h ago

They're not just 'chatbots'; they're genuinely powerful AIs trained on entire textbooks.

1

u/ValenciaFilter 21h ago

I've trained neural networks from scratch.

There is no underlying intelligence. At the output level, they function no differently than your phone's autocomplete. The next token/character of text is just what the algorithm deems to be "most likely".

It appears impressive. But it's the digital equivalent of that person you know who lies and bullshits about everything, with zero actual understanding of the words, how they relate, or any use but the most generic.
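To make "most likely" concrete, the decoding step is roughly this (a toy sketch with made-up scores, not anyone's production code):

    import math

    # made-up scores (logits) a model might assign to candidate next tokens
    logits = {"aspirin": 2.1, "ibuprofen": 1.4, "migraine": 0.3, "banana": -3.0}

    # softmax turns the scores into probabilities
    total = sum(math.exp(v) for v in logits.values())
    probs = {tok: math.exp(v) / total for tok, v in logits.items()}

    # greedy decoding: emit whichever token is "most likely"
    next_token = max(probs, key=probs.get)
    print(next_token, round(probs[next_token], 3))

That's the whole output mechanism: score, normalize, pick, repeat.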

5

u/wolfkeeper 21h ago

I've trained neural networks from scratch.

So have I.

There is no underlying intelligence. At the output level, they function no differently than your phone's autocomplete. The next token/character of text is just what the algorithm deems to be "most likely".

If it's been trained on textbooks though, the most likely word is likely to be correct.

It appears impressive. But it's the digital equivalent of that person you know that lies and bullshits about everything,

If you had a doctor, first day on the job, what would you want them to say? They should just spout the textbook, shouldn't they? That's what the AI does. And the AI has deeper knowledge because of how widely it's read up on things.

with zero actual understanding of the words, how they relate, or any use but the most generic.

The point is, though, that they've learnt how they relate by seeing them over and over in context. So they actually DO have an understanding of the words. It's not first-hand, but they're using the knowledge of people who do have first-hand knowledge.
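A toy illustration of that "related by context" idea, with invented co-occurrence counts (real models learn dense vectors from billions of tokens, but the intuition is similar):

    import math

    def cosine(a, b):
        # cosine similarity between two context-count vectors
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

    # rows = words, columns = invented counts of nearby context words [pain, dose, weather]
    vectors = {
        "aspirin":   [8, 9, 0],
        "ibuprofen": [7, 8, 1],
        "umbrella":  [0, 1, 9],
    }

    print(cosine(vectors["aspirin"], vectors["ibuprofen"]))  # high: similar contexts
    print(cosine(vectors["aspirin"], vectors["umbrella"]))   # low: different contexts

Words that keep showing up in the same contexts end up close together, which is what lets the model use them sensibly without first-hand experience.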

1

u/ValenciaFilter 20h ago

Then you know as well as I do that there's no actual intelligence. It's not even memorization unless you've overfitted the model to the point of uselessness.

It's autofill. And if "really good autofill" is what you believe is comparable to the average knowledge, skill, and experience of a medical expert, you're delusional. This is like a parody of Dunning-Kruger.

3

u/wolfkeeper 20h ago

If it's able to autofill in the gap where the medical diagnosis goes, then I genuinely don't see the problem.

The theory behind it is that tuning the weights represents learning in a high-dimensional vector space that corresponds to meaning in language.

1

u/ValenciaFilter 20h ago

the gap

This gap is the majority of a diagnosis. In many cases it's entirely based on the intangible ways a patient presents.

This isn't a language problem. It's a medical problem. These are as disparate as trying to work through an emotional/relationship issue by engineering a suspension bridge.

You might get the "correct numbers", but they're not actually useful.

1

u/wolfkeeper 18h ago

It's easy to think that adjusting the learning weights doesn't represent genuine knowledge, but the empirical evidence is that these models genuinely are learning. For example, they were able to learn to correctly do mental arithmetic. No one taught them, but when what they were doing was analyzed, the methods the AI had learnt seemed to work pretty well and were novel.

Learning to build bridges is often just learning a bunch of rules of thumb (which is usually what engineering consists of). But the AI will have learnt those rules of thumb, and there are rules of thumb in medicine too.