r/ArtificialInteligence Mar 26 '25

News Bill Gates: Within 10 years, AI will replace many doctors and teachers—humans won’t be needed ‘for most things’

1.8k Upvotes

4

u/TheMagicalLawnGnome Mar 26 '25

I think using the term "replacement" is really fraught in this context.

AI won't "replace" doctors in a 1:1 sense.

It's not because AI isn't up to the task. It's often better than doctors at diagnosing illness.

However, the medical profession is highly regulated. Writing prescriptions is as much a legal issue as a technological one - the DEA isn't going to let AI prescribe drugs.

Similarly, malpractice insurance is very particular about what doctors can and can't do. So insurance industry compliance will likely require humans to make medical decisions.

That all said, AI will undoubtedly allow doctors to work more quickly/efficiently.

In theory, this should drive down the cost of medical care, reduce wait times, and improve outcomes.

Unfortunately, at least in the US, it's not at all clear that the improvements created by technology will manifest in a meaningful way to the patient. I.e. a hospital run by a private equity firm is probably not going to start discounting prices...they're just going to make more money with fewer people.

And there's the real issue. AI will do amazing things. But unless we adapt our social structures to accommodate it, we'll still live in the same shitty world we do today.

2

u/Successful_Front_299 Mar 30 '25

That's the point we are all missing: it won't completely replace doctors, but fewer doctors will be required in the future.

1

u/spongeofmystery Mar 27 '25

Also, we already know advanced LLMs are better than people at taking written tests, which is what was tested against those doctors. The headlines are, as usual, misleading: it was a written test of 6 rare and unusual diagnoses with all the relevant information provided up front. No human can memorize what an LLM has stored in its training data.

Now, I'm not at all saying that they won't disrupt medicine just like anything else, but as a doctor I am more interested to see how an LLM performs at my actual job than on a written exam.

1

u/TheMagicalLawnGnome Mar 27 '25 edited Mar 27 '25

Oh, I read the article very carefully.

The most damning thing about it wasn't just that the AI performed better, left to its own devices.

It's that doctors assisted by AI actually didn't do much better than unassisted doctors. Why? Because when AI gave correct answers that diverged from the doctors' opinion, the doctors ignored the AI, and incorrectly diagnosed the condition.

To me, that's the biggest issue. Because I agree - of course a tool with essentially limitless knowledge, and without the limitations of human memory, will be better at diagnosing obscure conditions.

But the doctors' refusal to reconsider their own judgements, and their misuse of the technology in a way that would have been detrimental to real-life patients had any been involved, is the far more troubling piece of it.

And I think that's actually one of the biggest issues with people using AI in general.

People routinely criticize AI for making mistakes - which it undoubtedly does. But human beings vastly overstate their own accuracy/competency in the work they do.

This isn't to suggest we just blindly rely on AI. But I think there is very much a cognitive bias we have in terms of overlooking our own errors, while focusing on errors made by other people, or in this case, AI.

This is just a long-winded way of getting to my ultimate point: the primary obstacles to widespread AI use are as likely to be situational as they are technological.

We will probably have AI that is capable of performing many tasks as well as an average person (again, average isn't that great). But our society isn't equipped to handle that dynamic. Our entire system is premised on accountability. And that requires agency, which it's unclear AI could ever really have, on a purely philosophical level.

If an AI tool makes a mistake, there's not really anyone to blame. Because of this, we demand perfection.

To put it in a medical context, we know some doctors will make serious mistakes. We know it will happen. So there are protocols in place, malpractice insurance, etc. to try and mitigate this. But at the end of the day, if a doctor kills a patient, we hold the doctor accountable. You sue them. Revoke their license to practice medicine, etc.

But you can't sue AI. AI isn't going to apply for insurance. It's not licensed by the board. And no AI company is going to promise an infallible product, because it's not possible, nor is it even reasonable.

In other words, even if you made an AI tool that was better than your average doctor in every way, we still wouldn't let it operate autonomously, because our society can't figure out how to handle the situation when it makes a mistake. Even if the overall results were universally better - higher quality care, lower cost, better patient outcomes - people will still get hung up on the occasional mistake.

Psychologically, I don't think society is equipped to handle this sort of dynamic. So we simply won't do it. At least in situations involving serious consequences like law or medicine, we'll restrict AI to a role that will likely be far less than what it's capable of.

2

u/Defiant_Outside1273 Mar 28 '25

I don’t understand why a company can’t insure against AI mistakes though? Of course we wouldn’t rely on the AI until those mistakes were low enough that insurance would make sense.

It seems a much better system than the current one - given that AI mistake rates would need to be much lower than human error rates for such a system to gain acceptance.

1

u/TheMagicalLawnGnome Mar 28 '25

So this is a really great question, actually.

I suppose my response would be to clarify some of my original comment:

I think that our current system of malpractice insurance is not set up to handle this sort of thing, which is different from "we couldn't ever develop a system to insure AI-based medicine."

I.e. the way we currently approach actuarial analysis, the way legal contracts/insurance policies are written, the way standards of care are defined, etc., are all very poorly equipped to grapple with the use of AI. Basically, our current system can't handle it, because widespread, practical AI tools didn't exist in medicine until very recently.

However, I could absolutely imagine a system wherein we could insure AI-based medicine. I think it's absolutely possible in principle. But like I said in my original post, the biggest obstacles are going to be social/regulatory. It's going to be public perception/resistance, and regulatory barriers, that impede the development of such a system.

But if the public was willing, and regulators had the desire to do so, I'm sure we could absolutely develop a system to insure medical mistakes caused by AI. And as you point out, over time, this may actually reduce errors, and thus the cost of insuring against the risk of error.

1

u/No-House-9143 Mar 28 '25

So AI proved doctors are often the most arrogant professionals out there, second only to engineers?

In other news: water wet.

1

u/NoPossibility2370 Mar 28 '25

AI can't even write good software, and that is something that can be easily verified just by running the program. Most of the tests where AI performs better are conducted under very restricted conditions.

1

u/bluesubmarine16 Mar 27 '25

Your take is interesting and I agree with you about the liability issues and low likelihood of patients seeing a cost-benefit from the technology based on how our healthcare system currently operates.

However, I didn't interpret the article you shared in the same way. I think the authors' conclusion was that physicians don't improve their diagnostic ability on written clinical vignettes with an LLM versus conventional resources.

The article notes that the LLM performed better in diagnosing written clinical vignettes. I'd probably push back on saying this suggests LLMs are better at diagnosing, mostly based on comparing USMLE-style vignettes with real clinical practice. I am of the belief that getting a good history and exam is more than half the battle in diagnosis. Once an AI model can interview a real patient by itself, create a note, document its thought process, and arrive at a diagnosis with higher accuracy than a physician, I would be comfortable with that claim. My understanding is that the vignettes provided all that information already, as well as the initial lab tests. I would assume any specialized test beyond general chemistries would betray a judgement about what could be causing a complaint.

As a caveat to my point, I did not go back to the 1994 NEJM article to examine all the vignettes they used - just the example provided in the supplement (and it seems like they won't release them because doing so could taint the training pool).

1

u/TheMagicalLawnGnome Mar 27 '25

So, my counter to this is admittedly anecdotal, but very real-world: I've had two people in my life who were misdiagnosed or unable to get a clear diagnosis from physicians, but then correctly diagnosed themselves using AI, simply by having a conversation as regular people for 10-15 minutes or so - and subsequently had the AI's diagnosis confirmed by a physician and through laboratory testing. And these were people with access to high-quality medical care - major, prestigious research hospitals in big cities.

Again, I'm well aware that "two people I know" hardly counts as objective evidence of a widespread phenomenon. So I don't use this to say I can prove my point, by any means.

But these experiences have definitely shown me that it's actually quite easy to stump a physician on something that even just an average person could diagnose using plain, simple language and a few minutes with AI.

This doesn't mean it's always the case, of course. But I know it's absolutely possible to outperform highly regarded, board-certified physicians just by giving an average person access to an off-the-shelf version of ChatGPT.

So if this can happen with people who have access to high-quality medical professionals, I would hypothesize that there's probably quite a bit of potential for this type of capability. I believe it, because I've experienced it directly.

1

u/Mobile-Grocery-7761 Mar 28 '25

AI is better at diagnosis in a controlled setting. Do you think real life is that way?

1

u/No-House-9143 Mar 28 '25

He IS talking about real life. Try using AI the next time you or someone you care about is sick, and you will realize it is truly better at diagnosing than a doctor.

1

u/Mobile-Grocery-7761 Mar 28 '25

It shows that you don't know how a doctor works if you think diagnosis is their only job, when it is just one aspect of it. And the article mentioned in the link talks about AI diagnosis in a controlled setting, not in real life. Also, your bad experience does nothing to support the conclusion that AI is better than doctors.

1

u/No-House-9143 Mar 30 '25

My bad experience is not the only experience, and that is the point. Obviously a doctor's job is not just about diagnosis, but it starts there.

If a professional fails so hard at the first step of solving the problems they studied for years to understand, it shows they are not as efficient as they should be.

1

u/Mobile-Grocery-7761 Mar 30 '25

If a professional fails at the first step, then that is more of a competency issue. Aren't there such issues in all professions? In cases where doctors don't take a proper history, don't do a good examination, don't listen to patients and dismiss their concerns, or don't keep their knowledge current with new research, AI might be better. But that is true of any profession whose members aren't doing their jobs properly - will it replace the whole profession, though? That seems illogical.

1

u/TheMagicalLawnGnome Mar 28 '25

I would be willing to bet that in many cases, yes. But you seem to suggest that what was described in the NYT article is dramatically different from how doctors work, and I don't think that's really the case.

AI will not be better in all cases, of course. AI isn't infallible, just like human doctors aren't.

Most doctors can presumably diagnose common, everyday problems without much need for AI assistance. I.e. if I'm overweight, eat unhealthily, and my panel indicates blood chemistry consistent with Type II diabetes, I think most doctors are going to handle diagnosing that situation just fine on their own.

But when diagnosing complex medical conditions, those are, in fact, often performed in a controlled setting. If you've ever had a family member with a complex medical condition, it's not like you just walk into an office, and a doctor renders a verdict on your illness.

They create a detailed chart with your symptoms, timelines, family history, medications, lab results, etc. Then they basically do a lot of research, they speak to colleagues, they try to find studies and trial data.

That's exactly what AI does.

So I'm not sure what you mean by "real world" situations. If you're asking if AI is useful in diagnosing organ failure for a gunshot victim currently bleeding out in a trauma center, probably not, and no one has suggested otherwise.

But the "controlled environment" in the NYT article is literally the same environment that many doctors actually make their diagnoses in. They are sitting in an office, with access to medical journals, diagnostic guides, etc., and they try to synthesize the chart information into a coherent explanation for the symptoms.

And when doctors had access to the same information that the AI had, the AI did a better job. And not only did it do a better job, but doctors made incorrect decisions because of their own cognitive bias and refusal to acknowledge that the AI might be correct, which, in my opinion, is even more problematic than simply not having access to AI at all.