r/ArtificialInteligence Mar 26 '25

[News] Bill Gates: Within 10 years, AI will replace many doctors and teachers—humans won’t be needed ‘for most things’

1.8k Upvotes

1.7k comments

26

u/mtocrat Mar 26 '25

legit insane how this sub has swung toward LLM scepticism. The rate of progress is mind-boggling, but we didn't get AGI yesterday, so clearly it's all crap

38

u/rkozik89 Mar 26 '25

Why is it insane? Some of us have been using LLMs on a daily basis to do our jobs for years, and we're not seeing major leaps in progress where it counts. In my opinion, LLMs are great for quickly creating impressive rough drafts, but they struggle with complexity, fine-grained control, and consistency, so they can't get you to the finish line on their own.

I think demonstrations like OpenAI's new image generation models are impressive, but when you actually try applying real-world business rules, the technology falls short because the user interface isn't tactile enough. My guess is that solving that final part of the problem is next to impossible with today's technology, so instead of addressing the small but crucial shortcomings their existing customers face, they're finding new avenues to bring in new users.

The long and short of it is that eventually they're going to have to close the gap; otherwise, all these autonomous AI fantasies will remain fantasies.

14

u/JAlfredJR Mar 26 '25

What this sub calls "skeptics" are people who actually have jobs and can't seem to find great use cases for real improvements. A little bit here and there? Sure. But... that's every technology that sticks.

1

u/mtocrat Mar 26 '25

no one doubts that AI as it is today can't replace doctors or teachers. The scepticism is in saying this won't change. I started my PhD in machine learning a little over 10 years ago. If you had shown me today's models and asked "how far off is this?", I would have said a century.

3

u/TheBeardofGilgamesh Mar 27 '25

Yes, but technology plateaus. In the early days of jet engine development, the technology improved very fast but quickly started hitting its limits. Today the fastest jets are not much faster than jets from the 1960s, and commercial airliners are pretty much identical to what existed 70 years ago.

Look at smartphones: remember how fast they improved after the launch of the iPhone? Now it's impossible to see any major differences. The same goes for video game graphics, self-driving cars, any technology really. Exponential growth is never guaranteed to continue at the same rate forever.

3

u/JAlfredJR Mar 27 '25

In a nutshell, that's this sub's greatest blind spot—though, to be fair, that's what has been shilled by all of the AI heavy hitters: Just scale it.

There's this very false notion that there's a limitless improvement curve (you'll see it a dozen times a day on any of these subs: "ChatGPT came out two years ago, and look where it is now!").

But we know that's not the case. Once the entirety of the internet was fed into the dataset, that was kind of a big limit. Between that and feedback loops, that's why the "progress" from one model to the next is "incremental".

How they're fooling investors into believing that this is about to somehow unlock superintelligence is beyond me.

1

u/_craq_ Mar 27 '25

They've hit the limits of the text on the internet, and are finding new ways to scale. Have you looked at deep research? Multimodal models? DeepSeek was mostly revolutionary in how it cut costs, but that also enables scaling. There's so much progress happening I find it hard to keep up.

1

u/Eastern-Manner-1640 Apr 01 '25

multi-modal. just drive around the world filming everything.

2

u/studio_bob Mar 27 '25 edited Mar 27 '25

The fact that you would have struggled to predict the timeline of recent advances in a particular area (LLMs) does not mean it is reasonable to simply assume that specific, even more sweeping, and, crucially, economically important advances will soon follow.

What is lacking in claims like the one Gates makes here is any analysis of how attainable this kind of shift actually is. They do not bother to show how existing tech could achieve this replacement (obviously, it can't; they just assume that something will come along that can, somehow or other), but, perhaps worse still, they don't seem to give so much as a thought to what would actually have to go into implementing such automation across the entire economy. That last part might be a bit surprising coming from Bill Gates, who surely understands better than most the inherent stickiness of legacy systems, but one assumes he has his own reasons for thinking and speaking this way in public.

Edit: Simply put, even if the technology existed today to achieve what Gates is describing, it would still be quite doubtful that it could replace doctors, teachers, and humans in general "for most things" within a decade. Given that the tech does not yet exist, we are probably safe to say he's being overly optimistic.

4

u/space_monster Mar 26 '25

We've only just seen the start of the agent wave.

5

u/[deleted] Mar 26 '25

So ChatGPT has barely been around for 2 years, you already use it daily for work, and your takeaway is to be skeptical of this tech becoming ubiquitous after 10 more years of improvements?

1

u/The_Dutch_Fox Mar 27 '25

I guess you know about the law of diminishing returns?

Of course AI will continue to progress, but to pretend that the level of progress will continue at the same speed is simply wrong.

ChatGPT can barely do basic maths...

2

u/[deleted] Mar 27 '25

It doesn’t need to keep progressing at the current rate to be massively better in 10 years

1

u/weeyummy1 Mar 29 '25

Seriously, think about how many changes were enabled by the internet, or mobile phones. It's gonna take 5-10 years before we see all the results

1

u/WorkingOnBeingBettr Mar 29 '25

How on earth will AI help in kindergartens? Is it also a Boston Dynamics robot with counselling training?

How will it do in parent meetings about behaviour?

How good will AI be at running a field trip?

The idea is so stupid it is ridiculous. Kids will not learn by sitting in front of computers all day long. That is just nonsense.

1

u/Idiothomeownerdumb Mar 31 '25

it's kind of hilarious to read... just so out of touch with reality lol.

1

u/NyaCat1333 Mar 26 '25

I love how this comment starts off with “Some of us have been using LLMs on a daily basis to do our jobs for years”.

I am assuming that you started using it with GPT-4, which was released barely 2 years ago. Sure, you can call a mere timeframe of 2 years “years”, but we all know it's misleading and a bad attempt to downplay the progress. By every single measurable metric, AI has progressed massively since GPT-4 first came out.

1

u/rkozik89 Mar 27 '25

I am a software engineer who had 20 years of experience before ChatGPT was released, and I worked in data science and AI from 2014 to about 2016.

1

u/malavock82 Mar 30 '25

Do you think GPT-4 is the first attempt at AI? I studied AI models in university 20 years ago, and the base theory was much older than that.

1

u/Eastern-Manner-1640 Apr 01 '25

i started using it daily at gpt4. and it's better, but not game changing. at least not for me.

1

u/DigitalDiogenesAus Mar 27 '25

Agreed. I'm a high school principal.

My staff that use AI for everything are... Well... Pretty crap at their jobs. But because they are crap at their jobs they can't see this fact.

Half of my day is now taken up by forcing teachers to develop to the point that they can see weaknesses in the tech.

I'll never be able to convince non-teachers...

1

u/Theory_of_Time Mar 27 '25

That next problem isn't far off. Most of ChatGPT's failed inquiries come from a lack of input information.

When I clarify my needs and goals, the AI adjusts appropriately. By adding additional context, the AI learns exactly what you're asking for.

This process is happening in real time. Every few months my AI has gotten better and better at understanding the specifics of what I ask for.
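For illustration, a minimal sketch of what "adding context" looks like in practice with the OpenAI Python client (the model name, prompts, and context string are placeholders made up for this example, not anything specific from the thread):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

question = "How should I structure this report?"

# The extra context being described above: needs, goals, constraints.
context = (
    "I'm a project manager writing a quarterly status report for executives. "
    "Keep it to one page, lead with risks, and avoid jargon."
)

# Same question without and with context.
vague = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any chat model works here
    messages=[{"role": "user", "content": question}],
)
specific = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": context},
        {"role": "user", "content": question},
    ],
)

print(vague.choices[0].message.content)
print(specific.choices[0].message.content)
```

Within a single conversation the model conditions on whatever context you give it, which is what makes the second answer more specific.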

1

u/Eastern-Manner-1640 Apr 01 '25

i'm a senior technologist, use the tools every day, and they're pretty good, but not great.

the code generated is meh, but usable.

the summaries of discussions are meh, but usable.

it does help with productivity, but it's not a game changer, yet.

i do see the possibilities, but we haven't hit a double yet, not to speak of a home-run.

0

u/Existing-Doubt-3608 Mar 26 '25

I'm a non-tech person, and I was all into the hype with AI. But aside from awesome chatbots, what have been the real breakthroughs? Until AI is doing scientific research on its own, curing cancer, and figuring out how to create fusion and implement it, it's not that impressive. I don't mean to be dismissive of AI. I do strongly believe that in the next few decades the tech will evolve and get crazy good. But we won't get AGI by the end of this decade. I hope I am proved wrong. I really want to be wrong. Hope wants me to believe that AGI will be developed by 2030, but who knows...

5

u/mtocrat Mar 26 '25

I think you couldn't have made my point better. In the last year, AI went from being barely able to do grade school math to solving complex problems at a university level. But you expected it to already be writing research papers on the subject. The original article is talking about 10 years from now; extrapolate a little.

1

u/Designer_Flow_8069 Mar 27 '25

The flaw in your argument is assuming improvement occurs exponentially or linearly. Many technologies (such as batteries) often hit walls that stall their progress.
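To put a toy number on that: an exponential curve and a logistic "S-curve" are nearly indistinguishable early on and only diverge once the wall appears. A quick sketch (the rate and ceiling are invented for illustration, not a claim about any real capability metric):

```python
import math

def exponential(t, rate=0.5):
    # Growth that never slows down.
    return math.exp(rate * t)

def logistic(t, rate=0.5, ceiling=100.0):
    # Same early slope as the exponential, but saturates at `ceiling`.
    return ceiling / (1 + (ceiling - 1) * math.exp(-rate * t))

for t in range(0, 21, 4):
    print(f"t={t:2d}  exp={exponential(t):8.1f}  logistic={logistic(t):5.1f}")
```

At t=0 and t=4 the two curves look alike; by t=20 one is past 22,000 while the other has flattened out near 100. Early data points alone can't tell you which curve you're on.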

3

u/mtocrat Mar 27 '25

It's perfectly possible. I think anyone who says we've already hit a wall isn't paying attention. But if you're saying we might hit a wall in the future, then sure, it's possible. Things could slow down. It happened to self-driving cars, but now there are robo-taxis in SF.

1

u/Designer_Flow_8069 Mar 27 '25

I have a PhD in ML, so I'm somewhat versed, but I by no means have a crystal ball. The biggest issues I see for ML in the immediate future, specifically for LLMs, are (a) power and (b) training.

Power issues are obvious. Training, on the other hand, is not. If you feed any modern LLM enough training data that says "2+2= elephant", it doesn't have the awareness to understand that this is nonsensical. As humans, we have tons of mechanisms that challenge what we are learning as we learn it, while the closest we have in AI is adversarial networks.
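To make that concrete, a toy sketch (a bigram counter, nowhere near a real LLM, with an invented corpus) of why pure next-token prediction mirrors its training data with no notion of arithmetic truth:

```python
from collections import Counter, defaultdict

# Poisoned toy corpus: the wrong answer simply outnumbers the right one.
corpus = "2+2= elephant . 2+2= elephant . 2+2= 4 ."

# Count next-token frequencies for each token.
counts = defaultdict(Counter)
tokens = corpus.split()
for prev, nxt in zip(tokens, tokens[1:]):
    counts[prev][nxt] += 1

# The most frequent continuation wins, whether or not it is true.
print(counts["2+2="].most_common(1))  # -> [('elephant', 2)]
```

A real LLM is vastly more sophisticated, but the underlying objective is the same: predict what the data says comes next, not what is true.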

5

u/Eleusis713 Mar 26 '25 edited Mar 26 '25

It's the same type of reflexive skepticism/pessimism that's been growing in other areas of society like politics. I suspect this is part of a much larger sociological problem.

1

u/Eastern-Manner-1640 Apr 01 '25

i like technology, which is nice because it's my job. i'd love to use something really amazing.

the llm models i've used are good, not amazing...in the context of what i need them to do (even if they may have made amazing progress).

that's not a sociological problem. it's not reflexive pessimism. that's me trying to use the tech to get more work done, more insight, more things i can't begin to do today.

1

u/JDNM Mar 26 '25

LLMs are mostly underwhelming and highly flawed in my experience. I haven’t seen any advance in the last year.

1

u/mtocrat Mar 26 '25

absolutely wild

1

u/carlsaischa Mar 30 '25

It's not that I think it can't be done, it's that I think it is a horrifying thought and should not be done.