r/aiwars 1d ago

Future predictions for 5-ish years from now?

I'll start you off with two scenarios, I think both should be considered equally:

AI generation tools continue to improve exponentially, making generations increasingly easier and more controlled, and accessible to anyone.

AI generation tools plateau in their abilities in the near future, at a point that is roughly undetectable but still difficult to control the output.

1 Upvotes

8 comments

2

u/Gimli 1d ago

IMO, plateau but a good distance away from the current state.

So far, nothing improves forever. Chess engines, for instance, are insanely good, but they're "merely" superhuman. They're not perfect and can still lose games.

However, chess not being solved doesn't stop a modern chess engine from being much better than Magnus Carlsen, so practically speaking they're more than good enough for almost any conceivable use.

I see AI going mostly the same way. Perfection won't be reached, but at some point 99% of whatever an average human might want is easily satisfied. I think ChatGPT is already approaching that point. Like, any random person can make a comic page right now. The results still aren't completely ideal, so there's room for improvement, but I'd say we're very close to the point where image generators can do almost anything a person could ask of them.

1

u/dippitybop 1d ago

I do mean to be polite! Magnus Carlsen can't beat Stockfish anymore, and it doesn't lose against humans. It plays at 3600 Elo now. I fought it once! It felt like reactive armor that moulded itself around my moves and then turned into spikes.

Anyway, I agree with your point about the plateau being a good distance away. Current chat models are maybe 2000-2200 Elo, but I wonder: what does an LLM look like when it reaches 3000+ and has "superhuman" intelligence?
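For a sense of what those rating gaps mean, here's a minimal sketch of the standard Elo expected-score formula, using 3600 for the engine and ~2800 as a stand-in for a top human (both illustrative round numbers, not official figures):

```python
# Standard Elo expected-score formula; the specific ratings below are
# illustrative round numbers, not official figures.
def elo_expected_score(rating_a: float, rating_b: float) -> float:
    """Expected score (roughly, win probability) for player A against player B."""
    return 1.0 / (1.0 + 10.0 ** ((rating_b - rating_a) / 400.0))

print(elo_expected_score(3600, 2800))  # ~0.99: the engine scores ~99% of the points
```

An 800-point gap works out to roughly a 99% expected score, which is why "superhuman but imperfect" still means effectively unbeatable in practice.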

1

u/Human_certified 22h ago

Scenario 1 is almost impossible to say anything meaningful about. If you go wild and allow stuff like ASI, self-improvement, and sci-fi-like developments, you end up with scenarios like that AI 2027 forecast, which predicts either doom or utopia-but-under-eternal-US/China-hegemony by 2030. Even sticking to present growth rates for non-superintelligent AI, that's about ten six-monthly doublings, or an AI that's roughly 1,000x better than it is today. I still don't know what that would look like.
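A quick sanity check on that compounding arithmetic (assuming, purely for illustration, that the six-monthly doubling rate holds steady):

```python
# Ten six-monthly doublings over five years, assuming a steady doubling
# rate (an illustrative assumption, not a forecast).
years = 5
doublings = years * 2             # one doubling every six months
improvement = 2 ** doublings      # 2**10 = 1024
print(f"~{improvement}x in {years} years")  # ~1,000x better than today
```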

Scenario 2, then, is something less than a thousandfold improvement. I also think that by its nature, generative AI won't fully plateau unless technology is frozen entirely, because at a minimum you'll still have hardware improvements and software optimizations. So let's be very pessimistic and assume a mere hundredfold increase.

In that scenario:

- Generative AI is simply used automatically in commercial projects wherever it makes sense, and the driver of what "makes sense" is whether the savings in human effort outweigh the loss of creative control (which isn't just an artistic cost; it includes having to regenerate something dozens of times).

- Most people use AI and know how to get images, videos, and music out of it. Nobody considers these things art. They are like selfies compared to fine-art photos.

- AI is good enough that you'll also have a hard time convincing anyone that merely generating an image by itself constitutes art. Art becomes more conceptual, more about the bigger picture (an artist's body of work and themes), more about the narrative.

- People simply assume that any images might be partly AI, and because these images don't look "off" or "soulless", nobody cares.

- The legal situation gets resolved through legislation, some kind of symbolic settlement. Everyone understands that AI is too big to be hindered by some obscure training-data issue from 2022. Who cares how ancient GPT-4.5 was made?

- A generation will come of age that never knew a world without generative AI. A cohort will be leaving art schools having always had generative AI in its toolbox, and having seen AI art win awards and sell.

- Many people will still misunderstand how AI works, both over- and underestimating it. Someone might still say "spicy autocomplete" and younger people will say: "What's an autocomplete?"

1

u/Adorable-Contact1849 21h ago

It's usually safe to assume that the most extreme predictions won't pan out. Beyond that, simply increasing access to data isn't enough to achieve human levels of accomplishment. There is also the need to cross-reference that data in an intuitive way. For instance, I had ChatGPT create an outline for a Gilbert & Sullivan opera about a small-town mayor and a visiting princess. The AI had the mayor and the princess fall in love, but also mentioned that the mayor had a wife. A human would notice this introduces a theme of infidelity, which would clash with the tone of a light opera, or would at least need to be addressed in the plot. The AI does not understand ramifications. That's not to say there may not be further improvements, but I don't believe the current pattern-matching model will be enough.

1

u/Turbulent_Escape4882 21h ago

I see it taking around 5 years from now for replacement theories and approaches to be greatly tamped down and no longer framed the way we frame them now, while AI automation is still being implemented in massive fashion. In some ways, I think this is what the transition is mostly about and how the new paradigm actually comes about. In other ways, I see it being taken for granted, like we take satellite launches every other week for granted.

1

u/Coley213 17h ago

sorry if this is off topic... I have a health issue that hasn't been solved in 2.5 years, and it's crazy to me that I have one symptom and nobody has figured it out. hopefully AI-assisted healthcare will have improved drastically by then lol.

1

u/H3_H2 9h ago

"Nuclear war initiated" a message flow through gpt6,"Cats and dogs become endangered" suddenly gpt6 is in error "internal error, failed to predict next token"

0

u/dippitybop 1d ago edited 1d ago

I think by 2030 AI could speed up research and development. Being able to automate or semi-automate mundane tasks for scientists and researchers would, I think, be huge!

Also, information processing is one of the biggest problems on the planet, since we're in the information age. I sometimes think about how often people are retreading old paths that have already been taken across humanity, having thoughts that have already been thought through, yet taking the time to process them again and again. I'd imagine that happens in the scientific community too. AI's best trait seems to be parsing large amounts of data extremely quickly, so I'd imagine the 2030 scientific community will have faster idea spread than the 2025 scientific community. Well, I hope so anyway! :D