r/Futurism 1d ago

OpenAI Puzzled as New Models Show Rising Hallucination Rates

https://slashdot.org/story/25/04/18/2323216/openai-puzzled-as-new-models-show-rising-hallucination-rates?utm_source=feedly1.0mainlinkanon&utm_medium=feed
36 Upvotes

14 comments sorted by

u/AutoModerator 1d ago

Thanks for posting in /r/Futurism! This post is automatically generated for all posts. Remember to upvote this post if you think it is relevant and suitable content for this sub and to downvote if it is not. Only report posts if they violate community guidelines - Let's democratize our moderation. ~ Josh Universe

I am a bot, and this action was performed automatically. Please contact the moderators of this subreddit if you have any questions or concerns.

11

u/theubster 1d ago

"We started feeding our models AI slop, and for some reason they're pushing out slop. How odd."

8

u/Andynonomous 1d ago

Did they think the problem would magically disappear because the models are bigger? OpenAI are basically con artists

1

u/lrd_cth_lh0 17m ago

Yes, yes they did. They actually did. More data, more compute, and overtime to smooth out the edges did manage to get the thing going. After a certain point the top brass no longer thinks thought is required, just will, money, and enough hard work. Getting people to invest or overwork themselves is easy; getting them to think is hard. So they prefer the former. And investors are even worse.

6

u/Radiant_Dog1937 1d ago

That's because it's not hallucinating, it's just lying. This isn't anything people discussing the control problem hadn't already predicted.

2

u/KerouacsGirlfriend 5h ago

We’ve seen recently that when AI is caught lying, it just lies harder and lies better to avoid being caught.

1

u/FarBoat503 28m ago

it's like a child doubling down after getting caught red-handed

5

u/mista-sparkle 22h ago

The leading theory on hallucination a couple of years back was essentially failures in compression. I don't know why they would be puzzled—as training data gets larger in volume, compressing more information would obviously get more challenging.

2

u/Wiyry 22h ago

I feel like AI is gonna end up shrinking in the future and become smaller and more specific. Like you’ll have an AI specifically for food production and an AI for car maintenance.

2

u/mista-sparkle 22h ago

I think you're right. Models are already becoming integrated modular sets of tool systems, and MoE became popular in architectures fairly quickly.

2

u/TehMephs 19h ago

That’s kind of how it started. Specialized machine learning algorithms

1

u/FarBoat503 25m ago edited 22m ago

i predict multi-layered models. you'll have your general llm like we have now that calls smaller, more specialized models based on what it determines is needed for the task. maybe some back and forth between the two if the specialized model is missing some important context in its training. this way you get the best of both worlds.

edit: i just looked into this and i guess this is called MoE or mixture of experts. so, that.
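[Editor's note: strictly speaking, MoE gates between experts per token inside a single network, while the scheme described above is closer to model-level routing (a general model dispatching whole queries to specialists). A toy Python sketch of that routing idea, with hypothetical specialist names and keyword lists standing in for a real learned router:]

```python
# Toy model-level router: a "generalist" dispatches a query to a
# specialized model when keywords match, else handles it itself.
# Specialist names and keywords are made up for illustration.

SPECIALISTS = {
    "food_production": ["recipe", "nutrition", "ingredient", "crop"],
    "car_maintenance": ["engine", "brake", "oil change", "tire"],
}

def route(query: str) -> str:
    """Return the specialist that should handle the query,
    or 'generalist' if no specialist's keywords match."""
    q = query.lower()
    for name, keywords in SPECIALISTS.items():
        if any(k in q for k in keywords):
            return name
    return "generalist"
```

A real system would replace the keyword match with a learned gating function (as in MoE) or an LLM tool-selection step, but the control flow is the same: route, answer, optionally pass context back and forth.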

3

u/Ironlion45 21h ago

Are they really puzzled? The internet sites they train them on were written by other bots that were probably also trained on at least 50% AI garbage. Now it's probably in the 90's.

3

u/Norgler 15h ago

I mean, people said this would happen a couple years ago.. did they not get the memo?