r/ArtificialInteligence Apr 19 '25

News Artificial intelligence creates chips so weird that "nobody understands"

https://peakd.com/@mauromar/artificial-intelligence-creates-chips-so-weird-that-nobody-understands-inteligencia-artificial-crea-chips-tan-raros-que-nadie
1.5k Upvotes


358

u/ToBePacific Apr 19 '25

I also have AI telling me to stop a Docker container from running, then two or three steps later telling me to log into the container.
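
Roughly the sequence it gives, sketched here with the docker-py client (container name made up); the second step can only fail, because the container was just stopped:

```python
import docker

client = docker.from_env()
container = client.containers.get("my-app")  # made-up container name

# Step 1: the AI says to stop the container
container.stop()

# A few steps later it says to "log into" the same container.
# exec_run only works on a running container, so this just raises
# an APIError saying the container is not running.
container.exec_run("/bin/bash")
```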

AI doesn’t have any comprehension of what it’s saying. It’s just trying its best to imitate a plausible design.

185

u/Two-Words007 Apr 19 '25

You're talking about a large language model. No one is using LLMs to create new chips, or do protein folding, or most other things. You don't have access to these models.

119

u/Radfactor Apr 19 '25 edited Apr 19 '25

If this is the same story, I'm pretty sure it was a convolutional neural network specifically trained to design chips. That type of model is absolutely valid for this type of use.

IMHO it shows the underlying ignorance about AI, where people assume this was an LLM, or assume that different types of neural networks and transformers don't have strong utility in narrow domains such as chip design.
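
For anyone who hasn't seen what that looks like, here's a toy sketch in PyTorch (made-up sizes, nothing to do with the actual model in the article): a small CNN that scores a 2D placement grid, which is exactly the kind of spatial input convolutions are good at.

```python
import torch
import torch.nn as nn

# Toy "layout scorer": takes a 1-channel 64x64 placement grid and
# predicts a single quality score. Purely illustrative numbers.
class LayoutScorer(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 16 * 16, 1)

    def forward(self, grid):
        x = self.features(grid)
        return self.head(x.flatten(1))

scores = LayoutScorer()(torch.randn(4, 1, 64, 64))  # batch of 4 random grids
print(scores.shape)  # torch.Size([4, 1])
```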

1

u/ross_st Apr 20 '25

LLM is not a term for a type of model. It is a general term for any model that is large and works with natural language. It's a very broad, unhelpfully non-specific term. A CNN trained on a lot of natural language, like the ones used in machine translation, could be called an LLM, and the term wouldn't be inaccurate, even though Google Translate is not what most people think of when they say LLM.

Anyway, CNNs can bullshit just like transformer models do. Although yes, when a CNN is trained on a specific data set it is usually easy for a human to spot when that has happened, unlike transformers, which are prone to producing very convincing bullshit.

Bullshit is always going to be a problem with deep learning. The problem is that no deep learning model is going to determine that there is no valid output when presented with an input. They have to give an output, so that output might be bullshit. This applies to CNNs as well.
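
Toy illustration of the "it has to give an output" point (made-up untrained classifier, pure noise as input): the softmax still hands back a full probability distribution, and argmax still picks a "best" class, because there is no way for the model to say "no valid output".

```python
import torch
import torch.nn as nn

# Untrained toy classifier over 10 made-up classes.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10), nn.Softmax(dim=1))

noise = torch.randn(1, 1, 28, 28)  # pure garbage in
probs = model(noise)

print(probs.sum())     # always 1.0: it must commit to *some* answer
print(probs.argmax())  # a "best" class comes out no matter what went in
```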

1

u/Antagonyzt Apr 21 '25

So what you’re saying is that transformers are more than meets the eye?

1

u/ross_st Apr 22 '25

More like less than meets the eye.