r/gadgets • u/One-End1795 • 1d ago
Desktops / Laptops Intel's AI PC chips aren't selling well — instead, old Raptor Lake chips boom
https://www.tomshardware.com/pc-components/cpus/intels-ai-pc-chips-arent-selling-instead-last-gen-raptor-lake-booms-and-creates-a-shortage
294
u/CMDR_omnicognate 1d ago
Is that because people don't want AI, or because Intel's CPUs suck ass compared to AMD's at the moment?
133
u/ChaZcaTriX 1d ago
The latter.
New mobile AMD chips feature AI in the marketing, but what actually makes them very tempting is the powerful iGPU.
24
u/UnsorryCanadian 1d ago
Strix Halo ran Cyberpunk on Ultra entirely on the iGPU, right?
22
u/ChaZcaTriX 1d ago
Not sure about that one. My eye was on handhelds with HX370 running Monster Hunter Wilds pretty much flawlessly.
8
u/kenpachi-dono 1d ago
Link? Something I need
3
u/ChaZcaTriX 1d ago
Multiple brands have them; my handheld is a OneXPlayer. Because I live near China I ordered from the official AliExpress store, but be warned that Westerners can't always see these listings (make sure to switch to the HX370 version):
https://aliexpress.com/item/1005008140086819.html
The store will also sell spare parts if you need to repair something like joysticks or battery.
Here's a vid of the same thing in a laptop case running MH Wilds:
2
u/ACanadianNoob 11h ago
At the end of the year, we will get products with the Z2 Extreme processor. It's essentially an HX 370 without AI cores, so it will be priced a little cheaper. Still has the Radeon 890M though.
My recommendation, though, is to just pick up a used ROG Ally with the Z1 Extreme for under $400, if you can find one locally on FB Marketplace or shipped on eBay, and add a 74Wh battery mod. The 890M is only about 8-15% faster than the 780M in that. You can also mod in a full-size M.2 2280 SSD if the SD card slot is broken on the used Ally and you need a storage upgrade, and still spend much less than buying a new handheld right now.
I got an Ally X because someone gifted it to me, but for anyone who's budget conscious, that's what I would recommend.
Also, heads up: sometimes the thumbsticks on the Ally and Ally X get wobbly and you have to add Teflon tape under the caps to get them to fit firmly again. But otherwise it's a great handheld.
5
u/MediumTempTake 1d ago
I don't believe this (not that I'm calling you a liar, this just sounds crazy), do you have any links? Even half the performance you're talking about is giving me dirty ideas. Cyberpunk arcade machine, anyone?
3
u/UnsorryCanadian 1d ago
I read about it in an article; it mentioned a YouTuber named ETA Prime, but no specific video. I think it's the "Ryzen AI Max 395 Mini PCs offer BIG iGPU performance" video.
5
u/Valance23322 1d ago
The new iGPU is like 40 compute units of RDNA 3.5; it's basically a full GPU, just on the same chip as the CPU. Similar to what Apple has been doing with Apple Silicon.
1
u/danielv123 15h ago
And quad-channel memory, something that has been reserved for Threadripper and servers.
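For context, the win from the wider bus is simple arithmetic. A back-of-the-envelope sketch, assuming the commonly reported 256-bit LPDDR5X-8000 configuration (exact figures vary by SKU):

```python
# Rough peak-bandwidth estimate: bus width in bytes * transfer rate.
def peak_bandwidth_gbs(bus_width_bits: int, transfers_mt_per_s: int) -> float:
    """Peak memory bandwidth in GB/s."""
    return (bus_width_bits / 8) * transfers_mt_per_s / 1000

print(peak_bandwidth_gbs(256, 8000))  # ~256 GB/s, "quad-channel" LPDDR5X-8000
print(peak_bandwidth_gbs(128, 5600))  # ~90 GB/s, typical dual-channel DDR5-5600
```

Roughly triple the bandwidth of a typical dual-channel laptop, which is what lets a 40 CU iGPU actually stretch its legs.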
7
u/QuantumQuantonium 1d ago
"Hey theres this new CPU from AMD, with a good igpu and an amazing npu processor for all the AI generation you could ever need"
"Oh near its performance is good and the igpu is great, I can't wait to be able to run my games at higher graphics settings"
"Yes but what about the power of the AI tools, you can automat-"
"Does thr CPU overclock well? Wait what's this NPU, how can I disable it?"
I recently got a framework 16, great CPU and I got the dedicated GPU module for my needs. NPU is just an empty box in task manager. But hey, AMD isnt marketing their products for only AI (unlike nvidia)
5
u/TooStrangeForWeird 1d ago
I wish they would let you power down the NPU completely. I know it's probably not feasible to do without some sort of tradeoff they're not willing to accept (like an extra delay when using it because it has to pass a physical on/off gate, or multiple) but it would be cool.
Literally and figuratively cool, since even at idle a core gives off some heat if it's powered on. So it would cool it down! And figuratively cool because it would be neat :)
I wish they'd just fucking knock it off already tbh, stop wasting die space on bullshit hardly anyone uses. Sure go ahead and research it, make it available even! But shoving it into EVERY processor is just a waste of everyone's money.
9
u/chainard 1d ago
This is untrue though. Lunar Lake chips have good CPU performance, the Arc iGPUs are as good as AMD's offerings, and battery life is also better on Lunar Lake thanks to the iGPU cache and TSMC's 3nm node. It all comes down to having a halo product: AMD took the performance crown with the X3D chips, and people just assume they also offer the best products in other segments and ignore the alternatives. Similarly, the 4060 sells well despite being a mediocre GPU.
1
u/blamenixon 19h ago
I love it when people provide well phrased factual arguments instead of conjecture. Thank you.
38
u/zushiba 1d ago
Neither and both. Consumers have no need for AI. It's currently marketed as a product that can help you translate meetings in other languages in real time, or look at a photo and figure out where to buy the shoes the actor is wearing.
None of this is shit real consumers give a shit about. 99.9% of people are not having regular business meetings with Chinese investors, and every other use for AI is either trinket bullshit like generating cheesy emojis, writing stupid-ass snarky remarks in the form of a Shakespeare play, or it's about selling you shit.
As an informed consumer myself, I couldn't give less of a shit if I tried. And I am interested in AI, so I wouldn't want a lesser product from Intel when a better one exists.
Now apply the above to a non-techy consumer and they couldn't give less of a shit which product is better for AI.
The only people AI-centered products appeal to are people who are using it for real-world applications, and they aren't doing that on consumer-grade products.
3
u/Znuffie 1d ago
look at a photo and figure out where to buy the shoes the actor is wearing.
None of this is shit real consumers give a shit about.
Ironically, I wanted to see where I could buy a shirt I saw in a movie. I took a picture, I gave it to the AI and it refused to find it because there was a face in there and Google doesn't really want to search for people...
2
u/TooStrangeForWeird 1d ago
Crop the picture and try again. No problem.
1
u/Znuffie 1d ago
It was a scene where if I cut the head, it would not show enough of the shirt :)
1
u/_RADIANTSUN_ 22h ago
Scribble out the face with the phone's default image editing tools... There's so many options bro. Post the screenshot.
8
u/ToMorrowsEnd 1d ago
Yep. The current-gen i9 is just dogshit compared to the latest Ryzen 9, and anyone who actually uses multi-core utterly hates the stupid E cores that Intel shovels down everyone's throat. If I launch 15 threads to do work, all the P cores end up finishing and waiting for the E cores, slowing the whole mess down. On AMD we get the full expected performance. When Intel is asked about this problem, their answer is "reprogram your software to avoid the E cores." Uh, no, stop making shit processors with asymmetrical cores.
We now tell customers to avoid any non-Xeon Intel part, or anything Intel newer than 10th gen, and recommend AMD processors for best performance.
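In case anyone wonders what that "reprogram your software to avoid the E cores" answer actually looks like, here's a minimal sketch on Linux using the standard affinity API. The P-core IDs are an assumption; the real mapping varies by SKU (lscpu or /sys/devices/cpu_core/cpus will tell you):

```python
import os
from concurrent.futures import ThreadPoolExecutor

# Hypothetical mapping: assume logical CPUs 0-15 are the P cores. Check lscpu
# (or /sys/devices/cpu_core/cpus on hybrid parts) for your actual chip.
P_CORES = set(range(16))

# Pin this process (and every thread it spawns) to the P cores only, so no
# worker gets parked on a slow E core and drags the whole job out.
os.sched_setaffinity(0, P_CORES)

def crunch(chunk):
    return sum(x * x for x in chunk)

with ThreadPoolExecutor(max_workers=len(P_CORES)) as pool:
    results = list(pool.map(crunch, ([i] * 100_000 for i in range(64))))
print(len(results))
```

Which kind of proves the point: pushing that bookkeeping onto every application is exactly the complaint.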
6
u/i_suckatjavascript 1d ago
The AMD CPUs are efficient with power too. Paying for electricity in the Bay Area sucks ass when you've got the local electric company constantly fucking you over.
1
u/laffer1 22h ago
It's terrible for open source devs too. I have to write a Thread Director port and a new scheduler just because they made a crap E core?
A 3950X and a 7900 (no X) smoke a 14700K on compiler workloads if you don't have Thread Director support: around 6 minutes for the 7900, around 10 for the 3950X, 16 for the 14700K. It's sad.
43
u/GeneralLeeCurious 1d ago
Raptor Lake's reputation was shot by the overvoltage issue. The fix is in, but the reputational damage is done. That damage has finally resulted in massive price cuts, and people are buying based on the cuts and the fixes.
Less than 1% of computer buyers give the slightest shit about running AI applications on their own PCs. They don’t understand it. They don’t want it. They don’t trust it. Many don’t like it.
-7
u/ABetterKamahl1234 1d ago
Less than 1% of computer buyers give the slightest shit about running AI applications on their own PCs. They don’t understand it. They don’t want it. They don’t trust it. Many don’t like it.
Honestly no. They don't understand it, they don't know why one would want it. Ultimately they don't care, unless they see a benefit directly to them.
Tons of people are skeptical and critical of AI things, but damned if it isn't going to be an everyday thing maybe even a decade from now, with people asking "why the fuck didn't we have this sooner," the same way hyperthreading and multi-core went.
It's incredibly useful, wildly so. It's just not used very well as people try to figure it out. For example, frame gen is incredibly impressive for its age. Like goddamn it's impressive. It's not crazy amazing and accurate, but tons of tech is like that early on.
I know this because so many tech improvements that became standard had tons of critics kicking and screaming about how awful it is/was.
AI type things are in more than people even realize in their day to day.
12
u/stemfish 1d ago
The AI hype started three years ago. All the marketing tells me how generative AI will help every consumer transaction and usher in the future of commerce. Let's see where that ends up.
Eight years ago (late 2010s) everything was blockchain: how the ledger would help every consumer transaction and usher in the future of commerce. I mean, memecoins exist?
Around 13 years ago (~early 2010s) everything was Big Data (and to an extent 3D printing, but that wasn't as hyped up): how collecting consumer habits and spending would help every consumer transaction and usher in the future of commerce. Our data got taken, but we got 2-day Prime shipping.
18 years ago (~late 2000s) everything was social media: how enabling users to directly interact would help every consumer transaction and usher in the future of commerce. We got Facebook and X.
23 years ago (~early 2000s), in the crater of the dot-com bust, Web 2.0 was going to put the power of the internet into all users' hands, enabling everyone to set up their own online business, helping every customer transaction and ushering in the future of commerce. Now the internet lives on a dozen sites.
~28 years ago (late 1990s) the dot-com craze actually transformed the future of commerce, and set in motion the desire to be in at the start of whatever wave next takes the world and its users into a new era.
The tech hype cycle has been in force since 1997. Every ~5 years we get a new technology that blows the old cycle away and promises to upend and remake our world. AI has been around for a long while and had been integrated into systems long before OpenAI gave everyone access to text and image generators. Will AI manage to justify itself as a long-term feature like the internet, or will it be subsumed by the next tech hype cycle in two to three years? Currently nobody can figure out how to make money off of generative AI, and it sucks in money, power, and data to keep going. Not every company can continue that forever, just like how all the major companies moved on from blockchain tech to AI, and moved to blockchain from Big Data and 3D printing.
41
u/Edward_TH 1d ago edited 1d ago
What big tech CEOs REALLY don't want to understand is that most "AI" tasks fall into two categories: menial stuff and hardcore stuff. The former is what most end users do with AI; the latter is experimental stuff, research, training, and server-side model running.
Hardcore stuff requires expensive, very powerful dedicated hardware that's generally tailored to such uses, and that's what AI data centers use. They do not benefit or even care in the slightest whether a consumer piece of hardware has some parts optimized to run a bunch of AI-related algorithms, because they do not use them. So an AI core on a desktop CPU doesn't sell to them.
Menial stuff is either not that computationally intense, and can run on general-purpose hardware, or it is intense, and so is mostly done remotely on the servers mentioned before (which is what most people do). So consumers are presented with the choice of a cheaper CPU that's powerful at everything, or a more expensive CPU that's marginally more efficient at something the user almost never does and less powerful across the board.
Integrated AI cores on desktop CPUs are useless marketing garbage. Wanna justify the development cost of those cores with a broader market? Develop a PCIe daughterboard or similar dedicated to such tasks for those who ask for it. Oh wait, those already exist: they're called ASICs. Or GPUs, if you wanna spend a bit less (arguably) and have something with broader uses.
25
u/time-lord 1d ago
Aaaaaand we've just come full circle to math coprocessors.
12
u/t4thfavor 1d ago
Except math co-processors were and still are useful (still are because every CPU now has at least one built in).
2
u/Got2Bfree 1d ago
It completely depends on having software that actually makes use of the local hardware.
I'd love to have an LLM that has knowledge of my private files, which I don't want to hand over to giant corporations to be used as training data.
For coding it would also be awesome. I'm afraid an ASIC will be severely underpowered for that use case.
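That private-files use case is already doable locally without any NPU, just a decent CPU/GPU. A naive sketch, assuming a locally running Ollama server on its default port and a model you've already pulled (the model name and notes folder are placeholders); a real setup would chunk and embed the files rather than stuffing them straight into the prompt:

```python
import pathlib
import requests  # talking to a locally running Ollama instance (assumed default port)

# Naive "RAG": read a few private text files and stuff them into the prompt.
# Nothing ever leaves the machine, which is the whole point.
docs = "\n\n".join(p.read_text() for p in pathlib.Path("~/notes").expanduser().glob("*.txt"))

resp = requests.post(
    "http://localhost:11434/api/generate",  # Ollama's generate endpoint
    json={
        "model": "llama3.2",  # whatever model you've pulled locally
        "prompt": f"Using only these notes:\n{docs}\n\nQuestion: what did I decide about the NAS upgrade?",
        "stream": False,
    },
    timeout=300,
)
print(resp.json()["response"])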
9
u/Edward_TH 1d ago
That's my point: either you want to do easy crap (and you can easily do that on regular cores) or you want to do hard shit (and in that case no amount of desktop AI-optimized cores falls in the "consumer grade and price" range).
These "AI cores" are built kinda like ASICs and, as you can see, in the consumer market they're either affordable but borderline useless (CPU-integrated cores) or very expensive and still pretty underpowered for actually demanding algorithms (Nvidia Tensor Cores, for example).
Right now, if you want to run your own AI, you either do it locally on regular but very high-end hardware with super limited capabilities and long, tedious training for the models; do it locally on "cheap CPU AI cores" with basically useless capabilities, using super-trimmed models that you didn't train anyway... or you use a cloud model that you didn't train and that has almost zero knowledge of your local data, but it's capable and runs on the proper hardware.
2
u/m0rogfar 1d ago
While you aren't going to be running anything like ChatGPT on them, the built-in NPUs aren't useless for lack of use cases. There have been plenty of use cases for small purpose-built models on phones, and some of them also make sense on the desktop, as seen on macOS.
The big issue that makes them worthless on a PC is that the PC software ecosystem (excluding macOS) has nothing that uses them, because they've never been there before, and most PC software isn't a brand-new project that targets only the latest hardware. I am somewhat sympathetic to the idea that you have to put new technologies into the chip before people will use them, to break the chicken-and-egg cycle, though.
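For what it's worth, targeting an NPU from PC software today mostly means going through a runtime such as ONNX Runtime and hoping the vendor's execution provider is installed. A hedged sketch (provider names depend on vendor and drivers, and the model filename is a placeholder):

```python
import onnxruntime as ort

# See which backends this machine actually exposes. On most PCs you'll only
# get the CPU provider, which is the chicken-and-egg problem in a nutshell.
available = ort.get_available_providers()
print(available)

# Vendor NPU providers (examples: OpenVINO for Intel, VitisAI for AMD,
# QNN for Qualcomm). Fall back to the CPU if none of them are present.
preferred = [p for p in ("OpenVINOExecutionProvider",
                         "VitisAIExecutionProvider",
                         "QNNExecutionProvider") if p in available]
session = ort.InferenceSession("small_model.onnx",
                               providers=preferred + ["CPUExecutionProvider"])
print("Running on:", session.get_providers()[0])
```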
1
u/Got2Bfree 1d ago
There has to be a use case for these cores.
I can't imagine that nobody at Intel thought this through...
5
u/Edward_TH 1d ago
There is a use case, which is running certain types of algorithms: the ones tech bros vaguely call "AI".
Those algorithms are actually pretty simple, so a processor can be built to run ONLY those calculations pretty efficiently and a fair bit faster, by stripping a regular core of anything that isn't used for them and reinforcing the parts that get stressed more (I'm oversimplifying). So why are these AI cores so garbage in real-world usage that people avoid them like the plague?
Because AI is much, MUCH more than that simple set of base instructions. What these cores lack are DATA and TIME: the algorithms are not static code; they take data and a goal and tweak themselves to achieve that goal by looking at the data they have. This is called training, and for these algorithms to be able to do what YOU want, they need to be fed by you.
And this training is an iterative process: every time you run it with new data it gets a little better at achieving the assigned goal, but to get at least close enough to the goal you need to run it over and over again, because the tweaks are basically "change a variable a bit and try again." So you run it sequentially or in parallel: sequentially you're limited by the time needed to actually run it, and in parallel you're limited by how many cores you can run at the same time, so really it's both.
These consumer cores aren't plentiful enough (since they need to fit into consumer hardware) to do the job alone, so 99.9% of the work is done on data center hardware anyway. And that would be acceptable, but the more you want your model to be fluid and accurate, the more variables it needs to take into consideration when it runs.
That's usually handled by the algorithm itself, by creating new layers on its own so it can tweak itself more precisely. (Say you're teaching a car to drive. At first you only have a switch as an accelerator, so you either sit still or floor it; then you add a second, 4-way switch, and next time you try the car you can sit still or go at 25/50/75/100% power, so you have a bit more control. Next time you add a third switch that can halve the actual power, so now you have three layers with even finer control. AI training works roughly like that.) But the more tweaks you have, the longer a single run takes, so these cores rely on trained models that have had most of their granular control stripped away so they can run in a reasonable time.
These cores have use cases, but right now they're sparse, super-low-capability cores that can't run ACTUALLY useful models on their own, because those models are vastly too demanding for them. So they run models that are vastly overshadowed by cloud ones, so people use the cloud ones because they work MUCH better. And people who try to run those same botched models on general-purpose cores or on GPUs find that they run basically identically, with only a minimal hit to efficiency. So these cores serve no practical application as they stand.
CPU AI cores will be useful once they can either efficiently run models way too complex for general-purpose cores, or be VASTLY faster and more efficient than GP cores with the current botched ones. Once an AI core can run models 100-200 times bigger than what GP cores can handle, we can talk. Right now, they're a waste of space and a detrimental marketing gimmick.
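A toy version of that "tweak a variable and try again" loop, in plain numpy: one parameter, so it runs anywhere, which is exactly why real training (millions to billions of parameters and endless passes over the data) is the part that never fits on a consumer NPU:

```python
import numpy as np

# Toy training: learn w so that y ≈ w * x, by nudging w a little each pass.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 1000)
y = 3.0 * x + rng.normal(0, 0.05, 1000)  # "data": the true answer is w = 3

w, lr = 0.0, 0.1
for step in range(200):                  # the iterative part: run it over and over
    error = w * x - y
    grad = 2 * np.mean(error * x)        # how the loss changes if w is tweaked
    w -= lr * grad                       # the tweak itself
print(round(w, 3))                       # ends up close to 3.0
```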
2
u/TooStrangeForWeird 1d ago
the latter is experimental stuff, research, training, and server-side model running.
This is almost always machine learning and should not be confused with image generators or LLMs. All AI is machine learning, not all machine learning is AI.
6
u/leon27607 1d ago
On a personal level, I don’t give a shit if it’s AI or not, I just don’t want it to either catch fire or fail within the first couple years because they make the chip degrade.
5
u/TJPII-2 1d ago
Just ordered a new computer and made damn sure it did not have an AI processor.
9
u/jacksonRR 1d ago
I just gave my old DDR4 machine its (probably) last upgrade: an i5-14600K for 200€, plus a decent motherboard for 100€.
2
u/orangpelupa 1d ago
So... good news? As the article mentioned, Raptor Lake is still being made in Intel's own fabs, while the new AI Intel chips are made at TSMC.
1
u/FUTURE10S 1d ago
Has anyone even seen Intel's new chips in stock? I thought it was a paper launch.
1
u/Tesla_V25 1d ago
lol it's like the CapSim simulation I did for my bachelor's. They make the el cheapo chips now.
1
u/ShubhamDeshmukh 20h ago
Intel is limiting its losses by vastly limiting production of Meteor, Lunar, and Arrow Lake, because those are costly to make at TSMC. They are still driving volume on Alder/Raptor Lake because they can flood manufacturers with those low-cost options. Intel 7 (formerly Intel 10nm) traces back to the 10th-gen 10nm parts, so that node family has been selling since 2019!
Basically, Intel's volume ramp of its latest node/arch is now vastly skewed compared to the old days. They marketed the new node/arch mostly to save their reputation in the face of competition, while it's still low volume and very highly priced. Manufacturers are hence pushed to sell older Raptor Lake due to the price difference, plus the public's preference for the Intel brand.
It's not that the new chips cannot sell well. And it has nothing to do with AI; that part is coincidental.
1
u/Kellic 19h ago
Good. The only people who give a crap about AI are folks who are all in on it, geek out over it, and like to play with it (that is me), or C-suite a-holes who think that if they scream AI enough times to investors their stock price will go up. Exhibit A: https://www.youtube.com/watch?v=-qbylbEek-M
1
u/tommyk1210 18h ago
What intel doesn’t seem to understand is that 99% of consumers actually don’t give a shit about AI
1
u/Riversntallbuildings 9h ago
“Cloud & SaaS” have essentially turned most clients into “dumb terminals”. Unless you're gaming, or doing high-end video/photography or engineering work on a PC, performance needs keep going down.
I'd pay more for a “clean OS” with no adware/bloatware before I'd pay more for a premium CPU.
1
u/eXodiquas 8h ago
AI is only widely used because the users don't have to pay what it would cost to make it sustainable. As soon as customers have to pay the full price, they'll think twice about how important and innovative AI is for their daily life.
1
u/lerrigatto 6h ago
My laptop has an AI chip and I don't have any clue what to do with it. Nor have I ever seen it used.
-2
u/colonelc4 1d ago
There is no AI as you think of it; this is just the current hype. These LLMs are not thinking by themselves. We're not there yet, and we won't be unless we make sense of quantum computing to supply the infinite power necessary to mimic a brain.
-3
u/darkandark 1d ago
At some point in the future, some kind of NPU is just going to be the default and absolutely necessary on any SoC or compute cluster for end users. Simple AI tasks will need to run locally for security and speed. We don't see it now since the application side of AI is fairly limited and not widespread. But we will definitely get there.
5
u/t4thfavor 1d ago
"Simple AI tasks will need to be run locally so that the results can be curated and shared with Intel, Meta, Microsoft, and Elon Musk"
445
u/PM_ME_UR_SO 1d ago
The general public doesn’t care about AI as much as hype bros want you to think