unless they can fix whatever is causing consistent delays
Yup, that’s their #1 goal right now. If I were CEO, I’d cut/sell any part of the business that doesn’t directly support CPU and GPU sales, which is basically what Intel is doing. My priorities would be:
1. rescue server CPU business - this is their main money-printing machine, and while they may lose to ARM in the future, they need a cash cow in the medium term
2. get a competent server-oriented GPU product out - they’re late to the game, but they can bundle GPUs w/ their server CPU contracts to get some market share; overwhelm these corporate customers with first-class driver support
3. get something to compete w/ Apple’s M1 - this means a super low-power CPU that can scale to gaming workloads, and capable-enough graphics (something a bit better than AMD’s APUs); sell this near cost to keep a foot in the door in the mobile space
4. sell domestic fab capacity - now is the time to get Sony and Microsoft on board with their next-gen consoles, and it might not be too late for Nintendo
I would essentially ignore desktop workloads and solve workstation workloads w/ server chips. To me, those sound like the highest margin businesses that they could potentially still capture, and at least 1 & 2 are a bit less sensitive to being behind on their fab process (corporate contracts respond pretty well to bundle discounts).
This probably wouldn’t work though, especially since I’m an outside observer with zero industry experience. But I think a good CEO would do something along those lines, which seems to be what Pat Gelsinger was going for as well.
Almost certainly too late to get Nintendo. According to Nvidia insiders, their work on the Switch follow-up SoC has been done for ages, and they’re a bit puzzled that Nintendo hasn’t released it yet. The reason seems to be unfavorable exchange rates between the yen and the US dollar, and Nintendo’s board of directors has worked itself into analysis paralysis over the “best” time to release.
If I were CEO, I’d cut/sell any part of the business that doesn’t directly support CPU and GPU sales, which is basically what Intel is doing.
That’s pretty much what they did. They sold off most of the “other” stuff: they sold the modem division, shut down the SSD division, sold part of their Mobileye stake in its IPO, and are reportedly looking to sell part of Altera, their FPGA division.
They tried all this:

rescue server CPU business

Not sure about this, but it appears AMD is simply out-designing them. Some concepts like the many-little-core SKUs seem promising, but ultimately AMD’s EPYC MCM design is fundamentally very good here. And… delays. Delays are killing them here.

get a competent server-oriented GPU product out

This was Xe-HPC, the Falcon Shores APU, the Falcon Shores GPU, Gaudi… They’re so late to everything that it didn’t work, and it appears they’ve basically given up on the whole line besides consumer inference products, which are also kinda meager at the moment. And even AMD is mightily struggling here, with hardware that is straight up bigger/faster than Nvidia’s.

get something to compete w/ Apple’s M1

An M Pro-esque chip was also in the plans, but it was seemingly canceled? Or it’s way behind AMD, at least. And OEMs have repeatedly rejected their GPU-heavy designs like Broadwell with eDRAM and the AMD collab chip, as they’re kinda idiots and Intel is at their mercy. And the laptop chips they are selling now are basically their best shot at an “M” chip, and arguably one of their most decent products.

sell domestic fab capacity

They tried, and no one bit. Who can blame them, given Intel’s history of delays?

It’s all the delays! It’s destroying them.

I mean, I guess I’d press on with Xe if I were CEO, but if they can’t launch anything on time, what does it matter?
And even AMD is mightily struggling here, with hardware that is straight up bigger/faster than Nvidia’s.
The problem has always been software support. If Intel wants a piece of the AI pie, they need fantastic software support. AMD has always been a bit lackluster here, whereas Intel has done a pretty decent job in the past (esp. on Linux, their drivers rock), so they would need to double down if they truly want to get after it.
Intel is at their mercy
Then Intel should make their own laptops and prove the model.
it appears AMD is simply out-designing them
I don’t think so; AMD is just better at improving margins. Intel was able to keep up for a while despite falling behind on the fabs, so I think their designs are absolutely fine. They’re not as cheap to manufacture as AMD’s, but they are really good.
It’s all the delays! It’s destroying them.
Exactly. They need to double down on something instead of faffing about with different ideas. Their moneymaker is server chips, so that should be the top priority. Their next biggest is probably laptops, and AMD is making massive inroads there because Intel’s fabs are struggling. Catching up on servers should be easier than catching up on laptops, because corporate buyers can be won over w/ value, whereas the CPU makes up a much smaller portion of the overall laptop price, so they have less leeway there.

But yeah, they need to fix the delays. Get the fabs on track and get steady CPU production going in their core markets. And do that without giving up on GPUs, because GPUs need to be in the future plans: people are generally moving from CPUs to GPUs for compute.

Everything else Intel does can be scrapped in favor of better software. Really good software can do a lot to make up for lagging hardware, so make sure that is top-notch while you’re fixing the hardware delivery.
The problem has always been software support. If Intel wants a piece of the AI pie, they need fantastic software support. AMD has always been a bit lackluster here, whereas Intel has done a pretty decent job in the past (esp. on Linux, their drivers rock), so they would need to double down if they truly want to get after it.
Actually, AMD is pretty okay for running LLMs and other ML workloads. Many libraries now explicitly target ROCm; you can just plop down vLLM or the llama.cpp server and have it work with big models out of the box. There are some major issues (like flash-attention), but it’s quite usable.
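As a rough sketch of what “out of the box” means here (assuming a ROCm-enabled vLLM install and a GPU with enough VRAM; the model name is just an example), it’s basically:

```python
# Minimal sketch: vLLM's offline inference API, same code on ROCm or CUDA builds.
# The model name is only an example; pick whatever fits your VRAM.
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")
params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["Why do chiplets help server CPU margins?"], params)
print(outputs[0].outputs[0].text)
```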
Intel, though? Their software is a mess. You have to jump through all sorts of hoops, use ancient builds of PyTorch, use their own quantizations and such to get anything working, fix Python errors, and forget about batched enterprise backends like vLLM. And this is just their iGPUs and Arc; forget trying to use the vaunted NPUs for anything.
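For contrast, a sketch of the kind of hoops on the Intel side, as I understand it: instead of the stock “cuda” device you need the separate intel_extension_for_pytorch package and its “xpu” device, version-pinned to your torch build (names and APIs here are from memory and may differ by release):

```python
# Sketch of the iGPU/Arc path via Intel Extension for PyTorch (IPEX).
# Assumes a matching torch + intel_extension_for_pytorch install.
import torch
import intel_extension_for_pytorch as ipex  # extra, version-locked package

model = torch.nn.Linear(4096, 4096).eval().to("xpu")  # "xpu", not "cuda"
model = ipex.optimize(model, dtype=torch.bfloat16)    # IPEX-specific optimize step
x = torch.randn(8, 4096, dtype=torch.bfloat16, device="xpu")
with torch.no_grad():
    print(model(x).float().sum().item())
```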
This could change if they actually had a cheap 48GB GPU (or a big APU) for AI devs to target… but they don’t. And no one is renting Gaudi to build in support, because it’s not even available anywhere.
EDIT: oh, and one weird thing is that the volume of Intel software support is high. Like, they have all sorts of cool libraries and they make contributions to open projects… But it’s all disjointed and fragmented, like there’s no leadership or unified push, just random efforts flailing around.
I work in CV, and I have to agree that AMD is kind of OK-ish at best there. The core DL libraries like torch will play nice with ROCm, but you don’t have to look far to find third-party libraries explicitly designed around CUDA or NVIDIA hardware in general. Some examples are the super popular OpenMMLab/mmcv framework, tiny-cuda-nn and nerfstudio for NeRFs, and Gaussian splatting. You could probably get these to work on ROCm with HIP, but it’s a lot more of a hassle than configuring them on CUDA.
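The split is easy to see in practice: a ROCm build of PyTorch exposes AMD GPUs through the regular “cuda” device, so plain torch code runs unchanged, while nvcc-built extensions like tiny-cuda-nn have to be hipified and rebuilt first. A quick sketch (the tinycudann import name is assumed):

```python
# Plain torch works on ROCm because the AMD GPU sits behind the normal "cuda" API.
import torch

if torch.cuda.is_available():
    backend = "ROCm/HIP" if getattr(torch.version, "hip", None) else "CUDA"
    x = torch.randn(2048, 2048, device="cuda")
    print(backend, torch.matmul(x, x).norm().item())

# CUDA-only extensions are where the hassle starts: they ship nvcc-built kernels
# and generally need hipify + a rebuild before they run on ROCm.
try:
    import tinycudann  # assumed import name for the tiny-cuda-nn bindings
except ImportError as err:
    print("CUDA-only extension not usable on this stack:", err)
```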
Exactly. Intel is shooting itself in the foot by going halfway. If they want to compete in the AI space, they need to go all-in w/ a solid software and hardware combo. But they don’t.
They have the capability, they’re just not focused. A good CEO should be able to provide that focus. Maybe they should hire Lisa Su. 😆
Speaking as a holder of AMD stock since it was $8, and an all-AMD CPU user, IMO Lisa Su is either an absolute idiot or colluding with her cousin, the CEO of Nvidia.

All they had to do was lift the VRAM restrictions on consumer GPUs (so their OEMs could double the VRAM) and sic like four engineers on the bugs blocking AI workloads, and they would be dominating the AI space and eating Nvidia’s pie…

And they didn’t. Like, it’s two phone calls, that’s it.

Intel has monumental problems to solve and is struggling with them, but AMD has tiny ones they inexplicably ignore. It’s mind-boggling.
Nintendo’s board of directors has worked itself into analysis paralysis over the “best” time to release.
Heh this is so Nintendo.
They must be pulling their hair out trying to make predictions now.
That’s pretty much what they did. They sold off most of the “other” stuff.
Yeah, and I generally agree with Gelsinger’s direction. I’m interested in the reason for him retiring, as well as who is likely to replace him.
It would be really funny though if Intel tanks and AMD buys their fabs from them.
An M Pro-esque chip was also in the plans, but it was seemingly canceled?

Wasn’t Lunar Lake supposed to be this?
It’s 128-bit. I’d say it needs a bigger GPU and a 256-bit bus to be “M2 Pro” class.
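Back-of-the-envelope, since peak memory bandwidth scales with bus width times transfer rate (the speeds below are rough public figures, not measurements):

```python
# Peak DRAM bandwidth ~ (bus width in bytes) x (transfer rate).
def peak_bw_gbs(bus_bits: int, mt_per_s: int) -> float:
    return bus_bits / 8 * mt_per_s / 1000  # GB/s

print(peak_bw_gbs(128, 8533))  # ~136 GB/s: 128-bit LPDDR5X-8533, Lunar Lake class
print(peak_bw_gbs(256, 6400))  # ~205 GB/s: 256-bit LPDDR5-6400, M2 Pro class
```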