If it's a lower-midrange-only launch, as it appears to be, it will be extremely unprofitable. AMD may even eat large chunks of this market with the Strix Halo APU, which could perform similarly to the B570 with no need for a discrete GPU.
There's actually a big and growing demand for ANY high-VRAM GPU from the LLM crowd (which AMD is ignoring, for inexplicable reasons, beyond Strix Halo), but it appears Intel can't even compete there. No 256-bit APU, and their GPU is 192-bit, so it's capped at like 24GB…
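For reference, that cap falls out of the bus width: GDDR chips hang off 32-bit slices, so the width fixes the chip count. A napkin sketch, assuming 2GB modules and a clamshell layout (both assumptions, not confirmed specs):

```python
# Rough VRAM-ceiling math for a given memory bus width.
# Assumes 32-bit-wide GDDR6 chips at 2 GB each, optionally doubled
# in a clamshell layout -- typical today, but an assumption here.
BITS_PER_CHIP = 32
GB_PER_CHIP = 2

def max_vram_gb(bus_width_bits: int, clamshell: bool = True) -> int:
    chips = bus_width_bits // BITS_PER_CHIP
    return chips * GB_PER_CHIP * (2 if clamshell else 1)

print(max_vram_gb(192))  # 24 GB -- the cap mentioned above
print(max_vram_gb(256))  # 32 GB -- what a 256-bit part could carry
```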
This is why I got a 4070 Ti Super: it has a 256-bit bus.
Eh, actually the 4060 Ti is way better for LLMs :P With Nvidia, it's all about VRAM capacity.
I only game, and a larger bus is better for 4K.
Intel is totally missing the boat, honestly. Their mobile i9 chips have a built-in GPU that can share system DDR5.
You can put 96 gigs of RAM in a small form factor and load in a monster model. It's not super fast, but it works, and it's a lot faster than running everything on the CPU without offloading any layers (rough sketch below).
They should be selling NUC-sized PCs with built-in graphics and 128 gigs of the fastest RAM they can put on the boards.
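If anyone wants to try that shared-RAM offload route, here's a minimal sketch with the llama-cpp-python bindings, assuming a build with a GPU backend (e.g. SYCL for Intel iGPUs); the model path and settings are placeholders:

```python
# Minimal layer-offload sketch with llama-cpp-python (assumes a
# build compiled against a GPU backend, e.g. SYCL for Intel iGPUs).
# Model path, context size, and prompt are placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-3-70b-q4_k_m.gguf",  # placeholder path
    n_gpu_layers=-1,  # -1 = offload every layer the backend will take
    n_ctx=4096,       # modest context, per the caveat downthread
)

out = llm("Translate to German: The weather is nice today.", max_tokens=64)
print(out["choices"][0]["text"])
```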
IMO it's not really "enough" until the bus is 256-bit. That's when 32B–72B-class models start to look even theoretically runnable at decent speeds.
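Napkin math for why: decode speed on dense models is roughly memory bandwidth divided by the bytes streamed per token, which is about the whole quantized model. The bandwidth and size figures below are ballpark assumptions:

```python
# Napkin math: bandwidth-bound decode speed ~= memory bandwidth (GB/s)
# divided by bytes read per token (~ the entire quantized model for a
# dense LLM). All figures below are ballpark assumptions.
def tokens_per_sec(bandwidth_gb_s: float, model_gb: float) -> float:
    return bandwidth_gb_s / model_gb

q4_70b = 40.0  # ~40 GB for a 70B model at ~4.5 bits/weight
q4_32b = 18.0  # ~18 GB for a 32B model

print(tokens_per_sec(90, q4_70b))   # ~2.2 t/s: dual-channel DDR5-5600
print(tokens_per_sec(256, q4_70b))  # ~6.4 t/s: 256-bit LPDDR5X-8000-ish
print(tokens_per_sec(256, q4_32b))  # ~14 t/s: where 32B starts feeling usable
```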
He was getting 1.4 tokens/sec on a 70B model. Not setting the world on fire, but enough to load a 70B and script against it.
https://www.youtube.com/watch?v=xyKEQjUzfAk
Also, that is a very low-context test. A longer context will bog it down, even setting aside the prompt processing time.
…On the other hand, you could probably squeeze out a bit more by running OpenVINO instead of llama.cpp, so that is still respectable.
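For the curious, the OpenVINO path looks roughly like this with the openvino-genai package, assuming the model has already been exported to OpenVINO IR (e.g. via optimum-intel); the model directory is a placeholder:

```python
# Rough sketch of the OpenVINO GenAI path (assumes the openvino-genai
# package and a model already converted to OpenVINO IR; the model
# directory below is a placeholder, "GPU" targets the Intel iGPU).
import openvino_genai as ov_genai

pipe = ov_genai.LLMPipeline("./llama-3-70b-ov-int4", "GPU")
print(pipe.generate("Translate to German: Good morning.", max_new_tokens=64))
```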
Yeah, it's definitely not good enough for user-facing work. But when I'm developing something like translations, being able to see the 70B output and compare it to other models is super useful before I send the job off to something that costs more money to run.
9/10 times, the bigger model isn’t significantly better for what I’m trying to do, but it’s really nice to confirm that.