What if Apple decided to release their “M” series processors as desktop CPUs? How would that change the market?

It would also be interesting to see Samsung Foundry release desktop Exynos chips or maybe Qualcomm “X” processors for desktop that are more powerful than the laptop chips.

P.S. I know they would never do anything like that, but it would be interesting to imagine how the market would change with more competitors.

  • Lasherz@lemmy.world · 4 days ago

    Do you have a source for AMD chips being especially energy efficient? I don’t consider them to be even close. The M3 is 190 Cinebench points per watt, whereas the Ryzen 7 7840U is 100. My points-per-watt data doesn’t include Snapdragon X yet, but it’s generally considered the multithreading king on the market, and it runs at a significantly lower TDP than AMD. SoCs are inherently more energy efficient. My memory of why is that x86’s instruction set allows for more complicated instructions, whereas ARM is hard-restricted to simpler instructions and has to build up complexity from those simpler building blocks when it’s needed.
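Just to make that gap concrete, here’s a back-of-envelope calculation from the figures above. The 190 and 100 points-per-watt numbers are the ones quoted in this comment, and the 1,000-point target score is purely hypothetical, picked only to illustrate the wattage difference:

```python
# Back-of-envelope efficiency comparison using the points-per-watt
# figures quoted above; treat them as illustrative, not measured here.
m3_ppw = 190      # Cinebench points per watt, Apple M3 (quoted above)
ryzen_ppw = 100   # Cinebench points per watt, Ryzen 7 7840U (quoted above)

efficiency_ratio = m3_ppw / ryzen_ppw
print(f"M3 efficiency advantage: {efficiency_ratio:.1f}x")

# Flip it around: watts needed to sustain the same Cinebench score.
score = 1000  # hypothetical target score, for illustration only
print(f"Watts for {score} pts: M3 ~{score / m3_ppw:.1f} W, "
      f"7840U ~{score / ryzen_ppw:.1f} W")
```

On those numbers the M3 needs roughly half the power for the same score, which is the whole ballgame for thermally constrained laptop and mini-desktop designs.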

    Like I mentioned though, there are tasks where x86 can’t be beat, but that’s because those chips use on-chip ASICs for hardware-accelerated encoding/decoding, and nothing is more efficient at a task than a purpose-built, task-specific ASIC or FPGA.

    • GamingChairModel@lemmy.world · 4 days ago

      Do you have a source for AMD chips being especially energy efficient?

      I remember reviews of the HX 370 commenting on that. Problem is, that chip was produced on TSMC’s N4P node, which doesn’t have an Apple comparator (the M2 was on N5P and the M3 was on N3B). The Ryzen 7 7840U was on N4, a year behind that. It just shows that AMD can’t even get onto a TSMC node within a year or two of Apple.

      Still, I haven’t seen anything really putting these chips through their paces and actually measuring real-world energy usage across a variety of benchmarks. And benchmarks themselves only correlate to specific ways computers are used and aren’t necessarily supported on all hardware or OSes, so it’s hard to get a real comparison.

      SoCs are inherently more energy efficient

      I agree, but that’s a separate issue from the instruction set. The AMD HX 370 is an SoC (well, technically a SiP, since the pieces are all packaged together rather than printed on the same piece of silicon).

      And in terms of actual chip architectures, as you allude to, the design dictates how specific instructions are processed, which is why the RISC-versus-CISC distinction is basically obsolete. Chip designers make engineering choices about how much silicon area to devote to specific functions based on their modeling of how the chip will be used (multithreading, cores optimized for efficiency versus performance, speculative execution, specialized blocks for hardware-accelerated video, cryptography, AI, and so on), and then decide how all of that fits into the broader chip design.

      Ultimately, I’d think the main reason something like x86 would die off is licensing, not anything inherent to the instruction set architecture.