• skilltheamps@feddit.de

    That power efficiency is a direct result of the instruction set: the reduced instruction set allows for smaller chips, in contrast to x86’s legacy-bearing complex instruction set.

    • 737@lemmy.blahaj.zone

      It’s really not. x86 (CISC) CPUs could be just as efficient as ARM (RISC) CPUs, since instruction sets (despite popular consensus) don’t really influence performance or efficiency. It’s just that the x86 CPU oligopoly had little interest in producing power-efficient CPUs, while ARM chip manufacturers were mostly making chips for phones and embedded devices, which pushed them to focus on power efficiency instead of relentlessly maximizing performance. I expect the next few generations of Intel and AMD x86-based laptop CPUs to approach the power efficiency Apple and Qualcomm have to offer.

      • bamboo@lemm.ee

        All else being equal, a complex decoding pipeline does reduce the efficiency of a processor. It’s likely not the most important factor, but it will become an issue once the larger efficiency problems are addressed.

        • 737@lemmy.blahaj.zone

          Yeah, but you could improve the less-than-ideal encoding with a relatively simple update; no need to throw out all the tools, the great compatibility, and the working binaries that Intel and AMD already have.

          It’s also not the ISA’s fault.

          • bamboo@lemm.ee

            Well, not exactly. You have to remove instructions at some point; that’s what Intel’s x86-S is supposed to be. You lose some backwards compatibility, but the removed instructions are chosen to have the least impact on most users.

            • 737@lemmy.blahaj.zone

              Would this actually improve efficiency, though, or just reduce manufacturing and development costs?

              • bamboo@lemm.ee

                Instruction decoding takes die area and power; the fewer and smaller the transistors dedicated to the task, the less area and power it consumes.

    • librejoe@lemmy.world

      Yes, I understand that and agree, but x86 dominated because of those QoL instructions it has. On ARM you need to write more code to do the same thing x86 does; on the other hand, if you don’t need to write a complex application, that isn’t a bad thing.

      • ProgrammingSocks

        You don’t need to write more code. It’s just that the same code compiles to more explicit (and more numerous) machine instructions. A difference in architecture is only really relevant if you’re writing assembly or something like it.
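
        To make that concrete, here is a rough sketch: the same C function builds unchanged for both architectures, and only the machine code the compiler emits differs. The instruction sequences in the comments are typical of what gcc/clang produce at -O2, not guaranteed output.

        ```c
        /* The same C source compiles for x86-64 and AArch64 alike;
         * the difference only shows up in the emitted machine code.
         *
         * Typical -O2 output (illustrative, varies by compiler):
         *   x86-64:   add dword ptr [rdi], esi   ; one read-modify-write instruction
         *   AArch64:  ldr w8, [x0]               ; load,
         *             add w8, w8, w1             ; add,
         *             str w8, [x0]               ; store (load/store architecture)
         */
        void add_in_place(int *counter, int delta)
        {
            *counter += delta;  /* increment the value that 'counter' points to */
        }
        ```

        The source is identical either way; the CISC memory operand just folds the load and store into a single instruction, while the compiler spells those steps out on ARM.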

        • librejoe@lemmy.world

          Sorry, I should have been more specific: I am talking about assembly code. I will state again that I am pro-ARM and wish I were posting this from an ARM laptop running a distro.