For once I actually feel excited about processors again

  • @dragontamer@lemmy.world
    17 points · 10 months ago

    Itanium, XScale (ARM), Altera FPGAs, Intel Arc GPUs, Intel 8051, Larrabee.

    Intel has repeatedly tried (with billions of dollars invested in Itanium in particular) to build a new architecture. It just turns out that x86 is far better in practice than anyone ever expected.

    AVX512 is very well designed in any case. Intel knows what they’re doing. If you want cool assembly instructions, I suggest learning AVX512 as well as the PEXT and PDEP instructions. I’ll point out the AVX512 compression and expansion instructions in particular, as proof that Intel’s engineers really understand SIMD compute and how it’s useful.
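
    For a taste, here’s a minimal sketch of PEXT/PDEP plus an AVX512 compress using the standard Intel intrinsics (the input values and lane mask are made up purely for illustration; assumes a compiler and CPU with BMI2 and AVX-512F/VL):

    ```c
    // Sketch only: bit gather/scatter with BMI2, then an AVX-512 compress.
    // Build with e.g.: gcc -O2 -mbmi2 -mavx512f -mavx512vl pext_demo.c
    #include <immintrin.h>
    #include <stdio.h>

    int main(void) {
        // PEXT gathers the bits of src selected by mask into the low bits;
        // PDEP scatters low bits back out to the mask positions (its inverse).
        unsigned long long src  = 0xDEADBEEFULL;
        unsigned long long mask = 0x0F0F0F0FULL;          // low nibble of each byte
        unsigned long long packed = _pext_u64(src, mask); // 0xEDEF
        printf("pext: %llx\n", packed);
        printf("pdep: %llx\n", _pdep_u64(packed, mask));  // 0xE0D0E0F

        // Compress packs only the lanes whose mask bit is set to the front,
        // zeroing the rest -- a branchless vector "filter".
        __m256i  v = _mm256_setr_epi32(10, 11, 12, 13, 14, 15, 16, 17);
        __mmask8 k = 0xB2;                                // keep lanes 1, 4, 5, 7
        __m256i  c = _mm256_maskz_compress_epi32(k, v);
        int out[8];
        _mm256_storeu_si256((__m256i *)out, c);
        for (int i = 0; i < 8; i++) printf("%d ", out[i]); // 11 14 15 17 0 0 0 0
        printf("\n");
        return 0;
    }
    ```

    Compress/expand is essentially PEXT/PDEP lifted from bits to whole SIMD lanes, which is why it’s so handy for branchless filtering.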


    Intel Arc GPUs are also interesting. Though not as commercially successful as AMD’s or NVidia’s, they show that Intel has invested substantially in SIMD compute patterns as well.

    • AggressivelyPassive
      11 points · 10 months ago

      x86 is increasingly a complexity monster. I’m pretty sure a substantial part of the instruction set is hardly ever used by modern programs, yet Intel has to keep maintaining it.

      And if you look at all the hardware security incidents, they all originated in attempts to squeeze the last drops of performance from an old architecture.

      • @dragontamer@lemmy.world
        6 points · 10 months ago

        You know that ARM and RISC-V are both subject to Spectre attacks, right? Any out-of-order processor (which means every modern CPU, not just x86) is subject to Spectre.

        All CPUs perform speculative execution and branch prediction.
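
        To make that concrete, here’s the shape of the classic Spectre v1 (bounds-check-bypass) gadget, sketched in C; it’s purely illustrative, and nothing about it is x86-specific:

        ```c
        #include <stddef.h>
        #include <stdint.h>

        uint8_t array1[16];
        uint8_t array2[256 * 512];
        size_t  array1_size = 16;

        uint8_t victim(size_t x) {
            // The branch predictor can guess "in bounds" for a malicious
            // out-of-range x. The secret-dependent load below then executes
            // speculatively and leaves a cache footprint an attacker can
            // time, even though the architectural result is discarded.
            if (x < array1_size) {
                return array2[array1[x] * 512];
            }
            return 0;
        }
        ```

        The same gadget misbehaves on any speculative out-of-order core, whether it runs x86, ARM, or RISC-V.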

  • @carl_dungeon@lemmy.world
    7 points · 10 months ago

    Intel’s problem with new architectures has always been buy-in. Apple has repeatedly been able to swap underlying architectures because they also ship compatibility layers, go all in, and have an adapt-or-die culture around applications. Yes, some people get boned by old apps getting deprecated, but for most it means devs are encouraged (forced) to keep their stuff updated and new. Look how fast and easy the M1 switch was.

    If Intel rolls something out with zero x86 compatibility and just hopes people will write for it, it’ll never happen; it’s chicken and egg. And if the new hotness doesn’t provide enough benefit for the cost of the investment, it’ll also flop.

    But if it’s well supported, the benefits are huge, and there’s an easy migration/upgrade/conversion path, then people will adopt it.

    AWS got its own chips into machines because the switch was largely transparent to end users, cheaper, and provided real benefits.