• daddy32@lemmy.world
    2 days ago

    Except you cannot use them for AI commercially, or at least not in a data center setting.

    • Breve
      2 days ago

      Data centres want the even beefier cards anyhow, but I think Nvidia envisions everyone running local LLMs on their PCs, because it will be integrated into software instead of relying on cloud compute. My RTX 4080 can struggle through Llama 3.2.