• Breve
    1 day ago

    Oh yeah, for sure. I’ve run Llama 3.2 on my RTX 4080; it struggles, but it’s not obnoxiously slow. I think they’re betting that more software will ship with integrated LLMs that run locally on users’ PCs instead of relying on cloud compute. A rough sketch of what that could look like is below.
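
    For anyone curious, here’s a minimal sketch of an app calling a local Llama 3.2 model through the Ollama Python client. This assumes you’ve installed the Ollama runtime, run `pip install ollama`, and pulled a model with `ollama pull llama3.2`; the model tag and prompt are just examples, not anything specific to the software mentioned above.

    ```python
    # Sketch: an application feature backed by a locally running LLM,
    # assuming the Ollama runtime is installed and a Llama 3.2 model
    # has already been pulled (ollama pull llama3.2).
    import ollama

    response = ollama.chat(
        model="llama3.2",  # served from the local machine, no cloud API calls
        messages=[
            {"role": "user", "content": "Summarize this changelog in one sentence."},
        ],
    )

    # The reply text lives under message.content in the response.
    print(response["message"]["content"])
    ```

    The point of running it this way is that inference stays on the user’s own GPU or CPU, so the app works offline and nothing leaves the machine.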