natebluehoovesA · 9 months ago

Usually there is a massive VRAM requirement. Local neural-network accelerator silicon (NPUs) doesn't solve that, but using a more lightweight, more limited model could.
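
As a rough back-of-the-envelope sketch of why VRAM is the bottleneck (the parameter counts and bytes-per-weight figures below are illustrative assumptions, not exact numbers for any specific model):

```python
# Rough VRAM estimate for just holding a model's weights, ignoring
# activation memory, KV cache, and runtime overhead.
def weight_vram_gb(n_params_billion: float, bytes_per_param: float) -> float:
    return n_params_billion * 1e9 * bytes_per_param / 1024**3

# Illustrative model sizes (assumed for the sake of the example):
for name, params in [("7B model", 7), ("13B model", 13), ("70B model", 70)]:
    fp16 = weight_vram_gb(params, 2.0)  # 16-bit weights
    q4 = weight_vram_gb(params, 0.5)    # ~4-bit quantized weights
    print(f"{name}: ~{fp16:.0f} GB at fp16, ~{q4:.0f} GB at 4-bit")
```

Even quantized, the bigger models blow past what consumer GPUs (or an NPU's shared memory) can hold, which is why only the small ones are realistic locally.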

Basically, don't expect even GPT-3-level quality, but SOMETHING could be run locally.
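
For instance, a small quantized model can already run on ordinary hardware, even CPU-only. A minimal sketch with llama-cpp-python (the model path is a hypothetical placeholder; any small GGUF-format model would do):

```python
# Minimal sketch: run a small quantized model locally with
# llama-cpp-python. On CPU this needs no VRAM at all, just RAM.
from llama_cpp import Llama

# Hypothetical path: substitute any small GGUF model you have downloaded.
llm = Llama(model_path="./models/small-model-q4.gguf", n_ctx=2048)

out = llm("Q: What is the capital of France? A:", max_tokens=32)
print(out["choices"][0]["text"])
```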