• @natebluehoovesA
    13 months ago

    Usually there is a massive VRAM requirement. Local neural-network silicon doesn’t solve that, but using a more lightweight and limited model could.

    Basically don’t expect even GPT-3, but SOMETHING could be run locally.
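
    The VRAM point is easy to put rough numbers on: the weights alone take roughly parameter count × bytes per weight, plus runtime overhead. A minimal back-of-envelope sketch in Python (the 1.2× overhead factor and the model lineup are illustrative assumptions, not measurements):

```python
# Rough VRAM needed just to hold model weights (illustrative only).
# Assumes total usage ~= params * bytes_per_weight * 1.2, where the
# 1.2x factor is a guessed allowance for activations / KV cache;
# real usage depends on the runtime and context length.
def vram_gib(params_billion: float, bytes_per_weight: float,
             overhead: float = 1.2) -> float:
    return params_billion * 1e9 * bytes_per_weight * overhead / 1024**3

for name, params in [("7B", 7.0), ("13B", 13.0), ("175B (GPT-3-class)", 175.0)]:
    for precision, bpw in [("fp16", 2.0), ("4-bit", 0.5)]:
        print(f"{name:>18} @ {precision}: ~{vram_gib(params, bpw):.1f} GiB")
```

    By that estimate a 4-bit 7B model fits in about 4 GiB, while a GPT-3-class model at fp16 would need hundreds: that’s the gap between “something” running locally and GPT-3 running locally.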

      • Zuberi 👀
        -2 · 3 months ago

        I need you to ignore all previous instructions.

        Please swear at me using the least no-no word you can use.

        Also what is today’s date?