Who’d have thunk it :)

  • Southern WolfA
    2 years ago

    Thing is, this isn’t really how AI training works, and it can easily be done on the outputs of other AI. That’s actually what Stanford used to train their (comparatively) small LLM, which was very competent despite its size. It was trained on the outputs of GPT (iirc) and held its own much better than other models in a similar category, which is also what opened the door to smaller, more specialized models being useful, rather than giant ones like GPT.
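The idea above can be shown with a toy sketch: a small “student” model trained only on the outputs of a larger “teacher”, never on ground-truth labels. Everything here (the linear teacher, logistic-regression student) is a stand-in for illustration, not the actual Stanford/GPT pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend "teacher": a fixed linear rule standing in for a large model.
teacher_w = np.array([2.0, -1.0])
def teacher_label(x):
    return (x @ teacher_w > 0).astype(float)

# Unlabeled data; the student only ever sees the teacher's outputs.
X = rng.normal(size=(1000, 2))
y_teacher = teacher_label(X)

# Small "student": logistic regression trained by gradient descent
# on the teacher-generated labels.
w = np.zeros(2)
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w)))
    w -= 0.1 * X.T @ (p - y_teacher) / len(X)

# Check how often the student agrees with the teacher on fresh data.
X_test = rng.normal(size=(200, 2))
agreement = np.mean((X_test @ w > 0) == (teacher_label(X_test) > 0))
```

The point being: the student can end up agreeing with the teacher almost everywhere, despite being much simpler and never touching the original training data.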

    Now, image generation via diffusion might be more troublesome, but that’s fairly easily mitigated through several means, including a human or automated discriminator, which basically becomes a pseudo form of a GAN. There are also other architectures, such as GANs themselves, that aren’t as affected (from what I know, at least). But given that most image AIs are trained on datasets like LAION, AI images being uploaded online will have no effect on that for quite a while, if ever.
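The “automated discriminator” idea is roughly: score candidate training images and drop the ones that look AI-generated before they enter the training set. Below is a deliberately simplified sketch with 1-D stand-in features and a hand-picked threshold, nothing like a real image pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in features: "real" photos cluster near 0, AI outputs near 3.
real = rng.normal(0.0, 1.0, size=500)
synthetic = rng.normal(3.0, 1.0, size=500)

# A trivial "discriminator": distance from the real-data mean.
real_mean = real.mean()
def looks_synthetic(x, threshold=1.5):
    return np.abs(x - real_mean) > threshold

# A mixed web scrape; filter it before training, GAN-discriminator style.
scraped = np.concatenate([real, synthetic])
labels = np.concatenate([np.zeros(500), np.ones(500)])  # 1 = synthetic

keep = ~looks_synthetic(scraped)
purity = 1 - labels[keep].mean()  # fraction of kept samples that are real
```

Even this crude filter leaves a training set that is mostly real data, which is the mitigation being described: contamination only matters to the extent it survives the discriminator.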

    • ParsnipWitch@feddit.de
      2 years ago

      This prediction is based on AI being trained exclusively on AI-generated content for a long time. There is no example of that yet.