• Catt@programming.dev · 1 day ago

    For anyone interested, there are image-poisoning tools out there to protect your art (Glaze, for example) or even attack AI models directly (Nightshade, for example).
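    To give a rough idea of what these tools do (this is a toy illustration, not Glaze's or Nightshade's actual method): a poisoning tool nudges each pixel by at most a small budget `eps` in whichever direction most disturbs a model's internal feature representation, so the image looks unchanged to a human but "reads" differently to the model. The `feature_grad` input here stands in for a gradient you'd get from a real model.

```python
def perturb(image, feature_grad, eps=4):
    """Toy sign-of-gradient perturbation (FGSM-style sketch).

    image        -- flat list of 0-255 pixel values
    feature_grad -- hypothetical gradient of some model feature
                    with respect to each pixel (same length)
    eps          -- max per-pixel change, kept small so the edit
                    is invisible to the human eye
    """
    poisoned = []
    for px, g in zip(image, feature_grad):
        # move each pixel eps in the direction of the gradient's sign
        step = eps if g > 0 else (-eps if g < 0 else 0)
        # clamp back into the valid 0-255 pixel range
        poisoned.append(max(0, min(255, px + step)))
    return poisoned

print(perturb([100, 200, 50, 255], [0.3, -1.2, 0.0, 5.0]))
# [104, 196, 50, 255]
```

    The real tools pick the perturbation per-image using the target model's feature extractor; the clamping and small per-pixel budget are the part that keeps the art visually intact.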

    • inconel@lemmy.ca · 1 day ago (edited)

      AFAIK these are very model-specific attacks and won’t work against other models. It’s a great offering that their tool keeps the art looking the same to the human eye, and there’s always rigorous watermarking (especially with strong contrast) as a universally effective option.
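      A minimal sketch of the "strong contrast" idea (my own toy example, assuming a grayscale image as a 2D list of 0-255 values): blend a repeating light/dark pattern across the whole image, so the mark touches every region and survives cropping.

```python
def apply_watermark(image, period=4, strength=64):
    """Overlay a high-contrast checkerboard watermark.

    Alternates +strength / -strength in cells of `period` pixels,
    creating strong local contrast that is hard to remove cleanly.
    """
    out = []
    for y, row in enumerate(image):
        new_row = []
        for x, px in enumerate(row):
            # checkerboard: flip sign every `period` pixels in x and y
            sign = 1 if ((x // period) + (y // period)) % 2 == 0 else -1
            # clamp result into the valid 0-255 range
            new_row.append(max(0, min(255, px + sign * strength)))
        out.append(new_row)
    return out

img = [[128] * 16 for _ in range(16)]  # flat mid-gray test image
marked = apply_watermark(img)
```

      A real watermark would use text or a logo instead of a checkerboard, but the principle is the same: high contrast, repeated everywhere, so no crop escapes it.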

      • Dashi@lemmy.world · 1 day ago

        Open-source AIs shouldn’t be training on art that isn’t given to them either, should they? I get that Lemmy has a hard-on for open source, but “open source” doesn’t give people free rein to steal other people’s work. If the art were freely offered, I bet these tools wouldn’t be run on it.

  • Flying Squid@lemmy.world · 1 day ago

    Funnily enough, I’ve found AI has a lot more trouble imitating primitive stick figure art like that than it does more complex art.

    • pemptago@lemmy.ml · 1 day ago

      It shouldn’t be. Unfortunately, AFAIK, no lawsuits have been settled yet. Andersen v. Stability AI seems like the one to watch with regard to OP.