In the past months, there’s been an issue on various instances where accounts would start uploading blatant CSAM to popular communities. First of all, this traumatizes anyone who sees it before the admins get to it, including the admins who have to review it in order to take it down. Second of all, even if the content is just a link to an external site, Lemmy still caches the thumbnail and stores it in the local pict-rs, causing headaches for the admins who have to somehow clear that out. Finally, both image posts and problematic thumbnails are federated to other Lemmy instances, which likewise end up storing such content in their own pict-rs image storage.

This has caused multiple instances to take radical measures, from defederating liberally, to stopping image uploads, to even shutting down entirely.

Today I’m happy to announce that I’ve spent multiple days developing a tool you can plug into your instance to stop this at the source: pictrs-safety

Using a new feature from pict-rs 0.4.3, we can now have pict-rs call an arbitrary endpoint to validate the content of an image before accepting the upload. pictrs-safety provides that endpoint, using an asynchronous approach to validate such images.
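To illustrate the mechanism (a simplified sketch, not the actual pictrs-safety code): pict-rs POSTs each new image to the configured endpoint and keeps or drops the upload depending on the response it gets back. The check_image() helper below is a hypothetical stand-in for handing the bytes to a fedi-safety worker and waiting for its verdict; check the pict-rs and pictrs-safety readmes for the exact contract.

```python
# Minimal sketch of a pict-rs validation endpoint (illustrative only).
from flask import Flask, request

app = Flask(__name__)

def check_image(data: bytes) -> bool:
    """Hypothetical: queue the image for a scanning worker and await its verdict."""
    return True  # stub: treat everything as safe

@app.route("/scan", methods=["POST"])
def scan():
    image = request.get_data()   # raw image bytes sent by pict-rs
    if check_image(image):
        return "OK", 200         # success: pict-rs keeps the upload
    return "Rejected", 400       # failure: pict-rs rejects it
```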

I had already developed fedi-safety, which can be used to regularly go through your image storage and delete all potential CSAM. I have now extended fedi-safety to plug into pictrs-safety and scan images sent by pict-rs.
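Conceptually, the storage-sweep mode boils down to something like the sketch below (not the actual fedi-safety code; classify_image() is a hypothetical stand-in for the GPU-backed classifier, and the storage path is just an example):

```python
# Conceptual sketch of a periodic image-storage sweep (illustrative only).
from pathlib import Path

def classify_image(path: Path) -> bool:
    """Hypothetical stand-in: return True if the image is considered safe."""
    return True  # the real check runs a GPU-backed image classifier

def sweep(storage_dir: str) -> None:
    for item in Path(storage_dir).rglob("*"):
        if item.is_file() and not classify_image(item):
            print(f"Deleting flagged image: {item}")
            item.unlink()

if __name__ == "__main__":
    sweep("/mnt/pictrs/files")  # adjust to wherever your pict-rs stores files
```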

The end effect is that any image uploaded or federated into your instance will be scanned in advance, and if fedi-safety thinks it’s potential CSAM, it will never reach your image storage at all!

This covers three important vectors for abuse:

  • Malicious users cannot upload CSAM to communities for trolling. Even novel GenerativeAI CSAM.
  • Users cannot upload CSAM images and then never submit a post or comment (which would make them invisible to admins). The images will be automatically rejected during upload.
  • Federated images and thumbnails of CSAM will be rejected by your pict-rs.

Now, that said, this tool is AI-driven and thus not perfect. There will be false positives, especially around lewd images and images which contain children or child-related topics (even if not lewd). This is the bargain we have to accept to prevent the much bigger problem described above.

By my napkin calculations, the false positive rate is below 1%, but someone’s innocent meme will certainly be affected eventually. If this happens, I ask that you just move on, as we currently don’t have a way to whitelist specific images. Don’t try to resize or modify the images to pass the filter. It won’t help you.

For lemmy admins:

  • pictrs-safety contains a docker-compose sample you can add to your lemmy’s docker-compose. You will need to put the .env in the same folder, or adjust the provided variables. (All kudos to @Penguincoder@beehaw.org for the docker support).
  • You need to adjust your pict-rs environment as well. Check the readme.
  • fedi-safety must run on a system with a GPU. The reason for this is that Lemmy provides just a 10-second grace period for each upload before it times the upload out, regardless of the scan results. A CPU scan will not be fast enough. However, my architecture allows fedi-safety to run in a different place than pictrs-safety; I am currently running it from my desktop. In fact, if you have a lot of images to scan, you can connect multiple scanning workers to pictrs-safety!
  • For those who don’t have access to a GPU, I am working on an NSFW scanner which will use the AI-Horde directly instead and won’t require fedi-safety at all. Stay tuned.

For other fediverse software admins:

fedi-safety can already be used to scan your image storage for CSAM, so you can also protect yourself and your users, even on Mastodon or Firefish or whatever.

I will try to provide real-time scanning for other software in the future as well, and PRs are welcome.

Divisions by Zero

This tool is already active on Divisions by Zero. Its usage should be transparent to you, but do let me know if you notice anything wrong.

Support

If you appreciate the priority work that I’ve put into this tool, please consider supporting this and future development work on Liberapay:

https://liberapay.com/db0/

All my work is and will always be FOSS and available for all who need it most.

  • Demigodrick@lemmy.zip · 1 year ago

    Just want to add - I’ve been using this (via my desktop!) for my instance for a little while now and it’s great. While the evidence shows there are false positives, I’ve yet to see it affect anything in real time.

    Beware your B2 transaction costs though! 😭 I’m sure there is a cheaper way to do it, but Backblaze costs went up quite a bit.

    • PenguinCoder@beehaw.org · 1 year ago

      B2 cloud storage update says:

      effective October 3, we’re making egress free (i.e. free download of data) for all B2 Cloud Storage customers—both pay-as-you-go and B2 Reserve—up to three times the amount of data you store with us, with any additional egress priced at just $0.01/GB. Because supporting an open cloud environment is central to our mission, expanding free egress to all customers so they can move data when and where they prefer is a key next step.

      • Demigodrick@lemmy.zip · 1 year ago

        Yeah, I had the email yesterday, but they don’t mention if this is specifically their download charge, or if the Class B and Class C transactions are included in this - I mean, I’ll be honest, I haven’t had time to properly look into it yet, but either way it should help.

    • Carighan Maconar@lemmy.world · 1 year ago

      There are, I think, official “training kits”? I remember reading about this - there are sets of data you can get to train CSAM detection with.

    • poVoq@slrpnk.net · 1 year ago

      It’s a general image classifier. You don’t need to train it specifically on CSAM if you are willing to accept a certain false positive rate.

    • hikaru755@feddit.de · 1 year ago

      I think “legally” and “without constantly wanting to vomit” is the trickier question. From a purely ethical standpoint I don’t see a problem with taking CSAM that’s apparently already flooding Lemmy and using it to test whether your filter works before nuking it. At least as long as you’re making sure you’re not exposing anyone else to it.

    • gamer@lemm.ee · 1 year ago

      Are you a lawyer? This feels like FUD.

      > I strongly advise anyone against using this software in production, as you will be on the hook for anything this software doesn’t catch.

      So if you don’t use this software, you’re not on the hook for the pictures that this tool doesn’t catch?

        • gamer@lemm.ee · 1 year ago

          I think you’re confused by the purpose of that statement. When the authors say not to use it for anything important, they’re basically trying to waive liability (informally). It’s kind of like how every open source license has a statement like:

          THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED …

          If you use an open source project for air traffic control software, and a bug causes a bunch of people to die, that’s your fault, not the author’s. The CLIP people are essentially saying that you shouldn’t use their software to build something that requires a lot of accuracy, since it probably wasn’t designed to be as accurate as you need it to be.

          But what I’m wondering is why you’re being so dramatic about this. You’re claiming that it’s highly dangerous/reckless/risky to use it, yet hand waving over the why.

    • ChunkMcHorkle@lemmy.dbzer0.com · 1 year ago

      > For abuse detection, you need to use a service that has been vetted by an actual lawyer.

      Name one. That exists and already works on Fediverse instances.

        • ChunkMcHorkle@lemmy.dbzer0.com · 1 year ago (edited)

          > That’s not how this works.

          So you can’t name one either.

          > If you’re running an instance, it’s your responsibility to find an appropriate tool.

          That’s exactly what the admin here did, and what’s more, he did so because he was forced to do so by a lack of “appropriate” or even adequate tools.

          Hence the straightforward question you failed to answer.

          I don’t run an instance. CSAM is but one of the many reasons why. But I have been paying attention to the discussions regarding the flood of it here, and the impossibilities involved in starting from scratch with preventing/blocking this on a federated instance.

          But for reasons I cannot begin to fathom, and with an intense interest in seeing this anti-CSAM tool remain unused, you are blithely sailing past all that with a demand to use a tool you personally could not even name and which obviously does not exist in acceptable form, or it would already have been gladly implemented.

          Glad he’s ignoring you and carrying on. I think I’ll do the same.

    • Vqhm@lemmy.world · 1 year ago (edited)

      On Reddit there were always so many concern Karens peddling FUD when you said anything critical of pedos. For the longest time on Reddit, all you had to do was see if the user also posted in pedofriends.

      It was so obvious.

      Any attempt to do anything that might impact a pedophile and there was quickly a brigade saying why that was wrong.

      No. Any action that supports pedophiles is morally wrong and we should NOT let perfect be the enemy of good when trying to make it more difficult to be a pedophile or support pedophiles.

      If you have a solution that’s better, implement it and share it as Open Source.

      Otherwise crawl back in your pedo hole at reddit and leave us alone.

        • Vqhm@lemmy.world · 1 year ago (edited)

          I called concern troll and then mentioned that on Reddit the subreddit pedofriends uses concern trolling as a tactic to prevent people from doing anything about pedophiles.

          I never said you were a pedophile, but way to out yourself as overly concerned with people being labeled pedos cuz you have such a big heart for protecting pedophiles.

      • Gabu@lemmy.world · 1 year ago

        While the OP’s concerns were largely… dubious(?), simply going through with literally whatever solution is just as big of a problem. I haven’t reviewed anything about this implementation in depth, thus can’t make any statements towards quality or truthfulness (except that 1% FP seems very optimistic), but you shouldn’t simply assume it will solve your problem without introducing another. Having said that, the author appears very well intentioned and has experience on the subject, so the verdict shouldn’t take long.

        • Vqhm@lemmy.world · 1 year ago

          The solution is to try.

          In dev it’s called “fail fast”

          You try several solutions to the problem and iterate until you find the best working option then optimize.

          There’s no harm in an independent open source dev trying to solve a problem. Even if they fail there might be a kernel of useful code in a novel way.

          You absolutely shouldn’t just settle on one option without exploring several.

          But being stuck in a loop of over planning, waiting for legal, or outsourcing the problem is why everyone hates waterfall and just has small teams try with “agile” now.

          1% failure is just the start. You should never call a beta shit cuz it fails sometimes. Failure is part of learning and improving.

  • iByteABit [he/him]@lemm.ee · 1 year ago

    Great work, this is the biggest issue that Lemmy has at the moment. I hope the admins will be able to set this up easily and start to roll back all the preventative measures.

  • fmstrat@lemmy.nowsci.com · 1 year ago

    Have you considered federating hashes of positive matches and working with the Lemmy team to not outward federate on a local positive match (and potentially have the hash go instead)?

    The former can reduce overhead and electricity use, and the latter will stop more distribution and aid those sans-GPU who can’t run it.

    Over time, the hash DB will grow and get better. In addition, perhaps there is metadata that can be used to track image similarity to positive matches to reduce false-positives, but I imagine that algorithm would be much more complicated.

    • db0@lemmy.dbzer0.com (OP) · 1 year ago

      Hashes won’t work for novel GenerativeAI images. For this kind of thing we need to be sharing tensors and comparing distances so that it catches format changes and compression artifacts. Theoretically possible. Practically, I don’t know how feasible it is.
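
      (Rough illustration of that idea, with random placeholder vectors standing in for real embeddings: two encodings of the same image should stay close in embedding space even when recompression changes every byte, and therefore every hash, of the file.)

      ```python
      import numpy as np

      def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
          return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

      original = np.random.rand(512)                             # embedding of the original image
      recompressed = original + np.random.normal(0, 0.01, 512)   # same image after recompression (simulated)
      unrelated = np.random.rand(512)                            # embedding of a different image

      print(cosine_distance(original, recompressed))  # small: likely the same image
      print(cosine_distance(original, unrelated))     # larger: different content
      ```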

      • fmstrat@lemmy.nowsci.com · 1 year ago

        How large is each tensor? If it can be stored as JSON or Base64 and is of sufficiently small size, integration into ActivityPub wouldn’t be all that bad. The time consuming part would likely be integration into Lemmy itself.

        Another option would be a separate service, similar to how Lemmy Explorer works, where a list of the latest tensors can be downloaded. It’s centralized vs distributed, but probably easier to implement. Just an API admins can register for to send and get tensors. I would be happy to assist with this if it is a route you would like to explore. Feel free to DM me.

        • db0@hachyderm.io · 1 year ago

          @fmstrat each tensor is small. The problem is when you have millions of them and you have to compare each image to each. You can’t index this. It has to be one by one. And you still need to convert the new image to tensors as well, which still needs a GPU. I just don’t see anything useful here. The current system would be faster.
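
          (To make the scaling concrete, a brute-force comparison with placeholder data: without an index, each new embedding has to be compared against every stored one, so the cost grows linearly with the size of the database.)

          ```python
          import numpy as np

          # Placeholder database of stored embeddings, normalised so a dot product gives cosine similarity.
          db = np.random.rand(100_000, 512).astype(np.float32)
          db /= np.linalg.norm(db, axis=1, keepdims=True)

          query = np.random.rand(512).astype(np.float32)
          query /= np.linalg.norm(query)

          similarities = db @ query            # one comparison per stored embedding
          best = int(np.argmax(similarities))
          print(best, float(similarities[best]))
          ```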