I posted the other day that you can clean up CSAM from your object storage using my AI-based tool. Many people expressed the wish to use it on their local filesystem-based pict-rs storage, so I’ve just extended its functionality to allow exactly that.

The new lemmy_safety_local_storage.py will go through your pict-rs volume on the filesystem, scan each image for CSAM, and delete any matches. The requirements are:

  • A linux account with read-write access to the volume files
  • A private key authentication for that account

As my main instance uses object storage, my testing is limited to my dev instance, where it all looks OK to me. But do run it with --dry_run first if you’re worried. To enforce the deletions afterwards, you can delete lemmy_safety.db and rerun (a method to reuse the --dry_run results is coming soon).
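Conceptually the loop is simple. Here’s a rough sketch (illustrative only: scan_volume, the checked table, and the is_csam callback are stand-in names, not the actual internals of lemmy_safety_local_storage.py):

```python
import sqlite3
from pathlib import Path

IMAGE_EXTENSIONS = {".jpg", ".jpeg", ".png", ".gif", ".webp"}

def scan_volume(volume: Path, db_path: str, dry_run: bool, is_csam) -> list:
    """Walk a pict-rs volume, classify each unseen image, delete (or just log) matches."""
    db = sqlite3.connect(db_path)
    # lemmy_safety.db remembers what was already checked, so reruns skip old files
    db.execute("CREATE TABLE IF NOT EXISTS checked (path TEXT PRIMARY KEY)")
    flagged = []
    for path in sorted(volume.rglob("*")):
        if path.suffix.lower() not in IMAGE_EXTENSIONS:
            continue
        if db.execute("SELECT 1 FROM checked WHERE path = ?", (str(path),)).fetchone():
            continue
        if is_csam(path):              # stand-in for the AI classifier
            flagged.append(path)
            if not dry_run:
                path.unlink()          # actually remove the file
        db.execute("INSERT INTO checked VALUES (?)", (str(path),))
    db.commit()
    return flagged
```

With dry_run the files are kept, but the db is still populated, which is why you currently need to delete lemmy_safety.db before rerunning to enforce the deletions.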

PS: If you were using the object-storage cleanup, that script has been renamed to lemmy_safety_object_storage.py

  • BitOneZero @ .world · 67 points · 1 year ago (edited)

    I hope people share the positive hits of CSAM and see how widespread the problem is…

    DRAMATIC EDIT: the records lemmy_safety_local_storage.py identifies, not the images! @bamboo@lemmy.blahaj.zone seems to think it “sounds like” I am ACTIVELY encouraging the spreading of child pornography images… NO! I mean audit data, such as timestamps, the account that uploaded, etc. Once you have the timestamp, the nginx logs from a Lemmy server should help identify the IP address.
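    For example, pulling candidate IPs out of an nginx access log near an upload timestamp might look like this (a sketch: the combined-log regex and the /pictrs/image POST endpoint are assumptions about a typical Lemmy reverse-proxy setup, not part of the tool):

```python
import re
from datetime import datetime, timedelta

# One combined-format access log line: IP, ident, user, [time], "METHOD path proto", ...
LOG_LINE = re.compile(r'^(\S+) \S+ \S+ \[([^\]]+)\] "(\S+) (\S+)')

def ips_near(log_lines, upload_time: datetime, window_minutes: int = 5):
    """Return client IPs that POSTed to the pict-rs upload endpoint near a timestamp."""
    lo = upload_time - timedelta(minutes=window_minutes)
    hi = upload_time + timedelta(minutes=window_minutes)
    hits = []
    for line in log_lines:
        m = LOG_LINE.match(line)
        if not m:
            continue
        ip, stamp, method, path = m.groups()
        # Naive comparison: drops the log's UTC offset for simplicity
        when = datetime.strptime(stamp, "%d/%b/%Y:%H:%M:%S %z").replace(tzinfo=None)
        if method == "POST" and path.startswith("/pictrs/image") and lo <= when <= hi:
            hits.append(ip)
    return hits
```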

      • BitOneZero @ .world · 28 points · 1 year ago (edited)

        It is not even a mistake; it’s pretty fucked up on the part of @bamboo@lemmy.blahaj.zone to jump to such a conclusion. Crap.

        • @Kuvwert@lemm.ee · 11 points · 1 year ago

          It’s cool, most everybody knows what you mean lol. Glad you clarified so there wouldn’t be future misunderstandings

        • Franzia · -1 point · 1 year ago

          Oh come on, it’s just a correction of communication

          • @Earthwormjim91@lemmy.world · 3 points · 1 year ago

            It’s probably projection. Nobody reasonable would have jumped to the same conclusion. It doesn’t even remotely read like that.

            • Franzia · 2 points · 1 year ago

              It sounds to me like an NT desire for perfectly crafted arguments, without ambiguity. I do this, and feel fortunate that I didn’t call for a correction myself. See how vicious you all are about it.

                • Franzia · 3 points · 1 year ago

                  You are saying the mind jumps, but that is the topic. I meant to say that being ND can create a desire for clarity in communication. A direct or terse argument.

    • @bamboo@lemmy.blahaj.zone · -55 points · 1 year ago

      I hope people share the positive hits of CSAM and see how widespread the problem is…

      It sounds like you’re encouraging people to share CSAM images found, which is obviously not the intent of this tool. There’s probably a better way to phrase what you were trying to say.

        • BitOneZero @ .world · 13 points · 1 year ago (edited)

          Yes. Odd how people think sharing CSAM is why people would post here, instead of actually tracking down and prosecuting those sharing CSAM. Details about the users who shared CSAM content, such as timestamps, would help identify the offenders for prosecution.

          • @bamboo@lemmy.blahaj.zone · -2 points · 1 year ago

            I assumed as much after reading it several times, but just wanted to let you know and point out that the statement could be misconstrued. Thanks for clarifying!

        • andrew · 5 points · 1 year ago

          Because of the way pictrs organizes photos, which I believe is by hash (could be random id but I suspect not), you should be able to share filenames for cleanup by neighbors without having to share the contents.

          Even if it’s not organized that way automatically, though, you can pretty easily use sha256sum to get a shareable hash before deleting the content.
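          E.g. the sha256sum step in Python (hash_then_delete is just an illustrative name; note an exact hash only matches byte-identical copies, unlike a perceptual hash):

```python
import hashlib
from pathlib import Path

def hash_then_delete(path: Path, dry_run: bool = False) -> str:
    """Record a shareable SHA-256 digest of a flagged file, then remove it."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    if not dry_run:
        path.unlink()   # content is gone; only the digest remains to share
    return digest
```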

          • BitOneZero @ .world · 3 points · 1 year ago

            I think timestamps of files would be one of the easier things, then trying to track back to the postings and comments that reference the upload… ideally the logged-in account (in a standard install of Lemmy, only logged-in users can upload to pict-rs).

      • BitOneZero @ .world · 9 points · 1 year ago

        It sounds like you’re encouraging people to share CSAM images found, which is obviously not the intent of this tool.

        Yes, that is in fact the context.

        Context: “which is obviously not the intent of this tool.”

        It is not my intent to share the images, nor is that the purpose of the tool… Sharing details about the users, such as timestamps, would be the obvious context.

      • @Earthwormjim91@lemmy.world · 6 points · 1 year ago

        Why are you such a weirdo that that’s where your mind goes?

        Sharing positive hits isn’t saying share the images. It’s saying share the data on who, what, when, where, and how the hit showed up positive.

        Who shared it.

        What was it (this is obviously going to be some kind of CSAM given that’s the tool).

        When did they share it (time stamps).

        Where did they share it (was the same image hit on other runs and what instances did it hit on with the tool).

        How did they do it (local sharing, an image hosting service, etc).

          • @SharkEatingBreakfast@sh.itjust.works · 4 points · 1 year ago

            In order to get the answers you’re looking for, ask the question “what exactly does this statement mean?” instead of saying “sounds like this means ___” and waiting for a confirmation/rejection of your assumption.

  • @webghost0101@sopuli.xyz · 24 points · 1 year ago

    We Evolve!

    There’s something really satisfying about witnessing a community start to talk about a serious issue and, days later, see things already improved.

    Lemmy is now the internet, glory to all volunteer devs. Let’s make it the best place we possibly can!

  • @XaeroDegreaz@lemmy.world · 20 points · 1 year ago

    I’m curious… how does one even test such a thing before distributing it, without having offending files to test against?

    Like during the development process of this project, how on earth can you test it properly? 😂

    • @Rescuer6394@feddit.nl · 18 points · 1 year ago

      It uses a model that generates a description of a photo, then searches the generated description for certain terms and ranks the image into levels of safety.

      To test it, you use a more general filter (for example, all NSFW content) and check whether the matches are correct.
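      Roughly, the describe-then-filter idea looks like this (the term lists and level names are made up for illustration; the real tool’s vocabulary and thresholds differ):

```python
# Toy version of "caption the image, then keyword-match the caption".
# Term lists below are illustrative only, not the tool's real vocabulary.
UNSAFE_TERMS = {"child", "kid", "minor"}
NSFW_TERMS = {"nude", "explicit"}

def safety_level(description: str) -> str:
    """Rank an image from its generated description."""
    words = set(description.lower().split())
    if words & UNSAFE_TERMS and words & NSFW_TERMS:
        return "unsafe"
    if words & NSFW_TERMS:
        return "nsfw"   # the broader filter you'd use to sanity-check matches
    return "ok"
```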

      • poVoq · 9 points · 1 year ago

        Hmm, thinking out loud… Wouldn’t that also make it easy to remove scat porn and Hitler + Nazi flag images?

        There has been a lot of spam like that on Lemmy and at least the latter is somewhat illegal to host in Germany as well.

        • @Rescuer6394@feddit.nl · 7 points · 1 year ago

          Yes… maybe.

          As the dev said, it flags a lot of false positives, so a human should look at them anyway.

          Maybe when this is a bit more evolved, we can use it to preprocess posts, and if a post gets flagged for something, a mod/admin needs to approve the post manually.

          Maybe for CSAM, it gets sent to an external service specialized in that stuff, so the mod/admin doesn’t have to look at the images.

    • @Amaltheamannen@lemmy.ml · 3 points · 1 year ago

      While it’s not the case for this project, I’m sure there’s some poor researcher out there who trained a model on actual confiscated CSAM. Or, more likely, overworked traumatized third-world content moderators employed by the likes of Meta.

  • @Decronym@lemmy.decronym.xyz (bot) · 13 points · 1 year ago (edited)

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    Fewer Letters   More Letters
    HTTP            Hypertext Transfer Protocol, the Web
    IP              Internet Protocol
    SSH             Secure Shell for remote terminal access
    nginx           Popular HTTP server

    3 acronyms in this thread; the most compressed thread commented on today has 8 acronyms.

    [Thread #93 for this sub, first seen 31st Aug 2023, 04:15] [FAQ] [Full list] [Contact] [Source code]

  • @fiveoar@lemmy.dbzer0.com · 12 points · 1 year ago

    This is fantastic work on an immediate problem. Thank you.

    There is something amazing about someone just sharing a solution like this without expectation of anything back. Even if this isn’t the best solution, it contributes to the global commons and improves society.

  • rentar42 · 11 points · 1 year ago

    It’s great that there’s now a tool, but this kind of issue is why I’m not considering self-hosting a fediverse service: due to the nature of the beast, even a single-user instance effectively becomes a publicly accessible distributor of content that others created/uploaded. I’m sure this could be restricted somewhat (by making the web UI inaccessible to the public), but federation means that other instances need to be able to get content from the server. That’s way too much legal risk for me.

    • db0 (OP) · 13 points · 1 year ago

      Actually, the other instances don’t pull content; you push it out instead. Basically, if you don’t run pict-rs and you don’t allow user registrations, you’re safe from everything except potential copyright infringement from text someone might post. If you make your web UI inaccessible to anyone but your IP, you protect against even that.

      • rentar42 · 3 points · 1 year ago

        What stops other instances from pushing bad stuff to you? Even if someone I follow posts illegal stuff, it’ll end up on my server.

        • db0 (OP) · 3 points · 1 year ago

          If you don’t run pict-rs, they literally cannot push images to you, as you have nowhere to store them.

  • poVoq · 5 points · 1 year ago

    Thanks for adding this. I guess I now have my weekend planned: moving pict-rs to a server with a fast enough GPU and trying this out 🤔

    • db0 (OP) · 7 points · 1 year ago

      You don’t need to move pict-rs to a GPU server; in fact, that would be prohibitively expensive long-term. I suggest you just use your PC to run this against your current pict-rs server, or rent a GPU server just for this one-off task.

      • poVoq · 3 points · 1 year ago (edited)

        I have a server with a smaller Nvidia GPU available. I hope the 3 GB of VRAM it has will be sufficient.

        • db0 (OP) · 4 points · 1 year ago

          OK, but just to point out that the script hasn’t been set up to run locally yet, just over SSH. Making it look for the files locally is my next to-do.

          • poVoq · 4 points · 1 year ago

            Ah, I was already wondering why it would need SSH keys to scan local files 😅 Well, I only have time to look into this on Sunday I think, so no rush.

  • @doot@social.bug.expert · 5 points · 1 year ago

    It seems like clip-interrogator isn’t explicitly supported on Apple silicon; I’ll give it a try later.

    thanks for making this!

  • fmstrat · 3 points · 1 year ago (edited)

    Is there a reason it needs a private key, vs. just being able to point it at a local folder and run as a user with write access?

    • db0 (OP) · 7 points · 1 year ago

      Most Lemmy servers do not have a GPU, so running it locally would be too slow. But I plan to provide this option soon if people want it.

  • sapient [they/them] · 3 points · 1 year ago

    Something that might be useful long term is training an AI and releasing the weights, so admins can use it to check images for CSAM. The main problem is finding a way to do this without storing those kinds of images or videos :/

    My understanding is that right now, the main mechanisms involved use several central databases of perceptual hashes of known CSAM material. The problem is that this ends up being a whack-a-mole solution, and at least in theory governments could use these databases to censor copyrighted or more generally “unapproved” content, though I imagine such a db would lose trust quickly, and I’m not aware of this being an issue in practice.
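    For concreteness, a difference hash (“dHash”) is one common perceptual-hash scheme. This toy version assumes the image is already a 9x8 grayscale grid (real code would use Pillow/imagehash to resize first), and is purely illustrative:

```python
# Toy difference hash: 9x8 grayscale grid -> 64-bit hash of brightness gradients.
def dhash(pixels: list) -> int:
    """Each bit records whether a pixel is darker than its right neighbor."""
    bits = 0
    for row in pixels:                       # 8 rows
        for x in range(8):                   # compare adjacent columns
            bits = (bits << 1) | (row[x] < row[x + 1])
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits; small distances mean visually similar images."""
    return bin(a ^ b).count("1")
```

    Near-duplicates differ in only a few bits, so these databases match on small Hamming distance rather than exact equality, which is what makes the hashes “perceptual”.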

    One potential solution is “opportunistic training” where, when new CSAM material gets identified and submitted to the FBI or these databases by various server admins, a small amount of training is done on the AI weights before the image or video is deleted and only a perceptual hash remains. Furthermore, if a picture is reported as “known CSAM” by these dbs, then you do the same thing with that image before it gets deleted.

    To avoid false positives, you also train the AI on general non-CSAM content.

    Ideally this process would be fully automated so no one has to look at that shit; over time, you’d theoretically get a neural net capable of identifying CSAM reliably with few or no false positives or false negatives. Admins could also try some kind of distributed training, where each contributes weight deltas from local training, or each builds up LoRA-style improvement modules and people combine them to reduce the bandwidth needed for sharing modifications.