A pseudonymous coder has created and released an open source “tar pit” that indefinitely traps AI training web crawlers in an infinite, randomly generated series of pages, wasting their time and computing power. The program, called Nepenthes after the genus of carnivorous pitcher plants that trap and consume their prey, can be deployed by webpage owners to protect their own content from being scraped, or can be deployed “offensively” as a honeypot trap to waste AI companies’ resources.

Registration bypass: https://archive.is/3tEl0
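
For a rough idea of how such a tarpit works, here is a minimal sketch (not Nepenthes’ actual code; the word list, link scheme, and delay are made up for illustration): a handler that serves deliberately slow pages of random text whose links all point back into the same maze.

```python
# Minimal tarpit sketch: every URL returns a slow page of random filler
# whose links lead deeper into the same infinite maze. Illustrative only.
import random
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

WORDS = ["lorem", "ipsum", "dolor", "sit", "amet", "pitcher", "plant", "nectar"]

def random_page(path: str) -> str:
    # Seed on the path so repeated fetches of a URL look like a stable page.
    rng = random.Random(path)
    text = " ".join(rng.choice(WORDS) for _ in range(200))
    links = "".join(f'<a href="/{rng.getrandbits(32):08x}">more</a> ' for _ in range(10))
    return f"<html><body><p>{text}</p>{links}</body></html>"

class TarpitHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        time.sleep(2)  # deliberately slow, to tie up the crawler
        body = random_page(self.path).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), TarpitHandler).serve_forever()
```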

  • @ExcessShiv@lemmy.dbzer0.com
    48 points · 2 months ago

    From the description, it sounds like this would have only a limited effect, and only a short-lived one until guardrails are implemented in crawlers.

    • Flying Squid (OP)
      40 points · 2 months ago

      Probably so. It’s always going to be an arms race, just like with malware.

      • SkaveRat
        12 points · 2 months ago

        I mean… not really. This isn’t even a defence. Any web crawler worth its salt will just stop after a while, and they’ve been doing so for literally decades already.

        • xigoi
          9 points · 2 months ago

          So it won’t crawl any actual content on that site? Goal achieved.

        • FaceDeer
          3 points · 2 months ago

          Indeed. And any modern AI training system is going to be extensively curating any training data that ends up being fed into the AI, probably processing it through other AIs to generate synthetic data from it. The days of early ChatGPT, when LLMs were trained by just dumping giant piles of random text on them and hoping they’d figure it out somehow, are long past.

          This reminds me of Nightshade, the supposed anti-art-AI technique that could be defeated by resizing the image (which all art AI training systems do as a matter of course). It may make people “feel better” but it’s not going to have any real impact on anything.
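
          (For context, the resizing referred to here is the kind of routine preprocessing sketched below with Pillow; the file names and target resolution are invented, and whether a plain resize fully defeats a given perturbation is the claim above, not something this snippet proves.)

          ```python
          # Typical image preprocessing in a training pipeline: everything gets
          # resampled to a fixed resolution before the model ever sees it.
          from PIL import Image

          img = Image.open("scraped_artwork.png").convert("RGB")
          img = img.resize((512, 512), Image.LANCZOS)  # resampling disturbs pixel-level tricks
          img.save("training_input.png")
          ```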

          • @ToxicWaste@lemm.ee
            1 point · 2 months ago

            Sure, it’s easy to detect, and they will. However, at the moment they don’t seem to be doing it. The author said this after deploying a PoC:

            Aaron B told 404 Media “If that’s true, I’ve several million lines of access log that says even Google Almighty didn’t graduate” to avoiding the trap.

            So no, it is not a silver bullet, but it is a defense strategy that seems to work at the moment.

            • FaceDeer
              2 points · 2 months ago

              No, a few million hits from bots is routine for anything that’s facing the public at all. Others have posted in this thread (or others like it; this article’s been making the rounds a lot in the past few days) that even the most basic of sites can get that sort of bot traffic, and that a simple recursion depth limit setting is enough to avoid the “infinite maze” aspect.
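
              (As a rough illustration of that guardrail, and assuming nothing about any particular crawler’s internals: a per-site depth cap and page budget like the sketch below is all it takes. The fetch and extract_links callables are placeholders, not a real crawler’s API.)

              ```python
              # Sketch of the "recursion depth limit" guardrail: breadth-first crawl
              # with a per-site depth cap and page budget. Illustrative only.
              from collections import deque
              from urllib.parse import urljoin, urlparse

              MAX_DEPTH = 5              # stop following links past this depth
              MAX_PAGES_PER_HOST = 1000  # overall budget per site

              def crawl(start_url, fetch, extract_links):
                  # fetch(url) -> html and extract_links(html) -> list[str] stand in
                  # for whatever HTTP client and parser the crawler actually uses.
                  seen = {start_url}
                  pages_per_host = {}
                  queue = deque([(start_url, 0)])
                  while queue:
                      url, depth = queue.popleft()
                      if depth > MAX_DEPTH:
                          continue
                      host = urlparse(url).netloc
                      if pages_per_host.get(host, 0) >= MAX_PAGES_PER_HOST:
                          continue  # suspiciously large site: stop digging
                      pages_per_host[host] = pages_per_host.get(host, 0) + 1
                      for link in extract_links(fetch(url)):
                          absolute = urljoin(url, link)
                          if absolute not in seen:
                              seen.add(absolute)
                              queue.append((absolute, depth + 1))
              ```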

              As for AI training, the access log says nothing about that. As I said, AI training sets are not made by just dumping giant piles of randomly scraped text on AIs any more. If a trainer scraped one of those “infinite maze” sites, the quality of the resulting data would be checked, and if it was generated by anything remotely economical for the site to be running, it’d almost certainly be discarded as junk.

              • @ToxicWaste@lemm.ee
                1 point · 2 months ago

                The main angle is not to “poison” the training set; it is to waste time, energy and resources. The site loads deliberately slowly and produces garbage, which has to be filtered out.

                As I said: not a silver bullet, but at least some threads were tied up collecting garbage, painfully slowly. And since the data is useless, whatever their cleanup process is has more work to do. It might even be tricked into discarding the whole website, because the signal-to-noise ratio is so bad.

                So I would still say the author achieved his goal.

                • FaceDeer
                  1 point · 2 months ago

                  The site producing the nonsense has to produce lots of it every time a bot comes along; the trainers only have to filter it once. As others have pointed out, it’s likely easy for an automated filter to spot. I don’t see it as being a clear win.
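
                  (To make “easy for an automated filter to spot” concrete, below is the kind of cheap heuristic pass a data-cleaning pipeline might run before anything reaches training; the rules and thresholds are invented for illustration, not taken from any real pipeline.)

                  ```python
                  # Toy quality filter: score a document with cheap heuristics and drop
                  # it if it looks like generated filler. Thresholds are arbitrary.
                  def looks_like_junk(text: str) -> bool:
                      words = text.lower().split()
                      if len(words) < 50:
                          return True
                      # Repetitiveness: share of 3-grams that occur more than once.
                      trigrams = [tuple(words[i:i + 3]) for i in range(len(words) - 2)]
                      if (len(trigrams) - len(set(trigrams))) / len(trigrams) > 0.30:
                          return True
                      # Tiny vocabulary relative to length (Markov babble reuses few words).
                      if len(set(words)) / len(words) < 0.20:
                          return True
                      # Almost no common function words suggests non-natural text.
                      stop_words = {"the", "a", "and", "of", "to", "in", "is", "that", "it", "for"}
                      if sum(w in stop_words for w in words) / len(words) < 0.02:
                          return True
                      return False
                  ```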

  • @AbouBenAdhem@lemmy.world
    14 points · 2 months ago

    The typical web crawler doesn’t appear to have a lot of logic. It downloads a URL, and if it sees links to other URLs, it downloads those too.

    So it has nothing to do with “AI training” in the usual sense.
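
    (That fetch-and-follow loop, sketched below with just the standard library and a crude regex for links, is exactly the behaviour a tarpit exploits: with no depth or page budget, the queue never empties.)

    ```python
    # The naive crawler being described: fetch a page, queue every link on
    # it, repeat. Against an infinite maze this loop never terminates.
    import re
    from urllib.parse import urljoin
    from urllib.request import urlopen

    def naive_crawl(start_url):
        queue, seen = [start_url], {start_url}
        while queue:
            url = queue.pop(0)
            html = urlopen(url).read().decode("utf-8", errors="replace")
            for link in re.findall(r'href="([^"]+)"', html):
                absolute = urljoin(url, link)
                if absolute not in seen:
                    seen.add(absolute)
                    queue.append(absolute)
    ```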

    • jungle
      4 points · 2 months ago

      It also has nothing to do with real web crawlers. Maybe the first crawlers, back when the web was a couple million pages, were that dumb, but that’s ancient history.