Long story short: my VPS, which forwards traffic to my home servers over Tailscale, got hammered by thousands of requests per minute from Anthropic’s Claude AI crawler, all of them coming from different AWS IPs.

The VPS has a 1 TB monthly cap, but it’s still kinda shitty to get huge spikes like today’s 13 GB in just a couple of minutes.

How do you deal with something like this?
I’m only really running a Caddy reverse proxy on the VPS, which forwards my home server’s services through Tailscale.

I’d really like to avoid solutions like Cloudflare, since they f over CGNAT users very frequently and all that. Don’t think a WAF would help with this at all(?), but rate limiting on the reverse proxy might work.
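Something like this is what I had in mind for the rate limiting, though it’s only a rough sketch: Caddy needs a third-party plugin for this (github.com/mholt/caddy-ratelimit), the exact directive names may differ by plugin version, and the hostname and upstream address are placeholders.

```
# Rough sketch only: assumes the third-party rate_limit plugin
# (github.com/mholt/caddy-ratelimit) is compiled into Caddy; syntax may
# differ between plugin versions. Hostname and upstream are placeholders.
{
	order rate_limit before basicauth
}

example.com {
	rate_limit {
		zone per_ip {
			key    {remote_host}   # one bucket per client IP
			events 60              # allow 60 requests...
			window 1m              # ...per minute, then respond 429
		}
	}
	reverse_proxy 100.64.0.2:8080  # Tailscale IP of the home server (placeholder)
}
```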

(The VPS has fail2ban, and I’m using /etc/hosts.deny for manual blocking. There’s a WIP website on my root domain with a robots.txt that should be denying AWS bots as well…)

I’m still learning and would really appreciate any suggestions.

  • Greg Clarke@lemmy.ca · 12 hours ago

    What are you hosting, and who are your users? Do you receive any legitimate traffic from AWS or other cloud-provider IP addresses? There will always be edge cases, like people hosting VPN exit nodes on a VPS, but if it’s a tiny portion of your legitimate traffic, I would consider blocking all incoming traffic from cloud providers and then whitelisting anything that makes sense, like search-engine crawlers, if necessary.
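    If you go that route, AWS at least publishes its address ranges in machine-readable form. A quick sketch of the idea (filtering for EC2 and feeding the result into an ipset/nftables set is just one way to do it):

```python
# Sketch: grab AWS's published IP ranges and print the EC2 CIDRs,
# e.g. to load into an ipset/nftables set that your firewall drops.
# Other cloud providers publish similar lists; adapt as needed.
import json
import urllib.request

URL = "https://ip-ranges.amazonaws.com/ip-ranges.json"

with urllib.request.urlopen(URL, timeout=30) as resp:
    data = json.load(resp)

# Most scraper traffic from AWS comes out of EC2 address space.
cidrs = sorted({p["ip_prefix"] for p in data["prefixes"] if p["service"] == "EC2"})
for cidr in cidrs:
    print(cidr)
```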

  • WasPentalive@lemmy.one · 11 hours ago

    Too bad you can’t post a usage notice saying that anything scraped to train an AI will be charged and will owe $some-huge-money, then pepper the site with bogus facts, occasionally ask various AIs about the bogus facts to prove scraping happened, and invoice the AI’s company.

  • breadsmasher@lemmy.world · 14 hours ago

    I’m struggling to find it, but there’s an “AI tarpit” that causes scrapers to get stuck, something like that? I’m sure I saw it posted on Lemmy recently; hopefully someone can link it.

        • N0x0n@lemmy.ml · 3 hours ago

          Now I just want to host a web page and expose it with Nepenthes…

          First, because I’m a big fan of carnivorous plants.

          Second, because it lets you poison LLMs and AIs and fuck with their data.

          Lastly, because I can do my part and say F#CK Y0U to those privacy-invading, data-hungry a$$holes!

          I don’t even expose anything directly to the web (it’s always accessible through a tunnel like WireGuard), and I don’t have any important data to protect from AIs or LLMs. But getting to fuck with them while they continuously harvest data from everyone is something I was already thinking of, I just didn’t know how.

          Thanks for the link!

    • zoey@lemmy.librebun.com (OP) · 14 hours ago

      Not gonna lie, the $3900/mo at the top of the /pricing page is pretty wild.
      Searched “crowdsec docker” and they have docs and all that. Thank you very much; I’ve heard of CrowdSec before but never paid much attention. I’ll absolutely check this out!
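      For anyone else looking at the same thing, a compose file for the agent can stay pretty small. This is only a rough sketch under assumptions (image tag, paths and the collection name should be checked against the official docs), and actually blocking traffic still needs a separate bouncer such as the firewall bouncer:

```yaml
# Rough sketch, not a verified config: run the CrowdSec agent in Docker and
# point it at the reverse proxy's logs. Paths and the collection name are
# assumptions; blocking is done by a separate bouncer (not shown).
services:
  crowdsec:
    image: crowdsecurity/crowdsec:latest
    restart: unless-stopped
    environment:
      - COLLECTIONS=crowdsecurity/caddy   # parsers/scenarios for Caddy logs (assumed name)
    volumes:
      - ./crowdsec/config:/etc/crowdsec
      - ./crowdsec/data:/var/lib/crowdsec/data
      - /var/log/caddy:/var/log/caddy:ro  # host log path is a placeholder
```

      You’d also still need an acquisitions entry (acquis.yaml) telling the agent to actually read that log file; the docs cover it.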

      • A good tarpit will reduce your bandwidth. Tarpits aren’t about shoving useless data at bots; they’re about responding as slowly as possible, keeping the bot connected for as long as possible while giving it nothing.

        Endlessh accepts the connection and then… does nothing. It doesn’t even actually perform the SSH handshake. It just very… slowly… sends… an endless preamble, until the bot gives up.

        As I write, my Internet-facing SSH tarpit currently has 27 clients trapped in it. A few of these have been connected for weeks. In one particular spike it had 1,378 clients trapped at once, lasting about 20 hours.
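        The whole trick fits in a few lines. Here’s a toy sketch of that idea (not the real Endlessh, which is written in C; the port and delay are arbitrary):

```python
# Toy sketch of the Endlessh idea: accept the TCP connection, then trickle
# out an endless pre-banner so the client never reaches the real handshake.
import asyncio
import random
import string

async def tarpit(reader: asyncio.StreamReader, writer: asyncio.StreamWriter) -> None:
    try:
        while True:
            # SSH clients keep reading banner lines until one starts with
            # "SSH-", so keep sending junk lines that never do.
            line = "".join(random.choices(string.ascii_letters, k=16)) + "\r\n"
            writer.write(line.encode())
            await writer.drain()
            await asyncio.sleep(10)  # the slower this is, the longer bots stay stuck
    except (ConnectionResetError, BrokenPipeError):
        pass  # the bot finally gave up
    finally:
        writer.close()

async def main() -> None:
    # Listen on the exposed port; keep the real sshd somewhere else.
    server = await asyncio.start_server(tarpit, "0.0.0.0", 2222)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```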

        • mholiv@lemmy.world · 3 minutes ago

          Fair. But I haven’t seen any anti-AI-scraper tarpits that do that. The ones I’ve seen mostly just pipe 10 MB of /dev/urandom out there.

          Also, I assume the programmers working at AI companies are not literally mentally deficient. They would certainly add a .timeout(10) or whatever to their scrapers, and probably have something more dynamic than that.

  • poVoq@slrpnk.net · 14 hours ago (edited)

    It seems like any somewhat easy-to-implement solution gets circumvented by them quickly. Some of the bots do respect robots.txt, though, if you explicitly add their self-reported user agent (but they change it from time to time). This repo has a regularly updated list: https://github.com/ai-robots-txt/ai.robots.txt/
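    For reference, the resulting robots.txt is just a block like this (a hand-picked subset only; the repo above has the full, regularly updated list, and of course only the bots that actually honor robots.txt will care):

```
# Subset of self-reported AI crawler user agents; see the ai.robots.txt
# repo linked above for the maintained list.
User-agent: GPTBot
User-agent: ClaudeBot
User-agent: Claude-Web
User-agent: anthropic-ai
User-agent: CCBot
User-agent: Bytespider
Disallow: /
```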

    In my experience, git forges are especially hit hard, and the only real solution I found is to put a login wall in front, which kinda sucks, especially for open-source projects you want to self-host.

    Oh and recently the mlmym (old reddit) frontend for Lemmy seems to have started attracting AI scraping as well. We had to turn it off on our instance because of that.

    • zoey@lemmy.librebun.com (OP) · 14 hours ago (edited)

      “In my experience, git forges are especially hit hard”

      Is that why my Forgejo instance has been hit like crazy twice before…
      Why can’t we have nice things? Thank you!

      EDIT: Hopefully Photon doesn’t get in their sights as well. Though after using the official Lemmy web UI for a while, I do really like it a lot.

      • poVoq@slrpnk.net · 13 hours ago

        Yeah, Forgejo and Gitea. I think insufficient caching on the side of these git forges is part of what makes it especially bad, but in the end that is victim blaming 🫠

        Mlmym seems to be the target because it is mostly JavaScript-free and therefore easier to scrape, I think. But the other Lemmy frontends are also not well protected. Lemmy-ui doesn’t even let you easily add a custom robots.txt; you have to override it manually in the reverse proxy.
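        With Caddy, overriding it is just a matter of handling /robots.txt before the proxied routes. A rough sketch (hostname, directory and upstream port are placeholders):

```
# Sketch: serve a custom robots.txt from disk and proxy everything else
# to lemmy-ui. Hostname, directory and upstream port are placeholders.
lemmy.example.org {
	handle /robots.txt {
		root * /srv/lemmy-overrides   # directory containing your own robots.txt
		file_server
	}
	handle {
		reverse_proxy localhost:1234  # lemmy-ui upstream
	}
}
```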

  • solrize@lemmy.world · 14 hours ago

    Might be worth patching fail2ban to recognize the scrapers and block them in iptables.
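    Something along these lines, assuming the proxy writes a common/combined-format access log (Caddy logs JSON by default, so the regex and log path would need adjusting; the file names here are made up):

```
# /etc/fail2ban/filter.d/ai-scrapers.conf  (hypothetical file name)
[Definition]
failregex = ^<HOST> .*(ClaudeBot|GPTBot|CCBot|Bytespider)
ignoreregex =

# /etc/fail2ban/jail.d/ai-scrapers.local  (hypothetical file name)
# logpath is a placeholder; point it at your real access log.
[ai-scrapers]
enabled  = true
port     = http,https
filter   = ai-scrapers
logpath  = /var/log/access.log
maxretry = 1
bantime  = 86400
```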