Attackers explain how an anti-spam defense became an AI weapon.
Attack
To me this looks like defense. If the site asks you not to scrape and you do it anyway, you are the attacker and deserve the garbage.
I can save you a lot of trouble, actually. You don’t need all of this!
Just make a custom 404 page that returns 13 MBs of junk along with status code 200 and has a few dead links (404, so it just goes to itself)
There are no bots on the domain I do this on anymore. From swarming to zero in under a week.
You don’t need tar pits or heuristics or anything else fancy. Just make your website so expensive to crawl that it’s not worth it so they filter themselves.
Surely any competent web scraper will avoid an infinite loop?
Critics debating Nepenthes’ utility on Hacker News suggested that most AI crawlers could easily avoid tarpits like Nepenthes, with one commenter describing the attack as being “very crawler 101.” Aaron said that was his “favorite comment” because if tarpits are considered elementary attacks, he has “2 million lines of access log that show that Google didn’t graduate.”
You assume incorrectly that bots, scrapers and drive-by malware attacks are made by competent people. I have years’ worth of stories I’m not going to post on the open internet that say otherwise. I also have months’ worth of access logs that say otherwise. AhrefsBot in particular is completely unable to deal with anything you throw at it. It spent weeks in a tarpit I made, very similar to the one in the article, looping links, until I finally put it out of its misery.
Just make a custom 404 page that returns 13 MBs of junk along with status code 200
How would you go about doing this part? Asking for a friend who’s an idiot, totally not for me.
I use Apache2 and PHP, here’s what I did:
in .htaccess you can set
ErrorDocument 404 /error-hole.php
https://httpd.apache.org/docs/2.4/custom-error.html
in error-hole.php,
<?php http_response_code(200); ?> <p>*paste a string that is 13 megabytes long*</p>
For the string, I used dd to generate 13 MBs of noise from /dev/urandom and then I converted that to base64 so it would paste into error-hole.php. You should probably hide some invisible dead links around your website as honeypots for the bots that normal users can’t see.
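As a rough sketch of what that error-hole.php could look like, assuming you’d rather generate and cache the junk on first request than paste a 13 MB string into the file by hand (the junk.b64 cache file and the link count are just placeholders):

<?php
// error-hole.php: sketch of the garbage 404 page described above.
// Assumption: the base64 noise is generated once and cached on disk
// instead of being pasted into the source.

http_response_code(200);                 // tell the crawler everything is fine

$cache = __DIR__ . '/junk.b64';          // hypothetical cache file
if (!file_exists($cache)) {
    // 13 MB of random bytes -> roughly 17 MB of base64 text
    file_put_contents($cache, base64_encode(random_bytes(13 * 1024 * 1024)));
}

echo '<html><body><p>';
echo file_get_contents($cache);
echo '</p>';

// a few dead links that 404 and therefore land right back on this page
for ($i = 0; $i < 5; $i++) {
    echo '<a href="/' . bin2hex(random_bytes(8)) . '">more</a> ';
}
echo '</body></html>';

The invisible honeypot links on your real pages can be as simple as anchors styled display:none, which normal visitors never see or click but crawlers happily follow.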
How does this affect a genuine user who experiences a 404 on your site?
They will see a long string of base64 that takes a quarter of a second longer to load than a regular page. If it’s important to you, you can make the base64 string invisible and add some HTML to make it appear as a normal 404 page.
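If you want that, one way to do it, sticking with the hypothetical junk.b64 cache file from the sketch above, is to show a readable 404 message and park the base64 in an element that browsers don’t render:

<?php
// still lie with a 200, but give humans something readable
http_response_code(200);
echo '<p>404 - couldn\'t find what you were looking for.</p>';
// the junk stays in the document for crawlers, hidden from people
echo '<div style="display:none" aria-hidden="true">';
echo file_get_contents(__DIR__ . '/junk.b64');
echo '</div>';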
I don’t know a lot about this, but I would guess a normal user would like a message that says something along the lines of “404, couldn’t find what you were looking for.” The status code and the links back to itself, as well as the 13 MBs of noise, should probably not irritate them. Hidden links should also not irritate normal users.
I also “don’t know a lot about this”, but I do know that your browser receiving a 200 means that everything worked properly. From what I can tell, this technique replaces any and every 404 response with a 200, thus tricking the browser (and therefore the user) into thinking the site is working as expected every time they run into a missing webpage on this site.
The user doesn’t see the status code, they see what’s rendered to the screen.
For the string, I used dd to generate 13 MBs of noise from /dev/urandom and then I converted that to base64 so it would paste into error-hole.php
That string is going to end up being 17 MB, assuming it’s a UTF-8 encoded .php file.
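For what it’s worth, that lines up with how base64 works: every 3 input bytes become 4 output characters, so 13 MB of random bytes comes out to roughly 13 × 4/3 ≈ 17.3 MB of text before counting any HTML around it.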
idk what to tell you.
ls -lha
-rw-rw-r-- 1 www-data www-data 14M Jan 14 23:05 error-hole.php
Rather funny that some of these are, in code, if (useragent == chatgpt) kind of silliness. You need heuristics to detect scrapers, because they’ll just report their user agent as the average user’s browser.
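One user-agent-independent heuristic, sketched here under the assumption that your normal pages already carry the invisible honeypot links mentioned earlier in the thread (the file names are made up): anything that requests a link no human can see gets flagged and fed the garbage.

<?php
// honeypot.php: a sketch of a heuristic that ignores the User-Agent entirely.
// Assumption: every normal page contains an invisible link to /honeypot.php
// (e.g. an <a> styled display:none), so real visitors never request it.

$flagged = __DIR__ . '/flagged-ips.txt';   // hypothetical flag list
$ip = $_SERVER['REMOTE_ADDR'] ?? 'unknown';

// remember this client no matter what its User-Agent claims to be
file_put_contents($flagged, $ip . "\n", FILE_APPEND | LOCK_EX);

// then drop it into the same garbage page as every other dead end
include __DIR__ . '/error-hole.php';

Other pages could then check flagged-ips.txt (or a proper datastore) and serve the junk to anything on the list instead of the real content.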
This will be as effective against LLM trainers as Nightshade has been against generative image AI trainers.