Also, inserting hidden or misleading links is specifically against Google Search's spam policies [0], which have this to say: "We detect policy-violating practices both through automated systems and, as needed, human review that can result in a manual action. Sites that violate our policies may rank lower in results or not appear in results at all."
So you may well end up doing more damage to your own site than to the bots by using dodgy links in this manner.
[0]https://developers.google.com/search/docs/essentials/spam-po...
I have a public website, and web scrapers are stealing my work. I just stole this article, and you are stealing my comment. Thieves, thieves, and nothing but thieves!
I don't think that's the case. I'm not even arguing they aren't the worst people on the planet - might as well be. But all I see them doing is burning money all over the place.
Websites are an endless stream of cookies.
The analogy doesn’t hold.
Everything is a Remix culture. We should promote remix culture rather than hamper it.
Everything is a Remix (Original Series) https://youtu.be/nJPERZDfyWc
… browses memory and storage prices on NewEgg …
Hmm.
But the word digital is distracting us.
The word information is the important one. The question isn't where information goes. It's where information comes from.
Is new information post scarcity?
Can it ever be?
I'm also going to download a car.
You are allowed to take one cookie. But you are allowed to view a public website multiple times if you so want.
> The arms race just took another step, and if you're spending money creating or hosting this kind of content, it's not going to make up for the money you're losing by your other content getting scraped.
So we should all just do nothing and accept the inevitable?
It seems pretty reasonable that any scraper would already have mitigations for things like this as a function of just being on the internet.
More centralized web ftw.
My current problem is OpenAI, which scrapes massively while ignoring every limit (426, 444, and whatever else you throw at them), and botnets from East Asia using one IP per scrape, but thousands of IPs.
Good enough for me.
> More centralized web ftw.
This ain't got anything to do with "centralized web," this kind of epistemological vandalism can't be shunned enough.
I'm completely uncertain that the unsophisticated garbage I generated makes any difference, much less "poisons" the LLMs. A fellow can dream, can't he?
Many scrapers already know not to follow these; it's how sites used to "cheat" PageRank by serving keyword soups.
1. Simple, cheap, easy-to-detect bots will scrape the poison, and feed links to expensive-to-run browser-based bots that you can't detect in any other way.
2. Once you see a browser visit a bullshit link, you insta-ban it, as you can now see that it is a bot because it has been poisoned with the bullshit data.
My personal preference is using iocaine for this purpose though, in order to protect the entire server as opposed to a single site.
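The two-step trap described above can be sketched in a few lines. This is an illustrative sketch only, not code from Miasma or iocaine; the trap prefix, the in-memory ban set, and the function name are all hypothetical:

```python
# Hypothetical sketch of the honeypot-link ban: any client that follows a
# link under a robots.txt-disallowed prefix has outed itself as a scraper
# and is banned from then on. Names here are illustrative, not a real API.

TRAP_PREFIX = "/bots/"      # hidden links point here; robots.txt disallows it
banned_ips: set[str] = set()

def handle_request(ip: str, path: str) -> int:
    """Return an HTTP status code for a request from `ip` for `path`."""
    if ip in banned_ips:
        return 403              # previously caught following a trap link
    if path.startswith(TRAP_PREFIX):
        banned_ips.add(ip)      # step 2: insta-ban on the first trap hit
        return 403
    return 200                  # serve normal content

# A well-behaved visitor never sees the trap:
assert handle_request("10.0.0.1", "/index.html") == 200
# A scraper that followed a poisoned link is banned for all later requests:
assert handle_request("10.0.0.2", "/bots/page-42") == 403
assert handle_request("10.0.0.2", "/index.html") == 403
```

In practice the ban set would live in something shared (a firewall table, Redis, fail2ban) rather than process memory, but the detection logic is exactly this simple.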
Isn't it the case that AI models learn better and are more performant with carefully curated material, so companies do actually filter for quality input?
Isn't it also the case that the use of RLHF and other refinement techniques essentially 'cures' the models of bad input?
Isn't it also, potentially, the case that the ai-scrapers are mostly looking for content based on user queries, rather than as training data?
If the answers to the questions lean a particular way (yes to most), then isn't the solution rate-limiting incoming web-queries rather than (presumed) well-poisoning?
Is this a solution in search of a problem?
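If rate-limiting really is the better answer, the classic building block is a token bucket applied per client. A minimal sketch, with class name and parameters chosen purely for illustration:

```python
import time

class TokenBucket:
    """Minimal token bucket: `rate` requests/sec sustained, bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)   # start with a full burst allowance
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill tokens based on elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False                    # throttled until tokens refill

# One bucket per client IP in practice; a single bucket shown here.
bucket = TokenBucket(rate=5, capacity=10)
results = [bucket.allow() for _ in range(15)]
# The burst of 10 passes; subsequent rapid requests are throttled.
```

Real deployments would key buckets by IP or session and enforce them at the proxy layer (nginx's `limit_req` does essentially this), but the logic is no more complicated than the above.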
It's not all that productive; it's an act of desperation. If you can't stop the enemy, at least you can make their actions more costly.
One positive outcome I could see is AI companies becoming more critical of their training data.
The irony of machine-generated slop to fight machine-generated slop would be funny, if it weren't for the implications. How long before people start sharing ai-spam lists, both pro-ai and anti-ai?
Just like with email, at some point these share-lists will be adopted by the big corporates, and just like with email will make life hard for the small players.
Once a website appears on one of these lists, legitimately or otherwise, what will the reputational damage be, and how will it hurt that site's appearance in search indexes? There have already been examples of Google delisting or dropping websites in search results.
Will there be a process to appeal these blacklists? Based on how things work with email, I doubt this will be a meaningful process. It's essentially an arms race, with the little folks getting crushed by juggernauts on all sides.
This project's selective protection of the major players reinforces that effect; from the README:
"Be sure to protect friendly bots and search engines from Miasma in your robots.txt!"

    User-agent: Googlebot
    User-agent: Bingbot
    User-agent: DuckDuckBot
    User-agent: Slurp
    User-agent: SomeOtherNiceBot
    Disallow: /bots
    Allow: /
You can use security challenges as a mechanism to identify false positives.
Sure, bots can get tons of proxies for cheap, but that doesn't mean you can't block them, at least temporarily, similar to how SSH honeypots or the Spamhaus SBL work.
[1]: in quotes, because I dislike the term, because it’s immaterial whether or not an ugly block of concrete out in the sticks is housing LLM hardware - or good ol’ fashioned colo racks.
We need a crawler blacklist that streams list deltas in real time to a centralized list, from which local DBs can pull changes.
Verified domains could push suspected bot IPs, and the engine would run heuristics to see if there is a pattern across data sources, then issue a temporary block with an exponential TTL.
There are many problems to solve here, but as with any OSS project it will evolve over time if there is enough interest.
Costs of running this system will be huge though, and corporate sponsors may not work, but individual sponsors may be incentivized, as it helps them reduce the bandwidth and compute costs of bot traffic.
Why not have a Library of Babel-esque labyrinth visible to normal users on your website,
like anti-surveillance clothing or something they have to sift through?
Can't the LLMs just ignore or spoof their user agents anyway?
Seems a clever and fitting name to me. A poison pit would probably smell bad. And at the same time, the theory that this tool would actually cause “illness” (bad training data) in AI is not proven.
It's like if someone was trying to "trap" search crawlers back in the early 2000s.
Seems counterproductive
If you want an AI bot to crawl your website while you pay for the bandwidth, then you won't use this tool.
https://www.libraryjournal.com/story/ai-bots-swarm-library-c...