123 points by LucidLynx 5 hours ago | 25 comments
  • bobosola 11 minutes ago
    I dunno... it feels like the same approach as those people who tell you gleeful stories of how they kept a phone spammer on a call for 45 minutes: "That'll teach 'em, ha ha!" Do these types of techniques really work? I’m not convinced.

    Also, inserting hidden or misleading links is specifically a no-no for Google Search [0], who have this to say: "We detect policy-violating practices both through automated systems and, as needed, human review that can result in a manual action. Sites that violate our policies may rank lower in results or not appear in results at all."

    So you may well end up doing more damage to your own site than to the bots by using dodgy links in this manner.

    [0]https://developers.google.com/search/docs/essentials/spam-po...

  • tasuki 2 hours ago
    > If you have a public website, they are already stealing your work.

    I have a public website, and web scrapers are stealing my work. I just stole this article, and you are stealing my comment. Thieves, thieves, and nothing but thieves!

    • coldpie 11 minutes ago
      I agree theft isn't a good analogy, but there is something similar going on. I put my words out into the world as a form of sharing. I enjoy reading things others write and share freely, so I write so others might enjoy the things I write. But now the things I write and share freely are being used to put money in the bank accounts of the worst people on the planet. They are using my work in a way I don't want it to be used. It makes me not want to share anymore.
      • tasuki 4 minutes ago
        > But now the things I write and share freely are being used to put money in the bank accounts of the worst people on the planet.

        I don't think that's the case. I'm not even arguing they aren't the worst people on the planet - might as well be. But all I see them doing is burning money all over the place.

    • spiderfarmer an hour ago
      If someone hands out cookies in the supermarket, are you allowed to grab everything and leave?
      • drfloyd51 an hour ago
        Odd thing about cookies… they disappear after one serving.

        Websites are an endless stream of cookies.

        The analogy doesn’t hold.

        • ghywertelling 35 minutes ago
          If copying content from one hard drive to another is theft, then so is DNA copying itself.

          Everything is a Remix culture. We should promote remix culture rather than hamper it.

          Everything is a Remix (Original Series) https://youtu.be/nJPERZDfyWc

        • z3c0 an hour ago
          Digital information may be our first post-scarce resource. It's interesting, and sad, to see so many attempt to fit it within scarcity-based economic models.
          • Terretta 9 minutes ago
            > digital information may be our first post-scarce resource

            … browses memory and storage prices on NewEgg …

            Hmm.

            But the word digital is distracting us.

            The word information is the important one. The question isn't where information goes. It's where information comes from.

            Is new information post scarcity?

            Can it ever be?

      • bengale 22 minutes ago
        It’s interesting to see twists on the old anti-piracy arguments recycled for anti-ai.
      • falcor84 an hour ago
        That really depends, but the quick answer is that according to our human social contract, we'd just ask "how many can I take?". Until now, the only real tool to limit scrapers has been throttling, but I don't see any reason for there not to be a similar conversational social contract between machines.
        • volemo 41 minutes ago
          Isn’t robots.txt such a “social contract between machines”? But AI scrapers couldn’t care less.
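The "how many can I take?" question above maps fairly directly onto robots.txt directives, for crawlers that choose to honor them. A minimal sketch (the Crawl-delay directive is nonstandard: Bing and Yandex honor it, Googlebot ignores it):

```
# Hypothetical robots.txt sketch: "take some, not everything".
User-agent: *
Crawl-delay: 10        # seconds between requests (nonstandard directive)
Disallow: /private/    # and please don't take these pages at all
```

As volemo notes, this contract only binds the well-behaved side of the table.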
      • GaggiX an hour ago
        I will copy the supermarket and paste it somewhere else.

        I'm also going to download a car.

      • pbasista 44 minutes ago
        This is a dishonest analogy. In your example, there is only a limited number of cookies available, while there is no practical limit on the number of times a piece of digital media can be viewed.

        You are allowed to take one cookie. But you are allowed to view a public website multiple times if you so want.

        • hollow-moe 26 minutes ago
          There sure is a limit: the load the server you're DDoSing can take, and people's will to keep posting worthwhile content in public. The supply is limited, just not at the first degree. Let's make a small edit: are you allowed to take all the cookies and then sell them with a small ribbon with your name on it?
  • aldousd666 an hour ago
    This is ultimately just going to give them training material for how to avoid this crap. They'll have to up their game to get good code. The arms race just took another step, and if you're spending money creating or hosting this kind of content, it's not going to make up for the money you're losing to your other content getting scraped. The bottom has always been threatening to fall out of the ads-paid-for-eyeballs market, and nobody could anticipate the trigger for the downfall. Looks like we found it.
    • johneth an hour ago
      > This is ultimately just going to give them training material for how to avoid this crap.

      > The arms race just took another step, and if you're spending money creating or hosting this kind of content, it's not going to make up for the money you're losing by your other content getting scraped.

      So we should all just do nothing and accept the inevitable?

    • aldousd666 an hour ago
      To be clear, I mean AI is going to be the downfall of ad-supported content. But let's face it: we have link farms and spam factories as a result of the ad-supported content market. I think this will eventually do right by users, because it puts a premium on content quality: someone will pay a direct licensing fee to scrape quality content for their AI bots, as opposed to tricking somebody into clicking on a link and serving an impression for something they won't buy.
    • Apocryphon an hour ago
      Tech is just a series of arms races
  • madeofpalk 3 hours ago
    Is there any evidence or hints that these actually work?

    It seems pretty reasonable that any scraper would already have mitigations for things like this as a function of just being on the internet.

    • raincole an hour ago
      It might work against people who just use their Mac mini with OpenClaw to summarize news every morning, but it certainly won't work against Google.

      More centralized web ftw.

      • hexage1814 40 minutes ago
        It also probably won't work if the person actually wants your content and is checking whether the thing they scraped makes sense or is just noise. None of these are new things; site owners have been sending junk/fake data to web scrapers since web scraping began.
      • otherme123 42 minutes ago
        In my experience, Google (among others) plays nice. Just put "Disallow: /" in your robots.txt, and they won't bother you again.

        My current problem is OpenAI, which scrapes massively, ignoring every limit (426, 444, and whatever else you throw at them), plus botnets from East Asia using one IP per scrape, but thousands of IPs.

      • LaGrange 25 minutes ago
        > It might work against people who just use their Mac mini with OpenClaw to summarize news every morning,

        Good enough for me.

        > More centralized web ftw.

        This ain't got anything to do with "centralized web," this kind of epistemological vandalism can't be shunned enough.

    • sd9 3 hours ago
      Even if it did work, I just can't bring myself to care enough. It doesn't feel like anything I could do on my site would make any material difference. I'm tired.
      • 20k 3 hours ago
        I definitely get this. The thing that gives me hope is that you only need to poison a very small % of content to damage AI models pretty significantly. It helps combat the mass scraping, because a significant chunk of the data they get will be useless, and it's very difficult to filter it by hand.
        • lucasfin00043 minutes ago
          The asymmetry is what makes this very interesting. The cost to inject poison is basically zero for the site owner, but the cost to detect and filter it at scale is significant for the scraper. That math gets a lot worse for them as more sites adopt it. It doesn't solve the problem, but it changes the economics.
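The cheap side of that asymmetry is easy to see: a Markov-chain babbler of a couple dozen lines can emit unlimited plausible-looking text for near-zero cost. A generic sketch of the technique, not this project's actual generator:

```python
import random

def build_chain(text, order=2):
    """Build a word-level Markov chain from seed text."""
    words = text.split()
    chain = {}
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain.setdefault(key, []).append(words[i + order])
    return chain

def babble(chain, n_words=50, seed=None):
    """Emit statistically plausible nonsense; each page costs microseconds
    to generate, but detecting it at scale costs the scraper real money."""
    rng = random.Random(seed)
    key = rng.choice(list(chain))
    out = list(key)
    for _ in range(n_words - len(out)):
        followers = chain.get(tuple(out[-len(key):]))
        if not followers:                 # dead end: restart from a random key
            key = rng.choice(list(chain))
            out.extend(key)
            continue
        out.append(rng.choice(followers))
    return " ".join(out[:n_words])
```

Feed it the site's own prose and the output keeps the site's vocabulary and word statistics, which is part of what makes naive filtering hard.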
    • bediger4000 15 minutes ago
      The search engine crawlers are sophisticated enough, but Meta's are not. Neither is Anthropic's Claude crawler. Source: personal experience trying garbage generators on Yandex, BLEXBot, Meta's, and Anthropic's crawlers.

      I'm completely uncertain that the unsophisticated garbage I generated makes any difference, much less "poisons" the LLMs. A fellow can dream, can't he?

    • spiderfarmer an hour ago
      There are hundreds of bots using residential proxies. That is not free. Make them pay.
    • nubg 2 hours ago
      What kind of mitigations? How would you detect the poison fountain?
      • avereveard 2 hours ago
        style="display: none;" aria-hidden="true" tabindex="-1"

        Many scrapers already know not to follow these, as it's how sites used to "cheat" PageRank by serving keyword soups.
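For illustration, that mitigation is only a few lines on the scraper side. A naive sketch using Python's stdlib parser; it catches exactly the attributes quoted above and nothing else (no `visibility: hidden`, no CSS classes, no off-screen positioning):

```python
from html.parser import HTMLParser

class VisibleLinkExtractor(HTMLParser):
    """Collect hrefs, skipping links hidden the way trap pages hide them."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        attrs = dict(attrs)
        if attrs.get("aria-hidden") == "true":
            return                          # hidden from assistive tech
        if "display: none" in attrs.get("style", ""):
            return                          # hidden from sighted users
        if "href" in attrs:
            self.links.append(attrs["href"])

parser = VisibleLinkExtractor()
parser.feed('<a href="/real">ok</a>'
            '<a href="/trap" style="display: none;" aria-hidden="true">x</a>')
# parser.links is now ["/real"]: the decoy link never gets queued
```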

        • m00dy an hour ago
          Google will give your website a penalty for doing this.
      • GaggiX 2 hours ago
        Because the internet is noisy and not up to date, all recent LLMs are trained using Reinforcement Learning with Verifiable Rewards. If a model has learned the wrong signature of a function, for example, it would be apparent when executing the code.
    • m00dy an hour ago
      It won't work, especially on Gemini: Googlebot is very experienced when it comes to crawling. It might work on OpenAI and others, maybe.
    • phoronixrly 2 hours ago
      It does work, on two levels:

      1. Simple, cheap, easy-to-detect bots will scrape the poison, and feed links to expensive-to-run browser-based bots that you can't detect in any other way.

      2. Once you see a browser visit a bullshit link, you insta-ban it, as you can now see that it is a bot because it has been poisoned with the bullshit data.

      My personal preference is using iocaine for this purpose though, in order to protect the entire server as opposed to a single site.
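Level 2 of that scheme reduces to a tiny bit of state in front of the request handler. A sketch with hypothetical trap paths and ban length (the comment's actual setup uses iocaine, not this code):

```python
import time

TRAP_PATHS = {"/bots/lure", "/bots/deep"}  # decoy URLs only bots would follow
BAN_SECONDS = 3600                         # illustrative ban length

banned_until = {}                          # ip -> unban timestamp

def handle_request(ip, path, now=None):
    """Any client fetching a decoy path has outed itself: ban on sight."""
    now = time.time() if now is None else now
    if path in TRAP_PATHS:
        banned_until[ip] = now + BAN_SECONDS
        return 403
    if banned_until.get(ip, 0.0) > now:
        return 403                         # still serving out its ban
    return 200
```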

  • ninjagoo 17 minutes ago
    Isn't this a trope at this point? That AI companies are indiscriminately training on random websites?

    Isn't it the case that AI models learn better and are more performant with carefully curated material, so companies do actually filter for quality input?

    Isn't it also the case that the use of RLHF and other refinement techniques essentially 'cures' the models of bad input?

    Isn't it also, potentially, the case that the ai-scrapers are mostly looking for content based on user queries, rather than as training data?

    If the answers to the questions lean a particular way (yes to most), then isn't the solution rate-limiting incoming web-queries rather than (presumed) well-poisoning?

    Is this a solution in search of a problem?

  • theandrewbailey 36 minutes ago
    Or you can block bots with these (until they start using them) https://developer.mozilla.org/en-US/docs/Glossary/Fetch_meta...
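Those fetch-metadata headers (Sec-Fetch-Mode, Sec-Fetch-Site, etc.) are attached automatically by modern browsers, so their absence is a cheap bot signal. A rough sketch of the check; the accept-list policy here is illustrative, and non-browser clients like curl or older browsers legitimately omit the headers too:

```python
def looks_like_browser_navigation(headers):
    """Fetch-metadata check: modern browsers send Sec-Fetch-* request
    headers; most plain HTTP-library scrapers do not (yet)."""
    mode = headers.get("Sec-Fetch-Mode")
    site = headers.get("Sec-Fetch-Site")
    if mode is None or site is None:
        return False                       # no fetch metadata at all
    # A top-level page load arrives as a navigation from the user ("none"),
    # the same origin, or a sibling subdomain.
    return mode == "navigate" and site in {"none", "same-origin", "same-site"}
```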
  • jstanley 40 minutes ago
    If you want to ruin someone's web experience based on what kind of thing they are, rather than the content of their character, consider that you might be the baddies.
    • mrweasel 32 minutes ago
      If you're constantly being harassed by someone and, despite your best efforts, nothing is done to help you (quite the opposite: tons of people cheer your assailant on in the name of profit and progress), it's only natural that you lash out.

      It's not all that productive, it's an act of desperation. If you can't stop the enemy, at least you can make their action more costly.

      One positive outcome I could see is AI companies becoming more critical of their training data.

  • nosmokewhereiam an hour ago
    My asthmar

    I'm assuming this is a reference to Lord of the Flies

    • cwnyth 17 minutes ago
      Miasma is bad or poisonous air. It's a Greek word.
  • ninjagoo 33 minutes ago
    This is essentially machine-generated spam.

    The irony of machine-generated slop to fight machine-generated slop would be funny, if it weren't for the implications. How long before people start sharing ai-spam lists, both pro-ai and anti-ai?

    Just like with email, at some point these share-lists will be adopted by the big corporates, and just like with email will make life hard for the small players.

    Once a website appears on one of these lists, legitimately or otherwise, what will the reputational damage be to its appearance in search indexes? There have already been examples of Google delisting or dropping websites in search results.

    Will there be a process to appeal these blacklists? Based on how things work with email, I doubt this will be a meaningful process. It's essentially an arms race, with the little folks getting crushed by juggernauts on all sides.

    This project's selective protection of the major players reinforces that effect; from the README:

    "Be sure to protect friendly bots and search engines from Miasma in your robots.txt!

    User-agent: Googlebot
    User-agent: Bingbot
    User-agent: DuckDuckBot
    User-agent: Slurp
    User-agent: SomeOtherNiceBot
    Disallow: /bots
    Allow: /"

  • snehesht 2 hours ago
    Why not simply blacklist or rate limit those bot IPs?
    • xprnio 2 hours ago
      If you have real traffic and bot traffic, you still need to identify which is which. On top of that, bots very likely don’t reuse the same IPs over and over again. I assume if we knew all the IPs used only by bots ahead of time, then yeah it would be simple to blacklist them. But although it’s simple in theory, the practice of identifying what to blacklist in the first place is the part that isn’t as simple
      • snehesht an hour ago
        You wouldn’t permanently block them, it’s more like a rolling window.

        You can use security challenges as a mechanism to identify false positives.

        Sure, bots can get tons of proxies for cheap, but that doesn't mean you can't block them, similar to how SSH honeypots or the Spamhaus SBL work, albeit temporarily.
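The rolling window described above is a few lines of bookkeeping per IP. A minimal sketch; the window and threshold numbers are made up:

```python
from collections import defaultdict, deque

WINDOW = 60.0     # seconds of history to keep per IP
MAX_HITS = 100    # requests allowed inside the window

hits = defaultdict(deque)

def allow(ip, now):
    """Sliding-window rate limit: the block expires on its own as old
    request timestamps roll out of the window."""
    q = hits[ip]
    while q and now - q[0] > WINDOW:
        q.popleft()               # forget requests older than the window
    if len(q) >= MAX_HITS:
        return False              # over budget right now
    q.append(now)
    return True
```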

    • phyzome 2 hours ago
      Because punishment for breaking the robots.txt rules is a social good.
    • arbol an hour ago
      The AI companies are using virtually unlimited "clean" residential IPs so this is not a valid strategy.
      • DaiPlusPlus 42 minutes ago
        How? They run their scraping and training infrastructure - and models themselves - from within those “AI datacenters”[1] we hear about in the news - and not proxying through end-users’ own pipes.

        [1]: in quotes, because I dislike the term, because it’s immaterial whether or not an ugly block of concrete out in the sticks is housing LLM hardware - or good ol’ fashioned colo racks.

    • aduwah 2 hours ago
      There are way too many to do that
      • snehesht an hour ago
        True, most blacklist systems today aren't realtime the way AWS WAF or Cloudflare are.

        We need a crawler blacklist that can stream list deltas in real time to a centralized list, from which local DBs can pull changes.

        Verified domains could push suspected bot IPs, and the engine would run heuristics to see if there is a pattern across data sources and issue a temporary block with exponential TTL.

        There are many problems to solve here, but like any OSS it would evolve over time if there is enough interest in it.

        Costs of running this system would be huge, though. Corporate sponsors may not work, but individual sponsors may be incentivized, as it helps them reduce the bandwidth and compute costs of bot traffic.
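The exponential-TTL piece of that proposal is straightforward to sketch locally (the centralized delta-streaming is the hard, unbuilt part; the base TTL here is invented):

```python
import time

BASE_TTL = 600.0                # first offense: 10 minutes, doubling after

strikes = {}                    # ip -> (offense count, blocked-until time)

def report_offender(ip, now=None):
    """Block a suspected bot IP, doubling the TTL on each repeat offense."""
    now = time.time() if now is None else now
    count = strikes.get(ip, (0, 0.0))[0] + 1
    ttl = BASE_TTL * 2 ** (count - 1)
    strikes[ip] = (count, now + ttl)
    return ttl

def is_blocked(ip, now=None):
    now = time.time() if now is None else now
    return strikes.get(ip, (0, 0.0))[1] > now
```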

        • pixl97 35 minutes ago
          In the real-time spam market, the lists worked well with honest groups for a bit, but started falling apart when once-good lists got taken over by actors who realized they could use their position to make more money. It's a really difficult trap to avoid.
  • superkuh 12 minutes ago
    Of course Googlebot, Bingbot, Applebot, Amazonbot, YandexBot, etc. from the major corps are HTTP user-agent spiders whose downloaded public content will be used by corporations for AI training too. Might as well just drop the "AI" and say "corporate scrapers".
  • rob 36 minutes ago
    "/brainstorming git checkout this miasma repo source code and implement a fix to prevent the scraper from not working on sites that use this tool"
  • meta-level 3 hours ago
    Isn't posting projects like this the most visible way to report a bug and have it fixed as soon as possible?
    • suprfsat 3 hours ago
      "disobeys robots.txt" is more of a feature
  • foxes an hour ago
    Wonder if you can just avoid hiding it, to make it more believable.

    Why not have a Library of Babel-esque labyrinth visible to normal users on your website,

    like anti-surveillance clothing, something they have to sift through.

  • imdsm 2 hours ago
    Applied model collapse
  • Imustaskforhelp 3 hours ago
    I wish there was some regulation which could force companies who scrape for profit to reveal who they are to the end websites. Many new AI companies don't seem to respect any decision made by the person who owns the website and shares their knowledge for other humans, only for it to get distilled for a few cents.
  • rvz 3 hours ago
    > Be sure to protect friendly bots and search engines from Miasma in your robots.txt!

    Can't the LLMs just ignore or spoof their user agents anyway?

    • phoronixrly 2 hours ago
      Well-behaved agents will obey robots.txt and not fall into the trap.
  • GaggiX 3 hours ago
    These projects are the new "To-Do List" app.
  • splitbrainhack 3 hours ago
    -1 for the name
  • obsidianbases1 2 hours ago
    Why do this though?

    It's like if someone was trying to "trap" search crawlers back in the early 2000s.

    Seems counterproductive

    • integralid 44 minutes ago
      Search crawlers used to bring people TO your site; LLM bots are used to keep people OUT of your site, because knowledge is indexed and distributed by corporations.
    • bilekas 2 hours ago
      Because of bots that don't respect robots.txt.

      If you don't mind an AI bot crawling your website while you pay for that bandwidth, then you won't use the tool.

    • Forgeties79 2 hours ago
      Web crawlers didn't routinely take down public resources or use the scraped info to generate facsimiles that people are still having ethical debates over. Their presence didn't even register, and their indexing actually helped sites. It isn't remotely the same thing.

      https://www.libraryjournal.com/story/ai-bots-swarm-library-c...