15 points by larsmosr 7 hours ago | 21 comments
  • lich_king6 hours ago
    You break highlighting and copy-and-paste. If I want to share or comment on a piece of your website... I can't. I guess this can be a "feature" in some rare cases, but a major usability pain otherwise.

    I'm not a fan of all the documentation and marketing content for this project evidently being AI-generated because I don't know which parts of it are the things you believe and designed for, and which are just LLM verbal diarrhea. For example, your GitHub threat model says this stops "AI training crawlers (GPTBot, ClaudeBot, CCBot, etc.)" - is this something you've actually confirmed, or just something that AI thinks is true? I don't know how their scrapers work; I'd assume they use headless browsers.

    • larsmosr6 hours ago
      Copy-paste breaking is intentional for protected content but it's opt-in per component, not whole-site.

      On the AI docs concern, fair point. To answer directly: I've confirmed the obfuscation defeats any scraper reading raw HTML via HTTP requests. Whether GPTBot or ClaudeBot use headless browsers internally, I honestly don't know. The README threat model lists headless browsers under "what it does NOT stop" for that reason.
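
      A minimal sketch of the idea (hypothetical markup, not obscrd's actual output): each character span carries its true position in a `data-o` attribute, the spans are emitted shuffled, and CSS `order` restores the visual sequence. A scraper reading the raw HTML text therefore sees only the shuffled order:

```javascript
// Hypothetical sketch of CSS-order obfuscation (not obscrd's real output):
// each character span carries its true position in data-o, the spans are
// emitted out of order, and CSS `order` restores the visual sequence.
function obfuscate(text) {
  const spans = [...text].map((ch, o) => ({ ch, o }));
  spans.reverse(); // deterministic stand-in for a random permutation
  return spans
    .map(s => `<span data-o="${s.o}" style="order:${s.o}">${s.ch}</span>`)
    .join('');
}

// What a scraper reading raw HTML text sees: characters in DOM order.
function naiveExtract(html) {
  return [...html.matchAll(/<span[^>]*>(.*?)<\/span>/g)]
    .map(m => m[1])
    .join('');
}

// Recovering the text means sorting by data-o, i.e. writing a decoder.
function decode(html) {
  return [...html.matchAll(/data-o="(\d+)"[^>]*>(.*?)<\/span>/g)]
    .sort((a, b) => +a[1] - +b[1])
    .map(m => m[2])
    .join('');
}
```

      So `curl` plus a text extractor yields jumbled characters, while anyone willing to read the `data-o` indices back can reverse it, which is the cost tradeoff the threat model describes.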

    • larsmosr6 hours ago
      Full user-agent string: Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko); compatible; GPTBot/1.3;

      Official OpenAI documentation: https://platform.openai.com/docs/gptbot
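
      For what it's worth, tokens like that can be matched server-side, though it only stops crawlers that identify themselves honestly. A minimal sketch (the token list here is an assumption based on the bots named in the thread):

```javascript
// Sketch: flag self-identifying AI crawlers by user-agent substring.
// Only catches bots that announce themselves; a headless browser with
// a spoofed UA passes straight through.
const AI_BOT_TOKENS = ['GPTBot', 'ClaudeBot', 'CCBot'];

function isDeclaredAIBot(userAgent) {
  return AI_BOT_TOKENS.some(token => userAgent.includes(token));
}
```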

  • obsrcdsucks6 hours ago

        function decodeObscrd(htmlOrElement) {
          // Accept either a raw HTML string or a live element; default to document.
          let root;
          if (typeof htmlOrElement === 'string') {
            root = new DOMParser().parseFromString(htmlOrElement, 'text/html').body;
          } else {
            root = htmlOrElement || document;
          }

          // Find the obfuscated container by its generated class prefix.
          const container = root.querySelector('[class*="obscrd-"]');
          if (!container) { return; }

          // Word wrappers carry their true position in data-o; restore it.
          const words = [...container.children].filter(el => el.hasAttribute('data-o'));
          words.sort((a, b) => +a.dataset.o - +b.dataset.o);

          const result = words.map(word => {
            // Leaf spans only: data-o elements with no nested data-o children.
            const chars = [...word.querySelectorAll('[data-o]')]
              .filter(el => el.querySelector('[data-o]') === null);
            chars.sort((a, b) => +a.dataset.o - +b.dataset.o);
            return chars.map(c => c.textContent).join('');
          }).join('');

          console.log(result);
          return result;
        }
    • larsmosr6 hours ago
      Yep, that works. The data-o attributes are readable in the DOM so you can reverse it with custom code. That's in the threat model. The goal is raising the cost from "curl + cheerio" to "write a custom decoder per site." Most scrapers move on to easier targets.
  • dlcarrier3 hours ago
    Oh great, another method to make screen readers and keyboard navigation impossible.

    At this point, bots are better at getting data out of web pages than people are. (And have been so for at least a few years: https://www.usenix.org/conference/usenixsecurity23/presentat...)

    All we're doing now is making it easier to get data from a web scraper than to browse to the web page ourselves.

  • dec0dedab0de6 hours ago
    Reminds me of when AOL broke all the script kiddy tools in 1996 by adding an extra space to the title of the window. I didn't have AOL, but my friend made one of those tools, and I helped him figure it out.
  • lokimedes6 hours ago
    All I want is an API for my AI; you can ask me for my public key if you want my human identity verified. The collateral damage of this bot hunting is the emergence of personal AIs. Do we really want that? It feels regressive. (I see the hypocrisy here: we are fighting the scrapers that feed the LLMs that run our personal agents.)
    • larsmosr6 hours ago
      You are not wrong. But the use case I keep seeing is companies with proprietary content they spent real money creating, who don't want it showing up in someone else's training data for free. It's less about bot hunting and more about content owners having a choice.
  • dwa35926 hours ago
    Nice. I have been working on something which utilizes obfuscation, honeypots, etc., and I have come to a few realizations:

    - today you don't have to be a dedicated/motivated reverse engineer; you just need Sonnet 4.6 and can let it do the work.

    - you need to keep throwing new gotchas at LLMs to keep them on their toes while they try to reverse engineer your website.

    • larsmosr6 hours ago
      The bar for reverse engineering dropped to "paste the HTML into Claude and ask it to decode." That's partly why the v2 roadmap moves toward techniques where the readable text never exists in the DOM at all. Static obfuscation patterns need to keep evolving or they become a one-prompt solve.
  • grigio3 hours ago
    Interesting, but I think bots can just take a screenshot and then scrape the text from it
  • well_ackshually6 hours ago
    I, too, hate people that:

    * copy text

    * use a screen reader for accessibility purposes (not just on the web, but on mobile too; your 'light' obfuscation is entirely broken with TalkBack on Android: individual words/characters are read separately, so the text is not a single block)

    * use an RSS feed

    * use reader mode in their browser

    If you don't want your stuff to be read, and that includes bots, don't put it online.

    > Built this because I got tired of AI crawlers reading my HTML in plain text while robots.txt did nothing.

    You could have spent that time working on your project, instead of actively making the web worse than it already is.

    • larsmosr6 hours ago
      The TalkBack issue is useful feedback, thank you. I tested with NVDA and VoiceOver but not TalkBack on Android. If light mode is reading individual words instead of a continuous block that's a real bug I want to fix.

      On the broader point, I hear you, but I think there's a middle ground. Not all content is public knowledge. Some of it is premium, proprietary, or behind a paywall. The people publishing it should get to decide whether it becomes free training data.

      • yjftsjthsd-h3 hours ago
        > On the broader point, I hear you, but I think there's a middle ground. Not all content is public knowledge. Some of it is premium, proprietary, or behind a paywall. The people publishing it should get to decide whether it becomes free training data.

        I don't follow. Are you suggesting that someone is scraping private sites that they have to log in on in order to train AI on it?

  • costco6 hours ago
    This is an interesting idea... it'd be a fun side project to implement enough of a CSS engine to undo this
    • larsmosr6 hours ago
      You are more than welcome to do so. Please keep in mind the realistic goal is raising the cost of scraping. Most bots use simple HTTP requests, and we make that useless.
  • yesitcan6 hours ago
    The irony of building an anti-AI project but writing your marketing and HN post with AI.
    • ramblurr2 hours ago
      ...and all the HN comment replies too! Egh.
  • GaryBluto6 hours ago
    > Your content, obscured.

    Is that supposed to be a good thing?

    • larsmosr6 hours ago
      For content you want public, no.
  • verse6 hours ago
    couldn't read the hero text on my phone

    it's white text and the shader background is also mostly white

    • larsmosr6 hours ago
      Thanks, what phone/browser? I'll fix that.
  • gzread6 hours ago
    Another thing you can do is to install a font with jumbled characters: "a" looks like "x", "b" looks like "n", and so on. Then instead of writing "abc" you write "jmw" and it looks like "abc" on the screen. This has been used as a form of DRM for eBooks.

    It breaks copy/paste and screen readers, but so does your idea.
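
    The scheme can be sketched as a plain substitution table (the mapping below is an arbitrary example): the page stores ciphertext, and a custom font draws each ciphertext glyph as the intended letter, so the screen shows real text while copy/paste and scrapers get gibberish:

```javascript
// Sketch of the font-remapping trick: store ciphertext, let a custom
// @font-face render each ciphertext glyph as the plaintext letter.
// CIPHER is an arbitrary permutation of the alphabet, chosen for this example.
const PLAIN  = 'abcdefghijklmnopqrstuvwxyz';
const CIPHER = 'xnqweryuiopasdfghjklzcvbmt';

function encodeForRemappedFont(text) {
  return [...text]
    .map(ch => {
      const i = PLAIN.indexOf(ch);
      return i === -1 ? ch : CIPHER[i]; // leave non-letters untouched
    })
    .join('');
}
```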

    • larsmosr6 hours ago
      Font remapping is actually on the v2 roadmap. The reason v1 uses CSS ordering instead is it preserves screen reader access. Tradeoff is it's reversible (as another commenter just showed). Font remapping is stronger but breaks assistive tech. Solving both is the hard problem.
  • mystraline7 hours ago
    This is also what Facebook does.

    Same result: screen readers and assistive software are rendered useless. It's basically a sign of "I hate disabled people, and AI too."

    • larsmosr7 hours ago
      Fair concern. obscrd actually preserves screen reader access. CSS flexbox order is a visual reordering property, so assistive tech follows the visual order and reads the text correctly. Contact components use sr-only spans with clean text and aria-hidden on the obfuscated layer. We target WCAG 2.2 AA compliance.

      Happy to have a11y experts poke at it and point out gaps.
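
      As a sketch of the pattern described (hypothetical markup, using the sr-only and aria-hidden pieces mentioned above): a clean copy for assistive tech, with the shuffled visual layer hidden from it:

```html
<!-- clean copy announced by screen readers -->
<span class="sr-only">ab</span>
<!-- visual layer: shuffled in the DOM, restored by CSS order, hidden from AT -->
<span aria-hidden="true" style="display: inline-flex">
  <span data-o="1" style="order: 1">b</span>
  <span data-o="0" style="order: 0">a</span>
</span>
```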

      • PaulHoule6 hours ago
        Accessibility APIs have long been the royal road to automation. If scrapers were well-written they'd be using this already, but of course if scrapers were well-written they would scrape your site and you'd never notice.
  • h2zizzle6 hours ago
    I hate everything about this, please use your time on this planet to make life better for people instead of worse.

    It is better for a million AI crawlers to get through than for even one search index crawler, one that might expose the knowledge on your site to someone who needs it, to be denied.

    • larsmosr6 hours ago
      For public knowledge sites this would be the wrong tool entirely. The use case is more like paywalled articles, proprietary product data, or premium content that companies paid to create and don't want scraped into a competitor's training set. obscrd is opt-in per component, not a whole-site lockdown.
  • kevinsync6 hours ago
    I'm surprised that you don't appear to be using it on obscrd.dev lol
    • larsmosr6 hours ago
      Well, the information there isn't meant to be hidden, quite the opposite, haha. There is a demo page.
  • Sebastian_Dev6 hours ago
    [dead]
  • larsmosr7 hours ago
    [dead]
  • ozgurozkan6 hours ago
    [flagged]
    • larsmosr5 hours ago
      The breadcrumb approach right now is simple invisible markers, not paraphrase-resistant watermarking. You're right that semantic watermarking that survives LLM rephrasing is the harder and more interesting problem. It's on the radar but not in scope for v1.