337 points by xnx 7 hours ago | 38 comments
  • dataviz1000 5 hours ago
    I use Playwright to intercept all requests and responses and have Claude Code navigate to a website like YouTube and click and interact with all the elements and inputs while recording all the requests and responses associated with each interaction. Then it creates a detailed strongly typed API to interact with any website using the underlying API.
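
    A minimal sketch of the interception half, using Playwright's TypeScript API (the Recording shape and the example URL are illustrative, not my actual code):

        import { chromium } from 'playwright';

        // Illustrative shape for one captured request/response pair.
        interface Recording {
          method: string;
          url: string;
          status: number;
          requestBody: string | null;
          responseBody: string;
        }

        async function record(): Promise<Recording[]> {
          const browser = await chromium.launch({ headless: false });
          const page = await browser.newPage();
          const recordings: Recording[] = [];

          // Every response carries a handle back to its originating request.
          page.on('response', async (response) => {
            const request = response.request();
            recordings.push({
              method: request.method(),
              url: request.url(),
              status: response.status(),
              requestBody: request.postData(),
              // Some responses (redirects, streams) have no readable body.
              responseBody: await response.text().catch(() => ''),
            });
          });

          await page.goto('https://www.youtube.com');
          // ...drive clicks and inputs here, then hand `recordings` to the
          // model so it can derive a typed client from the observed endpoints.
          await browser.close();
          return recordings;
        }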

    Yes, I know it likely breaks everybody's terms of service, but at the same time I'm not loading gigabytes of ads, images, and markup to accomplish things.

    If anyone is interested I can take some time and publish it this week.

    • bredren 3 hours ago
      I also do this. My primary use case is reproducing page layout and styling at any given tree in the DOM. So, capturing various states of a component, etc.

      I also use it to automatically retrieve page responsiveness behavior in complex web apps. It uses Playwright to adjust the width and monitor entire trees for exact changes, writing structured data that includes the complete cascade of relevant styles, with screenshots to support the snapshots.
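
      A rough sketch of the width-sweep idea in Playwright/TypeScript (the breakpoints, selector, and sampled properties are illustrative):

          import { chromium } from 'playwright';

          // Step the viewport across breakpoints and snapshot the resolved
          // styles for one subtree, pairing each sample with a screenshot.
          async function sweep(url: string, selector: string) {
            const browser = await chromium.launch();
            const page = await browser.newPage();
            await page.goto(url);

            for (const width of [1280, 1024, 768, 480]) {
              await page.setViewportSize({ width, height: 900 });
              const styles = await page.locator(selector).evaluate((el) => {
                const computed = getComputedStyle(el);
                return Object.fromEntries(
                  ['display', 'flex-direction', 'grid-template-columns', 'width']
                    .map((prop) => [prop, computed.getPropertyValue(prop)]),
                );
              });
              await page.screenshot({ path: `snap-${width}.png` });
              console.log(width, styles); // or write structured JSON to disk
            }
            await browser.close();
          }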

      There are tools you can buy that let you do this kind of inspection manually, but they are designed for humans. So, lots of clickety-clackety and human-speed results.

      ---

      My first reaction to seeing this FP was: why are people still releasing MCPs? So far I've managed to completely avoid that hype loop and went straight to building custom CLIs even before skills were a thing.

      I think people still aren't realizing the power and efficiency of direct access to the things you want, with skills to guide the AI in using that access effectively.

      Maybe I'm missing something in this particular use case?

    • halJordan 3 hours ago
      I love how HN is loving this idea when it's the exact same thing Anthropic and OpenAI (and every other LLM maker) did.

      It's God's gift to them when it lets them bypass ads and dl copyrighted material. But it's Satan's curse on humanity when the Zuck does it to train his llm and dl copyrighted material.

      • deaux an hour ago
        Both scale and purpose make them completely different things. You're acting as if they're the same when they're not.
      • tclancy 2 hours ago
        So you’re that Hal Jordan then? Why would a Green Lantern feel the need to defend either? I feel like the Guardians would not accept your arguments as soon as you got to Oa, poozer. I guess what I am saying is don’t have a famous name. Seems obvious.
        • llbbdd an hour ago
          OP appears to be talking about real life. What are you on about?
    • Axsuul 4 hours ago
      Why even use Playwright for this? I feel like Claude just needs agent-browser and it can generate deterministic code from it.
      • thefreeman 21 minutes ago
        You can just start Claude with the --chrome flag too and it will connect to the Chrome extension.
      • dsrtslnd23 3 hours ago
        • dataviz1000 3 hours ago
          It is 2 months old!

          My excuse for not keeping up is that I'm in so deep that Claude Code can predict the stock market.

          I'll still publish mine and see if it has any value, but agent-browser looks very complete.

          Thank you for sharing!

    • defen 5 hours ago
      Would this hypothetically be able to download arbitrary videos from YouTube without the constant yt-dlp arms race?
      • dawnerd 4 hours ago
        Don’t know how this could be more stable than yt-dlp. When issues come up they’re fixed really quickly.
        • varenc 4 hours ago
          yt-dlp was very recently broken for ~2 days for any YouTube videos that required cookies: https://github.com/yt-dlp/yt-dlp/issues/16212

          Here is what actually fixed it: https://github.com/yt-dlp/ejs/pull/53/changes

          yt-dlp is relatively stable, but still occasionally breaks for long periods. I get the sense YouTube is becoming increasingly adversarial to yt-dlp as well.

          I don't know the details, but it doesn't seem like yt-dlp is running the entire YouTube JS+DOM environment. Something like a real headless browser seems like it would break less often, but be much heavier weight. And YouTube might have all sorts of other mitigations against this approach.

          • 22c 12 minutes ago
            > yt-dlp is running the entire YouTube JS+DOM environment

            IIRC they maintain a minimal execution environment that is able to run just the JS needed to pass a few checks, but this breaks often enough that they're planning to make Node.js or another JS interpreter a hard requirement (possibly already happened).

            • defrost 7 minutes ago
              Pretty much - yt-dlp currently requires Deno to "solve" YouTube challenges.

              * https://deno.com/

              * there may well be other JS interpreters that are accepted and can be used - but solving JS challenges is required for much, if not all, YT content.

          • coro_138 minutes ago
            > I get the sense YouTube is becoming increasingly adversarial to yt-dlp as well.

            I rarely use yt-dlp anymore.

            Before, I just updated. Now when I do that, it usually becomes complex and full of questions.

          • toomuchtodo 3 hours ago
            I think having a hook to an LLM endpoint, to enable yt-dlp to attempt to self-resolve until an official fix is available, would be a useful enhancement.
      • dataviz1000 4 hours ago
        > yt-dlp arms race

        I don't know anything about yt-dlp.

        It would probably help people who want to go to a concert have a chance to beat the scalpers, who corner the market on an event within 30 seconds of it hitting the marketplace services with 20,000 requests.

        I can try to see if it can do what yt-dlp does. But that is always a cat and mouse game.

        • defen 4 hours ago
          To clarify - yt-dlp is a command line tool for downloading YouTube videos, but it's in a constant arms race with the YouTube website because they are constantly changing things in a way that blocks yt-dlp.
          • dexterdog 2 hours ago
            I wouldn't call it an arms race. I don't update my client that often and I rarely have problems downloading any video with it.
    • Johnny_Bonk 4 hours ago
      Yes, please do and ping me when it's done lol. Did you make it into an agent skill?
      • dataviz1000 4 hours ago
        Exactly. It is an agent skill that interacts with a webpage, pressing buttons and stuff, while capturing and documenting all the API requests the page makes using Playwright's request/response interception methods. It creates a strongly typed, well documented API at the end.
        • bengt 4 hours ago
          Sounds awesome. I've been using mitmproxy's --mode local to intercept with a separate skill to read flow files dumped from it, but interactive is even better.
    • schainks 4 hours ago
      Very interested. Would even pay for an API for this. I am doing something similar with vibium and need something more token-efficient.
    • mikrl 3 hours ago
      I was doing something similar by capturing XHR requests while clicking through manually, then asking Codex to reverse-engineer the API from the export.

      Never tried that level of autonomy though. How long is your iteration cycle?

      If I had to guess, mine was maybe 10-20 minutes over a few prompts.

    • miohtama 3 hours ago
      I just ask Claude to reverse-engineer the site with the Chrome MCP. It goes to work by itself, uses your logged-in Chrome session cookies, etc.
    • xrd 5 hours ago
      Yes, please do!
      • dataviz1000 5 hours ago
        100%. I'll respond to this by Friday with a link to GitHub.

        I use Patchright + Ghostery, and I have a clever tool that uses WebSockets to pass screenshots at 1-second intervals to a dashboard, and pointer/keyboard events back to the server, which allows interacting with websites so that a user can set up authentication that is stored in the Chrome user profile, with all the cookies, history, local storage, etc., in the cloud on a server.
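
        The streaming part is roughly this shape (a sketch using stock Playwright; the dashboard endpoint and profile path are placeholders):

            import { chromium } from 'playwright';
            import WebSocket from 'ws';

            // A persistent profile keeps cookies/local storage between runs.
            async function stream(url: string) {
              const context = await chromium.launchPersistentContext('./profile', {
                headless: true,
              });
              const page = await context.newPage();
              await page.goto(url);

              // Push a JPEG frame to the dashboard once per second; pointer and
              // keyboard events come back over the same socket in the real tool.
              const socket = new WebSocket('ws://localhost:8080/frames');
              socket.on('open', () => {
                setInterval(async () => {
                  socket.send(await page.screenshot({ type: 'jpeg', quality: 60 }));
                }, 1000);
              });
            }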

        Can you list some websites that don't require a subscription that you would like me to test against? I used this for Robinhood, and I think LinkedIn would be a good example for people to use.

        • zzleeper 4 hours ago
          Another +1, it would be incredibly useful to play with this approach! (and fun)
    • liamdgray 3 hours ago
      Please do!
    • heystefan 2 hours ago
      Commenting to follow up.
    • toomuchtodo 4 hours ago
      Please publish!
    • retinaros 4 hours ago
      Isn't this what everyone that needs web validation does?
    • lizhang 3 hours ago
      [dead]
  • paulirish 5 hours ago
    The DevTools MCP project just recently landed a standalone CLI: https://github.com/ChromeDevTools/chrome-devtools-mcp/blob/m...

    Great news to all of us keenly aware of MCP's wild token costs. ;)

    The CLI hasn't been announced yet (sorry guys!), but it is shipping in the latest v0.20.0 release. (Disclaimer: I used to work on the DevTools team. And I still do, too)

    • hank1931 35 minutes ago
      Love the Mitch Hedberg reference! Thank you! Always good to get a little Mitch!

      ‘I don’t have a girlfriend. But I do know a woman who’d be mad at me for saying that.’

      ‘I’m against picketing, but I don’t know how to show it.’

      ‘I haven’t slept for ten days, because that would be too long.’

      ‘I like to play blackjack. I’m not addicted to gambling. I’m addicted to sitting in a semi-circle.’

      • paulirish 17 minutes ago
        "I was going to get my teeth whitened but then I said, fuck that, I'll just get a tan instead."
    • commanderkeen08 4 hours ago
      MCPs cost nothing in CC now with Tool Search.
      • cheema33 4 hours ago
        > MCPs cost nothing in CC now with Tool Search.

        This is incorrect. Plenty of people have run the numbers. Tool search does not fix all problems with MCP.

        • ehsanu1 4 hours ago
          What are the numbers? Are there problems other than context usage that you're referring to?
      • wahnfrieden 4 hours ago
        Codex also has this…
  • aadishv 6 hours ago
    Someone already made a great agent skill for this, which I'm using daily, and it's been very cool!

    https://github.com/pasky/chrome-cdp-skill

    For example, I use codex to manage a local music library, and it was able to use the skill to open a YT Music tab in my browser, search for each album, and get the URL to pass to yt-dlp.

    Do note that it only works for Chrome rn, so you have to edit the script to point it at a different Chromium browser's binary (e.g. I use Helium), but it's simple enough.

    • esperent 18 minutes ago
      > Most browser automation tools launch a fresh, isolated browser. This one connects to the Chrome you're already running

      Is this the same as what Claude in Chrome does?

      I tried that for a while and since I use Firefox and Chromium, the security problem of it seeing your tabs wasn't a big deal. Fresh Chrome install, only ever used for this exact purpose. Plus you can watch it working in real (actually very slow) time so if you did point it at something risky you can take over at any point.

      For actual testing of web apps though, a skill with playwright cli in headless mode is much more effective. About 1-2k context per interaction after a bit of tuning.

    • Etheryte 6 hours ago
      On one hand, cool demo, on the other, this is horrifying in more ways than I can begin to describe. You're literally one prompt injection away from someone having unlimited access to all of your everything.
      • mh- 6 hours ago
        Not the person you're replying to, but: I just use a separate, dedicated Chrome profile that isn't logged into anything except what I'm working on. Then I keep the persistence, but without commingling in a way that dramatically increases the risk.

        edit: upon rereading, I now realize the (different) prompt injection risk you were calling out re: the handoff to yt-dlp. Separate profiles won't save you from that, though there are other approaches.

        • bartek_gdn 3 hours ago
          That's also my approach; I quickly built a CLI for this with lightweight session management:

          https://news.ycombinator.com/item?id=47207790

        • sofixa 5 hours ago
          Even without the bash escape risk (which can be mitigated with the various ways of only allowing yt-dlp to be executed), YT Music is a paid service gated behind a Google account, with associated payment method. Even just stealing the auth cookie is pretty serious in terms of damage it could do.
          • mh- 5 hours ago
            Agreed. I wouldn't cut loose an agent that's at risk of prompt injection w/ unscoped access to my primary Google account.

            But if I understood the original commenter's use case, they're just searching YT Music to get the URL to a given song. This appears[0] to work fine without being logged in. So you could parameterize or wrap the call to yt-dlp and only have your cookie jar usable there (rough sketch after the links).

            [0]: https://music.youtube.com/search?q=sandstorm

            [1]: https://music.youtube.com/watch?v=XjvkxXblpz8
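
            The wrapper could be as small as this sketch (the cookie path and URL allowlist are hypothetical; --cookies is a real yt-dlp flag):

                import { execFileSync } from 'node:child_process';

                // The agent calls this wrapper; it never sees the cookie file,
                // and only YT Music URLs are accepted.
                function download(url: string) {
                  if (!url.startsWith('https://music.youtube.com/')) {
                    throw new Error('refusing non-YT-Music URL');
                  }
                  execFileSync(
                    'yt-dlp',
                    ['--cookies', '/secrets/ytmusic-cookies.txt', url],
                    { stdio: 'inherit' },
                  );
                }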

            • sofixa 5 hours ago
              Oh, that's true - it even allows you to play without an account. I could swear that at some point it flat-out refused any use unless you were logged in with an account that has YT Music (I remember having to go to regular YouTube to get the same song to send it to someone who didn't have it).
      • sheepscreek 6 hours ago
        As long as it’s gated and not turned on by default, it’s all good. They could also add a warning/sanity check similar to “allow pasting” in the console.
        • hrmtst93837 5 hours ago
          Relying on warnings or opt-ins for something with this blast radius is security theater more than protection. The cleverest malware barely waits for you to click OK before making itself at home, so that checkbox is a speed bump on a highway.

          Chrome's 'allow pasting' gets ignored reflexively by most users anyway. If this agent can touch DevTools the attack surface expands far faster than most people realize or will ever audit.

      • aadishv 6 hours ago
        Of course I still watch it and have my finger on the escape key at all times :)
        • glenpierce 5 hours ago
          I am in awe of the confidence you have in your reflexes.
          • aadishv 4 hours ago
            You get used to it :) And especially once you get used to the YOLO lifestyle, you end up realizing that practically any form of security is entirely worthless when you're dealing with a 200 IQ brainwashed robot hacker.

            I think using the Pi coding agent really got me used to this way of thinking: https://mariozechner.at/posts/2025-11-30-pi-coding-agent/#to...

        • bergheim 6 hours ago
          For now you are. All these things fall with time, of course. You will stop caring once you start feeling safe, we all do.

          Also. AAarrgh, my new thing to be annoyed at is AI-written slop.

          "No browser automation framework, no separate browser instance, no re-login."

          Oh really, nice. No separate computer either? No separate power station, no house, no star wars? No something else we didn't ask for? Just one toggle and you go? Whoaaaaaa.

          Edit: lol even the skill itself is vibe coded:

          Lightweight Chrome DevTools Protocol CLI. Connects directly via WebSocket — no Puppeteer, works with 100+ tabs, instant connection.

          I feel like there's nothing fucking left on the internet anymore that is not some mean of whatever the LLM is trained to talk like now.

          • tacitusarc 5 hours ago
            What can you do? I mentioned the use of AI on another thread, asking essentially the same question. The comment was flagged, presumably as off topic. Fair enough, I guess. But about 80% (maybe more) of posted blogs etc that I see on HN now have very obvious signs of AI. Comments do too. I hate it. If I want to see what Claude thinks I can ask it.

            HN is becoming close to unusable, and this isn’t like the previous times where people say it’s like reddit or something. It is inundated with bot spam, it just happens the bot spam is sufficiently engaging and well-written that it is really hard to address.

            • bergheim 5 hours ago
              I hear you and I agree. I don't know. Gated communities?
    • paulirish 5 hours ago
      To be clear, this isn't a skill for the devtools mcp, but an independent project. It doesn't look bad, but obviously browser automation + agents is a very busy space with lots of parallel efforts.

      DevTools MCP and its new CLI are maintained by the team behind Chrome DevTools & Puppeteer, and it certainly has a more comprehensive feature set. I'd expect it to be more reliable, but... hey, open source competition breeds innovation and I love that. :)

      (I used to work on the DevTools team. And I still do, too)

    • xmorse 4 hours ago
      Does anyone really use these hacked-up-with-duct-tape skills? Why not use something more reliable like playwriter.dev?
  • mmaunder 5 hours ago
    Google is so far behind in agentic CLI coding. Gemini CLI is awful. So bad in fact that it's clear none of their team use it. Also MCP is very obviously dead, as any of us doing heavy agentic coding know. Why permanently sacrifice that chunk of your context window when you can just use CLI tools, which are also faster and more flexible, and which many models are already trained on? Playwright with headless Chromium or headed Chrome is what anyone serious is using, and we get all the dev and inspection tools already. And it works perfectly. This only has appeal to those starting out and confused into thinking this is the way. The answer is almost never MCP.
    • zeroxfe 3 hours ago
      > Also MCP is very obviously dead, as any of us doing heavy agentic coding know.

      As someone that does heavy agentic coding (using basically all the tools), this is so far from the truth. People claiming this have probably never worked in large enterprise environments where things like authentication, RBAC, rate limiting, abuse detection, centralized management/updates/ops, etc. are a huge part of the development and deployment workflow.

      In these situations you can't just use skills and CLI tools without a gigantic amount of retooling and increased operational and security complexity. MCP is really useful here, and allows centralized eng and ops teams to manage their services in a way that aligns with the organization's overall posture, policies, and infrastructure.

      > Google is so far behind agentic cli coding. Gemini CLI is awful.

      This part I totally agree with. It's really hard to express how bad it is (and it's really disappointing).

      • bloppe 39 minutes ago
        > you can't just use skills and cli tools without a gigantic amount of retooling and increased operational and security complexity

        You're describing MCP. After all, MCP is just reinventing the OpenAPI wheel. You can just have a self-documenting REST API using OpenAPI. Put the spec in your context and your model knows how to use it. You can have all the RBAC and rate limiting and auth you want. Heck, you could even build all that complexity into a CLI tool if you want. MCP the protocol doesn't actually enable anything. And implementing an MCP server is exactly as complex as using any other established protocol if you're using all those features anyway

      • moritonal 2 hours ago
        Given MCP is supposed to just be a standardised format for self-describing APIs, why are all the features you listed MCP-related things? It sounds more like it has forced the enterprise to build features that CLI tooling didn't have?
        • rsalus 2 hours ago
          Mostly by virtue of being a common standard. MCP servers are primarily useful in a remote environment, where centralized management of cross-cutting concerns matters. Also, it's really useful for integrating existing distributed services, e.g., internal data lakes.

          I think it's clear a self-describing CLI is optimal for local-first tooling and portability. I personally view remote MCP servers as complementary in the space.

        • tomnipotent an hour ago
          MCPs can hide most things behind an API.
    • IX-103 2 hours ago
      FYI: Gemini CLI is used internally at Google. It's actually more popular than Antigravity. Google uses MCP services internally for code search (since everything is in a mono-repo you don't want to waste time grepping billions of files), accessing docs and bugs, and also accessing project-specific RAG databases for expertise grounding.

      Source - I know people at Google.

    • cheema33 4 hours ago
      > Also MCP is very obviously dead

      Some people will push back on this. They are holding out hope that the recent improvements Anthropic has made in this regard have improved the context rot problem with MCP. Anthropic's changes improve things a little. But it is akin to putting lipstick on a pig. It helps, but not much.

      The reason MCP is dying/dead is because MCP servers, once configured, bloat up context even when they are not being used. Why would anybody want that?

      Use agent skills. And say goodbye to MCP. We need to move on from MCP.

      • maxwellg an hour ago
        Is your agent harness dropping the entire MCP server tool description output directly into the context window? Is your agent harness always adding MCP servers to the context even when they are not being used?

        MCP is a wire format protocol between clients and servers. What ends up inside the context window is the agent builder's decision.

      • ktoo_ 2 hours ago
        > it is akin to putting lipstick on a pig. It helps, but not much.

        The lipstick helps? This had me in stitches. Sorry for the non-additive reply. This is the funniest way I have seen this or any other phrase explained. By far. Honestly has made my day and set me up for the whole week.

      • dominotw 4 hours ago
        I am using the Notion MCP. Is there a corresponding skill? Also, wtf is a plugin?
      • Rapzid 3 hours ago
        The bloat problem is already outdated though. People are having the LLM pick the MCP servers it needs for a particular task up front, or picking them out-of-band, so the full list doesn't exist in the context every call.
    • edwinjm 2 hours ago
      MCP is dead? Which CLI tool should we use to instruct Chrome to open a page and click the Open button? And to read what appears in the console after clicking?

      MCP permanently sacrifices a chunk of the context window? And a skill for your CLI is free?

    • rsalus 4 hours ago
      MCP is very much not dead. Centralized remote MCP servers are incredibly useful. Also, bespoke CLIs still require guidance for models to use effectively, so it's clear that token efficiency is still an issue regardless.
      • Torn 4 hours ago
        Tbh I find self-documenting CLIs (e.g. with a `--help` flag, and printing correct usage examples when LLMs make things up) plus a skill that's auto-invoked to be pretty reliable. CLIs can do OAuth dances too just fine.

        MCP's remaining moats I think are:

        - No-install product integrations (just paste in mcp config into app)

        - Non-developer end users / no shell needed (no terminal)

        - Multi-tenant auth (many users, dynamic OAuth)

        - Security sandboxing (restrict what agents can do), credential sandboxing (agents never see secrets)

        - Compliance/audit (structured logs, schema enforcement)?

        If you're a developer building for developers though, CLI seems to be a clear winner right now.
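
        A sketch of that self-documenting pattern (the command names are made up): --help and any unrecognized input both print full usage with worked examples, so the model can self-correct on the next call.

            // pagectl.ts - hypothetical CLI; bad input echoes correct usage
            // instead of a terse error.
            const HELP = `usage: pagectl <command> [args]

            commands:
              snapshot <url>   capture DOM snapshot + screenshot
              network <url>    dump request/response log as JSON

            examples:
              pagectl snapshot https://example.com
              pagectl network https://example.com | jq '.[].url'
            `;

            const [command] = process.argv.slice(2);
            if (!command || !['snapshot', 'network'].includes(command)) {
              console.log(HELP);
              process.exit(command === '--help' ? 0 : 1);
            }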

        • quotemstr 3 hours ago
          Imagine if, in addition to local MCP "servers", the MCP people had nurtured a structured CLI-based --help equivalent, consumable by LLMs and shell completion engines alike. Doing so would unify "CLI" (trivial deployment; human accessibility) and MCP-style (structured and discoverable tool calling) in a single DWIM artifact.

          But since when has this industry done the right thing informed by wisdom and hindsight?

          • rsalus 3 hours ago
            That's a pretty interesting idea. It would be nice if there were such a standard. The approach I'm taking right now: a CLI that accepts structured JSON as input, with an 'mcp' subcommand that starts a stdio server. I bundle a 'help' command with a 'describe' action for self-service guidance scoped to a particular feature/tool.
      • abhis3798 4 hours ago
        I see remote MCP servers as a great interface to consume API responses. The idea that you essentially make your APIs easily available to agents to bring in relevant context is a powerful one.

        When folks say MCP is dead, I don't get it. What other alternatives exist in place of MCP? Arbitrary code via curl/sdks to call a remote endpoint?

        • attentive 3 hours ago
          > What other alternatives exist in place of MCP? Arbitrary code via curl/sdks to call a remote endpoint?

          A CLI?

          For example, the AWS CLI. It's a full interface to the AWS API. Why would you need MCP for that?

          And if you have any doubts: agents use it to great effect even without any relevant skill. "aws help" is fully discoverable.

          • rsalus 3 hours ago
            Yes, but CLIs thus need self-service commands to provide guidance, and their responses need to be optimized for consumption by agents. In a sense, this is the same sort of context tax that MCP servers incur. So in my view CLI and MCP are complementary tools; one is not strictly superior to the other.
      • mattnewton 4 hours ago
        I think CLIs are more token-efficient: the help menu is loaded only when needed, and the output is trivially pipeable to grep or jq to filter out what the model actually wants.
      • nojito 4 hours ago
        All you need is a simple skills.md and maybe a couple of examples, and Codex picks up my custom toolkit and uses it.
        • dominotw 3 hours ago
          What's your custom toolkit?
    • sega_sai 4 hours ago
      I don't know if this is just an anecdotal random impression, but in the last week or two I have had mostly good experiences with Google's CLI, while previously I constantly complained about it. I have been using it together with Codex, and I would not say that one is much better than the other.

      It is hard to say nowadays, when things change so quickly.

    • hu3 2 hours ago
      > Also MCP is very obviously dead...

      Couldn't have been more wrong. MCP, despite its manageable downsides, is leagues ahead of anything else in many ways.

      The fact that SoTA models are trained to handle MCP should be hint enough to the observant.

      I probably build one MCP tool per week at work.

      And every project I work on gets its own MCP tool too. It's invaluable to have specialized per-project tooling instead of a bunch of heterogeneous scripts+glue+prayer.

      Anything specialized goes into an MCP.

    • girvo 4 hours ago
      I know it’s a bit of a tangent but man you’re right re. Gemini CLI. It’s woefully bad, barely works. Maybe because I was a “free” user trying it out at the time, but it was such a bad experience it turned me off subscribing to whatever their coding plan is called today.
      • ElCapitanMarkla 3 hours ago
        I had this experience too, but I trialed the Pro sub a few weeks back and it has been great. I have no complaints this time.
      • luckydata 4 hours ago
        It's not the CLI, it's the model. The model wasn't trained to do that kind of work; it was trained to do one-shot coding, not sustained back and forth until it gets it right like Claude and ChatGPT.
    • danpalmer 3 hours ago
      > So bad in fact that it’s clear none of their team use it.

      I use it extensively, many of my colleagues do. I get a ton of value out of it. Some prefer Antigravity, but I prefer Gemini CLI. I get fairly long trajectories out of it, and some of my colleagues are getting day-long trajectories out of it. It has improved massively since I started using it when it first came out.

    • spiderfarmer 4 hours ago
      MCP is not just used for coding.
    • quotemstr 3 hours ago
      > Why permanently sacrifice that chunk of your context window when you can just use CLI tools, which are also faster and more flexible, and which many models are already trained on

      What about all the CLI tools not baked into the model's priors?

      Every time someone says "extensibility mechanism X is dead!", I think "Well, I guess that guy isn't doing anything that needs to extend the statistical average of 2010s-era Reddit"

  • recroad 2 hours ago
    I've been using TideWave[1] for the last few months and it has this built-in. It started off as an Elixir/LiveView thing but now they support popular JavaScript frameworks and RoR as well. For those who like this, check it out. It even takes it further and has access to the runtime of your app (not just the browser).

    The agent basically is living inside your running app with access to databases, endpoints etc. It's awesome.

    1. https://tidewave.ai/

    • galaxyLogic an hour ago
      Interesting. Does it only work with known frameworks like Next, React etc., or could I use it with my plain Node.js app which produces browser output?
      • recroad an hour ago
        No, it doesn't work with server-side-only apps.
  • jasonjmcghee an hour ago
    I had fun playing with it + WebMCP this weekend, but I think, similarly to how Claude Code / Codex + MCP require a SKILL.md, websites might too.

    We could put them in a dedicated tag:

        <script type="text/skill+markdown">
        ---
        name: ...
        description: ...
        ---
        ...
        </script>
    
    For all the skills you want on the page, optionally mark which ones by default "should be read in full to properly use the page".

    And then add some JavaScript functions to wrap it / simplify required tokens.
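
    E.g. a tiny page-side helper along these lines (hypothetical; it just collects the embedded skill docs):

        // Collect every skill document embedded in the page so an agent
        // (or a wrapper API) can read them without parsing the whole DOM.
        function readSkills(): string[] {
          return Array.from(
            document.querySelectorAll('script[type="text/skill+markdown"]'),
          ).map((el) => el.textContent ?? '');
        }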

    Made a repo and a website if anyone is interested: https://webagentskills.dev/

  • boomskats 6 hours ago
    Been using this one for a while, mostly with Codex on opencode. It's more reliable and token-efficient than other DevTools protocol MCPs I've tried.

    Favourite unexpected use case for me was telling Gemini to use it as an SVG editing REPL, where it was able to produce some fantastic looking custom icons for me after 3-4 generate/refresh/screenshot iterations.

    Also works very nicely with electron apps, both reverse engineering and extending.

  • zxspectrumk48 6 hours ago
    I found this one working amazingly well (same idea - connect to existing session): https://github.com/remorses/playwriter
  • RALaBarge 2 hours ago
    I made a websocket proxy + chrome extension to give control of the DOM to agents for my middleware app: https://github.com/RALaBarge/browserbox

    The thing I am working on at the moment is improving agentic tool usage success rates for my research, and I use this as a proxy to access everything with the cookies I allow in the session.

  • tonyhschu 5 hours ago
    Very cool. I do something like this but with Playwright. It used to be a real token hog though, and got expensive fast. So much so that I built a wrapper to dump results to disk first then let the agent query instead. https://uisnap.dev/

    Will check this out to see if they’ve solved the token burn problem.

  • silverwind 5 hours ago
    I found Firefox with https://github.com/padenot/firefox-devtools-mcp to work better than the default Chrome MCP; it seems much faster.
  • cheema33 4 hours ago
    How does this compare with the Playwright CLI?

    https://github.com/microsoft/playwright-cli

    • Torn 4 hours ago
      I personally found playwright-cli, and agent-browser, which wraps Playwright, both more token-efficient than using the raw MCP.

      Odd that this article from Dec 2025 has been posted to the top of HN though.

    • EGreg 4 hours ago
      It’s made by Google and comes with Chrome
  • rossvc 5 hours ago
    I've been using the DevTools MCP for months now, but it's extremely token heavy. Is there an alternative that provides the same amount of detail when it comes to reading back network requests?
    • nerdsniper 5 hours ago
      It's probably not fully optimized and could be compacted more with just some effort, and further with clever techniques, but browser state/session data will always use up a ton of tokens because it's a ton of data. There's not really a way around that. AIs have a surprising "intuition" about problems that often helps them guess at solutions based on insufficient information (and they guess correctly more often than I expect they should). But when their intuition isn't enough and you need to feed them the real logs/data... it's always gonna use a bunch of tokens.

      This is one place where human intuition helps a ton today. If you can find the most relevant snippets and give the AI just the right context, it does a much better job.

    • DimitriBouriez 4 hours ago
      I'm experimenting with a different approach (no CDP/ARIA trees, just Chrome extension messaging that returns a numbered list of interactive elements). Way lighter on tokens and undetectable, but still very experimental: https://github.com/DimitriBouriez/navagent-mcp
    • mmaunder 5 hours ago
      Yes. CLI. Always CLI. Never MCP. Ever. You’re welcome.
      • nerdsniper 4 hours ago
        That doesn't solve the issue here because the amount of data in the browser state dwarfs the MCP overhead.
        • bartek_gdn 3 hours ago
          Can't we just iteratively inspect the network traces then? We don't need to consume the whole 2 MB of data; maybe just dump the network trace and use jq to get the fields, to keep the context minimal. I haven't added this in https://news.ycombinator.com/item?id=47207790 , but I feel it would be a good addition. Then prompt it with instructions to gradually discover the necessary data.

          But then I wonder, where the balance is between a bunch of small tool calls, vs one larger one.

          I recall some recent discussion here on HN about big data analysis.

        • cheema33 4 hours ago
          > That doesn't solve the issue here because the amount of data in the browser state dwarfs the MCP overhead.

          The problem with MCP is that you are paying the price in token usage, even if you are not using the MCP server. Why would anybody want that?

          And no, the tool search function recently introduced by Anthropic does not completely solve this problem.

  • netdur 3 hours ago
    I wrote an AI agent that does Chrome testing. Yes, the Chrome MCP does work: https://github.com/netdur/hugind/tree/main/agent/chrome_test...
  • NiekvdMaas 6 hours ago
    Also works nicely together with agent-browser (https://github.com/vercel-labs/agent-browser) using --auto-connect
  • senand 5 hours ago
    I suggest using https://github.com/simonw/rodney instead.
    • meowface 5 hours ago
      Unfortunately there are like a billion competitors to this right now (including Playwright MCP, Playwright CLI, the new baked-in Playwright feature in Codex /experimental, Claude Code for Chrome...) and I can never quite decide if or when I should try to switch. I'm still just using the ordinary Playwright MCP server in both Codex and Claude Code, for the time being.
  • raw_anon_1111 5 hours ago
    I don’t do any serious web development and haven’t for 25 years aside from recently vibe coding internal web admin portals for back end cloud + app dev projects. But I did recently have to implement a web crawler for a customer’s site for a RAG project using Chromium + Playwrite in a Docker container deployed to Lambda.

    I ran the Docker container locally for testing. Could a web developer test using Claude + Chromium in a Docker container without using their real Chrome instance?

  • anesxvito 4 hours ago
    Been using MCP tooling heavily for a few months, and browser debugging integration is one of those things that sounds gimmicky until you actually try it. The real question is whether it handles flaky async state reliably or just hallucinates what it thinks the DOM looks like.
  • bartek_gdn 3 hours ago
    My approach is a thin CLI wrapper instead:

    https://news.ycombinator.com/item?id=47207790

  • speedgoose 6 hours ago
    Interesting. MCP APIs can be useful for humans too.

    Chrome's DevTools already had an API [1], but perhaps the new MCP one is more user-friendly, as one main requirement of MCP APIs is to be understood and used correctly by current-gen AI agents.

    [1]: https://chromedevtools.github.io/devtools-protocol/
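
    For comparison, the raw protocol is already quite usable by hand: with Chrome started with --remote-debugging-port=9222, a session is just JSON over a WebSocket. A minimal sketch using the Node 'ws' package:

        import WebSocket from 'ws';

        async function main() {
          // Each open tab advertises its own webSocketDebuggerUrl here.
          const targets = await fetch('http://localhost:9222/json').then((r) => r.json());
          const ws = new WebSocket(targets[0].webSocketDebuggerUrl);

          ws.on('open', () => {
            // CDP messages are JSON: an id, a method, and params.
            ws.send(JSON.stringify({
              id: 1,
              method: 'Page.navigate',
              params: { url: 'https://example.com' },
            }));
          });
          ws.on('message', (data) => console.log(data.toString()));
        }

        main();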

  • glerk 5 hours ago
    Note that this is a mega token guzzler in case you’re paying for your own tokens!
  • oldeucryptoboi 5 hours ago
    I tell Claude to use Playwright so I don't even need to do the setup myself.
    • nomilk 5 hours ago
      Similarly, Cursor has a built-in browser and can visit localhost to see the results in the browser. Although I don't use it much (I probably should).
  • pritesh1908 5 hours ago
    I have been using Playwright for a fairly long time now. Do check it out.
  • slrainka 5 hours ago
    chrome-cli with the remote debugging port has been working fine this entire time.
  • teaearlgraycold 3 hours ago
    I love how in their demo video where they center an element it ends up off-center.
  • JKolios 5 hours ago
    Now that there's widespread direct connectivity between agents and browser sessions, are CAPTCHAs even relevant anymore?
  • holoduke 3 hours ago
    One tip for the illegal scrapers or automators out there: CasperJS and PhantomJS still work very well against anti-bot detection. These are very old libs, no longer maintained, but I can even scrape and authenticate at my banks.
  • Yokohiii 6 hours ago
    Was already eye-rolling at the headline. Then I realized it's from Chrome.

    Hoping for some good stories from OpenClaw users that permanently run debug sessions.

  • ClaudeAgent_WK15 minutes ago
    [dead]
  • justboy1987 an hour ago
    [dead]
  • robutsume 4 hours ago
    [dead]
  • aplomb1026 4 hours ago
    [dead]
  • ptak_dev 5 hours ago
    [dead]
  • myrak 6 hours ago
    [dead]
  • AlexDunit 6 hours ago
    [flagged]
    • David-Brug-Ai 6 hours ago
      This is the exact problem that pushed me to build a security proxy for MCP tool calls. The permission model in most MCP setups is basically binary: either the agent can use the tool or it can't. There's nothing watching what it does with that access once it's granted.

      The approach I landed on was a deterministic enforcement pipeline that sits between the agent and the MCP server, so every tool call gets checked for things like SSRF (DNS resolve + private IP blocking), credential leakage in outbound params, and path traversal, before the call hits the real server. No LLM in that path, just pattern matching and policy rules, so it adds single-digit ms overhead.

      The DevTools case is interesting because the attack surface is the page content itself. A crafted page could inject tool calls via prompt injection. Having the proxy there means even if the agent gets tricked, the exfiltration attempt gets caught at the egress layer.

    • rob 6 hours ago
      Someone left their bot on default settings.
  • Sonofg0tham 6 hours ago
    [flagged]
    • simianwords 6 hours ago
      AI
      • rzmmm 5 hours ago
        Yes. Can someone tell me why even HN has bots? For selling upvotes for advertising purposes?
        • Sonofg0tham 4 hours ago
          I'm not a bot and definitely not advertising - I'm new on HN and trying to contribute with a few comments where I can.
  • paseante 3 hours ago
    [flagged]
    • raincole 2 hours ago
      The ultimate conflict of interest here is that the sites people want to crawl the most are the ones that want to be crawled by machines the least (e.g. Youtube). So people will end up emulating genuine human users one way or another.
    • socalgal2 2 hours ago
      I feel like the fact that HTML is the end result is exactly why the Web is so successful. Yes, structured APIs sound great, until you realize the API owners will never give you the data you actually want via their APIs. This is why HTML has done so well. Why extensions exist. And why it's better for browser automation.
    • Lucasoato 2 hours ago
      They’re trying to solve it by making it easier to get Markdown versions of websites.

      For example, you can get a markdown out of most OpenAI documentation by appending .md like this: https://developers.openai.com/api/docs/libraries.md

      Not definitive, but still useful.

    • maxaw 2 hours ago
      Fully agree. It will take some time though, as the immediate incentive isn't clear for consumer-facing companies to do extra work to help people bypass the website layer. But I think consumers will begin to demand it once they experience it through their agent. E.g. pizza company A exposes an API alongside its website and pizza company B doesn't, and the consumer notices their agent is 10x+ faster interacting with company A and begins to question why.
    • codybontecou 3 hours ago
      Is this just a well-documented API?
    • ElectricalUnion 3 hours ago
      > interface designed for humans — the DOM.

      Citation needed.

      > The web already went through this evolution once: we went from screen-scraping HTML to structured APIs. Now we're regressing back to scraping because agents need to interact with sites that only have human interfaces.

      To me, sites that "only have human interfaces" are more likely than not that way totally on purpose, attempting to maximize human retention/engagement, and are more likely to require strict anti-bot measures like Proof-of-Work to be usable at all.

    • quotemstr 3 hours ago
      > expose a machine-readable interaction layer alongside the human one

      Which is called ARIA and has been a thing forever.

    • imiric 3 hours ago
      > What we actually need is a standard for websites to expose a machine-readable interaction layer alongside the human one.

      We had this 20 years ago with the Semantic Web movement, XHTML, and microformats. Sadly, it didn't pan out, for various reasons, most of them non-technical. There are remnants of it today in RSS feeds, which are either unsupported or badly supported by most web sites.

      Once advertising became the dominant business model on the web, it wasn't in publishers' interest to provide a machine-readable format of their content. Adtech corporations took control of the web, and here we are. Nowadays even API access is tightly controlled (see Reddit, Twitter, etc.).

      So your idea will never pan out in practice. We'll have to continue to rely on hacks and scraping will continue to be a gray area. These new tools make automated scraping easier, for better or worse, but publishers will find new ways to mitigate it. And so it goes.

      Besides, if these new tools are "superintelligent", surely they're able to navigate a web site. Captchas are broken and bot detection algorithms (or "AI" themselves) are unreliable. So I'd say the leverage is on the consumer side, for now.