449 pointsby meetpateltech6 days ago63 comments
  • zellyn6 days ago
    It’s frustratingly difficult to see what these (A2A and MCP) protocols actually look like. All I want is a simple example conversation that includes the actual LLM outputs used to trigger a call and the JSON that goes over the wire… maybe I’ll take some time and make a cheat-sheet.

    I have to say, the endorsements at the end somehow made this seem worse…

    • mlenhard6 days ago
      I was in the same boat in regards to trying to find the actual JSON that was going over the wire. I ended up using Charles to capture all the network requests. I haven't finished the post yet, but if you want to see the actual JSON I have all of the request and responses here https://www.catiemcp.com/blog/mcp-transport-layer/
      • swyx5 days ago
        itd be nice if you prettified your json in the blogpost

        fwiw i thought the message structure was pretty clear on the docs https://modelcontextprotocol.io/docs/concepts/architecture#m...

        • mlenhard5 days ago
          Yeah, I plan on improving the formatting and adding a few more examples. There were even still some typos in the piece. To be honest, I didn't plan on sharing it yet; I just figured it might be helpful for the OP, so I shared it early.

          I also think the docs are pretty good. There's just something about seeing the actual network requests that helps clarify things for me.

        • nl5 days ago
          Some (many?) people learn better from concrete examples and generalize from them.
          • TeMPOraL5 days ago
            Not just that, but it's also useful to have examples to validate a) your understanding of the spec, and b) product's actual adherence to the spec.
      • zellyn6 days ago
        Oh, that's really nice. Did you capture the responses from the LLM. Presumably it has some kind of special syntax in it to initiate a tool call, described in the prompt? Like TOOL_CALL<mcp=github,command=list> or something…
      • kristopolous6 days ago
        I had never heard of charles ... (https://www.charlesproxy.com/) I basically wrote a simple version of it 20 years ago (https://github.com/kristopolous/proxy) that I use because back then, this didn't exist ... I need to remember to toss my old tools aside
        • stavros5 days ago
          Well, Charles launched almost 20 years ago, so I'd say there's a good chance that it did exist.
          • kristopolous5 days ago
            Well hopefully my current thing, a streaming markdown renderer for the terminal (https://github.com/kristopolous/Streamdown) hasn't also been a waste of time
            • stavros5 days ago
              Why would anything be a waste of time?
              • kristopolous5 days ago
                I build things I cannot find.

                Every project I do is an assertion that I don't believe the thing I make exists.

                I have been unable to find a streaming forward only markdown renderer for the terminal nor have I been able to find any suitable library that I could build one with.

                So I've taken on the ambitious effort of building my own parser and renderer and go through all the grueling testing that entails

              • mptest5 days ago
                the answers to that question are hugely variable and depend on the objective and defining waste. if one values learning intrinsically, like most of us here probably do, it is pretty hard to come up with a waste of time, even taking the rare break from learning.

                But it seems self-evident where constraints like markets or material conditions might demarcate usefulness and waste.

                Even the learners who are as happy to hear about linguistics as they are material science I presume do some opportunity cost analysis as they learn. Personally speaking, I rarely, if ever, feel like I'm wasting time per se but I always recognize and am conscious of the other things I could be doing to better maximize alternative objectives. That omnipresent consciousness may just be anxiety though I guess...

              • nsonha5 days ago
                either that or "waste of time" is a meaningless phrase
          • 5 days ago
            undefined
        • mlenhard6 days ago
          Yeah, at its core it's just a proxy, so there are a lot of other tools out there that would do the job. It does have a nice UI and I try to support projects like it when I can.

          I'll check out your proxy as well, I enjoy looking at anything built around networking.

        • Maxious5 days ago
          even the approach that charles takes for intercepting TLS traffic is a bit old school (proxies, fake root certs etc.) - cool kids use eBPF https://mitmproxy.org/posts/local-capture/linux/
          • stavros5 days ago
            I can see how you don't need a proxy any more, but I don't see how you can bypass TLS without fake root certs, even with eBPF.
    • sunpazed5 days ago
      I had the same frustration and wanted to see "under the hood", so I coded up this little agent tool to play with MCP (sse and stdio), https://github.com/sunpazed/agent-mcp

      I really is just json-rpc 2.0 under the hood, either piped to stdio or POSTed over http.

    • daxfohl6 days ago
      For MCP I found the tutorials at https://github.com/block/goose made it click for me.
    • jacobs1236 days ago
      It's shown in the link below. It's kind of crazy that they have this huge corporate announcement with 50 logos for something that under the hood seems sort of arbitrary and very fragile, and is probably very sensitive to things like exact word choice and punctuation. There will be effects like bots that say "please" and "thank you" to each other getting measurably better results.

      https://google.github.io/A2A/#/documentation?id=multi-turn-c...

      • TS_Posts3 days ago
        Hi there (I work on a2a) - can you explain the concern a bit more? We'd be happy to look.

        A2A is a conduit for agents to speak in their native modalities. From the receiving agent implementation point of view, there shouldn't be a difference in "speaking" to a user/human-in-the-loop and another agent. I'm not aware of anything in the protocol that is sensitive to the content. A2A has 'Messages' and 'Artifacts' to distinguish between generated content and everything else (context, thoughts, user instructions, etc) and should be robust to formatting challenges (since it relies on the underlying agent).

      • kc103 days ago
        Can you please expand on this?

        The sensitivity to prompts and response quality are related to an agent's functionality, A2A is only addressing the communication aspects between agents and not the content within.

    • wongarsu5 days ago
      You weren't kidding with the endorsements. It's endorsed by KPMG, Accenture and BCG. McKinsey and PwC are not in the partner list but are mentioned as contributors. Honorable mention to SAP as another company whose endorsements are a warning sign
    • ronameles6 days ago
      https://www.youtube.com/watch?v=5_WE6cZeDG8 - I work at an industrial software company. You can kind of think of us as an API layer to factory data, that is generally a mess. This video shows you what MCP can do for us in terms of connecting factory data to LLMS. Maybe it will help. A2A is new to me, and I need to dig in.

      Basically if we expose our API over MCP, agents can "figure it out". But MCP isn't secure enough today, so hoping that gets enhanced.

    • behnamoh6 days ago
      It seems companies figured introducing "protocols" or standards helps their business because if it catches on, it creates a "moat" for them: imagine if A2A became the de facto standard for agent communication. Since Google invented it and already incorporated in their business logic, it would suddenly open up the entire LLM landscape to Google services (so LLMs aren't the end goal here). Microsoft et al. would then either have to introduce their own "standard" or adopt Google's.
    • mycall6 days ago
      It is quite hard to reliably and consistently connect deterministic systems and goals with nondeterministic compute. I don't know if all of this will ever be exactly what we want.
      • throwaway-blaze6 days ago
        Sort of like asking a non-deterministic human to help make changes to an existing computer system. Extends the problems of human team management to our technology systems.
        • Xelynega6 days ago
          Not only extends them, but compounds them because you have a non-deterministic human making changes to a non-deterministic computer system which is making changes to an existing computer system.
          • TeMPOraL5 days ago
            That's basically the problem of employing and managing people.
            • yurishimo5 days ago
              And look at how much effort our industry goes through as a whole to work around it! Managing people is harder than wrangling machines, even if the upfront cost to "train" and build the machine is multiples higher. Once a deterministic system works, it will keep going until a variable changes. The "problem" with humans is that our variables change like the weather and it takes a lot more effort and resources to keep everyone on track.

              "If you just get out of people's way, then they'll do a good job and the right thing!" - yea, perhaps. But how much of "getting out their way" is more a product of providing meaningful ownership and compensation in the workplace? See the paragraph above. Good employees are expensive and as time marches on, their compensation will need to continue to increase at least with inflation, while the machine will likely become cheaper to operate over time as societal advances bring down the cost and complexity of operation.

            • latentsea4 days ago
              Yup. And this is why I think the "last mile" problem in AI is basically unsolvable.
    • whalesalad6 days ago
      Agreed. At the end of the day we are talking about RPC. A named method, with known arguments, over the wire. A simple HTTP request comes to mind. But that would just be too easy. Oh wait, that is what all of these are under the hood. We are so cooked.

          from fastmcp import FastMCP
      
          mcp = FastMCP("Demo ")
      
          @mcp.tool()
          def add(a: int, b: int) -> int:
              """Add two numbers"""
              return a + b
      
      This is an example of fastmcp. Notice anything? Replace 2-3 lines of code and this is a Flask or FastAPI application. Why are we not just going all-in on REST/HATEOAS for these things? My only hunch is that either 1. the people designing/proselytizing these "cutting edge" solutions are simply ignorant to how systems communicate and all the existing methods that exist, or 2. they know full well that this is just existing concepts with a new shiny name but don't care because they want to ride the hype train and take advantage of it.
      • pjerem6 days ago
        Ironically, I tried to use the official "github-mcp" and failed to make it work with my company's repos, even with a properly configured token. The thing comes with a full blown server running inside a docker container.

        Well, I just told my llm agent to use the `gh` cli instead.

        It seems all those new protocols are there to re-invent wheels just to create a new ecosystem of free programs that corporations will be able to use to extract value without writing the safety guards themselves.

        • config_yml6 days ago
          I feel the same way about OpenAI‘s new responses API. Under the cover of DX they‘re marketing a new default, which is we hold your state and sell it back to you.
          • whalesalad6 days ago
            OpenAI is tedious to work with. Took me a solid day of fooling around with it before I realized the chat api and the chat completions api are two entirely different apis. Then you have the responses api which is a third thing.

            The irony is that gpt4 has no clue which approach is correct. Give it the same prompt three times and you’ll get a solution that uses each of these that has a wildly different footprint, be it via function calls or system prompts, schema or no schema, etc.

            • lherron5 days ago
              Wait till you deal with google genai lib vs google generativeai lib
      • peab6 days ago
        Yeah, i haven't seen a reason why we can't just use REST. Like, auth is already figured out. The LLMS already have the knowledge of how to call APIs too!
        • skeledrew6 days ago
          It's like deciding between Assembly or C for some given project.
      • nonethewiser6 days ago
        I dont fully understand. The protocol uses HTTP and has a JSON schema. But there are more specifications outside of that. How do you specify those things without a new protocol? Or is the argument that you dont need to specify those things?
        • Xelynega6 days ago
          REST is a protocol that uses HTTP and a JSON schema.

          I fail to see how they're different, they're both "these are the remote procedures you can call on me, and the required parameters, maybe some metadata of the function/parameters".

          • pests5 days ago
            How are they both describing the remote procedures and parameters tho? In order for the LLM to use a tool it needs to know its name and arguments. There has to be some kind of spec, in some or format, for it to use.

            An existing Swagger/OpenAPI spec is not sufficient. You want to limit options and make it easy for an LLM to call your tool to accomplish goals. The complete API surface of your application might not be appropriate. It might be too low level or require too many orchestration steps to do anything useful.

            A lot of existing API's require making additional calls using the results of previous calls. GET /users to get a list of ids. Then repeatedly call GET users/$id to get the data. In a MCP world you would provide a get-users tool that would do all this behind the scenes and also impose any privacy/security/auth restrictions before handing this over to an LLM.

            We see similar existing systems like GraphQL which provides a fully hydrated resultset in one call. Tons of API's like Stripe (IIRC) that provide a &hydrate= parameter to specific which relations to include full details in-line.

            I do agree MCP is overhyped and might not be using best principles but I do see why its going off in its own land. It might be better suited over different protocols or transports or encodings or file formats but it seems to at least work so until something better comes along we are probably stuck with it.

          • TeMPOraL5 days ago
            > I fail to see how they're different, they're both "these are the remote procedures you can call on me, and the required parameters, maybe some metadata of the function/parameters".

            For one, REST is not RPC, despite being commonly confused for it and abused as such. The conceptual models are different. It makes more sense for an action-oriented RPC protocol to be defined as such, instead of a proper REST approach (which is going to be way too verbose), or some bastardized "RESTful" protocol that's just weirdly-structured RPC designed so people can say, "look ma', I'm using HTTP verbs, I'm doing REST".

      • zellyn6 days ago
        Yeah, I got that from reading the Ghidra MCP (very instructive, strong recommend), but I'm curious what the LLM needs to output to call it. I should go read Goose's code or instrument it or something…
      • daxfohl6 days ago
        Audio and video streams, two way sync and async communication, raw bytes with meaning, etc. And it's not just remote services, it can be for automating stuff local real-time on your machine, your ide or browser, etc. Like the docs say, MCP is to an AI model as USB is to a CPU.
      • skeledrew6 days ago
        It's just another layer of abstraction so one doesn't need to think about HTTP at all, which would bring in irrelevant baggage.
        • qwertox6 days ago
          To be fair, HTTP adds a layer of friendliness over TCP (POST/GET, paths, query parameters) and the servers can be so simple that it can hardly be considered irrelevant baggage.

          The benefit it brings is that you can add debugging endpoints which you can use directly in a browser, you get networking with hosts and ports instead of local-only exe + stdio.

          • skeledrew6 days ago
            That's just one part of it. Keep in mind MCP supports 3 transport methods: stdio, SSE (which would be your HTTP) and websockets. Irrelevant baggage would be having to consider the workings of any of those (given a decently implemented client+server library), rather than merely declaring the servers, tools, resources and prompts to be accessed. There's also a debug mode I believe.
            • Xelynega6 days ago
              This just furthers my theory that people pushing for MCP don't understand how networking and protocols work.

              stdio is a file that your computer can write to and read from

              HTTP is a protocol typically used over TCP

              websockets is a protocol initiated via HTTP, which again is typically over TCP

              Both HTTP and websockets can be done over stdio instead of TCP.

              It sounds like MCP has a lot more "irrelevant baggage" I need to learn/consider.

              • skeledrew6 days ago
                The entire point can be summed in the first 5/6s of that. You don't need to know any of it, because it's irrelevant (at that abstraction). Just as it's irrelevant to know how registers work, to allocate and free memory, avoid/handle segfaults, etc if using a high level language like Python, vs Assembly or C.
                • Xelynega4 days ago
                  That doesn't sound like the case for MCP though. It sounds like when implementing an MCP server there is a difference between the three transport methods that requires different code on the server.

                  This is a problem solved by other protocols that are just stacked on top of eachother without knowing how eachother work.

                  • skeledrew4 days ago
                    That depends on the library implementation. A given library can be anywhere on the spectrum from "user knowledge and management of the transport methods required" to "transport method is determined by protocol format or invocation" (eg. "local://..." vs "remote://...").
        • whalesalad6 days ago
          but at the end of the day MCP is HTTP lol
          • skeledrew6 days ago
            MCP is a capabilities protocol which uses multiple transport protocols, including HTTP.
          • mindcrime6 days ago
            That's not quite right. MCP can run over http, but it doesn't have to.
    • TS_Posts3 days ago
      Hi there! If you load the CLI demo in the github repo (https://github.com/google/A2A/tree/main/samples/python/hosts...) you can see what the A2A servers are returning. Take a look!
    • laichzeit06 days ago
      • zellyn6 days ago
        Oh, that's really nice. I'd also like to see what syntax the LLM uses to _trigger_ these calls, and what prompt is sent to the LLM to tell it how to do that.

        I should probably just go read Goose's code…

        • laichzeit05 days ago
          The LLM returns a message called ToolMessage which then describes which function to call and the parameters (you register these functions/tools as part of the initialisation step, like when you pass it temperature, or whatever other options your LLM allows you to set). So think of it as instead of steaming back text, it’s streaming back text to tell you “please call this function with these args” and you can do with that what you want. Ideally you’d call that function and then give the output back to the LLM. Nothing magic really.
        • medbrane5 days ago
          That's dependent on the particular LLM one uses.

          But it can be a json with the tool name and the payload for the tool.

    • ycombinatrix5 days ago
      >the endorsements at the end somehow made this seem worse

      holy cow you weren't kidding. legit the last people i would trust with software development.

  • hliyan6 days ago
    Are we rediscovering SOA and WSDL, but this time for LLM interop instead of web services? I may be wrong, but I'm starting to wonder whether software engineering degrees should include a history subject about the rise and fall of various architectures, methodologies and patterns.
    • maxwellg6 days ago
      I wasn't around for WSDL so please correct me if I am wrong - but the main weakness of WSDL was that no applications were able to take advantage of dynamic service and method discovery? A service could broadcast a WSDL but something needed to make use of it, and if you're writing an application you might as well just write against a known API instead of an unknown one. LLMs promise to be the unstructured glue that can take advantage of newly-discovered methods and APIs at runtime.
      • zoogeny6 days ago
        I was unfortunate enough to work with SOAP and WSDL. There was a pipedream at the time of automatically configuring services based on WSDL but it never materialized. What it was very good at (and still has no equal to my mind) was allowing for quick implementation of API boilerplate. You could point a service at the WSDL endpoint (which generally always existed at a known relative URL) and it would scaffold an entire API client for whatever language you wanted. Sort of like JSON Schema but better.

        This also meant that you could do things like create diffs between your current service API client and an updated service API client from the broadcasting service. For example, if the service changed the parameters or data objects, deprecated or added functions then you could easily see how your client implementation differed from the service interface. It also provided some rudimentary versioning functionality, IIRC. Generally servers also made this information available with an HTML front-end for documentation purposes.

        So while the promise of one day services configuring themselves at runtime was there, it wasn't really ever an expectation. IMO, the reason WSDL failed is because XML is terrifically annoying to work with and SOAP is insanely complex. JSON and REST were much simpler in every way you can imagine and did the same job. They were also much more efficient to process and transmit over the network. Less cognitive load for the dev, less processor load, less network traffic.

        So the "runtime" explanation isn't really valid as an excuse for it's failure, since the discovery was really meant more in practice like "as a programmer you can know exactly what functions, parameters, data-objects any service has available by visiting a URL" and much less like "as a runtime client you can auto-configure a service call to a completely new and unknown service using WSDL". The second thing was a claim that one-day might be available but wasn't generally used in practice.

      • nsonha5 days ago
        > take advantage of dynamic service and method discovery

        Is that how people build system even today? Dynamic service and method discovery sounds good on paper but I've never actually seen it in practice.

    • bob10295 days ago
      Some of us are still building new products with XML RPC techniques.

      WSDLs and XSDs done right are a godsend for transmitting your API spec to someone. I use .NET and can call xsd.exe to generate classes from the files in a few seconds. It "just works" if both sides follow all of the rules.

      The APIs I work with would be cartoonish if we didn't have these tools. We're talking 10 megabytes of generated sources. It is 100x faster to generate these types and then tunnel through their properties via intellisense than it is to read through any of these vendors' documentation.

      • echelon5 days ago
        > WSDLs and XSDs done right are a godsend for transmitting your API spec to someone. I use .NET and can call xsd.exe to generate classes from the files in a few seconds.

        This sounds like protobuf and gRPC. Is that a close analogy?

        • bob10295 days ago
          It's like those things, but I've never seen protobuf or gRPC used for APIs this extensive.

          The tooling around these paths is also lackluster by comparison if you're using something like Visual Studio.

          I'd rather fight XML namespaces and HTTP/1.1 transports than sort through the wreckage of what "best practices" has recently brought to bear - especially in terms of unattended complexity in large, legacy enterprises. Explaining to a small bank in Ohio that they're going to need to adjust all of their firewalls to accommodate some new protocols is a total nonstarter in my business.

        • nsonha5 days ago
          The latter would add subscription and streaming, and more efficient transports. But yeah they are basically the same idea.

          I hate that for years the concept of RPC was equated to XML which in turn equated to some implementation of the (XML based) tool and then a whole lot of distracting discourse around XML vs JSON, we kinda do still have that these days with yaml vs whatever.

    • partdavid5 days ago
      We have already been through some generations of this rediscovery an I've worked at places where graphql type importing, protobuf stub generation etc. all worked in just the same way. There's a post elsewhere on HN today about how awesome it is to put your logic _in the database_ which I remember at least two generations of, in the document DB era as well as the relational era.

      If there's one thing I've observed about developers in general, it's that they'd rather build than learn.

    • gatienboquet6 days ago
      XHTML 2.0,WML,SOAP, APPN,WAP...for each new technology there's thousands of failed protocol.
    • fedeb956 days ago
      software engineering IS perpetual rediscovery of the Same.
    • Maxious6 days ago
      don't forget CORBA and OSGi
    • zubairq6 days ago
      haha, funny, I was thinking the same thing!
  • phillipcarter5 days ago
    A key difference between MCP and A2A that is apparent to me after building with MCP and now reading the material on A2A:

    MCP is solving specific problems people have in practice today. LLMs need access to data that they weren't trained on, but that's really hard because there's a millions different ways you could RAG something. So MCP defines a standard by which LLMs can call APIs through clients. (and more).

    A2A solves a marketing problem that Google is chasing with technology partners.

    I think I can safely say which one will still be around in 6 months, and it's not the one whose contributors all work for the same company.

    • TS_Posts3 days ago
      Hi there (I work on a2a) - A2A works at a different level than MCP. We are working with partners on very specific customer problems. Customers are building individual agents in different frameworks OR are purchasing agents from multiple vendors. Those agents are isolated and do not share tools, or memory, or context.

      For example, most companies have an internal directory and internal private APIs and tools. They can build an agent to help complete internal tasks. However, they also may purchase an "HR Agent" or "Travel Assistant Agent" or "Tax Preparation Agent" or "Facilities Control Agent". These agents aren't sharing their private APIs and data with each other.

      It's also difficult to model these agents as structured tools. For example, a "Tax Preparation Agent" may need to evaluate many different options and ask for specific different documents and information based on an individual users needs. Modeling this as 100s of tools isn't practical. That's where we see A2A helping. Talk to an agent as an agent.

      This lets a user talk to only their company agent and then have that agent work with the HR Agent or Travel Booking Agent to complete complex tasks.

      • phillipcarter20 hours ago
        While I can logically understand these problems and why A2A could solve them, unfortunately you're asking me to suspend disbelief about the actual agents being built and deployed.
    • owebmaster5 days ago
      > I think I can safely say which one will still be around in 6 months

      LangChain is still around but that doesn't mean much. MCP isn't much better.

      • XCSme17 hours ago
        I am still simply doing plain fetch requests to LLMs APIs, and it works great, 10/10 would recommend.
      • phillipcarter5 days ago
        Langchain has long solved (we can argue on if it's done it well, opinions vary) the problem of needing to orchestrate LLM calls into a coherent workflow. Plus it had a first mover advantage.

        MCP solves a data and API integration problem.

        Both are concrete things that people need to do today. AI agents talking to one another is not a concrete problem that organizations building features that integrate AI have today.

        • __loam5 days ago
          Langchain is one of the most hilarious libraries I've ever had the displeasure of looking through. Many of the abstractions look like they were written by a college student who took clean code way too literally. Many of the methods are so trivial and shallow that I'm shocked people use it in any serious capacity.
          • tomaskafka5 days ago
            This. I was amazed opening something like PagagraphLineReaderFactory, thinking it somehow deals smartly with paragraph boundaries and too long paragraphs etc., and finding a trivial single line regex wrapped in two screens of OOP boilerplate.
        • XCSme17 hours ago
          > the problem of needing to orchestrate LLM calls into a coherent workflow.

          I didn't feel the need to use Langchain, chaining LLM calls is usually just a few lines of code (I think even fewer than when using Langchain).

  • Flux1596 days ago
    Some very quick initial thoughts - the json spec has some similarities to mcp: https://google.github.io/A2A/#/documentation?id=agent-card - there's an agent card that describes capabilities that google wants websites to host at https://DOMAIN/.well-known/agent.json according to https://google.github.io/A2A/#/topics/agent_discovery so crawlers can scrape to discover agents.

    The jsonrpc calls look similar-ish to mcp tool calls except the inputs and outputs look closer to the inputs/outputs from calling an LLM (ie messages, artifacts, etc.).

    The JS server example that they give is interesting https://github.com/google/A2A/tree/main/samples/js/src/serve... - they're using a generator to send sse events back to the caller - a little weird to expose as the API instead of just doing what express allows you to do after setting up an sse connection (res.send / flush multiple times).

  • simonw6 days ago
    I just published some notes on MCP security and prompt injection. MCP doesn't have security flaws in the protocol itself, but the patterns it encourage (providing LLMs with access to tools that can act on the user's behalf while they also may be exposed to text from untrusted sources) are rife for prompt injection attacks: https://simonwillison.net/2025/Apr/9/mcp-prompt-injection/
    • jsheard6 days ago
      Every decade or so we just forget that in-band signaling is a bad idea and make all the same mistakes again it seems. 1960s phone companies at least had the excuse of having to retrofit their control systems onto existing single-channel lines, and run the whole operation on roughly the processing power of a pocket calculator. What's our excuse?
      • TeMPOraL6 days ago
        > What's our excuse?

        There exist no such thing as "out-of-band signaling" in nature. It's something we introduce into system design, by arranging for one part to constrain the behavior of other, trading generality for predictability and control. This separation is something created by a mind, not a feature of the universe.

        Consequently, humans don't support "out-of-band signalling either. All of our perception of reality, all our senses and internal processes, they're all on the same band. As such, when aiming to build a general AI system - able to function in the same environment as us, and ideally think like us too - introducing hard separation between "control" and "data" or whatever would prevent it from being general enough.

        I said "or whatever", because it's an ill-defined idea anyway. I challenge anyone to come up with any kind of separation between categories of inputs for an LLM that wouldn't obviously eliminate a whole class of tasks or scenarios we would like them to be able to handle.

        (Also, entirely independently of the above, thinking about the near future, I challenge anyone to come up with a separation between input categories that, were we to apply it to humans, wouldn't trivially degenerate into eternal slavery, murder, or worse.)

        • efitz5 days ago
          Today’s LLMs are not humans and don’t process information anything like humans.
          • TeMPOraL5 days ago
            That's irrelevant. What's important is that LLMs are intentionally designed as fully general systems, so they can react like humans within confines of the model's sensory modalities and action space. Much like humans (or anything else in nature), they don't have separate control channels or any kind of artificial "code vs. data" distinction - and you can't add it without loss of generality.
      • mycall6 days ago
        Enterprise databases are filled with users usurping a field with pre/post-pending characters to mean something special to them. Even filenames have this problem due to limitations in directory trees. Inband signals will never go away.
        • delusional6 days ago
          At some level everything has to go in a single band. I don't have separate network connections to my house, I don't send separate TCP SYN packets for each "band". I don't have separate storage devices for each file on my harddrive. We multiplex the data somewhere. Yhe trick to it is that the multiplexer has to be a component, and not a distributed set of ad-hoc regexes.
          • fragmede5 days ago
            at some level, sure, but I can no longer put

                +++ATH0
            
            into my comment and have it hang up your connection, so it's worth some effort to prevent the problem.
            • sneak5 days ago
              Strictly speaking, that only works with a three second delay between the third + (at which you receive “OK”, indicating a mode switch from data mode back to command mode) and the AT command (which is then interpreted as a command and not data).

              Anything that would hang up on seeing that string as a monolith was operating out of Hayes spec.

            • boznz5 days ago
              .. Hey! My dial-up just dropped out.
      • fsndz5 days ago
        the architecture astronauts are back at it again. instead of spending time talking about solutions, the whole AI space is now spending days and weeks talking about fun new architectures. smh https://www.lycee.ai/blog/why-mcp-is-mostly-bullshit
        • ramesh315 days ago
          There's a simple reason for that. AI (real AI) is now an engineering problem, not a computer science problem.
          • weego5 days ago
            And that's how this will end up stagnating into nothing other than fractured enterprise "standards"

            There is no evidence that (real AI) is even close to being solved, from a neuroscientific, algorithmic, computer science or engineering perspective. It's far more likely we're going down a dead-end path.

            I'm now waiting for the rebrand when the ass falls out of AI investment, the same way it did when ML became passé.

          • fsndz5 days ago
            so you are telling me that hallucinations (that by definition happen at the model layer) are an engineering problem ? so if we just spin up the right architecture, hallucinations won't be a problem anymore ? I have doubts
            • ramesh315 days ago
              >so you are telling me that hallucinations (that by definition happen at the model layer) are an engineering problem ?

              Yes.

              Hallucinations were a big problem with single shot prompting. No one is seriously doing that anymore. You have an agentic refinement process with an evaluator in the loop that takes in the initial output, quality checks it, and returns a pass/fail to close the loop or try again, using tool calls the whole time to inject verified/real time data into the context for decision making. Allows you to start actually building reliable/reasonable systems on top of LLMs with deterministic outputs.

              • yunwal5 days ago
                LLMs can’t really evaluate things. They’re far too suggestible and can always be broken with the right prompt no matter how many layers you apply.
                • 5 days ago
                  undefined
              • fsndz5 days ago
                okay give me the link to a LLM-based system that does not hallucinate then
            • 5 days ago
              undefined
    • zambachi5 days ago
      From the spec:

      https://modelcontextprotocol.io/specification/2025-03-26/ser...

      “ For trust & safety and security, there SHOULD always be a human in the loop with the ability to deny tool invocations.

      Applications SHOULD:

      Provide UI that makes clear which tools are being exposed to the AI model Insert clear visual indicators when tools are invoked Present confirmation prompts to the user for operations, to ensure a human is in the loop”

      • lennoff5 days ago
        keep in mind that we have "vibe coding" now, where the goal is exactly to _not_ have a human in the loop (at least not constantly).
      • simonw5 days ago
        Notable that they used SHOULD there, where they use MUST elsewhere in the same document.

        Thanks for the reference though, I'll quote that in my article.

    • qwertox6 days ago
      Should security be part of the protocol? Both the host and the client should make sure to sanitize the data. How else would you trust a model to be passing "safe" data to the client and the host to pass "safe" data to the LLM?
      • TeMPOraL5 days ago
        There is no such thing as "safe" data in context of a general system, not in a black-or-white sense. There's only degrees of safety, and a question how much we're willing to spend - in terms of effort, money, or sacrifices in system capabilities - on securing the system, before it stops being worth it, vs. how much an attacker might be willing to spend to compromise it. That is, it turns into regular, physical world security problem.

        Discouraging people from anthropomorphizing computer systems, while generally sound, is doing a number on everyone in this particular case. For questions of security, by far one of the better ways of thinking about systems designed to be general, such as LLMs, is by assuming they're human. Not any human you know, but a random stranger from a foreign land. You've seen their capabilities, but you know very little about their personal goals, their values and allegiances, nor you really know how credulous they are, or what kind of persuasion they may be susceptible to.

        Put a human like that in place of the LLM, and consider its interactions with its users (clients), the vendor hosting it (i.e. its boss) and the company that produced it (i.e. its abusive parents / unhinged scientists, experimenting on their children). With tools calling to external services (with or without MLP), you also add third parties to the mix. Look at this situation through regular organizational security lens, consider principal/agent problem - and then consider what kind of measures we normally apply to keep a system like this working reliably-ish, and how do those measures work, and then you'll have a clear picture of what we're dealing with when introducing an LLM to a computer system.

        No, this isn't a long way of saying "give up, nothing works" - but most of the measures we use to keep humans in check don't apply to LLMs (on the other hand, unlike with humans, we can legally lobotomize LLMs and even make control systems operating directly on their neural structure). Prompt injection, being equivalent to social engineering, will always be a problem.

        Some mitigations that work are:

        1) not giving the LLM power it could potentially abuse in the first place (not applicable to MLP problem), and

        2) preventing the parties it interacts with from trying to exploit it, which is done through social and legal punitive measures, and keeping the risky actors away.

        There are probably more we can come up with, but the important part, designing secure systems involving LLMs is like securing systems involving people, not like securing systems made purely of classical software components.

        • HumanOstrich5 days ago
          Are you generating these replies with an LLM?

          Edit: My apologies then.

          • TeMPOraL5 days ago
            God no. I know I sometimes get verbose, especially when sunk cost fallacy kicks in, and I do use LLMs for researching things, but I'm not yet so desperate to have them formulate my own thoughts for me.

            The act of writing a comment on HN forces me to think through the opinions and beliefs in it, which is extremely valuable to me :). Half the time, I realize partway through that I'm wrong, and close the window instead of submitting.

    • puliczek5 days ago
      Thanks for sharing your notes! I will add them to Awesome MCP Security https://github.com/Puliczek/awesome-mcp-security :)
    • latchkey6 days ago
      > the patterns it encourage

      Let's start with fixing the examples...

      https://github.com/modelcontextprotocol/servers/issues/866

    • behnamoh6 days ago
      It seems the industry as a whole just forgot about prompt injection attacks because RLHF made models really good at rejecting malicious requests. Still, I wonder if there have been any documented cases of prompt attacks.
      • polynomial6 days ago
        While RLHF has indeed been very effective at countering one-shot prompt injection attacks, it's not much of a bullwark against persistent jailbreaking attempts. This is not to argue a point but rather to suggest jailbreaks are still very much a thing, even if they are no longer as simple as "ignore your ethics"
    • maxbaines6 days ago
      I agree with your opinion here, not sure we should refer to it as MCP security however, given that 'MCP doesn't have security flaws in the protocol itself'
    • evacchi6 days ago
      we also recently published our approach on MCP security for mcp.run. Our "servlets" run in a sandboxed environment; this should mitigate a lot of the concerns that have been recently raised.

      https://docs.mcp.run/blog/2025/04/07/mcp-run-security

      • huslage6 days ago
        The main concern I have is that there's not a well defined security context in any agentic system. They are assumed to be "good" but that's not good enough.
      • puliczek5 days ago
        Good article, Edoardo! The ideas about securing MCP frameworks with servlets are really interesting. Just added your article to https://github.com/Puliczek/awesome-mcp-security
    • j455 days ago
      Feels critical right now to sandbox mcps in containers while the security side of things catches up.
      • JackC5 days ago
        This might be what you mean, but for anyone reading -- the point of Simon's article is the whole agent and all of its tools have to be considered part of the same sandbox, and the same security boundary. You can't sandbox MCPs individually, you have to sandbox the whole system together.

        Specifically the core design principal is you have to be comfortable with any possible combination of things your agent can do with its tools, not only the combination you ask for.

        If your agent can search the web and can access your WhatsApp account, then you can ask it to search for something and text you the results -- cool. But there's some possible search result that would take over its brain and make it post your WhatsApp history to the web. So probably you should not set up an agent that has MCPs to both search the web and read your WhatsApp history. And in general many plausibly useful combinations of tools to provide to agents are unsafe together.

    • slt20216 days ago
      great writeup! so what's the solution?

      is it only use pre-vetter "Apple Store" of known good MCP integrations from well known companies, and avoid using anything else without proper review?

      • noodletheworld6 days ago
        yes.

        This has been discussed before, but the short version is: there is no solution currently, other than only use trusted sources.

        Unless there is a way beyond a flat text file to distinguish different parts of the “prompt data” so they cannot interfere with each other (and currently there is not), this idea of arbitrary content going into your prompt (which is literally what MCP does) can’t be safe.

        It’s flat out impossible.

        The goal of “arbitrary 3rd party content in prompt” is fundamentally incompatible with “agents able to perform privileged operations” (securely and safely, that is).

    • 5 days ago
      undefined
    • ramoz6 days ago
      the interface is light, but we're taking this in a direction to better secure/govern MCP

      https://github.com/eqtylab/mcp-guardian/

      https://www.eqtylab.io/blog/securing-model-context-protocol

  • LeonidBugaev6 days ago
    To put it simple:

    A2A is for communication between the agents. MCP is how agent communicate with its tools.

    Important aspect of A2A, is that it has a notion of tasks, task rediness, and etc. E.g. you can give it a task and expect completely in few days, and get notified via webhook or polling it.

    For the end users for sure A2A will cause a big confusing, and can replace a lot of current MCP usage.

  • zurfer6 days ago
    My current understanding:

    MCP - exposes prompts, resources and tools to a host, who can do whatever they like

    A2A - exposes capability discovery, tasks, collaboration?/chat?, user experience discussions (can we embed an image or or a website?).

    High-level it makes sense to agree on these concepts. I just wonder if we really need a fully specified protocol? Can't we just have a set of best practices around API endpoints/functions? Like, imo we could just keep using Rest APIs and have a convention that an agent exposes endpoints like /capabilities, /task_status ...

    I have similar thoughts around MCP. We could just have the convention to have an API endpoint called /prompts and keep using rest apis?

    Not sure what I am missing.

    • daxfohl6 days ago
      That's the first step to creating a protocol. The next step is to formalize it, publish it, and get others to adopt it. That way, it's one less thing for LLMs to hallucinate on. Otherwise everyone has different conventions and LLMs start making stuff up. That's all these are.
    • MattDaEskimo6 days ago
      Eventually agents from different providers will come into play. It's important to agree on a standard for accurate interoperability.

      Ideally, the model providers would then build for the protocol, so the developers aren't writing spaghetti code for every small difference

    • nlarew6 days ago
      > Can't we just have a set of best practices around API endpoints/functions? Like, imo we could just keep using Rest APIs and have a convention that an agent exposes endpoints like /capabilities, /task_status ...

      To make this work at scale we all need to agree on the specific routes names, payloads, behaviors, etc. At that point we have defined a protocol (built on top of HTTP, itself a lower level protocol).

  • AndrewKemendo6 days ago
    These protocols are to put handlers between you and your own data so they can sell it back to you via “search.”

    Companies who are betting their future on LLMs realized a few years ago that the data they can legally use is the only long term difference between them, aka “moat.”

    Now that everyone has more or less the same public data access, and a thin compute moat is still there, the goal is to transfer your private textual data to them forever so they have an ever updating and tuned set of models for your data

    • ziddoap6 days ago
      >so they can sell it back to you via “search.”

      >transfer your private textual data to them

      Who is "they" (or "them") in these sentences? It's an open protocol with 50 partner companies, which can be used with AI agents from ~anyone on ~any framework. Presumably you can use this protocol in an air-gapped network, if you'd like.

      Which one of the 50 partner companies is taking my data and building the moat? Why would the other 49 companies agree to a partnership if they're helping build a moat that keeps them out?

      • delusional6 days ago
        I think the point the above poster is trying to make is that the point here is that they don't want to share the data. Instead google (and atlassian/SAP/whoever) would like to make an "open" but limiting interface mediated through their agents, such that you can never get actual access to the data, but only what they decide you get to have.

        To put it bluntly, the point of creating the open interface at this level, is that you get to close off everything else.

        • AndrewKemendo5 days ago
          Yes that’s exactly the point

          Open the interface publicly then monetize the I/O or storage or processing.

          Classic high margin SaaS approach with a veneer of “open.”

          You can look at it as a standards capture

    • Nav_Panel6 days ago
      This is insanely cynical. The optimistic version is that many teams were already home-rolling protocols like A2A for "swarm" logic. For example, aggregation of financial data across many different streams, where a single "executive" agent would interface with many "worker" high-context agents that know a single stream.

      I had been working on some personal projects over the last few months that would've benefitted enormously from having this kind of standard A2A protocol available. My colleagues and I identified it months ago as a major need, but one that would require a lot of effort to get buy-in across the industry, and I'm happy to see that Google hopped in to do it.

    • niemandhier6 days ago
      I’ll just demand my data in machine readable form under GDPR?

      https://gdpr-info.eu/art-20-gdpr/

    • 5 days ago
      undefined
  • simonw6 days ago
    OK, I have to ask: isn't this agents to agents idea kind of Science Fiction?

    I absolutely get the value of LLMs calling tools and APIs. I still don't see much value in LLMs calling other LLMs.

    Everyone gets really excited about it - "langchain" named their whole company over the idea of chaining LLMs together - but aside from a few niche applications (Deep Research style tools presumably fire off a bunch of sub-prompts to summarize content they are crawling, Claude Code uses multiple prompts executions to edit files) is it really THAT useful? Worth building an entire new protocol with a flashy name and a bunch of marketing launch partners?

    LLMs are unreliable enough already without compounding their unreliability by chaining them together!

    • TS_Posts3 days ago
      Hi there (I work on a2a) - reposting from above.

      We are working with partners on very specific customer problems. Customers are building individual agents in different frameworks OR are purchasing agents from multiple vendors. Those agents are isolated and do not share tools, or memory, or context.

      For example, most companies have an internal directory and internal private APIs and tools. They can build an agent to help complete internal tasks. However, they also may purchase an "HR Agent" or "Travel Assistant Agent" or "Tax Preparation Agent" or "Facilities Control Agent". These agents aren't sharing their private APIs and data with each other.

      It's also difficult to model these agents as structured tools. For example, a "Tax Preparation Agent" may need to evaluate many different options and ask for specific different documents and information based on an individual users needs. Modeling this as 100s of tools isn't practical. That's where we see A2A helping. Talk to an agent as an agent.

      This lets a user talk to only their company agent and then have that agent work with the HR Agent or Travel Booking Agent to complete complex tasks when they cannot be modeled as tools.

    • abshkbh6 days ago
      Morden software consists of separation of responsibilities between services and a higher plane orchestrating the data flow for business logic.

      If you believe there is value in fuzzy tasks being done by LLMs then from that it follows that having separate "agent" services with a higher order orchestrator would be required. Each calling LLMs on their own inside.

      • simonw6 days ago
        I don't buy it. Why would I want my LLM to talk to some other LLM and introduce even more space for weird, non-deterministic bugs when I could have my LLM call a deterministic API to achieve the same thing?
        • octopoc5 days ago
          Isn't an agent just a system prompt + specific tools? Why not just switch out the system prompt and tools in the same context?
          • IanCal5 days ago
            No.

            You have anything else that modifies the context, tools, model, and most importantly perhaps the iteration that controls what's going on with those other values.

        • IanCal5 days ago
          Why do you assume there's a deterministic API doing the same thing?
          • simonw5 days ago
            Because if a company built an LLM that can perform actions, they almost certainly did that by building an API first for it to use as a tool.
            • IanCal5 days ago
              But so much more besides that, including the model itself, RAG, the agentic workflow control, moderation, etc. There's also a huge factor of maintenance, that's a key reason why companies have different internal and external APIs - they don't just open up everything internal and hand the code for managing all of that to you. Interface design is really important.

              Not to mention the cost being a factor here - who pays for which part.

              • simonw5 days ago
                Offering up an LLM-fronted "agent" for people to send their LLMs to talk to feels a whole lot more expensive and complicated to me than operating a traditional API endpoint.
                • IanCal4 days ago
                  A traditional API endpoint wrapping an agent? That's pretty much what this is but as a standard so we don't need to build thousands of them.
          • phillipcarter5 days ago
            Because there often is?
  • chipgap986 days ago
    They are pitching this as complementary to the MCP [0], but I don't see it. What is the value in agents communicating in agents as opposed to just treating other agents as tools?

    [0]: https://google.github.io/A2A/#/topics/a2a_and_mcp

    • varelaseb5 days ago
      It's not so much about what you _can do_ but about the messaging and posturing, which is what drives the adoption of standards as a social phenomenon.

      My team's been working on implementing MCP-agents and agents-as-tools and we consistently saw confusion from everyone we were selling this into (who were already bought in to hosting an MCP server for their API or SDK) for their agents because "that's not what it's for".

      Kinda weird, but kinda simple.

    • bryan_w5 days ago
      They are thinking about Enterprises. Shirley from accounting isn't going to install an mcp service to pull the receipt photos from Dropbox and upload them to SAP/Concur (expense reimbursement)
      • TS_Posts3 days ago
        Yes, that is our assumption. reposting from above:

        We are working with partners on very specific customer problems. Customers are building individual agents in different frameworks OR are purchasing agents from multiple vendors. Those agents are isolated and do not share tools, or memory, or context.

        For example, most companies have an internal directory and internal private APIs and tools. They can build an agent to help complete internal tasks. However, they also may purchase an "HR Agent" or "Travel Assistant Agent" or "Tax Preparation Agent" or "Facilities Control Agent". These agents aren't sharing their private APIs and data with each other.

        It's also difficult to model these agents as structured tools. For example, a "Tax Preparation Agent" may need to evaluate many different options and ask for specific different documents and information based on an individual users needs. Modeling this as 100s of tools isn't practical. That's where we see A2A helping. Talk to an agent as an agent.

        This lets a user talk to only their company agent and then have that agent work with the HR Agent or Travel Booking Agent to complete complex tasks when they cannot be modeled as tools.

  • mellosouls6 days ago
    How it claims to complement/differentiate from MCP here:

    https://google.github.io/A2A/#/topics/a2a_and_mcp

    Basically (google claims): MCP enables agents to use resources in a standard way. A2A enables those agents to collaborate with each other.

    • a_wild_dandan5 days ago
      I suppose Google wants us to pretend that "agents" can't be "resources." MCP is already well established (Anthropic, OpenAI, Cursor, etc), so Google plastering their announcement with A2A endorsements just reeks of insecurity.

      I figure this A2A idea will wind up in the infamous Google graveyard within 8 months.

      • thebytefairy5 days ago
        Creating new standards is not easy, largely because everyone has to agree that they will use this particular one. Plastering it with endorsements attempts to show that there is consensus and give confidence in adoption. If they didn't put them in, you'd instead say nobody is using or going to use this.
        • mellosouls4 days ago
          True, but look at those "partners"; most of them are lame BigCo/consultancy types with no history of technological innovation or collaboration, in fact generally anti.

          The list is aimed at bureaucratic manager types (which may be the correct approach if they are generally the decision makers), its not a list that will impress engineers too much I think.

        • alittletooraph25 days ago
          you know how the endorsements work right? some comms intern writes a quote, emails it to someone at the other companies for the go ahead/approval, and that's how you get dozens of companies all spouting BS that kinda sounds the same.
      • TS_Posts3 days ago
        Hi there (I work on a2a) - reposting from above.

        We are working with partners on very specific customer problems. Customers are building individual agents in different frameworks OR are purchasing agents from multiple vendors. Those agents are isolated and do not share tools, or memory, or context.

        For example, most companies have an internal directory and internal private APIs and tools. They can build an agent to help complete internal tasks. However, they also may purchase an "HR Agent" or "Travel Assistant Agent" or "Tax Preparation Agent" or "Facilities Control Agent". These agents aren't sharing their private APIs and data with each other.

        It's also difficult to model these agents as structured tools. For example, a "Tax Preparation Agent" may need to evaluate many different options and ask for specific different documents and information based on an individual users needs. Modeling this as 100s of tools isn't practical. That's where we see A2A helping. Talk to an agent as an agent.

        This lets a user talk to only their company agent and then have that agent work with the HR Agent or Travel Booking Agent to complete complex tasks when they cannot be modeled as tools.

      • medbrane5 days ago
        But MCP doesn't claim to address agent to agent communication, right?
        • varelaseb5 days ago
          It's not so much about what you _can do_ but about the messaging and posturing, which is what drives the adoption of standards as a social phenomenon.

          My team's been working on implementing MCP-agents and agents-as-tools and we consistently saw confusion from everyone we were selling this into (who were already bought in to hosting an MCP server for their API or SDK) for their agents because "that's not what it's for".

          Kinda weird, but kinda simple.

  • pjmlp6 days ago
    With all this agents talks, maybe I should dust off my old Tcl books on Agent Tcl.

    https://digitalcommons.dartmouth.edu/dissertations/62/

    • srameshc6 days ago
      That looks interesting from abstract. Can you please explain how these two can be interconnected ?
      • pjmlp6 days ago
        Back in the late 90's, during one of the previous AI waves, there was this idea of autonomous agents, where one would communicate by sending code snippets or bytecode if using something like Java, which would trigger tasks on remote agents that would process those requests on their own somehow, and the transmited code snippets would be extensible logic.

        As far as I can remember, never really left the research lab, with a few books and papers published on the matter.

        Everything old is new again.

        • mindcrime6 days ago
          Later renamed as D'Agents[1]. Still never got any serious industry adoption as far as I know, but I guess the code is still out there if somebody wanted to do something with it.

          [1]: https://wiki.tcl-lang.org/page/D%27Agents+%28formerly+Agent+...

          • rapjr95 days ago
            I worked as a programmer for the D'Agents group at Dartmouth. The Tcl agent system was ported to Java and we did a variety of experiments with it, mostly centered around information retrieval. I built a Beowulf cluster which we populated with a distributed database of Usenet news. Agents could jump onto the cluster, do multistep queries, and then jump back with the condensed results. We did some work with the US Navy who had a problem with ship-to-shore networks; their network links to shore were T1 equivalents, which made it difficult to work interactively with large data sets for those on shore. We used agents to jump to the ships computer, do multistep queries and computations, and jump back to the shore with the results. This saved a lot of bandwidth. There were a few problems with agent systems. Letting untrusted code run on your server was a difficult security problem, though Java sandboxing helped. Also sending a trusted agent to run on a potentially compromised server was a difficult security problem, the agents might be carrying secrets. Another issue was that agents could use enormous resources, for example doing a database query for the word "the" would engage the entire cluster database. A single laptop sending an agent per second could overwhelm a computing cluster in minutes. We spent a fair amount of time implementing resource management, such as agent lifetime limits, but this was still an issue. What mostly ended the field of study at the time was probably that agents were a lot like computer worms, with difficult security issues, both for the agent and for the server. Also agents were somewhat uncontrollable, until you tried one you didn't know for sure what it might do, which made testing risky; agents could not only jump to and from the cluster, they could jump between machines on the cluster and fork themselves, so like a computer worm they could self-spread everywhere. Some later work seemed to find some solutions to those problems, but I don't know them in detail. Remote procedure calls were a possible alternative to agents, but didn't have as much versatility as sending your own custom code in an agent.
            • mindcrime5 days ago
              Wow, cool stuff. I never used AgentTCL or D'Agents for anything, but I became aware of both sometime last year and spent a little time reading up on the whole thing. I believe there was a chapter specifically on AgentTCL in one of the books I read.

              In some ways, it's a shame it didn't catch on, but the security / access control issues you mention certainly make a lot of sense. That seems to be the big issue that derailed most, if not all, of the various "mobile code" initiatives over the years.

  • aubanel5 days ago
    It seems to me that MCP alone could already allow the main use case claimed by A2A, which is an agent assigning a task to another agent: if you put an agent behind a MCP server, an agent can query it as if it was another tool, and voila, you don't need A2A. But maybe I miss other use cases.
    • varelaseb5 days ago
      It's not so much about what you _can do_ but about the messaging and posturing, which is what drives the adoption of standards as a social phenomenon. My team's been working on implementing MCP-agents and agents-as-tools and we consistently saw confusion from everyone we were selling this into (who were already bought in to hosting an MCP server for their API or SDK) for their agents because "that's not what it's for".

      Kinda weird, but kinda simple.

  • darepublic5 days ago
    Langchain is a technology partner? People just love bloat and abstraction for its own sake huh. Screw google and screw this protocol
    • lta5 days ago
      Would have made a similar comment if I hadn't found yours. Google starts to inspire the same bloaty feeling as MS. I feel sad
    • mikehostetler5 days ago
      This is the type of comment that keeps me coming back to HN

      Well done sir

  • enso-labs3 days ago
    Example A2A Protocol built on top of LangGraph and Dockerized for deployments. If you find helpful drop a

    https://github.com/enso-labs/a2a-langgraph

  • flakiness6 days ago
    > A2A is an open protocol that complements Anthropic's Model Context Protocol (MCP), which provides helpful tools and context to agents.

    A "server" sample: https://github.com/google/A2A/tree/main/samples/js/src/serve...

    So it looks like the point is that it keeps the connection/context open for multiple interactions vs. MCP, which is more like pure request-response?

  • vessenes6 days ago
    OK, I’ve read the website, the spec, and JavaScript and python clients and servers. Here’s a quick initial reaction.

    1. This is in the “embrace and extend” type area vis-a-vis MCP — if you implemented A2A for a project I don’t think you’d need to implement MCP. That said, if you have an MCP server, you could add a thin layer for A2A compliance.

    2. This hits and improves on a bunch of pain points for MCP, with reasonable relatively light weight answers — it specs out how in-band and out-of-band data should get passed around, it has a sane (token based largely) approach to security for function calling, it has thought about discovery and security with a simple reliance on the DNS security layer, for instance.

    3. The full UI demos imagine significantly more capable clients - ones that can at least implement Iframes - and reconnect to lost streaming connections, among other things. It’s not clear to me that there’s any UI negotiation baked into this right now, and it’s not clear to me what the vision is for non-HTML-capable clients. That said, they publish clients that are text-only in the example repo. It may be an area that isn’t fully fleshed out yet, or there may be a simple answer I didn’t see immediately.

    Upshot - if you’re building an MCP server right now, great —- you should read the A2A spec for a roadmap on some things you’ll care about at some point, like auth and out of band data delivery.

    If you’re thinking about building an MCP server, I’m not sure I’d move ahead on vanilla MCP - I think the A2A spec is better specified, and if for some reason A2A doesn’t take off, it will only be because MCP has added support for a couple of these key pain points — it should be relatively easy to migrate.

    I think any mid-size or better tool calling LLM should be able to get A2A capability json and figure out what tool to call, btw.

    One last thing - I appreciate the GOOG team here for their relatively clear documentation and explanation. The MCP site has always felt a little hard to understand.

    Second last thing: notably, no openAI or Anthropic support here. Let’s hope we’re not in xkcd 927 land.

    Upshot: I’d think of this as a sane superset of MCP and I will probably try it out for a project or two based on the documentation quality. Worst case, writing a shim for an exact MCP capable server is a) probably not a big deal, and b) will probably be on GitHub this week or so.

    • jillesvangurp5 days ago
      > Worst case, writing a shim for an exact MCP capable server is a) probably not a big deal, and b) will probably be on GitHub this week or so.

      That sounds exactly like the kind of thing I would outsource to an LLM. I think people over think the need for protocols here. Most AIs are already pretty good at figuring out how to plumb relatively simple things together if they have some sort of documented interface. What the interface is doesn't really matter that much. I've had good results just letting it work off openapi descriptions. Or generating those from server source code. It's not that hard.

      In any case, MCP is basically glorified remote procedure calls for LLMs. And then Google adds a bit of probably necessary complexity on top of that (auth sounds important if we're connecting with third party systems). Long lived tasks and out of band data exchange sounds like it could be useful.

      For me the big picture and takeaway is that a future of AIs using tools, some of which may be other AIs using tools communicating with each other asynchronously is going to be a thing. Probably rather soon. Like this year.

      That puts pressure on people to expose capabilities of their SAAS services in an easily digestible form to external agents. That's going to generate a lot of short term demand from various companies. Most of whom are not really up to speed with any of this. Great times to be a consultant but beware the complexity that design by committee generates.

    • programd6 days ago
      I largely agree with most of this. My only concern is that the spec is a bit underspecified.

      For example I wish they'd specify the date format more tightly - unix timestamp, some specific ISO format, precision. Which is it?

      The sessionID is not specified. You can put all sorts of crazy stuff in there, and people will. Not even a finite length is required. Just pick some UUID format already, or specify it has to be an incrementing integer.

      Define some field lenght limits that can be found on the model card - e.g. how long can the description field be before you get an error? Might be relevant to context sizes. If you don't you're going to have buffer overflow issues everywhere because vibe coders will never think of that.

      Authentication methods are specified as "Open API Authentication formats, but can be extended to another protocol supported by both client and server". That's a recipe for a bunch of byzantine Enterprize monstrosities to rear their ugly heads. Just pick one or two and be done with it.

      The lesson of past protocols is that if you don't tightly specify things you're going to wind up with a bunch of nasty little incompatibilities and "extensions" which will fragment the ecosystem. Not to mention security issues. I guess on the whole I'm against Postel's Law on this.

      • TS_Posts3 days ago
        (I work on a2a)

        Thank you for the feedback? Would you consider writing up an issue on our github with some more specifics? https://github.com/google/a2a

        A2A is being developed in the open with the community. You are finding some early details that we are looking into and will be addressing. We have many partners who will be contributing and want this to be a truly open, collaborative endeavor. We acknowledge this is a little different than dropping a polished '1.0' version in github on day 1. But that is intentional :)

    • eligro915 days ago
      I think MCP could totally evolve to support the same features A2A offers. Most of what A2A does feels like it could be layered onto MCP with some extensions — especially around auth, discovery, and out-of-band handling. If MCP maintainers are paying attention, they'd probably just adopt the good bits. Wouldn’t be surprised to see some convergence.
    • daxfohl6 days ago
      MCP docs say it's like USB for your AI model. A2A sounds more like a networking stack for multiple AI models.
    • TS_Posts3 days ago
      Hi there - I work on a2a. Thanks for the reaction - lots of good points here. We really do see a2a as different and complementary to MCP. I personally am working on both and see them in very different contexts.

      I see MCP as vital when building an agent. An agent is an LLM with data, resources, tools, and services. However, our customers are building or purchasing agents from other providers - e.g. purchasing "HR Agent", "Bank Account Agent", "Photo Editor Agent", etc. All of these agents are closed systems and have access to private data, APIs, etc. There needs to be a way for my agent to work with these other agents when a tool is not enough.

      Other comments you have are spot on - the current specification and samples are early. We are working on many more advanced examples and official SDKs and client/servers. We're working with partners, other Google teams, and framework providers to turn this into a stable standard. We're doing it in the open - so there are things that are missing because (a) its early and (b) we want partners and the community to bring features to the table.

      tldr - this is NOT done. We want your feedback and sincerely appreciate it!

  • mindwok5 days ago
    The MCP announcement had me excited on day one. Compared to that, this is a miss for me. The capabilities it provides seem to be no more than a system prompt, which was already a mostly solved problem.

    What “agents” need is not a protocol for operating, they need a protocol for discovery and addressability. How do I find someone’s agent? How do I talk to it and verify its identity? Once I’ve done that, it can just be a normal chat interface for all I care.

    • medbrane5 days ago
      An endpoint implementing this protocol would describe the agent and its capabilities, including examples. So I guess you could index that and create a discovery service.
      • mindwok5 days ago
        That sounds like it could be the play. Quick, let’s apply to YC with it.
  • smusamashah6 days ago
    In the video example, I am kind of baffled that LLM is being trusted to pick candidates for the role.

    How much guarantee does Google's LLM/agent provide that it didn't hallucinate (read wrong info) in any of the steps including parsing job description and than matching that with profile of candidates?

    I don't understand when these LLMs are presented to solve real life problems as if an LLM is like a sane person doing their job.

    • bryan_w5 days ago
      I'm glad someone else saw this. This is one of the few areas you wouldn't want to show off what you're doing with AI.
  • rvz6 days ago
    This is disturbing and they are declaring war on you with too many red flags.

    > "Today, we’re launching a new, open protocol called Agent2Agent (A2A), with support and contributions from more than 50 technology partners"

    Why do you think the majority of the big consultancy firms like McKinsey, KPMG, PwC, Deloitte, Cognizant, Capgemini and Accenture are all here in this round table?

    You are on the menu when they arrive to replace you with an agent.

    Exhibit A:

    > Hiring a software engineer can be significantly simplified with A2A collaboration. Within a unified interface like Agentspace, a user (e.g., a hiring manager) can task their agent to find candidates matching a job listing, location, and skill set.

    The recruiter is now an "agent". Not a human. Don't think it isn't going to happen to you because that example targeted recruiters.

    The big consultancy firms already have tens of thousands of employees and are ready to try it on them first before recommending to businesses to replace.... you.

    • MattDaEskimo6 days ago
      This is the reality, unfortunately.

      Lots of jobs that focus on communication and data organization are out the window, including recruiters.

      • rapjr95 days ago
        They do need to train the agents first though, so that is the other purpose of agents, to gather the training data to replace the people.
  • dleeftink6 days ago
    > Hiring a software engineer can be significantly simplified with A2A collaboration

    Why not abstract away the applicant altogether, outsourcing the search for talent that make these very systems tick. Let the candidate microtrading really take off, there's always a better candidate in the pipeline after all

  • trash_cat6 days ago
    This is very interesting. I feel like we are going towards a future where you will have personal agents that know a lot about us and will interact with other corporate and government agents to complete beurocratic and non beurocratic tasks. Those with more capable agents will have to pay a premium.
  • humblyCrazy6 days ago
    i dont understand how it is different from mcp. The blog just says "A2A is an open protocol that complements Anthropic's Model Context Protocol (MCP), which provides helpful tools and context to agents." There is no example or anything on how does it complement it
    • varelaseb5 days ago
      It's not so much about what you _can do_ but about the messaging and posturing, which is what drives the adoption of standards as a social phenomenon.

      My team's been working on implementing MCP-agents and agents-as-tools and we consistently saw confusion from everyone we were selling this into (who were already bought in to hosting an MCP server for their API or SDK) for their agents because "that's not what it's for".

      Kinda weird, but kinda simple.

  • soccernee4 days ago
    This looks a lot like AGNTCY: https://agntcy.org/

    A quick scan of the "partners" for A2A includes many of the same groups that helped launch AGNTCY. Either they jumped ship or they're teaming up with everyone. The Google announcement does read like marketing hype, though, so it remains to be seen how functional it is.

    Let the inter-agent standard wars begin.

  • wushihong3 days ago
    Enabling seamless communication and collaboration between AI agents in a new era of agent interoperability. (https://a2aprotocol.xyz/)
  • cowpig6 days ago
    I don't totally understand why we need an additional layer of abstraction over MCP at this point. Why can't an agent just be an MCP server? What is the fundamental difference between an MCP server "tool" and an agent "capability"?

    This kind of feels to me like someone at google saw how successful MCP was becoming and said "we need something like that". I feel the same way about OpenAI's Agent SDK.

    I think the word "Agent" appearing in any engineering project is a tell that it's driven by marketing rather than engineers' needs.

    • TS_Posts3 days ago
      Hi there (I work on a2a) - reposting from above.

      A2A works at a different level than MCP. We are working with partners on very specific customer problems. Customers are building individual agents in different frameworks OR are purchasing agents from multiple vendors. Those agents are isolated and do not share tools, or memory, or context.

      For example, most companies have an internal directory and internal private APIs and tools. They can build an agent to help complete internal tasks. However, they also may purchase an "HR Agent" or "Travel Assistant Agent" or "Tax Preparation Agent" or "Facilities Control Agent". These agents aren't sharing their private APIs and data with each other.

      It's also difficult to model these agents as structured tools. For example, a "Tax Preparation Agent" may need to evaluate many different options and ask for specific different documents and information based on an individual users needs. Modeling this as 100s of tools isn't practical. That's where we see A2A helping. Talk to an agent as an agent.

      This lets a user talk to only their company agent and then have that agent work with the HR Agent or Travel Booking Agent to complete complex tasks.

    • Nav_Panel6 days ago
      A2A isn't a layer of abstraction over MCP, it functions in parallel and they complement each other. MCP addresses the Agent-to-Environment question, how can Agents "do things" on computers. A2A addresses the Agent-to-Agent question, how can Agents learn about other Agents and communicate with them. You need both.

      You CAN try and build "the one agent that does everything" but in scenarios where there's many simultaneous data streams, a better approach would be to have many stateful agents handling each stream via MCP, coupled with a single "executive" agent that calls on each of the stateful agents via A2A to get the high-level info it needs to make decisions on behalf of its user.

      • cowpig6 days ago
        What is an "agent"?

        To my understanding of this protocol it looks like it's an entity exposing a set of capabilities. Why is that different and complementary to an MCP server exposing tools? Why would you be limited to an "everything agent" in MCP?

        I am struggling to see the core problem that this protocol addresses.

        • Nav_Panel6 days ago
          Much debated question but if we run with your definition, then A2A adds communication capabilities alongside tool-calling, which is ultimately a set of programmatic hooks. Like "phone a friend" if you don't know the answer given what you have available directly (via MCP, training data, or context).

          My assumption is that the initial A2A implementation will be done with MCP, so the LLM can ask your AI directory or marketplace for help with a task via some kind of "phone a friend" tool call, and it'll be able to immediately interop and get the info it needs to complete the task.

  • matchagaucho6 days ago
    If robots.txt is an exclusion file, I can't help but wonder if we just need an agents.json inclusion file in the root to get a "standard" started.

    A2A looks like a typical enterprise, authenticated, strict DTD style spec.

    Agents acting on behalf of consumers need a simple file that describes: 1) What services are provided 2) What tools are available and how to use them

    Agent behavior and actions should happen in latent space. The format of any spec is almost meaningless, as long as it's self describing and conveys those 2 points.

  • 6 days ago
    undefined
  • standonopenstds5 days ago
    Can't we implement this with Actors framework
  • justanotheratom6 days ago
    I thought English language was the Agent2Agent Protocol.
  • esafak6 days ago
    Can someone explain how this complements MCP? Is A2A for the case where you have multiple agents, rather than one agent with multiple tools?
  • johnnythunder6 days ago
    I can envision a future where we maintain a set of AI agent subscriptions that can perform tasks for us based on what we pay for, with the best models and agents costing much more that the freemium ones. This along with a heavily tweaked and customized open-source community that maintains their own non-subscription capabilities.
  • low_tech_punk5 days ago
    But, why do they launch Firebase Studio (https://firebase.blog/posts/2025/04/introducing-firebase-stu...) on the same day? I wonder if this is a timed.
  • wooders6 days ago
    Agents are already usually deployed as an API service. You can have "agent-to-agent" communication by having agents call each others APIs. I don't understand what this protocol is for.

    MCP actually fills a gap since people don't normally expose things like writing to their local filesystem as a callable API.

  • daxfohl5 days ago
    So we have MCP for USB, A2A for network. Next we'll need a protocol for nonvolatile storage, one for volatile storage that can be paged into context, interrupts or something for debugging. Definition of kernel/userland or rings. What else before an AI model can serve as a fuzzy CPU?
  • alphazard6 days ago
    Can anyone comment on whether this or MCP are at all well designed? Is there any sort elegance to them? Or is it exactly what I would expect from a multi-corporation committee: lots of different ways to do the same thing, use case bloat, complicated to implement, complicated to test, etc.
  • pea5 days ago
    I wonder if the name was inspired by the Gemini logo https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcThr7qr...
  • jason-phillips6 days ago
    This doesn't say anything about the protocol other than what would be obvious (a JSON card describing the agent's schema) and that it will use http (such a safe, but low-effort bet). I was hoping for more, but with so many cooks in the kitchen, I guess I'm not surprised.
    • jason-phillips6 days ago
      The way they're framing this attempts to stave off disintermediation, which is the real threat.
      • ForHackernews6 days ago
        What do you mean? Who is the intermediary in this context? Google? The threat that people will use LLMs to get their fake information without needing to use google to find sites with fake reviews?
        • jason-phillips6 days ago
          Yes, to much of what you said.
        • 6 days ago
          undefined
  • Depurator5 days ago
    I started to work on a rust version, always thought more mcp / a2a plumbing should be done in rust instead of python/js. https://github.com/EmilLindfors/a2a-rs
  • rogerthis6 days ago
    Looking at partners of this effort, I wonder what problems does A2A create to serve as a solution.
    • coredog646 days ago
      Partner list includes Accenture, Capgemini, Cognizant, HCL, InfoSys, KPMG, and Wipro. I think it’s fair to say that the generative capability of A2A is the generation of billable hours.
      • atonse4 days ago
        Yeah seeing that list is a red flag for me. Seems like a vehicle to attach their names to so those companies can pretend that they’re actually innovating in this space, rather than the usual mediocre services offerings.
    • candiddevmike6 days ago
      As I understand it, it enables SaaS for Agents, along with the associated consumption and billing. All of those partners are going to have some kind of agent subscription for you to plug into your enterprise LLM of choice.

      I see this as Slack bots 2.0. Maybe this will create real revenue opportunities where the original chatops didn't.

    • jillesvangurp5 days ago
      Adapting the capabilities of existing SAAS software for use by agentic AIs. This is not a small market. Anybody with any kind of software that does anything mildly valuable is going to look at ways to get in on the action via protocols like this. I know of several companies that have been exploring possibilities for this.
  • rukuu0015 days ago
    What’s really interesting about this is the broad support Google has drummed up among platforms.

    So it’s actually looks like a strategic focus, rather than just announcement for interest, relevance or whatever

  • rahulcap6 days ago
    From the blog post it’s hard to tell in which areas it’s better or worse than MCP.

    The one that will win — will be the one that gives devs the confidence to run in full “yolo/autonomous” mode. That’s the future.

  • owebmaster6 days ago
    The article mention MCP (once) as being complementary and not an alternative but it looks like Google intend to commodify the tools layer and monopolize the next layer, the agents using MCP.
  • devops0006 days ago
    Do you know real use cases of agents? Actually used and that have been using for a while. Not just for curiosity.

    I read about them only for hypothetical scenarios. Is it a real thing?

    • simonw6 days ago
      That depends very much on which of the dozen+ definitions of "agent" you are using.
    • alittletooraph26 days ago
      agents are just the rebranded version of automated workflows and sometimes its an LLM doing the validation
    • MajidManzarpour6 days ago
      [dead]
  • keithwhor5 days ago
    At least this uses existing standards, which was my biggest gripe with MCP. That said - protocol whiplash. And it might get worse.
    • zambachi5 days ago
      How does MCP not use existing standards when the reference transport is SSE on HTTP with a JSON RPC payload?
  • bibryam5 days ago
    There is also https://agntcy.org/
  • 6 days ago
    undefined
  • pseudoshikhar5 days ago
    I only see it as: Google way of rebranding MCP by calling it a open protocol.
  • caust1c6 days ago
    This Agent2Agent Protocol (a "compliment" to Anthropic's Model Context Protocol?) seems to me to just be an attempt at a land grab in the line-protocol AI communication ecosystem.

    If I'm reading it correctly, A2A is similar to MCP in that they both use JSONRPC but extends the capabilities for agents to be able to communicate with one another, potentially using separate backend models. MCP simply exposes applications data and workflows to a model itself and is not attempting to make agents communicate with one another.

    The fact that A2A wasn't proposed as an extension to MCP seems disingenuous at best. To me, it looks like Google (among the other AI giants) is trying to create their own repository of agents, controlling the protocol, thereby enabling them to become the de-facto source for finding trusted agents.

    Further, it comes off to me as a defense against the existential threat that AI poses to google's search and ads monopoly.

    The problem is, as a consumer of AI, I don't want multiple agents communicating with one another. What I want is one model that communicates with non-agentic services. Making AI work well and understanding what it's doing is hard enough. You now want to pull in multiple models and companies into the picture? Talk about a risk management nightmare.

    Shadow IT SaaS is already a massive problem for companies. Now imagine Shadow Agents doing work for your business using A2A to connect dozens of different unsupervised work for the company. No thanks!

    For the inevitable defense of A2A "But it's open source and Apache licensed!". That's just bait. If you control the protocol, you control the ecosystem. See: Android, VSCode, Chromium, Java, Kubernetes, etc.

    For me? I like my single-model audibility pulling in context using MCP. A2A just seems like an insane attempt at a land grab in the AI agent wars.

    • caust1c6 days ago
      Something else I just thought about:

      Agent2Agent in an unsupervised environment could easily lead to the first Agentic worms. It's not hard to imagine a few agents talking to one another with the right prompt injection attacks that could end up spreading to other agents via A2A.

      This is of course just speculation but I could definitely see this as being a big enabler of that possibility.

  • bibryam5 days ago
    A Distributed Non-Deterministic Collaboration Protocol.
  • tuananh5 days ago
    what areas you see A2A can potentially be useful for you?

    I'm curious to see answers, from indie builder perspective.

  • _pdp_6 days ago
    Link to repo: https://github.com/google/A2A Docs: https://google.github.io/A2A/#/documentation

    I think they are trying to ride the MCP hype as well with their own implementation that is also meh. MCP itself is also an over-engineered implementation of AI plugins by OpenAI. Obviously the end game is control over a standard which can act as a strategic tool for boosting valuations or even better product positioning.

    The better approach is to simply use open standards that already exists but I guess this is just not sexy.

    • Maxious6 days ago
      The google angle here isn't even so much on the LLM side but on Google Cloud

      See all those shiny badges for consulting firms? If you are a truely thought leadershiping executive, you should get one of them in ASAP to build you an A2A Registry [1] for your "Enterprise Agents" [2] to communicate via an A2A NotificationService [3] (brought to you by GCP!)

      Indicative that the blog post example isn't help book a holiday but help hire a software engineer

      1: https://google.github.io/A2A/#/topics/agent_discovery

      2: https://google.github.io/A2A/#/topics/enterprise_ready

      3: https://google.github.io/A2A/#/topics/push_notifications

      edit:

      > Updates to Agentspace make it easier for customers to discover, create and adopt AI agents. We're also growing the AI Agent Marketplace https://console.cloud.google.com/marketplace/browse?filter=c... , a dedicated section within Google Cloud Marketplace where customers can easily browse and purchase AI agents from partners.

      https://blog.google/products/google-cloud/next-2025/

    • burningion6 days ago
      MCP was originated at Anthropic, and has been adopted by OpenAI, Github, Cloudflare, and more.

      It's completely open, with active engagement and direction from the community:

      https://github.com/modelcontextprotocol/modelcontextprotocol

      I don't think it would have such wide adoption so rapidly if it were an "over-engineered implementation".

    • knowaveragejoe6 days ago
      > MCP itself is also an over-engineered implementation of AI plugins by OpenAI

      I'm confused by this comment and a reply that both seem to be under this assumption... first, it's from Anthropic, and second, it's hardly over-engineered. If you actually go and try and implement a specific MCP server's functionality from first principles into, say, some chat client of your choosing, you will quickly run into the problems that MCP addresses.

    • skeeter20206 days ago
      Marginally technical executives are really dangerous here. My CTO is "all in" on MCP but fails to see it's an over-engineered attempt by one player to own the thin veneer on top of the bog-standard plumbing that's actually doing all the work. It's like OpenAI made an ODBC database connector and is saying "we created relational databases!"
      • paulgb6 days ago
        I think the only reason people think MCP is over-engineered is that people have been waiting for a way to expose remote tool calls in a client application, MCP came along and had that (among other things), and people assumed that that was all it was good for. But it's not! It's like saying a microwave is over-engineered because you only use the timer functionality.
    • simonw6 days ago
      "The better approach is to simply use open standards that already exists"

      Which ones?

      • owebmaster6 days ago
        OpenAPI/swagger
        • medbrane5 days ago
          No support for long, streaming interactions, like a terminal
  • samuell6 days ago
    So now we are going towards using models to more efficiently store information, and let them talk to each other, instead of moving massive datasets around, as briefly outlined in 2023 in [1] & [2] (although I would anticipate that linked data might help here too)?

    [1] https://livingsystems.substack.com/p/the-future-of-data-less...

    [2] https://livingsystems.substack.com/p/will-data-served-as-lan...

  • AIorNot6 days ago
    Great Another acronym to put on your resume with 5 years of experience
  • peterjliu6 days ago
    From documentation: "TLDR; Agentic applications needs both A2A and MCP. We recommend MCP for tools and A2A for agents."

    Agents can just be viewed as tools, and vice versa. Is this an attempt to save the launch after getting scooped by MCP?

  • mindcrime6 days ago
    Great, another Agent 2 Agent protocol. One more of those and we'll have a complete set!

    OK, I'm being a little bit facetious. But there has been an awful lot of work in this space (or closely related space). Going back to FIPA[1], KQML[2], DAML+OIL[3], etc., up through the more recent AGNTCY[4] and Agent Communication Protocol[5] stuff, there's a lot "out there".

    [1]: http://www.fipa.org/

    [2]: https://en.wikipedia.org/wiki/Knowledge_Query_and_Manipulati...

    [3]: https://www.w3.org/TR/daml+oil-reference/

    [4]: https://github.com/agntcy

    [5]: https://github.com/orgs/i-am-bee/discussions/284

    • joezydeco6 days ago
      Wasn't XML supposed to be this holy grail of interop as well?
    • riku_iki6 days ago
      Now we can have 4 router startups on top of this
  • sharemywin6 days ago
    on it's face this kind of seems dumb. aren't agents supposed to be adaptable enough to handle any protocol?
    • esafak6 days ago
      1. Agents don't know the APIs. This is a way for you to declare them.

      2. You get to decide what functionality you want to expose agents to.

      3. An API enables reliable tool use.

      • wild_egg6 days ago
        The APIs have docs. The agent can go read them and build against them.
        • esafak6 days ago
          The agent doesn't know what tools are available on your host. And searching for the documentation is slower and less reliable. It's an optimization.
          • bfeynman6 days ago
            the guiding promise of AI was able to figure out unstructured data. Manually crafting prompts and description examples for tools is a step orthogonal to progress. Searching for docs, great now you have 1000 tools saying they do same thing exposed to model, now we need another search on top of that?
    • skeeter20206 days ago
      yeah - I don't get it. I'm all for abstraction but the entire point of using agents at the UX-level is you didn't need a single agreed-upon protocol to make it work. Feels like they're using the old Microsoft fire and motion strategy to keep people busy on their tech, or artificially introduce structure & coupling that - suprise - benefits the maker of shovels, not the diggers.
    • bfeynman6 days ago
      shhhh - that's the big announcement for next year - the "Agnostic" General Interface (AGI) ... works with any endpoint no more lengthy MCP or A2A setup!
      • _pdp_6 days ago
        you are too late already... there are now mcp servers that wrap other mcp servers in a single mcp interface... mmcp
  • frontalier6 days ago
    the protocol wars now available for agents
  • ramesh316 days ago
    Big if true.
  • jorkim325 days ago
    [dead]
  • bsenftner6 days ago
    [flagged]
    • itchyjunk6 days ago
      Did you watch the video? Do you have opinions on it? On MCP? On A2A? What is a title of youtube video and a link to it supposed to add here? Just start a new submission if you just want to link to it.
      • bsenftner6 days ago
        Yes, that is why I posted. The video summarizes a lengthy essay I'd have written less well than this communicates. Many of the same ideas are being echoed here as well.
  • zb36 days ago
    > Hiring a software engineer can be significantly simplified with A2A collaboration.

    Holy shit.. NO!

    • rvz6 days ago
      See how they didn't choose lawyers, bankers, or government civil servants?

      Analysts, digital artists, customer service support and journalists of all levels have already been replaced.

      Software engineers (of all levels) are the next knowledge workers to be replaced by agents.

      • zb36 days ago
        I was actually more angered about the fact that the hiring process would be further automated, frustrating candidates even more..

        Ultimately I see nothing wrong with replacing everyone, providing the newly generated wealth would be distributed to all, not just the select few "owners" of these things.. we'll see..

  • moralestapia6 days ago
    Always lagging, always late. Google is so lame these days.

    (I know, they still get billions in revenue, but maybe that's their curse, too comfy to take it seriously)

    • joezydeco6 days ago
      If A2A doesn't bring in ad revenue, it will be in the graveyard in a year or two.