211 points by yompal 6 days ago | 22 comments
  • thomasfromcdnjs 6 days ago
    I've been following agents.json for a little while. I think it has legs, and would love to see some protocol win this space soon.

    Will be interesting to see where the stateful/stateless conversation goes. My gut tells me MCP and "something" (agents.json perhaps) will co-exist. My reasoning is purely organisational: MCP focuses on a lot more, and their ability to make a slimmed-down stateless protocol might be nigh impossible.

    ---

    Furthermore, if agents.json wants to win as a protocol through early adoption, the docs need to be far easier to grok. An example should be immediately viewable, with the schema close by. The pitch should be very succinct, and the fields in the schema need the same clarity at first glance. Maybe a tool that anyone can paste their OpenAPI schema into, which gets passed to an LLM to generate a first pass of what their agents.json could look like.
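A first pass of that tool could even be mechanical before any LLM refinement. A sketch, with hypothetical function and field names (not the official agents.json schema):

```python
# Hypothetical first-pass converter: walks an OpenAPI spec (as a dict)
# and drafts a skeletal agents.json-style structure. Field names here
# are illustrative only; an LLM pass could then fill in outcome flows.
def draft_agents_json(openapi: dict) -> dict:
    actions = []
    for path, methods in openapi.get("paths", {}).items():
        for method, op in methods.items():
            actions.append({
                "id": op.get("operationId", f"{method}_{path}"),
                "description": op.get("summary", ""),
                "method": method.upper(),
                "path": path,
            })
    return {
        "agentsJson": "draft",
        "info": {"title": openapi.get("info", {}).get("title", "")},
        "actions": actions,
        "flows": [],  # outcome-based flows left for a human or LLM to author
    }

spec = {
    "info": {"title": "Demo API"},
    "paths": {"/emails": {"post": {"operationId": "send_email",
                                   "summary": "Send an email"}}},
}
draft = draft_agents_json(spec)
print(draft["actions"][0]["id"])  # → send_email
```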

    ---

    The OpenAPI <> agents.json portability is a nice touch, but might actually be overkill. OpenAPI is popular but it never actually took over the market imo. If there is added complexity to agents.json because of this, I'd really question whether it is worth supporting. They don't have to be 100% interoperable; custom converters could manage partial support.

    ---

    A lot of people are using agentic IDEs now. It would be nice if agents.json shared a snippet people can drop straight into Windsurf/Cursor, with instructions on how to use it, where to find docs, and how to pull a list and/or search the registry.

    • yompal 6 days ago
      1) Thanks for being a part of the journey! We also want something that works for us as agent developers. We didn't feel like anything else was addressing this problem and felt like we had to do it ourselves.

      We love feedback! This is our first time doing OSS. I agree - MCP and agents.json are not mutually exclusive at all. They solve for different clients.

      2) Agreed. Something we're investing in soon is a generic SDK that can run any valid agents.json. That means the docs might be getting a revamp soon too.

      3) While many API services may not use OpenAPI, their docs pages often do! For example, readme.com lets you export your REST API docs as OpenAPI. As we add more types of action sources, agents.json won't be 1:1 with OpenAPI. In that way, we left the future of agents.json extensible.

      4) Great idea! I think this would be so useful

  • winkle 6 days ago
    In what ways is the agents.json file different from an OpenAPI Arazzo specification? Is it more native for LLM use? Looking at the example, I'm seeing similar concepts between them.
    • yompal 6 days ago
      We've been in touch with Arazzo after we learned of the similarities. The long-term goal is to be aligned with Arazzo. However, the tooling around Arazzo isn't there today and we think it might take a while. agents.json is meant to be more native to LLMs, since Arazzo serves other use cases than LLMs.

      To be more specific, we're planning to support multiple types of sources alongside REST APIs, like internal SDKs, GraphQL, gRPC, etc.

      • winkle 6 days ago
        Thanks, that's helpful. I agree there are many other sources besides REST APIs where this would be helpful. Outside of that, I would be interested in understanding the ways where Arazzo takes a broader approach and doesn't really fit an LLM use case.
        • yompal 6 days ago
          It's not that Arazzo can't work for LLMs, just that it's not the primary use case. We want to add LLM-enabled transformations between linkages. Arazzo, which has to serve other use cases like API workflow testing and guided docs experiences, may not be incentivized to support these types of features.
  • melvinmelih 6 days ago
    This is interesting but why do you make it so hard to view the actual agents.json file? After clicking around in the registry (https://wild-card.ai/registry) for 10 minutes I still haven't found one example.
  • pritambarhate 6 days ago
    • yompal 6 days ago
      Yup. The specification is under Apache 2.0 and the Python package is under AGPL.

      The full licenses can be found here: https://docs.wild-card.ai/about/licenses

      • cruffle_duffle 5 days ago
        AGPL is a great way to prevent people from adopting it.
        • yompal 5 days ago
          This SDK isn't meant to be restrictive. This can be implemented into other open-source frameworks as a plugin (e.g. BrowserUse, Mastra, LangChain, CrewAI, ...). We just don't want someone like AWS to flip this into a proxy service.

          That said, what do you think is the right license for something like this? This is our first time doing OSS.

  • bberenberg 6 days ago
    Cool idea but seems to be dead on arrival due to licensing. Would love to have the team explain how anyone can possibly adopt their AGPL package into their product.
    • yompal 6 days ago
      A couple people have mentioned some relevant things in this thread. This SDK isn't meant to be restrictive. This can be implemented into other open-source frameworks as a plugin (e.g. BrowserUse, Mastra, LangChain, CrewAI, ...). We just don't want someone like AWS to flip this into a proxy service.

      Some have asked us to host a version of the agents.json SDK. We're torn on this because we want to make it easier for people to develop with agents.json but acting as a proxy isn't appealing to us and many of the developers we've talked to.

      That said, what do you think is the right license for something like this? This is our first time doing OSS.

    • favorited 6 days ago
      Sounds like the spec is Apache 2.0. The Python package is AGPLv3, but the vast majority of the code in there looks to be codegen from OpenAPI specs. I'd imagine someone could create their own implementation without too much headache, though I'm just making an educated guess.
    • froggertoaster 6 days ago
      Echoing this - is there a commercialization play you're hoping to make?
  • luke-stanley 6 days ago
    This couldn't be much simpler, which is a good thing. Well done!

    BTW I might have found a bug in the info property title in the spec: "MUST provide the title of the `agents.json` specification. This title serves as a human-readable name for the specification."

    • yompal 6 days ago
      It now reads "MUST provide the title of the `agents.json` specification file. ..." Thanks for the heads up!
  • sidhusmart 6 days ago
    How does this compare to llms.txt? I think that’s also emerging as a sort of standard to let LLMs understand APIs. I guess agents.json does a better job of packaging/structural understanding of different endpoints?
    • yompal 6 days ago
      llms.txt is a great standard for making website content more readable to LLMs, but it doesn’t address the challenges of taking structured actions. While llms.txt helps LLMs retrieve and interpret information, agents.json enables them to execute multi-step workflows reliably.
  • alooPotato 6 days ago
    Can someone help me understand why agents can't just use APIs documented by an OpenAPI spec? Seems to work well in my own testing but I'm sure I'm missing something.
    • yompal 6 days ago
      LLMs do well with outcome-described tools and APIs are written as resource-based atomic actions. By describing an API as a collection of outcomes, LLMs don't need to re-reason each time an action needs to be taken.

      Also, when an OpenAPI spec gets sufficiently big, you face a needle-in-the-haystack problem: https://arxiv.org/abs/2407.01437.

      • alooPotato a day ago
        This was insightful. The re-reasoning part makes sense. So basically, MCP should be a dumbed-down version of your API that accomplishes a few tasks really well. It has to be a subset of what your API can do, because if it weren't, it would either end up just as generic as your API or the combinatorial explosion of possible use cases would be too large.
      • thomasfromcdnjs 5 days ago
        Does anyone have any pro tips for large tool collections? (mine are getting fat)

        I plan on doing the two-layered system mentioned earlier, where the first layer of tool calls is as slim as it can be, and a second layer provides more in-depth tool documentation.

        And/or chunking tools, creating embeddings, and using RAG.
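For the RAG route, a toy sketch of retrieval over tool descriptions. A real setup would use embedding vectors; stdlib bag-of-words cosine similarity stands in here, and all tool names are illustrative:

```python
from collections import Counter
import math

# Toy semantic search over tool descriptions. A real implementation
# would embed descriptions with a model; word-count cosine similarity
# is only a stand-in to show the retrieval shape.
TOOLS = {
    "send_email": "send an email message to a recipient address",
    "create_invoice": "create a billing invoice for a customer",
    "list_threads": "list recent email threads in the inbox",
}

def vectorize(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def search_tools(query: str, k: int = 2) -> list[str]:
    # Rank tools by similarity to the query; expose only the top k.
    q = vectorize(query)
    ranked = sorted(TOOLS, key=lambda n: cosine(q, vectorize(TOOLS[n])),
                    reverse=True)
    return ranked[:k]

print(search_tools("send an email"))  # → ['send_email', 'list_threads']
```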

        • yompal 5 days ago
          Funnily enough, a search tool to solve this problem was our product going into YC. Now it’s a part of what we do with wild-card.ai and agents.json. I’d love to extend the tool search functionality for all the tools in your belt

          It took us a decently long time to get the search quality good. Just a heads up in case you want to implement this yourself

      • ahamilton454 6 days ago
        I can agree this is a huge problem with large APIs. We're doing it with Twilio's API and it's rough.
        • paradite 6 days ago
          Thinking from the retrieval perspective, would it make sense to have two layers?

          First layer just describes, at a high level, the tools available and what they do, and makes the model pick or route the request (via a system prompt, or a small model).

          Second layer implements the actual function calling or OpenAPI, which would then give the model more details on the params and structures of the request.
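A minimal sketch of that two-layer shape, with illustrative names (not any particular framework's API): the first layer exposes one-line summaries for routing, and full parameter docs are pulled only for the tool the model picks.

```python
# Two-layer tool exposure (illustrative): slim summaries for routing,
# full docs fetched on demand for the selected tool only.
TOOLS = {
    "send_email": {
        "summary": "Send an email to a recipient",
        "docs": "params: to (str), subject (str), body (str). Returns message id.",
    },
    "search_contacts": {
        "summary": "Search the contact list by name",
        "docs": "params: query (str), limit (int, default 10). Returns matches.",
    },
}

def list_tools() -> list[str]:
    # First layer: as slim as possible, one line per tool.
    return [f"{name}: {t['summary']}" for name, t in TOOLS.items()]

def describe_tool(name: str) -> str:
    # Second layer: full parameter docs for the tool the model picked.
    return TOOLS[name]["docs"]

print(list_tools())
print(describe_tool("send_email"))
```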

          • yompal 6 days ago
            That approach does a lot better, but LLMs still have a positional bias problem baked into the transformer architecture (https://arxiv.org/html/2406.07791v1). This is where the LLM favors selecting information earlier in the prompt over information later, which is unfortunate for tool selection accuracy.

            Since 2 steps are required anyways, might as well use a dedicated semantic search for tools like in agents.json.

            • paradite 5 days ago
              Interesting. This is the first time I'm hearing about intrinsic positional bias for LLMs. I had some intuition on this but nothing concrete.
  • sandinmyjoints 6 days ago
    Looks cool! How is it similar/different from MCP?
    • yompal 6 days ago
      Thanks! MCP is taking a stateful approach, where every client maintains a 1:1 connection with a server. This means that for each user/client connected to your platform, you'd need a dedicated MCP server. We're used to writing software that interfaces with APIs as stateless and deployment-agnostic; agents.json keeps it that way.

      For example, you can write a web-based chatbot that uses agents.json to interface with APIs. To do the same with MCP, you'd spin up a separate lambda or deployed MCP server for each user.
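To make the stateless point concrete, a toy sketch (names are illustrative): every call carries its own auth token and arguments, so any replica or fresh lambda invocation can serve any user, with no per-user server process.

```python
# Toy stateless handler: no per-user session object lives on the server.
# Auth and parameters arrive with every request, so any replica (or a
# fresh lambda invocation) can serve any user's call.
def handle_tool_call(user_token: str, tool: str, args: dict) -> dict:
    # Validate and dispatch purely from the request contents.
    if not user_token.startswith("token-"):
        return {"status": "unauthorized"}
    return {"status": "ok", "tool": tool, "args": args}

# Two different users hit the same handler; nothing is shared between calls.
r1 = handle_tool_call("token-alice", "send_email", {"to": "bob@example.com"})
r2 = handle_tool_call("token-carol", "send_email", {"to": "dan@example.com"})
print(r1["status"], r2["status"])  # → ok ok
```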

      • Blahah 5 days ago
        Hmm, but the OpenAPI MCP server just exposes the commands for each API it knows about to the agent - then the MCP server makes stateless API calls. Problem solved.

        MCP isn't stateful in terms of connection with a downstream API server - only with a local bit of code that translates LLM tool calls to something else. There's no inherent coupling.

        Looking at your get_tools(), it does essentially the same thing as the OpenAPI MCP server but without being an MCP server - meaning now there are two standards where before there was one, and your tool is most usefully imagined as a local stdio MCP server.

        edit: https://github.com/snaggle-ai/openapi-mcp-server

        • Blahah 5 days ago
          Having said that... I think OpenAPI is exactly the right direction - it comes free with most ways of building an API (or easily can), and once you have it, interfaces are just a transform away.

          And reflecting on your approach, perhaps it's quite a good way to on-board lots of organisations that might have been hesitant or left behind otherwise.

  • ahamilton454 6 days ago
    Hey this looks pretty interesting. I saw that you guys are a YC company, how do you intend on making money deploying a protocol?
    • yompal 6 days ago
      We think the main opportunity is to charge API providers, to get white-gloved onto this standard.
      • dkdcwashere 5 days ago
        can’t an AI just take an OpenAPI spec and throw it into this standard?

        if it’s an open source standard, who’s paying for that?

  • jimmySixDOF 5 days ago
    agents.json and llms.txt files could become simple de facto standards like robots.txt, and I hope something takes off. CrewAI, Letta/MemGPT, and OpenHands/OpenDevin all have some kind of swarm orchestration tie-in points, but there is nothing beyond borders. MCP is probably the most flexible approach and could play nice with agents.json, which I would like to see stay open source all the way.

    Also, a little bit under the radar but the Netlify team are onto something interesting thinking about Agent Experience (AX) which both Anthropic and the wildcard folks should consider closely:

    https://agentexperience.ax/

    • yompal 5 days ago
      We see the same vision you've described, and it'll take thoughtful execution and distribution on our part to continue to make agents.json the standard for tool use.

      I've been in touch with the AX team at Netlify since the article was first published. A lot of very relatable philosophies in that article that stuck with me.

  • ripped_britches 6 days ago
    Can you explain what the LLM sees in your Gmail example instead of the chain?

    And how is that translation layer created? Do you write it yourself for whatever you need? Or is the idea for API owners to provide this?

    I’m sure the details are there if I dig deeper but I just read the readme and this post.

    • yompal 6 days ago
      We work with API providers to write this file. It takes a non-negligible amount of thought to put together since we're encoding which outcomes would be useful to enable/disable for an LLM. The standard is open so anyone can write and read an agents.json. It's mainly intended for API providers to write.
  • codenote 6 days ago
    Our team was just exploring an approach to building an AI Agent Builder by making API calls via LLM, so this is very helpful. I'll give it a try!
    • yompal 6 days ago
      Interesting! Reach out if you want to chat about it :)
  • barbazoo 5 days ago
    I think this is useful but at the same time I don't really know what this is solving. Is this supposed to be a one stop shop to integrate APIs with tools? So this is similar to pydantic.ai in that sense?

    It would be helpful to have a simple example of the problem / pain point we're solving here.

    • yompal 5 days ago
      Fair question. Pydantic.ai looks like a wrapper around the client and closer to an agent framework. agents.json is not that.

      agents.json makes an existing API format, like OpenAPI, interpretable to LLMs. This is done through tool selection and workflow creation.

      For example, someone here mentioned the large Twilio API was hard to manage with an LLM. We could write a Twilio agents.json to bundle API calls into outcome-based workflows, and create a searchable collection that lets us get higher accuracy on tool selection.
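A sketch of what such bundling could look like. The structure is illustrative, not the official agents.json schema: one outcome-facing flow chains two atomic calls, and a tiny interpreter resolves each step against stand-in endpoints.

```python
# Illustrative only: an outcome-based flow bundling two atomic API calls
# behind a single LLM-facing action, plus a tiny interpreter to run it.
FLOW = {
    "id": "notify_customer",
    "description": "Look up a customer's phone number and text them a message",
    "steps": [
        {"call": "GET /customers", "output": "customer"},
        {"call": "POST /messages"},
    ],
}

def run_flow(flow: dict, api: dict, message: str) -> dict:
    ctx = {"message": message}
    result = {}
    for step in flow["steps"]:
        result = api[step["call"]](ctx)   # execute the step
        if "output" in step:
            ctx[step["output"]] = result  # pass data to later steps
    return result

# Stand-in endpoints in place of real (e.g. Twilio-style) API calls:
fake_api = {
    "GET /customers": lambda ctx: {"phone": "+15550100"},
    "POST /messages": lambda ctx: {"sent_to": ctx["customer"]["phone"],
                                   "body": ctx["message"]},
}
print(run_flow(FLOW, fake_api, "Your order shipped!"))
```

The LLM only ever sees the single `notify_customer` outcome, not the two underlying endpoints.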

  • tsunego 6 days ago
    I like your approach but it's not clear to me whether it's MCP compatible

    Anthropic just announced an MCP registry.

    • yompal 6 days ago
      MCP is great for stateful systems, where shared context is a benefit, but this is a rarity. Developers generally write clients to use APIs in a stateless way, and we want to help this majority of users.

      That said, agents.json is not mutually exclusive to MCP. I can see a future where an MCP for agents.json is created to access any API.

      • winkle 6 days ago
        I think MCP being stateful is only true in the short term. Adding stateless support is currently at the top of their roadmap for the protocol: https://modelcontextprotocol.io/development/roadmap.
        • yompal 6 days ago
          We've been keeping a close eye on this topic: https://github.com/modelcontextprotocol/specification/discus...

          The options being considered to do this are:

          1) maintain a session token mapping to the state -- which is still statefulness

          2) create a separate stateless MCP protocol and reimplement -- agents.json is already the stateless protocol

          3) reimplement every MCP as stateless and abandon the existing stateful MCP initiative

          As you can tell, we're not bullish on any of these.

      • esafak 6 days ago
        Isn't the idea to create a data lake to better inform models? Why are you bearish on stateful protocols? Could you elaborate on your thinking?
        • yompal 6 days ago
          Bearish on everyone needing to be on stateful protocols. Developers should have the option to have their state managed internal to their application.
          • esafak 6 days ago
            Can't you simply use a stateful protocol and not report any state? Doesn't statefulness subsume statelessness? I am beginning to wrap my head around this space, so excuse the naive questions.
            • yompal 6 days ago
              No worries! In other cases, I believe you would be right. But splitting up context is not optional with MCP. Part of the whole state will always reside in an external entity.
  • TZubiri 6 days ago
    Is this Agents.json file automatically generated or is one supposed to invest thousands of lines into it?
    • yompal 6 days ago
      The end developer doesn't even need to see or read the agents.json file. It's a means for transparency and meant to be implemented by the API provider. Tooling to make creating an agents.json easier is on our roadmap. We have a process internally where we use a validator to guide creating an agents.json.
      • TZubiri 6 days ago
        So, the API provider, like Stripe, is supposed to publish a second API?

        And then the "end developer" who is going to be making a chatbot/agent, is supposed to use that to make a chatbot?

        Why does the plan involve there being multiple third party developers to make n products per provider? If the plan is to have third parties be creative and combine, say, Stripe with Google Ads, then how is a second API for LLMs useful.

        I'm not seeing the vision here. I've seen something similar in a project where a guy wanted LLM developers to use his API for better browsing websites. If your plan involves:

        1- Bigger players than you implementing your protocol
        2- Everybody else doing the work.

        It's just obviously not going to work and you need to rethink your place in the food chain.

        • yompal 6 days ago
          We're grateful that bigger players like Resend, Alpaca, etc do want to implement the protocol. The problem is honestly onboarding them fast enough. That's one of the main areas we're going to build out in the next few weeks. Until then, we're writing every agents.json.

          If you check out wild-card.ai and create your own collection, you'll find that it's actually really easy to develop with. As a developer, you never have to look at an agents.json if you don't want to.

          • doomroot 6 days ago
            The Resend API has around 10 endpoints.
  • linux_devil 5 days ago
    Pardon my ignorance, but how is this different from MCP servers and having a supervisor agent select and execute the right MCP tool?
    • yompal 5 days ago
      Not ignorant at all! This is our favorite question. MCP is taking a stateful approach, where every client maintains a 1:1 connection with a server. This means that for each user/client connected to your platform, you'd need a dedicated MCP server. We're used to writing software that interfaces with APIs as stateless and deployment-agnostic; agents.json keeps it that way.

      For example, you can write a web-based chatbot that uses agents.json to interface with APIs. To do the same with MCP, you'd spin up a separate lambda-based or local-process MCP server for each user.

      • linux_devil 4 days ago
        All the best, let me check this out.
  • dshah 4 days ago
    Thanks for noticing my post, building agents.json and open sourcing it.

    This is why the Internet is awesome.

  • jeffrsch 5 days ago
    I've been down this road with OpenPlugin. It's all technically feasible - we did it successfully. The question is: so what? If the new models can zero-shot the API call and fix issues with long responses, boot parameters, lookup fields, etc., what's your business model?
    • yompal 5 days ago
      The tail of the problem is quite long. Even if the average model is perfect at these things, do we want them to re-reason each time there's an impasse of outcomes? Often, the outcomes we want to achieve have well traversed flows anyways and we can just encode that.

      In fact, I'm looking forward to the day that models are better at this so we can generate agents.json automatically and self-heal with RL.

      On the business model, ¯\_(ツ)_/¯. We don't charge developers, anyways

      • jeffrsch 5 days ago
        Cool. As long as you know this is 'just for fun'.