Will be interesting to see where the stateful/stateless conversation goes. My gut tells me MCP and "something" (agents.json perhaps) will co-exist. My reasoning is purely organisational: MCP focuses on a lot more, and their ability to make a slimmed-down stateless protocol might be nigh impossible.
---
Furthermore, if agents.json wants to win as a protocol through early adoption, the docs need to be far easier to grok. An example should be immediately viewable, with the schema close by. The pitch should be very succinct, and the fields in the schema need the same clarity at first glance. Maybe a tool that anyone can paste their OpenAPI schema into, which gets passed to an LLM to generate a first pass of what their agents.json could look like.
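A rough sketch of that converter idea, assuming the OpenAI Python SDK; the model name, prompt, and truncation are placeholders, and the output would still need a human pass against the real agents.json schema:

```python
# Hypothetical OpenAPI -> agents.json first-pass generator (illustrative only).
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def draft_agents_json(openapi_spec: dict) -> str:
    """Ask an LLM for a first-pass agents.json; the result still needs review."""
    prompt = (
        "Given this OpenAPI spec, draft an agents.json that groups related "
        "endpoints into outcome-based flows. Return only JSON.\n\n"
        + json.dumps(openapi_spec)[:100_000]  # naive truncation for huge specs
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    with open("openapi.json") as f:
        print(draft_agents_json(json.load(f)))
```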
---
The OpenAPI <> agents.json portability is a nice touch, but might actually be overkill. OpenAPI is popular, but it never actually took over the market imo. If there is added complexity in agents.json because of this, I'd really question whether it is worth supporting. They don't have to be 100% interoperable; custom converters could manage partial support.
---
A lot of people are using agentic IDEs now. It would be nice if agents.json shared a snippet with instructions on how to use it, where to find the docs, and how to pull a list and/or search the registry, that people can just drop straight into Windsurf/Cursor.
We love feedback! This is our first time doing OSS. 1) I agree - MCP and agents.json are not mutually exclusive at all. They solve for different clients.
2) Agreed. Something we're investing in soon is a generic SDK that can run any valid agents.json. That means the docs might be getting a revamp soon too.
3) While many API services may not use OpenAPI, their docs pages often do! For example, readme.com lets you export your REST API docs as OpenAPI. As we add more types of action sources, agents.json won't be 1:1 with OpenAPI. In that way, we've left agents.json extensible for the future.
4) Great idea! I think this would be so useful
To be more specific, we're planning to support multiple types of sources alongside REST APIs, like internal SDKs, GraphQL, gRPC, etc.
EDIT: updated
AGPL? https://github.com/wild-card-ai/agents-json/blob/master/LICE...
The full licenses can be found here: https://docs.wild-card.ai/about/licenses
Some have asked us to host a version of the agents.json SDK. We're torn on this because we want to make it easier for people to develop with agents.json, but acting as a proxy isn't appealing to us or to many of the developers we've talked to.
That said, what do you think is the right license for something like this? This is our first time doing OSS.
BTW I might have found a bug in the info property title in the spec: "MUST provide the title of the `agents.json` specification. This title serves as a human-readable name for the specification."
Also, when an OpenAPI spec gets sufficiently big, you face a needle-in-a-haystack problem: https://arxiv.org/abs/2407.01437.
Plan on doing the two-layered system mentioned earlier, where the first layer of tool calls is as slim as it can be, with a second layer for more in-depth tool documentation.
And/or chunking the tools, creating embeddings, and using RAG.
It took us a decently long time to get the search quality good. Just a heads up in case you want to implement this yourself.
The first layer just describes, at a high level, the tools available and what they do, and makes the model pick or route the request (via system prompt, or a small model).
The second layer implements the actual function calling or OpenAPI, which then gives the model more details on the params and structure of the request.
Since two steps are required anyway, you might as well use a dedicated semantic search for tools, like in agents.json.
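A minimal sketch of that two-layer routing, with `llm` as a stand-in callable and hand-written schemas rather than anything pulled from a real spec:

```python
import json

# Layer 1 catalog: one slim line per tool, cheap to keep in context.
TOOL_SUMMARIES = {
    "send_sms": "Send a text message to a phone number",
    "list_messages": "List previously sent messages",
}

# Layer 2 detail: full parameter schemas, only shown once a tool is chosen.
FULL_SCHEMAS = {
    "send_sms": {
        "type": "object",
        "properties": {"to": {"type": "string"}, "body": {"type": "string"}},
        "required": ["to", "body"],
    },
    "list_messages": {"type": "object", "properties": {}},
}

def route(user_request: str, llm) -> dict:
    """`llm` is any callable that takes a prompt string and returns text."""
    # Layer 1: pick a tool from the slim catalog.
    catalog = "\n".join(f"- {name}: {desc}" for name, desc in TOOL_SUMMARIES.items())
    tool_name = llm(f"Pick one tool name (name only) for: {user_request}\n{catalog}").strip()

    # Layer 2: expose the detailed schema and have the model fill in arguments.
    schema = FULL_SCHEMAS[tool_name]
    raw_args = llm(f"Return JSON arguments matching {json.dumps(schema)} for: {user_request}")
    return {"tool": tool_name, "arguments": json.loads(raw_args)}
```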
For example, you can write a web-based chatbot that uses agents.json to interface with APIs. To do the same with MCP, you'd spin up a separate lambda or deployed MCP server for each user.
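To make the stateless point concrete, a bare-bones sketch under assumed names (the file name, the `flows` key, and `select_and_execute` are hypothetical stand-ins for the spec and an SDK): every request carries its own credentials, so one deployment serves all users with no per-user server or session.

```python
import json
from flask import Flask, request, jsonify

app = Flask(__name__)

with open("twilio.agents.json") as f:  # hypothetical file; read-only, parsed once
    AGENTS_SPEC = json.load(f)

def select_and_execute(spec: dict, message: str, api_key: str) -> dict:
    """Stand-in for agents.json tool search + execution; illustrative only."""
    return {"echo": message, "flows_available": len(spec.get("flows", []))}

@app.post("/chat")
def chat():
    payload = request.get_json()
    # The caller's credentials arrive with the request; nothing is kept server-side.
    result = select_and_execute(AGENTS_SPEC, payload["message"], payload["api_key"])
    return jsonify(result)
```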
MCP isn't stateful in terms of connection with a downstream API server - only with a local bit of code that translates LLM tool calls to something else. There's no inherent coupling.
Looking at your get_tools(), it does essentially the same thing as the OpenAPI MVP server but without being an MCP server - meaning now there are two standards where before there was one, and your tool is most usefully imagined as a local stdio MCP server.
And reflecting on your approach, perhaps it's quite a good way to on-board lots of organisations that might have been hesitant or left behind otherwise.
If it’s an open source standard, who’s paying for that?
Also, a little bit under the radar, but the Netlify team are onto something interesting with their thinking about Agent Experience (AX), which both Anthropic and the wildcard folks should consider closely:
I've been in touch with the AX team at Netlify since the article was first published. A lot of very relatable philosophies in that article that stuck with me.
And how is that translation layer created? Do you write it yourself for whatever you need? Or is the idea for API owners to provide this?
I’m sure the details are there if I dig deeper but I just read the readme and this post.
It would be helpful to have a simple example of the problem / pain point we're solving here.
agents.json makes an existing API format, like OpenAPI, interpretable to LLMs. This is done through tool selection and workflow creation.
For example, someone here mentioned the large Twilio API was hard to manage with an LLM. We could write a Twilio agents.json to bundle API calls into outcome-based workflows, and create a searchable collection that lets us get higher accuracy on tool selection.
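Purely to illustrate the "outcome-based workflow" idea, here is a guess at what bundling two Twilio calls into one flow could look like; the field names and endpoint paths are illustrative, not the actual agents.json schema:

```python
# Illustrative sketch only -- not the official agents.json format.
twilio_flow_sketch = {
    "id": "send_sms_and_confirm",
    "title": "Send an SMS and confirm delivery",
    "description": "Send a message, then check its delivery status.",
    "actions": [
        {"id": "send", "operation": "POST /Accounts/{AccountSid}/Messages.json"},
        {"id": "check", "operation": "GET /Accounts/{AccountSid}/Messages/{Sid}.json"},
    ],
    # Wire the message sid returned by "send" into the "check" call.
    "links": [{"from": "send.response.sid", "to": "check.parameters.Sid"}],
}
```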
Anthropic just announced an MCP registry.
That said, agents.json is not mutually exclusive to MCP. I can see a future where an MCP for agents.json is created to access any API.
The options being considered to do this are:
1) maintain a session token mapping to the state -- which is still statefulness (see the sketch after this list)
2) create a separate stateless MCP protocol and reimplement -- agents.json is already the stateless protocol
3) reimplement every MCP as stateless and abandon the existing stateful MCP initiative
As you can tell, we're not bullish on any of these.
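For what it's worth, a bare-bones illustration of why option 1 doesn't buy statelessness: the server still owns a session store, so scaling out or restarting still needs sticky routing or shared storage.

```python
import uuid

# In-memory session store: token -> per-connection state. This is the state
# that option 1 merely relabels rather than removes.
SESSIONS: dict[str, dict] = {}

def open_session() -> str:
    token = str(uuid.uuid4())
    SESSIONS[token] = {"initialized": False, "subscriptions": []}
    return token

def handle_request(token: str, message: dict) -> dict:
    state = SESSIONS[token]  # fails if another replica (without this map) gets the request
    state["initialized"] = True
    return {"ok": True, "echo": message, "state": state}
```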
And then the "end developer" who is going to be making a chatbot/agent is supposed to use that to make a chatbot?
Why does the plan involve there being multiple third-party developers making n products per provider? If the plan is to have third parties be creative and combine, say, Stripe with Google Ads, then how is a second API for LLMs useful?
I'm not seeing the vision here. I've seen something similar in a project where a guy wanted LLM developers to use his API for better website browsing. If your plan involves:
1- Bigger players than you implementing your protocol
2- Everybody else doing the work.
It's just obviously not going to work and you need to rethink your place in the food chain.
If you check out wild-card.ai and create your own collection, you'll find that it's actually really easy to develop with. As a developer, you never have to look at an agents.json if you don't want to.
For example, you can write a web-based chatbot that uses agents.json to interface with APIs. To do the same with MCP, you'd spin up a separate lambda or local-process MCP server for each user.
This is why the Internet is awesome.
In fact, I'm looking forward to the day that models are better at this so we can generate agents.json automatically and self-heal with RL.
On the business model, ¯\_(ツ)_/¯. We don't charge developers, anyway.