The real opportunity with Agent Skills isn't just packaging prompts. It's providing a mechanism that enables a clean split: LLM as the control plane (planning, choosing tools, handling ambiguous steps) and code or sub-agents as the data/execution plane (fetching, parsing, transforming, simulating, or executing NL steps in a separate context).
This requires well-defined input/output contracts and a composition model. I opened a discussion on whether Agent Skills should support this kind of composability:
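To make the control-plane/execution-plane split concrete, here's a minimal sketch of the kind of contract I mean. This is just an illustration, not any existing API; every name (Step, fetch_page, the registry, the plan format) is made up:

```python
# Hypothetical sketch of a control-plane/execution-plane split: the LLM only
# decides *which* step to run and with what arguments; the step itself is
# deterministic code with a declared input/output contract.
from dataclasses import dataclass
from typing import Any, Callable, Dict

@dataclass
class Step:
    name: str
    description: str  # what the LLM reads when planning
    run: Callable[[Dict[str, Any]], Dict[str, Any]]  # deterministic execution plane

def fetch_page(args: Dict[str, Any]) -> Dict[str, Any]:
    # placeholder body; real code would do the HTTP fetch
    return {"html": f"<html>...{args['url']}...</html>"}

def extract_rows(args: Dict[str, Any]) -> Dict[str, Any]:
    # placeholder body; real code would parse args["html"]
    return {"rows": []}

REGISTRY = {s.name: s for s in [
    Step("fetch_page", "Download a URL and return its HTML", fetch_page),
    Step("extract_rows", "Parse table rows out of HTML", extract_rows),
]}

# The LLM (control plane) emits a plan like this; the host just executes it.
plan = [("fetch_page", {"url": "https://example.com/prices"}),
        ("extract_rows", {"html": None})]

state: Dict[str, Any] = {}
for step_name, args in plan:
    args = {k: state.get(k, v) for k, v in args.items()}  # wire prior outputs into inputs
    state.update(REGISTRY[step_name].run(args))
```

The point is that the model never touches the data itself; it only chooses steps whose inputs and outputs are typed and checkable.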
Also, in writing, going strictly top to bottom has its disadvantages. It makes sense to emulate the human writing process and work in passes, fleshing things out and, conversely, summarizing as you go.
Current LLMs can brute-force these things through emulation/observation/mimicry, but the results aren't as good as doing it the right way. Not only would I like to see "skills" but also "processes", where you create a well-defined order in which tasks are accomplished in sequence: repeatable templates, with variables set for replacement.
You can do this with Gemini commands and extensions.
https://cloud.google.com/blog/topics/developers-practitioner...
The template would define the output more than the process, and I imagine it working more recursively.
Say we are building a piece of journalism. First pass, do these things, second pass build more coherent topic sentences, third pass build an introduction.
Right now, the way that models write from top to bottom, the introduction paragraph seems to inform the body, and then the body is just a stretched out version of the intro. Whereas how it should work is the body is written and then condensed into topic sentences and introductions.
I find myself having to baby models: "we are going to do this, let's do the first one. OK, now let's do the second one, OK, now the third one. You forgot the instructions, let's revise with the parameters you were given initially. Now let's put it all together."
I'm babbling; I just think these interfaces need a better way to define "let's write paragraph 4 first, followed by blah blah" to better structure the order in which they tackle tasks.
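For what it's worth, here's a rough sketch of what I imagine a "process" could look like: an ordered list of passes, each a prompt template with variables filled in before it goes to the model. Every name here is hypothetical, and call_model stands in for whatever actually talks to the LLM:

```python
# Hypothetical "process" definition: ordered passes, each a prompt template
# with {variables} substituted before it is sent to the model.
PROCESS = [
    ("draft_body",
     "Write the body paragraphs about {topic}, one claim per paragraph."),
    ("topic_sentences",
     "Rewrite each paragraph below so it opens with a clear topic sentence:\n\n{draft}"),
    ("introduction",
     "Write a one-paragraph introduction for the body below, then output the "
     "introduction followed by the body:\n\n{draft}"),
]

def run_process(call_model, topic: str) -> str:
    """call_model is whatever function sends a prompt to your LLM and returns text."""
    draft = ""
    for name, template in PROCESS:
        prompt = template.format(topic=topic, draft=draft)
        draft = call_model(prompt)  # each pass sees the previous pass's output
    return draft
```

That way the body gets written first and the intro is condensed from it, instead of the other way around.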
> Parsing a known HTML structure
In most cases, HTML structures that are being parsed aren't known. If they're known, you control them, and you don't need to parse them in the first place. If they're someone else's, who knows when they'll change, or under what condition they're different.
But really, I don't see the stuff you're talking about happening in prod for non-one-off use cases. I see LLMs used in prod exactly for data where you don't know in advance what its shape will be, and there's an enormous amount of such cases. If the same logic is needed every time, of course you don't have an LLM execute that logic; you have the LLM write a deterministic script.
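Something like this, i.e. the one-off deterministic script you'd have the LLM write once the structure is known. The URL and selectors are made up, and it assumes requests and bs4 are installed:

```python
# One-off deterministic parsing script; rerunnable without an LLM in the loop.
import requests
from bs4 import BeautifulSoup

resp = requests.get("https://example.com/pricing")  # hypothetical page
soup = BeautifulSoup(resp.text, "html.parser")

rows = []
for tr in soup.select("table.pricing tr")[1:]:  # skip header row
    cells = [td.get_text(strip=True) for td in tr.select("td")]
    if cells:
        rows.append(cells)

print(rows)
```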
Of course this requires substantial buy-in from application owners (create the vocabulary) and users (agree to expose and share the sentences they generate), but the results would be worth it.
Additionally, I can't even get Claude or Codex to reliably use the prompt and simple rules (use this command to compile) in an agents.md or whatever required markdown file is needed. Why would I assume they will reliably handle skills prompts spread about a codebase?
I've even seen tool usage deteriorate while it's thinking and self-commanding through its output to, say, read code from a file. Sometimes it uses tail, while other times it gets confused by the output and then writes a basic Python program to parse lines and strings from the same file, effectively getting the same output as before. How bizarre!
IIUC their most recent arc focuses on prompt optimization [0], where you optimize using DSPy and an optimization algorithm, GEPA [1], with relative weights on different things like errors, token usage, and complexity.
[0] https://docs.boundaryml.com/guide/baml-advanced/prompt-optim... [1] https://github.com/gepa-ai/gepa?tab=readme-ov-file
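To be clear, this is not the actual DSPy/GEPA API, just the shape of the idea as I understand it: a scalar objective that trades off errors, token usage, and prompt complexity with relative weights, which the optimizer then tries to maximize over candidate prompts:

```python
# Made-up weights and field names; only meant to illustrate a weighted objective.
WEIGHTS = {"errors": -1.0, "tokens": -0.001, "complexity": -0.05}

def score(candidate_prompt: str, eval_results: list[dict]) -> float:
    errors = sum(1 for r in eval_results if not r["correct"])
    tokens = sum(r["tokens_used"] for r in eval_results)
    complexity = len(candidate_prompt.split())
    return (WEIGHTS["errors"] * errors
            + WEIGHTS["tokens"] * tokens
            + WEIGHTS["complexity"] * complexity)
```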
Skills essentially boil down to distributed parts of a main prompt. If you consider a state model you can see this pattern: the task is the state, and combining the task's specific skills defines the current prompt augmentation. When the task changes, another prompt emerges.
In the end, it is the clear guidance of the Agent that is the deciding factor.
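As a hedged sketch of that state view (all names made up), the "current prompt" is just the base prompt plus whichever skills the current task pulls in:

```python
# Task-as-state: each task selects its skills, and that selection defines the
# prompt augmentation. Skill texts and task names are hypothetical.
SKILLS = {
    "pdf-extraction": "To pull tables out of PDFs, run scripts/extract_tables.py ...",
    "release-notes":  "Release notes follow the template in templates/release.md ...",
}

TASK_SKILLS = {
    "summarize-contract": ["pdf-extraction"],
    "cut-a-release":      ["release-notes"],
}

def prompt_for(task: str, base_prompt: str) -> str:
    augmentation = "\n\n".join(SKILLS[name] for name in TASK_SKILLS.get(task, []))
    return base_prompt + ("\n\n" + augmentation if augmentation else "")

# When the task changes, a different prompt emerges:
print(prompt_for("cut-a-release", "You are a helpful coding agent."))
```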
Transforming an arbitrary table is still hard, especially a table on a webpage or in a document. Sometimes I even struggle to find the right library. The effort doesn't seem worth it for a one-off transformation either. An LLM can be a great tool for these tasks.
MCP does three things conceptually: it lets you build a bridge between an agent and <something else>, it specifies a UI+API layer between the bridge and the LLM, and it formalizes the description of that bridge in a tool-calling format.
It's that UI+API layer that's the biggest pain in the ass, in my opinion. Sometimes you need it; for instance, if you wanted an agent to access your emails, a high quality MCP server that can't destroy your life through enthusiastic tool calling makes sense.
If, however, you have, say, a CLI tool or simple API that's reasonably self-documenting and that you're willing to let it run, and/or if you need specific behavior with a different context setting, then a skill can just be a markdown file that explains what, how, and why.
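Roughly like this, as a made-up example (the tool, paths, and frontmatter body are all hypothetical; it just loosely follows the SKILL.md frontmatter shape):

```
---
name: changelog-tool
description: Generate a changelog entry. Use when the user asks to summarize merged PRs into release notes.
---

# What
`changelog` is an internal CLI that prints merged PRs since the last tag.

# How
Run `changelog --since-last-tag --json`, then group entries by label and
write them into CHANGELOG.md using the existing heading style.

# Why
Keeps release notes consistent without hand-collating PR titles.
```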
I will say, when using MCP, be selective about which tools you enable. A lot of the time a server comes with, say, 30 tools and you only personally care about 5 of them. The other 25 are just rotting your context.
Every public MCP server I've seen has been a disaster, with too many tools and tokens polluting the context. It's really most useful when you need tight integration with some other environment and can write a little custom wrapper to provide it.
People like to shit on Copilot's UX, but something it does well is making it incredibly easy to switch off individual tools you don't need per MCP server. In general I've found its MCP story the best out of all of them (Codex/CC/Gemini); it uses the VS Code extension integration very well.
The durable pattern here isn't a specific file format. It's on-demand capability discovery: a small index with concise metadata so the model can find what's available, then pull details only when needed. That's a real improvement over tool calling and MCP's "preload all tools up front" approach, and it mirrors how humans work. Even as models bake more know-how into their weights, novel capabilities will always be created faster than retraining cycles. And even if context becomes unlimited, preloading everything up front remains wasteful when most of it is irrelevant to the task at hand.
So even if "Skills" gets replaced, discoverability and progressive disclosure likely survive.
The problem isn’t having a standard way for agents to branch out. The problem is that AI is the new Javascript web framework: there’s nothing wrong with frameworks, but when everyone and their son are writing a new framework and half those frameworks barely work, you end up with a buggy, fragmented ecosystem.
I get why this happens. Startups want VC money, established companies then want to appear relevant, and then software engineers and students feel pressured to prove they're hireable. And you end up with one giant pissing contest where half the players likely see the ridiculousness of the situation but have little choice other than to join the party.
Anyway: a lot of the earlier stages of drug discovery involve pulling in lots of public datasets and scouring scientific literature for information related to a molecule, a protein, a disease, etc. You join that with your own data, laboratory capabilities, and commercial strategy in order to spot opportunities for new drugs that you could maybe, one day, take into the clinic. This is traditionally an extremely time-consuming and bias-prone activity, and whole startups have sprung up around trying to make it easier.
A lot of the public datasets have MCPs someone has put together around their REST APIs. (For example, a while ago Anthropic released "Claude for Life Sciences", which was essentially a collection of MCPs they had developed over some popular public resources like PubMed.)
For those datasets that don't have open source MCPs, and for our proprietary datasets, we stand up our own MCPs which function as gateways for e.g. running SQL queries or Spark jobs against those datasets. We also include MCPs for writing and running Python scripts using popular bioinformatics libraries, etc. We bundle them with `mcpb` so they can be made into a fully configured one-click installer you can load into desktop LLM clients like Claude Desktop or LibreChat. Then our IT team can provision these fully configured tools for everyone in our organization using MDM tools like Jamf.
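The gateway tools themselves are not complicated. Here's a rough sketch of the kind of thing I mean, assuming the MCP Python SDK's FastMCP helper; run_readonly_query is a stand-in for whatever enforces read-only access against the warehouse, not our actual code:

```python
# Rough sketch of an MCP gateway tool over an internal dataset.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("internal-datasets")

def run_readonly_query(sql: str, limit: int) -> list[tuple]:
    # Placeholder: the real thing would validate the SQL and dispatch to
    # Spark/Trino/etc. with a hard row cap.
    return [("example", "row")][:limit]

@mcp.tool()
def query_dataset(sql: str, limit: int = 200) -> str:
    """Run a read-only SQL query against the curated research warehouse."""
    rows = run_readonly_query(sql, limit=limit)
    return "\n".join(str(r) for r in rows)

if __name__ == "__main__":
    mcp.run()  # stdio transport by default; front with a remote bridge/proxy as needed
```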
We manage the underlying data with classical data engineering patterns, ETL jobs, data definition catalogs, etc., and give MCP-enabled tools to our researchers as front-end, concierge-type tools. And once they find something they like, we also have MCPs which can help transform those queries into new views, ETL scripts, etc. and serve them using our non-LLM infra, or save tables, protein renderings, graphs, etc. and upload them into docs or spreadsheets to be shared with their peers. Part of the reason we have set it up this way is to work around the limitations of MCPs (e.g. all responses have to go through the context window, so you can't pass large files around or trust that it's not mangling the responses). But we also do this so as to end up with repeatable/predictable data assets instead of LLM-only workflows. After the exploration is done, the idea is that you use the artifact, not the LLM, to interact with it (though of course you can interact with the artifact in an LLM-assisted workflow as you iterate once again in developing yet another derivative artifact).
Some of why this works for us is perhaps unique to the research context where the process of deciding what to do and evaluating what has already been done is a big part of daily work. But I also think there are opportunities in other areas, e.g. SRE workflows pulling logs from Kubernetes pods and comparing to Grafana metrics, saving the result as a new dashboard, and so on.
What these workflows all have in common, IMO, is that there are humans using the LLM as an aid to derive understanding, and then translating that understanding into more traditional, reliable tools. For this reason, I tend to think that the concept of autonomous "agents" is stupid outside of a few very narrow contexts. That is to say, once you know what you want, you are generally better off with a reliable, predictable, LLM-free application, but LLMs are very useful in the process of figuring out what you want. And MCPs are helpful there.
How do you handle versioning/updates when datasets change? Do the MCPs break or do you have some abstraction layer?
What's your hit rate on researchers actually converting LLM explorations into permanent artifacts vs just using it as a one-off?
Makes sense for research workflows. Do you think this pattern (LLM exploration > traditional tools) generalizes outside domains with high uncertainty? Or is it specifically valuable where 'deciding what to do' is the hard part?
Someone else mentioned using Chrome dev tools + Cursor; I'm going to try that out as a way to convince myself here. I want to make this work but I just feel like I'm missing something. The problem is clearly me, so I guess I need to put in some time here.
For data MCPs, we use remote MCPs that are served over an stdio bridge. So our configuration is just mcp-proxy [0] pointed at a fixed URL we control. The server has an /mcp endpoint that provides tools, and that endpoint is hit whenever the desktop LLM starts up. So adding/removing/altering tools is simply a matter of changing that service and redeploying that API. (Note: there are sometimes complications, e.g. if I change an endpoint that used to return data directly but now writes a file to cloud storage and returns a URL (because the result is too large, i.e. to work around the aforementioned limitation of MCP), we have to sync with our IT team to deploy a configuration change to everyone's machine.)
I have seen nicer implementations that use a full MCP gateway that does another proxy step to the upstream MCP servers, which I haven't used myself (though I want to). The added benefit is that you can log/track which MCPs your users are using most often and how they are doing, and you can abstract away a lot of the details of auth, monitor for security issues, etc. One of the projects I've looked at in that space is Mint MCP, but I haven't used it myself.
> What's your hit rate on researchers actually converting LLM explorations into permanent artifacts vs just using it as a one-off?
Low. Which in our case is ideal, since most research ideas can be quickly discarded and save us a ton of time and money that would otherwise be spent running doomed lab experiments, etc. As you get later in the drug discovery pipeline you have a larger team built around the program, and then the artifacts are more helpful. There still isn't much of a norm in the biotech industry of having an engineering team support an advanced drug program (a mistake, IMO) so these artifacts go a long way given these teams don't have dedicated resources.
> Do you think this pattern (LLM exploration > traditional tools) generalizes outside domains with high uncertainty?
I don't know for sure, as I don't live in that world. My instinct is: I wouldn't necessarily roll something like this out to external customers if you have a well-defined product. (IMO there just isn't that much of a market for uncertain outputs of such products, which is why all of the SaaS companies that have launched their integrated AI tools haven't seen much success with them.) But even within a domain like that, it can be useful to e.g. your customer support team, your engineers, etc. For example, one of the ideas on my "cool projects" list is an SRE toolkit that can query across K8s, Loki/Prometheus, your cloud provider, your git provider and help quickly diagnose production issues. I imagine the result of such an exploration would almost always be a new dashboard/alert/etc.
[0] https://github.com/sparfenyuk/mcp-proxy - don't know much about this repo, but it was our starting point
We'll see how many of these are around in a few years.
The agent loop architectural pattern (and that’s the relevant bit) is going to continue to matter. There will be new patterns for sure, but tool calling plus while loop (which is all an “agent” is) is powerful and highly general.
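Stripped of SDK ceremony, the whole pattern fits in a few lines. This is a hedged sketch, not any real framework; call_model and the tools dict are placeholders:

```python
# The "agent" pattern: a while loop around an LLM call and tool dispatch.
def agent(task: str, call_model, tools: dict, max_steps: int = 20):
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = call_model(history)  # returns {"tool": ..., "args": ...} or {"answer": ...}
        if "answer" in reply:
            return reply["answer"]   # model decided it is done
        result = tools[reply["tool"]](**reply["args"])  # execute the requested tool
        history.append({"role": "tool", "name": reply["tool"], "content": str(result)})
    return "gave up"
```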
Right now models have roughly all of the written knowledge available to mankind, minus some obscure held-out private archives and so on. They have excellent skills and general abilities to construct plausible sequences of actions to accomplish work, but we need to hold their hands to really get decent performance across a wide range of activities. Skills, agent frameworks, and MCP each carve out a different domain of that problem. Successful solutions provide training data for future models, which can either be generalized or used to create a vast mountain of synthetic data following successful patterns, making the next generation of models incredibly useful for a huge number of tasks by default.
It might also be possible that by studying the problem and identifying where mode collapse and training issues prevent the right sort of generalization, they could tweak the architecture and solve the deficiency through normal training runs, thereby discarding the need for all the bespoke, artisanal agent specifications.
You can have the most capable human available to you, a supreme executive assistant. You still have to convey your intent and needs to them, your preferences, etc, with as high a degree of specificity as necessary.
And you need to provide them with access and mechanisms to do things on your behalf.
Agentic definitions are the former, and they will evolve and grow. I like the metaphor of deal terms in financial contracts: benchmarkers document billions of these now. The "deal terms" governing the work any given entity does for you will be rich and bespoke and specific, like any valuable relationship. Even if the agent is learning about you, your governance is still needed.
MCP is the latter. It is the protocol by which a thing does things for you. It will get extensions. Skill-like directives and instructions will get delivered over it.
Skills themselves are a near-term scaffold that will soon disappear.
Skills are a great sleight of hand for Anthropic to get people to think Claude Code is a platform. There is no there there. Orgs will figure this out.
Cheers.
However the "waiting out" strategy needs a timeout. It might happen that agentic crutches around LLMs will bear fruit much sooner than high-quality LLMs arrive. If you don't have a timeout or a decent exit criteria you may end up waiting indefinitely, or at least until reality of things becomes too painful to ignore.
The "ski rental problem" comes to mind here, but maybe there is another "wait it out" exit strategy?
I don't think this makes any sense, as MCP is part of something they can already do.
Sorry for the nit, but this is a gross oversimplification. Most private archives are not obscure but obfuscated, and they are largely far more valuable as training data than the publicly available ones.
Want to know how the DOD may technically track your phone? Private.
Want to know how to make Coca Cola at scale? Private.
Want to know what the schematic is for a Google TPU? Private.
etc etc.
The reason I ask is that the pace of new things arriving is overwhelming, hence I was tempted to just ignore it. Not because things had signs of transience, but because I was drowning and didn't know where to start. That is not the same thing as actually observing signs of things being too foamy.
MCP lets you glue random assed parts of services to mega-ultra-high critical business initiatives with no go between. Delivered through a personalized chat interface that will tell you how sexy you are and how you deserved to win at golf yesterday… from salesman to auto interface to forever contract in minutes.
MS sells to insecurities of incompetent management and facilitates territory marking at the expense of governments and societies around the world for mega bucks. MCP, obvious as it is technically, also lets them plug a library into existing services for a quick upgrade then an atomized upsell directly to the chat interfaces of upper management.
Microsoft’s CEO has talked about his agent swarm. Much like RPA this woo appeals strongly to the barely technical.
So basically a reusable prompt, like the previous commenter asked?
The way the OP phrased it
> Is a skill essentially a reusable prompt that is inserted at the start of any query?
is actually a more apt description of a different Claude Code feature called slash commands,
where I can create a preset "prompt" and call it with /name-of-my-prompt $ARGS,
and that feature is the one that essentially prefixes a prompt.
The other description of lazy loading is more accurate for Skills.
Where I can tell my Claude Code system: Hey if you need to run our dev server see my-dev-server-skill
and the agent will determine when to pull in that skill if it needs it.
This may all be very wrong, though, as it's mostly conjecture from the little I've worked with skills.
This lets you trigger a skill with '/foo' in a way that resembles the way you'd use the command line.
Claude Code is very good at using well-defined skills without a command, though; but in a scenario where there is some nuance between similar skills, they are useful.
BUT what makes them powerful is that you can include code with the skill package.
Like I have a skill that uses a Go program to traverse the AST of a Go project to find different issues in it.
You COULD just prompt it but then the LLM would have to dig around using find and grep. Now it runs a single executable which outputs an LLM optimised clump of text for processing.
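The analogous idea in Python (the parent's version is Go) would look something like this; the "bare except" check and paths are made up, it just shows a bundled script that emits a compact, LLM-friendly report instead of making the model grep around:

```python
# Small script shipped with a skill: walks the AST and prints a terse report.
import ast
import pathlib
import sys

def audit(root: str) -> str:
    findings = []
    for path in pathlib.Path(root).rglob("*.py"):
        tree = ast.parse(path.read_text(), filename=str(path))
        for node in ast.walk(tree):
            # made-up example check: bare except clauses
            if isinstance(node, ast.ExceptHandler) and node.type is None:
                findings.append(f"{path}:{node.lineno}: bare except")
    return "\n".join(findings) or "no findings"

if __name__ == "__main__":
    print(audit(sys.argv[1] if len(sys.argv) > 1 else "."))
```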
Inversely, you can persist/summarize a larger bit of context into a skill, so a new agent session can easily pull it in.
So yes, it's just turtles, sorry, prompts all the way down.
https://github.com/alganet/skills/blob/main/skills/left-padd...
Either way, that’s hilarious. Well done.
<conspiracy_mode> maybe all of them were designed to occupy the full context window of earlier GPT models </conspiracy_mode>
Apart from Google Inc., I have not seen a single "AI company" propose an RFC that was reviewed by the IETF and became a proper internet standard. [0]
"MCP" was one of the worst so-called "standards" ever built since the JWT was proposed. So I do not take Anthropic seriously when they create so-called "open standards" especially when the reference implementation is in Javascript or TypeScript.
> I have not seen a single "AI company" propose an RFC that was reviewed by the IETF and became a proper internet standard.
Why would the IETF have anything to do with LLM/agent standards? This seems like a category error. They also don’t ratify web standards, for example.
The IETF is involved in protocol standards; MCP/A2A are certainly in that category, skills less so.
like deno vs npm package ecosystems that didn't work together for many years
There are multiple intermixed and inconsistent concepts out in the wild: AGENTS.md vs CLAUDE.md vs .github/instructions; skills vs commands; and so on.
When I work on a project, do all the files align? If I work in an org where developers have a choice of agent, how many of these instruction and skill "distros" do I need to put in (pollute?) my repo?
We then hit the problem of how to best share these and keep them up to date, especially with multiple repositories. It led us to build sx - https://github.com/sleuth-io/sx, a package manager for AI tools.
While I do a lot of agentic development in personal projects at this point, at work it's super rare beyond quick lookups of things I should already know but can't be arsed to remember exactly (like writing a one-off SQL script that does batched mutations and similar).
It is not healthy when you have an obsession this bad, seriously. Seek help.
Although Skills are just md files, it's good to see them "donate" it.
Their goal seems to be simple: focus on coding and keep improving it. They've found a great niche, and hopefully a revenue-generating business, there.
OpenAI, on the other hand, doesn't give me the same vibes; they don't seem very focused. They're playing catch-up with both Google's models and Anthropic.
Apple has shortcuts, but they haven’t propped it up like a standard that other people can use.
In contrast, this is something you can use even if you have nothing to do with Claude, and the tools you create will be compatible with the wider ecosystem.
Many many MCPs could and should just be a skill instead.
```
web-artifacts-builder

Suite of tools for creating elaborate, multi-component claude.ai HTML artifacts using modern frontend web technologies (React, Tailwind CSS, shadcn/ui). Use for complex artifacts requiring state management, routing, or shadcn/ui components - not for simple single-file HTML/JSX artifacts.
```
Say I want to build a landing page with some relatively static content. I don't know it yet, but it's just going to be Bootstrap CSS, no SPA/React(ish); it'll be fine as a templated server-side thing. But I don't know how to express this in words. Could the skill _evolve_ based on what my preferences are and what is possible for a relative novice to grok and construct?
This is a simple example, but it could extend to, say, using SQLite + Litestream instead of Postgres, or gradient-boosted trees instead of an expensive transformer-based classifier.
Paper & applications published here: https://earthpilot.ai/metaskills/
---
persona: hacker
description: logical, talks about computers a lot, enjoys coffee, somewhat snarky and arrogant
---
<more details here>

1. For an experienced Claude Code user, you can already build such an agent persona quite trivially by using the /agents settings.
2. It doesn't actually replace agents. Most people I know use pre-defined agents for some tasks, but they still want the ability to create ad-hoc agents for specific needs. Your standard, by requiring them to write markdown files, does not solve this ad-hoc case.
3. It does not seem very "viral" or income-generating. I know this is premature at this point, but without charging users for the standard, is it reasonable to expect to make money off of this?
And of course Claude Code has custom slash commands which are also very similar.
Getting a lot of whiplash from all these specifications that are hastily put together and then quickly forgotten.
Other than that it appears MCP prompts end up as slash commands provided by an MCP Server (instead of client side command definitions).
But the actual knowledge that is encoded in skills/commands/mcp prompts is very similar.
But skills don't really solve the problem. Turning that workaround into a standard feels strange. Standardizing a patch isn't something I'd expect from Anthropic; it's unclear what their endgame is here.
The value of standardizing skills is that the skills you define work with any agentic tool. It doesn't matter how simple they are; if they don't work easily, they have no use.
You need a practical and efficient way to give the LLM your context. Just as every organization has its own standards, best practices, and architectures that should be documented because new developers don't know them upfront, LLMs also need your context.
An llm is not an all knowing brain, but it’s a plan-do-check-act text processing machine.
Marketing. That defines pretty much everything Anthropic does beyond frontier model training. They're the same people producing sensationalized research headlines about LLMs trying to blackmail folks in order to prevent being deleted.
This is not the first time; perhaps an expectation adjustment is in order. This is also the same company that has an exec telling people in his Discord (15 minutes of fame recently) that Claude has emotions.
I think that they often do solve the problem, just maybe they have some other side effects/trade offs.
The best one we have thought of so far.
It has been published as an open specification.
Whether it is a standard isn't for them to declare.
Could one make a copyleft type license such that the generated code must be licensed free and open and under the same license? How enforceable are licenses on these skills anyway, if one can take in the whole skill with an agent and generate a legally distinct but functionally close variant?
It does code execution in an apple container if your Skill requires any code execution.
It also proves the point that Skills are basically repackaged MCPs (if you look into my code).
For example, you can't have a directory named "Stripe-Skills" which will give you a breakdown of last week's revenue (unless you write in the skills how to connect to stripe and get that information). So, most of the remote, existing services are better used as MCPs (essentially APIs).
These two solutions look feel and smell like the same thing. Are they the same thing?
Any OpenCode users out there have any hot or nuanced takes?
It is functionally a skill. I suppose once anti-gravity supports skills, I will make it one officially.
I'm authoring the equivalent in CUE, and assimilating "standard" provider ones into CUE on the fly so my agent can work with all the shenanigans out there.
npx ai-agent-skills install frontend-design
20 of the most-starred Claude skills ever, now open across Claude Code, Cursor, Amp, VS Code: anywhere that supports the spec. Would love some feedback on it.
github.com/skillcreatorai/Ai-Agent-Skills
There's no real benefit to the MCP protocol over a regular API with a published "client" a local LLM can invoke. The only downside is that you'd have to pull this client first.
I'm using "skill" here to refer to a local executable function, not specifically Claude Skills.
If the LLM/Agent executes tools via code in a sandbox (which is what things are moving towards), all LLM tools can be simply defined as regular functions that have the flexibility to do anything.
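In other words, something like this hedged sketch: the "tools" are just ordinary functions the model imports and composes inside the sandbox, with no protocol layer in between. All names here are hypothetical:

```python
# Plain functions as tools; the agent writes and runs code that calls them.
import json
import urllib.request

def fetch_json(url: str) -> dict:
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def top_n(items: list[dict], key: str, n: int = 5) -> list[dict]:
    return sorted(items, key=lambda x: x[key], reverse=True)[:n]

# The model might write and execute something like this in the sandbox:
# data = fetch_json("https://example.internal/api/orders")
# print(top_n(data["orders"], key="total"))
```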
I seriously doubt MCP will exist in any form a few years from now
2. It was delegation to a subagent to select the tools that should be made available, which sounded like it got the whole list and did "RAG" on the fly, like any model would.
You're going to want to provide your agent with search, RAG, and subagent context gathering (plus pruning/compaction/management) that can work across the internet, codebases, large tool/skill sets, and past interaction history. All of this can be presented as a single tool or a few tools to your main agent, and this is the broader meta-pattern/trend emerging.
It's a much better system in my experience.
What they said was: don't pollute your context with lots of tool defs, from MCP or not. You'll see the same problem if you have hundreds of skills, with their names and descriptions chewing up tokens.
Their solution is to let the agent search and discover as needed, it's a general concept around tools (mcp, func, code use, skills)
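The shape of that idea, as a rough illustration (naive keyword scoring, made-up catalog format; a real version would use embeddings or a proper index): expose one meta-tool that returns only the few relevant definitions instead of preloading hundreds.

```python
# One "search_tools" meta-tool instead of hundreds of preloaded definitions.
def search_tools(query: str, catalog: list[dict], k: int = 3) -> list[dict]:
    """catalog entries look like {"name": ..., "description": ...}."""
    words = set(query.lower().split())
    scored = sorted(
        catalog,
        key=lambda t: len(words & set(t["description"].lower().split())),
        reverse=True,
    )
    return scored[:k]  # only these few definitions get added to the context
```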