- My own "execute bash command" tool, adding output pagination, forcing the agent to choose a working directory, and working around some Cursor bugs on Windows. This avoids context explosion when a command unexpectedly returns a huge amount of text, and avoids a common agent failure mode where it misunderstands what directory it is currently in.
- SQL command execution. This can be done perfectly fine with "execute bash command", but the agent struggles to correctly encode multi-line SQL queries on the command line. You can force it to write a file first, but then that's two MCP tool calls (write file, execute command), which increases the chances that it goofs something up. I simply accept an unencoded, multi-line SQL query directly via the MCP tool and encode it myself. This, again, is simply avoiding a common failure mode in the built-in tools. (A rough sketch of both tools is below.)
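For the curious, here's roughly what those two tools look like as an MCP server, using the official Python SDK's FastMCP helper. This is a minimal sketch, not my actual code: the tool names, the page size, and the sqlite3-over-stdin trick are all illustrative.

```python
# Sketch of the two custom tools as one MCP server (names illustrative).
import subprocess
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("local-tools")
PAGE = 4_000  # characters per page; pick whatever keeps context sane

@mcp.tool()
def run_command(command: str, cwd: str, page: int = 0) -> str:
    """Run a shell command in an explicit working directory, paginated."""
    out = subprocess.run(
        command, shell=True, cwd=cwd, capture_output=True, text=True
    )
    text = out.stdout + out.stderr
    chunk = text[page * PAGE : (page + 1) * PAGE]
    total = (len(text) + PAGE - 1) // PAGE
    return f"[page {page + 1}/{max(total, 1)}]\n{chunk}"

@mcp.tool()
def run_sql(query: str, database: str) -> str:
    """Accept a raw multi-line SQL query; no shell quoting required."""
    # Feeding the query to sqlite3 via stdin sidesteps encoding entirely.
    out = subprocess.run(
        ["sqlite3", database], input=query, capture_output=True, text=True
    )
    return out.stdout or out.stderr

if __name__ == "__main__":
    mcp.run()  # stdio transport by default
```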
I haven't needed a third tool, and if the built-in tools were better I wouldn't have needed these two, either. Everything else I've ever needed has been a bash script that both the agent and I can run, explained in the agent's global rules. It's really unclear to me what other use case I might encounter that would be better as MCP.
In theory I can see that an MCP server only launches once and is persistent across many requests, whereas bash scripts are one-and-done. Perhaps some use case requires a lot of up-front loading that would need to be redone for every tool call if it were a bash script. Or perhaps there are complex interactions across multiple tool calls where state must be kept in memory and writing to disk is not an option. But I have not yet encountered anything like this.
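To make that concrete, here's a hypothetical sketch of the kind of server that would actually justify MCP's persistence: an expensive index loaded once at startup and reused across tool calls. Everything here (file names, the index itself) is invented for the example.

```python
# Hypothetical "persistent state" case: the load cost is paid once at
# server startup, not once per tool call as a bash script would pay it.
import json
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("stateful-demo")

# Paid once, when the server launches.
with open("big_index.json") as f:
    INDEX = json.load(f)  # imagine this takes 30 seconds to build

@mcp.tool()
def lookup(key: str) -> str:
    """Query the in-memory index built at startup."""
    return json.dumps(INDEX.get(key, "not found"))

if __name__ == "__main__":
    mcp.run()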
I have done this with entire textbooks. Find a PDF and get GPT-5 to transcribe it page by page to Markdown. Costs a couple bucks and turns the agent into a wizard on that subject.
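Roughly, the loop looks like this (a sketch: the prompt and paths are placeholders, "gpt-5" is whatever vision-capable model you prefer, and it assumes pdf2image/poppler plus the openai package):

```python
# Page-by-page PDF -> Markdown transcription sketch.
import base64, io
from pdf2image import convert_from_path
from openai import OpenAI

client = OpenAI()
pages = convert_from_path("textbook.pdf", dpi=200)

with open("textbook.md", "w") as out:
    for i, page in enumerate(pages):
        buf = io.BytesIO()
        page.save(buf, format="PNG")
        b64 = base64.b64encode(buf.getvalue()).decode()
        resp = client.chat.completions.create(
            model="gpt-5",  # placeholder: any vision-capable model
            messages=[{
                "role": "user",
                "content": [
                    {"type": "text",
                     "text": "Transcribe this textbook page to Markdown."},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/png;base64,{b64}"}},
                ],
            }],
        )
        out.write(resp.choices[0].message.content + "\n\n")
        print(f"page {i + 1}/{len(pages)} done")
```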
Context7, too, could easily have been a command-line tool that both you and the agent can use. Even now, I don't see what MCP, specifically, brings to the table.
[1] One trick for Cursor users: put "/context/" in .gitignore and "!/context/" in .cursorignore. This keeps the directory out of git but still lets Cursor index it.
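Concretely, the two entries are:

```
# .gitignore
/context/

# .cursorignore
!/context/
```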
What if $TOOL_X needs $DATA to be called, but $TOOL_Y only returns $DATA_SUBSET? What happens when $TOOL_Z fails mid-workflow, after $TOOL_W has already executed?
Aren’t these exactly the situations current models are quite good at handling?
This whole idea of doing unsupervised, unstructured work on unstructured data at scale with some sort of army of agents sounds ridiculous to me anyway. No amount of MCP or prompting or whatever is going to solve it.
Like, if interesting problems sit on the boundary between the obvious and the chaotic, this is just some janky thing that's way too far into the chaotic regime. You won't get anywhere unless you magically solve the value function problem here.
About the $TOOL_Z and $TOOL_W scenario: it sounds like you're asking about the concept of a distributed unit of work, which MCP doesn't address.
I didn't explain myself very well, sorry. What I had in mind is this: MCP is about putting together workflows using tools from different, independent sources. But since the various tools are not designed to be composed, you run into scenarios where, in theory, you could string together $TOOL_Y and $TOOL_X, but $TOOL_Y only exposes $DATA_SUBSET (because it doesn't know about $TOOL_X), while $TOOL_X needs $DATA. The capability would be there if only the tools were designed to be composed.
Of course, that's also the very strength of MCP: it allows you to compose independent tools that were not designed to be composed. So it's a powerful approach, but inherently limited.
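A contrived sketch of that mismatch, with invented tool signatures: $TOOL_Y's author only ever returned the fields they cared about, so its output can never satisfy $TOOL_X's input, no matter how clever the agent is.

```python
# Contrived illustration of the composition gap (all names invented).
from typing import TypedDict

class DataSubset(TypedDict):
    id: str
    name: str

class Data(TypedDict):
    id: str
    name: str
    region: str  # the field $TOOL_X needs but $TOOL_Y never emits

def tool_y(query: str) -> DataSubset:
    """Independent tool; doesn't know tool_x exists, so it only
    returns the fields its own author needed."""
    return {"id": "42", "name": "example"}

def tool_x(record: Data) -> str:
    """Needs the full record, including 'region'."""
    return f"processed {record['id']} in {record['region']}"

# The agent can call tool_y, but its output can't satisfy tool_x:
# tool_x(tool_y("..."))  # missing 'region' -- the capability gap
```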
> About the $TOOL_Z and $TOOL_W scenario: it sounds like you're asking about the concept of a distributed unit of work, which MCP doesn't address.
Yes: distributed transactions, sagas, etc., which are basically impossible to do with "random" APIs that weren't designed for them.
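To illustrate: the saga pattern itself is trivial to implement, but it only works if every step ships a compensating action to undo it on failure, and arbitrary tools simply don't define those. A sketch (all names invented):

```python
# A minimal saga runner: each step is a (do, undo) pair, and on failure
# the completed steps are compensated in reverse order. The runner is
# easy; the problem is that arbitrary tools ($TOOL_W, $TOOL_Z, ...)
# don't expose an undo, and MCP doesn't require one.
from typing import Callable

Step = tuple[Callable[[], None], Callable[[], None]]  # (do, undo)

def run_saga(steps: list[Step]) -> None:
    done: list[Callable[[], None]] = []
    try:
        for do, undo in steps:
            do()
            done.append(undo)
    except Exception:
        for undo in reversed(done):
            undo()
        raise

# run_saga([(tool_w_execute, tool_w_undo),   # does $TOOL_W even have an undo?
#           (tool_z_execute, tool_z_undo)])
```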