Would you agree, or do you think this stays human-driven long term?
Take Supabase, for example. It’s disproportionately recommended by LLMs when people ask for backend/database stacks. That can’t be down to capability alone, since plenty of tools expose similar primitives. Something in the model’s training data, ecosystem visibility, or reinforcement layer is shaping that ranking.
If agents start choosing tools autonomously, the real leverage point isn’t just “can you describe your capabilities in MCP?” but “how does the agent decide you’re preferred over five near-identical alternatives?”
Do you think that ranking layer sits inside the model providers, or does it become an external reputation network?