2 points by nworley 4 hours ago | 2 comments
  • jazz9k 3 hours ago
    If the current generation of software engineers only knows how to code using AI, engineers who don't need it will be that much more valuable in the coming years.
    • nworley 10 minutes ago
      I’m less focused on engineers using AI to code and more on agents becoming the “users” of software, especially because you have agents doing all these tasks now (e.g. OpenClaw and others). Even if engineers stay critical, if the end consumer shifts from human clicks to agent decisions, distribution and ranking mechanics change.

      Would you agree or do you think this stays human driven long term?

  • jonahbenton 4 hours ago
    Yes. Reputation and eval layers on top of MCP.
    • nworley 17 minutes ago
      I think that's true, but do you see MCP as enough of a discovery primitive on its own, or does it still lack a ranking/trust layer? My intuition is that capability exposure is only half the problem; the harder part is how agents evaluate and choose between multiple similar tools.
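      The "capability exposure is only half the problem" point can be sketched in a few lines. This is a toy illustration, not the actual MCP schema: the tool names, dict fields, and keyword-overlap matcher are all invented stand-ins.

```python
# Hypothetical shape of what a tools/list-style discovery call gives an agent.
# Fields and names are illustrative, not the real MCP response format.
tools = [
    {"name": "supabase_query", "description": "Run SQL against a Postgres backend"},
    {"name": "neon_query", "description": "Run SQL against a Postgres backend"},
]

def capability_match(tool: dict, task: str) -> int:
    # Naive keyword overlap as a stand-in for real semantic matching.
    return len(set(tool["description"].lower().split()) & set(task.lower().split()))

task = "run SQL against a Postgres backend"
scores = {t["name"]: capability_match(t, task) for t in tools}
print(scores)  # both tools score identically on capability alone
```

      With identical descriptions the two tools are indistinguishable, which is exactly where some external ranking/trust signal would have to come in.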

      Take Supabase for example. It’s disproportionately recommended by LLMs when people ask for backend/database stacks. That can't just be because of its capabilities, since a lot of tools expose similar primitives. Something in the model’s training data, ecosystem visibility, or reinforcement layer is shaping that ranking.

      If agents start choosing tools autonomously, the real leverage point isn’t just “can you describe your capabilities in MCP?” but “how does the agent decide you’re preferred over five near-identical alternatives?”

      Do you think that ranking layer sits inside the model providers, or does it become an external reputation network?
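      Wherever that layer ends up living, the selection step it feeds is simple to sketch. A minimal, hypothetical example: the `Candidate` fields, scores, and blending weight are invented for illustration and don't reflect any real provider's API.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    capability_score: float  # how well the tool matches the task (0..1)
    reputation: float        # external signal: evals, usage, attestations (0..1)

def choose(candidates: list[Candidate], rep_weight: float = 0.5) -> Candidate:
    # Blend capability fit with reputation; when capabilities are identical,
    # the reputation signal alone decides the ranking.
    return max(
        candidates,
        key=lambda c: (1 - rep_weight) * c.capability_score + rep_weight * c.reputation,
    )

candidates = [
    Candidate("tool_a", capability_score=0.9, reputation=0.4),
    Candidate("tool_b", capability_score=0.9, reputation=0.8),  # same capability, better reputation
]
print(choose(candidates).name)  # tool_b wins on reputation
```

      Whether `reputation` is computed inside the model provider or queried from an external network is exactly the open question.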