1 point by joyjeetdan 7 hours ago | 2 comments
  • mcnamtm1 an hour ago
    "Agents started calling tools nobody remembered wiring up"

    This is very familiar to me; I have seen this problem before. Because AI systems are often built as networks of microservices following service mesh patterns, agent workloads have an inherent ability to reach other agent workloads even when they weren't designed for that. To me, that is an authorization problem, and traditional solutions use policies to control authorization. But the authz policy approach is usually avoided because it requires a central server, adds latency, and creates a single point of failure.
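
    For contrast, a minimal sketch of that conventional centralized check (Python, all names hypothetical); in a real deployment every agent-to-agent call would block on a round-trip to this one service:

      # Hypothetical central policy table; stands in for a policy server
      # that every call must consult: added latency, single point of failure.
      POLICIES = {("planner", "retriever"), ("retriever", "ranker")}

      def central_authorize(caller: str, callee: str) -> bool:
          # In a real mesh this would be a network call to the policy service.
          return (caller, callee) in POLICIES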

    I found a different approach that restricts agent interconnections to a pre-defined topology and enforces strict per-agent access controls at runtime, using dynamic access credentials unique to the agents in that topology. There is no central server.

    And the dynamic credentials guarantee control and security of the data flows over the authorized AI topology at runtime.
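
    To make the shape concrete, here is a minimal Python sketch of the idea rather than the actual implementation; the agent names, key provisioning, and token format are all hypothetical:

      import hashlib, hmac, os, time

      # Pre-defined topology: the only agent-to-agent edges that can ever mint a token.
      TOPOLOGY = {("planner", "retriever"), ("retriever", "ranker")}

      # One secret per edge; a real system would provision and rotate these out of band.
      EDGE_KEYS = {edge: os.urandom(32) for edge in TOPOLOGY}

      def mint_credential(src: str, dst: str) -> bytes:
          """Caller mints a short-lived token for one edge of the topology."""
          key = EDGE_KEYS.get((src, dst))
          if key is None:
              raise PermissionError(f"{src} -> {dst} is not in the topology")
          stamp = str(int(time.time())).encode()
          return stamp + b"." + hmac.new(key, stamp, hashlib.sha256).digest()

      def verify_credential(src: str, dst: str, token: bytes, max_age: int = 30) -> bool:
          """Callee checks the edge, the MAC, and freshness -- locally, no central server."""
          key = EDGE_KEYS.get((src, dst))
          if key is None:
              return False
          stamp, _, mac = token.partition(b".")
          if not stamp.isdigit():
              return False
          expected = hmac.new(key, stamp, hashlib.sha256).digest()
          fresh = int(time.time()) - int(stamp.decode()) <= max_age
          return hmac.compare_digest(mac, expected) and fresh

    A call from planner to ranker fails at mint time because that edge is not in the topology, and a stale token fails the freshness check even on an authorized edge.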

    I think about securing AI systems a lot, and I am amazed that people expect conventional API security to protect AI systems when those same solutions are failing badly for every other kind of API. In my opinion, AI is a leap-ahead technology that needs a leap-ahead security approach, not conventional API security methods.

    Happy to discuss further

  • joyjeetdan 7 hours ago
    One thing that surprised us: gateways and logs were useful, but insufficient. The real gaps appeared when execution spanned multiple agents, tools, and identities over time. Would love to hear how others are handling this today.
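
    For concreteness, a minimal Python sketch of the kind of cross-execution correlation we found ourselves needing (everything here is hypothetical, not our actual stack): one execution id threaded through every agent, tool call, and identity, so the full chain can be reconstructed later:

      import time, uuid

      TRACE: list[dict] = []  # stand-in for a real trace store

      def record(execution_id: str, agent: str, tool: str, identity: str) -> None:
          # Every hop, whichever agent or identity performs it,
          # carries the same execution id.
          TRACE.append({"execution_id": execution_id, "agent": agent,
                        "tool": tool, "identity": identity, "ts": time.time()})

      execution_id = str(uuid.uuid4())
      record(execution_id, "planner", "search", "svc-account-a")
      record(execution_id, "executor", "db_write", "svc-account-b")

      # One query reconstructs the whole multi-agent, multi-identity run,
      # which per-service gateway logs alone never gave us.
      chain = [e for e in TRACE if e["execution_id"] == execution_id]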
    • abdul_levo 6 hours ago
      So true — once multiple agents and tools are involved, it becomes a system-tracking problem, not just a logging problem.