This is very familiar to me. I have seen this problem before. Because AI systems are often built as networks of microservices that follow service mesh patterns, agent workloads have an inherent ability to reach other agent workloads, even ones they were never designed to talk to. To me, that is an 'authorization' problem, and traditional solutions focus on policies to control authorization. But an authz policy approach is often avoided because it involves a central server, adds latency, and creates a single point of failure.
I found a different approach that restricts agent interconnections to a pre-defined topology and enforces strict per-agent access controls at runtime using dynamic access credentials unique to the agents in the topology. There is no central server.
And the dynamic access credentials control and secure the data flows over the authorized AI topology at runtime.
I think about securing AI systems a lot. And I am amazed that people expect conventional API security to protect AI systems when those same solutions are failing badly for every other kind of API. In my opinion, AI is a leap-ahead technology that needs a leap-ahead security approach rather than conventional API security methods.
Happy to discuss further