2 points by abdelfane 5 hours ago | 2 comments
  • abdelfane 5 hours ago
    Author here. I spent the last week analyzing this vulnerability from a security architecture perspective.

    Key insight: This isn't a ServiceNow-specific problem. It's an industry-wide pattern of grafting AI agents onto legacy auth systems.

    We built an open-source platform (AIM) that implements the prevention strategies outlined in the article. Happy to answer questions about AI agent security or the analysis.

    GitHub: github.com/opena2a-org/agent-identity-management

  • chrisjj 5 hours ago
    Nice article.

    But the "AI" angle is incidental, surely. The provider simply added an unsecured API, period.

    • abdelfane 4 hours ago
      You're right that at the technical level, it's an unsecured API. But I'd argue the AI context matters for two reasons:

        1. The capability itself: The "create data anywhere" permission wasn't a legacy API—it was added specifically to enable AI agent functionality (Now Assist). Traditional chatbots had scoped, rules-based actions. The shift to agentic AI introduced capabilities that the auth model wasn't designed to govern.
      
        2. The pattern: This is going to happen repeatedly. Companies are bolting AI agents onto legacy systems without rethinking authorization. ServiceNow is just the first high-profile example. The same pattern exists in Copilot plugins, Claude Desktop MCP servers, LangChain deployments—anywhere AI agents get grafted onto existing infrastructure.
      
      You could call it "an unsecured API" and be technically correct. But the reason it was unsecured is that AI agents break the assumptions traditional IAM was built on: human decision-making, predictable workflows, fixed permissions.

      The fix isn't just "secure your APIs" (though yes, do that). It's recognizing that autonomous agents need different authorization primitives than human-operated systems.
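      To make "different authorization primitives" concrete, here's a minimal sketch of per-action, per-table scope checks for an agent credential, in contrast to a blanket "create data anywhere" grant. This is illustrative only, not AIM's or ServiceNow's actual implementation; all names (`AgentToken`, `authorize`, the scope strings) are hypothetical.

```python
# Sketch: authorize each agent action against an explicit scope list,
# rather than granting one broad "create anywhere" permission.
from dataclasses import dataclass


@dataclass(frozen=True)
class AgentToken:
    agent_id: str
    scopes: frozenset  # e.g. {"incident:create", "kb:read"}


class ScopeError(PermissionError):
    pass


def authorize(token: AgentToken, action: str, table: str) -> None:
    """Reject any agent action not explicitly granted for that table."""
    required = f"{table}:{action}"
    if required not in token.scopes:
        raise ScopeError(f"{token.agent_id} lacks scope {required}")


# An agent scoped only to incident creation cannot write to other tables.
token = AgentToken("now-assist-bot", frozenset({"incident:create"}))
authorize(token, "create", "incident")      # allowed
try:
    authorize(token, "create", "sys_user")  # denied: no blanket grant
except ScopeError as e:
    print(e)
```

      The point of the sketch: the default is deny, and every grant is enumerable and auditable per agent, which is the property a blanket capability destroys.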

      • chrisjj 3 hours ago
        So someone adds a gateway to a fence but forgets to add the gate. That's not "introducing capabilities that the auth model wasn't designed to govern".

        > The fix isn't just "secure your APIs" (though yes, do that). It's recognizing that autonomous agents need different authorization primitives than human-operated systems.

        An API is for programs, not humans. And isn't suddenly insecure because some of those programs are now purportedly intelligent.

        I will agree, though, that this is going to happen repeatedly. But only because the companies that think rushing to bolt on "AI" is a good idea are more than averagely likely to be the same ones that thought proper API security wasn't.