2 points by elsove812 a day ago | 2 comments
  • elsove812 a day ago
    AI agents are increasingly capable of performing real-world operations:

    - executing shell commands
    - calling APIs
    - manipulating files
    - interacting with infrastructure

    However, most existing authorization models are identity-based:

    User → Permission → Action
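
    A minimal sketch of that identity-based model (names and the permission table are hypothetical, purely for illustration):

```python
# Identity-based authorization: a static mapping from user identity
# to permitted actions. Names here are hypothetical examples.
PERMISSIONS = {
    "alice": {"read_file", "call_api"},
}

def authorize(user: str, action: str) -> bool:
    # The decision depends only on WHO is acting, not on the
    # execution process that produced the action.
    return action in PERMISSIONS.get(user, set())

print(authorize("alice", "read_file"))   # True
print(authorize("alice", "exec_shell"))  # False
```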

    AI systems do not behave like identities.

    They behave like evolving execution processes driven by model reasoning.

    During execution an AI system may:

    - dynamically generate new tool calls
    - compose multiple capabilities
    - trigger external side effects
    - expand behavior through reasoning loops

    This creates a mismatch between traditional authorization models and AI execution.

    This repository proposes an alternative model:

    Execution Context Authorization Model.

    The key idea is that authorization should be bound to an *Execution Context*, not an identity.

    The model defines:

    - capability ceilings
    - capability requests
    - validation relations
    - external event authorization
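
    To make the idea concrete, here is a hedged sketch of binding authorization to an execution context rather than an identity. The repository defines the formal model; the class and method names below (`ExecutionContext`, `request`, `spawn`) are illustrative assumptions, not its actual API:

```python
# Sketch: authorization bound to an execution context. A context carries
# a capability ceiling (hard upper bound); capability requests are
# validated against it; sub-contexts can only narrow the ceiling.
# All names are hypothetical, not taken from the repository.
from dataclasses import dataclass, field

@dataclass
class ExecutionContext:
    ceiling: frozenset                      # capability ceiling: hard upper bound
    granted: set = field(default_factory=set)

    def request(self, capability: str) -> bool:
        # A capability request is valid only if it stays under the ceiling.
        if capability in self.ceiling:
            self.granted.add(capability)
            return True
        return False

    def spawn(self, sub_ceiling: frozenset) -> "ExecutionContext":
        # A sub-context (e.g. a dynamically generated tool call) may only
        # narrow the ceiling, never widen it, so reasoning loops cannot
        # escalate beyond what the parent context was authorized for.
        return ExecutionContext(ceiling=self.ceiling & sub_ceiling)

ctx = ExecutionContext(ceiling=frozenset({"read_file", "call_api"}))
print(ctx.request("call_api"))    # True: under the ceiling
print(ctx.request("exec_shell"))  # False: exceeds the ceiling
sub = ctx.spawn(frozenset({"call_api", "exec_shell"}))
print(sub.ceiling)                # only "call_api" survives the intersection
```

    The key design property the sketch tries to capture is monotone narrowing: no matter how execution evolves, the effective capability set never exceeds the original ceiling.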

    Repository:

    https://github.com/Madongming/context-capability-model

    The goal is to define a *formal authorization model for AI execution* rather than a framework or implementation.

    Curious to hear feedback from people working on:

    - AI agent systems
    - capability-based security
    - runtime sandboxing
    - AI safety

  • elsove812 a day ago
    Happy to answer questions about the model if anything is unclear.