2 points by paujur, 4 hours ago | 1 comment
  • paujur, 4 hours ago:
    Hey HN - I built TengineAI after thinking a lot about where LLM apps are heading, especially as they rely more on tools.

    Most implementations today look like:

    LLM → function call → backend code → tool

    This works well for demos, but in production it starts to break down:

    - no permission boundary (the model can trigger anything wired in)
    - execution is tightly coupled to app logic
    - hard to audit or observe what actually ran
    - retries, failures, and isolation are handled ad hoc
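    Concretely, the direct pattern tends to look like the sketch below (all names here are illustrative, not from any real app): whatever tool name the model emits is dispatched straight into backend code, with no permission check, isolation, or audit trail in between.

```python
# Hypothetical sketch of "LLM -> function call -> backend code":
# the model's function call is executed directly, unguarded.

def delete_user(user_id: str) -> str:
    # Real backend logic would run here, with no boundary around it.
    return f"deleted {user_id}"

# Registry of everything wired in; the model can trigger any of it.
TOOLS = {"delete_user": delete_user}

def handle_model_output(call: dict) -> str:
    # Whatever the model names gets executed, with the model's arguments.
    return TOOLS[call["name"]](**call["arguments"])

# A single model response can trigger destructive backend code:
print(handle_model_output({"name": "delete_user",
                           "arguments": {"user_id": "42"}}))  # → deleted 42
```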

    The core issue, as I see it, is that LLMs are being used to trigger application code directly.

    I think that’s the wrong abstraction.

    Instead, I’ve been exploring treating tools like infrastructure:

    LLM → tool request → execution layer → tool

    TengineAI sits in that execution layer. It:

    - enforces permissions
    - runs tools in isolation
    - tracks and logs every execution
    - decouples tools from your app/backend
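    The shape of that execution layer can be sketched roughly as follows. This is a minimal illustration of the idea (permission grants per tenant, an audit record for every attempt), not TengineAI's actual API; every name in it is hypothetical.

```python
# Illustrative sketch of "LLM -> tool request -> execution layer -> tool".
# All names are hypothetical, not TengineAI's API.
import time

class ExecutionLayer:
    def __init__(self):
        self.tools = {}      # tool name -> callable
        self.grants = {}     # tenant -> set of allowed tool names
        self.audit_log = []  # one record per execution attempt

    def register(self, name, fn):
        self.tools[name] = fn

    def allow(self, tenant, tool_name):
        self.grants.setdefault(tenant, set()).add(tool_name)

    def execute(self, tenant, tool_name, **kwargs):
        allowed = tool_name in self.grants.get(tenant, set())
        # Every attempt is logged, whether or not it is permitted.
        self.audit_log.append({"tenant": tenant, "tool": tool_name,
                               "args": kwargs, "allowed": allowed,
                               "ts": time.time()})
        if not allowed:
            raise PermissionError(f"{tenant} may not call {tool_name}")
        # Real isolation (sandbox/subprocess) is elided; call directly here.
        return self.tools[tool_name](**kwargs)

layer = ExecutionLayer()
layer.register("lookup_order",
               lambda order_id: {"order_id": order_id, "status": "shipped"})
layer.allow("tenant-a", "lookup_order")

print(layer.execute("tenant-a", "lookup_order", order_id="123"))
# tenant-b has no grant, so the same request is refused — and still logged.
```

    The point of the sketch is the single choke point: real isolation (subprocesses, containers), retries, and observability can all layer onto that one `execute` path instead of being scattered through app code.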

    The goal is to make tools:

    - reusable across agents/apps
    - safe in multi-tenant environments
    - production-ready (not just “model output → run code”)

    Under the hood, TengineAI implements an MCP server, but the focus is on what happens after a tool is invoked: permissions, isolation, and observability.

    Curious how others here are handling tool execution - especially around permissions, isolation, or running tools across multiple users.

    If you want to try it:

    Quickstart: https://tengine.ai/docs/quick-start-5-minutes
    Python example: https://github.com/tengineai/tengineai-python-quickstart

    Happy to answer any questions!