35 points by bharatgel | 7 months ago | 2 comments
  • awsanswers | 7 months ago
    Strongly agree with this approach. Real-world, task-specific LLM query workloads tend to be long-lived workflows. They should be brokered and handled rather than waited on.
    • bharatgel | 7 months ago
      Thanks, glad it resonates :)
  • criticalpudding | 7 months ago
    This is perfect for my use case! I'm building an MCP tool that can take 4 to 10 minutes to complete, and I'm using exactly what you described in the README (having another MCP tool for models to poll results) to work around the async problem, which is not ideal. Hope this gets adopted more widely!
    • bharatgel | 7 months ago
      I'm curious how you're making the polling-for-results approach work right now. Is it conditional logic that depends on the result from MCP, or do you let the LLM keep deciding what to do next?
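The polling workaround discussed above (one tool to kick off a long job, a second tool the model calls to check on it) can be sketched roughly like this. This is a hypothetical illustration, not the commenter's actual implementation: the names `start_job` and `poll_job` and the in-memory job store are assumptions for the sake of the example.

```python
import threading
import uuid

# In-memory job store; a real MCP server would persist this somewhere durable.
_jobs = {}

def start_job(task_fn, *args):
    """First 'tool': launch a long-running task, return a job id to poll with."""
    job_id = str(uuid.uuid4())
    _jobs[job_id] = {"status": "running", "result": None}

    def run():
        result = task_fn(*args)
        _jobs[job_id] = {"status": "done", "result": result}

    threading.Thread(target=run, daemon=True).start()
    return job_id

def poll_job(job_id):
    """Second 'tool': report current status, plus the result once finished."""
    return _jobs.get(job_id, {"status": "unknown", "result": None})
```

The model would call `start_job` once, then repeatedly call `poll_job` until `status` is `"done"`; the downside the commenter alludes to is that each poll burns a model turn and relies on the LLM deciding to keep polling.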