7 points by christalingx 5 hours ago | 6 comments
  • nevon 2 hours ago
    There's zero percent chance that I would proxy all my LLM calls with my API key through some third party service. However, if it was self-hostable, so that I can ensure it is only able to reach the LLM providers, I could see deploying this behind an LLM provider router. If it actually achieves the kind of token use reduction that is advertised, that would be worth paying for - especially in the enterprise. I'm skeptical of using it for product integrations, where prompts are tuned for effectiveness and efficiency, but for ad-hoc usage it probably doesn't matter too much if the phrasing affects the results a bit.
    • christalingx 23 minutes ago
      A self-hosted version is on our roadmap. You'd run the compression engine yourself; we only validate your license key, and nothing else touches our servers.
    • christalingx 2 hours ago
      Hi! You only need our API for the compression part — API keys and LLM usage are managed entirely by your own application. We don't have access to your SaaS, and we don't even know its name. We simply receive the text through our API, compress it, and return the response to your app. Your LLM — whether local, OpenAI, Claude, or any other — then processes it using your own API keys. Your data stays safe with you, and we NEVER ask for your LLM API keys. Let me know if you have any questions :)
  • nateb2022 2 hours ago
    I'm sure I'm not the only one hesitant to give a third party what is effectively MITM access to both my LLM usage and my API keys. If this were capable of running locally, or even offered just an API for compressing the non-sensitive parts of a prompt, I think it would be much easier to adopt.
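    The selective idea suggested here could be sketched as follows; the (text, sensitive) segment tagging and the `remote_compress` callable are illustrative assumptions, not an actual AgentReady API:

```python
# Sketch of compressing only the non-sensitive parts of a prompt, per the
# comment above. The segmentation scheme and remote_compress stand-in are
# hypothetical; sensitive text never leaves the process.
def compress_public_parts(segments, remote_compress):
    """segments: list of (text, is_sensitive) pairs.

    Sensitive segments are passed through untouched; only non-sensitive
    text is handed to the third-party compression service.
    """
    return "".join(
        text if is_sensitive else remote_compress(text)
        for text, is_sensitive in segments
    )
```

    Here `remote_compress` would wrap the actual compression API call, so secrets and customer data stay out of the request entirely.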
    • christalingx 2 hours ago
      Hi! You only need our API for the compression part — API keys and LLM usage are managed entirely by your own application. We don't have access to your SaaS, and we don't even know its name. We simply receive the text through our API, compress it, and return the response to your app. Your LLM — whether local, OpenAI, Claude, or any other — then processes it using your own API keys. Your data stays safe with you, and we NEVER ask for your LLM API keys. Let me know if you have any questions :)
      • nateb2022 an hour ago
        Wouldn't the example code:

          from openai import OpenAI
        
          client = OpenAI(
              base_url="https://agentready.cloud/v1",     # ← only change
              api_key="ak_...",                           # AgentReady key
              default_headers={
                  "X-Upstream-API-Key": "sk-..."          # your OpenAI key
              }
          )
        
          # Every call is now compressed automatically
          response = client.chat.completions.create(
              model="gpt-4o",
              messages=[{"role": "user", "content": your_long_prompt}]
          )
        
        provide you with our OpenAI key (via the X-Upstream-API-Key header)?
        • christalingx 40 minutes ago
          You're absolutely right, and that's a fair catch. Thank you. The example code contradicts what I said.

          The cleaner architecture — and what we should have shown — is a two-step approach where our API only handles compression, and your key never leaves your environment:

            # Step 1: call AgentReady only to compress
            import requests

            compressed = requests.post(
                "https://agentready.cloud/v1/compress",
                headers={"Authorization": "ak_..."},
                json={"messages": [{"role": "user", "content": your_long_prompt}]}
            ).json()

            # Step 2: call OpenAI directly with YOUR key — we never see it
            from openai import OpenAI

            client = OpenAI(api_key="sk-...")
            response = client.chat.completions.create(
                model="gpt-4o",
                messages=compressed["messages"]
            )

          This way AgentReady only touches the text for compression — never your LLM API key. We’ll update the docs and example code accordingly ASAP. Thanks for pushing on this.

  • christalingx 2 hours ago
    Do you need my OpenAI / Claude API keys?

    No. You only need our API key for the compression step. Your LLM keys and usage stay entirely in your own app — we never see them. We receive text, compress it, and return it. Your LLM (local, OpenAI, Claude, or any other) then processes it with your own keys. We don't even know your app's name.

  • christalingx 5 hours ago
    AgentReady is an OpenAI-compatible proxy. You swap your base_url, and every prompt gets compressed before hitting the LLM — 40-60% fewer tokens, same responses, same streaming.

    It uses a deterministic rule-based engine (not another LLM call): removes filler words, simplifies verbose constructions, strips redundant connectors. ~5ms overhead.
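    A minimal sketch of what such a deterministic rule-based pass could look like; the specific rules below are illustrative assumptions, not AgentReady's actual (non-public) rule set:

```python
import re

# Hypothetical rules in the spirit of the engine described above:
# drop filler words and rewrite a few verbose constructions. No LLM call,
# so each pass is cheap and fully deterministic.
FILLER = re.compile(r"\b(?:basically|actually|really|simply|just)\b\s*",
                    re.IGNORECASE)
VERBOSE = [
    (re.compile(r"\bin order to\b", re.IGNORECASE), "to"),
    (re.compile(r"\bdue to the fact that\b", re.IGNORECASE), "because"),
    (re.compile(r"\bin the event that\b", re.IGNORECASE), "if"),
]

def compress(text: str) -> str:
    out = FILLER.sub("", text)                  # remove filler words
    for pattern, short in VERBOSE:              # simplify verbose phrasing
        out = pattern.sub(short, out)
    return re.sub(r"\s{2,}", " ", out).strip()  # collapse leftover spaces
```

    Because every rule is a plain regex substitution, the same input always yields the same output, which is what makes the few-millisecond overhead plausible.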

    Works with any OpenAI-compatible SDK: Python, Node, LangChain, LlamaIndex, CrewAI, Vercel AI SDK.

    Free during beta, no credit card: https://agentready.cloud/hn

    Python: pip install agentready-sdk && agentready init

    Happy to answer any technical questions.

  • francesco93 5 hours ago
    Why has nobody created this before? Clever approach, I'll give it a try for my next SaaS. Thank you!
  • maxwell999 5 hours ago
    That's nice. Awesome idea!