3 points by christalingx | 5 hours ago | 1 comment
  • christalingx | 5 hours ago
    A new privacy-first API

    We redesigned our API (now the official version) to handle token compression with privacy at its core. The only credential we require is your AgentReady key; your LLM API key stays yours, and we never see it:

    -------------------------------------------
    import requests, os
    from openai import OpenAI

    # Step 1: Compress messages with AgentReady
    res = requests.post(
        "https://agentready.cloud/v1/comp...",
        headers={"Authorization": "Bearer ak_live_116e......"},
        json={"messages": [{"role": "user", "content": your_text}]},
    )
    compressed = res.json()["messages"]

    # Step 2: Send to YOUR LLM with YOUR key
    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
    response = client.chat.completions.create(model="gpt-4o", messages=compressed)
    -------------------------------------------
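A note on step 1: if the compression service is unreachable you probably don't want your whole request to fail. Here is a minimal defensive wrapper sketch; the fail-open fallback is my assumption, not documented AgentReady behavior, and you should pass the full endpoint URL from the docs (it is truncated in the snippet above):

```python
import requests


def compress_messages(messages, api_key, url):
    """Return AgentReady-compressed messages, falling back to the
    originals on any network or API error (fail-open, an assumption)."""
    try:
        res = requests.post(
            url,
            headers={"Authorization": f"Bearer {api_key}"},
            json={"messages": messages},
            timeout=10,
        )
        res.raise_for_status()
        return res.json()["messages"]
    except Exception:
        # Fail open: send the uncompressed messages to your LLM instead.
        return messages
```

With this wrapper, step 2 can always run against whatever `compress_messages` returns.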

    Here's everything else we shipped:

    - Optimized compression — the new API compresses data more efficiently, reducing token usage further.

    - OpenClaw integration — AgentReady now works seamlessly with OpenClaw.

    - Benchmark page — we created a benchmark page: "AgentReady — Make the Web Readable for AI Agents".

    - pip & npm packages — integrate AgentReady directly into your Python or JavaScript projects with a single install.

    - Token usage tracking — better visibility into how your tokens are being used.

    - Self-hostable version (coming soon) — compress tokens entirely on your local machine. Nothing leaves your environment. The only external call is a license key check against our server.
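The "optimized compression" and "token usage tracking" bullets are ultimately about measuring how many tokens you save. A back-of-the-envelope sketch for eyeballing that yourself; the ~4-characters-per-token heuristic is a rough assumption, not how AgentReady actually counts tokens:

```python
def estimated_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text (assumption).
    return max(1, len(text) // 4)


def savings_pct(original: str, compressed: str) -> float:
    # Percentage of estimated tokens saved by compression.
    before = estimated_tokens(original)
    after = estimated_tokens(compressed)
    return round(100 * (before - after) / before, 1)
```

For real numbers, use your LLM provider's tokenizer (e.g. tiktoken for OpenAI models) instead of the character heuristic.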

    ------------------------------------------

    Get started in seconds

    We also streamlined the sign-up flow — you can now register and get your API key in less than 10 seconds here:

    https://agentready.cloud/quick-key

    -------------------------------------------

    You can find everything else here:

    homepage: https://agentready.cloud/

    docs: https://agentready.cloud/docs/quickstart