1 point by anon89745 2 hours ago | 1 comment
    Hey HN,

    I was incredibly frustrated that most "AI agents" are either restricted web chatbots or depend on expensive, slow API calls. I wanted a true autonomous background employee that could natively execute terminal commands, manage my server's health, and respond to my Telegram texts 24/7 without melting my electric bill.

    So I built OmniClaw. Instead of leaving a 500W desktop running all day, I engineered the entire Python agent swarm to run flawlessly on an old $30 Android phone using Termux (drawing only about 2 watts).

    Engineering pain points solved:

    Frictionless Local LLMs: The setup.sh installer auto-detects RAM. >6 GB pulls llama3.1:8b; weaker devices (like phones) auto-map to gemma:1b. No configuration needed.

    Native Subprocesses: The worker nodes drop directly into a shell subprocess. To monitor subdomains, a worker writes a bash script to /tmp/, executes nmap, feeds the stdout/stderr back into the LLM context, and parses the results.

    Kernel & Desktop Bridging: I built a C/Rust eBPF bridge so the agent can trace socket layers on Linux. For macOS, there's a zero-friction background SpeechRecognition thread for voice wake. I also built a 1-click Google Colab sandbox (linked in the repo) so you can test the Telegram integration without installing anything locally.
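    For context, the model-selection heuristic boils down to reading total RAM and mapping it to a model tag. A minimal Python sketch of that logic (the real installer is shell; the 6 GB threshold and model names come from the post, while the function names are mine):

    ```python
    def pick_model(total_ram_gb: float) -> str:
        # >6 GB pulls the 8B model; weaker devices fall back to gemma:1b
        return "llama3.1:8b" if total_ram_gb > 6 else "gemma:1b"

    def total_ram_gb(meminfo_path: str = "/proc/meminfo") -> float:
        # MemTotal is reported in kB on Linux/Android (Termux)
        with open(meminfo_path) as f:
            for line in f:
                if line.startswith("MemTotal:"):
                    return int(line.split()[1]) / (1024 ** 2)
        raise RuntimeError("MemTotal not found")
    ```

    Reading /proc/meminfo works identically under Termux, which is what lets one installer cover both servers and old phones.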
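    The write-then-execute loop the workers use is easy to sketch. A hedged approximation (using a harmless `echo` in place of `nmap` so it runs anywhere; the function name and temp-file handling are illustrative, not OmniClaw's actual API):

    ```python
    import subprocess
    import tempfile

    def run_generated_script(script_body: str, timeout: int = 120) -> str:
        """Write an LLM-generated bash script to /tmp, execute it,
        and return combined stdout/stderr for the LLM context."""
        with tempfile.NamedTemporaryFile(
            "w", suffix=".sh", dir="/tmp", delete=False
        ) as f:
            f.write(script_body)
            path = f.name
        result = subprocess.run(
            ["bash", path],
            capture_output=True,
            text=True,
            timeout=timeout,
        )
        # Feed both streams back so the model can see and react to errors
        return f"STDOUT:\n{result.stdout}\nSTDERR:\n{result.stderr}"

    # Stand-in for an nmap subdomain scan
    out = run_generated_script('echo "scanning example.com..."')
    ```

    Capturing stderr as well as stdout matters here: a failed scan is exactly the signal the agent needs to retry with a corrected script.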

    Would love to hear your thoughts on autonomous local execution! I'll be here answering questions.