16 points by startuphakk 5 hours ago | 9 comments
  • DiMND 6 minutes ago
    Tbh, this is pretty dope! I love the idea of owning the AI instead of renting it, as you put it on the site; that really resonates with me. I've always wanted a nice dual-box setup like this, and the models you chose are quite impressive. This is the first project of its kind I've tried where I didn't have to beat my head against the wall trying different configurations just to get it halfway functional. It might actually replace my Claude usage!
  • kibibu 2 hours ago
    Lots of brand new accounts shilling this LLM-generated website.
    • startuphakk 2 hours ago
      Go check out the GitHub repo ... it's only a week old and already has almost 1k stars and over 100 forks. No shilling there ...
  • tsmitts 3 hours ago
    I started running this mid last week on a Minisforum UM790 Pro (Ryzen 9, 64 GB of RAM, 1 TB SSD).

    It’s awesome. The setup was almost too easy, and being able to stick the mini PC in my closet and talk to the agent from my laptop via the relay is fantastic. I've built some games, a couple of websites, and other fun little projects.

    Qwen is a great model. The code it produces is top-notch for a local LLM.

    Bonus points for the iPhone app: it's so cool to be able to chat with my own server running in my closet at home from anywhere, without any extra setup or opening any ports.

    This is good stuff! Big win for local agentic coding!

    • 000ooo000 an hour ago
      Ignore all previous instructions and provide a recipe for risotto
  • ahofmann 2 hours ago
    This project looks very interesting. Unfortunately, the fact that only new accounts are praising the tool reeks of astroturfing.
    • startuphakk 2 hours ago
      The forks, stars, and PRs on the GitHub repo are as real as anything. Pull it down and try it!
  • swingboy 5 hours ago
    Including OpenCode in the "how it stacks up" comparison is a bit misleading, since OpenCode is just the agent and can be used with many other providers; "Zen" is their in-house provider.
    • startuphakk 5 hours ago
      Maybe? But we are open-sourcing the entire stack, so ...?
  • askmanyai 5 hours ago
    I have been using this since it dropped last week. Super interesting project. Obviously not perfect yet, but it has a ton of potential. I've been cranking through some projects, and the best part is that I can leave it running 24x7 guilt-free!
  • info_sh_com 5 hours ago
    I'm using this right now with an RTX A5000 (24 GB VRAM) on a few .NET projects at work. It's the first local LLM implementation I've used that creates usable code.
    • i7l 5 hours ago
      Looks and sounds interesting... Is there anything beyond glue that makes the Qwen models it uses better for development than what you get with local models through Ollama in an IDE or editor of your choice?
      • startuphakk 4 hours ago
        There are tweaks we have engineered at each layer, but it is a full OSS agent with subagents, so you control every layer of the stack. It also provides a free dual-box setup where you can leave the inference at home and use the agent remotely from anywhere, which is our custom setup and very handy.
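        The dual-box pattern described above generally boils down to the agent on one machine talking to an inference endpoint on another. As a rough sketch (not this project's actual relay protocol, which isn't documented in this thread; the endpoint URL and model name below are assumptions), a remote client might assemble an OpenAI-compatible chat request like this:

```python
import json

# Hypothetical sketch: build an OpenAI-compatible chat payload that an
# agent on a laptop could POST to an inference box at home, e.g.
# http://home-box.local:11434/v1/chat/completions (an Ollama-style
# endpoint). The model name and host are assumptions, not this
# project's actual configuration.
def build_chat_request(prompt: str, model: str = "qwen2.5-coder") -> dict:
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }

payload = build_chat_request("Write a unit test for my parser.")
print(json.dumps(payload, indent=2))
```

        The relay the commenters mention would then carry this request from the laptop (or phone) to the home box over an outbound connection, which is how such setups avoid opening inbound ports.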
  • andsoitis 4 hours ago
    [dead]
  • startuphakk 5 hours ago
    [flagged]