319 points by indigodaddy 13 days ago | 22 comments
  • xrd13 days ago
    Isn't it more appropriate to compare this to aider?

    I prefer the command line tools to IDE integration, even though I don't feel like the contextual options are great. In other words, I don't always feel that I can see the changes fully. I like Claude Code's option to expand the result using ctrl-r, and I like the diffs it provides. But, it still feels like there is a way to get better than what I see inside Zed and what I see inside Claude and Aider.

    Maybe an editor that can be controlled and modified on the fly using natural language?

    • JeremyNT12 days ago
      I've settled on aider and vim.

      The best experience I've had is to completely divorce editing from vibe coding. Ask the chatbot to do something, then review the results as if a junior developer submitted them - that means diffs and opening files in the editor.

      Fundamentally I think these are really distinct operations. I understand why the kitchen sink IDEs jam the genAI tools into their UIs, but I don't think it's necessarily important to link the two functions.

      • jspdown12 days ago
        I share the same experience. Looking at diffs inside a terminal is as helpful as looking at diffs inside GitHub. I need code navigation to fully understand the impact of a code change or just the code base.

        I exclusively use Claude Code these days, and I don't remember having accepted the result of a prompt a single time on the first shot. I always have to fix some stuff here and there, improve some tests or comments or even make the code more readable. Being in an IDE is a must for me, and I don't see how this could change.

    • flowingfocus13 days ago
      specifically for working better with diffs, I can recommend tmux + lazygit with this keybinding for quickly opening a floating lazygit:

      bind-key C-g display-popup -E -d "#{pane_current_path}" -xC -yC -w 80% -h 75% "lazygit"

      not only does it allow you to see the diffs, but you can directly discard changes you don't want, stage, commit, etc.
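
      If you want to try it, that bind-key line goes in ~/.tmux.conf (assuming you use the default config path), and you can reload the config without restarting tmux:

        tmux source-file ~/.tmux.conf

      After that, prefix + C-g pops up lazygit over the current pane.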

      • eyegor12 days ago
        Side note: if you're a lazygit fan, consider using gitui as an alternative. Feature-wise they're pretty similar, but gitui is much faster and I find it easier to use.

        https://github.com/gitui-org/gitui

      • carraes13 days ago
        Damn, thanks, I have some floating panes in tmux but never thought about doing something like this lol
      • Syzygies12 days ago
        tmux! That was today's project. I'm using Claude Code Opus 4 to translate a K&R C computer algebra system (Macaulay) from the 1980s into C23. Finally reaching the "compiles but crashes" stage (64-bit issues), I found us living inside the lldb debugger. While I vastly prefer a true terminal to the periscope AI agent view in Cursor, it was still painful having at best a partial view of Claude's shell use, interleaved with our chat. What I wanted was a separate terminal session that the AI and I could share as equal partners.

        tmux is the quickest way to implement such an idea.

      • xrd12 days ago
        Damn, this is brilliant. Thank you.
    • QRY13 days ago
      That's an interesting idea! I struggle with the same issues you've mentioned, that space between the IDE integrated option and pure CLI. Your comment sparked an idea of using something like vim or similar where you can edit the config on the fly and reload it. I wonder how hard it would be to bolt a prompt interface to the front to have it build the editor for you?

      It would likely quickly devolve into typical editor config bikeshedding, only AI powered? At least for me, maybe someone smarter could streamline it enough to be useful though!

      • xrd13 days ago
        I was hoping I would goad someone into doing it.

        But, do it for emacs, ok? </joke>

        Actually, I *do* prefer emacs.

    • jpalomaki12 days ago
      I'm running Claude Code with VSCode. With frequent commits, I can use the source control tab to get a feel for the changes being made. This helps in spotting changes to files that should not have been touched.
      • gwd12 days ago
        I've been using VSCode with aider, but with auto-committing turned off. VSCode has a thing where changes not yet checked into the tree are highlighted in the scrollbars -- blue for modified, green for added, red for removed. You can then click the colored part of the sidebar to see a diff.

        Just for fun I typically also have an emacs window open; "git diff > working.diff" lets you see the diff, then "C-c C-c" on a diff hunk will take you to the place in the file where that change was made.
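
        Spelled out, the flow is just this (C-c C-c in Emacs diff-mode runs diff-goto-source; working.diff is an arbitrary file name):

          git diff > working.diff
          emacs working.diff     # opens in diff-mode; C-c C-c on a hunk jumps to the changed line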

    • syabro13 days ago
      I use claude code + pycharm (when need to check changes, improve something)
    • WhyNotHugo13 days ago
      Being able to open the diff in vimdiff view (or your editor's equivalent) would be a neat approach. Not entirely sure how to actually implement that.
  • jeremy_k13 days ago
    Just wanted to say I had been happily plodding along using AI tools in Zed, which had worked pretty well, but seeing the SST team was behind OpenCode, I decided to finally give a terminal-based agent a try. I was blown away, primarily by the feedback loops: say, OpenCode writing new tests, running the test suite, seeing the tests errored, and looping back to start the whole process again. That looping does not happen in Zed!

    It was the first time I felt like I could write up a large prompt, walk away from my laptop, and come back to a lot of work having been done. I've been super happy with the experience so far.

    • crgwbr13 days ago
      I’ve definitely had exactly that sort of looping work with Zed, as long as I tell it how to run the tests. Are you perhaps not using one of the “thinking” models?
      • jeremy_k10 days ago
        That might be it. I use GitHub Copilot through Zed, as Zed does not accept the Claude subscription (which I'm using with OpenCode). I've primarily used Sonnet 3.7 in Zed; I'll try out the thinking model and see if that changes anything.
    • dohguy12 days ago
      "It was the first time I felt like I could write up a large prompt, walk away from my laptop, and come back to a lot of work having been done. I've been super happy with the experience so far." - this yet-to-be-defined "happiness" metric will be important moving forward. Apart from Opencode & Leap.new (so far) I still haven't found something where I feel as happy.

      I don't know if others share this sentiment but with all these tools/agents coming out, the main personal "metric" I look at when using them is happiness, rather than other more traditional metrics that I look at when evaluating tools.

    • manojlds12 days ago
      That's table stakes at this point. If Zed is not doing it, it's far behind others like Ampcode and the rest.
    • brabel12 days ago
      IntelliJ’s agent, Junie, does that too… and you get a proper UI as well!
      • KronisLV12 days ago
        I've had pretty good experiences with Junie, their UI is really pleasant! Kind of wish I could put in an API key for Sonnet or Gemini myself and get rid of any rate limits.

        Outside of JetBrains IDEs I also quite enjoy RooCode, though stuff like GitHub Copilot is decent.

        • saratogacx12 days ago
          While I haven't used Junie, I have been using an IntelliJ plugin, ProxyAI, for a while. It has a couple of built-in models and more you can pay for ($10/mo), but you can add pretty much any model you want with your own keys (I even tried Perplexity's Sonar model for kicks).

          It's been my go-to, along with Claude Code on the side for bigger stuff.

  • jauntywundrkind13 days ago
    Could really use a comparison versus the seemingly de facto terminal AI coding tool Aider. https://aider.chat/
    • gwd12 days ago
      Been using aider as my daily driver for a while, just gave opencode a spin last night.

      As another comment by the authors said, it's still pretty early days, so there's a lot of missing documentation and functionality; that's the price you pay for being on the bleeding edge.

      Three big differences:

      1. opencode is much more "agentic": It will just take off and do loads of stuff without asking, whereas aider normally asks permission to do everything. It will make a change, the language server tells it the build is broken, it goes and searches for the file and line in the error message, reads it, and tries to fix it; rinse repeat, running (say) "go vet" and "go test" until it doesn't see anything else to do. You can interrupt it, of course, but it won't wait for you otherwise.

      2. aider has much more specific control over the context window. You say exactly what files you want the LLM to be able to see and/or edit, and you can clear the context window when you're ready to move on to the next task (there's a rough example of what that looks like at the end of this comment). The current version of opencode has a way to "compact" the context window, where it summarizes for itself what's been done and then (it seems) drops everything else. But it's not clear exactly what's in and out, and you can't simply clear the chat history without exiting the program. (Or if you can, I couldn't find it documented anywhere.)

      ETA: opencode will tell you how big the context window is, but not what's in the context (AFAICT).

      3. As sort of a side effect of both of those, the "rtt" (round-trip time) seems much shorter for opencode: lots of small actions with quick feedback for opencode, vs long contiguous responses for aider. But that could be more how I happened to use it than something specific.

      I do feel like opencode's UI is more... "sparkling"? The screen is much more "managed", with windows, a status bar, more colors, etc. Aider is much more like the REPL loop of, say, python or sqlite3.
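
      For what it's worth, the explicit context control in point 2 looks roughly like this in an aider session (/add, /drop, and /clear are aider chat commands; the file names here are made up):

        /add internal/server/handler.go internal/server/handler_test.go
        # ...ask for the change, review the proposed diff...
        /drop internal/server/handler_test.go
        /clear    # wipes the chat history but keeps the added files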

    • nsonha12 days ago
      How is it the de facto tool, other than being the first? What's so amazing? I thought the de facto one was Claude Code?
    • airspresso13 days ago
      and Claude Code
  • thdxr13 days ago
    hey one of the authors here

    we're a little over a month into development and have a lot on our roadmap

    the cli uses a client/server model - the TUI is our initial focus, but the goal is to build alternative frontends: mobile, web, desktop, etc

    we think of our task as building a very good code review tool - you'll see more of that side in the following weeks

    can answer any questions here

    • gwd12 days ago
      Like it a lot so far!

      After a brief play last night, the biggest feature of aider I miss is more control over the context window -- saying "/clear" to re-start the conversation from scratch, or specifying files to add or remove as they become relevant or irrelevant. Not clear how much or how long files stay in the context window.

      The other question I have is whether you use Anthropic's "prompt caching" [1] to reduce the cost of the long conversation?

      [1] https://docs.anthropic.com/en/docs/build-with-claude/prompt-...
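
      For anyone unfamiliar with [1]: caching works by marking the end of a stable prefix (system prompt plus project context) with a cache_control block, so repeated turns reuse it at a reduced rate. A minimal sketch of the request shape, with a placeholder model ID and prompt text:

        curl https://api.anthropic.com/v1/messages \
          -H "x-api-key: $ANTHROPIC_API_KEY" \
          -H "anthropic-version: 2023-06-01" \
          -H "content-type: application/json" \
          -d '{
            "model": "claude-sonnet-4-20250514",
            "max_tokens": 1024,
            "system": [{
              "type": "text",
              "text": "<long, stable system prompt + repo context>",
              "cache_control": {"type": "ephemeral"}
            }],
            "messages": [{"role": "user", "content": "Why is this test failing?"}]
          }'

      The usage block in the response (cache_creation_input_tokens / cache_read_input_tokens) shows whether the cache is being hit.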

      • threecheese12 days ago
        There’s some discussion in GitHub that suggests they are not using prompt caching, but recognize the need and are looking at it: https://github.com/sst/opencode/issues/254
        • thdxr12 days ago
          we actually do heavily use prompt caching now
      • thdxr12 days ago
        curious why /clear instead of making a new session?

        and nothing expires out from the session until you get near the context window max - at which point we do a compaction process

        • gwd11 days ago
          I don't think I realized how the sessions worked; it looks like /new would get pretty close. One of the things /clear does in aider is clear the chat history but not the files you've added (which are always added manually).

          Again the theme of the difference is control over the context: partly for cost management, partly because the quality tends to degrade with the length of the context.

          But it may be that the main thing needed is just a change of workflow. So far I am finding opencode's ability to find things out for itself quite refreshing.

          • gwd11 days ago
            > partly because the quality tends to degrade with the length of the context.

            And as if to emphasize this, I was using opencode this morning with Sonnet; but once the context window got up close to 100k, only about 30% through implementing a new feature (which doesn't seem that large to me), it repeatedly failed with "Error: oldString not found in content or was found multiple times", which I take to mean that it wasn't generating patches in the right format.

            This isn't the first time this has happened to me. Maybe my code is just more complicated than other people's, so it gets confused more easily. But having tools to manage the complexity the LLM is exposed to has been critical to making it useful.

    • cchance13 days ago
      Would be cool if it could be an alternative frontend for gemini-cli, Claude Code, acli, and the other TUIs
  • orliesaurus13 days ago
    I feel like the guy behind this project loves getting into internet fights to create drama/clickbait. That being said, it's a cool project - still in its early stages and nowhere near as usable as the other CLIs... but it's a darn shame about all the drama.
    • subarctic12 days ago
      Also really confusing. I think I finally figured out which repo is which
  • brainless12 days ago
    I have a question around pricing:

    I am using Claude Code almost exclusively. I am using the Claude Pro subscription and it allows Claude Code usage, with limits on the number of prompts per 5 hours, according to their site. I have not hit these limits yet even though I use this full-time, daily.

    With other tools, do I have to pay API-based costs, or are there ways to use my subscription? As I see it, the API costs add up quickly. That means we can be stuck with a few tools from the top-tier model companies.

    • vlade1111512 days ago
      > do I have to pay API based costs

      Usually, yes, you do. However, in this case, opencode kinda cheats by using the Anthropic client ID and pretending to be Claude Code, so it can use your existing subscription.

      > We recommend signing up for Claude Pro or Max, running opencode auth login and selecting Anthropic. It’s the most cost-effective way to use opencode.

      https://opencode.ai/docs/
    • siddboots12 days ago
      This ain’t what you asked but I’m using Claude Code with a pro subscription and I get about an hour use out of it before I run out of tokens. Then I spend 4 hours thinking about how to set up my context for the next session.
      • brainless12 days ago
        I have a very different experience. I have built https://github.com/pixlie/SmartCrawler almost entirely on Claude Code with some usage of Google Jules. Even all GitHub workflows are from CC. I still have tokens left since I try multiple other ideas in parallel.

        I have coded a few landing pages and even a full React Native app with the same Claude Pro account in the last month. I do not consider this huge usage, but this is similar to a couple months of my own programming with AI assistance (like Zed + Supermaven).

        Please note: SmartCrawler is not ready for use; you can surely try it out, but I am changing the inner workings and it is half-complete.

        Also, there are many branches of code that I have thrown away because I did not continue with that approach. For example, I was trying a bounding-box approach to detect what data to extract, using each HTML element's position and size in the browser. All coded with Claude Code. Generating such ideas is cheap, in my opinion.

      • herbst12 days ago
        Gave it my first try yesterday, burned through the $20 limit in maybe an hour, and haven't hit the Pro limit yet.

        Guess I really have to look into making this more efficient.

        • blitzar12 days ago
          More walking around the block thinking and more breaks for a cup of coffee.
  • totaa13 days ago
    community drama aside, great to see more open source agentic CLI tools.

    other than the focus on TUI design, does this have any advantage over Claude Code, Aider, or Gemini when using the same model?

    • manishsharan13 days ago
      In my experience, Claude Code is scary good. Gemini CLI is just plain dumb and not worth the time.
      • theshrike7911 days ago
        Gemini CLI in its current state is _dangerous_.

        I asked it to examine a codebase and it went lightning-fast into full refactoring mode and started "fixing" stuff - while profusely apologising because it couldn't get the code to compile :D

        Currently the best way to use Gemini CLI is to instruct Claude to use it when examining large codebases. Gemini investigates -> generates markdown summary -> Claude uses summary to direct itself without wasting context.
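
        Roughly, that means running Gemini non-interactively and pointing Claude at the output; the exact flag and file name here are assumptions, so check gemini --help first:

          gemini -p "Read this repo and write a concise architecture summary" > GEMINI_NOTES.md
          # then tell Claude Code to read GEMINI_NOTES.md instead of crawling the whole tree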

    • thdxr13 days ago
      author here

      we're very focused on UX and less so on LLM performance. we use all the same system prompts/config as claude code

      that said, people do observe better performance because of out-of-the-box LSP support - edit tools return errors and the LLM immediately fixes them

      • dotancohen13 days ago

        > we're very focused on UX and less so on LLM performance

        Could you spin that as an actual advantage for people like me who use Vim, have preferences about filesystems, and back up phones via adb?
  • fortyseven5 days ago
    Been spending the day with this. I have no experience with TUI tools like Claude Code, so I had no expectations going into it. My first go with it was unfortunately soured by a broken /init command, but that actually got fixed almost immediately -- the Discord is fairly busy. That one hitch aside, I authed it into my GitHub Copilot account and immediately I was able to start vibing out, adding some features to a random project of mine. Worked great.

    I'm gonna stick with this for now. The dev is responsive, updates regularly, it works fine, and it just seems to keep getting better.

  • scosman13 days ago
    OpenCode is great. An A-tier TUI. Basically an open Claude Code.
    • graeber_2892712 days ago
      Best part for me is it's model agnostic. I liked Claude Code, it worked better for me than VSCode Copilot Agent, but Claude was too expensive, so I rarely used it, and the price/friction felt bad when I did.

      sst/opencode I can use with my existing Copilot subscription, and select Claude Sonnet 4 freely. I never hit the limit before, and all friction is gone! If Google ever builds a better model, I can switch the same day, and keep my workflows, configs, etc.

      Also, with Claude Code I always felt a little mistrust, since theoretically they benefit from providing a more expensive service. opencode doesn't have this misaligned incentive.

      • CraigJPerry8 days ago
        According to https://models.dev/?search=sonnet+4 the Copilot version of Sonnet 4 is constrained on output tokens - still seems like the cheapest combo though. I'll give it a whirl.
      • indigodaddy12 days ago
        With the opencode+Copilot sub, when you run out of Sonnet 4, can/will it fall back to GPT-4.1 (which I believe is unlimited in the $10 Copilot sub)?
  • cranium12 days ago
    Has anyone done a (somewhat) apples-to-apples comparison between opencode and Claude Code, as they both can use a Claude Pro/Max subscription?

    I'm curious about how they feel to use and their "performance".

  • preciz13 days ago
    Hmm, there is already a similar project with the same name: https://github.com/opencode-ai/opencode
    • isomorphic13 days ago
      https://x.com/thdxr/status/1933561254481666466

      ETA: The above link is at the bottom of the original submission's README (https://github.com/sst/opencode). I posted it without context, and I have no opinion on the matter. Please read theli0nheart's comment below for an X rebuttal.

      • theli0nheart13 days ago
        https://x.com/meowgorithm/status/1933593074820891062

        --

        I’m the founder and CEO of Charm. There are claims circulating about OpenCode which are untrue, and I want to clarify what actually happened.

        In April, Kujtim Hoxha built a project called TermAI—an agentic coding tool built on top of Charm’s open-source stack: Bubble Tea, Lip Gloss, Bubbles, and Glamour.

        Two developers approached him offering UX help and promotion, and suggested renaming the project to OpenCode. One of them bought a domain and pointed it at the repo.

        At the time, they explicitly assured Kujtim that the project and repo belonged entirely to him, and that he was free to walk away at any point.

        We loved what Kujtim built and offered him a full-time role at Charm so he could continue developing the project with funding, infrastructure, and support. The others were informed and declined to match the offer.

        I also mentioned that if the project moved to Charm, a rename might follow. No agreement was made.

        Shortly after, they forked the repo, moved it into their company’s GitHub org, retained the OpenCode name, took over the AUR package, and redirected the domain they owned.

        To clarify specific claims being circulated:

        - No commit history was altered

        - We re-registered AUR packages for continuity

        - Comments were only removed if misleading or promotional

        - The project is maintained transparently by its original creator

        The original project, created by Kujtim, remains open source and active—with the full support of the team at Charm.

        That’s the story. We’ll have more to share soon.

        • hengheng13 days ago
          > an agentic coding tool built on top of Charm’s open-source stack: Bubble Tea, Lip Gloss, Bubbles, and Glamour.

          Okay I feel old now.

          • skeeter202013 days ago
            It's pretty funny to refer to your libraries for building a TUI as an "open-source stack". From the commonly accepted vision of a "stack" it's a pretty thin slice. It's like saying "my over-engineered component library is a stack because it involves 15 layers of abstraction!".

            Neither of these companies are focused on LLMs or AI, they're both just using this as AI dust to sprinkle on top of their products.

          • esafak13 days ago
            Come on man, the BLBG stack is where it's at! What are you using, Github Copilot?!

            Seriously, though: Charm creates CLI tools, not coding agents: https://charm.sh/ https://github.com/orgs/charmbracelet/repositories

            Also, the recent commits from https://github.com/kujtimiihoxha are in https://github.com/opencode-ai/opencode.

            But what does https://sst.dev/ (org behind https://github.com/sst/opencode) have to do with either charm or opencode?? Like Charm, it has nothing to do with coding agents.

            Not for me.

            • robbomacrae12 days ago
              You’re implying the door has now closed for people to get into coding agents. It’s a bit early for that, don’t you think? These guys might one day be considered among the founders of coding agents for all we know.
              • esafak12 days ago
                No I'm just saying I'm not touching a project with these red flags.
        • subarctic12 days ago
          So which project is which here? Is Kujtim sst on GitHub, and is sst/opencode his project? Is opencode-ai/opencode the one that the two developers who went rogue made (if I understood the tweet correctly)? Or did I get it backwards?
        • dizhn12 days ago
          > The original project, created by Kujtim, remains open source and active—with the full support of the team at Charm.

          Anybody know where exactly this is hosted?

          • jrop12 days ago
            • dizhn11 days ago
              From the repo:

              This is the original OpenCode repository, now continuing at Charm with its original creator, Kujtim Hoxha.

              Development is continuing under a new name as we prepare for a public relaunch.

              Follow @charmcli or join our Discord for updates.

    • subarctic12 days ago
      I'm so confused by this. I saw this post on HN, and then ended up installing the opencode-ai/opencode one via Homebrew somehow (I guess I did a Google search and ended up on the wrong GitHub). But then sst/opencode is the one that links to the website opencode.ai, and I was reading the docs on that website. Which one is better?
    • dizhn13 days ago
      Both are Go-based, using charmbracelet's TUI libraries. There's actually a note about the project you posted being developed under Charm now, but it doesn't seem to be public. Maybe they are the same project?
      • vidyesh12 days ago
        Kujtim started opencode a few years back; he was developing it even before other CLI tools were on the market. A few months back thdxr (Dax, of SST) and Adam started contributing to opencode and quickly became the biggest contributors to the project. I think they also wanted to make it more presentable, and Dax bought a domain and so on while working on it.

        At some point Charm approached Kujtim about a deal to move opencode to Charm and keep working on it under them. Dax and Adam wanted to keep it open source as is. (Dax's commits were somehow squashed and removed at this point, too.) So they ended up rewriting opencode, keeping the same name, as a TypeScript TUI, away from Kujtim's vision. And that's where we are: since then the original opencode doesn't seem to have had much progress, but Dax's opencode is being worked on non-stop.

        This is a third-party retelling of the story from some post I read, as I only learned about it after Dax started working on the TS TUI for opencode under SST.

  • theusus12 days ago
    If I understand it correctly, SST didn't build OpenCode.
  • adhamsalama12 days ago
    Would be nice to compare it to Aider.
  • rw_panic0_013 days ago
    The UI looks really great. Just tried it; it's a pity that it doesn't ask for permission before executing write/edit commands. I'm a Goose user, btw.
    • thdxr13 days ago
      it's implemented in the backend, will expose in frontend soon
  • zombot12 days ago
    It doesn't say how to configure a local Ollama model.
    • ethan_smith12 days ago
      You can configure Ollama by setting OPENCODE_MODEL=ollama/MODEL_NAME and OPENCODE_BASE_URL=http://localhost:11434/api in your environment variables.
    • stocksinsmocks12 days ago
      You can’t edit files with Ollama-served models. Codex has the same problem. This is not an issue with Aider.
  • willahmad13 days ago
    UI looks really neat and pleasant to use. Does it create a todo list per prompt similar to Claude Code?
    • daliusd13 days ago
      I have tried it and it does.
  • Tepix13 days ago
    The name is already taken: openCode is a large, important code repository in Europe.
    • brabel12 days ago
      Exactly. And they are great, with lots of actually open-source tools for developers. Hope this bullshit AI Claude copycat is forced to change its name.
  • jappgar12 days ago
    Terminal UIs are such a step backward. They're only attractive to people who have a preexisting emotional attachment to the terminal.

    I should be one of those people, I guess. I love shell scripts and all the rest... but interactive terminal UIs have always sucked.

    So much of what AI companies are putting out is designed to capture developer mindshare. Substantive improvements to their core product (models) are few and far between, so they release these fidgets about once a month to keep the hope alive.

    From that standpoint, a TUI makes sense because it obscures the process and the result enough to sucker more people into the vibe-coding money hole.

    • sothatsit12 days ago
      I think the way we currently work with agents, through a text context and prompts, is just a very natural fit for the terminal. It is a very simple design and makes it very easy to review the past actions of the agent and continue to guide it through new instructions. And then you can always jump into your IDE when you want to jump around the source code to review it in more detail.

      On the other hand, agent integrations in IDEs seem to add a lot more widgets for interacting with agents, and often they put the agent in its own little tab off to the side, which I find harder to work with.

      That's why, even though I love using IDEs and have never been a big terminal person, I much prefer using Claude Code in the terminal rather than using tools like Copilot in VSCode (ignoring the code quality differences). I just find it nicer to separate the two.

      The portability of being able to really easily run Claude Code in whatever directory you want, and through SSH, is a nice bonus too.

      • jappgar12 days ago
        I agree that the current crop of IDE integrations really leave something to be desired.

        I've been using Roocode (Cline fork) a lot recently and while it's overall great, the UI is janky and incomplete feeling. Same as Cursor and all the others.

        I tried Claude Code after hearing great things and it was just Roocode with a worse UX (for me). Most of the people telling me how great it was were talking up the output as being amazing quality. I didn't notice that. I presume the lack of IDE integration makes it feel more magical. This is fun while you're vibing the "first 80%" of your product, but eventually the agents need much more hand holding and collaborative edits to keep things on track.

    • kissgyorgy12 days ago
      It is composable with all the decades-old Linux CLI tools, which you simply can't do with an IDE.

      It also doesn't prevent you from using an IDE at all, and it still fits people who use text editors like Vim and don't want an IDE.

    • lvl15512 days ago
      I’d like to think it’s the most extensible format. If you prefer a GUI, you can put a wrapper around it, but this gives you the most flexibility.
      • jappgar12 days ago
        The underlying process might be extensible, but the TUI likely isn't.

        It makes sense I guess if a TUI is easier to build and ship than a GUI.

        It does make me wonder why devs don't just use the TUI to vibe-code a GUI and compete with Cursor...

        • lvl15512 days ago
          I am not 100% sold on these CLI tools, mainly because they don't optimize for coordination. I'd like to see a more polished AI behind the coordination, based on context, memory, cost, speed, etc. It doesn't make sense to deploy an LLM to do this specific task, or for me to hardcode that logic either. Right now, I'd start with o3 and delegate to other models based on strengths I perceive, but I'd rather have all of that automated for me.