34 points | by blumpy22 | 4 hours ago | 9 comments
  • wolttam | 2 hours ago
    Getting agents used to using `--force` to bypass prompts seems like a bad idea. `--force` is for when the action failed (or would fail) for some reason and you want it to definitely happen this time.

    I think `--yes` or `--yes-do-the-dangerous-thing` is leagues better.
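    The distinction can be sketched with Python's argparse (a hypothetical tool; the flag names are illustrative only, not any real CLI):

```python
import argparse

# Hypothetical CLI sketch: keep "skip the confirmation prompt" (--yes)
# separate from "override a failure condition" (--force).
parser = argparse.ArgumentParser(prog="mytool")
parser.add_argument("--yes", action="store_true",
                    help="assume 'yes' at confirmation prompts (non-interactive)")
parser.add_argument("--force", action="store_true",
                    help="proceed even when the action failed or would fail")

args = parser.parse_args(["--yes"])
print(args.yes, args.force)  # True False
```

    An agent that only wants to suppress prompts passes `--yes`, while `--force` stays reserved for genuinely overriding a failure.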

    • tekacs | 2 hours ago
      In the case of an LLM, it can also bias the model toward reaching for that sort of flag more often, which is less than ideal when it then uses a more ordinary Unix command where that flag means something dangerous.
    • hajekt2 | an hour ago
      [flagged]
    • ihsw | 2 hours ago
      `--non-interactive` has precedent too.
  • rahimnathwani | 28 minutes ago
    This guy took inspiration from gog cli (steipete's cli for Google Workspace, which predates gws cli and is apparently more agent-friendly and token-efficient):

    https://github.com/mvanhorn/cli-printing-press

    He made a whole bunch of agent-friendly CLIs: https://printingpress.dev/

    https://github.com/mvanhorn/printing-press-library/tree/main...

  • tfrancisl | an hour ago
    I don't want "agent-native CLIs" to proliferate, because I'd rather we design CLIs for human use and programmatic (automation) use first. Agents are good at vomiting JSON between tool calls; I am not, and never will be.

    Too many tools stray so wildly from UNIX principles. If we design for agents first we will likely see more and more of this.

    • theshrike79 | an hour ago
      The point of "agent-native CLIs", IMO, is to make them match the statistical average.

      Let the agent use the CLI, and if it guesses the wrong option, make that the RIGHT option.

      Every time it doesn't guess something right, you change it.

      • pmontra | 25 minutes ago
        I would naively suppose that the agent is able to read the man page or run the tool's help command; those usually contain plenty of information. But bending the tool to suit the agent has some value. The GNU-AI suite of userland tools? Unfortunately, it's possible that every model will settle on a different average. If that's the case, we can't bend to every model; models will have to bend to whatever we want to use.
        • theshrike79 | 6 minutes ago
          Of course it can read the man page and run `cmd --help`.

          Now you've wasted context on, what? Learning how to use the tool. And it will waste context on it every single time. (You can write skills to mitigate this a bit, but still.)

          The alternative is to make the tool work as the user (an LLM in this case) expects it to work, without having to resort to the manual.

      • tfrancisl | 33 minutes ago
        > Let the Agent use the CLI and if it guesses the wrong option, you make that the RIGHT option

        This sounds backwards, and it presumes that LLMs, which are statistics machines, get it right when they "average out" to the wrong command. No: fix the agent's behavior; don't change the CLI to accommodate it.

        • alchemist1e9 | 30 minutes ago
          I don't remember the specific examples off the top of my head (some are definitely ffmpeg commands), but I do know that when LLMs keep hallucinating command-line flags that don't exist for a given command, their "suggestion" is often very reasonable, and so many developers are adding support to their tools for common hallucinations.
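          That pattern, accepting a commonly guessed spelling as an alias for the real flag, can be sketched with Python's argparse (the tool and flag names here are hypothetical, not from any real project):

```python
import argparse

# Hypothetical sketch: models keep guessing "--out", so accept it
# as an alias for the real "--output" flag.
parser = argparse.ArgumentParser(prog="convert")
parser.add_argument("--output", "--out", "-o", dest="output",
                    help="output file (--out accepted as a common guess)")

args = parser.parse_args(["--out", "clip.mp4"])
print(args.output)  # clip.mp4
```

          Both spellings land in the same destination, so the "hallucinated" invocation just works.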
          • tfrancisl | 13 minutes ago
            Not to belabor my point, but I think "adding support to tools for common hallucinations" is a bad idea. Sounds like something a vibecoded project being spammed with issues by agents might do. Not so much a serious, mature project, though.
    • alchemist1e9 | an hour ago
      It's also likely that agents would be better off if they didn't have to deal with JSON vomit either. I'm optimistic that agent frameworks will eventually come full circle and realize that concise, linear, teletype-style CLIs, aka old-school UNIX, are actually very effective and efficient for agents as well as humans!
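      As a rough illustration of the token argument, a tab-separated listing of the same records is noticeably shorter than its JSON encoding (the data here is made up):

```python
import json

# Made-up task records, rendered two ways.
tasks = [{"id": 1, "name": "build", "status": "ok"},
         {"id": 2, "name": "test", "status": "fail"}]

as_json = json.dumps(tasks)
as_lines = "\n".join(f"{t['id']}\t{t['name']}\t{t['status']}" for t in tasks)

print(as_lines)
print(len(as_lines), "chars vs", len(as_json), "chars of JSON")
```

      The flat form also pipes straight into `cut`, `grep`, and friends, which is the old-school UNIX point.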
  • debarshri | an hour ago
    I think every CLI is agent-native when invoked from Claude or any other coding agent.

    I was really surprised today. We build Adaptive [1], an access management platform for accessing psql, mysql, VMs, k8s, etc. When you run `adaptive connect <db-name>`, it creates a just-in-time tunnel and connects the user to the database. You cannot do traditional psql operations and so on; that design is by choice.

    Today I was trying to invoke it via Claude and, god damn, it found a way to connect. It created a pseudo-shell in Python, passed the queries through, and treated our CLI like a tool. This would not have been humanly possible, partly because a human would think about the risks and good practice/bad practice and would be scared to write and execute code like that; it just did it and achieved the goal.

    [1] https://adaptive.live

  • walski | 35 minutes ago
    Definitely superhuman ultra-intelligence by the end of Q4!!!!11 Also not able to use tools that aren't explicitly built for machine consumption.
  • jiehong | an hour ago
    This reminds me that agents sometimes really like heredocs in shells, and waste tokens retrying with a file when the heredoc fails.
  • sandermvanvliet | 2 hours ago
    Is it me or are all these articles about using AI effectively and building for AI just, you know, things that we should have been doing all along?

    It feels like most of the “rules” are “don’t be an ass to your consumer”.

    • bensyverson | an hour ago
      Partially, but I think if you design for agents, their needs are different enough from a human's that you end up making different choices.

      I found myself nodding along to the linked tweet/article. Recently I did many rounds of iterative user-centered design with an agent to improve the CLI interface in Jobs [0], a task manager for LLMs. The resulting CLI follows most of these principles.

      One great idea from the tweet that I will be adding: a `feedback` subcommand, for the agent to capture feedback while it works.

      [0]: https://github.com/bensyverson/jobs
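      A `feedback` subcommand along those lines might look roughly like this with argparse (a sketch of the idea only, not the actual Jobs CLI):

```python
import argparse

# Sketch of a hypothetical `feedback` subcommand; not the real Jobs CLI.
parser = argparse.ArgumentParser(prog="jobs")
sub = parser.add_subparsers(dest="command")
fb = sub.add_parser("feedback", help="record a note from the agent")
fb.add_argument("message", help="free-form feedback gathered while working")

args = parser.parse_args(["feedback", "the list filter was unintuitive"])
print(args.command, "->", args.message)
```

      The tool could append such notes to a log the maintainer reviews later, closing the loop on agent usability.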

  • arian_ | 18 minutes ago
    [dead]