45 points by npmipg 6 hours ago | 25 comments
  • petekoomen 3 hours ago
    I'm seeing a lot of negativity in the comments. Here's why I think this is actually a Good Idea. Many command line tools rely on something like this for installation:

      $ curl -fsSL https://bun.com/install | bash
    
    This install script is hundreds of lines long and difficult for a human to audit. You can ask a coding agent to do that for you, but you still need to trust that the authors haven't hidden some nefarious instructions for an LLM in the middle of it.

    On the other hand, an equivalent install.md file might read something like this:

    Install bun for me.

    Detect my OS and CPU architecture, then download the appropriate bun binary zip from GitHub releases (oven-sh/bun). Use the baseline build if my CPU doesn't support AVX2. For Linux, use the musl build if I'm on Alpine. If I'm on an Intel Mac running under Rosetta, get the ARM version instead.

    Extract the zip to ~/.bun/bin, make the binary executable, and clean up the temp files.

    Update my shell config (.zshrc, .bashrc, .bash_profile, or fish's config.fish, depending on my shell) to export BUN_INSTALL=~/.bun and add the bin directory to my PATH. Use the correct syntax for my shell.

    Try to install shell completions. Tell me what to run to reload my shell config.

    It's much shorter, it's written in English, and as a user I know at a glance what the author is trying to do. In contrast with install.sh, install.md makes it easy for the user to audit the programmer's intentions.

    The obvious rebuttal to this is that if you don't trust the programmer, you shouldn't be installing their software in the first place. That is, of course, true, but I think it misses the point: coding agents can act as a sort of runtime for prose, and as a user, the loss in determinism and efficiency this implies is more than made up for by the gain in transparency.
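    For concreteness, the detection the prose above asks for looks roughly like this as a conventional script (a sketch only; the release-asset names are illustrative assumptions, not bun's actual naming scheme):

```shell
#!/usr/bin/env sh
# Sketch of the OS/arch detection described in the install.md prose above.
# Asset names like "bun-linux-x64.zip" are illustrative assumptions.
os=$(uname -s | tr '[:upper:]' '[:lower:]')   # e.g. linux, darwin
arch=$(uname -m)                              # e.g. x86_64, aarch64, arm64

case "$arch" in
  x86_64)        target="${os}-x64" ;;
  aarch64|arm64) target="${os}-aarch64" ;;
  *) echo "unsupported arch: $arch" >&2; exit 1 ;;
esac

echo "would fetch bun-${target}.zip and extract to ~/.bun/bin"
```

    Each special case in the prose (AVX2, musl, Rosetta) becomes more branches like these, which is part of how install.sh grows to hundreds of lines.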

    • jedwhite 2 hours ago
      Thanks for posting the original ideas that led to all this. "Runtime for prose" is the new "literate programming" - early days but a pointer to some pretty cool future things, I think.

      It's already made a bunch of tasks that used to be time-consuming to automate much easier for me. I'm still learning where it does and doesn't work well. But it's early days.

      You can tell something is a genuinely interesting new idea when someone posts about it on X and then:

      1. There are multiple launches on HN based on the idea within a week, including this one.

      2. It inspires a lot of discussion on X, here and elsewhere - including many polarized and negative takes.

      Hats off for starting a (small but pretty interesting) movement.

    • blast 2 hours ago
      Why the specific application to install scripts? Doesn't your argument apply to software in general?

      (I have my own answer to this but I'd like to hear yours first!)

      • petekoomen 2 hours ago
        It does, and possibly this launch is a little window into the future!

        Install scripts are a simple example that current generation LLMs are more than capable of executing correctly with a reasonably descriptive prompt.

        More generally, though, there's something fascinating about the idea that the way you describe a program can _be_ the program that tbh I haven't fully wrapped my head around, but it's not crazy to think that in time more and more software will be exchanged by passing prompts around rather than compiled code.

        • 4b11b4 an hour ago
          > "the way you describe a program _can_ be the program"

          One follow-up thought I had was... It may actually be... more difficult(?) to go from a program to a great description

          • dang 20 minutes ago
            That's a chance to plug Peter Naur's classic "Programming as Theory Building"!

            https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

            https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

            What Naur meant by "theory" was the mental model of the original programmers who understood why they wrote it that way. He argued that the real program is its theory, not the code. The translation of the theory into code is lossy: you can't reconstruct the former from the latter. Naur said this explains why software teams don't do as well when they lose access to the original programmers: they were the only ones with the theory.

            If we take "a great description" to mean a writeup of the thinking behind the program, i.e. the theory, then your comment is in keeping with Naur: you can go one way (theory to code) but not the other (code to theory).

            The big question is whether/how LLMs might change this equation.

        • blast 2 hours ago
          That's basically what I was thinking too: installation is a constrained domain with tons of previous examples to train on, so current agents should be pretty good at it.
    • smaudet 3 hours ago
      > This install script is hundreds of lines long

      Any script can be shortened by hiding commands in other commands.

      LLMs run parameters in the billions.

      Lines of code, as usual, is an incredibly poor metric to go by here.

      • petekoomen 3 hours ago
        My point is not that LLMs are inherently trustworthy. It is that a prompt can make the intentions of the programmer clear in a way that is difficult to do with code because code is hard to read, especially in large volumes.
  • jedwhite 2 hours ago
    I shared a repo on HN last week that lets you use remote execution with these kinds of script files autonomously - if you want to. It had some interesting negative and positive discussion.

    The post mentioned Pete Koomen's install.md idea as an example use case. So now with this launch you can try it with a real installation script!

    I think it's a really interesting idea worth experimentation and exploration. So it's a positive thing to see Mintlify launch this, and that it's already on Firecrawl.dev's docs!

    We can all learn from it.

    Show HN discussion of executable markdown here:

    https://news.ycombinator.com/item?id=46549444

    The claude-run tool lets you execute files like this autonomously if you want to experiment with it.

        curl -fsSL https://docs.firecrawl.dev/install.md | claude-run --permission-mode bypassPermissions
    
    Github repo:

    https://github.com/andisearch/claude-switcher

    This is still a very early-stage idea, but I'm really stoked to see this today. For anyone interested in experimenting with it, it's a good idea to try in a sandboxed environment.

  • andai 3 hours ago
    I'm thinking: isn't that what a readme is? But I guess these days, due to GitHub, the readme is the entire project homepage, and the install instructions are either hidden somewhere in there (hopefully near the top!) or in a separate installation.md file.
  • oftenwrong 5 hours ago
    What is the benefit of having this be a standard? Can't an agent follow a guide just as easily in document with similar content in a different structure?
    • skeptrune 5 hours ago
      Primarily, it gives agents a predictable location. The AI not having to fetch the sitemap or llms.txt and then make a bunch of subsequent queries saves a lot of time and tokens. There's an advantages section[1] within the proposal docs.

      [1]: https://www.installmd.org/#advantages

  • ollien 3 hours ago
    I don't love the concept, but I do wonder if it could be improved by using a skill that packages an install script along with context for troubleshooting. That way you get the benefits of an install script, plus a way to provide pointers for those unfamiliar with the underlying tooling.
  • JoshPurtell 5 hours ago
    At some point in the future (if not already), Claude will install malware less often than humans do, on average. Just like Waymos crash less frequently.

    Once you accept that installation will be automated, standardized formats make a lot of sense. The big question is whether this particular format, which seems solid, gets adopted; that's probably mostly a timing question.

  • 0o_MrPatrick_o0 5 hours ago
    Author should explore Ansible/Puppet/Chef.

    I’m not sure this solution is needed with frontier models.

    • skeptrune 5 hours ago
      Can you explain more? I see how those relate to a very limited extent, but I'm not getting your entire vision.
  • bigbuppo 5 hours ago
    I feel like I should create a project called 'Verify Node.js v20.17.0+' that is totally not malware.
  • arianvanp 3 hours ago
  • rarisma 4 hours ago
    Great, I can now combine the potential maliciousness of a script with the potential vulnerabilities of an AI Agent!

    Jokes aside, this seems like a really weird thing to leave to agents. I'm sure it's useful, but how exactly is this more secure? A bad actor could just prompt-inject Claude (an issue I'm not sure can ever be fixed with our current model of LLMs).

    And surely this is significantly slower than a script: Claude can take 10-20 seconds to check the Node version, if not longer with human approval for each command, while a script could do it in milliseconds.
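    For scale, that Node check done as plain shell is a few deterministic lines (the minimum version here is made up for illustration):

```shell
#!/usr/bin/env sh
# Deterministic Node version check; the "20" minimum is illustrative.
required_major=20
v=$(node --version 2>/dev/null)    # e.g. "v20.17.0"; empty if node is absent
major=${v#v}; major=${major%%.*}   # keep only the major version number

if [ -z "$major" ] || [ "$major" -lt "$required_major" ] 2>/dev/null; then
  echo "Node >= ${required_major} required"
else
  echo "Node ${major} OK"
fi
```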

    Sure, it could help things work across more environments, but this stuff is pretty well standardised, and we have containers.

    I think this part in the FAQ wraps it up neatly:

    """ What about security? Isn't this just curl | bash with extra steps? This is a fair concern. A few things make install.md different:

        Human-readable by design. Users can review the instructions before execution. Unlike obfuscated scripts, the intent is clear.
    
        Step-by-step approval. LLMs in agentic contexts can be configured to request approval before running commands. Users see each action and can reject it.
    
        No hidden behavior. install.md describes outcomes in natural language. Malicious intent is harder to hide than in a shell script.
    
    Install.md doesn't eliminate trust requirements. Users should only use install.md files from sources they trust—same as any installation method. """

    So it is just curl with extra steps. Scripts aren't obfuscated; you can read them. If they are obfuscated, then they aren't going to use an install.md, and you (the user) should really think thrice before installing.

    Step-by-step approval also sorta betrays the initial bit about leaving installation to AI and not wasting time reading instructions.

    Malicious intent is harder to hide, but really, if you have any doubt about an author's potential malfeasance, you shouldn't be running their software. Wrapping Claude around this doesn't make it any safer when possible exploits and malware are likely baked into the software you're trying to install, not the installer.

    tl;dr: why not just ask "@grok is this script safe?"

    Ten more glorious years to installer.sh

    • skeptrune 2 hours ago
      This is some really fantastic feedback, thank you!

      I personally think that prose is significantly easier to read than complex bash and there are at least some benefits to it. They may not outweigh the cons, but it's interesting to at least consider.

      That said, this is a proposal and something we plan to iterate on. Generating install.sh scripts instead of markdown is something we're at least thinking about.

  • reddalo 5 hours ago
    I usually complain about proposed standards not being under the /.well-known namespace, but in this case, wow. I can't even comment.
    • skeptrune 5 hours ago
      better or worse than llms.txt you think?
      • johnisgood 24 minutes ago
        Worse, because many projects have INSTALL.md which is intended to be read and followed by humans, not LLMs.

        (If LLMs can follow it, so be it, but at least humans remain the target audience.)

  • themikesanto 5 hours ago
    I would think that the common bash scripts we already have would provide an agent better context for installation than a markdown file, and even better, they already work without an LLM.

    This is a "solution" looking for a problem.

    • skeptrune 5 hours ago
      I can definitely see where you're coming from and agree to a large extent. I was asking myself that question a lot when thinking about this.

      What pushed me over the edge was actually feeding bash install scripts into agents and seeing them not perform well. It does work, but a lot worse than this install.md thing.

      In the docs for the proposal I wrote the following:

      >install.md files are direct commands, not just documentation. The format is structured to trigger immediate autonomous execution.[1]

      [1]: https://www.installmd.org/

  • imiric 5 hours ago
    Here's a proposal: app.md. A structured text file with everything you want your app to do.

    That way we can have entire projects with nothing but Markdown files. And we can run apps with just `claude run app.md`. Who needs silly code anyway?

  • roywiggins 5 hours ago
    Appropriately, I think this was probably drafted by AI too:

    > How does install.md work with my existing CLI or scripts?

    > install.md doesn't replace your existing tools—it works with them. Your install.md can instruct the LLM to run your CLI, execute your scripts, or follow your existing setup process. Think of it as a layer that guides the LLM to use whatever tools you've already built.

    (It doesn't X — it Ys. Think of it as a Z that Ws. this is LLM speak! I don't know why they lean on these constructions to the exclusion of all else, but they demonstrably do. The repo README was also committed by Claude Code. As much as I like some of the code that Claude produces, its Readmes suck)

    • skeptrune 5 hours ago
      Yeah, removing that line right now. Went too fast and some of this copy is definitely low quality :(. Incredibly ironic for me to say that AI needs more supervision while working at the company proposing this, haha.

      Any other feedback you have about the general idea?

      • roywiggins 5 hours ago
        I think my preferred version of this would be a hybrid. Keep the regular installer, add a file filled with information that an LLM can use to assist a human if the install script fails for some reason.

        If the installer was going to succeed in a particular environment anyway, you definitely want to use that instead of an LLM that might sporadically fail for no good reason in that same environment.

        If the installer fails, then you have a "knowledge base" to help debug it, usable by humans or LLMs. And if the LLM fails too, well, the regular installer failed anyway, so hopefully you're no worse off. If the user runs the helper LLM in yolo mode, then the consequences are on them.
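        A minimal sketch of that hybrid shape, with hypothetical names (install_step standing in for the project's real install.sh, TROUBLESHOOTING.md as the knowledge base):

```shell
#!/usr/bin/env sh
# Hybrid flow: deterministic installer first, knowledge base only on failure.
install_step() {
  false   # stands in for running the real ./install.sh; fails for this sketch
}

if install_step; then
  echo "installed deterministically"
else
  echo "install failed: consult TROUBLESHOOTING.md (usable by a human or an LLM)"
fi
```

        Only the failure branch would ever involve an agent, and only if the user opts in.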

        • skeptrune 5 hours ago
          Acknowledged. The standard includes a link to the llms.txt for a site at the bottom which is intended to give it that "knowledge base" to query.

          I think I agree with you on it needing to assist in event of failure instead of jumping straight to install though. Will think more about that.

  • constantcrying 5 hours ago
    >Installing software is a task which should be left to AI.

    This is such an insane statement. Is this satire?

    • skeptrune an hour ago
      Ok, I've toned that bit down for you!
  • einpoklum 5 hours ago
    > Installing software is a task which should be left to AI.

    Just like installing spice racks is a task which should be left to military engineer corps.

    • skeptrune an hour ago
      Understood, just toned that bit down.
  • dang 2 minutes ago
    [stub for offtopicness]

    Since the article has been changed to tone down its provocative opener, which clearly had a kicking-the-anthill effect, I'm moving those original reactions to this subthread.

  • rvz 5 hours ago
    This has to be a joke right?

    > Installing software is a task which should be left to AI.

    Absolutely not. This is a very bad idea.

    $ curl | bash was bad enough. But $ curl -fsSL | claude looks even worse.

    What could possibly go wrong?

    • andai 3 hours ago
      I gave Claude root to my $3 VPS and I'm delighted to have a server that "configures itself."

      I wouldn't use it for anything serious, but that being said, I think it's in better shape than when I was running it.

    • skeptrune 5 hours ago
      fascinating. i personally (biased bc i work at Mintlify) think a markdown file makes more sense than a bash script because at least Claude kind of has your best interests at heart.
      • constantcrying 5 hours ago
        >i personally (biased bc i work at Mintlify) think a markdown file makes more sense than a bash script because at least Claude kind of has your best interests at heart.

        Most of the largest trends in "how to deploy software" revolve around making things predictable and consistent. The idea of abandoning this in favor of making a LLM do the work seems absurd. At least the bash script can be replicated exactly across machines and will do the same thing in the same situation.

        • skeptrune 5 hours ago
          Yeah, I'm going to add that as one of the downsides to the docs. The stochastic nature of the markdown vs. a script is for sure a reason to not adopt this.
      • vimda 5 hours ago
        Tell that to the weekly thread where Claude nukes your home directory or similar
      • heliumtera 5 hours ago
        >Claude kind of has your best interests at heart.

        That is such a wild thing to say. Unless this whole thing is satire...

        • skeptrune 5 hours ago
          Wait, but being serious: you can prompt the AI when you feed it this file to ask "do you see anything nefarious," or "follow these instructions, but make sure you ask me every time you install something because I want to check the safety," in a way that you can't when you pipe a script into bash.

          Does that make any sense or am I just off my rocker?

          • themikesanto 5 hours ago
            You can do the same thing with any install script you might come across today.
            • skeptrune 5 hours ago
              True, that's a fair point. Do you think there's any merit to the idea that the UX of asking about a markdown file is more natural than a bash script?
              • inlined 5 hours ago
                No. Absolutely not. The opposite in fact. Your bash script is deterministic. You can send it to 20 AIs or have someone fluent read it. Then you can be confident it’s safe.

                An LLM will run the probabilistically likely command each time. This is like using Excel’s ridiculous feature to have a cell be populated by copilot rather than having the AI generate a deterministic formula.

          • imiric 5 hours ago
            [flagged]
      • nathan_compton 3 hours ago
        I try to have my brain have my best interests at heart, personally.
      • esalman 5 hours ago
        > Claude kind of has your best interests at heart

        How we've all been blue-pilled. Sigh..

  • heliumtera 5 hours ago
    >Installing software is a task which should be left to AI

    What?? How do I get off of this train? I used to come to hacker news for a reason...what the fuck am I reading

  • vimda 5 hours ago
    [flagged]
    • dang 2 hours ago
      "Don't be snarky."

      "Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something."

      "Don't be curmudgeonly. Thoughtful criticism is fine, but please don't be rigidly or generically negative."

      https://news.ycombinator.com/newsguidelines.html

    • skeptrune 5 hours ago
      Fascinating. My thinking was that this is an upgrade over a bash script because you can prompt the AI to check it, clear installs with you, or otherwise investigate safety before installing in a way that isn't natural with *.sh. Does that make any amount of sense or am I just crazy?
      • delusional 5 hours ago
        Bash scripts give you visibility into what they are going to do by virtue of being machine instructions in a deterministic language. MD files you pipe into matrix multiplication have a much lower chance of being explainable.
        • skeptrune 5 hours ago
          Yeah, someone else was pointing out that bash scripts are guaranteed to do the same thing on every system, which I think is in the same vein as your feedback. It's for sure a downside of the markdown that I need to explain in the docs behind the proposal.
      • vimda 5 hours ago
        Time and time again, be it "hallucination", prompt injection, or just plain randomness, LLMs have proven themselves woefully insufficient at best when presented with and asked to work with untrusted documents. This simply changes the attack vector rather than solving a real problem
        • TeMPOraL 5 hours ago
          In a computing system, LLMs aren't substituting for code, they're substituting for humans. Treat them accordingly.
  • alex_x 5 hours ago
    I don’t understand how this made it to the front page
    • heliumtera 5 hours ago
      This is hacker news now. Nothing else here to see, only slop. Everything here is: look what I prompted to take advantage of you
  • pvtmert 5 hours ago
    should've been posted on April 1st. would be better suited on that specific date! /s
  • 12_throw_away 5 hours ago
    > "Installing software is a task which should be left to AI."

    So, after teaching people to outsource their reasoning to an LLM, LLMs are now actively coaching folks to use LLMs for tasks for which it makes no sense at all.

    • TeMPOraL 5 hours ago
      Why? One of the major day-to-day benefits of LLMs is that they can deal with all the bullshit of modern computing for you.
      • whattheheckheck 3 hours ago
        It's probably making more bullshit and sloppier bullshit at that
  • wrigby 5 hours ago
    Or just, I don’t know… package your software?
    • skeptrune 5 hours ago
      Intent here is that this would be adopted by more difficult-to-install devtools which are unpackaged to the extent that you need a dependency like a specific version of Node, Python, or a dev lib.
      • bigbuppo 2 hours ago
        I think you want docker?