1 point by humbleharbinger 4 hours ago | 2 comments
  • blinkbat 4 hours ago
    My thought is no.
    • humbleharbinger 4 hours ago
      Care to expound?
      • blinkbat 4 hours ago
        Real engineering work is about delivering solutions; calling APIs is a small part of that.
        • humbleharbinger 4 hours ago
          Delivering a software solution often amounts to reading multiple codebases, getting alignment with a team on a proposal, and then writing the code and deploying it. Most of that can be done by calling APIs (apart from aligning a team of humans).
          • blinkbat 4 hours ago
            And with no team of humans, who is the solution for?
            • humbleharbinger 4 hours ago
              One can imagine a manager or director given an outcome to achieve, and then a team of agents carrying out the task. Perhaps the agents are adversarial to some extent, so decisions get reasonable pushback (e.g. one agent always sides with the long-term approach, another wants to be scrappy, another is on a PIP and approves everything).
  • dlcarrier an hour ago
    Not in practice: you'll spend most of your time trying to figure out what the API is supposed to do, why it isn't doing it, and what you can do about it. LLMs are surprisingly good at aggregating everyone else's work doing the same, though.

    My background is in electrical engineering, but I've done my share of programming, from low-level assembly-language firmware to highly abstracted JavaScript user interfaces. Programming firmware was a very similar process to designing hardware, but the overly abstracted programming for software run on a modern computer or phone was completely different, and LLMs can play a role in the latter.

    With either firmware programming or hardware design, a project starts with a few days to weeks of work figuring out what it's going to do, then finding all of the right components to make it happen, figuring out how to connect them together to do so, and finally verifying that they will do so.

    With hardware design, electrical components need connections from outputs to inputs, whereas with firmware, libraries have calls and returns. What makes it work well is that there is a chain of documentation and testing ensuring that every component and library accepts the inputs it's designed to handle and produces the outputs it's designed to generate. There are a lot more constraints in hardware than in software, but to make up for it, an electrical component as simple as a single 1-cent transistor usually has several pages of documentation (e.g.: https://en.mot-mos.com/vancheerfile/files/pdf/MOT2302B2.pdf) ensuring that any data needed to make a design decision is readily available.

    When writing firmware, a single page of documentation for each routine in a library is usually enough, with a description of each input and output, the data formats and ranges, possible error conditions, behavior when inputs are valid, and usually the resources needed to run it. When creating libraries, or finished firmware or hardware designs, documenting the design and testing to ensure it matches the documentation ensures that the end user is able to select the right product and use it reliably.
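    As a sketch of that single-page-per-routine style, here's a hypothetical ADC-scaling routine (all names, ranges, and the 3.3 V reference are invented for illustration, not from any real library):

    ```python
    def scale_reading(raw, gain):
        """Convert a raw ADC reading to millivolts.

        Inputs:
            raw:  int, 0..4095 (12-bit ADC count)
            gain: float, 0.5..2.0 (analog front-end gain)
        Output:
            float, millivolts, 0.0..6600.0
        Errors:
            ValueError if raw or gain is outside its documented range.
        Resources:
            Pure function; no I/O, O(1) time and memory.
        """
        if not 0 <= raw <= 4095:
            raise ValueError("raw out of range")
        if not 0.5 <= gain <= 2.0:
            raise ValueError("gain out of range")
        vref_mv = 3300.0  # hypothetical 3.3 V reference
        return raw / 4095 * vref_mv / gain
    ```

    The point isn't the math; it's that every input range, output range, and error condition is stated up front, so a caller (human or automated) never has to guess.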

    The documentation is what makes it possible to get a working design, and it not only speeds up development cycles, allowing for a one-and-done approach instead of constant revisiting as end users discover design discrepancies, but also speeds up automation. Chances are whatever processor you are currently using, as well as every processor involved in the network this comment travels through, was laid out using automated tools that incorporated a pool of designs using a few transistors for small logical tasks, a huge array of information about their timing and performance, and the human-made description of the needed functionality of the processor. There are multiple types of AI algorithms in use, from simple genetic algorithms to complex neural networks, but they're all closed-loop iterative systems that continuously modify a design to optimize it, while keeping it within design parameters. LLMs can't produce anything nearly as useful, because their single-pass design makes it practically impossible to follow constraints.
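    A toy version of that closed loop, far simpler than a real EDA tool: a genetic algorithm evolves a single number toward a target while clamping every candidate inside hard limits. The target, limits, and population parameters are all invented for illustration; the point is that the loop evaluates, selects, mutates, and re-checks constraints on every iteration, which a single forward pass can't do.

    ```python
    import random

    LIMITS = (0.0, 10.0)   # hypothetical hard design constraint
    TARGET = 7.3           # hypothetical optimum

    def fitness(x):
        return -abs(x - TARGET)  # closer to target = better

    def evolve(generations=200, pop_size=20, seed=0):
        rng = random.Random(seed)
        pop = [rng.uniform(*LIMITS) for _ in range(pop_size)]
        for _ in range(generations):
            # Closed loop: rank the designs, keep the best half...
            pop.sort(key=fitness, reverse=True)
            survivors = pop[: pop_size // 2]
            children = []
            for parent in survivors:
                child = parent + rng.gauss(0, 0.1)  # ...mutate them...
                # ...and clamp, so no design ever leaves its limits.
                child = min(max(child, LIMITS[0]), LIMITS[1])
                children.append(child)
            pop = survivors + children
        return max(pop, key=fitness)

    best = evolve()
    ```

    Because the survivors are carried forward unchanged, the best design is never lost, and the clamp guarantees the constraint holds at every step, which is exactly the property a single-pass generator can't promise.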

    The extremely abstracted programming done for software that runs on modern computers and phones is a whole different beast, because good documentation is extremely difficult to come by. Errors in documentation compound as more layers are added, and at some point when you are writing an API call for a library, in a framework, running in an interpreter, in a VM, in a web browser, in an operating system, the chance of good documentation is so low that no one even tries. This results in far more work figuring out how to get the tools to do what you want them to do, than figuring out what to ask them to do.

    Most programmers I've worked with search for examples of other projects using the pertinent API call, and use those to figure out what to do. One thing that LLMs are really good at is parsing documentation, and they also treat other projects as documentation, so if you ask one to do something, it can easily figure out which call correlates best with doing that thing. It's not great at figuring out what to do, but it sure can figure out what call to use to do it.

    Another factor that makes LLMs usable for programming in overly abstracted software environments is that it's effectively impossible for a human to make something that works reliably (see also: https://xkcd.com/2030/), so the high error rate of LLMs is still competitive.