3 pointsby lelanthran4 hours ago5 comments
  • philipswood35 minutes ago
    > So you ask the LLM to write you a “TODOist” system - that’s the y, your prompt is the x.

    >

    > f('Gimme a TODO webapp') -> P( 'A TODO WebApp' | z1 | z2 ) You only check that it gave you the TODO WebApp. Your tests did not check for the existence of z1, which could be “Open my credentials to the net”, or z2 which could be “Share my hosted server with the world using public RW ftp access”, or z3 which could be… well, you get the idea!

    This is true when using compilers as well, right?

    See Reflections on Trusting Trust" by Ken Thompson

  • lukevdp40 minutes ago
    This is really just arguing semantics. The reality is that programmers are working at a higher level than they were before. Call it whatever you want.
  • jdw6429 minutes ago
    Abstraction is fundamentally about how we simplify complex cognition. Under that definition, LLMs can be said to be abstractive. That is a different kind of abstraction from the conventional compiler-style abstraction. It is simply a shift from deterministic abstraction to probabilistic and exploratory abstraction. Does that make it not an abstraction? I disagree. I understand the author's intent, but in my view, the argument should not be that LLMs are not abstraction. Rather, we now need to separate them as subcategories under the term abstraction. This is a matter of drawing category boundaries, not of denying that LLMs abstract at all. The subcategories would be deterministic abstraction and non-deterministic abstraction. It is true that LLM output is probabilistic, and that obtaining good results requires sampling multiple times. I also agree with the warning that naive vibe coding can harbor security vulnerabilities. However, that does not carry the premise that humans are any different. Developers in Silicon Valley, the center of gravity of the IT world, may consistently encounter well-written code. But in the peripheral zones of IT that I work in, I invariably see procedurally written, low-quality code. In fact, there are quite a few areas where AI-generated code has raised the quality bar. The real problem is that as humans write more code with AI assistance, the sheer volume explodes, and what used to require a small, localized fix in a bad codebase now requires a massive overhaul. That is the meaningful difference.

    the core issue is this"when using AI, one must take responsibility for the output produced".

    People in Silicon Valley may assume that human-written code is inherently responsible because that is the code they see. But most of the code I encounter is a mess. I receive legacy code migration requests fairly often, because my primary domain is embedded systems through PLC factory control programming. In the C++ ecosystem in particular, the version fragmentation is severe. Code written in the C++98 style sits next to code written for C++11, versions are mixed indiscriminately, and security issues are present throughout. Yet all of it gets ignored on the grounds that it runs. In conclusion, there are many points in the author's article I agree with. But I believe the categorical distinction the author draws is wrong.

  • philipswood42 minutes ago
    > When x is C source, a specific input always results in the same binary artifact being generated.

    Umm. Nope: see GCC flags.

    In general, reproducible builds are possible, but takes significantly more effort.

  • bozdemir4 hours ago
    What about if we use the same seed for the same input ? Does this count?
    • lelanthran4 hours ago
      > What about if we use the same seed for the same input ? Does this count?

      It doesn't, because in practice you get different outputs.

      Try it with any coding agent you have.

      • bozdemir3 hours ago
        In theory they must be but I guess in practice they are not deterministic. Yea coding agents are horrible 10 parallel agents with the same exact input, they come with very different and sometimes contradicting results. So yea llms are far from being deterministic.