The first is that the LLM outputs are not consistently good or bad - the LLM can put out 9 good MRs before the 10th one has some critical bug or architecture mistake. This means you need to be hypervigilant of everything the LLM produces, and you need to review everything with the kind of care with which you review intern contributions.
The second is that LLMs don't learn once they're done training, which means I could spend the rest of my life tutoring Claude and it'll still make the exact same mistakes. That means I'll never get a return on that time and hypervigilance like I would with an actual junior engineer.
That problem leads to the final problem, which is that you need a senior engineer to vet the LLM’s code, but you don’t get to be a senior engineer without being the kind of junior engineer that the LLMs are replacing - there’s no way up that ladder except to climb it yourself.
All of this may change in the next few years or the next iteration, but the systems as they are today are a tantalizing glimpse at an interesting future, not the actual present you can build on.
This, to me, is the critical and fatal flaw that prevents me from using or even being excited about LLMs: That they can be randomly, nondeterministically and confidently wrong, and there is no way to know without manually reviewing every output.
Traditional computer systems whose outputs relied on probability solved this by including a confidence value next to any output. Do any LLMs do this? If not, why can't they? If they could, then the user would just need to pick a threshold that suits their peace of mind and review any outputs that came back below that threshold.
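A rough sketch of what that workflow could look like, assuming the model API exposes per-token log-probabilities (some do) and treating their geometric mean as a crude confidence score; as replies below point out, this measures how typical the text looks, not whether it is true:

  import math

  def confidence(token_logprobs):
      # Crude score: geometric-mean probability of the sampled tokens.
      return math.exp(sum(token_logprobs) / len(token_logprobs))

  def route(output_text, token_logprobs, threshold=0.8):
      # Below the user-chosen threshold, send the output to a human reviewer.
      if confidence(token_logprobs) < threshold:
          return ("needs_review", output_text)
      return ("auto_accept", output_text)

  print(route("Paris is the capital of France.", [-0.05, -0.02, -0.1, -0.03]))
  print(route("The capital of France is Lyon.", [-0.6, -1.9, -0.4, -2.2]))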
The entire universe of information consists of human writing, as far as the training process is concerned. Fictional stories and historical documents are equally “true” in that sense, right?
Hmm, maybe somehow one could score outputs based on whether another contradictory output could be written? But it would have to be a little clever. Maybe somehow rank them by how specific they are? Like, a pair of reasonable contradictory sentences that can be written about the history-book setting indicates some controversy. A pair of contradictory sentences, one about the history book, one about Narnia, are each equally real to the training set, but the fact that they contradict one another is not so interesting.
LLMs do it much more often. One of the many reasons, in the coding area, is the fact that they're trained on both broken and working code. They can propose as a solution a piece of code taken verbatim from a "why is this code not working?" SO question.
Google decided to approach this major problem by trying to run the code before giving the answer. Gemini doesn't always succeed, as it might not have all the needed packages installed, for example, but at least it tries, and when it detects bullshit, it tries to correct it.
Re: contradictory things: as LLMs digest increasingly large corpora, they presumably distill some kind of consensus truth out of the word soup. A few falsehoods aren't going to lead them astray, unless they happen to pertain to a subject that is otherwise poorly represented in the training data.
That would mean understanding the sources of the information they use for inference, and the certainty of the steps they make. Consider:
- "This conclusion is supported by 7 widely cited peer-reviewed papers [list follows]" vs "I don't have a good answer, but consider this idea of mine".
- "This crucial conclusion follows strongly from the principle of the excluded middle; its only logical alternative has been just proved false" vs "This conclusion seems a bit more probable in the light of [...], even though its alternatives remain a possibility".
I suspect that following a steep gradient in some key layers or dimensions may mean more certainty, while following an almost-flat gradient may mean the opposite. This likely can be monitored by the inference process, and integrated into a confidence rating somehow.
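One hedged way to picture that: the "flatness" could be approximated by the entropy of the next-token distribution at each step, where a near-uniform distribution means the model had no strong preference. A minimal sketch, assuming you have access to the raw logits:

  import numpy as np

  def next_token_entropy(logits):
      # Entropy (bits) of the next-token distribution; high = "flat", low = "steep".
      probs = np.exp(logits - logits.max())
      probs /= probs.sum()
      return float(-(probs * np.log2(probs + 1e-12)).sum())

  def certainty_trace(per_step_logits):
      # One entropy value per generated token, ready to fold into a confidence rating.
      return [next_token_entropy(l) for l in per_step_logits]

  # Toy example over a 50k-token vocabulary: peaked vs. almost-flat distribution.
  peaked = np.full(50_000, -20.0); peaked[123] = 5.0
  flat = np.random.normal(0.0, 0.01, 50_000)
  print(next_token_entropy(peaked))  # near 0 bits
  print(next_token_entropy(flat))    # near log2(50000) ~ 15.6 bits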
I mean, I don’t think such a value (it is definitely possible I’m reading it overly-specifically), like, a numerical value, can generally be assigned to the truthiness of a snippet of general prose.
I mean, in your “7 peer reviewed papers” example, part of the point of research (a big part!) is to eventually overturn previous consensus views. So, if we have 6 peer reviewed papers with lots of citations, and one that conclusively debunks the rest of them, there is not a 6/7 chance that any random sentiment pulled out of the pile of text is “true” in terms of physical reality.
Not to mention, humans say things that make sense for humans to say and not a machine. For example, one recent case I saw was where the LLM hallucinated having a Macbook available that it was using to answer a question. In the context of a human, it was a totally viable response, but was total nonsense coming from an LLM.
This differs from closed-form calculations where a calculator is normally constrained to operate--there is one correct answer. In other words "a random calculation mistake" would be undesirable in a domain of functions (same input yields same output), but would be acceptable and even desirable in a domain of uncertainty.
We are surprised and delighted that LLMs can produce code, but they are more akin to natural language outputs than code outputs--and we're disappointed when they create syntax errors, or worse, intention errors.
When I avoid multiplying large numbers in my head, that's because I can easily characterize the problem and reliably use a calculator.
Neither is the same as people trying to use LLMs to unreliably replace critical thinking.
I don't follow this statement: if anything, we absolutely must check the result of an LLM for the reason you mention. For coding, there are tools that attempt to check the generated code for each answer to at least guarantee the code runs (whether it's relevant, optimal, or bug-free is another issue, and one that is not so easy to check without context that can be significant at times).
You got me thinking (less about LLMs, more about humans) that adults do hold many contradictory truths; some require nuance, some require a completely different mental compartment.
Now I feel more flexible about what truth is; as a teen and child I was more stubborn, more rigid.
Sounds a lot like most engineers I’ve ever worked with.
There are a lot of people utilizing LLMs wisely because they know and embrace this. Reviewing and understanding their output has always been the game. The whole “vibe coding” trend where you send the LLM off to do something and hope for the best will teach anyone this lesson very quickly if they try it.
The people training the LLMs redid the training and fine tuned the networks and put out new LLMs. Even if marketing misleadingly uses human related terms to make you believe they evolve.
An LLM from 5 years ago will be as bad as it was 5 years ago.
Conceivably an LLM that can retrain itself on the input that you give it locally could indeed improve somewhat, but even if you could afford the hardware, do you see anyone giving you that option?
LLM "memory" is a larger context with unchanged neural paths.
The ability to formalize and specify the desired functionality and output will become the essential job of the programmer.
If you can formally specify what you need and prove that an LLM has produced something that meets the spec, that's a much higher level of confidence than hoping you have complete test coverage (possibly from LLM generated tests).
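Short of a full formal proof, even a lightweight executable spec gets part of the way there. A sketch of the idea using property-based testing (Hypothesis is one such library), checking an LLM-generated sort routine (the stand-in below) against properties rather than hand-picked examples:

  from hypothesis import given, strategies as st

  def llm_generated_sort(xs):
      # Stand-in for code an LLM produced; the properties below don't care how it works.
      return sorted(xs)

  @given(st.lists(st.integers()))
  def test_output_is_ordered(xs):
      out = llm_generated_sort(xs)
      assert all(a <= b for a, b in zip(out, out[1:]))

  @given(st.lists(st.integers()))
  def test_output_is_a_permutation(xs):
      assert sorted(llm_generated_sort(xs)) == sorted(xs)

It's not a proof, but it checks the code against the spec itself rather than against whatever examples the LLM (or I) happened to think of.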
That's not how they work - they don't have internal models where they are sort of confident that this is a good answer. They have internal models where they are sort of confident that these tokens look like they were human generated in that order. So they can be very confident and still wrong. Knowing that confidence level (log p) would not help you assess.
There are probabilistic models where they try to model a posterior distribution for the output - but that has to be trained in, with labelled samples. It's not clear how to do that for LLMs at the kind of scale that they require and affordably.
You could consider letting it run code or try things out in simulations and use those as samples for further tuning, but at the moment this might still lead the model to forget something else or just make some other arbitrary and dumb mistake that it didn't make before the fine-tuning.
LLMs are ridiculously useful for tasks where false positives (and false negatives) are acceptable but where true positive are valuable.
I've gotten a lot of mileage with prompts like "find bugs in [file contents]" in my own side projects (using a CoT model; before, and in addition to, writing tests). It's also fairly useful for info search (as long as you fact-check afterwards).
Last weekend, I also had o4-mini-high try, for fun, to make sense of and find vulns in a Nintendo 3DS kernel function that I reverse-engineered long ago but that is rife with stack location reuse. It turns out it actually found a real 0day that I had failed to spot, and which would have been worth multiple thousands of dollars before 2021, when Nintendo still cared about security on the 3DS.
See also: https://www.theregister.com/2025/04/21/ai_models_can_generat...
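For the curious, the "find bugs in [file contents]" workflow is only a few lines. This sketch assumes the OpenAI Python SDK with an API key in the environment; the model name is a placeholder and the answer still needs the fact-checking mentioned above:

  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  def find_bugs(path, model="gpt-4o"):  # model name is a placeholder
      with open(path) as f:
          source = f.read()
      resp = client.chat.completions.create(
          model=model,
          messages=[{
              "role": "user",
              "content": "Find bugs in the following code. "
                         "List each suspected bug and the line it occurs on.\n\n" + source,
          }],
      )
      return resp.choices[0].message.content

  print(find_bugs("my_module.py"))  # hypothetical file; treat the answer as leads, not truth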
I think I can confidently assert that this applies to you and I as well.
My analogy may sound like an apples-to-gorillas comparison, but the point of automation is that it performs 100x better than a human, with the highest safety. Just because I can drive under the influence and get off with a fine does not mean a self-driving car should drive without fully operational sensors; both carry the same risk of killing people, but one is under far higher regulatory restrictions.
If an LLM makes a mistake? Companies will get off scot free (they already are), unless there's sufficient loophole for a class-action suit.
This is my exact same issue with LLMs, and it's routinely ignored by LLM evangelists/hypesters. It's not necessarily about being wrong; it's the non-deterministic nature of the errors. They're not only non-deterministic but unevenly distributed, so you can't predict errors and need expertise to review all the generated content looking for errors.
There's also not necessarily an obvious mapping between input tokens and an output since the output depends on the whole context window. An LLM might never tell you to put glue on pizza because your context window has some set of tokens that will exclude that output while it will tell me to do so because my context window doesn't. So there's not even necessarily determinism or consistency between sessions/users.
I understand the existence of Gell-Mann amnesia so when I see an LLM give confident but subtly wrong answers about a Python library I don't then assume I won't also get confident yet subtly wrong answers about the Parisian Metro or elephants.
I only post this because I find it kind of interesting; I balked at blaming non-determinism because it technically isn't non-deterministic, but I came to conclude that, practically speaking, that's the right thing to blame, although maybe there's a better word that I don't know.
But this is also true for programs that are deliberately random. If you program a computer to output a list of random (not pseudo-random) numbers between 0 and 100, then you cannot determine ahead of time what the output will be.
The difference is, you at least know the range of values it will give you and the distribution, and, if programmed correctly, the random number generator will consistently give you numbers in that range with the expected probability distribution.
In contrast, an LLM's answer to "List random numbers between 0 and 100" usually will result in what you expect, or (with a nonzero probability) it might just up and decide to include numbers outside of that range, or (with a nonzero probability) it might decide to list animals instead of numbers. There's no way to know for sure, and you can't prove from the code that it won't happen.
For example, all of the replies I've gotten that are formatted as "Here is the random number you asked for: forty-two."
Which is both absolutely technically correct and very completely missing the point, and it might decide to do that one time in a hundred and crash your whole stack.
There are ways around that, but it's a headache you don't get with rand() or the equivalent for whatever problem you're solving.
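One of those workarounds, sketched with a hypothetical llm_complete() helper standing in for whatever model call you make: validate the reply and retry, instead of letting "forty-two" crash the stack.

  def llm_random_number(llm_complete, lo=0, hi=100, max_attempts=3):
      # Ask the model for a number and refuse anything that isn't an in-range integer.
      prompt = f"Reply with a single integer between {lo} and {hi}, digits only."
      for _ in range(max_attempts):
          reply = llm_complete(prompt).strip()
          try:
              value = int(reply)
          except ValueError:
              continue  # "forty-two", an apology, a list of animals...
          if lo <= value <= hi:
              return value
      raise ValueError(f"no valid number after {max_attempts} attempts")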
It's deterministic in that (input A, state B) always produces output C. But it can't generally be reasoned about, in terms of how much change to A will produce C+1, nor can you directly apply mechanical reasoning to /why/ (A.B) produces C and get a meaningful answer.
(Yes, I know, "the inputs multiplied by the weights", but I'm talking about what /meaning/ someone might ascribe to certain weights being valued X, Y or Z in the same sense as you'd look at a variable in a running program or a physical property of a mechanical system).
Even with a temperature of zero, floating-point rounding, probability ties, MoE routing, and other factors make outputs not fully deterministic, even between multiple runs with identical contexts/prompts.
In theory you could construct a fully deterministic LLM but I don't think any are deployed in practice. Because there's so many places where behavior is effectively non-deterministic the system itself can't be thought of as deterministic.
Errors might be completely innocuous like one token substituted for another with the same semantic meaning. An error might also completely change the semantic meaning of the output with only a single token change like an "un-" prefix added to a word.
The non-determinism is real, both technically and in practice.
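If you want to convince yourself, the experiment is cheap: run the identical prompt repeatedly at temperature 0 and count distinct outputs. A minimal sketch, with generate() as a hypothetical stand-in for whatever client call you use:

  from collections import Counter

  def determinism_check(generate, prompt, runs=20):
      # Tally distinct outputs for the same prompt; a truly deterministic system
      # would always produce a Counter with exactly one entry.
      return Counter(generate(prompt, temperature=0) for _ in range(runs))

  # e.g. determinism_check(my_client_call, "Summarise RFC 2616 in one sentence.")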
For something it made up.
That's a bit more than an embarrassed junior will do to try to save face, usually.
LLMs as currently deployed don't do the same. They'll happily make the same mistake consistently if a mistake is popular in the training corpus. You need to waste context space telling them to avoid the error until/unless the model is updated.
It's entirely possible for good mentors to make junior developers (or any junior position) feel comfortable being realistic in their confidence levels for an answer. It's ok for a junior person to admit they don't know an answer. A mentor requiring a mentee to know everything and never admit fault or ignorance is a bad mentor. That's encouraging thought terminating behavior and helps neither person.
It's much more difficult to alter system prompts or get LLMs to even admit when they're stumped. They don't have meaningful ways to even gauge their own confidence in their output. Their weights are based on occurrences in training data rather than correctness of the training data. Even with RL the weight adjustments are only as good as the determinism of the output for the input which is not great for several reasons.
Because they aren't knowledgeable. The marketing and at-first-blush impressions that LLMs leave as some kind of actual being, no matter how limited, mask this fact and it's the most frustrating thing about trying to evaluate this tech as useful or not.
To make an incredibly complex topic somewhat simple, LLMs train on a series of materials; in this case we'll talk about words. It learns that "it turns out," "in the case of", "however, there is" are all words that naturally follow one another in writing, but it has no clue why one would choose one over the other beyond the other words which form the contexts in which those word series appear. This process is repeated billions of times as it analyzes the structure of billions of written words, until it arrives at a massive statistical model of how likely it is that every word will be followed by every other word or punctuation mark.
Having all that data available does mean an LLM can generate... words. Words that are pretty consistently spelled and arranged correctly in a way that reflects the language they belong to. And, thanks to the documents it trained on, it gains what you could, if you're feeling generous, call a "base of knowledge" on a variety of subjects, in that, by the same statistical model, it has "learned" that "measure twice, cut once" is said often enough that it's likely good advice. But again, it doesn't know why that is, which would be: when building something, measuring a piece, marking it, then measuring it a second or even third time to make sure it's right before you cut optimizes your cuts and avoids wasting material, because the cut is an operation that cannot be reversed.
However, that knowledge has a HARD limit in terms of what was understood within its training data. For example, way back, a GPT model recommended using Elmer's glue to keep pizza toppings attached when making a pizza. No sane person would suggest this, because glue... isn't food. But the LLM doesn't understand that; it takes the question "how do I keep toppings on pizza?", and it says, well, a ton of things I read said you should use glue to stick things together, and ships that answer out.
This is why I firmly believe LLMs and true AI are just... not the same thing, at all, and I'm annoyed that we now call LLMs AI and AI AGI, because in my mind, LLMs do not demonstrate any intelligence at all.
In that case the error was obvious, but these things become "dangerous" for that sort of use case when end users trust the "AI result" as the "truth".
The problem is distinguishing the various reasons people think something is worthwhile, and using the right context.
That requires a lot of intelligence.
The fact that modern language models are able to model sentiment and sarcasm as well as they do is a remarkable achievement.
Sure, there is a lot of work to be done to improve that, especially at scale and in products where humans expect something more than a good statistical "success rate" - they actually expect the precision level they are used to from professionally curated human sources.
Or in short, LLMs don't get satire.
I like to highlight the fundamental difference between fictional qualities of a fictional character versus actual qualities of an author. I might make a program that generates a story about Santa Claus, but that doesn't mean Santa Claus is real or that I myself have a boundless capacity to care for all the children in the world.
Many consumers are misled into thinking they are conversing with an "actual being", rather than contributing "then the user said" lines to a hidden theater script that has a helpful-computer character in it.
Throw in some noise-reduction that disregards too-low probabilities, and that's basically it.
This dials down the usual chaos of Markov chains, and makes their output far more convincing.
Yes, that's really what all this fuss is about. Very fancy Markov chains.
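For anyone who hasn't played with the un-fancy version, a word-level Markov chain fits in a few lines; the "fancy" part is that an LLM conditions on a long context through a learned model instead of a lookup table keyed on the previous word alone. A toy sketch:

  import random
  from collections import defaultdict

  def train_bigram(text):
      # Count which word follows which: the crudest possible "next-token" model.
      words = text.split()
      table = defaultdict(list)
      for prev, nxt in zip(words, words[1:]):
          table[prev].append(nxt)
      return table

  def generate(table, start, length=20):
      out = [start]
      for _ in range(length):
          options = table.get(out[-1])
          if not options:
              break
          out.append(random.choice(options))  # sample the next word from observed followers
      return " ".join(out)

  corpus = "the cat sat on the mat and the dog sat on the rug"
  print(generate(train_bigram(corpus), "the"))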
It is the way in which the prediction works, that leads to some form of intelligence.
If this is the case, I can't take your company at all seriously. And if it isn't, then why is reviewing the output of an LLM somehow more burdensome than having to write things yourself?
Also, people aren't meant to be hyper-vigilant in this way.
Which is a big contradiction in the way contemporary AI is sold (LLMs, self-driving cars): they replace a relatively fun active task for humans (coding, driving) with a mind-numbing passive monitoring one that humans are actually terrible at. Is that making our lives better?
See also https://www.londonreviewbookshop.co.uk/stock/the-unaccountab...
If a tech works 80% of the time, then I know that I need to be vigilant and I will review the output. The entire team structure is aware of this. There will be processes to offset this 20%.
The problem is that when the AI becomes > 95% accurate (if at all) then humans will become complacent and the checks and balances will be ineffective.
Maybe people here are used to good code bases, so it doesn't make sense to them that 80% is good enough, but I've seen some bad code bases (that still made money) that would be much easier to work on if they didn't reinvent the wheel and didn't follow patterns that are decades old and that no one uses any more.
My list so far is:
* Runs locally on local data and does not connect to the internet in any way (to avoid most security issues)
* Generated by users for their own personal use (so it isn't some outside force inflicting bad, broken software on them)
* Produces output in standard, human-readable formats that can be spot-checked by users (to avoid the cases where the AI fakes the entire program & just produces random answers)
I suspect software will stumble into the strategy deployed by the big 4 accounting firms and large law firms - have juniors take the first pass and have the changes filter upwards in seniority, with each layer adding comments and suggestions and sending it back down to be corrected, until they are ready to sign off on it.
This will be inefficient and wildly incompatible with agile practice, but that's one possible way for juniors to become mid-level, and eventually seniors, after paying their dues. It absolutely is inefficient in many ways, and is mostly incompatible with the current way of working, as merge-sets have to be considered in a broader context all the time.
The demographic shift over time will eventually lead to degradation of LLM performance, because more content will be of worse quality, and transformers are a concept that loses symbolic inference.
So the assumption that LLMs will keep increasing in performance will only hold for the current generation of software engineers, whereas the next generations will automatically lead to worse LLM performance once they've replaced the demographic of the current seniors.
Additionally, every knowledge resource that led to the current generation's advancements is dying out due to proprietarization.
Courses, wikis, forums, tutorials... they all are now part of the enshittification cycle, which means that in the future they will contain less factual content per actual amount of content - which in return will also contribute to making LLM performance worse.
Add to that the problems that come with such platforms, like the stackoverflow mod strikes or the ongoing reddit moderation crisis, and you got a recipe for Idiocracy.
I decided to archive a copy of all books, courses, wikis and websites that led to my advancements in my career, so I have a backup of it. I encourage everyone to do the same. They might be worth a lot in the future, given how the trend is progressing.
This is not a counter-argument, but this is true of any software engineer as well. Maybe for really good engineers it can be 1/100 or 1/1000 instead, but critical mistakes are inevitable.
While we do see this problem when relying on junior knowledge workers, there seems to be a more implicit trust of LLM outputs vs. junior knowledge workers. Also: senior knowledge workers are also subject to errors, but knowledge work isn't always deterministic.
At the end of the day, AI can't tell us what to build or why to build it. So we will always need to know what we want to make or what ancillary things we need. LLMs can definitely support that, but knowing ALL the elements and gotchas is crucial.
I don't think that removes the need for juniors; I think it simplifies what they need to know. Don't bother learning the intricacies of the language or optimization tricks or ORM details - the LLM will handle all that. But you certainly will need to know about catching errors and structuring projects and what needs testing, etc. So juniors will not be able to "look under the hood" very well, but will come in learning to be a senior dev FIRST and a junior dev optionally.
Not so different from the shift from everyone programming in C++ during the advent of PHP with "that's not really programming" complaints from the neckbeards. Doing this for 20 years and still haven't had to deal with malloc or pointers.
Compare to the large langle mangles, which somewhat routinely generate weird and wrong stuff, it's entirely unpredictable what inputs may trip it, it's not even reproducible, and nobody is expected to actually fix that. It just happens, use a second LLM to review the output of the first one or something.
I'd rather have my lower-level abstractions be deterministic in a humanly-legible way. Otherwise in a generation or two we may very well end up being actual sorcerers who look for the right magical incantations to make the machine spirits obey their will.
However, this creates a significant return on investment for opensourcing your LLM projects. In fact, you should commit your LLM dialogs along with your code. The LLM won't learn immediately, but it will learn in a few months when the next refresh comes out.
Wholeheartedly agree with this.
I think code review will evolve from "Review this code" to "Review this prompt that was used to generate some code"
Any software engineer who puts a stamp of approval on software they have not read and understood is committing professional malpractice.
Mostly because we almost never read code to understand the intention behind the code: we read it to figure out why the fuck it isn't working, and the intentions don't help us answer that.
Absolutely, for different reasons including later reviews / visits to the code + prompts.
For example, Cursor has checked-in rules files, and there is a way to have the model update the rules itself based on the conversation.
So even if 9 out of 10 are wrong, you can just can it.
Even the worst programmer understands their own code, whereas AI produces code no human has ever understood.
I disagree: LLMs are not replacing the kind of junior engineer who becomes a senior one. They replace the "copy from StackOverflow until I get something mostly working" coders - those who end up going up the management ladder, not the engineering one. LLMs are (atm) not replacing the junior engineers who use the tools to get an idea and then read the documentation.
Unless you're hyper-specialised within a large organisation, you can't bring the same degree of obsession to every part in the process, there will always be edges.
Even an artisan who hand-builds everything that matters may take some shortcuts in where they get their tools from, or the products they use to maintain them.
In a big org, you might have a specialist for every domain, but on small teams you don't.
And ultimately I've got other things to do with my life besides learning to write Cmake from scratch.
This is temporary. The new, more global memory features in ChatGPT are a good example of how this is already starting to decrease as a factor. Yes, it's not quite the same as fine-tuning or RLHF, but the impact is still similar, and I suspect that the tooling for end users or local tenant admins to easily create more sophisticated embeddings is going to improve very quickly.
"At last, there breezed into my office the most senior manager of all, a general manager of our parent company, Andrew St. Johnston. I was surprised that he had even heard of me. "You know what went wrong?" he shouted--he always shouted-- "You let your programmers do things which you yourself do not understand." I stared in astonishment. He was obviously out of touch with present day realities. How could one person ever understand the whole of a modern software product like the Elliott 503 Mark II software system? I realized later that he was absolutely right; he had diagnosed the true cause of the problem and he had planted the seed of its later solution."
My interpretation is that whether shifting from delegation to programmers, or to compilers, or to LLMs, the invariant is that we will always have to understand the consequences of our choices, or suffer the consequences.
> Remember the first time an autocomplete suggestion nailed exactly what you meant to type?
I actually don't, because so far this only happened with trivial phrases or text I had already typed in the past. I do remember however dozens of times where autocorrect wrongly "corrected" the last word I typed, changing an easy to spot typo into a much more subtle semantic error.
I have also noticed a SHARP decline in autocorrect quality.
I don't know how I feel about that. I suspect it's not going to be great for society. Replacing blue-collar workers with robots hasn't been super duper great.
That's just not true. Tractors, combine harvesters, dishwashers, washing machines, excavators - we've repeatedly revolutionised blue-collar work and made it vastly, extraordinarily more efficient.
I'd suspect that this equipment also made the work more dangerous. It also made it more industrial in scale and capital costs, driving "homestead" and individual farmers out of the business, replaced by larger and more capitalized corporations.
We went from individual artisans crafting fabrics by hand, to the Industrial Revolution where children lost fingers tending to "extraordinarily more efficient" machines that vastly out-produced artisans. This trend has only accelerated, to where humans consume and throw out an order of magnitude more clothing than a generation ago.
You can see this trend play out across industrialized jobs - people are less satisfied, there are social implications, and the entire nature of the job (and usually the human's independence) is changed.
The transitions through industrialization have had dramatic societal upheavals. Focusing on the "efficiency" of the changes, ironically, misses the human component of these transitions.
How many acres do you want to personally farm as your never-ending, no sick days, no vacations ever existence?
Why do people keep saying things like this? "Exponential rate"? That's just not true. So far the benefits are marginal at best and limited to relatively simple tasks. It's a truism at this point, even among fans of AI, that the benefits of AI are much more pronounced at junior-level tasks. For complex work, I'm not convinced that AI has "scaled the creation side of knowledge work" at all. I don't think it's particularly useful for the kind of non-trivial tasks that actually take up our time.
Amdahl's Law comes into play. If using AI gives you 200% efficiency on trivial tasks, but trivial tasks only take 10% of your time, then you've realized a whopping 5.3% productivity boost. I do not actually spend much time on boilerplate. I spend time debugging half-baked code, i.e. the stuff that LLMs spit out.
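For anyone who wants to check that number, Amdahl's law gives it directly:

  # Amdahl's law: overall speedup when a fraction p of the work gets a factor-s speedup.
  p, s = 0.10, 2.0                 # 10% of time on trivial tasks, done twice as fast
  speedup = 1 / ((1 - p) + p / s)  # = 1 / 0.95
  print(f"{(speedup - 1) * 100:.1f}% overall boost")  # ~5.3%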
I realize I'm complaining about the third sentence of the article, but I refuse to keep letting people make claims like this as if they're obviously true. The whole article is based on false premises.
Mostly because all kinds of systems are made for humans - even when we as a dev team were able to pump out features, we got pushback. Exactly because users had to be trained, users would have to be migrated, and all kinds of things would have to be documented and accounted for that were tangential to the main goals.
So bottleneck is a feature not a bug. I can see how we should optimize away documentation and tangential stuff so it would happen automatically but not the main job where it needs more thought anyway.
That's a very human reflective process that requires time.
Which, once you stop to think about it, is insane. There is a complete lack of asking why. In fact, when you boil it down to its core argument, it isn't even about AI at all. It is effectively the same grumbling from management layers heard for decades now, where they feel (emphasis) that their product development is slowed down by those pesky engineers and other specialists making things too complex, etc. But now it's framed around AI, with unrealistic expectations dialed up.
Maybe it's like the transformation of local-to-global that traveling musicians felt in the early 1900s: now what they do can be experienced for free, over the radio waves, by anyone with a radio.
YouTube showed us that video needn't be produced only by those with $10M+ budgets. But we still appreciate Hollywood.
There are new possibilities in this transformation, where we need to adapt. But there are also existing constraints that don't just disappear.
To me, the "Why" is that people want positive experiences. If the only way to get them is to pay experts, then they will. But if they have alternatives, that's fine too.
The answer to this seems obvious to me. Buyers seek the lowest price, so sellers are incentivized to cut their cost of goods sold.
Investors seek the highest return on investment (people prefer more purchasing power than less purchasing power), so again, businesses are incentivized to cut their cost of goods sold.
The opposing force to this is buyers prefer higher quality to lower quality.
The tradeoff between these parameters is in constant flux.
Reviewing human code and writing thoughtful, justified, constructive feedback to help the author grow is one thing - too much of this activity gets draining, for sure, but at least I get the satisfaction of teaching/mentoring through it.
Reviewing AI-generated code, though, I'm increasingly unsure there's any real point to writing constructive feedback, and I can feel I'll burn out if I keep pushing myself to do it. AI also allows less experienced engineers to churn out code faster, so I have more and more code to review.
But right now I'm still "responsible" for "code quality" and "mentoring", even if we are going to have to figure out what those things even mean when everyone is a 10x vibecoder...
Hoping the stock market calms down and I can just decide I'm done with my tech career if/when this change becomes too painful for dinosaurs like me :)
> AI also allows less experienced engineers to churn out code faster, so I have more and more code to review
This to me has been the absolute hardest part of dealing with the post-LLM fallout in this industry. It's been so frustrating for me personally that I took to writing my thoughts down in a small blog post humorously titled
"Yes, I will judge you for using AI...",
in fact I say nearly this exact sentiment in it.
> Generating more complex solutions that are possibly not understood by the engineer submitting the changes.
I'd possibly remove "possibly" :-)
> I'd possibly remove "possibly" :-)
might switch it to "without a doubt" lol
I hate these kinds of comments. I'm tired of flagging them for removal, so they pollute the code base more and more; it's like people don't realise how stupid a comment this is:
# print result
print(result)
I'm yet to see a coding agent do what I asked for; so many times the solution I came up with was a shorter, cleaner, and better approach than what my IDE decided to produce... I think it works well as a rubber duck where I can explore ideas, but in my case that's about it.
Basically my job as a staff engineer these days, though not quite that number. I try to pair with those junior to me on some dicey parts of their code at least once a week to get some solid coding time in, and I try to do grunt work that others are not going to get to that can apply leverage to the overall productivity of the organization as a whole.
Implementing complicated sub-systems or features entirely from scratch by myself, though? Feels like those days are long gone for me. I might get a prototype or sketch out and have someone else implement it, but that's about it.
I sometimes use it to write utility classes/functions in totality when I know the exact behavior, inputs, and outputs.
It's quite good at this. The more standalone the code is, the better it is at this task. It is interesting to review the approaches it takes with some tasks and I find myself sometimes learning new things I would otherwise have not.
I have also noticed a difference in the different models and their approaches.
In one such case, OpenAI dutifully followed my functional outline while Gemini converted it to a class based approach!
In any case, I find that reviewing the output code in these cases is a learning opportunity to see some variety in "thinking".
How does this work? Do you allow merging without reviews? Or are other engineers reviewing code way more than you?
But in terms of time spent, thankfully still spend more time writing.
It'd be even more thankless if, instead of writing good feedback that somebody can learn from (or that can spark interesting conversations I can learn from), you just said "nope GPT, it's not secure enough", regenerated the whole PR, then read all the way through it again. Absolute tedium nightmare.
A number of the docs I'm working with describe using ambient dictation as a game changer. Using the OODA loop analogy of the author: they are tightening the full OODA loop by deferring documentation to the end of the day. Historically this was a disaster, because they'd forget the first patient by the end of the day. Now, the first patient's automatically dictated note is perhaps wrong, but rich with details that spark sufficient remembrance.
Of course MBAs will use this to further crush physicians with additional workload, but for a time, it may help.
That is, until we mutually decide on removing our agency from the loop entirely. And then what?
I would've liked for the author to be a bit specific here. What exactly could this "very painful and slow transition" look like? Any commenters have any idea? I'm genuinely curious.
Why is that a "uniquely human ability"? Machine learning systems are good at scoring things against some criterion. That's mostly how they work.
Something I learned from working alongside data scientists and financial analysts doing algo trading is that you can almost always find great fits for your criteria; nobody ever worries about that. It's coming up with the criteria that everyone frets over, and even more than that, you need to beat other people at doing so - just being good or even great isn't enough. Your profit is the delta between where you are compared to all the other sharks in your pool. So LLMs are useless there; getting token-predicted answers is just going to get you the same as everyone else, which means zero alpha.
So - I dunno about uniquely human? But there's definitely something here where, short of AGI, there's always going to need to be someone sitting down and actually beating the market (whatever that metaphor means for your industry or use case).
If you're doing like, real work, solving problems in your domain actually adds value, and so the profits you get are from the value you provide.
But "finance" is very broad and covers very real and valuable work like making loans and insurance - be careful not to be too broad in your condemnation.
Also ignores capital gains - and small market moves are the very mechanism by which capital formation happens.
Put another way - capital was accrued long before we had a stock market, and even longer before we had computers deciding which stocks to sell or buy.
It’s a very rubbery, human oriented activity.
I’m sure this will be solved, but it won’t be solved by noodling with prompts and automation tools - the humans will have to organise themselves to externalise expert knowledge and develop an objective framework for making ‘subjective decisions about the relative value of things’.
And contrary to the article, idea-generation with LLM support can be fun! They must have tested full replacement or something.
I see you have never managed an outsourced project run by a body shop consultancy. They check the boxes you give them with zero thought or regard to the overall project and require significant micro managing to produce usable code.
No.
> Multiply that by a thousand and aim it at every task you once called “work.”
If you mean "menial labor" then sure. The "work" I do is not at all aided by LLMs.
> but our decision-making tools and rituals remain stuck in the past.
That's because LLMs haven't eliminated or even significantly reduced risk. In fact they've created an entirely new category of risk in "hallucinations."
> we need to rethink the entire production-to-judgment pipeline.
Attempting to do this without accounting for risk or how capital is allocated into processes will lead you into folly.
> We must reimagine knowledge work as a high-velocity decision-making operation rather than a creative production process.
Then you will invent nothing new or novel and will be relegated to scraping by on the overpriced annotated databases of your direct competitors. The walled garden just raised the stakes. I can't believe people see a future in it.
> Redesigning for Decision Velocity
This is perhaps the most fundamental problem. In the past, tools took care of the laborious and tedious work so we could focus on creativity. Now we are letting AI do the creative work and asking humans to become managers and code reviewers. Maybe that's great for some people, but it's not what most problem solvers want to be doing. The same people who know how to judge such things are the people who have years of experience doing these things. Without that experience you can't have good judgement.
Let the AI make it faster and easier for me to create; don't make it replace what I do best and leave me as a manager and code reviewer.
The parallels with grocery checkouts are worth considering. Humans are great at recognizing things, handling unexpected situations, and being friendly and personable. People working checkouts are experts at these things.
Now replace that with self serve checkouts. Random customers are forced to do this all themselves. They are not experts at this. The checkouts are less efficient because they have to accommodate these non-experts. People have to pack their own bags. And they do all of this while punching buttons on a soulless machine instead of getting some social interaction in.
But worse off is the employee who manages these checkouts. Now, instead of being social, they are security guards and tech support. They are constantly having to troubleshoot computer issues and teach disinterested and frustrated beginners how to do something that should be so simple. The employee spends most of their time as a manager and watchdog, looking at a screen that shows the status of all the checkouts, looking for issues, like a prison security guard. This work is passive and unengaging, yet requires constant attention - something humans aren't good at. And when they do interact with others, it is in situations where those people are upset.
We didn't automate anything here; we just changed who does what. We made customers into the people doing the checkouts, and we made lower-level staff into their managers, plus tech support.
This is what companies are trying to do with AI. They want to have fewer employees whose job it is to manage the AIs, directing them to produce. The human is left assigning tasks and checking the results - managers of thankless and soulless machines. The credit for the creation goes to the machines while the employees are seen as low skilled and replaceable.
And we end up back at the start: trying to find high skilled people to perform low skilled work based on experience that they only would have had if they had being doing high skilled work to begin with. When everyone is just managing an AI, no one will know what it is supposed to do.
I think vibe coding might be more successful for people doing things an experienced developer can do in their sleep with a few lines of code in Django or something. Something a non programmer might have previously done with some no code tool.
My experience is that middle manager gatekeepers are the most reluctant to participate in building knowledge systems that obsolete them though.
How was that conclusion reached? And what is meant by knowledge workers? Any work with knowledge is exactly the domain of LLMs. So, LLMs are indeed knowledge workers.
This is a FAQ: <https://news.ycombinator.com/newsfaq.html>
Counterpoint: that decision has to be made only once (probably by some expert). AI can incorporate that training data into its reasoning and voila, it becomes available to everyone. A software framework is already a collection of good decisions, practices, and tastes made by experts.
> An MIT study found materials scientists experienced a 44% drop in job satisfaction when AI automated 57% of their “idea-generation” tasks
Counterpoint : Now consider making material science decisions which requires materials to have not just 3 properties but 10 or 15.
> Redesigning for Decision Velocity
Suggestion : I think this section implies we must ask our experts to externalize all their tastes, preferences, top-down thinking so that other juniors can internalize those. So experts will be teaching details (based on their internal model) to LLMs while teaching the model itself to humans.