It made me think of the conductor, seemingly the job in the orchestra requiring the least skill. All you do is wave the baton; no need to ever play an instrument. If LLMs are doing the hard part (writing code), then we can be the conductor waving the baton.
But of course the visuals are misleading. Being a conductor doesn't take the least skill; it takes the most. He hears every instrument individually, he knows the piece intimately, and through his conducting he brings a unique expression to a familiar work.
LLMs have automated the musician's part. They'll play whatever you want. No doubt a powerful tool in the hands of a skilled conductor. And an incredible tool for someone who can't play to generate music for themselves.
There's no shortage of "I built it and they won't come" posts here on HN, predating LLMs by decades. Because code has never been the hard part of "software as a business". LLMs have driven this point home. Code has never been cheaper. Business has never been harder.
But there are many ways to apply LLMs in the development flow.
Only specifying features broadly, as a product manager might, is definitely highly luck-dependent with respect to how buggy the result will turn out.
But understanding the feature, determining broadly what needs to be done, asking the LLM to do it, and then verifying that the resulting change makes sense according to your mental model of the software is definitely not.
Also, I disagree with your implied message. I frequently struggle to articulate solutions even when I know how they'd work.
This should apply to art even more, because art is strongly supported by emotions - and people may know the feeling of the emotion (of the image), but not have an explicit framework for it yet
Unfortunately, the CS path has likely become more arduous for junior engineers, and we'd probably see something more like a doctor-style career path for CS students, where they specialize to obtain deeper architectural knowledge before finding employment.
So what you wrote does not bode well for the profession.
I like your metaphor even as someone who can be a bit skeptical of the overly broad promises of LLMs/AI. But I do think this statement is too generous. It implies way too much actual musical ability. It also implies that everything I can imagine musically is possible, which it just isn't; there are limitations, just as with real musicians.
If we want to really make the metaphor work, it's an orchestra full of very informed people who have read a lot about music, have an idea of what their instrument should sound like, and can even make whatever they're holding sound like the appropriate instrument most of the time, sort of. With our direction, our "conducting," their success rate goes up.
But ultimately: they aren’t real musicians, they aren’t holding the right instruments, and they haven’t actually been taught how to read music. They are just often good at sort of making it work in a way that approximates what we want.
But I think the analogy holds (from an output point of view), the musicians will continue to improve, and some sections play better than others. The overall effect is "pleasing" although perhaps not concert quality.
So those musicians are no longer getting booked for that bit of music. Instead, one person produces it in their home studio. But, there’s now an industry for creating software tools that support that workflow, and there are a lot more opportunities for such music than there used to be. The amount of music used in background spots on television is astounding.
Things changed. Some jobs diminished (studio players?) or went away altogether (music copyists?). But new work came into existence.
Will there be new software like that? Maybe, but you'll never hear about it. Not only because it's throwaway code, but because the best interface is probably no code at all. The chatbot will instead spin up a VM behind the scenes and never even show the code it generated unless you dig for it.
A sibling [dead] comment to mine is a rebuttal to "just post the prompt", where it itself was expanded to several paragraphs that each say nearly nothing, including this gem:
> "That’s not a critique of the writing. It’s a diagnosis"
I miss when people just typed their thoughts concisely and hit send without passing it to an inflater. I'd maybe have a chance of understanding the sibling comment's point.
This isn't mind control, just language evolution quietly nudged by AI. ;)
- Use an LLM to compress a blog article into a single prompt
- Run that prompt through all the major LLMs to have them expand it back out again
- Diff the original against the generated versions in terms of content/ideas
- Spit out an "entropy ranking"
Another way to reify this is:
When making software, remember that it is a snapshot of
your understanding of the problem. It states to all,
including your future-self, your approach, clarity, and
appropriateness of the solution for the problem at hand.
Choose your statements wisely.

Your model inference provider and any intermediaries get to watch what you're designing from behind the curtain and copy, train on, or sell the insights if you're not paying attention.
Yes! This is 100% it.
This is a net good for everyone because it brings basic programming literacy to the masses and culls a lot of the junk projects littering GitHub, along with the SaaS scams.
It means people can focus on the problems that actually matter.
AI doesn't have any impact on the need for accountable humans to write code.
The scratchpad analogy is so good. Most mature business software is almost literally like a tome of legal documents that have to be edited carefully, but that doesn't have anything to do with the napkin in your pocket.
Not only does it take way more energy to write software with LLMs than by "hand"; now everyone is repeating work many times over to write the same tools.
From a freedom standpoint, one could argue it gives users the most freedom to have what they want and need. But it's very bad from an energy-efficiency point of view.
But really the only issue is that it's monotone LinkedIn-style insight fluff, and you can't tell where the prompt ends and the LLM crap begins. I expect something interesting was put into the LLM, but the LLM has destroyed the author's ability to communicate it to me effectively. Everything is inflated to the same level of importance, and I can't tell what the author actually cared about expressing.
Google, Apple, Meta, X, Bluesky, Shopify, Stripe and all the big software companies must be really shaking in their boots for disruption against the army of vibe coders. /s
(They are actually laughing at all of them)
All of the mentioned named companies have network effects, distribution and trust.
Not so easy to copy. Disposable LLM-generated code without users is cheap, which is the point of the article.