C makes you a bad programmer. Real men code in assembler.
IDEs make you a bad programmer. Real men code in a text editor.
Intellisense / autocomplete makes you a bad programmer. Real men RTFM.
Round and round we go...
Programming takes practice. And if all of your code is generated via LLM, then you're not getting any practice.
It's the same as saying using genAI will make you a bad artist. In the sense that putting hands to the medium is what makes you a good artist, that is true. Unless you take deliberate steps to learn, your skills will atrophy.
However, being a good programmer|artist is different from being a successful programmer|artist. GenAI can help you churn out tons of content, and if you can turn that content into income, you'll be successful.
Even before LLMs, successful and capable were orthogonal features for most programmers. We had people who made millions churning out a crud website over a few months, and others that can build game engines, but are stuck in underpaid contracting roles.
Yes, I absolutely am. Yes, it's a skill. Some programmers I've discussed this with made up their minds before they tried it. If the programmer's goal is to produce valuable software that works, is secure, and is easy to maintain, then they will gravitate to LLM-assisted programming. If their goal is to program software, then they won't like the idea of the LLM doing it for them.
To make the distinction clearer: if the carpenter's goal is to produce chairs, they may be inclined to use power tools; if their goal is to work with wood, then they might not want to use power tools.
So where people fall on this debate mostly depends on their values and priorities (wants).
The thing about wants is they are almost never changed by logical arguments. Some people just want to write the code themselves. Some people also want other people to write the code themselves. I don't know why, but I know that logical arguments are unlikely to change these people's minds, because our wants are so entangled in our sense of self that they exist outside the context of pure logic, probably for valid, evolutionarily beneficial reasons.
To be clear, programmers working on very novel, niche use cases where LLMs genuinely aren't useful have a valid case of "it's not helpful to me yet", and these people are distinct from who I'm mostly referring to. If someone is open-minded, tried their best to make it work, and decided it's not good enough yet, that's totally fair and unrelated to my main point.
Consider a student studying a math/physics course. Using an LLM is like having all the solutions to the course's exercises and reading them. If the goal is to finish all the problems quickly, then an LLM is great. If the goal is to properly learn math/physics, then doing the thinking yourself and using the LLM as a last resort, or to double-check your work, is the way to go.
Back to the carpenter: I think there is a lot of value in not using power tools to learn more about making chairs and become better at it.
I use many LLMs for coding every day. I think they are great. I am more productive. I finish features quickly, make progress quickly, and the dopamine release is fantastic. I started playing with agents and I marvel at what they can do. I can also tell that I am learning less and becoming a lot more complacent when working like this.
So I ask myself what the goal should be (for me). Should my goal be producing more raw output, or producing less output while enriching my knowledge and expertise?
If the goal is learning programming then some of that should be done with LLMs and some without. I think we are still figuring out how to use LLMs to optimize rate of learning, but my guess is the way they should be used is very different than how an expert should use them to be productive.
Again it comes back to the want though (learning vs doing vs getting done), so I think my main point stands.
I'd say it's closer to carpenters using CNC machines.
You can be a "successful" carpenter that sells tons of wood projects built entirely using a CNC and not even know how to hold a chisel. But nobody is going to call you a good woodworker. You're just a successful businessman.
For sure, there are gradients, where people use it for the hard parts and manually do what they can, e.g., cutting CNC templates and using them as router guides on their work. People will be impressed by your builds, but Paul Sellers is still going to be considered more talented.
From the media AI-hype perspective, your CNC analogy sounds right. From my perspective, grounded in real experience using it, the power tool analogy is far more apt.
If you treat an agentic IDE like a CNC machine, that's how you get problems.
Consider the population of opinions. One other reply to my comment is about how the LLM introduced a security flaw and repeated line after line of the same code, implying it's useless and can't be trusted. Now you're replying that LLMs are so capable and autonomous that they can be trusted with full automation, to the extent of a CNC machine.
My point is that the truth lies somewhere in between.
Maybe in the future your CNC analogy will be valid, but right now, with Windsurf/Cursor and Opus 4.5, we aren't there yet.
Lastly, even with your analogy, setting up and using a CNC machine is a skill. It's an engineering and design skill. So maybe the person doing that is more of an engineer than a woodworker, but going as far as calling them a businessperson isn't accurate to me.
Just this week alone I had the LLMs:
- Introduce a serious security flaw.
- Decide it was better to duplicate the same 5 lines of code 20 times instead of writing a function and calling it (sketched below).
And that is actually just this week. And to be clear, I am not making that up to prove a point, I use AI day in and day out and it happens consistently. Which is fine, humans can do that too, the issue is when there is a whole new generation of "programmers" that have absolutely zero clue how to spot those issues when (not if) they come up.
And as AI gets better (which it will) it actually makes it more dangerous because people start blindly trusting the code it produces.
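(For what it's worth, here is a minimal, hypothetical sketch of that duplication complaint, with made-up names; this is not the actual code the LLM produced, just an illustration of the pattern and the obvious refactor.)

    # Hypothetical illustration (invented names): the same five validation
    # lines pasted into every call site, versus extracting them once.

    # What the LLM produced: the block repeated at each call site.
    def create_order(order):
        if order.get("amount") is None:
            raise ValueError("missing amount")
        if order["amount"] <= 0:
            raise ValueError("amount must be positive")
        if order.get("currency") not in {"USD", "EUR"}:
            raise ValueError("unsupported currency")
        ...  # actual create logic

    def refund_order(order):
        if order.get("amount") is None:
            raise ValueError("missing amount")
        if order["amount"] <= 0:
            raise ValueError("amount must be positive")
        if order.get("currency") not in {"USD", "EUR"}:
            raise ValueError("unsupported currency")
        ...  # actual refund logic
    # ...imagine 18 more copies of the same block elsewhere.

    # The obvious refactor: write the check once, call it everywhere.
    def validate_order(order):
        if order.get("amount") is None:
            raise ValueError("missing amount")
        if order["amount"] <= 0:
            raise ValueError("amount must be positive")
        if order.get("currency") not in {"USD", "EUR"}:
            raise ValueError("unsupported currency")

    def cancel_order(order):
        validate_order(order)  # one call instead of five pasted lines
        ...  # actual cancel logic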
How an experienced developer uses LLMs to program is different than how a new developer should use LLMs to learn programming principles.
I don't have a CS degree. I never programmed in assembly. Before LLMs I could pump out functional secure LAMP stack and JS web apps productively after years of practice. Some curmudgeon CS expert might scrutinize my code for being not optimally efficient or engineered. Maybe I reinvented some algorithm instead of using a standard function or library. Yet my code worked and the users got what they wanted.
If you're not using the best tools and you're not using them properly and then they produce a result you don't like, while thousands of developers are using the tools productively, does that say something about you or the tools?
Also, if you use an LLM haphazardly and it introduces a security flaw, you as the user are responsible. The LLM is a power tool, not a person.
Whether the inexperienced dev uses an LLM or not doesn't change the fact that they might produce bad code with security flaws.
I'm not arguing that people that don't know how to program can use LLMs to replace competent programmers. I'm arguing that competent programmers can be 3-4x more productive with the current best agentic coding tools.
I have extremely compelling valid evidence of this, and if you're going to try to debate me with examples of how you're unable to get these results then all it proves is you're ideologically opposed to it or not capable.
> Also, if you use an LLM haphazardly and it introduces a security flaw, you as the user are responsible. The LLM is a power tool, not a person.
I 100% agree. That was my point. A lot of people (not saying you, I don't know you) are not qualified to take on that level of responsibility yet they do it anyway and ship it to the user.
And on the human side, that is precisely why procedures like code review have been standard for a while.
But my main objection to the parent post was not that LLMs can't be powerful tools, but that the specific examples used, maintainability and security, are (IMO) possibly the worst examples you could use, since 70k-line un-reviewable pull requests are not maintainable and probably not secure either (how would you know?).
It really boils down to who is using the LLM tool and how they are using it and what they want.
When I prompt the LLM to do something, I scout out what I want it to do, potential security and maintenance considerations, etc. I then prompt it precisely, sometimes with the equivalent of a multi-page essay, sometimes with a list of instructions. The point is I'm not vague. I then review what it did and look for potential issues. I also ask it to review what it did and whether it sees potential issues (sometimes with more specific questioning).
So we are mashing together a few dimensions, my GPC was pointing out:
- A: competent developer wants software functionality produced that is secure and maintainable
- B: competent developer wants to produce software functionality that is secure and maintainable
The distinction between these is subtle but has a huge impact on senior developer attitudes toward LLMs, from what I've seen. Dev A is more likely to figure out how to get the most out of LLMs; Dev B will resist and use flaws as an excuse to do it themselves. Reminds me a bit of the early AWS days and engineers hung up on self-hosting. Or devs wanting to build everything from scratch instead of using a framework.
What you're pointing out is that if careless or inexperienced developers use LLMs, they will produce unmaintainable and insecure code. Yeah, I agree. They would probably produce insecure and unmaintainable code without LLMs too. Experienced devs using LLMs well can produce secure and maintainable code. So the distinction isn't LLMs; it's who is using them and how.
What just occurred to me though, and I suspect you will appreciate, is the fact that I'm only working with other very experienced devs. Experienced devs working with junior or careless devs who can now produce unmaintainable and insecure code much faster is a novel problem and would probably be really frustrating to deal with. Reviewing a 70k-line PR produced by an LLM without thoughtful prompting and oversight sounds awful. I'm not advocating that this is a good thing. Though surely there is some way to manage it, and figuring out how to manage it probably has some huge benefits. I've only been thinking about it for 5 minutes, so I definitely don't have an answer.
One last thought that just occurred to me: the whole narrative of AI replacing junior devs seemed bonkers to me because there's still so much demand for new software and LLMs don't remotely compare to developers. That said, as an industry I guess we haven't figured out how to mix LLMs and junior developers in a way that's net constructive? If junior + LLM = 10x more garbage for seniors to review, maybe that's the real reason why junior roles are harder to find?
There's at least one study suggesting that they are not, in fact, more productive; they just feel that way.
Unfortunately for me personally, Claude Code on the latest models does not generally make me more productive, but it has absolutely led to several of my coworkers submitting absolutely trash-tier untested LLM code for review.
So until I personally see it give me output that meets my standards, or I see my coworkers do so, I'm not going to be convinced. Legions of anonymous HN commenters insisting they're 50-year veterans who have talked Claude into spitting out perfect code will never convince me.
(I spent over an hour working with Claude Code to write unit tests. I did eventually get code that met my standards, after dozens of rounds of feedback and many manual edits, and cleaning up quite a lot of hallucinatory code. Like most times I decide to "put in the effort" to get worthwhile results from Claude, I'm entirely certain I could have done it faster myself. I just didn't really feel like it at 4 on a Friday)
And my point was whether or not people take the time to develop the skill depends on their motivations, values and beliefs.
In this thread I have weighed both sides: cases where LLMs are productive and cases where they are not.
Your comment comes off as biased and evidence of my point.
How much can you level up by watching other people do their homework?
They haven't destroyed everyone, but there definitely are sets of people who used the crutches and never got past them. And not just in a "well, they never needed anything more" way; they ended up worse programmers than they should or could have been.
It’s like a carpenter talking about IKEA saying “I remember when I got an electric sander, it’s the same thing”.
Surely we agree that some boundary exists where it becomes absurd right? We are just quibbling over where to draw the line. I personally draw it at AI.
IDEs don't make you dependent on constant Internet connectivity, charge a monthly subscription, or expose you to lawsuits from powerful companies claiming copyright over your work.
Intellisense/autocomplete doesn't make you dependent on constant Internet connectivity, charge a monthly subscription, or expose you to lawsuits from powerful companies claiming copyright over your work.
Sometimes they do! But not in general, yes.
I have some friends in the defense industry who have to develop on machines without public internet access. You know what they all do? Have a second machine set up next to them which does have internet access.
So, who is it that supposedly said that? Not K&R (obviously). Not Niklaus Wirth. Not Stroustrup. Not even Dijkstra (Algol 60) and he loved writing acerbic remarks about how much the average developer sucked. I don't recall Ken Thompson, Fred Brooks (OS/360), Cutler, or any other OS architect having said anything like that either. Who in that era that has any kind of credibility said that?
The "Real Men Don't Use Pascal" essay was a humorous shitpost that didn't reflect any kind of prevailing opinion.
I am mid career now.
High-level languages like JS or Python have a lot of bad design / suboptimal code, as does some Java code in many places.
Some bad Java code (it just needs to be a SQL SELECT in a loop) can easily perform a thousand times worse than a clean Python implementation of the same thing.
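(To make that concrete, here's a minimal sketch of the "SELECT in a loop" anti-pattern, using Python's sqlite3 with invented table and column names purely for illustration; the language doesn't matter, the per-row round trip does.)

    import sqlite3

    # Toy in-memory database (invented schema, illustration only).
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.executemany("INSERT INTO users (id, name) VALUES (?, ?)",
                     [(i, f"user{i}") for i in range(500)])
    user_ids = list(range(500))

    # Bad: one query per id (the N+1 pattern). Against a real remote
    # database each iteration also pays network latency, which is where
    # the "thousands of times worse" factor comes from.
    names_slow = []
    for uid in user_ids:
        row = conn.execute("SELECT name FROM users WHERE id = ?",
                           (uid,)).fetchone()
        names_slow.append(row[0])

    # Better: one query for the whole batch.
    placeholders = ",".join("?" * len(user_ids))
    names_fast = [r[0] for r in conn.execute(
        f"SELECT name FROM users WHERE id IN ({placeholders})", user_ids)]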
As said above, C was once a high-level programming language and still is in some places.
I do not code in Python / Go / JS that much these days, but what made me a not-so-bad developer is my understanding of computing mechanisms (why and how to use memory instead of disk, how to arrange code so the CPU can use its cache efficiently...).
As said in many posts, code quality, even for vibe-coded stuff, depends more on what was prompted and how much effort goes into keeping the PR diff human-readable, if you want maintainable and efficient software at the end of the day.
Yet senior devs often spend more time reviewing code than actually writing it. Vibe coding ultimately feels the same to me at the moment.
I still love to write some code by hand, but I am starting to feel less and less productive with this approach, while at the same time feeling that I haven't really lost my skills to do so.
I think I really feel, and effectively am, more efficient at delivering things at an appropriate quality level for my customers now that I have agentic coding skills under my belt.
AI is different.
Joking aside, we have to understand that this is the way software is being created and this tool is going to be the tool most trivial software (which most of us make) will be created with.
I feel like the industry is telling me: adopt or become irrelevant.
Now I'm just telling AI what to do.
Suddenly, every 13-year-old cousin could implement apps for their uncle's dental office, laboratory, parts-shop billing, tourism office management, etc. Some people also believed that software developers would become irrelevant in a couple of years.
For me as an old programmer, I am having A BLAST using these tools. I have used enough tools (TurboBasic, Rational Rose (model-based development, ha!), NetBeans, Eclipse, VB6, Borland C++ Builder) to be able to identify their limits and work with them.
I’m hired to solve business problems with technology, not to self-improve or get on my high horse because I hand-wrote a silly abstraction layer for the n-th time
After all, you can go and be a goat herder right now, and yet you are presumably not doing this.
Nothing is stopping you being a goat herder - the place that is paying you for solving business problems will continue just fine if you leave, after all. Your presence there is not required.
- first the place that is paying me to solve problems will NOT be just fine when I suddenly leave
- second I need UBI or some AI-enabled utopia to be ushered in to live comfortably as goat herder
- third I do have a concrete plan to get out, but it will take me a couple of years to realise
Yes they will!
No one, not even the chief officers or the shareholders of a company, is irreplaceable. Well, maybe unless you're the only tech guy in a 5-person outfit?
They'll replace you just fine.
> - second I need UBI or some AI-enabled utopia to be ushered in to live comfortably as goat herder
Well, that's not really relevant to your assertion, is it?
>>> I really really hope an AI will do this work and solve all the “business problems” so I can go and be a goat herder
After all, UBI as an outcome is not dependent on AI solving all the business problems you currently solve.
IOW, whether UBI comes to pass or not is completely independent of "AI takes our jobs".
An AI utopia is not needed for UBI, that is true - but UBI will be much easier to make a reality if "all the jobs" are taken.
Aside from all the snark - I think that the fundamental societal problem is that there will always be some shitty jobs that no one wants to do and there needs to be some system to force some people to do these jobs - call it capitalism, communism, or marriage. There is no way around this basic fact of the human condition
Point is: if that problem is solvable without me, that's the win condition for everyone. Then I go herd goats (and have this nifty tool that helps me spec out an optimal goat fence while I'm at it).
The problem is solvable without you. I don't even need to know what the problem actually is, because the odds of you being one of the handful of the people in the world who are so critical that the world notices their passing is so low, I have a better chance of winning a lottery jackpot than of you being some critical piece of some solution.
Solving the problem - no matter what problem it is - is extremely dependent on you and every single human being (or animal for that matter) is a critical piece of their environment and circumstances.
I have buy-in from a former co-worker with whom I remained in touch over the years, so there will be at least two of us working the fields.
There are probably two ways to see the future of LLMs / AI: they are either going to have the capability to replace all white-collar work, or they are not.
If you think they are going to replace us, then you can either surrender or fight back, and personally I read all these anti-AI posts as fighting back, to help people realize we might be digging our own grave.
If, OTOH, you see AI as a force-multiplier tool that's never going to completely replace a human developer then yes, probably the smartest thing to do is to learn how to master this new tool, but at the same time keep in mind the side effects it might bring, like atrophy.
We've always been in the business of replacing humans in the 3-D's space (dirty, dangerous, dull... and to be clear, data manipulation for its own sake is dull). If we make AI that replaces 90% of what I do at my desk every day... we did it. We realized the dream from the old Tom Swift novels where he comes up with an idea for an invention and hands the idea off to his computer to extrapolate it, or the ship's computer in Star Trek acting like a perfect engineering and analytical assistant to take fuzzy asks from humans and turn them into useful output.
They aren't going to willingly spread the wealth.
I do, however, love solving business problems. This is what I am hired for. I speak to VPs/managers to improve their day-to-day. I come up with feasible solutions and translate them into code.
If AI could actually code, like really code (not "here is some code, it may or may not work, go read the documentation to figure out why it doesn't"), I would just go and focus on creating affordable software solutions for medium/small businesses.
This is kind of like gardening/farming: before the industrial revolution most crops required a huge workforce; these days, with all the equipment and advancements, a single farmer can do a lot on their own with a small staff. People still "hand" garden for pleasure, but without using the new tech they wouldn't be able to compete at a big scale.
I know many fear AI, but it is progress and it will never stop. I do think many devs are intelligent and will be able to evolve in the workplace.
So, this "solve business problems" is some temporary[1] gig for you?[2]
------------------------------
[1] I'm reminded of the anti-union people who are merely temporarily embarrassed millionaires.
[2] Skills atrophy. Maybe you won't need the atrophied skill in the future, but how sure are you that this is the case? The eventual outcome?
A time and place may come where the AI are so powerful I’m not needed. That time is not right now.
I have used Rider for years at this point and it automatically handles most imports. It’s not AI, but its one of the things that is just not needed for me to even think about.
I do enjoy programming and low level programming helps you build a mental model on what a system is capable of. If you know what a system is capable of, you can form higher levels of abstraction, that the LLM will never figure out.
AI in its plain definition is the automation of human cognitive process. Having AI work on cognitively demanding tasks gives you energy to focus on other things, like higher levels of abstraction.
I cover a wide set of domains, LLM is much better at some of them, worse at others. I learn from my mistakes, I also learn from the LLM mistakes.
If your intention is to be a good programmer, you will use it as a tool to learn and be productive. I love the Stack Overflow experience, but now they are a data provider to LLMs. This is the shift in how technology is being used; even according to Stack Overflow's CEO, "I think we will see a lot of job displacement in the next two years."
Adoption for code gen and workflow execution is too significant to be ignored.
Fluency in reading will disappear if you aren't writing enough. And for the pipeline from junior to senior, if the juniors don't write as much as we wrote when young, they are never going to develop the fluency to read.
You are saying something completely different to what I am saying; are you really saying that someone who writes all the time will forget how to read?
You understand that the act of writing is in fact partially reading?
I'm trying to think of any examples of someone who said that "a generation ago" at all? I assume that they were some sort of fringe crackpot.
These things are all tradeoffs. A junior engineer who goes to the manual every time is something I encourage, but if they go exclusively to only the manual every time they are going to be slower and produce code more disjoint and harder to maintain than their peers who have taken advantage of other people's insights into the things the manuals don't say.
I think for software engineering, the far more common issue is that there's already a best practice and the individual engineer hasn't chanced to hear about it yet than the problem on the desk is in need of a brand-new mousetrap.
I don’t commit 1,000 lines that i don’t know how it works.
If people are just not coding anymore and trusting AI to do everything, i agree, they’re going to hit a wall hard once the complexity of their non-architected Frankenstein project hits a certain level. And they’ll be paying for a ton of tokens to spin the AI’s wheels trying to fix it.
I'm doing a lot of new things I never would have done before. Yes, I could have googled APIs and read tutorials, but I learn best by doing, and AI helps me learn a lot faster.
Sometimes.
If people aren't learning from AI it's their fault. Yeah AI makes stuff up and hallucinates and can be wrong but how is that different than a distracted senior dev? AI is available to me 24/7 to answer my questions in minutes or seconds where half the time when I message people I have to wait 30-60min for a response.
People just need to approach things intelligently and actually learn along the way. You can easily get to the point where you're thinking more clearly about a problem than the AI writing your code pretty quickly if you just pay attention and do the research you need to understand what's happening. They're not as factual as a textbook but they don't need to be to give you the space to ask the right questions and they'll frequently provide sources (though I'd heavily recommend checking them. Sometimes the sources are a joke)
But the skills you describe are still skills, reading and researching and doing your own fact finding are still important to practice and be good at. Those things only get more important in situations off the beaten path, where AI doesn't always give you trustworthy answers or do trustworthy work.
I'm still going to nurture some of these skills. If I'm trying to learn, I'll stick to using AI only when I'm truly stuck or no longer having fun.
People who claim "It's not synthesized, it's just other people's work run through a woodchipper" aren't precisely right, but they also aren't precisely wrong... And in this space, having the whole ecosystem of programmers who published code looking over my shoulder as I try to solve problems is a huge boon.
Not everyone has access to an expert that will guide them to the most efficient way to do something.
With either form of learning though, critical thinking is required.
I've still learned from it. Just read each line it generates carefully. Read the API references of unfamiliar functions or language features it uses. You'll learn things.
You'll also see a lot of stupidity, overcomplication, outdated or incorrect APIs calls, etc.
Anybody know any weavers making > 100k a year?
If the demand for this work is high, maybe the individual workers aren't earning $100k per year, but the owner of the company who presumably was/is a weaver might well be earning that much.
What the loom has done is made the repeatable mass production of items cheap and convenient. What used to be a very expensive purchase is now available to more people at a significantly cheaper price, so probably the profits of the companies making them are about the same or higher, just on a higher volume.
It hasn't entirely removed the market for high end quality weaving, although it probably has reduced the number of people buying high-end bespoke items if they can buy a "good enough" item much cheaper.
But having said that, I don't think weavers were on the inflation-adjusted equivalent of 100k before the loom either. They may have been skilled artisans, but that doesn't mean the majority were paid multiples above an average wage.
The current price bubble for programming salaries is based on the high salaries being worth paying for a company who can leverage that person to produce software that can earn the company significantly more than that salary, coupled with the historic demand for good programmers exceeding supply.
I'm sure that even if the bulk of programming jobs disappear because people can produce "good enough" software for their purposes using AI, there will always be a requirement for highly skilled specialists to do what AI can't, or from companies that want a higher confidence that the code is correct/maintainable than AI can provide.
I've refactored the sloppiest slop with AI in days with zero regressions. If I did it manually it could have taken months.
Same. Writing code is easy. Reading code is very very hard.
To be clear, I’m not saying there is nothing interesting in the code of others, obviously. However, reading code is, in my opinion, twice as hard as writing it. Especially understanding the structure is very hard.
They could rename it "Using AI Generated Code Makes Programming Less Fun, for Me", that would be more honest.
The problem for programmers is (as a group) they tend to dislike the parts of their job that are hardest to replace with AI and love the stuff that is easiest for machines to copy. It turns out meetings, customer support, documentation, tests, and QA are core parts of being a good engineer.
This is how I work. Honestly, the writing time (the part I'm promised I'll gain back by using AI) is something like 10% of my coding time. And it's the only time I'm "resting", so yeah, I don't want to get rid of it. Nor do I need it. And I especially do not want to check on the "intern" to verify it did what I imagined. Nor do I want to spend time explaining what I imagined. I just do it.
However I agree there's a different category here under the idea of 'craft'. I don't have a good way to express this. It's not that I'm writing these 'if' statements in a particular way, it's how the whole system is structured and I understand every single line and it's an expression of my clarity of the system in code.
I believe there is a split between these two, and both are focusing on different problems. Again, I don't want to label, but if I *had to*, I would say one side is business-focused. Here's the thing though: your end customers don't give a fuck if it's built with AI or crafted by hand.
The other side is the craftsmanship, and I don't know how to express this to make sense.
I'm looking for a good way to express this - feeling? Reality? Practice?
IDK, but I do understand your side of it; however, I don't think many companies will give a shit.
If they can go to market in 2 weeks vs 2 months, you know what they'll choose.
I did, for a very long time. Then I realized that it's just work, and I'd like to spend my life minimizing the amount of that I do and maximizing the things I do want to do. Code gen has completely changed the equation for workaday folks. Maybe that will make us obsolete, and fall out of practice. But I tend to think the best software engineers are the laziest ones who don't try to be clever. Maybe not the best programmers per se, but I know whose codebase I'd rather inherit.
I know plenty of 50-something developers out of work because they stuck to their old ways and the tech world left them behind.
However, for those in the first few years of their career, I'm definitely seeing the problem where junior devs are reaching for AI on everything, and aren't developing any skills that would allow them to do anything more than the AI can do or catch any of the mistakes that AI makes. I don't see them on a path that leads them from where they are to where I am.
A lot of my generation of developers is moving into management, switching fields, or simply retiring in their 40s. In theory there should be some of us left who can do what AI can't for another 20 years until we reach actual retirement age, but programming isn't a field that retains its older developers well. So this problem is going to catch up with us quickly.
Then again, I don't feel like I ever really lived up to any of the programmers I looked up to from the 80s and 90s, and I can't really point to many modern programmers I look up to in the same way. Moxie and Rob Nystrom, maybe? And the field hasn't collapsed, so maybe the next generation will figure out how to make it work.
People care if their software works. They don’t care how beautiful the code is.
AI can churn out 25 drafts faster than 99% of devs can get their boilerplate set up for the first time.
The new skill is fitting all that output into deployable code, which if you are experienced in shipping software is not hard to get the model to do.
If you want to be an artist, be an artist, that's fine; just don't confuse artistry with engineering.
I write art code for myself, I engineer code professionally.
The author wraps with a false dichotomy that uses emotionally loaded language at the end: "You Believe We have Entered a New Post-Work Era, and Trust the Corporations to Shepherd Us Into It". I mean, what? Why can't I think it's quickly becoming a new era _and_ not trust corporations? Why does the author take that idea off the table? Is this logic or rhetoric? Who is this author trying to convince?
SWE life has always had smatterings of weird gatekeeping, self identities wrapped up in external tooling or paradigms, fragile egos, general misanthropy, post-hoc rationalization, etc. but... man watching the progressions of the crash outs these last few years has been wild.
In my day job, I use best practices. If I'm changing a SQL database, I write database migrations.
In my hobby coding? I will never write a database migration. You couldn't force me to at gunpoint. I just hate them, aesthetically. I will come up with the most elaborate and fragile solutions to avoid writing them. It's part of the fun.
Yes, taking the bus to work will make me a worse runner than jogging there. Sometimes, I just want to get to a place.
Secondly, I'm not convinced the best way to learn to be a good programmer is just to do a whole project from 0 to 100. Intentional practice is a thing.
I do think the “becoming dependent on your replacement” point is somewhat weak. Once AI is as good as the best human at programming (which I think could still be many years away), the conversation is moot.
I mean, in 2 years the entire mentality shifted. Most people on HN were just completely and utterly wrong (it's also quite embarrassing to read how self-assured these people were; this was like 70 percent of HN at the time).
First AI is clearly not a stochastic parrot and second it hasn’t taken our jobs yet but we can all see that potential up ahead.
Now we get articles like this saying your skills will atrophy with AI because the entire industry is using it now.
I think it’s clear. Everyone’s skills will atrophy. This is the future. I fully expect in the coming decades that the generation after zoomers have never coded ever without the assistance of AI and they will have an even harder time finding jobs in software.
Also: because the change happened so fast you see tons of pockets of people who aren’t caught up yet. People who don’t realize that the above is the overarching reality. You’ll know you’re one of these people if AI hasn’t basically taken over your work place and you and your coworkers aren’t going all in on Claude or Codex. Give it another 2 years and everyone will flip here too.
> "I’m a senior and LLM’s never provide code that pass my sniff test, and it remains a waste of time."
Even a year ago that seemed like a ridiculous thing to say. LLM's have made one thing very clear to me: A massive percentage of developers derive their sense of self worth from how smart coding makes them feel.
What has to happen first is that people need to rebuild their identity before they can accept what is happening, and that rebuilding process will take longer than the rate at which AI is outrunning all of us.
What is my role in tech if for the past 20 years I was a code ninja but now AI can do better than me? I can become a delegator or manager to AI, a prompt wizard or some leadership role… but even this is a target for replacement by AI.
That being said, what will be critical is understanding business needs and being able to articulate them in a manner that computers (not humans) can translate into software.
Your memory of the discourse of that era has apparently been filtered by your brain in order to support the point you want to make. Nobody who thoughtlessly adopted an extreme position at a hinge point where the future was genuinely uncertain came out of that looking particularly good.
That is literally not what happened. You’re hallucinating. The majority of people on HN were so confident in their coding abilities that they weren’t worried at all. Just a cursory glance at the conversations back then and that is what you will see OVERALL.
No it very clearly is. Even still today, it is obvious that it has zero understanding of anything and it's just parroting training data arranged in different ways.
As for “understanding” we can only infer this from input and output. We can’t actually know if it “understands” because we don’t actually know how these things work and in addition to that, we don’t have a formal definition of what “understanding” is.
From what we do know about LLMs, we know that it is not trivial pattern matching; the output it formulates is, by the very definition of machine learning, original information, not copied from the training data.
Fascinating.
It also has already taken junior jobs. The market is hard for them.
Correction: it has been a convenient excuse for large tech companies to cut junior jobs after ridiculous hiring sprees during COVID/ZIRP.
Well, it's taken the blame for job cutting that is really due to the broad growth slowdown since COVID fiscal and monetary stimulus was stopped and replaced with monetary tightening, and then most recently the economy was hit with the additional hammers of the Trump tariff and immigration policies. Lots of people want to obscure, deny, and distract from the general economic malaise (and because many of the companies involved, and even more of their big investors, are in incestuous investment relationships with AI companies, "blaming" AI for the cuts is also a form of self-serving promotion).
This quote is so telling. I’m going to be straight with you and this is my opinion so you’re free to disagree.
From my POV you are out of touch with the ground truth reality of AI and that’s ok because it has all changed so fast. Everything in the universe is math based and in theory even your brain can be fully modelled by mathematics… it’s a pointless quote.
The ground truth reality is that nobody and I mean nobody understands how LLMs work. This isn’t me making shit up, if you know transformers, if you know the industry and if you even listened to the people behind the technology who make these things… they all say we don’t know how AI works.
But we do know some things. We know it's not a stochastic parrot because, in addition to the failures, we've seen plenty of successes on extremely complicated problems that are too non-trivial for anything other than an actual intelligence to solve.
In the coming years reality will change so much that your opinion will flip. You might be so stubborn as to continue calling it a stochastic parrot but by then it will just be lip service. Your current reaction is normal given the paradigm shift happened so fast and so recently.
This is a really insane and untrue quote. I would, ironically, ask an LLM to explain how LLMs work. It's really not as complicated as it seems.
You can boil LLM's down to "next token predictor". But that's like boiling down the human brain to "synapses firing".
The point that OP is making I think, is that we don't understand how "next token prediction" leads to more emergent complexity.
It seems clear you don't want to have a good faith discussion.
It's you claiming that we understand how LLM's work, while the researchers who built them say that we ultimately don't.
There’s tons more where that came from. Like I said lots of people are out of touch because the landscape is changing so fast.
What is baffling to me is that not only are you unaware of what I'm saying, but you also think what I'm saying is batshit insane, despite the fact that the people at the center of it all who are creating these things SAY the same thing. Maybe it's just terminology: understanding how to build an LLM is not the same as understanding why or how it works.
Either way, I can provide tons and tons more evidence to the contrary if you're still not getting it: we do not understand how LLMs work.
Also, you can prompt an LLM about whether or not we understand LLMs; it should tell you the same thing I'm saying, along with explaining transformers to you.
Just because the restaurant says "World's Best Burgers" on its logo doesn't make it true.
Here’s another: https://youtube.com/shorts/zKM-msksXq0?si=bVethH1vAneCq28v
Geoffrey Hinton, father of AI, who quit his job at Google to warn people about AI. What's his motivation? Altruism.
Man it’s not even about people saying things. If you knew how transformers and LLMs work you would know even for the most basic model we do not understand how they work.
If you say you understand LLMs then my claim is then that you are lying. Nobody understands these things and people core to building these things are in absolute agreement with me.
I build LLMs for a living, btw. So it’s not just other experts saying these things.. I know what I’m talking about on a fundamental level.
You know one thing an LLM does better than me and many other people? It admits it’s wrong after it’s been proven wrong. Humans, including me have a hard time doing that but I’m not the one that’s wrong here. You are wrong, and that’s ok. Don’t know why people need to go radio silent or say stupid shit just to dodge the irrefutable reality of being completely and utterly wrong.
It will just spew over-confident sycophantic vomit. There is no attempt to reason. It’s all worthless nonsense.
It's a fancy regurgitation machine that will go completely off the rails when it steps outside of its training area. That's it.
I've also seen it fuck up in the same way you describe. So do I weigh and balance these two pieces of contrasting evidence to form a logical conclusion? Or do I pick whichever piece of evidence is convenient to my worldview? What should I do? Actually, why don't you tell me what you ended up doing?
Imagine the empire state building was just completed, and you had a man yelling at the construction workers: "PFFT that's just a bunch of steel and bricks"
The money is never wrong! That's why the $100 billion invested in blockchain companies from 2020 to 2023 worked out so well. Or why Mark Zuckerberg's $50 billion investment in the Metaverse resulted in a world-changing paradigm shift.
Those people who invested cash in blockchain believed that they could develop something worthwhile on the blockchain.
Zuckerberg believed the Metaverse could change things. It's why he hired all of those people to work on it.
However, what you have here are people claiming LLMs are going to be writing 90% of code in the next 18 months, then turning around and hiring a bunch of people to write code.
There's another article posted here, "Believe the Checkbook" or something like that. And they point out that Anthropic had no reason to purchase Bun except to get the people working on it. And if you believe we're about to turn a corner on vibe coding, you don't do that.
Very few people say this. But it's realistic to say that, at the least, within the next decade our jobs are going out the window.
So yeah, he's just "one guy", but in terms of "one guys", he's a notable one.
So we could be right or we could be wrong. What we do know is that a lot of what people were saying or "believed" about LLMs two years ago is now categorically wrong.
And some of those beliefs they were wrong about is about when and how it will change things.
And my post is not about who is correct. It's about discerning what people truly believe despite what they might tell you up front.
People invested money into the internet. They hired people to develop it. That told you they believed it was useful to them.