Maybe it’s because my approach is much closer to a Product Engineer than a Software Engineer, but code output is rarely the reason why projects I’ve worked on are delayed. All my productivity issues can be attributed to poor specifications, or problems that someone just threw over the wall. Every time I’m blocked, it’s because someone didn’t make a decision on something, or no one thought far enough ahead to see that the decision was needed.
It irks me so much when I see the managers of adjacent teams pushing for AI coding tools when the only thing the developers know about the project is what was written in the current JIRA ticket.
This is very true at large enterprises. The pre-coding tasks [0] and the post-coding tasks [1] account for the majority of elapsed time that it takes for a feature to go from inception to production.
The theory of constraints says that optimizations made to a step that's not the bottleneck will only make the actual bottleneck worse.
AI is no match for a well-established bureaucracy.
[0]: architecture reviews, requirements gathering, story-writing
[1]: infrastructure, multiple phases of testing, ops docs, sign-offs
And it's also interesting to think that PMs are also using AI - in my company, for example, we allow users to submit feedback, and an AI summary report is then sent to the PMs. They then put the report into ChatGPT along with the organizational goals, the key players, and previous meeting transcripts, and ask the AI to weave everything together into a PRD, or even a 10-slide presentation.
I’m working hard on building something right now that I’ve had several false starts on, mostly because it’s taken years for us to totally get our heads around what to build. Code output isn’t the problem.
My experience has been the opposite. I've enjoyed working on hobby projects more than ever, because so many of the boring and often blocking aspects of programming are sped up. You get to focus more on higher-level choices, overall design and code quality, rather than searching for specific usages of libraries or applying other minutiae. Learning is accelerated, and the loop of making choices and seeing code generated for them is a bit addictive.
I'm mostly worried that it might not take long for me to be a hindrance in the loop more than anything. For now I still have better overall design sense than AI, but it's already much better than I am at producing code for many common tasks. If AI develops more overall insight and sense, and the ability to handle larger code bases, it's not hard to imagine a world where I no longer even look at or know what code is written.
It might challenge us, and maybe those of us who feel challenged in that way need to rise to it, for there are always harder problems to solve.
If this new tool seems to make things so easy it's like "cheating", then make the game harder. Can't cheat reality.
I would try to build something "good" (not "perfect", just "good", like modular or future-proof or just not downright malpractice). But while I was doing this, others would build crap. They would do it so fast I couldn't keep up. So they would "solve" the problems much faster. Except that over the years, they just accumulated legacy and had to redo stuff over and over again (at some point you can't throw crap on top of crap, so you just rebuild from scratch and start with new crap, right?).
All that to say, I don't think that AIs will help with that. If anything, AIs will help more people behave like this and produce a lot of crap very quickly.
Similar with GPS and navigation. When you read a map, you learn how to localise yourself based on landmarks you see. You tend to get an understanding of where you are, where you want to go and how to go there. But if you follow the navigation system that tells you "turn right", "continue straight", "turn right", then again you lose intuition. I have seen people following their navigation system around two blocks to finally end up right next to where they started. The navigation system was inefficient, and with some intuition they could have said "oh actually it's right behind us, this navigation is bad".
Back to coding: if you have a deep understanding of your codebases and dependencies, you may end up finding that you could actually extract some part of one codebase into a library and reuse it in another codebase. Or that instead of writing a complex task in your codebase, you could contribute a patch to a dependency and it would make it much simpler (e.g. because the dependency already has this logic internally and you could just expose it instead of rewriting it). But it requires an understanding of those dependencies: do you have access to their code in the first place (either because they are open source or belong to your company)?
Those AIs obviously help with writing code. But do they help you get an understanding of the codebase to the point where you build intuition that can be leveraged to improve the project? Not sure.
Is it necessary, though? I don't think so: the tendency is that software becomes more and more profitable by becoming worse and worse. AI may just help writing more profitable worse code, but faster. If we can screw the consumers faster and get more money from them, that's a win, I guess.
I understand the point you are making. But what makes you think refactoring won't be AI's forte? Maybe you could explicitly ask for it. Maybe you could ask it to minify while staying human-understandable, and that would achieve the refactoring objectives you have in mind.
I don't know that AI won't be able to do that, just like I don't know that AGI won't be a thing.
It just feels like it's harder to have the AI detect your dependencies, maybe browse the web for the sources (?) and offer to make a contribution upstream. Or would you envision downloading all the sources of all the dependencies (transitive included) and telling the AI where to find them? And to give it access to all the private repositories of your company?
And then, upstreaming something is a bit "strategic", I would say: you have to be able to say "I think it makes sense to have this logic in the dependency instead of in my project". Not sure if AIs can do that at all.
To me, it feels like it's at the same level of abstraction as something like "I will go with CMake because my coworkers are familiar with it", or "I will use C++ instead of Rust because the community in this field is bigger". Does an AI know that?
I also have a large collection of handwritten family letters going back over 100 years. I've scanned many of them, but I want to transcribe them to text. The job is daunting, so I ran them through some GPT apps for handwriting recognition. GPT did an astonishing job and at first blush, I thought the problem was solved. But on deeper inspection I found that while the transcriptions sounded reasonable and accurate, significant portions were hallucinated or missing. Ok, I said, I just have to review each transcription for accuracy. Well, reading two documents side by side while looking for errors is much more draining than just reading the original letter and typing it in. I'm a very fast typist and the process doesn't take long. Plus, I get to read every letter from beginning to end while I'm working. It's fun.
So after several years of periodically experimenting with the latest LLM tools, I still haven't found a use for them in my personal life and hobbies. I'm not sure what the future world of engineering and art will look like, but I suspect it will be very different.
My wife spins wool to make yarn, then knits it into clothing. She doesn't worry much about how the clothing is styled because it's the physical process of working intimately with her hands and the raw materials that she finds satisfying. She is staying close to the fundamental process of building clothing. Now that there are machines for manufacturing fibers, fabrics and garments, her skill isn't required, but our society has grown dependent on the machines and the infrastructure needed to keep them operating. We would be helpless and naked if those were lost.
Likewise, with LLM coding, developers will no longer develop the skills needed to design or "architect" complex information processing systems, just as no one bothers to learn assembly language anymore. But those are things that someone or something must still know about. Relegating that essential role to a LLM seems like a risky move for the future of our technological civilization.
Personally, right now I find it difficult to imagine saying "I made this" if I got an AI to generate all the code of a project. If I go to a bookstore, ask for some kind of book ("I want it to be with a hard cover, and talk about X, and be written in language Y, ..."), I don't think that at the end I will feel like I "made the book". I merely chose it, someone else made it (actually it's multiple jobs, between whoever wrote it and whoever actually printed and distributed it).
Now if I can describe a program to an AI and it results in a functioning program, can I say that I made it?
Of course it's more efficient to use knitting machines, but if I actually knit a piece of clothing, then I can say I made it. And that's what I like: I like to make things.
https://www.fictionpress.com/s/3353977/1/The-End-of-Creative...
Some existential objections occur; how sure are we that there isn't an infinite regress of ever deeper games to explore? Can we claim that every game has an enjoyment-nullifying hack yet to discover with no exceptions? If pampered pet animals don't appear to experience the boredom we anticipate is coming for us, is the expectation completely wrong?
The one part that I believe will still be essential is understanding the code. It's one thing to use Claude as a (self-driving) car, where you delegate the actual driving but still understand the roads being taken. (Both for learning and for validating that the route is in fact correct)
It's another thing to treat it like a teleporter, where you tell it a destination and then are magically beamed to a location that sort of looks like that destination, with no way to understand how you got there or if this is really the right place.
Still need to prove that AI-generated code is "better", though.
"More profitable", in a world where software generally becomes worse (for the consumers) and more profitable (for the companies), sure.
If AI makes doing the same thing cheaper, why would they suddenly say "actually instead of increasing our profit, we will invest it into better software"?
Prior to AI, this was also true with software engineering. Now, at least for the time being, programmers can increase productivity and output, which seems good on the surface. However, with AI, one trades the hard work and brain cells created by actively practicing and struggling with craft for this productivity gain. In the long run, is this worth it?
To me, this is the bummer.
The purpose of hobbies is to be a hobby; archetypal tech projects are about self-mastery. You cannot improve your mastery with a "tool" that robs you of most of the minor and major creative and technical decisions of the task. Building IKEA furniture will not make you a better carpenter.
Why be a better carpenter? Because software engineering is not about hobby projects. It's about research and development at the fringes of a business's (or org's, or project's...) requirements -- evolving their software towards solving them.
Carpentry ("programming craft") will always (modulo 100+ years) be essential here. Powertools do not reduce the essential craft, they increase the "time to craft being required" -- they mean we run into walls of required expertise faster.
AI as applied to non-hobby projects -- R&D programming in the large, where the requirements aren't already specified as prior-art programs (of the functional & non-functional variety, etc.) -- just shortens the time to hitting the wall where you're going to shoot yourself in the foot if you're not an expert.
I have not seen a single "sky is falling" take from an experienced software engineer, i.e., those operating at typical "in the large" programming scales, in typical R&D projects (revisions to legacy, or greenfield -- just the reqs are new).
Compare this to the situation where you have a team develop schemas for your datasets which can be tested and verified, and fixed in the event of errors. You can't really "fix" an LLM or human agent in that way.
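To make the contrast concrete, here's a minimal Python sketch of the kind of schema check I mean (field names and rules are invented): it's deterministic, you can write tests against it, and when it's wrong you fix one line and it stays fixed.

    # Hypothetical record schema: field name -> (type, required)
    SCHEMA = {
        "id": (int, True),
        "email": (str, True),
        "age": (int, False),
    }

    def validate(record: dict) -> list[str]:
        """Return human-readable violations; an empty list means the record is valid."""
        errors = []
        for field, (ftype, required) in SCHEMA.items():
            if field not in record:
                if required:
                    errors.append(f"missing required field: {field}")
            elif not isinstance(record[field], ftype):
                errors.append(f"{field}: expected {ftype.__name__}, got {type(record[field]).__name__}")
        return errors

    # Regression tests pin the behaviour down; you can't pin an LLM's judgement like this.
    assert validate({"id": 1, "email": "a@b.c"}) == []
    assert validate({"email": 42}) == ["missing required field: id",
                                       "email: expected str, got int"]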
So I feel like traditionally computing excelled at many tasks that humans couldn't do - computers are crazy fast and don't make mistakes, as a rule. LLMs remove this speed and accuracy, becoming something more like scalable humans (their "intelligence" is debatable, but possibly a moving target - I've yet to see an LLM that I would trust more than a very junior developer). LLMs (and ML generally) will always have higher error margins, it's how they can do what they do.
So if you want to translate that, there is value in doing a processing step manually to learn how it works - but when you understood that, automation can actually benefit you, because only then are you even able to do larger, higher-level processing steps "manually", that would take an infeasible amount of time and energy otherwise.
Where I'd agree though is that you should never lose the basic understanding and transparency of the lower-level steps if you can avoid that in any way.
I understand what the article means, but sometimes I've got the broad scopes of a feature in my head, and I just want it to work. Sometimes programming isn't like "solving a puzzle", sometimes it's just a huge grind. And if I can let an LLM do it 10 times faster, I'm quite happy with that.
I've always had to fix up the code one way or another though. And most of the times, the code is quite bad (even from Claude Sonnet 3.7 or Gemini Pro 2.5), but it _did_ point me in the right direction.
About the cost: I'm only using Gemini Pro 2.5 Experimental the past few weeks. I get to retry things so many times for free, it's great. But if I had to actually pay for all the millions upon millions of used tokens, it would have cost me *a lot* of money, and I don't want to pay that. (Though I think token usage can be improved a lot; tools like Roo-Code seem very wasteful on that front.)
Let me save everybody some time:
1. They're not saying it because they don't want to think of themselves as obsolete.
2. You're not using AI right, programmers who do will take your job.
3. What model/version/prompt did you use? Works For Me.
But seriously: It does not matter _that_ much what experienced engineers think. If the end result looks good enough for laymen and there's no short term negative outcomes, the most idiotic things can build up steam for a long time. There is usually an inevitable correction, but it can take decades. I personally accept that, the world is a bit mad sometimes, but we deal with it.
My personal opinion is pretty chill: I don't know if what I can do will still be needed n years from now. It might be that I need to change my approach, learn something new, or whatever. But I don't spend all that much time worrying about what was, or what will be. I have problems to solve right now, and I solve them with the best options available to me right now.
People spending their days solving problems probably generally don't have much time to create science fiction.
I use AI heavily, it's my field.
They're encountering a type of tool they haven't met before and haven't been trained to use. The default assumption is they are probably using it wrong. There isn't any reason to assume they're using it right - doing things wrong is the default state of humans.
And I might not be the best coder, by far, but I've got over 40 years experience at this crap in practically every language going.
We can quibble about the exact number; 1.2x vs 5x vs 10x, but there's clearly something there.
So the OP was in a bad place without Claude anyways (in industry at least).
This realization is the true bitter one for many engineers.
The realization that productive workers aren't just replaceable cogs in the machine is also a bitter lesson for businessmen.
Independent of what AI can do today, I suspect this was a reason why so many resources were poured into its development in the first place. Because this was the ultimate vision behind it.
(1) Define people's worth through labour.
(2) See labour as a cost center that should be eliminated wherever possible.
US politicians and technologists are trying to have it both ways: Oppose a social safety net out of principle as to "not encourage leechers", forcing people to work, but at the same time seek to reduce the opportunities for work as much as possible. AI is the latest and potentially most far-reaching implementation of that.
This is asking for trouble.
Why? Both AI and outsourcing provide a much cheaper way to get programming done. Why would you pay someone 100k because he likes doing what an AI or an Indian dev Team can do for much cheaper?
Generating value for the shareholders and/or investors, not the customers. I suspect this is the next bitter lesson for developers.
The bitter lesson is that making profit is the only directive.
I am sure Software developers are here to stay, but nobody who just writes software is worth anywhere close to 100k a year. Either AI or outsourcing is making sure of that.
The models all have their specific innate knowledge of the programming ecosystem from the point in time where their last training data was collected. However, unlike humans, they cannot update that knowledge unless a new finetuning is performed - and even then, they can only learn about new libraries that are already in widespread use.
So if everyone now shifts to Vibe Coding, will this now mean that software ecosystems effectively become frozen? New libraries cannot gain popularity because AIs won't use them in code and AIs won't start to use them because they aren't popular.
I saw a submission earlier today that really illustrated perfectly why AI is eating people who write code:
> You could spend a day debating your architecture: slices, layers, shapes, vegetables, or smalltalk. You could spend several days eliminating the biggest risks by building proofs-of-concept to eliminate unknowns. You could spend a week figuring out how you’ll store, search, and cache data and which third–party integrations you’ll need.
$5k/person/week to have an informed opinion of how to store your data! AI is going to look at the billion times we've already asked these questions and make an instant decision, and the really, really important part is that it doesn't really matter what we choose anyway, because there are dozens of right answers.
And then, yes, you’ll have the legions of vibe coders living in Plato’s cave and churning out tinker toys.
There is an interesting aspect to this whereby there's maybe more incentive to open source stuff now just to get usage examples in the training set. But if context windows keep expanding it may also just not matter.
The trick is to have good docs. If you don't then step one is to work with the model to write some. It can then write its own summaries based on what it found 'surprising' and those can be loaded into the context when needed.
That's where we're at. The LLM needs to be told about the brand new API by feeding it new docs, which just uses up tokens in its context window.
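For anyone who hasn't tried it, the mechanics really are that unglamorous: you paste the docs in front of the question. A rough Python sketch (the paths and the ask_llm call are placeholders for whatever client you actually use):

    from pathlib import Path

    def build_prompt(question: str, doc_paths: list[str], budget_chars: int = 60_000) -> str:
        """Prepend freshly written docs/summaries to the question.
        budget_chars is a crude stand-in for a token budget."""
        context, used = [], 0
        for path in doc_paths:
            text = Path(path).read_text(encoding="utf-8")
            if used + len(text) > budget_chars:
                break  # the point above: new docs compete for a finite context window
            context.append(f"## {path}\n{text}")
            used += len(text)
        return "\n\n".join(context) + f"\n\n---\nUsing only the APIs documented above:\n{question}"

    # prompt = build_prompt("Add retry support to the client", ["docs/new_api.md"])
    # answer = ask_llm(prompt)  # hypothetical: swap in your real LLM client call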
New challenges would come up. If calculators made arithmetic easy, math challenges moved to the next, higher level. If AI does all the thinking and creativity, humans would move to the next level. That level could be some menial work which AI can't touch. For example, navigating the complexities of legacy systems, workflows, and the human interactions needed to keep things working.
Well this sounds delightful! Glad to be free of the thinking and creativity!
Everyone wanted to be an architect. Well, here’s our chance!
The fun part though is that future coding LLMs will eventually be poisoned by ingesting past LLM generated slop code if unrestricted. The most valuable code bases to improve LLM quality in the future will be the ones written by humans with high quality coding skills that are not reliant or minimally reliant on LLMs, making the humans who write them more valuable.
Think about it: A new, even better programming language is created like Sapphire on Skates or whatever. How does a LLM know how to output high quality idiomatically correct code for that hot new language? The answer is that _it doesn't_. Not until 1) somebody writes good code for that language for the LLM to absorb and 2) in a large enough quantity for patterns to emerge that the LLM can reliably identify as idiomatic.
It'll be pretty much like the end of Asimov's "Feeling of Power" (https://en.wikipedia.org/wiki/The_Feeling_of_Power) or his almost exactly LLM relevant novella "Profession" ( https://en.wikipedia.org/wiki/Profession_(novella) ).
When insight from a long-departed dev is needed right now to explain why these rules work in this precise order, but fail when the order is changed, do you have time to git bisect to get an approximate date, then start trawling through chat logs in the hopes you'll happen to find an explanation?
Having to dig through all that other crap is unfortunate. Ideally you have tests that encapsulate the specs, which are then also code. And help with said refactors.
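Agreed in principle, though a test pins down the "what", not the "why". Something like this (pytest-style, invented example) survives a refactor, but it still won't tell the next dev why the rule order was chosen:

    # test_rules.py -- invented example of a spec captured as a test
    def apply_rules(amount: float, is_member: bool) -> float:
        """Order matters: the cap is applied after the member discount."""
        if is_member:
            amount *= 0.9          # rule 1: 10% member discount
        return min(amount, 100.0)  # rule 2: hard cap at 100

    def test_discount_applied_before_cap():
        # 120 -> 108 after the discount -> capped at 100.
        # With the rules swapped (cap first, then discount) we'd get 90 instead.
        assert apply_rules(120.0, is_member=True) == 100.0

    def test_non_member_just_gets_capped():
        assert apply_rules(150.0, is_member=False) == 100.0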
Test-driven development doesn't actually work. No paradigm does. Fundamentally, it all boils down to communication: and generative AI systems essentially strip away all the "non-verbal" communication channels, replacing them with the subtext equivalent of line noise. I have yet to work with anyone good enough at communicating that I can do without the side-channels.
Actually, really thinking about it: if I were running a company allowing or promoting AI use, that would be the first priority. Whatever is prompted must be stored forever.
This is a human problem, not a technological one.
You can still have all your aforementioned broken powerpoints etc and use AI to help write code you would’ve previously written simply by hand.
If your processes are broken enough to create unmaintainable software, they will do so regardless of how code pops into existence. AI just speeds it up either way.
Personally, I'm in the "you shouldn't leave vital context implicit" camp; but in this case, the software was originally written by "if I don't already have a doctorate, I need only request one" domain experts, and you would need an entire book to provide that context. We actually had a half-finished attempt – 12 names on the title page, a little over 200 pages long – and it helped, but chapter 3 was an introduction-for-people-who-already-know-the-topic (somehow more obscure than the context-free PowerPoints, though at least it helped us decode those), chapter 4 just had "TODO" on every chapter heading, and chapter 5 got almost to the bits we needed before trailing off with "TODO: this is hard to explain because" notes. (We're pretty sure they discussed this in more detail over email, but we didn't find it. Frankly, it's lucky we have the half-finished book at all.)
AI slop lacks this context. If the software had been written using genAI, there wouldn't have been the stylistic consistency to tell us we were on the right track. There wouldn't have been the conspicuous gap in naming, elevating "the current system didn't need that helper function, so they never wrote it" to a favoured hypothesis, allowing us to identify the only possible meaning of one of the words in chapter 3, and thereby learn why one of those rules we were investigating was chosen. (The helper function would've been meaningless at the time, although it does mean something in the context of a newer abstraction.) We wouldn't have been able to use a piece of debugging code from chapter 6 (modified to take advantage of the newer debug interface) to walk through the various data structures, guessing at which parts meant what using the abductive heuristic "we know it's designed deliberately, so any bits that appear redundant probably encode a meaning we don't yet understand".
I am very glad this system was written by humans. Sure, maybe the software would've been written faster (though I doubt it), but we wouldn't have been able to understand it after-the-fact. So we'd have had to throw it away, rediscover the basic principles, and then rewrite more-or-less the same software again – probably with errors. I would bet a large portion of my savings that that monstrosity is correct – that if it doesn't crash, it will produce the correct output – and I wouldn't be willing to bet that on anything we threw together as a replacement. (Yes, I want to rewrite the thing, but that's not a reasoned decision based on the software: it's a character trait.)
A program to calculate payroll might be easy to understand, but unless you understand enough about finance and tax law, you can't successfully modify it. Same with an audio processing pipeline: you know it's doing something with Fourier transforms, because that's what the variable names say, but try to tweak those numbers and you'll probably destroy the sound quality. Or a pseudo-random number generator: modify that without understanding how it works, and even if your change feels better, you might completely break it. (See https://roadrunnerwmc.github.io/blog/2020/05/08/nsmb-rng.htm..., or https://redirect.invidious.io/watch?v=NUPpvoFdiUQ if you want a few more clips.)
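The PRNG point is easy to demonstrate with a toy example (a tiny LCG with a deliberately small modulus so the period is countable; the parameters are only for illustration). A one-character "improvement" to the multiplier silently destroys the generator:

    def lcg_cycle_length(a: int, c: int, m: int, seed: int = 0) -> int:
        """Length of the cycle the sequence x -> (a*x + c) % m eventually falls into."""
        seen = {}
        x, step = seed, 0
        while x not in seen:
            seen[x] = step
            x = (a * x + c) % m
            step += 1
        return step - seen[x]

    # a=5, c=3 satisfies the Hull-Dobell conditions for m=256: full period.
    print(lcg_cycle_length(a=5, c=3, m=256))  # 256
    # "Tweak" a to 6 because it looks nicer: the map is no longer a bijection
    # and the sequence collapses to a single fixed point.
    print(lcg_cycle_length(a=6, c=3, m=256))  # 1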
I've worked with codebases written by people with varying skillsets, and the only occasions where I've been confused by the subtext have been when the code was plagiarised.
You’re gonna work on captcha puzzles and you’re gonna like it.
At least we have one person who understands it in detail: the one who wrote it.
But with AI-generated code, it feels like nobody writes it anymore: everybody reviews. Not only do we not like to review, we don't do it well. And if you want to review it thoroughly, you may as well write it. Many open source maintainers will tell you that, many times, it's faster for them to write the code than to review a PR from a stranger they don't trust.
The bitter lesson is going to be for junior engineers who see fewer job offers and don't see consulting powerhouses eating their lunch.
That's what happened to manufacturing after all.
Therefore, I do not anticipate a massive offshoring of software like what happened in manufacturing. Yet, a lot of software work can be fully specified and will be outsourced.
This seems completely out of whack with my experience of AI coding. I'm definitely in the "it's extremely useful" camp but there's no way I would describe its code as high quality and efficient. It can do simple tasks but it often gets things just completely wrong, or takes a noob-level approach (e.g. O(N) instead of O(1)).
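A typical instance of what I mean (invented, but representative): asked for a lookup, it will happily rescan a list inside a loop instead of building a set once.

    # The pattern I keep getting from the model (invented example):
    def find_missing_slow(needed: list[str], available: list[str]) -> list[str]:
        # O(N*M): `in` on a list rescans it for every needed item
        return [item for item in needed if item not in available]

    # What you'd expect from anyone who has been bitten by this once:
    def find_missing_fast(needed: list[str], available: list[str]) -> list[str]:
        have = set(available)  # O(M), built once
        return [item for item in needed if item not in have]  # O(1) membership checks

    assert find_missing_slow(["a", "z"], ["a", "b", "c"]) == \
           find_missing_fast(["a", "z"], ["a", "b", "c"]) == ["z"]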
Is there some trick to this that I don't know? Because personally I would love it if AI could do some of the grunt work for me. I do enjoy programming but not all programming.
I find they all make errors, but 95% of them I spot immediately by eye and either correct manually or reroll through prompting.
The error rate has gone down in the last 6 months, though, and the efficiency of the C# code I mostly generate has gone up by an order of magnitude. I would rarely produce code that is more efficient than what AI produces now. (I have a prompt though that tells it to use all the latest platform advances and to search the web first for the latest updates that will increase the efficiency and size of the code)
We have both been using or integrating AI code support tools since they became available and both writing code (usually Python) for 20+ years.
We both agree that Windsurf + Claude is our default IDE/env from now on. We also agree that for all future projects we think we can likely cut the number of engineers needed by a third.
Based on what I’ve been using for the last year professionally (Copilot) and on the side, I’m confident I could build faster, better and with less effort with 5 engineers and AI tools than with 10 or 15. Also, communication overhead reduces by 3x, which prevents slowdowns.
So if I have a HA 5 layer stack application (fe, be, analytics, train/inference, networking/data mgt) with IPCs between them, instead of one senior and two juniors per process for a total of 15 people, I only need the 5 mid-seniors now.
All this went away. I felt a loss of joy and nostalgia for it. It was bitter.
Not bad, but bitter.
My only gripe is that the models are still pretty slow, and that discourages iteration and experimentation. I can’t wait for the day a Claude 3.5 grade model with 1000 tok/s speed releases, this will be a total game changer for me. Gemini 2.5 recently came closer, but it’s still not there.
The product itself is exciting and solves a very real problem, and we have many customers who want to use it and pay for it. But damn, it hurts my soul knowing what goes on under the hood.
Another area I find very helpful is when I need to use the same technique in my code as someone from another language. No longer do I need to spend hours figuring out how they did it. I just ask an AI and have them explain it to me and then often simply translate the code.
AI coding has removed the drudgery for me. It made coding 10X more enjoyable.
Compared to what you see from game jams, where solo devs sometimes create whole games in just a few days, it was pretty trash.
It also tracks with my own experience. Yes, Cursor quickly helps me get the first 80% done, but then I spend so much time cleaning up after it that I've barely saved any time in total.
For personal projects where you don't care about code quality, I can see it as a great tool. If you actually have professional standards, no. (Except maybe for unit tests, I hate writing those by hand.)
Most of the current limitations CAN be solved by throwing even more compute at it. Absolutely. The question is: will it economically make sense? Maybe if fusion becomes viable some day, but currently, with the end of fossil fuels and climate change? Is generative AI worth destroying our planet for?
At some point the energy consumption of generative AI might get so high and expensive that you might be better off just letting humans do the work.
Recently, we've seen a lot of a shift in insight into not just diving straight into implementation, but actually spending time on careful specification, discussion and documentation either with or without an AI assistant before setting it loose to implement stuff.
For large, existing codebases, I sincerely believe that the biggest improvements lie in using MCP and proper instructions to connect the AI assistants to spec and documentation. For new projects I would put pretty much all of that directly into the repos.
I ended up watching maybe 10 minutes of these streams on two separate occasions, and he was writing code manually 90% of the time on both occasions, or yelling at LLM output.
Then again, primeagen is pretty critical of vibe coding, so it was a super weird match-up anyway. I guess they decided to just have some fun. Maybe advertise the vibe coding "lifestyle" more than the technical merit of the product.
Oh, it isn't the usual content for primeagen. He mostly reacts to other technical videos and articles and rants about his love for neovim and ziglang. He has OK takes most of the time and is actually critical of the overuse of generative AI. But yeah, he is not a technical deep-dive youtuber, more one for entertainment.
Why would this be the exception?
Most professional software development hasn't been fun for years, mostly because of all the required ceremony around it. But it doesn't matter, for your hobby projects you can do what you want and it's up to you how much you let AI change that.
Thought it was OK to use new for an object literal in JS.
With today's AI, driven by code examples it was trained on, it seems more likely to be able to do a good job of optimization in many cases than to have gleaned the principles of conquering complexity, writing bug-free code that is easy and flexible to modify, etc. To be able to learn these "journeyman skills" an LLM would need to either have access to a large number of LARGE projects (not just Stack Overflow snippets) and/or the thought processes (typically not written down) of why certain design decisions were made for a given project.
So, at least for time being, as a developer wielding AI as a tool, I think we can still have the satisfaction of the higher level design (which may be unwise to leave to the AI, until it is better able to reason and learn), while leaving the drudgework (& a little bit of the fun) of coding to the tool. In any case we can still have the satisfaction of dreaming something up and making it real.
My (naive?) assumption is that all of this will come down: the price (eventually free) and the energy costs.
Then again, my daughters know I am Pollyanna (someone has to be).
Depends whether you're in it for the endgame or the journey.
For some the latter is a means to the former, and for others it's the other way around.
Even before AI really took off, that was an experience many developers, including me, had. Outsourcing has taken over much of the industry. If you work in the West, there is a good chance that a large part of your work is managing remote teams, often in India or other low-cost countries.
What AI could change is either reducing the value of outsourcing or make software development so accessible that managing the outsourcing becomes unnecessary.
Either way, I do believe that software developers are here to stay. They won't be writing much code in any case. A software developer in the US costs 100k a year, and writing software simply will never again be worth 100k a year. There are people and programs that are much cheaper.
Many of us do write code for fun, but that results in a skewed perspective where we don’t realize how inaccessible it is for most people. Programmers are providers of expensive professional services and only businesses that spread the costs over many customers can afford us.
So if anything, these new tools will make some kinds of bespoke software development more accessible to people who couldn’t afford professional help before.
Although, most people don’t need to write new code at all. Using either free software or buying off-the-shelf software (such as from an app store) works fine for most people in most situations. Personal, customized software is a niche.
So much code I have written and worked with is either CRUD or compatibility layers for un/under-documented formats.
It's as if most of the industry are plumbers, but we are mining and fabricating the materials for the pipes, and digging trenches to and from every residence, using completely different pipes and designs for every. single. connection.
But it takes a while because the wheel has to be reinvented many times before people give up on improving it. When a new language comes along, a lot of stuff gets reimplemented. There’s plenty of churn, but the tools do get better.
I find the opportunity for improvement exciting, and I'm optimistic for the future.
Like, statistically most software I've seen written, didn't need to be done. There were better ways, or it was already solved, and it was a knowledge or experience gap, or often a not invented here syndrome.
The main thing that frustrates me these days, is trying to do things better doesn't generally align with the quarterly mentality.
That's the opposite of what's happened over the past year or two. Now many more non-technical people can (and are) building software.
Setting aside the fact that the author nowhere says this, it may in fact be plausible.
> That's the opposite of what's happened over the past year or two. Now many more non-technical people can (and are) building software.
Meanwhile half[0] the students supposed to be learning to build software in university will fail to learn something important because they asked Claude instead of thinking about it. (Or all the students using llms will fail to learn something half the time, etc.)
[0]: https://www.anthropic.com/news/anthropic-education-report-ho...
> That said, nearly half (~47%) of student-AI conversations were Direct—that is, seeking answers or content with minimal engagement.
The author tells of his experience of the joy of programming things and figuring stuff out. In the end he says that AI made him lose this joy, and he compares it to cheating in a game. He does not say one word about societal impact or the number of engineers in the future; that's what you interpreted yourself.
Your comment is talking about the ability to build software, vs. the article (in only a single sentence that references this topic, while the other 99% circles around something else) talking about the job market situation. If what you wanted to say was "The author is arguing that people will probably have a harder time getting a job in software development", that would have been correct.
> That's the opposite of what's happened over the past year or two. Now many more non-technical people can (and are) building software.
You're (based on the new comment) explicitly saying that people without technical knowledge are getting jobs in software development sector. Where did you get that info from? Would be an interesting read for sure, if it's actually true.
Forty-six percent of the global population has never hired a human programmer either because a good human programmer costs more than $5 a day{{citation needed}}.
I feel the same with a lot of points made here, but hadn't yet thought about the financial one.
When I started out with web development, that was one of the things I really loved. Anyone can just read about HTML, CSS and JavaScript and get started with any kind of free-to-use code editor.
Though you can still do just that, it seems like you would always drag behind the 'cool guys' using AI.
Without the punctuation, I first read it tautologically as "Most devs that use AI blindly, trust it instead of questioning what it produces". But even assuming you meant "Most devs that use AI, blindly trust it instead of questioning what it produces", there's still a negative feedback loop. We're still at the early experimentation phase, but if/when AI capabilities eventually settle down, people will adapt, learning when and when not they can trust the AI coder and when to take the reins - that would be the skill that people are hired for.
Alternatively, we could be headed towards an intelligence explosion, with AI growing in capabilities until it surpasses human coders at almost all types of coding work, except perhaps for particular tasks which the AI dev could then delegate to a human.
What makes you think that will be necessary?
The hardware for AI is getting cheaper and more efficient, and the models are getting less wasteful too.
Just a few years ago GPT-3.5 used to be a secret sauce running on the most expensive GPU racks, and now models beating it are available with open weights and run on high end consumer hardware. Few iterations down the line good-enough models will run on average hardware.
When that Xcom game came out, filmmaking, 3D graphics, and machine learning required super expensive hardware out of reach of most people. Now you can find objectively better hardware literally in the trash.
Moore's law is withering away due to physical limitations. Energy prices go up because of the end of fossil fuels and rising climate change costs. Furthermore the global supply chain is under attack by rising geopolitical tension.
Depending on US tariffs and how the Taiwan situation plays out and many other risks, it might be that compute will get MORE expensive in the future.
While there is room for optimization on the generative AI front, we still have not even reached the point where generative AI is actually good at programming. We have promising toys, but for real productivity we need orders of magnitude bigger models. Just look at how ChatGPT 4.5 is barely economically viable already with its price per token.
Sure if humanity survives long enough to widely employ fusion energy, it might become practical and cheap again but that will be a long and rocky road.
The way we use LLMs is also primitive and inefficient. RAG is a hack, and in most LLM architectures the RAM cost grows quadratically with the context length, in a workload that is already DRAM-bound, on a hardware that already doesn't have enough RAM.
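Back-of-the-envelope for the quadratic part, assuming naive attention with no fused-kernel tricks (the scores matrix is materialised per head, per layer):

    \text{scores} = QK^{\top} \in \mathbb{R}^{n \times n}, \qquad \text{cost}(n) \propto n^{2}
    n = 8\,000 \Rightarrow n^{2} = 6.4 \times 10^{7}, \qquad
    n = 128\,000 \Rightarrow n^{2} \approx 1.6 \times 10^{10}

Every 10x jump in context length is 100x in attention cost, which is part of why RAG-style workarounds exist at all.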
> Depending on US tariffs […] end of fossil fuels […] global supply chain
It does look pretty bleak for the US.
OTOH China is rolling out more than a gigawatt of renewables a day, has the largest and fastest growing HVDC grid, a dominant position in battery and solar production, and all the supply chains. With the US going back to mercantilism and isolationism, China is going to have Taiwan too.
Don't get me wrong, it lets me be more productive sometimes but people that think the days of humans programming computers are numbered have a very rosy (and naive) view of the software engineering world, in my opinion.
I'm not sure how much RAM is on the average smartphone owned by someone earning $5/day*, but it's absolutely not going to be the half a terabyte needed for the larger models whose weights you can just download.
It will change, but I don't know how fast.
* I kinda expect that to be around the threshold where they will actually have a smartphone, even though the number of smartphones in the world is greater than the number of people
As for: “In some countries, more than 90% of the population lives on less than $5 per day.”
Well, with the orders of magnitude difference already in place, this is not going to meaningfully impact that at all.
I'm not dismissing this: I'm saying that it isn't much of a building block in thinking about all of the things AI is going to change and should be addressed as a result, because it's simply in the pile of problems labeled "was here before, will be here after".
And really, it ought to be thought of in the context of “can we leverage AI to help address this problem in ways we cannot do so now?”
The bathwater of economics will surely dirty, but you don't need to throw out the baby of hobbies with it.
> It makes economic sense, and capitalism is not sentimental.
I find this kind of fatalism irritating. If capitalism isn't doing what we as humans want it to do, we can change it.
>This is the exact same feeling I’m left with after a few days of using Claude Code.
For me what matters is the end result, not the mere act of writing code. What I enjoy is solving problems and building stuff. Writing code is a part.
I would gladly use a tool to speed up that part.
But from my testing, unless the task is very simple and trivial, using AI isn't always a walk in the park, simple and efficient.
It is also useful for learning from independent code snippets, e.g. learning a new API.
For some reason he also included an import for "resolve from dns".
(the code didn't even need a promise there)
I don't regard programming as merely the act of outputting code. Planning, architecting, having a high-level overview, keeping the objective in focus also matter.
Even if we regard programming as just writing code, we have to ask ourselves why we do it.
We plant cereals to be able to eat. At first we used some primitive stone tools to dig the fields. Then we used bronze tools, then iron tools. Then we employed horses to plough the fields more efficiently. Then we used tractors.
Our goal was to eat, not to plough the fields.
Many objects are mass produced now while they were the craft of the artisans centuries ago. We still have craftsmen who enjoy doing things by hand and whose products command a big premium over mass market products.
I don't have an issue if most of the code will be written by AI tools, provided that code is efficient and does exactly what we need. We will still have to manage and verify those tools, and to do that we will still have to understand the whole stack from the very bottom - digital gates and circuits to the highest abstractions.
AI is just another tool in the toolbox. Some carpenters like to use very simple hand tools while others swear by the most modern ones, like CNC.