This hits harder depending on how much money, social capital, or debt you accumulated before this volatility began. If you’ve paid off your debts, bought a house, and stabilized your family life, you’re gambling with how comfortable the coming years will be. If you’re a fresh grad with student debt, no house, and no social network, you’re more or less gambling with your life.
Either way, there is little to no path toward the "family + place to live + stable job" model.
Where I am, I’m alone. Don’t underestimate the value of community.
It paid off for me, but who knows if I would have taken that leap later in life.
>> My actual accomplishments in the world of computing ... are the stuff of legends
We agree on the legends part
I suspect the problem is elsewhere and you are unwilling or uncomfortable to discuss it.
It's small consolation if sitting in a classroom is something you truly hate, but the guys who are programming pros before they go into a CS program are very often the ones who do really well and get the most out of it.
Tinkering is great, but (good) school teaches you all the things, not just the ones you'd obviously seek out on your own, and then you don't have any knowledge gaps.
Any fool can probably weld metal, but how do you learn to do it properly if you're never taught properly?
This response, along with your OP, is so pretentious and condescending. It seems you feel that you’re superior to everyone intellectually. I assume that you hold the same attitude in person, and this is not helping your situation.
The irony is that I’ve done exactly this. I tried to start a business in my early 20s and failed dramatically. I stopped developing altogether for a decade while I did minimum wage jobs and struggled to find a career. I started developing again in my early 30s, and half a decade later I’m running a software business.
You may well be intelligent but severely lacking in other necessary areas. It seems it is you who has much to learn.
It's been entirely worth it for me and I cannot imagine my life without kids. But it's a deeply personal choice and I am not buying or selling the idea. I would just say nobody is ever ready and the fears around having them probably are more irrational than rational. But not wanting them because of how it might change your own life is a completely valid reason to not have kids.
> the fears around having them probably are more irrational than rational
My $0.02 is that if anything, the fears people have about how much their lives would be transformed are significantly lacking, and a lot of the "it's not so bad" advice is post-hoc rationalization. I mean, it's evolutionarily excellent that we humans choose to have kids, but it's very rational to be afraid and to postpone or even fully reject this on an individual basis. And as an industry and as a society, we should probably do a lot more to support parents of young children.
I found this smbc comic about a "happiness spigot" to be the most poignant metaphor - https://www.smbc-comics.com/comic/happiness-spigot?utm_sourc...
Skills like Legacy Code Anthropology and Reverse Engineering will grow into higher demand. As with the worst legacy apps built by junior developers and non-developers (Access/Excel VBA and VB6 alone produced a lot of "low code" legacy by non-developers), LLMs are great at "documenting" What was built, but almost never Why or How, so skills like "Past Developer Mind Reading" and "Code Seances" will also be in high demand.
There will be plenty of work still to do "when" everything is vibe coded. It's going to look a lot more like the dark-matter work that much of software engineering already is in big enterprise: fixing other people's mistakes and trying to figure out, as best you can, why they made those mistakes so you can in theory prevent the next one.
It's a very dark, cursed hope to believe that the future of software engineering is the darkest parts of its present/past. As a software developer who has spent too large a share of my career in the VB6 IDE, and who often joked that my "retirement plan" was probably falling into an overpaid COBOL consultancy somewhere down the line, I'm depressed not that there won't be enough work to go around, but that there will be more legacy work than ever, that it will be some of the ugliest, most boring, least fun work of my career, forever, and that it will have even less "cushiness" to make up for it. (That "dream" of a highly paid COBOL consultancy disappears when legacy code becomes too common and thus the commodity job. Hard to demand cushy, higher salaries when the supply is flooded.)
Maybe eventually you'll want to trust your corporate credit card to the LLMs too, but that's gonna be one of the last things where humans get taken out of the loop. And once the AI is that general what even is the CEO, salesperson, or entrepreneur's role either?
That "programmer/archeologist" idea of Vernor Vinge's books is likely to grow as the piles of generated code get bigger and the feasibility of tossing increasingly-large piles into a single context window at once might not keep up (or probably won't be the best or most cost-effective).
You can have hope even if a positive outcome isn't guaranteed. In fact that is when hope is the most valuable (and maybe also difficult to find).
Unless you're a plumber.
(eventually properties collapse, but if they keep the values inflated this way, that won't matter to them)
If you want to know more, look into RCMP reports on high property prices in Vancouver BC/Canada circa 2010s+, for example.
The prices will adapt, but the equilibrium will always be an elite-oriented economy where accommodating the masses is a second-tier goal.
They are promising CEOs they can eliminate their workforce to increase profits. For people working for a wage it’s all downside, no upside.
It's fine to have that opinion, but please frame it as an opinion, or else give me the lotto numbers for next week if you can predict the future that accurately.
Why are you certain of this?
We've had 50+ years of deteriorating worker conditions and a massive concentration of wealth among something like 10,000 people. The 1980s crushed the labor movement, to all of our detriment.
The GFC destroyed the career prospects of many millennials, who discovered their entry-level positions no longer existed, so we created a generation that was loaded with student debt and working as baristas.
A lot of people on HN ignored this because the 2010s were good for tech people, but many of us didn't realize this post-GFC wave would eventually come for us. And that's what's happening now.
So on top of the millennials we now have Gen Z, who have correctly realized they'll never have security, never buy a house and never retire. They'll live paycheck to paycheck, barely surviving until they die. Why? All so Jeff Bezos can have $205 billion instead of $200 billion.
I'm reminded of the quote "only nine meals separate mankind from anarchy".
I believe we've passed the point where we can solve this problem with electoral politics. Western democracies are being overtaken by fascists because of increasing desperation and the total destruction of any kind of leftism since WW2. At this point, it ends violently and sooner than many think.
This is, in a twisted way, an expression of hope. The expectation of a grand collapse is one that's shared by many, but can you explain what gives you that complete certainty that it's near or that it's coming at all?
The far bleaker possibility that I think is totally realistic is that things continue getting worse but never cross that final line. Things are mismanaged, everyone is worse off, but that nine-meal gap is never allowed to open, and any real threats are squashed at the roots. There's no singular collapse; instead of one definitive societal stab wound followed by a long hospital recovery, we're inflicted with a thousand minor cuts that bring us to near-death.
The people who benefit from all this have been refining their knowledge and growing their power and influence. They're near-gods at this point. They may make a mistake, but what if they don't and the current situation is maintained for decades to come?
Europe created the Russia-Ukraine problem by giving their energy security to Russia. Interestingly, this is a (super) rare win for the first Trump administration: forcing Europe to build an LNG port in 2018 [1] and warning against the dangers of dependence on Russian natural gas. This warning has been completely vindicated.
Europe has stagnant wages, a declining social safety net (e.g. raising the retirement age in France), a housing affordability crisis in most places (notably excluding Vienna, and there needs to be more attention on why that is), inflation problems and skyrocketing energy costs. It's the same 1930s economic conditions that gave rise to fascism last time.
Europe has the same anti-immigrant backlash in response to declining material conditions that the US has. In Europe's case it's against Syrians and North Africans. In the UK it also included Polish people.
France is really a perfect example here. Despite all the economic problems, you have Macron siding with Le Pen to keep Mélenchon and the left out of power.
All of this is neoliberalism run amok and it comes from decisions in WW1, WW2 and post-WW2, most notably that Europe (and the US) decided the biggest threat was socialism and communism. And who's really good at killing communists? Nazis. Just look at the resume of Adolf Heusinger, an early NATO chair [2].
Europe has also outsourced their security to the US via NATO. And NATO is on the verge of collapse. There's a lot of thinking that Congress won't allow Trump to withdraw from NATO, as many in his administration want to do, but NATO could well splinter if Trump takes Greenland.
What happens to Europe with an expansionist Russia and no US security guarantee?
Oh and speaking of worker protections, what happens when the price of bailing out European energy or security issues is the privatizing of your otherwise universal healthcare? It was rumored that parts of the administration wanted the UK to privatize the NHS as part of a post-Brexit trade deal. 15 years of austerity has primed the population to accept this kind of thing.
Many Europeans (rightly) look down on the insanity that's currently going on in the US but at the same time they don't realize just how dire the situation is in Europe.
[1]: https://www.reuters.com/article/business/germany-to-build-ln...
While European military strength isn't in its prime right now, Europe's capabilities without the US are often badly underestimated. Not that most of the other issues aren't applicable - everyone appears to be more or less fucked in multiple ways - but losing a conventional war to Russia isn't on the table, barring unthinkable mismanagement or a world-changing event (preemptive use of nukes, etc.). Russia has stalemated a war against a single country that has a fraction of Russia's wealth, loads of antiquated equipment and a small sample of Western tech. The Russian economy has a massive hole in it, largely thanks to said war, and is only propped up by existing savings - they're not in danger right now, but they're rapidly approaching that point with no way to stop it. Even if the war had never happened, they'd still be far weaker than the whole of Europe and likely some individual European countries.
It wasn't "given", Russia did it on purpose. There are SO MANY cases of politicians advocating for Russian natural gas or oil as an energy source who were later revealed to be 100% paid for with Russian money.
I wonder how that is supposed to work when the Executive branch has proven they can do whatever they want regardless of the other two branches. The rules are worthless if there are no consequences for breaking them.
> At the height of the Cold War in the 1950s, law enforcement and intelligence leaders like J. Edgar Hoover at the F.B.I. and Allen Dulles at the C.I.A. aggressively recruited onetime Nazis of all ranks as secret, anti-Soviet “assets,” declassified records show. They believed the ex-Nazis’ intelligence value against the Russians outweighed what one official called “moral lapses” in their service to the Third Reich.
And NATO [2]:
> The most senior officers of the latter group were Hans Speidel and Adolf Heusinger, who on Oct. 10 and Nov. 12, 1955, respectively, were sworn in as the Bundeswehr’s first two lieutenant generals... Heusinger, a POW until 1948, ...
> That spring Heusinger succeeded Speidel as chief of Combined Forces when the latter was appointed commander in chief of Allied Land Forces in Central Europe becoming the first German officer to hold a NATO commander in chief position
And it goes on.
The Nazi links of Operation Paperclip [3], which brought in Wernher von Braun, are well established.
And there are many others [4].
I didn't say all the non-communists were Nazis. I said the neoliberal and imperialist projects of the US and Western Europe post-WW2 sided with and gave haven to Nazis to fight communism, which is true.
Fascism in the US didn't begin with the Nazis, however. You can trace its roots back to the white supremacy the US was founded on, the slave trade, the Civil War, Reconstruction, and even the Business Plot [5] that sought to overthrow FDR in 1933, probably labelling him a communist.
But the Nazis were very popular in the US, culminating with the German American Bund rally at Madison Square Garden in 1939 [6].
Oh and let's not forget Henry Ford's contribution to all this, notably The International Jew [7], so much so that Hitler praised him in Mein Kampf.
Personally, I'm of the view that a lot of this can be traced back to simply not stringing up all the former slave owners after the Civil War.
[2]: https://www.historynet.com/these-nato-generals-had-unusual-b...
[3]: https://en.wikipedia.org/wiki/Operation_Paperclip
[4]: https://www.npr.org/2014/11/05/361427276/how-thousands-of-na...
[5]: https://en.wikipedia.org/wiki/Business_Plot
[6]: https://en.wikipedia.org/wiki/1939_Nazi_rally_at_Madison_Squ...
[7]: https://www.thehenryford.org/collections-and-research/digita...
> This warning has been completely vindicated.
That's funny. The US warned Europe of dependence on Russia all the while promoting policies that antagonized Russia in Europe (e.g. NATO expansion). It's almost like the US wanted to push Europe and Russia against each other, so that it could sell its way more expensive natural gas in Europe!? Perhaps they did not anticipate the Russians would be bold enough to go to war on that, but they were certainly willing to accept the risk.
> It's the same 1930s economic conditions that gave rise to fascism last time.
Please. Europe may have some issues, but it's not nearly as bleak as you try to make it... I live here, I go around a lot. Europe is as affluent as ever. People are having a good time, in general. In the 1930s some countries had hyperinflation... you're comparing that to 5% yearly inflation these days?
> Europe has also outsourced their security to the US via NATO.
On that we agree. It was a really bad decision, but understandable given how much US soft power Europeans absorbed after WWII. Some Europeans act like European countries are US states. They take to the streets to join movements that are 100% American, like BLM. It's bizarre.
> What happens to Europe with an expansionist Russia and no US security guarantee?
It shocks me that people like you think Russia is a serious threat to all of Europe rather than just Ukraine (and maybe Moldova and Georgia if you push it). How can you justify that view? Russia has not drawn any red lines about anything related to the rest of Europe like it had with Ukraine and Georgia (red lines that were thoroughly ignored by Europe, with the strong support and, should I say it, advice of the USA). It has not said anything as threatening as Trump saying Greenland will be part of America the nice way or the hard way. Yet you believe the US is not a threat, but Russia is. There's some serious dissonance in this line of thought.
> Oh and speaking of worker protections, what happens when the price of bailing out European energy or security issues is the privatizing of your otherwise universal healthcare?
Americans have been saying this for 50 years... they just can't accept that the system has been working well for workers in Europe all this time (though not as well for companies; as you can clearly notice, it's much harder to build behemoths like FAANG in Europe, no doubt because you can't really do that without exploiting workers).
I think there's a certain amount of historical revisionism going on with this. It is complicated however.
You can point to George W. Bush opening the door to NATO membership in 2008 [1] as a turning point, but NATO had been gobbling up former Communist bloc countries for more than a decade.
Another noteworthy event is the 2014 revolution that ousted Russian puppet Viktor Yanukovych as president of Ukraine, culminating in the Minsk Agreement (and Minsk II) to settle disputes in the Donbas and elsewhere.
Russia does have legitimate security concerns in the region, such as access to the Black Sea and not having NATO on its border. And by "legitimate" here I simply mean that Europe and the US do the exact same thing, most notably when the US almost started World War 3 over Soviet influence in Cuba (which itself was a response to the US installing nuclear MRBMs in Turkey). Also, in terms of the threat of a conventional land war, Ukraine is basically a massive highway into Russia, previously used by both Hitler and Napoleon. Not that it worked out well for either.
Whatever the case, having another Belarus in Ukraine was ideal for Russia, and I think their designs on this long predated any talk of Ukraine joining NATO, which was DOA anyway. Germany, in particular, was always going to veto expanding NATO to share a border with Russia.
My point here is I'm not convinced that any promises of neutrality by Ukraine would've saved Ukraine from Russian designs.
> Europe is as affluent as ever
Based on what? Personal anecdotes? The EU acknowledges a housing crisis [2].
> It shocks me that people like you think Russia is a serious threat to all of Europe,
It is a serious threat. Not in the conventional land-war a la WW2 sense, but we're dealing with the world's other nuclear superpower (China doesn't have the nuclear arsenal Russia does, by choice). And Putin's playbook is oddly reminiscent of Hitler's in the lead-up to the war. That is, Hitler argued he was unifying Germans in Austria, the Sudetenland, etc. Similarly, Putin is using ethnically Russian populations as an excuse to intervene and take territory.
There is a significant Russian population in Latvia who are stateless. IIRC it's estimated there are more than 200,000 of them.
American security and energy guarantees are really the only things holding Europe together right now. If NATO splinters, what's to stop Russia from seizing parts of Latvia?
This situation is precarious.
> they just can't accept that the system has been working well in Europe for workers for all this time
No, they don't care that it works. In fact, they've been doing everything they can to make it not work. We now have a generation of people in many European countries (and I include the UK here) who have never known anything but austerity and constant government cutbacks. Satisfaction with the NHS has deteriorated as it's been deliberately starved for 15+ years.
This is a well-worn and successful playbook called starving the beast [3]. It's laying the groundwork for a push for privatization. It'll be partial privatization to start with and just creep from there.
I'm not sure you truly appreciate just how much US foreign policy is designed to advance the interests of American corporations.
[1]: https://www.theguardian.com/world/2008/apr/01/nato.georgia
[2]: https://www.consilium.europa.eu/en/policies/housing-crisis/
Russia in the 1990s was a hugely struggling nation that could barely feed its population, but even then it opposed NATO expansion strongly!
> The decision for the U.S. and its allies to expand NATO into the east was decisively made in 1993. I called this a big mistake from the very beginning. It was definitely a violation of the spirit of the statements and assurances made to us in 1990.
Source: (Gorbachev in interview from 2014) https://www.rbth.com/international/2014/10/16/mikhail_gorbac...
> Based on what? Personal anecdotes? The EU acknowledges a housing crisis [2].
The housing crisis is mostly limited to inflated prices in large cities, and is itself evidence that people have good purchasing power, since it's not being driven by foreign capital (at least where I live, in the Nordics).
Which statistics show the EU is NOT affluent?? If we look at GDP (+1.35% yearly over the last 10 years [1], not too bad for developed economies) and unemployment (currently around 6% for the whole EU [2]), it's not bad, especially if you consider the huge number of recent immigrants (unemployment among the native population is much lower than the total figures show; in Sweden, for example, native Swedes have near-full employment).
But yeah, I think personal anecdotes are also helpful to establish whether a country looks like it's going down... and everywhere I go, I see only good signs: shops expanding, lots of new buildings, full bars and restaurants, people driving the latest electric cars... What I don't see is things like businesses closing down, struggling local shops etc., which are normally very visible (I know, I've seen that) in economies that are in dire straits.
> There is a significant Russian population in Latvia who are stateless. IIRC it's estimated there are more than 200,000 of them.
Yes, I've been to Latvia, and Russian is clearly spoken by a large percentage of the population (to my surprise, including the young generation). As long as they are not suppressed from speaking their language (as is happening in Ukraine right now, and even before the war, and in some areas of the Baltic countries) and they're not made second-class citizens (as is happening in Estonia, where they can no longer vote [3]), Putin will not have any excuse to act, and those countries would be wise not to provide such excuses! Anyway, I think that regardless of that, NATO will survive even without the USA (as something else, perhaps, but the union between European states is extremely important to maintain), and I really believe Article 5 will exist even if NATO evolves into a Europe-only alliance.
> I'm not sure you truly appreciate just how much US foreign policy is designed to advance the interests of American corporations.
Not sure what you're referring to... I think I do appreciate it. The interview [4] Trump had with the American oil companies after the partial "annexation" of Venezuela couldn't be a better example of that.
[1] https://en.wikipedia.org/wiki/Economy_of_the_European_Union#...
[2] https://en.wikipedia.org/wiki/Economy_of_the_European_Union#...
[3] https://www.lemonde.fr/en/russia/article/2025/03/26/estonia-...
The only way to prevent this is to guarantee that people without jobs will still have a roof over their heads and enough calories and micronutrients every day to survive - and some entertainment.
I guess the next turning of the wheel will be similar too.
Or maybe we all just have poor imaginations.
i would say that we firmly live in the American Empire with techno-feudalistic tendencies, but a historical event of such magnitude as the complete dissolution of the American state will probably see a reversion to a more traditional feudal system. Think Jeff Bezos and Bill Gates buying up and becoming the Dukes of the PNW.
personally though i don't think we are at this stage yet or even close to it. until the federal government becomes COMPLETELY inept and the average citizen cannot buy food, this won't happen. yes market conditions are currently not the best but we are nowhere near starvation.
I need about 4.5 years until basic financial independence; I wonder how it feels to be at that point.
Will people still buy and sell houses?
Will house prices go down because no one can afford them?
Will house prices go up because so few will sell their assets?
I would like to buy a small farm today, debt-free and with cheap energy (an upfront investment in solar and storage), but I need a few more years.
Can the world really change that fast? I don't know, but the progress in AI is fast, very fast.
I feel for the mid-career people with families to support. Can't imagine how stressful that would be
Of course labor jobs will always exist, and a 25 year old would (on average) be much more physically able for that than someone older, so it goes both ways.
A mortgage: if you were assuming a strong income that would continue, you very likely could be forced to sell your house and take a huge loss
A family, kids: people relying on you
Time: at this point you have retirement plans and financial deadlines you need to hit if it's to ever become a reality
God forbid you have any health issues that cost $$$ which tend to come as you age. Can you afford to lose health insurance?
If you think about re-skilling and starting off at entry level.. people don't really want to hire older beginners.
Of course that's absolute worst case scenario, but I guarantee there are a lot of people there.
I'd 100% choose living out of my car for a while. In your 20s you can upend everything and completely reinvent yourself. Time, minimal responsibilities and energy are priceless
> could pivot to business and people-oriented roles by leveraging what they have now
There's a reason that's really vague, right? Because who knows if it'll be available
I don't think AI is gonna reach this point, but who knows. It's not off the table.
At this point I’ve realized I need to cast all other ambitions aside and work on getting some out of the way land that I own.
Honestly? It does and I feel completely hopeless. I'm very, very angry with the world/life at this point to put it mildly.
This is how basically everyone I know actually uses LLMs.
The whole story about vibecoding and LLMs replacing engineers has become a huge distraction from the really useful discussions to be had. It’s almost impossible to discuss LLMs on HN because everyone is busy attacking the vibecoding strawman all the time.
You're maintaining a large, professional codebase? You definitely shouldn't be vibe coding. The fact that some people are is a genuine problem. You want a simple app that you and your friends will use for a few weeks and throw away? Sure, you can probably vibe code something in 2 hours instead of paying for a SaaS. Both have their place.
I think the next step is to realize that this kind of product manager role is one that more "engineers" should be willing to take on themselves. User interviews, research, and product requirement docs are understandably not within the wheelhouse of most technical people, but building lots of prototypes and getting feedback is a much better fit!
Because the first thing that comes from individual speed-up is not engineers making more money but there being fewer engineers. How many fewer is the question: would they be satisfied with 10%, 50%, or maybe 99%?
If we doubled agricultural productivity globally we'd need to have fewer farmers because there's no way we can all eat twice as much food. But we can absolutely consume twice as much CSS, try to play call of duty on our smart fridge or use a new SaaS to pay our taxes.
Actually, most software either is garbage or goes to waste at some point too. Maybe that's too negative. Maybe one could call it rot or becoming obsolete or obscure.
It’s copium to think that, with the combination of AI and an oversupply of “good enough” developers, it won’t be harder for developers to get jobs. We are seeing it now.
It wasn’t this bad after the dot-com bust. Back then, if you were just an ordinary enterprise developer working “in the enterprise” in a 2nd-tier city (raises hand), jobs were plentiful.
I saw this coming on the enterprise dev side where most people work back in 2015. Not AI of course, but the commoditization of development.
I started moving closer to the “business”, got experience in leading projects, soft skills, requirements gathering, AWS architecture etc.
I’m not saying the answer is to “learn cloud”. I am saying that it’s important to learn people skills and be the person trusted with strategy and don’t just be a code monkey pulling well defined tickets off the board.
I see this fallacy all the time but I don't know if there is a name for it.
I mean, we used to make fun of MBAs for saying the same thing, but now we should be more receptive to the "Line Always Goes Up" argument?
I was referring specifically to this point, which, IMHO, is a fallacy:
>>> There seems to be effectively infinite demand for software from consumers and enterprises so the cheaper it gets the more they buy.
There is no way to use the word "infinite" in this context, even if qualified, that is representative of reality.
The demand for paid software is decreasing because these AI companies are saying "Oh, don't buy that SaaS product, because you can build it yourself now".
Our attention is also a finite resource (24h a day max). We already see how this has been the cause of the enshittification of large swathes of software, like social media, where grabbing attention for a few more seconds drives the main innovation...
Depending on how the future shapes up, we may have gone from artisans to middlemen, at which point we're only in the business of added value and a lot of coding is over.
Not the Google kind of coding, but the "I need a website for my restaurant" kind, or the "I need to aggregate data from these Excel files in a certain way" kind. Anything where you'd accept cheap and disposable. Perhaps even the traditional startup, if POCs are vibecoded and engineers are only introduced later.
Those are huge businesses, even if they are not present in the HN bubble.
I am afraid that kind of job was already gone by 2015. No-code website makers have been available since then, and if you can't do it yourself you can just pay someone on Fiverr and get it done for $5-50 at this point; it's so efficient even AI won't be more cost-effective than that. If you have $10k saved you can hire a competitive agency to maintain and build your website. This business is completely taken over by low-cost Fiverr automators and agencies for high-budget projects. Agencies have become so good now that they manage websites for everyone from Adidas to Lando Norris to your average mom & pop store.
Note that I own an agency that does a lot of what you say is “solved”, and I assure you that it’s not (at least in terms of being an efficient market).
SMBs with ARR up to $100m (or even many times more than that in ag) struggle to find anyone good to do technical work for them, either internally or externally, on a consistent basis.
> I am afraid that kind of job was already gone by 2015.
Conceptually, maybe. In practice, definitely not.
> No-code website makers have been available since then
… that mostly make shit websites.
> and if you can't do it yourself you can just pay someone on Fiverr and get it done for $5-50 at this point,
Also almost certainly a shit website at that price point, probably using the no-code tools mentioned above.
These websites have so many things wrong with them that demonstrably decrease engagement or lose revenue.
> it's so efficient even AI won't be more cost-effective than that.
AI will be better very soon, as the best derivative AI tools will be trained on well-developed websites.
That said, AI will never have taste, and it will never have empathy for the end user. These things can only be emulated (at least for the time being).
> If you have $10k saved you can hire a competitive agency to maintain and build your website
You can get an OK “brochure” website built for that. Maintaining it, if you have an agency that actually stays in business, will be about $100 minimum for the lowest-effort touch, $200 for an actual one-line change (like business hours), and up from there for anything substantial.
If you work with a decent, reputable agency, a $10k customer is the lowest on the totem pole amongst the agency’s customer list. The work is usually delegated to the least experienced devs, and these clients are usually merely tolerated rather than embraced.
It sucks to be the smallest customer of an agency, but it’s a common phenomenon amongst certain classes of SMBs.
> This business is completely taken over by low-cost Fiverr automators and agencies for high-budget projects.
This is actually true. Mainly because any decent small agency either turns into one that does larger contracts, or it gets absorbed by one.
That said, there is a growing market for mid-sized agencies (“lifestyle agencies”?).
> Agencies have become so good now that they manage websites for everyone from Adidas to Lando Norris to your average mom & pop store
As mentioned above, you absolutely do not want to be a mom and pop store working with a web agency that works with any large, international brand like Adidas.
I appreciate your points from a conceptual level, but the human element of tech, software, and websites will continue to be a huge business for many decades, imho.
If the prototype can be just dropped in and clear a PR and comply with all the standards, you're just doing software engineering for less money!
What’s “the vibecoding strawman”? There are plenty of people on HN (and elsewhere) repeatedly saying they use LLMs by asking them to “produce full apps in hours instead of weeks” and confirming they don’t read the code.
Just because everyone you personally know does it one way, it doesn’t mean everyone else does it like that.
https://en.wikipedia.org/wiki/Faulty_generalization
Though I get that these days people tend to use “strawman” for anything they see as a bad argument, so you could be right in your assessment. Would be nice to have clarification on what they mean.
Good point.
> I think an accusation of straw-manning is in part an accusation of another's intent (or bad faith - not engaging with the argument).
There I partially disagree. Straw-manning is not engaging with the argument but it can be done accidentally. As in, one may genuinely misunderstand the nuance in an argument and respond to a straw man by mistake. Bad faith does require bad intent.
"Writing code is no longer needed for the most part."
It was a great post and I don't disagree with him. But it's an example of why it isn't necessarily a strawman anymore, because it is being claimed/realized by more than just vibecoders and hobbyists.
> Also note that the python visualizer tool has been basically written by vibe-coding. I know more about analog filters -- and that's not saying much -- than I do about python. It started out as my typical "google and do the monkey-see-monkey-do" kind of programming, but then I cut out the middle-man -- me -- and just used Google Antigravity to do the audio sample visualizer.
* the README was clearly not written by an LLM, nor aided by one
* he still uses GPLv2 (not 3) as the license for his works
You might think that everyone has FOMO or is an anti-AI Luddite when of course there are a LOT of us somewhere in the middle, just trying to get our work done and trying to figure out what our careers will look like in 5-10 years.
One big thing that no one seems to talk about - GenAI is unlocking many new (and oftentimes "small") business ideas that were not practical just a few years ago. I have witnessed this firsthand... however, it will also take away jobs. How many, who knows?
tl;dr everyone is full of shit or selling something or terrified to the point where they can't think straight. And no one has a crystal ball.
In part, I think what people are responding to is the trajectory of the tools. I would agree that they seem to be on an asymptote toward being able to do a lot more things on their own, with a lot less direction. But I also feel like the improvements in that direction are incremental at this point, and it's hard to predict when or if there will be a step change.
But yeah, I'm really not sure I buy this whole thing about orchestrating a symphony of agents or whatever. That isn't what my usage of AI is like, and I'm struggling to see how it would become like that.
But what I am starting to see, is "non-programmers" beginning to realize that they can use these tools to do things for their own work and interests, which they would have previously hired a programmer to do for them, or more likely, just decided it wasn't worth the effort. I think for those people, it does feel like a novel automation tool. It's just that we all already knew how to do this, by writing code. But most people didn't know how to do that. And now they can do a lot more.
And I think this is a genuine step change that will have a big effect on our industry. Personally, I think this is ultimately a very good thing! This is how computers should work, that anybody can use them to automate stuff they want to do. It is not a given that "automating tasks" is something that must be its own distinct (and high paying) career. But like any disruption, it is very reasonable to feel concerned and uncertain about the future when you're right in the thick of it.
Dunno why the author thinks an AI-enhanced junior can match the "output" of a whole team, unless he means in generating lines of code, which is to say tech debt.
Being able to put a lot of words on screen is not the accomplishment in programming. It usually means you've gone completely out of your depth.
Because the author has a vested interest in peddling this bullshit given he works on Gemini at Google.
Many times, bad code is sufficient. Actually, too many times: IMHO that is the reason the software industry produces lower-quality software every year. Bad products are often more profitable than good products. But it's not always about making bad products: sometimes it's totally fine to vibe code a proof of concept or prototype, I would say.
Other times, we really need stable and maintainable code. I don't think we can or want to vibe code that.
LLMs make low-quality coding more accessible, but I don't think they remove the need for high-quality coding. Before LLMs, the fraction of low-quality code was growing already, just because it was already profitable.
An analogy could be buildings: everybody can build a bench that "does the job". Maybe that bench will be broken in 2 months, but right now it works; people can sit on it. But not everybody can build a dam. And if you risk going to jail if your dam collapses, that's a good incentive for not vibe coding it.
I've built a few things end to end where I can verify the tool or app does what I want and I haven't seen a single line of the code the LLM wrote. It was a creepy feeling the first time it happened but it's not a workflow I can really use in a lot of my day to day work.
Not really sure why this article is talking about what happens 2 years from now since that’s 8 times longer than anything anyone with money or power cares about.
The street cred doesn't come from managing more resources, the street cred comes from delivering more.
Then I have some non-trivial side projects where I don’t really care about the code quality, and I’m just letting it run. If I dare look at the code, there’s a bunch of repetition. It rarely gets stuff right the first time, but that’s fine, because it’ll correct it when I tell it it doesn’t work right. Probably full of security holes, code is nasty, but it doesn’t matter for the use-cases I want. I have produced pieces of software here that are actively making my life better, and it’s been mostly unsupervised.
The next step was for me to write a cron job that would reapply the chattr +i and rewrite the file once every 5 minutes. Sort of an enforcer. I used Claude (web) to write this and cut/pasted it, just because I didn't want to bother with bash syntax that I've learned and forgotten several times.
I then wanted something stronger and looked at publicly available things like Pluckeye, but they didn't really work the way I wanted. So I tried to write a quick version using Claude (web) and started running it (October 2025). It solved my problem for me.
I wanted a program to use aider on, and I started with this. Every time I needed a feature (e.g. temporary unblocks, preventing tampering and uninstalling, blocking in the browser, violation tracking, etc.), I wrote out what I wanted and had the agent do it. Over the months, it grew to around 4k lines (single file).
Around December, I moved from aider to Claude Code and continued doing this. The big task I gave it was to refactor the code into smaller files so that I could manage context better. It did this well and added tests too (late December 2025).
I added a helper script to update URLs to block from various sources. Vibe-coded too. Worked fine.
Then I found it hogging memory because of some crude mistakes I'd vibe-coded early on, and fixed that. Cost me around $2 to do so. (Jan 2026)
Then I added support to lock the screen when I crossed a violation threshold. This required some Xlib code to be written. I'm sure I could have written it, but it wasn't really worth it: I knew what to do, and doing it by hand wouldn't have taught me anything except the innards of a few libraries. I added that.
So, in short, this is something that's 98% AI-coded, but it genuinely solves a problem for me and has helped me change my behaviour in front of a computer. My research revealed no companies that offer this as a service for Linux. I know what to do but don't have the time to write and debug it. With AI, my problem was solved, and I have something that is quite valuable to me.
So, while I agree with you that it isn't an "automation tool", the speed and depth it brings to the environment have opened up possibilities that didn't previously exist. That's the real value, and the window through which I'm exploring the whole thing.
For a bit of functionality you might previously have run four searches for, it trims the time requirement down by roughly three of those searches.
It does however remove the benefit of having done the search which might be that you see the various results, and find that a secondary result is better. You no longer get that benefit. Tradeoffs.
What has worked for me is treating it like an enthusiastic intern with his foot always on the accelerator pedal. I need to steer and manage the brakes; otherwise, it'll code itself off a cliff and take my software with it. The most workable thing is to treat it as a pair programmer. For trivial changes and repeatedly "trying stuff out", you don't need to babysit. For larger pieces, it's good to make each change small and review what it's trying.
https://www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD667...
It seems it only took until about 2023 or so
The question is, how much faster is verification only vs writing the code by hand? You gain a lot of understanding when you write the code yourself, and understanding is a prerequisite for verification. The idea seems to be a quick review is all that should be needed "LGTM". That's fine as long as you understand the tradeoffs you are making.
With today's AI you either trade speed for correctness or you have to accept a more modest (and highly project specific) productivity boost.
The perverse incentives being that tech debt is non-obvious & therefore really easy to avoid responsibility for.
Meanwhile, velocity is highly obvious & usually tied directly to personal & team performance metrics.
The only way I see to resolve this is strict enforcement of a comprehensive QA process during both the planning & iteration of an AI-assisted development cycle.
But when even people working at Anthropic are talking about running multiple agents in parallel, I get the idea that CTOs are not taking this seriously.
> enforcement of a comprehensive QA process during both the planning & iteration of an AI-assisted development cycle
and a new bottleneck appears...(i don't disagree with this take though, qa should be done from start to finish and integral every step of the way)
But when I don't have expertise, it's the same speed or even slower. The better I am at something, the faster the LLM coding goes.
I'm still trying to get better at Rust, and I'm past break-even now. So I could use LLMs for a speed boost. But I still hand-write all my code because I'm still gaining expertise. (Here I lean into LLMs in a student capacity, which is different.)
Related to this, I often ask LLMs for code reviews. The number of suggestions it makes that I think are good is inversely proportional to the experience I have with the particular tech used. The ability to discard bad suggestions is valuable.
This is why I think being an excellent dev with the fundamentals is still important—critical, even—when coding with LLMs. If I were still in a hiring role, I'd hire people with good dev skills over people with poor dev skills every time, regardless of how adept they were at prompting.
If AI automates today's entry-level tasks, that just means "entry-level" means something different now. It doesn't mean entry-level ceases to exist: entry-level as we know it, maybe, but not entry-level in general.
I have eight years of software engineering experience but am only one rung up from the bottom of our SWE ladder, and we don't even hire the bottom rung anymore at my org. Seems like there's crushing pressure from above to limit hiring at every stage.
I used to work on teams which were 50% entry level. Then just one. Then all senior teams became the norm.
This all happened after I became senior but before AI came along.
I think AI is a convenient scapegoat for other macro trends.
The usual trade-off of a well-paid software development job is a lack of job security and always learning - the skill set is always changing, in contrast with other jobs.
My suggestion: stop chasing trends and start listening to mature software developers to get a better perspective on what's best to invest in.
And why would the mantra always be true?
You can find a stable job (at a slow-moving company) doing basic software development, just learn something new every 4 years, and then change companies.
Or never change company and be the default expert, because everyone else is changing jobs; get job security, work fewer hours and have time within your job to uplift your skills.
Or keep chasing the latest highly paid jobs/trends by sacrificing your off time.
What's the best option for you? Only you know; it depends on your own goals.
If you didn't like working with computers, then you (and another gazillion people who choose it for the $$$) probably made the wrong choice.
But totally depends on what you wanted to get out of it. If you wanted to make $$$ and you are making it, what is the problem? That is assuming you have fun outside of work.
But if you wanted to be the best at what you do, then you've got to love what you are doing. Maybe there are people who have superhuman discipline. But for normal people, loving what they do goes a long way towards that end.
This doesn't match what I have seen in other industries. Many auto mechanics I know drive old Buicks or Fords with the 4.6L V8 because the cars are reliable and the last thing they want to do on a day off is work on their own car. I know a few people in other trades, like plumbers, electricians, and chefs, and the pattern holds pretty well for them as well.
You can enjoy working with computers and also enjoy not working in your personal time.
The problem is the field is changing, fast. I love writing code... I'm not so sure I love prompting Claude, coordinating agents and reviewing +30k vibe-coded PRs.
This type of argument can hold for any profession, and yet we aren't seeing this pattern much in other white-collar professions. Professors, doctors, economists, mechanical engineers... it seems like pretty much everybody made the wrong choice then?
I think this is the wrong way to look at it. OP says that he invested a lot of time into becoming proficient in something that today appears to be very close to at least partial extinction.
I think the question is legit, and he's likely not the only person asking themselves this question.
My take on the question: it comes down to the ability to adapt and learn new skills. Some will succeed, some will fail, but staying in a status-quo position will more likely lead to failure than to success.
There are plenty of such examples, but both of these imply that you're ready to devote a lot of your extra time, before or after the job, just so you can show you're relevant in the eyes of the decision makers. This normally means that you're single, that you have no kids, no family, no hobbies other than programming, etc. This works when you're in your 20s, and only up to a certain point, unless you become a weirdo in your 30s and 40s without any of these.
However, in an age of uncertainty, it may become the new normal to devote extra effort just to remain a mere candidate for the job, never mind a competitive one. Some will find the incentive for this extra pain, some will not, but I think it won't be easy. Perhaps in 5 years' time we will only have "AI applied" engineers developing or specializing their own models for given domains. Writing code as we have it today is, I think, already a thing of the past.
I think the reason is quite simple. Software is endlessly configurable, and thus there's a much higher chance of getting the configuration wrong.
This is what makes it attractive, and makes it hard to get right.
You cannot get good at it without making a ton of mistakes. When companies look for people with a lot of side projects, they are looking for people who have already made such mistakes and learned from them, preferably on their own time and not on paid company time.
I'll list some attributes of software development that make it unique.
* No hard rules or textbooks to follow; the industry as a whole still makes costly mistakes and goes through recovery cycles.
* No easy way to gauge the requirement-fit of the thing you made. Only time will tell.
* Cheap (financially) to practice, make mistakes and learn.
Actually, that applies to doctors. A doctor who is not curious and is not willing to learn/research on their own initiative is only a marketing hand of pharma.
But it is quite hard for doctors to do any real research independently. They can't really do experiments on real people...
Software is really special.
Don't get me wrong. I am that guy who probably over-invested in the development of his skills, but I don't think it's a normal thing to expect.
That does not apply here, because more often than not we don't prescribe products/services that our clients must go out and buy without exception.
> it's a normal thing to expect.
It is not a normal thing to expect because in other fields there are few people who can afford to do that. So an employer cannot really pick someone from that pool.
But in software, it is possible if one chooses to do it. So the pool is a lot bigger, and it becomes feasible for an employer to pick someone from there instead of from the I-am-only-as-good-as-I-am-paid-to-be pool.
You know that treating patients is not only about picking the right medication and writing prescriptions? It's about diagnosing, testing hypotheses, optimizing for the particular patient's case, learning about all the specific factors of their environment, including genetics; and then we have surgeons, etc.
And yet I don't quite see doctors spending huge amounts of their own time to become exquisite at all of those things. Nor do I see hospitals or clinics running such knowledge-and-ability tests on their potential employees. The stakes are much higher in medicine than they are in software, so it makes no sense at all to argue that doctors cannot "afford" it. They can; they have books and practice the same way we do. I don't get to modify the production system every day, but I am constantly learning how not to make that same production system go down when I do.
> It is not a normal thing to expect because in other fields there are few people who can afford to do that.
It's not a normal thing in software either, you know? Let's please stop normalizing things which are not normal. If there is one thing that makes me happy in this new era of AI-assisted development, it is that all this bs is coming to its end.
I am just describing the logical behavior of an employer who wants to get the best person for the job.
About the other thing, I think I will let you have the last word since I feel that we are speaking past each other.
It never had time to develop into a truly professional field like medicine, law or engineering.
As AI allows more and more people to accomplish tasks without a deep understanding of computers, “working with computers“ will be as much of a marketable job skill as “working with pencils” 50 or 100 years ago.
Given how quickly models, tools and frameworks rise and fall, betting your career on a single technology stack is risky.
This was something I dealt with a lot when JS frameworks became the newest shiny thing and suddenly the entire industry shifted in a few years from being a front-end developer to being a full stack developer.
This happened to a lot of my friends who went all in on Angular. Then everybody switched to React.
The issue then became, "What should I learn?" because my company (a large Fortune 200 company) was all in on Angular and wasn't looking for React developers, but I knew companies were moving away from Angular. So do I work to get better and more indispensable with Angular, and risk not knowing React? Or do I learn the new shiny framework, betting that at some point my company will adopt it, or that I will be laid off and need to know it?
It feels like half my life as a dev was spent being a degenerate gambler, always trying to hedge my bets in one way or another, constantly thinking about where everything was going. It was the same thing with dozens of other tools as well. It just became so exhausting trying to figure out where to put your effort into to make sure you always knew enough to get that next job.
As a senior, if you choose, you can coast. By coast I mean you do justice to your job and the salary you are paid. It's a perfectly acceptable choice for someone to stay senior for as long as they want.
The biggest bottleneck is going to be what other seniors and higher think of you.
I wonder what the best decision would have been. What job is AI immune and has a stable 40 hour week, no overtime, with decent pay. Teacher? Nursing?
Whatever winds up being AI-immune will definitely be something that requires social/interpersonal skills, though. Humans are social creatures, so I assume there will always be some need for them.
I feel you. It's a societal question you're posing. Your employer (most employers) deals in dollars. A business is evaluated by its ability to generate revenue. That is the purpose of a business and the fiduciary duty of the CEOs in charge.
> Am I supposed to want to code all the time?
Yes.
> When can I pursue hobbies,
Your hobby should be coding fun apps for yourself
> a social life, etc.
Your social life should be hanging out with other engineers talking about engineering things.
And the most successful people I know basically did exactly that.
I'm not saying y'all should be doing that now, I'm just saying, that is in fact how it used to be.
If all they did was code all the time, write code for fun, and interact mostly with other similar people, they probably wouldn't be the first choice for these projects.
The ones who ace their careers are, for the most part, people that are fun, driven, or psychos, all social traits that make you good in a political game.
Spending lots of time with other socially awkward types talking about hard math problems or whatever will get you nowhere outside of some SF fantasy startup movie.
I'd say it's especially important for the more nerdy (myself included) to be more outgoing and do other stuff like sales or presentations, design/marketing or workshops - that will make you exceptional, because you then have the "whole package" and understand the process and other people.
Well that depends heavily on how you define successful. Successful in life? I would tend to disagree, unless you believe that career is the only thing that counts. But even when career is concerned: the most successful people I know went on from being developer to some high end management role. The skills that brought them there definitely did not come from hanging out with other engineers talking about engineering things.
Fuck. That.
I worked at a FAANG; successful people weren't the people who did engineering, they were the people who did politics.
The most successful people were the ones that joined at the same time as the current VP.
Your hobbies need to be fun, to you, not support your career. If it's just there to support your career, it's unpaid career development, not a hobby. Should people not code in their free time? That's not for me to decide. If they enjoy it and it's not hurting anyone, be my guest.
Engineers are generally useless at understanding what's going on in the real world; they are also quite bad at communicating.
do. fun. things.
My career has been fun; that's why I still do the thing I'm doing. I've worked with the very best in their respective fields for ~20 years.
I have done many and varied fun things through work, and continue to do so.
But.
Work stops at contracted time. After that it's me time.
I'm sorry for you as well.
I'm more concerned that the highlight of someone's life is being in front of a computer all day.
I did not do side projects. I really enjoyed most of my 20s as a single person. I was a part time fitness instructor, I dated, hung out with friends, did some traveling.
The other developers at my job also had plenty of outside hobbies.
They're still doing it.
The only real contender in this regard is the Win32 API, and actually that did get used in enterprise for a long time too, before the major shift to cloud and Linux in the mid-2010s.
Ultimately the proof is in the real-world use, even if it's ugly to look at... I'd say, even as someone who is a big fan of Linux, that if I were given a 30-year-old obscure software stack that did nothing but work, I would be very hesitant to touch it too!
I would like to add the business core functions of SAP R/3 (1992). Much of the code created for it in the early 90s still lives in the current SAP S/4HANA software.
This study showing a 9-10% drop is odd [1], and I'm not sure about their identification criteria.
> We identify GenAI adoption by detecting job postings that explicitly seek workers to implement or integrate GenAI technologies into firm workflows.
Based on that MIT study it seems like 90+% of these projects fail. So we could easily be seeing an effect where firms posting these GenAI roles are burning money on the projects in a way that displaces investment in headcount.
The point about "BigTech" hiring 50% fewer grads is almost orthogonal. All of these companies are shifting hiring towards things where new grads are unlikely to add value: building data centers and frontier work.
Moreover, the TCJA of 2017 meant software developers stopped counting toward R&D tax write-offs (I'm oversimplifying) starting in 2022. This surely has more of an effect than whatever the "GenAI integrator role" postings correlate to.
I find this one hard to believe. Software is already massively present in all these industries and has already replaced jobs. The last step is complete automation (i.e. drone tractors that can load up at a hub, go to the field, and spray all by themselves), but the bottleneck for this isn't "we need more code"; it's real-world issues that I don't see AI helping to solve (political ones, notably).
Given projections of AI abilities over time, AI necessarily creates downward pressure on new job creation. AI is for reducing and/or eliminating jobs (by way of increasing efficiency).
AI isn't creating 'new' things; it's reducing the time needed to do what was already being done. Unlike the automobile revolution, new job categories aren't being created with AI.
We are going to need to de-risk our software dependencies, and Germany is going to need to use computers.
Germany is going to be crazy, I think.
The Gewerkschaft (trade union) tactics to resist AI are what I'm really interested in seeing.
It is a new and exciting tool, but it hits its limits quickly on moderately complex tasks. We will also see a lot more code with tricky bugs coming out of AI assistants, and all of that needs to be maintained. If software development gets cheaper per line of code, then there will be more demand. And someone has to clean up the mess created by people who have no clue whatsoever about SWE.
Once upon a time people developed software with punch cards. Even without AI, a developer today is orders of magnitude more proficient than that.
The only thing I hope I am not going to see in my lifetime is real artificial intelligence.
I don't understand the take that a junior with AI is able to replace a small team. Maybe a horribly performing small team? Even then, wouldn't it just be logical to outfit the small team with AI and then have a small team of small teams?
The alleged increased AI output of developers has yet to be realized. Individuals perceive themselves as having greatly increased output, but the market has not yet demonstrated that with more products (or competitors to existing products) and/or improved products.
1) Senior developers are more likely to know how to approach a variety of tasks, including complex ones, in ways that work, and are more likely to (maybe almost subconsciously) stick to these proven design patterns rather than reinvent the wheel in some novel way. Even if the task itself is somewhat novel, they will break it down in familiar ways into familiar subtasks/patterns. If a task does require thinking outside the box, or a novel approach, the senior developer will likely have better intuition about what to consider.
The major caveat to this is that I'm an old-school developer who started professionally in the early '80s, a time when you basically had to invent everything from scratch, so there is certainly no mental block against doing so. I'm also aware there is at least a generation of developers who grew up with Stack Overflow and have much more of a mindset of building stuff by cut and paste, and less of having to sit down and write complex/novel code themselves.
2) I think the real distinction between senior and junior programmers, the one that will carry over into the AI era, is that senior developers have had enough experience, at increasing levels of complexity, that they know how to architect and work on large, complex projects where a more junior developer will flounder. In the AI coding world, at least for the time being, until something closer to AGI is achieved (which could be 10-20 years away), you still need to be able to plan and architect the project if you want an outcome that isn't just some random "I let the AI choose everything" experiment.
The distinguishing behavior is not the quantity of effort involved but the total cost after accounting for dependency management, maintenance time, and execution time. The people who reinvent wheels do so because they want to learn, and because they want the same work to take less effort in the future.
I think this is really underappreciated and was big in driving how a lot of people felt about LLMs. I found it even more notable on a site named Hacker News. There is an older generation for whom computing was new. 80s through 90s probably being the prime of that era (for people still in the industry). There constantly was a new platform, language, technology, concept to learn. And nobody knew any best practices, nobody knew how anything "should work". Nobody knew what anything was capable of. It was all trying things, figuring them out. It was way more trailblazing / exploring new territory. The birth of the internet being one of the last examples of this from that era.
The past 10-15 years of software development have been the opposite. Just about everything was evolutionary, rarely revolutionary. Optimizing things for scale, improving libraries, or porting successful ideas from one domain to another. A lot of shifting around deck chairs on things that were fundamentally the same. Just about every new "advance" in front-end technology was this. Something hailed as ground breaking really took little exploration, mostly solution space optimization. There was almost always a clear path. Someone always had an answer on stack overflow - you never were "on your own". A generation+ grew up in that environment and it felt normal to them.
LLMs came along and completely broke that. People who remembered when tech was new, full of potential, and nobody knew how to use it loved that. Here is a new alien technology, and I get to figure out what makes it tick, how it works, how to use it. On the flip side, people who were used to a happy path, or a manual to tell you when you were doing it wrong, got really frustrated at there being no direction, feeling perpetually lost when it didn't work the way they wanted.
I found it especially ironic, being on Hacker News, how few people seemed to have a hacker mindset when it came to LLMs. So much was "I tried something, it didn't work, so I gave up," or "I just kept telling it to work and it didn't, so I gave up." Explore; pretend you're in a sci-fi movie. Does it work better on Wednesdays? Does it work better if you stand on your head? Does it work differently if you speak pig latin? Think sideways. What behavior can you find that makes you go "hmm, that's interesting..."?
Now I think there has been a shift very recently, with people getting more comfortable with the tech, but I was still surprised at how little of a hacker mindset I saw on Hacker News when it came to LLMs.
LLMs have reset the playing field from well manicured lawn, to an unexplored wilderness. Figure out the new territory.
Bashing kludgy things together until they work was always part of the job, but that wasn't the motivational payoff. Even if the result was crappy, knowing why it was crappy and how it could've been better was key.
LLMs promise an unremitting drudgery of the "mess around until it works" part, facing problems that don't really have a cause (except in a stochastic sense) and which can't be reliably fixed and prevented going forward.
The social/managerial stuff that may emerge around "good enough" and velocity is a whole 'nother layer.
Louder for those turned deaf by LLM hype. Vibe coders want to turn a field of applied math into dice casting.
You keep using the word "LLMs" as if Opus 4.x came out in 2022. The first iterations of transformers were awful. GPT-2 was more of a toy and GPT-3 was an eyebrow-raising chatbot. It has taken years of innovation to reach the point of usable output without constant hallucinations. So don't fault devs for the flaws of early LLMs.
For the record, I was genuinely trying to read it properly. But it became unbearable by mid-article.
It resembles an article, it has the right ingredients (words), but they aren't combined and cooked into any kind of recognizable food.
It's hard to put my finger on, but it lacks soul, the "it" factor, or whatever you want to call it. It feels empty in a way.
I mean, this is not the first AI-assisted article I've read. But usually it's to a negligible degree. Maybe it's just me. :)
Intro... problem... (the bottom line... what to do about it...), looped over and over, and then finally...
I want to read it, but I can't get myself to.
> Narrow specialists risk finding their niche automated or obsolete
Exactly the opposite. Those with expertise will oversee the tool. Those without expertise will take orders from it.
> Universities may struggle to keep up with an industry that changes every few months
Those who know the theory of the craft will oversee the machine. Those who don't will take orders from it. Universities will continue to teach the theory of the discipline.
My similar (verbose) take is that seniors will often be able to wield LLMs productively: good-faith LLM attempts will be the first step, but will frequently be discarded when they fail to produce the intended results (personally I find myself swearing at the LLMs when they produce trite garbage; output that gets `gco .`-ed away immediately, or LLM MRs/PRs that get closed in favor of accomplishing the prompted task manually).
Conversely, juniors will often wield LLMs counterproductively, unknowingly accepting tech debt that neither the junior nor the LLM will be able to correct past a given complexity.
I'm not sure I agree with that. Right now, as a senior, my job involves reviewing code from juniors; replace juniors with AI and it means reviewing code from AI. More or less the same thing.
Worse. The AI doesn't share any responsibility.
Yes, and even when it learns (because there's a new version of the AI model), it doesn't learn according to your company/team's values. Those values might be very specific to your business model.
Currently, AI (LLM) is just a tool. It's a novel and apparently powerful tool. But it's still just a tool.
So why hire juniors at all instead of poaching a mid level ticket taker from another company?
If you are a line level manager, even if you want to retain your former junior now mid level developer, your hands are probably tied.
My value so far in my career has been my very broad knowledge of basically the entire of computer science, IT, engineering, science, mathematics, and even beyond. Basically, I read a lot, at least 10x more than most people it seems. I was starting to wonder how relevant that now is, given that LLMs have read everything.
But maybe I'm wrong about what my skill actually is. Everyone has had LLMs for years now and yet I still seem better at finding info, contextualising it and assimilating it than a lot of people. I'm now using LLMs too but so far I haven't seen anyone use an LLM to become like me.
So I remain slightly confused about what exactly it is about me and people like me that makes us valuable.
The value of a good engineer is their current-context judgment, something that LLMs cannot do well.
Second point, something that is mentioned occasionally but not discussed seriously enough: the Dead Internet Theory is becoming a reality. The supply of good, professionally written training material is by now exhausted, and LLMs will start to feed on their own slop. See how little the models' core competency increased in the last year, even with the big expansion of their parameter counts.
Babysitting LLMs' output will be the big thing in the next two years.
Engineers > developers > coders.
It's also a 'skip intro' button for the friction that comes with learning.
You're hitting a bug? Just ask the thing rather than spending time figuring it out. You don't know how to start a project from scratch? Ask for scaffolding. Your first boss assigns you a ticket? Better not to screw up; hand it to the machine just to be safe.
If those temptations are avoided you can progress, but I'm not sure that lots of people will succeed. Furthermore, will people be afforded that space to be slow, when their colleagues are going at 5x?
Modern life offers little hope. We're all using uber eats to avoid the friction of cooking, tinder to avoid the awkwardness of a club, and so on. Frictionless tends to win.
It takes extra discipline and willpower to force yourself to do the painful thing when there is a less painful way to do it.
I’m not saying that this was prompted. I’m just summarizing it in my own way.
This is what I expect to happen, but why would these entry-level roles be "developers". I think it's more likely that they will be roles that already exist in those industries, where the responsibilities include (or at least benefit from) effective use of AI tools.
I think the upshot is that more people should probably be learning how to work in some specific domain while learning how to effectively use AI to automate tasks in that domain. (But I also thought this is how the previous iteration of "learn to code" should be directed, so maybe I just have a hammer and everything looks like a nail.)
I think dedicated "pure tech" software, where the domain is software itself rather than some other thing, is more likely to be concentrated in companies building all the infrastructure that is still needed to make this work: the models themselves, all the surrounding tools, and all the services and databases etc. that are used to orchestrate everything.
Once it is easier to just make almost anything yourself than it is to go through a process of expressing your requirements to a professional software development group and iterating on the results, that will be a popular choice.
At that point it gets handed over to the engineers.
If it touches private customer data, you better have security right from version 1.
Last year was, as it seems, just a normal year in terms of global software output.
But on Product Hunt, the number of projects tells a different story: 5,000+ in the first week of this January, versus roughly 4,000 in the entire January of 2018.
Has the output of existing companies/products increased substantially?
Have more products proven successful and started companies?
Hard to say, but maybe a little.
Look at smaller SaaS offerings and people selling tiny utility apps, those will go away slowly.
Why would I pay for something when I can make it for my own (or company internal) use in an afternoon?
Not to mention agent capabilities at the end of last year were vastly different to those at the start of the year.
Even if LLMs became better during the year, you'd still expect an increase in releases.
Talk is cheap, let's see the money :D
Exactly my thoughts lately... Even by yesterday's standards it was already very difficult to land a job, and by tomorrow's standards it appears as if only the very best of the best, and those in positions of decision-making power, will be able to keep their jobs.
Where is all the new and improved software output we’d expect to see?
> A CEO of a low-code platform articulated this vision: in an “agentic” development environment, engineers become “composers,”
I see we'll be twisting words around to keep avoiding the comparison.
A humble way for devs to look at this, is that in the new LLM era we are all juniors now.
A new entrant with a good attitude, curiosity and interest in learning the traditional "meta" of coding (version control, specs, testing etc) and a cutting-edge, first-rate grasp of using LLMs to assist their craft (as recommended in the article) will likely be more useful in a couple of years than a "senior" dragging their heels or dismissing LLMs as hype.
We aren't in coding Kansas anymore, junior and senior will not be so easily mapped to legacy development roles.
I think of it a bit like ebike speed limits. Previously, to go above 25mph on two-wheeled transport, you needed a lot of time training on a bicycle, which gave you the skills, or you needed your motorcycle licence, which required you to pass a test. Now people can jump straight on a Surron and hare off at 40mph with no handling skills and no licence. Of course this leads to more accidents.
Not to say LLMs can't solve this eventually, RL approaches look very strong and maybe some kind of self-play can be introduced like AlphaZero. But we aren't there yet, that's for sure.
But the comparison I made was between the junior with a good attitude and expert grasp on LLMs, and the stick-in-the-mud/disinterested "senior". Those are where the senior and junior roles will be more ambiguous in demarcation as time moves forward.
The question IMO is: who will create the demand on the other side for all of these goods if so many people are left without jobs? UBI? Redistribution of wealth through taxes? I'm not so convinced about that...
There is no reason why people will be left without jobs. Ultimately, a "job" is simply a superstructure for satisfying people's needs. As long as people have needs and the ability to satisfy them, there will be jobs in the market. AI changes nothing in those respects.
The people who lose their jobs prove this was always the case. No job comes with a guarantee, even ones that say or imply they do. Folks who believe their job is guaranteed to be there tomorrow are deceiving themselves.
Curious how the specialist-vs-generalist theme plays out: who is going to feel it *first* as AI gets better over time?
1) The AI code maintenance question: who will maintain the AI-generated code?
2) The true cost of AI: once the VC/PE money runs out and companies charge the full cost, what happens to vibe coding at that point?
1) Either you, the person owning the code, or you + LLMs, or just the LLMs in the future. All of these can work, and they can work better with a bit of prep work.
The latest models are very good at following instructions. So instead of "write a service that does X" you can use the tools to ask for specifics (i.e. write a modular service, that uses concept A and concept B to do Y. It should use x y z tech stack. It should use this ruleset, these conventions. Before testing run these linters and these formatters. Fix every env error before testing. etc).
That's the main difference between vibe coding and LLM-assisted coding. You get to decide what you ask for, and you get to set the acceptance criteria. The key point that non-practitioners always miss is that once a capability becomes available in these models, you can layer it on top of previous capabilities and get a better end result: higher instruction adherence -> better specs -> longer context -> better results -> better testing -> better overall loop. A minimal sketch of the difference is below.
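To make that concrete, here's a minimal sketch of a vibe prompt versus a spec-style prompt. Everything specific in it (the task, stack, and ruleset wording) is a hypothetical placeholder, not a recommendation:

```python
# Sketch: a vibe prompt vs. a spec-style prompt with explicit acceptance
# criteria. All specifics (task, stack, rules) are illustrative placeholders.

VIBE_PROMPT = "Write a service that does X."

def build_spec_prompt(task: str, constraints: list[str]) -> str:
    """Assemble a spec-style prompt: the task plus explicit acceptance criteria."""
    lines = [task, "", "Constraints and acceptance criteria:"]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)

SPEC_PROMPT = build_spec_prompt(
    "Write a modular service that uses concept A and concept B to do Y.",
    [
        "use the x/y/z tech stack",
        "follow this ruleset and these conventions",
        "run the linters and formatters before testing",
        "fix every env error before testing",
    ],
)

if __name__ == "__main__":
    print(SPEC_PROMPT)
```

The point isn't the helper function; it's that the acceptance criteria travel with the task, so the model gets judged against your bar rather than its own.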
2) You are confusing the fact that some labs subsidise inference costs (in exchange for access to data, usage metrics, etc.) with the true cost of inference for a given model size. You can already get a good indication of what the cost is today for any given model size: 3rd-party inference shops exist, and they are not subsidising the costs (they have no reason to). You can do the math as well and figure out an average cost per token for a given capability. And those open models are out; they're not going to change, and you can get the same capability tomorrow or in 10 years (and likely at lower cost, since hardware improves, the inference stack improves, etc.).
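As a back-of-envelope illustration of that math (the per-token prices below are invented placeholders, not any provider's actual rates):

```python
# Rough cost-per-task math at assumed, unsubsidised 3rd-party rates.
# Both prices are placeholders; substitute a real provider's numbers.

PRICE_PER_MTOK_IN = 0.50   # $ per million input tokens (assumed)
PRICE_PER_MTOK_OUT = 2.00  # $ per million output tokens (assumed)

def task_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one agentic coding task at the assumed rates."""
    return (input_tokens / 1e6) * PRICE_PER_MTOK_IN \
         + (output_tokens / 1e6) * PRICE_PER_MTOK_OUT

# e.g. a long agent session reading 400k tokens and writing 50k:
print(f"${task_cost(400_000, 50_000):.2f}")  # -> $0.30
```

Since open-weight models don't change once released, a number like this puts a ceiling on what a fixed capability can ever cost you.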
In a similar fashion, AI generated code will be fed to another AI round and regenerated or refactored. What this also means is that in most cases nobody will care about producing code with high quality. Why bother, if the AI can refactor ("recompile") it in a few minutes?
Wasn't the main takeaway generally "study everything even more than you were, and talk/network with everybody even more than you were, and hold on. Work more, more, more"?
Tech layoffs have been happening even before LLMs.
On the optimistic side: I suspect it might end up being true that software gets infused into more niches, but I'm not sure it follows that this helps the job market. Put differently, demand for software and demand for SWEs might decouple somewhat for much of that additional software demand.
This is really just another form of automation, speeding things up. We can now make more customized software more quickly and cheaply. The market is already realizing that fact, and demand for more performant, bespoke software at lower costs/prices is increasing.
Those who are good at understanding the primary areas of concern in software design generally, and who can communicate well, will continue to be very much in demand.
It's hard to tell, though, not just because it's inherently uncertain where this goes, but also because those closest to it are the least likely to view it objectively.
So it's near impossible to find someone who is clued up but not invested in a specific outcome.
Ah, there it is.
Not everyone can afford it, and then we are at the point of turning a field that was so proud of needing just a computer and internet access to teach yourself into a subscription service.
And yes, that plan can get you started, but when I tested it, I managed to get one task done before having to wait four hours.
If I were starting out today, this is basically the only advice I would listen to. There will indeed be a vacuum in the next few years because of the drastic drop in junior hiring today.
Then also nothing has really changed. This was, verbatim, the advice everybody was giving when I was a grad student almost 20 years ago.
Back then, the conclusion was to learn the frameworks du jour, even if it was unfulfilling plumbing and the knowledge had a half-life of a few weeks. You needed it to get hired, but you made your career because of all the solid theory you learned and the adaptability that knowing it gave you.
Now, the conclusion is to learn how to tickle the models du jour in the right way, even though it's intellectually braindead, uninspiring work, and knowledge with a half-life of a few days. It's still the theoretical foundation that will actually turn the junior into a valuable engineer.
The more I read between the lines of AI evangelists' posts like this, the more I'm convinced that expectations will return to grounded reality soon. They are new tools to help the engineer. They enable new workflows and maybe can even allow a two-digit percentage increase in speed while upholding quality. But they're in no way a revolution that will make possible "10× engineers" or considerably replace engineering positions beyond the "it doesn't really matter" area of PoCs, prototypes, one-offs, cookie-cutter solutions, etc.
Agreed but it's not an easy charge to fulfill.
In today's corporate environment, 70% of the costs are in management and admin muddlement, do we really think these "people skills" translate into anything useful in an AI economy?
The junior devs have far more hope.
Middle managers output exactly what LLMs do: chats, documents, summaries, particularly when working remotely. They don't even generate tickets/requirements; that's pushed onto engineers and product people.
Now it's expecting senior engineers to "orchestrate" 10 coding agents; then it was expecting them to orchestrate 10 cheap developers on the other side of the world. Back then, the reckoning came when those offshore developers realised that if they produced code as good as that of a "1st world" engineer, they could ask for a similar salary too, and the offshoring clients who didn't want to pay up were left with the contractors who weren't good enough to do that. This time, it will be agent pricing approaching the true costs. Both times, the breaking point is when managers realise that writing code was never the bottleneck in the first place.
In my opinion we always needed to be versatile to stand any chance of being comfortable in these insanely rapid changing times.
There's an implicit assumption in the article that the coding models are here to stay in development. It's possible that assumption is incorrect for multiple reasons.
Maybe (as some research indicates) the models are as good as they are going to get: always a cross between a chipper stochastic parrot and that ego-inflated junior dev who refuses to admit a mistake. Maybe when the real (non-subsidized) economics present themselves, the benefit isn't there.
Perhaps the industry segments itself to a degree. There's a big difference in tolerance for errors in a cat fart app and a nuclear cooling system. I can see a role for certified 100% AI free development. Maybe vibe coders go in one direction, with lower quality output but rapid TTM, but a segment of more highly skilled developers focus on AI free development.
I also think it's possible that over time the AI hyper-productivity stuff is revealed to be mostly a mirage. My personal experience and a few studies seem to indicate this. The purported productivity boost is a result of confirmation bias and ridiculous metrics (like LOC generated) that have little to do with actual value creation. When the mirage fades, companies realize they are stuck with heaps of AI slop and no technical talent able to deal with it. A bitter lesson indeed.
Since we're reading tea leaves, I think the most likely outcome is that the massive central models for code generation fade due to enormous costs and increased endpoint device capabilities. The past 50 years have shown us clearly that computing will always distribute, and centralized mainframe style compute gets pushed down to powerful local devices.
I think it settles at an improved intellisense running locally. The real value of the "better search engine" that LLMs hold today reduces as hard economics drive up subscription fees and content is manipulated by sponsors (same thing that happened to the Google search results).
For end users, I think the models get shoved into a box to do things they're really good at, like giving a much more intuitive human-computer interface, but structured data from that is handed off to a human developer to reason about, MCP will expand and become the glue.
I think that over time market forces will balance between AI and human created content, with a premium placed on the latter. McDonalds vs a 5 star steakhouse.
I'd put my money on this. From my understanding of LLMs, they are basically mashing words together via Markov-chain-style prediction, with a bit of subject classification added via attention, a little short-term memory, and enough grammar to lay things out correctly. They don't understand anything they are saying, they are not learning facts and trying to build connections between them, and they are not learning from their conversations with people. They aren't even running the equivalent of a game loop where they can think about things. I would expect something we're willing to call an AI to call you up sometimes and ask you questions. Trillions of dollars have gotten us this far; how much further can they actually take us?
I want my actual AI personal assistant that I have to coerce somehow into doing something for me like an emo teen.
A few key fallacies at play here.
- Assuming a closed world: we'll do the same amount of work but with fewer people. This has never been true. As soon as you meaningfully drop the price of a unit of software (pick your favorite), demand goes up and we need more of them. It also opens the door to building software that would previously have been too expensive. That's why the number of software engineers has consistently increased over the years, despite a lot of stuff getting a lot easier over time.
- Assuming the type of work always stays the same. This too has never been true. Stuff changes over time. New tools, new frameworks, new types of software, new jobs to do. And the old ones fade away. Being a software engineer is a life of learning. Very few of us get to do the same things for decades on end.
- Assuming people know what to ask for. AIs do as you ask, which isn't necessarily what you want. The quality of what you get correlates very much with your ability to ask for it. The notion that you get a coherent bit of software in response to poorly articulated, incoherent prompts is about as realistic as getting a customer to produce coherent requirements. That never happened either. Converting customer wishes into maintainable/valuable software is still a bit of a dark art.
The bottom line: many companies don't have a lot of in house software development capacity or competence. AI doesn't really help these companies to fix that in exactly the same way that Visual Basic didn't magically turn them into software driven companies either. They'll use third party companies to get the software they need because they lack the in house competence to even ask for the right things.
Lowering the cost just means they'll raise the ambition level and ask for more/better software. The type of companies that will deliver that will be staffed with people working with AI tools to build this stuff for them. You might call these people software engineers. Demand for senior SEs will go through the roof because they deliver the best AI generated software because they know what good software looks like and what to ask for. That creates a lot of room for enterprising juniors to skill up and join the club because, as ever, there simply aren't enough seniors around. And thanks to AI, skilling up is easier than ever.
The distinction between junior and senior was always fairly shallow. I know people that were in their twenties that got labeled as senior barely out of college. Maybe on their second or third job. It was always a bit of a vanity title that because of the high demand for any kind of SEs got awarded early. AI changes nothing here. It just creates more opportunities for people to use tools to work themselves up to senior level quicker. And of course there are lots of examples of smart young people that managed to code pretty significant things and create successful startups. If you are ambitious, now is a good time to be alive.
You say that belongs in a trade school? I might agree, if you think trade schools and not universities should teach electrical engineering, mechanical engineering, and chemical engineering.
But if chemical engineering belongs at a university, so does software engineering.
Glad I did CS, since SE looked like it consisted of mostly group projects writing 40 pages of UML charts before implementing a CRUD app.
The bigger problem when I was there was undergrads (me very much included) not understanding the difference at all when signing up.
To be clear, I did not go into CS. But I do live in this world
Devops isn’t even a thing, it’s just a philosophy for doing ops. Ops is mostly state management, observability, and designing resilient systems, and we learned about those too in 1987. Admittedly there has been a lot of progress in distributed systems theory since then, but a CS degree is still where you’ll find it.
School is typically the only time in your life that you’ll have the luxury of focusing on learning the fundamentals full time. After that, it’s a lot slower and has to be fit into the gaps.
For example, the primitives of cloud computing are largely explained by papers published by Amazon, Google, and others in the mid '00s (Dynamo, Bigtable, etc.). If you want to explore massively parallel computation or container orchestration, etc., it would be natural to do that using a public cloud, although of course many of the platform-specific details are incidental.
Part of the story here is that the scale of computing has expanded enormously. The DB class I took in grad school was missing lots of interesting puzzle pieces around replication, consistency, storage formats, etc. There was a heavy focus on relational algebra and normalization forms, which is just... far from a complete treatment of the necessary topics.
We need to extend our curricula beyond the theory required to execute binaries on individual desktops.
However, in the decades since this curriculum was established, it's clear that the foundation has expanded. Understanding how containerization works, how k8s and friends work, etc., is just as important today.
Take doctors, for example: you learn a bit of everything, but then if you want to specialise, you choose one field.