But it's never been the case that a dev could just focus on technical things without spending any time figuring out the context they're working in and behaving accordingly.
My first day of work, this is what my boss said to me: "Look at this trading floor. There's screens everywhere, everything is numbers. Deltas, gammas, vegas. Everything is calculated by computers. But don't forget, every business is a people business!"
While they may not have been very successful, they did have a place.
I have been part of some social circles before but they were always centered around a common activity like a game, and once that activity went away, so did those connections.
As I started working on side hustles, it occurred to me that not having any kind of social network (not even social media accounts) may have added another level of difficulty.
I am still working on the side hustles, though.
Not coincidentally, the places I've seen this approach work are the same places that have hired me as a consultant to bring in an effective team to build something high priority or fix a dumpster fire.
Our CEO started sharing screenshots of his xitter/linkedin feeds, and most of them contain nuggets of wisdom like "Opus 4.5 is better than 90% of talent". There is also longer-form business fan fiction. It usually follows this template: there is a hero (a humble manager/salesperson) and a villain (a cocky, nerdy software developer). The villain claims that some task is impossible or will take months to complete. Then the humble hero (equipped with Opus 4.5) completes the task in 2 hours. The villain is then humiliated/fired and everyone lives happily ever after.
These posts definitely contribute to the declining morale among employees. Nobody goes "above and beyond" anymore - we just strictly do the tickets and nothing more.
- You can have a horrible CEO that doesn't value their employees and is trying to devalue labor.
- AI coding tools can be incredible exoskeletons in the hands of skilled engineers and enable them to get much more work done.
Perhaps the real "SaaS-killer" is innovation capital [1] realizing it can take advantage of the various forms of arbitrage and changing of the guards happening now, raise venture capital, and take on the old and slow management-driven businesses.
If you've ever had the itch to fire your boss, now's the time. It's a hard path, there are way more hats to wear, but the dry powder is out there waiting to be deployed.
[1] ICs in both senses of the acronym.
For most software engineers, neglect of soft skills has always been a career tarpit that leads nowhere you want to end up. Being able to navigate social settings and to communicate well is a force multiplier. For most people, it really doesn't matter how good you are if nobody understands what you are saying and you can't convince other people to buy into your ideas. You far more often see moderately successful charlatans who are all talk than successful people with awful communication skills. Of course, if you're able to walk the talk, that's when you can really go places.
Truth is, even before LLMs, most larger software development organizations could have downsized significantly without losing much productivity.
The X formerly known as Twitter did this and has been chugging along on a fraction of its original staff count. It's had some brand problems since its acquisition, but those are more due to Mr Musk's eccentricities and political ventures than to the engineering team.
The reason this hasn't happened to any wider degree is quarterly capitalism and institutional inertia. It looks weird to investors if an organization claims to be doing well but is also slashing its employee count by 90%. Even if you bring in a new CEO with these ideas, the org chart will fight it tooth and nail, as managers would lose reports and clout.
Consultancies in particular are incredibly inefficient by design, since they make more money if they take more time and bring a larger headcount to the task: they don't sell productivity, but man-hours. Hence horrors like SAFe.
Truthfully, I don't want to get advice from people who have become addicted to AI, sorry. The money that person has already sunk into it leaves me with tons of questions.
If you sleep on this, these people are going to take your job.
I've been writing serious systems code for 15 years. Systems that handled billions of dollars of transaction volume a day and whose hourly outages cost billions of dollars. These are systems you have to design carefully. Active-active, beyond five nines reliable.
I'm telling you AI is extremely beneficial even in this segment of the market. The value prop is undeniable.
I'm easily getting twice my workload done with AI, and I'm not even leveraging the full extent of the tools. I've only just started to do more than fancy tab-autocomplete.
This is going to be a huge shift in our industry, and I would brace for impact.
In the end, they might convince you that 2+2=5.
What are some examples of skills you think are now essential, that prior have been taken for granted or obviated in some way?
Remember when BIOS computers used to boot in seconds, reliably? When chat clients didn't require an embedded copy of Chromium? When appliances and automobiles didn't fall apart in 6 months, costing thousands to "repair" or just needing to be thrown away and bought again?
Remember when there used to be these things called "machine shops" and "Radio Shacks" and "parts stores" that people who built things frequented? Now most people have to call AAA if they get a flat tire. Changing their own oil is out of the question. "Eww, dirty oil, on my clean fingernails?" Many couldn't tell you which end is which on a screwdriver if their life depended on it.
I'd say these concepts are pretty essential, especially for any nation entertaining delusions of waging Total War against other big and powerful nations. Wasteful and foolish nations lose wars.
Thinking of seeing if I can get mutation testing set up next, and expanding our use of fuzzing. All of these techniques that I know about but haven't had the time to adopt are suddenly more feasible to invest in.
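For anyone wondering what that looks like in practice, here's a minimal sketch of the kind of property-based fuzz test I mean, using Python's hypothesis library. The `encode`/`decode` pair is a hypothetical stand-in for whatever round-trippable code you'd actually test, not anything from this thread:

```python
# Property-based fuzz test with hypothesis (pip install hypothesis).
# encode/decode are hypothetical stand-ins for your own serialization code.
import json

from hypothesis import given, strategies as st


def encode(record: dict) -> str:
    # Placeholder implementation; swap in the code under test.
    return json.dumps(record)


def decode(blob: str) -> dict:
    return json.loads(blob)


# hypothesis generates many adversarial inputs per run and shrinks
# any failure to a minimal counterexample.
@given(st.dictionaries(st.text(), st.integers()))
def test_roundtrip(record):
    assert decode(encode(record)) == record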
Your $300k+ TC job is going away. The only way you'll make the same take home is if you provide more value.
You can be a robotic IC, but you won't be any better than a beginner with Claude Code. You have to level up your communication and organizational value to stay at the top.
Everyone has to wear the mantle of a senior engineer now simply to stay in place. If you can't write well, communicate well, plan and organize, you're not providing more value than a Claude-enhanced junior.
Would you like to buy a bridge? Coded by Claude. One previous owner. An owner who used said bridge to go to church once a week, and vibe code in Starbucks afterwards.
Why not ask the LLM to write for you? Same for planning, organization and written communication.
Seems like robotic ICs can "robotize" most of the work stack.
To begin: Math, Linux, Devops, C, and Assembly. Not a youtube video. Not arithmetic. Learn to the point that you could be employed by any of the above as a senior. And don't fear failure. Keep doing it until you understand it.
Yep, exactly. The failure to realize that you mean different things when talking about "larger abstractions" is exactly the kind of miscommunication that software people will need to navigate better in the future.
The AI can already do that part.
The abstraction that matters going forward is understanding why the abstraction chosen by the AI does or doesn't match the one needed by the customer's "big picture".
The AI is a bit too self-congratulatory in that regard, even if it can sometimes spot its own mistakes.
Like intro differential geometry is basically a deep dive into what one actually does when reading a paper map. Something everyone (over 30?) is familiar with. But it turns out there's plenty to fill a graduate level tome on that topic.
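To make that concrete with one standard example (my illustration, not the parent's): the sphere's metric in latitude/longitude versus its Mercator pullback is exactly the "scale changes with latitude" rule every map reader applies informally.

```latex
% Round Earth of radius R, in latitude phi and longitude lambda:
\[
  ds^2 \;=\; R^2\,d\varphi^2 \;+\; R^2\cos^2\varphi\,d\lambda^2
\]
% The Mercator projection x = R*lambda, y = R*ln(tan(pi/4 + phi/2))
% rewrites the same geometry in map coordinates as a conformal metric:
\[
  ds^2 \;=\; \cos^2\varphi\,\bigl(dx^2 + dy^2\bigr)
\]
% The factor cos^2(phi) is why a fixed map distance covers less ground
% near the poles, and why Greenland looks enormous.
```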
Linear algebra is basically studying easy problems: y=ax. Plenty to write about how to make your problem (or at least parts of it) fit that mould.
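A classic instance of forcing a problem into the y = ax mould (again my illustration, not the parent's): an exponential fit looks nonlinear until you take logs, at which point it's ordinary least squares.

```latex
% Fitting y = C e^{kx} to data (x_i, y_i) is linear in (ln C, k):
\[
  y = C e^{kx} \;\Longrightarrow\; \ln y = \ln C + kx
\]
% i.e. solve the least-squares problem A\beta \approx b with
\[
  A = \begin{pmatrix} 1 & x_1 \\ \vdots & \vdots \\ 1 & x_n \end{pmatrix},
  \qquad
  \beta = \begin{pmatrix} \ln C \\ k \end{pmatrix},
  \qquad
  b = \begin{pmatrix} \ln y_1 \\ \vdots \\ \ln y_n \end{pmatrix}.
\]
```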
I suspect, and think I've seen others say, that you get better outputs from LLMs when using jargon. Essentially, its pattern matching tells it to say what an expert would say when you use the terminology experts use.
I’m not sure that my colleagues who I think of as “good at math” and “good at thinking in larger abstractions” are necessarily the same ones, but there’s definitely a lot of overlap.
Not only is this extremely patronizing towards people on the spectrum, it is at the same time an extremely hurtful statement for people who treat employment as a job (i.e., most of the population).
And what are you going to say to people who are stuck in low-end jobs?
It is really about prompting and writing specs - the "soft" (but really "hard") skill of giving detailed specs to an LLM so it does what you want.
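To illustrate the difference, here's a sketch using the OpenAI Python SDK; the model name and the `validate_iban` requirements are my assumptions for illustration, not anything from the thread:

```python
# pip install openai -- the model name below is illustrative; swap in
# whatever you actually use.
from openai import OpenAI

client = OpenAI()

# Vague prompt: the model has to guess edge cases, style, and scope.
vague = "Write a function that validates IBANs."

# Spec-style prompt: the "soft but hard" skill is pinning these down.
spec = """Write a Python function validate_iban(iban: str) -> bool.
Requirements:
- Strip spaces and uppercase the input before validating.
- Check country-specific length (at least DE=22, FR=27, GB=22).
- Verify the ISO 7064 mod-97 checksum.
- Return False on any failure; never raise.
- Include doctests covering one valid and two invalid examples."""

resp = client.chat.completions.create(
    model="gpt-4o",  # assumption: any capable chat model works here
    messages=[{"role": "user", "content": spec}],
)
print(resp.choices[0].message.content)
```

The point isn't the API call, it's that the second prompt front-loads exactly the decisions the model would otherwise make for you.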
I think the more important, truly soft, skill in the age of AI is going to be communicating with humans and demonstrating your value in communicating both vertically up and down and horizontally within your team. LLMs are becoming quite capable at the "autistic" skill of coding, but they are still horrible communicators, and don't communicate at all unless spoken to. This is where humans are currently, and maybe for a long time, irreplaceable - using our soft skills to interact with other humans and as always translate and refine fuzzy business requirements into the unforgiving language of the machine, whether that is carefully engineered LLM contexts, or machine code.
As far as communication goes, I have to say that Gemini 3.0, great as it is, is starting to grate on me with its sycophantic style and its failure to just respond as requested, rather than blabbering on about "next steps" that it is constantly trying to second-guess from its history. You can tell it to focus and just answer the question, but that only lasts for one or two conversational turns.
One of Gemini's most annoying traits is to cheerfully and authoritatively give design advice, then when questioned admit (or rather tell you, as if it were its own insight) that this advice is catastrophically bad and will lead to a bad outcome, and then without pause tell you what you really should do, as if this is going to be any better.
"You're absolutely right! You've just realized the inevitable hard truth that all designers come to! If you do [what I just told you to do], program performance will be terrible! Here is how you avoid that ... (gives more advice pulled out of ass, without any analysis of consequences)"
It's getting old.
But would you read their blog post on laundry tips?
No - it's just as easy for you to send out your own laundry.
Just to clarify: I'm not a jackass in real life. In fact, I'm perfectly OK with all sorts of soft skills -- after all, my current position requires them. But I just try to maintain the minimum level of soft skills needed to navigate the shoreline -- not interested in moving up anyway.
She did it at 300-400 times what she was being paid while employed :) because the jobs were time critical.
Writing code is just how that happens, sometimes. Soft skills are essential to communication with the users and product managers.
It is easy to determine if someone solved a problem using AI because they can’t explain or recreate “their” solution. Detecting cheating in essays is still far more difficult.
If they had AI write the essay, yet can still explain it as well as if they had written it themselves (ditto for code), then it would tend to indicate that they at least read it and thought about it, which I think should be more acceptable in a learning environment.
If I were a junior today, I'd be studying business impact, effective communication, project management - skills that were previously something you could get away with under-indexing on until senior+.
Heck, I even know a guy who refuses to use an IDE with Java and the indenting is a mess, but he gets there.
“Computer” used to be a job title. It was entirely replaced by … drumroll … electronic computers, i.e. calculators.
Technology doesn’t usually eliminate the need for a job output in general but it can sometimes shift the skills needed wildly.
Why have a slow human CEO when machines are faster?
That LLMs do a better job if you know what you are asking for is old news.
But to be honest, I usually don't care to write properly in an LLM prompt. An LLM will ignore grammar and form and just extract the essence. If I make an actual mistake, I will notice quickly and fix it. If I sent Slack messages like that to a peer, they'd either mock me or simply think I'm dumb. We also know the stories about people who use LLMs for any communication or anything they write - probably for the exact reason that being lazy with writing is acceptable now. My call is that writing skill will decline, not improve. This could probably be the case for anything that people use LLMs as a proxy for.
> That LLMs do a better job if you know what you are asking for is old news.
Even a decade after Word Lens had demonstrated augmented reality live translation through a smartphone camera, I was amazing people by showing them the same feature in Google Translate.
Similar anecdotes about Shakuntala Devi, even in 2018 I was seeing claims about her mental arithmetic beating a supercomputer (claims that ignored that this happened in 1977 and the computer was already obsolete at the time), even though my mid-2013 MacBook Air could not only beat her by a factor of 150 million, it could also train an AI to read handwritten numbers from scratch in 0.225 seconds, and then perform inference (read numbers) at just over 6,629 digits per second*.
You say "old news", I say this discussion will be on repeat even in the early 2030s. And possibly even the 2060s.
* Uses an old version of python, you'll need to fix it up accordingly: https://benwheatley.github.io/blog/2018/03/16-10.44.18.html
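If you'd rather not fix up the old script, a minimal modern equivalent looks something like the sketch below. This is my code, not the linked post's; it reproduces the spirit of the benchmark, and the exact timings will of course depend on your hardware and library versions:

```python
# Minimal modern MNIST benchmark (pip install tensorflow).
# Reproduces the spirit of the 2018 experiment, not its exact script.
import time

import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

t0 = time.time()
model.fit(x_train, y_train, epochs=1, verbose=0)
print(f"train: {time.time() - t0:.3f}s")

t0 = time.time()
model.predict(x_test, verbose=0)  # 10,000 test digits
print(f"inference: {10_000 / (time.time() - t0):.0f} digits/sec")
```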
Everything that can be vibed will be vibed, until everyone hits a wall where no docs exist to form a corpus and no instructions exist for prompts. There are problems that are yet to be named, but how can you name things when humans are no longer the ones experiencing the patterns of the thought process?
And naming things is one of the only two hard things in computer science.
So now instead of needing to manage multiple stakeholders and expectations of 10 different middle managers you'll probably just have a 1:1 with a single person.
Why are we assuming that people who write code don’t have soft skills?
The youngest generation who joined the profession are probably in it for the salary, versus the older generations who came from the computer clubs and Dungeons & Dragons groups of the 1970s/1980s, along with a culture where having a niche interest was socially ostracizing and uncool.
I wonder if the youngest generation entering the profession is much more of a cross section of regular people?
2016 to truckers: “Learn to code LOL”
2026 to coders: “Learn soft skills”
https://news.ycombinator.com/item?id=46436872
Look, I personally am taking full advantage of exactly the skills described. I was the one who posted the above thing on HN showing how I am 20-50x more productive now, complete with a 4 hour speedrun video. I usually try not to just talk and point out current problems, but build solutions AND show (github, youtube) with specific details so you can watch it and apply it for yourself. But I am telling you:
1) most people will not adapt, so we will need UBI for those who don’t
2) eventually even those who adapt will be replaced too, so we will need UBI for everybody
It is after all a thin layer that remains. I remember Kasparov proudly talked about how “centaurs” (human + machine working together) in chess were better than machines alone… until they weren’t, and the human in the loop became a liability.
But the problem has been far more widespread over the last 70 years. Just look around. Industry always tells the individual they can take some downstream action to clean up the mess industry creates upstream, and it is leading the entire planet into ruin:
https://magarshak.com/blog/government-and-industry-distract-...
In fact, the human population in modern environments has been living large on an ecological credit card and the bill is coming due for our children, because all the “individual responsibility” stuff — where you can somehow diet, exercise and recycle your way out of things corporations do upstream — is all a giant lie and always has been. So the negative externalities just build up until the next generation won’t be able to ignore them anymore, but by then it could be too late. Whether that’s day zero for water in cities, or factory farms for meat with antibiotic resistance, or fossil fuels and greenhouse gases to subsidize the car industry, or ubiquitous microplastic pollution around the world (yes, personal plastic recycling was just another such scam designed to keep you docile rather than organizing to force corporations to switch to biodegradable materials). The “Anthropocene” is seeing a decline in insects and all species of animal except humans and farm animals. Coral reefs are bleached, kelp forests and rainforests are decimated, and governments work with industry to e.g. allow Patagonian forests to be burned for new developments, and then Smokey the Bear says “only YOU can prevent forest fires”. Think about it.
Though you're right that there's no I in team. There is one in AI, which probably tells us something.
*Goldman Sachs (sorry for invoking that name here) did a report on their high turnover, and the above framing was why many quit.
The point is that if you zoom out, it's just a thin slice that can be automated by machines. People keep saying "I'll tell you in my experience, no UAV will ever trump a pilot's instinct, his insight, the ability to look into a situation beyond the obvious and discern the outcome, or a pilot's judgment"... https://www.youtube.com/watch?v=ZygApeuBZdk
But as you can see, they're all wrong. By narrow here I meant a thin layer that thinks it's indispensable as they remove all the other layers. Until the system comes for this layer too.