I’ve been actively trying to apply AI to our field, but the friction is real. We require determinism, whereas AI fundamentally operates on probability.
The issue is the Pareto Principle in overdrive: AI gets you to 90% instantly, but in our environment, anything less than 100% is often a failure. Bridging that final 10% reliability gap is the real challenge.
Still, I view total replacement as inevitable. We are currently in a transition period where our job is to rigorously experiment and figure out how to safely cross that gap.
Good luck!
That said, I have a hunch we're heading toward a world where we stop reading AI-generated code the same way we stopped reading assembly. Not today, not tomorrow, but the direction feels clear.
Until then — yes, we need to understand every bit of what the AI writes.
AI? Not so much. Not deterministic. Sure, the probability of something bizarre may go down. But with AI, as currently constituted, you will always need to review what it does.
The real comparison is:

1. Human writes code (non-deterministic, buggy) → compiler catches errors
2. AI writes code (non-deterministic, buggy) → compiler catches errors
In both cases, the author is non-deterministic. We never trusted human-written code without review and compilation either (plus lots of tests). The question isn't whether AI output needs verification — of course it does. The question is whether AI + human review produces better results faster than a human alone.
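The symmetry in that comparison can be made concrete: a verification gate doesn't care who the author was. A minimal Python sketch, where the file path and the `run_tests` callback are purely illustrative (not from any specific tool):

```python
import py_compile

def gate(source_path: str, run_tests) -> bool:
    """Accept code only if it compiles and its tests pass,
    regardless of whether a human or an AI wrote it."""
    try:
        # The compiler catches errors the same way for both authors.
        py_compile.compile(source_path, doraise=True)
    except py_compile.PyCompileError:
        return False
    # The rest of the "review" is the test suite.
    return bool(run_tests())

# The same gate applies no matter who wrote the file:
# gate("human_written.py", run_tests=my_suite)
# gate("ai_written.py", run_tests=my_suite)
```

The point of the sketch is only that the pipeline is author-agnostic; the hard part, as the parent says, is whether the combined loop is actually faster.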
But, I don't like hype or having things forced down my throat, and there's a lot of that going on.
Psychologically, the part that seems depressing is that everything just seems totally disposable now. It's hard to see the point of learning the latest and greatest AI tools/models when they'll be replaced in about three months, and it's hard to see the point in trying to build anything, with or without AI, given the deluge of AI slop it will be up against.
I like the idea of spending a bit of time learning something, like how to use a shell, ride a bike, drive a car, or program in C or C++, and then using that skill for years or decades, if not a lifetime. AI seems to have taken that away: now everything is brand new and disposable, and everyone is an amateur.
Meanwhile, some of us were over here, building embedded systems with C and C++. The big switch was from Green Hills or VxWorks to embedded Linux. The time scale was more "OS of the decade". There's hype and fads, and there's stuff that lasts.
I'm not opposed to new things, but I guess I want incremental improvement on the old thing, and more on the timescale of years than weeks.
I do think that, like all trendy hypes, it will fade after a while. And the people who are focused on the next thing now will be a step ahead once the AI hype gets old.
For startups specifically, I think the next big thing will be in-person social media. The AI slop will get old after a while, and someone will figure out how to make Meetup.com actually work.
> The people and corporations and all those LinkedIn gurus, podcasters
You can just mute and ignore them
> I'm now scared to publish open source
If you get many PRs, that's a good problem to have; it beats publishing something nobody reads.
> mediocre C compilers, Moltbook
It's all experiments. You could say the same about cleantech 15 years ago, when companies talked about solar panels and electric cars with swappable batteries all the time. You don't have to keep track of everything people are experimenting with.
In five years' time AI will be just another tool in the toolbox, and nobody will remember the names of the hypers. I agree it is depressing: there are quite a few people banging this drum, and because of that it becomes harder to be heard. They, like AI, have the advantage of quantity. There is one character right here on HN who spews out one low-effort AI-generated garbage article after another, and it all gets upvoted as if it were profound and important. It isn't. All it shows is how incredibly bland all this stuff is.
Meanwhile, here I am, solving a real problem. I use AI as well, but mostly as a teacher, and I check each and every factoid that isn't immediately, obviously true. The rate at which that turns up hallucinations is proof enough to me that our jobs are safe, for now.
A good niche is cleaning up after failed AI projects ;)
best of luck there!
Jacques
Trust your eyes. You can see what it actually does; when the marketing contradicts that, the marketing is lying to you.
But it sounds like your problem isn't knowing what to believe. Your problem is that you know the truth, and you're tired of having to wallow in the lies all day. I don't blame you; lies are bad for your mental health. Well, there's a solution: Turn off the internet. You can, you know. Or at least you can turn off the feed into your brain. Stop looking at posts about AI, even on HN. If you can't dodge them well enough, just turn off social media. Go outside, if the temperature is decent. If it isn't, go to a gym or an art museum or something. Just stop feeding this set of lies into your brain.
Recommended reading: [0]
What you are seeing is that anyone can build anything with just a computer and an AI agent, and the AI boosters are selling dreams, courses, and fantasies without the risks or downsides that come with it. Most of these vibe-coded projects just have very bad architecture, and experienced humans still have to review and clean it all up.
Meanwhile, "AGI" is being promised by the big labs, but their actions say otherwise: what it really means is an IPO. After that we will see a crash, and the hype brigade and all the vibe coders will be raced to zero by local models and will move on once the grift has concluded.
You now need to know what to build, and what should exist out of infinite possibilities, since you can assume someone else can build it in 10 minutes with AI. It used to be that 90% of startups failed; with AI it is now 98%.
We know how this all ends. Do not fall for the hype.
[0] https://blog.oak.ninja/shower-thoughts/2026/02/12/business-i...