18 points by pavello 4 hours ago | 8 comments
  • yodsanklai 8 minutes ago
    I feel the exact same way as you do (same age as well), and I know a lot of my teammates do. At this point, I have no idea what our profession will look like in a few years. Maybe the hype will pass and we'll be back to normal, or the profession will disappear (or become much less fun). Who knows... in the meantime, I'm trying to keep my job and save money while I can.
  • 0xecro1 3 hours ago
    Hi, I’ve been working with embedded Linux for 18 years.

    I’ve been actively trying to apply AI to our field, but the friction is real. We require determinism, whereas AI fundamentally operates on probability.

    The issue is the Pareto Principle in overdrive: AI gets you to 90% instantly, but in our environment, anything less than 100% is often a failure. Bridging that final 10% reliability gap is the real challenge.

    Still, I view total replacement as inevitable. We are currently in a transition period where our job is to rigorously experiment and figure out how to safely cross that gap.

    Good luck!

    • jacquesm 2 hours ago
      And by not doing the 90% yourself, you lack the understanding you need to tackle the remaining 10%.
      • 0xecro1 2 hours ago
        Absolutely agree. I do vibe-code, but I still review every line of that 90% — I don't move forward until I understand it and trust the quality. Right now, that human verification step is non-negotiable.

        That said, I have a hunch we're heading toward a world where we stop reading AI-generated code the same way we stopped reading assembly. Not today, not tomorrow, but the direction feels clear.

        Until then — yes, we need to understand every bit of what the AI writes.

        • AnimalMuppet an hour ago
          I disagree. Compilers were deterministic. Complicated, but deterministic. You could be sure that it was going to emit something sensible.

          AI? Not so much. Not deterministic. Sure, the probability of something bizarre may go down. But with AI, as currently constituted, you will always need to review what it does.

          • 0xecro1 an hour ago
            I think the comparison is slightly off. The compiler was never the author — it was the verifier.

            The real comparison is:

            1. Human writes code (non-deterministic, buggy) → compiler catches errors

            2. AI writes code (non-deterministic, buggy) → compiler catches errors

            In both cases, the author is non-deterministic. We never trusted human-written code without review and compilation either (plus lots of tests). The question isn't whether AI output needs verification; of course it does. The question is whether AI + human review produces better results faster than a human alone.

            • apothegm 18 minutes ago
              The compiler catches certain classes of errors. And AI can spit out unmaintainable code or code with incorrect logic or giant security holes a lot faster than humans can review it.
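              To make that concrete, here is a minimal, hypothetical C sketch (mine, not from this thread) of the kind of code a compiler is perfectly happy with: it typically builds cleanly, yet it contains a textbook buffer overflow that only a human reviewer or a dedicated analyzer would flag.

                #include <stdio.h>
                #include <string.h>

                /* Typically compiles without complaint with a plain `gcc file.c`,
                   but any name longer than 15 bytes overflows `buf` and corrupts
                   the stack -- a hole the type checker never sees. */
                static void greet(const char *name) {
                    char buf[16];
                    strcpy(buf, name);               /* no bounds check */
                    printf("Hello, %s\n", buf);
                }

                int main(int argc, char **argv) {
                    greet(argc > 1 ? argv[1] : "world");
                    return 0;
                }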
  • tacostakohashi an hour ago
    Hmm, well I am not philosophically opposed to AI.

    But I don't like hype or having things forced down my throat, and there's a lot of that going on.

    Psychologically, the part that seems depressing is that everything just seems totally disposable now. It's hard to even see the point of learning the latest and greatest AI tools/models, because they'll be replaced in about 3 months, and it's hard to see the point in trying to build anything, with or without AI, given the deluge of AI slop it will be up against.

    I like the idea of spending a bit of time learning something, like how to use a shell, how to ride a bike, how to drive a car, how to program in C or C++, and then using that skill for years or decades, if not a lifetime. AI seems to have taken that away; now everything is brand new and disposable, and everyone is an amateur.

    • AnimalMuppet an hour ago
      In a way, this seems similar to the "web framework of the month" that everyone wrestled with for a while. There's a new tool! You're obsolete if you don't switch now!!!!!

      Meanwhile, some of us were over here, building embedded systems with C and C++. The big switch was from Green Hills or VxWorks to embedded Linux. The time scale was more "OS of the decade". There's hype and fads, and there's stuff that lasts.

      • tacostakohashi 44 minutes ago
        Yes, exactly. It's just like the js-framework-of-the-month peak of the early 2010s, or the coin of the month in the late 2010s... I guess that's just part of the fad dynamic: you get microfads within the macrofad...

        I'm not opposed to new things, but I guess I want incremental improvement on the old thing, and more on the timescale of years than weeks.

  • keiferski 3 hours ago
    It’s becoming pretty annoying, and I am noticing that I read HN less.

    I do think that, like all trendy hypes, it will go away after a while. And the people who are focused on the next thing now are going to be a step ahead once the AI hype gets old.

    For startups specifically, I think the next big thing will be in-person social media. The AI slop will get old after a while, and someone will figure out how to make Meetup.com actually work.

  • bkjlblh 2 hours ago
    TLDR: you don't have to leave the industry; just focus on yourself and not your feelings.

    > The people and corporations and all those LinkedIn gurus, podcasters

    You can just mute and ignore them

    > I'm now scared to publish open source

    If you get many PRs, that's a good problem to have; better than publishing something nobody reads.

    > mediocre C compilers, Moltbook

    It's all experiments. You could say the same thing about cleantech 15 years ago, when companies talked about solar panels and electric cars with swappable batteries all the time. You don't have to keep track of everything people are experimenting with.

  • jacquesm 3 hours ago
    I wouldn't blame you. But hypes come and hypes go, and this one will go too. It will destroy the funding environment for a while when it does; the same thing happened the previous times.

    In five years' time AI will be just another tool in the toolbox and nobody will remember the names of the hypers. I agree it is depressing: there are quite a few people banging this drum, and because of that it becomes harder to be heard. They, like AI, have the advantage of quantity. There is one character right here on HN who spews out one low-effort, AI-generated garbage article after another, and it all gets upvoted as if it is profound and important. It isn't. All it shows is how incredibly bland all this stuff is.

    Meanwhile, here I am, solving a real problem. I use AI as well, but mostly as a teacher, and I check each and every factoid that isn't immediately and obviously true. The degree to which that turns up hallucinations is proof enough to me that our jobs are safe, for now.

    A good niche is cleaning up after failed AI projects ;)

    best of luck there!

    Jacques

  • AnimalMuppet an hour ago
    > But the gap between the marketing and reality for many of us is hard to describe.

    Trust your eyes. You can see what it actually does; therefore the marketing is lying to you.

    But it sounds like your problem isn't knowing what to believe. Your problem is that you know the truth, and you're tired of having to wallow in the lies all day. I don't blame you; lies are bad for your mental health. Well, there's a solution: Turn off the internet. You can, you know. Or at least you can turn off the feed into your brain. Stop looking at posts about AI, even on HN. If you can't dodge them well enough, just turn off social media. Go outside, if the temperature is decent. If it isn't, go to a gym or an art museum or something. Just stop feeding this set of lies into your brain.

  • rvz 3 hours ago
    > Where are we going with this?

    Recommended reading: [0]

    What you are seeing is that anyone can build anything with just a computer and an AI agent, and the AI boosters are selling dreams, courses, and fantasies without the risks or downsides that come with them. Most of these vibe-coded projects just have very bad architecture, and experienced humans still have to review and clean it all up.

    Meanwhile, "AGI" is being promised by the big labs, but their actions say otherwise: what it really means is an IPO. After that, we will see a crash, and the hype brigade and all the vibe coders will be raced to zero by local models and will move on once the grift has concluded.

    You now need to know what to build, and what should exist, out of infinite possibilities, because you can assume someone else can build the same thing in 10 minutes with AI. Where 90% of startups used to fail, with AI it is now 98% of them failing.

    We know how this all ends. Do not fall for the hype.

    [0] https://blog.oak.ninja/shower-thoughts/2026/02/12/business-i...