52 points by shinryuu 6 hours ago | 17 comments
  • sgt 6 hours ago
    We need developers like these who are able to innovate without being tainted by the echo chamber of AI.

    LLMs tend to regurgitate known design patterns and familiar UX. Like those typical "keep scrolling down to learn about our app as we show you animations" pages - it gets a bit old.

    • sovnade 6 hours ago
      It won't be long before (if it isn't already happening) LLMs start training on stuff they wrote previously, and it becomes the largest echo chamber the internet has ever seen.
      • Cthulhu_ 6 hours ago
        Which could be damaging, or it could create interesting results if it's more like an evolutionary algorithm than entropy. That is, if it can iterate and improve on itself, instead of just taking in all information and treating it equally, we'll get something interesting.
        • sgt 4 hours ago
          For the sake of the IT industry, let's hope it's the latter!
      • seanmcdirmid 6 hours ago
        I’m pretty sure this is already part of the training loop even if it isn’t coming from the internet. It is definitely used for fine tuning and distillation. As for how LLM producers avoid model collapse, they curate and filter.
    • mpalmer 5 hours ago
      Counterpoint: Swearing off AI doesn't ipso facto make you a good developer, and there are plenty of skilled people who can innovate and use AI at the same time.
    • CuriouslyC 5 hours ago
      To be fair, hero banners lower bounce rates. Nobody would waste the time if it didn't work.
    • redox99 5 hours ago
      AI will beat humans at all tasks that are not subjective (such as a landing page being pretty), but instead can be determined to be correct or not (does an endpoint return the correct data? how fast?)

      Just like a chess engine beats any human.

      People think LLMs are still at the point of programming based on what they learned from the data they scraped. We're well past that. We're at the point of heavy reinforcement learning. LLMs will train on billions of LoC of synthetic code they generate, like chess engines train on self-play games.

      • abcde666777 5 hours ago
        Chess has a very specific win condition by which moves can be assessed. Many real world problems are much fuzzier than that and don't reduce neatly to algorithmic validation of 'correctness'.
        • redox99 5 hours ago
          That's the breakthrough of ML, it can handle fuzzy. And chess is in some ways similar. Outside of endgame and blunders (where you can just bruteforce), you can't prove one move is superior to another. That's why chess engines used to have human made heuristics.
    • epolanski 6 hours ago
      Sure if all you're doing is making it just vomit code then yes.

      But there's way more LLMs can do: assist you in connecting dots in complex codebases, find patterns to refactor given some rules, provide ideas, let you dig into dependencies to find APIs that are undocumented or edge cases that are subtle to find, surface information you might've had to dig up through endless Google queries and hard-to-find GitHub issues, and provide support in technologies you sometimes have to use (sed, awk, regexes, Google Sheets APIs, etc.) but don't care enough to learn because you only happen to use them a few times a year, allowing you to focus on what matters.

      I'm frankly tired of those pointless debates conflating LLMs in the context of code with the same boring arguments of people hating on vibecoding or thinking every developer is delegating everything to AI and pushing slop. If those are your colleagues, fire them. They are indeed useless and can be replaced by a prompt.

      If one cannot see the infinite uses of LLMs even just for support, without them ever authoring a single line of code or ever touching a file, it's only that person limiting their own growth and productivity.

      Seriously, this is beyond tiring.

      • sodapopcan 5 hours ago
        You hit on something key here: the vast majority of the pro-AI articles are talking in the context of code vomit or content vomit or other types of vomit. As you say, it is beyond tiring, so we respond. I'm still waiting for that cancer cure!

        > it's only been that person limiting it's own growth and productivity.

        Maybe limiting raw productivity, but I sure don't buy that it limits growth. Maybe if all you ever did was copy and paste off of SO, but taking the time to study and deeply understand something is going to be much better for your overall growth. Also collaborating with humans instead of robots is always nice.

        • epolanski 4 hours ago
          You can study better and more effectively with more tools at your disposal ;)

          I've never learned and collaborated as much as in the last years.

          • sodapopcan 3 hours ago
            It's all situational, I guess. For example I never learned or collaborated more (or had more fun at work) than when I worked at an XP shop, pairing every single day. I feel the majority of people are just going to have coworkers who have fully stopped asking questions. But you aren't wrong. I use AI a bit as a search engine.
            • epolanski 3 hours ago
              You can still do XP, can't you?

              I do agree that real people sharing is absolutely crucial, and that AI may have a negative effect if it becomes the main and only way of (non) sharing.

              • sodapopcan 3 hours ago
                I could if there were a wealth of companies that offered it ;) Not only that, if there was a wealth of people who wanted to do it. It's unsurprising that most people are more willing to collaborate with a machine than a human (if they are even willing at all), but there's obviously nothing I can do about that.

                > I do agree that real people sharing is absolutely crucial, and that AI may have a negative effect if it becomes the main and only way of (non) sharing.

                Yes, exactly. Most of the pro-AI articles that show up here are about generative AI, which is what the vast majority of people are raging against. This particular nuance is often left out. Although TFA's About page explicitly states that when they use "AI" they mean "Generative AI." Obviously pretty much all of us have been using "AI" in some form or another for the past 20... 30+ years?

                But then there are a whole host of other reasons I don't like "AI" that are a little out of scope of this rant.

  • stavros 6 hours ago
    Aren't we all tired by this anti-AI stuff? Use it if you want to, don't use it if you don't want to, I just don't really want to hear about your personal opinion on it any more.
    • monsieurbanana 6 hours ago
      I do hope you comment the same thing on the pro-AI articles from people trying to sell you a product. The internet is now infested by those, and without these articles you might think everybody has collectively lost their minds and still thinks we will get replaced in the next 6 months.

      I use AI; what I'm tired of is shills and post-apocalyptic prophets.

      • jfyi 5 hours ago
        I use AI; I pay a subscription to Google. I use it for work. I use it for learning. I use it for entertainment.

        I am still concerned with how it's going to impact society going forward. The idea of what this is being used for by those with a monopoly on the use of violence is terrifying: https://www.palantir.com/platforms/aip/

        Am I a shill or a post-apocalyptic prophet?

      • embedding-shape 6 hours ago
        Yes, those of us who use AI yet aren't shills nor hypers, and who still have our critical thinking receptors left in our brains, are tired of both sides exaggerating and hyping/dooming.

        People would do much better if they just stopped listening so much and started thinking and doing a bit more. But as a lazy person, I definitely understand why it's hard, it requires effort.

      • stavros 6 hours ago
        "Look at how I use this cool new technology" tends to be much more interesting to me than "this new technology has changed my job and I refuse to use it because I'm afraid".
        • kfreds 5 hours ago
          Obviously it’s far more nuanced than that. I’d say there are several categories where a reasonable person could have reservations (or not) about LLMs:

          Copyright issues (related to training data and inference), openness (OSS, model parameters, training data), sovereignty (geopolitically, individually), privacy, deskilling, manipulation (with or without human intent), AGI doom. I have a list but not in front of me right now.

          • stavros 5 hours ago
            Yes, and those are interesting topics to discuss. "AI is useless and I refuse to use it and hate you if you do" isn't, yet look at most of the replies here.
            • bdangubic 43 minutes ago
              I hope we’ll eventually reach enough fatigue that either of the two gets 0 comments and we move on
            • simoncion 5 hours ago
              > Yes, and those are interesting topics to discuss. "AI is useless and I refuse to use it and hate you if you do" isn't...

              Did you read Mr. Bushell's policy [0], which is linked to by TFA? Here's a very relevant pair of sentences from the document:

                Whilst I abstain from AI usage, I will continue to work with clients and colleagues who choose to use AI themselves. Where necessary I will integrate AI output given by others on the agreement that I am not held accountable for the combined work.
              
              And from the "Ensloppification" article [1], also linked by TFA:

                I’d say [Declan] Chidlow verges towards AI apologism in places but overall writes a rational piece. [2] My key takeaway is to avoid hostility towards individuals†. I don’t believe I’ve ever crossed that line, except the time I attacked you [3] for ruining the web.
                
                † I reserve the right to “punch up” and call individuals like Sam Altman a grifter in clown’s garb.
              
              Based on this information, it doesn't seem that Mr. Bushell will hate anyone for using "AI" tools... unless they're CEO pushers.

              Or are you talking in generalities? If you are, then I find the unending stream of hype articles from folks using this quarter's hottest tool to be extremely uninteresting. It's important for folks who object to the LLM hype train to publish and publicize articles as a counterpoint to the prevailing discussion.

              As an aside, the LLM hype reminds me of the hype for Kubernetes (which I was personally enmeshed in for a great many years), as well as the Metaverse and various varieties of Blockchain hype (which I was merely a bystander for).

              [0] <https://dbushell.com/ai/>

              [1] <https://dbushell.com/2025/05/30/ensloppification/>

              [2] link in the pull quote being discussed to: <https://vale.rocks/posts/ai-criticism>

              [3] inline link to: <https://dbushell.com/2025/05/15/slopaganda/>

        • spooneybarger 5 hours ago
          That's an exceedingly unkind summation of the piece in question.
          • stavros 5 hours ago
            I wasn't talking about the piece in question, which just says "BTW I don't use AI".
        • bakugo 5 hours ago
          > this new technology has changed my job and I refuse to use it because I'm afraid

          You're confusing fear with disgust. Nobody is afraid of your slop, we're disgusted by it. You're making a huge sloppy mess everywhere you go and then leaving it for the rest of us to clean up, all while acting like we should be thankful for your contribution.

    • OptionOfT 2 hours ago
      No, I'm tired of AI being pushed as this amazing way to make everybody go 100% faster, while being able to lay off 90% of the people.

      And for some reason the CxO suite and upper management have completely drunk the Kool-Aid.

      In the past new technology was adopted sparingly, to figure out whether the juice was worth the squeeze.

      However with AI it feels like a lot of places are (trying to go|going) all in, both in their work, and integrating it into the products, regardless of whether it makes sense.

      But most importantly, I think pushback is needed because if AI succeeds in the way it is currently advertised and sold, it's a lot more people than 'just' the Software Engineers that are going to lose their jobs.

      Which is great for all those companies, who will then have a lot fewer people on payroll.

      But on the other hand, a lot of the money spent on these companies is discretionary spending. Guess what's the first thing to be cut when you lose your job?

    • rootnod3 6 hours ago
      Aren't we all tired by this pro-AI stuff? Use it if you wanna ruin the planet. Don't use it if you care about maintaining skill.

      I just don't really wanna hear about your pro-AI peddling anymore.

      • exitb 5 hours ago
        AI is now a mainstream technology and well within the area of topics discussed on this board. Are we going to sit around and pretend it’s 2021? It’s like getting annoyed that all we talk about is computers.
        • rootnod3 5 hours ago
          The annoying part is that AI is getting shoved down every throat it can find. It's Blockchain and NFTs all over again.
          • lostmsu 4 hours ago
            There's nothing wrong with blockchain. In fact I find this swing back to centralized services distasteful for this community.
      • NitpickLawyer 6 hours ago
        Eh, like everything on the Internet, the anti crowd is becoming more obnoxious than the pro crowd ever was. It has become an identity thing, more than a technical thing, and it always sucks when it devolves into that.
        • rootnod3 5 hours ago
          Is wasting massive fucktons of water and electricity an identity thing? Is it identity that RAM now costs 10x what it used to?
            • lostmsu 4 hours ago
              What are you going to spend that water and electricity in the US on instead?
        • kranner 5 hours ago
          It is still a technical thing though. AI-generated code is outright buggy when it's not mediocre, but the pro-AI crowd is pretending you can guardrail and test-suite your way to good generated code. As if painting a picture in negative space is somehow less work than painting it directly. And that's when you know all the requirements (the picture) upfront.
        • sodapopcan 5 hours ago
          > Eh, like everything on the Internet, the anti crowd is becoming more obnoxious than the pro crowd ever was.

          In your highly objective opinion, of course.

    • Leynos 6 hours ago
      I find some of it interesting. I'm very interested in understanding why others' experience of using genAI is so vastly different to my own.

      (For me it's been as transformational a change as discovering I could do my high school homework on a word processor in the 90s when what I suspect was undiagnosed dyspraxia made writing large volumes of text by hand very painful).

      I'm also interested in understanding if the envisaged transformation of developers into orchestrators, supervisors, tastemakers and curators is realistic, desirable or possible. And if that is even the correct mental model.

      • stavros 5 hours ago
        Sure, me too, but most discourse I've seen is just knee-jerk reactions of the form "AI is entirely useless", which is basically just noise.
        • bee_rider 2 hours ago
          I don’t like “AI is useless” as an argument because

          * it is basically invalidated by somebody saying “well I find it useful”

          * it is easy to believe it’s on a path toward usefulness

          OTOH it is worth keeping in mind that we haven’t seen what a profitable AI company looks like. If nothing else this technology has massive potential for enshittification…

          • stavros 2 hours ago
            I agree with you entirely. On the other hand, I love that nobody will ever be able to take the current open models away from us.
    • sodapopcan 5 hours ago
      I'm not, I'm tired of hearing about it. If someone is forcing you to read these articles then that sounds like you are in a really shitty situation. Blink twice if you need help.
    • jrjeksjd8d 6 hours ago
      My CEO sent a company-wide email this week saying "AI use is mandatory for all developers". Until this kind of mandatory bullshit stops I'm happy to see other people fighting the good fight and publicly saying that they want to keep doing a job they actually enjoy.

      Many of my coworkers have embraced AI coding and the quality of our product has suffered for it. They deliver bad, hard-to-support software that technically checks some boxes and then rush on to produce more slop. It feels like a regression to the days of measuring LOC as a proxy for productivity.

      • em-bee an hour ago
        what's your exit strategy? if i got a letter like that i'd either be out switching jobs at the first opportunity, or i'd ignore it until i get fired for refusing to comply, while hoping that disaster strikes before that happens, or maybe just hoping that no one notices.
      • gedy 5 hours ago
        I'm seeing the top-down AI usage pushed by the same types of leaders and companies who love to outsource and are happy with shit shovelled over the wall while devs firefight production bugs forever. It's just a good reminder they don't care a bit about quality.
    • jchw 5 hours ago
      > Aren't we all tired of this anti-AI stuff?

      Let's do a quick analysis of the amount of money put forth to push AI:

      > OpenAI has raised a total of $57.9B over 9 funding rounds

      > Groq has raised a total of $1.75 billion as of September, 2025

      Well, we could go on, but I think that's probably a good enough start.

      I looked into it, but I wasn't able to find information on funding rounds that David Bushell had undergone for his anti-AI agenda. So I would assume that he didn't get paid for it, so I guess it's about $0.

      Meanwhile:

      - My mobile phone keyboard has "AI"

      - Gmail has "AI". Google docs has "AI". At one point every app was becoming a chat app, then a TikTok clone. Now every app is a ChatGPT or Gemini frontend.

      - I'm using a fork of Firefox that removes most of the junk, and there's still some "AI" in the form of Link Preview summaries.

      - Windows has "AI". Notepad has "AI". MS Paint has "AI".

      - GitHub stuck an AI button in place of where the notifications button was, then, presumably after being called every single slur imaginable about 50000 times per day, moved it thirty or so pixels over and added about six more AI buttons to the UI. They have a mildly useful AI code review feature, but it's surprisingly half-baked considering how heavily it is marketed. And I'm not even talking about the actual models being limited; the integration itself is lame. I still consider it mildly useful for catching typos, but that is not worth several billion dollars of investment.

      - Sometimes when I log into Hacker News, more than half of the posts are about AI. Sometimes you get bored of it, so you start trying to look at entries that are not overtly about AI, but find that most of those are actually also about AI, and if not specifically about AI, goes on a 20 minute tangent about AI at some point.

      - Every day every chat every TV program every person online has been talking about AI this AI that for literally the past couple of years. Literally.

      - I find a new open source project. Looks good at first. Start to get excited. Dig deeper, things start to look "off". It's not as mature or finished as it looks. The README has a "Directory Structure" listing for some odd reason. There's a diagram of the architecture in a fixed width font, but the whitespace is misaligned on some lines. There are comments in the code that reference things like "but the user requested..." as if the code wasn't written by the user. Because it wasn't, and worse, it wasn't read by them either. They posted it as if they wrote it, making no mention at all that it was prompts they didn't read, wasting everyone's time with half-baked crapware.

      And you're tired of anti-AI sentiment? Well God damn, allow me to Stable Diffusion generate the world's smallest violin and synthesize a song to play on it using OpenAI Jukebox.

      I'm not really strictly against AI entirely, but it is the most overhyped technology in human history.

      • Semiapies 5 hours ago
        > Sometimes when I log into Hacker News, more than half of the posts are about AI.

        And I don't ever see it under a fifth, anymore. There is a Hell of a marketing push going on, and it's genuinely hard to tell the difference between the AI true believers and the marketing bots.

    • orleyhuxwell 5 hours ago
      I am not tired by this anti-AI stuff. As a person who uses it in very limited capacity, also as an ML/computer vision developer and researcher with 10 years of commercial experience with it, I want much more anti AI stuff.

      Low quality (low precision) news, code, marketing, diagnoses, articles, books, food, entertainment (shorts, TikTok), and engineering is, in my opinion, the biggest problem of the 21st century so far.

      Low quality AI usage decisions, low quality AI marketing, retraining, placement, and investments are accelerating the worst trends even more. It's like Soviet nuclear trains - just because nuclear is powerful and real doesn't mean most of its applications made any sense.

      So as a pro-AI person and AI-builder in general, I want more anti-AI-slop content, more pro-discipline opinions.

      • jfyi 5 hours ago
        I think you have hit on something. The problem isn't the tech, it’s the eye of the user.

        The same person who ignores a crooked door frame or a CSS overflow now has a "mostly right" button to bring mediocrity to scale. We unfortunately aren't invested in teaching craftsmanship as a society.

      • seanmcdirmid 5 hours ago
        Both sides are against slop; what we are arguing about is basically the position that “AI can be used for useful things” vs the “all AI is slop” position, the latter being based on a hasty generalization fallacy (some AI is slop, so all AI is slop).
        • orleyhuxwell 34 minutes ago
          Sorry, but I don't think so.

          I think without AI the effort of producing slop code or art that sort of looks like the real thing at first glance is, let's say, 5% of the effort needed for the real thing that actually works flawlessly. LLMs and diffusion models bring it down to 0.5%.

          They are also really good at faking comprehension, and they make distinguishing real expertise from phoney cosplayers harder for busy managers, officials, execs, politicians, etc.

          So while AI CAN be used for useful things, it very rarely is and it requires more discipline than most people are willing to invest.

          Also, the way AI is trained on stolen and random low quality content is deeply disturbing.

          So yeah, while I'd like anti-AI rants to be more precise and nuanced, in general AI in 2026 is mostly a misuse of a technology with great potential.

    • robin_reala 6 hours ago
      You deliberately read an article entitled “You can’t pay me to prompt” then complained about having to hear about anti-AI blog posts?
      • SecretDreams 6 hours ago
        They said they're tired of anti-AI commentary, not that they're tired of complaining about anti-AI commentary!
    • nuancebydefault 6 hours ago
      We're in this in-between phase where we gradually all start to use AI. There is no escaping.
      • kranner 5 hours ago
        Resistance is futile, you will be assimilated?

        Is it so hard to understand why people are reacting against this argument?

        • nuancebydefault 3 hours ago
          In fact you are right, there is no escape from this assimilation, at least I do not see how. And the outcome might be worse than becoming the Borg. Nobody can tell right now.

          There's resistance but on the other hand there was resistance against light bulbs, trains with engines, automatic press, phones, television, a global internet,...

          • kranner 3 hours ago
            > There's resistance but on the other hand there was resistance against light bulbs, trains with engines, automatic press, phones, television, a global internet,...

            There was also resistance against fascism, slavery, Ponzi schemes, the privatisation of public goods, the devaluation of the Humanities, ...

    • rubyfan 5 hours ago
      I’m actually more sick of hearing about AI like literally all the time in all forms of media. I’m also sick of seeing AI created content which is so obviously low quality and often unchecked and just thrown out into the world.

      Also when I hear another human suggest using AI for ____, my perception of them is that they are an unserious person.

      So in my opinion AI has had a net negative effect on the world so far. Reading through this persons AI policy resonates with me. It tells me they are a thoughtful individual who cares about their work and the broader implications of using AI.

    • bossyTeacher 5 hours ago
      > Aren't we all tired by this anti-AI stuff?

      It's fine to be tired of this. What is not fine is pretending your beliefs/feelings represent everybody else's.

      No one is forcing you to read the article. He is as free to write what he wants as you are to complain about it. Balanced, like all things should be.

    • blurbleblurble 6 hours ago
      Very tired
    • arianvanp 6 hours ago
      Nah, I'm tired of AI dominating 90% of the posts and the slop machine. People who use AI can't shut up about it.
      • lostmsu 4 hours ago
        There's a good reason for that: other AI users are listening. This is like choosing a car or a work tool, except they meaningfully progress every 6 months (more often if you restrict yourself to local models). So you need to get an impression of what to use next before switching, unless you want to review every single one yourself.

        There are entire sites dedicated to car reviews. This is a hackers website. Makes sense that the most evolving tool for the job is most discussed.

        What else is really changing? CSS added a couple new properties? C++ new standard still didn't add modules (but the year changed!)?

        • arianvanp 3 hours ago
          You should ask yourself why nothing interesting is happening anymore. There's a reason for that.
      • seanmcdirmid 5 hours ago
        Definitely people who hate AI can’t shut up about it. 90% of the comments on HN seem to just be people hating on AI.

        Everyone else is just busy using it to get work done.

        • sho_hn 5 hours ago
          Honestly, it feels different to me. I have the distinct impression that the pro-AI side is much more desperate to normalize usage and have AI-based achievements recognized as equivalent, rooted in fears of inadequacy. It's about hoping everyone stops with the "you didn't make that, the AI did".
          • seanmcdirmid 5 hours ago
            “One side has the impression and/or believes that the other side is more vocal and less sincere” is as old as humanity and hasn’t changed with AI.
    • dvfjsdhgfv 3 hours ago
      Frankly, I never understand this usage of "we". Who is "we"? An honest post would be "I'm tired of this anti-AI stuff". (And I feel you as I'm as bored with "look what my claude produced" posts.)
    • bakugo 5 hours ago
      > I just don't really want to hear about your personal opinion on it any more.

      And I don't want to hear about how the world of software engineering has been revolutionized because you always hated programming with a passion, but can now instead pay $200 to have Claude bring your groundbreaking B2B SaaS Todo app idea to life, yet that's basically all I hear about in any tech discussion space.

      You should ask your AI assistant to explain to you why people would go out of their way to take a stand against this.

      • lostmsu 4 hours ago
        You should ask an AI why this impression is very wrong.
  • knallfrosch 6 hours ago
    > Did you noticed the new badge stamped atop my website?

    No, because the banner is cut off on my phone.

    I don't really understand the policy either. I assumed this was a contractor's website. I've never met one who accepted tool recommendations, nor a company that cared. Use Solaris and emacs for all I care.

  • nuancebydefault 6 hours ago
    The thing is, not using AI takes so much effort that it is almost impossible to correctly say "I don't use AI". It is like saying "I don't use a search engine" 20 years ago.
  • sfortis 5 hours ago
    A senior developer + AI = superpowers. But some people have extremely strong resistance to change.
    • amelius 5 hours ago
      I think that a large percentage of programmers hate to read other people's code, and that's where the aversion comes from.

      It's much more fun to write code than to review code.

      • kranner 5 hours ago
        I think it's also the case that AI code can be insanely bad despite being well-formatted and sometimes very good when it comes to specific functionality. But you have to keep watching out for code that doesn't do anything or replicates existing functionality or devolves into complete loose ends. Actual human code is much more pleasant by comparison.
  • gste 6 hours ago
    All you're doing is marking yourself as an untainted source of training data
  • arealaccount 5 hours ago
    But the badge looks like it was created by Chat
  • jairojair 5 hours ago
    > You Can’t Pay Me To Prompt!

    As a software engineer, you don't get paid for simply writing code; people pay you for problem-solving.

  • dvh 5 hours ago
    It reminds me of the elevator boys' protests in the early 1900s against automatic elevators (now just called elevators).
  • markthered 4 hours ago
    Does the author ever use Google and read its summary?
  • hiAndrewQuinn 5 hours ago
    >This is a practical policy allowing me to maintain my own professional standards and remain employable in a difficult economy.

    I'm interested to hear more about the rationale behind the "remain employable" part of this line.

    All things being equal, we would normally expect someone deliberately saying they won't use a certain tool for a certain job to be limiting their employment opportunities, not expanding them. The classic example is people who refuse to drive for work; there are good non-employment reasons for this (driving is the most dangerous thing many people do on a daily basis), but it's hard to argue that it doesn't restrict where one can work.

    I think the most likely rationale is that the author thinks that posting a no-AI policy for professional work is itself seen as a signal of certain things about them, their skill level, etc., and that wins out for the kinds of clients they wish to take on. This doesn't have to be a long- or even medium-run bet to make, given that it's cheap to backtrack on such a policy down the line. Either way it's clear from reading the measured prose that there's an iceberg of thinking behind what's visible here and they are probably smarter than I am.

    • NateEag 5 hours ago
      They're saying that if they completely refused to touch any system that has been touched by AI, they would be unable to find paying work.

      Thus, they won't use it directly themselves, but are willing to work with people who do.

      • lostmsu 4 hours ago
        This is not wrong, but the comment you replied to implies its author already understood that perfectly.
  • Gluber5 hours ago
    My take on AI (at least for coding) is the same as for dynamic languages (Python, Ruby, etc.):

    1. It's a great tool for reducing boilerplate.

    2. It's great for experimenting with ideas without the overhead that comes with starting a new non-trivial project.

    3. It's great for one-offs, demos, or anything like that.

    4. It helps me work on some personal side projects that would never have seen the light of day otherwise.

    The downsides:

    1. As with dynamic languages, it's a great tool for EXPERT engineers (not that I'm calling myself one), but it's often used by junior/entry-level engineers who don't understand the problem, can't tell it exactly what to do, and can't judge the result. And thus it leads to codebases riddled with issues that are hard to find, and since they produce a lot of code, they're a huge liability.

    "But look what I made"... no... no you didn't. You don't even understand why it's doing something.

  • CuriouslyC6 hours ago
    I'm 100% in favor of people doing what they love, if that's hand coding, have a ball.

    I'm sick to death of people trying to grandstand, flag wave and chest pound about "the evils of AI" and "the failings of AI." You hate billionaires and you're afraid of losing your job, I get it, stop trying to propagandize and just do the thing you love to do as if AI didn't exist.

    If I meet someone who hand carves stuff, if it's good I'm into it. If they start to rave about the evils of machines I nope tf out and never return.

    • trollbridge5 hours ago
      The endless prattle about "you must embrace AI now or be locked out forever" is much more tiresome. If AI is really going to be so ubiquitous (and also replace all current skill categories), then I can just wait a year or two for AI to improve and go learn it then, right?
      • CuriouslyC5 hours ago
        As someone who's said something like that several times, I'm fine with never saying it again. I did it because I really do believe the bottom of the job market is going to fall out for engineers that aren't capable with AI and I was trying to be helpful, but if the people hand coding are aware of the risks and accept them, shrug you do you.
  • jonathanstrange5 hours ago
    Luckily, current AI technology is still in its infancy and not good enough. That being said, none of this will matter in the long run. I just don't see how AI could fail to eventually replace most jobs done in front of a computer. For example, there is no reason why programs wouldn't be created and modified on the fly in the future. It's just logical to offer this functionality once agentic AI has gotten good enough.

    However, nothing indicates that this will happen soon; we're talking about a timeline of a decade or longer. Maybe pricing, as well as hardware and energy shortages, will further slow down the transition. Right now, AI doesn't seem to be profitable for the companies offering it.

    Feel free to downvote this comment, but make sure you revisit this post 10 years from now.

  • andrewstuart5 hours ago
    I just can’t get my head around why a developer wouldn’t want to use AI assisted programming.

    It’s an absolute joy to be able to achieve essentially anything (within reason), things that previously I’d have known how to design but not build in any reasonable timeframe.

    Who are these anti AI programmers? Computing and programming has just been unlocked and they’re not interested.

    I’ve always had far more ideas than I’d ever be able to build, and now I can get at least some of them built very quickly. I just don’t understand why this wouldn’t be exciting to a developer.

    20 years from now, it will be completely taken for granted that computers can program themselves, and we will look back on that painful era when every line of code had to be hand-written by wizards, and it will look ancient and quaint.

    Join the party, join the revolution; it’s incredible fun to be able to create beyond your hand-coding skills.

    • 3rodents5 hours ago
      I’m not anti-AI, but it isn’t part of my workflow. You’re describing yourself as someone unable to achieve your ideas without AI. That’s fine. But there are many people who have spent years building the skills necessary to realize all of their ideas, and whose ideas are inextricably linked to the process. I wouldn’t have the ideas I have today if I hadn’t spent years of my life in my editor. I could defer to Claude Code for all my work, and I’d be frozen in time, never to progress again, losing everything that makes my work my work.

      Perhaps something will change, but right now, Claude Code does not change anything for me. If what I do is ancient and quaint, so be it. I’m not competing over who can churn out the most code; never have, never will, because that’s not what software development is about.

      • andrewstuart5 hours ago
        >> You’re describing yourself as someone unable to achieve your ideas without AI.

        I did not say that.

        >> But there are many people who have spent years building the skills necessary to be able to realize all of their ideas, and that their ideas are inextricably linked to the process.

        I have been designing and building software for 35 years and have many open source projects.

        You are implying that I don’t know how to program and I need AI to build stuff. Evidence to the contrary is on my GitHub.

        It’s a typical anti-AI move to suggest that you must love AI because you have no real skill.

        • 3rodents5 hours ago
          “I’ve always had far more ideas than I’d ever be able to build”
          • andrewstuart5 hours ago
            So? I don’t have time to build everything so what.
            • 3rodents5 hours ago
              If a skilled developer can achieve something in 1 week with 1,000 lines of code but it would take you 1 year and 1 million lines of code, is the issue that you don't have enough time, or that you don't have the necessary skills?

              There is no shame whatsoever in using AI. You've edited your comment since I replied. I am not anti-AI. If you can build great things with or without AI, whether it takes 1 day, 1 week, or 1 year, it doesn't matter: good software is good software. Many very talented developers are using AI. There is also no shame in not having certain skills.

              I am responding to "can’t get my head around why a developer wouldn’t want to use AI assisted programming". I explained that there are many developers who have a process that doesn't benefit from being able to generate lots of code very quickly. You said AI enables you to create "things that previously I’d have known how to design but not build in any reasonable timeframe". I'm happy for you, I'm glad AI has given you that, but there are many types of developer, many for whom that isn't a benefit of AI.

              Reddit is filled with vibecoders sharing how vibecoding is a panacea because it enabled them to build an idea they've always wanted to build but never had the time. When pressed, they reveal an idea that could be achieved very simply but their vision for how it should be built is very complicated and unsophisticated. They needed AI to achieve it because their design needed millions of lines of code. I assume you're one of those people. And that's okay.

              I am bad at math. Asking AI to do math for me will always be faster than doing it myself. However, unlike you, I can get my head around the reality that mathematicians are more efficient at doing math themselves.

    • CuriouslyC5 hours ago
      AFAICT, it comes down to people who enjoy the outcome vs people who enjoy the craft. Some people just love programming, and it doesn't matter what they're coding, making little virtual machines gives them joy. You (and I) are more outcome driven.
      • allenu5 hours ago
        I agree that this is a component of it, but there are some other things at play here which I think is what makes the debate so furious.

        For one, I think there's a sense of unfairness that people are expressing as well. A skill that took considerable time to learn and build up can be reproduced with a machine, and that just feels unfair. Another is, obviously, companies mandating that employees use AI in their work. And then there's the environmental cost of training. Then there are the cases where it's being used just for slop, or for submitting PRs that haven't even been reviewed by their creator.

        In my opinion, all of these factors make people refuse to see that some of us actually do find use for these tools and that we're not vibe-coding everything in some mad rush to ship trash.

      • andrewstuart5 hours ago
        This is true. I learned to program because I want to create.

        I love creation and creating computer software. A vision for a software idea appears in my head and I have to build it; I am utterly compelled.

        So I had to learn to program. I quite like programming; it's good to feel clever.

        But my deepest joy is creating and it’s like a gift from heaven to get LLMs that can help me realize even my most ambitious creative visions.

        It’s the outcome I want, not the experience of getting there.

        I absolutely love what LLMs have brought to programming - accelerated creation.

    • VorpalWay5 hours ago
      The fun is in the journey, not necessarily the destination, to me. Should the goal of dancing be to get it over with as quickly as possible too? Should the ideal piece of music you play be a single crashing chord so you can get done with playing the piano? To me these are direct analogues.

      Sure, there are some boring rote parts of coding, just like sawing boards might not be the most enjoyable part of woodworking. I guess you could use AI as the analog of power tools (would that be using AI to generate awk and jq command lines?). But I wouldn't want to use a CNC router and take all the fun and skill out of the craft, nor do I find agentic AI enjoyable.

      And AIs fail badly anyway when you are doing things not found much online, e.g. in embedded microcontroller development (which I do) or with company internal frameworks.

      • andrewstuart5 hours ago
        >> The fun is in the journey, not necessarily the destination, to me.

        I fully respect this. Lots of craftspeople love to work with wood by hand whilst factories build furniture on an industrial scale.

        >> AIs fail badly anyway when you are doing things not found much online, e.g. in embedded microcontroller development (which I do)

        But you are very wrong about embedded systems development and AI. I do a huge amount of microcontroller programming and AI is a massive productivity multiplier and the LLMs are extremely good at it.

        • VorpalWay4 hours ago
          > But you are very wrong about embedded systems development and AI. I do a huge amount of microcontroller programming and AI is a massive productivity multiplier and the LLMs are extremely good at it.

          Probably true if you use an SDK there are lots of examples for. I have worked with Embassy in Rust and AIs were not good, nor were they with a company-internal SDK in C++ at work. They will frequently hallucinate non-existent methods or get the parameters wrong. And for larger systems (e.g. targeting embedded Linux) they can't keep enough context in their head to work with large (millions of lines) existing proprietary code bases. They make mistakes and you end up with unmaintainable soup. Humans can learn on the job; AIs can't yet do that.

          • andrewstuart4 hours ago
            Get the AI to read the datasheets and technical reference manual for the device you are using.
    • ErroneousBosh5 hours ago
      > I just can’t get my head around why a developer wouldn’t want to use AI assisted programming.

      Because it doesn't work in a useful way.

      • andrewstuart5 hours ago
        Only someone who doesn’t use it could say that.

        Yesterday, in 3 hours, I built a fully working Go program that captures all browser network traffic from Chrome and Firefox and filters and logs it, with a TUI, a wide range of help and checks, fine-tuning, environment config, and command-line options. Multiple cycles of debugging, problem solving, and refinement until it was finished.

        That is simply not possible to do by hand in that timeframe.

        And if you don’t believe this, you think I’m exaggerating, then yeah you’re being left behind while the industry moves forward.

        • ErroneousBosh5 hours ago
          I've tried using it. I can't get it to do anything useful.

          I wanted it to help write a very simple Django website, basically a phone directory thing. After dicking about trying to get Copilot to actually help, I had about 5,000 lines of code that didn't work.

          I was able to write something that did work in about an hour and about 50 lines of code.

          Do you actually understand how the program your AI created works?

          • andrewstuart5 hours ago
            >> Do you actually understand how the program your AI created works?

            Of course, yes: I designed and architected it.

            >> I've tried using it. I can't get it to do anything useful

            Look, I really don’t want this to come across as mean or snarky, but you can’t be trying very hard if you haven’t explored the unbelievable power of Claude, ChatGPT, and Gemini. Or worse, if you have, but couldn’t work out how to get them to do anything useful. I’d encourage you to go try them, give them all a real go, and invest some time learning how to get the best from them.

            • ErroneousBosh44 minutes ago
              I've tried it a few times. It generally took ten times as long to come up with code that I could have written more clearly by myself.

              Can you give me an example of the kind of things you ask it to do, and the results you get?

              I genuinely don't understand why people think it's good. I don't want to waste time trying to work out how to ask a computer to write code for me, when I could just be writing code.

              • bdangubic34 minutes ago
                it takes time:

                - to learn (strengths and limitations)

                - to configure (memory, slash commands, subagents, mcp, skills)

                - to iterate and adjust everything

                - …

                - …

                I spend a minimum of 1hr per day on this alone. The rewards are worth it.
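
                To make the "configure" bullet concrete, here is a minimal sketch of what two of those pieces look like in Claude Code; the file contents are illustrative, not canonical. A project-level CLAUDE.md is read at session start as standing instructions ("memory"), and any markdown file under .claude/commands/ becomes a custom slash command you can invoke in-session:

                ```markdown
                <!-- CLAUDE.md — project "memory" file, loaded at session start -->
                # Project conventions
                - Go 1.22, prefer the standard library
                - Run `go test ./...` before declaring a task done

                <!-- .claude/commands/review.md — invoked in-session as /review -->
                Review the currently staged diff. Flag error-handling gaps,
                naming problems, and missing tests. Do not rewrite code.
                ```

                Subagents, MCP servers, and skills are configured separately; the point is that each piece is a small file you iterate on, which is where the daily hour goes.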

  • 3rodents5 hours ago
    https://notbyai.fyi/

    “Artificial Intelligence (AI) is trained using human-created content. If humans stop producing new content and rely solely on AI, online content across the world may become repetitive and stagnant.

    If your content is not AI-generated, add the badge to your work.”