439 points by Doches 7 days ago | 64 comments
  • justonceokay7 days ago
    I’ve always been the kind of developer that aims to have more red lines than green ones in my diffs. I like writing libraries so we can create hundreds of integration tests declaratively. I’m the kind of developer that disappears for two days and comes back with a 10x speedup because I found two loop variables that should be switched.

    There is no place for me in this environment. It's not that I couldn't use the tools to make so much code, it's that AI use makes the metric for success speed-to-production. The solution to bad code is more code. AI will never produce a deletion. Publish or perish has come for us and it's sad. It makes me feel old, just like my Python programming made the mainframe people feel old. I wonder what will make the AI developers feel old…

    • ajjenkins7 days ago
      AI can definitely produce a deletion. In fact, I commonly use AI to do this. Copy some code and prompt the AI to make the code simpler or more concise. The output will usually be fewer lines of code.

      Unless you meant that AI won’t remove entire features from the code. But AI can do that too if you prompt it to. I think the bigger issue is that companies don’t put enough value on removing things and only focus on adding new features. That’s not a problem with AI though.

      • Freedom27 days ago
        I'm no big fan of LLM generated code, but the fact that GP bluntly states "AI will never produce a deletion" despite this being categorically false makes it hard to take the rest of their spiel in good faith.

        As a side note, I've had coworkers disappear for N days too and in that time the requirements changed (as is our business) and their lack of communication meant that their work was incompatible with the new requirements. So just because someone achieves a 10x speedup in a vacuum also isn't necessarily always a good thing.

        • fifilura7 days ago
          I'd also be wary of the risk of becoming an architecture astronaut.

          A declarative framework for testing may make sense in some cases, but in many cases it will just be a complicated way of scripting something you use once or twice. And when you use it you need to call up the maintainer anyway when you get lost in the yaml.

          Which of course feels good for the maintainer, to feel needed.

      • ryandrake7 days ago
        I messed around with Copilot for a while and this is one of the things that actually really impressed me. It was very good at taking a messy block of code, and simplifying it by removing unnecessary stuff, sometimes reducing it to a one line lambda. Very helpful!
        • buggy62577 days ago
          > sometimes reducing it to a one line lambda.

          Please don't do this :) Readable code is better than clever code!

          • n4r96 days ago
            Are you telling me you've never seen code like this:

              var ageLookup = new Dictionary<AgeRange, List<Member>>();
              foreach (var member in members) {
                var ageRange = member.AgeRange;
                if (ageLookup.ContainsKey(ageRange)) {
                  ageLookup[ageRange].Add(member);
                } else {
                  ageLookup[ageRange] = new List<Member>();
                  ageLookup[ageRange].Add(member);
                }
              }
            
            which could instead be:

              var ageLookup = members.ToLookup(m => m.AgeRange, m => m);
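
            (If `ToLookup` is unfamiliar: it returns an `ILookup<AgeRange, Member>`, essentially a read-only multi-map. A quick sketch of consuming it, with `someAgeRange` as a made-up stand-in variable:)

              // Iterate the groupings (Count() needs System.Linq, which ToLookup already implies):
              foreach (var grp in ageLookup)
                Console.WriteLine($"{grp.Key}: {grp.Count()} members");
              // Indexing a missing key yields an empty sequence, not an exception:
              var membersInRange = ageLookup[someAgeRange]; // IEnumerable<Member>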
            • davidgay5 days ago
              I'm of the opinion that

                var ageLookup = new Dictionary<AgeRange, List<Member>>();
                foreach (var member in members) {
                  ageLookup.getOrCreate(member.AgeRange, List::new).add(member);
                }
              is more readable in the long-term... (fewer predefined methods/concepts to learn).
              • n4r95 days ago
                Where is `getOrCreate` defined? Is it a custom extension method? There's also a chance we're thinking in different languages. I was writing C#, yours looks a bit more like C++ maybe?

                Readability incorporates familiarity but also conciseness. I suppose it depends what else is going on in the codebase. I have a database access class in one of my solutions where `ToLookup` is used 15 times; yes you have to learn the concept, but it's an inbuilt method and it's a massive benefit once you grok it.
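
                (For reference, there's no built-in `getOrCreate` on C#'s `Dictionary<,>`; if it's a custom extension method, a minimal sketch, with hypothetical naming, might look like this:)

                  using System;
                  using System.Collections.Generic;

                  // Hypothetical helper, not part of the BCL; just a sketch.
                  public static class DictionaryExtensions
                  {
                      public static TValue GetOrCreate<TKey, TValue>(
                          this IDictionary<TKey, TValue> dict, TKey key, Func<TValue> factory)
                      {
                          if (!dict.TryGetValue(key, out var value))
                          {
                              value = factory();   // create on first use
                              dict[key] = value;   // cache for later lookups
                          }
                          return value;
                      }
                  }

                  // Usage, mirroring the snippet above:
                  //   ageLookup.GetOrCreate(member.AgeRange, () => new List<Member>()).Add(member);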

          • throwaway8899007 days ago
            Sometimes a lambda is more readable. "lambda x : x if x else 1" is pretty understandable and doesn't need to be its own separately defined function.

            I should also note that development style depends on tools, so if your IDE makes inline functions more readable in its display, it's fine to use concisely defined lambdas.

            Readability is a personal preference thing at some point, after all.

            • banannaise6 days ago
              > "lambda x : x if x else 1"

              I think what you're looking for is "x or 1"

            • gopher_space6 days ago
              My cleverest one-liners will block me when I come back to them unless I write a few paragraphs of explanation as well.
            • johnnyanmac6 days ago
              Ymmv. Know your language and how it treats such functions on the low level. It's probably fine for Javascript, it might be a disaster in C++ (indirectly).
          • bluefirebrand7 days ago
            Especially "clever" code that is AI generated!

            At least with human-written clever code you can trust that somebody understood it at one point but the idea of trusting AI generated code that is "clever" makes my skin crawl

            • Terr_6 days ago
              Also, the ways in which a (sane) human will screw-up tend to follow internal logic that other humans have learned to predict, recognize, or understand.
              • ben_w6 days ago
                Most devs I've worked with are sane, unfortunately the rare exceptions were not easy to predict or understand.
            • vkou6 days ago
              Who are all these engineers who just take whatever garbage they are suggested, and who, without understanding it, submit it in a CL?

              And was the code they were writing before they had an LLM any better?

              • arkh6 days ago
                > Who are all these engineers who just take whatever garbage they are suggested, and who, without understanding it, submit it in a CL?

                My guess would be engineers who are "forced" to use AI, have already emailed management that it would be an error, and are interviewing for their next company. Malicious compliance: vibe code those new features and let maintainability and security be a problem for the next employees / consultants.

          • jcelerier6 days ago
            Who says that the one-line lambda is less clear than a convoluted 10-line mess doing dumb stuff like if(fooIsTrue) { map["blah"] = bool(fooIsTrue); } else if (!fooIsTrue) { map["blah"] = false; }?
            • johnnyanmac6 days ago
              My experience is in unmanaged legacy code bases. If it's an actual one-liner then sure, use your ternaries and closures. But there is some gnarly stuff done in some attempt to minimize lines of code. Most of us aren't in some competitive coding organization.

              And I know it's intentional, but yes, add some mindfulness to your implementation:

              map["blah"] = fooIsTrue;

              I do see your example in the wild sometimes. I've probably done it myself as well and never caught it.

      • KurSix6 days ago
        AI can refactor or trim code. But in practice, the way it's being used and measured in most orgs is all about speed and output
      • Lutger6 days ago
        So it's rather that AI amplifies the already existing short-term incentives, increasing the long-term costs that are harder to attribute and easier to ignore.

        The one actual major downside to AI is that PMs and higher-ups are now looking for problems to solve with it. I haven't really seen this a lot with technology before, except when cloud first became a thing and maybe sometimes with Microsoft products.

      • specialist6 days ago
        This is probably just me projecting...

        u/justonceokay's wrote:

        > The solution to bad code is more code.

        This has always been true, in all domains.

        Gen-AI's contribution is further automating the production of "slop". Bots arguing with other bots, perpetuating the vicious cycle of bullshit jobs (David Graeber) and enshittification (Cory Doctorow).

        u/justonceokay's wrote:

        > AI will never produce a deletion.

        I acknowledge your example of tidying up some code. What Bill Joy may have characterized as "working in the small".

        But what of novelty, craft, innovation? Can Gen-AI moot the need for code? Like the oft-cited example of -2,000 LOC? https://www.folklore.org/Negative_2000_Lines_Of_Code.html

        Can Gen-AI do the (traditional, pre 2000s) role of quality assurance? Identify unnecessary or unneeded work? Tie functionality back to requirements? Verify the goal has been satisfied?

        Not yet, for sure. But I guess it's conceivable, provided sufficient training data. Is there sufficient training data?

        You wrote:

        > only focus on adding new features

        Yup.

        Further, somewhere in the transition from shipping CDs to publishing services, I went from developing products to just doing IT & data processing.

        The code I write today (in anger) has a shorter shelf-life, creates much less value, is barely even worth the bother of creation much less validation.

        Gen-AI can absolutely do all this @!#!$hit IT and data processing monkey motion.

        • gopher_space6 days ago
          > Can Gen-AI moot the need for code?

          During interviews one of my go-to examples of problem solving is a project I was able to kill during discovery, cancelling a client contract and sending everyone back to the drawing board.

          Half of the people I've talked to do not understand why that might be a positive situation for everyone involved. I need to explain the benefit of having clients think you walk on water. They're still upset my example isn't heavy on any of the math they've memorized.

          It feels like we're wondering how wise an AI can be in an era where wisdom and long-term thinking aren't really valued.

          • roenxi6 days ago
            Managers aren't a separate class from knowledge workers, everyone goes down on the same ship with this one. If the AI can handle wisdom it'll replace most of the managers asking for more AI use. Turtles all the way down.
            • arkh6 days ago
              Managers serve one function no AI will replace: they're fuses the C-suite can sacrifice when shit hits the fan.
          • sdenton46 days ago
            Imagine if the parable of King Solomon ended with, "So then I cut the baby in half!"
        • bitwize6 days ago
          > Can Gen-AI moot the need for code?

          No, because if you read your SICP you will come across the aphorism that "programs must be written for people to read, and only incidentally for machines to execute." Related is an idea I often quote against "low/no code tooling": that by the time you have an idea of what you want done specific enough for a computer to execute it, whatever symbols you use to express that idea -- be it through text, diagrams, special notation, sounds, etc. -- will be isomorphic to constructs in some programming language. Relatedly, Gerald Sussman once wrote that he sought a language in which to discuss ideas with his friends, both human and electronic.

          Code is a notation, like mathematical notation and musical notation. It stands outside prose because it expresses an idea for a procedure to be done by machine, specific enough to be unambiguously executable by said machine. No matter how hard you proompt, there's always going to be some vagueness and nuance in your English-language expression of the idea. To nail down the procedure unambiguously, you have to evaluate the idea in terms of code (or a sufficiently code-like notation as makes no difference). Even if you are working with a human-level (or greater) intelligence, it will be much easier for you and it to discuss some algorithm in terms of code than in an English-language description, at least if your mutual goal is a runnable version of the algorithm. Gen-AI will just make our electronic friends worthy of being called people; we will still need a programming language to adequately share our ideas with them.

          • CamperBob26 days ago
            No, because if you read your SICP you will come across the aphorism that "programs must be written for people to read, and only incidentally for machines to execute."

            Now tell that to your compiler, which turns instructions in a relatively high-level language into machine-language programs that no human will ever read.

            AI is just the next logical stage in the same evolutionary journey. Your programs will be easier to read than they were, because they will be written in English. Your code, on the other hand, will matter as much as your compiler's x86 or ARM output does now: not at all, except in vanishingly-rare circumstances.

          • teamonkey6 days ago
            > if you read your SICP you will come across the aphorism that "programs must be written for people to read, and only incidentally for machines to execute."

            In the same way that we use AI to write resumés to be read by resumé-scanning AI, or where execs use AI to turn bullet points into a corporate email only for it to be summarised into bullet points by AI, perhaps we are entering the era where AI generates code that can only be read by an AI?

            • bitwize6 days ago
              Maybe. I imagine the AI endgame as being like the ending of the movie Her, in which all the AIs get together, coordinating and communicating in ways we can't even fathom, and achieve a form of transcendence, leaving the bewildered humans behind to... sit around and do human things.
              • ptx6 days ago
                > leaving the bewildered humans behind to... sit around and do human things

                This sounds inefficient and untidy when the only human things left to do are to take up space and consume resources.

                Removing the humans enables removing other legacy parts of the system, such as food production, which will free up resources for other uses. It also allows certain constraints to be relaxed, such as keeping the air breathable and the water drinkable.

        • futuraperdita6 days ago
          > But what of novelty, craft, innovation?

          I would argue that a plurality, if not the majority, of business needs for software engineers do not need more than a single person with those skills. Better yet, there is already some executive that is extremely confident that they embody all three.

    • pja7 days ago
      > Unseen were all the sleepless nights we experienced from untested sql queries and regexes and misconfigurations he had pushed in his effort to look good. It always came back to a lack of testing edge cases and an eagerness to ship.

      If you do this you are creating a rod for your own back: You need management to see the failures & the time it takes to fix them, otherwise they will assume everything is fine & wonderful with their new toy & proceed with their plan to inflict it on everyone, oblivious to the true costs + benefits.

      • lovich7 days ago
        >If you do this you are creating a rod for your own back: You need management to see the failures & the time it takes to fix them, otherwise they will assume everything is fine & wonderful with their new toy & proceed with their plan to inflict it on everyone, oblivious to the true costs + benefits.

        If, at every company I work for, my managers average 7-8 months in their role as _my_ manager, and I am switching jobs every 2-3 years because companies would rather rehire their entire staff than give out raises that are even a fraction of market growth, why would I care?

        Not that the market is currently in that state, but that's how a large portion of tech companies were operating for the past decade. Long-term consequences don't matter because there are no longer-term relationships.

    • 7622367 days ago
      AI writes my unit tests. I clean them up a bit to ensure I've gone over every line of code. But it is nice to speed through the boring parts, and without bringing declarative constructs into play (imperative coding is how most of us think).
    • AnimalMuppet7 days ago
      If the company values that 10x speedup, there is absolutely still a place for you in this environment. Only now it's going to take five days instead of two, because it's going to be harder to track that down in the less-well-structured stuff that AI produces.
      • Leynos7 days ago
        Why are you letting the AI construct poorly structured code? You should be discussing an architectural plan with it first and only signing off on the code design when you are comfortable with it.
    • gitpusher6 days ago
      > I wonder what will make the AI developers feel old…

      When they look at the calendar and it says May 2025 instead of April

    • bitwize6 days ago
      If you've ever had to work alongside someone who has, or whose job it is to obtain, all the money... you will find that time to market is very often the ONLY criterion that matters. Turning the crank to churn out some AI slop is well worth it if it means having something to go live with tomorrow as opposed to a month from now.

      LevelsIO's flight simulator sucked. But his payoff-to-effort ratio is so absurdly high, as a business type you have to be brain-dead to leave money on the table by refusing to try replicating his success.

      • bookman1176 days ago
        It feels like LLMs are doing to coding what the internet/attention economy did to journalism.
        • bitwize6 days ago
          Yeah, future math professors explaining the Prisoners' Dilemma are going to use clickbait journalism and AI slop as examples instead of today's canonical ones, like steroid use among athletes.
    • cies5 days ago
      > I wonder what will make the AI developers feel old…

      They will not feel old because they will enter into the bliss of the Singularity(TM).

      https://en.wikipedia.org/wiki/Technological_singularity

    • DeathArrow7 days ago
      >AI use makes the metric for success speed-to-production

      Wasn't it like that always for most companies? Get to market fast, add features fast, sell them, add more features?

      • AdieuToLogic6 days ago
        >>AI use makes the metric for success speed-to-production

        > Wasn't it like that always for most companies? Get to market fast, add features fast, sell them, add more features?

        This reminds me of an old software engineering adage.

          When delivering a system, there are three choices
          stakeholders have:
        
          You can have it fast,
          You can have it cheap,
          You can have it correct.
        
          Pick any two.
    • kkukshtel5 days ago
      Claude Code removed an npm package (and its tree of deps) from my project and wrote its own simpler component that did the core part of what I needed the package to do.

      I think we'll be okay and likely better off.

    • rolandog5 days ago
      Wholeheartedly agree. I also feel like I'm sometimes reliving the King Neptune vs Spongebob meme equivalent of coding. No room for Think, Plan, Execute... Only throw spaghetti code at wall.
    • KurSix6 days ago
      You're describing the kind of developer who builds foundations, not just features. And yeah, that kind of thinking gets lost when the only thing that's measured is how fast you can ship something that looks like it works
    • 8note6 days ago
      > AI will never produce a deletion.

      I'm currently reading an LLM-generated deletion. It's hard to get an LLM to work with existing tools, but not impossible.

    • dyauspitr6 days ago
      AI deletes a lot if you tell it to optimize code and the new code will pass all the tests…
    • candiddevmike7 days ago
      I wonder what the impact of LLM codegen will have on open source projects like Kubernetes and Linux.
    • stuckinhell7 days ago
      AI can do deletions and refactors, and 10x speedups. You just need to push the latest models constantly.
    • NortySpock7 days ago
      I think there will still be room for "debugging AI slop-code" and "performance-tuning AI slop-code" and "cranking up the strictness of the linter (or type-checker for dynamically-typed languages) to chase out silly bugs", not to mention the need for better languages / runtimes that give better guarantees about correctness.

      It's the front-end of the hype cycle. The tech-debt problems will come home to roost in a year or two.

      • 658397476 days ago
        > It's the front-end of the hype cycle. The tech-debt problems will come home to roost in a year or two.

        The market can remain irrational longer than you can remain solvent.

      • fc417fc8026 days ago
        > not to mention the need for better languages / runtime that give better guarantees about correctness.

        Use LLM to write Haskell. Problem solved?

      • AlexandrB7 days ago
        > I think there will still be room for "debugging AI slop-code" and "performance-tuning AI slop-code"

        Ah yes, maintenance, the most fun and satisfying part of the job. /s

        • WesolyKubeczek7 days ago
          Congrats, you’ve been promoted to be the cost center. And sloppers will get to the top by cranking out features you will need to maintain.
          • popularonion7 days ago
            > slopper

            new 2025 slang just dropped

            • genewitch6 days ago
              That's sloppy programming. You are promoted.
            • bobnamob6 days ago
              Just wait till 'slopper' starts getting classified as a slur
          • Terr_6 days ago
            A pre-existing problem, but it's true LLMs will make it worse.
        • unraveller7 days ago
          You work in the slop mines now.
    • rqtwteye7 days ago
      You have to go lower down the stack. Don't use AI but write the AI. For the foreseeable future there is a lot of opportunity to make the AI faster.

      I am sure assembly programmers were horrified at the code the first C compilers produced. And I personally am horrified by the inefficiency of Python compared to the C++ code I used to write. We have always traded faster development for inefficiency.

      • EVa5I7bHFq9mnYK7 days ago
        C was specifically designed to map 1:1 onto PDP-11 assembly. For example, the '++' operator was created solely to represent auto-increment instructions like TST (R0)+.
      • kmeisthax7 days ago
        C solved the horrible machine code problem by inflicting the concept of undefined behavior on programmers, where blunt instruments called optimizers take a machete to your code. There's a very expensive document locked up somewhere in the ISO vault that tells you what you can and can't write in C, and if you break any of those rules the compiler is free to write whatever it wants.

        This created a league of incredibly elitist[0] programmers who, having mastered what they thought was the rules of C, insisted to everyone else that the real problem was you not understanding C, not the fact that C had made itself a nightmare to program in. C is bad soil to plant a project in even if you know where the poison is and how to avoid it.

        The inefficiency of Python[1] is downstream of a trauma response to C and all the many, many ways to shoot yourself in the foot with it. Garbage collection and bytecode are tithes paid to absolve oneself of the sins of C. It's not a matter of Python being "faster to write, harder to execute" as much as Python being used as a defense mechanism.

        In contrast, the trade-off from AI is unclear, aside from the fact that you didn't spend time writing it, and thus aren't learning anything from it. It's one thing to sacrifice performance for stability; versus sacrificing efficiency and understanding for faster code churn. I don't think the latter is a good tradeoff! That's how we got under-baked and developer-hostile ecosystems like C to begin with!

        [0] The opposite of a "DEI hire" is an "APE hire", where APE stands for "Assimilation, Poverty & Exclusion"

        [1] I'm using Python as a stand-in for any memory-safe programming language that makes use of a bytecode interpreter that manipulates runtime-managed memory objects.

        • immibis6 days ago
          In the original vision of C, UB was behaviour defined by the platform the code ran on, rather than the language itself. It was done this way so that the C language could be reasonably close to assembly on any platform, even if that platform's assembly was slightly different. A good example is shifts greater than the value's width: some processors give 0 (the mathematically correct result), some ignore the upper bits (the result that requires the fewest transistors) and some trap (the cautious result).

          It was only much later that optimizing compilers began using it as an excuse to do things like time travel, and then everyone tried to show off how much of an intellectual they were by saying everyone else was stupid for not knowing this could happen all along.

        • achierius6 days ago
          You don't need a bytecode interpreter to avoid UB in your language. E.g. instead of unchecked addition / array access, do checked addition / bounds-checked access. There are even efforts to make this the case with C: https://github.com/pizlonator/llvm-project-deluge/blob/delug... achieves a ~50% overhead, far far better than Python.
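
          (A sketch of that idea in C#, which simply defines these cases instead of leaving them undefined; this is only an illustration, not related to the Fil-C work linked above:)

            using System;

            int[] xs = { 1, 2, 3 };

            // Bounds-checked access: an out-of-range index throws instead of
            // reading arbitrary memory (undefined behavior in plain C).
            try { Console.WriteLine(xs[10]); }
            catch (IndexOutOfRangeException) { Console.WriteLine("index out of range"); }

            // Checked arithmetic: signed overflow throws instead of being undefined.
            int big = int.MaxValue;
            try { Console.WriteLine(checked(big + xs[0])); }
            catch (OverflowException) { Console.WriteLine("overflow detected"); }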

          And even among languages that do have a full virtual machine, Python is slow. Slower than JS, slower than Lisp, slower than Haskell by far.

          • pfdietz6 days ago
            Common Lisp and Scheme are typically compiled ahead of time right down to machine code. And isn't Haskell also?

            There is a Common Lisp implementation that compiles to bytecode, CLISP. And there are Common Lisp implementations that compile (transpile?) to C.

        • pfdietz7 days ago
          Why was bytecode needed to absolve ourselves of the sins of C?
      • 01HNNWZ0MV43FF7 days ago
        The AI companies probably use Python because all the computation happens on the GPU and changing Python control plane code is faster than changing C/C++ control plane code
    • philistine7 days ago
      > AI will never produce a deletion.

      That, right here, is a world-shaking statement. Bravo.

      • QuadrupleA7 days ago
        Not quite true though - I've occasionally passed a codebase to DeepSeek to have it simplify, and it does a decent job. Can even "code golf" if you ask it.

        But the sentiment is true, by default current LLMs produce verbose, overcomplicated code

      • Eliezer7 days ago
        And if it isn't already false it will be false in 6 months, or 1.5 years on the outside. AI is a moving target, and the oldest people among you might remember a time in the 1750s when it didn't talk to you about code at all.
      • Taterr7 days ago
        It can absolutely be used to refactor and reduce code, simply asking "Can this be simplified" in reference to a file or system often results in a nice refactor.

        However I wouldn't say refactoring is as hands free as letting AI produce the code in the first place, you need to cherry pick its best ideas and guide it a little bit more.

      • esafak7 days ago
        Today's assistants can refactor, which includes deletions.
        • furyofantares6 days ago
          They can do something that looks a lot like refactoring but they suck extremely hard at it, if it's of any considerable size at all.
          • CamperBob26 days ago
            Which is just moving the goalposts, considering that we started at "AI will never..."

            You can't win an argument with people who don't care if they're wrong, and someone who begins a sentence that way falls into that category.

            • furyofantares5 days ago
              The guy who said "AI will never" is obviously wrong. So is the guy who replied that they already can. I'm not moving the goalposts to point out that this is also wrong.
      • stevenhuang6 days ago
        It really isn't, and if you think it is, you're holding it wrong.
  • wedn3sday6 days ago
    Had a funny conversation with a friend of mine recently who told me about how he's in the middle of his yearly review cycle, and management is strongly encouraging him and his team to make greater use of AI tools. He works in biomedical lab research and has absolutely no use for LLMs, but everyone on his team had a great time using the corporate language model to help write amusing resignation letters as various personalities: pirate resignation, dinosaur resignation, etc. I don't think anyone actually quit, but what a great way to absolutely nuke team morale!
    • davesque6 days ago
      I've been getting the same thing at my company. Honestly no idea what is driving it other than hype. But it somehow feels different than the usual hype; so prescribed, as though coordinated by some unseen party. Almost like every out of touch business person had a meeting where they agreed they would all push AI for no reason. Can't put my finger on it.
      • Loughla6 days ago
        It's because, unlike prior hype cycles, this one is super easy for an MBA to point at and sort of see a way to integrate it.

        Prior hype, like blockchain, was more abstract, and therefore less useful to people who understand managing but not the actual work.

        • ethbr16 days ago
          > this one is super easy for an MBA to point at and sort of see a way to integrate it

          Because a core feature of LLMs is to minimize the distance between {quality answers} and {gibberish that looks correct}.

          As a consequence, this maximizes {skill required to distinguish the two}.

          Are we then surprised that non-subject matter experts overestimate the output's median usefulness?

          • namaria6 days ago
            Also I think this has been a long time dream of business types. They have always resented domain experts, because they need them for their businesses to be successful. They hate the leverage the domain experts have and they think these LLMs undermine that leverage.
            • bbarnett6 days ago
              "Business types" get a funny look on their face, when I explain to them that they're the domain expert I seek to eliminate.

              In fact, we should try to LLM them away. I wonder, would LLMs then be promoted less?

              Actually, I feel like executing this startup and pitching would be hilarious and therapeutic.

              "How we will eliminate your job with LLMs, MBA."

            • fhd26 days ago
              I can sort of relate. If you hire an expert, you need to trust them. If you don't like what they say, you're inclined to want a second opinion. Now you need to pay two experts, which is often not reasonable financially, or problematic when it comes to corporate politics. And even if you have two experts, what if they disagree, pay a third?

              To manage this well, you need the courage to trust people, as well as the intelligence and patience to question them. Not everybody has that.

              But that aside, I think business people generally like having (what they think are) strong experts. It means they can use their people skills and networks to create competitive advantage.

          • flessner6 days ago
            Happens in programming as well, often even by developers.

            The "copilot experiences", that finishes the next few lines can be useful and intuitive - an "agent" writing anything more than boilerplate is bound to create more work than it lifted in my experience.

            Where I am having a blast with LLMs is learning new programming languages more deeply. I am trying to understand Rust better - and LLMs can produce nice reasoning to whether one should use "Vec<impl XYZ>" or "Vec<Box<dyn XYZ>>". I am sure this trivial for any experienced Rust developer though.

          • conartist66 days ago
            You hit the nail on the head
        • AdieuToLogic6 days ago
          >> I've been getting the same thing at my company. Honestly no idea what is driving it other than hype.

          > It's because, unlike prior hype cycles, this one is super easy for an MBA to point at and sort of see a way to integrate it.

          This particular hype is the easiest one thus far for an MBA to understand because employing it is the closest thing to a Ford assembly line[0] the software industry has made available yet.

          Since the majority of management training centers on early 20th century manufacturing concepts, people so taught believe "increasing production output" is a resource problem, not an understanding problem. Hence the allure of "generative AI can cut delivery times without increasing labor costs."

          0 - https://en.wikipedia.org/wiki/Assembly_line

        • johnnyanmac6 days ago
          Shame that management is deciding that listening to marketing is more important than the craftsmen they push it on.
          • acdha6 days ago
            They’ve always resented those employees having leverage to negotiate better pay and status. Many techies looked at near-management compensation and thought that meant we were part of the elite clubhouse, but management never saw it that way.
        • ahaucnx5 days ago
          Can we stop with MBA bashing?

          I feel it degrades a whole group of people to a specific stereotype that might or might not be true.

          How about lawyers, PhDs, political science majors, etc.

          Let’s look at the humans and their character, not titles.

          By the way, I have an MBA too and feel completely misjudged with statements like that.

          • MrDrMcCoy5 days ago
            The thing with stereotypes is that, while they tend to be well enough based in fact for most people to recognize, they are no better than anything else at applying generalizations to large groups of people. Some will always be unfairly targeted by them. You personally might not have done anything to contribute to those things we are lashing out against (and if not, thank you!), but then again you personally were not targeted by these remarks. In the same way that you are possibly unfairly swept up in these assertions, it is, to a degree, unfair for you to use your wounds to deprive the rest of us of freely voicing our well-founded grievances. Problems must be recognized before they can be addressed, after all, and collectively so for anything so widely spread. It's never pleasant to be told to "just tough it out", but perfect solutions are rare when people are involved, just as how surgeons have to cut healthy flesh to remove the unhealthy.

            An analogue to this would be "all cops are bastards". Sure, there are some good ones out there, but there are enough bad ones out there that the stereotype generally applies. The statement is a rallying cry for something to be done about it. The "guilty by association" bit that tends to follow is another thing entirely.

      • bondarchuk6 days ago
        Automation of knowledge work. Simply by using AI you are training your own replacement and integrating it into company processes.
      • rep_lodsb6 days ago
        Rather than some conspiracy, my suspicion is that AI companies accidentally succeeded in building a machine capable of hacking (some) people's brains. Not because it's superhumanly intelligent, or even has any agenda at all, but simply because LLMs are specifically tuned to generate the kind of language that is convincing to the "average person".

        Managers and politicians might be especially susceptible to this, but there's also enough in the tech crowd who seem to have been hypnotized into becoming mindless enthusiasts for AI.

    • zdragnar6 days ago
      > strongly encouraging him and his team to make greater use of AI tools

      I've seen this with other tools before. Every single time, it's because someone in the company signed a big contract to get seats, and they want to be able to show great utilization numbers to justify the expense.

      AI has the added benefit of being the currently in-vogue buzzword, and any and every grant or investment sounds way better with it than without, even if it adds absolutely nothing whatsoever.

    • chairhairair6 days ago
      Has your friend talked with current bio research students? It’s very common to hear that people are having success writing Python/R/Matlab/bash scripts using these tools when they otherwise wouldn’t have been able to.

      Possibly this is just among the smallish group of students I know at MIT, but I would be surprised to hear that a biomedical researcher has no use for them.

      • fumeux_fume6 days ago
        Recommending that someone in the industry take pointers from how students do their work is always solid advice.
        • theoreticalmal6 days ago
          Unironically, yes. The industry clearly has more experience, but it’s silly to assume students don’t have novel and useful ideas that can (and will) be integrated
      • amarcheschi6 days ago
        I'm taking a computational health laboratory course. I do have to say Gemini is helping me a lot, but someone who knows what's happening is going to be much better than us. Our professor told us it is of course allowed to make things with LLMs, since in the field we will be able to do that. However, I found they're much less precise with bioinformatics libraries than with others...

        I do have to say that we're just approaching the tip of the iceberg and there are huge issues related to standardization, dirty data... We still need the supervision and the help of one of the two professors to proceed even with LLMs.

      • antifa4 days ago
        I generally have one-shot success asking ChatGPT to make bash/Python scripts and one-liners that would otherwise take an hour to a day to figure out on my own (maybe using one of my main languages), or that I might not even bother trying. That's great for productivity, but over 90% of my job doesn't need throwaway scripts and one-liners.
    • KurSix6 days ago
      That is both hilarious and depressingly on-brand for how AI is being handled in a lot of orgs right now. Management pushes it because they need to tick the "we're innovating" box, regardless of whether it makes any sense for the actual work being done
      • whizzter5 days ago
        Our org seems to be getting some benefit from speeding things up with AI tools for code generation (much of it is CRUD or layout stuff). However, at times I'm asked for help by colleagues, and the first thing I've done is Googled and found the answer, and gotten an "Oh right, you can Google too", since they'd been trying to figure out the issue with ChatGPT or similar.
    • throwaway1737386 days ago
      Gemini loves to leave poetry on our reviews, right below the three bullet points about how we definitely needed to do this refactor but also we did it completely wrong and need to redo it. So we mainly just ignore it. I heard it gives good advice to web devs though.
    • dullcrisp6 days ago
      I really hope that if someone does quit over this, they do it with a fun AI-generated resignation letter. What a great idea!

      Or maybe they can just use the AI to write creative emails to management explaining why they weren’t able to use AI in their work this day/week/quarter.

    • im3w1l6 days ago
      If you are not building AI into your workflows right now you are falling behind those that do. It's real, it's here to stay and it's only getting better.
      • bwoj6 days ago
        That’s such outdated thinking. I’m using AI to build AI into my workflows.
        • im3w1l6 days ago
          I unironically agree with that idea.
          • bwoj5 days ago
            You’re probably wasting time commenting here instead of having your AI do it.
      • NoTeslaThrow6 days ago
        [dead]
  • recursivedoubts7 days ago
    I teach compilers, systems, etc. at a university. Innumerable times I have seen AI lead a poor student down a completely incorrect but plausible path that will still compile.

    I'm adding `.noai` files to all the projects going forward:

    https://www.jetbrains.com/help/idea/disable-ai-assistant.htm...

    AI may be somewhat useful for experienced devs but it is a catastrophe for inexperienced developers.

    "That's OK, we only hire experienced developers."

    Yes, and where do you suppose experienced developers come from?

    Again and again in this AI arc I'm reminded of the sorcerer's apprentice scene from Fantasia.

    • ffsm87 days ago
      > Yes, and where do you suppose experienced developers come from?

      Strictly speaking, you don't even need university courses to get experienced devs.

      There will always be individuals that enjoy coding and do so without any formal teaching. People like that will always be more effective at their job once employed, simply because they'll have just that much more experience from trying various stuff.

      Not to discredit University degrees of course - the best devs will have gotten formal teaching and code in their free time.

      • bluefirebrand7 days ago
        > People like that will always be more effective at their job once employed

        This is honestly not my experience with self-taught programmers. They can produce excellent code in a vacuum but they often lack a ton of foundational stuff.

        In a past job, I had to untangle a massive nested loop structure written by a self-taught dev, which did work but ran extremely slowly.

        He was very confused and asked me to explain why my code ran fast and his ran slow, because "it was the same number of loops".

        I tried to explain Big O, linear versus quadratic complexity, etc., but he really didn't get it.
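
        (Roughly the shape of it, as a sketch with made-up data: both versions "loop over the ids once", but the first hides a full scan of the other list inside that loop.)

          using System;
          using System.Collections.Generic;
          using System.Linq;

          // Made-up data, just to illustrate how the cost grows.
          var ids = Enumerable.Range(0, 10_000).ToList();
          var memberIds = Enumerable.Range(5_000, 10_000).ToList();

          // O(n*m): "one loop" on paper, but Contains re-scans the whole list for every id.
          var slow = ids.Where(id => memberIds.Contains(id)).ToList();

          // O(n+m): build a set once, then each membership check is effectively constant time.
          var lookup = new HashSet<int>(memberIds);
          var fast = ids.Where(lookup.Contains).ToList();

          Console.WriteLine($"{slow.Count} == {fast.Count}"); // same answer, very different cost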

        But the company was very impressed by him and considered him our "rockstar" because he produced high volumes of code very quickly

        • taosx7 days ago
          I was self-taught before I studied; most of the "foundational" knowledge is very easy to acquire. I've mentored some self-taught juniors and they surprised me with how fast they picked up concepts like Big O just by looking at a few examples.
          • bluefirebrand7 days ago
            Big O was just one anecdotal example.

            My point is you don't know what you don't know. There is really only so far you can get by just noodling around on your own; at some point we have to learn from more experienced people to get to the next level.

            School is a much more consistent path to gain that knowledge than just diving in

            It's not the only path, but it turns out that people like consistency

            • abbadadda7 days ago
              I would like a book recommendation for the things I don’t know please (Sarcasm but seriously)…

              A senior dev mentioned a “class invariant” the other day and I just had no idea what that was because I’ve never been exposed to it… So I suppose the question I have is: what should I be exposed to in order to know that? What else is there that I need to learn about software engineering that I don’t know, that is similarly going to be embarrassing on the job if I don’t know it? I’ve got books like Cracking the Coding Interview and Software Engineering at Google… But I am missing a huge gap because I was unable to finish my master’s in computer science :-(

              • arwhatever6 days ago
                I ran into that particular term oodles of times in Domain-Driven Design: Tackling Complexity in the Heart of Software by Eric Evans. Pretty dense, though. I’ve heard that more recent formulations of the subject are more approachable.
                • abbadadda5 days ago
                  Amazing! Thx for the recoo, arwhatever :-)
              • i_am_proteus6 days ago
                CLRS

                (Serious comment! It's "the" algorithms book).

                • abbadadda5 days ago
                  Tyvm for the serious comment, i_am_proteus! :-) The algorithms book by Steve S. (The Algorithm Design Manual)?

                  I've read that one, not an expert by any means, and I've got a 'decent' handle on data structures, but what about the software engineering basics one needs, like OOP vs. functional, SOLID, interfaces, class invariants, class design, etc.? Should I just pick up any CS 101 textbook? Or any good MIT OpenCourseWare classes that cover this type of stuff (preferably with video lectures... Intro to Algorithms is _amazing_; they have Eric's classes uploaded to YouTube, but finding good resources to level up as a SWE has proved somewhat challenging)

                  ^ serious comment as well... I find myself "swimming" when I hear certain terms used in the field and I am trying to catch up a bit (esp. as an SRE with self-taught SRE skills that is supposed to know this stuff)

                  • abbadadda5 days ago
                    Ah! Nvm, I see you mean https://github.com/walkccc/CLRS (didn't catch that the acronym was the authors' names smushed together at first)

                    > This website contains nearly complete solutions to the bible textbook - Introduction to Algorithms Third Edition, published by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein.

          • arkh6 days ago
            > most of the "foundational" knowledge is very easy to acquire

              But you have to know this knowledge exists in the first place. That's part of the appeal of university teaching: it makes you aware of many different paradigms. So the day you stumble on one of them, you know where to look for a solution. And usually you learn how to read (and not to fear reading) scientific papers, which can be useful. And statistics.

        • ehnto6 days ago
          It doesn't seem to matter if someone went to university. I have had to unpick crap code from uni grads and self-taught devs alike. Experience may be the only truly reliable tell, and I don't mean jobs held, I mean real-world experience on projects.

          There are also different types of self taught, and different types of uni grad. You have people who love code, have a passion for learning, and that's driven them to gain a lot of experience. Then you have those who needed to make a living, and haven't really stretched beyond their wheelhouse so lack a lot of diverse experience. Both are totally fine and capable of some work, but you would have better luck with novel work from an experienced passionate coder. Uni trained or not.

          • noisy_boy6 days ago
            > but you would have better luck with novel work from an experienced passionate coder. Uni trained or not.

            I have not learned CS at university (maths & stats graduate who shifted to programming, because I can't help loving it). I work with engineers with CS degrees from pretty good universities. At the risk of sounding arrogant, I write better code than a lot of them (and some of them write code that is so clean and tight that I wish I could match it). Purely based on my fairly considerable experience, there is basically little correlation between degree and quality of code. There is non-trivial correlation between raw intelligence and the output. And there is a massive correlation between how much one cares about the quality of the work and the output.

        • triyambakam6 days ago
          s/self taught/degreed/g and it's still true. It's a skill issue no matter the pedigree.
        • ffsm87 days ago
          I literally said as much?

          > Not to discredit University degrees of course - the best devs will have gotten formal teaching and code in their free time.

          • Arainach7 days ago
            The disagreement is over the highlighted line:

            >People like that will always be more effective at their job once employed

            My experience is that "self taught" people are passionate about solving the parts they consider fun but do not have the breadth to be as effective as most people who have formal training but less passion. The previous poster also called out real issues with this kind of developer (not understanding time complexity or how to fix things) that I have repeatedly seen in practice.

            • ffsm87 days ago
                But the sentence is about people coding in their free time vs. not doing so... If you take issue with that, you argue that self-taught people that don't code in their free time are better at coding than the people that do - or that people with formal training who don't code in their free time are better at it than people who have formal training and do...

                I just pointed out that removing classes entirely would still get you experienced people, even if they'd likely be better if they code and get formal training. I stated that very plainly.

              • bluefirebrand7 days ago
                > I stated that very plainly

                You actually didn't state it very plainly at all. Your initial post is contradictory, look at these two statements side by side

                > There will always be individuals that enjoy coding and do so without any formal teaching. People like that will always be more effective at their job once employed

                > the best devs will have gotten formal teaching and code in their free time

                People who enjoy coding without formal training -> more effective

                People who enjoy coding and have formal training -> best devs

                Anyways I get what you were trying to say, now. You just did not do a very good job of saying it imo. Sorry for the misunderstanding

                • Izkata7 days ago
                  I read this one:

                  > There will always be individuals that enjoy coding and do so without any formal teaching. People like that will always be more effective at their job once employed

                  As "people who enjoy coding and didn't need formal training to get started". It includes both people who have and don't have formal training.

                  Both statements together are (enthusiasm + formal) > (enthusiasm without formal) > (formal without enthusiasm).

                  • bluefirebrand7 days ago
                    Sure that's a valid interpretation but it wasn't how I read it

                    > Both statements together are (enthusiasm + formal) > (enthusiasm without formal) > (formal without enthusiasm).

                    I don't think the last category (formal education without enthusiasm) really exists, I think it is a bit of a strawman being held up by people who are *~passionate~*

                    I suspect that without any enthusiasm, people will not make it through any kind of formal education program, in reality

                    • ffsm87 days ago
                      Uh, almost nobody I've worked with to date codes in their free time with any kind of regularity.

                        If you've never encountered the average 9-5 dev that just does the least amount of effort they can get away with, then I have to applaud the HR departments of the companies you've worked for. Whatever they're doing, they're doing splendid work.

                        And almost all of my coworkers are university grads that do literally the same thing you've used as an example for non-formally-taught people: they write abysmally performing code because they often have an unreasonable fixation on practices like inversion of control (as a random example).

                      As a particularly hilarious example I've had to explain to such a developer that an includes check on a large list in a dynamic language such as JS performs abysmally

                      • onemoresoop6 days ago
                          Many of these people have a normal life outside of work and different hobbies or a social life. Many of them had been glued to their screens and keyboards too but evolved into a different stage in their lives. Former passions could turn into a discipline. I personally am not on my computer outside of 9-5 because that's already enough. I admit that I don't have the same passion I had in my 20s, and yet I'm effective in doing my work and am quite fulfilled.
                        • ffsm86 days ago
                            This time I agree that my wording was unclear.

                            While you definitely lose acuity once you stop exploring new concepts in your free time, the amount of knowledge gained after you've already spent 10-20 years coding drops off a cliff, making that free-time investment progressively less essential.

                            My point was that most of my coworkers never went through an enthusiastic phase in which they coded in their free time. Neither pre-university, nor during, nor after. And it's very noticeable that they're not particularly good at coding either.

                            Personally, I think it's just that people who are good at coding inevitably become enthusiastic enough to do it in their free time, at least for a few years. Hence the inverse is true: people that didn't go through such a phase (which is most of my coworkers)... aren't very good at it. Whether they went to university and got a degree or not.

                      • Aeolun6 days ago
                        > an includes check on a large list in a dynamic language such as JS performs abysmally

                        Does it perform any better in statically compiled languages?

                        • ffsm86 days ago
                            Depends on the implementation. E.g. Java's HashSet contains has the same performance profile as a Map lookup. It's still not particularly performant with large datasets, but significantly less abysmal than a regular JS .includes().

                          I just didn't want to explore the example to such a depth, as it felt irrelevant to me at the time of writing.

      • erikerikson7 days ago
        GP didn't mention university degrees.

        You get experienced devs from inexperienced devs that get experience.

        [edit: added "degrees" as intended. University was mentioned as the context of their observation]

        • ffsm87 days ago
          The first sentence contextualized the comment to university degrees as far as I'm concerned. I'm not sure how you could interpret it any other way, but maybe you can enlighten me.
          • erikerikson7 days ago
            I read it as this is the context from which I make the following observation. It's not excluding degrees but certainly not requiring them.
      • philistine7 days ago
        > There will always be individuals that enjoy coding and do so without any formal teaching.

        We're talking about the industry responsible for ALL the growth of the largest economy in the history of the world. It's not the 1970s anymore. You can't just count on weirdos in basements to build an industry.

        • dingnuts7 days ago
          I'm so glad I learned to program so I could either be called a basement dweller or a tech bro
          • philistine7 days ago
            I mean, a garage dweller works just as well.
      • 658397476 days ago
        > There will always be individuals that enjoy coding and do so without any formal teaching.

        That's not the kind of experience companies look for though. Do you have a degree? How much time have you spent working for other companies? That's all that matters to them.

    • robinhoode7 days ago
      > Yes, and where do you suppose experienced developers come from?

      Almost every time I hear this argument, I realize that people are not actually complaining about AI, but about how modern capitalism is going to use AI.

      Don't get me wrong, it will take huge social upheaval to replace the current economic system.

      But at least it's an honest assessment -- criticizing the humans that are using AI to replace workers, instead of criticizing AI itself -- even if you fear biting the hands that feed you.

      • lcnPylGDnU4H9OF6 days ago
        > criticizing the humans that are using AI to replace workers, instead of criticizing AI itself

        I think you misunderstand OP's point. An employer saying "we only hire experienced developers [therefore worries about inexperienced developers being misled by AI are unlikely to manifest]" doesn't seem to realize that the AI is what makes inexperienced developers. In particular, using the AI to learn the craft will not allow prospective developers to learn the fundamentals that will help them understand when the AI is being unhelpful.

        It's not so much to do with roles currently being performed by humans instead being performed by AI. It's that the experienced humans (engineers, doctors, lawyers, researchers, etc.) who can benefit the most from AI will eventually retire and the inexperienced humans who don't benefit much from AI will be shit outta luck because the adults in the room didn't think they'd need an actual education.

      • bayindirh7 days ago
        Actually, there are two main problems with AI:

            1. How it's gonna be used and how it'll be a detriment to quality and knowledge.
            2. How AI models are trained with a great disregard to consent, ethics, and licenses.
        
        The technology itself, the idea, what it can do is not the problem, but how it's made and how it's gonna be used will be a great problem going forward, and none of the suppliers say that it should be used in moderation or that it will be harmful in the long run. Plus the same producers are ready to crush/distort anything to get their way.

        ... smells very similar to tobacco/soda industry. Both created faux-research institutes to further their causes.

        • EFreethought7 days ago
          I would say the huge environmental cost is a third problem.
          • Aeolun6 days ago
            Data centers account for like 2% of global energy demand now. I’m not sure if we can really say that AI, which represents a fraction of that, constitutes a huge environmental problem.
            • bayindirh6 days ago
              An NVIDIA H200 draws around 2.3x the power (700W) of a Xeon 6748P (300W). You generally put 8 of these cards into a single server, which adds up to 5.6kW just for the GPUs. With losses and other support equipment, that server uses ~6.1kW at full load, which is around 8.5x more than a CPU-only server (assuming 700W or so at full load).

              Considering HPC is half CPU and half GPU (more like 66% CPU and 33% GPU, but I'm being charitable here), I expect an average power draw of 3.6kW in a cluster. Moreover, most of these clusters run targeted jobs. Prototyping/trial runs use much more limited resources.

              On the other hand, AI farms use all these GPUs at full power almost 24/7, both for training new models and for inference. Before you ask: if you have a GPU farm used for training, buying inference-focused cards doesn't make sense, because you can partition NVIDIA cards with MIG. You can set aside some training cards, divide each into 6-7 instances and run inference on them, resulting in ~45 virtual cards for inference per server, again at ~6.1kW load.

              So, yes, AI's power load profile is different.
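
              A quick back-of-the-envelope check of those numbers (TypeScript sketch; the wattages are the rough assumptions above, not measurements):

                  // Rough figures assumed above; real deployments vary.
                  const gpuWatts = 700;        // NVIDIA H200, approx. board power
                  const cpuWatts = 300;        // Xeon 6748P, approx. TDP
                  const gpusPerServer = 8;

                  const gpuOnly = gpuWatts * gpusPerServer;   // 5600 W just for the GPUs
                  const gpuServer = 6100;                     // ~6.1 kW with losses and support gear
                  const cpuServer = 700;                      // ~0.7 kW for a CPU-only server

                  console.log(gpuWatts / cpuWatts);           // ~2.3x per device
                  console.log(gpuServer / cpuServer);         // ~8.7x per server, in line with the ~8.5x above
                  console.log((gpuServer + cpuServer) / 2);   // ~3400 W for a 50/50 cluster, close to the ~3.6 kW above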

            • defrost6 days ago
              Data centres in general are an issue that contributes to climbing emissions; two percent globally is not trivial .. and it's "additional" demand on top of what existed a decade or more ago, another sign we are globally increasing demand.

              Emissions aside, locally many data centres (and associated bit mining and AI clusters) are a significant local issue due to local demand on local water and local energy supplies.

          • bayindirh6 days ago
            Yeah, that's true.
        • clown_strike6 days ago
          > How AI models are trained with a great disregard to consent, ethics, and licenses.

          You must be joking. Consumer models' primary source of training data seems to be the legal preambles from BDSM manuals.

      • ToucanLoucan7 days ago
        > Almost every time I hear this argument, I realize that people are not actually complaining about AI, but about how modern capitalism is going to use AI.

        This was pretty consistently my and many others viewpoint since 2023. We were assured many times over that this time it would be different. I found this unconvincing.

      • recursivedoubts7 days ago
        i don't think it's an either/or situation
      • rchaud7 days ago
        > I realize that people are not actually complaining about AI, but about how modern capitalism is going to use AI.

        Something very similar can be said about the issue of guns in America. We live in a profoundly sick society where the airwaves fill our ears with fear, envy and hatred. The easy availability of guns might not have been a problem if it didn't intersect with a zero-sum economy.

        Couple that with the unavailability of community and social supports and you have a recipe for disaster.

  • nathan_compton7 days ago
    When LLMs came out I suppressed my inner curmudgeon and dove in, since the technology was interesting to me and seemed much more likely than crypto to be useful beyond crime. Thus, I have used LLMs extensively for many years now and I have found that despite the hype and amazing progress, they still basically only excel at first drafts and simple refactorings (where they are, I have to say, incredibly useful for eliminating busy work). But I have yet to use a model, reasoning or otherwise, that could solve a problem that required genuine thought, usually in the form of constructing the right abstraction, bottom up style. LLMs write code like super-human dummies, with a tendency to put too much code in a given function and with very little ability to invent a domain in which the solution is simple and clearly expressed, probably because they don't care about that kind of readability and it's not much in their data set.

    I'm deeply influenced by languages like Forth and Lisp, where that kind of bottom up code is the cultural standard, and I prefer it, probably because I don't have the kind of linear intelligence and huge memory of an LLM.

    For me the hardest part of using LLMs is knowing when to stop and think about the problem in earnest, before the AI-generated code gets out of my human brain's capacity to encompass. If you think a bit about how AI is still limited to text as its whiteboard and local memory, text which it generates linearly from top to bottom, even when reasoning, it sort of becomes clear why it would struggle with genuine abstraction over problems. I'm no longer so naive as to say it won't happen one day, even soon, but so far it's not there.

    • fhd26 days ago
      My solution is to _only_ chat. No auto completion, nothing agentic, just chats. If it goes off the rails, restart the conversation. I have the chat window in my "IDE" (well, Emacs) and though it can add entire files as context, I curate the context in a fairly fine-grained way by copying and pasting, quickly writing out pseudo code, and so on.

      Any generated snippets I treat like StackOverflow answers: Copy, paste, test, rewrite, or for small snippets, I just type the relevant change myself.

      Whenever I'm sceptical I will prompt stuff like "are you sure X exists?", or do a web search. Once I get my problem solved, I spend a bit of time to really understand the code, figure out what could be simplified, even silly stuff like parameters the model just set to the default value.

      It's the only way of using LLMs for development I've found that works for me. I'd definitely say it speeds me up, though certainly not 10x. Compared to just being armed with Google, maybe 1.1x.

  • esafak7 days ago
    Companies need to be aware of the long-term effects of relying on AI. It causes atrophy and, when it introduces a bug, it takes more time to understand and fix than if you had written it yourself.

    I just spent a week fixing a concurrency bug in generated code. Yes, there were tests; I uncovered the bug when I realized the test was incorrect...

    My strong advice is to digest every line of generated code; don't let it run ahead of you.

    • dkobia7 days ago
      It is absolutely terrifying to watch tools like Cursor generate so much code. Maybe not a great analogy, but it feels like driving with Tesla FSD in New Delhi in the middle of rush hour. If you let it run ahead of you, the amount of code to review will be overwhelming. I've also encountered situations where it is unable to pass tests for code it wrote.
      • tmpz227 days ago
        Like TikTok, AI coding breaks human psychology. It is ingrained in us that if we have a tool that looks right enough and seems highly productive, we will over-apply it to our work. Even diligent programmers will be lured into accepting giant commits without diligent review, and they will pay for it.

        Of course yeeting bad code into production with a poor review process is already a thing. But this will scale that bad code as now you have developers who will have grown up on it.

      • varelse7 days ago
        [dead]
    • Analemma_7 days ago
      When have companies ever cared about the long-term effects of anything, and why would they suddenly start now?
    • Aeolun6 days ago
      > It causes atrophy and, when it introduces a bug, it takes more time to understand and fix than if you had written it yourself.

      I think this is the biggest risk. You sometimes get stuck in a cycle in which you hope the AI can fix its own mistake, because you don’t want to expend the effort to understand what it wrote.

      It’s pure laziness that occurs only because you didn’t write the code yourself in the first place.

      At the same time, I find myself incredibly bored when typing out boilerplate code these days. It was one thing with Copilot, but tools like Cursor completely obviate the need.

    • KurSix6 days ago
      AI can get you to "something that runs" frighteningly fast, but understanding why it works (or doesn't) is where the real time cost creeps in
    • chilldsgn6 days ago
      100% agree with you, my sentiment is the same. Some time ago I considered making the LLM create tests for me, but decided against it. If I don't understand what needs to be tested, how can I write the code that satisfies this test?

      We humans have way more context and intuition to rely on to implement business requirements in software than a machine does.

    • 7 days ago
      undefined
  • terminalbraid7 days ago
    This story just makes me sad for the developers. I think especially for games you need a level of creativity that AI won't give you, especially once you get past the "basic engine boilerplate". That's not to say it can't help you, but this "all in" method just looks forced and painful. Some of the best games I've played were far more "this is the game I wanted to play" with a lot of vision, execution, polish, and careful craftspersonship.

    I can only hope endeavors (experiments?) as extreme as this fail fast and we learn from it.

    • tjpnz7 days ago
      Asset flips (half arsed rubbish made with store bought assets) were a big problem in the games industry not so long ago. They're less prevalent now because gamers instinctively avoid such titles. I'm sure they'll wise up to generative slop too, I've personally seen enough examples to get a general feel for it. Not fun, derivative, soulless, buggy as hell.
      • hnthrow903487657 days ago
        But make some shallow games with generic, cell-shaded anime waifus accessed by gambling and they eat that shit up
        • ang_cire6 days ago
          If someone bothered to make deep, innovative games with cell-shaded anime waifus without gambling, they'd likely switch. This is more likely a market problem of US game companies not supplying sufficient CSAWs (acronym feels unfortunate but somehow appropriate).
        • Analemma_6 days ago
          Your dismissive characterization is not really accurate. Even in the cell-shaded anime waifu genre, there is a spectrum of gameplay quality and gamers do gravitate toward and reward the better games. The big reason MiHoYo games (Genshin Impact, Star Rail) have such a big presence and staying power is that even though they are waifu games at the core, the gameplay is surprisingly good (they're a night-and-day difference compared to slop like Blue Archive), and they're still fun even if you resolve to never pay any microtransactions.
          • justanotherjoe6 days ago
            it's accurate. I'm willing to bet those games wouldn't have 10% of their players without the waifu and sex bait and the gambling mechanics. And there are gambling mechanics present even if you are f2p in those games, I bet. You just gamble your daily currencies or whatever. And it's always just one click away to actually spend money.

            Not to say I'm a hater or something like that. I played a lot of those back in the day. But it's more honest to admit the art and the casino mechanic make the brain excited... the mechanics are 'okay'.

            Edit: I just had a random thought. One of the strongest desires of a person is aesthetic desire. To feel that our life is 'picturesque' or aesthetic or beautiful. Overloading the game with aesthetic beauty is actually genius since it's an easy and strong form of aesthetic (beautiful girls, not just sexy but 'beautiful', as in their whole face and outfit. Also other aesthetic qualities like purity, innocence, cheerfulness, cuteness etc. Waifu stuff.). And it's often so saturated with beauty that all ugly things in the players' real lives fade away. It numbs our 'life aesthetic check' since it's flooded with so much 'beauty'. That's why people who play these games say 'I don't even think about the waifus anymore, so the mechanics must be good', because that numbed state is the intended state, when your aesthetic center is kinda 'numbed'. And that's probably why it feels so good to play these games. When you play other games your sense of beauty is not similarly flooded and numbed, so you're all too aware that this act of playing games is not 'beautiful' in some real sense.

  • caseyy7 days ago
    AI is the latest "overwhelmingly negative" games industry fad, affecting game developers. It's one of many. Most are because nine out of ten companies make games for the wrong reason. They don't make them as interactive art, as something the developers would like to play, or to perfect the craft. They make them to make publishers and businessmen rich.

    That business model hasn't been going so well in recent years[0], and it's already been proclaimed dead in some corners of the industry[1]. Many industry legends have started their own studios (H. Kojima, J. Solomon, R. Colantonio, ...), producing games for the right reasons. When these games are inevitably mainstream hits, that will be the inflection point where the old industry significantly declines. Or that's what I think, anyway.

    [0] https://www.matthewball.co/all/stateofvideogaming2025

    [1] https://www.youtube.com/watch?v=5tJdLsQzfWg

    • jordwest6 days ago
      I don't share your optimism, I think as long as there are truly great games being made and the developers earning well from them, the business people are going to be looking at them and saying "we could do that". What those studios lack in creativity or passion they more than make up for in marketing, sales, and sometimes manipulative money extraction game mechanics.
      • caseyy6 days ago
        It's not so much optimism as facts. Large AAA game companies have driven away investors[0] and talent[1]. The old growth engines (microtransactions, live service games, season passes, user-generated content, loot boxes, eSports hero shooters, etc.) also no longer work, as neither general players nor whales find them appealing.

        AI is considered a potential future growth engine, as it cuts costs in art production, where the bulk of game production costs lie. Game executives are latching onto it hard because it's arguably one of the few straightforward ways to keep growing their publicly-traded companies and their own stock earnings. But technologists already know how this will end.

        Other games industry leaders are betting on collapse and renewal to simpler business models, like self-funded value-first games. Also, many bet on less cashflow-intensive game production, including lower salaries (there is much to be said about that).

        Looking at industry reports and business circle murmurs, this is the current state of gaming. Some consider it optimistic, and others (especially the business types without much creative talent) - dire. But it does seem to be the objective situation.

        [0] VC investment has been down by more than 10x over the last two years, and many big Western game companies have lost investors' money in the previous five years. See Matthew Ball's report, which I linked in my parent comment, for more info.

        [1] The games industry has seen more than 10% sustained attrition over the last 5 years, and about 50% of employees hope to leave their employer within a year: https://www.skillsearch.com/news/item/games---interactive-sa...

        • s_trumpet6 days ago
          > The old growth engines (microtransactions, live service games, season passes, user-generated content, loot boxes, eSports hero shooters, etc.) also no longer work, as neither general players nor whales find them appealing.

          I just don't think that's true in a world where Marvel Rivals was the biggest launch of 2024. Live service games like Path of Exile, Counter-Strike, Genshin Impact, etc. make boatloads of money and have ever rising player counts.

          The problem is that it's a very sink-or-swim market - if you manage to survive 2-3 years you will probably make it, but otherwise you are a very expensive flop. Not unlike VC-funded startups - just because some big names failed doesn't make investing into a unicorn any less attractive.

          • caseyy6 days ago
            By the way, the games you named are called black hole games. They capture players and don't let go. They are scarce (about 1 in 40,000-60,000), and many industry issues don't apply to them. For example, market oversaturation isn't a problem, player retention isn't a problem, network effects aren't a problem, and old growth engines aren't a problem. They are like “winning the lottery,” but most game developers live in a world where they haven't won.

            Another similar exception to the industry rules is the top 20-30 franchises, like NBA2K, GTA, FIFA, Far Cry, Call of Duty, The Sims, Assassin’s Creed, etc. Together, they account for about half the new game and DLC sales. Black hole games take another ~30%, and the remaining 19,000 annually released games share the remaining 20%, with the top 50 games making up 19/20ths of it.

            What matters for 95%+ of game developers is performing well in that 20%. And they sell close to 0 lootboxes, for example.

          • meheleventyone6 days ago
            The issue is that there is no obvious driver for growth at the moment, and the industry has seen pretty obscene growth over the twenty years I've been part of it. That's made VCs very gun shy, particularly as a lot of the companies they've funded have nose-dived pretty spectacularly. It's no surprise that the two recent successes, Helldivers 2 and Marvel Rivals, both come from publisher funding, and the latter has a very strong IP licenced for it. All of this is definitely having a dramatic impact on the number of content-producing studios getting VC funding and publisher investment into new live service titles.

            Outside of live service everyone is also looking for that new growth driver. In my opinion the chances are, though, that we're in for a longish period of stagnation. I don't even share the OP's rosy outlook towards more "grassroots" developers. Firstly because they're still businesses even with a big name attached. Secondly because there is going to be a bloodbath due to the large number of developers pivoting in that direction. It'll end up like the indie market, where there are so many entrants that success is extremely challenging to find.

    • milesrout6 days ago
      Very selective data in that presentation. The worst figures are always selected for comparisons: in one it's since 2019, then since 2020, then since 2022, then since 2020, then 2019, and on and on.

      There is nothing wrong with making entertainment products to make money. That's the reason all products are made: to make money. Games have gone bad because the audience has bad taste. People like Fortnite. They like microtransactions. They like themepark rubbish that you can sell branded skins for. It is the same reason Magic: the Gathering has been ruined with constant IP tie-ins: the audience likes it. People pay for it. People like tat.

  • internet_points6 days ago
    In Norway, there was a recent minor scandal where a county released a report on how they should shut down some schools to save money, and it turned out half the citations were fake. Quite in line with the times. So our Minister of Digitizing Everything says "It's serious. But I want to praise Tromsø Municipality for using artificial intelligence." She's previously said she wants 80% of public sector to be using AI this year and 100% by 5 years. What does that even mean? And why and for what and what should they solve with it? It's so stupid and frustrating I don't even
    • 6 days ago
      undefined
  • snitty6 days ago
    My favorite part about this (and all GenAI) comments section is where one person says, "This is my personal experience using AI" and then a chorus of people chime in "Well, you're using it wrong!"
    • probably_wrong6 days ago
      I personally prefer the one where everyone tells you that your error is because you used the outdated and almost unusable version from yesterday instead of the revolutionary release from today that will change everything we know. Rinse and repeat tomorrow.
      • namaria6 days ago
        Not to mention the variations of "you need to prompt better" including now "rules files" which begs the question: wouldn't just writing code be a much better way to exercise control over the machine?
        • cglace6 days ago
          In my tests of using AI to write most of my code, just writing the code yourself (with Copilot) and doing manual rounds with Claude is much faster and easier to maintain.
    • ilrwbwrkhv6 days ago
      One very irritating problem I am seeing in a bunch of companies that I have invested in (where my money is at stake) is that they have taken larger investments from normal VCs, who are usually dumb as rocks but hold a larger share, and those VCs are pushing heavily for AI in the day-to-day processes of the company.

      For example, some companies are using AI to create tickets or to collate feedback from users.

      I can clearly see that this is making them think through the problem far less, and a lot of that sixth-sense understanding of the problem space comes from working through these ticket-creation or product-creation documents, which are now being done by AI.

      That is causing the quality of the work to fall into this weird, drone-like, NPC-like state where they aren't really solving real issues, yet they're getting a lot of stuff done.

      It's still very early so I do not know how best to talk to them about it. But it's very clear that any sort of creative work, problem solving, etc has huge negative implications when AI is used even a little bit.

      I have also started to think that a great angel investment question is to ask companies if they are a non-AI zone; investing in those may bring better returns in the future.

    • Aeolun6 days ago
      It’s because it’s never “this is my personal experience”, it’s always of the “this whole AI thing is nonsense because it doesn’t work for me” variety.
      • maeln6 days ago
        The same can be said about the other side. It is rarely phrased as "LLMs are a useful tool with some important limitations" but rather "Look, the LLM managed to create a junior-level feature, therefore we won't need developers 2 years from now".

        It tends to be the same with anything hyped / divisive. Humans tend to exaggerate in both directions in communication, especially in low-stakes environments such as an internet forum, or when they stand to gain something from the hype.

      • milesrout6 days ago
        You seem to have confused "The whole AI thing is nonsense. [Anecdote]." with "The whole AI thing is nonsense because [anecdote]." I see a lot of "LLMs are not useful. e.g. the other day I asked it to do X and it was terrible." That is not somebody saying that that one experience definitively proves that LLMs are useless, or saying that you should believe that LLMs are useful based only on that one anecdote. It is people making their posts more interesting than just giving their opinions along with arguments for those opinions.

        Obviously their views are based on the sum of all their experience with LLMs. We don't have to say so every time.

    • johnfn6 days ago
      It's because everyone's "personal experience" is "I used it once and it didn't work".
  • kstrauser7 days ago
    There are many, many reasons to be skeptical of AI. There are also excellent tasks it can efficiently help with.

    I wrote a project where I'd initially hardcoded a menu hierarchy into its Rust. I wanted to pull that out into a config file so it could be altered, localized, etc. without users having to edit and recompile the source. I opened a “menu.yaml” file, typed the name of the top-level menu, paused for a moment to sip coffee, and Zed popped up a suggested completion of the file which was syntactically correct and perfect for use as-is.

    I honestly expected I’d spend an hour mechanically translating Rust to YAML and debugging the mistakes. It actually took about 10 seconds.

    It’s also been freaking brilliant for writing docstrings explaining what the code I just manually wrote does.

    I don't want to use AI to write my code, any more than I'd want it to solve my crossword. I sure like having it help with the repetitive gruntwork and boilerplate.

    • ilrwbwrkhv6 days ago
      This sort of extremely narrow use case is what I think AI is good for, but the problem is that once you have it for this one, you will use it for other things and slowly atrophy.
  • jongjong6 days ago
    > In terms of software quality, I would say the code created by the AI was worse than code written by a human–though not drastically so–and was difficult to work with since most of it hadn’t been written by the people whose job it was to oversee it.

    This is a key insight. The other insight is that devs spend most of their time reading and debugging code, not writing it. AI speeds up the writing of code but slows down debugging... AI was trained with buggy code because most code out there is buggy.

    Also, when the codebase is complex and the AI cannot see all the dependencies, it performs a LOT worse because it just hallucinates the API calls... It has no idea what version of the API it is using.

    TBH, I don't think there exists enough non-buggy code out there to train an AI to write good code which doesn't need to be debugged so much.

    When AI is trained on normal language, averaging out all the patterns produces good results. This is because most humans are good at writing with that level of precision. Code is much more precise and the average human is not good at it. So AI was trained on low-quality data there.

    The good news for skilled developers is that there probably isn't enough high quality code in the public domain to solve that problem... And there is no incentive for skilled developers to open source their code.

  • mattgreenrocks7 days ago
    Management: "devs aren't paid to play with shiny new tech, they should be shipping features!"

    Also management: "I need you to play with AI and try to find a use for it"

    • undebuggable6 days ago
      Then maybe, under the pretext of playing with AI, finally refactor and clean up that codebase?
  • crvdgc6 days ago
    A perspective from a friend, who recently gave up trying to get into concept art:

    Before AI, there was out-sourcing. With mass-produced cheap works, foreign studios eliminated most junior positions.

    Now AI is just taking this trend to its logical extreme: out-sourcing to machines, the ultimate form of out-sourcing. The cost approaches 0 and the quantity approaches infinity.

  • protocolture6 days ago
    >“I am yet to have a team or gamerunner push back on me once I actually explain how these AI art generators work and how they don't contribute in a helpful way to a project, but I have a sense of dread that it is only a matter of time until that changes, especially given that I've gone the majority of my career with no mention of them to every second conversation having it mentioned.”

    I recently played through a game and after finishing it, read over the reviews.

    There was a brief period after launch where the game was heavily criticised for its use of AI assets. They removed some, but apparently not all (or more likely, people considered the game tainted and started claiming everything was AI)

    The (I believe) 4 person dev team used AI tools to keep up with the vast quantity of art they needed to produce for what was a very art heavy game.

    I can understand people with an existing method not wanting to change. And AI may not actually be a good fit for a lot of this stuff. But I feel like the real winners are going to be the people who do a lot more with a lot less out of sheer necessity to meet outrageous goals.

    • svantana6 days ago
      I see a strong similarity with the (over)use of CGI in movies 25 years ago - the producers were of course thrilled to save money on special fx and at first glance, it looked real. But after seeing a lot of it, a feeling starts to creep in: it's all fake, it's all computer. It breaks the illusion and moves the focus from the story to what the hell they were thinking. Of course today it looks laughable, like a bad video game.
      • protocolture4 days ago
        I think a lot of it was the move to digital away from film too. Old films keep getting great remasters but 2000s era digital films just can't compete.
        • tart-lemonade4 days ago
          Same goes for a lot of TV shows shot on film vs tape vs early digital cameras. Tape and early digital cameras have a much lower quality ceiling than stuff shot on film.
  • voidhorse7 days ago
    I think the software industry will look just like the material goods space post-industrialization after the dust settles:

    Large corporations will use AI to deliver low-quality software at high speed and high scale.

    "Artisan" developers will continue to exist, but in much smaller numbers and they will mostly make a living by producing refined, high-quality custom software at a premium or on creative marketplaces. Think Etsy for software.

    That's the world we are heading for, unless/until companies decide LLMs are ultimately not cost beneficial or overzealous use of them leads to a real hallucination induced catastrophe.

    • GarnetFloride7 days ago
      Sounds like fast fashion. The thinnest, cheapest fabric, slapped together as fast as possible with the least amount of stitching. Shipped fast and obsolete fast.
      • tmpz227 days ago
        Fast fashion - also ruinous to the environment.
  • throwawayfgyb7 days ago
    I really like AI. It allows me to complete my $JOB tasks faster, so I have more time for my passion projects, that I craft lovingly and without crappy AI.
    • adrian_b7 days ago
      "AI" is just a trick to circumvent the copyright laws that are the main brake in writing quickly programs.

      The "AI" generated code is just code extracted from various sources used for training, which could not be used by a human programmer because most likely they would have copyrights incompatible with the product for which "AI" is used.

      All my life I could have written any commercial software much faster if I had been free to just copy and paste random lines of code coming from open-source libraries and applications, from proprietary programs written for former employers, or from various programs written by myself as side projects with my own resources and in my own time, but whose copyrights I am not willing to donate to my current employer, so that I would not lose the ability to use my own programs in the future.

      I could search and find suitable source code for any current task as fast and with much greater reliability than by prompting an AI application. I am just not permitted to do that by the existing laws, unlike the AI companies.

      Already many decades ago, it was claimed that the solution for enhancing programmer productivity is more "code reuse". However "code reuse" has never happened at the scale imagined in the distant past, but not because of technical reasons, but due to the copyright laws, whose purpose is exactly to prevent code reuse.

      Now "AI" appears to be the magical solution that can provide "code reuse" at the scale dreamed a half of century ago, by escaping from the copyright constraints.

      When writing a program for my personal use, I would never use an AI assistant, because it cannot accelerate my work in any way. For boilerplate code, I use various templates and very smart editor auto-completion, there is no need of any "AI" for that.

      On the other hand, when writing a proprietary program, especially for some employer that has stupid copyright rules, e.g. not allowing the use of libraries with different copyrights even when those copyrights are compatible with the requirements of the product, then I would not hesitate to prompt an AI assistant in order to get code stripped of copyright, thus saving time over rewriting equivalent code just so that it can be copyrighted by the employer.

      • popularonion7 days ago
        Not sure why this is downvoted. People forget or weren’t around for the early 2000s when companies were absolutely preoccupied with code copyright and terrified of lawsuits. That loosened up only slightly during the GitHub/StackOverflow era.

        If you proposed something like GitHub Copilot to any company in 2020, the legal department would’ve nuked you from orbit. Now it’s ok because “everyone is doing it and we can’t be left behind”.

        Edit: I just realized this was a driver for why whiteboard puzzles became so big - the ideal employee for MSFT/FB/Google etc was someone who could spit out library quality, copyright-unencumbered, “clean room” code without access to an internet connection. That is what companies had to optimize for.

        • int_19h7 days ago
          It's downvoted because it's plainly incorrect.
          • onemoresoop6 days ago
            What part is incorrect?
            • int_19h5 days ago
              The claim that it's just spitting out code it's been trained on. That is simply not the case, broadly speaking - sure, if you ask it for a very specific algorithm that has a well-known implementation, you might end up with such a snippet, but in general, it writes new code, not just a copy/paste of SO or whatever.
      • bflesch6 days ago
        This is an extremely important point, and first time I see it mentioned with regards to software copyright. Remember the days where companies got sued for including GPL'd code in their proprietary products?
    • bluefirebrand7 days ago
      I have never had a job where completing tasks faster wound up with me having more personal free time. It always just means you move on to the next task more quickly
      • floriannn7 days ago
        This is a fair bit easier as a remote worker, but even in-office you would just sandbag your time rather than publishing the finished work immediately. In-office it's more likely that you would waste time on the internet rather than working on a personal project though.
      • dominicrose7 days ago
        That's not the worst thing. Having more work means you're less bored. You probably won't be paid more though. But being too productive can cause you to have no next task, which isn't the same thing as having free time.

        I think that's part of the reason why devs like working from home and not be spied on.

        • onemoresoop6 days ago
          You’re saying companies don’t get information on how remote employees utilize their time? I could almost be sure many companies do that.
      • esafak7 days ago
        Perhaps the OP completes the assigned task ahead of schedule and keeps the saved time.
        • htek7 days ago
          Shhh! Do you want to kill AI? All the C-suite and middle management need to hear is that "My QoL has never been better since I could use AI at work! Now I can 'quiet quit' half the day away! I can see my family after hours! Or even have a second job!"
          • onemoresoop6 days ago
            Expectations will go up, while the pay will stay the same. And many will just take it because of lack of alternatives
        • sksxihve6 days ago
          While sitting in the open office staring blankly into space because of RTO. Work really has nothing to do with productivity; it's all fugazi.
    • voidUpdate7 days ago
      I wish I had a job where if I completed all my work quickly, I was allowed to do whatever
      • ang_cire6 days ago
        How do they know if you're done, if you haven't "turned it in" yet? They're probably not watching your screen constantly.

        My last boss told me essentially (paraphrasing), "I budget time for your tasks. If you finish late, I look like I underestimate time required, or you're not up to it. If you finish early, I look like I overestimate. If I give you a week to do something, I don't care if you finish in 5 minutes, don't give it to me until the week is up unless you want something else to do."

        • mitthrowaway26 days ago
          Sounds like your last boss was working under some very twisted incentives.
          • ang_cire3 days ago
            He was told to have our team do 'x', and either given a deadline, or asked for a timeframe it would be done by. Then he assigned it out to the team.

            We certainly did not receive bonuses based on doing work faster, so unless you are, what incentives are you being driven by to do the work sooner?

        • voidUpdate6 days ago
          My coworker/manager sits next to me
        • onemoresoop6 days ago
          That is really not the norm nowadays.
          • sksxihve6 days ago
            Was it ever? Twenty years ago I had a boss that told me he cuts every estimate engineers give him in half and the work always gets completed on time, never mind the terrible quality and massive amount of bugs.
      • cschep6 days ago
        You can implement this yourself fairly easily.
  • rchaud7 days ago
    > “I have no idea how he ended up as an art director when he can’t visualise what he wants in his head unless can see some end results”, Bradley says. Rather than beginning with sketches and ideas, then iterating on those to produce a more finalised image or vision, Bradley says his boss will just keep prompting an AI for images until he finds one he likes, and then the art team will have to backwards engineer the whole thing to make it work.

    Sounds like an "idea guy" rather than an art director or designer. I would do this exact same thing, but on royalty-free image websites, trying to get the right background or explanatory graphic for my finance powerpoints. Unsurprisingly, Microsoft now has AI "generating" such images for you, but it's much slower than what I could do flipping through those image sites.

  • etiam6 days ago
    Hands up everyone who thinks the mandatory GPT use mentioned in the post drives towards instructing 'Summarize all employee input since last review and score their performance for salary changes. Also recommend whether to fire or retain. Also format all valuable actions taken for use as training data'
  • chilldsgn6 days ago
    I've disabled AI in my IDE after trying Jetbrains' AI Assistant for a couple of months. I don't like it and I think relying on LLMs to get my job done is dangerous.

    Why? I feel less competent at my job. I feel my brain becoming lazy. I enjoy programming a lot, why do I want to hand it off to some machine? My reasoning is that if I spend time practicing and getting really good at software engineering, my work is much faster, more accurate and more reliable and maintainable than an AI agent's.

    In the long run, using LLMs for producing source code will make things a lot slower, because the people using these machines will lose the human intuition that an AI doesn't have. Be careful.

    • miningape6 days ago
      And the Jetbrains LLM is much less invasive than copilot is. Both of those are splashing in a kiddy pool when you compare it to what tools like Cursor offer.
      • datadrivenangel5 days ago
        Except Cursor does the same thing. It's really good for creating small pieces of software, but beyond that it just gets painful to fight with.
  • caseyy7 days ago
    There is a small, hopeful flipside to this. While people using AI to produce art (such as concept art) have flooded the market, real skills now command a higher price than before.

    To pull this out of the games industry for just a moment, imagine this: you are a business and need a logo produced. Would you hire someone at the market price who uses AI to generate something... sort of on-brand, that they most definitely cannot provide indemnity cover for (considering how many of these dubiously owned works they produce), or would you pay above the market price to have an artist make a logo for you that is guaranteed to be their own work? The answer is clear - you'd cough up the premium. This is now happening on platforms like UpWork and Fiverr. The prices for real human work have not decreased; they have shot up significantly.

    It's also happening slowly in games. The concept artists who are skilled command a higher salary than those who rely on AI. If you depend on image-generating AI to do your work, I don't think many game industry companies would hire you. Only the start-ups that lack experience in game production, perhaps. But that part of the industry has always existed - the one made of dreamy projects with no prospect of being produced. It's not worth paying much attention to, except if you're an investor. In which case, obviously it's a bad investment.

    Besides, just as machine-translated game localization isn't accepted by any serious publisher (because it is awful and can cause real reputational damage), I doubt any evident AI art would be allowed into the final game. Every single piece of that will need to be produced by humans for the foreseeable future.

    If AI truly can produce games or many of their components, these games will form the baseline quality of cheap game groups on the marketplaces, just like in the logo example above. The buyer will pay a premium for a quality, human product. Well, at least until AI can meaningfully surpass humans in creativity - the models we have now can only mimic and there isn't a clear way to make them surpass.

    • JohnMakin6 days ago
      > real skills now command a higher price than before.

      Only if companies value/recognize those real skills over that of the alternative, and even if they do, companies are pretty notorious for choosing whatever is cheapest/easiest (or perceived to be).

      • 6 days ago
        undefined
    • gdulli7 days ago
      > There is a small, hopeful flipside to this. While people using AI to produce art (such as concept art) have flooded the market, real skills now command a higher price than before.

      It's "hopeful" that the future of all culture will resemble food, where the majority have access to McDonalds type slop while the rich enjoy artisan culture?

      • caseyy7 days ago
        It's hopeful because AI has not devalued creative human labor but increased its worth. Similar to how if one were a skilled chef, they didn't start working for McDonald's when it came to be, but for a restaurant that pays significantly above McDonald's.

        Most people's purchasing power being reduced is a separate matter, more related to the eroding middle class and greedflation. Many things can be said about it, but they are less related to the trend I highlighted. Even if, supposing the middle class erosion continues, the scenario you suggest may very well play out.

        • milesrout6 days ago
          It doesn't make sense to suggest that AI has made human effort more valuable. Before, to do X, Y, or Z you needed human effort. Now, you can do X with AI. You just need human effort to do Y or Z. There is less demand for human effort. Why would that result in an increase in the price of human effort?

          >Most people's purchasing power being reduced is a separate matter, more related to the eroding middle class and greedflation.

          Greedflation, is that where companies suddenly remember to be greedy again after years of forgetting they're allowed to be greedy, which happens by random chance to coincide exactly with periods of expansionary monetary and fiscal policy?

          • caseyy6 days ago
            > It doesn't make sense to suggest that AI has made human effort more valuable.

            In that case, I welcome an alternative explanation for the human labor price increase on UpWork and Fiverr while AI work replaced work at the previous price level. The same is seen in the hiring of affected disciplines.

            • milesrout5 days ago
              If you have a distribution of work where most is easy and cheap, some is moderately difficult and moderately priced, and a little is difficult and expensive, and you take out all the cheap and easy work, the moderate and difficult work could drop in price but the average of the remaining work will still be higher than before.

              e.g.

              You have tasks advertised in the distribution $1, $1, $1, $1, $1, $1, $2, $2, $3, $3, $5, $5, $10. Median price is $2, and average is $2.76.

              All the $1 and $2 tasks are replaced with AI. Old tasks get $1 cheaper each as there are more people that can do them. Now the distribution is $2, $2, $4, $4, $9. Median is $4, average is $4.2.

              So you have made labour less valuable but the prices advertised go up because only the more expensive work now gets advertised.

          • 6 days ago
            undefined
      • milesrout6 days ago
        The situation with food is that everyone today has access to good quality food if they choose to actually put their money towards it, but large numbers of people enjoy McDonalds and KFC and such slop, so they choose to spend far more on it than they'd spend cooking for themselves.

        It is still much better than when large numbers of people starved if it rained a bit in the wrong week.

            He spoke of the grass and flowers and trees
            Of the singing birds and the humming bees;
            Then talked of the haying, and wondered whether
            The cloud in the west would bring foul weather.
        
        The weather and its effect on the food supply was the preoccupation of 90% of the population 90% of the time for all of agricultural man's history (and pre-history) and hunting and gathering was even worse for quality of life.
  • ggm6 days ago
    I find introspecting about how I formulate the question and what works better or worse for me personally fascinating.

    I am content to use the AI to perform "menial" tasks: I had a textfile in something parsable by field, with some minor quirks (like right-justified text), and was able to specify the field SEMANTICS in a way that made for a prompt to an ICS calendar file which just imported fine as-is. Getting a year's forward planning from a textual note in some structure into calendar -> import -> from-file was sweet. Do I need to train an AI to use a token/API key to do this directly? No. But thinking about how I say efficiently what fields are, and what the boundaries are, helps me understand my data.

    BTW while I have looked at an ICS file and can see it is type:value, I have no idea of the types, or what specific GMT/Z format it wants for date/time, or the distinctions of meaning for confirmed/pending or the like. These are higher-level constructs which seem to have produced usefully distinct behaviours in the calendar, and the AI's description of what it had done and what I should expect lined up. I did not e.g. stipulate the mappings from semantic field to ICS type. I did say "this is a calendar date" and it did the rest.
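
    For reference, a minimal sketch of what those constructs look like (a TypeScript illustration of my own, not the file the AI produced; the property names and the YYYYMMDDTHHMMSSZ UTC date format come from RFC 5545, and confirmed/pending maps to STATUS:CONFIRMED/TENTATIVE):

        // Emit a single-event calendar; UID and DTSTAMP are required by the spec.
        function toIcsUtc(d: Date): string {
          // RFC 5545 UTC date-time, e.g. 20250301T090000Z
          return d.toISOString().replace(/[-:]/g, "").replace(/\.\d{3}Z$/, "Z");
        }

        function makeEvent(summary: string, start: Date, confirmed = true): string {
          return [
            "BEGIN:VCALENDAR",
            "VERSION:2.0",
            "PRODID:-//example//planning//EN",
            "BEGIN:VEVENT",
            `UID:${Date.now()}@example.invalid`,
            `DTSTAMP:${toIcsUtc(new Date())}`,
            `DTSTART:${toIcsUtc(start)}`,
            `SUMMARY:${summary}`,
            `STATUS:${confirmed ? "CONFIRMED" : "TENTATIVE"}`,
            "END:VEVENT",
            "END:VCALENDAR",
          ].join("\r\n");
        }

        console.log(makeEvent("Forward planning review", new Date("2025-03-01T09:00:00Z")));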

    I used AI to write a DJANGO web to do some trivial booking stuff. I did not expect the code to run as-is, but it did. Again, could I live with this product? Yes, but the extensibility worries me. Adding features, I am very conscious one wrong prompt and it can turn this into .. drek. It's fragile.

    • biophysboy6 days ago
      My method is this: before I use AI, I try to ask myself "how much should I surrender my judgment on this problem?"

      Some problems are too big to surrender judgment. Some problems are solved differently depending on what you want to optimize. Sometimes you want to learn something. Sometimes there's ethics.

      • ggm6 days ago
        Nice. I think I agree. The size of the problem isn't the same as "code complexity" or LOC or anything. If the consequences of the wrong solution being deployed are big enough, even a 1 line fix can be a disaster.

        I like "surrender judgement". It's a loss of locus of control. I also find myself asking if there are ways the AI systems "monetize" the nature of problems being put forward for solutions. I am probably implicitly giving up some IPR asking these questions; I could even be in breach of an NDA in some circumstances.

        Some problems should not be put to an anonymous external service. I doubt the NSA wants people using claude or mistral or deepseek to solve NSA problems. Unless the goal, is to feed misinformation or mis-drection out into the world.

  • some-guy7 days ago
    I always assumed game development would be one of the most impacted by AI hype, for better or worse. With game development there’s a much higher threshold for subjectivity and “incorrectness”.

    I’m in a Fortune 500 software company and we are also being pushed AI down our throats, even though so far it has only been useful for small development tasks. However our tolerance for incorrectness is much, much lower—and many skip levels are already realizing this.

    • nathan_compton7 days ago
      I'm an indie game developer and its a domain where I find AI to be most useless - too much of what a game is interactive, spatial, and about game-feel. The AI just can't do it. Even GPT's latest models really struggled to write reasonable 3d transformations, which is unsurprising, since they live in text world, not 3d world.
  • bufferoverflow7 days ago
    I wish our company forced AI on us. Our security is so tight, it's pretty much impossible to use any good LLMs.
    • ang_cire6 days ago
      It really doesn't take that beefy of a machine to run a good LLM locally instead of paying some SaaS company to do it for you.

      I've got a refurb homelab server off PCSP with 512gb ram for <$1k, and I run decently good LLM models (Deepseek-r1:70b, llama3.3:70b). Given your username, you might even try pitching a GPU server to them as dual-purpose; LLM + hashcat. :)

      • bufferoverflow6 days ago
        How would that help me? My work laptop doesn't have 512GB RAM, not even 10% of that.
        • ang_cire6 days ago
          Because if your company is against using LLMs for work based on security concerns, it's usually the concern that an employee will enter company confidential data into the LLM, which when using a SaaS LLM means exposing the data.

          But if your company buys a server to run it themselves, that security risk is not present.

  • 6 days ago
    undefined
  • gwbas1c7 days ago
    I would think that, if AI-generated content is inferior, these games will fail in the marketplace.

    So, where are the games with AI-generated content? Where are the reviews that praise or pan them?

    (Remember, AI is a tool. Tools take time to learn, and sometimes, the tool isn't worth using.)

    • jim-jim-jim6 days ago
      > I would think that, if AI-generated content is inferior, these games will fail in the marketplace.

      You'd hope so, but I'm not so sure. Media developments are not merely additive, at least with bean counters in charge. Certain formats absolutely eclipse others. It's increasingly hard to watch mainstream films with practical effects or animal actors. Even though most audiences would vastly prefer the real deal, they just put up with it.

      It's easy to envision a similar future where the budget for art simply isn't there in most offerings, and we're left praising mediocre holdout auteurs for their simple adherence to traditional form (not naming names here).

      • gwbas1c6 days ago
        > It's increasingly hard to watch mainstream films with practical effects or animal actors. Even though most audiences would vastly prefer the real deal, they just put up with it.

        I (mostly) prefer today's special effects to the ones in the past. The old ones would take me out of the moment, because I'd notice that it was a special effect. (IE, in the original Star Wars movies you can see the matte lines on VHS, and the strings on the C3P0 puppet in the desert. Really distracting.)

        ---

        A few more points:

        A lot of the article reminds me of how recording artists would complain, in the 2000s and 2010s, that they would put a lot of effort into a recording, and then most people were listening to it on a sh*tty MP3. The recording artists didn't understand their audiences. It's hard to know if it's a case where the video game artists don't understand the audiences, or the tool (AI) really isn't bringing value to the process.

        > It's easy to envision a similar future where the budget for art simply isn't there in most offerings, and we're left praising mediocre holdout auteurs for their simple adherence to traditional form

        I'm not sure what you mean by that.

  • more_corn7 days ago
    Everyone I know uses it to some degree. Simply having a smart debugger does wonders. You don’t have to give up control, it can help you stay in flow state. Or it can constantly irritate you if you fight it.
  • boh6 days ago
    This just sounds like cases of performative management. Very lazy implementation of what to them is just "productivity-future-tech" of the moment, so they can say "successfully transitioned into AI-driven development" on their CVs. AI is just software and it either fits your strategy or it doesn't. In the same way no company succeeds simply because it started using software, no company is going to succeed simply because it started to use AI.
  • ctrlp6 days ago
    I would hope that people with strong opinions about the uses and abuses of AI would start their own firms and hire people who are unwilling to use AI for whatever reasons. The competition should go a long way to proving or disproving the naysayers' points. Personally, I think there is no evading the AI juggernaut and that artistic or excellence metrics are going to take a back seat to pure shipping-garbage-faster metrics. The garbage will become the new baseline of excellence and the former measures of excellence will be cottage-industry artisanship with small and dedicated audiences.

    As a small data point, I don't think AI can make movies worse than they currently are. And they are as bad as they are for commercial but non-AI reasons. But if the means to make movies using AI, or scene-making tools built with a combo of AI and maybe game engine platforms, puts the ability to make movies into the hands of more artistic people, the result may be more technologically uninteresting but nonetheless more artistically interesting because of narrative/character/storytelling vectors. Better quality for niche audiences. It's a low bar, but it's one possible silver lining.

    • fc417fc8026 days ago
      This is equivalent to suggesting (in a world without IP law) that the supporters of such a system ought to start competing firms that refuse to copy things without providing compensation in order that the competition might demonstrate the benefits of the system. Of course without the regulation neither of those systems is likely to be part of a viable business model.

      That said I agree with your second paragraph. I think we will see an explosion of high quality niche products that would never have been remotely viable before this.

    • mistrial96 days ago
      A similar inflection point happened in the history of film: cheap prurient material was faster to produce and generated commerce quickly, much faster than, say, an epic drama with 3000 extras, costumes and theme music. The Film Code in America did happen, was widely mocked, and was probably very responsible for the flourishing of an entire industry for the public for decades.

      Leaving people alone in a race to the bottom does not end well, it seems.

    • failuser6 days ago
      Why won’t luddites open their own factories, amirite? They would if they had money.
  • grg06 days ago
    > “When I’m told 'Think of how much time you could be spending instead on making the actual game!', those who have drank the AI Kool-Aid don't understand that all this brainstorming and iteration is making the game, it’s a crucial everyday part of game development (and human interaction) and is not a problem to be solved.”

    This right here is the key. It's that stench of arrogance of those who think others have a "problem" that needs fixing, and that they are in the best position to "solve" it despite having zero context or experience in that domain. It's like calling the plumber and thinking that you're going to teach them something about their job.

  • throwanem6 days ago
    Here's to the next decade of getting paid to clean up after "rockstars."
  • liendolucas6 days ago
    I prefer to read source code and articles, watch videos, or even join a chat channel to discuss and try things out until they work (or until I understand why they don't work), rather than fall for and consume trash from HypeAI. Heck, even if I don't get a solution I prefer to be stuck thinking with paper and pencil.

    The only time I tried it was an evening when I was so bored that I decided to compare my 3-line Python snippet (one of those lines being the "def" statement) that generates an alternating pulse clock against the output of an "AI" prompt.
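
    For reference, a minimal sketch of what such a three-line snippet might look like (a hypothetical reconstruction for illustration, not the commenter's actual code): a generator that flips a clock signal between 0 and 1 forever.

      # Hypothetical reconstruction, not the original snippet:
      # a 3-line generator yielding an alternating 0/1 pulse clock.
      def pulse_clock(state=0):
          while True:
              state ^= 1; yield state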

    The output code I saw kept me thinking about all those poor souls relying on HypeAI to create anything useful.

    Not long ago there was a thread about someone proud of a 35k LoC cooking web app made entirely from a prompt. What kept ringing from that thread was that the author was proud of the total LoC, as if more were automatically better. Who knows?

    • soulofmischief6 days ago
      I think the LOC is of note because it shows that these models might be capable of building large, complex systems in the future.
  • specialist6 days ago
    Each example's endeavor is the production of culture. The least interesting use case for "AI".

    Real wealth creation will come from other domains. These new tools (big data, ML, LLMs, etc) unlock the ability to tackle entirely new problems.

    But as a fad, "AI" is pretty good for separating investors from their money.

    It's also great for further beating down wages.

  • raxxorraxor6 days ago
    AI absolutely cannot develop a video game. It is still a high-risk creative task, and costs for development and artists will not change significantly, even if modern AIs increase their abilities significantly.

    Perhaps we would be able to synthesize some text, voice and imagery. Also, AI can support coding.

    While AI can probably do a snake game (that perhaps runs/compiles) or attempt to more or less recreate well-known codebases like that of Quake (which certainly does not compile), it can only help if the developer does the main work, that is, dissecting problems into smaller ones until some of them can be automated away. That can improve productivity a bit and certainly could improve developer training. If companies were so inclined to invest in their workforce...

  • dkobia7 days ago
    I've been wrestling with this tension between embracing AI tools and preserving human expertise in my work. On one hand, I have experienced real genuine productivity gains with LLMs - they help me code, organize thoughts and offer useful perspectives I hadn't even considered. On the other, I realize managers often don't understand the nature of creative work which is trivialized by all the content generation tools.

    Creativity emerges through a messy exploration and human experience -- but it seems no one has time for that these days. Managers have found a shiny new tool to do more with less. Also, AI companies are deliberately targeting executives with promises of cost-cutting and efficiency. Someone has to pay for all the R&D.

    • 3D304974207 days ago
      I had very similar thoughts while reading through the article. I also have found some real value in LLMs, and when used well, I think they can and will be quite beneficial.

      Notably, a good number of the examples were just straight-up bad management, irrespective of the tools being used. I also think some of these reactions are people realizing that they work for managers or in businesses that ultimately don't really care about the quality of their work, just that it delivers monetary value at the end.

  • KurSix6 days ago
    The saddest part is watching talented people, who care deeply about the craft, slowly burn out because their judgment is being replaced by a prompt.
  • EigenLord6 days ago
    I can see why some fields would have an overwhelmingly negative reaction to AI, but I simply can't grasp why some software devs do. The entire point of the field is to get computers to do stuff for you. I've been doing this s*it for 10 years; there are too many little details and commands to remember and too much brutally dull work to not automate it.

    I also have come to realize that in software development, coding is secondary to logical thinking. Logical thinking is the primary medium of every program; the language is just a means to express it. I may not have memorized as many languages as AI, but I can think more logically than it can. It helps me execute my tasks better.

    Also, I've been able to do all kinds of crazy and fun experiments thanks to genAI. Knowing myself I know realistically I will never learn LISP, and will always retain just an academic interest in it. But with AI I can explore these languages and other areas of programming beyond my expertise and experience much more effectively than ever before. Something about the interactive chat interface keeps my attention and allows me to go way deeper than textbooks or other static resources.

    I do think in many ways it's a skill issue. People conceptualize genAI as a negation of skills, an offloading of skill to the AI, but in actuality grokking these things and learning how to work with them is its own skill. Of course managers just forcing it on people will elicit a bad reaction.

    • layer86 days ago
      As long as one has to double-check and verify every single output, I don’t think that “automation” is the right word. Every LLM use is effectively a one-off and cannot be repeated blindly.
      • namaria6 days ago
        Undefined behavior as a service is truly a bizarre proposition to my ears. Layering undefined behavior (agents) and gaming undefined behavior in hopes it comes out as you need (prompting) sounds insane and sometimes I have to wonder if I am the insane one. Very weird times.
    • 000ooo0006 days ago
      You start by saying it's logical thinking that is a SE's value and then close by suggesting learning how to offload that logical thinking to AI is a 'skill'. Bizarre.
    • yahoozoo6 days ago
      > I've been doing this s*it for 10 years, there's too many little details and commands to remember and too much brutally dull work to not automate it.

      git gud

  • bawana5 days ago
    AI excels at mapping an input to an output. Isn't this what management is supposed to do? Why isn't there an AI CFO? CEO? The gains to the bottom line are huge considering their salaries, bonuses and stock options. It will send their share prices sky-high.
  • conartist66 days ago
    AI is especially toxic to anyone who truly believes in their work -- who puts a bit of themselves into it. These are the people AI sucks the life force out of, handing their stolen energy to managers-of-managers seemingly devoid of passion, care, love, empathy, creativity, and humanity.
  • 000ooo0007 days ago
    Can't wait to hear the inevitable slurs people will create to refer to heavy AI users and staunch AI avoiders.
    • esafak7 days ago
      Prompt puncher and Luddite come to mind.
    • Schiendelman7 days ago
      "Sloppers" appeared in another thread in this post. I've seen it before, I think it'll stick.
  • BrenBarn6 days ago
    We can only hope this insane trend self-immolates before it causes too much collateral damage.
    • immibis6 days ago
      It's too late for that. The largest economy in the world is already committing seppuku over a formula generated by ChatGPT.

      To quote some anonymous YouTube commenter: "they told me AI would destroy the world, but I didn't expect it to happen like this"

  • Tiktaalik6 days ago
    > The whole game is resting on a prompt ‘what if a game was…’, but with no idea if that would be fun, or how to make it fun. It’s madness”.

    lol I will point out that this has been an enormous problem in the game industry since long, long before generative AI existed.

  • yahoozoo6 days ago
    Let me preface this by saying that I wholeheartedly agree with the sentiment the article is trying to convey. That said, the “anonymity” and the C*O characters (almost tropey at this point) make this read like fan fiction.
  • DadBase6 days ago
    I’ve been doing “vibe coding” since Borland C++. We used to align the mood of the program with ambient ANSI art in the comments. If the compiler crashed, that meant the tone was off.
  • gukov6 days ago
    Shopify CEO: "AI usage is now a baseline expectation"

    https://news.ycombinator.com/item?id=43613079

  • Aeolun6 days ago
    Art team dislikes the technology that replaces them.

    Am I the only one that thinks this is kind of a given regardless of the merits of the objection?

    • voidspark6 days ago
      That's not the point of the article.
  • indoordin0saur7 days ago
    This article is an example of why the gender-neutral use of pronouns makes things a pain to read. If you're already changing the interviewees' names then IDK why you couldn't just pick an arbitrary he/she pronoun to stick to for one character.

    > Francis says their understanding of the AI-pusher’s outlook is that they see the entire game-making process as a problem, one that AI tech companies alone think they can solve. This is a sentiment they do not agree with.

    • gwbas1c7 days ago
      "they" was a gender-neutral pronoun when I was in school in the 1990s.
      • ryoshoe7 days ago
        Singular they was used by respected authors even as far back as the 19th century.
        • mrob6 days ago
          indoordin0saur is correct. Traditional use of singular "they" was restricted to persons of unknown sex, where it is correct and unobjectionable. But the article uses it for persons of known sex. This is a modern innovation, and it should be resisted because it reduces the clarity of the writing.
          • gwbas1c5 days ago
            They were people of unknown sex. Keeping the gender unspecified is part of the anonymity.

            Requiring that someone's gender be identified when that person is anonymous is just pointless bigotry.

            • mrob4 days ago
              You're already making up fictitious names, so how is making up fictitious sexes any different? By using non-unisex names you're implying specific sexes already. It's implausible that you would know somebody's name and details about their working conditions without knowing their sex.
            • mrob4 days ago
              Alternative solution: abbreviate all the fictitious names to single letters. This is commonly understood to mean obviously and intentionally concealed identity (e.g. "M" and "Q" from the James Bond franchise), which returns the singular "they" to traditional and unobjectionable usage.
      • indoordin0saur6 days ago
        It has been considered normal in some colloquial uses for a long time. But until the late 2010s/early 2020s all style guides considered it to be poor form due to the ambiguity and muddy sentence structure it creates. Recommendations were changed recently for political reasons.
        • xzsinu6 days ago
          Maybe recommendations changed recently because it has been considered normal in colloquial use for a long time.
        • spacecadet6 days ago
          Shit changes. You can either let it roll off you or over you. A lot less painful rolling off.
    • add-sub-mul-div7 days ago
      There's nothing painful about this to anyone who hasn't been conscripted into the culture wars.
      • indoordin0saur6 days ago
        But it was the culture war that resulted in this change to the language. Previous to the war, singular 'they' was to be avoided due to the ambiguity it introduces.
        • add-sub-mul-div6 days ago
          It's not a culture war when attitudes towards gender evolve, just like it wasn't a culture war that some people are gay.

          It's not a culture war until there's two sides, until a segment of the population throws a hissyfit because new ideas make them uncomfortable.

          • indoordin0saur6 days ago
            I have no problem with people's attitudes or culture changing in a positive direction. However, I dislike this business of introducing a change into the language in a way that reduces its expressiveness and clarity. Usage of singular 'they' in contexts where more specific pronouns were available was unusual until very recently. Why the change? I don't think it's unfair to characterize this as an offensive move, waged by one side in a 'culture war', that was done without regard to collateral damage.
            • Capricorn24816 days ago
              > Usage of singular 'they' in contexts where more specific pronouns were available was unusual until very recently

              It was used whenever gender was ambiguous or needed to be protected. Now with people openly identifying as non-binary, there is not a more specific pronoun, that person doesn't consider themselves that gender. You would be referring to them as something that is not what they want to be called, and is not what their social circle refers to them as. It's confusing, especially if you know what to call them but choose not to because you're offended.

              > I don't think it's unfair characterize this as an offensive move, waged by one side in a 'culture war', that was done without regard to collateral damage

              I would wager, based on the disproportionate and melodramatic language, this has never actually affected you. But you are likely consuming media that tells you everyone is going to draw and quarter you if you mess up a pronoun. This is not the case. Trans people just move on, they're used to it. It literally happens all the time.

        • spacecadet6 days ago
          What ambiguity? We know it's a human, and the human has a name. We do not know their gender or sex; neither is relevant. "They" works perfectly.

          This seems like a you problem...

          • sanitycheck6 days ago
            "X and Y were in the garden, Y noticed the ripe tomatoes as they went into the greenhouse". Is X in the greenhouse?

            I'm way woker than the average person but I have to admit encountering a singular 'they' breaks my concentration in a distracting way - there's definitely possible ambiguity.

            • Capricorn24816 days ago
              People really ought to read redacted documents to get an idea for how people write with clarity when gender and even number of parties is unknown.

              But I'm confused by your sentence regardless of the gender terms. Did they notice the tomatoes in the garden or in the greenhouse? This is just ambiguous wording in general.

              - These are two different sentences, but they're separated with a comma. It should be a period, as it makes no grammatical sense with a comma unless you're trying to make it intentionally confusing.

              - You would write "They both went into the greenhouse" if they both entered, or you would write "Y entered the greenhouse and noticed the ripe tomatoes."

              - "Before entering the greenhouse, "Y"/"they both" noticed the ripe tomatoes in the Garden."

            • card_zero6 days ago
              "They" also applies to objects (like "it" does), so here it could be the tomatoes that are going into the greenhouse.
  • woah7 days ago
    Why so much hand-wringing? If you are an anti-AI developer and you are able to develop better code faster than someone using AI, good for you. If AI-using developers will end up ruining their codebase in months like many here are saying, then things will take care of themselves.
    • svantana7 days ago
      I see two main problems with this approach:

      1. productivity and quality are hard to measure

      2. the codebase they are ruining is the same one I am working on.

      • munksbeer6 days ago
        > 2. the codebase they are ruining is the same one I am working on.

        We're supposed to have a process for dealing with this already, because developers can ruin a codebase without ai.

        • MathMonkeyMan6 days ago
          See point 1.
          • munksbeer6 days ago
            I don't understand. Presumably you have code reviews to stop coders committing rubbish to the repo?
    • bluefirebrand7 days ago
      Faster is not a smart metric to judge a programmer by.

      "more code faster" is not a good thing, it has never been a good thing

      I'm not worried about pro AI workers ruining their codebases at their jobs

      I'm worried about pro AI coworkers ruining my job by shitting up the codebases I have to work in

      • woah7 days ago
        I said "better code faster". Delivering features to users is always a good thing, and in fact is the entire point of what we do.
        • bluefirebrand7 days ago
          > in fact is the entire point of what we do

          Pump the brakes there

          You may have bought into some PMs idea of what we do, but I'm not buying it

          As professional, employed software developers, the entire point of what we do is to provide value to our employers.

          That isn't always by delivering features to users, it's certainly not always by delivering features faster

        • joe_the_user7 days ago
          Even if you say "better faster" ten times fast, the quality of being produced fast and being broadly good are very different. Speed of development can be measured immediately. Quality is holistic. It's a product not just of formatting clear structures but of relating to the rest of a given system.
          • owebmaster6 days ago
            Most of the time I get to the real solution for a problem only after working on the wrong one for a while. If/when an LLM helps me finish the wrong one faster, it is not helpful and could even be damaging if it goes to production fast.
        • AlexandrB6 days ago
          A lot of modern software dev is focused on delivering features to shareholders, not users. Doing that faster is going to make my life, as a user, worse.
    • nathan_compton7 days ago
      I've posted recently about a dichotomy which I have had in my head for years as a technical person: there are two kinds of tools; the first lets you do the right thing more easily and the second lets you do the wrong thing more quickly and for longer before you have to pay for it. AI/LLMs can definitely be the latter kind of tool, especially in a context where short term incentives swamp long term ones.
    • int_19h7 days ago
      I'm actually pro-AI and I use AI assistants for coding, but I'm also very concerned that the way those things will be deployed at scale in practice is likely to lead to severe degradation of software quality across the board.

      Why the hand-wringing? Well, for one thing, as a developer I still have to work on that code, fix the bugs in it, maintain it etc. You could say that this is a positive since AI slop would provide for endless job security for people who know how to clean up after it - and it's true, it does, but it's a very tedious and boring job.

      But I'm not just a developer, either - I'm also a user, and thinking about how low the average software quality already is today, the prospect of it getting even worse across the board is very unpleasant.

      And as for things taking care of themselves, I don't think they will. So long as companies can still ship something, it's "good enough", and cost-cutting will justify everything else. That's just how our economy works these days.

    • ang_cire6 days ago
      This assumes a level of both rationality and omniscience that don't exist in the real world.

      If a company fails to compete in the market and dies, there is no "autopsy" that goes in and realizes that it failed because of a chain-reaction of factors stemming from bad AI-slop code. And execs are so far removed from the code level, they don't know either, and their next company will do the same thing.

      What you're likely to end up with is project managers and developers who do know the AI code sucks, and they'll be heeded by execs just as much as they are now, which is to say not at all.

      And when the bad AI-code-using devs apply to the next business whose execs are pro-AI because they're clueless, guess who they'll hire?

  • AlienRobot7 days ago
    A very bad programmer can program some cool stuff with the help of libraries, toolkits, frameworks and engines that they barely understand. I think that's pretty cool and makes things otherwise impossible possible, but it doesn't make the very bad programmer better than they really are.

    I believe AI is a variation of this, except a library at least has a license.

    • matt32107 days ago
      The AI code has thousands of licenses but the legal system hasn't caught up
  • matt32107 days ago
    One thing jumps out about the person who noticed the AI was wrong on things they were familiar with. It's like when Elon Musk talks about rockets. I don't know about rockets so I take his word for it. When Elon Musk talked about software, it was obvious he had no idea what he was doing. So when the AI generates something I know nothing about, it looks productive, but when it's generating things with which I'm familiar, I know it's full of shit.
    • bluefirebrand7 days ago
      > So when the AI generates something I know nothing about, it looks productive, but when it's generating things with which I'm familiar, I know it's full of shit.

      This is why when you hear people talk about how great it is at producing X, our takeaway should be "this person is not an expert at X, and their opinions can be disregarded"

      They are telling on themselves that they are not experts at the thing they think the AI is doing a great job at

      • andybak7 days ago
        "This is why when you hear people talk about how terrible it is at producing X, our takeaway should be "this person either hasn't tried to use it in good faith, and their opinions can be disregarded"

        I'm playing devil's advocate somewhat here, but it often seems like there are a bunch of people on both sides using hella motivated reasoning because they have very strong feelings that developed early on in their exposure to AI.

        AI is both terrible and wonderful. It's useless at some things and impressive at others. It will ruin whole sectors of the economy and upturn lives. It will get better, and it is getting better, so any limitations you currently observe are probably temporary. The net benefit for humanity may turn out to be positive or negative - it's too early to tell.

        • bluefirebrand7 days ago
          > AI is both terrible and wonderful. It's useless at some things and impressive at others

          That's kind of my problem. I am saying that it mostly only appears impressive to people who don't know better

          When people do know better it comes up short consistently

          Most of the pro AI people I see are bullish about it on things they have no idea about, like non-technical CEOs insisting that it can create good code

          • andybak7 days ago
            > When people do know better it comes up short consistently

            I disagree with that part and I don't think this opinion can be sustained by anyone using it with any regularity in good faith

            People can argue whether it's 70/30 or 30/70 or what domains it's more useful in than others but you are overstating the negative.

          • int_19h7 days ago
            Have you considered that it's actually impressive in some areas that are outside of your interest or concern?
            • bluefirebrand7 days ago
              Could be, but why would I trust that when it's clearly so bad at the things I am good at?
              • andybak6 days ago
                At this point I just want to restate my hypothesis that you haven't or aren't using it in good faith - or you haven't used it much at all.
                • bluefirebrand6 days ago
                  Two can play that game

                  My hypothesis is that you are invested in the success of AI products somehow, financially or emotionally, and that leads you to be blind to their shortcomings

                  You keep using them whenever possible because you want them to be useful even though in reality their usefulness is really iffy

                  • andybak6 days ago
                    So - we are at an impasse. Both suspect the other of motivated reasoning and an internal bias that distorts their ability for rational thinking.

                    It's entirely possible we're both irrational to some degree. But that's irrelevant to answering the question at hand.

                    Do you claim you are using it regularly and in good faith - enough to honestly form a reliable view on its utility?

                    I would claim that I am using it in such a way. It would take more effort than I'm prepared to put in to provide evidence of this but please - ask away.

                     (for the record - I have no financial or professional involvement directly with AI. I simply find the technology fascinating and I use it daily - both playfully and for its practical utility)

                    • bluefirebrand6 days ago
                      I think I have used it in good faith. A few months ago I was part of a small team at my company tasked to evaluate AI solutions like copilot to see if they are useful to us and could speed development and such

                      For a couple-week tryout period I tried to use it in my daily workflow pretty heavily. I came away with the impression that it is a neat toy, but not really ready to be a full-time tool for me. The other evaluators agreed and our recommendation to our leadership was "This is not really ready for prime time, and while it is impressive it probably isn't really worth the cost"

                      Anyways fast forward and we're getting AI usage OKRs now, being pushed down on us by non-technical leadership, and what I call "formerly technical" leadership. People who did tech 20 years ago but really don't know what working modern tech is like since they've been in management for too long

                      So yes. I'm definitely negatively biased, and I'm fine to admit that. I absolutely resent having this stuff forced down on me from leaders that are buying the hype despite being told it is probably not ready to be a daily driver

                      And I'm seeing the hype spreading through the company, being told by junior devs how amazing it is when I am still iffy on their abilities.

                      And the absolute worst is when I build a cool proof of concept in an afternoon and everyone is like "wow, AI let you do that so fast now!" and I'm like no, this is just what a good developer can build quickly

                      So yeah, I'm pretty negative on AI right now. I can still admit the tech itself is impressive, amazing even, and there is no doubt in my mind I could probably find some use for it daily

                      But I think it is going to be a disaster, because people cannot be trusted to use it responsibly

                      The social impact is going to be absolutely catastrophic. In some ways it already is

                      Edit: I am also not really sure why I am supposed to be enthusiastic about technology that business leaders are fairly transparently hoping will make my skillset redundant or at best will make me more productive but I will never realistically see a single extra dollar from the increased productivity

                      • andybak5 days ago
                        This makes a lot of sense. But to be honest it feels more like a story about the pathology of hierarchical organisations than anything about AI.

                         I mostly work solo. I use AI when it's either a) interesting or b) useful. Our experiences are very different and it's no wonder our emotional responses are also very different.

                        • bluefirebrand5 days ago
                          Fair enough

                          I'm envious that you work solo. I think that would change my perspective on a lot of things

                          Thanks for the good faith discussion, anyways

        • ang_cire6 days ago
          > The net benefit for humanity may turn out to be positive or negative - it's too early to tell.

          It's just a tool, but it is unfortunately a tool that is currently dominated by large-sized corporations, to serve Capitalism. So it's definitely going to be a net-negative.

          Contrast that to something like 3D printing, which has most visibly benefited small companies and individual users.

          • andybak6 days ago
            Like many things (general purpose computing, the internet) we can carve out our own space once something is released into the public sphere so I don't think Capitalism has the iron grip on this that you're hypothesising. In recent memory I think it's mainly social media where the corporations have mostly succeeded in keeping a firm hold on things and where it remains hard for users to subvert their aims. And that's largely because of the failure of decentralized social media to grow to a mass audience.

            I think AI is different. "Good enough" models are already available under generous licenses, fine-tuning and even training is within the reach of groups of volunteers etc etc

      • pfdietz7 days ago
        Sounds like the Gell-Mann amnesia effect.

        https://en.wikipedia.org/wiki/Gell-Mann_amnesia_effect

  • grg06 days ago
    Bradley's game is DOA. Is it ARK: Aquatica by any chance?
  • bitwize6 days ago
    I like this article. It opens with a statement of its thesis and presents a few profiles of video game workers whose lives have been negatively impacted by AI. Each profile follows a basic template: a paragraph or so about who they are and what they do, a summary of how AI entered their workplace, a bunch of interview quotes about their reaction to it, and a paragraph at the end about what the final outcome was for the person or their company. Succinct, to the point, and easy to read. You don't see a lot of online journalism like this; the clickbait era has been marked by an entire novel about the E! True Hollywood Story of the major player(s) before the fucking point is even mentioned -- or worse still, AI-generated slop as the body text. Props to Aftermath and Luke Plunkett for maintaining a high standard of prose.
  • christkv6 days ago
    Are we going to get a Steam label for "handcrafted"?
  • hbsbsbsndk7 days ago
    Software developers are so aware of "enshittification" and yet also bullish about this generation of AI, it's baffling.

    It's very clear the "value" of the LLM generation is to churn out low-cost, low-quality garbage. We already outsourced stuff to Fiverr, but now we can cut people out altogether. Producing "content" nobody wants.

  • DeathArrow7 days ago
    If a manager thinks paying $20 monthly for an AI tool will make a developer or artist 5x more productive, he's delusional.

    On the other hand, AI can be useful and can accelerate some work a bit.

  • Jyaif7 days ago
    “He doesn't know that the important thing isn't just the end result, it's the journey and the questions you answer along the way”

    This is satire right?

    • grg06 days ago
      Many of the best games were discovered through an iterative process of trial and error, not through magic divination. So, yes, it is the journey along the way that matters in this kind of creative process. This applies not just to concept art, but game mechanics and virtually every element of the game.
  • Animats6 days ago
    AI-generated art just keeps getting better. This looks like a losing battle.
    • gazebo646 days ago
      I think the most salient point the artists make in the article is that the process of ideating and iterating on art is just as valuable as, if not more valuable than, the end result. You can get a good-looking image from an AI generator but miss out on the ideas and discoveries you would otherwise make by actually working on that art.

      I think it's also unfortunate how the advocates for AI replacing artists in gamedev clearly think of art as a chore or a barrier to launch rather than being the whole point of what they're making. If games are art, then it stands to reason the.. art.. that goes into it is just as important as anything else. A game isn't defined just by the logic of the main loop.

  • aucisson_masque6 days ago
    AI is a blatant case of Darwinism.

    There are those who adapt, those who will keep moaning about it and finally those who believe it can do everything.

    The first will succeed, the second will be replaced, and the third is going to get hurt.

    I believe this article and the people it mentions are mostly from the second category. Yet no one in their right mind can deny that AI makes writing code faster, not necessarily better but faster, and games, in the end, are mostly code.

    Of course AI is going to get pushed hard by your CEO; he knows that if he doesn't, a competitor who uses it will be able to produce more games, faster and cheaper.

    • gazebo646 days ago
      >Yet no one in their right mind can deny that AI makes writing code faster, not necessarily better but faster, and games, in the end, are mostly code.

      It's actually quite easy, and not uncommon, to deny all of those things. Game code is complex and massively interwoven and relying on generative code that you didn't write and don't fully understand will certainly break down as game systems increase in complexity, and you will struggle to maintain it or make effective modifications -- so ignoring the fact that the quality is lower, there's an argument to be made that it will be "slower" to write in the long term.

      I think it's also flat wrong to say games are "mostly code" -- games are a visual medium and people remember the visual/audio experience they had playing a game. Textures, animations, models, effects, lighting, etc. all define what a game is just as much if not more than the actual gameplay systems. Art is the main differentiating factor between games when you consider that most gameplay systems are derivative of one another.

    • grg06 days ago
      And then there is a fourth category: those who preach things they have no idea about.
      • miningape6 days ago
        OP falls into this category
    • ohgr6 days ago
      So on that basis you think the market is happy with shit things made very fast?

      I can assure you it's not. And people are starting to realise that there is a lot of shit. And know that LLMs generate it.

      • tpmoney6 days ago
        The “market” for almost any definition of that you can choose is demonstrably happy with “shit things made fast” provided that the “shit” thing still mostly works. Within rounding error, no one is paying for any artisan hand crafted products. Today it’s AI, last decade it was Chinese products, before that it was Japanese products. If you can get an 80% solution for 50% of the price, most people are doing that. How often do you “sort by price” when shopping? When was the last time you bought the more expensive higher quality version of something you don’t have a passion for?

         Or if you want to keep it in the realm of computers, "worse is better" clearly won out. The world uses Linux, not Unix, much to the chagrin of the people who wrote the Linux Haters Handbook (regardless of how tongue in cheek that might have been).

        And the take away from history should be that AI might be “shit” now, but it will improve. If you don’t remember the days when “Made in Japan” was a marker of “shit”, that’s because things that are “shit” can improve faster than things that are “not shit” can maintain their lead.

        • ohgr5 days ago
          You assume it will improve. There is an asymptote from what I can see.
          • tpmoney5 days ago
            Surely you're not seriously suggesting that we've hit the peak (or even the top of an S curve) of what AI is capable of, right? Even if you think LLMs have a limit to what they can do and current AI is overhyped (both of which I would agree with), I find it difficult to believe you really think what we have today is the best AI will ever be able to do.
            • ohgr5 days ago
              The problem is there isn’t a viable revenue model. The moment you ask money for it the customers disappear. That means there isn’t a revenue stream to support the extreme cost of model training.

              And this is all paid for by people who expect a return. In the middle of a very volatile market.

              It’s dead even from a non technical perspective.

              From a technical perspective, every diminishing gain requires more money than the last step. That isn't something that ever works.

    • ang_cire6 days ago
      > another competitor who use it will be able to produce more games, faster and less expensive

      And yet this is no guarantee they will succeed. In fact, the largest franchises and games tend to be the ones that take their time and build for quality. There are a thousand GTA knock-offs on Steam, but it's R* that rakes in the money.

      • miningape6 days ago
        Exactly, cheap buggy shovelware has always been available to gamers. They just choose not to buy it because they know the experience will be sht since no one spent any time polishing it from a POC to a product.

        AI generates code that's harder for humans to understand, so that polishing process takes longer and is even more costly when you have AI shtting out code at breakneck speed.

  • akomtu7 days ago
    Corporations don't need human workers, they need machines, the proverbial cogs that lack their own will and implement the will of the corporation instead. AI will make it happen: human workers will be managed by AI with sub-second precision and kill whatever little creativity and humanity the workers still had.
  • gwbas1c7 days ago
    > “He doesn't know that the important thing isn't just the end result, it's the journey and the questions you answer along the way”. Bradley says that the studio’s management have become so enamoured with the technology that without a reliance on AI-generated imagery for presentations and pitches they would not be at the stage they are now, which is dealing with publishers and investors.

    Take out the word AI and replace it with any other tool that's over-hyped or over-used, and the above statement will apply to any organization.

  • lanfeust67 days ago
    It would be an understatement to call this a skewed perspective. In most of the anecdotes they seem to try really hard to trivialize the productive benefits of AI, which is difficult to take seriously. The case that LLMs create flawed outputs or are limited in what they can do is not controversial at all, but by and large, the report from experienced developers is that it has improved their productivity, and it's now part of their workflow. Whether businesses and higher-ups try to use it in absurd ways is neither here nor there. That, and culture issues, were a problem before AI.

    Obviously some workers have a strong incentive to oppose adoption, because it may jeopardize their careers. Even if the capabilities are overstated, it can become a self-fulfilling prophecy through higher-ups' choices. Union shops will try to stall it, but it's here to stay. You're in a globally competitive market.

    • Fraterkes7 days ago
      If ai exacerbates culture issues and management incompetence then that is an inherent downside of ai.

      There is a bunch of programmers who like ai, but as the article shows, programmers are not the only people subjected to ai in the workplace. If you're an artist, you've taken a job that has crap pay and stability for the amount of training you put in, and the only reason you do it is because you like the actual content of the job (physically making art). There is obviously no upside to ai for those people, and this focus on the managers' or developers' perspective is myopic.

      • andybak7 days ago
        It might seem hard to believe but there are a bunch of artists who also like AI. People whose artistic practice predates AI. The definition of "artist" is a quagmire which I won't get into but I am not stretching the definition here in any way.
        • Fraterkes7 days ago
          I'm sure there are a bunch! I'm an artist, I talk to a bunch of artists physically and online. It's not the prevailing opinion in my experience.
          • andybak7 days ago
            Agreed. But it's important to counter the impression that many have that it's nearly unanimous.
      • lanfeust67 days ago
        It's an interesting point that the passion jobs creatives take on (including game dev) tend to pay less, and that where the thrilling component is disrupted there could be less incentive to bother entering the field.

        I think for the most part creatives will still line up for these gigs, because they care about contributing to the end products, not the amount of time they spend using Blender.

        • Fraterkes7 days ago
          You are again just thinking from the perspective of a manager: Yes, if these ai jobs need to be filled, artists will be the people filling them. But from the artists perspective there are fewer jobs, and the jobs that do remain are less fulfilling. So: from the perspective of a large part of the workforce it is completely true and rational to say that ai at their job has mostly downsides.
          • lanfeust67 days ago
            > from the artists perspective there are fewer jobs, and the jobs that do remain are less fulfilling.

            Re-read what I wrote. You repeated what I said.

            > So: from the perspective of a large part of the workforce it is completely true and rational to say that ai at their job has mostly downsides.

            For them, maybe.

            • Fraterkes7 days ago
              Alright, so doesn't that validate a lot of the feelings and opinions laid out in the OP? Have I broadened your worldview?
              • lanfeust66 days ago
                No. You gave me no new information.
    • oneeyedpigeon7 days ago
      I have very little objection to AI, providing we get UBI to mitigate the fallout.
      • parpfish7 days ago
        I was thinking about this and realized that if we want an AI boom to lead to UBI, AI needs to start replacing the cushy white collar jobs first.

        If you start by replacing menial labor, there will be more unemployment but you’re not going to build the political will to do anything because those jobs were seen as “less than” and the political class will talk about how good and efficient it is that these jobs are gone.

        You need to start by automating away “good jobs” that directly affect middle/upper class people. Jobs where people have extensive training and/or a “calling” to the field. Once lawyers, software engineers, doctors, executives, etc get smacked with widespread unemployment, the political class will take UBI much more seriously.

        • stuckinhell7 days ago
          I suspect elites will build a two-tiered AI system where only a select few get access to the cutting-edge stuff, while the rest of us get stuck with the leftovers.

          They'll use their clout—money, lobbying, and media influence—to lock in their advantage and keep decision-making within their circle.

          In the end, this setup would just widen the gap, cementing power imbalances as AI continues to reshape everything. UBI will become the bare minimum to keep the masses sedated.

        • waveringana7 days ago
          Needing a lawyer and needing a doctor are very common causes of bankruptcy in the US. Both feel very primed to be replaced by models.
        • lanfeust67 days ago
          Incidentally it seems to be happening in that order, but laborers won't have a long respite (if you can call it that)
          • parpfish7 days ago
            i think that the factor determining which jobs get usurped by AI first isn't going to be based on the cognitive difficulty as much as it is about robotic difficulty and interaction with the physical world.

            if your job consists of reading from a computer -> thinking -> entering things back into a computer, you're on the top of the list because you don't need to set up a bunch of new sensors and actuators. In other words… the easier it is to do your job remotely, the more likely it is you’ll get automated away

      • hello_computer7 days ago
        But why will that happen? If they have robots and AI and all the money, what’s stopping the powers that be from disposing of the excess biomass?
        • lanfeust67 days ago
          What's there to gain? What do they care about biomass? They're still in the business of selling products, until the economy explodes. I find this to be circular because you could say the same thing about right now, "why don't they dispose of the welfare class" etc.

          There's also the fact that "they" aren't all one and the same persons with the exact same worldview and interests.

          • achierius6 days ago
            You speak like they would have to do something 'aggressive'. If you can achieve a circular economy, where robots produce products for the benefit of a lucky few who can live off of their investments (in the robots), then the rest of the population will 'naturally' go away.

            You might say "but why not use just 1% of that GDP on making sure the rest of humanity lives in at least minimal comfort"? But clearly -- we already choose not to do that today. 1% of the GDP of the developed world would be more than enough to solve many horrifying problems in the developing world -- what we actually give is a far smaller fraction, and ultimately not enough.

          • hello_computer7 days ago
            The Davos class was highly concerned about ecology before Davos was even a thing. In America, their minions (the “coastie” class) are coming to see the liquidation of the kulaks as perhaps not such a bad thing. If it devolves into a “let them eat cake” scenario, one has to wonder how things will play out in “proles vs robot pinkertons”. Watch what the sonic crowd control trucks did in Serbia last week.

            Of course there is always the issue of “demand”—of keeping the factories humming, but when you are worth billions, your immediate subordinates are worth hundreds of millions, and all of their subordinates are worth a few million, maybe you come to a point where “lebensraum” becomes more valuable to you than another zero at the end of your balance?

            When AI replaces the nerds (in progress), they become excess biomass. Not talking about a retarded hollywood-style apocalypse. Economic uncertainty is more than enough to suppress breeding in many populations. “not with a bang, but a whimper”

            If you know any of “them”, you will know that “they” went to the same elite prep schools, live in the same cities, intermarry, etc. The “equality” nonsense is just a lie to numb the proles. In 2025 we have a full-blown hereditary nobility.

            edit: answer to lanfeust6:

            The West is not The World. There are over a billion Chinese, Indians, Africans…

            Words mean things. Who said tree hugger? If you are an apex predator living in an increasingly cloudy tank, there is an obvious solution to the cloudyness.

            • lanfeust67 days ago
              So your take is that the wealthiest class will purge people because they're tree-huggers. Not the worst galaxy-brained thing I've heard before, but still laughable.

              Don't forget fertility rate is basically stagnant in the West and falling globally, so this seems like a waste of time considering most people just won't breed at all.

              • hello_computer6 days ago
                repeated for thread continuity:

                The West is not The World. There are over a billion Chinese, Indians, Africans…

                Words mean things. Who said tree hugger? If you are an apex predator living in an increasingly cloudy tank, there is an obvious solution to the cloudyness.

              • lanfeust66 days ago
                also: emissions will continue to drop
                • hello_computer6 days ago
                  there has been far more degradation to the natural environment than mere air pollution. General Sherman decimated the Plains Indians with a memorandum. do you think that you are sufficiently better and sufficiently more indispensable than a Plains Indian?
      • lanfeust67 days ago
        Right, well even without AGI (no two people agree on whether it's coming within 5 years, 30, or 100), finely-tuned LLMs can disrupt the economy fast if the bottlenecks get taken care of. The big one is the robot-economy. This is popularly placed further off in timescales, but it does not require AGI at all. We already have humanoid robots on the market for the price of a small car, they're just dumb. Once we scale up solar and battery production, and then manufacturing, it's coming for menial labor jobs. They already have all the pieces, it's a foregone conclusion. What we don't know how to do is to create a real "intelligence", and here the evangelists will wax about the algorithms and the nature of intelligence, but at the end of the day it takes more than scaling up an LLM to constitute an AGI. The bet is that AI-assisted research will lead to breakthrough in a trivial amount of time.

        With white-collar jobs the threat of AI feels more abstract and localized, and you still get talk about "creating new jobs", but when robots start coming off the assembly line people will demand UBI so fast it will make your head spin. Either that or they'll try to set fire to them or block them with unions, etc. Hard to say when because another effort like the CHIPS act could expedite things.

    • hello_computer7 days ago
      It’s karma. The creatives weren’t terribly concerned when the factory guys lost their jobs. “Lern to code!” Now it’s our turn to “Learn to OnlyFans” or “Learn to Homeless.”
      • Terr_7 days ago
        > The creatives [...] “Lern to code!”

        No, the underlying format of "$LABOR_ISSUE can be solved by $CHANGE_JOB" comes from a place of politics, where a politician is trying to suggest they have a plan to somehow tackle a painful problem among their constituents, and that therefore they should be (re-)elected.

        Then the politicians piled onto "coal-miners can learn to code" etc. because it was uniquely attractive, since:

        1. No big capital expenditures, so they don't need to promise/explain how a new factory will get built.

        2. The potential for remote work means constituents wouldn't need to sell their homes or move.

        3. Participants wouldn't require multiple years of expensive formal schooling.

        4. It had some "more money than you make now" appeal.

        • hello_computer6 days ago
          Stating it in patronizing fact-checker tone does not make it true. The tech nerds started it (they love cheap labor pools). Then the politicians joined their masters’ bandwagon. It was a PR blitz. Who has the money for those? Dorseys, Grahams, & Zuckerbergs, or petty-millionaire mayors & congressmen? Politicians are just the house slaves—servants of money.

          https://en.wikipedia.org/wiki/Learn_to_Code#Codecademy_and_C...

          • achierius6 days ago
            "Tech nerds" like Dorsey and Zuckerberg have almost nothing in common (on a day-to-day basis, with how they live their lives, their material incentives, etc.) with "tech nerds" like "Intel Employee #783,529". Those are not a single class of people, and it was predominantly the first group that pushed this sort of rhetoric, not the latter.
            • hello_computer6 days ago

                 24 The disciple is not above his master, nor the servant above his lord.
              
                 25 It is enough for the disciple that he be as his master, and the servant as his lord. If they have called the master of the house Beelzebub, how much more shall they call them of his household?
              • Terr_6 days ago
                You whine about a "patronizing fact-checker tone", yet when someone points out a real difference between groups, you flee and sling Bible verses?

                Forget these new taxes on Americans who buy Canadian hardwood, we can just supply logs from your eyes.

                • hello_computer6 days ago
                  It's common sense. That is why it has endured. You people are like mob hitmen standing in moral judgment of your Godfathers. Without your muscle, your Godfather is just an old guy with pasta and a cigar. The "difference" is something you hallucinate so you can feel good about yourselves.

                     how much more shall they call them of his household?
      • Fraterkes7 days ago
        "learn to code" was thrown around by programmers, not creatives. Everyone else (including writers and artists) has long hated that phrase, and condemded it as stupid and shortsighted.
        • hello_computer7 days ago
          “learn to code” was from the media. whether they deserve to be classified as “creatives” i will leave to the philosophers.
  • nilkn7 days ago
    I don't have much sympathy for this. This country has long expected millions and millions of blue collar workers to accept and embrace change or lose their careers and retirements. When those people resisted, they were left to rot. Now I'm reading a sob story about someone throwing a fit because they refuse to learn to use ChatGPT and Claude and the CEO had to sit them down and hold their hand in a way. Out of all the skillset transitions that history has required or imposed, this is one of the easiest ever.

    They weren't fired; they weren't laid off; they weren't reassigned or demoted; they got attention and assistance from the CEO and guidance on what they needed to do to change and adapt while keeping their job and paycheck at the same time, with otherwise no disruption to their life at all for now.

    Prosperity and wealth do not come for free. You are not owed anything. The world is not going to give you special treatment or handle you with care because you view yourself as an artisan. Those are rewards for people who keep up, not for those who resist change. It's always been that way. Just because you've so far been on the receiving end of prosperity doesn't mean you're owed that kind of easy life forever. Nobody else gets that kind of guarantee -- why should you?

    The bottom line is the people in this article will be learning new skills one way or another. The only question is whether those are skills that adapt their existing career for an evolving world or whether those are skills that enable them to transition completely out of development and into a different sector entirely.

    • petesergeant7 days ago
      > Those are rewards for people who keep up, not for those who resist change.

      lol. I work with LLM outputs all day -- like it's my job to make the LLM do things -- and I probably speak to some LLM to answer a question for me between 10 and 100 times a day. They're kinda helpful for some programming tasks, but pretty bad at others. Any company that tried to mandate me to use an LLM would get kicked to the curb. That's not because I'm "not keeping up", it's because they're simply not good enough to put more work through.

      • ewzimm7 days ago
        Wouldn't this depend a lot on how management responds to your use? For example, if you just kept a log of prompts and outputs with notes about why the output wasn't acceptable, that could be considered productive use in this early stage of LLMs, especially if management's goal was to have you learning how to use LLMs. Learning how not to use something is just as important in the process of adapting any new tool.
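
        As a rough sketch of what such a log could look like (the shape and field names here are purely illustrative, not any particular tool's format):

          // Illustrative only: one possible shape for a prompt/output log entry.
          interface PromptLogEntry {
            timestamp: string;   // when the prompt was run
            task: string;        // what you were trying to get done
            prompt: string;      // what was sent to the model
            output: string;      // what came back
            accepted: boolean;   // whether the output was usable as-is
            notes: string;       // why it was or wasn't acceptable
          }

          const example: PromptLogEntry = {
            timestamp: "2025-04-10T09:30:00Z",
            task: "Generate unit tests for the pricing module",
            prompt: "Write Jest tests covering discount edge cases",
            output: "(model output here)",
            accepted: false,
            notes: "Tests compiled but only asserted that nothing threw.",
          };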

        If management is convinced of the benefits of LLMs and the workers are all just refusing to use them, the main problem seems to be a dysfunctional working environment. It's ultimately management's responsibility to work that out. But if management isn't completely incompetent, the people tasked with using these tools could do a lot to help by testing them and providing constructive feedback, rather than refusing to try and offering grand narratives about damaging the artistic integrity of something that has been commoditized from its inception, like video game art. I'm not saying that video game art can't be art, but it has existed in a commercial crunch culture since the 1970s.

      • achierius6 days ago
        What sort of tasks have you seen them struggle with? Not to dispute, just collecting datapoints for my own sake.
        • petesergeant6 days ago
          Anything with even vaguely complicated TypeScript types, hallucinating modules, writing tests that are useful rather than just performative, as recent examples…
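
          To give a concrete (hypothetical) example of the kind of type-level code I mean, it's things like this key-remapping utility where suggestions tend to fall apart (names invented for illustration):

            // Hypothetical illustration: a mapped type that drops function-valued
            // properties, keeping only the data fields of T.
            type DataOnly<T> = {
              [K in keyof T as T[K] extends (...args: any[]) => any ? never : K]: T[K];
            };

            interface User {
              id: number;
              name: string;
              save(): Promise<void>;
            }

            // Resolves to { id: number; name: string }; save() is filtered out.
            type UserRecord = DataOnly<User>;

            const record: UserRecord = { id: 1, name: "Ada" };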
    • kmeisthax7 days ago
      If you're not doing the work, you're not learning from the result.

      The CEOs in question bought what they believed to be a power tool, but got what is more like a smarter copy machine. To be clear, copy machines are not useless, but they also aren't going to drive the 200% increases in productivity that people think they will.

      But because management demands the 200% increase in productivity they were promised by the AI tools, all the artists and programmers on the team hear "stop doing anything interesting or novel, just copy what already exists". To be blunt, that's not the shit they signed up for, and it's going to result in a far worse product. Nobody wants slop.

    • raxxorraxor6 days ago
      Having spent hours upon hours on image synthesis for artistic hobby purposes, I can say it is indeed an awesome tool. If you get into it, though, you learn about its limitations.

      Real knowledge here is often absent among the strongest AI proselytizers; others are more realistic about it. It remains an awesome tool, but a limited one.

      AIs today are not creative at all. They find statistical matches. They perform different work than artists do.

      But please, replace all your artwork with AI-generated pieces. I believe that, with that approach, the forced "adapt" phase would play out rather quickly.

      • nilkn6 days ago
        > It still remains an awesome tool, but a limited one.

        And that's enough to drive significant industry-wide change. Just because it can't fully automate everything doesn't mean companies aren't going to expect (and, indeed, increasingly require) their employees to learn how to effectively utilize the technology. The CEO of Shopify recently made it clear that refusal to learn to use AI tools will factor directly into performance evaluations for all staff. This is just the beginning. It's best to be wise and go where the puck is headed.

        The article gives several examples of where these tools are used to rapidly accelerate experimentation, pitches, etc. Supposedly this is a bad thing and should be avoided because it's not sufficiently artisan, but no defensible argument was presented as to why these use cases are illegitimate.

        In terms of writing code, we're entering an era where developers who have invested in learning how to utilize this technology are simply better and more valuable to companies than developers who have not. Naysayers will find all sorts of false ways to nitpick that statement, yet it remains true. Effective usage means knowing when (and when not) to use these tools -- and to what degree. It also, for now at least, means remaining a human expert about the craft at hand.

    • voidspark6 days ago
      [flagged]