There is no place for me in this environment. It's not that I couldn't use the tools to produce a lot of code; it's that AI use makes speed-to-production the metric for success. The solution to bad code is more code. AI will never produce a deletion. Publish or perish has come for us and it's sad. It makes me feel old, just like my Python programming made the mainframe people feel old. I wonder what will make the AI developers feel old…
Unless you meant that AI won’t remove entire features from the code. But AI can do that too if you prompt it to. I think the bigger issue is that companies don’t put enough value on removing things and only focus on adding new features. That’s not a problem with AI though.
As a side note, I've had coworkers disappear for N days too, and in that time the requirements changed (as is our business) and their lack of communication meant that their work was incompatible with the new requirements. So a 10x speedup achieved in a vacuum isn't necessarily a good thing either.
A declarative framework for testing may make sense in some cases, but in many cases it will just be a complicated way of scripting something you use once or twice. And when you use it you need to call up the maintainer anyway when you get lost in the yaml.
Which of course feels good for the maintainer, to feel needed.
Please don't do this :) Readable code is better than clever code!
var ageLookup = new Dictionary<AgeRange, List<Member>>();
foreach (var member in members) {
    var ageRange = member.AgeRange;
    if (ageLookup.ContainsKey(ageRange)) {
        ageLookup[ageRange].Add(member);
    } else {
        ageLookup[ageRange] = new List<Member>();
        ageLookup[ageRange].Add(member);
    }
}
which could instead be: var ageLookup = members.ToLookup(m => m.AgeRange, m => m);
var ageLookup = new Dictionary<AgeRange, List<Member>>();
foreach (var member in members) {
    ageLookup.getOrCreate(member.AgeRange, List::new).add(member);
}
is more readable in the long-term... (less predefined methods/concepts to learn). Readability incorporates familiarity but also conciseness. I suppose it depends what else is going on in the codebase. I have a database access class in one of my solutions where `ToLookup` is used 15 times; yes, you have to learn the concept, but it's an inbuilt method and it's a massive benefit once you grok it.
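(Side note for anyone scratching their head at the snippet above: neither C#'s Dictionary nor Java's Map actually ships a method called getOrCreate - the closest built-ins are Java's computeIfAbsent and C#'s ConcurrentDictionary.GetOrAdd. A minimal sketch of the same loop with stock C# APIs, assuming the same Member/AgeRange types as above, would be:)

var ageLookup = new Dictionary<AgeRange, List<Member>>();
foreach (var member in members) {
    // TryGetValue avoids the ContainsKey + indexer double lookup.
    if (!ageLookup.TryGetValue(member.AgeRange, out var list)) {
        list = new List<Member>();
        ageLookup[member.AgeRange] = list;
    }
    list.Add(member);
}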
I should also note that development style also depends on tools, so if your IDE makes inline functions more readable in its display, it's fine to use concisely defined lambdas.
Readability is a personal preference thing at some point, after all.
At least with human-written clever code you can trust that somebody understood it at one point but the idea of trusting AI generated code that is "clever" makes my skin crawl
And was the code they were writing before they had an LLM any better?
My guess would be engineers who are "forced" to use AI, already mailed management it would be an error and are interviewing for their next company. Malicious compliance: vibe code those new features and let maintainability and security be a problem for next employees / consultants.
And I know it's intentional, but yes. Add some mindfulness to your implementation
Map["blah"] = fooIsTrue;
I do see your example in the wild sometimes. I've probably done it myself as well and never caught it.
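(For readers missing the parent comment: the long-hand version being simplified here is presumably something like the following - a hypothetical reconstruction, not the original poster's code.)

if (fooIsTrue) {
    Map["blah"] = true;
} else {
    Map["blah"] = false;
}

Collapsing the branch into the single assignment above says the same thing with less room for a copy-paste slip.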
The one actual major downside to AI is that PMs and higher-ups are now looking for problems to solve with it. I haven't really seen this with a technology before, except when cloud first became a thing and maybe sometimes with Microsoft products.
u/justonceokay's wrote:
> The solution to bad code is more code.
This has always been true, in all domains.
Gen-AI's contribution is further automating the production of "slop". Bots arguing with other bots, perpetuating the vicious cycle of bullshit jobs (David Graeber) and enshittification (Cory Doctorow).
u/justonceokay's wrote:
> AI will never produce a deletion.
I acknowledge your example of tidying up some code. What Bill Joy may have characterized as "working in the small".
But what of novelty, craft, innovation? Can Gen-AI moot the need for code? Like the oft-cited example of -2,000 LOC? https://www.folklore.org/Negative_2000_Lines_Of_Code.html
Can Gen-AI do the (traditional, pre 2000s) role of quality assurance? Identify unnecessary or unneeded work? Tie functionality back to requirements? Verify the goal has been satisfied?
Not yet, for sure. But I guess it's conceivable, provided sufficient training data. Is there sufficient training data?
You wrote:
> only focus on adding new features
Yup.
Further, somewhere in the transition from shipping CDs to publishing services, I went from developing products to just doing IT & data processing.
The code I write today (in anger) has a shorter shelf-life, creates much less value, is barely even worth the bother of creation much less validation.
Gen-AI can absolutely do all this @!#!$hit IT and data processing monkey motion.
During interviews one of my go-to examples of problem solving is a project I was able to kill during discovery, cancelling a client contract and sending everyone back to the drawing board.
Half of the people I've talked to do not understand why that might be a positive situation for everyone involved. I need to explain the benefit of having clients think you walk on water. They're still upset my example isn't heavy on any of the math they've memorized.
It feels like we're wondering how wise an AI can be in an era where wisdom and long-term thinking aren't really valued.
No, because if you read your SICP you will come across the aphorism that "programs must be written for people to read, and only incidentally for machines to execute." Relatedly is an idea I often quote against "low/no code tooling" that by the time you have an idea of what you want done specific enough for a computer to execute it, whatever symbols you use to express that idea -- be it through text, diagrams, special notation, sounds, etc. -- will be isomorphic to constructs in some programming language. Relatedly, Gerald Sussman once wrote that he sought a language in which to discuss ideas with his friends, both human and electronic.
Code is a notation, like mathematical notation and musical notation. It stands outside prose because it expresses an idea for a procedure to be done by machine, specific enough to be unambiguously executable by said machine. No matter how hard you proompt, there's always going to be some vagueness and nuance in your English-language expression of the idea. To nail down the procedure unambiguously, you have to evaluate the idea in terms of code (or a sufficiently code-like notation as makes no difference). Even if you are working with a human-level (or greater) intelligence, it will be much easier for you and it to discuss some algorithm in terms of code than in an English-language description, at least if your mutual goal is a runnable version of the algorithm. Gen-AI will just make our electronic friends worthy of being called people; we will still need a programming language to adequately share our ideas with them.
Now tell that to your compiler, which turns instructions in a relatively high-level language into machine-language programs that no human will ever read.
AI is just the next logical stage in the same evolutionary journey. Your programs will be easier to read than they were, because they will be written in English. Your code, on the other hand, will matter as much as your compiler's x86 or ARM output does now: not at all, except in vanishingly-rare circumstances.
In the same way that we use AI to write resumés to be read by resumé-scanning AI, or where execs use AI to turn bullet points into a corporate email only for it to be summarised into bullet points by AI, perhaps we are entering the era where AI generates code that can only be read by an AI?
This sounds inefficient and untidy when the only human things left to do are to take up space and consume resources.
Removing the humans enables removing other legacy parts of the system, such as food production, which will free up resources for other uses. It also allows certain constraints to be relaxed, such as keeping the air breathable and the water drinkable.
I would argue that a plurality, if not the majority, of business needs for software engineers do not need more than a single person with those skills. Better yet, there is already some executive that is extremely confident that they embody all three.
If you do this you are creating a rod for your own back: You need management to see the failures & the time it takes to fix them, otherwise they will assume everything is fine & wonderful with their new toy & proceed with their plan to inflict it on everyone, oblivious to the true costs + benefits.
If at every company I work for, my managers average 7-8 months in their role as _my_ manager, and I am switching jobs every 2-3 years because companies would rather rehire their entire staff than give out raises that are even a portion of the market growth, why would I care?
Not that the market is currently in that state, but that's how a large portion of tech companies were operating for the past decade. Long term consequences don't matter because there are no longer term relationships.
When they look at the calendar and it says May 2025 instead of April
LevelsIO's flight simulator sucked. But his payoff-to-effort ratio is so absurdly high, as a business type you have to be brain-dead to leave money on the table by refusing to try replicating his success.
They will not feel old because they will enter into bliss of Singularity(TM).
Wasn't it like that always for most companies? Get to market fast, add features fast, sell them, add more features?
> Wasn't it like that always for most companies? Get to market fast, add features fast, sell them, add more features?
This reminds me of an old software engineering adage.
When delivering a system, there are three choices stakeholders have:

You can have it fast,
You can have it cheap,
You can have it correct.

Pick any two.
I think we'll be okay and likely better off.
I'm currently reading an LLM-generated deletion. It's hard to get an LLM to work with existing tools, but not impossible.
I suspect he is pretty unimpressed by the code that LLMs produce given his history with code he thinks is subpar, but what do I know
It's the front-end of the hype cycle. The tech-debt problems will come home to roost in a year or two.
The market can remain irrational longer than you can remain solvent.
Use LLM to write Haskell. Problem solved?
Ah yes, maintenance, the most fun and satisfying part of the job. /s
new 2025 slang just dropped
I am sure assembly programmers were horrified at the code the first C compilers produced. And I personally am horrified by the inefficiency of python compared to the C++ code I used to write. We always have traded faster development for inefficiency.
This created a league of incredibly elitist[0] programmers who, having mastered what they thought were the rules of C, insisted to everyone else that the real problem was you not understanding C, not the fact that C had made itself a nightmare to program in. C is bad soil to plant a project in, even if you know where the poison is and how to avoid it.
The inefficiency of Python[1] is downstream of a trauma response to C and all the many, many ways to shoot yourself in the foot with it. Garbage collection and bytecode are tithes paid to absolve oneself of the sins of C. It's not a matter of Python being "faster to write, harder to execute" as much as Python being used as a defense mechanism.
In contrast, the trade-off from AI is unclear, aside from the fact that you didn't spend time writing it, and thus aren't learning anything from it. It's one thing to sacrifice performance for stability; versus sacrificing efficiency and understanding for faster code churn. I don't think the latter is a good tradeoff! That's how we got under-baked and developer-hostile ecosystems like C to begin with!
[0] The opposite of a "DEI hire" is an "APE hire", where APE stands for "Assimilation, Poverty & Exclusion"
[1] I'm using Python as a stand-in for any memory-safe programming language that makes use of a bytecode interpreter that manipulates runtime-managed memory objects.
It was only much later that optimizing compilers began using it as an excuse to do things like time travel, and then everyone tried to show off how much of an intellectual they were by saying everyone else was stupid for not knowing this could happen all along.
And even among languages that do have a full virtual machine, Python is slow. Slower than JS, slower than Lisp, slower than Haskell by far.
There is a Common Lisp implementation that compiles to bytecode, CLISP. And there are Common Lisp implementations that compile (transpile?) to C.
That, right here, is a world-shaking statement. Bravo.
But the sentiment is true, by default current LLMs produce verbose, overcomplicated code
However I wouldn't say refactoring is as hands free as letting AI produce the code in the first place, you need to cherry pick its best ideas and guide it a little bit more.
You can't win an argument with people who don't care if they're wrong, and someone who begins a sentence that way falls into that category.
Prior hypes, like blockchain, were more abstract, and therefore less useful to people who understand managing but not the actual work.
Because a core feature of LLMs is to minimize the distance between {quality answers} and {gibberish that looks correct}.
As a consequence, this maximizes {skill required to distinguish the two}.
Are we then surprised that non-subject matter experts overestimate the output's median usefulness?
In fact, we should try to LLM them away. I wonder, would LLMs then be promoted less?
Actually, I feel like executing this startup and pitching would be hilarious and therapeutic.
"How we will eliminate your job with LLMs, MBA."
To manage this well, you need the courage to trust people, as well as the intelligence and patience to question them. Not everybody has that.
But that aside, I think business people generally like having (what they think are) strong experts. It means they can use their people skills and networks to create competitive advantage.
The "copilot experiences", that finishes the next few lines can be useful and intuitive - an "agent" writing anything more than boilerplate is bound to create more work than it lifted in my experience.
Where I am having a blast with LLMs is learning new programming languages more deeply. I am trying to understand Rust better - and LLMs can produce nice reasoning as to whether one should use "Vec<impl XYZ>" or "Vec<Box<dyn XYZ>>". I am sure this is trivial for any experienced Rust developer though.
> Is because unlike prior hype cycles, this one is super easy for an MBA to point at and sort of see a way to integrate it.
This particular hype is the easiest one thus far for an MBA to understand because employing it is the closest thing to a Ford assembly line[0] the software industry has made available yet.
Since the majority of management training centers on early 20th century manufacturing concepts, people taught same believe "increasing production output" is a resource problem, not an understanding problem. Hence the allure of "generative AI can cut delivery times without increasing labor costs."
I feel it degrades a whole group of people to a specific stereotype that might or might not be true.
How about lawyers, PhDs, political science majors, etc.
Let’s look at the humans and their character, not titles.
By the way, I have an MBA too and feel completely misjudged with statements like that.
An analogue to this would be "all cops are bastards". Sure, there are some good ones out there, but there are enough bad ones out there that the stereotype generally applies. The statement is a rallying cry for something to be done about it. The "guilty by association" bit that tends to follow is another thing entirely.
Managers and politicians might be especially susceptible to this, but there's also enough in the tech crowd who seem to have been hypnotized into becoming mindless enthusiasts for AI.
I've seen this with other tools before. Every single time, it's because someone in the company signed a big contract to get seats, and they want to be able to show great utilization numbers to justify the expense.
AI has the added benefit of being the currently in-vogue buzzword, and any and every grant or investment sounds way better with it than without, even if it adds absolutely nothing whatsoever.
Possibly this is just among the smallish group of students I know at MIT, but I would be surprised to hear that a biomedical researcher has no use for them.
I do have to say that we're just approaching the tip of the iceberg and there are huge issues related to standardization, dirty data... We still need the supervision and the help of one of the two professors to proceed, even with LLMs.
Or maybe they can just use the AI to write creative emails to management explaining why they weren’t able to use AI in their work this day/week/quarter.
I'm adding `.noai` files to all the project going forward:
https://www.jetbrains.com/help/idea/disable-ai-assistant.htm...
AI may be somewhat useful for experienced devs but it is a catastrophe for inexperienced developers.
"That's OK, we only hire experienced developers."
Yes, and where do you suppose experienced developers come from?
Again and again in this AI arc I'm reminded of the sorcerer's apprentice scene from Fantasia.
Strictly speaking, you don't even need university courses to get experienced devs.
There will always be individuals that enjoy coding and do so without any formal teaching. People like that will always be more effective at their job once employed, simply because they'll have just that much more experience from trying various stuff.
Not to discredit University degrees of course - the best devs will have gotten formal teaching and code in their free time.
This is honestly not my experience with self taught programmers. They can produce excellent code in a vacuum but they often lack a ton of foundational stuff
In a past job, I had to untangle a massive nested loop structure written by a self taught dev, which did work but ran extremely slowly
He was very confused and asked me to explain why my code ran fast and his ran slow, because "it was the same number of loops"
I tried to explain Big O, linear versus quadratic complexity, etc., but he really didn't get it
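(For illustration only - not his actual code - the trap usually looks like this: the same "number of loops" on the page, but one version hides a linear scan inside the loop.)

var ids = new[] { 1, 2, 2, 3, 3, 3 };

// O(n^2): List.Contains walks the whole list for every element.
var seenSlow = new List<int>();
foreach (var id in ids) {
    if (!seenSlow.Contains(id)) {
        seenSlow.Add(id);
    }
}

// O(n): HashSet.Add is an amortized constant-time check-and-insert.
var seenFast = new HashSet<int>();
foreach (var id in ids) {
    seenFast.Add(id);
}

Both are "one loop", but the first does on the order of n² comparisons as the list grows.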
But the company was very impressed by him and considered him our "rockstar" because he produced high volumes of code very quickly
My point is you don't know what you don't know. There is really only so far you can get by just noodling around on your own, at some point we have to learn from more experienced people to get to the next level
School is a much more consistent path to gain that knowledge than just diving in
It's not the only path, but it turns out that people like consistency
A senior dev mentioned a "class invariant" the other day, and I just had no idea what that was because I've never been exposed to it… So I suppose the question I have is: what should I be exposed to in order to know that? What else is there that I need to learn about software engineering that I don't know, and that is similarly going to be embarrassing on the job if I don't know it? I've got books like Cracking the Coding Interview and Software Engineering at Google… but I am missing a huge gap because I was unable to finish my master's in computer science :-(
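(For anyone else wondering: a class invariant is a condition on an object's state that the constructor establishes and every public method preserves. A small illustrative C# sketch - a made-up example, not from any book mentioned here:)

public class BankAccount {
    // Invariant: Balance is never negative.
    public decimal Balance { get; private set; }

    public BankAccount(decimal openingBalance) {
        if (openingBalance < 0) throw new ArgumentException("negative opening balance");
        Balance = openingBalance;   // invariant established
    }

    public void Withdraw(decimal amount) {
        if (amount < 0 || amount > Balance)
            throw new InvalidOperationException("withdrawal would break the invariant");
        Balance -= amount;          // invariant preserved
    }
}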
(Serious comment! It's "the" algorithms book).
I've read that one, not an expert by any means, and I've got a 'decent' handle on data structures, but what about the software engineering basics one needs, like OOP vs. functional, SOLID, interfaces, class invariants, class design, etc.? Should I just pick up any CS 101 textbook? Or are there any good MIT OpenCourseWare classes that cover this type of stuff (preferably with video lectures... Intro to Algorithms is _amazing_, they have Erik's classes uploaded to YouTube, but finding good resources to level up as a SWE has proved somewhat challenging)?
^ serious comment as well... I find myself "swimming" when I hear certain terms used in the field and I am trying to catch up a bit (esp. as an SRE with self-taught SRE skills who is supposed to know this stuff)
> This website contains nearly complete solutions to the bible textbook - Introduction to Algorithms Third Edition, published by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein.
But you have to know this knowledge exists in the first place. That's part of the appeal of university teaching: it makes you aware of many different paradigms, so the day you stumble on one of them you know where to look for a solution. And usually you learn how to read (and not fear) scientific papers, which can be useful. And statistics.
There are also different types of self taught, and different types of uni grad. You have people who love code, have a passion for learning, and that's driven them to gain a lot of experience. Then you have those who needed to make a living, and haven't really stretched beyond their wheelhouse so lack a lot of diverse experience. Both are totally fine and capable of some work, but you would have better luck with novel work from an experienced passionate coder. Uni trained or not.
I have not learned CS at university (maths & stats graduate who shifted to programming, because I can't help myself loving it). I work with engineers with CS degrees from pretty good universities. At the risk of sounding arrogant, I write better code than a lot of them (and some of them write code that is so clean and tight that I wish I could match it). Purely based on my fairly considerable experience, there is basically little correlation between degree and quality of code. There is non-trivial correlation between raw intelligence and the output. And there is a massive correlation between how much one cares about the quality of the work and the output.
> Not to discredit University degrees of course - the best devs will have gotten formal teaching and code in their free time.
>People like that will always be more effective at their job once employed
My experience is that "self taught" people are passionate about solving the parts they consider fun but do not have the breadth to be as effective as most people who have formal training but less passion. The previous poster also called out real issues with this kind of developer (not understanding time complexity or how to fix things) that I have repeatedly seen in practice.
I just pointed out that removing classes entirely would still get you experienced people, even if they'd likely be better if they code and get formal training. I stated that very plainly.
You actually didn't state it very plainly at all. Your initial post is contradictory, look at these two statements side by side
> There will always be individuals that enjoy coding and do so without any formal teaching. People like that will always be more effective at their job once employed
> the best devs will have gotten formal teaching and code in their free time
People who enjoy coding without formal training -> more effective
People who enjoy coding and have formal training -> best devs
Anyways I get what you were trying to say, now. You just did not do a very good job of saying it imo. Sorry for the misunderstanding
> There will always be individuals that enjoy coding and do so without any formal teaching. People like that will always be more effective at their job once employed
As "people who enjoy coding and didn't need formal training to get started". It includes both people who have and don't have formal training.
Both statements together are (enthusiasm + formal) > (enthusiasm without formal) > (formal without enthusiasm).
> Both statements together are (enthusiasm + formal) > (enthusiasm without formal) > (formal without enthusiasm).
I don't think the last category (formal education without enthusiasm) really exists, I think it is a bit of a strawman being held up by people who are *~passionate~*
I suspect that without any enthusiasm, people will not make it through any kind of formal education program, in reality
If you've never encountered the average 9-5 dev who just does the least amount of effort they can get away with, then I have to applaud the HR departments of the companies you've worked for. Whatever they're doing, they're doing splendid work.
And almost all of my coworkers are university grads that do literally the same you've used as an example for non formally taught people: they write abysmally performing code because they often have an unreasonable fixation on practices like inversion of control (as a random example).
As a particularly hilarious example I've had to explain to such a developer that an includes check on a large list in a dynamic language such as JS performs abysmally
While you definitely lose acuity once you stop exploring new concepts in your free time, the amount of knowledge gained after you've already spent 10-20 years coding drops off a cliff, making this free-time investment progressively less essential.
My point was that most of my coworkers never went through an enthusiastic phase in which they coded in their free time. Neither before university, nor during, nor after. And it's very noticeable that they're not particularly good at coding either.
Personally, I think it's just that people who are good at coding inevitably become enthusiastic enough to do it in their free time, at least for a few years. Hence the inverse is true: people who didn't go through such a phase (which most of my coworkers are)... aren't very good at it. Whether they went to university and got a degree or not.
Does it perform any better in statically compiled languages?
I just didn't want to explore the example to such a depth, as it felt irrelevant to me at the time of writing.
You get experienced devs from inexperienced devs that get experience.
[edit: added "degrees" as intended. University was mentioned as the context of their observation]
We're talking about the industry responsible for ALL the growth of the largest economy in the history of the world. It's not the 1970s anymore. You can't just count on weirdos in basements to build an industry.
That's not the kind of experience companies look for though. Do you have a degree? How much time have you spent working for other companies? That's all that matters to them.
Almost every time I hear this argument, I realize that people are not actually complaining about AI, but about how modern capitalism is going to use AI.
Don't get me wrong, it will take huge social upheaval to replace the current economic system.
But at least it's an honest assessment -- criticizing the humans that are using AI to replace workers, instead of criticizing AI itself -- even if you fear biting the hands that feed you.
I think you misunderstand OP's point. An employer saying "we only hire experienced developers [therefore worries about inexperienced developers being misled by AI are unlikely to manifest]" doesn't seem to realize that the AI is what makes inexperienced developers. In particular, using the AI to learn the craft will not allow prospective developers to learn the fundamentals that will help them understand when the AI is being unhelpful.
It's not so much to do with roles currently being performed by humans instead being performed by AI. It's that the experienced humans (engineers, doctors, lawyers, researchers, etc.) who can benefit the most from AI will eventually retire and the inexperienced humans who don't benefit much from AI will be shit outta luck because the adults in the room didn't think they'd need an actual education.
1. How it's gonna be used and how it'll be a detriment to quality and knowledge.
2. How AI models are trained with a great disregard to consent, ethics, and licenses.
The technology itself, the idea, what it can do, is not the problem, but how it's made and how it's going to be used will be a great problem going forward, and none of the suppliers say that it should be used in moderation or that it will be harmful in the long run. Plus the same producers are ready to crush/distort anything to get their way... It smells very similar to the tobacco/soda industry. Both created faux-research institutes to further their causes.
Considering HPC is half CPU and half GPU (more like 66% CPU and 33% GPU, but I'm being charitable here), I expect an average power draw of 3.6 kW in a cluster. Moreover, most of these clusters run targeted jobs. Prototyping/trial runs use much more limited resources.
On the other hand, AI farms use all these GPUs at full power almost 24/7, both for training new models and for inference. Before you ask: if you have a GPU farm for training, having inference-focused cards doesn't make sense, because you can divide NVIDIA cards with MIG into 6-7 instances each, so you can set aside some training cards, split the rest, and run inference on them, resulting in ~45 virtual cards for inference per server, again at ~6.1 kW load.
So, yes, AI's power load profile is different.
Emissions aside, many data centres (and the associated bitcoin-mining and AI clusters) are a significant local issue due to their demand on local water and energy supplies.
You must be joking. Consumer models' primary source of training data seems to be the legal preambles from BDSM manuals.
This has pretty consistently been my viewpoint, and that of many others, since 2023. We were assured many times over that this time it would be different. I found this unconvincing.
Something very similar can be said about the issue of guns in America. We live in a profoundly sick society where the airwaves fill our ears with fear, envy and hatred. The easy availability of guns might not have been a problem if it didn't intersect with a zero-sum economy.
Couple that with the unavailability of community and social supports and you have a recipe for disaster.
I'm deeply influenced by languages like Forth and Lisp, where that kind of bottom-up code is the cultural standard, and I prefer it, probably because I don't have the kind of linear intelligence and huge memory of an LLM.
For me the hardest part of using LLMs is knowing when to stop and think about the problem in earnest, before the AI generated code gets out of my human brain's capacity to encompass. If you think a bit about how AI still is limited to text as its white board and local memory, text which it generates linearly from top to bottom, even reasoning, it sort of becomes clear why it would struggle with genuine abstraction over problems. I'm no longer so naive as to say it won't happen one day, even soon, but so far its not there.
Any generated snippets I treat like StackOverflow answers: Copy, paste, test, rewrite, or for small snippets, I just type the relevant change myself.
Whenever I'm sceptical I will prompt stuff like "are you sure X exists?", or do a web search. Once I get my problem solved, I spend a bit of time to really understand the code, figure out what could be simplified, even silly stuff like parameters the model just set to the default value.
It's the only way of using LLMs for development I've found that works for me. I'd definitely say it speeds me up, though certainly not 10x. Compared to just being armed with Google, maybe 1.1x.
I just spent a week fixing a concurrency bug in generated code. Yes, there were tests; I uncovered the bug when I realized the test was incorrect...
My strong advice, is to digest every line of generated code; don't let it run ahead of you.
Of course yeeting bad code into production with a poor review process is already a thing. But this will scale that bad code as now you have developers who will have grown up on it.
I think this is the biggest risk. You sometimes get stuck in a cycle in which you hope the AI can fix its own mistake, because you don’t want to expend the effort to understand what it wrote.
It’s pure laziness that occurs only because you didn’t write the code yourself in the first place.
At the same time, I find myself incredibly bored when typing out boilerplate code these days. It was one thing with Copilot, but tools like Cursor completely obviate the need.
We humans have way more context and intuition to rely on to implement business requirements in software than a machine does.
I can only hope endeavors (experiments?) as extreme as this fail fast and we learn from them.
Not to say I'm a hater or something like that. I played a lot of those back in the day. But it's more honest to admit the art and the casino mechanic make the brain excited... the mechanics are 'okay'.
Edit: I just had a random thought. One of the strongest desires a person has is aesthetic desire: to feel that our life is 'picturesque' or aesthetic or beautiful. Overloading the game with aesthetic beauty is actually genius, since it's an easy and strong form of aesthetic (beautiful girls - not just sexy but 'beautiful', as in their whole face and outfit, plus other aesthetic qualities like purity, innocence, cheerfulness, cuteness, etc. Waifu stuff.). And it's often so saturated with beauty that all the ugly things in the players' real lives fade away. It numbs our 'life aesthetic check' since it's flooded with so much 'beauty'. That's why people who play these games say 'I don't even think about the waifus anymore, so the mechanics must be good' - because that numbed state is the intended state, when your aesthetic center is kind of 'numbed'. And that's probably why it feels so good to play these games. When you play other games your sense of beauty is not similarly flooded and numbed, so you're all too aware that this act of playing games is not 'beautiful' in some real sense.
That business model hasn't been going so well in recent years[0], and it's already been proclaimed dead in some corners of the industry[1]. Many industry legends have started their own studios (H. Kojima, J. Solomon, R. Colantonio, ...), producing games for the right reasons. When these games are inevitably mainstream hits, that will be the inflection point where the old industry significantly declines. Or that's what I think, anyway.
AI is considered a potential future growth engine, as it cuts costs in art production, where the bulk of game production costs lie. Game executives are latching onto it hard because it's arguably one of the few straightforward ways to keep growing their publicly-traded companies and their own stock earnings. But technologists already know how this will end.
Other games industry leaders are betting on collapse and renewal to simpler business models, like self-funded value-first games. Also, many bet on less cashflow-intensive game production, including lower salaries (there is much to be said about that).
Looking at industry reports and business circle murmurs, this is the current state of gaming. Some consider it optimistic, and others (especially the business types without much creative talent) - dire. But it does seem to be the objective situation.
[0] VC investment has been down by more than 10x over the last two years, and many big Western game companies have lost investors' money in the previous five years. See Matthew Ball's report, which I linked in my parent comment, for more info.
[1] The games industry has seen more than 10% sustained attrition over the last 5 years, and about 50% of employees hope to leave their employer within a year: https://www.skillsearch.com/news/item/games---interactive-sa...
I just don't think that's true in a world where Marvel Rivals was the biggest launch of 2024. Live service games like Path of Exile, Counter-Strike, Genshin Impact, etc. make boatloads of money and have ever rising player counts.
The problem is that it's a very sink-or-swim market - if you manage to survive 2-3 years you will probably make it, but otherwise you are a very expensive flop. Not unlike VC-funded startups - just because some big names failed doesn't make investing into a unicorn any less attractive.
Another similar exception to the industry rules is the top 20-30 franchises, like NBA2K, GTA, FIFA, Far Cry, Call of Duty, The Sims, Assassin’s Creed, etc. Together, they account for about half the new game and DLC sales. Black hole games take another ~30%, and the remaining 19,000 annually released games share the remaining 20%, with the top 50 games making up 19/20ths of it.
What matters for 95%+ of game developers is performing well in that 20%. And they sell close to 0 lootboxes, for example.
Outside of live service everyone is also looking for that new growth driver. In my opinion the chances are though we're in for a longish period of stagnation. I don't even share the OPs rosey outlook towards more "grassroots" developers. Firstly because they're still businesses even with a big name attached. Secondly because there is going to be a bloodbath due to the large number of developers pivoting in that direction. It'll end up like the indie market where there are so many entrants success is extremely challenging to find.
There is nothing wrong with making entertainment products to make money. That's the reason all products are made: to make money. Games have gone bad because the audience has bad taste. People like Fortnite. They like microtransactions. They like themepark rubbish that you can sell branded skins for. It is the same reason Magic: the Gathering has been ruined with constant IP tie-ins: the audience likes it. People pay for it. People like tat.
For example, some companies are using AI to create tickets or to collate feedback from users.
I can clearly see that this is making them think through the problem far less, and a lot of that sixth-sense understanding of the problem space comes from working through these ticket-creation or product-creation documents, which are now being done by AI.
That is causing the quality of the work to degrade into this weird, drone-like, NPC-like state where they aren't really solving real issues, yet they're getting a lot of stuff done.
It's still very early so I do not know how best to talk to them about it. But it's very clear that any sort of creative work, problem solving, etc has huge negative implications when AI is used even a little bit.
I have also started to think that a great angel-investment question is to ask companies if they are a non-AI zone; investing in those will bring better returns in the future.
It tends to be the same with anything hyped or divisive. Humans tend to exaggerate in both directions in communication, especially in low-stakes environments such as an internet forum, or when they stand to gain something from the hype.
Obviously their views are based on the sum of all their experience with LLMs. We don't have to say so every time.
I wrote a project where I'd initially hardcoded a menu hierarchy into its Rust. I wanted to pull that out into a config file so it could be altered, localized, etc. without users having to edit and recompile the source. I opened a "menu.yaml" file, typed the name of the top-level menu, paused for a moment to sip coffee, and Zed popped up a suggested completion of the file which was syntactically correct and perfect for use as-is.
I honestly expected I’d spend an hour mechanically translating Rust to YAML and debugging the mistakes. It actually took about 10 seconds.
It’s also been freaking brilliant for writing docstrings explaining what the code I just manually wrote does.
I don't want to use AI to write my code, any more than I'd want it to solve my crossword. I sure like having it help with the repetitive gruntwork and boilerplate.
This is a key insight. The other insight is that devs spend most of their time reading and debugging code, not writing it. AI speeds up the writing of code but slows down debugging... AI was trained with buggy code because most code out there is buggy.
Also, when the codebase is complex and the AI cannot see all the dependencies, it performs a LOT worse because it just hallucinates the API calls... It has no idea what version of the API it is using.
TBH, I don't think there exists enough non-buggy code out there to train an AI to write good code which doesn't need to be debugged so much.
When AI is trained on normal language, averaging out all the patterns produces good results. This is because most humans are good at writing with that level of precision. Code is much more precise and the average human is not good at it. So AI was trained on low-quality data there.
The good news for skilled developers is that there probably isn't enough high quality code in the public domain to solve that problem... And there is no incentive for skilled developers to open source their code.
Also management: "I need you to play with AI and try to find a use for it"
Before AI, there was out-sourcing. With mass-produced cheap works, foreign studios eliminated most junior positions.
Now AI is just taking this trend to its logical extreme: out-sourcing to machines, the ultimate form of out-sourcing. The cost approaches 0 and the quantity approaches infinity.
I recently played through a game and after finishing it, read over the reviews.
There was a brief period after launch where the game was heavily criticised for its use of AI assets. They removed some, but apparently not all (or more likely, people considered the game tainted and started claiming everything was AI)
The (I believe) 4 person dev team used AI tools to keep up with the vast quantity of art they needed to produce for what was a very art heavy game.
I can understand people with an existing method not wanting to change. And AI may not actually be a good fit for a lot of this stuff. But I feel like the real winners are going to be the people who do a lot more with a lot less out of sheer necessity to meet outrageous goals.
Large corporations will use AI to deliver low-quality software at high speed and high scale.
"Artisan" developers will continue to exist, but in much smaller numbers and they will mostly make a living by producing refined, high-quality custom software at a premium or on creative marketplaces. Think Etsy for software.
That's the world we are heading for, unless/until companies decide LLMs are ultimately not cost beneficial or overzealous use of them leads to a real hallucination induced catastrophe.
The "AI" generated code is just code extracted from various sources used for training, which could not be used by a human programmer because most likely they would have copyrights incompatible with the product for which "AI" is used.
All my life I could have written any commercial software much faster if I had been free to just copy and paste random code from open-source libraries and applications, from proprietary programs written for former employers, or from various programs written by myself as side projects with my own resources and in my own time, but whose copyrights I am not willing to donate to my current employer, lest I no longer be able to use my own programs in the future.
I could search and find suitable source code for any current task as fast and with much greater reliability than by prompting an AI application. I am just not permitted to do that by the existing laws, unlike the AI companies.
Already many decades ago, it was claimed that the solution for enhancing programmer productivity was more "code reuse". However, "code reuse" has never happened at the scale imagined back then, not for technical reasons, but because of the copyright laws, whose purpose is exactly to prevent code reuse.
Now "AI" appears to be the magical solution that can provide "code reuse" at the scale dreamed of half a century ago, by escaping the copyright constraints.
When writing a program for my personal use, I would never use an AI assistant, because it cannot accelerate my work in any way. For boilerplate code, I use various templates and very smart editor auto-completion, there is no need of any "AI" for that.
On the other hand, when writing a proprietary program, especially for some employer that has stupid copyright rules, e.g. not allowing the use of libraries with different copyrights, even when those copyrights are compatible with the requirements of the product, then I would not hesitate to prompt an AI assistant, in order to get code stripped of copyright, saving thus time over rewriting an equivalent code just for the purpose of enabling it to be copyrighted by the employer.
If you proposed something like GitHub Copilot to any company in 2020, the legal department would’ve nuked you from orbit. Now it’s ok because “everyone is doing it and we can’t be left behind”.
Edit: I just realized this was a driver for why whiteboard puzzles became so big - the ideal employee for MSFT/FB/Google etc was someone who could spit out library quality, copyright-unencumbered, “clean room” code without access to an internet connection. That is what companies had to optimize for.
I think that's part of the reason why devs like working from home and not be spied on.
My last boss told me essentially (paraphrasing), "I budget time for your tasks. If you finish late, I look like I underestimate time required, or you're not up to it. If you finish early, I look like I overestimate. If I give you a week to do something, I don't care if you finish in 5 minutes, don't give it to me until the week is up unless you want something else to do."
We certainly did not receive bonuses based on doing work faster, so unless you are, what incentives are you being driven by to do the work sooner?
Sounds like an "idea guy" rather than an art director or designer. I would do this exact same thing, but on royalty-free image websites, trying to get the right background or explanatory graphic for my finance powerpoints. Unsurprisingly, Microsoft now has AI "generating" such images for you, but it's much slower than what I could do flipping through those image sites.
Why? I feel less competent at my job. I feel my brain becoming lazy. I enjoy programming a lot, why do I want to hand it off to some machine? My reasoning is that if I spend time practicing and getting really good at software engineering, my work is much faster, more accurate and more reliable and maintainable than an AI agent's.
In the long run, using LLMs for producing source code will make things a lot slower, because the people using these machines will lose the human intuition that an AI doesn't have. Be careful.
To pull this out of the games industry for just a moment, imagine this: you are a business and need a logo produced. Would you hire someone at the market price who uses AI to generate something... sort of on-brand they most definitely cannot provide indemnity cover for (considering how many of these dubiously owned works they produce), or would you pay above the market price to have an artist make a logo for you that is guaranteed to be their own work? The answer is clear - you'd cough up the premium. This is now happening on platforms like UpWork and Fiverr. The prices for real human work have not decreased; they have shot up significantly.
It's also happening slowly in games. The concept artists who are skilled command a higher salary than those who rely on AI. If you depend on image-generating AI to do your work, I don't think many game industry companies would hire you. Only the start-ups that lack experience in game production, perhaps. But that part of the industry has always existed - the one made of dreamy projects with no prospect of being produced. It's not worth paying much attention to, except if you're an investor. In which case, obviously it's a bad investment.
Besides, just as machine-translated game localization isn't accepted by any serious publisher (because it is awful and can cause real reputational damage), I doubt any evident AI art would be allowed into the final game. Every single piece of that will need to be produced by humans for the foreseeable future.
If AI truly can produce games or many of their components, these games will form the baseline quality of cheap game groups on the marketplaces, just like in the logo example above. The buyer will pay a premium for a quality, human product. Well, at least until AI can meaningfully surpass humans in creativity - the models we have now can only mimic and there isn't a clear way to make them surpass.
Only if companies value/recognize those real skills over that of the alternative, and even if they do, companies are pretty notorious for choosing whatever is cheapest/easiest (or perceived to be).
It's "hopeful" that the future of all culture will resemble food, where the majority have access to McDonalds type slop while the rich enjoy artisan culture?
Most people's purchasing power being reduced is a separate matter, more related to the eroding middle class and greedflation. Many things can be said about it, but they are less related to the trend I highlighted. Even if, supposing the middle class erosion continues, the scenario you suggest may very well play out.
>Most people's purchasing power being reduced is a separate matter, more related to the eroding middle class and greedflation.
Greedflation, is that where companies suddenly remember to be greedy again after years of forgetting they're allowed to be greedy, which happens by random chance to coincide exactly with periods of expansionary monetary and fiscal policy?
In that case, I welcome an alternative explanation for the human labor price increase on UpWork and Fiverr while AI work replaced work at the previous price level. The same is seen in the hiring of affected disciplines.
e.g.
You have tasks advertised in the distribution $1, $1, $1, $1, $1, $1, $2, $2, $3, $3, $5, $5, $10. The median price is $2, and the average is $2.77.
All the $1 and $2 tasks are replaced with AI. Old tasks get $1 cheaper each as there are more people that can do them. Now the distribution is $2, $2, $4, $4, $9. Median is $4, average is $4.2.
So you have made labour less valuable but the prices advertised go up because only the more expensive work now gets advertised.
It is still much better than when large numbers of people starved if it rained a bit in the wrong week.
He spoke of the grass and flowers and trees
Of the singing birds and the humming bees;
Then talked of the haying, and wondered whether
The cloud in the west would bring foul weather.
The weather and its effect on the food supply was the preoccupation of 90% of the population 90% of the time for all of agricultural man's history (and pre-history), and hunting and gathering was even worse for quality of life.

I am content to use the AI to perform "menial" tasks: I had a text file parsable by field, with some minor quirks (like right-justified text), and was able to specify the field SEMANTICS in a way that made for a prompt producing an ICS calendar file which just imported fine as-is. Getting a year's forward planning from a textual note in some structure into calendar -> import -> from-file was sweet. Do I need to train an AI to use a token/API key to do this directly? No. But thinking about how I say efficiently what fields are, and what the boundaries are, helps me understand my data.
BTW, while I have looked at an ICS file and can see it is TYPE:VALUE, I have no idea of the types, or what specific GMT/Z format it wants for date/time, or the distinctions of meaning for confirmed/pending or the like. These are higher-level constructs which seem to have produced useful, distinct behaviours in the calendar, and the AI's description of what it had done and what I should expect lined up. I did not e.g. stipulate the mappings from semantic field to ICS type. I did say "this is a calendar date" and it did the rest.
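(If it helps demystify it: ICS (RFC 5545) really is mostly TYPE:VALUE lines. UTC date-times use the compact YYYYMMDDTHHMMSSZ form, and the confirmed/pending distinction is the STATUS property. A minimal, made-up event for illustration:)

BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//example//planning//EN
BEGIN:VEVENT
UID:planning-2025-06@example.invalid
DTSTAMP:20250101T090000Z
DTSTART:20250615T140000Z
SUMMARY:Mid-year planning review
STATUS:CONFIRMED
END:VEVENT
END:VCALENDAR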
I used AI to write a Django web app to do some trivial booking stuff. I did not expect the code to run as-is, but it did. Again, could I live with this product? Yes, but the extensibility worries me. Adding features, I am very conscious that one wrong prompt can turn this into... dreck. It's fragile.
Some problems are too big to surrender judgment. Some problems are solved differently depending on what you want to optimize. Sometimes you want to learn something. Sometimes there's ethics.
I like "surrender judgment". It's loss of locus of control. I also find myself asking if there are ways the AI systems "monetize" the nature of problems being put forward for solutions. I am probably implicitly giving up some IPR by asking these questions; I could even be in breach of an NDA in some circumstances.
Some problems should not be put to an anonymous external service. I doubt the NSA wants people using Claude or Mistral or DeepSeek to solve NSA problems. Unless the goal is to feed misinformation or misdirection out into the world.
I’m in a Fortune 500 software company and we are also having AI pushed down our throats, even though so far it has only been useful for small development tasks. However, our tolerance for incorrectness is much, much lower—and many skip-levels are already realizing this.
I've got a refurb homelab server off PCSP with 512gb ram for <$1k, and I run decently good LLM models (Deepseek-r1:70b, llama3.3:70b). Given your username, you might even try pitching a GPU server to them as dual-purpose; LLM + hashcat. :)
But if your company buys a server to run it themselves, that security risk is not present.
So, where are the games with AI-generated content? Where are the reviews that praise or pan them?
(Remember, AI is a tool. Tools take time to learn, and sometimes, the tool isn't worth using.)
You'd hope so, but I'm not so sure. Media developments are not merely additive, at least with bean counters in charge. Certain formats absolutely eclipse others. It's increasingly hard to watch mainstream films with practical effects or animal actors. Even though most audiences would vastly prefer the real deal, they just put up with it.
It's easy to envision a similar future where the budget for art simply isn't there in most offerings, and we're left praising mediocre holdout auteurs for their simple adherence to traditional form (not naming names here).
I (mostly) prefer today's special effects to the ones of the past. The old ones would take me out of the moment, because I'd notice that it was a special effect. (E.g., in the original Star Wars movies you can see the matte lines on VHS, and the strings on the C-3PO puppet in the desert. Really distracting.)
---
A few more points:
A lot of the article reminds me of how recording artists would complain, in the 2000s and 2010s, that they would put a lot of effort into a recording, and then most people were listening to it on a sh*tty MP3. The recording artists didn't understand their audiences. It's hard to know if it's a case where the video game artists don't understand the audiences, or the tool (AI) really isn't bringing value to the process.
> It's easy to envision a similar future where the budget for art simply isn't there in most offerings, and we're left praising mediocre holdout auteurs for their simple adherence to traditional form
I'm not sure what you mean by that.
As a small data point, I don't think AI can make movies worse than they currently are. And they are as bad as they are for commercial but non-AI reasons. But if the means to make movies using AI, or scene-making tools built with a combo of AI and maybe game-engine platforms, puts the ability to make movies into the hands of more artistic people, the result may be more technologically uninteresting but nonetheless more artistically interesting because of narrative/character/storytelling vectors. Better quality for niche audiences. It's a low bar, but it's one possible silver lining.
That said I agree with your second paragraph. I think we will see an explosion of high quality niche products that would never have been remotely viable before this.
People left alone in a race to the bottom: it does not end well, it seems.
This right here is the key. It's that stench of arrogance of those who think others have a "problem" that needs fixing, and that they are in the best position to "solve" it despite having zero context or experience in that domain. It's like calling the plumber and thinking that you're going to teach them something about their job.
The only time I tried it was an evening when I was so bored that I decided to compare my 3-line Python snippet (of which one line was the "def" statement) that generates an alternating pulse clock against the output of an "AI" prompt.
The output code I saw kept me thinking about all those poor souls relying on HypeAI to create anything useful.
Not long ago there was a thread about someone proud of a 35k-LoC cooking web app made entirely from a prompt. What kept ringing from that thread was that the author was proud of the total LoC, as if he thought the more the better. Who knows?
Real wealth creation will come from other domains. These new tools (big data, ML, LLMs, etc) unlock the ability to tackle entirely new problems.
But as a fad, "AI" is pretty good for separating investors from their money.
It's also great for further beating down wages.
Perhaps we would be able to synthesize some text, voice, and imagery. AI can also support coding.
While AI can probably do a snake game (that perhaps runs/compiles) or attempt to more or less recreate well-known codebases like that of Quake (which certainly does not compile), it can only help if the developer does the main work, that is, dissecting problems into smaller ones until some of them can be automated away. That can improve productivity a bit and certainly could improve developer training. If companies were so inclined to invest in their workforce...
Creativity emerges through a messy exploration and human experience -- but it seems no one has time for that these days. Managers have found a shiny new tool to do more with less. Also, AI companies are deliberately targeting executives with promises of cost-cutting and efficiency. Someone has to pay for all the R&D.
Notably, a good number of the examples were just straight-up bad management, irrespective of the tools being used. I also think some of these reactions are people realizing that they work for managers or in businesses that ultimately don't really care about the quality of their work, just that it delivers monetary value at the end.
I also have come to realize that in software development, coding is secondary to logical thinking. Logical thinking is the primary medium of every program, the language is just a means to express it. I may have not memorized as many languages as AI, but I can think better than it logically. It helps me execute my tasks better.
Also, I've been able to do all kinds of crazy and fun experiments thanks to genAI. Knowing myself I know realistically I will never learn LISP, and will always retain just an academic interest in it. But with AI I can explore these languages and other areas of programming beyond my expertise and experience much more effectively than ever before. Something about the interactive chat interface keeps my attention and allows me to go way deeper than textbooks or other static resources.
I do think in many ways it's a skill issue. People conceptualize genAI as a negation of skills, an offloading of skill to the AI, but in actuality grokking these things and learning how to work with them is its own skill. Of course managers just forcing it on people will elicit a bad reaction.
git gud
To quote some anonymous YouTube commenter: "they told me AI would destroy the world, but I didn't expect it to happen like this"
lol I will point out that this has been an enormous problem in the game industry since long, long before generative AI existed.
> Francis says their understanding of the AI-pusher’s outlook is that they see the entire game-making process as a problem, one that AI tech companies alone think they can solve. This is a sentiment they do not agree with.
Requiring to identify someone's gender when that person is anonymous is just pointless bigotry.
It's not a culture war until there's two sides, until a segment of the population throws a hissyfit because new ideas make them uncomfortable.
It was used whenever gender was ambiguous or needed to be protected. Now, with people openly identifying as non-binary, there is no more specific pronoun; that person doesn't consider themselves that gender. You would be referring to them as something that is not what they want to be called, and is not what their social circle refers to them as. It's confusing, especially if you know what to call them but choose not to because you're offended.
> I don't think it's unfair characterize this as an offensive move, waged by one side in a 'culture war', that was done without regard to collateral damage
I would wager, based on the disproportionate and melodramatic language, this has never actually affected you. But you are likely consuming media that tells you everyone is going to draw and quarter you if you mess up a pronoun. This is not the case. Trans people just move on, they're used to it. It literally happens all the time.
You can say that because you live in a privileged country where compelled speech is illegal.
https://www.eurasiareview.com/20062017-canada-law-makes-it-i...
>Trans people just move on, they're used to it. It literally happens all the time
Or they try to cancel you and get you fired
https://www.washingtontimes.com/news/2019/oct/15/i-was-fired...
https://www.newsweek.com/christian-teacher-says-she-was-fire...
This seems like a you problem...
I'm way woker than the average person but I have to admit encountering a singular 'they' breaks my concentration in a distracting way - there's definitely possible ambiguity.
But I'm confused by your sentence regardless of the gender terms. Did they notice the tomatoes in the Garden or in the greenhouse? This is just ambiguous wording in general.
- These are two different sentences, but they're separated with a comma. It should be a period, as it makes no grammatical sense with a comma unless you're trying to make it intentionally confusing.
- You would write "They both went into the greenhouse" if they both entered, or you would write "Y entered the greenhouse and noticed the ripe tomatoes."
- "Before entering the greenhouse, "Y"/"they both" noticed the ripe tomatoes in the Garden."
1. productivity and quality are hard to measure
2. the codebase they are ruining is the same one I am working on.
We're supposed to have a process for dealing with this already, because developers can ruin a codebase without ai.
"more code faster" is not a good thing, it has never been a good thing
I'm not worried about pro AI workers ruining their codebases at their jobs
I'm worried about pro AI coworkers ruining my job by shitting up the codebases I have to work in
Pump the brakes there
You may have bought into some PMs idea of what we do, but I'm not buying it
As professional, employed software developers, the entire point of what we do is to provide value to our employers.
That isn't always by delivering features to users, it's certainly not always by delivering features faster
Why the hand-wringing? Well, for one thing, as a developer I still have to work on that code, fix the bugs in it, maintain it etc. You could say that this is a positive since AI slop would provide for endless job security for people who know how to clean up after it - and it's true, it does, but it's a very tedious and boring job.
But I'm not just a developer, either - I'm also a user, and thinking about how low the average software quality already is today, the prospect of it getting even worse across the board is very unpleasant.
And as for things taking care of themselves, I don't think they will. So long as companies can still ship something, it's "good enough", and cost-cutting will justify everything else. That's just how our economy works these days.
If a company fails to compete in the market and dies, there is no "autopsy" that goes in and realizes that it failed because of a chain-reaction of factors stemming from bad AI-slop code. And execs are so far removed from the code level, they don't know either, and their next company will do the same thing.
What you're likely to end up with is project managers and developers who do know the AI code sucks, and they'll be heeded by execs just as much as they are now, which is to say not at all.
And when the bad AI-code-using devs apply to the next business whose execs are pro-AI because they're clueless, guess who they'll hire?
I believe AI is a variation of this, except a library at least has a license.
This is why when you hear people talk about how great it is at producing X, our takeaway should be "this person is not an expert at X, and their opinions can be disregarded"
They are telling on themselves that they are not experts at the thing they think the AI is doing a great job at
I'm playing devil's advocate somewhat here, but it often seems like there's a bunch of people on both sides using hella motivated reasoning because they have very strong feelings that developed early on in their exposure to AI.
AI is both terrible and wonderful. It's useless at some things and impressive at others. It will ruin whole sectors of the economy and upturn lives. It will get better, and it is getting better, so any limitations you currently observe are probably temporary. The net benefit for humanity may turn out to be positive or negative - it's too early to tell.
That's kind of my problem. I am saying that it mostly only appears impressive to people who don't know better
When people do know better it comes up short consistently
Most of the pro AI people I see are bullish about it on things they have no idea about, like non-technical CEOs insisting that it can create good code
I disagree with that part and I don't think this opinion can be sustained by anyone using it with any regularity in good faith
People can argue whether it's 70/30 or 30/70 or what domains it's more useful in than others but you are overstating the negative.
My hypothesis is that you are invested in the success of AI products somehow, financially or emotionally, and that leads you to be blind to their shortcomings
You keep using them whenever possible because you want them to be useful even though in reality their usefulness is really iffy
It's entirely possible we're both irrational to some degree. But that's irrelevant to answering the question at hand.
Do you claim you are using it regularly and in good faith - enough to honestly form a reliable view on its utility?
I would claim that I am using it in such a way. It would take more effort than I'm prepared to put in to provide evidence of this but please - ask away.
(for the record - I have no financial or professional involvement directly with AI. I simply find the technology fascinating and I use it daily - both playfully and for its practical utility)
During a couple-week tryout period I tried to use it in my daily workflow pretty heavily. I came away with the impression that it is a neat toy, but not really ready to be a full-time tool for me. The other evaluators agreed and our recommendation to our leadership was "This is not really ready for prime time, and while it is impressive it probably isn't really worth the cost"
Anyways fast forward and we're getting AI usage OKRs now, being pushed down on us by non-technical leadership, and what I call "formerly technical" leadership. People who did tech 20 years ago but really don't know what working modern tech is like since they've been in management for too long
So yes. I'm definitely negatively biased, and I'm fine to admit that. I absolutely resent having this stuff forced down on me from leaders that are buying the hype despite being told it is probably not ready to be a daily driver
And I'm seeing the hype spreading through the company, being told by junior devs how amazing it is when I am still iffy on their abilities.
And the absolute worst is when I build a cool proof of concept in an afternoon and everyone is like "wow, AI let you do that so fast now!" and I'm like no, this is just what a good developer can build quickly
So yeah, I'm pretty negative on AI right now. I can still admit the tech itself is impressive, amazing even, and there is no doubt in my mind I could probably find some use for it daily
But I think it is going to be a disaster, because people cannot be trusted to use it responsibly
The social impact is going to be absolutely catastrophic. In some ways it already is
Edit: I am also not really sure why I am supposed to be enthusiastic about technology that business leaders are fairly transparently hoping will make my skillset redundant or at best will make me more productive but I will never realistically see a single extra dollar from the increased productivity
I mostly work solo. I use AI when it's either a) interesting or b) useful. Our experiences are very different and it's no wonder our emotional responses are also very different.
I'm envious that you work solo. I think that would change my perspective on a lot of things
Thanks for the good faith discussion, anyways
It's just a tool, but it is unfortunately a tool that is currently dominated by large-sized corporations, to serve Capitalism. So it's definitely going to be a net-negative.
Contrast that to something like 3D printing, which has most visibly benefited small companies and individual users.
I think AI is different. "Good enough" models are already available under generous licenses, fine-tuning and even training is within the reach of groups of volunteers etc etc
It's very clear the "value" of the LLM generation is to churn out low-cost, low-quality garbage. We already outsourced stuff to Fiverr, but now we can cut people out altogether. Producing "content" nobody wants.
On the other hand, AI can be useful and can accelerate a bit some work.
This is satire right?
I think it's also unfortunate how the advocates for AI replacing artists in gamedev clearly think of art as a chore or a barrier to launch rather than being the whole point of what they're making. If games are art, then it stands to reason the.. art.. that goes into it is just as important as anything else. A game isn't defined just by the logic of the main loop.
There are those who adapt, those who will keep moaning about it and finally those who believe it can do everything.
First one will succeed, second one will be replaced, third one is going to get hurt.
I believe this article and the people it mentions are mostly from the second category. Yet no one in their right mind can deny that AI makes writing code faster, not necessarily better but faster, and games, in the end, are mostly code.
Of course AI is going to get pushed hard by your CEO; he knows that if he doesn't, a competitor who uses it will be able to produce more games, faster and cheaper.
It's actually quite easy, and not uncommon, to deny all of those things. Game code is complex and massively interwoven and relying on generative code that you didn't write and don't fully understand will certainly break down as game systems increase in complexity, and you will struggle to maintain it or make effective modifications -- so ignoring the fact that the quality is lower, there's an argument to be made that it will be "slower" to write in the long term.
I think it's also flat wrong to say games are "mostly codes" -- games are a visual medium and people remember the visual/audio experience they had playing a game. Textures, animations, models, effects, lighting, etc. all define what a game is just as much if not more than the actual gameplay systems. Art is the main differentiating factor between games when you consider that most gameplay systems are derivative of one another.
I can assure you it's not. And people are starting to realise that there is a lot of shit. And know that LLMs generate it.
Or if you want to keep it in the realm of computers, “worse is better” clearly won out. The world uses Linux, not Unix, much to the chagrin of the people who wrote the Linux Haters Handbook (regardless of how tongue in cheek that might have been).
And the take away from history should be that AI might be “shit” now, but it will improve. If you don’t remember the days when “Made in Japan” was a marker of “shit”, that’s because things that are “shit” can improve faster than things that are “not shit” can maintain their lead.
And this is all paid for by people who expect a return. In the middle of a very volatile market.
It’s dead even from a non technical perspective.
From a technical perspective, each diminishing gain requires more money than the last step. That isn't something that ever works.
And yet this is no guarantee they will succeed. In fact, the largest franchises and games tend to be the ones that take their time and build for quality. There are a thousand GTA knock-offs on Steam, but it's R* that rakes in the money.
AI generates code that's harder for humans to understand, so that polishing process takes longer and is even more costly when you have AI shitting out code at breakneck speed.
Take out the word AI and replace it with any other tool that's over-hyped or over-used, and the above statement will apply to any organization.
Obviously some workers have a strong incentive to oppose adoption, because it may jeopardize their careers. Even if the capabilities are over-stated it can be a self-fulfilling prophecy as higher-ups choices may go. Union shops will try to stall it, but it's here to stay. You're in a globally competitive market.
There is a bunch of programmers who like ai, but as the article shows, programmers are not the only people subjected to ai in the workplace. If you're an artist, you've taken a job that has crap pay and stability for the amount of training you put in, and the only reason you do it is because you like the actual content of the job (physically making art). There is obviously no upside to ai for those people, and this focus on the managers' or developers' perspective is myopic.
I think for the most part creatives will still line up for these gigs, because they care about contributing to the end products, not the amount of time they spend using Blender.
Re-read what I wrote. You repeated what I said.
> So: from the perspective of a large part of the workforce it is completely true and rational to say that ai at their job has mostly downsides.
For them, maybe.
If you start by replacing menial labor, there will be more unemployment but you’re not going to build the political will to do anything because those jobs were seen as “less than” and the political class will talk about how good and efficient it is that these jobs are gone.
You need to start by automating away “good jobs” that directly affect middle/upper class people. Jobs where people have extensive training and/or a “calling” to the field. Once lawyers, software engineers, doctors, executives, etc get smacked with widespread unemployment, the political class will take UBI much more seriously.
They'll use their clout—money, lobbying, and media influence—to lock in their advantage and keep decision-making within their circle.
In the end, this setup would just widen the gap, cementing power imbalances as AI continues to reshape everything. UBI will become the bare minimum to keep the masses sedated.
if your job consists of reading from a computer -> thinking -> entering things back into a computer, you're at the top of the list, because you don't need to set up a bunch of new sensors and actuators. In other words… the easier it is to do your job remotely, the more likely it is you'll get automated away
There's also the fact that "they" aren't all one and the same persons with the exact same worldview and interests.
You might say "but why not use just 1% of that GDP on making sure the rest of humanity lives in at least minimal comfort"? But clearly -- we already choose not to do that today. 1% of the GDP of the developed world would be more than enough to solve many horrifying problems in the developing world -- what we actually give is a far smaller fraction, and ultimately not enough.
Of course there is always the issue of “demand”—of keeping the factories humming, but when you are worth billions, your immediate subordinates are worth hundreds of millions, and all of their subordinates are worth a few million, maybe you come to a point where “lebensraum” becomes more valuable to you than another zero at the end of your balance?
When AI replaces the nerds (in progress), they become excess biomass. Not talking about a retarded hollywood-style apocalypse. Economic uncertainty is more than enough to suppress breeding in many populations. “not with a bang, but a whimper”
If you know any of “them”, you will know that “they” went to the same elite prep schools, live in the same cities, intermarry, etc. The “equality” nonsense is just a lie to numb the proles. In 2025 we have a full-blown hereditary nobility.
edit: answer to ianfeust6:
The West is not The World. There are over a billion Chinese, Indians, Africans…
Words mean things. Who said tree hugger? If you are an apex predator living in an increasingly cloudy tank, there is an obvious solution to the cloudiness.
Don't forget fertility rate is basically stagnant in the West and falling globally, so this seems like a waste of time considering most people just won't breed at all.
With white-collar jobs the threat of AI feels more abstract and localized, and you still get talk about "creating new jobs", but when robots start coming off the assembly line people will demand UBI so fast it will make your head spin. Either that or they'll try to set fire to them or block them with unions, etc. Hard to say when because another effort like the CHIPS act could expedite things.
Goldman Sachs doesn't think so.
https://www.fortunebusinessinsights.com/humanoid-robots-mark...
https://finance.yahoo.com/news/humanoid-robot-market-researc...
https://www.mordorintelligence.com/industry-reports/robotics...
They don't even need to be humanoid is the thing.
No, the underlying format of "$LABOR_ISSUE can be solved by $CHANGE_JOB" comes from a place of politics, where a politician is trying to suggest they have a plan to somehow tackle a painful problem among their constituents, and that therefore they should be (re-)elected.
Then the politicians piled onto "coal-miners can learn to code" etc. because it was uniquely attractive, since:
1. No big capital expenditures, so they don't need to promise/explain how a new factory will get built.
2. The potential for remote work means constituents wouldn't need to sell their homes or move.
3. Participants wouldn't require multiple years of expensive formal schooling.
4. It had some "more money than you make now" appeal.
https://en.wikipedia.org/wiki/Learn_to_Code#Codecademy_and_C...
24 The disciple is not above his master, nor the servant above his lord.
25 It is enough for the disciple that he be as his master, and the servant as his lord. If they have called the master of the house Beelzebub, how much more shall they call them of his household?
Forget these new taxes on Americans who buy Canadian hardwood, we can just supply logs from your eyes.
They weren't fired; they weren't laid off; they weren't reassigned or demoted; they got attention and assistance from the CEO and guidance on what they needed to do to change and adapt while keeping their job and paycheck at the same time, with otherwise no disruption to their life at all for now.
Prosperity and wealth do not come for free. You are not owed anything. The world is not going to give you special treatment or handle you with care because you view yourself as an artisan. Those are rewards for people who keep up, not for those who resist change. It's always been that way. Just because you've so far been on the receiving end of prosperity doesn't mean you're owed that kind of easy life forever. Nobody else gets that kind of guarantee -- why should you?
The bottom line is the people in this article will be learning new skills one way or another. The only question is whether those are skills that adapt their existing career for an evolving world or whether those are skills that enable them to transition completely out of development and into a different sector entirely.
lol. I work with LLM outputs all day -- like it's my job to make the LLM do things -- and I probably speak to some LLM to answer a question for me between 10 and 100 times a day. They're kinda helpful for some programming tasks, but pretty bad at others. Any company that tried to mandate me to use an LLM would get kicked to the curb. That's not because I'm "not keeping up", it's because they're simply not good enough to put more work through.
If management is convinced of the benefits of LLMs and the workers are all just refusing to use them, the main problem seems to be a dysfunctional working environment. It's ultimately management's responsibility to work that out, but if the management isn't completely incompetent, people tasked with using them could do a lot to help the situation by testing and providing constructive feedback rather than making a stand by refusing to try and providing grand narratives about damaging the artistic integrity of something that has been commoditized from inception like video game art. I'm not saying that video game art can't be art, but it has existed in a commercial crunch culture since the 1970s.
The CEOs in question bought what they believed to be a power tool, but got what is more like a smarter copy machine. To be clear, copy machines are not useless, but they also aren't going to drive the 200% increases in productivity that people think they will.
But because management demands the 200% increase in productivity they were promised by the AI tools, all the artists and programmers on the team hear "stop doing anything interesting or novel, just copy what already exists". To be blunt, that's not the shit they signed up for, and it's going to result in a far worse product. Nobody wants slop.
Real knowledge here is often absent from the strongest AI proselytisers; others are more realistic about it. It still remains an awesome tool, but a limited one.
AIs today are not creative at all. They find statistical matches. They perform a different work than artists do.
But please, replace all your artwork with AI generated ones. I believe the forced "adapt" phase with that approach would realize itself rather quickly.
And that's enough to drive significant industry-wide change. Just because it can't fully automate everything doesn't mean companies aren't going to expect (and, indeed, increasingly require) their employees to learn how to effectively utilize the technology. The CEO of Shopify recently made it clear that refusal to learn to use AI tools will factor directly into performance evaluations for all staff. This is just the beginning. It's best to be wise and go where the puck is headed.
The article gives several examples of where these tools are used to rapidly accelerate experimentation, pitches, etc. Supposedly this is a bad thing and should be avoided because it's not sufficiently artisan, but no defensible argument was presented as to why these use cases are illegitimate.
In terms of writing code, we're entering an era where developers who have invested in learning how to utilize this technology are simply better and more valuable to companies than developers who have not. Naysayers will find all sorts of false ways to nitpick that statement, yet it remains true. Effective usage means knowing when (and when not) to use these tools -- and to what degree. It also, for now at least, means remaining a human expert about the craft at hand.