No, it's simply untrue. Players only object to AI art assets, and only when they're painfully obvious. No one cares about how the code is written.
If you actually read the wording of the Steam AI survey, you'll see Steam has completely caved on AI-generated code as well. It's specifically worded like this:
> content such as artwork, sound, narrative, localization, etc.
No 'code' or 'programming.'
If game players are the most anti-AI group, then it's crystal clear that LLM coding is inevitable.
> This stands in stark contrast to code, which generally doesn't suffer from re-use at all, or may even benefit from it, if it's infrastructure.
Yeah, exactly. And LLMs help developers save time by not rewriting the same thing that has been done by other developers a thousand times. I don't know how one can spin this as a bad thing.
> Classic procedural generation is noteworthy here as a precedent, which gamers were already familiar with, because by and large it has failed to deliver.
Spore is well acclaimed. Minecraft is literally the best-selling game ever. The fact that one developer fumbled it doesn't make the idea of procedural generation bad. This is a perfect example of how a tool isn't inherently good or bad; it's up to the tool's wielder.
> That said, Steam's policy has been recently updated to exclude dev tools used for "efficiency gains", but which are not used to generate content presented to players.
I only quoted the first paragraph, but there is more.
Household-name game studios have long had custom AI art-asset tooling that can create art quickly in their specific style.
AI is a tool, and as Steve Jobs said, you can hold it wrong. It's like plastic surgery: you only notice the bad ones and object to them. An expert might detect the better jobs, but regular folks don't know and for the most part don't care unless someone else tells them to care.
And then they go around labeling EVERYTHING as AI.
"So you hated the TV Series Ugly Betty then?"
"What? that's not CGI!"
This video is 15 years old
People spin all kinds of things if they believe (accurately or not) that their livelihood is on the line. The knee-jerk "AI universally bad" movement seems just as absurd to me as the "AGI is already here" one.
> Spore is well acclaimed. Minecraft is literally the best-selling game ever.
Counterpoint: Oblivion, one of the first high-profile games to use procedural terrain/landscape generation, seemed very soulless to me at the time.
As I see it, it's all a matter of how well it's executed. In the best case, a skilled artist uses automation to fill in mechanical rote work (in the same way that e.g. Renaissance artists didn't make every single brushstroke of their masterpieces themselves).
In the worst (or maybe even average? time will tell) case, there are only minimal human-made artistic decisions flowing into a work and the output is a mediocre average of everything that's already been done before, which is then rightfully perceived as slop.
I might be misremembering but wasn't the Oblivion proc-gen entirely in the development process, not "live" in the game, which means...
> "In the best case, a skilled artist uses automation to fill in mechanical rote work"
...is what Bethesda did, no?
I can type up what I want much faster and be sure it's at least solving the right problem, even if it may have bugs.
There are also tools to generate boilerplate that work much much better than LLMs. And they're deterministic.
This reads like a skill issue on your end, at least in part on the prompting side.
It does take time to reach a point where you can prompt an LLM sufficiently well to get a correct answer in one shot, developing an intuitive understanding of what absolutely needs to be written out and what can be inferred by the model.
In the past 2 months I've been using all the SOTA models to help me design a new DSL for narrative scripting (such as game storytelling) and a C# runtime implementation of the script player engine.
The language spec and design are about 95% authored by me up to this point; I have the LLMs work on the 2nd layer (the implementation specs/guidelines) and the 3rd layer (the concrete C# implementation).
Since it's a new language, I consider it a somewhat novel task for LLMs (at least, not boilerplate like an HTTP API or a CRUD service). I'd say these LLMs have been very helpful. You can tell they sometimes get confused and have trouble complying with the unfamiliar language spec and design, but they are mostly smart enough to carry out the objectives, and they got better and better once the project was on track and had plenty of files/resources to read and reference.
And I'd also say "prompt better" is an important factor, just much more nuanced/complicated than it sounds. I started with zero experience with LLM agents, have learned a lot about how to tame them, and developed a protocol for collaborating with agents. All of this came from countless trial and error, but in the end it boils down to "prompt better."
Do you ever ask why you're writing the same thing over and over again? That's literally the foundational piece of being an engineer: recognizing when you're reinventing the wheel while there's a perfectly good wheel nearby.
Most of what we do in programming is some small novel idea at a high level and repeatable boilerplate at a low level. A fair question is: why hasn't the boilerplate been automated away as libraries or other abstractions? LLMs are especially good at fuzzily abstracting repeatable code, and it's simply not possible to get the same result from other, manual methods.
I empathise, because it is distressing to realise that most of the value we provide is not in those lines of code but in that small innovation at the higher layer. No developer wants to hear that; they would like to think every word they write is a creation from their soul.
If development velocity were truly an important factor in these businesses, we'd have migrated away from that Gang of Four-ass Java 8 codebase, given these poor souls offices, or at least cubicles to reduce the noise, and we wouldn't make them spend 3 hours a day in ceremonial meetings.
The reason none of this happens is that even if these developers crank out code 10x faster, by the time it's made it past all the inertia and inefficiencies of the organization, the change is nearly imperceptible. The bill for the new office and the 2-year refactoring effort, though, is much more tangible.
Abstractions are the source of bloat. Without abstractions you can always reduce bloat, or you can reduce bloat in your glue, but you can't reduce glue.
It takes discipline to NOT create arbitrary function signatures and short-lived intermediate data structures or type definitions. This is the beginning of boilerplate.
Many advances in removing boilerplate come from realizing that your 5 function calls and 10 intermediate data structures or type definitions essentially compute a thing you could do with 0 function calls, 0 custom datatypes, and fewer lines of code.
The abstraction hides how simple the thing you want is.
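A minimal TypeScript sketch of that idea (all names invented for illustration): the "abstracted" version introduces a custom type and two helper functions just to pick the longest string, while the direct version computes the same thing with zero custom types and zero helpers.

    // "Abstracted" version: one custom type and two helper functions
    // just to find the longest name in a list.
    interface ScoredItem { name: string; score: number; }

    function toScored(names: string[]): ScoredItem[] {
      return names.map((name) => ({ name, score: name.length }));
    }

    function pickBest(items: ScoredItem[]): string {
      return items.reduce((a, b) => (b.score > a.score ? b : a)).name;
    }

    // "Direct" version: zero custom types, zero helpers, fewer lines.
    function longestName(names: string[]): string {
      return names.reduce((a, b) => (b.length > a.length ? b : a));
    }

    console.log(pickBest(toScored(["ada", "grace", "alan"])));  // "grace"
    console.log(longestName(["ada", "grace", "alan"]));         // "grace"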
Problem is that all open source code looks like the bloat described above, so LLMs have no idea how to write code without boilerplate. The only place where I've seen it work is in shaders, which are usually written to avoid the common pitfalls of abstraction.
LLMs are incapable of writing a big program in one function and one file that does what you want. Splitting the program into functions or even multiple files is a step you take only after a lot of time, yet all open source looks nothing like that.
It's weird to look at something that recent and think how dated it reads today. I also wrote about the Turing test as some major milestone of AI development, when in fact the general response to programs passing the Turing test was to shrug and minimize it.
To me, a function is a single sentence within a book. It may approach the larger picture, but that sentence can be reviewed, changed, switched around, killed by an editor.
Some programmers believe they're fantastic sentence writers. They brag about how good their sentences are; their entire worldview has been built on being good sentence creators. Especially within enterprises, you may spend your entire life writing sentences without ever really understanding the whole book.
If your worldview has been built on sentence creation, and suddenly there's a sentence creator AI, you're going to be deathly afraid of it replacing you as a sentence writer.
Care to share some examples that prove your point?
Probably the original sin here is that we started calling them programming languages instead of just 'computer code'.
Also - most of your work is far more than mere novelty! There are intangibles like your intellectual labor and time.
There is also the cost reason: somebody selling an abstraction will try to monetize it, which means not everyone will want to or be able to use it (or it will take forever / stay unfinished if it's open/free).
There's also the platform lockin/competition aspect...
However, LLMs destroy this economic incentive utterly. It now seems most productive to code in fairly low-level TypeScript and let the machines spew tons of garbage code for you.
FORTRAN ("formula translator") was one of the first programs ever written and it was supposed to make coding obsolete. Scientists will now be able to just type in formulas and the computer will just calculate the result, imagine that!
Yes, it is. Literally every programming innovation claims to "make coding obsolete". I've seen a half dozen in my own lifetime.
I also don't know what work you do, but I would not characterize the codebases I work in as "small bits of novelty" on boilerplate. Software engineering is always a holistic systems undertaking, where every subcomponent and the interactions between them have to be considered.
I still think LLMs as fancy autocomplete is the truth and not even a dig. Autocomplete is great. It works best when there's one clear output desired (even if you don't know exactly what it is yet). Nobody is surprised when you type "cal" and California comes up in an address form, so why should we be surprised when you describe a program and the code is returned?
Knowledge has the same problem as cosmology: the part we can observe doesn't seem to account for the vast majority of what we know is out there. Symbolic knowledge encompasses unfathomable multitudes and will eventually be solved by AI, but the "dark matter" of knowledge that can't easily be expressed in language or math is still out in the wild.
Cue the smug Lisp weenies.
The period is now. Just add "be a great teacher but don't attempt to write code" in the prompt.
(yes, it's a teacher who gets things wrong from time to time. You still need to refer to the source and ground truth just like when you're taught by a human teacher.)
I'm not sure if you ever had a teacher or instructor that you didn't trust, because they were a compulsive liar, had an addiction, or had some other issue. I didn't (at least not that I can remember), but I know I would be VERY on guard about it. I imagine I would consequently be quite stressed learning from them, even if they were brilliant, kind, etc.
It would feel a bit like walking on thin ice to get to a beautiful island. Sure, it's not infeasible and if you somehow make it, it might be worth the risk, but honestly wouldn't you prefer a slower boat?
I think you can build a very easy workflow that reinforces rather than replaces learning. I've used a citation flow to link and put into practice a ton of more advanced programming techniques that I found incredibly difficult to locate and research before AI.
I'd say the comparison is faulty; it's more akin to swimming to an island (no AI) vs using a boat. You control the speed and direction of the boat, which also means you have the responsibility of directing it to the correct location.
I think that's actually deeply different. If a human keeps apologizing because they are caught in a lie, or just a mistake, you distrust them a LOT more. It's not normal to shrug off a problem and then REPEAT it.
I imagine the cost of a mistake is exponential, not linear. So when somebody says "oops, you got me there!" I don't mistrust them just marginally more, I distrust them a LOT more and it will take a ton of effort, if even feasible, to get back to the initial level of trust.
I do not think it's at all equivalent to what real humans do. Yes, we make mistakes, but the humans you trust and want to partner with are precisely the ones who are accountable when they make mistakes.
Maybe "Artisanal Coding" will be a thing in the future?
Programming via LLMs is just the logical conclusion of this niche of industrialized software development which favours quantity over quality. It's basically replacing the human bots who translate specs written by architecture astronauts into code without having to think on their own.
And good riddance to that type of 'spec-in-code-out' type of programming, it should never have existed in the first place. Let the architecture astronauts go wild with LLMs implementing their ideas without having to bother human programmers who actually value their craft ;)
Like you can still make Karelian pies[0] anywhere, but unless you follow the exact recipe, you can't sell them as "Karelian pies". It's good for the heritage and good for the customers.
You can also make any cheeses and wines and whatever you like, it's just how you name them and market them that's regulated.
This is an absolute chef-kiss double-entendre.
We are only craftsmen to ourselves and each other. To anyone else we are factory workers producing widgets to sell. Once we accept this then there is little surprise that the factory owners want us using a tool that makes production faster, cheaper. I imagine that watchmakers were similarly dismayed when the automatic lathe was invented and they saw their craft being automated into mediocrity. Like watchmakers we can still produce crafted machines of elegance for the customers who want them. But most customers are just going to want a quartz.
Guilty until proven innocent will satisfy the author's LLM-specific point of contention, but it is hardly a good principle.
He is proposing not to make a judgement at all. If the AI company CLAIMS something, they have to prove it, like they do in science. Any claim is treated as such: a claim. The trick is to not claim anything at all and let the users come to the conclusion, all on their own, that it's magic. And it's true that LLMs by design cannot cite sources. Thus, by design, they cannot tell you whether they made something up with no regard for it making sense or working, whether they just copy-pasted something that either works or is crap, or whether they somehow created something new that is fantastic.
All we ever see are the success stories. The success after the n-th try and tweaking of the prompt and the process of handling your agents the right way. The hidden cost is out there, barely hidden.
This ambiguity is benefitting the AI companies and they are exploiting it to the maximum, going as far as illegally obtaining pirated intellectual property from an entity that is banned in many countries on one end of their utilization pipeline and selling it as the biggest thing ever on the other end. And yes, all the doomsday stories of AI taking over the world are part of the marketing hype.
>AI output should be treated like a forgery
Who's passing this judgement? The author? Civil society?
Love it. Calling it "Copilot" in itself is a lie. Marketing speak to sell you an idea that doesn't exist. The idea is that you are still in control.
By the way, you can make git commits with the AI as author and yourself as committer, which makes git blame easier.
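For illustration, a minimal git sketch (the author name/email below are made up; the committer identity comes from your own git config):

    # Record the model as the commit's author; you remain the committer.
    git commit --author="Claude <noreply@anthropic.com>" -m "Add retry logic"

    # Show both roles side by side in the log:
    git log -1 --format='%h author:%an committer:%cn'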
On a philosophical level I do not get the discussions about paintings. I love a painting for what it is, not for being the first or the only one. An artist who paints something that I can't distinguish from a Van Gogh is a very skillful artist, and the painting is very beautiful. Me labeling it "authentic" or not should not affect its artistic value.
For a piece of code you might care about many things: correctness, maintainability, efficiency, etc. I don't care if someone wrote bad (or good) code by hand or with an LLM; it is still bad (or good) code. Someone, whether an LLM or a software developer, has to decide whether the code fits the requirements, and that will not go away.
> but also a specific geographic origin. There's a good reason for this.
Yes, but the "good reason" is more probably the desire of people to have monopolies and not change. Same as with the paintings, if the cheese is 99% the same I don't care if it was made in a region or not. Of course the region is happy because means more revenue for them, but not sure it is good.
> To stop the machines from lying, they have to cite their sources properly.
I would be curious how this could be applied to a human. Should we also cite all the courses and articles that we have read on a topic when we write code?
The value of a piece is definitely not completely tied to its physical attributes, but the story around it. The story is what creates its scarcity and generates the value.
It is similar for collectible items. If I had in my possession the original costume that Michael Jackson wore in Thriller, I am sure I could sell it for thousands of dollars. I can also buy a copy for less than a hundred.
Same with luxury brands. Their price is not necessarily linked to their quality, but to the status they bring and the story they tell (i.e. wearing this transforms me into somebody important).
It can seem quite silly, but I think we are all doing it to some extent. While you said that a good forgery shouldn't affect one's opinion of the object (and I agree with you), what about AI-generated content? If I made a novel painting in the style of Van Gogh, you might find it beautiful. What if I told you I prompted it and then painted it? What if I just printed it? There are levels of involvement that we are all willing to accept differently.
There are a lot of such artists who can do that after having seen Van Gogh's paintings. Only Van Gogh (as far as we know) painted those without having seen anything like them before; in other words, he had a new idea.
Should we also say that implementing Dijkstra's algorithm is irrelevant because "you did not have the idea"?
It's great to credit people who have an idea first. I fail to see how using an idea is "bad" or "not worthy"; ideas should be spread and used, not locked up by the first one who had them (except maybe for some small time period).
Even if you aren't in the group, there is clearly a group of people who appreciate seeing the original, the thing that modified our collective artistic trajectory.
Forgeries and master studies have a long history in art. Every classically trained artist worth their salt has a handful of forgeries under their belt. Remaking work that you enjoy helps you appreciate it further, understand the choices the artist made, and get a better feel for how they wielded the medium. Though these forgeries are for learning and not intended to be pieces in their own right.
I go to a museum to see a curated collection with explanations, in a place that prevents distractions (I can't open a new tab), going with people who might be interested in talking about what they see and feel. It's a social and personal experience as well, on top of information gathering.
> there is clearly a group of people who appreciate seeing the original,
There are many people interested in many things. Do you want to say that "because some people think it is important, it must be important"? There have been many people with really weird and despicable ideas throughout history, and while I am neutral on this one, numbers alone definitely don't convince me.
> simply looking at a jpg.
Technically a jpg would not work because JPEG compression is lossy. But a png at the correct resolution might do the trick for some things (paintings that you view from afar), but not for others. Museums have many objects that would be hard to put in an image (statues, clothes, bones, tables, etc.). You definitely can't put https://en.wikipedia.org/wiki/Comedian_(artwork) in a jpg - but the discussion surrounding it touches on topics discussed here.
Yea this is the kind of BS and counter-productiveness that irrational radicals try to push the crowd towards.
The idea that one owns your observations of their work and can collect rent on it is absurd.
A short design note and tribute to Richard Stallman (RMS) and St. IGNUcius for the term Pretend Intelligence (PI) and the ethic behind it: don’t overclaim, don’t over-trust, and don’t let marketing launder accountability.
https://github.com/SimHacker/moollm/blob/main/designs/PRETEN...
1. What PI Is
Richard Stallman proposes the term Pretend Intelligence (PI) for what the industry calls “AI”: systems that pretend to be intelligent and are marketed as worthy of trust. He uses it to push back on hype that asks people to trust these systems with their lives and control.
From his January 2026 talk at Georgia Tech (YouTube, event, LibreTech Collective):
https://www.youtube.com/watch?v=YDxPJs1EPS4
> "So I've come up with the term Pretend Intelligence. We could call it PI. And if we start saying this more often, we might help overcome this marketing hype campaign that wants people to trust those systems, and trust their lives and all their activities to the control of those systems and the big companies that develop and control them." — Richard Stallman, Georgia Tech, 2026-01-23. Source: YouTube (full talk) — "Dr. Richard Stallman @ Georgia Tech - 01-23-2026," Alex Jenkins, CC BY-ND 4.0; transcript in video description.
So PI is both a label (call it PI, not AI) and a stance: resist the campaign to make people trust and hand over control to systems and vendors that don’t deserve that trust. In MOOLLM we use the same framing: we find models useful when we don’t overclaim — advisory guidance, not a guarantee (see MOOAM.md §5.3).
[...]
Richard Stallman critiques AI, connected cars, smartphones, and DRM (slashdot.org):
https://news.ycombinator.com/item?id=46757411
https://news.slashdot.org/story/26/01/25/1930244/richard-sta...
Gnu: Words to Avoid: Artificial Intelligence:
https://www.gnu.org/philosophy/words-to-avoid.html#Artificia...
...currently not responding... archive.org link:
https://web.archive.org/web/20260303004610/https://www.gnu.o...
Claude makes me mad: even when I ask for small code snippets to be improved, it increasingly starts to comment "what I could improve" in the code instead of generating the embarrassingly easy code with the improvement itself.
If I point that out with something like "include that yourself", it does a decent job.
That's so _L_azy.
Has this really been people's experience?
I develop and maintain several small FOSS projects, some of which are moderately popular (e.g. a 90,000-user Thunderbird extension; a library with 850 stars on GitHub). So, I'm no superstar or in the center of attention, but also not a tumbleweed. I've not received a single AI-slop pull request so far.
Am I an exception to the rule? Or is this something that only happens for very "fashionable" projects?
I have accepted that reading 100% of the generated code is not possible.
I am attempting to find methods that allow clean code to be generated nonetheless.
I am using an extremely strict DDD architecture. Yes, it is totally overkill for a one-man project.
Now I only have to be intimate with two parts of the code:
* the public facade of the modules, which also happens to be the place where authorization is checked.
* the orchestrators, where multiple modules are tied together.
If the innards of a module are a little sloppy (code duplication et al.), it is not really an issue, as they do not have an effect at a distance on the rest of the code.
I have to be on the lookout, though. The agent sometimes tries to break the boundaries between the modules, cheating its way through with things like direct SQL queries.
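For illustration, a minimal TypeScript sketch of what such a module facade might look like (all names, including the authorize helper, are invented, not taken from the comment above):

    // The module's public facade: the only exported entry point, and the
    // one place where authorization is checked.
    type User = { id: string; permissions: string[] };
    type Order = { id: string; total: number };
    type Invoice = { orderId: string; amount: number };

    function authorize(user: User, permission: string): void {
      if (!user.permissions.includes(permission)) {
        throw new Error(`not authorized: ${permission}`);
      }
    }

    // Internal helper: never exported, so any sloppiness stays contained
    // inside the module and cannot act at a distance.
    function buildInvoice(order: Order): Invoice {
      return { orderId: order.id, amount: order.total };
    }

    export function createInvoice(user: User, order: Order): Invoice {
      authorize(user, "billing:create");
      return buildInvoice(order);
    }

Because only the facade is exported, a review of the facades and orchestrators covers every path into the generated internals.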