Is a small indie dev "dodgy" if they use AI to unblock a tricky C# problem so they can actually finish their game? Yarn Spinner seems to conflate "Enterprise Scale Replacement" (firing 500 support staff) with "assistive tooling" (a solo dev using GenAI for texture variants).
By drawing such a hard line, they might be signaling virtue to their base, but they are also ignoring the nuance that AI -- like the spellcheckers and compilers before it -- can be a force multiplier for the very creatives they want to protect.
Personally, I do agree that there are many problems with the companies behind major LLMs today, as well as with big tech C-levels who don't understand why AI can't replace engineers. But this post, nicely written as it is, doesn't frame the problem correctly in my mind.
> Is a small indie dev "dodgy" if they use AI to unblock a tricky C# problem so they can actually finish their game?
No amount of framing (unless written into law) would stop small indie devs from doing this. AI is just too efficient and makes too much economic sense. People who are willing to starve for their ideology are always in the minority.
Even artisans who build hand-made wooden furniture use power tools today. The tools that make economic sense will prevail one way or another.
It's because generative AI has become part of the "culture wars" and is therefore black and white to lots of people.
I think it's self-defeating, but virtue signallers gonna virtue signal.
Personally I'd rather a future where everyone used local models.
> You need to realise that if you use them, you’re both financially and socially supporting dodgy companies doing dodgy things. They will use your support to push their agenda. If these tools are working for you, we’re genuinely pleased. But please also stop using them.
> Your adoption helps promote the companies making these tools. People see you using it and force it onto others at the studio, or at other workplaces entirely. From what we’ve seen, this is followed by people getting fired and overworked. If it isn’t happening to you and your colleagues, great. But you’re still helping it happen elsewhere. And as we said, even if you fixed the labour concerns tomorrow, there are still many other issues. There’s more than just being fired to worry about.
It’s really intriguing how an increasingly popular view of what’s “ethical” is anything that doesn’t stand in the way of the ‘proletariat’ getting their bag, and anything that protects content creators’ intellectual property rights, with no real interest in the greater good.
Such a dramatic shift from the music piracy generation a mere decade or two ago.
It’s especially intriguing as a non-American.
Again, as you say, many sensible arguments against AI, but for some people it really takes a backseat to “they took our jerbs!”
Forty years ago I would've had a personal secretary for my engineering job, and most likely a private office. Now I get to manage more things myself, in addition to being expected to be online 24x7 - so I'm not even convinced that eliminating those jobs improves things for the people who now get to self-serve instead of being more directly assisted.
Capitalism is not prepared nor willing to retrain people, drastically lower the workweek, or bring about a UBI sourced from the value of the commons. So indeed, if the promises of AI hold true, a catastrophe is incoming. Fortunately for us, the promises of AI CEOs are unlikely to be true.
If we manage to replace all the workers with AI - that's awesome! We will obviously have to work out a system for everyone to get shelter, and food, and so on. But that post-scarcity utopia of everyone being able to do whatever they want with their time and not have to work, that's the goal, right? That's where we want to be.
Jerbs are an interim nightmare that we have had to do to get from subsistence agriculture to post-scarcity abundance, they're not some intrinsic part of human existence.
One question perhaps is, even if AI can do everything I can do (i.e., has the skills for it), will it do everything I do? I'm sure there are many people in the world with the superset of my skills, yet I bet there are some things only I'm doing, and I don't think a really smart AI will change that.
The luddites didn't destroy automatic looms because they hated technology; they did it because losing their jobs and seeing their whole occupation disappear ruined their lives and lives of their families.
The problem to fix isn't automation, but preventing it from destroying people's lives at scale.
What other people and companies do because I happen to use something correctly (as an assistive technology) is not my responsibility. If someone misuses it or enforces its use in a dysfunctional work environment, that is their doing and not mine.
If a workplace is this dysfunctional, there are likely many other issues that already exist that are making people miserable. AI isn't the root cause of the issue, it is the workplace culture that existed before the presence of AI.
In essence we have an ownership problem. If I own the AI, I can do my work in a couple of hours and have the rest of the day off to enjoy the things I like. If the company owns the AI, I'm out of work. The difference between a world of plenty and beauty and a world of misery for many of us is who owns the AI.
But that's not what companies expect from you, even if you own the AI. They expect you to output more, and when you do, someone else is probably out of work.
Lots of folks are mad that the power of these tools comes from training on things they put out in the open - things they never intended to be used to enrich others, or to exclude them, in the way this technology enables.
Interesting times ahead... it's so powerful that people who ignore it are going to get left behind to some degree. (I say this as someone who actively avoids kubernetes, and it does give off the vibe I've been left behind compared to my peers who do resume-driven development.)
As a result, I think we'll eventually see a mean shift from rewarding those that are "technically competent" more towards those that are "practically creative" (I assume the high end technical competence will always be safe).
I'm sorry friends, I think imma quit and take up farming :$
People with this kind of attitude existed long before AI and will continue to exist.
It’s always been this way in toxic workplaces - LLMs just amplify it.
I know folks tend to frown on security compliance frameworks, but if you honestly implement and maintain most of the controls in there - not just to get a certificate - it really makes a lot of sense and improves security, clarity, and risk management.
But to just copy, paste and move on… terrible.
I have decided I can only use AI that has a net benefit to society. Say, lower-energy-use apps for eink devices.
Left behind where? We all live in the same world, anyone can pick up AI at any moment, it’s not hard, an idiot can do it (and they do).
If you’re not willing to risk being “left behind”, you won’t be able to spot the next rising trend quickly enough and jump on it, you’ll be too distracted by the current shiny thing.
If you take some percent longer to finish some code, because you want that code to maintain some level of "purity", you'll finish slower than others. If this is a creative context, you'll spend more time on boilerplate than on interesting stuff. If this is a profit-driven context, you'll make less money, and have less money for staff. Etc.
> If you’re not willing to risk being “left behind”...
I think this is orthogonal. Some tools increase productivity. Using a tool doesn't blind a competent person... they just have another tool under their belt to use if they personally find it valuable.
The anti-ai stance just makes em even cooler.
The Yarn Spinner team explains they don’t use AI in their game development tool despite having academic and professional backgrounds in machine learning—they’ve written books on it and gave talks about ML in games. Their position shifted around 2020 when they observed AI companies pivoting from interesting technical applications toward generative tools explicitly designed to replace workers or extract more output without additional hiring. They argue that firing people has become AI’s primary value proposition, with any other benefits being incidental. Rather than adopt technology for its own sake (“tool-driven development”), they focus on whether features genuinely help developers make better games. While they acknowledge numerous other AI problems exist and may revisit ML techniques if the industry changes, they currently refuse to use, integrate, or normalize AI tools because doing so would financially and socially support companies whose business model centers on eliminating jobs during a period when unemployment can be life-threatening.