Non-contributors dictating how the hen makes bread.
[0] While this fact can be difficult to ascertain, one must remember that mobs are generally much, much louder than normal users, who tend to stay quiet even when the mob is at its loudest.
I don't want to be too meta, but isn't that a description of most HN threads? We show up to criticize other people's work for self-gratification. In this case, we're here to criticize the dev for caving in, even though most of us don't even know what Stoat is, and we don't care.
Corner cases aside, most developers and content creators get mostly negative engagement, because saying "I like your work" is less of an adrenaline rush than saying "you're wrong and I'm smarter than you". Many learn to live with it, and when they make a decision like this one, it's probably because they actually agree with the mob.
I do think it's harmful to cave in, but that doesn't make me think less of the maintainer's character. On the other hand, some of the commenters in the issue might decry them as evil if they made the "wrong" decision.
It's fine to have opinions on the actions of others, but it's not fine to burn them at the stake.
Anyone with a rhetorical opinion who otherwise contributes little to getting cars off assembly lines, homes built, or network cables laid.
In physical terms, the world is full of socialist grifters: all voice, no skill. They're reliant on money because they're helpless on their own.
Engineers could rule the world if they acted collectively rather than starting personal businesses. If we sat on our hands until our demands were met, the world would stop.
A lot of people in charge fear tech unions, because we control whether the world gets shit done.
(For whatever reason, LLM coding tools seem to love to reinvent the square wheel…)
Gee, I wonder which "side" you're on?
It's not true that all AI-generated code looks like it does the right thing but doesn't, or that all human-written code does the right thing.
The code itself matters here. So given code that works, is tested, and implements the features you need, what does it matter if it was completely written by a human, an LLM, or some combination?
Do you also have a problem with LLM-driven code completion? Or with LLM code reviews? LLM assisted tests?
I mean I don’t have a problem with AI-driven code completion as such, but IME it is pretty much always worse than good deterministic code completion, and tends to imagine the functions which might exist rather than the functions which actually do. I’ve periodically tried it, but always ended up turning it off as more trouble than it’s worth, and going back to proper code completion.
LLM code reviews, I have not had the pleasure. Inclined to be down on them; it’s the same problem as an aircraft or ship autopilot. It will encourage reduced vigilance by the human reviewer. LLM assisted tests seem like a fairly terrible idea; again, you’ve got the vigilance issue, and also IME they produce a lot of junk tests which mostly test the mocking framework rather than anything else.
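To make the junk-test pattern concrete, here is a hypothetical sketch (TypeScript with Jest; the module and function names are invented, not from any real project). The test stubs out the collaborator and then asserts only on the stub, so it passes regardless of what the code under test actually does:

    // Hypothetical module under test and its collaborator.
    import { notifyUser } from "./notify";
    import * as mailer from "./mailer";

    jest.mock("./mailer"); // replace the real mailer with auto-mocks

    test("notifies the user", async () => {
      // Configure the mock to succeed...
      (mailer.sendEmail as jest.Mock).mockResolvedValue(true);

      await notifyUser("alice@example.com");

      // ...then assert only that the mock was called. This exercises
      // the mocking framework, not notifyUser: message content, error
      // handling, and retry behavior all remain untested.
      expect(mailer.sendEmail).toHaveBeenCalled();
    });

A reviewer skimming coverage numbers will count this as a passing test, which is exactly the vigilance trap described above.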
Left-pad isn't a success story to be reproduced.
EDIT: I do wonder if some of the enthusiastic acceptance of this stuff is down to the extreme terribleness of the JavaScript ecosystem, tbh. LLM output may actually beat left-pad (beyond the security issues and the absurdity of having a library specifically to left-pad strings, it at least used to be rather badly implemented; see the sketch below), but against a more robust library ecosystem, as exists for pretty much every other language, not so much.
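For reference, this is roughly the shape of the old left-pad implementation (a paraphrased sketch in TypeScript, not the verbatim npm source), next to the built-in that made it obsolete:

    // Roughly how left-pad worked: prepend one character per iteration.
    // Each `ch + s` copies the whole string, so padding to length n can
    // cost O(n^2) where a single allocation would do.
    function leftPadNaive(str: string, len: number, ch = " "): string {
      let s = String(str);
      while (s.length < len) {
        s = ch + s;
      }
      return s;
    }

    // The built-in since ES2017: one call, no dependency.
    const padded = "42".padStart(5, "0"); // "00042"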
I’m also not sure why programmatically generated code is inherently untrustworthy, but code written by some stranger whose competence and motives are completely unknown to you is inherently trustworthy. Do we really need to talk about npm?
That’s just dumb things from the last 20 years. I think you may be suffering from a fairly severe case of survivorship bias.
(If you’re willing to go back _30_ years, well, then you’re getting into the previous AI bubble. We all love expert systems, right?)
On the other hand, normal cryptocurrencies continue to exist because their proponents find them useful, even if many others are critical of their existence.
Technology lives and dies by the value it provides, and both proponents and detractors are generally ill-prepared to determine such value.
The original topic was "not once blah blah...". I don't have to entertain you further, and won't.
It seems hard to cultivate a community that cares about doing the right thing while staying focused and pragmatic about it.
Not necessarily. sqlite doesn't take outside contributions, and seems to not care too much about external opinion (at least, along certain dimensions). sqlite is also coincidentally a great piece of software.
https://discord.com/blog/developing-rapidly-with-generative-...
But if they are switching from Discord, that means they are unhappy with it too, and thus not advocating for it.
So, again, what’s your point?
It's just the perfect world fallacy.
Considering Stoat just (supposedly) removed all LLM code from their code base, there is at least one. I’d expect, based on Meredith Whittaker’s stance regarding LLMs, that Signal also doesn’t have LLM code, though I haven’t verified.
> The commenters simply focus on the thing they saw, and didn't even bother comparing against their existing alternative.
I mean, how do you know? There is one mention of Discord in that thread. Making sweeping statements about “the commenters” doesn’t seem right.
If you use, for example, GitHub Copilot's IDE integration, there's no evidence.
I encourage people here to go read the 3(!) commits reverted. It's all minor housekeeping and trivial bugfixes—nothing deserving of such religious (cultish?) fervor.
[0] As a corollary, those with civility do deserve transparency. It's a tough situation.
I have pretty low expectations for human code in that repository.
> and most likely serves as a way to tie LLM use to slavery, genocide, or oppression without requiring rational explanation.
Assuming and ascribing nefarious motivations to a complete stranger can be considered bad faith, though. Probably not your intention, but that’s how it came across.
Aside from that, the statement is not empirically true (from my perspective at least). Evidence isn't provided either. I'm not saying that the commenter consciously wanted to tie LLM use to those negative things, but it could be done subconsciously, because I have genuinely seen those arguments before.
Hope that clarifies what I’m getting at.