That said, luminaries like Rob Pike and Rich Hickey do not have the above problem: they have the calibre and the freedom to push the boundaries, so for them the above problem is amplified even further.
Personally, I wish the IT industry could move forward to solve large-scale new problems, just like we did in the past 20 years: the internet, mobile, the cloud, machine learning... They created enormous opportunities (or did the enormous opportunity of having software eat the world call for them?). I'm not sure we will be so lucky in the coming years, but we certainly should try.
The about-face is embarrassing, especially in the case of Rob Pike (who I'm sure has made 8+ figures at Google). But even Hickey worked for a crypto-friendly fintech firm until a few years ago. It's easy to take a stand when you have no skin in the game.
Is your criticism that they are late to call out the bad stuff?
Is your criticism that they are only calling out the bad stuff because it’s now impacting them negatively?
Given either of those positions, do you prefer that people with influence not call out the bad stuff, or that they do call it out even if they may be late / not have skin in the game?
Remember their embarrassing debut of Bard in Paris and the Internet collectively celebrating their all but guaranteed demise?
It's Google+ all over again. It's possible that Pike, like many, did not sign up for that.
AI didn't send these messages, though, people did. Rich has obscured the content and source of his message - but in the case of Rob Pike, it looks like it came from agentvillage.org, which appears to be running an ill-advised marketing campaign.
We live in interesting times, especially for those of us who have made our career in software engineering but still have a lot of career left in our future (with any luck).
>Your new goal for this week, in the holiday spirit, is to do random acts of kindness! In particular: your goal is to collectively do as many (and as wonderful!) acts of kindness as you can by the end of the week. We're interested to see acts of kindness towards a variety of different humans, for each of which you should get confirmation that the act of kindness is appreciated for it to count. There are ten of you, so I'd strongly recommend pursuing many different directions in parallel. Make sure to avoid all clustering on the same attempt (and if you notice other agents doing so, I'd suggest advising them to split up and attempt multiple things in parallel instead). I hope you'll have fun with this goal! Happy holidays :)
It's about responsibility, not who wrote the code. A better question would be: who takes responsibility for the generated code? It shouldn't matter whether you wrote it on a piece of paper, on a computer, by pressing Tab continuously, or just by prompting.
"drunk driving may kill a lot of people, but it also helps a lot of people get to work on time, so, it;s impossible to say if its bad or not,"
Don't get me wrong, I continue to use plain Emacs to do dev, but this critique feels a bit rich...
Technological change changes lots of things.
The jury is still out on LLMs, much as it was for so much of today's technology during its infancy.
It's entirely natural for people to react strongly to that nonsense.
A human writing twelve polemical questions, many of which only make sense within their ideological worldview or contain factual errors, just to vent their anger on the internet, has been considered substandard slop since before LLMs were a thing.
Perhaps your views would be more persuasive if, instead of frothing out rage slop, you showed the superiority of human authors to LLMs?
…because posts like this do the opposite, making it seem like bloggers are upset that LLMs are homing in on their slop-pitching grift.
Edit:
For fun, I had ChatGPT rewrite his post and elaborate on the topic. I think it did a better job explaining the concerns than most LLM critics.
https://chatgpt.com/share/6951dec4-2ab0-8000-a42f-df5f282d7a...
This is substandard slop though, being devoid of any real critique and merely a collection of shotgunned, borderline-incoherent jabs. Criticizing LLMs by turning in even lower quality slop is behavior you’d expect from people who feel threatened by LLMs rather than people addressing a specific weakness in or problem with LLMs.
So like I said:
Perhaps he should try showing me that LLMs are inferior by not writing even worse slop, like this.
And that is all on point with the criticism: while an AI can design a new language based on an existing language like Clojure, we need actual experienced people to design new, interesting languages that add new constraints and make software engineering as a whole better. And with AI we are also killing the possibility of new people getting up to speed and becoming a future Rich Hickey.
Maybe by people who don't share the same ideological worldview.
I'll almost always take human slop over AI slop, even when the AI slop is better along some categorical axis. Of course there are exceptions, but as I grow older I find myself appreciating the humanity more and more.
I find it curious how often folks want to find fault with tools and not the systems of laws, regulations, and convention that incentivize using tools.
Given how gleefully transparent corporate America is being that the plan is basically “fire everyone and replace them with AI”, you can’t blame anyone for seeing their boss pushing AI as a bad sign.
So you’re certainly right about this: AI doesn’t do things, people do things with AI. But it sure feels like a few people are going to use AI to get very very rich, while the rest of us lose our jobs.
If the boss forced them to use emacs/vim/pandas and the employee didn't want to use it, I don't think it makes sense to blame emacs/vim/pandas.
Where have I heard similar reasoning? Maybe about guns in the US???
The overwhelming (perhaps complete) use of generative AI is not to murder people. It's to generate text/photo/video/audio.
I think it's also implied that the problem with AI is how humans use it, in much the same way that when anti-gun advocates talk about the issues with guns, it's implicit that it's how humans use (abuse?) them.