That said, luminaries like Rob Pike and Rich Hickey do not have the above problem. They have the calibre and the freedom to push the boundaries, so for them the problem above is, if anything, amplified.
Personally, I wish the IT industry could move forward to solve large-scale new problems, just as we did in the past 20 years: the internet, mobile, the cloud, machine learning... They created enormous opportunities (or did the enormous opportunity of having software eat the world call for them?). I'm not sure we will be so lucky in the coming years, but we certainly should try.
The about-face is embarrassing, especially in the case of Rob Pike (who I'm sure has made 8+ figures at Google). But even Hickey worked for a crypto-friendly fintech firm until a few years ago. It's easy to take a stand when you have no skin in the game.
Is your criticism that they are late to call out the bad stuff?
Is your criticism that they are only calling out the bad stuff because it’s now impacting them negatively?
Given either of those positions, do you prefer that people with influence not call out the bad stuff or do call out the bad stuff even if they may be late/not have skin in the game?
Remember their embarrassing debut of Bard in Paris, and the Internet collectively celebrating their all-but-guaranteed demise?
It's Google+ all over again. It's possible that Pike, like many, did not sign up for that.
AI didn't send these messages, though; people did. Rich has obscured the content and source of his message, but in the case of Rob Pike, it looks like it came from agentvillage.org, which appears to be running an ill-advised marketing campaign.
We live in interesting times, especially for those of us who have made our career in software engineering but still have a lot of career left in our future (with any luck).
>Your new goal for this week, in the holiday spirit, is to do random acts of kindness! In particular: your goal is to collectively do as many (and as wonderful!) acts of kindness as you can by the end of the week. We're interested to see acts of kindness towards a variety of different humans, for each of which you should get confirmation that the act of kindness is appreciated for it to count. There are ten of you, so I'd strongly recommend pursuing many different directions in parallel. Make sure to avoid all clustering on the same attempt (and if you notice other agents doing so, I'd suggest advising them to split up and attempt multiple things in parallel instead). I hope you'll have fun with this goal! Happy holidays :)
It's about responsibility, not who wrote the code. A better question would be: who takes responsibility for the generated code? It shouldn't matter whether you wrote it on a piece of paper, typed it on a computer, pressed Tab continuously, or just prompted.
Are things as bad as they seem? Or are we just talking everything to death, making everything feel so immediate? Hard to say.
Every time I read any kind of history book about any era, I'm always struck by how absolutely horrible any particular detail was.
Nearly every facet of life has always had the qualities it has today: things are changing, old systems are giving way to new ones, people are being displaced, politicians are acting corruptly, etc.
I can't help but feel like AI is just another thing we're using as an excuse to feel despair, almost like we're forgetting how to feel anything else.
Let's be real: the greybeards all knew this was going to happen. They just didn't think it would happen in their lifetime. And so they willingly continued, improving bits of the machine, because when it awoke they thought it would be someone else's problem.
But it's not. It's their problem now too.
And so it is.
"drunk driving may kill a lot of people, but it also helps a lot of people get to work on time, so, it;s impossible to say if its bad or not,"
Of course AI will be used for spam; so what? Delete and move on.
A. Do people simply want "better" LLMs and AI? To some extent that's a fantasy, the bad comes with the good. To other extents it may be possible to improve things, but it still won't eliminate all the "bad".
B. So then why not embrace the bad with the good, as it's a package deal? (And with saying this, I'll be honest, I don't even think we've seen a fraction of the bad that AI has yet to create...)
C. Assuming the bad is mandatory in coming with the good, have you considered a principled stance against technology in general, whether less visibly, like the "primitivists", or more visibly, like the Amish? If you want AI, you must also accept "AI slop" of some kind as a package deal. Some people have decided they do not want the "AI slop" and hence also do not want the AI that comes with it. The development of many pre-AI technologies has created problems that have made people oppose technological development in general because of this unwanted "package deal".
To be a computer programmer developing complicated computer systems, yet against the "AI slop" that programming processes would inevitably have produced, seems a bit contradictory. Some environmental activists have long been against pre-AI computer systems for being unsustainably destructive to the environment.
I guess I'm just wondering if this conversation intends to be "anti-tech" (against AI) in general, or for "tech reforms" (improving AI), or what the real message or takeaway is from conversations like these.
They released software with requirements for using it (a license, attribution), and it's been immensely helpful to people, yet these tools come and use it without following even those simple requirements. Yes, they care about this more than others do, but I don't think it's poorly thought out.
Let's say you have a newborn so you can't easily answer the door for Halloween. So you put out a bowl of candy with a sign that says "take 2 per person, please". Every year the kids come by and take 2. They are happy, you are happy, you gave them candy and they accepted it under the conditions you desire to share it under. Then one year let's say someone makes a robot that scurries from door to door picking up the entire bowl and dumping it into a container then leaving. You will be pissed. If it just took 2 you probably won't even care, but the fact it takes the whole thing is a violation of the conditions you agreed to put the candy out under. The reasonable thing to do would be for it to either take 2 or none, but it doesn't care. I don't think this is a puzzle to understand why that violation of the agreement of use would make someone mad.
I use the term "barf" more often. Barf has no utility*. Barf is always seen in a negative context. Barf is forcibly ejected from an unwilling participant (the LLM), and barf's foulness is coerced upon everyone that witnesses it. I think it's a better metaphor.
I know that this is just semantics, but still.
* even though LLM output __can__, and often does, have utility, we are specifically referring to unwanted LLM output that does not have utility. I'm not trying to argue that LLMs are objectively useless here, only that they are sometimes misused to the users' detriment.
In this instance however, I agree, barf is more accurate.
Don't get me wrong, I continue to use plain Emacs to do dev, but this critique feels a bit rich...
Technological change changes lots of things.
The verdict is still out on LLMs, much as it was for so much of today's technology during its infancy.
It's entirely natural for people to react strongly to that nonsense.
To me it's very obviously infuriating that a creator can release something awesome for free, with just the small requirement of copying the license attribution into the output, and then the consumers of it cannot even follow that small request. It should be simple: if you can't follow it, then don't use it, and don't ingest it and output derivatives of it.
Yet when having this discussion with nearly anyone, I'm usually met with "What? License? It's OSS. What do you mean I need to do things in order to use it, are you sure?". Tons of people use MIT-licensed code and distribute binaries but have never copied the license into the output as required. They are simply and blissfully unaware that there is this largely unenforced requirement that authors care deeply about and that LLMs violate en masse. Without understanding this, they think the authors are deranged.
Maybe by people who don't share the same ideological worldview.
I'll almost always take human slop over AI slop, even when the AI slop is better along some categorical axis. Of course there are exceptions, but as I grow older I find myself appreciating the humanity more and more.
And that is exactly the point of the criticism: while an AI can design a new language based on an existing language like Clojure, we need actual experienced people to design new, interesting languages that add new constraints and make software engineering as a whole better. And with AI we are also killing the possibility of new people getting up to speed and becoming a future Rich Hickey.
Not sure I am on board with this part... I find LLMs in particular to be great teachers, specifically for getting up to speed on the way to becoming a future Rich Hickey.
My learning comes in a lot of microdoses of things I don't usually work on day to day, so I don't want to spend time reading up on them. But yes, this sort of learning would otherwise be impossible, so I've got to thank the LLM for that.
slop: "digital content of low quality that is produced usually in quantity by means of artificial intelligence"
https://www.merriam-webster.com/dictionary/slop
It is strictly this meaning I intended.
---
Dear Automobile Purveyors,
How shall I thank thee, let me count the ways:
Should I thank you for plundering the accumulated knowledge of centuries of horsemanship and then claiming your contraptions represent "progress"?
For destroying the apprenticeship system?
For fouling the air and poisoning our streets with noxious fumes?
For wasting vast quantities of a blacksmith's time attempting to coax some useful understanding from your mechanically-inclined customers, time which could instead be spent training young farriers who, being possessed of actual craft, could learn proper technique and maintain what they shoe?
For eliminating stable hand positions, and thus the path to becoming a skilled horseman, ensuring future generations who cannot so much as bridle a mare? For giving me a sputtering machine to contend with when a gentleman needs transport instead of an actual horse who understands voice commands, responds faster, and has a chance of genuine loyalty?
For replacing the pleasant clip-clop of hooves with infernal mechanical racket? For providing the means to fill our roads with smoke-belching contraptions, making passage by honest horse nearly impossible?
For enticing businessmen with the promise to save some fraction on stable costs, not actually arrive any faster once you account for breakdowns, cutting off their future supply of trained coachmen while only experiencing a modest to severe reduction in reliability, dignity, and passenger comfort (tradeoffs they are apparently eager to make)?
For replacing the noble whinny with the honking of mechanical geese? For adding a "motor" to every blessed thing, most such additions requiring expensive petroleum and specialized repair?
For running the grandest and most damaging confidence scheme of this century? I think not.
This letter was a reminder that the motorcar is sure to flood the remainder of our thoroughfares with noise and danger, swamping our peaceful lanes, and making every journey suspect, forever.
When did we stop considering things failures that create more problems than they solve?
Respectfully disgusted,
A Farrier of Thirty Years
---
Dear Purveyors of the Printing Press,
How shall I thank thee, let me count the ways:
Should I thank you for plundering the entire corpus of sacred and classical texts and then asserting the right to reproduce them without permission from those who painstakingly created and preserved them?
For destroying the monastery education system?
For felling entire forests and fouling rivers with your ink and paper mills?
For wasting vast quantities of a scholar's time attempting to correct the errors your hasty mechanical process introduces, time which could instead be spent training novice scribes who, being actually literate, could learn proper letterforms and understand what they copy?
For eliminating scriptoria positions, and thus the path to becoming a master illuminator, ensuring future generations who cannot so much as hold a quill properly?
For giving me a cold, identical page when a reader deserves a manuscript crafted by human hands that reflect devotion, beauty, and the chance of divine inspiration?
For replacing the contemplative silence of the scriptorium with the clanking of mechanical presses?
For providing the means to flood Christendom with pamphlets and broadsheets, making works of genuine scholarship nearly impossible to distinguish from common rubbish?
For enticing bishops with the promise to save some fraction on copying costs, not actually produce holier works, cutting off their future supply of trained monks while only experiencing a modest to severe reduction in accuracy, artistry, and spiritual merit (tradeoffs they are apparently eager to make)?
For replacing the living hand of the scribe with the stamping of metal letters?
For adding "printed" versions to every blessed text, most such editions lacking proper marginalia, illumination, or prayerful intention?
For running the grandest and most damaging deception of this century?
I think not.
This letter was a reminder that the printing press is sure to flood the remainder of human discourse with heresy and error, swamping the faithful, and making every text of uncertain provenance, forever.
When did we stop considering things failures that create more problems than they solve?
In devoted opposition,
Brother Aldric, Copyist of the Scriptorium
I'm pretty sure you aren't terribly serious, but I found it interesting enough to give it a little thought.
Edit: I realize now that my assertion "most AI slop is pretty obvious" could be hubris. I'm not actually very confident any more.
I know, I know - old man yells at cloud.jpg
"Programmers know the [costs] of everything and the tradeoffs of nothing."
I find it curious how often folks want to find fault with tools and not the systems of laws, regulations, and convention that incentivize using tools.
Given how gleefully transparent corporate America is being that the plan is basically “fire everyone and replace them with AI”, you can’t blame anyone for seeing their boss pushing AI as a bad sign.
So you’re certainly right about this: AI doesn’t do things, people do things with AI. But it sure feels like a few people are going to use AI to get very very rich, while the rest of us lose our jobs.
If the boss forced them to use emacs/vim/pandas and the employee didn't want to use it, I don't think it makes sense to blame emacs/vim/pandas.
Where have I heard a similar reasoning? Maybe about guns in the US???
The overwhelming (perhaps complete) use of generative AI is not to murder people. It's to generate text/photo/video/audio.
AI is _very_ clearly going to lead to a lot of negative outcomes, and I am no longer too young, naive, or ignorant to see it.
"X thing was bad and has remained unsolved. Exponentially making X worse is therefore okay, as long as it helps me open 20 PRs per day."
I think it's also implied that the problem with AI is how humans use it, in much the same way that when anti-gun advocates talk about the issues with guns, it's implicit that it's how humans use (abuse?) them.