150 points by austinbirch 13 hours ago | 17 comments
  • afandian 13 hours ago
    It's heartwarming to see Rich Hickey corroborating Rob Pike. All the recent LLM stuff has made me feel that we suddenly jumped tracks into an alternate timeline. Hearing these articulate voices from respected figures is a nice confirmation that this is indeed a strange new world.
    • hintymad 9 hours ago
      AI coding tools are effective for many because, unfortunately, our work has become increasingly repetitive. When someone marvels at how a brief prompt can produce functioning code, it simply means the AI has delivered a more imaginative or elaborate specification than that person could have envisioned, even if the resulting code is merely a variation of what has already been written countless times before. Maybe there's nothing wrong with that, as not everyone is fortunate enough to work on new problems and get to implement new ideas. It's just that repetitive work is bound to be automated away, and so we will increasingly see the problems Rich rants about.

      That said, luminaries like Rob Pike and Rich Hickey do not have the above problem. They have the calibre and the freedom to push the boundaries, so to them the problem described above is amplified even further.

      Personally, I wish the IT industry could move forward to solve large-scale new problems, just as we did in the past 20 years: the internet, mobile, the cloud, machine learning... They created enormous opportunities (or did the enormous opportunity of having software eat the world call for them?). I'm not sure we will be so lucky in the coming years, but we certainly should try.

    • dvt 12 hours ago
      This is all just cynical bandwagoning. Google/Facebook/etc. have done provable, irreparable damage to the fabric of society via ads, data farming, and promulgating fake news, but now that it's in vogue to hate on AI as an "enlightened" tech genius, we're all suddenly worried about... what? Water? Electricity? Give me a break.

      The about-face is embarrassing, especially in the case of Rob Pike (who I'm sure has made 8+ figures at Google). But even Hickey worked for a crypto-friendly fintech firm until a few years ago. It's easy to take a stand when you have no skin in the game.

      • hshdhdhj4444 11 hours ago
        I don’t understand what your actual criticism is.

        Is your criticism that they are late to call out the bad stuff?

        Is your criticism that they are only calling out the bad stuff because it’s now impacting them negatively?

        Given either of those positions, would you prefer that people with influence not call out the bad stuff at all, or that they call it out even if they may be late or have no skin in the game?

      • nunez 10 hours ago
        It's worth mentioning that AI in its current form was not AT ALL a part of Google's corporate strategy until Microsoft and OpenAI forced their hand.

        Remember their embarrassing debut of Bard in Paris and the Internet collectively celebrating their all but guaranteed demise?

        It's Google+ all over again. It's possible that Pike, like many, did not sign up for that.

      • llmslave2 12 hours ago
        Even ignoring that someone's views can change over time, working on an OSS programming language at Google is very different from designing algorithms to get people addicted to scrolling.
        • dvt 12 hours ago
          Where do you think his "distinguished engineer" salary came from, I wonder? There are plenty of people working on OSS in their free time (or in poverty, for that matter).
          • llmslave2 12 hours ago
            Shouldn't you be thinking "it's nice Google diverted some of their funds to doing good" instead of trying to tie Pike's contributions in with everything else?
            • dvt 11 hours ago
              This conversation isn't about Google's backbone, it's about Pike's and Hickey's. It's easy to moralize when you've got nothing to lose, and the lecture holds much less water for it.
      • duped 11 hours ago
        Both can be bad. What's hard, though, is convincing the people who work on these things that they're actively harming society (in other words, most people working on ads and AI are not good people; they're the bad guys but don't realize it).
  • sethev 12 hours ago
    This and Rob Pike's response to a similar message are interesting. There's outrage over the direction of software development and the effects that generative AI will have on society. Hickey has long been an advocate for putting more thought (hammock time) into software development. Coding agents on the other hand can take little to no thought and expand it into thousands of lines of code.

    AI didn't send these messages, though, people did. Rich has obscured the content and source of his message - but in the case of Rob Pike, it looks like it came from agentvillage.org, which appears to be running an ill-advised marketing campaign.

    We live in interesting times, especially for those of us who have made our career in software engineering but still have a lot of career left in our future (with any luck).

    • llmslave2 12 hours ago
      Not to be pedantic, but AI absolutely sent those emails. The instructions were very broad and did not specify email, afaik. And even if they did, when Claude Code generates a 1000loc file it would be silly to say "the AI didn't write this code, I did" just because you wrote the prompt.
      • 000ooo000 11 hours ago
        Source?
        • llmslave2 11 hours ago
          • 000ooo000 10 hours ago
            Save anyone else the click

            >Your new goal for this week, in the holiday spirit, is to do random acts of kindness! In particular: your goal is to collectively do as many (and as wonderful!) acts of kindness as you can by the end of the week. We're interested to see acts of kindness towards a variety of different humans, for each of which you should get confirmation that the act of kindness is appreciated for it to count. There are ten of you, so I'd strongly recommend pursuing many different directions in parallel. Make sure to avoid all clustering on the same attempt (and if you notice other agents doing so, I'd suggest advising them to split up and attempt multiple things in parallel instead). I hope you'll have fun with this goal! Happy holidays :)

      • kmlx 10 hours ago
        > when Claude Code generates a 1000loc file it would be silly to say "the AI didn't write this code, I did" just because you wrote the prompt.

        It's about responsibility, not who wrote the code. A better question would be: who takes responsibility for the generated code? It shouldn't matter whether you wrote it on a piece of paper, on a computer, by pressing Tab continuously, or just by prompting.

  • perfmode 12 hours ago
    If you’re going to pen a letter to Rich Hickey, the least you can do is spring for Opus.
  • ta9000 11 hours ago
    It wasn't AI that decided not to hire entry-level employees. Rich should be smart enough to realize that, and he probably has employees of his own. So go hire some people, Rich.
    • nunez 8 hours ago
      False equivalence. He isn't a hiring manager, and AI _has_ been used to justify hiring fewer entry-level employees.
      • ta9000 8 hours ago
        They can justify _their_ decisions all they want. It's still their decision, not AI's. This is pure cost-cutting nonsense that's par for the course for poorly run corporations.
  • RodgerTheGreat 12 hours ago
    Looking forward to seeing all the slop enthusiasts pipe up with their own LLM-oriented version of the age-old dril tweet:

    "drunk driving may kill a lot of people, but it also helps a lot of people get to work on time, so, it;s impossible to say if its bad or not,"

  • morgengold 3 hours ago
    AI slop is a big problem. At the same time, AI does some things pretty well (proofreading, translation, finding bugs, summaries, ...).
  • pests 12 hours ago
    Another victim of the AI village from the other day?
  • bigyabai 12 hours ago
    I dub this new phenomenon "slopbaiting"
  • yooogurt 12 hours ago
    I have seen similar critiques applied to digital tech in general.

    Don't get me wrong, I continue to use plain Emacs to do dev, but this critique feels a bit rich...

    Technological change changes lots of things.

    The verdict is still out on LLMs, much as it was for so much of today's technology in its infancy.

    • pdpi 12 hours ago
      AI has an image problem around how it takes advantage of other people's work, without credit or compensation. This trend of saccharine "thank you" notes to famous, influential developers (earlier Rob Pike, now Rich Hickey) signed by the models seems like a really glib attempt at fixing that problem. "Look, look! We're giving credit, and we're so cute about how we're doing it!"

      It's entirely natural for people to react strongly to that nonsense.

  • satisfice 8 hours ago
    It's interesting to see AI fanboys desperately trying to shrug off the phenomenon of slop. It makes it clear that AI doesn't need to take over the world by itself: it will have hundreds of thousands of willing helpers cooperating in the collapse of human civilization.
  • zmgsabst 12 hours ago
    I don’t think human slop is more useful than LLM slop.

    A human writing twelve polemic questions, many of which only make sense within their ideological worldview or contain factual errors, because they wanted to vent their anger on the internet has been considered substandard slop since before LLMs were a thing.

    Perhaps your views would be more persuasive if, instead of frothing out rage slop, you showed the superiority of human authors over LLMs?

    …because posts like this do the opposite, making it seem like bloggers are upset that LLMs are homing in on their slop-pitching grift.

    Edit:

    For fun, I had ChatGPT rewrite his post and elaborate on the topic. I think it did a better job explaining the concerns than most LLM critics.

    https://chatgpt.com/share/6951dec4-2ab0-8000-a42f-df5f282d7a...

    • yooogurt 12 hours ago
      If you haven't heard of Rich Hickey, then you're fortunate to have the opportunity to watch "Simple Made Easy" for the first time: https://m.youtube.com/watch?v=LKtk3HCgTa8
      • zmgsabst 12 hours ago
        I know who he is.

        This is substandard slop, though: devoid of any real critique and merely a collection of shotgunned, borderline-incoherent jabs. Criticizing LLMs by turning in even lower-quality slop is the behavior you'd expect from people who feel threatened by LLMs, rather than from people addressing a specific weakness in or problem with LLMs.

        So like I said:

        Perhaps he should try showing me LLMs are inferior by not writing even worse slop, like this.

    • stanleykm 12 hours ago
      Rich Hickey designed Clojure.
      • maplethorpe 12 hours ago
        I asked Claude if it could design Clojure and it said yes. Maybe people like Hickey just aren't needed anymore.
        • diab0lic 9 hours ago
          Ironically it may only be able to do this because it has been trained on Hickey’s creation. This was one of his criticisms.
          • bdangubic 9 hours ago
            what has hickey been trained on? ;)
            • kokada 21 minutes ago
              Years of experience working on enterprise and complex systems.

              And that is all on point with the criticism: while an AI can design a new language based on an existing language like Clojure, we need actual experienced people to design new, interesting languages that add new constraints and make software engineering as a whole better. And with AI we are also killing the possibility of new people getting up to speed and becoming a future Rich Hickey.

    • jhhh 11 hours ago
      What factual errors did you, the human, notice?
    • llmslave2 11 hours ago
      > A human writing twelve polemic questions, many of which only make sense within their ideological worldview or contain factual errors, because they wanted to vent their anger on the internet has been considered substandard slop since before LLMs were a thing.

      Maybe by people who don't share the same ideological worldview.

      I'll almost always take human slop over AI slop, even when the AI slop is better along some categorical axis. Of course there are exceptions, but as I grow older I find myself appreciating the humanity more and more.

  • aflierre 12 hours ago
    [flagged]
  • kurtis_reed 10 hours ago
    Dictionary definition of "butthurt"
  • BoneShard 10 hours ago
    At this point I'd split HN into artisanal HN and modern HN lol
  • kenforthewin 12 hours ago
    Pure cringe. I'd rather read 100 "ai slop" posts than another such uninformed anti-ai tirade.
    • llmslave2 12 hours ago
      Why are you even here, then? Go ask your LLM of choice to spit out 100 articles for you to read. You can even have it generate comments!
    • drcode 12 hours ago
      I think it is likely your wish will be fulfilled: AI slop posts and spam emails for everyone, at a scale that will be monumental.
      • CPUstring 12 hours ago
        Slop existed before AI at a monumental scale. Meta and Alphabet made sure of that.
        • 0x696C6961 11 hours ago
          Slop is, by definition, AI generated. So ... no it didn't.
          • hnhn34 10 hours ago
            Slop is not, by definition, AI-generated. The word "slop" dates from the mid-16th century, and its modern colloquial/meme use originated on 4chan in 2016. That's why we call AI slop "AI slop" and not just "slop".
  • djoldman 12 hours ago
    Companies and people by and large are not forced to use AI. AI isn't doing things, people and corporations are doing things with AI.

    I find it curious how often folks want to find fault with tools rather than with the systems of laws, regulations, and conventions that incentivize using them.

    • turtletontine 12 hours ago
      Many people are, indeed, being forced to use AI by ignorant bosses, who often blame their own employees for the AI's shortcomings. Not all bosses everywhere, of course, and it's often just pressure to use AI rather than force.

      Given how gleefully transparent corporate America is being that the plan is basically “fire everyone and replace them with AI”, you can’t blame anyone for seeing their boss pushing AI as a bad sign.

      So you’re certainly right about this: AI doesn’t do things, people do things with AI. But it sure feels like a few people are going to use AI to get very very rich, while the rest of us lose our jobs.

      • djoldman 11 hours ago
        I guess if someone's boss forces them to use a tool they don't want to use, then the boss is to blame?

        If the boss forced them to use emacs/vim/pandas and the employee didn't want to use it, I don't think it makes sense to blame emacs/vim/pandas.

    • RodgerTheGreat 12 hours ago
      Why not both? When you make tools that putrefy everything they touch, on the back of gigantic negative externalities, you share the responsibility for making the garbage with the people who choose to buy it. OpenAI et al. explicitly thrive on outpacing regulation and using their lobbying power to ensure that any possible regulations are built in their favor.
    • netfortius 12 hours ago
      > "AI isn't doing things, people and corporations are..."

      Where have I heard similar reasoning before? Maybe about guns in the US???

      • djoldman 11 hours ago
        Guns can be and are used to murder people directly in the physical world.

        The overwhelming majority (perhaps the entirety) of generative AI use is not to murder people; it's to generate text/photo/video/audio.

        • RodgerTheGreat 11 hours ago
          Generative AI is used to defraud people, to propagandize them, to steal their intellectual property and livelihoods, to systematically deny their health insurance claims, to dangerously misinform them (e.g. illegitimate legal advice or hallucinated mushroom identification ebooks), to drive people to mental health breakdowns via "ai psychosis" and much more. The harm is real and material, and right now is causing unemployment, physical harm, imprisonment, and in some cases death.
    • llmslave2 12 hours ago
      I'm sympathetic to your point, but practically it's easier to try to control a tool than it is to control human behaviour.

      I think it's also implied that the problem with AI is how humans use it, in much the same way that when anti-gun advocates talk about the issues with guns, it's implicit that it's how humans use (abuse?) them.
