122 points by phmx 6 days ago | 12 comments
  • Animats 12 hours ago
    So they turned on GC after every allocation ("GC stress"), and

    "With GC.stress = true, the GC runs after every possible allocation. That causes immediate segfaults because objects get freed before Ruby can even allocate new objects in their memory slots."

    That would seem to indicate a situation so broken that you can't expect anything to work reliably. The wrong-value situation would seem to be a subset of a bigger problem. It's like finding C code that depends on use-after-free working and which fails when you turn on buffer scrubbing at free.
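
    A minimal sketch of the stress-mode technique (GC.stress is a real Ruby API; the method under test here is a hypothetical stand-in):

        # Force a full GC at every allocation point, so any object kept
        # alive only through a raw C pointer (with no marked Ruby
        # reference) is freed immediately and the bug surfaces at once.
        GC.stress = true
        begin
          run_ffi_code_under_test  # hypothetical stand-in for the FFI call
        ensure
          GC.stress = false        # restore normal GC behavior
        end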

    • xerxes90 17 hours ago
      That’s exactly what it was. He discovered the customer was using a version of ffi that had this “use-after-free” (ish) bug, but the question “is this actually what my customer was seeing or is there _another_ bug lurking” still needed to be answered.
  • lifthrasiir 9 hours ago
    > Million-to-one bugs are real, not theoretical. They happen during initialization and restart, not runtime. When they trigger, they cascade - 2,500 errors from one root cause. In high-restart environments, rare becomes routine.

    Million-to-one bugs are not only real but frequent enough to matter, depending on which million. Many years ago I had a rare bug that corrupted timestamps in the logs, with an empirical probability of about one in 3-5 million (IIRC). It turned out that that seemingly benign bug was connected to a critical data corruption issue with real consumer complaints. (I have described this bug in detail in the past; see my past comment for details.)
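
    (A quick back-of-the-envelope in Ruby for the "rare becomes routine" point above; the restart volume is an illustrative assumption:)

        # If a bug fires with probability prob on each restart, the chance
        # of seeing it at least once across n restarts is 1 - (1 - prob)**n.
        prob = 1.0e-6               # "million-to-one" per restart
        n    = 10_000 * 365         # assumed: 10k restarts/day for a year
        puts 1 - (1 - prob)**n      # => ~0.974, near-certain within the year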

  • mwkaufma 18 hours ago
    A little strange to write up a bug hunt that had already been resolved upstream in ffi, rather than by the hunt itself. OP didn't fix the bug, though identifying that the upgrade was relevant is of some interest. The writing could have been clearer.
    • mbac32768 17 hours ago
      The bug that was fixed upstream manifested differently from what he was experiencing, so the journey was to validate that it applied to his case.

      OTOH I'm a bit surprised he didn't pull back earlier, suggest that his user update to the latest version, and let him know.

      • eichin 15 hours ago
        15 or so years ago I had a similar journey - a single Python interpreter "impossible" segfault in production that turned out to be a bug in glibc realloc, one that had already been fixed in an update; we just didn't think to even look for one until we'd narrowed it down that far. (We were shipping custom Debian installs on DVD, and a fair number of our customer installs weren't internet accessible, so casual upgrades were both impossible and unwanted, but it was also a process mistake on my part not to notice the existence of the upgrade sooner.)

        Never wrote it up externally because it was already solved and "Debian updates to existing releases are so rare that you really want to pay attention to all of them" (1) was already obvious (2) was only relevant to a really small set of people (3) this somewhat tortured example wasn't going to reach that small set anyway. (Made a reasonable interview story, though.)

  • dmix 10 hours ago
    A good example of why everyone should learn a bit of C and low level memory management
  • fleshmonad 17 hours ago
    LLM slop. Why do people (presumably) take the time to debug something like this, do tests and go to great lengths, but are too lazy to do a little manual writeup? Maybe the hour saved makes up for being associated with publishing AI slop under your own name? Like there is no way the author would have written a text that reads more convoluted than what we have here.
    • fn-mote 15 hours ago
      > Why do people […] take the time to debug […] but are too lazy to do a little manual writeup[?]

      They like to code. They don’t like to write.

      I’m not excusing it, but after you asked the question the conclusion seems logical.

      • PKop 12 hours ago
        > They like to code. They don’t like to write.

        People like reading LLM slop even less than either of those. So it should become a common understanding not to waste your (or our) time "writing" this. It's frustrating to give it a chance and then get rug-pulled with nonsense, and there's really no reason to excuse it.

    • sb8244 17 hours ago
      I read it just fine and everything made sense in it.

      I would spend similar time debugging this if I were the author. It's a pretty serious bug, a non-obvious issue, and it would be impossible to connect to the ffi fix unless you already knew the problem.

    • xerxes90 17 hours ago
      I have no idea whether the text was generated from an LLM, but “slop” it absolutely is not - it’s clearly a very logically ordered walkthrough about a very thorough debugging process.

      If you call anything that comes out of a model “slop”, the term loses all meaning.

    • dpark 17 hours ago
      Sorry, why is this LLM slop? I only got about halfway through because I don’t care about this enough to finish the read, but I don’t see the “obvious LLM” signal you do.
      • scmccarthy 16 hours ago
        It's clearest in the conclusion.
        • dpark 16 hours ago
          I still don’t see it.

          I feel like the “this is AI” crowd is getting ridiculous. Too perfect? Clearly AI. Too sloppy? That’s clearly AI too.

          Rarely is there anything concrete that the person claiming AI can point to. It’s just “I can tell”. Same confident assurance that all the teachers trusting “AI detectors” have.

          • dkdcio 14 hours ago
            I came to this thread hoping to read an interesting discussion of a topic I don’t understand well; instead it’s this

            I have opened a wager re: detecting LLM/AI use in blogs: https://dkdc.dev/posts/llm-ai-blog-challenge/

            • dpark 13 hours ago
              I feel like it’s on every other article now. The “this is ai” comments detract way more from the conversation than whatever supposed ai content is actually in the article.

              These ai hunters are like the transvestigators who are certain they can always tell who’s trans.

              • PKop 12 hours ago
                No. These articles are annoying to read, the same dumb patterns and structures over and over again in every one. It's a waste of time; the content gives off a generic tone and it's not interesting.
                • sb8244 9 hours ago
                  Are we reading the same article?

                  Also, you do realize that writing is taught in an incredibly formulaic way? I can't speak for English-as-a-second-language authors, but I imagine that doesn't make it easier.

                • dkdcio 11 hours ago
                  say that! that’s independent of whether AI/LLM tools were used to write it and more valuable (“this was boring and repetitive” vs “I don’t like the tool I suspect you may have used to write this”)
                • WesolyKubeczek 3 hours ago
                  So is the vast majority of comments on HN (and in any comment section of any website), well before LLMs came into being, yet we give them the benefit of the doubt. Users on forums tend to behave in a starkly bot-like way, often having a very limited set of responses pertaining to their particular hobby horses, so much so that others could easily predict how the most prolific users would react to any topic and in what precise words.

                  Now, apparently, we have a generation of "this is AI slop!" "bots".

            • internetter 12 hours ago
              > I will make a bet for $1,000,000!

              > I won't actually make this bet!

              > But if I did make this bet, I would win!

              ???

              • dkdcio 11 hours ago
                if two parties put up $1,000,000 each and I get a large cut I’ll do the work! one commenter already wagered $1,000, which I’d easily win, but I suspect this would take me idk at least a few days of work (not worth the time). and, again, for a million dollars I’d make sure I win

                see other comment though, the point is that assessing the quality of content based on whether AI was used is stupid (and getting really annoying)

            • _dain_ 13 hours ago
              I don't have a million dollars but I'll take you up on it for like a grand. I'm serious, email me.
              • dkdcio 13 hours ago
                the problem is it’s a lot of work (not actually worth it for me for a thousand dollars) — but you cannot win

                just one scenario, I write 100 rather short, very similar blog posts. run 50 through Claude Code with instructions “copy this file”. have fun distinguishing! of course that’s an extreme way to go about it, but I could use the AI more and end up at the same result trivially

                • _dain_ 12 hours ago
                  This is so childish and pathetic it doesn't deserve a response.
                  • dkdcio 12 hours ago
                    why? LLM/AI use doesn’t denote anything about the style or quality of a blog, that’s the point, and it's why this type of commentary all over HackerNews and elsewhere is so annoying.

                    obviously if a million dollars are on the line I’m going to do what I can to win. I’m just pointing out how that can be taken to the extreme, but again I can use the tools more in the spirit of the challenge and (very easily) end up with the same results

                    • Panzer04 12 hours ago
                      People object to using AI to write their articles (poorly). Your answer to them saying it's obvious when it's AI-written is to... write it yourself, then pretend copy-pasting that article via an AI counts as AI-written?

                      That's a laughable response.

                      • dkdcio 11 hours ago
                        my point is that using AI is distinct from the quality of blog posts. these frequent baseless, distracting claims of AI use are silly

                        this wager is a thought exercise to demonstrate that. want to wager $1,000,000 or think you’ll lose? if you’ll lose, why is it ok to go around writing “YoU uSeD aI” instead of actually assessing the quality of a post?

          • PKop 12 hours ago
            That's your issue, not ours. It's obvious; if you don't have a problem with it, enjoy reading slop; many people can't stand it, and we don't have to apologize for recognizing or not liking it.
            • dpark 11 hours ago
              I don’t believe you can recognize anything. Like everyone else claiming they can clearly identify AI you can’t actually point to why it’s AI or what parts are clearly AI.

              If you could actually identify AI deterministically you would have a very profitable product.

              • Jweb_Guru 7 hours ago
                I would never claim that we can reliably detect all AI generated text. There are many ways to write text with LLM assistance that is indistinguishable from human output. Moreover, models themselves are extremely bad at detecting AI-generated text, and it is relatively easy to edit these tells out if you know what to look for (one can try to prompt them out too, though success is more limited there). I am happy to make a much narrower claim, however: each particular set of models, when not heavily prompted to do otherwise, has a "house style" that's pretty easily identifiable by humans in long-form writing samples, and content written with that house style has a very high chance of being generated by AI. When text is written in this house style, it is often a sign that not only were LLMs used in its generation, but the person doing the generation did not bother to do much editing or use a more sophisticated prompt that wouldn't result in such obvious tells, which is why the style is commonly associated with "slop."

                I find it interesting that you believe this claim is wildly conspiratorial, or that you think the difficulty of reliably detecting AI-generated text at scale is evidence that humans can't do pretty well at this much more limited task. Do you also find claims that AIs are frequently sycophantic in ways that humans are not, or that they will use phrases like "you're absolutely right!" far more than a human would unless prompted otherwise (which are the exact same type of narrow claim) similarly conspiratorial? i.e., is your assertion that people would have difficulty differentiating between a real human's response to a prompt and Claude's response to a prompt when there was no specific pre-prompt trying to control the writing style of the response?

                • dpark 7 hours ago
                  On the other fork where I responded to your claims with a direct and detailed response, you insisted that my comment “isn't really that interesting” and just disengaged. I’m not going to write another detailed explanation of why your “slop === AI” premise is flawed. Go reread the other fork if you’ve decided you’re interested.

                  > I find it interesting that you believe this claim is wildly conspirational

                  I don’t believe it’s wildly conspiratorial. I believe it’s foolishly conspiratorial. There’s some weird hubris in believing that you (and whatever group you identify as “us”) are able to deterministically identify AI text when experts can’t do it. If you could actually do it you’d probably sell it as a product.

                  • MobiusHorizons 6 hours ago
                    > believing that you (and whatever group you identify as “us”) are able to deterministically identify AI text

                    I think you will find the OP said no such thing. They instead said they identified a mixture of writing styles consistent with a human author and an LLM. The OP says nothing about deterministically identifying LLMs, only that the style of specific sections is consistent with LLMs, leading to that conclusion.

                    • dpark 5 hours ago
                      I think you'll find the OP absolutely did say that.

                      > Parts of it were 100% LLM written. Like it or not, people can recognize LLM-generated text pretty easily

                      https://news.ycombinator.com/item?id=45868782

                      • Jweb_Guru 5 hours ago
                        I am pretty much certain that parts of it were LLM-written, yes. This doesn't imply that the entire blog post is LLM-generated. If you're a good Bayesian and object to my use of "100%" feel free to pretend that I said something like "95%" instead. I cannot rule out possibilities like, for example, a human deliberately writing in the style of an LLM to trick people, or a human who uses LLMs so frequently that their writing style has become very close to LLM writing (something I mentioned as a possibility in an earlier reply; for various reasons, including the uneven distribution of the LLM-isms, I think that's unlikely here).
                  • Jweb_Guru 6 hours ago
                    Human experts can reliably detect some kinds of long-form, AI-generated text using exactly the same sorts of cues I've outlined: https://arxiv.org/html/2501.15654v1. You may take issue with the quality of the paper, but there have been very few studies like this and this one found an extremely strong effect.

                    I am making an even more limited claim than the article, which is only that it's possible for "experts" (i.e. people who frequently interact with LLMs as part of their day jobs) to identify AI generated text in long-form passages in a way that has very few false positives, not classify it perfectly. I've also introduced the caveat that this only applies to AI generated text that has received minimal or no prompting to "humanize" the writing style, not AI generated text in general.

                    If you would like to perform a higher-quality study with more recent models, feel free (it's only fair that I ask you to do an unreasonable amount of work here given that your argument appears to be that if I don't quit my lucrative programming job and go manually classify text for pennies on the dollar, it proves that it can't be done).

                    The reason this isn't offered as a service is because it makes no economic sense to do so using humans, not because it's impossible as you claim. This kind of "human" detection mechanism does not scale the way generation does. The cues that I rely on are also pretty easy to eliminate if you know someone is looking for them. This means that heuristics do not work reliably against someone actively trying to avoid human detection, or a human deliberately trying to sound like an LLM (I feel the need to reiterate this as many of the counterarguments to what I'm saying are to claims of this form).

                    > I’m not going to write another detailed explanation of why your “slop === AI” premise is flawed.

                    This isn't a claim that I made. I believe that text written with LLM assistance is not necessarily slop, and that slop is not necessarily AI generated. The only assertion I made regarding slop is that being written with LLM assistance with minimal prompting or editing is a strong predictor of slop, and that the heuristics I'm using (if present in large quantities) are a strong predictor of an article being written with LLM assistance with minimal prompting or editing. i.e., I am asserting that these kinds of heuristics work pretty well on articles generated by people who don't realize (or care) that there are LLM "tells" all over their work. The fact that many of the articles posted to HN are being accused of being LLM generated could certainly indicate that this is all just a massive witch hunt, but given the acknowledged popularity of ChatGPT among the general population and the fact that experts can pretty easily identify non-humanized articles, I think "a lot of people are using LLMs in the process of generating their blog posts, and some sizable fraction of those people didn't edit the output very much" is an equally compelling hypothesis.

                    • dpark 5 hours ago
                      That’s a really interesting study. Thanks for sharing that.

                      This seems like the kind of thing to share when making a bold claim about being able to detect AI with high confidence. This is a lot more weighty than not so subtly asserting that I’m too dumb to recognize AI.

                      > a human deliberately trying to sound like an LLM (I feel the need to reiterate this as many of the counterarguments to what I'm saying are to claims of this form).

                      I assume this is a reference to me. To be clear, I was never referring to humans specifically attempting to sound like AI. I was saying that a lot of formulaic stuff people attribute to AI is simply following the same patterns humans started, and while it might be slop, it’s not necessarily AI slop. Hence the AITA rage bait example.

                      • Jweb_Guru 5 hours ago
                        Thanks for engaging thoughtfully! FWIW I actually looked this article up because I was interested in your claim that even experts couldn't perform these tasks, something I hadn't heard before--I'm not actually ignoring what you're saying. It's actually very nice to have a productive conversation on HN :)
      • Jweb_Guru 14 hours ago
        Parts of it were 100% LLM written. Like it or not, people can recognize LLM-generated text pretty easily, and if they see it they are going to make the assumption that the rest of the article is slop too.
        • dpark 13 hours ago
          And yet you don’t call out any parts that are 100% AI and how you recognize them as such.

          I’m not saying there’s no AI here. I am asking for some evidence to back up the claim though.

          • Jweb_Guru 10 hours ago
            I can point to individual sentences that were clearly generated by AI (for example, numerous instances of this parallel construction, "No warning. No error. Just different methods that make no sense.", "Not corrupted. Not misaligned. Not reading wrong offsets.", "Not a segfault. Not the T_NONE error from #1079. There it is, the exact error from production"). The style is list-heavy, including lists used for conditionals, and full of random bolding, both characteristic of AI-generated text. And there are a number of other tells as well.

            The reason I don't usually bother to bring these specific things up is that I already know the response, which is just going to be you arguing that a human could have written this way, too. Which is true. The point is that if you read the collective whole of the article, it is very clear that it was composed with the aid of AI, regardless of whether any single part of it could be defensibly written by a human. I'd add that sometimes, the writing of people who interact heavily with LLMs all day starts to resemble LLM writing (a phenomenon I don't think people talk enough about), but usually not to this extent.

            This doesn't mean that the entire article was written by an LLM, nor does it mean that there's not useful information in it. Regardless, given the amount of low effort LLM-generated spam that makes it onto HN, I think it is fairly defensible to use "this was written with the help of an LLM, and the person posting it did not even bother to edit the article to make that less obvious" as a heuristic to not bother wasting more time on an article.

            • dpark 10 hours ago
              > this parallel construction

              “not A, not B, not C” and “not A, not B, but C” are extremely common constructions in general. So common in fact that you did it in this exact reply.

              “This doesn't mean that the entire article was written by an LLM, nor does it mean that there's not useful information in it. Regardless, given the amount of low effort LLM-generated spam that makes it onto HN, I think it is fairly defensible”

              > The style is list-heavy, including lists used for conditionals, and full of random bolding, both characteristic of AI-generated text

              This is just blogspam-style writing. Short snippets that are easy to digest with lists to break it up and bold keywords to grab attention. This style was around for years before ChatGPT showed up. LLMs probably do this so much specifically because they were trained on so much blog content. Hell I’ve given feedback to multiple humans to cut out the distracting bold stuff in their communications because it becomes a distraction.

              • inopinatus 7 hours ago
                Blog spam doesn’t intersperse the drivel with literary narrative beats and subsection titles that sound like sci-fi novels. The greasy mixture of superficially polished but substantively vacuous is much more pronounced in LLM output than even the most egregious human-generated content marketing, in part because the cognitive entity in the latter case is either too smart, or too stupid, to leave such a starkly evident gap.
                • dpark 5 hours ago
                  Is… is this from an LLM? Because this is the first time I’ve felt confident identifying text as no-human-writes-this-way.
                  • inopinatus an hour ago
                    I don’t usually speak like this on Hacker News, but for fucks sake, just give it a fucking rest already, you utter pillock.
              • Jweb_Guru 9 hours ago
                Again, this is why I don't bother explaining why it's very obvious to us. People like you immediately claim that human writing is like this all the time, which it's not. Suffice it to say that if a large number of people are immediately flagging something as AI, it is probably for a reason.

                My reply wasn't an instance of this syntactic pattern, and the fact that you think it's the same thing shows that you are probably not capable of recognizing the particular way in which LLMs write.

                • dpark 8 hours ago
                  > Again, this is why I don't bother explaining why it's very obvious to us.

                  The thing is, your premise is that you can identify certain patterns as being indicative of AI. However, those exact same patterns are commonly used by humans. So what you’re actually claiming is some additional insight that you can’t share, because your premise does not hold up on its own. What you’re actually claiming is “I know it when I see it”.

                  Let me give you a related example. If you go to any of the “am I the asshole” subreddits, you will encounter the exact same story format over and over: “Other person engages in obviously unacceptable behavior. I do something reasonable to stop the unacceptable behavior. People who should support me support other person instead. Am I the asshole?” The comments will be filled with people either enraged on behalf of the author or who call it AI.

                  The problem with claiming that it’s AI is that the sub was full of the exact same garbage before AI showed up. The stories have always been the same bullshit rage bait. So it’s not technically wrong to say it looks like AI, because it certainly could be. But it could also be human-generated rage bait, because it’s indistinguishable. My guess is that some of the sub is totally AI. And a chunk of it is from humans engaged in shitty creative writing.

                  When you look at generic click-bait/blogspam patterns that humans have been using for decades now and call it AI, all you’re doing is calling annoying blog writing AI. Which it could be, but it could also not be. Humans absolutely write blogs like this and have for longer than LLMs have been widely available.

                  > My reply wasn't an instance of this syntactic pattern, and the fact that you think it's the same thing shows that you are probably not capable of recognizing the particular way in which LLMs write.

                  It was absolutely an example of the pattern, just more wordy. Spare me the ad hominem.

                  Your “you couldn’t understand” and “obvious to us” stuff is leaning into conspiracy theory type territory. When you believe you have some special knowledge, but you don’t know how to share it with others, you should question whether that knowledge is actually real.

                  • Jweb_Guru 8 hours ago
                    > It was absolutely an example of the pattern, just more wordy. Spare me the ad hominem.

                    LLMs simply don't generate the syntactic pattern I used consistently, but they do generate the pattern in the article. I'm not really sure what else to tell you.

                    The rest of your post isn't really that interesting to me. You asked why nobody was giving specific examples of why it was generated. I told you some of the specific reasons we believe this article was generated with the assistance of an LLM (not all--there are many other sentences that are more borderline which only slightly increase the probability of LLM generation in isolation, which aren't worth cataloguing except in a context where people genuinely want to know why humans think a post reads as AI-generated and are not just using this as an excuse to deliver a pre-prepared rant), mentioned that the reason people don't typically bother to bring it up is that we know people who demand this sort of thing tend to claim without evidence that humans write in the exact same way all the time, and you proceeded to do exactly that. Next time you don't get a response when you ask for evidence, consider that it might be because we don't particularly want to waste time responding to someone who isn't interested in the answer.

    • iberator 13 hours ago
      For example, being a non-native English speaker :)
    • michaelcampbell 15 hours ago
      > LLM slop

      Is this the new "looks shopped. I can tell by the pixels."?

      • dmix 9 hours ago
        Every single article or social media post has someone claiming it's AI these days

        From what I've seen it doesn't take a particularly strong reason for the entire article to get dismissed

    • ryandv 17 hours ago
      [flagged]
      • skrebbel 16 hours ago
        > which is par for Rubyists

        Pro-tip: re-read your comment before you submit and take out the bits that make you sound like an asshole.

        • dpark 15 hours ago
          It’s truly weird how some people are just full of hate like this and either cannot see or do not care that they are unpleasant assholes.
        • ryandv 16 hours ago
          [flagged]
  • khazhoux 14 hours ago
    I don’t understand why people are saying this article was AI-generated. Do you think the author told ChatGPT “Write me an article (with diagrams) about a Ruby hash race condition” and pasted that to their blog?
    • Jweb_Guru 14 hours ago
      Parts of it being generated by Claude or ChatGPT (which they very clearly were) does not necessarily mean that the whole article was fabricated.
  • philipp-gayret 16 hours ago
    Had me in the first half. But from "The Microsecond Window" chapter on...

    > No warning. No error. Just different methods that make no sense.

    > This is why write barriers exist. They're not optional extras for C extension authors. They're how you tell the garbage collector: "I'm holding a reference. Don't free this."

    It's all ChatGPT slop of the LinkedIn and Instagram spam variety. An unfortunate end to an otherwise interesting writeup.
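
    (For what it's worth, the quoted mechanism is real: inside a C extension, the write-barrier API such as RB_OBJ_WRITE is how a reference gets reported to Ruby's generational GC. A loosely analogous Ruby-level sketch of the same discipline with the ffi gem, where the keepalive reference is purely illustrative:)

        require "ffi"

        # Memory handed to native code is only safe while the GC can still
        # see a Ruby reference to it; dropping the last reference lets the
        # GC reclaim the backing object even if C retains the raw pointer.
        buffer = FFI::MemoryPointer.new(:char, 256)
        $keepalive = buffer  # illustrative: pin the buffer while native code may use it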

  • alexnewman 18 hours ago
    I don’t get it. Also, it reads LLM-ish.
  • YouAreWRONGtoo 14 hours ago
    [dead]
  • ryandv 17 hours ago
    [flagged]
  • davebranton 11 hours ago
    If I see another AI-written trash article I am going to scream. Overlong, overwritten garbage. People used to write, and there was personality in that writing. Now people believe it's acceptable to generate reams of utter formless shite and post it on the internet.

    If you cannot be bothered to write something, why on God's good earth would you expect anyone to be bothered to read it?

    • hansvm 6 hours ago
      I'd normally agree, but this is a case I don't see often -- despite the form being terrible the content is good. I certainly would strongly prefer the same post with better writing, but if the entire 2019 internet were replaced with articles like this (on orthogonal topics/micro-topics) I think it'd be a better place.