6 points by pants2 6 hours ago | 2 comments
  • ghostlyInc 3 minutes ago
    LLMs tend to pick up recurring metaphors from training data and reinforcement tuning.

    Words like “goblin”, “gremlin”, “yak shaving”, etc. are common in engineering culture to describe hidden bugs or messy systems. If those appear often in the training corpus or get positively reinforced during alignment tuning, the model may overuse them as narrative shortcuts.

    It's basically a mild style artifact of the training distribution, not something intentionally programmed.
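
    A toy sketch of that mechanism (a unigram sampler over a made-up corpus, nothing like a real LLM's training, just to show how corpus frequency turns into sampling frequency):

      import random
      from collections import Counter

      # Toy "training corpus": "gremlin" is deliberately overrepresented,
      # standing in for a metaphor that recurs across engineering text.
      corpus = (
          "the gremlin in the build broke the test "
          "a gremlin hid in the cache "
          "the bug was a gremlin again"
      ).split()

      # A unigram "language model" is just the empirical word frequencies.
      counts = Counter(corpus)
      total = sum(counts.values())
      words = list(counts)
      probs = [counts[w] / total for w in words]

      # Sampling reproduces the corpus bias: the overrepresented word
      # shows up disproportionately often in the "generated" text.
      sample = random.choices(words, weights=probs, k=100)
      print(Counter(sample).most_common(3))  # "gremlin" near the top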

  • arthurcolle 6 hours ago
    why don't you ask the model?
    • Tarraq 5 hours ago
      Not to scare away the goblins!