Words like “goblin”, “gremlin”, and “yak shaving” are common in engineering culture: the first two as shorthand for mysterious or hidden bugs, the last for a cascade of preliminary side tasks. If those terms appear often in the training corpus, or get positively reinforced during alignment tuning, the model can end up overusing them as narrative shortcuts.
It's basically a mild style artifact of the training distribution, not something intentionally programmed.
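If you wanted to sanity-check that intuition, one rough approach is to measure how often those phrases occur per million words in a text sample and compare that against general prose. A minimal sketch of such a frequency check (the term list, corpus sample, and normalization are illustrative assumptions, not a real measurement):

```python
import re
from collections import Counter

# Hypothetical list of "engineering folklore" terms to track.
TERMS = ["goblin", "gremlin", "yak shaving"]

def term_frequencies(text: str) -> Counter:
    """Count case-insensitive occurrences of each term, normalized
    to occurrences per million words for rough comparability."""
    lowered = text.lower()
    total_words = max(len(re.findall(r"\w+", lowered)), 1)
    counts = Counter()
    for term in TERMS:
        counts[term] = len(re.findall(re.escape(term), lowered))
    return Counter({t: c * 1_000_000 / total_words for t, c in counts.items()})

if __name__ == "__main__":
    # Toy sample standing in for a slice of training-style text.
    sample = "The build broke again, probably a gremlin hiding in the CI config."
    print(term_frequencies(sample))
```

Running something like this over a corpus slice versus ordinary web text would at least show whether the terms are genuinely overrepresented, which is the mechanism the explanation above assumes.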