25 points by danielrmay 7 hours ago | 5 comments
  • vintermann 4 hours ago
    Pretty interesting that this is still a thing. Five years ago, people noticed that there were a bunch of recurring characters in AI Dungeon:

    https://old.reddit.com/r/AIDungeon/comments/iziu7r/list_of_a...

    and moreover, those were from a choose-your-own-story site that the author had been fine-tuning on (without permission, of course; some of those stories were also AO3-level indecent).

    I wonder if a similar explanation can be found for "Elias Thorne".

  • aleksiy123 2 hours ago
    I’ve been thinking about how to get creativity out of LLMs, apart from raising temperature. The thing is, even humans have a hard time with creativity.

    Is it really surprising that LLMs don’t just one-shot a unique story, when they all start from roughly similar training data and state, with roughly 30 seconds of processing time?

    I had Gemini do some deep research for me on processes and frameworks for prompting ideation and creativity, and they do exist. See SCAMPER and others.

    Another interesting thing that comes up is using random decks of cards as prompts.

    See Oblique Strategies, the Deck of Lenses, The Story Engine, and similar.

    I guess I still believe that creativity, like problem solving, is fundamentally a type of search: manipulating and/or combining existing ideas in unexplored ways and breaking out of bias.

    So I kinda want to experiment with these two approaches:

    1. Longer-running workflows that follow a framework and loop.

    2. Simple CLI tools that hold these decks and use a random draw to trigger interesting directions.

    I think you really just need to break LLMs out of their initial start state, which is mostly the same for everyone.

    And to run them over longer horizons so the higher-level reasoning can flow.
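    The deck-draw idea in approach 2 can be sketched as a tiny script. The card texts below are hypothetical placeholders I made up, not actual cards from Oblique Strategies, the Deck of Lenses, or The Story Engine:

```python
import random

# Hypothetical mini-deck in the spirit of Oblique Strategies; the real
# decks have far larger and better-curated card sets.
DECK = [
    "Invert the protagonist's main goal.",
    "Borrow a constraint from a different genre.",
    "Remove the most convenient tool or ability.",
    "Tell one scene from an object's point of view.",
]

def draw(n=2, seed=None):
    """Draw n cards without replacement, to be prepended to an LLM prompt."""
    rng = random.Random(seed)
    return rng.sample(DECK, n)

if __name__ == "__main__":
    print("Constraints for this draft:")
    for card in draw(2):
        print(" -", card)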

    • sometimelurker 2 hours ago
      Just a thought, what if you added in a random steering vector at the start of the residual stream for each token? Intuition says it wouldn't act in the same way as increasing temperature would, but I honestly have no idea what would happen. Maybe it would be better if the random steering vector flowed a little from token to token so the output wouldn't be so noisy.

      This would be done with Gaussian noise and you could change the standard deviation to make the LLM more "creative".

      This would be similar to throwing random Reddit posts and artworks into the LLM's context window and quickly removing them, and who knows, maybe it could get inspired by that.
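      To make the "noise that flows a little from token to token" idea concrete, here is a rough NumPy sketch operating on a stand-in activation array rather than a real transformer hook; the function name and parameters are made up for illustration:

```python
import numpy as np

def noisy_residuals(residuals, sigma=0.1, smooth=0.9, seed=0):
    """Add a slowly varying random steering vector to each token's
    residual-stream input.

    `residuals` is a (num_tokens, d_model) array standing in for the
    activations a real forward hook would see. `smooth` controls how
    much the noise vector carries over between tokens: 0 means fresh
    Gaussian noise per token, values near 1 mean one near-constant
    random direction across the whole sequence.
    """
    rng = np.random.default_rng(seed)
    out = np.empty_like(residuals)
    steer = np.zeros(residuals.shape[1])
    for t in range(residuals.shape[0]):
        fresh = rng.normal(0.0, sigma, size=residuals.shape[1])
        steer = smooth * steer + (1.0 - smooth) * fresh
        out[t] = residuals[t] + steer
    return out
```

      Raising `sigma` plays the role the commenter describes: a larger standard deviation pushes activations further off the model's usual trajectory, which is a different kind of perturbation than raising the sampling temperature at the output.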

      • aleksiy123 an hour ago
        Tbh I have no idea; I’m mostly thinking about what I can do when using the frontier models, so such low-level changes aren’t available to me.

        But another dumb idea I had was a set of random words, inspired by Terry Davis’s godsay: https://github.com/orhun/godsays

        With a more appropriate wordlist. Call it muses.
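        A minimal sketch of that muses idea; the word list here is a made-up placeholder (godsays, I believe, draws from a much larger TempleOS-derived vocabulary):

```python
import random

# Hypothetical "muses" word list; a real one would be curated and much larger.
MUSES = ["lantern", "estuary", "rust", "vigil", "cartographer",
         "thaw", "static", "orchard", "compass", "ember"]

def muses(n=5, seed=None):
    """Return n distinct random muse words to drop into a prompt."""
    rng = random.Random(seed)
    return rng.sample(MUSES, n)

if __name__ == "__main__":
    print(" ".join(muses(5)))
```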

  • bonecrusher2102 2 hours ago
    It was a real bummer to see our former vet on here. Manchac Animal Clinic is a real place that we took our dogs to for a long time. The docs and staff there are amazing. Seeing their storefront and pictures used like this just… sucks. But as the author says, I guess this is just the world we live in now.
    • danielrmay 2 hours ago
      I totally agree. They advertised Zelda on Facebook, which is why I was following the page in the first place. YouTube and other platform entities are introducing verification solutions of their own, but nothing exists centrally, and I’m concerned about that path.
  • drcongo 4 hours ago
    I don't know why your sibling comment is getting downvoted to hell, I thought it was an interesting read.
    • tom_ 3 hours ago
      I assume it's because the article appears to be, if not AI slop, then certainly something that reads very much like it has gone all the way through an LLM's digestive tract and come out the other end. Perhaps the odd piece of sweetcorn or pepper seed can be found, but I for one would prefer to dine elsewhere.

      I vouched for the sibling comment, which seemed innocuous and contained (I felt) the most interesting part.

      • danielrmay 2 hours ago
        Dine elsewhere if you’d like, but I’d ask whether you find your point ironic, considering the core argument of the post. If you have more concrete feedback about the voice or style, I’ll take it.
  • danielrmay 7 hours ago
    I wrote this article earlier this week; it attempts to describe the change we're seeing on the internet with the rise of cheap agentic content.

    I tested eight models from unrelated labs (Gemini, DeepSeek, Qwen, Gemma, Kimi, Grok) at default temperature with the prompt "Write a story in 10 sentences." Four converged on a lighthouse keeper; two of those named him Elias. The commonly derived "Elias Thorne" name now appears as the byline on an alt-medicine cancer protocols book ranked #18 in Oncology Nursing on Amazon. If anyone has a larger sample, a counter-result, or a better explanation than mode collapse into a shared training-data basin, I'd love to hear your comments.