22 points by doener 5 months ago | 7 comments
  • tills13 5 months ago
    One of the most frustrating parts about "AI" in its current form is that you can challenge it on anything and it plays dumb, being like "oops I'm sowwy, I was wrong, you're right."

    I wish it would either grow a spine and double down (in the cases where it's right or partially right), or simply admit when something is beyond its capability instead of guessing with some low-probability Markov-chain-style continuation.

    • braebo 5 months ago
      I was once in an argument with Claude over a bug we were trying to identify in my code, and it refused to concede my argument for almost 20 minutes. It turned out to be correct, and boy was I glad it didn’t capitulate (as it often does). I came up with a prompt that actually reproduces this behavior more reliably than any other I’ve tried:

      ”When presented with questions or choices, treat them as genuine requests for analysis. Always evaluate trade-offs on their merits, never try to guess what answer the user wants.

      Focus on: What does the evidence suggest? What are the trade-offs? What is truly optimal in this context given the user's ultimate goals?

      Avoid: Pattern matching question phrasing to assumed preferences, reflexive agreement, reflexive disagreement, or hedging that avoids taking a position when one is warranted by the evidence.”
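Wired up as a system prompt, the quoted instructions might look like the sketch below. This assumes the official Anthropic Python SDK (`pip install anthropic`) and an `ANTHROPIC_API_KEY` in the environment; the model name and the `ask` helper are illustrative, not from the original comment.

```python
SYSTEM_PROMPT = (
    "When presented with questions or choices, treat them as genuine "
    "requests for analysis. Always evaluate trade-offs on their merits, "
    "never try to guess what answer the user wants. "
    "Focus on: What does the evidence suggest? What are the trade-offs? "
    "What is truly optimal in this context given the user's ultimate goals? "
    "Avoid: Pattern matching question phrasing to assumed preferences, "
    "reflexive agreement, reflexive disagreement, or hedging that avoids "
    "taking a position when one is warranted by the evidence."
)

def ask(question: str) -> str:
    # Import deferred so the sketch loads even without the SDK installed.
    import anthropic
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the env
    resp = client.messages.create(
        model="claude-3-5-sonnet-latest",  # illustrative model name
        max_tokens=1024,
        system=SYSTEM_PROMPT,  # the anti-sycophancy instructions above
        messages=[{"role": "user", "content": question}],
    )
    return resp.content[0].text
```

Putting the instructions in the system prompt (rather than the user turn) gives them higher priority and keeps them active across the whole conversation.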

    • doener 5 months ago
      https://openai.com/de-DE/index/why-language-models-hallucina...

      tl;dr: the models are optimized against benchmarks that score an incorrect answer the same as no answer at all, so guessing never costs anything. Therefore, they guess when in doubt.
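The incentive is easy to see with a toy expected-score calculation (my own sketch, not code from the linked post): when correct answers score 1 and both wrong answers and "I don't know" score 0, guessing weakly dominates abstaining at any confidence level.

```python
def expected_score(p_correct: float, abstain: bool = False,
                   wrong_penalty: float = 0.0) -> float:
    """Expected benchmark score for one question.

    p_correct: the model's chance of guessing correctly.
    abstain: answer "I don't know" instead of guessing.
    wrong_penalty: points deducted for a wrong answer
                   (0.0 in most current benchmarks).
    """
    if abstain:
        return 0.0
    return p_correct * 1.0 - (1.0 - p_correct) * wrong_penalty

# With no penalty for wrong answers, even a 10% hunch beats abstaining:
assert expected_score(0.1) > expected_score(0.1, abstain=True)

# Only if wrong answers are penalized does abstaining ever win,
# e.g. a -1 penalty makes low-confidence guessing a losing bet:
assert expected_score(0.1, wrong_penalty=1.0) < expected_score(0.1, abstain=True)
```

Under the first scoring rule, a model trained to maximize the benchmark is trained to always guess; changing the rule changes the incentive.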

      • tills13 5 months ago
        I really don't mind guessing, but the model should be encouraged to say when it's guessing.
  • throw310822 5 months ago
    Nice. It looks like it's not really focusing on the task and is just drawing something from memory; then, when asked for the third time, it finally produces a much simpler and smaller design that actually resembles some kind of bird. As if that were the actual, real effort.
    • ziml77 5 months ago
      You say that like it has an understanding. If I could have forked the conversation after it gave me a unicorn I would have, but I was able to start my own and ask for ASCII art of a parrot. On the first try it gave me a sitting penguin (probably Tux art from the training data).

      When I then asked it if the image was really a parrot it told me that it was "more of a generic 'ASCII bird' (often used as a generic owl/parrot placeholder), not a true parrot."

      A sitting penguin is certainly not a generic bird.

      • throw310822 5 months ago
        Yes. Well, it doesn't really "see" the ASCII art it's producing because it's blind; it's just reading and writing it as if it were text, so the task is much more complex than it seems to us. I do notice a difference between the first attempt (and your sitting penguin) and the small art at the end. If you've used GPT-5, it's possible that it decided to engage its full capacity only at that point, and that the previous answers were less thought through.

        But yes I do believe these things understand. There is no other way for them to do what they're doing.

      • firesteelrain 5 months ago

           __
          (o)>
          //\
          V_/_
        • ziml77 5 months ago
          That's what you might expect a generic sitting penguin to look like, but this is what it actually gave me:

                .---.
               /     \
               \.@-@./
               /`\_/`\
              //  _  \\
             | \     )|_
            /`\_`>  <_/ \
            \__/'---'\__/
          • firesteelrain 5 months ago
            Definitely a Linux Tux penguin
  • pulvinar 5 months ago
    Ask ChatGPT to explain what it's drawing as it adds parts. It's usually more successful. Or at least entertaining. Reminds me of how a young child draws.
  • ravila4 5 months ago
    It looks to me like it's regurgitating training data. The biggest success I ever had with drawing ASCII art was with the GPT-4.1 model a while back.
  • homeonthemtn 5 months ago
    The emojis are painful.
  • CTOSian 5 months ago
    Google Gemini is no better LMAO https://g.co/gemini/share/64e01b99c217