4 points by grantseltzer 16 hours ago | 3 comments
  • dtagames 15 hours ago
    Different cultures use different amounts of politeness and flowery speech. The English standard influenced American style, but it's very different from how people speak to each other in business in Poland, for example.

    Like @NitpickLawyer said, it's the resulting content that matters, not how it's presented. If a person anthropomorphizes an LLM in their mind (rather than just in their speech patterns), then they probably have pre-existing mental problems.

    People used to also talk to burning bushes.

    • grantseltzer 15 hours ago
      > it's the resulting content that matters, not how it's presented

      What a wild thing to say. If you had a coworker who was brilliant and taught you many great things, but only screamed instead of talking, would you feel the same way?

      > If a person anthropomorphizes an LLM in their mind (rather than just in their speech patterns), then they probably have pre-existing mental problems.

      Correct, and that's why these tools should be built responsibly, under the assumption that people with mental health problems are going to use them. It's clear from the article I linked (and my wording when linking to it) that these tools can exacerbate such issues. ChatGPT told him that he was sane and that his mom was trying to kill him. He didn't understand what an LLM actually was.

  • dnissley 15 hours ago
    Funny, I use a variation of the eigenprompt to give my chatgpt even more personality:

    https://x.com/eigenrobot/status/1782957877856018514

  • NitpickLawyer 16 hours ago
    I mean, sure, but you lose a ton of learned style from books and other sources. You can prompt it however you want, but what most people want are useful results (where "useful" can mean anything from natural tone to fun to accuracy). Unless you can show that this outperforms "regular" prompting, or "please/thank you", at the end of the day it's just a prompt.
    • grantseltzer 15 hours ago
      I'm not claiming the purpose of this prompt is to get better information. Yes, it's just a prompt.

      You're revealing quite a lot of bias when you say "what most people want are useful results." Maybe in our circles of software engineers or lawyers, but many people are using AI for companionship. Even for those not seeking companionship, unless you have a very clear understanding of how LLMs work, it's easy to get caught up thinking that the chatbot you're talking to is "thinking" or "feeling." I feel companies that offer chatbots should be more responsible about this, as it can be very dangerous.