18 points by 1vuio0pswjnm7 3 hours ago | 6 comments
  • awakeasleep 27 minutes ago
    You can prevent a good bit of this for your friends and family by going into their ChatGPT settings > Personalization > Base Style and Tone: choose Efficient, and then choose "less" for warmth, enthusiasm, and emoji.

    It makes a remarkable difference.

  • rustyhancock an hour ago
    RLHF optimizes for low-creativity sycophants.

    Possibly this is a bigger problem than LLMs existing at all.

  • RcouF1uZ4gsC 15 minutes ago
    So like McKinsey consultants, but at the personal level instead of the corporate and government level.

    And much cheaper.

  • jqpabc123 3 hours ago
    If you ask for it, AI chatbots will validate lots of stuff --- bad business or political decisions for example.
    • i-e-b an hour ago
      An electronic monk
    • ajuc an hour ago
      You don't need to ask for it. They default to validation.
  • renewiltord 16 minutes ago
    Should it be legal for mentally disabled people to have free access to Internet services? I believe not. They should have to ask permission from a government proctor if they have a diagnosed mental disability. This will protect them from harm.

    E.g. if you have free internet access as an ADHD patient it’s just going to ruin your life. Make it so you have to have a video chat with your government proctor and you will help these people live successful lives no longer encumbered by these problems. The proctor would obviously refuse diagnosed schizophrenics access to LLMs.

    We need to protect our most vulnerable. These tools are like heavy equipment. An impaired user will hurt themselves.

  • cheald 2 hours ago
    It's best to think of instruct-tuned LLMs as mirrors rather than intelligences. They generally reflect what you're putting into them, but they do it in a way that can easily masquerade as wisdom. I think this makes it really easy for people to self-delude.