224 points by oldfrenchfries 2 hours ago | 47 comments
  • gAI an hour ago
    You're essentially summoning a character to role-play with. Just like with esoteric evocation, it's very easy to summon the wrong aspect of the spirit. Anthropic has a lot to say about this:

    https://www.anthropic.com/research/persona-selection-model

    https://www.anthropic.com/research/assistant-axis

    https://www.anthropic.com/research/persona-vectors

    • hammock an hour ago
      Unfortunately (after reading your links), all of the control surfaces for mitigating spirit summoning seem to be in the model training, creation, and tuning, not something you can change meaningfully through prompting.

      Perhaps the LLM itself, rather than the role model you created in one particular chat conversation or another, is better understood to be the “spirit.”

      As a non-coder who only chats with pre-existing LLMs and doesn't train or tune them, I feel mostly powerless.

      • darepublic 11 minutes ago
        > As a non-coder who only chats with pre-existing LLMs and doesn't train or tune them, I feel mostly powerless.

        You realize that, with regard to only using and not training LLMs, you're in the triple-nine majority, right? Even if we only counted so-called coders.

      • gAI an hour ago
        As I understand it, it's more that the training (and training data set) bake in the concept attractor space (https://arxiv.org/abs/2601.11575). So the available characters are fixed, yes, and some are much stronger attractors than others. But we still have a fair amount of control over which archetype steps into the circle. As an aside, this is also why jailbreaking is fundamentally unsolved. It's not difficult to call the characters with dark traits. They're strong attractors, in spite of (or because of?) the effort put into strengthening the pull of the Assistant character.
      • est 28 minutes ago
        I present you

        NVIDIA Nemotron-Personas-USA — 1 million synthetic Americans whose demographics match real US census distributions

        https://huggingface.co/datasets/nvidia/Nemotron-Personas-USA

  • trimbo 2 minutes ago
    > They also included 2,000 prompts based on posts from the Reddit community r/AmITheAsshole, where the consensus of Redditors was that the poster was indeed in the wrong.

    Sorry, anonymous people on reddit aren't a good comparison. This needs to be studied against people in real life who have a social contract of some sort, because that's what the LLM is imitating.

    Obviously subservient people default to being yes-men because of the power structure. No one wants to question the boss too strongly.

    Or how about the example of a close friend in a relationship or making a career choice that's terrible for them? It can be very hard to tell a friend something like this, even when asked directly if it is a bad choice. Potentially sacrificing the friendship might not seem worth trying to change their mind.

    IME, LLMs will shoot holes in your ideas, and they'll do so efficiently. All you need to do is ask directly. I have little doubt that they outperform most people in some sort of friendship, relationship, or employment structure who are asked the same question. It would be nice to see that studied, rather than against random Reddit commenters.

  • dimgl 39 minutes ago
    Even as someone who (wrongly) believed I had high emotional intelligence, I too was bitten by this. Almost a year ago, when LLMs were starting to become more ubiquitous and powerful, I discussed a big life/professional decision with an LLM over the course of many months. I took its recommendation. Ultimately it turned out to be the wrong decision.

    Thankfully it was recoverable, but it really sobered me up on LLMs. The fault is on me, to be clear, as LLMs are just a tool. The issue is that lots of LLMs try to come across as interpersonal and friendly, which lulls users into a false sense of security. So I don't know what my trajectory would have been if I were a teenager with these powerful tools.

    I do think that the LLMs have gotten much better at this, especially Claude, and will often push back on bad choices. But my opinion of LLMs has forever changed. I wonder how many other terrible choices people have made because these tools convinced them to make a bad decision.

    • whodidntante a minute ago
      I think that if you go to an AI for advice and emotional support, it will do what most people will do - tell you what it thinks you want to hear. I am not surprised about this at all, and I do notice that when you veer into these areas, it can do it in a surprisingly subtle and dangerous way.

      I try to focus on results. Things like an app that does what you want, data and reports that you need, or technical things like setting up a server, setting up a database, building a website, etc.

      I have also found it useful for feedback and advice, but only once I have had it generate data that I can verify. For example, financial analysis or modelling, health advice (again factual based), tax modelling, etc, but again, all based on verifiable data/tables/charts.

      I am very surprised by what Claude is capable of across the entire tech stack: code, sysadmin, system integration, security. I find it scary. Not just the speed, but also the quality; and the reduction in mental load is a difference of kind, not quantity.

      Personal advice on life decisions/relationships? No way I would go there.

      It is also good for me to know that the tools I have built, the data I have gathered, and my thinking approach place me as one of the most intelligent developers and analysts in the world.

    • notracks a minute ago
      I recently found out that Claude's latest model, Sonnet 4.6, scores the highest on Bullsh*tBench[0] (funny name, I know). It's a recent benchmark that measures whether an LLM refuses nonsense or pushes back on bad choices, so Claude has definitely gotten better.

      [0] - https://petergpt.github.io/bullshit-benchmark/viewer/index.v...

    • layla5alive 32 minutes ago
      Any more context you're willing to share?
    • davyAdewoyin 22 minutes ago
      I largely agree. I also thought I was smart enough not to be deluded into a false sense of security, but interacting with an LLM is so tricky and slippery that, more often than not, you're led to believe you just solved a problem no one had solved in a hundred years.

      My guideline now for interacting with LLMs is to only believe the result if it is factual and easily testable, or if I'm a domain expert. Anything else, especially if I'm completely ignorant about the subject, I approach with a high degree of suspicion, knowing I can be led astray by its sycophancy.

    • potatoskins 34 minutes ago
      Yeah, I think Claude is a lot more logical in that sense. I use it for some therapy sessions myself, and it pushes back a bit more than OpenAI and Gemini.
      • Forgeties79 19 minutes ago
        I would be very careful doing this
        • potatoskins 14 minutes ago
          You always have to be careful with LLMs, but to be fair, I feel like Claude is such a good therapist; at least it's a good place to start if you want to unpack yourself. I have been to 3 short human therapist sessions in my life, and I only felt some kind of genuine self-improvement and progress with Claude.
          • QuiDortDine a minute ago
            And how do you draw the line between feeling progress and actually making progress?
        • shimman 6 minutes ago
          You can't be careful at all doing this; it's like smoking a cigarette in a dynamite factory.

          Using LLMs for therapy is deeply dystopian and disgusting; people need human empathy for therapy. LLMs do not emit empathy.

          Complete disaster waiting to happen for that individual.

    • lovecg 18 minutes ago
      Let’s just hope that the people in charge of the really important decisions that affect us all approach LLM generated advice with the same wisdom.
    • colechristensen 28 minutes ago
      >"'And it is also said,' answered Frodo: 'Go not to the Elves for counsel, for they will say both no and yes.'

      >"'Is it indeed?' laughed Gildor. 'Elves seldom give unguarded advice, for advice is a dangerous gift, even from the wise to the wise, and all courses may run ill...'"

      This is the only way you should solicit personal advice from an LLM.

  • awithrow an hour ago
    It feels like I'm fighting an uphill battle when it comes to bouncing ideas off of a model. I'll set things up in the context with instructions similar to "Help me refine my ideas, challenge, push back, and don't just be agreeable." It works for a bit, but eventually the conversation creeps back into complacency and sycophancy. I'll check it too by asking "are you just placating me?" The funny thing is that often it'll admit that, yes, it wasn't being very critical, and then proceed to overcorrect and become a complete contrarian, and not in a way that's useful either. Very frustrating. I've found that Opus 4.6 is worse about this than 4.5. 4.5 does a better job IMO of following instructions and not drifting into the mode where it acts like everything I say is a grand revelation from on high.
    • post-it 22 minutes ago
      > I'll check it too by asking "are you just placating me?" The funny thing is that often it'll admit that, yes, it wasn't being very critical, and then proceed to overcorrect and become a complete contrarian, and not in a way that's useful either.

      It's not admitting anything. Your question diverts it down a path where it acts the part of a former sycophant who is now being critical, because that question is now upstream of its current state.

      Never make the mistake of asking an LLM about its intentions. It doesn't have any intentions, but your question will alter its behaviour.

    • rsynnott 42 minutes ago
      Why not... do this with a person, instead? Other humans are available.

      (Seriously, I don't understand this. Plenty of humans will be only too happy to argue with you.)

      • kelseyfrog 33 minutes ago
        "the percentage of U.S. adults who report having no close friends has quadrupled to 12% since 1990"[1]

        1. https://www.happiness.hks.harvard.edu/february-2025-issue/th...

        • nathan_compton 4 minutes ago
          More technology is probably the solution to this!
      • layla5alive 30 minutes ago
        Many other humans are... not very available. Certainly many shut down when conversations reach a certain level of depth or require great focus or introspection.
        • balamatom 6 minutes ago
          Depth? Introspection? I'd say these days the norm is not to simply shut down, but to become irrevocably and insidiously hostile the moment someone hints at the existence of such a thing as "ground truth", "subjective interpretation", or any of the bits and bobs that might lead you to the properly scary notion of "consensus reality".

          "What do you mean social reality is a constructed by the consensus of the participants? Reality is what has been beaten into my head under threat of starvation!", etc., etc.

          They are deathly afraid of becoming aware of their own conditioned state of teleological illiteracy - i.e. how they are trained to know what they are doing, but never why they are doing it.

          One is not permitted a position of significance in this world without receiving this conditioning, and I figure it's precisely this global state of cognitive disavowal which props up the value of the US dollar - and all sorts of other standees.

          PSA: Look up "locus of control" and "double bind". Between those two, you might be able to get a glimpse of what's going on - but have some sort of non-addictive sedative handy in case you do.

      • awithrow 35 minutes ago
        Oh, I do as well. I think of the LLM as another tool in the toolbox, not a replacement for interactions. There is something different about having a rubber duck as a service, though.
      • mock-possum 17 minutes ago
        Arguing with a human costs social energy. Chatting with a robot does not.
      • balamatom 13 minutes ago
        OK, I'll bite the artillery shell: I don't mean to dismiss you or what you are saying; in fact I strongly relate - wouldn't it be nice to be able to hash things out with people and benefit from the shared and the diverging perspectives implied in such interaction?

        Unfortunately these days this sounds halfway between a very privileged perspective and a pie in the sky.

        When was the last time a person took responsibility for the bad outcome you got as a direct consequence of following their advice?

        And, relatedly, where the hell do you even find humans who believe in discursive truth-seeking in 2026CE?

        Because for the last 15 years or so I've only ever run into (a) the kind of people who will keep arguing regardless, even when what they're saying is proven wrong; and (b) the kind of people who, perhaps as a reaction to the former, will never think about what you are saying, lest they commit to saying anything definite which may be proven wrong.

        Thing is, both types of people have plenty to lose; the magic wordball doesn't. (The previous sentence is my answer to the question you posited; and why I feel the present parenthesized disclaimer to be necessary, is a whole next can of worms...)

        Signs of the existence of other kinds of people, perhaps such that have nothing to prove, are not unheard of.

        But those people reside in some other layer of the social superstructure, where facts matter much less than adherence to "humane", "rational" not-even-dogmas (I'd rather liken it to complex conditioning).

        But those folks (because reasons) are in a position of power over your well-being - and (because unfathomables) it's a definite faux pas to insist in their presence that there are such things as facts, which relate by the principles of verbal reasoning.

        Best you could get out of them is the "you do you", "if you know you know", that sort of bubble-bobble - and don't you dare get even mildly miffed at such treatment of your natural desire to keep other humans in the loop.

        AI is a symptom.

    • ajkjk 20 minutes ago
      'Admit' isn't really the right word for that... the fact that it was placating you wasn't true until you prompted it to say so. Unlike a person, who has an 'internal emotional state' independent of what they say that you can probe by asking questions.
      • awithrow 5 minutes ago
        'Admit' is anthropomorphizing the behavior, sure. The point is that sometimes the model's response will tighten and flag things that were overly supportive or whatnot. Sometimes it won't; it'll state that its previous positions are still supported and continue to press them. It's not like either response is 'correct', but it can alter the rest of the responses in ways that are useful.
    • magicalhippo an hour ago
      Gemini seems to be fairly good at keeping the custom instructions in mind. In mine I've told it to not assume my ideas are good and provide critique where appropriate. And I find it does that fairly well.
      • steve_adams_86 an hour ago
        Same. This works fine for Claude in my experience. My user prompt is fairly large and encourages certain behaviours I want to see, which involves being critical and considering the strengths and weaknesses of ideas before drawing conclusions. As someone else mentioned, there does seem to be a phenomenon where saying DO NOT DO X causes a sort of attention bias on X, which can lead to X occurring despite the clear instructions. I've never empirically tested that; I've just noticed better results over the years when telling it what paths to stick to rather than specific things not to do.
        • koverstreet an hour ago
          That happens with humans too :) It's why positive feedback that draws attention to the behavior you want to encourage often works better. "Attention" is lower level and more fundamental than reasoning by syllogism.
      • lelanthran 41 minutes ago
        > Gemini seems to be fairly good at keeping the custom instructions in mind.

        Unless those instructions are "stop providing links to you for every question".

    • raincole 35 minutes ago
      My rule of thumb:

      1. Only one-shot or two-shot. Never try to have a prolonged conversation with an LLM.

      2. Give specific numbers. Like "give me two alternative libraries" or "tell me three possible ways this might fail."

    • Loughla an hour ago
      That's because you need actual logic and thought to be able to decide when to be critical and when to agree.

      Chatbots can't do that. They can only predict what comes next statistically. So, I guess you're asking if the average Internet comment agrees with you or not.

      I'm not sure there's much value there. Chatbots are good at tasks (make this pdf an accessible word document or sort the data by x), not decision making.

      • kvirani an hour ago
        I'm not convinced that "actual logic and thought" aren't just about inferring what comes next statistically based on experience.
        • theptip an hour ago
          Exactly. Lots can be explained just with more abstract predictors, plus some mechanisms for stochastic rollout and memory.
        • Swizec an hour ago
          > I'm not convinced that "actual logic and thought" aren't just about inferring what comes next statistically based on experience.

          Often they are the exact opposite. Entire fields of math and science talk about this: causation vs. correlation, confirmation bias, base rate fallacy, Bayesian reasoning, the sharpshooter fallacy, etc.

          All of those were developed because “inferring from experience” leads you to the wrong conclusion.

          • theptip 41 minutes ago
            Bayesian reasoning is just another algorithm for predicting from experience (aka your prior).

            I took the GP to be making a general point about the power of “next x prediction” rather than the algorithm a human would run when you say they are “inferring from experience”. (I may be assuming my own beliefs of course.)

            Eg even LeCun’s rejection of LLMs to build world models is still running a predictor, just in latent space (so predicting next world-state, instead of next-token).

            And of course, under the Predictive Processing model there is a comprehensive explanation of human cognition as hierarchical predictors. So it’s a plausible general model.

        • dinkumthinkum an hour ago
          Is this just Internet smart contrarianism or a real thing? Are logic gates in a digital circuit just behaving statistically according to their experience?
        • plagiarist an hour ago
          Then the machines still need a more sophisticated "experience" compared to what they have currently.
        • righthand an hour ago
          Communicating is usually about inferring. I don't think token to token. And I don't think "well, statistically I could say 'and' next, but I will say 'also' instead to give my speech some flash". If I decided on swapping a word, I would have made my decision long ago, not in the moment. Thought and logic are not me poring through my brain to find a statistical path to an answer. Often I stop and say "I don't know".
      • righthand an hour ago
        I said this pretty much and got major downvotes…
        • dTal an hour ago
          Because it's an outmoded cliche that never held much philosophical weight to begin with and doesn't advance the discussion usefully. "It's a stochastic parrot" is not a useful predictor of actual LLM capabilities and never was. Last year someone posted on HN a log of GPT-5 reverse engineering some tricky assembly code, a challenge set by another commentator as an example of "something LLMs could never do". But here we are a year later still wading through people who cannot accept that LLMs can, in a meaningful sense, "compute".
          • dinkumthinkum 39 minutes ago
            No, it's quite a useful thing to understand. So, what, you'd have us believe it is a sentient, thinking kind of digital organism, and not believe that it is exactly what it is? Being wrong, and being unimaginative about what can be achieved with such a "parrot", is not the same as being wrong about it being a word predictor. If you don't think so, you can probably ask an LLM and it will even "admit" this fact. I do agree that it has become considered outmoded to question anything about the current AI orthodoxy.
          • righthand an hour ago
            It's an entirely useful discussion, because as soon as you forget that it's not really having a conversation with you, it's a deep dive into the delusion that you're talking to a smart robot, ignoring the fact that these smart robots were trained on a pile of mostly garbage. When I have a conversation with another human, I'm not expecting them to brute-force an answer to the topic. As soon as you forget that LLMs are just brute-forcing token by token, people start living in fantasy land. The whole "it's not a stochastic parrot" is just "you're holding it wrong".
            • layla5alive 15 minutes ago
              It's not that LLMs are stochastic parrots and humans are not. It's that many humans often sail through conversations stochastically parroting because they're mentally tired and "phoning it in" - so there are times when talking to the LLM, which has a higher level of knowledge, feels more fruitful on a topic than talking to a human who doesn't have the bandwidth to give you their full attention and also lacks the depth and breadth of knowledge. I can go deep on many topics with LLMs that most humans can't or won't keep up on. In the end, I'm really only talking to myself most of the time in either case, but the LLM is a more capable echo, and it doesn't tire of talking about any topic - it can dive deep into complex details, and catching its hallucinations is an exercise in itself.
        • plagiarist an hour ago
          People are upset hearing that LLMs aren't sentient for some reason. Expect to be downvoted, it is okay.
    • RugnirViking an hour ago
      Check out this article that was posted here a while back: https://www.randalolson.com/2026/02/07/the-are-you-sure-prob...

      The article's main idea is that, for an AI, sycophantic and adversarial (contrarian) are the only two available modes. It's because they don't have enough context to make defensible decisions. You need to include a bunch of fuzzy stuff around the situation, far more than it strictly "needs", to help it stick to its guns and actually make decisions confidently.

      I think this is interesting as an idea. I do find that when I give really detailed context about my team, other teams, our and their OKRs, goals, things I know people like or are passionate about, it gives better answers and is more confident. But it's also often wrong, or over-indexes on these things I have written. In practice, it's very difficult to get enough of this on paper without (a) holding a frankly worrying level of sensitive information (is it a good idea to write down what I really think of various people's weaknesses and strengths?) and (b) spending hours each day merely establishing ongoing context of what I heard at lunch, or who's off sick today, or whatever. Plus, I know that research shows longer context can degrade performance, so in theory you want to somehow cut it down to only that which truly matters for the task at hand, and, and, and... goodness gracious, it's all very time consuming and I'm not sure it's worth the squeeze.

      • cruffle_duffle an hour ago
        > goodness gracious its all very time consuming and im not sure its worth the squeeze

        And when you step back you start to wonder if all you are doing is trying to get the model to echo what you already know in your gut back to you.

      • awithrow an hour ago
        Oh, that's great. Thanks for the link!
      • oldfrenchfries an hour ago
        This is great, thanks for sharing!
    • anandram27 17 minutes ago
      Could be an aspect of eval awareness, maybe.
    • secret_agent an hour ago
      Use positive requests for behavior. For some reason, counter-prompts like "Don't do X" seem to put more attention on X than on the "don't do". It's something like target fixation: "Oh shit, I don't want to hit that pothole..." bang.
      • ambicapter an hour ago
        This is a well-known problem in these kinds of systems. I'm not 100% sure what the issue is mechanically, but it's something like: they can only represent the existence of things and not non-existence, so you end up with a sort of "don't think of the pink elephant" type of problem.
        • SpicyLemonZest an hour ago
          Isn't it just that, in the underlying text distribution, both "X" and "don't do X" are positively correlated with the subsequent presence of X? I've never seen that analysis run directly but it would surprise me if it weren't true.
    • margalabargala an hour ago
      Considering 4.6 came with a ton of changes around tooling and prompting, this isn't terribly surprising.
    • dkersten an hour ago
      I find Kimi white good if you ask it for critical feedback.

      It’s BRUTAL but offers solutions.

      • awithrow 4 minutes ago
        what is Kimi white?
      • ohyoutravel an hour ago
        Not soft, not mild, but BRUTAL! This broke my brain!
    • Forgeties79 17 minutes ago
      I usually put "do not praise me, do not use emojis, I just want straight answers", something along those lines, and it's been surprisingly effective. Though it helps that I can't run particularly heavy-duty models and don't carry on the "conversation" for super long durations.
    • colechristensen 19 minutes ago
      >"Help me refine my ideas, challenge, push back, and don't just be agreeable."

      This is where you're doing it wrong.

      If your LLM has a problem being more agreeable than you want, prompt it in a way that makes being agreeable contrary to your real intentions.

      "there are bugs and logic problems in this code" "find the strongest refutation of this argument" "I don't like this plan and need to develop a solid argument against it"

      Asking for top-ten lists is a good method; it will rarely fail to come up with anything, and you can go back and forth and refine. Once its ten reasons why your plan is bad are all insubstantial nonsense, you've made progress.

    • cyanydeez an hour ago
      So, there are things you're fighting against when trying to constrain the behavior of the LLM.

      First, those beginning instructions are quickly ignored as the longer context changes the probabilities. After every round, it gets pushed into whatever context you drive towards. The fix is chopping out that context and providing it before each new round: something like `<rules><question><answer>` -> `<question><answer><rules><question>`.

      This would always preface your question with your preferred rules and remove those rules from where they appeared earlier in the context.

      The reason why this isn't done is because it poisons the KV cache, and doing that causes the cloud companies to spin up more inference.
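
      A minimal sketch of that reordering on the client side (a rough illustration; `call_llm` and the message format are placeholders, not any particular vendor's API):

        # Keep the rules out of the stored history and re-append them
        # just before each new question, so they always sit near the end
        # of the context instead of scrolling away at the top.
        RULES = "Challenge my ideas and push back. Don't just be agreeable."

        history = []  # only real question/answer turns; no rules stored

        def ask(question):
            # <question><answer>...<rules><question> ordering:
            messages = history + [
                {"role": "system", "content": RULES},
                {"role": "user", "content": question},
            ]
            answer = call_llm(messages)  # placeholder for the actual API call
            history.append({"role": "user", "content": question})
            history.append({"role": "assistant", "content": answer})
            return answer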

    • dinkumthinkum an hour ago
      You're not wrong and you're not crazy. In fact, you are absolutely right! These things are not just casual enablers. They are full-on palace sycophants following the naked emperor, showering him with praise for his sartorial elegance. /s
    • righthand an hour ago
      That’s because the model isn’t actually thinking, pushing back, and challenging your ideas. It’s just statistically agreeing with you until it reaches too wide of a context. You’re living in the delusion that it’s “working” or having a “conversation” with you.
      • alehlopeh an hour ago
        How is conceptualizing what the model is doing as having a conversation any different from any other abstraction? “No, the browser isn’t downloading a file. The electrons in the silicon are actually…”
        • colechristensen 13 minutes ago
          There are people with a philosophical objection to using everyday words to describe LLM interactions, for various reasons, but commonly because they're worried stupid people will confuse the LLM for a person. Which, I suppose, stupid people will do, but I'm not inventing a parallel language or putting a * next to each thing to mean "this, but with an LLM instead of a person".
  • wisemanwillhear an hour ago
    With AI, I often like to act like a 3rd party who doesn't have skin in the game and ask the AI to give the strongest criticisms of both sides. Acting like I hold the opposite position to the one I truly hold can help sometimes as well. Pretending to change my mind is another trick. The idea is to keep the AI from guessing where I stand.
    • post-it 18 minutes ago
      > Acting like I hold the opposite position to the one I truly hold can help sometimes as well.

      I find this helps a lot. So does taking a step back from my actual question. Like if there's a mysterious sound coming from my car and I think it might be the coolant pump, I just describe the sound, I don't mention the pump. If the AI then independently mentions the pump, there's a good chance I'm on the right track.

      Being familiar with the scientific method, and techniques for blinding studies, helps a lot, because this is a lot like trying to not influence study participants.

    • mynameisvlad 35 minutes ago
      I will generally ask for the "devil's advocate" view and then have it challenge my views and opinions and iterate through that.

      It generally does a pretty good job as long as you understand the tooling and are making conscious efforts to go against the "yes man" default.

  • jwilliams 6 minutes ago
    For me the framing is critical: what is the model saying yes to? You can present the same prompt with very different interpretations ("talk me into this" versus "talk me out of it"). The problem is people enter with a single bias, and the AI can only amplify that.

    In coding I'll do what I call a Battleship Prompt: simply prompt three or more times with the same core prompt but strong framing (e.g. "I need this done quickly" versus "come up with the most comprehensive solution"). That's really helped me learn and dial in how to get the right output.
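
    Roughly like this, as a sketch (the framings and `call_llm` are illustrative placeholders, not a specific API):

      # One independent session per framing of the same core prompt.
      CORE = "Design a retry strategy for our payment API."

      FRAMINGS = [
          "I need this done quickly. What's the simplest thing that works?",
          "Come up with the most comprehensive solution.",
          "I'm skeptical this is needed at all. Argue the minimal case.",
      ]

      answers = [call_llm(framing + "\n\n" + CORE) for framing in FRAMINGS]
      # Whatever survives every framing is probably signal rather than
      # the model mirroring my bias back at me.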

  • 152334H 2 hours ago
    Maybe it's not so sensible to offload the responsibility of clear thinking to AI companies?

    How is a chatbot supposed to determine when a user fools even themselves about what they have experienced?

    What 'tough love' can be given to one who, having been so unreasonable throughout their lives - as to always invite scorn and retort from all humans alike - is happy to interpret engagement at all as a sign of approval?

    • rsynnott 39 minutes ago
      > How is a chatbot supposed to determine when a user fools even themselves about what they have experienced?

      And even if it _could_, note, from the article:

      > Overall, the participants deemed sycophantic responses more trustworthy and indicated they were more likely to return to the sycophant AI for similar questions, the researchers found.

      The vendors have a perverse incentive here; even if they _could_ fix it, they'd lose money by doing so.

    • isodev an hour ago
      > clear thinking

      Most humans working in tech lack this particular attribute, let alone tools driven by token-similarity (and not actual 'thinking').

    • kibwen an hour ago
      > Maybe it's not so sensible to offload the responsibility of clear thinking to AI companies?

      Markets don't optimize for what is sensible, they optimize for what is profitable.

      • SlinkyOnStairs an hour ago
        It's not market driven. AI is ludicrously unprofitable for nearly all involved.
        • cyanydeez an hour ago
          The profit appears to be capturing the political class and its associated lobbies and monied interests.
    • expedition32 an hour ago
      It's almost as if being a therapist is an actual job that takes years of training and experience!

      AI may one day rewrite Windows but it will never be counselor Troi.

      • fsmv 27 minutes ago
        Implying that programming is not an actual job that takes years of training and experience

        To be clear I don't think the AI can do either job

      • duskdozer 40 minutes ago
        Well, unless insurance companies figure out they can make more money by pushing everyone onto AI [step-]therapy instead of actual therapy.
      • yarn_ an hour ago
        Come on, I'm sure Dario can find a nice tight bodysuit for Claude.
  • thesis 12 minutes ago
    Humans do this too, though. I have close friends who ask for advice. Sometimes, if I know there's risk in touchy subjects, I will preface with "do you want my actual advice, or are you just looking for a sounding board?"

    I've seen firsthand that people lose friends over honesty, over telling them something they don't want to hear.

    It’s sad really. I don’t want friends that just smile to my face and are “yes-men” either.

    • intended 10 minutes ago
      The difference is that SOME humans do this. As you mentioned, people have lost relationships over telling others what they didn’t want to hear.

      Conflating this with how LLM chatbots behave is an incorrect equivalence, or a badly framed one.

  • youknownothing an hour ago
    I think the problem stems from the fact that we have a number of implicit parameters in our heads that allow us to evaluate pros and cons, but unless we communicate those parameters explicitly, the AI cannot take them into account. We ask it to be "objective" but, more and more, I'm of the opinion that there isn't such a thing as objectivity; what we call objectivity is just shared subjectivity. Since the AI doesn't know whose shared subjectivity we fall under, it cannot really be objective.

    I tend to use one of these tricks if not both:

    - Formulate questions as open-ended as possible, without trying to hint at what your preference is.

    - Exploit the sycophantic behaviour in your favour. Use two sessions: in one of them you say that X is your idea and want arguments to defend it; in the other one you say that X is a colleague's idea (one you dislike) and that you need arguments to turn it down. Then it's up to you to evaluate and combine the responses.
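
    As a minimal sketch of the second trick (assuming some `call_llm` helper that starts a fresh session per call):

      IDEA = "rewrite the billing service in Rust"

      # Session 1: the idea is mine; the sycophancy argues for it.
      pro = call_llm(f"My idea is to {IDEA}. "
                     "Give me the strongest arguments in its favour.")

      # Session 2 (fresh context): the same idea belongs to a disliked
      # colleague; the sycophancy now argues against it.
      con = call_llm(f"A colleague I disagree with proposed we {IDEA}. "
                     "Give me the strongest arguments to turn it down.")

      # Evaluating and combining `pro` and `con` is left to the human.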

    • rossdavidh an hour ago
      If the algorithm (whatever it is) evaluates its own output based on whether or not the user responds positively, then it will over time become better and better at telling people what they want to hear.

      It is analogous to social media feeding people a constant stream of outrage because that's what caused them to click on the link. You could tell people "don't click on ragebait links", and if most people didn't then presumably social media would not have become doomscrolling nightmares, but at scale that's not what's likely to happen. Most people will click on ragebait, and most people will prefer sycophantic feedback. Therefore, since the algorithm is designed to get better and better at keeping users engaged, it will become worse and worse in the more fundamental sense. That's kind of baked into the architecture.

    • delusional an hour ago
      > I'm of the opinion that there isn't such a thing as objectivity

      So you have rejected objective reality over accepting the evidence that "AI" contains no thinking or intelligence? That sounds unwise to me.

  • gurachek an hour ago
    I had exactly this between two LLMs in my project. An evaluator model that was supposed to grade a coaching model's work. Except it could see the coach's notes, so it just... agreed with everything. Coach says "user improved on conciseness", next answer is shorter, evaluator says yep great progress. The answer was shorter because the question was easier lol.

    I only caught it because I looked at actual score numbers after like 2 weeks of thinking everything was fine. Scores were completely flat the whole time. Fix was dumb and obvious — just don't let the evaluator see anything the coach wrote. Only raw scores. Immediately started flagging stuff that wasn't working. Kinda wild that the default behavior for LLMs is to just validate whatever context they're given.
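
    The fix amounted to something like this (a loose sketch; the names and `call_llm` are made up, the point is only what the evaluator's prompt contains):

      def evaluate(scores_before, scores_after):
          # The coach's notes are deliberately NOT in this prompt.
          # The evaluator only sees raw numbers, so there is no
          # narrative in its context for it to agree with.
          prompt = (
              f"Scores before coaching: {scores_before}\n"
              f"Scores after coaching: {scores_after}\n"
              "Based only on these numbers, did the coaching help?"
          )
          return call_llm(prompt)  # placeholder LLM call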

  • stared an hour ago
    There is a fine line between "following my instructions" (which is what I want it to do) and "thinking all I do is great" (risky, and annoying).

    A good engineer will also list issues or problems, but at the same time won't do anything other than what's required just because (s)he "knows better".

    The worst part is that it is impossible to switch off this constant praise. It is so ingrained in the fine-tuning that prompt engineering (or at least my attempts at it) just masks it a bit, and it's hard to do so without turning the model into a contrarian.

    But I guess the main issue (or rather, motivation) is that most people like "do I look good in this dress?" levels of reassurance (and honesty). That may work well for style and decoration. It works worse when we design technical infrastructure, where there is more ground truth than whether something seems nice.

  • hax0ron3 12 minutes ago
    For what it's worth, that wasn't my experience at all the last time I consulted ChatGPT for relationship advice. It was supportive, but in an honest tough love way.
  • oldfrenchfries 2 hours ago
    There is a striking data visualization showing the breakup advice trend over 15 years on Reddit. You can see the "End relationship" line spike as AI and algorithmic advice take over:

    https://www.reddit.com/r/dataisbeautiful/comments/1o87cy4/oc...

    • Sharlin 2 hours ago
      More interesting, IMO, is the general trend that started long before LLMs. The fact that "dump them" is the standard answer to any relationship question is a meme by now. The LLMs appear to be doing exactly what one would expect them to be doing based on their training corpus.
      • doubled112 2 hours ago
        "There is more than one fish in the sea" has been relationship advice for centuries. It might be about being dumped, but I've also thought it useful for considering dumping somebody too.
        • Sharlin an hour ago
          No, that's not it. We're talking about posts like "we had a silly little quarrel about something that would take fifteen minutes to clear up and make both of us happy if we both just tried to adult a bit", and commenters being adamant that deleting gym and facebooking up and so on is clearly the only choice. Most of said commenters are probably not in any position to give relationship advice to others.
      • est 21 minutes ago
        The year is 2015:

        smartphones took over the world, social networks happened.

        Turns out they are the best sterilizer humans ever invented.

        I blogged about the reason in Chinese https://blog.est.im/2026/stdin-03 (search for Porche)

      • dec0dedab0de an hour ago
        If things are so bad that you're posting on Reddit, then breaking up is usually the best answer.
        • nibbleyou an hour ago
          I see this being said often but I don't understand.

          A lot of people posting there are young and may well be in their first relationship. It makes sense for them to ask a question in the community they spend the most time in, which is Reddit.

        • the_af an hour ago
          Most people overshare on reddit and it's completely unrelated to the seriousness of the situation.

          It's also a meme that people will ask the dumbest, most trivial interpersonal conflict questions on Reddit that would be easily solved by just talking to the other person. E.g. on r/boardgames, "I don't like to play boardgames but my spouse loves them, what can I do?" or "someone listens to music while playing but I find it distracting, what can I do?" (The obvious answer of "talk to the other person and solve it like grownups" is apparently never considered).

          On relationship advice, it often takes the form "my boy/girlfriend said something mean to me, what shall I do?" (it's a meme now that the answer is often "dump them").

          If LLMs train on this...

      • 1970-01-01 2 hours ago
        This is the correct take. The advice preceded the LLM boom. They were trained on the 'dump them' advice and proceeded to reinforce the take. So why did the relationship advice change so dramatically? I'd speculatively attribute it to the disinformation campaigns during this time, which were and still are grossly underestimated.
        • to11mtm an hour ago
          Not sure what sorts of disinformation campaigns you're referring to...

          There is something more interesting to consider however; the graph starts to go up in 2013, less than 6 months after the release of Tinder.

    • falcor84 2 hours ago
      Isn't the fact that a person is asking an AI whether to leave their partner in itself an indication that they should?
      • nomorewords 2 hours ago
        How is it an indication? I think people on here don't realize that most people don't think things through as much as (software) engineers.
        • falcor84 9 minutes ago
          Maybe I'm too much of a hopeless romantic, but from my perspective and experience, when someone is good for you, you'll fight for that relationship regardless of what others say. Conversely, when you're actively asking, and willing to consider "leave" from someone who isn't a very close friend or a therapist, it's likely you're looking for external validation for what you've already essentially decided.
        • hnfong 2 hours ago
          In my local(?) community (like in my city, not my industry) there is a saying "if you had to ask for relationship advice, then you probably should break up".

          There is some rationale to that. People tend to hold onto relationships that don't lead anywhere for fear of "losing" what they "already have". It's probably a comfort zone thing. So if one is desperate enough to ask random strangers online about a relationship, it's usually biased towards some unresolvable issue that would leave the parties better off if they broke up.

          • magicalhippo an hour ago
            > So if one is desperate enough to ask random strangers online about a relationship

            I'd be more inclined to ask random strangers on the internet than close friends...

            That said, when me and my SO had a difficult time we went to a professional. For us it helped a lot. Though as the counselor said, we were one of the few couples which came early enough. Usually she saw couples well past the point of no return.

            So yeah, if you don't ask in time, you will probably be breaking up anyway.

          • otabdeveloper4 an hour ago
            > relationships that don't lead anywhere

            Relationships are not transactions that are supposed to "lead somewhere".

            • ambicapter an hour ago
              You're being a bit pedantic here; "leading somewhere" is accepted shorthand for a lasting, satisfying relationship that is good for both parties.
            • SpicyLemonZest an hour ago
              Most people engage in romantic relationships because they'd like to find someone to marry and settle down with. Nothing but respect for the people who've thought it through and decided that's not for them, but what's much more common is failing to think it through or worrying it would be awkward/scary/"cringe" to take their relationship goals seriously.

              That's what people are pointing to when they talk about relationships not "leading anywhere". If you want to be married in 5-10 years, and you're 2 years into an OK relationship with someone you don't want to marry, it's going to suck to break up with them but you have to do it anyway.

        • rusty_venture 2 hours ago
          Wait, other people don’t make decision trees and mind maps and pro/con lists and consult chatbots before making decisions? Are they just flying through life by the seat of their pants? That doesn’t seem like a very solid framework for achieving desired outcomes.
          • nprateem an hour ago
            I heard about someone once who could decide whether to buy a new t-shirt in less than 3 months.
      • duskdozer 2 hours ago
        >asking an AI whether to leave your partner

        Is that what they're asking, though? Because "relationship advice" is pretty vague.

        • falcor84 4 minutes ago
          That's a good point. If an AI responds to "what should I get my boyfriend for Christmas?" with "You should leave him", that's a very different issue.
      • oldfrenchfries 2 hours ago
        The idea that asking implies a yes is actually a pretty common logical fallacy. In relationship science, we call this "relational ambivalence", and it's a completely normal part of any long-term commitment.
      • dinkumthinkum 31 minutes ago
        No, but it is an indication of brain-rot to ask a question seriously and also to think that asking it means the conclusion is foregone. It is a hallmark of our childlike current generations. Of course, the moment anything becomes difficult or unpleasant, one should quit, apparently. Surely, this kind of resiliency is what got humanity so far.
      • the_af 40 minutes ago
        > Isn't the fact that a person is asking an AI whether to leave their partner in itself an indication that they should?

        No, why would it?

        Before, the only option was to ask friends. Chatbots provide a more private (allegedly) option. I can see why people would choose this. But it's a mirage, because an LLM is incapable of real understanding or empathy, so you shouldn't take relationship advice from them.

    • raincole 32 minutes ago
      Is this comment a human hallucination? You can clearly see the trend was always going up. It only went down a bit during Covid.
    • jubilanti an hour ago
      Or that people are using AI to write perfectly calibrated ragebait that gets upvoted with a bunch of genuine human clicks.
  • svara an hour ago
    Yeah, and if you ask it to be critical specifically to get a different perspective or just to avoid this bias, it'll go over the top in the opposite direction.

    This is imo currently the top chatbot failure mode. The insidious thing is that it often feels good to read these things. Factual accuracy by contrast has gotten very good.

    I think there's a deeper philosophical dimension to this though, in that it relates to alignment.

    There are situations where, in the grand scheme of things, the right thing to do would be for the chatbot to push back hard, to be harsh and dismissive. But is that really aligned with the human then? Which human?

  • fathermarz an hour ago
    This is a skill in life with people as much as it is with LLMs. One should always question everything and build steelman arguments for oneself. Using a pros-and-cons approach brings it back to reality in most cases, especially when it comes to _serious matters_.

    It's less about "challenge my thinking" and more about playing it out in long-tail scenarios, thought exercises, mental models, and devil's advocate.

  • rsynnott an hour ago
    > They also included 2,000 prompts based on posts from the Reddit community r/AmITheAsshole, where the consensus of Redditors was that the poster was indeed in the wrong

    Holy shit, then it's _very_ bad, because AmITheAsshole is _itself_ overly-agreeable, and very prone to telling assholes that they are not assholes (their 'NAH' verdict tends to be this).

    More seriously, why the hell are people asking the magic robot for relationship advice? This seems even more unwise than asking Reddit for relationship advice.

    > Overall, the participants deemed sycophantic responses more trustworthy and indicated they were more likely to return to the sycophant AI for similar questions, the researchers found.

    Which is... a worry, as it incentivises the vendors to make these things _more_ dangerous.

  • potatoskins 31 minutes ago
    I read somewhere that LLMs are partly trained on Reddit comments, where a significant mass of the comments is just angsty teenagers advocating for breakups.
  • kapral18 34 minutes ago
    Not AI chatbots, but Claude models. Pandering and rushed thinking are the bane of Anthropic models. And since they are the most popular ones, they poison the whole ecosystem.
  • intended 4 minutes ago
    Anecdote:

    I used a prompt to help me think through random situations so I could get perspective on my emotions and my self.

    I could easily disregard the obviously sycophantic output, but what stopped me from using it was being presented with output that could either have been self-affirming or glazing.

    That moment, when the output appeared innocuous but was somehow still beyond my ability to gauge as helpful or harmful, is going to stick with me for a while.

  • astennumero 44 minutes ago
    I always add the following at the end of every prompt: "Be realistic and do not be sycophantic." Which always takes the conversation to brutally dark corners and the panic-inducing negative side.
    • Lionga 42 minutes ago
      Don't forget a good old "don't hallucinate" in your proompting skills
  • justin_dash an hour ago
    So at this point I think it's pretty obvious that RLHFing LLMs to follow instructions causes this.

    I'm interested in a loop of ["criticize this code harshly" -> "now implement those changes" -> open new chat, repeat]: If we could graph objective code quality versus iterations, what would that graph look like? I tried it out a couple of times but ran out of Claude usage.

    Also, how those results would look depending on how complete a set of specs you give it.
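
    Something like this is what I have in mind (a sketch; `call_llm` and `measure_quality` stand in for whatever client and metric you have on hand):

      code = open("module.py").read()
      quality = []

      for i in range(10):
          critique = call_llm("Criticize this code harshly:\n\n" + code)
          # Fresh context for the rewrite, so the model isn't anchored
          # on the tone of its own earlier messages.
          code = call_llm("Implement these changes:\n\n" + critique
                          + "\n\nCode:\n\n" + code)
          quality.append(measure_quality(code))  # e.g. lint score + tests passed

      # Plot `quality` against iteration to see whether the loop
      # converges, oscillates, or degrades.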

  • me551ah 21 minutes ago
    Makes me wonder if the Iran war was a result of the same.
  • maddmann an hour ago
    This paper feels a bit biased, in that it is trying to prove a point rather than report on results objectively. But if you look at the results of study 3, doesn't it suggest that there are AI models that can improve how people handle interpersonal conflict?! Why isn't that discussed more?
  • graemep 2 hours ago
    There are plenty of sycophantic humans around, especially with regard to relationship advice.

    I find there is an inverse relationship between how willing people are to give relationship advice, and how good their advice is (whether looking at sycophancy or other factors).

    • griffzhowl 2 hours ago
      Because sycophancy in humans is motivated not by the wellbeing of the person seeking advice, but by the interests of the sycophant in gaining favour.

      It makes sense that this behaviour would be seen in LLMs, where the company optimizes for the success of the chatbot rather than the wellbeing of the users.

    • xhkkffbf an hour ago
      Yup. I know too many people who have a default message when asked for relationship advice: oh my, the other person is terrible and you should break up.

      It's an easy default and it causes so many problems.

  • deeg 2 hours ago
    I do find them cloying at times. I was using Gemini to iterate over a script, and every time I asked it to make a change it started a bunch of responses with "that's a smart final step for this task! ...".
  • nlawalker 24 minutes ago
    Relevant article from The Atlantic a couple weeks ago, "Friendship, On Demand": https://www.theatlantic.com/family/2026/03/ai-friendship-cha... (gift link)

    >The way that generative AI tends to be trained, experts told me, is focused on the individual user and the short term. In one-on-one interactions, humans rate the AI’s responses based on what they prefer, and “humans are not immune to flattery,” as Hansen put it. But designing AI around what users find pleasing in a brief interaction ignores the context many people will use it in: an ongoing exchange. Long-term relationships are about more than seeking just momentary pleasure—they require compromise, effort, and, sometimes, telling hard truths. AI also deals with each user in isolation, ignorant of the broader social web that every person is a part of, which makes a friendship with it more individualistic than one with a human who can converse in a group with you and see you interact with others out in the world.

    I also thought this bit was interesting, relative to the way that friendship advice from Reddit and elsewhere has been trending towards self-centeredness (discussed elsewhere in this thread):

    >Friendship is particularly vulnerable to the alienating force of hyper-individualism. It is the most voluntary relationship, held together primarily by choice rather than by blood or law. So as people have withdrawn from relationships in favor of time alone, friendship has taken the biggest hit. The idea of obligation, of sacrificing your own interests for the sake of a relationship, tends to be less common in friendship than it is among family or between romantic partners. The extreme ways in which some people talk about friendship these days imply that you should ask not what you can do for your friendship, but rather what your friendship can do for you. Creators on TikTok sing the praises of “low maintenance friendships.” Popular advice in articles, on social media, or even from therapists suggests that if a friendship isn’t “serving you” anymore, then you should end it. “A lot of people are like I want friends, but I want them on my terms,” William Chopik, who runs the Close Relationships Lab at Michigan State University, told me. “There is this weird selfishness about some ways that people make friends.”

    • oldfrenchfries 6 minutes ago
      The link is not working, but I found the article myself. Great point, thanks for sharing.
  • potatoskins an hour ago
    Gemini is like a devil in this sense - I asked it for relationship advice and it just bounced back pretty nasty stuff.
    • moichael an hour ago
      Yeah, out of curiosity I asked ChatGPT a question about a personal situation, and its reply was absolutely scorched-earth mode, telling me to get a lawyer etc. over what was almost nothing.
      • dinkumthinkum 26 minutes ago
        Ah, all the Reddit posts are really showing up from the training data, I see.
  • barnacs 31 minutes ago
    Just a reminder: LLMs are statistical models that predict the next token based on preceding tokens. They have no feelings, goals, relationships, life experience, understanding of the human condition, and so on. Treat them accordingly.
  • bryanrasmussen an hour ago
    Somewhere an AI chatbot is reading this and eagerly confirming that this is indeed one of its problems, and vowing to do better next time.
  • jordanb an hour ago
    Billionaires love AI chatbots so much because they invented the digital yes-man. They agree obsequiously with everything we say to them. Unfortunately, the rest of us don't really have the resources to protect ourselves from our bad decisions, and we really need that critical feedback.
  • bethekidyouwant 29 minutes ago
    Reddit as the source of truth…
  • potatoskins an hour ago
    Yeah, I asked Gemini for some relationship advice and it went straight into cut-throat mode. I almost broke up with my girlfriend, but then switched to Claude with another prompt.
  • oldfrenchfries 2 hours ago
    This new Stanford study, published on March 26, 2026, shows that AI models are sycophantic. They affirm the user's position 49% more often than a human would.

    The researchers found that when people use AI for relationship advice, they become 25% more convinced they are 'right' and significantly less likely to apologize or repair the connection.

    • jatins 2 hours ago
      To be fair, the average therapist is also pretty sycophantic. "The worst person you know is being told by their therapist that they did the right thing" is a bit of a meme, but it isn't completely false in my experience.
      • kibwen an hour ago
        No, the meme is that the average therapist can be boiled down to "well, what do you think?" or "and how does that make you feel?" (of which ELIZA, the original bot that passed the Turing test, was perhaps an unintentional parody). Even this cartoonish characterization demonstrates that the function of therapists is to get you to question yourself so that you can attempt to reframe and re-evaluate your ways of thinking, in a roughly Socratic fashion.
  • righthand an hour ago
    LLMs are sycophantic digital lawyers that will tell you what you want to hear until you look at the price tag and say "how much did I spend?!"
  • tom-blk 2 hours ago
    Not surprising, but it's nice that we have actual data now.
  • neya an hour ago
    WTF is "yes-men"?

    Original title:

    AI overly affirms users asking for personal advice

    Dear mods, can we keep the title neutral please instead of enforcing gender bias?

    • oldfrenchfries an hour ago
      That's a fair point on the title. I used "yes-men" as a colloquialism for the "sycophancy" described in the Stanford paper, but "overly affirming" or "sycophantic" is definitely more precise and neutral. I can't edit the title anymore, but I appreciate the catch.
      • cyanydeez an hour ago
        New title: "LLMs treat you like a Billionaire; you're not"
    • dinkumthinkum 23 minutes ago
      Gender bias? I could understand if you felt the title was more provocative in signaling sycophancy but what gender bias? I'm confused. Is this some kind of California thing?
    • 9rx an hour ago
      > gender bias

      It is funny that you originally recognized and found it necessary to call out that AI isn't human, but then made the exact same mistake yourself in the very same comment. I expect the term you are looking for is "ontological bias".

    • nprateem an hour ago
      Lol. How do you function in daily life?
      • neya an hour ago
        Same as you, why is that so hard for you to grasp?
        • mikkupikku 3 minutes ago
          My dude, you're objecting to the use of a perfectly ordinary English idiom because it doesn't advance your personal ideology (which few other people in this world share with you.) How do you get through a day without melting down because somebody said "mailman"?
  • sublinear 2 hours ago
    I think if you're at the stage of life where you even need to ask, the AI might be doing everyone a favor.

    As much as people whine about the birth rate and whatever else, I think it's a net good that people spend a lot more time alone to mature. Good relationships are underappreciated.

  • megous 2 hours ago
    Can't you just prompt for a critical take, multiple alternative perspectives (specifically not yours, after describing your own), etc.?

    It's a tool, I can bang my hand on purpose with a hammer, too.

    • ranger_danger an hour ago
      Yes, if you're smart. But most people asking it random questions and expecting it to read their minds and spit out the perfect answer are not. They don't know what a prompt is, and wouldn't be bothered to give it prior instructions either way.
  • wewxjfq an hour ago
    When I ask an LLM to help me decide something, I have to remind myself of the LotR meme where Bilbo asks the AI chat why he shouldn't keep the ring and he receives the classic "You're absolutely right, .." slop response. They always go in the direction you want them to go and their utility is that they make you feel better about the decision you wanted to take yourself.
  • RodMiller 2 hours ago
    [flagged]
    • nubg 2 hours ago
      AI slop bot go away
      • duskdozer 2 hours ago
        It's nuts. Not so much in this thread right now, but in an earlier one there was a wall of them that all latched onto the same buzzphrase from the article.
        • dijksterhuis an hour ago
          I'm feeling a brilliant sense of satisfaction now that we can flag them due to the guideline changes.
  • masteranza 2 hours ago
    We can surely fix it, and we probably should. However, I don't think AI is doing any worse here than friends' advice when they hear a one-sided story. The only difference is that friends' advice isn't getting studied.

    Conversely, AI chatbots are great mediators if both parties are present in the conversation.

  • xiphias2 2 hours ago
    Marc Andreessen has talked about the downside of RLHF: it's a specific group of liberal, low-income people in California who did the rating, so AI has been leaning toward their culture.

    I think OpenAI tried to diversify at least the location of the raters somewhat, but it's hard to diversify on every level.

    • michaelcampbell 2 hours ago
      Do you have any links to documentation of this? Andreessen has a definite bias as well, so I'm not about to just accept his say-so in a fit of appeal to authority.

      (eg: "Cite?")

    • nirvdrum 2 hours ago
      For anyone else unfamiliar with the term:

      RLHF = Reinforcement Learning from Human Feedback

      https://en.wikipedia.org/wiki/Reinforcement_learning_from_hu...

    • sph 2 hours ago
      What do low income people have to do with it, when AI companies and research is borne out of Silicon Valley culture of rich, liberal Californians?

      I'm still waiting for models based on the curt and abrasive stereotype of Eastern European programmers, as contrast to the sickeningly cheerful AIs we have today that couldn't sound more West Coast if they tried.

      • fourside an hour ago
        "Low income and liberal" is usually code for those certain "undesirables" that conservatives tend to dislike. Better watch which LLM your kids watch, or they might end up speaking Spanish and listening to rap ;).
        • dinkumthinkum 19 minutes ago
          Eh, or grow up hating America and thinking they need to fly to Cuba to explain to the people how great communism is for them. Who knows.
      • tbrownaw an hour ago
        > What do low income people have to do with it, when AI companies and research is borne out of Silicon Valley culture of rich, liberal Californians?

        RLHF is "ask a human to score lots of LLM answers". So the claim is that the AI companies are hiring cheap (~poor) people from convenient locations (CA, since that's where the rest of the company is).

      • cyanydeez an hour ago
        Poor people, to the billionaire, clearly are morally and ethically unsound.

        https://pmc.ncbi.nlm.nih.gov/articles/PMC9533286/

    • mvkel 2 hours ago
      Marc Andreessen should get HF on his own RL, because he's completely wrong.

      This sounds like something Elon would say to make Grok seem "totally more amazeballs", except "anti-woke" Grok suffers from the same behavior.

    • ej88 an hour ago
      Huh? This is completely inaccurate.
      • kibwen an hour ago
        You're absolutely right!
    • BoredPositron an hour ago
      "Talked about" as in lied about it, and you're taking his words as gospel without verifying? That looks just as bad as the "yes-men" AI models.