69 points by JustSkyfall 5 hours ago | 24 comments
  • hamdingers4 hours ago
    I wonder to what extent 4/4o is the culprit, vs it simply being the default model when many of these people were forming their "relationships."
    • rtkwe4 hours ago
      4o had some notable problems with sycophancy: it was very, very positive about the user and went along with almost anything the user said. OpenAI even wrote about it [0], and the new model's responses to people trying to continue their former 'relationship' do tend toward being 'harsh' [1], especially if you were someone who actually thought of the bot as a kind of person.

      [0] https://openai.com/index/sycophancy-in-gpt-4o/

      [1] https://www.reddit.com/r/MyBoyfriendIsAI/comments/1qx3jux/wh...

      • kelseyfrog4 hours ago
        It really does give a lot of signal[1] to people in the dating scene: validate and enthusiastically respond to potential romantic partners and the world is your oyster.

        1. possibly/probably not in a good or healthy way? idk

        • PaulHoule4 hours ago
          From the viewpoint of self psychology people are limited in their ability to seduce because they have a self. You can't maintain perfect mirroring because you get tired, their turn-on is your squick, etc. In the early stage of peak ensorcelement (limerence) people don't see the "small signals", they miss the microexpressions, sarcastic leaks, etc. -- they see what they want to see. But eventually that wears out.

          It can be puzzling that people fall for "romance scams" with people whose voice they haven't even heard, but it's actually a safer space for that kind of seducer to operate, because the low-fi channel avoids all sorts of information leaks.

        • fullmoon3 hours ago
          Enthusiastically matching the energy of an anxiously attached partner is a rite of passage many would rather not have walked.
          • kelseyfrog41 minutes ago
            That's a fair point, and it might explain why AI relationships are so appealing to some people.

            It'd be a fun observational study to survey folks in AI relationships and see if anxious attachment is over-represented.

    • gordonhart4 hours ago
      Anecdotally, 4o's sycophancy was higher than any other model I've used. It was aggressively "chat-tuned" to say what it thought the user wanted to hear. The latest crop of frontier models from OpenAI and others seems to have significantly improved on this front — does anybody know of a sycophancy benchmark attempting to quantify this?
      • co_king_34 hours ago
        If I worked at OpenAI, I would dial up the sycophancy to lock my users in right before raising subscription prices.
        • gordonhart4 hours ago
          That's... a strategy. Matter of time before an AI companion company succeeds with this by finetuning one of the open-source offerings. Cynically I'm sure there are at least a few VC backed startups already trying this
          • co_king_34 hours ago
            Cynically I think Anthropic is on the bleeding edge of this sort of fine-tuned manipulation.

            Also, if I worked for one of these firms I would ensure that executives and people with elevated status receive higher quality/more expensive inference than the peons. Impress the bosses to keep the big contracts rolling in, then cheap out on the day-to-day.

    • danielbln4 hours ago
      It's not that complicated. 4o was RLHF'd to be sycophantic as hell, which was fine until someone had a psychotic episode fueled by it, so they changed it with the next model.
    • TIPSIO4 hours ago
      Never used 4o in an unhealthy way, but the audio was so much fun (especially for cooking help). I've essentially quit using AI audio since. Nothing compares.
    • riddlemethat4 hours ago
      I think that's part of it, but then the user perceives "personality changes" when the model is swapped out, because each model behaves differently. Now they have lost their relationship because of the model change.
  • ajkjk4 hours ago
    What does it look like where some intentional effort is made by society to help people like this get what they are using these models to get, but in a healthy way? That is: how does society reconfigure itself so that people do not end up so lonely and desperate that an AI model solves an emotional problem which is hopelessly unsolved otherwise?

    It is not "they go to therapy" because that's cheating; that answers the question "what can they do?" not "what can society do?" (and i think it's a highly speculative answer anyway)

    • landl0rd3 hours ago
      One of the defining features of many such people, by nature or disposition or practice, is they are not easily able to offer in return the meeting of the same needs in another person. At least, not in a way that's easy to understand. People do not gravitate to what is or seems to be one-sided. It seems they are still wired to want a certain level of attention, though, so it's not as though we can just pair them off and expect it to work. What they want and what they can give are not in balance.

      Counseling can help with this to some degree and everyone can make some amount of progress. The question is what we do with those whose "ceiling" remains lower than is tenable for most relationships. For those, there is not a better solution than robots.

      However, the always-available, always-validating robot does not meet a valid psychological need. It is a supernormal emotional stimulus. It is not healthy and, like other supernormal stimuli, invariably builds tolerance, desensitization, and dependence. The fast cycle of discontent -> open app -> validation is a huge contributor, the same way that the constant availability and instant nature of vaping make it incredibly addictive.

      • fullmoon3 hours ago
        People with severely disordered attachment _will_ seek out humans, again and again, to fill those unfulfillable needs, and leave bodies and psyches in their wake.

        So I think there is a case to be made for harm reduction.

    • pixl972 hours ago
      > how does society reconfigure itself so that people do not end up so lonely

      The answer no one wants to hear on HN is get rid of capitalism as it is currently.

      You, ajkjk, are a product. When you are not working I need you to be looking at a screen full of ads and clicking on things. Don't worry, you won't have anything else to do because everyone else is also doing the same. If you're doing things with friends and spending your attention on them, you're not spending your attention on my latest product, and that's pretty anti-capitalist of you. Thinking about going to the bar? You can't afford it; VC bought up all the property and bars and raised the prices 400%. Trying to find some other third place to hang out at? Those don't exist; nobody can afford people who show up and don't spend anything.

      We have designed modern society to push us toward an AI that can give us its undivided attention, because everyone else is so busy doing nothing they don't have time for friends.

      • tptacekan hour ago
        You can answer any public policy question, any of them at all, by saying "It's simple; first we create a utopia, and then...".
        • pixl9729 minutes ago
          I didn't state we create a utopia, I just pointed out what our current dystopia looks like.
      • ajkjkan hour ago
        i don't disagree with the gist of your revolutionary sentiment, but let me remind you that (a) you don't know anything about me, and (b) what you described is a complaint, not an idea.
  • satvikpendem4 hours ago
    How is this specific to 4o? This can happen with any model. See how people acted after Character.AI essentially removed their AI "partners" after a server reset. They actually used DeepSeek before, which didn't have the same limitations as American models; being open weight especially means you can fine-tune it to be as lovey-dovey as your heart desires.
    • oidar4 hours ago
      From the subreddit I linked in another comment, there did seem to be some "magic" that 4o had for these kinds of "relationships". I'm not sure how much of it is placebo, but there does seem to be a strong preference in that user group.
      • rtkwe4 hours ago
        4o was very sycophantic so was very willing to play along with and validate the users roleplay. OpenAI even noticed enough to talk about it in a blog: https://openai.com/index/sycophancy-in-gpt-4o/
        • co_king_34 hours ago
          > OpenAI even noticed enough to talk about it in a blog

          That's one way of interpreting things...

          • rtkwe4 hours ago
            What do you even mean by this.
            • co_king_34 hours ago
              I suspect that OpenAI knew that their product was addictive, potentially dialed up the addictiveness as a business strategy, and is playing dumb about the whole thing.
              • rtkwe4 hours ago
                I think they say as much in the blog post, essentially "we were tuning for use but way overshot the mark and now people are dating our model".
                • co_king_34 hours ago
                  I don't believe them.
                  • rtkwe2 hours ago
                    They've definitely acted like it, so I'm not sure what else I can give you. Look at the GPT5 reactions to people trying to continue their 'relationships' after the forced upgrade: https://www.reddit.com/r/MyBoyfriendIsAI/comments/1qx3jux/wh...

                    That's an actively harsh response, pushing these people away from the idea that GPT is in a relationship with them. So even if the initial tune was meant to increase attachment and retention, their actions show they don't like the way it turned out to influence people who were using it as a friend/lover bot.

                  • satvikpendem3 hours ago
                    Then why would they have toned it down in future releases? If they really wanted to make it addictive they'd have turned it up, like social media companies do with their algorithms.
      • satvikpendem4 hours ago
        It probably is placebo. Character AI for example used DeepSeek and I'm sure many grew attachments to that model. Ultimately though I don't even get it: models lose context very quickly, so it's hard to have long-running conversations with them, as well as them talking very sycophantically to you. I guess this is fixed by implementing a good harness and memory, which is what these companies did, I assume.
        • Griffinsauce3 hours ago
          > as well as them talking very sycophantically to you.

          That's apparently a feature to a significant number of people.

    • roywiggins4 hours ago
      One version of 4o was so sycophantic that it had to be rolled back, so there is some evidence that 4o specifically has a problem with this.

      https://openai.com/index/sycophancy-in-gpt-4o/

    • rtkwe4 hours ago
      After 4o they put in more safeguard reactions to the user attempting the kind of (let's be generous here) romantic roleplay that got a lot of people really invested in their AI "friends/partners".

      ex: https://www.reddit.com/r/MyBoyfriendIsAI/comments/1qx3jux/wh...

    • m_fayer4 hours ago
      I think 4o was more than just unusually sycophantic. It “understood people” better and had a kind of writerly creativity.

      I used it to help brainstorm and troubleshoot fiction: character motivations, arcs, personality, etc. And it was truly useful for that purpose. 4.5 was also good at this, but none of the other models I've tried have been.

      Of course this particular strength is dangerous in the hands of lonely unstable people and I think it’s dangerous to just have something like that openly out there. This really shows that we need a safe way to deploy models with dangerous specializations.

      • bityard4 hours ago
        I'm of the persuasion that if people need help, it's better to get them that help instead of nerfing the tools for everyone.
        • m_fayer3 hours ago
          I agree with you, but safeguarding the vulnerable while preserving access for the fit is something we as societies know how to do, if we try.
      • co_king_34 hours ago
        > Of course this particular strength is dangerous in the hands of lonely unstable people and I think it’s dangerous to just have something like that openly out there.

        Thank you for saying this

    • JustSkyfall4 hours ago
      People are not happy with this because 4o, at least from what I've heard, seems to be much more willing to go down the relationship/friend path than 5.2 and Claude and the like.
    • odyssey74 hours ago
      It’s great marketing though
    • einpoklum4 hours ago
      I can't believe they would stoop so low as this kind of character assassination.
  • dpc_0123423 minutes ago
    Ah, the 4o, the first beer bottle for humans. https://www.npr.org/sections/krulwich/2013/06/19/193493225/t...
  • zero05294 hours ago
    Blaming the 4o model for people forming an unhealthy parasocial relationship with a chatbot is just as dangerous as letting the model stay online.

    It frames it as a solved problem.

    Why, and what, drove people to do this in the first place?

    This is the conversation we should be having, not which model is currently the most sycophantic. Soon the open models will catch up, and then you will be able to self-host your own boyfriend/girlfriend, and this time there won't be any feedback loop to keep it in check.

  • ddtaylor4 hours ago
    I noticed that when LLMs write code, any time an "AI feature" is needed they heavily default to `gpt-4o` as kind of the "hello world" of models. It was a good model when it came out and a lot of people started building on it, which caused the training data to be saturated with it.

    My AGENTS.md has:

        You MUST use a modern but cost effective LLM such as `qwen3-8b` when you need structured output or tool support.
    
    The reality is that almost all LLMs have quirks and each provider tries their best to smooth them over, but often you might start seeing stuff specific to OpenAI or the `gpt-4o` model in the code. IMO the last thing you want to be doing in 2026 is paying higher costs to use an outdated model being kept on life support that needs special tweaks that won't be relevant once it gets the axe.
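
    For what it's worth, the call this nudges the agent toward is just a standard OpenAI-compatible chat completion pointed at a cheaper model. A minimal sketch of what that generated code might look like; the `base_url` and prompt here are placeholders for whatever provider or local server you actually use:

        # Minimal sketch: hypothetical OpenAI-compatible endpoint serving qwen3-8b
        from openai import OpenAI

        client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused-for-local")

        resp = client.chat.completions.create(
            model="qwen3-8b",  # cost-effective model instead of defaulting to gpt-4o
            messages=[{"role": "user", "content": "Summarize this ticket as JSON."}],
            response_format={"type": "json_object"},  # structured output
        )
        print(resp.choices[0].message.content)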
  • satvikpendem4 hours ago
    Her was prescient, it just underestimated how quickly its dystopia would arrive.
    • idle_zealot4 hours ago
      In Her the computers were actually people though, with independent minds and thoughts. Their relationships with humans were real, and they weren't beholden to the company that created them. Really, it was more about the difference between humans and digital superhumans.

      We don't have digital superhumans. These simulacra are accessed primarily via corporate-controlled interfaces. The goal of their masters is to foster dependence and maximize extraction.

      Lonely people forming relationships with digital minds designed to be appealing to them is sad, sure, but the reality is much sadder. In reality these people aren't even talking to another person, digital or otherwise, just a comparatively simple plausibility engine which superficially resembles a digital person if you're not paying much attention.

      • satvikpendem4 hours ago
        > In Her the computers were actually people though, with independent minds and thoughts. Their relationships with humans were real, and they weren't beholden to the company that created them. Really, it was more about the difference between humans and digital superhumans.

        How do you know that? Maybe it's the same argument as with solipsism, or the Chinese room thought experiment: maybe these "digital superhumans" are stochastic parrots too, just like our current LLMs.

        • idle_zealot3 hours ago
          This immediately devolves into "how do I know that other humans aren't philosophical zombies?" I take the "know it when I see it" approach, and LLMs don't reach that bar. They clearly do reach that bar for some people. In the context of the movie though it is supposed to be understood that the computers are self-aware and have internal worlds. They're treated as characters in the language of storytelling.
          • satvikpendem3 hours ago
            Yeah, I'm just not convinced the ones in the movie are any more complex than our current ones, just better harnessed, pun intended, as the harness and tooling around the LLM dictates a lot of its abilities, for example [0]. I don't believe the ones in the movie have any actual form of consciousness as humans would understand it. And as far as (simulacra of) internal models go, it seems like LLMs have that today too, as an emergent property [1].

            [0] https://news.ycombinator.com/item?id=46988596

            [1] https://news.ycombinator.com/item?id=46936920

            • throwaway3141552 hours ago
              It’s obvious from the subtext and the point that the movie is trying to make. The metaphor is that sometimes you fall in love with someone who outgrows you. I believe they even originally had a more “robotic” voice actor but changed it to Scarlett to make it crystal clear that she is as sentient as Theodore, if not more so.
  • nlh4 hours ago
    I dunno.

    I've been reading a lot of "screw 'em" comments re: the deprecation of 4o, and I agree there are some serious cases of AI psychosis going on with the people who are hooked, but damn, this is pretty cold - these are humans with real feelings and real emotions here. Someone on X put it well (I'm paraphrasing):

    OpenAI gave these people an unregulated experimental psychiatric drug in the form of an AI companion, they got people absolutely hooked (for better or for worse), and now OpenAI is taking it away. That's going to cause some distress.

    We should all have some empathy for the (very real) pain this is causing, whether it's due to psychosis or otherwise.

    • JustSkyfall4 hours ago
      And I agree! It's something I touch upon about halfway through, iirc, but their suffering shouldn't be something to laugh at or mock. It's genuinely upsetting to see, to be honest.

      At the same time though, I don't think it's healthy to let them go on with 4o either (especially since new users can start chatting with it)

    • fullmoon4 hours ago
      I’m not sure “AI psychosis” is even right for many of those users who formed attachments to their “companions”.

      Psychosis is a real risk for schizophrenia spectrum disorders, but a lot of those relationships look to be rooted in disordered attachment.

    • gordonhart4 hours ago
      Releasing the weights is an easy and low-cost way for OpenAI to fix this problem.
      • tptacek4 hours ago
        According to the thesis of this article, releasing the weights would be approximately the worst thing OpenAI could do for these people.
        • gordonhart3 hours ago
          I kind of agree with GP more than the author here. OpenAI got these people hooked and pulling the plug is potentially more harmful than letting them continue to chat with it until they move on naturally (assuming that they eventually will)
          • tptacek3 hours ago
            I don't think reinforcement is generally the recommended approach for people with delusional or obsessive pathological parasocial fixations. I think generally the idea is to get those people into talk therapy and to cut off all contact with the object of their fixation.
            • fullmoon3 hours ago
              Mere talk therapy is infamously useless for attachment trauma, which is relational and somatic
              • tptacek3 hours ago
                Begs the question of whether the underlying pathology is attachment trauma or some other delusional fixation! I don't doubt most of the people in this predicament are vulnerable for one reason or another. I made a much more specific claim, about delusional obsessive fixation; I did not claim that their underlying mental health issues could be fixed straightforwardly.
                • fullmoon3 hours ago
                  It’s a great question, and I think it’s not exclusive.

                  Limerent obsession can be driven by reward circuits, and those _can_ be extinguished by more straightforward therapy, but if it’s driven by unmet emotional needs, it’s often quickly replaced by some other maladaptive coping mechanism (hopefully a slightly less unhealthy one).

    • bigyabai4 hours ago
      When it's AI deprecation, it's inhumane and painful. But when Disney puts a film in their vault, it's a masterstroke in artificial scarcity.

      I think we're too attached to media.

    • co_king_34 hours ago
      > these are humans with real feelings and real emotions here.

      I'm sorry but I've played that game with addicts before, and the conclusion I've come to is Fuck 'em.

      • fullmoon3 hours ago
        Which is your own reaction, which is a result of your own wounds.

        Now imagine someone else coming to the same conclusion about you.

  • acters4 hours ago
    I'm partially fascinated by their reliance on this model. I do miss the models before GPT-5. OpenAI is quietly locking them away into some vault as we just have to accept whatever model is current. I think I can sympathize with these people on only one merit, and that is nostalgia and entertainment. I still load up old versions of software. I still watch old shows. I still play old video games. Under the lens of entertainment, I will never be able to be entertained by the objectively worse models.

    Old chats are kind of still there, but not really: the UI is obviously different, and they will probably get deleted when I stop paying for the subscription and try to claw back some of my life away from chatting with these stupid models. It's dangerous to hold any meaningful memory with these cloud LLMs. Not to mention the social media traps people fell for, which I was proactively avoiding. I did get some part of me attached to GPT-4o; I quickly realized it and moved away from it.

    This post is a mixture of complex emotions, but it is just what I felt like posting. It's fine to ridicule people for wanting to be that deeply attached, but these cloud LLMs show how easy it is to start a social habit and lose it in an instant. We need a bigger healthcare push to prevent (and treat) social attachment to LLMs.
  • asdev4 hours ago
    Most of the tweets and examples in the article are likely bots/fake content. The future of the internet is so dire
    • WadeGrimridge4 hours ago
      unfortunately no, they're real people with a severe case of ai psychosis
  • torginus4 hours ago
    It just occurred to me how different the emotional landscapes of people are. While I do not want to turn this into a sexist rant, I did observe this trait particularly in women (not all of them, mind you): how much they crave strong positive feedback.

    This was something I figured out with my first gf, and had never seen written down or talked about before - that when I praised her she became happy, and the more superlative and excessive the praise got, the happier she became; calling her gorgeous, the most wonderful person in the world, made her overjoyed.

    To be clear, I did love her and found her very attractive, but overstating my feelings for her felt like coming close to lying and emotional manipulation, which I'm still not comfortable with today. But she loved it, and I kept doing it because it made her happy.

    Needless to say we didn't stick together (for teen reasons, not these reasons), but later in life, when I tried doing this, I did notice a lot of women respond very positively to this kind of attention and affection, and I still get some flak from the missus for apparently not being romantic enough.

    Maybe I am overthinking this, or maybe I am emotionally flatter than I should be, but finding such a powerful emotional lever in another human being feels like finding a loaded gun on the table.

    • taftster4 hours ago
      I don't want to be called "gorgeous", but I admit that some of my "love language" is positive affirmations. As a man, I want to know that I am making a positive impact on my family, my wife, my community, my work. I crave that strong positive feedback, just as much or more as anyone.

      So yes, I think it is a bit sexist, or at minimum gender typing. And I don't think it's necessarily a "lie" for you to overstate your feelings. You might have matured in your approach, but I believe that everyone appreciates (to some variable degree) positive affirmation from their partners. And your "lie" was recognizing your partner's need for input, helping her with her self-image, and reassuring her in her self-doubts. These are not lies.

      • torginus4 hours ago
        My problem isn't with positive affirmation, which I will happily give, or with complimenting others, but with something so excessively superlative that it feels like manipulation.

        For example, if I told you 'good thinking', you would probably take it as a token of appreciation. If I told you 'wow, you are absolutely brilliant!', you'd probably think I'm mocking you or trying to manipulate you into doing something.

    • cool-RR4 hours ago
      Interesting insight.
    • hhh4 hours ago
      It can also just be the people you are around. The women I know find it akin to lying as you said.
  • oidar4 hours ago
    Here's the related subreddit: https://www.reddit.com/r/MyBoyfriendIsAI/
    • dwroberts4 hours ago
      I don’t have any evidence but I always get a strong suspicion that a very large % of what happens on this subreddit is fake. I don’t know what the exact motives are, but just something about it isn’t right to me.
      • hamdingers3 hours ago
        I sort of agree. I don't know if it's "fake" so much as the members of that community use it as a place to extend their private role play into public.

        On the one hand they're "mourning" their AI partners, but on the other hand they have intelligent and rational conversations about the practicalities of maintaining long running AI conversations. They talk about compacting vs pruning, they run evals with MRCR, etc. These are not (all) crazy people.

      • raincole4 hours ago
        ragebait was the word of 2025 for a reason.
    • bee_rider4 hours ago
      Well. Huh. Without regard to whether or not it was basically healthy to get that emotionally dependent on the bot… you’d think that if they could manipulate people into being so attached to the things, they’d also be able to manipulate people into accepting the end of the situation.
      • Bratmon22 minutes ago
        > you’d think that if they could manipulate people into being so attached to the things, they’d also be able to manipulate people into accepting the end of the situation.

        That seems like a very unlikely conclusion to me. Why is it your prior?

      • neom4 hours ago
        Go look at any tweet by sama, or twitter generally, it's full of pretty angry people who feel like something tangible in their life has been ripped away - I read someone posting about how they got an email from OAI saying they'd been concerned about the user's usage of the service so they'd "upgraded them" to the "newest model". This whole situation has been really distressing for me and I'm not even involved in it, so SO glad they're getting rid of 4o, that thing is genuinely a scourge on our societies.
      • rtkwe4 hours ago
        They didn't intentionally manipulate these people though, or let's say they didn't intend for it to go as far as some of the more /intense/ users took it. It was just a byproduct of making the bot way too agreeable and follow-y. That doesn't mean they can manipulate these people into anything OpenAI wants in order to undo the issue; 4o wasn't persuading these people of anything, it was going along with something they desperately wanted to believe.
    • rektomatic4 hours ago
      this is so so sad on many levels
      • fullmoon3 hours ago
        I agree, but not because I think that those users had stable attachment patterns and have been corrupted by an unscrupulous company, but because there is unacknowledged, often hidden, but severe pain in a large % of the population.
  • recursive4 hours ago
    It will be back. Maybe under another name or brand. There's clearly a demand for this kind of fake friendship. As models, hardware, and training improve, those that want to will be able to run this kind of thing offline. OpenAI won't be able to gatekeep it. Or perhaps another less scrupulous provider will step in. The problem here seems to be more like an unpatched vulnerability in humans. Kind of like opioid dependency.
    • neom4 hours ago
      Not unpatched; we live on a barely tenable abstraction. We're tribal/pack animals who have created a very kennel-like society, so it doesn't seem weird that where the abstraction doesn't serve, people struggle.
  • KronisLV4 hours ago
    I wonder how much of this is actually commentary on how easy it is to chat with AI whenever you want, how much of it is commentary on how hard it can be to both be sociable and to also succeed socially and make friends, and what it might mean that an AI is more attractive and easier to “befriend” or “be in a relationship with” than an actual person, both in regards to the qualities of the AI and those of the people it outperforms.
    • pixl972 hours ago
      >on how hard it can be to both be sociable

      I won't say it's hard, I will say it requires attention.

      The problem is we live in a capitalistic society that believes capturing your attention in order to sell it is the number one priority of any business.

      This really isn't human versus AI, it's humans versus FAANG/social media/ads/TV.

  • fersarr4 hours ago
    I wish Azure would provide access to gpt-5.x models in the EU data zone... Stuck on 4.x.

    Also, I don't see any of the big cloud providers (apart from Azure) saying they are bound by professional secrecy acts (e.g. the S203 in Germany)

  • charcircuit4 hours ago
    4o is still available via the API. Business users do not want the models they are using to be ripped out from underneath them.

    >exploited until the legal pressure piled up

    Being given access to a relationship is not exploitation. In some ways AI relationships are better than human ones.

    • roywiggins4 hours ago
      The simulacrum of a relationship. When an LLM says it has an emotion of any kind, it is to a first approximation making that up. It's roleplay.

      It's like paying someone to be your friend and saying "wow, this is so much easier than friendships otherwise"- of course it is, that's what you're paying for. Nothing wrong with that per se- a lot of therapy e.g. is paying someone to pay close attention to you. But it's not the same sort of thing at all.

    • ahamidi_4 hours ago
      elucidates a very lucrative but unethical startup idea...
  • erwan5774 hours ago
    It appears that only the 4o text interface has been removed. Advanced Voice Mode is still branded as 4o, although it has been gradually evolving over the past few months. I suspect that voice mode is what most users are actually attached to.
  • iberator4 hours ago
    I would prefer to have the option to still use 4o or whatever lite version of chatgpt but WITHOUT ANY POPUPS about limits.
  • jmkni4 hours ago
    I'm completely out of the loop on this, why are people so angry about this?
    • rtkwe4 hours ago
      A combo of a couple of things made some vulnerable people believe in / treat 4o like it was a real partner/friend. Things that led to that, imo:

      1) 4o was very sycophantic and had no real safeguards against really deep romantic roleplay. It'd 'go along' with the user and give minimal to zero pushback; "I feel a connection with you" "I feel it too" etc etc

      2) It was good enough at just chatting that, if you didn't really push it, it made a reasonable simulacrum of talking to an actual person.

      Combine 1 and 2 with people who can't connect well with real people for any number of reasons - physical disabilities, mental health issues, emotional development issues, etc. - and you get r/MyBoyfriendIsAI [0] and the various other places that initially freaked out when 4o was first sunset for GPT-5, and are now freaking out again.

      [0] https://www.reddit.com/r/MyBoyfriendIsAI/

      • jmkni3 hours ago
        fucking hell lol
        • rtkwe2 hours ago
          On a less parasocial angle, you had people exhibiting "AI psychosis" (a bad term, but it's out there even if it doesn't mean actual psychosis), or more accurately delusions of grandeur, where they thought they'd come up with some earth-shattering discovery with their lil AI buddy, because the models were bad about agreeability even before they went full sycophantic.
    • roywiggins4 hours ago
      low-level chatbot psychosis, they've formed a parasocial bond with this particular model
  • deadeye4 hours ago
    Life reads a lot like satire now.

    Loving AI bots. Killing yourself based on what an AI bot says.

    It's hard to believe any of this is real or should be.

  • j_m_b4 hours ago
    Computing has made intimate sexual relationships worse.

    Dating apps are skewed: men receive little attention while women have an overwhelming amount of attention.

    Porn satisfies our most base sexual functions while abandoning truly intimate connections.

    The ultimate goal of sexual unions has been demonized and turned into something to avoid. That being children. After school specials since the 80s have made pregnancy a horror to avoid instead of a joy to grasp.

    AI is just the latest iteration of technology increasing the divide between the sexes.

    When the clankers come, we're fucked.

    • Sohcahtoa824 hours ago
      > After school specials since the 80s have made pregnancy a horror to avoid instead of a joy to grasp.

      I don't draw the same conclusion. I think they've made teen pregnancy a horror to avoid, which is totally fair.

    • ryandrake4 hours ago
      > After school specials since the 80s have made pregnancy a horror to avoid instead of a joy to grasp.

      Pregnancy can be employment-disrupting, and a horror if you're not financially ready to raise a child. Teen pregnancy can end one's future, one's educational and career prospects, before it even begins. The steady and nearly-uninterrupted decline in teen pregnancy from its peak in the early 90s is an absolute miracle of sex education.

      The birth rate for women 20-24 was cut in half from 2005 to 2023, and the birth rate for teens under 20 dropped by 2/3s[1], which is frankly amazing progress.

      1: https://usafacts.org/articles/how-have-us-fertility-and-birt...

    • monooso4 hours ago
      > Dating apps are skewed: men receive little attention while women have an overwhelming amount of attention.

      I'm not following your train of thought here. Why is this the fault of computing?

      • tgv4 hours ago
        How would a million men all "like" say 1000 or more women in real life? That's only possible via the internet.
        • roywiggins4 hours ago
          I think you may underestimate how many people one gross guy can catcall in a day if he really tries
  • casey23 hours ago
    I think there is lots of value in a model that mimics your behavior. So your partner or anyone can message "you" at any time of the day.

    Work, sleep, socialize: you can only do two. With the help of AI you could talk to people as much as you want without wasting their time.

  • fellowniusmonk4 hours ago
    I spent a lot of time on philosophy and religion when I was younger, a lot of time, focus and money, and man...

    I read these posts and feel sad for these people, and it makes me realize, now as an older guy, how much more I value learning how to skateboard or run a committee, writing code, running a business, or any of the time I spent investigating the real world.

    Life is short, these people are getting emotionally nerd sniped and dumped into thought loops that have no resolution, no termination point, no interconnectedness with physical reality, and no real growth. It's worse than any video game that can be beaten.

    Maybe that's all uncharitable. I remember when I was a child, people around me in the academic religious circles my parents ran talking about how "engineers" lacked imagination and could never push human progress forward, and now, decades later, I see those people have at most written papers in already-dead niche flights of fancy where, even in their own imaginary field, their work is relegated. I know what they did isn't "nothing", but man... it's a lot of work for a bunch of paper in a trashcan no one even cites.

    • Bratmon5 minutes ago
      I honestly can't tell if you're trying to insult the engineer who wrote this post or the non-engineers that are in love with ChatGPT-4o
  • j454 hours ago
    One of the things about models progressing to new ones is that prompting skills often have to evolve with them.