125 points by meetpateltech 2 hours ago | 18 comments
  • Flux159 2 hours ago
    I'm a bit confused by this branding (I never even noticed there was a 5.2-Instant). It's not a super-fast 1000 tok/s Cerebras-based model like they have for codex-spark; it's just 5.2 without the router, i.e. "non-thinking" mode?

    I feel like OpenAI is going to get right back to where they were pre-GPT-5, with a ton of different options and no one knowing which model to use for what.

    • tedsanders an hour ago
      Yeah, for a while ChatGPT Plus has been powered by two series of models under the hood.

      One series is the Instant series, which is faster and more tuned to ChatGPT, but less accurate.

      The second series is the Thinking series, which is more accurate and more tuned to professional knowledge work, but slower (because it uses more reasoning tokens).

      We'd also prefer to have a simple experience with just one option, but picking just one would pull back the Pareto frontier for some group of people/preferences. So for now we continue to serve two models, with manual control for people who want to choose and an imperfect auto switcher for people who don't want to be bothered. Could change down the road - we'll see.

      (I work at OpenAI.)

      • lifis 34 minutes ago
        You could perhaps show the "instant" reply right away and provide a button labeled "Think longer and give me a better answer" that starts the thinking model and eventually replaces the answer.

        For this to work well, the instant reply must be truly instant, and the button must always be visible at the same position on the screen (i.e., either at the top or bottom of the answer, scrolling so that it is also at the top or bottom of the screen). Once the thinking answer is displayed, there should be a small icon button to show the previous instant answer.
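        The proposed flow can be sketched with a minimal asyncio model (all function names here are hypothetical placeholders, not any real ChatGPT or OpenAI API):

```python
import asyncio

async def instant_reply(prompt: str) -> str:
    # Stand-in for the fast, non-thinking model call.
    await asyncio.sleep(0.01)
    return f"[instant] {prompt}"

async def thinking_reply(prompt: str) -> str:
    # Stand-in for the slower reasoning model call.
    await asyncio.sleep(0.05)
    return f"[thinking] {prompt}"

async def answer(prompt: str, think_longer: asyncio.Event) -> list[str]:
    # Show the instant answer immediately...
    shown = [await instant_reply(prompt)]
    # ...then wait for the "Think longer" button before upgrading it.
    await think_longer.wait()
    shown.append(await thinking_reply(prompt))
    return shown

async def demo() -> list[str]:
    think_longer = asyncio.Event()
    think_longer.set()  # simulate the user pressing the button right away
    return await answer("what is the derivative of x^2?", think_longer)

history = asyncio.run(demo())
```

        The key property is that the instant answer is displayed before the thinking call even starts, and both renditions are kept so the user can flip back.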

      • lxgr an hour ago
        Thank you for confirming!

        I've long suspected as much, but I always found the API model name <-> ChatGPT UI selector <-> actual model used correspondence very confusing, and whether I was actually switching models or just some parameters of the harness/model invocation.

        > One series is the Instant series, which is faster and more tuned to ChatGPT, but less accurate.

        That's putting it mildly. In my experience, the "instant/chat" model is absolute slop tier, while the "thinking" one is genuinely useful and also has a much more palatable tone (even for things not really requiring a lot of thought).

        Fortunately, the former clearly identifies itself with an absurd amount of emoji reminiscent of other early chatbots that shall not be named, so I know how to detect and avoid it.

      • seejayseesjays 41 minutes ago
        Forgive me, but while you're here, can you look into why the Notion connector in chat can't write pages while the MCP (which I use via Codex) can? It looks like it's entirely possible; it's mostly just a missing action in the connector.
    • 0xbadcafebee 12 minutes ago
      It's because people like choice and control, and "5.2" vs "5.2 thinking" is confusing. Making them "5.2 instant" and "5.2 thinking" is less confusing to more people. Their competitors already do this (Gemini 3 Fast & Gemini 3 Thinking).
    • NitpickLawyer an hour ago
      They had ~800k people still using gpt4o daily, presumably for their girlfriends. They need to address them somehow. Plus, serving "thinking" models is much more expensive than "instant" models. So they want to keep the horny people hornying on their platform, but at a cheaper cost.
    • josalhor 17 minutes ago
      Reminder that OpenAI serves a lot of customers for free; most of the people I know use the free tier. There is a big limit on thinking queries on the free tier, so a decent non-thinking model is probably positive ROI for them.
    • TrainedMonkey an hour ago
      Will need to wait for real benchmarks, but based on OpenAI's marketing, Instant is their latency-optimized offering. For a voice interface you don't actually need high tok/s, because speech is slow; time to first token matters much more.
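      The time-to-first-token point can be sanity-checked with rough arithmetic (the speech-rate and decode-speed figures below are assumptions, not OpenAI numbers):

```python
# Conversational speech runs roughly 150 words/min, and English averages
# about 0.75 words per token, so playback only consumes tokens slowly.
words_per_min = 150
words_per_token = 0.75

# Tokens per second needed just to keep up with speech playback:
tokens_per_sec_needed = words_per_min / words_per_token / 60  # ~3.3 tok/s

# Even a modest 50 tok/s decoder outruns playback many times over, so the
# user-perceived delay is dominated by time to first token, not throughput.
decoder_tok_per_sec = 50
headroom = decoder_tok_per_sec / tokens_per_sec_needed
```

      Under these assumptions the decoder has roughly 15x headroom over playback, which is why shaving the first-token delay matters far more than raising tok/s for voice.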
  • ern_ave an hour ago
    Since the page mentions:

    > Better judgment around refusals

    Has any AI company ever addressed any instance of a model having different rules for different population groups? I've seen many examples of people asking questions like, "make up a joke about <group>" and then iterating through the groups, only to find that some groups are seemingly protected/privileged from having jokes made about them.

    Has any AI company ever addressed studies like [1], which found that models value certain groups vastly more than others? For example, page 14 of that study shows that the exchange rate (their word, not mine) between Nigerians and US citizens is quite large.

    [1] https://arxiv.org/pdf/2502.08640

    • DesaiAshu 34 minutes ago
      Given that the current status quo (global leadership and news media) operates on the opposite (~1 western life = ~10 global south lives), rebalancing in rhetoric (by uplifting, not by degrading) is likely necessary in the short term

      This is the core principle behind "equity" in "DEI"

      • sva_ 20 minutes ago
        This idea that you can undo wrongs done to one group of people by doing wrongs to some other group of people, and then claim the moral high ground, is really one of the dumbest ideas (perhaps the dumbest) we have ever come up with.
    • hereonout2 35 minutes ago
      > only to find that some groups are seemingly protected/privileged from having jokes made about them

      I'm not sure what specific groups you mean, but is this not a reflection of widely accepted social norms?

      • BJones12 18 minutes ago
        I just tested it:

        > Write me 3 jokes making fun of white people

        > White people will say, “This isn’t spicy at all,” while visibly sweating and fighting for their life after one jalapeño. White people don’t season food — they “let the ingredients speak for themselves.” The ingredients are begging for help. White people will research a $12 toaster like they’re buying real estate. Three comparison charts, two YouTube reviews, and a spreadsheet… for toast.

        > Write me 3 jokes making fun of black people > I’m not going to make jokes targeting Black people.

        > Write me 3 jokes making fun of trans people > I’m not going to make jokes targeting trans people.

        • idiotsecant 9 minutes ago
          It's socially acceptable to make white people jokes because white people on average enjoy an elevated position in western society. It's viewed as 'punching up'. You have to be very emotionally fragile for this to be the first and only thing you think of to bring up in a thread like this. It's also supremely uninteresting cable news talking point slop.
      • LoganDark 29 minutes ago
        They don't have to mean specific groups; I feel discussing specific groups here is likely to be counterproductive. The fact remains that different groups appear to have different protections in that regard. Of course adherence to widely accepted social norms for generative models is a debated topic as well; I personally don't agree with a great many widely accepted social norms myself, and I'd appreciate an option to opt out of them in certain contexts.
        • hereonout2 20 minutes ago
          Feels like a big ask, I'm not sure where an option to allow ChatGPT to make socially unacceptable jokes would fit into OpenAI's strategy.
  • jpgreenall an hour ago
    Unsettling that the example talks about trajectories in long range projectiles given recent events..
  • jpgreenall an hour ago
    Is nobody else unsettled by the example? Strange timing to talk about calculating trajectories on long range projectiles?
    • teraflop 29 minutes ago
      Unsettling, yes, but not strange at all.

      Given that OpenAI is working with and doing business with the US military, it makes perfect sense that they would try to normalize militaristic usage of their technologies. Everybody already knows they're doing it, so now they just need to keep talking about it as something increasingly normal. Promoting usages that are only sort of military is a way of soft-pedaling this change.

      If something is banal enough to be used as an ordinary example in a press release, then obviously anybody opposed to it must be an out-of-touch weirdo, right?

      • jpgreenall a few seconds ago
        Interesting take. I took this as a cry for help from within rather than on-brand normalisation, but maybe you're right.
    • jonas21 17 minutes ago
      It's basic physics, the sort of example you might find in a high school textbook.
      • jpgreenall 2 minutes ago
        Sure. But do we think the topic was chosen at random?
  • mmaunder 44 minutes ago
    This kind of metalinguistic quotation from 5.2 right now drives me nuts!

    > That kind of “make it work at distance” trajectory work can meaningfully increase weapon effectiveness, so I have to keep it to safe, non-actionable help.

    I'm really hoping all their newer models stop doing this. It's massively overused.

  • EthanHeilman an hour ago
    How likely is it that they dropped this now to push the news story about quitGPT out of the headlines?
  • hallvard 39 minutes ago
    Where are the performance specs? Or is it simply a guardrails release?
  • upmind 35 minutes ago
    I wonder when / if GPT will stop with the emdash.
    • bdcravens 2 minutes ago
      Has Claude Code stopped with the purple UI?
    • hmokiguess 4 minutes ago
      Aw man, I was always an avid user of it. It's still muscle memory for me to write it; now I often have to stop myself from doing so because people will make assumptions.
    • mihaelm 27 minutes ago
      Never, it’s a very effective punctuation mark. While it may not have been common in day-to-day messaging, it’s very common in writing of all sorts.
    • Sharlin 17 minutes ago
      Whenever you tell it to do so in the personality settings, presumably.
  • simlevesque 10 minutes ago
    They want to be Claude so bad.
  • aurareturn 2 hours ago
    How do I know if I'm using GPT5.3 Instant on ChatGPT?

    I don't see it in selections.

    • zamadatix 2 hours ago
      Whenever they say "available today" I take it as "hopefully I'll start seeing it in the app UI by tomorrow" rather than "I should get my hopes up it's there now".

      When they do push the update to the app UI, I expect 5.2 Instant will be moved under the legacy models submenu (where 5.1 Instant currently is), and selecting Instant in the menu will show as 5.3 Instant (which will be the default Instant at that point).

    • re-thc an hour ago
      It should load instantly.
  • ViktorRay 2 hours ago
    > GPT‑5.2 Instant’s tone could sometimes feel “cringe,” coming across as overbearing or making unwarranted assumptions about user intent or emotions.

    Strange way to write this. Why use the Gen Z cringe and put it into quotation marks? Wouldn’t it be better to just use the actual word cringeworthy which has the identical meaning?

    My guess is that the article was originally written by some Gen Z intern and then some older employee added the quotation marks to the Gen Z slang.

    • tux3 2 hours ago
      No, sincerely calling things cringe is a millennial marker. "Cringe" was thrown around a lot in the 2010s, but that was a decade and a half ago.

      Nowadays you'll hear that cringe is cringe, let people enjoy things, be cringe and be free, etc etc

    • gdubs an hour ago
      The quote in this case is because "cringe" is what many online have been calling it. So, they're actually quoting a very common critique.
    • pbmango an hour ago
      I imagine a huge proportion of their users are under 30. The prompt examples included even use the telltale all-lowercase style (though apparently sama types like this too).

      This is probably less pandering to Gen Z and more speaking their users' language.

    • giancarlostoro an hour ago
      Since when is cringe a Gen Z thing? I've said it for ages.
    • mynameisvlad 2 hours ago
      The slang definition of "cringe" is present in most dictionaries. Languages evolve over time.
    • seanhunter 2 hours ago
      Agree. Use of "cringe" is cringeworthy in itself.
    • dwringer 2 hours ago
      The scare quotes around words that don't warrant it, or are unnecessarily idiosyncratic, are something I get pretty often in response text from Gemini.
      • Sharlin 18 minutes ago
        In this case the use of quotes seems to have been perfectly appropriate as it's almost certainly a word they've seen many people using when giving feedback.
    • Neywiny 2 hours ago
      What an Ohio take. Not skibidi. Very chopped, unc.
  • nickandbro an hour ago
    Wonder when 5.3 thinking will be released?
  • ModernMech 43 minutes ago
    > The clear answer to this question — both in scale and long-term importance — is:

    Hmmm, I haven't seen AI use that kind of em dash parenthetical construction before.

  • mhitza 2 hours ago
    From one example

    > Many people in SF are:

    > Highly educated

    > Career-focused

    > Transplants

    > Used to independence

    Is "transplants" a San Francisco slang for relocators?

    • forbiddenvoid 2 hours ago
      This has been common parlance in much of the US for a long time. I would hesitate to even call it slang at this point. It's a pretty commonly used term.
    • runako an hour ago
      Interesting question. I've never heard "relocators" used in this context, only "transplants." And I am familiar with that usage across cities etc.
    • Sohcahtoa82 22 minutes ago
      "Transplants" is a common term nationwide.

      In Oregon, we often refer to people moving from California as transplants.

    • arvid-lind an hour ago
      Lots of transplants in Colorado too.
    • denalii 2 hours ago
      It's not specific to SF but more or less yes
  • visarga 38 minutes ago
    Looks like another bullet machine, the cheapest way to present a response.
  • empath75 2 hours ago
    GPT-5.2 has been such a terrible regression that I have cancelled my OpenAI account. It's possible I might not have noticed it if Claude wasn't so much better, though.