24 points by cactusplant7374 3 hours ago | 4 comments
  • tekacs an hour ago
    To those wondering about their rationale for this.

    It would be great if the HN title could be changed to something more like, 'OpenAI requiring ID verification for access to 5.3-codex'?

    > Thank you all for reporting this issue. Here's what's going on.

    > This rerouting is related to our efforts to protect against cyber abuse. The gpt-5.3-codex model is our most cyber-capable reasoning model to date. It can be used as an effective tool for cyber defense applications, but it can also be exploited for malicious purposes, and we take safety seriously. When our systems detect potential cyber activity, they reroute to a different, less-capable reasoning model. We're continuing to tune these detection mechanisms. It is important for us to get this right, especially as we prepare to make gpt-5.3-codex available to API users.

    > Refer to this article for additional information. You can go to chatgpt.com/cyber to verify and regain gpt-5.3-codex access. We plan to add notifications in all of our Codex surfaces (TUI, extension, app, etc.) to make users aware that they are being rerouted due to these checks and provide a link to our “Trusted Access for Cyber” flow.

    > We also plan to add a dedicated button in our /feedback flow for reporting false positive classifications. In the meantime, please use the "Bug" option to report issues of this type. Filing bugs in the GitHub issue tracker is not necessary for these issues.

    • Dylan16807 an hour ago
      Sounds like a thing they've said a dozen times so far about how their models are too scary. And a bad implementation of controls on top of that.

      But right now I want to focus on what one of the more recent comments pointed out. "cyber-capable"? "cyber activity"? What the hell is that. Use real words.

  • avaer an hour ago
    When the GPT-5 router architecture was introduced, I worried that OpenAI would use the technology as a pretext to mislead or defraud users by substituting in worse quality when they could get away with it and then "blame it on the AI" when they got too aggressive.

    I don't know if we're there yet but these reports do not fill me with hope.

    • BrouteMinou 11 minutes ago
      The "pay me for premium, I give you non-premium" kind of fraud?

      It just proves that there is not much of an improvement if they can get away with it, doesn't it? But hey, I am sure the benchmarks all say otherwise.

  • ekaesmem 3 hours ago
    Today I also noticed that gpt-5.3-codex in Codex CLI was extremely slow, and then I found that response.model showed the request had been routed back to gpt-5.2-2025-12-11 by the upstream.
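
    A minimal sketch if you want to check this yourself: it assumes the official OpenAI Python SDK and the Responses API, the model names are just the ones mentioned in this thread, and per OpenAI's note above gpt-5.3-codex may not be reachable over the API yet.

        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        requested = "gpt-5.3-codex"  # model name as reported in this thread
        response = client.responses.create(
            model=requested,
            input="Summarize the open TODOs in this repo.",
        )

        # The response reports which model actually served the request,
        # so a silent reroute shows up as a mismatch here.
        if response.model != requested:
            print(f"rerouted: asked for {requested}, got {response.model}")
        else:
            print(f"served by {response.model} as requested")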
  • usernamed7 29 minutes ago
    aaaaaaand this is why I prefer Anthropic. There are just too many sneaky/misleading/deceptive things with ChatGPT. Even if benchmarks show Codex to be slightly better, the developer experience with Claude Code is much better.