206 points by retube 10 days ago | 35 comments
  • deaux9 days ago
    When they started doing this ID verification early this year, I expressed outrage on here and was met with comments downplaying it, saying "It's a given that soon the others will follow". I'm sure some of those came from people at OpenAI.

    We're now at the end of the year and neither Google nor Anthropic nor any other LLM provider does this. OpenAI does this because their CEO is SamA. That's it.

    • dlcarrier9 days ago
      Google, on the other hand, gives me AI responses that I never asked for, even when I'm using a private browsing window, from a dynamic IP address.
    • cool_man_bob9 days ago
      > I'm sure some of those came from people at OpenAI.

      Don’t underestimate the volume of useful idiots.

  • rsync10 days ago
    A certain business I own has an openai account for testing and research purposes.

    What ID would we provide?

    Would we pick some random employee to attach to the account?

    What relevance does this have to the notion of “piercing the corporate veil” if a business account is tied to someone’s driver's license?

    I place the blame for this situation squarely on the careless and thoughtless user population who have blindly provided their phone numbers, and now ID scans, to any random, fly-by-night startup that requests them.

    • paulddraper10 days ago
      I assume the correct answer is an officer of the company, the same as for who signs contracts, etc.
  • crazygringo10 days ago
    Just searched for their actual policies to corroborate and found the policies on ID verification:

    https://help.openai.com/en/articles/10910291-api-organizatio...

    And that credits are nonrefundable:

    https://openai.com/policies/service-credit-terms/

    It seems like absolutely terrible customer service not to issue refunds in this case. Obviously the credits can still be used for most of the models, so it's not like you can't do anything with them. But if someone explains they bought the credits specifically to use with the verification-gated models and then discovered they couldn't (since apparently verification fails for some people), there's no question that refunds are the right thing to do. What is OpenAI thinking?

    (BTW, speculation seems to be that the verification process doesn't have anything to do with know-your-customer laws or anti-fraud, but is intended to prevent competitors like Chinese DeepSeek from having large-scale access to OpenAI's best models.)

    • logicchains10 days ago
      >(BTW, speculation seems to be that the verification process doesn't have anything to do with know-your-customer laws or anti-fraud, but is intended to prevent competitors like Chinese DeepSeek from having large-scale access to OpenAI's best models.)

      It's not because OpenAI's CEO is also the founder of WorldCoin, a project to ID everyone?

      • egorfine10 days ago
        Funny, though, that their KYC process is not done via WorldCoin. Obviously because WorldCoin KYC is useless to the authorities.
    • pjmlp10 days ago
      If the post author is located in Europe, whatever those links say is worthless.

      Most European countries have consumer protection agencies with teeth, and a company cannot decide on their own what they refund or not.

      • retube9 days ago
        This may be true, but it potentially involves a lot of time, energy, and money for the author to challenge... so for 99% of people, OpenAI will get away with it
        • privacyking9 days ago
          A chargeback can be started in minutes.
    • potamic10 days ago
      That's kinda scammy. It's not like they have to manage shipments and handle goods or anything. I wonder if they're banking on a percentage of users leaving credits unused like credit card companies do with loyalty points.
      • helicone10 days ago
        I don't think they care one way or the other. They haven't ever been profitable, and so they're likely going to build up data and pull the rug on all of their users by suddenly declaring themselves a data broker. They won't try this against companies that can afford to sue, but most of their users will probably start to get even more creepily targeted ads directed at them.
    • irvingprime10 days ago
      Customer service? In the age of AI? What have you been smoking?
  • seneca10 days ago
    Yeah, I'm not at all willing to do these sorts of verifications. Any company doing them essentially doesn't exist to me. I don't even use Anthropic because they require a phone number to register.
    • quantummagic10 days ago
      Same. I don't understand why so many people are happy to give their phone number to some random service provider. It's a shame it has become normalized.
      • CaptainOfCoit10 days ago
        My phone number is basically public and has been for 20 years, every email I send has my phone number and it's findable via the public internet too.

        Not sure why people see their phone number as something private?

        FWIW, I've heard some people say they avoid giving it out because of spam. I've been on my local anti-spam list since I got my current phone number and receive about one spam call a week. Maybe there is one for where you live too.

        • seneca10 days ago
          You're lucky. In my experience no-call lists don't work.

          I command a significant budget and even with a lot of effort to not proliferate my phone number, I get at least half a dozen spam or sales calls a day. I can't imagine how bad it would be if I didn't attempt to protect it. Perhaps it would be the same and I should just give up, but I'm not willing to try.

          The other side of the coin is that it's just none of their business. They don't need my phone number to sell me SaaS software. There is no upside for me to give it to them.

          • CaptainOfCoit10 days ago
            > You're lucky. In my experience no-call lists don't work.

            I don't think so; I've had friends and acquaintances with the same issue as you, multiple spam calls per day. I helped them add themselves to their national list, and after a month or two the constant spamming stopped.

            I think you might just be unlucky to live in a country that doesn't have such a list, or one that works OK. I've lived in multiple countries so far in my life, and it's been the same in all of them: adding myself to the list eventually makes the spam stop.

  • gdulli10 days ago
    It's hard to know exactly which forms it will end up taking, but dependence on these companies is going nowhere good, and it will get there more quickly than it took streaming (for example) to go from offering a better experience (to win market share) to the current, and inevitable, norm of constantly raising prices and introducing unskippable ads.
    • dawnerd10 days ago
      I could totally see them having responses sprinkled with subtle marketing. Ask it for the best travel backpack and ooops all sponsored.
      • conception10 days ago
        This is already the case for OpenAI. Go ask for some backpack recommendations.
    • CaptainOfCoit10 days ago
      > and introducing unskippable ads.

      Maybe I'm missing something obvious, but where on either ChatGPT or the API platform OpenAI hosts are you seeing ads?

      • mapontosevenths10 days ago
        > it took streaming...

        Parent is comparing OpenAI to other companies that followed a similar trajectory of enshittification.

        • gdulli10 days ago
          Yes, and just as ads eventually came to streaming in a worse form (unskippable, hypertargeted) that cable companies never had the ability to create, the new frontier of ads will again come with qualitatively worse innovations: seamless and undisclosed in conversational LLM output.
  • syntaxing10 days ago
    You can try the latest GLM 4.6: https://z.ai/ . Their coding plan is $6 a month, and the model performs on par with Sonnet 4 for my personal tasks. Sonnet 4.5 still has an edge though. All of Z.ai's models are also open source, so you can run them locally if you want.
    • mark_l_watson10 days ago
      I am mostly retired but I am thinking of restarting a solo products mini-company next year. I have been looking at much less expensive options like Alibaba Cloud, GLM, Kimi K2, etc. There is a recent Stanford study showing most US startups are using less expensive Chinese models, but I think usually hosted in the US.

      For now I am happy enough with Gemini and GPT-5 because my usage is so light that anything is cheap. For many engineering use cases, Gemini-2.5-flash-lite works well enough.

      How do you use GLM? With codex --oss? Or just 'raw', with no agent-wrapping coding environment?

      • syntaxing10 days ago
        I use it directly with Claude Code [1]. Honestly, it just makes sense IMO to host your own model when you have your own company. You can try something like OpenRouter for now (quick sketch below the links) and then set up your own hardware. Since most of these models are MoE, you don't have to load everything into VRAM. A mixture of a 5090 + EPYC CPU + 256GB of DDR5 RAM can go a very long way. You can offload most of the expert layers onto the CPU and leave the rest on the GPU. As usual, Unsloth has a great page about it [2].

        [1] https://docs.z.ai/scenario-example/develop-tools/claude
        [2] https://docs.unsloth.ai/models/glm-4.6-how-to-run-locally
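
        If you want to kick the tires before buying hardware, here's a minimal sketch of calling GLM through OpenRouter with the standard openai Python SDK. The base URL is OpenRouter's OpenAI-compatible endpoint; the model slug "z-ai/glm-4.6" and the OPENROUTER_API_KEY env var name are assumptions to check against their catalog:

            import os
            from openai import OpenAI  # pip install openai

            # OpenRouter speaks the OpenAI API; bring your own OpenRouter key.
            client = OpenAI(
                base_url="https://openrouter.ai/api/v1",
                api_key=os.environ["OPENROUTER_API_KEY"],  # assumed env var name
            )

            # "z-ai/glm-4.6" is an assumed slug; check OpenRouter's model list.
            resp = client.chat.completions.create(
                model="z-ai/glm-4.6",
                messages=[{"role": "user", "content": "Reverse a string in Python."}],
            )
            print(resp.choices[0].message.content)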

      • mitjam10 days ago
        Hope you'll share your story if you start. Love your book on LangChain from, IIRC, 2 years ago; it got me going.
      • mistrial910 days ago
        > There is a recent Stanford study showing most US startups are using less expensive Chinese models

        link ?

        • mitjam10 days ago
          Idk if this is the reference but it’s in the same direction:

          "These days, when entrepreneurs pitch at Andreessen Horowitz (a16z), a major Silicon Valley venture-capital firm, there's a high chance their startups are running on Chinese models. 'I'd say there's an 80% chance they're using a Chinese open-source model,' notes Martin Casado, a partner at a16z." -- https://ixbroker.com/blog/china-is-quietly-overtaking-americ...

  • jarym10 days ago
    Wonder how long before they'll have to start reporting 'suspicious activity' to the government, the same as financial institutions have to do for money transfers.
    • A4ET8a8uTh0_v210 days ago
      You can reasonably assume it is already happening. The only differences are that for FIs it is required by law and the implementation is relatively similar across the board, and that OpenAI is one giant source of info you wouldn't get anywhere else.

      It fairly accurately inferred my age, location, place of birth, and political inclinations from our conversations alone. I am certain it can infer a lot more.

      • egorfine10 days ago
        This.

        There is no other reason to require KYC for a server-side text transformation tool, no matter how impressive it is.

        • Mars0089 days ago
          The other reason could be the copyright cases they are fighting in court. OAI was ordered to keep all records, including private ones. Not sure if that order has been lifted already.

          And another could be EU requirements for age verification. AI can produce adult content.

          There may be other reasons, like preventing OAI models' output from being used to train competing models.

          • egorfine9 days ago
            > AI can produce adult content.

            They should realize that anything can produce adult content. Anything.

        • weird-eye-issue9 days ago
          No other reason? What about simple fraud protection? It's the same reason they switched new accounts to prepaid credits instead of paying at the end of the month. There is a ton of fraud in this industry.
          • egorfine9 days ago
            No worries. Their competitors do not require KYC.
            • weird-eye-issue9 days ago
              They all require paying for credits up front, which is also an anti-fraud measure, which was my entire point ;)
              • egorfine9 days ago
                Credit upfront as antifraud: perfectly fine. KYC: absolutely not.
                • weird-eye-issue8 days ago
                  And if you ran a company at OpenAI's scale, then you could make that decision, but you don't
                  • egorfine8 days ago
                    How much do you know about me?

                    Anyways, competitors do not require KYC for text transformation services, and that's how it should be.

                    • weird-eye-issue8 days ago
                      You call OpenAI a "text transformation service" so clearly you are incompetent and your website backs that up
                      • egorfine8 days ago
                        Thank you for your valuable feedback!
      • weird-eye-issue10 days ago
        Absolutely not. It would require product, engineering, admin, etc. effort to do that, and unless it is required by law, why would they waste the time when they have a lot else to do?
        • bgwalter10 days ago
          They have an ex-NSA chief on the board, and doing surveillance voluntarily may result in government help, like getting contracts in South Korea and Argentina, that may bring in far more money than the implementation costs. Perhaps they outsource the implementation to Palantir or the NSA. It is basically a simple middleware that is inserted somewhere once the traffic is decrypted.

          So I don't think implementation costs are an obstacle.

        • orthecreedence10 days ago
          > why would they waste the time

          Because then the NSA shows up with an NSL, you integrate with the fascist surveillance state or you lose your business. How have people forgotten this so fucking quickly?

          • A4ET8a8uTh0_v210 days ago
            To be fair, I am interested in the subject and I don't even remember the name of the telecom that tried to buck the pressure and went out of business not long after. It has been that long. Forgetting is possible, so I give people some grace.
          • weird-eye-issue9 days ago
            Did you miss where I said "unless it is required by law"?
            • orthecreedence7 days ago
              That's the point: it is always required by law. There is no case where it is not required by law.
    • queenkjuul9 days ago
      I'd have sworn they've already admitted to this
  • binarymax10 days ago
    GPT-5 works, just not GPT-5 streaming. I posted about this a little while ago with more details: https://news.ycombinator.com/item?id=44837367
    • thr0w10 days ago
      What is it about streaming specifically that necessitates this? Am I missing something obvious?
      • BoorishBears9 days ago
        The excuse is probably that classifiers for streaming are less robust.

        It's easier to get a partial response for something like a CBRN topic.

    • deaux9 days ago
      I thought this was indeed the case at release, but then they changed it to also be for non-streaming (completions). So either they reverted it back or it was a temporary bug during the early days of the model.

      When did you last check?

      • binarymax9 days ago
        I ran a test just now, and gpt-5 without streaming works without the biometric check.
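
        Roughly this kind of call, with the standard openai Python SDK (a sketch, not my exact script; model name as discussed in this thread):

            from openai import OpenAI  # pip install openai; reads OPENAI_API_KEY from env

            client = OpenAI()
            messages = [{"role": "user", "content": "Say hello."}]

            # Non-streaming: went through without the verification step in my test.
            resp = client.chat.completions.create(model="gpt-5", messages=messages)
            print(resp.choices[0].message.content)

            # Streaming: this is the variant reported as gated behind org verification.
            stream = client.chat.completions.create(model="gpt-5", messages=messages, stream=True)
            for chunk in stream:
                delta = chunk.choices[0].delta.content
                if delta:
                    print(delta, end="")
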
        • deaux9 days ago
          Interesting! Makes one wonder why they limit streaming only. I guess to induce just enough friction then.
  • egorfine10 days ago
    Indeed. I have opened the playground and it doesn't let me choose GPT-5. Obviously I will not be KYCing myself.

    But that's okay. There are plenty of other models. Perhaps not bleeding edge great, but great nevertheless.

    • tensility10 days ago
      As far as I understand, many users are better off with GPT-4o anyway. Amusing to be charging premiums for an objectively bad upgrade, but I guess that's the kind of bullshit economics that hype cycles create.
      • Sabinus9 days ago
        How are users better off with 4o? I thought the point of 5 was that it delivered better results for cheaper, in fewer tokens.
      • egorfine10 days ago
        I meant competitors
    • SarahPeter10 days ago
      [dead]
  • puppycodes10 days ago
    I wouldn't tie my email to a chatbot, let alone my literal government ID.
  • Xorakios9 days ago
    >A chargeback can be started in minutes.

    Alas, on my Social Security-mandated USDirectExpress card, it takes hours to start the process through three levels by phone, then documentation that the vendor refused to process a reimbursement, then a physical form received and returned by US Postal Service within 10 calendar days. Everything changed last year when the outgoing administration changed the rules and chose a new bank as the provider for Social Security payments.

  • throway1234510 days ago
    Is it by any chance because your POST is requesting a summary of the reasoning, e.g. setting {summary: "auto"} or somesuch? I know that requires verification.
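
    Something like this with the Responses API in the openai Python SDK is what I mean; as far as I know it's the "summary" field under "reasoning" that trips the verification requirement (a sketch, parameter names as I remember them from the docs):

        from openai import OpenAI  # pip install openai; reads OPENAI_API_KEY from env

        client = OpenAI()

        # Asking for a reasoning summary is, as far as I know, what requires a
        # verified organization; dropping "summary" should avoid the check.
        resp = client.responses.create(
            model="gpt-5",
            input="Explain why the sky is blue in one sentence.",
            reasoning={"effort": "low", "summary": "auto"},
        )
        print(resp.output_text)
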
  • mkbkn10 days ago
    Raise a chargeback
    • rhetocj2310 days ago
      This, and in the future, ensure any purchases are made on a credit card, not a debit card.
      • tobwen10 days ago
        In Europe, SEPA direct debits can also be reversed. But you can expect to receive a payment reminder threatening legal action within a few days.
        • leobg10 days ago
          If they broke the contract? Let them come.
  • ax0ar9 days ago
    OpenAI is literally trying to play the role of a state. Why would I involve a private company in my national ID paperwork? That's none of their business. And that should be literally every sane person's stance. Their model security is not my problem.
  • pwlm8 days ago
    ID verification is often used to increase the cost of abusing AI by staking one's reputation on the account. But there's another way: increase the cost without a person's reputation attached, while they remain anonymous.
  • johnnyApplePRNG8 days ago
    I would suggest against Deepseek.

    Deepseek is nowhere close to OpenAI in terms of coding ability.

    And the fact that it will just cut the API if you ask anything that might be considered taboo in China... I just don't see the draw.

    Cheap, sure! It's definitely that!

  • yalogin10 days ago
    If DeepSeek and Qwen are capable, why does one need to use OpenAI models? Are their models really that much better? If not, the only thing they bring is the hosting service. In that scenario, how long do they have this advantage before AWS or Microsoft takes over?
    • Mars0089 days ago
      > If DeepSeek and Qwen are capable, why ...

      Of course, you can go further and run Qwen locally. Or even train your own nanoGPT. Why not, if it's capable, right? And that 'if' is a big question.

      • yalogin9 days ago
        I was not trying to be snarky; it was meant as a technical question.
        • Mars0089 days ago
          Technically it's cheaper, but it's not an equivalent replacement.
  • comrade123410 days ago
    I put $2 on my DeepSeek account and have barely used it; it's so cheap.
  • sxndmxn8 days ago
    DeepSeek will route all of your traffic through Hong Kong. If you're really worried about privacy, that is NOT the way to go.
  • bilsbie9 days ago
    We need local AI ASAP. That's really the bottom line.
    • kagerou74a day ago
      Absolutely. The sooner, the better.
    • lcnPylGDnU4H9OF9 days ago
      • marak8308 days ago
        Ollama is a good one; LM Studio is great for those who are unsure what to do (it will help you pick a model that fits your system specs).

        If you use Open WebUI (I recommend running it via Docker), you can access your Ollama-hosted model via the browser on any device on your network. Tailscale will help make that accessible remotely.

        I'm currently working on an open-source long-term memory system designed to work with Ollama to help local models be more competitive with the big players, so we are not so beholden to these big companies.
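
        Once you've pulled a model, anything that speaks the OpenAI API can talk to Ollama locally. A minimal sketch against Ollama's OpenAI-compatible endpoint (the model name is just an example; use whatever you've pulled):

            from openai import OpenAI  # pip install openai

            # Ollama serves an OpenAI-compatible API on localhost:11434; the
            # api_key is required by the client but ignored by Ollama.
            client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

            resp = client.chat.completions.create(
                model="llama3.1",  # example name; any model pulled with `ollama pull`
                messages=[{"role": "user", "content": "In one sentence, what is a MoE model?"}],
            )
            print(resp.choices[0].message.content)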

        • kagerou74a day ago
          That sounds great — thank you for working on this. I’m not a developer, just curious about AI in general. Local AI feels like the right direction if we want to save energy and water, too. Is your memory system open source?
  • gidellav10 days ago
    Just use OpenRouter; it lets you connect to all models.
    • Scene_Cast210 days ago
      OpenRouter has some gotchas with OpenAI models. In some cases it requires an OpenAI key.
      • Deathmax10 days ago
        Not anymore, especially after other routers like Vercel's AI Gateway, and proxies from LLM providers like Fal, DeepInfra, and AtlasCloud, didn't get the memo about enforcing BYOK for ID-verification-required models after GPT-5's release.
  • replwoacause9 days ago
    What’s so special about GPT-5 streaming that it requires government ID to use it?
    • ax0ar9 days ago
      Nothing. They just want more control, step by step. Imagine if the models were really that fascinating and could do everything; they would literally act like a sovereign state.
  • journal8 days ago
    Admit it: anyone who writes code is completely addicted to ghost text completions. You wouldn't have it any other way. They can do ANYTHING.
  • Halian5 days ago
    Thus be it for slopherds.
  • I_am_tiberius5 days ago
    Don't trust SAMA
  • pixel_popping10 days ago
    This is honestly a huge shame, as they aren't legally required to do so; this is PURELY for mass data collection and correlation.
  • reustle10 days ago
    Also terrible that purchased credits expire after 1 year. Not sure how that is legal.

    https://community.openai.com/t/api-credits-amount-get-expire...

  • alganet9 days ago
    That's probably a good thing.
  • bn-l10 days ago
    Was this for all models?
    • retube10 days ago
      Good question. I was trying with GPT-5, but it seems GPT-4o works without verification. However, 4o is, I guess, not as good, plus it seems to be twice as expensive as 5.
      • bn-l9 days ago
        GPT-$ is the money GPT. I don't trust the benchmarks, and Artificial Analysis's benchmarks are bunk.
    • andai10 days ago
      I believe it started with o3 Pro, back in the day.
  • Lapra10 days ago
    They are a porn company now, after all.
    • goshx10 days ago
      Perhaps this was the reason behind the move.
    • Palmik10 days ago
      In the same sense in which Google is a porn aggregator company because it will return porn results when you ask for them?
      • queenkjuul9 days ago
        OpenAI will now generate porn for you; Google doesn't.
        • Palmik9 days ago
          If you read what I wrote carefully, you'll note that I used "porn aggregator", not just "porn site" and not even "porn generator".
          • queenkjuul9 days ago
            Well, OpenAI will not be a porn aggregator, they'll be a porn producer, so I'm not sure what your point is.
    • nopurpose10 days ago
      huh? What did I miss?
  • tensility10 days ago
    [flagged]
    • dang10 days ago
      Ok, but can you please not fulminate on HN? It's not what this site is for, and destroys what it is for.

      This is in the site guidelines: https://news.ycombinator.com/newsguidelines.html.

    • bloppe10 days ago
      Yikes
    • 10 days ago
      undefined
    • senordevnyc10 days ago
      I understand the anger, but do you really want to live in the world of anarchy that would be required for these people to starve? Because if the billionaires are starving, the rest of us are long gone at that point.
      • add-sub-mul-div10 days ago
        "Letting them starve" is clearly rhetorical shorthand for not giving their businesses money. It's based on the irony of the power imbalance, none of these people will ever starve if their businesses fail. Nobody thinks they're going to starve, nobody was intended to take away that literal interpretation, how could you possibly think this interpretation was intended or is worth discussing.
        • senordevnyc10 days ago
          I guess I was thrown off by “let them starve, the way they want us to”. Doesn’t make much sense if you’re using “starve” to mean totally different things.

          Even then, it’s nonsensical to think that you’re going to “starve” these companies of revenue, companies that are growing faster than any in history, bringing in trillions in revenue, and have appreciable fractions of our entire species using them daily.

      • exe3410 days ago
        To be fair, to them, starving is "other people aren't spending their money on me". Remember Emlo sued people who stopped advertising on his personal blog when he let the Nazis back in.
      • helicone10 days ago
        Anarchy isn't required for them to starve. These people could be jailed and their assets frozen, for example, and their jail food would then be stolen by the more physically intimidating inmates. Regardless of your political opinions on the subject, this is a perfectly cromulent scenario that includes them starving without there being anarchy.
  • Kiboneu10 days ago
    One more step towards Worldcoin...
  • s530010 days ago
    [dead]
  • SarahPeter10 days ago
    [flagged]
    • 10 days ago
      undefined
    • supriyo-biswas10 days ago
      It appears that you have quite a few LLM comments going on here. While your customers (as you mention in your profile) may appreciate it, it is typically not looked upon well here. Thank you.
    • fouc10 days ago
      > ID verification is becoming common across AI platforms

      Yikes if true. I wonder why?

      • deaux9 days ago
        Ironically, it's a hallucination.
  • chistev10 days ago
    Interesting