207 points by retube 3 months ago | 36 comments
  • deaux3 months ago
    When they started doing this ID verification early this year, I expressed outrage on here and was met with comments downplaying it, saying "It's a given that soon the others will follow". I'm sure some of those came from people at OpenAI.

    We're now at the end of the year and neither Google nor Anthropic nor any single other LLM provider does this. OpenAI does this because their CEO is SamA. That's it.

    • dlcarrier3 months ago
      Google, on the other hand, gives me AI responses that I never asked for, even when I'm using a private browsing window, from a dynamic IP address.
    • tmaly3 months ago
      Didn't Sam try to make an iris scanning startup at some point?
    • cool_man_bob3 months ago
      > I'm sure some of those came from people at OpenAI.

      Don’t underestimate the volume of useful idiots.

  • rsync3 months ago
    A certain business I own has an openai account for testing and research purposes.

    What ID would we provide?

    Would we pick some random employee to attach to the account?

    What relevance does this have to the notion of “piercing the corporate veil” if a business account is tied to someone's driver's license?

    I place the blame for this situation squarely on the careless and thoughtless user population who have blindly provided their phone numbers, and now ID scans, to any random, fly-by-night startup that requests them.

    • paulddraper3 months ago
      I assume the correct answer is an officer of the company, the same as whoever signs contracts, etc.
  • crazygringo3 months ago
    Just searched for their actual policies to corroborate and found the policies on ID verification:

    https://help.openai.com/en/articles/10910291-api-organizatio...

    And that credits are nonrefundable:

    https://openai.com/policies/service-credit-terms/

    It absolutely seems like terrible horrible customer service not to issue refunds in this case. Obviously the credits can still be used for most of the models, so it's not like you can't do anything with them. But if someone explains they bought the credits specifically to use with the verification-gated models and then discovered they couldn't (since apparently verification fails for some people), there's no question that refunds are the right thing to do. What is OpenAI thinking?

    (BTW, speculation seems to be that the verification process doesn't have anything to do with know-your-customer laws or anti-fraud, but is intended to prevent competitors like Chinese DeepSeek from having large-scale access to OpenAI's best models.)

    • logicchains3 months ago
      >(BTW, speculation seems to be that the verification process doesn't have anything to do with know-your-customer laws or anti-fraud, but is intended to prevent competitors like Chinese DeepSeek from having large-scale access to OpenAI's best models.)

      It's not because OpenAI's CEO is also the founder of WorldCoin, a project to ID everyone?

      • egorfine3 months ago
        Funny, though, that their KYC process is not done via WorldCoin. Obviously that's because WorldCoin KYC is useless to the authorities.
    • pjmlp3 months ago
      Depending on where the post author is located, whatever those links say may be worth garbage; it certainly is if they are located in Europe.

      Most European countries have consumer protection agencies with teeth, and a company cannot decide on its own what it will or won't refund.

      • retube3 months ago
        This may be true, but challenging it potentially costs the author a lot of time, energy, and money... so for 99% of people, OpenAI will get away with it.
        • privacyking3 months ago
          A chargeback can be started in minutes.
    • potamic3 months ago
      That's kinda scammy. It's not like they have to manage shipments and handle goods or anything. I wonder if they're banking on a percentage of users leaving credits unused like credit card companies do with loyalty points.
      • helicone3 months ago
        I don't think they care one way or the other. They haven't ever been profitable, and so they're likely going to build up data and pull the rug on all of their users by suddenly declaring themselves a data broker. They won't try this against companies that can afford to sue, but most of their users will probably start to get even more creepily targeted ads directed at them.
    • irvingprime3 months ago
      Customer service? In the age of AI? What have you been smoking?
  • seneca3 months ago
    Yeah, I'm not at all willing to do these sorts of verifications. Any company doing them essentially doesn't exist to me. I don't even use Anthropic because they require a phone number to register.
    • quantummagic3 months ago
      Same. I don't understand why so many people are happy to give their phone number to some random service provider. It's a shame it has become normalized.
      • CaptainOfCoit3 months ago
        My phone number has been basically public for 20 years: every email I send includes it, and it's findable via the public internet too.

        Not sure why people see their phone number as something private?

        FWIW, I've heard some people say they avoid it because of spam. I've been on my local anti-spam list since I got my current phone number, and I receive about one spam call a week. Maybe there is one for where you live too.

        • seneca3 months ago
          You're lucky. In my experience no-call lists don't work.

          I command a significant budget and even with a lot of effort to not proliferate my phone number, I get at least half a dozen spam or sales calls a day. I can't imagine how bad it would be if I didn't attempt to protect it. Perhaps it would be the same and I should just give up, but I'm not willing to try.

          The other side of the coin is that it's just none of their business. They don't need my phone number to sell me SaaS software. There is no upside for me to give it to them.

          • CaptainOfCoit3 months ago
            > You're lucky. In my experience no-call lists don't work.

            I don't think so; I've had friends and acquaintances with the same issue as you, multiple spam calls per day. I helped them add themselves to their national list, and after a month or two the constant spamming stopped.

            I think you might just be unlucky, living in a country that doesn't have such a list, or not one that works well. I've lived in multiple countries so far in my life, and it's been the same in all of them: adding myself to the list eventually makes the spam stop.

  • gdulli3 months ago
    It's hard to know exactly which forms it will end up taking, but dependence on these companies is going nowhere good, and it will get there more quickly than it took streaming (for example) to go from offering a better experience (to win market share) to the current, and inevitable, norm of constantly raising prices and introducing unskippable ads.
    • dawnerd3 months ago
      I could totally see them having responses sprinkled with subtle marketing. Ask it for the best travel backpack and ooops all sponsored.
      • conception3 months ago
        This is already the case for OpenAI. Go ask for some backpack recommendations.
    • CaptainOfCoit3 months ago
      > and introducing unskippable ads.

      Maybe I'm missing something obvious, but where on either ChatGPT or the API platform OpenAI hosts are you seeing ads?

      • mapontosevenths3 months ago
        > it took streaming...

        Parent is comparing OpenAI to other companies that followed a similar trajectory of enshittification.

        • gdulli3 months ago
          Yes, and just as ads eventually came to streaming in a worse form (unskippable, hypertargeted) than cable companies ever had the ability to deliver, the new frontier of ads will again come with qualitatively worse innovations: seamless and undisclosed placement in conversational LLM output.
  • syntaxing3 months ago
    You can try the latest GLM 4.6 https://z.ai/ . Their coding plan is $6 a month and performs on par with Sonnet 4 for my personal tasks. Sonnet 4.5 still has an edge though. All of Z.ai's models are also open source, so you can run them locally if you want.
    • mark_l_watson3 months ago
      I am mostly retired but I am thinking of restarting a solo products mini-company next year. I have been looking at much less expensive options like Alibaba Cloud, GLM, Kimi K2, etc. There is a recent Stanford study showing most US startups are using less expensive Chinese models, but I think usually hosted in the US.

      For now I am happy enough with Gemini and GPT-5 because my usage is so light that anything is cheap. For many engineering use cases, Gemini-2.5-flash-lite works well enough.

      How do you use GLM? With codex --oss? Or just ‘raw’, with no agent-wrapping coding environment?

      • syntaxing3 months ago
        I use it directly with Claude Code [1]. Honestly, it just makes sense IMO to host your own model when you have your own company. You can try something like OpenRouter for now and then set up your own hardware. Since most of these models are MoE, you don't have to load everything in VRAM. A mixture of a 5090 + EPYC CPU + 256GB of DDR5 RAM can go a very long way: you can offload most of the expert layers onto the CPU and leave the rest on the GPU. As usual, Unsloth has a great page about it [2]; a rough sketch of the coarse split is below.

        [1] https://docs.z.ai/scenario-example/develop-tools/claude [2] https://docs.unsloth.ai/models/glm-4.6-how-to-run-locally
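
        A rough sketch of that coarse CPU/GPU split, assuming a local GGUF quant of GLM-4.6 and the llama-cpp-python bindings. The finer-grained expert-tensor offloading the Unsloth guide describes is a llama.cpp-level option; this only shows the simple layer split, and the file path and layer count are illustrative:

          # Minimal sketch: partial GPU offload of a local GGUF model via llama-cpp-python.
          # Assumes a quantized GLM-4.6 GGUF on disk and a CUDA-enabled build of the package.
          from llama_cpp import Llama

          llm = Llama(
              model_path="./glm-4.6-q4_k_m.gguf",  # hypothetical local file
              n_gpu_layers=20,   # keep some layers on the GPU; the rest stay in system RAM
              n_ctx=32768,       # context window, sized to available memory
          )

          out = llm.create_chat_completion(
              messages=[{"role": "user", "content": "Write a function that reverses a string."}],
              max_tokens=256,
          )
          print(out["choices"][0]["message"]["content"])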

      • mitjam3 months ago
        Hope you'll share your story if you start. Love your book on LangChain from, IIRC, 2 years ago; it got me going.
      • mistrial93 months ago
        > There is a recent Stanford study showing most US startups are using less expensive Chinese models

        Link?

        • mitjam3 months ago
          Idk if this is the reference but it’s in the same direction:

          "These days, when entrepreneurs pitch at Andreessen Horowitz (a16z), a major Silicon Valley venture-capital firm, there's a high chance their startups are running on Chinese models. 'I'd say there's an 80% chance they're using a Chinese open-source model,' notes Martin Casado, a partner at a16z." (https://ixbroker.com/blog/china-is-quietly-overtaking-americ...)

  • jarym3 months ago
    Wonder how long before they'll have to start reporting 'suspicious activity' to the government same as financial institutions have to do for money transfers.
    • A4ET8a8uTh0_v23 months ago
      You can reasonably assume it is already happening. The only differences are that for FIs it is required by law and the implementation is relatively similar across the board, whereas OpenAI is one giant source of info you wouldn't get anywhere else.

      It fairly accurately inferred my age, location, place of birth, and political inclinations from our conversations alone. I am certain it can infer a lot more.

      • egorfine3 months ago
        This.

        There is no other reason to require KYC for a server-side text transformation tool, no matter how impressive it is.

        • Mars0083 months ago
          The other reason could be the copyright cases they are fighting in court. OAI was ordered to preserve all records, including private ones. Not sure if that order has been lifted yet.

          And another could be EU requirements for age verification. AI can produce adult content.

          There may be other reasons too, like preventing OAI models' output from being used to train competing models.

          • egorfine3 months ago
            > AI can produce adult content.

            They should realize that anything can produce adult content. Anything.

        • weird-eye-issue3 months ago
          No other reason? What about simple fraud protection? It's the same reason they switched new accounts to prepaid credits instead of billing at the end of the month. There is a ton of fraud in this industry.
          • egorfine3 months ago
            No worries. Their competitors do not require KYC.
            • weird-eye-issue3 months ago
              They all require paying for credits up front, which is also an anti-fraud measure, which was my entire point ;)
              • egorfine3 months ago
                Credit upfront as antifraud: perfectly fine. KYC: absolutely not.
                • weird-eye-issue3 months ago
                  And if you ran a company at OpenAI's scale, then you could make that decision, but you don't.
                  • egorfine3 months ago
                    How much do you know about me?

                    Anyways, competitors do not require KYC for text transformation services, and that's how it should be.

                    • weird-eye-issue3 months ago
                      You call OpenAI a "text transformation service" so clearly you are incompetent and your website backs that up
                      • egorfine3 months ago
                        Thank you for your valuable feedback!
      • weird-eye-issue3 months ago
        Absolutely not. It would require product, engineering, admin, etc. effort to do that, and unless it is required by law, why would they waste the time when they have a lot else to do?
        • bgwalter3 months ago
          They have an ex-NSA chief on the board, and doing surveillance voluntarily may result in government help, like getting contracts in South Korea and Argentina, that may bring in far more money than the implementation costs. Perhaps they outsource the implementation to Palantir or the NSA. It is basically a simple middleware that is inserted somewhere once the traffic is decrypted.

          So I don't think implementation costs are an obstacle.

        • orthecreedence3 months ago
          > why would they waste the time

          Because then the NSA shows up with an NSL, you integrate with the fascist surveillance state or you lose your business. How have people forgotten this so fucking quickly?

          • A4ET8a8uTh0_v23 months ago
            To be fair, I am interested in the subject and I don't even remember the name of the telecom that tried to resist the pressure and went out of business not long after. It has been that long. Forgetting is possible, so I give people some grace.
          • weird-eye-issue3 months ago
            Did you miss where I said "unless it is required by law"
            • orthecreedence3 months ago
              That's the point: it is always required by law. There is no case where it is not required by law.
    • queenkjuul3 months ago
      I'd have sworn they've already admitted to this
  • binarymax3 months ago
    GPT-5 works, just not GPT-5 streaming. I posted about this a little while ago with more details: https://news.ycombinator.com/item?id=44837367
    • thr0w3 months ago
      What is it about streaming specifically that necessitates this? Am I missing something obvious?
      • BoorishBears3 months ago
        The excuse is probably that classifiers for streaming responses are less robust.

        It's easier to get a partial response out for something like a CBRN topic.

    • deaux3 months ago
      I thought this was indeed the case at release, but then they changed it to also apply to non-streaming (completions). So either they reverted it or it was a temporary bug during the early days of the model.

      When did you last check?

      • binarymax3 months ago
        I ran a test just now, and gpt-5 without streaming works without the biometric check.
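
        For reference, a minimal sketch of that kind of non-streaming call, assuming the official openai Python SDK and an OPENAI_API_KEY in the environment (model name as discussed in the thread):

          # Minimal sketch: non-streaming chat completion with the openai SDK.
          # Per the thread, the same request with stream=True is what reportedly
          # triggers the ID-verification requirement.
          from openai import OpenAI

          client = OpenAI()  # reads OPENAI_API_KEY from the environment

          resp = client.chat.completions.create(
              model="gpt-5",
              messages=[{"role": "user", "content": "Say hello in one sentence."}],
              # stream=True  # enabling this is what reportedly gates the request
          )
          print(resp.choices[0].message.content)
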
        • deaux3 months ago
          Interesting! Makes one wonder why they gate only streaming. I guess to induce just enough friction, then.
  • egorfine3 months ago
    Indeed. I have opened the playground and it doesn't let me choose GPT-5. Obviously I will not be KYCing myself.

    But that's okay. There are plenty of other models. Perhaps not bleeding edge great, but great nevertheless.

    • tensility3 months ago
      As far as I understand, many users are better off with GPT-4o anyway. It's amusing to charge a premium for an objectively bad upgrade, but I guess that's the kind of bullshit economics that hype cycles create.
      • Sabinus3 months ago
        How are users better off with 4o? I thought the point of 5 was that it delivered better results for cheaper, in fewer tokens.
      • egorfine3 months ago
        I meant competitors
    • SarahPeter3 months ago
      [dead]
      • egorfine3 months ago
        I meant competitors.
  • puppycodes3 months ago
    I wouldn't tie my email to a chatbot, let alone my literal government ID.
  • Xorakios3 months ago
    > A chargeback can be started in minutes.

    Alas, on my Social Security-mandated USDirectExpress card it takes hours to start the process, working through 3 levels by phone, then documentation that the vendor refused to process a reimbursement, then a physical form received and returned via the US Postal Service within 10 calendar days. Everything changed last year when the outgoing administration changed the rules and chose a new bank as the provider for Social Security payments.

  • throway123453 months ago
    Is it by any chance because your POST is requesting a summary of the reasoning, e.g. setting {summary: "auto"} or somesuch? I know that requires verification.
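
    A minimal sketch of what that looks like with the Responses API, assuming the openai Python SDK; dropping the reasoning-summary request is the thing to try if verification is the blocker:

      # Minimal sketch, assuming the openai Python SDK and the Responses API.
      # Requesting a reasoning summary is the part that reportedly requires a
      # verified organization; remove it to test without.
      from openai import OpenAI

      client = OpenAI()

      resp = client.responses.create(
          model="gpt-5",
          input="Summarize the plot of Hamlet in two sentences.",
          reasoning={"summary": "auto"},  # reportedly the verification-gated feature
      )
      print(resp.output_text)
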
  • mkbkn3 months ago
    Raise a chargeback
    • rhetocj233 months ago
      This, and in the future ensure any purchases are made on a credit card, not a debit card.
      • tobwen3 months ago
        In Europe, SEPA direct debits can also be reversed. But you can expect to receive a payment reminder threatening legal action within a few days.
        • leobg3 months ago
          If they broke the contract? Let them come.
  • ax0ar3 months ago
    OpenAI is literally trying to play the role of a state. Why would I involve a private company in my national ID paperwork? That's none of their business. And that should be literally every sane person's stance. Their model security is not my problem.
  • _jsmh3 months ago
    ID verification is often used to increase the cost of abusing AI by putting one's reputation at stake. But there's another way: increase the cost while the person stays anonymous, with no reputation attached.
  • johnnyApplePRNG3 months ago
    I would advise against DeepSeek.

    DeepSeek is nowhere close to OpenAI in terms of coding ability.

    And the fact that it will just cut off the API if you ask anything that might be considered taboo in China... I just don't see the draw.

    Cheap, sure! It's definitely that!

  • yalogin3 months ago
    If DeepSeek and Qwen are capable, why does one need to use OpenAI models? Are their models really that much better? If not, the only thing they bring is the hosting service. In that scenario, how long do they have this advantage before AWS or Microsoft takes over?
    • Mars0083 months ago
      > If DeepSeek and Qwen are capable, why ...

      Of course, you can go further and run Qwen locally. Or even train your own nanoGPT. Why not, if it's capable, right? And that 'if' is a big question.

      • yalogin3 months ago
        I was not trying to be snarky; it was meant as a technical question.
        • Mars0083 months ago
          Technically it's cheaper, but it's not an equivalent replacement.
  • comrade12343 months ago
    I put $2 on my deepseek account and have barely used it, it's so cheap.
  • sxndmxn3 months ago
    Deepseek will route all of your traffic through Hong Kong. If you're really worried about privacy that is NOT the way to go.
  • bilsbie3 months ago
    We need local AI ASAP. That's really the bottom line.
    • kagerou743 months ago
      Absolutely. The sooner, the better.
    • lcnPylGDnU4H9OF3 months ago
      • marak8303 months ago
        Ollama is a good one, LM Studio is great for those who are unsure what to do (will help you get a model that fits into your system specs).

        If you use Open WebUI (I recommend running it via Docker) you can access your Ollama-hosted model via the browser on any device on your network. Tailscale will help make that accessible remotely.
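
        A minimal sketch of hitting a locally hosted model over Ollama's default HTTP API, assuming Ollama is running on localhost:11434 and the model named here has already been pulled (the model name is illustrative):

          # Minimal sketch: query a local Ollama model over its HTTP API.
          # Assumes Ollama is listening on its default port and the model is pulled.
          import requests

          resp = requests.post(
              "http://localhost:11434/api/generate",
              json={
                  "model": "llama3",  # illustrative; use whatever model you've pulled
                  "prompt": "Explain what a mixture-of-experts model is in one paragraph.",
                  "stream": False,    # return a single JSON object instead of a stream
              },
              timeout=300,
          )
          print(resp.json()["response"])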

        I'm currently working on an open source long term memory system designed to work with ollama to help local models be more competitive with the big players, so we are not so beholden to these big companies.

        • kagerou743 months ago
          That sounds great — thank you for working on this. I’m not a developer, just curious about AI in general. Local AI feels like the right direction if we want to save energy and water, too. Is your memory system open source?
          • marak8303 months ago
            It will be. I'm applying for an NLnet grant, and open-sourcing it to non-corporations is one of the requirements. (I need more hardware to develop; already fried one SSD, haha.)
  • gidellav3 months ago
    Just use OpenRouter; it lets you connect to all the models.
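
    A rough sketch of that, assuming the openai Python SDK pointed at OpenRouter's OpenAI-compatible endpoint (the API key variable and model slug are illustrative):

      # Minimal sketch: call OpenRouter through its OpenAI-compatible endpoint.
      # Assumes an OPENROUTER_API_KEY in the environment; the model slug is illustrative.
      import os
      from openai import OpenAI

      client = OpenAI(
          base_url="https://openrouter.ai/api/v1",
          api_key=os.environ["OPENROUTER_API_KEY"],
      )

      resp = client.chat.completions.create(
          model="openai/gpt-4o",  # OpenRouter-style provider/model slug
          messages=[{"role": "user", "content": "One sentence on what OpenRouter does."}],
      )
      print(resp.choices[0].message.content)
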
    • Scene_Cast23 months ago
      OpenRouter has some gotchas with OpenAI models. In some cases it requires an OpenAI key.
      • Deathmax3 months ago
        Not anymore, especially after other routers like Vercel's AI Gateway, and proxies from LLM providers like Fal, DeepInfra, and AtlasCloud, didn't get the memo about enforcing BYOK for ID-verification-required models after GPT-5's release.
  • replwoacause3 months ago
    What’s so special about GPT-5 streaming that it requires government ID to use it?
    • ax0ar3 months ago
      Nothing. They just want more control, step by step. Imagine if the models were really that impressive and could do everything; they would literally act like a sovereign state.
  • journal3 months ago
    Admit it: anyone who writes code is completely addicted to ghost-text completions. You wouldn't have it any other way. They can do ANYTHING.
  • pixel_popping3 months ago
    This is honestly a huge shame, as they aren't legally required to do so; this is PURELY for mass data collection and correlation.
  • reustle3 months ago
    Also terrible that purchased credits expire after 1 year. Not sure how that is legal.

    https://community.openai.com/t/api-credits-amount-get-expire...

  • Halian3 months ago
    Thus be it for slopherds.
  • I_am_tiberius3 months ago
    Don't trust SAMA
  • alganet3 months ago
    That's probably a good thing.
  • bn-l3 months ago
    Was this for all models?
    • retube3 months ago
      Good question. I was trying with gpt-5, but it seems gpt-4o works without verification. However, 4o is, I guess, not as good, plus it seems to be twice as expensive as 5.
      • bn-l3 months ago
        GPT-5 is the money GPT. I don't trust the benchmarks, and Artificial Analysis's benchmarks are bunk.
    • andai3 months ago
      I believe it started with o3 Pro, back in the day.
  • Lapra3 months ago
    They are a porn company now, after all.
    • goshx3 months ago
      Perhaps this was the reason behind the move.
    • Palmik3 months ago
      In the same sense in which Google is a porn aggregator company because it will return porn results when you ask for them?
      • queenkjuul3 months ago
        OpenAI will now generate porn for you, Google doesn't
        • Palmik3 months ago
          If you read what I wrote carefully, you'll note that I used "porn aggregator", not just "porn site", and not even "porn generator".
          • queenkjuul3 months ago
            Well, OpenAI will not be a porn aggregator; they'll be a porn producer, so I'm not sure what your point is.
    • nopurpose3 months ago
      huh? What did I miss?
  • tensility3 months ago
    [flagged]
    • dang3 months ago
      Ok, but can you please not fulminate on HN? It's not what this site is for, and destroys what it is for.

      This is in the site guidelines: https://news.ycombinator.com/newsguidelines.html.

    • bloppe3 months ago
      Yikes
    • senordevnyc3 months ago
      I understand the anger, but do you really want to live in the world of anarchy that would be required for these people to starve? Because if the billionaires are starving, the rest of us are long gone at that point.
      • add-sub-mul-div3 months ago
        "Letting them starve" is clearly rhetorical shorthand for not giving their businesses money. It's based on the irony of the power imbalance, none of these people will ever starve if their businesses fail. Nobody thinks they're going to starve, nobody was intended to take away that literal interpretation, how could you possibly think this interpretation was intended or is worth discussing.
        • senordevnyc3 months ago
          I guess I was thrown off by “let them starve, the way they want us to”. Doesn’t make much sense if you’re using “starve” to mean totally different things.

          Even then, it’s nonsensical to think that you’re going to “starve” these companies of revenue, companies that are growing faster than any in history, bringing in trillions in revenue, and have appreciable fractions of our entire species using them daily.

      • exe343 months ago
        To be fair, to them, starving is "other people aren't spending their money on me". Remember Emlo sued people who stopped advertising on his personal blog when he let the Nazis back in.
      • helicone3 months ago
        Anarchy isn't required for them to starve. These people could be jailed and their assets frozen, for example, and their jail food would then be stolen by the more physically intimidating inmates. Regardless of your political opinions on the subject, this is a perfectly cromulent scenario that includes them starving without there being anarchy.
  • Kiboneu3 months ago
    One more step towards Worldcoin...
  • SarahPeter3 months ago
    [dead]
  • s53003 months ago
    [dead]
  • SarahPeter3 months ago
    [flagged]
    • supriyo-biswas3 months ago
      It appears that you have quite a few LLM comments going on here. While your customers (as you mention in your profile) may appreciate it, it is typically not looked upon well here. Thank you.
    • fouc3 months ago
      > ID verification is becoming common across AI platforms

      Yikes if true. I wonder why?

      • deaux3 months ago
        Ironically, it's a hallucination.
  • chistev3 months ago
    Interesting