53 points by trickster_ 5 hours ago | 5 comments
  • pseudony 4 hours ago
    Relevant. I would definitely be sleeping uneasy if I were at “Open”AI.

    Some insist that Chinese models are a few generations behind; how many generations probably depends more on patriotism than on fact.

    Those people typically also insist that Chinese models are just distillations, and often overlook how many of these companies contribute to the theory of designing efficient and capable models. It is somehow assumed that they will always trail US models.

    Well, I would say look at recent history. China worked up the manufacturing ladder from simple, bad stuff to highly complex things - exactly what Westerners claimed they would never be able to do. Once that was conquered, Westerners comforted themselves by insisting that China could copy, but that trail-blazing would always remain our thing. Yet Baidu and Alibaba face scaling issues few Western companies do, and BYD seems to match Tesla or VW just fine.

    I am unsure why anyone would think US models are destined to remain in the lead forever.

    At “best”, I see a fragmented world where each major region (yes, also Europe) will eventually have its own models - exactly because no one wants to give a competing power a chokehold over their society. But beyond that, models will largely become so good that this “generations behind”/universal-superiority idea becomes completely obsolete.

    • glimshe 3 hours ago
      A few months ago we were hearing that it was game over because of Deepseek. Today it has close to zero mindshare in the developed world. Being 90% as good (which Deepseek isn't) doesn't cut it...

      US models might not be "destined" to stay in the lead, but at the moment I see no reason to believe they won't.

    • Yizahi 3 hours ago
      The thing is, China has the same problems as OAI. Just look at these two startups: they are among the first LLM corpos for which we have some actual accounting numbers rather than BS from the marketing department or Sam's xitter. The situation looks dire.

      https://imgshare.cc/wzw6jzm5

      • maxglute 3 hours ago
        > China has the same problems as OAI

        PRC pure-play, AI-only companies have the same problems as OpenAI, but that's not the same as huge tech companies like Baidu, Alibaba, or Tencent (i.e. Google/Microsoft tier) that can afford to lose money on AI. And ultimately they are also not sinking hundreds of billions into capex - they couldn't even if they tried, due to sanctions. Their financial exposure is an order of magnitude less; it matters whether you're losing 500m a year or 5 billion a year, especially as a systemic economic contagion risk - the PRC and US bubbles are not the same size as a % of their economies.

  • c-fe 4 hours ago
    As a retail investor mostly invested in broad ETFs (All World), is there any way I can get short exposure to OpenAI? Being short Oracle/Nvidia/Microsoft?
    • Yizahi 3 hours ago
      Shorting OAI, or really any big company, is like trying to stop a train that is on fire by standing in front of it. Yes, it is on fire and won't last long, but it will still crush any small player trying to overpower the whole corrupt system.
      • fauigerzigerk 3 hours ago
        You don't need to stand in front of a train to bet on a trainwreck.
        • piva00 an hour ago
          But that isn't the analogy, is it?

          Betting on the trainwreck is quite easy - you have nothing to lose in the analogy - while shorting companies will cost you something, often a lot if your timing is wrong.

          • fauigerzigerk 34 minutes ago
            Betting usually has a cost.
            • piva00 11 minutes ago
              A fixed cost, not one that can snowball if your bet goes the wrong way.
    • trickster_ 4 hours ago
      That's an excellent question. My fear is that it's going to be a little bit like putting a towel on a pool-bed on the Titanic...
      • c-fe 4 hours ago
        Exactly. I would prefer to remain invested, as I don't want to time the market, but I would like to meaningfully reduce my exposure to OpenAI and the consequences of its possible downfall.
    • helsinkiandrew 4 hours ago
      Not really anything that gives you much exposure:

      If OpenAI is worth $500B, roughly 4% of MSFT's market cap is OpenAI.

      The ARK Venture Fund (ARKVX) has OpenAI at 7.2% of its total holdings, but it also holds xAI, Anthropic, and lots of other AI companies:

      https://www.ark-funds.com/funds/arkvx#hold

      OpenAI going bust might be a shock to the share prices of publicly traded companies like Oracle, CoreWeave, SoftBank, and the like.
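      A rough back-of-envelope sketch of how small that indirect exposure is for an All-World ETF holder; the MSFT stake, market cap, and ETF weight below are illustrative assumptions, not figures from this thread:

        # Back-of-envelope: indirect OpenAI exposure via MSFT for an index holder.
        # All inputs except the $500B valuation are illustrative assumptions.
        openai_valuation = 500e9     # OpenAI valuation used above
        msft_stake = 0.27            # assumed MSFT stake in OpenAI
        msft_market_cap = 3.4e12     # assumed MSFT market cap
        msft_etf_weight = 0.04       # assumed MSFT weight in an All-World ETF

        stake_value = openai_valuation * msft_stake             # ~$135B
        share_of_msft = stake_value / msft_market_cap           # ~4%, as above
        exposure_per_dollar = share_of_msft * msft_etf_weight   # ~$0.0016 per $1

        print(f"OpenAI stake as share of MSFT: {share_of_msft:.1%}")
        print(f"Indirect OpenAI exposure per $1 in the ETF: ${exposure_per_dollar:.4f}")

      Which also shows why shorting MSFT (or Oracle/Nvidia) is mostly a bet against those companies' core businesses rather than against OpenAI.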

  • 9cb14c1ec0 4 hours ago
    No, they are not dead. However, they face incredible competition in a brutally commoditized product space.
    • keyle 4 hours ago
      AFAIK in some spaces they still have the best models on offer.
      • A_D_E_P_T 4 hours ago
        The way I see it, that was the case until a few months ago. Today, Opus 4.5 is as good as or better than 5.2 Pro at tackling hard questions and at coding, Gemini beats the free models, and Kimi K2/K2.5 is the better writer/editor.
      • cromka 4 hours ago
        Not in my experience, Gemini proves much better for me now.
        • embedding-shape 4 hours ago
          Can you get Gemini to stop outputting code comments yet? Every single time I've tried it, I've been unable to get it to stop adding comments everywhere, even when explicitly prompting against it. It seems almost hardcoded into the model that comments have to accompany any code it writes.
  • maxglute 4 hours ago
    Is OpenAI profitable yet?

    Will it be in time to recoup the capex?

  • trickster_ 5 hours ago
    Tracking the demise of OpenAI through the news cycle
    • NitpickLawyer 4 hours ago
      Keep in mind that the "news cycle" isn't of much use in this field. Through 2025, almost all "mainstream" media was dead wrong in its takes. Remember the Deepseek r1 craze in Feb '25? Where NVDA was dead, OAI was dead, and so on? Yeah... that went well. Remember all the "no more data" craze, despite no actual researcher worth their salt saying it or even hinting at it? Remember the "hitting walls" rhetoric?

      The media has been "social media'd", with everything being driven by algorithms, everything being about capturing attention at the cost of everything else. Negativity sells. FUD sells.

      • viraptor 4 hours ago
        Some of those weren't really wrong.

        > Remember all the "no more data" craze? Despite no actual researcher worth their salt saying it or even hinting at it?

        We ran out of fresh, interesting data. A large chunk of training now has to generate its own data. Synthetic data training became a huge thing over the last year.

        > Remember the "hitting walls" rhetoric?

        Since then, basic pre-training has slowed down a lot, and the improvements are more in agentic and thinking/reasoning approaches, with a lot more reinforcement learning than in the past.

        The fact that we worked around those problems doesn't mean they weren't real. It's like people saying Y2K wasn't a problem... ignoring all the work that went into preventing issues.

        • NitpickLawyer 4 hours ago
          > We ran out of fresh interesting data.

          No, we didn't. Hassabis has been saying this for a while now, and Gemini 3 is proof of that. The data is there; there are still plenty of untapped resources.

          > Synthetic data training became a huge thing over the last year.

          No, people "heard" about it over the last year. Synthetic data has been a thing in model training for ~2 years already. Llama 3 was post-trained on synthetic-only data, and it was released in Apr '24. Research-only work came even earlier, with the Phi family of models. Again, if you're only reading the mainstream media you won't get the accurate picture of these things that you'd get from actually working in this field, or even from following good sources, reading the key papers, and so on.

          > The fact we worked around those problems doesn't mean they weren't real.

          The way the media (and some influencers in this space) have framed it over the last year is not accurate. I get that people don't trust CEOs (and for good reasons), but even Amodei was saying in early-2025 interviews that there is no data problem.