205 points by pixelesque 5 hours ago | 24 comments
  • amluto21 minutes ago
    It's probably not really related, but this bug and the saga of OpenAI trying and failing to fix it for two weeks are not indicative of a functional company:

    https://github.com/openai/codex/issues/9253

    OTOH, if Anthropic did that to Claude Code, and there wasn't a moderately straightforward workaround, and Anthropic didn't revert it quickly, it might actually be a risk-the-whole-business issue. Nothing makes people jump ship quite like the ship refusing to go anywhere for weeks while the skipper fumbles around and keeps claiming to have fixed the engines.

    Also, the fact that it's not major news that most business users cannot log in to the agent CLI for two weeks running suggests that OpenAI has rather less developer traction than they would like. (Personal users are fine. Users who are running locally on an X11-compatible distro and thus have DISPLAY set are okay because the new behavior doesn't trigger. It kind of seems like everyone else gets nonsense errors out of the login flow, with precise failures that change every couple of days while OpenAI fixes yet another bug.)

    • leptons15 minutes ago
      Funny that they can't just get the "AI" to fix it.
  • jjcm4 hours ago
    Not only has OpenAI's market share gone down significantly in the last 6mo, but Nvidia has also been using its newfound liquid funds to train its own family of models[1]. An alliance with OpenAI just makes less sense today than it did 6mo ago.

    [1] https://blogs.nvidia.com/blog/open-models-data-tools-acceler...

    • sailingparrot3 hours ago
      > Nvidia has been using its newfound liquid funds to train its own family of models

      Nvidia has always had its own family of models; it's nothing new and not something you should read too much into IMHO. They use those as templates other people can leverage, and they are of course optimized for Nvidia hardware.

      Nvidia has been training models in the Megatron family, as well as many others, since at least 2019, and Megatron was used as a blueprint by many players. [1]

      [1] https://arxiv.org/abs/1909.08053

      • breput3 hours ago
        Nemotron-3-Nano-30B-A3B[0][1] is a very impressive local model. It is good with tool calling and works great with llama.cpp/Visual Studio Code/Roo Code for local development.

        It doesn't get a ton of attention on /r/LocalLLaMA but it is worth trying out, even if you have a relatively modest machine.

        [0] https://huggingface.co/nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B...

        [1] https://huggingface.co/unsloth/Nemotron-3-Nano-30B-A3B-GGUF
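
        As a rough illustration of what this kind of local setup looks like, here is a minimal sketch using the llama-cpp-python bindings (not the llama.cpp/Roo Code stack the parent describes). The GGUF filename, context size, and generation parameters are assumptions for illustration, not values taken from the linked repos; point model_path at whichever quant you actually downloaded.

          # Minimal local chat completion against a downloaded GGUF.
          # Assumes: pip install llama-cpp-python, and a Nemotron GGUF on disk.
          from llama_cpp import Llama

          llm = Llama(
              model_path="Nemotron-3-Nano-30B-A3B-Q4_K_M.gguf",  # hypothetical quant filename
              n_ctx=8192,        # context window; raise it if you have the RAM
              n_gpu_layers=-1,   # offload as many layers as fit onto the GPU
          )

          out = llm.create_chat_completion(
              messages=[
                  {"role": "system", "content": "You are a concise coding assistant."},
                  {"role": "user", "content": "Write a Python function that reverses a string."},
              ],
              max_tokens=256,
          )
          print(out["choices"][0]["message"]["content"])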

        • bhadass2 hours ago
          Some of NVIDIA's models also tend to have interesting architectures, for example the use of the Mamba architecture instead of purely transformers: https://developer.nvidia.com/blog/inside-nvidia-nemotron-3-t...
          • nextos an hour ago
            Deep SSMs, including the entire S4 to Mamba saga, are a very interesting alternative to transformers. In some of my genomics use cases, Mamba has been easier to train and scale over large context windows, compared to transformers.
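
            For anyone curious what a deep SSM layer actually computes, here is a toy sketch of the linear state-space recurrence that S4/Mamba-style blocks build on. It is a heavy simplification (real Mamba uses input-dependent, discretized parameters and a parallel scan), with made-up shapes and random weights, purely to show why the per-token state stays a fixed size regardless of context length.

              # Toy state-space recurrence: h_t = A h_{t-1} + B x_t, y_t = C h_t.
              import numpy as np

              def ssm_scan(x, A, B, C):
                  # The hidden state h has a fixed size, so per-token compute and
                  # memory stay constant, unlike attention's growing key/value cache.
                  h = np.zeros(A.shape[0])
                  ys = []
                  for x_t in x:            # x: (seq_len, d_in)
                      h = A @ h + B @ x_t  # update the recurrent state
                      ys.append(C @ h)     # read out an output for this step
                  return np.stack(ys)      # (seq_len, d_out)

              # Tiny example with arbitrary shapes, just to show it runs.
              rng = np.random.default_rng(0)
              seq_len, d_in, d_state, d_out = 16, 4, 8, 4
              x = rng.normal(size=(seq_len, d_in))
              A = 0.9 * np.eye(d_state)                  # stable toy dynamics
              B = 0.1 * rng.normal(size=(d_state, d_in))
              C = 0.1 * rng.normal(size=(d_out, d_state))
              print(ssm_scan(x, A, B, C).shape)          # (16, 4)
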
        • jychang2 hours ago
          It was good for like, one month. Qwen3 30b dominated for half a year before that, and GLM-4.7 Flash 30b took over the crown soon after Nemotron 3 Nano came out. There was basically no time period for it to shine.
          • breput2 hours ago
            It is still good, even if not the new hotness. But I understand your point.

            It isn't as though GLM-4.7 Flash is significantly better, and honestly, I have had poor experiences with it (and yes, always the latest llama.cpp and the updated GGUFs).

          • ThrowawayTestr2 hours ago
            Genuinely exciting to be around for this. Reminds me of the time when computers were said to be obsolete by the time you drove them home.
          • binary132 an hour ago
            I recently tried GLM-4.7 Flash 30b and didn’t have a good experience with it at all.
        • binary132 an hour ago
          I find the Q8 runs a bit more than twice as fast as gpt-120b, since I don't have to offload as many MoE layers, but it is just about as capable, if not better.
    • ryanSrich3 hours ago
      I think there are two things that happened

      1. OpenAI bet largely on consumer. Consumers have mostly rejected AI, and in a lot of cases they even hate it (you can't go on TikTok or Reddit without people calling something slop or hating on AI-generated content). Anthropic, on the other hand, went all in on B2B and coding. That seems to be the much better market to be in.

      2. Sam Altman is profoundly unlikable.

      • cschep3 hours ago
        #2 cannot be overstated
        • notyourwork2 hours ago
          Cringey to watch their interviews.
        • edoceo3 hours ago
          He was the golden boy for a while; what shifted? I don't even remember what he did "first" to get that status. Is it maybe just a case of familiarity breeding contempt?
          • icepush2 hours ago
            It is starting to become clear to more and more people that Sam is a dyed in the wool True Believer in AGI. While it's obvious in hindsight that OpenAI would never have gotten anywhere if he wasn't, seeing it so starkly is really rubbing a lot of people the wrong way.
            • steveBK123 2 hours ago
              Advertising Generated Income?
              • Bayko33 minutes ago
                Damn, this is smart. I like it.
                • steveBK123 29 minutes ago
                  Someone else said it first here
          • pinnochio2 hours ago
            All the manipulation and lying that got him fired.
            • chihuahua an hour ago
              He is a pretty interesting case. According to the book "Empire of AI" about OpenAI, he lies constantly, even about things that are too trivial to matter. So it may be part of some compulsive behavior.

              And when two people want different things from him, he "resolves" the conflict by agreeing with each of them separately, and then each assumes they got what they wanted, until they talk to the other person and find out that nothing was resolved.

              Really not a person who is qualified to run a company, except the constant lying is good for fundraising and PR.

              • kreelman43 minutes ago
                He was once a big figure in Y Combinator (I think he kind of ran it?)... Paul Graham thought he was great for YC.

                Interesting that he's gotten as far as he has with this issue. I don't think you can run a company effectively if you don't deal in truth.

                Some of his videos have seemed quite bizarre as well, quite sarcastic about concerns people have about AI in general.

        • 3kkdd2 hours ago
          Indeed. Sama seems to be incredibly delusional. OAI going bust is going to really damage his well-being, irrespective of his financial wealth. Brother really thought he was going to take over the world at one point.
          • ambicapter an hour ago
            The scariest part is that it probably won't, and he'll be back in five years with something else.
      • raw_anon_1111 22 minutes ago
        Instead of anecdotes about "what you saw on TikTok and Reddit", it's really not that hard to look up how many paid users ChatGPT has.

        Besides, OpenAI was never going to recoup the billions of dollars based on advertising or $20/month subscriptions.

      • okhobb19 minutes ago
        Is CEO likeability a reliable predictor?
      • jackblemming3 hours ago
        You have to give Sam credit: he's charismatic enough, to the right people, to climb man-made corporate structures. He was also smart enough to be at the right place at the right time (Silicon Valley) to enrich himself. He seems to be pretty good at cutting deals. Unfortunately, all of the above seems to be at odds with having any sort of moral core.
        • 3kkdd2 hours ago
          Ermmm what?

          He and his personality caused people like Ilya to leave. At that point the failure risk of OAI jumped tremendously. The reality he will have to face is that he has caused OAI's demise.

          Perhaps he's OK with that as long as OAI goes down with him. I would expect nothing less from him.

          • CamperBob2 2 hours ago
            Ilya took a swing at the king and missed. It would have been awkward to hang around after that debacle.
          • moomoo11 2 hours ago
            And what has Ilya done since? Genuinely curious.

            All these people are replaceable lol, they're employee tier. If they're not CEO then they're not that important. You might disagree, but that's why there's one guy at the helm (being reductive here; use your brain and stop overthinking, but the board chose him or whatever) and everyone else follows him. If someone leaves, you get another one.

      • moomoo11 2 hours ago
        I actually think Sam is “better” than say Elon or Dario because he seems like a typical SF/SV tech bro. You probably know the type (not talking about some 600k TC fang worker, I mean entrepreneurs).

        He says a lot of fluff, doesn’t try to be very extreme, and focuses on selling. I don’t know him personally but he comes across like an average person if that makes sense (in this environment that is).

        I think I personally prefer that over Elon’s self induced mental illnesses and Dario being a doomer promoting the “end” of (insert a profession here) in 12 months every 6 months. It’s hard for me to trust a megalomaniac or a total nerd. So Sam is kinda in the middle there.

        I hope OpenAI continues to dominate even if the margins of winning tighten.

        • ryanSrich2 hours ago
          Elon is one of the most unlikable people on the planet, so I wouldn't consider him much of a bar.
          • jacquesm2 hours ago
            Hah, you beat me to it, serves me right for writing longer comments. Have an upvote ;)
          • moomoo11 2 hours ago
            It’s kind of sad. I can’t believe I used to like him back in the iron man days. Back then I thought he was cool for the various ideas and projects he was working on. I still think many of those are great but he as a person let me down.

            Now I have him muted on X.

            • jordanb an hour ago
              Back then he had a PR firm working for him, getting him cameos and good press. But in 2020 he fired them, deciding that his own "radically awesome" personality didn't need any filtering.

              Personally I don't think Elon is the worst billionaire, he's just the one dumb enough to not have any PR (since 2020). They're all pretty reprehensible creatures.

              • majormajor10 minutes ago
                Any number of past mega-rich were probably equally nuts and out of touch and reprehensible but they just didn't let people find out. Then Twitter enabled an unfiltered mass-media broadcast of anyone's personal insanity, and certain public figures got addicted and exposed.

                There will always be enough people willing to suck up to money that they'll have all the yes-men they need to rationalize it as "it's EVERYONE ELSE who's wrong!"

              • leptons7 minutes ago
                Yeah, Putin is probably the worst billionaire. Elon might be a close second though, or maybe it's a US politician if they actually are a billionaire.
        • krupan2 hours ago
          Not extreme? Have you seen his interviews? I guess his wording and delivery are not extreme, but if you really listen to what he's saying, it's kinda nuts.
          • pinnochio2 hours ago
            That Dyson sphere interview should've been a wake up call for the OpenAI faithful.
          • sebmellen2 hours ago
            I understand what GP is saying in the sense that, yes, on an objective scale, what Sam is saying is absolutely and completely nuts... but on a relative scale he's just hyping his startup. Relative to the scale he's at, it’s no worse than the average support tool startup founder claiming they will defeat Salesforce, for example.
        • windexh8er an hour ago
          He's definitely not. If Altman is a "typical" SF/SV tech bro, then that's an indication the valley has turned full d-bag. Altman's past is gross. So, if he's the norm, then I will vehemently avoid any dollars of mine going to OAI. I paid for an account for a while, but just like with Musk, I lose nothing by actively avoiding his Ponzi scheme of a company.
        • jacquesm2 hours ago
          > I actually think Sam is “better” than say Elon or even Dario because he seems like a typical SF/SV tech bro.

          If you nail the bar to the floor, then sure, you can pass over it.

          > He says a lot of fluff, doesn’t try to be very extreme, and focuses on selling.

          I don't know what your definition of extreme is, but by mine he's pretty extreme.

          > I think I personally prefer that over Elon’s self induced mental illnesses and Dario being a doomer promoting the “end” of (insert a profession here) in 12 months every 6 months.

          All of them suffer from thinking their money makes them somehow better.

          > I hope OpenAI continues to dominate even if the margins of winning tighten.

          I couldn't care less. On the whole I'm impressed with AI, less than happy about all of the slop and the societal problems it brings, and I wish it had been brought into a more robust world, because I'm not convinced the current one needed another issue of that magnitude to deal with.

          • moomoo11 2 hours ago
            That’s ok, but AI is useful in particular use cases for many people. I use it a lot and I prefer the Codex 5.2 extra high reasoning model. The AI slop and dumb shit on IG/YT is like the LCD of humans though. They’ve always been there and always will be there to be annoying af. Before AI slop we had brain rot made by humans.

            I think over time it (LLM based) will become like an augmenter, not something like what they’re selling as some doomsday thing. It can help people be more efficient at their jobs by quickly learning something new or helping do some tasks.

            I find it makes me a lot more productive because I can have it follow my architecture and other docs to pump out changes across 10 files that I can then review. In the old way, it would have taken me quite a while longer to just draft those 10 files (I work on a fairly complex system), and I had some crazy code gen scripts and shit I’d built over the years. So I’d say it gives me about 50% more efficiency which I think is good.

            Of course, everyone’s mileage may vary. Kinda reminds me of when everyone was shitting on GUIs, or scripting languages or opinionated frameworks. Except over time those things made productivity increase and led to a lot more solutions. We can nitpick but I think the broader positive implication remains.

            • jacquesm an hour ago
              It's very hard to see downsides in something like GUIs, scripting languages, or opinionated frameworks compared to a broad, easily weaponized tool like generative AI.
            • binary132 an hour ago
              some people are so determined to be positive about AI that at some point it just comes across like they’re getting paid to be
        • pinnochio2 hours ago
          Altman is a consummate liar and manipulator with no moral scruples. I think this LLM business is ethically compromised from the start, but Dario is easily the least worst of the three.
          • techblueberry an hour ago
            Dario unsettles me the most; he kinda reminds me of SBF. I wouldn't be surprised if... well, they're all bad, it's hard to stack rank them.
            • pinnochio an hour ago
              I don't think he's good, but afaik he isn't trying to make everyone psychologically dependent on Claude and releasing sex bots.
            • strange_quark41 minutes ago
              He and SBF are both big into effective altruism, and SBF gave Anthropic their seed funding, so yeah, that checks out.
          • shwaj2 hours ago
            There are four, though; where does Demis fit in the stack rank?
            • pinnochio an hour ago
              TBH, I hadn't heard of him until now. Looks like he's had a crazy legit professional career. I'd put him at the top for his work at Bullfrog alone.
          • falkensmaize an hour ago
            Pfft. Dario has been making nonsense fear-mongering predictions that never come true.
    • TheRoque3 hours ago
      Yeah. Even if OpenAI models were the best, I still wouldn't use them, given how despicable the Sam Altman persona is (constantly hyping, lying, asking for no regulations, then asking for regulations, leaked emails where founders say they just wanna get rich without any consideration of their initial "open" claims...). I know other companies are not better, but at least they have a business model and something to lose.
      • pinnochio3 hours ago
        > leaked emails where founders say they just wanna get rich without any consideration of their initial "open" claims

        Point me to these? Would like to have a look.

        • TheRoque2 hours ago
          Sorry, not leaked emails; it's Greg Brockman's diary and leaked texts.

          I didn't find the original lawsuit documents, but there's a screenshot in this video: https://youtu.be/csybdOY_CQM?si=otx3yn4N26iZoN7L&t=182 (timestamp is 3:02 if you don't see it)

          There are more details about the behind-the-scenes and Greg Brockman's diary leaks in this article: https://www.techbuzz.ai/articles/open-ai-lawsuit-exposed-the... Some documents were made public thanks to the Musk-OpenAI trial.

          I'll let you read a few articles about this lawsuit, but basically they said to Musk (and frankly, to everyone else) that they were committed to the non-profit model, while behind the scenes they were thinking about "making the billion" and turning for-profit.

          • peyton2 minutes ago
            Literally everyone raising money is just searching for the magic combo of stuff to make it happen. Nobody enjoys raising money. Wouldn’t read that much into this.
          • philo_sophia an hour ago
            Hate that bringing fraud to justice means paying out to the wealthiest person on the planet....
          • pinnochio2 hours ago
            Much appreciated!

            Edit: Ah, so the fake investment announcements started from the very beginning. Incredible.

    • funkyfiddler369 4 hours ago
      [flagged]
      • ekianjo4 hours ago
        ChatGPT has nowhere near the lead it used to have. Gemini is excellent, and Google and Anthropic are very serious competitors. And open-weight models are slowly catching up.
      • estearum3 hours ago
        ChatGPT is a goner. OpenAI will probably rule the scam creation, porn bot, and social media slop markets.

        Gemini will own everything normie and professional services, and Anthropic will own engineering (at least software)

        Honestly as of the last few months anyone still hyping ChatGPT is outing themselves.

      • mnky9800n3 hours ago
        [flagged]
      • Onavo4 hours ago
        You mean the DOW right?
  • jt2190 4 hours ago
    Last paragraph is informative:

    > Anthropic relies heavily on a combination of chips designed by Amazon Web Services known as Trainium, as well as Google’s in-house designed TPU processors, to train its AI models. Google largely uses its TPUs to train Gemini. Both chips represent major competitive threats to Nvidia’s best-selling products, known as graphics processing units, or GPUs.

    So which leading AI company is going to build on Nvidia, if not OpenAI?

    • paxys4 hours ago
      "Largely" is doing a lot of heavy lifting here. Yes Google and Amazon are making their own GPU chips, but they are also buying as many Nvidia chips as they can get their hands on. As are Microsoft, Meta, xAI, Tesla, Oracle and everyone else.
      • bredren2 hours ago
        How about Apple? How is Apple training its next foundation models?
        • consumer451 2 hours ago
          To use the parlance of this thread: "next" foundation models is doing a lot of heavy lifting here. Am I doing this right?

          My point is, does Apple have any useful foundation models? Last I checked they made a deal with OpenAI, no wait, now with Google.

          • wmf an hour ago
            Apple does have their own small foundation models but it's not clear they require a lot of GPUs to train.
          • system2 2 hours ago
            I think Apple is waiting for the bubble to deflate and will then do something different. And they have a ready-to-use user base to sell whatever they can make money from.
            • amluto12 minutes ago
              If they were taking that approach, they would have absolutely first-class integration between AI tools and user data, complete with proper isolation for security and privacy and convenient ways for users to give agents access to the right things. And they would bide their time for the right models to show up at the right price with the right privacy guarantees.

              I see no evidence of this happening.

            • aurareturn19 minutes ago
              Apple can make more money from shorting the stock market, including their own stock, if they believe the bubble will deflate.
            • ymyms an hour ago
              They apparently are working on, and are going to release, 2(!) different versions of Siri. Idk, that just screams "leadership doesn't know what to do and can't make a tough decision" to me. But who knows? Maybe two versions of Siri is what people will want.
              • consumer451 43 minutes ago
                Arena mode! Which reply do you prefer? /s

                But seriously, would one be for newer phone/tablet models, and one for older?

                • pinnochio18 minutes ago
                  It sounds like the first one, based on Gemini, will be a more limited version of the second ("competitive with Gemini 3"). IDK if the second is also based on Gemini, but I'd be surprised if that weren't the case.

                  Seems like it's more a ramp-up than two completely separate Siri replacements.

        • xvector an hour ago
          Apple is sitting this whole thing out. Bizarre.
          • cs_sorcerer8 minutes ago
            From a technology standpoint, I don't feel Apple's core competency is in AI foundation models.
          • random_duck33 minutes ago
            They might know something?
            • leptons3 minutes ago
              More like they don't know the things others do. Siri is a laughing stock.
        • downrightmike an hour ago
          They are in-housing their AI to sell it as a secure way to do AI, which 100% puts them in the lead for the foreseeable future.
      • greiskul3 hours ago
        But is Google buying those GPU chips for their own use, or to have them on their data centers for their cloud customers?
        • dekhn3 hours ago
          Google buys Nvidia GPUs for cloud; I don't think they use them much, or at all, internally. The TPUs are used both internally and in cloud, and now it looks like they are delivering them to customers in the customers' own data centers.
          • hansvm2 hours ago
            When I was there a few years ago, we only got CPUs and GPUs for training. TPUs were in too high of demand.
          • moralestapia3 hours ago
            I can see them being used for training if they're vacant.
        • notyourwork2 hours ago
          Both. Internal teams are customers too.
    • Morromist4 hours ago
      Nvidia had the chance to build its own AI software and chose not to. It has been a good choice so far, better to sell shovels than go to the mines - but they still could go mining if the other miners start making their own shovels.

      If I were Nvidia I would be hedging my bets a little. OpenAI looks like it's on shaky ground, it might not be around in a few years.

    • wmf4 hours ago
      OpenAI will keep using Nvidia GPUs but they may have to actually pay for them.
    • dylan604 4 hours ago
      Would Nvidia investing heavily in ClosedAI dissuade others to use Nvidia?
      • smileson2 2 hours ago
        Aren't they switching to PI for Pretend Intelligence?
    • raincole3 hours ago
      Literally all the other companies that still believe they can be the leading ones one day?
    • nick49488171 4 hours ago
      Maybe xAI/Tesla, Meta, Palantir
    • lofaszvanitt2 hours ago
      The moment you threaten NVDA's livelihood, your company starts to fall apart. So history tells.
    • dfajgljsldkjag4 hours ago
      The Chinese will probably figure out a way to sneak the Nvidia chips around the sanctions.
      • ekianjo4 hours ago
        Alibaba has their own chips now that they use for training.
  • kennyadam3 hours ago
    This video, which breaks down the crazy financial positions of all the AI companies and how they are all involved with one called CoreWeave (which could easily bring the whole thing tumbling down), is fascinating: https://youtu.be/arU9Lvu5Kc0?si=GWTJsXtGkuh5xrY0
  • pinnochio4 hours ago
    All these giant non-binding investment announcements are just a massive confidence scam.
    • rvz4 hours ago
      We know that it is all a grift before the inevitable collapse, so everyone is racing for the exit before that happens.

      I guarantee you that in 10 years' time, you will get claims of unethical conduct by those companies, but only after the mania has ended (and by then the claimants will have sold all their RSUs).

    • Drunkfoowl4 hours ago
      [dead]
  • ChicagoDave20 minutes ago
    Many of us predicted that OpenAI's insistence that the model was the product was the wrong path.

    The tools on top of the models are the path, and people building things faster is the value.

  • mordymoop2 hours ago
    I wonder how relevant the indications of Altman's duplicitous behavior in the deposition findings have been here.
  • johnny_canuck3 hours ago
    Interesting to see this follow the news, just yesterday, of their planned IPO in Q4. https://www.wsj.com/tech/ai/openai-ipo-anthropic-race-69f06a...
  • bravetraveler4 hours ago
    In the distance, Uncle Sam groans as his phone rings
  • klysm3 hours ago
    How is it legal for them to do this to pump stocks?
  • Handy-Man3 hours ago
    > He [Jensen Huang] has also privately criticized what he has described as a lack of discipline in OpenAI's business approach and expressed concern about the competition it faces from the likes of Google and Anthropic, some of the people said.
  • mattas3 hours ago
    Would be interesting to see how Oracle's CDSs react to this news.
  • random_duck34 minutes ago
    "the people said"
  • caycep2 hours ago
    will there be more 5090 FE cards at a lower price? one can only hope
  • m000 4 hours ago
    And so it begins.
  • mrcwinn2 hours ago
    The article references an "undisciplined" business. I wonder if this is speaking to projects like Sora. Sora is technically impressive and was fun for a moment, but it's nowhere near the cultural relevance of TikTok, while I believe it is significantly more expensive, harder to monetize, and consuming a significant share of their precious GPU capacity. Maybe I'm just not the demo and missing something.

    And yes, Sam is incredibly unlikable. Every time I see him give an interview, I am shocked how poorly prepared he is. Not to mention his “ads are distasteful, but I love my supercar and ridiculous sunglasses.”

  • moomoo11 2 hours ago
    Nvidia should buy OpenAI. I like Jensen.
    • system2 2 hours ago
      That's Sam Altman's wet dream: to get out of this with lots of cash and headache-free when the bubble bursts.
  • CamperBob2 4 hours ago
    Does this mean OpenAI won't be needing all that RAM after all...?
    • diabllicseagull an hour ago
      Sadly, the Micron/SanDisk bubble is going full steam ahead.
  • wigster4 hours ago
    ...and the merry go round stopped
    • echelon4 hours ago
      Not for all the players. Not everyone has over-raised their fundamentals.
      • ajross4 hours ago
        Literally the whole economy has "over-raised its fundamentals" though. Not everyone is going to fail in exactly this way, but (again, pretty much literally) everyone is exposed to a feedback-driven crash from "everyone else" that ended up too exposed.

        We all know this is a speculative run-up. We all know it'll end somehow. Crashes always start with something like this. Is this the tipping point? Damned if I know. But it'll come.

  • nsjdkdkdk4 hours ago
    [dead]
  • radpanda4 hours ago
    If the ice cream cone won't lick itself, who will?
  • whatever1 2 hours ago
    OpenAI is too important to run out of cash. The gov will make companies invest.
    • batiudrami26 minutes ago
      Important for what? Google's and Anthropic's models are already better, Google actually makes money, and both are US companies. What strategic relevance is there to OpenAI?
    • tartuffe78 an hour ago
      Too important to what? The bubble?
    • this_user an hour ago
      Is it? What do they have that Google and Anthropic do not at this point?