In the increasingly frothy world of GenAI, that's a Wednesday and you get the cheque.
If it looks like a bubble and smells like a bubble…
The tech is cool, but investors are about to lose their shirts in this space.
I'll tell you why you won't. Because you wouldn't get it. Because it's not that easy.
Mira Murati is not some rando off the street. Ilya Sutskever is not some rando. Zuckerberg is not paying millions to randos off the street to come work for Meta.
Seriously, this stuff is great. A lot of people are getting rich, and I'm sure some value will be created out of it. LPs will be the ones holding the bag but thankfully we don't have a tradition of bailing them out like banks, so what do I care? In the meantime I benefit from VC subsidized coding assistants.
I am sure there must be a better argument than just that. Because I have "worked with certain people at well-funded and rapidly growing companies" a lot and I'm worth nothing whatsoever.
They know OpenAI’s complete history and roadmap.
They can get a call with any AI researcher on earth.
It’s not Klingon cloaking tech.
With minor variations, it is Transformers via autoregressive next-token prediction on text. Self-attention, residuals, layer norm, positional encodings (RoPE/ALiBi), optimized with AdamW or Lion. Training scales with data, model size, and batch size, using LR schedules and distributed parallelism (FSDP, ZeRO, TP). For inference, KV caching and probabilistic sampling (temperature, top-k/p).
Most differences are in scale, data quality, and marginal architectural tweaks.
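To make concrete how little of this is secret sauce, here is a minimal, purely illustrative sketch of the sampling step mentioned above (temperature scaling, then top-k / top-p filtering); the logits are random stand-ins, not real model outputs:

    # Illustrative only: temperature + top-k + top-p (nucleus) sampling over
    # next-token logits, the inference-time step described above.
    import numpy as np

    def sample_next_token(logits, temperature=0.8, top_k=50, top_p=0.95, rng=None):
        rng = rng or np.random.default_rng()
        scaled = logits / max(temperature, 1e-6)        # temperature scaling
        probs = np.exp(scaled - scaled.max())
        probs /= probs.sum()

        # top-k: keep only the k most probable tokens
        if top_k is not None and top_k < len(probs):
            cutoff = np.sort(probs)[-top_k]
            probs = np.where(probs >= cutoff, probs, 0.0)
            probs /= probs.sum()

        # top-p (nucleus): keep the smallest prefix of tokens whose total mass <= top_p
        order = np.argsort(probs)[::-1]
        cumulative = np.cumsum(probs[order])
        keep = order[: max((cumulative <= top_p).sum(), 1)]  # always keep >= 1 token
        mask = np.zeros_like(probs)
        mask[keep] = probs[keep]
        mask /= mask.sum()

        return rng.choice(len(probs), p=mask)

    fake_logits = np.random.default_rng(0).normal(size=32_000)  # stand-in vocabulary logits
    print(sample_next_token(fake_logits))

(A real serving stack wraps this around cached key/value tensors so each new token only attends over stored activations, but the sampling itself really is this mundane.)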
They will do a couple of tools, then another panicked company like Apple, or Facebook or AWS, needing to justify the millions spent on AI and with nothing to show for it, will acquire them for billions. When somebody points out that they don't make money or have a product, somebody here will point to the "team", as you just did :-)
But since we are talking about 12 billion, let's play VC and do the due diligence on the leader first?
Reporting from The Optimist by Keach Hagey and other open sources, plus the famous thread here from when Sam Altman got fired from OpenAI, reveals that Mira Murati had a central role in Altman's November 2023 firing.
For VCs, this is a masterclass in corporate survival and a massive red flag. Evidence shows Murati was the primary architect of the case against Altman:
- Collected "screenshots from Murati's Slack channel" documenting alleged toxic behavior
- Wrote private memos questioning Altman's leadership
- Initiated board contact through Ilya Sutskever
- Sent "hefty PDF files of evidence" to board members via self-destructing emails
Then, when employees revolted fearing lost equity, she immediately switched sides, signed the letter demanding Altman's return, and later claimed she "fought the actions aggressively."
She survived by reading the room perfectly:
- Became interim CEO during the crisis
- Positioned herself as essential to Altman's return
- Exited in September 2024 just before the for-profit restructuring
- Immediately raised $2B for her new startup, suggesting she had it planned all along.
Investment Implications:
This reveals someone who will systematically undermine leadership while maintaining plausible deniability, then switch sides when politically convenient. Remember Paul Graham's famous quote about Sam Altman? That he "could be parachuted into an island full of cannibals and come back in 5 years and be king"? Murati might be even more astute.
She went from Intern → Tesla PM → Leap Motion → OpenAI VP → CTO → interim CEO → $2B startup founder, all with zero AI research background.
Her OpenAI trajectory shows world-class political instincts. Join during the post-Musk power vacuum (2018), build alliances with both sides, orchestrate a coup when convenient, flip sides when it fails, exit perfectly timed before the restructuring.
For VCs:
Whether you see this as exceptional political skill or dangerous executive behavior depends on your risk tolerance.
Let's just hope the 10 million dollars the government of Albania, the poorest nation in Europe, invested in this have been secured with proper share rights. That could make for some awkward reactions back home.
At correct time stamp: https://youtu.be/Wo95ob_s_NI?t=1040
Which is in contrast to the published vision of Thinking Machines Lab.
I was more concretely referring to the level of talent in the engineering team, for example Lilian Weng and Horace He.
Horace can probably produce $50M of revenue personally per year.
It seems Horace He focuses on deep learning systems and compiler optimization, improving the performance of frameworks like PyTorch.
While both are clearly highly capable, and perhaps capable of focusing on other areas, judging just from published papers neither seems to have published work on core LLM architectures or foundation model training that could bring about a scientific advance in the performance of current models.
Their contributions seem to be on enhancing usability and efficiency, not the underlying design or scaling of modern LLMs.
If that is the core team, I would be worried about whether they have the researchers capable of producing a breakthrough worthy of the billions committed. But maybe that is why they are still hiring?
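(For anyone curious, this is roughly what that systems/compiler flavour of work looks like in practice; a hedged sketch using PyTorch's torch.compile, with a toy MLP and made-up sizes rather than anything Thinking Machines has shipped:)

    # Illustrative only: framework-level optimization of the kind referenced above.
    # torch.compile traces the model and fuses/compiles kernels without changing
    # its outputs; the toy model and shapes are arbitrary stand-ins.
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Linear(4096, 4096),
        nn.GELU(),
        nn.Linear(4096, 4096),
    )
    compiled = torch.compile(model)   # same numerics (up to fusion), often faster after warm-up

    x = torch.randn(8, 4096)
    with torch.no_grad():
        baseline = model(x)
        optimized = compiled(x)

    print((baseline - optimized).abs().max())  # should be ~0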
It is a bubble, of course, but this person would get a $100M round in a non-bubble.
Are you saying that her previous experience shouldn't have counted as a positive attribute for being hired?
eBay would have succeeded even if a Labrador retriever was CEO - due to everyone else’s efforts.
When proposing a model for evaluating startup investments -- or any decision-making framework -- it’s useful to run a kind of regression test.
Would Shark Tank have funded the most successful startups of the past decade? In many cases, no. They famously passed on companies like Airbnb, Uber, and Dropbox early on (sometimes indirectly via pitch decks or missed introductions), and even directly rejected DoorDash and Cameo, both of which went on to become unicorns. In other cases, they offered terms that, if accepted, would likely have made follow-on funding difficult.
That doesn't mean every GenAI deal today is wise. But pointing to Shark Tank as a benchmark for rational investing is a very selective filter. Historically, it’s missed more home runs than it’s hit.
You would have been a fool to invest in Apple in the late 90s on any other signal than the return of Steve Jobs. That era created many rich fools.
Same thing here. Mira is a top tier, high-profile person in the AI space. So investors lined up.
And like, if anyone's gonna win it should be Google because of their ability to push capital into this for a long time, their world class infrastructure and their cost advantage by having their chips in house. But I have faith in their ability to snatch defeat from the jaws of victory.
This is really obvious 'round-tripping' no? Nvidia invests in startups that are going to spend a majority of their funds on GPUs, Nvidia's revenue goes up, Nvidia's stock goes up, Nvidia makes more 'investments' in startups who use that investment to continue the cycle?
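(A toy version of that loop with entirely made-up numbers -- the spend and reinvestment ratios below are assumptions, not reported figures -- just to show how one pot of money can get booked as revenue several times over:)

    # Hypothetical illustration of the circular flow described above.
    investment = 100.0          # initial investment in a startup ($M, made up)
    gpu_spend_ratio = 0.7       # share of each round spent on GPUs (assumed)
    reinvest_ratio = 0.5        # share of that revenue recycled into new rounds (assumed)

    total_booked_revenue = 0.0
    for round_number in range(1, 6):
        gpu_revenue = investment * gpu_spend_ratio
        total_booked_revenue += gpu_revenue
        investment = gpu_revenue * reinvest_ratio
        print(f"round {round_number}: GPU revenue {gpu_revenue:.1f}, next investment {investment:.1f}")

    print(f"revenue booked off the original $100M: {total_booked_revenue:.1f}")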
This is why the US is pro-Israel as well. They are a good customer of the military-industrial complex.
Now if we could only get Japan, ASEAN, and S. Korea into a conflict with China over Taiwan, we could ramp up this model - but we'll need to build drone factories to usurp the cheap Turkish, Polish, Ukrainian, and Russian suppliers.
And the additional bonus is that all this has to be transacted in dollars, giving our currency preeminence so we can print more of them and not suffer inflation - even though there are more dollars chasing the same goods, it's happening symmetrically around the world because everyone has to use them.
If you can achieve that without anti-competitive behaviour then being a monopolist is fine. Legally, anyway.
Whether this circular thing counts as such behaviour, I don't know, but I doubt it.
Looking at the valuation, I don't get how they would not be round-tripping and otherwise cooking the books a lot right now (not necessarily limited to this case, but more in general).
Very strong Y2K bubble vibes.
But maybe there is some gain that I can't just imagine...
(Note that it's much more legitimate to scrutinize scientific research, since it's mostly publicly funded, as opposed to VCs investing in startups.)
> Overall it is not efficient use of resources or time to me.
I mean, the tech world has done a lot of good and advanced humanity a lot. More than most industries, IMO. Most of that is by inventing new things. A society could dial down useless spending by dialing down innovation, but it's not clear to me that there's a way to get more innovation with less waste than the current system.
I think many industries are far more of an "inefficient use of resources". I think if the world as a whole would spend 10x more on scientific and technological advancement, at the cost of less e.g. fashion, less investment in entertainment, less investment in (some forms of) finance, etc, the world would be far better off. (I'm not advocating that this should be forced on society, just stating my opinion on what a more optimal way of allocating resources would be.)
I think it lies at the intersection of a lot of topics where HN comments are hostile: VCs, large fundraising, AI, OpenAI related employees. These are all topics where HN comments are more hostile than pragmatic.
Most of the hostility is just a proxy for “VCs bad” banter.
In the case of a bubble, hostility is pragmatism.
Of course, nobody knows that it's a bubble, but nobody knows it's not a bubble either.
Many people know it's a bubble. You see lots of HN comments that it's a bubble. We certainly did in 2007.
In a bubble, the predominant notion is not "I wonder if it's a bubble", but rather "It's a bubble, I hope I can make my money before it pops". A perfect example of this was the Tulip craze in the Netherlands. Nobody actually believed tulip bulbs were worth that much, they just wanted to find a bigger sucker to make a profit, if you read accounts from the time.
Why does it represent a "sorry state of affairs"?
VCs are risking money they manage, it's not like they're putting anyone else at risk. Do we really prefer a world in which we don't take risks to develop new technology?
The VC sector is probably an order of magnitude smaller then, say, fashion. Isn't money better spent taking risks to develop new technologies that have a chance to legitimately change the world for the better?
Note: If you think the technology could be a negative (which, to some extent I do because of the elements of AI safety, that's a separate argument. Not sure that that's what you're referring to though.)
I do see some value in it, we need to take risks with our capital and on a long term horizon it's fine.
But there has to be some rationale behind it at least, some evidence based approach.
But I imagine there's immense pressure to do so. Returns are tracked relative to others. All distorted by big jumps in valuations. In the long term, there might be a more prudent/sustainable use of some of that capital but we'll never know, everyone's chasing share price go up, not value added / profit go up.
Yes, it's good to have your finger in the pie. But the stake may be overpriced, given current behaviour.
They're putting everyone at risk.
As others in this thread have pointed out what these VCs do is make wildly risky gambles. When it pays off they make billions of dollars in profit and congratulate themselves for their foresight and the power of capitalism. When it doesn't they go cap-in-hand to the federal government, saying that if they're allowed to collapse it'll destabilize the entire economy.
If it were just some rich billionaires who might have to sell their second private jet when this collapses I'd agree with you: who cares what they do with their own money. But what's going to happen is that the taxpayer will end up footing the bill for their reckless behavior, and millions of people will end up worse off for it as governments cut back on welfare, healthcare, and aid programs to balance the books. In a very real sense, people are going to be paying for VC greed with their lives.
But that's not actually true.
The Silicon Valley Bank situation aside - because it was a unique situation, we can open it if you want - when have VCs asked to be bailed out on failed investments? This kind of moral hazard is absolutely a thing in banks and finance in general, but afaik, not in the VC industry.
But no risk for them, everyone paid for them to keep their billions.
With the public information there is neither a point in positivity nor negativity. It would be speculation either way. A question that could be worth pondering is what exactly the investors are shown. What would you need to see to put all your money into Thinking Machines?
“I want to create the time and space to do my own exploration.”
To create time and space, and even be able to explore it, is far beyond the cutting-edge physics of today. If she is able to do it with just a few billion dollars, it is an investment worth making.
1. https://www.wired.com/story/mira-murati-thinking-machines-la...
It is a position that is completely unreachable for 99.9999% of people even in tech.
It is of course easy to write off critics as jealous has-beens and bozos.
You imply that VCs are rational because they bet their own money, which in the current complicated world is probably not true. VC funds get money from a complicated funnel, likely including my/your retirement account and public debt, and VC managers likely receive bonuses for closed deals, not for long-term gains that may materialize in 10 years. So investing $2B into a non-existent product with unclear market fit/team/tech moat smells very strongly.
Correct. Most VCs are using someone else's money. See Softbank. And making extremely poor judgements on how to use that money.
There is a funnel of investors: say retirement account holders put money into some retirement fund, the fund manager puts money into some "innovation fund", and the "innovation fund" puts money into a VC fund. All the middlemen get a % cut; the VCs likely watch slides about some product roadmap, but they are more interested in closing the deal, getting their % cut, and moving on.
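(A toy calculation of that funnel, with assumed fee levels only; real fund terms vary a lot, and carried interest on the way back out is a separate cut of gains on top:)

    # Hypothetical fee stacking through the investor funnel described above.
    contribution = 100_000.0                      # retirement saver's dollars (made up)
    layers = [
        ("retirement fund", 0.01),                # assumed 1% cut
        ("innovation fund of funds", 0.01),       # assumed 1% cut
        ("VC management fee", 0.02),              # assumed 2% cut
    ]

    capital = contribution
    for name, fee in layers:
        capital *= (1 - fee)
        print(f"after {name} ({fee:.0%} cut): {capital:,.0f}")

    # Each middleman is paid whether or not the underlying startup bet works out.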
it is likely they won't suffer any significant consequences
(Sure -- that background is somewhat self earned.)
(They could have just as well picked Skynet.)
Can’t help but feel there’s some sexism attached to the pushback here.
https://makeagif.com/gif/im-actually-not-sure-about-that-mir...
The more generic, grandiose, and oft-repeated someone's claims are, the more skepticism is warranted.
> significant OpenAI talent on board, including Murati herself.
Because people on this website don't consider her a 'talent'.
"Presumably" they have "some" idea. Oh really? Are you "sure" about that? Is that a $12B type confidence?
Anyway, if Mistral is for sale, then if not Apple, it'll be another big company.
> if Mistral is for sale
At the moment there's no public information about that; it's all about Apple being interested in buying, but we don't know if Mistral is interested in selling at all. That's the gap.
https://www.reuters.com/business/finance/andreessen-horowitz...
Moreover, if some banks will fail in the aftermath, they'll be bailed out at the expense of you know whom.
I don't know that it'll work, but it is worth trying.
Yes, she was there at OpenAI and by all accounts a super respected non researcher.
Clearly she's incredible, her reputation precedes her, etc., but is operational strength (being a good exec) really what gets to AGI?
Feels almost like cargo-culting.
1. Researchers
2. Productivity-enhancing things in the org (either technical stuff like frameworks/optimisations or nebulous social things like better collaboration)
3. Amount of available compute.
4. Luck
And then the past experience with OpenAI should help a lot with doing a good job of #1 and #2, even if it is just being a competent executive.
Maybe the product she’s pitched has massive commercial potential.
> Feels almost like cargo-culting.
Why almost?
If something makes no sense, seems totally crazy, and is being done by a crowd of extremely smart people, you can only assume one of two things: they are actually crazy and frittering away 2B on hype or, just maybe, there's something we're not aware of. If there are only two camps: optimistic and naive or pessimistic and dismissive, I'll choose naive every day of the week.
Anyways, congrats to Thinking Machines and here's hoping they do have something awesome up their sleeve!
While these are not necessarily typical cases, they show that it's absolutely possible to have gigantic valuations raised from industry experts with nothing to show for it, if you're good enough at lying.
Blood testing for a single condition can involve blood separation via centrifuges, application of various chemical solutions, and a multi-step process using various test strips (each with different specificities/sensitivities). Not to mention some of the offending markers might be so rare in blood that you'd have to draw multiple vials just to get a statistically significant amount of markers.
Reducing just a single one of these tests to an instant test that works with a drop of blood would be a major breakthrough, subject to various medical awards (and lengthy medical trials).
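(Back-of-the-envelope on the "rare markers" point, with assumed concentrations and volumes: if the marker is scarce enough, a finger-prick drop may simply not contain enough copies to detect, no matter how clever the assay.)

    # Illustrative numbers only: expected marker copies in a drop vs a vial,
    # and the Poisson chance of capturing at least 10 copies.
    import math

    copies_per_ml = 100                 # assumed marker concentration (copies/mL)
    drop_ul, vial_ml = 50, 5            # ~50 uL finger-prick drop vs a 5 mL vial

    mean_drop = copies_per_ml * drop_ul / 1000
    mean_vial = copies_per_ml * vial_ml

    def p_at_least(k, mean):
        """Poisson probability of capturing at least k marker copies."""
        return 1 - sum(math.exp(-mean) * mean**i / math.factorial(i) for i in range(k))

    print(f"expected copies: drop {mean_drop:.1f}, vial {mean_vial:.0f}")
    print(f"P(>=10 copies) in a drop: {p_at_least(10, mean_drop):.3f}")
    print(f"P(>=10 copies) in a vial: {p_at_least(10, mean_vial):.3f}")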
I don't disagree with your broader point, but have spent enough time with enough a16z partners to say they're just people. Not outright stupid, but not extremely smart, either. And their error rate is pretty high.
Which...to some extent is by design. It's part of a VC's job to make bad bets. Sometimes the price of getting into a deal at all is getting in on insane terms, but you still do it because that one investment could return the entire fund. Maybe Thinking Machines is a winner, maybe it's another Clubhouse. We'll see.
It’s that culture that creates some spectacular hits, and a vast number of misses. Not necessarily a bad thing, but it’s a different approach and means that the funding doesn’t necessarily suggest the results one might expect.
They don’t need to show a product. It’s been demonstrated that with capital and some skill you can train a foundation model. A16Z has the former. Murati has the latter.
This assumes that being careless with billions can only ever be crazy.
If you're already set for life, why not gamble (including at completely irrational levels) for even more insane amounts of money when the whole thing is just a crazy house of mirrors. Yes, there's value at the heart, but there's also crazy amounts of money being funneled in, lots of opportunity for chaos, lots of chances for legal rug pulls. All of it inflated even further by a fervor of carelessness for any kind of consequences - things like the stock market are completely removed from any kind of fundamentals.
In a fun house of mirrors, that 2 billion could be 2 cents, or it could be 2 trillion. Buy the ticket and have fun!
What kind of temerity and hubris would it take to believe that the ratings agencies were colluding with the banks to give AAA ratings to Mortgage-Backed Securities?
Can it simply be that anyone who wants to create new competitive models at this point needs billions for training? This isn't a saas where you can whip up a prototype in a month. Rather than having a product to show, I'd guess it's more about an experienced team and some plausible-sounding research ideas.
SBF said if he had a die and it had a 99% chance of killing everyone and a 1% chance of making the world 1 million times happier he would roll the die. Repeatedly. And Silicon Valley loved him.
I think AI is a similar calculation. Humans are tearing themselves apart and the only thing worth betting on is AI that can improve itself, self replicate and end scarcity. I believe that these VCs believe that AI is the only chance to save humanity.
And if you believe that, the net present value of AGI is basically infinite.
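(The naive arithmetic behind that mindset, using the numbers from the quote and status-quo world utility normalized to 1:)

    # Naive expected-utility calculation for the die roll described above.
    p_doom, p_win, upside = 0.99, 0.01, 1_000_000

    expected_utility = p_doom * 0 + p_win * upside
    print(expected_utility)   # 10000.0 >> 1, so the naive calculus says "roll"

Which is the same arithmetic that makes "the net present value of AGI is basically infinite" swamp any finite downside.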
Scarcity, wow...
- There is no scarcity in the rich world by historical standards.
- There is extreme poverty in large parts of the world, no amount of human intelligence has fixed this and therefore no amount of AI will. It is primarily not a question of intelligence.
- On top of that, "ending scarcity" is impossible due to the hedonic treadmill and the way the human mind works, as well as the fact that with or without AI there will still be disease, aging, and death.
If you were in the top 1%, you wouldn't need labor anymore because it could be automated.
Even in post scarcity some people will want as much as possible for themselves, if history is any baseline.
Yeah, similar to "the Trump administration may have had some misses, but extremely smart people are doing great things."
If only I had had that idea, maybe I could have raised $2B.
consciousmachines.ai
Such an obvious next step. Conscious, ethical, inclusive machines.
Uh, run the training script with your thoughtful modifications? They’re not welding together an LLM.
They have very strong talent from Meta's FAIR/PyTorch teams as well as a lot of strong people from OAI.
> "We're excited that in the next couple months we will be able to share our first product, which will include a significant open source component and be useful for researchers and startups developing custom model," CEO Murati said in a post on the X social media platform.
And imagine being the ops guy there, about to run a new training batch, but you specified the wrong input path. Poof, the money is gone. Absolutely wild stuff.
From what I’ve read, Murati was an excellent CTO.
Maybe they can make a T-800 to ensure when AI/skynet goes rogue humanity has a chance.
It’s the ultimate AI safety.
“I’ll be back”
They haven’t built anything yet. It’s a bit premature to call out the non-existing product as insufficiently novel.
I think the point is that the pie-in-the-sky promises are insufficiently novel.
You have no data to judge novelty on. If you think AI is rocket science, they're flubs. If you don't, they have a shot. Either way, I can't see novelty being unsatisfied.
We need to jack up interest rates again.
Sort of like the old adage about the Goldrush - why pan for gold when you can just sell tons of shovels to all the others rushing to get rich?
That said, what was left of the old one was bought by Sun, which is now owned by Oracle.
I wonder if they still own rights to the name? Not the wisest move to name your new company after something owned by the most famously litigious tech corporation.
Thinking Machines CM-5 in Jurassic Park (1993):
https://www.starringthecomputer.com/appearance.html?f=11&c=1...
Jurassic World Rebirth passes $500 M in revenue:
https://www.koimoi.com/box-office/jurassic-world-rebirth-wor...
[1] https://cray-history.net/2023/08/20/cray-systems-in-popular-...
Here's[1][2] their trademark application from February, which is still "NOT ASSIGNED". Technically it's for their logotype but I imagine it's all the same issue, considering that they include "Computer hardware" in the description of their company (which is exactly what the old one did). This site ominously says that the only action since the filing date was on June 5th, titled "LETTER OF PROTEST EVIDENCE FORWARDED" -- perhaps that's Oracle?
I think this[4] is the trademark for the original's ("Thinking Machines Corporation") trademark logotype, first used in 1987 and defunct ("cancelled"?) by 1999. Another site[5] lists three other "Dead/Cancelled" trademarks owned by the original, and two more recent attempts by randos in 2006 and 2010 that were both shot down.
Technically they're "Thinking Machine Lab Inc."[3], but they're basically always referred to without the "Lab", even to the point of using thinkingmachines.ai as their domain (which, hilariously, doesn't use their trademarked logotype). Another goofy tidbit is that they also filed a trademark for a serif logotype of the words "BEEP BOOP"[6] -- maybe that's their fallback name!
Would be fascinated to hear from anyone familiar with US trademark law on what might be going on, and how we might see what the "LETTER OF PROTEST" is! My layperson understanding would definitely tell me that Oracle would maintain the trademarks, but perhaps they were forced to let them lapse due to lack of use?
I've been slowly building (y'know how it is...) a (one-man...) company filed as "Doering Thinking Machines, LLC" for a few years (named after an old family business, "Doering Machines"), so I'm quite interested to see how this shakes out!
[1] https://furm.com/trademarks/thinking-machines-99054776
[2] For the love of god, please HN gods, just make these comments markdown. IDK what battle you're fighting but it's a baffling one. The lack of blockquotes is painful, but the lack of inline links is downright diabolical! You have three people now, you can afford the effort ;)
[3] https://trademarks.justia.com/741/37/thinking-machines-74137...
[4] https://en.wikipedia.org/wiki/Thinking_Machines_Lab
IANAL, but I do know trademarking a logotype is a kind of 'trade dress' that's not the same as trademarking the words of the name (even if those words appear in the logotype).