Someone has to come up with $1.4 trillion in actual cash, fast, or this whole thing comes crashing down. Why? At the end of all this circular financing and deals are folks that actually want real cash (eg electricity utilities that aren’t going to accept OpenAI shares for payment).
If the above doesn’t freak you out a bit at how bonkers this whole thing has become, then you need a reality check. “Selling ads” on ChatGPT ain’t gonna close that hole.
These deals aren't for 100% payment up front. The deals also include stock, not just cash. So, no, they do not need to come up with $1.4 trillion in cash quickly.
This AWS deal is spread over 7 years. That's $5.4 billion per year, though I assume it's ramping up over time.
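For what it's worth, the per-year arithmetic checks out (a minimal sketch in Python; the $38B total is the headline figure mentioned elsewhere in the thread, not stated in this comment):

    total_usd = 38e9   # headline size of the AWS deal, in USD (assumption)
    years = 7          # deal term
    print(f"${total_usd / years / 1e9:.1f}B per year")  # -> $5.4B per year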
> At the end of all this circular financing and deals are folks that actually want real cash (eg electricity utilities that aren’t going to accept OpenAI shares for payment).
Amazon's cash on hand is on the order of $100 billion. They also have constant revenue coming in. They will not have any problem accepting OpenAI shares and then paying electricity bills with cash.
These deals are also being done in the open with publicly traded companies. Investors can see the balance sheets and react accordingly in the stock price.
The one I found best documented (1) is Meta's SPV to fund their Hyperion DC in Louisiana, a deal that is 80% financed by the private credit firm Blue Owl. There is a lot of financial trickery involved in getting the ratings agencies to count the SPV's debt as belonging to a different entity, so it doesn't weigh on Meta's books, even though the market treats it as something Meta will basically back. But xAI's Memphis DC is also an SPV, and Microsoft is doing that as well. I'm not sure about AMZN, but the fact that we're starting to see this from their competitors suggests they will be going this way too.
1: By the invaluable Matt Levine, here: https://www.bloomberg.com/opinion/newsletters/2025-10-29/put... but the other major companies have their own SPVs
If the market collapses, I think Meta can technically just walk away: they lose access to those data centers (which they no longer want anyway), and the SPV is stuck holding $X of assets against more than $X of liabilities, with the holders of the credit on the hook, but not Meta.
And investors are fine being on the hook because they get a higher return from the SPV bonds than from Meta bonds. (Risk-adjusted, it's probably the same return.)
Do we?
The payments Meta et al are making to the SPV are payments for data-center services. The data centers are then buying the assets and issuing the debt. Now, Meta is obligated to make those payments to the SPV. Which looks like debt. But they are only obligated to do so if the services are being provided.
Blue Owl, meanwhile, owns 80% of the datacentre venture. If the price of chips crashes, that's Blue Owl's problem. Not Meta's. If Meta terminates their contract, same deal. (If Beijing nukes Taiwan and the chips quintuple in value, that's Blue Owl's gain. Mostly. Not Meta's.)
> Why don't they just say "actually no, we all know that's debt and it's owned by Meta so we will consider it when rating their credit."?
If Meta stopped paying the SPV, the SPV would have the recourse of a vendor. If Meta stopped making payments on its bonds, that would trigger cross defaults, et cetera. Simply put, Meta has more optionality with this structure than it would if it issued its own debt.
The red flag to keep an eye out for is cross guarantees, i.e. Meta, directly or indirectly, guaranteeing the SPV's debt.
Does that make any sense? No.
Then Meta would do this in a wholly-controlled off-balance-sheet vehicle à la Enron. The fact that they're involving sidecars signals some respect for their rating.
I'm no expert on the specifics of the circular financing we're seeing here so the rest of what you wrote might be true, but I know enough about how Wall Street and the world in general works to know that closing with this as a defense shows an incredible naivete that makes me question everything else you have said.
https://www.bloomberg.com/news/articles/2025-10-31/meta-xai-...
‘Brad, if you want to sell shares, I’ll find you a buyer…I just—enough,’ Altman said on Gerstner’s podcast.
https://www.theinformation.com/articles/ilya-saw-mira-murati...
Hopefully nobody reading this has experienced it: these are the words of a true sociopath/addict.
"I'm mad you questioned me" is fucking classic.
I told dang I was out and I am after this. Sorry dang.
It's not. It's done on the basis of don't question me bro.
Sorry, but is there some lore behind it? The last sentence has me wondering what it means. If you could share the lore, I would really appreciate it.
but overall, I agree that this is a very weird thing for Sam Altman to say
There are two important points by Keynes that are relevant:
1. The market can remain irrational longer than you can remain solvent. Even if you're betting on a crash, it will probably happen after you get margin called and lose all your money. You can be absolutely right about where this is headed, but keep your personal investments away from this.
2. The value of a company isn't determined by any sound fundamentals. It's determined by how much you can get a sucker to pay (aka Keynes' castles in the air theory). Until we run out of suckers OpenAI will be able to keep getting cash infusions to pay whoever actually demands cash instead of stock. And as long as there are suckers that are CEOs of big tech companies they are going to be getting really big cash infusions.
Because as an American, they are all effectively denominated in USD. Even Bitcoin, which everyone claims to be the savior.
And while I don't know as much about other countries, something tells me most Western countries and their currencies are equally exposed.
It's certainly possible to imagine OpenAI eventually generating far more revenue than Google, even without anything close to AGI. For example, if they were to improve the productivity of 10% of the economy by 10% and capture a third of that value for themselves, that would be more than enough. Alternatively, displacing Google as the go-to place for search and selling ads against that would likely generate at least Google levels of revenue. Or some combination of both.
Is this guaranteed to happen? Of course not. But it's not in "bonkers" territory either.
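For scale, a minimal back-of-envelope on that first scenario, with hypothetical round numbers (the ~$100T world GDP figure is my assumption, not the parent's):

    world_gdp = 100e12        # rough world GDP in USD (assumption)
    affected_share = 0.10     # 10% of the economy
    productivity_gain = 0.10  # improved by 10%
    capture = 1 / 3           # OpenAI captures a third of that value
    revenue = world_gdp * affected_share * productivity_gain * capture
    print(f"${revenue / 1e9:.0f}B per year")  # -> $333B per year, Google-scale revenue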
The Amazon deal is actually spread over 7 years. Other deals have different terms, but also spread over multiple years.
Deals like these have cancellation terms. OpenAI could presumably pay a fee and cancel in the future if their projections are too high and they don't need some of the compute from these deals.
The deals also include OpenAI shares. The deals are being made with companies that have sufficient revenue or even cash on hand to buy the compute and electricity.
The claim above that someone needs to come up with $1.4 trillion right now or everything will collapse isn't grounded in any real understanding of these deals. It's just adding up numbers and comparing them to a single annual revenue snapshot.
Even under the most bullish cases for AI, the real $ required here looks iffy at best.
I think we all know that a big part of the angle here is to keep the hype going until there’s a liquidity event; folks will cash out, and at that point they won’t care what happens.
The fun part is to go back now and listen to Blake Lemoine interviews from summer 2022. That for me was the start of all this.
This is “if we get 1% of the market” logic.
Of course, you must also make a convincing case for getting to that 1%.
Inherently, no. In practice, it's riddled with biases deep enough [1] to make it an informal fallacy.
"The competition in a large market, such as CRM software, is very tough," and "there are power laws which mean that you have to rank surprisingly high to get 1% of a market" [2]. Strategically, it ignores the necessity of establishing a beachhead in a small market, where "a small software company" has "a much better chance of getting a decent sized chunk."
OpenAI has nothing resembling this ecosystem, and will never be nearly as valuable a place to buy ads. Replacing Google is probably the least realistic business plan for OpenAI - if that's what they're betting on, they're cooked.
Search engines were never a user-friendly app to begin with. You had to know how to search well to get comprehensive answers, and the average person is not that thorough. Google’s product is inferior, believe it or not. There will be nothing normal about seeing a list of search results pretty soon, so Google literally has a legacy app out in the wild as far as facts are concerned.
So imagine that: Google would have to remove Search as they know it (remove their core business) and stand up an app that looks the same as all the new apps.
People might like one AI persona more than others, which means people will seek out all types of new apps. LLMs are the worst thing that could have ever happened to Google, quite frankly.
I'd be more worried about OpenAI surviving. Aside from the iffy finances, much of their top talent seems to leave after falling out with Altman.
Google's biggest advancement in the last ~15 years has been to produce worse search results so that you spend more time engaging with Google and do more searches, so that Google can show more ads. Facebook is similar in that they feed you tons of rage-bait, engagement spam, and things you don't like, infused with nuggets of what you actually want to see about your friends / interests. Just like a slot machine, the point is that you don't always get what you want, so there's a compulsion to use it because MAYBE you will get lucky.
OpenAI's potential for mooning hinges on creating a fusion of information and engagement where they can sell some sort of advertisement or influence. The problem of course is that the information and engagement is pretty much coming in the most expensive form possible.
The idea that the LLM is going to erode actual products people find useful enough to pay for is unlikely to come true. In particular, people are specifically paying for software because of its deterministic behavior. The LLM is by its nature extremely nondeterministic. That puts it fully in the realm of social media, search engines, etc. If you want a repeatable and predictable result, the LLM isn't really the go-to product.
I don’t disagree with you entirely, but I’d argue the second level apps are harder to chase because they get so specialized.
Death of Google (as everyone knows Google today) is a tricky one. It seems impossible to believe at this exact moment. It can sit next to IBM in the long run, no shame at all, amazing run.
OpenAI is worth at least half as much as Google. I foresee Google becoming like IBM, and these new LLM companies being the new generation of tech companies.
The big question would be how much of this revenue is unjustifiably circular, and how much of it is extractable, but those are questions for when the growth slows. I'm certain every supplier has ways to back out of these commitments if the finances look shaky.
Is there evidence that their revenues are growing faster than their costs?
Very little data about expenses, but it looks like they may be growing a little slower (3-4x a year) than revenue. Which makes sense because inference and training get more efficient over time.
> We don't have evidence one way or the other
I don't see how both of these things can be true. How can we know something to be likely or unlikely if we have no evidence of how things are?
If we don't have any evidence they're moving towards profitability, how is it likely they will become profitable?
You wouldn't demand that a restaurant jack prices up or shut down in its first month of business after spending ~$1MM on a remodel to earn ~$20k in the first month. You would expect that the restaurant isn't going to remodel again for 5 years, and the amortized cost should be ~$16k/mo (or less).
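The amortization arithmetic behind that analogy, as a minimal sketch:

    remodel_cost = 1_000_000  # the ~$1MM remodel
    months = 5 * 12           # amortized over 5 years
    print(f"${remodel_cost / months:,.0f}/mo")  # -> $16,667/mo, the ~16k figure above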
> it's hard to directly figure out what is true CapEx vs. unsustainable burn.
Exactly, and yet you're so certain they'll achieve profitability. The cost of pickles could get cheaper, but if they're constantly spending more and more on the rest of the burger, and remodeling the building all the time to add yet another wing of seating that may or may not actually be needed, it doesn't really matter to their overall profitability, right?
I have got incredible value from ChatGPT up to this point but I have been using it less and less.
What I have mostly extracted from it is a giant list of books I need to read. A summary of the ideas of a book I haven't read is obviously not the same as reading the whole book.
Before all this there were so many areas I was curious about that ChatGPT gave me a nice surface level summary of. I now know much better what I want to focus on but I don't need more surface level summaries.
It can’t be the same hedge on both sides of the trade.
Why vol? They're just short rates, which is a silly way to say leveraged. If rates become volatile but halve, OpenAI does fine. If rates stabilise at 10%, OpenAI fails. There is no "duration hedging" going on, which for OpenAI would mean taking positions that profit when rates go up.
I have not invested in OpenAI.
But the truth is, right now the potential revenue is not achievable without a corresponding investment in energy generation.
Interesting rat race, which will lead to something. Let's see what it will be.
>OpenAI thought to be preparing for $1tn stock market float. ChatGPT developer is considering filing for an IPO by the second half of 2026...
The effects would be devastating, to say the least, the way I see it.
If the S&P 500 grew thanks to this AI bubble, it sure as hell will shrink due to the popping of this bubble too.
There is no free lunch. More precisely, I am worried about the retirement schemes people put their money into, etc.
Personally, I was saying a long time ago that AI feels like a bubble, that the S&P 500 might have some issues, and that people should diversify into international stocks or gold. I was met with criticism because "the S&P 500 is growing the fastest, so I am wasting money investing in gold." Yeah, that's because bubbles can also grow... and they also shrink... and they do both of these things fast.
Economic history strongly suggests this would be a bad assumption.
More pertinently, we have a long history of people buying into bubbles only for them to crash hard, no matter how often people tell them "past performance is not a guarantee of future growth" or whatever the legally mandated phrase is for the supply of investment opportunities to the public where you live.
Sometimes the bubbles do useful things before they burst, like the railways. Sometimes the response to the burst creates a bunch of social safety nets, sometimes it leads to wars, sometimes both (e.g. Great Depression).
But what if, maybe, it ain't so? Of course, lots of AI things are going to fail, and nobody is exactly sure of the future. But what if, after in-depth inspection, the overall thing is actually looking pretty good and OpenAI looks like a winner?
May be incorrect, but it's not writing down the answer first and working backwards.
> But what if, maybe, it ain't so?
https://www.youtube.com/watch?v=9z70BKwfSUA
Comedic take from last time, but the point at the conclusion remains. "Just this once, we think we might".
> Of course, lots of AI things are going to fail, and nobody is exactly sure of the future. But what if, after in-depth inspection, the overall thing is actually looking pretty good and OpenAI looks like a winner?
Much as I like what LLMs and VLMs can do, much as I think they can provide value to the tune of trillions of USD, I have no confidence that any of this would return to the shareholders. The big players are all in a Red Queen's race, moving as fast as they can just to stay at the same (relative) ranking for the SOTA models; at the same time, once those SOTA models are made, there are ways to compress them effectively with minimal losses of performance, and if you combine that with the current rate of phone hardware improvements it's plausible we'll get {state of the art for 2025} models running on-device sometime between 2027 and 2030, with no money going to any model provider.
it will grow even more with the next generation of models.
What if AI invents fusion power?
(Thanks for the downvotes I wanted to keep my karma at 69)
2. Outside of software, inventions have to be turned into physical things like power plants. That doesn’t happen overnight and is expensive.
3. The industry is already going through a power revolution in the form of battery + solar and it’s going to take a while for a new technology to climb the learning curve enough to be competitive.
4. What if AI gives us all a pony?
“Please don't comment about the voting on comments. It never does any good, and it makes boring reading.”
Also, IIUC the guys in The Big Short would've lost everything if the government had stepped in sooner, since the banks controlled the price of the CDSs and could've maintained the incorrect price if they'd had a bunch of extra cash.
Yeah. "Markets can remain irrational longer than you can remain solvent."
https://en.wikipedia.org/wiki/Michael_Burry had an investor panic and nearly lost everything. He was right, but he nearly got the timing wrong.
If you were actually the guys from The Big Short and you have strong conviction, you should short the market (literally, like the guys from The Big Short) and get really rich.
Money is the language they understand, so hit them where it hurts.
When you go long, you can still make money by being “sort of right” or “obliquely right” or “somewhat wrong but lucky”or by just collecting dividends if the market stays irrational long enough. If you short something you have to be exactly right (both about what will happen and precisely when) or your money will end up in the hands of the people you’re betting against. It’s not a symmetrical thing you can just switch back and forth on.
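A toy illustration of that asymmetry, with made-up numbers:

    entry = 100.0  # price at which you go long or short
    # A long's loss is capped at the stake; a short's loss grows without bound,
    # and margin calls impose a deadline that the long never faces.
    for price in (100, 150, 200, 300):
        print(f"price {price}: long P&L {price - entry:+.0f}, short P&L {entry - price:+.0f}")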
If no, and you thought it was a bubble, does that price of NVIDIA from 2 years ago (not from today) make sense to you now?
I was at WeWork around the time of its downfall. I have a lot of opinions about how that place was run, but I can assure you pre-pandemic they were buying up every office space because they were filling them with tenants. Not paying for offices was a result of tenants not paying due to the pandemic.
That’s the same as what happened when WeWork was buying up office space pre-pandemic and then using handwavy nonsense like “Community Adjusted EBITDA” as part of the smoke and mirrors to pretend like there was an actual business there.
The pandemic expedited the pain, but the business model was broken and folks called BS long before Covid hit.
They're going to sell ads at the moment people are looking to buy stuff. It's the single most viable business model we've ever seen.
Besides, how are ads on ChatGPT supposed to work? If some student is asking it to write their paper for them, is ChatGPT going to stop in the middle of it and go "Hey, you know what sounds good right now? A nice bowl of soup..." Although admittedly that would make for some hilarious proof of people using AI for things they shouldn't...
ChatGPT will also probably be selling ad infrastructure to inject ads, just like Google injects ads into search. They will probably pay out a little to websites that include the “ChatGPT” widget to integrate ChatGPT with a site that also has ads.
Right now the barriers to injecting ads into AI responses are technical.
For an advanced research engine, knowing it will reliably recommend only sponsored products makes it worthless; worse, it will be primed to advocate for sponsored products.
Then the whole thing becomes a scam engine, because check out what Facebook ads look like today.
Regardless of whether that’s true, it’s clearly still a huge business opportunity. And you point out Facebook ads are a scam, yet they bring in $164B/year and growing. Regardless of value judgement, there’s clearly a lot of money to chase.
Plus, like Google search, they have a ton of organic traffic. ChatGPT has replaced Google search as my starting point to investigate anything, and lots of that is related to things where I will eventually spend money.
Google/Facebook do that today, because the content they're showing is created pre-ad, and the ads have to be injected after the fact.
With AI, the content is being generated in the same place that the ads are being injected, which allows us to be much more subtle about it.
How much do you think a car company would pay to put special training weight on their marketing materials? I would guess big money.
"While we're on the topic of self-harm, did you know the ABC Co Truck has the highest safety rating?"
https://openai.com/index/introducing-chatgpt-atlas/
> Besides, how are ads on ChatGPT supposed to work?
"How do I do XYZ?" "Product ABC can do XYZ for you."
This would create a ton of hesitation to use it for product recommendations if I knew ChatGPT wasn't using its extensive input on products and reviews to come back with an objective answer for me.
I guess at this point would we even know the difference? Is it possible this is already happening?
Is it going to inject ads for indeed while a recruiter is using ChatGPT to summarize a stack of resumes?
If it only ever injects ads for specific requests, how profitable would that even be? I understand clients would want their product to be recommended, but if I only get the ad answer when prompting a certain way, can I, the user, avoid ads by asking questions a specific way?
I think the queries will fall into profitable (product recommendations) and non-profitable (writing an essay or code), just the way they do for Google. Probably the former will have a generous free tier and the latter will be largely paywalled. I don't know how they'll do that, but I imagine they'll find some way.
It's a mass consumer (software) product, they need new revenue avenues, and ads have a history of working well. Even Spotify, Netflix, Amazon Prime, ... companies that historically don't have the ad infrastructure of Google or Facebook have increasingly profitable ad tiers.
OpenAI may be in the same situation: committed to spending $1.4T while enjoying a good revenue year this year, but then One Bad Thing and poof.
Or, well, they stated that the TCO of the compute they have commitments for is $1.4T, which is a somewhat strange phrasing. I assume it's due to it being a mix of self-owned vs. rental compute, and what they mean is the TCO to OpenAI rather than the TCO to the owner of the compute.
I get that folks are now just engaged in “keeping up with the Joneses” FOMO behavior, but none of this is making any sense.
The financial impact if the whole AI space loses even 50% of its current "valuation" will be huge. The financial impact of the whole AI space continuing at its current velocity is... More of whatever is going on now?
> A central theme of the discussion was the staggering demand for computational power. Gerstner highlighted OpenAI’s reported commitment of $1.4 trillion for compute over the next five years, questioning how a company with reported revenues of $13 billion could manage such an outlay.
> Altman pushed back forcefully. “First of all, we’re doing well more revenue than that. Second of all, Brad, if you want to sell your shares, I’ll find you a buyer,” he quipped. He expressed profound confidence in the company’s trajectory. “We do plan for revenue to grow steeply. Revenue is growing steeply. We are taking a forward bet that it’s going to continue to grow.”
This seems to be just the tip of the iceberg; what about the rest?
I'd be happy if the industry/stock market proves me wrong, but I can't see this ending any other way than with a major crash that makes the dot-com bust seem like a minor blip.
We used to have lunch at the bar across the street and just about once or twice a week for several months, we'd walk in and there would be a table with about 15-20 people sitting around drinking and reminiscing about how they were going to change the world.
A lot of developers I know just completely left the industry and never came back.
If this crash exceeds that one? We're in for some seriously tough times.
It doesn't come off as schadenfreude to me as much as it does the emotional clarity of accepting the oncoming train and knowing there's nothing you could have done to stop it. This brand of "just along for the ride" nihilism seems pretty damn common now.
ChatGPT has 800 million weekly users but only 10 million are paying.
Lots of questions about whether this makes sense, and it's highly likely Amazon never gets $38B in cash from OpenAI out of this.
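For scale, the paid-conversion rate implied by those two user figures is tiny (simple arithmetic):

    weekly_users = 800e6  # reported weekly ChatGPT users
    paying_users = 10e6   # reported paying users
    print(f"{paying_users / weekly_users:.2%} of weekly users pay")  # -> 1.25%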
I remember when everyone was racing to produce "datacenter in a shipping container" solutions. I just laughed because apparently nobody bothered to check if you could actually plug it in anywhere.
In what context? This isn't fashion, being the 2nd mover has benefits which often outweigh the costs.
Recent analysis shows AWS is burning through Amazon’s free cash on AI buildouts, which is very concerning: if the bubble pops, Amazon is left holding a bag of invested capital not making returns.
Amazon is a bit late to the party on these headlines, and lots of unanswered questions about what’s really going on here.
OpenAI is doing the same with compute. They're going to have more compute than everyone else combined. It will give them the scale and warchest to drive everyone else out. Every AI company is going to end up being a wrapper around them. And OpenAI will slowly take that value too either via acquisition or cloning successful products.
OpenAI and Anthropic are signing large deals with Google and Amazon for compute resources, but ultimately it means that Google and Amazon will own a ton of compute. Is OpenAI paying Amazon's cap ex just so Amazon can invest and end up owning what OpenAI needs over the long term?
For those paying Google, are they giving Google the money Google needs to further invest in its TPUs, giving it a huge advantage?
Google is a viable competitor here.
Everyone else is missing part of the puzzle. They theoretically could compete but they're behind with no obvious way of catching up.
Amazon specifically is in a position similar to where they were with mobile. They put out a competing phone, but with no clear advantage, it flopped. They could put out their own LLM, but they're late. They'd have to put out a product that is better enough to overcome consumer inertia. They have no real edge or advantage over OpenAI/Google to make that happen.
Theoretically they could back a competitor like Anthropic but what's the point? They look like an also ran these days and ultimately who wins doesn't affect Amazon's core businesses.
Every image/video/text post on a Meta app is essentially subsidized by OAI/Gemini/Anthropic, as they are all losing money on inference. Meta is getting more engagement and ad sales through these subsidized genAI image posts.
Long term, they need to catch up, and training/inference costs need to drop enough that each genAI post costs less than the net profit on the ads, but they’re in a great position to bridge the gap.
The end of all of this is ad sales. Google and Meta are still the leaders of this. OpenAI needs a social engagement platform or it is only going to take a slice of Google.
Do you have any sources backing this? As in "more engagement and ad sales" relative to what they would get with no genai content
While I can see Anthropic or another lab leading on API usage, it is unlikely that Anthropic leads in terms of raw consumer usage, as Microsoft has the Office AI integration market locked down.
No, it’s Amazon that’s doing this. OpenAI is paying Amazon for the compute services, but it’s Amazon that’s building the capacity.
It's still all about the (yet to be collected) data and advancements in architecture, and OAI doesn't have anything substantial there.
A relatively localized, limited lived experience apparently conveys a lot that LLM input does not - there's an architecture problem (or a compute constraint).
No amount of reddit posts and H200s will result in a model that can cure cancer or drive high-throughput waste filtering or precision agriculture.
It's slow as balls as of late, though. So I use a lot of Sonnet 4.5, just because it doesn't involve all this waiting, even though I find Sonnet to be kinda lazy.
the race is for sure on: https://menlovc.com/perspective/2025-mid-year-llm-market-upd...
I started working in 1997 at the height of the dot com bubble. I thought it would go on forever but the second half of 2000 and 2001 was rough.
I know a lot of people designing AI accelerator chips. Everyone over 45 thinks we are in an AI bubble. It's the younger people that think growth is infinite.
I told them to diversify from their company stock but we'll see if they have listened after the bubble pops
They just didn’t like the chips is the most logical answer. Particularly given AWS has been doing everything they can to pump up interest, and this huge PR release doesn’t even mention it at all. That omission speaks volumes.
Anthropic is moving to Trainium [1], which will free up Nvidia GPUs and allow AWS to rent those GPUs to OpenAI.
[1] https://finance.yahoo.com/news/amazon-says-anthropic-will-us...
But that feels weird combined with this. You can buy OpenAI API access that is served off of AWS infrastructure, but you can't bill for it through AWS? (I mean, lots of companies work like that. But Microsoft is betting that a lot of people will move regular workloads to Azure so they can have centralized billing for inference and their other stuff?)
> Non-API products may be served on any cloud provider.
I am not sure if Bedrock counts. There are 2 OpenAI models already there: https://aws.amazon.com/blogs/aws/openai-open-weight-models-n...
https://www.tomshardware.com/tech-industry/artificial-intell...
There’s been some buzz around the official opening of the Grand Egyptian Museum, which I visited last month. That project cost 1.1 to 1.2B USD, double its original budget estimate, but the museum still looks fantastic, and it feels, tangibly, like it’s worth a billion.
In contrast, all the money spent on AI just feels like monopoly money. Where’s the monument to its success? We could’ve built flying cars or been back to the moon with this much money.
It's much less likely that I'd drive a flying car and there is 0 chance that I would be the one going to the moon if we spent the equivalent money on those things instead.
I currently pay 200 USD a month for AI, and my company pays about 1,200 USD for all employees to use it essentially unlimited - and I get AT LEAST 5x the return on value on that. I would happily multiply all those numbers by 5 and still pay it.
Domain knowledge, bug fixing, writing tests, fixing tests, spotting what’s incomplete, help visualising results, security review generation for human interpretation, writing boilerplate, and simpler refactors
It can’t do all of these things end to end itself, but with the right prompting and guidance holy smokes does it multiply my positive traits as a developer
> and I get AT LEAST 5x the return on value on that
You make $800 by paying OpenAI $200? Can you please explain how the value you put in returns 5x, and how I can start making $800 more a month?
> holy smokes does it multiply my positive traits as a developer
But it’s not you doing the work. And by your own admission, anyone can eventually figure it out. So if anything you’ve lost traits and handed them to the Llm. As an employee you’re less entrenched and more replaceable.
I estimate that the additional work I can do is worth that much. It doesn't matter whether "I do it" or "the LLM does it" - it's both of us, but I'm responsible for the code (I always execute it, test it, and take responsibility for it). That's just my estimate. Also, what a ridiculous phrasing: the intent of what I'm saying is "I would pay a lot more for this because I personally see the value in it" - that's a subjective judgement I'm making. I have no idea who you are, so why would you assume that's a transferable, objective measure that could simply be applied to you? AI is a multiplier on the human that uses it, and the quality of the output is hugely dependent on the communication skill of the human. You using AI and me using AI will produce different results with 100% certainty; one will be better, it doesn't matter whose. I'm saying they will not be equal.
>But it’s not you doing the work. And by your own admission, anyone can eventually figure it out. So if anything you’ve lost traits and handed them to the Llm. As an employee you’re less entrenched and more replaceable.
So what? I'm results driven - the important thing is that the task gets done - it's not "ME" doing it OR the "LLM" doing it, it's Me AND the LLM. I'm still responsible if there's bugs in it, and I check it and make sure I understand it.
>As an employee you’re less entrenched and more replaceable.
I hate this attitude; it is the attitude of a very poor employee. It leads to gatekeeping, knowledge hoarding, and lots of other petty and defensive behaviour, and it's what people think when they view the world from a point of scarcity. I argue the other way: the additional productivity and tasks that I get done with the assistance of the LLMs make me a more valuable employee, so the business is incentivised to keep me around. There's always more to do; it's just that we are now using chainsaws and not axes.
I disagree. I brought all this up because it seems you are confusing perceived, marketed/advertised value with actual value. Again, you did not become 5 times more valuable in reality to your employer, nor did you obtain 5x more money (literal value). You're comparing $200 of "value", which is 200 dollars, to... time savings? Unmeasurable skill gains? This is the unclear part.
> I hate this attitude; it is the attitude of a very poor employee. It leads to gatekeeping, knowledge hoarding, and lots of other petty and defensive behaviour,
You may hate that attitude, but those people will still be employed long after the boss has sacked you for not taking enough responsibility for your LLM's mistakes. This is because entrenching yourself is really the way it's always worked, and the people who entrenched themselves didn't do it by relying on a tool to help them do their job. This is the world, and sadly LLMs don't do anything to unentrench the people making money.
All I am saying is: enjoy your honeymoon period with your LLM. If that means creating apples-and-oranges definitions of "value" and then comparing them directly as benefits, then more power to you.
But I agree that the numbers are increasingly beyond reasonable comprehension
Lot of feeling going on in this comment, but that's not really how money works.
This bubble is one for the history books!
The existence proof is, well, every financial crisis ever. Start with the housing bubble and ask, why did huge banks whose entire job was financial modeling get caught up in it? What makes companies today immune to those same types of decision-making errors?
All financial analysis misses the point. They just need to buy enough time and compute to out-last the competition.
All bets are off if China finds a way of cutting the processing power required by 50% or more.
If only killing off grok and Gemini was so easy...
I do worry what the other side of this looks like when the circular feedback loop driving hype up eventually reverses and drives things down with amplifying effect.
corporate would like you to find the difference between these two photos
Here, the clouds have pulled a trick to inflate their revenues with their own cash flows, and have not yet been punished for it by shareholders - except Meta, which is getting asked some difficult questions.
Not financial advice, obviously, but that's my personal outlook. I've said it before: Alphabet is probably the safest play long term as they haven't been infected by any NVIDIA or OpenAI deals (yet)
The other side to that coin is monetization. Google is dominant there as well. OpenAI can't yield that space to Google because it's how the value is extracted from the consumer.