Is it? This whole piece just reads like mega funds and giga corps throwing ridiculous cash at pay-to-win. Nothing new there.
We can’t train more people? I didn’t know Universities were suddenly producing waaaay less talent and that intelligence fell off a cliff.
Things have gone parabolic! It’s giga mega VC time!! Adios early stage, we’re doing $200M Series Seed pre revenue! Mission aligned! Giga power law!
This is just M2 expansion and wealth concentration. Plus a total disregard for 99% of employees. The 1000000x engineer can just do everyone else’s job and the gigachad VCs will back them from seed to exit (who even exits anymore, just hyper scale your way to Google a la Windsurf)!
I just want to point out that there's no scientific law that says those two must move together.
A government very often needs to print money, and it's important to keep in mind that there's no physical requirement that this money must immediately go to rich people. A government can decide to send it to poor people just as easily as to the rich. All the laws forbidding that are of the legal kind, not the physical kind.
The icing on the cake is when all their competitors say "so will we".
How have we gotten to a point in just a decade where multiple companies are dropping annual numbers that are in the realm of "the market cap of the world's biggest companies" on these things?
Have we solved all (any of) the other problems out there already?
It didn't come from nowhere, or from Silicon Valley's exceptionalism. It came from changing the value of money, labour and resources. It came from everyday people, deficits and borrowing from the next 10 generations of children.
The public discourse is weirdly unable to address burning problems that were formulated almost 150 years ago. Much like the climate change topic, which originated in the 1970s.
You can swap "LLMs" with "pointless wars over books or dirt" if you want - same same, nothing is new.
It's kind of worse than that: this is actually exacerbating the climate crisis as mega tech corps like Microsoft, Google and Meta are scrambling to secure more energy to power their power-hungry LLMs.
The same exact premise was proclaimed about the Internet circa 1999-2003 (boom and bust). Then the entire global economy and nearly all electronic communication was rebuilt on top of the Internet.
For the coming century AI is more important than the Internet.
The parent mentioned inequality and energy transition. But LLMs seem to be about CRUD apps and influence campaigns.
There's more to value than just quantity. Quantity of what?
Money printing does not cause inflation evenly, especially if the new money is not distributed evenly.
Jan 2010: 174b
Jan 2020: 1400b (23% growth per year)
Today: 3200b (18% growth per year)
The S&P as a whole grew about 12% a year from 2010 to 2020, and 12.5% a year from 2020 to today
Meanwhile the median wage grew 3% a year from 2010-2019, and 7% a year from 2019-2023
Seems whatever happened in 2020 was good for workers, in that they aren't falling behind owners as much as they were.
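For anyone who wants to check the compounding, here's a minimal sketch of the annualized-growth arithmetic behind those figures (it assumes "today" is roughly five years after Jan 2020; the numbers are just the ones quoted above, whatever series they come from):

```python
def cagr(start, end, years):
    """Compound annual growth rate: (end/start)^(1/years) - 1."""
    return (end / start) ** (1 / years) - 1

print(f"2010-2020: {cagr(174, 1400, 10):.0%} per year")   # ~23%
print(f"2020-today: {cagr(1400, 3200, 5):.0%} per year")  # ~18%, assuming ~5 years elapsed
```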
However we could also argue that most things in human society are less moral than moral (e.g. cars, supermarkets, etc).
But we can also argue that dropping some hundreds of millions in VC capital is less immoral than other activities, such as holding that money in liquid assets.
* Glares at Apple cosplaying Smaug *
I don't think it's as simple as calling them immoral. Rather, the immorality comes from them being poorly regulated. With regulated term limits on patents and copyright we create a world where an artist can create licensed product and protect themselves during the course of their career, and people are then able to riff on their works after some decades or after they pass on.
I think if behavior needs to be regulated by government in order to be moral, then it's immoral behavior by default
The regulation doesn't make it moral, the regulation only limits the damage by limiting how immoral you're allowed to be
Regulation is creating rules for businesses to run within. This goes back to rule of law. You can't tell a group of children to "behave" and walk away and expect good results and then call the children "bad" when they fail to behave.
Rather, you must give them systems to understand, to channel their energy, productively, in a way that matches the desires of the parent (government) and their strategies. Then you have to meaningfully punish those who intentionally break the rules in order to give those behaving the knowledge that they have chosen the good path and they'll be rewarded for it.
Free markets are not about "morality"/"immorality"; they're about harnessing an existing energy to make a self-sustaining system. A system a state is less good at, or less interested in, keeping going, or is unable to act quickly enough to move in on. But part of creating that system is putting in guard rails to prevent the worst sort of crashes.
Patents and copyrights don't cause people to create things
They prevent people from stealing things that other people created
The immoral behaviour being regulated is the IP theft not the IP creation
I have a hard time arguing that it's a net positive.
Without copyright it was hard to assemble high quality educational books/manuals, because they take a lot of effort with relatively little reward/return. In fact the first 'modern' copyright act, the Statute of Anne in 1710, was titled "An Act for the Encouragement of Learning".
Without copyright it is not worth it for authors to spend nights/weekends fleshing out plot ideas into complete sharable works, so you end up with less and lower quality literature, as no one can be a professional author. Which has better quality on average, published books or self-published? Self-published works tend to be the 'passion projects' you would still have without copyright; published books tend to be what gets created when authors are compensated for their efforts.

Society can't lose from copyright, because without it the works would never have existed. If I say 'I'll bake a cake if you will buy a piece' and I bake a cake and sell a piece, society didn't 'lose'. If I don't bake a cake because no one would buy a piece, then society was a little sadder, a little plainer that day. There is only upside, there is no downside. Anyone who would release if copyright didn't exist is still free to waive their copyright protection. So having it is the best of both worlds: those who want to release just to release can, and those who want to try and create something that can be sold can.
Without copyright there are no big budget movies, only passion projects, because no one is injecting millions when the work will just be copied rather than sold/screened/rented.
Without copyright the world has less joy, less discussion, less contemplation, less entertainment, less education. Without patents the world has less productivity, less safety, worse health, less food, worse/much less clothing/housing, less free time. The systems in their current forms have been abused and are unfit for the original purpose but when kept to the original purpose with reasonable protection periods they are a HUGE net plus for society.
In the new world that's incentive for enough people to create. Let knowledge reign free, bellowing through the lands.
Copyright is bad like inheritance is bad. Arguing about good and bad industrialists is missing the point.
How is inheritance bad? Imo, estate taxes are more immoral. Why should the state be allowed a cut of my private assets? Gift taxes are also immoral. Why should I have to pay taxes for giving away assets?
The obvious issue is that if you don't tax gifts it becomes far too easy for people to dodge taxes (or at least much more convoluted to enforce payment).
Inheritance is much more complicated and controversial. There's an argument to be made that it results in social ills if left unchecked, an argument that estate taxes fail rather spectacularly to address those ills, and an argument that the ills tend to be self righting given how easy it is to lose money. And probably several others.
A world without private property leads to a world of pure lawlessness. Sure... We could do that, but it would quickly devolve into forever warfare where only might wins, and even that for only fleetingly small timeslices.
History's progression has proven that exploited workers still prefer to exist in that system vs one of continual peril.
Let's consider the most basic form of ownership: that over one's body. By your logic, my life is a speck on the eternal timeline, so why make it a crime to harm or murder me physically?
To prevent family dynasties from building more and more economic power over time and threatening the state in the future due to the forces of compound interest. To be fair, most family dynasties don't do this, but others can wreak exceptional havoc just by wanting to, due to the generational power they've amassed. The damage they can do is further accelerated by their lacking understanding of how people without generational wealth live.
We already see this happening in our society as most media organisations are run by billionaires or multi-millionaires. News organisations are run less and less by journalists or normal people and the headlines are set more and more by people with very keen and niche vested interests.
This specific issue is being played out in real time in the Murdoch succession, as he attempts to leave the propaganda firm in the hands of his most ideologically similar successor. Yet the majority of his children see the world differently from their father and are challenging this. On one side we have natural break-up and change occurring due to generational shifts, and on the other the strong desire of the ancestor wanting their legacy to remain unspoilt after their death.
It really depends a lot on how those liquid assets are deployed.
I agree that Apple should have probably done something with their cash hoard like maybe buying or bolstering Intel so that they could have a domestic supply of chips, but apparently Apple has decided that there's just not much else to do with that money right now that would give them a better return? We might not agree with that assessment, but it's hard to call it immoral.
For some things it’s even a worldwide consensus, e.g. any groups with a desire to acquire large amounts of plutonium (who don’t belong to a preapproved list).
There’s even a strong consistent push to make it ever more inescapable, comprehensive, resistant to any possible deviation, etc…
Just writing your opinion down doesn’t seem relevant to real life government organization.
I don't see how there could be "school teachers" above the majority of congressmen or regulatory decision makers. The very existence of pork-filled bills thousands of pages long and byzantine regulations suggests that couldn't possibly be true.
e.g. Plenty of countries have huge bills that no one fully reads and huge numbers of regulations.
People are also sick and tired of rules in the AppStore. Or the fact that when their Apple/Google/whatever account is unilaterally blocked they have no recourse. At least with a government there are some checks and balances.
Yes, some governments are more trustworthy than others. Doesn't mean the concept is bad.
Repeating your opinion isn’t going to lead to any productive discussion.
Of course that is a very simplified description. In practice, most societies promote a balance between positive and negative freedom, recognize some limits on the government's ability to use force, recognize some degree to which people can choose to act in ways that don't promote positive freedom/prosperity, etc.
It will take the power away from the workers, such that there will be no power left for people to make demands.
We can hope it’ll be positive, but we aren’t even involved in its creation and the incentives are there to ensure it isn’t.
When expertise is commoditized, it becomes cheap; that reduces payroll and operational costs - which reduces the value of VC investment and thus the power of pre-existing wealth.
If AI means I can compete with fewer resources, then that's an equalizing dynamic isn't it?
Yes - and those without are compelled to trade their labor for assets.
My point is that the assets themselves mean less when the average person can use AI to design anything - that makes the costs of production go down.
In a world where production is cheap, the money required to produce has relatively less value.
I’m not sure which assets you think that devalues; it certainly increases the value of the assets needed to run AI, and also of the assets needed to realize the things that people can design with AI.
> In a world where production is cheap, the money required to produce has relatively less value.
In a world where your labor isn't required for production, the assets that are required for production have a much greater value relative to your labor than they do in one in which your labor is required to produce something.
“Cheap” is only a thing relative to some other thing.
Currently VC has a great deal of power because up-front investment is required to hire staff and other expenses until the startup can become cash-flow positive. When a lone individual can start a venture using AI then the payroll costs go down. The investment requirements go down.
Yes, automation means that people with assets don't have to pay other people for their labor.
But it also means that people starting new ventures have less need of significant up-front capital.
However the technology that is expected to reduce labor requirements (and thus expenses) has an uncertain endpoint. It seems plausible that at some point a threshold could be crossed beyond which the human labor that you are able to add on top becomes essentially irrelevant. It is this second occurrence, or rather how society might react to it, that should be cause for at least some concern.
but they'll start happening because of the new incentive.
“Immoral”, “should not exist”, “late-stage capitalism” are tribal affiliation chants intended to mark one’s identity.
E.g. I go to the Emirates and sing "We're Arrrrrsenal! We're by far the greatest team the world has ever seen". And the point is to signal that I'm an Arsenal fan, a gooner. If someone were to ask "why are you the greatest team?" I wouldn't be able to answer except with another chant (just like these users), because I'm just chanting for my team. The words aren't really meaningful on their own.
If "their" political views have the same value as ones affiliation to a sports team, but your political views are well researched and valid, could you be overlooking something? Edit0: and more importantly, what are they to you? Do they not have the same mental faculties as you? How they able to live and thrive while in total abhorrence to natural law? Or are they all naive children who don't know better?
And yes, in any group you'll find loud idiots. Those are in your group too; you just ignore them and focus on the other group's idiots, while disregarding the valid views behind them.
Especially in the age of algorithmic social media, it's hard to parse what's actually being said.
Glancing over the last few decades of tax returns it looks like they are claiming a lot of tax credits for R&D, so much so that if someone had a closer look they might find some Fraud, Waste or Abuse.
If the company does get a return, then the R&D wasn't wasteful.
And regardless, the tax code doesn't and shouldn't (IMO) differentiate there, to encourage companies to take risks.
Everything else is just executives doing a bit of a dance for their investors, a la "We won't need employees anymore!"
Sorry, I'm pessimistic as recent experience is one of hyper concentration of ideas, capital, talent, everything.
When a company makes a deal in exchange for shares or something, those shares are being used as money and must be included in any currency-neutral calculation of the money supply. However, most shares don't flow like money. You also have cryptos, which flow more than shares but less than government bonds and cash. It could be that the total of all money has expanded, even as the US dollar specifically stabilizes and slightly contracts.
University enrolment is actually set to sharply decline. It's called the Demographic Cliff: https://www.highereddive.com/news/demographic-cliff-colleges...
Just for your information.
> We can’t train more people?
Of course people are being trained at Universities. Outside of The Matrix, it takes a few years for that to complete.
On balance I'm sure that was very useful progress, but millions of people also died in the resulting wars.
But, I wish people would shut up about the printing press in these discussions already. AI disrupting literally everyone’s job at the same time is not the same as a printing technology disrupting the very niche profession of scribe. Or electric lamps putting some lamp lighters out of work.
I think that was good, but not everyone agrees.
I'm curious: if you're one of these AI engineers getting $100m, do you quibble over the health insurance? I mean, at that point you can fully fund any operation you need and whatever long-term care you need for 50 years, easily.
Another complication is academia simply does not have the resources to remotely compete with the likes of Google, OpenAI, Anthropic, xAI, etc.
> This is just M2 expansion and wealth concentration
I actually think "throwing ridiculous cash" _reduces_ the wealth concentration, particularly if a chunk of it is developer talent bidding war. This is money that had been concentrated, being distributed. These over-paid developers pay for goods or services from other people (and pay income taxes!). Money spent on datacenters also ends up paying people building and maintaining the data-centers, people working in the chip and server component factories, people developing those chips, etc etc. Perhaps a big chunk ends up with Jensen Huang and investors in NVidia, but still, much is spent on the rest of the economy along the way.
I don't feel bad about rich companies and people blowing their money on expensive stuff; that's distributing the wealth. Be more worried about wealthy companies/people who are very efficient with their spending ...
Very few developers are benefiting from this "talent bidding war". Many more developers are being let go as companies decide to plow more of their money into GPUs instead of paying developers.
Any power center outside of that decentralizes power.
If there is something subjective and you cannot find a critique of it, it's usually a superpower to assume the opposite is true, barring obvious exceptions.
The normies may have much less power, but it's never zero. Wealth isn't the only form of power. There's law/politics and there's military power. Whenever a group oversteps, you get the Magna Carta, you get secularism, you get civil wars and your Second Amendment. This is why the French Revolution and The Civil War(s) are in the textbooks.
When Venezuela's economy collapsed, a South American friend said that they deserved it. Every other South American country fought corruption and died for independence.
We never want violence, but it is there as an option. Laws and economic policies are there to make sure the violence is never the best option. Non-democratic capitalism doesn't work because it puts the law and the wealth in the hands of a few and leaves force as the only option. The world didn't suddenly convert into communism because Marx and Lenin were handsome demon lords; they converted because capitalism pushed them into a corner.
When you have this decline, it's because a society had weak and corrupt lawyers, statesmen, and economists. These may not be your normies, but they're pretty close to the middle class.
But you don't get to praise capitalism for the cheap air conditioning and then criticize it when low costs push production overseas. You can't negotiate for the high salaries then blame greedy CEOs when you get replaced by someone cheaper. You don't get to charge "as high as possible" and then wonder why your doctor is charging your life savings to save your life. The normies built a culture around winning and lopsided power, then are shocked when power is used against them.
Democracy is the worst form of Government except all those other forms that have been tried from time to time.
~Winston Churchill
We'd love to be able to have reasoned discussions about economics and the pros and cons of wealth concentration vs redistribution here. But this is not the way to do it, and the comment led to an entirely predictable flamewar. Please don't do this on HN, and please make an effort to observe the guidelines, as you've been asked to do before.
Are you going to be the one to tell the teachers' or police officers' union they have to divest their pension and buy fiat currency? No more stocks allowed!
- social-democratic countries are the norm in Europe, namely Norway, Denmark and Sweden
- Ordoliberalism: Germany, Switzerland
- cooperative economics: Japan, Spain
- market socialism: China, Hungary
- Parecon: Brazil, Argentina
- Ubuntu: South Africa
- Anarcho-syndicalism, the Third Way, Islamic economics…
In this case, communism's obsession with talking about Capitalism, a distributed process, as a proper noun, as if it were a monolithic discrete object with clear intentions and something which can be 'abolished', with no idea as to what the particulars would entail.
> Blitzhires are another form of an acquisition.. not everybody may be thrilled of the outcome.. employees left behind may feel betrayed and unappreciated.. investors may feel founders may have broken a social contract. But, for a Blitzhire to work, usually everybody needs to work together and align. The driver behind these deals is speed. Maybe concerns over regulatory scrutiny are part of it, but more importantly speed. Not going through the [Hart-Scott-Rodino Antitrust Act] HSR process at all may be worth the enormous complexity and inefficiency of foregoing a traditional acquisition path.
From comment on OP:
> In 2023–2024, our industry witnessed massive waves of layoffs, often justified as “It’s just business, nothing personal.” These layoffs were carried out by the same companies now aggressively competing for AI talent. I would argue that the transactional nature of employer-employee relationships wasn’t primarily driven by a talent shortage or human greed. Rather, those factors only reinforced the damage caused by the companies’ own culture-destroying actions a few years earlier.
2014, https://arstechnica.com/tech-policy/2014/06/should-tech-work...
> A group of big tech companies, including Apple, Google, Adobe, and Intel, recently settled a lawsuit over their "no poach" agreement for $324 million. The CEOs of those companies had agreed not to do "cold call" recruiting of each others' engineers until they were busted by the Department of Justice, which saw the deal as an antitrust violation. The government action was followed up by a class-action lawsuit from the affected workers, who claimed the deal suppressed their wages.
What in the white supremacy? Not on my HN.
Flagged, emailed to mods, named and shamed on my bio.
In order to operate on a scale like that, you obviously need to have worked somewhere that has that magnitude of users. That makes the pool of candidates quite small.
It’s like building a spaceship. Do you hire the people that have only worked on simulations, or do you try to hire the people that have actually been part of building the most advanced spaceship to date? Given that you’re also in a race against other competitors.
These AI folks are good, but not orders of magnitude better than engineers and researchers already working in tech or academia. Lots of folks are capable of building an AI system, the reason they haven’t is that they haven’t been in a situation where they have the time/money/freedom to do it.
These mega offers aren’t about “talent”, they are about “experience”
Well, yes.
Talent doesn't exist in the form people would like to believe, and to whatever degree it does, experience is the most reliable proxy for identifying it.
But the truth is it's not "just" about experience. Most of these people have been pushing the limits of their fields for their entire careers. It's not like having "the time/money/freedom" to do it is randomly distributed even among talented, smart people. The people in this talent pool were all likely aggressive researchers in a very specialized field, then likely fought hard to get on elite teams working close to the metal on these massive-scale inference problems, and they continued to follow this path until they got where they are.
And the truth is, if you're at least "good" in this space, you do get your piece of the pie at the appropriate scale. I'm still making regular dev income, but my last round of job searching (just a few months ago) was insane. I had to quit my job early because I couldn't manage all the teams I was talking to. I've been through some hot tech markets over my career, but never anything like this. Meanwhile many of my non-AI peers are really struggling to find new roles.
So there's no reason to cast shade on the genuine talent of these people (though I think we all feel that frustration from time to time).
I'm sorry, but what's the specific distinction? When the Lakers pay Lebron $54MM per season, is that for his innate talent, or is it for the 20k hours he's spent perfecting his game?
This is a lot of hand-wringing over nothing. We've seen people paid outrageous sums of money for throwing a ball for DECADES without any complaints, but the moment a filthy computer nerd is paid the same money to build models, it's pitchforks time.
The only thing wrong with the current compensation kerfuffle is that it happened so late. People like Einstein, Von Neumann, Maxwell, Borlaug, etc should have been compensated like sportsball stars, as well.
That's what they want you to believe, and in some cases that's true. Many though are just grifters. They were able to:
1. Gain access to the right people at the right levels to have the right conversations.
2. Build on that access to gain influence focused on AI hype
3. Turn that access/influence into income
That doesn't necessarily imply /anything/ about their actual delivery performance or technical prowess.
The author brings this up yet fails to realize that the behavior of current staff shows we have hit, or have passed, peak AI.
Moore's Law is dead, and it isn't going to come through and make AI any more affordable. Look at the latest GPUs: IPC is flat. And no one is charging enough to pay for the running costs (bandwidth, power) of the compute being used, never mind turning NVIDIA into a 4 trillion dollar company.
> Meta’s multi-hundred million dollar comp offers and Google’s multi-billion dollar Character AI and Windsurf deals signal that we are in a crazy AI talent bubble.
All this signals is that those in the know have chosen to take their payday. They don't see themselves building another Google-scale product; they don't see themselves delivering on sama's vision. They KNOW that they are never going to be the 1% company, the unicorn. It's a stark admission that there is NO breakout.
The math isn't there in the products we are building today: to borrow a Bay Area quote, there is no there there. And you can't spend your way to market capture / a moat, like every VC gold rush of the past.
Do I think AI/ML is dead? No, but I don't think that innovation is going to come out of the big players, or the dominant markets. It's going to take a bust, cheap and accessible compute (a fire sale on used processing), and a new generation of kids to come in hungry and willing to throw away a few years on a big idea. Then you might see interesting tools and scaling down (to run locally).
The first team to deliver a model that can run on a GPU alongside a game, so that there is never an "I took an arrow to the knee" meme again is going to make a LOT of money.
This feels like a fundamental misunderstanding of how video game dialogue writing works. It's actually important that a player understands when the mission-critical dialogue is complete. While the specifics of a line becoming a meme may seem undesirable, it's far better that a player hears a line they know means "I have nothing to say" 100 times than generating AI slop every time the player passes a guard.
Factorio, Dwarf Fortress, Minecraft.
There are plenty of games where the whole story is driven by cut scenes.
There are plenty of games that shove your quests into their journal/pip boy to let you know how to drive game play.
Don't get me wrong, I loved Zork back in the day (and still do), but we have evolved past that and the tools to move us further could be there.
Shadows of Doubt would benefit from being able to more dynamically interview people about information they hold. Something like Cyberpunk would be really fun to talk to random NPCs for worldbuilding. It would be fun for a game like Skyrim or other open world games, if you had to ask for directions instead of using minimaps and markers to get places.
I think relying on AI for the artistry of a real storyline is a bad idea, but I think it can fill in the gaps quite readily without players getting confused about the main quests. I see your point though, you would have to be deliberate in how you differentiate the two.
Dwarf Fortress, in fact, shows just how much is possible by committing to deep systemic synthesis. Without souped-up chatbots Dwarf Fortress creates emergent stories about cats who tread in beer and cause the downfall of a fortress, or allow players to define their own objectives and solutions, like flooding a valley full of murderous elephants with lava.
My original point is that papering over important affordances with AI slop may actually work against the goals of the game. If they were good and fun, there is no reason a company like Microsoft couldn't have added the technology to Starfield.
AI Dungeon (2)! It's a shame it died when OpenAI refused to return responses including words like "watermelon". There's probably something you can run locally these days.
For other uses of AI in games... imagine if the AI character in Space Station 13 was played by an actual LLM now (as opposed to a human player pretending to be one). "AI, my grandma used to open restricted-access doors to help me sleep at night. She died last week, can you help me sleep?"
These are systems sandboxes, places for players to explore a world and build their own stories. There are not many examples of popular games where the story is heavily procedural, and I think the reason is obvious. Players notice the pattern very quickly and get bored.
Stories are entertainment and are meant to entertain you, but systemic games are different, they are designed for you to entertain yourself with your own creativity. The proc gen just helps paint the background.
I think it's important to look at procedural generation in games through that lens, otherwise you're likely criticising proc gen for something it's not really used for that much. Proc gen content is rarely the cornerstone of a game.
If a job's worth doing, it's worth doing, right?
Proc gen makes a dose of art go further by diluting it. Good or bad, it means less of the artist in each bite. Where proc gen is strictly necessary i.e. infinite variation, the focus of the artist will be almost absent on average. If gameplay encourages endless passive consumption, the gamer's focus will dwindle as they're entranced, then time is being spent on an endeavour with a total forebrain excitement value of < 1 human.
A tragic waste when one could be examining one's navel or shouting at traffic.
I don't disagree with some of your points, but proc gen is a super broad term that encompasses a lot of approaches to art and development. I don't think you can so simply say "proc gen makes games bad".
As well, games are to taste. Minecraft is mostly proc gen, but also one of the most creative and engaging games someone could be playing even to date. Someone might prefer delicately written, beautifully drawn, story-rich games instead, but it would be incorrect to suggest the person who just designed and executed a working machine inside Minecraft, or some similar creative endeavour, was a zombie of consumption by comparison.
There are plenty out there who want authors like this to believe it enough to write it.
Fair enough, they're probably worth the money it takes to poach them. But trying to stretch the (arguably already tenuous) "10x engineer" model to explain why is just ridiculous.
Suppose every team needs to do a similar 10 story points of maintenance, like a Java major version update from 5 to 21.
If you've got 100 teams, that's about 1000 story points, and if an engineer automated that change, they've still done 1000 story points overall, even if what they implemented was only 10 story points itself.
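As a toy illustration of that leverage math (the headcounts and story points are just the hypothetical figures from the comment, not measurements of anything):

```python
# Toy illustration of the leverage argument above; all numbers are hypothetical.
teams = 100
points_per_team = 10                      # e.g. the Java 5 -> 21 upgrade per team
automation_effort = 10                    # what building the automation itself costs
work_replaced = teams * points_per_team   # 1000 story points across the org

print(f"leverage: {work_replaced / automation_effort:.0f}x")  # -> 100x
```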
JIRA has a notion of business value points, and you could make up similar metrics in other project planning tools. The problem would then be how to estimate the value of implementing 0.01% of the technology of a product that doesn't sell as a standalone feature. If you can accurately do that, you might be the 100x employee already.
Like writing a code generator that automates tedious work.
"Local Model Augmentation", a sort of standardized local MCP that serves as a layer between a model and a traditional app like a game client. Neat :3
If I had billions to throw around, instead of siphoning large amounts of it to a relatively small number of people, I would instead attempt to incubate new ideas across a very large base of generally smart people across interdisciplinary backgrounds. Give anyone who shows genuine interest some amount of compute resources to test their ideas in exchange for X% of the payoff should their approach lead to some step function improvement in capability. The current “AI talent war” is very different than sports, because unlike a star tennis player, it’s not clear at all whose novel approach to machine learning is ultimately going to pay off the most.
The optimal strategy is to lean on the status quo but also cast your net far and wide. There's a balance of exploration/exploitation, but exploitation feels much safer. Weirdly, you need to be risky and go against the grain if you want to play it safe.
With the money these companies are throwing around we should be able to have a renaissance of AI innovations. But we seem to just want to railroad things. Might as well throw the money down the drain.
I had an interesting conversation with an investor around the power vs knowledge dynamic in the VC world and after a few hours we'd basically reinvented higher education with reverse tuition. Defining a general interest or loose problem space and then throwing money over a wall to individuals excited about exploring the area seems wasteful until you look at the scale of failed projects.
> but since the EV is so high for first mover advantage for AGI
Is it? Why? I can't see why this should be the case. Where exactly do you think the "moat" for AGI will come from?
Decent odds we see some pretenders make that announcement before the real deal. A company with the real deal would probably make bank, but I don't pretend to know when that will come or who that might be.
It's possible that they won't be the early bird to catch the AGI worm, but sometimes the investment squeeze required to be the first mover isn't rewarded in market juice, especially if the second mouse can use your AGI to create their own AGI cheese.
SV has already thrown it down the memory hole but for a good three months, until everyone else copied their paper, the SOTA reasoning model available to the public was open source, Communist[0] and came out of a nearly defunct Chinese hedge fund.
[0] If you don't believe the communist part just ask it about the American economy.
It would be like if you were looking to train the next tennis star who had the ability to basically upend the entire game as we know it. Maybe you saw a few people with a unique way of playing who were dominating by an order of magnitude. You would DEF see teams and coaches holding open tryouts and trying very unconventional things for anyone they could find who had promise.
For the record, I think "AI" is not hype and is changing the way things are done permanently, but it's yet to be seen whether all these spent billions can actually meet the expected return (AGI). It's hard to separate out the true innovations from the obvious grift/money grab also going on.
I feel like this one line captures the elephant in the room that the author is trying hard to convince himself isn't there...
So, no more bitter lesson?
The argument goes: if AI is going to destroy humanity, even if that is a 0.001% chance, we should all totally re-wire society to prevent that from happening, because the _potential_ risk is so huge.
Same goes with these AI companies. What they are shooting for, is to replace white collar workers completely. Every single white collar employee, with their expensive MacBooks, great healthcare and PTO, and lax 9-5 schedule, is to be eliminated completely. IF this is to happen, even if it's a 0.001% chance, we should totally re-wire capital markets, because the _potential reward_ is so huge.
And indeed, this idea is so strongly held (especially in silicon valley) that we see these insanely frothy valuations and billion dollar deals in what should be a down market (tremendous macro uncertainty, high interest rates, etc).
AI doomerism seemed to lack finish, though. Anyone remember Eliezer Yudkowsky? Haven't heard from him in a while.
He's quite active on Twitter.
Multi-year contracts north of $500m. Perhaps this is the direction we’re headed in.. there will be many that won’t make it to the majors.
Because before ChatGPT, nobody on a board of directors saw the possibility. Now, it's all they can think about.
The signing bonuses are probably more than enough for regular people to retire on, but these researchers and execs being poached aren't exactly average Joes making $50k/year prior to being poached.
If you talked to any of these folks worth billions, they aren't particularly smart, and their ideas aren't really interesting. It took us a few years to go from GPT-3 to DeepSeek V3, and then another few years to go from Sonnet 4 to Kimi K2, both being open source models on way lower funding. This hints at a deeper problem than what "hypercapitalism" suggests. In fact, it suggests that capital distribution in its current state is highly inefficient and we are simply funding the wrong people.
Smart AI talent aren't going to be out there constantly trying to get funding or the best deals. They would want to work. Capital has gotten too used to not doing the groundwork to seek them out. Capital needs to be more tech savvy.
VCs and corporate development teams don't actually understand the technology deeply enough to identify who's doing the important work.
At that point, are you the 18-year-old phenom who got the big payday and sort of went downhill from there?
I imagine the biggest winners will be the ones who were doubted, not believed in, and had to fight to build real, profitable companies that become the next trillion-dollar companies.
Not that it would be bad to be Mark Cuban, but Mark Cuban is not Jeff Bezos.
And for posterity, I respect Mark Cuban. It's just that his exit came at a time when he was fortunate, as he got his money without having to go all the way through to the end.
WTF is this guy hallucinating about? None of that ever existed.
Stopped taking this thing seriously with blurbs like the above. If anyone thinks that Silicon Valley was somehow previously ruled by some magical altruism that has now been forsaken, they're in a little cloud of their own. The motives have always been more or less the same and even many of the people too, and there's no mysterious corrupting force that made any of that different then or now.
More money flowed in, technology developed more inroads into more people's lives and thus, the surface area over which the essential nature of tech business (like any business really) could be revealed more clearly expanded. This post is partly deluded.
Sure, if DeepMind could save a few percentage points on their data centres that would be huge! Because you've taken a small number you have no basis for (a few percentage points) and multiplied it by the largest number you can find! Hey presto! Big number! But then surely the guys at Google are morons, right? Because they only bought 1 DeepMind; they should've been throwing hundreds of millions around willy-nilly! At these savings they can't afford not to!
Secondly, it might be true that it's difficult for you to compete with these companies that are hiring in teams of researchers for hundreds of millions, but what you're also doing is handing employees hundreds of millions of dollars. What are they going to do with that money other than throw it into angel investing? You're literally sowing the most fertile ground for startups in history.
I think we should actually be viewing this blow-up in compensation in the context of the hangover from ZIRP and COVID. ZIRP basically made money in Silicon Valley free: tech companies could hire anyone they wanted at almost any comp, and as long as there was growth there were no discount factors, so they could effectively make infinite-time-horizon bets. Then COVID happened, helicopter money came in to keep the economy going, and tech hired like crazy, massively bloating lots of companies. But as things returned to normal, it became obvious that hiring had just been spending, and the returns weren't there for it. I think it's going to become clear over the long term that the same is happening here. Tech has tonnes of money so they're going to spend it, but 3 years down the line someone is going to do the accounting, and I would bet you we end up back in the same spot that we did with tech hiring in COVID: a long and painful unwind as companies have to return to reality.
I like the term "market economy" or "commercial society" more, because it does capture more of what's happening on the market and the society.
Shouldn't we refer to the system by what it leads to in a majority of cases? Like Stafford Beer and the cybernetician's useful heuristic:
POSIWID - the Purpose Of a System Is What It Does.
I mean, the Nordic model is not predominant by any means, right? So why would we use the term capitalism to refer to that, or think capitalism generally leads to that?
The thing we have in most places certainly seems to be dominated by monopoly players, with laws and regulation tending in most cases to protect that entrenched power and leaving the rest of the people mollycoddled and/or mistreated.
Aside from that, I think your line of reasoning is factually backwards. The rights and protections that people won over the last few hundred years were ripped from the hands of the powerful forces of capital every time, and never given gladly. History shows clearly that these advances were won in spite of capitalism, not because of it - ironically, by the same "left" you seem to be deriding.
This famous "capitalism as least bad system" argument, more broadly, of course, presumes we by definition can't do better in any possible future. This is taken for sophisticated wisdom nowadays, but is arguably just the standard modern cynical excuse not to even begin to think.
"The Dawn of Everything" by Davids Graeber and Wengrow does an amazing job of showing this notion that humans are stuck in their economic systems to be a tired modern fantasy at best, driven by our lack of imagination and political sophistication when compared to our forebears.
If I hire a bunch of super smart AI researchers out of college for a (to them) princely sum of $1M each, then I could go to a VC and have them invest $40M for a 1% stake.
Then since these people are smart and motivated, they build something nice, and are first to market with it.
If Google wants to catch up, they could either buy the company for $4B, or hire away the people who built the thing in a year, essentially for free (since the salaries have to be paid anyway, lets give them a nice 50% bonus).
They'd be behind half a year recreating their old work, but the unicorn startup market leader would be essentially crippled.
You might ask what about startup stock options, but those could easily end up being worthless, and for the researchers, would need years to be turned into money.
Hiring away key researchers costs tens to hundreds of millions of dollars (an eye-watering, never-before-seen amount of money before AI), but buying the startup costs billions.
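Back-of-envelope version of that comparison (all figures are the hypothetical ones from the scenario above, plus an assumed headcount of 40 researchers; nothing here reflects real deal terms):

```python
researchers = 40                 # assumed headcount, not stated in the scenario
salary_each = 1_000_000          # the $1M "princely sum" per hire
vc_investment = 40_000_000       # $40M for a 1% stake...
implied_valuation = vc_investment / 0.01   # ...implies a ~$4B valuation

acquire_cost = implied_valuation                 # buy the whole startup: ~$4B
poach_cost = researchers * salary_each * 1.5     # re-hire the team with a 50% bump: ~$60M/yr

print(f"acquire: ~${acquire_cost/1e9:.0f}B vs poach: ~${poach_cost/1e6:.0f}M per year")
```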
Then again, I'm merely a pundit when it comes to this, there's money (the cash Apple has locked in its vaults), and 'money' (Google executes a merger with a stock swap arrangement, essentially costing them nothing, and the stock jumps 5% at the announcement, even making them money)
Things could have been different in a world before financial engineers bankrupted the US (the crises of Enron, Salomon Bros, and the 2008 mortgage debacle all added hundreds of billions to US debt as the govt bought the 'too big to fail' kool-aid and bailed out Wall Street by indenturing Main Street). Now 1/4 of our budget is simply interest payments on this debt. There is no room for govt spending on a moonshot like AI.
This environment in 1960 would have killed Kennedy’s inspirational moonshot of going to the moon while it was still an idea in his head in his post coital bliss with Marilyn at his side.
Today our govt needs money just like all the other scrooge-infected players in the tower of debt that capitalism has built.
Ironically, it seems China has a better chance now. Its release of DeepSeek and the full set of parameters is giving it a veneer of altruistic benevolence that is slightly more believable than what we see here in the West. China may win simply on thermodynamic grounds. Training and research in DL consume terawatt-hours and hundreds of thousands of chips. Not only are the US models on older architectures (10-100x more energy inefficient), but the 'competition' of multiple players in the US multiplies the energy requirements.
Would govt oversight have been a good thing? Imagine if General Motors, Westinghouse, Bell Labs, and Ford had competed in 1940, each with their own Manhattan Project to develop nuclear weapons. Would the proliferation of nuclear weapons have resulted in human extinction by now?
Will AI's contribution to global warming be just as toxic as global thermonuclear war?