A) This is a bunch of marketing hype disguised as performative commiseration. "Our product is so good we need to think about changing laws". Maybe that's true, but let's not get ahead of ourselves. We don't even know the true costs associated with AI or how much better it's even going to get.
B) All of these previous inventions were also touted as "general labor substitutes", and none of those claims turned out to be true.
Let me make an alternate case - coffee in this country used to be an entirely automated process. Everyone had a dedicated coffee robot in their house. But our tastes have shifted so much that now the average person gets multiple cups of handmade coffee a week. An entirely new job category called "barista" was introduced, and today we have over half a million of them. They are not high-wage jobs, but they are comparable to something like the customer service rep job that Amodei is apparently worried about.
Even if AI were to take away massive swathes of white collar jobs (I'm still skeptical), the historical expectation is that new, unforeseen labor categories open up - just as nobody inventing the computer thought QA tester, video game streamer, or spreadsheet technician would be jobs in the future.
It's like an inverse Baumol's cost disease - if AI does tank the value of all of these services, then all of the services that require, I dunno, physical hands go up in value. All the niche things AI can't do suddenly become that much more valuable.
I use AI to build AI systems that are being used and paid for by real users. I'm in the trenches doing customer discovery, designing UX and producing product. Everything I do in my work either leverages AI or is built on AI capabilities.
The LLMs we have today are not capable of replacing most human knowledge worker jobs, and it's not clear to me they will be anytime soon. They are powerful tools but they are not remotely as generally intelligent as people.
The applications being built are more like power tools. The power drill and power saw didn't replace carpenters; they just enabled them to build more, faster.
People see a language calculator doing things that seem superhuman and assume it will replace human knowledge work, without realizing the actual range of human intelligence and how it contrasts with LLMs.
There were people with the job title "calculator", and we don't have those narrow jobs any more, but there are more people using electronic calculators/computers to do calculations in their work than ever before.
There are at least three important misconceptions in thinking that AI is going to replace all human labor anytime soon:
- underestimating human intelligence
- overestimating current AI and its rate of progress
- underestimating current unmet and future demand for getting stuff done
Even if AI makes knowledge work 99% more efficient, it doesn't mean there will be 99% fewer knowledge workers... More likely it means that there will be 100x or more demand for knowledge work, and the demand for the human components will stay the same or even grow.
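The arithmetic in that claim can be made concrete with a toy sketch (all numbers here are made up for illustration): a 99% efficiency gain means each worker produces 100x as much, so whether headcount falls depends entirely on how far demand expands to absorb the cheaper output.

```python
# Toy model with invented numbers: headcount needed to serve a given
# amount of demand at a given per-worker productivity level.

def headcount_needed(total_demand_units, output_per_worker):
    """Workers required to meet total demand at this productivity."""
    return total_demand_units / output_per_worker

# Baseline: 1,000 units of demand, 1 unit of output per worker.
baseline = headcount_needed(1_000, 1)            # 1,000 workers

# 99% efficiency gain = 100x output per worker. If demand stays flat,
# headcount collapses:
flat_demand = headcount_needed(1_000, 100)       # 10 workers

# But if demand also grows 100x (the comment's scenario), headcount
# stays exactly where it started:
expanded_demand = headcount_needed(100_000, 100)  # 1,000 workers

print(baseline, flat_demand, expanded_demand)
```

The point of the sketch is only that "99% more efficient" does not by itself imply "99% fewer workers"; the outcome hinges on the demand-growth assumption, which the model takes as an input rather than predicting.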
(Of course, the prestige and pay of being a teller did take a hit as well).
AI and robotics just make it worse.
But it is very arrogant to think that it will be limited to certain types of jobs.
Things have never been meritocratic. We have always had extreme inequality. Technology has made things slightly more fair, but even that improvement is very unevenly distributed.
We _should_ be able to leverage advanced technology to lift everyone up.
I am going to point out something uncomfortable: I think that racism, classism, and elitism are extremely prevalent globally and may be among the biggest impediments to the even distribution of technology's benefits.
We do need to redesign society. That starts with having a realistic, educated respect for human beings in general; otherwise it's going to be a bad design. It also necessitates refined and contemporary worldviews that properly integrate technology, rather than outdated, vague ideologies.
Society has been working well recently (on the grand scale) - we just need to tweak some of the settings so that we don't backslide into aristocracy and feudalism.
Although, that difference says a lot about how one might see this argument. I suppose some people may be willing to have a worse life if the tradeoff were a more egalitarian world.
This is a popular worldview unfortunately. I think it's generally motivated by Social Darwinism which is propped up by racism and classism.
A good illustration of this is in Hans Rosling's books. We're making unprecedented strides on the metrics that matter - like childhood poverty, disease, illiteracy, hunger, child labor, violent crime, lead exposure, etc.
Some of these things we are at risk of backsliding on, but even for the poorest person in America, the quality of life is so much better today than it was even 50 years ago.
And that is largely despite many structural aspects of our society. There have been some improvements to social structures, but almost all quality of life improvements have been from technology gains.
The social structures are fundamentally based on elitism and exploitation. The prevailing counterviews seem to be basically 1950s style centralized planning.
I'm not saying we should throw the baby out, but we need a more fair, refined, and technologically sound foundational worldview.
I don't think we should abandon money or centralize things. But we do need, for example, protocols and/or protocol registries enforced by government for sharing information effectively, such as about energy and resources. We also need the monetary systems to be integrated into truly democratic government in such a way that resources and power are distributed in a sane way.
Considering the current political climate, that is not likely to happen. There are many things about China that I do admire, but their ability to map out and move their country forward as a whole is one thing we'd be well-served to learn from.
AI can't create a permanent underclass; politics creates the permanent underclass by deciding to under-tax windfall profits and then under-resource a segment of the population.
This can apply to pretty much any government, doesn't have to mean the US.
Under-resourced segment - why not just get that segment to stop consuming cheap dopamine, and replace a lot of bad food with smaller amounts of good food? (And no, the new USA food pyramid is not the solution.)
I would pay people to eat less but really well, and to watch fewer shows and less TV - I would be glad to do so.
Just give poor people free AI, free internet, and free clean water; give them the chance to find out everything for themselves.
This comment holds just as little value as an AI comment, but you've managed to write so poorly that not even the top of the line LLMs could compete. Great work, really.
I think people continue bullshitting in this domain because they’re worried they have no moat, so they have to discourage via sophistry or bold claims.
Don’t take my word for it, here are the stats https://scale.com/leaderboard
For reference, 25% means getting 1 in every 4 questions right.
Secondly, if we ever do get to that point, people have lots of social mechanisms, developed over millions of years of evolution, which will kick in. I am not sure anyone dreaming of this "utopia" is going to be around to see it.
In this case, Dario Amodei cannot imagine a world where AI fails to deliver on his own promises. Amodei is a salesman, just as much as any CEO.