"Gleefully taking away people's livelihoods will be met with violence, and nothing good will come of it." - fixed.
The fact that we're using AI killer robots to wipe each other out in droves doesn't bode well for that future, does it...
Why do we watch Olympic runners, when cars on your average city street easily exceed Usain Bolt's top speed on their morning drive to Starbucks? Why do we watch the Tour de France, when we can watch Uber Eats drivers on their 150cc scooters easily outpace top cyclists? I'm sure within a couple years a Boston Dynamics robot will be able to out-gymnast Simone Biles or out-skate Surya Bonaly. Would anyone watch these robots in competition? I doubt it. We watch Bolt, Biles, and Bonaly compete because their performance represents a profound confluence of human effort and talent. It is a celebration of human achievement, even though that achievement objectively pales in comparison to what our machines can accomplish.
I think the same is true for other aspects of human creativity and labor. As we are able to automate more and more, we will place increasing importance on what inherently cannot be automated: celebration of our fellow humanity. Another poster wrote that "bullshit jobs" [0] exist primarily because we value human contact [1]. I am inclined to agree.
Are the only options here being a good and "useful" worker/consumer, or a violent, irrational thug? Is there nothing else you can imagine?
Yeah, this is not happening anytime soon. Have you even looked at AI-generated code or text? AI is just a dumb parrot, it's no match for human effort and creativity even in these "easy" domains.
The business case for AI generation is just being able to generate huge amounts of unusable slop for next to nothing. For skilled workers it's a minor advantage in that they get a sloppy first draft that they can start the real work on - it makes their work a bit more creative than it used to be, by getting rid of the most tedious stuff.
I don't think we're anywhere near that point.
When most engineers and Marvel fans watched Tony Stark in Avengers collaborating with Jarvis, they thought of Jarvis as "an AI with Google's knowledge that I can interact with". It's true that we're close to that level of interaction. However, the ultimate goal is to get as much as possible automated on Jarvis, to the point where Tony Stark is not needed, or Tony Stark can be replaced by anyone with a mouth.
In this example, Jarvis isn't the goal but a checkpoint. The goal is a genie, providing software and research to anyone who is loaded with money, and knows how to rub the metaphorical lamp the right way.
Labour displacement leads to an erosion of standards of living, and in a world that ties purpose to work, it is an existential threat on a very practical level.
It was always going to be met with violence once it became more than a curiosity for tinkerers.
a) Decouple the value of human life from labour.
b) Watch as the value of human life rapidly approaches zero.
---
Though I'd expand this by adding "technically alive" is not a very good standard to aim for. Ostensibly we're already heading for something like poverty level UBI + living in pod + eating the proverbial bugs. We need a level above that!
A great exploration of the pitfalls of "preserve humanity" as a reward function is the video game SOMA. I think you also need "preserve dignity" to make the life actually worth living.
(Path `a` is not without its pitfalls: what lack of survival pressure might do to the human culture and genome, I leave as an exercise for the reader! But path `b` I think we already have enough examples of, to know better...)
The two biggest labor displacements in human history were the agricultural and industrial revolutions, both of which resulted in enormous gains in human living standards. Can you think of a mass labor displacement that resulted in an overall erosion of living standards? I cannot.
Then there's the minor issue of AI deciding to just wipe us out because we're in the way.
Taking everything together, AI more powerful than that which currently exists must not be created. This needs to be enforced with an international treaty, nuking data centers in non-compliant states if need be.
That's not to say we should just throw up our hands and accept every social injustice. But IMHO we shouldn't go around simplistically implying that all social ills will be solved by neutering the billionaire class.
You’re right. Instead of implying, we should be taking active steps to do it.
Not to put too fine a point on it but this was basically how the Japanese post war economic miracle was achieved.
In this case it was America which ordered the Japanese oligarchy to be stripped of its wealth.
We've had decades of propaganda telling us that this is the worst thing we could do for economic growth though so it's natural to doubt.
The biggest problem we currently have with billionaires is that they are now so rich that the world becomes like a game to them and some of them are deliberately pushing us to a dystopia where non-billionaires become functional slaves (c.f. Amazon workers).
I don't disagree that we tie purpose to work and severing that tie will have negative societal consequences, but it is far more impactful that we tie the ability to continue to exist to work (for anyone not lucky enough to already be wealthy).
If I suddenly became unemployable tomorrow I'm positive I could find alternate purpose in my life to fill that gap, I already volunteer for various causes and could happily do more of the same to fill in the gaps left by lack of work. What I couldn't do is feed myself, keep myself housed, and get medical care (especially in the US, where this is very directly tied to work).
The really big fuckup we are committing as a society in the US (may or may not apply to each person's country individually) isn't just this looming threat of massive labor displacement due to AI, it is that instead of planning for any sort of soft landing we are continually slashing what few social safety nets already exist. We are creating the conditions for desperation that likely will result in increasing violence as outlined in the linked post.
If we thought of all of this as 'stochastic data systems', then our heads would be in the right place: we'd think about it just as 'powerful software' that can be used for good or bad purposes, and the negative externalities would be derived from our use of it, not from some inherent property.
Cryptocurrency is an interesting technology with some niche use cases, but it was pitched as replacing the entire money system. LLMs are extremely useful for certain types of work, but are pitched as AGI ending all work. Etc.
It has nothing to do with society; there is infinite demand for medical care. The upper limit is whatever it takes to live until the universe's heat death in good health. That takes a lot of resources.
However much society spends on medical care, there is always more that could be spent. The modern era has the best, most affordable medical care in history and people are showing no signs of being satisfied at all.
While war spending generally just causes pain for no gain it doesn't change the fact that there will never be enough available to satisfy people's demand for medical care. Every single time people get what they want they just come up with a new aspirational minimum standard.
Humanity has taken control of its own evolution and no longer relies on natural selection to be the driving force for change. Using evolution as an excuse to make bad and immoral choices is a poor argument and should be left back in the Stone Age.
The Soviet Union lost due to an inferior societal model, but this one too has strayed too far from what was once a relatively sustainable path. The American dream is now a parody of itself, as it takes ever more just to end up level with the rest. I could go on about the irony of wanting to escape the pit while not wanting to acknowledge that the pit is the 99% of the U.S. -- not the Altmans, Bezoses, Musks, or Trumps, or their hordes of peripheral elites.
Point being, the model doesn't work _today_ with its cancerous appetite and correspondingly absurd neglect of the human, _any_ human. We can't have humanism and the kind of AI we're about to "enjoy".
The acceleration of wealth disparity may prove to be nearly geometric, as the common man is further stripped of any capacity to inflict change on the "system". I hope I am wrong, but for all their crimes, anarchy, and, in a twist of irony, inhumane treatment of opponents, the October revolutionaries in Russia, yes, the Bolsheviks, were merely a natural response to a similar atmosphere in Russia at the turn of the previous century. It's just that they didn't have mass surveillance used against them in the same capacity that our gadgets allow the "governments" today, nor were their adversaries aided by AI, which is _also_ something that can be used against an entire slice of the populace (a perfect application of general principles put in action). So although the situation may become similar, we're increasingly in no position to change it. The difference may be counted in _generations_, as in it will take multiple generations to dismantle the power structures we allow to be put in place now, with the Altmans etc. These people may not be evil, but history proves they only have to be short-sighted enough for evil to take root and thrive.
Sorry for the wall of text, but I do agree with the point of the blog post in a way -- demanding people become civilised and refrain from throwing eggs (or Molotovs) on celebrities that are about to swing _entire governments_, is not seeing the forest for the trees.
There's also no precedent in a way -- the historical cataclysms we have created ourselves have been on a smaller scale, so we're spiraling outwards, and not all of the tools we think we have are going to have the effect required to enact the change we want. In the worst case, of course.
Meanwhile
https://www.reuters.com/world/middle-east/how-many-people-ha...
> U.S.-based rights group HRANA said 3,636 people have been killed since the war erupted. It said 1,701 of those were civilians, including at least 254 children.
(Mentioning this specifically because we know the DoD is using AI)
Let’s not parrot that media propaganda.
Iran has admitted outright to 6k deaths, by the way.
The US must have several dozen spy satellites pointed at Iran. We get various imagery to show us successful strikes. Where are the images of the mass slaughter in the street?
The number I keep seeing is 30k killed. That's not an easy endeavor over the course of a week without big logistical hurdles. The trucks, the digging equipment, the furnaces to burn the bodies, all should have some visible trace that the US gov could point to as proof.
Yet all we got is a "trust me bro".
or just arguing over 20K, 30K, 50K?
Just want to clarify. Since some people argue Covid never happened, and some just argue the total deaths wasn't really that high.
There is a sliding scale between "I sound like a raving crazy person" and "I'm just splitting hairs."
Coincidentally that's literally the exact same evidence cited to prove the existence of Saddam's WMDs just before launching an entirely different unprovoked attack.
That was just an unhappy mistake though, this time it's totally legit.
Plus the labs themselves, of course.
And the other side, “pause/ban AI” crowd, also sounded impractical, as the vested interests from governments and private industries will not really let it happen.
Sorry for yapping, it might be that I’m looking at the wrong sources.
Even if I support UBI morally, there isn’t even local appetite for it, let alone a global one. And you’ll run into quick questions about inflation, every chart from the UBI-lite era of COVID, and so on.
Probably not the scale you imagine but there have been plenty of tests.
"Compatible with current version of capitalism" -- the whole point of UBI is to create a new form of capitalism
Polarizing doesn't mean complicated. People are against it out of ignorance, greed, or both; it's certainly not more complicated than that.
> And since then, there hasn’t been a single large scale test of the system to see if it can be compatible with the current version of capitalism that’s ran in the most of the world.
Because people keep fighting against it, because it's scary scary sOcIaLiSm.
> Even if I support UBI morally
As you should, there are no moral arguments against it.
> there isn’t even local appetite for it, let alone a global one.
I would think the majority of the population struggling to pay for groceries would disagree.
> And you’ll run into quick questions about inflation, every chart from the UBI-lite era of COVID, and so on.
No reason to think UBI would cause inflation at all, actually.
In any case, this really is the answer. You're worried about disruption due to AI taking jobs, but the only reason there is a problem is because AI will drastically increase inequality by letting rich people and corps become even richer. You want to solve the issue, you solve the disparity by making them give back their fair share. Like I said, simple.
Nothing, really?
I think people are aware that speech can be an act, and that some violent acts must be resisted with reciprocal violence. (That's why we have "incitement to violence" as a limitation on free speech, for instance.)
Are we at that point? Maybe not. But I think it's a poor imagination that says it can never happen.
Lovely writing. I once knew someone whose surname was HorsFELL, and now I wonder if they were related.
AI is killing writing, music, art, and coding. I've done all of these voluntarily because I simply enjoyed them.
Meanwhile, the parts of my existence that I actually hate - dealing with customer support, handling government forms, dealing with taxes - are far from being automated by AI.
Look at Suno. Fantastic tool, but where was the capital need to make music generation so cheap that no musician could ever compete with it? Did the world really wake up one day and conclude that, "wait, we're spending too much on musicians"?
Seems like a complete misallocation of capital if I'm perfectly honest
This is one of the first parts LLMs tried to automate. They were literally released in the form of a chatbot. Whether it succeeded is another question.
> Did the world really wake up one day and conclude that, "wait, we're spending too much on musicians"?
I'm not sure about musicians specifically, but for the whole past decade studios have been complaining about how costly it is to make AAA games. And the cost mostly comes from the art-asset side.
> This is one of the first parts LLMs tried to automate. They were literally released in the form of a chatbot. Whether it succeeded is another question.
I don't think that's right. They tried to automate customer support dealing with me, not me dealing with customer support. The goal is to reduce costs of serving customer support even if it results in the customer doing more labor than a customer support professional would need to do to fix their problem, or the customer just living with their problem.
Obviously both parties would be happy with a result where I get what I need easily and for free, but the company is also generally happy if I live with it or expend a lot of effort solving it myself.
In any case, during perhaps hundreds of interactions with chatbots accumulated over many years, I have never encountered even one where the chatbot was useful; they were always just difficult-to-pass obstacles in the way of reaching a human who could actually solve the problem.
To be honest, even when some services still had humans answering the calls, those were never more helpful than the chatbots, but at least when speaking with humans it was much easier to convince them to transfer the call to a competent person, which with chatbots may be completely impossible.
But recorded music was a crisis. And it did tempt a lot of people into supporting fabulously abusable, rich-enriching "intellectual property" law as a means of financing art.
Rich people are lobbying to capitalize on this crisis as well.
At least today, LLMs make bad creative writing, music, and art. They’re automating sweatshop work that, in an alternative timeline, goes to Fiverr-esque contractors who accept the lowest wages and sacrifice quality for efficiency in every way.
LLMs make developers more efficient but can’t fully replace them. This reduces jobs, but so did better IDEs, open-source libraries, and other developer improvements.
> Meanwhile the parts of my existence that I actually hate - dealing with customer support, handling government forms, dealing with taxes - is far from being automated by AI
LLMs can at least theoretically do these things. I’ve heard people use them to mass-apply to apartments and jobs, and send written customer complaints then handle responses.
> Look at Suno. Fantastic tool, but where was the capital need to make music generation so cheap that no musician could ever compete with it?
There’s no “capital need”, but a benefit of Suno is that it lets individuals who otherwise don’t have the skill make catchy songs with silly lyrics or try out interesting genres. And the vast majority of top artists are still human, although most streaming revenue has already gone to a few celebrities who seem to rely on looks and connections more than musical talent.
The fact that people are using it to flood the world with slop is a hyperscaled continuation of the overabundance and discovery problems we already had, but that doesn’t mean that writing is dead or dying.
The technology simply doesn’t have the capabilities right now, and even if it develops them, what will be put to the test is whether literature is about the artifact or the connection between the author and other humans.
Customer support is kind of something you can use AI for; most companies will foist you off to some system of exchanging written messages, which is annoying, but then you can use an AI to write your side of the conversation. It’s ill-mannered to do this when you’re interacting with actual people, but customer support is another story.
> Look at Suno. Fantastic tool, but where was the capital need to make music generation so cheap that no musician could ever compete with it? Did the world really wake up one day and conclude that, "wait, we're spending too much on musicians"?
People didn’t know what LLMs would be capable of until after they were invented. Cheap music generation turned out to be easy once we had cheap text generation, and cheap text generation turned out to be a tractable problem.
Pertinent quote. A lot of AI discourse goes in circles trying to evaluate the truthiness of every individual complaint about AI. Obviously it's good to ensure claims are factual! But I believe it misses a broader point that people are resistant to AI, often out of fear, and are grasping for strategies to exert control. Or at least that's my read of it.
Refuting individual claims won't make a difference if the underlying anxieties aren't addressed (e.g., if I lose my job will I be compensated, will we protect ourselves against x-risk, etc).
On my side the biggest concern is the lack of transparency about ecological impact. This is not strictly related to LLMs, though; data centers are not new, and the concerns about people keeping a leverageable level of control through distributed power are not new either.
AI will be 'dangerous' because humans will use it irresponsibly, and that's all of the risk.
- giving it too much trust, being lazy, improper guards and accidents
- leveraging it for negative things (black hats, military targeting)
- states and governments using it as an instrument of control, etc.
That's it.
Stop worrying about the ghost in the machine and start worrying about crappy and evil businesses and governing institutions.
Democracy, vigilance, laws, responsibility are what we need, in all things.
In my view that line of argument is pro-AI hype. It's the Big Tech CEOs themselves who often share their predictions of the end of the world as we know it caused by AI. It's FUD that makes the technology sound more powerful and important than it is.
I'm less concerned about AI becoming Skynet and killing humans and more concerned about AI making the world so miserable that we'll be killing ourselves and each other.
So yeah _we_ will be fine, but some of us definitely won't, and with the growth in our numbers on Earth, the proportion of martyrs may be growing. Quantifying personal suffering is not possible, especially if the prospect is death.
Anyone pish poshing war should go fight in one, and then let me know their opinions.
Because World War I was fine, World War II finer....
Automater's dilemma: the labor that is removed from production due to automation can no longer sustain the markets that the automater was trying to make more efficient.
By optimizing just the production half of the economy and not the consumption half, you end up breaking the market.
Good luck doing nothing of value in a restaurant with 20 employees.
Which I think is much better take than that guy that wrote bullshit jobs.
I'm not convinced.
The idea that people will revolt, replaying the luddites history, has been floated a lot. It's used to diminish all kinds of AI skepticism by framing it as backwards, violent people who don't understand progress. This is the preferred bucket of AI fanboys: frame any disagreement as unreasonable rage.
I think AI companies want a general dumb violent popular movement to sprout against AI. On paper, it would be great for them. So far, they have failed to encourage it.
> Nothing that Altman could say justifies violence against him. This is an undeniable truth. But unfortunately, violence might still ensue. I hope not, but I guess we are seeing what appears to be the first cases.
Not arguing with you, but the author, I don't understand this line of thinking.
If Altman introduces a technology that effectively halts the upward mobility of a large portion of the population, how does that not justify violence? Saving up for a house but now there's no work. Your dreams and aspirations are second to shareholder value. The police are already there to protect the shareholders, not the average civilian.
What recourse is there? The money in politics limits the effect voting can have. You can't really opt-out of the system. Why does Sam Altman get this nice little shield where none of his actions can have a negative consequence?
> And then, and I’m sorry to be so blunt, then it’s die or kill.
Of course, by talking about the possibility, despite asserting my disapproval of it, I am sowing seeds, but I assure you that's certainly not my intention!
The question is "what do we do now?".
Conversely, The Loudest Alarm Is Probably False[0]. If the idea that you are a pretty levelheaded guy pops up so frequently, consider that it might be wrong. Especially if you are motivated to write blog posts about violence in response to technology you don't like. Maybe you're just not as levelheaded as you think and that could explain the whole thing?
[0] https://www.lesswrong.com/posts/B2CfMNfay2P8f2yyc/the-loudes...
E.g., suppose that 1,000,000 persons believe that a corporation's evil acts destroyed their happiness [0]. I would have guessed that at least 1 person in that crowd would be so unhinged by the experience that they'd make a viable attempt at vengeance.
But I'm just not hearing of that happening, at least not nearly to the extent I would have guessed. I'm curious where my thinking is wrong.
[0] E.g., big tobacco, the Sacklers with Oxycontin, insurance companies delaying lifesaving treatment, or the Bhopal disaster.
If that’s accurate, Luigi Mangione would be the exception that proves the rule. The “unwashed masses” generally want money more than they want to effect change in the world.
A lot of people spend mental energy fantasizing about getting rich off lawsuits. Like, a lot.
And yet,
> I don’t want to trivialize the grievances of the people who fear for their futures. I don’t want to defend Altman’s decisions. But this is not the way. This is how things devolve into chaos.
If I had a cent every time a lesswrong link was posted alongside a profoundly obtuse comment...
The people ready to die or kill for the AI, do you already imagine what they are going to be like?
And if you decide to stay behind, nobody will kill you. Old age and disease will take care of that.
The rest of the article is equally short sighted and plain wrong.
This was not an oversight. To the contrary, it was the goal. Technological feudalism, with people like Altman and Musk becoming the Lords of the world.
> Most layoffs are not caused by AI, but it’s the perfect excuse to do something that’s otherwise socially reprehensible.
This illustrates my previous point. What they're doing is not a mistake.
> For what it’s worth, the New Yorker piece I’m referring to, which Altman also referred to in his blog post, made me see him more as a flawed human rather than a sociopathic strategist. My sympathy for him will probably never be very high, but it grew after reading it.
It feels like we read two different articles.
Sam Altman having a Molotov cocktail thrown at his house right after Ronan wrote a very long and detailed report on his shady personality isn't just a coincidence, and likely not organic. Sam needs to be viewed as sympathetic; thank goodness for such a moment where no one was hurt and nothing was actually damaged.
With the exception of rappers, most musicians who die early die from overdoses, suicides, and such (the "27 club" <https://en.wikipedia.org/wiki/27_Club>), as opposed to being murdered.
We are a somewhat violent species, so I agree that almost every significant economic and societal development has the potential to trigger some violence. That said, the jobs that are potentially threatened by AI are nowadays usually done by fairly sedentary people, so I wouldn't expect any large-scale violence, an occasional Ted Kaczynski notwithstanding. Programmers, translators, and painters just aren't used to destroying things in the real world.
It would have been different if AI started to replace drug dealers or the mob.