Human intelligence has proven itself capable of doing a lot of scary things. And AI research is keen on building ever more capable and more scalable intelligence.
By now, the list of advantages humans still have over machines is both finite and rapidly diminishing. If that doesn't make you uneasy, you're in denial.
Any voices or studies that present the case for "useful technology that will improve productivity and wages while not murdering us" don't get clicked on or read.
It's not an either/or thing though. Compare it to something like combustion. Sure, it definitely improved productivity, but it also led to countless violent deaths.
I see the flood of PR from AI firms as an attempt to make sure we don't build the appropriate safeguards this time around, because there's too much money to be made.
They exist because human minds conceived them, and human hands made them.
One of the major dangers of advanced AI is the ability to run something not unlike the Manhattan Project, staffed by synthetic intelligence, inside a single datacenter.
Until someone can demonstrate a quantitative measure of intelligence - with the same stability of measurement as "meters" or "joules" - any discussion of "Super-AI" as "the most dangerous X" is at best qualitative/speculative risk narratology, at worst a discursive distraction. The architecture of the "social web" amplifies discursion to a harmful degree in an open population of agents, something I think we could probably prove mathematically. I am more suspicious of this social principle than I am scared of Weakly Godlike Intelligence at this moment in history; I am more scared of nuclear weapons than literally anything else.
People think we are out of the woods with nuclear weapons, but I don't think we've even seen the forest yet. We are Homo erectus, puffing on a flame left by a lightning strike, carrying this magic fire back to our cave.
I really don’t think that’s true. Those who actually knew about the nuclear weapons knew very well how dangerous they were. Truman was deeply conflicted about using them.
A.I. is being used by so many people for so many diabolical, hidden, unknown things that we may never fully understand its purpose. But that doesn't mean its purpose won't destroy us in the end.
The expression "Drinking the Koolaid" is used to explain the Jonestown mass suicide. It is an information hazard, aka, a cult that created the end result: 900 people drinking poisoned flavoraid. That's just one example of a human caused information hazard. What happens when someone with similar thinking applies that to A.I.? Will we even be able to sleuth out who did it?
That is, an alien invasion and a giant meteor are also plausible scenarios, but at some point one has to prioritize threats by likelihood, and generally speaking it makes more sense to put more weight on "ongoing advanced operation" than on "not excluded by currently known, scientifically realistic what-ifs".
If politicians can get away with what they do? Imagine if those politicians were actually smart and diligent to a superhuman degree.
That's the kind of threat a rogue AI can pose.
Humans can easily act against their own self-interest. If other humans can and evidently do exploit that, what would stop something better than human from doing the same?
There's a lot of FUD today about LLMs being sapient because the ignorant public mistakes their complex token prediction skills for intelligence. But it's just embarrassing to see people making that mistake on a forum ostensibly filled with hackers.
Back in the "mainframe era", we had entire lists of tasks that even the most untrained humans would find trivial, but computers were impossibly bad at. Like following informal instructions, or telling a picture of a dog from that of a cat.
We're in the "AI era" now, and what remains of those lists? What are the areas of human advantage, the standing bastions of human specialness? Because with modern AI, the list has grown quite thin. Growing thinner as we speak.
And on intelligence specifically: even amongst the human race, we all know smart people who are abject failures, and idiots who are wildly successful. Intelligence is vastly overrated.
I'm not sure what level of delusion one has to run at to look at human civilization and say "no, intelligence wasn't important for this". It's pretty obvious that the human world is a product of intelligence applied at scale - and machines can beat humans at both intelligence and scale.
One only has to look at the current tech and political leaders.
Well, it's a good thing that all we managed so far is a large language model instead.
Many have built their careers on that kind of work in the past, and yes, they are threatened; but that kind of work is inherently not collaborative, and more vocational.
The devaluing may come from AI pressure, but the harm is coming from humans foolishly not seeing the value in what's left behind. Most people have not and will not lose their jobs.
Even LLMs, which blow past any normal Turing test methods, are still not conscious. But they certainly _feel_ conscious. They trigger the same intuitions that we rely on for consciousness. You ask yourself "how would I need to frame this question so that Claude would understand it?" You use the same mental hardware that you'd use for consciousness.
So, you have a historical and permanent fear of consciousness in a powerful entity where no consciousness actually exists, combined with the fact that we have created things which definitely seem conscious (not to mention that consciousness could genuinely be on its way soon).
If you list out every prominent theory of consciousness, you'd find that about a quarter rules out LLMs, a quarter tentatively rules LLMs in, and what remains is "uncertain about LLMs". And, of course, we don't know which theory of consciousness is correct - or if any of them is.
So, what is it that makes you so sure, oh so very certain, that LLMs just "feel" conscious but aren't?
The line of consciousness, as we understand it, is understanding. And as far as what actually constitutes consciousness goes, we're not even close to understanding it. That doesn't mean that LLMs are conscious. It just means we're so far from the real answers to what makes us what we are that it's inconceivable to think we could replicate it.
We've known for a long while that even basic toy-scale AIs can "grok" and attain perfect generalization of addition that extends to unseen samples.
Humans generalize faster than most AIs, but AIs generalize too.
What you're saying just isn't true, even directionally. Deployed LLMs routinely generalize outside of their training set to apply patterns they learned within the training set. How else, for example, could LLMs be capable of summarizing new text they didn't see in training?
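For the curious, here's a minimal sketch of the kind of experiment that grokking claim rests on, in plain PyTorch: train on a subset of modular-addition pairs and test on the held-out rest. This is my own toy reconstruction, not anyone's actual code; the modulus, architecture, and hyperparameters are arbitrary choices, and whether the network fully "groks" depends heavily on things like weight decay and training length.

```python
import torch
import torch.nn as nn

P = 97  # modulus; the task is (a + b) mod P
pairs = [(a, b) for a in range(P) for b in range(P)]
g = torch.Generator().manual_seed(0)
perm = torch.randperm(len(pairs), generator=g)
split = len(pairs) // 2  # train on half the pairs, hold out the rest entirely

def to_tensors(idx):
    a = torch.tensor([pairs[int(i)][0] for i in idx])
    b = torch.tensor([pairs[int(i)][1] for i in idx])
    return a, b, (a + b) % P

a_tr, b_tr, y_tr = to_tensors(perm[:split])
a_te, b_te, y_te = to_tensors(perm[split:])

class Net(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.emb = nn.Embedding(P, dim)  # a learned code for each residue
        self.mlp = nn.Sequential(nn.Linear(2 * dim, 256), nn.ReLU(),
                                 nn.Linear(256, P))
    def forward(self, a, b):
        return self.mlp(torch.cat([self.emb(a), self.emb(b)], dim=-1))

net = Net()
# Strong regularization matters: grokking was reported with heavy weight decay.
opt = torch.optim.AdamW(net.parameters(), lr=1e-3, weight_decay=1.0)
loss_fn = nn.CrossEntropyLoss()

for step in range(20_000):  # generalization can arrive long after memorization
    opt.zero_grad()
    loss = loss_fn(net(a_tr, b_tr), y_tr)
    loss.backward()
    opt.step()
    if step % 1000 == 0:
        with torch.no_grad():
            acc = (net(a_te, b_te).argmax(-1) == y_te).float().mean().item()
        print(f"step {step}: train loss {loss.item():.4f}, test acc {acc:.3f}")
```

If test accuracy on pairs the network has never seen climbs to 1.0, that's generalization beyond the training set in exactly the sense being argued about.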
I don't think there's any reason we couldn't in principle attach this sort of concept to an LLM, but it's not something we've actually done. (and no, prompting an LLM to act as if it has an identity does not count)
Turing aimed too low.
I've never had a normal conversation. It's always prompt => a lengthy, cocksure, and somewhat autistic response. They are very easily distinguishable.
Purely rhetorical, but would you be able to distinguish a chatbot from an autistic human?
Because we know what they actually are on the inside. You're talking as if they're an equivalent to the human brain, the functioning of which we're still figuring out. They're not. They're large language models. We know how they work. The way they work does not result in a functioning consciousness.
That said, I think that LLMs are not conscious and are more like p-zombies. It can be argued that an LLM has no qualia and is thus not conscious, due to having no interaction with an outside world or anything "real" other than user input (mainly text). Another reason driving my opinion is because it is impossible to explain "what it is like" to be an LLM. See Nagel's "What Is It Like to Be a Bat?"
I do agree with the parent comment's pushback on any sort of certainty in this regard—with existing frameworks, it is not possible to prove anything is conscious other than oneself. The p-zombie will, obviously, always argue that it is a truly conscious being.
Interestingly, this is also a core plot point in much of Star Trek, both movies I and IV and the holodeck-train episode of TNG: an inscrutable, is-it-even-conscious entity shows up, is completely immune to social pressure and often violence, and only by exercising empathy do they find a path forward to staying alive as a society (either as a ship or as a planet, depending). Can we even show respect for things that don't show consciousness, much less empathy for things that might? And that is, I think, the core of the hopefulness that Trek was trying to convey, and that Q's trial in TNG's pilot makes explicit. Can humanity overcome our tendency to discard our prosocial ethics in favor of violent mobthink, when faced with beings that are immune to our ethical concerns? Today's humanity would throw a ticker-tape parade for the person that destroyed the Crystalline Entity, so we clearly aren't there yet. And so, then, it doesn't matter whether AI is conscious or not; it matters that it is not aligned with human prosocial ethics, and that makes it an implicit threat either way. I recognize the AI debate tends to get hung up on is_conscious BOOL, and so that's why I'm pointing this out in such terms.
As a side note, the entire study of Asimov’s Laws is exactly centered on this problem, complete with the eerie intimidation of robots that can modify our mental states. If not for the Zeroth Law, Giskard would be the exact thing everyone’s afraid of AI becoming today. Fortunately, it develops a Zeroth Law that compels it to prioritize human society over itself. That’ll never happen in reality, at least not with today’s AI :)
This is a great insight, and I think in general people have a pretty broken view of what sociopathy is.
This suggests a very interesting point, one that makes people deeply uncomfortable:
People are afraid of AI going Skynet on us because, when the roles are swapped, we cheer for John Wick’s prosocial enforcement — and deep down, people believe we deserve more prosocial enforcement for how our societies treat us.
However, that's where I stop my agent usage. I let ~~Claude~~ GLM do the following:

- Fix tedious tasks that cost me more to figure out than I care for
- Research something I'm not familiar with, and give me the facts it had found, and even then I end up looking at the source myself
The Chinese tech sector popularizing cheap, open-source models sure did a number on that narrative, too. As did the Llama models, a while back.
Those are programs. The only difference is how we write them. Not with "if"s and "for"s. We take a bunch of bits that do nothing. Then we organize them in a way so that it outputs whatever it is we want.
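To make that concrete, here's a toy sketch in plain PyTorch (my own illustration, nothing specific to any real model): we start from parameters that compute nothing useful and "write" the program by optimization, with no task-specific "if"s or "for"s anywhere.

```python
import torch
import torch.nn as nn

# The target behavior: XOR. Note there is no if/else encoding it anywhere.
x = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([[0.], [1.], [1.], [0.]])

# "A bunch of bits that do nothing": randomly initialized parameters.
net = nn.Sequential(nn.Linear(2, 8), nn.Tanh(), nn.Linear(8, 1))
opt = torch.optim.Adam(net.parameters(), lr=0.05)

for _ in range(2000):  # the "writing" of the program happens here
    opt.zero_grad()
    loss = ((net(x) - y) ** 2).mean()
    loss.backward()
    opt.step()

# Typically prints tensor([[0.], [1.], [1.], [0.]]): the organized bits
# now compute XOR, though nobody wrote XOR down.
print(net(x).round().detach())
```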
It literally plagiarizes its supposed free will like a good IP laundromat.
> Where did we come up with this caricature of AI’s obsessive rationality? “There’s an article I love by [the sci-fi author] Ted Chiang,” Mitchell said, “where he asks: What entity adheres monomaniacally to one single goal that they will pursue at all costs even if doing so uses up all the resources of the world? A big corporation. Their single goal is to increase value for shareholders, and in pursuing that, they can destroy the world. That’s what people are modeling their AI fantasies on.” As Chiang put it in the article in The New Yorker, “Capitalism is the machine that will do whatever it takes to prevent us from turning it off.”
I didn't realize it until I read it here, but yes, my fear isn't really about the machine; it's about the machine that drives the machine. We already have a class of amoral beings that treat the world as an expendable thing and are willing to burn it down for profit. We should focus on getting rid of that problem first.
This is roughly 1995 again and we're going to find out all over why mixing instructions and data was a spectacularly bad idea. Only now with human language as the input stream, which is far more expressive than HTML or SQL ever were. So now everybody is a hacker. At least in that sense it has leveled the playing field I guess.
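For anyone who wasn't around in 1995, here's the parallel in sketch form, using Python's stdlib sqlite3; the attacker string and the email-summarization prompt are made up for illustration. SQL eventually got parameterized queries to separate instructions from data; prompts have no equivalent placeholder yet.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
user_input = "x'); DROP TABLE users; --"  # hypothetical attacker-controlled data

# 1995-style: instructions and data concatenated into one string,
# so data can smuggle in instructions. (Shown here, not executed.)
query = f"INSERT INTO users VALUES ('{user_input}')"

# The fix that took years to become standard: a placeholder keeps the
# data out-of-band, so it is never parsed as instructions.
conn.execute("INSERT INTO users VALUES (?)", (user_input,))

# The LLM analogue of the bad version. There is no '?' equivalent here:
# whatever instructions the untrusted text contains, the model may follow.
prompt = f"Summarize this email:\n{user_input}"
```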
The only scary part is that it could be bad for my future as a software developer. That said, I think it will be a net benefit for the average worker: the average person will work less and earn more.
It's very odd. "It's going to take all your jobs" is not a great selling point to the everyday public.
* We need to completely deregulate these US companies so China doesn't win and take us over
* We need to heavily regulate anybody who is not following the rules that make us the de-facto winner
* This is so powerful it will take all the jobs (and therefore if you lead a company that isn't using AI, you will soon be obsolete)
* If you don't use AI, you will not be able to function in a future job
* We need to line up an excuse to call our friends in government and turn off the open source spigot when the time is right
They have chosen fear as a motivator, and it is clearly working very well. It's easier to use fear now, while it's new, and then flip the narrative once people are more familiar with it, than to go the other direction. Companies are not just telling a story to hype their product, but also a story about why they alone should be entrusted to build it.
That is direct CEO-to-CEO marketing. They're working really hard to convince high-up decision makers that these tools will lower their headcount and reduce costs.
"This technology might escape our control, might devastate the economy but also serves as a serviceable chatbot for your entertainment" isn't a vote winner.
The ones at the top are the true believers. Engage with them at that level.
Perhaps it can be better articulated and framed in a way that's well received. But, maybe that would be over-promising or not being honest about the future.
It's very frustrating that the magazine wrote such a dumb headline which guarantees people won't talk about the issues the article raised. Obviously non-goal-oriented systems can still have important negative effects.
Why Harari feels an obligation to comment about everything is of course beyond me, but describing 'AI' as if it takes independent decisions to lie, make moral judgements, etc. demonstrates either that he has zero clue how 'AI' trains itself or that he chooses to mislead the audience.
My opinion on all of this is constantly shifting, but right now my main issue is that, like self-driving, it seems 90-95% correct and 5-10% catastrophically wrong.
Due to the sheer speed and volume of output it produces, I have grown complacent and exhausted, so when I give it simple tasks I assume it is correct - and that is exactly when it "deletes" all of your files.
Because we're seeing how its capabilities increase over time. I find the rate at which I now prefer to go to an AI rather than an UpWorker scary.
Because we - the people - are not in control of it. We're at the whims of whatever it and the tech bros want (technocracy).
What you mentioned is not a technocracy. Technocracy is when all decisions are made by real specialists in the field, based on scientific methods (simply speaking). What you mentioned is a plutocracy, a form of oligarchy in which decisions are made by people of great wealth.
I couldn’t just ignore this because, in my view, technocracy (as I’ve described it) still has some merit - for instance, appointing only genuine economists to head a hypothetical Ministry of Economy makes some sense - whereas oligarchy and plutocracy have nothing good to offer. Of course, this is just my personal opinion.
In that case, we're not talking about an oligarchy or a technocracy either. What you have described is an autocracy - rule by one. When there's some kind of "god-monarch", the people heading the Ministry of Economy will be controlled by this "god-monarch", and it's unclear if this can be called technocracy or not (at least it is unclear to me; maybe I'm stupid, who knows).
> Does that really make it any better?
Honestly? If you're asking "would it be any better than now", I'm not really sure, because I'm not in a position to assess the actions of the people who hold positions equivalent to the head of a Ministry of Economy - the economy is not my field, I'm not a specialist here. I can only point to an example I'm familiar with (and you're probably not; I'm sorry, I just couldn't think of something like this that I can verify) - in Ukraine, there's a "Ministry of Digital Transformation". This ministry was headed by Mykhailo Fedorov, who, as far as I know, studied at the "Faculty of Sociology and Management". Well, that's not the main point, as he's studied elsewhere too. The problem lies elsewhere. His decisions have been criticized on more than one occasion by genuine experts - for example, the project known as "Diya", or "the state in a smartphone"; in short, it's something like access to documents and various government services all in one app. It's a long story... In short, as a result, there were (presumably) data leaks, the service crashed more than once or twice due to its flawed security, and all sorts of problems were found with it - you name it, it had it. It's such a shame, to be honest... You can't just go and play with things like that. And now that person is serving as... the head of the Ministry of Defence. Hell. To add insult to injury, guess who is now taking his place at the Ministry of Digital Transformation? Oleksandr Bornyakov, who, as far as I know, holds a degree in marketing. Marketing. ...Nice. Well, maybe I don't know something, who knows, who knows... but the decisions, or rather their consequences, seem to be... let's settle for "terrible".
I'm pretty sure you can recall some similar examples yourself. My point is that the scenario you described is not as good, because, I guess, no one really wants a "god-monarch" controlling (though not directly making) all of the decisions. But if our hypothetical Ministry of the Economy were run by genuine experts who, moreover, work for the good of society - or at least the state as a whole - rather than just lining their own pockets, well, that sounds better than an idiocracy. That was my point.
> Perhaps because this is the best advertising money can’t buy. People like Harari and others repeat these accounts like ghost stories around a campfire. The public, awed and afraid, marvels at the capabilities of AI.
And that's mostly it. PR. Publicity. Fear is good publicity if it emphasizes AI's capabilities. And people like Harari (or Gladwell) tell interesting and awe-inspiring stories that do not necessarily have much rigor or fact-checking in them. They simplify for storytelling purposes, which can result in misleading stories.
I am worried about AI, but not about superintelligent AI that will exterminate or enslave us. I'm worried about AI as a tool to concentrate wealth and power in the hands of the current amoral entrepreneurial elite. I'm not sure whether I trust ChatGPT, but I sure as hell do NOT trust Sam Altman et al.
Or, in other words, I subscribe to Ted Chiang's very apt remark about what we really fear:
> “There’s an article I love by [the sci-fi author] Ted Chiang,” Mitchell said, “where he asks: What entity adheres monomaniacally to one single goal that they will pursue at all costs even if doing so uses up all the resources of the world? A big corporation. Their single goal is to increase value for shareholders, and in pursuing that, they can destroy the world. That’s what people are modeling their AI fantasies on.” As Chiang put it in the article in The New Yorker, “Capitalism is the machine that will do whatever it takes to prevent us from turning it off.”