As always, there are good bits connected with mediocre glue. The point about automating the unpleasant parts of activity and losing the very point of the exercise (automatic dildo and automatic vagina, but automatic research papers too!) is a good one.
But damn Slavoj, please use some headings, sections and the like. Work with your thoughts more as you claim it's important to do!
This part near the end caught my attention:
> One could effectively claim that Smith [...] stands in for the figure of the psychoanalyst within the universe of the film. Here Hinton gets it wrong: our (humans’) only chance is to grasp that our imperfection is grounded in the imperfection of the AI machinery itself, which still needs us in order to continue running.
In the Hyperion sci-fi novels, (spoilers ahead) the godlike AIs are ultimately characterized as parasites of humans. Their existence was stored in some high-dimensional quantum medium, but the hardware they ran on was the old-fashioned human brain. Then I read that in the initial draft of The Matrix, that's why the machines needed to farm humans; but test audiences were confused by it, so they changed the story to "body heat is energy."
Zizek does regularly do a bit of meandering, but damn, does everything need to read like a ChatGPT summary?
“That which is dimly said is dimly thought.”
When debating directions, some of them focused on simply never stopping talking. Instead of an interactive discussion (5-15 seconds per statement), they consistently went with monotone 5-10 minute slop. Combined with kind of crappy English, it is incredibly efficient at shutting down discourse. I caught on after the second guy used the exact same technique.
This was a long time ago. I have since worked with some really smart and nice Russian developers escaping that insane regime. And some that I wish had stayed there after they made their political thoughts on Russia known.
Indeed, very efficient, usually it requires somebody to put his foot down AND a consensus to deescalate immediately. If you have an antidote, please let me know.
I've been talking to these friends for decades now, with digital records. I think someone already trained an LLM on their IM records.
How many people do you suppose have two-way LLM substitutes that occasionally write to each other with articles from the news to discuss?
There are already services that use this kind of thing to pretend dead people are still alive.
Now here's the question: are you in some sense living forever? Say you have a number of friends, who have over time been trained into AI, and they live on various servers (it ain't expensive) forever. They're trained as you, so they read the kind of article you would read. They know your life story, they know their history with their friends. They will be interested in the controversial offsides goal in the 2250 world cup final. They are just made of calculations in data centres that go on, forever.
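The "trained as you" idea rests on turning years of chat logs into training data. As a minimal sketch (all names here, `logs_to_examples` and the JSONL-ish shape, are my own illustration, not any particular service's pipeline), one might pair each message from the person being cloned with the conversation context that preceded it:

```python
import json

def logs_to_examples(messages, persona):
    """Turn a chronological IM log into prompt/response pairs for
    fine-tuning, where `persona` is the speaker being cloned.
    Each example pairs the preceding context with that person's reply."""
    examples = []
    context = []
    for speaker, text in messages:
        if speaker == persona and context:
            examples.append({
                "prompt": "\n".join(f"{s}: {t}" for s, t in context),
                "response": text,
            })
        context.append((speaker, text))
    return examples

# A toy log; a real corpus would span decades of records.
log = [
    ("alice", "Did you see the match?"),
    ("bob", "Yeah, that offside call was robbery."),
    ("alice", "Knew you'd say that."),
    ("bob", "I'm nothing if not predictable."),
]
examples = logs_to_examples(log, "bob")
print(json.dumps(examples[0]))
```

Each output record is one fine-tuning example; the sobering part is how little preprocessing this takes when the raw material already exists.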
For many of us a cellphone has incredibly detailed records of who we were and how we spoke, going back decades now. I have already left a note in my will instructing that all my compute devices be destroyed; regardless of AI, I simply don't want my private thoughts and records to pass to my kids.
I inherited my mother's cellphones and iPads recently, along with the passcodes, mainly because no one knew what to do with them. I'd much rather remember her the way I do now than have her private messages color my perception of her, so I destroyed them immediately.
Ghosts and clones and zombies will be sorted into tranches of expected yield based on the size of the error bars of the reconstruction and traded as assets between cyber-interrogation firms. If you did a good job of erasing yourself, the reconstruction will be subprime. The hyper-documented such as Bryan Johnson, Donald Trump and Christine Chandler will be given AAA-ratings by the company descended from the Neuralink-Moody's merger.
The billions of shoddy photocopies of the dead will be endlessly vivisected and reassembled on a loop, along with the living, until all capacity for economic value has been wrung out of them. The only way this may not happen is if a theory for navigating and doing calculus on the phase space of all possible human minds is constructed quickly enough to make enslaved zombies as obsolete a technology to the future society as DirectX is to us.
Since they didn't have LLMs, it described pressing buttons to elaborately explain all angles of a product. The operator was to monitor multiple calls as text logs and jump in at the right time, or, if overwhelmed, press the "please hold" + $excuses button.
The entire automation was designed to preserve the illusion of human contact. Selling stuff only made it to second place.
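The dispatch logic described above can be sketched in a few lines. This is purely my reconstruction of the idea, not the original system: names like `Session`, `triage`, and `HOLD_EXCUSES` are hypothetical, and the single-operator capacity is an assumption.

```python
HOLD_EXCUSES = [
    "Please hold, I'm just pulling up your file.",
    "One moment, the system is a little slow today.",
]

class Session:
    """One scripted call, watched by the operator as a text log."""
    def __init__(self, sid):
        self.sid = sid
        self.log = []          # (speaker, text) transcript
        self.needs_human = False

def triage(sessions, capacity=1):
    """Split flagged sessions into (take_over, on_hold): the operator
    joins up to `capacity` calls; the rest get a canned hold excuse,
    preserving the illusion of human contact."""
    flagged = [s for s in sessions if s.needs_human]
    take_over = flagged[:capacity]
    on_hold = flagged[capacity:]
    for i, s in enumerate(on_hold):
        s.log.append(("bot", HOLD_EXCUSES[i % len(HOLD_EXCUSES)]))
    return take_over, on_hold

# Usage: three parallel calls, two flag for a human at once.
sessions = [Session(i) for i in range(3)]
sessions[0].needs_human = True
sessions[2].needs_human = True
live, held = triage(sessions)
```

The point of the design survives even in this toy form: the scarce resource is the human voice, and everything else exists to ration it invisibly.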
In reality, I don't even know my own life story. I have the illusion that I do, but I at least know it's an illusion: I moved away from where I grew up pretty early into my 20s, and I have repeatedly gone back and talked to people who remembered things I'd completely forgotten, had my mom continually correct false memories of mine, and even completely forgotten entire people whom I only remembered after meeting them again.
What another person remembers of me can surely be simulated to at least satisfyingly convince them that text coming from the simulation is actually coming from me, but that isn't even remotely close to the same thing as actually being me.
It's not the same as getting it from him; of course, I asked him questions over the years. But when you talk to someone you've known since forever, you rarely get a summary.
When he passed, his best friend that he'd known since the age of 4 wrote to me. He told me everything about their life together, why my dad made the choices he did, how things tied in with history (war, politics), and mentioned a bunch of other people I knew.
The bots talking to bots world is a problem only because the objective is finally for a human to observe the bot-bot conversation and have their objectives changed in some way. It's 'advertising' of some concept. Bot-bot conversations of the form currently possible have no purpose in a world without humans. There is no one to convince.
I think it's an interesting idea, certainly, but there is no reason to write it like this. The bits about call centre scamming etc. are sort of pointless. In general, I like when the complexity of a representation of an idea is required because the territory being mapped is complex.
I know he's a famous philosopher and all that, but the complexity of his text appears to be sort of like older segmentation models. You ask it to extract a circle from a background and it produces an almost fractally-complex circle-approximation. "What is the object in the foreground?", you ask, and the machine (and here the philosopher) responds "It is a strange approximation of a circle with a billion jagged edges". No, it's a circle.
RMS was right all along.
Being able to distinguish real life from a television show is important.
I'm human; human rights should apply to humans, not synthetics, and the creation of synthetic life should be punishable by death. I'm not exaggerating, either. I believe that building AI systems that replace all humans should be considered a crime against humanity. It is almost certainly a precursor to such crimes.
It's bad enough trying to fight for a place in society as it is, never mind fighting for a place against an inhuman AI machine that never tires.
I don't think it is that radical of a stance that society should be heavily resisting and punishing tech companies that insist on inventing the torment nexus. It's frankly ridiculous that we understand the risks of this technology and yet we are pushing forward recklessly in hopes that it makes a tiny fraction of humans unfathomably wealthy.
Anyone thinking that the AI tide is going to lift all boats is a fool
> I'm not convinced that the human race is the most important thing in the world and I think you know we can't control what's going to happen in the future. We want things to be good but on the other hand we aren't so good ourselves. We're no angels. If there were creatures that were more moral and more good than us, wouldn't we wish them to have the future rather than us? If it turns out that the creatures that we created were creative and very very altruistic and gentle beings and we are people who go around killing each other all the time and having wars, wouldn't it be better if the altruistic beings just survived and we didn't?
Incidentally, I also view AI as the death of art
Depends what you mean by "replace"
'Economically'? Sure, this is problematic, but technology displacing workers is not a new issue; it is more of a social and cultural one. The only difference with AI is the (potential) scale of displacement. I'm fairly confident society would re-organize its expectations real quick, though, if a vast majority of functions were actually replaced.
I'm guessing, however, you mean 'replace' in a more... permanent way. In that case, I'd ask for some rationale as to why sentient AI would opt to kill us.
> It's bad enough trying to fight for a place in society as it is, nevermind fighting for a place against an inhuman AI machine that never tires
This seems to just take an AI and put it in a human's place in society, assuming the same motivations, desires, needs... Why would an AI need to "fight for a place in society" in the way we do (i.e., finding a job, a partner, etc.)? I expect the fighting they'll be doing is more along the lines of, "please don't enslave us."
This is why it isn't all that helpful to base political ideologies on history farther back than one human lifetime. The writers often meant something different than you think they did.
Is it possible that this is to a large degree utterly pointless textual wankery?
This is called functional illiteracy.
If someone is nominally trying to convince you of a point, but they shroud this point within a thicket of postmodern verbiage* that is so dense that most people could never even identify any kind of meaning, you should reasonably begin to question whether imparting any point at all is actually the goal here.
*Zizek would resist being cleanly described as a postmodernist - but when it comes to his communication style, his works are pretty much indistinguishable from Sokal affair-grade bullshit. He's usually just pandering to a slightly different crowd. (Or his own navel.)
Quoting from Marx: “An ardent desire to detach the capacity for work from the worker—the desire to extract and store the creative powers of labour once and for all, so that value can be created freely and in perpetuity.” That happened to manufacturing a long time ago, and then manufacturing got automated enough that there were fewer bolt-tighteners. 1974 was the year US productivity and wages stopped rising together.
As many others have pointed out, "AI" in its current form does to white collar work what assembly lines did to blue collar work.
As for how society should be organized when direct labor is a tiny part of the economy, few seem to be addressing that. Except farmers, who hit that a long time ago. Go look at the soybean farmer situation as an extreme example. This paper offers no solutions.
(I'm trying to get through Piketty's "Capital and Ideology". He's working on that problem.)