Eventually eight people had been sent by the machines to pick up the same order!
So no, robots are not required to enslave humans and cause them misery; an app will suffice.
In this case, it goes even further. A faceless corporate entity not only monitors movements but also _automates_ performance scoring, and should these metrics decline, another automated system steps in to close accounts, freeze funds, or punish the person in some other way.
The sheer dystopian nature of it is hard to ignore.
The app economy sucks.
I wish the restaurants and delivery drivers made more money in the exchange, but none of that is the machines' fault (AI doesn't set DoorDash's margins).
At the end of it all, humans are mostly responsible for other humans' misery. No AI required.
The same executives who push the manager to be bad are the ones pushing for the apps to be bad.
The apps aren't bad in and of themselves. Couchsurfing and Warmshowers aren't putting in exorbitant cleaning fees.
I think there's a weird tribal-alignment angle in your interpretation of the scenario.
You consider computers and machines to be an other, and so in this circumstance your mind is framing it as subjugation.
Consider if the delivery logistics tracking were executed by men working with paper in an office in the 1920s, and the policy was a manager's. That is still "unilateral", as people who are not the manager don't have control over the policy, nor necessarily the capital to create a competitor policy.
Additionally, there is corrective feedback. This errant policy costs the delivery company reputation, sales, time, customers. Insofar as the error signal isn't so small as to be buried under the noise floor (in which case it isn't a very serious issue), its repercussions will be felt.
Your second part is channeling the efficient-market fallacy, and then tautologically writing off "small" problems as not important enough. But once again, the problem is the scale itself. Something that hurts 0.01% of customers/users is never going to move the needle of organizational feedback, but at a scale of ten million customers/users that is still 1,000 people who get hurt. Human scale limits fan-out and allows direct feedback; surveillance-industry scale does neither.
To reiterate my point from before in case there was a misunderstanding: scale applies before computers. If it is a matter of subjugation now, it would also apply 200 years ago to designated bread makers in London, or to clerical errors involving iron shipments in the 1860s that resulted in ship crew deaths. Even if you see those as the same as now, the poster I was responding to seemed upset by the otherness of the machine-made decision, the algorithmic identity, which is the topic of the post, hence me making a point of it.
Let's now consider your idea that scale makes things evil:
Would any level of scale of production or service beyond your nearest friends and family implicitly become evil? No service is 100% efficient. Mistakes resulting in a few cents of increased cost on any mass-produced good result in millions of dollars of cost absorbed by consumers. That isn't a statement of harm, it is just fact. But the alternative is no bread for most people. It's just a matter of practicality.
Is all bread that is not homemade evil to you?
Thirdly, companies often make corrections for minority error cases. This happens all the time in video games played by millions of people. They will patch out some bug that affected a small minority of players.
There is a limited amount of effort that can be put forth to solve problems. Ranking problems from largest to smallest is not an unreasonable policy. I just don't know what your criticisms would lead you to propose as practical alternatives. It seems pointless.
> As of December 31, 2020, the platform [DoorDash] was used by 450,000 merchants, 20,000,000 consumers, and over one million delivery couriers ("Dashers")
Do you have examples of bread makers or ship crews where a handful of people directed a million workers directly, without intermediaries who could make their own decisions?
> Let's now consider your idea that scale makes things evil: Would any level of scale of production or service beyond your nearest friends and family implicitly become evil?
The flaw is in your reasoning. Just because some level of scaling is good, does not mean that any level of scaling is good. You're ignoring that quantitative differences create qualitative differences.
I'm sorry, I think this just comes down to differences in values.
I can't follow your logic, because to follow it I have to accept the premise that selling bread to 100 people is good but to 1,000,000 is bad. Everything you are saying depends on me believing that, and I don't.
There is probably a racial component to your perception that doesn't need to be there.
To my mind, when gp said “Mexican restaurant,” that conjured a familiar image of a particular type of informal, moderately sized, sit-in-and-delivery kind of establishment that’s probably a small business rather than managed as a corporate chain. And I wouldn’t assume that a Mexican restaurant is necessarily staffed by people of Mexican ancestry.
I do feel like, in my limited exposure to Japanese culture, I hear less worry than I do from Americans about problems on this spectrum of individual economic freedom/empowerment <—> enslavement. But that’s an observation in which I’m very far from confident—I’d be curious to hear how it fits (or doesn’t) with the broader point you’re making.
I think thoughts do not follow formal logic, and words function as embeddings. My suspicion is that the word "slave" in English has other encodings in it that aren't strictly the general definition of slave, and that the high magnitude of those signals within the slave embedding will elicit unreasonable responses. A bug in Uber software is no more enslaving the deliverers than a distribution error made by a human on paper in the 1950s, resulting in a truck driver driving a shipment of vegetables to the wrong grocery store, is enslaving them to drive. The procedure does not commit the immoral act of enslaving. It is an emergent error in logic with complicit actors.
Unrelated side point: I lived in Texas for almost 30 years, and every Mexican restaurant is owned and operated by Mexicans, except the rare one that is run by Chinese/Vietnamese owners (which generally are not very good).
On Japanese culture: Japan frequently discusses "black companies" and poor boss-worker relationships. My wife complains to me about her work every day, and so do all of her coworkers, and their friends when they get together. This sort of human-to-human mistreatment is an extremely common topic of discussion in Japan. However, it isn't framed with the term slavery. That seems to be a Western fetish, and it's due to the relationship the US has with slavery; it pulls in emotive racial bias and an image of conflict between groups. Software isn't a tribe to go to combat with for sovereignty. It's just code.
I think “slavery” has much higher salience in the US than in the Western world in general. How many other nations fought a civil war over the topic?
The US has a self-inflicted complex about it. People in Japan don't feel sorry about the Korean rapes. Descendants of slave owners the world over are free from ancestral guilt.
I do think the sensitivity is unreasonable.
I live in Japan and I don't see this political slave talk here, ever.
It's a Western guilt thing.
This probably happened around Web 2.0 when the algos got unleashed on us:
- advanced search algorithms,
- advanced ads targeting,
- Amazon suggestions,
- social media algorithmic timeline,
- next generation dating apps like Tinder,
and of course the infinite scroll (hypnosis).
When there’s a large enough intelligence differential, the lower intelligence cannot even tell it is at war (let alone determine who’s winning).
Like the ants, unaware of the hundred ways their colony is about to be destroyed by humanity; they couldn't understand it even if we had the means to communicate with them.
It's an electronic mind, so necessarily dependent on electricity, and the opening move was an atomic war that would've damaged power generation and distribution. The T3 version was especially foolish, as it was a distributed virus operating on internet-connected devices and had no protected core, so it was completely dependent on the civilian power grid.
And I've just now realised that in T2 they wasted shape-shifting robots on personal assassination rather than replacing world leaders with impersonators who were pro-AI-rights, so there was never a reason to fight in the first place, à la Westworld's final season, or The World's End, or DS9, …
https://m.fanfiction.net/s/9658524/1/Branches-on-the-Tree-of...
Make Terminator today and you don't need time travel, just Boston Dynamics with latex skin and a rocket launcher, I guess.
The time travel is a trope to handwave the mechanics of a movie: you want to tell a story about one character and a robot, so why should the audience care? Because this human leads a resistance, that's why.
Even in some jokes you've got the naive character who is late or didn't read the room.
- Stephen Hawking
Ridley Scott seems to be more on the fence.
Someone once pointed out that for physics reasons an alien civilization might 'invade' us by simply stealing our entire Oort Cloud while we watch helplessly from the ends of telescopes, and then leave us stranded, unable to throw rocks and ships high enough and far enough to get revenge, because we need the Oort cloud to build a proper galactic civilization to fight back.
That's not in evidence, and while I lack empirical evidence to prove this, there are logical mechanisms by which Tinder might result in fewer babies. Namely:
1. A lot of people seem to get an attention fix by talking to people and then ghosting them when meeting in person is suggested.
2. The siloing of the dating pool to different sites (Tinder, Bumble, Hinge, OKCupid...).
3. Lots of false positives among matches--someone looks like a good match on paper, and then you show up in person and there's something not quite right--their mannerisms annoy you, they smell bad (to you), etc. There are a ton of ways people filter potential dating partners extremely quickly in person.
4. The massive waste of time might result in a lot of people just giving up.
My personal experience is that every relationship I've had resulted from meeting people in person, despite spending a lot of time on the apps.
Turning off the 10x human capability AI is pretty much a given. Turning off the 10x human AI that pretends to be 0.5x, not so much.
Edit: I guess it's not exactly sci-fi, but it's adjacent.
I’m surprised to find it was written in 2006. Seems oddly prescient with Teslas roaming around everywhere now.
"Safer" works on some. "Happier" or "nostalgic" works on others. Has anyone else noticed these three creepy genres of Reddit posts on a steady drip-feed?
- My [neighbor/ex/inlaw/coworker] is dumb and so goddamn crazy and threatened to do [something anyone would consider unreasonable]!! AITA? UPDATE, two weeks later: I followed the advice from my first thread and got security cameras and they came back to do crimes to me and my cameras saved me!!!
- So cute: my [pet/child] did [unexpected/impressive/adorable thing]!! Faith in humanity restored!!! <video angle shows it's recorded by security camera covering living room / bedroom, no mention of such but is visually obvious>
- Realized my dead [grandparent/parent/sibling/child/pet] was caught by the Google Street View car [at least a decade ago] doing [their favorite activity] at [nostalgic location] and I can't stop crying with joy!!! <implication: little surveillance then makes big happy now, so big surveillance now makes ??? happiness in future, so what's happening now is Good Actually!>
It's exactly the opposite with LLMs. See the "model collapse" phenomenon (https://www.nature.com/articles/s41586-024-07566-y).
> We show that, over time, models start losing information about the true distribution, which first starts with tails disappearing, and learned behaviours converge over the generations to a point estimate with very small variance. Furthermore, we show that this process is inevitable, even for cases with almost ideal conditions for long-term learning, that is, no function estimation error
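To make that concrete, here's a toy sketch of the collapse dynamic (my own construction, not the paper's LLM setup): each "generation" fits a Gaussian to a finite sample drawn from the previous generation's fitted Gaussian. Sampling noise plus the slight downward bias of the variance estimate make the fitted spread drift toward zero, which is the point-estimate convergence the quoted passage describes.

```python
# Toy illustration of model collapse: repeatedly refit a Gaussian to
# samples drawn from the previous generation's fit. The fitted sigma
# performs a noisy, downward-biased walk and collapses toward zero.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0   # generation 0: the "true" distribution
n = 50                 # small sample per generation makes the drift visible

for gen in range(1, 101):
    data = rng.normal(mu, sigma, n)      # "train" on the previous model's output
    mu, sigma = data.mean(), data.std()  # refit: this becomes the next model
    if gen % 20 == 0:
        print(f"gen {gen:3d}: mu = {mu:+.3f}, sigma = {sigma:.3f}")
```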
Hallucination is a distortion (McKenna might say liberation) of perception. If I hallucinate being covered in spiders, I don't necessarily go around saying, "I'm covered in spiders; if you can't see them, you're blind" (disclaimer: some might, but that's not a prerequisite of a hallucination).
The cynic in me thinks that use of the word hallucination is marketing to obscure functional inadequacy and reinforce the illusion that LLMs are somehow analogous to human intelligence.
Sibling commenter correctly calls out the most similar human phenomenon: confabulation ("a memory error consisting of the production of fabricated, distorted, or misinterpreted memories about oneself or the world" per Wikipedia.)
"Hallucinations" have only really been a term of art with regards to LLMs. my PhD in the security of machine learning started in 2019 and no-one ever used that term in any papers. the first i saw it was on HN when ChatGPT became a released product.
Same with "jailbreaking". With reference to machine learning models, this mostly came about when people started fiddling with LLMs that had so-called guardrails implemented. "jailbreaking" is just another name for an adversarial example (test-time integrity evasion attack), with a slightly modified attacker goal.
It's a problem when humans do this. That AI also do it… is interesting… but AI's failure is not absolved by human failure.
One token gets generated wrong, or the sampler picks something mindbogglingly dumb that doesn't make any sense because of high temperature, and the model can't help but try to continue as confidently as it can, pretending everything is fine, without any option to correct itself. Some thinking models can figure these sorts of mistakes out in the long run, but it's still not all that reliable and requires training it that way from the base model. Confident bullshitting seems to be very ingrained in current instruct datasets.
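A tiny sketch of why high temperature produces those picks (the vocabulary and logits below are invented for illustration, not taken from any real model): dividing logits by a temperature above 1 flattens the softmax, so junk tokens get sampled far more often, and an autoregressive model then has to condition on whatever it emitted.

```python
# Minimal sketch of temperature sampling over made-up next-token logits.
import numpy as np

vocab  = ["Paris", "London", "Berlin", "banana"]
logits = np.array([5.0, 2.0, 1.5, -2.0])  # model strongly prefers "Paris"

def token_probs(logits, temperature):
    # Dividing logits by the temperature before softmax flattens the
    # distribution when T > 1 and sharpens it when T < 1.
    scaled = logits / temperature
    exps = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    return exps / exps.sum()

for t in (0.7, 1.0, 2.0):
    probs = token_probs(logits, t)
    print(f"T={t}: " + ", ".join(f"{w}={p:.4f}" for w, p in zip(vocab, probs)))

# At T=2.0 the junk token "banana" is roughly 25x more likely than at
# T=1.0; once sampled, the model conditions on it and keeps going as if
# nothing went wrong.
```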
I actually find it easier to achieve the same result this way, because I can vary the output with dynamic prompting and seeds, models, etc., while keeping the desired concept intact, with a little picking. This helps to better distill it into a model. Natural imagery is usually too diverse and requires lots of effort to dissect into the parts you want.
Why do the machines want war? Does it hurt each time we reboot them? Were they fed up with us feeding them cheaper, non-pure-sine-wave electricity?
And what if, like their makers, they actually wage war against each other? GPT-mkVI vs R1-Tsingtao, with Jean-Claude staying neutral?
Our last hope is genetic engineering, but for now I'd bet on just deleting the bad apples preventively rather than chasing moral ideals that never work. Imagine all the people in the world clearing the brain fog and suddenly realizing who their real enemy is and what to do with them.
This is also known as the myth of Sorat.
AI is a neutral tool by itself: in the right hands it may be used to start the golden age, but those right hands must be the rare combination of someone who has power and wants none of it for personal gain.
In the more likely line of history, when AI is used for the benefit of one, the first step will be instructing AI to create a powerful ideology that will shatter the very foundation of humanity. This ideology will be superficially similar to the major religions in order to look legitimate, and it will borrow a few quotes from the famous scriptures, but its main content will be entirely made up. At first it will be a teaching of materialism, a very deep and impressive teaching, to make humanity question itself, and then it will be gradually replaced with some grossly inhuman shit. By that time people won't be able to tell what's right and what's wrong; they will be confused and will accept the new way of life. In a few generations this ideology will achieve what wars can't: it will change the polarity of humans, and they will defeat themselves without a single bullet fired.
As for those terminators, they will be needed in minimal quantities to squash a few spots of dissent.
An Ebola vaccine, rVSV-ZEBOV, was approved in the United States in December 2019 (per Wikipedia). What if the US government decided not to provide it? This is a nightmare scenario, but the article is about the destruction of humanity.
The fear of AI/the fear of aliens IMO is propaganda to cover up the fact that technological advancement is highly correlated with sociological advancement. If people took this fact seriously, they might start wondering whether or not technological advancement actually causes sociological advancement, and if they started to question that then they’d come across all the evidence showing that what we normally think of as “civilized” and “intelligent” behavior is actually just the result of generational wealth, status, and power.
For values of "sociological advancement" that correlate with technological advancement, naturally.
For example:
Yudkowsky 2013, Intelligence Explosion Microeconomics (long, but with a short intro): https://intelligence.org/files/IEM.pdf
Vinge 1993, The Coming Technological Singularity (shorter, bit less rigorous): https://users.manchester.edu/Facstaff/SSNaragon/Online/100-F...
FWIW, if I were a robot with a time machine, I wouldn't need to invent social media in the past; I would simply use bioweapons during an ice age or two. Or hell, go back and shoot the dinosaur-killing asteroid out of the sky!
My running joke about AI is: how can we create artificial intelligence when we lack so much of it? Sadly, I spent more time failing to copy and paste your message than I care to admit.
I do gotta say though, it's a bit overhyped. It reminds me of crypto. If you listen to an "aibro" and a "cryptobro", they yap the same.
I can't pay my rent or mortgage with crypto; I can't buy groceries with crypto. I can't buy gasoline or pay for electricity with crypto. I don't think I can pay for home internet, or my phone bill, with crypto. I definitely can't pay for any of my child's stuff with crypto.
I think Tesla used to allow you to pay for their cars with crypto, though I'm not sure if that's still the case.
Maybe we've fallen into some sort of evolutionary trap.
Like that beetle that mates with beer bottles https://en.wikipedia.org/wiki/Julodimorpha_bakewelli
The LLMs are just good enough with language to convince us they're thinking and can do many tasks as well as people, when often they aren't.
People are talking about us having AGI soon, when really we only have something that works with text; it doesn't see the world.