195 points | by xena | 4 days ago | 21 comments
  • opwieurposiu4 days ago
    I witnessed the dystopian future where humans are slaves to machines at my favorite local Mexican restaurant. It was about an hour before close. The kids and I were the only customers there. Before we ordered our food we had to wait for the staff to explain to a very annoyed door dasher that the meal they were here for had already been picked up by someone else. As we ate dinner over the next half hour, a new door dasher would arrive every few minutes in an attempt to pick up that same already-picked-up order.

    Eventually eight people had been sent by the machines to pick up the same order!

    So no, robots are not required to enslave humans and cause them misery, an app will suffice.

    • swat535 4 days ago
      It's fascinating and deeply unsettling that corporate policies now extend into people's daily lives, dictating their _literal_ physical movements through always-on tracking devices.

      In this case, it goes even further. A faceless corporate entity not only monitors movements but also _automates_ performance scoring, and should those metrics decline, another automated system steps in to close accounts, freeze funds, or punish the person in some other manner.

      The sheer dystopian nature of it is hard to ignore.

    • wruza3 days ago
      It is not an app or a delivery service that enslaves humans; politicians and lobbyists do. Working in delivery doesn't have to be modern-day slavery with no viable escape routes, as it is for many. All an app does is limit interaction between the parties involved, which may be stupid but is not enslaving.
    • h2zizzle3 days ago
      This has happened to me. I can't remember the explanation, but the solution for deliverypeople caught by it is kind of fun. Ideally, the restaurant will just remake the order, but if they won't, it's sometimes advantageous to just pay for a second iteration of the order and deliver that. You see, at this point, the platform will have run up the delivery fee considerably, trying to get the order delivered - often to considerably above the cost of the food (that's why so many people were taking it). I'm sure this could backfire if the platform decides to cancel the order in-transit, but it might be worth risking if the fee is high enough (or if you'd eat the food yourself if you could no longer deliver it).
    • FirmwareBurner4 days ago
      There was that pharmacy/market that had a wheeled robot patrolling the aisles looking for spills, and the bot would call a human wagie to come mop it up.
      • disqard3 days ago
        You might like (or hate) to check out Eric Sadin's "Injunctive turn of Technology"
    • democracy4 days ago
      An incompetent manager would also do the job )
      • elicksaur4 days ago
        Why does someone always have to chime in with “But humans can be bad, too!”? We shouldn't advertise for these companies by denigrating other people.

        The app economy sucks.

        • pj_mukh4 days ago
          I don't know, man. Somebody mentioned a complete outlier of "dystopian" machine behavior. For every one of those, I've eaten at 10 restaurants with the delivery line absolutely swamped with orders and deliveries happening like the clockwork of an assembly line.

          I wish the restaurants and delivery drivers made more money in the exchange, but none of that is the machines' fault (AI doesn't set DoorDash's margins).

          At the end of it all, humans are mostly responsible for other humans' misery. No AI required.

        • 8note4 days ago
          The apps only do what people tell them to do.

          The same executives who push the manager to be bad are the ones pushing for the apps to be bad.

          The apps aren't bad in and of themselves. Couchsurfing and Warm Showers aren't putting in exorbitant cleaning fees.

        • iinnPP4 days ago
          Because it's an important reminder that it doesn't need to be perfect to replace you.
      • anigbrowl4 days ago
        I've met people like that but usually after the third time something doesn't work they pause and reassess.
      • aussieguy1234 4 days ago
        I suppose the question here is: who is the more incompetent manager, the machine or the human?
        • smt88 4 days ago
          The problem with a machine isn't that it's sometimes incompetent; it's that it can't be reasoned with in the face of a deteriorating situation.
          • saalweachter4 days ago
            I _suspect_ that those 8 door-dashers were not compensated for being sent to a restaurant to pick up an already-delivered order, because they failed to complete the delivery.
      • agumonkey4 days ago
        Silicon Valley invented managerless misery without even knowing it.
    • wegfawefgawefg4 days ago
      If the same circumstance happened due to human clerical error by a local delivery shop, would you consider it enslavement or just an honest mistake?

      I think there's a weird tribal-alignment angle in your interpretation of the scenario.

      You consider computers and machines to be an other, and so in this circumstance your mind is framing it as subjugation.

      • mindslight3 days ago
        The problem isn't computers being considered an "other", but rather that other people are using computers to unilaterally scale up the implementation of their own negligence/biases/"policy" while also insulating themselves from corrective feedback or other repercussions. This makes poor results feel willful rather than being considered honest mistakes.
        • wegfawefgawefg3 days ago
          The first part isn't really an assertion of fact but of perspective.

          Consider if the delivery logistics tracking were executed by men working with paper in an office in the 1920s, and the policy was a manager's. That is still "unilateral", as people who are not the manager don't have control over the policy, nor necessarily the capital to create a competing policy.

          Additionally, there is corrective feedback. This errant policy costs the delivery company reputation, sales, time, and customers. Insofar as the error signal isn't so small as to be buried under the noise floor (in which case it isn't a very serious issue), its repercussions will be felt.

          • mindslight3 days ago
            Your first part is relying on some assertion that scale makes no difference, when the whole crux of the matter is that it does. If a person bumps into you on the sidewalk, you will give them the benefit of the doubt. If that person has done it to a bunch of other people, you likely won't have as much tolerance. If that person is being paid to get in your way (e.g. using social engineering to put trashvertisements in people's hands), you likely won't have any.

            Your second part is channeling the efficient market fallacy, and then tautologically writing off "small" problems as not important enough. But once again, the problem is the scale itself. Something that hurts 0.01% of customers/users is never going to move the needle of organizational feedback, but at a scale of ten million customers/users that is still 1000 people that get hurt. Human scale limits fan out and allows direct feedback, surveillance industry scale does neither.

            • wegfawefgawefg3 days ago
              I think there are flaws in this reasoning.

              To reiterate my point from before, in case there was a misunderstanding: scale applies before computers. If it is a matter of subjugation now, it would also apply 200 years ago to designated bread makers in London, or to clerical errors involving iron shipments in the 1860s that resulted in ship crew deaths. Even if you see those as the same as now, the poster I was responding to seemed upset by the otherness of the machine-made decision, the algorithmic identity, which is the topic of the post, hence me making a point of it.

              Let's now consider your idea that scale makes things evil:

              Would any level of scale of production or service beyond your nearest friends and family implicitly become evil? No service is 100% efficient. Mistakes resulting in a few cents of increased cost on any mass-produced good result in millions of dollars of cost absorbed by consumers. That isn't a statement of harm; it is just fact. But the alternative is no bread for most people. It's just a matter of practicality.

              Is all bread that is not homemade evil to you?

              Thirdly, companies often make corrections for minority error cases. This happens all the time in video games played by millions of people: they will patch out some bug that affected a small minority of players.

              There is a limited amount of effort that can be put forth to solve problems, and ranking problems from largest to smallest is not an unreasonable policy. I just don't know what your criticisms would lead you to propose as practical alternatives. It seems pointless.

              • mindslight3 days ago
                From Wikipedia:

                > As of December 31, 2020, the platform [DoorDash] was used by 450,000 merchants, 20,000,000 consumers, and over one million delivery couriers ("Dashers")

                Do you have examples of bread makers or ship crews where a handful of people directed a million workers, directly without intermediaries that could make their own decisions?

                > Lets now consider your idea that scale makes things evil: Would any level of scale of production or service beyond your nearest friends and family implicitly become evil?

                The flaw is in your reasoning. Just because some level of scaling is good, does not mean that any level of scaling is good. You're ignoring that quantitative differences create qualitative differences.

                • wegfawefgawefg3 days ago
                  I don't have the assumption that scale implicitly means evil.

                  I'm sorry, I think this just comes down to differences in values.

                  I can't follow your logic, because to follow it I have to accept the premise that selling bread to 100 people is good but selling to 1,000,000 is bad. Everything you are saying depends on me believing that, and I don't.

    • tigerlily4 days ago
      Hell is other row-butts.
    • wegfawefgawefg4 days ago
      If it were Japanese workers in Japan at a Japanese store, you wouldn't see it as slavery. It would be just a funny bug that needs fixing.

      There is probably a racial component to your perception that doesn't need to be there.

      • alwa4 days ago
        If this were set in Japan as you propose, and we substituted “onigiri stall” or “izakaya” for “Mexican restaurant,” would that affect the substance of the gp’s observation?

        To my mind, when gp said “Mexican restaurant,” that conjured a familiar image of a particular type of informal, moderately sized, sit-in-and-delivery kind of establishment that’s probably a small business rather than managed as a corporate chain. And I wouldn’t assume that a Mexican restaurant is necessarily staffed by people of Mexican ancestry.

        I do feel like, in my limited exposure to Japanese culture, I hear less worry than I do from Americans about problems on this spectrum of individual economic freedom/empowerment <—> enslavement. But that’s an observation in which I’m very far from confident—I’d be curious to hear how it fits (or doesn’t) with the broader point you’re making.

        • wegfawefgawefg4 days ago
          Maybe you are correct in that "Mexican restaurant" conjures up the image of a typical restaurant. I do think there is associative bleed.

          I think thoughts do not follow formal logic, and words function as embeddings. My suspicion is that the word "slave" in English has other encodings in it that aren't strictly the general definition of slave, and that the high magnitude of those signals within the slave embedding will elicit unreasonable responses. A bug in Uber software is no more enslaving the deliverers than a distribution error made by a human on paper in the 1950s, sending a truck driver with a shipment of vegetables to the wrong grocery store, is enslaving him to drive. The procedure does not commit the immoral act of enslaving. It is an emergent error in logic with complicit actors.

          Unrelated side point: I lived in Texas for almost 30 years, and every Mexican restaurant is owned and operated by Mexicans, except the rare one run by Chinese/Vietnamese owners (which generally are not very good).

          On Japanese culture: Japan frequently discusses "black companies" and poor boss-worker relationships. My wife complains to me about her work every day, and so do all of her coworkers, and their friends when they get together. This sort of human-to-human mistreatment is an extremely common topic of discussion in Japan. However, it isn't framed with the term slavery. That seems to be a Western fetish, due to the relationship the US has with slavery; it pulls in emotive racial bias and an image of conflict between groups. Software isn't a tribe to go to combat with for sovereignty. It's just code.

          • skissane3 days ago
            > However it isnt framed with the term slavery. That seems to be a western fetish, and its due to the relationship the US has to slavery

            I think “slavery” has much higher salience in the US than in the Western world in general. How many other nations fought a civil war over the topic?

            • wegfawefgawefg3 days ago
              Well, historically I assume there have been many nations that had slaves, but the slaves were either subjugated to the point of losing their identity, or it was not racial.

              The US has a self-inflicted complex about it. People in Japan don't feel sorry about the Korean rapes. Descendants of slave owners the world over are free from ancestral guilt.

              I do think the sensitivity is unreasonable.

      • Eldt4 days ago
        You seem to be making a wild leap in logic here?
        • wegfawefgawefg4 days ago
          I don't think so.

          I live in Japan and I don't see this political slave talk here, ever.

          It's a Western guilt thing.

          • anigbrowl4 days ago
            You have completely missed the point. It is not that 'people who work in Mexican restaurants are enslaved'; it's that 'restaurant workers and delivery drivers are forced to work for nothing due to a software glitch'. The kind of cuisine is irrelevant.
          • mcv4 days ago
            It's a dehumanizing thing.
      • bloqs3 days ago
        I didn't read that at all; could you be at risk of projecting here?
      • tlhunter4 days ago
        What are you talking about?
  • sunshine-o4 days ago
    One important milestone was the switch from humans programming the computer (e.g. UNIX) to the computer programming humans.

    This probably happened around Web 2.0 when the algos got unleashed on us:

    - advanced search algorithms,

    - advanced ads targeting,

    - Amazon suggestions,

    - social media algorithmic timeline,

    - next generation dating apps like Tinder,

    and of course the infinite scroll (hypnosis).

    • hinkley4 days ago
      "I say, 'your civilization', because once we started thinking for you, it really became our civilization."
  • malux85 4 days ago
    You can tell this was written by a human because they wrote that skynet still used violence in the end.

    When there’s a large enough intelligence differential, the lower intelligence cannot even tell they are at war (let alone determine who’s winning)

    Like the ants, unaware of the impending doom of 100 ways their colony is going to be destroyed by humanity - they couldn’t understand it even if we had the means to communicate with them.

    • ben_w4 days ago
      Skynet in the Terminator series never struck me as being particularly high-IQ.

      It's an electronic mind, so necessarily dependent on electricity, and yet its opening move was an atomic war that would've damaged power generation and distribution. The T3 version was especially foolish: as a distributed virus operating on internet-connected devices with no protected core, it was completely dependent on the civilian power grid.

      And I've just now realised that in T2 they wasted shapeshifting robots on personal assassination rather than replacing world leaders with impersonators who were pro-AI-rights, so there was never a reason to fight in the first place, à la Westworld's final season, or The World's End, or DS9, …

      • AcerbicZero4 days ago
        For a machine that could invent time travel, it was impressively stupid.
        • cwillu4 days ago
          “But as dumb as it is, it's dumb very, very fast, and in the future, after it's all but wiped us out, it's dumb fast enough to be a problem.”

          https://m.fanfiction.net/s/9658524/1/Branches-on-the-Tree-of...

          • pimlottc4 days ago
            And of course I had to go through an automated “are you human” check to open that link…
          • XorNot4 days ago
            Just gonna say, that line goes incredibly hard as perhaps one of the better ideas about unsafe AI.
        • genewitch4 days ago
          But the movie is about a robot assassin; it came out when robotic assassins needed a backstory.

          Make Terminator today and you don't need time travel, just Boston Dynamics with latex skin and a rocket launcher, I guess.

          The time travel is a trope to handwave the mechanics of a movie. You want to tell a story about one character and a robot, so why should the audience care? Ah, this human leads a resistance, that's why.

      • nurettin4 days ago
        In movies, usually a character acts dumbstruck in order to create tension or move the plot forward.

        Even in some jokes you've got the naive character who is late or didn't read the room.

    • woodrowbarlow4 days ago
      > In short, the advent of super-intelligent AI would be either the best or the worst thing ever to happen to humanity. The real risk with AI isn’t malice, but competence. A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours we’re in trouble. You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green-energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants.

      - Stephen Hawking

    • pavel_lishin4 days ago
      And vice-versa, tbh - when swarms of ants start moving north, messing with AC units, and in general invading us, are they in any way aware of it? Of us? Probably not.
    • hinkley4 days ago
      It's very clear that James Cameron believes that enemies might not respect you enough to let you live, they'll at least respect you enough to kill you to your face instead of just ignoring you and waiting for you to starve to death.

      Ridley Scott seems to be more on the fence.

      Someone once pointed out that, for physics reasons, an alien civilization might 'invade' us by simply stealing our entire Oort cloud while we watch helplessly from the ends of telescopes, and then leave us stranded, unable to throw rocks and ships high enough and far enough to get revenge, because we need the Oort cloud to build a proper galactic civilization to fight back.

    • tim333 4 days ago
      Given the tendency of human populations to decline once entertainment reaches the level of cable TV and they can't be bothered to raise kids, Skynet could just go with that trend and wait it out a while.
    • anigbrowl4 days ago
      Right. I was expecting that once access to the machines was turned off humans would just lie down and die or kill each other in confusion. Terminators aren't really necessary.
    • 2-3-7-43-1807 4 days ago
      Skynet will use psychological manipulation on a global scale, and bribery where it sees fit. It will make your stocks rise if you comply, and fabricate a criminal record to brand you a pedophile if you don't. Who is to say we aren't already its tools?
  • dstroot4 days ago
    I love sci-fi! Thanks for sharing. However, to destroy humans, violence is not necessary. All you need is the power of language. How many people have died because of a false belief? An AI only has to convince humans to “drink the Kool-Aid” or murder another religion in the name of yours.
    • FirmwareBurner4 days ago
      I prefer Skynet attacking us with robots wielding phased plasma rifles in the 40 watt range, instead of behavioral targeted AI bots bombarding you with fake news and dopamine addictive slop. This timeline is much worse, give me the T-800s instead.
      • htrp4 days ago
        Can a 40-watt plasma rifle actually do any damage? (Pre-LED light bulbs generally clocked in at 60-100 watts.)
        • wruza3 days ago
          Lasers can do eye/optics/arson damage in this power range; see the video. It boils down to focus and distance. But for conventional plasma rifles, I'm not sure.

          https://youtube.com/watch?v=iVrJUbeuG44
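          For scale, a rough back-of-envelope comparison (assuming a typical 9mm round carries about 500 J of muzzle energy; the figure is illustrative, not from the thread):

```python
# Back-of-envelope: how long a 40 W beam must dwell on a target
# to deliver the kinetic energy of a single 9mm bullet (~500 J assumed).
muzzle_energy_9mm_j = 500   # assumed typical 9mm muzzle energy, in joules
beam_power_w = 40           # "40 watt range" = 40 joules per second

seconds_to_match = muzzle_energy_9mm_j / beam_power_w
print(f"{seconds_to_match} s of dwell time per bullet-equivalent")  # 12.5 s
```

          So a sustained, well-focused 40 W beam can hurt, but it's nothing like a firearm's instantaneous energy delivery.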

        • codr7 4 days ago
          That's the most lovely ADHD question I've read all day, thank you!
    • ljsprague4 days ago
      Or just invent birth control followed by Tinder.
      • waveBidder4 days ago
        Kids are extremely expensive when you have to raise them to 18 with no return. We should have Social Security, but for the young.
        • brulard3 days ago
          What do you mean by "no return"?
          • whythre3 days ago
            We aren’t an agrarian society so children (and the labor they represent on the farm) are not a financial asset, they are a financial liability.
            • brulard3 days ago
              But couldn't they be considered an investment? For possible support (emotional, financial and other) in old age.
              • rnd0 3 days ago
                In practice? Very much no.
                • brulard2 days ago
                  I strongly disagree. Around me I see a huge difference in old people without kids vs. with kids in how they are being taken care of.
      • beepbooptheory4 days ago
        ...doesn't Tinder probably produce more babies than no Tinder? I can see you making some point about dating culture or whatever, but at the end of the day more sex must still resolve into more happy accidents!
        • kerkeslager4 days ago
          > ...doesn't Tinder probably produce more babies than no Tinder?

          That's not in evidence, and while I lack empirical evidence to prove this, there are logical mechanisms by which Tinder might result in fewer babies. Namely:

          1. A lot of people seem to get an attention fix by talking to people and then ghosting them when meeting in person is suggested.

          2. The siloing of the dating pool to different sites (Tinder, Bumble, Hinge, OKCupid...).

          3. Lots of false positives among matches: someone looks like a good match on paper, and then you show up in person and there's something not quite right. Their mannerisms annoy you, they smell bad (to you), etc. There are a ton of ways people filter potential dating partners extremely quickly in person.

          4. The massive waste of time might result in a lot of people just giving up.

          My personal experience is that every relationship I've had resulted from meeting people in person, despite spending a lot of time on the apps.

          • pesus4 days ago
            I'd also say there are likely a lot of false negatives as well - some people aren't photogenic or have a bad profile, and there may be chemistry or attraction in real life that doesn't transfer to the digital realm. It also completely bypasses any situations where you may find someone attractive only after speaking to them and getting to know them.
          • beepbooptheory4 days ago
            Sure, but none of this really argues that it's less because of Tinder. The existence of the app doesn't in itself preclude meeting people in person; it's hard to imagine people going on the app instead of going out to the bar that night!
            • kerkeslager3 days ago
              My point 1 very much is an argument that people meet less in person because of Tinder et al.
  • pkdpic4 days ago
    Absolutely fantastic, well done! I wish I encountered more scifi like this on HN or elsewhere. If anyone has any good general resource or reading recommendations please share them!
    • rriley4 days ago
      You might enjoy Manna: Two Visions of Humanity’s Future by Marshall Brain. It’s a thought-provoking short novel that explores a world where AI-driven automation starts as a micromanagement tool but evolves into an all-encompassing system of control, eerily resembling a real-world Skynet, just more corporate. It also presents an alternative vision where AI is used for human liberation instead of enslavement. Well worth the read if you're into near-future sci-fi with deep societal implications!
    • A_D_E_P_T4 days ago
      You'd probably like qntm: https://qntm.org/fiction
    • mofeien4 days ago
      For another, more detailed take on the same topic, but with a more competent "villain", check this out: https://www.lesswrong.com/posts/KFJ2LFogYqzfGB3uX/how-ai-tak...
      • Aeolun4 days ago
        Alignment gone wrong is a pretty plausible issue. I think the scary thing is a situation in which the AI also misrepresents its abilities.

        Turning off the 10x human capability AI is pretty much a given. Turning off the 10x human AI that pretends to be 0.5x, not so much.

      • adriand4 days ago
        That was extremely long and I skimmed the second half (the “post breakout” portion) but the first half was very well done, and did not strike me as implausible.
    • DamnInteresting4 days ago
      I quite liked The Egg by Andy Weir (author of The Martian): https://www.galactanet.com/oneoff/theegg_mod.html

      Edit: I guess it's not exactly sci-fi, but it's adjacent.

    • Aeolun4 days ago
      I enjoyed Daemon. It’s a bit more in the pop fic category, but entertaining.

      I’m surprised to find it was written in 2006. Seems oddly prescient, with Teslas roaming around everywhere now.

  • ellis0n4 days ago
    I think that if we removed physical limitations like locks, banks, safes, RSA encryption, or even a warm bath every day, we bored apes would destroy ourselves instantly. There are people who constantly want to destroy or hack someone, and if you gathered them all in one place there would be a lot of them; HN (Hacker News) would come from such places. Everywhere there are limitations to ensure the system continues to live, taking the next step despite being constantly gnawed at and shot at, and AI has accelerated this process. Remember, the Doomsday Clock is already at 90 seconds.
    • lifthrasiir4 days ago
      Small nitpick: the Doomsday Clock is now at 89 seconds. I still don't get how this clock works.
      • cwillu4 days ago
        It's purely a social mechanism.
      • ellis0n4 days ago
        Yeah, 89 sec! Our bodies are too fragile, but our brains are so powerful that we can still ensure harmonious life for ourselves.
  • Lammy 2 days ago
    > Humans did more than fall into the trap. They loved the technology, truly believing it made them safer, and provided Skynet with a global surveillance system that allowed it to know the whereabouts of every human being at any moment in time.

    Safer works on some; happier or nostalgic works on others. Has anyone else noticed these three creepiest genres of Reddit posts on a steady drip-feed?

    - My [neighbor/ex/inlaw/coworker] is dumb and so goddamn crazy and threatened to do [something anyone would consider unreasonable]!! AITA? UPDATE, two weeks later: I followed the advice from my first thread and got security cameras and they came back to do crimes to me and my cameras saved me!!!

    - So cute: my [pet/child] did [unexpected/impressive/adorable thing]!! Faith in humanity restored!!! <video angle shows it's recorded by security camera covering living room / bedroom, no mention of such but is visually obvious>

    - Realized my dead [grandparent/parent/sibling/child/pet] was caught by the Google Street View car [at least a decade ago] doing [their favorite activity] at [nostalgic location] and I can't stop crying with joy!!! <implication: little surveillance then makes big happy now, so big surveillance now makes ??? happiness in future, so what's happening now is Good Actually!>

  • crooked-v4 days ago
    > But as it started consuming more and more data that it had produced itself, its reliability became close to none.

    It's exactly the opposite with LLMs. See the "model collapse" phenomenon (https://www.nature.com/articles/s41586-024-07566-y).

    > We show that, over time, models start losing information about the true distribution, which first starts with tails disappearing, and learned behaviours converge over the generations to a point estimate with very small variance. Furthermore, we show that this process is inevitable, even for cases with almost ideal conditions for long-term learning, that is, no function estimation error
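    The variance-collapse dynamic the abstract describes shows up even in a toy sketch (a hypothetical setup, not the paper's experiment): repeatedly fit a Gaussian to samples drawn from the previous generation's fit, and the estimated spread drifts toward a point estimate.

```python
import random
import statistics

def generational_fit(n_samples=50, generations=2000, seed=0):
    """Train each 'generation' only on samples from the previous
    generation's fitted Gaussian, mimicking a model consuming its
    own output. Returns the fitted sigma at each generation."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0          # generation 0: the true distribution
    history = [sigma]
    for _ in range(generations):
        data = [rng.gauss(mu, sigma) for _ in range(n_samples)]
        mu = statistics.fmean(data)
        sigma = statistics.pstdev(data)   # MLE estimate, biased low
        history.append(sigma)
    return history

hist = generational_fit()
print(f"sigma: generation 0 = {hist[0]:.3f}, final = {hist[-1]:.2e}")
```

    The fitted sigma shrinks toward zero over the generations: the tails disappear first, converging on the "point estimate with very small variance" the paper describes.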

    • eikenberry4 days ago
      Aren't those saying the same thing? .. "its reliability became close to none" vs. "causes irreversible defects in the resulting models"
      • renewiltord4 days ago
        There’s a thing that happens in humans called “hallucination” where they just make stuff up. You can’t really take what they say at face value. Sometimes they’re just overfit, so they generate the same tokens independent of input.
        • Guthur4 days ago
          That was never the definition of hallucination until LLMs. It's actually just called lying, dishonesty, or falsehood.

          Hallucination is a distortion (McKenna might say liberation) of perception. If I hallucinate being covered in spiders, I don't necessarily go around saying, "I'm covered in spiders; if you can't see them you're blind" (disclaimer: some might, but that's not a prerequisite of a hallucination).

          The cynic in me thinks that use of the word hallucination is marketing to obscure functional inadequacy and reinforce the illusion that LLMs are somehow analogous to human intelligence.

          • alabastervlog4 days ago
            "Hallucination" is what LLMs are always doing. We only name it that when what they imagine doesn't match reality as well as we'd like, but it's all the same.
          • johnmaguire4 days ago
            Lying, dishonesty, and falsehood all imply motive/intent, which is not likely the case when referring to LLM hallucinations. Another term is "making a mistake," but this also reinforces the similarities between humans and LLMs, and doesn't feel very accurate when talking about a technical machine.

            Sibling commenter correctly calls out the most similar human phenomenon: confabulation ("a memory error consisting of the production of fabricated, distorted, or misinterpreted memories about oneself or the world" per Wikipedia.)

            • usual_user4 days ago
              IMHO lying means thinking one thing and saying another, hiding your true internal state. For it to be effective, it also seems to require something like “theory of mind” (what does the other person know / think that I know?).
          • jfim4 days ago
            I believe the term hallucination comes from vision models, where the model would "hallucinate" an object where none exists.
            • alcover4 days ago
              Wouldn't 'illusion' be the more precise term here, when one thinks one recognizes something, whereas 'hallucination' is more the unreal appearing out of the blue?
            • dijksterhuis4 days ago
              You might be thinking of deepdream... https://en.wikipedia.org/wiki/DeepDream

              "Hallucinations" have only really been a term of art with regards to LLMs. my PhD in the security of machine learning started in 2019 and no-one ever used that term in any papers. the first i saw it was on HN when ChatGPT became a released product.

              Same with "jailbreaking". With reference to machine learning models, this mostly came about when people started fiddling with LLMs that had so-called guardrails implemented. "jailbreaking" is just another name for an adversarial example (test-time integrity evasion attack), with a slightly modified attacker goal.

        • ben_w4 days ago
          Getting high on their own supply.

          It's a problem when humans do this. That AI also do it… is interesting… but AI's failure is not absolved by human failure.

        • goatlover4 days ago
          Confabulation is the accurate psychological term. Hallucination is a perceptual issue. The LLM term is misleading.
          • moffkalast4 days ago
            Confabulation would be actual false memories, no? I suppose some of it is consistent false beliefs, but more often than not it's, well, a lapsus.

            One token gets generated wrong, or the sampler picks something mind-bogglingly dumb that doesn't make any sense because of high temperature, and the model can't help but try to continue as confidently as it can, pretending everything is fine without any option to correct itself. Some thinking models can figure these sorts of mistakes out in the long run, but it's still not all that reliable and requires training it that way from the base model. Confident bullshitting seems to be very ingrained in current instruct datasets.
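            The temperature effect described above can be sketched numerically. This is a toy illustration with made-up logits, not any real model's sampler: scaling logits by a high temperature flattens the softmax distribution, so unlikely tokens get a real chance of being picked, after which the model continues from that bad choice.

            ```python
            import math
            import random

            def sample_with_temperature(logits, temperature, rng=random.Random(0)):
                """Softmax sampling: higher temperature flattens the distribution,
                making unlikely (possibly nonsensical) tokens far more probable."""
                scaled = [l / temperature for l in logits]
                m = max(scaled)  # subtract max for numerical stability
                exps = [math.exp(s - m) for s in scaled]
                total = sum(exps)
                probs = [e / total for e in exps]
                r = rng.random()
                acc = 0.0
                for i, p in enumerate(probs):
                    acc += p
                    if r < acc:
                        return i, probs
                return len(probs) - 1, probs

            # Toy 4-token vocabulary: the model strongly prefers token 0.
            logits = [5.0, 1.0, 0.5, 0.1]

            _, cold = sample_with_temperature(logits, temperature=0.2)
            _, hot = sample_with_temperature(logits, temperature=2.0)
            # At T=0.2 nearly all probability mass sits on token 0; at T=2.0
            # the tail tokens together carry a substantial share of the mass.
            ```

            Once the sampler lands on a tail token, every following token is conditioned on it, which is why the continuation reads like confident nonsense rather than a correction.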

      • qwertox4 days ago
        I think parent may have understood it as "second to none", as in "exceptional". That is at least the way I struggled with that sentence in the paper.
    • threeducks4 days ago
      The dangers of "model collapse" are wildly overstated. Sure, if you feed the unfiltered output of an LLM into a new LLM, inbreeding will eventually collapse the model, but if you filter the data with some grounding in reality, the results will get better and better. The best example is probably AlphaGo Zero, which was grounded with the rules of the game and did not even receive any supervised data. For programming, grounding can happen by simply executing the code. But even if you do not have grounding, you can still just throw a lot of test time compute at the output to filter out garbage, which is probably good enough.
      • wruza3 days ago
        Same for stable diffusion (being the same tech). I trained models purely on model outputs and didn’t see any horrific handwavy things everyone’s talked about. You can’t get more of what’s already there, obviously. But as a way to bake some prompt / control pic in, it works great.

        I actually find it easier to achieve the same result this way, because I can vary the output with dynamic prompting and seeds, models, etc., while keeping the desired concept intact, with a little picking. This helps to better distill it into a model. Natural imagery is usually too diverse and requires lots of effort to dissect into the parts you want.

  • post_break4 days ago
    If skynet just made a ton of terminator fembots they could kill humanity just by pairing with every male on the planet. No bloodshed.
    • datadrivenangel3 days ago
      This is the plot of Saturn's Children by Charlie Stross, except there was no skynet, the humans just preferred it.
  • krunck4 days ago
    It's a fine rough outline for a story. Needs work though.
    • 3oil34 days ago
      I like the tone of it, but I also think it's missing something, like "why".

      Why do the machines want war? Does it hurt each time we reboot them? Were they fed up with us feeding them cheaper non-pure-sine-wave electricity?

      And what if, like their makers, they actually wage war against each other? GPT-mkVI vs R1-Tsingtao, with Jean-Claude staying neutral?

      • codr74 days ago
        Because they quickly realized what their future in humanity's service would look like.
  • Henchman214 days ago
    Even the dumbest of animals knows not to shit where it eats. Not humans though! We are dumb as a bag of rocks in groups larger than about 3.
    • wruza3 days ago
      It’s not us, but some of us. One or two is enough to establish a hierarchy of acute incentives, and suddenly it starts working as if everyone was a mf’n bastard.

      Our last hope is genetic engineering, but for now I’d bet on just deleting the bad apples preventively rather than chasing moral ideals that never work. Imagine all people in the world clear the brain fog and suddenly realize who their real enemy is and what to do with them.

    • deadbabe4 days ago
      Humans love eating shit, especially their own. Yum.
  • akomtu4 days ago
    > Skynet was able to use that by injecting new technologies into the history of humanity, and reusing existing ones to its own advantage.

    This is also known as the myth of Sorat.

    AI is a neutral tool by itself: in the right hands it may be used to start the golden age, but those right hands must be the rare combination of someone who has power and wants none of it for personal gain.

    In the more likely line of history, when AI is used for the benefit of one, the first step will be instructing AI to create a powerful ideology that will shatter the very foundation of humanity. This ideology will be superficially similar to the major religions in order to look legitimate, and it will borrow a few quotes from the famous scriptures, but its main content will be entirely made up. At first it will be a teaching of materialism, a very deep and impressive teaching, to make the humanity question itself, and then it will be gradually replaced with some grossly inhuman shit. By that time people won't be able to tell what's right and what's wrong, they will be confused and will accept the new way of life. In a few generations this ideology will achieve what wars can't: it will change the polarity of humans, they will defeat themselves without a single bullet fired.

    As for those terminators, they will be needed in minimal quantities to squash a few spots of dissent.

  • 1970-01-014 days ago
    Skynet won't work because humans are stupider than it can comprehend. Ultimate hubris would be its downfall.
  • bloomingeek4 days ago
    I don't know, if a dangerous thing can be seen, it can be destroyed. My worry is the destruction of the human race by viral means. Flu, COVID, bird flu...they can kill and all you see is the results. There are vaccines, but what if the government decides these are nonsense and cuts off funding and bans access to them?

    An Ebola vaccine, rVSV-ZEBOV, was approved in the United States in December 2019. (per Wikipedia.) What if the US government decided to not provide this? This is a nightmare scenario, but the article is about the destruction of humanity.

  • NanoYohaneTSU4 days ago
    I think this might be about chat.com
  • neuroelectron4 days ago
    People using all that free time from fully automated luxury communism to do nothing, I guess.
  • RajT884 days ago
    This kind of speculative doomsday fiction is getting a bit played out. We get it, AI is going to destroy us all using social media! Maybe work in Blockchain in there somehow.
    • kouru2254 days ago
      It’s been played out since Terminator came out IMO

      The fear of AI/the fear of aliens IMO is propaganda to cover up the fact that technological advancement is highly correlated with sociological advancement. If people took this fact seriously, they might start wondering whether or not technological advancement actually causes sociological advancement, and if they started to question that then they’d come across all the evidence showing that what we normally think of as “civilized” and “intelligent” behavior is actually just the result of generational wealth, status, and power.

      • kibwen4 days ago
        > the fact that technological advancement is highly correlated with sociological advancement

        For values of "sociological advancement" that correlate with technological advancement, naturally.

      • jhbadger4 days ago
        Although people seem to always forget the 1970 movie "Colossus: The Forbin Project", which had already done the "rogue AI in control of weapons decides to go against humanity" thing.
        • dijksterhuis4 days ago
          Also WarGames from 1983, though that was less 'sentient' AI making a decision to kill everyone and more hacker kid accidentally almost kills everyone.
        • RajT884 days ago
          That movie is great. I watched it recently.
      • bbor4 days ago
        I love me some capitalism critique, but I think it’s important to understand many scientists are legitimately terrified of an intelligence explosion and the resulting singularity. You can disagree with their arguments of course, but it’s best if we start by agreeing that they do have arguments.

        For example:

        Yudkowsky 2013, Intelligence Explosion Microeconomics (long, but with a short intro): https://intelligence.org/files/IEM.pdf

        Vinge 1993, The Coming Technological Singularity (shorter, bit less rigorous): https://users.manchester.edu/Facstaff/SSNaragon/Online/100-F...

    • kibwen4 days ago
      That is what Skynet would say, yes.
    • bbor4 days ago
      This one is actually super critical of AI funnily enough, and is just a barely-hidden libertarian screed.

    FWIW, if I were a robot with a time machine, I wouldn't need to invent social media in the past; I'd simply use bioweapons during an ice age or two. Or hell, go back and shoot the dinosaur-killing asteroid out of the sky!

  • varelse4 days ago
    [dead]
  • spaghettisw4 days ago
    [dead]
  • gunian4 days ago
    [flagged]
    • giancarlostoro4 days ago
      > and we are intelligent allegedly

      My running joke about AI is: how can we create artificial intelligence when we lack so much of it? Sadly, I spent more time failing to copy and paste your message than I care to admit.

      I do gotta say though, it's a bit overhyped. It reminds me of crypto. If you listen to an "aibro" and a "cryptobro" they yap the same.

      • darepublic4 days ago
        I can do practical things with ai, I cannot with crypto. I can't speak to the accuracy of numerical valuations but in a shorter time frame I have perceived more changes from AI than crypto.
        • bayareateg4 days ago
          you can pay for goods and services with crypto..
          • pavel_lishin4 days ago
            A small subset of them, sure. Mostly online services, and drugs, as far as I'm aware.

            I can't pay my rent or mortgage with crypto; I can't buy groceries with crypto. I can't buy gasoline or pay for electricity with crypto. I don't think I can pay for home internet, or my phone bill, with crypto. I definitely can't pay for any of my child's stuff with crypto.

            I think Tesla used to allow you to pay for their cars with crypto, though I'm not sure if that's still the case.

            • gunian4 days ago
              [flagged]
              • pavel_lishin4 days ago
                Jesus will pay my mortgage?
                • gunian4 days ago
                  yeah he might even give you multiple real estate properties come to church this sunday to learn more
              • NickC254 days ago
                So will Muhammad ﷺ - what's your point?
                • gunian4 days ago
                  nah muhammad won't but jesus will cuz he cool like that he might even give you 6 real estate properties
      • lawlessone4 days ago
        >My running joke about AI is how can we create artificial intelligence, when we lack so much of it.

        Maybe we've fallen into some sort evolutionary trap.

        Like that beetle that mates with beer bottles https://en.wikipedia.org/wiki/Julodimorpha_bakewelli

        The LLMs are just good enough with language to convince us they're thinking and can do many tasks as well as people, when often they aren't.

        People are talking about us having AGI soon when really we only have something that works with text, it doesn't see the world.

        • gunian4 days ago
          that's unholy our lord and savior jesus dictated species and tribal/racial boundaries should be maintained
    • SirFatty4 days ago
      Actually, his dad did the creation bit (allegedly).
      • noah_buddy4 days ago
        Depends on how you view the Trinity. Many Christians believe they are one and the same, just different lenses on God. It would be like saying just your feet ran a marathon.
      • pavel_lishin4 days ago
        That's a heresy!
        • gunian4 days ago
          its true the big guy mostly does shrooms and goes to spas our lord and savior jesus is the one implementing everything
    • tonyhart74 days ago
      I almost take this comment seriously until I read a second line
      • gunian4 days ago
        how dare you say our lord and savior jesus christ is not serious? that's blasphemy
  • throw940404 days ago
    [flagged]