153 points by smartmic 18 hours ago | 33 comments
  • perching_aix 17 hours ago
    > Rather than enhancing our human qualities, these systems degrade our social relations, and undermine our capacity for empathy and care.

    I don't genuinely expect the author of a blogpost who titles their writing "AI is Dehumanization Technology" to be particularly receptive to a counterargument, but hear me out.

    I can think of few things as intellectually miserable as the #help channels of the many open source projects on Discord, for example. If I had to wager a guess: if these projects all integrated LLMs into their chatbots for just a few bucks and let them take the brunt of the interactions, all participants on either side would be left with much more capacity to maintain and express empathy and care, or to nurture social connections. This extends beyond these noncommercial contexts of course, to stuff like customer service for example.

    • reval 16 hours ago
      The counter-counter-argument is that the messy part of human interaction is necessary for social cohesion. I’ve already witnessed this erosion prior to LLMs in the rise of SMS over phone calls (personal) and automated menu systems for customer service (institutional).

      It is sad to me that the skills required to navigate everyday life are being delegated to technology. Pretty soon it won’t matter what you think or feel about your neighbors, because you will only ever know their tech-mediated facade.

      • ethbr1 10 hours ago
        > the messy part of human interaction is necessary for social cohesion

        Also, efficiency.

        I think everyone in tech consulting can tell you that inserting another party (outsourcing) in a previously two-party transaction rarely produces better outcomes.

        Human-agent-agent-human communication doesn't fill me with hope, beyond basic well-defined use cases.

      • tobr 16 hours ago
        > It is sad to me that the skills required to navigate everyday life are being delegated to technology.

        Isn’t this basically what technology does? I suppose there is also technology to do things that weren’t possible at all before, but the application is often automation of something in someone’s everyday life that is considered burdensome.

      • perching_aix 16 hours ago
        I'm not sure I'd agree with characterizing heavily asymmetric social interactions, such as customer service folks assisting tens or hundreds of people with the same issues every week, as a "necessarily messy part of human interaction for social cohesion".
        • svieira 16 hours ago
          It is well known that it is very hard to get in contact with a human at Google when you have a problem. And then we wonder why Google never seems to understand its user base.
          • perching_aix 16 hours ago
            I don't think these two are actually related, and the automated contact options Google and other megacorporations provide were significantly behind on these developments the last time I tried interacting with them. Meta, for example, has basically no support line at all. There was even a thread here a few days ago chronicling that.
      • jofla_net 16 hours ago
        It is, in fact, all insulation. The technology, that is. It cuts out face-to-face, vid-to-vid, voice-to-voice, and even direct text as in SMS or email, to the point that agents will advocate for users instead of people even typing back to one another. Until and unless it affects the reproduction cycle, and I think it already has, people will fail to socialize, since there is also zero customary expectation to do so (that was the surprisingly good thing about old-world customs); only the overtly gregarious will end up doing it. Kind of a long-tailed hyperbolic endgame, but, well, there it is.

        Edit: one point I forgot to make is that it has already become absurd how different someone's online persona or confidence level is from when they're AFK; it's as if they've been reduced to an infantile state.

    • tines 17 hours ago
      > I can think of few things as intellectually miserable as the #help channels of the many open source projects on Discord, for example. If I had to wager a guess: if these projects all integrated LLMs into their chatbots for just a few bucks and let them take the brunt of the interactions, all participants on either side would be left with much more capacity to maintain and express empathy and care, or to nurture social connections. This extends beyond these noncommercial contexts of course, to stuff like customer service for example.

      This would be a good counter if this were all that this technology is being used for.

      • perching_aix 17 hours ago
        I don't think it's necessary for me to counter everything they're saying. They're making a blanket judgement - as long as I can demonstrate a good counter to one part of it, the blanket judgement will fail to hold.

        They can of course still argue that it's majority-bad for whatever list of reasons, but that's not what bugs me. What bugs me is the absolute tunnel vision of "for principled reasons I must find this thing completely and utterly bad, no matter what". Because this is what the title, and the tone, and just about everything else in this article comes across as to me, and I find it equal parts terrifying and disagreeable.

        • asciimov 16 hours ago
          > What bugs me is the absolute tunnel vision of "for principled reasons I must find this thing completely and utterly bad, no matter what".

          AI companies also only sell the public on the upside of these technologies. Behind closed doors they are investing heavily in them in the hope of reducing or eliminating their labor costs, with no regard for any damage to society.

          • perching_aix 16 hours ago
            I don't think fighting misrepresentation with misrepresentation is a winning strategy.
      • Lerc 16 hours ago
        >This would be a good counter if this were all that this technology is being used for.

        Do you hold the position that a thing is bad because it is possible to do harm, or that it is predominantly causing harm?

        Most criticisms cite examples demonstrating the existence of harm because proving existence requires a single example. Calculating the sum of an effect is much harder.

        Even if the current impact of a field is predominantly harmful, it does not follow that the problem is with what is being attempted. Consider healthcare: a few hundred years ago, much of healthcare did more harm than good, and charlatans and frauds were commonplace. Was that because healthcare itself was a bad thing? Was it a mistake to even go down that path?

        • adamc 16 hours ago
          I don't think the path was fixed by healthcare, per se. It was fixed by adopting scientific investigation.

          So I think your argument is kind of misleading.

          • Lerc 16 hours ago
            Why misleading?

            I am advocating adopting methods of improvement rather than abandoning the pursuit of beneficial results.

            I think science was just a part of the solution to healthcare; much of the advance was also in what was considered allowable or ethical. A great many harmful medical practices are still used today in places where regulation is weak.

            Science has done little to stop those harms. The advances that led to the requirement for a scientific backing were social. That those practices persist in some places is not a scientific issue but a social one.

            • adamc 16 hours ago
              Because adopting "doctors", for example, isn't really what made for better healthcare. We had doctors for centuries (arguably millennia) who were useful in very limited cases, and probably harmful most of the rest of the time. What made for better healthcare was changing the way we investigated problems.

              That ultimately enabled "doctors" to be quite useful. But the fact that the "profession" existed earlier is not what allowed it to bloom.

        • ToucanLoucan 16 hours ago
          > Do you hold the position that a thing is bad because it is possible to do harm, or that it is predominantly causing harm?

          Mere moments later...

          > Even if the current impact of a field is predominantly harmful

          So let's just skip the first part then, you conceded it's predominantly harmful. On this we agree.

          > it does not follow that the problem is with what is being attempted.

          Well, it's not a logical 1-to-1, no. But I would say if the current impact of a field is predominantly harmful, then revisiting what is being attempted isn't the worst idea.

          > Consider healthcare: a few hundred years ago, much of healthcare did more harm than good, and charlatans and frauds were commonplace. Was that because healthcare itself was a bad thing? Was it a mistake to even go down that path?

          If OpenAI and company were still pure research projects, this would hold some amount of water, even if I would still disagree with it. However, that ignores the context that OpenAI is actively (and under threat of financial ruin) turning itself into a for-profit business, and is actively selling its products, as are its competitors, to firms in the market with the explicit notion of reducing headcount for the same productivity. This doesn't need a citation; look at any AI product marketing and you see that a consistent theme is the removal of human labor and/or interaction.

          • Lerc 16 hours ago
            >> Even if the current impact of a field is predominantly harmful

            >So let's just skip the first part then, you conceded it's predominantly harmful. On this we agree.

            I'm afraid if you interpret that statement as a concession of a fact, I don't think we can have a productive conversation.

    • m4rtink 17 hours ago
      I am not sure about your experience, but these types of channels seem to mostly have the issue of people being too busy to reply. When they do, there is often an interesting interaction, and this is how users often become contributors to the project over time.

      Sure, if you want to make sure you don't get any more contributors, you can try to replace that with a chatbot that will always reply immediately, but might just be wrong like 40% of the time, is not actually working on the project, and will certainly not help in building social interactions between the project and its users.

      • perching_aix 16 hours ago
        I have participated in such channels for multiple years on the assisting side, and have been keeping in touch with some of the folks I knew from there still doing it. Also note that the projects I helped around with were more end-user focused.

        Most interactions start with users being vague. This can already result in some helpers getting triggered, and starting to be vaguely snarky, but usually this is resolved by using prepared bot commands... which these users sometimes just won't read.

        Then the misunderstandings start. Or the misplaced expectations. Or the lies. Or maybe the given helper has been having a bad day, but due to their long time presence in the project, they won't be moderated out properly. And so on. It's just not a good experience.

        Ever since I left, I've been sent screencaps of various kinds of conversations. In some cases, the user was being objectively insufferable - I don't think it's fair to expect a human to put up with that. Other times, the helper was being unnecessarily mean - they did not appreciate my feedback on that. Neither happens with LLMs. People don't grow resentful of the never-ending horde of what feels like increasingly clueless users, and innocent folk don't get randomly chewed out for not living up to the optimality expectations of those who tend to thousands of similar cases every week.

        • whatevertrevor 16 hours ago
          I think the solution is neither AI nor human in this case.

          While direct human support is invaluable in many cases, I find it really hard to believe how completely our industry has forgotten the value of public support forums. Here are some pure advantages over Discord/Slack/<Insert private chat platform of your liking>:

          - Much much better search functionality out of the box, because you can leverage existing search engines.

          - From the above it follows that high value contributors do not need to spend their valuable time repeatedly answering the same basic questions over and over.

          - Your high value contributors don't have to be employees of the company, as many enthusiastic power users often participate and contribute in such places.

          - Conversations are _much_ easier to follow without having to resort to hidden threads and forum posts on Discord that no one will ever read or search.

          - Over time you build a living library of supporting documentation, instead of useful information being strewn across many tiny conversations over months.

          - No user expectation to be helped immediately. A forum sets the expectation that this is an async method of communication, so you're less likely to see entitled aggravating behavior (though you won't see many users giving you good questions with relevant information attached even on forums).

          • perching_aix 16 hours ago
            I think you're forgetting about how e.g. StackOverflow, a Q&A forum, exhibited basically the exact same issues I just ran through. In general, the history of both the unnecessary hostility of helpers and the near-insulting cluelessness and laziness of users on public forums is a very long and extensive one. It's not a format issue, I don't think.
            • whatevertrevor 16 hours ago
              I'm surprised you read my post and thought I was trying to say that using more public forums and less private chats will solve the so-called "human issue". My argument is not about making customer support more pleasant, or users less hostile. It's about making information more accessible so people can help themselves.

              If we make information more accessible, support will reduce in volume. Currently there's a tendency for domain experts to hoard all relevant information in their heads, and dole it out at their discretion in various chat forums. Forums whose existence is often not widely known to begin with (not to mention gated behind making accounts in certain apps the users may or may not care about/want to).

              So my point is: instead of trying to automate a decidedly bad solution to make it scalable and treating that as a selling point of AI, we could instead make the information more accessible in the first place?

              • perching_aix 16 hours ago
                The number of messages in the #help channels I participated in was limited not by the number of participants on either side, but by the speed of the chat. If it moved too quickly, people would hold off from posting.

                This meant you had a fairly low and consistent ceiling for messages. What you'd also observe over the years is a gradual decline in question quality (according to every helper, that is). How come?

                Admittedly we'll never really know, so this is speculation on my part, but I think it was exactly because of the better availability of information. During these years, we tried cultivating other resources and implementing features with the specific goal of improving UX. It worked. So the only people still "needing" assistance were those who failed to navigate even this better UX. Hence, worse questions, yet never ending.

                Another issue with this idea is that navigating through the sheer volume of information can become challenging. AWS has pretty decent documentation, for example, but unless you already know your way around a given service's docs, it's a chore to find anything. Keyword search won't be super helpful either. This is because it's a lot of prose, and not a lot of structure. Compare this to the autogenerated docs of the AWS CLI, and you'll find a stark difference.

                Finding things, especially among a lot of faff, is tiring. Asking a natural-language question is trivial. The rest is on people believing that AI isn't the literal devil, whatever blogposts like the OP would have one believe.

        • Vegenoid 16 hours ago
          Do we have examples of LLMs being used successfully in these scenarios? I’m skeptical that the insufferable users will actually be satisfied and able to be helped by an LLM, unless the LLM is actually presented as a human, which seems unethical. It also hinges on an LLM being able to get the user to provide the required information accurately, without lying or simply getting frustrated, angry, and unwilling to cooperate.

          I’m not sure there is a solution to help people who don’t come to the table willing to put in the effort required to get help. This seems like a deep problem present in all kinds of ways in society, and I don’t think smarter chatbots are the solution. I’d love to be wrong.

          • perching_aix 16 hours ago
            > Do we have examples of LLMs being used successfully in these scenarios?

            If such a dataset exists, I don't have it. The most I have is the anecdotal experience of not having to be afraid of asking silly questions of LLMs, and of learning things I could then cross-validate as correct without tiring anyone.

    • eikenberry 16 hours ago
      > [..] if these projects all integrated LLMs into their chatbots for just a few bucks [..]

      No matter your position on AI helpfulness, asking volunteers to not only spend time helping support a free software project but to also pony up money is just doubling down on the burden free software maintainers face, as was highlighted in the recent libxml2 discussion.

    • perching_aix 16 hours ago
      A lot of the projects that maintain Discord servers will, in my experience, receive more than enough donations to make up for the ~$5 it'd take to serve the traffic that hits their Discord for help with AI. Yes, I did run the numbers. It's so (intentionally) cheap, this is a non-issue.
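
        To make "the numbers" concrete, here's a rough back-of-envelope sketch in Python. The pricing assumes a GPT-4o-mini-class API (roughly $0.15 per million input tokens and $0.60 per million output tokens), and the traffic figures are hypothetical placeholders, not measurements from any real project:

            # Hypothetical monthly #help traffic for a mid-sized project's Discord.
            questions_per_month = 2000
            tokens_in_per_question = 1500   # user question plus retrieved docs/context
            tokens_out_per_question = 500   # the bot's answer

            # Assumed GPT-4o-mini-class pricing, in USD per million tokens.
            price_in, price_out = 0.15, 0.60

            cost = questions_per_month * (
                tokens_in_per_question * price_in
                + tokens_out_per_question * price_out
            ) / 1_000_000
            print(f"~${cost:.2f}/month")  # ~$1.05/month under these assumptions

        Even if the traffic assumptions here are off by an order of magnitude, the bill stays in the single digits, which is the point.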

        But then one could also just argue that this is something the individual projects can decide for themselves. Not really for either of us to make this call. You can consider what I said as just an example you disagree with in that case.

    • raxxorraxor 6 hours ago
      Better yet, don't use Discord to build a community, because if you don't do anything else, your knowledge base will be lacking, which hinders adoption of your project.

      As a dev, I'd have to be quite desperate already before engaging with a chatbot.

      Discord is better suited for developers working together, not for publishing the results to an audience.

    • passwordoops 16 hours ago
      Counter-counter: there is nothing as intellectually miserable and antisocial as large corporations and institutions replacing Helpdesks with draconian automated systems.

      Also, try to come up with a less esoteric example than Discord Help channels. In fact, this is the issue with most defenses of LLMs. The benefits are so niche, or minor, that the example itself shows why they are not worth the money being poured in

      • perching_aix 15 hours ago
        > there is nothing as intellectually miserable and antisocial as large corporations and institutions replacing Helpdesks with (...) automated systems

        Should be fairly obvious, but I disagree. Also I think you mean asocial, not antisocial. What's uniquely draconian about automated systems though? They're even susceptible to the same social engineering attacks humans are (it's just referred to as jailbreaking instead).

        > Also, try to come up with a less esoteric example than Discord Help channels.

        No.

        > The benefits are so niche, or minor, that the example itself shows why they are not worth the money being poured in

        Great. This is already significantly more intellectually honest than the entire blogpost.

    • butundstand 16 hours ago
      In their bio:

      “I’m not a tech worker…” they like to tinker with code and local Linux servers.

      They have not seen how robotic the job has become, nor felt how much pressure there is to act like a copy-paste/git pull assembly line.

    • MarcelOlsz 16 hours ago
      You forgot one key thing: I don't want to talk to an AI.
    • trod1234 16 hours ago
      This assumes the conclusion: that AI would solve the issue of under-resourced interactions. Unfortunately, the data has been in on this for quite a long time (longer than LLM chatbots have been in existence); if you've worked in IT at a large carrier or for large call centers, you know about this.

      The simple fact of the matter is, there is a sharp gap between what an AI can do, and what a human does in any role involving communications, especially customer service.

      Worse, there are psychological responses that naturally occur when you do any number of a few specific things that escalate conflict if you leave this to an AI. A qualified CSR is taught how to de-escalate, defuse, and calm the person who has been wound up to the point of irrationality. They are the front-line punching bags.

      AI can't differentiate between what's acceptable and what's not, because the tokens it uses to identify these contexts have two contradictory states in the same underlying tokens. This goes to core classical computer science problems, such as halting.

      The companies that were ahead of the curve on this invested a lot into it almost a decade and a half ago, and they found that in most cases these types of systems compounded the issues: once customers did finally get to a person, they took it out on that person irrationally, because that person was the representative of the company that had put them through what amounts to torture.

      One example of behavior that causes these types of responses: being manipulated in a way that you know is manipulation. It causes stress through perceptual blind spots, creating an inconsistent internal mental state that results in confusion. When that happens, it often causes a psychological reversal into irrational anger. An infinite or byzantine loop designed to run people in circular hamster wheels is one such structure.

      If you've ever been in a social interaction where you offered an olive branch and they seemed to accept it, but at the last minute threw it back in your face, you've experienced this. The smart individual doesn't ever do this, because they know they will make an enemy for life who will always remember.

      This is also how, through communication, you can impose coercive cost on people, and companies have done this for years where antitrust and FTC rules weren't being enforced. These triggers are inherent, to a lesser or greater degree, in all of us, every person alive.

      The imposition of personal cost through this and other psychological blindspots is how torturous and vexatious processes are created.

      Empathy and care are a two-way street. They require both parties to be acting in good faith through reflective appraisal. When this is distorted, it drives people crazy, and there is a critical saturation point where assumptions change because the environment has changed. If people show the indicators that they are acting in bad faith, others will automatically treat them as acting in bad faith. Eventually, the environment dictates that those people must prove they are acting in good faith (somehow), but proving this is quite hard. The environment switches from innocent benefit of the doubt to guilty until proven innocent.

      These erosions of the social contract, while subtle, dictate social behavior. Can you imagine a world where something bad happens to you, and everyone just turns their backs, or prevents you from helping yourself?

      It's the slippery slope of society falling back into violence. Few of those commenting on things like this today have actually read the material published by the greats on the social contract, and they don't know how society arose from the chaos of violence.

    • reaperducer 16 hours ago
      > if these projects all integrated LLMs into their chatbots for just a few bucks and let them take the brunt of the interactions, all participants on either side would be left with much more capacity to maintain and express empathy and care

      Maybe. But only if the LLMs are correct. Which they too frequently aren't.

      So the result is that the tech industry has figured out how to not only automate making people angry and frustrated, they've managed to do it at scale.

      Yay.

    • mac-attack 16 hours ago
      With all due respect, I don't think customer service interactions are a meaningful rebuttal when discussing the decline in social relations, empathy and care.

      The former is an admittedly frustrating aspect of our transactional relationships with companies, while the others are the foundations of a functioning society throughout our civilization. Conflating business interactions with societal needs is a familiar trope on HN, IMO.

      • perching_aix 16 hours ago
        I literally led with a noncommercial example.
      • reaperducer 16 hours ago
        > I don't think customer service interactions are a meaningful rebuttal when discussing the decline in social relations, empathy and care.

        Often you give what you get.

        If you're nice to the customer service people on the phone, frequently they loosen up and are nice right back at you. Some kind of crazy "human" thing, I guess.

  • rolha-capoeira 17 hours ago
    This presupposes that human value only exists in the things current AI tech can replace—pattern recognition/creation. I'd wager the same argument was made when hand-crafted things were being replaced with industrialized products.

    I'm not saying those things aren't valuable, or that humans can't express social and spiritual value in those ways, but that human value doesn't only exist there. And so, to give AI the power of complete dehumanization is to reduce humans to just pattern followers. I don't believe that is the case.

    • danielbln 17 hours ago
      It kind of makes sense if following a particular pattern is your purpose and life, and maybe your identity.
      • malux85 17 hours ago
        We should actively encourage fluidity in purpose; too much rigidity or militant clinging to ideas is insecurity or attempts at absolving personal responsibility.

        Resilience and strength in our civilisation come from confidence in our competence,

        not from sanctifying patterns so we don’t have to think.

        We need to encourage and support fluidity. Domain knowledge is commoditised; the future is fluid composition.

        • asciimov 17 hours ago
          Great, tell someone who spent years honing their skills that it's too bad the rug was pulled out from beneath them, and it's time to start over from the bottom again.

          Maybe there would be merit to this notion if society provided the necessary safety net for this person to start over.

          • danielbln 3 hours ago
            "Protect the person, not the job" is what we should be aiming for. I don't think we will, but we should.
          • malux85 15 hours ago
            Agreed, I think there should be much more of a safety net for people to start over and be more fluid. I definitely think the weird "full-time employed or homeless" thing has to change.
        • haswell 16 hours ago
          > We should actively encourage fluidity in purpose

          I don't think we should assume most people are capable of what you describe. Assigning "should" to this assumes what you're describing is psychologically tenable across a large population.

          > too much rigidity or militant clinging to ideas is insecurity or attempts at absolving personal responsibility.

          Or maybe some people have a singular focus in life and that's ok. And maybe we should be talking about the responsibility of the companies exploiting everyone's content to create these models, or the responsibility of government to provide relief and transition planning for people impacted, etc.

          To frame this as a personal responsibility issue seems fairly disconnected from the reality that most people face. For most people, AI is something that is happening to them, not something they are responsible for.

          And to whatever extent we each do have personal responsibility for our careers, this does not negate the incoming harms currently unfolding.

          • malux85 15 hours ago
            “Some people have a singular purpose in life and that’s OK”

            Strong disagree, that’s not OK, it’s fragile

            • haswell 14 hours ago
              Much of society is fragile. The point is that we need to approach this from the perspective of what is, not from what we wish things could be.
        • adamc 16 hours ago
          People come with all sorts of preferences. Telling people who love mastery that they have to be "fluid" isn't going to lead to happy outcomes.
        • danielbln 17 hours ago
          Absolutely, I agree with that.
      • MichaelZuo 17 hours ago
        How would this matter?

        People can self-assign any value whatsoever… that doesn’t change.

        If they expect external validation then that’s obviously dependent on multiple other parties.

    • munificent 16 hours ago
      > I'm not saying those things aren't valuable, or that humans can't express social and spiritual value in those ways, but that human value doesn't only exist there.

      This sounds sort of like a "God of the gaps" argument.

      Yes, we could say that humanity is left to express itself in the margins between the things machines have automated away. As automation increases its capabilities, we just wander around looking for some untouched back-alley or dark corner the robots haven't swept through yet and do our dancing and poetry slams there until the machines arrive forcing us to again scurry away.

      But at that point, who is the master, us or the machines?

      • mattgreenrocks 16 hours ago
        If this came to pass, the population would be stripped of dignity pretty much en masse. We need to feel competent, useful, and connected to people. If people feel they have nothing left, then their response will be extremely ugly.
      • rolha-capoeira 16 hours ago
        What we still get paid to do is different than what we're still able to do. I'm still able to knit a sweater if I find it enjoyable. Some folks can even do it for a living (but maybe not a living wage)
    • pojzon 16 hours ago
      Due to how AI works, it's only a matter of time till it's better at pretty much everything humans do besides "living".

      People tend to talk about any AI-related topic by comparing it to the industrial shifts that happened in the past.

      But it's much, much, MUCH bigger this time. Mostly because AI can make itself better; it will be better, and it is better with every passing month.

      It's a matter of years until it can completely replace humans in any form of intellectual work.

      And those are not my words but those of the smartest people in the world, like the "godfather of AI".

      We humans think we are special. That there won't be something better than us. But we are in the middle of the process of creating something better.

      It will be better. Smarter. Not tired. Won't be sick. Won't ever complain.

      And it IS ALREADY and WILL replace a lot of jobs, and it will not create new ones, purely due to efficiency gains and the lack of brainpower in the majority of people who will be laid off.

      Not everyone is a Nobel Prize winner. And soon we will need only such people to advance AI.

      • serbuvlad 16 hours ago
        > because AI can make itself better

        Can it? I'm pretty sure current AI (not just LLMs, but neural nets more generally) requires human feedback to prevent overfitting, which fundamentally undercuts any fear or hope of the singularity as predicted.

        AI can not make itself better because it can not meaningfully define what better means.

        • pojzon 16 hours ago
          AlphaEvolve reviewed how it's trained and found a way to improve the process.

          It's only the beginning. AI agents are able to simulate tasks, get better at them, and make themselves better.

          At this point it's silly to say otherwise.

      • sonofhans 16 hours ago
        > It's a matter of years until it can completely replace humans in any form of intellectual work.

        This is sensationalism. There’s no evidence in favor of it. LLMs are useful in small, specific contexts with many guardrails and heavy supervision. Without human-generated prior art for that context they’re effectively useless. There’s no reason to believe that the current technical path will lead to much better than this.

      • z0r 16 hours ago
        Call me when 'AI' cooks meals in our kitchens, repairs the plumbing in our homes, and removes the trash from the curb.

        Automation has costs and imagining what LLMs do now as the start of the self-improving, human replacing machine intelligence is pure fantasy.

        • NitpickLawyer 16 hours ago
          To say that this is pure fantasy when there are more and more demos of humanoid robots doing menial tasks, and the costs of those robots are coming down is ... well something. Anger, denial (you are here)...
          • z0r 10 hours ago
            I'm waiting. You're talking to someone who believed that self-driving vehicles would put truckers out of work in a decade right around 2012. I didn't think that one through. The world is very complicated and human beings are the cheapest and most effective way to get physical things done.
          • reaperducer 16 hours ago
            > To say that this is pure fantasy when there are more and more demos of humanoid robots doing menial tasks

            A demo is one thing. Being deployed in the real world is something else.

            The only thing I've seen humanoid robots doing is dancing and occasionally a backflip or two. And even most of that is with human control.

            The only menial task I ever saw a humanoid robot do so far is to take bags off of a conveyor belt, flatten them out and put them on another belt. It did it at about 1/10th the speed of a human, and some still ended up on the floor. This was about a month ago, so the state of the art is still in the demo stage.

    • unsui 17 hours ago
      not entirely.

      The risk raised in the article is that AI is being promoted beyond its scope (pattern recognition/creation) to legal/moral choice determination.

      The techno-optimists will claim that legal/moral choices may be nothing more than the sum of various pattern-recognition mechanisms...

      My take on the article is that this is missing a deep point: AI cannot have a human-centered morality/legality because it can never be human. It can only ever amplify the existing biases in its training environments.

      By decoupling the gears of moral choice from human interaction, whether by choice or by inertia, humanity is being removed from the mechanisms that amplify moral and legal action (or, in some perverse cases, amplify the biases intentionally)

      • rolha-capoeira 10 hours ago
        To build on your point, we only need to look at another type of entity that has a binary reward system and is inherently amoral: the corporation. Though it has many of the same rights as a human (in the US), the corporation itself is amoral, and we rely upon the humans within it to retain a moral compass, to their own detriment, which is a foolish endeavor.

        Even further, AI has only learned through what we've articulated and recorded, and so its inherent biases are only those of our recordings. I'm not sure how that sways the model, but I'm sure that it does.

    • giraffe_lady 17 hours ago
      > And so, to give AI the power of complete dehumanization is to reduce humans to just pattern followers.

      It would but I don't think that's what they're saying. The agent of dehumanization isn't the technology, but the selection of what the technology is applied to. Or like the quip "we made an AI that creates, freeing up more time for you to work."

      Wherever human value, however you define that, exists or is created by people, what does it look like to apply this technology such that human value increases? Does that look like how we're applying it? The article seems to me to be much more focused on how this is actually being used right now rather than how it could be.

  • kelseyfrog 16 hours ago
    Whether we like it or not, AI sits at the intersection of both Moravec's paradox and the Jevons paradox. Just as more efficient engines lead to increased gas usage, as AI gets increasingly better at problems difficult for humans, we see even greater proliferation within that domain.

    The reductio on this is the hollowing out of the hard-for-humans problem domain, leaving us to fight for the scraps of the easy-for-humans domain. At first glance this sounds like a win. Who wouldn't want something else to solve the hard problems? The big issue is that easy-for-humans problems are often dull, devoid of meaning, and low-wage. Paradoxically, the hardest problems have always been the ones that make work meaningful.

    We stand at a crossroads where one path leads to an existence with a poverty of meaning, in which, although humans create and play by their own rules, we feel powerless to change them. What the hell are we doing?

    • ololobus 16 hours ago
      Interesting point of view; I didn't know about the Jevons paradox before. To me, the outcome still depends on whether AI can get superhuman [1] (and beyond) at some point. If it can, then, well, we will likely indeed see the suitable-for-humans areas of intellectual labor shrinking. If it cannot, then it becomes an even more philosophical question, similar to agnostic beliefs: is the universe completely knowable? Because if it's not, then we might as well have infinitely more hard problems, and AI just raises the bar for what we can achieve by pairing a human with AI, compared to a human alone.

      [1] I know it's a bit hard to define, but I'd vaguely say that it's significantly better in the majority of intelligence areas than the vast majority of the population. Also it should be scalable. If we can make it slightly better than human by burning the entire Earth's energy, then it doesn't make much sense.

    • serbuvlad 16 hours ago
      Prioritize goals over the process and what AIs can do doesn't matter.

      Want to make a movie? The goal should be getting the movie out, seen by people and reviewed.

      Whether it's people captured by film, animations in Blender or AI slop, what matters is the outcome. Is it good? Do people like it?

      I do the infrastructure at a department of my Uni as sort of a side-gig. I would have never had the time to learn Ansible, borg, FreeIPA, wireguard, and everything else I have configured now and would have probably resorted to a bunch of messy shell scripts that don't work half the time like the people before me.

      But everything I was able to set up I was able to set up in days, because of AI.

      Sure, it's really satisfying because I also have a deep understanding of the fundamentals, and I can debug problems when AI fails, and then I ask it "how does this work" as a faster Google/wiki.

      I've tried Windsurf but gave up, because the AI does something that doesn't work, and I can give it the prompts to find a solution (+ think for myself) much faster than it can figure it out itself (and probably at the cost of far fewer tokens).

      But the fact that I enjoy the process doesn't matter. And the moment I can click a button and make a webapp, I have so many ideas in my drawer for how I could improve the network at Uni.

      I think the problem people have is that they work corporate jobs where they have no freedom to choose their own outcomes so they are basically just doing homework all their life. And AI can do homework better than them.

      • Vegenoid 16 hours ago
        Take this too far and you run into a major existential crisis. What is the goal of life? Most people would say something along the lines of bringing joy to others, experiencing joy yourself, accomplishing things that you are proud of, and continuing the existence of life by having children, so that they can experience joy. The joy of life is in doing things, joy comes from process. Goals are useful in that they enable the doing of some process that you want to be doing, or in the joy of achieving the goal (in which case the joy is usually derived from the challenge in the process of achieving the goal).

        > Want to make a movie? The goal should be getting the movie out, seen by people and reviewed.

        This especially falls apart when it comes to art, which is one of the most “end-goal” processes. People make movies because they enjoy making movies, they want movies to be enjoyed by others because they want to share their art, and they want it to be commercially successful so that they can keep making movies. For the “enjoying a movie” process, do you truly believe that you’d be happy watching only AI-generated movies (and music, podcasts, games, etc.) created on demand with little to no human input for the rest of your life? The human element is truly meaningless to you, it is only about the pixels on the screen? If it is, that’s not wrong - I just think that few people actually feel this way.

        This isn’t an “AI bad” take. I just think that some people are losing sight of the role of technology. We can use AI to enable more people than ever before to spend time doing the things they want to do, or we can use it to optimize away the fun parts of life and turn people even further into replaceable meat-bots in a great machine run by and for the elites at the top.

      • kelseyfrog 14 hours ago
        When all we care about is the final product, we miss the entire internal arc: the struggle, the bruised ego, the chance of failure, and the reward of feeling "holy shit, I did it!" that comprises the essence of being human.

        Reducing the human experience to a means to an end is the core idea of dehumanization. Kant addressed this in the "humanity formula" of the categorical imperative:

            "Act in such a way that you treat humanity, whether in your own person or in the person of any other, always at the same time as an end, never merely as a means."
        
        I'm curious how you feel about the phrase "the real treasure was the friends we made along the way." What does it mean to you?
      • skuxxlife 15 hours ago
        But the process _does_ matter. That is the whole point of life. Why else are we even here, if not to enjoy the process of making? It’s why people get into woodworking or knitting as hobbies. If it were just about the end result, they could just go to a store and buy something that would be way cheaper and easier. But that’s not the point - it’s something that _you_ made with your own hands, as imperfect as they are, and the experience of making something.
  • rafram 17 hours ago
    > For example, to create an LLM such as ChatGPT, you'd start with an enormous quantity of text, then do a lot of computationally-intense statistical analysis to map out which words and phrases are most likely to appear near to one another. Crunch the numbers long enough, and you end up with something similar to the next-word prediction tool in your phone's text messaging app, except that this tool can generate whole paragraphs of mostly plausible-sounding word salad.

    This explanation might've been passable four years ago, but it's woefully out of date now. "Mostly plausible-sounding word salad"?
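
    For contrast, the quoted description is roughly that of a Markov-chain next-word predictor. A minimal sketch of that idea in Python (a toy bigram model over a made-up corpus; modern transformer LLMs share the next-word training objective but little else):

        import random
        from collections import defaultdict

        # Toy bigram "next-word predictor": the kind of statistics the quoted
        # description reduces LLMs to. It only knows which word followed which.
        corpus = "the cat sat on the mat and the dog sat on the rug".split()

        followers = defaultdict(list)
        for prev, cur in zip(corpus, corpus[1:]):
            followers[prev].append(cur)

        def generate(start, length=8):
            word, out = start, [start]
            for _ in range(length):
                if word not in followers:
                    break
                word = random.choice(followers[word])  # sample an observed next word
                out.append(word)
            return " ".join(out)

        print(generate("the"))  # e.g. "the dog sat on the mat and the cat"

    The gap between output like that and a model that can debug code is exactly why the "word salad" framing reads as dated.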

    • mv4 17 hours ago
      What would be a better explanation today?
      • diggan 17 hours ago
        I think "mostly plausible-sounding" is, albeit simplified, OK for an analogy, I guess. But the "word salad" part gives the impression that it doesn't even look like real human text, which it kind of does, at least at the surface. It's mostly the "word salad" part that makes the description sound far from the truth.
        • djhn 16 hours ago
          Over the past year the chat bots have improved in many ways, but their written output has regressed to the mean: the average RLHF-driven preference.

          It is word salad, unless you’re a young, underpaid contractor from a country previously colonised by the British or the United States.

          • diggan 14 hours ago
            > Over the past year the chat bots have improved in many ways, but their written output has regressed to the mean: the average RLHF-driven preference.

            How could you possibly judge such a diverse set of outputs? There are thousands of models that can each be steered/programmed with prompts and a lot of parameter-twiddling; it's simply impossible to say "the chat bots" and give some sort of one-size-fits-all judgement of all LLMs. I think your reply shows a bit of ignorance if that's all you've seen.

            Oxford Dictionaries says "word salad" is "a confused or unintelligible mixture of seemingly random words and phrases", and true, I'm no native speaker, but that's not commonly the output I get from LLMs. Sometimes though, some people’s opinions on the internet feel like word salad, but I guess it's hard to distinguish from bait too.

            • djhn 8 hours ago
              I meant what I said: chat bots, not models or APIs. Give it a try if you don’t believe me. Try using the leading chat interfaces logged out, from a clean browser and new IP.
              • diggan 3 hours ago
                Sure, I don't doubt that, but still, what do you think these chat bots are using? Or are you talking about ELIZA? If so, what you say now makes a lot of sense.
        • relaxing 17 hours ago
          Word salad refers to human writing with poor diction and/or syntax.
  • mrcwinn 16 hours ago
    Yes, before AI, society was doing fantastically well on "social relations, empathy, and care." XD

    I remain an optimist. I believe AI can actually give us more time to care for people, because the computers will be able to do more themselves and between each other. Unproven thesis, but so is the case laid out in this article.

    • ACCount36 16 hours ago
      Anyone who thinks that AI is bad for "empathy and care" should be forced to work for a year in a tech support call center, first line.

      There are some jobs that humans really shouldn't be doing. And now, we're at the point where we can start offloading that to machines.

  • tim333 16 hours ago
    >The push to adopt AI is, at its core, a political project of dehumanization

    I can't really say I've seen that. The article seems to be about adoption of AI in the Canadian public sector, not something I'm really familiar with as a Brit. The government here hopes to boost the economy with it and Hassabis at Deepmind hopes to advance science and cure diseases.

    I think AI may well make the world more humane by dealing with a variety of our problems.

  • old_man_cato 16 hours ago
    Dehumanization might be the wrong word. It's certainly anti social technology, though, and that's bad enough.
    • munificent 16 hours ago
      I believe that our socializing is the absolute most fundamentally human aspect of us as a species.

      If you cut off a bird's wings, it can't bird in any real meaningful sense. If you cut off humans from others, I don't think we can really be human either.

      • ACCount36 14 hours ago
        There are a lot of incredibly offended kiwi birds out there now.
      • old_man_cato 15 hours ago
        And I think a lot of people would agree with you.
  • PolyBaker 16 hours ago
    I think there is a fundamental misunderstanding of power in the post. By that I mean that the author doesn't appreciate that technology (or any tool, for that matter) gives power and control to the user. This is used to further our understanding of the world, with the intent of creating more technology (a recursive process). The normies just support those at the forefront who actually change society. The argument in the post is fundamentally anti-technology. Follow this argument and you end up at a place where we live in caves rather than buildings.

    Also, the anti-technology stance is good for humanity, since it fundamentally introduces opposition to progress and questions the norm, ultimately killing off the weak/inefficient parts of progress.

  • tptacek 16 hours ago
    The problem this author has is with the technology industry, not AI in particular, which really is just a (surprisingly powerful) cohering of forces tech has unleashed over the last 25 years.
  • hayst4ck 15 hours ago
    To call AI a dehumanization technology is like calling guns a murder technology.

    There is obviously truth to that, but guns are also used for self-defense and protecting your dignity. Guns are a technology, and technology can be used for good or evil. Guns have been used to colonially enslave people, but also to gain independence.

    I disagree with the assessment that AI is intrinsically dehumanizing. AI is a tool, a very powerful tool, and because the very rich in America don't see the people they rule as humans of equal dignity, the technology itself betrays their feelings.

    Attacking the technology is wrong; the problem is not the technology, but that every company has a tyrant king at its helm who answers to no one, because they have purchased the regulators that might have bound their behavior, meaning that there are no consequences for the misdeeds of a company's CEO/king. So every company's king ends up using their company/fiefdom to further their own personal ambitions of power, and nobody is there to stop them. If the technology is powerful, then failure to invest in it while other, even more oppressive regimes do invest in it potentially gives them the ability to dominate you. Imagine you argue nuclear weapons are a bad technology while your neighbor is busy developing them. Are you better off if your neighbor has nuclear weapons and you don't?

    The argument that AI is a dehumanization technology is ultimately an anarchist argument. Anarchism's core belief is that no one should have the power to dominate anyone else, which inevitably means that no one is able to provide consequences for anyone who ambitiously betrays that belief system. Reality does not work that way. The only way to provide consequences to a corrupt institution is an even more powerful institution based on collective bargaining (founded on the threat of consequences for failing to reach a compromise, such as striking). There is no way around realpolitik; you must confront pragmatic power relationships to have a cogent philosophy.

    The author is mistaking AI for wealth disparity. Wealth is power and power is wealth, and when it is so concentrated, it puts bad actors above consequences and turns tools that could be used for the public good into tools of oppression.

    We do not primarily have an AI problem but a wealth concentration problem, and this is one of its many manifestations.

    • k__ 15 hours ago
      To be fair, people only need guns to protect themselves because guns literally are murder tech.
      • hayst4ck 15 hours ago
        That is a truth but not the truth. By framing guns as a murder technology, you ignore that they are also a self defense technology, equalizing technology, or any other set of valid frames.

        My point was that guns can be used for murder, in the same way that AI can be used to influence or surveil, but guns are also what you use to arrest people, fight oppressors and tyrants, and protect your property. Fists, knives, bows and arrows, poison, bombs, tanks, fighter jets, and drones are all forms of weapons. The march of technology is inevitable, and it's important not to be on the losing side of it.

        What the technology is capable of is less interesting than who has access to it and the power disparity that it creates.

        The author's argument is that AI is (1) a high-leverage technology (2) in the hands of oligarchs.

        My argument is that the fact that it is a high-leverage technology is not as interesting, meaningful, or important as the existence of oligarchs who do not answer to any regulatory body, because they have bought and paid for it.

        The author is arguing that a particular weapon is bad, but failing to argue that we are in a class war that we are losing badly. The author is focusing on one weapon being used to wage our class war, instead of arguing about the cost of losing the class war.

        It is not AI de-humanizing us, but wealth disparity that is de-humanizing us, because there is nothing forcing the extremely wealthy to treat others with dignity. AI is not robbing people of dignity, ultra wealthy people are robbing people of dignity using AI. AI is not dehumanizing people. Ultra wealthy people are using AI to dehumanize people. Those are different arguments with different implications and prescriptions on how to act or what to do.

        AI is bad is a different argument than oligarchs are bad.

  • dwaltrip 16 hours ago
    The meat of the post does not depend on the characterization of AI as “mere statistical correlations” that produce “plausible sounding word salad”.

    I encourage people to not get too hung up on that and look at the arguments about the effects on society and how we function as humans.

    I have very mixed feelings about AI, and this blog hits some key notes for me. If I have time later I will try to highlight those.

  • adamc 16 hours ago
    I think it's an interesting piece, and calls us to consider how the technology will actually be used.

    A lot of things that are possible enable evil purposes as or more readily than noble ones. (Palantir comes to mind.) I think we have an ethical obligation to be aware of that and try to steer to the light.

  • gchamonlive 16 hours ago
    Everything is dehumanization technology when society is organized to foster competition and narcissism and not cooperation and care.

    Technology is always an extension of the ethos. It doesn't stand on its own, it needs and reflects the mindset of humans.

    • serbuvlad 16 hours ago
      The fundamental advantage of our society as designed is that it weaponizes narcissism, and makes narcissists do useful stuff for society.

      Don't care about competition? Find a place where rent prices are reasonable and you'll find it's actually surprisingly easy to earn a living.

      Oh, but you want the fancy stuff, don't you?

      • munificent 16 hours ago
        I suspect that if you find a place where rent prices are reasonable, you'll find it's actually surprisingly hard to find a job there that pays a good wage, healthcare that keeps you healthy, decent schools to educate your children, and a community that shares your values and interests.

        People don't move to high cost of living areas because they want nice TVs. Fancy stuff is the same price everywhere.

        • serbuvlad 16 hours ago
          I live in Romania so I have different problems. I understand that Americans have problems with rent and healthcare. We have problem with other stuff, like food prices.

          But at the end of the day, it's extremely unhealthy to let these problems force us into feeling like we have to make a lot of money. You can find cheap solutions for almost everything almost everywhere if you compromise.

          • munificent 16 hours ago
            I don't think people feel like they have to make a lot of money.

            I think they seek jobs and places to live that give them the maximum overall benefit. I currently live in Seattle, which is quite expensive.

            If there was another city like Seattle with the same schools, healthcare, climate, and culture, but cheaper housing, I'd move there as long as the salaries there weren't so much lower that it more than canceled out the benefit of cheaper housing.

            The problem in the US is that even though some cities are quite expensive, they are still overall the most economical choice for people who can get good jobs in those cities. The increased pay more than makes up for the higher prices.

      • gchamonlive16 hours ago
        I'm talking about narcissism as in Byung-Chul Han's The Burnout Society.

        Give the book a go if you haven't. It lays out many of the fundamental problems of current social organization way better than I can.

        > Oh, but you want the fancy stuff, don't you?

        Just some food for thought, though: is weaponizing hyperpositivity the only way to produce fancy stuff? Think about it and you'll see for yourself that this is a false dichotomy, embedded in a realism that prevents us from improving society.

    • dandanua16 hours ago
      Technology that ultimately breaks the balance of power in a society invites fascism. Without strong, working checks and balances, the doom scenario is inevitable. Yet we are witnessing the destruction of our previous, weaker checks and balances. This will only accelerate us toward a dead end.
  • drellybochelly17 hours ago
    It can be, but on the other hand it's made me think in a radically different way about concepts in humanity.
  • tolerance16 hours ago
    This article is informative, but I can't imagine it would do anything to spur the conscience of people who use AI in ways other than the harmful examples it illustrates.
  • davesque16 hours ago
    I'm not usually a fan of progressive politics, but I thought Bernie Sanders made a great point the other day when he simply asked why adoption of AI would lead to layoffs instead of a 4-day work week. I don't think the question is naive. There really doesn't seem to be any good reason why the value of AI technology couldn't be distributed this way today, only that the people in charge of it don't want to do that, because they are so accustomed to claiming value instead of sharing it.
  • mulippy16 hours ago
    pedestrian rehash of standard ai critique talking points without novel insight. author conflates pattern recognition with "dehumanization" through definitional sleight-of-hand - classic motte-and-bailey where reasonable concerns about bias/labor displacement get weaponized into apocalyptic framing.

    the empathy-as-weakness musk quote does heavy lifting for entire thesis but represents single data point from notoriously unreliable narrator. building systematic critique around elon's joe rogan appearance is methodologically weak.

    technical description of llms as "word salad generators" betrays surface-level understanding. dismissing statistical pattern matching as inherently meaningless ignores that human cognition relies heavily on similar processes. the "no understanding" claim assumes consciousness/intentionality as prerequisite for useful output, which is philosophically naive.

    bias automation concerns valid but not uniquely ai-related - bureaucratic systems have always encoded societal prejudices. author ignores potential for ai to surface and quantify existing biases that human administrators would otherwise perpetuate invisibly.

    deskilling argument contradicts itself - simultaneously claims ai doesn't improve productivity while arguing it threatens jobs. if tools are genuinely useless, market forces would eliminate them. more likely: author conflates short-term adjustment costs with long-term displacement effects.

    "surveillance technology" characterization relies on guilt-by-association rather than technical analysis. any information processing system could theoretically enable surveillance - this includes spreadsheets, databases, filing cabinets.

    the public sector romanticism is revealing. framing government work as inherently altruistic ignores institutional incentives, regulatory capture, and bureaucratic self-preservation. "mission-oriented" workers can implement harmful policies with genuine conviction.

    strongest section addresses automation bias and human-in-the-loop failures, but author doesn't engage with literature on hybrid human-ai systems or institutional design solutions.

    -claude w/ eigenbot absolute mode system setting

    • bentograd16 hours ago
      I mean, this really proves the point. You dehumanize the author, and anyone who even attempts to read this slop, by deferring your thinking to the machine, as if this kind of human interaction were not worth having. Worst of all, you dehumanize yourself.
  • npteljes15 hours ago
    >The push to adopt AI is, at its core, a political project of dehumanization

    I agree with the general sentiment, but absolutely disagree with this claim. The push to adopt AI is a gold rush, not any coordinated thing. In the political arena they don't give a single f about how humanizing or dehumanizing a thing is, especially something as abstract as "AI". Everyone out there is furthering their own limited-scope goal, according to whatever idea they have of how to achieve it. AI entered the public consciousness, and so companies are now in a race to not fall behind. Politicians do enter the picture, but mostly as ones who enjoy the fruit of the AI effort, it being a good public distraction and an effective tool for creating propaganda. But it is nowhere near a primary goal, nor does it nefariously further any underlying primary goal such as dehumanizing the people. It's merely a tool, and a phenomenon with beneficial side effects.

  • Lerc16 hours ago
    AI has the capacity to deflect accountability. That must be addressed. That does not mean that the intent, goal, or even primary result is dehumanisation.

    Address the concerns specifically, suggest solutions for those concerns.

    I have made a submission to a government investigation highlighting the need for explicitly identifying when an AI makes a determination involving an individual, for mechanisms that make individuals aware when that has happened, and for a method to challenge the determination if they feel it was incorrect.

    I have seen a lot of blanket judgements vilifying an entire field of research and industry, and all those who participate in it. It has become commonplace to use the term techbros as a pejorative to declare people as others.

    There is a word for behaviour like that. That is what dehumanisation is.

  • teekert16 hours ago
    Btw, Rutger Bregman also considers empathy an error. It’s a complex argument.
    • teekert6 hours ago
      I mean, it's really worth looking into what he means here. I tend to agree that empathy, being overly local, can be a source of bad outcomes. Empathy will often make you ignore the real problem by focusing only on the suffering right in front of you (i.e., donating money to some crappy non-profit because a sad puppy was in their commercial, instead of really doing something). That is not all bad, but is it the best?
  • cheevly17 hours ago
    Down with data-driven decisions and probabilistic computing!!
    • nh23423fefe17 hours ago
      [flagged]
      • shadowgovt17 hours ago
        The larger concern is people treating function approximation as fact, especially in models where it is well understood that what is happening is an estimation algorithm, semantically divorced from "understanding" the underlying fact-patterns: a system building fact-like sentences from data that may or may not contain actual relevant facts.
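
        To make that concrete, here is a minimal sketch of the kind of purely statistical next-token sampling being described (a toy bigram model in Python; illustrative only, not any production system). It readily produces fluent, fact-like sentences that appear nowhere in its data:

            import random
            from collections import defaultdict

            # Toy bigram "language model": sample each next word purely from
            # co-occurrence statistics, with no model of truth at all.
            corpus = (
                "the committee found the report accurate . "
                "the committee found the data incomplete . "
                "the report found the data accurate ."
            ).split()

            next_words = defaultdict(list)
            for a, b in zip(corpus, corpus[1:]):
                next_words[a].append(b)

            def generate(start="the", length=8):
                words = [start]
                for _ in range(length):
                    options = next_words.get(words[-1])
                    if not options:
                        break
                    words.append(random.choice(options))
                return " ".join(words)

            # Can emit e.g. "the committee found the data accurate" -- a
            # fluent, fact-like sentence that is in neither source document.
            print(generate())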

        There is definitely a huge gap between what is happening right now and public perception (and, I'd argue, a few people with a lot of money to gain or lose going out of their way to increase, not decrease, that gap).

        In that context, the overall notion the post approaches (that Canada would do well to avoid basing decisions that could help or harm real people on the output of these unproven systems, at this juncture) is a sound one.

  • alganet16 hours ago
    I think AI could have been great.

    It's just that greed took over, and it took over big time.

    Several shitty decisions in a row: scaling it too much, stealing data, marketing it before it can deliver, government use. The list goes on and on.

    This idea that there's something inherent about the technology that's dehumanizing is an old trick. The issue lies in whoever is making those shitty decisions, not the tech itself.

    There's obviously a fog surrounding every single discussion about this stuff. Probably the outcome of another remarkably shitty decision by someone (so many people parroting marketing ideas, it's dumb).

    We'll be ok as humans, trust me on this one. It's too big to not fail.

  • cynicalsecurity17 hours ago
    This almost reads as if AI is from the devil.
    • hatradiowigwam16 hours ago
      ....Who can stand against AI? Who can make war against AI? The people ceded their power to AI, and worshiped it.
    • DocTomoe16 hours ago
      Whenever a new tech (or cultural phenomenon) appears, you have naysayers such as this one.

      Luddites fought against automated weaving looms - by destroying the machines. Reasoning: they are stealing our jobs.

      Comics were sure to destroy our children's minds, because they wouldn't read real literature anymore (conveniently forgetting that only 100 years before that, reading was considered a plague of the youth and putting their minds into bad places).

      Rock music is of the devil. If you play "Stairway to heaven" backwards, and squint your ears just right ...

      Phones that memorise phone numbers? Surely it is brain rot if you are no longer forced to memorise phone numbers.

      Social media? Work of the devil! Manipulates the minds, causes addiction, bad elections, and warts on the nose!

      Cryptocurrency? Waste of energy, won't someone please think of the environment? Only used for crime!

      3D printing? What if someone *gasp* prints himself a gun?

      And now it is AI. And when AI is normalised, and something new shows up, it will be that.

      • skuxxlife16 hours ago
        This is an extremely simplistic take that ironically ignores the human cultural context around these technologies.

        For one, you, like many, misunderstand the Luddite movement. They didn't break weaving frames because they were against technology; they broke them because the frames were being used to grossly devalue the work by which weavers earned their livelihood. There was a mass consolidation of textile manufacturing from small groups of tradespeople into a few very wealthy factory owners who used easily exploitable labor (like children) in very poor working conditions and paid unlivable wages to make low-quality but cheap garments. The Luddites weren't against technology; they were against the way it was being used. They even targeted only the factories they thought were particularly exploitative, leaving the ones with fairer business practices alone. Yet they get mischaracterized as anti-technology and anti-progress, when maybe they just wanted to be able to live their lives well and support their families.

        There’s really a lot to learn from the luddites and their historical context, and it really goes to show that history is truly cyclical.

      • nico_h16 hours ago
        I think, except for rock music, comic books, and 3D printing, the naysayers have been proven right? The Luddites went about it wrong, but the externalities are definitely terrible for all your examples.
      • yupitsme12316 hours ago
        Most of the things in this list are pretty innocuous and were never weaponized. Two exceptions, social media and cell phones, have been weaponized, and past a certain point I would say we'd be better off without them.

        I would put AI into that same group. It is and will continue to be weaponized against you by people with more power than you.

      • kmeisthax16 hours ago
        [dead]
    • nico_h16 hours ago
      All those LLMs are trained on stolen data, are pushed by ultra-capitalist tech bros and billionaires, and are eliminating jobs in creative industries.

      It's heralded as a tool for increasing efficiency (a favorite Western-capitalist euphemism for cancerous exploitation of the environment and humans) while neglecting its externalities, which include destroying the jobs of the very people it stole the work from, and making you more reliant upon it, possibly to the point that you forget how to do the things you use it for.

      It's a pretty devilish poisoned fruit.

  • stego-tech17 hours ago
    The poster hits the nail on the head in the summary alone, but I’ll go a step further:

    We have been duped for half a century into solving increasingly niche problems whose benefits accrue ever upward beyond our reach, and whose harms are forcibly distributed across an unwilling populace. On the whole, technology has done exponentially more harm (mass surveillance, psychological exploitation, automated weapons, pollution, contamination of data, destruction of natural resources, outsourcing, dehumanization) than good (medical technology, targeted therapies, knowledge exchanges, Wikipedia, global collaboration). Instead of focusing on the broader issues of survival staring us in the face, we have willingly ceded agency and sovereignty to a handful of unelected Capitalists who convinced us that this invention will somehow, finally, inevitably solve all our ills and enable a utopia.

    Not one of the boosters of any prior modern “technological revolution” can point to societal good that outpaced the harms caused by their creation. Not NFTs, not cryptocurrency, and certainly not AI. Even Machine Learning has seen more harmful than helpful use, despite its genuine benefits to human society and technological progress, enabling surveillance advertising and the disappearance of dissidents instead of customized healthcare and efficient distribution of resources in real-time.

    Yet whenever someone dares to point this out, we’re decried by proponents as Luddites - ignoring the fact the real plight of the Luddites wasn’t anti-technology, but anti-Capital. To call us Luddites derisively is analogous to admitting the indefensibility of your position: You’re acknowledging we are right to be angry for being harmed for the sake of Capital alone, but that you will do everything in your power to stop our cause. We aren’t saying we want technology to disappear and to revert to the dark ages, we’re demanding that technology benefits everyone more than it harms them. We demand it be inclusive rather than exclusive. It should amplify our good works and minimize societal harms.

    AI in the current context is the societal equivalent of suicide. It robs us of the remaining, dwindling resources we have on yet another thin, hollow promise that this time, it will be different. Four years ago we literally had Crypto Miners lighting up Coal Power Plants while proclaiming cryptocurrency and NFTs will solve climate change somehow, and now AI companies are firing up fuel turbines and nuclear power plants while promising the same thing.

    We need to stop obsessing over technical minutiae and showing blind faith in technology, and realize that these are all tools of varying utility. We have mounting evidence that AI is causing more harm than good now, and that there is no practicable roadmap where its benefits outweigh its harms in the near term. For all this obsessing over share value and “progress”, we need to accept the gruesome reality that our talent, our intelligence, and our passion are being manipulated to harm the masses - and that we alone can decide to STOP. It's about taking our heads out of the sand, objectively assessing the whole of the system and superstructure, and taking action to change things.

    More fuzzy code and token prediction isn't going to save our asses or make the world a better place. The only way to do that is to acknowledge our role in the harms we perpetuate and choose to stop them, regardless of the harm to ourselves in the moment.

    • taejavu17 hours ago
      I understand and empathise with the position you’re putting forward, but am left curious, since you mentioned the evidence is mounting, whether you can substantiate the claim that technology is net-negative. I mean, just on the face of it, the bulk of people went from peasants to middle class over a few hundred years, and I don’t think you can point to much _except_ for technological improvement as the reasons for these gains.
      • pojzon16 hours ago
        It all came at the expense of the environment.

        And if we define good as "helping to keep the human race alive as the top species", then yes, technology has caused more harm than good.

        The world is currently experiencing another mass extinction event, and at the current pace billions of people will die from starvation, dehydration, various ecological disasters, or wars caused by population migrations.

    • tim33316 hours ago
      I can't say I agree with that take:

      >...have willingly ceded agency and sovereignty to a handful of unelected Capitalists who convinced us that this invention will somehow, finally, inevitably solve all our ills and enable a utopia.

      I've been around for that half century. The system of government is much the same. The new tech, like PCs, mobile, and the web, is mostly gizmos that people quite like and choose to buy, not some fiendish plan sold as utopia.

  • renewiltord16 hours ago
    Dear god, the endless reams of "woe is us" are worse than any LLM generated content.

    > Elon Musk - whose xAI data centre is being powered by nearly three dozen on-site gas turbines that are poisoning the air of nearby majority-Black neighborhoods in Memphis - went on the Joe Rogan podcast

    Christ, who even reads this stuff. This constant palavering is genuinely too much.

  • TacticalCoder16 hours ago
    [dead]
  • suthakamal17 hours ago
    [flagged]
    • yupitsme12317 hours ago
      Do you really feel that skipping LLMs would be like skipping the industrial revolution, electricity, or the internet? Twenty years from now where do you see societies that "embrace" this technology vs. ones that don't?

      It's obvious what electricity and mass production can do to improve the prosperity and happiness of a given society. It's not so obvious to me what benefits we'd be missing out on if we just canceled LLMs at this point.

      • suthakamal17 hours ago
        LLMs aren’t the end all be all of anything. But they’re clearly a step towards augmenting human cognition and in giving machines the ability to perform cognitive tasks. And when Google says a quarter of its code is being written by LLMs, and DeepMind is making tremendous progress on protein folding and DNA understanding with fundamentally the same technology, it seems pretty clear that we’d miss out on a lot without this.

        Full disclosure: I think protein folding and DNA prediction could quite possibly be the biggest advancements in medicine, ever. And still, all the critiques of LLMs being janky and nowhere near sufficient to be generally intelligent are true.

        So yes, I think it’s absolutely on the scale of electrification.

        • yupitsme12317 hours ago
          When I look at the problems in my life, in my country, or in the world around me, not once has it occurred to me that they were due to a lack of advanced pattern recognition or DNA prediction.

          When people were dying of hunger then being able to create more food was obviously a huge win. Likewise for creating light where people used to live in darkness.

          But contemporary technologies solve non-problems and take us closer to a future no one asked for, when all we want is cheaper rent, cheaper healthcare, and less corruption.

          • suthakamal16 hours ago
            You don’t think protein folding and dna prediction will yield better healthcare?
            • yupitsme12316 hours ago
          I said cheaper, not better. What difference does it make if it's better when only a few people can afford it? I also don't accept longer lifespans as something always worth pursuing.

              You also didn't address my point that those technologies do nothing to solve the real problems that real people want solved. There's a strong possibility that they'll just exacerbate them.

              • suthakamal13 hours ago
                I guess your argument could be leveled against any transformational technology, from the industrial age through to the internet (which many doubted would have any meaningful economic impact, and clearly didn't solve many of the most pressing problems of the day for humanity).
    • mm26317 hours ago
      By "Luddite," you mean "resist progress, therefore bad." Progress is not inherently bad. Luddites didn't say it is; this blog post doesn't say it is either. We are currently rushing forward with implementing AI everywhere, as much as possible, and what these posts (thinking about Xe Iaso) urge you to think about is how this new revolutionary technology affects us, society, the people who will be displaced by it. If it will yield a disproportionate amount of misery, then we should oppose it on the moral grounds. There's no guarantee of ASI heaven or hell, so it's merely prudent to think about the repercussions. We didn't think - damn, we couldn't even approach imagining - all of the repercussions of replacing traditional agriculture with industrial agriculture, of the industrial revolution, of the internet, so maybe, with technology this powerful, it would be sensible to think about the repercussions before we upend the social order once again.
    • giraffe_lady17 hours ago
      > The idea that we could just reject the technology feels kind of like a Luddite reaction to it.

      The luddites were a labor movement protesting how a new technology was used by mill owners to attack collective worker power in exchange for producing a worse product. Their movement failed but they were right to fight it. The lesson we should take from them isn't to give up in the face of destabilizing technological change.

      • shadowgovt17 hours ago
        > Their movement failed but they were right to fight it. The lesson we should take from them isn't to give up in the face of destabilizing technological change.

        Hard to say. They sort of represented the specialist class being undermined by technology de-specializing their skillset. This is in contrast to labor strikes and riots which were executed by unskilled labor finding solidarity to tell machine owners "your machine is neat but unless you meet our demands, you'll be running around trying to operate it alone." Luddites weren't unions; they were guilds.

        One was an attempt to maintain a status quo that was comfortable for some and kept products expensive, the other was a demand that given the convenience afforded by automation, the fruits of that convenience be diffused through everyone involved, not consolidated in the hands of the machine owners.

      • mouse_17 hours ago
        Preach!
      • suthakamal17 hours ago
        They were wrong to believe that technological progress could be stopped. The viable path is policy that ensures the gains are fairly distributed, not trying to break the machines. That tactic has never worked and never will.
        • xg1517 hours ago
          > The viable path is policy which ensures the gains are fairly distributed, not try to break the machines.

          This was exactly what the historical Luddite movement was trying to achieve. The industrialists responded with "lol no". Then came the breaking of machines.

        • NegativeLatency17 hours ago
          I don't want to start a snippy argument, so sorry if this sounds combative, but when you realize that there isn't a "policy which ensures the gains are fairly distributed", then what would you suggest?

          Unionization and collective action do work; it's why we have things like the concept of the weekend. It's also generally useful, when advocating change, to have a more extreme faction.

          • suthakamal13 hours ago
            Ranked-choice voting is a good start. The New York mayoral primary is a hopeful sign.
        • zorked17 hours ago

            But the Luddites themselves “were totally fine with machines,” says Kevin Binfield, editor of the 2004 collection Writings of the Luddites. They confined their attacks to manufacturers who used machines in what they called “a fraudulent and deceitful manner” to get around standard labor practices. “They just wanted machines that made high-quality goods,” says Binfield, “and they wanted these machines to be run by workers who had gone through an apprenticeship and got paid decent wages. Those were their only concerns.”
          
          https://www.smithsonianmag.com/history/what-the-luddites-rea...
        • DrillShopper16 hours ago
          > The viable path is policy which ensures the gains are fairly distributed

          Okay, so where are those? Where are even the proposals for those?

          What would you propose? What do you think is fair distribution of these gains?

          • suthakamal13 hours ago
            High taxes, and ranked choice voting. New York's mayoral primary is a hopeful sign to me.
        • 17 hours ago
          undefined
        • jrflowers17 hours ago
          This is a good point. When has breaking stuff and disrupting productivity as a form of protest ever worked? It’s not like battles are fought with violence. They are fought through people doing Policy in their heads, which sort of just naturally becomes Policy out in the world on its own.
          • suthakamal13 hours ago
            I'm not saying protest doesn't work. I'm saying rejecting technology never has.
            • jrflowers6 hours ago
              That isn’t what Luddites did, though. I wrote my earlier post quite a while after several other folks clearly and eloquently explained that in response to your post. I figured you would’ve been up to speed on tactics vs. goals vis-à-vis the Luddites by the time you got to that post.
    • mouse_17 hours ago
      > Any information processing technology can be argued to be a surveillance technology.

      The telemetric enclosure movement and its consequences have been a disaster for humanity, and advancements in technology are now doing more harm than good. Life expectancy is dropping for the first time in ages, and the generational gains in life expectancy had a lot of inertia behind them. That's all gone now.

      • danielbln17 hours ago
        Any sources to back that up? All I can find is rising life expectancy across the board globally, with a dip during the pandemic that almost all countries have recovered from. The US has been a bit sluggish there, but still.
        • suthakamal17 hours ago
          Yes. There has been a regression in these metrics for white folks in the US. This is the first generation of white Americans who can expect to earn less and live shorter lives than their parents. However, that doesn't generalize to the rest of the population or the world, and in America the reasons are policy: healthcare and education. Not because AI, or tech broadly, is particularly pernicious.
  • RS-23217 hours ago
    Were water mills, spinning jennies, and printing presses dehumanizing too?
    • GeoAtreides16 hours ago
      In a way. Water mills and spinning jennies led to the Dickensian horrors of the textile mills: https://www.hartfordstage.org/stagenotes/acc15/child-labor

      Industrialisation itself, although it increased material output, decimated the lives and spirits of those who worked in the factories.

      And the printing press led to the Reformation and the thirty years war, one of the most devastating wars ever.

      • mchusma14 hours ago
        ...and led to our current time of maximal abundance, free time, leisure, freedom to work in more ways, and peace.
    • ACCount3617 hours ago
      Of course!

      There were people whose entire identities were tied to being able to manually copy a book.

      Just imagine how much they seethed as the printing press was popularized.

    • relaxing16 hours ago
      Yes, there are many books written about the dehumanizing aspects of the industrial revolution.

      Consider we still place particular value on products which are “artisanal” or “hand crafted.”

    • micromacrofoot16 hours ago
      Kinda? https://en.wikipedia.org/wiki/Luddite

      > The Luddite movement began in Nottingham, England, and spread to the North West and Yorkshire between 1811 and 1816.[4] Mill and factory owners took to shooting protesters and eventually the movement was suppressed by legal and military force, which included execution and penal transportation of accused and convicted Luddites.

    • stirfish17 hours ago
      I think there are quite a few dehumanizing aspects of the industrial revolution. It wasn't just the water mills, but rather the lengths we put people through to keep them running.
    • bilbo0s17 hours ago
      We don't even need to go back that far.

      All these arguments could be made for, say, news media, or social media.

      AI being singled out is a bit disingenuous.

      If it is dehumanizing, it is because our collective labor, culture, and knowledge base have combined to make it so.

      I guess, people should really think of it this way: A database is garbage in, garbage out, but you shouldn't blame the database for the data.

      • relaxing16 hours ago
        All those arguments have been and are still being made for MSM and social media. AI is not being singled out.
    • tines17 hours ago
      No, because they aren't the same. Those things are tools that reallocate cognitive burden; LLMs destroy it. LLMs cause cognitive decline; a spinning jenny doesn't.
      • bilbo0s16 hours ago
        I don't know, man.

        Gonna have to disagree there. A lot of models are being used to reallocate cognitive burden.

        A PhD-level biologist with access to the models we can envision in the future will probably be exponentially more valuable than entire bio startups are today, because s/he will be using the model to reallocate cognitive burden.

        At the same time, I'm not naive. I know there will be many, many non-PhD-level biologist wannabes who attempt to use models to remove cognitive burden entirely. But what they will discover is that they can't hold a candle to the domain expert reallocating cognitive burden.

        Models don't cause cognitive decline. They make cognitive labor exponentially more valuable than it is today. The problem is that this creates an even more extreme "winner take all" economic environment for a growing population to live in. What happens when a startup really only needs a few business types and a small team of domain experts? Today, a successful startup might mean hundreds of jobs. What happens when it's just a couple dozen? Or not even a dozen? (Other than the founders and investors capturing even more wealth than they do presently.)

        • ololobus16 hours ago
          I'd totally agree with this point if we assume that efficiency/performance growth will flatten at some point. If it turns logarithmic soon, progress will grow slowly over the next decades, and then, yes, it will likely look like current software developers, engineers, scientists, etc. just got an enormously powerful tool which knows many languages almost perfectly and _briefly_ knows the entire internet.

          Yet, if we trust all these VC-backed AI startups and assume it keeps growing rapidly, e.g. at least linearly, over the next few years, I'm afraid it may indeed reach a superhuman _intelligence_ level (let's say p99, or maybe even p999, of the population) in most areas. And then why do you need this top-notch human biologist if you can just as well buy a few racks of TPUs?

          • bilbo0s13 hours ago
            Because only the biologist knows what assays to ask the super human intelligence for. And how the results affect the biomolecular process you want to look at.

            If you can’t ask the right questions - which is everyone without a PhD in biology - you’re kind of out of luck. The superhuman intelligence will just spin forever trying to figure out what you’re talking about.

        • tines16 hours ago
          It doesn't really matter what something can be used for, it matters what it will be used for most of the time. Television can be used for reading books, but people mostly don't use it that way. Smartphones can be used for creation, but people mostly don't use them that way. You've got Satya Nadella on a stage saying AI makes you a better friend because it can reply to messages from your friends for you. We are creating, and to a large extent have created, a world that we will not want to live in, as evidenced by skyrocketing depression and the loneliness epidemic.

          Read Neil Postman or Daniel Boorstin or Marshall McLuhan or Sherry Turkle. The medium is the message.

  • wagwang17 hours ago
    This blog post basically reads as, AI doesn't always adhere to my leftist values.
    • DowsingSpoon17 hours ago
      Murder also doesn’t adhere to my leftist values, which is to say, your statement is useless without being specific about which values AI doesn’t adhere to, and why you think that’s not a problem at all. The article explicitly calls out the “deeply-held values of justice, fairness, and duty toward one another.” Are these the specific leftist values you’re so dismissive of?
      • Terr_17 hours ago
        Or to invert it, what "non-leftist values" does grandparent poster believe are lacking? (Hopefully the answer isn't "...You know the ones.")
      • charcircuit16 hours ago
        For example:

        >What happens to people's monthly premiums when a US health insurance company's AI finds a correlation between high asthma rates and home addresses in a certain Memphis zip code? In the tradition of skull-measuring eugenicists, AI provides a way to naturalize and reinforce existing social hierarchies, and automates their reproduction.

        This sentence is about how AI may be able to more effectively apply the current values of society as opposed to the author's own values. It also fails to recognize that for things like insurance there are incentives to reduce bias to avoid mispricing policies.
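
        (As an editorial aside on the mechanism the quoted passage describes: a minimal sketch, with entirely synthetic numbers, of how a naive expected-cost pricing model simply reproduces whatever geographic disparity is in its claims data - the correlation becomes the price. No real insurer, zip code, or actuarial method is implied.)

            # Toy illustration with synthetic data; values are hypothetical.
            claims_by_zip = {
                "zip A (high asthma rate)": [1400, 1550, 1350],
                "zip B": [700, 650, 800],
            }

            for zip_code, costs in claims_by_zip.items():
                premium = sum(costs) / len(costs)  # naive expected-cost pricing
                print(f"{zip_code}: premium ~ {premium:.0f}")
            # zip A (high asthma rate): premium ~ 1433
            # zip B: premium ~ 717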

        >The logical answer is that they want an excuse to fire workers, and don't care about the quality of work being done.

        This sentence shows that the author perceives that AI may harm workers. Harming workers appears to be against her values.

        >This doesn't inescapably lead to a technological totalitarianism. But adopting these systems clearly hands a lot of power to whoever builds, controls, and maintains them. For the most part, that means handing power to a handful of tech oligarchs. To at least some degree, this represents a seizure of the 'means of production' from public sector workers, as well as a reduction in democratic oversight.

        >Lastly, it may come as no surprise that so far, AI systems have found their best product-market fit in police and military applications, where short-circuiting people's critical thinking and decision-making processes is incredibly useful, at least for those who want to turn people into unhesitatingly brutal and lethal instruments of authority.

        These sentences show that the author values people being able to break the law.

        • Terr_9 hours ago
          > > high asthma rates and home addresses in a certain Memphis zip code [...] naturalize and reinforce existing social hierarchies

          > This sentence is about how AI may be able to more effectively apply the current values of society

          *whoooosh*

          No, it's about how poor people growing up in polluted regions can be kept poor by the damage being inflicted upon them.

          Keeping a permanent, hereditary poor underclass is not a "current value" of society at large.

        • DrillShopper16 hours ago
          This post reads like a really bad LLM
          • charcircuit16 hours ago
            I tried to keep my explanation simple, since it appeared the other commenter had trouble understanding the author's views on AI, which were pretty clear when I read it over. The other comment called out a set of values quoted from a discussion unrelated to AI.