I don't genuinely expect the author of a blogpost who titles their writing "AI is Dehumanization Technology" to be particularly receptive to a counterargument, but hear me out.
I can think of few things as intellectually miserable as the #help channels of the many open source projects on Discord, for example. If I had to wager a guess: were these projects to integrate LLMs into their chatbots for just a few bucks and let them take the brunt of the interactions, all participants on either side would be left with much more capacity to maintain and express empathy and care, and to nurture social connections. This extends beyond these noncommercial contexts, of course, to things like customer service.
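For what it's worth, the plumbing for this really is cheap and small. Here's a rough sketch of the kind of thing I mean, assuming discord.py and the OpenAI Python SDK; the channel ID, token variable, and system prompt are placeholders, and a real deployment would need rate limiting, project-specific context, and escalation to human maintainers:

```python
# Minimal sketch: an LLM giving first-pass replies in a project's #help channel.
# Assumes discord.py and the OpenAI Python SDK; HELP_CHANNEL_ID is a placeholder.
import os
import discord
from openai import OpenAI

HELP_CHANNEL_ID = 123456789012345678  # the project's #help channel (placeholder)

llm = OpenAI()  # reads OPENAI_API_KEY from the environment
intents = discord.Intents.default()
intents.message_content = True
bot = discord.Client(intents=intents)

SYSTEM_PROMPT = (
    "You are a first-line helper for an open source project. "
    "Answer concisely, ask for missing details (version, OS, logs), "
    "and say when a human maintainer should take over."
)

@bot.event
async def on_message(message: discord.Message):
    # Ignore other bots and anything outside the help channel.
    if message.author.bot or message.channel.id != HELP_CHANNEL_ID:
        return
    # Synchronous call kept for brevity; a real bot would do this off the event loop.
    reply = llm.chat.completions.create(
        model="gpt-4o-mini",  # any inexpensive model is fine for triage
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": message.content},
        ],
    )
    await message.reply(reply.choices[0].message.content)

bot.run(os.environ["DISCORD_BOT_TOKEN"])
```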
It is sad to me that the skills required to navigate everyday life are being delegated to technology. Pretty soon it won’t matter what you think or feel about your neighbors because you will only ever know their tech-mediated facade.
Also, efficiency.
I think everyone in tech consulting can tell you that inserting another party (outsourcing) into a previously two-party transaction rarely produces better outcomes.
Human-agent-agent-human communication doesn't fill me with hope, beyond basic well-defined use cases.
Isn’t this basically what technology does? I suppose there is also technology to do things that weren’t possible at all before, but the application is often automation of something in someone’s everyday life that is considered burdensome.
Edit: one point I forgot to make is that it has already become absurd how different someone's online persona or confidence level is when they are AFK; it's as if they've been reduced to an infantile state.
This would be a good counter if this were all that this technology is being used for.
They can of course still argue that it's majority-bad for whatever list of reasons, but that's not what bugs me. What bugs me is the absolute tunnel vision of "for principled reasons I must find this thing completely and utterly bad, no matter what". Because this is what the title, and the tone, and just about everything else in this article comes across as to me, and I find it equal parts terrifying and disagreeable.
AI companies also only sell the public on the upside of these technologies. Behind closed doors they are investing hard in this with the hope of reducing or eliminating their labor costs, with no regard for any damage to society.
Do you hold the position that a thing is bad because it is possible to do harm, or that it is predominantly causing harm?
Most criticisms cite examples demonstrating the existence of harm because proving existence requires a single example. Calculating the sum of an effect is much harder.
Even if the current impact of a field is predominantly harmful, it does not follow that the problem is with what is being attempted. Consider healthcare: a few hundred years ago, much of healthcare did more harm than good; charlatans and frauds were commonplace. Was that because healthcare itself was a bad thing? Was it a mistake to even go down that path?
So I think your argument is kind of misleading.
I am advocating adopting methods of improvement rather than abandoning the pursuit of beneficial results.
I think science was just a part of the solution to healthcare; much of the advance was also in what was considered allowable or ethical. There remain plenty of harmful medical practices in use today in places where regulation is weak.
Science has done little to stop those harms. The advances that led to the requirement for a scientific backing were social. That those practices persist in some places is not a scientific issue but a social one.
That ultimately enabled "doctors" to be quite useful. But the fact that the "profession" existed earlier is not what allowed it to bloom.
Mere moments later...
> Even if the current impact of a field is predominantly harmful
So let's just skip the first part then, you conceded it's predominantly harmful. On this we agree.
> it does not follow that the problem is with what is being attempted.
Well, it's not a logical 1-to-1, no. But I would say if the current impact of a field is predominantly harmful, then revisiting what is being attempted isn't the worst idea.
> Consider healthcare: a few hundred years ago, much of healthcare did more harm than good; charlatans and frauds were commonplace. Was that because healthcare itself was a bad thing? Was it a mistake to even go down that path?
If OpenAI and company were still pure research projects, this would hold some amount of water, even if I would still disagree with it. However, that omits the context that OpenAI is actively (and under threat of financial ruin) turning itself into a for-profit business, and is actively selling its products, as are its competitors, to firms in the market with the explicit notion of reducing headcount for the same productivity. This doesn't need a citation: look at any AI product marketing and you'll see that a consistent theme is the removal of human labor and/or interaction.
>So let's just skip the first part then, you conceded it's predominantly harmful. On this we agree.
I'm afraid if you interpret that statement as a concession of a fact, I don't think we can have a productive conversation.
Sure, if you want to make sure you don't get any more contributors, you can try to replace that with a chatbot that will always reply immediately, but might just be wrong like 40% of the time, is not actually working on the project and will certainly not help in building social interactions between the project and its users.
Most interactions start with users being vague. This can already result in some helpers getting triggered, and starting to be vaguely snarky, but usually this is resolved by using prepared bot commands... which these users sometimes just won't read.
Then the misunderstandings start. Or the misplaced expectations. Or the lies. Or maybe the given helper has been having a bad day, but due to their long time presence in the project, they won't be moderated out properly. And so on. It's just not a good experience.
Ever since I left, I got screencaps of various kinds of conversations. In some cases, the user was being objectively insufferable - I don't think it's fair to expect a human to put up with that. Other times, the helper was being unnecessarily mean - they did not appreciate my feedback on that. Neither happens with LLMs. People don't grow resentful of the never ending horde of what feels like increasingly clueless users, and innocent folk don't get randomly chewed out for not living up to the optimality expectations of those who tend to 1000s of cases similar to theirs every week.
While direct human support is invaluable in many cases, I find it hard to believe how completely our industry has forgotten the value of public support forums. Here are some pure advantages over Discord/Slack/<Insert private chat platform of your liking>:
- Much much better search functionality out of the box, because you can leverage existing search engines.
- From the above it follows that high value contributors do not need to spend their valuable time repeatedly answering the same basic questions over and over.
- Your high value contributors don't have to be employees of the company, as many enthusiastic power users often participate and contribute in such places.
- Conversations are _much_ easier to follow without having to resort to hidden threads and forums posts on Discord that no one will ever read or search.
- Over time you build a living library of supporting documentation instead of useful information being strewn in many tiny conversations over months.
- No user expectation to be helped immediately. A forum sets the expectation that this is an async method of communication, so you're less likely to see entitled aggravating behavior (though you won't see many users giving you good questions with relevant information attached even on forums).
If we make information more accessible, support will reduce in volume. Currently there's a tendency for domain experts to hoard all relevant information in their heads, and dole it out at their discretion in various chat forums. Forums whose existence is often not widely known to begin with (not to mention gated behind making accounts in certain apps the users may or may not care about/want to).
So my point is: instead of trying to automate a decidedly bad solution to make it scalable, and treating that as a selling point of AI, couldn't we make the information more accessible in the first place?
This meant you had a fairly low and consistent ceiling for messages. What you'd also observe over the years is a gradual decline in question quality. According to every helper that is. How come?
Admittedly we'll never really know, so this is speculation on my part, but I think it was exactly because of the better availability of information. During these years, we tried cultivating other resources and implementing features with the specific goal of improving UX. It worked. So the only people still "needing" assistance were those who failed to navigate even this better UX. Hence, worse questions, yet never ending.
Another issue with this idea is that navigating the sheer volume of information can become challenging. AWS has pretty decent documentation, for example, but if you don't already know your way around the given service's docs, it's a chore to find anything. Keyword search won't be super helpful either, because it's a lot of prose and not a lot of structure. Compare this to the autogenerated docs of the AWS CLI, and you'll find a stark difference.
Finding things, especially among a lot of faff, is tiring. Asking a natural language question is trivial. The rest is on people to believe that AI isn't the literal devil, contrary to what blogposts like the OP would have one believe.
I’m not sure there is a solution to help people who don’t come to the table willing to put in the effort required to get help. This seems like a deep problem present in all kinds of ways in society, and I don’t think smarter chatbots are the solution. I’d love to be wrong.
If such a dataset exists, I don't have it. The most I have is the anecdotal experience of not having to be afraid of asking LLMs silly questions, and learning things I could then cross-validate as correct without tiring anyone.
No matter your position on the AI helpfulness, asking volunteers to not only spend time helping support a free software project but to also pony up money is just doubling down on the burden free software maintainers face as was highlighted in the recent libxml2 discussion.
But then one could also just argue that this is something the individual projects can decide for themselves. Not really for either of us to make this call. You can consider what I said as just an example you disagree with in that case.
As a dev I have to be already quite desperate if I engage with a chat bot.
Discord is better suited for developers working together, not for publishing the results to an audience.
Also, try to come up with a less esoteric example than Discord Help channels. In fact, this is the issue with most defenses of LLMs. The benefits are so niche, or minor, that the example itself shows why they are not worth the money being poured in
Should be fairly obvious, but I disagree. Also I think you mean asocial, not antisocial. What's uniquely draconian about automated systems though? They're even susceptible to the same social engineering attacks humans are (it's just referred to as jailbreaking instead).
> Also, try to come up with a less esoteric example than Discord Help channels.
No.
> The benefits are so niche, or minor, that the example itself shows why they are not worth the money being poured in
Great. This is already significantly more intellectually honest than the entire blogpost.
“I’m not a tech worker…”, yet they like to tinker with code and local Linux servers.
They have not seen how robotic the job has become and felt how much pressure there is to act like a copy-paste/git pull assembly line.
The simple fact of the matter is, there is a sharp gap between what an AI can do, and what a human does in any role involving communications, especially customer service.
Worse, there are psychological responses that naturally occur when you do any number of a few specific things that escalate conflict if you leave this to an AI. A qualified CSR person is taught how to de-escalate, defuse, and calm the person who has been wound up to the point of irrationality. They are the front-line punching bags.
AI can't differentiate between what's acceptable and what's not, because the tokens it uses to identify these contexts have two contradictory states in the same underlying tokens. This goes to core classical computer science problems like halting, and other aspects.
The companies that were ahead of the curve on this invested a lot into it almost a decade and a half ago, and they found that in most cases these types of systems exponentiated the issues once people did finally get through to a person, and they took it out on that person irrationally because they were the representative of the company that put them through what amounts to torture.
Some examples of behavior that cause these types of responses: when you are being manipulated in a way that you know is manipulation, it causes stress through perceptual blindspots, creating an inconsistent internal mental state and resulting in confusion. When that happens, it causes a psychological reversal, often into irrational anger. An infinite or byzantine loop designed to run people in circular hamster wheels is one such structure.
If you've ever been in a social interaction where you offer an olive branch and they seem to accept it, but at the last minute throw it back in your face, you've experienced this. The smart individual doesn't ever do this, because they know they will make an enemy for life who will always remember.
This is also how, through communication, you can impose coercive cost on people, and companies have done this for years where antitrust and FTC rules weren't being enforced. These triggers are inherent to a lesser or greater degree in all of us, every person alive.
The imposition of personal cost through this and other psychological blindspots is how torturous and vexatious processes are created.
Empathy and care are a two-way street. It requires both entities to be acting in good faith through reflective appraisal. When this is distorted, it drives people crazy, and there is a critical saturation point where assumptions change because the environment has changed. If people show the indicators that they are acting in bad faith, others will treat them automatically as acting in bad faith. Eventually, the environment dictates that those people must prove they are acting in good faith (somehow), but proving this is quite hard. The environment switches from the innocent benefit of the doubt to guilty until proven innocent.
These erosions of the social contract while subtle, dictate social behavior. Can you imagine a world where something bad happens to you, and everyone just turns their backs, or prevents you from helping yourself?
It's the slippery slope of society falling back into violence; few commenting on things like this today have actually read the material published by the greats on the social contract, and don't know how society arose from the chaos of violence.
Maybe. But only if the LLMs are correct. Which they too frequently aren't.
So the result is that the tech industry has figured out how to not only automate making people angry and frustrated, they've managed to do it at scale.
Yay.
The former is an admittedly frustrating aspect of our transactional relationships with companies, while the others are the foundations of a functioning society throughout our civilization. Conflating business interactions with societal needs is a familiar trope on HN, IMO.
Often you give what you get.
If you're nice to the customer service people on the phone, frequently they loosen up and are nice right back at you. Some kind of crazy "human" thing, I guess.
I'm not saying those things aren't valuable, or that humans can't express social and spiritual value in those ways, but that human value doesn't only exist there. And so, to give AI the power of complete dehumanization is to reduce humans to just pattern followers. I don't believe that is the case.
Resilience and strength in our civilisation come from confidence in our competence,
not from sanctifying patterns so we don’t have to think.
We need to encourage and support fluidity; domain knowledge is commoditised, and the future is fluid composition.
Maybe there would be merit to this notion if society provided the necessary safety net for this person to start over.
I don't think we should assume most people are capable of what you describe. Assigning "should" to this assumes what you're describing is psychologically tenable across a large population.
> too much rigidity or militant clinging to ideas is insecurity or attempts at absolving personal responsibility.
Or maybe some people have a singular focus in life and that's ok. And maybe we should be talking about the responsibility of the companies exploiting everyone's content to create these models, or the responsibility of government to provide relief and transition planning for people impacted, etc.
To frame this as a personal responsibility issue seems fairly disconnected from the reality that most people face. For most people, AI is something that is happening to them, not something they are responsible for.
And to whatever extent we each do have personal responsibility for our careers, this does not negate the incoming harms currently unfolding.
People can self assign any value whatsoever… that doesn’t change.
If they expect external validation then that’s obviously dependent on multiple other parties.
This sounds sort of like a "God of the gaps" argument.
Yes, we could say that humanity is left to express itself in the margins between the things machines have automated away. As automation increases its capabilities, we just wander around looking for some untouched back-alley or dark corner the robots haven't swept through yet and do our dancing and poetry slams there until the machines arrive forcing us to again scurry away.
But at that point, who is the master, us or the machines?
People tend to talk about any AI-related topic by comparing it to industrial shifts that happened in the past.
But it's much Much MUCH bigger this time. Mostly because AI can make itself better; it will be better, and it is better with every passing month.
It's a matter of years until it can completely replace humans in any form of intellectual work.
And those are not my words but those of the smartest people in the world, like the grandfather of AI.
We humans think we are special. That there won't be something better than us. But we are in the middle of the process of creating something better.
It will be better. Smarter. Not tired. Won't be sick. Won't ever complain.
And it IS ALREADY and WILL replace a lot of jobs, and it will not create new ones, purely due to efficiency gains and the lack of brainpower in the majority of people who will be laid off.
Not everyone is a Nobel Prize winner. And soon we will only need such people to advance AI.
Can it? I'm pretty sure current AI (not just LLMs, but neural nets more generally) requires human feedback to prevent overfitting, fundamentally undercutting any fear or hope of the singularity as predicted.
AI can not make itself better because it can not meaningfully define what better means.
It's only the beginning. AI agents are able to simulate tasks, get better at them, and make themselves better.
At this point it's silly to say otherwise.
This is sensationalism. There’s no evidence in favor of it. LLMs are useful in small, specific contexts with many guardrails and heavy supervision. Without human-generated prior art for that context they’re effectively useless. There’s no reason to believe that the current technical path will lead to much better than this.
Automation has costs and imagining what LLMs do now as the start of the self-improving, human replacing machine intelligence is pure fantasy.
A demo is one thing. Being deployed in the real world is something else.
The only thing I've seen humanoid robots doing is dancing and occasionally a backflip or two. And even most of that is with human control.
The only menial task I ever saw a humanoid robot do so far is to take bags off of a conveyor belt, flatten them out and put them on another belt. It did it at about 1/10th the speed of a human, and some still ended up on the floor. This was about a month ago, so the state of the art is still in the demo stage.
The risk raised in the article is that AI is being promoted beyond its scope (pattern recognition/creation) to legal/moral choice determination.
The techno-optimists will claim that legal/moral choices may be nothing more than the sum of various pattern-recognition mechanisms...
My take on the article is that this is missing a deep point: AI cannot have a human-centered morality/legality because it can never be human. It can only ever amplify the existing biases in its training environments.
By decoupling the gears of moral choice from human interaction, whether by choice or by inertia, humanity is being removed from the mechanisms that amplify moral and legal action (or, in some perverse cases, amplify the biases intentionally)
Even further, AI has only learned through what we've articulated and recorded, and so its inherent biases are only those of our recordings. I'm not sure how that sways the model, but I'm sure that it does.
It would but I don't think that's what they're saying. The agent of dehumanization isn't the technology, but the selection of what the technology is applied to. Or like the quip "we made an AI that creates, freeing up more time for you to work."
Wherever human value, however you define that, exists or is created by people, what does it look like to apply this technology such that human value increases? Does that look like how we're applying it? The article seems to me to be much more focused on how this is actually being used right now rather than how it could be.
The reductio on this is the hollowing out of the hard-for-humans problem domain, leaving us to fight for the scraps of the easy-for-humans domain. At first glance this sounds like a win. Who wouldn't want something else to solve the hard problems? The big issue with this is easy-for-human problems are often dull, devoid of meaning, and low-wage. Paradoxically, the hardest problems have always been the ones that make work meaningful.
We stand at the crossroads where one path leads to an existence with a poverty of meaning, where although humans create and play by their own rules, we feel powerless to change it. What the hell are we doing?
[1] I know it's a bit hard to define, but I'd vaguely say that it's significantly better in the majority of intelligence areas than the vast majority of the population. Also it should be scalable. If we can make it slightly better than human by burning the entire Earth's energy, then it doesn't make much sense.
Want to make a movie? The goal should be getting the movie out, seen by people and reviewed.
Whether it's people captured by film, animations in Blender or AI slop, what matters is the outcome. Is it good? Do people like it?
I do the infrastructure at a department of my Uni as sort of a side-gig. I would have never had the time to learn Ansible, borg, FreeIPA, wireguard, and everything else I have configured now and would have probably resorted to a bunch of messy shell scripts that don't work half the time like the people before me.
But everything I was able to set up I was able to set up in days, because of AI.
Sure, it's really satisfying because I also have a deep understanding of the fundamentals, and I can debug problems when AI fails, and then I ask it "how does this work" as a faster Google/wiki.
I've tried Windsurf but gave up, because the AI does something that doesn't work and I can give it the prompts to find a solution (+ think for myself) much faster than it can figure it out itself (and probably at the cost of a lot fewer tokens).
But the fact that I enjoy the process doesn't matter. And the moment I can click a button and make a webapp, I have so many ideas in my drawer for how I could improve the network at Uni.
I think the problem people have is that they work corporate jobs where they have no freedom to choose their own outcomes so they are basically just doing homework all their life. And AI can do homework better than them.
> Want to make a movie? The goal should be getting the movie out, seen by people and reviewed.
This especially falls apart when it comes to art, which is one of the most “end-goal” processes. People make movies because they enjoy making movies, they want movies to be enjoyed by others because they want to share their art, and they want it to be commercially successful so that they can keep making movies. For the “enjoying a movie” process, do you truly believe that you’d be happy watching only AI-generated movies (and music, podcasts, games, etc.) created on demand with little to no human input for the rest of your life? The human element is truly meaningless to you, it is only about the pixels on the screen? If it is, that’s not wrong - I just think that few people actually feel this way.
This isn’t an “AI bad” take. I just think that some people are losing sight of the role of technology. We can use AI to enable more people than ever before to spend time doing the things they want to do, or we can use it to optimize away the fun parts of life and turn people even further into replaceable meat-bots in a great machine run by and for the elites at the top.
Reducing the human experience to a means to an end is the core idea of dehumanization. Kant addressed this in the "humanity formula" of the categorical imperative:
"Act in such a way that you treat humanity, whether in your own person or in the person of any other, always at the same time as an end, never merely as a means."
I'm curious how you feel about the phrase "the real treasure was the friends we made along the way." What does it mean to you?
This explanation might've been passable four years ago, but it's woefully out of date now. "Mostly plausible-sounding word salad"?
It is word salad, unless you’re a young, underpaid contractor from a country previously colonised by the British or the United States.
How could you possibly judge such a diverse set of outputs? There are thousands of models, each of which can be steered/programmed with prompts and a lot of parameter-twiddling; it's nearly impossible to say "the chat bots" and give some sort of one-size-fits-all judgement of all LLMs. I think your reply shows a bit of ignorance if that's all you've seen.
Oxford Dictionaries says "word salad" is "a confused or unintelligible mixture of seemingly random words and phrases", and true, I'm no native speaker, but that's not commonly the output I get from LLMs. Sometimes though, some people’s opinions on the internet feel like word salad, but I guess it's hard to distinguish from bait too.
I remain an optimist. I believe AI can actually give us more time to care for people, because the computers will be able to do more themselves and between each other. Unproven thesis, but so is the case laid out in this article.
There are some jobs that humans really shouldn't be doing. And now, we're at the point where we can start offloading that to machines.
I can't really say I've seen that. The article seems to be about adoption of AI in the Canadian public sector, not something I'm really familiar with as a Brit. The government here hopes to boost the economy with it and Hassabis at Deepmind hopes to advance science and cure diseases.
I think AI may well make the world more humane by dealing with a variety of our problems.
If you cut off a bird's wings, it can't bird in any real meaningful sense. If you cut off humans from others, I don't think we can really be human either.
Also the anti-technology stance is good for humanity since it fundamentally introduces opposition to progress and questions the norm, ultimately killing the weak/inefficient parts of progress.
There is obviously truth to that, but guns are also used for self defense and protecting your dignity. Guns are a technology, and technology can be used for good or evil. Guns have been used to colonially enslave people, but also been used to gain independence.
I disagree with the assessment that AI is intrinsically dehumanizing. AI is a tool, a very powerful tool, and because the very rich in America don't see the people they rule as humans of equal dignity, the technology itself betrays their feelings.
Attacking the technology is wrong; the problem is not the technology but that every company has a tyrant king at its helm who answers to no one, because they have purchased the regulators that might have bound their behavior, meaning that there are no consequences for a CEO/King of a company's misdeeds. So every company's king ends up using their company/fiefdom to further their own personal ambitions of power, and nobody is there to stop them. If the technology is powerful, then failure to invest in it, while other even more oppressive regimes do invest in it, potentially gives them the ability to dominate you. Imagine you argue nuclear weapons are a bad technology, while your neighbor is busy developing them. Are you better off if your neighbor has nuclear weapons and you don't?
The argument that AI is a dehumanization technology is ultimately an anarchist argument. Anarchy's core belief is that no one should have power to dominate anyone else, which inevitably means that no one is able to provide consequences for anyone who ambitiously betrays that belief system. Reality does not work that way. The only way to provide consequences to a corrupt institution is an even more powerful institution based on collective bargaining (founded by the threat of consequences for failing to reach a compromise, such as striking). There is no way around realpolitik, you must confront pragmatic power relationships to have a cogent philosophy.
The author is mistaking AI for wealth disparity. Wealth is power and power is wealth, and when it is so concentrated, it puts bad actors above consequences and turns tools that could be used for the public good into tools of oppression.
We do not primarily have an AI problem, but a wealth concentration problem, and this is one of many manifestations of it.
My point was that guns can be used for murder, in the same way that AI can be used to influence or surveil, but guns are also what you use to arrest people, fight oppressors and tyrants, and protect your property. Fists, knives, bows and arrows, poison, bombs, tanks, fighter jets, and drones are all forms of weapons. The march of technology is inevitable, and it's important not to be on the losing side of it.
What the technology is capable of is less interesting than who has access to it and the power disparity that it creates.
The author's argument is that AI is a (1) high-leverage technology (2) in the hands of oligarchs.
My argument is that the fact that it is a high leverage technology is not as interesting, meaningful, or important as the existence of oligarchs who do not answer to any regulatory body because they have bought and paid for it.
The author is arguing that a particular weapon is bad, but failing to argue that we are in a class war that we are losing badly. The author is focusing on one weapon being used to wage our class war, instead of arguing about the cost of losing the class war.
It is not AI de-humanizing us, but wealth disparity that is de-humanizing us, because there is nothing forcing the extremely wealthy to treat others with dignity. AI is not robbing people of dignity, ultra wealthy people are robbing people of dignity using AI. AI is not dehumanizing people. Ultra wealthy people are using AI to dehumanize people. Those are different arguments with different implications and prescriptions on how to act or what to do.
"AI is bad" is a different argument than "oligarchs are bad".
I encourage people to not get too hung up on that and look at the arguments about the effects on society and how we function as humans.
I have very mixed feelings about AI, and this blog hits some key notes for me. If I have time later I will try to highlight those.
A lot of things that are possible enable evil purposes as or more readily than noble ones. (Palantir comes to mind.) I think we have an ethical obligation to be aware of that and try to steer to the light.
Technology is always an extension of the ethos. It doesn't stand on its own, it needs and reflects the mindset of humans.
Don't care about competition? Find a place where rent prices are reasonable and you'll find it's actually surprisingly easy to earn a living.
Oh, but you want the fancy stuff, don't you?
People don't move to high cost of living areas because they want nice TVs. Fancy stuff is the same price everywhere.
But at the end of the day, it's extremely unhealthy to let these problems force us into feeling like we have to make a lot of money. You can find cheap solutions for almost everything almost everywhere if you compromise.
I think they seek jobs and places to live that give them the maximum overall benefit. I currently live in Seattle, which is quite expensive.
If there was another city like Seattle with the same schools, healthcare, climate, and culture, but cheaper housing, I'd move there as long as the salaries there weren't so much lower that it more than canceled out the benefit of cheaper housing.
The problem in the US is that even though some cities are quite expensive, they are still overall the most economical choice for people who can get good jobs in those cities. The increased pay more than makes up for the higher prices.
Give the book a go if you haven't. It lays out many of the fundamental problems of current social organization way better than I can.
> Oh, but you want the fancy stuff, don't you?
Just some food for thought, though: is weaponizing hyperpositivity the only way to produce fancy stuff? Think about it and you'll see for yourself that this is a false dichotomy, embedded in a realism that prevents us from improving society.
-claude w/ eigenbot absolute mode system setting
I agree with the general sentiment, but absolutely disagree with this claim. The push to adopt AI is a gold rush, not any coordinated thing. I think in the political arena they don't give a single f about how humanizing or dehumanizing a thing is, especially if it's this abstract as "AI" or whatever. Everyone out there is there to further their own limited scope goal, according to whatever idea they have on how to achieve that. AI entered the public consciousness and so, companies are now in a race to not get behind. Politicians do enter into the picture, but mostly as ones who enjoy the fruit of the AI effort, it being a good public distraction, and an effective tool in creating propaganda. But nowhere near is it a primary goal, nor does it nefariously further any underlying primary goal, such as dehumanizing the people. It's merely a tool, and a phenomenon with beneficial side effects.
Address the concerns specifically, suggest solutions for those concerns.
I have made a submission to a government investigation highlighting the need to explicitly identify when an AI makes a determination involving an individual, and the need for mechanisms to be in place for individuals to be aware when that has happened, along with a method to challenge the determination if they feel it was incorrect.
I have seen a lot of blanket judgements vilifying an entire field of research and industry and all those who participate in it. It has become commonplace to use the term techbros as a pejorative to declare people as others.
There is a word for behaviour like that. That is what dehumanisation is.
There is definitely a huge gap between what is happening right now and public perception (and, I'd argue, a few people with a lot of money to gain or lose going out of their way to increase, not decrease, that gap).
In that context, the overall notion the post approaches (that Canada would do well to avoid basing decisions that could help or harm real people on the output of these unproven systems at this juncture) is a good notion.
It's just that greed took over, and it took over big time.
Several shitty decisions in a row: scaling it too much, stealing data, marketing it before it can deliver, government use. The list goes on and on.
This idea that there's something inherent about the technology that's dehumanizing is an old trick. The issue lies in whoever is making those shitty decisions, not the tech itself.
There's obviously a fog surrounding every single discussion about this stuff. Probably the outcome of another remarkably shitty decision by someone (so many people parroting marketing ideas, it's dumb).
We'll be ok as humans, trust me on this one. It's too big to not fail.
Luddites fought against automatic weaving looms - by destroying the machines. Reasoning: It is stealing our jobs.
Comics were sure to destroy our children's minds, because they wouldn't read real literature anymore (conveniently forgetting that only 100 years before that, reading was considered a plague of the youth and putting their minds into bad places).
Rock music is of the devil. If you play "Stairway to heaven" backwards, and squint your ears just right ...
Phones that memorise phone numbers? Surely it is brain rot if you are no longer forced to memorise phone numbers.
Social media? Work of the devil! Manipulates the minds, causes addiction, bad elections, and warts on the nose!
Cryptocurrency? Waste of energy, won't someone please think of the environment? Only used for crime!
3D printing? What if someone ghasp prints himself a gun?
And now it is AI. And when AI is normalised, and something new shows up, it will be that.
For one, you, like many, misunderstand the Luddite movement. They didn’t break weaving frames because they were against technology, they broke them because they were being used to grossly devalue the work weavers used to earn their livelihood. There was a mass consolidation of textile manufacturing from small groups of tradespeople into a few very wealthy factory owners who used easily exploitable labor (like children) in very poor working conditions and paid unlivable wages to make low quality but cheap garments. The luddites weren’t against technology, they were against the way it was being used. They even only targeted factories that they thought were particularly exploitative, leaving the ones with fairer business practices alone. But they get mischaracterized as anti-technology, anti-progress…but maybe they just wanted to be able to live their lives well and support their families.
There’s really a lot to learn from the luddites and their historical context, and it really goes to show that history is truly cyclical.
I would put AI into that same group. It is and will continue to be weaponized against you by people with more power than you.
It’s heralded as a tool for increasing efficiency (a favorite western-capitalism euphemism for cancerous exploitation of the environment and humans) while neglecting its externalities, which include destroying the jobs of the very people it stole the work from, and making you more reliant upon it, possibly to the point you forget how to do the things you use it for.
It’s a pretty devilish poisoned fruit.
We have been duped for half a century into solving increasingly niche problems whose benefits accrue ever upward beyond our reach, and whose harms are forcibly distributed across an unwilling populace. On the whole, technology has done exponentially more harm (mass surveillance, psychological exploitation, automated weapons, pollution, contamination of data, destruction of natural resources, outsourcing, dehumanization) than good (medical technology, targeted therapies, knowledge exchanges, Wikipedia, global collaboration). Instead of focusing on the broader issues of survival staring us in the face, we have willingly ceded agency and sovereignty to a handful of unelected Capitalists who convinced us that this invention will somehow, finally, inevitably solve all our ills and enable a utopia.
Not one of the boosters of any prior modern “technological revolution” can point to societal good that outpaced the harms caused by their creation. Not NFTs, not cryptocurrency, and certainly not AI. Even Machine Learning has seen more harmful than helpful use, despite its genuine benefits to human society and technological progress, enabling surveillance advertising and the disappearance of dissidents instead of customized healthcare and efficient distribution of resources in real-time.
Yet whenever someone dares to point this out, we’re decried by proponents as Luddites - ignoring the fact the real plight of the Luddites wasn’t anti-technology, but anti-Capital. To call us Luddites derisively is analogous to admitting the indefensibility of your position: You’re acknowledging we are right to be angry for being harmed for the sake of Capital alone, but that you will do everything in your power to stop our cause. We aren’t saying we want technology to disappear and to revert to the dark ages, we’re demanding that technology benefits everyone more than it harms them. We demand it be inclusive rather than exclusive. It should amplify our good works and minimize societal harms.
AI in the current context is the societal equivalent of suicide. It robs us of the remaining, dwindling resources we have on yet another thin, hollow promise that this time, it will be different. Four years ago we literally had Crypto Miners lighting up Coal Power Plants while proclaiming cryptocurrency and NFTs will solve climate change somehow, and now AI companies are firing up fuel turbines and nuclear power plants while promising the same thing.
We need to stop obsessing over technical minutiae and showing blind faith in technology, and realize that these are all tools of varying utility. We have mounting evidence that AI is causing more harm than good now, and that there is no practicable roadmap where its benefits will outweigh its harms in the near term. For all this obsessing over share value and “progress”, we need to accept the gruesome reality that our talent, our intelligence, and our passion is being manipulated to harm the masses - and that we alone can decide to STOP. It’s about taking our heads out of the sand, objectively assessing the whole of the system and superstructure, and taking action to change things.
More fuzzy code and token prediction isn’t going to save our asses or make the world a better place. The only way to do that is to acknowledge our role in the harms we perpetuate and choosing to stop them, regardless of the harms to ourselves in the moment.
And if we define being good as “helping to keep the human race alive as the top species”,
then yes, technology has caused more harm than good.
The world is currently experiencing another mass extinction event, and at the current pace of events billions of people will either die from starvation or dehydration, or from various ecological disasters or wars caused by population migrations.
>...have willingly ceded agency and sovereignty to a handful of unelected Capitalists who convinced us that this invention will somehow, finally, inevitably solve all our ills and enable a utopia.
I've been around for that half century. The system of government is much the same. The new tech like PCs, mobile and the web is mostly tech gizmos that people quite like and choose to buy, not some fiendish plan sold as utopia.
> Elon Musk - whose xAI data centre is being powered by nearly three dozen on-site gas turbines that are poisoning the air of nearby majority-Black neighborhoods in Memphis - went on the Joe Rogan podcast
Christ, who even reads this stuff. This constant palavering is genuinely too much.
It's obvious what electricity and mass production can do to improve the prosperity and happiness of a given society. It's not so obvious to me what benefits we'd be missing out on if we just canceled LLMs at this point.
Full disclosure: I think protein folding and DNA prediction could quite possibly be the biggest advancements in medicine, ever. And still, all the critiques of LLMs being janky and not nearly sufficient to be generally intelligent are true.
So yes, I think it’s absolutely on the scale of electrification.
When people were dying of hunger then being able to create more food was obviously a huge win. Likewise for creating light where people used to live in darkness.
But contemporary technologies solve non-problems and take us closer to a future no one asked for, when all we want is cheaper rent, cheaper healthcare, and less corruption.
You also didn't address my point that those technologies do nothing to solve the real problems that real people want solved. There's a strong possibility that they'll just exacerbate them.
The luddites were a labor movement protesting how a new technology was used by mill owners to attack collective worker power in exchange for producing a worse product. Their movement failed but they were right to fight it. The lesson we should take from them isn't to give up in the face of destabilizing technological change.
Hard to say. They sort of represented the specialist class being undermined by technology de-specializing their skillset. This is in contrast to labor strikes and riots which were executed by unskilled labor finding solidarity to tell machine owners "your machine is neat but unless you meet our demands, you'll be running around trying to operate it alone." Luddites weren't unions; they were guilds.
One was an attempt to maintain a status quo that was comfortable for some and kept products expensive, the other was a demand that given the convenience afforded by automation, the fruits of that convenience be diffused through everyone involved, not consolidated in the hands of the machine owners.
This was exactly what the historical Luddite movement was trying to achieve. The industrialists responded with "lol no". Then came the breaking of machines.
Unionization and collective action does work, it's why we have things like the concept of the weekend. It's also generally useful when advocating change to have a more extreme faction.
But the Luddites themselves “were totally fine with machines,” says Kevin Binfield, editor of the 2004 collection Writings of the Luddites. They confined their attacks to manufacturers who used machines in what they called “a fraudulent and deceitful manner” to get around standard labor practices. “They just wanted machines that made high-quality goods,” says Binfield, “and they wanted these machines to be run by workers who had gone through an apprenticeship and got paid decent wages. Those were their only concerns.”
https://www.smithsonianmag.com/history/what-the-luddites-rea...
Okay, so where are those? Where are even the proposals for those?
What would you propose? What do you think is fair distribution of these gains?
The telemetric enclosure movement and its consequences have been a disaster for humanity, and advancements in technology are now doing more harm than good. Life expectancy is dropping for the first time in ages, and the generational gains in life expectancy had a lot of inertia behind them. That's all gone now.
Industrialisation itself, although it increased material output, decimated the lives and spirits of those who worked in factories.
And the printing press led to the Reformation and the thirty years war, one of the most devastating wars ever.
There were people whose entire identities were tied to being able to manually copy a book.
Just imagine how much they seethed as printing press was popularized.
https://academic.oup.com/book/27066?login=false
Seems the scribes kept going for a good hundred years or so, doing all the premium and arty publications.
Consider we still place particular value on products which are “artisanal” or “hand crafted.”
> The Luddite movement began in Nottingham, England, and spread to the North West and Yorkshire between 1811 and 1816.[4] Mill and factory owners took to shooting protesters and eventually the movement was suppressed by legal and military force, which included execution and penal transportation of accused and convicted Luddites.
All these arguments could be made for, say, news media, or social media.
AI being singled out is a bit disingenuous.
If it is dehumanizing, it is because our collective labor, culture, and knowledge base have concerted to make it so.
I guess, people should really think of it this way: A database is garbage in, garbage out, but you shouldn't blame the database for the data.
Gonna have to disagree there. A lot of models are being used to reallocate cognitive burden.
A PhD-level biologist with access to the models we can envision in the future will probably be exponentially more valuable than entire bio startups are today. This is because s/he will be using the model to reallocate cognitive burden.
At the same time, I'm not naive. I know that there will be many, many non-PhD-level biologist wannabes who attempt to use models to remove cognitive burden entirely. But what they will discover is that they are unable to hold a candle to the domain expert reallocating cognitive burden.
Models don't cause cognitive decline. They make cognitive labor exponentially more valuable than it is today. With the problem being that it creates an even more extreme "winner take all" economic environment that a growing population has to live in. What happens when a startup really only needs a few business types and a small team of domain experts? Today, a successful startup might be hundreds of jobs. What happens when it's just a couple dozen? Or not even a dozen? (Other than the founders and investors capturing even more wealth than they do presently.)
Yet, if we trust all these VC-backed AI startups and assume that it will continue growing rapidly, e.g. at least linearly, over the next years, I'm afraid it may indeed reach a superhuman _intelligence_ level (let's say p99 or maybe even p999 of the population) in most areas. And then why do you need this top-notch smart-ass human biologist if you can as well buy a few racks of TPUs?
If you can’t ask the right questions, like everyone without a phd in biology, you’re kind of out of luck. The superhuman intelligence will just spin forever trying to figure out what you’re talking about.
Read Neil Postman or Daniel Boorstin or Marshall McLuhan or Sherry Turkle. The medium is the message.
>What happens to people's monthly premiums when a US health insurance company's AI finds a correlation between high asthma rates and home addresses in a certain Memphis zip code? In the tradition of skull-measuring eugenicists, AI provides a way to naturalize and reinforce existing social hierarchies, and automates their reproduction.
This sentence is about how AI may be able to more effectively apply the current values of society as opposed to the author's own values. It also fails to recognize that for things like insurance there are incentives to reduce bias to avoid mispricing policies.
>The logical answer is that they want an excuse to fire workers, and don't care about the quality of work being done.
This sentence shows that the author perceives that AI may harm workers. Harming workers appears to be against her values.
>This doesn't inescapably lead to a technological totalitarianism. But adopting these systems clearly hands a lot of power to whoever builds, controls, and maintains them. For the most part, that means handing power to a handful of tech oligarchs. To at least some degree, this represents a seizure of the 'means of production' from public sector workers, as well as a reduction in democratic oversight.
>Lastly, it may come as no surprise that so far, AI systems have found their best product-market fit in police and military applications, where short-circuiting people's critical thinking and decision-making processes is incredibly useful, at least for those who want to turn people into unhesitatingly brutal and lethal instruments of authority.
These sentences show that the author values people being able to break the law.
> This sentence is about how AI may be able to more effectively apply the current values of society
*whoooosh*
No, it's about how poor people growing up in polluted regions can be kept poor by the damage being inflicted upon them.
Keeping a permanent, poor, hereditary underclass is not a "current value" of society at large.