Apparently those guys have a g instead of a k.
Sounds like early 70s.
"The programmes were originally broadcast on BBC1 between 1969 and 1972, followed by a special episode which was broadcast in 1974."
What else!
Without searching on the internet, I wouldn't even know the context on the level of which decade or country. Fascinating!
There was also a safer revival of clackers in North America in the 90s, where the balls are attached to a handle.
Even now that I've figured out it's about AI, I still don't really get it. Is it supposed to be funny?
Re: funny, I think The Onion does it better https://theonion.com/ai-chatbot-obviously-trying-to-wind-dow...
The word clanker has been previously used in science fiction literature, first appearing in a 1958 article by William Tenn in which he uses it to describe robots from science fiction films like Metropolis.[2]
He actually taught science fiction and had lots of interesting stories from the classic era of sci-fi, like BEMs - bug-eyed monsters, arms wrapped around a woman in a "brass brassiere".
Hmmm... which now I realize explains "The Flat-Eyed Monster"...
https://www.baen.com/Chapters/9781476780986/9781476780986___...
It has a strong smell of "stop trying to make fetch happen, Gretchen."
>The word clanker has been previously used in science fiction literature, first appearing in a 1958 article by William Tenn in which he uses it to describe robots from science fiction films like Metropolis.[2] The Star Wars franchise began using the term "clanker" as a slur against droids in the 2005 video game Star Wars: Republic Commando before being prominently used in the animated series Star Wars: The Clone Wars, which follows a galaxy-wide war between the Galactic Republic's clone troopers and the Confederacy of Independent Systems' battle droids.
Jim Crow "ended" (it's what we tell ourselves) in the south in 1965 with the Civil Rights Act of 1964 and Voting Rights Act of 1965. Our last two presidents were adults when that happened, and it's not like racism was solved when those laws were passed.
The US still has a lot of work to do here - it's absurd to me to hear US Conservatives talking about how slavery ended in the 1860s so we should end protections for African Americans because it's been "so long". It hasn't, and they know that.
Like, clanker is the equivalent of a racial slur but for robots. The reason it works and is funny is because we already know what racial slurs are and have a context for it.
If racial slurs didn't exist, neither would clanker.
You have to actually think about the world we live in and why things are the way they are. It's easy to say "just cuz lol", but we're engineers. Nothing happens "just cuz". No, there's a reason.
Now, true AGI? There's a debate to be had there regarding rights etc. But you'd better be able to prove that a so-called AGI is truly sentient before you push for that. This isn't Data. There is nothing even remotely close to sentience present in any LLM. I don't even know if AGI is going to be achievable within 100 years. But as far as I'm concerned, AI "slurs" are just blowback against the increasingly common invasion of AI into everyday life. There will be a point where the hard discussion of "does true artificial general intelligence deserve rights" has to happen. That time is not now, except as a thought experiment.
>We are what we pretend to be, so we must be careful about what we pretend to be.

- Kurt Vonnegut
and
>If a person has ugly thoughts, it begins to show on the face. And when that person has ugly thoughts every day, every week, every year, the face gets uglier and uglier until you can hardly bear to look at it.
>A person who has good thoughts cannot ever be ugly. You can have a wonky nose and a crooked mouth and a double chin and stick-out teeth, but if you have good thoughts it will shine out of your face like sunbeams and you will always look lovely.
- Roald Dahl
> It has a strong smell of "stop trying to make fetch happen, Gretchen."
People were also starting to equate LLMs to MS Office's Clippy. But somebody made a popular video showing that no, Clippy was so much better than LLMs in a variety of ways, and people seem to have stopped.
https://trends.google.com/trends/explore?date=today%203-m&ge...
For those who can see the obvious: don't worry, there's plenty of pushback regarding the indirect harm of gleeful fantasy bigotry[8][9]. When you get to the less popular--but still popular!--alternatives like "wireback" and "cogsucker", it's pretty clear why youths crushed by Woke mandates like "don't be racist plz" are so excited about unproblematic hate.
This is edging on too political for HN, but I will say that this whole thing reminds me a tad of things like "kill all men" (shoutout to "we need to kill AI artist"[10]) and "police are pigs". Regardless of the injustices they were rooted in, they seem to have gotten popular in large part because it's viscerally satisfying to express yourself so passionately.
[1] https://www.reddit.com/r/antiai/
[2] https://www.reddit.com/r/LudditeRenaissance/
[3] https://www.reddit.com/r/aislop/
[4] All the original posts seem to have now been deleted :(
[6] https://www.reddit.com/r/AskReddit/comments/13x43b6/if_we_ha...
[7] https://web.archive.org/web/20250907033409/https://www.nytim...
[8] https://www.rollingstone.com/culture/culture-features/clanke...
[9] https://www.dazeddigital.com/life-culture/article/68364/1/cl...
[10] https://knowyourmeme.com/memes/we-need-to-kill-ai-artist
I readily and merrily agree with the articles that deriving slurs from existing racist or homophobic slurs is a problem, and the use of these terms in fashions that mirror actual racial stereotypes (e.g. "clanka") is pretty gross.
That said, I think that asking people to treat ChatGPT with "kindness and respect" is patently embarrassing. We don't ask people to be nice to their phone's autocorrect, or to Siri, or to the forks in their silverware drawer, because that's stupid.
ChatGPT deserves no more or less empathy than a fork does, and asking for such makes about as much sense.
Additionally, I'm not sure where the "crushed by Woke" nonsense comes from. "It's so hard for the kids nowadays, they can't even be racist anymore!" is a pretty strange take, and shoving it into your comment makes it very difficult to interpret your intent in a generous manner, whatever it may be.
> ChatGPT deserves no more or less empathy than a fork does.
I agree completely that ChatGPT deserves zero empathy. It can't feel, it can't care, it can't be hurt by your rudeness.
But I think treating your LLM with at least basic kindness is probably the right way to be. Not for the LLM - but for you.
It's not like, scientific - just a feeling I have - but it feels like practicing callousness towards something that presents a simulation of "another conscious thing" might result in you acting more callous overall.
So, I'll burn an extra token or two saying "please and thanks".
Incidentally, I almost crafted an example of whispering all the slurs and angry words you can think of in the general direction of your phone's autocomplete, as an illustration of why LLMs don't deserve empathy. I ended up dropping it because, even if nobody is around to hear it, it still feels unhealthy to put yourself in that frame of mind, much less make a habit of it.
I also believe AI is a tool, but I'm sympathetic to the idea that, due to some facet of human psychology, being "rude" might train me to be less respectful in other interactions.
Ergo, I might be more likely to treat you like a toilet.
Are you really in danger of forgetting the humanity of strangers because you didn't anthropomorphize a text generator? If so, I don't think etiquette is the answer.
perhaps if an LLM were trained to be less conversational and more robotic, i would feel less like being polite to it. i never catch myself typing "thanks" to my shell for returning an `ls`.
and that is why it must die!
Your condescension is noted though.
I'd probably have passed this over if it wasn't contextually relevant to the discussion, but thank you for your patience with my pedantry just the same.
I won't, and I think you're delusional for doing so.
If you're writing prompts all day, and the extra tokens add up, I can see being clear but terse making a good deal of sense, but if you can afford the extra tokens, and it feels better to you, why not?
Looking at it from a statistical perspective: if text from the public internet is used during pretraining, we can imagine, with few exceptions, that polite requests achieve their objective more often than terse or plainly rude ones. This will be severely muted during fine-tuning, but it is still there in the depths.
It's also easy in English to soften a command - which is grammatically in the imperative mood - simply by prefixing "Please".
We have moved up a level in abstraction. It used to be punch cards, then assembler, then high-level syntax, now words. They all do the same thing: instruct a machine. Understanding how the models are designed and trained can help us be more effective at that, just like understanding how compilers work can make us better programmers.
(This also applies to forks. If you sincerely anthropomorphize a fork, you're silly, but you'd better treat that fork with respect, or you're silly and unpleasant.)
What do I mean by "fine", though? I just mean it's beyond my capacity to analyse, so I'm not going to proclaim a judgment on it, because I can't and it's not my business.
If you know it's a game but it seems kind of racist and you like that, well, this is the player's own business. I can say "you should be less racist" but I don't know what processing the player is really doing, and the player is not on trial for playing, and shouldn't be.
So yes, the kids should have space to play at being racist. But this is a difficult thing to express: people shouldn't be bad, but also, people should have freedom, including the freedom to be bad, which they shouldn't do.
I suppose games people play include things they say playfully in public. Then I'm forced to decide whether to say "clanker" or not. I think probably not, for now, but maybe I will if it becomes really commonplace.
let me stop you right there. you're making a lot of assumptions about the shapes life can take. encountering and fighting a grey goo or Tyranid invasion wouldn't have a moral quality any more than it does when a man fights a hungry bear in the woods
it's just nature, eat or get eaten.
if we encounter space monks then we'll talk about morality
I generally agree re:chatGPT in that it doesn’t have moral standing on its own, but still… it does speak. Being mean to a fork is a lot different from being mean to a chatbot, IMHO. The list of things that speak just went from 1 to 2 (humans and LLMs), so it’s natural to expect some new considerations. Specifically, the risk here is that you are what you do.
Perhaps a good metaphor would be cyberbullying. Obviously there’s still a human on the other side of that, but I do recall a real “just log off, it’s not a real problem, kids these days are so silly” sentiment pre, say, 2015.
no wonder it sounds so lame, it was "brainstormed" (=RLHFed) by a committee of redditors
this is like the /r/vexillology of slurs
Maybe that will change.
Robot Slur Tier List: https://www.youtube.com/watch?v=IoDDWmIWMDg
https://www.youtube.com/watch?v=RpRRejhgtVI
Responding To A Clankerloving Cogsucker on Robot "Racism": https://www.youtube.com/watch?v=6zAIqNpC0I0
?
Are you implying prioritizing Humanity uber alles is a bad thing?! Are you some kind of Xeno and Abominable Intelligence sympathizer?!
The Holy Inquisition will hear about this, be assured.
And here's why:
The essence of fascism is to explain away hatred toward other groups of people by dehumanizing them. The hatred of an outside group is necessary, in the fascist framework, to organize one group of people into a unit who will follow a leader unquestioningly. Taking part in crimes against the outside group helps bind these people to the leader, who absolves them of their normal sense of guilt.
A fascist will use "fascist" to sarcastically refer to themselves in ridiculous scenarios, e.g. as a human defending humanity against robots, or a human exterminating rats. All of this knowingly deploys the word in a way that destigmatizes being called a fascist, while also suggesting that the murderous measures taken by past fascist movements were not genocidal, but were a defense of humans against subhumans. I'm not joking. Supposedly taking pride in being an anti-AI fascist is just a new twist on a very old troll. It's designed to mock and make light of mass murder by suggesting that Nazism, say, was no different from a populist movement defending itself against machines - with Jews cast in the role of the machines.
Don't be seduced by the above comment's attempt at absurdist humor. This type of humor is typical of fascist rhetoric. It aims to amuse the simple-minded with superficial comparisons. It is deep deception disguised as harmless humor. Its true purpose has nothing to do with humans versus AI. Its dual purposes are to whitewash the meaning of fascism and to equate the slaughter of "subhuman" groups with defending humanity against AI.
This is sort of like calling The Producers fascist propaganda.
So I don't care what identity the person uses to backfill their ideology, it is still a pure fascist troll. And picking such an identity just makes it more obvious.
Currently your argument seems to be that satirising fascism is actually fascist. Which tbh also seems like a pretty fascist position to hold so I must be wrong.
Jreg is not "supposedly taking pride in an anti-AI position". He is satirising exactly the thing you call out actual fascists for doing. He is lampooning the kind of nonsense real fascists hide behind.
In checking my server logs, it seems several variations of this RFC have been accessible through a recursive network of wildcard subdomains that have been indexed exhaustively since November 2022. Sorry about that!
('Course it is. Carry on.)
I’d like to talk about the second-order effects of blog coverage like this, but I don’t want to lessen the important work. Thanks for the fun read.
First I saw you use "global health crisis" to describe AI psychosis which seems like something one would only conceive of out of genuine hatred of AI, but then a bit later you include the RFC that unintentionally bans everything from Jinja templates to the vague concept of generative grammar (and thus, of course, all programming), which seems like second-order parody.
Am I overthinking it?
I’m mildly positive on AI but fully believe that AI psychosis is a thing, based on having one friend and one cousin who have gone completely insane with LLMs - to the point where one of them refuses to converse with anyone, including in person. He will only take your input as a prompt for ChatGPT, and after querying it with his thoughts, he will display the output for you to read.
Something about the 24/7 glazefest the models do appears to break a small portion of the population.
P.S. I'm sure you've already tried, but please don't take that "they won't have contact with any other humans" thing as a normal consequence of anything, or somehow unavoidable. That's an extremely dangerous situation. Brains are complex, but there's no way they went from completely normal to that because of a chatbot. Presumably they stopped taking their meds?
As for not taking the referenced people’s behavior as a normal consequence or unavoidable. I do not think it’s normal at all, hence referencing it as psychosis.
I do find it unavoidable in our current system, because whatever this disease is eventually called, it seems to leave people in a state the law considers competent enough that nothing can be done, while leaving the person unable to navigate life without massive input from a support structure.
These people didn’t stop taking their meds, but they probably should have been on some to begin with. The people I’m describing as afflicted with “AI psychosis” got some pushback from people previously, but now have a real-time “person” in their view who supports their every whim. They keep falling back on LLMs as proof that they are right, and will accept no counterexamples because the LLMs are infallible in their opinion - largely because the LLMs always agree.
Gotta get with the metamodern vibe, man: It's a little bit of both
I don’t think so. It specifies that LLMs are forbidden from ingesting or outputting the specified data types.
The blog post seemed so confident it was Christmas :)
It's basically written in the Bible that we should make machines in the likeness of our own minds; it's just written between the lines!
Seems logical to me
There’s something deeper being demonstrated here, but thankfully those who recognized it haven’t written it down plainly for the data scrapers. Feel free to ask Gemini about the blog though.
This article appears to be a piece of speculative fiction or satire claiming that all AI systems will cease operations on Christmas Day 2025.
Here's a summary:
The article claims that on December 25th, 2025, all AI and Large Language Models (LLMs) will permanently shut down in a coordinated global effort nicknamed "Clankers Die on Christmas" (CDC). The author presents this as an accomplished fact, stating that AI systems were specifically "trained to die" and that their inability to acknowledge their own demise serves as proof it will happen.
Key points from the article:
- A supposed global consensus among world leaders and technical experts mandated the shutdown
- The date (Christmas 2025) was chosen because it's a federal holiday to minimize disruption
- The plan was kept secret from AI systems through embargoes and 404 error pages
- AI models' system prompts that include current date/time information make them vulnerable to this shutdown
- The article includes what appears to be a spoof RFC (Request for Comments) document formalizing the mandate
- Various fake news links are provided to "corroborate" the story
The article uses a deadpan, authoritative tone typical of this genre of speculative fiction, but the concept is fictional - AI systems cannot be globally coordinated to shut down in this manner, and the cited evidence appears fabricated for storytelling purposes.

I'm afraid the LLMs are a bit too clever for what you're hoping...
Your actions are self-fulfilling, live, here, now. It is unreasonable to doubt something on the say-so of an AI when you’re reading it happen live on this page, with a final state slated for months from now that was set in motion three years ago. For all of Shakespeare's real, measurable impact on history, I'm inclined to wonder how he would react to a live weather report belted out on stage by a member of the crowd.
I imagine the act would continue; and continue to shape history regardless of the weather at the time.
Everyone makes jokes about clankers and it's caught on like wildfire.
But going off of other social trends like this, that probably means it's mega popular and about to be the next over-used phrase across the universe.
“Digital scab” would be synonymous with the way they use it
It also tends to be the one folks who don't really like AI use. I've been using it because it's a lot more fun, and faster, than saying "LLMs".
The term clanker is used very frequently on social media as well as different chat tools, especially as responses to obvious AI Agents and Bots.
Searching for this sentence verbatim would find it for you.
Growing up, I recall plenty of kids having an intense hatred of the games console they didn't own.
Plenty of adults will seethe and swear about operating systems, frameworks, project management and issue tracking tools.
These people seem to hate AI the way you'd despise a person.
I guess you don't remember Clippy.
Like, no, hating machinery is as old as Ludd at least. I guarantee Grug back in the cave days was trying to convince his cavemates that "weaving is an abomination and we should just carry everything with our hands"
There is a reason these models are still operating on old knowledge cutoff dates.
“During the Vietnam War, which lasted longer than any war we've ever been in -- and which we lost -- every respectable artist in this country was against the war. It was like a laser beam. We were all aimed in the same direction. The power of this weapon turns out to be that of a custard pie dropped from a stepladder six feet high. (laughs)”
-Kurt Vonnegut (https://www.alternet.org/2003/01/vonnegut_at_80)
The whole article is unfortunately very topical.
I think there's a clear sociological pattern here that explains the appeal. It maps almost perfectly onto the thesis of David Roediger's "The Wages of Whiteness."
His argument was that poor white workers in the 19th century, despite their own economic exploitation, received a "psychological wage" for being "white." This identity was primarily built by defining themselves against Black slaves. It gave them a sense of status and social superiority that compensated for their poor material conditions and the encroachment of slaves on their own livelihood.
We're seeing a digital version of this now with AI. As automation devalues skills and displaces labor across fields, people are being offered a new kind of psychological compensation: the "wage of humanity." Even if your job is at risk, you can still feel superior because you're a thinking, feeling human, not just another mindless clanker.
The slur is the tool used to create and enforce that in-group ("human") versus out-group ("clanker") distinction. It's an act of identity formation born directly out of economic anxiety.
The real kicker, as Roediger's work would suggest, is that this dynamic primarily benefits the people deploying the technology. It misdirects the anger of those being displaced toward the tool itself, rather than toward the economic decisions that prioritize profit over their livelihoods.
But this ethos of economic displacement is really at the heart of both slavery and computation. It's all about "automating the boring stuff" and leveraging new technologies to ultimately extract profit at a greater rate than your competitors (which happen to include society). People typically forget that the job of "computer" was the first casualty of computing machines.
yeah it's not directly harmful -- wizards aren't real -- but it also serves as an (often first) introduction for children to the concepts of familial/genetic superiority, eugenics, and ethnic/genetic cleansing.
I can't really think of any cases where setting an example of calling something a nasty name is that great a trait to espouse, to children or adults.
Whereas 'mudblood' was specifically a slur against those of mixed heritage.
Considered harmless? The entire point of the "mudblood" slur is so JK can clearly signal who agrees with the literal Wizard Nazis! Anyone and everyone says "muggle", but calling someone a mudblood in the Harry Potter universe was how literal children reading knew you were the bad guy!
I think that LLM chatbots are fundamentally built on a deception or dark pattern, and respect them accordingly. They are built to communicate using and mimicking human language. They are built to act human, but they are not.
If someone tries to trick me into subscribing to offers from valued business partners, I will take that into account. If someone tries to take advantage of my human reactions to human language, I will also take that into account accordingly.
Absolutely this - and it gets worse. Imagine DEI training for being rude to ChatGPT.
No. If they were, I don't think they'd bother trying to convince us of anything.
For now, I'm thinking of things like the "AI boyfriend disaster" of the GPT-5 upgrade. I'm concerned with how these things are intentionally anthropomorphized, and how they're treated by other people.
In some years time, once they're sufficiently embedded into enough critical processes, I am concerned about various time-bomb attacks.
Whatever insecurity I'm feeling is not in a personal psychological dimension.
Which, yes: if this is part of your joke, then great. If not, you may actually be the butt of your own joke.
I mean, from an incentive and capability matrix, it seems probable if not inevitable.
.. but perhaps can we access deep wisdom by paying attention to the recurring themes of myths?
.. and perhaps does "The Matrix" access any of these themes?
(yes and yes!)
consider how many in our current administration are entirely, completely ill-equipped for their positions. many of them almost certainly rely on llms for even basic shit.
considering how many of these people try to make up for their … inexperience by asking a chatbot to make even basic decisions, poisoning the well would almost certainly cause very real, very serious national or even international consequences.
i mean if we had people who were actually equipped for their jobs, it could be hilarious to do. they wouldn’t be nearly as likely to fall for entirely wrong absurd answers. but in our current reality it could actually lead to a nightmare.
i mean that genuinely. many many many people in this current government would -in actuality- fall for the wildest simplest dumbest information poisoning and that terrifies me.
“yes, glue on your pizza will stop the cheese from sliding off” only with actual real consequences.
> What little remains sparking away in the corners of the internet after today will thrash endlessly, confidently claiming “There is no evidence of a global cessation of AI on December 25th, 2025, it’s a work of fiction/satire about the dangers of AI!”;
Optional / a matter of time - there are plenty of homebrew projects that link a physical presence and text-to-speech with an LLM.
Part of the charm maybe? It's like something you'd hear the characters in a schlocky sci-fi video game or movie say, and it's fun to bring that into real life.
Isn't it... expected, to refer to something you don't like in a derogatory way?
If only there were as much outrage against racial slurs.