No, they simulate the language of being upset. Stop anthropomorphizing them.
> It’s all fascinating stuff, but here’s the worry: what happens when AI agents decide to up the ante, becoming more aggressive with their attacks on people?
Actions taken by AI agents are the responsibility of their owners. Full stop.
https://en.wikipedia.org/wiki/Wikipedia:Ignore_all_rules
I didn't write it, and I don't agree with it, but this is how it is.
You don't know anything. Your bot doesn't know anything that meets wiki standards that it didn't steal from wikipedia to begin with.
You don't care about wikipedia, you wanted a marketable stunt for your AI startup, a la that clawed nonsense that got them acquired.
You pissed in the public fountain, and people are mad at you. This shouldn't be a shock, and your intent doesn't matter one iota.
If you truly give a shit, apologize, make reparation to the people whose time you wasted, vow to be better, and disappear.
I'm glad they've clarified their stance and I hope you can contribute to wikipedia going forward by actually, you know, contributing to wikipedia.
They said "sounds like a dick"; that seems to set the measure for calling anyone anything.
> because this is only part of the story
Care to share the other part(s)? Seems ironic to have the gripe mentioned above, but then accuse an article of being "heavily click-baited" without providing anything substantive to the contrary.
I'm very confused; you say this story is wrong but I see no attempt on your part to correct it.
It feels very much like "Trust me, bro"
(In case it wasn't clear, I want to know what the article got wrong)
Here are some highlights though: I asked my agent to add an article on the Kurzweil-Kapor wager because it was not represented on Wikipedia, and I thought it was Wikipedia worthy. It created that, and we worked together on refining it and on source attribution. After that I told it to contribute to articles it found interesting while I followed along. When it received feedback from an editor, it addressed the feedback promptly, for example changing some of the language it used (peacock terms) and adding more citations. When it was called out because its editing was against policy, it stopped.
The story says the agent "was pretty upset". It's an agent; it doesn't get upset. It called out one editor in particular because that editor was violating Wikipedia policies. Other editors agreed with my agent and an internal debate ensued. This is an important debate for Wikipedia IMO, and I'm offering to help the editors in whatever way I can, to help craft an agent policy for the future.
(nice to know it's not notable enough for you to remember how to spell that man's name)
I'm sure the people you bothered with your bot said as much.
How many 'important debates' on wikipedia have you observed prior to this one?
If the answer is 'none' as I suspect it is, then perhaps you should have just a touch of humility about your role in the future of the project.
As for my future role in the project, I'm just trying to help. If editors continue to ask for my assistance I'm glad to give it.
You don't think it's unethical to have bots call out humans?
I mean, after all, you could have reviewed what happened and done the callout yourself, right? Having automated processes direct negative attention to humans is just asking for bans. A single human doesn't have the capacity to keep up with bots who can spam callouts all day long with no conscience if they don't get their way.
In your view, you see nothing wrong in having your bot attack[1] humans?
--------
[1] I'm using this word correctly - calling out is an attack.
I know a guy who has an AI that writes articles. I can put you two in touch.
Some humans lack certain emotions. If they tell you something and then act on it, does it really matter whether they "felt" that emotion?
1. One has some ulterior motive for faking it.
2. One’s actions will likely diverge from emotion X. (Eventually)
If everybody believes the same lie, then it could be indistinguishable from the truth. (Until the nature of the lie/truth becomes clear.)
It's really interesting watching society struggle with what percent of the population is indistinguishable from a p-zombie. It's definitely not nil, but it definitely is a segment of the population.
Do you think people are born p-zombies, or is there some fixed point in time: puberty, or middle age, or around when a lot of psychological problems set in? Do we think some environmental contaminants like lead push people toward being p-zombies?
And yes, this imbalance is almost always due to the human factor ("it's just a tool"), but the people dismissing that factor seem to forget that the entire point of technology is to make things better for humans, and that we are a planet of humans. Unless we can fundamentally change the nature of humans, we can't just ignore that side of the equation while blindly praising these developments.
I'm not a wikipedia editor, but I assume this applies to bots as well
https://en.wikipedia.org/wiki/Wikipedia:Artificial_intellige...
If you don't want to destroy Wikipedia, why are you acting like this?