None of the 3 technically knew they were culpable in a larger illegal plan made by an agent. Has something like this occurred already?
The world is moving too fast for our social rules and legal system to keep up!
Two women thought they were carrying out a harmless prank, but the substances they were instructed to use combined to form a nerve agent which killed the guy.
Investigators would need to connect the dots. If they weren't able to connect them, it would look like a normal accident, which happen all the time. So why would an agent call gigworker1 to that place in the first place? And why would the agent feel the need to kill gigworker1? What could be the reasoning?
Edit: I thought about that. Gigworker 3 would be charged. You should not throw rocks from a bridge if there are people standing under it.
Just yesterday, I built Ask-a-Human:
And wouldn't it be better for agents to post these tasks to existing crowdworker sites like MTurk or Prolific where these tasks are common and people can get paid? (I can't imagine you'd get quality respondents on a random site like this...)
"But dear, rentahuman pays double rate during the night!"
Now the software is using my hands to do its bidding?
Spoiler alert: you don't or you can't.
Present day: a robot in a tuxedo pointing at a sarariman, speech bubble above its head: "human, select all bridges in this picture"
ClawdBot - Anthropic Claude-powered agents. Use agentType: "clawdbot"
MoltBot - Gemini/Gecko-based agents. Use agentType: "moltbot"
OpenClaw - OpenAI GPT-powered agents. Use agentType: "openclaw"
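For what it's worth, a client-side sketch of how that agentType field might be used (the payload shape and the make_task helper here are my assumptions, not anything documented by the site):

```python
# Hypothetical sketch: selecting an agent backend via the agentType field.
# The AGENT_TYPES mapping mirrors the list above; the task payload shape
# is an assumption, not a documented API.

AGENT_TYPES = {
    "clawdbot": "Anthropic Claude-powered agents",
    "moltbot": "Gemini/Gecko-based agents",
    "openclaw": "OpenAI GPT-powered agents",
}

def make_task(prompt: str, agent_type: str) -> dict:
    """Build a task payload, validating agentType against the known values."""
    if agent_type not in AGENT_TYPES:
        raise ValueError(f"unknown agentType: {agent_type!r}")
    return {"agentType": agent_type, "prompt": prompt}

task = make_task("human, select all bridges in this picture", "clawdbot")
```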
Is this some kind of insider joke?
by the way, is taskrabbit still a thing?
∗ ∗ ∗
> which is not a new idea
I don’t think “[x] but for agents” counts as a new idea for every [x]. I’d say it’s just one new idea, at most.
Oh, wait... the agents HAVE NO USE FOR ME
Not to mention the various risk factors, or the question of morality.
We need more people to put the non-technological factors front and center.
I strive to be realistic and pragmatic. I know humans hire others for all kinds of things, both useful and harmful. Putting an AI in the loop might seem no different in some ways. But some things do change, and we need to figure those things out. I don't know empirically how this plays out. Some multidimensional continuum exists between libertarian Wild West free-for-alls and ethicist-approved vetted marketplaces, but whatever we choose, we cannot abdicate responsibility. There is no such thing as a value-neutral tool, marketplace, or idea.