My first instinct is to say that when one loses certain trusts society grants, society historically tends to come down hard. A common idea in political discourse today is that no hope for a brighter future means a lot of young people looking to trash the system. So, y'know, treat people with kindness, respect, and dignity, lest the opposite be visited upon you.
Don’t underestimate the anger a stolen future creates.
Sort of like the infamous GameStop short squeeze of 2021:
This seems like the inevitable outcome of our current trajectory for a significant portion of society. All the blather about AI utopia and a workless UBI system supported by the boundless productivity advances ushered in by AI-everything simply has no historical basis. A realistic reading of history points more toward this outcome.
Coincidentally, I've been conceptualizing a TV sitcom that tangentially touches on this idea of humans as real-time inputs an AI agent calls on, though those humans are collective characters never actually portrayed in scenes.
Now that I think about it... I still do think I'm sitting in my chair. Hmm...
I understand that this is not exactly in the spirit of your question, but, well, a tool is just this:
1. Knowing what you don't know
2. Knowing who is likely to know
3. Asking the question in a way that the other human, with their limited attention and context-window, is able to give a meaningful answer
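The three steps above can be sketched as a toy routing function, with the human modeled as a callable "tool". This is purely illustrative; the names (`HumanTool`, `route_question`) and the expertise-matching scheme are my own assumptions, not any real agent framework's API:

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical sketch: a human expert registered as a callable tool.
@dataclass
class HumanTool:
    name: str
    expertise: set[str]                 # topics this person is likely to know
    ask: Callable[[str], str]           # stand-in for actually contacting them

def route_question(question: str, topic: str, experts: list[HumanTool]) -> str:
    # Step 2: figure out who is likely to know.
    candidates = [e for e in experts if topic in e.expertise]
    if not candidates:
        # Step 1: knowing what you don't know (and who you don't know).
        return "no known expert"
    # Step 3: keep the prompt short enough for limited human attention.
    prompt = f"[{topic}] {question[:280]}"
    return candidates[0].ask(prompt)

experts = [HumanTool("alice", {"databases"}, lambda q: f"alice answers: {q}")]
print(route_question("Why is this index slow?", "databases", experts))
```

The interesting part is how little of this is AI-specific: it's the same delegation logic any manager runs in their head.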
The "humans as tools" model only works if the AI layer is centralized and owned by platforms. If inference runs on hardware you control, you're not callable - you're the one calling.
Been thinking about this a lot: https://www.localghost.ai/reckoning
How remarkable to think that humans would be happy to dehumanize each other, at least in language before anything else, at the promise of 'optimising' for something... whatever it is. It could be the 200-year futuristic axe at this point.
Now we have to find the next level and condition the human to pay to respond to questions.
It seems like an idea bad enough to pay $10 to downvote? Or should that be good enough?
The difference I'm curious about is agents being the primary caller, and humans becoming an explicit dependency in an autonomous loop rather than a human-in-the-loop system.
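That inversion of control can be sketched in a few lines: the agent drives the loop and the human is just one blocking dependency it waits on, rather than a supervisor it reports to. Everything here (the queues, the worker, the task strings) is a made-up illustration under that assumption, not a real system:

```python
import queue
import threading

tasks: queue.Queue = queue.Queue()
answers: queue.Queue = queue.Queue()

def human_worker() -> None:
    # Stand-in for a real person polling a task feed; None means "shift over".
    while True:
        task = tasks.get()
        if task is None:
            break
        answers.put(f"human result for: {task}")

def agent() -> list[str]:
    results = []
    for step in ["label image 1", "verify address 2"]:
        tasks.put(step)               # the agent is the caller...
        results.append(answers.get()) # ...and blocks on its human dependency
    tasks.put(None)
    return results

worker = threading.Thread(target=human_worker)
worker.start()
out = agent()
worker.join()
print(out)
```

Note that nothing in the agent's loop distinguishes the human from any other slow, unreliable API, which is exactly the dehumanizing framing being discussed.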
Maybe I write a bot that answers Fiverr requests at the lowest price possible. We can all race to the bottom.