I'd imagine it's not easy to come up with a set of rules that doesn't incentivize users to write to cater to the AI upvoting bot rather than to say what they genuinely think.
That said, users on Reddit already often write for upvotes rather than for how they truly feel.
A perfect example: ask Reddit how to get a nice girlfriend, and the most upvoted reply will always be something like "just be yourself, be confident". In reality, the #1 thing is to be attractive and have money.
I think it defeats the whole purpose and value of voting.
Voting is the process of making collective decisions by submitting and then tallying individual choices.
But if we ignore semantics for a moment, yours is a testable hypothesis.
> reward originality, clarity, kindness, strong evidence, or creative thinking, and to downvote low effort posts, repetition, hostility, or bad faith arguments.
However, I think there are better ways to improve contributions than taking away other humans' ability to pass explicit judgment on someone's post without also having to write something.
For instance, perhaps the UI where you compose your post could do real-time evaluation and suggest improvements (e.g. pointing out snark, personal attacks, etc.). That gives the poster the opportunity to make a different decision about what to write.
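Roughly what I have in mind, as a minimal sketch. Nothing here is a real API: `call_llm()` is a hypothetical stand-in for whatever model endpoint the board would actually use, and the rubric wording is just an example.

```python
# Minimal sketch of draft-time feedback. call_llm() is hypothetical --
# a stand-in for whatever model API the board uses, not a real library call.

RUBRIC = (
    "You review draft comments for a discussion board. "
    "Point out snark, personal attacks, or unsupported claims, and "
    "suggest a more constructive phrasing for each. "
    "If the draft is fine, reply with exactly: OK"
)

def call_llm(system_prompt: str, user_text: str) -> str:
    """Hypothetical wrapper around the board's model endpoint."""
    raise NotImplementedError

def review_draft(draft: str) -> str:
    """Run (debounced) while the user types; the output is advisory only.

    The poster sees the feedback next to the compose box and decides
    whether to rephrase -- nothing is scored or blocked automatically.
    """
    return call_llm(RUBRIC, draft)
```

The key point is that the feedback stays advisory: the human still decides what to post, and other humans still decide how to vote.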
One trap with your model worth considering: if the AI gets things wrong (e.g. gives you a negative count because it thinks you're unkind, or that your rebuttal isn't sufficiently substantiated), participants will find it very frustrating, and they will blame the board rather than other users (who are free to disagree).
Just like on Facebook, Twitter, TikTok and all the other cancers, this would mean that the company or the people in control of the AI get to decide what people see. I don't think transparency would make that much better, as it does not change the fundamental power structure; besides, foundation models are non-deterministic and trained on closed data, which rules out full transparency anyway. Public prompts would, however, make it very easy to game the system: only content written specifically to satisfy the metrics given to the model would get through, i.e. purpose-built spam and slop.
> People learn how to write for the AI rather than reminding other humans to “read the rules.”
Yeah, I don't want to write for some AI. I want to write for other real humans.
> There would be no user voting at all. Instead, one AI handles every upvote and downvote according to guidance written by the subreddit moderator(s).
I want mods to have less power, not more. I am interested in how we can achieve that as well as possible without destructive forces like traditional spam, trolling, and the new AI-slop spam ruining everything.