To use an extreme example, you'd have wanted your model to offer Warren Buffett the base price, or even a deal.
> What really interests Cian, who has published research[1] exploring how audiences tend to have less trust in media outlets that are transparent about their AI use, is the fact that the Post disclosed its use of algorithmic pricing at all. “If you ask people [whether they] want transparency on what’s behind your pricing strategy, people say ‘yes,'” he says. “But what we found in my research is a paradox, in the sense that people think that they want to know, but once they know, the reaction is worse than not knowing.”
> [1] https://ideas.darden.virginia.edu/AI-disclosure-dilemma
It's as though you caught a thief rifling through your pockets and they just looked you in the eye and said, "You caught me. I'm not stopping. What are you going to do about it, chump?"
How's that "I have nothing to hide" working out?
Part of me likes the idea of the price being set by how much you read, but the increasing amount of clickbait would probably ruin such a scheme. I used to look at Google's news feed on my phone at times--lots of stuff that was clearly ads pretending to be articles, but you could mostly pick out the genuinely interesting pieces. Now, though, the "interesting" stuff very often turns out to be AI garbage that doesn't actually deliver what its headline claims.