1. They reward short or under-developed essays. I'd say most online content fits that description, especially highly upvoted posts. Social media certainly does.
2. With longer posts, the system starts nitpicking minor details, like grammar. We see this even on Hacker News, a community that values quality and gets some longer submissions. Nitpicking is also a debate tactic used to derail an opponent's stronger arguments in many of the discussions that end up in the pretraining data.
3. Essays with more praise get higher scores, and those with more criticism get lower scores. That's the "get on the bandwagon" effect. Echo chambers: one person writes something, followed by 5-20 people confirming it. That pattern is probably all over the pretraining data, and it might survive some filtering/cleaning strategies, too.
So, no, I think these AIs are acting all too human. They need to be fine-tuned to act like more reasonable humans. That will initially take RLHF data covering many types of situations. Given pretraining bias, they might also have to be trained out of the bad habits the article mentions.
A long comment online now means one of two things: it's written by a crank with strong opinions that are usually only tangentially related, or by someone with deep knowledge of the subject and a lot of detail to offer. It's usually the former.