I've also noticed that very obviously LLM-generated comments are called out, and the community tends to agree, but those with any plausible deniability are given far too much leniency; people will over-index on the guidelines to give them the benefit of the doubt.
I don't think a captcha is the solution, though, as it'll degrade conversation by an order of magnitude.
Normally I'd agree, but we have shadowbans, which really irks me.
Almost everyone banned on HN is banned publicly, with a public message explaining why.
I would love for this to be the case; however, I've investigated this phenomenon quite extensively, and it does not match what I've seen. I'd like for us to be better than shadowbans. In some cases I don't even get to vouch; it's just a comment that is banned-banned. It feels worst when they're saying something substantive to the conversation and we have no means to surface the comment.
Some type of annual amnesty consideration or something of that nature is in order, or soon we'll recreate other echo chambers that are slowly fading out.
At some point, no matter what HN does, being comfortable with its moderation requires you to take Dan's word for things. I take his word for it on shadowbans.
Ironically, I'm irritated with moderation in the other direction: ten years of "if you keep breaking the guidelines under alternate accounts, we'll ban your real account" sort of makes my blood boil (people having long-running alts does that too), but I roll with it, because I couldn't do the job better than Dan and Tom do.
This has gaps, as you know, and doesn't wash. Let someone turn over a new leaf. Amnesty puts a stop to this.
I only learned about it after I asked via a non-public channel, with evidence. Otherwise I wouldn't have known, and I suspect most users are unaware. What I cited in my previous comment is also from non-public conversations.
If I'm wrong, and it's documented publicly in the rules or users are notified when it happens to them, I'm happy to be corrected. Link?
I'm still amazed at how Reddit weaponized the block feature.
If you block someone, not only can you not see their posts, but you also ice them out of replying in the rest of the thread.
In the past, “block” used to mean what “mute” means now: hide from me. I believe it was around the time Twitter became popular that the meaning shifted to a bi-directional mute.
I find that the need for a blocking system just points to a broken moderation system, and a broken society at large.
The one thing I like about this place is that it's well moderated and you have opposing viewpoints engaging (mostly) respectfully.
My personal and political views couldn't be further from most HN users' (I'm both a Conservative _and_ a practicing Christian), yet I appreciate taking part in various discussions. I enjoy reading points of view that directly challenge mine.
Let's keep HN respectful and accessible.
But unlike most HN users who label themselves conservative Christians, you've never suggested that climate change is a hoax:
https://hn.algolia.com/?type=all&query=author:swat535+climat...
I don't ever want to consume information from people who are so illiterate that they believe that scientists all over the world, in fields ranging from geoscience to statistics, are participating in some kind of global conspiracy, regardless of how respectful these commenters are. I block these people immediately after they reveal themselves.
$('.hnuser[href="user?id=rd"]').parent().parent().hide()
Though no idea if such a plugin exists for Safari.
In uBlock Origin -> My Filters:
news.ycombinator.com##tr.athing.comtr:has(a.hnuser):has-text(/\bUsername\b/)

I find HN much more tolerable this way.
Blocking domains would be nice too. Like substack or medium. I'm happy to just ignore them, but it sure would be nice to filter them out if possible.
I get that it's complicating the system and keeping it simple is perhaps for the best.
news.ycombinator.com##:matches-path(/^/item\?id=/) tr a.hnuser:has-text(/^dpifke$/):upward(tr)
This mostly works, but it only kills the user's comments and not replies, so it can sometimes be confusing.

news.ycombinator.com##.default:has(a[href="user?id=dpifke"]) .comment

You can run temporary unsigned extensions for development purposes, but they are removed after 24 hours or whenever you quit Safari, which would make using it daily a non-starter.
It’s usually old/high-karma accounts, as they can get away with it more easily. Throwaways that establish themselves for a time do it too, but those are usually dealt with eventually.
Let's discuss how to make this reality.
Do you want a ranking system where the more people downvote someone, the better? If so, how do you prevent spam in that? Do you take metrics like karma, or what exactly?
I don't think a captcha is the solution either, but I also don't know how to feel about removing entire swaths of people. I can imagine someone writing something bad once and ending up on this "black-list".
Another aspect is, once again, the blacklist: do we really need a system that is essentially a communal ban?
The only case where I can see it being reasonable is if there's a bot posting slop comments, but I rarely face this issue. If you do, you can probably create a Tampermonkey script; Tampermonkey works on Chrome and Firefox, and "Userscripts" should work on Safari, and such a script is most likely going to be compatible with both.
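A minimal sketch of such a userscript, in case it helps; the usernames in the blocklist are placeholders, and the `tr.athing` row structure reflects HN's current comment markup:

```javascript
// ==UserScript==
// @name         HN user blocklist (sketch)
// @match        https://news.ycombinator.com/*
// @grant        none
// ==/UserScript==

// Placeholder usernames; replace with the accounts you actually want to hide.
const BLOCKED = new Set(['example_user1', 'example_user2']);

// Pure helper so the matching logic is easy to test in isolation.
function isBlocked(blocked, username) {
  return blocked.has(String(username).trim());
}

// Hide each comment row whose author link (a.hnuser) matches the blocklist.
function hideBlockedComments(doc) {
  for (const link of doc.querySelectorAll('a.hnuser')) {
    if (isBlocked(BLOCKED, link.textContent)) {
      const row = link.closest('tr.athing');
      if (row) row.style.display = 'none';
    }
  }
}

// Only touch the DOM when one exists (i.e. when running in a browser).
if (typeof document !== 'undefined') hideBlockedComments(document);
```

The same logic should port to the Userscripts Safari extension unchanged, since it only uses standard DOM APIs.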
A captcha would make it more of a hassle to post comments.
On the more benign side, maybe some people enjoy the musings of amichail on Ask, but I could honestly do without them.
The art of language lives in its redundancy; every unforced choice is a venue for self-expression.
"Unforced choice" is an interesting phrase. Perhaps another discussion, another time. End self-expression.
The latest and greatest versions of captcha are more resilient to these types of services, but it's a cat-and-mouse game. I would recommend that you, as a sysadmin, learn at least the basics of this stuff.
This sort of language is inappropriate and unnecessarily combative.
In any event, no filter screen is perfect. Getting rid of 80% of bot traffic is a good thing, even if you can't rid yourself of 100% of it. You can't let perfect be the enemy of "pretty good."
People use CAPTCHAs because they work--even if imperfectly. Of course, you have to stay on top of the latest implementations.
You'd probably catch most of the low-hanging fruit, for sure, but you would cause friction for real users.
I say this as someone who has enabled captcha on some of our more critical endpoints; there's definitely a place for it.
I researched it substantially and realized it's an unsolved problem. Anything that makes a dent is incomplete and comes with ugly tradeoffs. For a time I wondered if I should try to solve it myself, but I could never think of any solution that hadn't already been tried or wasn't already being tried. Years later, I'm left curious whether the problem is even solvable.
My point is that captcha won't solve this, and solving this problem is a lot harder than it seems at first, and might not even be solvable (which I know is hard to accept).
If someone does find an elegant privacy ensuring way to solve it, I think the impact would extend far beyond HN and could make a big difference to the future of civilization as a whole.
Even if you use state IDs for it, who's to say that a particular state won't be... loose with issuing IDs that then go on to be used by bots?
It's a problem with humans as well: one human can be having a pleasant conversation with another, unaware that that person isn't being genuine, is lying, has ulterior motives, or has been instructed on what to say by someone else.
Maybe you are thinking purely from a math/theoretical perspective, but I'm thinking of a complete solution that's practical to use and solves the problem for sites like HN and many others.
You get a good credit score and still live within your means while also getting additional points + bank covering any fraudulent activity if the card got stolen.
Of course, this method probably won't work for people who feel they would rather cut themselves off from temptation entirely, or those without access to banking systems, both of which I sympathise with.
I did this for a while after being bitten a couple times for not having a credit history.
However, I recently stopped. I still keep one card around and active just to maintain my score… just in case. However, spending $10k for $200 in rewards… I don’t really care. That’s mostly a tool to get people to justify more spending.
I’ve quite liked using the debit card and seeing the number go down when I spend, it makes more sense intuitively, and I always know exactly what I have. I had a debit card stolen about 20 years ago; I was able to get the charges reversed, no different from a credit card in my experience. It’s on the Visa network.
I would cancel my last credit card, but I don’t want to deal with cell phone deposits and other nonsense, like I had to in the past.
- write this number in words
- 486436546497964136564768756456455824164567575646875812445676854253154782125
- four quadrigintilion eight hundred sixty four trigintillion three hundred sixty five duovigintillion four hundred sixty four unvigintillion nine hundred seventy nine vigintillion six hundred forty one novemdecillion three hundred sixty five octodecillion six hundred forty seven septendecillion six hundred eight seven sexdecillion five hundred sixty four quindecillion five hundred sixty four quatuordecillion five hundred fifty eight tredecillion two hundred forty one duodecillion six hundred forty five undecillion six hundred seventy five decillion seven hundred fifty six nonillion four hundred sixty eight octillion seven hundred fifty eight septillion one hundred twenty four sextillion four hundred fifty six quintillion seven hundred sixty eight quadrillion five hundred forty two trillion five hundred thirty one billion five hundred forty seven million eight two thousand one hundred twenty five
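Writing a number in words is itself mechanical, which is why this makes a weak captcha. A short-scale sketch, purely illustrative: the scale names here stop at decillion (10^33), so the 75-digit number above would need a longer `SCALES` table.

```javascript
// Word tables for short-scale English number names (illustrative sketch).
const ONES = ['', 'one', 'two', 'three', 'four', 'five', 'six', 'seven',
  'eight', 'nine', 'ten', 'eleven', 'twelve', 'thirteen', 'fourteen',
  'fifteen', 'sixteen', 'seventeen', 'eighteen', 'nineteen'];
const TENS = ['', '', 'twenty', 'thirty', 'forty', 'fifty', 'sixty',
  'seventy', 'eighty', 'ninety'];
const SCALES = ['', 'thousand', 'million', 'billion', 'trillion',
  'quadrillion', 'quintillion', 'sextillion', 'septillion', 'octillion',
  'nonillion', 'decillion'];

// Spell out a three-digit group (0 <= n < 1000).
function threeDigits(n) {
  const parts = [];
  if (n >= 100) { parts.push(ONES[Math.floor(n / 100)] + ' hundred'); n %= 100; }
  if (n >= 20) { parts.push(TENS[Math.floor(n / 10)]); n %= 10; }
  if (n > 0) parts.push(ONES[n]);
  return parts.join(' ');
}

// Convert a decimal digit string into words, group by group.
function numberToWords(digits) {
  const groups = [];
  for (let i = digits.length; i > 0; i -= 3) {
    groups.unshift(Number(digits.slice(Math.max(0, i - 3), i)));
  }
  if (groups.length > SCALES.length) {
    throw new Error('number too large for this sketch');
  }
  const words = [];
  groups.forEach((g, idx) => {
    if (g === 0) return;
    const scale = SCALES[groups.length - 1 - idx];
    words.push(threeDigits(g) + (scale ? ' ' + scale : ''));
  });
  return words.join(' ') || 'zero';
}
```

Since a few dozen lines of code can answer the challenge, an LLM-backed bot certainly can.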
Most low-effort bots can already bypass basic CAPTCHA, while it mostly adds friction for legitimate users. HN’s strength is the quality of discussion, and that seems better protected by behavior-based signals (account age, posting patterns, community feedback) rather than one-time verification challenges.
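To make the behavior-based idea concrete, here is a toy scoring sketch. The field names and thresholds are invented for illustration; they are not anything HN actually uses:

```javascript
// Toy behavior-based spam score; all thresholds are assumptions, not HN's.
function spamSignalScore(account) {
  let score = 0;
  if (account.ageDays < 7) score += 2;      // brand-new account
  if (account.karma < 10) score += 1;       // little community feedback yet
  if (account.postsPerHour > 5) score += 3; // unusually high posting rate
  if (account.flagRatio > 0.2) score += 3;  // frequently flagged by users
  return score; // higher means more likely automated/spammy
}
```

A site could then rate-limit or queue-for-review accounts above some score, rather than challenging every user with a captcha.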
There is no high-volume spam (AI or otherwise) on HN, so a captcha won't help, and low-volume captcha solving can be farmed out. Humans are the best defense against low-volume spam, so flag these posts!
Otherwise agreed with the sentiment.
I'd expect that if we took Randall Munroe's advice[0], that price would go up significantly, perhaps prohibitively so.
Is there? I enable showdead and don't see it. There are the occasional spam and vulgar comments, but not that much.
Any "AI slop" being posted seems to come from actual HN'ers who think they're being helpful, and is often downvoted. But there's not much.
So I'm not sure this is a problem that currently needs any new solutions? I don't see AI bots taking over the discourse at all. Not even a little.
How dystopic. And you're probably right.
To see what I mean, take a screenshot of a random captcha that needs solving and ask an LLM to solve it for you. It will do it accurately.
(They are not and haven't been for a long, long time)
There are quite a few third-party apps for Hacker News, such as Hacki (iOS/Android). [1]
Something like using a third-party app that includes forms of spam filtering: checking when the user joined, how many posts they have, the amount of "karma" (or whatever it's called here). You could implement blocking individual users, etc. This app does not have that, but it could be forked and modified, or you could talk to the dev...
That might be a better solution than trying to implement all sorts of annoying captchas and other checks on HN's side.
Do you have some examples of this? I am on HN almost every day, and I read a lot of comments, and I haven't noticed this.
Enslopification is coming for everyone, everywhere, at all times.
Everything is already slop and will be slop, and will have been being slop.
Even if you manage to make bot usage more expensive, which is all a captcha can do, the content posted by humans in discussions and shared links is increasingly generated by machines.
It's ironic having a community of people object to the same technology they helped build. Enjoy the show, and learn to live with it. It's going to get much worse before it gets any better, if at all.
The overwhelming majority of developers have never worked anywhere close to LLM tech. AI is a very small field requiring specialized expertise.