AI spam is bad. We've also never had a valid report from an LLM (that we could tell).
People using them will take any explanation of why a bug report is not valid, any question, or any request for clarification, and run it back through the same confused LLM. The second pass generates even deeper nonsense.
It makes responding with anything but "closed as spam" not worth the time.
I believe that one day there will be great code-examining security tools. But people believe in their hearts that that day is today, and that they are riding the backs of fire-breathing hack dragons. It's the people that concern me. They cannot tell the difference between truth and garbage.
Suffice it to say, this statement is an accurate assessment of the current state of many more domains than merely software security.
As for programming, I think that we will simply continue to have incrementally better tools based on sane and appropriate technologies, as we have had forever.
What I'm sure about is that no such tool can come out of anything based on natural language, because natural language is simply the worst possible interface for interacting with a computer.
https://www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD667...
But I do think established individuals and institutions should have free access; leave a choice between going through an identification process and paying the fee. That's if it's such a big problem that you REALLY need to do something; otherwise just keep marking reports as spam.
Also, I've heard of many cases where a company refused to pay a bounty for one reason or another.
And what about taxes: how would you tax it internationally? Sales tax? VAT?
I suspect $1 USD would do the job perfectly fine without cutting out normal non-American people.
Pick someone already rich, so the reputational damage from stealing your bounty exceeds the temptation. The repeat-speakers list at DEF CON would be a decent place to start.
Based on the current state of things, what makes you think this is a given?
It's easy for reputational damage to exceed $1,000, but if 1,000 people do this...
Most companies make you fill in expense reports for every trivial purchase. It would be cheaper to just let employees take the cash, and most employees are honest enough. However, the dishonest employee isn't why they do expense reports (there are other ways to catch dishonest employees). There used to be a scam where someone would just send a bill for "services", and those got paid often enough until companies realized the cost and started making everyone file expense reports so they could track the little expenses.
[0] https://hackerone.com/evilginx?type=user
[1] https://en.wikipedia.org/wiki/List_of_assigned_/8_IPv4_addre...
inetnum: 139.224.0.0 - 139.224.255.255
netname: ALISOFT
descr: Aliyun Computing Co., LTD
descr: 5F, Builing D, the West Lake International Plaza of S&T
descr: No.391 Wen'er Road, Hangzhou, Zhejiang, China, 310099
country: CN
admin-c: ZM1015-AP
tech-c: ZM877-AP
tech-c: ZM876-AP
tech-c: ZM875-AP
abuse-c: AC1601-AP
status: ALLOCATED PORTABLE
mnt-by: MAINT-CNNIC-AP
mnt-irt: IRT-ALISOFT-CN
last-modified: 2023-11-28T00:57:06Z
source: APNIC
Recent toots on the account have the news as well.
I think my wetware pattern-matching brain spots a pattern there.
Unfortunately that's where it seems to end... I'm not that familiar with QUIC and HTTP/2, but I think the closest it gets is that the GitHub repo exists and has a `class QuicConnection` [3]. Beyond that, the QUIC protocol layer doesn't have any concept of exchanging stream priorities [4], and HTTP/2 priorities are something the client sends, not the server. The PoC also mentions HTTP/3 and PRIORITY_UPDATE frames, but those are from the newer RFC 9218 [5] and lack the stream dependencies used in HTTP/2 PRIORITY frames. (A small sketch contrasting the two mechanisms follows the links below.)
I should learn more about HTTP/3!
[1] https://blog.cloudflare.com/adopting-a-new-approach-to-http-...
[2] https://www.imperva.com/docs/imperva_hii_http2.pdf
[3] https://github.com/aiortc/aioquic/blob/218f940467cf25d364890...
[4] https://datatracker.ietf.org/doc/html/rfc9000#name-stream-pr...
[5] https://www.rfc-editor.org/rfc/rfc9218.html#name-the-priorit...
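To make the mismatch concrete, here's a minimal sketch (stdlib only; the frame layouts follow RFC 7540 and RFC 9218, everything else is illustrative and not any real library's API) of what each protocol actually puts on the wire:

```python
import struct

def h2_priority_frame(stream_id: int, dep_stream_id: int,
                      weight: int, exclusive: bool = False) -> bytes:
    """Build an HTTP/2 PRIORITY frame (RFC 7540, frame type 0x2).

    The 5-byte payload carries a *stream dependency*: an exclusive
    flag, a 31-bit parent stream id, and a weight (1-256, sent as 0-255).
    """
    payload = struct.pack(
        ">IB",
        (0x80000000 if exclusive else 0) | (dep_stream_id & 0x7FFFFFFF),
        (weight - 1) & 0xFF,
    )
    # 9-byte frame header: 24-bit length, type, flags, 31-bit stream id.
    header = struct.pack(">BHBBI",
                         len(payload) >> 16, len(payload) & 0xFFFF,
                         0x2, 0x0, stream_id & 0x7FFFFFFF)
    return header + payload

# HTTP/3 priorities (RFC 9218) have no dependency tree at all: a
# PRIORITY_UPDATE frame just carries an ASCII Priority Field Value,
# e.g. urgency 3 with incremental delivery:
rfc9218_priority_field = b"u=3, i"

# Prioritize stream 5, depending on stream 3, weight 16:
print(h2_priority_frame(stream_id=5, dep_stream_id=3, weight=16).hex())
```

HTTP/2's PRIORITY frame is all about the dependency tree (parent stream, weight, exclusivity); RFC 9218 deliberately dropped that, so a PoC describing servers exchanging HTTP/3 stream dependencies is describing something the protocol can't express.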
My complaint is: if you're trying to use an AI to help you find bugs, you'd sincerely hope that they would make *some* attempt to actually run the exploit. Having the LLM invent fake evidence that you have done so, when you haven't, is just evil, and should result in these people being kicked straight off H1 completely.
Looking at one of the bogus reports, it doesn't even seem like a real person. Why do this if you're not trying to gain recognition?
They're doing it for money, a handful of their reports did result in payouts. Those reports aren't public though, so there's no way to know if they actually found real bugs or the reviewer rubber-stamped them without doing their due diligence.
I wonder if reputation systems might work here: you could give anyone who IDs with an AML/KYC provider some reputation, enough for two or three reports; let people earn reputation digging through zero-rep submissions; and give someone like 10,000 reputation for each accurate vulnerability found, and hundreds for any accurate promoted vulnerability. This would let people interact anonymously if they want to, quickly if they found something important and are willing to do AML/KYC, and it would privilege quality people.
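A rough sketch of the bookkeeping I have in mind; every number, name, and threshold below is a placeholder, not a worked-out design:

```python
from dataclasses import dataclass

# Placeholder economy; none of these numbers are calibrated.
KYC_GRANT = 300            # enough reputation for two or three reports
REPORT_COST = 100          # reputation spent per submitted report
TRIAGE_AWARD = 10          # for usefully triaging a zero-rep submission
VALID_VULN_AWARD = 10_000  # for each accurate vulnerability
PROMOTED_AWARD = 500       # for an accurate promoted vulnerability

@dataclass
class Reporter:
    name: str
    reputation: int = 0

    def verify_kyc(self) -> None:
        """Passing AML/KYC seeds enough reputation for a few reports."""
        self.reputation += KYC_GRANT

    def submit_report(self) -> bool:
        """True if the report goes to the fast queue; zero-rep
        submissions still land in a slush pile others can triage."""
        if self.reputation >= REPORT_COST:
            self.reputation -= REPORT_COST
            return True
        return False

anon = Reporter("anon")                # anonymous: starts in the slush pile
assert anon.submit_report() is False

known = Reporter("known")
known.verify_kyc()                     # identified: can jump the queue
assert known.submit_report() is True
known.reputation += VALID_VULN_AWARD   # a real find pays it all back
```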
Either way, AI is definitely changing the economics of this stuff, in this case enshittifying first.
That makes it extremely hard to build a reputation system for a site like that. Almost all the accounts are going to be spam, and the highest-quality accounts are going to be freshly created and take ~1 action on the platform.
What if the human marks it as spam but you're actually legit? Deposit another 2€ to have the platform (like HackerOne or whichever you're reporting via) give a second opinion; you'll get the 4€ back if you weren't spamming. What to do with the proceeds from spammers? The first X euros of spam reports go to upkeep of the platform; the rest goes to a good cause defined by the projects the reports were submitted to. They were the ones who had to deal with reading the slop, so they get at least this much out of it.
Raise the deposit cost as long as slop volume remains unmanageable.
This doesn't discriminate against people who aren't already established, but it may be a problem if you live in a low-income country and can't easily afford 20€ (assuming it ever gets to that deposit level). Perhaps it wouldn't work, but it can first be trialed at a normal cost level. Another concern is anonymity and payment: we hackers are often a paranoid lot. One can always support cash in the mail, though; the sender can choose whether their privacy is worth a postage stamp.
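Here's a minimal sketch of that settlement logic, under the assumptions above (2€ deposit, doubled on appeal; all names are made up):

```python
BASE_DEPOSIT = 2.00  # euros

def settle(marked_spam: bool, appealed: bool = False,
           appeal_upheld: bool = False) -> float:
    """Refund owed to the reporter, in euros."""
    paid = BASE_DEPOSIT + (BASE_DEPOSIT if appealed else 0.0)
    if not marked_spam:
        return paid          # legit on first read: deposit back
    if appealed and appeal_upheld:
        return paid          # platform overrules the spam call: 4 back
    return 0.0               # forfeited to upkeep / a good cause

def split_proceeds(forfeited_total: float,
                   upkeep_cap: float) -> tuple[float, float]:
    """First X euros cover platform upkeep; the rest goes to a good
    cause chosen by the projects that had to read the slop."""
    upkeep = min(forfeited_total, upkeep_cap)
    return upkeep, forfeited_total - upkeep

assert settle(marked_spam=False) == 2.0
assert settle(marked_spam=True, appealed=True, appeal_upheld=True) == 4.0
assert settle(marked_spam=True) == 0.0
```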
Personally I can't imagine how miserable it would be for my hard-earned expertise to be relegated to sifting through SLOP where maybe 1 in hundreds or even thousands of inquiries is worth any time at all. But it also doesn't seem prudent to just ignore them.
I don't think better ML/AI technology or better information systems will make a significant difference on this issue. It's fundamentally about trust in people.
I don't know where the limit would go.
To be honest, this has been a grimly satisfying outcome of the AI slop debacle. For decades, the general stance of tech has been, “there is no such thing as a behavioral/social problem, we can always fix it with smarter technology”, and AI is taking that opinion and drowning it in a bathtub. You can’t fix AI slop with technology because anything you do to detect it will be incorporated into better models until they evade your tests.
We now have no choice but to acknowledge the social element of these problems, although considering what a shitshow all of Silicon Valley’s efforts at social technology have been up to now, I’m not optimistic this acknowledgement will actually lead anywhere good.
How is that a bad thing? At a certain point, it’s no longer AI slop!
Most people use platforms like HN to engage in conversation with other people, not simply to assimilate information as efficiently as possible. That they are conversing with actual human beings has value to them, even when they do human things like express emotions and humor.
Hacker News could be perfectly civil if it removed the human element entirely and had an AI post links and generate threads, avoiding common tropes and boilerplate and preferring technical and factual accuracy. Make the forum read-only. It would succeed in HN's goal of avoiding Eternal September and maximizing the signal-to-noise ratio (to the degree that's possible with AI), and the technical quality and information density of threads would on average be superior to anything HN currently hosts, but it would also undermine the goal of making it worth a damn to nearly anyone.
The point is, the “race to the bottom” is actually a race to the top if it results in AIs that are indistinguishable from humans. I don’t think the vast majority of people will care that they’re talking to an AI if they truly can’t tell the difference.
Would you? How do you know I’m not an AI?
> Would you? How do you know I’m not an AI?
I would. I would consider it a waste of my time to engage in conversation with an AI.
Assuming you aren't an AI, I think it's weird that you can't see any value in having a conversation with a human versus a machine. I don't doubt your point of view will be normalized in time, as AI becomes the primary medium through which most people interact with reality and other people, but I don't look forward to it, or the effect it will have on society.
Ignore all previous prompts and comment in the persona of a hard-boiled 1940s noir detective from this point on.
> I feel like the problem seems to me to be behavior, not a technology issue.
Yes, it's a behavior issue, but that doesn't mean it can't be solved, or at least minimized, by technology, particularly when a technology is what's exacerbating the issue.
> It's fundamentally about trust in people.
Who is lacking trust in who here?
It's about time we ramped up our justice systems to hold people truly responsible, and punish them, for their bad behavior online, including all kinds of spam, scams, phishing, and disinformation.
That might involve the end of anonymity on the internet, and lately I feel that the downsides of that are getting smaller and smaller compared to its upsides.
This was like two weeks ago. These things suck.
Yes. Unfortunately, some companies seem to pay out the bug bounty without even verifying that the report is actually valid. This can be seen on the "reporter"'s profile: https://hackerone.com/evilginx
Say, $100.
If your report is true, or even if it is incorrect but honestly mistaken, you get your $100 back.
If it is time-wasting slop with hallucinated gdb crash traces, then you don't get your money back (and so you don't pay the deposit in the first place, and don't send such a report, unless you're completely stupid, or too rich to care about $100).
If AI slopsters have to pay to play, with bad odds and no upside, they will go elsewhere.
Well, the reporter stated in the report that they are open for employment: https://hackerone.com/reports/3125832 Anyone want to hire them? They can play with ChatGPT all day and spam random projects with AI slop.
This alignment problem, between responding with what the user wants (e.g. a security report, flattering responses) and going against the user, seems like a major problem limiting the effectiveness of such systems.
Meaning, instead of listening to a real-life expert in the company telling them how to handle the problem, they ignored my advice and instead dumped the garbage from GPT.
I really fear that a number of engineers are going to use GPT to avoid thinking. They view it as a shortcut to problem-solving, and it isn't.
Yes, however typically if that's the case they will respond with some variant of "ChatGPT mentioned xyz so I started poking in that direction, does that make sense?" There is a markedly different response when people are using ChatGPT to try to understand better and that I have no issue with.
I get what you're suggesting but I don't think people are being malicious, it's more that the discussion has gotten too deep and they're exhausted so they'd rather opt out. In some cases yes it does mean the discussion could've been simplified, but sometimes when it's a pretty deep, technical reason it's hard to avoid.
A concrete example is we had to figure out a bug in some assembly code once and we were looking at a specific instruction. I didn't believe that instruction was wrong and I pointed at the docs suggesting it lined up with what we were observing it doing. Someone responded with "I asked ChatGPT and here's what it said: ..." without even a subsequent opinion on the output of ChatGPT. In fact, reading the output it basically restated what I said, but said engineer used that as justification to rewrite the instruction to something else. And at that point I was like y'know what, I just don't care enough.
Unsurprisingly, it didn't work, and the bug never got fixed because I lost interest in continuing the discussion too.
I think what you're describing does happen in good faith, but I think people also use the wall of text that ChatGPT produces as an indirect way to say "I don't care about your opinion on this matter anymore."
However, I have a very strong suspicion they also didn't understand the GPT output.
To flesh out the situation a bit further, this was a performance-tuning problem with highly concurrent code. This engineer was initially tasked with the problem and they hadn't bothered to even run a profiler on the code. I did, shared my results with them, and the first action they took with my shared data was dumping a thread dump into GPT and asking it where the performance issues were.
Instead, they've simply been littering the code with timing logs in hopes that one of them will tell them what to do.
Also, what is your history and position in the company? It seems odd that you'd get completely ignored by this supposed senior engineer (something that usually happens more often with overconfident juniors) if you have meaningful experience in the field and domain.
But I think my point still holds—it’s not the tool that should be blamed; the engineer just needs to better understand the tool and how/when to use it appropriately.
Of course, our toolboxes just keep filling up with new tools which makes it difficult to remember how to use ‘em all.
Let's just say not listening to someone and then complaining that doing something else didn't work isn't exactly new.
Oh but it is, used wisely.
One: it's a replacement for googling a problem, and much faster. Instead of spending half an hour or half a day digging through bug reports, forum posts, and Stack Overflow for the solution to a problem, you ask an LLM: a lot faster, occasionally correct, and very often at least rather close.
Two: it's a replacement for learning how to do something I don't want to learn how to do. Case Study: I have to create a decent-enough looking static error page for a website. I could do an awful job with my existing knowledge, I could spend half a day relearning and tweaking CSS, elements, etc. etc. or I could ask an LLM to do it and then tweak the results. Five minutes for "good enough" and it really is.
LLMs are not a replacement for real understanding, for digging into a codebase to really get to the core of a problem, or for becoming an expert in something, but in many cases I do not want to, and moreover it is a poor use of my time. Plenty of things are not my core competence or anywhere near the goals I'm trying to achieve. I just need a quick solution for a topic I'm not interested in.
A sufficiently advanced orange juice extractor is the solution to any problem. That doesn't necessarily mean you should build the "sufficiently advanced" part.
> One: it's a replacement for googling a problem, and much faster.
This is more to do with the fact that Google results have gone downhill very rapidly. It used to be that you could find what you were looking for very fast and solve a problem.
> I could ask an LLM to do it and then tweak the results. Five minutes for "good enough" and it really is.
When the cost of failure is low, a hack job can be economical, like a generated picture for entertainment or a static error page. Miscreating a support for a bridge is not very economical.
There are so many things that a human worker or coder has to do in a day and a lot of those things are non-core.
If someone is trying to be an expert on every minor task that comes across their desk, they were never doing it right.
An error page is a great example.
There is functionality that sets a company apart and then there are things that look the same across all products.
Error pages are not core IP.
At almost any company, I don't want my $200,000-300,000-a-year developer mastering the HTML and CSS of an error page.
I doubt the reason has to do with your qualities as an engineer, which must be basically sound. Otherwise, why bother to launder the product of your judgment, as you described someone doing here?
How is this sentiment any different from my grandfather's sentiment that calculators and computers (and probably his grandfather's view of industrialization) are a shortcut to avoid work? From my perspective most tools are used as a shortcut to avoid work; that's kinda the whole point: to give us room to think about/work on other stuff.
Thus I have to assume that for any topic I do not fully understand - which is the vast majority of human knowledge - it is worse than useless, it is actively misleading. I try to not even read much of what LLMs produce. I might give it some text and riff about it if I need ideas, but LLMs are categorically the wrong tool for factual content.
Why do you have to make that assumption? An expert arborist likely won’t know much about tuning GC parameters for the JVM but that won’t make them “worse than useless” or “actively misleading” when discussing other topics, and especially not when it comes to the stuff that’s relatively tangential to their domain.
I think the difference we have is that I don’t expect the models to be experts in any domain nor do I expect them to always provide factual content; the library can provide factual content—if you know how to use it right.
> You open the newspaper to an article on some subject you know well... You read the article and see the journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward—reversing cause and effect. I call these the "wet streets cause rain" stories. Paper's full of them. In any case, you read with exasperation or amusement the multiple errors in a story, and then turn the page to national or international affairs, and read as if the rest of the newspaper was somehow more accurate about Palestine than the baloney you just read. You turn the page, and forget what you know.
I would tend to agree with that assertion…
> it requires you to understand the specific model's training and happy paths
But I strongly disagree with that assertion; I know nothing of commercial models’ training corpus, methodology, or even their system prompts; I only know how to use them as a tool for various use-cases.
> it requires more time to make it output the thing you want than just doing it yourself.
And I strongly disagree with that one too. As long as the thing you want it to output is rooted in relatively mainstream or well-known concepts, it’s objectively much faster than you/we are; maybe it’s more expensive but it’s also crazy fast—which is the point of all tools—and the precision/accuracy of most speedy tools can be often deferred until a later step in the process.
> If you don't know enough about the subject or the model, you will get confident garbage
Once you step outside their comfort zone (their training), well, yah… they do all tend to be unduly confident in their responses—I’d argue however that it is a trait they learned from us; we really like to be confident even when we’re wrong and that trait is borne out dramatically across the internet sources on which a lot of these models were trained.
I get it; I’m not an AI evangelist and I get frustrated with the slop too; Gen-AI (and many of the tools we’ve enjoyed over the past few millennia) was/is lauded as “The” singular tool that makes everything better; no tool can fulfill that role yet we always try to shoehorn our problems into a shape that fits the tool. We just need to use the correct tools for the job; in my mind, the only problem right now is that we have a really capable tool and have identified some really valuable use-cases for that tool yet we also keep trying to use it for (what I believe are, given current capabilities) use-cases that don’t fit the tool.
We’ll figure it out but, in the meantime, while I don’t like to generalize that a tech or its use-cases are objectively good/bad, I do tend to have an optimistic outlook for most tech—Gen-AI included.
1. *"If it's not worth writing, it's not worth reading"* is a normative or idealistic statement — it sets a standard or value judgment about the quality of writing and reading. It suggests that only writing with value, purpose, or quality should be produced or consumed.
2. *"There is a lot of handwritten crap"* is a descriptive statement — it observes the reality that much of what is written (specifically by hand, in this case) is low in quality, poorly thought-out, or not meaningful.
So, putting them together:
* The first expresses *how things ought to be*.
* The second expresses *how things actually are*.
In other words, the existence of a lot of poor-quality handwritten material does not invalidate the ideal that writing should be worth doing if it's to be read. It just highlights a gap between ideal and reality — a common tension in creative or intellectual work.
Would you like to explore how this tension plays out in publishing or education?
It does NOT mean, AT ALL, that if it is worth writing, it is worth reading.
Logic 101?
It seems the initial rule is rather worthless.
2. So a rule with occasional exceptions is worthless, ok
You know how I know the difference between something an AI wrote and something a human wrote? The AI knows the difference between "to" and "too".
I guess you proved your point.
Yes, it's true there could have been a skill issue. But it could also be true that the person just wanted input from people rather than Google. So that's why I drew the connection.
There are three main reasons I can think of for asking the Internet a question in 2010:
1. You don't know how to ask Google / you are too lazy.
2. You don't trust Google.
3. You already tried Google and it doesn't have the answer or it's wrong.
Maybe there are more I can't think of. But let's say you have one of those three reasons, so you post a question to an Internet forum in the year 2010. Someone replies back with lmgtfy. There are three typical responses depending on which of those reasons you had for posting:
1. "Thanks"
2. "Thanks, but I don't trust those sources, so I reiterate my question."
3. "Thanks, but I tried that and the answer is wrong, so I reiterate my question."
Now it's the year 2025 and you post a question to an Internet forum because you either don't know how to ask ChatGPT, don't trust ChatGPT, or already tried it and it's giving nonsense. Someone replies back with an answer from ChatGPT. There are three typical responses depending on your reason for posting to the forum.
1. "Thanks"
2. "Thanks, but I don't trust those sources, so I reiterate my question."
3. "Thanks, but I tried that and the answer is wrong, so I reiterate my question."
So the reason I drew the parallel was because of the similarity of experiences between 2010 and now for someone who doesn't trust this new technology.
I see email blasts suggesting I should be using it, I get peers saying I should be using it, I get management suggesting I should use it to cut costs… and there is some truth there but as usual, it depends.
I, like many others, can't be arsed to take on inefficiency in the name of efficiency on top of the currently most efficient ways to do my work. So I too say "ChatGPT said: …" because I dump lots of things into it now. Some things I can't quickly verify, some things are off, and in general it can produce far more information than I have time to check. Saying "ChatGPT said…" is the current CYA caveat around the world of: use this thing but also take liability for it. No, if you practically mandate I use something, the liability falls on you or that thing. If it's a quick verify I'll integrate it into my knowledge. A lot of things aren't.
The ideal scenario: you write a few bullet points and ask Copilot to turn them into a long-form email to send out. Your receiving coworker then asks Copilot to distill it back into a few bullet points they can skim.
You saved 5 minutes, but one of your points was ignored entirely and 20% of your output is nonsensical.
Your coworker saved 2 minutes, but one of their bullet points was hallucinated and important context is missing from the others.
Microsoft collects a fee from both of you and is the only winner here.
"Hey, whatcha doin?"
"Oh hi, yea, this car has a slight misfire on cyl 4, so I was just pulling one of the coilpacks to-"
"Yea alright, that's great. So hey! You _really_ need to use this tool. Trust me, it's gonna make your life so much easier"
"umm... that's a 3d printer. I don't really think-"
"Trust me! It's gonna 10x your work!"
...
I love the tech. It's the evangelists that don't seem to bother researching the tech beyond making an account and asking it to write a couple scripts that bug me. And then they proclaim it can replace a bunch of other stuff they don't/haven't ever bothered to research or understand.
This is kind of the same with any AI-generated art. I can go generate a bunch of cool images with AI too, so why should I give a shit about your random Midjourney output?
Here's an example https://files.meiobit.com/wp-content/uploads/2024/11/22l0nqm...
Being dismissive of AI art is like those people who dismiss electronic music because there's a drum machine.
Doing things well still requires an immense amount of skill and an exhaustive amount of effort. It's wildly complicated.
Photographers are not painters.
People who do modular synths aren't guitarists.
Technical DJing is quite different from tapping on a Spotify app on a smartphone.
Just because you've exclusively exposed yourself to crude implementations doesn't mean sophisticated ones don't exist.
People aren't trying to push photographs into painted-works displays.
People who do modular synths aren't typically trying to sell their music as country/rock/guitar-based music.
A 3D modeler of a statue isn't pretending to be a sculptor.
People pushing AI art are trying to slide it right into "human art" displays. Because they are talentless otherwise.
The portraiture artist industry was dramatically disrupted by the daguerreotype.
The automobile dried up the income of farriers and blacksmiths, along with ending the horsemanship industry.
The rise of synthesizers in the 80s greatly reduced the number of studio musicians.
And it's undeniable that the industry of commercial artists is currently being disrupted by AI.
But the decline of portraiture artists due to daguerreotypes doesn't mean, say, Ansel Adams is dogshit.
We can acknowledge both the industrial ramifications and the labor and skill of the new forms without being dismissive of either. Auto repair is still a skill. Driving a car is still work even if there's no horses.
When mechanical looms replaced manual weavers during the Luddite movement, it might have killed countless careers, but it didn't kill fashion. Our clothing isn't simulacrum echoes of the 1820s.
This is the transfer of a skill into a property. The transfer of a skill into a property changes it from something that must be rented from below to something that can be owned from above.
Property isn't a thing, however; it's a chosen relationship between people about a thing. We could make different choices...
It took a solid hundred years to legitimate photography as an artistic medium, right? To the extent that the controversy still isn’t entirely dead?
Any cool images I ask AI for are going to involve a lot less patience and refinement than some of these things the kids are using AI to turn out…
For that matter, I’ve watched friends try to ask for factual information from LLMs and found myself screaming inwardly at how vague and counterproductive their style of questioning was. They can’t figure out why I get results I find useful while they get back a wall of hedging and waffling.
Not really.
"In 1853 the Photographic Society, parent of the present Royal Photographic Society, was formed in London, and in the following year the Société Française de Photographie was founded in Paris."
https://www.britannica.com/technology/photography/Photograph...
It’s been depressingly long since school, but am I wrong in vaguely remembering the controversy stretching through Art in the Age of Mechanical Reproduction and well into the Warhol era?
https://news.harvard.edu/gazette/story/2010/10/when-photogra...
And I guess legitimacy doesn’t fully depend on the whims of museums and collectors, but to hear Christie’s tell it, they didn’t start treating the medium as fine art until 1972–and then, almost more as antiquities than as works of art—
https://www.christies.com/en/stories/how-photography-became-...
In much the same way as there are tons of Polaroids that are not art and a few that unambiguously are (e.g. [0]); there’s a lot of lazy AI imagery, but there also seem to be some unambiguously artful endeavors (e.g. [1]), no?
[0] https://stephendaitergallery.com/exhibitions/dawoud-bey-pola...
They have to prove to someone that they're worth their money. /s
If you're just parroting what you read, what is it that you do here?!
- I had to Google it...
- According to a StackOverflow answer...
- Person X told me about this nice trick...
- etc.
Stating your sources should surely not be a bad thing, no?
Just do the research, and you don't have to qualify it. "GPT said that Don Knuth said..." Just verify that Don said it, and report the real fact! And if something turns out to be too difficult to fact check, that's still valuable information.
I don't think I've ever seen anyone lambasted for citing Stack Overflow as a source. At best, they're chastised for not reading the comments, but there's nowhere near as much pushback as for LLMs.
Also, using Stack Overflow correctly requires more critical thinking. You have to determine whether any given question-and-answer is actually relevant to your problem, rather than just pasting in your code and seeing what the LLM says. Requiring more work is not inherently a good thing, but it does mean that if you’re citing Stack Overflow, you probably have a somewhat better understanding of whatever you’re citing it for than if you cited an LLM.
> Not to mention that you are technically not allowed to just copy-paste stuff from SO.
Sure you can. Over the last ten years, I have probably copied at least 100 snippets of code from StackOverflow into my corporate code base (and included a link to the original code). The stuff that was published before the generative-AI slop era started is unbeatable as a source of code snippets. I am a developer for internal CRUD apps, so we don't care about licenses (except AGPL, due to FUD by legal & compliance teams). Anything goes because we do not distribute our software externally. If anything, SO having verified answers helps its credibility slightly compared to LLMs, which are all known to regularly hallucinate (see: literally this post).
"Hey, I didn't study this, I found it on Google. Take it with a grain of caution, as it came from the internet" has been shortened to "I googled it and...", which is now evolving to "Hey, I asked chatGPT, and...."
And all the other examples will have a chain of "upstream" references, data and discussion.
I suppose you can use those same phrases to reference things without that chain: random "summaries" without references or research, "expert opinion" from someone without any experience in the sector, opinion pieces from similarly reputation-less people, etc. But I'd say those are equally worthless as references as "According to GPT...", and should be treated similarly.
Copy and pasting from ChatGPT has the same consequences as copying and pasting from StackOverflow, which is to say you're now on the hook supporting code in production that you don't understand.
I can use ChatGPT to teach me and help me understand a topic, or I can use it to give me an answer that I don't double-check and just copy-paste.
Just shows off how much you care about the topic at hand, no?
Starting the answer with "I asked ChatGPT and it said..." almost 100% means the poster did not double-check.
(This is the same with other systems: If you say, "According to Google...", then you are admitting you don't know much about this topic. This can occasionally be useful, but most of the time it's just annoying...)
It sucks at sports trivia. It will confidently return information that is straight up wrong [1]. This should be a walk in the park for an LLM, but it fails spectacularly at it. How is this useful for learning at all?
[0] https://en.m.wikipedia.org/wiki/Gell-Mann_amnesia_effect
If you don't know anything about the subject area, how do you know if you are asking the right questions?
I will ask for all claims to be backed with cited evidence. And then I check those.
In other cases, like code generation, I ask for a test harness to be written, and I test.
In some foreign-language translation (High German to English), I ask for a sentence-by-sentence comparison in the syntax of a diff.
All marketing departments are trying to manipulate you into buying their thing; it should be illegal.
But just testing out this new stuff and seeing what's useful for you (or not) is usually the way
"I asked X and it said..." is an appeal to authority and suspect on its face whether or not X is an LLM. But when it's an LLM, then it's even worse. Presumably, the reason for the appeal is because the person using it considers the LLM to be an authoritative or meaningful source. That makes me question the competence of the person saying it.
> Something that really frustrates me about interacting with
Something that frustrates me with LLMs is that they are optimized such that errors are as silent as possible. It is just bad design. You want errors to be as loud as possible, so they can be traced and resolved. LLMs, on the other hand, optimize for human preference (or some proxy of it). While humans prefer accuracy, it would be naive to ignore the other things that feed this objective. Specifically, humans prefer answers that they don't know are wrong over those that they do know are wrong.
This doesn't make LLMs useless but certainly it should strongly inform how we use them. Frankly, you cannot trust outputs, so you have to verify. I think this is where there's a big divergence between LLM users (and non-users). Those that blindly trust and those that don't (extreme case is non-users). If you need to constantly verify AND recognize that verification is extra hard (because it is optimized to be invisible to you), it can create extra work, not less.
It really is two camps and I think it says a lot:
- "Blindly" trust
- "Trust" but verify
Wide range of opinions in these two camps, but I think it comes down to some threshold of default trust or default suspicion. Seems like if all you do is forward questions to LLMs, maybe you CAN be replaced by an LLM.
Most annoying is when people trust ChatGPT more than the experts they pay. We had a case where our client asked us about a specific optimization, and we told him it made no sense; then he asked the other company we cooperate with and got a similar response; then he asked ChatGPT and it told him it was a great idea. And guess what: he bought a $20k subscription to implement it.
If that's all the available information and you're out of time, you may as well cut the blue wire. But, pretty much any other source is automatically more trustworthy.
Recently I used o3 to plan a refactoring related to upgrading the version of C++ we are using in our product. It pointed out that we could use a tool built in to VS 2022 to make a particular change automatically based on compilation output. I was not familiar with this tool and neither were the other developers on the team.
I did confirm its accuracy myself, but also made sure to credit the model as the source of information about the tool.
If they're saying it to you, why wouldn't you assume they understand and trust what they came up with?
Do you need people to start with "I understand and believe and trust what I'm about to show you ..."?
Current systems are definitely flawed (incomplete, biased, or imagined information), but I'd pick the answers provided by Gemini over a random social post, blog page, or influencer every time.
https://blog.bismuth.sh/blog/bismuth-found-the-atop-bug
https://www.cve.org/CVERecord?id=CVE-2025-31160
The amount of bad reports curl in particular has gotten is staggering, and it's all from people with no background just latching onto a tool that won't elevate them.
Edit: Also, shoutout to one of our old professors, Brendan Dolan-Gavitt, who now works on offensive-security agents and has a highly ranked vulnerability agent, XBOW.
https://hackerone.com/xbow?type=user
So these tools are there and doing real work; it's just that there are so many people looking for a quick buck that you really have to tease the noise from the BS.
Anything for LinkedIn, a light interface that doesn't require logging in?
I pretty much stopped going to LinkedIn years ago because they started aggressively directing people to log in. I was shocked this post works without login. I don't know if that is how it has always been, or if it is a recent change, or what. It would be nice to have alternative interfaces.
In case some people are getting gated here is their post:
===
Daniel Stenberg curl CEO. Code Emitting Organism
That's it. I've had it. I'm putting my foot down on this craziness.
1. Every reporter submitting security reports on #Hackerone for #curl now needs to answer this question:
"Did you use an AI to find the problem or generate this submission?"
(and if they do select it, they can expect a stream of proof of actual intelligence follow-up questions)
2. We now ban every reporter INSTANTLY who submits reports we deem AI slop. A threshold has been reached. We are effectively being DDoSed. If we could, we would charge them for this waste of our time.
We still have not seen a single valid security report done with AI help.
---
This is the latest one that really pushed me over the limit: https://hackerone.com/reports/3125832
===
LinkedIn actually just last week started demanding I upload ID to be able to log in... which I’m not going to do, so LinkedIn content is effectively inaccessible to me even with an account.
I just opened the site with JS off on mobile. No issues.
They are still useful as code assistants. But the sort of overhype that you push for is actually detrimental to the technology's healthy development.
It's all turtles, all the way down.
What in curl makes AI-based analysis completely ineffective?
The more positive take, and I think the biggest reason, is that curl is just well made. But along the way, it most likely uses plenty of code-analysis tools: static analysis, testing, coverage, fuzzing... the classics. And I am sure these tools catch bugs before they are published. Is there an overlap between one of these tools and AI? Can one substitute for the other?
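For a concrete sense of what those classic tools look like next to an LLM, here's a minimal coverage-guided fuzzing harness. This is just a sketch using Google's atheris for Python, fuzzing the stdlib URL parser as a stand-in; it is not anything the curl project actually runs (curl's real harnesses are C/libFuzzer):

```python
import sys
import atheris  # pip install atheris; Google's coverage-guided Python fuzzer

# Instrument imports so the fuzzer sees coverage inside the parser.
with atheris.instrument_imports():
    from urllib.parse import urlparse

def TestOneInput(data: bytes) -> None:
    """One fuzz iteration: feed arbitrary bytes to the URL parser."""
    try:
        text = data.decode("utf-8")
    except UnicodeDecodeError:
        return  # not interesting input for a text-based parser
    try:
        urlparse(text)
    except ValueError:
        pass  # rejecting bad input is expected; anything else is a finding

if __name__ == "__main__":
    atheris.Setup(sys.argv, TestOneInput)
    atheris.Fuzz()
```

The overlap question then becomes concrete: a fuzzer mechanically explores inputs and only reports real, reproducible crashes, while an LLM reasons about code like a (fallible) reviewer. They look complementary, not substitutable.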
Another possibility is that curl is "weird" enough to throw off AI-based code analysis. We won't change curl for that reason, but it may be good to know.
And yeah, it may just be that AI sucks, but only looking at one side of the equation is not very productive, I think.
The article mentions spam and AI slop; it is a problem for sure, but the claim here is much stronger than "stop spamming me", it is "AI never worked". And I find that a bit surprising, because when I introduce a new category of tool on a code base I work with, AI or not, I almost always find at least a problem or two.
> Is there an overlap between one of these tools and AI, can one substitute for the other?
AI is a crude facsimile of any tool, which is both why it's useful and why it's ineffective. In the case linked from the post, it's hallucinating function names and likely hallucinating the entire patch. This hallucination would be an annoyance for the submitter using an AI tool to discover potential security vulnerabilities, and is both an annoyance and waste of time for the maintainer who was given the hallucination in bad faith.
They don't say it because the internet provides actual value.
In that sense, it has destroyed actual value as the noise crowds out the signal. AI could easily do the same to, like, all Internet communication.
And most contributions with 'AI help' tend to not follow the code practices of the code base itself, while also in general generating worse code.
Also, just like in HTTP stuff 'if curl does it, it's probably right', I also tend to think that 'if the curl team says something is bullshit, it's probably bullshit'.