In this incident, Aurich Lawson of Ars Technica deleted the original article (which contained LLM-hallucinated quotes) instead of updating it with a correction. He then published a vague non-apology, just as large companies and politicians usually do. And now we learn that the reporter was fired, yet Ars Technica hasn't published so much as a snippet about it.
There’s something to be said for owning up to issues and being forthright about actions and consequences. In this age of indignation and fear of being perceived as weak or vulnerable due to honesty, I would’ve thought that Ars could’ve been a beacon for how these things should be talked about.
It’s sad to see Ars Technica at this level.
I don’t know about you guys, but I feel like 50% of Ars headlines are completely misleading.
They’ve had this problem for years. They will publish anything that gets them clicks. They do not care if a writer makes things up. They do not care if their headlines are misleading - in fact, that’s the point. They clearly got into the job in order to influence and manipulate people.
They’re bad people, with terrible motivations, and unchecked power. They only walk back when something really really bad happens.
Never trust an Ars headline.
It's not just Ars Technica. I would go as far as to say it's the vast majority. I work at the biggest alliance of public service media in the EU, and my role requires me to interact with editors. I don't like painting with a broad brush, but I have yet to meet a humble editor. They approach everything with an "I know better than anyone else" attitude. It's probably partly the "public" aspect of the media, but I would argue it's the editorial aspect too. The rest of the staff are often very nice and down to earth.
They did report on the article quote sourcing debacle at the time - perhaps not as quickly as some would’ve liked, but within a couple of days.
It stays as a mark, immortalizing the error, but it's a better scar than deleting and acting like it never happened.
I also want to note that this latest incident response is not typical of the Ars I'm used to.
https://www.bbc.co.uk/news/articles/cly51dzw86wo
I think they're an outlier, but still I was disappointed by Ars's response. They deleted the article and didn't detail what was wrong with it at all. Felt like a cover-up.
I think they missed a big opportunity: instead of firing the guy, they could have sat him down, stressed how not okay this was and how it harms their credibility, made sure he understood that, and had him make a proper apology. They could also have required some education, like a course on ethical reporting responsibilities or whatever.
Then, like you say, don't just hide the article but point out the mistakes and the corrections. Describe what went wrong, explain that credible reporting is their priority, and note that the author will be given further training to keep it from happening again. They could also set new policies, like requiring that, going forward, any article that uses AI for research must find a primary source for that information. In my opinion, that would build trust, not harm it.
I don't think Ars felt they had a choice but to cut off the journalist who made the mistake, especially when it was regarding a very touchy subject. It's impossible for us readers to know whether this was a single lapse of judgment or a bad habit. Regardless, the communication should have been better.
Their actions so far just make me think they're panicking and found a scapegoat to blame it on, but they're not going to put any new checks in place so it'll just happen again.
I feel bad for the guy, but I can't imagine much better safeguards beyond editors paying closer attention to how sources are referenced, and hiring more reliable people.
More than that, as a reporter on AI he should have been fully aware that AI frequently bullshits and lies. He should have known it was not reliable and that its output needs to be carefully verified by a human if you care at all about the accuracy or quality of what it gives you. His excuse that this was done in a fever-induced state of madness feels weak when it was his whole job to know that AI was not an appropriate tool for the task.
Possibly akin to a roofer taking a shortcut up there, then taking a spill? You knew better but unfortunately let the fact that you could probably get away with it with zero impact decide for you.
IIRC the hallucinations were essentially kicked off by user error initially. Or rather, let's at least say: a journalist using the best available technology should have been able to reduce the chance of an issue this big to near zero, even with language models in the loop and without human review.
(e.g. imagine Karpathy’s llm-council with extra harnessing/scripting, so even MORE expensive, but still)
Reminds me of a story I was told as an intern deploying infra changes to prod for the first time. Some guy had accidentally caused hours of downtime and was expecting to be fired, only for his boss to say, "Those hours of downtime are the price we pay to train our staff (you) to be careful. If we fire you, we throw the investment out the window."
Making up quotes for an article, with technology or not, should lead to firing.
Last year I went viral, and Benji was the first person to interview me. It was a really cool experience: we chatted via Twitter DMs, and he wrote a piece about my work - overall he did a decent job.
Then, six months later, a separate project I was adjacent to started picking up steam. I reached out to him asking if he wanted to cover us. No response.
Then TechCrunch wrote an article on our project.
I reached out to Benji again, saying "Hey, would you like to chat again, now that we have some coverage?" And he finally responded, but said he couldn't report on me because he had a directive that he could only report on things that didn't have any pre-existing coverage (?)
I thought that was rather strange, especially since we already had built up a relationship.
I don't really have a moral or lesson to this story, other than that journalism can be rather opaque sometimes.
Oh, one other tip for anyone reading this - if you ever get reached out to by journalists, communicate in writing, not over a phone call, so you can be VERY precise in your wording.
I am assuming that this comment is about as accurate as what got the journalist in question fired, for the same reasons.
OpenClaw is dangerous - https://news.ycombinator.com/item?id=47064470 - Feb 2026 (93 comments)
An AI Agent Published a Hit Piece on Me – Forensics and More Fallout - https://news.ycombinator.com/item?id=47051956 - Feb 2026 (82 comments)
Editor's Note: Retraction of article containing fabricated quotations - https://news.ycombinator.com/item?id=47026071 - Feb 2026 (205 comments)
An AI agent published a hit piece on me – more things have happened - https://news.ycombinator.com/item?id=47009949 - Feb 2026 (624 comments)
AI Bot crabby-rathbun is still going - https://news.ycombinator.com/item?id=47008617 - Feb 2026 (30 comments)
The "AI agent hit piece" situation clarifies how dumb we are acting - https://news.ycombinator.com/item?id=47006843 - Feb 2026 (125 comments)
An AI agent published a hit piece on me - https://news.ycombinator.com/item?id=46990729 - Feb 2026 (951 comments)
AI agent opens a PR write a blogpost to shames the maintainer who closes it - https://news.ycombinator.com/item?id=46987559 - Feb 2026 (750 comments)
I really don't know where the internet is headed or how any content site can survive.
I just can't see how this is sustainable since they are stealing from the sources who are now getting defunded.
Yeah, that's why I said I don't know where the internet is headed.
It says things I know to be false fairly regularly. I don't keep a log or anything, but it's left an impression that it's far from reliable.
How would you know?
In my experience, the links often contradict or fail to support the overviews.
While trying to find an example by going back through my history though, the search "linux shebang argument splitting" comes back from the AI with:
> On Linux and most Unix-like systems, the shebang line (e.g., #!/bin/bash ...) does not perform argument splitting by default. The entire string after the interpreter path is passed as a single argument to the interpreter.
(that's correct) …followed by:
> To pass multiple arguments portably on modern systems, the env command with the -S (split string) option is the standard solution.
(`env -S` isn't portable. IDK whether some subset of it is portable or not. I tend to avoid it, as it is just too complex, but let's call "is portable" an opinion.)
(edited out a bit about the splitting on Linux; I think I had a different output earlier saying it would split the args into "-S" and "the rest", but this one was fine.)
> Note: The -S option is a modern extension and may not be available
But this contradicts the "standard solution" claim above… so which is it?
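To make the contradiction concrete, here's a rough sketch of the behavior in question (the file name is made up; `env -S` is a GNU coreutils 8.30+/BSD extension, not POSIX):

    # Linux passes everything after the interpreter path as ONE argument,
    # so more than one option in a shebang fails:
    $ head -1 strict.sh
    #!/bin/sh -e -u
    $ ./strict.sh     # sh gets the single string "-e -u" and errors out

    # env -S splits that string back into separate arguments...
    $ head -1 strict.sh
    #!/usr/bin/env -S sh -e -u
    $ ./strict.sh     # env now execs: sh -e -u ./strict.sh

    # ...but only where env supports -S, which is exactly the
    # "may not be available" caveat from the overview's own note.

So "use env -S" and "-S may not be available" are both true; the overview just presents them as if they were one consistent answer.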
Of course Google gets little credit for this, since it was their own malfeasance that led to all the SEO spam in the first place (and the horrible expertsexchange-quality tech information, and the stupid recipe sites that put life stories first)... but at least now there is backpressure against some of the spammy crap.
I am also convinced that the people here reporting that the overviews are always wrong are... basically lying? Or, more likely, applying some serious negative bias to the pattern they're reporting. The overviews are wrong sometimes, yes, but surely it is like 10% of the time, not always. Probably they're biased because they're generally mad at Google, or at AI being shoved in their face in general, and I get that... but you don't make the case against Google/AI stronger by misrepresenting it; it is a stronger argument if it's accurate and resonates with everyone's experiences.
https://en.wikipedia.org/wiki/Availability_heuristic
No one remembers when AI Overview gets the answer right (it's expected to do so after all) but everyone has their favorite examples of "oh stupid AI."
What good is it if the overviews lie some percentage of the time (your own guess is 10%) and you have to search to verify that they aren't making shit up anyway? Also, those SEO spam-ridden garbage sites Google feeds you whenever you bother to look past the undependable AI summaries are mostly written by AI these days and prone to the same problem of lying, which only makes fact-checking Google's auto-bullshitter even harder.
In fact, if you switch to "Pro" mode, it frequently says the complete opposite of what it claimed in "Fast" mode while still being ~10-20% wrong. (Not to say it's not useful; there's no better way to aggregate and synthesize obscure information, but it should definitely not be relied on as a source of anything other than links for detailed follow-up.)
If this were just some random blogger, then yes, the blame is totally theirs. But this was published under the Ars Technica masthead, and there should have been someone or something double-checking the veracity of the contents.
That said, a number of Ars Technica contributors are among the best in their fields: Eric Berger, Dan Goodin, Beth Mole, Stephen Clark, and Andrew Cunningham, amongst many others, so one f'up shouldn't really impugn the entire organization.
I miss Maggie Koerth & Jon Stokes
Has Orland issued a real apology? He bylined a piece containing fraudulent quotes.
Nothing suspicious about heavy use of qualifiers in a non-apology blanket denial. Where's the Polymarket for whether this guy has a job next month?
https://www.404media.co/ars-technica-pulls-article-with-ai-f...
That’s a problem. If he really hasn’t apologized, neither he nor Ars has recognized there is a problem, which means it will happen again.
When journalists are working on a shared byline, they don't each do the same research in order to fact-check each other. There is inherently a level of trust required for collaborating like this and Edwards violated that trust.
You can say this is a failure of the editorial process for not including fact-checking, but that is an organizational issue with Ars; it's not the fault of Orland for failing to duplicate the work that he believed his coauthor did.
When Ars released a statement saying this was an isolated incident, my reaction was "they probably didn't look too hard". I suspect they did, in the end?
Well, Ars Technica has been on my ignore list for quite some time already, and this further solidifies its place there.
Pretty weird that journalism as a business still revolves around "we hired a guy to write a thing, and he's perfect. oh wait, he's not perfect? it was all his fault. we've hired a new perfect guy, so everything's good now." My dudes... there are many ways you can vet information before publishing it. I get that the business is all about "being first", but that also seems to imply "being the first to be wrong".
I feel bad for the reporters. People seem to be piling onto them like they're supposed to be superhuman, but actually they're normal people under intense pressure. People fail, it's human. But when an organization fails, it's a failure of many people, not one.
The editors are the ones ultimately responsible for what they publish. Yet they’re not taking responsibility.
“Everyone knows that Perl is designed to make easy things easy, and hard things possible, but nobody knows why it’s called Perl.”
Which of course returns 0 results on Google, as is customary for famous quotes. Imagine what he could have gotten up to with LLMs.
This whole story involved asking Claude to mine the text for quotes (it refused because the text included harassment-related content), then asking ChatGPT to explain that, and so on.
That entire ordeal probably generated more text from the chatbots than just reading the few paragraphs of the blogpost. That's why I think the "I'm sick" angle doesn't matter much. This is the same brainrot as people who go "grok what does this mean" under every twitter post. It's like a schoolchild who cheats and expends more energy cheating than just learning what they're supposed to.
But, does that mean he got slandered twice by an LLM agent or once by an agent and once by a human? Or was he technically slandered 3 times? Twice by agents and a third time by the journalist? New questions for the new agentic society.
Besides, I am sure you could tell it was just a joke but needed to be pedantic for no reason other than to feel smart?
A true "senior" AI reporter should be more skeptical of LLM output than anyone else.
Sorry, I never could resist a good dad joke
Oh right, being ill is what caused the error. I can bet that if you start verifying this author's past content, you will see similar AI slop. Either that, or he has always been ill with very little sleep.
I wonder if these are the same people who 3-4 years ago were insisting that putting 20 characters onto a blockchain (i.e. an NFT, which was just a URL) was the next multi-billion dollar business.
Sure, there is such a thing as a naysayer, but there are also people who think all forms of valid criticism are just naysaying.
The NFT protocol doesn't really care what the payload is. NFT purveyors likewise don't care what their payload is, as long as they can use the term "NFT".
NFTs are great for certain use cases (CryptoKitties is still around, I believe), but there was never a single moment I considered that owning a weird ape JPEG, even if it was somehow properly owned by me, would be worth millions of dollars or whatever. It's like trying to sell a "TCP".
That said, future blockchain applications will probably still rely on NFTs in some fashion. Just not the protocol-as-product weirdness we got for a few years there.
1. Believe LLMs outright even knowing they are frequently wrong
2. Claim that LLMs making shit up is caused by the user not prompting it correctly. I suppose in the same way that C is memory safe and only bad programmers make it not so.
You may not owe your least favorite publications better, but you owe this community better if you're participating in it.
Sorry, I just searched my comment history, maybe I missed it? Was it recent?
You probably wish everyone would post as bots do, without em-dashes of course.
- He didn't care about his story,
- he didn't care to verify his story,
- he published made-up bullshit,
- he put words in a real person's mouth,
- and he didn't even care to write the thing himself.
Why keep him and pay him? What mentality does all of the above show? What respect, both self-respect and respect for the job?
If they wanted stories from an LLM, they could pay for a subscription to one directly.
Hope this sends a message to journalist hacks who offload their writing or research to an LLM.