There are a lot of tasks that follow a pretty predictable but not automatable pattern, the kind you can't complete with macros or other powerful editor tools, and LLMs do a great job at them. Getting good at LLMs can greatly increase your productivity. But unless your job is just gluing well-defined interfaces together, spinning up demos, or writing documentation for existing code, you won't get 10x. 2x is very realistic for most programming jobs, though. Some tasks would come in below 1x, especially low-level, novel, thorny logic. I'd never use one for developing new cryptographic primitives or a novel compression codec, for instance.
The phrasing you use is all marketing. When people share actual chat threads that they claim helped them, they always have the same issues I observe in my own use.
They won't do the thinking for you. You still have to know what you want out of it, you have to know your endpoint, and you have to be able to review and understand every line it produces, but they do save time. You might have tried to use them in situations where they aren't appropriate. They are text prediction programs, but they're extremely powerful text prediction programs. If you accept their unreliability and give them the right patterns and context to generate from, they can write a thousand lines of code in one go that look identical to what you would have written, and it takes 30 seconds to generate and a few minutes to review and test, compared to the half hour at least it would have taken otherwise. That can easily yield a 10x efficiency gain in places where it's really appropriate.
That said, it's not perfect by any means. To be really what I want, I need the same quality significantly faster, cheaper, and local. If my company weren't paying for it, I certainly wouldn't pay to use this thing myself. It costs between $0.05 and $1 per request, which gets expensive fast, and the much cheaper models are enough less capable that they save too little time to be worth the bother.
People have been sharing that, plus the problems they’ve solved or products they’ve made.
Though, most of these workflows are no longer really just “chats” at like chatgpt.com. A lot of that has changed.
I'm not saying you need to do it, or change anything, but… I've seen people do shit with a computer lately that 6 months ago would have seemed weird and alien.
Do you have video?
But those discussions don't crest the hill, because the "verified" work consistently either fails to address a challenge the skeptics actually encounter or has qualities the skeptics can't imagine themselves possessing.
The reality is that we all approach our work differently and carry different standards for what we accept in the work we produce.
It's clear that generative AI genuinely helps some people improve their work or ease the burden of producing it, but it should also be clear by now that the people it doesn't seem to help are not failing for lack of exposure to its use cases. We're 2+ years in and saturated with the hype and with such examples. It's just not as universally useful as some people (for whatever reason) want to believe.
It's fine if you've found a use case where LLMs aren't so helpful, like nonfiction writing that you don't want to have to constantly check for hallucinations. Obviously those exist.
But don't forget that they are calling into question whether other people really are super productive with LLMs, and that hints at a self-limiting belief.
It honestly sounds like a cult where I don't know how to read the tea leaves and believe enough. I'll pass happily and be so self-limited :)
When AI is actually useful, there won’t be a debate.
AI, even with flaws, is good enough. I'm happy to pick up the profit on the difference in our beliefs and behaviors. It's easy to simply outcompete.
Your original comment was suggesting that people who haven't found a use case that works for them just haven't looked hard enough, despite how inescapable the topic and tools are.
A hypothetical person who believes nobody could have a personally effective use case is a different person from the one you seemed to be writing about, and completely unrelated to what I was writing about.
To be fair, the distinction gets muddy in both directions because some people invest their egos into/against this kind of thing and can't just accept differences as they are. But at that point, with those people, you're just stuck in the same class of endless flamewar debate as emacs vs vim or Xbox vs Playstation and there's really no point in giving time or attention to that.
I recently asked Claude to help deserialize a JSON POST request in an ASP.NET Web API so that absent or null JSON properties get set to the coded default values. It gave me what was nearly the right answer (set NullValueHandling to Ignore) but led me down the garden path by saying I should also set DefaultValueHandling to Populate. Every time I said it wasn't working and gave it the error message, it came up with additional code to put in, which never helped. Eventually I just started toying around with it myself and realised the solution, but it was maybe half a day down the drain.
Perhaps I could have taken a different approach from the start, but still, my impression is that the tools are not what they're made out to be.
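For anyone hitting the same thing, the NullValueHandling part on its own looks roughly like this. A minimal sketch, assuming the API deserializes with Newtonsoft.Json via the Microsoft.AspNetCore.Mvc.NewtonsoftJson package; the Widget model, its properties, and their defaults are made up for illustration:

    // Hypothetical Program.cs for a minimal ASP.NET Core web API;
    // requires the Microsoft.AspNetCore.Mvc.NewtonsoftJson package.
    using Newtonsoft.Json;

    var builder = WebApplication.CreateBuilder(args);

    builder.Services.AddControllers().AddNewtonsoftJson(options =>
    {
        // With Ignore, explicit nulls in the JSON are skipped during
        // deserialization, so C# property initializers (the coded
        // defaults) are left in place. Absent properties already keep
        // their defaults regardless of this setting.
        options.SerializerSettings.NullValueHandling = NullValueHandling.Ignore;
    });

    var app = builder.Build();
    app.MapControllers();
    app.Run();

    // Made-up model: POSTing {"retries": null}, or omitting the
    // property entirely, now leaves Retries at 3.
    public class Widget
    {
        public int Retries { get; set; } = 3;
        public string Mode { get; set; } = "auto";
    }

As I understand it, DefaultValueHandling only interacts with [DefaultValue] attributes, not property initializers, so if your defaults live in the initializers the Populate suggestion was never going to do anything.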
Once you had NullValueHandling, jumping to the library documentation would have given you immediate, accurate results.
Oh man, oh man, this just gave me a great startup idea. What is the total addressable market of nutcases willing to pay $50 or more to have someone check whether they have cracked some difficult code? Perhaps even with a giant prize attached for cracking it?
It has a light inside, and if you visit at night, the reverse-cut lettering is readable on the walkway around it.
The confident incorrectness they breed is a problem.
The impact in online social spaces is already increasingly obvious.
Looking at America here in 2025, it sure seems like the confidently incorrect are confidently and successfully convincing the submorons that they are correct, despite their both being incorrect.
It sure seems like a kind of tensegrity of willful ignorances.
But definitely there are a shitton of folks who are both far too stupid to know how stupid they are and way too overconfident that they're smarter than the actual smart folks. Dunning-Kruger really exposed the majority.
And there's that quote about change only coming when the old generation dies out, not because they willingly learned and humbly levelled themselves up.
I liked the snark in Gizmodo's report:
> The smugness is, frankly, inexplicable. Even if they did successfully crack Sanborn’s code using AI (which, for the record, Sanborn says they haven’t even gotten close), what is it about asking a machine to do the work for you that generates such self-satisfaction?
Hear, hear.
https://gizmodo.com/chatbots-have-convinced-idiots-that-they...
What they lack in intelligence, however, they MORE than make up for in confidence.
"Same as it ever was." --Talking Heads
On one hand this is amazing because it increases access to good-enough writing and information. On the other hand, it raises the chances that people, regardless of educational attainment, are convinced that falsehoods are true. [1]
1: https://arstechnica.com/ai/2025/03/researchers-surprised-to-...
Which isn't to diminish their utility, but is to say it's smart to treat them like used car salesmen.
Maybe they're telling a true-sounding truth... or maybe it's just true-sounding.
Edit: That’s not to discount the value of said bullshit or to say they are never correct. Simply that correctness isn’t what they are optimized for.
The problem with asking people to interpret gobbledegook is that it might map to English in any number of ways.
I feel like that would be a strong artistic statement about the CIA and intelligence agencies in general. Do people reluctantly work to know every secret because it’s actually necessary for security? Or do some people just want to know every secret, and “security” is the handiest excuse for them to pursue that?
...
> Some years ago, Sanborn began charging $50 to review solutions, providing a speed bump to filter out wild guesses and nut cases.
Yeah I suspect he isn't that ticked off. I'm happy to take over reviewing solutions if he likes!
Especially when the downsides of being wrong are nil.
The question is: are the downsides of being wrong because of an LLM actually nil?
While this is a story of a harmless contest, I think it represents something much bigger and perhaps far less harmless.
The valuation-perception-driven hyperbole around these Dunning-Kruger machines does not help the average person trying to bat above their level.
> Sanborn, a climate-conscious friend of the Earth who lives on a small island on the Chesapeake Bay, is also appalled by the amount of energy that it takes to produce generative AI, and AI’s fabricated answers. Adding to the annoyance is that some of the would-be codebreakers are touting their collaboration with Grok 3, which is made by Elon Musk’s xAI. The same Musk who, despite good deeds with Tesla, now works for an administration determined to reverse any progress on mitigating climate change. “That’s a little twist of the ice pick,” he says.
Nothing “stealth” or “ad copy” about it.