https://mastodon.ar.al/@aral/114160190826192080
"Coding is like taking a lump of clay and slowly working it into the thing you want it to become. It is this process, and your intimacy with the medium and the materials you’re shaping, that teaches you about what you’re making – its qualities, tolerances, and limits – even as you make it. You know the least about what you’re making the moment before you actually start making it. That’s when you think you know what you want to make. The process, which is an iterative one, is what leads you towards understanding what you actually want to make, whether you were aware of it or not at the beginning. Design is not merely about solving problems; it’s about discovering what the right problem to solve is and then solving it. Too often we fail not because we didn’t solve a problem well but because we solved the wrong problem.
When you skip the process of creation you trade the thing you could have learned to make for the simulacrum of the thing you thought you wanted to make. Being handed a baked and glazed artefact that approximates what you thought you wanted to make removes the very human element of discovery and learning that’s at the heart of any authentic practice of creation. Where you know everything about the thing you shaped into being from when it was just a lump of clay, you know nothing about the image of the thing you received for your penny from the vending machine."
There's an upside to this sort of effort too, though. You actually need to make it crystal clear what your idea is and what it is not, because of the continuous pushback from the agentic programming tool. The moment you stop pushing back is the moment the LLM rolls over your project and more than likely destroys what was unique about your thing in the first place.
If you just chuck ideas at the external coding team/tool you often get rubbish back.
If you're good at managing the requirements and defining things well you can achieve very good things with much less cost.
Frustrated rants about deliverables aside, I don't think that's the case.
On-shoring ;
I thought "on-shoring" was already commonly used for the process that undoes off-shoring.
There are tips and tricks on how to manage them, and not knowing them will bite you later on. Like the basic thing of never asking yes or no questions, because in some cultures saying "no" isn't a thing. They'd rather just default to yes and effectively lie than admit failure.
That's not an upside in the sense of being unique to LLM-written vs human-written code. When writing it yourself, you also need to make it crystal clear; you just do that in the language of implementation.
The humane and the machinic need to meet halfway - any computing endeavor involves not only specifying something clearly enough for a computer to execute it, but also communicating to humans how to benefit from the process thus specified. And that's the proper domain not only of software engineering, but the set of related disciplines (such as the various non-coding roles you'd have in a project team - if you have any luck, that is).
But considering the incentive misalignments which easily come to dominate in this space even when multiple supposedly conscious humans are ostensibly keeping their eyes on the ball, no matter how good the language machines get at doing the job of any of those roles, I will still intuitively mistrust them exactly as I mistrust any human or organization with responsibly wielding the kind of pre-LLM power required for coordinating humans well enough to produce industrial-scale LLMs in the first place.
What's said upthread about the wordbox continually trying to revert you to the mean as you're trying to prod it with the cowtool of English into outputting something novel, rings very true to me. It's not an LLM-specific selection pressure, but one that LLMs are very likely to have 10x-1000xed as the culmination of a multigenerational gambit of sorts; one whose outset I'd place with the ever-improving immersive simulations that got the GPU supply chain going.
If you use LLMs at very high temperature with samplers that correctly keep the writing coherent (e.g. min-p, or better options like top-h or P-less decoding), then "regression to the mean" literally DOES NOT HAPPEN!
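For readers unfamiliar with the sampler being referenced, here is a rough sketch of the min-p idea (illustrative only; the function name and parameter values are made up, and real implementations differ):

```python
import numpy as np

def sample_min_p(logits, temperature=1.5, min_p=0.1, rng=None):
    """Rough sketch of min-p truncated sampling (illustrative, not a reference implementation)."""
    rng = rng or np.random.default_rng()
    scaled = logits / temperature                # high temperature flattens the distribution
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    keep = probs >= min_p * probs.max()          # keep tokens within a fraction of the top probability
    truncated = np.where(keep, probs, 0.0)
    truncated /= truncated.sum()                 # renormalise over the surviving tokens
    return rng.choice(len(probs), p=truncated)
```

The point of the truncation step is that even at high temperature, tokens far below the model's best guess are never sampled, which is the claimed defence against incoherence.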
LLMs don’t “reason” the same way humans do. They follow text predictions based on statistical relevance. So raising the temperature is more likely to produce unexecutable pseudocode than a valid but more esoteric implementation of a problem.
Code that fails to execute or compile is the default expectation for me. That's why we feed compile and runtime errors back into the model after it proposes something each time.
I'd much rather the code sometimes not work than to get stuck in infinite tool calling loops.
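As a minimal sketch of that compile-and-feed-back loop, with a bounded retry count so it can't spin forever (`ask_model`, the prompt wording, and the file path are hypothetical stand-ins, not any particular tool's API):

```python
import subprocess

MAX_ATTEMPTS = 5  # bounded so a stuck model can't loop forever

def generate_until_it_runs(ask_model, prompt, path="snippet.py"):
    """Feed compile/runtime errors back to the model until the code runs or we give up."""
    feedback = ""
    for _ in range(MAX_ATTEMPTS):
        code = ask_model(prompt + feedback)      # ask_model: hypothetical LLM call
        with open(path, "w") as f:
            f.write(code)
        result = subprocess.run(["python", path], capture_output=True, text=True, timeout=30)
        if result.returncode == 0:
            return code                          # ran cleanly; accept the proposal
        feedback = f"\n\nThe previous attempt failed with:\n{result.stderr}\nPlease fix it."
    raise RuntimeError(f"gave up after {MAX_ATTEMPTS} attempts")
```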
Same problem with image generation (lack of support for different SDE solvers, the image equivalent of LLM sampling), but they have different "coomer" tools, e.g. ComfyUI or Automatic1111.
This is no different than many things. I could grow a tree and cut it into wood but I don't. I could buy wood and nails and brackets and make furniture but I don't. I instead just fill my house/apartment with stuff already made and still feel like it's mine. I made it. I decided what's in it. I didn't have to make it all from scratch.
For me, lots of programming is the same. I just want to assemble the pieces
> When you skip the process of creation you trade the thing you could have learned to make for the simulacrum of the thing you thought you wanted to make
No, your favorite movie is not crap because the creators didn't grind their own lens. Popular and highly acclaimed games are not crap because they didn't write their own physics engine (Zelda uses Havok) or their own game engine (plenty of great games use Unreal or Unity).
So whether you’re writing the spec code out by hand or asking an LLM to do it is beside the point if the code is considered a means to an end, which is what the post above yours was getting at.
> For me, lots of programming is the same. I just want to assemble the pieces
How did those pieces come to be? By someone assembling other pieces, or by someone crafting them out of nowhere because nobody else had written them at the time?
Of course you reuse other parts and abstractions for whatever you're not working on directly, but each time you do something that hasn't been done before you can't help but engage the creative process, even if you're sitting on top of 50 years' worth of abstractions.
In other words, what a programmer essentially has is a playfield. And whether the playfield is a stack of transistors or coding agents, when you program you create something new even if it's defined and built in terms of the playfield.
I'm starting to wonder if we lose something in all this convenience. Perhaps my life is better because I cook my own food, wash my own dishes, chop my own firewood, drive my own car, write my own software. Outwardly the results look better the more I outsource but inwardly I'm not so sure.
On the subject of furnishing your house the IKEA effect seems to confirm this.
Trying to find the right level is the art. Once you learn the tools of the trade and can do abstraction, it's natural to want to abstract everything. Most programmers go through such a phase. But sometimes things really are distinct and trying to find an abstraction that does both will never be satisfactory.
When building a house there are generally a few distinct trades that do the work: bricklayers, joiners, plumbers, electricians etc. You could try to abstract them all: it's all just joining stuff together isn't it? But something would be lost. The dangers of working with electricity are completely different to working with bricks. On the other hand, if people were too specialised it wouldn't work either. You wouldn't expect a whole gang of electricians, one who can only do lighting, one who can only do sockets, one who can only do wiring etc. After centuries of experience we've found a few trades that work well together.
So, yes, it's all just abstraction, but you can go too far.
Counterpoint to my own counterpoint, will anyone actually (want to) read it?
Counterpoint to the third degree, to loop it back around: an LLM might, and I'd even argue an LLM is better at reading and ingesting long text (I'm thinking architectural documentation etc.) than humans are. Speaking for myself, I struggle to read attentively through e.g. a document; I quickly lose interest and scan-read or just focus on what I need instead.
You’re taking a bunch of pre-built abstractions written by other people on top of what the computer is actually doing and plugging them together like LEGOs. The artificial syntax that you use to move the bricks around is the thing you call coding.
The human element of discovery is still there if a robot stacks the bricks based on a different set of syntax (Natural Language), nothing about that precludes authenticity or the human element of creation.
I can do some CRUD apps where it's just data input to data store to output, with little shaping needed. Or I can do apps with lots of filters, actions, and logic based on what's inputted, which require some thought to ensure they actually solve the problem they're proposed for.
"Shaping the clay" isn't about the clay, it's about the shaping. If you have to make a ball of clay and also have to make a bridge of Lego a 175kg human can stand on, you'll learn more about Lego and building it than you will about clay.
Get someone to give you a Lego instruction sheet and you'll learn far less, because you're not shaping anymore.
Correct. However, you will probably notice that your solution to the problem doesn't feel right when the bricks that are available to you don't compose well. The AI will just happily smash together bricks, and at first glance it might seem that the task is done.
Choosing the right abstraction (bricks) is part of finding the right solution. And understanding that choice often requires exploration and contemplation. AI can't give you that.
The other day people were talking about metrics, the number of lines of code people vs LLMs could output in a given time, or the lines of code in an LLM-assisted application - using LOC as a metric for productivity.
But would an LLM ever suggest using a utility or library, or re-architecting an application, over writing its own code?
I've got a fairly simple application: it renders a table (and in future some charts) with metrics. At the moment all of that is done "by hand"; the last features were things like filtering and sorting the data. But that kind of thing can also be done by a "data table" library. Or the whole application can be thrown out in favor of a workbook (one of those data analysis tools, I'm not at home in that area at all). That'd save hundreds of lines of code + maintenance burden.
Isn't the analogy apt? You can't make a working car using a lump of clay, just a car statue, a lump of clay is already an abstraction of objects you can make in reality.
I find languages like JavaScript promote the idea of “Lego programming” because you’re encouraged to use a module for everything.
But when you start exploring ideas that haven’t been thoroughly explored already, and particularly in systems languages which are less zealous about DRY (don’t repeat yourself) methodologies, then you can feel a lot more like a sculptor.
Likewise if you’re building frameworks rather than reusing them.
So it really depends on the problems you’re solving.
For general day-to-day coding for your average 9-to-5 software engineering job, I can definitely relate to why people might think coding is basically “LEGO engineering”.
This actually leaves me with a lot more time to think, about what I want the UI to look like, how I'll market my software, and so on.
I wonder if software creation will be in a similar place. There still might be a small market for handmade software but the majority of it will be mass produced. (That is, mass produced by LLM; or software itself will mostly go away and people will get their work done via LLM instead of "apps".)
Very few people (even before LLM coding tools) actually did low level "artisanal" coding; I'd argue the vast majority of software development goes into implementing features in b2b / b2c software, building screens, logins, overviews, detail pages, etc. That requires (required?) software engineers too, and skill / experience / etc, but it was more assembling existing parts and connecting them.
Years ago there was already a feeling that a lot of software development boiled down to taping libraries together.
Or from another perspective, replace "LLM" with "outsourcing".
What you get right now is mass replicated software, just another copy of SAP/Office/Spotify/whatever.
That software is not made individually for you; you get a copy like millions of other people, and there is nearly no market anymore for individual software.
LLMs might change that. We have a bunch of internal apps now for small annoying things.
They all have their quirks, but are only accessible internally and make life a little bit easier for people working for us.
Most of them are one-shot LLM things: throw them away if you don't need them anymore, or just one-shot them again.
I'd argue that in most cases it's better to do some research and find out if a tool already exists, and if it isn't exactly how you want it... to get used to it, like one did with all other tools they used.
Skipping over that step results in a world of knock offs and product failures.
People buy Zara or H&M because they can offload the work of verifying quality to the brand.
This was a major hurdle that mass manufacturing had to overcome to achieve dominance.
Discovering the right problem to solve is not necessarily coupled to being "hands on" with the "materials you're shaping".
Obviously I am not comparing his final product with my code, I am simply pointing out how this metaphor is flawed. Having "workers" shape the material according to your plans does not reduce your agency.
That has actually been a major problem for me in the past where my core idea is too simple, and I don't give "the muse" enough time to visit because it doesn't take me long enough to build it. Anytime I have given the muse time to visit, they always have.
I find myself being able to reach for the things that my normal pragmatist code monkey self would consider out of scope - these are often not user facing things at all but things that absolutely improve code maintenance, scalability, testing/testability, or reduce side effects.
The problem is rather that programmers who work on business logic often hate programmers who are actually capable of seeing (often mathematical) patterns in the business logic that could be abstracted away; in other words: many business logic programmers hate abstract mathematical stuff.
So, in my opinion/experience this is a very self-inflicted problem that arises from the whole culture around business logic and business logic programming.
Can't speak to firmware code or complex cryptography, but my hunch is if it's in its training dataset and you know enough to guide it, it's generally pretty useful.
Presumably humanity still has room to grow and not everything is already in the training set.
This rather tells that the kind of performance optimizations that you ask for are very "standard".
It would be a lot more interesting to point out the differences and similarities yourself. But then if you wanted an interesting discussion you wouldn’t be posting trite flamebait in the first place, would you?
1826 - The Heliograph - 8+ hours
1839 - The Daguerreotype - 15–30 minutes
1841 - The Calotype - 1–2 minutes
1851 - Wet Plate Collodion - 2–20 seconds
1871 - The Dry Plate - under 1 second
So it took 45 years to perfect the process so you could take an instant image. Yet we complain after 4 years of LLMs that they're not good enough.
This is a non sequitur. Cameras have not replaced paintings, assuming this is the inference. Instead, they serve only to be an additional medium for the same concerns quoted:
The process, which is an iterative one, is what leads you
towards understanding what you actually want to make,
whether you were aware of it or not at the beginning.
Just as this is applicable to refining a software solution captured in code, just as a painter discards unsatisfactory paintings and tries again, so too is it when people say, "that picture didn't come out the way I like, let's take another one."

You wouldn't have known that, going by all the bellyaching and whining from the artists of the day.
Guess what, they got over it. You will too.
Did you imagine yourself then, as you are now, hunched over a glowing rectangle? Demanding imperiously that the world share your contempt for the sublime? Share your jaundiced view of those who pour the whole of themselves into the act of creation, so that everyone might once again be graced with wonder anew?
I hope you can find a work of art that breaks you free of your resentment.
I took the liberty of pasting it to chatgpt and asked it to write another paragraph in the same style:
Perhaps it is easier to sneer than to feel, to dull the edges of awe before it dares to wound you with longing. Cynicism is a tidy shelter: no drafts of hope, no risk of being moved. But it is also a small room, airless, where nothing grows. Somewhere beyond that glowing rectangle, the world is still doing its reckless, generous thing—colors insisting on being seen, sounds reaching out without permission, hands shaping meaning out of nothing. You could meet it again, if you chose, not as a judge but as a witness, and remember that wonder is not naïveté. It is courage, practiced quietly.
> You wouldn't have known that, going by all the bellyaching and whining from the artists of the day.
> Guess what, they got over it.
You conveniently omitted my next sentence, which contradicts your position and reads thusly:
Instead, they serve only to be an additional medium for the
same concerns quoted ...
> You will too.

This statement is assumptive and gratuitous.
Thoughtful retorts such as this are deserving of the same esteem one affords the "rubber v glue"[0] idiom.
As such, I must oblige.
0 - https://idioms.thefreedictionary.com/I%27m+rubber%2c+you%27r...
Prediction is difficult, especially of the future.
People felt (wrongly) that traditional representational forms like portraiture were threatened by photography. Happily, instead of killing any existing genres, we got some interesting new ones.
I think just as hard, I type less. I specify precisely and I review.
If anything, all we've changed is working at a higher level. The product is the same.
But these people just keep mixing things up like "wow I got a Ferrari now, watch it fly off the road!"
Yeah so you got a tools upgrade; it's faster, it's more powerful. Keep it on the road or give up driving!
We went from auto completing keywords, to auto completing symbols, to auto completing statements, to auto completing paragraphs, to auto completing entire features.
Because it happened so fast, people feel the need to rename programming every week. We're either vibe coders now, or agentic coders, or... or just programmers, hey. You know why? I write in C, I get machine code, I didn't write the machine code! It was all an abstraction!
Oh but it's not the same you say, it changes every time you ask. Yes, for now, it's still wonky and janky in places. It's just a stepping stone.
Just chill, it's programming. The tools just got even better.
You can still jump on a camel and cross the desert in 3 days. Have at it, you risk dying, but enjoy. Or you can just rent a helicopter and fly over the damn thing in a few hours. Your choice. Don't let people tell you it isn't travelling.
We're all Linus Torvalds now. We review, we merge, we send back. And if you had no idea what you were doing before, you'll still have no idea what you're doing today. You just fat-finger fewer typos today than ever before.
It's obviously not wrong to fly over the desert in a helicopter. It's a means to an end and can be completely preferable. Myself, I'd prefer to be in a passenger jet even higher above it, at a further remove. But I wouldn't think that doing so makes me someone who knows the desert the way someone who has crossed it on foot does. It's okay to prefer and utilize the power of "the next abstraction", but I think it's rather pig-headed to insist that nothing of value is lost to the people who are mourning the passing of what they gained from intimate contact with the territory. And no, it's not just about the literal typing; the advent of LLMs is not the "end of typing". That is a more reductionist failure to see the point.
We miss thinking "hard" about the small details. Maybe "hard" isn't the right adjective, but we all know the process of coding isn't just typing stuff while the mind wanders. We keep thinking about the code we're typing and the interactions between the new code and the existing stuff, and keep thinking about potential bugs and issues. (This may or may not be "hard".)
And this kind of thinking is totally different from what Linus Torvalds has to think about when reviewing a huge patch from a fellow maintainer. Linus' work is probably "harder", but it's a different kind of thinking.
You're totally right it's just tools improving. When compilers improved most people were happy, but some people who loved hand crafting asm kept doing it as a hobby. But in 99+% cases hand crafting asm is a detriment to the project even if it's fun, so if you love writing asm yourself you're either out of work, or you grudgingly accept that you might have to write Java to get paid. I think there's a place for lamenting this kind of situation.
And it's also somewhat egotistical it seems to me. I sense a pattern that many developers care more about doing what they want instead of providing value to others.
But if he enjoyed being in the forest, and _doesn't really care about lumber at all_ (Because it turns out, he never used or liked lumber, he merely produced it for his employer) then these screens won't give him any joy at all.
That's how I feel. I don't care about code, but I also don't really care about products. I mostly care about the craft. It's like solving sudokus. I don't collect solved sudokus. Once solved I don't care about them. Having a robot solve sudokus for me would be completely pointless.
> I sense a pattern that many developers care more about doing what they want instead of providing value to others.
And you'd be 100% right. I do this work because my employer provides me with enough sudokus. And I provide value back which is more than I'm compensated with. That is: I'm compensated with two things: intellectual challenge, and money. That's the relationship I have with my employer. If I could produce 10x more but I don't get the intellectual challenge? The employer isn't giving me what I want - and I'd stop doing the work.
I think "You do what the employer wants, produce what needs to be produced, and in return you get money" is a simplification that misses the literal forest for all the forestry.
This argument falls somewhere between the dishonest and the inattentive. Maybe you don’t care about the environment (which includes yourself and the people you like), or income inequality, or the continued consolidation of power in the hands of a few deranged rich people, or how your favourite artists (do you have any?) are exploited by the industry, but some of us have been banging the drum about those issues for decades. Just because you’re only noticing it now, it doesn’t mean it’s a new thing.
It’s a good thing more people are waking up and talking about those. Just because you don’t care or don’t understand doesn’t make everyone else duplicitous.
I suspect those using the tools in the best way are thinking harder than ever for this reason.
Not inherently, no. Reading it and getting a cursory understanding is easy, truly understanding what it does well, what it does poorly, what the unintended side effects might be, that's the difficult part.
In real life I've witnessed quite a few intelligent and experienced people who truly believe that they're thinking "really hard" and putting out work that's just as good as their previous, pre-AI work, and they're just not. In my experience it roughly correlates to how much time they think they're saving, those who think they're saving the most time are in fact cutting corners and putting out the sloppiest quality work.
FSD is very very good most of the time. It's so good (well, v14 is, anyway), it makes it easy to get lulled into thinking that it works all the time. So you check your watch here, check your phone there, and attend to other things, and it's all good until the car decides to turn into a curb (which almost happened to me the other day) or swerve hard into a tree (which happened to someone else).
Funny enough, much like AI, Tesla is shoving FSD down people's throats by gating Autopilot 2, a lane keeping solution that worked extremely well and is much friendlier to people who want limited autonomy here and there, behind the $99/mo FSD sub (and removing the option to pay for the package out of pocket).
And being a reasonable person I, just like the author, choose the helicopter. That's it, that's the whole problem.
I too did a lot of AI coding but when I saw the spaghetti it made, I went back to regular coding, with ask mode not agent mode as a search engine.
Or, risking to beat the metaphor to death, because over a span of time I'll cross many more deserts than I would have on a camel, and because I'll cross deserts that I wouldn't even try crossing on a camel.
So...where's your OS and SCM?
I get your point that wetware stills matter, but I think it's a bit much to contend that more than a handful of people (or everyone) is on the level of Linus Torvalds now that we have LLMs.
I didn't imply most of us can do half the things he's done. That's not right.
But even then...don't you think his insight into and ability to verify a PR far exceeds that of most devs (LLM or not)? Most of us cannot (reasonably) aspire to be like him.
Agentic coding in general only amplifies your ability (or disability).
You can totally learn how to build an OS and invest 5 years of your life doing so. The first version of Linux I'm sure was pretty shoddy. Same for a SCM.
I've been doing this for 30 years. At some point, your limit becomes how much time you're willing to invest in something.
You might have missed their point.
My typos are largely admissible.
Except Linus understands the code that is being reviewed / merged in since he already built the kernel and git by hand. You only see him vibe-coding toys but not vibe-coding in the kernel.
Today, we are going to see a gradual skill atrophy with developers over-relying on AI and once something like Claude goes down, they can't do any work at all.
The most accurate representation is that AI is going to rapidly make lots of so-called 'senior engineers' who are over-reliant and unable to detect bad AI code like juniors and interns.
I got excited about agents because I told myself it would be "just faster typing". I told myself that my value was never as a typist and that this is just the latest tool like all the tools I had eagerly added to my kit before.
But the reality is different. It's not just typing for me. It's coming up with crap. Filling in the blanks. Guessing.
The huge problem with all these tools is they don't know what they know and what they don't. So when they don't know they just guess. It's absolutely infuriating.
It's not like a Ferrari. A Ferrari does exactly what I tell it to, up to the first-order effects of how open the throttle is, what direction the wheels face, how much pressure is on the brakes etc. The second-order effects are on me, though. I have to understand what effect these pressures will have on my ultimate position on the road. A normie car doesn't give you as much control but it's less likely to come off the road.
Agents are like a teleport. You describe where you want to be and it just takes you directly there. You say "warm and sunny" and you might get to the Bahamas, but you might also get to the Sahara. So you correct: "oh no, I meant somewhere nice" and maybe you get to the Bahamas. But because you didn't travel there yourself you failed to realise what you actually got. Yeah, it's warm, sunny and nice, but now you're on an island in the middle of nowhere and have to import basically everything. So I prompt again and rewrite the entire codebase, right?
Linus Torvalds works with experts that he trusts. This is like a manic 5 year old that doesn't care but is eager to work. Saying we all get to be Torvalds is like saying we all get to experience true love because we have access to porn.
Just look at image generation. Actually, factually look at it. We went from horror colour vomit with eyes all over, to six-fingered humans, to pretty darn good now.
It's only time.
But that approach doesn't work with code, or with reasoning in general, because you would need to exponentially fine tune everything in the universe. The illusion that the AI "understands" what it is doing is lost.
Code generation progression in LLMs still carries a higher objective risk of failure, depending on the experience of the person using it, because:
1. You still cannot trust that the code works (even if it has tests), so it needs thorough human supervision and still requires ongoing maintenance.
2. Hence (1), it can cost you more money than the tokens you spent building it in the first place when it goes horribly wrong in production.
Image generation progression comes with close to no operational impact, needs far less human supervision, and can be safely done with none.
LLMs aren't a "layer of abstraction."
99% of people writing in assembly don't have to drop down into manual cobbling of machine code. People who write in C rarely drop into assembly. Java developers typically treat the JVM as "the computer." In the OSI network stack, developers writing at level 7 (application layer) almost never drop to level 5 (session layer), and virtually no one even bothers to understand the magic at layers 1 & 2. These all represent successful, effective abstractions for developers.
In contrast, unless you believe 99% of "software development" is about to be replaced with "vibe coding", it's off the mark to describe LLMs as a new layer of abstraction.
Probably not vibe coding, but most certainly with some AI automation
People are paying for it because it helps them. Who are you to whine about it?
I'm not immune to that and I catch myself sometimes being more reluctant to adapt. I'm well aware and I actively try to force myself to adapt. Because the alternative is becoming stuck in my ways and increasingly less relevant. There are a lot of much younger people around me that still have most of their careers ahead of them. They can try to whine about AI all they want for the next four decades or so but I don't think it will help them. Or they can try to deal with the fact that these tools are here now and that they need to learn to adapt to them whether they like it or not. And we are probably going to see quite some progress on the tool front. It's only been 3 years since ChatGPT had its public launch.
To address the core issue here. You can use AI or let AI use you. The difference here is about who is in control and who is setting the goals. The traditional software development team is essentially managers prompting programmers to do stuff. And now we have programmers prompting AIs to do that stuff. If you are just a middle man relaying prompts from managers to the AI, you are not adding a lot of value. That's frustrating. It should be because it means apparently you are very replaceable.
But you can turn that around. What makes that manager the best person to be prompting you? What's stopping them from skipping that entirely? Because that's your added value. Whatever you are good at and they are not is what you should be doing most of your time. The AI tools are just a means to an end to free up more time for whatever that is. Adapting means figuring that out for yourself and figuring out things that you enjoy doing that are still valuable to do.
There's plenty of work to be done. And AI tools won't lift a finger to do it until somebody starts telling them what needs doing. I see a lot of work around me that isn't getting done. A lot of people are blind to those opportunities. Hint: most of that stuff still looks like hard work. If some jerk can one shot prompt it, it isn't all that valuable and not worth your time.
Hard work usually involves thinking hard, skilling up, and figuring things out. The type of stuff the author is complaining he misses doing.
As an industry, we've been preaching the benefits of running lots of small experiments to see what works vs what doesn't, try out different approaches to implementing features, and so on. Pre-AI, lots of these ideas never got implemented because they'd take too much time for no definitive benefit.
You might spend hours thinking up cool/interesting ideas, but not have the time available to try them out.
Now, I can quickly kick off a coding agent to try out any hare-brained ideas I might come up with. The cost of doing so is very low (in terms of time and $$$), so I get to try out far more and weirder approaches than before when the costs were higher. If those ideas don't play out, fine, but I have a good enough success rate with left-field ideas to make it far more justifiable than before.
Also, it makes playing around with one-person projects a lot more practical. Like most people with partner & kids, my down time is pretty precious, and tends to come in small chunks that are largely unplannable. For example, last night I spent 10 minutes waiting in a drive-through queue - that gave me about 8 minutes to kick off the next chunk of my one-person project development via my phone, review the results, then kick off the next chunk of development. Absolutely useful to me personally, whereas last year I would've simply sat there annoyed waiting to be served.
I know some people have an "outsourcing Lego" type mentality when it comes to AI coding - it's like buying a cool Lego kit, then watching someone else assemble it for you, removing 99% of the enjoyment in the process. I get that, but I prefer to think of it in terms of being able to achieve orders of magnitude more in the time I have available, at close to zero extra cost.
How are you doing this via your phone?
Claude via browser and the Claude mobile apps function this way.
But alongside that, people do make tunnels to their personal computer and set up ways to be notified on their phone, or to get the agent unstuck from their phone when it asks for a permission.
On one side, there are people who have become a bit more productive. They are certainly not "10x," but they definitely deliver more code. However, I do not observe a substantial difference in the end-to-end delivery of production-ready software. This might be on me and my lack of capacity to exploit the tools to their full extent. But, iterating over customer requirements, CI/CD, peer reviews, and business validation takes time (and time from the most experienced people, not from the AI).
On the other hand, sometimes I observe a genuine degradation of thinking among some senior engineers (there aren’t many juniors around, by the way). Meetings, requirements, documents, or technology choices seem to be directly copy/pasted from an LLM, without a grain of original thinking, many times without insight.
The AI tools are great though. They give you an answer to the question. But many times, asking the correct question, and knowing when the answer is not correct, is the main issue.
I wonder if the productivity boost that senior engineers actually need is to profit from the accumulated knowledge found in books. I know it is an old technology and it is not fashionable, but I believe it is mostly unexploited if you consider the whole population of engineers :D
At that point an idea popped into my mind and I decided to look for similar patterns in the codebase related to the change; I found three. One was a non-bug, two were latent bugs.
Shipped a fix plus 2 fixes for bugs yet to be discovered.
You just detailed an example of where you did in fact reduce your thinking.
Managers who tell people what to get done do not think about the problem.
1. I received the ticket, as soon as I read it I had a hunch it was related to some querying ignoring a field that should be filtered by every query (thinking)
2. I give this hunch to the AI, which goes searching in the codebase in the areas I suggested the problem could be, and that's when it finds the issue and provides a fix
3. I think the problem could be spread given there is a method that removes the query filter, it could have been used in multiple places, so I ask AI to find other usages of it (thinking, this is my definition of "steering" in this context)
4. AI reports 3 more occurrences and suggests that 2 have the same bug, but one is ok
5. I go in, review the code and understand it and I agree, it doesn't have the bug (thinking)
6. AI provides the fix for all the right spots, but I said "wait, something is fishy here, there is a commit that explicitly says it was added to remove the filter, why is that?" (thinking), so I ask AI to figure out why the commit says that
7. AI proceeds to run a bunch of git-history related commands, finds some commit and then does some correlation to find another commit. This other commit introduced the change at the same time to defend against a bug in a different place
8. I understand what's going on now, I'm happy with the fix, the history suggests I am not breaking stuff. I ask AI to write a commit with detailed information about the bug and the fix based on the conversation
There is a lot of thinking involved. What's reduced is search tooling. I can be way more fuzzy: rather than `rg 'whatever'` I now say "find this and similar patterns".

> By “thinking hard,” I mean encountering a specific, difficult problem and spending multiple days just sitting with it to overcome it.
The "thinking hard" I do with an LLM is more like management thinking. Its chaotic and full of conversations and context switches. Its tiring, sure. But I'm not spending multiple days contemplating a single idea.
The "thinking hard" I do over multiple days with a single problem is more like that of a scientist / mathematician. I find myself still thinking about my problem while I'm lying in bed that night. I'm contemplating it in the shower. I have little breakthroughs and setbacks, until I eventually crack it or give up.
It's different.
I find the best uses, at least for myself, are smaller parts of my workflow where I'm not going to learn anything from doing it:
- build one to throw away: give me a quick prototype to get stakeholder feedback
- straightforward helper functions: I have the design and parameters planned, just need an implementation that I can review
- tab-completion code-gen
- if I want leads for looking into something (libraries, tools) and Googling isn't cutting it
I just changed employers recently in part due to this: dealing with someone that appears to now spend his time coercing LLM's to give the answers he wants, and becoming deaf to any contradictions. LLMs are very effective at amplifying the Reality Distortion Field for those that live in them. LLMs are replacing blog posts for this purpose.
In fact, since I don't need to do low-thinking tasks like writing boilerplate or repetitive tests, I find my thinking ratio is actually higher than when I write code normally.
That said, architectural problems have also been less difficult, for the simple fact that research and prototyping have become faster and cheaper.
And then also there’s all the non-systems stuff - what is actually feasible, what’s most valuable etc. Less “fun”, but still lots of potential for thinking.
I guess my main point is there is still lots to think about even post-LLM, but the real challenge is making it as “fun” or as easily useful as it was pre-LLM.
I think local code architecture was a very easy domain for “optimality” that is actually tractable and the joy that comes with it, and LLMs are harmful to that, but I don’t think there’s nothing to replace it with.
With AI we can set high bars and do complex original stuff. Obviously boilerplate and common patterns are slop, slapped together without much thinking. That's why you branch into new creative territory. The challenge then becomes visualising the mental map of modular pieces all working nicely together at the right time to achieve your original intent.
When I'm just programming, I spend a lot more time working through a single idea, or a single function. It's much less tiring.
Except without the reward of an intellectual high afterwards.
The point they are making is that using AI tools makes it a lot harder for them to keep up the discipline to think hard.
This may or may not be true for everyone.
Thinking is tiring and life is complicated; the tool makes it easy to slip into bad habits, and bad habits are hard to break even when you recognise it's a bad habit.
Many people are too busy/lazy/self-unaware to evaluate their behaviour to recognise a bad habit.
My observation: I've always had that "sound." I don't know or care much about what that implies. I will admit I'm now deliberately avoiding em dashes, whereas I was once an enthusiastic user of them.
With AI the pros outweigh the cons, at least at the moment, with what we collectively have figured out so far. But with that, every day I wonder if it's possible now to be more ambitious than ever and take on much bigger problems with the pretend smart assistant.
Software engineers are lazy. The good ones are, anyway.
LLMs are extremely dangerous for us because they can easily become a "be lazy button". Press it whenever you want and get that dopamine hit -- you don't even have to dive into the weeds and get dirty!
There's a fine line between "smart autocomplete" and "be lazy button". Use it to generate a boilerplate class, sure. But save some tokens and fill that class in yourself. Especially if you don't want to (at your own discretion; deadlines are a thing). But get back in those weeds, get dirty, remember the pain.
We need to constantly remind ourselves of what we are doing and why we are doing it. Failing that, we forget the how, and eventually even the why. We become the reverse centaur.
And I don't think LLMs are the next layer of abstraction -- if anything, they're preventing it. But I think LLMs can help build that next layer... it just won't look anything like the weekly "here's the greatest `.claude/.skills/AGENTS.md` setup".
If you have to write a ton of boilerplate code, then abstract away the boilerplate in code (nondeterminism is so 2025). And then reuse that abstraction. Make it robust and thoroughly tested. Put it on github. Let others join in on the fun. Iterate on it. Improve it. Maybe it'll become part of the layers of abstraction for the next generation.
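A tiny, purely illustrative example of that kind of deterministic abstraction: repeated retry/sleep boilerplate pulled into one reusable, testable decorator (the names here are made up):

```python
import functools
import time

def retry(attempts=3, delay=0.5):
    """Reusable replacement for hand-copied try/except/sleep boilerplate."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(attempts):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if attempt == attempts - 1:
                        raise                    # out of attempts: surface the real error
                    time.sleep(delay)
        return wrapper
    return decorator

@retry(attempts=5)
def fetch_report():
    ...  # whatever flaky call used to be wrapped in copy-pasted boilerplate
```

Write it once, test it once, and the next person (or model) reuses it instead of regenerating it.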
I can imagine many positions work out this way in startups
it's important to think hard sometimes, even if it means taking time off to do the thinking - you can do it without the socioeconomic pressure of a work environment
In grad school, I had what I'd call the classic version. I stayed up all night mentally working on a topology question about turning a 2-torus inside out. I already knew you can't flip a torus inside out in ordinary R^3 without self-intersection. So I kept moving and stretching the torus and the surrounding space in my head, trying to understand where the obstruction actually lived.
Sometime around sunrise, it clicked that if you allow the move to go through infinity (so effectively S^3), the inside/outside distinction I was relying on just collapses, and the obstruction I was visualizing dissolves. Birds were chirping, I hadn't slept, and nothing useful came out of it, but my internal model of space felt permanently upgraded. That's clearly "thinking hard" in that sense.
But there's another mode I've experienced that feels related but different. With a tough Code Golf problem, I might carry it around for a week. I'm not actively grinding on it the whole time, but the problem stays loaded in the background. Then suddenly, in the shower or on a walk, a compression trick or a different representation just clicks.
That doesn't feel "hard" moment to moment. It's more like keeping a problem resident in memory long enough for the right structure to surface.
One is concentrated and exhausting, the other is diffuse and slow-burning. They're different phenomenologically, but both feel like forms of deep engagement that are easy to crowd out.
It's like we had the means for production and more or less collectively decided "You know what? Actually, the bourgeoisie can have it, sure."
I feel the existential problem for a world that follows the religion of science and technology to its extreme, is that most people in STEM have no foundation in humanities, so ethical and philosophical concerns never pass through their mind.
We have signed a pact with the devil to help us through boring tasks, and no one thought to ask what we would give in exchange.
Why blame these tools if you can stop using them, and they won't have any effect on you?
In my case, my problem was often overthinking before starting to build anything. Vibe coding rescued me from that cycle. Just a few days ago, I used openclaw to build and launch a complete product via a Telegram chat. Now, I can act immediately rather than just recording an idea and potentially getting to it "someday later"
To me, that's evolutional. I am truly grateful for the advancement of AI technology and this new era. Ultimately, it is a tool you can choose to use or not, rather than something that prevents you from thinking more.
That way my 'thinker' is satiated and also challenged - Did the solution that my thinker came up with solve the problem better than the plan that the agent wrote?
Then either I acknowledge that the agent's solution was better, giving my thinker something to chew on for the next time; or my solution is better which gives the thinker a dopamine hit and gives me better code.
More importantly, thinking and building are two very different modes of operating and it can be hard to switch at moment's notice. I've definitely noticed myself getting stuck in "non-thinking building/fixing mode" at times, only realizing that I've been making steady progress into the wrong direction an hour or two in.
This happens way less with LLMs, as they provide natural time to think while they churn away at doing.
Even when thinking, they can help: They're infinitely patient rubber ducks, and they often press all the right buttons of "somebody being wrong on the Internet" too, which can help engineers that thrive in these kinds of verbal pro/contra discussions.
If you think too much you get into dead ends and you start having circular thoughts, like when you are lost in the desert and you realise you are in the same place again after two hours as you have made a great circle (because one of your legs is dominant over the other).
The thinker needs feedback from the real world. It needs constant testing of hypotheses against reality, or else you are dealing with ideology, not critical thinking. It needs other people and confrontation of ideas so the ideas stay fresh and strong and do not stagnate in isolation and personal biases.
That was the most frustrating thing before AI, a thinker could think very fast, but was limited in testing by the ability to build. Usually she had to delegate it to people that were better builders, or else she had to be builder herself, doing what she hates all the time.
But I feel better for not taking the efficient way. Having to be the one to make a decision at every step of the way, choosing the constraints and where I cut my losses on accuracy, I think has taught me more about the subject than even reading literature would’ve directly stated.
> Yes, I blame AI for this.
> I am currently writing much more, and more complicated software than ever, yet I feel I am not growing as an engineer at all. [...] (emphasis added by me)
AI is a force multiplier for accidental complexity in the Brooks sense. (https://en.wikipedia.org/wiki/No_Silver_Bullet)
Just a few days ago, I let it do something that I thought was straightforward, but it kept inserting bugs, and after a few hours of interaction it said itself it was running in circles. It took me a day to figure out what the problem was: an invariant I had given it was actually too strong, and needed to be weakened for a special case. If I had done all of it myself, I would have been faster, and discovered this quicker.
For a different task in the same project I used it to achieve a working version of something in a few days that would have taken me at least a week or two to achieve on my own. The result is not efficient enough for the long term, but for now it is good enough to proceed with other things. On the other hand, with just one (painful) week more, I would have coded a proper solution myself.
What I am looking forward to is being able to converse with the AI in terms of a hard logic. That will take care of the straightforward but technically intricate stuff that it cannot do yet properly, and it will also allow the AI to surface much quicker where a "jump of insight" is needed.
I am not sure what all of this means for us needing to think hard. Certainly thinking hard will be necessary for quite a while. I guess it comes down to when the AIs will be able to do these "jumps of insight" themselves, and for how long we can jump higher than they can.
I have to think more rigorously. I have to find ways to tie up loose ends, to verify the result efficiently, to create efficient feedback loops and define categorical success criteria.
I've thought harder about problems this last year than I have in a long time.
I too am an ex-physicist used to spending days thinking about things, but programming is a gold mine as it is adjacent to computer science. You can design a programming language (or improve an existing one), try to build a better database (or improve an existing one), or many other things that are quite hard.
The LLM is a good rubber duck for exploring the boundaries of human knowledge (or at least knowledge common enough to be in its training set). It can't really "research" on its own, and whenever you suggest something novel and plausible it gets sycophantic, but it can help you prototype ideas and implementation strategies quite fast, and it can help you explore how existing software works and tackles similar problems (or help you start working on an existing project).
Most examples of “thinking hard” mentioned in the comments sound like thinking about a lot of stuff superficially instead of thinking about one particular problem deeply, which is what OP is referring to.
Just don't use AI. The idea that you have to ship ship ship 10X ship is an illusion and a fraud. We don't really need more software.
Sure, I'm doing less technical thinking these days. But all the hard thinking is happening on feature design.
Good feature design is hard for AI. There's a lot of hidden context: customer conversations, unwritten roadmaps, understanding your users and their behaviour, and even an understanding of your existing feature set and how this new one fits in.
It's a different style of thinking, but it is hard, and a new challenge we gotta embrace imo.
The author says: “Even though the AI almost certainly won't come up with a 100% satisfying solution, the 70% solution it achieves usually hits the “good enough” mark.”
The key is to keep pushing until it gets to the 100% mark. That last 30% takes multiples longer than the first 70%, but that is where the satisfaction lies for me.
An even better analogy is the slot machine. Once you've "won" one time it's hard to break the cycle. There's so little friction to just having another spin. Everyone needs to go and see the depressed people at slot machines at least once to understand where this ends.
We can play a peaceful game and an intense one.
Now, when we think, we can always find the right level of abstraction to think at. Decades ago a programmer thought in machine code; now we think with high level concepts, maybe towards philosophy.
A good outcome always requires hard thinking. We can and we WILL think hard at an appropriate level.
If you're looking for a domain where the 70% AI solution is a total failure, that's the field. You can't rely on vibe coding because the underlying math, like Learning With Errors (LWE) or supersingular isogeny graphs, is conceptually dense and hasn't been commoditized into AI training data yet. It requires that same 'several-day-soak' thinking you loved in physics, specifically because we're trying to build systems that remain secure even against an adversary with a quantum computer. It’s one of the few areas left where the Thinker isn't just a luxury, but a hard requirement for the Builder to even begin.
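For readers who haven't met it, the Learning With Errors problem mentioned above can be stated informally as follows (a standard textbook formulation, not specific to any one scheme):

```latex
% Search-LWE, informal statement
\textbf{Search-LWE.} Fix a dimension $n$, a modulus $q$, and a secret $\mathbf{s} \in \mathbb{Z}_q^n$.
Given many samples
\[
  (\mathbf{a}_i,\, b_i), \qquad b_i \equiv \langle \mathbf{a}_i, \mathbf{s} \rangle + e_i \pmod{q},
\]
where each $\mathbf{a}_i \in \mathbb{Z}_q^n$ is uniformly random and each error $e_i$ is drawn from a
narrow distribution (e.g.\ a discrete Gaussian), recover $\mathbf{s}$. Without the errors this is plain
linear algebra; with them, the problem is conjectured to stay hard even for quantum computers, which is
why it underpins many post-quantum schemes.
```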
While this may be an unfair generalization, and apologies to those who don't feel this way, I believe STEM types like the OP are used to problem solving that's linear in the sense that the problem only exists in its field as something to be solved, and once they figure it out, they're done. The OP even described his mentality as that of a "Thinker" where he received a problem during his schooling, mulled over it for a long time, and eventually came to the answer. That's it, next problem to crack. Their whole lives revolve around this process and most have never considered anything outside it.
Even now, despite my own healthy skepticism of and distaste for AI, I am forced to respect that AI can do some things very fast. People like the OP, used to chiseling away at a problem for days, weeks, months, etc., now have that throughput time slashed. They're used to the notion of thinking long and hard about a very specific problem and finally having some output; now, code modules that are "good enough" can be cooked up in a few minutes, and if the module works the problem is solved and they need to find the next problem.
I think this is more common than most people want to admit, going back to grumblings of "gluing libraries together" being unsatisfying. The only suggestion I have for the OP is to expand what you think about. There are other comments in this thread supporting it but I think a sea change that AI is starting to bring for software folks is that we get to put more time towards enhancing module design, user experience, resolving tech debt, and so on. People being the ones writing code is still very important.
I think there's more to talk about where I do share the OP's yearning and fears (i.e., people who weren't voracious readers or English/literary majors being oneshot by the devil that is AI summaries, AI-assisted reading, etc.) but that's another story for another time.
These people are miserable to work with if you need things done quickly and can tolerate even slight imperfection.
That operating regime is, incidentally, 95% of the work we actually get paid to do.
You now have a bicycle which gets you there in a third of the time
You need to find destinations that are 3x as far away as before
Thinking hard has never been easier.
I think AI for an autodidact is a boon. Now I suddenly have a teacher who is always accessible and will teach me whatever I want, for as long as I want, exactly the way I want, and I don't have to worry about my social anxiety kicking in.
Learn advanced cryptography? AI, figure out formal verification - AI etc.
I don't think AI has affected my thinking much, but that's because I probably don't know how to use it well. Whenever AI writes a lot of code, I end up having to understand, if not change, most of it: either because I don't trust the AI, or I have to change the specification (and either it's a small change or I don't trust the AI to rewrite), or the code has a leaky abstraction, the specification was wrong, the code has a bug, the code looks like it has a bug (but the problem ends up somewhere else), I'm looking for a bug, etc. Although, more and more often, the AI saves time and thinking vs. writing the implementation myself, it doesn't let me avoid thinking about the code or treat it like a black box, due to the above.
I found that doing more physical projects helped me. Large woodworking, home improvement, projects. Built-in bookshelves, a huge butcher block bar top (with 24+ hours of mindlessly sanding), rolling workbenches, and lots of cabinets. Learning and trying to master a new skill, using new design software, filling the garage with tools...
Maybe I subconsciously picked these up because my Thinker side was starved for attention. Nice post.
The current major problem with the software industry isn't quantity, it's quality; and AI just increases the former while decreasing the latter. Instead of e.g. finding ways to reduce boilerplate, people are just using AI to generate more of it.
Except for eating and sleeping, all other human activities are fake now.
As I'm providing context I get to think about what an ideal approach would look like and often dive into a research session to analyze pros and cons of various solutions.
I don't use agents much because it's important to see how a component I just designed fits into the larger codebase. That experience provides insights on what improvements I need to make and what to build next.
The time I've spent thinking about the composability, cohesiveness, and ergonomics of the code itself have really paid off. The codebase is a joy to work in, easy to maintain and extend.
The LLMs have helped me focus my cognitive bandwidth on the quality and architecture instead of the tedious and time consuming parts.
I don’t think you can get the same satisfaction out of these tools if what you want to do is not novel.
If you are exploring the space of possibilities for which there are no clear solutions, then you have to think hard. Take on wildly more ambitious projects. Try to do something you don’t think you can do. And work with them to get there.
Personally, I am going deeper in Quantum Computing, hoping that this field will require thinkers for a long time.
If anything, we have more intractable problems needing deep creative solutions than ever before. People are dying as I write this. We’ve got mass displacement, poverty, polarization in politics. The education and healthcare systems are broken. Climate change marches on. Not to mention the social consequences of new technologies like AI (including the ones discussed in this post) that frankly no one knows what to do about.
The solution is indeed to work on bigger problems. If you can’t find any, look harder.
This I can’t relate to. For me it’s “the better I build, the better”. Building poor code fast isn’t good: it’s just creating debt to deal with in the future, or admitting I’ll toss out the quickly built thing since it won’t have longevity. When quality comes into play (not just “passed the tests”, but is something maintainable, extensible, etc), it’s hard to not employ the Thinker side along with the Builder. They aren’t necessarily mutually exclusive.
Then again, I work on things that are expected to last quite a while and aren’t disposable MVPs or side projects. I suppose if you don’t have that longevity mindset it’s easy to slip into Build-not-Think mode.
It's hard to rationalise this as billable time, but they pay for outcomes even if they act like they pay for 9-to-5. So if I'm thinking about why I like a particular abstraction, or seeing analogies to another problem, or beginning to construct dialogues with mysel(ves|f) about this, and it happens while I'm scrubbing my back (or worse), I kind of "go with the flow", so to speak.
Definitely thinking about the problem can be a lot better than actually having to produce it.
for "Thinker" brain food. (it still has the issue of not being a pragmatic use of time, but there are plenty interesting enough questions which it at least helps)
deciding whether to use that to work on multiple features on the same code base, or the same feature in multiple variations is hard
deciding whether to work on a separate project entirely while all of this is happening is hard and mentally taxing
planning all of this out for a few hours and watching it go all at once autonomously is satisfying!
I recently used the analogy of when compilers were invented. Old-school coders wrote machine code, and handled the intricacies of memory and storage and everything themselves. Then compilers took over, we all moved up an abstraction layer and started using high-level languages to code in. There was a generation of programmers who hated compilers because they wrote bad, inelegant, inefficient, programs. And for years they were right.
The hard problems now are "how can I get a set of non-deterministic, fault-prone, LLM agents to build this feature or product with as few errors as possible, with as little oversight as possible?". There's a few generic solutions, a few good approaches coming out, but plenty of scope for some hard thought in there. And a generic approach may not work for your specific project.
And also, I haven't started using AI for writing code yet. I'm shuffling toward that, with much trepidation. I ask it lots of coding questions. I make it teach me stuff. Which brings me to the point of my post:
The other day, I was looking at some Rust code and trying to work out the ownership rules. In theory, I more or less understand them. In practice, not so much. So I had Claude start quizzing me. Claude was a pretty brutal teacher -- he'd ask 4 or 5 questions, most of them solvable from what I knew already, and then 1 or 2 that introduced a new concept that I hadn't seen. I would get that one wrong and ask for another quiz. Same thing: 4 or 5 questions, using what I knew plus the thing just introduced, plus 1 or 2 with a new wrinkle.
I don't think I got 100% on any of the quizzes. Maybe the last one; I should dig up that chat and see. But I learned a ton, and had to think really hard.
Somehow, I doubt this technique will be popular. But my experience with it was very good. I recommend it. (It does make me a little nervous that whenever I work with Claude on things that I'm more familiar with, he's always a little off base on some part of it. Since this was stuff I didn't know, he could have been feeding me slop. But I don't think so; the explanations made sense and the compiler agreed, so it'd be tough to get anything completely wrong. And I was thinking through all of it; usually the bullshit slips in stealthily in the parts that don't seem to matter, but I had to work through everything.)
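For anyone curious what such a quiz looks like, here's a hypothetical reconstruction of the simplest kind of question (this is my own sketch, not one of Claude's actual prompts), showing the basic move-vs-Copy wrinkle:

    // Quiz-style snippet: which of these lines compile, and why?
    fn main() {
        let s = String::from("hello");
        let t = s;             // ownership of the String moves from `s` to `t`
        // println!("{}", s);  // error[E0382]: borrow of moved value: `s`
        println!("{}", t);     // fine: `t` now owns the String

        let n = 5;
        let m = n;             // i32 implements Copy, so `n` stays usable
        println!("{} {}", n, m);
    }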
These are also tasks the AI can succeed at rather trivially.
Better completions are not as sexy, but amid all the pretending that agents are great engineers, they're an amazing feature that's often glossed over.
Another example is automatic test generation or early correctness warnings. If the AI can suggest a basic test and I can add it with the push of a button - great. The length (and thus complexity) of tests can be configured conservatively relative to the AI of the day. Warnings can just be flags in the editor spotting obvious mistakes. Off-by-one errors, for example, which might go unnoticed for a while, would be an achievable and valuable thing to flag.
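As a rough illustration (the function and test names here are hypothetical, not from any particular tool), this is the kind of off-by-one that a short, auto-suggested test would surface immediately:

    // Hypothetical helper: intended to sum the first `n` elements of a slice.
    fn sum_first_n(xs: &[i32], n: usize) -> i32 {
        // Off-by-one: `..=n` includes index `n`, so this sums n + 1 elements
        // (and panics when n == xs.len()). The intended range is `..n`.
        xs[..=n].iter().sum()
    }

    #[cfg(test)]
    mod tests {
        use super::*;

        #[test]
        fn sums_exactly_n_elements() {
            // A basic generated test like this fails right away (6 instead of 3),
            // flagging the mistake before it can go unnoticed for a while.
            assert_eq!(sum_first_n(&[1, 2, 3, 4], 2), 3);
        }
    }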
Also, automatic debugging and feeding the raw debugger log into an AI to parse seems promising, but I've done little of it.
...And go from there - if a well-crafted codebase and an advanced model using it as context can generate short functions well, then by all means - scale that up with discretion.
These problems around the AI coding tools are not at all special - it's a classic case of taking the new tool too far too fast.
I'm more spent than before, when I would spend 2 hours wrestling with Tailwind classes or testing API endpoints manually by typing JSON shapes myself.
Shop bread and tomatoes, though, can be manufactured without any thought of who makes them; they can also be reliably manufactured without someone guiding an LLM, which is perhaps where the analogy falls down. And we always want them to be the same, but software is different in every form.
For me, Claude, Suno, Gemini and AI tools are pure bliss for creation, because they eliminate the boring grunt work. Who cares how to implement OAuth login flow, or anything that has been done 1000 times?
I do not miss doing grunt work!
A few years before this wave of AI hit, I got promoted into a tech lead/architect role. All of my mental growth since then has been learning to navigate office politics and getting the 10k ft view way more often.
I was already telling myself "I miss thinking hard" years before this promotion. When I build stuff now, I do it with a much clearer purpose. I have sincerely tried the new tools, but I'm back to just using google search if anything at all.
All I did was prove to myself the bottleneck was never writing code, but deciding why I'm doing anything at all. If you want to think so hard you stay awake at night, try existential dread. It's an important developmental milestone you'd have been forced to confront anyway even 1000 years ago.
My point is, you might want to reconsider how much you blame AI.
7 months later, after waffling on it on and off, with and without AI, I finally cracked it.
The author is not wrong though; I don't hit this as often since AI. I do miss the feeling, though.
I've found that the best way to actually think hard about something is to write about it, or to test yourself on it. Not re-read it. Not highlight it. Generate questions from the material and try to answer them from memory.
The research on active recall vs passive review is pretty clear: retrieval practice produces dramatically better long-term retention than re-reading. Karpicke & Blunt (2011) showed that practice testing outperformed even elaborative concept mapping.
So the question isn't whether AI summarizers are good or bad -- it's whether you use them as a crutch to avoid thinking, or as a tool to compress the boring parts so you can spend more time on the genuinely hard thinking.
Why solve a problem when you can import a library / scale up / use managed Kubernetes / etc.?
The menu is great and the number of problems needing deep thought seems rare.
There might be deep thought problems on the requirements side of things but less often on the technical side.
Reads the SQLite db and shit. So burn your tokens on that.
It's like saying I miss running. Get out and run then.
Seen a lot of DIY vibe coded solutions on this site and they are just waiting for a security disaster. Moltbook being a notable example.
That was just the beginning.
1. Take a pen and paper.
2. Write down what we know.
3. Write down where we want to go.
4. Write down our methods of moving forward.
5. Make changes to 2, using 4, and see if we are getting closer to 3. And course correct based on that.
I still do it a lot. LLMs act as an assist, not as a wholesale replacement.
What?
I use AI for the easy stuff.
Please read up on his life. Mainländer is the most extreme/radical Philosophical Pessimist of them all. He wrote a whole book about how you should rationally kill yourself, and then he killed himself shortly after.
https://en.wikipedia.org/wiki/Philipp_Mainl%C3%A4nder
https://dokumen.pub/the-philosophy-of-redemption-die-philoso...
Max Stirner and Mainländer would have been friends and are kindred spirits philosophically.
https://en.wikipedia.org/wiki/Bibliography_of_philosophical_...
Just don't use it. That's always an option. Perhaps your builder doesn't actually benefit from an unlimited runway detached from the cost of effort.
I tried this with physics and philosophy. I think I want to do a mix of hard but meaningful. For academic fields like that, it's impossible for a regular person to do as a hobby. Might as well just do puzzles or something.
I've resigned to mostly using it for "tip-of-my-tongue" style queries, i.e. "where do I look in the docs". Especially for Apple platforms where almost nothing is documented except for random WWDC video tutorials that lack associated text articles.
I don't trust LLMs at all. Everything they make, I end up rewriting from scratch anyway, because it's always garbage. Even when they give me ideas, they can't apply them properly. They have no standards, no principle. It's all just slop.
I hate this. I hate it because LLMs give so many others the impression of greatness, of speed, and of huge productivity gains. I must look like some grumpy hermit, stuck in their ways. But I just can't get over how LLMs all give me the major ick. Everything that comes out of them feels awful.
My standards must be unreasonably high. Extremely, unsustainably high. That must also be the reason I hardly finish any projects I've ever started, and why I can never seem to hit any deadlines at work. LLMs just can't reach my exacting, uncompromising standards. I'm surely expecting far too much of them. Far too much.
I guess I'll just keep doing it all myself. Anything else really just doesn't sit right.
It's as if I woke up in a world where half of restaurants worldwide started changing their name to McDonald's and gaslighting all their customers into thinking McDonald's is better than their "from scratch" menu.
Just don't use these agentic tools; they legitimately are weapons whose target is your brain. You can ship just as fast with autocomplete and decent workflows, and you know it.
It's weird; I don't understand why any self-respecting dev would support these companies. They are openly hostile about their plans for the software industry (and many other verticals).
I see it as a weapon being used by a sect of the ruling class to diminish the value of labor. While I'm not confident they'll be successful, I'm very disappointed in my peers who are cheering them on in that mission. My peers are obviously being tricked by promises of being able to join that class, but that's not what's going to happen.
You're going to lose that thinking muscle, and therefore the value of your labor is going to be directly correlated with the quantity and quality of tokens you can afford (or be given, or loaned!?).
Be wary!!!
To say it will free people of the boring tasks is so short-sighted...
... OK, I guess. I mean, sorry, but if that's a revelation to you, that by using a skill less you hone it less, you were clearly NOT thinking hard BEFORE you started using AI. It sure didn't help, but the problem didn't start then.