> I don’t want to depend on something doing the work I earn money with.
> I don’t want to give up my brain and become lazy and not think for myself anymore.
There are a lot of good reasons we should be skeptical of AI and not give up on essential skills. But sometimes I want to shake these people by the shoulders. Do you drive an automatic car? Do you use a microwave? Do you buy food from a grocery store? Do you own power tools?
The entire point of civilization and society is that we are all "addicted" to technology and progress. But the invention of the plow did not, in fact, make us lazier or stop us from using our brains. We just moved on to the next problems. Maybe the Amish have it right and we should just be happy with a certain level of technology. But none of us have "lost" the ability to go backwards if we really wanted to.
You can finally ask a computer to think and solve problems, and it will! People act like this is a brave new world, but this is literally what computers were supposed to be doing for us 50 years ago! If somebody finally came out with a fusion reactor tomorrow I would half expect people to suddenly come out and say "Oh, I don't think I can support this. What about the soul of solar panels? I think cheap electricity is going to make things too easy."
> “I’ve come up with a set of rules that describe our reactions to technologies,” writes Douglas Adams in The Salmon of Doubt.
> 1. Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works.
> 2. Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.
> 3. Anything invented after you’re thirty-five is against the natural order of things.
> 1. Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works.
Television and calculators were in the world when I was born, but I never viewed them as "natural". TV always seemed to be a way to distract yourself from the world.
> 2. Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.
I was happy to get on board with the WWW, the web browser, and widespread email usage. Those were revolutionary technologies with immense value. On the other hand, I'm still not on board with text messaging, phone scrolling, or social media. If I could, I'd eliminate social media from society.
> 3. Anything invented after you’re thirty-five is against the natural order of things.
I'm over 50 and a strong believer in the value of the LLM. It's a work tool that I can use at work and put away when I'm at home (or not, depending on my mood). It's new and exciting and revolutionary and a move in the right direction for humanity.
So, although age tends to have this effect on how we see the world, and some of it is probably nothing to worry about, I think part of this wariness carries some wisdom and is trying to protect our species.
Don't grow so set in your ways that you can't learn the new. But do grow up fast enough to develop some cynicism about everything. Now that I'm in my 50s, the first is important; when I was younger, the latter was.
AI is taking problems and putting them in a drawer so we never have to think about them again. Matches de-intellectualized making a fire. A washing machine de-intellectualized doing laundry. These are now solved problems.
Our brainpower spent on them is effectively worth nothing. The only reason we need to learn to make a fire from scratch is for the intellectual satisfaction or for emergency situations. The same reason we would choose to work on the problems that AI can now solve.
It's only a loss if you think the skill and ability you are losing is intrinsically valuable, and the only thing you are going to replace it with is leisure.
I know you just wanted to poke at the analogy, but if you like hollandaise, it's one of the easiest and most rewarding sauces to make at home! Restaurant hollandaise is usually terrible.
(Though it's not as easy as a béchamel, and yet I still see people buy jarred alfredo sauces. You can literally make an amazing alfredo sauce with pantry ingredients in less time than it takes to boil the noodles! Why would anyone buy an alfredo sauce!?)
This is more or less my point, though. If people are willing to give up these incredibly high-reward, low-effort skills, how much more uphill is the battle to get people to code and process data?
Ignorance aside, jarred sauces are sorta shelf stable, and I have occasionally run out of butter and milk.
I'm fascinated by the AI bros putting hollandaise sauce and making fires on the same level as creating production software. One hopes that it is because they create only very simple software, making the analogy less invalid than it would be for more complex software. If not, the implication is that loss of the reasoning and cognitive ability needed to build foundational software like libraries and frameworks is not important to them.
The only thing that separates homo sapiens from other species is the sapience. Diminishing or atrophying one's own cognitive abilities is the same as climbing down the evolutionary ladder.
No one is arguing that everyone needs to build programs ground up from assembly. So what's the magic difference between using a framework and asking a computer to write out the for-loops for me?
What about the skill of learning itself? I would suggest that's one of the most important skills humans have evolved. The more integrated AI becomes in our societies, the more it will automate away potential opportunities for learning. I can foresee a world tightly integrated with AI where people are not only physically sedentary, but mentally as well.
As we progress further into the future, we need more educated people than ever to tackle the exponentially increasing complexities of our society. But AI presents an obstacle that many will never get past, due to how convenient it is to skip the messy work of understanding.
Also, this problem is not unique to AI. It existed before the GPTs and Claudes of the world. But it's a problem of scale, and every company on Earth right now is trying to scale AI up as fast as possible.
What exactly did AI take from me? Spending hours of research on Google and Youtube to glean little incomplete bits and pieces? Calling a yard service?
It's also clearly obvious when AI gives bad or incorrect advice - I am still trying different things and watching for the results.
Coding is an outlier example where AI can just do the work semi-competently without anyone checking it. But I think it speaks more to the nature of coding itself: coding is a means to an end and, for most people, not an actual pursuit in itself.
An opportunity for a deeper understanding of gardening? If you spend hours researching on gardening and come away with an incomplete understanding of what you were attempting to do, I'm not sure that's immediately the fault of the research available. It could be that you just didn't do a good job searching for the necessary information.
In this way, AI can be a boon. It helps you figure out what you actually want to know in the moment. But I think it would be a step too far to say that a smattering of specific questions can replace the sturdy foundation provided by a typical education--e.g. through apprenticeship, books, etc.
>It's also clearly obvious when AI gives bad or incorrect advice
Is it? Isn't this a __core__ problem that researchers around the world are trying to solve? Also, __how__ could you make such a statement unless you already possessed the knowledge ahead of time to make such a judgment? I think it's hard to know if something is bad advice by looking at just cause and effect. It could be that you just lack the understanding to put the advice into practice.
How can you? The existing resources are terrible.
> But I think it would be a step too far to say that a smattering of specific questions can replace the sturdy foundation provided by a typical education--e.g. through apprenticeship, books, etc.
I am not going to go through a college program for my own garden. And I have books! But unless you do a fair amount of reading and perform a small research project, you are not going to know how all of the plants in your specific garden, in your specific region, in your specific weather, are going to behave.
The best I could do is hire an expert - but again I am learning less by hiring it out.
> Also, __how__ could you make such a statement unless you already possessed the knowledge ahead of time to make such a judgment?
"Use X to kill the moss". It didn't kill the moss. I will now use AI to find a list of alternative things to try to kill the moss, and learn what works in my garden.
The idea that AI is going to make people stop learning is not, I think, borne out in practice. It might make some people stop researching as an activity, though.
Again, writing replacing memorization is not a good 1:1 comparison to AI replacing technical understanding. Someone still needs to understand what is written and act upon that knowledge. That requires skill and experience in the domain they're working within.
However, a person using an AI does not need to understand the underlying problem to get results. A person can ask Claude Code to write them a web app dashboard without having ever learned JS/CSS/HTML. It does not require them to have skills within a domain.
Also, we need to be honest with ourselves. Human brains did not evolve for the instant gratification of modern technology. We've already seen what technology has done to our attention spans. I am concerned over what further reliance on technology, particularly AI, will do to our brains.
This perspective is funny to me because of how much the modern web is already built around web developers refusing to use CSS and PHP. The giving up of the skills happened before the automation.
> The entire point of civilization and society is that we are all "addicted" to technology and progress.
Technology is like much of material reality, in that we can think whatever the hell we like about its various forms, especially so if we’re surrounded by it.
Technology exists today in a way that feels like it could be defining its own path in a sense, but much like oral tradition, neither are large enough concepts to describe civilisation.
Same with calculators: even though they are dirt cheap today, they are not allowed in school, and being able to do math without one is a valuable skill.
So maybe there are two groups of things: some where using them you lose nothing, and some where you lose a valuable ability.
How different do you think your life would be if the combine harvester did not exist?
None of these things allow you to turn your brain off while the machine does the work.
I still have to DRIVE the car and all the thinking that goes with that. It's not a robotaxi.
I still have to acquire and prep the food I am microwaving. It's not a replicator.
I still have to know what I want to eat before grocery shopping and prepare the food. It's not a take out restaurant.
I still have to know how to use the power tools to carefully shape something into a fine piece of furniture and not a pile of splintered firewood. Power tools can't operate on their own unless aliens are involved (see Maximum Overdrive).
These are better analogies:
Do you take a taxi or public transport? Those let you turn your brain off while someone or something does the driving work.
Do you go to a restaurant where you can pick what you want, turn your brain off and wait for a delicious (or not) meal?
Do you order takeout where you can order what you want from the comfort of your home, turn your brain off and enjoy the meal when it arrives? Then reheat the leftovers in the microwave.
Do you use a fabrication service where you send them a drawing, turn your brain off, and they ship you an assembled thing?
When AI works (and technology in general) that's kind of what it's like. You'll never perceive that you are not doing the work anymore because you won't perceive the work.
Microwaves aren't doing active problem solving, though. What the author seems to be saying is that they enjoy problem solving and find coding a rewarding and creative experience. Sure, microwaves save at-home cooks time, and some might enjoy zapping a frozen dinner, but the author is a chef who enjoys writing their own recipes and cooking from scratch. AI isn't just the microwave; it's also the chef.
> None of us have "lost" the ability to go backwards if we really wanted
This absolutely isn't true. Using Google Maps quickly makes people poorer at navigation; skills need to be practiced. The author thinks letting AI into their kitchen to cook for them will change them cognitively, make them lazy, and erode their skills. And that would be true.
What it sounds like you're getting at but never said is there might be newer skills on the other side that are even more rewarding, which may be true. But if history is any indication, there will be no shortage of folks who like things the old way and want to use their meat brains to provide bespoke goods and services that AI can't.
I love driving a manual transmission. But I also understood why it was so hard for me to find a new Jeep Wrangler with a manual transmission a few years ago.
Which is part of the reason these anti-AI screeds fall on deaf ears for me. My generation has willingly abandoned all of these legitimately useful hard skills. But there's also nothing preventing you from picking and choosing what you care about.
It's not that I work on my own car because I believe everyone should fix their own cars. But I think enough people in society should be knowledgeable and have these skills, if for no other reason than to keep mechanics and automakers and dealerships honest. I am not personally upset if you work on your own used car or take it to your dealership.
I am against the idea that everyone should somehow be against AI coding.
We should be using these capabilities to allow ourselves to work on harder problems. In science, there are a lot of tasks that require a low, but non-zero amount of intelligence and aren't really the most interesting part of science. Many of these tasks limit how much work can actually be done. Automate them, and you can dramatically increase your capabilities and focus on the actual science work.
;)
Which of these is behind a subscription paywall and owned by another party that would cut off your access immediately?
These comparisons make little sense, which is the problem with comparisons. They are soundbites from enthusiasts who don't know or understand how this technology will actually affect or shape us, but feel entitled enough to misinform the rest of us.
I'm not addicted in any way to an automatic car. I prefer an automatic car because it's easier to drive than a manual car. There have already been numerous studies into the problematic nature of AI addiction, and calling it simply "progress" dismisses the experiences of tons of people who have been harmed, up to and including dying, as a result of too much AI use.
> But the invention of the plow did not, in fact, make us lazier or stop using our brain.
No but industrial farming practices are not an unalloyed good either.
> But none of us have "lost" the ability to go backwards if we really wanted.
I mean, we kind of have in a few ways, at least insofar as the AI boom is concerned. I can't have a version of Windows that doesn't have Copilot in it. I can't have Microsoft Office without Copilot. I can't have Photoshop without generative AI features. Like, say what you will about the AI doomsayers, and yes, even this one I think is overstating it a bit? But the AI push is relentless. It's everywhere, in every product, all the time. Last time I was at Home Depot I saw an AI-powered microwave, for fuck's sake.
And, that's not to say there are no problems at which LLMs are good solutions, but it isn't this many. I use Claude to generate code, usually boiler-plate type stuff or to help me solve problems, and it's legitimately quite good. Conversely, generated images and video have always, always looked like absolute shit to me. Generated music is... okay? But as a consumer I barely have a way to choose a non-AI future if that's what I want.
> You can finally ask a computer to think and solve problems, and it will!
Sometimes. Other times it tries for a while and gives up. Other times it makes some shit up that would solve your problem, and Omnissiah be with you if you follow those instructions. Other times you argue with it for 10 goddamn minutes because it doesn't comprehend your instructions.
> If somebody finally came out with a fusion reactor tomorrow I would half expect people to suddenly come out and say "Oh, I don't think I can support this. What about the soul of solar panels? I think cheap electricity is going to make things too easy."
That is flatly ridiculous. LLMs do a lot of interesting things, that I will grant, but they are not the problem solver you're pitching them as, and certainly nothing like a Fusion reactor.
The answer to these questions could easily be no, and life is way better for it.
---
It occurred to me on my walk today that a program is not the only output of programming. The other, arguably far more important output, is the programmer.
The mental model that you, the programmer, build by writing the program.
And -- here's the million dollar question -- can we get away with removing our hands from the equation? You may know that knowledge lives deeper than "thought-level" -- much of it lives in muscle memory. You can't glance at a paragraph of a textbook, say "yeah that makes sense" and expect to do well on the exam. You need to be able to produce it.
(Many of you will remember the experience of having forgotten a phone number, i.e. not being able to speak or write it, but finding that you are able to punch it into the dialpad, because the muscle memory was still there!)
The recent trend is to increase the output called programs, but decrease the output called programmers. That doesn't exactly bode well.
See also: Preventing the Collapse of Civilization / Jonathan Blow (Thekla, Inc)
https://www.youtube.com/watch?v=ZSRHeXYDLko
---
Munksgaard 1 day ago:
Peter Naur had that realization back in 1985: https://pages.cs.wisc.edu/~remzi/Naur.pdf
The Deliverable is You! Programming as Theory Building
https://nekolucifer.substack.com/p/the-deliverable-is-you-pr...
The open question is to what degree the actual typing is necessary. I see it as proof that the mental model is sound.
The thinking produces the mental model, the mental model produces the program. In the absence of that final step, what validates the model? (Complaints from users and colleagues? ;)
Another great thread to pull on: the overlap between "moving up levels of abstraction", i.e. AI assisted programming merely continues and accelerates a trend that has been there for the entire history of programming.
I’m a strong performer on a good team at a company many people would want to work at… and I know the clock is ticking. Sooner or later, I will be too slow.
I’m not going to claim that this is the wrong way to go. It’s obviously the future, and the future doesn’t care what allenrb does or does not want. I’m somewhat hopeful that power and cooling requirements will come down by multiple factors of 10x over time, reducing the environmental damage.
The fact is, I love what I’ve been able to do “the old way” and just don’t feel the urge to move on. So it goes.
The idea was that one likes AI and the other naturally hates it.
I thought about that for a bit and decided that, like most things, if you’re any good at something the “hard way” you probably have some of both. Or at least I’m sure it’s true for me.
I LOVE that I can produce the things I want to create without spending months crafting lines of text. The “I know how to architect this, I know what a decent data model looks like, I have a good idea of where someone is likely to introduce security or scaling problems. I can pilot this plane and produce something GOOD.”
But, I really also HATE looking at the final product and forever measuring, in my head, how much of it is even mine. Which parts I haven’t thoroughly reviewed, or would have spent a week learning and didn’t, or maybe wouldn’t have accomplished correctly at all? Am I a fraud, now? I wasn’t before…
It’s a really painful trade for me.
Yes, I am much more productive having Claude Code bang out boilerplate back-end code, but honestly I always kind of enjoyed doing it. Now I'm just a micro-manager for an AI.
And honestly, how long will that last? Given that LLMs came out of nowhere to radically redefine my role from software engineer to prompt writer in just a couple years, I have every reason to believe that they're coming for my role as prompt engineer next. (As my CEO surely hopes.)
I'm just glad the timing of the great AI replacement began right when I was nearing burnout anyway.
First, my aging father insisting on navigating using his unfortunately fading memory instead of Google Maps. Some people just won't pick up technology, out of habit or spite, even if it hinders them.
Second, a quote I read here that I’ll paraphrase “you can be the best marathon runner in the world and still lose a race to a guy on a bike.” Know the race you’re racing. It often changes.
I think it’s valid and commendable to keep the old ways alive, but also potentially dangerous to not realize they’re old ways.
At work, we are in a certain kind of race. In life, we are in a certain other kind. To paraphrase a recent Brandon Sanderson talk about creativity in an era where AI can outpace and possibly soon, out-quality a professional, "The work you do on _you_ can be _the art_."
https://www.statnews.com/2024/12/16/alzheimers-disease-resea...
Then one day, on the way to an OB appointment, she almost plowed into the car in front of her while looking at her MapQuest pages, risking our unborn child.
Even after I pointed out the danger, she claimed the guy in front… He did no such thing; I saw everything from my position in the parking lot.
I bought a GPS unit “for me” and put it into my car. I just used it. If we travelled in my car she still insisted on her printed maps. I ignored them. (This was very intense.)
Then one day we took her car for a trip and I brought my GPS. And “forgot it” in her car. I claimed I would remove it “later”.
About two weeks later she gave me the look and said not to laugh. Dead serious. She then said “the GPS is ok “ and can stay in her car.
Hallelujah! The life expectancy of my wife and child just went up exponentially.
To this day, I have no idea what her hangup was. The best I could come up with: she was bad with directions. She was probably taught how to read a map, and her father probably instilled her sense of pride in being able to read one. Choosing to use a GPS felt like retroactively wasting the time she'd spent learning to use maps, and devaluing a skill she'd worked hard to learn.
I don’t care. I just wanted my family to live.
This is the KEY difference between people who are willing to adopt this technology and those who aren't.
If you are able to view your job as simply a pursuit of a craft, more power to you.
The reality is likely that over time your employer will realize you are slower than every other engineer, and that your enjoyment of the craft is actually just you being an old slow developer.
The "race" here is the race with every other developer out there. They're getting on bikes, and starting to pull away ... what are YOU going to do?
(1) I have an idea for some app, but either I feel it won't be useful enough/save me enough time to justify developing it, or I simply don't feel the problem is interesting enough to be motivated by it. In that case, a vibe coded tool is perfect. It generally does one simple thing, and I don't care about long term maintenance, because it just needs to keep doing that thing.
(2) Adding a feature to an open source project. Again, it's a case of "I want this feature, but am not willing to spend the time needed to implement it." Even a relatively simple open source project can take a day or two just to get a basic understanding of the code and where I need to make the changes. Now I can often just get a functioning vibe-coded implementation within a few hours.
(2) leaves me with some unsettling feelings about how this will affect the future of open source software. Some of the features I've implemented this way may very well be useful to other users, but I can't in good conscience just dump a vibe-coded pull request on a project and expect them to do the work of vetting it. But if I didn't have the energy to implement the change myself, I'm definitely not going to bother going through all the LLM-generated code, cleaning it up to the standards of the project, etc. Whereas before I didn't have a choice, and the idea of getting the change ready for a PR was much less daunting since I understood the problem space and solution well.
So at least for myself, I can see a future where many of the apps I use are bespoke forks of popular applications. Extrapolate that to many, many people and an interesting landscape emerges.
[0]: https://www.scottrlarson.com/publications/publication-my-fir...
- People who give in to AI do so because the technical merits suddenly became too big to ignore (even for seasoned developers who were previously against it)
- People who avoid AI center their arguments on principles and personal discomfort
Just from that, you can kind of see where this is going.
Crypto used to be the thing to hate, but that made sense, as the objective usefulness of crypto was meager. AI models were always crazy useful but prohibitively expensive. You'd need an entire team to build your models. Now you don't.
We've created a communications system bottlenecked by virality and short form text and video in which all nuance and context is stripped from everything.
This, far far more than anything AI is doing, is what's making us dumber.
> DWIM is an embodiment of the idea that the user is interacting with an agent who attempts to interpret the user's request from contextual information. Since we want the user to feel that he is conversing with the system, he should not be stopped and forced to correct himself or give additional information in situations where the correction or information is obvious. [0]
— Teitelman and his Xerox PARC colleague Larry Masinter, in 1981
And this is kind of terrifying to me, in the context of an LLM that is working completely based on What You Said and any ability to Do What You Mean has to come from murky associations in the training data.
It is more than kind of terrifying when this is then extended to scenarios requiring novel analysis and problem solving, rather than just performing a repetitive, idiomatic task for the N+1th time.
Maybe it is because I do not do much front-end design. Maybe it is because I'm a bit more diligent than your average "viber", or maybe because for me it is easier to spot a suboptimal solution or to challenge it with edge cases from experience, etc.
But these people turning their backs, not in principle, because that I fully understand, but because of underperformance?
Maybe their expectations are way out there? Maybe (most likely) it is the application domain? Maybe plainly a skill issue?
But seeing how GenAI is plowing through fields, I would not turn my back on it even if it wasn't there (yet) in my domain.
Just my two cents. No matter whether you use AI or not, I’m sure you’ll gain something.
That's how I have been using AI for years. I feel like my productivity has skyrocketed over the past year or two, and all my code is still written by hand. It's like having StackOverflow on demand. I also never really have to worry about tokens or usage limits. I don't think I have ever hit the limit on the $20 Claude plan, and I use Claude every day.
You have encountered https://en.wiktionary.org/wiki/Gell-Mann_Amnesia_effect
I think the optimal solution is min/maxing this thing. Find the AI process that minimizes unhappiness, and maximizes money.
Your capitalist side needs to read some Deming. "Your system is perfectly tuned to produce the results that you are getting." Obviously, then, if you want better results, you need to improve your system.
Also "the product" is ambiguous. Is it the overall product, like how the product sits in the market, how the user interacts with it to achieve their goals, the manufacturability of the product, etc.? That is Steve Jobs sort of focus on the product, and it is really more of a system (how does the product relate to its user, environment, etc). However, AI doesn't produce that product, nor does any individual engineer. If "the product" means "the result of a task", you don't want to optimize that. That's how you get Microsoft and enterprise products. Nothing works well together, and using it is like cutting a steak with a spoon, but it has a truckload of features.
I wrote an article about this, but honestly I don't think I really captured the totality of my feelings. I really haven't decided where I land. I'm definitely using the tools for economic purposes, and I even have some "pure-fun" side project stuff where I'm getting value from it.
Here's the article if that sounds interesting, would love to discuss the whole topic with anyone who's finding themselves of two (or more) minds on these sorts of issues: https://hermeticwoodsman.substack.com/p/why-i-let-ai-write-m...
Latecomers lack the hundreds of iterations and the experience that comes with it. The senses haven't been trained.
There's a business here. Not one I want to be in, or one without major ethical drag, but a viable one. Fend off extinction, get fit.
I think a lot of people have forgotten why we actually get paid to write code. The person who wants an automated billing system doesn't care if you hand-typed it or not, or if the CSS that would have taken 2 hours to write took 8 seconds via an AI plus 60 seconds of you tweaking a border you didn't like. They just want their billing system. And if you are the person that takes 20x longer to build it, you're going to quickly get outcompeted. Sorry.
A billing system only truly gets built once, then possibly maintained in perpetuity. This makes the advantage of building it 20x faster pointless. AI builds it in a day; will it matter 5 years from now if that billing system was instead built by hand in 20 days a long time ago? No.
The speed advantage of AI only comes into play when you have a lot of code to crank out continuously.
Do you have a need to constantly build bespoke billing systems at a rate of 1 per day? Probably not. So who cares. Take your little AI grift charging $1000/month somewhere else. It’s not needed.
I think after a while the accretions are going to get slow, and probably unmaintainable even for AI. And by that time, the code will be completely unreadable. It will probably make the code I've had to clean up, written by people who probably should not be developers, look fairly straightforward in comparison.
I'm finding that how you choose to use it makes all the difference in whether it's useful or not. I understand the reticence to jump on the hype train, and it's taken some reps to find the parts of building with AI that I don't like, learn how to navigate them, and keep it from making choices I wouldn't make or that are low quality.
> asking for a recommended tech stack
this is up to you. you can just tell it what tech stack to use. better yet, bootstrap the project yourself and give it to AI as the starting point. nobody is saying AI has to make these choices for you and you're not allowed anymore.
> I wasn’t happy with some of them because of my own experiences in the past... Even when deciding against something for a reason, Claude Code tried to push me back on the suggested track.
this kind of sounds like many human teammates at work... don't you sometimes dislike their suggestions, or fail to convince them with your arguments? the difference is that with AI you can just tell it what to do, no persuasion required.
nothing about AI prevents you from thinking about design choices, architecture, data modeling, or even the minutiae if you want to. the only thing telling AI to do those things for you is you!
Getting a feeling of "wanting to keep going" with something does not automatically make it an "addiction".
> I don’t want to depend on something doing the work I earn money with.
A tale as old as time, and a valid feeling, though not particularly helpful to dwell on since the technology will never go away and never get worse than it is right now.
> I don’t want to give up my brain and become lazy and not think for myself anymore.
Your brain will think about other important things and you don't need to become "lazy" just because a machine is doing something that used to require more effort on your part.
> I enjoy technical discussions with (human) co-workers.
So what? You can still have those discussions.
> I enjoy reading blog posts and tutorials and learning from other developers.
So what? You can still read those blogs, but the subjects might shift away from coding minutiae to other topics.
> I want to learn and grow and become better at what I am doing by trial and error and mistakes I make all by myself.
Going forward, that trial and error process will start to happen more at the product/project level rather than the source code level.
> I don’t want to be part of a trend/hype destroying our planet even faster than we already do without it.
It isn't. https://blog.andymasley.com/p/the-ai-water-issue-is-fake
It's interesting that highly flawed opinion pieces like this are so popular.
Let's compare AI to one typical 20 mile round trip commute. I asked Gemini and Claude and compared to see if the results looked good, but feel free to check.
One ~20 mile round trip commute: about 5,700 Wh in an EV, about 27,000 Wh in a gas car (the gap comes from the gas engine's lower thermal efficiency).
Compared to the EV, that's about 1,400 ChatGPT queries, 2,800 AI code completions, or 380 AI image generations.
Ordering lunch on Doordash uses the same power as days and days worth of very heavy AI usage, and that's if the dasher is driving a very efficient car. If they're driving an inefficient gas car it's like weeks of heavy AI usage.
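For what it's worth, the arithmetic behind that comparison is easy to sanity-check yourself. A minimal sketch, assuming the per-task figures implied by the comment (~4 Wh per chat query, ~2 Wh per code completion, ~15 Wh per image generation; these are rough estimates, not measured values):

```python
# Back-of-the-envelope check of the commute-vs-AI comparison above.
# All per-task energy figures are rough assumptions, not measurements.
COMMUTE_EV_WH = 5_700      # ~20 mile round trip in an EV
COMMUTE_GAS_WH = 27_000    # same trip in a gas car (lower thermal efficiency)

PER_TASK_WH = {
    "chat queries": 4.0,       # assumed Wh per ChatGPT query
    "code completions": 2.0,   # assumed Wh per AI code completion
    "image generations": 15.0, # assumed Wh per AI image generation
}

for task, wh in PER_TASK_WH.items():
    # How many of each AI task equal one commute's worth of energy
    print(f"One EV commute  ≈ {COMMUTE_EV_WH / wh:,.0f} {task}")
    print(f"One gas commute ≈ {COMMUTE_GAS_WH / wh:,.0f} {task}")
```

With those assumptions the EV commute works out to roughly 1,400 queries, 2,850 completions, or 380 image generations, in line with the numbers quoted above; the gas commute multiplies each by about 4.7.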
Ultimately what matters is where we get our power. If we are getting it from CO2 emitting sources, what we do with it after that is not relevant. Make AI memes? Order burritos? Boil spaghetti? Who cares. The solution is to replace CO2 emitting sources with cleaner sources.
I also think people are avoiding the big fat elephant: wealth inequality. The whole problem with AI that bothers people is loss of jobs and possible wage suppression. The problem isn't AI, it's inequality and the fact that our system is basically regressive at this point with wealth being actively transferred upward.
But that's a hard complicated discussion and involves confronting powerful forces. It's easier to make stuff up about AI being some uniquely bad energy or water waste when it's not. This is really what "problemism" is all about: using a contrived or exaggerated or mis-attributed problem to avoid a hard or complicated conversation.
I am pretty sure that the beef industry is far worse than data centers. Don't get me started on plastics.
Stronger arguments can be made “against” AI than energy use.
I remember the time when people insisted that they would never use a mobile phone. I remember the time when people didn't understand my presentation about the magical "internet" (8th grade, 1994).
Here are some thoughts I have from reading this article:
> The AI can’t “see” the output, so some responsive refinements were just not correct. Within one CSS rule block there were redundant declarations.
This 1,000%.
Vibe coding has its issues, and for me personally, frontend polish, responsiveness, and overall quality are the #1 most glaring of them, something that simply re-prompting often can't solve.
Even with the ability to screenshot your UI, that hasn't solved things like glitchy animations. If you want to do anything even remotely above a junior level, like scroll animations, page transitions, etc., good luck. AI will certainly try to do it for you, but inevitably it will not work perfectly and you will need to manually refine or even rewrite code. When the code base isn't yours, those rewrites are a lot less fun.
> The guilty conscience at the same time, like I was cheating. I realized that when I move on like this, my project will never truly feel like my own.
I've wrestled with this over the last year, and still do to some extent. I'm trying to shift my perspective and envision myself as a brand new developer maybe 16 or 17 years of age. Would I think this isn't my work? I doubt it. I'd probably just (correctly) assume that this is the state of the art, this is how you do it.
Unfortunately this doesn't fix a bigger problem... I just don't enjoy vibe coding as a craft. There's something special about sitting down in the morning with your coffee and taking on a difficult programming problem. You start writing some code, the solutions start to formalize in your mind; there's a strong back-and-forth effect where, as you code, the concepts crystallize further, and small wins fuel a wonderful dopamine-hit experience. Intellisense completions, successful compilations, page refreshes, etc. are now all replaced with dull moments waiting for the agent to return its response, which you then read.
> I’m curious (and a little bit scared) to see where we will go from here. I hope that in the end I can be part of a community that values craftsmanship, individuality and honest, high-quality work.
I really hope so too... But speaking honestly, I think this ship is sailing away quite quickly.
Time is money, and it always has been this way. Very few organizations can afford the luxury of time when building, designing, etc. I see no chance for this genie to go back in the bottle, and I believe it has (and will continue) to fundamentally change the nature of our work.
Over time as these models improve, there's a chance it could dramatically reduce the overall need for developers... It will start with low level teams as we're seeing already, but could expand.
I have been saying this to everyone -- what's your exit strategy?
I'm not saying you need to panic, but you need a plan for what happens if/when salaries tank dramatically. I hate to be "that guy", but in life I've found that expecting the worst isn't always a bad thing. Keep your mood up, prepare for the worst possible outcome, and be pleasantly surprised if that's not what happens.
I bet it would look something like the posts we are seeing today with developers and agentic AI.