This has been 100% my experience. I enjoy the puzzle solving and the general joy of organizing and pulling things together. I couldn't care less about the end result meeting some business need. The fun part is in the building, in the understanding, in my own growth.
I have coworkers who get itchy when they don't see their work in production, and get super defensive in code review, but I've never really cared. The goal is to solve the puzzle. If there's a better way to solve the puzzle, I want to know. If it takes a week to get through code review, what do I care? I'm already off to the next puzzle.
Being forced to use Claude at work, it really just took away everything that was enjoyable. Instead of solving puzzles I'm wrangling a digital junior dev that doesn't really learn from its mistakes, and lies all the time.
Only after that do LLMs come in, mostly to help with the mechanical parts of implementation. In my experience it's still humans all the way down. The thinking, modeling, and responsibility for the system are human. The LLM just helps move the implementation faster.
I also suspect the segment I work in will be among the last affected by LLM-driven job displacement. My clients are small to medium companies that need tailored internal systems. They're not going to suddenly start vibe-coding their own software. What they actually need is someone to understand the business, define the model, and take responsibility for the system. LLMs help with the implementation, but that part was never the hard part of the job.
Quite a few of the projects I always wanted to do have components or dependencies I really don't want to do. And as a result, I never did them, unless they eventually became viable to do in a commercial setting where I then had some junior developer to make the annoying stuff go away.
Now with LLMs I have my own junior developer to handle the annoying stuff - and as a result, a lot of my fun stuff I was thinking about in the last 3 decades finally got done.
One example from just last week: I had a large C codebase from the 90s I always wanted to reuse, but modern compilers have a different idea of what C should look like. It's pretty obvious from the compiler errors what you need to do in each case, but I wasn't really in the mood to go through hundreds of source files manually. So I just stuck a locally running Qwen coder in YOLO mode into a container, forgot about it for a week, and came back to a compiling codebase. The diff was quick to review; there were only a handful of cases that needed manual intervention.
AI can make that process still enjoyable. For instance, I had to build a very intricate cache handler for Next.js from scratch that worked in a very specific way, serializing JSON in chunks (instead of JSON.parse-ing it all in memory). I knew the theory, but the API details and the other annoyances always made it daunting for me.
With AI I was able to think more about the theory of the problem and less about the technical implementation, which made the process much more fun and doable.
Perhaps we're just climbing the ladder of abstraction: in the early days people were building their own garbage collection mechanisms, their own binary search algorithms, etc. Once we started using libraries, we had to find the fun in some higher level.
Perhaps in the future the fun will be about solving puzzles within the realm of requirement definitions and all the intricacies that stem from that.
I came back into tech professionally over the last decade. I've always been into computers, but the first decade or so of my career was in humanitarian admin. Super interesting sector, super boring day-to-day.
Getting back into code felt like coming home. I'm good at it, I really enjoy it, the problem-solving aspect totally lights up my brain in this amazing way.
I feel exactly the same way. Totally robbed of pleasure at work, with the added kicker of mass layoffs hanging over the sector.
At least OP is sixty, I've got 25 years of work left and I really don't know what to do. I hate it all so much.
Sure, part of the fun of programming is understanding how things work, mentally taking them apart and rebuilding them in the particular way that meets your needs. But this is usually reserved for small parts of the code: self-contained libraries or architectural backbones. And at that level I think human input and direction are still important. Then there is the grunt work of gluing all the parts together, or writing some obvious logic, often for the umpteenth time; these are things I can happily delegate. And finally there are the choices you make because you think of the final product and of the experience of those who will use it. This is not a puzzle to solve at all; this is creative work, and there is no predefined result to reach. I'm happy to have tools that allow me to get there faster.
> Being forced to use Claude at work, it really just took away everything that was enjoyable. Instead of solving puzzles I'm wrangling a digital junior dev that doesn't really learn from its mistakes, and lies all the time.
Claude very much learns if you teach it and tell it to note the things you want it to remember in the CLAUDE.md files. Claude is much better than any junior and most mid-level ticket takers.
No, the company is paying its employees exactly to solve puzzles; the company just labels them problems or requirements.
And when an employee focuses on solving puzzles and enjoys it, the code naturally ends up in production, and gets forgotten because the puzzle is solved well.
But there is no “puzzle” to solving most enterprise problems as far as code - just grind.
And code doesn’t magically go from dev to production without a lot of work and coordination in between.
It's such a shame that everyone only cares about "faster" and not "better"
What a shameful mentality. Absolutely zero respect for quality or craftsmanship, only speed
My employer just like any other employer cares about keeping up with the competition and maximizing profit.
Customers don’t care about the “craftsmanship” of your code - aside from maybe the UI. But if you are a B2B company where the user is not the customer, they probably don’t even care about that.
I bet you most developers here are using the same set of Electron apps.
People can see it as a grind. But the pleasure comes in solving the meta problem instead of the one in front of you (the latter always creates brittle systems). But I agree that it can become hell if no care went into building the current systems.
I just told it to do it.
It got the “create S3 pre-signed url to upload it to” part right. But then it wrongly did the naive implementation, downloading the file and doing a bulk upsert, instead of “use the AWS extension to Postgres and upload it to S3”. Once I told it to do that, it knew what to do.
But still I cared about systems and architecture and not whether it decided to use a for loop or while loop.
Knowing that, or knowing how best to upload files to Redshift, or other data engineering practices isn't new or novel.
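For context, the "AWS extension to Postgres" the parent mentions maps to the RDS/Aurora aws_s3 extension. A hypothetical Python sketch of building that call (the table, bucket, and key names are invented, and the commenter's pipeline may have gone the other direction, via aws_s3.query_export_to_s3):

```python
def s3_import_sql(table: str, bucket: str, key: str, region: str) -> str:
    """Build the SQL that asks RDS/Aurora Postgres to pull a CSV straight
    from S3 via aws_s3.table_import_from_s3, instead of downloading the
    file in the app and upserting row by row. Names here are made up."""
    return (
        f"SELECT aws_s3.table_import_from_s3("
        f"'{table}', '', '(format csv)', "
        f"aws_commons.create_s3_uri('{bucket}', '{key}', '{region}'))"
    )

print(s3_import_sql("events", "my-bucket", "exports/events.csv", "us-east-1"))
```

Compared with the naive download-and-upsert, this pushes the bulk load into the database engine itself.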
Actually they are, but it's also true that you need to put solutions into production.
When things are put to production as soon as possible without respect to quality, we see what's happening all the time.
Bloat, performance problems, angry customers, Windows 11...
You get the idea.
If companies cared about bloat and performance you wouldn’t see web apps with dozens of dependencies, cross platform mobile apps and Electron apps.
I've just written the fifth from-scratch version of a component at work. The requirements have never changed (it's a client library for a proprietary server, which has barely ever changed). I'm the 5th developer at the company to write a version of it.
All because nobody gave engineers the breathing room to factor the solution into well-thought-out, testable, reusable components. Every version before is a spaghetti soup of code, mixing unrelated functionality into a handful of files.
No well thought out interfaces. No automated end-to-end testing, and no automated regression testing. The whole thing is dire and no managers give a fuck.
AI cannot solve for a lack of engineering culture. It can however produce trash faster than ever at these toxic shops.
On the other hand, AI doesn't care about sloppy code. I haven't done any serious web development since 2002, yet I created two decently featureful internal websites, authenticated with Amazon Cognito, without looking at a line of code. I doubt that, for the lifetime of this app, anyone will ever look at a line of code; any changes will be made using AI.
It's the best thing to happen to systems engineering.
I'm working with a team that was an early adopter of LLMs and their architecture is full of unknown-unknowns that they would have thought through if they actually wrote the code themselves. There are impedance mismatches everywhere but they can just produce more code to wrap the old code. It makes the system brittle and hard-to-maintain.
It's not a new problem, I've worked at places where people made these mistakes before. But as time goes on it seems like _most_ systems will accumulate multiple layers of slop because it's increasingly cheap to just add more mud to the ball of mud.
I also ask it a lot of questions regarding my assumptions, and so "we" (me and the AI) find better solutions than either of us could come up with on our own.
One of the three motivators he mentions is mastery. He cites examples of people spending unpaid hours of their discretionary time learning to play instruments and pursuing other hobbies. This has been very true for me as a coder.
That said, I enjoy the pursuit of mastery as a programmer less than I used to. Mastering a “simple” thing is rewarding. Trying to master much of modern software is not. Web programming rots your brain. Modern languages and software product motivations are all about gaining more money and mindshare. There is no mastering any stack; it changes too swiftly to matter. I view the necessity of using LLMs as an indictment of what working in and with information technology has become.
I wonder if the hope of mastering the agentic process is what is rejuvenating some programmers. It’s a new challenge to get good at. I wonder what Pink would say today about the role of AI in “what motivates us”.
(Edited, author name correction)
It would have been worth it if the frontier models were open weight. Right now, if you invest time in mastering tools like Claude Code or Google’s Antigravity, there is no guarantee that you won’t be removed from their ecosystems for any reason, which would make your efforts and skills useless.
IME, the tools are largely interchangeable. They are all slightly different, but the basics of prompting and the jaggedness of the frontier is more or less the same across all of them.
Switching from Codex to Claude Code is orders of magnitude simpler than switching from C# to Java or Emacs to Vim.
Growing up, the lakes in New England were filled with sailboats. There were sailing races. Now it's entirely pontoon boats. Not a sailboat to be found.
You want a pre-AI experience? Feel free to code without it. It's definitely still doable.
Rather, the issue is that they believe they are GOOD at the "journey" and at getting to the destination, and could compare their journey to others'. Another take is that they could more readily share their journey or help their peers. Some really like that part.
Now those you are comparing yourself to are not other people going through the same journey, so there is less camaraderie. Others no longer enjoy that same journey, so it feels more "lonely" in a way.
There's nothing stopping someone from still writing their own code for fun by hand, but the element of sharing the journey with others is diminishing.
I turned 59 this week. I am excited to go to work again. I use Claude every day. I check Claude. I learn new things from Claude.
I no longer need a "UI person" to get something demonstrable quickly. (I've never been a "UI guy"). I've also never been a guy coding during every waking moment of my life as that would have been disastrous for my mental health.
I am retiring in <=2 years, so I am having fun with this new associate of mine.
One pitfall I've managed to avoid in all the 36 years I've been at it is falling in love with the solution. I fall in love with the problems instead. Claude solves those problems far quicker than I ever could.
I got into “cloud” at 44, got my first job (and hopefully last) at BigTech at 46 and now I work in cloud consulting specializing in app dev leading projects at 51.
Every project I’ve done since late 2023 has involved integrating with LLMs and I usually have three terminal sessions up - one with Claude, one with Codex and one where I do command line stuff and testing.
I am motivated by the result, the design and on the system level.
A career sailor on a sailing ship who finds meaning in rigging a ship just so with a team of shipmates in order to undertake a useful journey may find his love of sailing diminished somewhat when his life's skills and passions are abruptly reduced to a historical curiosity.
Other sailors may prefer their new "easier" jobs now they don't have to climb rigging all day or caulk decking (but now they have other problems, you need far fewer of them per tonne of cargo).
And the diesel engine mechanics are presumably cock-a-hoop at their new market.
(This analogy makes no claim as to the relative utility of AI compared to diesel ships over sailing vessels).
I can continue to row as a hobby, but I've been very lucky in that my work has always been something I genuinely enjoyed. Now that it's become something that's actively burning me out, it's far harder to find time for hobbies and interests.
At work though the hype sucks the life out of the last part of the job that some people found enjoyable, because complete control is enjoyable. Personally I think work is just doing what someone else wants, rather than pleasing yourself.
This is a real thing that happens and the analogy is clearly working against you! If you paddle a canoe or rowboat on a river or lake, your experience is made MARKEDLY worse by a motorboat zooming by and scaring the fish, rocking you with wake, smelling up the place with 2-stroke fumes, etc. Even when the motorboats aren't there, the built environment that supports them is bigger and more intrusive.
AI has also exposed that many "engineers" are just "people who like fiddling with code" and that's fine in the sense that it makes it clear who are the actual engineers who are engineering solutions to real human problems and who just want to tinker with code.
Like imagine slandering a civil engineer "you just want a bridge that is safe and lasts for a century, you don't care about enjoying the journey of construction".
Secondly, it's not just about "enjoying the journey of construction", it's also about caring about the quality of the end results. Getting vibe coded software that is as stable as a "bridge that is safe and lasts for a century" is not a matter of careful engineering decisions, it's mostly a matter of luck, because you don't have the necessary oversight in the quality of the output unless you're doing extensive reviews of the generated code, at which point you greatly diminish the time you're supposedly saving.
Would you currently trust a bridge designed by a civil engineer using AI for all of their calculations?
I am having long design sessions with Claude Code and let it implement the resulting features and changes in version controlled increments.
But I am the one who writes the example games and simulations in the DSL to get a feel for where its design needs to change to improve the user experience. This way I can work on the fun and creative parts and let Claude do the footwork.
I let Claude simultaneously write code, tests and documentation for each increment, and I read it and suggest changes or ask for clarification. I find it a lot easier to dismiss an earlier design for a better idea than when I would have implemented every detail of the system myself, and I think so far the resulting product has largely benefited from this.
To me, now more than ever it is important to keep the love for programming alive by having a side project as a creative outlet, with no time pressure and my own acceptance criteria (like beautiful code or clever solutions) that would not be acceptable in a commercial environment.
- Split up the work so that you write the high-level client code, and have AI write the library/framework code.
- Write some parts of your (client) code first.
- Write a first iteration of the library/framework so that your code runs, along with tests and documentation. This gives the AI information on the desired implementation style.
- Spend time designing/defining the interface (API, DSL or some other module boundary). Discuss the design with the AI and iterate until it feels good.
- For each design increment, let AI implement, test and document its part, then adapt your client code. Or, change your code first and have AI change its interface/implementation to make it work.
- Between iterations, study at least the generated tests, and discuss the implementation.
- Keep iterations small and commit completed features before you advance to the next change.
- Keep a TODO list and don't be afraid to dismiss an earlier design if it is no longer consistent with newer decisions. (This is a variation of the one-off program, but as a design tool.)
That way, there is a clear separation of the client code and the libraries/framework layer, and you own the former and the interface to the latter, just not the low-level implementation (which is true for all 3rd party code, or all code you did not write).
Of course this will not work for you if what you prefer is writing low-level code. But in a business context, where you have the detailed domain knowledge and communicate with the product department, it is a sensible division of labour. (And you keep designing the interface to the low-level code.)
At least for me this workflow works, as I like spending time on getting the design and the boundaries right, as it results in readable and intentional (sometimes even beautiful) client code. It also keeps the creative element in the process and does not reduce the engineer to a mere manager of AI coding agents.
The tool doesn’t invalidate the craft. If anything, what we’re mourning when AI “kills the passion” might be about identity.
Many programmers spent decades defining themselves as the person who knows how to do hard things.
And it’s disorienting when that thing becomes easy.
The remaining friction is fundamentally the same as that which existed when writing code manually. The gap between what you envision for your design/solution and the tools for implementing that vision. With code, the friction encountered when implementing your vision is substantial; with AI, that friction is significantly reduced, and what's left is in areas different from what past experience would lead you to expect.
If you enjoyed that you could do something the rest of the world can't, well, yeah, some of that is somewhat gone. The "real programmers" who could time the execution of assembly instructions to the rotation speed of an early hard drive probably felt the same when compilers came around.
It has rekindled my joy however. Agentic development is so powerful but also so painful and it's the painful parts I love. The painful parts mean there is still so much to create and make better. We get to live in a world now where all of this power is on our home computers, where we can draw on all the world's resources to build in realtime, and where if we make anything cool it can propagate virally and instantly, and where there are blank spaces in every direction for individuals to innovate. Pretty cool in my view.
I’ve made the choice to not go full bore into AI as a result. I still use it to aid search or ask questions, but full on agentic coding isn’t for me, at least not for the projects I actually care about and want/need to support long-term.
I am a bit, but not much, younger than 60 and have been coding since Apple II days.
These tools are pretty close to HAL 9000, so of course GIGO, as has always been the case with computer tech.
Almost everything is in Go except an image fingerprinting API server written in Swift. The most USEFUL thing I’ve written is a Go-based APFS monitor that will help you not overfill your SSD and get painted into a corner by Time Machine.
Or maybe it still gives you the first, too. Maybe you get that from figuring out how to get the AI to produce the code you want, just like you got it from trying to get the compiler to produce the code you want.
Or maybe it depends on your personality and/or your view of your craft.
Anyway, the point is, people take pleasure in their work in different ways. Those who enjoy building with AI are probably not all lying. Some do enjoy it. And that is not a defect in them. It's fine for them to enjoy it. It's fine for you not to enjoy it.
Don't take it personally, but those are the worst kind of engineers in any real-world business context. I've watched those types of people ruin projects and companies by overengineering them to death.
On the bright side, such traits can make a positive impact in an academic or research context.
It's more like iterating on the REPL with AI in the loop, meaning the user stays in control while benefitting from AI, so real growth happens.
Interesting thing to consider, in a couple of years, will there be a differentiator between people who are good at driving company-specific AI tools and those who are generally better by having built skills the hard way ground up with benefit of AI?
Then you hopefully capture that information somehow in a future prompt, documentation, test, or other agent guardrail.
So I find fun in the knowledge engineering of it all. The meta practice of building up a knowledge base of how to solve problems in a codebase like this.
Everyone "in the know" appreciates this, but equally, in the current environment, has to play along with the AI hype machine.
It is depressing, but the true value of the current wave of LLMs in coding will become more clear over time. I think it's going to take some serious advances in architecture to make the coding assistant reliable, rather than simply scaling what we have now.
It's the programming equivalent of those tiktok videos split in half, top half being random stock videos, bottom being temple run and an AI narration of a mildly wtf reddit post.
In a way I am lucky that I work at a place where everyone gets to choose what they want to use and how they use it. So my weapons of choice are a slightly tweaked, almost vanilla zsh, vim and zed with 0 ai features. I have a number of friends/former coworkers working at places where the usage of ai is not just allowed or encouraged but mandated. And is part of their performance score - the more you slop-code, the better your performance.
Adding a JIT to Perl, inlining, SSA, fixing Raku... endless possibilities. Just fixing glibc or GCC is out, because people.
If it's the puzzle-solving metaphor, I'm taking solved puzzles as pieces to solve a more meta puzzle, and I enjoy the journey at that level.
I try to practice tracing all the way down the stack and learning about new things added to the stack, but I'm not in it for the sake of the stack or its vagaries and difficulties.
I'm more in the second group, so LLMs let me get to that part faster without getting bogged down in the "small stuff".
But I do get the people that enjoy the craftsmanship of the finer details instead.
I finally made it with Claude. I've been writing code a while so I absolutely didn't let Claude loose and I still refactored stuff by hand as sometimes that was faster than trying to get Claude to do it. I also know what the whole thing does - I read all the diffs it presented.
I wouldn't fully trust it to go off and do its thing unsupervised especially in my areas of speciality. But the scaffolding work like command line arguments - typing all that out was never my passion and I had snippets in my editor for most of it.
Perhaps if your passion is the process of doing that kind of meticulously laying out of each file then I can understand. Although the journey for me is the problem solving. Nothing much excites me about any of the boilerplate parts.
Claude can certainly take a stab at the solution too and is best when it has some kind of test case to match or validation step. To me working out what those are was always the core of the job and without them Claude can make plenty of mistakes.
It's just a tool and I use it in ways to support what I enjoy.
My work situation is way more complicated. Bigcorp organizational dynamics nullify any marginal gain anywhere.
Corporate management is full-on FOMO and pushes agents down onto teams.
Molecular biologists are still searching for the pathways that govern the expression of assets, desires, and safetymargin.
Doctors and tax accountants are still arguing over whether the forecast and anticipation functions are learned or innate. And philosophers and used car salesmen can't even agree on where these functions sit on the cause/effect axis.
Regarding the OP's dilemma. I am split. I enjoy both the process and the destination. With AI, the process is faster and less satisfying, but reaching the destination is satisfying in its own way, and enables certain professional ambitions.
I have always had other outlets for my "process" needs, and I believe I will spend more time on them in the future. Other hobbies. I love "artisanal coding" but that aspect was never really my job.
Similar to when IDEs and autocomplete became common.
That is hard if you are working in Notepad and have to write your own class import statements and write your own Maven POM or Gradle file. It’s a lot quicker in an IDE with autocomplete and auto-generated Maven POMs. And with AI it’s even faster but at the risk of lower code maintainability.
Have you heard of malicious compliance? Give the PMs what they ask for, then show them how what they've asked for is flawed. Your job as an engineer is not to just take orders blindly, it's to push for a better engineered solution. It's really not hard to show that what these PMs are asking for is stupid.
Is your username accurate? Are you currently retired? I hope you know there's a big difference between something that is functional and something that is production-ready.
You aren’t knowingly exposing those services to the internet.
FTFY. Furthermore, internal services can still be abused to get data that shouldn't be shared. For example, imagine if your imaginary API was for an HR system and could be used to determine salary information for staff.
If you aren't considering API security, you're almost bound to make major mistakes, and I'd bet money that most APIs designed and implemented in 2 days have tons of security holes.
Ever since the dawn of time I've wanted to make my own games but always ended up wasting time on trying to make engines and frameworks and shit, because no development environment worked the way I wanted to, out of the box.
I don't trust AI enough to let it generate code out of thin air yet, and it's often wrong or inefficient when it does, so I just ask it to review my existing code.
I've been using Codex that way for the last couple of months and it's helped me catch a lot of bugs that would have taken me ages on my own. All the code and ideas are still my own, and the AI's made me more productive without being lazy.
Maybe this time I will manage to finish making an actual game for once :')
I thought I enjoyed the journey more, but it turns out the destination is wild! There are still quite a few projects I keep for myself, pieces I want done in a specific way, that I now have time to do properly, while the dull stuff can get done elsewhere.
I just use the chat interface to study and do one-off scripts.
I love having 4-5 bots open, spamming the same questions, then reading the answers. For everything. It feels like I'm doing something, like video games.
It has elevated my wardrobe and music tastes but I still had to have a baseline ofc. They are way too agreeable still.
You're not missing much; don't believe the hype and liars.
The avatar raised its brows in surprise. "Well, for one thing, you do it, it's you who gets the feeling of achievement."
"Ignoring the subjective. What would be the point for those listening to it?"
"They'd know it was one of their own species, not a Mind, who created it."
"Ignoring that, too; suppose they weren't told it was by an AI, or didn't care."
"If they hadn't been told then the comparison isn't complete; information is being concealed. If they don't care, then they're unlike any group of humans I've ever encountered."
"But if you can—"
"Ziller, are you concerned that Minds—AIs, if you like—can create, or even just appear to create, original works of art?"
"Frankly, when they're the sort of original works of art that I create, yes."
"Ziller, it doesn't matter. You have to think like a mountain climber."
"Oh, do I?"
"Yes. Some people take days, sweat buckets, endure pain and cold and risk injury and—in some cases—permanent death to achieve the summit of a mountain only to discover there a party of their peers freshly arrived by aircraft and enjoying a light picnic."
"If I was one of those climbers I'd be pretty damned annoyed."
"Well, it is considered rather impolite to land an aircraft on a summit which people are at that moment struggling up to the hard way, but it can and does happen. Good manners indicate that the picnic ought to be shared and that those who arrived by aircraft express awe and respect for the accomplishment of the climbers.
"The point, of course, is that the people who spent days and sweated buckets could also have taken an aircraft to the summit if all they'd wanted was to absorb the view. It is the struggle that they crave. The sense of achievement is produced by the route to and from the peak, not by the peak itself. It is just the fold between the pages." The avatar hesitated. It put its head a little to one side and narrowed its eyes. "How far do I have to take this analogy, Cr. Ziller?"
― Iain M. Banks, Look to Windward
> It is the struggle that they crave
And yet, it's hard to shake the despondent feeling you get looking at the helicopters hovering around the peak.
I am looking for a web API I could use with curl, and limited "public/testing" API keys. Anyone?
I am very interested in Claude Code, to test its ability to code assembly (x86_64/RISC-V) and to assist with ports of C++ code to plain and simple C (I read something on HN about this which seems promising).
I've made and continue to make things that I've been thinking about for a while, but the juice was never worth the squeeze. Bluetooth troubleshooting, for example: 5 or 6 different programs will log different parts of the stack independently. I've made an app that calls all of these programs and groups their output by MAC address and the system time of the calls, to correlate and pinpoint the exact issue.
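The grouping step can be sketched in miniature. This is a hypothetical Python toy, not the commenter's actual app: the event shape, window size, and function name are all invented for illustration.

```python
from collections import defaultdict

def correlate(events, window=2.0):
    """Group log events from different tools by device MAC address, then
    cluster each device's events whose timestamps fall within `window`
    seconds of the previous one, so a single failure surfaces as one
    correlated incident. Events are (timestamp, mac, source, message)."""
    by_mac = defaultdict(list)
    for ts, mac, source, msg in events:
        # Normalize MAC case so "AA:BB..." and "aa:bb..." merge.
        by_mac[mac.lower()].append((ts, source, msg))

    incidents = {}
    for mac, evs in by_mac.items():
        evs.sort()  # chronological order per device
        clusters, current = [], [evs[0]]
        for ev in evs[1:]:
            if ev[0] - current[-1][0] <= window:
                current.append(ev)          # same incident
            else:
                clusters.append(current)    # gap too large: new incident
                current = [ev]
        clusters.append(current)
        incidents[mac] = clusters
    return incidents
```

With a disconnect in btmon and a link-loss line in journalctl half a second apart, both land in the same cluster for that device, while an unrelated event minutes later becomes its own incident.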
Now I can hear the neckbeards cracking their knuckles, getting ready to bear down on their split keyboards and tell me how the program doesn't work because AI made it, or isn't artistic enough for their liking, or whatever the current lie they comfort themselves with is. But it does work, and I've already used it to determine that some of my bad devices are really bad.
But there are bugs, you exclaim! Sure, but have you seen human-written code?? I've made my career in understanding these systems, programming languages, and the people using the systems. Troubleshooting is the fun part, and I guess lucky for me my favorite part is the thing that will continue to exist.
But what about QA? Humans are better? No. Please, guys, stop lying to yourselves. Even if there were a benefit that humans bring over AI in this arena, that lead is evaporating fast or is already gone. I think a lot of people in our industry treat their knowledge, and the ability to gatekeep by having that knowledge, as some sort of good thing. If that was the only thing you were good at, then maybe it is good that the AI is going to do the thing they excel at and leave those folks to theirs.
It can leave humans to figure out how to maybe be more human? It is funny to type that, since I have been on a computer 12 hours a day since like 1997... but there is a reason why we let calculators crunch large sums, and why manufacturing robots with multiple articulating points in their arms make incredible items at insane speeds. I guess there were probably people who liked using slide rules, were really good at it, and were pissed because their job was taken by a device that could do it better and faster. Didn't the slide rule users take the job from people who didn't have a tool like that at first but still had to do the job?
Did THEY complain about that change as well? Regardless, all of these people were left behind if all they were going to do was complain. If you only built one skill in your career, and that is writing code and nothing else, that is not the program's fault.
The journey exists for those who desire to build the knowledge that they lack and use these new incredible tools.
For everyone else, there is Hacker News and an overwhelmingly significant crowd that are ready to talk about the good ole days instead of seeing the opportunities in expanding your talents with software that helps you do your thing better than you have ever dreamed of.
I recently wanted to monitor my vehicle batteries with a cheap BLE battery monitor from AliExpress (by getting the data into HomeAssistant). I could have spent days digging through BlueZ on a Raspberry Pi, or I could use AI and have a working solution an hour later.
Yes, I gave up the chance to manually learn every layer of the stack. I’m fine with that. The goal was not to become a Bluetooth archaeologist. The goal was to solve the problem. AI got me there faster - and let me move on to my next fun project.
That sounds really cool. You should share what you used.
> The goal was not to become a Bluetooth archaeologist. The goal was to solve the problem.
I'm sympathetic to this view. It seems very pragmatic. After all, the reason we write software is not to move characters around a repo, but to solve problems, right?
But here's my concern. Like a lot of people, I started programming to solve little problems my friends and I had. Stuff like manipulating game map files and scripting FTP servers. That led me to a career that's meant building big systems that people depend on.
If everything bite-sized and self-contained is automated with LLMs, are people still going to make the jump to being able to build and maintain larger things?
To use your example of the BLE battery monitor, the AI built some automation on top of bluez, a 20+ year-old project representing thousands of hours of labor. If AI can replace 100% of programming, no-big-deal it can maintain bluez going forward, but what if it can't? In that case we've failed to nurture the cognitive skills we need to maintain the world we've built.
I find myself chatting through architectural problems with ChatGPT as I drive (using voice mode). I've continued to learn that way. I don't bother learning little things that I know won't do much for me, but I still do deep research and prototyping (which I can do 5x faster now) using AI as a supplement. I still provide AI significant guidance on the architecture/language/etc of what I want built, and that has come from my 20+ years in software.
This is the project I was talking about. I prefer using Codex day-to-day.
https://github.com/klinquist/HomeAssistant-Vehicle-Battery-M...
This is another fun project I recently built using AI: