...that confidently tells you "1+1=2.71828182845904523536"
This has largely taken me out of the loop. I give it a detailed task like I would a junior engineer, we discuss approaches and align on direction, priorities, and goals, and then it goes off for literally hours iterating on the code.
I have done about 3 months' worth of extremely complex engineering work in about a week since I started doing this.
It is a step change from trying to use the chat interface and copy/pasting snippets.
Once it’s going it writes code like a staff engineer.
There are some obscure bugs it can run into where it needs my 20 years of experience to unblock it, or to unwind things when it goes down a rabbit hole.
But it has accelerated my development 500x, and while it's iterating I'm not filling my mind with code and syntax and so on; I'm more like a tech lead or manager now. I'm in another room playing with my dog.
I have started testing Copilot for fun: my wife needs a web-based project that hasn't been maintained for a while now and is written in PHP.
I asked Copilot (Agent mode) to translate it to Rust just for the fun of it, to see how far it would get - I expected nothing out of it. I broke the tasks down into manageable chunks, directed it in design choices, and asked it to use some specific frameworks.
So far it has written 40k lines of Rust on its own, and it keeps track of what functionality is missing compared to the original project. It was impressive seeing it iterate alone for 30+ minutes at a time.
I'm no programmer, more a systems/cloud engineer, so a rewrite like this would likely have cost me >2 years of work and still ended up useless for all intents and purposes. I'm pretty sure the end result won't work on the first try, and I'll need to fix stuff manually or direct Copilot to fix it, but after two weeks of iterating 1-2h at night, I have 90% of something that would have required someone full time for at least a couple of years.
The two things I found most valuable (in other contexts too, like shorter bash and Python scripts):
1. Syntax boilerplate: if your task is specific enough, it normally gets the syntax right 99.99% of the time, so my brain can look at the actual logic rather than the commas, brackets, and (in Python's case) spaces. See the sketch after this list.
2. Documentation: I spend 95% less time looking at documentation. I don't need to comb through an entire language/package/class for the specific thing; it normally gets it right, and in the worst case I can ask it to do a refactor to the most modern standards for a specific library version X.
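To make item 1 concrete, here is a minimal sketch of the kind of boilerplate I mean. The task, the script, and its arguments are all hypothetical, just an illustration of the scaffolding the AI reliably gets right so I can focus on the one line of actual logic:

    #!/usr/bin/env bash
    # Hypothetical task: "a bash script that takes a directory and an extension
    # and counts the matching files". Everything below except the find/wc line
    # is the boilerplate I would rather not type by hand.
    set -euo pipefail

    usage() {
      echo "Usage: $0 <directory> <extension>" >&2
      exit 1
    }

    [[ $# -eq 2 ]] || usage
    dir=$1
    ext=$2

    [[ -d "$dir" ]] || { echo "error: '$dir' is not a directory" >&2; exit 1; }

    # The actual logic, freed from the quoting and bracket ceremony:
    count=$(find "$dir" -type f -name "*.${ext}" | wc -l)
    echo "Found ${count} .${ext} files in ${dir}"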
But I'm giving it a goal with a fairly distant milestone, where it needs to hit 5-10 smaller milestones along the way. Even for an advanced AI, mixing the systems languages I'm mixing in this large project means build errors, and it powers through them on its own (fixing the build, the linking, or the syntax, or adjusting to the APIs for the versions of everything I'm using). And some combinations of tech (windowing, webviews, etc.) and some architectures require it to try multiple approaches to succeed.
Sometimes it will ask something like "how am I doing so far, should I continue?" or "can you run this visual thing and report back?" But there can be 30-60 minutes between those check-ins.
It also has to scan a lot of files and find the things it needs to fill its own context, like "let me examine how you implemented that on Mac and Windows and come up with a solution for Linux."
Other times, while it's reasoning, if I notice it's going down a rabbit hole I'll point that out and set it in the right direction without interrupting it; when it moves on to the next thought it'll incorporate my suggestion without really stopping or needing another big prompt or goal.
There are plenty of times where I do need to step in and baby it with small goals through something tricky or obscure. But even in those cases I don't have to write any code directly; I just ask it to do some small xyz change and it hands it back for me to test.
I believe we are heading toward a world where AI offers easy mental shortcuts for nearly everything, similar to how cheap carbs became widespread in our diets. I do not yet know how I will deal with that. For now, I am just a kid in a candy store, enjoying the novelty.
I am curious if using AI has changed the fundamental ways in which you view "effort" and "value" from pursuing a piece of work.
Are there new kinds of challenges that come up when you're using these new AI tools?
I find the analogy to candy particularly interesting, the default comparison being that "too much of it is bad for you". Do you feel that you are putting on "cognitive weight" as a result of using AI?
Many things appear to work at first, right? Most of the time, using AI seems great, until one spends a lot of time working out lots of important details. A bunch of prompts later...
Yeah.
Sometimes it is nice to begin with something, even if it is wrong. AI is great for that.
Funny how much we can write in response to errors! Out it comes! Like that fire hose trope.
In that vein:
Proposal templates and other basic creation tasks can start with a boost.
Oh, a surprising one was distilling complex ideas into simple, direct language!
And for code, I like getting a fragment, a function, whatever, all populated and ready for me to just start working with.
Bonus for languages I want to learn more about, or just learn. There are traps here. You have to run it with that in mind.
Trust, but verify.
What did not work:
Really counting on the things. And like most everyone, I suppose, I will easily say I know better. Really, I do, but... [Insert in here.]
Filtering of various kinds.
I may add to this later.
I also have no interest in technology that impedes my skill development. I do not want to use anything that makes me a worse writer over time.
YMMV; I am answering the OP, not evangelizing. Counterarguments will be ignored.
Reminds me of the Monty Python Argument sketch.
I do limit my use today, compared to a few months ago.
Most of that is that, having successfully mapped out the use cases that make sense, I find myself doing less seeking. Where it is a net gain, go; otherwise, why bother?
I don't evaluate the nature of the gain before deciding whether AI will be worth it. The fact is, I do not know prior to some use.
I suppose the better question, given the above, is how I decide to try it.
I wait until I have both time and a purpose to gamble with. When both become available, I will make an earnest attempt. I may make several.
The process is not a whole lot different than it has always been for me and new tools and tech. One simply has to give it a go to understand value.
My use cases are in another comment.
Code completions are fine. Driving code through chat is a complete waste of time (never saves time for me; always ends up taking longer). Agentic coding (where the LLM works autonomously for half an hour) still holds some promise, but my employer isn't ready for that.
Research/queries only for very low stakes/established things (e.g., how do I achieve X in git).
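To illustrate the level of stakes I mean, here are two hypothetical queries of that kind; the commands are standard git and trivial to verify against the docs before running:

    # "How do I undo my last commit but keep the changes staged?"
    git reset --soft HEAD~1

    # "How do I list the local branches that contain a given commit?"
    git branch --contains <commit-sha>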
I've observed colleagues who have used it extensively. I've often been a late adopter of things that carry unspecified risk, and AI was already on par with Pandora's box in my estimation when the weights were first released; I am usually perceptually pretty far ahead of the curve naturally (and accurately so).
Objectively, I've found that these colleagues' attitude, mental alacrity, work product, and abstract reasoning skills have degraded significantly relative to their pre-AI work. They tried harder, got more actual work done, and were able to converse easily and quickly before. Now it's "let me get back to you," and you get emails that have quite clearly been put through an LLM, with no real reasoning happening.
What is worse is that it's happened in ways they largely do not notice, and when objective observations are pointed out, they don't take kindly to the feedback, despite the issue being not with them but with their AI use and the perceptual blind spots it takes advantage of. Many seem to be adopting the destructive behaviors common to junkies with addiction problems.
I think, given sufficient time, this trend will be recognized, but not before it causes significant adverse effects.
What are some ways in which you have seen the perceptual abilities of coworkers erode over time?
An efficiency-oriented logic makes us think that we're getting the work done "faster", and it "feels" like faster time to market, but in reality you experience a slowdown and a decline in quality...
PS: my own dependence on Wispr (a speech-to-text dictation tool) changed the way I write and interact with computers - my over-reliance meant I didn't proofread the title, and the "EXTEND" sticks out like a sore thumb...
I took a course a while back, taught by a retired military professor, on communications, and it was eye-opening. He covered what you would expect, but with a slant toward 5GW, irregular warfare, and political warfare, and heavily referenced Gershaneck's Political Warfare as a supplemental book, published by the Marine Corps University Press. (https://www.usmcu.edu/Portals/218/Political%20Warfare_web.pd...)
Communications holds a privileged position that is tied strongly to and influences our individual psychology and identity.
Reflected appraisal is how we adopt culture from our parents, and if you understand the mechanics it can be manipulated to distort that in ways that are harmful. Distortions cause psychological stress (the basis for torture), which can be used for malign influence, destructive interference with core identity, compulsion, or torture.
This, along with other structures, elements, and clustering, can cause changes where, if you aren't mindful of your environment, you don't recognize it's happened; all you have is confusion as your identity/soul gets peeled back and hollowed out. This is the basis for how cult programming, and the related involuntary hypnosis, work in practice. The same goes for the POWs of the 1950s.
There's quite a lot of material on this if you know where to look.
> Some ways in which you have seen the perceptual abilities of coworkers erode over time?
This is going to sound very subjective, but their overall cognitive speed has decreased dramatically. When you learn a skill to the point where it's automatic, you can get a good flow going from a to b to c, pivot to e, etc., with no delays; they now struggle with each step, each connection, each piece of reasoning. It's almost like there's interference, but it's persistent and consistent; and they either don't notice, or they get defensive.
When they need to make a determination or a design decision, they miss the pivots and fail to account for things, leading to significant mistakes that would never have happened before.
The solutions they come up with are for the most part no longer creative. They used to take functional structures they had collected and knew worked well, and repurpose them, or apply them in quite creative ways to a problem that they defined. Now they largely don't, and their problem definitions are only slightly better than the LLM's at this point; they used to be much better.
A lot of due diligence is also no longer being done. When asked about specific things, instead of being able to answer, they get confused, sometimes even incoherent; behaviors that seem very dementia-like, but these are guys almost fresh out of college, in their mid-20s, and they aren't on drugs (we are all tested regularly).
There are ways people can be blinded, where they will adopt a misleading stance based upon structure alone (without any reasoning); this happens even to very intelligent people.
I'm of the opinion that the inconsistency of the LLM's responses, which are treated as communication, is gradually damaging people. Incidentally, people who have had a lot of exposure have also stopped taking on the more difficult or challenging tasks.
The core question is not “are we degrading,” but rather: are we thinking better with better tools? Personally, I use AI only to reduce boilerplate and explore alternatives — the decision-making and abstraction stays on me.
If someone starts thinking less because of tools, the problem isn't the tool — it's how it's used.
Those who use these tools seem to become dumber in ways they do not notice. In much the same vein, I become smarter in relative retrospect just by sticking to my guns and limiting my exposure.
If you use a tool whose primary consequence is that you become damaged and diminished each time you use it, and in most cases this happens in a way you cannot recognize, how do you ever stop? If you cannot know how to use it safely, and you cannot recognize the mechanism or the issue, what is left?
If it alters your ability to perceive things, you certainly can't decide something if you don't recognize the need to decide.
If the factors required for that decision to come about are outside your perception, where the connections for a correct decision no longer exist, there isn't anything you can do.
You can make the old argument that it's just a tool, saying the choice lies with the person, who is responsible, not the tool; and yet the person doesn't, or more likely cannot, notice or recognize the damage happening.
It's a very rare person who is capable of introspection at such a subtle level. There is also no informed consent about the danger; as for all those children being force-fed this GenAI stuff, when the data finally comes in... well, I don't want to think about a future like that, where there may be no future at all for them.
The decision-making process requires things that you may not have anymore, and while you may continue to think, falsely, that you do and that you are still capable, you've been blinded; when that happens, you've definitionally entered a state of delusion. Quite a lot of delusional people don't realize they've gone off the deep end; it's a perceptual deficit.
Who knows, maybe it will go as far as delirium as the debasement progresses and you unravel as a sentient person.
We all have psychological blind spots, and there is one blind spot above all others that we have no defense against: distorted reflected appraisal.
There are some things where the issue is directly with the tool, not with how it's used.
They see the same thing as you but are overjoyed to power down their brain and spend more time thinking about their personal hobbies, sports, and 90 Day Fiancé instead of real problem-solving or code.
I've tried for years to build writing tools with AI. I think for the most part it doesn't work well, and the models have become worse (more unnatural) since GPT-3, with the exceptions of GPT-4.5 and Gemini 1.5 Flash.
There are bits you can delegate to AI: writing punchy intro paragraphs. Brainstorming titles. Starting off dialogue in a certain style, though it can't sustain it for very long. Or dialogue as another person - you often don't want two characters with similar language.
Writing is thinking. You can rubber duck it for ideas. And it does bounce back some good ones. But you can't expect it to do the heavy work.
Lately, I've been reversing the dynamic - getting AI to generate the bullet points while I write the document. The last straw was when I got it to summarize a doc, and then got it to do work based on the doc it wrote. It would get half the work wrong.
Given the above, it's useful as hell for generating templates and usable starters for creating your own work when you're feeling stuck, and that's mainly it for me.
I'm not delegating my thinking to a machine that can't think.
Learned helplessness as a service isn't a thing I want, and I worry that long term it will make me think less deeply in ways I can't predict.
I use DeepSeek, both locally and online, more frequently and find it more trustworthy, but it still frequently makes things up. Maybe because it's Chinese, I've found it better at translation and somewhat more nuanced.
I don't want to pay for top-notch AI, just like I don't pay for top-notch kernels (i.e., Linux), top-notch version control systems (i.e., git), and so on.
I'd also disagree that Linux is the top-notch kernel. It might be the most universal one because of drivers and licensing, and that makes it my personal favorite because it "just works" with pretty much no fuss (technical or social), but there are a number of kernels out there with better features.
Researchers have called much less intelligent things AI since 1956.
Before there were GPTs, there were RNNs and CNNs. AI is the field of study.
Good logic.
LLMs are something. To me they look like knowledge holograms. Or maybe just a huge pattern-inference machine.
And they are ODD. Like a fragment of smarts. Dangerous and for sure not something we can rely on.
It's a term with historical precedent.
It’s an aspiration to mechanize thought.
Are we there yet? It’s a continuum.
Excellent logic.
I'm also a type 1 diabetic, and this is like asking me to what extent I have stopped or limited my use of insulin.
AI and insulin (to different extents) make my life better in significant ways. Why would I stop or limit that?
Funny that I'm exploring the impacts of over-dependence/over-reliance on AI tools, and made a mistake because of exactly that. I will certainly proofread after using speech-to-text!
and yes, extent*
I use AI a LOT, and I have a love-hate relationship with it.
Therefore, it’s actually something I value daily.
If I only love something or only hate it, it's shallow and meh.