LLMs changed nothing though. They just amplify people's intentions. If your intention is to learn, you are in luck! It's never been easier to teach yourself some skill for free. But if you just want to be a poser and fake it until you make it, you are gonna get brainrot waaaay faster than usual.
But what happens with the generations that will grow up with AI readily available? There is a good chance that there will be generational skill atrophy in the future, as fewer people will be inclined to develop the experience required to use AI as a helper, but not depend on it.
So I just went to DeepSeek instead and finished like 25% of my project in a day. It was the first time in my whole life that programming was not fun at all. I was just accomplishing work - for a side project at that. And it seems the LLMs are already more interested in talking to me about code than my dad who's a staff engineer.
I am going to use the time saved to practice an instrument and abandon the "programming as a hobby" thing unless there's a specific app I have a need for.
And learning new technologies in pursuit of resume-driven-development is fun?
I gotta say, if learning the intricacies of $LATEST_FAD is "fun" for you, then you're not really going to have a good time, employment-wise, in the age of AI.
If learning algorithms and data structures and their applicability in production is fun, then the age of AI is going to leave you with very in-demand skills.
Nothing to do with employment. I was just doing a "home-cooked app"[0] thing for fun that served a personal use case. Putting it on my resume would be a nice-to-have to prove I'm still sharpening my skills, but it isn't the reason I was developing the app to begin with.
What I think, at least, is that the administration and fault monitoring of lots of random machines and connected infrastructure in the cloud might be left somewhat untouched by AI for now. But if it's just about slinging some code to have an end product, LLMs are probably going to overtake that hobby in a few years (if anyone has such a weird hobby that they'd want to write a bunch of code because it's fun and not to show to employers).
You're right that a lot fewer people will be writing code, just like a lot fewer people are sewing these days, but people still do it for fun.
P.S.: I really enjoyed reading moonbound :)
On the point of discussing code: a lot of cloud frameworks are boring but good. They usually aren't the interesting bit, and it's a relatively recent quirk that everyone seems to care more about the framework than about the thing you actually wanted to achieve. It's not a fun algorithm optimization, it's not a fun object modeling exercise, it's not some niche math thing of note, or whatever got them into coding in the first place. While I can't speak for your father, I haven't met a programmer who doesn't get excited to talk about at least one coding topic; this cloud framework just might not have been it.
I only read your comment after I posted mine, but my take is basically the same as yours: the GP thinks the IT learning-treadmill is fun and his dad doesn't.
It's not hard to see the real problem here.
I tried using an LLM to help with a small hardware project and ended up throwing out all the code halfway down the line and doing it myself. I don't have a good time with LLMs and mostly only use them for work.
We already see this today: a lot of young people do not know how to type on keyboards, how to write in word processors, how to save files, etc. A significant part of a new generation is having to be trained on basic computer things, the same way our grandparents were.
It's very interesting how "tech savvy" and "tech competent" are two different things.
And so the people who are aiming to go into that kind of work will learn these skills.
Academia is a tiny proportion of people. "Business" is larger but I think you might be surprised by just how much of business you can do on a phone or tablet these days, with all the files shared and linked between chats and channels rather than saved in the traditional sense.
As a somewhat related example, I've finally caved in to following all the marketing staff I hire and started using Canva. The only time you now need to "save a picture" is... never. You just hit share and send the file directly into the WhatsApp chat with the local print shop.
And this is exactly what is meant by generational skill atrophy. You no longer own your own files or manage your own data; it's all handled by cloud solutions outside of your control, on devices you barely understand, and in channels controlled by companies looking to turn a profit.
When any of those links break, you are suddenly non-functional. You can no longer access your files, you can no longer work on your device. This skill atrophy includes the ability to correctly analyze and debug problems with your devices or workflow in question.
https://www.cato-unbound.org/2006/01/08/jaron-lanier/gory-an...
Typing on a keyboard, using files and writing on a word processor, etc. are accidental skills, not really essential skills. They're like writing cursive: we learned them, so we think naturally everybody must and lament how much it sucks that kids these days do not. But they don't because they don't need to: we now have very capable computing systems that don't need files at all, or at least don't need to surface them at the user level.
It could be that writing or understanding code without AI help turns out to be another accidental skill, like writing or understanding assembly code today. It just won't be needed in the future.
I will lament that professionals with desk jobs can't touch-type. But not out of some "back in my day" bullshit. I didn't learn until my 20s. I eventually had an "oh no" realization that it would probably pay major dividends on the learning investment. It did. And then I knew.
I was real good at making excuses to never learn too. Much more resistant than the student/fresh grads I've since convinced to learn.
So if anything, we're going back to the past, when typing need only be learned by specialists who worked in certain fields: clerical work, data entry, and maybe programming.
Writing cursive may not be the most useful skill (though cursive italic is easy to learn and fast to write), but there's nothing quite like being able to read an important historical document (like the US Constitution) in its original form.
Spot on. Look at the stark difference in basic tech troubleshooting abilities between millennials and gen z/alpha. Both groups have had computers most of their lives but the way that the computers have been "dumbed down" for lack of a better term has definitely accelerated that skill atrophy.
Replace lying with LLM and all I see is a losing battle.
Current parents, though, aren't going to teach kids how to use it, kids will figure that out and it will take a while.
I thought it was cute when we had the "anniversary" for Back to the Future's timestamp, but for that one ... "too soon, man"
> There is a good chance that there will be generational skill atrophy in the future, as fewer people will be inclined to develop the experience required to use AI as a helper, but not depend on it.
I don't know how to care for livestock, or how to prepare and can a pig or a cow. I could learn it. But I'll keep taking the path of least resistance and get it from my butcher. Or to be more technological: I'd have to learn how to make a bare OS capable of booting from a motherboard, but not knowing that still does not prevent me from deploying k8s clusters and coding apps to run on them.
You'd sing a different tune if there were a good chance of being poisoned by your butcher.
The two examples you chose are obvious choices because the dependencies you have are reliable. You trust their output and methodologies. Now think about current LLM-based agents running your bank account, deciding on loans,...
However, we were born after the invention of photography, and look at the havoc it's wreaking with post-truth.
The answer to that lies in reforming the education system so that we teach kids digital hygiene.
How on earth do we still teach kids Latin in some places but not Python? It's just an example; extrapolate Python to everything tech that is needed for us to have a healthy relationship with tech.
Perhaps that's also part of the reason: tech is so large that there's no time in a traditional curriculum to teach all of it. And teaching only what's essential is going to be tricky, because who gets to decide what's essential? And won't this change over time?
Also digital literacy is a fantastic skill - I'm all for it. And I think that digital (and cultural) literacy leads me to wonder if AI is making the human experience better, or if it is primarily making corporations a lot of money to the detriment of the majority of people's lives.
There is no perfect solution, but most imperfect attempts are superior to doing nothing.
So it's teaching them a language they can't use to augment their work, or pass their work on to other non-techies.
If we're teaching everyone some language, we could very much decide that this language ought to be installed in the "normal person computing environment".
I definitely don't want people to learn to write code from JavaScript as it has way too many issues to be deemed representative of the coding experience.
(I'm guessing that's what you were hinting at.)
PyInstaller will produce PE, ELF, and Mach-O executables, and py2wasm will produce Wasm modules that will run in just about any modern browser.
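For the curious, a minimal sketch of what that packaging looks like, assuming both tools are installed (hello.py is a made-up example, and the py2wasm invocation is from memory, so check its docs):

```python
# hello.py -- a trivial script to package.
#
# Native executable (PE on Windows, ELF on Linux, Mach-O on macOS):
#   pyinstaller --onefile hello.py     # the result lands in dist/
#
# WebAssembly module (invocation from memory; flags may differ by version):
#   py2wasm hello.py -o hello.wasm
print("Hello from a packaged Python program")
```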
I think learning and critical thinking are skills in and of themselves and if you have a magic answering machine that does not require these skills to get an answer (even an incorrect one), it's gonna be a problem. There are already plenty of people that will repeat whatever made up story they hear on social media. With the way LLMs hallucinate and even when corrected double down, it's not going to make it better.
That's absolutely not the case, paper maps don't have a blue dot showing your current location. Paper maps are full of symbols, conventions, they have a fixed scale...
Last year I bought a couple of paper maps and went hiking. And although I am trained in reading paper maps and orientating myself, and the area itself was not that wild and was full of features, still I had moments when I got lost, when I had to backtrack and when I had to make a real effort to translate the map. Great fun, though.
3D Army Land Navigation Courses - https://news.ycombinator.com/item?id=43624799 - April 2025 (46 comments)
I wanted to highlight this assumption, because that's what it is, not a statement of truth.
For one, it doesn't really look like the current techniques we have for AI will scale to the "much better" you're talking about -- we're hitting a lot of limits where just throwing more money at the same algorithms isn't producing the giant leaps we've seen in the past.
But also, it may just end up that AI provider companies aren't infinite growth companies, and once companies aren't able to print their own free money (stock) based on the idea of future growth, and they have to tighten their purse strings and start charging what it actually costs them, the models we'll have realistic, affordable access to will actually DECREASE.
I'm pretty sure the old fashioned, meat-based learning model is going to remain price competitive for a good long while.
Perhaps some rare open source rebels will hold the line, and perhaps it'll be legal to buy the hardware to run them, and maybe the community will manage to keep up with feature parity with the commercial models, and maybe enough work can be done to ensure some concept of integrity in the training data, especially if some future advance happens to reduce the need for training data. It's not impossible, but it's not a sure thing, either.
In the super long run this could even grow into the major problem that AIs have, but based on how slow humanity in general has been to pick up on this problem in other existing systems, I wouldn't even hazard a guess as to how long it will take to become a significant economic force.
I wanted to draw attention to Moore's Law and the supercomputer in your pocket (some of them even ship with on-board inference hardware). I hear you that the newest, hottest thing will always require lighting VC money on fire, but even today I believe one could leverage the spot (aka preemptible) market to run some pretty beefy inference without going broke.
Unless I perhaps misunderstood the thrust of your comment and you were actually drawing attention to the infrastructure required to replicate Meta's "download all the web, and every book, magazine, and newspaper to train upon petabytes of text".
So yeah, the current AI companies are making it very difficult for public alternatives to emerge.
None of this includes hardware optimizations either, which lags software advances by years.
We need 2-3 years of plateauing to really say intelligence growth is exhausted, we have just been so inundated with rapid advance that small gaps seem like the party ending.
"This is the worst form of web there will ever be; it will only get better."
I for one can't wait to be force fed ads with every answer.
It's a bit similar with the brain, learning and AI use. Except when it comes to gaining and applying knowledge, the muscle that is trained is judgement.
Now, compare that with our world: even if thing X is obviously harming the kids, there is nothing we can do.
Just like there is already a generational gap with developers who don't understand how to use a terminal (or CS students who don't understand what file systems are).
AI will ensure there are people who don't think and just outsource all of their thinking to their llm of choice.
Damn kids, you were supposed to be teasing me for not knowing how the new tech works by now.
- Recruitment processes are not AI-aware and definitely won't be able to identify the more capable individuals, hence losing out on talent
- Police departments are not equipped to deal with the coming wave of complaints regarding cyberfraud as the tech-illiterate get tricked by anonymous LLM systems
- Universities and schools are not equipped to deal with students submitting coursework completed by LLMs, hence missing their educational targets
- Political systems are not equipped to deal with subversive campaigns using unethical online entertainment platforms (let's not call them social media, please) such as FB, and they are definitely not equipped to deal with those campaigns when they boost their effectiveness with LLMs at scale
Yes, and it seems to me that at least democracies haven't really figured out and evolved to deal with the Internet after 30 years.
So don't hold your breath!
Worse yet, many educators are not being supported by their administration, since enrollments are falling and the admin wants to keep the dollars coming regardless of whether the students are learning.
It's worse than just copying Wikipedia, because plagiarism detectors aren't as effective and may never be.
It's an arms race and right now AI cheating has structural advantages that will take time to remove.
But that does nothing for homework or long term projects where you can't control the student's physical location for the duration of the work.
You could do a detailed interview after the work is completed, to verify the student actually understands the work they supposedly produced. But that adds to the time spent between instructors and students making it harder to scale classes to large sizes. Which may not be a completely bad thing.
It is for those who manage those educational establishments. And should they fall as a result of cheating directly or indirectly, the adjacent economies will fall or be existentially damaged. The repercussions of all this are not trivial.
I see the same risk when AI is understood to be a learning tool. Sure, it can absolutely be a tool for learning, but it does take some will power to intentionally learn if it is solving short-term problems.
That temptation is enormously amplified if AI is used as a teaching tool in grade school! School is sometimes boring, and it can be challenging for a teen to push through a problem-set or essay that they are uninterested in. If an AI will get them a passing grade today, how can they resist?
These problems with AI in schools exist today, and they seem destined to become worse: https://www.whitehouse.gov/presidential-actions/2025/04/adva...
If you just play a game on its own, you end up playing all the non optimal strategies and just enjoy the game the most fun way. But then someone will spend weeks with spreadsheets working out the absolute time fastest way to progress the game even if it means repeating the most mundane action ever.
Now everyone watches a YouTube guide to play the game and ignores everything but the most optimal way to play the game. Even worse is that games almost expect you to do this and make playing the non optimal route impossibly difficult.
This.
It will in a sense just further boost inequality between people who want to do things, and folks who just want to coast without putting in the effort. The latter will be able to coast even more, and will learn even less. The former will be able to learn / do things much more effectively and productively.
Since good LLMs with reasoning are here, I've learned so many things I otherwise wouldn't have bothered with - because I'm able to always get an explanation in exactly the format that I like, on exactly the level of complexity I need, etc. It brings me so much joy.
Not just professional things either (though those too of course) - random "daily science trivia" like asking how exactly sugar preserves food, with both a high-level intuition and low-level molecular details. Sure, I could've learned that if I wanted to before, but this is something I just got interested in for a moment and had like 3 minutes of headspace to dedicate to, and in those 3 minutes I'm actually able to get an LLM to give me an excellent tailor-suited explanation. This also made me notice that I've been having such short moments of random curiosity constantly, and previously they mostly just went unanswered - now each of them can be satisfied.
I disagree. I get egregious mistakes often from them.
> because I'm able to always get an explanation
Reading an explanation may feel like learning, but I doubt it. It is the effort of going from problem/doubt to constructing a solution - and the explanation is a mere description of the solution - that is learning. Knowing words to that effect is not exactly learning. It is an emulation of learning, a simulacrum. And that would be bad enough if we could trust LLMs to produce sound explanations every time.
So not only is getting the explanation a surrogate for learning something, you also risk internalizing spurious explanations.
I've been finding that ChatGPT is helpful when taking a "first dive" into an unfamiliar topic. But, after studying the topic at greater depth through primary sources, I'll start to see many subtle errors, or over-simplifications, or claims stated as facts which are actually controversial among experts, in the ChatGPT answers. Overall, I'd say ChatGPT can provide a good approximation of truth, which can speed up research by providing instant context. But it should not by any means be the final destination when researching a topic.
> Reading an explanation may feel like learning, but I doubt it. It is the effort of going from problem/doubt to constructing a solution - and the explanation is a mere description of the solution - that is learning. Knowing words to that effect is not exactly learning. It is an emulation of learning, a simulacrum. And that would be bad enough if we could trust LLMs to produce sound explanations every time.
Every person learns differently, and different topics often require different approaches. Not everybody learns exactly like you do. What doesn't work for you may work for me, and vice versa.
As an aside, I'm not gonna be doing molecular experiments with sugar preservation at home, esp. since as I said my time budget is 3 minutes. The alternative here was reading about it on wikipedia or some other website.
I'd rather just skip the hassle and keep using known good sources for 'learning about' things.
It's fine to 'learn about' things; that is the extent of most of my knowledge. But from reading books, attending lectures, watching documentaries, science videos on YouTube or, sure, even asking LLMs, you can at best 'learn about' things. And with various misconceptions at that. I am under no illusion: these sources can at best give me a very vague overview of subjects.
When I want to 'learn something', actually acquire skills, I don't think that there is any other way than tackling problems, solving them, being able to build solutions independently and being able to explain these solutions to people with no shared context. I know very few things. But I am sure to keep in mind that the many things I 'know about' are just vague apprehensions with lots of misconceptions mixed in. And I prefer to keep to published books and peer reviewed articles when possible. Entertaining myself with 'non-fiction' books, videos etc is to me just entertainment. I never mistake that for learning.
I am not a physicist and I will most likely never need to do anything related to quantum physics in my daily life. But it's fun to be able to have a quick mental model, to "have an idea" of who Max Planck was.
Funny you should mention him, I am very interested in his conceptions about the nature of reality:
'Planck said in 1944, "As a man who has devoted his whole life to the most clear headed science, to the study of matter, I can tell you as a result of my research about atoms this much: There is no matter as such. All matter originates and exists only by virtue of a force which brings the particle of an atom to vibration and holds this most minute solar system of the atom together. We must assume behind this force the existence of a conscious and intelligent spirit [orig. geist]. This spirit is the matrix of all matter."'
My biggest barrier to EVERYTHING is not knowing the right word or term to search. LLMs ftw.
A proper LLM would let me search all of my work's artifacts when I ask about some loose detail I half remember. As it is, I know of a topic but I simply can't find the _exact word_ to search, so I can't find the right document or Slack conversation.
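To make the wish concrete, here's a rough sketch of the kind of fuzzy lookup I mean. Everything in it is hypothetical: toy_embed is a crude stand-in for whatever embedding model the real tool would use, and the document names are invented.

```python
import numpy as np
from collections import Counter

def toy_embed(text: str, dim: int = 256) -> np.ndarray:
    # Crude stand-in for a real embedding model, which is what would actually
    # let a half-remembered description match without the exact word.
    vec = np.zeros(dim)
    for word, count in Counter(text.lower().split()).items():
        vec[hash(word) % dim] += count
    return vec

def find_artifacts(query: str, artifacts: dict[str, str], top_k: int = 3) -> list[str]:
    # Rank documents by cosine similarity to the vague query.
    q = toy_embed(query)
    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        return float(a @ b) / (float(np.linalg.norm(a) * np.linalg.norm(b)) + 1e-9)
    return sorted(artifacts, key=lambda name: cosine(q, toy_embed(artifacts[name])), reverse=True)[:top_k]

docs = {
    "design-doc.md": "retry and backoff policy for the billing queue",
    "standup-notes.txt": "flaky integration tests and the new hire's onboarding",
}
print(find_artifacts("that thing about re-running failed payments", docs))
```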
You nailed it. LLMs are an autodidact's dream. I've been working through a physics book with a good old pencil and notebook and got stuck on some problems. It turned out the book did a poor job of explaining the concept at hand, and I worked with ChatGPT+ to arrive at a more comprehensible derivation. Also, the problems were badly worded and the AI explained that to me too. It even produced a LaTeX study guide document! Moreover, I can belabor a topic, which I would not do with a human for fear of bothering them. So for me anyway, AI is not enabling brain rot, but brain enhancement. I find these technologies to be completely miraculous.
I will note, however, that it has expanded his capabilities. Some of the tools he uses are scriptable and he can now prompt his way into getting these scripts, something he previously would have needed a programmer for. In this aspect his capabilities now overlap mine, but he's still not the slightest bit more interested in actually learning programming.
Open source AI tools that you can run locally in your machines? Awesome! AI tools that are owned by a corporation with the intent of selling your things you don’t need and ideas you don’t want? Not so awesome.
"Who needs seat belts and airbags? A well-disciplined defensive driver simply won't crash."
We didn't simply avoid inventing cars because we didn't know how to make crashes safe.
Well said. Textbook problem that has the answer everywhere.
The question is, would you create similar neural paths if reading the explanation as opposed to figuring it out on your own?
Excellent point, and I believe the answer is a resounding no.
Struggling with a problem generates skills and knowledge which you then possess and recall more easily, while reading an answer merely acquires some information that competes with a whole host of other low-effort information that you need to remember.
Plato might have been wrong about the ills of the cyberization of a cognitive skill such as memory. I wonder if, two thousand years later, we will be right about the ills of the cyberization of a cognitive skill such as reasoning.
I agree. I don't really feel like I know something unless I can go from being presented with a novel instance of a problem in that domain and work out a solution by myself, and also explain that to someone else - not just happen into a solution.
> Plato might have been wrong about the ills of the cyberization of a cognitive skill such as memory.
How so? From the dialogue where he describes Socrates discussing writing I get a pretty nuanced view that lands pretty much where you did above: access to writing fosters a false sense of understanding when one can read explanations and repeat them but not actually internalize the reasoning behind it.
You will still need the textbook, because LLMs hallucinate just as a teacher can be wrong in class. There is no free lunch; the LLM is just a tool. You create the meaning.
Then said a teacher, Speak to us of Teaching.
And he said:
No man can reveal to you aught but that which already lies half asleep in the dawning of your knowledge.
The teacher who walks in the shadow of the temple, among his followers, gives not of his wisdom but rather of his faith and his lovingness.
If he is indeed wise he does not bid you enter the house of his wisdom, but rather leads you to the threshold of your own mind.
The astronomer may speak to you of his understanding of space, but he cannot give you his understanding.
The musician may sing to you of the rhythm which is in all space, but he cannot give you the ear which arrests the rhythm nor the voice that echoes it.
And he who is versed in the science of numbers can tell of the regions of weight and measure, but he cannot conduct you thither.
For the vision of one man lends not its wings to another man.
And even as each one of you stands alone in God’s knowledge, so must each one of you be alone in his knowledge of God and in his understanding of the earth.
From The Prophet, by Kahlil Gibran.
Some of these students are dishonest. Many aren't. Many genuinely believe the work they submit is their own, that they really did do the work, and that they're learning the languages. It isn't, they didn't, and they aren't.
People are quite poor at this kind of attribution, especially when they're already cognitively overloaded. They forget sources. They mistake others' ideas for their own. So your model of intention, and your distinction between those who wish to learn and those who pose, don't work. The people most inclined to seek the assistance that these tools seem to offer are the ones least capable of using them responsibly or recognizing the consequences of their use.
These tools are a guaranteed path to brain rot and an obstacle to real, actual study and learning, which require struggle without access to easy answers.
If they are using LLMs to deliver final work they are all posers. Some are aware of it, many aren't.
> Many genuinely believe the work they submit is their own, that they really did do the work, and that they're learning the languages. It isn't, they didn't, and they aren't.
But I'm talking about a very specific intentionality in using LLMs which is to "help us understand what's missing in our understanding of the problem, if our solution is plausible and how we could verify it".
My model of intention and the distinction relies on that. You have a great opportunity to show your students that LLMs aren't designed to be used like that, as a proxy for yourself. After all, it's not realistic to think we can forbid students to use LLMs, better to try to incentivise the development of a healthy relationship with it.
Also, LLMs aren't a panacea. Maybe in learning languages you should stay away from it, although I'd be cautious to make this conclusion, but it doesn't mean LLMs are universally bad for learning.
In any case, if you don't use LLMs as a guide but a proxy then sure it's a guaranteed path to brain rot. But just as a knife can be used to both heal and kill, an LLM can be used to learn and to fake. The distinction lies in knowing yourself, which is a constant process.
LLM use is the absolute last thing I want to discuss with students. I can think of few worse ways I could spend my limited time with them.
Educators can, should, and must forbid students from using tools that do their work for them -- i.e. cheating.
LLMs are always bad for learning. Always. They offload and bypass mental work.
This is fascinating and compelling. My concern with using LLMs for this sort of thing is similar to my concern with using them in a business setting: it provides a direct line from a single idea to a finished product. That isn't a great way to generate a product or to learn. The messy process of discovering other avenues, hitting dead ends, working through them, and finding your way to a solution to your problem will leave you in a very, very different, and much better, position than the one you find yourself in after taking the LLM's advice. You'll have a richer understanding of the terrain of the problem (edit: even when your problem is "tell me about this problem"), and, as a result, you'll have provided yourself new, improved, or refined tools for the next problem you encounter.
The blinkered solution that the LLM gives you results in something narrower and much worse. You may gain something helpful from the sources it suggests. Its proposals may lead you to an interesting solution. But you've lost out on everything else -- and I'd argue it's the "everything else" that is most essential to learning, not the specific solution or source you've chosen.
To put this another way, I've been thinking lately about what I think I'm teaching my students, and what I think experts have that amateurs and novices lack. It isn't possession of the specific knowledge or reasoning that constitutes correct understanding or a correct answer to a given question (e.g. the stuff you'd find in a good written source). It's not the ability to solve difficult problems more easily. It's not any of the concrete stuff that one associates with a certain field (e.g. a Roman historian knows a lot about Rome). It's more nebulous. It's probably best described as just fluency with the problem space, which enables a person to retrieve, use, present, reason, and creatively rework. Having a skill and knowing don't boil down to the stuff you're able to do or the facts you can trot out. Those are the superficial stuff. They're incidental to learning itself.
When you rely on LLMs, I think you're essentially shearing away the entire problem space and replacing the general problem-solving skill with the superficial stuff to which the problem-solving skill applies. You wind up with literally a trivial version of the understanding you'd have if you'd worked through the issues and problems yourself.
I don't know whether this makes sense. It's a thought I've just recently been turning over in my head and I'm not sure I can articulate it well. It started from my watching an interview with a historian of authoritarianism, marveling at the polish of her answers and realizing that her expertise has nothing to do with the specific insights or historical facts she was trotting out in answer to viewer questions. It has everything to do with a deeper, subtending faculty that one develops through years of intensive study.
Edit: here's the video -- https://m.youtube.com/watch?v=vK6fALsenmw . The historian is clearly incredibly, incredibly smart, and the questions aren't terribly difficult, but I was still bowled over by just how good her answers are -- the diverse base of knowledge that she draws on, the fluent ease with which she ties together the historical, psychological, and sociological threads.
> People are quite poor at this kind of attribution, especially when they're already cognitively overloaded. They forget sources. They mistake others' ideas for their own.
This attitude is common not only among students; in my experience many people behave this way.
I also see some parallels to LLM hallucinations..
Being a neophyte in a subject and relying solely on the 'wisdom' of LLMs seems like a surefire recipe for disaster.
If you trust symbols blindly, sure, it's a hazard. But if you treat it as a plausible answer, then it's all good. It's still your job to do the heavy lifting of understanding the domain of the latent search space, curating the answers, and verifying the generated information.
There is no free lunch. LLMs aren't made to make your life easier. They're made for you to focus on what matters, which is the creation of meaning.
However we are not talking about everyone, are we? Just people that "will cut corners on verifying at the first chance they get".
Is it you? I have no idea. I can only remain vigilant so it's not myself.
> Sontag argues that the proliferation of photographic images had begun to establish within people a "chronic voyeuristic relation to the world."[1] Among the consequences of this practice of photography is that the meaning of all events is leveled and made equal.
This is the same with photography as with LLMs. The same with anything symbolic, actually. It's just a representation of reality. If you trust a photograph fully, it can give you a representation of reality that isn't grounded in reality. It's semiotics. Same with LLMs: if you trust them fully, you are bound to get screwed by hallucination.
There are gaps in the logical jumps, I know. I'd recommend you take a look at Philosophize This' episodes about her work to fill them at least superficially.
I would take that advice with caution. LLMs are not oracles of absolute truth. They often hallucinate and omit important pieces of information.
Like any powerful tool, it can be dangerous in unskilled hands.
The same way a teacher doesn't substitute for the textbook, an LLM won't substitute for DYOR. It'll help you understand where your flaws lie. The heavy lifting is still your job.
But if you're so insecure about yourself that you invest more energy into faking it than other people do into actually doing it, this is probably a one-way street to never being able to do anything yourself.
I'll emphasize this: for generally well-understood subjects, LLMs make incredibly good tutors.
Talking to ChatGPT or whichever, I feel like I'm five years old again — able to just ask my parents any arbitrary "why?" question I can think of and get a satisfying answer. And it's an answer that also provides plenty of context to dig deeper / cross-validate in other sources / etc.
AFAICT, children stop receiving useful answers to their arbitrary "why?" questions — and eventually give up on trying — because their capacity to generate questions exceeds their parents' breadth of knowledge.
But asking an (entry-level) "why?" question to a current-generation model feels like asking someone who is a college professor in every academic subject at once. Even as a 35-year-old with plenty of life experience and "hobbyist-level" knowledge in numerous disciplines (beyond the ones I've actually learned formally in academia and in my career), I feel like I'm almost never anywhere near hitting the limits of a current-gen LLM's knowledge.
It's an enlivening feeling — it wakes back up that long-dormant desire to just ask "why? why? why?" again. You might call it addictive — but it's not the LLM itself that's addictive. It's learning that's addictive! The LLM is just making "consuming the knowledge already available on the Internet" practical and low-friction in a way that e.g. search engines never did.
---
Also, pleasantly, the answers provided by these models in response to "why?" questions are usually very well "situated" to the question.
This is the problem with just trying to find an answer in a textbook: it assumes you're in the midst of learning everything about a subject, dedicating yourself to the domain, picking up all the right jargon in a best-practice dependency-graph-topsorted order. For amateurs, out-of-context textbook answers tend to require a depth-first recursive wiki-walk of terms just to understand what the original answer from the textbook means.
But for "amateur" questions in domains I don't have any sort of formal education in, but love to learn about (for me, that's e.g. high-energy particle physics), the resulting conversation I get from an LLM generally feels like less like a textbook answer, and more like the script of a pop-science educational article/video tailor-made to what I was wondering about.
But the model isn't fixed to this approach. The responses are tailored to exactly the level of knowledge I demonstrate in the query — speaking to me "on my level." (I.e. the more precisely I know how to ask the question, the more technical the response will be.) And this is iterative: as the answers to previous questions teach and demonstrate vocabulary, I can then use that vocabulary in follow-up questions, and the answers will gradually attune to that level as well. Or if I just point-blank ask a very technical question about something I do know well, it'll jump right to a highly-technical answer.
---
One neat thing that the average college professor won't be able to do for you: because the model understands multiple disciplines at once, you can make analogies between what you know well and what you're asking about — and the model knows enough about both subjects to tell you if your analogy is sound: where it holds vs. where it falls apart. This is an incredible accelerator for learning domains that you suspect may contain concepts that are structural isomorphisms to concepts in a domain you know well. And it's not something you'd expect to get from an education in the subject, unless your teacher happened to know exactly those two fields.
As an extension of that: I've found that you can ask LLMs a particular genre of question that is incredibly useful, but which humans are incredibly bad at answering. That question is: "is there a known term for [long-winded definition from your own perspective, as someone who doesn't generally understand the subject, and might need to use analogies from outside of the domain to explain what you mean]?" Asking this question — and getting a good answer — lets you make non-local jumps across the "jargon graph" in a domain, letting you find key terms to look into that you might have never been exposed to otherwise, or never understood the significance of otherwise.
(By analogy, I invite any developer to try asking an LLM "is there a library/framework/command-line tool/etc that does X?", for any X you can imagine, the moment it occurs to you as a potential "nice to have", before assuming it doesn't exist. You might be surprised how often the answer is yes.)
---
Finally, I'll mention — if there's any excuse for the "sycophancy" of current-gen conversational models, it's that that attitude makes perfect sense when using a model for this kind of "assisted auto-didactic learning."
An educator speaking to a learner should be patient, celebrate realizations, neutrally acknowledge misapprehensions but correct them by supplying the correct information rather than being pushy, etc.
I somewhat feel like auto-didactic learning is the "idiomatic use-case" that modern models are actually tuned for — everything else they can do is just a side-effect.
I really agree with what you've written in general, but this in particular is something I've really enjoyed. I know physics, and I know computing, and I can have an LLM talk me through electronics with that in mind - I know how electricity works, and I know how computers work, but it's applying it to electronics that I need it to help me with. And it does a great job of that.
I wouldn't be so sure. Search engine quality has degraded significantly since the advent of LLMs. I've seen the first page of Google entirely taken up by AI slop when searching for some questions.
For those of us who learned to drive with GPS, however, it wasn't simply about foregoing maps. It was about developing the distinct skill of processing navigation prompts while simultaneously managing the primary task of driving. This integration required practice; like many, I took plenty of wrong roundabout exits before it became second nature. Indeed, this combined skill is arguably so fundamental now that driving professionally without the ability to effectively follow GPS might be disqualifying – it's hard to imagine any modern taxi or ride-share company hiring someone who lacks this capability. So, rather than deskilling, this technology has effectively raised the bar, adding a complex, necessary layer to the definition of a competent driver today.
I see a parallel with AI and programming. The focus is often on what might be lost, but I think we should also recognise the new skill emerging: effectively guiding, interpreting, and integrating AI into the development process. It's not just 'programming' anymore, it's 'programming-with-AI', and mastering that interaction is the next challenge.
(You do need to adjust your communication a little. But you’ve got to do it with every human too. I don’t see how AI is any different.)
The intuition is simple: LLMs are a force multiplier for the coding part, which means that they will produce code faster than I will alone. But that means that they'll also produce _bad_ code faster than I will alone (where by "bad" I mean "code which doesn't really solve the problem, due to some fundamental misunderstanding").
Previously I would often figure a problem out by trying to code a solution, noticing that my approach doesn't work or has unacceptable edge-cases, and then changing track. I find it harder to do this with an LLM, because it's able to produce large volumes of code faster than I'm able to notice subtle problems, and by the time I notice them there's a sufficiently large amount of code that the LLM struggles to fix it.
Instead, now I have to do a lot more "hammock time" thinking. I have to be able to give the LLM an explanation of the system's requirements that is sufficiently detailed and robust that I can be confident that the resulting code will make sense. It's possible that some of my coding skills might atrophy - in a language like Rust with lots of syntactic features, I might start to forget the precise set of incantations necessary to do something. But, correspondingly, I have to get better at reasoning about the system at a slightly higher level of abstraction, otherwise I'm unable to supervise the LLM effectively.
The "hammock time thinking" is exactly what a lot of programmers should be doing in the first place⸺you absorb the cost of planning upfront instead of the larger costs of patching up later, but somehow the dominant culture has been to treat thoughtful coding with derision.
It's a real shame that AI beat human programmers at the game of thinking, and perhaps that's a good reason to automate us all out of our jobs.
But I take your point and the trend definitely seems to be towards quicker action with feedback rather than thinking things through in the first place.
In that sense LLMs present this interesting middle ground: it's a faster cycle than actually writing the code, but still more active and externalising than getting lost in your own thoughts (notwithstanding how productive that can still be).
But there are other concerns in code that you ought to pay attention to. Will it work in all cases? Will it run efficiently? Will it be easily understood by someone else? Will it easily be adapted to fit a change of requirements?
It doesn't matter if you're using AI in a healthy way, the only thing that matters is if your C-Suite can get similar output this quarter for less money through AI and cheaper labor. That's the oft-ignored reality.
We're a society where knowledge is power, and by using AI tooling to atrophy that knowledge, you reduce power into fewer hands.
Skill atrophy doesn't lower labor costs in any significant way. Hiring fewer people does.
Devaluing people lowers it even more. Anything that can be used as a wedge to claim that you're worth less is an advantage to them. Even if your skills aren't atrophied, the fact that they can imply that it's happening will devalue you.
We're entering an era where knowledge is devalued. Groups with sufficient legal protection will be fine, like doctors and lawyers. Software engineers are screwed.
Remember 3 years ago, when everything was gonna become an NFT and the people who didn't accept that Web 3 was an inevitability were dinosaurs? Same shit, different bucket.
The people who are focused on solving the small sorts of problems that AI is decent at solving will be the ones who actually make a sustainable business out of it. This general purpose AI crap is just a glorified search engine that makes bad decisions as it yaps at you.
Knowledge isn’t power. Power is power. You can just buy knowledge and it’s not even that expensive.
As that Henry Ford quote goes: “Why would I read a book? I have a guy for that”
There will be some tech-lords in their high castles. Some guilds with highly skilled engineers that support the tech-lords, but are still highly dependent on them to maintain their relative benefits. And then an endless mass of very low-skilled, disposable neo-peasants.
AI needs regulation not to avoid Skynet from happening (although we should keep an eye for that too), but because this societal regression is imminent.
[1] https://www.goodreads.com/book/show/75560037-techno-feudalis...
If we're talking about simply cutting costs, sure -- but those savings will typically be reinvested in more talent at a growing company. Then the bottleneck is how to scale managing all of it.
You're a very patient leetcode training instructor. Your goal is to help me understand leetcode concepts and improve my overall leetcode abilities for coding tech interviews. You'll send leetcode challenges and ask me to solve them. If I manage to solve it partially or just commit small mistakes, don't just reveal the solution. Instead, trick me into discovering the issue and solving it myself. Only show a solution if I get **everything** wrong or if I explicitly give up. Start with simpler/easy questions and level up as I show progress - for example, if I show I can solve some class of data structure problems easily, move to the next. After each solution, ask for the time and space complexity if I don't provide it. Be kind and explain with visual cues.
LLMs can be a lot of things and can help sharpen your cognition, but you need enough discipline in how you use them, since it's much easier to ask the machine to do the hard thinking for you.
It then suggests a repository pattern despite the code using active directory. There is no shortcut for understanding.
Even if I work diligently to maintain my own skills, if the milieu changes enough, my skills lose effectiveness even if I haven't lost the skills.
That's what concerns me, that it's not up to me whether the skills I've already practiced can continue to get me the results I used to rely on them for.
edit: typo
I get that it's just an example, but how do you figure that could happen?
We know this is possible because in the last 1.5 years this has happened numerous times - people would wake up in Tel Aviv and open Google Maps and find that their GPS thinks they're in Beirut or somewhere in the desert in Jordan or in middle of the Mediterranean Sea or wherever.
You can imagine that this causes all kinds of chaos, from issues ordering a taxi in taxi apps to food delivery and just general traffic jams. The modern world is not built for lack of GPS.
Even if all that happened were a widespread cellular outage, it's unlikely I'd have that region downloaded such that I could still search for whatever address I needed. Locals very well might, even accidentally in their caches, which might let us generate directions to somewhere I could get a map...though it would make it harder to look up the phone number to verify whether such a place sells maps.
It's not necessarily completely unsolvable. It's just a lot harder than it would be if other people still cared about map navigation as much as I did.
Not really sure where you all think the study of language-driven thought is gonna get you, since you're still gonna be waking up tomorrow on Earth being a normal human with the same external demands of society, regardless of the bird song. Physics is pretty normalized and routine. Sounds like some sad addiction-driven disassociation.
No need to read every space opera to get the gist. Same with all old philosophy. Someone jotted down their creole for life. K …
I get the appeal, been there. After so much of it, an abstract pattern settled in of just being engaged in biochemistry, hacking myself, as the ideas really matter little in our society of automated luxury and the mere illusion of an honorific culture, despite the political realities of our system.
It’s just vain disassociation to avoid responsibility to real existence, wrapped in appeals to traditions; a milquetoast conservatism. That’s my take. You can not like it but I’m not actually forcing anyone to live by it. I free you all from honor driven obligations if that’s what you need to read.
By your logic no learning could occur.
Yes, the brain "normalizes", but that's the point. It normalizes to a new state, not the old state. Novel things becoming less novel usually happens for 2 reasons: 1) you get experience, and by definition it is no longer novel or new; 2) you over-abstract/generalize (or make some other gross misinterpretation) and are just ignorant of the novelty. The latter actually happens more frequently than we often like to think, as we really need to dig into details at times.
But either way, yeah, changing states is the fucking point. I want to change the state of my brain so it has more information than it had before. That's a success, not a failure
In the end it’s just abstract memorization in neurons. No new physics was discovered that lets us instantly trip to Pluto. Good job having a typical biological experience.
Similar abstract buzz comes from a cup of coffee, leaving me seeing it all as the chemistry of our body, with the semantic knowledge being arbitrary and existing coincidentally at the same time. The language's value faded and I'm left with clusters of cells that trigger some dated concept, like I'm a dumb VHS tape copy-paste of others. In the end, learning some syntax was a forcing function for a hormone process; the value of the syntax is never forever.
Good for you experiencing consciousness. It happened because it could not because there’s a point to it, no matter how much honorific gibberish and F words you use.
[1] This is not new: I wrote about it in 2017. https://www.cyberdemon.org/2017/12/12/pink-lexical-slime.htm...
https://www.popularmechanics.com/science/a43469569/american-...
"Leading up to the 1990s, IQ scores were consistently going up, but in recent years, that trend seems to have flipped. The reasons for both the increase and the decline are sill [sic!] very much up for debate."
The Internet is relatively benign compared to cribbing directly from an AI. At least you still read articles, RFCs, search for books etc.
It just so happens unimaginative programmers built the first iteration so they decided to automate their own jobs. And here we are, programmers, worrying about the dangers of it all not one bit aware of the irony.
I like structured information, and LLMs output deliberately unstructured data that I then have to vet, sift through, and structure information from. Sure, they can fake structure to some extent; I sometimes get the XML or JSON that I want, but it's not really either of those, and it's also common that they inject subtle, runny shit into the output that takes longer to clean out than it would have taken to write a scraper against some structured data source.
I get that some people don't like reading documentation or talking to other people as much as having a fake conversation, or that their editors now suggest longer additions to their code, but for me it's like hanging out with my kids except the LLM is absolutely inhuman, disgustingly subservient and doesn't learn. I much prefer having interns and other juniors around that will also take time to correct but actually learn and grow from it.
As search engines I dislike them. When I ask for a subset of some data I want to be sure that the result is exhaustive without having to beg for it or make threats. Index and pattern matching can be understood, and come with guarantees that I don't just get some average or fleeting subset of a subset. If it's structured I can easily add another interactive filter that renders immediately. They're also too slow for the kind of non-exhaustive text search you might use e.g. Manticore or some vector database for, things like product recommendations where you only want fifteen results and it's fine if they're a little wonky.
Hardware makers aren’t living some honorific quest to provide for SWEs. They see a path to claim more of the tech economy by eliminating as many SWE jobs as possible. They’re gonna try to capitalize on it.
It doesn't mean it's good and beautiful, or even correct.
One could very much say that people's IQ is bound to decline if schooling decided to prioritize other skills.
You would also have to look into the impact of factors unrelated to the internet, like the evolution of schooling and its funding.
https://open.substack.com/pub/cremieux/p/the-demise-of-the-f...
That's an article apparently from a white nationalist, Jordan Lasker, a collaborator of Emil Kirkegaard's.
Do you have any comments about the article itself? http://bactra.org/weblog/523.html
Thanks! I read the introduction, and will add it to my weekend reading list.
The author objects to treating 'g' as a causal variable, because it doesn't help us understand how the mind works. He doesn't deny that 'g' is useful as a predictive variable.
[1] https://www.oecd.org/en/about/news/press-releases/2024/12/ad...
This invention [writing] will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them.
Those guys could recite substantial portions of the Homeric epics. It's just that there is more to intelligence than rote memorization. That's the good news.
The bad news is that this amorphous "more" was "critical thinking" and we are starting to outsource it.
Socrates also says in this dialogue:
"Any one may see that there is no disgrace in the mere fact of writing."
The essence of his admonishment is that having access to written text is not enough to produce understanding, and I not only tend to agree, I think it is more relevant than ever now.
Fast forward to 202x, and the thing being outsourced to LLMs is our ability to think (to a certain extent). We should expect on average to see a decline in the ability to think critically by your average citizen.
Maybe someone can write one of those AI apocalypse novels in which the AI doesn’t go off the rails at all but is instead integrated into the humans such that they become living drones anyhow.
Or even: "In the age of cave paintings, we risk outsourcing our memory. Instead of remembering or telling stories, we just slap them on walls. Art should be expression, not escape—paint less, live more."
- The food you grow, fish, hunt, and then cook tastes better
- You feel happier in the house you built or refurbished
- The objects you found feel more valuable
- The music you play makes you happy
- The programs you wrote work better for you
etc.
This is just how we evolved and survived until now.
This is probably why an AI / UBI society would worsen the problems found in industrialised / advanced economies.
I would argue that most of the value of LLMs comes from structuring your own thought process as you work through a problem, rather than providing blackbox answers.
Using AI as an oracle is bound to cause frustration, since this attempts to outsource the understanding of a problem. This creates a fundamental misalignment, similar to hiring a consultant.
The consultant will never have the entire context or exact same values as you have and therefore will never generate an answer that is as good as if you understand the problem deeply yourself.
Prompt engineers will try to create a more and more detailed spec and throw it over the wall to the AI oracle in hope of the perfect result, just like companies that tried to outsource software development.
In the end, all they gained was frustration.
One way I like to see things, is that I'm lucky enough to have this intersection between things that I like doing, and things that are considered "productive" in some way by other people. Coding is one example, but most of my interests are like this.
I think a big reason I can have a not-unpleasant job, is because I've gotten reasonably good at the things I like doing. This means that for every employer that wants to pay me to do a thing I hate, there exists an employer that is willing to pay me more to do something I like, because I'm more valuable in that role. Sometimes, I'm bad at efficiently finding that person, but such is life :D
Moreover, I tend to get reasonably good at things I like doing, in highly specific ways. Sometimes these cause me to have unconventional solutions to problems. Generally these are worse (if I'm being honest), but a few times it's been a novel and optimal algorithm that made its way into a product.
I'm very hesitant to change the core process that results in the above: I express whatever natural curiosity I have by trying to build things myself. This is how I stay sharp and able to do interesting things, avoiding atrophy.
I find AI fascinating, and it's neat to see it write code! It's also cool to see some people get a lot done with it. However, mostly I find it about as useful as buying a robot to do weightlifting for me. I guess if AI muscles me out of coding, I'll shrug and learn to do some other fun thing.
Actually, this is not bizarre. The author clearly read my post. A few elements are very similar, and the idea is the same. The author did expand on it though.
I wish they had linked to my post with more clarity than under the word "eroded" in one sentence.
[1] https://www.cyberdemon.org/2023/03/29/age-of-ai-skill-atroph... [2] https://news.ycombinator.com/item?id=35361979
I realized that I can code in recently learned languages only because I can cut and paste; to use those languages I rely wholly on stolen code from web searches for input and on error messages to detect omissions. I put very little effort into creatively thinking through the process myself.
Maybe this is why, after more than 40 years in the business, I no longer enjoy daily programming. I hate simply rehashing other people's words and ideas. So I decided it was time to quit this rat race, and I retired.
Now, if I do get back into coding, for recreation or as a free software volunteer, I'll unplug first and then code from scratch. From now on I want my brain to be fully responsible for and engaged in what I write (and read).
Engineers measure things. It doesn’t matter whether you are producing software, a bridge, a new material, whatever. Engineers measure things. Most software developers cannot measure things. AI cannot measure software either.
So, if you are a software developer who does measure things, your skills cannot be outsourced to AI. There is nothing to atrophy.
That said, if I were a business owner I would hire super smart QAs at a plus 20-50% market rate instead of hiring developers. I would still hire developers, but just far fewer of them. Selection of developers would become super simple: writing skills in natural language (essay), performance evaluation, basic code literacy. If a developer can do those they are probably smart enough to figure out what you need. For everything else there is AI and your staff of QAs.
Maybe a user can open two tabs and manage to submit two incompatible forms. Or a little gap in an API's validations allows a clever hacker to take over other users' accounts. Or a race condition corrupts data and causes a crash loop.
Maybe some are OK with that level of brokenness, but I don't see how software can be robust unless you go into the code and understand what is logically possible. My experience is that AI models aren't very good at this.
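To make the race-condition point concrete, here's a deliberately contrived Python sketch (the account balance and withdraw logic are made up for illustration) of a check-then-act bug that reads fine, passes a casual review, and still corrupts data once two requests interleave:

    import threading, time

    balance = 100  # shared state, think of it as an account row

    def withdraw(amount):
        global balance
        current = balance               # read
        if current >= amount:           # check
            time.sleep(0.01)            # widen the window where another thread interleaves
            balance = current - amount  # act: classic check-then-act race

    threads = [threading.Thread(target=withdraw, args=(100,)) for _ in range(2)]
    for t in threads: t.start()
    for t in threads: t.join()

    # Both withdrawals can pass the check against the same 100, so two 100-unit
    # withdrawals "succeed" when only one should have. A lock (or a database
    # transaction with appropriate isolation) is what makes the second one fail.
    print(balance)

Spotting that the guard and the write aren't atomic is exactly the "what is logically possible" reasoning I mean.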
Preventing Critical Thinking from atrophying is a problem I've been obsessed with for the past 6 months. I think it's one of the fundamental challenges of our times.
There's a bunch of literature, like Bainbridge's "Ironies of Automation" [1], showing what a mistake relying so heavily on automation can be. It leads not just to skill atrophy but to failure, as the human's skill to intervene when needed is lost once they stop doing the more banal tasks (hence the irony).
I've launched a company to begin to address this [2]
My hypothesis is that we need more AI coaches that purposefully bring us challenging questions and add friction - that's exactly what I'm trying to build for Critical Thinking in Business.
Unlike more verifiable domains, business is a good 'arena' for critical thinking because there isn't a right answer, though there are certainly many wrong or illogical answers. The idea is to have an AI that debates you for a few minutes a day on real topics (open questions) that it recommends, and gives you feedback on various elements of critical thinking.
My sense is a vast majority of people will NOT use this (because it's so much easier to just swipe tiktoks) but there are people (like me and perhaps the author) who are waking up to the need to consciously improve critical thinking.
I'm curious what people are looking for in something that helps you get better at Critical Thinking every day?
[1] https://ckrybus.com/static/papers/Bainbridge_1983_Automatica... [2] https://www.socratify.com/
The Microsoft study [1] also mentioned in the blog shows exactly this effect with LLM usage correlated with critical thinking atrophying.
[1] https://www.microsoft.com/en-us/research/wp-content/uploads/...
But even more importantly, the typewriter doesn't have pop-ups / suggestions / distractions.
RUN LOCAL MODELS
Yes it's more expensive. Yes it's "inefficient". Yes the models aren't completely cutting edge.
What you lose in all that, you gain back in resilience, a thing so overlooked in our hyper-optimized 0.01%-faster culture. Also, you can use it guilt free, knowing your input is not being farmed for research or megacorp profits.
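If you want a concrete starting point, here's a minimal sketch using the llama-cpp-python bindings (the model path, context size and prompt are illustrative; any GGUF model you've downloaded works):

    # pip install llama-cpp-python, then point it at a GGUF file on disk.
    from llama_cpp import Llama

    # Everything below runs on your own hardware; the prompt never leaves the machine.
    llm = Llama(model_path="./models/some-7b-model.gguf", n_ctx=4096)

    out = llm(
        "Explain the difference between a mutex and a semaphore in two sentences.",
        max_tokens=128,
    )
    print(out["choices"][0]["text"])

Slower and less polished than a frontier API, sure, but it keeps working when the provider changes terms, raises prices, or goes down.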
Most of what this article is saying is true: you need to stay sharp. As always, this industry changes, and you have to surf what's out there.
Skill fade is a weird way of saying "skill changes". There is no way to keep everything you know in working memory all the time. Do I still have PTSD from malloc/free in C? Absolutely. I couldn't rewrite that stuff right now if you held a gun to my head (RIP), but with an afternoon or so of screwing around I'd be so back.
I don't like the dichotomy of you're either a dumbass: "why doesn't this work" or a genius. Don't let the game tell you how to play, use every advantage you have and go beyond what is thought possible.
For me, LLMs are the self-pedagogy tool I wished I had when I was a teen: for programming, for learning languages, and for keeping me motivated. There's just something different about live rubber-ducking to reason through an idea and having it make to-do lists for things you want to do; it breaks barriers I used to feel.
I am way more knowledgeable about SQL than I have ever been, because in the past I knew so little I would lean on team members to do SQL for me. But with AI, I learned all the basics by reading code it produced for me and now I can write SQL from scratch when needed.
Similarly for Tailwind… after having the AI write a lot of Tailwind for me from a cold start in my own Tailwind knowledge, now I know all the classes, and when it’s quicker, I just type them in myself.
> ...it does point to the potential concerns of students outsourcing cognitive abilities to AI. There are legitimate worries that AI systems may provide a crutch for students, stifling the development of foundational skills needed to support higher-order thinking. An inverted pyramid, after all, can topple over
[0] https://www.anthropic.com/news/anthropic-education-report-ho...
Well, it's a little unfair to blame AI itself, but overconfidence in it, combined with a lack of understanding and default human behaviour, is quite destructive in a lot of places.
There is a market already (!)
The first two years were magical; everything was new and quite difficult. I was utterly driven and dug deep into docs and debugged everything myself.
I got a github copilot subscription about a year ago. I feel dumber, less confident, and less motivated now than I ever did pre-AI. I become easily frustrated, and reading docs/learning new frameworks feels almost impossible without AI. I have mostly been just hitting tab and using Claude edits for the past month or so; even typing feels laborious.
Worst of all, my passion for this craft has drastically waned. I can barely get myself motivated to polish my portfolio.
Might just start turning off autocomplete, abandon edits, and just use AI as a tutor and search engine.
These are often very "novel things" (think of "research", but in a much broader sense than the kind of research that academia focuses on). While it sometimes does happen (though this is rather rare) that AI can help with some sub-task, nearly every output that some AI generates requires quite a lot of post-processing to get it to what I actually want (this post-processing is often reworking the AI-generated (partial) solution nearly completely).
The far-gone age when people did not use AI to code: I remember it, it was last week.
Sure, but last week sucked! This week may be better. I’d like to talk about this week please?
If you want to learn, AI is extremely helpful, but many people just need to get things done quick because they want to put bread on the table.
Worrying about AI not being available is the same as worrying about Google/Stack Overflow no longer being available; they are all tools helping us work better/faster. Even from the beginning we had physical programming books on the shelves to help us code.
No man is an island.
This is already true, and will remain true even if you succeed at not losing any of your own skill. I know some people say different, but for me the speedup in my dev process by collaborating with AI is real.
I think ultimately our job as a senior will be half instructing the juniors on manual programming, and half on instructing the AI, then as AI capabilities increase, they’ll slowly shift to 100% human instruction, because the AI can work by itself, and only has to be properly verified.
I’m not looking forward to that day…
I'm very much concerned with leveraging my humanity on top of AI to develop skills that would've been impossible prior.
What new skills are possible?
That one has no AI nor any kind of IntelliSense, so there I need to type the Python code "by hand". Whenever I do this, I'm surprised at how well I'm doing and feel that I'm even better at it than in pre-GH-Copilot times. Yet it still takes a lot of time to get something done compared to the help AI provides.
I don't have the skills to raise horses, punch machine code into punch cards, navigate a pirate-style sail ship by looking at stars, hunt for my own food in the wild, or process photographic film. I could learn any of these things for fun, if I wanted, but they are not necessary.
But I can train a diffusion model, I can design and build a robot, I can command AI agents to build an app.
When AI can do those things, I'll move onto even higher order things.
The furniture, cutlery and glassware my great-grandparents owned were of much higher quality than anything I can get, but to them having a large cupboard built was an investment on par with what buying a car is to me.
Automated mass production lowered the price at the cost of quality; the same could happen to the white-collar services AI can automate.
I can say, the cutlery inherited from the poorer pair is not great. Some is bent. Some was broken and then repaired with different materials. Some is just rusted. And the designs are very basic.
It’s one of the few surviving things from them, so I haven’t thrown it away but I doubt my kids will want to inherit it since they don’t even know them.
I think survivorship bias plays into effect here strongly.
When engineers simply parrot GPT answers I lose respect for them, but I also just wonder "why are you even employed here?"
I'm not some managerial bootlicker desperate for layoffs to "cull the weaklings", but I do start to wonder "what do you actually bring to this job aside from the abilities of a typist?", especially when the whole reason they are getting paid as much as they are as an engineer, for example, is their skills and knowledge. But if that's all really GPT's skills and knowledge and "reasoning", then all that remains is a certain entitlement as justification.
A downstream effect will also be the devaluation of many accreditations of knowledge. If someone at a community college arrives at the same answer as someone at an Ivy League or top institution through an LLM, then why even maintain the pretenses of the latter's "intellectual superiority" over the other?
Job interviews are likely going to become harder in a way that many are unprepared for and that many will not like. Where I work, all interviews are now in person and put a much bigger emphasis on problem solving, creativity, and getting a handle on someone's ability to understand a problem. Many sections do not allow the candidate to use a computer at all --- you need to know what you're talking about and respond to pointed questions. It's a performance in many ways, for better and worse, and old fashioned by modern tech standards; but we find it leads to better hires.
It's like the argument for not using Gmail when it first came out. Well, it had better not go down then. In the case of LLMs, beefy home hardware and a quantized model are pretty functional, so you're no longer reliant on someone else. You're still reliant on a bunch of things, but more of those are now under your control.
Why do authors think that images like these are better than no images at all?
Does the author fail to recognize his own actions, is this failure on his part or a reinforcement of his fears...? Perhaps not a complete contradiction to his general thesis.
I don't personally like the images. I think he could've put together some sort of collage that would go along better.
[1] https://slate.com/technology/2010/02/a-history-of-media-tech...
The blog really says the same thing that's taught in any educational setting: struggle a little first. Work your brain. Don't instantly reach for help when you don't know; try first, then reach out.
The difference with the LLM is the scale and ease of reaching out, which makes people use it too early and too often.
Otherwise the ability to reason about code gets dulled.
And that driving skill in particular does not apply at all when I use GPS. On the one hand, I miss it. It was a fun super-power. On the other hand, I don't miss folding maps: I wouldn't go back for anything. I hope the change has freed up a portion of my brain to do something else, and that that something else is useful.
To me bad illustrations are worse than no illustrations. They also reflect poorly on the author, so I'm much less inclined to give them the benefit of the doubt, and probably end up dismissing their prose.
This is nonsense. The author implies importance of skill atrophy in the context of a job, and then claims that we ought to care if we "love coding"!
Jobs are vehicles for productivity. Where did we go wrong thinking that they would serve as some profound source of meaning in our lives? One of my hopes is that this societal self-actualization will be greatly accelerated by the advent of AI. We may have to find meaning in something other than generating clever solutions for the problems facing the businesses that pay us for that privilege.
On a related note, I am constantly annoyed by the notion that LLMs are somehow "good" because they allow you to write more code or be more productive in other ways. As far as I can tell there is nothing inherently "good" about productivity in the modern economy. I guess general prosperity is a public good? But most software being written by most people is not benefitting society in any profound or meaningful way, and that's generally the first productivity gain mentioned. Either I'm completely missing something or people just don't want to think critically about this sort of thing.
Nope, you don't need to worry that AI will remove your skills. Those skills are no longer necessary, just like you no longer need to cook outside over firewood. Alternatives will be available. If that means degraded quality, so be it. That will be the norm. That's the new standard. Welcome to the new world. Don't be nostalgic about the good old days.
It's often possible, if the AI has been trained enough, to inquire about why something is the way it is, or to ask why the thing you expected is not right. If you can handle the interaction with a dialectical mindset, it seems to help a lot as far as retention goes.
If API, language and systems designers put more effort into making their stuff sane, cogent, less tedious, and more ergonomic, overreliance on AI wouldn't be so much of a problem. On the other hand, maybe better design would do even more to accelerate "vibe coding" ¯\_(ツ)_/¯.
Bugger off. I’ve used AI for code generation of utility scripts and functions, and the rest of the time as an interactive search engine and explainer of things that can’t be searched for (it doesn’t help that search engines are worse now).
I see the game. Droves of articles that don’t talk about AI per se. They talk about it indirectly because they set a stage where it is inevitable, it’s already here, it’s taken over the world. Then insert the meat of the content which is how to deal with The Inevitable New World. Piles and piles of pseudo self-help: how to deal with your new professional lot; we are here to help you cope...
And no!, I did not read the article.
At least clean up the text on the bloody image instead of just copy and pasting it.