> Students would watch (or fall asleep to) 6-hour videos, code along in their own editors, feel like they got it, and then freeze up the moment they had to write anything from scratch. Classic tutorial hell.
This is why, across history, the tried and true method of learning a craft is an apprenticeship. You, the junior, tag along with a senior. You work in a shop led by a senior of seniors, called a master. Apprentices become craftsmen, craftsmen become masters. AFAIK, the master does not 'offload' project guidance onto non-craftsmen; being the project/product manager/owner is an expected part of the craftsman's role.
I've said this a million times to close friends and at this point I'm only half joking. We, and I'm including myself in the 'developer' crowd although I may not deserve it, have really dropped the ball in not being a 'guild' since way back when. At least since the late 1980s, and certainly since before the Original Boom of software dev as a profession (I'm assuming that was the late 90s? I know not).
(Although I suspect that if that were the case we'd have fewer developers throughout the 00s and 10s, which may have impacted the development of the field itself in unexpected, but likely negative, ways)
An apprentice model doesn't really change that. Your average electrician gets called to many more "here's new construction that we're wiring from scratch" jobs than your average corporate engineer gets "we need to set up a new project from scratch without copying any of our existing files or folders."
I’ve worked on greenfield projects for so long (15+ years at this point) that my experience is the exact opposite. The few times I’ve had to contribute to an existing code base was a painful experience of figuring out where everything is, and why everything is done in such a weird way.
There is no better feeling than doing that first `git commit -m 'initial commit'`
In a way, it is interesting and a bit frightening to know how different my experience is: most engineers are socialized and molded for teamwork and daily standups and code review, all of which are alien (and scary) to me.
Maybe that's why I like LLM-enhanced programming, I can use it to get the project up to speed and I'm perfectly capable of taking over after that. I'm not that precious about the scaffolding used to bring the project up.
Here's my 2 cents: take any advice on professional matters from someone that has never left academia with a heaping dose of salt.
Can you clarify what that means to you? The full stack of modern web solutions spans from datacenter hardware selection to client device PCB design, and I don't think any single person can realistically manage all of that by themselves.
They wouldn't be able to spin up a RabbitMQ queue and Postgres on bare metal.
And in our modern world, universities are still the best place for such an apprenticeship. Not the ones Mark Tarver describes (https://marktarver.com/professor.html), of course, but a self-respecting university will train its students with progressively challenging and practical assignments. We started with implementing simple data structures and algorithms and solving simple puzzles, and went all the way to implementing toy OSes, databases, persistent data structures, compilers, CPUs, discrete simulations, and machine learning models. We started by implementing functions and individual components and quickly moved on to building things from scratch. I'm forever grateful for the training I received at my university.
I spent a good portion of my life in Universities -- and went as far as one can go in terms of educational credentials and taught at the university level -- and I cannot disagree more.
Universities produce job skills incidentally, if at all. It's simply not their goal [1]. Even today, at the best CS programs in the country, it's possible to get a degree and still not be better than a very junior engineer at a software company (and quite a few graduates are worse).
> We started with implementing simple data structures and algorithms and solving simple puzzles all the way to implementing toy OSes, databases, persistent data structures, compilers, CPUs, discrete simulations, machine learning models.
This was not my experience, nor is it what I have seen in most university graduates. It's still quite possible for a CS grad to get a degree having only theoretical knowledge in these topics, and no actual ability to write code.
This leaves open the question of where "the best place" is to learn as-practiced programming [2], but I tend to agree with the root commenter that the best programmers come up through a de facto apprenticeship system, even if most of them spend time in universities along the way.
[1] Their goal is to produce professors. You may not realize this if you only went as far as the undergraduate diploma, but that is mostly what academics know, and so it is what they teach. The difference between the "best" CS programs and the others is that they have some professors with actual industry experience, but even then, most of them are academics through and through.
[2] Code academies suck in their own ways.
Having been self taught in both software and electrical engineering, I’ve experienced a lot of this.
In EE, it's amazing how many graduates come into the job without ever having used Altium/KiCAD/Cadence for a nontrivial project, or who can give you a very precise definition of impedance but don't know how to break out an engineering calculator to set design rules for impedance-controlled differential pairs. Or worse yet, people who can give you all the theory of switch-mode power supplies but can't read datasheets and select parts in practice.
True, grad school was focused on making professors - I did a master's, ended up being a lecturer for a while. Now a 20+ year software developer in the valley. But undergrad was focused on blending theoretical and practical skills. If they didn't, employers would have stopped coming back to hire co-op students, and would stop hiring the students at a high rate when they graduate.
I COULD have learned a lot of software myself - I was already coding in multiple languages before attending and had a software side contract before ever going in - and that was before the "web", so I had to buy and study books and magazines, and I was able to do that reasonably well (IMHO).
Yet I never regretted my time in school. In fact, I had a job offer with my last employer before going back to grad school, and they hired me as a summer contractor at a very nice hourly rate back then.
Kinda funny when you think about it.
Looking back, I'd consider my University degree to be essentially a 4 year pause on growing my programming skills.
I admit that most development tasks don't need the knowledge you get from a CS degree, but some do.
But in computer science, it's also totally possible to be self-taught. I've learnt a lot on my own, especially after university. Computer science is good for that because it's generally accessible: you don't need an expensive lab or equipment, you can just practice at home on your laptop.
I think it's important to differentiate the personal achievement of students and the training offered by their universities. For instance, the courses offered by CMU and MIT are super useful - insightful, practical, intense, and sufficiently deep. That said, it does not mean that every MIT/CMU graduate will reap the benefit of the courses, even though many will.
It goes without saying that this does NOT mean people can't teach themselves. I'm just saying universities offer a compelling alternative for training the next generation of engineers.
And my wife's experience: https://www.quora.com/What-is-it-like-to-learn-computer-scie...
In short, the training that we got from our universities was invaluable, and I always feel fortunate and grateful to my CS department.
On the other hand, we certainly learned more after graduation (or something is wrong, right?). When I was in the AI course, the CS department was all about symbolic reasoning; I didn't even know that Hinton was in the same department. I think what matters is that the core training stayed with me and helped me learn new stuff year after year.
You might have had a point a few decades ago when the information itself was difficult to find, but with the internet and online courses, it's easier than ever to teach yourself in a "nontraditional" setting.
Those classes unlocked a whole new level of programming for me. I just didn't know what I didn't know before.
People keep reinventing the same shit if they haven't learned about it before.
Sure, you can learn many things online. But for most things you just don't even know that they exist, you wouldn't know to search for them.
Not reading and watching.
Pure and simple.
Programming only clicked for me when I had a goal in mind and started reading documentation: change the color of the button when I click it. How to do something on click? How to change the color of an element? Etc. From there my goals became bigger and reading documentation and examples along the way got me to where I am today.
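To make that concrete, here's roughly what that first goal boils down to, sketched in plain browser TypeScript (the element id and the color are arbitrary placeholders, not anything from the original story):

```typescript
// Minimal sketch of the "change the button's color when I click it" exercise.
// "my-button" and the color value are placeholders.
const button = document.getElementById("my-button");

if (button instanceof HTMLButtonElement) {
  // "How to do something on click?" -> attach a click event listener.
  button.addEventListener("click", () => {
    // "How to change the color of an element?" -> set an inline style.
    button.style.backgroundColor = "tomato";
  });
}
```

From there, each bigger goal is just more of the same: pick a target, then read the docs for the pieces you're missing.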
Video is the true deception. I was trying to design sewing patterns recently, and as a novice I watched a few videos. None of them ever got me to the point where I could design something myself. It was only when I read a book about pattern design that the concepts stuck. I think the friction of reading, parsing the info, and then acting on it is what allows learning to happen.
Therefore reading and watching are not the key to success.
But it's essentially impossible to learn without information about a subject.
How do you suppose someone learns programming without reading documentation? Without reading code examples? This is active reading, compared to passive reading such as reading a novel.
One reads all of the C++ spec from cover to cover. Remembers every single word of it, doesn't write a line of code.
One just starts fumbling around, writing code, reading just enough docs to get where they need to go.
Which one is the better programmer?
Doing and thinking solidify it and teach you to use the things you've read about
You need both
When I'm learning something new I like to skim a bunch of content upfront to get an idea of what's there
It doesn't. That's the problem.
It fills your brain with procedure. For a short time.
If you solidify the procedure, you will be able to perform that one task. Which, in software development, is still useless.
Only at the next step, where you know so much that you can come up with your own new procedures, do you reach basic competence at software development. There are other professions like this, but in most, basic competence comes before you even solidify the procedures.
For most simple problems, it's true that taking the seemingly shortest path to solving the problem is good enough. There are other problems where you simply have to understand the abstractions at a deeper level than you can visualize in code. It's there that things like reading a textbook or taking a course can help.
I mean: if you're learning a new language/library/framework it's really useful to have a broad idea of what the tooling for it looks like.. what features does it offer? You can look up the details when you need to
It's really useful to have a broad knowledge of algorithms and what problems they're applicable to. Look up the details later
If you're going into a new domain.. know the broad, high level pieces of it. You don't need to pre-learn the specifics of websockets but if you don't even know they exist or what they're useful for in web development.. that's kind of a problem
Even for more abstract concepts, like how to design code, there's a lot of good info out there
If every generation had to re-invent the wheel from scratch we'd never get anywhere. The problem people have is they think ONLY reading is enough
FTFY. Then you have to do it yourself, and none of that instant gratification comes. Tutorials are junk food that appears useful. Looking at one to solve a specific problem might help; trying to learn something by spending an unspecified number of hours watching randos with polished YouTube videos is akin to scrolling TikTok and feeling you're doing something useful.
If you're trying to work out at the gym on your own without reading anything about it first, you'll probably make a mess of it.
There's a lot of info out there about how to train at the gym, as well as how to write code. People who know how to read can certainly get a long way by reading a few simple tutorials.
The alternative has been to massify education for 'students' (not apprentices) in passive lectures with 'exercises/homework', which does not work as well for most things and particularly for crafts.
BTW, for a very minor portion of the population the 'student' route is just as effective as the 'apprentice' route, but in my experience those people are the exception
From where I stand, we're never going to find what you want in the workplace for reasons which predate LLMs: job hopping, remote/hybrid work, incurious managers etc.
I would argue that, across history, the tried and true method of learning a craft is access to knowledge.
It used to be that you had to sit next to a master to access the knowledge of the master. Now in many fields, you can just find it on the internet and learn at home, at your pace.
Everyone is different, what's best for you may not be what's best for me. But what is absolutely necessary is access to knowledge.
Juniors need to just accept they will have to learn the hard way, on their own, asking occasional questions and looking at tutorials until stuff sticks.
But - I'm not really sure it's necessary in software. The skillset can be entirely self taught if you're intelligent enough. There are an abundance of resources and all it requires is a terminal. Good software engineering principles can be covered in a 200 page book.
You can't say the same for trades like plumber, electrician, etc. which still use apprenticeships.
By contrast software just isn't comparable at all. You can sit at your desk, pay $0, and the only limitations to your experience is the amount of time you're willing to dedicate.
> You can't say the same for trades like plumber, electrician, etc. which still use apprenticeships.
Yes you can, and yet they still have apprenticeships.
I also do not think history shows that was the most effective. That is how it was done when it was the only option.
Citation needed - at least for anything software development. Every single respectable software dev I've met in my age bracket or older (40+) was self-taught. Mostly because in the 80s or 90s there wasn't much opportunity otherwise. But computers shipped with handbooks on how to program them at that time.
I view the rise of these tools and particularly efficacy in programming as an indictment against modern programming. The modern web is both amazing and horrific. If bureaucratic is "using or connected with many complicated rules and ways of doing things" (Britannica), then modern programming may be the ultimate poster child. Sure, we love to slap this on "civil institutions", but the fact that I need an automaton, answers based on probability, to guide me in how to navigate doing some of the simplest things, is pretty sad (IMO).
I used to counsel aspiring new programmers, "It's not about knowing a certain language or framework. Your single most important asset will be an aptitude to constantly keep relearning. Some trends will stand out along the way, but you'll never quit learning new tools and languages".
Maybe it's just my age, but it feels like we've overflowed at some point.
Early programming was too electrical, too mathematical, so pioneers sought to close the gap between coding and human thought. And yet, after years of speculative funding, what we're left with is a whole different set of problems.
The automaton is not categorically different from the book or the teacher.
The fact that some number of people aren't adept at these things is an invariant of human nature, don't blame the tools getting better.
But we have exactly the same number of reviewers. How the heck are we gonna deal with it when we cannot use LLMs for sanity checking LLM code?
Like literally yesterday I had a not-technical person who used codex to build an optimization algorithm, and due to the momentum it gained I was asked to “fix the rough edges and help with scaling”.
The entire thing was trash (it was trying to do naive search on a combinatorial problem with 1000s of integers, and was violating constraints with high probability, including integrality). I had to spend my whole day reviewing it and making a technical presentation to their leadership showing that it was just a polished turd.
Unit testing. LLMs are very good at writing tests and writing code that is testable (as long as you ask), and if you just check that the tests are actually calling the code, cover all the obvious edge cases, and assert the correct results, that's actually quite fast to review -- faster than reviewing the code.
And you can include things like performance testing in tests as well.
We're moving to a world where we work with definitions and tests and are less concerned with the precise details of how code is written within functions. Which is a big shift in mindset.
Having the LLM write the tests is… well, a recipe for destruction unless you babysit it and give it extremely specific restrictions (again, I’ve done this in mid to large sized projects with fairly comprehensive documentation on testing conventions and results have been mixed: sometimes the LLM does an okay job but tests obvious things, sometimes it ignores the instructions, sometimes it hardcodes or disables conditions…)
You take the smallest value and biggest value, do they work?
Take something in the middle, does that work?
Get the smallest and make it even smaller, does it break?
Get the biggest value, make it bigger, does it break?
GOTO 10
And when you've got the pattern down, checking the rest is mostly just copying and pasting with different values for "smallest" and "biggest".
Something an LLM is very very good at.
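As a rough sketch of what that copy-paste boundary checking looks like in practice (the `clamp` function and its bounds here are made-up stand-ins for whatever unit the LLM actually wrote):

```typescript
// Hypothetical unit under test: clamp a value into the range [MIN, MAX].
const MIN = 0;
const MAX = 100;
const clamp = (x: number): number => Math.min(MAX, Math.max(MIN, x));

// The boundary-value checklist from above, written as plain assertions.
console.assert(clamp(MIN) === MIN, "smallest value works");
console.assert(clamp(MAX) === MAX, "biggest value works");
console.assert(clamp(50) === 50, "something in the middle works");
console.assert(clamp(MIN - 1) === MIN, "below the smallest is clamped, not broken");
console.assert(clamp(MAX + 1) === MAX, "above the biggest is clamped, not broken");
```

Each new value under test is just another line with different numbers, which is exactly the kind of repetition an LLM churns out reliably.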
Also you should always use another LLM to critique your primary one (or the same LLM with a clear context). I've found that gpt-5-high is VERY good at finding subtle bugs Claude will never catch. It can fix them immediately when I give it the Codex commentary though.
Inferring intent from plain english prompts and context is a powerful way for computers to guess what you want from underspecified requirements, but the problem of defining what you want specifically always requires you to convey some irreducible amount of information. Whether it’s code, highly specific plain english, or detailed tests, if you care about correctness they all basically converge to the same thing and the same amount of work.
That's the part I'd push back on. They're not the same amount of work.
When I'm writing the code myself, it's basically a ton of "plumbing" of loops and ifs and keeping track of counters and making sure I'm not making off-by-one errors and not making punctuation mistakes and all the rest. It actually takes quite a lot of brain energy and time to get that all perfect.
It saves a lot of time to write the function definition in plain English, have the LLM generate a bunch of tests that you verify are the correct definition... and then let the LLM take care of all the loops and indexing and punctuation and plumbing.
I regularly cut what used to be an entire afternoon or day's worth of work down into 30 minutes. I spend 10 minutes writing the design for what will be 500-1,000 lines of code, 5 minutes answering the LLM's questions about it, 5 minutes skimming the code to make sure it all looks vaguely plausible (no obvious red flags), 5 minutes ensuring the unit tests cover everything I can think of (almost always, the LLM has thought of a bunch of edge cases I never would have bothered to test), and another 5 minutes telling it to fix things, like its unit tests make me suddenly realize there's an edge case that should be defined differently.
The idea that it's the "same amount of work" is crazy to me. It's so much more efficient. And in all honesty, the code is more reliable too because it tests things that I usually wouldn't bother with, because writing all the tests is so boring.
All of that "plumbing" affects behavior. My argument is that all of the brain energy used when checking that behavior is necessary in order to check that behavior. Do you have a test for an off by one error? Do you have a test to make sure your counter behaves correctly when there are multiple components on the same page? Do you have a test to make sure errors don't cause the component to crash? Do you have a test to ensure non utf-8 text or binary data in a text input throws a validation error? Etc etc. If you're checking all the details for correct behavior, the effort involved converges to roughly the same thing.
If you're not checking all of that plumbing, you don't know whether or not the behavior is correct. And the level of abstraction used when working with agents and LLMs is not the same as when working with a higher level language, because LLMs make no guarantees about the correspondence between input and output. Compilers and programming languages are meticulously designed to ensure that output is exactly what is specified. There are bugs and edge cases in compilers and quirks based on different hardware, so it's not always 100% perfect, but it's 99.9999% perfect.
When you use an LLM, you have no guarantees about what it's doing, and in a way that's categorically different than not knowing what a compiler does. Very few people know all of the steps that break down `console.log("hello world")` into the electrical signals that get sent to the pixels on a screen on a modern OS using modern hardware given the complexity of the stack, but they do know with as close as is humanly possible to 100% certainty that a correctly configured environment will result in that statement outputting the text "hello world" to a console. They do not need to know the implementation because the contract is deterministic and well defined. Prompts are not deterministic nor well defined, so if you want to verify it's doing what you want it to do, you have to check what it's doing in detail.
Your basic argument here is that you can save a lot of time by trusting the LLM will faithfully wire the code as you want, and that you can write tests to sanity check behavior and verify that. That's a valid argument, if you're ok tolerating a certain level of uncertainty about behavior that you haven't meticulously checked or tested. The more you want to meticulously check behavior, the more effort it takes, and the more it converges to the effort involved in just writing the code normally.
Except it doesn't. It's much less to verify the tests.
> That's a valid argument, if you're ok tolerating a certain level of uncertainty about behavior that you haven't meticulously checked or tested.
I'm a realist, and know that I, like all other programmers, am fallible. Nobody writes perfect code. So yes, I'm ok tolerating a certain level of uncertainty about everybody's code, because there's no other choice.
I can get the same level of uncertainty in far less time with an LLM. That's what makes it great.
This is only true when there is less information in those tests. You can argue that the extra information you see in the implementation doesn't matter as long as it does what the tests say, but the amount of uncertainty depends on the amount of information omitted in the tests. There's a threshold over which the effort of avoiding uncertainty becomes the same as the effort involved in just writing the code. Whether or not you think that's important depends on the problem you're working on and your tolerance for error and uncertainty, and there's no hard and fast rule for that. But if you want to approach 100% correctness, you need to attempt to specify your intentions 100% precisely. The fact that humans make mistakes and miscommunicate their intentions does not change the basic fact that a human needs to communicate their intention for a machine to fulfill that intention. The more precise the communication, the more work that's involved, regardless of whether you're verifying that precision after something generates it or generating it yourself.
> I can get the same level of uncertainty in far less time with an LLM. That's what makes it great.
I have a low tolerance for uncertainty in software, so I usually can't reach a level I find acceptable with an LLM. Fallible people who understand the intentions and current function of a codebase have a capacity that a statistical amalgamation of tokens trained on fallible people's output simply do not have. People may not use their capacity to verify alignment between intention and execution well, but they have it.
Again, I'm not denying that there are plenty of problems where the level of uncertainty involved in AI-generated code is acceptable. I just think it's fundamentally true that extra precision requires extra work; there's simply no way to avoid that.
I think that's what's leading you to the unusual position that "This is only true when there is less information in those tests."
I don't believe in perfection. It's rarely achieved despite one's best efforts -- it's a mirage. What we can realistically look for is a statistical level of reliability that tests help achieve.
At the end of the day, it's about delivering value. If you can on average deliver 5x value with an LLM because of the speed, or 1.05x value because you verified every line of code 3 times and avoided a rare bug that both the LLM and you didn't think about testing (compared to the 1x value of a non-perfectionist developer), then I know which one I'm choosing.
The unit tests LLMs generate are also often crap, testing tautologies, making sure that your dependencies act as specified without testing the actual code, etc.
I’m pretty confident that most developers, again including myself, just really enjoy knowing something is done well. Being able to separate yourself from the code and fixate solely on the outcomes can sometimes get me past this.
You got more diabetes? Use more insulin :x (insulin is very good at handling diabetes) (analogy).
Seniors will tell you: the more seniority you get, the more code you delete. So I don't think more cushioning for higher jumps is the solution; sometimes you don't need to jump from that high.
We're moving to Junior Generative Juniors, recursively.
But if you have a lot of unit tests and need to make a cross-cutting refactor you run into the same problem that you always have if all your coverage is at the unit level. Now your unit boundary is fundamentally different and you need to know how to lift and shift all the relevant tests to the relevant new places.
And so far I've been less impressed by the "agents"' attempts at cross-cutting integration testing since this usually requires selective and clever interface setup and refactoring.
LLMs have a habit of creating one-off things for particular unit test scenarios that doesn't scale well to that problem.
LLMs can help with reviews as well. LLMs are not too bad at reviewing code; GPT 5 for example can find off-by-one, missed returns, all sorts of problems that are localized. I think they have a harder time with issues requiring a higher-level global understanding. I wonder if in the future you could fine-tune an LLM on a big codebase (maybe nightly or something) and it could be the first-level reviewer for all changes to that codebase.
Honestly, this may be the only way to go about it.
If you want to be seen as the hero who solves things instead of the realist who says why other solutions won’t work, this could be worth exploring.
But why didn't the AI expert solve it using ChatGPT? If it has to land with an expert for reimplementation from scratch after wasting a day reviewing slop, did we gain productivity?
It sort of is, indirectly, and I agree with pretty much everything.
But the bit about sycophancy was particularly enlightening. I actually thought "plain" ChatGPT-like interfaces could be good for learning. But the Youtube ROAS example is really powerful. If the student can skew the teacher's conclusions so much just by the way they phrase their questions/answers, we're going to mislead new programmers en masse.
I'm not even sure that the extensive prompting they say they use for their "Boots" is good enough.
I guess in the age of AI you still need someone to repeatedly reject your pull requests until you learn. And AI won't be that someone, at least for now.
We go back to the original prediction that the tool will help those who both want help and are painfully aware of LLMs' peculiar issues.
I always try to stay above this by prompting the question twice, with the opposite biases. But I of course don't know, which hidden biases I have that the LLM still reinforces.
While I am not learning "coding" as a beginner anymore, I am constantly learning new frameworks, language features, algorithms etc. as is the norm in the industry, and I disagree it's bad to use AI auto-complete. Pre-AI IntelliSense-style autocomplete from Visual Studio or ReSharper makes learning new libraries and language features much easier. ReSharper for example will suggest taking advantage of new language features when you write something an older way, and many times this was my introduction to that new feature.
The new AI-based autocomplete can be even better; it demonstrates one way to do something and regardless of whether you use it or not, you can learn from it. You have to be curious enough to actually read what it is doing, but if you lack that curiosity it isn't AI's fault (before AI this was just "copy-pasting from Stack Overflow").
AI autocomplete essentially searches Stack Overflow for you and pastes the first answer without context, adjusting it to match your code. If you are learning, just do the Stack Overflow search yourself, or prompt your favorite chatbot if you insist on using AI, so you at least get some explanation of why it is done like this.
The way to input Japanese is to type the word phonetically on a regular qwerty keyboard. The computer then finds all the writings that match and orders them by likelihood.
Calling that "AI" may be a little much for 70s tech, but it is definitely machine learning, as the machine is able to match patterns and take previous choices into account.
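A minimal sketch of that idea, with a made-up three-entry dictionary (real IMEs use far larger dictionaries and language models):

```typescript
// Map a phonetic reading to its possible writings. "kami" can be
// 紙 (paper), 神 (god), or 髪 (hair) -- same sound, different writings.
const candidates: Record<string, string[]> = {
  kami: ["紙", "神", "髪"],
};

// How often the user picked each writing before (previous choices).
const pickCounts: Record<string, number> = { "神": 3, "紙": 1 };

// Order the matching writings by likelihood, i.e. by past picks.
function suggest(reading: string): string[] {
  return (candidates[reading] ?? [])
    .slice()
    .sort((a, b) => (pickCounts[b] ?? 0) - (pickCounts[a] ?? 0));
}

console.log(suggest("kami")); // ["神", "紙", "髪"]
```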
I want to be able to say I've tried and done things when speaking with highly technical people. I've been a 'programmer' since I was 10 (I'm 35 now) but never joined the workforce as a programmer; I don't know why. Now that AI is here, though, the love for coding, tinkering, making system-level things, and trying things like WASM (which may be the future of our www) all give me that joy. I found my limitations as a programmer and excelled because I have different skillsets.
I love learning that doing something MY way is a good idea, but has been thought of and some amazing programmer already built the ground-work for it.
My Cursor AI agent even set up git for my projects so I can easily push with my SSH keys: do I know I can do that myself? Yes. Do I want to? No.
> ReSharper for example will suggest taking advantage of new language features when you write something an older way, and many times this was my introduction to that new feature.
That's actually news to me and sounds amazing. I started coding with C syntax when I was young. You learn habits then, it sticks with you.
I'm since enjoying python for backend things, flask for little webserver stuff and javascript for front-end things.
WASM Python ain't there yet, but I _love_ tinkering. I _love_ finding bugs. I _love_ poking and prodding at how things work. I'm almost always re-inventing the wheel with concepts but you know what? At least it's mine and I can tinker and learn.
Some of us enjoy the craft as a hobby and learning. Even within my teams some are more sophisticated tech wise than I am; to get on their level remotely requires me to tinker.
Oftentimes, the solutions I find for my problems are the simplest ones; engineering minds like to overcomplicate things.
It won't write big chunks, so it won't hinder your learning.
I bet you are most likely just blindly trusting the AI response and moving on. Sure, the code structure might check out and the calls it completed are sometimes fairly generic/predictable, but there will be plenty of situations where the behavior is just different enough, or the black boxes are something you have no idea about, and you are too lazy to check the docs and commit the code anyway.
I've disabled it.
I won't pretend that it's turned me into a senior engineer, of course, but it's definitely gotten me over the 0 to 1 problem much quicker than I think I could have without it nudging my code in the right direction.
For what it's worth, I don't ask Copilot to write the code, I just use it as an advanced auto-complete, reading the suggestion to see if I agree with it before hitting tab.
BS code smells the same in any language.
Beginner devs don't even know what smelling means.
What happened is that companies tried to push the idea that this new AI thing would be inhospitable to whoever was already an experienced programmer. The idea of a "new land", fair and equal to all. Smelling wouldn't matter, because all the smells would be new and unfamiliar.
After insisting on this silly mistake for a while, they realized that experienced programmers were actually their only viable target audience, and attempted to change their approach accordingly. It's embarrassing.
In the land of vibe-coders, the old man is king.
Indeed. For me this feels like an “I saw the best minds of my generation” moment.
* in keeping with https://news.ycombinator.com/newsguidelines.html: "Please use the original title, unless it is misleading or linkbait."
Tutorials (at least the good ones) give you some knowledge - the tutorial often explains why they do what they do and how they do it - but they don't give you any skill: you just follow what other people do, you don't learn how to build stuff on your own.
Vibe coding, on the other hand, gives you some skill - how to build stuff with AI - but doesn't give you the necessary coding knowledge: the AI makes all the decisions for you and doesn't explain why or how it did what it did, it just does it for you.
"I can't do anything without Cursor's help" is not really the problem. The problem is that vibe coders create some stuff and they don't understand how that stuff works. And I believe this is much bigger problem than knowing how stuff works but not knowing how to use it.
Learning doesn't need to be "uncomfortable". Learning needs to be "challenging". There is a difference. The suggested approach here vaguely reminds me of the "you must first learn how to code in a notepad before using an IDE" approach.
While the real takeaway should be "you must first learn how to learn, before properly learning something". To learn something properly, you need 2 things: To know what to learn, and to know when you've learned it. To know what to learn you need a curriculum - this obviously depends on your specialization for coders, and can be more crude or more detailed, but you still need something to follow so that you can track your progress. "When you've learned it" for coders is when you can explain what some code does to a colleague and answer questions about said code. It doesn't matter if you wrote it, or someone else wrote it, or an AI wrote it. Understanding code you didn't write is even more important than understanding your own code.
I had a deep rooted emotional response to this. One of the most gruelling and somewhat distressing experiences of learning to program was going through a tutorial, kind of getting it, then trying to make my own spin of the same idea and getting completely stuck.
But I’m also convinced that this gruelling process was the highest density learning I’ve ever done. I’ve learned much more since then, and a lot of considerably more complex things. But I’ve never matched the same density of learning.
The closest was probably high school math. That deeply uncomfortable “this hurts my brain and is stressing me out” feeling that I suspect isn’t normal for everyone.
It's trained on the masses ... well, unfortunately, the masses are wasting a lot of dough
Makes having good tests even more important. One technique I've found super helpful for coding with agents is to make the agent do TDD.
Basically ask the agent to come up with the test cases first, manually review those to make sure they make sense, then have the agent game itself to write code to pass the tests. I feel like doing TDD on my own manually is very tedious but having it be AI-assisted helps me move a lot faster.
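In practice the loop can be as simple as the sketch below: the agent proposes test cases against a stub, you check that the expected values really are what you want, and only then is it allowed to write the implementation (`slugify` and its cases are hypothetical, just to show the shape of the workflow):

```typescript
// Step 1: the agent proposes test cases against a stub; a human reviews them.
// `slugify` is a made-up example function, not yet implemented.
function slugify(title: string): string {
  throw new Error("not implemented yet"); // filled in only after tests are approved
}

const cases: Array<[input: string, expected: string]> = [
  ["Hello World", "hello-world"], // lowercased and hyphenated
  ["  padded  ", "padded"],       // whitespace trimmed
  ["C++ & Rust!", "c-rust"],      // punctuation dropped
  ["", ""],                       // empty stays empty
];

// Step 2: run the cases; they all fail until the agent writes the real code.
for (const [input, expected] of cases) {
  let actual = "<error>";
  try {
    actual = slugify(input);
  } catch { /* expected while only the stub exists */ }
  console.assert(actual === expected,
    `slugify(${JSON.stringify(input)}) gave ${actual}, expected ${expected}`);
}
```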
Of course things were much simpler. You had an editor, and a compiler that you ran from the command line. At some point you would learn about Makefiles, but not before you would appreciate their value.
And there was no CI, no source control, no IDEs, no TDD frameworks.
I can see that throwing a brand new developer into something like Visual Studio would be overwhelming. Even I find it overwhelming after three decades. I still use emacs and a shell.
I feel the same about "what books do you recommend reading to learn Y" Have you tried looking at the online documentation?
I usually see it from people who have a formal CS education. They learned one way to learn things and refuse to adapt to real life.
As an experienced dev, I hate this trend. I don't need my hand held for 10 minutes; I need to see three specific lines of config that may or may not be somewhere in the video.
That's scary, because you cannot escape from the repercussions just by not being one of the dummies who relinquished all learning to AI.
You depend on being surrounded by other people who know what they are doing. And not just immediately surrounded, but in a broader scope.
Critical thinking is hard.
I grew up with a learning disability. I was extremely curious and able to hyper-focus on learning, but only when I found it interesting or easy to pick up and run with ideas. Other kids didn't have the same problem as me, so they excelled while I dragged behind. What I learned (after attending 5 schools in 2 years) is that I have to find my own path to learning that works for me.
It's impossible for me to focus on dense text. You could point a gun at my head and I still couldn't absorb the information. I need spatial learning. Moving pictures, flow charts, multi-level cutaways. Lists and sections broken up into hierarchies with clear simple headings and compartmentalized concepts. This way my brain can organize a literal map of the information for me to traverse later. But some other people might find that a nightmare.
At the same time, I learned programming extremely slowly, because I only used the methods that were easy to me. I just gave myself projects to accomplish and used trial and error to slowly learn how the language worked, along with a book for reference. It took me years to finally understand the academic underpinnings of how languages (and software) worked. I wish I could've seen a map of the different concepts, to reinforce what I needed to know to learn the next thing.
But there's also different kinds of information which need different learning methods. What's a sine, cosine, and tangent? I honestly still don't know, because the words themselves are foreign to me. For that I would need some kind of Duolingo-style repetitious-card-memory-trick-thing to even remember what word is what concept.
I don't know any framework to break up any subject into multiple course methods. And AI can't do it either. AI sucks at visualizations, and it doesn't have a deep understanding of how to teach things in multiple ways. EdTech needs to be extremely careful not to put all its cards into one "thing" if that thing can't do what people need. (That said: AI is great at quickly explaining things you don't know, and providing you an insanely fast path to the information you need)
In terms of CS itself, I feel like what we're lacking is a big-ass wikipedia or knowledge base. A lot of it is in Wikipedia, but not nearly detailed or interlinked enough. Once you have all the content, then you can reorganize them into different curriculums for different learning styles. But these are two separate problems. The tutorials are way too shallow, and the dense academic verbiage is far too detailed. You need a way to intermix them that's tailored to the user.
They are conversion functions between different fraction-based ways of measuring angles.
You can draw a right triangle for the angle you want to build and you can measure it based on the ratio of any two sides of the triangle.
You can also view the angle as a fraction of a circle. It's up to you decide whether a full circle counts as 360 or 2pi (or 400 or 1 or whatever).
sin/cos/tan and their inverses let you convert between the two. Both are useful, neither is always better. The conversions let you use whichever is easier.
The sine/cosine names don't really make sense in Indo-European languages because they are based on terribly mangled old Arabic. No, they do not come from the Latin word "sinus" = bay or bend. Yes, they probably did affect the direction of the mangling because there was this nice Latin word that looked like it ought to have something to do with it... but they started out as Arabic.
The name of the tangent function comes from the geometric tangent, a line that touches a curve. Tangent comes from a Latin word meaning to touch -- hence why the keys on a keyboard are called that in some languages. If you do some fancy geometric drawing involving a unit circle, a radius, and an angle, the tangent function naturally appears as the length of a line segment that 1) just touches the perimeter of the circle and 2) is at a right angle to the radius.
Tan is the slope of the radius line: sin(angle)/cos(angle).
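A tiny sketch of the conversion in both directions, using degrees as the "fraction of a circle" unit (the specific numbers are just for illustration):

```typescript
// Angle -> ratios: for a 30° angle, what are the side ratios of the right triangle?
const deg = 30;
const rad = (deg * Math.PI) / 180;   // same angle expressed as a fraction of 2*pi
console.log(Math.sin(rad));          // opposite / hypotenuse, ~0.5
console.log(Math.cos(rad));          // adjacent / hypotenuse, ~0.866
console.log(Math.tan(rad));          // opposite / adjacent = slope, ~0.577

// Ratio -> angle: a line that rises 1 for every 2 it runs has slope 0.5.
const slope = 1 / 2;
console.log((Math.atan(slope) * 180) / Math.PI); // ~26.57°, the angle of that line
```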
How do you remember the fraction for the slope of a line? I use a mnemonic: dydx ("dydex").
I draw on the ratios I memorized in high school, e.g. sin=opposite/hypotenuse, but 40+ years later sometimes I'm not sure so I look it up online.
My needs for trigonometry are separated on a scale of years, and at some point knowledge unused is knowledge forgotten.
sine is opposite over hypotenuse
cosine is adjacent over hypotenuse
tangent is opposite over adjacent
It also has the advantages of being language/culture blind.
And this is why it took 8 hours for me to do math homework every night.
Is this not the true promise of technology?
"Now we do x,y, z, and voila! here you have it, a fully fledged (whatever)." Ok, but what did you just do? Why doesn't it work on my machine? etc. I've seen tutorials that do this stuff right and it's a very obvious night and day difference.
Tutorials fundamentally exist to serve a different purpose: to orient people within the subject matter, when they don't even know what question to ask. Going through steps in order is important so that the student can focus. Intentionally going down wrong paths can be counterproductive for the neophyte, because it means having about as much experience doing the wrong thing as the right thing. Debugging is a general skill, but technology-specific debugging can and probably should be taught separately from the "happy path".
A properly done tutorial will properly show the steps, and will have been tested to ensure that it can in fact be expected to work on everyone's machine. The parameters for success will be controlled as tightly as possible.
Take notes as you go. Type the code manually. Experiment with variations of the code. It does help your brain encode the information.
Kind of an aside, but I also think that ChatGPT is a great car-ride companion, and that the "conversation mode" that currently exists is pretty bad, primarily due to its response-length limitation. If you're driving and verbally chatting with ChatGPT, having it only give you 500-word responses is infuriating because you can't really get into any level of depth without going in constant circles, which isn't a problem that text-based chats have to the same extent.
A: https://chatgpt.com/share/68e940a1-953c-8011-a8f2-3a1a0c51be... B: https://chatgpt.com/share/68e94067-ec74-8011-88e5-9d27670f31...