It is an incredible accelerant in top-down, 'theory-driven' learning, which is objectively good, I think we can all agree. Like, it's a better world having that than not having it. But at the same time there's a tension between that and the sort of bottom-up, practice-driven learning that's pretty inarguably required for mastery.
Perhaps the answer is as mundane as: one must simply do both, and failing to do both will just result in... failure to learn properly. Kind of as it is today, except that today there's often no truly accessible or convenient top-down option at all, so it's not a question anyone thinks about.
The biggest difference I see is that, pre-LLM search, I spent a lot more time looking for a good source for whatever I needed, and I probably picked up some information along the way.
One thing I've noticed is that I've actually learned a lot more about code I didn't understand before, just because I built guardrails to make sure things are built exactly the way I like them to be built. And then I've watched my AI build them that way dozens of times now, start to finish. So now I've seen all the steps so many times that I understand a lot more than I did before.
This sort of thing is definitely possible, but we have to do it on purpose.
I feel like the way I'm building this in is a violent maintenance of two extremes.
On one hand, fully merged with AI and acting like we are one being, having it do tons of work for me.
And then on the other hand is like this analog gym where I'm stripped of all my augmentations and tools and connectivity, and I am being quizzed on how well I can do just by myself.
And how well I do in the non-augmented (NAUG) scenario determines what tweaks need to be made to my regular augmented (AUG) workflows to improve my NAUG performance.
Especially for those core identity things that I really care about. Like critical thinking, creating and countering arguments, identifying my own bias, etc.
I think as the tech gets better and better, we'll eventually have an assistant whose job is to make sure that our un-augmented performance is improving, vs. deteriorating. But until then, we have to find a way to work this into the system ourselves.
I'm not sure if people would subject themselves to this, but perhaps the market will just serve it to us, the way it currently does when the internet and services sometimes go down :-)
I know for me, when this happens, and also when I sometimes do a bit of offline coding in various situations, it feels good to exercise that skill of just writing code from scratch (erm, well, with IntelliSense) and kind of reassert that I can still do it, now that we're in tab-autocomplete land most of the time.
But I guess opting into such a scheme would be one-to-one with the type of self-determined discipline required to learn anything in the first place anyway, so I could see it happening for those with at least as much motivation to learn X as exists today.
I'm not sure we (meaning society as a whole) are going to have enough say to really draw those lines. Individuals will have more of a choice going forward, just like they did when education was democratized via many other technologies. The most that society will probably have a say in is what folks are allowed to pay for as far as credentials go.
What I worry about most is that AI seems like it's going to make the already large have/have-not divide grow even more.
Is it? People claim this but I really haven't seen any proof that it is true.
But maybe both of those are in the category of undesirable things.
And the things we end up with are like art and baking and walking and talking and drinking coffee and such.
Professional chess is a nice pattern here. A chess engine running on a phone can beat Magnus Carlsen at this point, but chess is more popular than ever. So it should be OK if AI/robots are better than us at all the stuff we still decide to do.
I'd love for them to take my job as a programmer though, as that would certainly free up time for me to travel and drink coffee and Guinness.
I don't know what those things will be for me, yet, but it's good to have a more specific and directed way to think about which skills I want to keep.
"The future of money is gold obtained from prospecting." –VP of Shovels, ShovelsCorp.
Using AI, actually, most of everything I do anyway is critical thinking. I'm constantly reviewing the AI's code and finding little places where the AI tried to get away with a shortcut, or started to overarchitect a solution, or both.
And why would you think this would be the only place that'll happen?
I agree there are things they can still do better than AI, but coding isn't one of them.
People who never "went to the gym" in a field are all too eager to brush off the entire design space as pure Job that can and should be fully delegated to AI posthaste.
The best way to engage with these sorts of articles is to completely ignore all stated advice and move on. There is no "separation of moving things on the job vs. moving things at the gym" when it comes to creative craft, the entire analogy is completely absurd.
- Coming up with names for cities in a role-playing game you're making
- Summarizing an idea that you're writing about
- Doing research for an article
- Brainstorming character names
- Creating an aesthetic for a new website for a customer
- Etc., etc.
I could go on for days with these examples. And so could any AI.
Pre-2022 ALL these were done 100% by a human.
Now they're not. Now creative people are using AI to help them massively with tons of these. So, yes, the separation needs to happen there as well.
For example, maybe you say, I'll never use AI to help me name characters. Or to come up with plot lines. Or whatever.
That's a Gym vs. Job distinction.
For me, there isn't the slightest difference between before 2022 and after 2022, since I continually choose to boycott genAI services, and as an activist in the Pro-Craft movement, I encourage others to do the same.
And to make sure that you have your own personal goals separate from it, and that if you're getting help from it, the help stays in line with those goals.
Right?
What about that do you disagree with?
When one thinks about human decision making, there are at least two classes of decisions:
1. decisions made with our "fast" minds: ducking out of the way of an incoming object, turning around when someone calls our name ... a whole host of decisions made without much, if any, conscious attention, and about which, if you asked the human who made them, you wouldn't get much useful information.
2. decisions made with our "slow" minds: deciding which of 3 gifts to get for Aunt Mary, choosing to give a hug to our cousin, deciding to double the chile in the recipe we're cooking ... a whole host of decisions that require conscious reasoning, and if you asked the human who made those decisions you would get a reasonably coherent, explanatory logic chain.
When considering why an LLM "made the decisions that it did", it seems important to understand whether those decisions are closer to type 1 or type 2. If the LLM arrived at them the way we arrive at a type 1 decision, it is not clear that an explanation of why is of much value. If an LLM arrived at them the way we arrive at a type 2 decision, the explanation might be fairly interesting and valuable.
As long as the explanation is sound and I can follow it, I don't really care if the internal process looked quite different, provided it's not outright deceptive.