This ties in with how I sometimes describe current-generation AI as a form of mechanized intelligence: like Babbage's calculating machine, but scaled up to represent all kinds of classes of things.
And within the perspective I'm circling these days, where I'm currently coming down is that the effect of this realization may be something like the dichotomy outlined in the Dune series: namely, the one between mechanized intelligence, embodied by the Mentats, and the more intuitive and prescient aspects of cognition, embodied by the Bene Gesserit and Paul's lineage.
A simple but direct way to describe this shift in perspective: we come to see what we formerly thought of as intelligence in the Western/reductive tradition as a form of mechanized calculation that can be outsourced to automatic, non-biological processes, and we start to lean more deeply into the more intuitive and prescient aspects of cognition.
One thing I’m reminded of is how Indian yogic texts describe various aspects of mind.
I'm not sure it's a one-to-one mapping, since I'm not across that material, but the mere idea of distinguishing between different aspects of mind has precedent; and central to it is the idea of removing the association between self-identity and those aspects of mind.
And so maybe one of the effects for us as a society will be something akin to that.
> Surely they know the risks, and surely people will be just as responsible with AI
I can't imagine even half of students understanding the short- and long-term risks of using social media and AI intensively.
At least I couldn't when I was a student. That's exactly what worries us.
Everyone loves watching films until they get a curriculum with 100 of them along with a massive reading list, essays, and exams coming up.
Fermentation was a great way to /preserve/ food, but it can be a bit hit and miss. Pickling can be outright dangerous if not done correctly - botulism is a constant risk.
When canning came along it was a massive game changer: many foods became shelf stable for months or years.
Fermentation and pickling were dropped almost universally (in the West).
Have too many of us outsourced our ability to raise horses for transport?
Surely you're capable of walking all day without break?
Although, congestion pricing is a good counter-example. On the surface it looks like it is designed to benefit users of public transportation. But it turns out it also benefits car owners, because it reduces traffic jams and lets you get to your destination faster in your own car.
Designing everything around cars hurts everyone including car owners. Having no option but to drive everywhere just sucks.
Network/snowball effects are not all good. If local businesses close because everybody drives to WalMart to save a buck, now other people around those local businesses also have to buy a car.
I remember a couple of decades ago when some bus companies in the UK were privatized, and they cut out the "unprofitable" feeder routes.
Guess what? More people in cars, and those people didn't just park and take the bus when they got to the main route, either.
Everybody thinks they're customers when they buy a car, but they're really the product. These industries, and others, are the real customers.
So much so that my comment attracted downvotes.
C'est la vie.
My fundamental argument: The way the average person is using AI today is as "Thinking as a Service" and this is going to have absolutely devastating long term consequences, training an entire generation not to think for themselves.
A certain group of people have something wrong with their brains such that they can't be "educated" and are forced to learn by studying and such. The protagonist of the story is one of these people and feels ashamed of his disability, and of how everyone around him effortlessly knows things he has to struggle to learn.
He finds out (SPOILER) that he was actually selected for a "priesthood" of creative problem-solvers, because the education process gives knowledge without the ability to apply it creatively. It allows people to be rapidly and easily trained on some process, but not to reason it out.
I'll add another analogy. I tell people when I tip I "round off to the nearest dollar, move the decimal place (10%), and multiply by 2" (generating a tip that will be in the ballpark of 18%), and am always told "that's too complicated". It's a 3 step process where the hardest thing is multiplying a number by 2 (and usually a 2 digit number...). It's always struck me as odd that the response is that this is too complicated rather than a nice tip (pun intended) for figuring out how much to tip quickly and with essentially zero thinking. If any of those three steps appear difficult to you then your math skills are below that of elementary school.
I also see a problem with how we look at math and coding. I hear so often that "abstraction is bad", yet that is all coding (and math) is. It is fundamentally abstraction. The ability to abstract is what makes humans human. All creatures abstract, it is a necessary component of intelligence, but humans certainly have a unique capacity for it. Abstraction is no doubt hard, but when in life was anything worth doing easy? I think we are unfortunately willing to put significantly more effort into justifying our laziness than into not being lazy. My fear is that we will abdicate doing worthwhile things because they are hard. It's a thing people do every day. So many people love to outsource their thinking. Be it to a calculator, Google, "the algorithm", their favorite political pundit, religion, or anything else. Anything to abdicate responsibility. Anything to abdicate effort.
So I think AI is going to be no different from calculators, as you suggest. They can be great tools to help people do so much. But it will be far more commonly used to outsource thinking, even by many people considered intelligent. Skills atrophy. It's as simple as that.
I tell people when I tip I "round off to the nearest dollar, move the decimal place (10%), and multiply by 2" (generating a tip that will be in the ballpark of 18%), and am always told "that's too complicated".
I would tell others to "shift right once, then divide by 2 and add" for 15%, and get the same response.
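For the curious, here's a minimal sketch of both heuristics in Python (the bill amount is invented for illustration):

    def double_ten_percent(bill):
        """Round off to the nearest dollar, move the decimal
        place (10%), and multiply by 2."""
        ten_percent = round(bill) / 10
        return ten_percent * 2

    def ten_plus_half(bill):
        """Shift right once (10%), then divide by 2 and add."""
        ten_percent = bill / 10
        return ten_percent + ten_percent / 2

    bill = 47.30  # hypothetical amount
    print(double_ten_percent(bill))  # 9.4  -> in the ~18-20% ballpark
    print(ten_plus_half(bill))       # ~7.10 -> about 15%

Three lines of mental arithmetic each, and nothing harder than doubling or halving a two-digit number.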
However, I'm not so sure about what you see as a problem with thinking that abstraction is bad. Yes, abstraction is bad --- because it is a way to hide and obscure the actual details, and one could argue that such dependence on opaque things, just like a calculator or AI, is the actual problem.
The critical difference between AI and a tool like a calculator, to me, is that a calculator's output is accurate, deterministic and provably true. We don't usually need to worry that a calculator might be giving us the wrong result, or an inferior result. It simply gives us an objective fact. Whereas the output of LLMs can be subjectively considered good or bad - even when it is accurate.
So imagine teaching an architecture student to draw plans for a house, with a calculator that spit out incorrect values 20% of the time, or silently developed an opinion about the height of countertops. You'd not just have a structurally unsound plan, you'd also have a student who'd failed to learn anything useful.
> The critical difference between AI and a tool like a calculator, to me, is that a calculator's output is accurate, deterministic and provably true.
This really resonates with me.
If calculators returned even 99.9% correct answers, it would be impossible to reliably build even small buildings with them.
We are using AI for a lot of small tasks inside big systems, or even for designing the entire architecture, and we still need to validate the answers by ourselves, at least for the foreseeable future.
But outsourcing thinking erodes a lot of the brainpower needed to do that, because validation often requires understanding the problem's detailed structure and the internal path of reasoning. In the current situation, by vibing and YOLOing through most problems, we are losing the very ability we still need and can't replace with AI or other tools.
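To put rough numbers on the 99.9% point above, here's a back-of-the-envelope sketch (the step counts are invented for illustration):

    # Chance that a chain of calculations comes out fully correct
    # when each individual step is right 99.9% of the time.
    per_step = 0.999

    for steps in (10, 100, 1000, 10000):
        p_all_correct = per_step ** steps
        print(f"{steps:>5} steps: {p_all_correct:.1%} chance of zero errors")

    # Output:
    #    10 steps: 99.0% chance of zero errors
    #   100 steps: 90.5% chance of zero errors
    #  1000 steps: 36.8% chance of zero errors
    # 10000 steps: 0.0% chance of zero errors

Even a 99.9%-reliable tool, composed thousands of times, is wrong more often than it is right.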
I'm not saying this is ideal, but maybe there's another perspective to consider as well, which is lowering barriers to entry and increased ownership.
Many people can't/won't/don't do what it takes to build things, be it a house or an app, if they're starting from zero knowledge. But if you provide a simple guide they can follow, they might end up actually building something. They'll learn a little along the way, make it theirs, and end up with ownership of their thing. As an owner, change comes from you, and so you learn a bit more about your thing.
Obviously whatever gets built by a noob isn't likely to be of the same caliber as a professional who spent half their life in school and job training, but that might be ok. DIY is a great teacher and motivator to continue learning.
Contrast that with high barriers to entry, where nothing gets built and nothing gets learned, and the user is left dependent on the powers that be to get what he wants, probably overpriced and with features he never wanted.
If you're a rocket surgeon and suddenly outsource all your thinking to a new and unpredictable machine, while you get fat and lazy watching tv, that's on you. But for a lot of people who were never going to put in years of preparation just to do a thing, vibing their idea may be a catalyst for positive change.
I think past successes have led to a category error in the thinking of a lot of people.
For example, the internet, and many constituent parts of the internet, are built on a base of fallible hardware.
But those mitigated hardware errors, whether from equipment failures, alpha particles, or other causes, are uncorrelated.
If you had three uncorrelated calculators that each worked 99.99% of the time, and you used them to check each other, you'd be fine.
But three seemingly uncorrelated LLMs? No fucking way.
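To put numbers on the three-calculator claim above, a quick sketch (this assumes fully independent failures, which is precisely the property LLMs may lack):

    # Majority vote over three independent checkers, each correct
    # 99.99% of the time. The vote fails only when at least two of
    # the three are wrong at once.
    p = 1e-4  # per-calculator failure rate

    p_two_fail = 3 * p**2 * (1 - p)  # exactly two wrong
    p_all_fail = p**3                # all three wrong
    print(p_two_fail + p_all_fail)   # ~3.0e-08, one bad vote in ~33 million

With correlated failure modes (models trained on similar data making similar mistakes), that independence assumption collapses, and so does the guarantee.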
It is unclear that a human thinking about things is going to be an advantage in 10, 20 years. Might be, might not be. In 50 years people will probably be outraged if a human makes an important decision without deferring to an LLM's opinion. I'm quite excited that we seem to be building scalable superintelligences that can patiently and empathetically explain why people are making stupid political choices and what policy prescriptions would actually get a good outcome based on reading all the available statistical and theoretical literature. Screw people primarily thinking for themselves on that topic, the public has no idea.
https://en.wikipedia.org/wiki/The_Feeling_of_Power
That said, I maintain there are huge qualitative differences between using a calculator versus "hey computer guess-solve this mess of inputs for me."
Can't say the same for LLMs. Our teachers were right about the internet too, of course. If you remember those early wild-west internet school days, no one was using the internet to actually look up a good source. No one even knew what that meant. Teachers had to say "cite from these works or references we discussed in class" or they'd get junk back.
The cash register says you owe $16.23, you give the cashier $21.28, and all hell breaks loose.
No one is making cool shit for themselves. Everyone is held hostage ensuring Wall Street growth.
The "cross our fingers and hope for the best" position we find ourselves in politically is entirely due to labor capture.
The US benefited from a social network topology of small businesses: no single business was a linchpin whose failure would implode everything.
Now the economy is a handful of too big to fails eroding links between human nodes by capturing our agency.
I argued as hard as I could against shipping electronics manufacturing overseas, so that the next generation would learn real engineering skills. But twenty-something me had no idea how far up the political tree the decision had been made back then. I helped train a bunch of people's replacements before the telecom-focused network-hardware manufacturer I worked for shut down.
American tech workers are now primarily cloud configurators and that's being automated away.
This is a decades-long play on the part of aging leadership to ensure Americans feel their only choice is to capitulate.
What are we going to do, start our own manufacturing business? Muricans are fish in a barrel.
And some pretty well-connected people are hinting at a similar sense of what's wrong: https://www.barchart.com/story/news/36862423/weve-done-our-c...
Even just writing notes you'll never refer to again, you're making yourself codify vaguer ideas or impressions, test assumptions, and then compress the concept for later. It's a new external information channel between different regions of your head, which seems to provide value.
We already saw a softer version of this with web search and GPS: people didn’t suddenly forget how to read maps, but schools and orgs stopped teaching it, and now almost nobody plans a route without a blue dot. I suspect we’ll see the same with writing and judgment: the danger isn’t that nobody thinks, it’s that fewer people remember how.
That said, LLMs are perhaps accelerating that but aren’t the only cause (lack of reading, more short form content, etc)
Humans are highly adaptable. It's hard to go back while the thing we're used to still exists, but if it vanished from the world we'd adapt within a few weeks.
Notice something subtle.
Early inventions extend coordination. Middle inventions extend memory. Later inventions extend reasoning. The latest inventions extend agency.
This suggests that human history is less about tools and more about outsourcing parts of the mind into the world.
Sign me up for this utopia.
Intellect is not the same thing as volition.
Working in this manner, it is so painfully clear that it doesn't really follow the flow of the article, even. It misses so many critical details and just sort of fills in its own blanks wrong... When you tell it that it's missing a critical detail, it treats you like some genius, every single time.
It is hard for me to imagine growing up with it, and using it to write my own words for me. The only time I copy-paste AI-generated words to a fellow human is for totally generic customer-service-style replies, to questions I don't consider worthy of any real time.
AI has kinda taken away my flow state for coding, rare as it was... I still get it when writing stuff I am passionate about, and I can't imagine I'll ever wanna outsource that.
Yeah, or as I say, Uriah Heep.
To be fair, telling everybody they are geniuses is the obvious next step after participation awards.
Because people have figured out that participation awards are worthless, so let's give them all first place.
I have been reminded constantly throughout this that a very large fraction of people are easily impressed by such prose. Skill at detecting AI output (in any given endeavour), I think, correlates with skill at valuing the same kind of work generally.
Put more bluntly: slop is slop, and it has been with us for far longer than AI.
The things that are actually dangerous in our lives? Not informing ourselves enough about science, politics, economics, history, and letting angry people lead us astray. Nobody writes about that. Instead they write about spooky things that can't be predicted and shudder. It's easier to wonder about future uncertainty than deal with current certainty.
What's more, that's not fundamentally a new thing, it's always been possible for someone to helplessly cling to another human as their brain... but we've typically considered that to be a mental-disorder and/or abuse.
That's a very low bar. I expect most people know how to cook, at least simple dishes.
To his point: personally, I find it shifts 'where and when' I have to deal with the 'cognitive load'. I've noticed (at times) feeling more impatient, that I tend to skim the results more often, and that it takes a bit more mental energy to maintain my attention.
The article he references gives this example:
“Is it lazy to watch a movie instead of making up a story in your head?”
Yes, yes it is, this was a worry when we transitioned from oral culture to written culture, and I think it was probably prescient.
For many if not most people, cultural or technological expectations around which skills you _have_ to learn probably have an impact on total capability. We probably lost something when Google Maps came out and the average person didn't have to learn to read a map.
When we transitioned from paper and evening news to 24 hour partisan cable news, I think more people outsourced their political opinions to those channels.
That's the risk involved with opinions and conclusions.
When it comes to what we believe, humans see what they want to see. In other words, we have what Julia Galef calls a soldier mindset. From tribalism and wishful thinking, to rationalizing in our personal lives and everything in between, we are driven to defend the ideas we most want to believe--and shoot down those we don't.

But if we want to get things right more often, argues Galef, we should train ourselves to have a scout mindset. Unlike the soldier, a scout's goal isn't to defend one side over the other. It's to go out, survey the territory, and come back with as accurate a map as possible. Regardless of what they hope to be the case, above all, the scout wants to know what's actually true.

In The Scout Mindset, Galef shows that what makes scouts better at getting things right isn't that they're smarter or more knowledgeable than everyone else. It's a handful of emotional skills, habits, and ways of looking at the world--which anyone can learn. With fascinating examples ranging from how to survive being stranded in the middle of the ocean, to how Jeff Bezos avoids overconfidence, to how superforecasters outperform CIA operatives, to Reddit threads and modern partisan politics, Galef explores why our brains deceive us and what we can do to change the way we think.
https://gwern.net/doc/fiction/science-fiction/2012-10-03-yva...
Cogito, ergo sum
The corollary is: absence of thinking equals non-existence. I don't see how that can be an improvement. Improvement can happen only when it's applied to the quality of people's thinking.
(And that's not what the Cogito means in the first place. It's a statement about knowledge: I think therefore it is a fact that I am. Descartes is using it as the basis of epistemology; he has demonstrated from first principles that at least one thing exists.)
> Plenty of things exist without thinking.
Existence in an animal farm isn't human existence.
If outsourcing thought is beneficial, those who practice it will thrive; if not, they will eventually cease to practice it, one way or another.
Thought, like any other tool, is useful when it solves more problems than it creates. For instance, the ability to move very fast may be beneficial if it gets you where you want to be, and detrimental if it misses the destination often enough, and badly enough. Similarly, if outsourced intellectual activities miss the mark often enough, and badly enough, the increased speed is not very helpful.
I suspect that the best results would be achieved by outsourcing relatively small intellectual acts in a way that guarantees very rare, very small errors. That is, AI will become useful when AI becomes dependable, comparable to our other tools.
It makes them prey to and dependent on those who are building and selling them the thinking.
> I suspect that the best results would be achieved by outsourcing relatively small intellectual acts in a way that guarantees very rare, very small errors. That is, AI will become useful when AI becomes dependable, comparable to our other tools.
That's like saying ultra processed foods provide the best results when eaten sparingly, so it will become useful when people adopt overall responsible diets. Okay, sure, but what does that matter in practice since it isn't happening?
I suspect that outsourcing thinking will show up in quite a few outcomes, too. We just need time to gather the statistics.