As we make AI better, perhaps we'll inadvertently find ways to make HI (human intelligence) better too.
I had a personal experience with this when I was studying for an exam recently. As I read over practice questions, I spoke aloud, replicating the reasoning methods/personality of Deepseek R1. By spending a lot of time reading long verbose R1 outputs, I've essentially fine-tuned my brain for reasoning tasks. I believe this method contributed to my excellent score on that exam.
I agree that there's potential here, though, and do genuinely hope that we find ways to make human intelligence better as we go about AI research. Even pessimistically, I think we'll at least surface approaches that people use without thinking about them, which is a good thing on its own, because once you know you're doing something, it becomes much easier to train yourself to do it better.
There's that quote from Socrates, recorded by Plato:
> For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them.
I don't think anyone today could recite Beowulf by heart. But 1500 years ago that's exactly how it was enjoyed.
Look at the people who will declaim digits of pi, just because.
Also, there are differences in education across cultural eras, not only through time but also across space. I've heard that India, for example, places more value on repetition, whereas Western culture is more in love with innovation. These are of course nothing like exclusive tendencies.
Now if you look at ancient Greece, it's certainly not as if everyone could recite Homer word for word. It's easy to forget how divided these societies were in terms of language and social class, and to focus solely on the most renowned figures as if they were all part of a tight social group full of solidarity and genius. Even a man like Hippias of Elis can be depicted by Plato as exhibiting all the tremendous, admirable feats of the time, including mastering the art of mnemonics, and yet be turned into a clueless braggart who can't even recognize that he doesn't know how to define beauty.
Is this the case? I was under the impression they memorized the plot beats and filled in details on the fly, also using set phrases or epithets like "Gray-eyed Athena" to slow down the narration and let them plan further.
[0] "A Record of Learning about Government" [政學錄], a magistrates' handbook compiled by Zheng Duan [鄭端], early Qing dynasty (1644-1796)
But there are other factors, like: is the number of outcomes produced also changing, thus affecting the absolute number of errors?
And does disengaging the person in most cases have side effects, like not paying the same attention to something that would stand out as a big issue needing more consideration than business as usual?
And so on
However, research[1] suggests that relying on AI tools degrades reasoning and cognitive ability regardless of your cognitive ability, and may even cause users to stop making their own choices[2].
1. https://www.404media.co/microsoft-study-finds-ai-makes-human...
The Buddha (from the Pali Canon, Vinaya Pitaka, Cullavagga 10:4):
“Writing is like a drug that weakens memory.”
and: “Do not go upon what has been acquired by repeated hearing; nor upon tradition; nor upon rumor... But when you yourselves know: 'These things are good; these things are not blameable; undertaken and observed, these things lead to benefit and happiness,' enter on and abide in them.”
Confucius (Analects 2:15):
“Learning without thought is labor lost; thought without learning is perilous.”
Lao Tzu (Tao Te Ching, Chapter 48):
“In the pursuit of learning, every day something is acquired. In the pursuit of Tao, every day something is dropped.”
Jesus (Matthew 16:26):
“For what shall it profit a man, if he shall gain the whole world, and lose his own soul?”
Muhammad (Hadith, Sahih Muslim):
“The worst vessel to fill is the stomach; sufficient for the child of Adam are a few morsels to keep his back straight. If he must fill it, then one-third food, one-third drink, and one-third air.”
(This Hadith symbolically warns against excessive reliance on external consumption diminishing spiritual clarity and internal balance.)
Rumi (Masnavi):
“These outward forms are but dust and air; Seek the reality beyond appearance and form.”
Krishna (Bhagavad Gita, 2:42-43):
“Those who are attached to pleasure and power, whose minds are drawn away by such things, have no capacity for absorption into higher states of awareness.”
Even things like confession, or therapy, leverage this - people letting go of bad things that are hanging around in their memory.
Also remember, your conclusion itself is "the devil" - the trap of the analytical mind. :) You will likely do everything you can to avoid the fact that you may be disagreeing directly with the word of the creator as given via various prophets. If you go back to the sources, the command is quite clear, however humans interpret it, because the command is too simple and terrifying to adhere to. It seems impossible to us that we should indeed be doing nothing but living in nature in a state of oral tradition, and that anything outside of that is an unintended state, trusting that energy cannot be destroyed and we are nothing but energy. I don't particularly like it either, tbh, hence I'm writing a book about it.
What would he say if the collective IQ drops by 30 points in case of a power outage?
What would he say if people need a subscription in order to "think"?
Actually, writing out all my thinking steps helps with ironing out wrong steps in my reasoning, and keeps me from going in circles due to limited working memory.
I started doing this more rigorously after seeing how reasoning based AI does reasoning, because it seemed like a useful thinking technique.
These reasoning AI models help me think on a meta level about my own thinking and show me tools I can use to improve it.
Great to see that I’m not alone in this!
At the very least, someone at Copilot must have pitched a rubber duck avatar as the new Clippy by now...
In general, this is very helpful for when your executive function feels taxed, as it has the effect of coaching yourself.
As someone who comes from a long ancestral line of people who talk to themselves while reasoning through problems - it would occasionally prove to be a minor handicap during proctored exams, as internal monologue isn't really the same thing.
Girlfriend, coming in from outside: "Who are you talking to?"
Me: "I talk to myself. You know that."
Gf: "Oh right. You also whisper to yourself, which is scary."
Me: "Scary?"
Gf: "It sounds demonic."
Which, to be fair... Evidently, my internal monologuing gets quite vocal even with other people around.
He told me later that he too does a lot of internal monologue, and was told by some super successful businessman that this is a good thing and a hallmark of successful people, so don't be discouraged by it.
So you may have to use the SLI bridge again just to make sense of what the other side is hearing.
It’s equivalent to LLMs reasoning in the output rather than in the latent space before the final output, which gave rise to the reasoning models we see today. So speaking out loud might not be the best reasoning method ;)
> As I read over practice questions, I spoke aloud
This is also something that’s expected of the applicant in technical interviews.
The interviewers want to hear the applicant’s thought process and how they develop a strategy to solve the problems presented to them as they work them out.
Research is extrapolation.
Neural networks are interpolators; they are notoriously bad at extrapolation. For a simple example, look at the pendigits data set [1]. The test part of pendigits comes from different "writers" than the train part, and neural networks aren't that good at it.
[1] https://archive.ics.uci.edu/dataset/81/pen+based+recognition...
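You can see the interpolation-vs-extrapolation gap even without pendigits. A minimal sketch (a toy 1-D regression, assuming scikit-learn is available; the architecture and ranges are arbitrary choices for illustration):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Train a small MLP to fit sin(x) on x in [0, 2*pi].
x_train = np.linspace(0, 2 * np.pi, 200).reshape(-1, 1)
y_train = np.sin(x_train).ravel()

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000,
                   random_state=0).fit(x_train, y_train)

# Inside the training range (interpolation), the fit is decent...
x_in = np.linspace(0.5, 5.5, 50).reshape(-1, 1)
err_in = np.mean((net.predict(x_in) - np.sin(x_in).ravel()) ** 2)

# ...but one period to the right (extrapolation), a ReLU network
# just continues linearly and misses the oscillation entirely.
x_out = np.linspace(3 * np.pi, 4 * np.pi, 50).reshape(-1, 1)
err_out = np.mean((net.predict(x_out) - np.sin(x_out).ravel()) ** 2)

print(f"in-range MSE:  {err_in:.4f}")
print(f"out-of-range MSE: {err_out:.4f}")
```

The out-of-range error is typically orders of magnitude larger, which is the same failure mode as training and testing on different writers, just in miniature.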
Humans do extrapolation all the time.
You are not “emulating R1”, you are talking to yourself to make sure you understand the concept.
Which is fine but don’t act like AI is making this part of life better in any way with this example. Nonsense
There are good ways and bad ways to think aloud, R1 just gave me a large set of examples of doing it the "good" way.
It's uncommon to read hundreds of paragraphs of a smart person's internal reasoning process. Usually we're only able to read the final results of their thoughts.
I'd say the motivated often reap the rewards of innovations more so than the average, as they were pushing the boundaries in the first place.
Having a dishwasher or a robot vacuum does not make me lazy. It allows me to do more productive things.
One of the parts most worth a replication study.
I also suspect I spend less time ruminating and second-guessing myself and other anxious behaviours that I imagine would come with having someone talking in your ear all day, but that's probably off topic.
I don't doubt you or anything like that, just very curious. As someone with a very strong internal monologue, it's hard for me to imagine not having one.
When I read a sentence in a book I don't hear any kind of narration or anything, but I do assemble a 'scene' of images, sounds, facial expressions, motions, etc. not like a movie, but more like a series of small related ideas if that makes sense?
I find that I understand dialogue and characters in books much better when I listen to an audiobook than when I read, not sure if that's related or not.
I am a relatively intelligent successful professional, but I wonder sometimes if I am missing some processing hardware other people have access to.
Anyway thank you for answering!
Where inner language most certainly comes into play is in the 'output' phase, be it spoken or written, since serialization is required there. But to be honest, that often feels like a projection or even a reconstruction, with an inherent sense of loss, as the inner experience is so much richer and more nuanced.
That is not to say linearization has no merits. Even though it loses so much, it forces consistency and rigor into the lower-dimensional reasoning.
In a fast debate you have no time for inner single dimension explicit reasoning. It flows more directly.
Compare it to running on uneven terrain. Somehow decisions are being made where to put the next foot, but most of the time not after an introspective deliberation (although these would occur sometimes in e.g. very tricky rock climbing moves). The steering of where to go occurs at a much higher level, and the steps flow from higher level coarse grained directions.
Now in hindsight you can go back and analyze a recording or memory of the debate and see how it all makes sense as if it was rationally reasoned at every step of the way, but during the debate it most certainly is not an "inner voice", homunculus reasoning before consciously uttering the next phrase.
I have moderate inattentive type ADHD that manifests as me being hyper focused on specific sometimes minor things and failing to effectively plan larger picture things and often results in poor executive function outcomes. Maybe that's part of it.
Isn't the same meta process at play when thinking about more fuzzy topics?
Surely, even if the arithmetics can be simplified and "lookup-table'd", you are still aware of the numbers in Arabic form or whatever equivalent you're using, right? Or do you somehow have 53 individual blobs swirling inside your consciousness?
I store numbers as pictures of numbers, or a geometric representation depending on how big or precise the number is.
Are you saying when you think of the concept of 'twelve plus twelve' you have the equivalent of someone in your head saying 'hmm, well twelve is 2 more than ten, so if I add up ten and ten and two and two I get twenty four?'
That's wild if so.
For your reference, I would follow the procedure above approximately, but visually with numbers that just do the thing that feels right. I think under the hood we're probably doing the same thing, just with a different interface layer
If you see a cat walking along the road, do you have to think to yourself "oh, that's a cat" or do you just know that it's a cat without verbalizing anything? It has its own abstract concept, right? Same thing with sufficiently simple numeric transformations.
> you are still aware of the numbers in Arabic form or whatever equivalent you're using, right?
Not sure if you're in a Fahrenheit or Celsius sort of place but if someone says that it's 70 degrees out do you really think in terms of numbers? Or do you just "know" what 70 degrees is without thinking about it?
(Also a reason why I'm very sceptical that the current LLM approach will eventually lead to AGI, BTW)
For LLMs, the tokens (i.e. words) are what the weights are based on, as there isn't other input into them.
However, the way I think about math is different from the way I plan my day or other things. In my case, it is very much like I have registers that hold the result of 16 × 3 so I can add the 5 to it later. I have a certain number of registers, and with effort, like repeating what I've already solved, I can temporarily create more.
It also feels somewhat physical, as if the register is an actual box or has a “location” or like I’ve put the answer down on the desk like a part of something I’m building. Perhaps not coincidentally I am one of the many people who have a “calendar shape” for the months.
I would say most people are like me. They have 3 modes of thinking and they probably have a primary mode which they favor. I favor none and go into all 3 depending on whether I’m reading, writing or doing something else.
The second, bigger group has only one primary mode of thinking: the internal monologue. They can only think in terms of an inner voice, and this inner voice is so powerful that I've often encountered people who think it is the definition of thought. They assumed thinking was CoT.
In the even rarer cases you get people who assign colors to numbers, or people who can't think in pictures at all. You're the first person I've encountered who can't even have an internal monologue.
I always thought it was something that we did in TV shows or books to give you a sense of what a character was feeling, I didn't know this was an actual literal experience people had.
I can certainly have an internal monologue, in the way that you could put on a puppet show. I can consciously think to myself, 'self, this is self. Clean your car out.' I can form the feeling of those words in my head. But there's nobody 'saying' them, if that makes sense. I'm playing back a design of my conscious self.
https://en.m.wikipedia.org/wiki/Aphantasia
That said, most of my thinking is not done in the form of a linear monologue where I "talk through" steps to myself.
Worse, when you use multiple agents to get AI LLMs talking to one another, all the agents switch to this internal language and make progress despite no human understanding what the hell is happening. This seems very bad.
Illustration:
> How many r in strawberry?
I'm asked how many r in strawberry. I can just spell the word and a;dklsjaw; a;ewjraqwpeouypaads;lq qepwiouryaqeopw qewrpoiuyoiauysdqw145124rfa.nkjlwh ;45a8345a894ya4a q4p58q45jaq;lkjas;dlfkja;j
<answer>There are 3 (three) r's in strawberry</answer>
You will penalize this inasmuch as your alignment strategy depends on Deliberative Alignment. But at some point I assume that will come with a real capability cost as Neuralese can be more conceptually dense.
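For what it's worth, the toy question in the illustration is trivially checkable outside the model:

```python
word = "strawberry"
# s-t-r-a-w-b-e-r-r-y: one "r" in "str", two in "rry".
r_count = word.count("r")
print(r_count)  # → 3
```

Which is part of what makes it a useful probe: the ground truth is one line of code, so any opaque intermediate scratchpad can still be graded on its final answer.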
Based on what have they claimed that such methods are used by expert human problem solvers?
In the abstract, they use mismatched characters for the double quotes here.