To clarify, some are misunderstanding James Somers to be advocating sloppy, low-quality work, as if he's recommending speed > quality. He's saying something else: remove latencies and delays to shorten feedback loops. Faster feedback cycles lead to more repetitions, which lead to higher quality.
"Slowness being a virtue" is not the opposite of Somers's recommendation about "working quickly".
If AI can run the OODA loop faster without getting fatigued, then even if its output is of worse quality it will, like the F-86, win 10 out of 10 times.
EDIT:
> Boyd knew both planes very well. He knew the MiG-15 was a better aircraft than the F-86. The MiG-15 could climb faster than the F-86. The MiG-15 could turn faster than the F-86. The MiG-15 had better distance visibility.
> The F-86 had two points in its favor. First, it had better side visibility. While the MiG-15 pilot could see further in front, the F-86 pilot could see slightly more on the sides. Second, the F-86 had a hydraulic flight control. The MiG-15 had a manual flight control.
> Boyd decided that the primary determinant to winning dogfights was not observing, orienting, planning, or acting better. The primary determinant to winning dogfights was observing, orienting, planning, and acting faster.
> Without hydraulics, it took slightly more physical energy to move the MiG-15 flight stick than it did the F-86 flight stick. Even though the MiG-15 would turn faster (or climb higher) once the stick was moved, the amount of energy it took to move the stick was greater for the MiG-15 pilot.
> With each iteration, the MiG-15 pilot grew a little more fatigued than the F-86 pilot. And as he grew more fatigued, it took just a little bit longer to complete his OOPA loop. The MiG-15 pilot didn’t lose because he got outfought. He lost because he got out-OOPAed.
Having a defined flow that gives you quick feedback and doesn't get in the way.
If you are writing, you'd be using an app in which you can quickly do what you want, e.g. shortcuts for bold, vim/emacs motions. That "things-not-getting-in-the-way" state is what leads to a flow state, in my opinion.
Muscle memory gives you action for free, so you can focus on thinking deeper.
The same happens with coding, although it is more complex and it can take time to land on a workflow with tools that let you move quickly. I'm talking about logs, a debugger (if needed), hot reloading of the website, unit tests that run fast, knowing who to ask or where to go to find references, good documentation, a good database client, having shortcuts prepared for everything ... and so on.
I think it would be good if people shared their flow-tools for different tech stacks; it could benefit a lot of us who have some percentage of this done but aren't 100% there yet.
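As a tiny illustration of the "unit tests that run fast" loop I mean, here's a minimal rerun-on-save sketch; the `src/` and `tests/` paths and the pytest flags are assumptions you'd adapt to your own project:

```python
# Minimal sketch of a fast-feedback test loop: rerun a focused pytest subset
# whenever a watched source file changes. Paths and flags are assumptions.
import subprocess
import time
from pathlib import Path

WATCHED = Path("src")              # hypothetical source directory
TEST_ARGS = ["-x", "-q", "tests"]  # stop on first failure, quiet, tests/ dir

def snapshot(root: Path) -> dict:
    """Map every .py file under root to its last-modified time."""
    return {p: p.stat().st_mtime for p in root.rglob("*.py")}

def run_tests() -> None:
    """Shell out to pytest; a non-zero exit code just means failing tests."""
    subprocess.run(["python", "-m", "pytest", *TEST_ARGS])

if __name__ == "__main__":
    seen = snapshot(WATCHED)
    run_tests()
    while True:
        time.sleep(0.5)            # cheap polling keeps it dependency-free
        current = snapshot(WATCHED)
        if current != seen:        # any added, removed, or edited file
            seen = current
            run_tests()
```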
If I need to install pandoc to test-compile a doc change before I submit it for code review with 3 other maintainers, I'd rather keep my note or useful screenshot to myself.
If I need to create a C binding of my function so that pytest can run it, via 50 lines of cryptic CMake, I'd rather do happy-path testing locally and submit it as a "trust me bro".
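(For what it's worth, when the C side is small, a ctypes shim can sometimes stand in for that whole CMake layer. This is only a sketch; the library name `libmymath.so` and the `add(int, int)` signature are made-up assumptions.)

```python
# Sketch: let pytest exercise a C function directly through ctypes instead of
# a generated binding. The shared library and the add() signature are
# hypothetical; build with something like: cc -shared -fPIC mymath.c -o libmymath.so
import ctypes

lib = ctypes.CDLL("./libmymath.so")              # load the compiled C library
lib.add.argtypes = [ctypes.c_int, ctypes.c_int]  # declare the C signature
lib.add.restype = ctypes.c_int

def test_add():
    # plain pytest test; no CMake involved
    assert lib.add(2, 3) == 5
```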
Good and fast internal tooling matters massively for good software. And it all comes back to speed and the iteration loop.
On top of that, slow, meticulous work can then be done: 100% test coverage, detailed UML diagrams describing the system, and functional-safety risk analysis matrices.
So speed and slowness complement each other at different levels of analysis.
I would note you might see this as another bland "shift left" argument, and you could definitely view it through that lens. But if you consider it from a systems-thinking lens, it actually incorporates dynamics that are not typically included in shift left. It helps you consider the system within your organization and how to shorten those feedback loops. It also, conveniently, makes engineering organizations stronger as a whole, because these feedback loops are intrinsically linked to the organization's software development process as a whole. It is pretty hard to have a tight security-vulnerability discovery loop without a good software engineering practice around it. Security issues like this are effectively a strict subset of software quality issues.
You can apply this feedback loop shortening to /so/ many things in life.
Implicit in the design of most tests is the idea that a person's ability to quickly solve moderately difficult problems implies a proportional ability to solve very difficult problems if given more time. This is clearly jumping to a conclusion. I doubt there is any credible evidence to support this. My experience tends to suggest the opposite; that more intelligent people need more time to think because their brains have to synthesize more different facts and sources of information. They're doing more work.
We can see it with AI agents as well; they perform better when you give them more time and when they consider the problem from more angles.
It's interesting that we have such bias in our education system because most people would agree that being able to solve new difficult problems is a much more economically valuable skill than being able to quickly solve moderate problems that have already been solved. There is much less economic and social value in solving problems that have already been solved... Yet this is what most tests select for.
It reminds me of the "factory model of schooling." Also there is a George Carlin quote which comes to mind:
"Governments don't want a population capable of critical thinking, they want obedient workers, people just smart enough to run the machines and just dumb enough to passively accept their situation."
I suspect there may be some correlation between High IQ, fast thinking, fast learning and suggestibility (meaning insufficient scrutiny of learned information). What if fast learning comes at the expense of scrutiny? What if fast thinking is tested for as a proxy for fast learning?
What if the tests which our society and economy depend on ultimately select for suggestibility, not intelligence?
I used to share that doubt, especially during my first semesters at university.
However, my experience over the decades has been that people who solved moderately difficult problems quickly were also the ones who excelled at solving hard and novel problems. So in my (limited) experience, there is a justification for it, and I'd definitely be interested (and not surprised) to see credible evidence for it.
Do most people agree with that? I agree with that completely, and I have spent a lot of time wishing that most people agreed with that. But my experience is that almost no one agrees with that...ever...in any circumstance.
I don't even think society as a whole agrees with this statement. If you rank careers by their likelihood of making the most money, the most economically valuable tend to be the ones solving medium-difficulty problems quickly.
I think this approach is effectively testing whether a student studied the material. It assumes a correlation between memorization and understanding. Recalling a piece of information is fast if it's available.
It's a commonly expressed experience among university students that learning memorization techniques and focusing on solving previous exams is a disproportionately effective way to pass courses.
It's technically more impressive to pass the exam having never done a single similar problem before, deriving a solving method or formula that wasn't memorised.
I made a deliberate effort to avoid looking at previous exam questions for a course until the week before, since that route produced good grades of little long-term value to me.
It’s a simple point but an incorrect one.
If you can work on it for a week, it’s no longer an IQ test. Nobody is saying that the questions on an IQ test are impossible. It’s the fact that there are constraints (time) and that everybody takes the test the same way that makes it an IQ test. Otherwise it’s just a little sheet of kinda tricky puzzles.
Would you be a better basketball player if everyone else had to heave from 3/4 court but you could shoot layups? No, you’d be playing by different rules in an essentially different game. You might have more impressive stats but you wouldn’t be better.
I think the correct analogy here is that if everyone had to shoot from 3/4 court, you would likely end up with a different set of superstars than the set of superstars you get when dunking is allowed.
In other words, if the IQ test were much much harder, but you had a month to do it, you might find that the set of people who do well is different than who does well on the 1 hour test. Those people may be better suited to pursuing really hard open ended long term problems.
Yes, if you play a different game you’ll find different high performers. That is obvious. But it is not what the blog post is saying. It is saying if you let one person play the same game but by different rules, they will look better.
> Consider this: if you get access to an IQ test weeks in advance, you could slowly work through all the problems and memorize the solutions. The test would then score you as a genius. This reveals what IQ tests actually measure. It’s not whether you can solve problems, but how fast you solve them.
You retort that "if you can work on it for a week, then it's no longer an IQ test", but that retort is one that the author would agree with. The author is simply making the argument that, what IQ measures is not necessarily the same kind of intelligence as what is necessary for success in the real world. He's not actually arguing that people should be allowed to take as long as they want on the test, he's simply using that hypothetical to illustrate "what IQ tests actually measure".
Most people aren't interested enough to work 100+ hours per week. But we wouldn't say Elon isn't better at work "because he doesn't even work a 40-hour work week".
It has a lot to do with interest. Michael Jordan isn’t a world class mathematician. Elon isn’t a world class father.
Precisely. Speaking from experience, in school, every claim that I was supposed to accept and reproduce on an exam or in homework was met with a gut response: "Is this really true? If so, why? How do you know?" I wanted to verify the information and know the justification for believing it, the reason something was true. What's more, I had trouble with the coherence of the claims being made. The physics we are taught in school, for example, raises very serious metaphysical questions. This produced in me a spirit of rebellion. I felt a certain vague disgust for the way things were taught that frustrated my motivation. In some sense, it didn't feel like truth was being treated seriously. The ceremony of education, with all its trappings, was all that was treated seriously. "Getting the grade", not understanding, was what it was all about. It felt like an acrobatics contest and a game of one-upmanship.
Now, sometimes the justification for a claim was obvious, at least given certain premises (these are often left tacitly assumed, merely implied: therein lies the danger), but that's not always, or perhaps even usually, the case. Even in math, a science that can be done from the armchair, we are given formulas and methods that are supposed to be taken on faith and simply used. Through repetition, we are supposed to become better at identifying situations in which we can apply them. But where do these formulas and methods come from? What do they tell us, and how do we know?
And I emphasize "faith": there is no way the valedictorian has verified everything he or she was taught or knows the justification for it. A "good student" keeps up, and since scrutiny and analysis take time and skill - time no student is given, especially as the workload piles up, and skill no student possesses - a "good student" is a faithful student, a student who obediently accepts what he or she is being told. You can imagine that blind faith would produce the "perfect student". (Curiously, we are simultaneously commanded to "question everything" - except questioning everything, of course - but then required not to actually practice that advice.)
Now, you could argue that students are too young to understand the justifications for the claims being made, and in practice, we are always relying on faith in some authority. Few people realize how much faith we rely on in our lives. Society entails a certain epistemic deference, even if merely practical or perfunctory. In practice, it is unrealistic not to rely on faith. Faith has its proper place.
Someone might also say that students could be bracketing the information they are receiving. They may simply be entertaining it as a possibility in good faith and playing with it, until verification becomes possible or necessary. Maybe. But given the intellectual immaturity of students, and the obedience at the top, I suspect there is at least a superficial assent given to what they are taught. Otherwise, school is a game to be played, one that, we are told, is an instrument for climbing the ladder of social status. The content doesn't matter. What matters is that you play by the rules of the game and that you play by them well. When you do that, the kingmakers and status granters will throw you a few golden chips and elevate you in the eyes of society. You will be in.
Sounds cynical. After all, wouldn't an institution that wants to select for wisdom also create barriers? Of course, regardless of how effective they are. But the differences cannot be ignored. The intent and purpose are different, for one. The means of selection are another. Education is bureaucratized. We think that bureaucracy will create a "level playing field", eliminating the biases and favoritism that "personal judgement" is bound to entail. But who designs the bureaucracy? What does it actually select for? And does it not often commit the fallacy of confusing features of the method for features of the real?
We're obsessed with rank, and bureaucratic methods make us obsessively so. We imagine there is a sharper slope and a smaller peak than there really is. There is a slope, to be sure. I am no egalitarian. But come on.
Anyway, for all that rambling, what are some of the morals here?
I suppose my first point is that education ought to be focused on first principles first. It ought to be focused on understanding and truth, and on learning the competence to get there, as that is the whole point. The trivium and quadrivium of old did this. People think of the Middle Ages as some kind of period hostile to education. They think it was like the Prussian style of education (from which modern education gets a lot of its ideas), oriented toward mindless obedience and unquestioned submission to the state. Nothing could be further from the truth. Universities were renowned for open discussion and debate, perhaps most famously in the form of the disputation. The Scholastics were famous for intellectual rigor, a rigor that puts to shame the pompous pretensions of the so-called Enlightenment, which never missed an opportunity to erect straw men of the Medievals to ridicule.
Second, rewards and penalties are selection mechanisms. We get the behavior we reward and we get less of the kind we punish. Habits are like this, too: indulging a habit of overeating reinforces and magnifies the habit, while restraint has the opposite effect. What does our education system feed? What does it starve? We should ask this question ceaselessly.
The real tragedy here is the question of what all the people like him could accomplish if they didn't have to spend 3/4 of their time and energy on bureaucracy and jumping through endless stupid hoops. (But oh! What would the world come to if people didn't have to PROVE that they deserve to do research/eat/live/go to the doctor - in the specific way someone came up with to minimize one kind of error over the other ...! /sarcasm)
Yeah, it's boring if it all works, but boring is good. And we've been trying to apply this to software development for ages as well - think "continuous deployment" practices (or, under their newer banner, the DORA metrics of the 2020s).
Yeah, I know, it's just a display of the industry's indecisiveness. Every time we need something new and fresh, some old favorite is revived until, after 5 years, it's old again. I like being able to do things differently; I hate having to implement "security" features knowing all too well that they aren't secure at all. Minimizing attack surface should not be the default. And it's not like this is a new problem. For some reason web devs love to work around a problem instead of fixing it.
"Dress me slowly that I am in a hurry"
Walk slowly and you'll walk safe and far.
Same thing, but from the trades instead of the military.
* Come up with 5 possible approaches (2 days)
* Create benchmark framework & suite (1 day)
* Try out approach A, but realise that it cannot work for subtle technical reasons (2 days)
* Try out approach B (2 days)
* Fail to make approach B performant enough (3 days)
...
You just keep trying directions, refining, following hunches, coming up with new things to try, etc., until you (seemingly randomly) land on something that works. This is fundamentally un-estimatable. And yet if you're not doing this sort of work, you will rarely come up with truly novel feats of engineering.
> * Come up with 5 possible approaches (2 days)
Even that gives you something to talk about, that looks "solution-oriented" in the way managers like; in deep work the first steps can actually be:
* Break down the existing approach into its fundamental assumptions and components (3 days)
* Try to build a few approaches challenging some assumptions in a useful way, and fail (2 days)
* Get a clearer picture of the cloud of possibilities that are more likely to work, and start assembling the pieces from that cloud (3 days)
and so on. This is the kind of stuff that's really hard to communicate, and it often sounds like you're doing nothing at all in the initial stages, even though that time pays for itself many times over once (or if) you get to take it to completion, instead of doing patchwork upon patchwork on a design based on outdated assumptions.
People doing actually interesting stuff can't get funding, so they have to lone-wolf their entire research or just give up and work on stuff that pays - people like
- Jonathan Edwards
- Allen Webster
- Bret Victor
All with seriously intriguing ideas that probably have potential, but nobody seems to want to actually dig into the stuff. Fortunately, there are people like Stephen Kell who are kind of doing it even in academia, but I think he too is limited to working on the boring problems that get funding.
At the time, I read that everybody is better at "slow" chess. But does that explanation make sense? If everybody is better, shouldn't my Elo rating have stayed the same?
For example if I were to give $1 to every person on earth, but $100 million to you, everyone would be richer but you would be a lot richer still.
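To make the relative nature of ratings concrete, here is the standard Elo expected-score and update rule as a minimal sketch; the ratings and K-factor below are just illustrative numbers:

```python
# Standard Elo update: ratings only encode *relative* strength, so if your
# opponents' play improves more than yours, your rating falls even though
# your absolute play got better. K-factor and ratings are illustrative.
def expected_score(r_a: float, r_b: float) -> float:
    """Expected score of player A against player B (between 0 and 1)."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update(r_a: float, r_b: float, score_a: float, k: float = 20) -> float:
    """New rating for A after one game (score_a: 1 win, 0.5 draw, 0 loss)."""
    return r_a + k * (score_a - expected_score(r_a, r_b))

# A 1500 player who now loses games they used to split 50/50 drifts downward,
# even if both players are objectively stronger than before.
print(update(1500, 1500, 0))  # 1490.0 after a single loss
```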
And while people tend to make __better moves__ in slower time controls, their rapid/blitz ratings are usually higher than standard ratings.
I think intelligence should indubitably be linked to speed. If you can do everything faster, I think "smarter" is a correct label. What I also think is true is that slowness can be a virtue in solving problems, both for a person and as a strategy. But this is usually because fast strategies rely on priors/assumptions and ideas that generalize poorly; and often more general and asymptotically faster algorithms are slower when tested on a limited set or at a difficulty level that is too low.
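That last point about asymptotically faster algorithms losing on small inputs can be shown with a toy benchmark; this is only a sketch, and the input sizes and iteration counts are arbitrary:

```python
# Sketch: an asymptotically worse algorithm (insertion sort, O(n^2)) beats an
# asymptotically better one (merge sort, O(n log n)) on small inputs, because
# its constant factors are tiny. Exact timings will vary by machine.
import random
import timeit

def insertion_sort(a):
    a = list(a)
    for i in range(1, len(a)):
        x, j = a[i], i - 1
        while j >= 0 and a[j] > x:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = x
    return a

def merge_sort(a):
    if len(a) <= 1:
        return list(a)
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

for n in (16, 1000):
    data = [random.random() for _ in range(n)]
    t_ins = timeit.timeit(lambda: insertion_sort(data), number=50)
    t_mrg = timeit.timeit(lambda: merge_sort(data), number=50)
    print(f"n={n:5d}  insertion={t_ins:.4f}s  merge={t_mrg:.4f}s")
```

Typically insertion sort wins at n=16 and loses badly at n=1000, which is the "difficulty level too low" effect in miniature.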
So when you factor speed into tests, you're systematically filtering for intelligences that are biased to avoid novelty. Then if someone is slow to solve the same problems, it's actually a signal that they have the opposite bias, to consider more paths.
IMO the thing being measured by intelligence tests is something closer to "power" or "competitive advantage".
No, this isn't true: most of the time they just don't consider any paths at all and are just dumb.
And a bias towards novelty doesn't make you slow; ADHD is biased towards novelty, and people wouldn't call those folks slow.
In the article, "speed" is about reaching specific answers in a specific window of time, the bane of ADHD.
https://bigthink.com/neuropsych/intelligent-people-slower-so...
1. Einstein was a great student (as common sense would expect) [1]. He was top of his class at ETHZ, and the supposedly failed exam was because he tried to take it earlier than intended. He had great, although not flawless, grades all the way through. He wasn't a mindless robot and clearly ruffled some feathers by not showing up for classes, but his academic record is exactly what you would expect from a brilliant but somewhat nonconformist mind. He may not have been Von Neumann or Terence Tao, I suppose.
2. The main "source" of the article is an even more flawed blog post [2], which again just bashes IQ with no sliver of proof that I can see, other than waving hands in the air while saying "dubious statistical transformations", as if that wasn't the only possible way to do these kinds of tests. Please prove me wrong and show me a proper study in there; I can't see one, but I'm on mobile.
Disappointing. What’s the point of it? Quote actual scientists, for example Higgs, who are on record saying that modern academic culture is too short term focused. Basically everyone I’ve ever spoken to about it in academia agrees. Might be a biased sample, but I think it’s more that everyone realizes we’ve dug ourselves into a hole that’s not so easy to escape.
[1]: https://m.youtube.com/watch?v=2zwZsjlJ-G4
[2]: https://www.theintrinsicperspective.com/p/your-iq-isnt-160-n...
I think there is an assumption that institutions inherently are short term optimized, but I don’t know if that’s actually true, or merely a more recent phenomenon.
My guess is that you’d need to deliberately be “less than hyper rational” when doling out funding, because otherwise you end up following the metrics mentioned in the post. In other words, you might need to give out income randomly to everyone that meets certain criteria, rather than optimizing for the absolute best choice. The nature of inflation and increasing costs of living also becomes a problem, as whatever mechanism you’re using to fund “long term” work needs to be increasing every year.
> The Buxton Index of an entity, i.e. person or organization, is defined as the length of the period, measured in years, over which the entity makes its plans. For the little grocery shop around the corner it is about 1/2, for the true Christian it is infinity, and for most other entities it is in between: about 4 for the average politician who aims at his re-election, slightly more for most industries, but much less for the managers who have to write quarterly reports. The Buxton Index is an important concept because close co-operation between entities with very different Buxton Indices invariably fails and leads to moral complaints about the partner.
Uhm? That's not my definition of development. Actually, the word itself has different meanings - see developmental biology, from a fertilised egg to an adult animal. But even if the context here is planning research, ALL research also has steps. For instance, if you apply for a grant, you have to lay out the idea(s) in more detail; then, after you have gotten the grant (hopefully), you will continue to do more planning. So there are definitely planned steps here too. You just cannot always plan results or success; see the discovery of penicillin. While it was not 100% random, it was still more of a side finding than a planned finding.
Also, slowness ... I don't think slowness in and of itself is a virtue. Some things are more complicated and take time to realise. See how Darwin drew the first tree of life with a pencil or pen. Reaching that point took some prior thinking.
why is it bad that the person with the highest IQ does puzzle columns? are all people with a high IQ supposed to be doing groundbreaking research? can you only do groundbreaking research if you’re intelligent?
i think the real virtue here is not “slowness” but rather persistence. what do you think?
I don't know about "supposed to", but... it's a reasonable hope or expectation, right? That someone with extraordinary capabilities would want to use them for some extraordinary benefit for mankind. I appreciate vos Savant's contribution to public knowledge, but if you have the ability to make your name by progressing something extremely challenging (like the Riemann hypothesis) then wouldn't you want to try that?
Reminds me of that scene in Good Will Hunting where Sean presses Will on why he sticks to manual labouring when he's far smarter than highly trained university professors.
I don't know if you've read "Flowers for Algernon", but that's what I think about when discussing highly/exceptionally intelligent people.
Consider something like set theory. When set theory entered a period of crisis in the early 20th century, there were mathematicians, logicians, and philosophers who tried to determine what a good formalization of the notion of a set is. Russell and Leśniewski come to mind, for example. Naturally, this isn't just a matter of coming up with any collection of axioms. It involves analyzing the concept of "set".
This is different from the Erdőses of the world.
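(For concreteness, the crisis in question was triggered by Russell's paradox: naive comprehension lets you form the set of all sets that are not members of themselves, which is contradictory either way and so forced exactly the kind of conceptual analysis described above.)

```latex
% Russell's paradox in naive set theory: unrestricted comprehension yields
% a set R whose self-membership is contradictory either way.
R = \{\, x \mid x \notin x \,\}
\qquad\Longrightarrow\qquad
R \in R \iff R \notin R
```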