Conversely, entire branches of knowledge can be lost if not enough people are working in the area to maintain a common ground of understanding.
In the US, we keep on manufacturing Abrams tanks. We're not at war. We have no use for these tanks. So to make things make sense, we give money to some countries with the explicit restriction that they must spend that money on these tanks.
Why do we keep making them? Because you need people who, on day one of war, know how to build that tank. You can't spend months and months getting people up to speed - they need to be ready to go. So, in peacetime, we just have a bunch of people making tanks for "no reason".
It's one of the great reasons to cultivate a collection of close allies who you support: it keeps your production lines warm and your workforce active and developing.
But that industry has been taken over by Asia.
There aren't many that haven't been.
Certainly there are many we're currently building and many we will build after that. Biggest in the world kind of stuff.
Gerald R. Ford class.
https://en.wikipedia.org/wiki/List_of_Arleigh_Burke-class_de...
We sold some of them to Australia, and they struggled to get them off of the docks as the trains and bridges couldn't handle the weight.
Unsurprisingly, the military has asked for lighter tanks.
As it is, we do also have lighter military vehicles - basically retrofitted F250/350s with a shitload of steel bolted on. They're great in many instances, but sometimes a tank is just what you want.
But also remember tanks are tanks - they're meant to drive over all kinds of nonsense. It might be something of a technical roadblock if there's an impassable ravine, but there are still many, many instances where they'll be useful.
IMO we'll see them fall out of favor heavily if we start doing more mountainous island warfare.
Edit: I was curious so I looked it up, and there are a number of options:
1. Shore up/reinforce the bridge you want to cross with temporary structures
2. Just fill the damn terrain if it's a narrow ravine
3. Build a quick ferry
4. There are apparently literal armored vehicle-launched bridges (AVLBs)
They do not believe the tanks will be able to get there in time without doing that.
But that only gets you mobility in allied nations willing to spend billions on infrastructure.
Either way, we can always just... scout a route ahead of time which doesn't take us straight to a dead-end. We probably already do this with satellites in a matter of minutes or hours nowadays.
Didn't you answer your question in the above sentence? They are used to protect US foreign interests by sending them to allies. It's not because people will somehow forget how to make them. It's based on an assembly line and blueprints. I don't see how this would be forgotten, any more than it would be possible for society to forget how to build a CRT TV just because they are not used anymore.
I'm always impressed that America's moribund manufacturing industry nonetheless makes prodigious amounts of expensive vehicles. It feels like one year I was hearing about the terrible failure that the F-35 was and the next year I looked up and we've got more than a thousand of them - enough to dwarf any other conventional air force.
But in the end the F-35 really does work pretty well by all credible accounts. It could have been a lot worse.
There is also the supply chain to consider. Many of the parts coming from suppliers are custom or on a limited lifecycle so when orders stop the supply chain disintegrates and can't be quickly reconstituted regardless of how much money you spend.
Especially if the work is classified.
The manufacture of FOGBANK, a key material for a thermonuclear weapon's interstage, was lost by 2000 because so few people were involved with its manufacture and the ones who knew retired or moved on. It's thought to be an aerogel-like substance.
Five years and millions of dollars in expensive reverse engineering were required to figure it out again.
I'm guessing they documented it this time.
[0] https://www.twz.com/32867/fogbank-is-mysterious-material-use...
The numbers game doesn't work in the idealized way you think it does. If you let too many mediocre or bad people become scientists, some of them engage in fraud or ill-considered model-making, which wastes the time of good scientists, who end up having to reproduce results that were never going to work.
I feel it is more connected to the culture (for example, I would expect it to happen more in a hierarchical culture than in an egalitarian one, or more in a believing culture than in a critical one).
We worry about "bad science", but I read a physicist who claimed that the theory of relativity was really accepted and used only when the older generation (who were generally reasonable scientists) died, and not before, because they just would not wrap their minds around "something that different". Which I don't think is the way "we" perceive scientific progress (a scientist looking at proofs/logic/building on others/impartial/rational/etc.)
Let me give you ONE example. Homme Hellinga (tenured professor at Duke) claimed to have designed a triosephosphate isomerase. His grad student Mary Dwyer (science hero) recognized it was an artefact and called him out on it, got railroaded and I think even sued. After the dust settled, Hellinga was merely forced to give up his named seat (he kept tenure and was still getting grants). Meanwhile my grad student friend burned three years of his life doing a project based on a different Hellinga result.
I have many, many more examples (this one is publicly known, so it's safe to talk about, and I personally know someone who lost years to it), some worse, some not as bad.
Here are two more publicly known incidents just at Duke:
there's a nice short story along those lines, by Scott Alexander
https://slatestarcodex.com/2017/11/09/ars-longa-vita-brevis/
My own experience in watching citation patterns, not even with things that I've worked on, is that certain authors or groups attract attention for an idea or result for all kinds of weird reasons, and that drives citation patterns even when they're not the originators of the results or ideas. This leads to weird outcomes: the same results published before a certain "popular" paper get ignored, even when the "popular" paper is incredibly incremental or even a replication of previous work; previous authors who discussed the same exact idea, even well-known ones, are forgotten in favor of a newer, more charismatic author; various studies have shown that retracted zombie papers continue to be cited at high rates as if they were never retracted; and so forth and so on.
I've kind of given up trying to figure out what accounts for this. Most of the time it's just a kind of recency availability bias, where people are basically lazy in their citations, or rushed for time, or whatever. Sometimes it's a combination of an older literature simply being forgotten, together with a more recent author with a lot of notoriety for whatever reason discussing the idea. Lots of times there's this weird cult-like buzz around a person, more about their personality or presentation than anything else — as in, a certain person gets a reputation as being a genius, and then people kind of assume whatever they say or show hasn't been said or shown before, leading to a kind of self-fulfilling prophecy in terms of patterns of citations. I don't even think it matters that what they say is valid, it just has to garner a lot of attention and agreement.
In any event, in my field I don't attribute a lot to researchers being famous for any reason other than being famous. The Matthew effect is real, and can happen very rapidly, for all sorts of reasons. People also have a short attention span, and little memory for history.
This is all especially true of more recent literature. Citation patterns pre-1995 or so, as is the case with those Wikipedia citations, are probably not representative of the current state.
https://arxiv.org/abs/2201.11903
It has over 20,000 citations according to Google Scholar. But clearly the technique was not invented by these authors. It was known 1.5 years earlier, just after GPT-3 came out:
https://xcancel.com/kleptid/status/1284069270603866113#m
Perhaps even longer. But the paper above is cited nonetheless. Probably because there is pressure to cite something and the title of that paper sounds like they pioneered it. I doubt many people who cite it have even read it.
Another funny example is that in machine learning and some other fields, a success measure named "Matthews Correlation Coefficient" (MCC) is used. It's named after a biochemist, Brian Matthews, who used it in a paper from 1975. Needless to say, he didn't invent it at all; he just used the binary case of the well-known correlation coefficient. The people who named the measure "MCC" apparently thought he invented it. Matthews probably just didn't bother to cite any sources himself.
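For reference, here's what that binary formula amounts to, as a minimal Python sketch (the function name and example counts are made up for illustration): MCC is just the correlation coefficient computed between predicted and true 0/1 labels, which reduces to a ratio of confusion-matrix counts.

    # Minimal sketch (illustrative only): the binary MCC is the correlation
    # between predicted and true 0/1 labels, expressed via confusion-matrix
    # counts (true/false positives and negatives).
    from math import sqrt

    def mcc(tp, tn, fp, fn):
        denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
        return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom

    print(mcc(tp=90, tn=80, fp=20, fn=10))  # roughly 0.70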
I can recommend two of his works:
- The Revolt of the Masses (mentioned in the article), where he analyzes the problems of industrial mass societies, the loss of self, and the ensuing threats to liberal democracies. He posits the concept of the "mass individual" (hombre masa), a man who is born into industrial society but takes for granted the progress - technical and political - that he enjoys, does not enquire about the origins of said progress or his relationship to it, and therefore becomes malleable to illiberal rhetoric. It was written in ~1930, and in many ways the book foresees the forces that would lead to WWII. It was an international success in its day and remains eerily current.
- His Meditations on Technics presents a rather simple, albeit accurate, philosophy of technology. He talks about the history of technology development, from the accidental (eg, fire), to the artisanal, to the age of machines (where the technologist is effectively building technology that builds technology). He also describes the dual-stage cycle in which humans switch between self-absorption (ensimismamiento), in which they reflect on their discomforts, and alteration, in which they set out to transform the world as best they can. The ideas may not be life-changing, but it's one of those books that neatly models and settles things you already intuited. Some of Ortega's reflections often come to mind when I'm looking for meaning in my projects. It might be of interest for other HNers!
But you can't tell ahead of time which one is which. Maybe you can shift the distribution but often your pathological cases excluded are precisely the ones you wanted to not exclude (your Karikos get Suhadolniked). So you need to have them all work. It's just an inherent property of the problem.
Like searching an unsorted list of n items for a number. You kind of need to test all the numbers till you find yours. The search cost is just the cost. You can't uncost it by just picking the right index; that's not a meaningful statement.
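A minimal sketch of that analogy in Python (names are illustrative only): with no ordering information, no clever choice of index removes the scanning cost; worst case you inspect every element, and on average about half of them.

    # Linear search over an unsorted list: nothing tells you where the target
    # might be, so you pay for each check until you hit it (or exhaust the list).
    def find(items, target):
        for i, x in enumerate(items):
            if x == target:
                return i      # found it, after checking every earlier element
        return -1             # checked all n items without success

    print(find([7, 3, 9, 1, 4], 1))  # -> 3, after 4 comparisons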
It seems clear to me that the downside to society of having a bad scientist is relatively low, so long as there's a gap between low-quality science and politics [0], while the upside is huge.
- Newton - predicts that most advances are made by standing on the shoulders of giants. This seems true if you look at citations alone. See https://nintil.com/newton-hypothesis
- Matthew effect - extends the "successful people are successful" observation to scientific publishing. Big names get more funding and easier journal publishing, which gets them more exposure, so they end up with their own labs and get their names on a lot of papers. https://researchonresearch.org/largest-study-of-its-kind-sho...
If I were allowed to speculate, I would make a couple of observations. The first is that resources play a huge role in research, so the overall direction of progress is influenced more by economics than by any particular group. For example, every component of a modern smartphone got hyper-optimized via massive capital injections. The second is that this is the real world, so some kind of power law likely applies. I don't know the exact numbers, but my expectation is that the top 1% of researchers produce far more output than the bottom 25%.
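To illustrate that last point (a toy simulation only, with an arbitrarily chosen Pareto exponent, not real data), a heavy-tailed output distribution easily makes the top 1% outweigh the bottom 25%:

    # Toy illustration only: sample "output" from a heavy-tailed Pareto
    # distribution and compare the share produced by the top 1% of
    # researchers with the share produced by the bottom 25%.
    import numpy as np

    rng = np.random.default_rng(0)
    output = np.sort(rng.pareto(1.5, size=100_000) + 1)  # Pareto-ish productivity

    total = output.sum()
    top_1_share = output[-1_000:].sum() / total      # top 1%
    bottom_25_share = output[:25_000].sum() / total  # bottom 25%
    print(f"top 1%: {top_1_share:.0%}, bottom 25%: {bottom_25_share:.0%}")

The exact numbers depend entirely on the made-up exponent; the point is just that under a power law the top sliver's share dwarfs the bottom quartile's.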
Giants can be wrong, though; so there's a "giants were standing on our shoulders" problem to be solved. The amyloid-beta hypothesis held up Alzheimer's work for decades based on a handful of seemingly-fraudulent-but-never-significantly-challenged results by the giants of the field.
Kuhn's "paradigm shift" model speaks to this. Eventually the dam breaks, but when it does it's generally not by the sudden appearance of new giants but by the gradual erosion of support in the face of years and years of bland experimental work.
See also astronomy right now, where a never-really-satisfying ΛCDM model is finally failing in the face of new data. And it turns out not only from Webb and new instruments! The older data never fit either, but no one cared.
Continental drift had a similar trajectory, with literally hundreds of years of pretty convincing geology failing to challenge established assumptions until it all finally clicked in the 60's.
Leibniz did the same, in the same timeframe. I think this lends credence to the Ortega hypothesis. We see the people who connect the dots as great scientists. But the dots must be there in the first place. The dots are the work of the myriad nameless scientists/scholars/scribes/artisans. Once the dots are in place, somebody always shows up to take the last hit and connect them. Sometimes multiple individuals at once.
That is not plausible IMO. Nobody has the capacity to read the works of a myriad of nameless scientists, not even Isaac Newton. It is even less likely that Newton and Leibniz were both familiar with the same works of minor scientists.
What is much more likely is that the well-known works of other great mathematicians prepared the foundation for both to reach similar results.
It gets condensed over time. Take for example Continental Drift/Plate Tectonics theory. One day Alfred Wegener saw that the coasts of West Africa and East South America were almost a perfect fit, and connected the dots. But he had no need to read the work of the many surveyors who braved unknown areas and mapped the coasts of both continents in the previous 4-5 centuries, nautical mile by nautical mile, with the help of positional astronomy. The knowledge was slowly integrated, cross-checked and recorded by cartographers. Alfred Wegener's insight happened at the end of a long cognitive funnel.
One of the ongoing practices of science is people putting out summaries of the state of different parts of fields of work (and reviews of reviews etc.)
Which does not go against the hypothesis. Both of their works were heavily subsidized by less-known researchers who came before them. But it's not at all clear that somebody else would have done what they did in each of their particular fields. (Just as it's not clear the work they built upon was in any way "mediocre".)
It's very hard to judge science. Both in predictive and retrospective form.
Special Relativity was not such a big breakthrough. Something like it was on the verge of being described by somebody in that timeframe — all the pieces were in place, and science was headed in that direction.
But General Relativity really took everyone by surprise.
At least, that's my understanding from half-remembered interviews from some decades ago (:
It might be that we attribute post hoc greatness to a small number of folks, but require a lot of very interested / ambitious folks to find the most useful threads to pull, run the labs, catalog data, etc.
It's only after the fact that we go back and say "hey this was really useful". If only we knew ahead of time that Calculus and "tracking stars" would lead to so many useful discoveries!
There's a ton of this among historical figures in general. Almost without exception, any great person you can name throughout history was born to a wealthy, connected family that set them on their course. There are certainly exceptions of self-made people here and there, and they do tend to be much more interesting. But just about anyone you can easily name in the history of math/science/philosophy was a rich kid who was afforded the time and resources to develop themselves.
In every lab there are a few ordinary scientists who are the work horses. Writing ethics applications, supervising students, keeping the new hot shot in line. They rarely get credit (middle author).
Everything crumbles if the lab head leaves. Because they are effectively a CEO of a small company.
It's IMO the worst way to do science.
That distance between when the two (or more) similar discoveries happened gives insight into how difficult it was. Separated by years, and it must have been very difficult. Separated by months or days, and it is likely an obvious conclusion from a previous discovery. Just a race to publish at that point.
Ortega hypothesis - https://news.ycombinator.com/item?id=20247092 - June 2019 (1 comment)
I wonder if this is because a paper with such a citation is likely to be taken more seriously than a citation that might actually be more relevant.
There's also a monkey see, monkey do aspect, where "that's just the way things are properly done" comes into play.
Peer review as it is practiced is the perfect example of Goodhart's law. It was a common practice in academia, but not formalized and institutionalized until the late 60s, and by the 90s it had become a thoroughly corrupted and gamed system. Journals and academic institutions created byzantine practices and rules and just like SEO, people became incentivized to hack those rules without honoring the underlying intent.
Now significant double-digit percentages of research across all fields meet all the technical criteria for publishing yet cannot be reproduced (up to half in some fields), and there's a whole lot of outright fraud, used to swindle research dollars and grants.
Informal good faith communication seemed to be working just fine - as soon as referees and journals got a profit incentive, things started going haywire.
Big names give more talks in more places and people follow their outputs specifically (e.g., author-based alerts on PubMed or Google Scholar), so people are more aware of their work. There are often a lot of papers one could cite to make the same point, and people tend to go with the ones that they've already got in mind....
Another related "rich get richer" effect is also that a famous author or institution is a noisy but easy "quality" signal. If a researcher doesn't know much about a certain area and is not well equipped to judge a paper on its own merits, then they might heuristically assume the paper is relevant or interesting due to the notoriety of the author/institution. You can see this easily at conferences - posters from well known authors or institutions will pretty much automatically attract a lot more visitors, even if they have no idea what they're looking at.
Compare this with paradigm shifts in T. S. Kuhn's The Structure of Scientific Revolutions:
https://en.wikipedia.org/wiki/The_Structure_of_Scientific_Re...
This is hilarious
Groundbreaking advances are usually giant leaps, and it takes time for researchers to get comfortable with them. It is in precisely this sense that the numerous contributions of the masses are useful, because their joint combination allows future geniuses to more readily accept these advances, hence giving them more "brain space" to pursue new advances.
One influential paper does not constitute an accepted theory. You need redundancy in your system. Each paper of the masses produces yet another brick for the metaphorical building.
The question here is: yes most major accomplishments cite other "giants," but how many papers have they read and have they cited everything that influenced them?
Or do people tend to cite the most pivotal nodes on the knowledge graph which are themselves pivotal nodes on the knowledge graph while ignoring the minor nodes that contributed to making the insight possible?
Lastly -- minor inputs can be hard to cite. What if you read a paper a year ago that planted an interesting idea in your head but it wasn't conclusive, or gave you a little tidbit of information that nudged your thinking in a certain direction? You might not even remember, or the information might be background enough that it's only alluded to or indirectly contributes to the final product. Thus it doesn't get a citation. But could the final product have happened without a large number of these inputs?
Suppose I'm analyzing a species of bacteria with some well-known techniques and I discover it produces an enzyme with promising medical properties. I cite some other research on that species, I cite a couple papers about my analytical tools. This is paper A.
Other scientists start trying to replicate my findings and discover that my compound really lives up to its promise. A huge meta-analysis is published with a hundred citations—paper B—and my compound becomes a new life-saving medicine.
Which paper is the "important" one? A or B? In the long run, paper A may get more citations, but bear in mind that paper A is, in and of itself, not terribly unique. People discover compounds with the potential to be useful all the time. It's in paper B, in the validation of that potential, that science determines whether something truly valuable has been discovered.
Was paper A a uniquely inspired work of genius, or is science a distributed process of trial and error where we sometimes get lucky? I'm not sure we can decide this based on how many citations paper A winds up with.
AlexNet, for example, was only possible because of the algorithms that had been developed, but also because of the availability of GPUs for highly parallel processing and, importantly, the ImageNet labelled data.
That seems correct to me. Imagine having a hypothesis named after you that a) you disagree with, and b) seems fairly doubtful at best!
1. You let 3 PhDs chew on the problem, make sure you are corresponding author.
2. You get your best friend and the lab's "golden boy" and publish a breakthrough paper.
3. Cite “your” previous work.
I guess the Ortega equivalent statement would be "I stood on top of a giant pile of tiny people"
...Not quite as majestic, but hey, if it gets the job done...
Nature is usually 80/20. In other words, 80% of researchers probably might as well not exist.
That said, if we accept that only 20% of all working people are doing useful work, can you guarantee that research scientists aren't all within that category?
And indeed, fields differ, and their distributions of effectiveness may not be comparable.
I think the nature of scientific and mathematical research is interesting in that often "useless" findings can find surprising applications. Boolean algebra is an interesting example of this in that until computing came about, it seemed purely theoretical in nature and impactless almost by definition. Yet the implications of that work underpinned the design of computer processors and the information age as such.
This creates a quandary: we can say perhaps only 20% of work is relevant, but we don't necessarily know which 20% in advance.
Imagine two Earths: one with 10 million researchers and the other with 2 million, but the latter is so cut-throat that the 2 million are Science Spartans.
> Due to their mistrust of others, Spartans discouraged the creation of records about their internal affairs. The only histories of Sparta are from the writings of Xenophon, Thucydides, Herodotus and Plutarch, none of whom were Spartans.
A good example, but perhaps not the point you wanted to make.
What does this even mean? Do you think in an ant colony only the queen is needed? Or in a wolf pack only the strongest wolf?
It’s a bizarre debate when it’s glaringly obvious that small contributions matter and big contributions matter as well.
But which contributes more, they ask? Who gives a shit, really?
I think most would be very open to be checked on their priors, but I would be very surprised if those could be designated a single color. In fact, the humanities revel in various hues and grays rather than stark contrasts.
Funding agencies? Should they prioritize established researchers or newcomers? Should they support many smaller grant proposals or fewer large ones?
Not at all obvious to me. What were the small contributions to e.g. the theory of gravity?
And Einstein didn’t pull out special relativity out of his brain alone. There were years of intense debate about the ether and things I totally forgot by now.
And take something like MOND: there have been tons of small contributions to try to prove / disprove / tweak the theory. If it ever comes out as something that holds, it'd be from a lot of people doing the grind.
I guess Kepler got by just using Brahe's observations, but for more modern explorations of gravity there's a boatload of people collecting data.
More specifically, I believe that scientific research winds up dominated by groups who are all chasing the same circle of popular ideas. These groups start because some initial success produced results. This made a small number of scientists achieve prominence. Which makes their opinion important for the advancement of other scientists. Their goodwill and recommendations will help you get grants, tenure, and so on.
But once the initial ideas are played out, there is little prospect of further real progress. Indeed, that progress usually doesn't come until someone outside of the group pursues a new idea. At which point the work of those in the existing group will turn out to have had essentially no value.
As evidence for my belief, I point to https://www.chemistryworld.com/news/science-really-does-adva.... It documents that Planck's principle is real. Fairly regularly, people who become star researchers wind up holding back further progress until they die. After they die, new people can come into the field, pursuing new ideas, and progress resumes. And so it is that progress advances one funeral at a time.
As a practical example, look at the discovery of blue LEDs. There was a lot of work on this in the 70s and 80s. Everyone knew how important it would be. A lot of money went into the field. Armies of researchers were studying compounds like zinc selenide. The received wisdom was that gallium nitride was a dead end. What was the sum contribution of these armies of researchers to the invention of blue LEDs? Convincing Shuji Nakamura that if zinc selenide really was the right approach, he had no hope of competing. So he went into gallium nitride instead. The rest is history, and the work of the existing field is lost.
Let's take an example that is still going on. Physicists invented string theory around 50 years ago. The problems with the approach are summed up in a quote often attributed to Feynman: "String theorists don't make predictions, they make excuses." To date, string theory has yet to produce a single prediction that was verified by experiment. And yet there are thousands of physicists working in the field. As interesting as they find their research, it is unlikely that any of their work will wind up contributing anything to whatever improved foundation for physics is eventually discovered.
Here is a tragic example. Alzheimer's is a terrible disease. Very large amounts of money have gone into research for a treatment. The NIH by itself spends around $4 billion per year on this, on top of large investments from the pharmaceutical industry. Several decades ago, the amyloid beta hypothesis rose to prominence. There is indeed a strong correlation between amyloid beta plaques and Alzheimer's, and there are plausible mechanisms by which amyloid beta could cause brain damage.
Several decades of research, and many failed drug trials, support the following conclusions. There are many ways to prevent the buildup of amyloid beta plaques. These cure Alzheimer's in the mouse model that is widely used in research. These drugs produce no clinical improvement in human symptoms. (Yes, even Aduhelm, which was controversially approved by the FDA in 2021, produces no improvement in human symptoms.) The widespread desire for results has created fertile ground for fraudsters, like Marc Tessier-Lavigne, whose fraud propelled him to the presidency of Stanford in 2016.
After widespread criticism from outside of the field, there is now some research into alternate hypotheses about the root causes of Alzheimer's. I personally think that there is promise in research suggesting that it is caused by damage done by viruses that get into the brain, and the amyloid beta plaques are left by our immune response to those viruses. But regardless of what hypothesis eventually proves to be correct, it seems extremely unlikely to me that the amyloid beta hypothesis will prove correct in the long run. (Cognitive dissonance keeps those currently in the field from drawing that conclusion though...)
We have spent tens of billions of dollars over several decades on Alzheimer's research. What is the future scientific value of this research? My bet is that it is destined for the garbage, except as a cautionary tale about how much damage can be done when a scientific field becomes unwilling to question its unproven opinions.
What a funny example to pick. See, "string theory" gets a lot of attention in the media, and nowhere else.
In actual physics, string theory is a niche of a niche of a niche. It is not a common topic of papers or conferences and receives almost nothing in funding. What little effort it gets, it gets because pencil and paper for some theoretical physics are vastly cheaper than a particle accelerator or space observatory.
Physicists don't really use or do anything with string theory.
This is a great example of what is a serious problem in science.
The public reads pop-sci and thinks they have a good understanding of science. But they verifiably do not. The journalists and writers who create this content are not scientists, do not understand science, and do not have a good view into what is "meaningful" or "big" in science.
Remember cold fusion? It was never considered valid in the field of physics, because its proponents did a terrible excuse for "science", went on a stupid press tour, and at no point even attempted to disambiguate the supposed results they claimed. The media, however, told you this was something huge, something that would change the world.
It never even happened.
Science IS about small advances. Despite all the utter BS pushed by every "Lab builds revolutionary new battery" article, Lithium ion batteries HAVE doubled in capacity over a decade or two. It wasn't a paradigm shift, or some genius coming out of the woodwork, it was boring, dedicated effort from tens of thousands of average scientists, dutifully trying out hundreds and hundreds of processes to map out the problem space for someone else to make a good decision with.
Science isn't "Eureka". Science is "oh, hmm, that's odd...." on reams of data that you weren't expecting to be meaningful.
Science is not "I am a genius so I figured out how inheritance works", science is "I am an average guy and laboriously applied a standardized method to a lot of plants and documented the findings".
Currently it is Nobel Prize week. Consider how many of the hundreds of winners there are whose names you've never even heard of.
Consider how many scientific papers were published just today. How many of them have you read?
My claim was that there were a few thousand string theorists. In fact I've seen estimates of 1-4 thousand physicists working in the field. Your claim is that this is a small niche. Given that there are something like a quarter million physicists, that is also true.
I wasn't picking on string theory because it is central to physics. I'm picking on it as a highly visible example. What is its actual importance? String theory's lack of experimental evidence means that it isn't that important to the broader field. And if its half-century of failure continues, it will eventually be just a footnote in history. So it stands as a good example of normal research being a waste of time, rather than meaningfully contributing to the progress of knowledge.
That said, your example of technology improving is a good one. As the book The Innovator's Dilemma points out, technology often improves on an exponential curve. With Moore's law just being the best-known example. And this technology improvement does take a lot of research effort.
But this kind of technological research mostly isn't basic science. In fact, a lot of it takes place in secret, inside companies. Rather than being published in Nature, it winds up as intellectual property, hidden behind patents and trade secrets of various kinds. But even though most of it isn't basic science, it does drive a lot of basic science as well.
The question is what kind of science do most scientists do. My anecdotal experience is that most scientists work in relatively small cliques, focused on problems and paradigms that are local to their field. And when new ideas come along, most of that work is obsoleted. This anecdotal picture fits very well with the article that I pointed at about science progressing one funeral at a time.
But if you work in something more technology adjacent, I can see that your experience might be very different.
My impression remains what it was. But I admit to not having good data on which kind of experience is more common.
My scientific study of the science of science study can prove this. Arxiv preprint forthcoming.
("Ortega most likely would have disagreed with the hypothesis that has been named after him...")
- Significant advances by individuals or small groups (the Newtons, Einsteins, or Gausses of the world), enable narrowly-specialized, incremental work by "average" scientists, which elaborate upon the Great Advancement...
- ... And then those small achievements form the body of work upon which the next Great Advancement can be built?
Our potential to contribute -- even if you're Gauss or Feynman or whomever -- is limited by our time on Earth. We have tools to cheat death a bit when it comes to knowledge, chief among which are writing systems, libraries of knowledge, and the compounding effects of decades or centuries of study.
A good example here might be Fermat's last theorem. Everyone who's dipped their toes in math even at an undergraduate level will have at least heard about it, and about Fermat. People interested in the problem might well know that it was proven by Andrew Wiles, who -- almost no matter what else he does in life -- will probably be remembered mainly as "that guy who proved Fermat's last theorem." He'll go down in history (though likely not as well-known as Fermat himself).
But who's going to remember all the people along the way who failed to prove Fermat? There have been hundreds of serious attempts over the four-odd centuries that the theorem has been around, and I'm certain Wiles referred to their work while developing his own proof, if only to figure out what doesn't work.
---
There's another part to this, and that's that as our understanding of the world grows, Great Advancements will be ever more specialized, and likely further and further removed from common knowledge.
We've gone from a great advancement being something as fundamental as positing a definition of pi, or the Pythagorean theorem in Classical Greece; to identifying the slightly more abstract, but still intuitive idea that white light is a combination of all other colours on the visible spectrum and that the right piece of glass can refract it back into its "components" during the Renaissance; to the fundamentally less intuitive but no less groundbreaking idea of atomic orbitals in the early 20th century.
The Great Advancements we're making now, I struggle to understand the implications of even as a technical person. What would a memristor really do? What do we do with the knowledge that gravity travels in waves? It's great to have solved n-higher-dimensional sphere packing for some two-digit n... but I'll have to take you at your word that it helps optimize cellular data network topology.
The amount of context it takes to understand these things requires a lifetime of dedicated, focused research, and that's to say nothing of what it takes to find applications for this knowledge. And when those discoveries are made and their applications are found, they're just so abstract, so far removed from the day-to-day life of most people outside of that specialization, that it's difficult to even explain why it matters, no matter what a quantum leap that represents in a given field.