I’m in biotech academia and it has changed things already. Yes, the protein folding problem isn’t “solved”, but no problem in biology ever is. Compared to previous bio/chem Nobel winners like CRISPR, touch receptors, quantum dots, and click chemistry, I do think AlphaFold has already reached a sufficient level of impact.
A gap between biological research and biological engineering is that, for bioengineering, the size of the potential solution space and the time and resources required to narrow it down are fundamental drivers of the cost of creating products - it turns out that getting a shitty answer quickly and cheaply is worth more than getting the right answer slowly.
I agree with you, though - they're two different answers. I've done a bunch of work in the metagenomics space, and you very quickly get outside areas where Alphafold can really help, because nothing you're dealing with is similar enough to already-characterized proteins for the algorithm to really have enough to draw on. At that point, an actual solution for protein folding that doesn't require a supercomputer would make a difference.
A proper protein structural model is an all-atom representation of the macromolecule at its global minimum-energy conformation, which is also the expected end result of the folding process; both are equivalent and thus equally canonical. The “fast” part, i.e., the decrease in computational time, comes mostly from the heuristics used for conformational space exploration. Structure prediction skips most of the folding pathway/energy funnel, but ends up at the same point as a completed folding simulation.
> At that point, an actual solution for protein folding that doesn't require a supercomputer would make a difference.
Or more representative sequences and enough variants by additional metagenomic surveys, for example. Of course, this might not be easily achievable.
Well, that's the hope, at least.
> Or more representative sequences and enough variants by additional metagenomic surveys, for example. Of course, this might not be easily achievable.
For sure, but for ostensibly profit-generating enterprises, it's pretty much out of the picture.
I think the reason an actual computational solution for folding is interesting is that the existing set of experimentally verified protein structures is limited to proteins we could isolate and crystallize. That set is also the training set for AlphaFold, so it's the area where its predictions are strongest, and even within it, AlphaFold only catches certain conformations of each protein. Even if you can get a large set of metagenomic surveys and a large sample of protein sequences, the limitations of the methods for experimentally verifying protein structures mean we're restricted to a certain section of the protein landscape. A general-purpose, computationally tractable method for simulating protein folding under various conditions could be a solution for those cases where we can't actually physically "observe" the structure directly.
Attempting to predict structures using mechanisms that simulate the physical folding process wastes immense amounts of energy and time sampling very uninteresting areas of space.
You don't want to use a supercomputer to simulate folding; it can be done with a large collection of embarrassingly parallel machines much more cheaply and effectively. I proposed a number of approaches on supercomputers and was repeatedly told no because the codes didn't scale to the full supercomputer, and supercomputers are designed and built for codes that scale really well on non-embarrassingly-parallel problems. This is the reason I left academia for Google: to use their idle cycles to simulate folding (and do protein design, which also works best with embarrassingly parallel processing).
As far as I can tell, only extremely small and simple proteins (like ribonuclease) fold to somewhere close to their global energy minimum.
A lot of bioinformatics tools using deep learning appeared around 2017-2018. But rather than being big breakthroughs like AlphaFold, most of them were just incremental improvements to various technical tasks in the middle of a pipeline.
Not many DL based tools I see these days regularly applied in genomics. Maybe: Tiara for 'high level' taxonomic classification, DeepVariant in some papers for SNP calling, that's about it? Some interesting gene prediction tools coming up like Tiberius. AlphaFold, of course.
Lots of papers but not much day-to-day usage from my POV.
There are a lot of differences between the cutting-edge methods that produce the best results, the established tools the average researcher is comfortable using, and whatever you are allowed to use in a clinical setting.
It seems to really accelerate productivity of researchers investigating bio molecules or molecules very similar to existing bio molecules. But not de novo stuff.
―https://moalquraishi.wordpress.com/2018/12/09/alphafold-casp...
There's a lot of pharma companies and drug design startups that are actively trying to apply these methods, but I think the jury is still out for the impact it will finally have.
If I was the Nobel Committee, I would have waited a bit to see if this issue aged well. Also, in terms of giving credit, I think those who invented pairwise and multiple alignment dynamic programming algorithms deserved some recognition. AlphaFold built on top of those. They are the cornerstone of the entire field of biological sequence analysis. Interestingly, ESM was trained on raw sequences, not on multiple alignments. And while it performed worse, it generalizes better to unseen proteins like TCRs.
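For readers who haven't seen them, the pairwise dynamic programming alignment algorithms mentioned above (Needleman-Wunsch style) can be sketched in a few lines of Python. The scoring values here are illustrative placeholders, not any standard substitution matrix:

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
    """Global pairwise alignment score via dynamic programming.

    score[i][j] holds the best score for aligning a[:i] with b[:j];
    each cell extends by a match/mismatch (diagonal) or a gap.
    """
    n, m = len(a), len(b)
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap          # align prefix of a against all gaps
    for j in range(1, m + 1):
        score[0][j] = j * gap          # align prefix of b against all gaps
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
    return score[n][m]

print(needleman_wunsch("GATTACA", "GCATGCU"))
```

The quadratic table is exactly why these methods were such a cornerstone: optimal alignment in O(nm) instead of exponential enumeration.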
Protein folding ≠ protein structure prediction
> I think those who invented pairwise and multiple alignment dynamic programming algorithms deserved some recognition
I would add BLAST as well but that ship has sailed, I’m afraid.
For the curious, BLAST is very much like pairwise alignment, but uses an index to speed things up by avoiding attempts to align poorly scoring regions.
That's the key part, I think, being able to estimate how unique each alignment is without having to simulate the null distribution, as it was done before with FASTA.
The index also helps, but the speedup comes mostly from the other part.
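The seed-and-extend idea behind that index can be sketched as a toy: index exact k-mers of the database sequence, and only consider extending alignments around shared-word hits, so regions with no common words are never aligned at all. (Real BLAST adds neighborhood words, two-hit triggering, and E-value statistics on top; this is just the skeleton of the speedup.)

```python
from collections import defaultdict

def build_index(db, k=3):
    """Map each k-mer in the database sequence to its positions."""
    index = defaultdict(list)
    for i in range(len(db) - k + 1):
        index[db[i:i + k]].append(i)
    return index

def seed_hits(query, index, k=3):
    """Return (query_pos, db_pos) seed pairs. Only these seeds would be
    extended into alignments; everything else is skipped entirely."""
    hits = []
    for i in range(len(query) - k + 1):
        for j in index.get(query[i:i + k], []):
            hits.append((i, j))
    return hits

index = build_index("MKTAYIAKQRQISFVKSHFSRQ", k=3)
print(seed_hits("AYIAK", index, k=3))  # three overlapping seeds on one diagonal
```

Hits falling on the same diagonal (j - i constant), as in this example, are the signal that an extension is worth attempting.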
The question is: has BLAST made more of an impact than AlphaFold? I think so, at the moment.
>> I do think AlphaFold already has reached sufficient level of impact.
How so?

A better example is seeing my grad-school friends with zero background in comp-sci or math presenting their cell-biology results with AlphaFold at conferences and lab meetings. They are not protein folding people either, just molecular biologists trying to present more evidence of docking partners and functional groups in their pathway of interest.
It reminds me of when CRISPR came out. There were ways to edit DNA before CRISPR, but it was tough to do right and required specialized knowledge. After CRISPR came out, even non-specialists like me in tangential fields could get started.
There's an on-point blog-post "AI and Biology" (https://www.science.org/content/blog-post/ai-and-biology) which illustrates why AlphaFold's real breakthrough is not super actionable for creating further bio-medicinal applications in a similar vein.
Docking ligands doesn't make for particularly great structures, and snapshot structures really miss out on the important dynamics.
So it's hard for me to imagine how alphafold can help with small molecule development (alphafold2 doesn't even know what small molecules are). I agree it totally sounds plausible in principle, I've been in a team where such an idea was pushed before it flopped, but in practice I feel there's much less use to extract from there than one might think.
EDIT: To not be so purely negative: I'm sure real use can be found in tinkering with AlphaFold. But I really don't think it has or will become a big deal in small drug discovery workflows. My PoV is at least somewhat educated on the matter, but of course it does not reflect the breadth of what people are doing out there.
That people are arguing about the finer details of what it gets wrong is support for its value, not a detriment.
I mean, sure, prior to alphafold, the notion that sequence / structure relationship was "sufficient to predict" protein structure was merely a very confident theory that was used to regularly make the most reliable kind of structure predictions via homology modeling (it was also core to Rosetta, of course).
Now it is a very confident theory that is used to make a slightly larger subset of predictions via a totally different method, but still fails at the ones we don't know about. Vive la change!
3-mers and 9-mers, if I recall correctly. The fragment-based approach helped immensely with cutting down the conformational search space. The secondary structure of those fragments was enough to make educated guesses about the protein backbone, at a time when ab initio force-field predictions struggled with it.
In order to create those fragment libraries, there was a step involving generation of multiple-sequence alignments, pruning the alignments, etc. Rosetta used sequence homology to generate structure. This wasn't a wild, untested theory.
But, the protein field has always played loose with the term "homology".
Rosetta used remote sequence homology to generate the MSAs and find template fragments, which at the time was innovative. A similar strategy is employed for AlphaFold’s MSAs containing the evolutionary couplings.
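To make the search-space point concrete, here is a toy, greedy caricature of fragment insertion, not Rosetta's actual Monte Carlo protocol: the backbone is just a list of (phi, psi) torsion pairs, each move splices in a 3-mer fragment from a library, and a move is kept only if a toy score improves. All numbers (the extended-chain torsions, the helical target) are illustrative placeholders:

```python
import random

def fragment_assembly(n_res, library, score_fn, steps=1000, seed=0):
    """Toy fragment-insertion search over backbone torsions.

    Each move replaces a window of residues with a library fragment
    (a short list of (phi, psi) pairs) and keeps the move only if the
    score improves. Real protocols use Metropolis acceptance, but the
    key idea is the same: sampling whole fragments of known local
    structure instead of individual torsion angles slashes the search space.
    """
    rng = random.Random(seed)
    conf = [(-120.0, 120.0)] * n_res        # start from an extended chain
    best = score_fn(conf)
    for _ in range(steps):
        frag = rng.choice(library)           # e.g. a 3-mer fragment
        pos = rng.randrange(n_res - len(frag) + 1)
        trial = conf[:pos] + list(frag) + conf[pos + len(frag):]
        s = score_fn(trial)
        if s < best:                          # lower score = better, by convention
            conf, best = trial, s
    return conf, best

# Toy score: prefer roughly helical torsions (phi ~ -60, psi ~ -45).
def toy_score(conf):
    return sum((phi + 60.0) ** 2 + (psi + 45.0) ** 2 for phi, psi in conf)

helix_frag = [(-60.0, -45.0)] * 3
conf, best = fragment_assembly(12, [helix_frag], toy_score, steps=500)
```

With a one-fragment library the toy chain collapses to the helical target; the real win is that library fragments carry locally plausible secondary structure for free, exactly the point made above.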
At CASP (the biennial protein structure prediction competition) around 2000, I sat down with David and told him that eventually machine learning would supplant humans at structure prediction (at the time, Rosetta was already the leading structure prediction/design tool, but was filled with a bunch of ad hoc hand-coded features and optimizers). He chuckled and said he doubted it: every time he updated the Rosetta model with newer PDB structures, the predictions got worse.
I will say that the Nobel committee needs to stop saying "protein folding" when they mean "protein structure prediction".
For example, a very successful folding model had the signs reversed on hydrophobic and some electrostatic interactions. It made no sense physically but it gave a better prediction than competing models, and it was hard to move away from because it ranked well in CASP.
Yes, CASP was prone to getting stuck in local minima. I think the whole structure prediction field had become moribund.
I feel that if I tried to do that in the US (where I got a master's degree in engineering and spent 15 years as an aerospace engineer) and asked to go back and do a PhD in, say, physics, I'd be promptly told to go fuck myself (or, fuck myself but then enroll in a new undergrad or maaaybe graduate program, and only after re-taking GREs. Straight to a PhD? Never heard of it working like that.)
You can jump from social sciences to STEM. Your formal admission can wait for a year or two after you actually started. Or you can move to another university and get a PhD in a few months, because the administrative requirements in the original one were too unreasonable. These things happen, because universities are independent organizations that like doing things their own way.
In more chill fields where the waters are relatively calm, this may be less of an issue.
But let's also consider the fact that Hassabis did his undergrad at the University of Cambridge, likely with excellent results. He wasn't just some random programmer.
If you've got any math-heavy STEM graduate degree, you can likely jump into a physics PhD. You might need to take some senior-level undergraduate courses to catch up, but the transition is quite doable. At some point, your overall intelligence, enthusiasm, and work ethic matter more than your specific background.
Only if you need money.
If you pay the tuition they'll receive you with open arms.
I've seen this for a guy in his mid-to-late 30s.
As everything in the states, it's completely pay to play and largely pay to win.
That I don't know but I have my doubts.
This was at a very well-known university, so make of it what you wish.
I won't give any further details.
He began his work on the PhD prior to 1903.
(installing the first electric lighting for the 1885 Munich Oktoberfest)
I wonder if they are actually more likely to come from upper middle class (where parents are highly paid professionals) than the proper idle rich or even CEOs and company founders...
To my mind, Joe Rogan is the most recent embodiment of the American dream which is why I think he is so popular
Look how that turned out...
Maybe the dream is more like a nightmare.
Also worth noting that Jeff Bezos was (and I think still is) the youngest person ever to become a senior VP at DE Shaw. That is a position earned by merit alone.
https://en.wikipedia.org/wiki/Ted_Jorgensen
So $300K in 1994 is about $640K today. That's nice, but about 80th percentile of net worth. It's nice his parents believed in him. How many of your parents would do that for you? I'm sure at least 1 in 5 of them have that kind of money, given the distribution here. So the difference is: he was smart, he got lucky, and your parents don't believe in you enough on this front.
But compare and contrast Bezos and Musk. Bezos's mid-life crisis is leaving his wife to run around on his yacht banging models. Musk's mid-life crisis is trying to destroy democracy so he and his mom won't have to pay US taxes. Neither one is a role model, but I don't even get the point of the latter.
Which brings us back to AlphaFold. The AlphaFold team did something amazing. But also, they had a backer that believed in them. David Baker, for better or worse, didn't achieve what they did and he'd been at it for decades. It's amazing what good backing can achieve.
10k random Americans with backgrounds in software and a business idea? Not so clear.
You also seem very certain that Amazon's scale is a good thing, overall, which I remain unconvinced of.
What do you find unconvincing about roughly 30 billion in net income and free cashflow in 2023?
I view businesses through other metrics as well, including their impact on society in a variety of different ways. From some of those perspectives, it is not clear to me that Amazon (where I was the 2nd employee) is a net benefit.
The way I like to put it is that both of the following are true:
1. Bezos is uniquely talented and driven, and his success depended on that
2. Bezos' success also depended on him having an uncommon level of access to capital at a young age.
The reason I like to say "both of these are true" is that so often today I see "sides" that try to argue that only one is true, e.g. libertarian-leaning folks (especially in the US) arguing that everything is a pure meritocracy, and on the other side that these phenomenally successful people just inherited their situation (e.g. "Elon Musk is only successful because his dad owned an emerald mine")
imagine a family where youngster is encouraged to work on intellectual problems. where you aren't made fun of for touching nerdy things. or for doing puzzles. where the social circle endorses learning. these things more important than $ in a first world economy. (if third world, yes give me some money please for a book or even just food. and hopefully with time, an internet connected device then the cream will rise they can just watch feynman on YouTube...)
that said, it's "better" than it used to be. hundreds of years ago most interesting science, etc. was done by the royal class. not because they are smarter (I assume). But they had free time. And, social encouragement perhaps too.
bill gates and zuck dropped out of Harvard right? it's not per se Harvard, at least not the graduating bit? being surrounded by other smart people is helpful -- and or people who encourage intellectual endeavors.
Not really true. Newton, Copernicus, Kepler, Galileo, Mendel, Faraday, Tesla... Not from royals, nor from high nobility. Many great scientists were born to merchant families, of a level that wasn't even all too rare.
Think the Wright Brothers first aeroplane vs a rocket ship.
Of course, that's not possible, so then you do the same with other highly intelligent and skilled tech professionals. I'd argue that without the funding and other resources, those skilled pro's won't get anywhere. But with it, some would do incredibly well. It's not common in a global sense, but we see it every single day.
Comparing Bezos to thousands/millions of randomised others is pointless.
Then you may say, oh, but Amazon is unique. Yes, but then there are other factors at play. Like the luck (skill? funding?) to take advantage of a unique moment in time at the start of the web. That moment isn't available now. I mean, try to start an Amazon today ... etc.
> four-term Republican United States Representative for the state of Nebraska
To me, that doesn't sound super impressive. Also: > After failing to secure a job in the family grocery business, he started a small stock brokerage firm.
Ok, that sounds less impressive. Further: > During his first term, when congressional salary was raised from $10,000
Oh wait. That is real money. $10K in 1943 is about $180K today. Surely upper middle class by any measure.

Of course, talent doesn't always mean prosperity. But in a society modeled on meritocracy, it often will.
I suspect Bezos, then Gates, then Musk, but it could be any order.
I could never get it working on DosBox (some timing issue). Haven't tried in over a decade, though. Should see if I can get it working.
As the author of one such approach, I'm skeptical.
AlphaFold 2 just predicts protein structures. The thing about proteins is that they are often related to each other. If you are trying to predict the structure of a naturally occurring protein, chances are that there are related ones in the dataset of known 3D structures. This makes it much easier for ML. You are (roughly speaking) training on the test set.
However, for drug design, which is what AlphaFold 3 targets, you need to do well on actually novel inputs. It's a completely different use case.
More here: https://olegtrott.substack.com/p/are-alphafolds-new-results-...
That said I'm not sure that's entirely fair, since Alphafold does, as far as I know, work for predicting structures that are far away from structures that have previously been measured.
You're quite wrong about small molecule drug structures. Historically that has been the case but these days many lead structures are made by combinatorial chemistry and are not derived from natural products.
It did very poorly at this last time I checked. Maybe AlphaFold3 is better?
I'm well aware of the impact of natural products and particularly plant secondary metabolites in drug discovery. I'm also aware of combinatorial synthesis occasionally hitting structures that are close to natural products.
But from first principles, why would you need to limit yourself to that subset of molecular space?
Obviously, your structure will need to look vaguely biochemical to be compatible with the body's chemical environment, but natural products are limited to biochemically feasible syntheses, and are therefore dominated by structures derived from natural amino acids and similar basic biochemical building blocks.
For a concrete example off the top of my head, I'm not aware of any natural diazepines: the structure looks "organic", but biochemistry doesn't often make seven-membered rings, and those were made long before combinatorial chemistry. I might be wrong on this one, since there's so much out there, but I think it holds.
This depends on the application. If you are trying to design new proteins for something, unconstrained by evolution, you may want a method that does well on novel inputs.
> Same with drug design
Not by a long shot. There are maybe on the order of 10,000 known 3D protein-ligand structures. Meanwhile, when doing drug discovery, people scan drug libraries with millions to billions of molecules (using my software, oftentimes). These molecules will be very poorly represented in the training data.
The theoretical chemical space of interest to drug discovery is bigger still, with on the order of 1e60 molecules in it: https://en.wikipedia.org/wiki/Chemical_space
I mean this is a fast award cycle.
[1]: https://www.science.org/doi/10.1126/science.abj8754
[2]: https://cen.acs.org/analytical-chemistry/structural-biology/...
I remember when computer aided drug design first came out (and several “quantum jumps” along the way). While useful they failed often at the most important cases.
New drugs tend to be developed in spaces we know very little about. Thus there is nothing useful for AI to be trained on.
Nothing quite like hearing from the computational scientist “if you make this one change it will improve binding by 1000x”. Then spending 3 weeks making it to find out it actually binds worse.
It needed Oriol as well doing IC work
Also I really hope the Nobel Prize of Economics goes to Bill Gates! He facilitated sooo much advances by releasing Excel that this must be recognized!
And based on this year's announcements so far I am not sure that my sarcastic comments should be taken as a joke!
"These authors contributed equally: John Jumper, Richard Evans, Alexander Pritzel, Tim Green, Michael Figurnov, Olaf Ronneberger, Kathryn Tunyasuvunakool, Russ Bates, Augustin Žídek, Anna Potapenko, Alex Bridgland, Clemens Meyer, Simon A. A. Kohl, Andrew J. Ballard, Andrew Cowie, Bernardino Romera-Paredes, Stanislav Nikolov, Rishub Jain, Demis Hassabis"
More significantly: it has yet to be especially impactful in biochemistry research, nor have its results really been carefully audited. Maybe it will turn out to deserve the prize. But the committee needed to wait. I am concerned that they got spun by Google's PR campaign, or, considering yesterday's prize, Big Tech PR in general.
With its help, they have been able to predict the structure of virtually all the 200 million proteins that researchers have identified [the word 'virtually' is stretched into meaninglessness]
or vague, could have been done with other tools, and hardly Nobel-worthy: Among a myriad of scientific applications, researchers can now better understand antibiotic resistance and create images of enzymes that can decompose plastic.
I am seriously wondering if they took Google / DeepMind press releases at face value.

The fact you have to reach for "I [wonder if the votes were based on] Google / DeepMind press releases [taken] at face value" should be a blaring red alarm.
It creates a new premise[1] that enables continued permission to seek confirmation bias.
I was once told you should check your premises when facing an unexpected conclusion, and to do that before creating new ones. I strive to.
[1] All Nobel Prize voters choose their support based on reading a press release at face value
Second point is spot on. I really, really hope they didn't just fall for what is frankly a bit of an SV-style press release meant to hype things. Similar work was done on crystal structures, with some massive number reported. That's a vastly different thing from the implied meaning that they are now fully understood and usable in some way.
Crispr did not solve gene editing either, but has been made accessible to the broad biochemistry and biology researchers to use.
Both similar impact and changed the field significantly.
Jokes aside, I think the chemistry prize seems to make a bit more sense to me than physics one.
There's already precedent for anonymous creators, aka Banksy.
Thus, things like the folding kinetics of transition states and intermediates remain poorly understood through such statistical models, because they do not explicitly incorporate the physical laws governing the protein system, such as electrostatic interactions, solvation effects, or entropy-driven conformational changes.
In particular, environmental effects are neglected - there's no modeling of the native solvated environment, where water molecules, ions, and temperature directly affect the protein’s conformational stability. This is critical when it comes to designing a novel protein with catalytic activity that's stable under conditions like high salt, high temperature etc.
As far as Nobel Prizes, it was already understood in the field two decades ago that no single person or small group was going to have an Einstein moment and 'solve protein folding', it's just too complicated. This award is questionable and the marketing effort involved by the relevant actors has been rather misleading - for one of the worst examples of this see:
https://www.scientificamerican.com/article/one-of-the-bigges...
For a more judicious explanation of why the claim that protein folding has been solved isn't really true:
"The power and pitfalls of AlphaFold2 for structure prediction beyond rigid globular proteins" (June 2024)
Lol does this mean there's a chance the Transformer Authors win a Nobel in literature sometime? Certainly seems a lot more plausible than before yesterday.
But still the way forward is computational thinking that is very clear.
Oh well. Fellow realists, see you all 1500 years from now!
I constantly use AlphaFold structures today [1]. And AlphaFold is fantastic. But it only replaces one small step in solving any real-world problem involving proteins such as designing a safe, therapeutic protein binder to interrupt cancer-associated protein-protein interactions or designing an enzyme to degrade PFAS.
I think the primary achievement is that it gets protein structures in front of a lot more smart eyes, and for a lot more proteins. For “everyone else” who never needed to master computational protein structure prediction workflows before, they now have easy access to the rich, function-determinative structural information they need to understand and solve their problem.
The real tough problem in protein design is how to use these structure predictions to understand and ultimately create proteins we care about.
1. https://alexcarlin.bearblog.dev/multistate-protein-design-wi...
I worked on protein structure prediction for a couple years and it was sota.
(lol, one of the PDF attachments to that page is 'Illustration: A string of amino acids'; actually it's a bit better than the title implies :)
Actually, Figure 2, "How does AlphaFold2 Work?", is impressive to fit on one page. Nice.
How is the Nobel Prize actually administered? For how long is the Nobel committee bound to follow Alfred Nobel's will? And aren't there laws against perpetual trusts? Or is the rule against awarding the technical awards to organizations one that the committee maintains out of deference to Nobel's original intentions?
Interesting. Any more in-depth analysis about this?
Btw, you don't just build AlphaFold by doing only 'computers'. Take a look at any good documentary about it and you will see that they discuss chemistry on a deep level.
It is possible that some committee members might have raised this same concern in their discussions.
Relatively short: in comparison to real chemists, whose work is the basis for this development.
This is my first interaction on Hacker News, and I was expecting a more polite discussion. I just expressed my idea. You could have asked for my explanation.
And I personally really think that if people from a different field jump into a new field and revolutionize it, a Nobel Prize is not a bad way to appreciate that effort.
I'm using Chrome on KDE (Ubuntu) on a 1920 wide display (minus the side panel). I checked and I don't have the page zoomed.
it's the year of AI (ChatGPT preparing its acceptance speech)
Other aspects of biotech and research could well be affected far faster than the consumer drug market, but again you’ll need a few years for those early stage developments to aid real world applications.
It moved the needle so much in terms of baseline capability. Let alone Nobel’s original request: positive impact to humanity; well deserved.
In biology/medicine it is still regarded with awe, as if it came from a different planet; the tech before was obviously that lacking.
[0] https://scholar.google.com/citations?view_op=view_citation&h...
EDIT: typos
Contrast that with the Physics Nobel for advancing AI using physics.
I think we might be at the end of it, as the emphasis shifts to commercialization and product development.
These AI demonstrations require so many GPUs, and so much specialized hardware and data, that nobody but the biggest players has them. Moreover, they are engineering work, not really science (putting together a lot of hacks and tweaks). Meanwhile, the person who led the transformer paper (a key ingredient in LLMs) hasn't been recognized.
This will incentivize scientists to focus on management of other researchers who will manage other researchers who will produce the technical inventions. The same issue arises with citations and indices, and the general reward structure in academia.
The signal these AI events convey to me: You better focus on practical stuff, and you better move on in the management ladder.
Isn't sama on track to end up with more financial resources than Nobel had at his disposal?
Plus I think he's got enough of a philanthropic streak which can prove to be not so shabby.
There could very well be a foundation someday awarding the Altman Prize well into the 22nd century.
Whether or not his most dynamic legacy would be something as simultaneously useful/dangerous as dynamite.
For an example, the Millennium Technology Prize is awarded every two years and the prize money is slightly higher than the Nobel prize (1M EUR vs 0.94M EUR). The achievements it's been awarded for tend to be much more practical, immediate and understandable than the Nobel prize achievements. The next one should be awarded in a couple of weeks.
And when that happens, it'll get 1/10th the publicity a Nobel prize gets, because the Nobel prize is older than any living human and has been accumulating prestige all that time, while the Millennium prize is only 20 years old.
Just because there is a ton of hype from OpenAI doesn't detract from what DeepMind has done. AlphaGo anybody?
Are we really already forgetting what a monumental problem protein folding was? Decades of research, and AlphaFold came in and revolutionized it overnight.
We are pretty jaded these days when miracles are happening all the time and people are like "yeah, but he's just a manager 'now', what have they done for me in the last few days".
Say I know about ATP Synthase and how the proteins/molecules involved there interact to make a sort of motor.
How does AlphaFold help us understand that or more complicated systems?
Are proteins quite often dispersed and unique, finding each other to interact with? Or like ATP Synthase are they more of a specific blueprint which tends to arrange in the same way but in different forms?
In other words:
Situation 1) Are there many ATP-synthase-type situations we find too complex to understand: regular patterns and regular co-occurrences of proteins that we don't understand?
Situation 2) Or is most protein use situational and one-off? We see proteins only once or twice, very complicated ones, yet they do useful things?
I struggle to situate the problem of Unknown proteins without knowing which of the above two is true (and why?)
However, it appears that some of the authors were more equal than others.
I think one must also give him the credit for the vision, risk taking and drive to apply the resources at his disposal, and RL, to these particular problems.
Without that push this research would never have been done, but there may have been many fungible people willing to iron out the details (and, to be fair, contribute some important ideas along the way).
I’m not a proponent of the “great man” theory of history, but based on the above I can see that this could be fair (although I have no way of telling if internally this is actually how it played out).
Now that it has grown he might be doing more management. But the groundwork that went into AlphaFold was built on all the earlier Alphaxxx things they have built, and he contributed.
It isn't like other big tech managers that just got some new thing dumped in their lap. He did start off building this.
That's usually how you get a Nobel prize in science. You become an accomplished scientist, and eventually you lead a big lab/department/project, and with a massive team you work on projects where there are big discoveries. These discoveries aren't possible to attribute to individuals. If you look back through history and count how many "boss professor leading a massive team/project" cases there are versus how many "Einstein type making a big discovery in their own head," I think you'll find that the former is a lot more common.
> This will incentivize scientists to focus on management of other researchers who will manage other researchers who will produce the technical inventions.
I don't think the Nobel prize is a large driver of science. It's a celebration and a way to put a spotlight on something and someone. But I doubt many people choose careers or projects based on "this might get us the prize..."
That's a very recent thing. Up to the 90s, the Nobel committee refused to even recognize it. They only started to award those prizes in the 21st century, and in most fields they never became the majority.
So is the Large Hadron Collider.
The Nobel isn't a vehicle to recognize hundreds of thousands of deeply technical scientific researchers. How could it be? They have to pick a symbolic figurehead to represent a breakthrough.
They could also simply give it to "DeepMind," similar to how they sometimes give the peace prize to organizations, or how the Time Person of the Year is sometimes something abstract (like the cutesy "You" of 2006). But it would be silly. Just deal with it: we can't "recognize" hundreds of thousands, and we want to see a personal face, not a company logo, getting the award. That's how we are; better learn to deal with it.
Which is okay. The Nobel prize is okay.
> This way, at least once a year, the everyday layperson hears about scientific discoveries.
Spot on.
The problem we have is that the everyday layperson hears very little about scientific discoveries. The scientists themselves, one in a million of them, can get a Nobel prize. The rest, if they are lucky, get a somewhat okay salary. Sometimes better than that of a software engineer. Almost always worse working hours.
But I suppose it's all for the best. Imagine a world where a good scientist, one that knows everything about biology and protein folding, gets to avoid cancer and even aging, while the everyday layperson can only go to the doctor...
There is a reason so many of us work as software engineers now; I earn about 5x more than I would as a university lecturer/assistant professor.
Just insane.
"These authors contributed equally: John Jumper, Richard Evans, Alexander Pritzel, Tim Green, Michael Figurnov, Olaf Ronneberger, Kathryn Tunyasuvunakool, Russ Bates, Augustin Žídek, Anna Potapenko, Alex Bridgland, Clemens Meyer, Simon A. A. Kohl, Andrew J. Ballard, Andrew Cowie, Bernardino Romera-Paredes, Stanislav Nikolov, Rishub Jain, Demis Hassabis"
https://arxiv.org/pdf/1207.7214#page=26
Guess how many of them were included in the prize. It's a shame that the Nobel committee shies away from awarding it to institutions, but the AlphaFold prize doesn't even make the top 10 in a list of most controversial omissions from a Nobel prize. It's a simple case of lab director gets the most credit.
https://www.nobelprize.org/organization/special-regulations-...
“J.J. and D.H. led the research. J.J., R.E., A. Pritzel, M.F., O.R., R.B., A. Potapenko, S.A.A.K., B.R.-P., J.A., M.P., T. Berghammer and O.V. developed the neural network architecture and training. T.G., A.Ž., K.T., R.B., A.B., R.E., A.J.B., A.C., S.N., R.J., D.R., M.Z. and S.B. developed the data, analytics and inference systems. D.H., K.K., P.K., C.M. and E.C. managed the research. T.G. led the technical platform. P.K., A.W.S., K.K., O.V., D.S., S.P. and T. Back contributed technical advice and ideas. M.S. created the BFD genomics database and provided technical assistance on HHBlits. D.H., R.E., A.W.S. and K.K. conceived the AlphaFold project. J.J., R.E. and A.W.S. conceived the end-to-end approach. J.J., A. Pritzel, O.R., A. Potapenko, R.E., M.F., T.G., K.T., C.M. and D.H. wrote the paper.”
Hey, I wonder who these mysterious "J.J." and "D.H." might be?
The question is: is this a necessity for doing good science today, or rather an artifact of how important research is organized today (i.e., an artifact of the bureaucratic and organizational structure that you have to accept/tolerate if you want to have a career in science)?
This ratio has flipped in the past 30 years (1994-2023): 17% of prizes were individual, 83% collaborative.
So I'd say yes, collaborative work is increasingly a requirement for doing groundbreaking research today. The organizational structures and funding are part of the reason, as you mention. But it's also that modern scientific problems are more complex. I had a professor who used to say about biology, "the easy problems have been solved." While I think that's dismissive of some of the ingenious experiments done in the past, there's some truth to it.
if A=B and A=C, then A=C
Technically true, but you might still want to double check your logic.
In comparison, the one for lithium batteries was awarded in 2019, over 30 years after the original research, when probably more than half of the world's population already used them on a daily basis.
This is really sad. A new recipe for feeding honeybees to make tastier honey could get to market in perhaps a month or two. All the chemical reactions happening in the bees' gut and all the chemicals in the resulting honey are unknown, yet within a matter of weeks it's being eaten.
Yet if we find a new way of combining chemicals to cure cancer, it takes a decade before most can benefit.
I feel like we don't balance our risks vs rewards well.
Now, if the alternative to treatment is to die anyway, then I think that balance shifts. I do think we should be somewhat liberal with experimental treatments for patients in dire need, but you have to also understand that experimental treatments can just be really expensive, which limits either the people who can afford them or, if they're given for free, the amount the researcher can make/perform/provide.
10 years is a very long time. I've had close family members die of cancer, and any opportunity for treatment (read: hope) is good in my opinion. But I wouldn't say there's no reason that it takes so long.
> “To understand this [...] you have to first examine the man’s academic life before and after the war.”
Quote from: https://discover.lanl.gov/news/0609-oppie-nobel-prize/
Not anymore. You're not required to know or have studied Chemistry to get a Nobel in Chemistry.
Curie was a trained chemist when she won her prize in physics. Michelson was a naval officer. Naturally, being able to win a Nobel usually means you studied the field your entire life, but that has never been a requirement.