It's not like Bic pens. It's a new technique they couldn't do before that helped crack the mystery.
Also the title is "AI Helps..." not "AI Discovers", so that's kind of a strawman. I don't think anyone is denying the humans did great work. Maybe it's more like Joe Boggs using the Hubble telescope to find a new galaxy, and someone moaning because the telescope gets a mention.
I'm quite enthusiastic about the AI bit. My grandad died with Alzheimer's 50 years ago. My sister is due to die of ALS in a couple of years. Both areas have been kind of stuck for decades. I'm hoping the AI modeling allows some breakthroughs.
I can't tell you how many times I've sat through talks where someone (usually ill-equipped to really engage with the research) suggests that the speaker tries AlphaFold for this or that without a clear understanding of what sort of biological insight they're expecting. It's also a joke at this point how often grad students plug their protein into AlphaFold and spend several minutes giving a half-baked analysis of the result. There are absolutely places where structure prediction is revolutionizing things including drug discovery, but can we acknowledge the hype when we see it?
I'm very sorry for your loss, my aunt is also declining due to this disease. I think statistically everyone either goes through it or becomes a caretaker if they live long enough.
Maybe that's what happens, but it's not required or what people want. A massive share of those diagnosed, more than half, would prefer a compassionate end to their life at the right time. Fewer than 2% end up able to take this option.
Maybe I've underestimated the impact the AI tooling has had then, because it seems to me that your example wouldn't be an issue: discovery is literally the entire point of the tool.
> I'm hoping the AI modeling allows some breakthroughs.
I'm actually on board with you on this. I think it can be extremely useful and really speed things up when dealing with the huge amounts of complex data that need to be worked with; my only gripe here was the title itself. It seems forced when it could have been "Amazing breakthrough discovered to unravel cause of Alzheimer’s". From there the main body of the article would match the title, with a nice shout-out to a really creative use of AI.
> It's a new technique they couldn't do before that helped crack the mystery.
What about SAT-based solvers [1] for the same problem?
[1] https://ieeexplore.ieee.org/document/5361301
Would that technique do the same? If not, why?
The title cites the AI contribution, not the human
Just a few days ago I read an interesting link here on HN showing that more than 70% of VC funding goes straight to "AI"-related products.
This thing is affecting all of us one way or another...
(Disclaimer: I'm the author of a competing approach)
Searching for new small-molecule inhibitors requires going through millions of novel compounds. But AlphaFold3 was evaluated on a dataset that tends to be repetitive: https://olegtrott.substack.com/p/are-alphafolds-new-results-...
The title is clickbaity; it would be useful to stress that AI solves a very specific problem here that is extremely hard to do otherwise. It is like a Lego piece.
In other words, this is something that happens in the field all the time, and most of it gets no attention from people outside the field; this one wouldn't either, were it not for the "AI" buzzword in the article.
How many people would have read the article if it didn’t mention AI?
I think it’s cool to see, and a good counterpoint to the “AI can’t do anything except generate slop” negativity that seems surprisingly common round here.
> It's really a bummer to see this marketed as 'AI Discovers Something New'.
The headline doesn't suggest that. It's "AI Helps Unravel", and that seems a fair and accurate claim.
And that's true for the body of the article, too.
> *These authors contributed equally
so your position is satisfied by listing an AI amongst those authors?
Go to the very last page of the current archive and he's hyping up nanotech in 2015, which, as far as I'm aware, didn't end up panning out or really going anywhere. https://today.ucsd.edu/archives/author/Liezel_Labios/P260
OK but if the AI did all the non-standard work, then that's even more impressive, no?
> With AI, they could visualize the three-dimensional structure of the PHGDH protein. Within that structure, they discovered that the protein has a substructure that is very similar to a known DNA-binding domain in a class of known transcription factors. The similarity is solely in the structure and not in the protein sequence.
Reminds me of: if you come across a dataset you have no idea of what it is representing, graph it.
The typical route of discovering those viruses was first genetic. When you get a genome (especially back when this work was initiated), you'd BLAST all the gene sequences against all known organisms to look for homologs. That's how you'd annotate what the gene does. Much more often than not, you'd get back zero results - these genes had absolutely no sequence similarity to anything else known.
My PI would go through and clone every gene of the virus into bacteria to express the protein. If the protein was soluble, we'd crystallize it. And basically every time, once the structure was solved, if you did a 3D search (using Dali Server or PDBe Fold), there would be a number of near identical hits.
In other words, these genes had diverged entirely at the sequence level, but without changing anything at the structural (and thus functional) level.
Presumably, if AlphaFold is finding the relationship, there's some information preserved at the sequence level - but that could potentially be indirect, such as co-evolution. Either way, it's finding things no human-guided algorithm has been able to find.
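For anyone outside the field: the sequence-search step described above is routine to script. Here's a minimal Biopython sketch; the query sequence is a made-up placeholder, and real runs hit NCBI's servers and can take minutes:

    # Sketch of a BLAST homology search, assuming Biopython is installed.
    from Bio.Blast import NCBIWWW, NCBIXML

    protein_seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # hypothetical query

    handle = NCBIWWW.qblast("blastp", "nr", protein_seq)  # search all known proteins
    record = NCBIXML.read(handle)

    # For the viral genes described above, this loop would print nothing:
    # no sequence-level homologs, even though structural homologs exist.
    for alignment in record.alignments:
        for hsp in alignment.hsps:
            if hsp.expect < 1e-5:  # only report confident hits
                print(alignment.title, hsp.expect)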
This is not my area of expertise, and maybe I'm misunderstanding this, but I thought that what AlphaFold does is extrapolate a structure from the sequence. The actual relationship with the other existing proteins would have been found by the investigators through other, more traditional means (like the 3D search you mentioned).
Checking sub-regions of the structure would be more difficult, but depending on how the structural representation works it could just be computationally intensive.
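For what it's worth, the core of a rigid-body structural comparison is small. A minimal numpy sketch of superposing two equal-length C-alpha traces (the Kabsch algorithm) and computing RMSD; scanning sub-regions means repeating this over sliding windows, which is where the cost comes from:

    import numpy as np

    def kabsch_rmsd(P, Q):
        # P, Q: (N, 3) arrays of C-alpha coordinates, equal length
        P = P - P.mean(axis=0)                   # center both point clouds
        Q = Q - Q.mean(axis=0)
        U, S, Vt = np.linalg.svd(P.T @ Q)        # SVD of the covariance matrix
        d = np.sign(np.linalg.det(Vt.T @ U.T))   # correct for improper rotation
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T  # optimal rotation
        return np.sqrt(((P @ R.T - Q) ** 2).sum() / len(P))

    # Sub-region check: compare a 40-residue window of one structure
    # against every 40-residue window of another, e.g.
    # kabsch_rmsd(coords_a[i:i+40], coords_b[j:j+40])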
Now, there are a couple of ways a gene could be different without altering the protein's function. It turns out multiple codons* can code for the same amino acid. So if you switch out one codon for another which codes for the same amino acid, obviously you get a chemically identical sequence and therefore the exact same protein. The other way is you switch an amino acid, but this doesn't meaningfully affect the folded 3D structure of the finished protein, at least not in a way that alters its function. Both these types of mutations are quite common; because they don't affect function, they're not "weeded out" by evolution and tend to accumulate over evolutionary time.
* except for a few that are known as start and stop codons. They delineate the start and end of a gene.
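To make the synonymous-codon point concrete, a toy sketch (the codon table here is deliberately tiny, not the full 64-entry table):

    # Two different DNA sequences that translate to the same peptide,
    # illustrating synonymous ("silent") mutations.
    CODON_TABLE = {
        "ATG": "M",                              # Met (also the start codon)
        "GCT": "A", "GCC": "A", "GCA": "A", "GCG": "A",  # four codons for Ala
        "CTT": "L", "CTG": "L",                  # two of the six codons for Leu
        "TAA": "*",                              # '*' = stop
    }

    def translate(dna):
        peptide = ""
        for i in range(0, len(dna), 3):          # read in 3-base codons
            aa = CODON_TABLE[dna[i:i+3]]
            if aa == "*":                        # stop codon ends the gene
                break
            peptide += aa
        return peptide

    seq_a = "ATGGCTCTTTAA"   # Met-Ala-Leu, one choice of codons
    seq_b = "ATGGCGCTGTAA"   # Met-Ala-Leu, different synonymous codons

    assert translate(seq_a) == translate(seq_b) == "MAL"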
You could build houses from bricks, timber or poured concrete that all looked the same in the end. Their internal structures and methods of construction would be different, but they would have the same form.
I'm reading the GP's comment similarly.
Genes are instructions for building proteins.
For a given output, you could write a program in wildly different programming languages, or even use the same language but structure it in wildly different ways.
If there's no match for the source code (genes), then find a match for the output (protein).
Amino acid side chains fall into broad chemical classes: non-polar, polar, acidic, and basic.
In terms of 3D fold - i.e. the general abstract shape of the protein in 3D, you can make loads of substitutions without changing it, generally as long as you stay within the same class.
It's not until you compare the 3D shape that you see the relationship.
https://www.sciencedirect.com/science/article/pii/S000291652...
https://www.jarlife.net/3844-choline-sleep-disturbances-and-...
PEMT (phosphatidylethanolamine N-methyltransferase) is what makes choline in the body, but it depends on estrogen. (https://pmc.ncbi.nlm.nih.gov/articles/PMC3020773/)
Gemini tells me that amounts to ~850mg of alpha GPC or ~1900mg of citicoline. Eggs it is then.
Claude tells me that’s 4-5 eggs per day or 5x150 mg alpha gpc capsules.
The eggs would be a lot more expensive in both time and materials plus most egg farms seem cruel (especially male chick killing)… I’m leaning towards alpha gpc supplements.
Gemini used 40% choline by weight for alpha GPC and 18% for citicoline, which seems to check out with other sources.
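Those two figures are mutually consistent. A quick arithmetic check, using only the numbers quoted above (the ~340 mg choline target is just what both estimates imply, not a recommendation):

    target_choline_mg = 340             # implied by both estimates above
    alpha_gpc_fraction = 0.40           # ~40% choline by weight
    citicoline_fraction = 0.18          # ~18% choline by weight

    print(target_choline_mg / alpha_gpc_fraction)    # ~850 mg alpha GPC
    print(target_choline_mg / citicoline_fraction)   # ~1890 mg citicoline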
> I’m leaning towards alpha gpc supplements.
I haven't looked into the studies recently, but there have been some negative findings with alpha GPC supplementation[1]. May be worth a gander.
[1] https://examine.com/supplements/alpha-gpc/#what-are-alpha-gp...
https://pubmed.ncbi.nlm.nih.gov/38733921/
Does a 100% safe and effective source of choline exist? Maybe a combination of eggs and supplements are the way to go?
Medicine and law, OTOH, suffer heavily from a fractal volume of data and a dearth of experts who can deal with the tedium of applying an expert eye to that much data. Imagine we start capturing ultrasounds and chest X-rays en masse, or giving legal advice to those who need help. LLMs/ML are more likely to get this right than to get writing computer code right.
> You read the article and see the journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward—reversing cause and effect. [...]
> In any case, you read with exasperation or amusement the multiple errors in a story, and then turn the page to national or international affairs, and read as if the rest of the newspaper was somehow more accurate about Palestine than the baloney you just read. You turn the page, and forget what you know.
The problem with an example like ultrasound is that it's not a passive modality - you don't just take a sweep and then analyze it later. The tech is taking shots and adjusting as they go along to see things better in real time. There's all sorts of stuff potentially in the way, often bowel and bones, and you have to work around all that to see what you need to.
A lot of the job is actively analyzing what you're seeing while you're scanning and then going for better shots of the things you see, and having the experience and expertise to get the shots is the same skills required to analyze the images to know what shots to get. It's not just a matter of waving a wand around and then having the rad look at it later.
Legally yes, the rad is the one interpreting it, but it's a very active process by the technologist. The ultrasound tech is actively interpreting the scans as they do them, and then using the wand to chase down what they notice to get better shots of things. If they don't see something the rad won't either, so you need that expertise there to identify things that don't look right, it's very real time and you can't do it post hoc.
Unless there's going to be a huge reduction in hallucinations, I absolutely don't see LLMs replacing doctors or lawyers.
That would be actual malpractice in either case.
LLMs have a history of fabricating laws and precedents when acting as a lawyer. Any advice from the LLM would likely be worse than just assuming something sensible, as that is more likely to reflect what the law is than what the LLM hallucinates it to be. Medicine is in many ways similar.
As for your suggestion to capture and analyze ultrasounds and X-rays en masse, that would be malpractice even if it were performed by an actual doctor instead of an AI. We don't know the base rate of many benign conditions, except that it is always higher than we expect. The additional images are highly likely to show conditions that could be either benign or dangerous, and additional procedures (such as biopsies) would be needed to determine which it is. This would create additional anxiety in patients from the possible diagnosis, and further pain and possible complications from the additional procedures.
While you could argue for taking these images and not acting on them, you would either tell the patients the results and leave them worried about what the discovered masses are (so they likely will have the procedures anyway) or you won't tell them (which has ethical implications). Good luck getting that past the institutional review board.
There is a theory that Alzheimer's, as we currently understand it, is not one disease but multiple diseases that are lumped into one category because we don't have an adequate test.
This is also where some of the controversy surrounding the amyloid hypothesis comes from. [1]
[1] https://stanforddaily.com/2023/07/19/stanford-president-resi...
There are lots of good reasons to believe in the amyloid hypothesis, and no single paper or even line of research is the bedrock of the hypothesis. It was already foundational to Alzheimer's research back in the early 1990s (essentially, before Alzheimer's became one of the holy-grail quests of modern medicine), well before any of the fraudulent research into Alzheimer's was done.
The main good reason not to believe in amyloid is that every drug targeting amyloid plaques has failed to even slow Alzheimer's, even when it does an impressive job of clearing out plaques - and that is a hell of a good reason to doubt the hypothesis. But no one was going to discover that failure until amyloid blockers read out their phase III clinical trial results, and that didn't really start happening until about a decade ago.
Lecanemab and donanemab succeeded in slowing Alzheimer’s.
As did gantenerumab in a recent prevention trial: https://www.alzforum.org/news/research-news/plaque-removal-d...
Right, monocausal explanations in general set off my skept-o-sense too; but then I thought of another example: Andrew Wakefield (except that AW succeeded more at convincing Facebook moms than the scientific establishment - but still harmed society just as much, IMO).
Amyloid deposits correlate with Alzheimer’s, but they do not cause the symptoms. We know this because we have drugs which (in some patients, not approved for general use) completely clear out amyloids, but have no effect on symptoms or outcomes. We have other very promising medications that do nothing to amyloids. We also have tons of people who have had brain autopsies for other reasons and been found to have very high levels of amyloid deposits, but no symptoms of dementia prior to death.
Alzheimer’s isn’t caused by amyloids.
I 100% agree with you that we shouldn't throw the baby out with the bathwater on this one. Data being falsified and the hypothesis being wrong are two different things.
The internet is awash in random garbage and it'd be interesting to have a link that someone who actually sees sleep EEGs thinks is "80% there".
Re: Link, just to lower your load in answering.
Anyone who believes that an entire field and decades of research pivoted entirely around one researcher falsifying data is oversimplifying. The situation was not good, but it’s silly to act like it all came down to this one person and that there wasn’t anything else the industry was using as its basis for allocating research bets.
One thing that AI/ML is really good at is taking very large datasets and finding correlations that you wouldn't otherwise. If everyone's medical chart were in one place, you could find things like "four years before presenting symptoms of pancreatic cancer, patients complain of increased nosebleeds", or things like that.
Of course we don't need universal healthcare to have a chart exchange, and the privacy issues are certainly something that needs consideration.
But the point is, I suspect we could find cures and leading indicators for a lot of diseases if everyone's medical records were available for analysis.
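A hedged sketch of what that correlation hunt could look like once the data is in one place (file names, column names, and condition codes are all hypothetical):

    # Given longitudinal complaint records and later diagnoses, test whether
    # an early symptom is enriched among patients who develop a disease.
    import pandas as pd
    from scipy.stats import fisher_exact

    records = pd.read_csv("complaints.csv")    # patient_id, date, complaint
    diagnoses = pd.read_csv("diagnoses.csv")   # patient_id, date, diagnosis

    had_nosebleeds = set(records.loc[records.complaint == "nosebleed", "patient_id"])
    got_cancer = set(diagnoses.loc[diagnoses.diagnosis == "pancreatic_cancer", "patient_id"])
    everyone = set(records.patient_id)

    # 2x2 contingency table: symptom vs. later diagnosis
    a = len(had_nosebleeds & got_cancer)
    b = len(had_nosebleeds - got_cancer)
    c = len((everyone - had_nosebleeds) & got_cancer)
    d = len(everyone - had_nosebleeds - got_cancer)

    odds_ratio, p = fisher_exact([[a, b], [c, d]])
    print(odds_ratio, p)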
> The study co-authors (from left to right) Sheng Zhong, Junchen Chen, Wenxin Zhao, Ming Xu, Shuanghong Xue, Zhixuan Song and Fatemeh Hadi
> This work is partially funded by the National Institutes of Health (grants R01GM138852, DP1DK126138, UH3CA256960, R01HD107206, R01AG074273 and R01AG078185).
Universal Health Care would be great but we are at a place where the research itself may vanish from the US.
I believe you, but I'm curious how that works. When you go to a random doctor, do they have to request your records from all your other doctors? Similar to here in the USA when you have a PPO?
One, in some of the countries I know (with universal healthcare and no centralised records) you don't go to a random doctor. You have a declared family doctor and you have to go to them unless they are unavailable, in which case the other doctor you go to has to declare that you couldn't go to your doctor. It's a small hurdle to prevent doctor shopping, but it means people are more likely to always see the same doctor. Specialists are given the relevant information by the family doctor when referring a patient to a specialist, and in most other cases records are not really needed, or the ER will contact whoever to get the information they think they need. It might sound hazardous but in practice it works fine.
Second, some places have centrally-stored records but the access is controlled by the patient. Every access to the record is disclosed to the patient, who can revoke access for anyone at any time. That generally goes together with laws that fundamentally oppose any automated access or sharing of these records with third parties.
And third, I don't understand what any of this has to do with whether healthcare access is universal or not? Universal healthcare without centralised records exists (in France, unless it has changed in recent years, but it at least existed for 60 years or so) and centralised records without universal healthcare could exist (maybe privately managed by insurance companies, since the absence of universal healthcare would indicate a pretty disengaged state).
Universal healthcare is about who is paying, not necessarily about who is running the service.
This was somewhat annoying since unlike the UK system, the Australian system is essentially private GPs getting paid for your individual appointments by the government (so called bulk billing), so there's no guarantee that you can go to the same doctor every time.
This was the last decade's way of doing things. The current decade's way is to stay within the desired charting system. That way you can one-click share data between doctors. Typically you would search for doctors who use the same charting platform. Epic is probably the largest one in the US today.
Basically, government funded and regulated doesn't mean government run.
There is no standardized EHR system here, despite provincial governments (which are who runs the systems) wasting millions over the last two decades trying to make that happen.
'patient complains of increased nosebleeds' isn't structured data you can query (or feed to ML) like that. It actually takes a physician having this kind of hypothesis, to then trawl through the records, reading unstructured notes, creating their own database for the purpose - you know, had/did not have nosebleed, developed/did not develop pancreatic cancer within 4 years, or whatever - so then they can do the actual analysis on the extracted data.
Where I think LLMs could indeed be very helpful is in this data collection phase: this is the structured data I want, this is the pile of notes, go. (Then you check some small percentage of them and if they're correct assume the rest are too. There's already huge scope for human error here, so this seems acceptable.)
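That extraction step might look something like this (model name, prompt, and the note text are placeholders; this assumes the OpenAI Python client, but any LLM API works the same way):

    # Sketch: turn one free-text clinical note into one structured row.
    import json
    from openai import OpenAI

    client = OpenAI()
    note = "Pt reports frequent nosebleeds over past month. No other complaints."

    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": 'Extract JSON: {"nosebleeds": true/false}. Answer with JSON only.'},
            {"role": "user", "content": note},
        ],
    )
    row = json.loads(resp.choices[0].message.content)
    print(row)  # then hand-check a sample of outputs, per the comment above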
Isn't this exactly what HIPAA was supposed to address?
While universal healthcare is an ambitious goal, even small improvements to medical care access would have huge hidden effects on public health, and in turn our access to health data.
Unfortunately so many junk systems were pushed to the market and the "common charting protocol" is highly dependent on the EHR used by the hospital system.
There _was_ supposed to be some interoperability between EHRs but I honestly haven’t been following it for quite some time.
As for availability of medical history to researchers, I highly doubt this will happen.
Big tech has ruined the trust between people and technology. People gave up their data to G, MS, FB, and others for many years.
We have yet to see any benefit for the common man or woman. Only the data is used against us. Used to divide us (echo chambers). Used to manipulate us (buy THIS, hate that, anti WoKe). Used to control uneducated and vulnerable population. Used to manipulate elections. Used to enrich the billionaire class.
[0] https://www.cell.com/cell/fulltext/S0092-8674(25)00397-6
Sure sounds like it.
...I ask because bio/chem visualization and simulation was a solved problem back in the 1980s (...back when bad TV shows used renders of spinning organic-chemistry hexagons on the protagonist's computer as a visual metaphor for doing science!).
If I had any funding to work freely in these subjects, I would instead focus on the more fundamental questions of computationally mapping and reversing cellular senescence, starting with something tiny and trivial (but perhaps not tiny nor trivial enough) like a rotifer. My focus wouldn't be the biologists' "we want to understand this rotifer", "or we want to understand senescence", but more "can we create an exact computational framework to map senescence, a framework which can be extended and applied to other organisms"?
Sadly, funding for science is a lost cause, because even where/when it is available, it comes with all sort of political and ideological chains.
Researching and curing AD is not barking up the wrong tree. There is a horrible deadly monster in that tree that needs defeating. I hope people also get scientific funding for other age-related issues.
"AI" in this case was used to generate a 3D model of a protein. Literally, something you can grab from Wikipedia — https://en.m.wikipedia.org/wiki/Phosphoglycerate_dehydrogena...
The underlying work performed by the researchers is much more interesting — https://linkinghub.elsevier.com/retrieve/pii/S00928674250039...
They identified a possible upstream pathway that could help treat disease and build therapeutic treatments for Alzheimer’s.
I don’t know about you all but I’m tired of the AI-mania. At least the author didn’t put "blockchain" in the article.
Because there's AI as in "letting ChatGPT do the hard bits of programming or writing for me", for which it is woefully unsuited, and there's AI as in using machine learning as a statistical approach, which it fundamentally is. It's something you can pour data into and let the machine find how the data clump together, so you can investigate potential causative relationships the Mark I eyeball might have missed.
I'm excited for the possibilities these uses of AI might bring.
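To illustrate the "find how the data clump together" use, a minimal scikit-learn sketch on synthetic data (real work would of course use actual measurements, not blobs):

    # Unsupervised clustering: let the machine propose groupings,
    # then a human investigates whether they mean anything.
    from sklearn.datasets import make_blobs
    from sklearn.cluster import KMeans

    X, _ = make_blobs(n_samples=300, centers=3, random_state=0)  # toy dataset
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
    print(labels[:10])  # cluster assignments to probe for causal hypotheses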
Because I find myself nodding along with optimism, having two grandfathers that died from this disease. It’d be great if something could sift through all the data and come up with a novel solution.
Then I remember that this is the same technology that eagerly tries to autocomplete every other line of my code to include two nonexistent variables and a nonexistent function.
I hope this field has some good people to sanity check this stuff.
Then I remember that this is the same technology that failed to drive in screws for a project I was working on a week ago.
The AI that's being used in applications like this is not generative AI. It really is just "sparkling statistics" and it's tremendously useful in applications like this because it can accelerate the finding of patterns in data that form the basis of new discoveries.
A paper author is quoted on the use of AI. But without explaining precisely how the AI was used and why it was valuable, this article is basically clickbait trash. Was AI necessary for their key result? If so, how and why? We don't know!
Everything about this screams "just say AI and we'll get more attention".
I agree the UCSD writeup is pretty misleading; the authors used protein-modeling software, which is really not very interesting, and the fact that the SOTA protein modeler uses machine learning is not at all relevant to this specific paper.
Ah yeah I skimmed and searched for “AI” so missed that. The UCSD article does not contain the term “AlphaFold” so yeah they’re definitely engagement baiting.
It’s a nice reprieve from “we’re using a chatbot as a therapist and it started telling people to kill themselves” type news.
This is a completely normal way to talk about inanimate objects
The human body is a pretty amazing construction, nature doesn't make a lot of mistakes.