If astronomers announced that a large asteroid might strike Earth in twenty years, and that we currently had no way to deflect it, nobody would respond by saying, “Come back when you already have the rocket.” We would immediately build better telescopes to track it precisely, refine its trajectory models, and begin developing propulsion systems capable of interception. You do not wait for the cure before improving the measurement. You improve the measurement so that a cure becomes possible, targeted, and effective.
Medicine is no different. Refusing to improve early, probabilistic diagnosis because today’s treatments are modest confuses sequence with outcome. Breakthroughs do not emerge from vague labels and mixed populations. They emerge from precise, quantitative stratification that allows real effects to be seen. The danger is not that we measure too early. It is that we continue making irreversible clinical and research decisions using imprecise, binary classifications while biological insight and therapeutic tools are advancing rapidly. Building the probabilistic layer now is not premature. It is how we make future intervention feasible.
This is absolutely nothing like the asteroid example, where learning that anybody at all is going to fall victim would itself be news of astronomical proportions. Previously there was a chance the event wouldn't happen; now there is news that it will, and that entirely changes the calculus of your priorities.
This just completely destroys the analogy. (There are other reasons it doesn't fit too, but one is enough.)
> If astronomers announced that a large asteroid might strike Earth in twenty years, and that we currently had no way to deflect it, nobody would respond by saying, “Come back when you already have the rocket.”
I don’t think the analogy fits, for a couple reasons.
1. People not wanting to know whether they have Alzheimer’s comes from the fear of a fate worse than death — living with Alzheimer’s.
2. People not wanting to know whether they have Alzheimer’s is not the same as not wanting a way to detect it. As you said, being able to measure it may help lead to a cure/treatment. I doubt people are against improving detection — they may just not want the detection to be applied personally.
People can be fine with being tested so that epidemiologists can work on growing our knowledge and, at the same time, not wanting to know their own diagnosis.
Imagine you're born and you eventually learn that there's an asteroid on a collision course with Earth, discovered way before you were born. It's going to take many years to get here, you may die before it hits, and so far no scientists have been able to come up with a way to deflect it. Do you care?
Adding newness to the situation makes it wildly different.
Left untreated for a very long time (decade+), it spreads to the brain and causes dementia among other things. Older generations with stigmas, taboos, or from lower educational backgrounds seem (to me) less likely to get tested, so it seems plausible.
Source: Have recently discovered this myself with a family member from their neurologist.
Even without this method, doctors have been able to give a diagnosis with 75.5% accuracy (according to the paper's claim).
I.e., it builds on the original ~75% accuracy and boosts it by another 20 points.
The problem is that the assessment itself is slow, expensive and requires skill.
What we really want from a test is high specificity (a positive test means you really have it) and high sensitivity (a negative test means you don't have it).
This is how we can offer screening.
No it's not, that's a reported mean, presumably with the right number of significant digits.
If you want to criticize the variance/stddev, do so, but you picked the wrong metric if that's what you wanted to complain about.
If it turns out that driving a Prius on Tuesdays slows down Alzheimer’s, a larger pool of subjects would allow us to figure that out.
It's also better for the people around the Alzheimer's patient, as it will let them understand why someone's personality and behaviours may be changing, and possibly let them be a bit more forgiving of such changes. It will also give family more time to plan and to understand the health and community services and support offered wherever they live.
The best these types of drugs can do is give you a few extra months (say 4-6). They're not a cure. Sadly.
This is all new. There is research hinting at Alzheimer's subtypes, some of which are more likely to respond than others. Even halting the decline is a huge potential breakthrough.
Time will tell if the 30% slowdown continues beyond four years, and/or if earlier treatment with more effective amyloid clearance from newer drugs has greater effects. The science suggests it should.
> her mental acuity scores are (slightly) better than they were last year
My grandfather had a "fall" at work; he then left that job and held down two more engineering jobs before he was diagnosed with a condition causing strokes, and subsequently dementia. I got the distinct impression he thought he had more time, but he declined rapidly.
If he knew he was short of time before his rapid decline he probably would have done things differently. Like not buying a house he would later have to sell to pay for aged care.
If he knew he was at risk of a workplace accident he probably wouldn't have worked as an after hours safety engineer at a major treatment plant, where if the worst had happened he could have endangered others.
(There's enough info in the supplemental link on this page to have an LLM do the Bayes math for you.)
Looks like my prior was not too bad :)
At a personal level, I've been through this with my grandfather.
I want to know. My family wants to know. I want to prepare because there are things I want to do today that I know I won't be able to do in the future.
In many ways, it's just like many terminal cancer diagnoses. You're going to lose that person, but you have some time.
It's a weird disease, and IMO not even really a single disease: it's a bunch of different causes of cognitive impairment under one umbrella, and it should be separated out further to find actual causes and treatments.
These patients are already seeing doctors. Would you rather your doctor hide the diagnosis just because your disease isn't curable (for now)? It's not like we're testing the whole population en masse.
Getting an accurate diagnosis is always important. Cognitive decline could be caused by other problems, some of which are more treatable than others.
If this test came back negative it would suggest extra testing to rule out other conditions like a brain tumor or hydrocephalus.
It is frankly shocking to think disease diagnosis would be considered a useless thing.
There's Lecanemab and Donanemab. The effects are modest however.
The test is optional. Feel free to skip it.
Tell 50 million people they’re likely to have Alzheimer’s then tell them where to donate towards a cure, or treatments to slow it by a decade.
But apparently your odds go above 30% if you live long enough, so if you could test for being in that cohort I think that result would be too common to actually be devastating.
Pharmaceutical companies have spent something like $50 billion on developing Alzheimer's drugs with, well, the most furtive of straw-grasping to show for it. It's probably the most expensive single disease target (especially as things like cancer are families of diseases)... the failure to have good results isn't for lack of money, and merely throwing more money at it is unlikely to actually make progress towards meaningful treatments.
Someone always says “merely throwing money at the problem…”
What time period was the money spent? The last 25 years?
I just feel the thinking is off; it's like we are trying to treat cuts by removing scabs and scar tissue. We really need deep investigation into the sources, which I feel in many cases are industrial chemicals and how some people's bodies / immune systems respond to them.
One of the most compelling studies I saw was how distance from a golf course predicted neurodegenerative diseases, based on the courses' use of certain pesticides.
https://www.alzheimers.org.uk/news/2025-11-18/promising-rese...
Even though it cannot be reversed or eradicated (yet, let's hope) detection can allow individuals to adopt interventions that help either adjust their lives to better cope with its progression or help mitigate some of the detrimental behavioral consequences. In addition, if you have family to care for it may be impetus to get certain things in order for them before later stages of the disease, etc. It's horrible and bleak, but I could certainly see why one might want to know.
In the lucky case, it can also relieve anxiety. Even though false negatives may still be possible, a negative result might give relief to people who have anxiety about certain symptoms, since they can rule out (rightly or wrongly) a pretty severe disease.
It's used to refine clinical diagnosis after patients present with severe cognitive decline.
By the time someone gets this test, they have severe problems. The purpose of this test is to assist with the right diagnosis.
One of the interesting checks in this study might be to check when (if) any of the participants had taken this vax and what the impact might be on an Alzheimer's diagnosis.
If you have a prevalence of 10 in 1000, how do the numbers shake out?
Well, you test all 1,000, and assume 95% accuracy for both false positives and false negatives (i.e., 95% sensitivity and 95% specificity).
Of the 990 you test who don't have the disease, the test will falsely say about 50 do. Yikes!
And of the 10 that do have the disease? You'll miss about 1 of them.
It's not terrible. This is a relatively good number. Diagnostics is just terribly difficult.
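The arithmetic above can be sketched in a few lines. The 95% sensitivity and specificity are the commenter's assumed values, not numbers from the article:

```python
# Bayes arithmetic for the worked example: 1,000 people, prevalence 10/1,000,
# with an assumed 95% sensitivity and 95% specificity.
population = 1000
sick = 10
healthy = population - sick

sensitivity = 0.95  # P(test positive | disease)
specificity = 0.95  # P(test negative | no disease)

false_positives = healthy * (1 - specificity)  # 990 * 0.05 = 49.5, ~50
missed = sick * (1 - sensitivity)              # 10 * 0.05 = 0.5, ~1
true_positives = sick * sensitivity            # 9.5

# Chance that a given positive result is real (positive predictive value):
ppv = true_positives / (true_positives + false_positives)
print(round(false_positives), round(ppv, 3))  # -> 50 0.161
```

So even a decent-looking test gives you roughly 50 false alarms for every 9-10 real cases at this prevalence, which is why the raw accuracy figure alone tells you little.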
Spoilers: it's anywhere between 1-15% and 5-30% for false positives, and 1-15% / 5-40% for false negatives. That's imaging, biomarkers, cancer screenings, etc.
Like, where do you think the concept of "second opinions" came from? Whimsy? Let's go ask a second doctor whether I actually have cancer, it'll be fun!
This statement is quite broad and misses several important factors.
First of all, a test's sensitivity and specificity. The math in your example assumes a balanced test, but on what basis? The math comes out quite different for high-sensitivity or high-specificity tests. (Unfortunately, I could not find the numbers for the test in the linked article.)
Secondly, whom are we testing? The prevalence rate in your example (1%) is unrealistically low even for the general population. But would we screen the general population? No, we'd screen high-risk groups: the elderly, those with certain APOE genotypes etc. Predictive values of a test depend hugely on the prevalence rate.
Lastly, it depends on how the results are used. If it's a high-sensitivity test used to decide whom to send to the next tier in a multi-tier diagnostic system, it could actually be quite effective at that (very rarely missing the disease while greatly reducing the need for more expensive or more invasive testing).
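The prevalence point can be made concrete with a small sketch. The sensitivity/specificity values here are assumptions for illustration (the article's numbers aren't available), and the prevalence figures are stand-ins for "general population" versus "high-risk cohort":

```python
# Positive predictive value (PPV) of the same hypothetical test applied to
# populations with different disease prevalence. Sensitivity and specificity
# are assumed at 95% each; they are not the article's figures.
def ppv(prevalence, sensitivity, specificity):
    true_positives = prevalence * sensitivity
    false_positives = (1 - prevalence) * (1 - specificity)
    return true_positives / (true_positives + false_positives)

for prev in (0.01, 0.10, 0.30):  # general population vs. higher-risk groups
    print(f"prevalence {prev:.0%}: PPV = {ppv(prev, 0.95, 0.95):.2f}")
# prevalence 1%:  PPV = 0.16
# prevalence 10%: PPV = 0.68
# prevalence 30%: PPV = 0.89
```

The identical test goes from mostly false alarms to mostly correct calls purely by restricting whom you screen, which is the whole argument for testing high-risk groups rather than everyone.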