1) it is claimed that beliefs in distinct ADHD subtypes already existed
2) it is claimed these subtypes are now confirmed and verified by scientific measurement
I only quickly skimmed the actual study, but couldn't immediately find any discussion of treatment information leaking into the measurement process. Suppose a handful of treatments exist (say a few types of drugs, some targeting the same biochemical pathways, so the drugs themselves could be clustered into, for example, 3 groups); then if we give some kids treatments A1, A2, ..., other kids treatments B1, B2, ..., and other kids treatments C1, C2, C3, what prevents the brain scans from detecting which drugs a participant was subjected to? Bullshit in, bullshit out?
How would a designer of the experiment rule out that they are measuring the treatment, as opposed to actually measuring supposedly afflicted, clusterable brain types?
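The concern above can be sketched with synthetic data (all numbers and group sizes are made up for illustration): even if no brain subtypes exist at all, so every child's "true" brain signal is identical, a clustering run over scans from medicated kids can simply recover the drug groups.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 3 treatment families, NO real brain subtypes --
# the underlying brain signal is the same for every child.
n_per_group, n_features = 50, 10
treatment = np.repeat([0, 1, 2], n_per_group)

# Each drug family leaves its own imaging signature (the leak).
signatures = rng.normal(0.0, 1.0, size=(3, n_features))
scans = signatures[treatment] + rng.normal(0.0, 0.3, size=(150, n_features))

# Plain k-means (k=3), written out so the sketch stays dependency-light.
def kmeans(X, k, iters=50):
    # Init one center per treatment block, just to keep the sketch stable.
    centers = X[[0, 50, 100]].copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(0) for j in range(k)])
    return labels

labels = kmeans(scans, 3)

# Purity: fraction of kids whose cluster is their treatment group's
# dominant cluster. Near 1.0 means the "subtypes" are just the drugs.
purity = sum(np.bincount(labels[treatment == g]).max() for g in range(3)) / 150
print(f"cluster/treatment purity: {purity:.2f}")
```

With treatment signatures this clean, the recovered "brain subtypes" are nothing but the treatment groups, which is exactly the leakage that strict pre-treatment exclusion criteria are supposed to rule out.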
A strong but not infallible safeguard would be to only include participants who have just been diagnosed, so their brains can be scanned before any treatment by the healthcare industry; let the dataset slowly trickle in as cases pop up, and then track repeat assessments to make sure the participants are still believed to suffer from ADHD, rather than representing an early misdiagnosis that is eventually re-evaluated.
I say "strong but not infallible" because we know how kids are these days: they can be addicted to various degrees to illicit drugs, addicted to social media, afflicted with various kinds of trauma, etc. Those might show up in the scans and correlate with conventional diagnoses too, but are harder to keep objective, uncontested track of. Still, if such earlier correlates can be found, wouldn't we then have discovered that there is no such thing as ADHD, but instead children disturbed by various traumas, addictions, programmed insecurities, ...?
Red Flag: in order to answer my own question I jump to the methods section to read:
> Inclusion and exclusion criteria across sites are available in eTable 1 in Supplement 1.
That sounds like exactly what I want to check: the inclusion / exclusion criteria (basically exclude scans from patients already being treated).
I click the handy link (Supplement 1 in the sentence above is a hyperlink supposedly pointing me to the supplemental data), only to arrive at (Ctrl-F "eTable 1"):
site 1 has among other exclusion criteria:
>2. Current or past treatment with psychotropic medication;
Are ADHD medications considered psychotropic or not?
site 2 has among other exclusion criteria:
> 1. Exposure to psychostimulants or other ADHD medications for at least 3 months prior to screening; lifetime exposure to mood stabilizers or antipsychotic medications; exposure to psychotropic medication during the 30 days prior to screening;
That site is a bit better, and since psychotropic medications are mentioned in a different clause than ADHD medications, it seems data would already leak through site 1. I say "a bit better" because site 2 still allows up to 90 days of ADHD medication. Here at least it is implicitly acknowledged that treatment information could leak back into the "objective measurements"; but if they were aware of this potential leakage, why allow up to 90 days of ADHD medication at all?! Set it to 0! And publish the cumulative distribution of days of treatment, so we can see that, say, 3 patients were already taking ADHD medication for 89 days, 4 for 88 days, etc.
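To make that request concrete, here is a minimal sketch of the cumulative counts I'd want published; the per-patient numbers below are made up, since the real figures are exactly what the paper does not report:

```python
import numpy as np

# Hypothetical per-patient counts of days already on ADHD medication
# before scanning (3 patients at 89 days, 4 at 88 days, etc.).
days_on_medication = np.array([0, 0, 0, 12, 45, 88, 88, 88, 88, 89, 89, 89])

# Survival-style cumulative counts: for each threshold d (0..90 days,
# the site-2 window), how many patients had >= d days of prior treatment.
thresholds = np.arange(0, 91)
at_least = (days_on_medication[None, :] >= thresholds[:, None]).sum(axis=1)

for d in (1, 30, 89):
    print(f">= {d:2d} days: {at_least[d]} patients")
```

A single table or plot like this per site would let readers judge at a glance how much treated data actually entered the "pre-treatment" dataset.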
site 3 has among other exclusion criteria:
> 2. History of taking psychoactive medications other than stimulants.
so here it is again unclear whether ADHD medications count as an exclusion criterion; however, from the phrasing at site 2:
> Exposure to psychostimulants or other ADHD medications
it would appear that ADHD medications are typically stimulants, which are expressly allowed to pass through at site 3!
I checked the start of this supplementary data PDF, and the rest of the scientific methodology appears at least superficially sound, grounded in the mathematics, physics and engineering of medical image processing. But feeding such objective pipelines with data that knowingly leaks treatment information (they knew, since site 2's relaxed exclusion criterion was deliberately set at 90 days) basically amounts to science-washing the status quo of ADHD treatment.
The basic idea of this experiment is sound, and it is worthy of repetition, but the exclusion of any type of prior ADHD treatment must be enforced without compromise (so no "it was just 90 days of treatment" exceptions).
The money is in the healthcare industry, while the credibility is in the math, physics and chemistry branches. Such a distribution is an open invitation to science-washing the status quo.