There are small wells at one end. You insert a small amount of solution containing the dyed DNA (etc.) into each. Apply an electrical potential across the gel. The DNA gradually moves along, and smaller DNA fragments move faster. So, at a given time, you can coarsely measure the fragment sizes in a given sample. Your absolute scale is given by "standards", aka "ladders": samples containing fragments of multiple, known sizes.
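(If you want to see how the ladder calibration works numerically: migration distance is roughly linear in the log of fragment length over the gel's useful range, so a toy sketch might look like the following; every number in it is invented.)

    # Toy calibration: fit log10(size) vs. migration distance from the
    # ladder, then read off unknown bands. All numbers are made up.
    import numpy as np

    ladder_bp   = np.array([100, 200, 500, 1000, 2000])  # known sizes (bp)
    ladder_dist = np.array([9.1, 7.6, 5.8, 4.4, 3.0])    # measured cm

    slope, intercept = np.polyfit(ladder_dist, np.log10(ladder_bp), 1)

    def estimate_bp(distance_cm):
        return 10 ** (slope * distance_cm + intercept)

    print(round(estimate_bp(5.0)))  # rough size of an unknown band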
The paper's authors (allegedly) cheated by copy-pasting images of the gel. This is what was caught, so it implies they may have made up some or all of the results in this and other papers.
The problem is the vertical labels:
In Figure 1e it says: "MT1+2", "MT2" and "MT1"
In Figure 3a it says: "5'-CR1", "CR2" and "3'-UTR"
In Figure 3b it says: "CR2", "CR3" and "CR4"
generally, no consequences
Most people only remember the initial publication and the noise it makes. The updates/retractions are generally not remembered, resulting in the same "generally, no consequences", but the details matter.
In my area we have a few research groups that are very trustworthy, and it's safe to try to combine their results with one of our ideas to get a new result. Other groups have a mixed history of dubious results: they don't lie, but they cherry-pick too much, so their results may not be generalizable enough to use as a foundation for our research.
[1] Exact reproductions are difficult to publish, but if you reproduce a result and add a twist, it may be good enough to be published.
(And I think part of the general blowback against the credibility of science amongst the public is that popular communication has put a big emphasis on "peer-reviewed paper == credible", which is an important distortion of the real message, "a peer-reviewed paper is the minimum bar for credible". High-profile cases of incorrect results or fraud are obvious problems for the first statement.)
Also, many sites just copy-and-paste the press release from the university, which often has a lot of exaggerations, and sometimes they add a few more.
[1] If the journal has too many single-author articles, it's a big red flag.
Jan Hendrik Schön (he was even stripped of his PhD, which is not possible in most jurisdictions). He fabricated results in dozens of papers about organic semiconductors.
Victor Ninov, who lied about creating something like four different elements.
Hwang Woo-suk, who faked the cloning of humans and other mammals, lied about the completely unethical acquisition of human egg cells, and literally had the entire Korean government attempting to prevent him from being discredited. He was caught primarily because his papers reused pictures of cells. Hilariously, his lab did successfully clone a dog, which was considered difficult at the time.
Pons and Fleischmann didn't do any actual fraud. They were merely startlingly incompetent, incurious, and arrogant. They still never did real research again.
Perhaps this is already happening, and we just don't know it... For this reason I've always thought gel images were more susceptible to fraud than other commonly faked images (NMR/MS spectra, etc., which are harder to spoof).
I'd also suspect that fraud does not necessarily start at the beginning of the experiments, but might happen at a later stage, when someone realizes their results didn't turn out as expected or wanted. At that point you have already run the gels, and it might be much more convenient to just do image manipulation.
Something like NMR data is certainly much more difficult to fake convincingly, especially if you'd have to provide the original raw datasets at publication (which unfortunately isn't really happening yet).
I mean, I've seen people deliberately choose to discard their data and keep no notes, even when I offered to give them a flash drive with their data on it, so I understand that this sort of thing happens. It's still senseless.
https://youtube.com/playlist?list=PLlXXK20HE_dV8rBa2h-8P9d-0...
I wish, wish, wish there was something similar for computer science. If I got paid for every paper that looked interesting but could not be replicated, I would be rich.
Google Scholar reports 43 citations: https://scholar.google.com/scholar?q=Novel+RNA-and+FMRP-bind...
The images still seem to be visible in both PubMed and Nature versions.
PubMed version: https://pubmed.ncbi.nlm.nih.gov/26586091/
Nature version: https://www.nature.com/articles/ncomms9888
Nature version (PDF): https://www.nature.com/articles/ncomms9888.pdf
The senior author is Mark Mattson: one of the world's most highly cited neuroscientists, with amazing productivity and a large lab at NIH when this work was done.
https://scholar.google.com/citations?user=N3ObarMAAAAJ&hl=en...
Mattson is well known as a biohacker and an expert on intermittent fasting and its health benefits.
https://en.wikipedia.org/wiki/Mark_Mattson
He retired from the National Institute on Aging in 2019 and is now at Johns Hopkins University, still an active researcher.
https://nihrecord.nih.gov/2019/08/23/mattson-expert-brain-ag...
https://harpers.org/archive/2025/01/the-ghosts-in-the-machin...
But I've wondered whether maybe some of the fabrications are just sloppy tracking of so many artifacts.
You might be experienced enough with computers to have filing conventions and workflow tools, around which you could figure out how to accurately keep track of numerous lab equipment artifacts, including those produced by multiple team members, and have traceability from publication figures all the way to original imaging or data. But is this something everyone involved in a university lab would be able to do reliably?
I'm sure there's a lot of dishonesty going on, because people going into the hard sciences can be just as shitty as your average Leetcode Cadet. But maybe some genuine scientists could use better computer tools and skills?
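(As a sketch of what "better tools" could mean at the low end: even a checksummed manifest of every raw instrument file would give you traceability from a published figure back to the source data. A minimal, hypothetical version, where "raw_data" is just a placeholder directory:)

    # Hypothetical sketch: hash every raw file under raw_data/ so a
    # published figure can cite the exact source artifact.
    import hashlib, json, pathlib

    def manifest(root):
        out = {}
        for p in sorted(pathlib.Path(root).rglob("*")):
            if p.is_file():
                out[str(p)] = hashlib.sha256(p.read_bytes()).hexdigest()
        return out

    print(json.dumps(manifest("raw_data"), indent=2))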
Sure, bad actors will maintain plausible deniability, but I would rather let some people slide than get worked up over mistakes or misunderstandings.
I think that's only true for a single incident. If someone does injury to me, I'm just as injured whether they were malicious or incompetent, but mitigation strategies for future interactions are different.
However, in the long run it is stupid because of two and a half reasons:
- it reduces people's trust in science, because it becomes obvious we cannot trust the scientists, which in the long run will reduce public funding for the grift
- it causes misallocation of funds by people misled by the grift, and this may cause you actual harm (e.g., what if you develop Alzheimer's but there is no cure, because you lied about the causes 20 years ago?)
1/2- there is a chance that you will get caught, and like the former president of Stanford, not be allowed to continue bilking the gullible. This only gets half a point because the repercussions are generally not immediate and definitely not devastating to those who do it skillfully.
It could be hard to do without access to data, and integration would be costly. And, like shorting, the difficulty is how to monetize. It could also be easy to game. Still...
The nice thing about the business is that the market (publishing) is flourishing. I'm not sure about the state of the art or the availability of such services.
For sales: run it on recent publications, and quietly ping the editors with findings and a reasonable price.
Unclear though whether to brand in a user-visible way (i.e., where the journal would report to readers that you validate their stuff). It could drive uptake, but a glaring false negative would be a risk.
Structurally, perhaps should be a non-profit (which of course can accumulate profits at will). Does YC do deals without ownership, e.g., with profit-sharing agreements?
> After I raised my concerns about 4% of papers having image problems, some other journals upped their game and have hired people to look for these things. This is still mainly being done I believe by humans, but there is now software on the market that is being tested by some publishers to screen all incoming manuscripts. The software will search for duplications but can also search for duplicated elements of photos against a database of many papers, so it’s not just screening within a paper or across two papers or so, but it is working with a database to potentially find many more examples of duplications. I believe one of the software packages that is being tested is Proofig.
Proofig makes a lot of claims but they also list a lot of journals: https://www.proofig.com/
[0]: https://thepublicationplan.com/2022/11/29/spotting-fake-imag...
It inverts the second image and passes the first and third images under it; when there is a complete overlap, the combined images make a nearly perfectly uniform gray rectangle, showing that they cancel out.
You can also get the raw pixel information by converting to a bitmap and comparing values, but the visual check is easier, because it's pretty trivial for a simple modification to change all of the pixel values while it's still the same image.
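(For the curious, a minimal sketch of the invert-and-blend trick in Python, assuming two same-size grayscale crops saved as crop_a.png and crop_b.png, both hypothetical names:)

    # Invert one crop, blend 50/50 with the other; duplicates cancel
    # to flat mid-gray (~127). crop_a.png / crop_b.png are placeholders.
    import numpy as np
    from PIL import Image

    a = np.asarray(Image.open("crop_a.png").convert("L"), dtype=np.int16)
    b = np.asarray(Image.open("crop_b.png").convert("L"), dtype=np.int16)

    blend = (a + (255 - b)) // 2
    print("mean:", blend.mean(), "stddev:", blend.std())
    if blend.std() < 5:  # arbitrary threshold for "nearly perfectly gray"
        print("likely the same image")

Identical crops blend to a flat gray, while unrelated ones leave visible structure, so the eye (or the standard deviation) still catches duplicates even after edits have shifted the raw pixel values.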
An imgur for scientific photos with hash-based search, or something. We have the technology for this.
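Something like that could be built on perceptual hashing. A rough sketch, assuming the Python imagehash package (the db dict and function names are just placeholders):

    # Sketch assuming the imagehash package; a perceptual hash survives
    # re-encoding and mild edits, unlike a hash of the file bytes.
    import imagehash
    from PIL import Image

    db = {}  # hash -> paper ID; a real service would need a proper index

    def register(path, paper_id):
        db[imagehash.phash(Image.open(path))] = paper_id

    def lookup(path, max_distance=6):
        h = imagehash.phash(Image.open(path))
        # ImageHash subtraction gives the Hamming distance.
        return [pid for known, pid in db.items() if known - h <= max_distance]

The small Hamming-distance tolerance is what lets a cropped-and-recompressed gel image still match the original upload.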
"Comment on Nature paper on 2015 mRNA paper suggests data re-used in different contexts"
The current title would suggest music to most lay-people.
picture (the submitter) had the right idea—it's often better to take a subtitle or a representative sentence from the article when an original title isn't suitable for whatever reason, but since in this case it's ambiguous, we can change it.
If there's a better phrase from the article itself, we can change it again.
This has the makings of a Highlander episode. Three groups of immortals forming bands in different generations.