If AI can help spot obvious errors in published papers, it can do it as part of the review process. And if it can do it as part of the review process, authors can run it on their own work before submitting. It could massively raise the quality level of a lot of papers.
What's important here is that it's part of a process involving the experts themselves -- the authors, the peer reviewers. They can easily dismiss false positives, and, more importantly, they get warnings about statistical mistakes or other aspects of the paper that aren't their primary area of expertise but can contain gotchas.
I hope your version of the world wins out. I’m still trying to figure out what a post-trust future looks like.
And let's say someone modifies their faked lab results so that no AI can detect any evidence of photoshopped images. Their results get published. Well, nobody will be able to reproduce their work (unless other people publish further fraudulent work built on it), and fellow researchers will raise questions -- a lot of them. Also, guess what: even today, badly photoshopped results often don't get caught for a few years, and in hindsight it's just some low-effort image manipulation -- copying part of an image and pasting it elsewhere.
I doubt any of this changes anything. There is a lot of competition in academia, and depending on the field, things may move very fast. Getting fraudulent work past AI detection likely doesn't give anyone enough of an advantage to survive in a competitive field.
Sadly you seem to underestimate how widespread fraud is in academia and overestimate how big the punishment is. In the worst case, when someone finds you guilty of fraud, you get a slap on the wrist. In the usual case absolutely nothing happens and you are free to keep publishing fraud.
Anyway, I think "wishful thinking" is way more rampant and problematic than fraud -- i.e., work done in a way that does not fully explore its own weaknesses.
People shouldn't be trying to publish before they know how to properly design a study and analyze the results. Publications also shouldn't be willing to publish work that does a poor job of following the fundamentals of the scientific method.
Wishful thinking and assuming good intent isn't a bad idea here, but that leaves us with a scientific (or academic) industry that is completely inept at doing what it is meant to do - science.
Alternatively, now that NIH has been turned into a tool for enforcing ideological conformity on research instead of focusing on quality, things will get much worse.
Within the reputable set, as someone convinced that fraud is out of control, have you ever tried to calculate the fraud rate as a percentage, with a numerator and a denominator (either the number of papers published or the number of reputable researchers)? I would be very interested, and stunned, if it was over 0.1% or even 0.01%.
Generally speaking, evidence suggests that fraud rates are low (lower than in most other human endeavours). This study cites 2% [1], which is similar to the numbers that Elisabeth Bik reports. For comparison, self-reported doping rates were between 6% and 9% here [2].
[1] https://pmc.ncbi.nlm.nih.gov/articles/PMC5723807/ [2] https://pmc.ncbi.nlm.nih.gov/articles/PMC11102888/
And again, for comparison, >30% of elite athletes say that they know someone who doped.
Depending on what you choose for those variables it can range from a few percent up to 100%.
https://fantasticanachronism.com/2020/08/11/how-many-undetec...
> 0.04% of papers are retracted. At least 1.9% of papers have duplicate images "suggestive of deliberate manipulation". About 2.5% of scientists admit to fraud, and they estimate that 10% of other scientists have committed fraud. 27% of postdocs said they were willing to select or omit data to improve their results. More than 50% of published findings in psychology are false. The ORI, which makes about 13 misconduct findings per year, gives a conservative estimate of over 2000 misconduct incidents per year.
Although publishing untrue claims isn't the same thing as fraud, editors of well-known journals like The Lancet or the New England Journal of Medicine have estimated that maybe half or more of the claims they publish are wrong. Statistical-consistency detectors run over psych papers find that ~50% fail such checks (e.g. that computed means are possible given the input data). The authors don't care: when asked to share their data so the causes of the check failures can be explored, they just refuse or ignore the request, even if they signed a document saying they'd share.
You don't have these sorts of problems in cryptography but a lot of fields are rife with it, especially if you use a definition of fraud that includes pseudoscientific practices. The article goes into some of the issues and arguments with how to define and measure it.
The other two metrics seem pretty weak. 1.9% of papers in a vast database containing 40 journals show signs of duplication. But then dig into the details: apparently a huge fraction of those are in one journal and in two specific years. Look at Figure 1 and it just screams “something very weird is going on here, let’s look closely at this methodology before we accept the top line results.”
The final result is a meta-survey based on surveys done across scientists all over the world, including surveys that are written in other languages, presumably based on scientists also publishing in smaller local journals. Presumably this covers a vast range of scientists with different reputations. As I said before, if you cast a wide net that includes everyone doing science in the entire world, I bet you’ll find tons of fraud. This study just seems to do that.
For example, select the tortured phrases section of this database. The detection is literally nothing fancier than a big regex:
https://dbrech.irit.fr/pls/apex/f?p=9999:24::::::
Randomly chosen paper: https://link.springer.com/article/10.1007/s11042-025-20660-1
"A novel approach on heart disease prediction using optimized hybrid deep learning approach", published in Multimedia Tools and Applications.
This paper has been run through a thesaurus spinner yielding garbage text like "To advance the expectation exactness of the anticipated heart malady location show" (heart disease -> heart malady). It also has nothing to do with the journal it's published in.
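To make the "nothing fancier than a big regex" point concrete, here is a minimal Python sketch of that kind of detector. The phrase list is a tiny illustrative sample of documented tortured phrases, not the database's actual list, and a real screener works from a much larger curated dictionary:

```python
import re

# Illustrative sample of known "tortured" paraphrases -> the established term
TORTURED_PHRASES = {
    "heart malady": "heart disease",
    "counterfeit consciousness": "artificial intelligence",
    "irregular woodland": "random forest",
}

# One big alternation over every known phrase -- that's the whole trick
pattern = re.compile(
    "|".join(re.escape(p) for p in TORTURED_PHRASES),
    flags=re.IGNORECASE,
)

def flag_tortured_phrases(text: str) -> list[tuple[str, str]]:
    """Return (tortured phrase, expected term) pairs found in the text."""
    return [(m.group(0), TORTURED_PHRASES[m.group(0).lower()])
            for m in pattern.finditer(text)]

sample = ("To advance the expectation exactness of the anticipated "
          "heart malady location show")
print(flag_tortured_phrases(sample))
# [('heart malady', 'heart disease')]
```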
Now you might object that the paper in question comes from India and not an R1 American university, which is how you're defining reputable. The journal itself does, though. It's edited by an academic in the Dept. of Computer Science and Engineering, Florida Atlantic University, which is an R1. It also has many dozens of people with the title of editor at other presumably reputable western universities like Brunel in the UK, the University of Salerno, etc:
https://link.springer.com/journal/11042/editorial-board
Clearly, none of the so-called editors of the journal can be reading what's submitted to it. Zombie journals run by well-known publishers like Springer Nature are common. They auto-publish blatant spam yet always have a gazillion editors at well-known universities. This stuff is so basic that both generation and detection predate LLMs entirely, yet it doesn't get fixed.
Then you get into all the papers that aren't trivially fake but fake in advanced undetectable ways, or which are merely using questionable research practices... the true rate of retraction if standards were at the level laymen imagine would be orders of magnitude higher.
"Unpaid volunteers" describes the majority of the academic publication process so I'm not sure what you're point is. It's also a pretty reasonable approach - readers should report issues. This is exactly how moderation works the web over.
Mind that I'm not arguing in favor of the status quo. Merely pointing out that this isn't some smoking gun.
> you might object that the paper in question comes from India and not an R1 American university
Yes, it does rather seem that you're trying to argue one thing (ie the mainstream scientific establishment of the western world is full of fraud) while selecting evidence from a rather different bucket (non-R1 institutions, journals that aren't mainstream, papers that aren't widely cited and were probably never read by anyone).
> The journal itself does, though. It's edited by an academic in ...
That isn't how anyone I've ever worked with assessed journal reputability. At a glance that journal doesn't look anywhere near high end to me.
Remember that, just as with books, anyone can publish any scientific writeup that they'd like. By raw numbers, most published works of fiction aren't very high quality.[0] That doesn't say anything about the skilled fiction authors or the industry as a whole though.
> but it doesn't get fixed.
Is there a problem to begin with? People are publishing things. Are you seriously suggesting that we attempt to regulate what people are permitted to publish or who academics are permitted to associate with on the basis of some magical objective quality metric that doesn't currently exist?
If you go searching for trash you will find trash. Things like industry and walk of life have little bearing on it. Trash is universal.
You are lumping together a bunch of different things that no professional would ever consider to belong to the same category. If you want to critique mainstream scientific research then you need to present an analysis of sources that are widely accepted as being mainstream.
[0] https://www.goodreads.com/book/show/18628458-taken-by-the-t-...
Academics draw a salary to do their job, but when they go AWOL on tasks critical to their profession suddenly they're all unpaid volunteers. This Is Fine.
Journals don't retract fraudulent articles without a fight, yet the low retraction rate is evidence that This Is Fine.
The publishing process is a source of credibility so rigorous it places academic views well above those of the common man, but when it publishes spam on auto-pilot suddenly journals are just some kind of abandoned subreddit and This Is Fine "but I'm not arguing in favor of it".
And the darned circular logic. Fraud is common but This Is Fine because reputable sources don't do it, where the definition of reputable is totally ad-hoc beyond not engaging in fraud. This thread is an exemplar: today reputable means American R1 universities because they don't do bad stuff like that, except when their employees sign off on it but that's totally different. The editor of The Lancet has said probably half of what his journal publishes is wrong [1] but This Is Fine until there's "an analysis of sources that are widely accepted as being mainstream".
Reputability is meaningless. Many of the supposedly top universities have hosted star researchers, entire labs [2] and even presidents who were caught doing long cons of various kinds. This Is Not Fine.
[1] https://www.thelancet.com/pdfs/journals/lancet/PIIS0140-6736...
[2] https://arstechnica.com/science/2024/01/top-harvard-cancer-r...
Wrong is not the same as fraudulent. 100% of Physics papers before Quantum Mechanics are false[1]. But not on purpose.
[1] hyperbole, but you know what I mean.
There's some real irony in that: we wouldn't have gotten to this point without a ton of self-policing over the years, in which fraud was exposed with great consequence.
So if you publish an unreproducible paper, you can probably have a full career without anyone noticing.
I know it's not as simple as that, and "useful" can simply mean "cited" (a sadly overrated metric). But surely it's easier to get hired if your work actually results in something somebody uses.
If your academic research results in immediately useful output, all of the people waiting for that to happen will step in, and you will no longer need to worry about employment.
The "better" journals are listed in JCR. Nearly 40% of them have impact factor less than 1, it means that on average papers in them are cited less than 1 times.
Conclusion: even in better journals, the average paper is rarely cited at all, which means that definitely the public has rarely heard of it or found it useful.
They’re not useful at all. Reproduction of results isn't sexy; nobody does it. It almost feels like science is built on a web of funding trying to buy the desired results.
You tell me that this reaction creates X, and I need X to make Y. If I can't make my Y, sooner or later it's going to occur to me that X is the cause.
Like I said, I know it's never that easy. Bench work is hard and there are a million reasons why your idea failed, and you may not take the time to figure out why. You won't report such failures. And complicated results, like in sociology, are rarely attributable to anything.
Replicability is overrated anyway. Loads of bad papers will replicate just fine if you try. They're still making false claims.
https://blog.plan99.net/replication-studies-cant-fix-science...
Psycho* is rife with that.
Yeah...It's more on the less Pure domains...And mostly overseas?... :-) https://xkcd.com/435/
"A 2016 survey by Nature on 1,576 researchers who took a brief online questionnaire on reproducibility found that more than 70% of researchers have tried and failed to reproduce another scientist's experiment results (including 87% of chemists, 77% of biologists, 69% of physicists and engineers, 67% of medical researchers, 64% of earth and environmental scientists, and 62% of all others), and more than half have failed to reproduce their own experiments."
In theory, yes; in practice, the original results pointing to amyloid beta protein as the main cause of Alzheimer's were faked, and it wasn't caught for 16 years. A member of my family took a med based on it and died in the meantime.
I plus 1 your doubt in the last paragraph.
AI is definitely a good thing (TM) for those honest researchers.
FWIW while fraud gets headlines, unintentional errors and simply crappy writing are much more common and bigger problems I think. As reviewer and editor I often feel I'm the first one (counting the authors) to ever read the paper beginning to end: inconsistent notation & terminology, unnecessary repetitions, unexplained background material, etc.
You would ideally expect blatant fraud to have repercussions, even decades later.
You probably would not expect low quality publications to have direct repercussions, now or ever. This is similar to unacceptably low performance at work. You aren't getting immediately reprimanded for it, but if it keeps up consistently then you might not be working there for much longer.
> The institutions don't care if they publish auto-generated spam
The institutions are generally recognized as having no right to interfere with freedom to publish or freedom to associate. This is a very good thing. So good in fact that it is pretty much the entire point of having a tenure system.
They do tend to get involved if someone commits actual (by which I mean legally defined) fraud.
For the rarer world-scale papers we can dedicate more resources to vetting them.
During peer review, this could be great. It could stop a fraudulent paper before it causes any damage. But in my experience, I have never gotten a journal editor to retract an already-published paper that had obvious plagiarism in it (very obvious plagiarism in one case!). They have no incentive to do extra work after the fact with no obvious benefit to themselves. They choose to ignore it instead. I wish it wasn't true, but that has been my experience.
The limitations of slow news cycles and slow information transmission lend themselves to slow, careful thinking. Especially compared to social media.
No AI needed.
The other day I saw a Facebook post of a national park announcing they'd be closed until further notice. Thousands of comments, 99% of which were divisive political banter assuming this was the result of a top-down order. A very easy-to-miss 1% of the comments were people explaining that the closure was due to a burst pipe or something to that effect. It's reminiscent of the "tragedy of the commons" concept. We are overusing our right to spew nonsense to the point that it's masking the truth.
How do we fix this? Guiding people away from the writings of random nobodies in favor of mainstream authorities doesn't feel entirely proper.
Why not? I think the issue is the word "mainstream". If by mainstream, we mean pre-Internet authorities, such as leading newspapers, then I think that's inappropriate and an odd prejudice.
But we could use 'authorities' to improve the quality of social media - that is, create a category of social media that follows high standards. There's nothing about the medium that prevents it.
There's not much difference between a blog entry and a scientific journal publication: the founders of the scientific method wrote letters and reports about what they found; they could just as well have posted it on their blogs, had blogs existed.
At some point, a few decided they would follow certain standards --- You have to see it yourself. You need publicly verifiable evidence. You need a falsifiable claim. You need to prove that the observed phenomena can be generalized. You should start with a review of prior research following this standard. Etc. --- Journalists follow similar standards, as do courts.
There's no reason bloggers can't do the same, or some bloggers and social media posters, and then they could join the group of 'authorities'. Why not? For the ones who are serious and want to be taken seriously, why not? How could they settle for less for their own work product?
Redesign how social media works (and then hope that people are willing to adopt the new model). Yes, I know, technical solutions, social problems. But sometimes the design of the tool is the direct cause of the issue. In other cases a problem rooted in human behavior can be mitigated by carefully thought out tooling design. I think both of those things are happening with social media.
It baffles me that somebody can be a professor, director, whatever -- meaning they take the place of somebody _really_ qualified -- and, after falsifying a publication, not get dragged through court until nothing is left of that betrayer.
It's not only the damage to society from false, misleading claims. If those publications decide who gets tenure, a research grant, etc., then the careers of others are massively damaged as well.
There generally aren't penalties beyond that in the West because - outside of libel - lying is usually protected as free speech
The real low hanging fruit that this helps with is detecting accidental errors and preventing researchers with legitimate intent from making mistakes.
Research fraud and its detection is always going to be an adversarial process between those trying to commit it and those trying to detect it. Where I see tools like this making a difference against fraud is that it may also make fraud harder to plausibly pass off as errors if the fraudster gets caught. Since the tools can improve over time, I think this increases the risk that research fraud will be detected by tools that didn't exist when the fraud was perpetrated and which will ideally lead to consequences for the fraudster. This risk will hopefully dissuade some researchers from committing fraud.
This goes back to a principle of safety engineering: the safer, more reliable, and more trustworthy you make the system, the more catastrophic the failures when they do happen.
Same as the past.
What do you think religions are?
Obviously it's useful when desired -- they can find real issues. But it's also absolutely riddled with unchecked "CVE 11, fix now!!!" spam that isn't even correct, exhausting maintainers. Some of those are legitimate accidents, but many are just karma-farming for some other purpose, trying to look like legitimate effort by throwing plausible-looking work onto other people.
This type of LLM use feels like spell check, except for basic logic. As long as we still have people who know what they are doing reviewing things AFTER the AI review, I don't see any downsides.
> It could massively raise the quality level of a lot of papers.
Is there an indication that the difference is 'massive'? For example, reading the OP, it wasn't clear to me how significant these errors are. For example, maybe they are simple factual errors such as the wrong year on a citation.
> They can easily dismiss false positives
That may not be the case -- it is possible that the error reports aren't worth the time. Based on the accuracy reported in the OP, they don't seem to be, though that could vary by field, type of content (quantitative, etc.), and so on.
I think the only structural way to change research publication quality en masse is to change the incentives of the publishers, grant recipients, tenure-track requirements, and grad or postdoc researcher empowerment/funding.
That is a tall order, so I suspect we’ll get more of the same, and now there will be 100-page, 100% articles just like there are 4-5-page top-rank resumes. Whereas a dumb human can tell you that a one-page resume or a 2,000-word article should suffice to get the idea across (barring tenuous proofs or explanations of methods).
Edit: the incentives of anonymous reviewers as well, who can form an insular sub-industry that props up colleagues or discredits research that contradicts theirs.
If the LLM spots mistakes with 90% precision, it's pretty good. At 10% precision, people might still take a look if they publish a paper once per year. At 1%: forget it.
Once they're better with numbers, maybe have one spot statistical errors. I think a constantly updated, field-specific checklist for human reviewers makes more sense for that, though.
For a data source, I thought OpenReview.net would be a nice start.
Perhaps a better suggestion would be to set up industrial AI to attempt to reproduce each of the 1,000 most cited papers in every domain, flagging those that fail to reproduce, probably most of them...
If it could explain what's wrong, that would be awesome. Something tells me we don't have that kind of explainability yet. If we do, people could get advice on what's wrong with their research and improve it. So many scientists would love a tool like that. So if ya got it, let's go!
I think it is more about using AI to analyze the methodology of any experiments performed, the soundness of any math or source code used for data analysis, and the argumentation supporting the final conclusions.
I think that responsible use of AI in this way could be very valuable for research as well as peer review.
It seems that there were alternating occurrences of "days" and "nights" of approximately the same length as today.
A comparison of the ecosystem and civilization of the time vs. ours is fairly consistent with the hypothesis that the Earth hasn't seen the kind of major gravity disturbances that would have happened if our planet had only been captured into Sun orbit within the last 1,000 years.
If your AI rates my claim as an error, it might have too many false positives to be of much use, don't you think?
>> Right now, the YesNoError website contains many false positives, says Nick Brown, a researcher in scientific integrity at Linnaeus University. Among 40 papers flagged as having issues, he found 14 false positives (for example, the model stating that a figure referred to in the text did not appear in the paper, when it did). “The vast majority of the problems they’re finding appear to be writing issues,” and a lot of the detections are wrong, he says.
>> Brown is wary that the effort will create a flood for the scientific community to clear up, as well as fuss about minor errors such as typos, many of which should be spotted during peer review (both projects largely look at papers in preprint repositories). Unless the technology drastically improves, “this is going to generate huge amounts of work for no obvious benefit”, says Brown. “It strikes me as extraordinarily naive.”
This shouldn't even be possible for most journals where cross-references with links are required as LaTeX or similar will emit an error.
Can you link to another paper's Figure 2.2 now, and have LaTeX error out if the link is broken? How does that work?
AI, like cryptocurrency, faces a lot of criticism because of the snake oil and varying levels of poor application, ranging from the fanciful to outright fraud. It bothers me a bit how much of that critique spreads onto the field as a whole. The origin of the phrase "snake oil" is a touted medical treatment, and medicine is a field that has charlatans deceiving people to this day. In years past I would have thought it a given that people would not consider a wholesale rejection of healthcare as a field because of the presence of fraud. Post-pandemic, with the abundance of conspiracies, I have some doubts.
I guess the point I'm making is judge each thing on their individual merits. It might not all be bathwater.
Not all of that is out of reach. Making the AI evaluate a paper in the context of a cluster of related papers might enable spotting some "too good to be true" things.
Hey, here's an idea: use AI for mapping out the influence of papers that were later retracted (whether for fraud or error, it doesn't matter). Not just via citation, but have it try to identify the no longer supported conclusions from a retracted paper, and see where they show up in downstream papers. (Cheap "downstream" is when a paper or a paper in a family of papers by the same team ever cited the upstream paper, even in preprints. More expensive downstream is doing it without citations.)
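A minimal sketch of the cheap "downstream" pass described above, assuming you already have a citation edge list (paper -> papers it cites, e.g. exported from an open citation index) and a set of retracted paper IDs; figuring out whether a specific no-longer-supported conclusion actually propagates would still need a text-level pass on top of this:

```python
from collections import deque

def downstream_of_retracted(citations: dict[str, set[str]],
                            retracted: set[str]) -> dict[str, int]:
    """citations maps paper_id -> the set of paper_ids it cites.
    Returns every paper reachable from a retracted paper via citation,
    with the minimum number of citation hops away."""
    # Invert the edges: for each paper, who cites it
    cited_by: dict[str, set[str]] = {}
    for paper, refs in citations.items():
        for ref in refs:
            cited_by.setdefault(ref, set()).add(paper)

    hops: dict[str, int] = {}
    queue = deque((p, 0) for p in retracted)
    while queue:
        paper, depth = queue.popleft()
        for citer in cited_by.get(paper, ()):
            if citer not in hops or depth + 1 < hops[citer]:
                hops[citer] = depth + 1
                queue.append((citer, depth + 1))
    return hops

# Toy example: B cites the retracted paper A, C cites B, D cites C.
graph = {"B": {"A"}, "C": {"B"}, "D": {"C"}}
print(downstream_of_retracted(graph, {"A"}))
# {'B': 1, 'C': 2, 'D': 3}
```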
TBF, this also applies to all humans.
Let's just claim any absurd thing in defense of the AI hype now.
No, of course not. I was pointing out that we largely check "for self-consistency and consistency with training data" as well. Our checking of the coherency of other people's work is presumably an extension of this.
Regardless, computers already check for fraud and incorrect logic, albeit in different contexts. Neither humans nor computers can do this with general competency, i.e. without specific training to do so.
If there were an AI that can check manufactured data, science would be a solved problem.
[1]: https://daniel.haxx.se/blog/2024/01/02/the-i-in-llm-stands-f...
[0] - Imagine a public ranking system for institutions or specific individuals who have been flagged by a system like this, no verification or human in the loop, just a "shit list"
Sigh.
Sometimes it feels like crypto is the only sector left with any optimism. If they end up doing anything useful it won't be because their tech is better, but just because they believed they could.
Whether it makes more sense to be shackled to investors looking for a return or to some tokenization scheme depends on the problem that you're trying to solve. Best is to dispense with both, but that's hard unless you're starting from a hefty bank account.
"YesNoError is planning to let holders of its cryptocurrency dictate which papers get scrutinized first"
Putting more money in the pot does not make you more qualified to judge where the most value can be had in scrutinizing papers.
Bad actors could throw a LOT of money in the pot purely to subvert the project -they could use their votes to keep attention away from papers that they know to be inaccurate but that support their interests, and direct all of the attention to papers that they want to undermine.
News organizations that say "our shareholders get to dictate what we cover!" are not news organizations, they are propaganda outfits. This effort is close enough to a news organization that I think the comparison holds.
Even people working reputable mom-and-pop retail jobs know the reputation of retail due to very real high-pressure sales techniques (esp. at car dealerships). Those techniques are undeniably "sigh-able," and reputable retail shops spend a lot of time and energy distinguishing themselves to potential customers and distancing themselves from that ick.
Crypto also has an ick from its rich history of scams. I feel silly even explicitly writing that they have a history rich in scams because everyone on HN knows this.
I could at least understand (though not agree) if you raised a question due to your knowledge of a specific cryptocurrency. But "Why sigh" for general crypto tie-in?
I feel compelled to quote Tim and Eric: "Do you live in a hole, or boat?"
Edit: clarification
That said, your "you're that experienced here and you didn't understand that" line really cheapens the quality of discourse here, too. It certainly doesn't live up to the HN guidelines (https://news.ycombinator.com/newsguidelines.html). You don't have to demean parent's question to deconstruct and disagree with it.
"Is everyone huffing paint?"
"Crypto guy claims to have built an LLM-based tool to detect errors in research papers; funded using its own cryptocurrency; will let coin holders choose what papers to go after; it's unvetted and a total black box—and Nature reports it as if it's a new protein structure."
https://bsky.app/profile/carlbergstrom.com/post/3ljsyoju3s22...
Some things to note: this didn't even require a complex multi-agent pipeline. Single-shot prompting was able to detect these errors.
I guess this is a bad idea if these tools replace peer reviewers altogether, and papers get published if they can get past the error checker. But I haven't seen that proposed.
This made me laugh so hard that I was almost crying.
For a specific journal, editor, or reviewer, maybe. For most journals, editors, or reviewers… I would bet money against it.
[1]: Maybe not in the strict original sense of the phrase. More like, an incentive to misbehave and cause downstream harm to others. [2]: https://daniel.haxx.se/blog/2024/01/02/the-i-in-llm-stands-f...
The only false-positive rate mentioned in the article is more like 30%, and the true positives in that sample were mostly trivial mistakes (as in, having no effect on the validity of the message). And that is in preprints that have not been peer reviewed, so one would expect the false-positive rate to be much worse after peer review (the true positives would decrease while the false positives remain).
And every indication both from the rhetoric of the people developing this and from recent history is that it would almost never be applied in good faith, and instead would empower ideologically motivated bad actors to claim that facts they disapprove of are inadequately supported, or that people they disapprove of should be punished. That kind of user does not care if the "errors" are false positives or trivial.
Other comments have made good points about some of the other downsides.
Unfortunately, no one has the incentives or the resources to do doubly, triply thorough fine-tooth combing: no reviewer or editor is getting paid; tenure-track researchers who need the service-to-the-discipline check mark in their tenure portfolios also need to churn out research…
As someone who has had to deal with the output of absolutely stupid "AI code reviewers", I can safely say that the cost of being flooded with useless advice is real, and I will simply ignore them unless I want a reminder of how my job will not be automated away by anyone who wants real quality. I don't care if it's right 1 in 10 times; the other 9 times are more than enough to be of negative value.
Ditto for those flooding GitHub with LLM-generated "fix" PRs.
and many humans can't catch subtle issues in code.
That itself is a problem, but pushing the responsibility onto an unaccountable AI is not a solution. The humans are going to get even worse that way.
AI models only improve through training and good luck convincing any given LLM provider to improve their models for your specific use case unless you have deep pockets…
But the first thing I noticed was the two approaches highlighted -- one a small-scale approach that does not publish first but approaches the authors privately, and the other publishing first, with no human review, and with its own cryptocurrency.
I don’t think anything quite speaks more about the current state of the world and the choices in our political space.
While it sometimes spots something I missed, it also gives a lot of confident 'advice' that is just wrong or not useful.
Current AI tools are still sophisticated search engines. They cannot reason or think.
So while I think it could spot some errors in research papers, I am still very sceptical of it as a trusted source.
“This is a paper we are planning to submit to Nature Neuroscience. Please generate a numbered list of significant errors with text tags I can use to find the errors and make corrections.”
It gave me a list of 12 errors, of which Claude labeled three as “inconsistencies”, “methods discrepancies”, and “contradictions”. When I requested that Claude reconsider, it said “You are right, I apologize” in each of these three instances. Nonetheless it was still a big win for me and caught a lot of my blunders.
Claude 3.7 running in standard mode does not use its context window very effectively. I suppose I could have demanded that Claude “internally review (wait: think again)” for each serious error it initially thought it had encountered. I’ll try that next time. Exposure of chain of thought would help.
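For what it's worth, that kind of single-shot review is easy to script once the draft is plain text; a minimal sketch using the Anthropic Python SDK, where the model name and prompt wording are just placeholders and the whole draft is assumed to fit in the context window:

```python
# Minimal sketch: single-shot "list the significant errors" pass over a draft.
# Assumes ANTHROPIC_API_KEY is set; the model name is an example, not a recommendation.
import anthropic

def review_draft(paper_text: str) -> str:
    client = anthropic.Anthropic()
    prompt = (
        "This is a draft paper. Please generate a numbered list of significant "
        "errors, each with a short quoted text tag I can search for to locate "
        "and correct it.\n\n" + paper_text
    )
    message = client.messages.create(
        model="claude-3-7-sonnet-latest",
        max_tokens=2000,
        messages=[{"role": "user", "content": prompt}],
    )
    return message.content[0].text

if __name__ == "__main__":
    with open("draft.txt") as f:
        print(review_draft(f.read()))
```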
Video demo with human wife narrating it: https://www.youtube.com/watch?v=346pDfOYx0I
Cloudflare-fronted live site (hopefully that means it can withstand traffic): https://labs.sunami.ai/feed
Free Account Prezi Pitch: https://prezi.com/view/g2CZCqnn56NAKKbyO3P5/
Lawyers are not all bad, as I'm finding out.
We met some amazing human lawyers on our journey so far.
Detecting deviations from common patterns -- which are often pointed out via common patterns of review feedback -- as a hint that something might be a mistake: actually, I think that fits moderately well.
Are they accurate enough to use in bulk? Given their accuracy with code bugs, I'm inclined to say "probably not", except by people already knowledgeable in the content, who can generally reject false positives without a lot of effort.
All in all, I probably put in 10 hours of work, I found a bug that was about 10 years old, and the open-source community had to deal with only the final, useful report.
All my work could honestly be done instantaneously with better data harmonization & collection along with better engineering practices. Instead, it requires a lot of manual effort. I remember my professors talking about how they used to calculate linear regressions by hand back in the old days. Hopefully a lot of the data cleaning and study setup that is done now sounds similar to a set of future scientists who use AI tools to operate and check these basic programatic and statistical tasks.
This then becomes the first sanity check for any paper author.
This should save a lot of time and effort, improve the quality of papers, and root out at least some fraud.
Don't worry, many problems will remain :)
Especially those papers cited or promoted by well-known propagandists like Freedman of NYT, Eric Schmidt of Google or anyone on the take of George Soros' grants.
> AI tools are spotting errors in research papers: inside a growing movement (nature.com)
and
> Kill your Feeds – Stop letting algorithms dictate what you think (usher.dev)
so we shouldn't let the feed algorithms influence our thoughts, but also, AI tools need to tell us when we're wrong
Wouldn't a more direct system be one in which journals refused submissions if one of the authors had committed deliberate fraud in a previous paper?
On the other hand, I'm fully supportive of going through ALL of the rejected scientific papers to look for editorial bias, censorship, propaganda, etc.
As it stands, there's always a company (a juristic person) behind AIs; I haven't yet seen an independent AI.
Let's then try and see if we can uncover any "errors, inconsistencies, and flawed methods" on their website. The "status" is pure made-up garbage: there's no network traffic related to it that would actually allow it to show a real status. The "RECENT ERROR DETECTIONS" section lists a single paper from today, but the queue shown when you click "submit a paper" lists the last completed paper as the 21st of February. The front page tells us that it found some math issue in a paper titled "Waste tea as absorbent for removal of heavy metal present in contaminated water", but if we navigate to that paper[1] the math error suddenly disappears. Most of the comments are also worthless, talking about minor typographical issues or misspellings that do not matter, but of course they still categorize those as "errors".
It's the same garbage as every time with crypto people.
[1]: https://yesnoerror.com/doc/82cd4ea5-4e33-48e1-b517-5ea3e2c5f...
With software design, I find many mistakes where the AI says things that are incorrect because it parrots common blanket statements and ideologies without actually checking whether the statement applies in this case by looking at it from first principles... Once you take the discussion down to first principles, it quickly acknowledges its mistake, but you had to have that deep insight in order to take it there... Someone trying to learn from AI would not get this insight from it; instead they would be taught a dumbed-down, cartoonish, wordcel version of reality.
Like this style:
> Methodology check: The paper lacks a quantitative evaluation or comparison to ground truth data, relying on a purely qu...
They always seem to be edited to be simple formatting errors.
https://yesnoerror.com/doc/eb99aec0-a72a-45f7-bf2c-8cf2cbab1...
If they can't improve that, the signal-to-noise ratio will be too low and people will shut it off or ignore it.
Time is not free: cost people lots of time without them seeing value, and almost any project will fail.
In other words, I fear this is a leap in Gish Gallop technology.