I'm not sure I'm convinced of the benefit of lowering the barrier to entry to scientific publishing. The hard part always has been, and always will be, understanding the research context (what's been published before) and producing novel and interesting work (the underlying research). Connecting this together in a paper is indeed a challenge, and a skill that must be developed, but is really a minimal part of the process.
I'm not sure what the final state will be, but it seems we are going to find it increasingly difficult to find any real factual information on the internet going forward, particularly as AI starts ingesting its own generated fake content.
I don't doubt the AI companies will soon announce products that will claim to solve this very problem, generating turnkey submission reviews. Double-dipping is very profitable.
It appears LLM-parasitism isn't close to being done, and keeps finding new commons to spoil.
HN Search: curl AI slop - https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
So I was not amused about this announcement at all, however easy it may make my own life as an author (I'm pretty happy to do my own literature search, thank you very much).
Also remember, we have no guarantee that these tools will still exist tomorrow, all these AI companies are constantly pivoting and throwing a lot of things at the wall to see what sticks.
OpenAI chose not to build a serious product, as there is no integration with the ACM DL, the IEEE DL, SpringerNatureLink, the ACL Anthology, Wiley, Cambridge/Oxford/Harvard University Press etc. - only papers that are not peer reviewed (arXiv.org) are available/have been integrated. Expect a flood of BS your way.
When my students submit a piece of writing, I can ask them to orally defend their opus maximum (more and more often, ChatGPT's...); I can't do the same with anonymous authors.
I can get behind this. This assumes a tool will need to be made to help determine the 1% that isn't slop. At which point I assume we will have reinvented web search once more.
Has anyone looked at reviving PageRank?
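For reference, the core of PageRank is just a few lines of power iteration. Here's a toy sketch (the graph, damping value, and iteration count are illustrative assumptions, not any search engine's actual implementation):

```python
# Minimal PageRank via power iteration over a tiny hypothetical link graph.
def pagerank(links, damping=0.85, iters=50):
    """links: dict mapping each node to a list of outbound neighbors."""
    nodes = list(links)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1.0 - damping) / n for u in nodes}
        for u in nodes:
            out = links[u]
            if not out:  # dangling node: spread its rank evenly
                for v in nodes:
                    new[v] += damping * rank[u] / n
            else:
                for v in out:
                    new[v] += damping * rank[u] / len(out)
        rank = new
    return rank

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
scores = pagerank(graph)
# "c" ends up with the highest score, since both "a" and "b" link to it
```

The trust-propagation idea generalizes beyond hyperlinks; the open question is what the "links" would be for slop detection.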
I have heard from people here that Kagi can help remove slop from searches so I guess yeah.
Although I guess I am a DDG user myself, and I love using DDG because it's free, I can see how for some people price can be a non-issue and they might like Kagi more.
So Kagi / DDG (Duckduckgo) yeah.
DDG used to be meta-search on top of Yahoo, which doesn't exist anymore. What do Gabriel and co-workers use now?
Maybe you get reimbursed for half as long as there are no obvious hallucinations.
Plus, the time from submission to acceptance/rejection can be long. For cutting-edge science, you can't really afford to wait to hear back before submitting to another journal.
All this to say that spamming 1,000 journals with a submission is bad, but submitting to the journals in your field that are at least decent fits for your paper is good practice.
While well-intentioned, I think this is just gate-keeping. There are mountains of research that result in nothing interesting whatsoever (aside from learning about what doesn't work). And all of that is still valuable knowledge!
Maybe something like a "hierarchy/DAG? of trusted-peers", where groups like universities certify the relevance and correctness of papers by attaching their name and a global reputation score to it. When it's found that the paper is "undesirable" and doesn't pass a subsequent review, their reputation score deteriorates (with the penalty propagating along the whole review chain), in such a way that:
- the overall review model is distributed, hence scalable (everybody may play the certification game and build a reputation score while doing so)
- trusted/established institutions have an incentive to keep their global reputation score high and either apply a very high level of scrutiny to the review, or delegate to very reputable peers
- "bad actors" are immediately punished and universally recognized as such
- "bad groups" (such as departments consistently spamming with low-quality research) become clearly identified as such within the greater organisation (the university), which can encourage a mindset of quality above quantity
- "good actors within a bad group" are not penalised either, because they could circumvent their "bad group" on the global review market by having reputable institutions (or intermediaries) certify their good work
There are loopholes to consider, like a black market of reputation trading (I'll pay you generously to sacrifice a bit of your reputation to get this bad science published), but even that cannot pay off long-term in an open system where all transactions are visible.
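The penalty-propagation idea above can be sketched in a few lines. A toy model, where the penalty size, decay factor, and certifier names are purely illustrative assumptions:

```python
# Toy sketch of reputation penalties propagating along a certification chain.
def apply_penalty(chain, reputations, penalty=0.2, decay=0.5):
    """chain: certifiers ordered from the original reviewer up to the
    final endorser. The certifier closest to the bad paper takes the
    full hit; the hit shrinks geometrically as it propagates upward."""
    hit = penalty
    for certifier in chain:
        reputations[certifier] = max(0.0, reputations[certifier] - hit)
        hit *= decay
    return reputations

reps = {"lab": 1.0, "dept": 1.0, "university": 1.0}
apply_penalty(["lab", "dept", "university"], reps)
# lab loses 0.2, dept 0.1, university 0.05
```

Real-world weighting would obviously need to be tuned (and gamed-against), but the mechanism itself is simple to state.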
Incidentally, I think this may be a rare case where a blockchain makes some sense?
Anyway, how will universities check the papers? Someone must read the preprints, like the current reviewers. Someone must check the incoming preprints, find reviewers, and make the final decision, like the current editors. ...
Suppose you are an independent researcher writing a paper. Before submitting it for review to journals, you could hire a published author in that field to review it for you (independently of the journal), tell you whether it is submission-worthy, and help you improve it until it is. If they wanted, they could be listed as coauthor, and if they don't want that, at least you'd acknowledge their assistance in the paper.
Because I think there are two types of people who might write AI slop papers: (1) people who just don't care and want to throw everything at the wall and see what sticks; (2) people who genuinely desire to seriously contribute to the field, but don't know what they are doing. Hiring an advisor could help the second group of people.
Of course, I don't know how willing people would be to be hired to do this. Someone who was senior in the field might be too busy, might cost too much, or might worry about damage to their own reputation. But there are so many unemployed and underemployed academics out there...
For developers, academics, editors, etc., in any review-driven system the scarcity is around good human judgement, not text volume. AI doesn't remove that constraint, and arguably puts more of a spotlight on the ability to separate the shit from the quality.
Unless review itself becomes cheaper or better, this just shifts work further downstream while disguising the change as "efficiency".
In education, understanding is often best demonstrated not by restating text, but by presenting the same data in another representation and establishing the right analogies and isomorphisms, as in Explorable Explanations. [1]
This is still a good step in a direction of AI assisted research, but as you said, for the moment it creates as many problems as it solves.
> > who are looking to 'boost' their CV
Ultimately, this seems like a key root cause - misaligned incentives across a multi-party ecosystem. And as always, incentives tend to be deeply embedded and highly resistant to change.
These acts simply must have consequences so people stop doing them. You can use AI if you are doing it well, but if you are wasting everyone's time you should just be excluded from the discourse altogether.
The early years of LLMs (when they were good enough to correct grammar but not enough to generate entire slop papers) were an equalizer. We may end up here, but it would be unfortunate.
On the other hand, the world is now a different place as compared to when several prominent journals were founded (1869-1880 for Nature, Science, Elsevier). The tacit assumptions upon which they were founded might no longer hold in the future. The world is going to continue to change, and the publication process as it stands might need to adapt for it to be sustainable.
The whole process should be made more transparent and open from the start, rather than adding more gatekeeping. There ought to be openness and transparency throughout the entire research process, with auditability automatically baked in, rather than just at the time of publication. One man's opinion, anyway.
https://hn.algolia.com/?dateRange=pastYear&page=0&prefix=tru...
If you're not a Zotero user, I can't recommend it enough.
This is a space that probably needs substantial reform, much like grad school models in general (IMO).
On the other hand, Overleaf appears to be open source and at least partially self-hostable, so it’s possible some of these ideas or features will be adopted there over time. Alternatively, someone might eventually manage to move a more complete LaTeX toolchain into WASM.
[1] https://www.reddit.com/r/Crixet/comments/1ptj9k9/comment/nvh...
I do self-host Overleaf which is annoying but ultimately doable if you don't want to pay the $21/mo (!).
I do have to wonder for how long it will be free or even supported, though. On the one hand, remote LaTeX compiling gets expensive at scale. On the other hand, it's only a fraction of a drop in the bucket compared to OpenAI's total compute needs. But I'm hesitant to use it because I'm not convinced it'll still be around in a couple of years.
a lot of academics aren't super technical and don't want to deal with git workflows or syncing local environments. they just want to write their fuckin' paper (WTFP).
overleaf lets the whole research team work together without anyone needing to learn version control or debug their local texlive installation.
also nice for quick edits from any machine without setting anything up. the "just install it locally" advice assumes everyone's comfortable with that, but plenty of researchers treat computers as appliances lol.
The visual editor in Overleaf isn't true WYSIWYG, but it's close enough. It feels like working in a word processor, not in a code editor. And the interface overall feels simple and modern.
(And that's just for solo usage -- it's really the collaborative stuff that turns into a game-changer.)
Overleaf ensures that everyone looks at the same version of the document and processes the document with the same set of packages and options.
Any plans of having typst integrated anytime soon?
They’re quite open about Prism being built on top of Crixet.
I would note that Overleaf's main value is as a collaborative authoring tool and not a great latex experience, but science is ideally a collaborative effort.
I think I would only switch from Overleaf if I was writing a textbook or something similarly involved.
@vicapow replied to keep the Dropbox parallel alive
You're right that something like Cursor can work if you're familiar with all the requisite tooling (git, installing cursor, installing latex workshop, knowing how it all works) that most researchers don't want to and really shouldn't have to figure out how to work for their specific workflows.
EDIT: as corrected by comment, Prisma is not Vercel, but ©2026 Prisma Data, Inc. -- curiosity still persists(?)
The earlier LLMs were interesting, in that their sycophantic nature eagerly agreed, often lacking criticality.
After reducing said sycophancy, I’ve found that certain LLMs are much more unwilling (especially the reasoning models) to move past the “known” science[1].
I’m curious to see how/if we can strike the right balance with an LLM focused on scientific exploration.
[0]Sediment lubrication due to organic material in specific subduction zones, potential algorithmic basis for colony collapse disorder, potential to evolve anthropomorphic kiwis, etc.
[1]Caveat, it’s very easy for me to tell when an LLM is “off-the-rails” on a topic I know a lot about, much less so, and much more dangerous, for these “tests” where I’m certainly no expert.
Typst feels more like the future: https://typst.app/
The problem is that so many journals require certain LaTeX templates so Typst often isn't an option at all. It's about network effects, and journals don't want to change their entire toolchain.
The main feature that's important is collaborative editing (like online Word or Google Docs). The second one would be a good reference manager.
I haven't tried it yet but Typst seems like a promising replacement: https://typst.app/
EDIT: Fixed :)
"Eight Leading Interpretations of Quantum Mechanics - A Comparative Survey (2026)"
https://prism.openai.com/?u=de087658-2d28-4dd2-9bc5-c43abb83...
It also offers LaTeX workspaces
see video: https://www.youtube.com/watch?v=feWZByHoViw
I can't wait
"Sure, yes, it comes up all the time in circles that talk about AI all the time, and those are the only circles worth joining."
"Well, what if we made a product entirely focused on having AI generate papers? Like, every step of the paper writing, we give the AI lots of chances to do stuff. Drafting, revisions, preparing to publish, all of it."
"I dunno, does anybody want that?"
"Who cares, we're fucked in about two years if we don't figure out a way to beat the competitors. They have actual profits, they can ride out AI as long as they want."
"Yeah, I guess you're right, let's do your scientific paper generation thing."
Past that, a frontier LLM can do a lot of critiquing, a good amount of experiment design, a check on statistical significance/power claims, kibitz on methodology... likely suggest experiments to verify or disprove. These all seem pretty useful functions to provide to a group of scientists to me.
There was an idea of OpenAI charging commission or royalties on new discoveries.
What kind of researcher wants to potentially lose, or get caught up in legal issues because of a free ChatGPT wrapper, or am I missing something?
Maybe it's cynical, but how does the old saying go? If the service is free, you are the product.
Perhaps, the goal is to hoover up research before it goes public. Then they use it for training data. With enough training data they'll be able to rapidly identify breakthroughs and use that to pick stocks or send their agents to wrap up the IP or something.
Even if y'all don't train off it, he'll find some other way.
“In one example, [Friar] pointed to drug discovery: if a pharma partner used OpenAI technology to help develop a breakthrough medicine, [OpenAI] could take a licensed portion of the drug's sales”
https://www.businessinsider.com/openai-cfo-sarah-friar-futur...
I'm sorry, but publishing is hard, and it should be hard. There is a work function that requires effort to write a paper. We've been dealing with low quality mass-produced papers from certain regions of the planet for decades (which, it appears, are now producing decent papers too).
All this AI tooling will do is lower the effort to the point that complete automated nonsense will now flood in and it will need to be read and filtered by humans. This is already challenging.
Looking elsewhere in society, AI tools are already being used to produce scams and phishing attacks more effective than ever before.
Whole new arenas of abuse are now rife, with the cost of producing fake pornography of real people (what should be considered sexual abuse crime) at mere cents.
We live in a little microcosm where we can see the benefits of AI because tech jobs are mostly about automation and making the impossible (or expensive) possible (or cheap).
I wish more people would talk about the societal issues AI is introducing. My worthless opinion is that prism is not a good thing.
I'm not in favor of letting AI do my thinking for me. Time will tell where Prism sits.
Look at how much BS flooded psychology but had pretty ideas about p values and proper use of affect vs effect. None of that mattered.
Lots of players in this space.
As other top-level posters have indicated, the review portion of this is the limiting factor:
unless journal reviewers decide to utilize an entirely automated review process, they're not gonna be able to keep up with what will increasingly be the most and best research coming out of any lab.
So whoever figures out the automated reviewer that can actually tell fact from fiction, is going to win this game.
I expect over the longest period, that’s probably not going to be throwing more humans at the problem, but agreeing on some kind of constraint around autonomous reviewers.
If not that then labs will also produce products and science will stop being in public and the only artifacts will be whatever is produced in the market
I've noticed this already with Claude. Claude is so good at code and technical questions... but frankly it's unimpressive at nearly anything else I have asked it to do. Anthropic would probably be better off putting all of their eggs in that one basket that they are good at.
All the more reason that the quest for AGI is a pipe dream. The future is going to be very divergent AI/LLM applications - each marketed and developed around a specific target audience, and priced respectively according to value.
This is all pageantry.
"I know nothing but had an idea and did some work. I have no clue whether this question has been explored or settled one way or another. But here's my new paper claiming to be an incremental improvement on... whatever the previous state of understanding was. I wouldn't know, I haven't read up on it yet. Too many papers to write."
We removed the authorship of a former co-author on a paper I'm on because his workflow was essentially this--with AI-generated text--and a not-insignificant amount of straight-up plagiarism.
E.g. “cite that paper from John Doe on lorem ipsum, but make sure it’s the 2022 update article that I cited in one of my other recent articles, not the original article”
Didn't even open a single one of the papers to look at them! Just said that one is not relevant without even opening it.
At the end of the day, it's all about the incentives. Can we have a world where we incentivize finding the truth rather than just publishing and getting citations?
I thought this was introduced by the NSA some time ago.
(re the decline of scientific integrity / signal-to-noise ratio in science)
Uhm ... no.
I think we need to put an end to AI as it is currently used (not all of it but most of it).
Was this not already possible in the web ui or through a vscode-like editor?
(See also: today’s WhatsApp whistleblower lawsuit.)
Perhaps, like the original PRISM programme, behind the door is a massive data harvesting operation.
https://futurism.com/the-byte/snowden-openai-calculated-betr...
Friendly reminder?
Seems like they have only announced products since and no new model trained from scratch. Are they still having pre-training issues?