280 points by meetpateltech 5 hours ago | 51 comments
  • asveikau a minute ago
    Good idea to name this after the spy program that Snowden talked about.
  • JBorrow 3 hours ago
    From my perspective as a journal editor and a reviewer, these kinds of tools cause many more problems than they actually solve. They make the 'barrier to entry' for submitting vibed, semi-plausible journal articles much lower, which I understand some may see as a benefit. The drawback is that scientific editors and reviewers provide those services for free, as a community benefit. One example was someone using their undergraduate affiliation (in accounting) to submit a paper on cosmology, entirely vibe-coded and vibe-written. This just wastes our (already stretched) time. A significant fraction of submissions are now vibe-written and come from folks who are looking to 'boost' their CV (even having a 'submitted' publication is seen as a benefit), which is really not the point of these journals at all.

    I'm not sure I'm convinced of the benefit of lowering the barrier to entry to scientific publishing. The hard part always has been, and always will be, understanding the research context (what's been published before) and producing novel and interesting work (the underlying research). Connecting this together in a paper is indeed a challenge, and a skill that must be developed, but is really a minimal part of the process.

    • SchemaLoad 40 minutes ago
      GenAI largely seems like a DDoS on free resources. The effort to review this stuff is now massively more than the effort to "create" it, so really what is the point of even submitting it? The reviewer could have generated it themselves. I'm seeing it in software development, where coworkers are submitting massive PRs they generated but hardly read or tested, shifting the real work to the PR review.

      I'm not sure what the final state will be here, but it seems we are going to find it increasingly difficult to find any real factual information on the internet going forward, particularly as AI starts ingesting its own generated fake content.

      • cryzinger 29 minutes ago
        More relevant than ever:

        > The amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it.

        https://en.wikipedia.org/wiki/Brandolini%27s_law

        • monkaiju 10 minutes ago
          Wow, the 3 comments from OC to here are all bangers; they combine into a really nice argument against these toys.
      • overfeed 28 minutes ago
        > The effort to review this stuff is now massively more than the effort to "create" it

        I don't doubt the AI companies will soon announce products that will claim to solve this very problem, generating turnkey submission reviews. Double-dipping is very profitable.

        It appears LLM-parasitism isn't close to being done, and keeps finding new commons to spoil.

      • Spivak a minute ago
        In some ways it might be a good thing that shorthand signals of quality are being destroyed because it forces all of us to meaningfully engage with the work. No more LGTM +1 when every PR looks good.
    • jll29 12 minutes ago
      I totally agree. I spend my whole day from getting up to going to bed (not before reading HN!) on reviews for a conference I'm co-organizing later this year.

      So I was not amused about this announcement at all, however easy it may make my own life as an author (I'm pretty happy to do my own literature search, thank you very much).

      Also remember, we have no guarantee that these tools will still exist tomorrow; all these AI companies are constantly pivoting and throwing a lot of things at the wall to see what sticks.

      OpenAI chose not to build a serious product, as there is no integration with the ACM DL, the IEEE DL, SpringerNatureLink, the ACL Anthology, Wiley, Cambridge/Oxford/Harvard University Press etc. - only papers that are not peer reviewed (arXiv.org) are available/have been integrated. Expect a flood of BS your way.

      When my students submit a piece of writing, I can ask them to orally defend their opus maximum (more and more often, ChatGPT's...); I can't do the same with anonymous authors.

    • InsideOutSanta 2 hours ago
      I'm scared that this type of thing is going to do to science journals what AI-generated bug reports are doing to bug bounties. We're truly living in a post-scarcity society now, except that the thing we have an abundance of is garbage, and it's drowning out everything of value.
      • willturman an hour ago
        In a corollary to Sturgeon's Law, I'd propose Altman's Law: "In the Age of AI, 99.999...% of everything is crap"
        • SimianSci 23 minutes ago
          Altman's Law: 99% of all content is slop

          I can get behind this. This assumes a tool will need to be made to help determine the 1% that isn't slop. At which point I assume we will have reinvented web search once more.

          Has anyone looked at reviving PageRank?
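
          (For reference, the core of PageRank is just a short power iteration. A minimal sketch, assuming a dense 0/1 link matrix where adj[i, j] = 1 means page j links to page i, and with only a crude fix for dangling pages:)

            import numpy as np

            def pagerank(adj, d=0.85, iters=50):
                # Column-normalize so each page splits its vote across its out-links.
                n = adj.shape[0]
                out = adj.sum(axis=0)
                out[out == 0] = 1  # crude handling of dangling pages (no out-links)
                M = adj / out
                r = np.full(n, 1.0 / n)  # start from a uniform rank vector
                for _ in range(iters):
                    # Damped power iteration: random jump plus link-following.
                    r = (1 - d) / n + d * (M @ r)
                return r

            # Page 0 is linked by pages 1 and 2; page 1 is linked by page 2.
            adj = np.array([[0., 1., 1.],
                            [0., 0., 1.],
                            [0., 0., 0.]])
            print(pagerank(adj))  # page 0 ranks highest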

          • Imustaskforhelp 19 minutes ago
            I mean, Kagi is probably the PageRank revival we are talking about.

            I have heard from people here that Kagi can help remove slop from searches, so I guess yeah.

            Although I'm a DDG user myself, and I love using DDG because it's free, I can see how for some people price is a non-issue and they might like Kagi more.

            So Kagi / DDG (DuckDuckGo), yeah.

            • jll29 8 minutes ago
              Has anyone kept an eye on who uses which back-end?

              DDG used to be meta-search on top of Yahoo, which doesn't exist anymore. What do Gabriel and co-workers use now?

      • jll29 10 minutes ago
        Soon, poor people will talk to an LLM; rich people will get human medical care.
      • techblueberry 2 hours ago
        There's this thing where all the thought leaders in software engineering ask "What will change about building a business when code is free?" and while there are some cool things, I've also thought it could have some pretty serious negative externalities. I think this question is going to become big everywhere - business, science, etc. - which is: OK, you have all this stuff, but is it valuable? Which of it actually takes away value?
      • jplusequalt an hour ago
        Digital pollution.
      • jcranmer 2 hours ago
        The first casualty of LLMs was the slush pile--the unsolicited submission pile for publishers. We've since seen bug bounty programs and open source repositories buckle under the load of AI-generated contributions. And all of these have the same underlying issue: the LLM makes it easy to produce things that don't immediately look like garbage, which makes the volume of submissions skyrocket while the time-to-reject also goes up slightly, because it passes the first (but only the first) absolute-garbage filter.
    • bloppe 3 hours ago
      I wonder if there's a way to tax the frivolous submissions. There could be a submission fee that would be fully reimbursed iff the submission is actually accepted for publication. If you're confident in your paper, you can think of it as a deposit. If you're spamming journals, you're just going to pay for the wasted time.

      Maybe you get reimbursed for half as long as there are no obvious hallucinations.

      • JBorrow 2 hours ago
        The journal that I'm an editor for is 'diamond open access', which means we charge no submission fees and no publication fees, and publish open access. This model is really important in allowing legitimate submissions from a wide range of contributors (e.g. PhD students in countries with low levels of science funding). Publishing in a traditional journal usually costs around $3000.
        • NewsaHackO 2 hours ago
          Those journals are really good for getting practice in writing and submitting research papers, but sometimes they are already seen as less impactful because of the quality of accepted papers. At least where I am at, I don't think the advent of AI writing is going to affect how they are seen.
        • methuselah_in 2 hours ago
          Welcome to the new world of fake stuff, I guess.
      • s0rce 3 hours ago
        That would be tricky; I often submitted to multiple high-impact journals, going down the list until someone accepted. You try to ballpark where you can go, but it can be worth aiming high. Maybe this isn't a problem and there should be payment for the effort to screen the paper, but then I would expect the reviewers to be paid for their time.
        • noitpmeder 2 hours ago
          I mean, your methodology also sounds suspect. You're just going down a list until it sticks. You don't care where it ends up (within reason, I'm sure), just as long as it is accepted and published somewhere (again, within reason).
          • jll29 5 minutes ago
            It's standard practice, nothing suspect about their approach - and you won't go lower and lower and lower still because at some point you'll be tired of re-formatting, or a doctoral candidate's funding will be used up, or the topic has "expired" (= is overtaken by reality/competition).
          • antasvara2 5 minutes ago
            No different from applying to jobs. Much like companies, there are a variety of journals with varying levels of prestige or that fit your paper better/worse. You don't know in advance which journals will respond to your paper, which ones just received submissions similar to yours, etc.

            Plus, the time from submission to acceptance/rejection can be long. For cutting-edge science, you can't really afford to wait to hear back before applying to another journal.

            All this to say that spamming 1,000 journals with a submission is bad, but submitting to the journals in your field that are at least decent fits for your paper is good practice.

          • niek_pas 2 hours ago
            Scientists are incentivized to publish in as high-ranking a journal as possible. You’re always going to have at least a few journals where your paper is a good fit, so aiming for the most ambitious journal first just makes sense.
          • mathematicaster an hour ago
            This is effectively standard across the board.
      • throwaway85825 3 hours ago
        Pay to publish journals already exist.
        • bloppe 3 hours ago
          This is sorta the opposite of pay to publish. It's pay to be rejected.
        • olivia-banks 3 hours ago
          I would think it would act more like a security deposit, and you'd get back 100%, no profit for the journal (at least in that respect).
      • pixelready 2 hours ago
        I’d worry about creating a perverse incentive to farm rejected submissions. Similar to those renter application fee scams.
      • petcat 2 hours ago
        > There could be a submission fee that would be fully reimbursed iff the submission is actually accepted for publication.

        While well-intentioned, I think this is just gate-keeping. There are mountains of research that result in nothing interesting whatsoever (aside from learning about what doesn't work). And all of that is still valuable knowledge!

        • ezst an hour ago
          Sure, but now we can't even assume that such research is submitted in good faith anymore. There just seems to be no perfect solution.

          Maybe something like a "hierarchy/DAG of trusted peers", where groups like universities certify the relevance and correctness of papers by attaching their name and a global reputation score to them. When a paper is found to be "undesirable" and doesn't pass a subsequent review, the certifier's reputation score deteriorates (with the penalty propagating along the whole review chain, as sketched below), in such a way that:

          - the overall review model is distributed, hence scalable (everybody may play the certification game and build a reputation score while doing so)
          - trusted/established institutions have an incentive to keep their global reputation score high and either apply a very high level of scrutiny to the review, or delegate to very reputable peers
          - "bad actors" are immediately punished and universally recognized as such
          - "bad groups" (such as departments consistently spamming with low-quality research) become clearly identified as such within the greater organisation (the university), which can encourage a mindset of quality above quantity
          - "good actors within a bad group" are not penalised either, because they could circumvent their "bad group" on the global review market by having reputable institutions (or intermediaries) certify their good work

          There are loopholes to consider, like a black market of reputation trading (I'll pay you generously to sacrifice a bit of your reputation to get this bad science published), but even that cannot pay off long-term in an open system where all transactions are visible.

          Incidentally, I think this may be a rare case where a blockchain makes some sense?
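
          A minimal sketch of the penalty-propagation part (the class, the names, and the linear decaying penalty are illustrative assumptions, not a worked-out design):

            from dataclasses import dataclass
            from typing import Optional

            @dataclass
            class Endorser:
                name: str
                reputation: float = 1.0
                endorsed_by: Optional["Endorser"] = None  # the peer who vouched for this one

            def penalize(endorser, penalty=0.3, decay=0.5):
                # A bad certification costs the certifier directly, and an
                # attenuated share propagates up the chain of peers who vouched.
                node, p = endorser, penalty
                while node is not None:
                    node.reputation = max(0.0, node.reputation - p)
                    node, p = node.endorsed_by, p * decay

            uni = Endorser("University")
            dept = Endorser("Department", endorsed_by=uni)
            group = Endorser("Research group", endorsed_by=dept)
            penalize(group)  # a retraction hurts the group most, the university least
            print(group.reputation, dept.reputation, uni.reputation)  # 0.7 0.85 0.925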

          • gus_massa 40 minutes ago
            This idea looks very similar to journals! Each journal has a reputation; if they publish too much crap, the crap is not cited and the impact factor decreases. They also have an informal reputation, because the impact index has problems too.

            Anyway, how will universities check the papers? Someone must read the preprints, like the current reviewers. Someone must check the incoming preprints, find reviewers and make the final decision, like the current editors. ...

          • amitav1 26 minutes ago
            How would this work for independent researchers?

            (no snark)

      • utilize1808 2 hours ago
        Better yet, make a "polymarket" for papers where people can bet on which papers will make it, and rely on "expertise arbitrage" to punish spam.
        • ezst 2 hours ago
          Doesn't stop the flood, i.e. the unfair asymmetry between the effort to produce vs. effort to review.
      • mathematicaster an hour ago
        Pay to review is common in Econ and Finance.
        • skissane 29 minutes ago
          Variation I thought of on pay-to-review:

          Suppose you are an independent researcher writing a paper. Before submitting it for review to journals, you could hire a published author in that field to review it for you (independently of the journal), tell you whether it is submission-worthy, and help you improve it to the point where it is. If they wanted, they could be listed as coauthor, and if they don't want that, at least you'd acknowledge their assistance in the paper.

          Because I think there are two types of people who might write AI slop papers: (1) people who just don't care and want to throw everything at the wall and see what sticks; (2) people who genuinely desire to seriously contribute to the field, but don't know what they are doing. Hiring an advisor could help the second group of people.

          Of course, I don't know how willing people would be to be hired to do this. Someone who was senior in the field might be too busy, might cost too much, or might worry about damage to their own reputation. But there are so many unemployed and underemployed academics out there...

    • Rperry2174 2 hours ago
      This keeps repeating in different domains: we lower the cost of producing artifacts and the real bottleneck is evaluating them.

      For developers, academics, editors, etc... in any review-driven system, the scarcity is around good human judgement, not text volume. AI doesn't remove that constraint, and arguably puts more of a spotlight on the ability to separate the shit from the quality.

      Unless review itself becomes cheaper or better, this just shifts work further downstream while disguising the change as "efficiency".

      • SchemaLoad 38 minutes ago
        This has been discussed previously as "workslop", where you produce something that looks at surface level like high quality work, but just shifts the burden to the receiver of the workslop to review and fix.
      • vitalnodo 2 hours ago
        This fits into the broader evolution of the visualization market. As data grows, visualization becomes as important as processing. This applies not only to applications, but also to relating texts through ideas close to transclusion in Ted Nelson’s Xanadu. [0]

        In education, understanding is often best demonstrated not by restating text, but by presenting the same data in another representation and establishing the right analogies and isomorphisms, as in Explorable Explanations. [1]

        [0] https://news.ycombinator.com/item?id=40295661

        [1] https://news.ycombinator.com/item?id=22368323

    • jjcm an hour ago
      The comparison to make here is that a journal submission is effectively a pull request to humanity's scientific knowledge base. That PR has to be reviewed. We're already seeing the effects of this with open source code - the number of PR submissions has skyrocketed, overwhelming maintainers.

      This is still a good step in a direction of AI assisted research, but as you said, for the moment it creates as many problems as it solves.

    • mrandish 2 hours ago
      As a non-scientist (but long-time science fan and user), I feel your pain with what appears to be a layered, intractable problem.

      > who are looking to 'boost' their CV

      Ultimately, this seems like a key root cause - misaligned incentives across a multi-party ecosystem. And as always, incentives tend to be deeply embedded and highly resistant to change.

    • keithnz 24 minutes ago
      Wouldn't AI actually be good for filtering, given it's going to be a lot better at knowing what has been published? It also seems possible that it could actually work out which papers have novel ideas, or at least come up with some kind of likelihood score.
    • jascha_eng an hour ago
      Why not filter out papers from people without credentials? And also publicly call them out and register them somewhere, so that their submission rights can be revoked by other journals and conferences after "vibe writing".

      These acts must have consequences so people stop doing them. You can use AI if you are doing it well, but if you are wasting everyone's time you should just be excluded from the discourse altogether.

    • boplicity 2 hours ago
      Is it at all possible to have a policy that bans the submission of any AI written text, or text that was written with the assistance of AI tools? I understand that this would, by necessity, be under an "honor system" but maybe it could help weed out papers not worth the time?
      • currymj 34 minutes ago
        this is probably a net negative, as there are many very good scientists without very strong English skills.

        the early years of LLMs (when they were good enough to correct grammar but not enough to generate entire slop papers) were an equalizer. we may end up there, but it would be unfortunate.

    • maxkfranz 2 hours ago
      I generally agree.

      On the other hand, the world is now a different place as compared to when several prominent journals were founded (1869-1880 for Nature, Science, Elsevier). The tacit assumptions upon which they were founded might no longer hold in the future. The world is going to continue to change, and the publication process as it stands might need to adapt for it to be sustainable.

      • ezst 2 hours ago
        As I understand it, the problem isn't publication or how it's changing over time; it's about the challenge of producing new science when the existing body is muddied with plausible lies. That warrants a new process by which to assess the inherent quality of a paper, but even if it comes as globally distributed, the cheats have a huge advantage, considering the asymmetry between the effort to vibe-produce vs. the tedious human review.
        • maxkfranz an hour ago
          That’s a good point. On the other hand, we’ve had that problem long before AI. You already need to mentally filter papers based on your assessment of the reputability of the authors.

          The whole process should be made more transparent and open from the start, rather than adding more gatekeeping. There ought to be openness and transparency throughout the entire research process, with auditability automatically baked in, rather than just at the time of publication. One man's opinion, anyway.

    • usefulposter 3 hours ago
      Completely agree. Look at the independent research that gets submitted under "Show HN" nowadays:

      https://hn.algolia.com/?dateRange=pastYear&page=0&prefix=tru...

      https://hn.algolia.com/?dateRange=pastYear&page=0&prefix=tru...

    • lupsasca 3 hours ago
      I am very sympathetic to your point of view, but let me offer another perspective. First off, you can already vibe-write slop papers with AI, even in LaTeX format--tools like Prism are not needed for that. On the other hand, it can really help researchers improve the quality of their papers. I'm someone who collaborates with many students and postdocs. My time is limited and I spend a lot of it on LaTeX drudgery that can and should be automated away, so I'm excited for Prism to save time on writing, proofreading, making TikZ diagrams, grabbing references, etc.
      • CJefferson an hour ago
        What the heck is the point of a reference you never read?
        • lupsasca an hour ago
          By "grabbing references" I meant queries of the type "add paper [bla] to the bibliography" -- that seems useful to me!
          • nestes 37 minutes ago
            Focusing in on "grabbing references", it's as easy as drag-and-drop if you use Zotero. It can copy/paste references in BibTeX format. You can even customize it through the BetterBibTeX extension.

            If you're not a Zotero user, I can't recommend it enough.
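
            (For anyone who hasn't used it: what Zotero exports is just a plain-text BibTeX entry you paste into your .bib file. A hypothetical entry, with the key and fields as the exporter might generate them:)

              @article{doe2022lorem,
                author  = {Doe, John and Roe, Jane},
                title   = {An Updated Study of Lorem Ipsum},
                journal = {Journal of Examples},
                year    = {2022},
                volume  = {42},
                pages   = {101--117}
              }

            Then \cite{doe2022lorem} anywhere in the document pulls it in.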

      • noitpmeder 2 hours ago
        AI generating references seems like a hop away from absolute unverifiable trash.
    • SecretDreams 2 hours ago
      I appreciate and sympathize with this take. I'll just note that, in general, journal publications have gone considerably downhill over the last decade, even before the advent of AI. Frequency has gone up, quality has gone down, and actually checking that everything in an article is valid gets quite challenging as frequency rises.

      This is a space that probably needs substantial reform, much like grad school models in general (IMO).

  • vitalnodo 4 hours ago
    Previously, this existed as crixet.com [0]. At some point it used WASM for client-side compilation, and later transitioned to server-side rendering [1][2]. It now appears that there will be no option to disable AI [3]. I hope the core features remain available and won’t be artificially restricted. Compared to Overleaf, there were fewer service limitations: it was possible to compile more complex documents, share projects more freely, and even do so without registration.

    On the other hand, Overleaf appears to be open source and at least partially self-hostable, so it’s possible some of these ideas or features will be adopted there over time. Alternatively, someone might eventually manage to move a more complete LaTeX toolchain into WASM.

    [0] https://crixet.com

    [1] https://www.reddit.com/r/Crixet/comments/1ptj9k9/comment/nvh...

    [2] https://news.ycombinator.com/item?id=42009254

    [3] https://news.ycombinator.com/item?id=46394937

    • crazygringo 3 hours ago
      I'm curious how it compares to Overleaf in terms of features? Putting aside the AI aspect entirely, I'm simply curious if this is a viable Overleaf competitor -- especially since it's free.

      I do self-host Overleaf which is annoying but ultimately doable if you don't want to pay the $21/mo (!).

      I do have to wonder for how long it will be free or even supported, though. On the one hand, remote LaTeX compiling gets expensive at scale. On the other hand, it's only a fraction of a drop in the bucket compared to OpenAI's total compute needs. But I'm hesitant to use it because I'm not convinced it'll still be around in a couple of years.

      • efficax 3 hours ago
        Overleaf is a little curious to me. What's the point? Just install LaTeX. Claude is very good at manipulating LaTeX documents and I've found it effective at fixing up layouts for me.
        • radioactivist 2 hours ago
          In my circles the killer features of Overleaf are the collaborative ones (easy sharing, multi-user editing with track changes/comments). Academic writing in my community basically went from emailed draft-new-FINAL-v4.tex files (or a shared folder full of those files) to basically people just dumping things on Overleaf fairly quickly.
        • bhadass 2 hours ago
          collaboration is the killer feature tbh. overleaf is basically google docs meets latex.. you can have multiple coauthors editing simultaneously, leave comments, see revision history, etc.

          a lot of academics aren't super technical and don't want to deal with git workflows or syncing local environments. they just want to write their fuckin' paper (WTFP).

          overleaf lets the whole research team work together without anyone needing to learn version control or debug their local texlive installation.

          also nice for quick edits from any machine without setting anything up. the "just install it locally" advice assumes everyone's comfortable with that, but plenty of researchers treat computers as appliances lol.

        • crazygringo 2 hours ago
          I can code in monospace (of course) but I just can't write in monospace markup. I need something approaching WYSIWYG. It's just how my brain works -- I need the italics to look like italics, I need the footnote text to not interrupt the middle of the paragraph.

          The visual editor in Overleaf isn't true WYSIWYG, but it's close enough. It feels like working in a word processor, not in a code editor. And the interface overall feels simple and modern.

          (And that's just for solo usage -- it's really the collaborative stuff that turns into a game-changer.)

        • jdranczewski 26 minutes ago
          To add to the points raised by others, "just install LaTeX" is not imo a very strong argument. I prefer working in a local environment, but many of my colleagues much prefer a web app that "just works" to figuring out what MiKTeX is.
        • warkdarrior 2 hours ago
          Collaboration is at best rocky when people have different versions of LaTeX packages installed. Also, merging changes from multiple people in git is a pain when dealing with scientific, nuanced text.

          Overleaf ensures that everyone looks at the same version of the document and processes the document with the same set of packages and options.

    • vicapow 3 hours ago
      The deeper I got, the more I realized really supporting the entire LaTeX toolchain in WASM would mean simulating an entire linux distribution :( We wanted to support Beamer, LuaLaTeX, mobile (wasn't working with WASM because of resource limits), etc.
      • seazoning 2 hours ago
        We had been building literally the same thing for the last 8 months along with a great browsing environment over arxiv -- might just have to sunset it

        Any plans of having typst integrated anytime soon?

        • vicapow 40 minutes ago
          I'm not against Typst. I think its integration would be a lot easier and more straightforward; I just don't know if it's really that popular yet in academia.
    • songodongo 3 hours ago
      So this is the product of an acquisition?
      • vitalnodo 3 hours ago
        > Prism builds on the foundation of Crixet, a cloud-based LaTeX platform that OpenAI acquired and has since evolved into Prism as a unified product. This allowed us to start with a strong base of a mature writing and collaboration environment, and integrate AI in a way that fits naturally into scientific workflows.

        They’re quite open about Prism being built on top of Crixet.

    • doctorpangloss 34 minutes ago
      It seems bad for OpenAI to make this about LaTeX documents, which will now be visually associated with AI slop. The opposite of what anyone wants, really. Nobody wants you to know they used a chatbot!
      • amitav1 23 minutes ago
        Am I missing something? LaTeX is associated with slop now?
  • bmaranville 20 minutes ago
    Having a chatbot that can natively "speak" latex seems like it might be useful to scientists that already use it exclusively for their work. Writing papers is incredibly time-consuming for a lot of reasons, and having a helper to make quick (non-substantive) edits could be great. Of course, that's not how people will use it...

    I would note that Overleaf's main value is as a collaborative authoring tool, not as a great LaTeX experience, but science is ideally a collaborative effort.

  • chaosprint 4 minutes ago
    As a researcher who has to use LaTeX, I used to use Overleaf, but lately I've been configuring it locally in VS Code. The configuration process on Mac is very simple. Considering there are so many free LLMs available now, I still won't subscribe to ChatGPT.
  • DominikPeters 3 hours ago
    This seems like a very basic overleaf alternative with few of its features, plus a shallow ChatGPT wrapper. Certainly can’t compete with using VS Code or TeXstudio locally, collaborating through GitHub, and getting AI assistance from Claude Code or Codex.
    • qbit42 34 minutes ago
      Loads of researchers have only used LaTeX via Overleaf and even more primarily edit LaTeX using Overleaf, for better or worse. It really simplifies collaborative editing and the version history is good enough (not git level, but most people weren't using full git functionality). I just find that there are not that many features I need when paper writing - the main bottlenecks are coming up with the content and collaborating, with Overleaf simplifying the latter. It also removes a class of bugs where different collaborators had slightly different TeX setups.

      I think I would only switch from Overleaf if I was writing a textbook or something similarly involved.

    • mturmon 26 minutes ago
      Getting close to the "why Dropbox when you can rsync" mistake (https://news.ycombinator.com/item?id=9224)

      @vicapow replied to keep the Dropbox parallel alive

    • vicapow 3 hours ago
      I could see it seeming that way because the UI is quite minimalist, but the AI capabilities are very extensive, imo, if you really play with it.

      You're right that something like Cursor can work if you're familiar with all the requisite tooling (git, installing Cursor, installing LaTeX Workshop, knowing how it all works), but most researchers don't want to, and really shouldn't have to, figure out how to make all that work for their specific workflows.

    • jstummbillig 3 hours ago
      Accessibility does matter
  • flockonus 28 minutes ago
    Curious in terms of trademark: could it infringe on Vercel's Prisma (a very popular ORM/framework in Node.js)?

    EDIT: as corrected by a comment, Prisma is not Vercel's, but ©2026 Prisma Data, Inc. -- the curiosity still persists(?)

  • reassess_blind an hour ago
    Do you think they used an em-dash in the opening sentence because they’re trying to normalise the AI’s writing style, or…
    • torginus an hour ago
      I haven't used MS Word in quite a while, but I distinctly remember it changed minus signs to em dashes.
    • reed1234 an hour ago
      Probably used their product to write it
    • flumpcakes an hour ago
      LaTeX makes writing em dashes very easy, to the point that I would use them all the time in my academic writing. It's a shame that perfectly good typography is now a sign of slop/fraud.
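
      (In LaTeX source they're just hyphens; a quick sketch:)

        % three hyphens give an em dash, two an en dash, one a plain hyphen
        Good typography---like this aside---is one keystroke away.
        See pages 10--12 for the details.
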
    • exyi an hour ago
      ... or they taught GPT to use em-dashes because of their love for em-dashes :)
  • beklein an hour ago
    The Latent Space podcast just released a relevant episode today where they interviewed Kevin Weil and Victor Powell from, now, OpenAI, with some demos, background and context, and a Q&A. The YouTube link is here: https://www.youtube.com/watch?v=W2cBTVr8nxU
    • vicapow 39 minutes ago
      Hope you like it :D I'm here if you have questions, too
  • jumploops 3 hours ago
    I’ve been “testing” LLM willingness to explore novel ideas/hypotheses for a few random topics[0].

    The earlier LLMs were interesting, in that their sycophantic nature eagerly agreed, often lacking criticality.

    After reducing said sycophancy, I’ve found that certain LLMs are much more unwilling (especially the reasoning models) to move past the “known” science[1].

    I’m curious to see how/if we can strike the right balance with an LLM focused on scientific exploration.

    [0] Sediment lubrication due to organic material in specific subduction zones, a potential algorithmic basis for colony collapse disorder, the potential to evolve anthropomorphic kiwis, etc.

    [1] Caveat: it's very easy for me to tell when an LLM is "off the rails" on a topic I know a lot about; much less so, and much more dangerous, for these "tests" where I'm certainly no expert.

  • markbao 3 hours ago
    Not an academic, but I used LaTeX for years and it doesn't feel like what the future of publishing should use. It's finicky and takes so much markup to do simple things. A lab manager once told me about a study finding that people who used MS Word to typeset were more productive, and I can see that…
    • crazygringo 2 hours ago
      100% completely agreed. It's not the future, it's the past.

      Typst feels more like the future: https://typst.app/

      The problem is that so many journals require certain LaTeX templates so Typst often isn't an option at all. It's about network effects, and journals don't want to change their entire toolchain.

    • maxkfranz 2 hours ago
      LaTeX is good for equations, and LaTeX tools produce very nice PDFs, but I wouldn't want to write in LaTeX generally either.

      The main feature that's important is collaborative editing (like online Word or Google Docs). The second one would be a good reference manager.

    • auxym 3 hours ago
      Agreed. TeX/LaTeX is very old tech. Error recovery and error messages are very bad. Developing new macros in TeX is about as fun as you'd expect developing in a 70s-era language to be (i.e. probably similar to COBOL and old Fortran).

      I haven't tried it yet but Typst seems like a promising replacement: https://typst.app/

  • pwdisswordfishy 2 hours ago
    Oh, like that mass surveillance program!
  • divan 8 minutes ago
    No Typst support?
  • radioactivist 2 hours ago
    Is anyone else having trouble using even some of the basic features? For example, I can open a comment, but it doesn't seem like there is any way to close them (I try clicking the checkmark and nothing happens). You also can't seem to edit the comments once typed.
    • lxe 2 hours ago
      Thanks for surfacing this. If you click the "tools" button to the left of "compile", you'll see a list of comments, and you can resolve them from there. We'll keep improving and fixing things that might be rough around the edges.

      EDIT: Fixed :)

  • OutOfHere 6 minutes ago
    It seems to be a great agentic canvas for authoring LaTeX. I used it to create a paper:

    "Eight Leading Interpretations of Quantum Mechanics - A Comparative Survey (2026)"

    https://prism.openai.com/?u=de087658-2d28-4dd2-9bc5-c43abb83...

  • WolfOliver 4 hours ago
    Check out MonsterWriter if you are concerned about the recent acquisition of this.

    It also offers LaTeX workspaces

    see video: https://www.youtube.com/watch?v=feWZByHoViw

  • noahbp 31 minutes ago
    They seem to have copied Cursor in hijacking the ⌘Y shortcut for "Yes" instead of Undo.
  • sva_ 2 hours ago
    > In 2025, AI changed software development forever. In 2026, we expect a comparable shift in science,

    I can't wait

  • CobrastanJorji 22 minutes ago
    "Hey, you know how everybody's complaining about AI making up totally fake science shit? Like, fake citations, garbage content, fake numbers, etc?"

    "Sure, yes, it comes up all the time in circles that talk about AI all the time, and those are the only circles worth joining."

    "Well, what if we made a product entirely focused on having AI generate papers? Like, every step of the paper writing, we give the AI lots of chances to do stuff. Drafting, revisions, preparing to publish, all of it."

    "I dunno, does anybody want that?"

    "Who cares, we're fucked in about two years if we don't figure out a way to beat the competitors. They have actual profits, they can ride out AI as long as they want."

    "Yeah, I guess you're right, let's do your scientific paper generation thing."

  • vitalnodo 3 hours ago
    With a tool like this, you could imagine an end-to-end service for restoring and modernizing old scientific books and papers: digitization, cleanup, LaTeX reformatting, collaborative or volunteer-driven workflows, OCR (like Mathpix), and side-by-side comparison with the original. That would be useful.
    • vessenes 3 hours ago
      Don’t forget replication!
      • olivia-banks 3 hours ago
        I'm curious how you think AI would aid in this.
        • vessenes 2 hours ago
          Tao’s doing a lot of related work in mathematics, so I can say that first of all literature search is a clearly valuable function frontier models offer.

          Past that, A frontier LLM can do a lot of critiquing, a good amount of experiment design, a check on statistical significance/power claims, kibitz on methodology..likely suggest experiments to verify or disprove. These all seem pretty useful functions to provide to a group of scientists to me.

        • noitpmeder 2 hours ago
          Replicate this <slop>

          Ok! Here's <more slop>

          • olivia-banks 2 hours ago
            I don't think you understand what replication means in this context.
  • falcor84 an hour ago
    It seems clear to me that this is about OpenAI getting telemetry and other training data with the intent of having their AI do scientific work independently down the line, and I'm very ambivalent about it.
  • sbszllr 3 hours ago
    The quality and usefulness of it aside, the primary question is: are they still collecting chats for training data? If so, it limits how comfortable, and sometimes even how permitted, people would be working on their yet-to-be-public work using this tool.
  • MattDaEskimo 3 hours ago
    What's the goal here?

    There was an idea of OpenAI charging commission or royalties on new discoveries.

    What kind of researcher wants to potentially lose out, or get caught up in legal issues, because of a free ChatGPT wrapper? Or am I missing something?

    • engineer_22 2 hours ago
      > Prism is free to use, and anyone with a ChatGPT account can start writing immediately.

      Maybe it's cynical, but how does the old saying go? If the service is free, you are the product.

      Perhaps, the goal is to hoover up research before it goes public. Then they use it for training data. With enough training data they'll be able to rapidly identify breakthroughs and use that to pick stocks or send their agents to wrap up the IP or something.

  • AuthAuth 3 hours ago
    This does way less than I'd expect. Converting images to TikZ is nice, but some of the other applications demonstrated were horrible. There is no way anyone should be using AI to cite.
  • zb3 22 minutes ago
    Is this the product where OpenAI will (soon) take profit share from inventions made there?
  • khalic 3 hours ago
    All your papers are belong to us
    • vicapow 3 hours ago
      Users have full control over whether their data is used to help improve our models
  • jeffybefffy519 3 hours ago
    I postulate that 90% of the reason OpenAI now has "variants" for different use cases is just to capture training data...
  • asadm an hour ago
    Disappointing, actually: what I need is a research "management" tool that lets me put in relevant citations but also goes through the ENTIRE arXiv or Google Scholar and connects ideas, or finds novel ideas in random fields that somehow relate to what I am trying to solve.
  • flumpcakes an hour ago
    This is terrible for Science.

    I'm sorry, but publishing is hard, and it should be hard. There is a work function that requires effort to write a paper. We've been dealing with low quality mass-produced papers from certain regions of the planet for decades (which, it appears, are now producing decent papers too).

    All this AI tooling will do is lower the effort to the point that complete automated nonsense will now flood in and it will need to be read and filtered by humans. This is already challenging.

    Looking elsewhere in society, AI tools are already being used to produce scams and phishing attacks more effective than ever before.

    Whole new arenas of abuse are now rife, with the cost of producing fake pornography of real people (what should be considered a sexual abuse crime) at mere cents.

    We live in a little microcosm where we can see the benefits of AI because tech jobs are mostly about automation and making the impossible (or expensive) possible (or cheap).

    I wish more people would talk about the societal issues AI is introducing. My worthless opinion is that Prism is not a good thing.

    • jimmar an hour ago
      I've wasted hours of my life trying to get Latex to format my journal articles to different journals' specifications. That's tedious typesetting that wastes my time. I'm all for AI tools that help me produce my thoughts with as little friction as possible.

      I'm not in favor of letting AI do my thinking for me. Time will tell where Prism sits.

      • flumpcakes an hour ago
        This Prism video was not just typesetting. If OpenAI released tools that just helped you typeset or create diagrams from written text, that would be fine. But it's not; it's writing papers for you. Scientists/publishers really do not need the onslaught of slop this will create. How can we even trust qualifications in a post-AI world, where cheating is rampant at universities?
    • PlatoIsADisease an hour ago
      I just want replication in science. I don't care at all how difficult it is to write the paper. Heck, if we could spend more effort on data collection and less on communication, that sounds like a win.

      Look at how much BS flooded psychology while having pretty ideas about p-values and the proper use of affect vs. effect. None of that mattered.

  • oytmeal an hour ago
    Some things are worth doing the "hard way".
  • pigeons an hour ago
    Naming things is hard.
  • Onavo 31 minutes ago
    It would be interesting to see how they would compete with the incumbents like

    https://Elicit.com

    https://Consensus.app

    https://Scite.ai

    https://Scispace.com

    https://Scienceos.ai

    https://Undermind.ai

    Lots of players in this space.

  • AndrewKemendo 31 minutes ago
    I genuinely don’t see scientific journals and conferences continuing to last in this new world of autonomous agents, at least the same way that they used to be.

    As other top-level posters have indicated, the review portion of this is the limiting factor: unless journal reviewers adopt an entirely automated review process, they're not going to be able to keep up with what will increasingly be the most and best research coming out of any lab.

    So whoever figures out the automated reviewer that can actually tell fact from fiction is going to win this game.

    I expect over the longest period, that’s probably not going to be throwing more humans at the problem, but agreeing on some kind of constraint around autonomous reviewers.

    If not that, then labs will just produce products, science will stop being done in public, and the only artifacts will be whatever is produced in the market.

  • andrepd 32 minutes ago
    "Chatgpt writes scientific papers" is somehow being advertised as a good thing. What is there even left to say?
  • legitster 3 hours ago
    It's interesting how quickly the quest for the "Everything AI" has shifted. It's much more efficient to build use-case specific LLMs that can solve a limited set of problems much more deeply than one that tries to do everything well.

    I've noticed this already with Claude. Claude is so good at code and technical questions... but frankly it's unimpressive at nearly anything else I have asked it to do. Anthropic would probably be better off putting all of their eggs in that one basket that they are good at.

    All the more reason that the quest for AGI is a pipe dream. The future is going to be very divergent AI/LLM applications - each marketed and developed around a specific target audience, and priced respectively according to value.

    • falcor84 37 minutes ago
      I don't get this argument. Our nervous system is also heterogeneous; why wouldn't AGI be based on an "executive functions" AI that manages per-function AIs?
  • ai_critic 3 hours ago
    Anybody else notice that half the video was just finding papers to decorate the bibliography with? Not like "find me more papers I should read and consider", but "find papers that are relevant that I should cite--okay, just add those".

    This is all pageantry.

    • sfink 2 hours ago
      Yes. That part of the video was straight-up "here's how to automate academic fraud". Those papers could just as easily negate one of your assumptions. What even is research if it's not using cited works?

      "I know nothing but had an idea and did some work. I have no clue whether this question has been explored or settled one way or another. But here's my new paper claiming to be an incremental improvement on... whatever the previous state of understanding was. I wouldn't know, I haven't read up on it yet. Too many papers to write."

    • renyicircle 2 hours ago
      It's as if it's marketed to the students who have been using ChatGPT for the last few years to pass courses and now need to throw together a bachelor's thesis. Bibliography and proper citation requirements are a pain.
      • pfisherman 2 hours ago
        That is such a bummer. At the time, it was annoying and I groused and grumbled about it; but in hindsight my reviewers pointed me toward some good articles, and I am better for having read them.
      • olivia-banks 2 hours ago
        I agree with this. This problem is only going to get worse once these people enter academia and face needing to publish.
    • olivia-banks 3 hours ago
      I've noticed this pattern, and it really drives me nuts. You should really be doing a comprehensive literature review before starting any sort of review or research paper.

      We removed the authorship of a former co-author on a paper I'm on because his workflow was essentially this--with AI-generated text--and a not-insignificant amount of straight-up plagiarism.

      • NewsaHackO 2 hours ago
        There is definitely a difference between how senior researchers and students go about making publications. Students are basically told what topic to write a paper on or prepare data for, so they work backwards: write the paper (possibly researching some information to write it), then add references because they know they have to. For actual researchers, it would be a complete waste of time/funding to start a project on a question that has already been answered (and something grant reviewers will know has been explored before), so in order not to waste their own time, they have to do what you said and actually conduct a comprehensive literature review before even starting the work.
    • black_puppydog 2 hours ago
      Plus, this practice (just inserting AI-proposed citations/sources) is what has recently been the front-runner of some very embarrassing "editing" mistakes, notably in reports from public institutions. Now OpenAI lets us do pageantry even faster! <3
    • verdverm 2 hours ago
      It's all performance over practice at this point. Look to the current US administration as the barometer by which many are measuring their public perceptions
    • maxkfranz an hour ago
      A more apt example would have been to show finding a particular paper you want to cite, but you don’t want to be bothered searching your reference manager or Google Scholar.

      E.g. “cite that paper from John Doe on lorem ipsum, but make sure it’s the 2022 update article that I cited in one of my other recent articles, not the original article”

    • adverbly 2 hours ago
      I chuckled at that part too!

      Didn't even open a single one of the papers to look at them! Just said that one is not relevant without even opening it.

    • teaearlgraycold 2 hours ago
      The hand-drawn-diagram-to-LaTeX demo is a little embarrassing. If you load up Prism and create your first blank project, you can see the image. It looks like it's actually a LaTeX rendering of a diagram rendered with a hand-drawn style and then overlaid on a very clean image of a napkin. So you've proven that you can go from a rasterized LaTeX diagram back to equivalent LaTeX code. Interesting, but it probably will not hold up when it meets real-world use cases.
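
      (For context, "back to equivalent LaTeX code" means recovering TikZ source from pixels; a hypothetical minimal figure of the kind being round-tripped:)

        \documentclass[tikz]{standalone}
        \begin{document}
        \begin{tikzpicture}
          % two boxed nodes joined by an arrow
          \node[draw] (a) at (0,0) {input};
          \node[draw] (b) at (3,0) {output};
          \draw[->] (a) -- (b);
        \end{tikzpicture}
        \end{document}
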
    • thesuitonym 2 hours ago
      You may notice that this is the way writing papers works in undergraduate courses. It's just another in a long line of examples of MBA tech bros gleaning an extremely surface-level understanding of a topic, then deciding they're experts.
  • 0dayman 3 hours ago
    In the end we're going to end up with papers written by AI, proofread by AI... summarized for readers by AI. I think this is just for them to remain relevant and be seen as still pushing something out.
    • falcor84 34 minutes ago
      You're assuming a world where humans are still needed to read the papers. I'm more worried about a future world where AIs do all of the work of progressing science and humans just become bystanders.
  • AlexCoventry 2 hours ago
    I don't see the use. You can already do everything shown in the Prism intro video with ChatGPT. Is it meant to be an Overleaf killer?
  • wasmainiac an hour ago
    The state of publishing in academia was already a dumpster fire; why lower the friction further? It's not like writing was the hard part. Give it two years max and we'll see hallucinations citing hallucinations, with independent repeatability out the window.
    • falcor84 an hour ago
      That's one scenario, but I also see a potential scenario where this integration makes it easier to manage the full "chain of evidence" for claimed results, as well as replication studies and discovered issues, in order to then make it easier to invalidate results recursively.

      At the end of the day, it's all about the incentives. Can we have a world where we incentivize finding the truth rather than just publishing and getting citations?

  • hulitu 3 hours ago
    > Introducing Prism: Accelerating science writing and collaboration with AI.

    I thought this was introduced by the NSA some time ago.

  • jsrozner an hour ago
    AI: enshittifying everything you once cared about or relied upon

    (re the decline of scientific integrity / signal-to-noise ratio in science)

  • hit8run 2 hours ago
    They are really desperate now, right?
  • shevy-java 3 hours ago
    "Accelerating science writing and collaboration with AI"

    Uhm ... no.

    I think we need to put an end to AI as it is currently used (not all of it but most of it).

    • drusepth 3 hours ago
      Does "as it is currently used" include what this apparently is (brainstorming, initial research, collaboration, text formatting, sharing ideas, etc)?
    • Jaxan 3 hours ago
      Yeah, there are already way more papers being published than we can reasonably read. Collaboration, ok, but we don’t need more writing.
  • preommr 3 hours ago
    Very underwhelming.

    Was this not already possible in the web ui or through a vscode-like editor?

    • vicapow 3 hours ago
      Yes, but there's a really large number of users who don't want to have to set up VS Code, git, TeX Live, and LaTeX Workshop just to collaborate on a paper. You shouldn't have to become a full-stack software engineer to be able to write a research paper in LaTeX.
  • hahahahhaah 16 minutes ago
    Bringing slop to science.
  • lispisok 2 hours ago
    Way too much work having AI generate slop which gets dumped on a human reviewer to deal with. Maybe switch some of that effort into making better review tools.
  • postalcoder 4 hours ago
    Very unfortunately named. OpenAI probably (and likely correctly) estimated that 13 years is enough time after the Snowden leaks to use "Prism" for a product, but for me the word is permanently tainted.
    • cheeseomlit 4 hours ago
      Anecdotally, I have mentioned PRISM to several non-techie friends over the years and none of them knew what I was talking about, they know 'Snowden' but not 'PRISM'. The amount of people who actually cared about the Snowden leaks is practically a rounding error
      • giancarlostoro 24 minutes ago
        Most people don't care about the details. Neither does the media. I've seen national scandals that the media pushed one way disproven during discovery in a legal trial. People only remember headlines, the retractions are never re-published or remembered.
      • hedora 3 hours ago
        Given current events, I think you’ll find many more people care in 2026 than did in 2024.

        (See also: today’s WhatsApp whistleblower lawsuit.)

    • arthurcolle 3 hours ago
      This was my first thought as well. Prism is a cool name, but I'd never ever use it for a technical product after those leaks, ever.
    • blitzar 2 hours ago
      Guessing that AI came up with the name based on the description of the product.

      Perhaps, like the original PRISM programme, behind the door is a massive data-harvesting operation.

    • etiam 17 minutes ago
      I think most people who'd catch that would expect the company joined in 2024, if not before?

      https://futurism.com/the-byte/snowden-openai-calculated-betr...

      Friendly reminder?

    • vjk800 3 hours ago
      I'd think that most people in science would associate the name with an optical prism. A single large political event can't override an everyday physical phenomenon in my head.
    • kaonwarb 4 hours ago
      I suspect that name recognition for PRISM as a program is not high at the population level.
      • maqp 2 hours ago
        2027: OpenAI Skynet - "Robots help us everywhere, It's coming to your door"
        • willturman an hour ago
          Skynet? C'mon. That would be too obvious - like naming a company Palantir.
    • seanhunter 3 hours ago
      Pretty much every company I’ve worked for in tech over my 25+ year career had a (different) system called prism.
      • no-dr-onboard 3 hours ago
        (plot twist: he works for NSA contractors)
    • dylan604 4 hours ago
      Surprised they didn't do something trendy like Prizm or OpenPrism while keeping it closed source code.
    • songodongo 3 hours ago
      Or the JavaScript ORM.
    • moralestapia 4 hours ago
      I never thought of that association, not in the slightest, until I read this comment.
    • locusofself 3 hours ago
      this was my first thought as well.
    • wilg 4 hours ago
      I followed the Snowden stuff fairly closely and forgot, so I bet they didn't think about it at all; and if they did, they didn't care, and that was surely the right call.
  • verdverm 2 hours ago
    I remember, something like a month ago, Altman tweeting that they were stopping all product work to focus on training. Was that written on water?

    Seems like they have only announced products since, and no new model trained from scratch. Are they still having pre-training issues?
