Where it helps
- Deep-dive reading – fetch a bulk RIS file, dump a seminal paper’s entire bibliography into Zotero/Mendeley, and follow the threads.
- Bulk citing – grab BibTeX entries for a cluster of related papers without hunting them down one by one.
- LLM grounding – feed language models a clean reference list so they stop hallucinating citations.
https://github.com/MuiseDestiny/zotero-reference
As a disclaimer, I am not associated with its development.
Generally, an About page is always appreciated for a web tool with minimal UX, particularly when it's rather automagical.
APIs Used

OpenCitations API (v2)
- Endpoint: https://opencitations.net/index/api/v2/references/
- Purpose: Retrieves a list of all references from a paper by its DOI
- Data format: JSON containing cited DOIs and metadata
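If you want to hit the same endpoint yourself, here's a minimal Python sketch (using requests); the DOI is just an example, and the "cited" field name is my recollection of the v2 response shape, so verify it against a live response:

    import requests

    doi = "10.1186/1756-8722-6-59"  # example DOI, swap in your own
    url = f"https://opencitations.net/index/api/v2/references/doi:{doi}"

    resp = requests.get(url, timeout=30)
    resp.raise_for_status()

    for ref in resp.json():
        # "cited" holds the cited work's identifiers (e.g. "omid:br/06... doi:10...")
        print(ref.get("cited"))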
DOI Content Negotiation
- Endpoint: https://doi.org/{DOI}
- Purpose: Fetches metadata and formatted citations for DOIs
- Formats: BibTeX, RIS, CSL JSON, RDF XML, etc.
- Implements CSL (Citation Style Language) for text-based citations
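Content negotiation is just an Accept header sent to the resolver. A minimal sketch; the MIME types below are the standard content-negotiation ones for BibTeX, RIS, and CSL JSON:

    import requests

    doi = "10.1186/1756-8722-6-59"  # same example DOI as above
    formats = {
        "bibtex": "application/x-bibtex",
        "ris": "application/x-research-info-systems",
        "csl_json": "application/vnd.citationstyles.csl+json",
    }

    resp = requests.get(
        f"https://doi.org/{doi}",
        headers={"Accept": formats["bibtex"]},
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.text)  # a BibTeX entry for the paper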
Local Citation Style Files
- Purpose: Provides access to thousands of citation styles
- Storage: Pre-generated JSON files with style information
Could you share _your_ work though? It's always interesting to see new approaches to metadata.
Traditionally, it was a bit of a one-way street (data comes from publisher) but there's some interesting work being done by COMET [0] and (separately) OpenAlex [1] around cleanup of the publisher-supplied data within the community.
(I used to work at Crossref; am a little involved with COMET)
More info on content negotiation:
I wonder how OpenCitations populates their data? One example I tried showed 9 references where the paper had 30+.
I’m no blockchain evangelist, given the current state of its “value”, but this seems like a great test case for resolving the academic or otherwise legitimate origin of published material.
Users would generate and centrally register (or be issued) a W3C DID keypair with which to sign their ScholarlyArticles and peer-review CreativeWorks.
W3C DID Decentralized Identifiers solve for what DOI and ORCID solve for, without requiring a central registry.
W3C PROV is for describing provenance. PROV RDF can be signed with a DID's secret key; a sketch of the idea follows below.
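To make that concrete, a minimal Python sketch of the signing step, assuming an Ed25519 key of the kind a did:key identifier wraps; the DID string and the PROV snippet are placeholders, and a real system would canonicalize the RDF (URDNA2015 or similar) before signing rather than signing raw bytes:

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    sk = Ed25519PrivateKey.generate()  # the DID's secret key
    pk = sk.public_key()

    prov_ttl = b"""
    @prefix prov: <http://www.w3.org/ns/prov#> .
    <urn:example:article> prov:wasAttributedTo <did:key:z6MkExample> .
    """  # did:key:z6MkExample is a placeholder, not a real DID

    signature = sk.sign(prov_ttl)  # detached signature over the serialized RDF
    pk.verify(signature, prov_ttl)  # raises InvalidSignature if tampered with
    print("provenance statement verified")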
PDFs can be signed with their own digital-signature scheme, but there's no good way to publish Linked Data in a PDF (one prepared as a LaTeX manuscript, for example).
Bibliographic and experimental-control metadata only goes so far in assuring the provenance and authenticity of an article, its data, and the peer reviews that legitimize it.
From https://news.ycombinator.com/item?id=28382186 :
>> JOSS (Journal of Open Source Software) has managed to get articles indexed by Google Scholar [rescience_gscholar]. They publish their costs [joss_costs]: $275 Crossref membership, DOIs: $1/paper:
As if the current political climate isn't going to result in the sabotage of scientific infrastructure if some state actor decides that it could provide some economic or military advantage. (hello three body problem)
DOIs should have been hashes; that would have been cheaper, more resilient, and more convenient. But sadly, librarians tend to rebuild paper workflows digitally instead of building digitally native infrastructure.
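To illustrate the hash idea: a content-addressed identifier can be recomputed by anyone holding the file, with no registry fee. A sketch; the "sha256:" prefix is made up for illustration:

    import hashlib

    def content_id(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return "sha256:" + h.hexdigest()

    # content_id("paper.pdf") -> "sha256:9f2b..." (hypothetical output)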
Blockchain would be fine as a timestamp service to replace publishers, although a consensus-based system hosted by the world's libraries would also be fine for that purpose and require a lot less machinery.
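For scale, the "less machinery" version could be as little as a hash-chained append-only log that libraries mirror and cross-check; this toy sketch is not a real protocol, just the shape of one:

    import hashlib, json, time

    log = []

    def append_entry(doc_hash: str) -> dict:
        prev = log[-1]["entry_hash"] if log else "0" * 64
        body = {"doc": doc_hash, "ts": int(time.time()), "prev": prev}
        # hash the entry before adding its own hash field
        body["entry_hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        log.append(body)
        return body

    append_entry(hashlib.sha256(b"some paper").hexdigest())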