I haven't tried it for books. I imagine it's not sufficiently complete to serve as a backbone, but a quick look at an example book gives me the IDs for OpenLibrary, LibraryThing, Goodreads, Bing, and even niche stuff like the National Library of Poland MMS ID.
I recently (a year ago... wow) dipped my toe into the world of library science through Wikidata, and was shocked at just how complex it is. OP's work looks really solid, but I hope they're aware of how mature the field is!
For illustration, here are just the book-relevant ID sources I focused on from Wikidata:
ARCHIVERS:
Library of Congress Control Number `P1144` (173M)
Open Library `P648` (39M)
Online Computer Library Center `P10832` (10M)
German National Library `P227` (44M)
Smithsonian Institution `P7851` (155M)
Smithsonian Digital Ark `P9473` (3M)
U.S. Office of Sci. & Tech. Info. `P3894`
PUBLISHERS:
Google Books `P675` (1M)
Project Gutenberg `P2034` (70K)
Amazon `P5749`
CATALOGUERS:
International Standard Book Number `P212`
Wikidata `P8379` (115B)
EU Knowledge Graph `P11012`
Factgrid Database `P10787` (0.4M)
Google Knowledge Graph `P2671` (500B)

I've recently acquired some photo books that don't appear to have any ISBN, but they're listed on WorldCat with OCLC numbers and are catalogued in the Japanese National Diet Library. I'm not sure whether they actually lack ISBNs or I just haven't been able to find them, but from what I gathered, it's quite common for self-published books.
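If it helps, Wikidata can sometimes bridge that gap: when an item exists with the OCLC control number (P243, if I remember the property right), you can pull whatever other identifiers it carries from the public SPARQL endpoint. A rough TypeScript sketch, not tested against these particular books:

```typescript
// Sketch: look up a book on Wikidata by OCLC number (assumed to be P243)
// and list the other external identifiers on its item, if one exists.
// Assumes global fetch (Node 18+).
const SPARQL_ENDPOINT = "https://query.wikidata.org/sparql";

async function idsForOclcNumber(oclc: string): Promise<Record<string, string>> {
  const query = `
    SELECT ?prop ?value WHERE {
      ?book wdt:P243 "${oclc}" .          # item carrying this OCLC control number
      ?book ?p ?value .
      ?prop wikibase:directClaim ?p ;     # restrict to external-identifier properties
            wikibase:propertyType wikibase:ExternalId .
    }`;
  const url = `${SPARQL_ENDPOINT}?format=json&query=${encodeURIComponent(query)}`;
  const res = await fetch(url, { headers: { "User-Agent": "book-id-sketch/0.1" } });
  const data = await res.json();
  const ids: Record<string, string> = {};
  for (const row of data.results.bindings) {
    ids[row.prop.value] = row.value.value; // property URI -> identifier value
  }
  return ids;
}
```

No idea how good Wikidata's coverage is for ISBN-less photo books, though.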
https://newbooksnetwork.com/subscribe
It's definitely biased towards academia, which I personally see as a pro, not a con.
After v1.0.0 is out, I plan to support adding books to the database manually, at which point we'll be able to start improving the database without relying on third-party services.
I'm hoping Goodreads and Anna's Archive will help fill in the gaps, especially since Anna's Archive has gigantic database dumps available[1].
In fact, now that I think about it, you could also contribute your work to Wikidata. I don't see ISBNdb IDs on Wikidata, so you could write a script to make those contributions. Then anyone else using Wikidata for this sort of thing can benefit from your work.
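Something in the direction of this sketch, say; the property ID is a placeholder, since an ISBNdb property would first have to pass Wikidata's property-proposal process, and in practice you'd want a proper bot framework:

```typescript
// Sketch: add an external-ID claim to a Wikidata item via the MediaWiki
// API's wbcreateclaim action. "P0000" below is a hypothetical placeholder
// for an ISBNdb property. Assumes you already have a logged-in session
// and a CSRF token.
const API = "https://www.wikidata.org/w/api.php";

async function addIdClaim(qid: string, propertyId: string, value: string, csrfToken: string) {
  const body = new URLSearchParams({
    action: "wbcreateclaim",
    entity: qid,                  // e.g. "Q42"
    property: propertyId,         // e.g. the hypothetical "P0000"
    snaktype: "value",
    value: JSON.stringify(value), // external IDs are plain JSON strings
    token: csrfToken,
    format: "json",
  });
  const res = await fetch(API, { method: "POST", body });
  return res.json();
}
```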
(Only the first 4 or so were JSON errors; the rest were HTML-from-nginx, if that matters.)
Right now, I use node-isbn https://www.npmjs.com/package/node-isbn which mostly works well but is getting long in the tooth.
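For anyone curious, usage is roughly this (from memory, so check the README; I don't believe it ships type definitions):

```typescript
// Rough node-isbn usage as I remember it: pick providers in fallback
// order, then resolve an ISBN to book metadata via a promise.
import isbn from "node-isbn"; // may need @ts-ignore without typings

isbn
  .provider(["openlibrary", "google"]) // tried in order until one answers
  .resolve("9780747532743")
  .then((book) => console.log(book.title, book.authors))
  .catch((err) => console.error("lookup failed:", err));
```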
I wrote a Go SDK[1] for the service, maybe I'll try writing one in TypeScript tomorrow.
I could add them as an extractor, I suppose :thinking:
[1]: https://i.cpimg.sh/pexvlwybvbkzuuk8.png
Personally I would go with option 2, as the colour from the covers beats the anaemic feel of 1, and it seems more original than the search-with-grid-below of 3.
Number two is what my wife and I prefer too, and likely what's going to be chosen in the end.
https://www.goodreads.com/book/show/939760.Music_of_Many_Sph...
Still need to figure out how this will work, though.
Although I would suggest that, rather than merging (and discarding) on initial lookup, it might be better to remember each individual response. That way, when you inevitably decide to fix or improve things later, you could also regenerate all the existing records. If the excess data becomes an issue, you can always throw it out later.
I say all this because I've been frustrated by the number of subtle inaccuracies I've encountered when looking things up with these services in the past. Depending on the work, the entries sometimes feel less like authoritative records and more like best-effort educated guesses.
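To make it concrete, the shape I have in mind is something like this sketch (storage layer hand-waved): keep every raw payload verbatim and treat the merged record as a re-runnable function of them:

```typescript
// Sketch: persist each provider's raw response as-is, keyed by source,
// ISBN, and fetch time, and derive the merged record purely from those
// rows so a merge bug is never destructive.
interface RawLookup {
  source: string;    // e.g. "openlibrary", "google"
  isbn: string;
  fetchedAt: string; // ISO timestamp
  payload: unknown;  // response body, stored untouched
}

interface BookRecord {
  isbn: string;
  title?: string;
  authors?: string[];
}

// Re-runnable merge: fix the logic later, then regenerate every record
// from the stored raw lookups.
function mergeLookups(isbn: string, raws: RawLookup[]): BookRecord {
  const record: BookRecord = { isbn };
  for (const raw of raws) {
    const p = raw.payload as { title?: string; authors?: string[] };
    record.title ??= p?.title;     // first source to answer wins, for now
    record.authors ??= p?.authors;
  }
  return record;
}
```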