I just spent time getting it all running on Docker Compose and moved my web UI from Express.js to Flask. I want to get the code cleaned up and open source it at some point.
--
-- Name: refresh_topic_tables(); Type: PROCEDURE; Schema: public; Owner: postgres
--
CREATE PROCEDURE public.refresh_topic_tables() LANGUAGE plpgsql AS $$
BEGIN
-- Drop tables in reverse dependency order
DROP TABLE IF EXISTS topic_top_terms;
DROP TABLE IF EXISTS topic_term_tfidf;
DROP TABLE IF EXISTS term_df;
DROP TABLE IF EXISTS term_tf;
DROP TABLE IF EXISTS topic_terms;
-- Recreate tables in correct dependency order
CREATE TABLE topic_terms AS
SELECT
dt.term_id,
dot.topic_id,
COUNT(DISTINCT dt.document_id) as document_count,
SUM(dt.frequency) as total_frequency
FROM document_terms dt
JOIN document_topics dot ON dt.document_id = dot.document_id
GROUP BY dt.term_id, dot.topic_id;
CREATE TABLE term_tf AS
SELECT
topic_id,
term_id,
SUM(total_frequency) as term_frequency
FROM topic_terms
GROUP BY topic_id, term_id;
CREATE TABLE term_df AS
SELECT
term_id,
COUNT(DISTINCT topic_id) as document_frequency
FROM topic_terms
GROUP BY term_id;
CREATE TABLE topic_term_tfidf AS
SELECT
tt.topic_id,
tt.term_id,
tt.term_frequency as tf,
tdf.document_frequency as df,
-- cast to numeric so the IDF ratio isn't truncated by integer division
tt.term_frequency * LN( (SELECT COUNT(id)::numeric FROM topics) / GREATEST(tdf.document_frequency, 1)) as tf_idf
FROM term_tf tt
JOIN term_df tdf ON tt.term_id = tdf.term_id;
CREATE TABLE topic_top_terms AS
WITH ranked_terms AS (
SELECT
ttf.topic_id,
t.term_text,
ttf.tf_idf,
ROW_NUMBER() OVER (PARTITION BY ttf.topic_id ORDER BY ttf.tf_idf DESC) as rank
FROM topic_term_tfidf ttf
JOIN terms t ON ttf.term_id = t.id
)
SELECT
topic_id,
term_text,
tf_idf,
rank
FROM ranked_terms
WHERE rank <= 5
ORDER BY topic_id, rank;
RAISE NOTICE 'All topic tables refreshed successfully';
EXCEPTION
WHEN OTHERS THEN
RAISE EXCEPTION 'Error refreshing topic tables: %', SQLERRM;
END;
$$;

I'm seeing "Thanos committing fraud" in a section about "useful lies". Given that the founder is currently in prison, it seems odd to consider the lie useful instead of harmful. It kinda seems like the AI found a bunch of loosely related things and mislabeled the group.
If you've read these books, I'm not seeing what value this adds.
Another model can be post-rationalization. People just do stuff instinctively, then rationalize why they did it after the fact. "She lied without thinking about it, then constructed a rationale for why the lie was rational to begin with".
At the extremes, some people will never lie, even to their detriment. Usually they seem to attribute this to virtue. Others will always lie. They seem to feel not lying is surrendering control. Most people are somewhere in between.
Theranos is the fraud mentioned in the piece.
https://habr.com/en/articles/456476/
https://android-review.googlesource.com/c/platform/system/bt...
This is the part that always stuck with me:
I have often felt that programming is an art form, whose real value can only be appreciated by another versed in the same arcane art; there are lovely gems and brilliant coups hidden from human view and admiration, sometimes forever, by the very nature of the process. You can learn a lot about an individual just by reading through his code, even in hexadecimal. Mel was, I think, an unsung genius.
If it weren't expression, everyone would get the same result. But no one else at Royal McBee did things the way Mel Kaye did things.
Kaye had a strong artistic vision for how things should be done; he didn't want to use the ergonomic features of the RPC-4000 because they didn't align with his vision. I think he found the idea of rigging the blackjack program offensive in part for the same reason.
Speaking for myself, I have always found the story and "pessimal" instructions beautiful. It's my favorite piece of folklore of all time. Kaye and Nather are both artists to me.
Tangentially, Kaye is standing on the far right in this photo.
https://zappa.brainiac.com/MelKaye.png
And here is Nather.
https://en.wikipedia.org/wiki/Ed_Nather#/media/File:Ednather...
Anyway, it introduced me to the idea of using computational methods in the humanities, including literature. I found it really interesting at the time!
One of the terms it introduced me to is "distant reading", whose name mirrors that of a technique you may have studied in your gen eds if you went to university ("close reading"). The idea is that rather than zooming in on some tiny piece of text to examine very subtle or nuanced meanings, you zoom out to hundreds or thousands of texts, using computers to search them for insights that only emerge from large bodies of work as wholes. The book argued that there are likely some questions that it is only feasible to ask this way.
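To make the contrast concrete: a close reading dwells on one passage, while a distant reading might count a few terms across a whole shelf. A toy sketch in Python (the corpus directory and word list here are made up):

    # Toy distant reading: term counts across an entire corpus at once,
    # rather than a careful read of any single text.
    from collections import Counter
    from pathlib import Path

    counts = Counter()
    for novel in Path("corpus/").glob("*.txt"):  # hypothetical corpus dir
        words = novel.read_text(encoding="utf-8").lower().split()
        counts.update(w for w in words if w in {"railway", "telegraph", "horse"})

    print(counts.most_common())  # trends only visible across many books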
An old friend of mine used techniques like this for her dissertation in rhetoric, learning enough Python along the way to write the code needed for the analyses she wanted to do. I thought it was pretty cool!
I imagine LLMs are probably positioned now to push distant reading forward in a number of ways: enabling new techniques, allowing old techniques to be used without writing code, and helping novices get started with writing some code. (A lot of the maintainability issues that come with LLM code generation happily don't apply to research projects like this.)
Anyway, if you're interested in other computational techniques you can use to enrich this kind of reading, you might enjoy looking into "distant reading": https://en.wikipedia.org/wiki/Distant_reading
LLMs are great at finding media by vague descriptions. ;)
The book is almost certainly by *Franco Moretti*, who coined the term "distant reading." Given the timeframe ("maybe a decade ago") and the description, it's most likely one of these two:
1. *"Distant Reading"* (2013) — A collection of Moretti's essays that directly takes the concept as its title. This would fit well with "about a decade ago."
2. *"Graphs, Maps, Trees: Abstract Models for Literary History"* (2005) — His earlier and very influential work that laid out the quantitative, computational approach to literary analysis, even if it didn't use "distant reading" as prominently in the title.
Moretti, who founded the Stanford Literary Lab, was the major proponent of the idea that we should analyze literature not just through careful reading of individual canonical texts, but through large-scale computational analysis of hundreds or thousands of works—looking at trends in genre evolution, plot structures, title lengths, and other patterns that only emerge at scale.
Given that the commenter specifically remembers learning the term "distant reading" from the book, my best guess is *"Distant Reading" (2013)*, though "Graphs, Maps, Trees" is also a strong possibility if their memory of "a decade" is approximate.
In "Father wound" the words "abandoned at birth" are connected to "did not". Which makes it look like those visual connections are just a stylistic choice and don't carry any meaning at all.
The strongest connection I found that the LLM didn't was between Jobs and The Elephant in the Brain:
The Elephant in the Brain: The less we know of our own ugly motives, the easier it is to hide them from others. Self-deception is therefore strategic, a ploy our brains use to look good while behaving badly.
Jobs: “He can deceive himself,” said Bill Atkinson. “It allowed him to con people into believing his vision, because he has personally embraced and internalized it.”
I do like the idea though — perhaps there is a way to refine the prompting to do a second pass or even multiple passes to iteratively extract themes before the linking step.
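A minimal sketch of what those passes might look like, assuming an OpenAI-style client (the model name and prompts are placeholders, not the author's actual pipeline):

    # Sketch: extract themes per excerpt, then consolidate across passes,
    # before any linking step. Model and prompts are illustrative only.
    from openai import OpenAI

    client = OpenAI()

    def ask(prompt):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    def extract_themes(excerpts, passes=2):
        themes = [ask(f"List the 3 core themes of this excerpt:\n{e}")
                  for e in excerpts]
        for _ in range(passes - 1):
            # Each pass re-reads the pooled themes and merges near-duplicates,
            # so linking later operates on consolidated concepts.
            themes = [ask("Merge overlapping themes into one canonical list:\n"
                          + "\n".join(themes))]
        return themes

The linking step would then see themes that have survived a round of consolidation, rather than raw first-pass phrases.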
Have you read the Syntopicon by Mortimer J Adler?
It's right up your alley on this one. It's essentially this, but in 1965, by hand, with Isaac Asimov and William F Buckley Jr, among others.
Where did you get the books from? I've been trying to do something like this myself, but haven't been able to get good access to books under copyright.
Yeah, thinking a bit more here, you've created a Syntopicon. I've always wanted to make a modern one too! You can do the old school late night Wikipedia reading session with the trails idea of yours. Brilliant!
Really though, how can I help you make this bigger?
I think that this sucks the discreet joy out of reading and learning. Having the ways that the topics within a certain book can cross over and lead into another book on a different topic externalized is hollowing, and I don't find it useful.
On the other hand I feel like seeing this process externalized gives us a glimpse at how “the algorithms” (read: recommender systems) suggest seemingly disjunctive content to users. So as a technical achievement I can’t knock what you’ve done and I’m satisfied to see that you’re the guy behind the HN Book map that I thought was nice too.
At its core this looks like a representation of the advantages that LLMs can afford to the humanities. Most of us know how Rob Pike feels about them. I wonder if his senior former colleague feels the same: https://www.cs.princeton.edu/~bwk/hum307/index.html. That’s a digression, but I’d like to see some people think in public about how to reasonably use these tools in that domain.
Intuitively, I agree. This feels like the difference between being a creator (of your own thoughts as inspired by another person's) and a consumer (although in a somewhat educational sense). There would need to be a big advantage to being taught those initial thoughts, analogous to why we teach folks algebra/calculus via formulas rather than having every student figure out proofs for themselves.
This is the best way to reinforce a copilot, because models are pretty smart most of the time and I can correct the cases where it stumbles with minimal cognitive effort. Over time I find more and more tasks are solved by agent intelligence or these happy-path hints. As primitive as it is, CLAUDE.md is the best we have for long-term adaptive memory.
I was recently trying to remember a portal fantasy I read as a kid. Goodreads has some impressive lists, not just "Portal Fantasies"[0], but "Portal Fantasies where the portal is on water"[1], and seven more "where/what's the portal" categories like that.
But the portal fantasy I was seeking is on the water and not on the list.
LLMs have failed me so far, as has browsing the larger portal fantasy list. So, I thought, what if I had an LLM look through a list of kids books published in the 1990s and categorize "is this a portal fantasy?" and "which category is the portal?"
I would 1. possibly find my book and 2. possibly find dozens of books I could add to the lists. (And potentially help augment other Goodreads-like sites.)
Haven't done it, but I still might.
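If I do, the sketch in my head is roughly this (OpenAI-style client; the model, book list, and JSON shape are all placeholders):

    # Sketch: classify 90s kids' books as portal fantasies and, if so,
    # label where the portal is. Everything here is illustrative.
    import json
    from openai import OpenAI

    client = OpenAI()
    books = ["The Lake at the End of the Lane (1994)"]  # stand-in list

    PROMPT = ("Is this book a portal fantasy? If yes, what kind of portal "
              "(door, mirror, water, ...)? Answer as JSON with keys "
              "'is_portal_fantasy' and 'portal_type'.\nBook: {book}")

    for book in books:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder
            messages=[{"role": "user", "content": PROMPT.format(book=book)}],
            response_format={"type": "json_object"},
        )
        print(book, json.loads(resp.choices[0].message.content))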
Anyway, thanks for making this. It's a really cool project!
[0] https://www.goodreads.com/list/show/103552.Portal_Fantasy_Bo...
[1] https://www.goodreads.com/list/show/172393.Fiction_Portal_is...
Edit/update: if you are looking for the phantom thread between texts, believe me that an LLM cannot achieve it. I have interrogated the most advanced models for hours, and they cannot do the task to any sort of satisfactory end that a smoked-out half-asleep college freshman could. The models don't have sufficient capacity...yet.
¹ Oh, that's just LLMs in general? Cool!
As an English teacher this shit makes me hate LLMs even more. Like so much techbro nonsense, it completely ignores what makes us human.
… realize that it’s nonsense and the LLM is not smart enough to figure out much without a reranker and a ton of technology that tells it what to do with the data.
You can run any vector query against a RAG index and you are guaranteed a response, even if the returned chunks are entirely unrelated.
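That failure mode takes a few lines of numpy to demonstrate: top-k retrieval returns k chunks no matter how poor the best match is.

    # Top-k retrieval always "succeeds": even a garbage query gets back
    # its k nearest chunks, with no built-in notion of "no good match".
    import numpy as np

    rng = np.random.default_rng(0)
    chunks = rng.normal(size=(1000, 384))        # pretend chunk embeddings
    chunks /= np.linalg.norm(chunks, axis=1, keepdims=True)

    query = rng.normal(size=384)                 # unrelated query
    query /= np.linalg.norm(query)

    sims = chunks @ query
    top_k = np.argsort(-sims)[:5]
    print(top_k, sims[top_k])  # five "answers", all near-zero similarity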
Take for example the OODA loop. How are the connections made here of any use? It seems like the words are semantically related but the concepts are not. And even if they are, so what?
I am missing the so what.
Now imagine a human had read all these books. It would have come up with something new, I’m pretty sure about that.
You have an interesting idea here, but looking over the LLM output, it's not clear what these "connections" actually mean, or if they mean anything at all.
Feeding a dataset into an LLM and getting it to output something is rather trivial. How is this particular output insightful or helpful? What specific connections gave you, the author, new insight into these works?
You correctly, and importantly, point out that "LLMs are overused to summarise and underused to help us read deeper", but you published the LLM summary without explaining how the LLM helped you read deeper.
A trail that hits that balance well IMO is https://trails.pieterma.es/trail/pacemaker-principle/. I find the system theory topics the most interesting. In this one, I like how it pulled in a section from Kitchen Confidential in between oil trade bottlenecks and software team constraints to illustrate the general principle.
I'm not familiar with the term "Pacemaker Principle" and Google search was unhelpful. What does it mean in this context? What else does this general principle apply to?
I'm perfectly willing to believe that I am missing something here. But reading through many of the supportive comments, it seems more likely that this is an LLM Rorschach test where we are given random connections and asked to do the mental work of inventing meaning in them.
I love reading. These are great books. I would be excited if this tool actually helps point out connections that have been overlooked. However, it does not seem to do so.
This made me realize that so many influential figures have either absent fathers, or fathers that berated them or didn't give them their full trust/love. I think there's something to the idea that this commonality is more than coincidence. (that's the only topic of the site I've read through yet, and I ignored the highlighted word connections)
How is that different from having an insight yourself and later doing the work to see if it holds on closer inspection?
But so many of the links just don't make sense, as several comments have pointed out. Are these actually supposed to represent connections between books, or is it just a random visual effect that's supposed to imply they're connected?
I clicked on one category and it has "Us/Them" linked to "fictions" in the next summary. I get that it's supposed to imply some relationship but I can't parse the relationships
This to me sounds off. I read the same 8 to 10 books over and over and with every read discover new things. The idea of more books being more useful stands against the same books on repeat. And while I'm not religious, how about dudes only reading one book (the Bible, or the Koran) and claiming that they've gotten all their wisdom from it for a thousand years?
If I have a library of 100+ books and they are not enough, then isn't the quality of those books the problem, and not the number of books in the library?
I won't pile on to what everyone else has said about the book connections / AI part of this (though I agree that part is not the really interesting or useful thing about your project) but I think a walk-through of how you approach UI design would be very interesting!
Orwell's motives (sheer egoism, aesthetic enthusiasm, historical impulse, and political purpose) are somewhat dated.
The book was really big and it got stuck in "indexing". (Possibly broke the indexer?) But thanks to the CLI integration, it was able to just iteratively grep all the info it needed out of it. I found this very amusing.
Anthropic's article on retrieval emphasizes the importance of keyword search, since keyword matches often outperform embeddings depending on the query. Their own approach is a hybrid of the two.
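A minimal sketch of the hybrid idea, fusing BM25 keyword ranks with embedding ranks via reciprocal rank fusion (the hash-based embed() is a stand-in for a real embedding model, and none of this is Anthropic's actual code):

    # Hybrid retrieval sketch: BM25 keyword scores fused with embedding
    # similarity using reciprocal rank fusion (RRF).
    import numpy as np
    from rank_bm25 import BM25Okapi  # pip install rank-bm25

    docs = [
        "The OODA loop governs decision tempo in combat.",
        "High-frequency traders shave microseconds off order latency.",
        "Kitchen prep lines are paced by their slowest station.",
    ]

    def embed(text, dim=256):
        # Stand-in embedding: hashed bag of words, L2-normalized.
        v = np.zeros(dim)
        for tok in text.lower().split():
            v[hash(tok) % dim] += 1.0
        return v / (np.linalg.norm(v) or 1.0)

    def hybrid_search(query, top_n=2, k=60):
        bm25 = BM25Okapi([d.lower().split() for d in docs])
        kw_order = np.argsort(-bm25.get_scores(query.lower().split()))
        q = embed(query)
        emb_order = np.argsort(-np.array([q @ embed(d) for d in docs]))
        fused = {}
        for order in (kw_order, emb_order):
            for rank, i in enumerate(order):
                fused[i] = fused.get(i, 0.0) + 1.0 / (k + rank)  # RRF
        ranked = sorted(fused, key=fused.get, reverse=True)
        return [docs[i] for i in ranked[:top_n]]

    print(hybrid_search("decision speed"))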
It's like grabbing a half-dozen books off the library shelf, opening to a random page in each, then flitting through them, kind of like an "engineering nerd book sample platter".
The visual style of linking phrases from one section to the next looks neat, but the connections don’t seem correct. There’s a link from “fictions” to “internal motives” near the top of the first link and several other links are not really obviously correct.
There are two stages to the linking: first juxtaposing the excerpts, then finding and linking key phrases within them. I find the excerpts themselves often have interesting connections between them, but the key phrases can be a bit out there. The "fictions" to "internal motives" one does gel for me, given the theme of deceiving ourselves about our own motivations.
I really think we all should sync up and talk more. I want to make this bigger.
#1: Would a larger dataset increase the depth and breadth of insight? (If so, go to #2.)
#2: With the initial top 100, are there key 'super node' books that stand out as ones to read due to the breadth they offer? Would a larger dataset identify further 'super node' books?
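On #2, one way to make 'super node' concrete is degree centrality over the book-connection graph. A sketch with networkx (the edges are invented for illustration; the real input would be the LLM-found links):

    # Rank books by how many distinct other books they connect to.
    import networkx as nx

    G = nx.Graph()
    G.add_edges_from([
        ("Thinking, Fast and Slow", "The Elephant in the Brain"),
        ("Thinking, Fast and Slow", "Influence"),
        ("Gödel, Escher, Bach", "The Elephant in the Brain"),
    ])  # invented edges

    for book, score in sorted(nx.degree_centrality(G).items(),
                              key=lambda kv: -kv[1]):
        print(f"{score:.2f}  {book}")

A larger dataset would presumably surface more such hubs, and with weighted edges you could rank by connection strength rather than raw degree.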
https://en.wikipedia.org/wiki/Netflix_Prize
(Are people still trying to improve upon the original winning solution?)
Solid technical execution too. Well done!
Wouldn't it be good if recursive Leiden and Cypher were built into an embedded DB?
That's what I'm looking into with mcp-server-ladybug.
Conclusion: you find wisdom in everything if you look for it.
>The Law of Fives states simply that: ALL THINGS HAPPEN IN FIVES, OR ARE DIVISIBLE BY OR ARE MULTIPLES OF FIVE, OR ARE SOMEHOW DIRECTLY OR INDIRECTLY APPROPRIATE TO 5.
>The Law of Fives is never wrong.
>In the Erisian Archives is an old memo from Omar to Mal-2: "I find the Law of Fives to be more and more manifest the harder I look."
I really appreciate you mentioning this. I think this is the nature of LLMs in general. Any symbol it processes can affect its reasoning capabilities.
It's all fun and games 'till someone loses an eye/mind/even-tenuous-connection-to-reality.
Edit: I'd mention that the themes Claude finds qualify as important stuff imo. But they're all pretty grim and it's a bit problematic focusing on them for a long period. Also, they are often the grimmest spin on things that are well known.
https://medium.com/gft-engineering/using-text-embeddings-and...
Validation is a problem here - you find relationships, but so what? Is it right.... I can't say. It is interesting though.
Someone called the output "egregious, obviously referential LLM dog". And someone else called this trail an example of that: https://trails.pieterma.es/trail/tempo-gradient/
What struck me when reading it was that I parachuted into that trail with zero context. So I asked an LLM to generate some surrounding context for me: this is the verbatim prompt that I just gave gpt-5.2/medium running in my own agentic harness:
> read this: https://trails.pieterma.es/trail/tempo-gradient/. then i want you to write a short essay that shows the relationship between the different quotes making sure to use each quote in the process. the narrative should help me understand the links between the different quotes in each book in context. feel free to use your own knowledge of the books and your general knowledge to make this an approachable thing. what another llm did here is cool - it found relationships between this set of quotes. now i want you to materialize them into a compelling essay that invites curiosity and teaches me something. write the essay to tempo-gradient.md and respond in chat with how you thought about the problem and whether you thought these relationships were meaningful.
This was my agent's response:
Wrote the essay to tempo-gradient.md.
How I thought about it: I treated “tempo gradient” as the shared mechanism behind all eight excerpts—mismatch in loop speed (notice → interpret → decide → act) caused by attention, infrastructure latency, organizational rigidity, system delays, stealthy manipulation, or runaway acceleration. Then I ordered the quotes from “inner time” (tennis/attention) → engineered milliseconds (HFT) → explicit doctrine (OODA) → industrial schedules (mobilization) → structural delays (Beer Game/economy) → slow covert tempo (cyber) → extreme fast tempo (AI takeoff), so each quote sets up the next.
Whether the relationships felt meaningful: yes, mostly. They’re not “the same idea” so much as the same shape appearing in different contexts—feedback delay and decision cadence determining who gets to set the terms of reality for everyone else. The only relationship that feels a bit more interpretive than direct is tennis ↔ geopolitics, but even there the bridge (“how many nows” as a controllable input to decision tempo) is surprisingly sturdy.
And this is the output: https://gist.github.com/jflam/49753b7da64a74f07e35f6e2465f1a...
Interesting... seems like it wants the keys on your system! ;)
This project, titled "Show HN: I used Claude Code to discover connections between 100 books," is an attempt to use an LLM (Claude) to perform "syntopical reading"—analyzing a library of 100 Hacker News favorite books to find thematic "trails" and connections between seemingly disparate texts. The author used CLI tools and LLM orchestration to index topics and generate these connections, aiming to move beyond simple summarization toward deeper insight.
Below are my thoughts on the project, followed by an analysis of the specific criticisms raised in the thread.
My Thoughts Conceptually, this is a fascinating experiment in "digital humanities." It attempts to automate a very high-level cognitive task: synthesis. Usually, we use computers for retrieval (search) or storage. Using them to find semantic bridges between concepts like "Jobs' reality distortion field" and "Theranos' fake demos" is a compelling use case for LLMs.
However, the execution reveals the current limits of this technology. The resulting "connections" often feel like a parlor trick—impressive that the machine did it, but often lacking the "click" of genuine insight. The project succeeds more as a technical visualization of vector embeddings than as a literary tool. It produces a map of linguistic proximity rather than conceptual necessity.
Criticisms & Agreement Analysis Here are the main criticisms from the comment section and my take on them:
1. The "Rorschach Test" / Spurious Connections Criticism: Users like tmountain, smusamashah, and timoth3y argue that the connections are "weaker than weak" or purely surface-level (e.g., linking "fracture" in one book to "crumble" in another). They suggest the project is an "LLM Rorschach test" where the human user forces meaning onto random noise generated by the model.
Do I agree? Yes. Reasoning: LLMs operate on statistical probability and vector similarity. They often confuse topical relatedness (these words appear in similar contexts) with causal or logical connection. A connection between "Us/Them" and "fictions" might make sense in a vector space, but to a human reader expecting a philosophical argument, it feels disjointed. Without the reasoning for the link being rigorous, the user has to do the heavy lifting to invent the connection, making the tool less of a "guide" and more of a "random prompt generator."
2. Outsourcing Critical Thought Criticism: Users eloisius and DrewADesign argue that the project defeats the purpose of reading.[1] They contend that "the thinking is the point," and having a machine find connections robs the reader of the synthesis process that leads to understanding.
Do I agree? Partially. Reasoning: If the goal is learning, they are correct; you cannot learn by having a machine digest information for you. However, if the goal is discovery or research, this criticism is too harsh. Researchers have always used indices, concordances, and bibliographies to find connections they didn't know existed. If this tool is treated as a "super-index" rather than a "replacement reader," it has validity. The danger lies in mistaking the map (the AI's graph) for the territory (the actual knowledge).
3. Hallucinations and Conceptual Errors Criticism: User 8organicbits pointed out a weird label ("Thanos committing fraud" in a section about "useful lies") and questioned the logic of calling a fraud "useful" if the founder is in prison.
Do I agree? Yes. Reasoning: (Note: User Terretta clarified the commenter likely confused the comic villain Thanos with the company Theranos, which was in the text). However, the criticism about the label "useful lies" holds up. The LLM likely grouped "Theranos" under "useful lies" because the deception functioned for a time, but it lacks the nuance to understand that "fraud" and "useful tool" are categorically different to a human moral compass. This highlights the "alien" nature of LLM categorization—it organizes by semantic weight, not human logic.
4. "LLM Slop" and Fatigue Criticism: User typon and others noted the descriptions have a "distinct LLM voice" and dismissed it as "slop."[1] User Aurornis mentioned recognizing the writing style immediately.
Do I agree? Yes. Reasoning: By 2026 (the context of this thread), users are highly attuned to "AI-ese"—the perfectly grammatical but hollow, hedging, and overly enthusiastic tone of standard model outputs. This "slop" creates a trust deficit. When a human reads a connection written by a human, they assume intent. When they read one written by an LLM, they assume hallucination until proven otherwise. This high barrier to trust makes the project harder to enjoy.
Conclusion I agree with the consensus of the thread: Cool tech demo, shallow utility. The project is a great example of what LLMs can do (processing vast amounts of text to find patterns), but it inadvertently highlights what they cannot do (understand the deep, human significance of those patterns). It effectively automates the "what" but misses the "so what?"
Perhaps you might instead provide your own TL;DR after reading it yourself?
An LLM is a transformer. It transforms a prompt into a result.
Or a human idea into a concrete Java implementation.
Currently I'm exploring what unexpected or curious transformations LLMs are capable of but haven't found much yet.
At least I myself was surprised that an LLM can transform a description of something into an image by way of transforming it into an SVG.
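The last hop from SVG to an actual image is a single library call; here with cairosvg, the SVG literal standing in for what the model returned:

    # Rasterize an (LLM-generated) SVG description into a PNG.
    import cairosvg  # pip install cairosvg

    svg = """<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">
      <circle cx="50" cy="50" r="40" fill="orange"/>
    </svg>"""  # stand-in for model output

    cairosvg.svg2png(bytestring=svg.encode("utf-8"), write_to="out.png")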
What makes exploration valuable is the cycle: act, observe, recognize whether you're closer to what you wanted, then refine. Without that recognition ("closer" or "drifting"), you're exploring blind.
Context is what lets the loop close. You need enough of it to judge the outcome. I think the real shift isn't generators → agents. It's one-shot output → iterative refinement with judgment in the loop.
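In code form, the loop I mean looks something like this; the score() function is the judgment, and supplying it is the hard part:

    # Iterative refinement: act, observe, judge, refine. Stop once a
    # revision no longer gets closer to what you wanted.
    def refine(generate, score, revise, max_steps=10):
        draft = generate()
        best = score(draft)
        for _ in range(max_steps):
            candidate = revise(draft)
            s = score(candidate)
            if s <= best:      # drifting, not closing in
                break
            draft, best = candidate, s
        return draft

Without score(), the loop is exploring blind; with it, each pass either closes the gap or tells you to stop.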