I've been working on a project related to a sensemaking tool called Pol.is [1], reprojecting its wiki survey data with these new algorithms instead of PCA, and it's amazing what new insight they uncover! (Rough sketch of the idea below.)
https://patcon.github.io/polislike-opinion-map-painting/
Painted groups: https://t.co/734qNlMdeh
(Sorry, only really works on desktop)
[1]: https://www.technologyreview.com/2025/04/15/1115125/a-small-...
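For the curious, the core move is tiny. A minimal sketch, assuming a Pol.is-style vote matrix (participants x statements, values in {-1, 0, 1}) and UMAP standing in for whichever non-linear projection you prefer; the data and names here are placeholders, not the actual project code:

    # Sketch only: placeholder data, UMAP as one example of a non-PCA projection.
    import numpy as np
    from sklearn.decomposition import PCA
    import umap  # pip install umap-learn

    rng = np.random.default_rng(0)
    votes = rng.choice([-1, 0, 1], size=(500, 60))  # participants x statements

    xy_pca = PCA(n_components=2).fit_transform(votes)                         # the classic view
    xy_alt = umap.UMAP(n_components=2, metric="cosine").fit_transform(votes)  # alternative projection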
This ain't your parents' "factor analysis".
The traditional NLP techniques of stripping suffixes and identifying parts of speech may actually harm embedding quality rather than improve it, since they remove relevant contextual data from the global embedding.
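To make that concrete, here's a toy comparison (a sketch, not the author's pipeline; the model choice and the token are just examples) showing that a word and its suffix-stripped stem land in different places in embedding space:

    # Toy illustration: the stripped stem is a different point in embedding space,
    # i.e. the suffix was carrying information the model could have used.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence/word embedder works
    raw, stripped = "okeeodaiin", "okeeod"           # hypothetical EVA token before/after stripping
    emb = model.encode([raw, stripped])
    print(util.cos_sim(emb[0], emb[1]))              # below 1.0: stripping moved the point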
Appreciate you calling that out — that’s a great push toward iteration.
I recommend ollama to run the arctic-embed-v2 model; it's also multilingual, and you can use --quantize when loading the modelfile to get it even smaller.
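If you go that route, the embedding call itself is a one-liner against ollama's local API. A sketch; the exact model tag depends on what you pulled or created:

    # Minimal sketch of hitting a local ollama server for embeddings.
    # The model tag is whatever you named it when creating/pulling it.
    import requests

    resp = requests.post(
        "http://localhost:11434/api/embeddings",
        json={"model": "arctic-embed-v2", "prompt": "okeeodaiin okaiin daiin"},
    )
    vector = resp.json()["embedding"]
    print(len(vector))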
Does it make sense to check the process with a control group?
E.g. if we ask a human to write something that resembles a language but isn’t, then conduct this process (remove suffixes, attempt grouping, etc), are we likely to get similar results?
I wanted to do an analysis of what letters occur just before/after a line break to see if there is a difference from the rest of the text, but couldn't find a transcribed version.
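For what it's worth, machine-readable EVA transliterations do exist (e.g. via voynich.nu), and the check itself is a few lines. A sketch, assuming a plain-text file (hypothetical filename) with one manuscript line per text line and "." as the word separator:

    # Count which EVA characters appear at line ends vs. line starts,
    # to compare against their overall frequency in the text.
    from collections import Counter

    ends, starts, overall = Counter(), Counter(), Counter()
    with open("voynich_eva.txt", encoding="utf-8") as f:   # hypothetical filename
        for line in f:
            line = line.strip().replace(".", "")           # drop word separators
            if not line:
                continue
            ends[line[-1]] += 1
            starts[line[0]] += 1
            overall.update(line)

    print("line-final:", ends.most_common(10))
    print("line-initial:", starts.most_common(10))
    print("overall:", overall.most_common(10))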
My completely amateur take is that it's an elaborate piece of art or hoax.
Reference mapping each cluster to all the others would be a nice way to indicate that there's no variability left in your analysis
And yes to the cross-cluster reference idea — I didn’t build a similarity matrix between clusters, but now that you’ve said it, it feels like an obvious next step to test how much signal is really being captured.
Might spin those up as a follow-up. Appreciate the thoughtful nudge.
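If anyone wants to try that, the matrix itself is cheap to compute once you have per-word embeddings and cluster assignments. A sketch; the variable names are made up:

    # Sketch: cosine similarity between cluster centroids, given word embeddings
    # (n_words x dim) and an integer cluster label per word. Names are hypothetical.
    import numpy as np
    from sklearn.metrics.pairwise import cosine_similarity

    def cluster_similarity(embeddings: np.ndarray, labels: np.ndarray) -> np.ndarray:
        centroids = np.vstack([
            embeddings[labels == c].mean(axis=0) for c in np.unique(labels)
        ])
        return cosine_similarity(centroids)  # k x k; off-diagonal values near 1.0 = little real separation

    # usage: sim = cluster_similarity(word_vecs, cluster_ids)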
(Before I get yelled at, this isn't prescriptive, it's a personal preference.)
I'd add that just because you can achieve separability from a method, the resulting visualization may not be super informative. The distance between clusters that appear in t-SNE projected space often have nothing to do with their distance in latent space, for example. So while you get nice separate clusters, it comes at the cost of the projected space greatly distorting/hiding the relationship between points across clusters.
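One cheap sanity check for this, if anyone wants it: compare pairwise distances before and after the projection. A sketch with placeholder data; a low correlation means the 2-D picture says little about between-cluster geometry:

    # Sketch: how faithfully does the 2-D t-SNE layout preserve pairwise distances?
    import numpy as np
    from sklearn.manifold import TSNE
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    X = np.random.default_rng(0).normal(size=(300, 50))            # placeholder embeddings
    X2 = TSNE(n_components=2, init="pca", random_state=0).fit_transform(X)

    rho, _ = spearmanr(pdist(X), pdist(X2))
    print(f"Spearman correlation of pairwise distances: {rho:.2f}")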
I'm not familiar with SBERT, or with modern statistical NLP in general, but SBERT works on sentences, and there are no obvious sentence delimiters in the Voynich Manuscript (only word and paragraph delimiters). One concern I have is "Strips common suffixes from Voynich words". Words in the Voynich Manuscript appear to be prefix + suffix, so as prefixes are quite short, you've lost roughly half the information before commencing your analysis.
You might want to verify that your method works for meaningful text in a natural language, and also for meaningless gibberish (encrypted text is somewhere in between, with simpler encryption methods closer to natural language and more complex ones to meaningless gibberish). Gordon Rugg, Torsten Timm, and I have produced text which closely resembles the Voynich Manuscript by different methods. Mine is here: https://fmjlang.co.uk/voynich/generated-voynich-manuscript.h... and the equivalent EVA is here: https://fmjlang.co.uk/voynich/generated-voynich-manuscript.t...
I didn’t re-map anything back to glyphs in this project — everything’s built off those EVA transliterations as a starting point. So if "okeeodair" exists in the dataset, that’s because someone much smarter than me saw a sequence of glyphs and agreed to call it that.
The author made an assumption that Voynichese is a Germanic language, and it looks like he was able to make some progress with it.
I’ve also come across accounts that it might be an Uralic or Finno-Ugric language. I think your approach is great, and I wonder if tweaking it for specific language families could go even further.
It's not a mental issue, it's just a rare thing that happens. Voynich fits the bill for the work of a naive artist.
It also applies to a range of natural phenomena, e.g. lunar craters and earthquakes: https://www.cs.cornell.edu/courses/cs6241/2019sp/readings/Ne...
So the fact that word frequencies in the Voynich Manuscript follow Zipf's law doesn't prove it's written in a natural language.
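It also makes the check itself easy to run on any token list: fit the rank-frequency curve on a log-log scale and look at the slope, which Zipf's law predicts to be roughly -1. A sketch; `words` is whatever tokenisation you trust:

    # Sketch: estimate the Zipf exponent from a list of word tokens.
    import numpy as np
    from collections import Counter

    def zipf_slope(words):
        freqs = np.array(sorted(Counter(words).values(), reverse=True), dtype=float)
        ranks = np.arange(1, len(freqs) + 1)
        slope, _ = np.polyfit(np.log(ranks), np.log(freqs), 1)
        return slope  # around -1 for Zipf-like data, natural language or not

    # usage: print(zipf_slope(open("voynich_words.txt").read().split()))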
Not a recent hoax/scam, but an ancient one.
It's not like there weren't a ton of fake documents in the Middle Ages and Renaissance, from the Donation of Constantine to Prester John's letter.
Whoever made the document was sincerely making up something that doesn't exist; they had no intention to mislead. You wouldn't call a D&D campaign a hoax just because it features nonexistent things.
I doubt it's a rulebook cause it's not a real language.
If it's a prop, it would be extremely expensive.
Just to get the parchment you'd have to slaughter a herd of ovines, then you'd have to process it, then you'd have to pay one or more skilled professionals for months of work to draw and write.
So I think the profit motive is more likely, and given we know of a ton of scams like this from this period, it seems the most plausible.
But I'll be happy to be proven wrong if someone finds more info in the future.
Edward Kelly was born over a hundred years later, so him "being at the right time" seems to be a bit of a stretch.
Which is worse actually. Kelly may have semi-erased an existing valuable manuscript.
[0] https://manuscriptroadtrip.wordpress.com/2024/09/08/multispe...
Also, there might be some characters that are in there just to confuse. For example, that bizarre capital-"P"-like thing with multiple variations sometimes seems to appear far too often to represent real language, so it might just be an obfuscator that's removed prior to decryption. There may be other characters that are abnormally "frequent" and are maybe also unused dummy characters. But the "too many Ps" problem is also consistent with pure fiction, I realize.
Unless the author had written dozens of books exactly like that before which simply didn't survive, of course.
I don't think it's a very novel idea, but I wonder if there's been any analysis of patterns like that. I haven't seen mentions of page-to-page consistency anywhere.
A lot of work's been done here. There are believed to have been 2 scribes (see Prescott Currier), although Lisa Fagin Davis posits 5. Here's a discussion of an experiment working off of Fagin Davis' position: https://www.voynich.ninja/thread-3783.html
I'd argue that these are just the camps that non-traditional, amateur analysis efforts fall into. I've only briefly skimmed Voynich work, but my impression is that, traditionally, more academic analyses rely on a combination of linguistic and cryptological analysis. This does happen to be informed by some statistical analysis, but goes way beyond that.
For example, as I recall the strongest argument that Voynichese probably isn't just an alternative alphabet for a well-known language relies on comparing Voynichese to the general patterns for how writing systems map symbols to sounds. That permits the development of more specific hypotheses about how it could possibly function, including how likely it is to be an alphabet or abjad, and, hypotheses about which characters could plausibly represent more than one sound, possible digraphs, etc. All of that work casts severe doubt on the likelihood of it representing a language from the area because it just can't plausibly represent a language with the kinds of phonological inventories we see in the language families that existed in that place and time.
There's also been some pretty interesting work on identifying individual scribes based on a confluence of factors including, but not limited to, analysis of the text itself. Some of the inferred scribes exclusively wrote in the A language (oh yeah, Voynichese seems to contain two distinct "languages"), some exclusively wrote in the B language, I think they've even hypothesized that there's one who actually used both languages.
There isn't a lot of popular awareness of this work because it's not terribly sexy to anyone but a linguistics nerd. But I'd guess that any attempt to poke at the Voynich manuscript that isn't informed by it is operating at a severe disadvantage. You want to be standing on the shoulders of the tallest giants, not the ones with the best social media presence.
That second part wasn’t super important though — this was more about learning and experimenting than trying to break new ground. Really appreciate the kind words, and hopefully it sparks someone to take it even further.
Appreciate the nudge — always fascinating to see where people take this kind of thinking.
On the other hand, it's a bit wild to build a whole city next to volcanos that are definitely going to wake up in less than a few centuries, to begin with.
https://arstechnica.com/science/2024/09/new-multispectral-an...
but imagine if it was just a (wealthy) child's coloring book or practice book for learning to write lol
Even if it was "just" an (extraordinarily wealthy and precocious) child with a fondness for plants, cosmology, and female bodies carefully inscribing nonsense by repeatedly doodling the same few characters in blocks that look like the illuminated manuscripts this child would also need access to, that's still impressive and interesting.
Mapping words 1:1 is not going to lead you anywhere (especially for a text that has remained undecoded for such a long time).
It kiiiinda works for very close languages (think Dutch<>German or French<>Spanish) and even then.
The challenge (as I understand it) is that the vocabulary size is pretty massive — thousands of unique words — and the structure might not be 1:1 with how real language maps. Like, is a “word” in Voynich really a word? Or is it a chunk, or a stem with affixes, or something else entirely? That makes brute-forcing a direct mapping tricky.
That said… using cluster IDs instead of individual words (tokens) and scoring the outputs with something like a language model seems like a pretty compelling idea. I hadn't thought of doing it that way. Definitely some room there for optimization or even evolutionary techniques. If nothing else, it could tell us something about how “language-like” the structure really is.
Might be worth exploring — thanks for tossing that out, hopefully someone with more awareness or knowledge in the space sees it!
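To make the scoring idea concrete, here's just a sketch of the shape of it, with a hypothetical cluster-to-word mapping and GPT-2 standing in for "something like a language model":

    # Sketch: score a candidate cluster-ID -> word mapping by language-model perplexity.
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tok = GPT2TokenizerFast.from_pretrained("gpt2")
    lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

    def perplexity(text: str) -> float:
        enc = tok(text, return_tensors="pt")
        with torch.no_grad():
            loss = lm(**enc, labels=enc["input_ids"]).loss
        return torch.exp(loss).item()

    cluster_sequence = [3, 8, 3, 5, 8, 3]              # hypothetical page as cluster IDs
    candidate_map = {3: "water", 8: "the", 5: "root"}  # one candidate mapping to score
    text = " ".join(candidate_map[c] for c in cluster_sequence)
    print(perplexity(text))  # lower = more "language-like" under this mapping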
Maybe a version of scripture that had been "rejected" by some king, and was illegal to reproduce? Take the best radiocarbon dating, figure out who was king back then and whether they "sanctioned" any biblical translations, then go to the version of the Bible before that translation; that would perhaps be what was illegal and needed to be encrypted. That's just one plausible story. Who knows, we might find out the phrase "young girl" was simplified to "virgin", and that would potentially be a big secret.
idk
Sadly, the radiocarbon dating disproved two of my far-out theories, which were: 1) the book survived from some earlier 'iteration' of life on the planet, where all plants were simply different, or 2) all planets form the same 'kind' of carbon-based life, and this book was sent/delivered to us by another planet.
Sadly, it's probably just someone's form of "art", and not even "real".
Peculiarities in the Voynich also suggest that one-to-one word mappings are very unlikely to result in well-described languages. For instance, there are cases of repeated word sequences you don't really see in regular text, there's a lack of the extremely common words you would expect to be necessary for a word-based structured grammar, there are signs of at least two 'languages', character distributions within words don't match any known language, etc.
If there still is a real unencoded language in here, it's likely to be entirely different from any known language.
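Some of those peculiarities are easy to quantify yourself, e.g. the repeated-word one. A sketch; `words` is any tokenised version of the text:

    # Sketch: how often does the exact same word appear twice (or more) in a row?
    def repeat_rate(words):
        repeats = sum(1 for a, b in zip(words, words[1:]) if a == b)
        return repeats / max(len(words) - 1, 1)

    # Natural-language prose typically scores near zero here; the Voynich text is
    # reported to contain noticeably more immediate repetitions.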
Clustering by sentence or page would be interesting too — I haven't gone that far yet, but it’d be fascinating to see if there’s consistency across visual/media sections. Appreciate the insight!
There's also a very long thread about it here - https://www.voynich.ninja/thread-2318.html - that seems to go from "that's really interesting, let's find out more about it" to "eh, seems about the same as other revelatory announcements about Romance, Hebrew etc"
That said, I just watched a video about the practice of "speaking in tongues" that some Christian congregations engage in. From what I understand, it's a practice where believers speak in gibberish during certain rituals.
Studying these "speeches", researchers found patterns and rhythms that the speakers followed without even being aware they exist.
I'm not saying that's what's happening here, but if this was a hoax (or a prank), maybe these patterns emerged just because they were inscribed by a human brain? At best, these patterns can be thought of as shadows of the patterns found in the writer's mother tongue?
People often assert this, but I'm unsure of any evidence. If I wrote a manuscript in a pretend language, I would expect it to end up with language-like patterns, some automatically and some intentionally.
Humans aren't random number generators, and they aren't stupid. Therefore, the implicit claim that a human could not create a manuscript containing gibberish that exhibits many language-like patterns seems unlikely to be true.
So we have two options:
1. This is either a real language or an encoded real language that we've never seen before and can't decrypt, even after many years of attempts
2. Or it is gibberish that exhibits features of a real language
I can't help but feel that option 2 is now the more likely choice.
It's harder to generate good gibberish than it appears at first.
There's certainly a system to the madness, but it exhibits rather different statistical properties from "proper" languages. Look at section 2.4: https://www.voynich.nu/a2_char.html At the moment, any apparently linguistic patterns are happenstance; the cypher fundamentally obscures its actual distribution (if a "proper" language.)
The age of the document can be estimated through various methods that all point to it being ~500 years old. The vellum parchment, the ink, and the pictures (particularly clothes and architecture) are perfectly congruent with that.
The weirdest part is that the script has a very low number of different signs, fewer than any known language. That's about the only clue that could point to a hoax afaik.
As far as I know it's just gibberish since it doesn't follow the statistics of the known languages or cyphers of the time.
I have no background in NLP or linguistics, but I do have a question about this:
> I stripped a set of recurring suffix-like endings from each word — things like aiin, dy, chy, and similar variants
This seems to imply stripping the right-hand edges of words, with the assumption that the text was written left to right? Or did you try both possibilities?
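I'm picturing that step as something roughly like this (a sketch; this suffix list is illustrative, not the author's actual set):

    # Sketch of right-edge suffix stripping on EVA word tokens.
    import re

    SUFFIXES = ["aiin", "ain", "dy", "chy"]
    SUFFIX_RE = re.compile("(?:" + "|".join(SUFFIXES) + r")$")

    def strip_suffix(word: str) -> str:
        return SUFFIX_RE.sub("", word)

    print(strip_suffix("okeeodaiin"))  # -> "okeeod"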
Once again, nice work.
https://www.voynich.ninja/thread-4327-post-60796.html#pid607... is the main forum discussing precisely this. I quite liked this explanation of the apparent structure: https://www.voynich.ninja/thread-4286.html
> RU SSUK UKIA UK SSIAKRAINE IARAIN RA AINE RUK UKRU KRIA UKUSSIA IARUK RUSSUK RUSSAINE RUAINERU RUKIA
That is, there may be 2 "word types" with different statistical properties (as Feaster's video above describes), perhaps e.g. two different cyphers used "randomly" next to each other. Figuring out how to imitate the MS's statistical properties would let us determine the cypher system and make steps towards determining its language etc., so most credible work has gone in this direction over the last 10+ years.
This site is a great introduction/deep dive: https://www.voynich.nu/
<quote>
Key Findings
* Cluster 8 exhibits high frequency, low diversity, and frequent line-starts — likely a function word group
* Cluster 3 has high diversity and flexible positioning — likely a root content class
* Transition matrix shows strong internal structure, far from random
* Cluster usage and POS patterns differ by manuscript section (e.g., Biological vs Botanical)
Hypothesis
The manuscript encodes a structured constructed or mnemonic language using syllabic padding and positional repetition. It exhibits syntax, function/content separation, and section-aware linguistic shifts — even in the absence of direct translation.
</quote>
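For anyone who wants to poke at the transition-matrix claim above, it's only a few lines once each word carries a cluster label. A sketch; the names are made up, and it assumes you have the cluster IDs in reading order:

    # Sketch: row-normalised transition matrix over a sequence of cluster labels.
    import numpy as np

    def transition_matrix(labels, n_clusters):
        counts = np.zeros((n_clusters, n_clusters))
        for a, b in zip(labels, labels[1:]):
            counts[a, b] += 1
        row_sums = counts.sum(axis=1, keepdims=True)
        return counts / np.where(row_sums == 0, 1, row_sums)

    # usage: P = transition_matrix(cluster_ids_in_reading_order, n_clusters=10)
    # A near-uniform P would look "random"; strong structure shows up as peaked rows.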
I don't see how it could be random, regardless of whether it is an actual language. Humans are famously terrible at generating randomness.
I wouldn't assume that the writer made decisions based on these goals, but rather that the writer attempted to create a simulacrum of a real language. However, even if they did not, I would expect an attempt at generating a "random" language to ultimately mirror many of the properties of the person's native language.
The arguments that this book is written in a real language rest on the assumption that a human being making up gibberish would not produce something that exhibits many of the properties of a real language; however, I don't see anyone offering any evidence to support this claim.
My main goal was to learn and see if the manuscript behaved like a real language, not necessarily to translate it. Appreciate the link — I’ll check it out (once I get my German up to speed!).
So, sorry but you are not busting any bubbles today.
https://www.researchgate.net/publication/368991190_The_Voyni...
For more info, see https://www.voynich.ninja/thread-3940-post-53738.html#pid537...
Yet 10 years later I still hear that the consensus is that there's no agreed-upon translation. So, what, all this Mandaic-gypsies business was nothing? And all the coincidences were… coincidences?
So far none of these ideas have been shown to be applicable to the full text, though. What you would expect with a real translation is that the further you get, the easier it becomes to translate more. But with the attempts so far, we keep seeing that it becomes more and more difficult to pretend that other pages are just as translatable using the same scheme you came up with initially. It eventually just dies a quiet death.