1 point by durdanovic 6 hours ago | 1 comment
  • durdanovic 6 hours ago
    Author here. I’ve posted two companion preprints that attempt to resolve the Voynich puzzle using rigorous *Bayesian Model Selection* rather than traditional cryptanalysis or linguistics.

    *The Core Argument:* The field has been stalled by the *"Patching Fallacy"*—the habit of salvaging linguistic or cipher hypotheses by adding unconstrained auxiliary parameters (e.g., arbitrary abbreviations, nulls, polyglot switching) whenever the data contradicts the model.

    In *Paper 1* (linked above), I formalize a *"Zero-Patch Standard."* When unconstrained parameters are strictly penalized (via the Occam factor in the marginal likelihood), the standard "Language" and "Cipher" hypotheses become statistically inadmissible. The topology of the data (rigid morphology, ~58% hapax legomena, sectional disjointness) strongly favors a *Structured Reference System* ($H_{ref}$) as the information-theoretically minimal model.
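    To make the "Zero-Patch" penalty concrete, here is a minimal sketch using the BIC approximation to the log marginal likelihood (the numbers are invented for illustration, not from the papers): a patched hypothesis must buy enough extra fit to pay for every auxiliary parameter it adds, or it loses the comparison.

    ```python
    import math

    def bic(log_likelihood, k_params, n_obs):
        """BIC approximates -2 * log marginal likelihood; each free
        parameter costs log(n) -- a crude stand-in for the Occam factor."""
        return -2 * log_likelihood + k_params * math.log(n_obs)

    n = 10_000  # hypothetical token count

    # H_ref: fewer parameters, slightly worse raw fit.
    base = bic(log_likelihood=-21_000, k_params=50, n_obs=n)

    # A "patched" cipher model: better raw fit, but 250 extra
    # unconstrained parameters (abbreviations, nulls, switches).
    patched = bic(log_likelihood=-20_900, k_params=300, n_obs=n)

    print(base < patched)  # True: the fit gain does not pay the Occam cost
    ```

    The point of the standard is that the comparison is made at this level, so ad-hoc patches cannot be added for free.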

    *Paper 2 (The Mechanism):* [https://www.preprints.org/manuscript/202602.0301](https://www.preprints.org/manuscript/202602.0301)

    This follow-up explains why the book looks the way it does. I re-interpret the famous "Crust-Mantle-Core" morphology (Stolfi) not as grammar, but as a *Cognitive Optimization for Manual Retrieval*:

    * *Prefixes:* Classifiers/index markers (respecting working-memory limits).
    * *Roots:* Visual "combination locks" for parallel search (minimizing lookup latency compared to serial phonetic reading).
    * *Sectional Shifts:* Namespace partitioning to prevent key collisions in a finite symbol system.
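    The retrieval scheme above can be sketched as a keyed lookup: a token decomposes into a classifier prefix plus a root, and the section acts as a namespace so identical keys in different sections never collide. The prefix set and record contents below are purely illustrative, not the actual VMS inventory.

    ```python
    # Hypothetical H_ref sketch: tokens are (section, prefix, root) keys
    # into a reference table, so "reading" is key matching, not phonetics.
    records = {
        ("herbal", "qo", "kedy"): "entry 1",
        ("herbal", "ch", "edy"):  "entry 2",
        ("stars",  "qo", "kedy"): "entry 3",  # same key, different namespace
    }

    def lookup(section, token, prefixes=("qo", "ch", "sh")):
        """Split a token into classifier prefix + root, then match the key.
        The prefix list here is an invented stand-in for the classifiers."""
        for p in prefixes:
            if token.startswith(p):
                return records.get((section, p, token[len(p):]))
        return None

    print(lookup("herbal", "qokedy"))  # -> entry 1
    print(lookup("stars",  "qokedy"))  # -> entry 3 (no collision)
    ```

    Sectional namespacing is what lets a small symbol inventory address many more records than it could in a single flat keyspace.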

    Essentially, the VMS is likely a *Paper Database*, not a book to be read.

    I’m happy to answer questions about the entropy analysis, the model selection framework, or the "Zero-Patch" standard.