26 + 26 + 10 < VM

We see that many words in the VM comply with a comparatively simple and straightforward grammar, but we also see that many words break those rules. Hence, the underlying rules are either more complex than we think, or not all words must obey them.

To paraphrase this in terms of the Stroke theory: a character set of 26 capital and 26 lowercase letters plus 10 digits apparently was not enough to transcribe the VM plaintext; otherwise we’d see only 62 different “syllables” making up the VM.

Now, idly browsing the web, I came across a German astronomical manuscript from around 1500. If you take a look at f28r, you’ll notice that while the top half of the page consists almost entirely of the Latin alphabet, the bottom section is riddled with astrological symbols.

If the same were true of the VM itself, these “special characters” would require special enciphering: Some graphical elements in those symbols aren’t present in the Latin character set, which would give rise to the use of rare ciphertext letters, and their combinations would differ from the grammar of the body of the text. Hence, we’d see occurrences of unusual letters and breaches of the grammar rules.

Under this assumption, the hypothesis would be that the VM word “grammar” is not so much a grammar per se, but rather an artifact of the existence of only a limited set of syllables to begin with.
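As a toy illustration of this bound (the mapping below is invented for the sketch, not the actual Stroke-theory tables): if every plaintext character maps to exactly one ciphertext “syllable”, the number of distinct syllables in the ciphertext can never exceed the size of the character set, 62 in this case.

```python
import string

# Invented one-to-one mapping: each of the 62 plaintext characters
# (26 lowercase + 26 uppercase + 10 digits) gets a made-up syllable.
plaintext_chars = string.ascii_lowercase + string.ascii_uppercase + string.digits
syllables = {ch: f"syl{i:02d}" for i, ch in enumerate(plaintext_chars)}

def encipher(text):
    """Encipher character by character; any symbol outside the 62-strong
    set would force a new, rare syllable into the inventory."""
    return [syllables[ch] for ch in text if ch in syllables]

sample = "Gallia7Est3Omnis"
distinct = set(encipher(sample))
print(len(plaintext_chars), len(distinct))  # 62 12
```

Any plaintext drawn from this set alone can thus surface at most 62 different syllables; astronomical symbols outside the set would have to break that ceiling.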


11 thoughts on “26 + 26 + 10 < VM”

  1. No, it’s not “degenerating”, it was like that from the start. :-)

    (At least, it’s verbose in the sense of “one plaintext letter maps to n ciphertext letters”.*) But there are no nulls or the like introduced.

    You make it sound like a bad thing, though. But how else would you account for the 10-bits-per-word entropy of the VM and its repetitiveness? 10 bits would nicely match two letters from a 32-letter alphabet…

    *) Assuming that we have a decent grip on the actual VM character set…
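The back-of-the-envelope arithmetic in this comment checks out (assuming equiprobable letters, which is of course an idealization): a 32-letter alphabet carries log2(32) = 5 bits per letter, so two letters per word give 10 bits.

```python
import math

# 32 equiprobable letters carry log2(32) = 5 bits each;
# two such letters per word add up to 10 bits per word,
# matching the entropy figure cited for the VM.
bits_per_letter = math.log2(32)
bits_per_word = 2 * bits_per_letter
print(bits_per_letter, bits_per_word)  # 5.0 10.0
```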

  2. The notion that a single ciphertext letter might well encipher a stroke in the plaintext is quite elegant: the problem is that it’s insufficient to account for the structure of Voynichese. However, the more features you add in to move it closer, the closer to a convoluted verbose cipher it gets.

    Of course, I’m not criticizing verbose ciphers (seeing as I’ve been proposing them as a part of the VMs’ cipher system for 7 or 8 years): rather, I do think that your stroke hypothesis has had to jettison all of its initial elegance in order to match what we actually see, and in the process has become a verbose cipher.

    Don’t worry, verbose ciphers are elegant in a different way. :-)

  3. I could see this making sense if we had found a number of symbols in the “astronomical section” roughly matching the number of astronomical symbols in use in the late 15th century. Apart from that weird glyph looking like the Blair Witch logo, I can’t think of any.

  4. Christopher — Unless you could decipher the VM, you wouldn’t know that there are astronomical symbols, because like the regular letters, these symbols would have been broken down into their strokes, and those strokes then encoded in the VM alphabet.

    Only the unusual strokes occurring in those astronomical/astrological/alchemical symbols (like the wavy lines forming the Aquarius symbol) would require unusual ciphertext letters.

    I.e., rare ciphertext letters would map onto rare plaintext strokes.

  5. Nick — come on, give me a chance: The other day you claimed the Stroke theory wasn’t able to reproduce the more complex VM features, now you complain the hypothesis is too complex…

    But actually the algorithm hasn’t evolved beyond the CamelCase concept: “Write down the plaintext in CamelCase — decompose the letters into strokes — transcribe the strokes with Voynichese characters” is still basically all it takes.

    It hasn’t become any more complicated or “convoluted” besides that. The only difference is the assumption that, aside from Latin letters, Arabic numerals and perhaps astrosomething symbols have also been used in the plaintext.*) The logical result of catering for those would be an extended character set in the VM, and the occasional deviation from the naive “start syllable — end syllable” structure of ciphertext words. Just what we observe.

    *) Does this appear far-fetched…?
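The three-step pipeline quoted in this comment can be sketched as follows. The stroke decompositions and the stroke-to-glyph table below are placeholders invented for the sketch, not the actual Stroke-theory assignments:

```python
# Toy pipeline: plaintext -> strokes -> Voynichese-like glyphs.
# Both tables are invented placeholders.
STROKE_TABLE = {
    "a": ["bow", "bar"],
    "e": ["bow", "tick"],
    "s": ["curve"],
    "t": ["stem", "bar"],
    "7": ["bar", "tail"],   # digits contribute strokes of their own
}
GLYPH_TABLE = {"bow": "o", "bar": "l", "tick": "i",
               "stem": "k", "curve": "s", "tail": "y"}

def encipher(camelcase_plaintext):
    """Decompose each letter into strokes, then transcribe each stroke
    as one glyph; one ciphertext word per plaintext character."""
    words = []
    for ch in camelcase_plaintext:
        strokes = STROKE_TABLE.get(ch.lower())
        if strokes:  # characters missing from the toy table are skipped
            words.append("".join(GLYPH_TABLE[s] for s in strokes))
    return ".".join(words)

print(encipher("TeSt7"))  # kl.oi.s.kl.ly
```

Extending the plaintext repertoire (digits, astronomical symbols) only grows the two tables, which is exactly where rare ciphertext letters and rare letter combinations would come from.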

  6. Aha, it was not clear to me that you were applying the stroke theory to these characters as well (I don’t know your research that well, yet: am I to assume that everything you post is with the stroke theory in mind, unless the opposite is explicitly stated?). Then it does indeed make sense to me as well, and it would even be statistically testable, which is always a bonus. It doesn’t seem far-fetched to me at all in comparison to many other theories. In other words, its VBI is below my threshold ;)

  7. “am I to assume that everything you post is with the stroke theory in mind, unless the opposite is explicitly stated?”

    I have to admit, I don’t always clearly draw boundaries here… but I try to tag all articles directly relating to the Stroke theory as such.

    “In other words, its VBI is below my threshold”

    This is encouraging. Thanks!

  8. Another thought on your stroke idea, though… As I understand it, the same VMs characters would always be used to designate the same letter. For instance, say, qoc could mean an “A”: one character each for the right stroke, the bar, and the left stroke. So then, a repetition of “qoc” would always mean “A”.

    My point is: is there a count, or a counting method, which could be used to look at the frequency of various groups and match them to letter frequencies? For English, say, since “E” is common and you would need four VMs characters to represent it, is there a way to look at the counts to find the most common four-character VMs group? And so on, down through the frequencies of the groups representing the next succeeding numbers of needed strokes, for the next most frequently used letters…

    It might be another way to attack it… If I understand correctly, the usual way might be to first look at the needed strokes, then at the number of times each stroke would be expected to appear, based on the total number of times it would be needed across all the letters which use it, and then to look for individual characters which might match this “multiple” count. Going that way begins to add many variables. Perhaps the first method would keep it from getting away from you? Maybe that is how you started… and sorry if you mentioned this. Rich.
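Rich’s counting idea could be prototyped roughly like this (the ciphertext words and the letter ranking are arbitrary stand-ins for the sketch): rank ciphertext groups by frequency and pair them with plaintext letters ranked the same way.

```python
from collections import Counter

# Stand-in ciphertext: rank the word-groups by frequency...
ciphertext_words = ["qoc", "qoc", "dain", "qoc", "dain", "chol"]
group_counts = Counter(ciphertext_words)

# ...and pair them with English letters ranked by their own frequency.
english_by_frequency = ["e", "t", "a", "o", "i", "n"]  # most common first

pairing = list(zip((g for g, _ in group_counts.most_common()),
                   english_by_frequency))
print(pairing)  # [('qoc', 'e'), ('dain', 't'), ('chol', 'a')]
```

This is only the crudest matching hypothesis; as the comment notes, weighting each stroke by how many letters need it adds variables quickly, so starting from whole-group counts keeps the problem tractable.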

  9. Pingback: Smart Force Required « Thoughts about the Voynich Manuscript
