I have 160 bits of random data.
Just for fun, I want to generate a pseudo-English phrase to "store" this information in. I want to be able to recover the information from the phrase.
Note: this is not a security question. I don't care whether someone else can recover the information, or even detect that it is there.
Criteria for better phrases, from most important to least important:
The current approach, suggested here:
Take three lists of 1024 words each: nouns, verbs, and adjectives (picking the most popular ones). Generate a phrase by the following pattern, reading 10 bits for each word (2^10 = 1024):
Noun verb adjective verb, Noun verb adjective verb, Noun verb adjective verb, Noun verb adjective verb.
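A minimal sketch of this scheme, using placeholder word lists (real lists would hold 1024 common words each); 16 slots × 10 bits covers the full 160 bits:

```python
# Placeholder lists -- in practice each holds 1024 (2**10) common English words.
NOUNS = [f"noun{i}" for i in range(1024)]
VERBS = [f"verb{i}" for i in range(1024)]
ADJECTIVES = [f"adj{i}" for i in range(1024)]

# One sentence = noun verb adjective verb; four sentences = 16 words * 10 bits = 160 bits.
PATTERN = [NOUNS, VERBS, ADJECTIVES, VERBS] * 4

def encode(data: bytes) -> str:
    """Turn 160 bits (20 bytes) into a 16-word phrase."""
    n = int.from_bytes(data, "big")
    words = []
    for word_list in reversed(PATTERN):
        words.append(word_list[n & 0x3FF])  # consume 10 bits per word
        n >>= 10
    return " ".join(reversed(words))

def decode(phrase: str) -> bytes:
    """Recover the original 20 bytes from the phrase."""
    n = 0
    for word, word_list in zip(phrase.split(), PATTERN):
        n = (n << 10) | word_list.index(word)
    return n.to_bytes(20, "big")
```

Decoding only needs the word lists in the same order, so the phrase is fully reversible.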
Now, this seems to be a good approach, but the phrase is a bit too long and a bit too dull.
I have found a corpus of words here (Part of Speech Database).
After some ad-hoc filtering, I calculated that this corpus contains, approximately
This allows me to use up to
For the noun-verb-adjective-verb pattern this gives 57 bits per "sentence" in the phrase. This means that, if I use all the words I can get from this corpus, I can generate three sentences instead of four (160 / 57 ≈ 2.8).
Noun verb adjective verb, Noun verb adjective verb, Noun verb adjective verb.
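The per-sentence capacity follows directly from the list sizes: each slot contributes floor(log2(list size)) bits. A sketch with made-up counts (the question's exact numbers aren't shown) that happen to reproduce the 57-bit figure:

```python
from math import floor, log2

# Hypothetical corpus sizes -- substitute your own counts after filtering.
sizes = {"noun": 40000, "verb": 20000, "adjective": 17000}

# The pattern reuses the verb list, so "verb" appears twice.
pattern = ["noun", "verb", "adjective", "verb"]
bits_per_sentence = sum(floor(log2(sizes[p])) for p in pattern)
print(bits_per_sentence)  # 15 + 14 + 14 + 14 = 57 with these made-up sizes
```

Three such sentences give 171 bits, enough to hold 160 with a few bits to spare.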
Still a bit too long and dull.
Any hints on how I can improve it?
What I see that I can try:
Try to compress my data somehow before encoding. But since the data is completely random, only some phrases would come out shorter (and, I guess, not by much).
Improve the phrase pattern so it reads better.
Use several patterns, using the first word in phrase to somehow indicate for future decoding which pattern was used. (For example, use the last letter or even the length of the word.) Pick pattern according to the first bytes of the data.
...My English isn't good enough to come up with better phrase patterns. Any suggestions?
...I guess I would need a much better word corpus than I have now for that. Any hints on where I can get a suitable one?
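Idea 3 can be sketched as follows: partition the first word's list so that a surface property of the word (here, length parity; all names and list contents are hypothetical) tells the decoder which pattern was used, at the cost of one bit of capacity in that slot:

```python
# Hypothetical 1024-word noun list, built so word lengths alternate 5 and 6.
NOUNS = ["n" * (1 + i % 2) + f"{i:04d}" for i in range(1024)]

# Partition by length parity: each bucket holds 512 words (9 bits per slot),
# and the parity itself carries a 1-bit pattern id.
buckets = [[w for w in NOUNS if len(w) % 2 == p] for p in (0, 1)]

def open_word(pattern_id: int, nine_bits: int) -> str:
    """First word of the phrase: its length parity encodes pattern_id."""
    return buckets[pattern_id][nine_bits]

def read_opening(word: str) -> tuple[int, int]:
    """Recover the pattern id and the 9 data bits from the first word."""
    pattern_id = len(word) % 2
    return pattern_id, buckets[pattern_id].index(word)
```

With real word lists the split would be uneven, so each bucket would be truncated to the nearest power of two before use.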
I would consider adding adverbs to your list. Here is a pattern I came up with:
<Adverb>, the
<adverb> <adjective>, <adverb> <adjective> <noun> and the
<adverb> <adjective>, <adverb> <adjective> <noun>
<verb> <adverb> over the <adverb> <adjective> <noun>.
This can encode 181 bits of data. I derived this figure using lists I made a while back from WordNet data (it is probably a bit off because I included compound words):
Example sentence: "Soaking, the habitually goofy, socially speculative swatch and the fearlessly cataclysmic, somewhere reciprocal macrocosm foreclose angelically over the unavoidably intermittent comforter."
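The pattern has 16 slots (7 adverbs, 5 adjectives, 3 nouns, 1 verb), and its capacity is the sum of log2 of each list's size over those slots. A sketch with made-up sizes, since the answer's exact WordNet-derived counts aren't shown here:

```python
from math import log2

# Hypothetical per-part-of-speech list sizes (assumptions, not the answer's counts).
sizes = {"adverb": 1500, "adjective": 5000, "noun": 10000, "verb": 4000}

# Slot tally from the pattern above: 7 adverbs, 5 adjectives, 3 nouns, 1 verb.
slots = ["adverb"] * 7 + ["adjective"] * 5 + ["noun"] * 3 + ["verb"]
capacity = sum(log2(sizes[s]) for s in slots)
print(f"{capacity:.1f} bits")
```

Because adverbs fill 7 of the 16 slots, the adverb list's size dominates the total capacity.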