Compose a synthetic English phrase that would contain 160 bits of recoverable information

I have 160 bits of random data.

Just for fun, I want to generate a pseudo-English phrase to "store" this information in, and I want to be able to recover the information from the phrase later.

Note: this is not a security question. I don't care whether someone else can recover the information, or even detect that it is there at all.

Criteria for better phrases, from most important to the least:

  • Short
  • Unique
  • Natural-looking

The current approach, suggested here:

Take three lists of 1024 nouns, verbs and adjectives each (picking the most popular ones). Generate a phrase by the following pattern, reading 10 bits for each word (1024 = 2^10, so 16 words cover 160 bits):

Noun verb adjective verb,
Noun verb adjective verb,
Noun verb adjective verb,
Noun verb adjective verb.

Now, this seems to be a good approach, but the phrase is a bit too long and a bit too dull.
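For reference, here is a minimal Python sketch of this baseline scheme. The word lists are placeholders; real lists of the 1024 most popular nouns, verbs and adjectives are assumed, and punctuation is left out for simplicity:

    import secrets

    # Placeholder word lists; the real scheme assumes the 1024 most popular
    # nouns, verbs and adjectives (2**10 entries each, i.e. 10 bits per word).
    nouns      = [f"noun{i}" for i in range(1024)]
    verbs      = [f"verb{i}" for i in range(1024)]
    adjectives = [f"adjective{i}" for i in range(1024)]

    # Noun verb adjective verb, repeated four times: 16 words * 10 bits = 160 bits.
    PATTERN = [nouns, verbs, adjectives, verbs] * 4

    def encode(data: int) -> str:
        words = []
        for wordlist in PATTERN:
            words.append(wordlist[data & 0x3FF])   # consume the lowest 10 bits
            data >>= 10
        return " ".join(words)

    def decode(phrase: str) -> int:
        data = 0
        for wordlist, word in reversed(list(zip(PATTERN, phrase.split()))):
            data = (data << 10) | wordlist.index(word)
        return data

    payload = secrets.randbits(160)
    assert decode(encode(payload)) == payload

Each word carries exactly 10 bits, so any 160-bit value maps to 16 words and back.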

I have found a corpus of words here (Part of Speech Database).

After some ad-hoc filtering, I calculated that this corpus contains approximately:

  • 50690 usable adjectives
  • 123585 nouns
  • 15301 verbs
  • 13010 adverbs (not included in pattern, but mentioned in answers)

This allows me to use up to

  • 16 bits per noun (actually 16.9, but I can't figure out how to use fractional bits)
  • 15 bits per adjective
  • 13 bits per verb
  • 13 bits per adverb

For the noun-verb-adjective-verb pattern this gives 16 + 13 + 15 + 13 = 57 bits per "sentence" in the phrase. This means that, if I use all the words I can get from this corpus, I can generate three sentences instead of four (160 / 57 ≈ 2.8).

Noun verb adjective verb,
Noun verb adjective verb,
Noun verb adjective verb.
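A quick check of that arithmetic in Python, using the counts listed above:

    import math

    counts = {"noun": 123585, "verb": 15301, "adjective": 50690, "adverb": 13010}
    bits = {pos: int(math.log2(n)) for pos, n in counts.items()}
    # bits == {'noun': 16, 'verb': 13, 'adjective': 15, 'adverb': 13}

    per_sentence = bits["noun"] + bits["verb"] + bits["adjective"] + bits["verb"]
    print(per_sentence)                   # 57 bits per noun-verb-adjective-verb sentence
    print(math.ceil(160 / per_sentence))  # 3 sentences are enough for 160 bits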

Still a bit too long and dull.

Any hints on how I can improve it?

Here is what I see that I can try:

  • Try to compress my data somehow before encoding. But since the data is completely random, only some phrases would be shorter (and, I guess, not by much).

  • Improve the phrase pattern so it looks better.

  • Use several patterns, using the first word of the phrase to somehow indicate to the decoder which pattern was used (for example, via its last letter or even its length). Pick the pattern according to the first bytes of the data. (A rough sketch of this idea follows the list below.)

...I'm not good enough at English to come up with better phrase patterns. Any suggestions?

  • Use more linguistics in the pattern: different tenses, etc.

...I guess I would need a much better word corpus than I have now for that. Any hints on where I can get a suitable one?
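A rough sketch of the several-patterns idea from the list above; the patterns and the two-bit selector here are made up purely for illustration:

    # Select one of several phrase patterns from the leading bits of the data.
    # N/V/A/D stand for noun, verb, adjective and adverb slots; the patterns
    # themselves are invented just to show the mechanics.
    PATTERNS = [
        "N V A V, N V A V, N V A V",    # selector bits 00
        "D, the A N V the A N",         # selector bits 01
        "The A A N V D",                # selector bits 10
        "N and N V the A A N",          # selector bits 11
    ]

    def pick_pattern(data: int, total_bits: int = 160) -> tuple:
        selector = data >> (total_bits - 2)           # top 2 bits choose a pattern
        rest = data & ((1 << (total_bits - 2)) - 1)   # remaining 158 bits fill the slots
        return PATTERNS[selector], rest

    # The decoder has to recognise which pattern was used from the phrase alone
    # (e.g. by the first word's length or last letter, as suggested above) and
    # then recover both the selector bits and the slot bits.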

Asked Jan 15 '11 by Alexander Gladysh

1 Answer

I would consider adding adverbs to your list. Here is a pattern I came up with:

<Adverb>, the
    <adverb> <adjective>, <adverb> <adjective> <noun> and the
    <adverb> <adjective>, <adverb> <adjective> <noun>
<verb> <adverb> over the <adverb> <adjective> <noun>.

This can encode 181 bits of data: the pattern has 7 adverb, 5 adjective, 3 noun and 1 verb slots, and rounding each word's capacity down to a whole number of bits gives 7×10 + 5×12 + 3×13 + 12 = 181. I derived the per-word figures using lists I made a while back from WordNet data (they are probably a bit off because I included compound words):

  • 12650 usable nouns (13.6 bits/noun, rounded down)
  • 5247 usable adjectives (12.3 bits/adjective)
  • 5009 usable verbs (12.2 bits/verb)
  • 1512 usable adverbs (10.5 bits/adverb)

Example sentence: "Soaking, the habitually goofy, socially speculative swatch and the fearlessly cataclysmic, somewhere reciprocal macrocosm foreclose angelically over the unavoidably intermittent comforter."
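For illustration, a minimal sketch of filling this pattern from a 181-bit integer, with placeholder word lists standing in for the WordNet-derived ones; the slot widths are the rounded-down figures from the list above:

    # Slot widths: 13 bits/noun, 12/adjective, 12/verb, 10/adverb (rounded down).
    ADV  = ("adverb",    10, [f"adv{i}"  for i in range(2 ** 10)])
    ADJ  = ("adjective", 12, [f"adj{i}"  for i in range(2 ** 12)])
    NOUN = ("noun",      13, [f"noun{i}" for i in range(2 ** 13)])
    VERB = ("verb",      12, [f"verb{i}" for i in range(2 ** 12)])

    # 7 adverbs + 5 adjectives + 3 nouns + 1 verb = 181 bits in total.
    SLOTS = [ADV, ADV, ADJ, ADV, ADJ, NOUN, ADV, ADJ, ADV, ADJ, NOUN,
             VERB, ADV, ADV, ADJ, NOUN]

    TEMPLATE = ("{}, the {} {}, {} {} {} and the {} {}, {} {} {} "
                "{} {} over the {} {} {}.")

    def encode(data: int) -> str:
        words = []
        for _, width, wordlist in SLOTS:
            words.append(wordlist[data & ((1 << width) - 1)])
            data >>= width
        return TEMPLATE.format(*words)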

Answered Oct 24 '22 by PleaseStand