Error correcting codes for packet loss (UDP)

I have no real idea what to look for, since everything I find under "error correcting codes" deals with cases where you don't know the location of the error. Those codes are much more complicated and inefficient than what I need.

In the following, note that one bit stands for one packet (only a whole packet can be missing, so the bit analogy fits very well).

Are there ECCs that take into account that you already know WHICH k bits are missing, and only provide a way to reconstruct the data stream in those k places? Additionally, the bits added by the ECC should preferably be independent of each other, so that if packet loss occurs inside the ECC portion of the data, the code can still reconstruct some of the original data. (There will not always be k errors; mostly there will be none, so it is important that the ECC is fault-tolerant with respect to its own added bits.)

That is a big difference IMO. For one missing bit it's simple: I can just use one XOR bit. But I am not clever enough to generalize it to more than one missing bit.
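For the one-missing-packet case, here is a minimal sketch (function names are mine): one extra XOR parity packet recovers any single lost packet, because XORing the parity with all the survivors cancels everything except the missing one.

```python
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks."""
    return bytes(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks))

packets = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(packets)            # one extra packet on the wire

# Packet 1 is dropped, but its index is known (e.g. from sequence numbers).
recovered = xor_blocks([packets[0], packets[2], parity])
assert recovered == packets[1]
```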

So again: I have a stream of n bits, and I know that up to k bits are missing (I know exactly which ones, and that they are missing; corruption is not possible). Now I need a code that can reconstruct them with as little overhead added to the data stream as possible. I am dreaming of having n+k bits to correct k random erasures in an n-bit stream :). On top of that, ideally, if any of those k ECC bits added to the n-bit data stream are themselves lost, say c of the k, then it should still be able to reconstruct k-c erasures in the n-bit stream.

Please note, of course, that I do not know the error positions in advance.

Example:

One algorithm I can think of is this. Take a data stream of n bits to be protected against losses.

Let p be the smallest integer greater than 1 that is relatively prime to n. Iterate through the data stream with i = (p * j) mod n, incrementing j; since gcd(p, n) = 1, this visits every index exactly once. XOR together the bits visited at even j: this substream has n/2 elements, and after iterating we have a parity bit for half the elements. We can obtain another parity bit for the other half in the same way, taking odd j.

For losses of 2 bits, this cuts the chance of an unrecoverable error in half: both lost bits can be reconstructed whenever they fall into different parity halves.

The bright side is that we can now get arbitrarily better: just take the next larger relative prime and do the same again, and we are at a 25% failure chance. Basically, we can cut the failure chance in half each time we add two additional parity bits.
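Here is a sketch of that scheme as described above (names mine; it computes only the parity bits, not the reconstruction). Because gcd(p, n) = 1, the stride walk p * j mod n visits every index exactly once, so even and odd j split the stream into two interleaved halves:

```python
from math import gcd

def stride_parities(bits, p):
    """Two parity bits over the halves of an n-bit stream, interleaved by
    walking it with stride p (which must be coprime to n)."""
    n = len(bits)
    assert gcd(p, n) == 1, "p must be relatively prime to n"
    even_parity = odd_parity = 0
    for j in range(n):
        i = (p * j) % n
        if j % 2 == 0:
            even_parity ^= bits[i]
        else:
            odd_parity ^= bits[i]
    return even_parity, odd_parity

bits = [1, 0, 1, 1, 0, 1, 0, 0]          # n = 8
# Each coprime stride contributes two more parity bits (here p = 3, then 5),
# halving the chance that two lost bits land in the same parity group.
print(stride_parities(bits, 3), stride_parities(bits, 5))
```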

asked Mar 08 '15 by thesaint
1 Answer

You need an erasure code (not an error-detection code). Error detection is taken care of by the link and transport layers. Since you are trying to mitigate UDP packet loss, you already know which parts are missing: the dropped packet is missing.

There are no erasures or errors at the byte (or bit) level, at least not with any reasonable likelihood (there are at least two underlying layers of protocols, sometimes three, each with a checksum, which makes sure of that). You either receive a complete, intact datagram, or you don't. Never anything in between.

Cauchy Reed-Solomon codes are one class of algorithms you may consider: these transform k blocks of data of some length into k+m blocks, and allow restoration of the original data given at most m erasures. A special case of this kind of algorithm is parity, for which both encoding and decoding are a simple XOR operation, and m=1. This is the very algorithm used in RAID-5, which was mentioned in a comment above.
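To make that concrete, here is a from-scratch toy of a systematic Cauchy Reed-Solomon code over GF(256), hard-wired to m = 2 parity blocks and to recovering exactly two lost data blocks. This is not how longhair is implemented; a real library replaces the 2x2 solve below with general matrix inversion over GF(256).

```python
# GF(256) log/antilog tables for the polynomial 0x11D (generator 2).
EXP, LOG = [0] * 512, [0] * 256
v = 1
for i in range(255):
    EXP[i], LOG[v] = v, i
    v <<= 1
    if v & 0x100:
        v ^= 0x11D
for i in range(255, 512):
    EXP[i] = EXP[i - 255]

def gmul(a, b):
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def ginv(a):
    return EXP[255 - LOG[a]]

def cauchy_row(i, k, m):
    # Row i of an m-by-k Cauchy matrix: C[i][j] = 1 / (x_i + y_j), with
    # x_i = i, y_j = m + j, and "+" being XOR. Every square submatrix of a
    # Cauchy matrix is invertible, which is what makes decoding always work.
    return [ginv(i ^ (m + j)) for j in range(k)]

def encode(data, m=2):
    k, size = len(data), len(data[0])
    parities = []
    for i in range(m):
        row, p = cauchy_row(i, k, m), bytearray(size)
        for j, block in enumerate(data):
            for t in range(size):
                p[t] ^= gmul(row[j], block[t])
        parities.append(bytes(p))
    return parities

def recover_two(data, parities, ja, jb):
    """Recover data blocks ja and jb (known erasures, data[ja] = data[jb] =
    None) from the surviving data blocks plus both parity blocks, solving a
    2x2 linear system over GF(256) for every byte position (Cramer's rule;
    signs vanish in characteristic 2)."""
    k, m, size = len(data), len(parities), len(parities[0])
    rows = [cauchy_row(i, k, m) for i in range(m)]
    # Syndromes: parities minus the contribution of the surviving blocks.
    S = [bytearray(p) for p in parities]
    for j, block in enumerate(data):
        if j in (ja, jb):
            continue
        for i in range(m):
            for t in range(size):
                S[i][t] ^= gmul(rows[i][j], block[t])
    a0, b0, a1, b1 = rows[0][ja], rows[0][jb], rows[1][ja], rows[1][jb]
    d = ginv(gmul(a0, b1) ^ gmul(b0, a1))       # determinant, never zero
    da = bytes(gmul(d, gmul(S[0][t], b1) ^ gmul(S[1][t], b0)) for t in range(size))
    db = bytes(gmul(d, gmul(a0, S[1][t]) ^ gmul(a1, S[0][t])) for t in range(size))
    return da, db

data = [b"ABCD", b"EFGH", b"IJKL", b"MNOP"]
par = encode(data)
lossy = [data[0], None, data[2], None]          # blocks 1 and 3 were dropped
assert recover_two(lossy, par, 1, 3) == (data[1], data[3])
```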

In one word, you want longhair.

As an alternative, if you have a lot of data to transmit to several parties and you want to be fancy, you can consider fountain codes. These are much more complicated (and thus slower) and less bit-efficient, but they allow you to create an arbitrary number of packets, of which any k will reconstruct the original k-block message. This lets you save a lot of bandwidth if you can multicast to many clients who all want one large set of data but don't necessarily start downloading at the same time.
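For a feel of the principle, here is a toy random linear fountain over GF(2) (my own construction; real LT/Raptor codes use a carefully tuned degree distribution and are far more efficient). The sender can emit an endless stream of packets, each the XOR of a random subset of the k source blocks, and the receiver decodes by Gaussian elimination once it has collected roughly k independent ones:

```python
import random

def fountain_packet(blocks, seed):
    """One output packet: the XOR of a random nonempty subset of the k source
    blocks, plus the subset bitmask so the receiver knows what was combined
    (a real code would just send the seed)."""
    k, size = len(blocks), len(blocks[0])
    rng = random.Random(seed)
    mask = 0
    while mask == 0:
        mask = rng.getrandbits(k)
    payload = bytearray(size)
    for j in range(k):
        if mask >> j & 1:
            for t in range(size):
                payload[t] ^= blocks[j][t]
    return mask, bytes(payload)

def fountain_decode(k, size, packets):
    """Incremental Gauss-Jordan over GF(2); masks are the coefficient rows."""
    rows = {}                                   # pivot column -> (mask, payload)
    for mask, payload in packets:
        payload = bytearray(payload)
        for col, (pmask, ppay) in rows.items():  # reduce by existing pivots
            if mask >> col & 1:
                mask ^= pmask
                for t in range(size):
                    payload[t] ^= ppay[t]
        if mask == 0:
            continue                            # linearly dependent, useless
        col = mask.bit_length() - 1             # new pivot column
        for c in rows:                          # eliminate new pivot everywhere
            pmask, ppay = rows[c]
            if pmask >> col & 1:
                pmask ^= mask
                for t in range(size):
                    ppay[t] ^= payload[t]
                rows[c] = (pmask, ppay)
        rows[col] = (mask, payload)
        if len(rows) == k:                      # every row is now a unit vector
            return [bytes(rows[j][1]) for j in range(k)]
    return None                                 # not enough independent packets

blocks = [b"data", b"sent", b"over", b"UDP!"]
stream = (fountain_packet(blocks, seed) for seed in range(1000))
# Any ~k independent packets decode the whole message, in any order.
assert fountain_decode(len(blocks), 4, stream) == list(blocks)
```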

answered Sep 23 '22 by Damon