
Error Detection Efficiency (CRC, Checksum, etc.)

I have a hypothetical situation of sending data units, each of a thousand bytes. Failures are rare, but when an error does occur it is less likely to be a single-bit error and more likely to be a burst affecting a few bits in a row.

At first I thought of using a checksum, but apparently a simple checksum can miss errors spanning more than a single bit. A parity check won't work either (it only detects an odd number of flipped bits), so a CRC might be the best option.

Is using a Cyclic Redundancy Check on a thousand bytes efficient? Or are there other methods that would work better?
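For concreteness, here is a minimal sketch of the kind of multi-bit error an additive checksum can miss. The frame contents and the choice of a sum-of-bytes-mod-256 checksum are just illustrative assumptions; the CRC side uses Python's standard `zlib.crc32`:

```python
import zlib

# A hypothetical 1000-byte data unit (contents are made up for illustration).
frame = bytes(i % 256 for i in range(1000))

def additive_checksum(data: bytes) -> int:
    """Simple 8-bit additive checksum: sum of all bytes modulo 256."""
    return sum(data) % 256

# Corrupt two bytes so that their changes cancel out in the sum:
# one byte goes up by 4, another goes down by 4 (two flipped bits in total).
damaged = bytearray(frame)
damaged[10] = (damaged[10] + 4) % 256
damaged[500] = (damaged[500] - 4) % 256
damaged = bytes(damaged)

print(additive_checksum(frame) == additive_checksum(damaged))  # True  -> checksum misses the error
print(zlib.crc32(frame) == zlib.crc32(damaged))                # False -> CRC-32 catches it
```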

asked Aug 24 '09 by irl_irl

2 Answers

Cyclic Redundancy Checks (CRCs) are popular specifically because they detect multi-bit errors efficiently and with well-defined guarantees; for example, an n-bit CRC detects every burst error up to n bits long.

Different CRC polynomials trade detection strength against computational cost. In your case, you can choose the fastest one that still meets your accuracy requirements.

You might want to start with this Wikipedia article on the Cyclic Redundancy Check.
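As a rough sketch of how this looks in practice, the snippet below appends a CRC-32 trailer to a 1000-byte unit and verifies it on receipt. The helper names (`add_crc`, `check_crc`) and the big-endian 4-byte trailer layout are assumptions for illustration; only `zlib.crc32` is a real library call:

```python
import zlib

def add_crc(payload: bytes) -> bytes:
    """Append a CRC-32 (4 bytes, big-endian) to a data unit before sending."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def check_crc(unit: bytes) -> bool:
    """On receipt, recompute the CRC over the payload and compare with the trailer."""
    payload, trailer = unit[:-4], int.from_bytes(unit[-4:], "big")
    return zlib.crc32(payload) == trailer

payload = bytes(1000)             # placeholder 1000-byte data unit
unit = add_crc(payload)

# Simulate a short burst error: flip a few bits in a row inside one byte.
damaged = bytearray(unit)
damaged[300] ^= 0b00011110
print(check_crc(unit))            # True
print(check_crc(bytes(damaged)))  # False -> the burst is detected
```

Since CRC-32 detects any burst no longer than 32 bits, the few-bits-in-a-row errors described in the question are always caught.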

answered Oct 02 '22 by Robert Cartaino


CRC is covered in another question here:
When is CRC more appropriate to use than MD5/SHA1?
It is well suited to detecting random errors and is easy to implement.
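To illustrate "easy to implement", here is a minimal bit-by-bit sketch of one common variant (CRC-16/CCITT-FALSE: polynomial 0x1021, initial value 0xFFFF). The function name is just illustrative, and a table-driven version would be faster for real traffic:

```python
def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    """Bit-by-bit CRC-16/CCITT-FALSE: shift each message bit through the register,
    XORing in the polynomial 0x1021 whenever the top bit falls out."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

# Standard check string for this variant; the expected result is 0x29b1.
print(hex(crc16_ccitt(b"123456789")))
```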

answered Oct 02 '22 by nik