I have a hypothetical situation of sending data units, each a thousand bytes long. Failures are rare, but when an error does occur it is more likely to be a burst affecting a few bits in a row than a single-bit error.
At first I thought of using a checksum, but apparently a simple checksum can miss errors spanning more than a single bit. A parity check won't work either, so a CRC might be the best option.
Is using a Cyclic Redundancy Check on a thousand bytes efficient? Or are there other methods that would work better?
Cyclic Redundancy Checks (CRCs) are popular specifically because they detect multiple-bit errors efficiently and with well-understood guarantees: an n-bit CRC detects every burst error of n bits or fewer, which fits your "few bits in a row" failure pattern.
Different generator polynomials trade error-detection strength against computational cost. In your case, you can choose the fastest one that meets your accuracy requirements.
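As a rough sketch of how little work is involved, here is one way to protect a 1000-byte unit with the standard CRC-32 using Python's built-in zlib module (the payload here is just random placeholder data):

```python
import os
import zlib

# Illustrative 1000-byte payload standing in for a real data unit.
payload = os.urandom(1000)

# Sender: compute a 32-bit CRC and append it to the frame.
crc = zlib.crc32(payload)
frame = payload + crc.to_bytes(4, "big")

# Receiver: recompute over the payload and compare against the trailer.
received_payload = frame[:-4]
received_crc = int.from_bytes(frame[-4:], "big")
print("CRC check passed" if zlib.crc32(received_payload) == received_crc else "CRC check failed")
```

The 4-byte overhead on a 1000-byte unit is under half a percent, and table-driven or hardware-assisted CRC implementations process data at memory speed, so efficiency is unlikely to be a concern at this size.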
You might want to start with this Wikipedia article on the Cyclic Redundancy Check.
CRC is also covered in another question here: When is CRC more appropriate to use than MD5/SHA1?
A CRC is well suited to detecting random and burst errors, and it is easy to implement, as the sketch below shows.
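For illustration, a minimal, unoptimized bit-by-bit CRC-32 (the reflected polynomial 0xEDB88320, matching zlib.crc32) fits in a few lines; real implementations use a lookup table for speed, but the core loop is this small:

```python
def crc32_bitwise(data: bytes) -> int:
    """Unoptimized bit-by-bit CRC-32 (reflected polynomial 0xEDB88320)."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xEDB88320
            else:
                crc >>= 1
    return crc ^ 0xFFFFFFFF

# Standard check value for CRC-32 over the ASCII string "123456789".
assert crc32_bitwise(b"123456789") == 0xCBF43926
```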