 

How to Determine Which CRC to Use?

Tags:

crc

If I have a certain number of bytes to transfer serially, how do I determine which CRC (CRC8, CRC16, etc., basically a how many bit CRC?) to use and still have error detection percentage be high? Is there a formula for this?

djpark121 asked Jun 23 '10 19:06


People also ask

What CRC to use?

The most commonly used polynomial lengths are 9 bits (CRC-8), 17 bits (CRC-16), 33 bits (CRC-32), and 65 bits (CRC-64); the polynomial is one bit longer than the check value it produces. A CRC is called an n-bit CRC when its check value is n bits. For a given n, multiple CRCs are possible, each with a different polynomial.


2 Answers

From the standpoint of the length of the CRC, normal statistics apply. For a CRC of bit-width n, a random error pattern has a 1/(2^n) chance of slipping through as a false positive. So for an 8-bit CRC you have a 1/256 chance, for a 16-bit CRC a 1/65536 chance, etc.
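As a quick sanity check on those odds, here is a minimal sketch (assuming errors beyond the CRC's guaranteed-detection classes behave like random data, so a corrupted message passes only if its n-bit check value happens to match):

```python
# Undetected-error probability for a random error pattern, per CRC width.
for n in (8, 16, 32):
    p = 1 / 2**n
    print(f"CRC-{n}: ~1 in {2**n} (p = {p:.3e})")
```

This is why moving from CRC-8 to CRC-16 buys roughly a 256-fold reduction in undetected random errors, at the cost of one extra byte per frame.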

However, the polynomial chosen also makes a big impact: a well-chosen polynomial guarantees detection of whole classes of errors (all single-bit errors, and all burst errors no longer than the CRC width), while a poor one wastes bits. The math is highly dependent on the data being transferred and the channel's error patterns, so there isn't an easy closed-form answer.
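To make the polynomial's role concrete, here is a bit-by-bit CRC-8 sketch; the default 0x07 is the CRC-8/SMBus polynomial, and passing a different one (e.g. 0x1D, used by SAE J1850) produces a code with different detection properties:

```python
def crc8(data: bytes, poly: int = 0x07, init: int = 0x00) -> int:
    """Bit-by-bit (non-table) MSB-first CRC-8 over `data`.

    `poly` is the generator polynomial with the implicit top bit
    dropped; 0x07 means x^8 + x^2 + x + 1 (CRC-8/SMBus).
    """
    crc = init
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 0x80:                        # top bit set: reduce by poly
                crc = ((crc << 1) ^ poly) & 0xFF
            else:
                crc = (crc << 1) & 0xFF
    return crc

print(hex(crc8(b"123456789")))        # standard check string -> 0xf4
print(hex(crc8(b"123456789", 0x1D)))  # different polynomial, different CRC
```

Because both example polynomials have more than one term, any single flipped bit is guaranteed to change the check value; which multi-bit patterns are caught is exactly what the polynomial choice decides.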

You should evaluate more than just CRC depending on your communication mechanism (FEC with systems such as turbo codes is very useful and common).

Yann Ramin answered Oct 06 '22 02:10


To answer this question, you need to know the bit error rate (BER) of your channel, which can only be determined empirically. Once you have the measured BER, you have to decide what detection rate is "high" enough for your purposes.

Sending each message, for example, five times will give you pretty good detection even on a very noisy channel, but it does crimp your throughput a bit. However, if you are sending commands to a deep-space probe, you may need that redundancy.
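A rough way to put numbers on that trade-off (a sketch, not a measurement: the BER value is hypothetical, and five-fold repetition is modelled here as a per-bit majority vote, one simple way to use the redundancy):

```python
from math import comb

def frame_intact(ber: float, bits: int) -> float:
    """Probability a frame of `bits` bits arrives with no bit errors."""
    return (1 - ber) ** bits

def repeat5_bit_error(ber: float) -> float:
    """Residual per-bit error after sending each bit 5 times and taking
    a majority vote: the vote is wrong only if 3+ of the 5 copies flip."""
    return sum(comb(5, k) * ber**k * (1 - ber)**(5 - k) for k in range(3, 6))

ber = 0.01                      # hypothetical measured channel BER
print(frame_intact(ber, 100))   # ~0.37: only a third of 100-bit frames survive
print(repeat5_bit_error(ber))   # ~1e-5: ~1000x better per bit, at 5x the cost
```

The same calculation, run with your measured BER and frame size, tells you whether a longer CRC, repetition, or proper FEC is the cheaper way to hit your target detection rate.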

msw answered Oct 06 '22 02:10