I am reading the book "Introduction to Algorithms" and I am confused by this part:
> We also assume a limit on the size of each word of data. For example, when working with inputs of size n, we typically assume that integers are represented by c lg n bits for some constant c ≥ 1. We require c ≥ 1 so that each word can hold the value of n, enabling us to index the individual input elements, and we restrict c to be a constant so that the word size doesn't grow arbitrarily.
What is the purpose of this constant c?
The constant c handles the case where you're working with small inputs on a machine with a large word size. For example, suppose you are working on a 64-bit machine but only processing inputs of size, say, 2^12. Then lg n is 12, yet the machine word is 64 bits, so it wouldn't be correct to assume that the word size is exactly lg n = 12 bits. The extra factor of c lets us assume that the word size is at least lg n bits, but possibly a lot larger (here c = 64/12 ≈ 5.33, still a constant), without messing up the analysis.
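For concreteness, here is a tiny Python sketch of that arithmetic. The numbers (n = 2^12, a 64-bit word) are just the example values above, not anything prescribed by the book:

```python
import math

# Assumed example values: n = 2^12 input elements on a 64-bit machine.
n = 2 ** 12
word_size = 64  # bits per machine word (assumption for this example)

# lg n bits are enough to index any of the n input elements.
bits_to_index = math.ceil(math.log2(n))   # = 12

# The word size is c * lg n for some constant c >= 1.
c = word_size / bits_to_index             # = 64 / 12 ≈ 5.33

print(f"lg n = {bits_to_index} bits, word size = {word_size} bits, c ≈ {c:.2f}")
```

The point is simply that c stays a fixed constant as n grows: the model lets the word be comfortably bigger than lg n bits, but not unboundedly so.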
Hope this helps!