I understand that to compare two strings for equality, the interpreter has to iterate through both strings and compare each character.
That would make the time complexity O(n), where n is the length of the shorter string. However, comparing two numbers for equality is O(1).
Why is that? Wouldn't the interpreter have to iterate through every digit of each number to check for equality?
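Roughly, I picture string equality working like this sketch (a simplified illustration in Java, not the actual implementation of String.equals):

```java
// Simplified character-by-character equality check; the real String.equals
// also does reference and type checks, but the worst-case work is the same.
static boolean stringEquals(String a, String b) {
    if (a.length() != b.length()) {
        return false;                    // different lengths can never be equal
    }
    for (int i = 0; i < a.length(); i++) {
        if (a.charAt(i) != b.charAt(i)) {
            return false;                // first mismatching character ends the scan
        }
    }
    return true;                         // all n characters matched: O(n) work
}
```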
Numbers in computers are usually handled in fixed-size units. An int might be 32 or 64 bits in any given language/compiler/platform combination, but it will never be variable-length.
Therefore you have a fixed number of bits to compare when comparing numbers. It's very easy to build a hardware circuit that compares that many bits at once (i.e. as "one action").
Strings, on the other hand, have inherently variable lengths, so just saying "string" doesn't tell you how many bits you'll have to compare.
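To make that concrete, here is a small sketch (in Java; the exact machine code depends on the compiler and platform, and the repeat call assumes Java 11+):

```java
public class FixedWidthCompare {
    public static void main(String[] args) {
        // Both longs are 64 bits wide, so each comparison is one fixed-width
        // operation regardless of how large the values are.
        long small = 7L;
        long large = Long.MAX_VALUE;
        System.out.println(small == 7L);              // O(1)
        System.out.println(large == Long.MAX_VALUE);  // O(1), same cost as above

        // A string's cost depends on its length, not its "value": equals()
        // may have to look at every character.
        String s = "a".repeat(1_000_000);
        String t = "a".repeat(1_000_000);
        System.out.println(s.equals(t));              // up to O(n) character comparisons
    }
}
```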
There are exceptions, however: variable-length numbers, usually called something like BigInteger or BigDecimal, behave much more like String comparison. Comparing two BigDecimal values for equality can end up being O(n), where n is the length of the BigDecimal values (the number of digits or words they store), not either of their numeric values.
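Here is a hypothetical sketch of why such a comparison is O(n); java.math.BigInteger stores its magnitude in a similar word array, though its actual code differs:

```java
// Hypothetical arbitrary-precision integer: sign plus an array of "words".
final class BigIntSketch {
    private final int sign;      // -1, 0, or +1
    private final int[] words;   // magnitude, most significant word first

    BigIntSketch(int sign, int[] words) {
        this.sign = sign;
        this.words = words.clone();
    }

    boolean sameValueAs(BigIntSketch other) {
        if (this.sign != other.sign || this.words.length != other.words.length) {
            return false;                          // cheap O(1) rejections
        }
        for (int i = 0; i < words.length; i++) {   // O(n) in the number of words
            if (this.words[i] != other.words[i]) {
                return false;
            }
        }
        return true;
    }
}
```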
Usually programs represent numbers as fixed-size data structures (binary values, which is why you may see their sizes described in bits). Because the size is fixed, comparisons take a constant amount of time and are O(1), which is one of the benefits of such a representation. A downside is a limit on the range of values that can be represented.
An alternate representation that lifts this restriction, allowing for an arbitrarily-large range of numbers, would thus no longer be fixed in size, and no longer be O(1) to compare.
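For example, using Java's long and BigInteger to make the trade-off concrete (a sketch, not a benchmark):

```java
import java.math.BigInteger;

public class RangeTradeoff {
    public static void main(String[] args) {
        // Fixed-size representation: O(1) comparison, but a bounded range.
        long max = Long.MAX_VALUE;       // 2^63 - 1 is as far as 64 bits go
        long wrapped = max + 1;          // silently overflows to Long.MIN_VALUE
        System.out.println(wrapped);     // prints a negative number

        // Arbitrary-precision representation: unbounded range, but equals()
        // may have to walk the whole internal digit array, so it is not O(1).
        BigInteger big = BigInteger.valueOf(max).add(BigInteger.ONE);
        BigInteger same = new BigInteger("9223372036854775808");
        System.out.println(big.equals(same)); // true; cost grows with the number's size
    }
}
```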