I had a while loop looking like:
while (str[i] != *p)
    ++i;
and a friend popped over and mentioned that it would be faster to do:
char c = *p;
while (str[i] != c)
    ++i;
I guess the question is: if I dereference a pointer multiple times, at what point does it become faster to create a temporary variable?
If *p is not volatile-qualified, the compiler is allowed to load *p only once and do what your friend is proposing.
So in "normal" code you should prefer readability and maintainability and let the compiler do its job. If in doubt, you can look at the assembly that the compiler produces, usually with the -S option.
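For instance, here is a minimal sketch (the file and function names are invented) that you could compile with something like gcc -O2 -S to compare the code generated for the two forms:

#include <stddef.h>

/* Version as written in the question: dereferences p in the loop condition. */
size_t find_deref(const char *str, const char *p)
{
    size_t i = 0;
    while (str[i] != *p)
        ++i;
    return i;
}

/* Version with the character hoisted into a local variable by hand. */
size_t find_hoisted(const char *str, const char *p)
{
    char c = *p;
    size_t i = 0;
    while (str[i] != c)
        ++i;
    return i;
}

With optimization enabled, both functions will typically compile to the same loop: nothing inside the loop writes to memory, so the load of *p can be hoisted out automatically.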
It might be a little bit faster. It won't be a whole lot faster. If you really care, you'll have to measure it.
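If you do want to measure it, a rough sketch along these lines (the buffer size is arbitrary, and a serious comparison would need a proper benchmark harness and repeated runs) gives at least a ballpark:

#include <stdio.h>
#include <string.h>
#include <time.h>

#define LEN 10000000             /* arbitrary buffer size for the test */

static char str[LEN + 1];

int main(void)
{
    memset(str, 'a', LEN);
    str[LEN] = 'x';              /* sentinel both loops search for */
    const char *p = "x";

    clock_t t0 = clock();
    size_t i = 0;
    while (str[i] != *p)         /* dereferences p on every iteration */
        ++i;
    clock_t t1 = clock();

    char c = *p;                 /* hoisted copy */
    size_t j = 0;
    while (str[j] != c)
        ++j;
    clock_t t2 = clock();

    printf("deref:   found at %zu in %f s\n", i, (double)(t1 - t0) / CLOCKS_PER_SEC);
    printf("hoisted: found at %zu in %f s\n", j, (double)(t2 - t1) / CLOCKS_PER_SEC);
    return 0;
}

Don't be surprised if the two times are indistinguishable: with optimization on, the compiler usually hoists the load itself, so both loops often end up as the same machine code.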
Much more important than micro-optimization is clarity and accuracy. Depending on the usage of the pointer p in the program, it may be extremely obvious that it's what points to the character being compared to, meaning that str[i] != *p is a good, clear test, and that introducing the new variable c would make the code that much harder to follow. Or, if it's not so obvious that *p is the character being compared to, then introducing a new variable, perhaps called character_being_compared_to, would make the code clearer (and also perhaps that little bit faster).
But there's also the question of whether p's value ever changes, meaning that the character it points to becomes different. If p's value can ever change, then copying *p into c means you keep comparing against the old value, which might be a bug.
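As a contrived illustration of that pitfall (this scenario is invented, not taken from the question): if the character p points at can change while you are still using the copy, the cached value goes stale:

#include <stdio.h>

int main(void)
{
    char target = 'x';
    char *p = &target;

    char c = *p;          /* cached copy holds 'x' */
    target = 'y';         /* the character p points at changes */

    /* *p now yields 'y', but c still holds the stale 'x'. */
    printf("*p = %c, c = %c\n", *p, c);
    return 0;
}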
Each variable you add to your program costs something: it costs you (and everyone who ever maintains your code) mental effort to keep track of. So variables have to "carry their weight": they have to be for something. There's a certain benefit in not introducing new variables, especially if the value of the new variable has to be kept in sync with the value of other variables. The benefit gained by adding the new variable (in this case, the alleged speedup) has to be greater than the cost of adding it.
Your friend is doing what a lot of inexperienced programmers do: imagining that the computer is some frail, inefficient, wheezing rattletrap of a machine, and that it's our job to make things easier for it. But that's exactly backwards: computers are actually insanely fast, and it's their job to make our jobs easier for us. So, in general, first write code that's clear and that works, and if it has performance issues then maybe worry about how to speed it up, but don't be too worried about trivial little issues like this making it harder on the poor old computer.