Is there a reason I shouldn't be testing a set of variables for 0 by testing their product?
Often in my coding, across different languages, I will test a set of variables and do something if any of them is zero.
For example (C#):
if( myString.Length * myInt * (myUrl.LastIndexOf(@"\")+1) == 0 )
Instead of:
if( myString.Length == 0 || myInt == 0 || myUrl.LastIndexOf(@"\") < 0)
Is there a reason I shouldn't be testing this way?
Here are a few reasons. All are important, and they're in no particular order.
Correctness. Is

if( myString.Length * myInt * (myUrl.LastIndexOf(@"\")+1) == 0 )

actually equivalent in all practical cases to

if( myString.Length == 0 || myInt == 0 || myUrl.LastIndexOf(@"\") < 0 )

? I'm not sure. I doubt it. I'm sure I could figure it out with some effort, but why should I have to in the first place? Which brings me to...

Short-circuiting. The && and || operators stop evaluating as soon as the result is known, which is what lets you write a check like

if (myObj != null && myObj.Enabled)

without throwing an exception when myObj is null. Multiplication offers no such guarantee: every factor is evaluated before the comparison ever happens.
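To make the short-circuiting behaviour of && and || concrete, here is a minimal sketch. It is written in Java rather than C# so it stays runnable as-is, and the variable names are made up for illustration; the && / || semantics are the same in both languages.

```java
public class ShortCircuitDemo {
    public static void main(String[] args) {
        String myString = null;

        // Safe: || stops at the first true operand, so length() is never
        // called on the null reference.
        boolean anyEmpty = (myString == null || myString.length() == 0);
        System.out.println(anyEmpty); // prints "true"

        // The product form evaluates every factor up front, so the same
        // null value throws before the comparison is ever reached.
        try {
            int product = myString.length() * 5;
            System.out.println(product == 0);
        } catch (NullPointerException e) {
            System.out.println("the product form has no short-circuit");
        }
    }
}
```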
Readability. You shouldn't do this because it's not obvious what you're doing. Code should be clean, readable, and easily maintainable. The product test is clever, but it forces the next person who looks at your code to "decipher" what your intent was, so anyone reading it in the future will have a harder time understanding it. And don't make the excuse that "I'll be the only one to read it": in a few months or years you'll probably have forgotten the thoughts and conventions you had when you wrote it, and you'll be reading it just like anyone else.
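One concrete reason to doubt the equivalence asked about above: integer multiplication can overflow, so a product of non-zero values can wrap around to exactly zero. A minimal sketch in Java, whose 32-bit int arithmetic wraps the same way as C#'s default unchecked context (the values are contrived for illustration):

```java
public class OverflowDemo {
    public static void main(String[] args) {
        int a = 65536; // 2^16, clearly non-zero
        int b = 65536; // 2^16, clearly non-zero

        // 65536 * 65536 == 2^32, which wraps to 0 in 32-bit int
        // arithmetic, so the product test reports a zero that isn't there.
        System.out.println(a * b == 0);       // prints "true"
        System.out.println(a == 0 || b == 0); // prints "false"
    }
}
```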