Recently I found this article, which claims that the preference for for(;;) over while(1) for an infinite loop arose because the C compiler originally available on the PDP-11 generated an extra machine instruction for while(1). Incidentally, even Visual C++ warnings now tend to favor the former. How realistic is this attribution of the for(;;) idiom?
The "for(;;)" idiom is explicitly mentioned in the original K&R. That's attribution enough for me :)
Be wary of such "sensible attributions": they are often the source of false myths, due to lack of context. The OP tagged the question both C and C++. Now, K&R may be semi-gods of C history, but they certainly cannot be considered an authority on C++. Furthermore, compiler optimization plays a fundamental role in C++, yet it is often viewed as "abusive" by C systems programmers (who like to see C as "assembler with expressions" rather than as a high-level language).
Today's compilers will most likely produce exactly the same code for either form (this is easy to verify; a quick check is sketched below), and choosing one or the other is much more a matter of taste than anything else.
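For example, here is a minimal sketch of such a check (the file name, function names, and compiler flags are only assumptions; any modern optimizing compiler will do): compile both forms to assembly and compare the output.

/* loops.c -- compile with something like "cc -O2 -S loops.c" and diff
   the assembly of the two functions; an optimizing compiler typically
   emits the same single unconditional jump for both. */

void spin_while(void)
{
    while (1)
        ;               /* infinite loop via a constant condition */
}

void spin_for(void)
{
    for (;;)
        ;               /* infinite loop with an empty for header */
}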
In this sense, I tend to favor for(;;) because I can easily read it as "for-ever", whereas while(true) reads as "while this true thing is true", which makes you wonder whether it could ever be false... two milliseconds of brain wasted! (But that is a personal opinion: I know many people who have to think harder about for(;;) than about while(true).)
However, I can also recognize both of them as "pictorial representations" (recognized at a glance, without actually reading the text) of the same concept: stay here until something from the inside kicks you out.
About the MS warning: sometimes it saves you from badly written expressions (like true||a; see the sketch below). But it is clearly overused and should not appear for trivial constant conditions with no operation inside. Nevertheless, the MS compiler produces the same machine code in both cases. Perhaps feedback to Microsoft will make the warning less tedious in future releases.
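As a hedged illustration (the function and variable names are hypothetical), this is the kind of mistake a "conditional expression is constant" style warning can catch:

#include <stdbool.h>

/* The author probably meant (ready || a) but typed a constant instead,
   so 'a' can never influence the result; a constant-condition warning
   flags the expression. */
int check(bool a)
{
    if (true || a)      /* always true: the test of 'a' is dead code */
        return 1;
    return 0;
}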
Here's what the V7 Unix compiler cc produces (using SIMH and an image from TUHS):
$ cat>a.c
main(){
while(1);
}
$ cat>b.c
main(){
for(;;);
}
$ cc -S a.c
$ cc -S b.c
a.c (the while version) compiles to:
.globl _main
.text
_main:
~~main:
jsr r5,csv
jbr L1
L2:L4:tst $1
jeq L5
jbr L4
L5:L3:jmp cret
L1:jbr L2
.globl
.data
While b.c (the for version) becomes:
.globl _main
.text
_main:
~~main:
jsr r5,csv
jbr L1
L2:L4:jbr L4
L5:L3:jmp cret
L1:jbr L2
.globl
.data
So it's at least true that for(;;) compiled to fewer instructions when not using optimization. However, when compiling with -O, both programs produce exactly the same assembly:
.globl _main
.text
_main:
~~main:
jsr r5,csv
L4:jbr L4
.globl
.data
and when I add a loop body of printf("Hello"); (a sketch of that source follows below), the programs are still the same.
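For reference, the for(;;) version with that body presumably looked something like this (the while(1) version is analogous; this is a reconstruction, not the exact file used above):

/* K&R-style source as accepted by the V7 cc */
main(){
	for(;;) printf("Hello");
}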
So, it might be that the idiom has its origins in PDP-11 machine language, but by 1979 the difference was already largely irrelevant.
I don't know whether it is true, but the article's claim is sensible and realistic. And for(;;) is shorter to type than while(1).