I just need to understand this statement:
if (fork() && !fork())
shouldn't it always be false? I mean, if I write:
if (a && !a)
It's always false, so the first one should always be false too, am I wrong? Of course I am, but I'm hoping someone can explain this strange thing to me.
I'm studying C for an exam and I had to work out this code:

#include <stdio.h>
#include <unistd.h>

int main(void) {
    if (fork() && !fork()) {
        printf("a\n");
    } else {
        printf("b\n");
    }
    return 0;
}
Every call to the Unix process-creation system call fork() returns twice: once in the parent (the process that called fork()), where it returns the PID of the child, and once in the newly created child, where it returns 0.
From the man page:
Return Value
On success, the PID of the child process is returned in the parent, and 0 is returned in the child. On failure, -1 is returned in the parent, no child process is created, and errno is set appropriately.
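For reference, this is the usual pattern for handling that return value; a minimal sketch, not part of the question's code:

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();
    if (pid == -1) {
        perror("fork");    /* failure: no child was created, errno is set */
        exit(EXIT_FAILURE);
    } else if (pid == 0) {
        /* child: fork() returned 0 here */
        printf("child: my pid is %d\n", (int)getpid());
    } else {
        /* parent: fork() returned the child's PID */
        printf("parent: child pid is %d\n", (int)pid);
        wait(NULL);        /* reap the child */
    }
    return 0;
}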
In your case,

if (fork() && !fork())

the expression inside the if calls fork() twice. So what happens is the following:
A
|---------------- B       (first fork creates B)
|                 |
|-------- C       |       (second fork, run only in A, creates C)
|         |       |
Now the first call to fork() returns in both A and B: in A it is nonzero (the child's PID), and in B it is zero.

The second call to fork() is invoked only in A. Because the first fork() returned 0 in B, && short-circuits the evaluation when its first operand is zero, so B never invokes the second fork(). Thanks to Daniel for pointing this out.
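To see the short-circuit rule in isolation, here is a small sketch (my own illustration, not part of the exam code) where each operand prints when it is evaluated:

#include <stdio.h>

/* Returns its argument and prints, so we can see whether it ran. */
static int noisy(int x) {
    printf("noisy(%d) was evaluated\n", x);
    return x;
}

int main(void) {
    if (noisy(0) && noisy(1)) {   /* noisy(1) never runs: left side is 0 */
        printf("both nonzero\n");
    }
    if (noisy(1) && noisy(0)) {   /* both run, but the result is false */
        printf("both nonzero\n");
    }
    return 0;
}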
So we can make a table out of this:

Process   fork() #1         fork() #2        prints
----------------------------------------------------
A         >0                >0               "b"
B         =0                not evaluated    "b"
C         >0 (inherited)    =0               "a"
So, from the table, only process C's if condition evaluates to TRUE.

It is important to remember that fork() #1 never returned in C: C inherited the already-evaluated value of the first operand from its parent, A.

I hope this answers your question.
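If you want to check the table yourself, here is an instrumented variant (my own sketch) that tags each line with the process ID; it should print one "a" line and two "b" lines, in some order:

#include <stdio.h>
#include <unistd.h>

int main(void) {
    if (fork() && !fork()) {
        printf("a (pid %d)\n", (int)getpid());   /* only C gets here */
    } else {
        printf("b (pid %d)\n", (int)getpid());   /* A and B both get here */
    }
    return 0;
}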