Initially I declared variables x and y as type int:
#include <stdio.h>

int main(void) {
    int x, y = 0;
    x = 1 / y;
    printf("%d", x);
    return 0;
}
The program crashed (for the obvious reason: integer division by zero).
Now I declared variables x and y as double:
#include <stdio.h>

int main(void) {
    double x, y = 0;
    x = 1 / y;
    printf("%d", x);
    return 0;
}
But the output was 0. (Why?)
Then I changed %d to %f in printf:
#include <stdio.h>

int main(void) {
    double x, y = 0;
    x = 1 / y;
    printf("%f", x);
    return 0;
}
Output: 1.#INF00
I don't understand what is happening here. Please explain the above cases.
Most systems you're likely to come in contact with use IEEE 754 representation for floating point numbers. That representation has ways to store the values +infinity and -infinity.
While strictly speaking, dividing by 0 is undefined behavior, implementations using IEEE 754 extend the language to allow it for floating-point types. Under IEEE 754, dividing a positive value by 0 yields positive infinity, so your implementation allows it, and 1.#INF00 is how MSVC prints the infinity value.
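If it helps, here is a minimal sketch of the same idea (assuming an IEEE 754 platform and the C99 <math.h> macros INFINITY and isinf):

#include <stdio.h>
#include <math.h>   /* INFINITY, isinf (C99) */

int main(void) {
    double y = 0.0;
    double x = 1.0 / y;              /* IEEE 754: +infinity */

    printf("%f\n", x);               /* MSVC prints 1.#INF00, glibc prints inf */
    printf("%d\n", isinf(x));        /* nonzero: x is infinite */
    printf("%d\n", x == INFINITY);   /* 1: comparing against the INFINITY macro works */
    return 0;
}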
Also, using the wrong format specifier, as in your second example where you use %d to print a double, is undefined behavior: format specifiers must match the (promoted) type of the argument passed in. That is why the second program printed a meaningless 0 rather than the actual value.
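As a sketch of the rule (the variable names here are only for illustration): each conversion specifier has to match the promoted type of its argument, and an explicit conversion is the way to print a double as an integer:

#include <stdio.h>

int main(void) {
    int i = 42;
    double d = 1.5;

    printf("%d\n", i);        /* %d matches int */
    printf("%f\n", d);        /* %f matches double (and float, after default promotion) */
    printf("%d\n", (int)d);   /* convert explicitly to print a double as an int */
    return 0;
}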