Why does C# allow:
1.0 / 0 // Infinity
but not:
1 / 0 // error CS0020: Division by constant zero (compile-time)
Mathematically, is there any difference between dividing integral and floating-point numbers by zero?
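For context, here is a minimal sketch (the variable names are illustrative) of where the line is actually drawn: the compiler only rejects division by a *constant* zero; the same integer division compiles fine when the zero arrives through a variable, and then fails at runtime instead:

using System;

// The commented line fails to compile because the divisor is a constant:
// int bad = 1 / 0;              // error CS0020: Division by constant zero

int zero = 0;                    // runtime value, not a compile-time constant
try
{
    Console.WriteLine(1 / zero); // compiles, but throws at runtime
}
catch (DivideByZeroException)
{
    Console.WriteLine("integer division threw");
}

Console.WriteLine(1.0 / 0);      // no exception: prints Infinity (or "∞")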
According to Microsoft, "Floating-point arithmetic overflow or division by zero never throws an exception, because floating-point types are based on IEEE 754 and so have provisions for representing infinity and NaN (Not a Number)."
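A small sketch illustrating the quoted behavior (assuming a modern .NET runtime; older runtimes print "Infinity" rather than "∞"):

using System;

double big = double.MaxValue * 2; // overflow: no exception, result is +∞
Console.WriteLine(big);           // ∞ (or "Infinity" on .NET Framework)

Console.WriteLine(1.0 / 0.0);     // ∞
Console.WriteLine(-1.0 / 0.0);    // -∞
Console.WriteLine(0.0 / 0.0);     // NaN: 0/0 has no meaningful value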
Mathematically, there is no difference. On computers, however, only the IEEE 754 floating-point formats reserve special values for representing ±∞. Integer types can only hold... integers :-)
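As a follow-up sketch (variable name is illustrative), those special values can be inspected directly; note that NaN comparing unequal even to itself is an IEEE 754 rule, not anything C#-specific:

using System;

Console.WriteLine(double.IsPositiveInfinity(1.0 / 0));  // True
Console.WriteLine(double.IsNegativeInfinity(-1.0 / 0)); // True
Console.WriteLine(double.IsNaN(0.0 / 0));               // True

// NaN is unordered under IEEE 754: every comparison with it is false.
double nan = double.NaN;
Console.WriteLine(nan == nan);                          // False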