How do I divide an int by 100?
eg:
int x = 32894;
int y = x / 100;
Why does this result in y being 328 and not 328.94?
When one integer is divided by another, the arithmetic is performed as integer arithmetic.
If you want it to be performed as float, double or decimal arithmetic, you need to cast one of the values appropriately. For example:
decimal y = ((decimal) x) / 100;
Note that I've changed the type of y as well - it doesn't make sense to perform decimal arithmetic but then store the result in an int. The int can't possibly store 328.94.
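As a quick illustration (z is just a hypothetical variable here), the compiler itself enforces this:
int z = ((decimal) x) / 100; // does not compile: cannot implicitly convert type 'decimal' to 'int'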
You only need to force one of the values to the right type, as then the other will be promoted to the same type - there's no operator defined for dividing a decimal by an integer, for example. If you're performing arithmetic using several values, you might want to force all of them to the desired type just for clarity - it would be unfortunate for one operation to be performed using integer arithmetic, and another using double arithmetic, when you'd expected both to be in double.
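For example, here is a minimal sketch of that pitfall (variable names are illustrative) - if the cast is applied to the result instead of to an operand, the division has already happened in integer arithmetic:
double wrong = (double) (x / 100); // 328.0 - x / 100 was evaluated as integer arithmetic first
double right = ((double) x) / 100; // 328.94 - casting one operand promotes the other to double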
If you're using literals, you can just use a suffix to indicate the type instead:
decimal a = x / 100m; // Use decimal arithmetic due to the "m"
double b = x / 100.0; // Use double arithmetic due to the ".0"
double c = x / 100d; // Use double arithmetic due to the "d"
double d = x / 100f; // Use float arithmetic due to the "f"
As for whether you should be using decimal, double or float, that depends on what you're trying to do. Read my articles on decimal floating point and binary floating point. Usually double is appropriate if you're dealing with "natural" quantities such as height and weight, where any value will really be an approximation; decimal is appropriate for artificial quantities such as money, which are typically represented exactly as decimal values to start with.
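As a small sketch of why that distinction matters (this is standard C# floating-point behavior, not specific to this question):
bool doubleExact = (0.1 + 0.2) == 0.3; // false - 0.1 and 0.2 have no exact binary representation
bool decimalExact = (0.1m + 0.2m) == 0.3m; // true - decimal stores base-10 digits exactly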
328.94 is not an integer. Integer division (int / int) truncates the fractional part; that is how it works.
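To make the truncation concrete (note that negative values truncate toward zero rather than rounding down):
int p = 32894 / 100; // 328 - fractional part discarded
int n = -32894 / 100; // -328 - truncated toward zero, not rounded down to -329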
I suggest you cast to decimal:
decimal y = 32894M / 100;
or with variables:
decimal y = (decimal)x / 100;
Because an int can only hold whole numbers. Try this instead:
int x = 32894;
double y = x / 100.0;
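Putting that together as a complete, runnable sketch (the class name is just a placeholder):
using System;

class Divide
{
    static void Main()
    {
        int x = 32894;
        double y = x / 100.0; // 100.0 is a double literal, so the division is done in double
        Console.WriteLine(y); // prints 328.94
    }
}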