I experimented today with how the compiler determines the types for numbers declared as var.
var a = 255; //Type = int. Value = byte.MaxValue. Why isn't this byte?
var b = 32767; //Type = int. Value = short.MaxValue. Why isn't this short?
var c = 2147483647; //Type = int. Value = int.MaxValue. int as expected.
var d = 2147483648; //Type = uint. Value = int.MaxValue + 1. uint is fine but could have been long?
var e = 4294967296; //Type = long. Value = uint.MaxValue + 1. Type is long as expected.
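For reference, here is a minimal, self-contained sketch (my own, not from the original experiment) that reproduces the observations above by printing the runtime type of each variable:

using System;

class LiteralTypes
{
    static void Main()
    {
        var a = 255;
        var b = 32767;
        var c = 2147483647;
        var d = 2147483648;
        var e = 4294967296;

        // GetType() reports the type the compiler inferred for each literal.
        Console.WriteLine(a.GetType()); // System.Int32
        Console.WriteLine(b.GetType()); // System.Int32
        Console.WriteLine(c.GetType()); // System.Int32
        Console.WriteLine(d.GetType()); // System.UInt32
        Console.WriteLine(e.GetType()); // System.Int64
    }
}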
Why is int the default for any number that is between Int32.MinValue and Int32.MaxValue?
Wouldn't it be better to use the smallest possible data type to save memory? (I understand that memory is cheap these days, but saving it still isn't a bad thing, especially when it's this easy to do.)
If the compiler did use the smallest data type, and you had a variable with 255 but knew that later on you would want to store a value like 300, you could just declare it short instead of using var.
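To make that concrete (a hypothetical illustration, variable names are mine): byte cannot hold 300, so if the compiler inferred byte for 255, the later assignment would fail, whereas an explicitly declared short accommodates both values.

byte small = 255;       // 255 is byte.MaxValue, so this fits
// small = 300;         // would not compile: 300 is outside the byte range 0..255
short wideEnough = 255; // chosen explicitly because we know 300 is coming
wideEnough = 300;       // fine: short ranges from -32768 to 32767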
Why is var d = 2147483648 implicitly uint and not long?
Seems as though the compiler will always try to use a 32-bit integer if it can: first signed, then unsigned, then long.
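If long is what you actually want for d, it is easy to ask for explicitly; here is a short sketch of the standard options (the inferred types follow the rules quoted in the answer below):

var d1 = 2147483648;   // inferred as uint
var d2 = 2147483648L;  // the L suffix makes the literal (and therefore d2) a long
long d3 = 2147483648;  // explicit type: the uint constant converts implicitly to long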
Seems as though the compiler will always try to use a 32-bit integer if it can: first signed, then unsigned, then long.
That is exactly right. The C# Language Specification explains that an integer literal with no suffix gets the first of the types int, uint, long, ulong in which its value can be represented, i.e. the smallest type that can hold it, preferring signed over unsigned. Here is the relevant passage from the language specification:
To permit the smallest possible int and long values to be written as decimal integer literals, the following two rules exist:
- When a decimal-integer-literal with the value 2147483648 and no integer-type-suffix appears as the token immediately following a unary minus operator token, the result is a constant of type int with the value −2147483648. In all other situations, such a decimal-integer-literal is of type uint.
- When a decimal-integer-literal with the value 9223372036854775808 and no integer-type-suffix or the integer-type-suffix L or l appears as the token immediately following a unary minus operator token, the result is a constant of type long with the value −9223372036854775808. In all other situations, such a decimal-integer-literal is of type ulong.
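A quick way to see those rules in action (my own sketch; these lines can be dropped into the Main method shown earlier, and the comments reflect my reading of the specification, so verify them against your compiler):

var okInt = -2147483648;            // minus immediately precedes the literal: constant of type int
var notInt = -(2147483648);         // the literal alone is uint; negating a uint yields a long
var minLong = -9223372036854775808; // long.MinValue, by the second rule
var big = 9223372036854775808;      // without the minus, this literal is ulong
Console.WriteLine(notInt.GetType());  // System.Int64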
Note that the language specification mentions your var d = ... example explicitly, requiring the result to be of type uint.