
Does double have a greater range than long?

Tags: c#, types, size

In an article on MSDN, it states that the double data type has a range of "-1.79769313486232e308 .. 1.79769313486232e308", whereas the long data type only has a range of "-9,223,372,036,854,775,808 .. 9,223,372,036,854,775,807". How can a double represent so much larger a range than a long if they are both 64 bits in size?

http://msdn.microsoft.com/en-us/library/cs7y5x0x(v=vs.90).aspx

asked Oct 23 '12 by Nyx

2 Answers

The number of possible double values and the number of possible long values is the same; they are just distributed differently.*

The longs are uniformly distributed, while the doubles are not. You can read more here. (A small sketch at the end of this answer illustrates the difference.)

I'd write more, but for some reason the cursor is jumping around all over the place on my phone.

Edit: This might actually be more helpful: http://en.wikipedia.org/wiki/Double-precision_floating-point_format#section_1

Edit2: and this is even better: http://blogs.msdn.com/b/dwayneneed/archive/2010/05/07/fun-with-floating-point.aspx

* According to that link, it would seem that there are actually more longs, since some doubles are lost due to the way NaNs and other special numbers are represented.
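To make that concrete, here is a minimal C# sketch (assuming a plain .NET console app; the class name and specific values are only illustrative). Both types occupy 8 bytes, yet at the top of long's range, neighbouring doubles are already 2048 apart:

    using System;

    class DoubleVsLong
    {
        static void Main()
        {
            // Both types occupy the same number of bits.
            Console.WriteLine(sizeof(double));   // 8 (bytes)
            Console.WriteLine(sizeof(long));     // 8 (bytes)

            // Every long is exactly 1 apart from its neighbour; doubles are not.
            long bigLong = long.MaxValue;        // 9,223,372,036,854,775,807
            double asDouble = bigLong;           // rounds to 9,223,372,036,854,775,808
            Console.WriteLine(bigLong);
            Console.WriteLine($"{asDouble:F0}");

            // Reinterpret the bit pattern to find the next representable double:
            // at this magnitude, consecutive doubles are 2048 apart.
            long bits = BitConverter.DoubleToInt64Bits(asDouble);
            double next = BitConverter.Int64BitsToDouble(bits + 1);
            Console.WriteLine($"{next - asDouble:F0}");   // 2048
        }
    }

The same 2^64 bit patterns are simply spent differently: long spends them on consecutive integers, while double spends them on a wider but increasingly sparse set of values.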

answered Oct 18 '22 by will

A simple answer is that double is only accurate to 15-16 significant decimal digits, whereas long (as an integer type) is exact over its entire range, which runs to 19 digits. (Keep in mind that digits and values are semantically different.) A short sketch after the ranges below illustrates this.

double: ±0.00000000000001 to ±99,999,999,999,999.9 with full accuracy; accuracy is lost starting from the 16th significant digit, even though the representable range extends to "-1.79769313486232e308 .. 1.79769313486232e308".

long: -9,223,372,036,854,775,808 to +9,223,372,036,854,775,807

ulong: 0 to 18,446,744,073,709,551,615 (one more digit than long, but the same number of representable values, since the range has simply been shifted to exclude negatives).
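As a rough illustration of that digit limit, here is a small C# sketch (assuming a recent C# compiler for the digit separators; the class name and numbers are only examples). Integers round-trip through double exactly up to about 16 digits, after which the last digits start to drift, while long stays exact across its full 19-digit range:

    using System;

    class DoublePrecision
    {
        static void Main()
        {
            // 2^53 has 16 digits and is still exactly representable...
            double exact = 9_007_199_254_740_992.0;
            Console.WriteLine($"{exact + 1:F0}");   // prints 9007199254740992 -- the +1 is lost

            // ...but a 17-digit integer no longer survives the round trip.
            long big = 12_345_678_901_234_567;
            Console.WriteLine((long)(double)big);   // 12345678901234568 -- last digit off

            // long, by contrast, is exact over its whole 19-digit range.
            Console.WriteLine(long.MaxValue);       // 9223372036854775807
        }
    }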

In general, integer types are preferred over floating-point types, unless you explicitly need a fractional (decimal) representation for some purpose.


In addition, you may know that signed types are preferred over unsigned, since the former are much less bug-prone (consider the statement uint i;, then i - x; where x > i).
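For example (a minimal sketch; the variable names are just illustrative), the unsigned subtraction silently wraps around instead of going negative:

    using System;

    class UnsignedPitfall
    {
        static void Main()
        {
            uint i = 3;
            uint x = 5;
            Console.WriteLine(i - x);    // 4294967294 -- wrapped around instead of -2

            int si = 3;
            int sx = 5;
            Console.WriteLine(si - sx);  // -2, as most code expects
        }
    }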

answered Oct 18 '22 by Ronnie 'Madolite' Solbakken