 

.NET primitives and type hierarchies, why was it designed like this?

Tags: c#, .net, primitive

I would like to understand why .NET has nine integer types: Char, Byte, SByte, Int16, UInt16, Int32, UInt32, Int64, and UInt64; plus other numeric types: Single, Double, Decimal; and why none of these types is related to any other.

When I first started coding in C# I thought "cool, there's a uint type, I'm going to use that when negative values are not allowed". Then I realized that no API used uint; they all used int, and since uint is not derived from int, a conversion was needed.
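For example, here's a minimal sketch of the friction I mean (the names are just made up for illustration):

```csharp
using System;

class UIntFriction
{
    static void Main()
    {
        uint count = 42u;

        // int length = count;   // compile error: cannot implicitly convert uint to int
        int length = (int)count; // explicit, potentially narrowing, cast is required

        // Array lengths, List<T>.Count, string.Length, etc. are all int, not uint:
        int[] items = new int[length];
        Console.WriteLine(items.Length); // 42
    }
}
```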

What are the real-world applications of these types? Why not have, instead, integer and positiveInteger? Those are types I can understand. A person's age in years is a positiveInteger, and since positiveInteger is a subset of integer there's no need for conversion whenever integer is expected.

The following is a diagram of the type hierarchy in XPath 2.0 and XQuery 1.0. If you look under xs:anyAtomicType you can see the numeric hierarchy decimal > integer > long > int > short > byte. Why wasn't .NET designed like this? Will the new framework "Oslo" be any different?

[Diagram: XPath 2.0 / XQuery 1.0 numeric type hierarchy]

asked Nov 05 '09 by Max Toro


2 Answers

My guess would be that the underlying hardware breaks that class hierarchy. There are (perhaps surprisingly) many times when you care that a UInt32 is 4 bytes big and unsigned, so a UInt32 is not a kind of Int32, nor is an Int32 a kind of Int64.

And you almost always care about the difference between an int and a float.

Fundamentally, inheritance & the class hierarchy are not the same as mathematical set inclusion. The fact that the values a UInt32 can hold are a strict subset of the values an Int64 can hold does not mean that a UInt32 is a type of Int64. Less obviously, an Int32 is not a type of Int64 - even though there's no conceptual difference between them, their underlying representations are different (4 bytes versus 8 bytes). Decimals are even more different.
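Here's a small C# sketch of what "different representations" means in practice (the class name and printed values are illustrative, assuming a typical .NET runtime):

```csharp
using System;

class RepresentationDemo
{
    static void Main()
    {
        // Same width, different interpretation: the 4 bytes that mean -1 as an
        // Int32 mean 4294967295 as a UInt32.
        int negative = -1;
        uint reinterpreted = unchecked((uint)negative);
        Console.WriteLine(reinterpreted);   // 4294967295

        // Different widths: Int32 is 4 bytes, Int64 is 8 bytes.
        Console.WriteLine(sizeof(int));     // 4
        Console.WriteLine(sizeof(long));    // 8

        // Decimal uses a different layout again: a 128-bit scaled integer.
        Console.WriteLine(sizeof(decimal)); // 16
    }
}
```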

XPath is different: the representations for all the numeric types are fundamentally the same - a string of ASCII digits. There, the difference between a short and a long is one of possible range rather than representation - "123" is both a valid representation of a short and a valid representation of a long with the same value.
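The closest C# analogy I can think of is parsing the same lexical form into differently sized types (a rough sketch, not anything from the XPath spec itself):

```csharp
using System;

class TextualRepresentation
{
    static void Main()
    {
        // In XML Schema the lexical form is shared: "123" is a valid xs:short,
        // xs:int and xs:long. In .NET the same text can be parsed into any of
        // the corresponding types, but the resulting values have different
        // in-memory representations.
        string lexical = "123";
        short asShort = short.Parse(lexical);
        int asInt = int.Parse(lexical);
        long asLong = long.Parse(lexical);
        Console.WriteLine(asShort == asInt && asInt == asLong); // True
    }
}
```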

answered by RAOF


Decimal is intended for calculations that need exact decimal precision (basically, money). See here: http://msdn.microsoft.com/en-us/library/364x0z75(VS.80).aspx

Singles/Doubles are different from decimals, because they're intended to be an approximation (basically, for scientific calculations).

That's why they're not related.
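A quick sketch of the difference (the exact string a double prints can vary between .NET versions):

```csharp
using System;

class DecimalVsDouble
{
    static void Main()
    {
        // Binary floating point can only approximate 0.1, so error accumulates:
        double d = 0.1 + 0.2;
        Console.WriteLine(d == 0.3);  // False
        Console.WriteLine(d);         // 0.30000000000000004 on .NET Core 3.0+

        // Decimal stores base-10 digits exactly, which is what you want for money:
        decimal m = 0.1m + 0.2m;
        Console.WriteLine(m == 0.3m); // True
        Console.WriteLine(m);         // 0.3
    }
}
```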

As for bytes and chars, they're totally different: a byte is 0-255, whereas a char is a character, and can therefore store Unicode characters (there are a lot more than 255 of them!).
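For example (the euro sign is just an arbitrary character above 255):

```csharp
using System;

class CharVsByte
{
    static void Main()
    {
        // A char is a 16-bit UTF-16 code unit, so it can hold characters far
        // beyond the 0-255 range of a byte.
        char euro = '\u20AC';           // '€' (EURO SIGN)
        Console.WriteLine((int)euro);   // 8364 - doesn't fit in a byte

        byte b = 200;                   // fine
        // byte tooBig = 8364;          // compile error: constant out of range
        Console.WriteLine(b);           // 200
    }
}
```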

Uints and ints don't convert automatically, because they can each store values that are impossible for the other (uints have twice the positive range of ints).
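To see the ranges concretely:

```csharp
using System;

class Ranges
{
    static void Main()
    {
        // Each type can hold values the other cannot, so neither direction is
        // an implicit conversion:
        Console.WriteLine(int.MinValue);  // -2147483648 (no uint can hold this)
        Console.WriteLine(int.MaxValue);  //  2147483647
        Console.WriteLine(uint.MaxValue); //  4294967295 (no int can hold this)
    }
}
```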

Once you get the hang of it all, it actually does make a lot of sense.

As for your ages thing, I'd simply use an int ;)

answered by Chris