
Why are unsigned ints not CLS compliant?

Not all languages have the concept of unsigned ints. For example, VB 6 had no concept of unsigned ints, which I suspect drove the decision of the designers of VB7/7.1 not to implement them either (they are implemented now in VB8).

To quote:

http://msdn.microsoft.com/en-us/library/12a7a7h3.aspx

The CLS was designed to be large enough to include the language constructs that are commonly needed by developers, yet small enough that most languages are able to support it. In addition, any language construct that makes it impossible to rapidly verify the type safety of code was excluded from the CLS so that all CLS-compliant languages can produce verifiable code if they choose to do so.

Update: I did wonder about this some years back, and whilst I can't see why a UInt wouldn't be verifiably type safe, I guess the CLS guys had to have a cut-off point somewhere for the baseline minimum set of value types to support. Also, when you think about the longer term, where more and more languages are being ported to the CLR, why force them to implement unsigned ints to gain CLS compliance if the concept doesn't exist in them at all?
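For a concrete illustration, here is a minimal sketch (the Counter class and its method names are made up) of what the compliance check looks like in practice: once an assembly opts in with [assembly: CLSCompliant(true)], the C# compiler warns about unsigned integers appearing in the public API.

```csharp
using System;

[assembly: CLSCompliant(true)]

public class Counter
{
    // Compiler warning CS3001: "Argument type 'uint' is not CLS-compliant".
    // A consuming language is not required to understand unsigned ints.
    public void Add(uint amount) { }

    // A CLS-compliant alternative: widen to a signed type large enough
    // to hold every 32-bit unsigned value.
    public void AddSigned(long amount) { }
}
```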


Part of the issue, I suspect, revolves around the fact that unsigned integer types in C are required to behave as members of an abstract algebraic ring rather than as numbers [meaning, for example, that if an unsigned 16-bit integer variable equals zero, decrementing it is required to yield 65,535, and if it's equal to 65,535 then incrementing it is required to yield zero]. There are times when such behavior is extremely useful, but numeric types that exhibit such behavior may have gone against the spirit of some languages. I would conjecture that the decision to omit unsigned types probably predates the decision to support both checked and unchecked numeric contexts. Personally, I wish there had been separate integer types for unsigned numbers and algebraic rings; applying a unary minus operator to an unsigned 32-bit number should yield a 64-bit signed result [negating anything other than zero would yield a negative number], but applying a unary minus to a ring type should yield the additive inverse within that ring.
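As a quick sketch of that wraparound behaviour (class and variable names are illustrative), C#'s unchecked and checked contexts make the two outcomes explicit, and negating a uint operand already promotes to a signed 64-bit result:

```csharp
using System;

class WrapDemo
{
    static void Main()
    {
        ushort x = 0;

        // In an unchecked context the value wraps around the ring:
        // decrementing 0 yields 65535.
        unchecked { x--; }
        Console.WriteLine(x);                 // 65535

        // In a checked context the same arithmetic throws instead of wrapping.
        try
        {
            checked { x = (ushort)(x + 1); }  // 65535 + 1 does not fit in ushort
        }
        catch (OverflowException)
        {
            Console.WriteLine("overflow detected");
        }

        // Negating a uint in C# promotes the operand to long,
        // giving a signed 64-bit result rather than a ring element.
        uint u = 5;
        long negated = -u;
        Console.WriteLine(negated);           // -5
    }
}
```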

In any case, the reason unsigned integers are not CLS compliant is that Microsoft decided that languages didn't have to support unsigned integers in order to be considered "CLS compatible".


Unsigned integers are not CLS compliant because they are not supported by every .NET language, so they are not interoperable across languages.


Unsigned ints don't gain you that much in real life; however, having more than one type of int gives you pain, so a lot of languages only have signed ints.

CLS compliance is aimed at allowing a class to be used from lots of languages…

Remember that no one makes you be CLS compliant.

You can still use unsigned ints within a method, or as parameters to a private method, since it is only the public API that CLS compliance restricts, as sketched below.
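Here is a minimal sketch (hypothetical class and method names) of that point: unsigned ints are used freely in a private member while the public API of a CLS-compliant assembly exposes only signed types, and the compiler raises no warning.

```csharp
using System;

[assembly: CLSCompliant(true)]

public class HashHelper
{
    // No warning here: private members are not part of the public surface,
    // so the CLS rules are not applied to them.
    private uint Mix(uint seed) => seed * 2654435761u;

    // The public API exposes only CLS-compliant types.
    public int Hash(int value) => (int)Mix((uint)value);
}
```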