 

Why do C# containers and GUI classes use int and not uint for size related members?

Tags: c#, .net

I usually program in C++, but for school I have to do a project in C#.

So I went ahead and coded the way I was used to in C++, but was surprised when the compiler complained about code like the following:

        const uint size = 10;
        ArrayList myarray = new ArrayList(size); // Arg 1: cannot convert from 'uint' to 'int'

OK, they expect int as the argument type, but why? I would feel much more comfortable with uint as the argument type, because uint fits much better in this case.

Why do they use int as the argument type pretty much everywhere in the .NET library, even though in many cases negative numbers don't make any sense (since no container nor GUI element can have a negative size)?

If the reason they used int is that they didn't expect the average user to care about signedness, why didn't they additionally add overloads for uint?

Is this just MS not caring about sign correctness, or are there cases where negative values make some sense or carry some information (an error code?) for container/GUI widget/... sizes?
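For reference, both obvious workarounds compile; here is a minimal sketch, assuming ArrayList is really what's wanted (List&lt;T&gt; would be the modern choice):

    using System.Collections;

    class Demo
    {
        static void Main()
        {
            const uint size = 10;

            // Workaround 1: explicit cast; checked() throws OverflowException
            // instead of silently wrapping if the value ever exceeded int.MaxValue.
            ArrayList a = new ArrayList(checked((int)size));

            // Workaround 2: declare the size as int in the first place.
            const int size2 = 10;
            ArrayList b = new ArrayList(size2);
        }
    }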

asked Apr 24 '10 by smerlin




2 Answers

I would imagine that Microsoft chose Int32 because UInt32 is not CLS-compliant (in other words, not all languages that use the .NET Framework support unsigned integers).
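You can see the CLS rule in action: mark the assembly CLS-compliant and expose uint in a public signature, and the C# compiler warns. A minimal sketch (the class and method names are made up for illustration):

    using System;

    [assembly: CLSCompliant(true)]

    public class SizeDemo
    {
        // warning CS3001: Argument type 'uint' is not CLS-compliant
        public void SetCapacity(uint capacity) { }

        // No warning: CLS rules only apply to the externally visible
        // surface, so unsigned types are fine on internal members.
        internal void Grow(uint newCapacity) { }
    }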

answered Oct 26 '22 by Andrew Hare


Because unsigned integers are not CLS-compliant. There are languages that lack support for them; Java would be an example.
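In practice the BCL convention is to accept int and reject negative values at runtime. A container following the same rule might look roughly like this (a sketch with a hypothetical MyBuffer type, not actual BCL source):

    using System;

    public class MyBuffer
    {
        private readonly object[] items;

        // BCL convention: take int so every CLS language can call this,
        // and validate at runtime instead of encoding "non-negative"
        // in the parameter type.
        public MyBuffer(int capacity)
        {
            if (capacity < 0)
                throw new ArgumentOutOfRangeException("capacity");
            items = new object[capacity];
        }

        public int Capacity
        {
            get { return items.Length; }
        }
    }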

answered Oct 26 '22 by Hans Passant