 

Why should I use int instead of a byte or short in C#

I have found a few threads regarding this issue. Most people appear to favor using int in their C# code across the board, even when a byte or smallint would handle the data, unless it's a mobile app. I don't understand why. Doesn't it make more sense to define your C# datatype as the same datatype that would be in your data storage solution?

My Premise: If I am using a typed dataset, Linq2SQL classes, or POCOs, one way or another I will run into compiler datatype conversion issues if I don't keep my datatypes in sync across my tiers. I don't really like doing System.Convert all the time just because it was easier to use int across the board in C# code. I have always used whatever the smallest datatype is that's needed to handle the data, in the database as well as in code, to keep my interface to the database clean. So I would bet 75% of my C# code is using byte or short as opposed to int, because that is what is in the database.
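For example (a hypothetical sketch; the column mapping and variable names are mine, not from an actual schema), a TINYINT column maps to byte in C#, and since byte arithmetic promotes to int, the casts and conversions show up immediately:

    using System;

    class Demo
    {
        static void Main()
        {
            byte quantity = 5;                    // e.g. mapped from a TINYINT column
            // quantity = quantity + 1;           // compile error: byte + int yields int
            quantity = (byte)(quantity + 1);      // explicit cast needed to assign back
            short lineTotal = Convert.ToInt16(quantity * 3); // or a System.Convert call
            Console.WriteLine(lineTotal);         // 18
        }
    }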

Possibilities: Does this mean that most people who just use int for everything in code also use the int datatype for their SQL storage datatypes and couldn't care less about the overall size of their database, or do they do System.Convert in code wherever applicable?

Why I care: I have worked on my own forever and I just want to be familiar with best practices and standard coding conventions.

asked Jul 08 '09 by Breadtruck

People also ask

Should I use int or short?

Conclusion: Use int unless conserving memory is critical, or your program uses a lot of memory (e.g. many arrays). In that case, use short.

Should I use byte or int?

Items are stored in files as a sequence of bytes, so if you're worried about disk space you should use bytes. Items are processed by your CPU as 32- or 64-bit integers (depending on your processor), so anything smaller than that will be "upgraded" to a 32- or 64-bit representation for runtime computation.

Is int faster than short?

A CPU works more efficiently when the data width equals the native CPU register width. This applies indirectly to .NET code as well. In most cases, using int in a loop is more efficient than using short.

What is the difference between int and byte?

A byte is the smallest unit in which data is stored in memory: 8 bits. An int is a wider integer type (32 bits in C#) that matches the size the CPU naturally computes with.


1 Answer

Performance-wise, an int is faster in almost all cases. The CPU is designed to work efficiently with 32-bit values.

Shorter values are complicated to deal with. To read a single byte, say, the CPU has to read the 32-bit block that contains it, and then mask out the upper 24 bits.

To write a byte, it has to read the destination 32-bit block, overwrite the lower 8 bits with the desired byte value, and write the entire 32-bit block back again.
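As a rough illustration, here are those mask-and-merge steps written out in C# (this is just a sketch of what the hardware does, not actual JIT output):

    using System;

    class Demo
    {
        static void Main()
        {
            uint word = 0x12345678;                 // the 32-bit block holding our byte

            // Read: fetch the whole 32-bit block, then mask off the upper 24 bits.
            byte value = (byte)(word & 0xFFu);      // 0x78

            // Write: clear the low 8 bits, merge in the new byte value,
            // and store the entire 32-bit block back.
            word = (word & 0xFFFFFF00u) | 0x9Au;

            Console.WriteLine($"{value:X2} {word:X8}"); // 78 1234569A
        }
    }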

Space-wise, of course, you save a few bytes by using smaller datatypes. So if you're building a table with a few million rows, shorter datatypes may be worth considering. (And the same can be a good reason to use smaller datatypes in your database.)
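Back-of-the-envelope, using C#'s fixed primitive sizes (real tables and objects add padding and overhead, so treat the saving as an upper bound):

    using System;

    class Demo
    {
        static void Main()
        {
            const long rows = 10_000_000;
            Console.WriteLine($"int:   {rows * sizeof(int),12:N0} bytes");   // 40,000,000
            Console.WriteLine($"short: {rows * sizeof(short),12:N0} bytes"); // 20,000,000
            Console.WriteLine($"byte:  {rows * sizeof(byte),12:N0} bytes");  // 10,000,000
        }
    }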

And correctness-wise, an int doesn't overflow easily. What if you think your value is going to fit within a byte, and then at some point in the future some harmless-looking change to the code means larger values get stored into it?
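A minimal sketch of that failure mode: C# arithmetic is unchecked by default, so a byte wraps silently instead of failing:

    using System;

    class Demo
    {
        static void Main()
        {
            byte counter = 255;
            counter++;                       // wraps silently to 0 (unchecked is the default)
            Console.WriteLine(counter);      // 0

            // checked { counter++; }        // with overflow checking enabled, this
                                             // would throw OverflowException instead
        }
    }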

Those are some of the reasons why int should be your default datatype for all integral data. Only use byte if you actually want to store machine bytes. Only use shorts if you're dealing with a file format or protocol or similar that actually specifies 16-bit integer values. If you're just dealing with integers in general, make them ints.
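Spelled out in code (the names and the two-byte field here are illustrative, not from the question):

    using System;

    class Demo
    {
        static void Main()
        {
            int count = 12345;                    // general-purpose integers: int
            byte[] packet = { 0x10, 0x27 };       // raw machine bytes: byte
            short field = BitConverter.ToInt16(packet, 0); // a wire field fixed at 16 bits
            Console.WriteLine($"{count} {field}"); // 12345 10000 (on little-endian)
        }
    }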

answered Sep 25 '22 by jalf