I've read that using short instead of int actually creates an inefficiency for the compiler, because C's integer promotions mean it has to work with int regardless. Is this true for 16-bit microprocessors?

Another question: if I have an array of 1s and 0s, is it most efficient to use uint8_t or unsigned char on this 16-bit microprocessor? Or is there still an issue with the values being converted back to int?

Please help me clear up this muddy issue in my mind. Thanks!
Is it really an issue? On most 16-bit systems I've heard of, int and short end up being the same size (16 bits), so there shouldn't really be a difference in practice.
If uint8_t exists on a system, it's going to be synonymous with unsigned char. unsigned char will be the smallest unsigned type available on the system. If it's any more than 8 bits, there will be no uint8_t. If it's less than 8 bits, then it's violating the standard. There will be no efficiency difference, since one has to be defined in terms of the other.
Lastly, do you really need to worry about these kinds of microscopic differences? If you do, you'll need to peek at the assembly output or (more likely) profile and see which one is faster.
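If you do go the profiling route, the idea is just to time the same work with each element type (a rough hosted sketch using clock(); on a bare-metal 16-bit part you would read a hardware cycle counter instead, and the array size here is arbitrary):

```c
#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define N 4096u

static uint8_t  flags8[N];   /* flags stored as bytes               */
static unsigned flagsw[N];   /* the same flags in native-width ints */

static unsigned long sum8(void)
{
    unsigned long s = 0;
    for (unsigned i = 0; i < N; i++)
        s += flags8[i];      /* each element is promoted before the add */
    return s;
}

static unsigned long sumw(void)
{
    unsigned long s = 0;
    for (unsigned i = 0; i < N; i++)
        s += flagsw[i];
    return s;
}

int main(void)
{
    clock_t t0 = clock();
    unsigned long a = sum8();
    clock_t t1 = clock();
    unsigned long b = sumw();
    clock_t t2 = clock();

    printf("uint8_t:  sum=%lu ticks=%ld\n", a, (long)(t1 - t0));
    printf("unsigned: sum=%lu ticks=%ld\n", b, (long)(t2 - t1));
    return 0;
}
```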
On a Blackfin it is probably not a simple answer whether 32-bit or 16-bit types will generally give higher performance, since it supports 16-, 32- and 64-bit instructions and has two 16-bit MACs. It will depend on the operations, but I suggest that you trust your compiler's optimiser to make such decisions; it knows more about the processor's instruction timing and scheduling than you probably care to.
That said, it may be that in your compiler int and short are the same size in any case. Consult the documentation, or test with sizeof, or look in the limits.h header for the numeric ranges from which the widths of the various types can be inferred.
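For example, something along these lines will report the widths on your target (a hosted sketch; on a part without printf you could inspect the same values in a debugger):

```c
#include <limits.h>
#include <stdio.h>

int main(void)
{
    printf("sizeof(short)=%u  sizeof(int)=%u  sizeof(long)=%u\n",
           (unsigned)sizeof(short), (unsigned)sizeof(int), (unsigned)sizeof(long));
    printf("SHRT_MAX=%d  INT_MAX=%d  LONG_MAX=%ld\n",
           SHRT_MAX, INT_MAX, LONG_MAX);
    return 0;
}
```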
If you truly want to restrict the data type size, use the stdint.h types such as int16_t.
stdint.h also defines the fastest minimum-width integer types such as int_fast16_t; these guarantee a minimum width, but will use a larger type if that is faster on your target. This is probably the most portable way of solving your problem, but it relies on the implementer having made good decisions about the appropriate types to use. On most architectures it makes little or no difference, but on RISC and DSP architectures that may not be the case. It may also not be the case that a particular size is fastest in all circumstances, and that is probably especially true of the Blackfin.
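A typical pattern (a sketch; the names and buffer size are just for illustration) is to use the exact-width types for stored data and the fast types for loop counters and intermediate arithmetic:

```c
#include <stdint.h>

#define SAMPLES 256

/* Exact-width type where the layout of the data matters (buffers, I/O). */
static int16_t samples[SAMPLES];

int32_t sum_samples(void)
{
    /* Fast types for working variables: at least 16/32 bits wide, but the
     * implementation may choose wider types if that is quicker here. */
    int_fast32_t sum = 0;
    for (int_fast16_t i = 0; i < SAMPLES; i++)
        sum += samples[i];
    return (int32_t)sum;
}
```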
In some cases (where large amounts of data are transferred to and from external memory), the fastest size is likely to be the one that matches the data bus width.