Using this example:
Color.FromArgb(Int32, Int32, Int32)
Creates a Color structure from the specified 8-bit color values (red, green, and blue). The alpha value is implicitly 255 (fully opaque). Although this method allows a 32-bit value to be passed for each color component, the value of each component is limited to 8 bits.
If the value of each component is limited to 8 bits, then why didn't they use Byte instead of Int32?
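To make the question concrete: the compiler happily accepts any Int32 for each component, and the 8-bit limit is only enforced at run time (a minimal sketch, assuming using System.Drawing;):

var ok = Color.FromArgb(255, 0, 128);   // compiles and runs fine
var boom = Color.FromArgb(256, 0, 128); // also compiles, but throws ArgumentException at run time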
In a broader scope, I find people using Int32 very commonly, even when Int16 or Byte would suffice. Is there any particular reason for using Int32 generally over Int16, Byte, etc.?
My guess is that byte is less well supported by some .NET languages. Maybe it was a CLS compliance concern; for the record, Byte itself is CLS compliant, while SByte and the unsigned types are not. These days nobody cares much about CLS compliance anymore, but in the 1.0 days it was an important feature level. Also note that VB.NET did not support unsigned types at all until VB 2005, which is an example of how integer support differed across .NET languages.
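A minimal sketch of the CLS compliance angle: marking an assembly CLS compliant makes the compiler warn about non-compliant types in public signatures. Byte passes; the unsigned types do not.

using System;

[assembly: CLSCompliant(true)]

public static class Demo
{
    public static void TakesByte(byte b) { }  // fine: byte is CLS compliant

    // warning CS3001: Argument type 'uint' is not CLS-compliant
    public static void TakesUInt(uint u) { }
}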
Using int for the constructor is especially weird because the A, R, G and B properties are byte. I consider this to be an API design error.
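The asymmetry is easy to see:

Color c = Color.FromArgb(255, 200, 100, 50); // in: four ints
byte a = c.A;                                // out: bytes
byte r = c.R;
int argb = c.ToArgb();                       // the packed value comes back as a (signed) int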
The Color struct is not particularly beautiful in general. It holds not only ARGB values but also a KnownColor and a name; many concerns have been crammed into this one struct. For fun, the Equals method has a bug: return name.Equals(name);. This is always true, of course. The struct looks hastily done. You can tell from the Equals method that the author did not know that strings have an overloaded equality operator. The == operator of Color is the same 15 lines simply copied over. I guess the true answer to this question is: the intern did it!
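For reference, here is the buggy line next to what was presumably intended, assuming right is the other Color in the decompiled Equals:

// The bug: comparing the field to itself, which is always true.
return name.Equals(name);

// Presumably intended (string's overloaded == would also work):
// return name.Equals(right.name);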
Generally, Int32 is preferred because most arithmetic operations widen their operands to 32 bits, and it is an efficient integer width on common hardware. The shorter integer types are for more specialized uses.
Since it was suggested that this saves downcasting: I do not understand that point. Widening integer conversions are implicit and have no meaningful performance cost (often none).
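Both points in one sketch: byte operands are promoted to int before arithmetic, and the widening direction needs no cast at all.

byte a = 10, b = 20;
int sum = a + b;             // byte + byte yields int; no cast needed
byte narrow = (byte)(a + b); // narrowing back requires an explicit cast (else CS0266)
int wide = a;                // widening byte -> int is implicit and essentially free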
I don't think there is a good reason. First, I thought that digging into the code would provide some insight, but all I can find is that there are checks to ensure that the values of alpha, red, green and blue are within the [0..255] range, throwing an exception if not. Internally, the MakeArgb method is then called, which does use byte:
/// <summary>
/// [...]
/// Although this method allows a 32-bit value
/// to be passed for each component, the value of each
/// component is limited to 8 bits.
/// </summary>
public static Color FromArgb(int alpha, int red, int green, int blue)
{
    // Each int argument is range-checked at run time...
    Color.CheckByte(alpha, "alpha");
    Color.CheckByte(red, "red");
    Color.CheckByte(green, "green");
    Color.CheckByte(blue, "blue");
    // ...and then immediately narrowed to byte.
    return new Color(Color.MakeArgb((byte)alpha, (byte)red, (byte)green, (byte)blue), Color.StateARGBValueValid, null, (KnownColor)0);
}

private static long MakeArgb(byte alpha, byte red, byte green, byte blue)
{
    // Packs the components as 0xAARRGGBB: alpha in bits 24-31, red in 16-23,
    // green in 8-15, blue in 0-7. The "& (ulong)-1" mask is all ones, i.e.
    // a no-op left over from decompilation.
    return (long)((ulong)((int)red << 16 | (int)green << 8 | (int)blue | (int)alpha << 24) & (ulong)-1);
}

private static void CheckByte(int value, string name)
{
    // Rejects anything outside the [0..255] range.
    if (value < 0 || value > 255)
    {
        throw new ArgumentException(SR.GetString("InvalidEx2BoundArgument",
            name,
            value,
            0,
            255));
    }
}
I guess it simply ended up that way in the early days (we're talking .NET 1.0 here) and then it stuck.
Also, there is the FromArgb(int) overload, which lets you set all 32 bits with one 32-bit value. Strangely, this takes an int and not an unsigned int.
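In practice that signed parameter forces an unchecked cast for any color whose alpha byte has the high bit set:

// 0xFF00FF00 is a uint literal, so the cast to int must be unchecked:
Color green = Color.FromArgb(unchecked((int)0xFF00FF00));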