 

Why does Int32.Equals(Int16) return true where the reverse doesn't? [duplicate]

Tags: c#, .net

.NET's Equals() returns different results even though we are comparing the same values. Can someone explain to me why that is the case?

class Program
{
    static void Main(string[] args)
    {
        Int16 a = 1;
        Int32 b = 1;

        var test1 = b.Equals(a);    //true
        var test2 = a.Equals(b);    //false
    }
}

Does it have anything to do with the range of the types we are comparing?

asked Aug 20 '15 by AbP



2 Answers

Int32 has an Equals(Int32) overload, and an Int16 can be implicitly converted to an equivalent Int32. With that overload, it's comparing two 32-bit integers; it checks for value equality and naturally returns true.

Int16 has its own Equals(Int16) overload, but there is no implicit conversion from an Int32 to an Int16 (because an Int32 can hold values that are out of range for a 16-bit integer). Overload resolution therefore skips that overload and falls back to the Equals(Object) overload. Its documentation reports:

true if obj is an instance of Int16 and equals the value of this instance; otherwise, false.

But the value we're passing in, while it "equals the value of this instance" (1 == 1), is not an instance of Int16; it's an Int32.
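
To see why the conversion in the other direction is only explicit, note that casting an Int32 down to an Int16 can silently lose data. A small sketch (the values here are purely illustrative):

Int32 big = 70000;        //fits comfortably in a 32-bit integer
Int16 small = (Int16)big; //explicit cast required; value wraps to 4464 in an unchecked context

Because such a conversion can change the value, the compiler will not perform it implicitly during overload resolution.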


The equivalent code for the b.Equals(a) that you have would look like this:

Int16 a = 1;
Int32 b = 1;

Int32 a_As_32Bit = a; //implicit conversion from 16-bit to 32-bit

var test1 = b.Equals(a_As_32Bit); //calls Int32.Equals(Int32)

Now it's clear we're comparing both numbers as 32-bit integers.

The equivalent code for the a.Equals(b) would look like this:

Int16 a = 1;
Int32 b = 1;

object b_As_Object = b; //boxes our 32-bit Int32 into a System.Object

var test2 = a.Equals(b_As_Object); //calls Int16.Equals(Object)

Now it's clear we're calling a different equality method. Internally, that equality method is doing more or less this:

Int16 a = 1;
Int32 b = 1;

object b_As_Object = b;

bool test2;
if (b_As_Object is Int16) //but it's not, it's an Int32
{
    test2 = ((Int16)b_As_Object) == a;
}
else
{
    test2 = false; //and this is where your confusing result is returned
}
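
If you need the comparison from the Int16 side to return true, one option (just a sketch; the cast assumes the Int32 value actually fits in 16 bits) is to cast the argument down yourself so the Equals(Int16) overload is selected:

Int16 a = 1;
Int32 b = 1;

var test3 = a.Equals((Int16)b); //explicit cast, so Int16.Equals(Int16) is called: true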
answered by Chris Sinclair


You should use the equality operator (==), because Equals() methods are not supposed to return true for objects of different types. Also, there is no type in your code that both short and int derive from (other than object). Changing to this returns true:

var test2 = a == b;    //true
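
With ==, the compiler implicitly widens the Int16 operand to Int32 and compares the two values, so the result is the same in either direction. A minimal sketch:

Int16 a = 1;
Int32 b = 1;

var x = a == b;    //true: a is widened to Int32 before the comparison
var y = b == a;    //true: the same widening applies, so the operator is symmetric here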
answered by Salah Akbari