I've been led to believe that casting can, in certain circumstances, become a measurable hindrance to performance. This may be even more the case when we start dealing with tangled webs of nasty exception throwing/catching.
Since I want to develop better heuristics for my programming, I'm putting this question to the .NET gurus out there: is interface casting faster than class casting?
To give a code example, let's say this exists:
public interface IEntity { IParent DaddyMommy { get; } }
public interface IParent : IEntity { }
public class Parent : Entity, IParent { }
public class Entity : IEntity
{
    public IParent DaddyMommy { get; protected set; }

    public IParent AdamEve_Interfaces
    {
        get
        {
            IEntity e = this;
            while (e.DaddyMommy != null)
                e = e.DaddyMommy as IEntity;
            return e as IParent;
        }
    }

    public Parent AdamEve_Classes
    {
        get
        {
            Entity e = this;
            while (e.DaddyMommy != null)
                e = e.DaddyMommy as Entity;
            return e as Parent;
        }
    }
}
So, is AdamEve_Interfaces faster than AdamEve_Classes? If so, by how much? And, if you know the answer, why?
A number of the answers here suggest benchmarking, which is a step in the right direction, but only the first step in the journey.
My team has done a great deal of profiling and benchmarking in this area. The short version is: yes, there are situations in which interfaces impose a small but measurable performance cost. However, the actual cost depends on a great many factors, including how many interfaces are supported, how many of those interfaces a given reference is cast to, what the pattern of accesses is, and so on. The CLR has a great many heuristics in place designed to speed up interface access in common cases.
If you are benchmarking one of those common cases, but your actual program falls into a less common case, then your benchmarking is actively harmful because it is giving you data that is misleading.
Far better to do realistic performance measurements on real code. Use a profiler, write the code both ways, and see whether either way is measurably, repeatably faster in a way that is visible and relevant to the user.
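To make that concrete for the code in the question, here is a minimal sketch of the "write it both ways and measure" approach using System.Diagnostics.Stopwatch. The types restate the ones from the question, with a hypothetical constructor added so a parent chain can be built; the chain length, iteration count, and Stopwatch harness are all my assumptions, and a dedicated tool such as BenchmarkDotNet would give more trustworthy numbers:

using System;
using System.Diagnostics;

public interface IEntity { IParent DaddyMommy { get; } }
public interface IParent : IEntity { }
public class Parent : Entity, IParent
{
    public Parent(IParent parent = null) : base(parent) { }
}
public class Entity : IEntity
{
    public IParent DaddyMommy { get; protected set; }
    public Entity(IParent parent = null) { DaddyMommy = parent; }

    public IParent AdamEve_Interfaces
    {
        get
        {
            IEntity e = this;
            while (e.DaddyMommy != null)
                e = e.DaddyMommy as IEntity;
            return e as IParent;
        }
    }

    public Parent AdamEve_Classes
    {
        get
        {
            Entity e = this;
            while (e.DaddyMommy != null)
                e = e.DaddyMommy as Entity;
            return e as Parent;
        }
    }
}

public static class CastBenchmark
{
    public static void Main()
    {
        // Build a chain of 100 parents so each property walk performs many casts.
        IParent root = new Parent();
        for (int i = 0; i < 100; i++)
            root = new Parent(root);
        var leaf = new Entity(root);

        // Warm up both paths so JIT compilation doesn't skew the first timing.
        IParent r1 = leaf.AdamEve_Interfaces;
        Parent r2 = leaf.AdamEve_Classes;

        const int iterations = 100_000;
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
            r1 = leaf.AdamEve_Interfaces;
        sw.Stop();
        Console.WriteLine($"Interface walk: {sw.ElapsedMilliseconds} ms");

        sw.Restart();
        for (int i = 0; i < iterations; i++)
            r2 = leaf.AdamEve_Classes;
        sw.Stop();
        Console.WriteLine($"Class walk:     {sw.ElapsedMilliseconds} ms");

        // Use the results so the JIT cannot eliminate the loops entirely.
        Console.WriteLine(ReferenceEquals(r1, r2));
    }
}

On a toy chain like this, any difference may well be lost in the noise, which is exactly the point above: what matters is whether the difference is measurable and relevant in your real program, not in a micro-benchmark.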
As for your reference to throwing and catching: the performance cost of throwing and catching should be irrelevant. Exceptions are by definition exceptional, not common. Furthermore, an exception usually indicates that something is about to halt, and it rarely matters whether something halts as fast as possible. If you are in a situation where your performance is gated by exceptions, then you have a bigger problem to solve: stop throwing so many exceptions. Throwing an exception should be an extremely rare event.
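For example, the usual way to apply that last piece of advice in C# is to replace catch-as-control-flow with a Try-pattern API. A small illustrative sketch (the method names are mine; int.Parse and int.TryParse are the real BCL methods):

using System;

public static class ParseExample
{
    // Anti-pattern: an exception models an expected, common failure.
    public static int ParseOrZero_Exceptions(string s)
    {
        try { return int.Parse(s); }
        catch (FormatException) { return 0; }
    }

    // Preferred: the Try-pattern turns the common failure into a cheap branch.
    public static int ParseOrZero_TryPattern(string s)
    {
        return int.TryParse(s, out int value) ? value : 0;
    }
}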