
Why should casting be avoided? [closed]

Tags:

java

c++

c#

casting

I generally avoid casting types as much as possible since I am under the impression that it's poor coding practice and may incur a performance penalty.

But if someone asked me to explain why exactly that is, I would probably look at them like a deer in headlights.

So why/when is casting bad?

Is this general advice for Java, C#, and C++, or does each runtime environment deal with it on its own terms?

Specifics for any language are welcome; for example, why is it bad in C++?

Asked Nov 12 '10 by LoudNPossiblyWrong




1 Answer

You've tagged this with three languages, and the answers are really quite different between the three. Discussion of C++ more or less implies discussion of C casts as well, and that gives (more or less) a fourth answer.

Since it's the one you didn't mention explicitly, I'll start with C. C casts have a number of problems. One is that they can do any of a number of different things. In some cases, the cast does nothing more than tell the compiler (in essence): "shut up, I know what I'm doing" -- i.e., it ensures that even when you do a conversion that could cause problems, the compiler won't warn you about those potential problems. Just for example, char a=(char)123456;. The exact result of this is implementation-defined (it depends on the size and signedness of char), and except in rather strange situations, probably isn't useful. C casts also vary in whether they're something that happens only at compile time (i.e., you're just telling the compiler how to interpret/treat some data) or something that happens at run time (e.g., an actual conversion from double to long).
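
By way of illustration, here is a minimal sketch of both kinds of cast (it compiles as either C or C++; the exact value stored in a is implementation-defined, commonly 64 on machines with an 8-bit char):

    #include <stdio.h>

    int main(void) {
        /* Compile-time only: the cast just silences any warning about
           the narrowing; 123456 doesn't fit in a char, so the stored
           value is implementation-defined. */
        char a = (char)123456;

        /* Run-time work: this cast emits an actual double-to-long
           truncating conversion. */
        double d = 3.9;
        long n = (long)d;   /* n == 3; the fraction is discarded */

        printf("%d %ld\n", a, n);
        return 0;
    }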

C++ attempts to deal with that to at least some extent by adding a number of "new" cast operators, each of which is restricted to only a subset of the capabilities of a C cast. This makes it more difficult to (for example) accidentally do a conversion you really didn't intend -- if you only intend to cast away constness on an object, you can use const_cast, and be sure that the only thing it can affect is whether an object is const, volatile, or not. Conversely, a static_cast is not allowed to affect whether an object is const or volatile. In short, you have most of the same types of capabilities, but they're categorized so one cast can generally only do one kind of conversion, where a single C-style cast can do two or three conversions in one operation. The primary exception is that you can use a dynamic_cast in place of a static_cast in at least some cases, and despite being written as a dynamic_cast, it'll really end up as a static_cast. For example, you can use dynamic_cast to traverse up or down a class hierarchy -- but a cast "up" the hierarchy is always safe, so it can be done statically, while a cast "down" the hierarchy isn't necessarily safe so it's done dynamically.
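
A short sketch of the named casts in action (the class names are just illustrative):

    #include <iostream>

    struct Base { virtual ~Base() = default; };
    struct Derived : Base {};

    int main() {
        // const_cast can only add or remove const/volatile, nothing else
        // (writing through mutable_ref would still be undefined behavior).
        const int c = 42;
        int& mutable_ref = const_cast<int&>(c);

        // static_cast does compile-time-checked conversions, and is not
        // allowed to touch const-ness.
        double d = 3.14;
        int i = static_cast<int>(d);

        // A cast "up" the hierarchy is always safe, so this dynamic_cast
        // can be resolved statically.
        Derived der;
        Base* up = dynamic_cast<Base*>(&der);

        // A cast "down" isn't necessarily safe, so it's checked at run
        // time: here the object isn't really a Derived, so we get null.
        Base base;
        Derived* down = dynamic_cast<Derived*>(&base);
        if (!down) std::cout << "downcast failed\n";

        (void)mutable_ref; (void)i; (void)up;
    }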

Java and C# are much more similar to each other. In particular, with both of them casting is (virtually?) always a run-time operation. In terms of the C++ cast operators, it's usually closest to a dynamic_cast in terms of what's really done -- i.e., when you attempt to cast an object to some target type, the compiler inserts a run-time check to see whether that conversion is allowed, and throws an exception if it's not. The exact details (e.g., the name used for the "bad cast" exception) vary, but the basic principle remains mostly similar (though, if memory serves, Java does make casts applied to the few non-object types like int much closer to C casts -- but these types are used rarely enough that 1) I don't remember that for sure, and 2) even if it's true, it doesn't matter much anyway).
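
Staying with C++ for the example, the reference form of dynamic_cast is a reasonable stand-in for what Java and C# do on every object cast: check the actual type at run time and throw on failure (std::bad_cast here, ClassCastException or InvalidCastException there):

    #include <iostream>
    #include <typeinfo>

    struct Animal { virtual ~Animal() = default; };
    struct Dog : Animal {};
    struct Cat : Animal {};

    int main() {
        Cat cat;
        Animal& a = cat;
        try {
            // The object behind 'a' is really a Cat, so this checked
            // downcast fails and throws, much like a bad Java/C# cast.
            Dog& d = dynamic_cast<Dog&>(a);
            (void)d;
        } catch (const std::bad_cast& e) {
            std::cout << "bad cast: " << e.what() << '\n';
        }
    }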

Looking at things more generally, the situation's pretty simple (at least IMO): a cast (obviously enough) means you're converting something from one type to another. When/if you do that, it raises the question "Why?" If you really want something to be a particular type, why didn't you define it to be that type to start with? That's not to say there's never a reason to do such a conversion, but anytime it happens, it should prompt the question of whether you could re-design the code so the correct type was used throughout. Even seemingly innocuous conversions (e.g., between integer and floating point) should be examined much more closely than is common. Despite their seeming similarity, integers should really be used for "counted" types of things and floating point for "measured" kinds of things. Ignoring the distinction is what leads to some of the crazy statements like "the average American family has 1.8 children." Even though we can all see how that happens, the fact is that no family has 1.8 children. They might have 1 or they might have 2 or they might have more than that -- but never 1.8.
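
A tiny illustration (the numbers are invented so the average lands on 1.8): the counts stay integers throughout, and the conversion to floating point happens only at the single point where a "measured" statistic is deliberately computed:

    #include <iostream>
    #include <vector>

    int main() {
        // Counted data: an int per family is the right type throughout.
        std::vector<int> children = {1, 2, 2, 2, 2};

        int total = 0;
        for (int c : children) total += c;

        // The one deliberate conversion, at the boundary where we compute
        // a statistic -- a "measured" quantity, not a count.
        double average = static_cast<double>(total) / children.size();

        std::cout << average << '\n';   // prints 1.8; no family has 1.8 children
    }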

Answered Oct 14 '22 by Jerry Coffin