Assume that we have an interface called Animal that has two methods, move() and makeSound(). This means we can send the messages move() and makeSound() to a variable of type Animal, and we can only assign objects of classes that implement Animal to a variable of type Animal.
Now my question is: could Java have avoided forcing classes that want to use Polymorphism to implement an interface?
For example, why didn't Java implement Polymorphism like the following: we would just create an Animal interface, and then we would be able to assign whatever object we want to a variable of type Animal, as long as that object has the methods move() and makeSound(). For example:
Animal animal1;
/* The Java compiler would check whether Dog has the methods move() and makeSound();
   if yes, compile; if no, report a compilation error */
animal1 = new Dog();
animal1.move();
animal1.makeSound();
Note: I took Java as an example, but I am talking in general about all OOP languages. Also, I know that we can have Polymorphism using a subclass that inherits from a superclass (but this is basically the same idea as using an interface).
Interfaces formalize polymorphism. Interfaces allow us to define polymorphism in a declarative way, unrelated to implementation. Two elements are polymorphic with respect to a set of behaviors if they realize the same interfaces. You always heard that polymorphism was this big benefit of object orientation, but without interfaces there was no way to enforce it, verify it, or even express it, except in informal ways, or language-specific ways.
There are a number of different ways to get polymorphism. The one you are most familiar with is inclusion polymorphism (also known as subtype polymorphism), where the programmer explicitly says "X is-a Y" via some sort of extends clause. You see this in Java and C#; both give you the choice of having such an is-a for both representation and API (extends), or only for API (implements).
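For instance, using the question's Animal example, the explicit "is-a" looks like this (a minimal sketch; the class bodies are made up):

interface Animal {
    void move();
    void makeSound();
}

class Dog implements Animal {          // explicit "is-a" declaration
    public void move()      { System.out.println("runs"); }
    public void makeSound() { System.out.println("woof"); }
}

Animal animal1 = new Dog();            // allowed because Dog declares "implements Animal"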
There is also parametric polymorphism, which you have probably seen as generics: defining a family of types Foo&lt;T&gt; with a single declaration. You see this in Java/C#/Scala (generics), C++ (templates), Haskell (type classes), etc.
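As a minimal Java sketch (Box is a made-up name), one declaration yields a whole family of types such as Box&lt;String&gt; and Box&lt;Integer&gt;:

// One declaration, many types: Box<String>, Box<Integer>, ...
class Box<T> {
    private final T value;
    Box(T value) { this.value = value; }
    T get() { return value; }
}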
Some languages have "duck typing", where, rather than looking at the declaration ("X is-a Y"), they are willing to determine typing structurally. If a contract says "to be an Iterator
, you have to have hasNext()
and next()
methods", then under this interpretation, any class that provides these two methods is an Iterator
, regardless of whether it said so or not. This comports with the case you describe; this was a choice open to the Java designers.
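To see the contrast in Java terms: a class whose methods happen to match java.util.Iterator still is not one, because Java resolves the relationship nominally (Counter is a made-up name):

import java.util.Iterator;

class Counter {                        // note: no "implements Iterator<Integer>"
    private int i = 0;
    public boolean hasNext() { return i < 10; }
    public Integer next()    { return i++; }
}

// Iterator<Integer> it = new Counter();   // compile error in Java: Counter is not
//                                         // an Iterator, despite the matching methods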
Languages with pattern matching or runtime reflection can exhibit a form of ad-hoc polymorphism (also known as data-driven polymorphism), where you can define polymorphic behavior over unrelated types, such as:
import java.util.Collection;

int length(Object o) {
    return switch (o) {
        case String s        -> s.length();
        case Object[] os     -> os.length;
        case Collection<?> c -> c.size();
        // ... further cases for other types ...
        default -> throw new IllegalArgumentException("no length for " + o);
    };
}
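(With the sketch above, length("four") evaluates to 4 and length(List.of(1, 2)) to 2; the default arm exists only to keep the switch exhaustive over Object.)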
Here, length is polymorphic over an ad-hoc set of types.
It is also possible to have an explicit declaration of "X is-a Y" without putting this in the declaration of X. Haskell's type classes do this, where, rather than X declaring "I'm a Y", there's a separate declaration of an instance that explicitly says "X is a Y (and here is how to map X functionality to Y functionality if it is not obvious to the compiler)." Such instances are often called witnesses; it is a witness to the Y-hood of X. Clojure's protocols are similar, and Scala's implicit parameters play a similar role ("find me a witness to CanCopyFrom[A,B], or fail at compile time").
The point of all this is that there are many ways to get polymorphism, and some languages pick their favorite, others support more than one, etc.
If your question is why did Java choose explicit subtyping rather than duck typing, the answer is fairly clear: Java was a language designed for building large systems (as was C++) out of components, and components want strong checking at their boundaries. A loosey-goosey match because the two sides happen to have methods with the same name is a less reliable means of establishing programmer intent than an explicit declaration. Additionally, one of the core design principles of the Java language is "reading code is more important than writing code." It may be more work to declare "implements Iterator" (but not a lot more), but it makes it much clearer to readers what your design intent was.
So, this is a tradeoff: a bit of what we might now call "ceremony" in exchange for greater reliability and clearer capture of design intent.
The approach you're describing is called "structural subtyping", and it is not only possible, but actually in use; for example, it is used by Go and TypeScript.
Per the Go Programming Language Specification:
A variable of interface type can store a value of any type with a method set that is any superset of the interface. […]
A type implements any interface comprising any subset of its methods and may therefore implement several distinct interfaces. For instance, all types implement the empty interface:
interface{}
[link]
Per the TypeScript documentation:
Type compatibility in TypeScript is based on structural subtyping. Structural typing is a way of relating types based solely on their members. This is in contrast with nominal typing. Consider the following code:
interface Named { name: string; }

class Person { name: string; }

let p: Named;
// OK, because of structural typing
p = new Person();
In nominally-typed languages like C# or Java, the equivalent code would be an error because the Person class does not explicitly describe itself as being an implementer of the Named interface. [link]
Note: I took Java as an example, but I am talking in general about all OOP languages.
I'm not sure it's possible to talk "in general about all OOP languages", because there are so many, and they work in many different ways. Your question makes sense for Java, but it wouldn't make sense for Go or TypeScript (since, as you see, each has exactly the feature you'd be claiming it doesn't), nor for non-statically-typed OO as in Python or JavaScript (since they don't have the notion of "a variable of type Animal").
ETA: In a follow-up comment, you write:
Since it was possible for Java to not force classes to [explicitly] implement an interface, then why did Java force classes to [explicitly] implement an interface?
I can't say for certain; the first edition of the Java Language Specification [link] explicitly called this out, but didn't indicate the rationale:
It is not sufficient that the class happen to implement all the abstract methods of the interface; the class or one of its superclasses must actually be declared to implement the interface, or else the class is not considered to implement the interface. [p. 183]
However, I think the main reason was probably that interfaces are intended to have a meaning, which often goes beyond what's explicitly indicated in the method signatures. For example:

java.util.List, in addition to specifying various methods of its own, also specifies the behavior of equals and hashCode, instructing implementations to override the implementations provided by java.lang.Object and implement the specified behavior. If it were possible to "accidentally" implement java.util.List, then that instruction would be meaningless, because implementations might not even "know" that they were implementations.

java.io.Serializable has no methods at all; it's just a "marker" interface to tell the Java Serialization API that this class is OK with being serialized and deserialized. In Go, such an interface would be meaningless, because every type would automatically implement it.
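A minimal sketch of why the marker carries meaning: Java's serialization machinery checks for it at runtime and rejects types that have not opted in (Plain and MarkerDemo are made-up names):

import java.io.*;

class Plain { }                            // does not implement Serializable

class MarkerDemo {
    public static void main(String[] args) throws IOException {
        var out = new ObjectOutputStream(OutputStream.nullOutputStream());
        out.writeObject("ok");             // fine: String implements Serializable
        out.writeObject(new Plain());      // throws java.io.NotSerializableException
    }
}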
Some other (IMHO less-significant) possible reasons:

With explicit declarations, the compiler can accept an assignment like Animal animal = new Cat() after a single lookup, rather than re-checking structurally, method by method, at every such site.

If Animal animal = (Animal) obj;
or if (obj instanceof Animal) were allowed, then the runtime would need to analyze obj's runtime type on the fly to determine whether it conforms to the Animal interface. (This also means that adding a method to the Animal interface could potentially cause runtime failures rather than compile-time failures.)

. . . but, again, this is just me speculating. I think I'm probably in the right ballpark, but a lot of things go into language design, and there could easily have been major considerations that would never occur to me.