I am trying to build a Java program using the Hadoop 3.2 client. Will it work with Hadoop 2.x clusters, or is that not supported? Thank you for sharing your experience.
With Hadoop, as with most Apache projects, compatibility is only guaranteed between minor versions within the same major version line. So you should not expect a 3.2 client to work reliably against a 2.x Hadoop cluster.
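For that reason, the safest approach is to build against a client from the same major line as the cluster. As a minimal sketch (the exact version numbers are just examples, not recommendations), a Maven build pins the client artifact like this:

```xml
<!-- Illustrative pom.xml fragment: match the client's major version
     to the cluster you will talk to. 3.2.4 is an example version. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <version>3.2.4</version>
</dependency>
```

If the cluster will stay on 2.x for a while, building against a matching 2.x `hadoop-client` instead avoids the cross-major-version question entirely.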
Cloudera's blog post "Upgrading your clusters and workloads from Apache Hadoop 2 to Apache Hadoop 3", written by Suma Shivaprasad, also mentions the following:
Compatibility with Hadoop 2
Wire compatibility
- Hadoop 3 preserves wire compatibility with Hadoop 2 clients
- Distcp/WebHDFS compatibility is preserved
API compatibility
Hadoop 3 doesn’t preserve full API-level compatibility due to the following changes:
- Classpath – Dependency version bumps like guava
- Removal of deprecated APIs and tools
- Shell script rewrites
- Incompatible bug fixes
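On the classpath point specifically, Hadoop 3 introduced shaded client artifacts (`hadoop-client-api` and `hadoop-client-runtime`) that relocate Hadoop's own third-party dependencies such as Guava, so they cannot clash with the versions your application uses. A sketch of how a build might declare them (version number illustrative):

```xml
<!-- Illustrative pom.xml fragment: the Hadoop 3 shaded clients hide
     Hadoop's internal dependencies (e.g. Guava) behind relocated
     packages, avoiding classpath conflicts with your own versions. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client-api</artifactId>
  <version>3.2.4</version>
</dependency>
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client-runtime</artifactId>
  <version>3.2.4</version>
  <scope>runtime</scope>
</dependency>
```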
Note that the wire-compatibility guarantee above covers Hadoop 2 clients talking to a Hadoop 3 cluster, which is the opposite direction from your 3.2-client-to-2.x-cluster case. The post also states:
Migrating Workloads
MapReduce applications
MapReduce is fully binary compatible and workloads should run as is without any changes required.