
Can Hadoop 3.2 HDFS client be used to work with Hadoop 2.x HDFS nodes?

I am trying to build a Java program using Hadoop 3.2 client. Will it be able to work with Hadoop 2.x clusters? Or, is it not supported? Thank you for sharing your experience.

Kevin asked Nov 25 '25

1 Answer

With Hadoop, as with most Apache projects, compatibility is only guaranteed between minor releases within the same major version. So you should not expect a 3.2 client to work with a 2.x Hadoop cluster.
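In practice that rule comes down to comparing major version components. A minimal sketch (an illustrative helper, not part of any Hadoop API):

```python
# Hadoop only guarantees compatibility between minor releases within
# the same major version, so compare the major component of the
# client and cluster version strings.
def same_major(client_version: str, cluster_version: str) -> bool:
    return client_version.split(".")[0] == cluster_version.split(".")[0]

print(same_major("3.2.4", "2.10.2"))  # False: 3.x client vs 2.x cluster
print(same_major("2.9.2", "2.10.2"))  # True: both on the 2.x line
```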

Cloudera's blog post Upgrading your clusters and workloads from Apache Hadoop 2 to Apache Hadoop 3, written by Suma Shivaprasad, also mentions the following:

Compatibility with Hadoop 2

Wire compatibility

  • Hadoop 3 preserves wire compatibility with Hadoop 2 clients
  • Distcp/WebHDFS compatibility is preserved
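WebHDFS compatibility holds because WebHDFS is a plain HTTP REST protocol, so the same request shape works against both 2.x and 3.x NameNodes. A hedged sketch that only builds the request URL (host, port, and path below are placeholders, not from the original post):

```python
# WebHDFS request URLs follow the same pattern on Hadoop 2 and 3;
# only the NameNode's default HTTP port differs between versions
# (50070 on 2.x, 9870 on 3.x), which is a deployment detail, not a
# protocol change.
def webhdfs_url(host: str, port: int, path: str, op: str) -> str:
    return f"http://{host}:{port}/webhdfs/v1{path}?op={op}"

print(webhdfs_url("namenode.example.com", 9870, "/user/kevin", "LISTSTATUS"))
# → http://namenode.example.com:9870/webhdfs/v1/user/kevin?op=LISTSTATUS
```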

API compatibility

Hadoop 3 doesn’t preserve full API-level compatibility due to the following changes:

  • Classpath – Dependency version bumps like guava
  • Removal of deprecated APIs and tools
  • Shell script rewrites
  • Incompatible bug fixes
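The classpath issue is the one most likely to bite a Java program: Hadoop 3 bumped dependencies such as guava, so your application's own dependencies can conflict with the client's. Hadoop 3 introduced shaded client artifacts that relocate those transitive dependencies. A sketch of a pom.xml fragment (the version number is a placeholder; pick the release matching your cluster):

```xml
<!-- Shaded Hadoop 3 client artifacts: transitive dependencies like
     guava are relocated, so they do not collide with your own. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client-api</artifactId>
  <version>3.2.4</version>
</dependency>
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client-runtime</artifactId>
  <version>3.2.4</version>
  <scope>runtime</scope>
</dependency>
```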

But it also states:

Migrating Workloads

MapReduce applications

MapReduce is fully binary compatible and workloads should run as is without any changes required.

tk421 answered Nov 27 '25


