 

Propagating custom configuration values in Hadoop

Is there any way to set and (later) get a custom configuration object in Hadoop, during Map/Reduce?

For example, assume an application that preprocesses a large file and dynamically determines some characteristics related to it. Furthermore, assume that those characteristics are saved in a custom Java object (e.g., a Properties object, but not exclusively, since some of them may not be strings) and are subsequently needed by each of the map and reduce tasks.

How could the application "propagate" this configuration, so that each mapper and reducer function can access it, when needed?

One approach could be to use the set(String, String) method of the JobConf class and pass the configuration object serialized as a JSON string via the second parameter. However, this may be too much of a hack, and the appropriate JobConf instance would still have to be accessed by each Mapper and Reducer anyway (e.g., following an approach like the one suggested in an earlier question).
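
Purely as an illustration of the approach just described (not part of the original question), here is a rough sketch assuming a JSON library such as Gson is on the classpath; the key name and the preprocessFile() helper are hypothetical:

JobConf jobConf = new JobConf();
Map<String, Object> characteristics = preprocessFile(); // hypothetical preprocessing step
jobConf.set("my.app.characteristics", new Gson().toJson(characteristics));

// Later, in a Mapper/Reducer that has obtained the JobConf:
Map<String, Object> restored = new Gson().fromJson(
        jobConf.get("my.app.characteristics"),
        new TypeToken<Map<String, Object>>() {}.getType());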

asked Feb 20 '13 by PNS



1 Answer

Unless I'm missing something, if you have a Properties object containing every property you need in your M/R job, you simply need to write the content of the Properties object to the Hadoop Configuration object. For example, something like this:

import java.util.Map.Entry;
import java.util.Properties;
import org.apache.hadoop.conf.Configuration;

Configuration conf = new Configuration();
Properties params = getParameters(); // do whatever you need here to create your object
// Copy every key/value pair from the Properties object into the Configuration
for (Entry<Object, Object> entry : params.entrySet()) {
    String propName = (String) entry.getKey();
    String propValue = (String) entry.getValue();
    conf.set(propName, propValue);
}

Then inside your M/R job, you can use the Context object to get back your Configuration in both the mapper (the map function) and the reducer (the reduce function), like this:

public void map(MD5Hash key, OverlapDataWritable value, Context context)
        throws IOException, InterruptedException {
    Configuration conf = context.getConfiguration();
    String someProperty = conf.get("something");
    ...
}

Note that besides the map and reduce functions, you can also access the Context (and thus the Configuration) in the setup and cleanup methods, which is useful for doing some initialization if needed.
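
For instance, here is a minimal sketch of reading the propagated values once in setup() rather than on every map() call (the class, key and value types, and property name are illustrative, not from the original post):

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class MyMapper extends Mapper<LongWritable, Text, Text, Text> {

    private String someProperty;

    @Override
    protected void setup(Context context) throws IOException, InterruptedException {
        // One-time initialization from the values propagated via the Configuration
        Configuration conf = context.getConfiguration();
        someProperty = conf.get("something");
    }

    @Override
    public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // someProperty is already available here, no need to re-read the Configuration
        context.write(new Text(someProperty), value);
    }
}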

It is also worth mentioning that you could probably call the addResource method of the Configuration object directly to add your properties as an InputStream or a file, but I believe the resource has to be an XML configuration like the regular Hadoop XML configs, so that might just be overkill.
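
For completeness, that would look roughly like this (using org.apache.hadoop.fs.Path; the file path is illustrative, and the file must follow the standard <configuration><property>...</property></configuration> XML layout):

Configuration conf = new Configuration();
// my-job-params.xml must be in the regular Hadoop configuration XML format
conf.addResource(new Path("/local/path/to/my-job-params.xml"));
String someProperty = conf.get("something");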

EDIT: In the case of non-String objects, I would advise using serialization: you can serialize your objects, convert them to Strings (probably encoding them, for example with Base64, as I'm not sure what would happen if you have unusual characters), and then on the mapper/reducer side de-serialize the objects from the Strings retrieved from the properties inside Configuration.
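
A rough sketch of that idea, assuming the object implements java.io.Serializable (the MyParams class, the computeParams() helper, and the "my.app.params" key are placeholders; java.util.Base64 requires Java 8+, but any equivalent Base64 codec would work the same way):

// Driver side: serialize the object and store it in the Configuration as Base64 text
MyParams params = computeParams(); // hypothetical Serializable object
ByteArrayOutputStream bos = new ByteArrayOutputStream();
ObjectOutputStream oos = new ObjectOutputStream(bos);
oos.writeObject(params);
oos.close();
conf.set("my.app.params", Base64.getEncoder().encodeToString(bos.toByteArray()));

// Mapper/reducer side (e.g., in setup()): decode and de-serialize
byte[] raw = Base64.getDecoder().decode(context.getConfiguration().get("my.app.params"));
ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(raw));
MyParams restored = (MyParams) ois.readObject();
ois.close();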

Another approach would be to use the same serialization technique, but write the serialized objects to HDFS instead, and then add these files to the DistributedCache. It sounds a bit like overkill, but this would probably work.
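
In outline, that could look like the following (the HDFS path is illustrative; in newer Hadoop releases, Job.addCacheFile and context.getCacheFiles replace the deprecated DistributedCache calls shown here):

// Driver side: the serialized object has already been written to HDFS
DistributedCache.addCacheFile(new URI("/user/me/job-params.ser"), conf);

// Task side (e.g., in the mapper's setup() method): locate the local copy
Path[] cached = DistributedCache.getLocalCacheFiles(context.getConfiguration());
// cached[0] is a local path to job-params.ser; open it with a FileInputStream
// wrapped in an ObjectInputStream and de-serialize as in the previous snippet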

answered Sep 22 '22 by Charles Menguy