I have a Hadoop job which creates counters with pretty long names, for example:
stats.counters.server-name.job.job-name.mapper.site.site-name.qualifier.qualifier-name.super-long-string-which-is-not-within-standard-limits
This counter is truncated both on the web interface and in the getName() method call. I've found out that Hadoop has a limit on the maximum counter name, and the setting mapreduce.job.counters.counter.name.max configures this limit. So I increased it to 500, and the web interface now shows the full counter name, but getName() of the counter still returns the truncated name.
Could somebody please explain this or point out my mistake? Thank you.
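For reference, the override I applied looks roughly like this (a mapred-site.xml sketch; the property name is the one mentioned above, and 500 is the value I chose):

```xml
<property>
  <name>mapreduce.job.counters.counter.name.max</name>
  <value>500</value>
</property>
```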
EDIT 1
My Hadoop setup consists of a single server running HDFS, YARN, and MapReduce itself. During the map-reduce run there are some counter increments, and after the job is completed, in ToolRunner I fetch the counters with org.apache.hadoop.mapreduce.Job#getCounters.
EDIT 2
Hadoop version is the following:
Hadoop 2.6.0-cdh5.8.0
Subversion http://github.com/cloudera/hadoop -r 042da8b868a212c843bcbf3594519dd26e816e79
Compiled by jenkins on 2016-07-12T22:55Z
Compiled with protoc 2.5.0
From source with checksum 2b6c319ecc19f118d6e1c823175717b5
This command was run using /usr/lib/hadoop/hadoop-common-2.6.0-cdh5.8.0.jar
I did some additional investigation, and it seems that this issue describes a situation similar to mine. But it's pretty confusing, because I'm able to increase the number of counters but not the length of a counter's name...
EDIT 3
Today I spent quite some time debugging the internals of Hadoop. Some interesting findings:
The org.apache.hadoop.mapred.ClientServiceDelegate#getJobCounters method returns a bunch of counters from YARN with TRUNCATED names and FULL display names.
The org.apache.hadoop.mapreduce.Counter#getName method works correctly during reducer execution.
There's nothing in the Hadoop code which truncates counter names after their initialization.
So, as you've already pointed out, mapreduce.job.counters.counter.name.max controls a counter's maximum name length (with 64 symbols as the default value).
This limit is applied during calls to AbstractCounterGroup.addCounter/findCounter.
The relevant source code is the following:
@Override
public synchronized T addCounter(String counterName, String displayName,
                                 long value) {
  String saveName = Limits.filterCounterName(counterName);
  ...
and actually:
public static String filterName(String name, int maxLen) {
return name.length() > maxLen ? name.substring(0, maxLen - 1) : name;
}
public static String filterCounterName(String name) {
return filterName(name, getCounterNameMax());
}
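To see the effect of this logic in isolation, here is a minimal standalone sketch of the truncation (the class name LimitsDemo and the hard-coded default of 64 are mine, not Hadoop's; filterName mirrors the snippet above):

```java
// Standalone re-implementation of the filterName logic shown above,
// so the truncation can be observed without a Hadoop cluster.
public class LimitsDemo {
    // Default of mapreduce.job.counters.counter.name.max
    static final int COUNTER_NAME_MAX = 64;

    static String filterName(String name, int maxLen) {
        // Note: substring(0, maxLen - 1) keeps only maxLen - 1 characters,
        // so a limit of 64 actually yields 63-character names.
        return name.length() > maxLen ? name.substring(0, maxLen - 1) : name;
    }

    public static void main(String[] args) {
        String longName =
            "stats.counters.server-name.job.job-name.mapper.site.site-name.qualifier.x";
        String saved = filterName(longName, COUNTER_NAME_MAX);
        System.out.println(saved.length()); // prints 63
    }
}
```

Note the off-by-one: the saved name is one character shorter than the configured limit.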
As you can see, the counter's name is saved truncated with respect to mapreduce.job.counters.counter.name.max.
In turn, there is only a single place in the Hadoop code where a call to Limits.init(Configuration conf) is performed — in the YarnChild class (LocalContainerLauncher makes the same call for the local case):
class YarnChild {
  private static final Logger LOG = LoggerFactory.getLogger(YarnChild.class);
  static volatile TaskAttemptID taskid = null;

  public static void main(String[] args) throws Throwable {
    Thread.setDefaultUncaughtExceptionHandler(new YarnUncaughtExceptionHandler());
    LOG.debug("Child starting");
    final JobConf job = new JobConf(MRJobConfig.JOB_CONF_FILE);
    // Initing with our JobConf allows us to avoid loading confs twice
    Limits.init(job);
I believe you need to perform the following steps in order to fix the counter names issue you observe:
Increase the mapreduce.job.counters.counter.name.max config value.
You will still see truncated counter names for old jobs, I think.
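Since the job is driven through ToolRunner (see EDIT 1), the limit can also be raised per job via the generic -D option; a sketch, where the jar and driver class names are placeholders:

```shell
hadoop jar my-job.jar com.example.MyDriver \
  -D mapreduce.job.counters.counter.name.max=500 \
  <other-args>
```

This only affects new job submissions; counters already recorded for finished jobs keep their truncated names.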
getName() seems to be deprecated.
Alternatively, getUri(), which comes with a default maximum length of 255, can be used.
Documentation link: getUri()
I haven't tried it personally, but it seems to be a possible fix to this problem.