I have a Dropwizard app in which I configured a file appender for logging as follows:
logging:
  level: INFO
  loggers:
    "mylogger": INFO
    "com.path.to.class": INFO
  appenders:
    - type: file
      currentLogFilename: .logs/mylogs.log
      archivedLogFilenamePattern: .logs/archive.%d.log.gz
      archivedFileCount: 14
And created the loggers in my app:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;    
private final Logger OpLogger = LoggerFactory.getLogger("mylogger");
(and)
private final Logger ClassLogger = LoggerFactory.getLogger(pathToClass.class);
Then I do some test logging in main():
OpLogger.info("test 1");
ClassLogger.info("test 2");
The application starts and runs without problems, but I don't get any logs (except for the Jetty access logs, of course, which are correctly written to mylogs.log), neither on stdout nor in the mylogs.log file. If I instead remove the loggers section from configuration.yml, all logs are printed to stdout. Is this a problem with Dropwizard, or do I have to add something to configuration.yml? I'm using Dropwizard 0.8.0.
UPDATE: recent versions of Dropwizard support this kind of per-logger configuration out of the box.
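For reference, a rough sketch of what that looks like in a newer release (0.9+/1.x), reusing the names from the question; verify the exact keys against the docs for your release:
logging:
  level: INFO
  loggers:
    # Attach a dedicated file appender directly to "mylogger" instead of the root logger.
    "mylogger":
      level: INFO
      additive: false
      appenders:
        - type: file
          currentLogFilename: .logs/mylogs.log
          archivedLogFilenamePattern: .logs/archive.%d.log.gz
          archivedFileCount: 14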
I ran into the same issue trying to set up Dropwizard (0.8.4) with separate log files, so I dug a bit deeper and found a solution that works for me (not the cleanest, but I couldn't get it working any other way).
The issue is that LoggingFactory#configure automatically adds every appender to the root logger. That is not ideal, so I overrode it by subclassing LoggingFactory. This is slightly messy, since a few private members and methods sadly have to be copied. Here is my implementation:
import java.io.PrintStream;
import java.lang.management.ManagementFactory;
import java.util.Map;
import javax.management.InstanceAlreadyExistsException;
import javax.management.MBeanRegistrationException;
import javax.management.MBeanServer;
import javax.management.MalformedObjectNameException;
import javax.management.NotCompliantMBeanException;
import javax.management.ObjectName;
import org.slf4j.LoggerFactory;
import org.slf4j.bridge.SLF4JBridgeHandler;
import com.codahale.metrics.MetricRegistry;
import com.codahale.metrics.logback.InstrumentedAppender;
import com.fasterxml.jackson.annotation.JsonIgnore;
import com.fasterxml.jackson.annotation.JsonProperty;
import com.google.common.collect.ImmutableMap;
import ch.qos.logback.classic.Level;
import ch.qos.logback.classic.Logger;
import ch.qos.logback.classic.LoggerContext;
import ch.qos.logback.classic.PatternLayout;
import ch.qos.logback.classic.jmx.JMXConfigurator;
import ch.qos.logback.classic.jul.LevelChangePropagator;
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.Appender;
import ch.qos.logback.core.util.StatusPrinter;
import io.dropwizard.logging.AppenderFactory;
import io.dropwizard.logging.LoggingFactory;
public class BetterDropWizardLoggingConfig extends LoggingFactory {
    @JsonIgnore
    final LoggerContext loggerContext;
    @JsonIgnore
    final PrintStream configurationErrorsStream;
    @JsonProperty("loggerMapping")
    private ImmutableMap<String, String> loggerMappings;
    private static void hijackJDKLogging() {
        SLF4JBridgeHandler.removeHandlersForRootLogger();
        SLF4JBridgeHandler.install();
    }
    public BetterDropWizardLoggingConfig() {
        PatternLayout.defaultConverterMap.put("h", HostNameConverter.class.getName());
        this.loggerContext = (LoggerContext) LoggerFactory.getILoggerFactory();
        this.configurationErrorsStream = System.err;
    }
    private Logger configureLevels() {
        final Logger root = loggerContext.getLogger(org.slf4j.Logger.ROOT_LOGGER_NAME);
        loggerContext.reset();
        final LevelChangePropagator propagator = new LevelChangePropagator();
        propagator.setContext(loggerContext);
        propagator.setResetJUL(true);
        loggerContext.addListener(propagator);
        root.setLevel(getLevel());
        for (Map.Entry<String, Level> entry : getLoggers().entrySet()) {
            loggerContext.getLogger(entry.getKey()).setLevel(entry.getValue());
        }
        return root;
    }
    @Override
    public void configure(MetricRegistry metricRegistry, String name) {
        hijackJDKLogging();
        final Logger root = configureLevels();
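        // Attach each appender to its mapped logger when a mapping exists; otherwise fall back to the root logger (stock Dropwizard behaviour).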
        for (AppenderFactory output : getAppenders()) {
            Appender<ILoggingEvent> build = output.build(loggerContext, name, null);
            if(output instanceof MappedLogger && ((MappedLogger) output).getLoggerName() != null) {
                String appenderName = ((MappedLogger) output).getLoggerName();
                String loggerName = loggerMappings.get(appenderName);
                Logger logger = this.loggerContext.getLogger(loggerName);
                logger.addAppender(build);
            } else {
                root.addAppender(build);
            }
        }
        StatusPrinter.setPrintStream(configurationErrorsStream);
        try {
            StatusPrinter.printIfErrorsOccured(loggerContext);
        } finally {
            StatusPrinter.setPrintStream(System.out);
        }
        final MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        try {
            final ObjectName objectName = new ObjectName("io.dropwizard:type=Logging");
            if (!server.isRegistered(objectName)) {
                server.registerMBean(new JMXConfigurator(loggerContext, server, objectName), objectName);
            }
        } catch (MalformedObjectNameException | InstanceAlreadyExistsException | NotCompliantMBeanException
                | MBeanRegistrationException e) {
            throw new RuntimeException(e);
        }
        configureInstrumentation(root, metricRegistry);
    }
    private void configureInstrumentation(Logger root, MetricRegistry metricRegistry) {
        final InstrumentedAppender appender = new InstrumentedAppender(metricRegistry);
        appender.setContext(loggerContext);
        appender.start();
        root.addAppender(appender);
    }
}
As you can see, I sadly had to copy/paste a few private members and methods to make things work as intended.
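The HostNameConverter registered in the constructor isn't listed here; assuming it is just a plain Logback ClassicConverter that resolves a hostname token, a minimal sketch could look like this:
import java.net.InetAddress;
import java.net.UnknownHostException;
import ch.qos.logback.classic.pattern.ClassicConverter;
import ch.qos.logback.classic.spi.ILoggingEvent;

// Resolves the custom %h pattern token to the local hostname.
public class HostNameConverter extends ClassicConverter {
    @Override
    public String convert(ILoggingEvent event) {
        try {
            return InetAddress.getLocalHost().getHostName();
        } catch (UnknownHostException e) {
            return "unknown-host"; // fallback if the hostname cannot be resolved
        }
    }
}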
I added a new field:
@JsonProperty("loggerMapping")
private ImmutableMap<String, String> loggerMappings;
This allows me to configure a mapping from appender to logger. That isn't possible out of the box, because there is no way to get hold of an appender's name (Dropwizard defaults the appender names, which is very inconvenient).
I also added a new appender type which, in my case, does hostname substitution as well (something I needed for other reasons). For this I extend the good old FileAppenderFactory and implement my own MappedLogger interface. Here is the implementation:
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.UUID;
import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.annotation.JsonTypeName;
import io.dropwizard.logging.AppenderFactory;
import io.dropwizard.logging.FileAppenderFactory;
@JsonTypeName("hostnameFile")
public class HostnameFileAppender extends FileAppenderFactory implements AppenderFactory, MappedLogger {
    private static String uuid = UUID.randomUUID().toString();
    @JsonProperty
    private String name;
    @Override
    public void setCurrentLogFilename(String currentLogFilename) {
        super.setCurrentLogFilename(substitute(currentLogFilename));
    }
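    // Replaces the ${HOSTNAME} placeholder with the local hostname, or a random UUID if the hostname cannot be resolved.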
    private String substitute(final String pattern) {
        String substitute = null;
        try {
            substitute = InetAddress.getLocalHost().getHostName();
        } catch (UnknownHostException e) {
            System.err.println("Failed to get local hostname:");
            e.printStackTrace(System.err);
            substitute = uuid;
            System.err.println("Using " + substitute + " as fallback.");
        }
        return pattern.replace("${HOSTNAME}", substitute);
    }
    @Override
    public void setArchivedLogFilenamePattern(String archivedLogFilenamePattern) {
        super.setArchivedLogFilenamePattern(substitute(archivedLogFilenamePattern));
    }
    @Override
    public String getLoggerName() {
        return name;
    }
}
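The MappedLogger interface itself is trivial; it only needs to expose the optional appender name, roughly:
// Lets an appender factory expose an optional name that the config can map to a logger.
public interface MappedLogger {
    String getLoggerName();
}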
Please note that in order to add a new JSON type, you have to follow the JavaDoc in AppenderFactory: add a META-INF services file to the classpath so that the new appender is discoverable.
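Concretely, that means a file named src/main/resources/META-INF/services/io.dropwizard.logging.AppenderFactory containing the fully-qualified class name of the new appender (the package below is just an example):
com.example.logging.HostnameFileAppender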
So far so good: we now have a config that can pick up logger mappings and an appender that can carry an optional name.
In the configure method I now tie those two together:
for (AppenderFactory output : getAppenders()) {
    Appender<ILoggingEvent> build = output.build(loggerContext, name, null);
    if (output instanceof MappedLogger && ((MappedLogger) output).getLoggerName() != null) {
        String appenderName = ((MappedLogger) output).getLoggerName();
        String loggerName = loggerMappings.get(appenderName);
        Logger logger = this.loggerContext.getLogger(loggerName);
        logger.addAppender(build);
    } else {
        root.addAppender(build);
    }
}
For backwards compatibility I kept the default behaviour: if no name is defined, the appender is added to the root logger. Otherwise I resolve the mapped logger and add the appender to it, as desired.
And last but not least, the good old YAML config:
logging:
  # The default level of all loggers. Can be OFF, ERROR, WARN, INFO, DEBUG, TRACE, or ALL.
  level: INFO
  loggers:
    "EVENT" : INFO
  loggerMapping:
    # for easier search this is defined as: appenderName -> loggerName rather than the other way around
    "eventLog" : "EVENT"
  appenders:
   - type: console   
     threshold: ALL
     logFormat: "myformat"
   - type: hostnameFile # NOTE THE NEW TYPE WITH HOSTNAME RESOLVE
     currentLogFilename: /Users/artur/tmp/log/my-${HOSTNAME}.log
     threshold: ALL
     archive: true
     archivedLogFilenamePattern: mypattern
     archivedFileCount: 31
     timeZone: UTC
     logFormat: "myFormat"
   - type: hostnameFile
     name: eventLog # NOTE THE APPENDER NAME
     currentLogFilename: something
     threshold: ALL
     archive: true
     archivedLogFilenamePattern: something
     archivedFileCount: 31
     timeZone: UTC
     logFormat: "myFormat"
   - type: hostnameFile
     currentLogFilename: something
     threshold: ERROR
     archive: true
     archivedLogFilenamePattern: something
     archivedFileCount: 31
     timeZone: UTC
     logFormat: "myFormat"
As you can see, I am mapping the eventLog appender to the EVENT logger. This way all my events end up in one file, while the rest of the information ends up elsewhere.
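One thing not shown above is how the custom factory gets wired in. Assuming Dropwizard 0.8.x, where Configuration exposes getLoggingFactory()/setLoggingFactory(), a rough sketch (class names are illustrative; this relies on Jackson preferring the setter declared in the most-derived class) would be:
import javax.validation.Valid;
import javax.validation.constraints.NotNull;
import com.fasterxml.jackson.annotation.JsonProperty;
import io.dropwizard.Configuration;
import io.dropwizard.logging.LoggingFactory;

public class MyAppConfiguration extends Configuration {

    // Deserialize the "logging" section into the custom factory instead of the stock LoggingFactory.
    @Valid
    @NotNull
    private BetterDropWizardLoggingConfig logging = new BetterDropWizardLoggingConfig();

    @JsonProperty("logging")
    public void setLoggingFactory(BetterDropWizardLoggingConfig factory) {
        this.logging = factory;
    }

    @Override
    @JsonProperty("logging")
    public LoggingFactory getLoggingFactory() {
        return logging;
    }
}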
I hope this helps. It might not be the cleanest solution, but I don't think Dropwizard currently supports this feature out of the box.