
Docker logging: How do logs work with Docker containers?

Logging is critical for gaining valuable application insights, such as spotting performance inefficiencies and understanding architectural structure. But creating reliable, flexible, and lightweight logging solutions isn't the easiest task, which is where Docker helps.

Docker containers are a great way to create lightweight, portable, and self-contained application environments. They also give IT teams a way to create ephemeral, portable logging solutions that can run in any environment. Because logging is such a crucial aspect of performance, Docker dual logging is especially beneficial in complex, multi-container setups that depend on reliable log management for troubleshooting and auditing.

Docker dual logging allows the capture of container logs in two separate locations at the same time. This approach ensures log redundancy, improves compliance, and enhances observability across operating systems (Windows, Linux, and others) by maintaining consistent log data across distributed environments.

This guide covers the essentials of Docker logging, focusing on implementing Docker dual logging functionality to optimize your infrastructure.

Key takeaways

  • Docker dual logging captures container logs in two locations, ensuring redundancy and enhancing system observability.
  • Containerized environments require specialized logging strategies due to the temporary and multi-layered nature of Docker containers.
  • Implementing Docker dual logging involves pairing logging drivers so logs reach two destinations, which enhances compliance, troubleshooting, and overall infrastructure resilience.
  • Log aggregation tools like Fluentd and ELK are essential for effectively managing and analyzing logs from dual sources.

What is a Docker container?

A Docker container is a standard unit of software that wraps up code and all its dependencies so the program can be moved from one environment to another, quickly and reliably.

Containerized software, available for Linux and Windows-based applications, will always run the same way regardless of the underlying infrastructure.

Containers isolate software from its environment, ensuring that it performs consistently despite differences between environments, for example, development and staging.

Docker container technology launched in 2013 as the open-source Docker Engine.

What is a Docker image?

A Docker image is a lightweight, standalone, executable software package that contains everything required to run an application: code, system tools, system libraries, and settings.

In other words, an image is a read-only template with instructions for building a container that can run on the Docker platform. It provides a convenient way to package up applications and preconfigured server environments that you can use privately or share publicly with other Docker users.

What is Docker logging?

Docker logging refers to the process of capturing and managing logs generated by containerized applications. Logs provide critical insights into system behavior, helping you troubleshoot issues, monitor performance, and ensure overall application health.

Combined with monitoring solutions, logging gives you complete visibility into your containerized environments, helping you solve problems faster and ensure reliability. You can also examine historical log data to find trends and anticipate potential problems.

Docker logging captures critical system behavior insights, enabling you to troubleshoot, monitor performance, and ensure application health in containerized environments.

Docker container logs

What are container logs?

Docker container logs, in a nutshell, are the console output of running containers. Specifically, they capture the stdout and stderr streams of the processes running within a container.

As previously stated, Docker logging is not the same as logging elsewhere. Everything written to the stdout and stderr streams in Docker is implicitly forwarded to a logging driver, which makes it possible to access the logs and write them to a file.

Logs can also be viewed in the console. The docker logs command displays the output of a running container, while the docker service logs command displays the output of all containers that are members of a service.
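
For example, the following commands show common ways to view logs from the console (web is a hypothetical container name):

docker logs web                # print everything the container has logged so far
docker logs --follow web       # stream new log output as it is produced
docker logs --tail 100 web     # show only the last 100 lines
docker logs --since 10m web    # show logs from the last 10 minutes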

What is a Docker logging driver?

Docker logging drivers gather data from containers and make it accessible for analysis.

If no additional log-driver option is supplied when a container is launched, Docker will use the json-file driver by default. A few important notes on this:

  • Log-rotation is not performed by default. As a result, log files kept using the json-file logging driver can consume a significant amount of disk space for containers that produce a lot of output, potentially leading to disk space depletion (see the rotation example after this list).
  • Docker preserves the json-file logging driver, without log-rotation, as the default to maintain backward compatibility with older Docker versions and for instances when Docker is used as a Kubernetes runtime.
  • The local driver is preferable because it automatically rotates logs and utilizes a more efficient file format.
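
If you stay on the json-file driver, you can enable rotation explicitly in daemon.json. A minimal sketch; the max-size and max-file values here are illustrative and should be tuned to your workload:

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}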

Docker also includes logging drivers for sending logs to various services — for example, a logging service, a log shipper, or a log analysis platform. There are many different Docker logging drivers available. Some examples are listed below:

  • syslog — A long-standing and widely used standard for logging applications and infrastructure.
  • journald — A structured alternative to syslog's unstructured output.
  • fluentd — An open-source data collector for a unified logging layer.
  • awslogs — AWS CloudWatch logging driver. If you host your apps on AWS, this is a fantastic choice.

You do, however, have several alternative logging driver options, which you can find in the Docker logging docs.

Docker also allows logging driver plugins, enabling you to write your own logging drivers and make them available over Docker Hub. Likewise, you can use any plugins already available on Docker Hub.
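
As a sketch of what this looks like in practice, here is how you might install and use a third-party logging plugin, taking the Grafana Loki driver as an example (the alias and the loki-url value are illustrative and depend on your environment):

docker plugin install grafana/loki-docker-driver:latest --alias loki --grant-all-permissions
docker run -d --log-driver=loki --log-opt loki-url=http://localhost:3100/loki/api/v1/push alpine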

Logging driver configuration

To configure a Docker logging driver as the default for all containers, you can set the value of the log-driver to the name of the logging driver in the daemon.json configuration file.

This example sets the default logging driver to the local driver:

{
  "log-driver": "local"
}
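
Note that changes to daemon.json only take effect after the Docker daemon restarts, and they apply only to newly created containers. On a systemd-based Linux host, for example:

sudo systemctl restart docker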

Another option is configuring a driver on a container-by-container basis. When you start a container, you can use the --log-driver flag to specify a different logging driver than the Docker daemon's default.

The code below starts an Alpine container with the local Docker logging driver:

docker run -it --log-driver local alpine ash

The docker info command will provide you with the current default logging driver for the Docker daemon.
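
For example, to print just the default logging driver:

docker info --format '{{.LoggingDriver}}'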

Docker logs with remote logging drivers

Previously, the docker logs command could only be used with the local, json-file, or journald logging drivers; many third-party Docker logging drivers did not support reading logs locally with docker logs.

When attempting to collect log data automatically and consistently, this caused a slew of issues. Log information could only be accessed and displayed in the format required by the third-party solution.

Starting with Docker Engine 20.10, you can use docker logs to read container logs independent of the logging driver or plugin that is enabled. 

Dual logging requires no configuration changes: Docker Engine 20.10 and later enables it by default whenever the chosen Docker logging driver does not support reading logs.

Where are Docker logs stored?

Docker keeps container logs in its default location, /var/lib/docker/. Each container has a log file that is unique to its ID (the full ID, not the shorter one that is generally presented), and you can access it as follows:

/var/lib/docker/containers/ID/ID-json.log

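If you don't want to construct that path by hand, you can ask Docker for it directly (web is a hypothetical container name):

docker inspect --format '{{.LogPath}}' web
sudo tail -f $(docker inspect --format '{{.LogPath}}' web)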

What are the Docker logging delivery modes?

Docker logging delivery modes refer to how the container balances or prioritizes logging against other tasks. The available Docker logging delivery modes are blocking and non-blocking. Either mode can be applied regardless of which Docker logging driver you select.

Blocking mode

When in blocking mode, the program will be interrupted whenever a message needs to be delivered to the driver.

The advantage of blocking mode is that all logs are forwarded to the logging driver, even though this may introduce lag in your application's performance. In this sense, the mode prioritizes logging over performance.

Depending on the Docker logging driver you choose, your application’s latency may vary. For example, the json-file driver, which writes to the local filesystem, produces logs rapidly and is unlikely to block or create a significant delay.

In contrast, Docker logging drivers that require the container to connect to a remote location may block it for extended periods, resulting in increased latency.

Docker’s default mode is blocking.

When to use blocking mode?

The json-file logging driver in blocking mode is recommended for most use cases. As mentioned before, the driver is fast since it writes to a local file, so it's generally safe to use in blocking mode.

The blocking mode should also be used for memory-hungry programs requiring the bulk of the RAM available to your containers. The reason is that if the driver cannot deliver logs to its endpoint due to a problem such as a network issue, there may not be enough memory available for the buffer if it’s in non-blocking mode.

Non-blocking mode

In non-blocking mode, log delivery does not interrupt the program. Instead of waiting for logs to be sent to their destination, the container stores logs in an in-memory buffer.

Though the non-blocking Docker logging delivery mode appears to be the preferable option, it also introduces the possibility of some log entries being lost. Because the memory buffer in which the logs are saved has a limited capacity, it might fill up. 

Furthermore, if a container breaks, logs may be lost before being released from the buffer.

You can override Docker's default blocking mode for new containers by adding a log-opts entry to the daemon.json file. The max-buffer-size option, which controls the capacity of the memory buffer mentioned above, can also be changed from its 1 MB default.

{
  "log-driver": "local",
  "log-opts": {
    "mode": "non-blocking"
  }
}

You can also provide log-opts for a single container. The following example starts an Alpine container with non-blocking log output and a 4 MB buffer:

docker run -it --log-opt mode=non-blocking --log-opt max-buffer-size=4m alpine
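
To confirm what a running container ended up with, you can inspect its log configuration (web is a hypothetical container name):

docker inspect --format '{{json .HostConfig.LogConfig}}' web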

When to use non-blocking mode?

Consider using the json-file driver in non-blocking mode if your application has a high I/O demand and generates a significant volume of logs.

Because writing logs locally is rapid, the buffer is unlikely to fill quickly. If your program does not create spikes in logging, this configuration should handle all of your logs without interfering with performance.

For applications where performance is a higher priority than logging but which cannot use the local file system for logs, such as mission-critical applications, you can provision enough RAM for a reliable buffer and use non-blocking mode. This setup should ensure that performance is not hampered by logging, while the container still handles most log data.

Why Docker logging is different from traditional logging 

Logging in containerized environments like Docker is more complex than in traditional systems due to the temporary and distributed nature of containers. Docker containers generate multiple log streams, often in different formats, making standard log analysis tools less effective and debugging more challenging compared to single, self-contained applications.

Two key characteristics of Docker containers contribute to this complexity:

  1. Temporary containers: Docker containers are designed to be short-lived, meaning they can be stopped or destroyed at any time. When this happens, any logs stored within the container are lost. To prevent data loss, it's crucial to use a log aggregator that collects and stores logs in a permanent, centralized location. You can use a centralized logging solution to aggregate log data and use data volumes to persist data on host devices (see the volume sketch after this list).
  2. Multiple logging layers: Docker logging involves log entries from individual containers and the host system. Managing these multi-level logs requires specialized tools that can gather and analyze data from all levels and logging formats effectively, ensuring no critical information is missed. Containers may also generate large volumes of log data, which means traditional log analysis tools may struggle with the sheer amount of data.
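
As a minimal sketch of the data-volume approach, the following mounts a host directory into a container so that anything the application writes under /var/log/myapp survives the container's removal (the image name and paths are hypothetical):

docker run -d \
  --name myapp \
  -v /srv/logs/myapp:/var/log/myapp \
  your-app-image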

Understanding Docker dual logging

Docker dual logging involves sending logs to two different locations simultaneously. This approach ensures that log data is redundantly stored, reducing the risk of data loss and providing multiple sources for analysis. Dual logging is particularly valuable in environments where compliance and uptime are critical.

Benefits of Docker dual logging

  • Redundancy: Dual logging ensures that log messages are preserved even if one logging system fails, so logging continues in the event of a service failure.
  • Enhanced troubleshooting: With logs available in two places, you can cross-reference data to diagnose issues more effectively.
  • Compliance: For industries with strict data retention and auditing requirements, dual logging helps meet these obligations by providing reliable log storage across multiple systems.

Dual logging offers a safety net for complex containerized applications by preserving critical log data across multiple systems.

Docker dual logging in action

Docker dual logging is widely implemented in various industries to improve compliance, security, and system reliability. By implementing Docker dual logging, you can safeguard data, meet regulatory demands, and optimize your infrastructure. Below are some real-world examples of how organizations benefit from dual logging:

  1. E-commerce compliance: A global e-commerce company uses dual logging to meet data retention laws by storing log files both locally and in the cloud, ensuring regulatory compliance (such as GDPR and CCPA) and audit readiness.
  2. Financial institution security: A financial firm uses dual logging to enhance security by routing logs to secure on-premise and cloud systems, quickly detecting suspicious activities, aiding forensic analysis, and minimizing data loss.
  3. SaaS uptime and reliability: A SaaS provider leverages dual logging to monitor logs across local and remote sites, minimizing downtime by resolving issues faster and debugging across distributed systems to ensure high service availability.

How to implement Docker dual logging

Implementing dual logging in the Docker engine involves getting each container's logs to two destinations at once. Note that repeating the --log-driver flag does not combine drivers; the last value simply overrides the others. Instead, you configure a remote logging driver, such as fluentd or awslogs, and rely on Docker Engine 20.10+ automatically keeping a local copy of the logs through its dual logging cache. Here's a simple example:


docker run -d \
  --log-driver=fluentd \
  --log-opt fluentd-address=localhost:24224 \
  your-container-image

The specific logging driver and other settings will vary based on your configuration. Look at your organization's infrastructure to determine the right driver name and the address of the logging server.

This setup ensures that logs are stored locally while also being sent to a centralized log management service. If you’re using Kubernetes to manage and monitor public cloud environments, you can benefit from the LogicMonitor Collector for better cloud monitoring.
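
If you need true fan-out to two remote destinations, the duplication can also happen on the aggregator side. A minimal Fluentd sketch, assuming the built-in forward and copy plugins plus the elasticsearch output plugin (paths and hosts are illustrative):

<source>
  @type forward
  port 24224
</source>

<match **>
  @type copy
  <store>
    @type file
    path /var/log/fluent/docker
  </store>
  <store>
    @type elasticsearch
    host localhost
    port 9200
  </store>
</match>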

Docker daemon logs

What are daemon logs?

The Docker platform generates and stores logs for its daemons. Depending on the host operating system, daemon logs are written to the system’s logging service or a log file.

If you only collected container logs, you would gain insight into the state of your individual services but not into the state of your entire Docker platform. The daemon logs exist for that reason: they provide an overview of your whole microservices architecture.

Assume a container shuts down unexpectedly. Because the container terminates before any log events can be captured, we cannot pinpoint the underlying cause using the docker logs command or an application-based logging framework. 

Instead, we may filter the daemon log for events that contain the container name or ID and sort by timestamp, which allows us to establish a chronology of the container’s life from its origin through its destruction.

The daemon log also contains helpful information about the host’s status. If the host kernel does not support a specific functionality or the host setup is suboptimal, the Docker daemon will note it during the initialization process.

Depending on the operating system settings and the Docker logging subsystem used, the logs may be kept in one of several locations. On Linux, you can look at the journalctl records:

sudo journalctl -xu docker.service
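
Following the container-lifetime scenario above, you can narrow the daemon log down to a single container. A sketch, where abc123 stands in for the container ID:

sudo journalctl -u docker.service --since "1 hour ago" | grep abc123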

Analyzing Docker logs

Log data must be evaluated before it can be used, and analyzing it is like hunting for a needle in a haystack: you're typically looking for the one line with an error among thousands of lines of routine log entries. A solid analysis platform is required to extract the actual value of logs, which makes log collection and analysis tools critical. Here are some of the options.

Fluentd

Fluentd is a popular open-source solution for logging your complete stack, including non-Docker services. It’s a data collector that allows you to integrate data gathering and consumption for improved data utilization and comprehension.

ELK

ELK is the most widely used open-source log data analysis solution. It's a set of tools: Elasticsearch for storing log data, Logstash for processing log data, and Kibana for visualizing data via a graphical user interface.

ELK is an excellent solution for Docker log analysis, since it provides a solid platform maintained by a large developer community, and it's free.

Advanced log analysis tools

With open-source alternatives, you must set up and manage your stack yourself, which entails allocating the necessary resources and ensuring that your tools are highly available and hosted on scalable infrastructure. This can demand a significant amount of IT resources as well.

That’s where more advanced log analysis platforms offer tremendous advantages. For example, tools like LogicMonitor’s SaaS platform for log intelligence and aggregation can give teams quick access to contextualized and connected logs and metrics in a single, unified cloud-based platform.

These sophisticated technologies leverage the power of machine learning to enable companies to reduce troubleshooting time, streamline IT operations, and increase control while lowering risk.

Best practices for Docker dual logging

Docker dual logging offers many benefits. But to get the most out of it, you’ll need to implement best practices to build a reliable logging environment. Use the best practices below to get started.

  1. Monitor log performance: Regularly check the performance impact of dual logging on containers by gathering metrics like CPU usage and network bandwidth, and adjust configurations as necessary.
  2. Ensure log security: Use encryption and secure access controls when transmitting logs to remote locations, and verify your controls comply with regulations.
  3. Automate log management: Implement automated processes to manage, review, and archive logs to prevent storage issues.

Analyzing Docker logs in a dual logging setup

When logs are stored in two places, analyzing them becomes more complicated. Using log aggregation tools like Fluentd or ELK to collect and analyze logs from both sources provides a comprehensive view of a system's behavior. This dual approach can significantly improve your ability to detect and resolve issues quickly.

Overview of Docker logging drivers 

Docker supports various logging drivers, each suited to different use cases. Drivers can be mixed and matched when implementing dual logging to achieve the best results for your entire environment. Common drivers include:

  • json-file: Stores logs in JSON format on the local filesystem
  • fluentd: Sends logs to a Fluentd service, ideal for centralized logging
  • awslogs: Directs logs to AWS CloudWatch, suitable for cloud-based monitoring
  • gelf: Sends logs in Graylog Extended Log Format (GELF) to endpoints such as Graylog and Logstash (see the example below)
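
As a quick illustration of the gelf driver, the following sends a container's logs to a GELF endpoint over UDP (the address and image name are illustrative):

docker run -d \
  --log-driver=gelf \
  --log-opt gelf-address=udp://localhost:12201 \
  your-container-image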

Tools and integration for Docker dual logging

To fully leverage Docker dual logging, integrating with powerful log management tools is essential. These popular tools enhance Docker dual logging by providing advanced features for log aggregation, analysis, and visualization.

  • ELK Stack: An open-source solution comprising Elasticsearch, Logstash, and Kibana, ideal for collecting, searching, and visualizing log data.
  • Splunk: A platform offering comprehensive log analysis and real-time monitoring capabilities suitable for large-scale environments.
  • Graylog: A flexible, open-source log management tool that allows centralized logging and supports various data sources.

Conclusion

Docker dual logging is a powerful strategy for ensuring reliable, redundant log management in containerized environments. Implementing dual logging enhances your system’s resilience, improves troubleshooting capabilities, and meets compliance requirements with greater ease. As containerized applications continue to grow in complexity and scale, implementing dual logging will be critical for maintaining efficient infrastructures.
