Logging Docker Containers with AWS CloudWatch and the Local Driver

Almost every application relies on logging in some way. It’s one of those things you’ll want to have on hand in case something goes wrong. Much has been written and said about this topic, but today I’d like to concentrate on Docker and the logging options available for containerized applications. The Docker project exposes logging capabilities in the form of drivers, which is very useful because it lets you specify how and where log messages should be sent.

Some of the logging drivers are listed below:

none: No logs are available for the container, and docker logs does not return any output.
local: Logs are stored in a custom format designed for minimal overhead.
json-file: Logs are formatted as JSON. This is the default logging driver for Docker.
syslog: Writes logging messages to the syslog facility. The syslog daemon must be running on the host machine.
journald: Writes log messages to journald. The journald daemon must be running on the host machine.
gelf: Writes log messages to a Graylog Extended Log Format (GELF) endpoint such as Graylog or Logstash.
fluentd: Writes log messages to fluentd (forward input). The fluentd daemon must be running on the host machine.
awslogs: Writes log messages to Amazon CloudWatch Logs.
splunk: Writes log messages to Splunk using the HTTP Event Collector.
etwlogs: Writes log messages as Event Tracing for Windows (ETW) events. Only available on Windows platforms.
gcplogs: Writes log messages to Google Cloud Platform (GCP) Logging.
logentries: Writes log messages to Rapid7 Logentries.
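Before changing anything, you can check which driver your Docker daemon currently uses as its default:

docker info --format '{{.LoggingDriver}}'

On a fresh installation this typically prints json-file.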

In this post we are going to set up the local driver and awslogs.

1. Let’s start with the local driver first

What is the benefit of using the local logging driver?

The local logging driver gathers output from the container’s stdout/stderr and writes it to an internal storage format that is optimized for performance and disk usage.

By default, the local driver keeps 100MB of log messages per container and compresses them automatically to save disk space. The 100MB default comes from a default file size of 20MB and a default count of 5 files (to account for log rotation).
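These limits can be tuned with the max-size, max-file and compress options of the local driver. For example, the defaults described above correspond to flags like the ones below (the container name here is just an illustration):

docker run -d --log-driver local --log-opt max-size=20m --log-opt max-file=5 --log-opt compress=true --name local-logging-demo nginx:latest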

To set up the logging driver we need to create (or edit) the file /etc/docker/daemon.json with the entries below:

{
    "log-driver": "local",
    "log-opts": {
        "max-size": "10m"
    }
}

After adding the above details, we need to reload the daemon and restart the Docker service so that it picks up the new logging driver. Note that this only changes the default for containers created afterwards; existing containers keep the driver they were started with.

systemctl daemon-reload
systemctl restart docker 

Now, to test this, let’s spin up an nginx container and see which logging driver it gets:

docker run -d --log-driver local --name nginx-cloudsbaba12 -p 80:80 nginx:latest

Check whether it is running:

docker ps

After that, inspect the container to see which logging driver it is using:

docker inspect <containername>
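If you only want the logging section instead of the full inspect output, a Go template helps; docker logs also still works with the local driver:

docker inspect --format '{{.HostConfig.LogConfig.Type}}' nginx-cloudsbaba12
docker logs nginx-cloudsbaba12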

2. Configure the awslogs driver

The awslogs logging driver sends container logs to Amazon CloudWatch Logs. Log entries can then be viewed in the AWS Management Console.

To set this up as the default logging driver, edit /etc/docker/daemon.json with the entries below:

{
    "log-driver": "awslogs",
    "log-opts": {
        "awslogs-region": "ap-south-1",
        "awslogs-group": "nginx"
    }
}
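The awslogs driver has a few more useful options; for example, awslogs-create-group lets Docker create the log group automatically if it does not exist yet (the credentials then also need the logs:CreateLogGroup permission). A variant of the configuration above with that option enabled:

{
    "log-driver": "awslogs",
    "log-opts": {
        "awslogs-region": "ap-south-1",
        "awslogs-group": "nginx",
        "awslogs-create-group": "true"
    }
}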

After that, to access CloudWatch you need an IAM user with limited privileges for CloudWatch Logs.
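A minimal policy for this only needs to create log streams and put log events (plus logs:CreateLogGroup if you use awslogs-create-group). Here is a sketch of such a policy, assuming the nginx log group in ap-south-1; replace the account ID placeholder with your own:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": [
                "arn:aws:logs:ap-south-1:<account-id>:log-group:nginx",
                "arn:aws:logs:ap-south-1:<account-id>:log-group:nginx:*"
            ]
        }
    ]
}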

Copy the access key and secret key of that user and add them to the file at the path below:

vi /lib/systemd/system/docker.service

Add the following lines under the [Service] section:

Environment="AWS_ACCESS_KEY_ID=AKIAQCJCYVXJQRfee"
Environment="AWS_SECRET_ACCESS_KEY=siysBbqNuuXPRd9R7f4ek0y2ZYmBVOq5I5ZAhhvas"

Note: in your case the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY values will be different.
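Instead of editing the shipped unit file directly, you may prefer a systemd drop-in, which survives package upgrades. Run sudo systemctl edit docker and add the following in the editor that opens (placeholders to be replaced with your own keys):

[Service]
Environment="AWS_ACCESS_KEY_ID=<your-access-key-id>"
Environment="AWS_SECRET_ACCESS_KEY=<your-secret-access-key>"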

If you are working in the AWS cloud, you can also skip the static keys entirely and attach an IAM role to your EC2 instance instead.

Now reload the daemon and restart the Docker service:

systemctl daemon-reload
systemctl restart docker 

After this, start a new container with the awslogs driver and inspect it:

docker run -dit --log-driver=awslogs --log-opt awslogs-region=ap-south-1 --log-opt awslogs-group=nginx -p 80:80 --name nginx-cloudsbaba14 nginx:latest
docker inspect <containername>
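Again, a Go template narrows the inspect output down to just the logging configuration, including its options:

docker inspect --format '{{json .HostConfig.LogConfig}}' nginx-cloudsbaba14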

Now navigate to the AWS Management Console and open the CloudWatch log groups; you will find the container logs in the nginx log group.
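If you prefer the terminal, the same logs can also be read with the AWS CLI, assuming it is installed and configured with the same credentials (by default the awslogs driver names the log stream after the container ID):

aws logs describe-log-streams --log-group-name nginx --region ap-south-1
aws logs get-log-events --log-group-name nginx --log-stream-name <stream-name> --region ap-south-1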

Congratulations! You have followed all the steps and configured logging for your containers.

References:

https://docs.docker.com/config/containers/logging/configure/

https://docs.docker.com/config/containers/logging/awslogs/

https://docs.docker.com/config/containers/logging/local/
