Kubernetes
Kubernetes Production Grade Log Processor
Fluent Bit is a lightweight and extensible log processor with full support for Kubernetes:
Process Kubernetes container logs from the file system or Systemd/Journald.
Enrich logs with Kubernetes Metadata.
Centralize your logs in third party storage services like Elasticsearch, InfluxDB, HTTP, and so on.
Before getting started, it's important to understand how Fluent Bit will be deployed. Kubernetes manages a cluster of nodes. The Fluent Bit log agent needs to run on every node to collect logs from every pod, so Fluent Bit is deployed as a DaemonSet, a workload that ensures a copy of the pod runs on every node of the cluster.
When Fluent Bit runs, it reads, parses, and filters the logs of every pod. In addition, Fluent Bit adds metadata to each entry using the Kubernetes filter plugin.
The Kubernetes filter plugin talks to the Kubernetes API Server to retrieve relevant information such as the pod_id, labels, and annotations. Other fields, such as pod_name, container_id, and container_name, are retrieved locally from the log file names. All of this is handled automatically, and no intervention is required from a configuration aspect.
Fluent Bit should be deployed as a DaemonSet, so it will be available on every node of your Kubernetes cluster.
The recommended way to deploy Fluent Bit for Kubernetes is with the official Helm Chart at https://github.com/fluent/helm-charts.
If you are using Red Hat OpenShift, you must also set up Security Context Constraints (SCC) using the relevant option in the Helm chart.
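As a rough sketch only: this assumes your chart version exposes an openShift.enabled value that creates the SCC; verify the exact option name in the chart's values.yaml before using it.

```shell
# Assumption: the chart provides an `openShift.enabled` toggle that creates the SCC.
# Check values.yaml in your chart version for the exact option name.
helm upgrade --install fluent-bit fluent/fluent-bit \
  --set openShift.enabled=true
```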
Helm is a package manager for Kubernetes and lets you deploy application packages into your running cluster. Fluent Bit is distributed using a Helm chart found in the Fluent Helm Charts repository.
Use the following command to add the Fluent Helm charts repository:
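```shell
# Adds the Fluent Helm charts repository (GitHub Pages endpoint of the repository linked above)
helm repo add fluent https://fluent.github.io/helm-charts
```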
To validate that the repository was added, run helm search repo fluent and confirm the charts are listed. Then install the default chart by running the following command:
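```shell
# Installs (or upgrades) the fluent-bit chart with its default values
helm upgrade --install fluent-bit fluent/fluent-bit
```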
The default chart values include configuration to read container logs with Docker parsing, read Systemd logs, apply Kubernetes metadata enrichment, and output to an Elasticsearch cluster. You can modify the included values file to specify additional outputs, health checks, monitoring endpoints, or other configuration options.
The default configuration of Fluent Bit ensures the following:
Consume all container logs from the running node and parse them with either the docker or cri multi-line parser.
Persist how far it has read into each file it tails, so if a pod is restarted it picks up where it left off.
The Kubernetes filter adds Kubernetes metadata, specifically labels and annotations. The filter only contacts the API Server when it can't find the cached information; otherwise it uses the cache.
The default backend in the configuration is Elasticsearch, set by the Elasticsearch output plugin. It uses the Logstash format to ingest the logs. If you need a different Index and Type, refer to the plugin options and update as needed.
There is an option called Retry_Limit, which is set to False. If Fluent Bit can't flush the records to Elasticsearch, it will retry indefinitely until it succeeds.
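As an illustration of how those pieces fit together (not the literal values shipped with the chart; the Elasticsearch host below is a placeholder), the relevant sections in the classic configuration format look roughly like this:

```
[INPUT]
    Name              tail
    Tag               kube.*
    Path              /var/log/containers/*.log
    multiline.parser  docker, cri
    DB                /var/log/flb_kube.db

[FILTER]
    Name              kubernetes
    Match             kube.*

[OUTPUT]
    Name              es
    Match             kube.*
    Host              elasticsearch-master
    Port              9200
    Logstash_Format   On
    Retry_Limit       False
```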
Fluent Bit v1.5.0 and later supports deployment to Windows pods.
When deploying Fluent Bit to Kubernetes, there are three log files that you need to pay attention to.
C:\k\kubelet.err.log
This is the error log file from the kubelet daemon running on the host. Retain this file for future troubleshooting, including debugging deployment failures.
C:\var\log\containers\<pod>_<namespace>_<container>-<docker>.log
This is the main log file you need to watch. Configure Fluent Bit to follow this file. It's a symlink to the Docker log file in C:\ProgramData\, with some additional metadata in the file name.
C:\ProgramData\Docker\containers\<docker>\<docker>.log
This is the log file produced by Docker. Normally you don't directly read from this file, but you need to make sure that this file is visible from Fluent Bit.
Typically, your deployment YAML contains the following volume configuration.
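A minimal sketch, assuming the host paths listed above; the container image and volume names are illustrative and should be adjusted for your cluster:

```yaml
spec:
  containers:
    - name: fluent-bit
      image: my-registry/fluent-bit:windows   # illustrative; use a Windows build of Fluent Bit matching your node OS
      volumeMounts:
        - mountPath: C:\k
          name: k
        - mountPath: C:\var\log
          name: varlog
        - mountPath: C:\ProgramData
          name: progdata
  volumes:
    - name: k
      hostPath:
        path: C:\k
    - name: varlog
      hostPath:
        path: C:\var\log
    - name: progdata
      hostPath:
        path: C:\ProgramData
```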
Assuming the basic volume configuration described previously, you can apply the following configuration to start logging:
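A minimal sketch, assuming the volume layout above; the parsers file, database path, and stdout output are placeholders to adapt to your environment:

```
[SERVICE]
    # Placeholder: point this at the parsers file that defines the docker parser
    Parsers_File      C:\\fluent-bit\\parsers.conf

[INPUT]
    Name              tail
    Tag               kube.*
    Path              C:\\var\\log\\containers\\*.log
    Parser            docker
    DB                C:\\fluent-bit\\tail_docker.db

[FILTER]
    Name              kubernetes
    Match             kube.*

[OUTPUT]
    # stdout is used here for illustration; replace with your real backend
    Name              stdout
    Match             *
```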
Windows pods often lack working DNS immediately after boot (#78479). To mitigate this issue, filter_kubernetes provides a built-in mechanism to wait until the network starts up:
DNS_Retries: Retries N times until the network starts working (default: 6).
DNS_Wait_Time: Lookup interval between network status checks, in seconds (default: 30).
By default, Fluent Bit waits for three minutes (30 seconds x 6 times). If it's not enough for you, update the configuration as follows:
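For example, to wait up to five minutes instead (the values below are illustrative), raise the retry count in the kubernetes filter:

```
[FILTER]
    Name           kubernetes
    Match          kube.*
    DNS_Retries    10
    DNS_Wait_Time  30
```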