What is Fluent Bit?

Fluent Bit is a CNCF sub-project under the umbrella of Fluentd

Fluent Bit is an open source telemetry agent specifically designed to efficiently handle the challenges of collecting and processing telemetry data across a wide range of environments, from constrained systems to complex cloud infrastructures. Managing telemetry data from various sources and formats can be a constant challenge, particularly when performance is a critical factor.

Rather than serving as a drop-in replacement, Fluent Bit enhances the observability strategy for your infrastructure by adapting and optimizing your existing logging layer, and adding metrics and traces processing. Fluent Bit supports a vendor-neutral approach, seamlessly integrating with other ecosystems such as Prometheus and OpenTelemetry. Trusted by major cloud providers, banks, and companies in need of a ready-to-use telemetry agent solution, Fluent Bit effectively manages diverse data sources and formats while maintaining optimal performance and keeping resource consumption low.

Fluent Bit can be deployed as an edge agent for localized telemetry data handling or utilized as a central aggregator/collector for managing telemetry data across multiple sources and environments.

Fluent Bit Documentation

High Performance Telemetry Agent for Logs, Metrics and Traces

Fluent Bit is a fast and lightweight telemetry agent for logs, metrics, and traces for Linux, macOS, Windows, and BSD family operating systems. Fluent Bit has been made with a strong focus on performance to allow the collection and processing of telemetry data from different sources without complexity.

Features

  • High performance: High throughput with low resources consumption

  • Data parsing

    • Convert your unstructured messages using Fluent Bit parsers: JSON, Regex, LTSV, and Logfmt

  • Metrics support: Prometheus and OpenTelemetry compatible

  • Reliability and data integrity

    • Backpressure handling

    • Data buffering in memory and file system

  • Networking

    • Security: Built-in TLS/SSL support

    • Asynchronous I/O

  • Pluggable architecture and extensibility: Inputs, Filters and Outputs:

    • Connect nearly any source to nearly any destination using preexisting plugins

    • Extensibility:

      • Write input, filter, or output plugins in the C language

      • WASM: WASM Filter Plugins or WASM Input Plugins

      • Write Filters in Lua or Output plugins in Golang

  • Monitoring: Expose internal metrics over HTTP in JSON and Prometheus format

  • Stream Processing: Perform data selection and transformation using simple SQL queries

    • Create new streams of data using query results

    • Aggregation windows

    • Data analysis and prediction: Time series forecasting

  • Portable: Runs on Linux, macOS, Windows and BSD systems

Release notes

For more details about changes in each release, refer to the official release notes.

Fluent Bit, Fluentd, and CNCF

Fluent Bit is a CNCF graduated sub-project under the umbrella of Fluentd. Fluent Bit is licensed under the terms of the Apache License v2.0.

Fluent Bit was originally created by Eduardo Silva and is now sponsored by Chronosphere. As a CNCF-hosted project, it's a fully vendor-neutral and community-driven project.

Input

The way to gather data from your sources

Fluent Bit provides input plugins to gather information from different sources. Some plugins collect data from log files, while others can gather metrics information from the operating system. There are many plugins to suit different needs.

When an input plugin loads, an internal instance is created. Each instance has its own independent configuration. Configuration keys are often called properties.

Every input plugin has its own documentation section that specifies how to use it and what properties are available.

For more details, see Input Plugins.

Sources

Download Source Code

You can download the most recent stable or development source code.

Stable

For production systems, it's strongly suggested that you get the latest stable release of the source code in either zip file or tarball file format from GitHub using the following link pattern:

https://github.com/fluent/fluent-bit/archive/refs/tags/v<release version>.tar.gz
https://github.com/fluent/fluent-bit/archive/refs/tags/v<release version>.zip

For example, for version 1.8.12 the link is: https://github.com/fluent/fluent-bit/archive/refs/tags/v1.8.12.tar.gz

Development

If you want to contribute to Fluent Bit, you should use the most recent code. You can get the development version from the Git repository:

git clone https://github.com/fluent/fluent-bit

The master branch is where the development of Fluent Bit happens. Development version users should expect issues when compiling or at run time.

Fluent Bit users are encouraged to help test every development version to ensure a stable release.


Buffer

Data processing with reliability

The buffer phase in the pipeline aims to provide a unified and persistent mechanism to store your data, using the primary in-memory model or the file system-based mode.

The buffer phase contains the data in an immutable state, meaning that no other filter can be applied.

Buffered data uses the Fluent Bit internal binary representation, which isn't raw text.

Fluent Bit offers a buffering mechanism in the file system that acts as a backup system to avoid data loss in case of system failures.

A Brief History of Fluent Bit

Every project has a story

In 2014, the team at Treasure Data was forecasting the need for a lightweight log processor for constrained environments like embedded Linux and gateways. The project aimed to be part of the Fluentd ecosystem. At that moment, Eduardo Silva created Fluent Bit, a new open source solution, written from scratch and available under the terms of the Apache License v2.0.

After the project matured, it gained traction for normal Linux systems. With the new containerized world, the Cloud Native community asked to extend the project scope to support more sources, filters, and destinations. Not long after, Fluent Bit became one of the preferred solutions to solve the logging challenges in Cloud environments.

Filter

Modify, enrich or drop your records

In production environments you need full control of the data you're collecting. Filtering lets you alter the collected data before delivering it to a destination.

Filtering is implemented through plugins. Each available filter can be used to match, exclude, or enrich your logs with specific metadata.

Fluent Bit supports many filters. A common use case for filtering is Kubernetes deployments, where every pod log needs the proper metadata associated with it.

Like input plugins, filters run in an instance context, which has its own independent configuration. Configuration keys are often called properties.

For more details about the filters available and their usage, see Filters.

Linux Packages

The most secure option is to create the repositories according to the instructions for your specific OS.

An installation script is provided for use with most Linux targets. This will by default install the most recent version released.

curl https://raw.githubusercontent.com/fluent/fluent-bit/master/install.sh | sh

This is a helper and should always be validated prior to use.

GPG key updates

For the 1.9.0 and 1.8.15 releases and later, the GPG key has been updated. Ensure the new key is added.

The GPG Key fingerprint of the new key is:

C3C0 A285 34B9 293E AF51  FABD 9F9D DC08 3888 C1CD
Fluentbit releases (Releases signing key) <[email protected]>

The previous key is still available and might be required to install previous versions.

The GPG Key fingerprint of the old key is:

F209 D876 2A60 CD49 E680 633B 4FF8 368B 6EA0 722A

Refer to the supported platform documentation to see which platforms are supported in each release.

Migration to Fluent Bit

For version 1.9 and later, td-agent-bit is a deprecated package and is removed after 1.9.9. The correct package name to use now is fluent-bit.

Memory Management

You might need to estimate how much memory Fluent Bit could be using in scenarios like containerized environments where memory limits are essential.

To make an estimate, in-use input plugins must set the Mem_Buf_Limit option. Learn more about it in Backpressure.

Estimating

Input plugins append data independently. To make an estimation, impose a limit with the Mem_Buf_Limit option. If the limit was set to 10MB, you can estimate that in the worst case, the output plugin likely could use 20MB.

Fluent Bit has an internal binary representation for the data being processed. When this data reaches an output plugin, it can create its own representation in a new memory buffer for processing. The best examples are the InfluxDB and Elasticsearch output plugins, which need to convert the binary representation to their respective custom JSON formats before sending data to the backend servers.

When imposing a limit of 10MB for the input plugins, and a worst case scenario of the output plugin consuming 20MB, you need to allocate a minimum of (30MB x 1.2) = 36MB.

Glibc and memory fragmentation

In intensive environments where memory allocations happen in high volumes, the default memory allocator provided by Glibc could lead to high fragmentation, causing the service to report high memory usage.

It's strongly suggested that in any production environment, Fluent Bit should be built with jemalloc enabled (-DFLB_JEMALLOC=On). The jemalloc implementation of malloc is an alternative memory allocator that can reduce fragmentation, resulting in better performance.

Use the following command to determine if Fluent Bit has been built with jemalloc:

bin/fluent-bit -h | grep JEMALLOC

The output should look like:

Build Flags =  JSMN_PARENT_LINKS JSMN_STRICT FLB_HAVE_TLS FLB_HAVE_SQLDB
FLB_HAVE_TRACE FLB_HAVE_FLUSH_LIBCO FLB_HAVE_VALGRIND FLB_HAVE_FORK
FLB_HAVE_PROXY_GO FLB_HAVE_JEMALLOC JEMALLOC_MANGLE FLB_HAVE_REGEX
FLB_HAVE_C_TLS FLB_HAVE_SETJMP FLB_HAVE_ACCEPT4 FLB_HAVE_INOTIFY

If the FLB_HAVE_JEMALLOC option is listed in Build Flags, jemalloc is enabled.

Format and Schema

Fluent Bit can optionally use a configuration file to define how the service behaves.

The schema is defined by three concepts:

  • Sections

  • Entries: key/value

  • Indented Configuration Mode

An example of a configuration file is as follows:

[SERVICE]
    # This is a commented line
    Daemon    off
    log_level debug

Sections

A section is defined by a name or title inside brackets. Using the previous example, a Service section has been set using [SERVICE] definition. The following rules apply:

  • All section content must be indented (four spaces ideally).

  • Multiple sections can exist on the same file.

  • A section is expected to have comments and entries; it can't be empty.

  • Any commented line under a section must be indented too.

  • End-of-line comments aren't supported, only full-line comments.

Entries: key/value

A section can contain entries. An entry is defined by a line of text that contains a Key and a Value. Using the previous example, the [SERVICE] section contains two entries: one is the key Daemon with value off and the other is the key Log_Level with the value debug. The following rules apply:

  • An entry is defined by a key and a value.

  • A key must be indented.

  • A key must have a value; the value ends at the line break.

  • Multiple keys with the same name can exist.

Commented lines are set by prefixing them with the # character. Commented lines aren't processed, but they must be indented.

Indented configuration mode

Fluent Bit configuration files are based in a strict indented mode. Each configuration file must follow the same pattern of alignment from left to right when writing text. By default, an indentation level of four spaces from left to right is suggested. Example:

[FIRST_SECTION]
    # This is a commented line
    Key1  some value
    Key2  another value
    # more comments

[SECOND_SECTION]
    KeyN  3.14

This example shows two sections with multiple entries and comments. Empty lines are allowed.

Requirements

Fluent Bit has very low CPU and memory consumption. It's compatible with most x86-, x86_64-, arm32v7-, and arm64v8-based platforms.

The build process requires the following components:

  • Compiler: GCC or clang

  • CMake

  • Flex and Bison: Required for Stream Processor or Record Accessor

  • Libyaml development headers and libraries

Core has no other dependencies. Some features depend on third-party components. For example, output plugins with special backend libraries like Kafka include those libraries in the main source code repository.

Fluent Bit is supported on Linux on IBM Z (s390x), but the WASM and Lua filter plugins aren't.

Includes

The includes section lets you specify additional YAML configuration files to be merged into the current configuration. These files are identified as a list of filenames and can include relative or absolute paths. If no absolute path is provided, the file is assumed to be located in a directory relative to the file that references it.

Use this section to organize complex configurations into smaller, manageable files and include them as needed.

Usage

The following example demonstrates how to include additional YAML files using relative path references. This is the file system path structure:

├── fluent-bit.yaml
├── inclusion-1.yaml
└── subdir
    └── inclusion-2.yaml

The content of fluent-bit.yaml:

includes:
  - inclusion-1.yaml
  - subdir/inclusion-2.yaml

Ensure that the included files are formatted correctly and contain valid YAML configurations for seamless integration.

If a path isn't specified as absolute, it will be treated as relative to the file that includes it.


Amazon Linux

Install on Amazon Linux

Fluent Bit is distributed as the fluent-bit package and is available for the latest Amazon Linux 2 and Amazon Linux 2023. The following architectures are supported:

  • x86_64

  • aarch64 / arm64v8

Amazon Linux 2022 is no longer supported.

Single line install

Fluent Bit provides an installation script to use for most Linux targets. This will always install the most recently released version.

curl https://raw.githubusercontent.com/fluent/fluent-bit/master/install.sh | sh

This is a convenience helper and should always be validated prior to use. The recommended secure deployment approach is to use the following instructions:

Configure Yum

The fluent-bit package is provided through a Yum repository. To add the repository reference to your system, add a new file called fluent-bit.repo in /etc/yum.repos.d/ with the following content:

Amazon Linux 2

[fluent-bit]
name = Fluent Bit
baseurl = https://packages.fluentbit.io/amazonlinux/2/
gpgcheck=1
gpgkey=https://packages.fluentbit.io/fluentbit.key
enabled=1

Amazon Linux 2023

[fluent-bit]
name = Fluent Bit
baseurl = https://packages.fluentbit.io/amazonlinux/2023/
gpgcheck=1
gpgkey=https://packages.fluentbit.io/fluentbit.key
enabled=1

You should always enable gpgcheck for security reasons. All Fluent Bit packages are signed.

Updated key from March 2022

For the 1.9.0 and 1.8.15 and later releases, the GPG key has been updated. Ensure this new one is added.

The GPG Key fingerprint of the new key is:

C3C0 A285 34B9 293E AF51  FABD 9F9D DC08 3888 C1CD
Fluentbit releases (Releases signing key) <[email protected]>

The previous key is still available and might be required to install previous versions.

The GPG Key fingerprint of the old key is:

F209 D876 2A60 CD49 E680 633B 4FF8 368B 6EA0 722A

Refer to the supported platform documentation to see which platforms are supported in each release.

Install

  1. After your repository is configured, run the following command to install it:

    sudo yum install fluent-bit
  2. Instruct systemd to enable the service:

sudo systemctl start fluent-bit

If you do a status check, you should see a similar output like this:

$ systemctl status fluent-bit
● fluent-bit.service - Fluent Bit
   Loaded: loaded (/usr/lib/systemd/system/fluent-bit.service; disabled; vendor preset: disabled)
   Active: active (running) since Thu 2016-07-07 02:08:01 BST; 9s ago
 Main PID: 3820 (fluent-bit)
   CGroup: /system.slice/fluent-bit.service
           └─3820 /opt/fluent-bit/bin/fluent-bit -c /etc/fluent-bit/fluent-bit.conf
...

The default Fluent Bit configuration collects metrics of CPU usage and sends the records to the standard output. You can see the outgoing data in your /var/log/messages file.

Amazon EC2

Learn how to install Fluent Bit and the AWS output plugins on Amazon Linux 2 using AWS Systems Manager.

Multithreading

Learn how to run Fluent Bit in multiple threads for improved scalability.

Fluent Bit has one event loop to handle critical operations, like managing timers, receiving internal messages, scheduling flushes, and handling retries. This event loop runs in the main Fluent Bit thread.

To free up resources in the main thread, you can configure inputs and outputs to run in their own self-contained threads. However, inputs and outputs implement multithreading in distinct ways: inputs can run in threaded mode, and outputs can use one or more workers.

Threading also affects certain processes related to inputs and outputs. For example, filters always run in the main thread, but processors run in the self-contained threads of their respective inputs or outputs, if applicable.

Inputs

When inputs collect telemetry data, they can either perform this process inside the main Fluent Bit thread or inside a separate dedicated thread. You can configure this behavior by enabling or disabling the threaded setting.

All inputs are capable of running in threaded mode, but certain inputs always run in threaded mode regardless of configuration. These always-threaded inputs are:

  • Kubernetes Events

  • Node Exporter Metrics

  • Process Exporter Metrics

  • Windows Exporter Metrics

Inputs aren't internally aware of multithreading. If an input runs in threaded mode, Fluent Bit manages the logistics of that input's thread.

Outputs

When outputs flush data, they can either perform this operation inside Fluent Bit's main thread or inside a separate dedicated thread called a worker. Each output can have one or more workers running in parallel, and each worker can handle multiple concurrent flushes. You can configure this behavior by changing the value of the workers setting.

All outputs are capable of running in multiple workers, and each output has a default value of 0, 1, or 2 workers. However, even if an output uses workers by default, you can safely reduce the number of workers below the default or disable workers entirely.
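A minimal sketch of both settings in YAML, assuming the tail input and stdout output as placeholder plugins (the path is only an example):

pipeline:
  inputs:
    - name: tail
      path: '/var/log/app.log'
      # Collect in a dedicated thread instead of the main event loop.
      threaded: true

  outputs:
    - name: stdout
      match: '*'
      # Flush using two parallel worker threads.
      workers: 2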

Containers on AWS

AWS maintains a distribution of Fluent Bit that combines the latest official release with a set of Go Plugins for sending logs to AWS services. AWS and Fluent Bit are working together to rewrite their plugins for inclusion in the official Fluent Bit distribution.

Plugins

The AWS for Fluent Bit image contains Go Plugins for:

  • Amazon CloudWatch as cloudwatch_logs. See the Fluent Bit docs or the Plugin repository.

  • Amazon Kinesis Data Firehose as kinesis_firehose. See the Fluent Bit docs or the Plugin repository.

  • Amazon Kinesis Data Streams as kinesis_streams. See the Fluent Bit docs or the Plugin repository.

These plugins are higher performance than the original Go plugins.

Also, Fluent Bit includes an S3 output plugin named s3. See Amazon S3.

Versions and Regional Repositories

AWS vends their container image using Docker Hub and a set of highly available regional Amazon ECR repositories. For more information, see the AWS for Fluent Bit GitHub repository.

The AWS for Fluent Bit image uses a custom versioning scheme because it contains multiple projects. To see what each release contains, see the release notes on GitHub.

SSM Public Parameters

AWS vends SSM public parameters with the regional repository link for each image. These parameters can be queried by any AWS account.

To see a list of available version tags in a given region, run the following command:

aws ssm get-parameters-by-path --region eu-central-1 --path /aws/service/aws-for-fluent-bit/ --query 'Parameters[*].Name'

To see the ECR repository URI for a given image tag in a given region, run the following:

aws ssm get-parameter --region ap-northeast-1 --name /aws/service/aws-for-fluent-bit/2.0.0

You can use these SSM public parameters as parameters in your CloudFormation templates:

Parameters:
  FireLensImage:
    Description: Fluent Bit image for the FireLens Container
    Type: AWS::SSM::Parameter::Value<String>
    Default: /aws/service/aws-for-fluent-bit/latest

Performance Tips

Fluent Bit is designed for high performance and minimal resource usage. Depending on your use case, you can optimize further using specific configuration options to achieve faster performance or reduce resource consumption.

Reading Files with Tail

The Tail input plugin is used to read data from files on the filesystem. By default, it uses a small memory buffer of 32KB per monitored file. While this is sufficient for most generic use cases and helps keep memory usage low when monitoring many files, there are scenarios where you may want to increase performance by using more memory.

If your files are typically larger than 32KB, consider increasing the buffer size to speed up file reading. For example, you can experiment with a buffer size of 128KB:

pipeline:
  inputs:
    - name: tail
      path: '/var/log/containers/*.log'
      buffer_chunk_size: 128kb
      buffer_max_size: 128kb

By increasing the buffer size, Fluent Bit will make fewer system calls (read(2)) to read the data, reducing CPU usage and improving performance.

Fluent Bit and SIMD for JSON Encoding

Starting in Fluent Bit v3.2, performance improvements have been introduced for JSON encoding. Plugins that convert logs from Fluent Bit's internal binary representation to JSON can now do so up to 30% faster using SIMD (Single Instruction, Multiple Data) optimizations.

Enabling SIMD Support

Ensure that your Fluent Bit binary is built with SIMD support. This feature is available for architectures such as x86_64, amd64, aarch64, and arm64. As of now, SIMD is only enabled by default in Fluent Bit container images.

You can check if SIMD is enabled by looking for the following log entry when Fluent Bit starts:

[2024/11/10 22:25:53] [ info] [fluent bit] version=3.2.0, commit=12cb22e0e9, pid=74359
[2024/11/10 22:25:53] [ info] [storage] ver=1.5.2, type=memory, sync=normal, checksum=off, max_chunks_up=128
[2024/11/10 22:25:53] [ info] [simd    ] SSE2
[2024/11/10 22:25:53] [ info] [cmetrics] version=0.9.8
[2024/11/10 22:25:53] [ info] [ctraces ] version=0.5.7
[2024/11/10 22:25:53] [ info] [sp] stream processor started

Look for the simd entry, which will indicate the SIMD support type, such as SSE2, NEON, or none.

If your Fluent Bit binary wasn't built with SIMD enabled and you're using a supported platform, you can build Fluent Bit from source using the CMake option -DFLB_SIMD=On.

Run input plugins in threaded mode

By default, most input plugins run in the same system thread as the main event loop. However, you can configure them to run in a separate thread, which lets you take advantage of other CPU cores in your system.

To run an input plugin in threaded mode, add threaded: true as in the following example:

pipeline:
  inputs:
    - name: tail
      path: '/var/log/containers/*.log'
      threaded: true

Unit Sizes

Some configuration directives in Fluent Bit refer to unit sizes such as when defining the size of a buffer or specific limits. Plugins like Tail Input, Forward Input or generic properties like Mem_Buf_Limit use unit sizes.

Fluent Bit v0.11.10 standardized unit sizes across the core and plugins. The following list describes the suffixes that can be used and what they mean:

  • No suffix: the value is interpreted as bytes. Specifying a value of 32000 means 32000 bytes.

  • k, K, KB, kb: kilobyte, a unit of memory equal to 1,000 bytes. 32k means 32000 bytes.

  • m, M, MB, mb: megabyte, a unit of memory equal to 1,000,000 bytes. 1M means 1000000 bytes.

  • g, G, GB, gb: gigabyte, a unit of memory equal to 1,000,000,000 bytes. 1G means 1000000000 bytes.
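As a brief illustration, a unit size applied to Mem_Buf_Limit in a classic mode input (the path here is only an example):

[INPUT]
    Name          tail
    Path          /var/log/app.log
    # Limit this input's memory buffer to 5 megabytes.
    Mem_Buf_Limit 5MB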

Multiline Parsers

Multiline parsers are used to combine logs that span multiple events into a single, cohesive message. This is particularly useful for handling stack traces, error logs, or any log entry that contains multiple lines of information.

In YAML configuration, the syntax for defining multiline parsers differs slightly from the classic configuration format, introducing minor breaking changes, specifically in how the rules are defined.

The following example demonstrates how to define a multiline parser directly in the main configuration file, and how to include additional definitions from external files:

multiline_parsers:
  - name: multiline-regex-test
    type: regex
    flush_timeout: 1000
    rules:
      - state: start_state
        regex: '/([a-zA-Z]+ \d+ \d+:\d+:\d+)(.*)/'
        next_state: cont
      - state: cont
        regex: '/^\s+at.*/'
        next_state: cont

This example defines a multiline parser named multiline-regex-test that uses regular expressions to handle multi-event logs. The parser contains two rules: the first rule transitions from start_state to cont when a matching log entry is detected, and the second rule continues to match subsequent lines.

For more detailed information on configuring multiline parsers, including advanced options and use cases, refer to the Configuring Multiline Parsers documentation.

Upstream Servers

The upstream_servers section defines a group of endpoints, referred to as nodes. Nodes are used by output plugins to distribute data in a round-robin fashion. This is useful for plugins that require load balancing when sending data. Examples of plugins that support this capability include Forward and Elasticsearch.

The upstream_servers section requires specifying a name for the group and a list of nodes. The following example defines two upstream server groups, forward-balancing and forward-balancing-2:

upstream_servers:
  - name: forward-balancing
    nodes:
      - name: node-1
        host: 127.0.0.1
        port: 43000

      - name: node-2
        host: 127.0.0.1
        port: 44000

      - name: node-3
        host: 127.0.0.1
        port: 45000
        tls: true
        tls_verify: false
        shared_key: secret

  - name: forward-balancing-2
    nodes:
      - name: node-A
        host: 192.168.1.10
        port: 50000

      - name: node-B
        host: 192.168.1.11
        port: 51000

Each node in the upstream_servers group must specify a name, host, and port. Additional settings like tls, tls_verify, and shared_key can be configured for secure communication.

While the upstream_servers section can be defined globally, some output plugins might require the configuration to be specified in a separate YAML file. Consult the documentation for each specific output plugin to understand its requirements.

Buildroot / Embedded Linux

Install Fluent Bit in your embedded Linux system.

Install

To install, select Fluent Bit in your defconfig. See the Config.in file for all configuration options.

BR2_PACKAGE_FLUENT_BIT=y

Run

The default configuration file is written to:

/etc/fluent-bit/fluent-bit.conf

Fluent Bit is started by the S99fluent-bit script.

Support

All configurations with a toolchain that supports threads and dynamic library linking are supported.

Parsers

Parsers enable Fluent Bit components to transform unstructured data into a structured internal representation. You can define parsers either directly in the main configuration file or in separate external files for better organization.

This page provides a general overview of how to declare parsers.

The main section name is parsers, and it lets you define a list of parser configurations. The following example demonstrates how to set up two basic parsers:

parsers:
  - name: json
    format: json

  - name: docker
    format: json
    time_key: time
    time_format: "%Y-%m-%dT%H:%M:%S.%L"
    time_keep: true

You can define multiple parsers sections, either within the main configuration file or distributed across included files.

For more detailed information on parser options and advanced configurations, refer to the Configuring Parsers documentation.

Pipeline Monitoring

Learn how to monitor your data pipeline with external services

A Data Pipeline represents a flow of data that goes through the inputs (sources), filters, and outputs (sinks). The following sections contain information and steps to get started monitoring the pipeline.

  • HTTP Server: JSON and Prometheus Exporter-style metrics

  • Grafana Dashboards and Alerts

  • Health Checks

  • Telemetry Pipeline: hosted service to monitor and visualize your pipelines


Sandbox and Lab Resources

This page gives an overview of free public Sandbox and Lab resources for learning how to best operate, use, and have success with Fluent Bit.

Fluent Bit Sandbox - sign-up required

The following labs run in your browser but require email sign-up:

  • Fluent Bit 101 Sandbox - Getting Started with configuration and routing

Open Source Labs - environment required

The following are open source labs where you need to spin up resources to run through the lab in detail.

O11y Workshops by Chronosphere

These open source workshops provided by Chronosphere can be found at https://o11y-workshops.gitlab.io/. The OSS repository can be found at https://gitlab.com/o11y-workshops/workshop-fluentbit.

The workshop includes the following labs:

  1. Lab 1 - Introduction to Fluent Bit

  2. Lab 2 - Installing Fluent Bit

  3. Lab 3 - Exploring First Pipelines

  4. Lab 4 - Exploring More Pipelines

  5. Lab 5 - Understanding Backpressure

  6. Lab 6 - Avoid Telemetry Data Loss

  7. Lab 7 - Pipeline Integration with OpenTelemetry

Logging with Fluent Bit and Amazon OpenSearch workshop by Amazon

This workshop by Amazon goes through common Kubernetes logging patterns, routing data to OpenSearch, and visualizing it with OpenSearch dashboards.

Parser

Convert unstructured messages to structured messages

Dealing with raw strings or unstructured messages is difficult. Having a structure makes data more usable. Set a structure to the incoming data by using input plugins as data is collected.

The parser converts unstructured data to structured data. As an example, consider the following Apache (HTTP Server) log entry:

192.168.2.20 - - [28/Jul/2006:10:27:10 -0300] "GET /cgi-bin/try/ HTTP/1.0" 200 3395

This log line is a raw string without format. Structuring the log makes it easier to process the data later. If the regular expression parser is used, the log entry could be converted to:

{
  "host":    "192.168.2.20",
  "user":    "-",
  "method":  "GET",
  "path":    "/cgi-bin/try/",
  "code":    "200",
  "size":    "3395",
  "referer": "",
  "agent":   ""
 }

Parsers are fully configurable and are independently and optionally handled by each input plugin. For more details, see Parsers.

Output

Learn about destinations for your data, such as databases and cloud services.

The output interface lets you define destinations for your data. Common destinations are remote services, local file systems, or other standard interfaces. Outputs are implemented as plugins.

When an output plugin is loaded, an internal instance is created. Every instance has its own independent configuration. Configuration keys are often called properties.

Every output plugin has its own documentation section specifying how it can be used and what properties are available.

For more details, see Output Plugins.

Raspbian / Raspberry Pi

Fluent Bit is distributed as the fluent-bit package and is available for the Raspberry Pi, specifically for the Raspbian distribution. The following versions are supported:

  • Raspbian Bookworm (12)

  • Raspbian Bullseye (11)

  • Raspbian Buster (10)

Server GPG key

The first step is to add the Fluent Bit server GPG key to your keyring so you can get Fluent Bit signed packages:

curl https://packages.fluentbit.io/fluentbit.key | sudo apt-key add -

Updated key from March 2022

For the 1.9.0 and 1.8.15 and later releases, the GPG key has been updated. Ensure this new one is added.

The GPG Key fingerprint of the new key is:

C3C0 A285 34B9 293E AF51  FABD 9F9D DC08 3888 C1CD
Fluentbit releases (Releases signing key) <[email protected]>

The previous key is still available and might be required to install previous versions.

The GPG Key fingerprint of the old key is:

F209 D876 2A60 CD49 E680 633B 4FF8 368B 6EA0 722A

Refer to the supported platform documentation to see which platforms are supported in each release.

Update your sources lists

On Debian and derivative systems such as Raspbian, you need to add the Fluent Bit APT server entry to your sources lists.

Add the following content at bottom of your /etc/apt/sources.list file.

Raspbian 12 (Bookworm)

deb https://packages.fluentbit.io/raspbian/bookworm bookworm main

Raspbian 11 (Bullseye)

deb https://packages.fluentbit.io/raspbian/bullseye bullseye main

Raspbian 10 (Buster)

deb https://packages.fluentbit.io/raspbian/buster buster main

Update your repositories database

Now let your system update the apt database:

sudo apt-get update

Fluent Bit recommends upgrading your system (sudo apt-get upgrade) to avoid potential issues with expired certificates.

Install Fluent Bit

  1. Use the following apt-get command to install the latest Fluent Bit:

    sudo apt-get install fluent-bit

  2. Instruct systemd to enable the service:

    sudo service fluent-bit start

If you do a status check, you should see a similar output like this:

sudo service fluent-bit status
● fluent-bit.service - Fluent Bit
   Loaded: loaded (/lib/systemd/system/fluent-bit.service; disabled; vendor preset: enabled)
   Active: active (running) since mié 2016-07-06 16:58:25 CST; 2h 45min ago
 Main PID: 6739 (fluent-bit)
    Tasks: 1
   Memory: 656.0K
      CPU: 1.393s
   CGroup: /system.slice/fluent-bit.service
           └─6739 /opt/fluent-bit/bin/fluent-bit -c /etc/fluent-bit/fluent-bit.conf
...

The default configuration of Fluent Bit collects metrics for CPU usage and sends the records to the standard output. You can see the outgoing data in your /var/log/syslog file.

Alma / Rocky Linux

Fluent Bit is distributed as the fluent-bit package and is available for the latest versions of Rocky or Alma Linux now that CentOS Stream is tracking more recent dependencies.

Fluent Bit supports the following architectures:

  • x86_64

  • aarch64

  • arm64v8

Single line install

Fluent Bit provides an installation script to use for most Linux targets. This will always install the most recently released version.

curl https://raw.githubusercontent.com/fluent/fluent-bit/master/install.sh | sh

This is a convenience helper and should always be validated prior to use. Older versions of this install script will not support auto-detecting Rocky or Alma Linux. The recommended secure deployment approach is to use the following instructions:

RHEL 9

From CentOS 9 Stream onwards, the CentOS dependencies update more often than downstream usage. This can mean that incompatible (more recent) versions of certain dependencies are provided (for example, OpenSSL). For OSS, RockyLinux and AlmaLinux repositories are also provided. This might be required for RHEL 9 as well, which no longer tracks equivalent CentOS 9 Stream dependencies. No RHEL 9 build is provided; use one of the OSS variants listed.

Configure Yum

The fluent-bit package is provided through a Yum repository. To add the repository reference to your system:

  1. In /etc/yum.repos.d/, add a new file called fluent-bit.repo.

  2. Add the following content to the file, replacing almalinux with rockylinux if required:

[fluent-bit]
name = Fluent Bit
baseurl = https://packages.fluentbit.io/almalinux/$releasever/
gpgcheck=1
gpgkey=https://packages.fluentbit.io/fluentbit.key
repo_gpgcheck=1
enabled=1

  3. As a best practice, enable gpgcheck and repo_gpgcheck for security reasons. Fluent Bit signs its repository metadata and all Fluent Bit packages.

Install

  1. After your repository is configured, run the following command to install it:

    sudo yum install fluent-bit

  2. Instruct systemd to enable the service:

    sudo systemctl start fluent-bit

If you do a status check, you should see a similar output like this:

$ systemctl status fluent-bit
● fluent-bit.service - Fluent Bit
   Loaded: loaded (/usr/lib/systemd/system/fluent-bit.service; disabled; vendor preset: disabled)
   Active: active (running) since Thu 2016-07-07 02:08:01 BST; 9s ago
 Main PID: 3820 (fluent-bit)
   CGroup: /system.slice/fluent-bit.service
           └─3820 /opt/fluent-bit/bin/fluent-bit -c /etc/fluent-bit/fluent-bit.conf
...

The default Fluent Bit configuration collects metrics of CPU usage and sends the records to the standard output. You can see the outgoing data in your /var/log/messages file.

Variables

Fluent Bit supports the usage of environment variables in any value associated to a key when using a configuration file.

The variables are case sensitive and can be used in the following format:

${MY_VARIABLE}

When Fluent Bit starts, the configuration reader will detect any request for ${MY_VARIABLE} and will try to resolve its value.

When Fluent Bit is running under systemd (using the official packages), environment variables can be set in the following files:

  • /etc/default/fluent-bit (Debian based system)

  • /etc/sysconfig/fluent-bit (Others)

These files are ignored if they don't exist.

Example

Create the following configuration file (fluent-bit.conf):
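A minimal sketch along these lines, assuming the cpu input as the source and an output plugin name taken from the MY_OUTPUT variable:

[SERVICE]
    Flush     1
    Daemon    off
    Log_Level info

[INPUT]
    Name cpu
    Tag  cpu.local

[OUTPUT]
    # The plugin name is resolved from the MY_OUTPUT environment variable.
    Name  ${MY_OUTPUT}
    Match *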

Open a terminal and set the environment variable:
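export MY_OUTPUT=stdout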

The previous command sets the stdout value to the variable MY_OUTPUT.

Run Fluent Bit with the recently created configuration file:
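fluent-bit -c fluent-bit.conf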

Ubuntu

Fluent Bit is distributed as the fluent-bit package and is available for long-term support releases of Ubuntu. The latest officially supported version is Noble Numbat (24.04).

Single line install

An installation script is provided for most Linux targets. This will always install the most recent version released.

curl https://raw.githubusercontent.com/fluent/fluent-bit/master/install.sh | sh

This is purely a convenience helper and should always be validated prior to use. The recommended secure deployment approach is to use the following instructions.

Server GPG key

The first step is to add the Fluent Bit server GPG key to your keyring to ensure you can get the correct signed packages.

Follow the official Debian wiki guidance:

curl https://packages.fluentbit.io/fluentbit.key | gpg --dearmor > /usr/share/keyrings/fluentbit-keyring.gpg

Updated key from March 2022

For releases 1.9.0 and 1.8.15 and later, the GPG key has been updated. Ensure the new key is added.

The GPG Key fingerprint of the new key is:

C3C0 A285 34B9 293E AF51  FABD 9F9D DC08 3888 C1CD
Fluentbit releases (Releases signing key) <[email protected]>

The previous key is still available and might be required to install previous versions.

The GPG Key fingerprint of the old key is:

F209 D876 2A60 CD49 E680 633B 4FF8 368B 6EA0 722A

Refer to the supported platform documentation to see which platforms are supported in each release.

Update your sources lists

On Ubuntu, you need to add the Fluent Bit APT server entry to your sources lists. Add the following content at the bottom of your /etc/apt/sources.list file. Ensure CODENAME is set to your specific Ubuntu release name. For example, focal for Ubuntu 20.04.

deb [signed-by=/usr/share/keyrings/fluentbit-keyring.gpg] https://packages.fluentbit.io/ubuntu/${CODENAME} ${CODENAME} main

Update your repositories database

Update the apt database on your system:

sudo apt-get update

Fluent Bit recommends upgrading your system to avoid potential issues with expired certificates:

sudo apt-get upgrade

If you receive the error Certificate verification failed, check if the package ca-certificates is properly installed:

sudo apt-get install ca-certificates

Install Fluent Bit

  1. Use the following apt-get command to install the latest Fluent Bit:

    sudo apt-get install fluent-bit

  2. Instruct systemd to enable the service:

    sudo systemctl start fluent-bit

If you do a status check, you should see a similar output like this:

systemctl status fluent-bit
● fluent-bit.service - Fluent Bit
   Loaded: loaded (/lib/systemd/system/fluent-bit.service; disabled; vendor preset: enabled)
   Active: active (running) since mié 2016-07-06 16:58:25 CST; 2h 45min ago
 Main PID: 6739 (fluent-bit)
    Tasks: 1
   Memory: 656.0K
      CPU: 1.393s
   CGroup: /system.slice/fluent-bit.service
           └─6739 /opt/fluent-bit/bin/fluent-bit -c /etc/fluent-bit/fluent-bit.conf
...

The default configuration of fluent-bit collects metrics of CPU usage and sends the records to the standard output. You can see the outgoing data in your /var/log/syslog file.

AWS credentials

Plugins that interact with AWS services fetch credentials from the following providers in order. Only the first provider that provides credentials is used.

  • Environment variables

  • Shared configuration and credentials files

  • EKS Web Identity Token (OIDC)

  • ECS HTTP credentials endpoint

  • EKS Pod Identity credentials

  • EC2 instance profile credentials (IMDS)

All AWS plugins additionally support a role_arn (or AWS_ROLE_ARN, for Elasticsearch) configuration parameter. If specified, the fetched credentials are used to assume the given role.

Environment variables

Plugins use the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY (and optionally AWS_SESSION_TOKEN) environment variables if set.

Shared configuration and credentials files

Plugins read the shared config file at $AWS_CONFIG_FILE (or $HOME/.aws/config), and the shared credentials file at $AWS_SHARED_CREDENTIALS_FILE (or $HOME/.aws/credentials) to fetch the credentials for the profile named $AWS_PROFILE or $AWS_DEFAULT_PROFILE (or "default"). See Configuration and credential file settings in the AWS CLI.

The shared settings evaluate in the following order:

  • credential_process (config file): Linux only. See Sourcing credentials with an external process in the AWS CLI.

  • aws_access_key_id, aws_secret_access_key, aws_session_token (credentials file): Access key ID and secret key to use to authenticate. The session token must be set for temporary credentials.

No other settings are supported.

EKS Web Identity Token (OIDC)

Credentials are fetched using a signed web identity token for a Kubernetes service account. See IAM roles for service accounts.

ECS HTTP credentials endpoint

Credentials are fetched for the ECS task's role. See Amazon ECS task IAM role.

EKS Pod Identity credentials

Credentials are fetched using a pod identity endpoint. See How EKS Pod Identity grants pods access to AWS services.

EC2 instance profile credentials (IMDS)

Fetches credentials for the EC2 instance profile's role. See IAM roles for Amazon EC2. As of Fluent Bit version 1.8.8, IMDSv2 is used by default and IMDSv1 might be disabled. Prior versions of Fluent Bit require enabling IMDSv1 on EC2.

Docker Events

The Docker events input plugin uses the Docker API to capture server events. A complete list of possible events returned by this plugin can be found in the Docker API documentation.

Configuration parameters

This plugin supports the following configuration parameters:

Key
Description
Default

Command line

You can run this plugin from the command line:
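A minimal sketch, printing captured Docker events to standard output:

fluent-bit -i docker_events -o stdout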

Configuration file

In your main configuration file append the following:
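A minimal sketch in classic mode, assuming the default Docker socket is used:

[INPUT]
    Name docker_events

[OUTPUT]
    Name  stdout
    Match *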

Running a Logging Pipeline Locally

You can test logging pipelines locally to observe how they handle log messages. This guide explains how to use Docker Compose to run Fluent Bit and Elasticsearch locally, but you can use the same principles to test other plugins.

Create a configuration file

Start by creating a Fluent Bit configuration file to use for testing.

Use Docker Compose

Use Docker Compose to run Fluent Bit (with the configuration file mounted) and Elasticsearch.

View indexed logs

To view indexed logs, run the following command:
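A sketch using curl, assuming Elasticsearch listens on localhost:9200 and the index is named fluent-bit:

curl "localhost:9200/fluent-bit/_search?pretty"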

Reset index

To reset your index, run the following command:
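Again assuming the fluent-bit index on localhost:9200:

curl -X DELETE "localhost:9200/fluent-bit"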

Yocto / Embedded Linux

Fluent Bit source code provides BitBake recipes to configure, build, and package the software for a Yocto-based image. Specific steps for using these recipes in your Yocto environment (Poky) are out of the scope of this documentation.

Fluent Bit distributes two main recipes, one for testing/dev purposes and one with the latest stable release:

  • devel (fluent-bit_git.bb): Build Fluent Bit from Git master. Use for development and testing purposes only.

  • v1.8.11 (fluent-bit_1.8.11.bb): Build the latest stable version of Fluent Bit.

It's strongly recommended to always use the stable release of the Fluent Bit recipe and not the one from Git master for production deployments.

Fluent Bit and other architectures

Fluent Bit >= v1.1.x fully supports x86_64, x86, arm32v7, and arm64v8.

Memory Metrics

The Memory (mem) input plugin gathers information about the memory and swap usage of the running system at a set interval and reports the total amount of memory and the amount free.

Get started

To get memory and swap usage from your system, you can run the plugin from the command line or through the configuration file:

Command line

Run the following command from the command line, noting this is for a Linux machine:

fluent-bit -i mem -t memory -o stdout -m '*'

Which outputs information similar to:

Fluent Bit v4.0.3
* Copyright (C) 2015-2025 The Fluent Bit Authors
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2025/07/01 14:44:47] [ info] [fluent bit] version=4.0.3, commit=f5f5f3c17d, pid=1
[2025/07/01 14:44:47] [ info] [storage] ver=1.5.3, type=memory, sync=normal, checksum=off, max_chunks_up=128
[2025/07/01 14:44:47] [ info] [simd    ] disabled
[2025/07/01 14:44:47] [ info] [cmetrics] version=1.0.3
[2025/07/01 14:44:47] [ info] [ctraces ] version=0.6.6
[2025/07/01 14:44:47] [ info] [input:mem:mem.0] initializing
[2025/07/01 14:44:47] [ info] [input:mem:mem.0] storage_strategy='memory' (memory only)
[2025/07/01 14:44:47] [ info] [sp] stream processor started
[2025/07/01 14:44:47] [ info] [engine] Shutdown Grace Period=5, Shutdown Input Grace Period=2
[2025/07/01 14:44:47] [ info] [output:stdout:stdout.0] worker #0 started
[0] memory: [[1751381087.225589224, {}], {"Mem.total"=>3986708, "Mem.used"=>560708, "Mem.free"=>3426000, "Swap.total"=>0, "Swap.used"=>0, "Swap.free"=>0}]
[0] memory: [[1751381088.228411537, {}], {"Mem.total"=>3986708, "Mem.used"=>560708, "Mem.free"=>3426000, "Swap.total"=>0, "Swap.used"=>0, "Swap.free"=>0}]
[0] memory: [[1751381089.225600084, {}], {"Mem.total"=>3986708, "Mem.used"=>561480, "Mem.free"=>3425228, "Swap.total"=>0, "Swap.used"=>0, "Swap.free"=>0}]
[0] memory: [[1751381090.228345064, {}], {"Mem.total"=>3986708, "Mem.used"=>561480, "Mem.free"=>3425228, "Swap.total"=>0, "Swap.used"=>0, "Swap.free"=>0}]

Threading

You can enable the threaded setting to run this input in its own thread.

Configuration file

In your main configuration file append the following.

In YAML:

pipeline:
    inputs:
        - name: mem
          tag: memory

    outputs:
        - name: stdout
          match: '*'

In classic mode:

[INPUT]
    Name   mem
    Tag    memory

[OUTPUT]
    Name   stdout
    Match  *

Hot Reload

Enable hot reload through SIGHUP signal or an HTTP endpoint

Fluent Bit supports the reloading feature when enabled in the configuration file or on the command line with the -Y or --enable-hot-reload option.

Hot reloading is supported on Linux, macOS, and Windows operating systems.

Update the configuration

To get started with reloading over HTTP, enable the HTTP Server in the configuration file:
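A minimal sketch of the service section in YAML, assuming the default HTTP port 2020 and the hot_reload service option:

service:
  # Expose the HTTP server used for the reload endpoints.
  http_server: on
  http_listen: 0.0.0.0
  http_port: 2020
  # Allow configuration reloads at runtime.
  hot_reload: on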

How to reload

After updating the configuration, use one of the following methods to perform a hot reload:

HTTP

Use the following HTTP endpoints to perform a hot reload:

  • PUT /api/v2/reload

  • POST /api/v2/reload

When using curl to reload Fluent Bit, specify an empty request body:
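curl -X POST -d '{}' localhost:2020/api/v2/reload

This assumes the HTTP server listens on the default port 2020.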

Signal

Hot reloading can be used with SIGHUP.
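For example, assuming a single running fluent-bit process:

kill -HUP $(pidof fluent-bit)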

SIGHUP signal isn't supported on Windows.

Confirm a reload

Use one of the following methods to confirm the reload occurred.

HTTP

Obtain a count of hot reloads using the HTTP endpoint:

  • GET /api/v2/reload

The endpoint returns hot_reload_count as follows:
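For example, after three reloads:

{"hot_reload_count": 3}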

The default value of the counter is 0.

Plugins

Fluent Bit comes with a variety of built-in plugins, and also supports loading external plugins at runtime. This feature is especially useful for loading Go or WebAssembly (Wasm) plugins that are built as shared object files (.so). Fluent Bit YAML configuration provides the following ways to load these external plugins:

Inline YAML

You can specify external plugins directly within your main YAML configuration file using the plugins section. Here's an example:

plugins:
  - /path/to/out_gstdout.so

service:
  log_level: info

pipeline:
  inputs:
    - name: random

  outputs:
    - name: gstdout
      match: '*'

YAML plugins file included using the plugins_file option

You can load external plugins from a separate YAML file by specifying the plugins_file option in the service section for better modularity.

To configure this:

service:
  log_level: info
  plugins_file: extra_plugins.yaml

pipeline:
  inputs:
    - name: random

  outputs:
    - name: gstdout
      match: '*'

In this setup, the extra_plugins.yaml file might contain the following plugins section:

plugins:
  - /other/path/to/out_gstdout.so

Environment Variables

The env section lets you define environment variables directly within the configuration file. These variables can then be used to dynamically replace values throughout your configuration using the ${VARIABLE_NAME} syntax.

Variables set in this section cannot be overridden by system environment variables.

Values set in the env section are case-sensitive. However, as a best practice, Fluent Bit recommends using uppercase names for environment variables. The following example defines two variables, FLUSH_INTERVAL and STDOUT_FMT, which can be accessed in the configuration using ${FLUSH_INTERVAL} and ${STDOUT_FMT}:

env:
  FLUSH_INTERVAL: 1
  STDOUT_FMT: 'json_lines'

service:
  flush: ${FLUSH_INTERVAL}
  log_level: info

pipeline:
  inputs:
    - name: random

  outputs:
    - name: stdout
      match: '*'
      format: ${STDOUT_FMT}

Predefined variables

Fluent Bit provides a set of predefined environment variables that can be used in your configuration:

  • ${HOSTNAME}: The system's hostname.

External variables

In addition to variables defined in the configuration file or the predefined ones, Fluent Bit can access system environment variables set in the user space. These external variables can be referenced in the configuration using the same ${VARIABLE_NAME} pattern.

Variables set in the env section cannot be overridden by system environment variables.

For example, to set the FLUSH_INTERVAL system environment variable to 2 and use it in your configuration:

export FLUSH_INTERVAL=2

In the configuration file, you can then access this value as follows:

service:
  flush: ${FLUSH_INTERVAL}
  log_level: info

pipeline:
  inputs:
    - name: random

  outputs:
    - name: stdout
      match: '*'
      format: json_lines

This approach lets you manage and override configuration values using environment variables, providing flexibility in various deployment environments.



Upgrade Notes

The following article covers the relevant compatibility changes for users upgrading from previous Fluent Bit versions.

For more details about changes on each release, refer to the Official Release Notes.

Release notes will be prepared in advance of a Git tag for a release. An official release should provide both a tag and a release note together to allow users to verify and understand the release contents.

The tag drives the binary release process. Release binaries (containers and packages) appear after a tag and its associated release note. This lets users anticipate the new release binary and allow, deny, or update it as appropriate in their infrastructure.

Fluent Bit v1.9.9

The td-agent-bit package is no longer provided after this release. Users should switch to the fluent-bit package.

Fluent Bit v1.6

If you are migrating from a previous version of Fluent Bit, review the following important changes:

Tail Input Plugin

By default, the tail input plugin follows a file from the end after the service starts, instead of reading it from the beginning. Every file found when the plugin starts is followed from its last position. New files discovered at runtime or when files rotate are read from the beginning.

To keep the old behavior, set the option read_from_head to true.
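For example, in classic mode, with the path as an assumed example:

[INPUT]
    Name           tail
    Path           /var/log/app.log
    # Read existing files from the beginning, as in pre-1.6 versions.
    Read_from_Head true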

Stackdriver Output Plugin

The project_id of the resource in LogEntry sent to Google Cloud Logging is set to the project ID rather than the project number. To learn the difference between a project ID and project number, see Creating and managing projects.

If you have existing queries based on the resource's project_id, update your query accordingly.

Fluent Bit v1.5

The migration from v1.4 to v1.5 is straightforward.

  • The keepalive configuration mode has been renamed to net.keepalive. Now, all Network I/O keepalive is enabled by default. To learn more about this and other associated configuration properties, read the Networking Administration section.

  • If you use the Elasticsearch output plugin, the default value of type changed from flb_type to _doc. Many versions of Elasticsearch tolerate this, but Elasticsearch v5.6 through v6.1 require a type without a leading underscore. See the Elasticsearch output plugin documentation FAQ entry for more.

Fluent Bit v1.4

If you are migrating from Fluent Bit v1.3, there are no breaking changes.

Fluent Bit v1.3

If you are migrating from Fluent Bit v1.2 to v1.3, there are no breaking changes. If you are upgrading from an older version, review the following incremental changes:

Fluent Bit v1.2

Docker, JSON, Parsers and Decoders

Fluent Bit v1.2 fixed many issues associated with JSON encoding and decoding.

For example, when parsing Docker logs, it's no longer necessary to use decoders. The new Docker parser looks like this:

[PARSER]
    Name        docker
    Format      json
    Time_Key    time
    Time_Format %Y-%m-%dT%H:%M:%S.%L
    Time_Keep   On

Kubernetes Filter

Fluent Bit made improvements to the Kubernetes Filter handling of stringified log messages. If the Merge_Log option is enabled, the filter tries to handle the log content as a JSON map; if it is one, it adds the keys to the root map.

In addition, fixes and improvements were made to the Merge_Log_Key option. If a merge log succeeds, all new keys will be packaged under the key specified by this option. A suggested configuration is as follows:

[FILTER]
    Name            kubernetes
    Match           kube.*
    Kube_Tag_Prefix kube.var.log.containers.
    Merge_Log       On
    Merge_Log_Key   log_processed

As an example, if the original log content is the following map:

javascript {"key1": "val1", "key2": "val2"}

the final record will be composed as follows:

javascript { "log": "{\"key1\": \"val1\", \"key2\": \"val2\"}", "log_processed": { "key1": "val1", "key2": "val2" } }

Fluent Bit v1.1

If you are upgrading from Fluent Bit 1.0.x or earlier, review the following relevant changes when switching to Fluent Bit v1.1 or later series:

Kubernetes filter

Fluent Bit introduced a new configuration property called Kube_Tag_Prefix to help Tag prefix resolution and address an unexpected behavior in previous versions.

During the 1.0.x release cycle, a commit in the Tail input plugin changed the default behavior of how the Tag was composed when using the wildcard for expansion, which broke compatibility with other services. Consider the following configuration example:

[INPUT]
    Name tail
    Path /var/log/containers/*.log
    Tag  kube.*

The expected behavior is that Tag will be expanded to:

kube.var.log.containers.apache.log

The change introduced in the 1.0 series switched from absolute path to the base filename only:

kube.apache.log

The Fluent Bit v1.1 release restored the default behavior: the Tag is once again composed using the absolute path of the monitored file.

Having the absolute path in the Tag is relevant for routing and flexible configuration, and it also helps to keep compatibility with Fluentd behavior.

This behavior switch in the Tail input plugin affects how the Kubernetes Filter operates. When the filter is used with Tail as a source, it performs a local metadata lookup based on the file names. With the new Kube_Tag_Prefix option you can specify the prefix used in the Tail input plugin. For the previous configuration example, the new configuration looks like:


[FILTER]
    Name            kubernetes
    Match           *
    Kube_Tag_Prefix kube.var.log.containers.

The proper value for Kube_Tag_Prefix must be composed of the Tag prefix set in the Tail input plugin, plus the monitored directory converted by replacing slashes with dots.
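
For example, pairing the Tail configuration shown earlier with the filter (a sketch; the path mirrors the earlier example):

[INPUT]
    Name            tail
    Path            /var/log/containers/*.log
    Tag             kube.*

[FILTER]
    Name            kubernetes
    Match           kube.*
    Kube_Tag_Prefix kube.var.log.containers.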

Record Accessor

A full feature set to access content of your records.

Fluent Bit works internally with structured records, which can be composed of an unlimited number of keys and values. A value can be a number, a string, an array, or a map.

Having a way to select a specific part of the record is critical for certain core functionalities and plugins. This feature is called Record Accessor.

Consider record accessor to be a basic grammar to specify record content and other miscellaneous values.

Format

A record accessor rule starts with the character $. Consider the following structured content as an example:

{
  "log": "some message",
  "stream": "stdout",
  "labels": {
     "color": "blue",
     "unset": null,
     "project": {
         "env": "production"
      }
  }
}

The following table describes some accessing rules and the expected returned value:

Format                       Accessed Value

$log                         some message
$labels['color']             blue
$labels['project']['env']    production
$labels['unset']             null
$labels['undefined']         (no value; see note below)

If the accessor key doesn't exist in the record, as in the last example $labels['undefined'], the operation is omitted and no exception occurs.

Usage

The feature is enabled on a per-plugin basis, and not all plugins support it. As an example, consider a configuration that uses the grep filter to match only records whose labels have the color blue:

[SERVICE]
    flush        1
    log_level    info
    parsers_file parsers.conf

[INPUT]
    name      tail
    path      test.log
    parser    json

[FILTER]
    name      grep
    match     *
    regex     $labels['color'] ^blue$

[OUTPUT]
    name      stdout
    match     *
    format    json_lines

The file content to process in test.log is the following:

{"log": "message 1", "labels": {"color": "blue"}}
{"log": "message 2", "labels": {"color": "red"}}
{"log": "message 3", "labels": {"color": "green"}}
{"log": "message 4", "labels": {"color": "blue"}}

When running Fluent Bit with the previous configuration, the output is:

$ bin/fluent-bit -c fluent-bit.conf
Fluent Bit v1.x.x
* Copyright (C) 2019-2020 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2020/09/11 16:11:07] [ info] [engine] started (pid=1094177)
[2020/09/11 16:11:07] [ info] [storage] version=1.0.5, initializing...
[2020/09/11 16:11:07] [ info] [storage] in-memory
[2020/09/11 16:11:07] [ info] [storage] normal synchronization mode, checksum disabled, max_chunks_up=128
[2020/09/11 16:11:07] [ info] [sp] stream processor started
[2020/09/11 16:11:07] [ info] inotify_fs_add(): inode=55716713 watch_fd=1 name=test.log
{"date":1599862267.483684,"log":"message 1","labels":{"color":"blue"}}
{"date":1599862267.483692,"log":"message 4","labels":{"color":"blue"}}

Limitations of record_accessor templating

The Fluent Bit record_accessor library has a limitation in the characters that can separate template variables. Only dots and commas (. and ,) can come after a template variable. This is because the templating library must parse the template and determine the end of a variable.

The following templates are invalid because the template variables aren't separated by commas or dots:

  • $TaskID-$ECSContainerName

  • $TaskID/$ECSContainerName

  • $TaskID_$ECSContainerName

  • $TaskIDfooo$ECSContainerName

However, the following are valid:

  • $TaskID.$ECSContainerName

  • $TaskID.ecs_resource.$ECSContainerName

  • $TaskID.fooo.$ECSContainerName

And the following are valid since they only contain one template variable with nothing after it:

  • fooo$TaskID

  • fooo____$TaskID

  • fooo/bar$TaskID

Ebpf

This plugin is experimental and might be unstable. Use it in development or testing environments only. Its features and behavior are subject to change.

The in_ebpf input plugin uses eBPF (extended Berkeley Packet Filter) to capture low-level system events. This plugin lets Fluent Bit monitor kernel-level activities such as process executions, file accesses, memory allocations, network connections, and signal handling. It provides valuable insights into system behavior for debugging, monitoring, and security analysis.

The in_ebpf plugin leverages eBPF to trace kernel events in real-time. By specifying trace points, users can collect targeted system-level metrics and events, giving visibility into operating system interactions and performance characteristics.

System dependencies

To enable in_ebpf, ensure the following dependencies are installed on your system:

  • Kernel version: 4.18 or greater, with eBPF support enabled.

  • Required packages:

    • bpftool: Used to manage and debug eBPF programs.

    • libbpf-dev: Provides the libbpf library for loading and interacting with eBPF programs.

    • CMake 3.13 or higher: Required for building the plugin.

Installing dependencies on Ubuntu

sudo apt update
sudo apt install libbpf-dev linux-tools-common cmake

Building Fluent Bit with in_ebpf

To enable the in_ebpf plugin, follow these steps to build Fluent Bit from source:

  1. Clone the Fluent Bit repository:

    git clone https://github.com/fluent/fluent-bit.git
    cd fluent-bit
  2. Configure the build with in_ebpf:

    Create a build directory and run cmake with the -DFLB_IN_EBPF=On flag to enable the in_ebpf plugin:

    mkdir build
    cd build
    cmake .. -DFLB_IN_EBPF=On
  3. Compile the source:

    make
  4. Run Fluent Bit:

    Run Fluent Bit with elevated permissions (for example, sudo). Loading eBPF programs requires root access or appropriate privileges.

    # For YAML configuration.
    sudo ./bin/fluent-bit --config fluent-bit.yaml
    
    # For classic configuration.
    sudo ./bin/fluent-bit --config fluent-bit.conf

Configuration example

Here's a basic example of how to configure the plugin:

pipeline:
    inputs:
      - name: ebpf
        trace: 
          - trace_signal
          - trace_malloc
          - trace_bind
[INPUT]
    Name          ebpf
    Trace         trace_signal
    Trace         trace_malloc
    Trace         trace_bind

The configuration enables tracing for:

  • Signal handling events (trace_signal)

  • Memory allocation events (trace_malloc)

  • Network bind operations (trace_bind)

You can enable multiple traces by adding multiple Trace directives in your configuration. The full list of existing traces can be seen here: Fluent Bit eBPF Traces.

Upstream Servers

Fluent Bit output plugins aim to connect to external services to deliver logs over the network. Being able to connect to one node (host) is normal and enough for most use cases, but there are other scenarios where balancing across different nodes is required. The Upstream feature provides this capability.

An Upstream defines a set of nodes that will be targeted by an output plugin. By the nature of the implementation, an output plugin must support the Upstream feature. The following plugin has Upstream support:

  • Forward

The current balancing mode implemented is round-robin.

Configuration

To define an Upstream you must create a specific configuration file that contains an UPSTREAM and one or multiple NODE sections. The following table describes the properties associated with each section. All properties are mandatory:

Section     Key    Description

UPSTREAM    name   Defines a name for the Upstream in question.
NODE        name   Defines a name for the Node in question.
NODE        host   IP address or hostname of the target host.
NODE        port   TCP port of the target service.

Nodes and specific plugin configuration

A Node might contain additional configuration keys required by the plugin, providing enough flexibility for the output plugin. A common use case is a Forward output where, if TLS is enabled, a shared key is required.

Nodes and TLS (Transport Layer Security)

In addition to the properties defined in the configuration table, the network operations against a defined node can optionally be done through the use of TLS for further encryption and certificates use.

The TLS options available are described in the TLS/SSL section and can be added to any Node section.

Configuration file example

The following example defines an Upstream called forward-balancing, which aims to be used by the Forward output plugin. It registers three nodes:

  • node-1: connects to 127.0.0.1:43000

  • node-2: connects to 127.0.0.1:44000

  • node-3: connects to 127.0.0.1:45000 using TLS without verification. It also defines a specific configuration option required by Forward output called shared_key.

[UPSTREAM]
    name       forward-balancing

[NODE]
    name       node-1
    host       127.0.0.1
    port       43000

[NODE]
    name       node-2
    host       127.0.0.1
    port       44000

[NODE]
    name       node-3
    host       127.0.0.1
    port       45000
    tls        on
    tls.verify off
    shared_key secret

Every Upstream definition must exist in its own configuration file in the file system. Adding multiple Upstream definitions in the same file isn't allowed.

YAML Configuration

Before You Get Started

Fluent Bit traditionally offered a classic configuration mode, a custom configuration format that we are gradually phasing out. While classic mode has served well for many years, it has several limitations. Its basic design only supports grouping sections with key-value pairs and lacks the ability to handle sub-sections or complex data structures like lists.

YAML, now a mainstream configuration format, has become essential in a cloud ecosystem where everything is configured this way. To minimize friction and provide a more intuitive experience for creating data pipelines, we strongly encourage users to transition to YAML. The YAML format enables features, such as processors, that are not possible to configure in classic mode.

As of Fluent Bit v3.2, you can configure everything in YAML.

List of Available Sections

Configuring Fluent Bit with YAML introduces the following root-level sections:

Section Name
Description

service

Describes the global configuration for the Fluent Bit service. This section is optional; if not set, default values will apply. Only one service section can be defined.

parsers

Lists parsers to be used by components like inputs, processors, filters, or output plugins. You can define multiple parsers sections, which can also be loaded from external files included in the main YAML configuration.

multiline_parsers

Lists multiline parsers, functioning similarly to parsers. Multiple definitions can exist either in the root or in included files.

pipeline

Defines a pipeline composed of inputs, processors, filters, and output plugins. You can define multiple pipeline sections, but they will not operate independently. Instead, all components will be merged into a single pipeline internally.

plugins

Specifies the path to external plugins (.so files) to be loaded by Fluent Bit at runtime.

upstream_servers

Refers to a group of node endpoints that can be referenced by output plugins that support this feature.

env

Sets a list of environment variables for Fluent Bit. Note that system environment variables are available, while the ones defined in the configuration apply only to Fluent Bit.

Section Documentation

To access detailed configuration guides for each section, use the following links:

  • Service Section documentation

    • Overview of global settings, configuration options, and examples.

  • Parsers Section documentation

    • Detailed guide on defining parsers and supported formats.

  • Multiline Parsers Section documentation

    • Explanation of multiline parsing configuration.

  • Pipeline Section documentation

    • Details on setting up pipelines and using processors.

  • Plugins Section documentation

    • How to load external plugins.

  • Upstream Servers Section documentation

    • Guide on setting up and using upstream nodes with supported plugins.

  • Environment Variables Section documentation

    • Information on setting environment variables and their scope within Fluent Bit.

  • Includes Section documentation

    • Description on how to include external YAML files.

HTTP Proxy

Enable traffic through a proxy server using the HTTP_PROXY environment variable.

Fluent Bit supports configuring an HTTP proxy for all egress HTTP/HTTPS traffic using the HTTP_PROXY or http_proxy environment variable.

The format for the HTTP proxy environment variable is http://USER:PASS@HOST:PORT, where:

  • USER is the username when using basic authentication.

  • PASS is the password when using basic authentication.

  • HOST is the HTTP proxy hostname or IP address.

  • PORT is the port the HTTP proxy is listening on.

To use an HTTP proxy with basic authentication, provide the username and password:

HTTP_PROXY='http://example_user:[email protected]:8080'

When no authentication is required, omit the username and password:

HTTP_PROXY='http://proxy.example.com:8080'

The HTTP_PROXY environment variable is a standard way of setting an HTTP proxy in a containerized environment, and it's also natively supported by any application written in Go. Fluent Bit implements the same convention. The http_proxy environment variable is also supported. When both the HTTP_PROXY and http_proxy environment variables are provided, HTTP_PROXY will be preferred.

The HTTP output plugin also supports configuring an HTTP proxy. This configuration works, but shouldn't be used together with the HTTP_PROXY or http_proxy environment variable. The environment variable-based proxy configuration is implemented by creating a TCP connection tunnel using HTTP CONNECT. Unlike the plugin's implementation, this supports both HTTP and HTTPS egress traffic.

NO_PROXY

Use the NO_PROXY environment variable when traffic shouldn't flow through the HTTP proxy. The no_proxy environment variable is also supported. When both NO_PROXY and no_proxy environment variables are provided, NO_PROXY takes precedence.

The format for the no_proxy environment variable is a comma-separated list of host names or IP addresses.

A domain name matches itself and all of its subdomains (for example, example.com matches both example.com and test.example.com):

NO_PROXY='foo.com,127.0.0.1,localhost'

A domain with a leading dot (.) matches only its subdomains (for example, .example.com matches test.example.com but not example.com):

NO_PROXY='.example.com,127.0.0.1,localhost'

As an example, you might use NO_PROXY when running Fluent Bit in a Kubernetes environment, where you want:

  • All real egress traffic to flow through an HTTP proxy.

  • All local Kubernetes traffic to not flow through the HTTP proxy.

In this case, set:

NO_PROXY='127.0.0.1,localhost,kubernetes.default.svc'
Variables

Fluent Bit supports the usage of environment variables in the configuration. A variable is referenced in the following format:

${MY_VARIABLE}

The following configuration uses a variable to set the output plugin:
[SERVICE]
    Flush        1
    Daemon       Off
    Log_Level    info

[INPUT]
    Name cpu
    Tag  cpu.local

[OUTPUT]
    Name  ${MY_OUTPUT}
    Match *
export MY_OUTPUT=stdout
$ bin/fluent-bit -c fluent-bit.conf
Fluent Bit v1.4.0
* Copyright (C) 2019-2020 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2020/03/03 12:25:25] [ info] [engine] started
[0] cpu.local: [1491243925, {"cpu_p"=>1.750000, "user_p"=>1.750000, "system_p"=>0.000000, "cpu0.p_cpu"=>3.000000, "cpu0.p_user"=>2.000000, "cpu0.p_system"=>1.000000, "cpu1.p_cpu"=>0.000000, "cpu1.p_user"=>0.000000, "cpu1.p_system"=>0.000000, "cpu2.p_cpu"=>4.000000, "cpu2.p_user"=>4.000000, "cpu2.p_system"=>0.000000, "cpu3.p_cpu"=>1.000000, "cpu3.p_user"=>1.000000, "cpu3.p_system"=>0.000000}]

Docker Events

The Docker Events input plugin uses the Docker API to capture server events. The plugin supports the following configuration parameters:

Unix_Path

The docker socket Unix path.

/var/run/docker.sock

Buffer_Size

The size of the buffer used to read docker events in bytes.

8192

Parser

Specify the name of a parser to interpret the entry as a structured message.

none

Key

When a message is unstructured (no parser applied), it's appended as a string under the key name message.

message

Reconnect.Retry_limits

The maximum number of retries allowed. The plugin tries to reconnect with docker socket when EOF is detected.

5

Reconnect.Retry_interval

The retry interval in seconds.

1

Threaded

Indicates whether to run this input in its own thread.

false

fluent-bit -i docker_events -o stdout
[INPUT]
    Name   docker_events

[OUTPUT]
    Name   stdout
    Match  *
docker-compose.yaml
version: "3.7"

services:
  fluent-bit:
    image: fluent/fluent-bit
    volumes:
      - ./fluent-bit.conf:/fluent-bit/etc/fluent-bit.conf
    depends_on:
      - elasticsearch
  elasticsearch:
    image: elasticsearch:7.6.2
    ports:
      - "9200:9200"
    environment:
      - discovery.type=single-node
curl "localhost:9200/_search?pretty" \
  -H 'Content-Type: application/json' \
  -d'{ "query": { "match_all": {} }}'
curl -X DELETE "localhost:9200/fluent-bit?pretty"
[INPUT]
  Name dummy
  Dummy {"top": {".dotted": "value"}}

[OUTPUT]
  Name es
  Host elasticsearch
  Replace_Dots On
curl -X POST -d '{}' localhost:2020/api/v2/reload
{"hot_reload_count":3}
[SERVICE]
    HTTP_Server  On
    HTTP_Listen  0.0.0.0
    HTTP_PORT    2020
    Hot_Reload   On
...

Router

Create flexible routing rules

Routing is a core feature that lets you route your data through filters and then to one or multiple destinations. The router relies on the concept of Tags and Matching rules.

There are two important concepts in Routing:

  • Tag

  • Match

When data is generated by an input plugin, it comes with a Tag. A Tag is a human-readable indicator that helps to identify the data source. Tags are usually configured manually.

To define where to route data, specify a Match rule in the output configuration.

Consider the following configuration example that delivers CPU metrics to an Elasticsearch database and Memory (mem) metrics to the standard output interface:

[INPUT]
    Name cpu
    Tag  my_cpu

[INPUT]
    Name mem
    Tag  my_mem

[OUTPUT]
    Name   es
    Match  my_cpu

[OUTPUT]
    Name   stdout
    Match  my_mem

Routing reads the Input Tag and the Output Match rules. If data has a Tag that doesn't match at routing time, the data is deleted.

Routing with Wildcard

Routing is flexible enough to support wildcards in the Match pattern. The following example defines a common destination for both sources of data:

pipeline:
    inputs:
        - name: cpu
          tag: my_cpu

        - name: mem
          tag: my_mem
  
    outputs:
        - name: stdout
          match: 'my_*'
[INPUT]
    Name cpu
    Tag  my_cpu

[INPUT]
    Name mem
    Tag  my_mem

[OUTPUT]
    Name   stdout
    Match  my_*

The match rule is set to my_*, which matches any Tag starting with my_.

Routing with Regex

Routing also provides support for regular expressions with the Match_Regex pattern, allowing for more complex and precise matching criteria. The following example demonstrates how to route data from sources based on a regular expression:

pipeline:
    inputs:
        - name: temperature_sensor
          tag: temp_sensor_A

        - name: humidity_sensor
          tag: humid_sensor_B
 
    outputs:
        - name: stdout
          match: '.*_sensor_[AB]'
[INPUT]
    Name temperature_sensor
    Tag  temp_sensor_A

[INPUT]
    Name humidity_sensor
    Tag  humid_sensor_B

[OUTPUT]
    Name         stdout
    Match_regex  .*_sensor_[AB]

In this configuration, the Match_regex rule is set to .*_sensor_[AB]. This regular expression matches any Tag that ends with _sensor_A or _sensor_B, regardless of what precedes it. This approach provides a more flexible and powerful way to handle different source tags with a single routing rule.

Getting Started with Fluent Bit

A guide on how to install, deploy, and upgrade Fluent Bit

Container deployment

Deployment Type
Instructions

Kubernetes

Docker

Containers on AWS

Install on Linux (packages)

Operating System
Installation instructions

CentOS / Red Hat


Ubuntu


Debian


Amazon Linux


Raspbian / Raspberry Pi


Yocto / Embedded Linux

Buildroot / Embedded Linux

Install on Windows (packages)

Operating System
Installation instructions

Windows Server 2019


Windows 10 2019.03


Install on macOS (packages)

Operating System
Installation instructions

macOS

Compile from Source (Linux, Windows, FreeBSD, macOS)

Operating system
Installation instructions

Linux, FreeBSD

macOS

Windows

Sandbox Environment

If you are interested in learning about Fluent Bit you can try out the sandbox environment.

Enterprise Packages

Fluent Bit packages are also provided by enterprise providers for older end-of-life versions, Unix systems, and additional support and features, including aspects like CVE backporting.

Supported Platforms

Fluent Bit supports the following operating systems and architectures:

Operating System    Distribution                     Architectures

Linux               Amazon Linux 2023                x86_64, Arm64v8
Linux               Amazon Linux 2                   x86_64, Arm64v8
Linux               CentOS 9 Stream                  x86_64, Arm64v8
Linux               CentOS 8                         x86_64, Arm64v8
Linux               CentOS 7                         x86_64, Arm64v8
Linux               Rocky Linux 8                    x86_64, Arm64v8
Linux               Rocky Linux 9                    x86_64, Arm64v8
Linux               Alma Linux 8                     x86_64, Arm64v8
Linux               Alma Linux 9                     x86_64, Arm64v8
Linux               Debian 12 (Bookworm)             x86_64, Arm64v8
Linux               Debian 11 (Bullseye)             x86_64, Arm64v8
Linux               Debian 10 (Buster)               x86_64, Arm64v8
Linux               Ubuntu 24.04 (Noble Numbat)      x86_64, Arm64v8
Linux               Ubuntu 22.04 (Jammy Jellyfish)   x86_64, Arm64v8
Linux               Raspbian 12 (Bookworm)           Arm32v7
macOS               *                                x86_64, Apple M1
Windows             Windows Server 2019              x86_64, x86
Windows             Windows 10 1903                  x86_64, x86

From an architecture support perspective, Fluent Bit is fully functional on x86_64, Arm64v8, and Arm32v7 based processors.

Fluent Bit can work also on macOS and Berkeley Software Distribution (BSD) systems, but not all plugins will be available on all platforms.

Official support is based on community demand. Fluent Bit might run on older operating systems, but must be built from source, or using custom packages from enterprise providers.

Fluent Bit is supported for Linux on IBM Z (s390x) environments with some restrictions, but only container images are provided for these targets officially.

Docker Log Based Metrics

The Docker input plugin lets you collect Docker container metrics, including memory usage and CPU consumption.

Configuration parameters

The plugin supports the following configuration parameters:

Key
Description
Default

Interval_Sec

Polling interval in seconds

1

Include

A space-separated list of containers to include.

none

Exclude

A space-separated list of containers to exclude.

none

Threaded

Indicates whether to run this input in its own .

false

path.containers

Used to specify the container directory if Docker is configured with a custom data-root directory.

/var/lib/docker/containers

If you set neither Include nor Exclude, the plugin will try to get metrics from all running containers.

Configuration file

The following example configuration collects metrics from two docker instances (6bab19c3a0f9 and 14159be4ca2c).

[INPUT]
    Name         docker
    Include      6bab19c3a0f9 14159be4ca2c

[OUTPUT]
    Name   stdout
    Match  *

This configuration will produce records like the following:

[1] docker.0: [1571994772.00555745, {"id"=>"6bab19c3a0f9", "name"=>"postgresql", "cpu_used"=>172102435, "mem_used"=>5693400, "mem_limit"=>4294963200}]

Build with Static Configuration

Fluent Bit in normal operation mode is configurable through text files or using specific arguments in the command line. Although this is the ideal deployment case, there are scenarios where a more restricted configuration is required. Static configuration mode restricts configuration ability.

Static configuration mode includes a built-in configuration in the final binary of Fluent Bit, disabling the usage of external files or flags at runtime.

Get started

Requirements

The following steps assume you are familiar with configuring Fluent Bit using text files and you have experience building it from scratch as described in Build and Install.

Configuration Directory

In your file system, prepare a specific directory that will be used as an entry point for the build system to lookup and parse the configuration files. This directory must contain a minimum of one configuration file called fluent-bit.conf containing the required SERVICE, INPUT, and OUTPUT sections.

As an example, create a new fluent-bit.yaml file or fluent-bit.conf file with the corresponding content below:

[SERVICE]
    Flush     1
    Daemon    off
    Log_Level info

[INPUT]
    Name      cpu

[OUTPUT]
    Name      stdout
    Match     *

This configuration calculates CPU metrics from the running system and prints them to the standard output interface.

Build with custom configuration

  1. Go to the Fluent Bit source code build directory:

    cd fluent-bit/build/
  2. Run CMake, appending the FLB_STATIC_CONF option pointing to the configuration directory recently created:

    cmake -DFLB_STATIC_CONF=/path/to/my/confdir/
  3. Build Fluent Bit:

    make

The generated fluent-bit binary is ready to run without additional configuration:

$ bin/fluent-bit
Fluent-Bit v0.15.0
Copyright (C) Treasure Data

[2018/10/19 15:32:31] [ info] [engine] started (pid=15186)
[0] cpu.local: [1539984752.000347547, {"cpu_p"=>0.750000, "user_p"=>0.500000, "system_p"=>0.250000, "cpu0.p_cpu"=>1.000000, "cpu0.p_user"=>1.000000, "cpu0.p_system"=>0.000000, "cpu1.p_cpu"=>0.000000, "cpu1.p_user"=>0.000000, "cpu1.p_system"=>0.000000, "cpu2.p_cpu"=>0.000000, "cpu2.p_user"=>0.000000, "cpu2.p_system"=>0.000000, "cpu3.p_cpu"=>1.000000, "cpu3.p_user"=>1.000000, "cpu3.p_system"=>0.000000}]

Collectd

The Collectd input plugin lets you receive datagrams from the collectd service.

Configuration parameters

The plugin supports the following configuration parameters:

Key
Description
Default

Listen

Set the address to listen to.

0.0.0.0

Port

Set the port to listen to.

25826

TypesDB

Set the data specification file.

/usr/share/collectd/types.db

Threaded

Indicates whether to run this input in its own .

false

Configuration examples

Here is a basic configuration example:

[INPUT]
    Name         collectd
    Listen       0.0.0.0
    Port         25826
    TypesDB      /usr/share/collectd/types.db,/etc/collectd/custom.db

[OUTPUT]
    Name   stdout
    Match  *

With this configuration, Fluent Bit listens to 0.0.0.0:25826, and outputs incoming datagram packets to stdout.

You must set the same types.db files that your collectd server uses. Otherwise, Fluent Bit might not be able to interpret the payload properly.

Key Concepts

Learn these key concepts to understand how Fluent Bit operates.

Before diving into Fluent Bit you might want to get acquainted with some of the key concepts of the service. This document provides an introduction to those concepts and common terminology. Reading this document will help you gain a more general understanding of the following topics:

  • Event or Record

  • Filtering

  • Tag

  • Timestamp

  • Match

  • Structured Message

Event or Record

Every incoming piece of data that belongs to a log or a metric that's retrieved by Fluent Bit is considered an Event or a Record.

As an example, consider the following content of a Syslog file:

Jan 18 12:52:16 flb systemd[2222]: Starting GNOME Terminal Server
Jan 18 12:52:16 flb dbus-daemon[2243]: [session uid=1000 pid=2243] Successfully activated service 'org.gnome.Terminal'
Jan 18 12:52:16 flb systemd[2222]: Started GNOME Terminal Server.
Jan 18 12:52:16 flb gsd-media-keys[2640]: # watch_fast: "/org/gnome/terminal/legacy/" (establishing: 0, active: 0)

It contains four lines that represent four independent Events.

An Event is comprised of:

  • timestamp

  • key/value metadata (v2.1.0 and greater)

  • payload

Event format

The Fluent Bit wire protocol represents an Event as a two-element array with a nested array as the first element:

[[TIMESTAMP, METADATA], MESSAGE]

where

  • TIMESTAMP is a timestamp in seconds as an integer or floating point value (not a string).

  • METADATA is an object containing event metadata, and might be empty.

  • MESSAGE is an object containing the event body.

Fluent Bit versions prior to v2.1.0 used:

[TIMESTAMP, MESSAGE]

to represent events. This format is still supported for reading input event streams.

Filtering

You might need to perform modifications on an Event's content. The process to alter, append to, or drop Events is called filtering.

Use filtering to:

  • Append specific information to the Event like an IP address or metadata.

  • Select a specific piece of the Event content.

  • Drop Events that match a certain pattern.

Tag

Every Event ingested by Fluent Bit is assigned a Tag. This tag is an internal string used in a later stage by the Router to decide which Filter or Output phase it must go through.

Most tags are assigned manually in the configuration. If a tag isn't specified, Fluent Bit assigns the name of the plugin instance where that Event was generated from.

The Forward input plugin doesn't assign tags. This plugin speaks the Fluentd wire protocol called Forward, where every Event already comes with a Tag associated. Fluent Bit will always use the incoming Tag set by the client.

A tagged record must always have a Matching rule. To learn more about Tags and Matches, see Routing.

Timestamp

The timestamp represents the time an Event was created. Every Event contains an associated timestamp, set by the input plugin or discovered through a data parsing process.

The timestamp is a numeric fractional integer in the format:

SECONDS.NANOSECONDS

where:

  • SECONDS is the number of seconds that have elapsed since the Unix epoch.

  • NANOSECONDS is a fractional second or one thousand-millionth of a second.

Match

Fluent Bit lets you route your collected and processed Events to one or multiple destinations. A Match represents a rule to select Events where a Tag matches a defined rule.

To learn more about Tags and Matches, see Routing.

Structured messages

Source events can have a structure. A structure defines a set of keys and values inside the Event message to implement faster operations on data modifications. Fluent Bit treats every Event message as a structured message.

Consider the following two messages:

  • No structured message

    "Project Fluent Bit created on 1398289291"

  • With a structured message

    {"project": "Fluent Bit", "created": 1398289291}

For performance reasons, Fluent Bit uses a binary serialization data format called MessagePack.

Debian

Fluent Bit is distributed as the fluent-bit package and is available for the latest stable Debian system.

The following architectures are supported:

  • x86_64

  • aarch64

  • arm64v8

Single line install

Fluent Bit provides an installation script to use for most Linux targets. This will always install the most recently released version:

curl https://raw.githubusercontent.com/fluent/fluent-bit/master/install.sh | sh

This is a convenience helper and should always be validated prior to use. The recommended secure deployment approach is to use the following instructions:

Server GPG key

The first step is to add the Fluent Bit server GPG key to your keyring to ensure you can get the correct signed packages:

curl https://packages.fluentbit.io/fluentbit.key | gpg --dearmor > /usr/share/keyrings/fluentbit-keyring.gpg

Follow the official Debian wiki guidance.

Updated key from March 2022

For the 1.9.0 and 1.8.15 and later releases, the GPG key has been updated. Ensure this new one is added.

The GPG Key fingerprint of the new key is:

C3C0 A285 34B9 293E AF51  FABD 9F9D DC08 3888 C1CD
Fluentbit releases (Releases signing key) <[email protected]>

The previous key is still available and might be required to install previous versions.

The GPG Key fingerprint of the old key is:

F209 D876 2A60 CD49 E680 633B 4FF8 368B 6EA0 722A

Refer to the supported platform documentation to see which platforms are supported in each release.

Update your sources lists

For Debian, you must add the Fluent Bit APT server entry to your sources lists. Add the following content at the bottom of your /etc/apt/sources.list file:

deb [signed-by=/usr/share/keyrings/fluentbit-keyring.gpg] https://packages.fluentbit.io/debian/${CODENAME} ${CODENAME} main

Replace CODENAME with your specific Debian release name (for example: bookworm for Debian 12).

Update your repositories database

Update your system's apt database:

sudo apt-get update

Fluent Bit recommends upgrading your system (sudo apt-get upgrade). This could avoid potential issues with expired certificates.

Install Fluent Bit

  1. Use the following apt-get command to install the latest Fluent Bit:

    sudo apt-get install fluent-bit

  2. Instruct systemd to enable the service:

    sudo systemctl start fluent-bit

If you do a status check, you should see output similar to:

sudo service fluent-bit status
● fluent-bit.service - Fluent Bit
   Loaded: loaded (/lib/systemd/system/fluent-bit.service; disabled; vendor preset: enabled)
   Active: active (running) since mié 2016-07-06 16:58:25 CST; 2h 45min ago
 Main PID: 6739 (fluent-bit)
    Tasks: 1
   Memory: 656.0K
      CPU: 1.393s
   CGroup: /system.slice/fluent-bit.service
           └─6739 /opt/fluent-bit/bin/fluent-bit -c /etc/fluent-bit/fluent-bit.conf
...

The default Fluent Bit configuration collects metrics of CPU usage and sends the records to the standard output. You can see the outgoing data in your /var/log/messages file.

macOS

Fluent Bit is compatible with the latest Apple macOS software for x86_64 and Apple Silicon architectures.

Installation packages

Installation packages can be found here.

Requirements

You must have Homebrew installed in your system. If it isn't present, install it with the following command:

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

Installing from Homebrew

The Fluent Bit package on Homebrew isn't officially supported, but should work for basic use cases and testing. It can be installed using:

brew install fluent-bit

Compile from source

Install build dependencies

Run the following brew command in your terminal to retrieve the dependencies:

brew install git cmake openssl bison

Download and build the source

  1. Download a copy of the Fluent Bit source code (upstream):

    git clone https://github.com/fluent/fluent-bit
    cd fluent-bit

    If you want to use a specific version, check out the proper tag. For example, to use v1.8.13, use the command:

    git checkout v1.8.13

  2. To prepare the build system, you must expose certain environment variables so Fluent Bit CMake build rules can pick the right libraries:

    export OPENSSL_ROOT_DIR=`brew --prefix openssl`
    export PATH=`brew --prefix bison`/bin:$PATH

  3. Change to the build/ directory inside the Fluent Bit sources:

    cd build/

  4. Build Fluent Bit. This example indicates to the build system the location where the final binaries and config files should be installed:

    cmake -DFLB_DEV=on -DCMAKE_INSTALL_PREFIX=/opt/fluent-bit ../
    make -j 16

  5. Install Fluent Bit to the previously specified directory. Writing to this directory requires root privileges:

    sudo make install

The binaries and configuration examples can be located at /opt/fluent-bit/.

Create macOS installer from source

  1. Clone the Fluent Bit source code (upstream):

    git clone https://github.com/fluent/fluent-bit
    cd fluent-bit

    If you want to use a specific version, check out the proper tag. For example, to use v1.9.2 do:

    git checkout v1.9.2

  2. To prepare the build system, you must expose certain environment variables so Fluent Bit CMake build rules can pick the right libraries:

    export OPENSSL_ROOT_DIR=`brew --prefix openssl`
    export PATH=`brew --prefix bison`/bin:$PATH

  3. Create the specific macOS SDK target. For example, to specify the macOS Big Sur (11.3) SDK environment:

    export MACOSX_DEPLOYMENT_TARGET=11.3

  4. Change to the build/ directory inside the Fluent Bit sources:

    cd build/

  5. Build the Fluent Bit macOS installer:

    cmake -DCPACK_GENERATOR=productbuild -DCMAKE_INSTALL_PREFIX=/opt/fluent-bit ../
    make -j 16
    cpack -G productbuild

The macOS installer will be generated as:

CPack: Create package using productbuild
CPack: Install projects
CPack: - Run preinstall target for: fluent-bit
CPack: - Install project: fluent-bit []
CPack: -   Install component: binary
CPack: -   Install component: library
CPack: -   Install component: headers
CPack: -   Install component: headers-extra
CPack: Create package
CPack: -   Building component package: /Users/fluent-bit-builder/GitHub/fluent-bit/build/_CPack_Packages/Darwin/productbuild//Users/fluent-bit-builder/GitHub/fluent-bit/build/fluent-bit-1.9.2-apple/Contents/Packages/fluent-bit-1.9.2-apple-binary.pkg
CPack: -   Building component package: /Users/fluent-bit-builder/GitHub/fluent-bit/build/_CPack_Packages/Darwin/productbuild//Users/fluent-bit-builder/GitHub/fluent-bit/build/fluent-bit-1.9.2-apple/Contents/Packages/fluent-bit-1.9.2-apple-headers.pkg
CPack: -   Building component package: /Users/fluent-bit-builder/GitHub/fluent-bit/build/_CPack_Packages/Darwin/productbuild//Users/fluent-bit-builder/GitHub/fluent-bit/build/fluent-bit-1.9.2-apple/Contents/Packages/fluent-bit-1.9.2-apple-headers-extra.pkg
CPack: -   Building component package: /Users/fluent-bit-builder/GitHub/fluent-bit/build/_CPack_Packages/Darwin/productbuild//Users/fluent-bit-builder/GitHub/fluent-bit/build/fluent-bit-1.9.2-apple/Contents/Packages/fluent-bit-1.9.2-apple-library.pkg
CPack: - package: /Users/fluent-bit-builder/GitHub/fluent-bit/build/fluent-bit-1.9.2-apple.pkg generated.

Finally, the fluent-bit-<fluent-bit version>-(intel or apple).pkg will be generated.

The created installer will put binaries at /opt/fluent-bit/.

Running Fluent Bit

To make the access path to the Fluent Bit binary easier, extend the PATH variable:

export PATH=/opt/fluent-bit/bin:$PATH

To test, try Fluent Bit by generating a test message using the Dummy input plugin, which prints to the standard output interface every one second:

fluent-bit -i dummy -o stdout -f 1

You will see an output similar to this:

Fluent Bit v1.9.0
* Copyright (C) 2015-2021 The Fluent Bit Authors
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2022/02/08 17:13:52] [ info] [engine] started (pid=14160)
[2022/02/08 17:13:52] [ info] [storage] version=1.1.6, initializing...
[2022/02/08 17:13:52] [ info] [storage] in-memory
[2022/02/08 17:13:52] [ info] [storage] normal synchronization mode, checksum disabled, max_chunks_up=128
[2022/02/08 17:13:52] [ info] [cmetrics] version=0.2.2
[2022/02/08 17:13:52] [ info] [sp] stream processor started
[0] dummy.0: [1644362033.676766000, {"message"=>"dummy"}]
[0] dummy.0: [1644362034.676914000, {"message"=>"dummy"}]

To halt the process, press ctrl-c in the terminal.

Fluent Bit Metrics

A plugin to collect Fluent Bit metrics

Fluent Bit exposes its own metrics to let you monitor the internals of your pipeline. The collected metrics can be processed similarly to those from the Node Exporter Metrics input plugin. They can be sent to output plugins such as Prometheus Exporter, Prometheus Remote Write, or OpenTelemetry.

Metrics collected with Node Exporter Metrics flow through a separate pipeline from logs and current filters don't operate on top of metrics.

Configuration

Key
Description
Default

scrape_interval

The rate in seconds at which internal metrics are collected.

2

scrape_on_start

Scrape metrics upon start, to avoid waiting for scrape_interval for the first round of metrics.

false

threaded

Indicates whether to run this input in its own thread.

false

Get started

Configuration file

In the following configuration file, the input plugin fluentbit_metrics collects metrics every 2 seconds and exposes them through the Prometheus Exporter output plugin on HTTP/TCP port 2021.

You can test the exposed metrics by using curl:
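
A minimal sketch of such a pipeline, assuming the fluentbit_metrics input and prometheus_exporter output plugin names:

[SERVICE]
    Flush           1
    Log_Level       info

[INPUT]
    Name            fluentbit_metrics
    Tag             internal_metrics
    scrape_interval 2

[OUTPUT]
    Name            prometheus_exporter
    Match           internal_metrics
    Host            0.0.0.0
    Port            2021

Then query the exporter:

curl -s http://127.0.0.1:2021/metrics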

Scheduling and Retries

Fluent Bit has an engine that helps to coordinate the data ingestion from input plugins. The engine calls the scheduler to decide when it's time to flush the data through one or multiple output plugins. The scheduler flushes new data at a fixed number of seconds, and retries when asked.

When an output plugin gets called to flush some data, after processing that data it can notify the engine using these possible return statuses:

  • OK: Data successfully processed and flushed.

  • Retry: If a retry is requested, the engine asks the scheduler to retry flushing that data. The scheduler decides how many seconds to wait before retry.

  • Error: An unrecoverable error occurred and the engine shouldn't try to flush that data again.

Configure wait time for retry

The scheduler provides two configuration options, called scheduler.cap and scheduler.base, which can be set in the Service section. These determine the waiting time before a retry happens.

Key
Description
Default

scheduler.base

Sets the base of the exponential backoff.

5

scheduler.cap

Sets the maximum retry time in seconds.

2000

The scheduler.base determines the lower bound of time and the scheduler.cap determines the upper bound for each retry.

Fluent Bit uses an exponential backoff and jitter algorithm to determine the waiting time before a retry. The waiting time is a random number between a configurable upper and lower bound. For a detailed explanation of the exponential backoff and jitter algorithm, see.

For the Nth retry, the lower bound of the random number will be:

base

The upper bound will be:

min(base * 2^N, cap)

For example:

When base is set to 3 and cap is set to 30:

First retry: The lower bound will be 3. The upper bound will be 3 * 2 = 6. The waiting time will be a random number between (3, 6).

Second retry: The lower bound will be 3. The upper bound will be 3 * (2 * 2) = 12. The waiting time will be a random number between (3, 12).

Third retry: The lower bound will be 3. The upper bound will be 3 * (2 * 2 * 2) = 24. The waiting time will be a random number between (3, 24).

Fourth retry: The lower bound will be 3, because 3 * (2 * 2 * 2 * 2) = 48 > 30. The upper bound will be 30. The waiting time will be a random number between (3, 30).

Wait time example

The following example configures the scheduler.base as 3 seconds and scheduler.cap as 30 seconds.
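
A sketch of such a configuration, assuming these keys are set in the main SERVICE section:

[SERVICE]
    Flush          5
    Log_Level      info
    scheduler.base 3
    scheduler.cap  30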

The waiting time will be:

Nth retry    Waiting time range (seconds)

1            3-6
2            3-12
3            3-24
4            3-30

Configure retries

The scheduler provides a configuration option called Retry_Limit, which can be set independently for each output section. This option lets you disable retries or impose a limit to try N times and then discard the data after reaching that limit:

Value
Description

N

Integer value to set the maximum number of retries allowed; N must be greater than or equal to 1. After N retries are reached, the data is discarded.

no_limits or False

Allow unlimited retries.

no_retries

Disable retries.

Retry example

The following example configures two outputs, where the HTTP plugin has an unlimited number of retries, and the Elasticsearch plugin has a limit of 5 retries:
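
A sketch of such a configuration (hosts and ports are placeholders):

[OUTPUT]
    Name        http
    Match       *
    Host        192.168.5.6
    Port        8080
    Retry_Limit False

[OUTPUT]
    Name        es
    Match       *
    Host        192.168.5.20
    Port        9200
    Retry_Limit 5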

Exec Wasi

The Exec Wasi input plugin lets you execute Wasm programs that are WASI targets, like external programs, and collect event logs from there.

Configuration parameters

The plugin supports the following configuration parameters:

Key
Description

WASI_Path

The location of a Wasm program file.

Parser

Specify the name of a parser to interpret the entry as a structured message.

Accessible_Paths

Specify the allowed list of paths to be able to access paths from WASM programs.

Interval_Sec

Polling interval (seconds).

Interval_NSec

Polling interval (nanoseconds).

Wasm_Heap_Size

Size of the heap of Wasm execution. See unit sizes for allowed values.

Wasm_Stack_Size

Size of the stack of Wasm execution. See unit sizes for allowed values.

Buf_Size

Size of the buffer. See unit sizes for allowed values.

Oneshot

Only run once at startup. This allows collection of data precedent to the Fluent Bit startup (Boolean, default: false).

Threaded

Indicates whether to run this input in its own thread. Default: false.

Configuration examples

Here is a configuration example.

in_exec_wasi can handle parsers. To retrieve structured data from a WASM program, you must create a parsers configuration:

parsers:
    - name: wasi
      format: json
      time_key: time
      time_format: '%Y-%m-%dT%H:%M:%S.%L %z'

[PARSER]
    Name        wasi
    Format      json
    Time_Key    time
    Time_Format %Y-%m-%dT%H:%M:%S.%L %z

The Time_Format should match the format of the timestamp you're using.

This example assumes the WASM program writes JSON style strings to stdout.

Then, you can specify the parsers file in the main Fluent Bit configuration:

service:
    flush: 1
    daemon: off
    parsers_file: parsers.yaml
    log_level: info
    http_server: off
    http_listen: 0.0.0.0
    http_port: 2020

pipeline:
    inputs:
      - name: exec_wasi
        tag: exec.wasi.local
        wasi_path: /path/to/wasi/program.wasm
        # Note: run from the 'wasi_path' location.
        accessible_paths: /path/to/accessible

    outputs:
        - name: stdout
          match: '*'

[SERVICE]
    Flush        1
    Daemon       Off
    Parsers_File parsers.conf
    Log_Level    info
    HTTP_Server  Off
    HTTP_Listen  0.0.0.0
    HTTP_Port    2020

[INPUT]
    Name exec_wasi
    Tag  exec.wasi.local
    WASI_Path /path/to/wasi/program.wasm
    Accessible_Paths .,/path/to/accessible
    Parser wasi

[OUTPUT]
    Name  stdout
    Match *

Elasticsearch

The Elasticsearch input plugin handles both Elasticsearch and OpenSearch Bulk API requests.

Configuration parameters

The plugin supports the following configuration parameters:

Key
Description
Default value

buffer_max_size

Set the maximum size of buffer.

4M

buffer_chunk_size

Set the buffer chunk size.

512K

tag_key

Specify a key name for extracting as a tag.

NULL

meta_key

Specify a key name for meta information.

"@meta"

hostname

Specify hostname or fully qualified domain name. This parameter can be used for "sniffing" (auto-discovery of) cluster node information.

"localhost"

version

Specify Elasticsearch server version. This parameter is effective for checking a version of Elasticsearch/OpenSearch server version.

"8.0.0"

threaded

Indicates whether to run this input in its own thread.

false

The Elasticsearch cluster uses "sniffing" to optimize the connections between the cluster and its clients: Elasticsearch can build its cluster and dynamically generate a connection list. The hostname will be used for sniffing information, and this is handled by the sniffing endpoint.

Get started

To start handling Bulk API requests, you can run the plugin from the command line or through the configuration file:

Command line

From the command line you can configure Fluent Bit to handle Bulk API requests with the following options:

fluent-bit -i elasticsearch -p port=9200 -o stdout

Configuration file

In your configuration file append the following:

pipeline:
    inputs:
        - name: elasticsearch
          listen: 0.0.0.0
          port: 9200

    outputs:
        - name: stdout
          match: '*'

[INPUT]
    name elasticsearch
    listen 0.0.0.0
    port 9200

[OUTPUT]
    name stdout
    match *

As described previously, the plugin will handle ingested Bulk API requests. For large bulk ingestion, you might have to increase the buffer size using the buffer_max_size and buffer_chunk_size parameters:

pipeline:
    inputs:
        - name: elasticsearch
          listen: 0.0.0.0
          port: 9200
          buffer_max_size: 20M
          buffer_chunk_size: 5M

    outputs:
        - name: stdout
          match: '*'

[INPUT]
    name elasticsearch
    listen 0.0.0.0
    port 9200
    buffer_max_size 20M
    buffer_chunk_size 5M

[OUTPUT]
    name stdout
    match *

Ingesting from beats series

Ingesting from beats series agents is also supported. For example, Filebeat, Metricbeat, and Winlogbeat are able to ingest their collected data through this plugin.

The Fluent Bit node information is returned as Elasticsearch 8.0.0.

Users must specify the following configurations on their beats configurations:

output.elasticsearch:
  allow_older_versions: true
  ilm: false

For large log ingestion on these beat plugins, users might have to configure rate limiting on those beats plugins when Fluent Bit indicates that the application is exceeding the size limit for HTTP requests:

processors:
  - rate_limit:
      limit: "200/s"

CPU Log Based Metrics

The CPU input plugin measures the CPU usage of a process or, by default, the whole system (considering each CPU core). It reports values as percentages for every configured interval of time. This plugin is available only for Linux.

The following tables describe the information generated by the plugin. The following keys represent the data used by the overall system, and all values associated to the keys are in a percentage unit (0 to 100%):

The CPU metrics plugin creates metrics that are log-based, such as JSON payload. For Prometheus-based metrics, see the Node Exporter Metrics input plugin.

Key
Description

cpu_p

CPU usage of the overall system. This value is the summation of time spent in user and kernel space. The result takes into consideration the number of CPU cores in the system.

user_p

CPU usage in User mode; in short, the CPU usage by user space programs. The result takes into consideration the number of CPU cores in the system.

system_p

CPU usage in Kernel mode; in short, the CPU usage by the Kernel. The result takes into consideration the number of CPU cores in the system.

In addition to the keys reported in the previous table, a similar content is created per CPU core. The cores are listed from 0 to N as the Kernel reports:

Key
Description

cpuN.p_cpu

Represents the total CPU usage by core N.

cpuN.p_user

Total CPU spent in user mode or user space programs associated to this core.

cpuN.p_system

Total CPU spent in system or kernel mode associated to this core.

Configuration parameters

The plugin supports the following configuration parameters:

Key
Description
Default

Interval_Sec

Polling interval in seconds.

1

Interval_NSec

Polling interval in nanoseconds.

0

PID

Specify the ID (PID) of a running process in the system. By default, the plugin monitors the whole system, but if this option is set, it will only monitor the given process ID.

none

Threaded

Indicates whether to run this input in its own thread.

false

Get started

In order to get the statistics of the CPU usage of your system, you can run the plugin from the command line or through the configuration file:

Command line

You can run this plugin from the command line using a command like the following:

build/bin/fluent-bit -i cpu -t my_cpu -o stdout -m '*'

The command returns results similar to the following:

Fluent Bit v1.x.x
* Copyright (C) 2019-2020 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2019/09/02 10:46:29] [ info] starting engine
[0] [1452185189, {"cpu_p"=>7.00, "user_p"=>5.00, "system_p"=>2.00, "cpu0.p_cpu"=>10.00, "cpu0.p_user"=>8.00, "cpu0.p_system"=>2.00, "cpu1.p_cpu"=>6.00, "cpu1.p_user"=>4.00, "cpu1.p_system"=>2.00}]
[1] [1452185190, {"cpu_p"=>6.50, "user_p"=>5.00, "system_p"=>1.50, "cpu0.p_cpu"=>6.00, "cpu0.p_user"=>5.00, "cpu0.p_system"=>1.00, "cpu1.p_cpu"=>7.00, "cpu1.p_user"=>5.00, "cpu1.p_system"=>2.00}]
[2] [1452185191, {"cpu_p"=>7.50, "user_p"=>5.00, "system_p"=>2.50, "cpu0.p_cpu"=>7.00, "cpu0.p_user"=>3.00, "cpu0.p_system"=>4.00, "cpu1.p_cpu"=>6.00, "cpu1.p_user"=>6.00, "cpu1.p_system"=>0.00}]
[3] [1452185192, {"cpu_p"=>4.50, "user_p"=>3.50, "system_p"=>1.00, "cpu0.p_cpu"=>6.00, "cpu0.p_user"=>5.00, "cpu0.p_system"=>1.00, "cpu1.p_cpu"=>5.00, "cpu1.p_user"=>3.00, "cpu1.p_system"=>2.00}]

As described previously, the CPU input plugin gathers the overall usage every second and flushes the information to the output on the fifth second. This example uses the stdout plugin to demonstrate the output records. In a real use case you might want to flush this information to a central aggregator.

Configuration file

In your main configuration file append the following:
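
A minimal sketch, pairing the cpu input with the stdout output:

[SERVICE]
    Flush     1
    Log_Level info

[INPUT]
    Name cpu
    Tag  my_cpu

[OUTPUT]
    Name  stdout
    Match *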

Disk I/O Log Based Metrics

The Disk input plugin gathers information about the disk throughput of the running system at fixed intervals of time and reports it.

The Disk I/O metrics plugin creates metrics that are log-based, such as JSON payload. For Prometheus-based metrics, see the Node Exporter Metrics input plugin.

Configuration parameters

The plugin supports the following configuration parameters:

Key
Description
Default

Interval_Sec

Polling interval (seconds).

1

Interval_NSec

Polling interval (nanoseconds).

0

Dev_Name

Device name to limit the target (for example, sda). If not set, in_disk gathers information from all disks and partitions.

all disks

Threaded

Indicates whether to run this input in its own thread.

false

Get started

In order to get disk usage from your system, you can run the plugin from the command line or through the configuration file:

Command line

You can run the plugin from the command line:

fluent-bit -i disk -o stdout

Which returns information like the following:

Fluent Bit v1.x.x
* Copyright (C) 2019-2020 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2017/01/28 16:58:16] [ info] [engine] started
[0] disk.0: [1485590297, {"read_size"=>0, "write_size"=>0}]
[1] disk.0: [1485590298, {"read_size"=>0, "write_size"=>0}]
[2] disk.0: [1485590299, {"read_size"=>0, "write_size"=>0}]
[3] disk.0: [1485590300, {"read_size"=>0, "write_size"=>11997184}]

Configuration file

In your main configuration file append the following:

pipeline:
    inputs:
        - name: disk
          tag: disk
          interval_sec: 1
          interval_nsec: 0

    outputs:
        - name: stdout
          match: '*'

[INPUT]
    Name          disk
    Tag           disk
    Interval_Sec  1
    Interval_NSec 0

[OUTPUT]
    Name   stdout
    Match  *

Total interval (sec) = Interval_Sec + (Interval_Nsec / 1000000000)

For example: 1.5s = 1s + 500000000ns

Kernel Logs

The kmsg input plugin reads the Linux Kernel log buffer from the beginning. It gets every record and parses fields as priority, sequence, seconds, useconds, and message.

Configuration parameters

Key
Description
Default

Prio_Level

The log level to filter. The kernel log is dropped if its priority is more than prio_level. Allowed values are 0-8. 8 means all logs are saved.

8

Threaded

Indicates whether to run this input in its own thread.

false

Get started

To start getting the Linux Kernel messages, you can run the plugin from the command line or through the configuration file:

Command line

fluent-bit -i kmsg -t kernel -o stdout -m '*'

Which returns output similar to:

Fluent Bit v4.0.0
* Copyright (C) 2015-2025 The Fluent Bit Authors
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

______ _                  _    ______ _ _             ___  _____
|  ___| |                | |   | ___ (_) |           /   ||  _  |
| |_  | |_   _  ___ _ __ | |_  | |_/ /_| |_  __   __/ /| || |/' |
|  _| | | | | |/ _ \ '_ \| __| | ___ \ | __| \ \ / / /_| ||  /| |
| |   | | |_| |  __/ | | | |_  | |_/ / | |_   \ V /\___  |\ |_/ /
\_|   |_|\__,_|\___|_| |_|\__| \____/|_|\__|   \_/     |_(_)___/


[2025/06/30 16:12:06] [ info] [fluent bit] version=4.0.0, commit=3a91b155d6, pid=91577
[2025/06/30 16:12:06] [ info] [storage] ver=1.5.2, type=memory, sync=normal, checksum=off, max_chunks_up=128
[2025/06/30 16:12:06] [ info] [simd    ] disabled
[2025/06/30 16:12:06] [ info] [cmetrics] version=0.9.9
[2025/06/30 16:12:06] [ info] [ctraces ] version=0.6.2
[2025/06/30 16:12:06] [ info] [input:health:health.0] initializing
[2025/06/30 16:12:06] [ info] [input:health:health.0] storage_strategy='memory' (memory only)
[2025/06/30 16:12:06] [ info] [sp] stream processor started
[2025/06/30 16:12:06] [ info] [output:stdout:stdout.0] worker #0 started
[0] kernel: [1463421823, {"priority"=>3, "sequence"=>1814, "sec"=>11706, "usec"=>732233, "msg"=>"ERROR @wl_cfg80211_get_station : Wrong Mac address, mac = 34:a8:4e:d3:40:ec profile =20:3a:07:9e:4a:ac"}]
[1] kernel: [1463421823, {"priority"=>3, "sequence"=>1815, "sec"=>11706, "usec"=>732300, "msg"=>"ERROR @wl_cfg80211_get_station : Wrong Mac address, mac = 34:a8:4e:d3:40:ec profile =20:3a:07:9e:4a:ac"}]
[2] kernel: [1463421829, {"priority"=>3, "sequence"=>1816, "sec"=>11712, "usec"=>729728, "msg"=>"ERROR @wl_cfg80211_get_station : Wrong Mac address, mac = 34:a8:4e:d3:40:ec profile =20:3a:07:9e:4a:ac"}]
[3] kernel: [1463421829, {"priority"=>3, "sequence"=>1817, "sec"=>11712, "usec"=>729802, "msg"=>"ERROR @wl_cfg80211_get_station : Wrong Mac address, mac = 34:a8:4e:d3:40:ec
...

As described previously, the plugin processed all messages that the Linux Kernel reported. The output has been truncated for clarity.

Configuration file

In your main configuration file append the following:

pipeline:
    inputs:
        - name: kmsg
          tag: kernel

    outputs:
        - name: stdout
          match: '*'

[INPUT]
    Name   kmsg
    Tag    kernel

[OUTPUT]
    Name   stdout
    Match  *

Commands

Configuration files must be flexible enough for any deployment need, but they must keep a clean and readable format.

Fluent Bit Commands extend a configuration file with specific built-in features. The following commands are available:

Command     Prototype        Description

@INCLUDE    @INCLUDE FILE    Include a configuration file.
@SET        @SET KEY=VAL     Set a configuration variable.

@INCLUDE

Configuring a logging pipeline might lead to an extensive configuration file. In order to maintain a human-readable configuration, split the configuration in multiple files.

The @INCLUDE command allows the configuration reader to include an external configuration file:
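
A minimal sketch of such a layout (the included file names are illustrative and match the sections below):

[SERVICE]
    Flush 1

@INCLUDE inputs.conf
@INCLUDE outputs.conf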

This example defines the main service configuration file and also includes two files to continue the configuration.

Fluent Bit respects the following order when including:

  • Service

  • Inputs

  • Filters

  • Outputs

inputs.conf

The following is an example of an inputs.conf file, like the one called in the previous example.
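
A minimal sketch (plugin choices and paths are illustrative):

[INPUT]
    Name cpu
    Tag  mycpu

[INPUT]
    Name tail
    Path /var/log/syslog
    Tag  syslog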

outputs.conf

The following is an example of an outputs.conf file, like the one called in the previous example.
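
A minimal sketch, matching the tags used in the inputs.conf sketch above:

[OUTPUT]
    Name  stdout
    Match *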

@SET

Fluent Bit supports configuration variables. One way to expose these variables to Fluent Bit is by setting a shell environment variable; the other is through the @SET command.

The @SET command can only be used at the root level of each line. It can't be used inside a section:
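
A minimal sketch (variable names are illustrative):

@SET my_input=cpu
@SET my_output=stdout

[SERVICE]
    Flush 1

[INPUT]
    Name ${my_input}

[OUTPUT]
    Name ${my_output}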



The Elasticsearch input plugin supports the following configuration parameters:

• buffer_max_size: Set the maximum size of the buffer. Default: 4M.
• buffer_chunk_size: Set the buffer chunk size. Default: 512K.
• tag_key: Specify a key name to extract as a tag. Default: NULL.
• meta_key: Specify a key name for meta information. Default: "@meta".
• hostname: Specify the hostname or fully qualified domain name. This parameter can be used for "sniffing" (auto-discovery of) cluster node information. Default: "localhost".
• version: Specify the Elasticsearch server version. This parameter is used when checking the Elasticsearch/OpenSearch server version. Default: "8.0.0".
• threaded: Indicates whether to run this input in its own thread. Default: false.

fluent-bit -i elasticsearch -p port=9200 -o stdout
pipeline:
    inputs:
        - name: elasticsearch
          listen: 0.0.0.0
          port: 9200

    outputs:
        - name: stdout
          match: '*'
[INPUT]
    name elasticsearch
    listen 0.0.0.0
    port 9200

[OUTPUT]
    name stdout
    match *
pipeline:
    inputs:
        - name: elasticsearch
          listen: 0.0.0.0
          port: 9200
          buffer_max_size: 20M
          buffer_chunk_size: 5M

    outputs:
        - name: stdout
          match: '*'
[INPUT]
    name elasticsearch
    listen 0.0.0.0
    port 9200
    buffer_max_size 20M
    buffer_chunk_size 5M

[OUTPUT]
    name stdout
    match *
output.elasticsearch:
  allow_older_versions: true
  ilm: false
processors:
  - rate_limit:
      limit: "200/s"
These settings apply to Beats agents such as:

• Filebeat
• Metricbeat
• Winlogbeat

The CPU input plugin reports the following metrics, with each value averaged over the number of CPU cores in the system:

• cpu_p: CPU usage of the overall system, the sum of time spent in user and kernel space.
• user_p: CPU usage in user mode, that is, CPU usage by user space programs.
• system_p: CPU usage in kernel mode, that is, CPU usage by the kernel.
• cpuN.p_cpu: Total CPU usage by core N.
• cpuN.p_user: CPU time spent in user mode or user space programs associated with core N.
• cpuN.p_system: CPU time spent in system or kernel mode associated with core N.

The plugin supports the following configuration parameters:

• Interval_Sec: Polling interval in seconds. Default: 1.
• Interval_NSec: Polling interval in nanoseconds. Default: 0.
• PID: Specify the ID (PID) of a running process in the system. By default, the plugin monitors the whole system; if this option is set, it monitors only the given process ID. Default: none.
• threaded: Indicates whether to run this input in its own thread. Default: false.
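For example, a minimal sketch that monitors only a single process (assuming a running process with PID 1234):

fluent-bit -i cpu -p pid=1234 -o stdout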

build/bin/fluent-bit -i cpu -t my_cpu -o stdout -m '*'
Fluent Bit v1.x.x
* Copyright (C) 2019-2020 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2019/09/02 10:46:29] [ info] starting engine
[0] [1452185189, {"cpu_p"=>7.00, "user_p"=>5.00, "system_p"=>2.00, "cpu0.p_cpu"=>10.00, "cpu0.p_user"=>8.00, "cpu0.p_system"=>2.00, "cpu1.p_cpu"=>6.00, "cpu1.p_user"=>4.00, "cpu1.p_system"=>2.00}]
[1] [1452185190, {"cpu_p"=>6.50, "user_p"=>5.00, "system_p"=>1.50, "cpu0.p_cpu"=>6.00, "cpu0.p_user"=>5.00, "cpu0.p_system"=>1.00, "cpu1.p_cpu"=>7.00, "cpu1.p_user"=>5.00, "cpu1.p_system"=>2.00}]
[2] [1452185191, {"cpu_p"=>7.50, "user_p"=>5.00, "system_p"=>2.50, "cpu0.p_cpu"=>7.00, "cpu0.p_user"=>3.00, "cpu0.p_system"=>4.00, "cpu1.p_cpu"=>6.00, "cpu1.p_user"=>6.00, "cpu1.p_system"=>0.00}]
[3] [1452185192, {"cpu_p"=>4.50, "user_p"=>3.50, "system_p"=>1.00, "cpu0.p_cpu"=>6.00, "cpu0.p_user"=>5.00, "cpu0.p_system"=>1.00, "cpu1.p_cpu"=>5.00, "cpu1.p_user"=>3.00, "cpu1.p_system"=>2.00}]

pipeline:
    inputs:
        - name: cpu
          tag: my_cpu

    outputs:
        - name: stdout
          match: '*'
[INPUT]
    Name cpu
    Tag  my_cpu

[OUTPUT]
    Name  stdout
    Match *

The following configuration directives are supported:

• @INCLUDE FILE: Include a configuration file.
• @SET KEY=VAL: Set a configuration variable.

[SERVICE]
    Flush 1

@INCLUDE inputs.conf
@INCLUDE outputs.conf
[INPUT]
    Name cpu
    Tag  mycpu

[INPUT]
    Name tail
    Path /var/log/*.log
    Tag  varlog.*
[OUTPUT]
    Name   stdout
    Match  mycpu

[OUTPUT]
    Name            es
    Match           varlog.*
    Host            127.0.0.1
    Port            9200
    Logstash_Format On
@SET my_input=cpu
@SET my_output=stdout

[SERVICE]
    Flush 1

[INPUT]
    Name ${my_input}

[OUTPUT]
    Name ${my_output}
configuration variables

Red Hat / CentOS

Fluent Bit is distributed as the fluent-bit package and is available for the latest stable CentOS system.

Fluent Bit supports the following architectures:

  • x86_64

  • aarch64

  • arm64v8

For CentOS 9 and later, Fluent Bit uses CentOS Stream as the canonical base system.

Single line install

Fluent Bit provides an installation script to use for most Linux targets. This will always install the most recently released version.

curl https://raw.githubusercontent.com/fluent/fluent-bit/master/install.sh | sh

This is a convenience helper and should always be validated prior to use. The recommended secure deployment approach is to use the following instructions:
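One way to validate the helper is to download the script, review it, and only then run it (the local file name here is arbitrary):

curl -fsSL https://raw.githubusercontent.com/fluent/fluent-bit/master/install.sh -o install-fluent-bit.sh
less install-fluent-bit.sh
sh install-fluent-bit.sh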

CentOS 8

CentOS 8 is now end-of-life, so the default Yum repositories are unavailable.

Ensure you've configured an appropriate mirror. For example:

sed -i 's/mirrorlist/#mirrorlist/g' /etc/yum.repos.d/CentOS-* && \
sed -i 's|#baseurl=http://mirror.centos.org|baseurl=http://vault.centos.org|g' /etc/yum.repos.d/CentOS-*

An alternative is to use Rocky or Alma Linux, which should be equivalent.

RHEL/AlmaLinux/RockyLinux and CentOS 9 Stream

From CentOS 9 Stream onwards, the CentOS dependencies update more often than downstream usage. This can mean that incompatible (more recent) versions of certain dependencies (for example, OpenSSL) are provided. For OSS, RockyLinux and AlmaLinux repositories are also provided.

Replace the centos string in the Yum configuration below with almalinux or rockylinux to use those repositories instead. This might also be required for RHEL 9, which no longer tracks equivalent CentOS 9 Stream dependencies. No RHEL 9 build is provided; use one of the OSS variants listed.

Configure Yum

The fluent-bit package is provided through a Yum repository. To add the repository reference to your system:

  1. In /etc/yum.repos.d/, add a new file called fluent-bit.repo.

  2. Add the following content to the file:

    [fluent-bit]
    name = Fluent Bit
    baseurl = https://packages.fluentbit.io/centos/$releasever/
    gpgcheck=1
    gpgkey=https://packages.fluentbit.io/fluentbit.key
    repo_gpgcheck=1
    enabled=1
  3. As a best practice, enable gpgcheck and repo_gpgcheck for security reasons. Fluent Bit signs its repository metadata and all Fluent Bit packages.

Updated key from March 2022

For releases 1.9.0, 1.8.15, and later, the GPG key has been updated. Ensure the new key is added.

The GPG Key fingerprint of the new key is:

C3C0 A285 34B9 293E AF51  FABD 9F9D DC08 3888 C1CD
Fluentbit releases (Releases signing key) <[email protected]>

The previous key is still available and might be required to install previous versions.

The GPG Key fingerprint of the old key is:

F209 D876 2A60 CD49 E680 633B 4FF8 368B 6EA0 722A
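If you need to import the signing key manually on an RPM-based system, a command like the following should work:

sudo rpm --import https://packages.fluentbit.io/fluentbit.key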

Refer to the supported platform documentation to see which platforms are supported in each release.

Install

  1. After your repository is configured, run the following command to install it:

    sudo yum install fluent-bit
  2. Instruct systemd to start the service:

    sudo systemctl start fluent-bit
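To also start the service automatically on boot, enable the unit:

sudo systemctl enable fluent-bit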

If you do a status check, you should see output similar to this:

$ systemctl status fluent-bit
● fluent-bit.service - Fluent Bit
   Loaded: loaded (/usr/lib/systemd/system/fluent-bit.service; disabled; vendor preset: disabled)
   Active: active (running) since Thu 2016-07-07 02:08:01 BST; 9s ago
 Main PID: 3820 (fluent-bit)
   CGroup: /system.slice/fluent-bit.service
           └─3820 /opt/fluent-bit/bin/fluent-bit -c etc/fluent-bit/fluent-bit.conf
...

The default Fluent Bit configuration collects CPU usage metrics and sends the records to the standard output. You can see the outgoing data in your /var/log/messages file.

FAQ

Yum install fails with a "404 - Page not found" error for the package mirror

The fluent-bit.repo file for the latest installations of Fluent Bit uses a $releasever variable to determine the correct version of the package to install to your system:

[fluent-bit]
name = Fluent Bit
baseurl = https://packages.fluentbit.io/centos/$releasever/$basearch/
...

Depending on your Red Hat distribution version, this variable can return a value other than the OS major release version (for example, RHEL7 Server distributions return 7Server instead of 7). The Fluent Bit package URL uses the major OS release version, so any other value here will cause a 404.

To resolve this issue, replace the $releasever variable with your system's OS major release version. For example:

[fluent-bit]
name = Fluent Bit
baseurl = https://packages.fluentbit.io/centos/7/$basearch/
gpgcheck=1
gpgkey=https://packages.fluentbit.io/fluentbit.key
repo_gpgcheck=1
enabled=1

Yum install fails with incompatible dependencies using CentOS 9+

CentOS 9 onwards is no longer compatible with RHEL 9, as it can track more recent dependencies. Alternative AlmaLinux and RockyLinux repositories are available.

See the guidance above.

Configuring Fluent Bit

Fluent Bit supports two configuration formats:

  • YAML: Standard configuration format as of v3.2.

  • Classic mode: To be deprecated at the end of 2025.

Command line interface

Fluent Bit exposes most of its features through the command line interface. Run it with the -h option to get a list of the available options:

$ docker run --rm -it fluent/fluent-bit --help
Usage: /fluent-bit/bin/fluent-bit [OPTION]

Available Options
  -b  --storage_path=PATH specify a storage buffering path
  -c  --config=FILE       specify an optional configuration file
  -d, --daemon            run Fluent Bit in background mode
  -D, --dry-run           dry run
  -f, --flush=SECONDS     flush timeout in seconds (default: 1)
  -C, --custom=CUSTOM     enable a custom plugin
  -i, --input=INPUT       set an input
  -F  --filter=FILTER     set a filter
  -m, --match=MATCH       set plugin match, same as '-p match=abc'
  -o, --output=OUTPUT     set an output
  -p, --prop="A=B"        set plugin configuration property
  -R, --parser=FILE       specify a parser configuration file
  -e, --plugin=FILE       load an external plugin (shared lib)
  -l, --log_file=FILE     write log info to a file
  -t, --tag=TAG           set plugin tag, same as '-p tag=abc'
  -T, --sp-task=SQL       define a stream processor task
  -v, --verbose           increase logging verbosity (default: info)
  -w, --workdir           set the working directory
  -H, --http              enable monitoring HTTP server
  -P, --port              set HTTP server TCP port (default: 2020)
  -s, --coro_stack_size   set coroutines stack size in bytes (default: 24576)
  -q, --quiet             quiet mode
  -S, --sosreport         support report for Enterprise customers
  -V, --version           show version number
  -h, --help              print this help

Inputs
  cpu                     CPU Usage
  mem                     Memory Usage
  thermal                 Thermal
  kmsg                    Kernel Log Buffer
  proc                    Check Process health
  disk                    Diskstats
  systemd                 Systemd (Journal) reader
  netif                   Network Interface Usage
  docker                  Docker containers metrics
  docker_events           Docker events
  node_exporter_metrics   Node Exporter Metrics (Prometheus Compatible)
  fluentbit_metrics       Fluent Bit internal metrics
  prometheus_scrape       Scrape metrics from Prometheus Endpoint
  tail                    Tail files
  dummy                   Generate dummy data
  dummy_thread            Generate dummy data in a separate thread
  head                    Head Input
  health                  Check TCP server health
  http                    HTTP
  collectd                collectd input plugin
  statsd                  StatsD input plugin
  opentelemetry           OpenTelemetry
  nginx_metrics           Nginx status metrics
  serial                  Serial input
  stdin                   Standard Input
  syslog                  Syslog
  tcp                     TCP
  mqtt                    MQTT, listen for Publish messages
  forward                 Fluentd in-forward
  random                  Random

Filters
  alter_size              Alter incoming chunk size
  aws                     Add AWS Metadata
  checklist               Check records and flag them
  record_modifier         modify record
  throttle                Throttle messages using sliding window algorithm
  type_converter          Data type converter
  kubernetes              Filter to append Kubernetes metadata
  modify                  modify records by applying rules
  multiline               Concatenate multiline messages
  nest                    nest events by specified field values
  parser                  Parse events
  expect                  Validate expected keys and values
  grep                    grep events by specified field values
  rewrite_tag             Rewrite records tags
  lua                     Lua Scripting Filter
  stdout                  Filter events to STDOUT
  geoip2                  add geoip information to records
  nightfall               scans records for sensitive content

Outputs
  azure                   Send events to Azure HTTP Event Collector
  azure_blob              Azure Blob Storage
  azure_kusto             Send events to Kusto (Azure Data Explorer)
  bigquery                Send events to BigQuery via streaming insert
  counter                 Records counter
  datadog                 Send events to DataDog HTTP Event Collector
  es                      Elasticsearch
  exit                    Exit after a number of flushes (test purposes)
  file                    Generate log file
  forward                 Forward (Fluentd protocol)
  http                    HTTP Output
  influxdb                InfluxDB Time Series
  logdna                  LogDNA
  loki                    Loki
  kafka                   Kafka
  kafka-rest              Kafka REST Proxy
  nats                    NATS Server
  nrlogs                  New Relic
  null                    Throws away events
  opensearch              OpenSearch
  plot                    Generate data file for GNU Plot
  pgsql                   PostgreSQL
  skywalking              Send logs into log collector on SkyWalking OAP
  slack                   Send events to a Slack channel
  splunk                  Send events to Splunk HTTP Event Collector
  stackdriver             Send events to Google Stackdriver Logging
  stdout                  Prints events to STDOUT
  syslog                  Syslog
  tcp                     TCP Output
  td                      Treasure Data
  flowcounter             FlowCounter
  gelf                    GELF Output
  websocket               Websocket
  cloudwatch_logs         Send logs to Amazon CloudWatch
  kinesis_firehose        Send logs to Amazon Kinesis Firehose
  kinesis_streams         Send logs to Amazon Kinesis Streams
  opentelemetry           OpenTelemetry
  prometheus_exporter     Prometheus Exporter
  prometheus_remote_write Prometheus remote write
  s3                      Send to S3
pipeline:
    inputs:
        - name: docker_events

    outputs:
        - name: stdout
          match: '*'
pipeline:
    inputs:
        - name: dummy
          dummy: '{"top": {".dotted": "value"}}'
          
    outputs:       
        - name: es
          host: elasticsearch
          replace_dots: on
service:
    http_server: on
    http_listen: 0.0.0.0
    http_port: 2020
    hot_reload: on
pipeline:
    inputs:
        - name: docker
          include: 6bab19c3a0f9 14159be4ca2c

    outputs:
        - name: stdout
          match: '*'
service:
    flush: 1
    daemon: off
    log_level: info

pipeline:
    inputs:
        - name: cpu
          
    outputs:
        - name: stdout
          match: '*'
pipeline:
    inputs:
        - name: collectd
          listen: 0.0.0.0
          port: 25826
          typesdb: '/user/share/collectd/types.db,/etc/collectd/custom.db'

    outputs:
        - name: stdout
          match: '*'

The fluentbit_metrics input plugin supports the following configuration parameters:

• scrape_interval: The rate at which metrics are collected from the host operating system. Default: 2 seconds.
• scrape_on_start: Scrape metrics upon start; use this to avoid waiting for scrape_interval before the first round of metrics. Default: false.
• threaded: Indicates whether to run this input in its own thread. Default: false.

curl http://127.0.0.1:2021/metrics
# Fluent Bit Metrics + Prometheus Exporter
# -------------------------------------------
# The following example collects Fluent Bit metrics and exposes
# them through a Prometheus HTTP end-point.
#
# After starting the service try it with:
#
# $ curl http://127.0.0.1:2021/metrics
#
[SERVICE]
    flush           1
    log_level       info

[INPUT]
    name            fluentbit_metrics
    tag             internal_metrics
    scrape_interval 2

[OUTPUT]
    name            prometheus_exporter
    match           internal_metrics
    host            0.0.0.0
    port            2021
service:
    flush: 1
    log_level: info
    
pipeline:
    inputs:
        - name: fluentbit_metrics
          tag: internal_metrics
          scrape_interval: 2

    outputs:
        - name: prometheus_exporter
          match: internal_metrics
          host: 0.0.0.0
          port: 2021

The following settings can be set in the service section:

• scheduler.cap: Set a maximum retry time in seconds. Supported in v1.8.7 or later. Default: 2000.
• scheduler.base: Set the base of exponential backoff. Supported in v1.8.7 or later. Default: 5.

service:
    flush: 5
    daemon: off
    log_level: debug
    scheduler.base: 3
    scheduler.cap: 30
[SERVICE]
    Flush            5
    Daemon           off
    Log_Level        debug
    scheduler.base   3
    scheduler.cap    30

With scheduler.base set to 3 and scheduler.cap set to 30 as in the example above, the waiting time range for each retry is:

• Retry 1: (3, 6) seconds
• Retry 2: (3, 12) seconds
• Retry 3: (3, 24) seconds
• Retry 4: (3, 30) seconds, capped by scheduler.cap

The Retry_Limit output property accepts the following values:

• N: Integer value setting the maximum number of retries allowed. N must be >= 1. Default: 1.
• no_limits or False: No limit to the number of retries the scheduler can perform.
• no_retries: Retries are disabled; the scheduler doesn't try to send data to the destination again if the first attempt failed.

pipeline:
    inputs:
        ...
  
    outputs:
        - name: http
          host: 192.168.5.6
          port: 8080
          retry_limit: false

        - name: es
          host: 192.168.5.20
          port: 9200
          logstash_format: on
          retry_limit: 5
[OUTPUT]
    Name        http
    Host        192.168.5.6
    Port        8080
    Retry_Limit False

[OUTPUT]
    Name            es
    Host            192.168.5.20
    Port            9200
    Logstash_Format On
    Retry_Limit     5

Health

The Health input plugin lets you check how healthy a TCP server is. It checks by issuing a TCP connection at regular intervals.

Configuration parameters

The plugin supports the following configuration parameters:

• Host: Name of the target host or IP address. Default: none.
• Port: TCP port where the connection request is performed. Default: none.
• Interval_Sec: Interval in seconds between service checks. Default: 1.
• Interval_NSec: Specify a nanoseconds interval for service checks; works in conjunction with the Interval_Sec configuration key. Default: 0.
• Alert: If enabled, messages are generated only when the target TCP service is down. Default: false.
• Add_Host: If enabled, the hostname is appended to each record. Default: false.
• Add_Port: If enabled, the port number is appended to each record. Default: false.
• Threaded: Indicates whether to run this input in its own thread. Default: false.

Get started

To start performing the checks, you can run the plugin from the command line or through the configuration file:

Command line

From the command line you can let Fluent Bit generate the checks with the following options:

fluent-bit -i health -p host=127.0.0.1 -p port=80 -o stdout

Configuration file

In your main configuration file append the following:

pipeline:
    inputs:
        - name: health
          host: 127.0.0.1
          port: 80
          interval_sec: 1
          interval_nsec: 0
          
    outputs:
        - name: stdout
          match: '*'
[INPUT]
    Name          health
    Host          127.0.0.1
    Port          80
    Interval_Sec  1
    Interval_NSec 0

[OUTPUT]
    Name   stdout
    Match  *

Testing

Once Fluent Bit is running, you will see health-check records in the output interface similar to this:

$ fluent-bit -i health -p host=127.0.0.1 -p port=80 -o stdout

Fluent Bit v4.0.0
* Copyright (C) 2015-2025 The Fluent Bit Authors
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

______ _                  _    ______ _ _             ___  _____
|  ___| |                | |   | ___ (_) |           /   ||  _  |
| |_  | |_   _  ___ _ __ | |_  | |_/ /_| |_  __   __/ /| || |/' |
|  _| | | | | |/ _ \ '_ \| __| | ___ \ | __| \ \ / / /_| ||  /| |
| |   | | |_| |  __/ | | | |_  | |_/ / | |_   \ V /\___  |\ |_/ /
\_|   |_|\__,_|\___|_| |_|\__| \____/|_|\__|   \_/     |_(_)___/


[2025/06/30 16:12:06] [ info] [fluent bit] version=4.0.0, commit=3a91b155d6, pid=91577
[2025/06/30 16:12:06] [ info] [storage] ver=1.5.2, type=memory, sync=normal, checksum=off, max_chunks_up=128
[2025/06/30 16:12:06] [ info] [simd    ] disabled
[2025/06/30 16:12:06] [ info] [cmetrics] version=0.9.9
[2025/06/30 16:12:06] [ info] [ctraces ] version=0.6.2
[2025/06/30 16:12:06] [ info] [input:health:health.0] initializing
[2025/06/30 16:12:06] [ info] [input:health:health.0] storage_strategy='memory' (memory only)
[2025/06/30 16:12:06] [ info] [sp] stream processor started
[2025/06/30 16:12:06] [ info] [output:stdout:stdout.0] worker #0 started
[0] health.0: [1624145988.305640385, {"alive"=>true}]
[1] health.0: [1624145989.305575360, {"alive"=>true}]
[2] health.0: [1624145990.306498573, {"alive"=>true}]
[3] health.0: [1624145991.305595498, {"alive"=>true}]

Dummy

The Dummy input plugin generates dummy events. Use this plugin for testing, debugging, benchmarking, and getting started with Fluent Bit.

Configuration parameters

The plugin supports the following configuration parameters:

• Dummy: Dummy JSON record. Default: {"message":"dummy"}.
• Metadata: Dummy JSON metadata. Default: {}.
• Start_time_sec: Dummy base timestamp, in seconds. Default: 0.
• Start_time_nsec: Dummy base timestamp, in nanoseconds. Default: 0.
• Rate: The rate at which messages are generated, expressed in messages per second. Default: 1.
• Interval_sec: Set the time interval, in seconds, at which every message is generated. If set, the Rate configuration is ignored. Default: 0.
• Interval_nsec: Set the time interval, in nanoseconds, at which every message is generated. If set, the Rate configuration is ignored. Default: 0.
• Samples: If set, limits the number of events generated. For example, if Samples=3, the plugin generates only three events and stops. Default: none.
• Copies: Number of messages to generate each time messages are generated. Default: 1.
• Flush_on_startup: If set to true, the first dummy event is generated at startup. Default: false.
• Threaded: Indicates whether to run this input in its own thread. Default: false.

Get started

You can run the plugin from the command line or through the configuration file:

Command line

Run the plugin from the command line using the following command:

fluent-bit -i dummy -o stdout

which returns results like the following:

Fluent Bit v2.x.x
* Copyright (C) 2015-2022 The Fluent Bit Authors
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[0] dummy.0: [[1686451466.659962491, {}], {"message"=>"dummy"}]
[0] dummy.0: [[1686451467.659679509, {}], {"message"=>"dummy"}]
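To exercise the parameters in the table, a sketch like the following emits two copies per generation and stops after three samples:

fluent-bit -i dummy -p samples=3 -p copies=2 -o stdout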

Configuration file

In your main configuration file append the following:

pipeline:
    inputs:
        - name: dummy
          dummy: '{"message": "custom dummy"}'
  
    outputs:
        - name: stdout
          match: '*'
[INPUT]
    Name   dummy
    Dummy {"message": "custom dummy"}

[OUTPUT]
    Name   stdout
    Match  *

MQTT

The MQTT input plugin retrieves messages and data from MQTT control packets over a TCP connection. The incoming data to receive must be a JSON map.

Configuration parameters

The plugin supports the following configuration parameters:

• Listen: Listener network interface. Default: 0.0.0.0.
• Port: TCP port to listen on for connections. Default: 1883.
• Payload_Key: Specify the key where the payload key/value will be preserved. Default: none.
• Threaded: Indicates whether to run this input in its own thread. Default: false.

Get started

To listen for MQTT messages, you can run the plugin from the command line or through the configuration file.

Command line

The MQTT input plugin lets Fluent Bit behave as a server. Dispatch some messages using an MQTT client; in the following example, the mosquitto_pub tool is used for this purpose.

Running the following command:

fluent-bit -i mqtt -t data -o stdout -m '*'

Returns a response like the following:

Fluent Bit v4.0.3
* Copyright (C) 2015-2025 The Fluent Bit Authors
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

______ _                  _    ______ _ _             ___  _____
|  ___| |                | |   | ___ (_) |           /   ||  _  |
| |_  | |_   _  ___ _ __ | |_  | |_/ /_| |_  __   __/ /| || |/' |
|  _| | | | | |/ _ \ '_ \| __| | ___ \ | __| \ \ / / /_| ||  /| |
| |   | | |_| |  __/ | | | |_  | |_/ / | |_   \ V /\___  |\ |_/ /
\_|   |_|\__,_|\___|_| |_|\__| \____/|_|\__|   \_/     |_(_)___/


[2025/07/01 14:44:47] [ info] [fluent bit] version=4.0.3, commit=f5f5f3c17d, pid=1
[2025/07/01 14:44:47] [ info] [storage] ver=1.5.3, type=memory, sync=normal, checksum=off, max_chunks_up=128
[2025/07/01 14:44:47] [ info] [simd    ] disabled
[2025/07/01 14:44:47] [ info] [cmetrics] version=1.0.3
[2025/07/01 14:44:47] [ info] [ctraces ] version=0.6.6
[2025/07/01 14:44:47] [ info] [input:mem:mem.0] initializing
[2025/07/01 14:44:47] [ info] [input:mem:mem.0] storage_strategy='memory' (memory only)
[2025/07/01 14:44:47] [ info] [sp] stream processor started
[2025/07/01 14:44:47] [ info] [engine] Shutdown Grace Period=5, Shutdown Input Grace Period=2
[2025/07/01 14:44:47] [ info] [output:stdout:stdout.0] worker #0 started
[0] data: [1463775773, {"topic"=>"some/topic", "key1"=>123, "key2"=>456}]

The following command line will send a message to the MQTT input plugin:

mosquitto_pub  -m '{"key1": 123, "key2": 456}' -t some/topic

Configuration file

In your main configuration file append the following:

pipeline:
    inputs:
        - name: mqtt
          tag: data
          listen: 0.0.0.0
          port: 1883
          
    outputs:
        - name: stdout
          match: '*'
[INPUT]
    Name   mqtt
    Tag    data
    Listen 0.0.0.0
    Port   1883

[OUTPUT]
    Name   stdout
    Match  *

Service

The service section defines global properties of the service. The available configuration keys are:


Configuration example

The following configuration example defines a service section with hot reloading enabled, and a pipeline with a random input and a stdout output:

Kubernetes Events

Collect Kubernetes events

Kubernetes exports events through the API server. This input plugin lets you retrieve those events as logs and process them through the pipeline.

Configuration


In Fluent Bit 3.1 or later, this plugin uses a Kubernetes watch stream instead of polling. In these versions, the interval parameters are used only for reconnecting the Kubernetes watch stream.

Threading

This input always runs in its own thread.

Get started

Kubernetes service account

The Kubernetes service account used by Fluent Bit must have get, list, and watch permissions to namespaces and pods for the namespaces watched in the kube_namespace configuration parameter. If you're using the Helm chart to configure Fluent Bit, this role is included.

Basic configuration file

In the following configuration file, the Kubernetes events plugin collects events every 5 seconds (per the default for interval_nsec) and exposes them through the standard output plugin on the console:

Event timestamp

Event timestamps are created from the first existing field, based on the following order of precedence:

  1. lastTimestamp

  2. firstTimestamp

  3. metadata.creationTimestamp

Fluentd and Fluent Bit

The production grade telemetry ecosystem

Telemetry data processing can be complex, especially at scale. That's why Fluentd was created. Fluentd is more than a simple tool; it's grown into a full-scale ecosystem that contains SDKs for different languages and subprojects like Fluent Bit.

Here, we describe the relationship between the Fluentd and Fluent Bit open source projects.

Both projects are:

  • Licensed under the terms of Apache License v2.0.

  • Graduated hosted projects by the Cloud Native Computing Foundation (CNCF).

  • Production grade solutions: Deployed millions of times every single day.

  • Vendor neutral and community driven.

  • Widely adopted by the industry: Trusted by major companies like AWS, Microsoft, Google Cloud, and hundreds of others.

The projects have many similarities: Fluent Bit is designed and built on top of the best ideas of Fluentd architecture and general design. Which one you choose depends on your end-users' needs.

The following table describes a comparison of different areas of the projects:


Both Fluentd and Fluent Bit can work as aggregators or forwarders, and they can complement each other or be used as standalone solutions.

In recent years, cloud providers have switched from Fluentd to Fluent Bit for performance and compatibility reasons. Fluent Bit is now considered the next-generation solution.

Buffering

Performance and data safety

When Fluent Bit processes data, it uses the system memory (heap) as the primary and temporary place to store the record logs before they get delivered. The records are processed in this private memory area.

Buffering is the ability to store the records, and to continue storing incoming data while previous data is processed and delivered. Buffering in memory is the fastest mechanism, but some scenarios require special strategies to deal with backpressure, ensure data safety, or reduce memory consumption by the service in constrained environments.

Network failures or latency in third-party services are common. When data can't be delivered fast enough and new data to process keeps arriving, the system can face backpressure.

Fluent Bit buffering strategies are designed to solve problems associated with backpressure and general delivery failures. Fluent Bit offers a primary buffering mechanism in memory and an optional secondary one using the file system. With this hybrid solution you can accommodate any use case safely and keep a high performance while processing your data.

These mechanisms aren't mutually exclusive. When data is ready to be processed or delivered, it's always in memory, while other data in the queue might be in the file system until it's ready to be processed and moved up to memory.
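As a minimal sketch of this hybrid setup (assuming /var/log/flb-storage/ is a writable path for the service), an input can opt into filesystem buffering while the rest of the pipeline stays in memory:

service:
    storage.path: /var/log/flb-storage/

pipeline:
    inputs:
        - name: cpu
          storage.type: filesystem

    outputs:
        - name: stdout
          match: '*'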

To learn more about the buffering configuration in Fluent Bit, see Buffering & Storage.

pipeline:
    inputs:
        - name: cpu
          tag: my_cpu
          
        - name: mem
          tag: my_mem
          
    outputs:
        - name: es
          match: my_cpu
       
        - name: stdout
          match: my_mem

The service section supports the following keys:

• flush: Sets the flush time in seconds.nanoseconds format. The engine loop uses a flush timeout to determine when to flush records ingested by input plugins to output plugins. Default: 1.
• grace: Sets the grace time in seconds as an integer value. The engine loop uses a grace timeout to define the wait time before exiting. Default: 5.
• daemon: Boolean. Specifies whether Fluent Bit should run as a daemon (background process). Allowed values are: yes, no, on, and off. Don't enable when using a Systemd-based unit, such as the one provided in Fluent Bit packages. Default: off.
• dns.mode: Sets the primary transport layer protocol used by the asynchronous DNS resolver. Can be overridden on a per-plugin basis. Default: UDP.
• log_file: Absolute path for an optional log file. By default, all logs are redirected to the standard error interface (stderr). Default: none.
• log_level: Sets the logging verbosity level. Allowed values are: off, error, warn, info, debug, and trace. Values are cumulative: if debug is set, it includes error, warn, info, and debug. Trace mode is only available if Fluent Bit was built with the WITH_TRACE option enabled. Default: info.
• parsers_file: Path for a parsers configuration file. Multiple parsers_file entries can be defined within the section. With the new YAML configuration schema, defining parsers using this key is optional; parsers can be declared directly in the parsers section of your YAML configuration, offering a more streamlined and integrated approach. Default: none.
• plugins_file: Path for a plugins configuration file, which specifies the paths to external plugins (.so files) that Fluent Bit can load at runtime. With the new YAML schema, this key is optional; external plugins can be referenced directly within the plugins section, simplifying plugin management. See an example. Default: none.
• streams_file: Path for the Stream Processor configuration file, which defines the rules and operations for stream processing within Fluent Bit. This key is optional; Stream Processor configurations can be defined directly in the streams section of the YAML schema. Learn more about Stream Processing configuration. Default: none.
• http_server: Enables the built-in HTTP server. Default: off.
• http_listen: Sets the listening interface for the HTTP server when it's enabled. Default: 0.0.0.0.
• http_port: Sets the TCP port for the HTTP server. Default: 2020.
• hot_reload: Enables hot reloading of configuration with SIGHUP. Default: on.
• coro_stack_size: Sets the coroutine stack size in bytes. The value must be greater than the page size of the running system. Setting the value too small (4096) can cause coroutine threads to overrun the stack buffer. The default value of this parameter shouldn't be changed. Default: 24576.
• scheduler.cap: Sets a maximum retry time in seconds. Supported in v1.8.7 and greater. Default: 2000.
• scheduler.base: Sets the base of exponential backoff. Supported in v1.8.7 and greater. Default: 5.
• json.convert_nan_to_null: If enabled, NaN is converted to null when Fluent Bit converts msgpack to JSON. Default: false.
• sp.convert_from_str_to_num: If enabled, the Stream Processor converts strings that represent numbers to a numeric type. Default: true.

service:
  flush: 1
  log_level: info
  http_server: true
  http_listen: 0.0.0.0
  http_port: 2020
  hot_reload: on

pipeline:
  inputs:
    - name: random

  outputs:
    - name: stdout
      match: '*'
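With hot_reload enabled as above, a running instance re-reads its configuration when it receives SIGHUP. As a sketch, assuming a single local instance:

kill -HUP $(pidof fluent-bit)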

The kubernetes_events input plugin supports the following configuration parameters:

• db: Set a database file to keep track of recorded Kubernetes events. Default: none.
• db.sync: Set a database sync method. Accepted values: extra, full, normal, off. Default: normal.
• interval_sec: Set the reconnect interval, in seconds. Default: 0.
• interval_nsec: Set the sub-second part of the reconnect interval, in nanoseconds. Default: 500000000.
• kube_url: API server endpoint. Default: https://kubernetes.default.svc.
• kube_ca_file: Kubernetes TLS CA file. Default: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt.
• kube_ca_path: Kubernetes TLS CA path. Default: none.
• kube_token_file: Kubernetes authorization token file. Default: /var/run/secrets/kubernetes.io/serviceaccount/token.
• kube_token_ttl: Kubernetes token time to live, until it's read again from the token file. Default: 10m.
• kube_request_limit: Kubernetes limit parameter for the events query; no limit is applied when set to 0. Default: 0.
• kube_retention_time: Kubernetes retention time for events. Default: 1h.
• kube_namespace: Kubernetes namespace to query events from. Default: all.
• tls.debug: Debug level between 0 (nothing) and 4 (every detail). Default: 0.
• tls.verify: Enable or disable verification of the TLS peer certificate. Default: On.
• tls.vhost: Set an optional TLS virtual host. Default: none.

service:
    flush: 1
    log_level: info
    
pipeline:
    inputs:
        - name: kubernetes_events
          tag: k8s_events
          kube_url: https://kubernetes.default.svc
          
    outputs:
        - name: stdout
          match: '*'
[SERVICE]
    flush           1
    log_level       info

[INPUT]
    name            kubernetes_events
    tag             k8s_events
    kube_url        https://kubernetes.default.svc

[OUTPUT]
    name            stdout
    match           *

A comparison of the two projects:

• Scope: Fluentd targets containers and servers; Fluent Bit targets embedded Linux, containers, and servers.
• Language: Fluentd is written in C and Ruby; Fluent Bit is written in C.
• Memory: Fluentd requires greater than 60 MB; Fluent Bit requires approximately 1 MB.
• Performance: Fluentd offers medium performance; Fluent Bit offers high performance.
• Dependencies: Fluentd is built as a Ruby Gem and depends on other gems; Fluent Bit has zero dependencies, unless required by a plugin.
• Plugins: Fluentd has over 1,000 external plugins available; Fluent Bit has over 100 built-in plugins available.
• License: Both are distributed under the Apache License v2.0.


Networking

Fluent Bit implements a unified networking interface that's exposed to components like plugins. This interface abstracts the complexity of general I/O and is fully configurable.

A common use case is when a component or plugin needs to connect with a service to send and receive data. There are many challenges to handle like unresponsive services, networking latency, or any kind of connectivity error. The networking interface aims to abstract and simplify the network I/O handling, minimize risks, and optimize performance.

Networking concepts

Fluent Bit uses the following networking concepts:

TCP connect timeout

Typically, creating a new TCP connection to a remote server is straightforward and takes a few milliseconds. However, there are cases where DNS resolving, a slow network, or incomplete TLS handshakes might create long delays, or incomplete connection statuses.

  • net.connect_timeout lets you configure the maximum time to wait for a connection to be established. This value already considers the TLS handshake process.

  • net.connect_timeout_log_error indicates if an error should be logged in case of connect timeout. If disabled, the timeout is logged as a debug level message.

TCP source address

On environments with multiple network interfaces, you can choose which interface to use for Fluent Bit data that will flow through the network.

Use net.source_address to specify which network address to use for a TCP connection and data flow.

Connection keepalive

A connection keepalive refers to the ability of a client to keep the TCP connection open in a persistent way. This feature offers many benefits in terms of performance because communication channels are always established beforehand.

Any component that uses TCP channels, like HTTP or TLS, can take advantage of this feature. For configuration purposes, use the net.keepalive property.

Connection keepalive idle timeout

If a connection keepalive is enabled, there might be scenarios where the connection can be unused for long periods of time. Unused connections can be removed. To control how long a keepalive connection can be idle, Fluent Bit uses a configuration property called net.keepalive_idle_timeout.

DNS mode

The global dns.mode value issues DNS requests using the specified protocol, either TCP or UDP. If a transport layer protocol is specified, plugins that configure the net.dns.mode setting override the global setting.

Maximum connections per worker

For optimal performance, Fluent Bit tries to deliver data quickly and create TCP connections on-demand and in keepalive mode. In highly scalable environments, you might limit how many connections are created in parallel.

Use the net.max_worker_connections property in the output plugin section to set the maximum number of allowed connections. This property acts at the worker level. For example, if you have five workers and net.max_worker_connections is set to 10, a maximum of 50 connections is allowed. If the limit is reached, the output plugin issues a retry.

Listener backlog

When Fluent Bit listens for incoming connections (for example, in input plugins like HTTP, TCP, OpenTelemetry, Forward, or Syslog), the operating system maintains a queue of pending connections. The net.backlog option controls the maximum number of pending connections that can be queued before new connection attempts are refused. Increasing this value can help Fluent Bit handle bursts of incoming connections more gracefully. The default value is 128.

Note: On Linux, the effective backlog value may be capped by the kernel parameter net.core.somaxconn. If you need to allow a higher number of pending connections, you may need to increase this system setting.
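For example, on Linux you can inspect the current cap and raise it if needed (the value 1024 is illustrative):

sysctl net.core.somaxconn
sudo sysctl -w net.core.somaxconn=1024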

Configuration options

The following table describes the network configuration properties available and their usage in optimizing performance or adjusting configuration needs for plugins that rely on networking I/O:

• net.connect_timeout: Set the maximum time, in seconds, to wait for a TCP connection to be established, including the TLS handshake time. Default: 10.
• net.connect_timeout_log_error: On connection timeout, specify whether to log an error. When disabled, the timeout is logged as a debug message. Default: true.
• net.dns.mode: Select the primary DNS connection type (TCP or UDP). Can be set in the [SERVICE] section and overridden on a per-plugin basis if desired. Default: none.
• net.dns.prefer_ipv4: Prioritize IPv4 DNS results when trying to establish a connection. Default: false.
• net.dns.resolver: Select the primary DNS resolver type (LEGACY or ASYNC). Default: none.
• net.keepalive: Enable or disable connection keepalive support. Accepts a Boolean value: on or off. Default: on.
• net.keepalive_idle_timeout: Set the maximum time, in seconds, for an idle keepalive connection. Default: 30.
• net.keepalive_max_recycle: Set the maximum number of times a keepalive connection can be used before it's retired. Default: 2000.
• net.max_worker_connections: Set the maximum number of TCP connections that can be established per worker. Default: 0 (unlimited).
• net.source_address: Specify the network address to bind for data traffic. Default: none.
• net.backlog: Set the maximum number of pending connections for listening sockets. Available in versions 4.0.4 and later. Default: 128.

Example

This example sends five random messages through a TCP output connection. The remote side uses the nc (netcat) utility to see the data.

Use the following configuration snippet of your choice in a corresponding file named fluent-bit.yaml or fluent-bit.conf:

service:
    flush: 1
    log_level: info

pipeline:
    inputs:
        - name:  random
          samples: 5

    outputs:
        - name: tcp
          match: '*'
          host: 127.0.0.1
          port: 9090
          format: json_lines
          # Networking Setup
          net.dns.mode: TCP
          net.connect_timeout: 5
          net.source_address: 127.0.0.1
          net.keepalive: on
          net.keepalive_idle_timeout: 10
[SERVICE]
    flush     1
    log_level info

[INPUT]
    name      random
    samples   5

[OUTPUT]
    name      tcp
    match     *
    host      127.0.0.1
    port      9090
    format    json_lines
    # Networking Setup
    net.dns.mode                TCP
    net.connect_timeout         5
    net.source_address          127.0.0.1
    net.keepalive               on
    net.keepalive_idle_timeout  10

In another terminal, start nc and make it listen for messages on TCP port 9090:

nc -l 9090

Start Fluent Bit with the configuration file you defined previously to see data flowing to netcat:

$ nc -l 9090
{"date":1587769732.572266,"rand_value":9704012962543047466}
{"date":1587769733.572354,"rand_value":7609018546050096989}
{"date":1587769734.572388,"rand_value":17035865539257638950}
{"date":1587769735.572419,"rand_value":17086151440182975160}
{"date":1587769736.572277,"rand_value":527581343064950185}

If the net.keepalive option isn't enabled, Fluent Bit closes the TCP connection and netcat quits.

After the five records arrive, the connection idles. After 10 seconds, the connection closes due to net.keepalive_idle_timeout.

Kubernetes

Kubernetes Production Grade Log Processor

Fluent Bit is a lightweight and extensible log processor with full support for Kubernetes:

  • Process Kubernetes containers logs from the file system or Systemd/Journald.

  • Enrich logs with Kubernetes Metadata.

  • Centralize your logs in third party storage services like Elasticsearch, InfluxDB, HTTP, and so on.

Concepts

Before getting started, it's important to understand how Fluent Bit will be deployed. Kubernetes manages a cluster of nodes, and the Fluent Bit log agent needs to run on every node to collect logs from every pod. For this reason, Fluent Bit is deployed as a DaemonSet, which ensures a copy of the pod runs on every node of the cluster.

When Fluent Bit runs, it reads, parses, and filters the logs of every pod. In addition, Fluent Bit adds metadata to each entry using the Kubernetes filter plugin.

The Kubernetes filter plugin talks to the Kubernetes API Server to retrieve relevant information such as the pod_id, labels, and annotations. Other fields, such as pod_name, container_id, and container_name, are retrieved locally from the log file names. All of this is handled automatically, and no intervention is required from a configuration aspect.

Installation

Fluent Bit should be deployed as a DaemonSet, so it will be available on every node of your Kubernetes cluster.

The recommended way to deploy Fluent Bit for Kubernetes is with the official Helm Chart at https://github.com/fluent/helm-charts.

Note for OpenShift

If you are using Red Hat OpenShift you must set up Security Context Constraints (SCC) using the relevant option in the helm chart.

Installing with Helm Chart

Helm is a package manager for Kubernetes and lets you deploy application packages into your running cluster. Fluent Bit is distributed using a Helm chart found in the Fluent Helm Charts repository.

Use the following command to add the Fluent Helm charts repository

helm repo add fluent https://fluent.github.io/helm-charts

To validate that the repository was added, run helm search repo fluent to ensure the charts were added. The default chart can then be installed by running the following command:

helm upgrade --install fluent-bit fluent/fluent-bit

Default Values

The default chart values include configuration to read container logs with Docker parsing, apply Kubernetes metadata enrichment (including for systemd logs), and output to an Elasticsearch cluster. You can modify the included values file to specify additional outputs, health checks, monitoring endpoints, or other configuration options.

Details

The default configuration of Fluent Bit ensures the following:

  • Consume all container logs from the running node and parse them with either the docker or cri multi-line parser.

  • Persist its position in each file it's tailing, so if a pod is restarted it picks up from where it left off.

  • The Kubernetes filter adds Kubernetes metadata, specifically labels and annotations. The filter only contacts the API Server when it can't find the cached information; otherwise, it uses the cache.

  • The default backend in the configuration is Elasticsearch, set by the Elasticsearch output plugin. It uses the Logstash format to ingest the logs. If you need a different Index and Type, refer to the plugin option and update as needed.

  • There is an option called Retry_Limit, which is set to False. If Fluent Bit can't flush the records to Elasticsearch, it will retry indefinitely until it succeeds.

Windows deployment

Fluent Bit v1.5.0 and later supports deployment to Windows pods.

Log files overview

When deploying Fluent Bit to Kubernetes, there are three log files that you need to pay attention to.

  • C:\k\kubelet.err.log

    This is the error log file from kubelet daemon running on host. Retain this file for future troubleshooting, including debugging deployment failures.

  • C:\var\log\containers\<pod>_<namespace>_<container>-<docker>.log

    This is the main log file you need to watch. Configure Fluent Bit to follow this file. It's a symlink to the Docker log file in C:\ProgramData\, with some additional metadata on the file's name.

  • C:\ProgramData\Docker\containers\<docker>\<docker>.log

    This is the log file produced by Docker. Normally you don't directly read from this file, but you need to make sure that this file is visible from Fluent Bit.

Typically, your deployment YAML contains the following volume configuration.

spec:
  containers:
  - name: fluent-bit
    image: my-repo/fluent-bit:1.8.4
    volumeMounts:
    - mountPath: C:\k
      name: k
    - mountPath: C:\var\log
      name: varlog
    - mountPath: C:\ProgramData
      name: progdata
  volumes:
  - name: k
    hostPath:
      path: C:\k
  - name: varlog
    hostPath:
      path: C:\var\log
  - name: progdata
    hostPath:
      path: C:\ProgramData

Configure Fluent Bit

Assuming the basic volume configuration described previously, you can apply one of the following configurations to start logging:

parsers:
    - name: docker
      format: json
      time_key: time
      time_format: '%Y-%m-%dT%H:%M:%S.%L'
      time_keep: true
      
pipeline:
    inputs:
        - name: tail
          tag: kube.*
          path: 'C:\\var\\log\\containers\\*.log'
          parser: docker
          db: 'C:\\fluent-bit\\tail_docker.db'
          mem_buf_limit: 7MB
          refresh_interval: 10
          
        - name: tail
          tag: kube.error
          path: 'C:\\k\\kubelet.err.log'
          db: 'C:\\fluent-bit\\tail_kubelet.db'
          
    filters:
        - name: kubernetes
          match: kube.*
          kube_url: 'https://kubernetes.default.svc.cluster.local:443'
          
    outputs:
        - name: stdout
          match: '*'
fluent-bit.conf: |
    [SERVICE]
      Parsers_File      C:\\fluent-bit\\parsers.conf

    [INPUT]
      Name              tail
      Tag               kube.*
      Path              C:\\var\\log\\containers\\*.log
      Parser            docker
      DB                C:\\fluent-bit\\tail_docker.db
      Mem_Buf_Limit     7MB
      Refresh_Interval  10

    [INPUT]
      Name              tail
      Tag               kubelet.err
      Path              C:\\k\\kubelet.err.log
      DB                C:\\fluent-bit\\tail_kubelet.db

    [FILTER]
      Name              kubernetes
      Match             kube.*
      Kube_URL          https://kubernetes.default.svc.cluster.local:443

    [OUTPUT]
      Name  stdout
      Match *

parsers.conf: |
    [PARSER]
        Name         docker
        Format       json
        Time_Key     time
        Time_Format  %Y-%m-%dT%H:%M:%S.%L
        Time_Keep    On

Mitigate unstable network on Windows pods

Windows pods often lack working DNS immediately after boot (#78479). To mitigate this issue, filter_kubernetes provides a built-in mechanism to wait until the network starts up:

  • DNS_Retries: Retries N times until the network starts working. Default: 6.

  • DNS_Wait_Time: Interval, in seconds, between network status checks. Default: 30.

By default, Fluent Bit waits for three minutes (30 seconds x 6 times). If that's not enough, update the configuration as follows:

    filters:
        - name: kubernetes
          ...
          dns_retries: 10
          dns_wait_time: 30

[filter]
    Name kubernetes
    ...
    DNS_Retries 10
    DNS_Wait_Time 30



License

Fluent Bit license description

Fluent Bit, including its core, plugins, and tools, is distributed under the terms of the Apache License v2.0.

Validating your Data and Structure

Fluent Bit supports multiple sources and formats. In addition, it provides filters that you can use to perform custom modifications. As your pipeline grows, it's important to validate your data and structure.

Fluent Bit users are encouraged to integrate data validation in their continuous integration (CI) systems.

In a normal production environment, inputs, filters, and outputs are defined in configuration files. Fluent Bit provides the Expect filter, which you can use to validate keys and values from your records and take action when an exception is found.

A simplified view of the data processing pipeline is as follows:

Understand structure and configuration

Consider the following pipeline, which uses a JSON file as its data source and has two filters:

  • A grep filter to exclude certain records.

  • A record_modifier filter to alter records' content by adding and removing specific keys.

Add data validation between each step to ensure your data structure is correct.

This example uses the Expect filter.

Expect filters set rules aiming to validate criteria like:

  • Does the record contain key A?

  • Does the record not contain key A?

  • Does the key A value equal NULL?

  • Is the key A value not NULL?

  • Does the key A value equal B?

Every Expect filter configuration exposes rules to validate the content of your records.

Test the configuration

Consider a JSON file data.log with the following content:

The following files configure a pipeline to consume the log, while applying an Expect filter to validate that the keys color and label exist.

The following is the Fluent Bit YAML configuration file:
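A minimal sketch, assuming data.log contains JSON records with color and label keys and reusing the built-in json parser:

pipeline:
    inputs:
        - name: tail
          path: data.log
          parser: json
          exit_on_eof: true

    filters:
        - name: expect
          match: '*'
          key_exists: color
          action: exit

        - name: expect
          match: '*'
          key_exists: label
          action: exit

    outputs:
        - name: stdout
          match: '*'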

The following is the Fluent Bit YAML parsers file:

The following is the Fluent Bit classic configuration file:

The following is the Fluent Bit classic parsers file:

If the JSON parser fails or is missing in the input (parser json), the Expect filter triggers the exit action.

To extend the pipeline, add a grep filter to match records whose label map contains a key called name with the value abc, then add an Expect filter to re-validate that condition:

The following is the Fluent Bit YAML configuration file:
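A sketch of the two additional filters, assuming each record carries a label map with a name key:

pipeline:
    filters:
        - name: grep
          match: '*'
          regex: "$label['name'] abc"

        - name: expect
          match: '*'
          key_val_eq: "$label['name'] abc"
          action: exit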

Production deployment

When deploying in production, consider removing any Expect filters from your configuration file. These filters are unnecessary unless you need 100% coverage of checks at runtime.

HTTP

The HTTP input plugin lets Fluent Bit open an HTTP port that you can then route data to in a dynamic way.

Configuration parameters

Key
Description
Default

TLS / SSL

The HTTP input plugin supports TLS/SSL. For more details about the available properties and general configuration, refer to the Transport Security documentation.

gzipped content

The HTTP input plugin will accept and automatically handle gzipped content in version 2.2.1 or later if the header Content-Encoding: gzip is set on the received data.

Get started

This plugin supports dynamic tags, which let you send data with different tags through the same input. See the following sections for an example.

Set a tag

The tag for the HTTP input plugin is set by adding the tag to the end of the request URL. This tag is then used to route the event through the system.

For example, in the following curl message, the tag set is app.log because the end path is /app_log:
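A sketch of such a request, assuming the HTTP input is configured to listen on port 8888:

curl -d '{"key1": "value1", "key2": "value2"}' -XPOST -H "content-type: application/json" http://localhost:8888/app_log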

Configuration file

Configuration file http.0 example

If you don't set the tag, http.0 is automatically used. If you have multiple HTTP inputs, they follow the pattern http.N, where N is an integer representing the input instance.

Set tag_key

The tag_key configuration option lets you specify the key name that will be used to overwrite the tag. The tag's value will be replaced with the value associated with the specified key. For example, if tag_key is set to custom_tag and the log event contains a JSON field with the key custom_tag, Fluent Bit will use the value of that field as the new tag for routing the event through the system.

Curl request

Configuration file tag_key example

Set multiple custom HTTP headers on success

The success_header parameter lets you set multiple HTTP headers on success. The format is:

Example curl message

Configuration file example 3

Command line

Forward

Forward is the protocol used by Fluentd and Fluent Bit to route messages between peers. This plugin implements the input service to listen for Forward messages.

Configuration parameters

The plugin supports the following configuration parameters:

  • Listen: Listener network interface. Default: 0.0.0.0.

  • Port: TCP port to listen for incoming connections. Default: 24224.

  • Unix_Path: Specify the path to a Unix socket to receive Forward messages. If set, Listen and Port are ignored. Default: none.

  • Unix_Perm: Set the permission of the Unix socket file. If Unix_Path isn't set, this parameter is ignored. Default: none.

  • Buffer_Max_Size: Specify the maximum buffer memory size used to receive a Forward message. The value must conform to the Unit Size specification. Default: 6144000.

  • Buffer_Chunk_Size: By default, the buffer that stores incoming Forward messages doesn't allocate the maximum memory allowed up front; instead, it allocates memory as required, in rounds set by Buffer_Chunk_Size. The value must conform to the Unit Size specification. Default: 1024000.

  • Tag_Prefix: Prefix incoming tag with the defined value. Default: none.

  • Tag: Override the tag of the forwarded events with the defined value. Default: none.

  • Shared_Key: Shared key for secure forward authentication. Default: none.

  • Empty_Shared_Key: Use this option to connect to Fluentd with a zero-length shared key. Default: false.

  • Self_Hostname: Hostname for secure forward authentication. Default: none.

  • Security.Users: Specify the username and password pairs for secure forward authentication.

  • Threaded: Indicates whether to run this input in its own thread. Default: false.

Get started

To receive Forward messages, you can run the plugin from the command line or through the configuration file as shown in the following examples.

Command line

From the command line you can let Fluent Bit listen for Forward messages with the following options:

By default, the service listens on all interfaces (0.0.0.0) through TCP port 24224. You can change this by passing parameters to the command:

In the example, Forward messages arrive only through the network interface with address 192.168.3.2 and TCP port 9090.

Configuration file

In your main configuration file append the following:

Fluent Bit and Secure Forward Setup

In Fluent Bit v3 or later, in_forward can handle the secure forward protocol.

To use user-password authentication, specify at least one pair in security.users. To use a shared key, specify shared_key in both the forward output and the forward input. The self_hostname value can't be the same between the Fluent servers and clients.

Testing

After Fluent Bit is running, you can send some messages using the fluent-cat tool, provided by Fluentd:

When you run the plugin with the following command:

You should see the following output:

Pipeline

The pipeline section defines the flow of how data is collected, processed, and sent to its final destination. It encompasses the following core concepts:

  • inputs: Specifies the name of the plugin responsible for collecting or receiving data. This component serves as the data source in the pipeline. Examples of input plugins include tail, http, and random.

  • processors: Unique to YAML configuration, processors are specialized plugins that handle data processing directly attached to input plugins. Unlike filters, processors aren't dependent on tag or matching rules. Instead, they work closely with the input to modify or enrich the data before it reaches the filtering or output stages. Processors are defined within an input plugin section.

  • filters: Filters are used to transform, enrich, or discard events based on specific criteria. They allow matching tags using strings or regular expressions, providing a more flexible way to manipulate data. Filters run as part of the main event loop and can be applied across multiple inputs and filters. Examples of filters include modify, grep, and nest.

  • outputs: Defines the destination for processed data. Outputs specify where the data will be sent, such as to a remote server, a file, or another service. Each output plugin is configured with matching rules to determine which events are sent to that destination. Common output plugins include stdout, elasticsearch, and kafka.

Example configuration

Note: Processors can be enabled only by using the YAML configuration format. Classic mode configuration format doesn't support processors.

Here's an example of a pipeline configuration:

Pipeline processors

Processors operate on specific signals such as logs, metrics, and traces. They're attached to an input plugin and must specify the signal type they will process.

Example of a Processor

In the following example, the content_modifier processor inserts or updates (upserts) the key my_new_key with the value 123 for all log records generated by the tail plugin. This processor is only applied to log signals:

Here is a more complete example with multiple processors:

Processors can be attached to inputs and outputs.

How Processors are different from Filters

While processors and filters are similar in that they can transform, enrich, or drop data from the pipeline, there is a significant difference in how they operate:

  • Processors: Run in the same thread as the input plugin when the input plugin is configured to be threaded (threaded: true). This design provides better performance, especially in multi-threaded setups.

  • Filters: Run in the main event loop. When multiple filters are used, they can introduce performance overhead, particularly under heavy workloads.

Running Filters as Processors

You can configure existing filters to run as processors. There are no specific changes needed; you use the filter name as if it were a native processor.

Example of a Filter running as a Processor

In the following example, the grep filter is used as a processor to filter log events based on a pattern:

Head

The Head input plugin reads events from the head of a file. Its behavior is similar to the head command.

Configuration parameters

The plugin supports the following configuration parameters:

  • File: Absolute path to the target file. For example: /proc/uptime.

  • Buf_Size: Buffer size to read the file.

  • Interval_Sec: Polling interval (seconds).

  • Interval_NSec: Polling interval (nanoseconds).

  • Add_Path: If enabled, the path is appended to each record. Default: false.

  • Key: Rename a key. Default: head.

  • Lines: Line number to read. If the number N is set, in_head reads the first N lines, like head(1) -n.

  • Split_line: If enabled, in_head generates a key-value pair per line.

  • Threaded: Indicates whether to run this input in its own thread. Default: false.

Split line mode

Use this mode to get a specific line. The following example gets CPU frequency from /proc/cpuinfo.

/proc/cpuinfo is a special file to get CPU information.

The CPU frequency is reported in the line cpu MHz : 2791.009. The following configuration file gets the needed line:

If you run the following command:

The output is similar to:

Get started

To read the head of a file, you can run the plugin from the command line or through the configuration file.

Command line

The following example will read events from the /proc/uptime file, tag the records with the uptime name and flush them back to the stdout plugin:

The output will look similar to:

Configuration file

In your main configuration file append the following:

The interval is calculated like this:

Total interval (sec) = Interval_Sec + (Interval_Nsec / 1000000000).

For example: 1.5s = 1s + 500000000ns.
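For example, a 1.5 second polling interval can be expressed as follows (a sketch based on the formula above):

pipeline:
    inputs:
        - name: head
          tag: uptime
          file: /proc/uptime
          interval_sec: 1
          interval_nsec: 500000000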


processor    : 0
vendor_id    : GenuineIntel
cpu family   : 6
model        : 42
model name   : Intel(R) Core(TM) i7-2640M CPU @ 2.80GHz
stepping     : 7
microcode    : 41
cpu MHz      : 2791.009
cache size   : 4096 KB
physical id  : 0
siblings     : 1
pipeline:
    inputs:
        - name: head
          tag: head.cpu
          file: /proc/cpuinfo
          lines: 8
          split_line: true
          
    filters:
        - name: record_modifier
          match: '*'
          whitelist_key: line7
          
    outputs:
        - name: stdout
          match: '*'
[INPUT]
    Name           head
    Tag            head.cpu
    File           /proc/cpuinfo
    Lines          8
    Split_line     true
    # {"line0":"processor    : 0", "line1":"vendor_id    : GenuineIntel" ...}

[FILTER]
    Name           record_modifier
    Match          *
    Whitelist_key  line7

[OUTPUT]
    Name           stdout
    Match          *
fluent-bit -c head.conf
Fluent Bit v1.x.x
* Copyright (C) 2019-2020 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2017/06/26 22:38:24] [ info] [engine] started
[0] head.cpu: [1498484305.000279805, {"line7"=>"cpu MHz        : 2791.009"}]
[1] head.cpu: [1498484306.011680137, {"line7"=>"cpu MHz        : 2791.009"}]
[2] head.cpu: [1498484307.010042482, {"line7"=>"cpu MHz        : 2791.009"}]
[3] head.cpu: [1498484308.008447978, {"line7"=>"cpu MHz        : 2791.009"}]
fluent-bit -i head -t uptime -p File=/proc/uptime -o stdout -m '*'
Fluent Bit v1.x.x
* Copyright (C) 2019-2020 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2016/05/17 21:53:54] [ info] starting engine
[0] uptime: [1463543634, {"head"=>"133517.70 194870.97"}]
[1] uptime: [1463543635, {"head"=>"133518.70 194872.85"}]
[2] uptime: [1463543636, {"head"=>"133519.70 194876.63"}]
[3] uptime: [1463543637, {"head"=>"133520.70 194879.72"}]
pipeline:
    inputs:
        - name: head
          tag: uptime
          file: /proc/uptime
          buf_size: 256
          interval_sec: 1
          interval_nsec: 0
          
    outputs:
        - name: stdout
          match: '*'
[INPUT]
    Name          head
    Tag           uptime
    File          /proc/uptime
    Buf_Size      256
    Interval_Sec  1
    Interval_NSec 0

[OUTPUT]
    Name   stdout
    Match  *
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

curl -d '{"key1":"value1","key2":"value2"}' -XPOST -H "content-type: application/json" http://localhost:8888/app.log
pipeline:
    inputs:
        - name: http
          listen: 0.0.0.0
          port: 8888
          
    outputs:
        - name: stdout
          match: app.log
[INPUT]
    name http
    listen 0.0.0.0
    port 8888

[OUTPUT]
    name stdout
    match app.log
curl -d '{"key1":"value1","key2":"value2"}' -XPOST -H "content-type: application/json" http://localhost:8888
pipeline:
    inputs:
        - name: http
          listen: 0.0.0.0
          port: 8888

    outputs:
        - name: stdout
          match: http.0
[INPUT]
    name http
    listen 0.0.0.0
    port 8888

[OUTPUT]
    name  stdout
    match  http.0
curl -d '{"key1":"value1","key2":"value2"}' -XPOST -H "content-type: application/json" http://localhost:8888/app.log
pipeline:
    inputs:
        - name: http
          listen: 0.0.0.0
          port: 8888
          tag_key: key1

    outputs:
        - name: stdout
          match: value1
[INPUT]
    name http
    listen 0.0.0.0
    port 8888
    tag_key key1

[OUTPUT]
    name stdout
    match value1
pipeline:
    inputs:
        - name: http
          success_header: 
            - X-Custom custom-answer
            - X-Another another-answer
[INPUT]
    name http
    success_header X-Custom custom-answer
    success_header X-Another another-answer
curl -d @app.log -XPOST -H "content-type: application/json" http://localhost:8888/app.log
pipeline:
    inputs:
        - name: http
          listen: 0.0.0.0
          port: 8888

    outputs:
        - name: stdout
          match: '*'
[INPUT]
    name http
    listen 0.0.0.0
    port 8888

[OUTPUT]
    name stdout
    match *
fluent-bit -i http -p port=8888 -o stdout

fluent-bit -i forward -o stdout
fluent-bit -i forward -p listen="192.168.3.2" -p port=9090 -o stdout
pipeline:
    inputs:
        - name: forward
          listen: 0.0.0.0
          port: 24224
          buffer_chunk_size: 1M
          buffer_max_size: 6M
          
    outputs:
        - name: stdout
          match: '*'
[INPUT]
    Name              forward
    Listen            0.0.0.0
    Port              24224
    Buffer_Chunk_Size 1M
    Buffer_Max_Size   6M

[OUTPUT]
    Name   stdout
    Match  *
pipeline:
    inputs:
        - name: forward
          listen: 0.0.0.0
          port: 24224
          buffer_chunk_size: 1M
          buffer_max_size: 6M
          security.users: fluentbit changeme
          shared_key: secret
          self_hostname: flb.server.local
          
    outputs:
        - name: stdout
          match: '*'
[INPUT]
    Name              forward
    Listen            0.0.0.0
    Port              24224
    Buffer_Chunk_Size 1M
    Buffer_Max_Size   6M
    Security.Users fluentbit changeme
    Shared_Key secret
    Self_Hostname flb.server.local

[OUTPUT]
    Name   stdout
    Match  *
echo '{"key 1": 123456789, "key 2": "abcdefg"}' | fluent-cat my_tag
fluent-bit -i forward -o stdout
Fluent-Bit v0.9.0
Copyright (C) Treasure Data

[2016/10/07 21:49:40] [ info] [engine] started
[2016/10/07 21:49:40] [ info] [in_fw] binding 0.0.0.0:24224
[0] my_tag: [1475898594, {"key 1"=>123456789, "key 2"=>"abcdefg"}]

parsers:
    - name: json
      format: json

pipeline:
      inputs:
          - name: tail
            path: /var/log/example.log
            parser: json

            processors:
                logs:
                    - name: content_modifier
                      action: upsert
                      key: my_new_key
                      value: 123
  
      filters:
          - name: grep
            match: '*'
            regex: key pattern

      outputs:
          - name: stdout
            match: '*'
service:
    log_level: info
    http_server: on
    http_listen: 0.0.0.0
    http_port: 2021

pipeline:
    inputs:
        - name: random
          tag: test-tag
          interval_sec: 1
      
          processors:
              logs:
                  - name: modify
                    add: hostname monox
          
                  - name: lua
                    call: append_tag
                    code: |
                      function append_tag(tag, timestamp, record)
                          new_record = record
                          new_record["tag"] = tag
                          return 1, timestamp, new_record
                      end

    outputs:
        - name: stdout
          match: '*'
      
          processors:
              logs:
                  - name: lua
                    call: add_field
                    code: |
                      function add_field(tag, timestamp, record)
                          new_record = record
                          new_record["output"] = "new data"
                          return 1, timestamp, new_record
                      end
parsers:
    - name: json
      format: json

pipeline:
    inputs:
        - name: tail
          path: /var/log/example.log
          parser: json

          processors:
              logs:
                  - name: grep
                    regex: log aa
    outputs:
        - name: stdout
          match: '*'
pipeline:
    inputs:
        - name: tail
          path: /var/log/example.log
          parser: json

          processors:
              logs:
                  - name: record_modifier
                    
    filters:
        - name: grep
          match: '*'
          regex: key pattern

    outputs:
        - name: stdout
          match: '*'
{"color": "blue", "label": {"name": null}}
{"color": "red", "label": {"name": "abc"}, "meta": "data"}
{"color": "green", "label": {"name": "abc"}, "meta": null}
service:
    flush: 1
    log_level: info
    parsers_file: parsers.yaml

pipeline:
    inputs:
        - name: tail
          path: data.log
          parser: json
          exit_on_eof: on

    # First 'expect' filter to validate that our data was structured properly
    filters:
        - name: expect
          match: '*'
          key_exists: 
            - color
            - $label['name']
          action: exit

    outputs:
        - name: stdout
          match: '*'
parsers:
    - name: json
      format: json
[SERVICE]
    flush        1
    log_level    info
    parsers_file parsers.conf

[INPUT]
    name        tail
    path        ./data.log
    parser      json
    exit_on_eof on

# First 'expect' filter to validate that our data was structured properly
[FILTER]
    name        expect
    match       *
    key_exists  color
    key_exists  $label['name']
    action      exit

[OUTPUT]
    name        stdout
    match       *
[PARSER]
    Name json
    Format json
service:
    flush: 1
    log_level: info
    parsers_file: parsers.yaml

pipeline:
    inputs:
        - name: tail
          path: data.log
          parser: json
          exit_on_eof: on

    # First 'expect' filter to validate that our data was structured properly
    filters:
        - name: expect
          match: '*'
          key_exists: 
            - color
            - $label['name']
          action: exit
          
        # Match records that only contains map 'label' with key 'name' = 'abc'
        - name: grep
          match: '*'
          regex: "$label['name'] ^abc$"
          
        # Check that every record contains 'label' with a non-null value
        - name: expect
          match: '*'
          key_val_eq: $label['name'] abc
          action: exit

        # Append a new key to the record using an environment variable
        - name: record_modifier
          match: '*'
          record: hostname ${HOSTNAME}

        # Check that every record contains 'hostname' key
        - name: expect
          match: '*'
          key_exists: hostname
          action: exit

    outputs:
        - name: stdout
          match: '*'
[SERVICE]
    flush        1
    log_level    info
    parsers_file parsers.conf

[INPUT]
    name         tail
    path         ./data.log
    parser       json
    exit_on_eof  on

# First 'expect' filter to validate that our data was structured properly
[FILTER]
    name       expect
    match      *
    key_exists color
    key_exists label
    action     exit

# Match records that only contains map 'label' with key 'name' = 'abc'
[FILTER]
    name       grep
    match      *
    regex      $label['name'] ^abc$

# Check that every record contains 'label' with a non-null value
[FILTER]
    name       expect
    match      *
    key_val_eq $label['name'] abc
    action     exit

# Append a new key to the record using an environment variable
[FILTER]
    name       record_modifier
    match      *
    record     hostname ${HOSTNAME}

# Check that every record contains 'hostname' key
[FILTER]
    name       expect
    match      *
    key_exists hostname
    action     exit

[OUTPUT]
    name       stdout
    match      *

Backpressure

It's possible for logs or data to be ingested or created faster than the ability to flush it to some destinations. A common scenario is when reading from big log files, especially with a large backlog, and dispatching the logs to a backend over the network, which takes time to respond. This generates backpressure, leading to high memory consumption in the service.

To avoid backpressure, Fluent Bit implements a mechanism in the engine that restricts the amount of data an input plugin can ingest. Restriction is done through the configuration parameters Mem_Buf_Limit and storage.max_chunks_up.

As described in the Buffering concepts section, Fluent Bit offers two modes for data handling: in-memory only (default) and in-memory and filesystem (optional).

The default storage.type memory buffer can be restricted with Mem_Buf_Limit. If memory reaches this limit and you reach a backpressure scenario, you won't be able to ingest more data until the data chunks that are in memory can be flushed. The input pauses and Fluent Bit emits a [warn] [input] {input name or alias} paused (mem buf overlimit) log message.

Depending on the input plugin in use, this might cause incoming data to be discarded (for example, the TCP input plugin). The tail plugin can handle pauses without data loss, storing its current file offset and resuming reading later. When buffer memory is available, the input resumes accepting logs. Fluent Bit emits a [info] [input] {input name or alias} resume (mem buf overlimit) message.

Mitigate the risk of data loss by configuring secondary storage on the filesystem using the storage.type of filesystem (as described in Buffering & Storage). Initially, logs will be buffered to both memory and the filesystem. When the storage.max_chunks_up limit is reached, all new data will be stored in the filesystem. Fluent Bit stops queueing new data in memory and buffers only to the filesystem. When storage.type filesystem is set, the Mem_Buf_Limit setting no longer has any effect. Instead, the [SERVICE] level storage.max_chunks_up setting controls the size of the memory buffer.
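A minimal sketch of this setup, assuming a tail input; the paths used here are placeholders:

service:
    storage.path: /var/log/flb-storage/
    storage.max_chunks_up: 128

pipeline:
    inputs:
        - name: tail
          path: /var/log/example.log
          storage.type: filesystem

    outputs:
        - name: stdout
          match: '*'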

Mem_Buf_Limit

Mem_Buf_Limit applies only with the default storage.type memory. This option is disabled by default and can be applied to all input plugins.

As an example situation:

  • Mem_Buf_Limit is set to 1MB.

  • The input plugin tries to append 700 KB.

  • The engine routes the data to an output plugin.

  • The output plugin backend (HTTP Server) is down.

  • Engine scheduler retries the flush after 10 seconds.

  • The input plugin tries to append 500 KB.

In this situation, the engine allows appending those 500 KB of data into the memory, with a total of 1.2 MB of data buffered. The limit is permissive and will allow a single write past the limit. When the limit is exceeded, the following actions are taken:

  • Block local buffers for the input plugin (can't append more data).

  • Notify the input plugin, invoking a pause callback.

The engine protects itself and won't append more data coming from the input plugin in question. It's the responsibility of the plugin to keep state and decide what to do in a paused state.

In a few seconds, if the scheduler was able to flush the initial 700 KB of data or it has given up after retrying, that amount of memory is released and the following actions occur:

  • Upon data buffer release (700 KB), the internal counters get updated.

  • Counters now are set at 500 KB.

  • Because 500 KB is less than 1 MB, it checks the input plugin state.

  • If the plugin is paused, it invokes a resume callback.

  • The input plugin can continue appending more data.
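As a sketch, the 1 MB limit from this scenario could be applied to a hypothetical tail input like this:

pipeline:
    inputs:
        - name: tail
          path: /var/log/example.log
          mem_buf_limit: 1MB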

storage.max_chunks_up

The [SERVICE] level storage.max_chunks_up setting controls the size of the memory buffer. When storage.type filesystem is set, the Mem_Buf_Limit setting no longer has an effect.

The setting behaves similarly to the Mem_Buf_Limit scenario when the non-default storage.pause_on_chunks_overlimit is enabled.

When (default) storage.pause_on_chunks_overlimit is disabled, the input won't pause when the memory limit is reached. Instead, it switches to buffering logs only in the filesystem. Limit the disk space used for filesystem buffering with storage.total_limit_size.
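For example, the following sketch keeps the input buffering to the filesystem without pausing and caps the disk space used by the output's queued chunks (paths and sizes are illustrative):

pipeline:
    inputs:
        - name: tail
          path: /var/log/example.log
          storage.type: filesystem
          storage.pause_on_chunks_overlimit: off

    outputs:
        - name: stdout
          match: '*'
          storage.total_limit_size: 5G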

See Buffering & Storage docs for more information.

About pause and resume callbacks

Each plugin is independent and not all of them implement pause and resume callbacks. These callbacks are a notification mechanism for the plugin.

One example of a plugin that implements these callbacks and keeps state correctly is the Tail Input plugin. When the pause callback triggers, it pauses its collectors and stops appending data. Upon resume, it resumes the collectors and continues ingesting data. Tail tracks the current file offset when it pauses, and resumes at the same position. If the file hasn't been deleted or moved, it can still be read.

With the default storage.type memory and Mem_Buf_Limit, the following log messages are emitted for pause and resume:

[warn] [input] {input name or alias} paused (mem buf overlimit)
[info] [input] {input name or alias} resume (mem buf overlimit)

With storage.type filesystem and storage.max_chunks_up, the following log messages are emitted for pause and resume:

[input] {input name or alias} paused (storage buf overlimit)
[input] {input name or alias} resume (storage buf overlimit)

Configuration File

This page describes the main configuration file used by Fluent Bit.

One of the ways to configure Fluent Bit is using a main configuration file. Fluent Bit allows the use of one configuration file that works at a global scope and uses the defined Format and Schema.

The main configuration file supports four sections:

  • Service

  • Input

  • Filter

  • Output

It's also possible to split the main configuration file into multiple files using the Include File feature to include external files.

Service

The Service section defines global properties of the service. The following keys are available:

  • flush: Set the flush time in seconds.nanoseconds format. The engine loop uses a flush timeout to define when it's required to flush the records ingested by input plugins through the defined output plugins. Default: 1.

  • grace: Set the grace time in seconds as an integer value. The engine loop uses a grace timeout to define the wait time on exit. Default: 5.

  • daemon: Boolean. Determines whether Fluent Bit should run as a daemon (background). Allowed values are: yes, no, on, and off. Don't enable when using a Systemd based unit, such as the one provided in Fluent Bit packages. Default: Off.

  • dns.mode: Set the primary transport layer protocol used by the asynchronous DNS resolver. Can be overridden on a per plugin basis. Default: UDP.

  • log_file: Absolute path for an optional log file. By default all logs are redirected to the standard error interface (stderr). Default: none.

  • log_level: Set the logging verbosity level. Allowed values are: off, error, warn, info, debug, and trace. Values are cumulative. If debug is set, it will include error, warning, info, and debug. Trace mode is only available if Fluent Bit was built with the WITH_TRACE option enabled. Default: info.

  • parsers_file: Path for a parsers configuration file. Multiple Parsers_File entries can be defined within the section. Default: none.

  • plugins_file: Path for a plugins configuration file. A plugins configuration file defines paths for external plugins. Default: none.

  • streams_file: Path for the Stream Processor configuration file. Default: none.

  • http_server: Enable the built-in HTTP Server. Default: Off.

  • http_listen: Set the listening interface for the HTTP Server when it's enabled. Default: 0.0.0.0.

  • http_port: Set the TCP port for the HTTP Server. Default: 2020.

  • coro_stack_size: Set the coroutine stack size in bytes. The value must be greater than the page size of the running system. Setting the value too small (4096) can cause coroutine threads to overrun the stack buffer. The default value of this parameter shouldn't be changed. Default: 24576.

  • scheduler.cap: Set a maximum retry time in seconds. Supported in v1.8.7 and greater. Default: 2000.

  • scheduler.base: Set a base of exponential backoff. Supported in v1.8.7 and greater. Default: 5.

  • json.convert_nan_to_null: If enabled, NaN converts to null when Fluent Bit converts msgpack to JSON. Default: false.

  • sp.convert_from_str_to_num: If enabled, the Stream processor converts number strings to number types. Default: true.

The following is an example of a SERVICE section:

For scheduler and retry details, see scheduling and retries.

Config input

The INPUT section defines a source (related to an input plugin). Each input plugin can add its own configuration keys:

  • Name: Name of the input plugin.

  • Tag: Tag name associated to all records coming from this plugin.

  • Log_Level: Set the plugin's logging verbosity level. Allowed values are: off, error, warn, info, debug, and trace. Defaults to the SERVICE section's Log_Level.

Name is mandatory and tells Fluent Bit which input plugin to load. Tag is mandatory for all plugins except for the input forward plugin, which provides dynamic tags.

Example

The following is an example of an INPUT section:

Config filter

The FILTER section defines a filter (related to a filter plugin). Each filter plugin can add its own configuration keys. The base configuration for each FILTER section contains:

  • Name: Name of the filter plugin.

  • Match: A pattern to match against the tags of incoming records. Case sensitive, supports the asterisk (*) character as a wildcard.

  • Match_Regex: A regular expression to match against the tags of incoming records. Use this option if you want to use the full regular expression syntax.

  • Log_Level: Set the plugin's logging verbosity level. Allowed values are: off, error, warn, info, debug, and trace. Defaults to the SERVICE section's Log_Level.

Name is mandatory and lets Fluent Bit know which filter plugin should be loaded. Match or Match_Regex is mandatory for all plugins. If both are specified, Match_Regex takes precedence.

Filter example

The following is an example of a FILTER section:

Config output

The OUTPUT section specifies a destination that certain records should go to after a Tag match. Fluent Bit can route up to 256 OUTPUT plugins. The configuration supports the following keys:

  • Name: Name of the output plugin.

  • Match: A pattern to match against the tags of incoming records. Case sensitive and supports the asterisk (*) character as a wildcard.

  • Match_Regex: A regular expression to match against the tags of incoming records. Use this option if you want to use the full regular expression syntax.

  • Log_Level: Set the plugin's logging verbosity level. Allowed values are: off, error, warn, info, debug, and trace. Defaults to the SERVICE section's Log_Level.

Output example

The following is an example of an OUTPUT section:

Example: collecting CPU metrics

The following configuration file example demonstrates how to collect CPU metrics and flush the results every five seconds to the standard output:

Config Include File

To avoid complicated, long configuration files, it's better to split specific parts into different files and call them (include) from one main file. The @INCLUDE command can be used in the following way:

The configuration reader will try to open the path somefile.conf. If not found, the reader assumes the file is on a relative path based on the path of the base configuration file:

  • Main configuration path: /tmp/main.conf

  • Included file: somefile.conf

  • Fluent Bit will try to open somefile.conf, if it fails it will try /tmp/somefile.conf.

The @INCLUDE command only works at the top level of the configuration and can't be used inside sections.

The wildcard character (*) can be used to include multiple files. For example:

Files matching the wildcard character are included unsorted. If plugin ordering between files needs to be preserved, the files should be included explicitly.

Kafka

The Kafka input plugin enables Fluent Bit to consume messages directly from one or more topics. By subscribing to specified topics, this plugin efficiently collects and forwards Kafka messages for further processing within your Fluent Bit pipeline.

Starting with version 4.0.4, the Kafka input plugin supports authentication with AWS MSK IAM, enabling integration with Amazon MSK (Managed Streaming for Apache Kafka) clusters that require IAM-based access.

This plugin uses the official librdkafka C library as a built-in dependency.

Configuration parameters

  • brokers: Single or multiple list of Kafka brokers. For example: 192.168.1.3:9092, 192.168.1.4:9092. Default: none.

  • topics: Single entry or list of comma-separated topics (,) that Fluent Bit will subscribe to. Default: none.

  • format: Serialization format of the messages. If set to json, the payload will be parsed as JSON. Default: none.

  • client_id: Client id passed to librdkafka. Default: none.

  • group_id: Group id passed to librdkafka. Default: fluent-bit.

  • poll_ms: Kafka brokers polling interval in milliseconds. Default: 500.

  • Buffer_Max_Size: Specify the maximum size of buffer per cycle to poll Kafka messages from subscribed topics. To increase throughput, specify a larger size. Default: 4M.

  • rdkafka.{property}: {property} can be any librdkafka property. Default: none.

  • threaded: Indicates whether to run this input in its own thread. Default: false.

Get started

To subscribe to or collect messages from Apache Kafka, run the plugin from the command line or through the configuration file as shown below.

Command line

The Kafka plugin can read parameters through the -p argument (property):

Configuration file (recommended)

In your main configuration file append the following:

Example of using Kafka input and output plugins

The Fluent Bit source repository contains a full example of using Fluent Bit to process Kafka records:

The previous example will connect to the broker listening on kafka-broker:9092 and subscribe to the fb-source topic, polling for new messages every 100 milliseconds.

Since the payload will be in JSON format, the plugin is configured to parse the payload with format json.

Every message received is then processed with kafka.lua and sent back to the fb-sink topic of the same broker.

The example can be executed locally with make start in the examples/kafka_filter directory (Docker Compose is used).

AWS MSK IAM Authentication

Available since Fluent Bit v4.0.4

Fluent Bit supports authentication to Amazon MSK (Managed Streaming for Apache Kafka) clusters using AWS IAM. This allows you to securely connect to MSK brokers with AWS credentials, leveraging IAM roles and policies for access control.

Prerequisites

Build Requirements

If you are compiling Fluent Bit from source, ensure the following requirements are met to enable AWS MSK IAM support:

  • The packages libsasl2 and libsasl2-dev must be installed on your build environment.

Runtime Requirements

  • Network Access: Fluent Bit must be able to reach your MSK broker endpoints (AWS VPC setup).

  • AWS Credentials: Provide credentials using any supported AWS method:

    • IAM roles (recommended for EC2, ECS, or EKS)

    • Environment variables (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)

    • AWS credentials file (~/.aws/credentials)

    • Instance metadata service (IMDS)

    Note: these credentials are discovered by default when the aws_msk_iam flag is enabled.

  • IAM Permissions: The credentials must allow access to the target MSK cluster (see example policy below).

Configuration Parameters

  • aws_msk_iam: Enable AWS MSK IAM authentication. Type: Boolean. Required: No (default: false).

  • aws_msk_iam_cluster_arn: Full ARN of the MSK cluster for region extraction. Type: String. Required: Yes (if aws_msk_iam is true).

Configuration Example

Example AWS IAM Policy

Note: IAM policies and permissions can be complex and may vary depending on your organization's security requirements. If you are unsure about the correct permissions or best practices, please consult with your AWS administrator or an AWS expert who is familiar with MSK and IAM security.

The AWS credentials used by Fluent Bit must have permission to connect to your MSK cluster. Here is a minimal example policy:


fluent-bit -i kafka -o stdout -p brokers=192.168.1.3:9092 -p topics=some-topic
pipeline:
    inputs:
        - name: kafka
          brokers: 192.168.1.3:9092
          topics: some-topic
          poll_ms: 100

    outputs:
        - name: stdout
          match: '*'
[INPUT]
    Name        kafka
    Brokers     192.168.1.3:9092
    Topics      some-topic
    poll_ms     100

[OUTPUT]
    Name        stdout
    Match       *
pipeline:
    inputs:
        - name: kafka
          brokers: kafka-broker:9092
          topics: fb-source
          poll_ms: 100
          format: json

    filters:
        - name: lua
          match: '*'
          script: kafka.lua
          call: modify_kafka_message

    outputs:
        - name: kafka
          brokers: kafka-broker:9092
          topics: fb-sink
[INPUT]
    Name kafka
    brokers kafka-broker:9092
    topics fb-source
    poll_ms 100
    format json

[FILTER]
    Name    lua
    Match   *
    script  kafka.lua
    call    modify_kafka_message

[OUTPUT]
    Name kafka
    brokers kafka-broker:9092
    topics fb-sink


pipeline:
  inputs:
    - name: kafka
      brokers: my-cluster.abcdef.c1.kafka.us-east-1.amazonaws.com:9098
      topics: my-topic
      aws_msk_iam: true
      aws_msk_iam_cluster_arn: arn:aws:kafka:us-east-1:123456789012:cluster/my-cluster/abcdef-1234-5678-9012-abcdefghijkl-s3

  outputs:
    - name: stdout
      match: '*'
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "kafka-cluster:*",
                "kafka-cluster:DescribeCluster",
                "kafka-cluster:ReadData",
                "kafka-cluster:DescribeTopic",
                "kafka-cluster:Connect"
            ],
            "Resource": "*"
        }
    ]
}

[SERVICE]
    Flush           5
    Daemon          off
    Log_Level       debug


[INPUT]
    Name cpu
    Tag  my_cpu


[FILTER]
    Name  grep
    Match *
    Regex log aa


[OUTPUT]
    Name  stdout
    Match my*cpu
[SERVICE]
    Flush     5
    Daemon    off
    Log_Level debug

[INPUT]
    Name  cpu
    Tag   my_cpu

[OUTPUT]
    Name  stdout
    Match my*cpu
@INCLUDE somefile.conf
@INCLUDE input_*.conf

Transport Security

Fluent Bit provides integrated support for Transport Layer Security (TLS) and its predecessor Secure Sockets Layer (SSL). This section refers only to TLS for both implementations.

Both input and output plugins that perform network I/O can optionally enable TLS and configure its behavior. The following properties are available:

  • tls: Enable or disable TLS support. Default: Off.

  • tls.verify: Force certificate validation. Default: On.

  • tls.verify_hostname: Force TLS verification of host names. Default: Off.

  • tls.debug: Set TLS debug verbosity level. Accepted values: 0 (No debug), 1 (Error), 2 (State change), 3 (Informational), and 4 (Verbose). Default: 1.

  • tls.ca_file: Absolute path to CA certificate file. Default: none.

  • tls.ca_path: Absolute path to scan for certificate files. Default: none.

  • tls.crt_file: Absolute path to certificate file. Default: none.

  • tls.key_file: Absolute path to private key file. Default: none.

  • tls.key_passwd: Optional password for the tls.key_file file. Default: none.

  • tls.vhost: Hostname to be used for the TLS SNI extension. Default: none.


To use TLS on input plugins, you must provide both a certificate and a private key.

The listed properties can be enabled in the configuration file, specifically in each input or output plugin section, or directly through the command line.

The following output plugins can take advantage of the TLS feature:

  • Amazon S3

  • Apache SkyWalking

  • Azure

  • Azure Blob

  • Azure Data Explorer (Kusto)

  • Azure Logs Ingestion API

  • BigQuery

  • Dash0

  • Datadog

  • Elasticsearch

  • Forward

  • GELF

  • Google Chronicle

  • HTTP

  • InfluxDB

  • Kafka REST Proxy

  • LogDNA

  • Loki

  • New Relic

  • OpenSearch

  • OpenTelemetry

  • Oracle Cloud Infrastructure Logging Analytics

  • Prometheus Remote Write

  • Slack

  • Splunk

  • Stackdriver

  • Syslog

  • TCP & TLS

  • Treasure Data

  • WebSocket

The following input plugins can take advantage of the TLS feature:

  • Docker Events

  • Elasticsearch (Bulk API)

  • Forward

  • Health

  • HTTP

  • Kubernetes Events

  • MQTT

  • NGINX Exporter Metrics

  • OpenTelemetry

  • Prometheus Scrape Metrics

  • Prometheus Remote Write

  • Splunk (HTTP HEC)

  • Syslog

  • TCP

In addition, other plugins implement a subset of TLS support, with restricted configuration:

  • Kubernetes Filter

Example: enable TLS on HTTP input

By default, the HTTP input plugin uses plain TCP. Run the following command to enable TLS:

./bin/fluent-bit -i http \
           -p port=9999 \
           -p tls=on \
           -p tls.verify=off \
           -p tls.crt_file=self_signed.crt \
           -p tls.key_file=self_signed.key \
           -o stdout \
           -m '*'

See the Tips and Tricks section below for details on generating the self_signed.crt and self_signed.key files shown in these examples.

In the previous command, the two properties tls and tls.verify are set for demonstration purposes. Always enable verification in production environments.

The same behavior can be accomplished using a configuration file:

pipeline:
    inputs:
      - name: http
        port: 9999
        tls: on
        tls.verify: off
        tls.crt_file: self_signed.crt
        tls.key_file: self_signed.key

    outputs:
      - name: stdout
        match: '*'
[INPUT]
    name http
    port 9999
    tls on
    tls.verify off
    tls.crt_file self_signed.crt
    tls.key_file self_signed.key

[OUTPUT]
    Name       stdout
    Match      *

Example: enable TLS on HTTP output

By default, the HTTP output plugin uses plain TCP. Run the following command to enable TLS:

fluent-bit -i cpu -t cpu -o http://192.168.2.3:80/something \
    -p tls=on         \
    -p tls.verify=off \
    -m '*'

In the previous command, the properties tls and tls.verify are enabled for demonstration purposes. Always enable verification in production environments.

The same behavior can be accomplished using a configuration file:

pipeline:
    inputs:
      - name: cpu
        tag: cpu

    outputs:
      - name: http
        match: '*'
        host: 192.168.2.3
        port: 80
        uri: /something
        tls: on
        tls.verify: off
[INPUT]
    Name  cpu
    Tag   cpu

[OUTPUT]
    Name       http
    Match      *
    Host       192.168.2.3
    Port       80
    URI        /something
    tls        On
    tls.verify Off

Tips and Tricks

Generate self-signed certificates for testing purposes

The following command generates a 4096-bit RSA key pair and a certificate signed using SHA-256, with the expiration date set to 30 days in the future. In this example, test.host.net is set as the common name. This example opts out of DES, so the private key is stored in plain text.

openssl req -x509 \
            -newkey rsa:4096 \
            -sha256 \
            -nodes \
            -keyout self_signed.key \
            -out self_signed.crt \
            -subj "/CN=test.host.net"

Connect to virtual servers using TLS

Fluent Bit supports TLS server name indication. If you are serving multiple host names on a single IP address (for example, using virtual hosting), you can make use of tls.vhost to connect to a specific hostname.

pipeline:
    inputs:
      - name: cpu
        tag: cpu

    outputs:
      - name: forward
        match: '*'
        host: 192.168.10.100
        port: 24224
        tls: on
        tls.verify: off
        tls.ca_file: '/etc/certs/fluent.crt'
        tls.vhost: 'fluent.example.com'
[INPUT]
    Name  cpu
    Tag   cpu

[OUTPUT]
    Name        forward
    Match       *
    Host        192.168.10.100
    Port        24224
    tls         On
    tls.verify  On
    tls.ca_file /etc/certs/fluent.crt
    tls.vhost   fluent.example.com

Verify subjectAltName

By default, TLS verification of host names isn't done automatically. As an example, you can extract the X509v3 Subject Alternative Name from a certificate:
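One way to inspect this field is with openssl (the certificate path is a placeholder):

openssl x509 -in /path/to/certificate.crt -noout -text | grep -A 1 "Subject Alternative Name"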

X509v3 Subject Alternative Name:
    DNS:my.fluent-aggregator.net

This certificate covers only my.fluent-aggregator.net, so if you use a different hostname the connection should fail.

To fully verify the alternative name and demonstrate the failure, enable tls.verify_hostname:

pipeline:
    inputs:
      - name: cpu
        tag: cpu

    outputs:
      - name: forward
        match: '*'
        host: other.fluent-aggregator.net
        port: 24224
        tls: on
        tls.verify: on
        tls.verify_hostname: on
        tls.ca_file: '/path/to/fluent-x509v3-alt-name.crt'
[INPUT]
    Name  cpu
    Tag   cpu

[OUTPUT]
    Name                forward
    Match               *
    Host                other.fluent-aggregator.net
    Port                24224
    tls                 On
    tls.verify          On
    tls.verify_hostname on
    tls.ca_file         /path/to/fluent-x509v3-alt-name.crt

This outgoing connection will fail and disconnect:

[2024/06/17 16:51:31] [error] [tls] error: unexpected EOF with reason: certificate verify failed
[2024/06/17 16:51:31] [debug] [upstream] connection #50 failed to other.fluent-aggregator.net:24224
[2024/06/17 16:51:31] [error] [output:forward:forward.0] no upstream connections available

Buffering and Storage

Fluent Bit collects, parses, filters, and ships logs to a central place. A critical piece of this workflow is the ability to do buffering: a mechanism to place processed data into a temporary location until it's ready to be shipped.

By default, when Fluent Bit processes data, it uses memory as a primary and temporary place to store the records. There are scenarios where it would be ideal to have a persistent buffering mechanism based on the filesystem to provide aggregation and data safety capabilities.

Choosing the right configuration is critical, and the behavior of the service can be conditioned by the backpressure settings. Before jumping into the configuration, it helps to understand the relationship between chunks, memory, filesystem, and backpressure.

Chunks, memory, filesystem, and backpressure

Understanding chunks, buffering, and backpressure is critical for a proper configuration.

Backpressure

See Backpressure for a full explanation.

Chunks

When an input plugin source emits records, the engine groups the records together in a chunk. A chunk's size usually is around 2 MB. By configuration, the engine decides where to place this chunk. By default, all chunks are created only in memory.

Irrecoverable chunks

There are two scenarios where Fluent Bit marks chunks as irrecoverable:

  • When Fluent Bit encounters a bad layout in a chunk. A bad layout is a chunk that doesn't conform to the expected chunk definition format.

  • When Fluent Bit encounters an incorrect or invalid chunk header size.

In both scenarios Fluent Bit logs an error message and then discards the irrecoverable chunks.

Buffering and memory

As mentioned previously, chunks generated by the engine are placed in memory by default, but this is configurable.

If memory is the only mechanism set for the input plugin, it will store as much data as possible in memory. This is the fastest mechanism with the least system overhead. However, if the service isn't able to deliver the records fast enough, Fluent Bit memory usage increases as it accumulates more data than it can deliver.

In a high load environment with backpressure, having high memory usage risks getting killed by the kernel's OOM Killer. To work around this backpressure scenario, limit the amount of memory in records that an input plugin can register using the mem_buf_limit property. If a plugin has queued more than the mem_buf_limit, it won't be able to ingest more until that data can be delivered or flushed properly. In this scenario the input plugin in question is paused. When the input is paused, records won't be ingested until the plugin resumes. For some inputs, such as TCP and tail, pausing the input will almost certainly lead to log loss. For the tail input, Fluent Bit can save its current offset in the file it's reading and pick back up when the input resumes.

Look for messages in the Fluent Bit log output like:

[input] tail.1 paused (mem buf overlimit)
[input] tail.1 resume (mem buf overlimit)

Using mem_buf_limit is good for certain scenarios and environments. It helps to control the memory usage of the service. However, if a file rotates while the plugin is paused, data can be lost since the plugin won't be able to register new records. This can happen with any input source plugin. The goal of mem_buf_limit is memory control and survival of the service.

For a full data safety guarantee, use filesystem buffering.

Choose your preferred format for an example input definition:

pipeline:
    inputs:
        - name: tcp
          listen: 0.0.0.0
          port: 5170
          format: none
          tag: tcp-logs
          mem_buf_limit: 50MB
[INPUT]
    Name          tcp
    Listen        0.0.0.0
    Port          5170
    Format        none
    Tag           tcp-logs
    Mem_Buf_Limit 50MB

If this input uses more than 50 MB memory to buffer logs, you will get a warning like this in the Fluent Bit logs:

[input] tcp.1 paused (mem buf overlimit)

mem_buf_limit applies only when storage.type is set to the default value of memory.

Filesystem buffering

Filesystem buffering helps with backpressure and overall memory control. Enable it using storage.type filesystem.

Memory and filesystem buffering mechanisms aren't mutually exclusive. Enabling filesystem buffering for your input plugin source can improve both performance and data safety.

Enabling filesystem buffering changes the behavior of the engine. Upon chunk creation, the engine stores the content in memory and also maps a copy on disk through mmap(2). The newly created chunk is active in memory, backed up on disk, and is referred to as up, which means the chunk content is up in memory.

Fluent Bit controls the number of chunks that are up in memory by using the filesystem buffering mechanism to deal with high memory usage and backpressure.

By default, the engine allows a total of 128 chunks up in memory, considering all chunks. This value is controlled by the service property storage.max_chunks_up. The active chunks that are up are ready for delivery and are still receiving records. Any other remaining chunk is in a down state, which means it's only in the filesystem and won't be up in memory unless it's ready to be delivered. Chunks are never much larger than 2 MB, so with the default storage.max_chunks_up value of 128, each input is limited to roughly 256 MB of memory.

If the input plugin has enabled storage.type as filesystem, when reaching the storage.max_chunks_up threshold, instead of the plugin being paused, all new data goes to chunks that are down in the filesystem. This lets you control memory usage by the service and also provides a guarantee that the service won't lose any data. By default, enforcement of the storage.max_chunks_up limit is best-effort. Fluent Bit can only append new data to chunks that are up, so when the limit is reached, chunks are temporarily brought up in memory to ingest new data and then put back to a down state. In general, Fluent Bit works to keep the total number of up chunks at or below storage.max_chunks_up.

If storage.pause_on_chunks_overlimit is enabled (default is off), the input plugin pauses upon exceeding storage.max_chunks_up. With this option, storage.max_chunks_up becomes a hard limit for the input. When the input is paused, records won't be ingested until the plugin resumes. For some inputs, such as TCP and tail, pausing the input will almost certainly lead to log loss. For the tail input, Fluent Bit can save its current offset in the file it's reading and pick back up when the input is resumed.

Look for messages in the Fluent Bit log output like:

[input] tail.1 paused (storage buf overlimit)
[input] tail.1 resume (storage buf overlimit)
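As a minimal sketch of that hard-limit behavior (the tail input and its path are hypothetical), the following input buffers on the filesystem and pauses, instead of spilling to down chunks, once it exceeds storage.max_chunks_up:

service:
    storage.path: /var/log/flb-storage/
    storage.max_chunks_up: 128

pipeline:
    inputs:
        - name: tail
          path: /var/log/app/*.log
          storage.type: filesystem
          storage.pause_on_chunks_overlimit: on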

Limiting filesystem space for chunks

Fluent Bit implements the concept of logical queues. Based on its tag, a chunk can be routed to multiple destinations. Fluent Bit keeps an internal reference of where a chunk was created and where it needs to go.

It's common to find cases where multiple destinations with different response times exist for a chunk, or one of the destinations is generating backpressure.

To limit the amount of filesystem chunks logically queueing, Fluent Bit v1.6 and later includes the storage.total_limit_size configuration property for output plugins. This property limits the total size in bytes of chunks that can exist in the filesystem for a given logical output destination. If a destination reaches its configured storage.total_limit_size, the oldest chunk from its queue for that logical output destination is discarded to make room for new data.

Configuration

The storage layer configuration takes place in three sections:

  • Service

  • Input

  • Output

The Service section configures the global environment for the storage layer, the Input sections define which buffering mechanism to use, and the Output sections define limits for the logical filesystem queues.

Service section configuration

The Service section refers to the section defined in the main configuration file:

storage.path
Set an optional location in the file system to store streams and chunks of data. If this parameter isn't set, input plugins can only use in-memory buffering. Default: none.

storage.sync
Configure the synchronization mode used to store the data in the file system. Using full increases the reliability of the filesystem buffer and ensures that data is guaranteed to be synced to the filesystem even if Fluent Bit crashes. On Linux, full corresponds to the MAP_SYNC option for memory mapped files. Accepted values: normal, full. Default: normal.

storage.checksum
Enable the data integrity check when writing and reading data from the filesystem. The storage layer uses the CRC32 algorithm. Accepted values: Off, On. Default: Off.

storage.max_chunks_up
If the input plugin has enabled filesystem storage type, this property sets the maximum number of chunks that can be up in memory. Use this setting to control memory usage when you enable storage.type filesystem. Default: 128.

storage.backlog.mem_limit
If storage.path is set, Fluent Bit looks for data chunks that weren't delivered and are still in the storage layer. These are called backlog data. Backlog chunks are filesystem chunks left over from a previous Fluent Bit run; chunks that couldn't be sent before exit and that Fluent Bit picks up when restarted. Fluent Bit checks the storage.backlog.mem_limit value against the current memory usage from all up chunks for the input. If the up chunks currently consume less memory than the limit, it brings the backlog chunks up into memory so they can be sent by outputs. Default: 5M.

storage.backlog.flush_on_shutdown
When enabled, Fluent Bit attempts to flush all backlog filesystem chunks to their destinations during the shutdown process. This can help ensure data delivery before Fluent Bit stops, but may increase shutdown time. Accepted values: Off, On. Default: Off.

storage.metrics
If the http_server option is enabled in the main [SERVICE] section, this option registers a new endpoint where internal metrics of the storage layer can be consumed. For more details refer to the Monitoring section. Default: off.

storage.delete_irrecoverable_chunks
When enabled, irrecoverable chunks are deleted during runtime, and any other irrecoverable chunk located in the configured storage path directory is deleted when Fluent Bit starts. Accepted values: Off, On. Default: Off.

A Service section will look like this:

service:
    flush: 1
    log_level: info
    storage.path: /var/log/flb-storage/
    storage.sync: normal
    storage.checksum: off
    storage.backlog.mem_limit: 5M
    storage.backlog.flush_on_shutdown: off
[SERVICE]
    flush                     1
    log_level                 info
    storage.path              /var/log/flb-storage/
    storage.sync              normal
    storage.checksum          off
    storage.backlog.mem_limit 5M
    storage.backlog.flush_on_shutdown off

This configuration sets an optional buffering mechanism where data is stored under /var/log/flb-storage/. It uses normal synchronization mode, doesn't run a checksum, and uses up to a maximum of 5 MB of memory when processing backlog data.

Input section configuration

Optionally, any input plugin can configure its storage preference. The following options are available:

storage.type
Specifies the buffering mechanism to use. Accepted values: memory, filesystem. Default: memory.

storage.pause_on_chunks_overlimit
Specifies whether the input plugin should pause (stop ingesting new data) when the storage.max_chunks_up value is reached. Default: off.

The following example configures a service offering filesystem buffering capabilities and two input plugins: the first uses filesystem buffering and the second memory only.

service:
    flush: 1
    log_level: info
    storage.path: /var/log/flb-storage/
    storage.sync: normal
    storage.checksum: off
    storage.max_chunks_up: 128
    storage.backlog.mem_limit: 5M

pipeline:
    inputs:
        - name: cpu
          storage.type: filesystem

        - name: mem
          storage.type: memory
[SERVICE]
    flush                     1
    log_level                 info
    storage.path              /var/log/flb-storage/
    storage.sync              normal
    storage.checksum          off
    storage.max_chunks_up     128
    storage.backlog.mem_limit 5M

[INPUT]
    name          cpu
    storage.type  filesystem

[INPUT]
    name          mem
    storage.type  memory

Output section configuration

If certain chunks are filesystem storage.type based, it's possible to control the size of the logical queue for an output plugin. The following option is available:

storage.total_limit_size
Limit the maximum disk space size in bytes for buffering chunks in the filesystem for the current output logical destination. Default: none.

The following example creates records with CPU usage samples in the filesystem, which are delivered to the Google Stackdriver service while limiting the logical queue (buffering) to 5 MB:

service:
    flush: 1
    log_level: info
    storage.path: /var/log/flb-storage/
    storage.sync: normal
    storage.checksum: off
    storage.max_chunks_up: 128
    storage.backlog.mem_limit: 5M

pipeline:
    inputs:
        - name: cpu
          storage.type: filesystem

    outputs:
        - name: stackdriver
          match: '*'
          storage.total_limit_size: 5M
[SERVICE]
    flush                     1
    log_level                 info
    storage.path              /var/log/flb-storage/
    storage.sync              normal
    storage.checksum          off
    storage.max_chunks_up     128
    storage.backlog.mem_limit 5M

[INPUT]
    name                      cpu
    storage.type              filesystem

[OUTPUT]
    name                      stackdriver
    match                     *
    storage.total_limit_size  5M

If Fluent Bit is offline because of a network issue, it will continue buffering CPU samples, keeping a maximum of 5 MB of the newest data.

Exec

The Exec input plugin lets you execute external programs and collect their output as log events.

This plugin invokes commands using a shell. Its inputs are subject to shell metacharacter substitution. Careless use of untrusted input in command arguments could lead to malicious command execution.

Container support

This plugin needs a functional /bin/sh and won't function in the distroless production images.

The debug images use the same binaries, so even though they have a shell, this plugin is unsupported because it's compiled out.

Configuration parameters

The plugin supports the following configuration parameters:

Command
The command to execute, passed to popen(3) without any additional escaping or processing. Can include pipelines, redirection, command substitution, and other shell features.

Parser
Specify the name of a parser to interpret the entry as a structured message.

Interval_Sec
Polling interval (seconds).

Interval_NSec
Polling interval (nanoseconds).

Buf_Size
Size of the buffer. See unit sizes for allowed values.

Oneshot
Only run the command once at startup. This allows collection of data that precedes Fluent Bit startup. Boolean, default: false.

Exit_After_Oneshot
Exit as soon as the one-shot command exits. This allows the exec plugin to be used as a wrapper for another command, sending the target command's output to any Fluent Bit sink and then exiting. Boolean, default: false.

Propagate_Exit_Code
When exiting due to Exit_After_Oneshot, cause Fluent Bit to exit with the exit code of the command run by this plugin. Follows shell conventions for exit code propagation. Boolean, default: false.

Threaded
Indicates whether to run this input in its own thread. Default: false.

Get started

You can run the plugin from the command line or through the configuration file:

Command line

The following example will read events from the output of ls.

fluent-bit -i exec -p 'command=ls /var/log' -o stdout

This command should return something like the following:

Fluent Bit v1.x.x
* Copyright (C) 2019-2020 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2018/03/21 17:46:49] [ info] [engine] started
[0] exec.0: [1521622010.013470159, {"exec"=>"ConsoleKit"}]
[1] exec.0: [1521622010.013490313, {"exec"=>"Xorg.0.log"}]
[2] exec.0: [1521622010.013492079, {"exec"=>"Xorg.0.log.old"}]
[3] exec.0: [1521622010.013493443, {"exec"=>"anaconda.ifcfg.log"}]
[4] exec.0: [1521622010.013494707, {"exec"=>"anaconda.log"}]
[5] exec.0: [1521622010.013496016, {"exec"=>"anaconda.program.log"}]
[6] exec.0: [1521622010.013497225, {"exec"=>"anaconda.storage.log"}]

Configuration file

In your main configuration file, append the following:

pipeline:
    inputs:
        - name: exec
          tag: exec_ls
          command: ls /var/log
          interval_sec: 1
          interval_nsec: 0
          buf_size: 8mb
          oneshot: false

    outputs:
        - name: stdout
          match: '*'
[INPUT]
    Name          exec
    Tag           exec_ls
    Command       ls /var/log
    Interval_Sec  1
    Interval_NSec 0
    Buf_Size      8mb
    Oneshot       false

[OUTPUT]
    Name   stdout
    Match  *

Use as a command wrapper

To use Fluent Bit with the exec plugin to wrap another command, use the Exit_After_Oneshot and Propagate_Exit_Code options:

pipeline:
    inputs:
        - name: exec
          tag: exec_oneshot_demo
          command: 'for s in $(seq 1 10); do echo "count: $s"; sleep 1; done; exit 1'
          oneshot: true
          exit_after_oneshot: true
          propagate_exit_code: true

    outputs:
        - name: stdout
          match: '*'
[INPUT]
    Name                exec
    Tag                 exec_oneshot_demo
    Command             for s in $(seq 1 10); do echo "count: $s"; sleep 1; done; exit 1
    Oneshot             true
    Exit_After_Oneshot  true
    Propagate_Exit_Code true

[OUTPUT]
    Name   stdout
    Match  *

Fluent Bit will output:

[0] exec_oneshot_demo: [[1681702172.950574027, {}], {"exec"=>"count: 1"}]
[1] exec_oneshot_demo: [[1681702173.951663666, {}], {"exec"=>"count: 2"}]
[2] exec_oneshot_demo: [[1681702174.953873724, {}], {"exec"=>"count: 3"}]
[3] exec_oneshot_demo: [[1681702175.955760865, {}], {"exec"=>"count: 4"}]
[4] exec_oneshot_demo: [[1681702176.956840282, {}], {"exec"=>"count: 5"}]
[5] exec_oneshot_demo: [[1681702177.958292246, {}], {"exec"=>"count: 6"}]
[6] exec_oneshot_demo: [[1681702178.959508200, {}], {"exec"=>"count: 7"}]
[7] exec_oneshot_demo: [[1681702179.961715745, {}], {"exec"=>"count: 8"}]
[8] exec_oneshot_demo: [[1681702180.963924140, {}], {"exec"=>"count: 9"}]
[9] exec_oneshot_demo: [[1681702181.965852990, {}], {"exec"=>"count: 10"}]

Fluent Bit then exits with exit code 1.

Translation of command exit codes to the Fluent Bit exit code follows the usual shell rules for exit code handling. As with a shell, there is no way to differentiate between the command exiting on a signal and the shell exiting on a signal. Similarly, there is no way to differentiate between normal exits with codes greater than 125 and abnormal or signal exits reported by Fluent Bit or the shell. Wrapped commands should use exit codes between 0 and 125 inclusive to allow reliable identification of a normal exit. If the command is a pipeline, the exit code is the exit code of the last command in the pipeline unless overridden by shell options.
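For example, this hypothetical invocation wraps a command that exits with code 7, which falls in the reliable 0-125 range, so Fluent Bit itself should exit with code 7:

$ fluent-bit -i exec \
    -p 'command=exit 7' \
    -p oneshot=true \
    -p exit_after_oneshot=true \
    -p propagate_exit_code=true \
    -o null -m '*'
$ echo $?
7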

Parsing command output

By default, the exec plugin emits one message per command output line, with a single field exec containing the full message. Use the Parser directive to specify the name of a parser configuration to use to process the command output.
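As a minimal sketch (the parser name, regex, and echoed sample are illustrative, and the parsers section could equally live in a separate file included via parsers_file), a parser can split each output line into named fields instead of a single exec key:

parsers:
    - name: exec_kv_demo
      format: regex
      regex: '/^(?<key>\S+)\s+(?<value>\S+)$/'

pipeline:
    inputs:
        - name: exec
          tag: exec_parsed
          command: 'echo "cpu 42"'
          parser: exec_kv_demo
          interval_sec: 1

    outputs:
        - name: stdout
          match: '*'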

Security concerns

Take great care with shell quoting and escaping when wrapping commands.

A script like the following can ruin your day if someone passes it the argument $(rm -rf /my/important/files; echo "deleted your stuff!"):

#!/bin/bash
# This is a DANGEROUS example of what NOT to do, NEVER DO THIS
exec fluent-bit \
  -o stdout \
  -i exec \
  -p exit_after_oneshot=true \
  -p propagate_exit_code=true \
  -p command='myscript $*'

The previous script would be safer if written with:

  -p command='echo '"$(printf '%q' "$@")" \

It's generally best to avoid dynamically generating the command or handling untrusted arguments.

Windows

Fluent Bit is distributed as the fluent-bit package for Windows and as a Windows container on Docker Hub. Fluent Bit provides two Windows installers: a ZIP archive and an EXE installer.

Not all plugins are supported on Windows. The CMake configuration shows the default set of supported plugins.

Configuration

Provide a valid Windows configuration with the installation.

The following configuration is an example:
[SERVICE]
    # Flush
    # =====
    # set an interval of seconds before to flush records to a destination
    flush        5

    # Daemon
    # ======
    # instruct Fluent Bit to run in foreground or background mode.
    daemon       Off

    # Log_Level
    # =========
    # Set the verbosity level of the service, values can be:
    #
    # - error
    # - warning
    # - info
    # - debug
    # - trace
    #
    # by default 'info' is set, that means it includes 'error' and 'warning'.
    log_level    info

    # Parsers File
    # ============
    # specify an optional 'Parsers' configuration file
    parsers_file parsers.conf

    # Plugins File
    # ============
    # specify an optional 'Plugins' configuration file to load external plugins.
    plugins_file plugins.conf

    # HTTP Server
    # ===========
    # Enable/Disable the built-in HTTP Server for metrics
    http_server  Off
    http_listen  0.0.0.0
    http_port    2020

    # Storage
    # =======
    # Fluent Bit can use memory and filesystem buffering based mechanisms
    #
    # - https://docs.fluentbit.io/manual/administration/buffering-and-storage
    #
    # storage metrics
    # ---------------
    # publish storage pipeline metrics in '/api/v1/storage'. The metrics are
    # exported only if the 'http_server' option is enabled.
    #
    storage.metrics on

[INPUT]
    Name         winlog
    Channels     Setup,Windows PowerShell
    Interval_Sec 1

[OUTPUT]
    name  stdout
    match *

Migration to Fluent Bit

For version 1.9 and later, td-agent-bit is a deprecated package and was removed after 1.9.9. The correct package name to use now is fluent-bit.

Installation packages

The latest stable version is 4.0.4. Each version is available from the following download URLs.

Installers and their SHA256 checksums:

fluent-bit-4.0.4-win32.exe
0e1db952930f8ba47cb88622197343f49803e33a25a4983cba215651f763d350

fluent-bit-4.0.4-win32.zip
f2c3e5e92de41deca2b556ddb36762f5fcc9bcef4f2e050acb4908adeead95bd

fluent-bit-4.0.4-win64.exe
9db8237f3e04205cc4a14c3856b82324dd0fe7c65877a17a21bef65e5156bf3f

fluent-bit-4.0.4-win64.zip
054d074f66b6b96d732fe5c4bd2aadc2fdb0019b1872ab0f298aaa5e84e01ea9

fluent-bit-4.0.4-winarm64.exe
c70efad14418d7c5fb361581260cb82a1475b8196b35c3554aa5497eafb7e3ef

fluent-bit-4.0.4-winarm64.zip
d6819f25005b4e0148ac06802e299d16991f65155b164b7a25b0a0ae0a8b5228

These installers are built with GitHub Actions. Legacy AppVeyor builds are still available (AMD 32/64 only) at releases.fluentbit.io but are deprecated.

MSI installers are also available:
fluent-bit-4.0.4-win32.msi
fluent-bit-4.0.4-win64.msi
fluent-bit-4.0.4-winarm64.msi

To check the integrity, use the Get-FileHash cmdlet for PowerShell.
PS> Get-FileHash fluent-bit-4.0.4-win32.exe

Installing from a ZIP archive

  1. Download a ZIP archive. Choose the suitable installer for your 32-bit or 64-bit environment.

  2. Expand the ZIP archive. You can do this by clicking Extract All in Explorer or Expand-Archive in PowerShell.
PS> Expand-Archive fluent-bit-4.0.4-win64.zip

    The ZIP package contains the following set of files.
fluent-bit
├── bin
│   ├── fluent-bit.dll
│   ├── fluent-bit.exe
│   └── fluent-bit.pdb
├── conf
│   ├── fluent-bit.conf
│   ├── parsers.conf
│   └── plugins.conf
└── include
    ├── flb_api.h
    ├── ...
    ├── flb_worker.h
    └── fluent-bit.h

  3. Launch cmd.exe or PowerShell on your machine, and execute fluent-bit.exe:
PS> .\bin\fluent-bit.exe -i dummy -o stdout

The following output indicates Fluent Bit is running:
Fluent Bit v2.0.x
* Copyright (C) 2019-2020 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2019/06/28 10:13:04] [ info] [storage] initializing...
[2019/06/28 10:13:04] [ info] [storage] in-memory
[2019/06/28 10:13:04] [ info] [storage] normal synchronization mode, checksum disabled, max_chunks_up=128
[2019/06/28 10:13:04] [ info] [engine] started (pid=10324)
[2019/06/28 10:13:04] [ info] [sp] stream processor started
[0] dummy.0: [1561684385.443823800, {"message"=>"dummy"}]
[1] dummy.0: [1561684386.428399000, {"message"=>"dummy"}]
[2] dummy.0: [1561684387.443641900, {"message"=>"dummy"}]
[3] dummy.0: [1561684388.441405800, {"message"=>"dummy"}]

To halt the process, press Control+C in the terminal.

Installing from the EXE installer

  1. Download an EXE installer for the appropriate 32-bit or 64-bit build.

  2. Double-click the EXE installer you've downloaded. The installation wizard starts.

  3. Click Next and finish the installation. By default, Fluent Bit is installed in C:\Program Files\fluent-bit\.

You should be able to launch Fluent Bit using the following PowerShell command:
PS> C:\Program Files\fluent-bit\bin\fluent-bit.exe -i dummy -o stdout

Installer options

The Windows installer is built by CPack using NSIS, and supports the default NSIS options for silent installation and for choosing the install directory.

To silently install to the C:\fluent-bit directory, run:
PS> <installer exe> /S /D=C:\fluent-bit

The uninstaller also supports a silent uninstall using the same /S flag. This can be used for provisioning with automation like Ansible, Puppet, and so on.
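For example, assuming the default install location and uninstaller file name (both may vary by version), a silent uninstall could look like:

PS> & 'C:\Program Files\fluent-bit\Uninstall.exe' /S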

Windows service support

Windows services are equivalent to daemons in UNIX (long-running background processes). For v1.5.0 and later, Fluent Bit has native support for Windows services.

For example, assume you have the following installation layout:
C:\fluent-bit\
├── conf
│   ├── fluent-bit.conf
│   ├── parsers.conf
│   └── plugins.conf
└── bin
    ├── fluent-bit.dll
    ├── fluent-bit.exe
    └── fluent-bit.pdb

To register Fluent Bit as a Windows service, execute the following command at a command prompt. A single space is required after binpath=.
sc.exe create fluent-bit binpath= "\fluent-bit\bin\fluent-bit.exe -c \fluent-bit\conf\fluent-bit.conf"

Fluent Bit can be started and managed as a normal Windows service.
% sc.exe start fluent-bit
% sc.exe query fluent-bit
SERVICE_NAME: fluent-bit
    TYPE               : 10  WIN32_OWN_PROCESS
    STATE              : 4 Running
    ...

To halt the Fluent Bit service, use the stop command.
sc.exe stop fluent-bit

To start Fluent Bit automatically on boot, execute the following:
sc.exe config fluent-bit start= auto

FAQs

Fluent Bit fails to start up when installed under C:\Program Files

Quotations are required if file paths contain spaces. For example:
sc.exe create fluent-bit binpath= "\"C:\Program Files\fluent-bit\bin\fluent-bit.exe\" -c \"C:\Program Files\fluent-bit\conf\fluent-bit.conf\""

Can you manage Fluent Bit service using PowerShell?

Instead of sc.exe, PowerShell can be used to manage Windows services.

Create a Fluent Bit service:
PS> New-Service fluent-bit -BinaryPathName "`"C:\Program Files\fluent-bit\bin\fluent-bit.exe`" -c `"C:\Program Files\fluent-bit\conf\fluent-bit.conf`"" -StartupType Automatic -Description "This service runs Fluent Bit, a log collector that enables real-time processing and delivery of log data to centralized logging systems."

Start the service:
PS> Start-Service fluent-bit

Query the service status:
PS> Get-Service fluent-bit | Format-List
Name                : fluent-bit
DisplayName         : fluent-bit
Status              : Running
DependentServices   : {}
ServicesDependedOn  : {}
CanPauseAndContinue : False
CanShutdown         : False
CanStop             : True
ServiceType         : Win32OwnProcess

Stop the service:
PS> Stop-Service fluent-bit

Remove the service (requires PowerShell 6.0 or later):
PS> Remove-Service fluent-bit

Compile from Source

If you need to create a custom executable, use the following procedure to compile Fluent Bit by yourself.

Preparation

  1. Install Microsoft Visual C++ to compile Fluent Bit. You can install the minimum toolkit using the following command:
PS> wget -o vs.exe https://aka.ms/vs/16/release/vs_buildtools.exe
PS> start vs.exe

  2. Choose C++ Build Tools and C++ CMake tools for Windows and wait until the process finishes.

  3. Install flex and bison. One way to install them on Windows is to use winflexbison:
PS> wget -o winflexbison.zip https://github.com/lexxmark/winflexbison/releases/download/v2.5.22/win_flex_bison-2.5.22.zip
PS> Expand-Archive winflexbison.zip -Destination C:\WinFlexBison
PS> cp -Path C:\WinFlexBison\win_bison.exe C:\WinFlexBison\bison.exe
PS> cp -Path C:\WinFlexBison\win_flex.exe C:\WinFlexBison\flex.exe

  4. Add the path C:\WinFlexBison to your system's Path environment variable.

  5. Install OpenSSL binaries, at least the library files and headers.

  6. Install Git to pull the source code from the repository:
PS> wget -o git.exe https://github.com/git-for-windows/git/releases/download/v2.28.0.windows.1/Git-2.28.0-64-bit.exe
PS> start git.exe

Compilation

  1. Open the Start menu on Windows and type "Command Prompt for VS". From the result list, select the one that corresponds to your target system (x86 or x64).

  2. Verify the installed OpenSSL library files match the selected target. You can examine the library files by using the dumpbin command with the /headers option.

  3. Clone the source code of Fluent Bit:
% git clone https://github.com/fluent/fluent-bit

  4. Compile the source code:
% cd fluent-bit/build
% cmake .. -G "NMake Makefiles"
% cmake --build .

Now you should be able to run Fluent Bit:
.\bin\debug\fluent-bit.exe -i dummy -o stdout

Packaging

To create a ZIP package, call cpack as follows:
cpack -G ZIP


Multiline Parsing

In an ideal world, applications would log each message on a single line, but in reality applications generate log messages that span multiple lines and belong to the same context. Processing this information can be complex, as with application stack traces, which always span multiple log lines.

Fluent Bit v1.8 implemented a unified Multiline core capability to solve corner cases.

Concepts

The Multiline parser engine exposes two ways to configure and use the feature:

  • Built-in multiline parser

  • Configurable multiline parser

Built-in multiline parsers

Fluent Bit exposes certain pre-configured parsers (built-in) to solve specific multiline parser cases. For example:

docker
Process a log entry generated by a Docker container engine. This parser supports the concatenation of log entries split by Docker.

cri
Process a log entry generated by the CRI-O container engine. Like the docker parser, it supports concatenation of log entries.

go
Process log entries generated by a Go-based application and perform concatenation if multiline messages are detected.

python
Process log entries generated by a Python-based application and perform concatenation if multiline messages are detected.

java
Process log entries generated by a Google Cloud Java application and perform concatenation if multiline messages are detected.

Configurable multiline parsers

You can define your own Multiline parsers with their own rules, using a configuration file.

A multiline parser is defined in a parsers configuration file by using a [MULTILINE_PARSER] section definition. The multiline parser must have a unique name and a type, plus other configured properties associated with each type.

To understand which multiline parser type is required for your use case, you need to know the conditions in the content that determine the beginning of a multiline message and the continuation of subsequent lines. Fluent Bit provides a regular expression-based configuration that supports states to handle the most common cases.

name
Specify a unique name for the multiline parser definition. A good practice is to prefix the name with multiline_ to avoid confusion with normal parser definitions. Default: none.

type
Set the multiline mode. Fluent Bit supports the type regex. Default: none.

parser
Name of a pre-defined parser that must be applied to the incoming content before applying the regular expression rule. If no parser is defined, it's assumed that the content is raw text and not a structured message. When a parser is applied to raw text, the regular expression is applied against a specific key of the structured message by using the key_content configuration property. Default: none.

key_content
For an incoming structured message, specify the key that contains the data that should be processed by the regular expression and possibly concatenated. Default: none.

flush_timeout
Timeout in milliseconds to flush a non-terminated multiline buffer. Default: 5s.

rule
Configure a rule to match a multiline pattern. The rule has a specific format, described below. Multiple rules can be defined. Default: none.

Lines and states

Before configuring your parser you need to know the answer to the following questions:

  1. What's the regular expression (regex) that matches the first line of a multiline message?

  2. What are the regular expressions (regex) that match the continuation lines of a multiline message?

When matching a regular expression, you must define states. Some states define the start of a multiline message, while others are states for the continuation of multiline messages. You can have multiple continuation state definitions to solve complex cases.

The first regular expression that matches the start of a multiline message is called start_state. Other regular expression continuation lines can have different state names.

Rules definition

A rule specifies how to match a multiline pattern and perform the concatenation. A rule is defined by three components:

  • state name

  • regular expression pattern

  • next state

A rule might be defined as follows (comments added to simplify the definition) in corresponding YAML and classic configuration examples below:

# rules |   state name  | regex pattern                  | next state
# ------|---------------|--------------------------------------------
rules:
  - state: start_state
    regex: '/([a-zA-Z]+ \d+ \d+\:\d+\:\d+)(.*)/'
    next_state:  cont
  - state: cont
    regex: '/^\s+at.*/'
    next_state: cont
# rules   |   state name   | regex pattern                   | next state
# --------|----------------|---------------------------------------------
rule         "start_state"   "/([a-zA-Z]+ \d+ \d+\:\d+\:\d+)(.*)/"   "cont"
rule         "cont"          "/^\s+at.*/"                      "cont"

This example defines two rules. Each rule has its own state name, regex pattern, and next state name. In the classic configuration format, every field that composes a rule must be inside double quotes.

The first rule of a state name must be start_state. The regex pattern must match the first line of a multiline message, and a next state must be set to specify what the possible continuation lines look like.

To simplify the configuration of regular expressions, you can use the Rubular web site to test the regex described in the previous example against a log line that matches the pattern.

Configuration example

The following example provides a full Fluent Bit configuration file for multiline parsing by using the definition explained previously. It is provided in corresponding YAML and classic configuration examples below:

This is the primary Fluent Bit YAML configuration file. It includes the parsers_multiline.yaml file and tails the file test.log by applying the multiline parser multiline-regex-test. Then it sends the processed output to the standard output.

service:
    flush: 1
    log_level: info
    parsers_file: parsers_multiline.yaml

pipeline:
    inputs:
      - name: tail
        path: test.log
        read_from_head: true
        multiline.parser: multiline-regex-test

    outputs:
      - name: stdout
        match: '*'

This is the primary Fluent Bit classic configuration file. It includes the parsers_multiline.conf file and tails the file test.log by applying the multiline parser multiline-regex-test. Then it sends the processed output to the standard output.

[SERVICE]
    flush        1
    log_level    info
    parsers_file parsers_multiline.conf

[INPUT]
    name             tail
    path             test.log
    read_from_head   true
    multiline.parser multiline-regex-test

[OUTPUT]
    name             stdout
    match            *

This file defines a multiline parser for the YAML configuration example.

multiline_parsers:
    - name: multiline-regex-test
      type: regex
      flush_timeout: 1000
      #
      # Regex rules for multiline parsing
      # ---------------------------------
      #
      # configuration hints:
      #
      #  - first state always has the name: start_state
      #  - every field in the rule must be inside double quotes
      #
      # rules |   state name  | regex pattern                  | next state
      # ------|---------------|--------------------------------------------
      rules:
        - state: start_state
          regex: '/([a-zA-Z]+ \d+ \d+\:\d+\:\d+)(.*)/'
          next_state: cont
        - state: cont
          regex: '/^\s+at.*/'
          next_state: cont

This second file defines a multiline parser for the classic configuration example.

[MULTILINE_PARSER]
    name          multiline-regex-test
    type          regex
    flush_timeout 1000
    #
    # Regex rules for multiline parsing
    # ---------------------------------
    #
    # configuration hints:
    #
    #  - first state always has the name: start_state
    #  - every field in the rule must be inside double quotes
    #
    # rules |   state name  | regex pattern                  | next state
    # ------|---------------|--------------------------------------------
    rule      "start_state"   "/([a-zA-Z]+ \d+ \d+\:\d+\:\d+)(.*)/"  "cont"
    rule      "cont"          "/^\s+at.*/"                     "cont"

The example log file with multiline content:

single line...
Dec 14 06:41:08 Exception in thread "main" java.lang.RuntimeException: Something has gone wrong, aborting!
    at com.myproject.module.MyProject.badMethod(MyProject.java:22)
    at com.myproject.module.MyProject.oneMoreMethod(MyProject.java:18)
    at com.myproject.module.MyProject.anotherMethod(MyProject.java:14)
    at com.myproject.module.MyProject.someMethod(MyProject.java:10)
    at com.myproject.module.MyProject.main(MyProject.java:6)
another line...

By running Fluent Bit with the corresponding configuration file you will obtain the following output:

# For YAML configuration.
$ ./fluent-bit --config fluent-bit.yaml

# For classic configuration.
$ ./fluent-bit --config fluent-bit.conf

...
[0] tail.0: [[1750332967.679671000, {}], {"log"=>"single line...
"}]
[1] tail.0: [[1750332967.679677000, {}], {"log"=>"Dec 14 06:41:08 Exception in thread "main" java.lang.RuntimeException: Something has gone wrong, aborting!
    at com.myproject.module.MyProject.badMethod(MyProject.java:22)
    at com.myproject.module.MyProject.oneMoreMethod(MyProject.java:18)
    at com.myproject.module.MyProject.anotherMethod(MyProject.java:14)
    at com.myproject.module.MyProject.someMethod(MyProject.java:10)
    at com.myproject.module.MyProject.main(MyProject.java:6)
"}]
[2] tail.0: [[1750332967.679677000, {}], {"log"=>"another line...
"}]

The lines that didn't match a pattern aren't considered part of the multiline message, while the ones that matched the rules were concatenated properly.

Limitations

The multiline parser is a very powerful feature, but it has some limitations that you should be aware of:

  • The multiline parser isn't affected by the buffer_max_size configuration option, allowing the composed log record to grow beyond this size. The skip_long_lines option won't be applied to multiline messages.

  • It's not possible to get the time key from the body of the multiline message. However, it can be extracted and set as a new key by using a filter, as demonstrated in the next section.

Get structured data from multiline message

Fluent Bit supports the /pat/m regex option, which allows . (dot) to match a new line. This can be used to parse multiline logs.

The following example retrieves date and message from concatenated logs.

Example files content:

This is the primary Fluent Bit YAML configuration file. It includes the parsers_multiline.yaml file and tails the file test.log by applying the multiline parser multiline-regex-test. It also parses the concatenated log by applying the parser named-capture-test. Then it sends the processed output to the standard output.

service:
    flush: 1
    log_level: info
    parsers_file: parsers_multiline.yaml

pipeline:
    inputs:
      - name: tail
        path: test.log
        read_from_head: true
        multiline.parser: multiline-regex-test

    filters:
      - name: parser
        match: '*'
        key_name: log
        parser: named-capture-test

    outputs:
      - name: stdout
        match: '*'

This is the primary Fluent Bit classic configuration file. It includes the parsers_multiline.conf file and tails the file test.log by applying the multiline parser multiline-regex-test. It also parses the concatenated log by applying the parser named-capture-test. Then it sends the processed output to the standard output.

[SERVICE]
    flush        1
    log_level    info
    parsers_file parsers_multiline.conf

[INPUT]
    name             tail
    path             test.log
    read_from_head   true
    multiline.parser multiline-regex-test

[FILTER]
    name             parser
    match            *
    key_name         log
    parser           named-capture-test

[OUTPUT]
    name             stdout
    match            *

This file defines a multiline parser for the YAML example.

multiline_parsers:
    - name: multiline-regex-test
      type: regex
      flush_timeout: 1000
      #
      # Regex rules for multiline parsing
      # ---------------------------------
      #
      # configuration hints:
      #
      #  - first state always has the name: start_state
      #  - every field in the rule must be inside double quotes
      #
      # rules |   state name  | regex pattern                  | next state
      # ------|---------------|--------------------------------------------
      rules:
        - state: start_state
          regex: '/([a-zA-Z]+ \d+ \d+\:\d+\:\d+)(.*)/'
          next_state:  cont
        - state: cont
          regex: '/^\s+at.*/'
          next_state: cont

parsers:
    - name: named-capture-test
      format: regex
      regex: '/^(?<date>[a-zA-Z]+ \d+ \d+\:\d+\:\d+) (?<message>.*)/m'

This file defines a multiline parser for the classic example.

[MULTILINE_PARSER]
    name          multiline-regex-test
    type          regex
    flush_timeout 1000
    #
    # Regex rules for multiline parsing
    # ---------------------------------
    #
    # configuration hints:
    #
    #  - first state always has the name: start_state
    #  - every field in the rule must be inside double quotes
    #
    # rules |   state name  | regex pattern                  | next state
    # ------|---------------|--------------------------------------------
    rule      "start_state"   "/([a-zA-Z]+ \d+ \d+\:\d+\:\d+)(.*)/"  "cont"
    rule      "cont"          "/^\s+at.*/"                     "cont"

[PARSER]
    Name named-capture-test
    Format regex
    Regex /^(?<date>[a-zA-Z]+ \d+ \d+\:\d+\:\d+) (?<message>.*)/m

The example log file with multiline content:

single line...
Dec 14 06:41:08 Exception in thread "main" java.lang.RuntimeException: Something has gone wrong, aborting!
    at com.myproject.module.MyProject.badMethod(MyProject.java:22)
    at com.myproject.module.MyProject.oneMoreMethod(MyProject.java:18)
    at com.myproject.module.MyProject.anotherMethod(MyProject.java:14)
    at com.myproject.module.MyProject.someMethod(MyProject.java:10)
    at com.myproject.module.MyProject.main(MyProject.java:6)
another line...

By running Fluent Bit with the corresponding configuration file you will obtain:

# For YAML configuration.
$ ./fluent-bit --config fluent-bit.yaml

# For classic configuration
$ ./fluent-bit --config fluent-bit.conf

[0] tail.0: [[1750333602.460984000, {}], {"log"=>"single line...
"}]
[1] tail.0: [[1750333602.460998000, {}], {"date"=>"Dec 14 06:41:08", "message"=>"Exception in thread "main" java.lang.RuntimeException: Something has gone wrong, aborting!
    at com.myproject.module.MyProject.badMethod(MyProject.java:22)
    at com.myproject.module.MyProject.oneMoreMethod(MyProject.java:18)
    at com.myproject.module.MyProject.anotherMethod(MyProject.java:14)
    at com.myproject.module.MyProject.someMethod(MyProject.java:10)
    at com.myproject.module.MyProject.main(MyProject.java:6)
"}]
[2] tail.0: [[1750333602.460998000, {}], {"log"=>"another line...
"}]

Troubleshooting

  • Tap: generate events or records

  • Dump internals signal

Tap

Tap can be used to generate events or records detailing what messages pass through Fluent Bit, at what time and what filters affect them.

Basic Tap example

Ensure that the container image supports Fluent Bit Tap (available in Fluent Bit 2.0+):

$ docker run --rm -ti fluent/fluent-bit:latest --help | grep trace
  -Z, --enable-chunk-trace  enable chunk tracing, it can be activated either through the http api or the command line
  --trace-input           input to start tracing on startup.
  --trace-output          output to use for tracing on startup.
  --trace-output-property set a property for output tracing on startup.
  --trace                 setup a trace pipeline on startup. Uses a single line, ie: "input=dummy.0 output=stdout output.format='json'"

If the --enable-chunk-trace option is present, your Fluent Bit version supports Fluent Bit Tap, but it's disabled by default. Use this option to enable it.

You can start Fluent Bit with tracing activated from the beginning by using the --trace-input and --trace-output options:

$ fluent-bit -Z -i dummy -o stdout -f 1 --trace-input=dummy.0 --trace-output=stdout
Fluent Bit v2.1.8
* Copyright (C) 2015-2022 The Fluent Bit Authors
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2023/07/21 16:27:01] [ info] [fluent bit] version=2.1.8, commit=824ba3dd08, pid=622937
[2023/07/21 16:27:01] [ info] [storage] ver=1.4.0, type=memory, sync=normal, checksum=off, max_chunks_up=128
[2023/07/21 16:27:01] [ info] [cmetrics] version=0.6.3
[2023/07/21 16:27:01] [ info] [ctraces ] version=0.3.1
[2023/07/21 16:27:01] [ info] [input:dummy:dummy.0] initializing
[2023/07/21 16:27:01] [ info] [input:dummy:dummy.0] storage_strategy='memory' (memory only)
[2023/07/21 16:27:01] [ info] [sp] stream processor started
[2023/07/21 16:27:01] [ info] [output:stdout:stdout.0] worker #0 started
[2023/07/21 16:27:01] [ info] [fluent bit] version=2.1.8, commit=824ba3dd08, pid=622937
[2023/07/21 16:27:01] [ info] [storage] ver=1.4.0, type=memory, sync=normal, checksum=off, max_chunks_up=128
[2023/07/21 16:27:01] [ info] [cmetrics] version=0.6.3
[2023/07/21 16:27:01] [ info] [ctraces ] version=0.3.1
[2023/07/21 16:27:01] [ info] [input:emitter:trace-emitter] initializing
[2023/07/21 16:27:01] [ info] [input:emitter:trace-emitter] storage_strategy='memory' (memory only)
[2023/07/21 16:27:01] [ info] [sp] stream processor started
[2023/07/21 16:27:01] [ info] [output:stdout:stdout.0] worker #0 started
.[0] dummy.0: [[1689971222.068537501, {}], {"message"=>"dummy"}]
[0] dummy.0: [[1689971223.068556121, {}], {"message"=>"dummy"}]
[0] trace: [[1689971222.068677045, {}], {"type"=>1, "trace_id"=>"0", "plugin_instance"=>"dummy.0", "records"=>[{"timestamp"=>1689971222, "record"=>{"message"=>"dummy"}}], "start_time"=>1689971222, "end_time"=>1689971222}]
[1] trace: [[1689971222.068735577, {}], {"type"=>3, "trace_id"=>"0", "plugin_instance"=>"dummy.0", "records"=>[{"timestamp"=>1689971222, "record"=>{"message"=>"dummy"}}], "start_time"=>1689971222, "end_time"=>1689971222}]
[0] dummy.0: [[1689971224.068586317, {}], {"message"=>"dummy"}]
[0] trace: [[1689971223.068626923, {}], {"type"=>1, "trace_id"=>"1", "plugin_instance"=>"dummy.0", "records"=>[{"timestamp"=>1689971223, "record"=>{"message"=>"dummy"}}], "start_time"=>1689971223, "end_time"=>1689971223}]
[1] trace: [[1689971223.068675735, {}], {"type"=>3, "trace_id"=>"1", "plugin_instance"=>"dummy.0", "records"=>[{"timestamp"=>1689971223, "record"=>{"message"=>"dummy"}}], "start_time"=>1689971223, "end_time"=>1689971223}]
[2] trace: [[1689971224.068689341, {}], {"type"=>1, "trace_id"=>"2", "plugin_instance"=>"dummy.0", "records"=>[{"timestamp"=>1689971224, "record"=>{"message"=>"dummy"}}], "start_time"=>1689971224, "end_time"=>1689971224}]
[3] trace: [[1689971224.068747182, {}], {"type"=>3, "trace_id"=>"2", "plugin_instance"=>"dummy.0", "records"=>[{"timestamp"=>1689971224, "record"=>{"message"=>"dummy"}}], "start_time"=>1689971224, "end_time"=>1689971224}]
^C[2023/07/21 16:27:05] [engine] caught signal (SIGINT)
[2023/07/21 16:27:05] [ warn] [engine] service will shutdown in max 5 seconds
[2023/07/21 16:27:05] [ info] [input] pausing dummy.0
[0] dummy.0: [[1689971225.068568875, {}], {"message"=>"dummy"}]
[2023/07/21 16:27:06] [ info] [engine] service has stopped (0 pending tasks)
[2023/07/21 16:27:06] [ info] [input] pausing dummy.0
[2023/07/21 16:27:06] [ warn] [engine] service will shutdown in max 1 seconds
[0] trace: [[1689971225.068654038, {}], {"type"=>1, "trace_id"=>"3", "plugin_instance"=>"dummy.0", "records"=>[{"timestamp"=>1689971225, "record"=>{"message"=>"dummy"}}], "start_time"=>1689971225, "end_time"=>1689971225}]
[1] trace: [[1689971225.068695829, {}], {"type"=>3, "trace_id"=>"3", "plugin_instance"=>"dummy.0", "records"=>[{"timestamp"=>1689971225, "record"=>{"message"=>"dummy"}}], "start_time"=>1689971225, "end_time"=>1689971225}]
[2023/07/21 16:27:07] [ info] [engine] service has stopped (0 pending tasks)
[2023/07/21 16:27:07] [ info] [output:stdout:stdout.0] thread worker #0 stopping...
[2023/07/21 16:27:07] [ info] [output:stdout:stdout.0] thread worker #0 stopped
[2023/07/21 16:27:07] [ info] [output:stdout:stdout.0] thread worker #0 stopping...
[2023/07/21 16:27:07] [ info] [output:stdout:stdout.0] thread worker #0 stopped

The following warning indicates the -Z or --enable-chunk-trace option is missing:

[2023/07/21 16:26:42] [ warn] [chunk trace] enable chunk tracing via the configuration or  command line to be able to activate tracing.

Set properties for the output using the --trace-output-property option:

$ fluent-bit -Z -i dummy -o stdout -f 1 --trace-input=dummy.0 --trace-output=stdout --trace-output-property=format=json_lines
Fluent Bit v2.1.8
* Copyright (C) 2015-2022 The Fluent Bit Authors
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2023/07/21 16:28:59] [ info] [fluent bit] version=2.1.8, commit=824ba3dd08, pid=623170
[2023/07/21 16:28:59] [ info] [storage] ver=1.4.0, type=memory, sync=normal, checksum=off, max_chunks_up=128
[2023/07/21 16:28:59] [ info] [cmetrics] version=0.6.3
[2023/07/21 16:28:59] [ info] [ctraces ] version=0.3.1
[2023/07/21 16:28:59] [ info] [input:dummy:dummy.0] initializing
[2023/07/21 16:28:59] [ info] [input:dummy:dummy.0] storage_strategy='memory' (memory only)
[2023/07/21 16:28:59] [ info] [sp] stream processor started
[2023/07/21 16:28:59] [ info] [output:stdout:stdout.0] worker #0 started
[2023/07/21 16:28:59] [ info] [fluent bit] version=2.1.8, commit=824ba3dd08, pid=623170
[2023/07/21 16:28:59] [ info] [storage] ver=1.4.0, type=memory, sync=normal, checksum=off, max_chunks_up=128
[2023/07/21 16:28:59] [ info] [cmetrics] version=0.6.3
[2023/07/21 16:28:59] [ info] [ctraces ] version=0.3.1
[2023/07/21 16:28:59] [ info] [input:emitter:trace-emitter] initializing
[2023/07/21 16:28:59] [ info] [input:emitter:trace-emitter] storage_strategy='memory' (memory only)
[2023/07/21 16:29:00] [ info] [sp] stream processor started
[2023/07/21 16:29:00] [ info] [output:stdout:stdout.0] worker #0 started
.[0] dummy.0: [[1689971340.068565891, {}], {"message"=>"dummy"}]
[0] dummy.0: [[1689971341.068632477, {}], {"message"=>"dummy"}]
{"date":1689971340.068745,"type":1,"trace_id":"0","plugin_instance":"dummy.0","records":[{"timestamp":1689971340,"record":{"message":"dummy"}}],"start_time":1689971340,"end_time":1689971340}
{"date":1689971340.068825,"type":3,"trace_id":"0","plugin_instance":"dummy.0","records":[{"timestamp":1689971340,"record":{"message":"dummy"}}],"start_time":1689971340,"end_time":1689971340}
[0] dummy.0: [[1689971342.068613646, {}], {"message"=>"dummy"}]

With that option set, the stdout plugin emits traces in json_lines format:

{"date":1689971340.068745,"type":1,"trace_id":"0","plugin_instance":"dummy.0","records":[{"timestamp":1689971340,"record":{"message":"dummy"}}],"start_time":1689971340,"end_time":1689971340}

All three options can also be defined using the more flexible --trace option:

fluent-bit -Z -i dummy -o stdout -f 1 --trace="input=dummy.0 output=stdout output.format=json_lines"

This example defines the Tap pipeline using the configuration input=dummy.0 output=stdout output.format=json_lines, which specifies the following:

  • input: dummy.0 listens to the tag or alias dummy.0.

  • output: stdout outputs to a stdout plugin.

  • output.format: json_lines sets the stdout format to json_lines.

Tap support can also be activated and deactivated using the embedded web server:

$ docker run --rm -ti -p 2020:2020 fluent/fluent-bit:latest -Z -H -i dummy -p alias=input_dummy -o stdout -f 1
Fluent Bit v2.0.0
* Copyright (C) 2015-2022 The Fluent Bit Authors
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2022/10/21 10:03:16] [ info] [fluent bit] version=2.0.0, commit=3000f699f2, pid=1
[2022/10/21 10:03:16] [ info] [output:stdout:stdout.0] worker #0 started
[2022/10/21 10:03:16] [ info] [storage] ver=1.3.0, type=memory, sync=normal, checksum=off, max_chunks_up=128
[2022/10/21 10:03:16] [ info] [cmetrics] version=0.5.2
[2022/10/21 10:03:16] [ info] [input:dummy:input_dummy] initializing
[2022/10/21 10:03:16] [ info] [input:dummy:input_dummy] storage_strategy='memory' (memory only)
[2022/10/21 10:03:16] [ info] [http_server] listen iface=0.0.0.0 tcp_port=2020
[2022/10/21 10:03:16] [ info] [sp] stream processor started
[0] dummy.0: [1666346597.203307010, {"message"=>"dummy"}]
[0] dummy.0: [1666346598.204103793, {"message"=>"dummy"}]
...

In another terminal, activate Tap by either using the instance id of the input (dummy.0) or its alias. The alias is more predictable, and is used here:

$ curl 127.0.0.1:2020/api/v1/trace/input_dummy
{"status":"ok"}

This response means Tap is active. The terminal with Fluent Bit running should now look like this:

[0] dummy.0: [1666346615.203253156, {"message"=>"dummy"}]
[2022/10/21 10:03:36] [ info] [fluent bit] version=2.0.0, commit=3000f699f2, pid=1
[2022/10/21 10:03:36] [ info] [storage] ver=1.3.0, type=memory, sync=normal, checksum=off, max_chunks_up=128
[2022/10/21 10:03:36] [ info] [cmetrics] version=0.5.2
[2022/10/21 10:03:36] [ info] [input:emitter:trace-emitter] initializing
[2022/10/21 10:03:36] [ info] [input:emitter:trace-emitter] storage_strategy='memory' (memory only)
[2022/10/21 10:03:36] [ info] [sp] stream processor started
[2022/10/21 10:03:36] [ info] [output:stdout:stdout.0] worker #0 started
[0] dummy.0: [1666346616.203551736, {"message"=>"dummy"}]
[0] trace: [1666346617.205221952, {"type"=>1, "trace_id"=>"trace.0", "plugin_instance"=>"dummy.0", "plugin_alias"=>"input_dummy", "records"=>[{"timestamp"=>1666346617, "record"=>{"message"=>"dummy"}}], "start_time"=>1666346617, "end_time"=>1666346617}]
[0] dummy.0: [1666346617.205131790, {"message"=>"dummy"}]
[0] trace: [1666346617.205419358, {"type"=>3, "trace_id"=>"trace.0", "plugin_instance"=>"dummy.0", "plugin_alias"=>"input_dummy", "records"=>[{"timestamp"=>1666346617, "record"=>{"message"=>"dummy"}}], "start_time"=>1666346617, "end_time"=>1666346617}]
[0] trace: [1666346618.204110867, {"type"=>1, "trace_id"=>"trace.1", "plugin_instance"=>"dummy.0", "plugin_alias"=>"input_dummy", "records"=>[{"timestamp"=>1666346618, "record"=>{[0] dummy.0: [1666346618.204049246, {"message"=>"dummy"}]
"message"=>"dummy"}}], "start_time"=>1666346618, "end_time"=>1666346618}]
[0] trace: [1666346618.204198654, {"type"=>3, "trace_id"=>"trace.1", "plugin_instance"=>"dummy.0", "plugin_alias"=>"input_dummy", "records"=>[{"timestamp"=>1666346618, "record"=>{"message"=>"dummy"}}], "start_time"=>1666346618, "end_time"=>1666346618}]

All the trace records displayed are those emitted for the activity of the dummy plugin.

Complex Tap example

This example takes the same steps but demonstrates how the mechanism works with more complicated configurations. It follows a single input, out of many, which passes through several filters.

$ docker run --rm -ti -p 2020:2020 \
   fluent/fluent-bit:latest \
   -Z -H \
      -i dummy -p alias=dummy_0 -p \
         dummy='{"dummy": "dummy_0", "key_name": "foo", "key_cnt": "1"}' \
      -i dummy -p alias=dummy_1 -p dummy='{"dummy": "dummy_1"}' \
      -i dummy -p alias=dummy_2 -p dummy='{"dummy": "dummy_2"}' \
      -F record_modifier -m 'dummy.0' -p record="powered_by fluent" \
      -F record_modifier -m 'dummy.1' -p record="powered_by fluent-bit" \
      -F nest -m 'dummy.0' \
         -p operation=nest -p wildcard='key_*' -p nest_under=data \
      -o null -m '*' -f 1

To ensure the window isn't cluttered by the records generated by the input plugins, all of them are sent to the null output.

Activate with the following curl command:

$ curl 127.0.0.1:2020/api/v1/trace/dummy_0
{"status":"ok"}

You should start seeing output similar to the following:

[0] trace: [1666349359.325597543, {"type"=>1, "trace_id"=>"trace.0", "plugin_instance"=>"dummy.0", "plugin_alias"=>"dummy_0", "records"=>[{"timestamp"=>1666349359, "record"=>{"dummy"=>"dummy_0", "key_name"=>"foo", "key_cnt"=>"1"}}], "start_time"=>1666349359, "end_time"=>1666349359}]
[0] trace: [1666349359.325723747, {"type"=>2, "start_time"=>1666349359, "end_time"=>1666349359, "trace_id"=>"trace.0", "plugin_instance"=>"record_modifier.0", "records"=>[{"timestamp"=>1666349359, "record"=>{"dummy"=>"dummy_0", "key_name"=>"foo", "key_cnt"=>"1", "powered_by"=>"fluent"}}]}]
[0] trace: [1666349359.325783954, {"type"=>2, "start_time"=>1666349359, "end_time"=>1666349359, "trace_id"=>"trace.0", "plugin_instance"=>"nest.2", "records"=>[{"timestamp"=>1666349359, "record"=>{"dummy"=>"dummy_0", "powered_by"=>"fluent", "data"=>{"key_name"=>"foo", "key_cnt"=>"1"}}}]}]
[0] trace: [1666349359.325913783, {"type"=>3, "trace_id"=>"trace.0", "plugin_instance"=>"dummy.0", "plugin_alias"=>"dummy_0", "records"=>[{"timestamp"=>1666349359, "record"=>{"dummy"=>"dummy_0", "powered_by"=>"fluent", "data"=>{"key_name"=>"foo", "key_cnt"=>"1"}}}], "start_time"=>1666349359, "end_time"=>1666349359}]
[0] trace: [1666349360.323826619, {"type"=>1, "trace_id"=>"trace.1", "plugin_instance"=>"dummy.0", "plugin_alias"=>"dummy_0", "records"=>[{"timestamp"=>1666349360, "record"=>{"dummy"=>"dummy_0", "key_name"=>"foo", "key_cnt"=>"1"}}], "start_time"=>1666349360, "end_time"=>1666349360}]
[0] trace: [1666349360.323859618, {"type"=>2, "start_time"=>1666349360, "end_time"=>1666349360, "trace_id"=>"trace.1", "plugin_instance"=>"record_modifier.0", "records"=>[{"timestamp"=>1666349360, "record"=>{"dummy"=>"dummy_0", "key_name"=>"foo", "key_cnt"=>"1", "powered_by"=>"fluent"}}]}]
[0] trace: [1666349360.323900784, {"type"=>2, "start_time"=>1666349360, "end_time"=>1666349360, "trace_id"=>"trace.1", "plugin_instance"=>"nest.2", "records"=>[{"timestamp"=>1666349360, "record"=>{"dummy"=>"dummy_0", "powered_by"=>"fluent", "data"=>{"key_name"=>"foo", "key_cnt"=>"1"}}}]}]
[0] trace: [1666349360.323926366, {"type"=>3, "trace_id"=>"trace.1", "plugin_instance"=>"dummy.0", "plugin_alias"=>"dummy_0", "records"=>[{"timestamp"=>1666349360, "record"=>{"dummy"=>"dummy_0", "powered_by"=>"fluent", "data"=>{"key_name"=>"foo", "key_cnt"=>"1"}}}], "start_time"=>1666349360, "end_time"=>1666349360}]
[0] trace: [1666349361.324223752, {"type"=>1, "trace_id"=>"trace.2", "plugin_instance"=>"dummy.0", "plugin_alias"=>"dummy_0", "records"=>[{"timestamp"=>1666349361, "record"=>{"dummy"=>"dummy_0", "key_name"=>"foo", "key_cnt"=>"1"}}], "start_time"=>1666349361, "end_time"=>1666349361}]
[0] trace: [1666349361.324263959, {"type"=>2, "start_time"=>1666349361, "end_time"=>1666349361, "trace_id"=>"trace.2", "plugin_instance"=>"record_modifier.0", "records"=>[{"timestamp"=>1666349361, "record"=>{"dummy"=>"dummy_0", "key_name"=>"foo", "key_cnt"=>"1", "powered_by"=>"fluent"}}]}]
[0] trace: [1666349361.324283250, {"type"=>2, "start_time"=>1666349361, "end_time"=>1666349361, "trace_id"=>"trace.2", "plugin_instance"=>"nest.2", "records"=>[{"timestamp"=>1666349361, "record"=>{"dummy"=>"dummy_0", "powered_by"=>"fluent", "data"=>{"key_name"=>"foo", "key_cnt"=>"1"}}}]}]
[0] trace: [1666349361.324294291, {"type"=>3, "trace_id"=>"trace.2", "plugin_instance"=>"dummy.0", "plugin_alias"=>"dummy_0", "records"=>[{"timestamp"=>1666349361, "record"=>{"dummy"=>"dummy_0", "powered_by"=>"fluent", "data"=>{"key_name"=>"foo", "key_cnt"=>"1"}}}], "start_time"=>1666349361, "end_time"=>1666349361}]
^C[2022/10/21 10:49:23] [engine] caught signal (SIGINT)
[2022/10/21 10:49:23] [ warn] [engine] service will shutdown in max 5 seconds
[2022/10/21 10:49:23] [ info] [input] pausing dummy_0
[2022/10/21 10:49:23] [ info] [input] pausing dummy_1
[2022/10/21 10:49:23] [ info] [input] pausing dummy_2
[2022/10/21 10:49:23] [ info] [engine] service has stopped (0 pending tasks)
[2022/10/21 10:49:23] [ info] [input] pausing dummy_0
[2022/10/21 10:49:23] [ info] [input] pausing dummy_1
[2022/10/21 10:49:23] [ info] [input] pausing dummy_2
[0] trace: [1666349362.323272011, {"type"=>1, "trace_id"=>"trace.3", "plugin_instance"=>"dummy.0", "plugin_alias"=>"dummy_0", "records"=>[{"timestamp"=>1666349362, "record"=>{"dummy"=>"dummy_0", "key_name"=>"foo", "key_cnt"=>"1"}}], "start_time"=>1666349362, "end_time"=>1666349362}]
[0] trace: [1666349362.323306843, {"type"=>2, "start_time"=>1666349362, "end_time"=>1666349362, "trace_id"=>"trace.3", "plugin_instance"=>"record_modifier.0", "records"=>[{"timestamp"=>1666349362, "record"=>{"dummy"=>"dummy_0", "key_name"=>"foo", "key_cnt"=>"1", "powered_by"=>"fluent"}}]}]
[0] trace: [1666349362.323323884, {"type"=>2, "start_time"=>1666349362, "end_time"=>1666349362, "trace_id"=>"trace.3", "plugin_instance"=>"nest.2", "records"=>[{"timestamp"=>1666349362, "record"=>{"dummy"=>"dummy_0", "powered_by"=>"fluent", "data"=>{"key_name"=>"foo", "key_cnt"=>"1"}}}]}]
[0] trace: [1666349362.323334509, {"type"=>3, "trace_id"=>"trace.3", "plugin_instance"=>"dummy.0", "plugin_alias"=>"dummy_0", "records"=>[{"timestamp"=>1666349362, "record"=>{"dummy"=>"dummy_0", "powered_by"=>"fluent", "data"=>{"key_name"=>"foo", "key_cnt"=>"1"}}}], "start_time"=>1666349362, "end_time"=>1666349362}]
[2022/10/21 10:49:24] [ warn] [engine] service will shutdown in max 1 seconds
[2022/10/21 10:49:25] [ info] [engine] service has stopped (0 pending tasks)
[2022/10/21 10:49:25] [ info] [output:stdout:stdout.0] thread worker #0 stopping...
[2022/10/21 10:49:25] [ info] [output:stdout:stdout.0] thread worker #0 stopped
[2022/10/21 10:49:25] [ info] [output:null:null.0] thread worker #0 stopping...
[2022/10/21 10:49:25] [ info] [output:null:null.0] thread worker #0 stopped

Parameters for the output in Tap

When activating Tap, any plugin parameter can be given. These parameters can be used to modify the output format, the name of the time key, the format of the date, and other details.

The following example uses the parameter "format": "json" to demonstrate how to show stdout in JSON format.

First, run Fluent Bit enabling Tap:

$ docker run --rm -ti -p 2020:2020 fluent/fluent-bit:latest -Z -H -i dummy -p alias=input_dummy -o stdout -f 1
Fluent Bit v2.0.8
* Copyright (C) 2015-2022 The Fluent Bit Authors
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2023/01/27 07:44:25] [ info] [fluent bit] version=2.0.8, commit=9444fdc5ee, pid=1
[2023/01/27 07:44:25] [ info] [storage] ver=1.4.0, type=memory, sync=normal, checksum=off, max_chunks_up=128
[2023/01/27 07:44:25] [ info] [cmetrics] version=0.5.8
[2023/01/27 07:44:25] [ info] [ctraces ] version=0.2.7
[2023/01/27 07:44:25] [ info] [input:dummy:input_dummy] initializing
[2023/01/27 07:44:25] [ info] [input:dummy:input_dummy] storage_strategy='memory' (memory only)
[2023/01/27 07:44:25] [ info] [output:stdout:stdout.0] worker #0 started
[2023/01/27 07:44:25] [ info] [http_server] listen iface=0.0.0.0 tcp_port=2020
[2023/01/27 07:44:25] [ info] [sp] stream processor started
[0] dummy.0: [1674805465.976012761, {"message"=>"dummy"}]
[0] dummy.0: [1674805466.973669512, {"message"=>"dummy"}]
...

In another terminal, activate Tap including the output (stdout), and the parameters wanted ("format": "json"):

$ curl 127.0.0.1:2020/api/v1/trace/input_dummy -d '{"output":"stdout", "params": {"format": "json"}}'
{"status":"ok"}

In the first terminal, you should see output similar to the following:

[0] dummy.0: [1674805635.972373840, {"message"=>"dummy"}]
[{"date":1674805634.974457,"type":1,"trace_id":"0","plugin_instance":"dummy.0","plugin_alias":"input_dummy","records":[{"timestamp":1674805634,"record":{"message":"dummy"}}],"start_time":1674805634,"end_time":1674805634},{"date":1674805634.974605,"type":3,"trace_id":"0","plugin_instance":"dummy.0","plugin_alias":"input_dummy","records":[{"timestamp":1674805634,"record":{"message":"dummy"}}],"start_time":1674805634,"end_time":1674805634},{"date":1674805635.972398,"type":1,"trace_id":"1","plugin_instance":"dummy.0","plugin_alias":"input_dummy","records":[{"timestamp":1674805635,"record":{"message":"dummy"}}],"start_time":1674805635,"end_time":1674805635},{"date":1674805635.972413,"type":3,"trace_id":"1","plugin_instance":"dummy.0","plugin_alias":"input_dummy","records":[{"timestamp":1674805635,"record":{"message":"dummy"}}],"start_time":1674805635,"end_time":1674805635}]
[0] dummy.0: [1674805636.973970215, {"message"=>"dummy"}]
[{"date":1674805636.974008,"type":1,"trace_id":"2","plugin_instance":"dummy.0","plugin_alias":"input_dummy","records":[{"timestamp":1674805636,"record":{"message":"dummy"}}],"start_time":1674805636,"end_time":1674805636},{"date":1674805636.974034,"type":3,"trace_id":"2","plugin_instance":"dummy.0","plugin_alias":"input_dummy","records":[{"timestamp":1674805636,"record":{"message":"dummy"}}],"start_time":1674805636,"end_time":1674805636}]

This parameter shows stdout in JSON format.

See output plugins for additional information.

Analyze a single Tap record

The following filter record is used as an example to explain the details of a Tap record:

{
   "type": 2,
   "start_time": 1666349231,
   "end_time": 1666349231,
   "trace_id": "trace.1",
   "plugin_instance": "nest.2",
   "records": [{
      "timestamp": 1666349231,
      "record": {
         "dummy": "dummy_0",
         "powered_by": "fluent",
         "data": {
            "key_name": "foo",
            "key_cnt": "1"
         }
      }
   }]
}
  • type: Defines the stage at which the event was generated:

    • 1: Input record. This is the unadulterated input record.

    • 2: Filtered record. This is a record after it was filtered. One record is generated per filter.

    • 3: Pre-output record. This is the record right before it's sent for output.

    This example is a record generated by the manipulation of a record by a filter, so it has type 2.

  • start_time and end_time: Record the start and end of an event; the meaning differs for each event type:

    • type 1: When the input is received, both the start and end time.

    • type 2: The time when filtering is matched until it has finished processing.

    • type 3: The time when the input is received and when it's finally slated for output.

  • trace_id: A string composed of a prefix and a number which is incremented with each record received by the input during the Tap session.

  • plugin_instance: The plugin instance name as generated by Fluent Bit at runtime.

  • plugin_alias: If an alias is set this field will contain the alias set for a plugin.

  • records: An array of all the records being sent. Fluent Bit handles records in chunks of multiple records, and chunks are indivisible; the same applies to the Tap output. Each record consists of its timestamp followed by the actual data, which is a composite type of keys and values.
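As a rough illustration, when Tap is activated with the JSON output parameters shown earlier, the trace arrays can be post-processed with standard tools. This is a minimal sketch; the file name tap.log and the jq filter are hypothetical, not part of Fluent Bit:

# Hypothetical post-processing sketch: assuming Fluent Bit's stdout was
# captured to tap.log with Tap activated using {"format": "json"},
# keep only pre-output (type 3) Tap events and print a compact summary.
grep '"trace_id"' tap.log \
  | jq -c '.[] | select(.type == 3) | {trace_id, plugin_instance, records}'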

Dump Internals / Signal

When the service is running, you can export metrics to see the overall status of the data flow of the service. There are other use cases where you might need to know the current status of the service internals, like the current status of the internal buffers. Dump Internals can help provide this information.

Fluent Bit v1.4 introduced the Dump Internals feature, which can be triggered from the command line using the CONT Unix signal.

This feature is only available on Linux and BSD operating systems.

Usage

Run the following kill command to signal Fluent Bit:

kill -CONT `pidof fluent-bit`

The pidof command identifies the process ID of Fluent Bit.
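If Fluent Bit runs as a systemd service, a sketch of an equivalent command could be the following (the unit name fluent-bit is an assumption and might differ on your system):

# Send SIGCONT to the main PID of the systemd unit (assumed name: fluent-bit).
kill -CONT "$(systemctl show --property MainPID --value fluent-bit)"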

Fluent Bit will dump the following information to the standard output interface (stdout):

[engine] caught signal (SIGCONT)
[2020/03/23 17:39:02] Fluent Bit Dump

===== Input =====
syslog_debug (syslog)
│
├─ status
│  └─ overlimit     : no
│     ├─ mem size   : 60.8M (63752145 bytes)
│     └─ mem limit  : 61.0M (64000000 bytes)
│
├─ tasks
│  ├─ total tasks   : 92
│  ├─ new           : 0
│  ├─ running       : 92
│  └─ size          : 171.1M (179391504 bytes)
│
└─ chunks
   └─ total chunks  : 92
      ├─ up chunks  : 35
      ├─ down chunks: 57
      └─ busy chunks: 92
         ├─ size    : 60.8M (63752145 bytes)
         └─ size err: 0

===== Storage Layer =====
total chunks     : 92
├─ mem chunks    : 0
└─ fs chunks     : 92
   ├─ up         : 35
   └─ down       : 57

Input plugins

The input plugins dump provides insights for every input instance configured.

Status

Overall ingestion status of the plugin.

| Entry | Sub-entry | Description |
| :--- | :--- | :--- |
| status | overlimit | If the plugin has been configured with Mem_Buf_Limit, this entry reports whether the plugin is over the limit at the moment of the dump. Over the limit prints yes, otherwise no. |
| status | mem_size | Current memory size in use by the input plugin in-memory. |
| status | mem_limit | Limit set by Mem_Buf_Limit. |
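For reference, a minimal sketch of an input configured with a memory buffer limit (the path and limit values are illustrative):

[INPUT]
    # Illustrative example: cap this input's in-memory buffering at 5MB.
    Name          tail
    Path          /var/log/syslog
    Mem_Buf_Limit 5MB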

Tasks

When an input plugin ingests data into the engine, a Chunk is created. A Chunk can contain multiple records. At flush time, the engine creates a Task that contains the routes for the Chunk in question.

The Task dump describes the tasks associated to the input plugin:

| Entry | Description |
| :--- | :--- |
| total_tasks | Total number of active tasks associated with data generated by the input plugin. |
| new | Number of tasks not yet assigned to an output plugin. Tasks are in new status for a very short period of time. This value is normally very low or zero. |
| running | Number of active tasks being processed by output plugins. |
| size | Amount of memory used by the Chunks being processed (total chunk size). |

Chunks

The Chunks dump provides more details about all the chunks that the input plugin has generated and that are still being processed.

Depending on the buffering strategy and limits imposed by configuration, some Chunks might be up (in memory) or down (filesystem).

| Entry | Sub-entry | Description |
| :--- | :--- | :--- |
| total_chunks | | Total number of Chunks generated by the input plugin that are still being processed by the engine. |
| up_chunks | | Total number of Chunks loaded in memory. |
| down_chunks | | Total number of Chunks stored in the filesystem but not loaded in memory yet. |
| busy_chunks | | Chunks marked as busy (being flushed) or locked. Busy Chunks are immutable and are either ready to be processed or already being processed. |
| busy_chunks | size | Amount of bytes used by the Chunk. |
| busy_chunks | size err | Number of Chunks in an error state where their size couldn't be retrieved. |

Storage Layer

Fluent Bit relies on a custom storage layer interface designed for hybrid buffering. The Storage Layer entry contains a total summary of Chunks registered by Fluent Bit:

| Entry | Sub-Entry | Description |
| :--- | :--- | :--- |
| total chunks | | Total number of Chunks. |
| mem chunks | | Total number of memory-based Chunks. |
| fs chunks | | Total number of filesystem-based Chunks. |
| fs chunks | up | Total number of filesystem chunks up in memory. |
| fs chunks | down | Total number of filesystem chunks down (not loaded in memory). |
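A minimal sketch of a SERVICE section that enables filesystem buffering, so chunks can be moved down to disk as shown in the summary above (the path and values are illustrative):

[SERVICE]
    # Illustrative filesystem buffering settings; chunks beyond
    # storage.max_chunks_up can be put "down" into this path.
    storage.path           /var/log/flb-storage/
    storage.sync           normal
    storage.checksum       off
    storage.max_chunks_up  128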


Build and Install

Fluent Bit uses CMake as its build system.

Requirements

  • CMake 3.12 or greater. You might need to use cmake3 instead of cmake.

  • Flex

  • Bison 3 or greater

  • YAML headers

  • OpenSSL headers
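On a Debian or Ubuntu system, for example, the dependencies can typically be installed with something like the following (package names can vary across distributions):

# Install build dependencies (Debian/Ubuntu package names).
sudo apt-get update
sudo apt-get install -y cmake make gcc g++ flex bison libyaml-dev libssl-dev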

Prepare environment

If you already know how CMake works, you can skip this section and review the available build options.

The following steps explain how to build and install the project with the default options.

  1. Change to the build/ directory inside the Fluent Bit sources:

    cd build/
  2. Let CMake configure the project specifying where the root path is located:

    cmake ../

    This command displays a series of results similar to:

    -- The C compiler identification is GNU 4.9.2
    -- Check for working C compiler: /usr/bin/cc
    -- Check for working C compiler: /usr/bin/cc -- works
    -- Detecting C compiler ABI info
    -- Detecting C compiler ABI info - done
    -- The CXX compiler identification is GNU 4.9.2
    -- Check for working CXX compiler: /usr/bin/c++
    -- Check for working CXX compiler: /usr/bin/c++ -- works
    ...
    -- Could NOT find Doxygen (missing:  DOXYGEN_EXECUTABLE)
    -- Looking for accept4
    -- Looking for accept4 - not found
    -- Configuring done
    -- Generating done
    -- Build files have been written to: /home/edsiper/coding/fluent-bit/build
  3. Start the compilation process using the make command:

    make

    This command displays results similar to:

    Scanning dependencies of target msgpack
    [  2%] Building C object lib/msgpack-1.1.0/CMakeFiles/msgpack.dir/src/unpack.c.o
    [  4%] Building C object lib/msgpack-1.1.0/CMakeFiles/msgpack.dir/src/objectc.c.o
    [  7%] Building C object lib/msgpack-1.1.0/CMakeFiles/msgpack.dir/src/version.c.o
    ...
    [ 19%] Building C object lib/monkey/mk_core/CMakeFiles/mk_core.dir/mk_file.c.o
    [ 21%] Building C object lib/monkey/mk_core/CMakeFiles/mk_core.dir/mk_rconf.c.o
    [ 23%] Building C object lib/monkey/mk_core/CMakeFiles/mk_core.dir/mk_string.c.o
    ...
    Scanning dependencies of target fluent-bit-static
    [ 66%] Building C object src/CMakeFiles/fluent-bit-static.dir/flb_pack.c.o
    [ 69%] Building C object src/CMakeFiles/fluent-bit-static.dir/flb_input.c.o
    [ 71%] Building C object src/CMakeFiles/fluent-bit-static.dir/flb_output.c.o
    ...
    Linking C executable ../bin/fluent-bit
    [100%] Built target fluent-bit-bin
  4. To continue installing the binary on the system, use make install:

    make install

    If the command indicates insufficient permissions, prefix the command with sudo.

Build options

Fluent Bit provides configurable options to CMake that can be enabled or disabled.
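Each option is passed to CMake with -D at configuration time. For example, to build with debug symbols and the built-in HTTP server enabled (both options are listed in the tables below):

cd build/
cmake -DFLB_DEBUG=On -DFLB_HTTP_SERVER=On ../
make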

General options

| Option | Description | Default |
| :--- | :--- | :--- |
| FLB_ALL | Enable all features available | No |
| FLB_JEMALLOC | Use Jemalloc as default memory allocator | No |
| FLB_TLS | Build with SSL/TLS support | Yes |
| FLB_BINARY | Build executable | Yes |
| FLB_EXAMPLES | Build examples | Yes |
| FLB_SHARED_LIB | Build shared library | Yes |
| FLB_MTRACE | Enable mtrace support | No |
| FLB_INOTIFY | Enable Inotify support | Yes |
| FLB_POSIX_TLS | Force POSIX thread storage | No |
| FLB_SQLDB | Enable SQL embedded database support | No |
| FLB_HTTP_SERVER | Enable HTTP Server | No |
| FLB_LUAJIT | Enable Lua scripting support | Yes |
| FLB_RECORD_ACCESSOR | Enable record accessor | Yes |
| FLB_SIGNV4 | Enable AWS Signv4 support | Yes |
| FLB_STATIC_CONF | Build binary using static configuration files. The value of this option must be a directory containing configuration files. | |
| FLB_STREAM_PROCESSOR | Enable Stream Processor | Yes |
| FLB_CONFIG_YAML | Enable YAML configuration support | Yes |
| FLB_WASM | Build with WASM runtime support | Yes |
| FLB_WAMRC | Build with WASM AOT compiler executable | No |

Development options

| Option | Description | Default |
| :--- | :--- | :--- |
| FLB_DEBUG | Build binaries with debug symbols | No |
| FLB_VALGRIND | Enable Valgrind support | No |
| FLB_TRACE | Enable trace mode | No |
| FLB_SMALL | Minimise binary size | No |
| FLB_TESTS_RUNTIME | Enable runtime tests | No |
| FLB_TESTS_INTERNAL | Enable internal tests | No |
| FLB_TESTS | Enable tests | No |
| FLB_BACKTRACE | Enable backtrace/stacktrace support | Yes |

Optimization options

| Option | Description | Default |
| :--- | :--- | :--- |
| FLB_MSGPACK_TO_JSON_INIT_BUFFER_SIZE | Determine the initial buffer size for msgpack to JSON conversion, in terms of memory used by the payload. | 2.0 |
| FLB_MSGPACK_TO_JSON_REALLOC_BUFFER_SIZE | Determine the percentage of reallocation size when the msgpack to JSON conversion buffer runs out of memory. | 0.1 |

Input plugins

Input plugins gather information from a specific source type like network interfaces, some built-in metrics, or through a specific input device. The following input plugins are available:

| Option | Description | Default |
| :--- | :--- | :--- |
| FLB_IN_COLLECTD | Enable Collectd input plugin | On |
| FLB_IN_CPU | Enable CPU input plugin | On |
| FLB_IN_DISK | Enable Disk I/O Metrics input plugin | On |
| FLB_IN_DOCKER | Enable Docker metrics input plugin | On |
| FLB_IN_EXEC | Enable Exec input plugin | On |
| FLB_IN_EXEC_WASI | Enable Exec WASI input plugin | On |
| FLB_IN_FLUENTBIT_METRICS | Enable Fluent Bit metrics input plugin | On |
| FLB_IN_ELASTICSEARCH | Enable Elasticsearch/OpenSearch Bulk input plugin | On |
| FLB_IN_FORWARD | Enable Forward input plugin | On |
| FLB_IN_HEAD | Enable Head input plugin | On |
| FLB_IN_HEALTH | Enable Health input plugin | On |
| FLB_IN_KMSG | Enable Kernel log input plugin | On |
| FLB_IN_MEM | Enable Memory input plugin | On |
| FLB_IN_MQTT | Enable MQTT Server input plugin | On |
| FLB_IN_NETIF | Enable Network I/O metrics input plugin | On |
| FLB_IN_PROC | Enable Process monitoring input plugin | On |
| FLB_IN_RANDOM | Enable Random input plugin | On |
| FLB_IN_SERIAL | Enable Serial input plugin | On |
| FLB_IN_STDIN | Enable Standard input plugin | On |
| FLB_IN_SYSLOG | Enable Syslog input plugin | On |
| FLB_IN_SYSTEMD | Enable Systemd / Journald input plugin | On |
| FLB_IN_TAIL | Enable Tail (follow files) input plugin | On |
| FLB_IN_TCP | Enable TCP input plugin | On |
| FLB_IN_THERMAL | Enable system temperature input plugin | On |
| FLB_IN_UDP | Enable UDP input plugin | On |
| FLB_IN_WINLOG | Enable Windows Event Log input plugin (Windows only) | On |
| FLB_IN_WINEVTLOG | Enable Windows Event Log input plugin using winevt.h API (Windows only) | On |

Filter plugins

Filter plugins let you modify, enrich or drop records. The following table describes the filters available on this version:

| Option | Description | Default |
| :--- | :--- | :--- |
| FLB_FILTER_AWS | Enable AWS metadata filter | On |
| FLB_FILTER_ECS | Enable ECS metadata filter | On |
| FLB_FILTER_EXPECT | Enable Expect data test filter | On |
| FLB_FILTER_GREP | Enable Grep filter | On |
| FLB_FILTER_KUBERNETES | Enable Kubernetes metadata filter | On |
| FLB_FILTER_LUA | Enable Lua scripting filter | On |
| FLB_FILTER_MODIFY | Enable Modify filter | On |
| FLB_FILTER_NEST | Enable Nest filter | On |
| FLB_FILTER_PARSER | Enable Parser filter | On |
| FLB_FILTER_RECORD_MODIFIER | Enable Record Modifier filter | On |
| FLB_FILTER_REWRITE_TAG | Enable Rewrite Tag filter | On |
| FLB_FILTER_STDOUT | Enable Stdout filter | On |
| FLB_FILTER_SYSINFO | Enable Sysinfo filter | On |
| FLB_FILTER_THROTTLE | Enable Throttle filter | On |
| FLB_FILTER_TYPE_CONVERTER | Enable Type Converter filter | On |
| FLB_FILTER_WASM | Enable WASM filter | On |

Output plugins

Output plugins let you flush the information to some external interface, service, or terminal. The following table describes the output plugins available:

| Option | Description | Default |
| :--- | :--- | :--- |
| FLB_OUT_AZURE | Enable Microsoft Azure output plugin | On |
| FLB_OUT_AZURE_KUSTO | Enable Azure Kusto output plugin | On |
| FLB_OUT_BIGQUERY | Enable Google BigQuery output plugin | On |
| FLB_OUT_COUNTER | Enable Counter output plugin | On |
| FLB_OUT_CLOUDWATCH_LOGS | Enable Amazon CloudWatch output plugin | On |
| FLB_OUT_DATADOG | Enable Datadog output plugin | On |
| FLB_OUT_ES | Enable Elasticsearch output plugin | On |
| FLB_OUT_FILE | Enable File output plugin | On |
| FLB_OUT_KINESIS_FIREHOSE | Enable Amazon Kinesis Data Firehose output plugin | On |
| FLB_OUT_KINESIS_STREAMS | Enable Amazon Kinesis Data Streams output plugin | On |
| FLB_OUT_FLOWCOUNTER | Enable Flowcounter output plugin | On |
| FLB_OUT_FORWARD | Enable Fluentd Forward output plugin | On |
| FLB_OUT_GELF | Enable Gelf output plugin | On |
| FLB_OUT_HTTP | Enable HTTP output plugin | On |
| FLB_OUT_INFLUXDB | Enable InfluxDB output plugin | On |
| FLB_OUT_KAFKA | Enable Kafka output plugin | Off |
| FLB_OUT_KAFKA_REST | Enable Kafka REST Proxy output plugin | On |
| FLB_OUT_LIB | Enable Lib output plugin | On |
| FLB_OUT_NATS | Enable NATS output plugin | On |
| FLB_OUT_NULL | Enable NULL output plugin | On |
| FLB_OUT_PGSQL | Enable PostgreSQL output plugin | On |
| FLB_OUT_PLOT | Enable Plot output plugin | On |
| FLB_OUT_SLACK | Enable Slack output plugin | On |
| FLB_OUT_S3 | Enable Amazon S3 output plugin | On |
| FLB_OUT_SPLUNK | Enable Splunk output plugin | On |
| FLB_OUT_STACKDRIVER | Enable Google Stackdriver output plugin | On |
| FLB_OUT_STDOUT | Enable STDOUT output plugin | On |
| FLB_OUT_TCP | Enable TCP/TLS output plugin | On |
| FLB_OUT_TD | Enable Treasure Data output plugin | On |

Processor plugins

Processor plugins handle the events within the processor pipelines to allow modifying, enriching, or dropping events.

The following table describes the processors available:

| Option | Description | Default |
| :--- | :--- | :--- |
| FLB_PROCESSOR_METRICS_SELECTOR | Enable metrics selector processor | On |
| FLB_PROCESSOR_LABELS | Enable metrics label manipulation processor | On |


Docker

Fluent Bit container images are available on Docker Hub, ready for production usage. The currently available images can be deployed on multiple architectures.

Start Docker

Use the following command to start Docker with Fluent Bit:

docker run -ti cr.fluentbit.io/fluent/fluent-bit

Use a configuration file

Use one of the following commands to start Fluent Bit while using a configuration file:

# For classic configuration.
docker run -ti -v ./fluent-bit.conf:/fluent-bit/etc/fluent-bit.conf \
  cr.fluentbit.io/fluent/fluent-bit

# For YAML configuration.
docker run -ti -v ./fluent-bit.yaml:/fluent-bit/etc/fluent-bit.yaml \
  cr.fluentbit.io/fluent/fluent-bit \
  -c /fluent-bit/etc/fluent-bit.yaml
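As a starting point, a minimal fluent-bit.yaml along these lines can be used with the YAML variant above (a sketch; the cpu input and stdout output are illustrative, adjust the pipeline to your needs):

# fluent-bit.yaml (sketch)
service:
    http_server: on

pipeline:
    inputs:
        - name: cpu

    outputs:
        - name: stdout
          match: '*'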

Tags and versions

The following table describes the Linux container tags that are available on the Docker Hub fluent/fluent-bit repository:

| Tag(s) | Manifest Architectures | Description |
| :--- | :--- | :--- |
| 4.0.4-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 4.0.4 | x86_64, arm64v8, arm32v7, s390x | Release |
| 4.0.3-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 4.0.3 | x86_64, arm64v8, arm32v7, s390x | Release |
| 4.0.1-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 4.0.1 | x86_64, arm64v8, arm32v7, s390x | Release |
| 4.0.0-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 4.0.0 | x86_64, arm64v8, arm32v7, s390x | Release |
| 3.2.10-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.2.10 | x86_64, arm64v8, arm32v7, s390x | Release |
| 3.2.9-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.2.9 | x86_64, arm64v8, arm32v7, s390x | Release |
| 3.2.8-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.2.8 | x86_64, arm64v8, arm32v7, s390x | Release |
| 3.2.7-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.2.7 | x86_64, arm64v8, arm32v7, s390x | Release |
| 3.2.6-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.2.6 | x86_64, arm64v8, arm32v7, s390x | Release |
| 3.2.5-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.2.5 | x86_64, arm64v8, arm32v7, s390x | Release |
| 3.2.4-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.2.4 | x86_64, arm64v8, arm32v7, s390x | Release |
| 3.2.3-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.2.3 | x86_64, arm64v8, arm32v7, s390x | Release |
| 3.2.2-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.2.2 | x86_64, arm64v8, arm32v7, s390x | Release |
| 3.2.1-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.2.1 | x86_64, arm64v8, arm32v7, s390x | Release |
| 3.1.10-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.1.10 | x86_64, arm64v8, arm32v7, s390x | Release |
| 3.1.9-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.1.9 | x86_64, arm64v8, arm32v7, s390x | Release |
| 3.1.8-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.1.8 | x86_64, arm64v8, arm32v7, s390x | Release |
| 3.1.7-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.1.7 | x86_64, arm64v8, arm32v7, s390x | Release |
| 3.1.6-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.1.6 | x86_64, arm64v8, arm32v7, s390x | Release |
| 3.1.5-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.1.5 | x86_64, arm64v8, arm32v7, s390x | Release |
| 3.1.4-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.1.4 | x86_64, arm64v8, arm32v7, s390x | Release |
| 3.1.3-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.1.3 | x86_64, arm64v8, arm32v7, s390x | Release |
| 3.1.2-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.1.2 | x86_64, arm64v8, arm32v7, s390x | Release |
| 3.1.1-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.1.1 | x86_64, arm64v8, arm32v7, s390x | Release |
| 3.1.0-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.1.0 | x86_64, arm64v8, arm32v7, s390x | Release |
| 3.0.7-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.0.7 | x86_64, arm64v8, arm32v7, s390x | Release |
| 3.0.6-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.0.6 | x86_64, arm64v8, arm32v7, s390x | Release |
| 3.0.5-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.0.5 | x86_64, arm64v8, arm32v7, s390x | Release |
| 3.0.4-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.0.4 | x86_64, arm64v8, arm32v7, s390x | Release |
| 3.0.3-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.0.3 | x86_64, arm64v8, arm32v7, s390x | Release |
| 3.0.2-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.0.2 | x86_64, arm64v8, arm32v7, s390x | Release |
| 3.0.1-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.0.1 | x86_64, arm64v8, arm32v7, s390x | Release |
| 3.0.0-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.0.0 | x86_64, arm64v8, arm32v7, s390x | Release |
| 2.2.2-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 2.2.2 | x86_64, arm64v8, arm32v7, s390x | Release |
| 2.2.1-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 2.2.1 | x86_64, arm64v8, arm32v7, s390x | Release |
| 2.2.0-debug | x86_64, arm64v8, arm32v7 | Debug images |
| 2.2.0 | x86_64, arm64v8, arm32v7 | Release |
| 2.1.10-debug | x86_64, arm64v8, arm32v7 | Debug images |
| 2.1.10 | x86_64, arm64v8, arm32v7 | Release |
| 2.1.9-debug | x86_64, arm64v8, arm32v7 | Debug images |
| 2.1.9 | x86_64, arm64v8, arm32v7 | Release |
| 2.1.8-debug | x86_64, arm64v8, arm32v7 | Debug images |
| 2.1.8 | x86_64, arm64v8, arm32v7 | Release |
| 2.1.7-debug | x86_64, arm64v8, arm32v7 | Debug images |
| 2.1.7 | x86_64, arm64v8, arm32v7 | Release |
| 2.1.6-debug | x86_64, arm64v8, arm32v7 | Debug images |
| 2.1.6 | x86_64, arm64v8, arm32v7 | Release |
| 2.1.5 | x86_64, arm64v8, arm32v7 | Release |
| 2.1.5-debug | x86_64, arm64v8, arm32v7 | Debug images |
| 2.1.4 | x86_64, arm64v8, arm32v7 | Release |
| 2.1.4-debug | x86_64, arm64v8, arm32v7 | Debug images |
| 2.1.3 | x86_64, arm64v8, arm32v7 | Release |
| 2.1.3-debug | x86_64, arm64v8, arm32v7 | Debug images |
| 2.1.2 | x86_64, arm64v8, arm32v7 | Release |
| 2.1.2-debug | x86_64, arm64v8, arm32v7 | Debug images |
| 2.1.1 | x86_64, arm64v8, arm32v7 | Release |
| 2.1.1-debug | x86_64, arm64v8, arm32v7 | v2.1.x releases (production + debug) |
| 2.1.0 | x86_64, arm64v8, arm32v7 | Release |
| 2.1.0-debug | x86_64, arm64v8, arm32v7 | v2.1.x releases (production + debug) |
| 2.0.11 | x86_64, arm64v8, arm32v7 | Release |
| 2.0.11-debug | x86_64, arm64v8, arm32v7 | v2.0.x releases (production + debug) |
| 2.0.10 | x86_64, arm64v8, arm32v7 | Release |
| 2.0.10-debug | x86_64, arm64v8, arm32v7 | v2.0.x releases (production + debug) |
| 2.0.9 | x86_64, arm64v8, arm32v7 | Release |
| 2.0.9-debug | x86_64, arm64v8, arm32v7 | v2.0.x releases (production + debug) |
| 2.0.8 | x86_64, arm64v8, arm32v7 | Release |
| 2.0.8-debug | x86_64, arm64v8, arm32v7 | v2.0.x releases (production + debug) |
| 2.0.6 | x86_64, arm64v8, arm32v7 | Release |
| 2.0.6-debug | x86_64, arm64v8, arm32v7 | v2.0.x releases (production + debug) |
| 2.0.5 | x86_64, arm64v8, arm32v7 | Release |
| 2.0.5-debug | x86_64, arm64v8, arm32v7 | v2.0.x releases (production + debug) |
| 2.0.4 | x86_64, arm64v8, arm32v7 | Release |
| 2.0.4-debug | x86_64, arm64v8, arm32v7 | v2.0.x releases (production + debug) |
| 2.0.3 | x86_64, arm64v8, arm32v7 | Release |
| 2.0.3-debug | x86_64, arm64v8, arm32v7 | v2.0.x releases (production + debug) |
| 2.0.2 | x86_64, arm64v8, arm32v7 | Release |
| 2.0.2-debug | x86_64, arm64v8, arm32v7 | v2.0.x releases (production + debug) |
| 2.0.1 | x86_64, arm64v8, arm32v7 | Release |
| 2.0.1-debug | x86_64, arm64v8, arm32v7 | v2.0.x releases (production + debug) |
| 2.0.0 | x86_64, arm64v8, arm32v7 | Release |
| 2.0.0-debug | x86_64, arm64v8, arm32v7 | v2.0.x releases (production + debug) |
| 1.9.9 | x86_64, arm64v8, arm32v7 | Release |
| 1.9.9-debug | x86_64, arm64v8, arm32v7 | v1.9.x releases (production + debug) |
| 1.9.8 | x86_64, arm64v8, arm32v7 | Release |
| 1.9.8-debug | x86_64, arm64v8, arm32v7 | v1.9.x releases (production + debug) |
| 1.9.7 | x86_64, arm64v8, arm32v7 | Release |
| 1.9.7-debug | x86_64, arm64v8, arm32v7 | v1.9.x releases (production + debug) |
| 1.9.6 | x86_64, arm64v8, arm32v7 | Release |
| 1.9.6-debug | x86_64, arm64v8, arm32v7 | v1.9.x releases (production + debug) |
| 1.9.5 | x86_64, arm64v8, arm32v7 | Release |
| 1.9.5-debug | x86_64, arm64v8, arm32v7 | v1.9.x releases (production + debug) |
| 1.9.4 | x86_64, arm64v8, arm32v7 | Release |
| 1.9.4-debug | x86_64, arm64v8, arm32v7 | v1.9.x releases (production + debug) |
| 1.9.3 | x86_64, arm64v8, arm32v7 | Release |
| 1.9.3-debug | x86_64, arm64v8, arm32v7 | v1.9.x releases (production + debug) |
| 1.9.2 | x86_64, arm64v8, arm32v7 | Release |
| 1.9.2-debug | x86_64, arm64v8, arm32v7 | v1.9.x releases (production + debug) |
| 1.9.1 | x86_64, arm64v8, arm32v7 | Release |
| 1.9.1-debug | x86_64, arm64v8, arm32v7 | v1.9.x releases (production + debug) |
| 1.9.0 | x86_64, arm64v8, arm32v7 | Release |
| 1.9.0-debug | x86_64, arm64v8, arm32v7 | v1.9.x releases (production + debug) |

It's strongly suggested that you always use the latest image of Fluent Bit.

Container images for Windows Server 2019 and Windows Server 2022 are provided for v2.0.6 and later. These can be found as tags on the same Docker Hub registry.

Multi-architecture images

Fluent Bit production stable images are based on Distroless. Focusing on security, these images contain only the Fluent Bit binary, minimal system libraries, and basic configuration.

Debug images are available for all architectures (for 1.9.0 and later), and contain a full Debian shell and package manager that can be used to troubleshoot or for testing purposes.

From a deployment perspective, there's no need to specify an architecture. The container client tool that pulls the image gets the proper layer for the running architecture.
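For example, after pulling an image you can check which layer was selected for your host, using standard Docker commands (a sketch):

docker pull fluent/fluent-bit:latest
docker image inspect --format '{{.Os}}/{{.Architecture}}' fluent/fluent-bit:latest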

Verify signed container images

Version 1.9 and 2.0 container images are signed using Cosign/Sigstore. Verify these signatures using cosign (install guide):

$ cosign verify --key "https://packages.fluentbit.io/fluentbit-cosign.pub" fluent/fluent-bit:2.0.6

Verification for index.docker.io/fluent/fluent-bit:2.0.6 --
The following checks were performed on each of these signatures:
  - The cosign claims were validated
  - The signatures were verified against the specified public key

[{"critical":{"identity":{"docker-reference":"index.docker.io/fluent/fluent-bit"},"image":{"docker-manifest-digest":"sha256:c740f90b07f42823d4ecf4d5e168f32ffb4b8bcd87bc41df8f5e3d14e8272903"},"type":"cosign container image signature"},"optional":{"release":"2.0.6","repo":"fluent/fluent-bit","workflow":"Release from staging"}}]

Replace cosign with the binary installed if it has a different name (for example, cosign-linux-amd64).

Keyless signing is also provided but is still experimental:

COSIGN_EXPERIMENTAL=1 cosign verify fluent/fluent-bit:2.0.6

COSIGN_EXPERIMENTAL=1 is used to allow verification of images signed in keyless mode. To learn more about keyless signing, see the Sigstore keyless signature documentation.

Get started

  1. Download the latest stable image from the 2.0 series:

    docker pull cr.fluentbit.io/fluent/fluent-bit:2.0
  2. After the image is in place, run the following test which makes Fluent Bit measure CPU usage by the container:

    docker run -ti cr.fluentbit.io/fluent/fluent-bit:2.0 \
      -i cpu -o stdout -f 1

That command lets Fluent Bit measure CPU usage every second and flushes the results to the standard output. For example:

[2019/10/01 12:29:02] [ info] [engine] started
[0] cpu.0: [1504290543.000487750, {"cpu_p"=>0.750000, "user_p"=>0.250000, "system_p"=>0.500000, "cpu0.p_cpu"=>0.000000, "cpu0.p_user"=>0.000000, "cpu0.p_system"=>0.000000, "cpu1.p_cpu"=>1.000000, "cpu1.p_user"=>0.000000, "cpu1.p_system"=>1.000000, "cpu2.p_cpu"=>1.000000, "cpu2.p_user"=>1.000000, "cpu2.p_system"=>0.000000, "cpu3.p_cpu"=>0.000000, "cpu3.p_user"=>0.000000, "cpu3.p_system"=>0.000000}]

FAQ

Why is there no Fluent Bit Docker image based on Alpine Linux?

Alpine Linux uses Musl C library instead of Glibc. Musl isn't fully compatible with Glibc, which generated many issues in the following areas when used with Fluent Bit:

  • Memory Allocator: To run properly in high-load environments, Fluent Bit uses Jemalloc as a default memory allocator which reduces fragmentation and provides better performance. Jemalloc can't run smoothly with Musl and requires extra work.

  • Alpine Linux's Musl bootstrap functions have a compatibility issue when loading Golang shared libraries. This causes problems when trying to load Golang output plugins in Fluent Bit.

  • Alpine Linux's Musl time format parser doesn't support Glibc extensions.

  • The Fluent Bit maintainers' preference for base images is Distroless and Debian, for security and maintenance reasons.

Why use Distroless containers?

The reasons for using Distroless are well covered in Why should I use Distroless images?.

  • Include only what you need, reduce the attack surface available.

  • Reduces size and improves performance.

  • Reduces false positives on scans (and reduces resources required for scanning).

  • Reduces supply chain security requirements to only what you need.

  • Helps prevent unauthorised processes or users interacting with the container.

  • Less need to harden the container (and container runtime, K8s, and so on).

  • Faster CI/CD processes.

With any choice, there are downsides:

  • No shell or package manager to update or add things.

    • Generally, dynamic updating is a bad idea in containers because the time at which it's done affects the outcome: two containers started at different times using the same base image can perform differently or get different dependencies.

    • A better approach is to rebuild a new image version. You can do this with Distroless, but it's harder and requires multistage builds or similar to provide the new dependencies.

  • Debugging can be harder.

    • More specifically, you need applications set up to properly expose information for debugging, rather than relying on traditional debug approaches of connecting to processes or dumping memory. This is an upfront cost rather than a runtime cost, but it shifts work left in the development process, so it's hopefully a reduction overall.

  • Assumption that Distroless is secure: nothing is fully secure, and exploits still exist, so it doesn't remove the need for securing your system.

  • Sometimes you need to use a common base image, such as with audits, security, health, and so on.

Using exec to access a container will potentially impact resource limits.

For debugging, debug containers are available now in K8S: https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/#ephemeral-container

  • This can be a significantly different container from the one you want to investigate, with lots of extra tools or even a different base.

  • No resource limits applied to this container, which can be good or bad.

  • Runs in pod namespaces. It's another container that can access everything the others can.

  • Might need the pod architected to share volumes or other information.

  • Requires more recent versions of K8S and the container runtime plus RBAC allowing it.

Monitoring

Learn how to monitor your Fluent Bit data pipelines

Fluent Bit includes features for monitoring the internals of your pipeline, integrations with Prometheus and Grafana, health checks, and connectors to external services:

HTTP server

Fluent Bit includes an HTTP server for querying internal information and monitoring metrics of each running plugin.

You can integrate the monitoring interface with Prometheus.
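For example, a minimal Prometheus scrape configuration pointing at the metrics endpoint described below might look like the following sketch (the target address assumes the HTTP server runs locally on port 2020):

# prometheus.yml (sketch): scrape Fluent Bit's built-in metrics endpoint.
scrape_configs:
  - job_name: fluent-bit
    metrics_path: /api/v2/metrics/prometheus
    static_configs:
      - targets: ['127.0.0.1:2020']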

Get started

To get started, enable the HTTP server from the configuration file. The following configuration instructs Fluent Bit to start an HTTP server on TCP port 2020 and listen on all network interfaces:
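A minimal sketch of such a configuration, in both YAML and classic formats (the cpu input and stdout output are illustrative):

# fluent-bit.yaml
service:
    http_server: on
    http_listen: 0.0.0.0
    http_port: 2020

pipeline:
    inputs:
        - name: cpu

    outputs:
        - name: stdout
          match: '*'

# fluent-bit.conf
[SERVICE]
    HTTP_Server  On
    HTTP_Listen  0.0.0.0
    HTTP_PORT    2020

[INPUT]
    Name cpu

[OUTPUT]
    Name  stdout
    Match *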

Start Fluent Bit with the corresponding configuration chosen above:

# For YAML configuration.
./bin/fluent-bit --config fluent-bit.yaml

# For classic configuration.
./bin/fluent-bit --config fluent-bit.conf

Fluent Bit starts and generates output in your terminal:

Fluent Bit v1.4.0
* Copyright (C) 2019-2020 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2020/03/10 19:08:24] [ info] [engine] started
[2020/03/10 19:08:24] [ info] [http_server] listen iface=0.0.0.0 tcp_port=2020

Use curl to gather information about the HTTP server. The following command sends the command output to the jq program, which outputs human-readable JSON data to the terminal:

$ curl -s http://127.0.0.1:2020 | jq
{
  "fluent-bit": {
    "version": "0.13.0",
    "edition": "Community",
    "flags": [
      "FLB_HAVE_TLS",
      "FLB_HAVE_METRICS",
      "FLB_HAVE_SQLDB",
      "FLB_HAVE_TRACE",
      "FLB_HAVE_HTTP_SERVER",
      "FLB_HAVE_FLUSH_LIBCO",
      "FLB_HAVE_SYSTEMD",
      "FLB_HAVE_VALGRIND",
      "FLB_HAVE_FORK",
      "FLB_HAVE_PROXY_GO",
      "FLB_HAVE_REGEX",
      "FLB_HAVE_C_TLS",
      "FLB_HAVE_SETJMP",
      "FLB_HAVE_ACCEPT4",
      "FLB_HAVE_INOTIFY"
    ]
  }
}

REST API interface

Fluent Bit exposes the following endpoints for monitoring.

| URI | Description | Data format |
| :--- | :--- | :--- |
| / | Fluent Bit build information. | JSON |
| /api/v1/uptime | Return uptime information in seconds. | JSON |
| /api/v1/metrics | Display internal metrics per loaded plugin. | JSON |
| /api/v1/metrics/prometheus | Display internal metrics per loaded plugin in Prometheus Server format. | Prometheus Text 0.0.4 |
| /api/v1/storage | Get internal metrics of the storage layer / buffered data. This option is enabled only if the storage.metrics property is enabled in the SERVICE section. | JSON |
| /api/v1/health | Display the Fluent Bit health check result. | String |
| /api/v2/metrics | Display internal metrics per loaded plugin. | cmetrics text format |
| /api/v2/metrics/prometheus | Display internal metrics per loaded plugin in Prometheus Server format. | Prometheus Text 0.0.4 |
| /api/v2/reload | Execute hot reloading or get the status of hot reloading. See the hot-reloading documentation. | JSON |

v1 metrics

The following descriptions apply to v1 metric endpoints.

/api/v1/metrics/prometheus endpoint

The following descriptions apply to metrics outputted in Prometheus format by the /api/v1/metrics/prometheus endpoint.

The following terms are key to understanding how Fluent Bit processes metrics:

  • Record: a single message collected from a source, such as a single log line in a file.

  • Chunk: log records ingested and stored by Fluent Bit input plugin instances. A batch of records in a chunk are tracked together as a single unit.

    The Fluent Bit engine attempts to fit records into chunks of at most 2 MB, but the size can vary at runtime. Chunks are then sent to an output. An output plugin instance can either successfully send the full chunk to the destination and mark it as successful, or it can fail the chunk entirely if an unrecoverable error is encountered, or it can ask for the chunk to be retried.

| Metric name | Labels | Description | Type | Unit |
| :--- | :--- | :--- | :--- | :--- |
| fluentbit_input_bytes_total | name: the name or alias for the input instance | The number of bytes of log records that this input instance has ingested successfully. | counter | bytes |
| fluentbit_input_records_total | name: the name or alias for the input instance | The number of log records this input ingested successfully. | counter | records |
| fluentbit_output_dropped_records_total | name: the name or alias for the output instance | The number of log records dropped by the output. These records hit an unrecoverable error or retries expired for their chunk. | counter | records |
| fluentbit_output_errors_total | name: the name or alias for the output instance | The number of chunks with an error that's either unrecoverable or unable to retry. This metric represents the number of times a chunk failed, and doesn't correspond with the number of error messages visible in the Fluent Bit log output. | counter | chunks |
| fluentbit_output_proc_bytes_total | name: the name or alias for the output instance | The number of bytes of log records that this output instance sent successfully. This metric represents the total byte size of all unique chunks sent by this output. If a record isn't sent due to some error, it doesn't count towards this metric. | counter | bytes |
| fluentbit_output_proc_records_total | name: the name or alias for the output instance | The number of log records that this output instance sent successfully. This metric represents the total record count of all unique chunks sent by this output. If a record isn't sent successfully, it doesn't count towards this metric. | counter | records |
| fluentbit_output_retried_records_total | name: the name or alias for the output instance | The number of log records that experienced a retry. This metric is calculated at the chunk level; the count increases when an entire chunk is marked for retry. An output plugin might perform multiple actions that generate many error messages when uploading a single chunk. | counter | records |
| fluentbit_output_retries_failed_total | name: the name or alias for the output instance | The number of times that retries expired for a chunk. Each plugin configures a Retry_Limit, which applies to chunks. When the Retry_Limit is exceeded, the chunk is discarded and this metric is incremented. | counter | chunks |
| fluentbit_output_retries_total | name: the name or alias for the output instance | The number of times this output instance requested a retry for a chunk. | counter | chunks |
| fluentbit_uptime | | The number of seconds that Fluent Bit has been running. | counter | seconds |
| process_start_time_seconds | | The Unix Epoch timestamp for when Fluent Bit started. | gauge | seconds |

/api/v1/storage endpoint

The following descriptions apply to metrics outputted in JSON format by the /api/v1/storage endpoint.

| Metric Key | Description | Unit |
| :--- | :--- | :--- |
| chunks.total_chunks | The total number of chunks of records that Fluent Bit is currently buffering. | chunks |
| chunks.mem_chunks | The total number of chunks that are currently buffered in memory. Chunks can be both in memory and on the file system at the same time. | chunks |
| chunks.fs_chunks | The total number of chunks saved to the filesystem. | chunks |
| chunks.fs_chunks_up | The count of chunks that are both in the file system and in memory. | chunks |
| chunks.fs_chunks_down | The count of chunks that are only in the file system. | chunks |
| input_chunks.{plugin name}.status.overlimit | Indicates whether the input instance exceeded its configured Mem_Buf_Limit. | boolean |
| input_chunks.{plugin name}.status.mem_size | The size of memory that this input is consuming to buffer logs in chunks. | bytes |
| input_chunks.{plugin name}.status.mem_limit | The buffer memory limit (Mem_Buf_Limit) that applies to this input plugin. | bytes |
| input_chunks.{plugin name}.chunks.total | The current total number of chunks owned by this input instance. | chunks |
| input_chunks.{plugin name}.chunks.up | The current number of chunks that are in memory for this input. If file system storage is enabled, chunks that are "up" are also stored in the filesystem layer. | chunks |
| input_chunks.{plugin name}.chunks.down | The current number of chunks that are "down" in the filesystem for this input. | chunks |
| input_chunks.{plugin name}.chunks.busy | Chunks that are being processed or sent by outputs and aren't eligible to have new data appended. | chunks |
| input_chunks.{plugin name}.chunks.busy_size | The sum of the byte size of each chunk which is currently marked as busy. | bytes |

v2 metrics

The following descriptions apply to v2 metric endpoints.

/api/v2/metrics/prometheus or /api/v2/metrics endpoint

The following descriptions apply to metrics outputted in Prometheus format by the /api/v2/metrics/prometheus or /api/v2/metrics endpoints.

The following terms are key to understanding how Fluent Bit processes metrics:

  • Record: a single message collected from a source, such as a single log line in a file.

  • Chunk: log records ingested and stored by Fluent Bit input plugin instances. A batch of records in a chunk are tracked together as a single unit.

    The Fluent Bit engine attempts to fit records into chunks of at most 2 MB, but the size can vary at runtime. Chunks are then sent to an output. An output plugin instance can either successfully send the full chunk to the destination and mark it as successful, or it can fail the chunk entirely if an unrecoverable error is encountered, or it can ask for the chunk to be retried.

| Metric Name | Labels | Description | Type | Unit |
| :--- | :--- | :--- | :--- | :--- |
| fluentbit_input_bytes_total | name: the name or alias for the input instance | The number of bytes of log records that this input instance has ingested successfully. | counter | bytes |
| fluentbit_input_records_total | name: the name or alias for the input instance | The number of log records this input ingested successfully. | counter | records |
| fluentbit_filter_bytes_total | name: the name or alias for the filter instance | The number of bytes of log records that this filter instance has ingested successfully. | counter | bytes |
| fluentbit_filter_records_total | name: the name or alias for the filter instance | The number of log records this filter has ingested successfully. | counter | records |
| fluentbit_filter_added_records_total | name: the name or alias for the filter instance | The number of log records added by the filter into the data pipeline. | counter | records |
| fluentbit_filter_drop_records_total | name: the name or alias for the filter instance | The number of log records dropped by the filter and removed from the data pipeline. | counter | records |
| fluentbit_output_dropped_records_total | name: the name or alias for the output instance | The number of log records dropped by the output. These records hit an unrecoverable error or retries expired for their chunk. | counter | records |
| fluentbit_output_errors_total | name: the name or alias for the output instance | The number of chunks with an error that's either unrecoverable or unable to retry. This metric represents the number of times a chunk failed, and doesn't correspond with the number of error messages visible in the Fluent Bit log output. | counter | chunks |
| fluentbit_output_proc_bytes_total | name: the name or alias for the output instance | The number of bytes of log records that this output instance sent successfully. This metric represents the total byte size of all unique chunks sent by this output. If a record isn't sent due to some error, it doesn't count towards this metric. | counter | bytes |
| fluentbit_output_proc_records_total | name: the name or alias for the output instance | The number of log records that this output instance sent successfully. This metric represents the total record count of all unique chunks sent by this output. If a record isn't sent successfully, it doesn't count towards this metric. | counter | records |
| fluentbit_output_retried_records_total | name: the name or alias for the output instance | The number of log records that experienced a retry. This metric is calculated at the chunk level; the count increases when an entire chunk is marked for retry. An output plugin might perform multiple actions that generate many error messages when uploading a single chunk. | counter | records |
| fluentbit_output_retries_failed_total | name: the name or alias for the output instance | The number of times that retries expired for a chunk. Each plugin configures a Retry_Limit, which applies to chunks. When the Retry_Limit is exceeded, the chunk is discarded and this metric is incremented. | counter | chunks |
| fluentbit_output_retries_total | name: the name or alias for the output instance | The number of times this output instance requested a retry for a chunk. | counter | chunks |
| fluentbit_uptime | hostname: the hostname of the machine running Fluent Bit | The number of seconds that Fluent Bit has been running. | counter | seconds |
| fluentbit_process_start_time_seconds | hostname: the hostname of the machine running Fluent Bit | The Unix Epoch timestamp for when Fluent Bit started. | gauge | seconds |
| fluentbit_build_info | hostname: the hostname, version: the version of Fluent Bit, os: OS type | Build version information. The value originates from the Unix Epoch timestamp taken when the configuration context was initialized. | gauge | seconds |
| fluentbit_hot_reloaded_times | hostname: the hostname of the machine running Fluent Bit | The number of times hot reloading has been triggered. | gauge | seconds |

Storage layer

The following are detailed descriptions for the metrics collected by the storage layer.

| Metric Name | Labels | Description | Type | Unit |
| :--- | :--- | :--- | :--- | :--- |
| fluentbit_input_chunks.storage_chunks | None | The total number of chunks of records that Fluent Bit is currently buffering. | gauge | chunks |
| fluentbit_storage_mem_chunk | None | The total number of chunks that are currently buffered in memory. Chunks can be both in memory and on the file system at the same time. | gauge | chunks |
| fluentbit_storage_fs_chunks | None | The total number of chunks saved to the file system. | gauge | chunks |
| fluentbit_storage_fs_chunks_up | None | The count of chunks that are both in the file system and in memory. | gauge | chunks |
| fluentbit_storage_fs_chunks_down | None | The count of chunks that are only in the file system. | gauge | chunks |
| fluentbit_storage_fs_chunks_busy | None | The total number of chunks in a busy state. | gauge | chunks |
| fluentbit_storage_fs_chunks_busy_bytes | None | The total bytes of chunks in a busy state. | gauge | bytes |
| fluentbit_input_storage_overlimit | name: the name or alias for the input instance | Indicates whether the input instance exceeded its configured Mem_Buf_Limit. | gauge | boolean |
| fluentbit_input_storage_memory_bytes | name: the name or alias for the input instance | The size of memory that this input is consuming to buffer logs in chunks. | gauge | bytes |
| fluentbit_input_storage_chunks | name: the name or alias for the input instance | The current total number of chunks owned by this input instance. | gauge | chunks |
| fluentbit_input_storage_chunks_up | name: the name or alias for the input instance | The current number of chunks that are in memory for this input. If file system storage is enabled, chunks that are "up" are also stored in the filesystem layer. | gauge | chunks |
| fluentbit_input_storage_chunks_down | name: the name or alias for the input instance | The current number of chunks that are "down" in the filesystem for this input. | gauge | chunks |
| fluentbit_input_storage_chunks_busy | name: the name or alias for the input instance | Chunks that are being processed or sent by outputs and aren't eligible to have new data appended. | gauge | chunks |
| fluentbit_input_storage_chunks_busy_bytes | name: the name or alias for the input instance | The sum of the byte size of each chunk which is currently marked as busy. | gauge | bytes |
| fluentbit_output_upstream_total_connections | name: the name or alias for the output instance | The sum of the connection counts of each output plugin. | gauge | bytes |
| fluentbit_output_upstream_busy_connections | name: the name or alias for the output instance | The sum of the connection counts in a busy state for each output plugin. | gauge | bytes |

Uptime example

Query the service uptime with the following command:

$ curl -s http://127.0.0.1:2020/api/v1/uptime | jq

The command prints output similar to the following:

{
  "uptime_sec": 8950000,
  "uptime_hr": "Fluent Bit has been running:  103 days, 14 hours, 6 minutes and 40 seconds"
}

Metrics example

Query internal metrics in JSON format with the following command:

curl -s http://127.0.0.1:2020/api/v1/metrics | jq

The command prints output similar to the following:

{
  "input": {
    "cpu.0": {
      "records": 8,
      "bytes": 2536
    }
  },
  "output": {
    "stdout.0": {
      "proc_records": 5,
      "proc_bytes": 1585,
      "errors": 0,
      "retries": 0,
      "retries_failed": 0
    }
  }
}

Query metrics in Prometheus format

Query internal metrics in Prometheus Text 0.0.4 format:

curl -s http://127.0.0.1:2020/api/v1/metrics/prometheus

This command returns the same metrics in Prometheus format instead of JSON:

fluentbit_input_records_total{name="cpu.0"} 57 1509150350542
fluentbit_input_bytes_total{name="cpu.0"} 18069 1509150350542
fluentbit_output_proc_records_total{name="stdout.0"} 54 1509150350542
fluentbit_output_proc_bytes_total{name="stdout.0"} 17118 1509150350542
fluentbit_output_errors_total{name="stdout.0"} 0 1509150350542
fluentbit_output_retries_total{name="stdout.0"} 0 1509150350542
fluentbit_output_retries_failed_total{name="stdout.0"} 0 1509150350542

Configure aliases

By default, plugins configured at runtime get an internal name in the format plugin_name.ID. For monitoring purposes, this can be confusing if many plugins of the same type were configured. To make a distinction, each configured input or output section can set an alias that will be used as the parent name for the metric.

The following example sets an alias in the INPUT section of the configuration file, which is using the cpu input plugin:

service:
    http_server: on
    http_listen: 0.0.0.0
    http_port: 2020

pipeline:
    inputs:
        - name: cpu
          alias: server1_cpu

    outputs:
        - name: stdout
          alias: raw_output
          match: '*'

[SERVICE]
    HTTP_Server  On
    HTTP_Listen  0.0.0.0
    HTTP_PORT    2020

[INPUT]
    Name  cpu
    Alias server1_cpu

[OUTPUT]
    Name  stdout
    Alias raw_output
    Match *

When querying the related metrics, the aliases are returned instead of the plugin name:

{
  "input": {
    "server1_cpu": {
      "records": 8,
      "bytes": 2536
    }
  },
  "output": {
    "raw_output": {
      "proc_records": 5,
      "proc_bytes": 1585,
      "errors": 0,
      "retries": 0,
      "retries_failed": 0
    }
  }
}

Grafana dashboard and alerts

You can create Grafana dashboards and alerts using Fluent Bit's exposed Prometheus style metrics.

The provided dashboard is heavily inspired by Banzai Cloud's logging operator dashboard, with a few key differences such as the use of the instance label, stacked graphs, and a focus on Fluent Bit metrics.

Alerts

Sample alerts are available.

Health Check for Fluent Bit

Fluent Bit supports the following configuration options to set up the health check.

| Configuration name | Description | Default |
| :--- | :--- | :--- |
| Health_Check | Enable the health check feature. | Off |
| HC_Errors_Count | The error count needed to meet the unhealthy requirement; this is a sum for all output plugins in a defined HC_Period. Example output error: [2022/02/16 10:44:10] [ warn] [engine] failed to flush chunk '1-1645008245.491540684.flb', retry in 7 seconds: task_id=0, input=forward.1 > output=cloudwatch_logs.3 (out_id=3) | 5 |
| HC_Retry_Failure_Count | The retry failure count needed to meet the unhealthy requirement; this is a sum for all output plugins in a defined HC_Period. Example retry failure: [2022/02/16 20:11:36] [ warn] [engine] chunk '1-1645042288.260516436.flb' cannot be retried: task_id=0, input=tcp.3 > output=cloudwatch_logs.1 | 5 |
| HC_Period | The time period, in seconds, over which to count error and retry failure data points. | 60 |

Not every error log line counts toward these thresholds. Errors and retry failures are counted only for specific engine events, like the examples shown in the configuration table descriptions.

Based on the HC_Period setting, if the real error count exceeds HC_Errors_Count, or the retry failure count exceeds HC_Retry_Failure_Count, Fluent Bit is considered unhealthy. For example, with the defaults (HC_Errors_Count 5, HC_Period 60), six flush errors within 60 seconds mark the service unhealthy. The health endpoint returns an HTTP status 500 and an error message. Otherwise, the endpoint returns HTTP status 200 and an ok message.

The equation to calculate this behavior is:

health status = (HC_Errors_Count > HC_Errors_Count config value) OR
(HC_Retry_Failure_Count > HC_Retry_Failure_Count config value) IN
the HC_Period interval

HC_Errors_Count and HC_Retry_Failure_Count apply only to output plugins, and are summed across errors and retry failures from all running output plugins.

The following configuration example shows how to define these settings:

service:
    http_server: on
    http_listen: 0.0.0.0
    http_port: 2020
    health_check: on
    hc_errors_count: 5

Use the following command to call the health endpoint:
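curl -s http://127.0.0.1:2020/api/v1/health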

With the example configuration, the health status is determined by the equation above.

  • If this equation evaluates to TRUE, then Fluent Bit is unhealthy.

  • If this equation evaluates to FALSE, then Fluent Bit is healthy.

Telemetry Pipeline

Telemetry Pipeline is a hosted service that lets you monitor your Fluent Bit agents, including data flow, metrics, and configurations.

# For YAML configuration.
./bin/fluent-bit --config fluent-bit.yaml

# For classic configuration.
./bin/fluent-bit --config fluent-bit.conf
Fluent Bit v1.4.0
* Copyright (C) 2019-2020 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2020/03/10 19:08:24] [ info] [engine] started
[2020/03/10 19:08:24] [ info] [http_server] listen iface=0.0.0.0 tcp_port=2020
$ curl -s http://127.0.0.1:2020 | jq
{
  "fluent-bit": {
    "version": "0.13.0",
    "edition": "Community",
    "flags": [
      "FLB_HAVE_TLS",
      "FLB_HAVE_METRICS",
      "FLB_HAVE_SQLDB",
      "FLB_HAVE_TRACE",
      "FLB_HAVE_HTTP_SERVER",
      "FLB_HAVE_FLUSH_LIBCO",
      "FLB_HAVE_SYSTEMD",
      "FLB_HAVE_VALGRIND",
      "FLB_HAVE_FORK",
      "FLB_HAVE_PROXY_GO",
      "FLB_HAVE_REGEX",
      "FLB_HAVE_C_TLS",
      "FLB_HAVE_SETJMP",
      "FLB_HAVE_ACCEPT4",
      "FLB_HAVE_INOTIFY"
    ]
  }
}

/

Fluent Bit build information.

JSON

/api/v1/uptime

Return uptime information in seconds.

JSON

/api/v1/metrics

Display internal metrics per loaded plugin.

JSON

/api/v1/metrics/prometheus

Display internal metrics per loaded plugin in Prometheus Server format.

Prometheus Text 0.0.4

/api/v1/storage

Get internal metrics of the storage layer / buffered data. This option is enabled only if in the SERVICE section of the property storage.metrics is enabled.

JSON

/api/v1/health

Display the Fluent Bit health check result.

String

/api/v2/metrics

Display internal metrics per loaded plugin.

cmetrics text format

/api/v2/metrics/prometheus

Display internal metrics per loaded plugin ready in Prometheus Server format.

Prometheus Text 0.0.4

`/api/v2/reload

Execute hot reloading or get the status of hot reloading. See the hot-reloading documentation.

JSON

fluentbit_input_bytes_total

name: the name or alias for the input instance

The number of bytes of log records that this input instance has ingested successfully.

counter

bytes

fluentbit_input_records_total

name: the name or alias for the input instance

The number of log records this input ingested successfully.

counter

records

fluentbit_output_dropped_records_total

name: the name or alias for the output instance

The number of log records dropped by the output. These records hit an unrecoverable error or retries expired for their chunk.

counter

records

fluentbit_output_errors_total

name: the name or alias for the output instance

The number of chunks with an error that's either unrecoverable or unable to retry. This metric represents the number of times a chunk failed, and doesn't correspond with the number of error messages visible in the Fluent Bit log output.

counter

chunks

fluentbit_output_proc_bytes_total

name: the name or alias for the output instance

The number of bytes of log records that this output instance sent successfully. This metric represents the total byte size of all unique chunks sent by this output. If a record isn't sent due to some error, it doesn't count towards this metric.

counter

bytes

fluentbit_output_proc_records_total

name: the name or alias for the output instance

The number of log records that this output instance sent successfully. This metric represents the total record count of all unique chunks sent by this output. If a record isn't sent successfully, it doesn't count towards this metric.

counter

records

fluentbit_output_retried_records_total

name: the name or alias for the output instance

The number of log records that experienced a retry. This metric is calculated at the chunk level, the count increased when an entire chunk is marked for retry. An output plugin might perform multiple actions that generate many error messages when uploading a single chunk.

counter

records

fluentbit_output_retries_failed_total

name: the name or alias for the output instance

The number of times that retries expired for a chunk. Each plugin configures a Retry_Limit, which applies to chunks. When the Retry_Limit is exceeded, the chunk is discarded and this metric is incremented.

counter

chunks

fluentbit_output_retries_total

name: the name or alias for the output instance

The number of times this output instance requested a retry for a chunk.

counter

chunks

fluentbit_uptime

The number of seconds that Fluent Bit has been running.

counter

seconds

process_start_time_seconds

The Unix Epoch timestamp for when Fluent Bit started.

gauge

seconds

chunks.total_chunks

The total number of chunks of records that Fluent Bit is currently buffering.

chunks

chunks.mem_chunks

The total number of chunks that are currently buffered in memory. Chunks can be both in memory and on the file system at the same time.

chunks

chunks.fs_chunks

The total number of chunks saved to the filesystem.

chunks

chunks.fs_chunks_up

The count of chunks that are both in file system and in memory.

chunks

chunks.fs_chunks_down

The count of chunks that are only in the file system.

chunks

input_chunks.{plugin name}.status.overlimit

Indicates whether the input instance exceeded its configured Mem_Buf_Limit.

boolean

input_chunks.{plugin name}.status.mem_size

The size of memory that this input is consuming to buffer logs in chunks.

bytes

input_chunks.{plugin name}.status.mem_limit

The buffer memory limit (Mem_Buf_Limit) that applies to this input plugin.

bytes

input_chunks.{plugin name}.chunks.total

The current total number of chunks owned by this input instance.

chunks

input_chunks.{plugin name}.chunks.up

The current number of chunks that are in memory for this input. If file system storage is enabled, chunks that are "up" are also stored in the filesystem layer.

chunks

input_chunks.{plugin name}.chunks.down

The current number of chunks that are "down" in the filesystem for this input.

chunks

input_chunks.{plugin name}.chunks.busy

Chunks are that are being processed or sent by outputs and aren't eligible to have new data appended.

chunks

input_chunks.{plugin name}.chunks.busy_size

The sum of the byte size of each chunk which is currently marked as busy.

bytes

fluentbit_input_bytes_total

name: the name or alias for the input instance

The number of bytes of log records that this input instance has ingested successfully.

counter

bytes

fluentbit_input_records_total

name: the name or alias for the input instance

The number of log records this input ingested successfully.

counter

records

fluentbit_filter_bytes_total

name: the name or alias for the filter instance

The number of bytes of log records that this filter instance has ingested successfully.

counter

bytes

fluentbit_filter_records_total

name: the name or alias for the filter instance

The number of log records this filter has ingested successfully.

counter

records

fluentbit_filter_added_records_total

name: the name or alias for the filter instance

The number of log records added by the filter into the data pipeline.

counter

records

fluentbit_filter_drop_records_total

name: the name or alias for the filter instance

The number of log records dropped by the filter and removed from the data pipeline.

counter

records

fluentbit_output_dropped_records_total

name: the name or alias for the output instance

The number of log records dropped by the output. These records hit an unrecoverable error or retries expired for their chunk.

counter

records

fluentbit_output_errors_total

name: the name or alias for the output instance

The number of chunks with an error that's either unrecoverable or unable to retry. This metric represents the number of times a chunk failed, and doesn't correspond with the number of error messages visible in the Fluent Bit log output.

counter

chunks

fluentbit_output_proc_bytes_total

name: the name or alias for the output instance

The number of bytes of log records that this output instance sent successfully. This metric represents the total byte size of all unique chunks sent by this output. If a record isn't sent due to some error, it doesn't count towards this metric.

counter

bytes

fluentbit_output_proc_records_total

| Metric name | Labels | Description | Type | Unit |
| --- | --- | --- | --- | --- |
| `fluentbit_output_proc_records_total` | `name`: the name or alias for the output instance | The number of log records that this output instance sent successfully. This is the total record count of all unique chunks sent by this output. If a record isn't sent successfully, it doesn't count toward this metric. | counter | records |
| `fluentbit_output_retried_records_total` | `name`: the name or alias for the output instance | The number of log records that experienced a retry. This metric is calculated at the chunk level: the count increases when an entire chunk is marked for retry. An output plugin might perform multiple actions that generate many error messages when uploading a single chunk. | counter | records |
| `fluentbit_output_retries_failed_total` | `name`: the name or alias for the output instance | The number of times that retries expired for a chunk. Each plugin configures a `Retry_Limit`, which applies to chunks. When the `Retry_Limit` is exceeded, the chunk is discarded and this metric is incremented. | counter | chunks |
| `fluentbit_output_retries_total` | `name`: the name or alias for the output instance | The number of times this output instance requested a retry for a chunk. | counter | chunks |
| `fluentbit_uptime` | `hostname`: the hostname running Fluent Bit | The number of seconds that Fluent Bit has been running. | counter | seconds |
| `fluentbit_process_start_time_seconds` | `hostname`: the hostname running Fluent Bit | The Unix Epoch timestamp for when Fluent Bit started. | gauge | seconds |
| `fluentbit_build_info` | `hostname`: the hostname, `version`: the version of Fluent Bit, `os`: OS type | Build version information. The metric value is the Unix Epoch timestamp recorded when the configuration context was initialized. | gauge | seconds |
| `fluentbit_hot_reloaded_times` | `hostname`: the hostname running Fluent Bit | The number of times Fluent Bit has been hot reloaded. | gauge | times |
| `fluentbit_input_chunks.storage_chunks` | None | The total number of chunks of records that Fluent Bit is currently buffering. | gauge | chunks |
| `fluentbit_storage_mem_chunk` | None | The total number of chunks currently buffered in memory. Chunks can be both in memory and on the file system at the same time. | gauge | chunks |
| `fluentbit_storage_fs_chunks` | None | The total number of chunks saved to the file system. | gauge | chunks |
| `fluentbit_storage_fs_chunks_up` | None | The count of chunks that are both in the file system and in memory. | gauge | chunks |
| `fluentbit_storage_fs_chunks_down` | None | The count of chunks that are only in the file system. | gauge | chunks |
| `fluentbit_storage_fs_chunks_busy` | None | The total number of chunks in a busy state. | gauge | chunks |
| `fluentbit_storage_fs_chunks_busy_bytes` | None | The total size in bytes of chunks in a busy state. | gauge | bytes |
| `fluentbit_input_storage_overlimit` | `name`: the name or alias for the input instance | Indicates whether the input instance exceeded its configured `Mem_Buf_Limit`. | gauge | boolean |
| `fluentbit_input_storage_memory_bytes` | `name`: the name or alias for the input instance | The amount of memory this input is consuming to buffer logs in chunks. | gauge | bytes |
| `fluentbit_input_storage_chunks` | `name`: the name or alias for the input instance | The current total number of chunks owned by this input instance. | gauge | chunks |
| `fluentbit_input_storage_chunks_up` | `name`: the name or alias for the input instance | The current number of chunks that are in memory for this input. If file system storage is enabled, chunks that are "up" are also stored in the file system layer. | gauge | chunks |
| `fluentbit_input_storage_chunks_down` | `name`: the name or alias for the input instance | The current number of chunks that are "down" in the file system for this input. | gauge | chunks |
| `fluentbit_input_storage_chunks_busy` | `name`: the name or alias for the input instance | The number of chunks that are being processed or sent by outputs and aren't eligible to have new data appended. | gauge | chunks |
| `fluentbit_input_storage_chunks_busy_bytes` | `name`: the name or alias for the input instance | The sum of the byte size of each chunk currently marked as busy. | gauge | bytes |
| `fluentbit_output_upstream_total_connections` | `name`: the name or alias for the output instance | The sum of the connection count of each output plugin. | gauge | connections |
| `fluentbit_output_upstream_busy_connections` | `name`: the name or alias for the output instance | The sum of the connections in a busy state for each output plugin. | gauge | connections |
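
These metrics can also be consumed programmatically. The following is a minimal sketch, not part of Fluent Bit itself, that scrapes the built-in Prometheus endpoint and prints every sample. It assumes Fluent Bit is listening on `127.0.0.1:2020` with the HTTP server enabled, and that the third-party `requests` and `prometheus_client` Python packages are installed; depending on your version, the endpoint path might be `/api/v1/metrics/prometheus` or `/api/v2/metrics/prometheus`.

```python
# Sketch: scrape Fluent Bit's built-in Prometheus endpoint and print each sample.
# Assumes a local Fluent Bit with HTTP_Server enabled on port 2020, and that
# `requests` and `prometheus_client` are installed (pip install requests prometheus_client).
import requests
from prometheus_client.parser import text_string_to_metric_families

ENDPOINT = "http://127.0.0.1:2020/api/v1/metrics/prometheus"  # or /api/v2/... on newer versions

def dump_metrics(endpoint: str = ENDPOINT) -> None:
    body = requests.get(endpoint, timeout=5).text
    for family in text_string_to_metric_families(body):
        for sample in family.samples:
            # sample.labels carries the `name`/`hostname` labels described in the table above.
            print(f"{sample.name}{sample.labels} = {sample.value}")

if __name__ == "__main__":
    dump_metrics()
```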

$ curl -s http://127.0.0.1:2020/api/v1/uptime | jq
{
  "uptime_sec": 8950000,
  "uptime_hr": "Fluent Bit has been running:  103 days, 14 hours, 6 minutes and 40 seconds"
}
curl -s http://127.0.0.1:2020/api/v1/metrics | jq
{
  "input": {
    "cpu.0": {
      "records": 8,
      "bytes": 2536
    }
  },
  "output": {
    "stdout.0": {
      "proc_records": 5,
      "proc_bytes": 1585,
      "errors": 0,
      "retries": 0,
      "retries_failed": 0
    }
  }
}
curl -s http://127.0.0.1:2020/api/v1/metrics/prometheus
fluentbit_input_records_total{name="cpu.0"} 57 1509150350542
fluentbit_input_bytes_total{name="cpu.0"} 18069 1509150350542
fluentbit_output_proc_records_total{name="stdout.0"} 54 1509150350542
fluentbit_output_proc_bytes_total{name="stdout.0"} 17118 1509150350542
fluentbit_output_errors_total{name="stdout.0"} 0 1509150350542
fluentbit_output_retries_total{name="stdout.0"} 0 1509150350542
fluentbit_output_retries_failed_total{name="stdout.0"} 0 1509150350542
service:
    http_server: on
    http_listen: 0.0.0.0
    http_port: 2020
    
pipeline:
    inputs:
        - name: cpu
          alias: server1_cpu
          
    outputs:       
        - name: stdout
          alias: raw_output
          match: '*'
[SERVICE]
    HTTP_Server  On
    HTTP_Listen  0.0.0.0
    HTTP_PORT    2020

[INPUT]
    Name  cpu
    Alias server1_cpu

[OUTPUT]
    Name  stdout
    Alias raw_output
    Match *
{
  "input": {
    "server1_cpu": {
      "records": 8,
      "bytes": 2536
    }
  },
  "output": {
    "raw_output": {
      "proc_records": 5,
      "proc_bytes": 1585,
      "errors": 0,
      "retries": 0,
      "retries_failed": 0
    }
  }
}
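
Because aliases keep the metric keys stable across configuration changes, a monitoring script can address plugin instances by alias instead of positional names like `stdout.0`. The sketch below assumes the aliased configuration above and uses only the Python standard library; the warning condition is an arbitrary illustration, not a recommended threshold.

```python
# Sketch: poll the JSON metrics endpoint and check an aliased output instance.
# Assumes the `raw_output` alias configured in the example above.
import json
import urllib.request

METRICS_URL = "http://127.0.0.1:2020/api/v1/metrics"

def check_output(alias: str = "raw_output") -> None:
    with urllib.request.urlopen(METRICS_URL, timeout=5) as resp:
        metrics = json.load(resp)
    out = metrics["output"][alias]
    # `errors` and `retries_failed` correspond to fluentbit_output_errors_total
    # and fluentbit_output_retries_failed_total in the Prometheus export.
    if out["errors"] or out["retries_failed"]:
        print(f"WARN: {alias} has {out['errors']} errors, "
              f"{out['retries_failed']} failed retries")
    else:
        print(f"OK: {alias} processed {out['proc_records']} records")

if __name__ == "__main__":
    check_output()
```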

| Configuration name | Description | Default |
| --- | --- | --- |
| `Health_Check` | Enable the health check feature. | `Off` |
| `HC_Errors_Count` | The error count that marks the instance as unhealthy. This is a sum across all output plugins within one `HC_Period`. Example of a counted output error: `[2022/02/16 10:44:10] [ warn] [engine] failed to flush chunk '1-1645008245.491540684.flb', retry in 7 seconds: task_id=0, input=forward.1 > output=cloudwatch_logs.3 (out_id=3)` | `5` |
| `HC_Retry_Failure_Count` | The retry failure count that marks the instance as unhealthy. This is a sum across all output plugins within one `HC_Period`. Example of a counted retry failure: `[2022/02/16 20:11:36] [ warn] [engine] chunk '1-1645042288.260516436.flb' cannot be retried: task_id=0, input=tcp.3 > output=cloudwatch_logs.1` | `5` |
| `HC_Period` | The time period, in seconds, over which errors and retry failures are counted. | `60` |

Fluent Bit reports an unhealthy status when, within one `HC_Period` interval, the observed error count exceeds the configured `HC_Errors_Count` or the observed retry failure count exceeds the configured `HC_Retry_Failure_Count`:

health status = (error count > HC_Errors_Count) OR (retry failure count > HC_Retry_Failure_Count) in the HC_Period interval
service:
    http_server: on
    http_listen: 0.0.0.0
    http_port: 2020
    health_check: on
    hc_errors_count: 5
    hc_retry_failure_count: 5
    hc_period: 5
    
pipeline:
    inputs:
        - name: cpu
          
    outputs:       
        - name: stdout
          match: '*'
[SERVICE]
    HTTP_Server  On
    HTTP_Listen  0.0.0.0
    HTTP_PORT    2020
    Health_Check On
    HC_Errors_Count 5
    HC_Retry_Failure_Count 5
    HC_Period 5

[INPUT]
    Name  cpu

[OUTPUT]
    Name  stdout
    Match *
curl -s http://127.0.0.1:2020/api/v1/health
With the configuration above: health status = (error count > 5) OR (retry failure count > 5) within a 5-second period
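
For automated monitoring, the health endpoint can be polled directly. Below is a minimal sketch, assuming the configuration above; it treats an HTTP 200 response as healthy and anything else, including a refused connection, as unhealthy. Verify the exact response codes against your deployment.

```python
# Sketch: poll Fluent Bit's health endpoint and exit non-zero when unhealthy.
# Assumes HTTP 200 signals healthy; any other status or a connection
# failure is treated as a problem.
import sys
import urllib.error
import urllib.request

HEALTH_URL = "http://127.0.0.1:2020/api/v1/health"

def is_healthy() -> bool:
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=5) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

if __name__ == "__main__":
    healthy = is_healthy()
    print("healthy" if healthy else "unhealthy")
    sys.exit(0 if healthy else 1)
```
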
- HTTP server: JSON and Prometheus Exporter-style metrics
- Grafana dashboards and alerts
- Health checks
- Telemetry Pipeline: a hosted service to monitor and visualize your pipelines
[SERVICE]
    HTTP_Server  On
    HTTP_Listen  0.0.0.0
    HTTP_PORT    2020

[INPUT]
    Name cpu

[OUTPUT]
    Name  stdout
    Match *
- Example Grafana dashboard for the CPU input shown above
- Banzai Cloud logging operator dashboard and related blog post
- Telemetry Pipeline hosted dashboard
service:
    http_server: on
    http_listen: 0.0.0.0
    http_port: 2020
    
pipeline:
    inputs:
        - name: cpu
          
    outputs:       
        - name: stdout
          match: '*'