Buildroot embedded Linux

Install Fluent Bit in your embedded Linux system.

Install

To install, select Fluent Bit in your defconfig. See the Config.in file for all configuration options.

BR2_PACKAGE_FLUENT_BIT=y

Run

The default configuration file is written to /etc/fluent-bit/fluent-bit.conf.

Fluent Bit is started by the S99fluent-bit script.

Support

Any configuration with a toolchain that supports threads and dynamic library linking is supported.

Lab resources

An overview of free public labs for learning how to successfully use Fluent Bit.

To use these lab resources to their full extent, you'll need to install resources in your local environment to test and run Fluent Bit.

O11y workshops by Chronosphere

Chronosphere provides several open source observability workshops, including a Fluent Bit workshop:

  • Lab 1 - Introduction to Fluent Bit

  • Lab 2 - Installing Fluent Bit

  • Lab 3 - Exploring First Pipelines

  • Lab 4 - Exploring More Pipelines

  • Lab 5 - Understanding Backpressure

  • Lab 6 - Avoid Telemetry Data Loss

  • Lab 7 - Pipeline Integration with OpenTelemetry

  • Lab 8 - Controlling logs with Fluent Bit on Kubernetes

  • Lab 9 - Visualizing Fluent Bit telemetry data with OpenSearch

  • Lab 10 - Advanced telemetry routing

You can also view the source files for these workshops on GitLab.

Fluent Bit pod logging on Kubernetes workshop by Amazon

This workshop by Amazon covers deploying Fluent Bit for pod-level logging on Kubernetes and routing data to CloudWatch Logs.

Data pipeline

The Fluent Bit data pipeline incorporates several specific concepts. Data flows through the pipeline in the following order.

Inputs

Input plugins gather information from different sources. Some plugins collect data from log files, and others gather metrics information from the operating system. There are many plugins to suit different needs.

Linux packages

Fluent Bit is available for a variety of Linux distributions and embedded Linux systems.

The most secure option is to create the repositories according to the instructions for your specific OS.

Single line install

An installation script is provided for most Linux targets. By default, it installs the most recent released version:

curl https://raw.githubusercontent.com/fluent/fluent-bit/master/install.sh | sh

This script is a helper and should always be validated prior to use.

Build from source code

You can download the most recent stable or development source code.

Stable

For production systems, it's strongly suggested that you get the latest stable release of the source code, in either zip or tarball format, from GitHub using the following link pattern:

https://github.com/fluent/fluent-bit/archive/refs/tags/v<release version>.tar.gz
https://github.com/fluent/fluent-bit/archive/refs/tags/v<release version>.zip

For example, for version 1.8.12 the link is:

https://github.com/fluent/fluent-bit/archive/refs/tags/v1.8.12.tar.gz

GPG key updates

For the 1.9.0 and 1.8.15 releases and later, the GPG key has been updated. Ensure the new key is added.

The GPG key fingerprint of the new key is:

C3C0 A285 34B9 293E AF51  FABD 9F9D DC08 3888 C1CD
Fluentbit releases (Releases signing key) <[email protected]>

The previous key is still available and might be required to install previous versions.

The GPG key fingerprint of the old key is:

F209 D876 2A60 CD49 E680 633B 4FF8 368B 6EA0 722A

Refer to the supported platform documentation to see which platforms are supported in each release.

Migration to Fluent Bit

For version 1.9 and later, td-agent-bit is a deprecated package and was removed after release 1.9.9. The correct package name to use now is fluent-bit.

Development

If you want to contribute to Fluent Bit, use the most recent code. You can get the development version from the Git repository:

git clone https://github.com/fluent/fluent-bit

The master branch is where Fluent Bit development happens. Users of the development version should expect issues when compiling or at runtime.

Fluent Bit users are encouraged to help test every development version to ensure a stable release.

Next step

After downloading Fluent Bit, install it using one of the following methods:

  • Build and install

  • Build with a static configuration

Processors

Processors are components that modify, transform, or enhance data as it flows through the pipeline. Processors are attached directly to individual input or output plugins rather than defined globally, and they don't use tag matching.

Because processors run in the same thread as their associated plugin, they can reduce performance overhead compared to filters, especially when multithreading is enabled.

Processors are configured in YAML configuration files only.

Parser

Parsers convert unstructured data to structured data. Use a parser to give structure to incoming data as input plugins collect it.

Filter

Filters let you alter the collected data before delivering it to a destination. In production environments you need full control of the data you're collecting. Using filters lets you control data before processing.

Buffer

The buffering phase in the pipeline aims to provide a unified and persistent mechanism to store your data, using the primary in-memory model or the file system-based mode.

Routing

Routing is a core feature that lets you route your data through filters, and then to one or multiple destinations. The router relies on the concept of tags and matching rules.

Output

Output plugins let you define destinations for your data. Common destinations are remote services, local file systems, or other standard interfaces.


Fluentd and Fluent Bit

The production grade telemetry ecosystem

Telemetry data processing can be complex, especially at scale. That's why Fluentd was created. Fluentd is more than a basic tool: it's grown into a full-scale ecosystem that contains SDKs for different languages and subprojects like Fluent Bit.

The Fluentd and Fluent Bit projects are both:

  • Licensed under the terms of the Apache License v2.0.

  • Graduated hosted projects of the Cloud Native Computing Foundation (CNCF).

  • Production-grade solutions, with Fluent Bit deployed over 15 billion times globally.

  • Vendor neutral and community driven.

  • Widely adopted by the industry, being trusted by major companies like Amazon, Microsoft, Google, and hundreds of others.

The projects have many similarities: Fluent Bit is designed and built on top of the best ideas of Fluentd's architecture and general design. Which one to use depends on your end users' needs.

The following table compares different areas of the two projects:

| Attribute | Fluentd | Fluent Bit |
| --- | --- | --- |
| Memory | Greater than 60 MB | Approximately 450 KB |
| Performance | Medium performance | High performance |
| Dependencies | Built as a Ruby Gem; depends on other gems | Zero dependencies, unless required by a plugin |
| Plugins | Over 1,000 external plugins available | Over 100 built-in plugins available |
| OpenTelemetry | Available through plugins | Native OTLP ingestion and delivery |
| License | Apache License v2.0 | Apache License v2.0 |
| Scope | Containers / Servers | Embedded Linux / Containers / Servers |
| Language | C and Ruby | C |

Both Fluentd and Fluent Bit can work as aggregators or forwarders, and they can complement each other or be used as standalone solutions.

In recent years, cloud providers have switched from Fluentd to Fluent Bit for performance and compatibility reasons. Fluent Bit is now considered the next-generation solution.

Key concepts

Learn these key concepts to understand how Fluent Bit operates.

Before diving into Fluent Bit, you might want to get acquainted with some key concepts of the service. This document introduces those concepts and common Fluent Bit terminology. Reading it will give you a more general understanding of the following topics:

  • Event or Record

  • Filtering

  • Processor

  • Tag

  • Timestamp

  • Match

  • Structured Message

Events or records

Every incoming piece of data that belongs to a log, metric, trace, or profile that's retrieved by Fluent Bit is considered an Event or a Record.

As an example, consider the following content of a Syslog file:

Jan 18 12:52:16 flb systemd[2222]: Starting GNOME Terminal Server
Jan 18 12:52:16 flb dbus-daemon[2243]: [session uid=1000 pid=2243] Successfully activated service 'org.gnome.Terminal'
Jan 18 12:52:16 flb systemd[2222]: Started GNOME Terminal Server.
Jan 18 12:52:16 flb gsd-media-keys[2640]: # watch_fast: "/org/gnome/terminal/legacy/" (establishing: 0, active: 0)

It contains four lines that represent four independent Events.

An Event is comprised of:

  • timestamp

  • key/value metadata (v2.1.0 and later)

  • payload

Event format

The Fluent Bit wire protocol represents an Event as a two-element array with a nested array as the first element:

[[TIMESTAMP, METADATA], MESSAGE]

where:

  • TIMESTAMP is a timestamp in seconds as an integer or floating point value (not a string).

  • METADATA is an object containing event metadata, and might be empty.

  • MESSAGE is an object containing the event body.

Fluent Bit versions prior to v2.1.0 used:

[TIMESTAMP, MESSAGE]

to represent events. This format is still supported for reading input event streams.

Filtering

You might need to modify an event's content. The process of altering, appending to, or dropping Events is called filtering.

Use filtering to:

  • Append specific information to the Event like an IP address or metadata.

  • Select a specific piece of the Event content.

  • Drop Events that match a certain pattern.

Processor

Processors modify, transform, or enhance data as it moves through the pipeline. Unlike filters, processors are attached directly to individual input or output plugins and don't use tag matching. Each processor operates only on data from its associated plugin.

Processors run in the same thread as their associated plugin, which improves throughput compared to filters, especially when multithreading is enabled.

Processors are supported in YAML configuration files only, and they can act on logs, metrics, traces, and profiles.

Tag

Every Event ingested by Fluent Bit is assigned a Tag. This tag is an internal string that the Router uses at a later stage to decide which Filter or Output phase the Event must go through.

Most tags are assigned manually in the configuration. If a tag isn't specified, Fluent Bit assigns the name of the input plugin instance where the Event was generated.

Note: The Forward input plugin doesn't assign tags. This plugin speaks the Fluentd wire protocol, called Forward, where every Event already comes with an associated Tag. Fluent Bit always uses the incoming Tag set by the client.

A tagged record must always have a Matching rule. To learn more about Tags and Matches, see Routing.

Timestamp

The timestamp represents the time an Event was created. Every Event contains an associated timestamp, set by the input plugin or discovered through a data parsing process.

The timestamp is a numeric fractional integer in the format:

SECONDS.NANOSECONDS

where:

  • SECONDS is the number of seconds that have elapsed since the Unix epoch.

  • NANOSECONDS is a fractional second, or one thousand-millionth of a second.

Match

Fluent Bit lets you route your collected and processed Events to one or multiple destinations. A Match represents a rule that selects Events whose Tag matches a defined pattern.

To learn more about Tags and Matches, see Routing.

Structured messages

Source events can have a structure. A structure defines a set of keys and values inside the Event message, which enables faster operations on data modifications. Fluent Bit treats every Event message as a structured message.

Consider the following two messages:

  • No structured message:

"Project Fluent Bit created on 1398289291"

  • With a structured message:

{"project": "Fluent Bit", "created": 1398289291}

For performance reasons, Fluent Bit uses a binary serialization data format called MessagePack.

Build with static configuration

Fluent Bit in normal operation mode is configurable through text files or specific command-line arguments. Although this is the ideal deployment case, some scenarios require a more restricted configuration.

Static configuration mode builds the configuration into the final Fluent Bit binary, disabling the use of external files or flags at runtime.

Get started

Requirements

The following steps assume you're familiar with configuring Fluent Bit using text files and that you have experience building it from scratch, as described in Build and Install.

Configuration directory

In your file system, prepare a directory that the build system will use as an entry point to look up and parse the configuration files. This directory must contain at least one configuration file, called fluent-bit.conf, with the required SERVICE, INPUT, and OUTPUT sections.

As an example, create a new fluent-bit.yaml file or fluent-bit.conf file.

In YAML format:

service:
  flush: 1
  daemon: off
  log_level: info

pipeline:
  inputs:
    - name: cpu

  outputs:
    - name: stdout
      match: '*'

In classic format:

[SERVICE]
  Flush     1
  Daemon    off
  Log_Level info

[INPUT]
  Name      cpu

[OUTPUT]
  Name      stdout
  Match     *

This configuration calculates CPU metrics from the running system and prints them to the standard output interface.

Build with custom configuration

  1. Go to the Fluent Bit source code build directory:

cd fluent-bit/build/

  2. Run CMake, appending the FLB_STATIC_CONF option pointing to the configuration directory you created:

cmake -DFLB_STATIC_CONF=/path/to/my/confdir/

  3. Build Fluent Bit:

make

The generated fluent-bit binary is ready to run without additional configuration:

$ bin/fluent-bit

...
[0] cpu.local: [1539984752.000347547, {"cpu_p"=>0.750000, "user_p"=>0.500000, "system_p"=>0.250000, "cpu0.p_cpu"=>1.000000, "cpu0.p_user"=>1.000000, "cpu0.p_system"=>0.000000, "cpu1.p_cpu"=>0.000000, "cpu1.p_user"=>0.000000, "cpu1.p_system"=>0.000000, "cpu2.p_cpu"=>0.000000, "cpu2.p_user"=>0.000000, "cpu2.p_system"=>0.000000, "cpu3.p_cpu"=>1.000000, "cpu3.p_user"=>1.000000, "cpu3.p_system"=>0.000000}]

Amazon Linux

Fluent Bit is distributed as the fluent-bit package and is available for Amazon Linux 2 and Amazon Linux 2023. The following architectures are supported:

  • x86_64

  • aarch64 / arm64v8

Install on Amazon EC2

To install Fluent Bit and related AWS output plugins on Amazon Linux 2 on EC2 using AWS Systems Manager, follow this AWS guide.

General installation

To install Fluent Bit on any Amazon Linux instance, follow these steps.

  1. Fluent Bit is provided through a Yum repository. To add the repository reference to your system, add a new file called fluent-bit.repo in /etc/yum.repos.d/ with the following content.

For Amazon Linux 2:

[fluent-bit]
    name = Fluent Bit
    baseurl = https://packages.fluentbit.io/amazonlinux/2/
    gpgcheck=1
    gpgkey=https://packages.fluentbit.io/fluentbit.key
    enabled=1

For Amazon Linux 2023:

[fluent-bit]
    name = Fluent Bit
    baseurl = https://packages.fluentbit.io/amazonlinux/2023/
    gpgcheck=1
    gpgkey=https://packages.fluentbit.io/fluentbit.key
    enabled=1

Note: You should always enable gpgcheck for security reasons. All Fluent Bit packages are signed.

  2. Ensure your GPG key is up to date.

  3. After your repository is configured, run the following command to install Fluent Bit:

sudo yum install fluent-bit

  4. Instruct systemd to enable the service:

sudo systemctl start fluent-bit

If you do a status check, you should see output similar to:

$ systemctl status fluent-bit

● fluent-bit.service - Fluent Bit
   Loaded: loaded (/usr/lib/systemd/system/fluent-bit.service; disabled; vendor preset: disabled)
   Active: active (running) since Thu 2016-07-07 02:08:01 BST; 9s ago
 Main PID: 3820 (fluent-bit)
   CGroup: /system.slice/fluent-bit.service
           └─3820 /opt/fluent-bit/bin/fluent-bit -c /etc/fluent-bit/fluent-bit.conf
...

The default Fluent Bit configuration collects CPU usage metrics and sends the records to the standard output. You can see the outgoing data in your /var/log/messages file.

Includes

The includes section of YAML configuration files lets you specify additional YAML files to be merged into the current configuration. This lets you organize complex configurations into smaller, manageable files and include them as needed.

These files are identified as a list of filenames and can include relative or absolute paths. If a path isn't specified as absolute, it will be treated as relative to the file that includes it.

Usage

The following example demonstrates how to include additional YAML files using relative path references. This is the file system path structure:

├── fluent-bit.yaml
├── inclusion-1.yaml
└── subdir
    └── inclusion-2.yaml

You can reference these files in fluent-bit.yaml as follows:

includes:
  - inclusion-1.yaml
  - subdir/inclusion-2.yaml

Note: Environment variables aren't supported in the includes section. The path for each file must be specified as a literal string.

Ensure that the included files are formatted correctly and contain valid YAML configurations.

Classic configuration files

Note: Fluent Bit classic mode configuration will be deprecated at the end of 2026.

Classic mode is a custom configuration model for Fluent Bit. It's more limited than the YAML configuration format and doesn't have its more extensive feature support. Classic mode's basic design only supports grouping sections with key-value pairs, and it can't handle subsections or complex data structures like lists.

Learn more about classic mode:

  • Format and schema
  • Variables

  • Configuration file

  • Commands

  • Upstream servers

  • Record accessor syntax

  • YAML configuration mode

    Upstream servers

    The upstream_servers section of YAML configuration files defines a group of endpoints, referred to as nodes. Nodes are used by output plugins to distribute data in a round-robin fashion. Use this section for plugins that require load balancing when sending data. Examples of plugins that support this capability include Forward and Elasticsearch.

The upstream_servers section requires a name for the group and a list of nodes. The following example defines two upstream server groups, forward-balancing and forward-balancing-2:

    upstream_servers:
      - name: forward-balancing
        nodes:
          - name: node-1
            host: 127.0.0.1
            port: 43000
    
          - name: node-2
            host: 127.0.0.1
            port: 44000
    
          - name: node-3
            host: 127.0.0.1
            port: 45000
            tls: true
            tls_verify: false
            shared_key: secret
    
      - name: forward-balancing-2
        nodes:
          - name: node-A
            host: 192.168.1.10
            port: 50000
    
          - name: node-B
            host: 192.168.1.11
            port: 51000

    Each node in the upstream_servers group must specify a name, host, and port. Additional settings like tls, tls_verify, and shared_key can be configured for secure communication.

    While the upstream_servers section can be defined globally, some output plugins might require the configuration to be specified in a separate YAML file. Consult the documentation for each specific output plugin to understand its requirements.


    Rocky Linux and Alma Linux

Fluent Bit is distributed as the fluent-bit package and is available for the latest versions of Rocky Linux and AlmaLinux, now that CentOS Stream tracks more recent dependencies.

    Fluent Bit supports the following architectures:

    • x86_64

    • aarch64

    • arm64v8

RHEL 9

From CentOS 9 Stream onward, CentOS dependencies update more often than downstream usage, which can mean incompatible (more recent) versions of certain dependencies (for example, OpenSSL) are provided. For open source use, Rocky Linux and AlmaLinux repositories are available. This might also apply to RHEL 9, which no longer tracks equivalent CentOS 9 Stream dependencies. No RHEL 9 build is provided; use one of the listed open source variants instead.

Configure YUM

The fluent-bit package is provided through a Yum repository. To add the repository reference to your system:

  1. In /etc/yum.repos.d/, add a new file called fluent-bit.repo.

  2. Add the following content to the file, replacing almalinux with rockylinux if required:
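Following the pattern of the other RPM-based targets in this guide, the repository definition would look like this sketch; verify the exact baseurl against packages.fluentbit.io before use:

```
[fluent-bit]
    name = Fluent Bit
    baseurl = https://packages.fluentbit.io/almalinux/$releasever/
    gpgcheck=1
    gpgkey=https://packages.fluentbit.io/fluentbit.key
    enabled=1
```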

Install

  1. After your repository is configured, run the following command to install Fluent Bit:

sudo yum install fluent-bit

  2. Instruct systemd to enable the service:

sudo systemctl start fluent-bit

If you do a status check with systemctl status fluent-bit, the service should be active (running).

The default Fluent Bit configuration collects CPU usage metrics and sends the records to the standard output. You can see the outgoing data in your /var/log/messages file.

    Containers on AWS

    AWS maintains a distribution of Fluent Bit that combines the latest official release with a set of Go Plugins for sending logs to AWS services. AWS and Fluent Bit are working together to rewrite their plugins for inclusion in the official Fluent Bit distribution.

Plugins

The AWS for Fluent Bit image contains plugins for:

  • Amazon CloudWatch as cloudwatch_logs.

  • Amazon Kinesis Data Firehose as kinesis_firehose.

  • Amazon Kinesis Data Streams as kinesis_streams.

These plugins are higher performance than the earlier Go plugins.

Fluent Bit also includes an S3 output plugin named s3.

Versions and regional repositories

AWS vends its container image through a set of highly available regional Amazon ECR repositories.

The AWS for Fluent Bit image uses a custom versioning scheme because it contains multiple projects.

SSM public parameters

AWS vends SSM public parameters with the regional repository link for each image. These parameters can be queried by any AWS account.

You can query these parameters to list the available version tags in a given region, or to look up the ECR repository URI for a given image tag. You can also use these SSM public parameters as parameters in your CloudFormation templates.
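The SSM queries described above can be sketched with the AWS CLI. The parameter path /aws/service/aws-for-fluent-bit/ and the 2.0.0 tag are illustrative and should be checked against the AWS for Fluent Bit documentation:

```shell
# List available version tags in a region (each parameter name maps to an image tag).
aws ssm get-parameters-by-path \
  --region us-east-1 \
  --path /aws/service/aws-for-fluent-bit/ \
  --query 'Parameters[*].Name'

# Look up the regional ECR repository URI for a given image tag.
aws ssm get-parameter \
  --region us-east-1 \
  --name /aws/service/aws-for-fluent-bit/2.0.0
```

Both commands require valid AWS credentials, though the parameters themselves are public.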

    Environment variables

    The env section of YAML configuration files lets you define environment variables. These variables can then be used to dynamically replace values throughout your configuration using the ${VARIABLE_NAME} syntax.

    Values set in the env section are case-sensitive. However, as a best practice, Fluent Bit recommends using uppercase names for environment variables. The following example defines two variables, FLUSH_INTERVAL and STDOUT_FMT, which can be accessed in the configuration using ${FLUSH_INTERVAL} and ${STDOUT_FMT}:
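A minimal sketch of such an env section; the cpu input, stdout output, and json_lines format are illustrative values:

```yaml
env:
  FLUSH_INTERVAL: 1
  STDOUT_FMT: json_lines

service:
  flush: ${FLUSH_INTERVAL}

pipeline:
  inputs:
    - name: cpu
  outputs:
    - name: stdout
      format: ${STDOUT_FMT}
```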

Predefined variables

Fluent Bit supports the following predefined environment variables. You can reference these variables in configuration files without defining them in the env section.

| Name | Description |
| --- | --- |
| ${HOSTNAME} | The system's hostname. |

External variables

    In addition to variables defined in the configuration file or the predefined ones, Fluent Bit can access system environment variables set in the user space. These external variables can be referenced in the configuration using the same ${VARIABLE_NAME} pattern.

Note: Variables set in the env section can't be overridden by system environment variables.

    For example, to set the FLUSH_INTERVAL system environment variable to 2 and use it in your configuration:

    In the configuration file, you can then access this value as follows:
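As a sketch, the shell side sets the variable and the configuration references it; the service flush setting is an illustrative use:

```shell
# Set the system environment variable before starting Fluent Bit.
export FLUSH_INTERVAL=2

# The configuration can then reference it, for example:
#   service:
#     flush: ${FLUSH_INTERVAL}
```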

    This approach lets you manage and override configuration values using environment variables, providing flexibility in various deployment environments.

Parsers

You can define custom parsers in the parsers section of YAML configuration files.

Note: To define custom multiline parsers, use the multiline_parsers section of YAML configuration files.

Syntax

    To define custom parsers in the parsers section of a YAML configuration file, use the following syntax.
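A sketch of the syntax with two illustrative parsers; json and my_regex are example names, and regex, time_key, and time_format are standard parser options:

```yaml
parsers:
  - name: json
    format: json

  - name: my_regex
    format: regex
    regex: '^(?<host>[^ ]*) (?<message>.*)$'
    time_key: time
    time_format: '%d/%b/%Y:%H:%M:%S %z'
```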

For information about supported configuration options for custom parsers, see the parser configuration documentation.

Standalone parsers files

    In addition to defining parsers in the parsers section of YAML configuration files, you can store parser definitions in standalone files. These standalone files require the same syntax as parsers defined in a standard YAML configuration file.

    To add a standalone parsers file to Fluent Bit, use the parsers_file parameter in the service section of your YAML configuration file.

Add a standalone parsers file to Fluent Bit

To add a standalone parsers file to Fluent Bit, follow these steps:

  1. Define custom parsers in a standalone YAML file. For example, custom-parsers.yaml defines two custom parsers.

  2. Update the parsers_file parameter in the service section of your YAML configuration file.
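A sketch of both files, using the illustrative parser names json and my_regex.

custom-parsers.yaml:

```yaml
parsers:
  - name: json
    format: json

  - name: my_regex
    format: regex
    regex: '^(?<host>[^ ]*) (?<message>.*)$'
```

fluent-bit.yaml:

```yaml
service:
  parsers_file: custom-parsers.yaml
```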

Variables

Fluent Bit supports the use of environment variables in any value associated with a key in a configuration file.

    The variables are case sensitive and can be used in the following format:

    ${MY_VARIABLE}

When Fluent Bit starts, the configuration reader detects any occurrence of ${MY_VARIABLE} and tries to resolve its value.

When Fluent Bit is running under systemd (using the official packages), environment variables can be set in the following files:

    • /etc/default/fluent-bit (Debian based system)

    • /etc/sysconfig/fluent-bit (Others)

    These files are ignored if they don't exist.

Example

    Create the following configuration file (fluent-bit.conf):

    Open a terminal and set the environment variable:

    The previous command sets the stdout value to the variable MY_OUTPUT.

    Run Fluent Bit with the recently created configuration file:
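Putting the steps together as a sketch, in classic mode and assuming the fluent-bit binary is on your PATH:

```shell
# fluent-bit.conf -- the output plugin name comes from ${MY_OUTPUT}.
cat > fluent-bit.conf <<'EOF'
[SERVICE]
    Flush 1

[INPUT]
    Name cpu

[OUTPUT]
    Name  ${MY_OUTPUT}
    Match *
EOF

# Resolve ${MY_OUTPUT} to the stdout plugin.
export MY_OUTPUT=stdout

# Run Fluent Bit with the recently created configuration file.
fluent-bit -c fluent-bit.conf
```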

    Hot reload

Enable hot reload through the SIGHUP signal or an HTTP endpoint.

Fluent Bit supports hot reloading when it's enabled in the configuration file or on the command line with the -Y or --enable-hot-reload option.

    Hot reloading is supported on Linux, macOS, and Windows operating systems.

Update the configuration

    To get started with reloading over HTTP, enable the HTTP Server in the configuration file:
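A sketch of the relevant service settings in YAML; port 2020 is the conventional default for the built-in HTTP server:

```yaml
service:
  hot_reload: on
  http_server: on
  http_listen: 0.0.0.0
  http_port: 2020
```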

How to reload

    After updating the configuration, use one of the following methods to perform a hot reload:

HTTP

Use one of the following HTTP endpoints to perform a hot reload:

  • PUT /api/v2/reload

  • POST /api/v2/reload

When using curl to reload Fluent Bit, you must specify an empty request body.

Obtain a count of hot reloads using the following HTTP endpoint:

  • GET /api/v2/reload

The endpoint returns a hot_reload_count value. The default value of the counter is 0.
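Assuming the HTTP server listens on the default port 2020, the reload and counter queries can be sketched as:

```shell
# Trigger a hot reload with an empty JSON body.
curl -X POST -d '{}' -H "Content-Type: application/json" \
  http://localhost:2020/api/v2/reload

# Query how many hot reloads have occurred, for example {"hot_reload_count": 0}.
curl http://localhost:2020/api/v2/reload
```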

Signal

Hot reloading can also be triggered with the SIGHUP signal.

The SIGHUP signal isn't supported on Windows.
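On Linux and macOS, sending the signal can be sketched as:

```shell
# Send SIGHUP to the oldest running fluent-bit process to trigger a reload.
kill -HUP "$(pgrep -o fluent-bit)"
```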

Confirm a reload

To confirm that a reload occurred, query the GET /api/v2/reload endpoint and check that the hot_reload_count value has increased.

    Memory management

    You might need to estimate how much memory Fluent Bit could be using in scenarios like containerized environments where memory limits are essential.

Estimating

Input plugins append data independently. To make an estimate, impose a limit with the Mem_Buf_Limit option. If the limit is set to 10 MB, you can estimate that, in the worst case, the output plugin could use 20 MB.

Fluent Bit uses an internal binary representation for the data it processes. When this data reaches an output plugin, the plugin can create its own representation in a new memory buffer for processing. Good examples are output plugins that must convert the binary representation to their own custom JSON formats before sending data to backend servers.

So, with a limit of 10 MB for the input plugins and a worst-case scenario of the output plugin consuming 20 MB, you need to allocate a minimum of (30 MB x 1.2) = 36 MB.

Note: For more information about Mem_Buf_Limit, see the Backpressure documentation.

Glibc and memory fragmentation

In intensive environments where memory allocations happen by orders of magnitude, the default memory allocator provided by glibc can lead to high fragmentation, causing the service to report high memory usage.

It's strongly suggested that in any production environment, Fluent Bit is built with jemalloc enabled (-DFLB_JEMALLOC=On). The jemalloc implementation of malloc is an alternative memory allocator that can reduce fragmentation, resulting in better performance.

To determine whether Fluent Bit was built with jemalloc, check the build flags reported by the binary. If the FLB_HAVE_JEMALLOC option is listed in Build Flags, jemalloc is enabled.
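A sketch of the check; the bin/fluent-bit path assumes you're in the build directory:

```shell
# Print the binary's help output and look for FLB_HAVE_JEMALLOC in Build Flags.
bin/fluent-bit -h | grep JEMALLOC
```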

    Inputs

    Input plugins gather information from different sources. Some plugins collect data from log files, and others gather metrics information from the operating system. There are many different plugins, and they let you handle many different needs.

    When an input plugin loads, an internal instance is created. Each instance has its own independent configuration. Configuration keys are often called properties.
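As a sketch, a single cpu input instance with two properties set might look like this:

```yaml
pipeline:
  inputs:
    - name: cpu
      tag: cpu.local
      interval_sec: 1
```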

    What's Fluent Bit?

    Fluent Bit is a CNCF graduated project under the umbrella of Fluentd

    Fluent Bitarrow-up-right is an open source telemetry agent that processes logs, metrics, traces, and profiles. It's designed to efficiently handle the challenges of collecting and processing telemetry data across a wide range of environments, from constrained systems to complex cloud infrastructures. Managing telemetry data from various sources and formats can be a constant challenge, particularly when performance is a critical factor.

    Rather than serving as a drop-in replacement, Fluent Bit enhances the observability strategy for your infrastructure. It adapts and optimizes your existing logging layer, and adds metrics and traces processing. Fluent Bit supports a vendor-neutral approach, with native OpenTelemetry (OTLP) ingestion and delivery and seamless integration with ecosystems such as Prometheus. Trusted by major cloud providers, banks, and companies that need a ready-to-use telemetry agent, Fluent Bit effectively manages diverse data sources and formats. It maintains optimal performance while keeping resource consumption low.

    Fluent Bit can be deployed as an edge agent for localized telemetry data handling or utilized as a central aggregator/collector for managing telemetry data across multiple sources and environments.

    hashtag
    The history of Fluent Bit

In 2014, the team at Treasure Data was forecasting the need for a lightweight log processor for constrained environments like embedded Linux and gateways. To meet this need, Eduardo Silva created Fluent Bit, a new open source solution and part of the Fluentd ecosystem.

    After the project matured, it gained traction for normal Linux systems. With the new containerized world, the cloud native community asked to extend the project scope to support more sources, filters, and destinations. Not long after, Fluent Bit became one of the preferred solutions to solve the logging challenges in cloud environments.

    Debian

    Fluent Bit is distributed as the fluent-bit package and is available for the latest stable Debian system.

The following architectures are supported:

    • x86_64

    • aarch64

    Raspbian and Raspberry Pi

Fluent Bit is distributed as the fluent-bit package and is available for Raspberry Pi. The following versions are supported:

    • Raspbian Bookworm (12)

    • Raspbian Bullseye (11)

    Ubuntu

    Fluent Bit is distributed as the fluent-bit package and is available for long-term support releases of Ubuntu. The latest officially supported version is Noble Numbat (24.04).

    The recommended secure deployment approach is to use the following instructions.

    hashtag
    Server GPG key

    Add the Fluent Bit server GPG key to your keyring to ensure you can get the correct signed packages.

    Configure Fluent Bit

Fluent Bit uses configuration files to store information about your specified inputs, outputs, filters, and more. You can write these configuration files in one of these formats:

• YAML configuration files are the standard configuration format as of Fluent Bit v3.2. They use the .yaml file extension.

    • Classic configuration files will be deprecated at the end of 2026. They use the .conf file extension.

    Multiline parsers

You can define custom multiline parsers in the multiline_parsers section of YAML configuration files.

    circle-info

To define standard custom parsers, use the parsers section of YAML configuration files.


    HTTP proxy

    Enable traffic through a proxy server using the HTTP_PROXY environment variable.

    Fluent Bit supports configuring an HTTP proxy for all egress HTTP/HTTPS traffic using the HTTP_PROXY or http_proxy environment variable.

    The format for the HTTP proxy environment variable is http://USER:PASS@HOST:PORT, where:

    • USER is the username when using basic authentication.

    Run a logging pipeline locally

You can test logging pipelines locally to observe how they handle log messages. This guide explains how to use Docker Compose to run Fluent Bit and Elasticsearch locally, but you can use the same principles to test other plugins.

    hashtag
    Create a configuration file

    Start by creating one of the corresponding Fluent Bit configuration files to start testing.

    Plugins

    In addition to the plugins that come bundled with Fluent Bit, you can load external plugins. Use this feature for loading Go or WebAssembly (Wasm) plugins that are built as shared object files (.so).

    circle-info

To configure the settings for individual plugins, use the inputs and outputs sections nested under the pipeline section of YAML configuration files.

    Format and schema

Fluent Bit can optionally use a configuration file to define how the service behaves.

    The schema is defined by three concepts:

    • Sections

    • Entries: key/value

    Multithreading

    Learn how to run Fluent Bit in multiple threads for improved scalability.

    Fluent Bit has one event loop to handle critical operations, like managing timers, receiving internal messages, scheduling flushes, and handling retries. This event loop runs in the main Fluent Bit thread.

To free up resources in the main thread, you can configure inputs and outputs to run in their own self-contained threads. However, inputs and outputs implement multithreading in distinct ways: inputs can run in threaded mode, and outputs can use one or more workers.

Threading also affects certain processes related to inputs and outputs. For example, filters always run in the main thread, but processors run in the self-contained threads of their respective inputs or outputs, if applicable.

    YAML configuration files

In Fluent Bit v3.2 and later, YAML configuration files support all of the settings and features that classic configuration files support, plus additional features that classic configuration files don't support, like processors.

    YAML configuration files support the following top-level sections:

• env: Configures environment variables.

    env:
      FLUSH_INTERVAL: 1
      STDOUT_FMT: 'json_lines'
    
    service:
      flush: ${FLUSH_INTERVAL}
      log_level: info
    
    pipeline:
      inputs:
        - name: random
    
      outputs:
        - name: stdout
          match: '*'
          format: ${STDOUT_FMT}
    Apache License v2.0arrow-up-right
    As a best practice, enable gpgcheck and repo_gpgcheck for security reasons. Fluent Bit signs its repository metadata and all Fluent Bit packages.
    Fluent Bit docs
    Plugin repositoryarrow-up-right
    Amazon S3
    Docker Hubarrow-up-right
    AWS for Fluent Bit GitHub repositoryarrow-up-right
    release notes on GitHubarrow-up-right

    ${HOSTNAME}

    The system's hostname.

    configuring custom parsers
    parsers:
      - name: custom_parser1
        format: json
        time_key: time
        time_format: '%Y-%m-%dT%H:%M:%S.%L'
        time_keep: on
    
      - name: custom_parser2
        format: regex
        regex: '^\<(?<pri>[0-9]{1,5})\>1 (?<time>[^ ]+) (?<host>[^ ]+) (?<ident>[^ ]+) (?<pid>[-0-9]+) (?<msgid>[^ ]+) (?<extradata>(\[(.*)\]|-)) (?<message>.+)$'
        time_key: time
        time_format: '%Y-%m-%dT%H:%M:%S.%L'
        time_keep: on
        types: pid:integer
    [SERVICE]
      HTTP_Server  On
      HTTP_Listen  0.0.0.0
      HTTP_PORT    2020
      Hot_Reload   On
    service:
      http_server: on
      http_listen: 0.0.0.0
      http_port: 2020
      hot_reload: on
  • arm64v8

The recommended secure deployment approach is to use the following instructions:

    hashtag
    Server GPG key

    The first step is to add the Fluent Bit server GPG key to your keyring to ensure you can get the correct signed packages.

    Follow the official Debian wiki guidancearrow-up-right.

    hashtag
    Update your sources lists

For Debian, you must add the Fluent Bit APT server entry to your sources lists. Ensure codename is set to your specific Debian release namearrow-up-right (for example, bookworm for Debian 12).

Update your sources lists:

    hashtag
    Update your repositories database

    Update your system's apt database:

    circle-info

Fluent Bit recommends upgrading your system (sudo apt-get upgrade) to avoid potential issues with expired certificates.

    hashtag
    Install Fluent Bit

    1. Ensure your GPG key is up to date.

    2. Use the following apt-get command to install the latest Fluent Bit:

    3. Instruct systemd to enable the service:

If you do a status check, you should see output similar to:

The default Fluent Bit configuration collects CPU usage metrics and sends the records to the standard output. You can see the outgoing data in your /var/log/messages file.

    Raspbian Buster (10)

    hashtag
    Server GPG key

    The first step is to add the Fluent Bit server GPG key to your keyring so you can get Fluent Bit signed packages:

    hashtag
    Update your sources lists

    On Debian and derivative systems such as Raspbian, you need to add the Fluent Bit APT server entry to your sources lists.

Add the following content at the bottom of your /etc/apt/sources.list file.

    hashtag
    Raspbian 12 (Bookworm)

    hashtag
    Raspbian 11 (Bullseye)

    hashtag
    Raspbian 10 (Buster)

    hashtag
    Update your repositories database

Update your system's apt database:

    circle-info

    Fluent Bit recommends upgrading your system (sudo apt-get upgrade) to avoid potential issues with expired certificates.

    hashtag
    Install Fluent Bit

    1. Ensure your GPG key is up to date.

    2. Use the following apt-get command to install the latest Fluent Bit:

    3. Instruct systemd to enable the service:

If you do a status check, you should see output similar to:

    The default configuration of Fluent Bit collects metrics for CPU usage and sends the records to the standard output. You can see the outgoing data in your /var/log/syslog file.

    Raspberry Piarrow-up-right
    Follow the official Debian wiki guidancearrow-up-right.

    hashtag
    Update your sources lists

On Ubuntu, you need to add the Fluent Bit APT server entry to your sources lists. Ensure codename is set to your specific Ubuntu release namearrow-up-right (for example, focal for Ubuntu 20.04).

Update your sources list:

    hashtag
    Update your repositories database

    Update the apt database on your system:

    circle-info

    Fluent Bit recommends upgrading your system to avoid potential issues with expired certificates:

    sudo apt-get upgrade

    If you receive the error Certificate verification failed, check if the package ca-certificates is properly installed:

    sudo apt-get install ca-certificates

    hashtag
    Install Fluent Bit

    1. Ensure your GPG key is up to date.

    2. Use the following apt-get command to install the latest Fluent Bit:

    3. Instruct systemd to enable the service:

If you do a status check, you should see output similar to:

The default configuration of fluent-bit collects CPU usage metrics and sends the records to the standard output. You can see the outgoing data in your /var/log/syslog file.

• PASS is the password when using basic authentication.

  • HOST is the HTTP proxy hostname or IP address.

  • PORT is the port the HTTP proxy is listening on.

To use an HTTP proxy with basic authentication, provide the username and password:

    When no authentication is required, omit the username and password:

The HTTP_PROXY environment variable is a standard wayarrow-up-right of setting an HTTP proxy in a containerized environment, and it's also natively supported by any application written in Go. Fluent Bit implements the same convention. The http_proxy environment variable is also supported. When both the HTTP_PROXY and http_proxy environment variables are provided, HTTP_PROXY takes precedence.

    circle-info

    The HTTP output plugin also supports configuring an HTTP proxy. This configuration works, but shouldn't be used with the HTTP_PROXY or http_proxy environment variable. The environment variable-based proxy configuration is implemented by creating a TCP connection tunnel using HTTP CONNECTarrow-up-right. Unlike the plugin's implementation, this supports both HTTP and HTTPS egress traffic.

    hashtag
    NO_PROXY

    Use the NO_PROXY environment variable when traffic shouldn't flow through the HTTP proxy. The no_proxy environment variable is also supported. When both NO_PROXY and no_proxy environment variables are provided, NO_PROXY takes precedence.

    The format for the no_proxy environment variable is a comma-separated list of host names or IP addresses.

    A domain name matches itself and all of its subdomains (for example, example.com matches both example.com and test.example.com):

    A domain with a leading dot (.) matches only its subdomains (for example, .example.com matches test.example.com but not example.com):

As an example, you might use NO_PROXY when running Fluent Bit in a Kubernetes environment where you want:

    • All real egress traffic to flow through an HTTP proxy.

    • All local Kubernetes traffic to not flow through the HTTP proxy.

    In this case, set:

    hashtag
    Use Docker Compose

    Use Docker Composearrow-up-right to run Fluent Bit (with the configuration file mounted) and Elasticsearch.

    hashtag
    View indexed logs

    To view indexed logs, run the following command:

    hashtag
    Reset index

    To reset your index, run the following command:

    [INPUT]
      Name dummy
      Dummy {"top": {".dotted": "value"}}
    
    [OUTPUT]
      Name es
      Host elasticsearch
      Replace_Dots On
    Docker Composearrow-up-right
    pipeline:
      inputs:
        - name: dummy
          dummy: '{"top": {".dotted": "value"}}'
    
      outputs:
        - name: es
          host: elasticsearch
          replace_dots: on
    hashtag
    Inline YAML

    You can specify external plugins in the plugins section of YAML configuration files. For example:

    hashtag
    YAML plugins file included using the plugins_file option

    Additionally, you can define external plugins in a separate YAML file, then reference that file in the plugins_file key nested under the service section of your YAML configuration file. For example:

    In this setup, the extra_plugins.yaml file might contain the following plugins section:

    pipeline section
    plugins:
      - /other/path/to/out_gstdout.so

    An example of a configuration file is as follows:

    hashtag
    Sections

A section is defined by a name or title inside brackets. Using the previous example, a Service section has been set using the [SERVICE] definition. The following rules apply:

    • All section content must be indented (four spaces ideally).

    • Multiple sections can exist on the same file.

• A section is expected to have comments and entries; it can't be empty.

    • Any commented line under a section must be indented too.

    • End-of-line comments aren't supported, only full-line comments.

    hashtag
    Entries: key/value

A section can contain entries. An entry is defined by a line of text that contains a Key and a Value. Using the previous example, the [SERVICE] section contains two entries: one is the key Daemon with the value off, and the other is the key Log_Level with the value debug. The following rules apply:

    • An entry is defined by a key and a value.

    • A key must be indented.

    • A key must contain a value which ends in a line break.

    • Multiple keys with the same name can exist.

Commented lines are set by prefixing them with the # character. Commented lines aren't processed, but they must be indented.

    hashtag
    Indented configuration mode

Fluent Bit configuration files are based on a strict indented mode. Each configuration file must follow the same pattern of alignment from left to right when writing text. By default, an indentation level of four spaces from left to right is suggested. Example:

    This example shows two sections with multiple entries and comments. Empty lines are allowed.

    [fluent-bit]
      name = Fluent Bit
      baseurl = https://packages.fluentbit.io/almalinux/$releasever/
      gpgcheck=1
      gpgkey=https://packages.fluentbit.io/fluentbit.key
      repo_gpgcheck=1
      enabled=1
    sudo yum install fluent-bit
    sudo systemctl start fluent-bit
    $ systemctl status fluent-bit
    
    ● fluent-bit.service - Fluent Bit
       Loaded: loaded (/usr/lib/systemd/system/fluent-bit.service; disabled; vendor preset: disabled)
       Active: active (running) since Thu 2016-07-07 02:08:01 BST; 9s ago
     Main PID: 3820 (fluent-bit)
       CGroup: /system.slice/fluent-bit.service
               └─3820 /opt/fluent-bit/bin/fluent-bit -c etc/fluent-bit/fluent-bit.conf
    ...
    aws ssm get-parameters-by-path --region eu-central-1 --path /aws/service/aws-for-fluent-bit/ --query 'Parameters[*].Name'
    aws ssm get-parameter --region ap-northeast-1 --name /aws/service/aws-for-fluent-bit/2.0.0
    Parameters:
      FireLensImage:
        Description: Fluent Bit image for the FireLens Container
        Type: AWS::SSM::Parameter::Value<String>
        Default: /aws/service/aws-for-fluent-bit/latest
    export FLUSH_INTERVAL=2
    service:
      flush: ${FLUSH_INTERVAL}
      log_level: info
    
    pipeline:
      inputs:
        - name: random
    
      outputs:
        - name: stdout
          match: '*'
          format: json_lines
    parsers:
      - name: custom_parser1
        format: json
        time_key: time
        time_format: '%Y-%m-%dT%H:%M:%S.%L'
        time_keep: on
    
      - name: custom_parser2
        format: regex
        regex: '^\<(?<pri>[0-9]{1,5})\>1 (?<time>[^ ]+) (?<host>[^ ]+) (?<ident>[^ ]+) (?<pid>[-0-9]+) (?<msgid>[^ ]+) (?<extradata>(\[(.*)\]|-)) (?<message>.+)$'
        time_key: time
        time_format: '%Y-%m-%dT%H:%M:%S.%L'
        time_keep: on
        types: pid:integer
    service:
      parsers_file: my-parsers.yaml
    curl -X POST -d '{}' localhost:2020/api/v2/reload
    {"hot_reload_count":3}
    sudo apt-get install fluent-bit
    sudo systemctl start fluent-bit
    sudo sh -c 'curl https://packages.fluentbit.io/fluentbit.key | gpg --dearmor > /usr/share/keyrings/fluentbit-keyring.gpg'
    codename=$(grep -oP '(?<=VERSION_CODENAME=).*' /etc/os-release 2>/dev/null || lsb_release -cs 2>/dev/null)
    echo "deb [signed-by=/usr/share/keyrings/fluentbit-keyring.gpg] https://packages.fluentbit.io/debian/$codename $codename main" | sudo tee /etc/apt/sources.list.d/fluent-bit.list
    sudo apt-get update
    $ sudo service fluent-bit status
    
    ● fluent-bit.service - Fluent Bit
       Loaded: loaded (/lib/systemd/system/fluent-bit.service; disabled; vendor preset: enabled)
       Active: active (running) since mié 2016-07-06 16:58:25 CST; 2h 45min ago
     Main PID: 6739 (fluent-bit)
        Tasks: 1
       Memory: 656.0K
          CPU: 1.393s
       CGroup: /system.slice/fluent-bit.service
               └─6739 /opt/fluent-bit/bin/fluent-bit -c /etc/fluent-bit/fluent-bit.conf
    ...
    sudo apt-get install fluent-bit
    sudo service fluent-bit start
    sudo sh -c 'curl https://packages.fluentbit.io/fluentbit.key | sudo apt-key add - '
    echo "deb https://packages.fluentbit.io/raspbian/bookworm bookworm main" | sudo tee /etc/apt/sources.list.d/fluent-bit.list
    echo "deb https://packages.fluentbit.io/raspbian/bullseye bullseye main" | sudo tee /etc/apt/sources.list.d/fluent-bit.list
    echo "deb https://packages.fluentbit.io/raspbian/buster buster main" | sudo tee /etc/apt/sources.list.d/fluent-bit.list
    sudo apt-get update
    $ sudo service fluent-bit status
    
    ● fluent-bit.service - Fluent Bit
       Loaded: loaded (/lib/systemd/system/fluent-bit.service; disabled; vendor preset: enabled)
       Active: active (running) since mié 2016-07-06 16:58:25 CST; 2h 45min ago
     Main PID: 6739 (fluent-bit)
        Tasks: 1
       Memory: 656.0K
          CPU: 1.393s
       CGroup: /system.slice/fluent-bit.service
               └─6739 /opt/fluent-bit/bin/fluent-bit -c /etc/fluent-bit/fluent-bit.conf
    ...
    sudo apt-get install fluent-bit
    sudo systemctl start fluent-bit
    sudo sh -c 'curl https://packages.fluentbit.io/fluentbit.key | gpg --dearmor > /usr/share/keyrings/fluentbit-keyring.gpg'
    codename=$(grep -oP '(?<=VERSION_CODENAME=).*' /etc/os-release 2>/dev/null || lsb_release -cs 2>/dev/null)
    echo "deb [signed-by=/usr/share/keyrings/fluentbit-keyring.gpg] https://packages.fluentbit.io/ubuntu/$codename $codename main" | sudo tee /etc/apt/sources.list.d/fluent-bit.list
    sudo apt-get update
    $ systemctl status fluent-bit
    
    ● fluent-bit.service - Fluent Bit
       Loaded: loaded (/lib/systemd/system/fluent-bit.service; disabled; vendor preset: enabled)
       Active: active (running) since mié 2016-07-06 16:58:25 CST; 2h 45min ago
     Main PID: 6739 (fluent-bit)
        Tasks: 1
       Memory: 656.0K
          CPU: 1.393s
       CGroup: /system.slice/fluent-bit.service
               └─6739 /opt/fluent-bit/bin/fluent-bit -c /etc/fluent-bit/fluent-bit.conf
    ...
    HTTP_PROXY='http://example_user:[email protected]:8080'
    HTTP_PROXY='http://proxy.example.com:8080'
    NO_PROXY='foo.com,127.0.0.1,localhost'
    NO_PROXY='.example.com,127.0.0.1,localhost'
    NO_PROXY='127.0.0.1,localhost,kubernetes.default.svc'
    docker-compose.yaml
    version: "3.7"
    
    services:
      fluent-bit:
        image: fluent/fluent-bit
        volumes:
          - ./fluent-bit.conf:/fluent-bit/etc/fluent-bit.conf
        depends_on:
          - elasticsearch
      elasticsearch:
        image: elasticsearch:7.17.6
        ports:
          - "9200:9200"
        environment:
          - discovery.type=single-node
    curl "localhost:9200/_search?pretty" \
      -H 'Content-Type: application/json' \
      -d'{ "query": { "match_all": {} }}'
    curl -X DELETE "localhost:9200/fluent-bit?pretty"
    plugins:
      - /path/to/out_gstdout.so
    
    service:
      log_level: info
    
    pipeline:
      inputs:
        - name: random
    
      outputs:
        - name: gstdout
          match: '*'
    service:
      log_level: info
      plugins_file: extra_plugins.yaml
    
    pipeline:
      inputs:
        - name: random
    
      outputs:
        - name: gstdout
          match: '*'
    [SERVICE]
      # This is a commented line
      Daemon    off
      log_level debug
    [FIRST_SECTION]
      # This is a commented line
      Key1  some value
      Key2  another value
      # more comments
    
    [SECOND_SECTION]
      KeyN  3.14

    hashtag
    Unit sizes

    Some configuration settings in Fluent Bit use standardized unit sizes to define data and storage limits. For example, the buffer_chunk_size and buffer_max_size parameters for the Tail input plugin use unit sizes.

    The following table describes the unit sizes you can use and what they mean.

    Suffix
    Description
    Example

    none

    Bytes: If you specify an integer without a unit size, Fluent Bit interprets that value as a bytes representation.

    32000 means 32,000 bytes.

    k, kb, K, KB

    Kilobytes: A unit of memory equal to 1,000 bytes.

    32k means 32,000 bytes.
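For example, the Tail input parameters mentioned earlier can take unit-size suffixes (the path is illustrative):

```yaml
pipeline:
  inputs:
    - name: tail
      path: /var/log/syslog
      buffer_chunk_size: 32k
      buffer_max_size: 256k
```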

    hashtag
    Command line interface

    Fluent Bit exposes most of its configuration features through the command line interface. Use the -h or --help flag to see a list of available options.

    hashtag
    Validate configuration with --dry-run

    Use the --dry-run flag to validate a configuration file without starting Fluent Bit:

    A successful validation prints configuration test is successful and exits with code 0. If validation fails, Fluent Bit exits with a non-zero code and prints the errors to stderr.
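A sketch of a validation run, assuming the default configuration file path:

```shell
# Parse and validate the configuration, then exit without starting the service.
fluent-bit -c /etc/fluent-bit/fluent-bit.conf --dry-run
```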

    As of Fluent Bit 4.2, --dry-run performs full property validation in addition to syntax checking. Prior to 4.2, unknown or misspelled plugin property names would only surface as errors at runtime; --dry-run now catches them during validation. For example, a configuration with an unknown property on a dummy input produces:

    Syntax

    To define custom parsers in the multiline_parsers section of a YAML configuration file, use the following syntax:

multiline_parsers:
  - name: multiline-regex-test
    type: regex
    flush_timeout: 1000
    rules:
      - state: start_state
        regex: '/(Dec \d+ \d+\:\d+\:\d+)(.*)/'
        next_state: cont
      - state: cont
        regex: '/^\s+at.*/'
        next_state: cont

    This example defines a multiline parser named multiline-regex-test that uses regular expressions to handle multi-event logs. The parser contains two rules: the first rule transitions from start_state to cont when a matching log entry is detected, and the second rule continues to match subsequent lines.

    For information about supported configuration options for custom multiline parsers, see configuring multiline parsers.

    hashtag
    Inputs

    When inputs collect telemetry data, they can either perform this process inside the main Fluent Bit thread or inside a separate dedicated thread. You can configure this behavior by enabling or disabling the threaded setting.

    All inputs are capable of running in threaded mode, but certain inputs always run in threaded mode regardless of configuration. These always-threaded inputs are:

    • Kubernetes Events

    • Node Exporter Metrics

    • Process Exporter Metrics

    Inputs aren't internally aware of multithreading. If an input runs in threaded mode, Fluent Bit manages the logistics of that input's thread.

    hashtag
    Outputs

    When outputs flush data, they can either perform this operation inside the main Fluent Bit thread or inside a separate dedicated thread called a worker. Each output can have one or more workers running in parallel, and each worker can handle multiple concurrent flushes. You can configure this behavior by changing the value of the workers setting.

    All outputs are capable of running in multiple workers, and each output has a default value of 0, 1, or 2 workers. However, even if an output uses workers by default, you can safely reduce the number of workers under the default or disable workers entirely.
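As a sketch, both settings combined in one pipeline (the path and worker count are illustrative):

```yaml
pipeline:
  inputs:
    - name: tail
      path: /var/log/syslog
      threaded: true

  outputs:
    - name: stdout
      match: '*'
      workers: 2
```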

• includes: Specifies additional YAML configuration files to include as part of a parent file.
  • service: Configures global properties of the Fluent Bit service.

  • pipeline: Configures active inputs, filters, and outputs.

  • parsers: Defines custom parsers.

  • multiline_parsers: Defines custom multiline parsers.

  • plugins: Defines paths for custom plugins.

  • upstream_servers: Defines nodes for output plugins.

circle-info

YAML configuration is used in the smoke tests for containers. An up-to-date working example is available at https://github.com/fluent/fluent-bit/blob/master/packaging/testing/smoke/container/fluent-bit.yamlarrow-up-right.

    [SERVICE]
      Flush        1
      Daemon       Off
      Log_Level    info
    
    [INPUT]
      Name cpu
      Tag  cpu.local
    
    [OUTPUT]
      Name  ${MY_OUTPUT}
      Match *

    Commands

    Configuration files must be flexible enough for any deployment need, but they must keep a clean and readable format.

Fluent Bit commands extend a configuration file with specific built-in features. The following commands are available:

    Command
    Prototype
    Description

    hashtag
    @INCLUDE

Configuring a logging pipeline might lead to an extensive configuration file. To keep the configuration human-readable, split it into multiple files.

    The @INCLUDE command lets the configuration reader include an external configuration file:
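A minimal sketch of such a main file (the file names are illustrative):

```
[SERVICE]
    Flush  1
    Daemon off

@INCLUDE inputs.conf
@INCLUDE outputs.conf
```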

    This example defines the main service configuration file and also includes two files to continue the configuration.

Fluent Bit respects the following order when including files:

    • Service

    • Inputs

    • Filters

    • Outputs

    hashtag
    inputs.conf

    The following is an example of an inputs.conf file, like the one called in the previous example.

    hashtag
    outputs.conf

    The following is an example of an outputs.conf file, like the one called in the previous example.

    hashtag
    @SET

Fluent Bit supports configuration variables. One way to expose these variables to Fluent Bit is by setting a shell environment variable; the other is through the @SET command.

    The @SET command can only be used at the root level of the configuration file. It can't be used inside a section:
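A minimal sketch, with illustrative variable names:

```
@SET my_input=cpu
@SET my_output=stdout

[SERVICE]
    Flush 1

[INPUT]
    Name ${my_input}

[OUTPUT]
    Name ${my_output}
```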

    Upstream servers

Fluent Bit output plugins aim to connect to external services to deliver logs over the network. Connecting to a single node (host) is enough for most use cases, but in other scenarios, balancing across different nodes is required. The Upstream feature provides this capability.

An Upstream defines a set of nodes to be targeted by an output plugin. Due to the nature of the implementation, an output plugin must explicitly support the Upstream feature. The Forward output plugin currently has Upstream support.

    The current balancing mode implemented is round-robin.

    hashtag
    Configuration

To define an Upstream, you must create a specific configuration file that contains an UPSTREAM section and one or more NODE sections. The following table describes the properties associated with each section. All properties are mandatory:

    Section
    Key
    Description

    hashtag
    Nodes and specific plugin configuration

A node might contain additional configuration keys required by the plugin, providing enough flexibility for the output plugin. A common use case is a Forward output where, if TLS is enabled, a shared key is required.

    hashtag
    Nodes and TLS (Transport Layer Security)

    In addition to the properties defined in the configuration table, the network operations against a defined node can optionally be done through the use of TLS for further encryption and certificates use.

The available TLS options are described in the TLS/SSL section and can be added to any Node section.

    hashtag
    Configuration file example

The following example defines an Upstream called forward-balancing, intended for use by the Forward output plugin. It registers three nodes:

    • node-1: connects to 127.0.0.1:43000

    • node-2: connects to 127.0.0.1:44000

    • node-3: connects to 127.0.0.1:45000 using TLS without verification. It also defines a specific configuration option required by Forward output called shared_key.
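A sketch of the corresponding configuration file (the shared_key value is illustrative):

```
[UPSTREAM]
    name       forward-balancing

[NODE]
    name       node-1
    host       127.0.0.1
    port       43000

[NODE]
    name       node-2
    host       127.0.0.1
    port       44000

[NODE]
    name       node-3
    host       127.0.0.1
    port       45000
    tls        on
    tls.verify off
    shared_key secret
```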

Every Upstream definition must exist in its own configuration file in the file system. Adding multiple Upstream configurations in the same file or across different files isn't allowed.

    AWS credentials

    Plugins that interact with AWS services fetch credentials from the following providers in order. Only the first provider that provides credentials is used.

    • Environment variables

    • Shared configuration and credentials files

    All AWS plugins additionally support a role_arn (or AWS_ROLE_ARN, for ) configuration parameter. If specified, the fetched credentials are used to assume the given role.

    hashtag
    Environment variables

    Plugins use the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY (and optionally AWS_SESSION_TOKEN) environment variables if set.

    hashtag
    Shared configuration and credentials files

    Plugins read the shared config file at $AWS_CONFIG_FILE (or $HOME/.aws/config), and the shared credentials file at $AWS_SHARED_CREDENTIALS_FILE (or $HOME/.aws/credentials) to fetch the credentials for the profile named $AWS_PROFILE or $AWS_DEFAULT_PROFILE (or "default"). See .

The shared settings are evaluated in the following order:

    Setting
    File
    Description

    No other settings are supported.

    hashtag
    EKS web identity token (OIDC)

    Credentials are fetched using a signed web identity token for a Kubernetes service account. See .

    hashtag
    ECS HTTP credentials endpoint

    Credentials are fetched for the ECS task's role. See .

    hashtag
    EKS Pod Identity credentials

    Credentials are fetched using a pod identity endpoint. See .

    hashtag
    EC2 instance profile credentials (IMDS)

    Fetches credentials for the EC2 instance profile's role. See . As of Fluent Bit version 1.8.8, IMDSv2 is used by default and IMDSv1 might be disabled. Prior versions of Fluent Bit require enabling IMDSv1 on EC2.

    hashtag
    AWS Greengrass credentials

    Fluent Bit fetches credentials from a localhost endpoint provided by the AWS IoT Greengrass token exchange service. The token exchange service runs as a local server on Greengrass core devices and provides AWS credentials through the AWS_CONTAINER_CREDENTIALS_FULL_URI and AWS_CONTAINER_AUTHORIZATION_TOKEN environment variables. For more information, see the AWS documentation about .

    Collectd

    circle-info

    Supported event types: logs

    The Collectd input plugin lets you receive datagrams from the collectd service over UDP. The plugin listens for collectd network protocol packets and converts them into Fluent Bit records.

    hashtag
    Configuration parameters

    The plugin supports the following configuration parameters:

    Key
    Description
    Default

    hashtag
    Get started

    To receive collectd datagrams, you can run the plugin from the command line or through the configuration file.

    hashtag
    Command line

    From the command line you can let Fluent Bit listen for collectd datagrams with the following options:

    By default, the service listens on all interfaces (0.0.0.0) using UDP port 25826. You can change this directly:

In this example, collectd datagrams will only be accepted on the network interface with address 192.168.3.2, UDP port 9090.
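A sketch of such an invocation, assuming the collectd input's listen and port parameters:

```shell
fluent-bit -i collectd -p listen=192.168.3.2 -p port=9090 -o stdout
```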

    hashtag
    Configuration file

    In your main configuration file append the following:

    With this configuration, Fluent Bit listens to 0.0.0.0:25826, and outputs incoming datagram packets to stdout.

    hashtag
    typesdb configuration

    You must set the same types.db files that your collectd server uses. Otherwise, Fluent Bit might not be able to interpret the payload properly.

    The typesdb parameter supports multiple files separated by commas. When multiple files are specified, later entries take precedence over earlier ones if there are duplicate type definitions. This lets you override default types with custom definitions.

    For example:
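A sketch of such a configuration in YAML format, using the listen address, port, and typesdb paths discussed in this section:

```yaml
pipeline:
  inputs:
    - name: collectd
      listen: 0.0.0.0
      port: 25826
      typesdb: /usr/share/collectd/types.db,/etc/collectd/custom.db

  outputs:
    - name: stdout
      match: '*'
```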

    In this configuration, custom type definitions in /etc/collectd/custom.db override any matching definitions from /usr/share/collectd/types.db.

    Docker events

    circle-info

    Supported event types: logs

The Docker events input plugin uses the Docker API to capture server events. A complete list of possible events returned by this plugin can be found in the Docker documentation.

    hashtag
    Configuration parameters

    This plugin supports the following configuration parameters:

    Key
    Description
    Default

    hashtag
    Get started

    To capture Docker events, you can run the plugin from the command line or through the configuration file.

    hashtag
    Command line

    From the command line you can run the plugin with the following options:

    hashtag
    Configuration file

    In your main configuration file, append the following:

    Performance tips

    Fluent Bit is designed for high performance and minimal resource usage. Depending on your use case, you can optimize further using specific configuration options to achieve faster performance or reduce resource consumption.

    hashtag
    Reading files with tail

    The Tail input plugin is used to read data from files on the filesystem. By default, it uses a small memory buffer of 32KB per monitored file. While this is sufficient for most generic use cases and helps keep memory usage low when monitoring many files, there are scenarios where you might want to increase performance by using more memory.

    If your files are typically larger than 32KB, consider increasing the buffer size to speed up file reading. For example, you can experiment with a buffer size of 128KB:

    By increasing the buffer size, Fluent Bit will make fewer system calls (read(2)) to read the data, reducing CPU usage and improving performance.

    hashtag
    Fluent Bit and SIMD for JSON encoding

The release of Fluent Bit v4.1.0 introduced performance improvements for JSON encoding using Single Instruction, Multiple Data (SIMD). Plugins that convert logs from the Fluent Bit internal binary representation to JSON can now do so roughly 2.5 times faster. Powered by the .

    hashtag
    Enabling SIMD support

    Ensure that your Fluent Bit binary is built with SIMD support. This feature is available for architectures such as x86_64, amd64, aarch64, and arm64. As of now, SIMD is only enabled by default in Fluent Bit container images.

    You can check if SIMD is enabled by looking for the following log entry when Fluent Bit starts:

    Look for the simd entry, which will indicate the SIMD support type, such as SSE2, NEON, or none.

    If your Fluent Bit binary wasn't built with SIMD enabled, and you are using a supported platform, you can build Fluent Bit from source using the CMake option -DFLB_SIMD=On.
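A minimal sketch of such a build, assuming the standard CMake workflow described in the build-from-source guides:

```shell
cd fluent-bit/build
cmake -DFLB_SIMD=On ../
make
```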

    hashtag
    Run input plugins in threaded mode

By default, most input plugins run in the same system thread as the main event loop. However, you can configure them to run in a separate thread, which lets you take advantage of other CPU cores in your system.

    To run an input plugin in threaded mode, add threaded: true as in the following example:

    Docker metrics

    circle-info

    Supported event types: logs

    The Docker input plugin lets you collect Docker container metrics, including memory usage and CPU consumption.

    hashtag
    Configuration parameters

    The plugin supports the following configuration parameters:

    Key
    Description
    Default

    If you set neither include nor exclude, the plugin will try to get metrics from all running containers.

    hashtag
    Configuration file

The following example configuration collects metrics from two Docker containers (6bab19c3a0f9 and 14159be4ca2c).
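A sketch of what that configuration might look like in YAML format (the include parameter name and the space-separated ID list are assumptions):

```yaml
pipeline:
  inputs:
    - name: docker
      include: 6bab19c3a0f9 14159be4ca2c

  outputs:
    - name: stdout
      match: '*'
```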

    This configuration will produce records like the following:

    Kernel logs

    circle-info

    Supported event types: logs

    The Kernel logs (kmsg) input plugin reads the Linux Kernel log buffer from the beginning. It gets every record and parses fields as priority, sequence, seconds, useconds, and message.

    hashtag
    Configuration parameters

    The plugin supports the following configuration parameters:

    Key
    Description
    Default

    hashtag
    Getting started

    To start getting the Linux Kernel messages, you can run the plugin from the command line or through the configuration file:

    hashtag
    Command line

    Which returns output similar to:

As described previously, the plugin processed all messages that the Linux Kernel reported. The output has been truncated for clarity.

    hashtag
    Configuration file

    In your main configuration file append the following:

    Download and install Fluent Bit

    Fluent Bit is compatible with most x86-based, x86_64-based, arm32v7-based, and arm64v8-based systems.

    hashtag
    Build from source code

    You can build and install Fluent Bit from its source code. There are also platform-specific guides for building Fluent Bit from source on macOS and Windows.

    hashtag
    Supported platforms and packages

    To install Fluent Bit from one of the available packages, use the installation method for your chosen platform.

    hashtag
    Container deployment

    Fluent Bit is available for the following container deployments:

    hashtag
    Linux

    Fluent Bit is available on , including the following distributions:

    hashtag
    macOS

    Fluent Bit is available on .

    hashtag
    Windows

    Fluent Bit is available on .

    hashtag
    Other platforms

    Official support is based on community demand. Fluent Bit might run on older operating systems, but must be built from source or using custom packages.

    Fluent Bit can run on Berkeley Software Distribution (BSD) systems and IBM Z Linux (s390x) systems with restrictions. Not all plugins and filters are supported.

    hashtag
    Enterprise providers

Fluent Bit packages are also provided by for older end-of-life versions, Unix systems, or for additional support and features, such as CVE backporting.

    Fluent Bit documentation

    High Performance Telemetry Agent for Logs, Metrics and Traces

Fluent Bit is a fast and lightweight telemetry agent for logs, metrics, and traces for Linux, macOS, Windows, and BSD family operating systems. Fluent Bit has been made with a strong focus on performance to allow the collection and processing of telemetry data from different sources without complexity.

    hashtag
    Features

    • High performance: High throughput with low resources consumption

    • Data parsing

      • Convert your unstructured messages using Fluent Bit parsers: , , and

    • Metrics support: Prometheus and OpenTelemetry compatible

    • Reliability and data integrity

      • handling

      • in memory and file system

    • Networking

      • Security: Built-in TLS/SSL support

      • Asynchronous I/O

    • Pluggable architecture and : Inputs, Filters and Outputs:

      • Connect nearly any source to nearly any destination using preexisting plugins

      • Extensibility:

    • : Expose internal metrics over HTTP in JSON and format

    • : Perform data selection and transformation using basic SQL queries

      • Create new streams of data using query results

      • Aggregation windows

    • Portable: Runs on Linux, macOS, Windows and BSD systems

    hashtag
    Release notes

    For more details about changes in each release, refer to the .

    If you are upgrading from the Fluent Bit 4.2 series, start with and .

    hashtag
    Fluent Bit, Fluentd, and CNCF

    Fluent Bit is a graduated sub-project under the umbrella of .

    Fluent Bit was originally created by and is now sponsored by . As a CNCF-hosted project, it's a fully vendor-neutral and community-driven project.

    hashtag
    License

    Fluent Bit, including its core, plugins, and tools, is distributed under the terms of the .

    Red Hat and CentOS

    Fluent Bit is distributed as the fluent-bit package and is available for the latest stable CentOS system.

    Fluent Bit supports the following architectures:

    • x86_64

    • aarch64

    macOS

    Fluent Bit is compatible with the latest Apple macOS software for x86_64 and Apple Silicon architectures.

    hashtag
    Installation packages

    Installation packages can be found .

    Backpressure

    It's possible for Fluent Bit to ingest or create data faster than it can flush that data to the intended destinations. This creates a condition known as backpressure.

    Fluent Bit can accommodate a certain amount of backpressure by that data until it can be processed and routed. However, if Fluent Bit continues buffering new data to temporary storage faster than it can flush old data, that storage will eventually reach capacity.

    Strategies for managing backpressure vary depending on the for each active input plugin. Because of this, choosing the right buffering mode is also a key part of managing backpressure.

    Dead letter queue

    The dead letter queue preserves that Fluent Bit fails to deliver to output destinations. Instead of losing this data, Fluent Bit copies the rejected chunks to a dedicated storage location for future analysis and troubleshooting.

    To enable the dead letter queue, filesystem storage must be enabled by setting a value for , and must be set to on.

    Chunks are copied to the dead letter queue in the following failure scenarios:

• Permanent errors: When an output plugin returns an unrecoverable error (FLB_ERROR).

    Disk I/O metrics

    circle-info

    Supported event types: logs

The Disk input plugin gathers information about the disk throughput of the running system at a configured interval and reports it.

    The Disk I/O metrics plugin creates metrics that are log-based, such as JSON payload. For Prometheus-based metrics, see the Node Exporter Metrics input plugin.

    Fluent Bit logs

    circle-info

    Supported event types: logs

    The Fluent Bit logs input plugin routes Fluent Bit internal log output into the pipeline as structured log records. Each record contains a level field and a message field, which lets you ship, filter, or store Fluent Bit internal diagnostic output using the same pipeline you use for all other data.

    This plugin is event-driven: records are delivered immediately as the internal logger emits them, not on a polling interval. Fluent Bit enables internal log mirroring automatically when this input is configured.

    Random

    circle-info

    Supported event types: logs

    The Random input plugin generates random value samples using the device interface /dev/urandom. If that interface is unavailable, it uses a Unix timestamp as a value.

    # Podman container tooling.
podman run --rm -ti fluent/fluent-bit --help
    
    # Docker container tooling.
    docker run --rm -it fluent/fluent-bit --help
    fluent-bit --dry-run -c /path/to/fluent-bit.yaml
    [error] [config] dummy: unknown configuration property 'invalid_property_that_does_not_exist'.
    export MY_OUTPUT=stdout
    $ bin/fluent-bit -c fluent-bit.conf
    
    ...
    [0] cpu.local: [1491243925, {"cpu_p"=>1.750000, "user_p"=>1.750000, "system_p"=>0.000000, "cpu0.p_cpu"=>3.000000, "cpu0.p_user"=>2.000000, "cpu0.p_system"=>1.000000, "cpu1.p_cpu"=>0.000000, "cpu1.p_user"=>0.000000, "cpu1.p_system"=>0.000000, "cpu2.p_cpu"=>4.000000, "cpu2.p_user"=>4.000000, "cpu2.p_system"=>0.000000, "cpu3.p_cpu"=>1.000000, "cpu3.p_user"=>1.000000, "cpu3.p_system"=>0.000000}]
    fluent-bit -h | grep JEMALLOC
    Build Flags =  JSMN_PARENT_LINKS JSMN_STRICT FLB_HAVE_TLS FLB_HAVE_SQLDB
    FLB_HAVE_TRACE FLB_HAVE_FLUSH_LIBCO FLB_HAVE_VALGRIND FLB_HAVE_FORK
    FLB_HAVE_PROXY_GO FLB_HAVE_JEMALLOC JEMALLOC_MANGLE FLB_HAVE_REGEX
    FLB_HAVE_C_TLS FLB_HAVE_SETJMP FLB_HAVE_ACCEPT4 FLB_HAVE_INOTIFY
type: regex
flush_timeout: 1000
rules:
  - state: start_state
    regex: '/([a-zA-Z]+ \d+ \d+:\d+:\d+)(.*)/'
    next_state: cont
  - state: cont
    regex: '/^\s+at.*/'
    next_state: cont

    m, mb, M, MB

    Megabytes: A unit of memory equal to 1,000,000 bytes.

    32m means 32,000,000 bytes.

    g, gb, G, GB

    Gigabytes: A unit of memory equal to 1,000,000,000 bytes.

    32g means 32,000,000,000 bytes.

    Windows Exporter Metrics

    credential_process

    config

Linux only. See Sourcing credentials with an external process in the AWS CLI.

    aws_access_key_id, aws_secret_access_key, aws_session_token

    credentials

    Access key ID and secret key to use to authenticate. The session token must be set for temporary credentials.

    EKS Web Identity Token (OIDC)
    ECS HTTP credentials endpoint
    EC2 Instance Profile Credentials (IMDS)
    AWS Greengrass credentials
    Elasticsearch
    Configuration and credential file settings in the AWS CLIarrow-up-right
    IAM roles for service accountsarrow-up-right
    Amazon ECS task IAM rolearrow-up-right
    Learn how EKS Pod Identity grants pods access to AWS servicesarrow-up-right
    IAM roles for Amazon EC2arrow-up-right
    Token exchange servicearrow-up-right
    spinner

    @INCLUDE FILE

    Include a configuration file.

    @SET

    @SET KEY=VAL

    Set a configuration variable.

    configuration variables
    @INCLUDE

    host

    IP address or hostname of the target host.

    port

    TCP port of the target service.

    UPSTREAM

    name

Defines a name for the Upstream in question.

    NODE

    name

    Defines a name for the Node in question.

    Forward
    TLS/SSL

    yyjson projectarrow-up-right

    prio_level

The log level to filter. Kernel log messages with a priority greater than prio_level are dropped. Allowed values are 0-8, where 8 means all logs are saved.

    8

    threaded

    Indicates whether to run this input in its own thread.

    false

  • arm64v8

• For CentOS 9 and later, Fluent Bit uses CentOS Stream as the canonical base system.

    The recommended secure deployment approach is to use the following instructions:

    hashtag
    CentOS 8

    CentOS 8 is now end-of-life, so the default Yum repositories are unavailable.

    Ensure you've configured an appropriate mirror. For example:

    An alternative is to use Rocky or Alma Linux, which should be equivalent.

    hashtag
    RHEL, AlmaLinux, RockyLinux, and CentOS 9 Stream

From CentOS 9 Stream onward, CentOS dependencies update more often than downstream distributions. This might mean that incompatible (more recent) versions of certain dependencies (for example, OpenSSL) are provided. For OSS, Fluent Bit also provides RockyLinux and AlmaLinux repositories.

Replace the centos string in Yum configuration with almalinux or rockylinux to use those repositories instead. This might be required for RHEL 9 as well, which no longer tracks equivalent CentOS 9 Stream dependencies. No RHEL 9 build is provided, as it's expected you're using one of the OSS variants listed.

    hashtag
    Configure YUM

The fluent-bit package is provided through a Yum repository. To add the repository reference to your system:

    1. In /etc/yum.repos.d/, add a new file called fluent-bit.repo.

    2. Add the following content to the file:

    3. As a best practice, enable gpgcheck and repo_gpgcheck for security reasons. Fluent Bit signs its repository metadata and all Fluent Bit packages.

    hashtag
    Install

    1. Ensure your GPG key is up to date.

    2. After your repository is configured, run the following command to install it:

    3. Instruct Systemd to enable the service:

If you do a status check, you should see output similar to the following:

The default Fluent Bit configuration collects CPU usage metrics and sends the records to the standard output. You can see the outgoing data in your /var/log/messages file.

    hashtag
    FAQ

    hashtag
    Yum install fails with a 404 - Page not found error for the package mirror

    The fluent-bit.repo file for the latest installations of Fluent Bit uses a $releasever variable to determine the correct version of the package to install to your system:

    Depending on your Red Hat distribution version, this variable can return a value other than the OS major release version (for example, RHEL7 Server distributions return 7Server instead of 7). The Fluent Bit package URL uses the major OS release version, so any other value here will cause a 404.

    To resolve this issue, replace the $releasever variable with your system's OS major release version. For example:

    hashtag
    Yum install fails with incompatible dependencies using CentOS 9+

CentOS 9 and later are no longer guaranteed to be compatible with RHEL 9, because they might track more recent dependencies. Alternative AlmaLinux and RockyLinux repositories are available.

    See the previous guidance.

    hashtag
    Requirements

You must have Homebrew installed in your system. If it isn't present, install it with the following command:

    hashtag
    Installing from Homebrew

    The Fluent Bit package on Homebrew isn't officially supported, but should work for basic use cases and testing. It can be installed using:

    hashtag
    Compile from source

    hashtag
    Install build dependencies

    Run the following brew command in your terminal to retrieve the dependencies:

    hashtag
    Download and build the source

    1. Download a copy of the Fluent Bit source code (upstream):

2. Go to the Fluent Bit directory.

If you want to use a specific version, check out the corresponding tag. For example, to use v4.0.4, run:

3. To prepare the build system, you must export certain environment variables so Fluent Bit CMake build rules can pick the right libraries:

4. Change to the build/ directory inside the Fluent Bit sources:

5. Build Fluent Bit. This example tells the build system where the final binaries and configuration files should be installed:

6. Install Fluent Bit to the previously specified directory. Writing to this directory requires root privileges.

    The binaries and configuration examples can be located at /opt/fluent-bit/.

    hashtag
    Create macOS installer from source

    1. Clone the Fluent Bit source code (upstream):

2. Change to the Fluent Bit directory.

To use a specific version, check out the corresponding tag. For example, to use v4.0.4, run:

3. To prepare the build system, you must export certain environment variables so Fluent Bit CMake build rules can pick the right libraries:

4. Create the specific macOS SDK target. For example, to specify the macOS Big Sur (11.3) SDK environment:

5. Change to the build/ directory inside the Fluent Bit sources:

6. Build the Fluent Bit macOS installer:

    The macOS installer will be generated as:

    Finally, the fluent-bit-<fluent-bit version>-(intel or apple).pkg will be generated.

    The created installer will put binaries at /opt/fluent-bit/.

    hashtag
    Running Fluent Bit

To make the Fluent Bit binary easier to access, extend the PATH variable:

To test, generate a test message using the Dummy input plugin, which prints to the standard output interface every second:

    You will see an output similar to this:

    To halt the process, press ctrl-c in the terminal.

    in the Fluent Bit repositoryarrow-up-right
  • Retry limit reached: When a chunk exhausts all configured retry attempts.

  • Retries disabled: When retry_limit is set to no_retries and a flush fails.

  • Scheduler failures: When the retry scheduler can't schedule a retry (for example, due to resource constraints).

hashtag
Location

    Rejected chunks are stored in the subdirectory defined by storage.path. For example, with the following configuration, rejected chunks are stored at /var/log/flb-storage/rejected/:

    hashtag
    Format

    Each dead letter queue file is named using this format:

    For example: kube_var_log_containers_test_400_http_0x7f8b4c.flb

    The file contains the original chunk data in the internal format of Fluent Bit, preserving all records and metadata.
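Because the sanitized tag itself can contain underscores, the remaining fields of a dead letter queue filename can be recovered by splitting from the right. A small illustrative helper (not part of Fluent Bit; `parse_dlq_filename` is a hypothetical name):

```python
def parse_dlq_filename(name: str) -> dict:
    """Split a dead letter queue filename of the form
    <sanitized_tag>_<status_code>_<output_name>_<unique_id>.flb
    into its components. The tag may contain underscores, so the
    three trailing fields are split off from the right."""
    stem = name.removesuffix(".flb")
    tag, status, output, uid = stem.rsplit("_", 3)
    return {"tag": tag, "status_code": status, "output": output, "id": uid}

# Using the example filename from this section:
print(parse_dlq_filename("kube_var_log_containers_test_400_http_0x7f8b4c.flb"))
```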

    hashtag
    Troubleshooting with dead letter queue

    The dead letter queue feature enables the following capabilities:

    • Data preservation: Invalid or rejected chunks are preserved instead of being permanently lost.

    • Root cause analysis: Investigate why specific data failed to be delivered without impacting live processing.

    • Data recovery: Replay or transform rejected chunks after fixing the underlying issue.

    • Debugging: Analyze the exact content of problematic records.

    To examine dead letter queue chunks, you can use the storage metrics endpoint (when storage.metrics is enabled) or directly inspect the files in the rejected directory.
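For example, with the built-in HTTP server enabled, the storage metrics can be queried over HTTP (2020 is Fluent Bit's default HTTP server port; treat the exact path as an assumption for this sketch):

```shell
curl -s http://127.0.0.1:2020/api/v1/storage
```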

    circle-info

    Dead letter queue files remain on disk until manually removed. Monitor disk usage in the rejected directory and implement a cleanup policy for older files.

    A Service section will look like this:

This configuration sets an optional buffering mechanism that stores data under /var/log/flb-storage/. It uses normal synchronization mode without checksums, and allows up to 5 MB of memory when processing backlog data. Additionally, the dead letter queue is enabled, and rejected chunks are stored in /var/log/flb-storage/rejected/.

    chunks
    storage.path
    storage.keep.rejected
    circle-info

    Internal log records are buffered in a bounded in-memory queue of up to 1024 entries. Records produced before the pipeline is ready, or while the queue is full, aren't delivered through this plugin.

    hashtag
    Record format

    Each record contains the following fields:

    Field
    Type
    Description

    level

    String

    Severity of the log entry. Possible values: error, warn, info, debug, trace, help.

    message

    String

    The log message text.

    hashtag
    Configuration parameters

    This plugin has no configuration parameters.

    hashtag
    Get started

    hashtag
    Command line

    hashtag
    Configuration file

    The following example captures Fluent Bit internal logs and writes them to standard output:

    To forward internal logs to an external destination, replace the output with any supported output plugin. For example, to forward to an OpenTelemetry collector:

    [SERVICE]
        Flush 1
    
    @INCLUDE inputs.conf
    @INCLUDE outputs.conf
    [INPUT]
        Name cpu
        Tag  mycpu
    
    [INPUT]
        Name tail
        Path /var/log/*.log
        Tag  varlog.*
    [OUTPUT]
        Name   stdout
        Match  mycpu
    
    [OUTPUT]
        Name            es
        Match           varlog.*
        Host            127.0.0.1
        Port            9200
        Logstash_Format On
    // DO NOT USE
    @SET my_input=cpu
    @SET my_output=stdout
    
    [SERVICE]
        Flush 1
    
    [INPUT]
        Name ${my_input}
    
    [OUTPUT]
        Name ${my_output}
    [UPSTREAM]
      name       forward-balancing
    
    [NODE]
      name       node-1
      host       127.0.0.1
      port       43000
    
    [NODE]
      name       node-2
      host       127.0.0.1
      port       44000
    
    [NODE]
      name       node-3
      host       127.0.0.1
      port       45000
      tls        on
      tls.verify off
      shared_key secret
    pipeline:
      inputs:
        - name: tail
          path: '/var/log/containers/*.log'
          buffer_chunk_size: 128kb
          buffer_max_size: 128kb
    [2024/11/10 22:25:53] [ info] [fluent bit] version=3.2.0, commit=12cb22e0e9, pid=74359
    [2024/11/10 22:25:53] [ info] [storage] ver=1.5.2, type=memory, sync=normal, checksum=off, max_chunks_up=128
    [2024/11/10 22:25:53] [ info] [simd    ] SSE2
    [2024/11/10 22:25:53] [ info] [cmetrics] version=0.9.8
    [2024/11/10 22:25:53] [ info] [ctraces ] version=0.5.7
    [2024/11/10 22:25:53] [ info] [sp] stream processor started
    pipeline:
      inputs:
        - name: tail
          path: '/var/log/containers/*.log'
          threaded: true
    fluent-bit -i kmsg -t kernel -o stdout -m '*'
    ...
    [0] kernel: [1463421823, {"priority"=>3, "sequence"=>1814, "sec"=>11706, "usec"=>732233, "msg"=>"ERROR @wl_cfg80211_get_station : Wrong Mac address, mac = 34:a8:4e:d3:40:ec profile =20:3a:07:9e:4a:ac"}]
    [1] kernel: [1463421823, {"priority"=>3, "sequence"=>1815, "sec"=>11706, "usec"=>732300, "msg"=>"ERROR @wl_cfg80211_get_station : Wrong Mac address, mac = 34:a8:4e:d3:40:ec profile =20:3a:07:9e:4a:ac"}]
    [2] kernel: [1463421829, {"priority"=>3, "sequence"=>1816, "sec"=>11712, "usec"=>729728, "msg"=>"ERROR @wl_cfg80211_get_station : Wrong Mac address, mac = 34:a8:4e:d3:40:ec profile =20:3a:07:9e:4a:ac"}]
    [3] kernel: [1463421829, {"priority"=>3, "sequence"=>1817, "sec"=>11712, "usec"=>729802, "msg"=>"ERROR @wl_cfg80211_get_station : Wrong Mac address, mac = 34:a8:4e:d3:40:ec
    ...
    pipeline:
      inputs:
        - name: kmsg
          tag: kernel
    
      outputs:
        - name: stdout
          match: '*'
    [INPUT]
      Name   kmsg
      Tag    kernel
    
    [OUTPUT]
      Name   stdout
      Match  *
    [fluent-bit]
      name = Fluent Bit
      baseurl = https://packages.fluentbit.io/centos/$releasever/
      gpgcheck=1
      gpgkey=https://packages.fluentbit.io/fluentbit.key
      repo_gpgcheck=1
      enabled=1
    sudo yum install fluent-bit
    sudo systemctl start fluent-bit
sed -i 's/mirrorlist/#mirrorlist/g' /etc/yum.repos.d/CentOS-* && \
sed -i 's|#baseurl=http://mirror.centos.org|baseurl=http://vault.centos.org|g' /etc/yum.repos.d/CentOS-*
    $ systemctl status fluent-bit
    
    ● fluent-bit.service - Fluent Bit
       Loaded: loaded (/usr/lib/systemd/system/fluent-bit.service; disabled; vendor preset: disabled)
       Active: active (running) since Thu 2016-07-07 02:08:01 BST; 9s ago
     Main PID: 3820 (fluent-bit)
       CGroup: /system.slice/fluent-bit.service
               └─3820 /opt/fluent-bit/bin/fluent-bit -c etc/fluent-bit/fluent-bit.conf
    ...
    [fluent-bit]
      name = Fluent Bit
      baseurl = https://packages.fluentbit.io/centos/$releasever/$basearch/
    [fluent-bit]
      name = Fluent Bit
      baseurl = https://packages.fluentbit.io/centos/7/$basearch/
      gpgcheck=1
      gpgkey=https://packages.fluentbit.io/fluentbit.key
      repo_gpgcheck=1
      enabled=1
    /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
    brew install fluent-bit
    brew install git cmake openssl bison libyaml
    git clone https://github.com/fluent/fluent-bit
    cd fluent-bit
    git checkout v4.0.4
    export OPENSSL_ROOT_DIR=`brew --prefix openssl`
    
    export PATH=`brew --prefix bison`/bin:$PATH
    cd build/
    cmake -DFLB_DEV=on -DCMAKE_INSTALL_PREFIX=/opt/fluent-bit ../
    
    make -j 16
    sudo make install
    git clone https://github.com/fluent/fluent-bit
    cd fluent-bit
    git checkout v4.0.4
    export OPENSSL_ROOT_DIR=`brew --prefix openssl`
    
    export PATH=`brew --prefix bison`/bin:$PATH
    export MACOSX_DEPLOYMENT_TARGET=11.3
    cd build/
    cmake -DCPACK_GENERATOR=productbuild -DCMAKE_INSTALL_PREFIX=/opt/fluent-bit ../
    
    make -j 16
    
    cpack -G productbuild
    CPack: Create package using productbuild
    CPack: Install projects
    CPack: - Run preinstall target for: fluent-bit
    CPack: - Install project: fluent-bit []
    CPack: -   Install component: binary
    CPack: -   Install component: library
    CPack: -   Install component: headers
    CPack: -   Install component: headers-extra
    CPack: Create package
    CPack: -   Building component package: /Users/fluent-bit-builder/GitHub/fluent-bit/build/_CPack_Packages/Darwin/productbuild//Users/fluent-bit-builder/GitHub/fluent-bit/build/fluent-bit-1.9.2-apple/Contents/Packages/fluent-bit-1.9.2-apple-binary.pkg
    CPack: -   Building component package: /Users/fluent-bit-builder/GitHub/fluent-bit/build/_CPack_Packages/Darwin/productbuild//Users/fluent-bit-builder/GitHub/fluent-bit/build/fluent-bit-1.9.2-apple/Contents/Packages/fluent-bit-1.9.2-apple-headers.pkg
    CPack: -   Building component package: /Users/fluent-bit-builder/GitHub/fluent-bit/build/_CPack_Packages/Darwin/productbuild//Users/fluent-bit-builder/GitHub/fluent-bit/build/fluent-bit-1.9.2-apple/Contents/Packages/fluent-bit-1.9.2-apple-headers-extra.pkg
    CPack: -   Building component package: /Users/fluent-bit-builder/GitHub/fluent-bit/build/_CPack_Packages/Darwin/productbuild//Users/fluent-bit-builder/GitHub/fluent-bit/build/fluent-bit-1.9.2-apple/Contents/Packages/fluent-bit-1.9.2-apple-library.pkg
    CPack: - package: /Users/fluent-bit-builder/GitHub/fluent-bit/build/fluent-bit-1.9.2-apple.pkg generated.
    export PATH=/opt/fluent-bit/bin:$PATH
    fluent-bit -i dummy -o stdout -f 1
    ...
    [0] dummy.0: [1644362033.676766000, {"message"=>"dummy"}]
    [0] dummy.0: [1644362034.676914000, {"message"=>"dummy"}]
    service:
      flush: 1
      log_level: info
      storage.path: /var/log/flb-storage/
      storage.sync: normal
      storage.checksum: off
      storage.backlog.mem_limit: 5M
      storage.backlog.flush_on_shutdown: off
      storage.keep.rejected: on
      storage.rejected.path: rejected
    [SERVICE]
      flush                     1
      log_level                 info
      storage.path              /var/log/flb-storage/
      storage.sync              normal
      storage.checksum          off
      storage.backlog.mem_limit 5M
      storage.backlog.flush_on_shutdown off
      storage.keep.rejected     on
      storage.rejected.path     rejected
    service:
      storage.path: /var/log/flb-storage/
      storage.keep.rejected: on
      storage.rejected.path: rejected
    <sanitized_tag>_<status_code>_<output_name>_<unique_id>.flb
    service:
      flush: 1
      log_level: info
    
    pipeline:
      inputs:
        - name: fluentbit_logs
          tag: internal.logs
    
      outputs:
        - name: stdout
          match: 'internal.logs'
    [SERVICE]
        Flush     1
        Log_Level info
    
    [INPUT]
        Name  fluentbit_logs
        Tag   internal.logs
    
    [OUTPUT]
        Name  stdout
        Match internal.logs
    service:
      flush: 1
      log_level: info
    
    pipeline:
      inputs:
        - name: fluentbit_logs
          tag: internal.logs
    
      outputs:
        - name: opentelemetry
          match: 'internal.logs'
          host: otel-collector
          port: 4318
    [SERVICE]
        Flush     1
        Log_Level info
    
    [INPUT]
        Name  fluentbit_logs
        Tag   internal.logs
    
    [OUTPUT]
        Name  opentelemetry
        Match internal.logs
        Host  otel-collector
        Port  4318
    fluent-bit -i fluentbit_logs -o stdout

    Indicates whether to run this input in its own thread.

    false

    typesdb

    Set the data specification file. You can specify multiple files separated by commas. Later entries take precedence over earlier ones.

    /usr/share/collectd/types.db

    listen

    Set the address to listen to.

    0.0.0.0

    port

    Set the port to listen to.

    25826

    parser

    Specify the name of a parser to interpret the entry as a structured message.

    none

    reconnect.retry_interval

    The retry interval in seconds.

    1

    reconnect.retry_limits

    The maximum number of retries allowed. The plugin tries to reconnect to the Docker socket when EOF is detected.

    5

    threaded

    Indicates whether to run this input in its own thread.

    false

    unix_path

    The docker socket Unix path.

    /var/run/docker.sock

    buffer_size

    The size of the buffer used to read docker events in bytes.

    8192

    key

    When a message is unstructured (no parser applied), it's appended as a string under the key name message.

    message

    interval_nsec

    Polling interval in nanoseconds.

    0

    interval_sec

    Polling interval in seconds.

    1

    path.containers

    Container directory path, for custom Docker data-root configurations.

    /var/lib/docker/containers

    path.sysfs

    Sysfs cgroup mount point.

    /sys/fs/cgroup

    threaded

    Indicates whether to run this input in its own thread.

    false

    exclude

    A space-separated list of containers to exclude.

    none

    include

    A space-separated list of containers to include.

    none

    [INPUT]
      Name         docker
      Include      6bab19c3a0f9 14159be4ca2c
    
    [OUTPUT]
      Name   stdout
      Match  *
    pipeline:
      inputs:
        - name: docker
          include: 6bab19c3a0f9 14159be4ca2c
    
      outputs:
        - name: stdout
          match: '*'

    hashtag
    Manage backpressure for memory-only buffering

    If one or more active input plugins use memory-only buffering, use the following settings to manage backpressure.

    circle-exclamation

    Some input plugins are prone to data loss after mem_buf_limit capacity is reached during memory-only buffering. If you need to avoid data loss, consider using filesystem buffering instead.

    hashtag
    Set mem_buf_limit for input plugins

    For input plugins that use memory-only buffering, you can configure the mem_buf_limit setting to enforce a limit on how much data the plugin can buffer in memory.

    circle-info

    This setting doesn't affect how much data can be buffered to memory by plugins that use filesystem buffering.

    When the specified mem_buf_limit capacity is reached, Fluent Bit will stop buffering data from that source plugin until enough buffered chunks are flushed. Most plugins emit a log message that says [warn] [input] <PLUGIN NAME> paused (mem buf overlimit) when buffering pauses.

    After more memory becomes available, Fluent Bit will resume buffering data from that source plugin. Most plugins emit a log message that says [info] [input] <PLUGIN NAME> resume (mem buf overlimit) when buffering resumes.
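    To bound memory usage for a memory-buffered input, a sketch like the following can be used (the input and path are illustrative, not part of the example above):

    ```yaml
    pipeline:
      inputs:
        - name: tail
          path: /var/log/app.log   # illustrative path
          mem_buf_limit: 5MB       # pause this input when 5 MB is buffered in memory
      outputs:
        - name: stdout
          match: '*'
    ```

    When the 5 MB limit is reached, this input pauses and emits the `mem buf overlimit` warning until buffered chunks are flushed.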

    hashtag
    Behavior when capacity is reached

    The following example demonstrates what happens when an input plugin with memory-only buffering reaches its mem_buf_limit capacity:

    • The input plugin's mem_buf_limit is set to 1MB.

    • The input plugin tries to append 700 KB.

    • The engine routes the data to an output plugin.

    • The output plugin's backend is down, which means it won't accept the data.

    • The engine scheduler retries the flush after 10 seconds.

    • The input plugin tries to append 500 KB.

    In this situation, the engine allows appending those 500 KB of data into the memory, with a total of 1.2 MB of data buffered. The limit is permissive and will allow a single write past the capacity of mem_buf_limit. When the limit is exceeded, Fluent Bit takes the following actions:

    • It blocks local buffers for the input plugin (can't append more data).

    • It notifies the input plugin, invoking a pause callback.

    The engine protects itself and won't append more data coming from the input plugin in question. It's the responsibility of the plugin to keep state and decide what to do in a paused state.

    In a few seconds, if the scheduler was able to flush the initial 700 KB of data or it has given up after retrying, that amount of memory is released and the following actions occur:

    • Upon data buffer release (700 KB), the internal counters get updated.

    • Counters now are set at 500 KB.

    • Because 500 KB is less than 1 MB, it checks the input plugin state.

    • If the plugin is paused, it invokes a resume callback.

    • The input plugin can continue appending more data.

    hashtag
    Manage backpressure for filesystem buffering

    If one or more active input plugins use filesystem buffering, use the following settings to manage backpressure.

    hashtag
    Set storage.max_chunks_up and storage.backlog.mem_limit in global settings

    In the service section of your Fluent Bit configuration file, you can configure the storage.max_chunks_up and storage.backlog.mem_limit settings. Both settings dictate how much data can be buffered to memory by input plugins that use filesystem buffering, and are combined limits shared by all applicable input plugins.

    circle-info

    These settings don't affect how much data can be buffered to memory by plugins that use memory-only buffering.

    When either the specified storage.max_chunks_up or storage.backlog.mem_limit capacity is reached, all input plugins that use filesystem buffering will stop buffering data to memory until more memory becomes available. Whether these input plugins continue buffering data to the filesystem depends on each plugin's specified storage.pause_on_chunks_overlimit value.

    hashtag
    Set storage.pause_on_chunks_overlimit for input plugins

    For input plugins that use filesystem buffering, you can configure the storage.pause_on_chunks_overlimit setting to specify how each plugin should behave after the global storage.max_chunks_up or storage.backlog.mem_limit capacity is reached.

    If storage.pause_on_chunks_overlimit is set to off for an input plugin, the input plugin will stop buffering data to memory but continue buffering data to the filesystem.

    If storage.pause_on_chunks_overlimit is set to on for an input plugin, the input plugin will stop both memory buffering and filesystem buffering until more memory becomes available.
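    As a sketch, the following pairs the global limits with a filesystem-buffered input that pauses completely when those limits are reached (the input and paths are illustrative):

    ```yaml
    service:
      storage.path: /var/log/flb-storage/
      storage.max_chunks_up: 128          # shared memory limit for filesystem-buffered inputs

    pipeline:
      inputs:
        - name: tail
          path: /var/log/app.log          # illustrative path
          storage.type: filesystem
          storage.pause_on_chunks_overlimit: on
    ```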

    hashtag
    Set storage.total_limit_size for output plugins

    Fluent Bit implements the concept of logical queues for buffered chunks. Based on its tag, a chunk can be routed to multiple destinations. Fluent Bit keeps an internal reference from where each chunk was created and where it needs to go. To limit the number of queued chunks, set the storage.total_limit_size for any active output plugins that route data ingested by input plugins that use filesystem buffering.

    Network failures or latency in third-party services are common for output destinations. In some cases, a chunk is tagged for multiple destinations with varying response times, or one destination generates more backpressure than others. If an output plugin reaches its configured storage.total_limit_size capacity, the oldest chunk in its queue is discarded to make room for new data.
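    As a sketch, the following output limits its queue of buffered chunks to 5 MB (the output plugin and host are illustrative):

    ```yaml
    pipeline:
      outputs:
        - name: es
          match: '*'
          host: 192.168.2.3              # illustrative backend address
          storage.total_limit_size: 5M   # discard oldest chunks beyond this size
    ```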

    buffering
    buffering mode
    hashtag
    Configuration parameters

    The plugin supports the following configuration parameters:

    Key
    Description
    Default

    dev_name

    Device name to limit the target (for example, sda). If not set, in_disk gathers information from all disks and partitions.

    all disks

    interval_nsec

    Polling interval in nanoseconds.

    0

    hashtag
    Get started

    To get disk usage from your system, you can run the plugin from the command line or through the configuration file:

    hashtag
    Command line

    You can run the plugin from the command line:

    Which returns information like the following:

    hashtag
    Configuration file

    In your main configuration file append the following:

    Total interval (sec) = interval_sec + (interval_nsec / 1000000000)

    For example: 1.5s = 1s + 500000000ns
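    For instance, a 1.5 second polling interval for the disk input can be sketched by combining both settings:

    ```yaml
    pipeline:
      inputs:
        - name: disk
          tag: disk
          interval_sec: 1            # 1 s
          interval_nsec: 500000000   # + 0.5 s = 1.5 s total
      outputs:
        - name: stdout
          match: '*'
    ```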

    hashtag
    Configuration parameters

    The plugin supports the following configuration parameters:

    Key
    Description
    Default

    interval_nsec

    Set the interval between generated samples, in nanoseconds. This works in conjunction with interval_sec.

    0

    interval_sec

    Set the interval between generated samples, in seconds.

    1

    hashtag
    Get started

    To start generating random samples, you can either run the plugin from the command line or through a configuration file.

    hashtag
    Command line

    Use the following command line options to generate samples.

    hashtag
    Configuration file

    The following examples are sample configuration files for this input plugin:

    hashtag
    Testing

    After Fluent Bit starts running, it generates reports in the output interface:

    Scheduling and retries

    Fluent Bit has an engine that helps to coordinate the data ingestion from input plugins. The engine calls the scheduler to decide when it's time to flush the data through one or multiple output plugins. The scheduler flushes new data at a fixed number of seconds, and retries when asked.

    When an output plugin gets called to flush some data, after processing that data it can notify the engine using these possible return statuses:

    • OK: Data successfully processed and flushed.

    • Retry: If a retry is requested, the engine asks the scheduler to retry flushing that data. The scheduler decides how many seconds to wait before retry.

    • Error: An unrecoverable error occurred and the engine shouldn't try to flush that data again.

    hashtag
    Configure wait time for retry

    The scheduler provides two configuration options, called scheduler.cap and scheduler.base, which can be set in the Service section. These determine the waiting time before a retry happens.

    Key
    Description
    Default

    The scheduler.base determines the lower bound of time and the scheduler.cap determines the upper bound for each retry.

    Fluent Bit uses an exponential backoff and jitter algorithm to determine the waiting time before a retry. The waiting time is a random number between a configurable upper and lower bound. For a detailed explanation of the exponential backoff and jitter algorithm, see Exponential Backoff And Jitter.

    For example:

    For the Nth retry, the lower bound of the random number will be:

    base

    The upper bound will be:

    min(base * (Nth power of 2), cap)

    For example:

    When base is set to 3 and cap is set to 30:

    First retry: The lower bound will be 3. The upper bound will be 3 * 2 = 6. The waiting time will be a random number between (3, 6).

    Second retry: The lower bound will be 3. The upper bound will be 3 * (2 * 2) = 12. The waiting time will be a random number between (3, 12).

    Third retry: The lower bound will be 3. The upper bound will be 3 * (2 * 2 * 2) = 24. The waiting time will be a random number between (3, 24).

    Fourth retry: The lower bound will be 3, because 3 * (2 * 2 * 2 * 2) = 48 > 30. The upper bound will be 30. The waiting time will be a random number between (3, 30).

    hashtag
    Wait time example

    The following example configures the scheduler.base as 3 seconds and scheduler.cap as 30 seconds.
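    A minimal service section with these values might look like:

    ```yaml
    service:
      flush: 1
      scheduler.base: 3    # lower bound of the retry wait, in seconds
      scheduler.cap: 30    # upper bound of the retry wait, in seconds
    ```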

    The waiting time will be:

    Nth retry
    Waiting time range (seconds)

    hashtag
    Configure retries

    The scheduler provides a configuration option called Retry_Limit, which can be set independently for each output section. This option lets you disable retries or impose a limit to try N times and then discard the data after reaching that limit:

    Value
    Description
    circle-info

    When a chunk exhausts all retry attempts or retries are disabled, the data is discarded by default. To preserve rejected data for later analysis, enable the Dead Letter Queue feature by setting storage.keep.rejected to on in the Service section.

    hashtag
    Retry example

    The following example configures two outputs, where the HTTP plugin has an unlimited number of retries, and the Elasticsearch plugin has a limit of 5 retries:
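    A sketch of such a configuration (the hosts are illustrative):

    ```yaml
    pipeline:
      outputs:
        - name: http
          match: '*'
          host: 192.168.5.6     # illustrative address
          retry_limit: no_limits
        - name: es
          match: '*'
          host: 192.168.5.20    # illustrative address
          retry_limit: 5
    ```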

    Dummy

    circle-info

    Supported event types: logs

    The Dummy input plugin generates dummy events. Use this plugin for testing, debugging, benchmarking and getting started with Fluent Bit.

    hashtag
    Configuration parameters

    The plugin supports the following configuration parameters:

    Key
    Description
    Default

    hashtag
    Get started

    You can run the plugin from the command line or through the configuration file:

    hashtag
    Command line

    Run the plugin from the command line using the following command:

    which returns results like the following:

    hashtag
    Configuration file

    In your main configuration file append the following:
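    A minimal dummy input configuration might look like:

    ```yaml
    pipeline:
      inputs:
        - name: dummy
          dummy: '{"message": "custom dummy"}'   # JSON record to emit
      outputs:
        - name: stdout
          match: '*'
    ```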

    MQTT

    circle-info

    Supported event types: logs

    The MQTT input plugin retrieves messages and data from MQTT control packets over a TCP connection. Incoming data must be a JSON map.

    hashtag
    Configuration parameters

    The plugin supports the following configuration parameters:

    Key
    Description
    Default
    circle-info
    • buffer_size defaults to 2048 bytes; messages larger than this limit are dropped.

    • Defaults for listen and

    hashtag
    TLS / SSL

    The MQTT input plugin supports TLS/SSL. For the available options and guidance, see the TLS configuration documentation.

    hashtag
    Get started

    To listen for MQTT messages, you can run the plugin from the command line or through the configuration file.

    hashtag
    Command line

    The MQTT input plugin lets Fluent Bit behave as a server. Dispatch some messages using an MQTT client. In the following example, the mosquitto tool is used for this purpose:

    Running the following command:

    Returns a response like the following:

    The following command line will send a message to the MQTT input plugin:

    hashtag
    Configuration file

    In your main configuration file append the following:
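    A minimal MQTT input configuration might look like this sketch, assuming the conventional MQTT port 1883:

    ```yaml
    pipeline:
      inputs:
        - name: mqtt
          tag: data
          listen: 0.0.0.0
          port: 1883
      outputs:
        - name: stdout
          match: '*'
    ```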

    Memory metrics

    circle-info

    Supported event types: logs

    The Memory (mem) input plugin gathers memory and swap usage on Linux at a fixed interval and reports totals and free space. The plugin emits log-based metrics (for Prometheus-format metrics see the Node Exporter metrics input plugin).

    hashtag
    Metrics reported

    Key
    Description
    Units

    hashtag
    Configuration parameters

    The plugin supports the following configuration parameters:

    Key
    Description
    Default

    hashtag
    Get started

    To collect memory and swap usage from your system, run the plugin from the command line or through the configuration file.

    hashtag
    Command line

    Run the following command from the command line:

    The output is similar to:

    hashtag
    Configuration file

    In your main configuration file append the following:
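    A minimal mem input configuration might look like:

    ```yaml
    pipeline:
      inputs:
        - name: mem
          tag: memory
      outputs:
        - name: stdout
          match: '*'
    ```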

    Example output when pid is set:

    Network I/O metrics

    circle-info

    Supported event types: logs

    The Network I/O metrics (netif) input plugin gathers network traffic information from the running system at regular intervals and reports it. This plugin is available only for Linux.

    The Network I/O metrics plugin creates metrics that are log-based, such as JSON payload. For Prometheus-based metrics, see the Node Exporter metrics input plugin.

    hashtag
    Metrics reported

    The following table describes the metrics generated by the plugin. Metric names are prefixed with the interface name (for example, eth0):

    Key
    Description

    hashtag
    Configuration parameters

    The plugin supports the following configuration parameters:

    Key
    Description
    Default

    hashtag
    Get started

    To monitor network traffic from your system, you can run the plugin from the command line or through the configuration file.

    hashtag
    Command line

    Run Fluent Bit using a command similar to the following:

    Which returns output similar to the following:

    hashtag
    Configuration file

    In your main configuration file append the following:
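    As a sketch, the following monitors a single interface (the interface name is illustrative):

    ```yaml
    pipeline:
      inputs:
        - name: netif
          tag: netif
          interval_sec: 1
          interval_nsec: 0
          interface: eth0   # illustrative interface name
      outputs:
        - name: stdout
          match: '*'
    ```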

    Total interval (sec) = interval_sec + (interval_nsec / 1000000000)

    For example: 1.5s = 1s + 500000000ns

    Process metrics

    circle-info

    Supported event types: logs

    The Process metrics input plugin lets you check how healthy a process is. It does so by performing service checks at specified intervals.

    This plugin creates metrics that are log-based, such as JSON payloads. For Prometheus-based metrics, see the Node exporter metrics input plugin.

    hashtag
    Configuration parameters

    The plugin supports the following configuration parameters:

    Key
    Description
    Default

    hashtag
    Get started

    To start performing the checks, you can run the plugin from the command line or through the configuration file:

    The following example checks the health of the crond process.

    hashtag
    Configuration file

    In your main configuration file, append the following sections:
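    A sketch that checks the crond process every second (proc_name names the process to watch):

    ```yaml
    pipeline:
      inputs:
        - name: proc
          proc_name: crond
          interval_sec: 1
      outputs:
        - name: stdout
          match: '*'
    ```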

    hashtag
    Testing

    After Fluent Bit starts running, it outputs the health of the process:

    Health

    circle-info

    Supported event types: logs

    The Health input plugin lets you check how healthy a TCP server is. It checks by issuing a TCP connection at regular intervals.

    hashtag
    Configuration parameters

    The plugin supports the following configuration parameters:

    Key
    Description
    Default

    hashtag
    Get started

    To start performing the checks, you can run the plugin from the command line or through the configuration file:

    hashtag
    Command line

    From the command line you can let Fluent Bit generate the checks with the following options:

    hashtag
    Configuration file

    In your main configuration file append the following:
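    A sketch that probes a TCP server every second (the host and port are illustrative):

    ```yaml
    pipeline:
      inputs:
        - name: health
          host: 127.0.0.1   # illustrative target
          port: 80
          interval_sec: 1
      outputs:
        - name: stdout
          match: '*'
    ```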

    hashtag
    Testing

    Once Fluent Bit is running, you will see health check results in the output interface similar to this:

    Fluent Bit metrics

    A plugin to collect Fluent Bit metrics

    circle-info

    Supported event types: metrics

    Fluent Bit exposes metrics to let you monitor the internals of your pipeline. The collected metrics can be processed similarly to those from the Prometheus Node Exporter input plugin. They can be sent to output plugins including Prometheus Exporter, Prometheus Remote Write or OpenTelemetry.

    circle-info

    Metrics collected with Fluent Bit Metrics flow through a separate pipeline from logs and current filters don't operate on top of metrics.

    hashtag
    Configuration parameters

    Key
    Description
    Default

    hashtag
    Get started

    You can run the plugin from the command line or through the configuration file:

    hashtag
    Command line

    Run the plugin from the command line using the following command:

    which returns results like the following:

    hashtag
    Configuration file

    In the following configuration file, the input plugin fluentbit_metrics collects metrics every 2 seconds and exposes them through the output plugin on HTTP/TCP port 2021.
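    A sketch of that configuration, assuming the prometheus_exporter output plugin:

    ```yaml
    service:
      flush: 1

    pipeline:
      inputs:
        - name: fluentbit_metrics
          tag: internal_metrics
          scrape_interval: 2          # collect metrics every 2 seconds
      outputs:
        - name: prometheus_exporter
          match: internal_metrics
          host: 0.0.0.0
          port: 2021                  # exposes metrics on HTTP/TCP port 2021
    ```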

    You can test the exposed metrics by using curl:

    CPU metrics

    circle-info

    Supported event types: logs

    The CPU input plugin measures the CPU usage of a process or, by default, the whole system (considering each CPU core). It reports values as percentages for every set interval of time. This plugin is available only for Linux.

    The following tables describe the information generated by the plugin. The following keys represent the data used by the overall system, and all values associated with the keys are expressed as percentages (0 to 100):

    The CPU metrics plugin creates metrics that are log-based, such as JSON payload. For Prometheus-based metrics, see the Node Exporter Metrics input plugin.

    Key
    Description

    In addition to the keys reported in the previous table, similar content is created per CPU core. The cores are listed from 0 to N as the kernel reports them:

    Key
    Description

    hashtag
    Configuration parameters

    The plugin supports the following configuration parameters:

    Key
    Description
    Default

    hashtag
    Get started

    To get the statistics of the CPU usage of your system, you can run the plugin from the command line or through the configuration file:

    hashtag
    Command line

    You can run this input plugin from the command line using a command like the following:

    The command returns results similar to the following:

    As described previously, the CPU input plugin gathers the overall usage every second and flushes the information to the output on the fifth second. This example uses the stdout plugin to demonstrate the output records. In a real use case you might want to flush this information to a central aggregator instead.

    hashtag
    Configuration file

    In your main configuration file append the following:
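    A minimal cpu input configuration might look like:

    ```yaml
    pipeline:
      inputs:
        - name: cpu
          tag: my_cpu
      outputs:
        - name: stdout
          match: '*'
    ```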

    Record accessor syntax

    A full feature set to access content of your records.

    Fluent Bit works internally with structured records, which can be composed of an unlimited number of keys and values. Values can be anything, like a number, string, array, or map.

    Having a way to select a specific part of the record is critical for certain core functionalities and plugins. This feature is called Record Accessor.

    Consider record accessor to be a basic grammar to specify record content and other miscellaneous values.

    hashtag
    Format

    A record accessor rule starts with the character $. Use the structured content as an example. The following table describes how to access a record:

    The following table describes some accessing rules and the expected returned value:

    Format
    Accessed Value

    If the accessor key doesn't exist in the record like the last example $labels['undefined'], the operation is omitted, and no exception will occur.

    hashtag
    Usage

    The feature is enabled on a per-plugin basis, and not all plugins enable it. As an example, consider a configuration that uses the grep filter to match only records where labels have the color blue:

    The file content to process in test.log is the following:

    When running Fluent Bit with the previous configuration, the output is:

    hashtag
    Limitations of record_accessor templating

    The Fluent Bit record_accessor library has a limitation in the characters that can separate template variables. Only dots and commas (. and ,) can come after a template variable. This is because the templating library must parse the template and determine the end of a variable.

    The following templates are invalid because the template variables aren't separated by commas or dots:

    • $TaskID-$ECSContainerName

    • $TaskID/$ECSContainerName

    • $TaskID_$ECSContainerName

    However, the following are valid:

    • $TaskID.$ECSContainerName

    • $TaskID.ecs_resource.$ECSContainerName

    • $TaskID.fooo.$ECSContainerName

    And the following are valid since they only contain one template variable with nothing after it:

    • fooo$TaskID

    • fooo____$TaskID

    • fooo/bar$TaskID

    Telemetry Pipelines Workshop (o11y-workshops.gitlab.io): Fluent Bit workshop for getting started with cloud native telemetry pipelines
    Using Fluent Bit | EKS Workshop (eksworkshop.com)

    Exec WASI

    circle-info

    Supported event types: logs

    The Exec WASI input plugin lets you execute Wasm programs that are WASI targets, such as external programs, and collect event logs from them.

    hashtag

    fluent-bit -i collectd -o stdout
    fluent-bit -i collectd -p listen=192.168.3.2 -p port=9090 -o stdout
    pipeline:
      inputs:
        - name: collectd
          listen: 0.0.0.0
          port: 25826
          typesdb: '/usr/share/collectd/types.db,/etc/collectd/custom.db'
    
      outputs:
        - name: stdout
          match: '*'
    [INPUT]
      Name         collectd
      Listen       0.0.0.0
      Port         25826
      TypesDB      /usr/share/collectd/types.db,/etc/collectd/custom.db
    
    [OUTPUT]
      Name   stdout
      Match  *
    typesdb: '/usr/share/collectd/types.db,/etc/collectd/custom.db'
    fluent-bit -i docker_events -o stdout
    pipeline:
      inputs:
        - name: docker_events
    
      outputs:
        - name: stdout
          match: '*'
    [INPUT]
      Name   docker_events
    
    [OUTPUT]
      Name   stdout
      Match  *
    [1] docker.0: [1571994772.00555745, {"id"=>"6bab19c3a0f9", "name"=>"postgresql", "cpu_used"=>172102435, "mem_used"=>5693400, "mem_limit"=>4294963200}]
    pipeline:
      inputs:
        - name: disk
          tag: disk
          interval_sec: 1
          interval_nsec: 0
    
      outputs:
        - name: stdout
          match: '*'
    [INPUT]
      Name          disk
      Tag           disk
      Interval_Sec  1
      Interval_Nsec 0
    
    [OUTPUT]
      Name   stdout
      Match  *
    fluent-bit -i disk -o stdout
    ...
    [0] disk.0: [1485590297, {"read_size"=>0, "write_size"=>0}]
    [1] disk.0: [1485590298, {"read_size"=>0, "write_size"=>0}]
    [2] disk.0: [1485590299, {"read_size"=>0, "write_size"=>0}]
    [3] disk.0: [1485590300, {"read_size"=>0, "write_size"=>11997184}]
    ...
    pipeline:
      inputs:
        - name: random
          samples: -1
          interval_sec: 1
          interval_nsec: 0
    
      outputs:
        - name: stdout
          match: '*'
    [INPUT]
      Name          random
      Samples      -1
      Interval_Sec  1
      Interval_Nsec 0
    
    [OUTPUT]
      Name   stdout
      Match  *
    fluent-bit -i random -o stdout
    $ fluent-bit -i random -o stdout
    
    ...
    [0] random.0: [1475893654, {"rand_value"=>1863375102915681408}]
    [1] random.0: [1475893655, {"rand_value"=>425675645790600970}]
    [2] random.0: [1475893656, {"rand_value"=>7580417447354808203}]
    [3] random.0: [1475893657, {"rand_value"=>1501010137543905482}]
    [4] random.0: [1475893658, {"rand_value"=>16238242822364375212}]
    ...

    interval_sec

    Polling interval in seconds.

    1

    threaded

    Indicates whether to run this input in its own thread.

    false

    samples

    Set the number of samples to generate. -1 generates unlimited samples.

    -1

    threaded

    Indicates whether to run this input in its own thread.

    false


    $log

    some message

    $labels['color']

    blue

    $labels['project']['env']

    production

    $labels['unset']

    null

    {
      "log": "some message",
      "stream": "stdout",
      "labels": {
         "color": "blue",
         "unset": null,
         "project": {
             "env": "production"
          }
      }
    }
    [SERVICE]
        flush        1
        log_level    info
        parsers_file parsers.conf
    
    [INPUT]
        name      tail
        path      test.log
        parser    json
    
    [FILTER]
        name      grep
        match     *
        regex     $labels['color'] ^blue$
    
    [OUTPUT]
        name      stdout
        match     *
        format    json_lines
    {"log": "message 1", "labels": {"color": "blue"}}
    {"log": "message 2", "labels": {"color": "red"}}
    {"log": "message 3", "labels": {"color": "green"}}
    {"log": "message 4", "labels": {"color": "blue"}}
    $ bin/fluent-bit -c fluent-bit.conf
    Fluent Bit v1.x.x
    * Copyright (C) 2019-2020 The Fluent Bit Authors
    * Copyright (C) 2015-2018 Treasure Data
    * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
    * https://fluentbit.io
    
    [2020/09/11 16:11:07] [ info] [engine] started (pid=1094177)
    [2020/09/11 16:11:07] [ info] [storage] version=1.0.5, initializing...
    [2020/09/11 16:11:07] [ info] [storage] in-memory
    [2020/09/11 16:11:07] [ info] [storage] normal synchronization mode, checksum disabled, max_chunks_up=128
    [2020/09/11 16:11:07] [ info] [sp] stream processor started
    [2020/09/11 16:11:07] [ info] inotify_fs_add(): inode=55716713 watch_fd=1 name=test.log
    {"date":1599862267.483684,"log":"message 1","labels":{"color":"blue"}}
    {"date":1599862267.483692,"log":"message 4","labels":{"color":"blue"}}

    no_retries

    When set, retries are disabled and scheduler doesn't try to send data to the destination if it failed the first time.

    scheduler.cap

    Set a maximum retry time in seconds. Supported in v1.8.7 or later.

    2000

    scheduler.base

    Set a base of exponential backoff. Supported in v1.8.7 or later.

    5

    For example, if scheduler.base is 3 and scheduler.cap is 30, the waiting time range for each retry is:

    Nth retry
    Waiting time range (seconds)

    1

    (3, 6)

    2

    (3, 12)

    3

    (3, 24)

    4

    (3, 30)

    Retry_Limit

    N

    Integer value to set the maximum number of retries allowed. N must be >= 1 (default: 1).

    Retry_Limit

    no_limits or False

    When set, there is no limit to the number of retries that the scheduler can perform.

    Exponential Backoff And Jitter
    Dead Letter Queue (DLQ)
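    As an illustrative sketch (not part of Fluent Bit itself), the waiting time ranges shown above can be reproduced by doubling the upper bound on each retry and capping it at scheduler.cap:

```python
def backoff_range(n, base=3, cap=30):
    """Waiting time range (seconds) for the nth retry, assuming the upper
    bound doubles on each retry and is capped at `cap`."""
    return (base, min(base * 2 ** n, cap))

# Reproduces the documented ranges for scheduler.base=3, scheduler.cap=30:
print([backoff_range(n) for n in range(1, 5)])
# [(3, 6), (3, 12), (3, 24), (3, 30)]
```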

    fixed_timestamp

    If enabled, use a fixed timestamp. This allows the message to be pre-generated once.

    false

    flush_on_startup

    If set to true, the first dummy event is generated at startup.

    false

    interval_nsec

    Set time interval, in nanoseconds, at which every message is generated. If set, rate configuration is ignored.

    0

    interval_sec

    Set time interval, in seconds, at which every message is generated. If set, rate configuration is ignored.

    0

    metadata

    Dummy JSON metadata.

    {}

    rate

    Rate at which messages are generated, expressed in how many times per second.

    1

    samples

    Limit the number of events generated. For example, if samples=3, the plugin generates only three events and stops. 0 means no limit.

    0

    start_time_nsec

    Set a dummy base timestamp, in nanoseconds. If set to -1, the current time is used.

    -1

    start_time_sec

    Set a dummy base timestamp, in seconds. If set to -1, the current time is used.

    -1

    test_hang_on_exit

    Test-only option that simulates a hang during shutdown for hot reload watchdog testing. Don't use this in production configurations.

    false

    threaded

    Indicates whether to run this input in its own thread.

    false

    copies

    Number of messages to generate each time messages are generated.

    1

    dummy

    Dummy JSON record.

    {"message":"dummy"}

    payload_key

    Field name where the MQTT message payload will be stored in the output record.

    none

    port

    TCP port to listen for connections.

    1883

    threaded

    Indicates whether to run this input in its own thread.

    false

  • The default values for listen and port are 0.0.0.0 and 1883, so you can omit them if you want the standard MQTT listener.
  • Payloads are expected to be JSON maps; non-JSON payloads will fail to parse.

  • buffer_size

    Maximum payload size (in bytes) for a single MQTT message.

    2048

    listen

    Listener network interface.

    0.0.0.0

    Transport Security

    Mem.used

    Memory in use (Mem.total - Mem.free).

    Kilobytes

    proc_bytes

    Optional. Resident set size for the configured process (pid).

    Bytes

    proc_hr

    Optional. Human-readable value of proc_bytes (for example, 12.00M).

    Formatted

    Swap.free

    Free swap space.

    Kilobytes

    Swap.total

    Total system swap.

    Kilobytes

    Swap.used

    Swap space in use (Swap.total - Swap.free).

    Kilobytes

    pid

    Process ID to measure. When set, the plugin also reports proc_bytes and proc_hr.

    none

    threaded

    Run this input in its own thread.

    false

    Mem.free

    Free or available memory reported by the kernel.

    Kilobytes

    Mem.total

    Total system memory.

    Kilobytes

    interval_nsec

    Polling interval in nanoseconds.

    0

    interval_sec

    Polling interval in seconds.

    1


    {interface}.tx.packets

    Number of packets transmitted on the interface.

    {interface}.tx.errors

    Number of transmit errors on the interface.

    interval_sec

    Polling interval in seconds.

    1

    test_at_init

    If true, test if the network interface is valid at initialization.

    false

    threaded

    Indicates whether to run this input in its own thread.

    false

    verbose

    If true, gather metrics precisely.

    false

    {interface}.rx.bytes

    Number of bytes received on the interface.

    {interface}.rx.packets

    Number of packets received on the interface.

    {interface}.rx.errors

    Number of receive errors on the interface.

    {interface}.tx.bytes

    Number of bytes transmitted on the interface.

    interface

    Specify the network interface to monitor. For example, eth0.

    none

    interval_nsec

    Polling interval in nanoseconds.

    0

    interval_nsec

    Specifies the interval between service checks, in nanoseconds. This works in conjunction with interval_sec.

    0

    interval_sec

    Specifies the interval between service checks, in seconds.

    1

    mem

    If enabled, memory usage of the process is appended to each record.

    true

    proc_name

    The name of the target process to check.

    none

    threaded

    Specifies whether to run this input in its own thread.

    false

    alert

    If enabled, the plugin will only generate messages if the target process is down.

    false

    fd

    If enabled, the number of open file descriptors (fd) is appended to each record.

    true

    alert

    If enabled, it generates messages only when the target TCP service is down.

    false

    host

    Name of the target host or IP address.

    none

    interval_nsec

    Specify a nanoseconds interval for service checks. Works with the interval_sec configuration key.

    0

    interval_sec

    Interval in seconds between the service checks.

    1

    port

    TCP port where to perform the connection request.

    none

    threaded

    Indicates whether to run this input in its own thread.

    false

    add_host

    If enabled, hostname is appended to each record.

    false

    add_port

    If enabled, port number is appended to each record.

    false


    scrape_interval

    The rate at which Fluent Bit internal metrics are collected.

    2 seconds

    scrape_on_start

    Scrape metrics upon start. Use this to avoid waiting for scrape_interval before the first round of metrics.

    false

    Prometheus Exporter

    pid

    Specify the process ID (PID) of a running process in the system. By default, the plugin monitors the whole system, but if this option is set, it monitors only the given process ID.

    none

    threaded

    Indicates whether to run this input in its own thread.

    false

    cpu_p

    CPU usage of the overall system. This value is the sum of time spent in user and kernel space, and it takes the number of CPU cores in the system into account.

    system_p

    CPU usage in kernel mode; in short, the CPU usage by the kernel. This value takes the number of CPU cores in the system into account.

    user_p

    CPU usage in user mode; in short, the CPU usage by user space programs. This value takes the number of CPU cores in the system into account.

    cpuN.p_cpu

    Represents the total CPU usage by core N.

    cpuN.p_system

    Total CPU time spent in system or kernel mode associated with this core.

    cpuN.p_user

    Total CPU time spent in user mode or user space programs associated with this core.

    interval_nsec

    Polling interval in nanoseconds.

    0

    interval_sec

    Polling interval in seconds.

    1

    Fluentd
    Elasticsearch


    Configuration parameters

    The plugin supports the following configuration parameters:

    Key
    Description
    Default

    accessible_paths

    Specify the allowed list of paths to be able to access paths from Wasm programs.

    .

    buf_size

    Size of the buffer. Review for allowed values.

    4096

    hashtag
    Configuration examples

    Here is a configuration example.

    in_exec_wasi can handle parsers. To retrieve structured data from a Wasm program, you must create a parsers.conf:

    The time_format should match the format you're using for timestamps.

    This example assumes the Wasm program writes JSON style strings to stdout.

    Then, you can specify the parsers.conf in the main Fluent Bit configuration:

    Kubernetes

    Kubernetes Production Grade Log Processor

    Fluent Bit is a lightweight and extensible log processor with full support for Kubernetes:

    • Process Kubernetes container logs from the file system or Systemd/Journald.

    • Enrich logs with Kubernetes Metadata.

    • Centralize your logs in third party storage services like Elasticsearch, InfluxDB, HTTP, and so on.

    hashtag
    Concepts

    Before getting started, it's important to understand how Fluent Bit will be deployed. Kubernetes manages a cluster of nodes. The Fluent Bit log agent needs to run on every node to collect logs from every pod, so Fluent Bit is deployed as a DaemonSet, a workload that runs a pod on every node of the cluster.

    When Fluent Bit runs, it reads, parses, and filters the logs of every pod. In addition, Fluent Bit adds metadata to each entry using the filter plugin.

    The Kubernetes filter plugin talks to the Kubernetes API Server to retrieve relevant information such as the pod_id, labels, and annotations. Other fields, such as pod_name, container_id, and container_name, are retrieved locally from the log file names. All of this is handled automatically, and no intervention is required from a configuration aspect.

    hashtag
    Installation

    Fluent Bit should be deployed as a DaemonSet, so it will be available on every node of your Kubernetes cluster.

    The recommended way to deploy Fluent Bit for Kubernetes is with the official Helm chart.

    hashtag
    OpenShift

    If you are using Red Hat OpenShift you must set up Security Context Constraints (SCC) using the relevant option in the helm chart.

    hashtag
    Installing with Helm chart

    Helm is a package manager for Kubernetes that lets you deploy application packages into your running cluster. Fluent Bit is distributed using a Helm chart found in the Fluent Helm charts repository.

    Use the following command to add the Fluent Helm charts repository:

    To validate that the repository was added, run helm search repo fluent and confirm the charts are listed. Then install the default chart by running the following command:

    hashtag
    Default values

    The default chart values include configuration to read container logs with Docker parsing, apply Kubernetes metadata enrichment to systemd logs, and send output to an Elasticsearch cluster. You can modify the chart values to specify additional outputs, health checks, monitoring endpoints, or other configuration options.

    hashtag
    Details

    The default configuration of Fluent Bit ensures the following:

    • Consume all container logs from the running node and parse them with either the docker or cri multiline parser.

    • Persist its position in each file it tails, so if a pod is restarted, it picks up where it left off.

    • The Kubernetes filter adds Kubernetes metadata, specifically labels and annotations.

    hashtag
    Windows deployment

    Fluent Bit v1.5.0 and later supports deployment to Windows pods.

    hashtag
    Log files overview

    When deploying Fluent Bit to Kubernetes, there are three log files that you need to pay attention to.

    • C:\k\kubelet.err.log

    This is the error log file from the kubelet daemon running on the host. Retain this file for future troubleshooting, including debugging deployment failures.

    • C:\var\log\containers\<pod>_<namespace>_<container>-<docker>.log

      This is the main log file you need to watch. Configure Fluent Bit to follow this file. It's a symlink to the Docker log file in C:\ProgramData\

    Typically, your deployment YAML contains the following volume configuration.

    hashtag
    Configure Fluent Bit

    Assuming the basic volume configuration described previously, you can apply one of the following configurations to start logging:

    hashtag
    Mitigate unstable network on Windows pods

    Windows pods often lack working DNS immediately after boot. To mitigate this issue, filter_kubernetes provides a built-in mechanism to wait until the network starts up:

    • DNS_Retries: Retries N times until the network starts working (default: 6)

    • DNS_Wait_Time: Lookup interval between network status checks, in seconds (default: 30)

    By default, Fluent Bit waits for three minutes (30 seconds x 6 times). If it's not enough for you, update the configuration as follows:
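    The configuration block referenced above isn't reproduced here; a minimal sketch in the YAML format (values are illustrative) could look like:

```yaml
pipeline:
  filters:
    - name: kubernetes
      match: kube.*
      dns_retries: 10
      dns_wait_time: 30
```

    With these values, Fluent Bit waits up to five minutes (30 seconds x 10 times) for DNS to start working.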

    What's new in Fluent Bit v5.0

    Fluent Bit v5.0 adds new inputs and processors, expands authentication and TLS options, and standardizes configuration for HTTP-based plugins. It also delivers an important round of performance and scalability work, especially for pipelines that ingest logs, metrics, and traces through HTTP-based protocols. This page gives a quick user-focused overview of the main changes since Fluent Bit v4.2.

    For migration-impacting changes, see Upgrade notes.

    hashtag
    Performance and scalability

    hashtag
    Unified processing and delivery model

    Fluent Bit v5.0 continues the move toward a more unified runtime for logs, metrics, and traces. In practice, this means the same core engine improvements benefit more of the pipeline, instead of individual signal paths evolving separately.

    For end users, the result is a more consistent behavior across telemetry types and a better base for high-throughput pipelines that mix logs, metrics, and traces in the same deployment.

    hashtag
    Refactored HTTP stack

    One of the most important v5.0 changes is the refactoring of the HTTP listener stack used by several input plugins. Fluent Bit now uses a shared HTTP server implementation across the major HTTP-based receivers instead of maintaining separate code paths.

    This work improves:

    • concurrency through shared listener worker support

    • consistency of request handling across HTTP-based inputs

    • buffer enforcement and connection handling

    The biggest user-facing beneficiaries are:

    If you run large HTTP or OTLP ingestion workloads, v5.0 is not only a feature release. It is also a meaningful runtime improvement.

    hashtag
    Configuration and operations

    hashtag
    Shared HTTP listener settings

    HTTP-based inputs now use a shared listener configuration model. The preferred setting names are:

    • http_server.http2

    • http_server.buffer_chunk_size

    • http_server.buffer_max_size

    Legacy aliases such as http2, buffer_chunk_size, and buffer_max_size still work, but new configurations should use the http_server.* names.
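    As a hedged sketch (the listener address, port, and buffer sizes are illustrative), the preferred names can be set like this in the YAML format:

```yaml
pipeline:
  inputs:
    - name: http
      listen: 0.0.0.0
      port: 8888
      http_server.http2: on
      http_server.buffer_chunk_size: 512K
      http_server.buffer_max_size: 4M
```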

    Affected plugin families include:

    hashtag
    Mutual TLS for inputs

    Input plugins that support TLS can now require client certificate verification with tls.verify_client_cert. This makes it easier to run mutual TLS (mTLS) directly on Fluent Bit listeners.
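    A minimal sketch, assuming the standard tls.* listener options and placeholder certificate paths:

```yaml
pipeline:
  inputs:
    - name: http
      port: 9443
      tls: on
      tls.crt_file: /etc/fluent-bit/certs/server.crt
      tls.key_file: /etc/fluent-bit/certs/server.key
      tls.ca_file: /etc/fluent-bit/certs/ca.crt
      tls.verify_client_cert: on
```

    Clients must then present a certificate signed by the configured CA to connect.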

    See .

    hashtag
    JSON health endpoint in API v2

    The built-in HTTP server exposes /api/v2/health, which returns health status as JSON and uses the HTTP status code to indicate healthy (200) or unhealthy (500) state.

    See .

    hashtag
    Inputs

    hashtag
    New fluentbit_logs input

    The fluentbit_logs input plugin routes Fluent Bit internal logs back into the pipeline as structured records. This lets you forward agent diagnostics to any supported destination.

    hashtag
    HTTP input remote address capture

    The HTTP input plugin adds:

    • add_remote_addr

    • remote_addr_key

    These settings let you attach the client address from X-Forwarded-For to each ingested record.
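    A sketch of how these two options fit together (the key name client_addr is an arbitrary choice for this example):

```yaml
pipeline:
  inputs:
    - name: http
      port: 8888
      add_remote_addr: on
      remote_addr_key: client_addr
```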

    hashtag
    OAuth 2.0 bearer token validation on HTTP-based inputs

    HTTP-based receivers can validate incoming bearer tokens with:

    • oauth2.validate

    • oauth2.issuer

    • oauth2.jwks_url

    This is available on the relevant input plugins, including the HTTP and OpenTelemetry inputs.
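    A hedged sketch for an HTTP-based receiver (the issuer and JWKS URLs are placeholders for your identity provider):

```yaml
pipeline:
  inputs:
    - name: opentelemetry
      port: 4318
      oauth2.validate: on
      oauth2.issuer: https://idp.example.com/
      oauth2.jwks_url: https://idp.example.com/.well-known/jwks.json
```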

    hashtag
    OpenTelemetry input improvements

    The OpenTelemetry input plugin in v5.0 expands user-visible behavior with:

    • shared HTTP listener worker support

    • OAuth 2.0 bearer token validation

    • stable JSON metrics ingestion over OTLP/HTTP

    hashtag
    Kubernetes events state database controls

    The Kubernetes events input plugin documents additional SQLite controls:

    • db.journal_mode

    • db.locking

    These settings help tune event cursor persistence and database access behavior.
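    A sketch combining these controls with a database file (the path and the WAL journal mode are illustrative assumptions):

```yaml
pipeline:
  inputs:
    - name: kubernetes_events
      db: /var/lib/fluent-bit/kube_events.db
      db.journal_mode: WAL
      db.locking: true
```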

    hashtag
    Processors

    hashtag
    New cumulative-to-delta processor

    The cumulative-to-delta processor converts cumulative monotonic metrics to delta values, which is useful when scraping Prometheus-style metrics but exporting to backends that expect deltas.

    hashtag
    New topological data analysis processor

    The topological data analysis processor provides a metrics processor for topology-based analysis workflows.

    hashtag
    Sampling processor updates

    The sampling processor adds legacy_reconcile for tail sampling, which helps compare the optimized reconciler with the previous behavior when validating upgrades.

    hashtag
    Outputs

    hashtag
    HTTP output OAuth 2.0 client credentials

    The HTTP output plugin now supports built-in OAuth 2.0 client credentials with:

    • basic

    • post

    • private_key_jwt

    You can configure token acquisition directly in Fluent Bit with the oauth2.* settings.
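    As a loosely sketched example only: the token URL, client credentials, and the exact oauth2.* key names below are assumptions based on the option names in this section; consult the HTTP output reference for the authoritative names.

```yaml
pipeline:
  outputs:
    - name: http
      match: '*'
      host: collector.example.com
      port: 443
      tls: on
      oauth2.token_url: https://idp.example.com/oauth2/token   # assumed key name
      oauth2.client_id: fluent-bit                             # assumed key name
      oauth2.client_secret: ${OAUTH2_CLIENT_SECRET}            # assumed key name
      oauth2.auth_method: post                                 # one of basic, post, private_key_jwt
```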

    hashtag
    More compression options for cloud outputs

    Several outputs gained additional compression support in the v4.2 to v5.0 range:

    • : gzip, zstd, snappy

    • : snappy added alongside existing codecs

    hashtag
    Monitoring changes

    hashtag
    fluentbit_hot_reloaded_times is now a counter

    The fluentbit_hot_reloaded_times metric changed from a gauge to a counter, which makes it safe to use with PromQL functions such as rate() and increase().
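    For example, PromQL functions that require counters now work as expected (query shown as an illustration):

```promql
rate(fluentbit_hot_reloaded_times[5m])
```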

    hashtag
    New output backpressure visibility

    v5.0 adds output backpressure duration metrics so you can observe time spent waiting because of downstream pressure.

    See .

    Validate your data and structure

    Fluent Bit supports multiple sources and formats. In addition, it provides filters that you can use to perform custom modifications. As your pipeline grows, it's important to validate your data and structure.

    Fluent Bit users are encouraged to integrate data validation in their continuous integration (CI) systems.

    In a normal production environment, inputs, filters, and outputs are defined in configuration files. Fluent Bit provides the Expect filter, which you can use to validate keys and values from your records and take action when an exception is found.

    A simplified view of the data processing pipeline is as follows:

    hashtag
    Understand structure and configuration

    Consider the following pipeline, which uses a JSON file as its data source and has two filters:

    • to exclude certain records.

    • to alter records' content by adding and removing specific keys.

    Add data validation between each step to ensure your data structure is correct.

    This example uses the Expect filter.

    Expect filters set rules aiming to validate criteria like:

    • Does the record contain key A?

    • Does the record not contain key A?

    • Does the key A value equal NULL?

    Every Expect filter configuration exposes rules to validate the content of your records using .

    hashtag
    Test the configuration

    Consider a JSON file data.log with the following content:

    The following files configure a pipeline to consume the log, while applying an Expect filter to validate that the keys color and label exist.

    The following is the Fluent Bit YAML configuration file:

    The following is the Fluent Bit YAML parsers file:

    The following is the Fluent Bit classic configuration file:

    The following is the Fluent Bit classic parsers file:

    If the JSON parser fails or is missing in the input (parser json), the Expect filter triggers the exit action.
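    The configuration files themselves aren't reproduced here; as a sketch (assuming the tail input and the Expect filter's key_exists and action parameters), the validation step can look like:

```yaml
pipeline:
  inputs:
    - name: tail
      path: data.log
      parser: json

  filters:
    - name: expect
      match: '*'
      key_exists: color
      action: exit

  outputs:
    - name: stdout
      match: '*'
```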

    To extend the pipeline, add a Grep filter to match records whose label map contains a key called name with the value abc, and add an Expect filter to re-validate that condition:

    The following is the Fluent Bit YAML configuration file:

    hashtag
    Production deployment

    When deploying in production, consider removing any Expect filters from your configuration file. These filters are unnecessary unless you need 100% coverage of checks at runtime.

    GPU metrics

    circle-info

    Supported event types: metrics

    The gpu_metrics input plugin collects graphics processing unit (GPU) performance metrics from graphics cards on Linux systems. It provides real-time monitoring of GPU utilization, memory usage (VRAM), clock frequencies, power consumption, temperature, and fan speeds.

    The plugin reads metrics directly from the Linux sysfs filesystem (/sys/class/drm/) without requiring external tools or libraries. Only AMD GPUs are supported through the amdgpu kernel driver. NVIDIA and Intel GPUs aren't supported.

    hashtag
    Metrics collected

    The plugin collects the following metrics for each detected GPU:

    Key
    Description

    hashtag
    Clock metrics

    The gpu_clock_mhz metric is reported separately for three clock domains:

    Type
    Description

    hashtag
    Configuration parameters

    The plugin supports the following configuration parameters:

    Key
    Description
    Default

    hashtag
    GPU detection

    The GPU metrics plugin scans for any supported AMD GPU using the amdgpu kernel driver. Any GPU using legacy drivers is ignored.

    To check whether your AMD GPU will be detected, run:

    Example output:

    hashtag
    Multiple GPU systems

    In systems with multiple GPUs, the GPU metrics plugin will detect all AMD cards by default. You can control which GPUs you want to monitor with the cards_include and cards_exclude parameters.

    To list the GPUs running in your system, run the following command:

    Example output:

    hashtag
    Getting started

    To get GPU metrics from your system, you can run the plugin from either the command line or through the configuration file:

    hashtag
    Command line

    Run the following command from the command line:

    Example output:

    hashtag
    Configuration file

    In your main configuration file append the following:

    NGINX exporter metrics

    circle-info

    Supported event types: metrics

    The NGINX exporter metrics input plugin scrapes metrics from the NGINX stub status handler.

    circle-info

    Metrics collected with NGINX Exporter Metrics flow through a separate pipeline from logs, and current filters don't operate on top of metrics.

    hashtag
    Configuration parameters

    The plugin supports the following configuration parameters:

    Key
    Description
    Default

    hashtag
    TLS / SSL

    The NGINX exporter metrics input plugin supports TLS/SSL. For more details about the properties available and general configuration, refer to .

    hashtag
    Get started

    NGINX must be configured with a location that invokes the stub status handler. The following is an example configuration with such a location:

    hashtag
    Configuration with NGINX Plus REST API

    Another metrics API is available with NGINX Plus. You must first configure a path in NGINX Plus.

    hashtag
    Command line

    From the command line you can let Fluent Bit generate the checks with the following options:

    To gather metrics from the command line with the NGINX Plus REST API, turn on the nginx_plus property:

    hashtag
    Configuration file

    In your main configuration file append the following:

    And for NGINX Plus API:

    hashtag
    Test your configuration

    You can test against the NGINX server running on localhost by invoking it directly from the command line:

    This returns output similar to the following:

    hashtag
    Exported metrics

    For a list of available metrics, see the on GitHub.

    Podman metrics

    circle-info

    Supported event types: metrics

    The Podman metrics input plugin lets Fluent Bit gather Podman container metrics. It collects the container list and the data associated with each container from filesystem information.

    The metrics can be exposed later as, for example, Prometheus counters and gauges.

    hashtag
    Configuration parameters

    Key
    Description
    Default

    hashtag
    Get started

    This plugin doesn't execute podman commands or send HTTP requests to Podman API. It reads a Podman configuration file and metrics exposed by the /sys and /proc filesystems.

    This plugin supports and automatically detects both cgroups v1 and v2.

    hashtag
    Example curl message for one running container

    You can run the following curl command:

    Which returns information like:

    hashtag
    Configuration file

    hashtag
    Command line

    hashtag
    Exposed metrics

    Currently supported counters are:

    • container_memory_usage_bytes

    • container_memory_max_usage_bytes

    • container_memory_rss

    This plugin mimics the naming convention of Docker metrics exposed by .

    Prometheus remote write

    An input plugin to ingest payloads of Prometheus remote write

    circle-info

    Supported event types: metrics

    The Prometheus remote write input plugin lets you ingest a payload in the Prometheus remote-write format. A remote-write sender can transmit data to Fluent Bit.

    hashtag
    Configuration parameters

    Key
    Description
    Default

    hashtag
    Configuration file

    The following examples are sample configuration files for this input plugin:

    These sample configurations configure Fluent Bit to listen for data on port 8080. You can send payloads in Prometheus remote-write format to the endpoint /api/prom/push.

    hashtag
    Examples

    hashtag
    Communicate with TLS

    The Prometheus remote write input plugin supports TLS and SSL. For more details about the properties available and general configuration, refer to the documentation.

    To communicate with TLS, you must use these TLS-related parameters:

    Now, you should be able to send data over TLS to the remote-write input.

    Yocto embedded Linux

    Fluent Bit source code provides BitBake recipes to configure, build, and package the software for a Yocto-based image. Specific steps for using these recipes in your Yocto environment (Poky) are out of the scope of this documentation.

    Fluent Bit distributes two main recipes, one for testing/dev purposes and one with the latest stable release.

    Version
    Recipe
    Description

    devel

    It's strongly recommended to always use the stable release of the Fluent Bit recipe and not the one from Git master for production deployments.

    hashtag
    Fluent Bit and other architectures

    Fluent Bit 1.1.x and later fully supports x86_64, x86, arm32v7, and arm64v8.

    Networking

    Fluent Bit implements a unified networking interface that's exposed to components like plugins. This interface abstracts the complexity of general I/O and is fully configurable.

    A common use case is when a component or plugin needs to connect with a service to send and receive data. There are many challenges to handle like unresponsive services, networking latency, or any kind of connectivity error. The networking interface aims to abstract and simplify the network I/O handling, minimize risks, and optimize performance.

    hashtag
    Networking concepts

    Fluent Bit uses the following networking concepts:

    Elasticsearch

    circle-info

    Supported event types: logs

    The Elasticsearch input plugin handles both Elasticsearch and OpenSearch Bulk API requests.


    Head

    circle-info

    Supported event types: logs

    The Head input plugin reads events from the head of a file. Its behavior is similar to the head command.

    Kubernetes events

    Collect Kubernetes events

    circle-info

    Supported event types: logs

    Kubernetes exports events through the API server. This input plugin lets you retrieve those events as logs and process them through the pipeline.


    service:
      flush: 5
      daemon: off
      log_level: debug
      scheduler.base: 3
      scheduler.cap: 30
    [SERVICE]
      Flush            5
      Daemon           off
      Log_Level        debug
      scheduler.base   3
      scheduler.cap    30
    pipeline:
    
      outputs:
        - name: http
          host: 192.168.5.6
          port: 8080
          retry_limit: false
    
        - name: es
          host: 192.168.5.20
          port: 9200
          logstash_format: on
          retry_limit: 5
    [OUTPUT]
      Name        http
      Host        192.168.5.6
      Port        8080
      Retry_Limit False
    
    [OUTPUT]
      Name            es
      Host            192.168.5.20
      Port            9200
      Logstash_Format On
      Retry_Limit     5
    fluent-bit -i dummy -o stdout
    ...
    [0] dummy.0: [[1686451466.659962491, {}], {"message"=>"dummy"}]
    [0] dummy.0: [[1686451467.659679509, {}], {"message"=>"dummy"}]
    ...
    pipeline:
      inputs:
        - name: dummy
          dummy: '{"message": "custom dummy"}'
    
      outputs:
        - name: stdout
          match: '*'
    [INPUT]
      Name   dummy
      Dummy  {"message": "custom dummy"}
    
    [OUTPUT]
      Name   stdout
      Match  *
    fluent-bit -i mqtt -t data -o stdout -m '*'
    ...
    [0] data: [1463775773, {"topic"=>"some/topic", "key1"=>123, "key2"=>456}]
    ...
    mosquitto_pub  -m '{"key1": 123, "key2": 456}' -t some/topic
    pipeline:
      inputs:
        - name: mqtt
          tag: data
          listen: 0.0.0.0
          port: 1883
          payload_key: payload
    
      outputs:
        - name: stdout
          match: '*'
    [INPUT]
      Name   mqtt
      Tag    data
      Listen 0.0.0.0
      Port   1883
      Payload_Key payload
    
    [OUTPUT]
      Name   stdout
      Match  *
    fluent-bit -i mem -t memory -o stdout -m '*'
    ...
    [0] memory: [[1751381087.225589224, {}], {"Mem.total"=>3986708, "Mem.used"=>560708, "Mem.free"=>3426000, "Swap.total"=>0, "Swap.used"=>0, "Swap.free"=>0}]
    [0] memory: [[1751381088.228411537, {}], {"Mem.total"=>3986708, "Mem.used"=>560708, "Mem.free"=>3426000, "Swap.total"=>0, "Swap.used"=>0, "Swap.free"=>0}]
    [0] memory: [[1751381089.225600084, {}], {"Mem.total"=>3986708, "Mem.used"=>561480, "Mem.free"=>3425228, "Swap.total"=>0, "Swap.used"=>0, "Swap.free"=>0}]
    [0] memory: [[1751381090.228345064, {}], {"Mem.total"=>3986708, "Mem.used"=>561480, "Mem.free"=>3425228, "Swap.total"=>0, "Swap.used"=>0, "Swap.free"=>0}]
    ...
    pipeline:
      inputs:
        - name: mem
          tag: memory
          interval_sec: 5
          pid: 1234
    
      outputs:
        - name: stdout
          match: '*'
    [INPUT]
      Name          mem
      Tag           memory
      Interval_Sec  5
      Interval_Nsec 0
      Pid           1234
    
    [OUTPUT]
      Name   stdout
      Match  *
    ...
    [0] memory: [[1751381087.225589224, {}], {"Mem.total"=>3986708, "Mem.used"=>560708, "Mem.free"=>3426000, "Swap.total"=>0, "Swap.used"=>0, "Swap.free"=>0, "proc_bytes"=>12349440, "proc_hr"=>"11.78M"}]
    ...
    fluent-bit -i netif -p interface=eth0 -o stdout
    ...
    [0] netif.0: [1499524459.001698260, {"eth0.rx.bytes"=>89769869, "eth0.rx.packets"=>73357, "eth0.rx.errors"=>0, "eth0.tx.bytes"=>4256474, "eth0.tx.packets"=>24293, "eth0.tx.errors"=>0}]
    [1] netif.0: [1499524460.002541885, {"eth0.rx.bytes"=>98, "eth0.rx.packets"=>1, "eth0.rx.errors"=>0, "eth0.tx.bytes"=>98, "eth0.tx.packets"=>1, "eth0.tx.errors"=>0}]
    [2] netif.0: [1499524461.001142161, {"eth0.rx.bytes"=>98, "eth0.rx.packets"=>1, "eth0.rx.errors"=>0, "eth0.tx.bytes"=>98, "eth0.tx.packets"=>1, "eth0.tx.errors"=>0}]
    [3] netif.0: [1499524462.002612971, {"eth0.rx.bytes"=>98, "eth0.rx.packets"=>1, "eth0.rx.errors"=>0, "eth0.tx.bytes"=>98, "eth0.tx.packets"=>1, "eth0.tx.errors"=>0}]
    ...
    pipeline:
      inputs:
        - name: netif
          tag: netif
          interface: eth0
          interval_sec: 1
          interval_nsec: 0
          verbose: false
          test_at_init: false
    
      outputs:
        - name: stdout
          match: '*'
    [INPUT]
      Name          netif
      Tag           netif
      Interface     eth0
      Interval_Sec  1
      Interval_Nsec 0
      Verbose       false
      Test_At_Init  false
    
    [OUTPUT]
      Name   stdout
      Match  *
    fluent-bit -i proc -p proc_name=crond -o stdout
    pipeline:
      inputs:
        - name: proc
          proc_name: crond
          interval_sec: 1
          interval_nsec: 0
          fd: true
          mem: true
    
      outputs:
        - name: stdout
          match: '*'
    [INPUT]
      Name          proc
      Proc_Name     crond
      Interval_Sec  1
      Interval_Nsec 0
      Fd            true
      Mem           true
    
    [OUTPUT]
      Name   stdout
      Match  *
    $ fluent-bit -i proc -p proc_name=fluent-bit -o stdout
    
    ...
    [0] proc.0: [1485780297, {"alive"=>true, "proc_name"=>"fluent-bit", "pid"=>10964, "mem.VmPeak"=>14740000, "mem.VmSize"=>14740000, "mem.VmLck"=>0, "mem.VmHWM"=>1120000, "mem.VmRSS"=>1120000, "mem.VmData"=>2276000, "mem.VmStk"=>88000, "mem.VmExe"=>1768000, "mem.VmLib"=>2328000, "mem.VmPTE"=>68000, "mem.VmSwap"=>0, "fd"=>18}]
    [1] proc.0: [1485780298, {"alive"=>true, "proc_name"=>"fluent-bit", "pid"=>10964, "mem.VmPeak"=>14740000, "mem.VmSize"=>14740000, "mem.VmLck"=>0, "mem.VmHWM"=>1148000, "mem.VmRSS"=>1148000, "mem.VmData"=>2276000, "mem.VmStk"=>88000, "mem.VmExe"=>1768000, "mem.VmLib"=>2328000, "mem.VmPTE"=>68000, "mem.VmSwap"=>0, "fd"=>18}]
    [2] proc.0: [1485780299, {"alive"=>true, "proc_name"=>"fluent-bit", "pid"=>10964, "mem.VmPeak"=>14740000, "mem.VmSize"=>14740000, "mem.VmLck"=>0, "mem.VmHWM"=>1152000, "mem.VmRSS"=>1148000, "mem.VmData"=>2276000, "mem.VmStk"=>88000, "mem.VmExe"=>1768000, "mem.VmLib"=>2328000, "mem.VmPTE"=>68000, "mem.VmSwap"=>0, "fd"=>18}]
    [3] proc.0: [1485780300, {"alive"=>true, "proc_name"=>"fluent-bit", "pid"=>10964, "mem.VmPeak"=>14740000, "mem.VmSize"=>14740000, "mem.VmLck"=>0, "mem.VmHWM"=>1152000, "mem.VmRSS"=>1148000, "mem.VmData"=>2276000, "mem.VmStk"=>88000, "mem.VmExe"=>1768000, "mem.VmLib"=>2328000, "mem.VmPTE"=>68000, "mem.VmSwap"=>0, "fd"=>18}]
    ...
    fluent-bit -i health -p host=127.0.0.1 -p port=80 -o stdout
    pipeline:
      inputs:
        - name: health
          host: 127.0.0.1
          port: 80
          interval_sec: 1
          interval_nsec: 0
    
      outputs:
        - name: stdout
          match: '*'
    [INPUT]
      Name          health
      Host          127.0.0.1
      Port          80
      Interval_Sec  1
      Interval_Nsec 0
    
    [OUTPUT]
      Name   stdout
      Match  *
    $ fluent-bit -i health -p host=127.0.0.1 -p port=80 -o stdout
    
    ...
    [0] health.0: [1624145988.305640385, {"alive"=>true}]
    [1] health.0: [1624145989.305575360, {"alive"=>true}]
    [2] health.0: [1624145990.306498573, {"alive"=>true}]
    [3] health.0: [1624145991.305595498, {"alive"=>true}]
    ...
    fluent-bit -i fluentbit_metrics -o stdout
    ...
    [2025/12/02 08:33:54.689265000] [ info] [input:fluentbit_metrics:fluentbit_metrics.0] initializing
    [2025/12/02 08:33:54.689272000] [ info] [input:fluentbit_metrics:fluentbit_metrics.0] storage_strategy='memory' (memory only)
    [2025/12/02 08:33:54.689917000] [ info] [output:stdout:stdout.0] worker #0 started
    [2025/12/02 08:33:54.690115000] [ info] [sp] stream processor started
    [2025/12/02 08:33:54.690204000] [ info] [engine] Shutdown Grace Period=5, Shutdown Input Grace Period=2
    2025-12-02T07:33:56.692855536Z fluentbit_uptime{hostname="XXXXX.local"} = 2
    2025-12-02T07:33:54.687838528Z fluentbit_logger_logs_total{message_type="error"} = 0
    2025-12-02T07:33:54.687838528Z fluentbit_logger_logs_total{message_type="warn"} = 0
    2025-12-02T07:33:54.690212675Z fluentbit_logger_logs_total{message_type="info"} = 10
    2025-12-02T07:33:54.687838528Z fluentbit_logger_logs_total{message_type="debug"} = 0
    2025-12-02T07:33:54.687838528Z fluentbit_logger_logs_total{message_type="trace"} = 0
    2025-12-02T07:33:54.689222850Z fluentbit_input_bytes_total{name="fluentbit_metrics.0"} = 0
    2025-12-02T07:33:54.689222850Z fluentbit_input_records_total{name="fluentbit_metrics.0"} = 0
    2025-12-02T07:33:54.689222850Z fluentbit_input_ring_buffer_writes_total{name="fluentbit_metrics.0"} = 0
    2025-12-02T07:33:54.689222850Z fluentbit_input_ring_buffer_retries_total{name="fluentbit_metrics.0"} = 0
    2025-12-02T07:33:54.689222850Z fluentbit_input_ring_buffer_retry_failures_total{name="fluentbit_metrics.0"} = 0
    2025-12-02T07:33:56.692846827Z fluentbit_input_metrics_scrapes_total{name="fluentbit_metrics.0"} = 1
    2025-12-02T07:33:54.689563930Z fluentbit_output_proc_records_total{name="stdout.0"} = 0
    ...
    service:
      flush: 1
      log_level: info
    
    pipeline:
      inputs:
        - name: fluentbit_metrics
          tag: internal_metrics
          scrape_interval: 2
    
      outputs:
        - name: prometheus_exporter
          match: internal_metrics
          host: 0.0.0.0
          port: 2021
    # Fluent Bit Metrics + Prometheus Exporter
    # -------------------------------------------
    # The following example collects Fluent Bit metrics and exposes
    # them through a Prometheus HTTP endpoint.
    #
    # After starting the service try it with:
    #
    # $ curl http://127.0.0.1:2021/metrics
    #
    [SERVICE]
      flush           1
      log_level       info
    
    [INPUT]
      name            fluentbit_metrics
      tag             internal_metrics
      scrape_interval 2
    
    [OUTPUT]
      name            prometheus_exporter
      match           internal_metrics
      host            0.0.0.0
      port            2021
    curl http://127.0.0.1:2021/metrics
    build/bin/fluent-bit -i cpu -t my_cpu -o stdout -m '*'
    ...
    [0] [1452185189, {"cpu_p"=>7.00, "user_p"=>5.00, "system_p"=>2.00, "cpu0.p_cpu"=>10.00, "cpu0.p_user"=>8.00, "cpu0.p_system"=>2.00, "cpu1.p_cpu"=>6.00, "cpu1.p_user"=>4.00, "cpu1.p_system"=>2.00}]
    [1] [1452185190, {"cpu_p"=>6.50, "user_p"=>5.00, "system_p"=>1.50, "cpu0.p_cpu"=>6.00, "cpu0.p_user"=>5.00, "cpu0.p_system"=>1.00, "cpu1.p_cpu"=>7.00, "cpu1.p_user"=>5.00, "cpu1.p_system"=>2.00}]
    [2] [1452185191, {"cpu_p"=>7.50, "user_p"=>5.00, "system_p"=>2.50, "cpu0.p_cpu"=>7.00, "cpu0.p_user"=>3.00, "cpu0.p_system"=>4.00, "cpu1.p_cpu"=>6.00, "cpu1.p_user"=>6.00, "cpu1.p_system"=>0.00}]
    [3] [1452185192, {"cpu_p"=>4.50, "user_p"=>3.50, "system_p"=>1.00, "cpu0.p_cpu"=>6.00, "cpu0.p_user"=>5.00, "cpu0.p_system"=>1.00, "cpu1.p_cpu"=>5.00, "cpu1.p_user"=>3.00, "cpu1.p_system"=>2.00}]
    ...
    
    pipeline:
      inputs:
        - name: cpu
          tag: my_cpu
    
      outputs:
        - name: stdout
          match: '*'
    [INPUT]
      Name cpu
      Tag  my_cpu
    
    [OUTPUT]
      Name  stdout
      Match *
    parsers:
      - name: wasi
        format: json
        time_key: time
        time_format: '%Y-%m-%dT%H:%M:%S.%L %z'
    [PARSER]
      Name        wasi
      Format      json
      Time_Key    time
      Time_Format %Y-%m-%dT%H:%M:%S.%L %z
    service:
      flush: 1
      daemon: off
      parsers_file: parsers.yaml
      log_level: info
      http_server: off
      http_listen: 0.0.0.0
      http_port: 2020
    
    pipeline:
      inputs:
        - name: exec_wasi
          tag: exec.wasi.local
          wasi_path: /path/to/wasi/program.wasm
          # Note: run from the 'wasi_path' location.
          accessible_paths: /path/to/accessible
          parser: wasi
    
      outputs:
        - name: stdout
          match: '*'
    [SERVICE]
      Flush        1
      Daemon       Off
      Parsers_File parsers.conf
      Log_Level    info
      HTTP_Server  Off
      HTTP_Listen  0.0.0.0
      HTTP_Port    2020
    
    [INPUT]
      Name exec_wasi
      Tag  exec.wasi.local
      Wasi_Path /path/to/wasi/program.wasm
      Accessible_Paths .,/path/to/accessible
      Parser wasi
    
    [OUTPUT]
      Name  stdout
      Match *
    

| Key | Description | Default |
| --- | --- | --- |
| interval_nsec | Polling interval (nanoseconds). | 0 |
| interval_sec | Polling interval (seconds). | 1 |
| oneshot | Execute the command only once at startup. | false |
| parser | Specify the name of a parser to interpret the entry as a structured message. | none |
| threaded | Indicates whether to run this input in its own thread. | false |
| wasi_path | The location of a Wasm program file. | none |
| wasm_heap_size | Size of the heap for Wasm execution. Review unit sizes for allowed values. | 8192 |
| wasm_stack_size | Size of the stack for Wasm execution. Review unit sizes for allowed values. | 8192 |

| Recipe | Version | Description |
| --- | --- | --- |
| fluent-bit_git.bb | Git master | Build Fluent Bit from Git master. Use for development and testing purposes only. |
| fluent-bit_1.8.11.bb | v1.8.11 | Build the latest stable version of Fluent Bit. |
    and annotations. The filter only contacts the API Server when it can't find the cached information; otherwise it uses the cache.
  • The default backend in the configuration is Elasticsearch set by the Elasticsearch Output Plugin. It uses the Logstash format to ingest the logs. If you need a different Index and Type, refer to the plugin option and update as needed.

  • There is an option called Retry_Limit, which is set to False. If Fluent Bit can't flush the records to Elasticsearch, it will retry indefinitely until it succeeds.

  • , with some additional metadata on the file's name.
  • C:\ProgramData\Docker\containers\<docker>\<docker>.log

    This is the log file produced by Docker. Normally you don't directly read from this file, but you need to make sure that this file is visible from Fluent Bit.

    helm repo add fluent https://fluent.github.io/helm-charts
    helm upgrade --install fluent-bit fluent/fluent-bit
    spec:
      containers:
      - name: fluent-bit
        image: my-repo/fluent-bit:1.8.4
        volumeMounts:
        - mountPath: C:\k
          name: k
        - mountPath: C:\var\log
          name: varlog
        - mountPath: C:\ProgramData
          name: progdata
      volumes:
      - name: k
        hostPath:
          path: C:\k
      - name: varlog
        hostPath:
          path: C:\var\log
      - name: progdata
        hostPath:
          path: C:\ProgramData
    parsers:
        - name: docker
          format: json
          time_key: time
          time_format: '%Y-%m-%dT%H:%M:%S.%L'
          time_keep: true
    
    pipeline:
        inputs:
            - name: tail
              tag: kube.*
              path: 'C:\\var\\log\\containers\\*.log'
              parser: docker
              db: 'C:\\fluent-bit\\tail_docker.db'
              mem_buf_limit: 7MB
              refresh_interval: 10
    
            - name: tail
              tag: kube.error
              path: 'C:\\k\\kubelet.err.log'
              db: 'C:\\fluent-bit\\tail_kubelet.db'
    
        filters:
            - name: kubernetes
              match: kube.*
              kube_url: 'https://kubernetes.default.svc.cluster.local:443'
    
        outputs:
            - name: stdout
              match: '*'
    fluent-bit.conf: |
        [SERVICE]
          Parsers_File      C:\\fluent-bit\\parsers.conf
    
        [INPUT]
          Name              tail
          Tag               kube.*
          Path              C:\\var\\log\\containers\\*.log
          Parser            docker
          DB                C:\\fluent-bit\\tail_docker.db
          Mem_Buf_Limit     7MB
          Refresh_Interval  10
    
        [INPUT]
          Name              tail
          Tag               kubelet.err
          Path              C:\\k\\kubelet.err.log
          DB                C:\\fluent-bit\\tail_kubelet.db
    
        [FILTER]
          Name              kubernetes
          Match             kube.*
          Kube_URL          https://kubernetes.default.svc.cluster.local:443
    
        [OUTPUT]
          Name  stdout
          Match *
    
    parsers.conf: |
        [PARSER]
            Name         docker
            Format       json
            Time_Key     time
            Time_Format  %Y-%m-%dT%H:%M:%S.%L
            Time_Keep    On
        filters:
            - name: kubernetes
              ...
              dns_retries: 10
              dns_wait_time: 30
    [FILTER]
        Name kubernetes
        ...
        DNS_Retries 10
        DNS_Wait_Time 30
  • Is the key A value not NULL?

  • Does the key A value equal B?

    The plugin exposes the following metrics:

    | Metric | Description |
    | --- | --- |
    | gpu_utilization_percent | GPU core utilization as a percentage (0 to 100). Indicates how busy the GPU is when processing workloads. |
    | gpu_memory_used_bytes | Amount of video RAM (VRAM) currently in use, measured in bytes. |
    | gpu_memory_total_bytes | Total video RAM (VRAM) capacity available on the GPU, measured in bytes. |
    | gpu_clock_mhz | Current GPU clock frequency in MHz. This metric has multiple instances with different type labels. |
    | gpu_power_watts | Current power consumption in watts. Can be disabled with enable_power set to false. |
    | gpu_temperature_celsius | GPU die temperature in degrees Celsius. Can be disabled with enable_temperature set to false. |
    | gpu_fan_speed_rpm | Fan rotation speed in revolutions per minute (RPM). |
    | gpu_fan_pwm_percent | Fan PWM duty cycle as a percentage (0-100). Indicates fan intensity. |

    The type labels for gpu_clock_mhz are:

    | Type | Description |
    | --- | --- |
    | graphics | GPU core/shader clock frequency. |
    | memory | VRAM clock frequency. |
    | soc | System-on-chip clock frequency. |

    The plugin supports the following configuration parameters:

    | Key | Description | Default |
    | --- | --- | --- |
    | cards_exclude | Pattern specifying which GPU cards to exclude from monitoring. Uses the same syntax as cards_include. | none |
    | cards_include | Pattern specifying which GPU cards to monitor. Supports wildcards (*), ranges (0-3), and comma-separated lists (0,2,4). | * |
    | enable_power | Enable collection of power consumption metrics (gpu_power_watts). | true |
    | enable_temperature | Enable collection of temperature metrics (gpu_temperature_celsius). | true |
    | path_sysfs | Path to the sysfs root directory. Typically used for testing or non-standard systems. | /sys |
    | scrape_interval | Interval in seconds between metric collection cycles. | 5 |

    | Key | Description | Default |
    | --- | --- | --- |
    | host | Name of the target host or IP address. | localhost |
    | nginx_plus | Turn on NGINX Plus mode. | true |
    | port | Port of the target NGINX service to connect to. | 80 |
    | scrape_interval | The interval to scrape metrics from the NGINX service. | 5s |
    | status_url | The URL of the stub status handler. | /status |
    | threaded | Indicates whether to run this input in its own thread. | false |

    For details on the exposed metrics, see the NGINX Prometheus Exporter metrics documentation.

    | Key | Description | Default |
    | --- | --- | --- |
    | path.config | Custom path to the Podman containers configuration file. | /var/lib/containers/storage/overlay-containers/containers.json |
    | path.procfs | Custom path to the proc subsystem directory. | /proc |
    | path.sysfs | Custom path to the sysfs subsystem directory. | /sys/fs/cgroup |
    | scrape_interval | Interval between each scrape of Podman data (in seconds). | 30 |
    | scrape_on_start | Sets whether this plugin scrapes Podman data on startup. | false |
    | threaded | Indicates whether to run this input in its own thread. | false |

    The plugin exposes the following metrics, compatible with cadvisor:

  • container_spec_memory_limit_bytes

  • container_cpu_user_seconds_total

  • container_cpu_usage_seconds_total

  • container_network_receive_bytes_total

  • container_network_receive_errors_total

  • container_network_transmit_bytes_total

  • container_network_transmit_errors_total

    | Key | Description | Default |
    | --- | --- | --- |
    | buffer_chunk_size | Sets the chunk size for incoming data. These chunks are then stored and managed in the space specified by buffer_max_size. Compatibility alias for http_server.buffer_chunk_size. | 512K |
    | buffer_max_size | Specifies the maximum buffer size to receive a request. Compatibility alias for http_server.buffer_max_size. | 4M |
    | http2 | Enable HTTP/2 support. Compatibility alias for http_server.http2. | true |
    | http_server.max_connections | Maximum number of concurrent active HTTP connections. 0 means unlimited. | 0 |
    | http_server.workers | Number of HTTP listener worker threads. | 1 |
    | listen | The address to listen on. | 0.0.0.0 |
    | port | The port to listen on. | 8080 |
    | successful_response_code | Specifies the success response code. Supported values are 200, 201, and 204. | 201 |
    | tag_from_uri | If true, a tag will be created from the uri parameter (for example, api_prom_push from /api/prom/push), and any tag specified in the configuration will be ignored. If false, you must provide a tag in the configuration for this plugin. | true |
    | threaded | Specifies whether to run this input in its own thread. | false |
    | uri | Specifies an optional HTTP URI for the target web server listening for Prometheus remote write payloads (for example, /api/prom/push). | none |

    A sample configuration that listens for Prometheus remote write payloads on /api/prom/push:

    pipeline:
      inputs:
        - name: prometheus_remote_write
          listen: 127.0.0.1
          port: 8080
          uri: /api/prom/push

      outputs:
        - name: stdout
          match: '*'
    [INPUT]
      name prometheus_remote_write
      listen 127.0.0.1
      port 8080
      uri /api/prom/push

    [OUTPUT]
      name stdout
      match *

    TCP connect timeout

    Typically, creating a new TCP connection to a remote server is straightforward and takes a few milliseconds. However, there are cases where DNS resolving, a slow network, or incomplete TLS handshakes might create long delays, or incomplete connection statuses.

    • net.connect_timeout lets you configure the maximum time to wait for a connection to be established. This value already considers the TLS handshake process.

    • net.connect_timeout_log_error indicates if an error should be logged in case of connect timeout. If disabled, the timeout is logged as a debug level message.
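    Both properties are set in an output plugin section. The following sketch uses a hypothetical http output and endpoint to illustrate where they go:

    ```yaml
    pipeline:
      outputs:
        - name: http
          match: '*'
          host: 192.168.2.3
          port: 80
          # Wait up to 15 seconds for the connection, including the TLS handshake
          net.connect_timeout: 15s
          # Log connect timeouts at debug level instead of error
          net.connect_timeout_log_error: false
    ```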

    TCP source address

    On environments with multiple network interfaces, you can choose which interface to use for Fluent Bit data that will flow through the network.

    Use net.source_address to specify which network address to use for a TCP connection and data flow.

    Connection keepalive

    A connection keepalive refers to the ability of a client to keep the TCP connection open in a persistent way. This feature offers many benefits in terms of performance because communication channels are always established beforehand.

    Any component that uses TCP channels, like HTTP or TLS, can take advantage of this feature. For configuration purposes, use the net.keepalive property.

    Connection keepalive idle timeout

    If a connection keepalive is enabled, there might be scenarios where the connection can be unused for long periods of time. Unused connections can be removed. To control how long a keepalive connection can be idle, Fluent Bit uses a configuration property called net.keepalive_idle_timeout.
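    As a sketch, both keepalive properties combined in a hypothetical tcp output (values are illustrative):

    ```yaml
    pipeline:
      outputs:
        - name: tcp
          match: '*'
          host: 127.0.0.1
          port: 9090
          # Reuse the TCP connection across flushes
          net.keepalive: on
          # Close a keepalive connection after 30 idle seconds
          net.keepalive_idle_timeout: 30
    ```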

    DNS mode

    The global dns.mode value issues DNS requests using the specified transport protocol, either TCP or UDP. Plugins that set their own net.dns.mode value override the global setting.

    Maximum connections per worker

    For optimal performance, Fluent Bit tries to deliver data quickly and create TCP connections on-demand and in keepalive mode. In highly scalable environments, you might limit how many connections are created in parallel.

    Use the net.max_worker_connections property in the output plugin section to set the maximum number of allowed connections. This property acts at the worker level. For example, if you have five workers and net.max_worker_connections is set to 10, a maximum of 50 connections is allowed. If the limit is reached, the output plugin issues a retry.
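    For example, the five-worker case from the paragraph above might be sketched like this (the http output and endpoint are illustrative):

    ```yaml
    pipeline:
      outputs:
        - name: http
          match: '*'
          host: 192.168.2.3
          port: 80
          # Five workers, each allowed 10 connections: 50 connections total
          workers: 5
          net.max_worker_connections: 10
    ```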

    Listener backlog

    When Fluent Bit listens for incoming connections (for example, in input plugins like HTTP, TCP, OpenTelemetry, Forward, and Syslog), the operating system maintains a queue of pending connections. The net.backlog option controls the maximum number of pending connections that can be queued before new connection attempts are refused. Increasing this value can help Fluent Bit handle bursts of incoming connections more gracefully. The default value is 128.


    On Linux, the effective backlog value might be capped by the kernel parameter net.core.somaxconn. If you need to allow a greater number of pending connections, you can increase this system setting.

    Configuration options

    The following table describes the network configuration properties available and their usage in optimizing performance or adjusting configuration needs for plugins that rely on networking I/O:

    | Property | Description | Default |
    | --- | --- | --- |
    | net.connect_timeout | Set the maximum time allowed to establish a connection; this time includes the TLS handshake. | 10s |
    | net.connect_timeout_log_error | On connection timeout, specify whether to log an error. When disabled, the timeout is logged as a debug message. | true |

    Example

    This example sends five random messages through a TCP output connection. The remote side uses the nc (netcat) utility to see the data.

    Use the following configuration snippet of your choice in a corresponding file named fluent-bit.yaml or fluent-bit.conf:
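    The snippet itself didn't survive conversion; a minimal sketch consistent with the surrounding description (five messages sent to a TCP endpoint on port 9090 with keepalive enabled) could look like:

    ```yaml
    service:
      flush: 1
      log_level: info

    pipeline:
      inputs:
        # Emit five random messages, then stop producing
        - name: random
          samples: 5

      outputs:
        - name: tcp
          match: '*'
          host: 127.0.0.1
          port: 9090
          format: json_lines
          net.keepalive: on
          # Close the idle keepalive connection after 10 seconds
          net.keepalive_idle_timeout: 10
    ```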

    In another terminal, start nc and make it listen for messages on TCP port 9090:
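    For example (the -l flag tells netcat to listen; exact flags vary between netcat implementations):

    ```shell
    nc -l 9090
    ```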

    Start Fluent Bit with the configuration file you defined previously to see data flowing to netcat:
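    For example, assuming the configuration was saved as fluent-bit.yaml:

    ```shell
    fluent-bit -c fluent-bit.yaml
    ```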

    If the net.keepalive option isn't enabled, Fluent Bit closes the TCP connection and netcat quits.

    After the five records arrive, the connection idles. After 10 seconds, the connection closes due to net.keepalive_idle_timeout.

    Configuration parameters

    The plugin supports the following configuration parameters:

    | Key | Description | Default value |
    | --- | --- | --- |
    | buffer_chunk_size | Set the buffer chunk size. Compatibility alias for http_server.buffer_chunk_size. | 512K |
    | buffer_max_size | Set the maximum size of buffer. Compatibility alias for http_server.buffer_max_size. | 4M |

    TLS / SSL

    The Elasticsearch input plugin supports TLS/SSL for receiving data from Beats agents or other clients over encrypted connections. For more details about the properties available and general configuration, refer to Transport Security.

    When configuring TLS for Elasticsearch ingestion, common options include:

    • tls.verify: Enable or disable certificate validation for incoming connections.

    • tls.ca_file: Specify a CA certificate to validate client certificates when using mutual TLS (mTLS).

    • tls.crt_file and tls.key_file: Provide the server certificate and private key.
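    A sketch of an elasticsearch input with TLS enabled (the certificate paths are hypothetical):

    ```yaml
    pipeline:
      inputs:
        - name: elasticsearch
          listen: 0.0.0.0
          port: 9200
          tls: on
          # Disable client certificate validation (no mTLS)
          tls.verify: off
          # Server certificate and private key (hypothetical paths)
          tls.crt_file: /etc/fluent-bit/tls/server.crt
          tls.key_file: /etc/fluent-bit/tls/server.key

      outputs:
        - name: stdout
          match: '*'
    ```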

    Sniffing

    Elasticsearch clients use a process called "sniffing" to automatically discover cluster nodes. When a client connects, it can query the cluster to retrieve a list of available nodes and their addresses. This allows the client to distribute requests across the cluster and adapt when nodes join or leave.

    The hostname parameter specifies the hostname or fully qualified domain name that Fluent Bit returns during sniffing requests. Clients use this information to build their connection list. Set this value to match how clients should reach this Fluent Bit instance (for example, an external IP or load balancer address rather than localhost in production environments).

    Get started

    To start performing the checks, you can run the plugin from the command line or through the configuration file:

    Command line

    From the command line you can configure Fluent Bit to handle Bulk API requests with the following options:
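    For example, listening on the default Elasticsearch port and printing whatever arrives:

    ```shell
    fluent-bit -i elasticsearch -p port=9200 -o stdout
    ```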

    Configuration file

    In your configuration file append the following:
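    A minimal sketch of such a configuration:

    ```yaml
    pipeline:
      inputs:
        - name: elasticsearch
          listen: 0.0.0.0
          port: 9200

      outputs:
        - name: stdout
          match: '*'
    ```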

    As described previously, the plugin will handle ingested Bulk API requests. For large bulk ingestion, you might have to increase buffer size using the buffer_max_size and buffer_chunk_size parameters:
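    For example (the 5M/20M values are illustrative, not recommendations):

    ```yaml
    pipeline:
      inputs:
        - name: elasticsearch
          listen: 0.0.0.0
          port: 9200
          # Larger buffers for heavy bulk ingestion
          buffer_chunk_size: 5M
          buffer_max_size: 20M
    ```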

    Ingesting from beats series

    Ingesting from Beats-series agents is also supported. For example, Filebeat, Metricbeat, and Winlogbeat can ingest their collected data through this plugin.

    The Fluent Bit node information is returned as Elasticsearch 8.0.0.

    Users must specify the following configurations on their beats configurations:
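    The referenced snippet didn't survive conversion; at minimum, the Beat must point its Elasticsearch output at the Fluent Bit listener. A sketch (host and port are assumptions; exact keys depend on the Beat and version):

    ```yaml
    output.elasticsearch:
      hosts: ["http://fluent-bit-host:9200"]
    ```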

    For large log ingestion on these beat plugins, users might have to configure rate limiting on those beats plugins when Fluent Bit indicates that the application is exceeding the size limit for HTTP requests:

    Configuration parameters

    The plugin supports the following configuration parameters:

    | Key | Description | Default |
    | --- | --- | --- |
    | add_path | If enabled, the path is appended to each record. | false |
    | buf_size | Buffer size to read the file. | 256 |

    Getting started

    To read the head of a file, you can run the plugin from the command line or through the configuration file.

    Command line

    The following example will read events from the /proc/uptime file, tag the records with the uptime name and flush them back to the stdout plugin:
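    A sketch of such a command:

    ```shell
    fluent-bit -i head -t uptime -p file=/proc/uptime -o stdout -m '*'
    ```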

    The output will look similar to:

    Configuration file

    In your main configuration file append the following:
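    A sketch of such a configuration:

    ```yaml
    pipeline:
      inputs:
        - name: head
          tag: uptime
          file: /proc/uptime
          buf_size: 256
          interval_sec: 1
          interval_nsec: 0

      outputs:
        - name: stdout
          match: '*'
    ```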

    The interval is calculated like this:

    Total interval (sec) = interval_sec + (interval_nsec / 1000000000).

    For example: 1.5s = 1s + 500000000ns.

    Split line mode

    Use this mode to get a specific line. The following example gets CPU frequency from /proc/cpuinfo.

    /proc/cpuinfo is a special file to get CPU information.

    The CPU frequency is cpu MHz : 2791.009. The following configuration file gets the needed line:
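    A hedged sketch (the lines value assumes the cpu MHz entry appears within the first eight lines of /proc/cpuinfo on the machine in question):

    ```yaml
    pipeline:
      inputs:
        - name: head
          tag: head.cpu
          file: /proc/cpuinfo
          # Read the first 8 lines and emit each line as its own key
          lines: 8
          split_line: true

      outputs:
        - name: stdout
          match: '*'
    ```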

    If you run the following command:

    The output is something similar to:

    Configuration parameters
    | Key | Description | Default |
    | --- | --- | --- |
    | db | Set a database file to keep track of recorded Kubernetes events. | none |
    | db.journal_mode | Set the journal mode for databases. Values: DELETE, TRUNCATE, PERSIST, MEMORY, WAL, OFF. | WAL |

    In Fluent Bit 3.1 or later, this plugin uses a Kubernetes watch stream instead of polling. In versions earlier than 3.1, the interval parameters are used for reconnecting the Kubernetes watch stream.

    Threading

    This input always runs in its own thread.

    Get started

    Kubernetes service account

    The Kubernetes service account used by Fluent Bit must have get, list, and watch permissions to namespaces and pods for the namespaces watched in the kube_namespace configuration parameter. If you're using the Helm chart to configure Fluent Bit, this role is included.

    Basic configuration file

    In the following configuration file, the Kubernetes events plugin collects events and exposes them through the standard output plugin on the console:
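    The configuration itself didn't survive conversion; a minimal sketch consistent with the description (the tag name is an assumption):

    ```yaml
    pipeline:
      inputs:
        - name: kubernetes_events
          tag: k8s_events

      outputs:
        - name: stdout
          match: '*'
    ```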

    Event timestamp

    Event timestamps are created from the first existing field, based on the following order of precedence:

    1. lastTimestamp

    2. firstTimestamp

    3. metadata.creationTimestamp

    Exec


    Supported event types: logs

    The Exec input plugin lets you execute external programs and collects event logs.


    This plugin invokes commands using a shell. Its inputs are subject to shell metacharacter substitution. Careless use of untrusted input in command arguments could lead to malicious command execution.

    Container support

    This plugin needs a functional /bin/sh and won't function in distroless production images.

    The debug images use the same binaries, so even though they include a shell, this plugin isn't supported there because it's compiled out.

    Configuration parameters

    The plugin supports the following configuration parameters:

    Key
    Description
    Default

    Get started

    You can run the plugin from the command line or through the configuration file:

    Command line

    The following example will read events from the output of ls.
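    A sketch of such a command:

    ```shell
    fluent-bit -i exec -p 'command=ls /var/log' -o stdout
    ```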

    This command should return something like the following:

    Configuration file

    In your main configuration file append the following:
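    A sketch of such a configuration:

    ```yaml
    pipeline:
      inputs:
        - name: exec
          tag: exec_ls
          command: ls /var/log
          interval_sec: 1
          interval_nsec: 0

      outputs:
        - name: stdout
          match: '*'
    ```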

    Use as a command wrapper

    To use Fluent Bit with the exec plugin to wrap another command, use the exit_after_oneshot and propagate_exit_code options:
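    A hedged sketch; the wrapped command here is a stand-in that prints one line and then fails with exit code 1:

    ```yaml
    pipeline:
      inputs:
        - name: exec
          tag: exec_wrapper
          command: 'echo "wrapped output"; exit 1'
          oneshot: true
          # Stop Fluent Bit once the command has run...
          exit_after_oneshot: true
          # ...and exit with the wrapped command's exit code
          propagate_exit_code: true

      outputs:
        - name: stdout
          match: '*'
    ```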

    Fluent Bit will output:

    then exits with exit code 1.

    Translation of command exit codes to Fluent Bit exit codes follows the usual shell conventions. Like with a shell, there is no way to differentiate between the command exiting on a signal and the shell exiting on a signal. Similarly, there is no way to differentiate between normal exits with codes greater than 125 and abnormal or signal exits reported by Fluent Bit or the shell. Wrapped commands should use exit codes between 0 and 125 inclusive to allow reliable identification of normal exit. If the command is a pipeline, the exit code will be the exit code of the last command in the pipeline unless overridden by shell options.

    Parsing command output

    By default, the exec plugin emits one message per command output line, with a single field exec containing the full message. Use the parser option to specify the name of a parser configuration to use to process the command input.

    Security concerns


    Take great care with shell quoting and escaping when wrapping commands.

    A script like the following can ruin your day if someone passes it the argument $(rm -rf /my/important/files; echo "deleted your stuff!").
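    A sketch of such an unsafe wrapper (myscript is hypothetical). Because $1 is expanded inside the command string, any shell metacharacters in the argument are executed:

    ```shell
    #!/bin/sh
    # UNSAFE: $1 is interpolated into the shell command line unquoted
    fluent-bit -o stdout \
      -i exec -p exit_after_oneshot=true -p propagate_exit_code=true \
      -p "command=myscript $1"
    ```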

    The previous script would be safer if written with:
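    One safer variant (still a sketch) shell-quotes the argument before interpolating it, for example with printf '%q' where the shell supports it:

    ```shell
    #!/bin/sh
    # Quote the argument so metacharacters are passed literally
    arg=$(printf '%q' "$1")
    fluent-bit -o stdout \
      -i exec -p exit_after_oneshot=true -p propagate_exit_code=true \
      -p "command=myscript $arg"
    ```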

    It's generally best to avoid dynamically generating the command or handling untrusted arguments.

    Forward


    Supported event types: logs metrics traces

    Forward is the protocol used by Fluent Bit and Fluentd to route messages between peers. This plugin implements the input service to listen for Forward messages.

    hashtag
    Configuration parameters

    The plugin supports the following configuration parameters:

    Key
    Description
    Default

    hashtag
    TLS / SSL

    The Forward input plugin supports TLS/SSL. For more details about the available properties and general configuration, refer to the Transport Security documentation.

    hashtag
    Get started

    To receive Forward messages, you can run the plugin from the command line or through the configuration file as shown in the following examples.

    hashtag
    Command line

    From the command line you can let Fluent Bit listen for Forward messages with the following options:

    By default, the service listens on all interfaces (0.0.0.0) through TCP port 24224. You can change this by passing parameters to the command:

    In the example, the Forward messages arrive only through network interface 192.168.3.2 address and TCP Port 9090.
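As a sketch, those invocations look like the following:

```shell
fluent-bit -i forward -o stdout

# Restrict listening to a specific interface and port:
fluent-bit -i forward -p listen=192.168.3.2 -p port=9090 -o stdout
```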

    hashtag
    Configuration file

    In your main configuration file append the following:
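A minimal sketch that listens on the default address and port and prints received records:

```yaml
pipeline:
  inputs:
    - name: forward
      listen: 0.0.0.0
      port: 24224

  outputs:
    - name: stdout
      match: '*'
```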

    hashtag
    Fluent Bit and secure forward setup

    In Fluent Bit v3 or later, in_forward can handle the secure forward protocol.

    circle-exclamation

    When using security.users for user-password authentication, you must also configure either shared_key or set empty_shared_key to true. The Forward input plugin will reject a configuration that has security.users set without one of these options.

    For shared key authentication, specify shared_key in both forward output and forward input. For user-password authentication, specify security.users with at least one user-password pair along with a shared key. To use user authentication without requiring clients to know a shared key, set empty_shared_key to true.

    The self_hostname value can't be the same between Fluent Bit servers and clients.
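A sketch of the server side with shared key authentication; `secret` and `flb.server` are placeholder values:

```yaml
pipeline:
  inputs:
    - name: forward
      listen: 0.0.0.0
      port: 24224
      shared_key: secret
      self_hostname: flb.server

  outputs:
    - name: stdout
      match: '*'
```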

    hashtag
    User authentication with empty_shared_key

    To use username and password authentication without requiring clients to know a shared key, set empty_shared_key to true:
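A sketch combining user-password authentication with `empty_shared_key`; the `fluentbit`/`changeme` pair is a placeholder:

```yaml
pipeline:
  inputs:
    - name: forward
      listen: 0.0.0.0
      port: 24224
      security.users: fluentbit changeme
      empty_shared_key: true
      self_hostname: flb.server

  outputs:
    - name: stdout
      match: '*'
```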

    hashtag
    Testing

    After Fluent Bit is running, you can send some messages using the fluent-cat tool provided by Fluentd:
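For example, assuming the default port and an arbitrary tag:

```shell
echo '{"key": 1}' | fluent-cat my_tag
```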

    When you run the plugin with the following command:

    In the Fluent Bit console, you should see the following output:

    eBPF

    circle-info

    Supported event types: logs

    circle-info

    This plugin is experimental and might be unstable. Use it in development or testing environments only. Its features and behavior are subject to change.

    The in_ebpf input plugin uses eBPF (extended Berkeley Packet Filter) to capture low-level system events. This plugin lets Fluent Bit monitor kernel-level activities such as process executions, file accesses, memory allocations, network connections, and signal handling. It provides valuable insights into system behavior for debugging, monitoring, and security analysis.

    The in_ebpf plugin leverages eBPF to trace kernel events in real time. By specifying trace points, users can collect targeted system-level metrics and events, gaining visibility into operating system interactions and performance characteristics.

    hashtag
    Configuration parameters

    The plugin supports the following configuration parameters:

    Key
    Description
    Default

    hashtag
    System dependencies

    To enable in_ebpf, ensure the following dependencies are installed on your system:

    • Kernel version: 4.18 or greater, with eBPF support enabled.

    • Required packages:

      • bpftool: Used to manage and debug eBPF programs.

    hashtag
    Installing dependencies on Ubuntu
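As a sketch for Debian-based systems (package names can vary by release):

```shell
sudo apt-get update
sudo apt-get install -y bpftool libbpf-dev cmake
```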

    hashtag
    Building Fluent Bit with in_ebpf

    To enable the in_ebpf plugin, follow these steps to build Fluent Bit from source:

    1. Clone the Fluent Bit repository:

    2. Configure the build with in_ebpf:

      Create a build directory and run cmake with the -DFLB_IN_EBPF=On flag to enable the in_ebpf plugin:
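The steps above can be sketched as:

```shell
git clone https://github.com/fluent/fluent-bit.git
cd fluent-bit
mkdir build && cd build
cmake -DFLB_IN_EBPF=On ..
make
```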

    hashtag
    Configuration example

    Here's a basic example of how to configure the plugin:
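A sketch in classic format, assuming the plugin is registered under the name `ebpf`:

```
[INPUT]
    Name  ebpf
    Trace trace_signal
    Trace trace_malloc
    Trace trace_bind

[OUTPUT]
    Name  stdout
    Match *
```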

    The configuration enables tracing for:

    • Signal handling events (trace_signal)

    • Memory allocation events (trace_malloc)

    • Network bind operations (trace_bind)

    You can enable multiple traces by adding multiple Trace directives in your configuration. The full list of existing traces is available in the Fluent Bit source repository:

    hashtag
    Output fields

    Each trace produces records with common fields and trace-specific fields.

    hashtag
    Common fields

    All traces include the following fields:

    Field
    Description

    hashtag
    Signal trace fields

    The trace_signal trace includes these additional fields:

    Field
    Description

    hashtag
    Memory trace fields

    The trace_malloc trace includes these additional fields:

    Field
    Description

    hashtag
    Bind trace fields

    The trace_bind trace includes these additional fields:

    Field
    Description

    hashtag
    VFS trace fields

    The trace_vfs trace includes these additional fields:

    Field
    Description

    Prometheus text file

    circle-info

    Supported event types: metrics

    The Prometheus text file input plugin allows Fluent Bit to read metrics from Prometheus text format files (.prom files) on the local filesystem. Use this plugin to collect custom metrics that are written to files by external applications or scripts, similar to the Prometheus Node Exporter text file collector.

    hashtag
    Configuration parameters

    Key
    Description
    Default

    hashtag
    Get started

    hashtag
    Basic configuration

    The following configuration monitors the /var/lib/prometheus/textfile directory for .prom files every 15 seconds:
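A sketch of that configuration, assuming the plugin is registered under the name `prometheus_textfile`:

```yaml
pipeline:
  inputs:
    - name: prometheus_textfile
      path: /var/lib/prometheus/textfile/*.prom
      scrape_interval: 15s

  outputs:
    - name: stdout
      match: '*'
```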

    hashtag
    Prometheus text format

    The plugin expects files to be in the standard Prometheus text exposition format. Here's an example of a valid .prom file:
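For instance, a gauge and a counter in the standard text exposition format:

```
# HELP node_custom_temperature_celsius Current temperature reported by a sensor
# TYPE node_custom_temperature_celsius gauge
node_custom_temperature_celsius 23.5

# HELP batch_jobs_completed_total Number of completed batch jobs
# TYPE batch_jobs_completed_total counter
batch_jobs_completed_total{job="nightly"} 17
```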

    hashtag
    Use cases

    hashtag
    Custom application metrics

    Applications can write custom metrics to .prom files, and this plugin will collect them:
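A common pattern is to write to a temporary file and then rename it, so the collector never observes a half-written file (rename is atomic within a filesystem). A generic shell sketch, using a temporary directory in place of the scanned path:

```shell
#!/bin/sh
# Illustrative path; point this at the directory the plugin scans.
dir=$(mktemp -d)

# Write the metrics to a temporary file first...
cat > "$dir/app.prom.tmp" <<'EOF'
# HELP app_requests_total Total HTTP requests served
# TYPE app_requests_total counter
app_requests_total 42
EOF

# ...then atomically move it into place.
mv "$dir/app.prom.tmp" "$dir/app.prom"
```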

    hashtag
    Batch job metrics

    Cron jobs or batch processes can write completion metrics:

    hashtag
    System integration

    External monitoring tools can write metrics that Fluent Bit will collect and forward.

    hashtag
    Integration with other plugins

    hashtag
    OpenTelemetry destination

    Upgrade notes

    The following article covers the relevant compatibility changes for users upgrading from previous Fluent Bit versions.

    For more details about the changes in each release, refer to the official release notes.

    Release notes will be prepared in advance of a Git tag for a release. An official release should provide both a tag and a release note together to allow users to verify and understand the release contents.

    The tag drives the binary release process. Release binaries (containers and packages) appear after a tag and its associated release note. This lets users anticipate the new release binary and allow, deny, or update it as appropriate in their infrastructure.

    Serial interface

    circle-info

    Supported event types: logs

    The Serial input plugin lets you retrieve messages and data from a serial interface.
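As a sketch, reading from a serial device from the command line (the device path and bit rate are illustrative):

```shell
fluent-bit -i serial -t data -p File=/dev/tnt0 -p BitRate=9600 -o stdout
```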

    {"color": "blue", "label": {"name": null}}
    {"color": "red", "label": {"name": "abc"}, "meta": "data"}
    {"color": "green", "label": {"name": "abc"}, "meta": null}
    service:
      flush: 1
      log_level: info
      parsers_file: parsers.yaml
    
    pipeline:
      inputs:
        - name: tail
          path: data.log
          parser: json
          exit_on_eof: on
    
      # First 'expect' filter to validate that our data was structured properly
      filters:
        - name: expect
          match: '*'
          key_exists:
            - color
            - $label['name']
          action: exit
    
      outputs:
        - name: stdout
          match: '*'
    parsers:
      - name: json
        format: json
    [SERVICE]
        flush        1
        log_level    info
        parsers_file parsers.conf
    
    [INPUT]
        name        tail
        path        ./data.log
        parser      json
        exit_on_eof on
    
    # First 'expect' filter to validate that our data was structured properly
    [FILTER]
        name        expect
        match       *
        key_exists  color
        key_exists  $label['name']
        action      exit
    
    [OUTPUT]
        name        stdout
        match       *
    [PARSER]
        Name json
        Format json
    service:
      flush: 1
      log_level: info
      parsers_file: parsers.yaml
    
    pipeline:
      inputs:
        - name: tail
          path: data.log
          parser: json
          exit_on_eof: on
    
      # First 'expect' filter to validate that our data was structured properly
      filters:
        - name: expect
          match: '*'
          key_exists:
            - color
            - $label['name']
          action: exit
    
        # Match records that only contains map 'label' with key 'name' = 'abc'
        - name: grep
          match: '*'
          regex: "$label['name'] ^abc$"
    
        # Check that every record contains 'label' with a non-null value
        - name: expect
          match: '*'
          key_val_eq: $label['name'] abc
          action: exit
    
        # Append a new key to the record using an environment variable
        - name: record_modifier
          match: '*'
          record: hostname ${HOSTNAME}
    
        # Check that every record contains 'hostname' key
        - name: expect
          match: '*'
          key_exists: hostname
          action: exit
    
      outputs:
        - name: stdout
          match: '*'
    [SERVICE]
        flush        1
        log_level    info
        parsers_file parsers.conf
    
    [INPUT]
        name         tail
        path         ./data.log
        parser       json
        exit_on_eof  on
    
    # First 'expect' filter to validate that our data was structured properly
    [FILTER]
        name       expect
        match      *
        key_exists color
        key_exists label
        action     exit
    
    # Match records that only contains map 'label' with key 'name' = 'abc'
    [FILTER]
        name       grep
        match      *
        regex      $label['name'] ^abc$
    
    # Check that every record contains 'label' with a non-null value
    [FILTER]
        name       expect
        match      *
        key_val_eq $label['name'] abc
        action     exit
    
    # Append a new key to the record using an environment variable
    [FILTER]
        name       record_modifier
        match      *
        record     hostname ${HOSTNAME}
    
    # Check that every record contains 'hostname' key
    [FILTER]
        name       expect
        match      *
        key_exists hostname
        action     exit
    
    [OUTPUT]
        name       stdout
        match      *
    lspci | grep -i vga | grep -i amd
    03:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Navi 31 [Radeon RX 7900 XT/7900 XTX/7900 GRE/7900M] (rev ce)
    73:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Granite Ridge [Radeon Graphics] (rev c5)
    ls /sys/class/drm/card*/device/vendor
    /sys/class/drm/card0/device/vendor
    /sys/class/drm/card1/device/vendor
    fluent-bit -i gpu_metrics -o stdout
    2025-10-25T20:36:55.236905093Z gpu_utilization_percent{card="1",vendor="amd"} = 2
    2025-10-25T20:36:55.237853918Z gpu_utilization_percent{card="0",vendor="amd"} = 0
    2025-10-25T20:36:55.236905093Z gpu_memory_used_bytes{card="1",vendor="amd"} = 1580118016
    2025-10-25T20:36:55.237853918Z gpu_memory_used_bytes{card="0",vendor="amd"} = 26083328
    2025-10-25T20:36:55.236905093Z gpu_memory_total_bytes{card="1",vendor="amd"} = 17163091968
    2025-10-25T20:36:55.237853918Z gpu_memory_total_bytes{card="0",vendor="amd"} = 2147483648
    2025-10-25T20:36:55.236905093Z gpu_clock_mhz{card="1",vendor="amd",type="graphics"} = 45
    2025-10-25T20:36:55.236905093Z gpu_clock_mhz{card="1",vendor="amd",type="memory"} = 96
    2025-10-25T20:36:55.236905093Z gpu_clock_mhz{card="1",vendor="amd",type="soc"} = 500
    2025-10-25T20:36:55.237853918Z gpu_clock_mhz{card="0",vendor="amd",type="graphics"} = 600
    2025-10-25T20:36:55.237853918Z gpu_clock_mhz{card="0",vendor="amd",type="memory"} = 2800
    2025-10-25T20:36:55.237853918Z gpu_clock_mhz{card="0",vendor="amd",type="soc"} = 1200
    2025-10-25T20:36:55.236905093Z gpu_power_watts{card="1",vendor="amd"} = 28
    2025-10-25T20:36:55.236905093Z gpu_temperature_celsius{card="1",vendor="amd"} = 28
    2025-10-25T20:36:55.237853918Z gpu_temperature_celsius{card="0",vendor="amd"} = 39
    2025-10-25T20:36:55.236905093Z gpu_fan_speed_rpm{card="1",vendor="amd"} = 0
    2025-10-25T20:36:55.236905093Z gpu_fan_pwm_percent{card="1",vendor="amd"} = 0
    pipeline:
      inputs:
        - name: gpu_metrics
          cards_exclude: "0"
          cards_include: "1"
          enable_power: true
          enable_temperature: true
          path_sysfs: /sys
          scrape_interval: 2
    
      outputs:
        - name: stdout
          match: '*'
    [INPUT]
      Name                gpu_metrics
      Cards_Exclude       0
      Cards_Include       1
      Enable_Power        true
      Enable_Temperature  true
      Path_Sysfs          /sys
      Scrape_Interval     2
    
    [OUTPUT]
      Name   stdout
      Match  *
    server {
      listen       80;
      listen  [::]:80;
      server_name  localhost;
      location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
      }
      # Configure the stub status handler.
      location /status {
        stub_status;
      }
    }
    server {
      listen       80;
      listen  [::]:80;
      server_name  localhost;
    
      # Enable /api/ location with appropriate access control in order
      # to make use of NGINX Plus API.
      location /api/ {
        api write=on;
        # Configure to allow requests from the server running Fluent Bit.
        allow 192.168.1.*;
        deny all;
      }
    }
    fluent-bit -i nginx_metrics -p host=127.0.0.1 -p port=80 -p status_url=/status -p nginx_plus=off -o stdout
    fluent-bit -i nginx_metrics -p host=127.0.0.1 -p port=80 -p nginx_plus=on -p status_url=/api -o stdout
    pipeline:
      inputs:
        - name: nginx_metrics
          host: 127.0.0.1
          port: 80
          status_url: /status
          nginx_plus: off
          scrape_interval: 5s
    
      outputs:
        - name: stdout
          match: '*'
    [INPUT]
      Name            nginx_metrics
      Host            127.0.0.1
      Port            80
      Status_URL      /status
      Nginx_Plus      off
      Scrape_Interval 5s
    
    [OUTPUT]
      Name  stdout
      Match *
    pipeline:
      inputs:
        - name: nginx_metrics
          host: 127.0.0.1
          port: 80
          status_url: /api
          nginx_plus: on
          scrape_interval: 5s
    
      outputs:
        - name: stdout
          match: '*'
    [INPUT]
      Name            nginx_metrics
      Host            127.0.0.1
      Port            80
      Status_URL      /api
      Nginx_Plus      on
      Scrape_Interval 5s
    
    [OUTPUT]
      Name  stdout
      Match *
    fluent-bit -i nginx_metrics -p host=127.0.0.1 -p nginx_plus=off -o stdout -p match=* -f 1
    ...
    2021-10-14T19:37:37.228691854Z nginx_connections_accepted = 788253884
    2021-10-14T19:37:37.228691854Z nginx_connections_handled = 788253884
    2021-10-14T19:37:37.228691854Z nginx_http_requests_total = 42045501
    2021-10-14T19:37:37.228691854Z nginx_connections_active = 2009
    2021-10-14T19:37:37.228691854Z nginx_connections_reading = 0
    2021-10-14T19:37:37.228691854Z nginx_connections_writing = 1
    2021-10-14T19:37:37.228691854Z nginx_connections_waiting = 2008
    2021-10-14T19:37:35.229919621Z nginx_up = 1
    ...
    curl 0.0.0.0:2021/metrics
    # HELP fluentbit_input_bytes_total Number of input bytes.
    # TYPE fluentbit_input_bytes_total counter
    fluentbit_input_bytes_total{name="podman_metrics.0"} 0
    # HELP fluentbit_input_records_total Number of input records.
    # TYPE fluentbit_input_records_total counter
    fluentbit_input_records_total{name="podman_metrics.0"} 0
    # HELP container_memory_usage_bytes Container memory usage in bytes
    # TYPE container_memory_usage_bytes counter
    container_memory_usage_bytes{id="858319c39f3f52cd44aa91a520aafb84ded3bc4b4a1e04130ccf87043149bbbf",name="blissful_wescoff",image="docker.io/library/ubuntu:latest"} 884736
    # HELP container_cpu_user_seconds_total Container cpu usage in seconds in user mode
    # TYPE container_cpu_user_seconds_total counter
    container_cpu_user_seconds_total{id="858319c39f3f52cd44aa91a520aafb84ded3bc4b4a1e04130ccf87043149bbbf",name="blissful_wescoff",image="docker.io/library/ubuntu:latest"} 0
    # HELP container_cpu_usage_seconds_total Container cpu usage in seconds
    # TYPE container_cpu_usage_seconds_total counter
    container_cpu_usage_seconds_total{id="858319c39f3f52cd44aa91a520aafb84ded3bc4b4a1e04130ccf87043149bbbf",name="blissful_wescoff",image="docker.io/library/ubuntu:latest"} 0
    # HELP container_network_receive_bytes_total Network received bytes
    # TYPE container_network_receive_bytes_total counter
    container_network_receive_bytes_total{id="858319c39f3f52cd44aa91a520aafb84ded3bc4b4a1e04130ccf87043149bbbf",name="blissful_wescoff",image="docker.io/library/ubuntu:latest",interface="eth0"} 8515
    # HELP container_network_receive_errors_total Network received errors
    # TYPE container_network_receive_errors_total counter
    container_network_receive_errors_total{id="858319c39f3f52cd44aa91a520aafb84ded3bc4b4a1e04130ccf87043149bbbf",name="blissful_wescoff",image="docker.io/library/ubuntu:latest",interface="eth0"} 0
    # HELP container_network_transmit_bytes_total Network transmitted bytes
    # TYPE container_network_transmit_bytes_total counter
    container_network_transmit_bytes_total{id="858319c39f3f52cd44aa91a520aafb84ded3bc4b4a1e04130ccf87043149bbbf",name="blissful_wescoff",image="docker.io/library/ubuntu:latest",interface="eth0"} 962
    # HELP container_network_transmit_errors_total Network transmitted errors
    # TYPE container_network_transmit_errors_total counter
    container_network_transmit_errors_total{id="858319c39f3f52cd44aa91a520aafb84ded3bc4b4a1e04130ccf87043149bbbf",name="blissful_wescoff",image="docker.io/library/ubuntu:latest",interface="eth0"} 0
    # HELP fluentbit_input_storage_overlimit Is the input memory usage overlimit ?.
    # TYPE fluentbit_input_storage_overlimit gauge
    fluentbit_input_storage_overlimit{name="podman_metrics.0"} 0
    # HELP fluentbit_input_storage_memory_bytes Memory bytes used by the chunks.
    # TYPE fluentbit_input_storage_memory_bytes gauge
    fluentbit_input_storage_memory_bytes{name="podman_metrics.0"} 0
    # HELP fluentbit_input_storage_chunks Total number of chunks.
    # TYPE fluentbit_input_storage_chunks gauge
    fluentbit_input_storage_chunks{name="podman_metrics.0"} 0
    # HELP fluentbit_input_storage_chunks_up Total number of chunks up in memory.
    # TYPE fluentbit_input_storage_chunks_up gauge
    fluentbit_input_storage_chunks_up{name="podman_metrics.0"} 0
    # HELP fluentbit_input_storage_chunks_down Total number of chunks down.
    # TYPE fluentbit_input_storage_chunks_down gauge
    fluentbit_input_storage_chunks_down{name="podman_metrics.0"} 0
    # HELP fluentbit_input_storage_chunks_busy Total number of chunks in a busy state.
    # TYPE fluentbit_input_storage_chunks_busy gauge
    fluentbit_input_storage_chunks_busy{name="podman_metrics.0"} 0
    # HELP fluentbit_input_storage_chunks_busy_bytes Total number of bytes used by chunks in a busy state.
    # TYPE fluentbit_input_storage_chunks_busy_bytes gauge
    fluentbit_input_storage_chunks_busy_bytes{name="podman_metrics.0"} 0
    pipeline:
      inputs:
        - name: podman_metrics
          scrape_interval: 10
          scrape_on_start: true
          
      outputs:
        - name: prometheus_exporter
    [INPUT]
      Name           podman_metrics
      Scrape_Interval 10
      Scrape_On_Start true
    
    [OUTPUT]
      Name prometheus_exporter
    fluent-bit -i podman_metrics -o prometheus_exporter
    pipeline:
      inputs:
        - name: prometheus_remote_write
          listen: 127.0.0.1
          port: 8080
          uri: /api/prom/push
          tls: on
          tls.crt_file: /path/to/certificate.crt
          tls.key_file: /path/to/certificate.key
    [INPUT]
      Name prometheus_remote_write
      Listen 127.0.0.1
      Port 8080
      Uri /api/prom/push
      Tls On
      tls.crt_file /path/to/certificate.crt
      tls.key_file /path/to/certificate.key
    service:
      flush: 1
      log_level: info
    
    pipeline:
      inputs:
        - name:  random
          samples: 5
    
      outputs:
        - name: tcp
          match: '*'
          host: 127.0.0.1
          port: 9090
          format: json_lines
          # Networking Setup
          net.dns.mode: TCP
          net.connect_timeout: 5s
          net.source_address: 127.0.0.1
          net.keepalive: true
          net.keepalive_idle_timeout: 10s
    [SERVICE]
      flush     1
      log_level info
    
    [INPUT]
      name      random
      samples   5
    
    [OUTPUT]
      name      tcp
      match     *
      host      127.0.0.1
      port      9090
      format    json_lines
      # Networking Setup
      net.dns.mode                TCP
      net.connect_timeout         5s
      net.source_address          127.0.0.1
      net.keepalive               true
      net.keepalive_idle_timeout  10s
    nc -l 9090
    $ nc -l 9090
    
    {"date":1587769732.572266,"rand_value":9704012962543047466}
    {"date":1587769733.572354,"rand_value":7609018546050096989}
    {"date":1587769734.572388,"rand_value":17035865539257638950}
    {"date":1587769735.572419,"rand_value":17086151440182975160}
    {"date":1587769736.572277,"rand_value":527581343064950185}
    pipeline:
      inputs:
        - name: elasticsearch
          listen: 0.0.0.0
          port: 9200
    
      outputs:
        - name: stdout
          match: '*'
    [INPUT]
      name   elasticsearch
      listen 0.0.0.0
      port   9200
    
    [OUTPUT]
      name  stdout
      match *
    pipeline:
      inputs:
        - name: elasticsearch
          listen: 0.0.0.0
          port: 9200
          buffer_max_size: 20M
          buffer_chunk_size: 5M
    
      outputs:
        - name: stdout
          match: '*'
    [INPUT]
      name              elasticsearch
      listen            0.0.0.0
      port              9200
      buffer_max_size   20M
      buffer_chunk_size 5M
    
    [OUTPUT]
      name  stdout
      match *
    fluent-bit -i elasticsearch -p port=9200 -o stdout
    output.elasticsearch:
      allow_older_versions: true
      ilm: false
    processors:
      - rate_limit:
          limit: "200/s"
    pipeline:
      inputs:
        - name: head
          tag: uptime
          file: /proc/uptime
          buf_size: 256
          interval_sec: 1
          interval_nsec: 0
    
      outputs:
        - name: stdout
          match: '*'
    [INPUT]
      Name          head
      Tag           uptime
      File          /proc/uptime
      Buf_Size      256
      Interval_Sec  1
      Interval_Nsec 0
    
    [OUTPUT]
      Name   stdout
      Match  *
    pipeline:
      inputs:
        - name: head
          tag: head.cpu
          file: /proc/cpuinfo
          lines: 8
          split_line: true
    
      filters:
        - name: record_modifier
          match: '*'
          whitelist_key: line7
    
      outputs:
        - name: stdout
          match: '*'
    [INPUT]
      Name           head
      Tag            head.cpu
      File           /proc/cpuinfo
      Lines          8
      Split_Line     true
      # {"line0":"processor    : 0", "line1":"vendor_id    : GenuineIntel" ...}
    
    [FILTER]
      Name           record_modifier
      Match          *
      Whitelist_key  line7
    
    [OUTPUT]
      Name   stdout
      Match  *
    fluent-bit -i head -t uptime -p File=/proc/uptime -o stdout -m '*'
    Fluent Bit v1.x.x
    * Copyright (C) 2019-2020 The Fluent Bit Authors
    * Copyright (C) 2015-2018 Treasure Data
    * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
    * https://fluentbit.io
    
    [2016/05/17 21:53:54] [ info] starting engine
    [0] uptime: [1463543634, {"head"=>"133517.70 194870.97"}]
    [1] uptime: [1463543635, {"head"=>"133518.70 194872.85"}]
    [2] uptime: [1463543636, {"head"=>"133519.70 194876.63"}]
    [3] uptime: [1463543637, {"head"=>"133520.70 194879.72"}]
    processor    : 0
    vendor_id    : GenuineIntel
    cpu family   : 6
    model        : 42
    model name   : Intel(R) Core(TM) i7-2640M CPU @ 2.80GHz
    stepping     : 7
    microcode    : 41
    cpu MHz      : 2791.009
    cache size   : 4096 KB
    physical id  : 0
    siblings     : 1
    fluent-bit -c head.conf
    Fluent Bit v1.x.x
    * Copyright (C) 2019-2020 The Fluent Bit Authors
    * Copyright (C) 2015-2018 Treasure Data
    * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
    * https://fluentbit.io
    
    [2017/06/26 22:38:24] [ info] [engine] started
    [0] head.cpu: [1498484305.000279805, {"line7"=>"cpu MHz        : 2791.009"}]
    [1] head.cpu: [1498484306.011680137, {"line7"=>"cpu MHz        : 2791.009"}]
    [2] head.cpu: [1498484307.010042482, {"line7"=>"cpu MHz        : 2791.009"}]
    [3] head.cpu: [1498484308.008447978, {"line7"=>"cpu MHz        : 2791.009"}]
    service:
      flush: 1
      log_level: info
        
    pipeline:
      inputs:
        - name: kubernetes_events
          tag: k8s_events
          kube_url: https://kubernetes.default.svc
          
      outputs:
        - name: stdout
          match: '*'
    [SERVICE]
      flush           1
      log_level       info
    
    [INPUT]
      name            kubernetes_events
      tag             k8s_events
      kube_url        https://kubernetes.default.svc
    
    [OUTPUT]
      name            stdout
      match           *

    net.io_timeout

    Set maximum time a connection can stay idle while assigned.

    0s

    net.keepalive

    Enable or disable connection keepalive support.

    true

    net.keepalive_idle_timeout

    Set maximum time expressed in seconds for an idle keepalive connection.

    30s

    net.dns.mode

    Select the primary DNS connection type (TCP or UDP).

    none

    net.dns.prefer_ipv4

    Prioritize IPv4 DNS results when trying to establish a connection.

    false

    net.dns.prefer_ipv6

    Prioritize IPv6 DNS results when trying to establish a connection.

    false

    net.dns.resolver

    Select the primary DNS resolver type (LEGACY or ASYNC).

    none

    net.keepalive_max_recycle

    Set maximum number of times a keepalive connection can be used before it's retired.

    2000

    net.max_worker_connections

    Set maximum number of TCP connections that can be established per worker.

    0

    net.proxy_env_ignore

    Ignore the environment variables HTTP_PROXY, HTTPS_PROXY and NO_PROXY when set.

    false

    net.tcp_keepalive

    Enable or disable TCP keepalive support.

    off

    net.tcp_keepalive_time

    Interval between the last data packet sent and the first TCP keepalive probe.

    -1

    net.tcp_keepalive_interval

    Interval between TCP keepalive probes when no response is received on a keepidle probe.

    -1

    net.tcp_keepalive_probes

    Number of unacknowledged probes to consider a connection dead.

    -1

    net.source_address

    Specify network address to bind for data traffic.

    none

    hostname

    Specify hostname or fully qualified domain name. This parameter can be used for "sniffing" (auto-discovery of) cluster node information.

    localhost

    http2

    Enable HTTP/2 support. Compatibility alias for http_server.http2.

    true

    http_server.max_connections

    Maximum number of concurrent active HTTP connections. 0 means unlimited.

    0

    http_server.workers

    Number of HTTP listener worker threads.

    1

    listen

    The address to listen on.

    0.0.0.0

    meta_key

    Specify a key name for meta information.

    @meta

    port

    The port for Fluent Bit to listen on.

    9200

    tag_key

    Specify a key name for extracting as a tag.

    NULL

    threaded

    Indicates whether to run this input in its own thread.

    false

    version

    Specify the Elasticsearch version that Fluent Bit reports to clients during sniffing and API requests.

    8.0.0

    file

    Absolute path to the target file. For example: /proc/uptime.

    none

    interval_nsec

    Polling interval (nanoseconds).

    0

    interval_sec

    Polling interval (seconds).

    1

    key

    Rename a key.

    head

    lines

    Number of lines to read. If set to N, in_head reads the first N lines, similar to head(1) -n.

    0

    split_line

    If enabled, in_head generates a key-value pair per line.

    false

    threaded

    Indicates whether to run this input in its own thread.

    false

    db.locking

    Specify that the database will be accessed only by Fluent Bit. Enabling this feature helps increase performance when accessing the database but restricts external tools from querying the content.

    false

    db.sync

    Set a database sync method. Values: extra, full, normal, off.

    normal

    interval_nsec

    Set the reconnect interval (sub seconds: nanoseconds).

    500000000

    interval_sec

    Set the reconnect interval (seconds).

    0

    kube_ca_file

    Kubernetes TLS CA file.

    /var/run/secrets/kubernetes.io/serviceaccount/ca.crt

    kube_ca_path

    Kubernetes TLS CA path.

    none

    kube_namespace

    Kubernetes namespace to query events from. Gets events from all namespaces by default.

    none

    kube_request_limit

    Kubernetes limit parameter for events query. No limit applied when set to 0.

    0

    kube_retention_time

    Kubernetes retention time for events.

    1h

    kube_token_file

    Kubernetes authorization token file.

    /var/run/secrets/kubernetes.io/serviceaccount/token

    kube_token_ttl

    Kubernetes token time to live, until it's read again from the token file.

    10m

    kube_url

    API Server endpoint.

    https://kubernetes.default.svc

    tls.debug

    Set TLS debug level: 0 (no debug), 1 (error), 2 (state change), 3 (info), and 4 (verbose).

    0

    tls.verify

    Enable or disable verification of TLS peer certificate.

    true

    tls.vhost

    Set optional TLS virtual host.

    none


    Set the eBPF trace to enable (for example, trace_bind, trace_malloc, trace_signal, trace_vfs). This parameter can be set multiple times to enable multiple traces.

    none

    libbpf-dev: Provides the libbpf library for loading and interacting with eBPF programs.

  • CMake 3.13 or higher: Required for building the plugin.

  • Compile the source:

  • Run Fluent Bit:

    Run Fluent Bit with elevated permissions (for example, sudo). Loading eBPF programs requires root access or appropriate privileges.

  • error_raw

    Error code for the bind operation (0 indicates success).

    fd

    File descriptor returned by the operation.

    error_raw

    Error code for the operation (0 indicates success).

    poll_ms

    Set the polling interval in milliseconds for collecting events from the ring buffer.

    1000

    ringbuf_map_name

    Set the name of the eBPF ring buffer map to read events from.

    events

    event_type

    Type of event (signal, malloc, bind, or vfs).

    pid

    Process ID that generated the event.

    tid

    Thread ID that generated the event.

    comm

    Command name (process name) that generated the event.

    signal

    Signal number that was sent.

    tpid

    Target process ID that received the signal.

    operation

    Memory operation type (for example, 0 = malloc, 1 = free, 2 = calloc, 3 = realloc).

    address

    Memory address of the operation.

    size

    Size of the memory operation in bytes.

    uid

    User ID of the process.

    gid

    Group ID of the process.

    port

    Port number the socket is binding to.

    bound_dev_if

    Network device interface the socket is bound to.

    operation

    VFS operation type (integer).

    path

    File path involved in the operation.

    flags

    Flags passed to the VFS operation.

    mode

    File mode bits for the operation.

    Suppresses log messages from the input plugin that appear similar within a specified time interval. A value of 0 disables suppression.

    0

    mem_buf_limit

    Set a memory buffer limit for the input plugin. If the limit is reached, the plugin will pause until the buffer is drained. The value is in bytes. If set to 0, the buffer limit is disabled.

    0

    path

    Comma-separated list of files or glob patterns to read. Supports * wildcard (for example, /var/lib/prometheus/*.prom).

    none

    routable

    If set to true, the data generated by the plugin will be routable, meaning that it can be forwarded to other plugins or outputs. If set to false, the data will be discarded.

    true

    scrape_interval

    Interval between file scans.

    10s

    storage.pause_on_chunks_overlimit

    Enable pausing an input when it reaches its chunks limit.

    none

    storage.type

    Sets the storage type for this input, one of: filesystem, memory or memrb.

    memory

    tag

    Set a tag for the events generated by this input plugin.

    none

    thread.ring_buffer.capacity

    Set custom ring buffer capacity when the input runs in threaded mode.

    1024

    thread.ring_buffer.window

    Set custom ring buffer window percentage for threaded inputs.

    5

    threaded

    Enable threading on an input.

    false

    alias

    Sets an alias. Use for multiple instances of the same input plugin. If no alias is specified, a default name is assigned using the plugin name followed by a dot and a sequence number.

    none

    log_level

    Specifies the log level for the input plugin. If not set, the plugin uses the global log level defined in the service section.

    info

    log_suppress_interval

    make
    # For YAML configuration.
    sudo fluent-bit --config fluent-bit.yaml
    
    # For classic configuration.
    sudo fluent-bit --config fluent-bit.conf
    sudo apt update
    sudo apt install libbpf-dev linux-tools-common cmake
    git clone https://github.com/fluent/fluent-bit.git
    cd fluent-bit
    mkdir build
    cd build
    cmake .. -DFLB_IN_EBPF=On
    pipeline:
      inputs:
        - name: ebpf
          poll_ms: 500
          trace:
            - trace_signal
            - trace_malloc
            - trace_bind
    [INPUT]
      Name          ebpf
      Poll_Ms       500
      Trace         trace_signal
      Trace         trace_malloc
      Trace         trace_bind
    pipeline:
      inputs:
        - name: prometheus_textfile
          tag: custom_metrics
          path: '/var/lib/prometheus/textfile/*.prom'
          scrape_interval: 15s
      outputs:
        - name: prometheus_exporter
          match: custom_metrics
          host: 192.168.100.61
          port: 2021
    # HELP custom_counter_total A custom counter metric
    # TYPE custom_counter_total counter
    custom_counter_total{instance="server1",job="myapp"} 42
    
    # HELP custom_gauge A custom gauge metric
    # TYPE custom_gauge gauge
    custom_gauge{environment="production"} 1.23
    
    # HELP custom_histogram_bucket A custom histogram
    # TYPE custom_histogram_bucket histogram
    custom_histogram_bucket{le="0.1"} 10
    custom_histogram_bucket{le="0.5"} 25
    custom_histogram_bucket{le="1.0"} 40
    custom_histogram_bucket{le="+Inf"} 50
    custom_histogram_sum 125.5
    custom_histogram_count 50
    # Script writes metrics to file
    echo "# HELP app_requests_total Total HTTP requests" > /var/lib/prometheus/textfile/app.prom
    echo "# TYPE app_requests_total counter" >> /var/lib/prometheus/textfile/app.prom
    echo "app_requests_total{status=\"200\"} 1500" >> /var/lib/prometheus/textfile/app.prom
    echo "app_requests_total{status=\"404\"} 23" >> /var/lib/prometheus/textfile/app.prom
    #!/bin/bash
    # Backup script writes completion metrics
    BACKUP_START=$(date +%s)
    # ... perform backup ...
    BACKUP_END=$(date +%s)
    DURATION=$((BACKUP_END - BACKUP_START))
    
    cat > /var/lib/prometheus/textfile/backup.prom << EOF
    # HELP backup_duration_seconds Time taken to complete backup
    # TYPE backup_duration_seconds gauge
    backup_duration_seconds ${DURATION}
    
    # HELP backup_last_success_timestamp_seconds Last successful backup timestamp
    # TYPE backup_last_success_timestamp_seconds gauge
    backup_last_success_timestamp_seconds ${BACKUP_END}
    EOF
    pipeline:
      inputs:
        - name: prometheus_textfile
          tag: textfile_metrics
          path: /var/lib/prometheus/textfile
        - name: node_exporter_metrics
          tag: system_metrics
          scrape_interval: 15s
      outputs:
        - name: opentelemetry
          match: '*'
          host: 192.168.56.4
          port: 2021

    Exit as soon as the one-shot command exits. This allows the exec plugin to be used as a wrapper for another command, sending the target command's output to any Fluent Bit sink and then exiting. When enabled, oneshot is automatically set to true.

    false

    interval_nsec

    Polling interval (nanoseconds).

    0

    interval_sec

    Polling interval (seconds).

    1

    oneshot

    Only run once at startup. This allows collection of data from before Fluent Bit starts up.

    false

    parser

    Specify the name of a parser to interpret the entry as a structured message.

    none

    propagate_exit_code

    Cause Fluent Bit to exit with the exit code of the command run by this plugin, following the usual shell rules for exit code handling. Requires exit_after_oneshot=true.

    false

    threaded

    Indicates whether to run this input in its own thread.

    false

    buf_size

    Size of the buffer. See unit sizes for allowed values.

    4096

    command

    The command to execute, passed to popen without any additional escaping or processing. Can include pipelines, redirection, command substitution, or other shell constructs.

    none


    exit_after_oneshot

    Enable secure forward protocol with a zero-length shared key. Use this to enable user authentication without requiring a shared key, or to connect to Fluentd with a zero-length shared key.

    false

    listen

    Listener network interface.

    0.0.0.0

    port

    TCP port to listen for incoming connections.

    24224

    security.users

    Specify the username and password pairs for secure forward authentication. Requires shared_key or empty_shared_key to be set.

    self_hostname

    Hostname for secure forward authentication.

    localhost

    shared_key

    Shared key for secure forward authentication.

    none

    tag

    Override the tag of the forwarded events with the defined value.

    none

    tag_prefix

    Prefix incoming tag with the defined value.

    none

    threaded

    Indicates whether to run this input in its own thread.

    false

    unix_path

    Specify the path to Unix socket to receive a Forward message. If set, listen and port are ignored.

    none

    unix_perm

    Set the permission of the Unix socket file. If unix_path isn't set, this parameter is ignored.

    none

    buffer_chunk_size

    By default, the buffer that stores incoming Forward messages doesn't allocate the maximum memory allowed up front; instead, it allocates memory as it's required. The size of each allocation round is set by buffer_chunk_size. The value must follow the Unit Size specification.

    1024000

    buffer_max_size

    Specify the maximum buffer memory size used to receive a Forward message. This limit also applies to incoming payloads and decompressed data; payloads exceeding this size are rejected and the connection is closed. The value must follow the Unit Size specification.

    6144000


    empty_shared_key

    Fluent Bit v5.0

    hashtag
    hot_reloaded_times metric type change

    The internal metric fluentbit_hot_reloaded_times has changed from a gauge to a counter. The previous gauge registration caused incorrect results when using PromQL functions like rate() and increase(), which expect counters.

    If you have Prometheus dashboards or alerting rules that reference fluentbit_hot_reloaded_times, update them to use counter-appropriate PromQL functions (for example, rate() or increase() instead of gauge-specific functions like delta()).
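As an illustration, a rule that previously used a gauge-style function can move to a counter-appropriate one; the 5m range below is an arbitrary example window:

```
# Before (gauge-style, now incorrect for a counter):
delta(fluentbit_hot_reloaded_times[5m])

# After (counter-appropriate):
increase(fluentbit_hot_reloaded_times[5m])
```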

    hashtag
    Shared HTTP listener settings for HTTP-based inputs

    The HTTP-based input plugins now use a shared HTTP listener configuration model. In Fluent Bit v5.0, the canonical setting names are:

    • http_server.http2

    • http_server.buffer_chunk_size

    • http_server.buffer_max_size

    • http_server.max_connections

    • http_server.workers

    Legacy per-plugin names such as http2, buffer_chunk_size, and buffer_max_size are still accepted as compatibility aliases, but new configurations should use the http_server.* names.

    If you tune http, splunk, elasticsearch, opentelemetry, or prometheus_remote_write inputs, review those sections and migrate to the shared naming so future upgrades are clearer.
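As a sketch, an http input tuned with the shared listener names might look like the following; the port and buffer values are illustrative:

```yaml
pipeline:
  inputs:
    - name: http
      listen: 0.0.0.0
      port: 8888
      http_server.http2: on
      http_server.buffer_chunk_size: 1M
      http_server.buffer_max_size: 4M
```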

    hashtag
    Mutual TLS for input plugins

    Input plugins that support TLS now also support tls.verify_client_cert. Enable this option to require and validate the client certificate presented by the sender.

    If you terminate TLS directly in Fluent Bit and need mutual TLS (mTLS), add tls.verify_client_cert on together with the usual tls.crt_file and tls.key_file settings.
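A minimal mTLS sketch for an HTTP-based input; the certificate paths are illustrative:

```yaml
pipeline:
  inputs:
    - name: http
      port: 9443
      tls: on
      tls.crt_file: /etc/fluent-bit/tls/server.crt
      tls.key_file: /etc/fluent-bit/tls/server.key
      tls.ca_file: /etc/fluent-bit/tls/ca.crt
      tls.verify_client_cert: on
```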

    hashtag
    New internal logs input

    Fluent Bit v5.0 adds the fluentbit_logs input plugin, which mirrors Fluent Bit's own internal log stream back into the data pipeline as structured log records.

    Use this input if you want to forward Fluent Bit diagnostics to another destination, filter them, or store them alongside the rest of your telemetry.
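A minimal sketch that mirrors internal logs to standard output; the tag name is illustrative:

```yaml
pipeline:
  inputs:
    - name: fluentbit_logs
      tag: fb.internal

  outputs:
    - name: stdout
      match: fb.internal
```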

    hashtag
    Emitter backpressure with filesystem storage

    The internal emitter plugin, used by filters such as rewrite_tag, now automatically enables storage.pause_on_chunks_overlimit when filesystem storage is in use and that option hasn't been explicitly configured.

    Previously, the emitter could accumulate chunks beyond the storage.max_chunks_up limit. Pipelines that use rewrite_tag or other emitter-backed filters with filesystem storage will now pause when the configured storage limit is reached.

    If you rely on the previous unlimited accumulation behavior, explicitly set storage.pause_on_chunks_overlimit off on the relevant input. Otherwise, review your storage.max_chunks_up value to ensure it's tuned for your expected throughput.
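For example, a sketch that opts out on a tail input using filesystem storage (the path is illustrative):

```yaml
pipeline:
  inputs:
    - name: tail
      path: /var/log/app/*.log
      storage.type: filesystem
      storage.pause_on_chunks_overlimit: off
```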

    hashtag
    More OAuth 2.0 coverage

    Fluent Bit v5.0 expands OAuth 2.0 support in both directions:

    • HTTP-based inputs can validate incoming bearer tokens using oauth2.validate, oauth2.issuer, and oauth2.jwks_url.

    • The HTTP output can acquire access tokens with oauth2.enable and supports basic, post, and private_key_jwt client authentication.

    If you previously handled authentication outside Fluent Bit for these cases, review the plugin pages for the new built-in options.
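A sketch of bearer-token validation on an HTTP-based input, using the option names above; the issuer and JWKS URLs are placeholders:

```yaml
pipeline:
  inputs:
    - name: http
      port: 8888
      oauth2.validate: on
      oauth2.issuer: https://auth.example.com
      oauth2.jwks_url: https://auth.example.com/.well-known/jwks.json
```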

    For a broader overview of user-visible additions in this release, see What's new in Fluent Bit v5.0.

    hashtag
    Fluent Bit v4.2

    hashtag
    Vivo exporter output plugin

    The HTTP endpoint paths exposed by the Vivo exporter output plugin have changed. All endpoints now follow an /api/v1/ prefix:

    Signal
    Endpoint

    Logs

    /api/v1/logs

    Metrics

    /api/v1/metrics

    Traces

    /api/v1/traces

    Internal metrics

    /api/v1/internal/metrics
    If you have tooling or dashboards that query the Vivo exporter HTTP endpoints directly, update the endpoint paths accordingly.

    hashtag
    Fluent Bit v4.0

    hashtag
    Package support for older Linux distributions

    Official binary packages are no longer produced for the following Linux distributions:

    • Ubuntu 16.04 (Xenial)

    • Ubuntu 18.04 (Bionic)

    • Ubuntu 20.04 (Focal)

    Users on these platforms should upgrade their OS or build Fluent Bit from source.

    hashtag
    Kafka plugin support on older platforms

    The Kafka input and output plugins are disabled in official packages for CentOS 7 and Amazon Linux 2 (ARM64). This is due to a Kafka library (librdkafka) update that requires a newer glibc version than these platforms provide.

    Users who need Kafka support on these platforms must build Fluent Bit from source against a compatible version of librdkafka.

    hashtag
    Fluent Bit v3.0

    hashtag
    HTTP/2 enabled by default for HTTP-based input plugins

    The following input plugins now have HTTP/2 support enabled by default (http2 true):

    • opentelemetry

    • splunk

    • elasticsearch

    • http

    These plugins transparently support both HTTP/1.1 and HTTP/2 connections. If your clients don't support HTTP/2, or if you have a reverse proxy or load balancer that doesn't handle HTTP/2 correctly, add http2 off to the affected input plugin configuration section.
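For example, a sketch that forces HTTP/1.1 only on an opentelemetry input (the port value is illustrative):

```yaml
pipeline:
  inputs:
    - name: opentelemetry
      port: 4318
      http2: off
```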

    hashtag
    Fluent Bit v2.0

    hashtag
    TLS library

    mbedTLS is no longer supported as a TLS backend. All TLS connections now use OpenSSL. If you compile Fluent Bit from source and previously linked against mbedTLS, you must now link against OpenSSL. Official binary packages already use OpenSSL.

    hashtag
    Fluent Bit v1.9.9

    The td-agent-bit package is no longer provided after this release. Users should switch to the fluent-bit package.

    hashtag
    Fluent Bit v1.6

    If you are migrating from a previous version of Fluent Bit, review the following important changes:

    hashtag
    Tail input plugin

    By default, the tail input plugin follows a file from the end after the service starts, instead of reading it from the beginning. Every file found when the plugin starts is followed from its last position. New files discovered at runtime, or existing files that rotate, are read from the beginning.

    To keep the old behavior, set the option read_from_head to true.
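A sketch in classic configuration format; the path is illustrative:

```
[INPUT]
  Name           tail
  Path           /var/log/app/*.log
  Read_From_Head true
```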

    hashtag
    Stackdriver output plugin

    The project_id of the resource in LogEntry records sent to Google Cloud Logging is now set to the project ID rather than the project number. To learn the difference between project IDs and project numbers, see Creating and managing projects.

    If you have existing queries based on the resource's project_id, update your query accordingly.

    hashtag
    Fluent Bit v1.5

    The migration from v1.4 to v1.5 is pretty straightforward.

    • The keepalive configuration mode has been renamed to net.keepalive. Now, all network I/O keepalive is enabled by default. To learn more about this and other associated configuration properties, read the Networking Administration section.

    • If you use the Elasticsearch output plugin, the default value of type changed from flb_type to _doc. Many versions of Elasticsearch tolerate this, but Elasticsearch v5.6 through v6.1 require a type without a leading underscore. See the Elasticsearch output plugin documentation FAQ entry for more details.

    hashtag
    Fluent Bit v1.4

    If you are migrating from Fluent Bit v1.3, there are no breaking changes.

    hashtag
    Fluent Bit v1.3

    If you are migrating from Fluent Bit v1.2 to v1.3, there are no breaking changes. If you are upgrading from an older version, review the following incremental changes:

    hashtag
    Fluent Bit v1.2

    hashtag
    Docker, JSON, parsers, and decoders

    Fluent Bit v1.2 fixed many issues associated with JSON encoding and decoding.

    For example, when parsing Docker logs, it's no longer necessary to use decoders. The new Docker parser looks like this:

    hashtag
    Kubernetes filter

    Fluent Bit made improvements to the Kubernetes filter's handling of string-encoded log messages. If the Merge_Log option is enabled, the filter tries to handle the log content as a JSON map; if successful, it adds the keys to the root map.

    In addition, fixes and improvements were made to the Merge_Log_Key option. If a log merge succeeds, all new keys will be packed under the key specified by this option. A suggested configuration is as follows:

    As an example, if the original log content is the following map:

    the final record will be composed as follows:

    hashtag
    Fluent Bit v1.1

    If you are upgrading from Fluent Bit 1.0.x or earlier, review the following relevant changes when switching to Fluent Bit v1.1 or later series:

    hashtag
    Kubernetes filter

    Fluent Bit introduced a new configuration property called Kube_Tag_Prefix to help Tag prefix resolution and address an unexpected behavior in previous versions.

    During the 1.0.x release cycle, a commit in the Tail input plugin changed the default behavior of how the Tag was composed when using the wildcard for expansion, breaking compatibility with other services. Consider the following configuration example:

    The expected behavior is that Tag will be expanded to:

    kube.var.log.containers.apache.log

    The change introduced in the 1.0 series switched from absolute path to the base filename only:

    kube.apache.log

    The Fluent Bit v1.1 release restored the default behavior: the Tag is now composed using the absolute path of the monitored file.

    Having the absolute path in the Tag is relevant for routing and flexible configuration, and it also helps to keep compatibility with Fluentd behavior.

    This behavior switch in the Tail input plugin affects how the Kubernetes filter operates. When the filter is used, it needs to perform a local metadata lookup based on the file names when using Tail as a source. With the new Kube_Tag_Prefix option, you can specify the prefix used in the Tail input plugin. For the previous configuration example, the new configuration looks like:

    The proper value for Kube_Tag_Prefix must be composed of the Tag prefix set in the Tail input plugin plus the monitored directory path with slashes replaced by dots.

    Official Release Notes
    Configuration parameters

    This plugin has the following configuration parameters:

    Key
    Description
    Default

    bitrate

    The bit rate for the communication. For example: 9600, 38400, 115200.

    none

    file

    Absolute path to the device entry. For example, /dev/ttyS0.

    none

    hashtag
    Get started

    To retrieve messages by using the Serial interface, you can run the plugin from the command line or through the configuration file:

    hashtag
    Command line

    The following example loads the serial input plugin, sets a bitrate of 9600, listens on the /dev/tnt0 interface, and uses the custom tag data to route the messages.

    The interface (/dev/tnt0) is an emulation of the serial interface. The following examples write messages to the other end of the interface (for example, /dev/tnt1).

    In Fluent Bit you can run the command:

    Which should produce output like:

    Using the separator configuration, you can send multiple messages at once.

    Run this command after starting Fluent Bit:

    Then, run Fluent Bit:

    This should produce results similar to the following:

    hashtag
    Configuration file

    In your main configuration file append the following sections:

    hashtag
    Emulating a serial interface on Linux

    You can emulate a serial interface on your Linux system to test the serial input plugin locally when your computer doesn't have one. The following procedure has been tested on Ubuntu 15.04 running Linux kernel 4.0.

    hashtag
    Build and install the tty0tty module

    1. Download the sources:

    2. Unpack and compile:

    3. Copy the new kernel module into the kernel modules directory:

    4. Load the module:

      You should see new serial ports in /dev (ls /dev/tnt*).

    5. Give appropriate permissions to the new serial ports:

    When the module is loaded, it will interconnect the following virtual interfaces:

    Configuration file

    One of the ways to configure Fluent Bit is using a main configuration file. Fluent Bit allows the use of one configuration file that works at a global scope, using the defined Format and Schema.

    The main configuration file supports four sections:

    • Service

    • Input

    • Filter

    • Output

    It's also possible to split the main configuration file into multiple files using the Include File feature to include external files.

    hashtag
    Service

    The Service section defines global properties of the service. The following keys are available:

    Key
    Description
    Default Value

    The following is an example of a SERVICE section:
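A minimal sketch of such a section; the values shown are illustrative:

```
[SERVICE]
  Flush     5
  Daemon    off
  Log_Level debug
```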

    For scheduler and retry details, see the scheduling and retries section.

    hashtag
    Config input

    The INPUT section defines a source (related to an input plugin). Each input plugin can add its own configuration keys:

    Key
    Description

    Name is mandatory and tells Fluent Bit which input plugin to load. Tag is mandatory for all plugins except for the input forward plugin, which provides dynamic tags.

    hashtag
    Example

    The following is an example of an INPUT section:
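A minimal sketch using the cpu input; the tag is illustrative:

```
[INPUT]
  Name cpu
  Tag  my_cpu
```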

    hashtag
    Config filter

    The FILTER section defines a filter (related to a filter plugin). Each filter plugin can add its own configuration keys. The base configuration for each FILTER section contains:

    Key
    Description

    Name is mandatory and lets Fluent Bit know which filter plugin should be loaded. Match or Match_Regex is mandatory for all plugins. If both are specified, Match_Regex takes precedence.

    hashtag
    Filter example

    The following is an example of a FILTER section:
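A minimal sketch using the grep filter; the match pattern and rule are illustrative:

```
[FILTER]
  Name  grep
  Match *
  Regex log aa
```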

    hashtag
    Config output

    The OUTPUT section specifies a destination that certain records should go to after a Tag match. Fluent Bit can route up to 256 OUTPUT plugins. The configuration supports the following keys:

    Key
    Description

    hashtag
    Output example

    The following is an example of an OUTPUT section:
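A minimal sketch sending matched records to standard output; the match pattern is illustrative:

```
[OUTPUT]
  Name  stdout
  Match my*cpu
```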

    hashtag
    Collecting cpu metrics example

    The following configuration file example demonstrates how to collect CPU metrics and flush the results every five seconds to the standard output:
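A sketch combining the sections above; tags and values are illustrative:

```
[SERVICE]
  Flush     5
  Daemon    off
  Log_Level debug

[INPUT]
  Name cpu
  Tag  my_cpu

[OUTPUT]
  Name  stdout
  Match my*cpu
```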

    hashtag
    Config include file

    To avoid long, complicated configuration files, it's better to split specific parts into different files and include them from one main file. The @INCLUDE command can be used in the following way:
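A sketch of the directive; the file name is illustrative:

```
@INCLUDE somefile.conf
```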

    The configuration reader will try to open the path somefile.conf. If not found, the reader assumes the file is on a relative path based on the path of the base configuration file:

    • Main configuration path: /tmp/main.conf

    • Included file: somefile.conf

    • Fluent Bit will try to open somefile.conf; if that fails, it will try /tmp/somefile.conf.

    The @INCLUDE command only works at top-left level of the configuration line, and can't be used inside sections.

    Wildcard character (*) supports including multiple files. For example:
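A sketch with a wildcard include; the file pattern is illustrative:

```
@INCLUDE input_*.conf
```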

    Files matching the wildcard character are included unsorted. If plugin ordering between files needs to be preserved, the files should be included explicitly.

    Environment variables aren't supported in the includes section. The path to the file must be specified as a literal string.

    Kafka


    Supported event types: logs

    The Kafka input plugin enables Fluent Bit to consume messages directly from one or more topics. By subscribing to specified topics, this plugin efficiently collects and forwards Kafka messages for further processing within your Fluent Bit pipeline.

    Starting with version 4.0.4, the Kafka input plugin supports authentication with AWS MSK IAM, enabling integration with Amazon MSK (Managed Streaming for Apache Kafka) clusters that require IAM-based access.

    fluent-bit -i exec -p 'command=ls /var/log' -o stdout
    ...
    [0] exec.0: [1521622010.013470159, {"exec"=>"ConsoleKit"}]
    [1] exec.0: [1521622010.013490313, {"exec"=>"Xorg.0.log"}]
    [2] exec.0: [1521622010.013492079, {"exec"=>"Xorg.0.log.old"}]
    [3] exec.0: [1521622010.013493443, {"exec"=>"anaconda.ifcfg.log"}]
    [4] exec.0: [1521622010.013494707, {"exec"=>"anaconda.log"}]
    [5] exec.0: [1521622010.013496016, {"exec"=>"anaconda.program.log"}]
    [6] exec.0: [1521622010.013497225, {"exec"=>"anaconda.storage.log"}]
    ...
    pipeline:
      inputs:
        - name: exec
          tag: exec_ls
          command: ls /var/log
          interval_sec: 1
          interval_nsec: 0
          buf_size: 8mb
          oneshot: false
    
      outputs:
        - name: stdout
          match: '*'
    [INPUT]
      Name          exec
      Tag           exec_ls
      Command       ls /var/log
      Interval_Sec  1
      Interval_Nsec 0
      Buf_Size      8mb
      Oneshot       false
    
    [OUTPUT]
      Name   stdout
      Match  *
    pipeline:
      inputs:
        - name: exec
          tag: exec_oneshot_demo
          command: 'for s in $(seq 1 10); do echo "count: $s"; sleep 1; done; exit 1'
          oneshot: true
          exit_after_oneshot: true
          propagate_exit_code: true
    
      outputs:
        - name: stdout
          match: '*'
    [INPUT]
      Name                exec
      Tag                 exec_oneshot_demo
      Command             for s in $(seq 1 10); do echo "count: $s"; sleep 1; done; exit 1
      Oneshot             true
      Exit_After_Oneshot  true
      Propagate_Exit_Code true
    
    [OUTPUT]
      Name   stdout
      Match  *
    ...
    [0] exec_oneshot_demo: [[1681702172.950574027, {}], {"exec"=>"count: 1"}]
    [1] exec_oneshot_demo: [[1681702173.951663666, {}], {"exec"=>"count: 2"}]
    [2] exec_oneshot_demo: [[1681702174.953873724, {}], {"exec"=>"count: 3"}]
    [3] exec_oneshot_demo: [[1681702175.955760865, {}], {"exec"=>"count: 4"}]
    [4] exec_oneshot_demo: [[1681702176.956840282, {}], {"exec"=>"count: 5"}]
    [5] exec_oneshot_demo: [[1681702177.958292246, {}], {"exec"=>"count: 6"}]
    [6] exec_oneshot_demo: [[1681702178.959508200, {}], {"exec"=>"count: 7"}]
    [7] exec_oneshot_demo: [[1681702179.961715745, {}], {"exec"=>"count: 8"}]
    [8] exec_oneshot_demo: [[1681702180.963924140, {}], {"exec"=>"count: 9"}]
    [9] exec_oneshot_demo: [[1681702181.965852990, {}], {"exec"=>"count: 10"}]
    ...
    #!/bin/bash
    # This is a DANGEROUS example of what NOT to do, NEVER DO THIS
    exec fluent-bit \
      -o stdout \
      -i exec \
      -p exit_after_oneshot=true \
      -p propagate_exit_code=true \
      -p command='myscript $*'

    # A safer variant quotes the arguments before interpolating them:
    #   -p command='echo '"$(printf '%q' "$@")"
    fluent-bit -i forward -o stdout
    fluent-bit -i forward -p listen="192.168.3.2" -p port=9090 -o stdout
    pipeline:
      inputs:
        - name: forward
          listen: 0.0.0.0
          port: 24224
          buffer_chunk_size: 1M
          buffer_max_size: 6M
    
      outputs:
        - name: stdout
          match: '*'
    [INPUT]
      Name              forward
      Listen            0.0.0.0
      Port              24224
      Buffer_Chunk_Size 1M
      Buffer_Max_Size   6M
    
    [OUTPUT]
      Name   stdout
      Match  *
    pipeline:
      inputs:
        - name: forward
          listen: 0.0.0.0
          port: 24224
          buffer_chunk_size: 1M
          buffer_max_size: 6M
          security.users: fluentbit changeme
          shared_key: secret
          self_hostname: flb.server.local
    
      outputs:
        - name: stdout
          match: '*'
    [INPUT]
      Name              forward
      Listen            0.0.0.0
      Port              24224
      Buffer_Chunk_Size 1M
      Buffer_Max_Size   6M
      Security.Users    fluentbit changeme
      Shared_Key        secret
      Self_Hostname     flb.server.local
    
    [OUTPUT]
      Name   stdout
      Match  *
    pipeline:
      inputs:
        - name: forward
          listen: 0.0.0.0
          port: 24224
          buffer_chunk_size: 1M
          buffer_max_size: 6M
          security.users: fluentbit changeme
          empty_shared_key: true
          self_hostname: flb.server.local
    
      outputs:
        - name: stdout
          match: '*'
    [INPUT]
      Name              forward
      Listen            0.0.0.0
      Port              24224
      Buffer_Chunk_Size 1M
      Buffer_Max_Size   6M
      Security.Users    fluentbit changeme
      Empty_Shared_Key  true
      Self_Hostname     flb.server.local
    
    [OUTPUT]
      Name   stdout
      Match  *
    echo '{"key 1": 123456789, "key 2": "abcdefg"}' | fluent-cat my_tag
    fluent-bit -i forward -o stdout
    ...
    [0] my_tag: [1475898594, {"key 1"=>123456789, "key 2"=>"abcdefg"}]
    ...
    [PARSER]
      Name        docker
      Format      json
      Time_Key    time
      Time_Format %Y-%m-%dT%H:%M:%S.%L
      Time_Keep   On
    [FILTER]
      Name Kubernetes
      Match kube.*
      Kube_Tag_Prefix kube.var.log.containers.
      Merge_Log On
      Merge_Log_Key log_processed
    {"key1": "val1", "key2": "val2"}
    {"log": "{\"key1\": \"val1\", \"key2\": \"val2\"}", "log_processed": { "key1": "val1", "key2": "val2" } }
    [INPUT]
      Name tail
      Path /var/log/containers/*.log
      Tag kube.*
    [INPUT]
      Name tail
      Path /var/log/containers/*.log
      Tag kube.*
    
    [FILTER]
      Name kubernetes
      Match *
      Kube_Tag_Prefix kube.var.log.containers.
    pipeline:
      inputs:
        - name: serial
          tag: data
          file: /dev/tnt0
          bitrate: 9600
          separator: X
    
      outputs:
        - name: stdout
          match: '*'        
    [INPUT]
      Name      serial
      Tag       data
      File      /dev/tnt0
      Bitrate   9600
      Separator X
    
    [OUTPUT]
      Name   stdout
      Match  *
    git clone https://github.com/freemed/tty0tty
    cd tty0tty/module
    
    make
    sudo cp tty0tty.ko /lib/modules/$(uname -r)/kernel/drivers/misc/
    fluent-bit -i serial -t data -p file=/dev/tnt0 -p bitrate=9600 -o stdout -m '*'
    echo 'this is some message' > /dev/tnt1
    fluent-bit -i serial -t data -p file=/dev/tnt0 -p bitrate=9600 -o stdout -m '*'
    ...
    [0] data: [1463780680, {"msg"=>"this is some message"}]
    ...
    echo 'aaXbbXccXddXee' > /dev/tnt1
    fluent-bit -i serial -t data -p file=/dev/tnt0 -p bitrate=9600 -p separator=X -o stdout -m '*'
    ...
    [0] data: [1463781902, {"msg"=>"aa"}]
    [1] data: [1463781902, {"msg"=>"bb"}]
    [2] data: [1463781902, {"msg"=>"cc"}]
    [3] data: [1463781902, {"msg"=>"dd"}]
    ...
    /dev/tnt0 <=> /dev/tnt1
    /dev/tnt2 <=> /dev/tnt3
    /dev/tnt4 <=> /dev/tnt5
    /dev/tnt6 <=> /dev/tnt7


    format

    Specify the format of the incoming data stream. The options are json and none. format and separator can't be used at the same time.

    none

    min_bytes

    The serial interface expects at least min_bytes to be available before processing the message.

    1

    separator

    Specify a separator string that's used to determine when a message ends.

    none

    threaded

    Indicates whether to run this input in its own thread.

    false

    sudo depmod
    
    sudo modprobe tty0tty
    sudo chmod 666 /dev/tnt*

    daemon

    Boolean. Determines whether Fluent Bit should run as a daemon (background process). Allowed values are: yes, no, on, and off. Don't enable when using a Systemd-based unit, such as the one provided in Fluent Bit packages.

    Off

    dns.mode

    Set the primary transport layer protocol used by the asynchronous DNS resolver. Can be overridden on a per plugin basis.

    UDP

    log_file

    Absolute path for an optional log file. By default all logs are redirected to the standard error interface (stderr).

    none

    log_level

    Set the logging verbosity level. Allowed values are: off, error, warn, info, debug, and trace. Values are cumulative. If debug is set, it will include error, warning, info, and debug. Trace mode is only available if Fluent Bit was built with the WITH_TRACE option enabled.

    info

    parsers_file

    Path for a parsers configuration file. Multiple Parsers_File entries can be defined within the section.

    none

    plugins_file

    Path for a plugins configuration file. A plugins configuration file defines paths for external plugins.

    none

    streams_file

    Path for the Stream Processor configuration file.

    none

    http_server

    Enable the built-in HTTP Server.

    Off

    http_listen

    Set listening interface for HTTP Server when it's enabled.

    0.0.0.0

    http_port

    Set TCP Port for the HTTP Server.

    2020

    coro_stack_size

    Set the coroutines stack size in bytes. The value must be greater than the page size of the running system. Setting the value too small (4096) can cause coroutine threads to overrun the stack buffer. The default value of this parameter shouldn't be changed.

    24576

    scheduler.cap

    Set a maximum retry time in seconds. Supported in v1.8.7 and greater.

    2000

    scheduler.base

    Set a base of exponential backoff. Supported in v1.8.7 and greater.

    5

    json.convert_nan_to_null

    If enabled, NaN converts to null when Fluent Bit converts msgpack to json.

    false

    json.escape_unicode

    Controls how Fluent Bit serializes non‑ASCII / multi‑byte Unicode characters in JSON strings. When enabled, Unicode characters are escaped as \uXXXX sequences (characters outside BMP become surrogate pairs). When disabled, Fluent Bit emits raw UTF‑8 bytes.

    true

    sp.convert_from_str_to_num

    If enabled, the stream processor converts strings that represent numbers to a numeric type.

    true

    windows.maxstdio

    If specified, adjusts the stdio limit. Only available on Windows. Values from 512 to 2048 are allowed.

    512


    flush

    Set the flush time in seconds.nanoseconds format. The engine loop uses a flush timeout to define when it's required to flush the records ingested by input plugins through the defined output plugins.

    1

    grace

    Set the grace time in seconds as an integer value. The engine loop uses a grace timeout to define wait time on exit.

    5

    Name

    Name of the input plugin.

    Tag

    Tag name associated to all records coming from this plugin.

    Log_Level

    Set the plugin's logging verbosity level. Allowed values are: off, error, warn, info, debug, and trace. Defaults to the SERVICE section's Log_Level.

    Name

    Name of the filter plugin.

    Match

    A pattern to match against the tags of incoming records. Case sensitive, supports asterisk (*) as a wildcard.

    Match_Regex

    A regular expression to match against the tags of incoming records. Use this option if you want to use the full regular expression syntax.

    Log_Level

    Set the plugin's logging verbosity level. Allowed values are: off, error, warn, info, debug, and trace. Defaults to the SERVICE section's Log_Level.

    Name

    Name of the output plugin.

    Match

    A pattern to match against the tags of incoming records. Case sensitive and supports the asterisk (*) character as a wildcard.

    Match_Regex

    A regular expression to match against the tags of incoming records. Use this option if you want to use the full regular expression syntax.

    Log_Level

    Set the plugin's logging verbosity level. Allowed values are: off, error, warn, info, debug, and trace. Defaults to the SERVICE section's Log_Level.

    This plugin uses the official librdkafka C libraryarrow-up-right as a built-in dependency.

    hashtag
    Configuration parameters

    Key
    Description
    Default

    brokers

    Single entry or comma-separated list of Kafka brokers. For example: 192.168.1.3:9092, 192.168.1.4:9092.

    none

    buffer_max_size

    Specify the maximum size of the buffer per cycle used to poll Kafka messages from subscribed topics. To increase throughput, specify a larger size.

    4M

    hashtag
    Get started

    To subscribe to or collect messages from Apache Kafka, run the plugin from the command line or through the configuration file as shown in the following examples.

    hashtag
    Command line

    The Kafka plugin can read parameters through the -p argument (property):

    hashtag
    Configuration file (recommended)

    In your main configuration file append the following:

    hashtag
    Example of using Kafka input and output plugins

    The Fluent Bit source repository contains a full example of using Fluent Bit to process Kafka records:

    The previous example will connect to the broker listening on kafka-broker:9092 and subscribe to the fb-source topic, polling for new messages every 100 milliseconds.

    Since the payload will be in JSON format, the plugin is configured to parse the payload with format json.

    Every message received is then processed with kafka.lua and sent back to the fb-sink topic of the same broker.

    The example can be executed locally with make start in the examples/kafka_filter directory (Docker Compose is used).

    hashtag
    AWS MSK IAM authentication

    Fluent Bit v4.0.4 and later supports authentication to Amazon MSK (Managed Streaming for Apache Kafka) clusters using AWS IAM. This lets you securely connect to MSK brokers with AWS credentials, leveraging IAM roles and policies for access control.

    hashtag
    Build requirements

    If you are compiling Fluent Bit from source, ensure the following requirements are met to enable AWS MSK IAM support:

    • The packages libsasl2 and libsasl2-dev must be installed on your build environment.

    hashtag
    Runtime requirements

    • Network Access: Fluent Bit must be able to reach your MSK broker endpoints (AWS VPC setup).

    • AWS Credentials: Provide these AWS credentials using any supported AWS method. These credentials are discovered by default when aws_msk_iam flag is enabled.

      • IAM roles (recommended for EC2, ECS, or EKS)

      • Environment variables (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)

      • AWS credentials file (~/.aws/credentials)

      • Instance metadata service (IMDS)

    • IAM Permissions: The credentials must allow access to the target MSK cluster, as shown in the following example policy.

    hashtag
    Configuration parameters [#config-aws]

    Property
    Description
    Required

    aws_msk_iam

    If true, enables AWS MSK IAM authentication. Possible values: true, false.

    false

    aws_msk_iam_cluster_arn

    Full ARN of the MSK cluster for region extraction. This value is required if aws_msk_iam is true.

    none

    hashtag
    Configuration example

    hashtag
    Example AWS IAM policy

    circle-info

    IAM policies and permissions can be complex and might vary depending on your organization's security requirements. If you are unsure about the correct permissions or best practices, consult your AWS administrator or an AWS expert who is familiar with MSK and IAM security.

    The AWS credentials used by Fluent Bit must have permission to connect to your MSK cluster. Here is a minimal example policy:


    Windows

    Fluent Bit is distributed as the fluent-bit package for Windows and as a Windows container on Docker Hub. Fluent Bit provides two Windows installers: a ZIP archive and an EXE installer.

    Not all plugins are supported on Windows. The CMake configurationarrow-up-right shows the default set of supported plugins.

    hashtag
    Configuration

    Provide a valid Windows configuration with the installation.

    The following configuration is an example:

    hashtag
    Migration to Fluent Bit

    For version 1.9 and later, td-agent-bit is a deprecated package and was removed after 1.9.9. The correct package name to use now is fluent-bit.

    hashtag
    Installation packages

    The latest stable version is 5.0.2.

    INSTALLERS
    SHA256 CHECKSUMS

    These are now built using GitHub Actions. Legacy AppVeyor builds are still available (AMD 32/64 only) at releases.fluentbit.io but are deprecated.

    MSI installers are also available:

    To check the integrity, use the Get-FileHash cmdlet for PowerShell.
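For example, to compute the SHA256 hash of a downloaded archive and compare it against the published checksum (the file name here is illustrative):

```powershell
# Compute the SHA256 digest of the downloaded package.
# SHA256 is the default algorithm, but stating it explicitly is clearer.
Get-FileHash -Algorithm SHA256 .\fluent-bit-5.0.2-win64.zip
```

The Hash value printed by the cmdlet should match the checksum listed next to the installer.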

    hashtag
    Installing from a ZIP archive

    1. Download a ZIP archive. Choose the suitable installer for your 32-bit or 64-bit environment.

    2. Expand the ZIP archive. You can do this by clicking Extract All in Explorer or Expand-Archive in PowerShell.

      The ZIP package contains the following set of files.

    3. Launch cmd.exe or PowerShell on your machine, and execute fluent-bit.exe. The following output indicates Fluent Bit is running:

    To halt the process, press Control+C in the terminal.

    hashtag
    Installing from the executable installer

    1. Download an EXE installer for the appropriate 32-bit or 64-bit build.

    2. Double-click the EXE installer you've downloaded. The installation wizard starts.

    3. Click Next and finish the installation. By default, Fluent Bit is installed in C:\Program Files\fluent-bit\

    hashtag
    Installer options

    The Windows installer is built by CPack using NSIS and supports the default NSIS options for silent installation and for choosing the install directory.

    To silently install to the C:\fluent-bit directory, here is an example:

    The uninstaller also supports a silent uninstall using the same /S flag. This can be used for provisioning with automation like Ansible, Puppet, and so on.

    hashtag
    Windows service support

    Windows services are equivalent to daemons in Unix (long-running background processes). For v1.5.0 and later, Fluent Bit has native support for Windows services.

    For example, you have the following installation layout:

    To register Fluent Bit as a Windows service, execute the following command at a command prompt. A single space is required after binpath=.
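Assuming Fluent Bit is installed under C:\fluent-bit (an illustrative layout), the registration command might look like the following sketch; note the escaped quotes around the paths:

```shell
sc.exe create fluent-bit binpath= "\"C:\fluent-bit\bin\fluent-bit.exe\" -c \"C:\fluent-bit\conf\fluent-bit.conf\""
```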

    Fluent Bit can be started and managed as a normal Windows service.

    To halt the Fluent Bit service, use the stop command.

    To start Fluent Bit automatically on boot, execute the following:
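The corresponding sc.exe commands are sketched below, using the service name from the registration example above (assumed to be fluent-bit):

```shell
# Start the service manually
sc.exe start fluent-bit

# Halt the service
sc.exe stop fluent-bit

# Configure the service to start automatically on boot
# (a single space is required after start=)
sc.exe config fluent-bit start= auto
```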

    hashtag
    FAQs

    hashtag
    Can you manage Fluent Bit service using PowerShell?

    Instead of sc.exe, PowerShell can be used to manage Windows services.

    Create a Fluent Bit service:

    Start the service:

    Query the service status:

    Stop the service:

    Remove the service (requires PowerShell 6.0 or later)
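A sketch of the equivalent PowerShell cmdlets, assuming the illustrative C:\fluent-bit install path used earlier:

```powershell
# Create the service (quotes around paths are part of the binary path string)
New-Service fluent-bit -BinaryPathName '"C:\fluent-bit\bin\fluent-bit.exe" -c "C:\fluent-bit\conf\fluent-bit.conf"' -StartupType Automatic

# Start, inspect, and stop the service
Start-Service fluent-bit
Get-Service fluent-bit
Stop-Service fluent-bit

# Remove the service (requires PowerShell 6.0 or later)
Remove-Service fluent-bit
```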

    hashtag
    Compile from source

    If you need to create a custom executable, use the following procedure to compile Fluent Bit by yourself.

    hashtag
    Preparation

    1. Install Microsoft Visual C++ to compile Fluent Bit. You can install the minimum toolkit using the following command:

    2. Choose C++ Build Tools and C++ CMake tools for Windows and wait until the process finishes.

    3. Install flex and bison. One way to install them on Windows is to use winflexbison.

    4. Add the path C:\WinFlexBison to your system's Path environment variable.

    hashtag
    Compilation

    1. Open the Start menu on Windows and type Command Prompt for VS. From the result list, select the one that corresponds to your target system (x86 or x64).

    2. Verify the installed OpenSSL library files match the selected target. You can examine the library files by using the dumpbin command with the /headers option.
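For example, to check which architecture an OpenSSL library was built for (the library path is illustrative; adjust it to your OpenSSL installation):

```shell
dumpbin /headers C:\OpenSSL-Win64\lib\libssl.lib | findstr machine
```

The machine field in the output should match your selected target (x86 or x64).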

    Now you should be able to run Fluent Bit:
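A typical out-of-source CMake build followed by a smoke test might look like the following sketch (the generator, directory names, and the dummy/stdout plugin choice are illustrative):

```shell
cd fluent-bit
mkdir build
cd build
cmake .. -G "NMake Makefiles"
cmake --build .

:: Smoke test: generate dummy records and print them to the console
.\bin\fluent-bit.exe -i dummy -o stdout
```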

    hashtag
    Packaging

    To create a ZIP package, call cpack as follows:
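A minimal sketch, run from the build directory after a successful compilation:

```shell
cpack -G ZIP
```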

    Process exporter metrics

    A plugin based on Process Exporter to collect process-level system metrics.

    circle-info

    Supported event types: metrics

    Prometheus Node exporterarrow-up-right is a popular way to collect system level metrics from operating systems such as CPU, disk, network, and process statistics.

    Fluent Bit 2.2 and later includes a process exporter plugin that builds on the Prometheus design to collect process-level metrics without having to manage two separate processes or agents.

    The Process Exporter Metrics plugin collects the various metrics available from the third-party implementation of Prometheus Process Exporter, and these will be expanded over time as needed.

    circle-info

    All metrics, including those collected with this plugin, flow through a separate pipeline from logs, and current filters don't operate on top of metrics. This plugin is only supported on Linux-based operating systems, as it uses the proc filesystem to access the relevant metrics. macOS doesn't have the proc filesystem, so this plugin won't work on it.

    hashtag
    Configuration

    Key
    Description
    Default

    hashtag
    Available metrics

    Name
    Description

    hashtag
    Prometheus metric names

    The following tables describe the Prometheus metrics exposed by each collector.

    hashtag
    Process-level metrics

    Prometheus Metric
    Enabled by
    Type
    Description

    hashtag
    Thread-level metrics

    Prometheus Metric
    Enabled by
    Type
    Description

    hashtag
    Threading

    This input always runs in its own thread.

    hashtag
    Getting started

    hashtag
    Configuration file

    In the following configuration file, the input plugin process_exporter_metrics collects metrics every 2 seconds and exposes them through the prometheus_exporter output plugin on HTTP/TCP port 2021.

    You can see the metrics by using curl:

    hashtag
    Container to collect host metrics

    When deploying Fluent Bit in a container you will need to specify additional settings to ensure that Fluent Bit has access to the process details.

    The following docker command deploys Fluent Bit with a specific mount path for procfs and settings enabled to ensure that Fluent Bit can collect from the host. These are then exposed over port 2021.

    hashtag
    Enhancement requests

    Development prioritises a subset of the available collectors in the in_process_exporter_metrics plugin. To request others, open a GitHub issue by using the following template:

    Prometheus Scrape Metrics

    circle-info

    Supported event types: metrics

    Fluent Bit 1.9 and later includes additional metrics features to let you collect metrics from a Prometheus-based endpoint at a set interval. These metrics can be routed to metrics-supported endpoints such as Prometheus Exporter, InfluxDB, or Prometheus Remote Write.

    hashtag
    Configuration

    Key
    Description
    Default

    hashtag
    TLS configuration

    This plugin supports TLS/SSL for secure connections to HTTPS endpoints. For detailed TLS configuration options, refer to the TLS/SSL documentation section.

    hashtag
    Example

    If an endpoint exposes Prometheus Metrics you can specify the configuration to scrape and then output the metrics. The following example retrieves metrics from the HashiCorp Vault application.
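A configuration along the following lines scrapes the endpoint every 10 seconds and prints the metrics to standard output (Vault's address and metrics path are illustrative; adjust them to your deployment):

```yaml
pipeline:
  inputs:
    - name: prometheus_scrape
      host: 127.0.0.1
      port: 8200
      tag: vault
      metrics_path: /v1/sys/metrics?format=prometheus
      scrape_interval: 10s

  outputs:
    - name: stdout
      match: vault
```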

    This returns output similar to:

    Service

    The service section of YAML configuration files defines global properties of the Fluent Bit service. The available configuration keys are:

    Key
    Description
    Default Value

    HTTP

    circle-info

    Supported event types: logs

    The HTTP input plugin lets Fluent Bit open an HTTP port that you can then route data to in a dynamic way.
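For example, a minimal sketch that listens for HTTP POST requests and prints the received records (listen address and port are illustrative; check the plugin reference for its defaults):

```yaml
pipeline:
  inputs:
    - name: http
      listen: 0.0.0.0
      port: 9880

  outputs:
    - name: stdout
      match: '*'
```

You can then send a JSON record with a command like curl -d '{"key":"value"}' http://127.0.0.1:9880/app.log, where the URI path can be used as the record tag.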


    Blob

    circle-info

    Supported event types: logs

    The Blob input plugin monitors a directory and processes binary (blob) files. It scans the specified path at regular intervals, reads binary files, and forwards them as records through the Fluent Bit pipeline. This plugin processes binary log files, artifacts, or any binary data that needs to be collected and forwarded to outputs.

    Pipeline

    The pipeline section of YAML configuration files defines the flow of how data is collected, processed, and sent to its final destination. This section contains the following subsections:

    • inputs: Configures input plugins.

    • filters: Configures filters.
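A minimal pipeline section wiring one input to one output looks like the following sketch (the cpu input and stdout output are illustrative plugin choices):

```yaml
pipeline:
  inputs:
    - name: cpu
      tag: my_cpu

  outputs:
    - name: stdout
      match: my_cpu
```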

    [SERVICE]
      Flush           5
      Daemon          off
      Log_Level       debug
    [INPUT]
      Name cpu
      Tag  my_cpu
    [FILTER]
      Name  grep
      Match *
      Regex log aa
    [OUTPUT]
      Name  stdout
      Match my*cpu
    [SERVICE]
      Flush     5
      Daemon    off
      Log_Level debug
    
    [INPUT]
      Name  cpu
      Tag   my_cpu
    
    [OUTPUT]
      Name  stdout
      Match my*cpu
    @INCLUDE somefile.conf
    @INCLUDE input_*.conf
    pipeline:
      inputs:
        - name: kafka
          brokers: 192.168.1.3:9092
          topics: some-topic
          poll_ms: 100
    
      outputs:
        - name: stdout
          match: '*'
    [INPUT]
      Name     kafka
      Brokers  192.168.1.3:9092
      Topics   some-topic
      Poll_ms  100
    
    [OUTPUT]
      Name   stdout
      Match  *
    pipeline:
      inputs:
        - name: kafka
          brokers: kafka-broker:9092
          topics: fb-source
          poll_ms: 100
          format: json
    
      filters:
        - name: lua
          match: '*'
          script: kafka.lua
          call: modify_kafka_message
    
      outputs:
        - name: kafka
          brokers: kafka-broker:9092
          topics: fb-sink
    [INPUT]
      Name     kafka
      Brokers  kafka-broker:9092
      Topics   fb-source
      Poll_ms  100
      Format   json
    
    [FILTER]
      Name    lua
      Match   *
      Script  kafka.lua
      Call    modify_kafka_message
    
    [OUTPUT]
      Name    kafka
      Brokers kafka-broker:9092
      Topics  fb-sink
    fluent-bit -i kafka -o stdout -p brokers=192.168.1.3:9092 -p topics=some-topic
    pipeline:
      inputs:
        - name: kafka
          brokers: my-cluster.abcdef.c1.kafka.us-east-1.amazonaws.com:9098
          topics: my-topic
          aws_msk_iam: true
          aws_msk_iam_cluster_arn: arn:aws:kafka:us-east-1:123456789012:cluster/my-cluster/abcdef-1234-5678-9012-abcdefghijkl-s3
    
      outputs:
        - name: stdout
          match: '*'
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "VisualEditor0",
          "Effect": "Allow",
          "Action": [
            "kafka-cluster:*",
            "kafka-cluster:DescribeCluster",
            "kafka-cluster:ReadData",
            "kafka-cluster:DescribeTopic",
            "kafka-cluster:Connect"
          ],
          "Resource": "*"
        }
      ]
    }

    client_id

    Client id passed to librdkafka.

    none

    enable_auto_commit

    Rely on Kafka auto-commit and commit messages in batches.

    false

    format

    Serialization format of the messages. If set to json, the payload will be parsed as JSON.

    none

    group_id

    Group id passed to librdkafka.

    fluent-bit

    poll_ms

    Kafka brokers polling interval in milliseconds.

    500

    poll_timeout_ms

    Timeout in milliseconds for Kafka consumer poll operations. Only effective when threaded is enabled.

    1

    rdkafka.{property}

    {property} can be any librdkafka propertiesarrow-up-right.

    none

    threaded

    Indicates whether to run this input in its own thread.

    false

    topics

    Single entry or comma-separated list of topics that Fluent Bit will subscribe to.

    none

    process_exclude_pattern

    Regular expression to determine which names of processes are excluded from the metrics produced by this plugin. It's not applied unless explicitly set.

    NULL

    process_include_pattern

    Regular expression to determine which names of processes are included in the metrics produced by this plugin. Unless explicitly set, it applies to all processes.

    .+

    scrape_interval

    The rate, in seconds, at which metrics are collected.

    5

    memory

    Exposes memory statistics from /proc.

    start_time

    Exposes start_time statistics from /proc.

    state

    Exposes process state statistics from /proc.

    thread

    Exposes thread statistics from /proc.

    thread_wchan

    Exposes thread_wchan from /proc.

    process_cpu_seconds_total

    cpu

    counter

    CPU usage in seconds (user and system).

    process_fd_ratio

    fd

    gauge

    Ratio between open file descriptors and max limit.

    process_major_page_faults_total

    memory

    counter

    Major page faults.

    process_memory_bytes

    memory

    gauge

    Memory in use (virtual memory and RSS).

    process_minor_page_faults_total

    memory

    counter

    Minor page faults.

    process_num_threads

    thread

    gauge

    Number of threads.

    process_open_filedesc

    fd

    gauge

    Number of open file descriptors.

    process_read_bytes_total

    io

    counter

    Number of bytes read.

    process_start_time_seconds

    start_time

    gauge

    Start time in seconds since 1970/01/01.

    process_states

    state

    gauge

    Process state (R, S, D, Z, T, or I).

    process_write_bytes_total

    io

    counter

    Number of bytes written.

    process_thread_cpu_seconds_total

    thread

    counter

    Thread CPU usage in seconds (user and system).

    process_thread_io_bytes_total

    thread

    counter

    Thread I/O bytes (read and write).

    process_thread_major_page_faults_total

    thread

    counter

    Thread major page faults.

    process_thread_minor_page_faults_total

    thread

    counter

    Thread minor page faults.

    process_thread_wchan

    thread_wchan

    gauge

    Number of threads waiting on each wchan.

    metrics

    Specify which process-level metrics are collected from the host operating system. Actual metric values are read from /proc when needed. The context_switches, cpu, fd, io, memory, start_time, state, thread, and thread_wchan metrics depend on procfs.

    cpu,io,memory,state,context_switches,fd,start_time,thread_wchan,thread

    path.procfs

    The mount point used to collect process information and metrics. Read-only permissions are enough.

    /proc

    context_switches

    Exposes context_switches statistics from /proc.

    cpu

    Exposes CPU statistics from /proc.

    fd

    Exposes file descriptors statistics from /proc.

    io

    Exposes I/O statistics from /proc.

    process_context_switches_total

    context_switches

    counter

    Context switches (voluntary and non-voluntary).

    process_thread_context_switches_total

    thread

    counter

    Thread context switches (voluntary and non-voluntary).


    # Process Exporter Metrics + Prometheus Exporter
    # -------------------------------------------
    # The following example collects host metrics on Linux and exposes
    # them through a Prometheus HTTP endpoint.
    #
    # After starting the service try it with:
    #
    # $ curl http://127.0.0.1:2021/metrics
    #
    service:
      flush: 1
      log_level: info
    
    pipeline:
      inputs:
        - name: process_exporter_metrics
          tag:  process_metrics
          scrape_interval: 2
    
      outputs:
        - name: prometheus_exporter
          match: process_metrics
          host: 0.0.0.0
          port: 2021
    # Process Exporter Metrics + Prometheus Exporter
    # -------------------------------------------
    # The following example collects host metrics on Linux and exposes
    # them through a Prometheus HTTP endpoint.
    #
    # After starting the service try it with:
    #
    # $ curl http://127.0.0.1:2021/metrics
    #
    [SERVICE]
      flush           1
      log_level       info
    
    [INPUT]
      name            process_exporter_metrics
      tag             process_metrics
      scrape_interval 2
    
    [OUTPUT]
      name            prometheus_exporter
      match           process_metrics
      host            0.0.0.0
      port            2021
    curl http://127.0.0.1:2021/metrics
    docker run -ti -v /proc:/host/proc:ro \
                -p 2021:2021        \
                fluent/fluent-bit:2.2 \
                /fluent-bit/bin/fluent-bit \
                -i process_exporter_metrics \
                -p path.procfs=/host/proc  \
                -o prometheus_exporter \
                -f 1

  • Install OpenSSL binaries, at least the library files and headers.

  • Install Gitarrow-up-right to pull the source code from the repository.

  • Clone the source code of Fluent Bit.

  • Compile the source code.

  • fluent-bit-5.0.2-win32.exearrow-up-right

    11117fd08b65992d93edfd69fdc7735223c74b32a2c42c50e99edcd458b354caarrow-up-right

    fluent-bit-5.0.2-win32.ziparrow-up-right

    209c66a76e98632ea9b96e9db5a3889aee5e0f823ce3bef801fdee81f537a890arrow-up-right

    fluent-bit-5.0.2-win64.exearrow-up-right

    917467e1f3083e4ce172c9e38f49b2c547cdc2fa6386b97ef1960989c1aa7e6carrow-up-right

    fluent-bit-5.0.2-win64.ziparrow-up-right

    [SERVICE]
      # Flush
      # =====
      # set an interval of seconds before to flush records to a destination
      flush        5
    
      # Daemon
      # ======
      # instruct Fluent Bit to run in foreground or background mode.
      daemon       Off
    
      # Log_Level
      # =========
      # Set the verbosity level of the service, values can be:
      #
      # - error
      # - warning
      # - info
      # - debug
      # - trace
      #
      # by default 'info' is set, that means it includes 'error' and 'warning'.
      log_level    info
    
      # Parsers File
      # ============
      # specify an optional 'Parsers' configuration file
      parsers_file parsers.conf
    
      # Plugins File
      # ============
      # specify an optional 'Plugins' configuration file to load external plugins.
      plugins_file plugins.conf
    
      # HTTP Server
      # ===========
      # Enable/Disable the built-in HTTP Server for metrics
      http_server  Off
      http_listen  0.0.0.0
      http_port    2020
    
      # Storage
      # =======
      # Fluent Bit can use memory and filesystem buffering based mechanisms
      #
      # - https://docs.fluentbit.io/manual/administration/buffering-and-storage
      #
      # storage metrics
      # ---------------
      # publish storage pipeline metrics in '/api/v1/storage'. The metrics are
      # exported only if the 'http_server' option is enabled.
      #
      storage.metrics on
    
    [INPUT]
      Name         winlog
      Channels     Setup,Windows PowerShell
      Interval_Sec 1
    
    [OUTPUT]
      name  stdout
      match *
    fluent-bit-5.0.2-win32.msiarrow-up-right
    fluent-bit-5.0.2-win64.msiarrow-up-right
    fluent-bit-5.0.2-winarm64.msiarrow-up-right
    service:
      flush: 5
      daemon: off
      log_level: info
      parsers_file: parsers.yaml
      plugins_file: plugins.yaml
      http_server: off
      http_listen: 0.0.0.0
      http_port: 2020
      storage.metrics: on
    
    pipeline:
      inputs:
        - name: winlog
          channels: Setup,Windows PowerShell
          interval_sec: 1
    
      outputs:
        - name: stdout
          match: '*'


    host

    The host of the Prometheus metric endpoint to scrape.

    localhost

    http_passwd

    Set the password for HTTP basic authentication.

    ""

    http_user

    Set the username for HTTP basic authentication.

    none

    metrics_path

    The metrics URI endpoint, which must start with a forward slash (/). Parameters can be added to the path by using ?

    /metrics

    port

    The port of the Prometheus metric endpoint to scrape.

    9100

    scrape_interval

    The interval to scrape metrics.

    10s

    threaded

    Indicates whether to run this input in its own thread.

    false

    bearer_token

    Set the bearer token for authentication with the Prometheus endpoint.

    none

    buffer_max_size

    Set the maximum buffer size for the HTTP response.

    10M


    Specifies whether Fluent Bit should run as a daemon (background process). Possible values: yes, no, on, and off. Don't enable when using a Systemd-based unit, such as the one provided in Fluent Bit packages.

    off

    dns.mode

    Sets the primary transport layer protocol used by the asynchronous DNS resolver. Can be overridden on a per-plugin basis.

    UDP

    dns.prefer_ipv4

    If enabled, the DNS resolver prefers IPv4 results when resolving hostnames. Possible values: off or on.

    off

    dns.prefer_ipv6

    If enabled, the DNS resolver prefers IPv6 results when resolving hostnames. Possible values: off or on.

    off

    dns.resolver

    Sets the DNS resolver implementation. Possible values: LEGACY, ASYNC.

    none

    enable_chunk_trace

    If enabled, activates chunk tracing for debugging purposes. Requires Fluent Bit to be built with the FLB_HAVE_CHUNK_TRACE option. Possible values: off or on.

    off

    flush

    Sets the flush time in seconds.nanoseconds format. The engine loop uses a flush timeout to define when to flush the records ingested by input plugins through the defined output plugins.

    1

    grace

    Sets the grace time in seconds as an integer value. The engine loop uses a grace timeout to define the wait time on exit.

    5

    hc_errors_count

    Sets the number of errors that must occur within the health check period before the health check endpoint reports an unhealthy status.

    5

    hc_period

    Sets the health check evaluation period in seconds.

    60

    hc_retry_failure_count

    Sets the number of retry failures that must occur within the health check period before the health check endpoint reports an unhealthy status.

    5

    health_check

    If enabled, registers a health check endpoint on the built-in HTTP server. Requires http_server to be enabled. Possible values: off or on.

    off
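As a sketch, the health check endpoint can be enabled alongside the built-in HTTP server like this (threshold values shown are illustrative):

```yaml
service:
  http_server: on
  http_listen: 0.0.0.0
  http_port: 2020
  health_check: on
  hc_errors_count: 5
  hc_retry_failure_count: 5
  hc_period: 60
```

Once running, the status can be queried with a command such as curl http://127.0.0.1:2020/api/v1/health.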

    hot_reload

    Enables hot reloading of the configuration with SIGHUP.

    on

    hot_reload.ensure_thread_safety

    If enabled, ensures thread safety during configuration hot reload. Disabling this can reduce reload time but can cause instability. Possible values: off or on.

    on

    hot_reload.timeout

    Sets a watchdog timeout in seconds for the hot reload process. If the reload doesn't complete within this time, Fluent Bit cancels the reload. A value of 0 disables the watchdog.

    0

    http_listen

    Sets the listening interface for the HTTP Server when it's enabled.

    0.0.0.0

    http_port

    Sets the TCP port for the HTTP server.

    2020

    http_server

    Enables the built-in HTTP server.

    off

    json.convert_nan_to_null

    If enabled, NaN is converted to null when Fluent Bit converts msgpack to JSON.

    false

    json.escape_unicode

    Controls how Fluent Bit serializes non‑ASCII / multi‑byte Unicode characters in JSON strings. When enabled, Unicode characters are escaped as \uXXXX sequences (characters outside BMP become surrogate pairs). When disabled, Fluent Bit emits raw UTF‑8 bytes.

    true
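The two serialization modes can be illustrated with Python's json module, which exposes the same choice through its `ensure_ascii` flag (an analogy, not Fluent Bit code):

```python
import json

record = {"msg": "héllo"}

# Escaped mode: non-ASCII characters become \uXXXX sequences,
# similar to json.escape_unicode enabled (the default).
escaped = json.dumps(record, ensure_ascii=True)

# Raw mode: the serializer emits UTF-8 directly,
# similar to json.escape_unicode disabled.
raw = json.dumps(record, ensure_ascii=False)

print(escaped)  # {"msg": "h\u00e9llo"}
print(raw)      # {"msg": "héllo"}
```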

    log_file

    Absolute path for an optional log file. By default, all logs are redirected to the standard error interface (stderr).

    none

    log_level

    Sets the logging verbosity level. Possible values: off, error, warn, info, debug, and trace. Values are cumulative. For example, if debug is set, it also includes error, warn, and info.

    info

    multiline_buffer_limit

    Sets the default buffer size limit for multiline parsers. This value must follow the Unit Size specification.

    2MB

    parsers_file

    Path for a parsers configuration file. You can include one or more files.

    none

    plugins_file

    Path for a plugins configuration file. This file specifies the paths to external plugins (.so files) that Fluent Bit can load at runtime. Plugins can also be declared directly in the plugins section of YAML configuration files.

    none

    scheduler.base

    Sets the base of exponential backoff.

    5

    scheduler.cap

    Sets a maximum retry time in seconds.

    2000
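The two keys interact as a capped exponential backoff: the retry wait window roughly doubles on each attempt until scheduler.cap is reached. A minimal Python sketch of that growth (illustrative, not the scheduler's source):

```python
# Sketch: the retry wait is picked from a window whose upper bound grows
# exponentially from scheduler.base and is capped at scheduler.cap seconds.

def retry_window_upper_bound(base: int, cap: int, attempt: int) -> int:
    """Upper bound in seconds of the wait window for retry number `attempt`."""
    return min(base * (2 ** attempt), cap)

# With the defaults scheduler.base=5 and scheduler.cap=2000:
bounds = [retry_window_upper_bound(5, 2000, n) for n in range(1, 10)]
print(bounds)  # [10, 20, 40, 80, 160, 320, 640, 1280, 2000]
```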

    sp.convert_from_str_to_num

    If enabled, the stream processor converts strings that represent numbers to a numeric type.

    true

    streams_file

    Path for the stream processor configuration file. This file defines the rules and operations for stream processing in Fluent Bit. Stream processor configurations can also be defined directly in the streams section of YAML configuration files.

    none

    windows.maxstdio

    If specified, adjusts the maximum stdio limit. Only available on Windows. Values from 512 to 2048 are allowed.

    512

    The service section only controls the built-in monitoring and control HTTP server. Plugin-specific HTTP listener settings such as http_server.http2, http_server.buffer_max_size, http_server.buffer_chunk_size, http_server.max_connections, and http_server.workers are configured on the relevant input plugin in the pipeline.inputs section.

    hashtag
    Storage configuration

    The following storage-related keys can be set as children to the storage key:

    Key
    Description
    Default Value

    storage.backlog.flush_on_shutdown

    If enabled, Fluent Bit attempts to flush all backlog filesystem chunks to their destination during the shutdown process. This can help ensure data delivery before Fluent Bit stops, but can also increase shutdown time. Possible values: off or on.

    off

    storage.backlog.mem_limit

    Sets the memory allocated for storing buffered data for input plugins that use filesystem storage.

    5M
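For example, in a YAML configuration these keys are set inside the service section using dotted key names (a minimal sketch using the defaults shown above):

```yaml
service:
  flush: 1
  storage.backlog.mem_limit: 5M
  storage.backlog.flush_on_shutdown: off
```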

    For storage and buffering details, see Buffering and Backpressure.

    For scheduler and retry details, see Scheduling and retries.

    hashtag
    Configuration example

    The following configuration example defines a service section with hot reloading enabled and a pipeline with a random input and stdout output:

    coro_stack_size

    Sets the coroutines stack size in bytes. The value must be greater than the page size of the running system. Setting the value too small (for example, 4096) can cause coroutine threads to overrun the stack buffer. For best results, don't change this parameter from its default value.

    24576

    daemon

    If enabled, Fluent Bit runs in background (daemon) mode. Possible values: yes, no, on, and off. Don't enable this key when Fluent Bit runs under a process supervisor such as systemd.

    off

    Configuration parameters
    Key
    Description
    Default

    add_remote_addr

    Adds a REMOTE_ADDR field to the record. The value of REMOTE_ADDR is the client's address, which is extracted from the X-Forwarded-For header.

    false

    buffer_chunk_size

    This sets the chunk size for incoming JSON messages. These chunks are then stored and managed in the space available by buffer_max_size. Compatibility alias for http_server.buffer_chunk_size.

    512K

    hashtag
    TLS / SSL

    HTTP input plugin supports TLS/SSL. For more details about the properties available and general configuration, refer to Transport Security.

    hashtag
    gzipped content

    In Fluent Bit 2.2.1 or later, the HTTP input plugin accepts and automatically handles gzipped content when the Content-Encoding: gzip header is set on the received data.

    hashtag
    OAuth 2.0 JWT validation

    When oauth2.validate is set to true, the HTTP input plugin validates the Authorization: Bearer <token> header on every incoming request. Requests with a missing, expired, or invalid token are rejected with a 401 response.

    oauth2.issuer and oauth2.jwks_url are both required when validation is enabled. JWKS keys are fetched lazily: the first request that requires validation triggers the initial retrieval from oauth2.jwks_url. Keys are then cached and refreshed every oauth2.jwks_refresh_interval seconds.

    hashtag
    Get started

    This plugin supports dynamic tags which let you send data with different tags through the same input. See the following for an example:

    Link to video

    hashtag
    Set a tag

    The tag for the HTTP input plugin is set by adding the tag to the end of the request URL. This tag is then used to route the event through the system.

    For example, in the following curl message the tag set is app.log because the end path is /app.log:

    hashtag
    Add a remote address field

    The add_remote_addr configuration parameter, when activated, adds a REMOTE_ADDR field to the records. The value of REMOTE_ADDR is the client's address, which is extracted from the X-Forwarded-For header.

    In most cases, only a single X-Forwarded-For header is in the request, so the following curl would add a REMOTE_ADDR field which would be set to host1:

    However, if your system sets multiple X-Forwarded-For headers in the request, the one used (first or last) depends on the value of the http2 parameter. For example:

    Assuming the following X-Forwarded-For headers are in the request:

    The value of REMOTE_ADDR will be:

    http2 value

    REMOTE_ADDR value

    true (default)

    host3

    false

    host1

    hashtag
    Configuration file

    hashtag
    Configuration file http.0 example

    If you don't set the tag, http.0 is automatically used. If you have multiple HTTP inputs then they will follow a pattern of http.N where N is an integer representing the input.

    hashtag
    Set tag_key

    The tag_key configuration option lets you specify the key name that will be used to overwrite a tag. The tag's value will be replaced with the value associated with the specified key. For example, if you set tag_key to custom_tag and the log event contains a JSON field with the key custom_tag, Fluent Bit uses the value of that field as the new tag for routing the event through the system.

    hashtag
    Curl request

    hashtag
    Configuration file tag_key example

    hashtag
    Set add_remote_addr

    The add_remote_addr configuration option lets you activate a feature that systematically adds a REMOTE_ADDR field to events and sets its value to the client's address. The address is extracted from the X-Forwarded-For header of the request. The format is:

    hashtag
    Example curl to test this feature

    hashtag
    Set multiple custom HTTP headers on success

    The success_header parameter lets you set multiple HTTP headers on success. The format is:

    hashtag
    Example curl message

    hashtag
    Configuration file example 3

    hashtag
    Enable OAuth 2.0 JWT validation

    The following example enables JWT validation using a JWKS endpoint. All incoming requests must include a valid bearer token issued by the specified issuer.

    hashtag
    Command line

    hashtag
    Configuration parameters

    The plugin supports the following configuration parameters:

    Key
    Description
    Default

    alias

    Sets an alias for multiple instances of the same input plugin. This helps when you need to run multiple blob input instances with different configurations.

    none

    database_file

    Specify a database file to keep track of processed files and their state. This enables the plugin to resume processing from the last known position if Fluent Bit is restarted.

    none

    hashtag
    How it works

    The Blob input plugin periodically scans the specified directory path for binary files. When a new or modified file is detected, the plugin reads the file content and creates records that are forwarded through the Fluent Bit pipeline. The plugin can track processed files using a database file, allowing it to resume from the last known position after a restart.

    Binary file content is typically included in the output records, and the exact format depends on the output plugin configuration. The plugin generates one or more records per file, depending on the file size and configuration.

    hashtag
    Database file

    The database file enables the plugin to track which files have been processed and maintain state across Fluent Bit restarts. This is similar to how the Tail input plugin uses a database file.

    When a database file is specified:

    • The plugin stores information about processed files, including file paths and processing status

    • On restart, the plugin can skip files that were already processed

    • The database is backed by SQLite3 and will create additional files (.db-shm and .db-wal) when using write-ahead logging mode

    It's recommended to use a unique database file for each blob input instance to avoid conflicts. For example:

    hashtag
    Use cases

    Common use cases for the Blob input plugin include:

    • Binary log files: Processing binary-formatted log files that can't be read as text

    • Artifact collection: Collecting binary artifacts or build outputs for analysis or archival

    • File monitoring: Monitoring directories for new binary files and forwarding them to storage or analysis systems

    • Data pipeline integration: Integrating binary data sources into your Fluent Bit data pipeline

    hashtag
    Get started

    You can run the plugin from the command line or through a configuration file.

    hashtag
    Command line

    Run the plugin from the command line using the following command:

    which returns results like the following:

    hashtag
    Configuration file

    In your main configuration file, append the following:

    hashtag
    Examples

    hashtag
    Basic configuration with database tracking

    This example shows how to configure the blob plugin with a database file to track processed files:

    hashtag
    Configuration with file exclusion and storage

    This example excludes certain file patterns and uses filesystem storage for better reliability:

    hashtag
    Configuration with file actions after upload

    This example renames files after successful upload and handles failures:

    outputs: Configures output plugins.

    circle-info

    Unlike filters, processors and parsers aren't defined within a unified section of YAML configuration files and don't use tag matching. Instead, each input or output plugin defined in the configuration file can have a parsers key and a processors key to configure the parsers and processors for that specific plugin.

    hashtag
    Syntax

    The pipeline section of a YAML configuration file uses the following syntax:

    Each of the subsections for inputs, filters, and outputs constitutes an array of maps that has the parameters for each. Most properties are either strings or numbers and can be defined directly.

    For example:

    pipeline:
        inputs:
            - name: cpu
    
    circle-info

    It's possible to define multiple pipeline sections, but they won't operate independently. Instead, Fluent Bit merges all defined pipelines into a single pipeline internally.

    hashtag
    Inputs

    The inputs section defines one or more input plugins. In addition to the settings unique to each plugin, all input plugins support the following configuration parameters:

    Key
    Description

    name

    Name of the input plugin. Defined as subsection of the inputs section.

    tag

    Tag name associated to all records coming from this plugin.

    log_level

    Set the plugin's logging verbosity level. Allowed values are: off, error, warn, info, debug, and trace. Defaults to the service section's log_level.

    The name parameter is required and defines for Fluent Bit which input plugin should be loaded. The tag parameter is required for all plugins except for the forward plugin, which provides dynamic tags.

    hashtag
    Shared HTTP listener settings for inputs

    Some HTTP-based input plugins share the same listener implementation and support the following common settings in addition to their plugin-specific parameters:

    Key
    Description
    Default

    http_server.http2

    Enable HTTP/2 support for the input listener.

    true

    http_server.buffer_max_size

    Set the maximum size of the HTTP request buffer.

    4M

    For backward compatibility, some plugins also accept the legacy aliases http2, buffer_max_size, buffer_chunk_size, max_connections, and workers.
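For example, a sketch of an http input that tunes these shared listener settings (the values shown are illustrative):

```yaml
pipeline:
  inputs:
    - name: http
      listen: 0.0.0.0
      port: 8888
      http_server.http2: false
      http_server.buffer_max_size: 8M

  outputs:
    - name: stdout
      match: '*'
```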

    hashtag
    Incoming OAuth 2.0 JWT validation settings

    The HTTP-based input plugins that support bearer token validation share the following oauth2.* settings:

    Key
    Description
    Default

    oauth2.validate

    Enable OAuth 2.0 JWT validation for incoming requests.

    false

    oauth2.issuer

    Expected issuer (iss) claim. Required when oauth2.validate is true.

    none

    When validation is enabled, requests without a valid Authorization: Bearer <token> header are rejected.

    hashtag
    Example input configuration

    The following is an example of an inputs section that contains a cpu plugin.

    hashtag
    Filters

    The filters section defines one or more filters. In addition to the settings unique to each filter, all filters support the following configuration parameters:

    Key
    Description

    name

    Name of the filter plugin. Defined as a subsection of the filters section.

    match

    A pattern to match against the tags of incoming records. It's case-sensitive and supports the star (*) character as a wildcard.

    match_regex

    A regular expression to match against the tags of incoming records. Use this option if you want to use the full regular expression syntax.

    log_level

    Set the plugin's logging verbosity level. Allowed values are: off, error, warn, info, debug, and trace. Defaults to the service section's log_level.

    The name parameter is required and lets Fluent Bit know which filter should be loaded. One of either the match or match_regex parameters is required. If both are specified, match_regex takes precedence.

    hashtag
    Example filter configuration

    The following is an example of a filters section that contains a grep plugin:

    hashtag
    Outputs

    The outputs section defines one or more output plugins. In addition to the settings unique to each plugin, all output plugins support the following configuration parameters:

    Key
    Description

    name

    Name of the output plugin. Defined as a subsection of the outputs section.

    match

    A pattern to match against the tags of incoming records. It's case-sensitive and supports the star (*) character as a wildcard.

    match_regex

    A regular expression to match against the tags of incoming records. Use this option if you want to use the full regular expression syntax.

    log_level

    Set the plugin's logging verbosity level. Allowed values are: off, error, warn, info, debug, and trace. Defaults to the service section's log_level.

    Fluent Bit can route up to 256 output plugins.

    hashtag
    Outgoing OAuth 2.0 client credentials settings

    Output plugins that support outgoing OAuth 2.0 authentication can expose the following shared oauth2.* settings:

    Key
    Description
    Default

    oauth2.enable

    Enable OAuth 2.0 client credentials for outgoing requests.

    false

    oauth2.token_url

    Token endpoint URL.

    none
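As a sketch, an output enabling outgoing OAuth 2.0 with the two settings listed here, shown on the http output as an assumption (the token URL is a placeholder; real deployments typically also require client credential settings not shown in this excerpt):

```yaml
pipeline:
  outputs:
    - name: http
      match: '*'
      host: collector.example.com
      port: 443
      tls: on
      oauth2.enable: true
      oauth2.token_url: https://auth.example.com/oauth2/token
```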

    hashtag
    Example output configuration

    The following is an example of an outputs section that contains a stdout plugin:


    TLS

    Fluent Bit provides integrated support for Transport Layer Security (TLS) and its predecessor Secure Sockets Layer (SSL). This section refers only to TLS for both implementations.

    circle-info

    Fluent Bit 4.1.0 replaced Next Protocol Negotiation (NPN) with Application-Layer Protocol Negotiation (ALPN) in its TLS implementation. Both NPN and ALPN are used when a client and server establish an SSL/TLS connection. ALPN avoids an additional round trip because the client lists its supported application-layer protocols in the ClientHello message.

    Both input and output plugins that perform Network I/O can optionally enable TLS and configure the behavior. The following table describes the properties available:

    wget -o git.exe https://github.com/git-for-windows/git/releases/download/v2.28.0.windows.1/Git-2.28.0-64-bit.exe
    start git.exe
    git clone https://github.com/fluent/fluent-bit
    cd fluent-bit/build
    cmake .. -G "NMake Makefiles"
    cmake --build .
    Get-FileHash fluent-bit-5.0.2-win32.exe
    Expand-Archive fluent-bit-5.0.2-win64.zip
    fluent-bit
    ├── bin
    │   ├── fluent-bit.dll
    │   └── fluent-bit.exe
    │   └── fluent-bit.pdb
    ├── conf
    │   ├── fluent-bit.conf
    │   ├── parsers.conf
    │   └── plugins.conf
    └── include
        │   ├── flb_api.h
        │   ├── ...
        │   └── flb_worker.h
        └── fluent-bit.h
    fluent-bit.exe  -i dummy -o stdout
    
    ...
    [0] dummy.0: [1561684385.443823800, {"message"=>"dummy"}]
    [1] dummy.0: [1561684386.428399000, {"message"=>"dummy"}]
    [2] dummy.0: [1561684387.443641900, {"message"=>"dummy"}]
    [3] dummy.0: [1561684388.441405800, {"message"=>"dummy"}]
    & "C:\Program Files\fluent-bit\bin\fluent-bit.exe" -i dummy -o stdout
    <installer exe> /S /D=C:\fluent-bit
    C:\fluent-bit\
    ├── conf
    │   ├── fluent-bit.conf
    │   └── parsers.conf
    │   └── plugins.conf
    └── bin
        ├── fluent-bit.dll
        └── fluent-bit.exe
        └── fluent-bit.pdb
    sc.exe create fluent-bit binpath= "\fluent-bit\bin\fluent-bit.exe -c \fluent-bit\conf\fluent-bit.conf"
    sc.exe start fluent-bit
    sc.exe query fluent-bit
    SERVICE_NAME: fluent-bit
        TYPE               : 10  WIN32_OWN_PROCESS
        STATE              : 4 Running
        ...
    sc.exe stop fluent-bit
    sc.exe config fluent-bit start= auto
    New-Service fluent-bit -BinaryPathName "`"C:\Program Files\fluent-bit\bin\fluent-bit.exe`" -c `"C:\Program Files\fluent-bit\conf\fluent-bit.conf`"" -StartupType Automatic -Description "This service runs Fluent Bit, a log collector that enables real-time processing and delivery of log data to centralized logging systems."
    Start-Service fluent-bit
    get-Service fluent-bit | format-list
    Name                : fluent-bit
    DisplayName         : fluent-bit
    Status              : Running
    DependentServices   : {}
    ServicesDependedOn  : {}
    CanPauseAndContinue : False
    CanShutdown         : False
    CanStop             : True
    ServiceType         : Win32OwnProcess
    Stop-Service fluent-bit
    Remove-Service fluent-bit
    wget -o vs.exe https://aka.ms/vs/16/release/vs_buildtools.exe
    start vs.exe
    wget -o winflexbison.zip https://github.com/lexxmark/winflexbison/releases/download/v2.5.22/win_flex_bison-2.5.22.zip
    Expand-Archive winflexbison.zip -Destination C:\WinFlexBison
    cp -Path C:\WinFlexBison\win_bison.exe C:\WinFlexBison\bison.exe
    cp -Path C:\WinFlexBison\win_flex.exe C:\WinFlexBison\flex.exe
    fluent-bit.exe -i dummy -o stdout
    cpack -G ZIP
    fluent-bit.exe -i dummy -o stdout
    pipeline:
      inputs:
        - name: prometheus_scrape
          host: 0.0.0.0
          port: 8201
          tag: vault
          metrics_path: /v1/sys/metrics?format=prometheus
          scrape_interval: 10s
    
      outputs:
        - name: stdout
          match: '*'
    [INPUT]
      name prometheus_scrape
      host 0.0.0.0
      port 8201
      tag vault
      metrics_path /v1/sys/metrics?format=prometheus
      scrape_interval 10s
    
    [OUTPUT]
      name stdout
      match *
    ...
    2022-03-26T23:01:29.836663788Z go_memstats_alloc_bytes_total = 31891336
    2022-03-26T23:01:29.836663788Z go_memstats_frees_total = 313264
    2022-03-26T23:01:29.836663788Z go_memstats_lookups_total = 0
    2022-03-26T23:01:29.836663788Z go_memstats_mallocs_total = 378992
    2022-03-26T23:01:29.836663788Z process_cpu_seconds_total = 1.6200000000000001
    2022-03-26T23:01:29.836663788Z go_goroutines = 19
    2022-03-26T23:01:29.836663788Z go_info{version="go1.17.7"} = 1
    2022-03-26T23:01:29.836663788Z go_memstats_alloc_bytes = 12547800
    2022-03-26T23:01:29.836663788Z go_memstats_buck_hash_sys_bytes = 1468900
    2022-03-26T23:01:29.836663788Z go_memstats_gc_cpu_fraction = 8.1509688352783453e-06
    2022-03-26T23:01:29.836663788Z go_memstats_gc_sys_bytes = 5875576
    2022-03-26T23:01:29.836663788Z go_memstats_heap_alloc_bytes = 12547800
    2022-03-26T23:01:29.836663788Z go_memstats_heap_idle_bytes = 2220032
    2022-03-26T23:01:29.836663788Z go_memstats_heap_inuse_bytes = 14000128
    2022-03-26T23:01:29.836663788Z go_memstats_heap_objects = 65728
    2022-03-26T23:01:29.836663788Z go_memstats_heap_released_bytes = 2187264
    2022-03-26T23:01:29.836663788Z go_memstats_heap_sys_bytes = 16220160
    2022-03-26T23:01:29.836663788Z go_memstats_last_gc_time_seconds = 1648335593.2483871
    2022-03-26T23:01:29.836663788Z go_memstats_mcache_inuse_bytes = 2400
    2022-03-26T23:01:29.836663788Z go_memstats_mcache_sys_bytes = 16384
    2022-03-26T23:01:29.836663788Z go_memstats_mspan_inuse_bytes = 150280
    2022-03-26T23:01:29.836663788Z go_memstats_mspan_sys_bytes = 163840
    2022-03-26T23:01:29.836663788Z go_memstats_next_gc_bytes = 16586496
    2022-03-26T23:01:29.836663788Z go_memstats_other_sys_bytes = 422572
    2022-03-26T23:01:29.836663788Z go_memstats_stack_inuse_bytes = 557056
    2022-03-26T23:01:29.836663788Z go_memstats_stack_sys_bytes = 557056
    2022-03-26T23:01:29.836663788Z go_memstats_sys_bytes = 24724488
    2022-03-26T23:01:29.836663788Z go_threads = 8
    2022-03-26T23:01:29.836663788Z process_max_fds = 65536
    2022-03-26T23:01:29.836663788Z process_open_fds = 12
    2022-03-26T23:01:29.836663788Z process_resident_memory_bytes = 200638464
    2022-03-26T23:01:29.836663788Z process_start_time_seconds = 1648333791.45
    2022-03-26T23:01:29.836663788Z process_virtual_memory_bytes = 865849344
    2022-03-26T23:01:29.836663788Z process_virtual_memory_max_bytes = 1.8446744073709552e+19
    2022-03-26T23:01:29.836663788Z vault_runtime_alloc_bytes = 12482136
    2022-03-26T23:01:29.836663788Z vault_runtime_free_count = 313256
    2022-03-26T23:01:29.836663788Z vault_runtime_heap_objects = 65465
    2022-03-26T23:01:29.836663788Z vault_runtime_malloc_count = 378721
    2022-03-26T23:01:29.836663788Z vault_runtime_num_goroutines = 12
    2022-03-26T23:01:29.836663788Z vault_runtime_sys_bytes = 24724488
    2022-03-26T23:01:29.836663788Z vault_runtime_total_gc_pause_ns = 1917611
    2022-03-26T23:01:29.836663788Z vault_runtime_total_gc_runs = 19
    ...
    service:
      flush: 1
      log_level: info
      http_server: true
      http_listen: 0.0.0.0
      http_port: 2020
      hot_reload: on
    
    pipeline:
      inputs:
        - name: random
    
      outputs:
        - name: stdout
          match: '*'
    pipeline:
      inputs:
        - name: http
          listen: 0.0.0.0
          port: 8888
    
      outputs:
        - name: stdout
          match: app.log
    [INPUT]
      Name   http
      Listen 0.0.0.0
      Port   8888
    
    [OUTPUT]
      Name  stdout
      Match app.log
    pipeline:
      inputs:
        - name: http
          listen: 0.0.0.0
          port: 8888
    
      outputs:
        - name: stdout
          match: http.0
    [INPUT]
      Name   http
      Listen 0.0.0.0
      Port   8888
    
    [OUTPUT]
      Name  stdout
      Match http.0
    pipeline:
      inputs:
        - name: http
          listen: 0.0.0.0
          port: 8888
          tag_key: key1
    
      outputs:
        - name: stdout
          match: value1
    [INPUT]
      Name    http
      Listen  0.0.0.0
      Port    8888
      Tag_Key key1
    
    [OUTPUT]
      Name  stdout
      Match value1
    pipeline:
      inputs:
        - name: http
          listen: 0.0.0.0
          port: 8888
          add_remote_addr: true
      outputs:
        - name: stdout
    [INPUT]
      Name http
      Listen 0.0.0.0
      Port 8888
      Add_Remote_Addr true
    
    [OUTPUT]
      Name stdout
    pipeline:
      inputs:
        - name: http
          success_header:
            - X-Custom custom-answer
            - X-Another another-answer
    [INPUT]
      Name           http
      Success_Header X-Custom custom-answer
      Success_Header X-Another another-answer
    pipeline:
      inputs:
        - name: http
          listen: 0.0.0.0
          port: 8888
    
      outputs:
        - name: stdout
          match: '*'
    [INPUT]
      Name   http
      Listen 0.0.0.0
      Port   8888
    
    [OUTPUT]
      Name  stdout
      Match *
    pipeline:
      inputs:
        - name: http
          listen: 0.0.0.0
          port: 8888
          oauth2.validate: true
          oauth2.issuer: https://auth.example.com
          oauth2.jwks_url: https://auth.example.com/.well-known/jwks.json
          oauth2.allowed_audience: my-service
          oauth2.jwks_refresh_interval: 300
    
      outputs:
        - name: stdout
          match: '*'
    [INPUT]
      Name                      http
      Listen                    0.0.0.0
      Port                      8888
      Oauth2.validate           true
      Oauth2.issuer             https://auth.example.com
      Oauth2.jwks_url           https://auth.example.com/.well-known/jwks.json
      Oauth2.allowed_audience   my-service
      Oauth2.jwks_refresh_interval 300
    
    [OUTPUT]
      Name  stdout
      Match *
    curl -d '{"key1":"value1","key2":"value2"}' -XPOST -H "content-type: application/json" http://localhost:8888/app.log
    curl -d '{"key1":"value1"}' -XPOST -H 'Content-Type: application/json' -H 'X-Forwarded-For: host1, host2' http://localhost:8888
    X-Forwarded-For: host1, host2
    X-Forwarded-For: host3, host4
    curl -d '{"key1":"value1","key2":"value2"}' -XPOST -H "content-type: application/json" http://localhost:8888
    curl -d '{"key1":"value1","key2":"value2"}' -XPOST -H "content-type: application/json" http://localhost:8888/app.log
    curl -d '{"key1":"value1"}' -XPOST -H 'Content-Type: application/json' -H 'X-Forwarded-For: host1, host2' http://localhost:8888
    curl -d @app.log -XPOST -H "content-type: application/json" http://localhost:8888/app.log
    fluent-bit -i http -p port=8888 -o stdout
    pipeline:
      inputs:
        - name: blob
          path: '/path/to/binary/files/*.bin'
    
      outputs:
        - name: stdout
          match: '*'
    [INPUT]
      Name    blob
      Path    /path/to/binary/files/*.bin
    
    [OUTPUT]
      Name   stdout
      Match  *
    pipeline:
      inputs:
        - name: blob
          path: /var/log/binaries/*.bin
          database_file: /var/lib/fluent-bit/blob.db
          scan_refresh_interval: 10s
          tag: blob.files
    
      outputs:
        - name: stdout
          match: '*'
    [INPUT]
      Name                blob
      Path                /var/log/binaries/*.bin
      Database_File       /var/lib/fluent-bit/blob.db
      Scan_Refresh_Interval 10s
      Tag                 blob.files
    
    [OUTPUT]
      Name   stdout
      Match  *
    pipeline:
      inputs:
        - name: blob
          path: /data/artifacts/**/*
          exclude_pattern: '*.tmp,*.bak,*.old'
          storage.type: filesystem
          storage.pause_on_chunks_overlimit: true
          mem_buf_limit: 50M
          tag: artifacts
    
      outputs:
        - name: stdout
          match: '*'
    [INPUT]
      Name                        blob
      Path                        /data/artifacts/**/*
      Exclude_Pattern            *.tmp,*.bak,*.old
      Storage.Type                filesystem
      Storage.Pause_On_Chunks_Overlimit true
      Mem_Buf_Limit              50M
      Tag                         artifacts
    
    [OUTPUT]
      Name   stdout
      Match  *
    pipeline:
      inputs:
        - name: blob
          path: /var/log/binaries/*.bin
          database_file: /var/lib/fluent-bit/blob.db
          upload_success_action: add_suffix
          upload_success_suffix: .processed
          upload_failure_action: add_suffix
          upload_failure_suffix: .failed
          tag: blob.data
    
      outputs:
        - name: stdout
          match: '*'
    [INPUT]
      Name                  blob
      Path                  /var/log/binaries/*.bin
      Database_File         /var/lib/fluent-bit/blob.db
      Upload_Success_Action add_suffix
      Upload_Success_Suffix .processed
      Upload_Failure_Action add_suffix
      Upload_Failure_Suffix .failed
      Tag                   blob.data
    
    [OUTPUT]
      Name   stdout
      Match  *
    pipeline:
      inputs:
        - name: blob
          path: /var/log/binaries/*.bin
          database_file: /var/lib/fluent-bit/blob.db
    fluent-bit -i blob --prop "path=[SOME_PATH_TO_BINARY_FILES]" -o stdout
    ...
    [2025/11/05 17:39:32.818356000] [ info] [input:blob:blob.0] initializing
    [2025/11/05 17:39:32.818362000] [ info] [input:blob:blob.0] storage_strategy='memory' (memory only)
    ...
    pipeline:
        inputs:
            ...
        filters:
            ...
        outputs:
            ...
    pipeline:
        inputs:
            - name: cpu
              tag: my_cpu
    pipeline:
        filters:
            - name: grep
              match: '*'
              regex: log aa
    pipeline:
        outputs:
            - name: stdout
              match: 'my*cpu'

    buffer_max_size

    Specify the maximum buffer size to receive a JSON message. Compatibility alias for http_server.buffer_max_size.

    4M

    http2

    Enable HTTP/2 support. Compatibility alias for http_server.http2.

    true

    http_server.max_connections

    Maximum number of concurrent active HTTP connections. 0 means unlimited.

    0

    http_server.workers

    Number of HTTP listener worker threads.

    1

    listen

    The address to listen on.

    0.0.0.0

    oauth2.allowed_audience

    Audience claim to enforce when validating incoming OAuth 2.0 JWT tokens.

    none

    oauth2.allowed_clients

    Authorized client_id or azp claim values. Can be specified multiple times.

    none

    oauth2.issuer

    Expected issuer (iss) claim. Required when oauth2.validate is true.

    none

    oauth2.jwks_refresh_interval

    How often in seconds to refresh the cached JWKS keys from oauth2.jwks_url.

    300

    oauth2.jwks_url

    JWKS endpoint URL used to fetch public keys for JWT validation. Required when oauth2.validate is true.

    none

    oauth2.validate

    Enable OAuth 2.0 JWT validation for incoming requests.

    false

    port

    The port for Fluent Bit to listen on.

    9880

    remote_addr_key

    Key name for the remote address field added to the record when add_remote_addr is enabled.

    REMOTE_ADDR

    success_header

    Add an HTTP header key/value pair on success. Multiple headers can be set. For example, X-Custom custom-answer.

    none

    successful_response_code

    Sets the HTTP response code returned on success. Supported values: 200, 201, and 204.

    201

    tag_key

    Specify the key name to overwrite a tag. If set, the tag will be overwritten by the value of the key.

    none

    threaded

    Indicates whether to run this input in its own thread.

    false

    exclude_pattern

    Set one or multiple shell patterns separated by commas to exclude files matching certain criteria. For example, exclude_pattern *.tmp,*.bak will exclude temporary and backup files from processing.

    none

    log_level

    Specifies the log level for this input plugin. If not set here, the plugin uses the global log level specified in the service section. Valid values: off, error, warn, info, debug, trace.

    info

    log_suppress_interval

    Suppresses log messages from this input plugin that appear similar within a specified time interval. Set to 0 to disable suppression. The value must be specified in seconds. This helps reduce log noise when the same error or warning occurs repeatedly.

    0

    mem_buf_limit

    Set a memory buffer limit for the input plugin instance in bytes. If the limit is reached, the plugin will pause until the buffer is drained. If set to 0, the buffer limit is disabled. If the plugin has enabled filesystem buffering, this limit won't apply. The value must be set according to the Unit Size specification.

    0

    path

    Path to scan for blob (binary) files. Supports wildcards and glob patterns. For example, /var/log/binaries/*.bin or /data/artifacts/**/*.dat. This is a required parameter.

    none

    routable

    If true, the data generated by the plugin can be forwarded to other plugins or outputs. If false, the data will be discarded. This is used for testing or when you want to process data but not forward it.

    true

    scan_refresh_interval

    Set the interval time to scan for new files. The plugin periodically scans the specified path for new or modified files. The value must be specified according to the Unit Size specification (2s, 30m, 1h).

    2s

    storage.pause_on_chunks_overlimit

    Enable pausing on an input when it reaches its chunks limit. When enabled, the plugin will pause processing if the number of chunks exceeds the limit, preventing memory issues during backpressure scenarios.

    false

    storage.type

    Sets the storage type for this input. Options: filesystem (persists data to disk), memory (stores data in memory only), or memrb (memory ring buffer). For production environments with high data volumes, consider using filesystem to prevent data loss during restarts.

    memory

    tag

    Set a tag for the events generated by this input plugin. Tags are used for routing records to specific outputs. Supports tag expansion with wildcards.

    none

    threaded

    Indicates whether to run this input in its own thread. When enabled, the plugin runs in a separate thread, which can improve performance for I/O-bound operations.

    false

    thread.ring_buffer.capacity

    Set custom ring buffer capacity when the input runs in threaded mode. This determines how many records can be buffered in the ring buffer before blocking.

    1024

    thread.ring_buffer.window

    Set custom ring buffer window percentage for threaded inputs. This controls when the ring buffer is considered "full" and triggers backpressure handling.

    5

    upload_failure_action

    Action to perform on the file after upload failure. Supported values: delete (delete the file), add_suffix (rename file by appending a suffix), emit_log (emit a log record with a custom message). When set to add_suffix, use upload_failure_suffix to specify the suffix. When set to emit_log, use upload_failure_message to specify the message.

    none

    upload_failure_message

    Message to emit as a log record after upload failure. Only used when upload_failure_action is set to emit_log. This can be used for debugging or monitoring purposes.

    none

    upload_failure_suffix

    Suffix to append to the filename after upload failure. Only used when upload_failure_action is set to add_suffix. For example, if set to .failed, a file named data.bin will be renamed to data.bin.failed.

    none

    upload_success_action

    Action to perform on the file after successful upload. Supported values: delete (delete the file), add_suffix (rename file by appending a suffix), emit_log (emit a log record with a custom message). When set to add_suffix, use upload_success_suffix to specify the suffix. When set to emit_log, use upload_success_message to specify the message.

    none

    upload_success_message

    Message to emit as a log record after successful upload. Only used when upload_success_action is set to emit_log. This can be used for debugging or monitoring purposes.

    none

    upload_success_suffix

    Suffix to append to the filename after successful upload. Only used when upload_success_action is set to add_suffix. For example, if set to .processed, a file named data.bin will be renamed to data.bin.processed.

    none
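As a sketch of how the scan and upload-action properties above combine, the following hypothetical configuration watches a directory, renames files after a successful upload, and emits a log record on failure. The path is a placeholder, and blob ingestion is typically paired with an output that supports blob files, such as azure_blob (whose account settings are omitted here):

```yaml
pipeline:
  inputs:
    - name: blob
      tag: blobs
      path: /data/artifacts/*.bin
      scan_refresh_interval: 2s
      upload_success_action: add_suffix
      upload_success_suffix: .processed
      upload_failure_action: emit_log
      upload_failure_message: 'blob upload failed'

  outputs:
    - name: azure_blob
      match: blobs
      # account_name, shared_key, and related settings omitted in this sketch
```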

    pipeline:
      inputs:
        - name: tail
          path: /var/log/example.log
          parser: json
          processors:
            logs:
              - name: record_modifier

      filters:
        - name: grep
          match: '*'
          regex: key pattern

      outputs:
        - name: stdout
          match: '*'

    http_server.buffer_chunk_size

    Set the allocation chunk size used for the HTTP request buffer.

    512K

    http_server.max_connections

    Set the maximum number of concurrent active HTTP connections. 0 means unlimited.

    0

    http_server.workers

    Set the number of HTTP listener worker threads.

    1

    oauth2.jwks_url

    JWKS endpoint URL used to fetch public keys for token validation. Required when oauth2.validate is true.

    none

    oauth2.allowed_audience

    Audience claim to enforce when validating tokens.

    none

    oauth2.allowed_clients

    Authorized client_id or azp claim values. This key can be specified multiple times.

    none

    oauth2.jwks_refresh_interval

    How often in seconds to refresh cached JWKS keys.

    300

    Set the plugin's logging verbosity level. Allowed values are: off, error, warn, info, debug, and trace. The output log level defaults to the service section's log_level.

    oauth2.client_id

    Client ID.

    none

    oauth2.client_secret

    Client secret.

    none

    oauth2.scope

    Optional scope parameter.

    none

    oauth2.audience

    Optional audience parameter.

    none

    oauth2.resource

    Optional resource parameter.

    none

    oauth2.auth_method

    Client authentication method. Supported values: basic, post, private_key_jwt.

    basic

    oauth2.jwt_key_file

    PEM private key file used with private_key_jwt.

    none

    oauth2.jwt_cert_file

    Certificate file used to derive the kid or x5t header value for private_key_jwt.

    none

    oauth2.jwt_aud

    Audience to use in private_key_jwt assertions. Defaults to oauth2.token_url when unset.

    none

    oauth2.jwt_header

    JWT header claim name used for the thumbprint. Supported values: kid, x5t.

    kid

    oauth2.jwt_ttl_seconds

    Lifetime in seconds for private_key_jwt client assertions.

    300

    oauth2.refresh_skew_seconds

    Seconds before expiry at which to refresh the access token.

    60

    oauth2.timeout

    Timeout for token requests.

    0s

    oauth2.connect_timeout

    Connect timeout for token requests.

    0s
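A minimal sketch combining these client-side OAuth 2.0 properties. The output plugin name, endpoint host, and credentials are placeholders, and oauth2.token_url is the token endpoint referenced by the properties above:

```yaml
pipeline:
  outputs:
    - name: http                          # placeholder output plugin
      match: '*'
      host: collector.example.com         # placeholder host
      port: 443
      tls: on
      oauth2.token_url: https://auth.example.com/oauth2/token  # placeholder
      oauth2.client_id: my-client-id
      oauth2.client_secret: my-client-secret
      oauth2.auth_method: basic
```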

    The trace mode is only available if Fluent Bit was built with the WITH_TRACE option enabled.

    storage.checksum

    Enables data integrity check when writing and reading data from the filesystem. The storage layer uses the CRC32 algorithm. Possible values: off or on.

    off

    storage.delete_irrecoverable_chunks

    If enabled, deletes irrecoverable chunks during runtime and at startup. Possible values: off or on.

    off

    storage.inherit

    If enabled, input plugins that don't explicitly set storage.type will inherit the global storage.type value. Possible values: off or on.

    off

    storage.keep.rejected

    If enabled, the dead letter queue stores failed chunks that can't be delivered. Possible values: off or on.

    off

    storage.max_chunks_up

    Sets the number of chunks that can be up in memory for input plugins that use filesystem storage.

    128

    storage.metrics

    If the http_server option is enabled in the main service section, this option registers a new endpoint where internal metrics of the storage layer can be consumed. For more details, see Monitoring. Possible values: off or on.

    off

    storage.path

    Sets a location to store streams and chunks of data. If this parameter isn't set, input plugins can't use filesystem buffering.

    none

    storage.rejected.path

    Sets the subdirectory name under storage.path for storing rejected chunks in the dead letter queue.

    rejected

    storage.sync

    Configures the synchronization mode used to store data in the file system. Using full increases the reliability of the filesystem buffer and ensures that data is guaranteed to be synced to the filesystem even if Fluent Bit crashes. On Linux, full corresponds with the MAP_SYNC option for memory mapped files. Possible values: normal, full.

    normal

    storage.trim_files

    If enabled, Fluent Bit trims chunk files in the filesystem to reclaim disk space after data is flushed. Possible values: off or on.

    off

    storage.type

    Sets the default storage type for input plugins. Used in conjunction with storage.inherit to apply this type to all inputs that don't explicitly set their own storage.type. Possible values: memory, filesystem, memrb.

    none
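Combining the options above, a service section that enables filesystem buffering might look like this sketch. The storage path and the choice of input are illustrative:

```yaml
service:
  flush: 1
  storage.path: /var/log/flb-storage/
  storage.sync: normal
  storage.checksum: off
  storage.max_chunks_up: 128

pipeline:
  inputs:
    - name: cpu
      storage.type: filesystem

  outputs:
    - name: stdout
      match: '*'
```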

    Property
    Description
    Default

    tls

    Enable or disable TLS support.

    off

    tls.debug

    Set TLS debug verbosity level. Accepted values: 0 (No debug), 1 (Error), 2 (State change), 3 (Informational) and 4 (Verbose).

    1

    tls.ca_file

    Absolute path to CA certificate file.

    none

    tls.ca_path

    Absolute path to scan for certificate files.

    none

    To use TLS on input plugins, you must provide both a certificate and a private key.

    The listed properties can be enabled in the configuration file, specifically in each output plugin section or directly through the command line.

    The following output plugins can take advantage of the TLS feature:

    • Amazon S3

    • Apache SkyWalking

    • Azure

    The following input plugins can take advantage of the TLS feature:

    • Docker Events

    • Elasticsearch (Bulk API)

    • Forward

    In addition, other plugins implement a subset of TLS support, with restricted configuration:

    • Kubernetes Filter

    Example: enable TLS on HTTP input

    By default, the HTTP input plugin uses plain TCP. Run the following command to enable TLS:


    See the Tips and Tricks section for details on generating the self_signed.crt and self_signed.key files shown in these examples.

    In the previous command, the two properties tls and tls.verify are set for demonstration purposes. Always enable verification in production environments.

    The same behavior can be accomplished using a configuration file:

    Example: enable TLS on HTTP output

    By default, the HTTP output plugin uses plain TCP. Run the following command to enable TLS:

    In the previous command, the properties tls and tls.verify are enabled for demonstration purposes. Always enable verification in production environments.

    The same behavior can be accomplished using a configuration file:

    Tips and tricks

    Generate a self-signed certificate for testing purposes

    The following command generates a 4096-bit RSA key pair and a certificate that's signed using SHA-256 with the expiration date set to 30 days in the future. In this example, test.host.net is set as the common name. This example opts out of DES, so the private key is stored in plain text.

    Connect to virtual servers using TLS

    Fluent Bit supports TLS server name indication (SNI). If you are serving multiple host names on a single IP address (for example, using virtual hosting), you can make use of tls.vhost to connect to a specific hostname.

    Verify subjectAltName

    TLS verification of host names isn't done automatically by default. As an example, you can extract the X509v3 Subject Alternative Name from a certificate:
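A sketch of one way to inspect this with openssl. The file names and common name are placeholders, and the -addext and -ext options require OpenSSL 1.1.1 or later:

```shell
# Generate a certificate with a subjectAltName entry (placeholder values),
# then print only the X509v3 Subject Alternative Name extension.
openssl req -x509 -newkey rsa:2048 -nodes \
        -keyout alt-name.key -out fluent-x509v3-alt-name.crt \
        -subj "/CN=my.fluent-aggregator.net" \
        -addext "subjectAltName=DNS:my.fluent-aggregator.net" \
        -days 30
openssl x509 -in fluent-x509v3-alt-name.crt -noout -ext subjectAltName
```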

    This certificate covers only my.fluent-aggregator.net, so connecting with a different hostname should fail.

    To fully verify the alternative name and demonstrate the failure, enable tls.verify_hostname:

    This outgoing connection will fail and disconnect:

    pipeline:
      inputs:
        - name: http
          port: 9999
          tls: on
          tls.verify: off
          tls.crt_file: self_signed.crt
          tls.key_file: self_signed.key
    
      outputs:
        - name: stdout
          match: '*'
    [INPUT]
      name http
      port 9999
      tls on
      tls.verify off
      tls.crt_file self_signed.crt
      tls.key_file self_signed.key
    
    [OUTPUT]
      Name       stdout
      Match      *
    pipeline:
      inputs:
        - name: cpu
          tag: cpu
    
      outputs:
        - name: http
          match: '*'
          host: 192.168.2.3
          port: 80
          uri: /something
          tls: on
          tls.verify: off
    [INPUT]
      Name  cpu
      Tag   cpu
    
    [OUTPUT]
      Name       http
      Match      *
      Host       192.168.2.3
      Port       80
      URI        /something
      tls        on
      tls.verify off
    pipeline:
      inputs:
        - name: cpu
          tag: cpu
    
      outputs:
        - name: forward
          match: '*'
          host: 192.168.10.100
          port: 24224
          tls: on
          tls.verify: on
          tls.ca_file: '/etc/certs/fluent.crt'
          tls.vhost: 'fluent.example.com'
    [INPUT]
      Name  cpu
      Tag   cpu
    
    [OUTPUT]
      Name        forward
      Match       *
      Host        192.168.10.100
      Port        24224
      tls         on
      tls.verify  on
      tls.ca_file /etc/certs/fluent.crt
      tls.vhost   fluent.example.com
    pipeline:
      inputs:
        - name: cpu
          tag: cpu
    
      outputs:
        - name: forward
          match: '*'
          host: other.fluent-aggregator.net
          port: 24224
          tls: on
          tls.verify: on
          tls.verify_hostname: on
          tls.ca_file: '/path/to/fluent-x509v3-alt-name.crt'
    [INPUT]
      Name  cpu
      Tag   cpu
    
    [OUTPUT]
      Name                forward
      Match               *
      Host                other.fluent-aggregator.net
      Port                24224
      tls                 on
      tls.verify          on
      tls.verify_hostname on
      tls.ca_file         /path/to/fluent-x509v3-alt-name.crt
    fluent-bit -i http \
               -p port=9999 \
               -p tls=on \
               -p tls.verify=off \
               -p tls.crt_file=self_signed.crt \
               -p tls.key_file=self_signed.key \
               -o stdout \
               -m '*'
    fluent-bit -i cpu -t cpu -o http://192.168.2.3:80/something \
               -p tls=on         \
               -p tls.verify=off \
               -m '*'
    openssl req -x509 \
                -newkey rsa:4096 \
                -sha256 \
                -nodes \
                -keyout self_signed.key \
                -out self_signed.crt \
                -subj "/CN=test.host.net"
    X509v3 Subject Alternative Name:
        DNS:my.fluent-aggregator.net
    [2024/06/17 16:51:31] [error] [tls] error: unexpected EOF with reason: certificate verify failed
    [2024/06/17 16:51:31] [debug] [upstream] connection #50 failed to other.fluent-aggregator.net:24224
    [2024/06/17 16:51:31] [error] [output:forward:forward.0] no upstream connections available


    tls.ciphers

    Specify TLS ciphers up to TLSv1.2.

    none

    tls.crt_file

    Absolute path to Certificate file.

    none

    tls.key_file

    Absolute path to private Key file.

    none

    tls.key_passwd

    Optional password for tls.key_file file.

    none

    tls.max_version

    Specify the maximum version of TLS.

    none

    tls.min_version

    Specify the minimum version of TLS.

    none

    tls.verify

    Force certificate validation.

    on

    tls.vhost

    Hostname to be used for TLS SNI extension.

    none

    tls.verify_hostname

    Force TLS verification of host names.

    off

    tls.verify_client_cert

    Require and verify the TLS certificate presented by a connecting client, enabling mutual TLS (mTLS). Only applies to input plugins.

    off

    Azure Blob
    Azure Data Explorer (Kusto)
    Azure Logs Ingestion API
    BigQuery
    Dash0
    Datadog
    Elasticsearch
    Forward
    GELF
    Google Chronicle
    HTTP
    InfluxDB
    Kafka REST Proxy
    LogDNA
    Loki
    New Relic
    OpenSearch
    OpenTelemetry
    Oracle Cloud Infrastructure Logging Analytics
    Prometheus Remote Write
    Slack
    Splunk
    Stackdriver
    Syslog
    TCP and TLS
    Treasure Data
    WebSocket
    Health
    HTTP
    Kubernetes Events
    MQTT
    NGINX Exporter Metrics
    OpenTelemetry
    Prometheus Scrape Metrics
    Prometheus Remote Write
    Splunk (HTTP HEC)
    Syslog
    TCP

    Troubleshooting

    • Dead letter queue: preserve failed chunks

    • Tap: generate events or records

    • Dump internals signal

    Dead letter queue

    The Dead Letter Queue (DLQ) feature preserves chunks that fail to be delivered to output destinations. This enables troubleshooting delivery failures without losing data.

    Enable dead letter queue

    To enable the DLQ, add the following to your Service section:
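In YAML format, using the storage keys documented in this section, the Service section looks like this:

```yaml
service:
  storage.path: /var/log/flb-storage/
  storage.keep.rejected: on
  storage.rejected.path: rejected
```

The classic-mode equivalent uses the same keys in the [SERVICE] section.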

    What gets stored

    Chunks are copied to the DLQ when:

    • An output plugin returns an unrecoverable error.

    • A chunk exhausts all configured retry attempts.

    • Retries are disabled (retry_limit: no_retries) and the flush fails.

    • The scheduler fails to schedule a retry.

    Examine dead letter queue files

    DLQ files are stored in the configured path (for example, /var/log/flb-storage/rejected/) with names that include the tag, status code, and output plugin name. This helps identify which records failed and why.

    For example, a file named kube_var_log_containers_test_400_http_0x7f8b4c.flb indicates a chunk with tag kube.var.log.containers.test that failed with status code 400 when sending to the http output.

    Dead letter queue management


    DLQ files remain on disk until manually removed. Monitor disk usage and implement a cleanup policy.

    For more details on DLQ configuration, see .

    Tap

    Tap can be used to generate events or records detailing which messages pass through Fluent Bit, at what time, and which filters affect them.

    Tap example

    Ensure that the container image supports Fluent Bit Tap (available in Fluent Bit 2.0+):

    If the --enable-chunk-trace option is present, your Fluent Bit version supports Fluent Bit Tap, but it's disabled by default. Use this option to enable it.

    You can start Fluent Bit with tracing activated from the beginning by using the --trace-input and --trace-output options:

    The following warning indicates the -Z or --enable-chunk-trace option is missing:

    Set properties for the output using the --trace-output-property option:

    With that option set, the stdout plugin emits traces in json_lines format:

    All three options can also be defined using the more flexible --trace option:

    This example defines the Tap pipeline using the configuration input=dummy.0 output=stdout output.format=json_lines, which defines the following:

    • input: dummy.0 listens to the tag or alias dummy.0.

    • output: stdout outputs to a stdout plugin.

    • output.format: json_lines sets the stdout format to json_lines.

    Tap support can also be activated and deactivated using the embedded web server:

    In another terminal, activate Tap by either using the instance id of the input (dummy.0) or its alias. The alias is more predictable, and is used here:
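A sketch of the activation request, assuming the built-in HTTP server is enabled on its default port 2020. The alias my_dummy is a placeholder; the instance id, such as dummy.0, can be used instead:

```shell
# Activate Tap for the input identified by its alias (placeholder name)
curl 127.0.0.1:2020/api/v1/trace/my_dummy
```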

    This response means Tap is active. The terminal with Fluent Bit running should now look like this:

    All the records that display are those emitted by the activities of the dummy plugin.

    Tap example (complex)

    This example takes the same steps but demonstrates how the mechanism works with more complicated configurations.

    This example follows a single input, out of many, and which passes through several filters.

    To ensure the window isn't cluttered by the records generated by the input plugins, send all of it to null.

    Activate with the following curl command:

    You should start seeing output similar to the following:

    Parameters for the output in Tap

    When activating Tap, any plugin parameter can be given. These parameters can be used to modify the output format, the name of the time key, the format of the date, and other details.

    The following example uses the parameter "format": "json" to demonstrate how to show stdout in JSON format.

    First, run Fluent Bit enabling Tap:

    In another terminal, activate Tap including the output (stdout), and the parameters wanted ("format": "json"):

    In the first terminal, you should see the output similar to the following:

    This parameter shows stdout in JSON format.

    See for additional information.

    Analyze a single Tap record

    This filter record is an example to explain the details of a Tap record:

    • type: Defines the stage the event is generated:

      • 1: Input record. This is the unadulterated input record.

      • 2: Filtered record. This is a record after it was filtered. One record is generated per filter.

      • 3: Pre-output record. This is the record right before it's sent for output.

    Dump Internals and signal

    When the service is running, you can export metrics to see the overall status of the data flow of the service. There are other use cases where you might need to know the current status of the service internals, like the current status of the internal buffers. Dump Internals can help provide this information.

    Fluent Bit v1.4 introduced the Dump Internals feature, which can be triggered from the command line by sending the CONT Unix signal.


    This feature is only available on Linux and BSD operating systems.

    Usage

    Run the following kill command to signal Fluent Bit:

    The pidof command identifies the process ID of Fluent Bit.
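A sketch of the signal invocation, assuming pidof is available on your system:

```shell
# Find the Fluent Bit process ID and send the CONT signal to trigger
# the Dump Internals report on stdout
kill -CONT $(pidof fluent-bit)
```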

    Fluent Bit will dump the following information to the standard output interface (stdout):

    Input plugins

    The input plugins dump provides insights for every input instance configured.

    Status

    Overall ingestion status of the plugin.

    Entry
    Sub-entry
    Description

    Tasks

    When an input plugin ingests data into the engine, a Chunk is created. A Chunk can contain multiple records. At flush time, the engine creates a Task that contains the routes for the Chunk in question.

    The Task dump describes the tasks associated to the input plugin:

    Entry
    Description

    Chunks

    The Chunks dump provides more details about all the chunks that the input plugin has generated and are still being processed.

    Depending on the buffering strategy and limits imposed by configuration, some Chunks might be up (in memory) or down (filesystem).

    Entry
    Sub-entry
    Description

    Storage layer

    Fluent Bit relies on a custom storage layer interface designed for hybrid buffering. The Storage Layer entry contains a total summary of Chunks registered by Fluent Bit:

    Entry
    Sub-Entry
    Description

    Node exporter metrics

    A plugin based on Prometheus Node Exporter to collect system and host level metrics


    Supported event types: metrics

    Prometheus Node Exporter is a popular way to collect system level metrics from operating systems, such as CPU, disk, network, and process statistics. Fluent Bit includes a node exporter metrics plugin that builds off the Prometheus design to collect system level metrics without having to manage two separate processes or agents.

    The Node exporter metrics plugin contains a subset of collectors and metrics available from Prometheus Node exporter.


  • This example is a record generated by the manipulation of a record by a filter so it has the type 2.

  • start_time and end_time: Records the start and end of an event, and is different for each event type:

    • type 1: When the input is received, both the start and end time.

    • type 2: The time when filtering is matched until it has finished processing.

    • type 3: The time when the input is received and when it's finally slated for output.

  • trace_id: A string composed of a prefix and a number which is incremented with each record received by the input during the Tap session.

  • plugin_instance: The plugin instance name as generated by Fluent Bit at runtime.

  • plugin_alias: If an alias is set this field will contain the alias set for a plugin.

  • records: An array of all the records being sent. Fluent Bit handles records in chunks of multiple records, and because chunks are indivisible, the Tap output preserves them whole. Each record consists of its timestamp followed by the actual data, which is a composite type of keys and values.

    overlimit

    If the plugin has been configured with Mem_Buf_Limit, this entry will report if the plugin is over the limit or not at the moment of the dump. Over the limit prints yes, otherwise no.

    mem_size

    Current memory size in use by the input plugin in-memory.

    mem_limit

    Limit set by Mem_Buf_Limit.

    total_tasks

    Total number of active tasks associated to data generated by the input plugin.

    new

    Number of tasks not yet assigned to an output plugin. Tasks are in new status for a very short period of time. This value is normally very low or zero.

    running

    Number of active tasks being processed by output plugins.

    size

    Amount of memory used by the Chunks being processed (total chunk size).

    total_chunks

    Total number of Chunks generated by the input plugin that are still being processed by the engine.

    up_chunks

    Total number of Chunks loaded in memory.

    down_chunks

    Total number of Chunks stored in the filesystem but not loaded in memory yet.

    busy_chunks

    Chunks marked as busy (being flushed) or locked. Busy Chunks are immutable and likely are ready to be or are being processed.

    size

    Amount of bytes used by the Chunk.

    size err

    Number of Chunks in an error state where its size couldn't be retrieved.

    total chunks

    Total number of Chunks.

    mem chunks

    Total number of Chunks memory-based.

    fs chunks

    Total number of Chunks filesystem based.

    up

    Total number of filesystem chunks up in memory.

    down

    Total number of filesystem chunks down (not loaded in memory).

    [SERVICE]
      storage.path          /var/log/flb-storage/
      storage.keep.rejected on
      storage.rejected.path rejected

    service:
      storage.path: /var/log/flb-storage/
      storage.keep.rejected: on
      storage.rejected.path: rejected


    Metrics collected with Node Exporter Metrics flow through a separate pipeline from logs and current filters don't operate on top of metrics. This plugin is generally supported on Linux-based operating systems, with macOS offering a reduced subset of metrics.

    Configuration parameters

    scrape_interval sets the default for all scrapes. To set granular scrape intervals, set the collector-specific interval, for example, collector.cpu.scrape_interval. If a collector-specific value greater than 0 is set, it overrides the global default; otherwise, the global default is used.

    The plugin top-level scrape_interval setting is the global default. Any custom settings for individual scrape_intervals override that specific metric scraping interval.

    Each collector.xxx.scrape_interval option only overrides the interval for that specific collector and updates the associated set of provided metrics.

    Overridden intervals only change the collection interval, not the interval for publishing the metrics which is taken from the global setting.

    For example, if the global interval is set to 5 and an override interval of 60 is used, the published metrics will be reported every five seconds. However, the specific collector will stay the same for 60 seconds until it's collected again.

    This helps with down-sampling when collecting metrics.
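For example, the following hypothetical input section collects most metrics every 5 seconds while down-sampling CPU metrics to once per minute:

```yaml
pipeline:
  inputs:
    - name: node_exporter_metrics
      tag: node_metrics
      scrape_interval: 5
      collector.cpu.scrape_interval: 60
```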

    Key
    Description
    Default

    collector.cpu.scrape_interval

    The rate in seconds at which cpu metrics are collected from the host operating system.

    0

    collector.cpufreq.scrape_interval

    The rate in seconds at which cpufreq metrics are collected from the host operating system.

    0

    Collectors available

    The following table describes the available collectors included in this plugin. They're enabled by default and respect the original metric names, descriptions, and types from Prometheus Node Exporter, so you can use your current dashboards without compatibility problems.

    The Version column specifies the Fluent Bit version where the collector is available.

    Name
    Description
    Operating system
    Version

    cpu

    Exposes CPU statistics.

    Linux, macOS

    1.8

    cpufreq

    Exposes CPU frequency statistics.

    Threading

    This input always runs in its own thread.

    Get started

    Configuration file

    In the following configuration file, the input plugin node_exporter_metrics collects metrics every two seconds and exposes them through the Prometheus Exporter output plugin on HTTP/TCP port 2021.

    You can test the exposed metrics by using curl:
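A sketch of such a configuration; the tag and listener values are illustrative:

```yaml
service:
  flush: 1

pipeline:
  inputs:
    - name: node_exporter_metrics
      tag: node_metrics
      scrape_interval: 2

  outputs:
    - name: prometheus_exporter
      match: node_metrics
      host: 0.0.0.0
      port: 2021
```

With Fluent Bit running, curl http://127.0.0.1:2021/metrics should return the collected metrics.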

    Container to collect host metrics

    When deploying Fluent Bit in a container, you must specify additional settings to ensure that Fluent Bit has access to the host operating system. The following Docker command deploys Fluent Bit with specific mount paths and settings enabled to ensure that Fluent Bit can collect metrics from the host. These are then exposed over port 2021.
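A sketch of such a deployment. The mount paths and the path.procfs and path.sysfs overrides point the collectors at the host's procfs and sysfs; exact flags may vary by environment:

```shell
docker run -ti --rm \
  -v /proc:/host/proc \
  -v /sys:/host/sys \
  -p 2021:2021 \
  fluent/fluent-bit:latest \
  /fluent-bit/bin/fluent-bit \
  -i node_exporter_metrics -p path.procfs=/host/proc -p path.sysfs=/host/sys \
  -o prometheus_exporter -p host=0.0.0.0 -p port=2021 \
  -f 1
```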

    Fluent Bit with Prometheus and Grafana

    If you use dashboards for monitoring, Grafana is one option. The Fluent Bit source code repository contains a docker-compose example.

    1. Download the Fluent Bit source code:

    2. Start the service and view your dashboard:

    3. Open your browser and use the address http://127.0.0.1:3000.

    4. When asked for the credentials to access Grafana, use admin for the username and password. See .

    An example Grafana dashboard

    By default, the Grafana dashboard plots the data from the last 24 hours. Change the range to Last 5 minutes to see the most recently collected data.

    Stop the service
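Assuming the docker-compose example above, a sketch of stopping it:

```shell
# Stop and remove the services started for the demo
docker-compose down
```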

    Enhancement requests

    The plugin implements a subset of the available collectors in the original Prometheus Node exporter. If you would like a specific collector prioritized, open a GitHub issue by using the following template:

    • in_node_exporter_metrics

    • Prometheus Node exporter

    OpenTelemetry

    An input plugin to ingest OpenTelemetry logs, metrics, and traces

    Supported event types: logs, metrics, and traces.

    The OpenTelemetry input plugin lets you receive data based on the OpenTelemetry specification from various OpenTelemetry exporters, the OpenTelemetry Collector, or the Fluent Bit OpenTelemetry output plugin.


    docker run --rm -ti fluent/fluent-bit:latest --help | grep trace
      -Z, --enable-chunk-trace  enable chunk tracing, it can be activated either through the http api or the command line
      --trace-input           input to start tracing on startup.
      --trace-output          output to use for tracing on startup.
      --trace-output-property set a property for output tracing on startup.
      --trace                 setup a trace pipeline on startup. Uses a single line, ie: "input=dummy.0 output=stdout output.format='json'"
    $ fluent-bit -Z -i dummy -o stdout -f 1 --trace-input=dummy.0 --trace-output=stdout
    
    ...
    [0] dummy.0: [[1689971222.068537501, {}], {"message"=>"dummy"}]
    [0] dummy.0: [[1689971223.068556121, {}], {"message"=>"dummy"}]
    [0] trace: [[1689971222.068677045, {}], {"type"=>1, "trace_id"=>"0", "plugin_instance"=>"dummy.0", "records"=>[{"timestamp"=>1689971222, "record"=>{"message"=>"dummy"}}], "start_time"=>1689971222, "end_time"=>1689971222}]
    [1] trace: [[1689971222.068735577, {}], {"type"=>3, "trace_id"=>"0", "plugin_instance"=>"dummy.0", "records"=>[{"timestamp"=>1689971222, "record"=>{"message"=>"dummy"}}], "start_time"=>1689971222, "end_time"=>1689971222}]
    [0] dummy.0: [[1689971224.068586317, {}], {"message"=>"dummy"}]
    [0] trace: [[1689971223.068626923, {}], {"type"=>1, "trace_id"=>"1", "plugin_instance"=>"dummy.0", "records"=>[{"timestamp"=>1689971223, "record"=>{"message"=>"dummy"}}], "start_time"=>1689971223, "end_time"=>1689971223}]
    [1] trace: [[1689971223.068675735, {}], {"type"=>3, "trace_id"=>"1", "plugin_instance"=>"dummy.0", "records"=>[{"timestamp"=>1689971223, "record"=>{"message"=>"dummy"}}], "start_time"=>1689971223, "end_time"=>1689971223}]
    [2] trace: [[1689971224.068689341, {}], {"type"=>1, "trace_id"=>"2", "plugin_instance"=>"dummy.0", "records"=>[{"timestamp"=>1689971224, "record"=>{"message"=>"dummy"}}], "start_time"=>1689971224, "end_time"=>1689971224}]
    [3] trace: [[1689971224.068747182, {}], {"type"=>3, "trace_id"=>"2", "plugin_instance"=>"dummy.0", "records"=>[{"timestamp"=>1689971224, "record"=>{"message"=>"dummy"}}], "start_time"=>1689971224, "end_time"=>1689971224}]
    ^C[2023/07/21 16:27:05] [engine] caught signal (SIGINT)
    [2023/07/21 16:27:05] [ warn] [engine] service will shutdown in max 5 seconds
    [2023/07/21 16:27:05] [ info] [input] pausing dummy.0
    [0] dummy.0: [[1689971225.068568875, {}], {"message"=>"dummy"}]
    [2023/07/21 16:27:06] [ info] [engine] service has stopped (0 pending tasks)
    [2023/07/21 16:27:06] [ info] [input] pausing dummy.0
    [2023/07/21 16:27:06] [ warn] [engine] service will shutdown in max 1 seconds
    [0] trace: [[1689971225.068654038, {}], {"type"=>1, "trace_id"=>"3", "plugin_instance"=>"dummy.0", "records"=>[{"timestamp"=>1689971225, "record"=>{"message"=>"dummy"}}], "start_time"=>1689971225, "end_time"=>1689971225}]
    [1] trace: [[1689971225.068695829, {}], {"type"=>3, "trace_id"=>"3", "plugin_instance"=>"dummy.0", "records"=>[{"timestamp"=>1689971225, "record"=>{"message"=>"dummy"}}], "start_time"=>1689971225, "end_time"=>1689971225}]
    [2023/07/21 16:27:07] [ info] [engine] service has stopped (0 pending tasks)
    [2023/07/21 16:27:07] [ info] [output:stdout:stdout.0] thread worker #0 stopping...
    [2023/07/21 16:27:07] [ info] [output:stdout:stdout.0] thread worker #0 stopped
    [2023/07/21 16:27:07] [ info] [output:stdout:stdout.0] thread worker #0 stopping...
    [2023/07/21 16:27:07] [ info] [output:stdout:stdout.0] thread worker #0 stopped
    [2023/07/21 16:26:42] [ warn] [chunk trace] enable chunk tracing via the configuration or  command line to be able to activate tracing.
    $ fluent-bit -Z -i dummy -o stdout -f 1 --trace-input=dummy.0 --trace-output=stdout --trace-output-property=format=json_lines
    
    ...
    [0] dummy.0: [[1689971340.068565891, {}], {"message"=>"dummy"}]
    [0] dummy.0: [[1689971341.068632477, {}], {"message"=>"dummy"}]
    {"date":1689971340.068745,"type":1,"trace_id":"0","plugin_instance":"dummy.0","records":[{"timestamp":1689971340,"record":{"message":"dummy"}}],"start_time":1689971340,"end_time":1689971340}
    {"date":1689971340.068825,"type":3,"trace_id":"0","plugin_instance":"dummy.0","records":[{"timestamp":1689971340,"record":{"message":"dummy"}}],"start_time":1689971340,"end_time":1689971340}
    [0] dummy.0: [[1689971342.068613646, {}], {"message"=>"dummy"}]
    {"date":1689971340.068745,"type":1,"trace_id":"0","plugin_instance":"dummy.0","records":[{"timestamp":1689971340,"record":{"message":"dummy"}}],"start_time":1689971340,"end_time":1689971340}
    fluent-bit -Z -i dummy -o stdout -f 1 --trace="input=dummy.0 output=stdout output.format=json_lines"
    $ docker run --rm -ti -p 2020:2020 fluent/fluent-bit:latest -Z -H -i dummy -p alias=input_dummy -o stdout -f 1
    
    ...
    [0] dummy.0: [1666346597.203307010, {"message"=>"dummy"}]
    [0] dummy.0: [1666346598.204103793, {"message"=>"dummy"}]
    $ curl 127.0.0.1:2020/api/v1/trace/input_dummy
    
    {"status":"ok"}
    ...
    [0] dummy.0: [1666346616.203551736, {"message"=>"dummy"}]
    [0] trace: [1666346617.205221952, {"type"=>1, "trace_id"=>"trace.0", "plugin_instance"=>"dummy.0", "plugin_alias"=>"input_dummy", "records"=>[{"timestamp"=>1666346617, "record"=>{"message"=>"dummy"}}], "start_time"=>1666346617, "end_time"=>1666346617}]
    [0] dummy.0: [1666346617.205131790, {"message"=>"dummy"}]
    [0] trace: [1666346617.205419358, {"type"=>3, "trace_id"=>"trace.0", "plugin_instance"=>"dummy.0", "plugin_alias"=>"input_dummy", "records"=>[{"timestamp"=>1666346617, "record"=>{"message"=>"dummy"}}], "start_time"=>1666346617, "end_time"=>1666346617}]
    [0] trace: [1666346618.204110867, {"type"=>1, "trace_id"=>"trace.1", "plugin_instance"=>"dummy.0", "plugin_alias"=>"input_dummy", "records"=>[{"timestamp"=>1666346618, "record"=>{[0] dummy.0: [1666346618.204049246, {"message"=>"dummy"}]
    "message"=>"dummy"}}], "start_time"=>1666346618, "end_time"=>1666346618}]
    [0] trace: [1666346618.204198654, {"type"=>3, "trace_id"=>"trace.1", "plugin_instance"=>"dummy.0", "plugin_alias"=>"input_dummy", "records"=>[{"timestamp"=>1666346618, "record"=>{"message"=>"dummy"}}], "start_time"=>1666346618, "end_time"=>1666346618}]
    $ docker run --rm -ti -p 2020:2020 fluent/fluent-bit:latest \
          -Z -H \
          -i dummy -p alias=dummy_0 -p \
             dummy='{"dummy": "dummy_0", "key_name": "foo", "key_cnt": "1"}' \
          -i dummy -p alias=dummy_1 -p dummy='{"dummy": "dummy_1"}' \
          -i dummy -p alias=dummy_2 -p dummy='{"dummy": "dummy_2"}' \
          -F record_modifier -m 'dummy.0' -p record="powered_by fluent" \
          -F record_modifier -m 'dummy.1' -p record="powered_by fluent-bit" \
          -F nest -m 'dummy.0' \
          -p operation=nest -p wildcard='key_*' -p nest_under=data \
          -o null -m '*' -f 1
    $ curl 127.0.0.1:2020/api/v1/trace/dummy_0
    
    {"status":"ok"}
    ...
    [0] trace: [1666349359.325597543, {"type"=>1, "trace_id"=>"trace.0", "plugin_instance"=>"dummy.0", "plugin_alias"=>"dummy_0", "records"=>[{"timestamp"=>1666349359, "record"=>{"dummy"=>"dummy_0", "key_name"=>"foo", "key_cnt"=>"1"}}], "start_time"=>1666349359, "end_time"=>1666349359}]
    [0] trace: [1666349359.325723747, {"type"=>2, "start_time"=>1666349359, "end_time"=>1666349359, "trace_id"=>"trace.0", "plugin_instance"=>"record_modifier.0", "records"=>[{"timestamp"=>1666349359, "record"=>{"dummy"=>"dummy_0", "key_name"=>"foo", "key_cnt"=>"1", "powered_by"=>"fluent"}}]}]
    [0] trace: [1666349359.325783954, {"type"=>2, "start_time"=>1666349359, "end_time"=>1666349359, "trace_id"=>"trace.0", "plugin_instance"=>"nest.2", "records"=>[{"timestamp"=>1666349359, "record"=>{"dummy"=>"dummy_0", "powered_by"=>"fluent", "data"=>{"key_name"=>"foo", "key_cnt"=>"1"}}}]}]
    [0] trace: [1666349359.325913783, {"type"=>3, "trace_id"=>"trace.0", "plugin_instance"=>"dummy.0", "plugin_alias"=>"dummy_0", "records"=>[{"timestamp"=>1666349359, "record"=>{"dummy"=>"dummy_0", "powered_by"=>"fluent", "data"=>{"key_name"=>"foo", "key_cnt"=>"1"}}}], "start_time"=>1666349359, "end_time"=>1666349359}]
    [0] trace: [1666349360.323826619, {"type"=>1, "trace_id"=>"trace.1", "plugin_instance"=>"dummy.0", "plugin_alias"=>"dummy_0", "records"=>[{"timestamp"=>1666349360, "record"=>{"dummy"=>"dummy_0", "key_name"=>"foo", "key_cnt"=>"1"}}], "start_time"=>1666349360, "end_time"=>1666349360}]
    [0] trace: [1666349360.323859618, {"type"=>2, "start_time"=>1666349360, "end_time"=>1666349360, "trace_id"=>"trace.1", "plugin_instance"=>"record_modifier.0", "records"=>[{"timestamp"=>1666349360, "record"=>{"dummy"=>"dummy_0", "key_name"=>"foo", "key_cnt"=>"1", "powered_by"=>"fluent"}}]}]
    [0] trace: [1666349360.323900784, {"type"=>2, "start_time"=>1666349360, "end_time"=>1666349360, "trace_id"=>"trace.1", "plugin_instance"=>"nest.2", "records"=>[{"timestamp"=>1666349360, "record"=>{"dummy"=>"dummy_0", "powered_by"=>"fluent", "data"=>{"key_name"=>"foo", "key_cnt"=>"1"}}}]}]
    [0] trace: [1666349360.323926366, {"type"=>3, "trace_id"=>"trace.1", "plugin_instance"=>"dummy.0", "plugin_alias"=>"dummy_0", "records"=>[{"timestamp"=>1666349360, "record"=>{"dummy"=>"dummy_0", "powered_by"=>"fluent", "data"=>{"key_name"=>"foo", "key_cnt"=>"1"}}}], "start_time"=>1666349360, "end_time"=>1666349360}]
    [0] trace: [1666349361.324223752, {"type"=>1, "trace_id"=>"trace.2", "plugin_instance"=>"dummy.0", "plugin_alias"=>"dummy_0", "records"=>[{"timestamp"=>1666349361, "record"=>{"dummy"=>"dummy_0", "key_name"=>"foo", "key_cnt"=>"1"}}], "start_time"=>1666349361, "end_time"=>1666349361}]
    [0] trace: [1666349361.324263959, {"type"=>2, "start_time"=>1666349361, "end_time"=>1666349361, "trace_id"=>"trace.2", "plugin_instance"=>"record_modifier.0", "records"=>[{"timestamp"=>1666349361, "record"=>{"dummy"=>"dummy_0", "key_name"=>"foo", "key_cnt"=>"1", "powered_by"=>"fluent"}}]}]
    [0] trace: [1666349361.324283250, {"type"=>2, "start_time"=>1666349361, "end_time"=>1666349361, "trace_id"=>"trace.2", "plugin_instance"=>"nest.2", "records"=>[{"timestamp"=>1666349361, "record"=>{"dummy"=>"dummy_0", "powered_by"=>"fluent", "data"=>{"key_name"=>"foo", "key_cnt"=>"1"}}}]}]
    [0] trace: [1666349361.324294291, {"type"=>3, "trace_id"=>"trace.2", "plugin_instance"=>"dummy.0", "plugin_alias"=>"dummy_0", "records"=>[{"timestamp"=>1666349361, "record"=>{"dummy"=>"dummy_0", "powered_by"=>"fluent", "data"=>{"key_name"=>"foo", "key_cnt"=>"1"}}}], "start_time"=>1666349361, "end_time"=>1666349361}]
    ^C[2022/10/21 10:49:23] [engine] caught signal (SIGINT)
    [2022/10/21 10:49:23] [ warn] [engine] service will shutdown in max 5 seconds
    [2022/10/21 10:49:23] [ info] [input] pausing dummy_0
    [2022/10/21 10:49:23] [ info] [input] pausing dummy_1
    [2022/10/21 10:49:23] [ info] [input] pausing dummy_2
    [2022/10/21 10:49:23] [ info] [engine] service has stopped (0 pending tasks)
    [2022/10/21 10:49:23] [ info] [input] pausing dummy_0
    [2022/10/21 10:49:23] [ info] [input] pausing dummy_1
    [2022/10/21 10:49:23] [ info] [input] pausing dummy_2
    [0] trace: [1666349362.323272011, {"type"=>1, "trace_id"=>"trace.3", "plugin_instance"=>"dummy.0", "plugin_alias"=>"dummy_0", "records"=>[{"timestamp"=>1666349362, "record"=>{"dummy"=>"dummy_0", "key_name"=>"foo", "key_cnt"=>"1"}}], "start_time"=>1666349362, "end_time"=>1666349362}]
    [0] trace: [1666349362.323306843, {"type"=>2, "start_time"=>1666349362, "end_time"=>1666349362, "trace_id"=>"trace.3", "plugin_instance"=>"record_modifier.0", "records"=>[{"timestamp"=>1666349362, "record"=>{"dummy"=>"dummy_0", "key_name"=>"foo", "key_cnt"=>"1", "powered_by"=>"fluent"}}]}]
    [0] trace: [1666349362.323323884, {"type"=>2, "start_time"=>1666349362, "end_time"=>1666349362, "trace_id"=>"trace.3", "plugin_instance"=>"nest.2", "records"=>[{"timestamp"=>1666349362, "record"=>{"dummy"=>"dummy_0", "powered_by"=>"fluent", "data"=>{"key_name"=>"foo", "key_cnt"=>"1"}}}]}]
    [0] trace: [1666349362.323334509, {"type"=>3, "trace_id"=>"trace.3", "plugin_instance"=>"dummy.0", "plugin_alias"=>"dummy_0", "records"=>[{"timestamp"=>1666349362, "record"=>{"dummy"=>"dummy_0", "powered_by"=>"fluent", "data"=>{"key_name"=>"foo", "key_cnt"=>"1"}}}], "start_time"=>1666349362, "end_time"=>1666349362}]
    [2022/10/21 10:49:24] [ warn] [engine] service will shutdown in max 1 seconds
    [2022/10/21 10:49:25] [ info] [engine] service has stopped (0 pending tasks)
    [2022/10/21 10:49:25] [ info] [output:stdout:stdout.0] thread worker #0 stopping...
    [2022/10/21 10:49:25] [ info] [output:stdout:stdout.0] thread worker #0 stopped
    [2022/10/21 10:49:25] [ info] [output:null:null.0] thread worker #0 stopping...
    [2022/10/21 10:49:25] [ info] [output:null:null.0] thread worker #0 stopped
    $ docker run --rm -ti -p 2020:2020 fluent/fluent-bit:latest -Z -H -i dummy -p alias=input_dummy -o stdout -f 1
    
    ...
    [0] dummy.0: [1674805465.976012761, {"message"=>"dummy"}]
    [0] dummy.0: [1674805466.973669512, {"message"=>"dummy"}]
    $ curl 127.0.0.1:2020/api/v1/trace/input_dummy -d '{"output":"stdout", "params": {"format": "json"}}'
    
    {"status":"ok"}
    ...
    [0] dummy.0: [1674805635.972373840, {"message"=>"dummy"}]
    [{"date":1674805634.974457,"type":1,"trace_id":"0","plugin_instance":"dummy.0","plugin_alias":"input_dummy","records":[{"timestamp":1674805634,"record":{"message":"dummy"}}],"start_time":1674805634,"end_time":1674805634},{"date":1674805634.974605,"type":3,"trace_id":"0","plugin_instance":"dummy.0","plugin_alias":"input_dummy","records":[{"timestamp":1674805634,"record":{"message":"dummy"}}],"start_time":1674805634,"end_time":1674805634},{"date":1674805635.972398,"type":1,"trace_id":"1","plugin_instance":"dummy.0","plugin_alias":"input_dummy","records":[{"timestamp":1674805635,"record":{"message":"dummy"}}],"start_time":1674805635,"end_time":1674805635},{"date":1674805635.972413,"type":3,"trace_id":"1","plugin_instance":"dummy.0","plugin_alias":"input_dummy","records":[{"timestamp":1674805635,"record":{"message":"dummy"}}],"start_time":1674805635,"end_time":1674805635}]
    [0] dummy.0: [1674805636.973970215, {"message"=>"dummy"}]
    [{"date":1674805636.974008,"type":1,"trace_id":"2","plugin_instance":"dummy.0","plugin_alias":"input_dummy","records":[{"timestamp":1674805636,"record":{"message":"dummy"}}],"start_time":1674805636,"end_time":1674805636},{"date":1674805636.974034,"type":3,"trace_id":"2","plugin_instance":"dummy.0","plugin_alias":"input_dummy","records":[{"timestamp":1674805636,"record":{"message":"dummy"}}],"start_time":1674805636,"end_time":1674805636}]
    {
      "type": 2,
      "start_time": 1666349231,
      "end_time": 1666349231,
      "trace_id": "trace.1",
      "plugin_instance": "nest.2",
      "records": [{
        "timestamp": 1666349231,
        "record": {
          "dummy": "dummy_0",
          "powered_by": "fluent",
          "data": {
            "key_name": "foo",
            "key_cnt": "1"
          }
        }
      }]
    }
    kill -CONT `pidof fluent-bit`
    ...
    [engine] caught signal (SIGCONT)
    [2020/03/23 17:39:02] Fluent Bit Dump
    
    ===== Input =====
    syslog_debug (syslog)
    │
    ├─ status
    │  └─ overlimit     : no
    │     ├─ mem size   : 60.8M (63752145 bytes)
    │     └─ mem limit  : 61.0M (64000000 bytes)
    │
    ├─ tasks
    │  ├─ total tasks   : 92
    │  ├─ new           : 0
    │  ├─ running       : 92
    │  └─ size          : 171.1M (179391504 bytes)
    │
    └─ chunks
       └─ total chunks  : 92
          ├─ up chunks  : 35
          ├─ down chunks: 57
          └─ busy chunks: 92
             ├─ size    : 60.8M (63752145 bytes)
             └─ size err: 0
    
    ===== Storage Layer =====
    total chunks     : 92
    ├─ mem chunks    : 0
    └─ fs chunks     : 92
       ├─ up         : 35
       └─ down       : 57
    # Node Exporter Metrics + Prometheus Exporter
    # -------------------------------------------
    # The following example collects host metrics on Linux and exposes
    # them through a Prometheus HTTP endpoint.
    #
    # After starting the service try it with:
    #
    # $ curl http://127.0.0.1:2021/metrics
    #
    service:
      flush: 1
      log_level: info
    
    pipeline:
      inputs:
        - name: node_exporter_metrics
          tag:  node_metrics
          scrape_interval: 2
    
      outputs:
        - name: prometheus_exporter
          match: node_metrics
          host: 0.0.0.0
          port: 2021
    
    # Node Exporter Metrics + Prometheus Exporter
    # -------------------------------------------
    # The following example collects host metrics on Linux and exposes
    # them through a Prometheus HTTP endpoint.
    #
    # After starting the service try it with:
    #
    # $ curl http://127.0.0.1:2021/metrics
    #
    [SERVICE]
      flush           1
      log_level       info
    
    [INPUT]
      name            node_exporter_metrics
      tag             node_metrics
      scrape_interval 2
    
    [OUTPUT]
      name            prometheus_exporter
      match           node_metrics
      host            0.0.0.0
      port            2021
    
    
    git clone https://github.com/fluent/fluent-bit
    
    cd fluent-bit/docker_compose/node-exporter-dashboard/
    docker-compose up --force-recreate -d --build
    curl http://127.0.0.1:2021/metrics
    docker run -ti -v /proc:/host/proc \
               -v /sys:/host/sys   \
               -p 2021:2021        \
               fluent/fluent-bit:1.8.0 \
               /fluent-bit/bin/fluent-bit \
               -i node_exporter_metrics \
               -p path.procfs=/host/proc \
               -p path.sysfs=/host/sys \
               -o prometheus_exporter \
               -p "add_label=host $HOSTNAME" \
               -f 1
    docker-compose down

    • collector.diskstats.scrape_interval: The rate in seconds at which diskstats metrics are collected from the host operating system. Default: 0.

    • collector.filefd.scrape_interval: The rate in seconds at which filefd metrics are collected from the host operating system. Default: 0.

    • collector.filesystem.scrape_interval: The rate in seconds at which filesystem metrics are collected from the host operating system. Default: 0.

    • collector.hwmon.chip-exclude: Regex of chips to exclude for the hwmon collector. Not set by default.

    • collector.hwmon.chip-include: Regex of chips to include for the hwmon collector. Not set by default.

    • collector.hwmon.scrape_interval: The rate in seconds at which hwmon metrics are collected from the host operating system. Default: 0.

    • collector.hwmon.sensor-exclude: Regex of sensors to exclude for the hwmon collector. Not set by default.

    • collector.hwmon.sensor-include: Regex of sensors to include for the hwmon collector. Not set by default.

    • collector.loadavg.scrape_interval: The rate in seconds at which loadavg metrics are collected from the host operating system. Default: 0.

    • collector.meminfo.scrape_interval: The rate in seconds at which meminfo metrics are collected from the host operating system. Default: 0.

    • collector.netdev.scrape_interval: The rate in seconds at which netdev metrics are collected from the host operating system. Default: 0.

    • collector.netstat.scrape_interval: The rate in seconds at which netstat metrics are collected from the host operating system. Default: 0.

    • collector.nvme.scrape_interval: The rate in seconds at which nvme metrics are collected from the host operating system. Default: 0.

    • collector.processes.scrape_interval: The rate in seconds at which system-level process metrics are collected from the host operating system. Default: 0.

    • collector.sockstat.scrape_interval: The rate in seconds at which sockstat metrics are collected from the host operating system. Default: 0.

    • collector.stat.scrape_interval: The rate in seconds at which stat metrics are collected from the host operating system. Default: 0.

    • collector.systemd.scrape_interval: The rate in seconds at which systemd metrics are collected from the host operating system. Default: 0.

    • collector.textfile.path: Specify the path or directory from which to collect textfile metrics on the host operating system. Not set by default.

    • collector.textfile.scrape_interval: The rate in seconds at which textfile metrics are collected from the host operating system. Default: 0.

    • collector.thermalzone.scrape_interval: The rate in seconds at which thermal_zone metrics are collected from the host operating system. Default: 0.

    • collector.time.scrape_interval: The rate in seconds at which time metrics are collected from the host operating system. Default: 0.

    • collector.uname.scrape_interval: The rate in seconds at which uname metrics are collected from the host operating system. Default: 0.

    • collector.vmstat.scrape_interval: The rate in seconds at which vmstat metrics are collected from the host operating system. Default: 0.

    • diskstats.ignore_device_regex: Specify the regular expression for diskstats devices to ignore. Default: ^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\\d+n\\d+p)\\d+$

    • filesystem.ignore_filesystem_type_regex: Specify the regular expression for filesystem types to ignore. Default: ^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$

    • filesystem.ignore_mount_point_regex: Specify the regular expression for mount points to ignore. Default: ^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)

    • metrics: Specify which metrics are collected from the host operating system. These metrics depend on /procfs, /sysfs, systemd, or custom files. The actual values of metrics are read from /proc, /sys, or systemd as needed. cpu, cpufreq, meminfo, diskstats, filesystem, stat, loadavg, vmstat, netdev, netstat, sockstat, filefd, nvme, and processes depend on procfs. cpufreq, hwmon, and thermal_zone depend on sysfs. systemd depends on systemd services. textfile requires explicit path configuration using collector.textfile.path. Default: "cpu,cpufreq,meminfo,diskstats,filesystem,uname,stat,time,loadavg,vmstat,netdev,netstat,sockstat,filefd,systemd,nvme,thermal_zone,hwmon"

    • path.procfs: The mount point used to collect process information and metrics. Default: /proc.

    • path.rootfs: The root filesystem mount point. Default: /.

    • path.sysfs: The path in the filesystem used to collect system metrics. Default: /sys.

    • scrape_interval: The rate in seconds at which metrics are collected from the host operating system. Default: 5.

    • systemd_exclude_pattern: Regular expression to determine which units are excluded in the metrics produced by the systemd collector. Default: .+\\.(automount|device|mount|scope|slice)

    • systemd_include_pattern: Regular expression to determine which units are included in the metrics produced by the systemd collector. Not applied unless explicitly set.

    • systemd_include_service_task_metrics: Determines if the collector will include service task metrics. Default: false.

    • systemd_service_restart_metrics: Determines if the collector will include service restart metrics. Default: false.

    • systemd_unit_start_time_metrics: Determines if the collector will include unit start time metrics. Default: false.

    • diskstats: Exposes disk I/O statistics. (Linux, macOS; since 1.8)

    • filefd: Exposes file descriptor statistics from /proc/sys/fs/file-nr. (Linux; since 1.8.2)

    • filesystem: Exposes filesystem statistics from /proc/*/mounts. (Linux; since 2.0.9)

    • hwmon: Exposes hardware monitoring metrics from /sys/class/hwmon. (Linux; since 2.2.0)

    • loadavg: Exposes load average. (Linux, macOS; since 1.8)

    • meminfo: Exposes memory statistics. (Linux, macOS; since 1.8)

    • netdev: Exposes network interface statistics such as bytes transferred. (Linux, macOS; since 1.8.2)

    • netstat: Exposes network statistics from /proc/net/netstat. (Linux; since 2.2.0)

    • nvme: Exposes NVMe statistics from /proc. (Linux; since 2.2.0)

    • processes: Exposes process statistics from /proc. (Linux; since 2.2.0)

    • sockstat: Exposes socket statistics from /proc/net/sockstat. (Linux; since 2.2.0)

    • stat: Exposes various statistics from /proc/stat, including boot time, forks, and interrupts. (Linux; since 1.8)

    • systemd: Exposes statistics from systemd. (Linux; since 2.1.3)

    • textfile: Exposes custom metrics from text files. Requires collector.textfile.path to be set. (Linux; since 2.2.0)

    • thermal_zone: Exposes thermal statistics from /sys/class/thermal/thermal_zone/*. (Linux; since 2.2.1)

    • time: Exposes the current system time. (Linux; since 1.8)

    • uname: Exposes system information as provided by the uname system call. (Linux, macOS; since 1.8)

    • vmstat: Exposes statistics from /proc/vmstat. (Linux; since 1.8.2)

    Fluent Bit has a compliant implementation that fully supports OTLP/HTTP and OTLP/GRPC. The single configured port defaults to 4318 and supports both transport methods.

    Configuration

    • alias: Sets an alias for multiple instances of the same input plugin. If no alias is specified, a default name will be assigned using the plugin name followed by a dot and a sequence number. Default: none.

    • buffer_chunk_size: Size of each buffer chunk allocated for HTTP requests (advanced users only). Default: 512K.

    When raw_traces is set to false (default), the traces endpoint (/v1/traces) processes incoming trace data using the unified JSON parser with strict validation. The endpoint accepts both protobuf and JSON encoded payloads. When raw_traces is set to true, any data forwarded to the traces endpoint will be packed and forwarded as a log message without processing, validation, or conversion to the Fluent Bit internal trace format.
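    As a reference, a minimal sketch of enabling raw trace pass-through on the input. The listener address and port shown here are illustrative defaults, not requirements:

    ```yaml
    pipeline:
      inputs:
        - name: opentelemetry
          listen: 0.0.0.0
          port: 4318
          raw_traces: true

      outputs:
        - name: stdout
          match: '*'
    ```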

    OpenTelemetry transport protocol endpoints

    Fluent Bit exposes the following endpoints for data ingestion based on the OpenTelemetry protocol:

    For OTLP/HTTP:

    • Logs

      • /v1/logs

    • Metrics

      • /v1/metrics

    • Traces

      • /v1/traces

    For OTLP/GRPC:

    • Logs

      • /opentelemetry.proto.collector.log.v1.LogService/Export

      • /opentelemetry.proto.collector.logs.v1.LogsService/Export

    • Metrics

      • /opentelemetry.proto.collector.metric.v1.MetricService/Export

      • /opentelemetry.proto.collector.metrics.v1.MetricsService/Export

    • Traces

      • /opentelemetry.proto.collector.trace.v1.TraceService/Export

      • /opentelemetry.proto.collector.traces.v1.TracesService/Export

    Get started

    The OpenTelemetry input plugin supports the following telemetry data types:

    • Logs: Stable (HTTP1/JSON, HTTP1/Protobuf, and HTTP2/gRPC)

    • Metrics: Stable (HTTP1/JSON)

    A sample configuration file to get started will look something like the following:

    With this configuration, Fluent Bit listens on port 4318 for data. You can now send telemetry data to the endpoints /v1/metrics for metrics, /v1/traces for traces, and /v1/logs for logs.
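    Payloads sent to these endpoints follow the OTLP/JSON structure. As a rough sketch (field values are placeholders, not output from a real exporter), a minimal log payload for /v1/logs can be built programmatically:

    ```python
    # Sketch: build a minimal OTLP/JSON log payload for the /v1/logs endpoint.
    # Illustrative only; the timestamp and body text are placeholders.
    import json

    payload = {
        "resourceLogs": [{
            "resource": {},
            "scopeLogs": [{
                "scope": {},
                "logRecords": [{
                    # OTLP/JSON encodes 64-bit nanosecond timestamps as strings.
                    "timeUnixNano": "1660296023390371588",
                    "body": {"stringValue": "hello"},
                    "traceId": "",
                    "spanId": "",
                }],
            }],
        }],
    }

    # This string is what you would send as the POST body with
    # Content-Type: application/json.
    body = json.dumps(payload)
    print(body)
    ```
    
    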

    A sample curl request to POST JSON encoded log data would be:

    OpenTelemetry trace improvements

    Fluent Bit includes enhanced support for OpenTelemetry traces with improved JSON parsing, error handling, and validation capabilities.

    Unified trace JSON parser

    Fluent Bit provides a unified interface for processing OpenTelemetry trace data in JSON format. The parser converts OpenTelemetry JSON trace payloads into the Fluent Bit internal trace representation, supporting the full OpenTelemetry trace specification including:

    • Resource spans with attributes

    • Instrumentation scope information

    • Span data (names, IDs, timestamps, status)

    • Span events and links

    • Trace and span ID validation

    The unified parser handles the OpenTelemetry JSON encoding format, which wraps attribute values in type-specific containers (for example, stringValue, intValue, doubleValue, boolValue).
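    To illustrate that encoding, the following sketch unwraps the type-specific containers into plain values. This demonstrates the JSON format only; it is not Fluent Bit's actual parser, and the helper name unwrap_value is hypothetical:

    ```python
    # Sketch: unwrap OTLP/JSON typed attribute values (AnyValue containers)
    # into plain Python values. Illustrative only.

    def unwrap_value(value):
        """Convert an OTLP/JSON AnyValue container into a plain value."""
        if "stringValue" in value:
            return value["stringValue"]
        if "intValue" in value:
            # OTLP/JSON encodes 64-bit integers as strings.
            return int(value["intValue"])
        if "doubleValue" in value:
            return float(value["doubleValue"])
        if "boolValue" in value:
            return bool(value["boolValue"])
        if "arrayValue" in value:
            return [unwrap_value(v) for v in value["arrayValue"].get("values", [])]
        if "kvlistValue" in value:
            return {kv["key"]: unwrap_value(kv["value"])
                    for kv in value["kvlistValue"].get("values", [])}
        raise ValueError(f"unsupported AnyValue: {value}")

    # Attributes as they appear in a resource or span payload.
    attrs = [{"key": "service.name", "value": {"stringValue": "my-service"}},
             {"key": "retries", "value": {"intValue": "3"}}]
    plain = {a["key"]: unwrap_value(a["value"]) for a in attrs}
    print(plain)
    ```
    
    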

    Error status propagation

    The OpenTelemetry input plugin provides detailed error status information when processing trace data. If trace processing fails, the plugin returns specific error codes that help identify the issue:

    • FLB_OTEL_TRACES_ERR_INVALID_JSON - Invalid JSON format

    • FLB_OTEL_TRACES_ERR_INVALID_TRACE_ID - Invalid trace ID format or length

    • FLB_OTEL_TRACES_ERR_INVALID_SPAN_ID - Invalid span ID format or length

    • FLB_OTEL_TRACES_ERR_INVALID_PARENT_SPAN_ID - Invalid parent span ID

    • FLB_OTEL_TRACES_ERR_STATUS_FAILURE - Invalid span status code

    • FLB_OTEL_TRACES_ERR_INVALID_ATTRIBUTES - Invalid attribute format

    • FLB_OTEL_TRACES_ERR_INVALID_EVENT_ENTRY - Invalid span event

    • FLB_OTEL_TRACES_ERR_INVALID_LINK_ENTRY - Invalid span link

    Valid span status codes

    The OpenTelemetry specification defines three valid span status codes. When processing trace data, the plugin accepts the following status code values (case-insensitive):

    • OK - The operation completed successfully

    • ERROR - The operation has an error

    • UNSET - The status isn't set (default)

    Any other status code value triggers FLB_OTEL_TRACES_ERR_STATUS_FAILURE and causes the trace data to be rejected. The status code must be provided as a string in the status.code field of the span object.
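    The acceptance rule above can be sketched as follows. The function name is hypothetical and this is not Fluent Bit's implementation:

    ```python
    # Sketch: accept only the three span status codes the OpenTelemetry
    # specification defines, case-insensitively. Illustrative only.

    VALID_STATUS_CODES = {"OK", "ERROR", "UNSET"}

    def status_code_is_valid(code):
        """Return True if `code` is an accepted span status code string."""
        return isinstance(code, str) and code.upper() in VALID_STATUS_CODES
    ```
    
    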

    Error handling behavior

    When trace validation fails, the following behavior applies:

    • Trace data is dropped: Invalid trace data isn't processed or forwarded. The trace payload is rejected immediately.

    • Error logging: The plugin logs an error message with the specific error status code to help diagnose issues. Error messages include the error code number and description.

    • No retry mechanism: Failed requests aren't automatically retried. The client must resend corrected trace data.

    • HTTP response codes:

      • HTTP/1.1: Returns 400 Bad Request with an error message when validation fails. Returns the configured successful_response_code (default 201 Created) when processing succeeds.

      • gRPC: Returns gRPC status 2 (UNKNOWN) when validation fails.

    Strict ID decoding

    Fluent Bit enforces strict validation for trace and span IDs to ensure data integrity:

    • Trace IDs: Must be exactly 32 hexadecimal characters (16 bytes)

    • Span IDs: Must be exactly 16 hexadecimal characters (8 bytes)

    • Parent Span IDs: Must be exactly 16 hexadecimal characters (8 bytes) when present

    The validation process:

    1. Verifies the ID length matches the expected size

    2. Validates that all characters are valid hexadecimal digits (0-9, a-f, A-F)

    3. Decodes the hexadecimal string to binary format

    4. Rejects invalid IDs with appropriate error codes

    Invalid IDs result in error status codes (FLB_OTEL_TRACES_ERR_INVALID_TRACE_ID, FLB_OTEL_TRACES_ERR_INVALID_SPAN_ID, and so on) and the trace data is rejected to prevent processing of corrupted or malformed trace information.
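    The validation steps above can be sketched as follows (hypothetical helper names; not Fluent Bit's C implementation):

    ```python
    # Sketch: strict trace/span ID validation as described above. Trace IDs
    # must be exactly 32 hex characters, span IDs exactly 16; anything else
    # is rejected before decoding.
    import string

    HEX_DIGITS = set(string.hexdigits)

    def decode_id(value, expected_hex_len):
        """Validate length and hex characters, then decode to bytes."""
        if len(value) != expected_hex_len:
            raise ValueError("wrong length")
        if any(ch not in HEX_DIGITS for ch in value):
            raise ValueError("not hexadecimal")
        return bytes.fromhex(value)

    def decode_trace_id(value):
        return decode_id(value, 32)  # 16 bytes of binary trace ID

    def decode_span_id(value):
        return decode_id(value, 16)  # 8 bytes of binary span ID
    ```
    
    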

    hashtag
    Example: JSON trace payload

    The following example shows a valid OpenTelemetry JSON trace payload that can be sent to the /v1/traces endpoint:

    Trace IDs must be exactly 32 hex characters and span IDs must be exactly 16 hex characters. Invalid IDs will be rejected with appropriate error messages.

    In the example, the status.code field uses "OK". Valid status code values are "OK", "ERROR", and "UNSET" (case-insensitive). Any other value triggers FLB_OTEL_TRACES_ERR_STATUS_FAILURE and causes the trace to be rejected.

    pipeline:
      inputs:
        - name: opentelemetry
          listen: 127.0.0.1
          port: 4318
    
      outputs:
        - name: stdout
          match: '*'
    [INPUT]
      Name   opentelemetry
      Listen 127.0.0.1
      Port   4318
    
    [OUTPUT]
      Name  stdout
      Match *
    curl --header "Content-Type: application/json" --request POST --data '{"resourceLogs":[{"resource":{},"scopeLogs":[{"scope":{},"logRecords":[{"timeUnixNano":"1660296023390371588","body":{"stringValue":"{\"message\":\"dummy\"}"},"traceId":"","spanId":""}]}]}]}'   http://0.0.0.0:4318/v1/logs
    {
      "resourceSpans": [
        {
          "resource": {
            "attributes": [
              {
                "key": "service.name",
                "value": {
                  "stringValue": "my-service"
                }
              }
            ]
          },
          "scopeSpans": [
            {
              "scope": {
                "name": "my-instrumentation",
                "version": "1.0.0"
              },
              "spans": [
                {
                  "traceId": "0123456789abcdef0123456789abcdef",
                  "spanId": "0123456789abcdef",
                  "name": "my-span",
                  "kind": 1,
                  "startTimeUnixNano": "1660296023390371588",
                  "endTimeUnixNano": "1660296023391371588",
                  "status": {
                    "code": "OK"
                  },
                  "attributes": [
                    {
                      "key": "http.method",
                      "value": {
                        "stringValue": "GET"
                      }
                    }
                  ]
                }
              ]
            }
          ]
        }
      ]
    }
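Assuming the payload above is saved to a file named trace.json (the file name is an assumption for illustration), it can be posted with curl in the same style as the logs example:

```shell
# Post the example trace payload to the OpenTelemetry traces endpoint.
curl --header "Content-Type: application/json" \
     --request POST \
     --data @trace.json \
     http://127.0.0.1:4318/v1/traces
```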
Over gRPC, validation failures return status 2 (UNKNOWN) with the message "Serialization error.", while successful processing returns gRPC status 0 (OK) with an empty ExportTraceServiceResponse.

    buffer_max_size

    Maximum size of the HTTP request buffer in KB, MB, or GB.

    4M

    encode_profiles_as_log

    Encode profiles received as text and ingest them in the logging pipeline.

    true

    host

    The hostname.

    localhost

    http2

    Enable HTTP/2 protocol support for the OpenTelemetry receiver.

    true

    http_server.max_connections

    Maximum number of concurrent active HTTP connections. 0 means unlimited.

    0

    http_server.workers

    Number of HTTP listener worker threads.

    1

    listen

    The network address to listen on.

    0.0.0.0

    log_level

    Specifies the log level for this plugin. If not set here, the plugin uses the global log level specified in the service section of your configuration file.

    info

    log_suppress_interval

Suppresses log messages from this plugin that appear similar within the specified time interval. A value of 0 disables suppression.

    0

    logs_body_key

    Specify a body key.

    none

    logs_metadata_key

    Key name to store OpenTelemetry logs metadata in the record.

    otlp

    mem_buf_limit

    Set a memory buffer limit for the input plugin. If the limit is reached, the plugin will pause until the buffer is drained. The value is in bytes. If set to 0, the buffer limit is disabled.

    0

    net.accept_timeout

    Set maximum time allowed to establish an incoming connection. This time includes the TLS handshake.

    10s

    net.accept_timeout_log_error

Specify whether an error should be logged when a client accept timeout occurs. When disabled, the timeout is logged as a debug message.

    true

    net.backlog

    Set the backlog size for listening sockets.

    128

    net.io_timeout

    Set maximum time a connection can stay idle.

    0s

    net.keepalive

    Enable or disable keepalive support.

    true

    net.share_port

    Allow multiple plugins to bind to the same port.

    false

    oauth2.allowed_audience

    Audience claim to enforce when validating incoming OAuth 2.0 JSON Web Token (JWT) tokens.

    none

    oauth2.allowed_clients

    Authorized client_id or azp claim values. Can be specified multiple times.

    none

    oauth2.issuer

    Expected issuer (iss) claim for OAuth 2.0 JWT validation.

    none

    oauth2.jwks_refresh_interval

    How often in seconds to refresh the cached JSON Web Key Set (JWKS) keys from oauth2.jwks_url.

    300

    oauth2.jwks_url

    JWKS endpoint URL used to fetch public keys for OAuth 2.0 JWT validation.

    none

    oauth2.validate

    Enable OAuth 2.0 JWT validation for incoming requests.

    false

    port

    The port for Fluent Bit to listen for incoming connections.

    4318

    profiles_support

Enable support for ingesting profiles. This is an experimental feature; feel free to test it, but don't enable it in production environments.

    false

    raw_traces

    Forward traces without processing. When set to false (default), traces are processed using the unified JSON parser with strict validation. When set to true, trace data is forwarded as raw log messages without validation or processing.

    false

    routable

    If set to true, the data generated by the plugin will be routable, meaning that it can be forwarded to other plugins or outputs. If set to false, the data will be discarded.

    true

    storage.pause_on_chunks_overlimit

Enable pausing of the input when it reaches its chunk limit.

    none

    storage.type

    Sets the storage type for this input, one of: filesystem, memory or memrb.

    memory

    successful_response_code

    Allows for setting a successful response code. Supported values: 200, 201, or 204.

    201

    tag

    Set a tag for the events generated by this input plugin.

    none

    tag_from_uri

    By default, the tag will be created from the URI. For example, v1_metrics from /v1/metrics. This must be set to false if using tag.

    true

    tag_key

    Record accessor key to use for generating tags from incoming records.

    none

    thread.ring_buffer.capacity

Number of slots in the ring buffer for data entries when running in threaded mode. Each slot can hold one data entry.

    1024

    thread.ring_buffer.window

Percentage threshold (1-100) of the ring buffer capacity at which a flush is triggered when running in threaded mode.

    5

    threaded

    Enable for this input to run in a separate dedicated thread.

    false

    tls

    Enable or disable TLS/SSL support.

    off

    tls.ca_file

    Absolute path to CA certificate file.

    none

    tls.ca_path

    Absolute path to scan for certificate files.

    none

    tls.ciphers

    Specify TLS ciphers up to TLSv1.2.

    none

    tls.crt_file

    Absolute path to Certificate file.

    none

    tls.debug

Set TLS debug level. Accepts 0 (No debug), 1 (Error), 2 (State change), 3 (Informational), and 4 (Verbose).

    1

    tls.key_file

    Absolute path to private Key file.

    none

    tls.key_passwd

    Optional password for tls.key_file file.

    none

    tls.max_version

    Specify the maximum version of TLS.

    none

    tls.min_version

    Specify the minimum version of TLS.

    none

    tls.verify

    Force certificate validation.

    on

    tls.verify_hostname

    Enable or disable to verify hostname.

    off

    tls.vhost

    Hostname to be used for TLS SNI extension.

    none


    Build and install

Fluent Bit uses CMake as its build system.

    hashtag
    Requirements

    To build and install Fluent Bit from source, you must also install the following packages:

    • bison

• build-essential

    • cmake (version 3.31.6 or later)

    • flex

    • libssl-dev

    • libyaml-dev

    • pkg-config

Additionally, certain input or output plugins might depend on additional components. For example, some plugins require Kafka.

    hashtag
    Prepare environment

If you already know how CMake works, you can skip this section and review the available build options.

    The following steps explain how to build and install the project with the default options.

    1. Change to the build/ directory inside the Fluent Bit sources:

2. Let CMake configure the project, specifying where the root path is located:

      This command displays a series of results similar to:

3. Start the compilation process using the make command:

    hashtag
    Build options

    Fluent Bit provides configurable options to CMake that can be enabled or disabled.
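Options are passed to CMake as -D definitions at configure time. For example, using options from the tables below (run from the build/ directory):

```shell
# Configure a build with debug symbols and without the Kafka output plugin.
cmake -DFLB_DEBUG=On -DFLB_OUT_KAFKA=Off ../
```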

    hashtag
    General options

    Option
    Description
    Default

    hashtag
    Development options

    Option
    Description
    Default

    hashtag
    Optimization options

    Option
    Description
    Default

    hashtag
    Input plugins

    Input plugins gather information from a specific source type like network interfaces, some built-in metrics, or through a specific input device.

    The following input plugins are available:

    Option
    Description
    Default

    hashtag
    Processor plugins

Processor plugins handle events within processor pipelines, letting you modify, enrich, or drop events.

    The following table describes the processors available:

    Option
    Description
    Default

    hashtag
    Filter plugins

    Filter plugins let you modify, enrich or drop records.

    The following table describes the filters available on this version:

    Option
    Description
    Default

    hashtag
    Output plugins

    Output plugins let you flush the information to some external interface, service, or terminal.

    The following table describes the output plugins available:

    Option
    Description
    Default
    Running make displays results similar to:

  • To continue installing the binary on the system, use make install:

    If the command indicates insufficient permissions, prefix the command with sudo.

FLB_AVRO_ENCODER

    Build with Avro encoding support

    No

    FLB_AWS

    Enable AWS support

    Yes

    FLB_AWS_ERROR_REPORTER

    Build with AWS error reporting support

    No

    FLB_BENCHMARKS

    Enable benchmarks

    No

    FLB_BINARY

    Build executable

    Yes

    FLB_CHUNK_TRACE

    Enable chunk traces

    Yes

    FLB_COVERAGE

    Build with code-coverage

    No

    FLB_CONFIG_YAML

    Enable YAML configuration support

    Yes

    FLB_CORO_STACK_SIZE

    Set coroutine stack size

    FLB_CUSTOM_CALYPTIA

    Enable Calyptia Support

    Yes

    FLB_ENFORCE_ALIGNMENT

    Enable limited platform specific aligned memory access

    No

    FLB_EXAMPLES

    Build examples

    Yes

    FLB_HTTP_SERVER

    Enable HTTP Server

    Yes

    FLB_INOTIFY

    Enable Inotify support

    Yes

    FLB_JEMALLOC

    Build with Jemalloc support

    No

    FLB_KAFKA

    Enable Kafka support

    Yes

    FLB_LUAJIT

    Enable Lua scripting support

    Yes

    FLB_METRICS

    Enable metrics support

    Yes

    FLB_MTRACE

    Enable mtrace support

    No

    FLB_PARSER

    Build with Parser support

    Yes

    FLB_POSIX_TLS

    Force POSIX thread storage

    No

    FLB_PROFILES

    Enable profiles support

    Yes

    FLB_PROXY_GO

    Enable Go plugins support

    Yes

    FLB_RECORD_ACCESSOR

    Enable record accessor

    Yes

    FLB_REGEX

    Build with Regex support

    Yes

    FLB_RELEASE

    Build with release mode (-O2 -g -DNDEBUG)

    No

    FLB_SHARED_LIB

    Build shared library

    Yes

    FLB_SIGNV4

    Enable AWS Signv4 support

    Yes

    FLB_SIMD

    Enable SIMD support

    No

    FLB_SQLDB

    Enable SQL embedded database support

    Yes

    FLB_STATIC_CONF

    Build binary using static configuration files. The value of this option must be a directory containing configuration files.

    FLB_STREAM_PROCESSOR

    Enable Stream Processor

    Yes

    FLB_TLS

    Build with SSL/TLS support

    Yes

    FLB_UNICODE_ENCODER

    Build with Unicode (UTF-16LE, UTF-16BE) encoding support

    Yes (if C++ compiler found)

    FLB_UTF8_ENCODER

    Build with UTF8 encoding support

    Yes

    FLB_WASM

    Build with Wasm runtime support

    Yes

    FLB_WASM_STACK_PROTECT

    Build with WASM runtime with strong stack protector flags

    No

    FLB_WAMRC

    Build with Wasm AOT compiler executable

    No

    FLB_WINDOWS_DEFAULTS

    Build with predefined Windows settings

    Yes

    FLB_ZIG

    Enable zig integration

    Yes

FLB_SMALL

    Optimize for small size

    No

    FLB_TESTS_INTERNAL

    Enable internal tests

    No

    FLB_TESTS_INTERNAL_FUZZ

    Enable internal fuzz tests

    No

    FLB_TESTS_OSSFUZZ

    Enable OSS-Fuzz build

    No

    FLB_TESTS_RUNTIME

    Enable runtime tests

    No

    FLB_TRACE

    Enable trace mode

    No

    FLB_VALGRIND

    Enable Valgrind support

    No

    Enable CPU input plugin

    On

    Enable Disk I/O Metrics input plugin

    On

    Enable Docker input plugin

    On

    Enable Docker events input plugin

    On

    Enable Dummy input plugin

    On

    Enable Linux eBPF input plugin

    Off

    Enable Elasticsearch (Bulk API) input plugin

    On

    Enable Exec input plugin

    On

    Enable Exec WASI input plugin

    On

    Enable Fluent Bit metrics input plugin

    On

    Enable Fluent Bit internal logs input plugin

    On

    Enable Forward input plugin

    On

    Enable GPU metrics input plugin

    On

    Enable Head input plugin

    On

    Enable Health input plugin

    On

    Enable HTTP input plugin

    On

    Enable Kafka input plugin

    On

    Enable Kernel log input plugin

    On

    Enable Kubernetes Events input plugin

    On

    Enable Memory input plugin

    On

    Enable MQTT Broker input plugin

    On

    Enable Network I/O metrics input plugin

    On

    Enable NGINX metrics input plugin

    On

    Enable Node exporter metrics input plugin

    On

    Enable OpenTelemetry input plugin

    On

    Enable Podman metrics input plugin

    On

    Enable Process input plugin

    On

    Enable Process exporter metrics input plugin

    On

    Enable Prometheus remote write input plugin

    On

    Enable Prometheus scrape metrics input plugin

    On

    Enable Prometheus textfile input plugin

    On

    Enable Random input plugin

    On

    Enable Serial input plugin

    On

    Enable Splunk input plugin

    On

    Enable StatsD input plugin

    On

    Enable Standard input plugin

    On

    Enable Syslog input plugin

    On

    Enable Systemd input plugin

    On

    Enable Tail input plugin

    On

    Enable TCP input plugin

    On

    Enable Thermal input plugin

    On

    Enable UDP input plugin

    On

    Enable Windows Event Log input plugin (Windows Only)

    Off

    Enable Windows Event Log input plugin using winevt.h API (Windows Only)

    Off

    Enable Windows exporter metrics input plugin

    On

    Enable Windows system statistics input plugin

    Off

    Enable metrics selector processor

    On

    Enable OpenTelemetry envelope processor

    On

    Enable sampling processor

    On

    Enable SQL processor

    On

    Enable Topological Data Analysis (TDA) processor

    On

    Enable AWS ECS metadata filter

    On

    Enable Expect data test filter

    On

    Enable Geoip2 filter

    On

    Enable Grep filter

    On

    Enable Kubernetes metadata filter

    On

    Enable Log derived metrics filter

    On

    Enable Lua scripting filter

    On

    Enable Modify filter

    On

    Enable Multiline stack trace filter

    On

    Enable Nest filter

    On

    Enable Nightfall filter

    On

    Enable Parser filter

    On

    Enable Record Modifier filter

    On

    Enable Rewrite Tag filter

    On

    Enable Stdout filter

    On

    Enable Sysinfo filter

    On

    Enable Tensorflow filter

    Off

    Enable Throttle filter

    On

    Enable Type Converter filter

    On

    Enable Wasm filter

    On

    Enable Azure Data Explorer (Kusto) output plugin

    On

    Enable Azure Log Ingestion output plugin

    On

    Enable Google BigQuery output plugin

    On

    Enable Google Chronicle output plugin

    On

    Enable Amazon CloudWatch output plugin

    On

    Enable Counter output plugin

    On

    Enable Datadog output plugin

    On

    Enable Elastic Search output plugin

    On

    Enable Exit output plugin

    On

    Enable File output plugin

    On

    Enable Flow counter output plugin

    On

Enable Forward (Fluentd) output plugin

    On

    Enable GELF output plugin

    On

    Enable HTTP output plugin

    On

    Enable InfluxDB output plugin

    On

    Enable Kafka output

    On

    Enable Kafka REST Proxy output plugin

    On

    Enable Amazon Kinesis Data Firehose output plugin

    On

    Enable Amazon Kinesis Data Streams output plugin

    On

    FLB_OUT_LIB

    Enable Library output plugin

    On

    Enable LogDNA output plugin

    On

    Enable Loki output plugin

    On

    Enable NATS output plugin

    On

    Enable New Relic output plugin

    On

    Enable NULL output plugin

    On

    Enable OpenSearch output plugin

    On

    Enable OpenTelemetry output plugin

    On

    Enable Oracle Cloud Infrastructure Logging output plugin

    On

    Enable PostgreSQL output plugin

    Off

    Enable Plot output plugin

    On

    Enable Prometheus exporter output plugin

    On

    Enable Prometheus remote write output plugin

    On

    Enable Amazon S3 output plugin

    On

    Enable Apache Skywalking output plugin

    On

    Enable Slack output plugin

    On

    Enable Splunk output plugin

    On

    Enable Stackdriver output plugin

    On

    Enable STDOUT output plugin

    On

    Enable Syslog output plugin

    On

    Enable Treasure Data output plugin

    On

    Enable TCP/TLS output plugin

    On

    Enable UDP output plugin

    On

    Enable Vivo exporter output plugin

    On

    Enable WebSocket output plugin

    On

    FLB_ALL

    Enable all features available

    No

    FLB_ARROW

    Build with Apache Arrow support

    No

    FLB_BACKTRACE

    Enable stack trace support

    Yes

    FLB_DEBUG

    Build with debug mode (-g)

    No

    FLB_MSGPACK_TO_JSON_INIT_BUFFER_SIZE

    Determine initial buffer size for msgpack to json conversion in terms of memory used by payload.

    2.0

    FLB_MSGPACK_TO_JSON_REALLOC_BUFFER_SIZE

    Determine percentage of reallocation size when msgpack to json conversion buffer runs out of memory.

    0.1

    FLB_IN_BLOB

    Enable Blob input plugin

    On

    FLB_IN_COLLECTD

    Enable Collectd input plugin

    On

    FLB_PROCESSOR_CONTENT_MODIFIER

    Enable content modifier processor

    On

    FLB_PROCESSOR_LABELS

    Enable metrics label manipulation processor

    On

    FLB_FILTER_AWS

    Enable AWS metadata filter

    On

    FLB_FILTER_CHECKLIST

    Enable Checklist filter

    On

    FLB_OUT_AZURE

    Enable Microsoft Azure output plugin

    On

    FLB_OUT_AZURE_BLOB

    Enable Microsoft Azure storage blob output plugin

    On


    FLB_AVRO_ENCODER

    FLB_SMALL

    Docker

Fluent Bit container images are available on Docker Hub, ready for production usage. The available images can be deployed on multiple architectures.

    hashtag
    Start Docker

    Use the following command to start Docker with Fluent Bit:

    hashtag
    Use a configuration file

    Use the following command to start Fluent Bit while using a configuration file:
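For example, a configuration file in the current directory can be mounted into the container; the container path shown here assumes the image's default configuration location, so adjust it for your image version:

```shell
# Mount a local configuration file over the image's default config path.
docker run --rm -ti \
    -v "$(pwd)/fluent-bit.conf:/fluent-bit/etc/fluent-bit.conf" \
    cr.fluentbit.io/fluent/fluent-bit
```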

    hashtag
    Tags and versions

The following table describes the Linux container tags that are available in the Docker Hub repository:

    Tags
    Manifest Architectures
    Description

    It's strongly suggested that you always use the latest image of Fluent Bit.

    Container images for Windows Server 2019 and Windows Server 2022 are provided for v2.0.6 and later. These can be found as tags on the same Docker Hub registry.

    hashtag
    Multi-architecture images

Fluent Bit production stable images are based on Distroless. Focusing on security, these images contain only the Fluent Bit binary, minimal system libraries, and basic configuration.

    Debug images are available for all architectures (for 1.9.0 and later), and contain a full Debian shell and package manager that can be used to troubleshoot or for testing purposes.

    From a deployment perspective, there's no need to specify an architecture. The container client tool that pulls the image gets the proper layer for the running architecture.

    hashtag
    Verify signed container images

Version 1.9 and 2.0 container images are signed using Cosign/Sigstore. Verify these signatures using the cosign tool:

Replace cosign with the name of the installed binary if it differs (for example, cosign-linux-amd64).

    Keyless signing is also provided but is still experimental:

    COSIGN_EXPERIMENTAL=1 is used to allow verification of images signed in keyless mode. To learn more about keyless signing, see the documentation.
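A sketch of both verification modes follows; the public key file name and the image tag used here are assumptions for illustration:

```shell
# Verify an image with a downloaded Fluent Bit release public key
# (the key file name is an assumption).
cosign verify --key fluentbit-cosign.pub cr.fluentbit.io/fluent/fluent-bit:2.0.6

# Keyless verification (experimental).
COSIGN_EXPERIMENTAL=1 cosign verify cr.fluentbit.io/fluent/fluent-bit:2.0.6
```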

    hashtag
    Get started

1. Download the latest stable image from the 2.0 series:

    2. After the image is in place, run the following test which makes Fluent Bit measure CPU usage by the container:
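A minimal version of that test uses the cpu input and stdout output plugins; the binary path follows the image's layout, and the exact flags in the official example may differ:

```shell
# Measure CPU usage every second and flush the results to standard output.
docker run --rm -ti cr.fluentbit.io/fluent/fluent-bit \
    /fluent-bit/bin/fluent-bit -i cpu -o stdout -f 1
```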

That command makes Fluent Bit measure CPU usage every second and flush the results to the standard output. For example:

    hashtag
    FAQ

    hashtag
Why is there no Fluent Bit Docker image based on Alpine Linux?

Alpine Linux uses the Musl C library instead of Glibc. Musl isn't fully compatible with Glibc, which has caused issues in the following areas when used with Fluent Bit:

    • Memory Allocator: To run properly in high-load environments, Fluent Bit uses Jemalloc as a default memory allocator which reduces fragmentation and provides better performance. Jemalloc can't run smoothly with Musl and requires extra work.

    • Alpine Linux Musl functions bootstrap have a compatibility issue when loading Golang shared libraries. This causes problems when trying to load Golang output plugins in Fluent Bit.

    • Alpine Linux Musl Time format parser doesn't support Glibc extensions.

    hashtag
    Why use distroless containers?

The reasons for using distroless are well covered in the Distroless project documentation.

    • Include only what you need, reduce the attack surface available.

    • Reduces size and improves performance.

    • Reduces false positives on scans (and reduces resources required for scanning).

    With any choice, there are downsides:

    • No shell or package manager to update or add things.

      • Generally, dynamic updating is a bad idea in containers as the time it's done affects the outcome: two containers started at different times using the same base image can perform differently or get different dependencies.

      • A better approach is to rebuild a new image version. You can do this with Distroless, but it's harder and requires multistage builds or similar to provide the new dependencies.

    Using exec to access a container will potentially impact resource limits.

For debugging, ephemeral debug containers are now available in Kubernetes:

    • This can be a significantly different container from the one you want to investigate, with lots of extra tools or even a different base.

    • No resource limits applied to this container, which can be good or bad.

    • Runs in pod namespaces. It's another container that can access everything the others can.

    make install
    cd build/
    cmake ../
    -- The C compiler identification is GNU 4.9.2
    -- Check for working C compiler: /usr/bin/cc
    -- Check for working C compiler: /usr/bin/cc -- works
    -- Detecting C compiler ABI info
    -- Detecting C compiler ABI info - done
    -- The CXX compiler identification is GNU 4.9.2
    -- Check for working CXX compiler: /usr/bin/c++
    -- Check for working CXX compiler: /usr/bin/c++ -- works
    ...
    -- Could NOT find Doxygen (missing:  DOXYGEN_EXECUTABLE)
    -- Looking for accept4
    -- Looking for accept4 - not found
    -- Configuring done
    -- Generating done
    -- Build files have been written to: /home/edsiper/coding/fluent-bit/build
    make
    Scanning dependencies of target msgpack
    [  2%] Building C object lib/msgpack-1.1.0/CMakeFiles/msgpack.dir/src/unpack.c.o
    [  4%] Building C object lib/msgpack-1.1.0/CMakeFiles/msgpack.dir/src/objectc.c.o
    [  7%] Building C object lib/msgpack-1.1.0/CMakeFiles/msgpack.dir/src/version.c.o
    ...
    [ 19%] Building C object lib/monkey/mk_core/CMakeFiles/mk_core.dir/mk_file.c.o
    [ 21%] Building C object lib/monkey/mk_core/CMakeFiles/mk_core.dir/mk_rconf.c.o
    [ 23%] Building C object lib/monkey/mk_core/CMakeFiles/mk_core.dir/mk_string.c.o
    ...
    Scanning dependencies of target fluent-bit-static
    [ 66%] Building C object src/CMakeFiles/fluent-bit-static.dir/flb_pack.c.o
    [ 69%] Building C object src/CMakeFiles/fluent-bit-static.dir/flb_input.c.o
    [ 71%] Building C object src/CMakeFiles/fluent-bit-static.dir/flb_output.c.o
    ...
    Linking C executable ../bin/fluent-bit
    [100%] Built target fluent-bit-bin
    docker run -ti cr.fluentbit.io/fluent/fluent-bit
    FLB_IN_CPU
    FLB_IN_DISK
    FLB_IN_DOCKER
    FLB_IN_DOCKER_EVENTS
    FLB_IN_DUMMY
    FLB_IN_EBPF
    FLB_IN_ELASTICSEARCH
    FLB_IN_EXEC
    FLB_IN_EXEC_WASI
    FLB_IN_FLUENTBIT_METRICS
    FLB_IN_FLUENTBIT_LOGS
    FLB_IN_FORWARD
    FLB_IN_GPU_METRICS
    FLB_IN_HEAD
    FLB_IN_HEALTH
    FLB_IN_HTTP
    FLB_IN_KAFKA
    FLB_IN_KMSG
    FLB_IN_KUBERNETES_EVENTS
    FLB_IN_MEM
    FLB_IN_MQTT
    FLB_IN_NETIF
    FLB_IN_NGINX_EXPORTER_METRICS
    FLB_IN_NODE_EXPORTER_METRICS
    FLB_IN_OPENTELEMETRY
    FLB_IN_PODMAN_METRICS
    FLB_IN_PROC
    FLB_IN_PROCESS_EXPORTER_METRICS
    FLB_IN_PROMETHEUS_REMOTE_WRITE
    FLB_IN_PROMETHEUS_SCRAPE_METRICS
    FLB_IN_PROMETHEUS_TEXTFILE
    FLB_IN_RANDOM
    FLB_IN_SERIAL
    FLB_IN_SPLUNK
    FLB_IN_STATSD
    FLB_IN_STDIN
    FLB_IN_SYSLOG
    FLB_IN_SYSTEMD
    FLB_IN_TAIL
    FLB_IN_TCP
    FLB_IN_THERMAL
    FLB_IN_UDP
    FLB_IN_WINLOG
    FLB_IN_WINEVTLOG
    FLB_IN_WINDOWS_EXPORTER_METRICS
    FLB_IN_WINSTAT
    FLB_PROCESSOR_METRICS_SELECTOR
    FLB_PROCESSOR_OPENTELEMETRY_ENVELOPE
    FLB_PROCESSOR_SAMPLING
    FLB_PROCESSOR_SQL
    FLB_PROCESSOR_TDA
    FLB_FILTER_ECS
    FLB_FILTER_EXPECT
    FLB_FILTER_GIOIP2
    FLB_FILTER_GREP
    FLB_FILTER_KUBERNETES
    FLB_FILTER_LOG_TO_METRICS
    FLB_FILTER_LUA
    FLB_FILTER_MODIFY
    FLB_FILTER_MULTILINE
    FLB_FILTER_NEST
    FLB_FILTER_NIGHTFALL
    FLB_FILTER_PARSER
    FLB_FILTER_RECORD_MODIFIER
    FLB_FILTER_REWRITE_TAG
    FLB_FILTER_STDOUT
    FLB_FILTER_SYSINFO
    FLB_FILTER_TENSORFLOW
    FLB_FILTER_THROTTLE
    FLB_FILTER_TYPE_CONVERTER
    FLB_FILTER_WASM
    FLB_OUT_AZURE_KUSTO
    FLB_OUT_AZURE_LOGS_INGESTION
    FLB_OUT_BIGQUERY
    FLB_OUT_CHRONICLE
    FLB_OUT_CLOUDWATCH_LOGS
    FLB_OUT_COUNTER
    FLB_OUT_DATADOG
    FLB_OUT_ES
    FLB_OUT_EXIT
    FLB_OUT_FILE
    FLB_OUT_FLOWCOUNTER
    FLB_OUT_FORWARD
    FLB_OUT_GELF
    FLB_OUT_HTTP
    FLB_OUT_INFLUXDB
    FLB_OUT_KAFKA
    FLB_OUT_KAFKA_REST
    FLB_OUT_KINESIS_FIREHOSE
    FLB_OUT_KINESIS_STREAMS
    FLB_OUT_LOGDNA
    FLB_OUT_LOKI
    FLB_OUT_NATS
    FLB_OUT_NRLOGS
    FLB_OUT_NULL
    FLB_OUT_OPENSEARCH
    FLB_OUT_OPENTELEMETRY
    FLB_OUT_ORACLE_LOG_ANALYTICS
    FLB_OUT_PGSQL
    FLB_OUT_PLOT
    FLB_OUT_PROMETHEUS_EXPORTER
    FLB_OUT_PROMETHEUS_REMOTE_WRITE
    FLB_OUT_S3
    FLB_OUT_SKYWALKING
    FLB_OUT_SLACK
    FLB_OUT_SPLUNK
    FLB_OUT_STACKDRIVER
    FLB_OUT_STDOUT
    FLB_OUT_SYSLOG
    FLB_OUT_TD
    FLB_OUT_TCP
    FLB_OUT_UDP
    FLB_OUT_VIVO_EXPORTER
    FLB_OUT_WEBSOCKET

    amd64, arm64, arm/v7

    Debug images

    4.2.3

    amd64, arm64, arm/v7

    Release

    4.2.2-debug

    amd64, arm64, arm/v7

    Debug images

    4.2.2

    amd64, arm64, arm/v7

    Release

    4.2.1-debug

    amd64, arm64, arm/v7

    Debug images

    4.2.1

    amd64, arm64, arm/v7

    Release

    4.2.0-debug

    amd64, arm64, arm/v7

    Debug images

    4.2.0

    amd64, arm64, arm/v7

    Release

    4.1.2-debug

    amd64, arm64, arm/v7

    Debug images

    4.1.2

    amd64, arm64, arm/v7

    Release v4.1.2

    4.1.1-debug

    amd64, arm64, arm/v7

    Debug images

    4.1.1

    amd64, arm64, arm/v7

    Release

    4.1.0-debug

    amd64, arm64, arm/v7

    Debug images

    4.1.0

    amd64, arm64, arm/v7

    Release

    4.0.12-debug

    amd64, arm64, arm/v7

    Debug images

    4.0.12

    amd64, arm64, arm/v7

    Release

    4.0.11-debug

    amd64, arm64, arm/v7

    Debug images

    4.0.11

    amd64, arm64, arm/v7

    Release

    4.0.10-debug

    amd64, arm64, arm/v7

    Debug images

    4.0.10

    amd64, arm64, arm/v7

    Release

    4.0.9-debug

    amd64, arm64, arm/v7

    Debug images

    4.0.9

    amd64, arm64, arm/v7

    Release

    4.0.8-debug

    amd64, arm64, arm/v7

    Debug images

    4.0.8

    amd64, arm64, arm/v7

    Release

    4.0.7-debug

    amd64, arm64, arm/v7

    Debug images

    4.0.7

    amd64, arm64, arm/v7

    Release

    4.0.6-debug

    amd64, arm64, arm/v7

    Debug images

    4.0.6

    amd64, arm64, arm/v7

    Release

| Tag | Architectures | Description |
| --- | --- | --- |
| 4.0.5-debug | amd64, arm64, arm/v7 | Debug images |
| 4.0.5 | amd64, arm64, arm/v7 | Release |
| 4.0.4-debug | amd64, arm64, arm/v7 | Debug images |
| 4.0.4 | amd64, arm64, arm/v7 | Release |
| 4.0.3-debug | amd64, arm64, arm/v7 | Debug images |
| 4.0.3 | amd64, arm64, arm/v7 | Release |
| 4.0.1-debug | amd64, arm64, arm/v7 | Debug images |
| 4.0.1 | amd64, arm64, arm/v7 | Release |
| 4.0.0-debug | amd64, arm64, arm/v7 | Debug images |
| 4.0.0 | amd64, arm64, arm/v7 | Release |
| 3.2.10-debug | amd64, arm64, arm/v7, s390x | Debug images |
| 3.2.10 | amd64, arm64, arm/v7, s390x | Release |
| 3.2.9-debug | amd64, arm64, arm/v7, s390x | Debug images |
| 3.2.9 | amd64, arm64, arm/v7, s390x | Release |
| 3.2.8-debug | amd64, arm64, arm/v7, s390x | Debug images |
| 3.2.8 | amd64, arm64, arm/v7, s390x | Release |
| 3.2.7-debug | amd64, arm64, arm/v7, s390x | Debug images |
| 3.2.7 | amd64, arm64, arm/v7, s390x | Release |
| 3.2.6-debug | amd64, arm64, arm/v7, s390x | Debug images |
| 3.2.6 | amd64, arm64, arm/v7, s390x | Release |
| 3.2.5-debug | amd64, arm64, arm/v7, s390x | Debug images |
| 3.2.5 | amd64, arm64, arm/v7, s390x | Release |
| 3.2.4-debug | amd64, arm64, arm/v7, s390x | Debug images |
| 3.2.4 | amd64, arm64, arm/v7, s390x | Release |
| 3.2.3-debug | amd64, arm64, arm/v7, s390x | Debug images |
| 3.2.3 | amd64, arm64, arm/v7, s390x | Release |
| 3.2.2-debug | amd64, arm64, arm/v7, s390x | Debug images |
| 3.2.2 | amd64, arm64, arm/v7, s390x | Release |
| 3.2.1-debug | amd64, arm64, arm/v7, s390x | Debug images |
| 3.2.1 | amd64, arm64, arm/v7, s390x | Release |
| 3.1.10-debug | amd64, arm64, arm/v7, s390x | Debug images |
| 3.1.10 | amd64, arm64, arm/v7, s390x | Release |
| 3.1.9-debug | amd64, arm64, arm/v7, s390x | Debug images |
| 3.1.9 | amd64, arm64, arm/v7, s390x | Release |
| 3.1.8-debug | amd64, arm64, arm/v7, s390x | Debug images |
| 3.1.8 | amd64, arm64, arm/v7, s390x | Release |
| 3.1.7-debug | amd64, arm64, arm/v7, s390x | Debug images |
| 3.1.7 | amd64, arm64, arm/v7, s390x | Release |
| 3.1.6-debug | amd64, arm64, arm/v7, s390x | Debug images |
| 3.1.6 | amd64, arm64, arm/v7, s390x | Release |
| 3.1.5-debug | amd64, arm64, arm/v7, s390x | Debug images |
| 3.1.5 | amd64, arm64, arm/v7, s390x | Release |
| 3.1.4-debug | amd64, arm64, arm/v7, s390x | Debug images |
| 3.1.4 | amd64, arm64, arm/v7, s390x | Release |
| 3.1.3-debug | amd64, arm64, arm/v7, s390x | Debug images |
| 3.1.3 | amd64, arm64, arm/v7, s390x | Release |
| 3.1.2-debug | amd64, arm64, arm/v7, s390x | Debug images |
| 3.1.2 | amd64, arm64, arm/v7, s390x | Release |
| 3.1.1-debug | amd64, arm64, arm/v7, s390x | Debug images |
| 3.1.1 | amd64, arm64, arm/v7, s390x | Release |
| 3.1.0-debug | amd64, arm64, arm/v7, s390x | Debug images |
| 3.1.0 | amd64, arm64, arm/v7, s390x | Release |
| 3.0.7-debug | amd64, arm64, arm/v7, s390x | Debug images |
| 3.0.7 | amd64, arm64, arm/v7, s390x | Release |
| 3.0.6-debug | amd64, arm64, arm/v7, s390x | Debug images |
| 3.0.6 | amd64, arm64, arm/v7, s390x | Release |
| 3.0.5-debug | amd64, arm64, arm/v7, s390x | Debug images |
| 3.0.5 | amd64, arm64, arm/v7, s390x | Release |
| 3.0.4-debug | amd64, arm64, arm/v7, s390x | Debug images |
| 3.0.4 | amd64, arm64, arm/v7, s390x | Release |
| 3.0.3-debug | amd64, arm64, arm/v7, s390x | Debug images |
| 3.0.3 | amd64, arm64, arm/v7, s390x | Release |
| 3.0.2-debug | amd64, arm64, arm/v7, s390x | Debug images |
| 3.0.2 | amd64, arm64, arm/v7, s390x | Release |
| 3.0.1-debug | amd64, arm64, arm/v7, s390x | Debug images |
| 3.0.1 | amd64, arm64, arm/v7, s390x | Release |
| 3.0.0-debug | amd64, arm64, arm/v7, s390x | Debug images |
| 3.0.0 | amd64, arm64, arm/v7, s390x | Release |
| 2.2.2-debug | amd64, arm64, arm/v7, s390x | Debug images |
| 2.2.2 | amd64, arm64, arm/v7, s390x | Release |
| 2.2.1-debug | amd64, arm64, arm/v7, s390x | Debug images |
| 2.2.1 | amd64, arm64, arm/v7, s390x | Release |
| 2.2.0-debug | amd64, arm64, arm/v7 | Debug images |
| 2.2.0 | amd64, arm64, arm/v7 | Release |
| 2.1.10-debug | amd64, arm64, arm/v7 | Debug images |
| 2.1.10 | amd64, arm64, arm/v7 | Release |
| 2.1.9-debug | amd64, arm64, arm/v7 | Debug images |
| 2.1.9 | amd64, arm64, arm/v7 | Release |
| 2.1.8-debug | amd64, arm64, arm/v7 | Debug images |
| 2.1.8 | amd64, arm64, arm/v7 | Release |
| 2.1.7-debug | amd64, arm64, arm/v7 | Debug images |
| 2.1.7 | amd64, arm64, arm/v7 | Release |
| 2.1.6-debug | amd64, arm64, arm/v7 | Debug images |
| 2.1.6 | amd64, arm64, arm/v7 | Release |
| 2.1.5 | amd64, arm64, arm/v7 | Release |
| 2.1.5-debug | amd64, arm64, arm/v7 | Debug images |
| 2.1.3 | amd64, arm64, arm/v7 | Release |
| 2.1.3-debug | amd64, arm64, arm/v7 | Debug images |
| 2.1.2 | amd64, arm64, arm/v7 | Release |
| 2.1.2-debug | amd64, arm64, arm/v7 | Debug images |
| 2.1.1 | amd64, arm64, arm/v7 | Release |
| 2.1.1-debug | amd64, arm64, arm/v7 | v2.1.x releases (production + debug) |
| 2.1.0 | amd64, arm64, arm/v7 | Release |
| 2.1.0-debug | amd64, arm64, arm/v7 | v2.1.x releases (production + debug) |
| 2.0.11 | amd64, arm64, arm/v7 | Release |
| 2.0.11-debug | amd64, arm64, arm/v7 | v2.0.x releases (production + debug) |
| 2.0.10 | amd64, arm64, arm/v7 | Release |
| 2.0.10-debug | amd64, arm64, arm/v7 | v2.0.x releases (production + debug) |
| 2.0.9 | amd64, arm64, arm/v7 | Release |
| 2.0.9-debug | amd64, arm64, arm/v7 | v2.0.x releases (production + debug) |
| 2.0.8 | amd64, arm64, arm/v7 | Release |
| 2.0.8-debug | amd64, arm64, arm/v7 | v2.0.x releases (production + debug) |
| 2.0.6 | amd64, arm64, arm/v7 | Release |
| 2.0.6-debug | amd64, arm64, arm/v7 | v2.0.x releases (production + debug) |
| 2.0.5 | amd64, arm64, arm/v7 | Release |
| 2.0.5-debug | amd64, arm64, arm/v7 | v2.0.x releases (production + debug) |
| 2.0.4 | amd64, arm64, arm/v7 | Release |
| 2.0.4-debug | amd64, arm64, arm/v7 | v2.0.x releases (production + debug) |
| 2.0.3 | amd64, arm64, arm/v7 | Release |
| 2.0.3-debug | amd64, arm64, arm/v7 | v2.0.x releases (production + debug) |
| 2.0.2 | amd64, arm64, arm/v7 | Release |
| 2.0.2-debug | amd64, arm64, arm/v7 | v2.0.x releases (production + debug) |
| 2.0.1 | amd64, arm64, arm/v7 | Release |
| 2.0.1-debug | amd64, arm64, arm/v7 | v2.0.x releases (production + debug) |
| 2.0.0 | amd64, arm64, arm/v7 | Release |
| 2.0.0-debug | amd64, arm64, arm/v7 | v2.0.x releases (production + debug) |
| 1.9.9 | amd64, arm64, arm/v7 | Release |
| 1.9.9-debug | amd64, arm64, arm/v7 | v1.9.x releases (production + debug) |
| 1.9.8 | amd64, arm64, arm/v7 | Release |
| 1.9.8-debug | amd64, arm64, arm/v7 | v1.9.x releases (production + debug) |
| 1.9.7 | amd64, arm64, arm/v7 | Release |
| 1.9.7-debug | amd64, arm64, arm/v7 | v1.9.x releases (production + debug) |
| 1.9.6 | amd64, arm64, arm/v7 | Release |
| 1.9.6-debug | amd64, arm64, arm/v7 | v1.9.x releases (production + debug) |
| 1.9.5 | amd64, arm64, arm/v7 | Release |
| 1.9.5-debug | amd64, arm64, arm/v7 | v1.9.x releases (production + debug) |
| 1.9.4 | amd64, arm64, arm/v7 | Release |
| 1.9.4-debug | amd64, arm64, arm/v7 | v1.9.x releases (production + debug) |
| 1.9.3 | amd64, arm64, arm/v7 | Release |
| 1.9.3-debug | amd64, arm64, arm/v7 | v1.9.x releases (production + debug) |
| 1.9.2 | amd64, arm64, arm/v7 | Release |
| 1.9.2-debug | amd64, arm64, arm/v7 | v1.9.x releases (production + debug) |
| 1.9.1 | amd64, arm64, arm/v7 | Release |
| 1.9.1-debug | amd64, arm64, arm/v7 | v1.9.x releases (production + debug) |
| 1.9.0 | amd64, arm64, arm/v7 | Release |
| 1.9.0-debug | amd64, arm64, arm/v7 | v1.9.x releases (production + debug) |

The Fluent Bit maintainers prefer Distroless and Debian base images for security and maintenance reasons.

  • Reduces supply chain security requirements to only what you need.

  • Helps prevent unauthorised processes or users from interacting with the container.

  • Less need to harden the container (and container runtime, K8s, and so on).

  • Faster CI/CD processes.

  • Debugging can be harder.

    • More specifically, you need applications set up to properly expose information for debugging, rather than relying on traditional approaches such as attaching to processes or dumping memory. This can be an upfront cost rather than a runtime cost, but it shifts left in the development process, so it's hopefully a net reduction overall.

  • Assumption that Distroless is secure: nothing is fully secure and exploits still exist, so it doesn't remove the need to secure your system.

  • Sometimes you need to use a common base image, such as with audits, security, health, and so on.

  • Might require the pod architecture to share volumes or other information.

  • Requires more recent versions of K8s and the container runtime, plus RBAC allowing it.

| Tag | Architectures | Description |
| --- | --- | --- |
| 5.0.2-debug | amd64, arm64, arm/v7 | Debug images |
| 5.0.2 | amd64, arm64, arm/v7 | Release v5.0.2 |

    fluent/fluent-bitarrow-up-right
    Distrolessarrow-up-right
    install guidearrow-up-right
    Sigstore keyless signaturearrow-up-right
    Why should I use Distroless images?arrow-up-right
    https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/#ephemeral-containerarrow-up-right
    docker run -ti -v ./fluent-bit.conf:/fluent-bit/etc/fluent-bit.conf \
      cr.fluentbit.io/fluent/fluent-bit
    docker run -ti -v ./fluent-bit.yaml:/fluent-bit/etc/fluent-bit.yaml \
      cr.fluentbit.io/fluent/fluent-bit \
      -c /fluent-bit/etc/fluent-bit.yaml
    


    Monitoring

    Learn how to monitor your Fluent Bit data pipelines

Fluent Bit includes features for monitoring the internals of your pipeline, along with integrations for Prometheus and Grafana, health checks, and connectors for external services:

    $ cosign verify --key "https://packages.fluentbit.io/fluentbit-cosign.pub" fluent/fluent-bit:2.0.6
    
    Verification for index.docker.io/fluent/fluent-bit:2.0.6 --
    The following checks were performed on each of these signatures:
      - The cosign claims were validated
      - The signatures were verified against the specified public key
    
    [{"critical":{"identity":{"docker-reference":"index.docker.io/fluent/fluent-bit"},"image":{"docker-manifest-digest":"sha256:c740f90b07f42823d4ecf4d5e168f32ffb4b8bcd87bc41df8f5e3d14e8272903"},"type":"cosign container image signature"},"optional":{"release":"2.0.6","repo":"fluent/fluent-bit","workflow":"Release from staging"}}]
    COSIGN_EXPERIMENTAL=1 cosign verify fluent/fluent-bit:2.0.6
    docker pull cr.fluentbit.io/fluent/fluent-bit:2.0
    docker run -ti cr.fluentbit.io/fluent/fluent-bit:2.0 \
      -i cpu -o stdout -f 1
    [2019/10/01 12:29:02] [ info] [engine] started
    [0] cpu.0: [1504290543.000487750, {"cpu_p"=>0.750000, "user_p"=>0.250000, "system_p"=>0.500000, "cpu0.p_cpu"=>0.000000, "cpu0.p_user"=>0.000000, "cpu0.p_system"=>0.000000, "cpu1.p_cpu"=>1.000000, "cpu1.p_user"=>0.000000, "cpu1.p_system"=>1.000000, "cpu2.p_cpu"=>1.000000, "cpu2.p_user"=>1.000000, "cpu2.p_system"=>0.000000, "cpu3.p_cpu"=>0.000000, "cpu3.p_user"=>0.000000, "cpu3.p_system"=>0.000000}]

  • Health Checks

  • Telemetry Pipeline: hosted service to monitor and visualize your pipelines

  • hashtag
    HTTP server

    Fluent Bit includes an HTTP server for querying internal information and monitoring metrics of each running plugin.

    You can integrate the monitoring interface with Prometheus.

    hashtag
    Get started

    To get started, enable the HTTP server from the configuration file. The following configuration instructs Fluent Bit to start an HTTP server on TCP port 2020 and listen on all network interfaces:

service:
  http_server: on
  http_listen: 0.0.0.0
  http_port: 2020

pipeline:
  inputs:
    - name: cpu

  outputs:
    - name: stdout
      match: '*'
    [SERVICE]
      HTTP_Server  On
      HTTP_Listen  0.0.0.0
      HTTP_PORT    2020
    
    [INPUT]
      Name cpu
    
    [OUTPUT]
      Name  stdout
      Match *

    Start Fluent Bit with the corresponding configuration chosen previously:

    Fluent Bit starts and generates output in your terminal:

    Use curl to gather information about the HTTP server. The following command sends the command output to the jq program, which outputs human-readable JSON data to the terminal.

    hashtag
    REST API interface

    Fluent Bit exposes the following endpoints for monitoring.

| URI | Description | Data format |
| --- | --- | --- |
| / | Fluent Bit build information. | JSON |
| /api/v1/uptime | Return uptime information in seconds. | JSON |

    hashtag
    V1 metrics

    The following descriptions apply to v1 metric endpoints.

    hashtag
    /api/v1/metrics/prometheus endpoint

The following descriptions apply to metrics emitted in Prometheus format by the /api/v1/metrics/prometheus endpoint.

    The following terms are key to understanding how Fluent Bit processes metrics:

    • Record: a single message collected from a source, such as a single long line in a file.

    • Chunk: log records ingested and stored by Fluent Bit input plugin instances. A batch of records in a chunk are tracked together as a single unit.

      The Fluent Bit engine attempts to fit records into chunks of at most 2 MB, but the size can vary at runtime. Chunks are then sent to an output. An output plugin instance can successfully send the full chunk to the destination and mark it as successful. If an unrecoverable error is encountered, the chunk fails entirely. Otherwise, the output can request a retry.
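As an illustration of the chunking model described above, here's a minimal Python sketch. The `build_chunks` helper and the fixed 2 MB cap are illustrative assumptions, not Fluent Bit code; the real engine varies chunk sizes at runtime and tracks far more state per chunk.

```python
CHUNK_LIMIT = 2 * 1024 * 1024  # illustrative ~2 MB target; real chunk sizes vary at runtime


def build_chunks(records):
    """Group encoded records into chunks of at most CHUNK_LIMIT bytes.

    Simplified model: each chunk is tracked (and later delivered, retried,
    or dropped) as a single unit, which is why many metrics count chunks
    rather than individual records.
    """
    chunks, current, size = [], [], 0
    for rec in records:
        rec_size = len(rec)
        # Start a new chunk when adding this record would exceed the limit.
        if current and size + rec_size > CHUNK_LIMIT:
            chunks.append(current)
            current, size = [], 0
        current.append(rec)
        size += rec_size
    if current:
        chunks.append(current)
    return chunks


# Three 1 MiB records fit into two chunks: two records, then one.
demo = [b"x" * (1024 * 1024)] * 3
print([len(c) for c in build_chunks(demo)])  # → [2, 1]
```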

| Metric name | Labels | Description | Type | Unit |
| --- | --- | --- | --- | --- |
| fluentbit_input_bytes_total | name: the name or alias for the input instance | The number of bytes of log records that this input instance has ingested successfully. | counter | bytes |

    hashtag
    /api/v1/storage endpoint

The following descriptions apply to metrics emitted in JSON format by the /api/v1/storage endpoint.

| Metric key | Description | Unit |
| --- | --- | --- |
| chunks.total_chunks | The total number of chunks of records that Fluent Bit is currently buffering. | chunks |
| chunks.mem_chunks | The total number of chunks that are currently buffered in memory. Chunks can be both in memory and on the file system at the same time. | chunks |

    hashtag
    V2 metrics

    The following descriptions apply to v2 metric endpoints.

    hashtag
    /api/v2/metrics/prometheus or /api/v2/metrics endpoint

The following descriptions apply to metrics emitted in Prometheus format by the /api/v2/metrics/prometheus or /api/v2/metrics endpoints.

    The following terms are key to understanding how Fluent Bit processes metrics:

    • Record: a single message collected from a source, such as a single long line in a file.

    • Chunk: log records ingested and stored by Fluent Bit input plugin instances. A batch of records in a chunk are tracked together as a single unit.

      The Fluent Bit engine attempts to fit records into chunks of at most 2 MB, but the size can vary at runtime. Chunks are then sent to an output. An output plugin instance can successfully send the full chunk to the destination and mark it as successful. If an unrecoverable error is encountered, the chunk fails entirely. Otherwise, the output can request a retry.

| Metric name | Labels | Description | Type | Unit |
| --- | --- | --- | --- | --- |
| fluentbit_build_info | hostname: the hostname; version: the Fluent Bit version; os: the OS type | Build version information. The value is the Unix epoch timestamp of the configuration context initialization. | gauge | seconds |

    hashtag
    Storage layer

    The following are detailed descriptions for the metrics collected by the storage layer.

| Metric name | Labels | Description | Type | Unit |
| --- | --- | --- | --- | --- |
| fluentbit_input_storage_chunks | name: the name or alias for the input instance | The current total number of chunks owned by this input instance. | gauge | chunks |

    hashtag
    Output latency metric

    Introduced in Fluent Bit 4.0.6, the fluentbit_output_latency_seconds histogram metric captures end-to-end latency from the time a chunk is created by an input plugin until it's successfully delivered by an output plugin. This provides observability into chunk-level pipeline performance and helps identify slowdowns or bottlenecks in the output path.

    hashtag
    Bucket configuration

The histogram uses the following default bucket boundaries, designed around Fluent Bit's typical flush interval of 1 second:

    These boundaries provide:

    • High resolution around 1 s latency: Captures normal operation near the default flush interval.

    • Small backpressure detection: Identifies minor delays in the 1-2.5 s range.

    • Bottleneck identification: Detects retry cycles, network stalls, or plugin bottlenecks in higher ranges.

    • Complete coverage: The +Inf bucket ensures all latencies are captured.
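To see how these cumulative buckets translate into a latency percentile, the following Python sketch mimics the linear interpolation that PromQL's `histogram_quantile()` performs. The `histogram_quantile` helper is illustrative only (not Fluent Bit or Prometheus code), and the bucket counts are taken from the example output shown later in this section.

```python
def histogram_quantile(q, buckets):
    """Estimate the q-quantile from cumulative Prometheus-style buckets.

    `buckets` is a list of (upper_bound, cumulative_count) pairs sorted by
    bound, ending with float('inf'). Linearly interpolates within the bucket
    that contains the target rank; a simplified model of what PromQL's
    histogram_quantile() computes.
    """
    total = buckets[-1][1]
    rank = q * total
    prev_bound, prev_count = 0.0, 0
    for bound, count in buckets:
        if count >= rank:
            if bound == float("inf"):
                return prev_bound  # can't interpolate into the +Inf bucket
            frac = (rank - prev_count) / (count - prev_count)
            return prev_bound + (bound - prev_bound) * frac
        prev_bound, prev_count = bound, count
    return prev_bound


# Cumulative counts from the example histogram output in this section.
buckets = [(0.5, 0), (1.0, 1), (1.5, 6), (2.5, 6), (5.0, 6),
           (10.0, 6), (20.0, 6), (30.0, 6), (float("inf"), 6)]
print(round(histogram_quantile(0.95, buckets), 3))  # → 1.47
```

With these counts, the 95th-percentile rank (5.7 of 6 observations) falls in the (1.0, 1.5] bucket, so the estimate lands between those bounds.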

    hashtag
    Example output

    When exposed through the Fluent Bit built-in HTTP server, the metric appears in Prometheus format:

    hashtag
    Use cases

    Performance monitoring: Monitor overall pipeline health by tracking latency percentiles:

    Bottleneck detection: Identify specific input/output pairs experiencing high latency:

    SLA monitoring: Track how many chunks are delivered within acceptable time bounds:

    Alerting: Create alerts for degraded pipeline performance:

    hashtag
    Uptime example

    Query the service uptime with the following command:

The command prints output similar to the following:

    hashtag
    Metrics example

    Query internal metrics in JSON format with the following command:

The command prints output similar to the following:

    hashtag
    Query metrics in Prometheus format

    Query internal metrics in Prometheus Text 0.0.4 format:

    This command returns the same metrics in Prometheus format instead of JSON:

    hashtag
    Configure aliases

By default, plugins configured at runtime get an internal name in the format plugin_name.ID. For monitoring purposes, this can be confusing if many plugins of the same type are configured. To distinguish them, each configured input or output section can set an alias that's used as the parent name for the metric.

    The following example sets an alias to the INPUT section of the configuration file, which is using the CPU input plugin:

    When querying the related metrics, the aliases are returned instead of the plugin name:

    hashtag
    Grafana dashboard and alerts

You can create Grafana dashboards and alerts using Fluent Bit's exposed Prometheus-style metrics.

    The provided example dashboardarrow-up-right is heavily inspired by Banzai Cloudarrow-up-right's logging operator dashboardarrow-up-right with a few key differences, such as the use of the instance label, stacked graphs, and a focus on Fluent Bit metrics. See this blog postarrow-up-right for more information.

    dashboard

    hashtag
    Alerts

    Sample alerts are availablearrow-up-right.

    hashtag
    Health check for Fluent Bit

    Fluent Bit supports the following configurations to set up the health check.

| Configuration name | Description | Default |
| --- | --- | --- |
| Health_Check | Enable the health check feature. | Off |
| HC_Errors_Count | The error count that meets the unhealthy threshold, summed across all output plugins within a defined HC_Period. Example output error: [2022/02/16 10:44:10] [ warn] [engine] failed to flush chunk '1-1645008245.491540684.flb', retry in 7 seconds: task_id=0, input=forward.1 > output=cloudwatch_logs.3 (out_id=3) | 5 |

Not every error log line counts toward these totals. Only specific retry failure errors are counted, such as the example shown in the configuration table description.

Within each HC_Period, if the error count exceeds HC_Errors_Count or the retry failure count exceeds HC_Retry_Failure_Count, Fluent Bit is considered unhealthy. In that case, the health endpoint returns HTTP status 500 and an error message. Otherwise, the endpoint returns HTTP status 200 and an ok message.

    The equation to calculate this behavior is:

HC_Errors_Count and HC_Retry_Failure_Count apply only to output plugins; each is a sum of errors or retry failures across all running output plugins.

    The following configuration examples show how to define these settings:

    Use the following command to call the health endpoint:

    With the example configuration, the health status is determined by the following equation:

    • If this equation evaluates to TRUE, then Fluent Bit is unhealthy.

    • If this equation evaluates to FALSE, then Fluent Bit is healthy.
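The evaluation above can be sketched as a small predicate. This is a simplified model: the `is_unhealthy` helper is illustrative, and the real counters are accumulated inside the engine over each HC_Period window.

```python
def is_unhealthy(errors, retry_failures,
                 hc_errors_count=5, hc_retry_failure_count=5):
    """Evaluate the health equation for one HC_Period window.

    `errors` and `retry_failures` are sums across all running output
    plugins during the period. Returns True when Fluent Bit would be
    considered unhealthy (health endpoint answers HTTP 500).
    """
    return errors > hc_errors_count or retry_failures > hc_retry_failure_count


print(is_unhealthy(errors=3, retry_failures=2))  # → False (HTTP 200, "ok")
print(is_unhealthy(errors=6, retry_failures=0))  # → True  (HTTP 500, error message)
```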

    hashtag
    Telemetry Pipeline

    Telemetry Pipelinearrow-up-right is a hosted service that lets you monitor your Fluent Bit agents including data flow, metrics, and configurations.

  • HTTP Server: JSON and Prometheus Exporter-style metrics

  • Grafana Dashboards and Alerts
    service:
      http_server: on
      http_listen: 0.0.0.0
      http_port: 2020
    
    pipeline:
      inputs:
        - name: cpu
          alias: server1_cpu
    
      outputs:
        - name: stdout
          alias: raw_output
          match: '*'
    [SERVICE]
      HTTP_Server  On
      HTTP_Listen  0.0.0.0
      HTTP_PORT    2020
    
    [INPUT]
      Name  cpu
      Alias server1_cpu
    
    [OUTPUT]
      Name  stdout
      Alias raw_output
      Match *
    service:
      http_server: on
      http_listen: 0.0.0.0
      http_port: 2020
      health_check: on
      hc_errors_count: 5
      hc_retry_failure_count: 5
      hc_period: 5
    
    pipeline:
      inputs:
        - name: cpu
    
      outputs:
        - name: stdout
          match: '*'
    [SERVICE]
      HTTP_Server  On
      HTTP_Listen  0.0.0.0
      HTTP_PORT    2020
      Health_Check On
      HC_Errors_Count 5
      HC_Retry_Failure_Count 5
      HC_Period 5
    
    [INPUT]
      Name  cpu
    
    [OUTPUT]
      Name  stdout
      Match *
    # For YAML configuration.
    $ fluent-bit --config fluent-bit.yaml
    
    # For classic configuration.
    $ fluent-bit --config fluent-bit.conf
    ...
    [2020/03/10 19:08:24] [ info] [engine] started
    [2020/03/10 19:08:24] [ info] [http_server] listen iface=0.0.0.0 tcp_port=2020
    $ curl -s http://127.0.0.1:2020 | jq
    {
      "fluent-bit": {
        "version": "0.13.0",
        "edition": "Community",
        "flags": [
          "FLB_HAVE_TLS",
          "FLB_HAVE_METRICS",
          "FLB_HAVE_SQLDB",
          "FLB_HAVE_TRACE",
          "FLB_HAVE_HTTP_SERVER",
          "FLB_HAVE_FLUSH_LIBCO",
          "FLB_HAVE_SYSTEMD",
          "FLB_HAVE_VALGRIND",
          "FLB_HAVE_FORK",
          "FLB_HAVE_PROXY_GO",
          "FLB_HAVE_REGEX",
          "FLB_HAVE_C_TLS",
          "FLB_HAVE_SETJMP",
          "FLB_HAVE_ACCEPT4",
          "FLB_HAVE_INOTIFY"
        ]
      }
    }
    0.5, 1.0, 1.5, 2.5, 5.0, 10.0, 20.0, 30.0, +Inf
    # HELP fluentbit_output_latency_seconds End-to-end latency in seconds
    # TYPE fluentbit_output_latency_seconds histogram
    fluentbit_output_latency_seconds_bucket{le="0.5",input="random.0",output="stdout.0"} 0
    fluentbit_output_latency_seconds_bucket{le="1.0",input="random.0",output="stdout.0"} 1
    fluentbit_output_latency_seconds_bucket{le="1.5",input="random.0",output="stdout.0"} 6
    fluentbit_output_latency_seconds_bucket{le="2.5",input="random.0",output="stdout.0"} 6
    fluentbit_output_latency_seconds_bucket{le="5.0",input="random.0",output="stdout.0"} 6
    fluentbit_output_latency_seconds_bucket{le="10.0",input="random.0",output="stdout.0"} 6
    fluentbit_output_latency_seconds_bucket{le="20.0",input="random.0",output="stdout.0"} 6
    fluentbit_output_latency_seconds_bucket{le="30.0",input="random.0",output="stdout.0"} 6
    fluentbit_output_latency_seconds_bucket{le="+Inf",input="random.0",output="stdout.0"} 6
    fluentbit_output_latency_seconds_sum{input="random.0",output="stdout.0"} 6.0015411376953125
    fluentbit_output_latency_seconds_count{input="random.0",output="stdout.0"} 6
    # 95th percentile latency
    histogram_quantile(0.95, rate(fluentbit_output_latency_seconds_bucket[5m]))
    
    # Average latency
    rate(fluentbit_output_latency_seconds_sum[5m]) / rate(fluentbit_output_latency_seconds_count[5m])
    # Outputs with highest average latency
    topk(5, rate(fluentbit_output_latency_seconds_sum[5m]) / rate(fluentbit_output_latency_seconds_count[5m]))
# Percentage of chunks delivered within 2.5 seconds
(
  rate(fluentbit_output_latency_seconds_bucket{le="2.5"}[5m]) /
  rate(fluentbit_output_latency_seconds_count[5m])
) * 100
    # Example Prometheus alerting rule
    - alert: FluentBitHighLatency
      expr: histogram_quantile(0.95, rate(fluentbit_output_latency_seconds_bucket[5m])) > 5
      for: 2m
      labels:
        severity: warning
      annotations:
        summary: "Fluent Bit pipeline experiencing high latency"
        description: "95th percentile latency is {{ $value }}s for {{ $labels.input }} -> {{ $labels.output }}"
    curl -s http://127.0.0.1:2020/api/v1/uptime | jq
    {
      "uptime_sec": 8950000,
      "uptime_hr": "Fluent Bit has been running:  103 days, 14 hours, 6 minutes and 40 seconds"
    }
    curl -s http://127.0.0.1:2020/api/v1/metrics | jq
    {
      "input": {
        "cpu.0": {
          "records": 8,
          "bytes": 2536
        }
      },
      "output": {
        "stdout.0": {
          "proc_records": 5,
          "proc_bytes": 1585,
          "errors": 0,
          "retries": 0,
          "retries_failed": 0
        }
      }
    }
    curl -s http://127.0.0.1:2020/api/v2/metrics/prometheus
    fluentbit_input_records_total{name="cpu.0"} 57 1509150350542
    fluentbit_input_bytes_total{name="cpu.0"} 18069 1509150350542
    fluentbit_output_proc_records_total{name="stdout.0"} 54 1509150350542
    fluentbit_output_proc_bytes_total{name="stdout.0"} 17118 1509150350542
    fluentbit_output_errors_total{name="stdout.0"} 0 1509150350542
    fluentbit_output_retries_total{name="stdout.0"} 0 1509150350542
    fluentbit_output_retries_failed_total{name="stdout.0"} 0 1509150350542
    {
      "input": {
        "server1_cpu": {
          "records": 8,
          "bytes": 2536
        }
      },
      "output": {
        "raw_output": {
          "proc_records": 5,
          "proc_bytes": 1585,
          "errors": 0,
          "retries": 0,
          "retries_failed": 0
        }
      }
    }
    health status = (HC_Errors_Count > HC_Errors_Count config value) OR
    (HC_Retry_Failure_Count > HC_Retry_Failure_Count config value) IN
    the HC_Period interval
    curl -s http://127.0.0.1:2020/api/v2/health
    Health status = (HC_Errors_Count > 5) OR (HC_Retry_Failure_Count > 5) IN 5 seconds

| URI | Description | Data format |
| --- | --- | --- |
| /api/v1/metrics | Display internal metrics per loaded plugin. | JSON |
| /api/v1/metrics/prometheus | Display internal metrics per loaded plugin in Prometheus Server format. | Prometheus Text 0.0.4 |
| /api/v1/storage | Get internal metrics of the storage layer and buffered data. Enabled only if the storage.metrics property is enabled in the SERVICE section. | JSON |
| /api/v1/health | Display the Fluent Bit health check result. | String |
| /api/v2/metrics | Display internal metrics per loaded plugin. | |
| /api/v2/metrics/prometheus | Display internal metrics per loaded plugin in Prometheus Server format. | Prometheus Text 0.0.4 |
| /api/v2/health | Returns Fluent Bit health status as JSON: HTTP 200 when healthy, HTTP 500 when unhealthy. Response fields: status (ok or error), errors, retries_failed, error_limit, retry_failure_limit, period_limit. | JSON |
| /api/v2/reload | Execute hot reloading (POST, PUT) or get the status of hot reloading (GET). Unsupported methods return 405 Method Not Allowed with an Allow: GET, POST, PUT header. See the hot-reload documentation. | JSON |

| Metric name | Labels | Description | Type | Unit |
| --- | --- | --- | --- | --- |
| fluentbit_input_records_total | name: the name or alias for the input instance | The number of log records this input ingested successfully. | counter | records |
| fluentbit_output_dropped_records_total | name: the name or alias for the output instance | The number of log records dropped by the output. These records hit an unrecoverable error or retries expired for their chunk. | counter | records |
| fluentbit_output_errors_total | name: the name or alias for the output instance | The number of chunks with an error that's either unrecoverable or unable to retry. This metric represents the number of times a chunk failed, and doesn't correspond with the number of error messages visible in the Fluent Bit log output. | counter | chunks |
| fluentbit_output_proc_bytes_total | name: the name or alias for the output instance | The number of bytes of log records that this output instance sent successfully. This metric represents the total byte size of all unique chunks sent by this output. If a record isn't sent due to some error, it doesn't count towards this metric. | counter | bytes |
| fluentbit_output_proc_records_total | name: the name or alias for the output instance | The number of log records that this output instance sent successfully. This metric represents the total record count of all unique chunks sent by this output. If a record isn't sent successfully, it doesn't count towards this metric. | counter | records |
| fluentbit_output_retried_records_total | name: the name or alias for the output instance | The number of log records that experienced a retry. This metric is calculated at the chunk level; the count increases when an entire chunk is marked for retry. An output plugin might perform multiple actions that generate many error messages when uploading a single chunk. | counter | records |
| fluentbit_output_retries_failed_total | name: the name or alias for the output instance | The number of times that retries expired for a chunk. Each plugin configures a Retry_Limit, which applies to chunks. When the Retry_Limit is exceeded, the chunk is discarded and this metric is incremented. | counter | chunks |
| fluentbit_output_retries_total | name: the name or alias for the output instance | The number of times this output instance requested a retry for a chunk. | counter | chunks |
| fluentbit_uptime | | The number of seconds that Fluent Bit has been running. | counter | seconds |
| process_start_time_seconds | | The Unix epoch timestamp for when Fluent Bit started. | gauge | seconds |

| Metric key | Description | Unit |
| --- | --- | --- |
| `chunks.fs_chunks` | The total number of chunks saved to the filesystem. | chunks |
| `chunks.fs_chunks_up` | The count of chunks that are both in the filesystem and in memory. | chunks |
| `chunks.fs_chunks_down` | The count of chunks that are only in the filesystem. | chunks |
| `input_chunks.{plugin name}.status.overlimit` | Indicates whether the input instance exceeded its configured `Mem_Buf_Limit`. | boolean |
| `input_chunks.{plugin name}.status.mem_size` | The amount of memory this input is consuming to buffer logs in chunks. | bytes |
| `input_chunks.{plugin name}.status.mem_limit` | The buffer memory limit (`Mem_Buf_Limit`) that applies to this input plugin. | bytes |
| `input_chunks.{plugin name}.chunks.total` | The current total number of chunks owned by this input instance. | chunks |
| `input_chunks.{plugin name}.chunks.up` | The current number of chunks that are in memory for this input. If filesystem storage is enabled, chunks that are "up" are also stored in the filesystem layer. | chunks |
| `input_chunks.{plugin name}.chunks.down` | The current number of chunks that are "down" in the filesystem for this input. | chunks |
| `input_chunks.{plugin name}.chunks.busy` | Chunks that are being processed or sent by outputs and aren't eligible to have new data appended. | chunks |
| `input_chunks.{plugin name}.chunks.busy_size` | The sum of the byte size of each chunk currently marked as busy. | bytes |
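
The `status.mem_size` and `status.mem_limit` fields together show how close each input is to its `Mem_Buf_Limit`. The sketch below computes that percentage from an illustrative snapshot; note that the real storage endpoint may report sizes as human-readable strings, while plain byte counts are assumed here to keep the example simple:

```python
# Illustrative per-input status snapshot; instance names and values are made up.
status = {
    "tail.0":   {"mem_size": 2_621_440,  "mem_limit": 10_485_760, "overlimit": False},
    "syslog.0": {"mem_size": 10_485_760, "mem_limit": 10_485_760, "overlimit": True},
}

# Percentage of Mem_Buf_Limit currently used by each input instance.
usage = {
    name: round(100.0 * s["mem_size"] / s["mem_limit"], 1)
    for name, s in status.items()
}

for name, pct in usage.items():
    flag = " (over limit, ingestion paused)" if status[name]["overlimit"] else ""
    print(f"{name}: {pct}% of Mem_Buf_Limit{flag}")
```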

| Metric name | Labels | Description | Type | Unit |
| --- | --- | --- | --- | --- |
| `fluentbit_filter_added_records_total` | `name`: the name or alias for the filter instance | The number of log records added by the filter into the data pipeline. | counter | records |
| `fluentbit_filter_bytes_total` | `name`: the name or alias for the filter instance | The number of bytes of log records that this filter instance has ingested successfully. | counter | bytes |
| `fluentbit_filter_drop_records_total` | `name`: the name or alias for the filter instance | The number of log records dropped by the filter and removed from the data pipeline. | counter | records |
| `fluentbit_filter_records_total` | `name`: the name or alias for the filter instance | The number of log records this filter has ingested successfully. | counter | records |
| `fluentbit_hot_reloaded_times` | `hostname`: the hostname of the machine running Fluent Bit | The number of times Fluent Bit has been hot reloaded. | counter | times |
| `fluentbit_input_bytes_total` | `name`: the name or alias for the input instance | The number of bytes of log records that this input instance has ingested successfully. | counter | bytes |
| `fluentbit_input_files_closed_total` | `name`: the name or alias for the input instance | The total number of closed files. Only available for the tail input plugin. | counter | files |
| `fluentbit_input_files_opened_total` | `name`: the name or alias for the input instance | The total number of opened files. Only available for the tail input plugin. | counter | files |
| `fluentbit_input_files_rotated_total` | `name`: the name or alias for the input instance | The total number of rotated files. Only available for the tail input plugin. | counter | files |
| `fluentbit_input_ingestion_paused` | `name`: the name or alias for the input instance | Indicates whether ingestion for the input instance is currently paused (`1`) or not (`0`). | gauge | boolean |
| `fluentbit_input_long_line_skipped_total` | `name`: the name or alias for the input instance | The total number of long lines skipped. Only available for the tail input plugin when `skip_long_lines` is enabled. | counter | occurrences |
| `fluentbit_input_long_line_truncated_total` | `name`: the name or alias for the input instance | The total number of long lines truncated. Only available for the tail input plugin when `truncate_long_lines` is enabled. | counter | occurrences |
| `fluentbit_input_memrb_dropped_bytes` | `name`: the name or alias for the input instance | The number of bytes dropped by the memory ring buffer (`memrb`) storage type when the buffer is full. Only available for input plugins with `storage.type` set to `memrb`. | counter | bytes |
| `fluentbit_input_memrb_dropped_chunks` | `name`: the name or alias for the input instance | The number of chunks dropped by the memory ring buffer (`memrb`) storage type when the buffer is full. Only available for input plugins with `storage.type` set to `memrb`. | counter | chunks |
| `fluentbit_input_multiline_truncated_total` | `name`: the name or alias for the input instance | The total number of truncated multiline messages. Only available for the tail input plugin when `multiline.parser` is configured. | counter | occurrences |
| `fluentbit_input_records_total` | `name`: the name or alias for the input instance | The number of log records this input ingested successfully. | counter | records |
| `fluentbit_input_ring_buffer_retries_total` | `name`: the name or alias for the input instance | The number of ring buffer write retries. | counter | retries |
| `fluentbit_input_ring_buffer_retry_failures_total` | `name`: the name or alias for the input instance | The number of ring buffer write retry failures. | counter | failures |
| `fluentbit_input_ring_buffer_writes_total` | `name`: the name or alias for the input instance | The number of ring buffer write operations. | counter | writes |
| `fluentbit_output_backpressure_wait_seconds` | `output`: the name or alias for the output instance | Time spent waiting due to output backpressure. | histogram | seconds |
| `fluentbit_output_chunk_available_capacity_percent` | `name`: the name or alias for the output instance | The available chunk capacity for this output as a percentage. | gauge | percent |
| `fluentbit_output_dropped_records_total` | `name`: the name or alias for the output instance | The number of log records dropped by the output. These records hit an unrecoverable error or retries expired for their chunk. | counter | records |
| `fluentbit_output_errors_total` | `name`: the name or alias for the output instance | The number of chunks that hit an error that's either unrecoverable or can't be retried. This metric represents the number of times a chunk failed and doesn't correspond to the number of error messages visible in the Fluent Bit log output. | counter | chunks |
| `fluentbit_output_latency_seconds` | `input`: the name of the input plugin instance; `output`: the name of the output plugin instance | End-to-end latency from chunk creation to successful delivery. Provides observability into chunk-level pipeline performance. | histogram | seconds |
| `fluentbit_output_proc_bytes_total` | `name`: the name or alias for the output instance | The number of bytes of log records that this output instance sent successfully. This metric represents the total byte size of all unique chunks sent by this output. If a record isn't sent due to some error, it doesn't count towards this metric. | counter | bytes |
| `fluentbit_output_proc_records_total` | `name`: the name or alias for the output instance | The number of log records that this output instance sent successfully. This metric represents the total record count of all unique chunks sent by this output. If a record isn't sent successfully, it doesn't count towards this metric. | counter | records |
| `fluentbit_output_retried_records_total` | `name`: the name or alias for the output instance | The number of log records that experienced a retry. This metric is calculated at the chunk level; the count increases when an entire chunk is marked for retry. An output plugin might perform multiple actions that generate many error messages when uploading a single chunk. | counter | records |
| `fluentbit_output_retries_failed_total` | `name`: the name or alias for the output instance | The number of times that retries expired for a chunk. Each plugin configures a `Retry_Limit`, which applies to chunks. When the `Retry_Limit` is exceeded, the chunk is discarded and this metric is incremented. | counter | chunks |
| `fluentbit_output_retries_total` | `name`: the name or alias for the output instance | The number of times this output instance requested a retry for a chunk. | counter | chunks |
| `fluentbit_uptime` | `hostname`: the hostname of the machine running Fluent Bit | The number of seconds that Fluent Bit has been running. | counter | seconds |
| `fluentbit_process_start_time_seconds` | `hostname`: the hostname of the machine running Fluent Bit | The Unix epoch timestamp for when Fluent Bit started. | gauge | seconds |
| `fluentbit_build_info` | `hostname`: the hostname; `version`: the Fluent Bit version; `os`: the OS type | Build version information. The value is the Unix epoch timestamp recorded when the configuration context was initialized. | gauge | seconds |

| Metric name | Labels | Description | Type | Unit |
| --- | --- | --- | --- | --- |
| `fluentbit_hot_reloaded_times` | `hostname`: the hostname of the machine running Fluent Bit | The number of times Fluent Bit has been hot reloaded. | counter | times |
| `fluentbit_input_storage_chunks_busy` | `name`: the name or alias for the input instance | Chunks that are being processed or sent by outputs and aren't eligible to have new data appended. | gauge | chunks |
| `fluentbit_input_storage_chunks_busy_bytes` | `name`: the name or alias for the input instance | The sum of the byte size of each chunk currently marked as busy. | gauge | bytes |
| `fluentbit_input_storage_chunks_down` | `name`: the name or alias for the input instance | The current number of chunks that are "down" in the filesystem for this input. | gauge | chunks |
| `fluentbit_input_storage_chunks_up` | `name`: the name or alias for the input instance | The current number of chunks that are in memory for this input. If filesystem storage is enabled, chunks that are "up" are also stored in the filesystem layer. | gauge | chunks |
| `fluentbit_input_storage_memory_bytes` | `name`: the name or alias for the input instance | The amount of memory this input is consuming to buffer logs in chunks. | gauge | bytes |
| `fluentbit_input_storage_overlimit` | `name`: the name or alias for the input instance | Indicates whether the input instance exceeded its configured `Mem_Buf_Limit`. | gauge | boolean |
| `fluentbit_output_upstream_busy_connections` | `name`: the name or alias for the output instance | The number of upstream connections currently in a busy state for this output instance. | gauge | connections |
| `fluentbit_output_upstream_total_connections` | `name`: the name or alias for the output instance | The total number of upstream connections for this output instance. | gauge | connections |
| `fluentbit_storage_fs_chunks` | None | The total number of chunks saved to the filesystem. | gauge | chunks |
| `fluentbit_storage_fs_chunks_busy` | None | The total number of chunks in a busy state. | gauge | chunks |
| `fluentbit_storage_fs_chunks_busy_bytes` | None | The total byte size of chunks in a busy state. | gauge | bytes |
| `fluentbit_storage_fs_chunks_down` | None | The count of chunks that are only in the filesystem. | gauge | chunks |
| `fluentbit_storage_fs_chunks_up` | None | The count of chunks that are both in the filesystem and in memory. | gauge | chunks |
| `fluentbit_storage_mem_chunks` | None | The total number of chunks currently buffered in memory. Chunks can be both in memory and on the filesystem at the same time. | gauge | chunks |
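
These metrics are exposed in the Prometheus text exposition format. As a minimal sketch of consuming that output, the snippet below sums `fluentbit_output_retries_total` across output instances from a hand-written sample (the series shown, their labels, and the `HELP` text are illustrative; real output contains many more series):

```python
sample = """\
# HELP fluentbit_output_retries_total Retries requested per chunk.
# TYPE fluentbit_output_retries_total counter
fluentbit_output_retries_total{name="stdout.0"} 0
fluentbit_output_retries_total{name="es.0"} 7
fluentbit_output_retries_total{name="loki.0"} 3
"""

total = 0.0
for line in sample.splitlines():
    # Skip comments (HELP/TYPE lines) and blanks.
    if not line or line.startswith("#"):
        continue
    # Each sample line is "<name>{<labels>} <value>".
    series, value = line.rsplit(" ", 1)
    metric = series.split("{", 1)[0]  # strip the label set
    if metric == "fluentbit_output_retries_total":
        total += float(value)

print(f"retries requested across all outputs: {total:.0f}")
```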

| Option | Description | Default |
| --- | --- | --- |
| `HC_Retry_Failure_Count` | The number of retry failures, summed across all output plugins within a defined `HC_Period`, required for Fluent Bit to be considered unhealthy. Example retry failure log: `[2022/02/16 20:11:36] [ warn] [engine] chunk '1-1645042288.260516436.flb' cannot be retried: task_id=0, input=tcp.3 > output=cloudwatch_logs.1` | 5 |
| `HC_Period` | The time period, in seconds, over which errors and retry failures are counted. | 60 |
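
These options are set in the `[SERVICE]` section alongside the built-in HTTP server that serves the health endpoint. A minimal classic-mode example (the listen address, port, and thresholds are illustrative):

```text
[SERVICE]
    HTTP_Server             On
    HTTP_Listen             0.0.0.0
    HTTP_Port               2020
    Health_Check            On
    HC_Retry_Failure_Count  5
    HC_Period               60
```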
