A Brief History of Fluent Bit

Every project has a story

In 2014, the Fluentd team at Treasure Data was forecasting the need for a lightweight log processor for constrained environments like embedded Linux and gateways, and the project was meant to be part of the Fluentd ecosystem. At that moment, Eduardo created Fluent Bit, a new open source solution written from scratch and available under the terms of the Apache License v2.0.

After the project had been around for some time, it gained more traction on regular Linux systems. With the new containerized world, the Cloud Native community also asked to extend the project scope to support more sources, filters, and destinations. Not long after, Fluent Bit became one of the preferred solutions to solve the logging challenges in cloud environments.

What is Fluent Bit?

Fluent Bit is a CNCF sub-project under the umbrella of Fluentd

Fluent Bit is an open-source telemetry agent specifically designed to efficiently handle the challenges of collecting and processing telemetry data across a wide range of environments, from constrained systems to complex cloud infrastructures. Managing telemetry data from various sources and formats can be a constant challenge, particularly when performance is a critical factor.

Rather than serving as a drop-in replacement, Fluent Bit enhances the observability strategy for your infrastructure by adapting and optimizing your existing logging layer, as well as metrics and traces processing. Furthermore, Fluent Bit supports a vendor-neutral approach, seamlessly integrating with other ecosystems such as Prometheus and OpenTelemetry. Trusted by major cloud providers, banks, and companies in need of a ready-to-use telemetry agent solution, Fluent Bit effectively manages diverse data sources and formats while maintaining optimal performance.

Fluent Bit can be deployed as an edge agent for localized telemetry data handling or utilized as a central aggregator/collector for managing telemetry data across multiple sources and environments.

Fluent Bit has been designed with performance and low resource consumption in mind.

Fluent Bit v2.0 Documentation

High Performance Log and Metrics Processor

Fluent Bit is a Fast and Lightweight Telemetry Agent for Logs, Metrics, and Traces for Linux, macOS, Windows, and BSD family operating systems. It has been made with a strong focus on performance to allow the collection and processing of telemetry data from different sources without complexity.

Features

  • High Performance: High throughput with low resource consumption

  • Data Parsing

    • Convert your unstructured messages using our parsers: JSON, Regex, LTSV and Logfmt

  • Metrics Support: Prometheus and OpenTelemetry compatible

  • Reliability and Data Integrity

    • Backpressure Handling

    • Data Buffering in memory and file system

  • Networking

    • Security: built-in TLS/SSL support

    • Asynchronous I/O

  • Pluggable Architecture and Extensibility: Inputs, Filters and Outputs

    • More than 100 built-in plugins are available

    • Extensibility

      • Write any input, filter or output plugin in C language

      • WASM: WASM Filter Plugins or WASM Input Plugins

      • Bonus: write Filters in Lua or Output plugins in Golang

  • Monitoring: expose internal metrics over HTTP in JSON and Prometheus format

  • Stream Processing: Perform data selection and transformation using simple SQL queries

    • Create new streams of data using query results

    • Aggregation Windows

    • Data analysis and prediction: Timeseries forecasting

  • Portable: runs on Linux, macOS, Windows and BSD systems

Fluent Bit, Fluentd and CNCF

Fluent Bit is a CNCF graduated sub-project under the umbrella of Fluentd, and it's licensed under the terms of the Apache License v2.0.

Fluent Bit was originally created by Eduardo Silva; as a CNCF-hosted project it is a fully vendor-neutral and community-driven project.

Buffer

Data processing with reliability

Previously defined in the concepts section, the buffer phase in the pipeline aims to provide a unified and persistent mechanism to store your data, either using the primary in-memory model or the filesystem-based mode. The filesystem buffering mechanism acts as a backup system to avoid data loss in case of system failures.

The buffer phase already contains the data in an immutable state, meaning no other filter can be applied.

Note that buffered data is not raw text; it is kept in Fluent Bit's internal binary representation.

Amazon EC2

Learn how to install Fluent Bit and the AWS output plugins on Amazon Linux 2 via AWS Systems Manager.

YAML Configuration

The YAML configuration feature was introduced in Fluent Bit 1.9 as experimental, and it is production ready since Fluent Bit 2.0.


    Buffering

    Performance and Data Safety

When Fluent Bit processes data, it uses the system memory (heap) as a primary and temporary place to store the record logs before they get delivered. The records are processed in this private memory area.

Buffering refers to the ability to store the records somewhere, and while they are processed and delivered, still be able to store more. Buffering in memory is the fastest mechanism, but there are certain scenarios that require special strategies to deal with backpressure, to keep data safe, or to reduce memory consumption by the service in constrained environments.

Network failures or latency on third-party services are pretty common; in scenarios where we cannot deliver data as fast as we receive it, we will likely face backpressure.

    Our buffering strategies are designed to solve problems associated with backpressure and general delivery failures.

As buffering strategies go, Fluent Bit offers a primary buffering mechanism in memory and an optional secondary one using the file system. With this hybrid solution you can accommodate any use case safely and keep high performance while processing your data.

Both mechanisms are not mutually exclusive; when the data is ready to be processed or delivered it will always be in memory, while other data in the queue might be in the file system until it is ready to be processed and moved up to memory.

To learn more about the buffering configuration in Fluent Bit, please jump to the Buffering & Storage section.

    Input

    The way to gather data from your sources

Fluent Bit provides different Input Plugins to gather information from different sources; some of them just collect data from log files while others can gather metrics information from the operating system. There are many plugins for different needs.

    When an input plugin is loaded, an internal instance is created. Every instance has its own and independent configuration. Configuration keys are often called properties.

    Every input plugin has its own documentation section where it's specified how it can be used and what properties are available.

    For more details, please refer to the Input Plugins section.
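As an illustrative sketch (the tags and intervals below are arbitrary choices, not defaults), loading the same plugin twice creates two independent instances, each with its own properties:

[INPUT]
    Name         cpu
    Tag          cpu.fast
    Interval_Sec 1

[INPUT]
    Name         cpu
    Tag          cpu.slow
    Interval_Sec 5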

    Parser

    Convert Unstructured to Structured messages

Dealing with raw strings or unstructured messages is a constant pain; having a structure is highly desired. Ideally we want to set a structure for the incoming data as soon as it is collected by the Input Plugins.

    The Parser allows you to convert from unstructured to structured data. As a demonstrative example consider the following Apache (HTTP Server) log entry:

    192.168.2.20 - - [28/Jul/2006:10:27:10 -0300] "GET /cgi-bin/try/ HTTP/1.0" 200 3395

The above log line is a raw string without format; ideally we would like to give it a structure that can be processed easily later. With the proper configuration, the log entry could be converted to:

    {
      "host":    "192.168.2.20",
      "user":    "-",
      "method":  "GET",
      "path":    "/cgi-bin/try/",
      "code":    "200",
      "size":    "3395",
      "referer": "",
      "agent":   ""
     }

Parsers are fully configurable and are independently and optionally handled by each input plugin; for more details please refer to the Parsers section.
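As a sketch of what such a parser definition can look like, the following regex-based parser structures the Apache entry above. The parser name is hypothetical and the regex is a simplified illustration, not the exact apache2 parser shipped with Fluent Bit:

[PARSER]
    # Named capture groups become the keys of the structured record
    Name        apache_demo
    Format      regex
    Regex       ^(?<host>[^ ]*) [^ ]* (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+) (?<path>[^ ]*) [^"]*" (?<code>[^ ]*) (?<size>[^ ]*)$
    Time_Key    time
    Time_Format %d/%b/%Y:%H:%M:%S %z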

    Linux Packages

The most secure option is to create the repositories according to the instructions for your specific OS.

    A simple installation script is provided to be used for most Linux targets. This will by default install the most recent version released.

    curl https://raw.githubusercontent.com/fluent/fluent-bit/master/install.sh | sh

    This is purely a convenience helper and should always be validated prior to use.

    GPG key updates

    From the 1.9.0 and 1.8.15 releases please note that the GPG key has been updated at https://packages.fluentbit.io/fluentbit.key so ensure this new one is added.

The GPG Key fingerprint of the new key is:

C3C0 A285 34B9 293E AF51  FABD 9F9D DC08 3888 C1CD
Fluentbit releases (Releases signing key) <[email protected]>

The previous key is still available at https://packages.fluentbit.io/fluentbit-legacy.key and may be required to install previous versions.

The GPG Key fingerprint of the old key is:

F209 D876 2A60 CD49 E680 633B 4FF8 368B 6EA0 722A

Refer to the supported platform documentation to see which platforms are supported in each release.

    Migration to Fluent Bit

From version 1.9, td-agent-bit is a deprecated package; it was removed after 1.9.9. The correct package name to use now is fluent-bit.

    Filter

    Modify, Enrich or Drop your records

In production environments we want to have full control of the data we are collecting; filtering is an important feature that allows us to alter the data before delivering it to some destination.

    Filtering is implemented through plugins, so each filter available could be used to match, exclude or enrich your logs with some specific metadata.

Fluent Bit supports many filters. A common use case for filtering is Kubernetes deployments, where every Pod log needs to get the proper metadata associated with it.

    Very similar to the input plugins, Filters run in an instance context, which has its own independent configuration. Configuration keys are often called properties.

    For more details about the Filters available and their usage, please refer to the Filters section.
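For instance, a minimal sketch using the grep filter to drop records that don't match a pattern; the dummy input, tag and field values are illustrative assumptions:

[INPUT]
    Name  dummy
    Dummy {"message": "error: disk full"}
    Tag   app.log

[FILTER]
    Name  grep
    Match app.*
    # Keep only records whose 'message' field matches 'error'
    Regex message error

[OUTPUT]
    Name  stdout
    Match app.*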

    Output

    Destinations for your data: databases, cloud services and more!

The output interface allows us to define destinations for the data. Common destinations are remote services, the local file system, or a standard interface with other programs. Outputs are implemented as plugins and there are many available.

    When an output plugin is loaded, an internal instance is created. Every instance has its own independent configuration. Configuration keys are often called properties.

    Every output plugin has its own documentation section specifying how it can be used and what properties are available.

    For more details, please refer to the Output Plugins section.

    Requirements

Fluent Bit uses very low CPU and memory; it's compatible with most x86, x86_64, arm32v7 and arm64v8 based platforms. In order to build it you need the following components in your system:

• Compiler: GCC or clang

• CMake

• Flex & Bison: only if you enable the Stream Processor or Record Accessor feature (both enabled by default)

• Libyaml development headers and libraries

In the core there are no other dependencies. Certain features that depend on third-party components, like output plugins with special backend libraries (e.g. Kafka), are included in the main source code repository.

    Yocto / Embedded Linux

Fluent Bit source code provides Bitbake recipes to configure, build and package the software for a Yocto based image. Note that the specific usage of these recipes in your Yocto environment (Poky) is out of the scope of this documentation.

We distribute two main recipes, one for testing/development purposes and the other for the latest stable release:

Version   Recipe                 Description

devel     fluent-bit_git.bb      Build Fluent Bit from GIT master. This recipe aims to be used for development and testing purposes only.

v1.8.11   fluent-bit_1.8.11.bb   Build the latest stable version of Fluent Bit.

It's strongly recommended to always use the stable release of the Fluent Bit recipe and not the one from GIT master for production deployments.

Fluent Bit and other architectures

Fluent Bit >= v1.1.x fully supports x86_64, x86, arm32v7 and arm64v8.

    Download Source Code

    Stable

    For production systems, we strongly suggest that you always get the latest stable release of the source code in either zip or tarball format from Github using the following link pattern:

https://github.com/fluent/fluent-bit/archive/refs/tags/v<release version>.tar.gz
https://github.com/fluent/fluent-bit/archive/refs/tags/v<release version>.zip

For example, for version 1.8.12 the link is the following:

https://github.com/fluent/fluent-bit/archive/refs/tags/v1.8.12.tar.gz

    Unit Sizes

Certain configuration directives in Fluent Bit refer to unit sizes, for example when defining the size of a buffer or specific limits. We can find these in plugins like Tail Input and Forward Input, or in generic properties like Mem_Buf_Limit.

    Starting from v0.11.10, all unit sizes have been standardized across the core and plugins, the following table describes the options that can be used and what they mean:

Suffix         Description                                                 Example

(none)         When a suffix is not specified, it's assumed that the       Specifying a value of 32000 means 32000 bytes
               value given is a bytes representation.

k, K, KB, kb   Kilobyte: a unit of memory equal to 1,000 bytes.            32k means 32000 bytes

m, M, MB, mb   Megabyte: a unit of memory equal to 1,000,000 bytes.        1M means 1000000 bytes

g, G, GB, gb   Gigabyte: a unit of memory equal to 1,000,000,000 bytes.    1G means 1000000000 bytes
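As a usage sketch, these suffixes apply wherever a size is accepted; the tail input below is an illustrative assumption (Buffer_Chunk_Size and Buffer_Max_Size are real tail properties, the path is a placeholder):

[INPUT]
    Name              tail
    Path              /var/log/app/*.log
    # 32k = 32,000 bytes per read chunk
    Buffer_Chunk_Size 32k
    # 1M = 1,000,000 bytes maximum buffer per monitored file
    Buffer_Max_Size   1M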

    Collectd

The collectd input plugin allows you to receive datagrams from the collectd service.

    Configuration Parameters

    The plugin supports the following configuration parameters:

Key       Description                       Default

Listen    Set the address to listen to     0.0.0.0

Port      Set the port to listen to        25826

TypesDB   Set the data specification file  /usr/share/collectd/types.db

Configuration Examples

Here is a basic configuration example:

[INPUT]
    Name         collectd
    Listen       0.0.0.0
    Port         25826
    TypesDB      /usr/share/collectd/types.db,/etc/collectd/custom.db

[OUTPUT]
    Name   stdout
    Match  *

With this configuration, Fluent Bit listens to 0.0.0.0:25826, and outputs incoming datagram packets to stdout.

You must set the same types.db files that your collectd server uses. Otherwise, Fluent Bit may not be able to interpret the payload properly.

    Memory Metrics

The mem input plugin gathers information about the memory and swap usage of the running system at a given interval and reports the total amount of memory and the amount of free memory available.

    Getting Started

In order to get memory and swap usage from your system, you can run the plugin from the command line or through the configuration file:

Command Line

$ fluent-bit -i mem -t memory -o stdout -m '*'
Fluent Bit v1.x.x
* Copyright (C) 2019-2020 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2017/03/03 21:12:35] [ info] [engine] started
[0] memory: [1488543156, {"Mem.total"=>1016044, "Mem.used"=>841388, "Mem.free"=>174656, "Swap.total"=>2064380, "Swap.used"=>139888, "Swap.free"=>1924492}]
[1] memory: [1488543157, {"Mem.total"=>1016044, "Mem.used"=>841420, "Mem.free"=>174624, "Swap.total"=>2064380, "Swap.used"=>139888, "Swap.free"=>1924492}]
[2] memory: [1488543158, {"Mem.total"=>1016044, "Mem.used"=>841420, "Mem.free"=>174624, "Swap.total"=>2064380, "Swap.used"=>139888, "Swap.free"=>1924492}]
[3] memory: [1488543159, {"Mem.total"=>1016044, "Mem.used"=>841420, "Mem.free"=>174624, "Swap.total"=>2064380, "Swap.used"=>139888, "Swap.free"=>1924492}]

Configuration File

In your main configuration file append the following Input & Output sections:

[INPUT]
    Name   mem
    Tag    memory

[OUTPUT]
    Name   stdout
    Match  *

    Development

For anyone who aims to contribute to the project by testing or extending the code base, you can get the development version from our GIT repository:

$ git clone https://github.com/fluent/fluent-bit

    Note that our master branch is where the development of Fluent Bit happens. Since it's a development version, expect issues when compiling or at run time.

We encourage everybody to help us test every development version; in the end, this is what will become stable.


    Variables

Fluent Bit supports the usage of environment variables in any value associated with a key when using a configuration file.

    The variables are case sensitive and can be used in the following format:

    ${MY_VARIABLE}

    When Fluent Bit starts, the configuration reader will detect any request for ${MY_VARIABLE} and will try to resolve its value.

    Example

Create the following configuration file (fluent-bit.conf):

[SERVICE]
    Flush        1
    Daemon       Off
    Log_Level    info

[INPUT]
    Name cpu
    Tag  cpu.local

[OUTPUT]
    Name  ${MY_OUTPUT}
    Match *

Open a terminal and set the environment variable:

$ export MY_OUTPUT=stdout

The above command sets the value 'stdout' to the variable MY_OUTPUT.

Run Fluent Bit with the recently created configuration file:

$ bin/fluent-bit -c fluent-bit.conf
Fluent Bit v1.4.0
* Copyright (C) 2019-2020 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2020/03/03 12:25:25] [ info] [engine] started
[0] cpu.local: [1491243925, {"cpu_p"=>1.750000, "user_p"=>1.750000, "system_p"=>0.000000, "cpu0.p_cpu"=>3.000000, "cpu0.p_user"=>2.000000, "cpu0.p_system"=>1.000000, "cpu1.p_cpu"=>0.000000, "cpu1.p_user"=>0.000000, "cpu1.p_system"=>0.000000, "cpu2.p_cpu"=>4.000000, "cpu2.p_user"=>4.000000, "cpu2.p_system"=>0.000000, "cpu3.p_cpu"=>1.000000, "cpu3.p_user"=>1.000000, "cpu3.p_system"=>0.000000}]

As you can see, the service worked properly since the configuration was valid.

    Containers on AWS

    AWS maintains a distribution of Fluent Bit combining the latest official release with a set of Go Plugins for sending logs to AWS services. AWS and Fluent Bit are working together to rewrite their plugins for inclusion in the official Fluent Bit distribution.

    Plugins

    Currently, the AWS for Fluent Bit image contains Go Plugins for:

  • Amazon CloudWatch Logs

  • Amazon Kinesis Firehose

  • Amazon Kinesis Streams

Fluent Bit itself includes an Amazon CloudWatch Logs plugin named cloudwatch_logs, an Amazon Kinesis Data Firehose plugin named kinesis_firehose, and an Amazon Kinesis Data Streams plugin named kinesis_streams, which offer higher performance than the Go plugins.

Fluent Bit also includes an Amazon S3 output plugin named s3.

    Versions and Regional Repositories

AWS vends their container image via Docker Hub and a set of highly available regional Amazon ECR repositories. For more information, see the AWS for Fluent Bit GitHub repo.

The AWS for Fluent Bit image uses a custom versioning scheme because it contains multiple projects. To see what each release contains, check out the release notes on GitHub.

    SSM Public Parameters

    AWS vends SSM Public Parameters with the regional repository link for each image. These parameters can be queried by any AWS account.

To see a list of available version tags in a given region, run the following command:

aws ssm get-parameters-by-path --region eu-central-1 --path /aws/service/aws-for-fluent-bit/ --query 'Parameters[*].Name'

To see the ECR repository URI for a given image tag in a given region, run the following:

$ aws ssm get-parameter --region ap-northeast-1 --name /aws/service/aws-for-fluent-bit/2.0.0

You can use these SSM public parameters as parameters in your CloudFormation templates:

Parameters:
  FireLensImage:
    Description: Fluent Bit image for the FireLens Container
    Type: AWS::SSM::Parameter::Value<String>
    Default: /aws/service/aws-for-fluent-bit/latest

    Running a Logging Pipeline Locally

    You may wish to test a logging pipeline locally to observe how it deals with log messages. The following is a walk-through for running Fluent Bit and Elasticsearch locally with Docker Compose which can serve as an example for testing other plugins locally.

    Create a Configuration File

    Refer to the Configuration File section to create a configuration to test.

    fluent-bit.conf:

    [INPUT]
      Name dummy
      Dummy {"top": {".dotted": "value"}}
    
    [OUTPUT]
      Name es
      Host elasticsearch
      Replace_Dots On

    Docker Compose

Use Docker Compose to run Fluent Bit (with the configuration file mounted) and Elasticsearch.

docker-compose.yaml:

version: "3.7"

services:
  fluent-bit:
    image: fluent/fluent-bit
    volumes:
      - ./fluent-bit.conf:/fluent-bit/etc/fluent-bit.conf
    depends_on:
      - elasticsearch
  elasticsearch:
    image: elasticsearch:7.6.2
    ports:
      - "9200:9200"
    environment:
      - discovery.type=single-node

View indexed logs

To view indexed logs run:

curl "localhost:9200/_search?pretty" \
  -H 'Content-Type: application/json' \
  -d'{ "query": { "match_all": {} }}'

To "start fresh", delete the index by running:

curl -X DELETE "localhost:9200/fluent-bit?pretty"

    Kernel Logs

The kmsg input plugin reads the Linux Kernel log buffer from the beginning; it gets every record and parses its fields: priority, sequence, seconds, useconds, and message.

    Configuration Parameters

Key          Description                                                              Default

Prio_Level   The log level to filter. The kernel log is dropped if its priority is   8
             more than prio_level. Allowed values are 0-8; 8 means all logs are
             saved.

    Getting Started

    In order to start getting the Linux Kernel messages, you can run the plugin from the command line or through the configuration file:

Command Line

As described above, the plugin processes all messages that the Linux Kernel has reported; the output below has been truncated for clarity:

$ bin/fluent-bit -i kmsg -t kernel -o stdout -m '*'
Fluent Bit v1.x.x
* Copyright (C) 2019-2020 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[0] kernel: [1463421823, {"priority"=>3, "sequence"=>1814, "sec"=>11706, "usec"=>732233, "msg"=>"ERROR @wl_cfg80211_get_station : Wrong Mac address, mac = 34:a8:4e:d3:40:ec profile =20:3a:07:9e:4a:ac"}]
[1] kernel: [1463421823, {"priority"=>3, "sequence"=>1815, "sec"=>11706, "usec"=>732300, "msg"=>"ERROR @wl_cfg80211_get_station : Wrong Mac address, mac = 34:a8:4e:d3:40:ec profile =20:3a:07:9e:4a:ac"}]
[2] kernel: [1463421829, {"priority"=>3, "sequence"=>1816, "sec"=>11712, "usec"=>729728, "msg"=>"ERROR @wl_cfg80211_get_station : Wrong Mac address, mac = 34:a8:4e:d3:40:ec profile =20:3a:07:9e:4a:ac"}]
[3] kernel: [1463421829, {"priority"=>3, "sequence"=>1817, "sec"=>11712, "usec"=>729802, "msg"=>"ERROR @wl_cfg80211_get_station : Wrong Mac address, mac = 34:a8:4e:d3:40:ec
...

Configuration File

In your main configuration file append the following Input & Output sections:

[INPUT]
    Name   kmsg
    Tag    kernel

[OUTPUT]
    Name   stdout
    Match  *

    Backpressure

    Under certain scenarios it is possible for logs or data to be ingested or created faster than the ability to flush it to some destinations. One such common scenario is when reading from big log files, especially with a large backlog, and dispatching the logs to a backend over the network, which takes time to respond. This generates backpressure leading to high memory consumption in the service.

In order to avoid backpressure, Fluent Bit implements a mechanism in the engine that restricts the amount of data an input plugin can ingest; this is done through the configuration parameter Mem_Buf_Limit.

    As described in the Buffering concepts section, Fluent Bit offers a hybrid mode for data handling: in-memory and filesystem (optional).

    In memory is always available and can be restricted with Mem_Buf_Limit. If memory reaches this limit and you reach a backpressure scenario, you will not be able to ingest more data until the data chunks that are in memory can be flushed.

Depending on the input plugin in use, this might lead to discarding incoming data (e.g. with the TCP input plugin). This can be mitigated by configuring secondary storage on the filesystem using storage.type filesystem (as described in Buffering & Storage). When the limit is reached, all new data will be stored safely in the filesystem.
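As a minimal sketch of this mitigation (the tail input and the path are illustrative assumptions; storage.path, storage.type and Mem_Buf_Limit are real configuration keys):

[SERVICE]
    # Directory where filesystem chunks are persisted
    storage.path  /var/log/flb-storage/

[INPUT]
    Name          tail
    Path          /var/log/app/*.log
    # Cap the in-memory buffer for this input
    Mem_Buf_Limit 1MB
    # When the limit is reached, new data is buffered on disk
    storage.type  filesystem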

    Mem_Buf_Limit

This option is disabled by default and can be applied to all input plugins. Let's explain its behavior using the following scenario:

• Mem_Buf_Limit is set to 1MB (one megabyte)

• input plugin tries to append 700KB

• engine routes the data to an output plugin

• output plugin backend (HTTP Server) is down

• engine scheduler will retry the flush after 10 seconds

• input plugin tries to append 500KB

    At this exact point, the engine will allow appending those 500KB of data into the memory; in total it will have 1.2MB of data buffered. The limit is permissive and will allow a single write past the limit, but once the limit is exceeded the following actions are taken:

    • block local buffers for the input plugin (cannot append more data)

    • notify the input plugin invoking a pause callback

    The engine will protect itself and will not append more data coming from the input plugin in question; note that it is the responsibility of the plugin to keep state and decide what to do in that paused state.

    After some time, usually measured in seconds, if the scheduler was able to flush the initial 700KB of data or it has given up after retrying, that amount of memory is released and the following actions will occur:

    • Upon data buffer release (700KB), the internal counters get updated

    • Counters now are set at 500KB

    • Since 500KB is < 1MB it checks the input plugin state

• If the plugin is paused, it invokes a resume callback

• input plugin can continue appending more data

    About pause and resume Callbacks

Each plugin is independent and not all of them implement the pause and resume callbacks. As said, these callbacks are just a notification mechanism for the plugin.

One example of a plugin that implements these callbacks and keeps state correctly is the Tail Input plugin. When the pause callback is triggered, it pauses its collectors and stops appending data. Upon resume, it resumes the collectors and continues ingesting data.

    Pipeline Monitoring

    Learn how to monitor your data pipeline with external services

A Data Pipeline represents a flow of data that goes through the inputs (sources), filters, and outputs (sinks). There are a couple of ways to monitor the pipeline. We recommend the following sections for a better understanding and steps to get started:

• HTTP Server: JSON and Prometheus Exporter-style metrics

• Grafana Dashboards and Alerts

• Health Checks

• Calyptia Cloud: hosted service to monitor and visualize your pipelines

    Fluent Bit Metrics

    A plugin to collect Fluent Bit's own metrics

Fluent Bit exposes its own metrics to allow you to monitor the internals of your pipeline. The collected metrics can be processed similarly to those from the Prometheus Node Exporter input plugin. They can be sent to output plugins including Prometheus Exporter, Prometheus Remote Write or OpenTelemetry.

    Important note: Metrics collected with Node Exporter Metrics flow through a separate pipeline from logs and current filters do not operate on top of metrics.

    Configuration

Key               Description                                                                Default

scrape_interval   The rate at which metrics are collected from the host operating system    2 seconds

scrape_on_start   Scrape metrics upon start, useful to avoid waiting for 'scrape_interval'  false
                  for the first round of metrics

    Getting Started

    Simple Configuration File

In the following configuration file, the input plugin fluentbit_metrics collects metrics every 2 seconds and exposes them through the Prometheus Exporter output plugin on HTTP/TCP port 2021:

# Fluent Bit Metrics + Prometheus Exporter
# -------------------------------------------
# The following example collects Fluent Bit metrics and exposes
# them through a Prometheus HTTP end-point.
#
# After starting the service try it with:
#
# $ curl http://127.0.0.1:2021/metrics
#
[SERVICE]
    flush           1
    log_level       info

[INPUT]
    name            fluentbit_metrics
    tag             internal_metrics
    scrape_interval 2

[OUTPUT]
    name            prometheus_exporter
    match           internal_metrics
    host            0.0.0.0
    port            2021

You can test the exposed metrics by using curl:

curl http://127.0.0.1:2021/metrics

    Docker Log Based Metrics

    The docker input plugin allows you to collect Docker container metrics such as memory usage and CPU consumption.

    Content:

    • Configuration Parameters

    • Configuration File

    Configuration Parameters

    The plugin supports the following configuration parameters:

Key            Description                                       Default

Interval_Sec   Polling interval in seconds                       1

Include        A space-separated list of containers to include

Exclude        A space-separated list of containers to exclude

    If you set neither Include nor Exclude, the plugin will try to get metrics from all the running containers.

    Configuration File

Here is an example configuration that collects metrics from two docker instances (6bab19c3a0f9 and 14159be4ca2c):

[INPUT]
    Name         docker
    Include      6bab19c3a0f9 14159be4ca2c

[OUTPUT]
    Name   stdout
    Match  *

This configuration will produce records like the one below:

[1] docker.0: [1571994772.00555745, {"id"=>"6bab19c3a0f9", "name"=>"postgresql", "cpu_used"=>172102435, "mem_used"=>5693400, "mem_limit"=>4294963200}]

    StatsD

    The statsd input plugin allows you to receive metrics via StatsD protocol.

    Content:

    • Configuration Parameters

    • Configuration Examples

    Configuration Parameters

    The plugin supports the following configuration parameters:

Key      Description                                Default

Listen   Listener network interface                 0.0.0.0

Port     UDP port where listening for connections   8125

    Configuration Examples

Here is a configuration example:

[INPUT]
    Name   statsd
    Listen 0.0.0.0
    Port   8125

[OUTPUT]
    Name   stdout
    Match  *

Now you can input metrics through the UDP port as follows:

echo "click:10|c|@0.1" | nc -q0 -u 127.0.0.1 8125
echo "active:99|g"     | nc -q0 -u 127.0.0.1 8125

Fluent Bit will produce the following records:

[0] statsd.0: [1574905088.971380537, {"type"=>"counter", "bucket"=>"click", "value"=>10.000000, "sample_rate"=>0.100000}]
[0] statsd.0: [1574905141.863344517, {"type"=>"gauge", "bucket"=>"active", "value"=>99.000000, "incremental"=>0}]

    MQTT

The MQTT input plugin allows retrieving messages/data from MQTT control packets over a TCP connection. The incoming data must be a JSON map.

    Configuration Parameters

    The plugin supports the following configuration parameters:

Key      Description

Listen   Listener network interface, default: 0.0.0.0

Port     TCP port where listening for connections, default: 1883

    Getting Started

    In order to start listening for MQTT messages, you can run the plugin from the command line or through the configuration file:

Command Line

Since the MQTT input plugin makes Fluent Bit behave as a server, we need to dispatch some messages using an MQTT client; in the following example the mosquitto tool is used for this purpose:

$ fluent-bit -i mqtt -t data -o stdout -m '*'
Fluent Bit v1.x.x
* Copyright (C) 2019-2020 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2016/05/20 14:22:52] [ info] starting engine
[0] data: [1463775773, {"topic"=>"some/topic", "key1"=>123, "key2"=>456}]

The following command line will send a message to the MQTT input plugin:

$ mosquitto_pub  -m '{"key1": 123, "key2": 456}' -t some/topic

Configuration File

In your main configuration file append the following Input & Output sections:

[INPUT]
    Name   mqtt
    Tag    data
    Listen 0.0.0.0
    Port   1883

[OUTPUT]
    Name   stdout
    Match  *

    Build with Static Configuration

In normal operation mode, Fluent Bit can be configured through text files or specific arguments in the command line. While this is the ideal deployment case, there are scenarios where a more restricted configuration is required: static configuration mode.

    Static configuration mode aims to include a built-in configuration in the final binary of Fluent Bit, disabling the usage of external files or flags at runtime.

    Getting Started

    Requirements

The following steps assume you are familiar with configuring Fluent Bit using text files and that you have experience building it from scratch as described in the Build and Install section.

    Configuration Directory

In your file system prepare a specific directory that will be used as an entry point for the build system to look up and parse the configuration files. It is mandatory that this directory contains as a minimum one configuration file called fluent-bit.conf with the required SERVICE, INPUT and OUTPUT sections. As an example, create a new fluent-bit.conf file with the following content:

[SERVICE]
    Flush     1
    Daemon    off
    Log_Level info

[INPUT]
    Name      cpu

[OUTPUT]
    Name      stdout
    Match     *

The configuration provided above will calculate CPU metrics from the running system and print them to the standard output interface.

    Build with Custom Configuration

Inside the Fluent Bit source code, get into the build/ directory and run CMake appending the FLB_STATIC_CONF option pointing to the configuration directory recently created, e.g:

$ cd fluent-bit/build/
$ cmake -DFLB_STATIC_CONF=/path/to/my/confdir/

Then build it:

$ make

At this point the generated fluent-bit binary is ready to run without the need of further configuration:

$ bin/fluent-bit 
Fluent-Bit v0.15.0
Copyright (C) Treasure Data

[2018/10/19 15:32:31] [ info] [engine] started (pid=15186)
[0] cpu.local: [1539984752.000347547, {"cpu_p"=>0.750000, "user_p"=>0.500000, "system_p"=>0.250000, "cpu0.p_cpu"=>1.000000, "cpu0.p_user"=>1.000000, "cpu0.p_system"=>0.000000, "cpu1.p_cpu"=>0.000000, "cpu1.p_user"=>0.000000, "cpu1.p_system"=>0.000000, "cpu2.p_cpu"=>0.000000, "cpu2.p_user"=>0.000000, "cpu2.p_system"=>0.000000, "cpu3.p_cpu"=>1.000000, "cpu3.p_user"=>1.000000, "cpu3.p_system"=>0.000000}]

    Memory Management

In certain scenarios it is ideal to estimate how much memory Fluent Bit could be using; this is very useful for containerized environments where memory limits are a must.

In order to do that, we will assume that the input plugins have set the Mem_Buf_Limit option (you can learn more about it in the Backpressure section).

    Estimating

Input plugins append data independently, so in order to do an estimation a limit should be imposed through the Mem_Buf_Limit option. If the limit is set to 10MB, we need to estimate that in the worst case the output plugin could likely use 20MB.

Fluent Bit has an internal binary representation for the data being processed; when this data reaches an output plugin, it will likely create its own representation in a new memory buffer for processing. The best examples are the InfluxDB and Elasticsearch output plugins, which both need to convert the binary representation to their respective custom JSON formats before sending data to the backend servers.

So, if we impose a limit of 10MB for the input plugins and consider the worst case scenario of the output plugin consuming 20MB extra, as a minimum we need (30MB x 1.2) = 36MB.

    Glibc and Memory Fragmentation

It is well known that in intensive environments where memory allocations happen at a high rate, the default memory allocator provided by Glibc can lead to high fragmentation, causing the service to report high memory usage.

It's strongly suggested that in any production environment, Fluent Bit should be built with jemalloc enabled (e.g. -DFLB_JEMALLOC=On). Jemalloc is an alternative memory allocator that can reduce fragmentation (among other things), resulting in better performance.

You can check if Fluent Bit has been built with Jemalloc using the following command:

$ bin/fluent-bit -h | grep JEMALLOC

The output should look like:

Build Flags =  JSMN_PARENT_LINKS JSMN_STRICT FLB_HAVE_TLS FLB_HAVE_SQLDB
FLB_HAVE_TRACE FLB_HAVE_FLUSH_LIBCO FLB_HAVE_VALGRIND FLB_HAVE_FORK
FLB_HAVE_PROXY_GO FLB_HAVE_JEMALLOC JEMALLOC_MANGLE FLB_HAVE_REGEX
FLB_HAVE_C_TLS FLB_HAVE_SETJMP FLB_HAVE_ACCEPT4 FLB_HAVE_INOTIFY

If the FLB_HAVE_JEMALLOC option is listed in Build Flags, everything is fine.

    Fluentd & Fluent Bit

    The Production Grade Telemetry Ecosystem

Telemetry data processing in general can be complex, and at scale a bit more; that's why Fluentd was born. Fluentd has become more than a simple tool; it has grown into a full-scale ecosystem that contains SDKs for different languages and sub-projects like Fluent Bit.

On this page, we will describe the relationship between the Fluentd and Fluent Bit open source projects. As a summary, we can say both are:

• Licensed under the terms of Apache License v2.0

• Graduated hosted projects by the Cloud Native Computing Foundation (CNCF)

• Production Grade solutions: deployed millions of times every single day

• Vendor neutral and community driven projects

• Widely adopted by the industry: trusted by all major companies like AWS, Microsoft, Google Cloud and hundreds of others

Both projects share a lot of similarities: Fluent Bit is fully designed and built on top of the best ideas of Fluentd architecture and general design. Choosing which one to use depends on the end-user needs.

The following table describes a comparison of different areas of the projects:

           Fluentd                Fluent Bit

Scope      Containers / Servers   Embedded Linux / Containers / Servers

Language   C & Ruby               C

Memory     > 60MB                 ~1MB

Both Fluentd and Fluent Bit can work as Aggregators or Forwarders; they can complement each other or be used as standalone solutions. In recent years, Cloud Providers have switched from Fluentd to Fluent Bit for performance and compatibility reasons. Fluent Bit is now considered the next-generation solution.

    Debian

Fluent Bit is distributed as the fluent-bit package and is available for the latest (and legacy) stable Debian systems: Bookworm and Bullseye. The following architectures are supported:

    • x86_64

    • aarch64 / arm64v8

    Ubuntu

Fluent Bit is distributed as the fluent-bit package and is available for the latest stable Ubuntu system: Jammy Jellyfish.

Single line install

A simple installation script is provided to be used for most Linux targets. This will always install the most recent version released:

curl https://raw.githubusercontent.com/fluent/fluent-bit/master/install.sh | sh

This is purely a convenience helper and should always be validated prior to use. The recommended secure deployment approach is to follow the instructions below.

    HTTP Proxy

    Enable traffic through a proxy server via HTTP_PROXY environment variable

    HTTP Proxy

    Fluent Bit supports configuring an HTTP proxy for all egress HTTP/HTTPS traffic via the HTTP_PROXY or http_proxy environment variable.

The format for the HTTP proxy environment variable is http://USER:PASS@HOST:PORT, where:

• USER is the username when using basic authentication.

• PASS is the password when using basic authentication.

• HOST is the HTTP proxy hostname or IP address.

• PORT is the port the HTTP proxy is listening on.

To use an HTTP proxy with basic authentication, provide the username and password:

HTTP_PROXY='http://example_user:[email protected]:8080'

When no authentication is required, omit the username and password:

HTTP_PROXY='http://proxy.example.com:8080'

The HTTP_PROXY environment variable is a standard way for setting an HTTP proxy in a containerized environment, and it is also natively supported by any application written in Go. Therefore, we follow and implement the same convention for Fluent Bit. For convenience and compatibility, the http_proxy environment variable is also supported. When both the HTTP_PROXY and http_proxy environment variables are provided, HTTP_PROXY will be preferred.

Note: The HTTP output plugin also supports configuring an HTTP proxy. This configuration continues to work; however, it should not be used together with the HTTP_PROXY or http_proxy environment variable. This is because under the hood, the environment-variable-based proxy configuration is implemented by setting up a TCP connection tunnel via HTTP CONNECT. Unlike the plugin's implementation, this supports both HTTP and HTTPS egress traffic.

NO_PROXY

Not all traffic should flow through the HTTP proxy. In this case, the NO_PROXY or no_proxy environment variable should be used.

The format for the no proxy environment variable is a comma-separated list of hostnames or IP addresses whose traffic should not flow through the HTTP proxy.

A domain name matches itself and all its subdomains (i.e. foo.com matches foo.com and bar.foo.com):

NO_PROXY='foo.com,127.0.0.1,localhost'

A domain with a leading . only matches its subdomains (i.e. .foo.com matches bar.foo.com but not foo.com):

NO_PROXY='.foo.com,127.0.0.1,localhost'

One typical use case for NO_PROXY is when running Fluent Bit in a Kubernetes environment, where we want:

• All real egress traffic to flow through an HTTP proxy.

• All local Kubernetes traffic to not flow through the HTTP proxy.

In this case, we can set:

NO_PROXY='127.0.0.1,localhost,kubernetes.default.svc'

For convenience and compatibility, the no_proxy environment variable is also supported. When both the NO_PROXY and no_proxy environment variables are provided, NO_PROXY will be preferred.

    Router

    Create flexible routing rules

Routing is a core feature that allows you to route your data through Filters and finally to one or multiple destinations. The router relies on the concept of Tags and Matching rules.

    There are two important concepts in Routing:

    • Tag

• Match

When the data is generated by the input plugins, it comes with a Tag (most of the time the Tag is configured manually). The Tag is a human-readable indicator that helps to identify the data source.

In order to define where the data should be routed, a Match rule must be specified in the output configuration.

Consider the following configuration example that aims to deliver CPU metrics to an Elasticsearch database and Memory metrics to the standard output interface:

[INPUT]
    Name cpu
    Tag  my_cpu

[INPUT]
    Name mem
    Tag  my_mem

[OUTPUT]
    Name   es
    Match  my_cpu

[OUTPUT]
    Name   stdout
    Match  my_mem

Note: the above is a simple example demonstrating how Routing is configured.

Routing works automatically by reading the Input Tags and the Output Match rules. If some data has a Tag that doesn't match at routing time, the data is deleted.

Routing with Wildcard

Routing is flexible enough to support wildcards in the Match pattern. The below example defines a common destination for both sources of data:

[INPUT]
    Name cpu
    Tag  my_cpu

[INPUT]
    Name mem
    Tag  my_mem

[OUTPUT]
    Name   stdout
    Match  my_*

The match rule is set to my_* which means it will match any Tag that starts with my_.

    Format and Schema

    Fluent Bit might optionally use a configuration file to define how the service will behave.

    Before proceeding we need to understand how the configuration schema works.

The schema is defined by three concepts:

• Sections

• Entries: Key/Value

• Indented Configuration Mode

A simple example of a configuration file is as follows:

[SERVICE]
    # This is a commented line
    Daemon    off
    log_level debug

Sections

A section is defined by a name or title inside brackets. Looking at the example above, a Service section has been set using the [SERVICE] definition. Section rules:

• All section content must be indented (4 spaces ideally).

• Multiple sections can exist on the same file.

• A section is expected to have comments and entries, it cannot be empty.

• Any commented line under a section must be indented too.

Entries: Key/Value

A section may contain Entries; an entry is defined by a line of text that contains a Key and a Value. Using the above example, the [SERVICE] section contains two entries: one is the key Daemon with value off and the other is the key Log_Level with the value debug. Entries rules:

• An entry is defined by a key and a value.

• A key must be indented.

• A key must contain a value which ends in the breakline.

• Multiple keys with the same name can exist.

Commented lines are set by prefixing the # character; those lines are not processed, but they must be indented too.

Indented Configuration Mode

Fluent Bit configuration files are based on a strict Indented Mode. This means that each configuration file must follow the same pattern of alignment from left to right when writing text. By default an indentation level of four spaces from left to right is suggested. Example:

[FIRST_SECTION]
    # This is a commented line
    Key1  some value
    Key2  another value
    # more comments

[SECOND_SECTION]
    KeyN  3.14

As you can see there are two sections with multiple entries and comments; note also that empty lines are allowed and they do not need to be indented.

    Disk I/O Log Based Metrics

The disk input plugin gathers information about the disk throughput of the running system at a given interval and reports it.

The Disk I/O metrics plugin creates metrics that are log-based (i.e. a JSON payload). If you are looking for Prometheus-based metrics, please see the Node Exporter Metrics input plugin.

    Configuration Parameters

    The plugin supports the following configuration parameters:

Key             Description                                                            Default

Interval_Sec    Polling interval (seconds).                                            1

Interval_NSec   Polling interval (nanoseconds).                                        0

Dev_Name        Device name to limit the target (e.g. sda). If not set, in_disk       all disks
                gathers information from all disks and partitions.

    Standard Input

The stdin plugin allows retrieving valid JSON text messages over the standard input interface (stdin). In order to use it, specify the plugin name as the input, e.g:

$ fluent-bit -i stdin -o stdout

As input data, the stdin plugin recognizes the following JSON data formats:

1. { map => val, map => val, map => val }
2. [ time, { map => val, map => val, map => val } ]

Configuration Parameters

The plugin supports the following configuration parameters:

Key           Description                                                             Default

Buffer_Size   Set the buffer size to read data. This value is used to increase       16k
              buffer size. The value must be according to the Unit Sizes
              specification.

A better example to demonstrate how it works is through a Bash script that generates messages and writes them to Fluent Bit. Write the following content in a file named test.sh:

Give the script execution permission:

Now let's start the script and Fluent Bit in the following way:
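The script body and the exact commands are not preserved in this export; a minimal sketch consistent with the steps above (the script contents are an assumption, not the original):

#!/bin/sh

# test.sh: emit one JSON map per second to the standard output
while :; do
  echo -n "{\"key\": \"some value\"}"
  sleep 1
done

$ chmod 755 test.sh                          # give the script execution permission
$ ./test.sh | fluent-bit -i stdin -o stdout  # start the script and Fluent Bit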

Single line install

A simple installation script is provided to be used for most Linux targets. This will always install the most recent version released:

curl https://raw.githubusercontent.com/fluent/fluent-bit/master/install.sh | sh

This is purely a convenience helper and should always be validated prior to use. The recommended secure deployment approach is to follow the instructions below.

    Server GPG key

The first step is to add our server GPG key to your keyring; that way you can get our signed packages. Follow the official Debian wiki guidance: https://wiki.debian.org/DebianRepository/UseThirdParty#OpenPGP_Key_distribution

curl https://packages.fluentbit.io/fluentbit.key | gpg --dearmor > /usr/share/keyrings/fluentbit-keyring.gpg

    Updated key from March 2022

    From the 1.9.0 and 1.8.15 releases please note that the GPG key has been updated at https://packages.fluentbit.io/fluentbit.key so ensure this new one is added.

The GPG Key fingerprint of the new key is:

C3C0 A285 34B9 293E AF51  FABD 9F9D DC08 3888 C1CD
Fluentbit releases (Releases signing key) <[email protected]>

    The previous key is still available at https://packages.fluentbit.io/fluentbit-legacy.key and may be required to install previous versions.

The GPG Key fingerprint of the old key is:

F209 D876 2A60 CD49 E680 633B 4FF8 368B 6EA0 722A

    Refer to the supported platform documentation to see which platforms are supported in each release.

    Update your sources lists

On Debian, you need to add our APT server entry to your sources lists. Please add the following content at the bottom of your /etc/apt/sources.list file, ensuring to set CODENAME to your specific Debian release name (e.g. bookworm for Debian 12):

deb [signed-by=/usr/share/keyrings/fluentbit-keyring.gpg] https://packages.fluentbit.io/debian/${CODENAME} ${CODENAME} main

    Update your repositories database

Now let your system update the apt database:

sudo apt-get update

    We recommend upgrading your system (sudo apt-get upgrade). This could avoid potential issues with expired certificates.

    Install Fluent Bit

Using the following apt-get command you are now able to install the latest fluent-bit:

sudo apt-get install fluent-bit

Now the following step is to instruct systemd to enable the service:

sudo systemctl start fluent-bit

If you do a status check, you should see a similar output like this:

sudo service fluent-bit status
● fluent-bit.service - Fluent Bit
   Loaded: loaded (/lib/systemd/system/fluent-bit.service; disabled; vendor preset: enabled)
   Active: active (running) since mié 2016-07-06 16:58:25 CST; 2h 45min ago
 Main PID: 6739 (fluent-bit)
    Tasks: 1
   Memory: 656.0K
      CPU: 1.393s
   CGroup: /system.slice/fluent-bit.service
           └─6739 /opt/fluent-bit/bin/fluent-bit -c /etc/fluent-bit/fluent-bit.conf
...

The default configuration of fluent-bit collects metrics of CPU usage and sends the records to the standard output; you can see the outgoing data in your /var/log/syslog file.

    Server GPG key

The first step is to add our server GPG key to your keyring to ensure you can get our signed packages. Follow the official Debian wiki guidance: https://wiki.debian.org/DebianRepository/UseThirdParty#OpenPGP_Key_distribution

curl https://packages.fluentbit.io/fluentbit.key | gpg --dearmor > /usr/share/keyrings/fluentbit-keyring.gpg

    Updated key from March 2022

    From the 1.9.0 and 1.8.15 releases please note that the GPG key has been updated at https://packages.fluentbit.io/fluentbit.key so ensure this new one is added.

The GPG Key fingerprint of the new key is:

C3C0 A285 34B9 293E AF51  FABD 9F9D DC08 3888 C1CD
Fluentbit releases (Releases signing key) <[email protected]>

    The previous key is still available at https://packages.fluentbit.io/fluentbit-legacy.key and may be required to install previous versions.

The GPG Key fingerprint of the old key is:

F209 D876 2A60 CD49 E680 633B 4FF8 368B 6EA0 722A

    Refer to the supported platform documentation to see which platforms are supported in each release.

    Update your sources lists

On Ubuntu, you need to add our APT server entry to your sources lists. Please add the following content at the bottom of your /etc/apt/sources.list file, ensuring to set CODENAME to your specific Ubuntu release name (e.g. focal for Ubuntu 20.04):

deb [signed-by=/usr/share/keyrings/fluentbit-keyring.gpg] https://packages.fluentbit.io/ubuntu/${CODENAME} ${CODENAME} main

    Update your repositories database

Now let your system update the apt database:

sudo apt-get update

    We recommend upgrading your system (sudo apt-get upgrade). This could avoid potential issues with expired certificates.

    If you have the following error "Certificate verification failed", you might want to check if the package ca-certificates is properly installed (sudo apt-get install ca-certificates).

Install Fluent Bit

Using the following apt-get command you are now able to install the latest fluent-bit:

sudo apt-get install fluent-bit

Now the following step is to instruct systemd to enable the service:

sudo systemctl start fluent-bit

If you do a status check, you should see a similar output like this:

sudo service fluent-bit status
● fluent-bit.service - Fluent Bit
   Loaded: loaded (/lib/systemd/system/fluent-bit.service; disabled; vendor preset: enabled)
   Active: active (running) since mié 2016-07-06 16:58:25 CST; 2h 45min ago
 Main PID: 6739 (fluent-bit)
    Tasks: 1
   Memory: 656.0K
      CPU: 1.393s
   CGroup: /system.slice/fluent-bit.service
           └─6739 /opt/fluent-bit/bin/fluent-bit -c /etc/fluent-bit/fluent-bit.conf
...

The default configuration of fluent-bit collects metrics of CPU usage and sends the records to the standard output; you can see the outgoing data in your /var/log/syslog file.


    Getting Started

    In order to get disk usage from your system, you can run the plugin from the command line or through the configuration file:

Command Line

$ fluent-bit -i disk -o stdout
Fluent Bit v1.x.x
* Copyright (C) 2019-2020 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2017/01/28 16:58:16] [ info] [engine] started
[0] disk.0: [1485590297, {"read_size"=>0, "write_size"=>0}]
[1] disk.0: [1485590298, {"read_size"=>0, "write_size"=>0}]
[2] disk.0: [1485590299, {"read_size"=>0, "write_size"=>0}]
[3] disk.0: [1485590300, {"read_size"=>0, "write_size"=>11997184}]

Configuration File

In your main configuration file append the following Input & Output sections:

[INPUT]
    Name          disk
    Tag           disk
    Interval_Sec  1
    Interval_NSec 0

[OUTPUT]
    Name   stdout
    Match  *

    Note: Total interval (sec) = Interval_Sec + (Interval_Nsec / 1000000000).

    e.g. 1.5s = 1s + 500000000ns

    aws ssm get-parameters-by-path --region eu-central-1 --path /aws/service/aws-for-fluent-bit/ --query 'Parameters[*].Name'
    $ aws ssm get-parameter --region ap-northeast-1 --name /aws/service/aws-for-fluent-bit/2.0.0
    Parameters:
      FireLensImage:
        Description: Fluent Bit image for the FireLens Container
        Type: AWS::SSM::Parameter::Value<String>
        Default: /aws/service/aws-for-fluent-bit/latest
    version: "3.7"
    
    services:
      fluent-bit:
        image: fluent/fluent-bit
        volumes:
          - ./fluent-bit.conf:/fluent-bit/etc/fluent-bit.conf
        depends_on:
          - elasticsearch
      elasticsearch:
        image: elasticsearch:7.6.2
        ports:
          - "9200:9200"
        environment:
          - discovery.type=single-node
    curl -X DELETE "localhost:9200/fluent-bit?pretty"
    $ bin/fluent-bit -i kmsg -t kernel -o stdout -m '*'
    Fluent Bit v1.x.x
    * Copyright (C) 2019-2020 The Fluent Bit Authors
    * Copyright (C) 2015-2018 Treasure Data
    * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
    * https://fluentbit.io
    
    [0] kernel: [1463421823, {"priority"=>3, "sequence"=>1814, "sec"=>11706, "usec"=>732233, "msg"=>"ERROR @wl_cfg80211_get_station : Wrong Mac address, mac = 34:a8:4e:d3:40:ec profile =20:3a:07:9e:4a:ac"}]
    [1] kernel: [1463421823, {"priority"=>3, "sequence"=>1815, "sec"=>11706, "usec"=>732300, "msg"=>"ERROR @wl_cfg80211_get_station : Wrong Mac address, mac = 34:a8:4e:d3:40:ec profile =20:3a:07:9e:4a:ac"}]
    [2] kernel: [1463421829, {"priority"=>3, "sequence"=>1816, "sec"=>11712, "usec"=>729728, "msg"=>"ERROR @wl_cfg80211_get_station : Wrong Mac address, mac = 34:a8:4e:d3:40:ec profile =20:3a:07:9e:4a:ac"}]
    [3] kernel: [1463421829, {"priority"=>3, "sequence"=>1817, "sec"=>11712, "usec"=>729802, "msg"=>"ERROR @wl_cfg80211_get_station : Wrong Mac address, mac = 34:a8:4e:d3:40:ec
    ...
    # Fluent Bit Metrics + Prometheus Exporter
    # -------------------------------------------
    # The following example collects Fluent Bit metrics and exposes
    # them through a Prometheus HTTP end-point.
    #
    # After starting the service try it with:
    #
    # $ curl http://127.0.0.1:2021/metrics
    #
    [SERVICE]
        flush           1
        log_level       info
    
    [INPUT]
        name            fluentbit_metrics
        tag             internal_metrics
        scrape_interval 2
    
    [OUTPUT]
        name            prometheus_exporter
        match           internal_metrics
        host            0.0.0.0
        port            2021
    
    curl http://127.0.0.1:2021/metrics
    [INPUT]
        Name         docker
        Include      6bab19c3a0f9 14159be4ca2c
    [OUTPUT]
        Name   stdout
        Match  *
    [1] docker.0: [1571994772.00555745, {"id"=>"6bab19c3a0f9", "name"=>"postgresql", "cpu_used"=>172102435, "mem_used"=>5693400, "mem_limit"=>4294963200}]
    [INPUT]
        Name   statsd
        Listen 0.0.0.0
        Port   8125
    
    [OUTPUT]
        Name   stdout
        Match  *
    [0] statsd.0: [1574905088.971380537, {"type"=>"counter", "bucket"=>"click", "value"=>10.000000, "sample_rate"=>0.100000}]
    [0] statsd.0: [1574905141.863344517, {"type"=>"gauge", "bucket"=>"active", "value"=>99.000000, "incremental"=>0}]
    $ fluent-bit -i mqtt -t data -o stdout -m '*'
    Fluent Bit v1.x.x
    * Copyright (C) 2019-2020 The Fluent Bit Authors
    * Copyright (C) 2015-2018 Treasure Data
    * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
    * https://fluentbit.io
    
    [2016/05/20 14:22:52] [ info] starting engine
    [0] data: [1463775773, {"topic"=>"some/topic", "key1"=>123, "key2"=>456}]
    [INPUT]
        Name   mqtt
        Tag    data
        Listen 0.0.0.0
        Port   1883
    
    [OUTPUT]
        Name   stdout
        Match  *
    [SERVICE]
        Flush     1
        Daemon    off
        Log_Level info
    
    [INPUT]
        Name      cpu
    
    [OUTPUT]
        Name      stdout
        Match     *
    $ cd fluent-bit/build/
    $ cmake -DFLB_STATIC_CONF=/path/to/my/confdir/
    $ make
    $ bin/fluent-bit 
    Fluent-Bit v0.15.0
    Copyright (C) Treasure Data
    
    [2018/10/19 15:32:31] [ info] [engine] started (pid=15186)
    [0] cpu.local: [1539984752.000347547, {"cpu_p"=>0.750000, "user_p"=>0.500000, "system_p"=>0.250000, "cpu0.p_cpu"=>1.000000, "cpu0.p_user"=>1.000000, "cpu0.p_system"=>0.000000, "cpu1.p_cpu"=>0.000000, "cpu1.p_user"=>0.000000, "cpu1.p_system"=>0.000000, "cpu2.p_cpu"=>0.000000, "cpu2.p_user"=>0.000000, "cpu2.p_system"=>0.000000, "cpu3.p_cpu"=>1.000000, "cpu3.p_user"=>1.000000, "cpu3.p_system"=>0.000000}]
    Build Flags =  JSMN_PARENT_LINKS JSMN_STRICT FLB_HAVE_TLS FLB_HAVE_SQLDB
    FLB_HAVE_TRACE FLB_HAVE_FLUSH_LIBCO FLB_HAVE_VALGRIND FLB_HAVE_FORK
    FLB_HAVE_PROXY_GO FLB_HAVE_JEMALLOC JEMALLOC_MANGLE FLB_HAVE_REGEX
    FLB_HAVE_C_TLS FLB_HAVE_SETJMP FLB_HAVE_ACCEPT4 FLB_HAVE_INOTIFY
    curl https://raw.githubusercontent.com/fluent/fluent-bit/master/install.sh | sh
    curl https://packages.fluentbit.io/fluentbit.key | gpg --dearmor > /usr/share/keyrings/fluentbit-keyring.gpg
    C3C0 A285 34B9 293E AF51  FABD 9F9D DC08 3888 C1CD
    Fluentbit releases (Releases signing key) <[email protected]>
    F209 D876 2A60 CD49 E680 633B 4FF8 368B 6EA0 722A
    deb [signed-by=/usr/share/keyrings/fluentbit-keyring.gpg] https://packages.fluentbit.io/debian/${CODENAME} ${CODENAME} main
    sudo apt-get update
    sudo apt-get install fluent-bit
    sudo systemctl start fluent-bit
    sudo service fluent-bit status
    ● fluent-bit.service - Fluent Bit
       Loaded: loaded (/lib/systemd/system/fluent-bit.service; disabled; vendor preset: enabled)
       Active: active (running) since mié 2016-07-06 16:58:25 CST; 2h 45min ago
     Main PID: 6739 (fluent-bit)
        Tasks: 1
       Memory: 656.0K
          CPU: 1.393s
       CGroup: /system.slice/fluent-bit.service
               └─6739 /opt/fluent-bit/bin/fluent-bit -c /etc/fluent-bit/fluent-bit.conf
    ...
    curl https://raw.githubusercontent.com/fluent/fluent-bit/master/install.sh | sh
    curl https://packages.fluentbit.io/fluentbit.key | gpg --dearmor > /usr/share/keyrings/fluentbit-keyring.gpg
    C3C0 A285 34B9 293E AF51  FABD 9F9D DC08 3888 C1CD
    Fluentbit releases (Releases signing key) <[email protected]>
    F209 D876 2A60 CD49 E680 633B 4FF8 368B 6EA0 722A
    deb [signed-by=/usr/share/keyrings/fluentbit-keyring.gpg] https://packages.fluentbit.io/ubuntu/${CODENAME} ${CODENAME} main
    sudo apt-get update
    sudo apt-get install fluent-bit
    sudo systemctl start fluent-bit
    sudo service status fluent-bit
    ● fluent-bit.service - Fluent Bit
       Loaded: loaded (/lib/systemd/system/fluent-bit.service; disabled; vendor preset: enabled)
       Active: active (running) since mié 2016-07-06 16:58:25 CST; 2h 45min ago
     Main PID: 6739 (fluent-bit)
        Tasks: 1
       Memory: 656.0K
          CPU: 1.393s
       CGroup: /system.slice/fluent-bit.service
               └─6739 /opt/fluent-bit/bin/fluent-bit -c /etc/fluent-bit/fluent-bit.conf
    ...
    HTTP_PROXY='http://example_user:[email protected]:8080'
    HTTP_PROXY='http://proxy.example.com:8080'
    NO_PROXY='foo.com,127.0.0.1,localhost'
    NO_PROXY='.foo.com,127.0.0.1,localhost'
    NO_PROXY='127.0.0.1,localhost,kubernetes.default.svc'
    [INPUT]
        Name cpu
        Tag  my_cpu
    
    [INPUT]
        Name mem
        Tag  my_mem
    
    [OUTPUT]
        Name   es
        Match  my_cpu
    
    [OUTPUT]
        Name   stdout
        Match  my_mem
    [INPUT]
        Name cpu
        Tag  my_cpu
    
    [INPUT]
        Name mem
        Tag  my_mem
    
    [OUTPUT]
        Name   stdout
        Match  my_*
    [SERVICE]
        # This is a commented line
        Daemon    off
        log_level debug
    [FIRST_SECTION]
        # This is a commented line
        Key1  some value
        Key2  another value
        # more comments
    
    [SECOND_SECTION]
        KeyN  3.14
    $ fluent-bit -i disk -o stdout
    Fluent Bit v1.x.x
    * Copyright (C) 2019-2020 The Fluent Bit Authors
    * Copyright (C) 2015-2018 Treasure Data
    * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
    * https://fluentbit.io
    
    [2017/01/28 16:58:16] [ info] [engine] started
    [0] disk.0: [1485590297, {"read_size"=>0, "write_size"=>0}]
    [1] disk.0: [1485590298, {"read_size"=>0, "write_size"=>0}]
    [2] disk.0: [1485590299, {"read_size"=>0, "write_size"=>0}]
    [3] disk.0: [1485590300, {"read_size"=>0, "write_size"=>11997184}]
    [INPUT]
        Name          disk
        Tag           disk
        Interval_Sec  1
        Interval_NSec 0
    [OUTPUT]
        Name   stdout
        Match  *
  • Production Grade solutions: deployed millions of times every single day.

  • Vendor-neutral and community-driven projects.

  • Widely Adopted by the Industry: trusted by major companies such as AWS, Microsoft, Google Cloud, and hundreds of others.

  • Both projects share many similarities: Fluent Bit is fully designed and built on top of the best ideas of Fluentd's architecture and general design. Choosing which one to use depends on the end-user's needs.

    The following table describes a comparison of different areas of the projects:

                 Fluentd                   Fluent Bit

    Scope        Containers / Servers      Embedded Linux / Containers / Servers

    Language     C & Ruby                  C

    Memory       > 60MB                    ~1MB

    Both Fluentd and Fluent Bit can work as Aggregators or Forwarders; they can complement each other or be used as standalone solutions. In recent years, Cloud Providers have switched from Fluentd to Fluent Bit for performance and compatibility reasons. Fluent Bit is now considered the next-generation solution.

    Configuration Parameters

    The plugin supports the following configuration parameters:

    • Buffer_Size: Set the buffer size used to read data. This value is used to increase the buffer size. The value must conform to the Unit Size specification. Default: 16k

    $ fluent-bit -i stdin -o stdout
    1. { map => val, map => val, map => val }
    2. [ time, { map => val, map => val, map => val } ]

    Key Concepts

    There are a few key concepts that are really important to understand how Fluent Bit operates.

    Before diving into Fluent Bit it’s good to get acquainted with some of the key concepts of the service. This document provides a gentle introduction to those concepts and common Fluent Bit terminology. We’ve provided a list below of all the terms we’ll cover, but we recommend reading this document from start to finish to gain a more general understanding of our log and stream processor.

    • Event or Record

    • Filtering

    • Tag

    • Timestamp

    • Match

    • Structured Message

    Event or Record

    Every incoming piece of data that belongs to a log or a metric that is retrieved by Fluent Bit is considered an Event or a Record.

    As an example consider the following content of a Syslog file:

    It contains four lines, and each of them represents an independent Event.

    Internally, an Event always has two components (in an array form):

    Filtering

    In some cases it is necessary to modify the content of Events; the process of altering, enriching, or dropping Events is called Filtering.

    There are many use cases where Filtering is required, such as the following (a configuration sketch follows this list):

    • Append specific information to the Event like an IP address or metadata.

    • Select a specific piece of the Event content.

    • Drop Events that match a certain pattern.
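
    As an illustration of the use cases above, here is a minimal sketch using the record_modifier and grep filters; the app.log tag, the hostname value, and the regex are hypothetical choices for this example:

    [INPUT]
        Name  dummy
        Tag   app.log

    # Append metadata: add a hostname key to every Event
    [FILTER]
        Name    record_modifier
        Match   app.log
        Record  hostname web-01

    # Drop Events whose message key matches the pattern
    [FILTER]
        Name     grep
        Match    app.log
        Exclude  message .*DEBUG.*

    [OUTPUT]
        Name   stdout
        Match  app.log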

    Tag

    Every Event that gets into Fluent Bit gets assigned a Tag. This tag is an internal string that is used in a later stage by the Router to decide which Filter or Output phase it must go through.

    Most of the tags are assigned manually in the configuration. If a tag is not specified, Fluent Bit will assign the name of the Input plugin instance where that Event was generated.

    The only input plugin that does NOT assign tags is the Forward input. This plugin speaks the Fluentd wire protocol called Forward, where every Event already comes with an associated Tag. Fluent Bit will always use the incoming Tag set by the client.

    A Tagged record must always have a Matching rule. To learn more about Tags and Matches, check the Routing section.

    Timestamp

    The Timestamp represents the time when an Event was created. Every Event contains a Timestamp associated. The Timestamp is a numeric fractional integer in the format:

    Seconds

    It is the number of seconds that have elapsed since the Unix epoch.

    Nanoseconds

    Fractional second or one thousand-millionth of a second.

    A timestamp always exists, either set by the Input plugin or discovered through a data parsing process.

    Match

    Fluent Bit allows you to deliver your collected and processed Events to one or multiple destinations; this is done through a routing phase. A Match represents a rule to select Events whose Tag matches a defined pattern.

    To learn more about Tags and Matches, check the Routing section.

    Structured Messages

    Source events may or may not have a structure. A structure defines a set of keys and values inside the Event message. As an example, consider the following two messages:

    No structured message

    Structured Message

    At a low level both are just arrays of bytes, but the structured message defines keys and values; having a structure helps to implement faster operations on data modifications.

    Fluent Bit always handles every Event message as a structured message. For performance reasons, we use a binary serialization data format called MessagePack.

    Consider MessagePack as a binary version of JSON on steroids.

    Raspbian / Raspberry Pi

    Fluent Bit is distributed as the fluent-bit package and is available for the Raspberry Pi, specifically for the Raspbian distribution. The following versions are supported:

    • Raspbian Bullseye (11)

    • Raspbian Buster (10)

    Server GPG key

    The first step is to add our server GPG key to your keyring; this way you can verify our signed packages:

    Updated key from March 2022

    From the 1.9.0 and 1.8.15 releases please note that the GPG key has been updated at https://packages.fluentbit.io/fluentbit.key, so ensure this new one is added.

    The GPG Key fingerprint of the new key is:

    The previous key is still available at https://packages.fluentbit.io/fluentbit-legacy.key and may be required to install previous versions.

    The GPG Key fingerprint of the old key is:

    Refer to the supported platform documentation to see which platforms are supported in each release.

    Update your sources lists

    On Debian and derivative systems such as Raspbian, you need to add our APT server entry to your sources lists; please add the following content at the bottom of your /etc/apt/sources.list file.

    Raspbian 11 (Bullseye)

    Raspbian 10 (Buster)

    Update your repositories database

    Now let your system update the apt database:

    We recommend upgrading your system (sudo apt-get upgrade). This could avoid potential issues with expired certificates.

    Install Fluent Bit

    Using the following apt-get command you can now install the latest fluent-bit:

    The next step is to instruct systemd to enable the service:

    If you do a status check, you should see output similar to this:

    The default configuration of fluent-bit collects CPU usage metrics and sends the records to the standard output; you can see the outgoing data in your /var/log/syslog file.
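
    For reference, that default pipeline is roughly equivalent to the following minimal sketch (the packaged fluent-bit.conf also contains SERVICE settings not shown here):

    [INPUT]
        Name cpu

    [OUTPUT]
        Name  stdout
        Match *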

    Docker Events

    The docker events input plugin uses the Docker API to capture server events. A complete list of possible events returned by this plugin can be found in the Docker documentation.

    Configuration Parameters

    This plugin supports the following configuration parameters:

    Key
    Description
    Default

    Command Line

    Configuration File

    In your main configuration file append the following Input & Output sections:

    HTTP

    The HTTP input plugin allows you to send custom records to an HTTP endpoint.

    Configuration Parameters

    Key

    Description

    Default

    Getting Started

    The http input plugin allows Fluent Bit to open up an HTTP port that you can then route data to in a dynamic way. This plugin supports dynamic tags, which allow you to send data with different tags through the same input. An example video and curl message can be seen below.

    How to set tag

    The tag for the HTTP input plugin is set by adding the tag to the end of the request URL. This tag is then used to route the event through the system. For example, in the following curl message below the tag set is app.log. If you do not set the tag, http.0 is automatically used. If you have multiple HTTP inputs then they will follow a pattern of http.N, where N is an integer representing the input.
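
    To illustrate dynamic tagging, the following sketch (the port and the app.log tag are arbitrary choices) routes records sent to the same input through different outputs depending on the tag in the request URL:

    [INPUT]
        name   http
        listen 0.0.0.0
        port   8888

    [OUTPUT]
        name   stdout
        match  app.log

    [OUTPUT]
        name   stdout
        match  http.0

    A record posted to http://localhost:8888/app.log matches the first output, while a record posted to http://localhost:8888 keeps the default tag http.0 and matches the second.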

    Example Curl message

    Configuration File

    Command Line

    Network I/O Log Based Metrics

    The netif input plugin gathers network traffic information from the running system at a given interval of time and reports it.

    The Network I/O Metrics plugin creates metrics that are log-based (i.e. a JSON payload). If you are looking for Prometheus-based metrics, please see the Node Exporter Metrics input plugin.

    Configuration Parameters

    The plugin supports the following configuration parameters:

    Key
    Description
    Default

    Getting Started

    In order to monitor network traffic from your system, you can run the plugin from the command line or through the configuration file:

    Command Line

    Configuration File

    In your main configuration file append the following Input & Output sections:

    Note: Total interval (sec) = Interval_Sec + (Interval_Nsec / 1000000000).

    e.g. 1.5s = 1s + 500000000ns
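
    For example, a 1.5 second polling interval can be sketched as follows (reusing the eth0 interface from the examples on this page):

    [INPUT]
        Name          netif
        Tag           netif
        Interface     eth0
        Interval_Sec  1
        Interval_NSec 500000000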

    Dummy

    The dummy input plugin generates dummy events. It is useful for testing, debugging, benchmarking, and getting started with Fluent Bit.

    Configuration Parameters

    The plugin supports the following configuration parameters:

    Key
    Description

    Getting Started

    You can run the plugin from the command line or through the configuration file:

    Command Line

    Configuration File

    In your main configuration file append the following Input & Output sections:

    Process Log Based Metrics

    The Process input plugin allows you to check how healthy a process is. It does so by performing a service check at a given interval of time specified by the user.

    The Process metrics plugin creates metrics that are log-based (i.e. a JSON payload). If you are looking for Prometheus-based metrics, please see the Node Exporter Metrics input plugin.

    Configuration Parameters

    The plugin supports the following configuration parameters:

    Key
    Description

    Getting Started

    In order to start performing the checks, you can run the plugin from the command line or through the configuration file:

    The following example will check the health of the crond process.

    Configuration File

    In your main configuration file append the following Input & Output sections:

    Testing

    Once Fluent Bit is running, you will see the health of the process:

    Exec Wasi

    The exec_wasi input plugin allows you to run a WASM program built for the WASI target as an external program and collect event logs from it.

    Configuration Parameters

    The plugin supports the following configuration parameters:

    Key
    Description

    Configuration Examples

    Here is a configuration example. in_exec_wasi can handle parsers. To retrieve structured data from the WASM program, you have to create a parsers.conf:

    Note that Time_Format should match the format of the timestamp you are using. In this document, we assume that the WASM program writes JSON-style strings to stdout.

    Then, you can specify the above parsers.conf in the main fluent-bit configuration:

    Random

    The Random input plugin generates very simple random-value samples using the device interface /dev/urandom; if it is not available, the plugin will use a Unix timestamp as the value.

    Configuration Parameters

    The plugin supports the following configuration parameters:

    Key
    Description

    Getting Started

    In order to start generating random samples, you can run the plugin from the command line or through the configuration file:

    Command Line

    From the command line you can let Fluent Bit generate the samples with the following options:

    Configuration File

    In your main configuration file append the following Input & Output sections:

    Testing

    Once Fluent Bit is running, you will see the reports in the output interface similar to this:

    Windows Event Log

    The winlog input plugin allows you to read Windows Event Log.

    Configuration Parameters

    The plugin supports the following configuration parameters:

    Key
    Description
    Default

    Note that if you do not set db, the plugin will read channels from the beginning on each startup.

    Configuration Examples

    Configuration File

    Here is a minimum configuration example.

    Note that some Windows Event Log channels (like Security) require administrator privileges for reading. In that case, you need to run fluent-bit as an administrator.

    Command Line

    If you want to do a quick test, you can run this plugin from the command line.

    Upgrade Notes

    The following article covers the relevant notes for users upgrading from previous Fluent Bit versions. We aim to cover compatibility changes that you must be aware of.

    For more details about the changes in each release, please refer to the Official Release Notes.

    Note: release notes will be prepared in advance of a Git tag for a release so an official release should provide both a tag and a release note together to allow users to verify and understand the release contents.

    The tag drives the overall binary release process so release binaries (containers/packages) will appear after a tag and its associated release note. This allows users to expect the new release binary to appear and allow/deny/update it as appropriate in their infrastructure.

    Amazon Linux

    Install on Amazon Linux

    Fluent Bit is distributed as the fluent-bit package and is available for the latest Amazon Linux 2 and Amazon Linux 2022. The following architectures are supported:

    • x86_64

    Redhat / CentOS

    Install on Redhat / CentOS

    Fluent Bit is distributed as the fluent-bit package and is available for the latest stable CentOS systems.

    The following architectures are supported:

    • x86_64

    Commands

    Configuration files must be flexible enough for any deployment need, but they must keep a clean and readable format.

    Fluent Bit Commands extend a configuration file with specific built-in features. The list of commands available as of the Fluent Bit 0.12 series is:

    Command
    Prototype
    Description

    macOS

    Fluent Bit is compatible with the latest Apple macOS systems on x86_64 and Apple Silicon (M1) architectures. At the moment there is no officially supported package, but you can build it from source by following the instructions below.

    Requirements

    For the next steps, you will need to have Homebrew installed on your system. If it is not there, you can install it with the following command:

    Exec

    The exec input plugin allows you to execute an external program and collect event logs from it.

    Container support

    This plugin will not function in the distroless production images (AMD64 currently) as it needs a functional /bin/sh which is not present. The debug images use the same binaries so even though they have a shell, there is no support for this plugin as it is compiled out.

    Scheduling and Retries

    Fluent Bit has an Engine that helps to coordinate data ingestion from input plugins and calls the Scheduler to decide when it is time to flush the data through one or multiple output plugins. The Scheduler flushes new data at a fixed interval of seconds, and it schedules retries when asked.

    Once an output plugin gets called to flush some data, after processing that data it can notify the Engine of one of three possible return statuses:

    • OK

    • Retry

    • Error

    Systemd

    The Systemd input plugin allows you to collect log messages from the Journald daemon on Linux environments.
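
    As a minimal sketch, the plugin can collect the entries of a single unit by combining the Tag and Systemd_Filter parameters described below (the docker.service unit name is an arbitrary example):

    [INPUT]
        Name            systemd
        Tag             host.*
        Systemd_Filter  _SYSTEMD_UNIT=docker.service

    [OUTPUT]
        Name   stdout
        Match  *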

    Configuration Parameters

    The plugin supports the following configuration parameters:

    Key
    Description
    Default

    Upstream Servers

    It's common for Fluent Bit to connect to external services to deliver logs over the network. Being able to connect to one node (host) is normal and enough for most of the use cases, but there are other scenarios where balancing across different nodes is required. The Upstream feature provides such a capability.

    An Upstream defines a set of nodes that will be targeted by an output plugin; by the nature of the implementation, an output plugin must support the Upstream feature. The following plugin has Upstream support: Forward.

    The current balancing mode implemented is round-robin.
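
    As a sketch of how the pieces fit together (node names, hosts, and ports are example values), an Upstream is defined in its own configuration file with one UPSTREAM section and one NODE section per node, and the output plugin references that file:

    # upstream.conf
    [UPSTREAM]
        name  forward-balancing

    [NODE]
        name  node-1
        host  127.0.0.1
        port  43000

    [NODE]
        name  node-2
        host  127.0.0.1
        port  44000

    # fluent-bit.conf
    [OUTPUT]
        Name      forward
        Match     *
        Upstream  upstream.conf

    Each flush is then delivered to one of the defined nodes in round-robin order.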

    Forward

    Forward is the protocol used by Fluentd and Fluent Bit to route messages between peers. This plugin implements the input service to listen for Forward messages.

    Configuration Parameters

    The plugin supports the following configuration parameters:

    Key
    Description

    Thermal

    The thermal input plugin reports system temperatures periodically, each second by default. Currently this plugin is only available for Linux.

    The following table describes the information generated by the plugin.

    key
    description

    Health

    The Health input plugin allows you to check how healthy a TCP server is. It does the check by issuing a TCP connection attempt at a given interval of time.

    Configuration Parameters

    The plugin supports the following configuration parameters:

    Key
    Description
    #!/bin/sh
    
    while :; do
      echo -n "{\"key\": \"some value\"}"
      sleep 1
    done
    $ chmod 755 test.sh
    $ ./test.sh | fluent-bit -i stdin -o stdout
    Fluent Bit v1.x.x
    * Copyright (C) 2019-2020 The Fluent Bit Authors
    * Copyright (C) 2015-2018 Treasure Data
    * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
    * https://fluentbit.io
    
    [2016/10/07 21:44:46] [ info] [engine] started
    [0] stdin.0: [1475898286, {"key"=>"some value"}]
    [1] stdin.0: [1475898287, {"key"=>"some value"}]
    [2] stdin.0: [1475898288, {"key"=>"some value"}]
    [3] stdin.0: [1475898289, {"key"=>"some value"}]
    [4] stdin.0: [1475898290, {"key"=>"some value"}]

                  Fluentd                                          Fluent Bit

    Performance   Medium Performance                               High Performance

    Dependencies  Built as a Ruby Gem, it requires a certain       Zero dependencies, unless some special
                  number of gems.                                  plugin requires them.

    Plugins       More than 1000 external plugins are available    More than 100 built-in plugins are available

    License       Apache License v2.0                              Apache License v2.0


    • Reconnect.Retry_interval: The retry interval. Unit is seconds. Default: 1
    • Unix_Path: The Docker socket Unix path. Default: /var/run/docker.sock
    • Buffer_Size: The size of the buffer used to read Docker events (in bytes). Default: 8192
    • Parser: Specify the name of a parser to interpret the entry as a structured message. Default: None
    • Key: When a message is unstructured (no parser applied), it is appended as a string under the key name message. Default: message
    • Reconnect.Retry_limits: The maximum number of retries allowed. The plugin tries to reconnect to the Docker socket when EOF is detected. Default: 5

    • listen: The address to listen on. Default: 0.0.0.0
    • port: The port for Fluent Bit to listen on. Default: 9880
    • tag_key: Specify the key name to overwrite a tag. If set, the tag will be overwritten by the value of the key.
    • buffer_max_size: Specify the maximum buffer size in KB to receive a JSON message. Default: 4M
    • buffer_chunk_size: This sets the chunk size for incoming JSON messages. These chunks are then stored/managed in the space defined by buffer_max_size. Default: 512K
    • successful_response_code: Allows setting the successful response code. 200, 201 and 204 are supported. Default: 201


    • Interface: Specify the network interface to monitor, e.g. eth0.
    • Interval_Sec: Polling interval (seconds). Default: 1
    • Interval_NSec: Polling interval (nanoseconds). Default: 0
    • Verbose: If true, gather metrics precisely. Default: false
    • Test_At_Init: If true, test whether the network interface is valid at initialization. Default: false

    • Dummy: Dummy JSON record. Default: {"message":"dummy"}
    • Start_time_sec: Dummy base timestamp, in seconds. Default: 0
    • Start_time_nsec: Dummy base timestamp, in nanoseconds. Default: 0
    • Rate: Rate at which messages are generated, expressed in how many times per second. Default: 1
    • Samples: If set, the number of events is limited. For example, if Samples=3, the plugin generates only three events and stops.
    • Copies: Number of messages to generate each time they are generated. Default: 1

    • Proc_Name: Name of the target process to check.
    • Interval_Sec: Interval in seconds between service checks. Default: 1
    • Interval_Nsec: Specify a nanosecond interval for service checks; it works in conjunction with the Interval_Sec configuration key. Default: 0
    • Alert: If enabled, the plugin only generates messages when the target process is down. By default this option is disabled.
    • Fd: If enabled, the number of file descriptors is appended to each record. Default: true
    • Mem: If enabled, the memory usage of the process is appended to each record. Default: true

    • WASI_Path: The path of the WASM program file.
    • Parser: Specify the name of a parser to interpret the entry as a structured message.
    • Accessible_Paths: Specify the whitelist of paths that the WASM program is allowed to access.
    • Interval_Sec: Polling interval (seconds).
    • Interval_NSec: Polling interval (nanoseconds).
    • Buf_Size: Size of the buffer (check unit sizes for allowed values).
    • Oneshot: Only run once at startup. This allows collection of data prior to fluent-bit's startup (bool, default: false).

    • Samples: If set, the plugin will only generate a specific number of samples. By default this value is set to -1, which generates unlimited samples.
    • Interval_Sec: Interval in seconds between sample generation. Default: 1
    • Interval_Nsec: Specify a nanosecond interval for sample generation; it works in conjunction with the Interval_Sec configuration key. Default: 0

    • Channels: A comma-separated list of channels to read from.
    • Interval_Sec: Set the polling interval for each channel (optional). Default: 1
    • DB: Set the path to save the read offsets (optional).

    Fluent Bit v1.9.9

    The td-agent-bit package is no longer provided after this release. Users should switch to the fluent-bit package.

    Fluent Bit v1.6

    If you are migrating from a previous version of Fluent Bit, please review the following important changes:

    Tail Input Plugin

    Now, by default, the plugin follows a file from the end once the service starts (the old behavior was to always read from the beginning). Every file found at start is followed from its last position; new files discovered at runtime, or rotated files, are read from the beginning.

    If you want to keep the old behavior, you can set the option read_from_head to true, as sketched below.
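
    A minimal sketch of pinning the old behavior (the monitored path is an example):

    [INPUT]
        Name           tail
        Path           /var/log/app.log
        Read_from_Head true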

    Stackdriver Output Plugin

    The project_id of the resource in the LogEntry sent to Google Cloud Logging will be set to the project ID rather than the project number. To learn the difference between a project ID and a project number, see the Google Cloud documentation for more details.

    If you have any existing queries based on the resource's project_id, please update your query accordingly.

    Fluent Bit v1.5

    The migration from v1.4 to v1.5 is pretty straightforward.

    • If you enabled keepalive mode in your configuration, note that this configuration property has been renamed to net.keepalive. All Network I/O keepalive is now enabled by default; to learn more about this and other associated configuration properties, read the Networking Administration section.

    • If you use the Elasticsearch output plugin, note the default value of type changed from flb_type to _doc. Many versions of Elasticsearch will tolerate this, but ES v5.6 through v6.1 require a type without a leading underscore. See the Elasticsearch output plugin documentation FAQ entry for more. A sketch of both changes follows this list.
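
    Hosts, ports, and match rules in this sketch are example values:

    # Disable the now-default keepalive on a given output
    [OUTPUT]
        Name           http
        Match          *
        Host           192.168.5.6
        Port           8080
        net.keepalive  off

    # Keep the pre-1.5 Elasticsearch type name
    [OUTPUT]
        Name   es
        Match  *
        Type   flb_type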

    Fluent Bit v1.4

    If you are migrating from Fluent Bit v1.3, there are no breaking changes. Just new exciting features to enjoy :)

    Fluent Bit v1.3

    If you are migrating from Fluent Bit v1.2 to v1.3, there are no breaking changes. If you are upgrading from an older version please review the incremental changes below.

    Fluent Bit v1.2

    Docker, JSON, Parsers and Decoders

    In Fluent Bit v1.2 we have fixed many issues associated with JSON encoding and decoding; hence, when parsing Docker logs, it is no longer necessary to use decoders. The new Docker parser looks like this:

    Note: again, do not use decoders.

    Kubernetes Filter

    We have also improved how the Kubernetes Filter handles stringified log messages. If the option Merge_Log is enabled, the filter will try to handle the log content as a JSON map; if so, it will add the keys to the root map.

    In addition, we have fixed and improved the option called Merge_Log_Key. If a log merge succeeds, all new keys will be packed under the key specified by this option; a suggested configuration is as follows:

    As an example, if the original log content is the following map:

    the final record will be composed as follows:

    Fluent Bit v1.1

    If you are upgrading from Fluent Bit <= 1.0.x you should take into consideration the following relevant changes when switching to the Fluent Bit v1.1 series:

    Kubernetes Filter

    We introduced a new configuration property called Kube_Tag_Prefix to help Tag prefix resolution and address an unexpected behavior that landed in previous versions.

    During the 1.0.x release cycle, a commit in the Tail input plugin changed the default behavior of how the Tag was composed when using the wildcard for expansion, breaking compatibility with other services. Consider the following configuration example:

    The expected behavior is that Tag will be expanded to:

    but the change introduced in 1.0 series switched from absolute path to the base file name only:

    In the Fluent Bit v1.1 release we restored the default behavior; the Tag is now composed using the absolute path of the monitored file.

    Having the absolute path in the Tag is relevant for routing and flexible configuration, and it also helps to keep compatibility with Fluentd behavior.

    This behavior switch in the Tail input plugin affects how the Kubernetes Filter operates. When the filter is used, it needs to perform a local metadata lookup based on the file names when using Tail as a source. Now, with the new Kube_Tag_Prefix option, you can specify the prefix used in the Tail input plugin; for the configuration example above, the new configuration will look as follows:

    So the proper Kube_Tag_Prefix value must be composed of the Tag prefix set in the Tail input plugin plus the monitored directory converted by replacing slashes with dots.

    Official Release Notes

    • aarch64 / arm64v8

    Single line install

    A simple installation script is provided to be used for most Linux targets. This will always install the most recent version released.

    This is purely a convenience helper and should always be validated prior to use. The recommended secure deployment approach is to follow the instructions below.

    Amazon Linux 2022

    For Amazon Linux 2022, until it is GA, we need to force it to use the 2022 releasever in Yum but only for the Fluent Bit repository.

    Configure Yum

    We provide fluent-bit through a Yum repository. In order to add the repository reference to your system, please add a new file called fluent-bit.repo in /etc/yum.repos.d/ with the following content:

    Amazon Linux 2

    Amazon Linux 2022

    Note: we encourage you to always enable gpgcheck for security reasons. All our packages are signed.

    Updated key from March 2022

    From the 1.9.0 and 1.8.15 releases please note that the GPG key has been updated at https://packages.fluentbit.io/fluentbit.key so ensure this new one is added.

    The GPG Key fingerprint of the new key is:

    The previous key is still available at https://packages.fluentbit.io/fluentbit-legacy.key and may be required to install previous versions.

    The GPG Key fingerprint of the old key is:

    Refer to the supported platform documentation to see which platforms are supported in each release.

    Install

    Once your repository is configured, run the following command to install it:

    The next step is to instruct systemd to enable the service:

    If you do a status check, you should see output similar to this:

    The default configuration of fluent-bit collects CPU usage metrics and sends the records to the standard output; you can see the outgoing data in your /var/log/messages file.

    • aarch64 / arm64v8

    For CentOS 9+ we use CentOS Stream as the canonical base system.

    Single line install

    A simple installation script is provided to be used for most Linux targets. This will always install the most recent version released.

    This is purely a convenience helper and should always be validated prior to use. The recommended secure deployment approach is to follow the instructions below.

    CentOS 8

    CentOS 8 is now EOL so the default Yum repositories are unavailable.

    Make sure to configure Yum to use an appropriate mirror, for example:

    An alternative is to use Rocky or Alma Linux which should be equivalent.

    Configure Yum

    We provide fluent-bit through a Yum repository. In order to add the repository reference to your system, please add a new file called fluent-bit.repo in /etc/yum.repos.d/ with the following content:

    It is best practice to always enable the gpgcheck and repo_gpgcheck for security reasons. We sign our repository metadata as well as all of our packages.

    Updated key from March 2022

    From the 1.9.0 and 1.8.15 releases please note that the GPG key has been updated at https://packages.fluentbit.io/fluentbit.key so ensure this new one is added.

    The GPG Key fingerprint of the new key is:

    The previous key is still available at https://packages.fluentbit.io/fluentbit-legacy.key and may be required to install previous versions.

    The GPG Key fingerprint of the old key is:

    Refer to the supported platform documentation to see which platforms are supported in each release.

    Install

    Once your repository is configured, run the following command to install it:

    The next step is to instruct systemd to enable the service:

    If you do a status check, you should see output similar to this:

    The default configuration of fluent-bit collects CPU usage metrics and sends the records to the standard output; you can see the outgoing data in your /var/log/messages file.

    FAQ

    Yum install fails with a "404 - Page not found" error for the package mirror

    The fluent-bit.repo file for the latest installations of Fluent-Bit uses a $releasever variable to determine the correct version of the package to install to your system:

    Depending on your Red Hat distribution version, this variable may return a value other than the OS major release version (e.g., RHEL7 Server distributions return "7Server" instead of just "7"). The Fluent-Bit package URL uses just the major OS release version, so any other value here will cause a 404.

    In order to resolve this issue, you can replace the $releasever variable with your system's OS major release version. For example:


    @INCLUDE Command

    Configuring a logging pipeline might lead to an extensive configuration file. In order to maintain a human-readable configuration, it's suggested to split the configuration into multiple files.

    The @INCLUDE command allows the configuration reader to include an external configuration file, e.g.:

    The above example defines the main service configuration file and also includes two files to continue the configuration:

    inputs.conf

    outputs.conf

    Note that despite the order of inclusion, Fluent Bit will ALWAYS respect the following order:

    • Service

    • Inputs

    • Filters

    • Outputs

    @SET Command

    Fluent Bit supports configuration variables. One way to expose these variables to Fluent Bit is by setting a shell environment variable; the other is through the @SET command.

    The @SET command can only be used at the root level of the configuration, meaning it cannot be used inside a section, e.g.:

    • @INCLUDE (prototype: @INCLUDE FILE): Include a configuration file.
    • @SET (prototype: @SET KEY=VAL): Set a configuration variable.

    Installing from Homebrew

    The Fluent Bit package on Homebrew is not officially supported, but should work for basic use cases and testing. It can be installed using:

    Compile from Source

    Install build dependencies

    Run the following brew command in your terminal to retrieve the dependencies:

    Get the source and build it

    Grab a fresh copy of the Fluent Bit source code (upstream):

    Optionally, if you want to use a specific version, just check out the proper tag. If you want to use v1.8.13, just do:

    In order to prepare the build system, we need to expose certain environment variables so Fluent Bit CMake build rules can pick the right libraries:

    Change to the build/ directory inside the Fluent Bit sources:

    Build Fluent Bit. Note that we are indicating to the build system "where" the final binaries and config files should be installed:

    Install Fluent Bit to the directory specified above. Note that this requires root privileges due to the directory we will write information to:

    The binaries and configuration examples can be located at /opt/fluent-bit/.

    Create macOS installer from source

    Grab a fresh copy of the Fluent Bit source code (upstream):

    Optionally, if you want to use a specific version, just check out the proper tag. If you want to use v1.9.2, just do:

    In order to prepare the build system, we need to expose certain environment variables so Fluent Bit CMake build rules can pick the right libraries:

    Then create the specific macOS SDK target (for example, specifying the macOS Big Sur (11.3) SDK environment):

    Change to the build/ directory inside the Fluent Bit sources:

    Build the Fluent Bit macOS installer.

    Then, macOS installer will be generated as:

    Finally, fluent-bit-<fluent-bit version>-(intel or apple).pkg will be generated.

    The created installer will put binaries at /opt/fluent-bit/.

    Running Fluent Bit

    To make access to the Fluent Bit binary easier, extend the PATH variable in your terminal:

    Now, as a simple test, try Fluent Bit by generating a simple dummy message which will be printed to the standard output interface every second:

    You will see an output similar to this:

    To halt the process, press ctrl-c in the terminal.


    If the return status was OK, it means the plugin was able to process and flush the data successfully. If it returned an Error status, it means an unrecoverable error happened and the engine should not try to flush that data again. If a Retry was requested, the Engine will ask the Scheduler to retry flushing that data; the Scheduler will decide how many seconds to wait before that happens.

    Configuring Wait Time for Retry

    The Scheduler provides two configuration options called scheduler.cap and scheduler.base which can be set in the Service section.

    • scheduler.cap: Set a maximum retry time in seconds. The property is supported from v1.8.7. Default: 2000
    • scheduler.base: Set a base of exponential backoff. The property is supported from v1.8.7. Default: 5

    These two configuration options determine the waiting time before a retry will happen.

    Fluent Bit uses an exponential backoff and jitter algorithm to determine the waiting time before a retry.

    The waiting time is a random number between a configurable upper and lower bound.

    For the Nth retry, the lower bound of the random number will be:

    base

    The upper bound will be:

    min(base * 2^N, cap)

    Given an example where base is set to 3 and cap is set to 30.

    1st retry: The lower bound will be 3, the upper bound will be 3 * 2 = 6. So the waiting time will be a random number between (3, 6).

    2nd retry: the lower bound will be 3, the upper bound will be 3 * (2 * 2) = 12. So the waiting time will be a random number between (3, 12).

    3rd retry: the lower bound will be 3, the upper bound will be 3 * (2 * 2 * 2) = 24. So the waiting time will be a random number between (3, 24).

    4th retry: the lower bound will be 3, since 3 * (2 * 2 * 2 * 2) = 48 > 30, the upper bound will be 30. So the waiting time will be a random number between (3, 30).

    Basically, the scheduler.base determines the lower bound of time between each retry and the scheduler.cap determines the upper bound.

    For a detailed explanation of the exponential backoff and jitter algorithm, please check this blog.

    Example

    The following example configures the scheduler.base as 3 seconds and scheduler.cap as 30 seconds.

    The waiting time will be:

    Nth retry    Waiting time range (seconds)
    1            (3, 6)
    2            (3, 12)
    3            (3, 24)
    4            (3, 30)

    Configuring Retries

    The Scheduler provides a simple configuration option called Retry_Limit, which can be set independently on each output section. This option allows us to disable retries or impose a limit to try N times and then discard the data after reaching that limit:

    • Retry_Limit N: Integer value to set the maximum number of retries allowed. N must be >= 1 (default: 1).
    • Retry_Limit no_limits or False: there is no limit on the number of retries the Scheduler can perform.
    • Retry_Limit no_retries: retries are disabled; the Scheduler will not try to send the data to the destination again if the first attempt failed.

    Example

    The following example configures two outputs, where the HTTP plugin has an unlimited number of retries while the Elasticsearch plugin has a limit of 5 retries:


    Configuration Parameters

    The plugin supports the following configuration parameters:

    • Interval_Sec: Polling interval (seconds). Default: 1
    • Interval_NSec: Polling interval (nanoseconds). Default: 0
    • name_regex: Optional name filter regex. Default: None
    • type_regex: Optional type filter regex. Default: None

    Getting Started

    In order to get temperature(s) of your system, you can run the plugin from the command line or through the configuration file:

    Command Line

    Some systems provide multiple thermal zones. In this example, only thermal_zone0 is monitored by name, once per minute.

    Configuration File

    In your main configuration file append the following Input & Output sections:

    • name: The name of the thermal zone, such as thermal_zone0.
    • type: The type of the thermal zone, such as x86_pkg_temp.
    • temp: Current temperature in Celsius.

    • Host: Name of the target host or IP address to check.
    • Port: TCP port on which to perform the connection check.
    • Interval_Sec: Interval in seconds between service checks. Default: 1
    • Interval_NSec: Specify a nanosecond interval for service checks; it works in conjunction with the Interval_Sec configuration key. Default: 0
    • Alert: If enabled, the plugin only generates messages when the target TCP service is down. By default this option is disabled.
    • Add_Host: If enabled, the hostname is appended to each record. Default: false
    • Add_Port: If enabled, the port number is appended to each record. Default: false

    Getting Started

    In order to start performing the checks, you can run the plugin from the command line or through the configuration file:

    Command Line

    From the command line you can let Fluent Bit generate the checks with the following options:

    Configuration File

    In your main configuration file append the following Input & Output sections:

    Testing

    Once Fluent Bit is running, you will see the health check results in the output interface, similar to this:

    Jan 18 12:52:16 flb systemd[2222]: Starting GNOME Terminal Server
    Jan 18 12:52:16 flb dbus-daemon[2243]: [session uid=1000 pid=2243] Successfully activated service 'org.gnome.Terminal'
    Jan 18 12:52:16 flb systemd[2222]: Started GNOME Terminal Server.
    Jan 18 12:52:16 flb gsd-media-keys[2640]: # watch_fast: "/org/gnome/terminal/legacy/" (establishing: 0, active: 0)
    [TIMESTAMP, MESSAGE]
    SECONDS.NANOSECONDS
    "Project Fluent Bit created on 1398289291"
    {"project": "Fluent Bit", "created": 1398289291}
    curl https://packages.fluentbit.io/fluentbit.key | sudo apt-key add -
    C3C0 A285 34B9 293E AF51  FABD 9F9D DC08 3888 C1CD
    Fluentbit releases (Releases signing key) <[email protected]>
    F209 D876 2A60 CD49 E680 633B 4FF8 368B 6EA0 722A
    deb https://packages.fluentbit.io/raspbian/bullseye bullseye main
    deb https://packages.fluentbit.io/raspbian/buster buster main
    sudo apt-get update
    sudo apt-get install fluent-bit
    sudo service fluent-bit start
    sudo service fluent-bit status
    ● fluent-bit.service - Fluent Bit
       Loaded: loaded (/lib/systemd/system/fluent-bit.service; disabled; vendor preset: enabled)
       Active: active (running) since mié 2016-07-06 16:58:25 CST; 2h 45min ago
     Main PID: 6739 (fluent-bit)
        Tasks: 1
       Memory: 656.0K
          CPU: 1.393s
       CGroup: /system.slice/fluent-bit.service
               └─6739 /opt/fluent-bit/bin/fluent-bit -c /etc/fluent-bit/fluent-bit.conf
    ...
    $ fluent-bit -i docker_events -o stdout
    [INPUT]
        Name   docker_events
    
    [OUTPUT]
        Name   stdout
        Match  *
    curl -d @app.log -XPOST -H "content-type: application/json" http://localhost:8888/app.log
    [INPUT]
        name http
        listen 0.0.0.0
        port 8888
    
    [OUTPUT]
        name stdout
        match *
    $> fluent-bit -i http -p port=8888 -o stdout
    $ bin/fluent-bit -i netif -p interface=eth0 -o stdout
    Fluent Bit v1.x.x
    * Copyright (C) 2019-2020 The Fluent Bit Authors
    * Copyright (C) 2015-2018 Treasure Data
    * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
    * https://fluentbit.io
    
    [2017/07/08 23:34:18] [ info] [engine] started
    [0] netif.0: [1499524459.001698260, {"eth0.rx.bytes"=>89769869, "eth0.rx.packets"=>73357, "eth0.rx.errors"=>0, "eth0.tx.bytes"=>4256474, "eth0.tx.packets"=>24293, "eth0.tx.errors"=>0}]
    [1] netif.0: [1499524460.002541885, {"eth0.rx.bytes"=>98, "eth0.rx.packets"=>1, "eth0.rx.errors"=>0, "eth0.tx.bytes"=>98, "eth0.tx.packets"=>1, "eth0.tx.errors"=>0}]
    [2] netif.0: [1499524461.001142161, {"eth0.rx.bytes"=>98, "eth0.rx.packets"=>1, "eth0.rx.errors"=>0, "eth0.tx.bytes"=>98, "eth0.tx.packets"=>1, "eth0.tx.errors"=>0}]
    [3] netif.0: [1499524462.002612971, {"eth0.rx.bytes"=>98, "eth0.rx.packets"=>1, "eth0.rx.errors"=>0, "eth0.tx.bytes"=>98, "eth0.tx.packets"=>1, "eth0.tx.errors"=>0}]
    [INPUT]
        Name          netif
        Tag           netif
        Interval_Sec  1
        Interval_NSec 0
        Interface     eth0
    [OUTPUT]
        Name   stdout
        Match  *
    $ fluent-bit -i dummy -o stdout
    Fluent Bit v1.x.x
    * Copyright (C) 2019-2020 The Fluent Bit Authors
    * Copyright (C) 2015-2018 Treasure Data
    * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
    * https://fluentbit.io
    
    [2017/07/06 21:55:29] [ info] [engine] started
    [0] dummy.0: [1499345730.015265366, {"message"=>"dummy"}]
    [1] dummy.0: [1499345731.002371371, {"message"=>"dummy"}]
    [2] dummy.0: [1499345732.000267932, {"message"=>"dummy"}]
    [3] dummy.0: [1499345733.000757746, {"message"=>"dummy"}]
    [INPUT]
        Name   dummy
        Tag    dummy.log
    
    [OUTPUT]
        Name   stdout
        Match  *
    $ fluent-bit -i proc -p proc_name=crond -o stdout
    [INPUT]
        Name          proc
        Proc_Name     crond
        Interval_Sec  1
        Interval_NSec 0
        Fd            true
        Mem           true
    
    [OUTPUT]
        Name   stdout
        Match  *
    $ fluent-bit -i proc -p proc_name=fluent-bit -o stdout
    Fluent Bit v1.x.x
    * Copyright (C) 2019-2020 The Fluent Bit Authors
    * Copyright (C) 2015-2018 Treasure Data
    * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
    * https://fluentbit.io
    
    [2017/01/30 21:44:56] [ info] [engine] started
    [0] proc.0: [1485780297, {"alive"=>true, "proc_name"=>"fluent-bit", "pid"=>10964, "mem.VmPeak"=>14740000, "mem.VmSize"=>14740000, "mem.VmLck"=>0, "mem.VmHWM"=>1120000, "mem.VmRSS"=>1120000, "mem.VmData"=>2276000, "mem.VmStk"=>88000, "mem.VmExe"=>1768000, "mem.VmLib"=>2328000, "mem.VmPTE"=>68000, "mem.VmSwap"=>0, "fd"=>18}]
    [1] proc.0: [1485780298, {"alive"=>true, "proc_name"=>"fluent-bit", "pid"=>10964, "mem.VmPeak"=>14740000, "mem.VmSize"=>14740000, "mem.VmLck"=>0, "mem.VmHWM"=>1148000, "mem.VmRSS"=>1148000, "mem.VmData"=>2276000, "mem.VmStk"=>88000, "mem.VmExe"=>1768000, "mem.VmLib"=>2328000, "mem.VmPTE"=>68000, "mem.VmSwap"=>0, "fd"=>18}]
    [2] proc.0: [1485780299, {"alive"=>true, "proc_name"=>"fluent-bit", "pid"=>10964, "mem.VmPeak"=>14740000, "mem.VmSize"=>14740000, "mem.VmLck"=>0, "mem.VmHWM"=>1152000, "mem.VmRSS"=>1148000, "mem.VmData"=>2276000, "mem.VmStk"=>88000, "mem.VmExe"=>1768000, "mem.VmLib"=>2328000, "mem.VmPTE"=>68000, "mem.VmSwap"=>0, "fd"=>18}]
    [3] proc.0: [1485780300, {"alive"=>true, "proc_name"=>"fluent-bit", "pid"=>10964, "mem.VmPeak"=>14740000, "mem.VmSize"=>14740000, "mem.VmLck"=>0, "mem.VmHWM"=>1152000, "mem.VmRSS"=>1148000, "mem.VmData"=>2276000, "mem.VmStk"=>88000, "mem.VmExe"=>1768000, "mem.VmLib"=>2328000, "mem.VmPTE"=>68000, "mem.VmSwap"=>0, "fd"=>18}]
    [PARSER]
        Name        wasi
        Format      json
        Time_Key    time
        Time_Format %Y-%m-%dT%H:%M:%S.%L %z
    [SERVICE]
        Flush        1
        Daemon       Off
        Parsers_File parsers.conf
        Log_Level    info
        HTTP_Server  Off
        HTTP_Listen  0.0.0.0
        HTTP_Port    2020
    
    [INPUT]
        Name exec_wasi
        Tag  exec.wasi.local
        WASI_Path /path/to/wasi/program.wasm
        Accessible_Paths .,/path/to/accessible
        Parser wasi
    
    [OUTPUT]
        Name  stdout
        Match *
    
    $ fluent-bit -i random -o stdout
    [INPUT]
        Name          random
        Samples      -1
        Interval_Sec  1
        Interval_NSec 0
    
    [OUTPUT]
        Name   stdout
        Match  *
    $ fluent-bit -i random -o stdout
    Fluent Bit v1.x.x
    * Copyright (C) 2019-2020 The Fluent Bit Authors
    * Copyright (C) 2015-2018 Treasure Data
    * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
    * https://fluentbit.io
    
    [2016/10/07 20:27:34] [ info] [engine] started
    [0] random.0: [1475893654, {"rand_value"=>1863375102915681408}]
    [1] random.0: [1475893655, {"rand_value"=>425675645790600970}]
    [2] random.0: [1475893656, {"rand_value"=>7580417447354808203}]
    [3] random.0: [1475893657, {"rand_value"=>1501010137543905482}]
    [4] random.0: [1475893658, {"rand_value"=>16238242822364375212}]
    [INPUT]
        Name         winlog
        Channels     Setup,Windows PowerShell
        Interval_Sec 1
        DB           winlog.sqlite
    
    [OUTPUT]
        Name   stdout
        Match  *
    $ fluent-bit -i winlog -p 'channels=Setup' -o stdout
    [PARSER]
        Name         docker
        Format       json
        Time_Key     time
        Time_Format  %Y-%m-%dT%H:%M:%S.%L
        Time_Keep    On
    [FILTER]
        Name             Kubernetes
        Match            kube.*
        Kube_Tag_Prefix  kube.var.log.containers.
        Merge_Log        On
        Merge_Log_Key    log_processed
    {"key1": "val1", "key2": "val2"}
    {
        "log": "{\"key1\": \"val1\", \"key2\": \"val2\"}",
        "log_processed": {
            "key1": "val1",
            "key2": "val2"
        }
    }
    [INPUT]
        Name  tail
        Path  /var/log/containers/*.log
        Tag   kube.*
    kube.var.log.containers.apache.log
    kube.apache.log
    [INPUT]
        Name  tail
        Path  /var/log/containers/*.log
        Tag   kube.*
    
    [FILTER]
        Name             kubernetes
        Match            *
        Kube_Tag_Prefix  kube.var.log.containers.
    curl https://raw.githubusercontent.com/fluent/fluent-bit/master/install.sh | sh
    export FLUENT_BIT_INSTALL_COMMAND_PREFIX="sed -i 's|\$releasever/|2022/|g' /etc/yum.repos.d/fluent-bit.repo"
    curl https://raw.githubusercontent.com/fluent/fluent-bit/master/install.sh | sh
    [fluent-bit]
    name = Fluent Bit
    baseurl = https://packages.fluentbit.io/amazonlinux/2/$basearch/
    gpgcheck=1
    gpgkey=https://packages.fluentbit.io/fluentbit.key
    enabled=1
    [fluent-bit]
    name = Fluent Bit
    baseurl = https://packages.fluentbit.io/amazonlinux/2022/$basearch/
    gpgcheck=1
    gpgkey=https://packages.fluentbit.io/fluentbit.key
    enabled=1
    C3C0 A285 34B9 293E AF51  FABD 9F9D DC08 3888 C1CD
    Fluentbit releases (Releases signing key) <[email protected]>
    F209 D876 2A60 CD49 E680 633B 4FF8 368B 6EA0 722A
    yum install fluent-bit
    sudo service fluent-bit start
    $ service fluent-bit status
    Redirecting to /bin/systemctl status  fluent-bit.service
    ● fluent-bit.service - Fluent Bit
       Loaded: loaded (/usr/lib/systemd/system/fluent-bit.service; disabled; vendor preset: disabled)
       Active: active (running) since Thu 2016-07-07 02:08:01 BST; 9s ago
     Main PID: 3820 (fluent-bit)
       CGroup: /system.slice/fluent-bit.service
               └─3820 /opt/fluent-bit/bin/fluent-bit -c etc/fluent-bit/fluent-bit.conf
    ...
    curl https://raw.githubusercontent.com/fluent/fluent-bit/master/install.sh | sh
    $ sed -i 's/mirrorlist/#mirrorlist/g' /etc/yum.repos.d/CentOS-* && \
      sed -i 's|#baseurl=http://mirror.centos.org|baseurl=http://vault.centos.org|g' /etc/yum.repos.d/CentOS-*
    [fluent-bit]
    name = Fluent Bit
    baseurl = https://packages.fluentbit.io/centos/$releasever/$basearch/
    gpgcheck=1
    gpgkey=https://packages.fluentbit.io/fluentbit.key
    repo_gpgcheck=1
    enabled=1
    C3C0 A285 34B9 293E AF51  FABD 9F9D DC08 3888 C1CD
    Fluentbit releases (Releases signing key) <[email protected]>
    F209 D876 2A60 CD49 E680 633B 4FF8 368B 6EA0 722A
    yum install fluent-bit
    sudo service fluent-bit start
    $ service fluent-bit status
    Redirecting to /bin/systemctl status  fluent-bit.service
    ● fluent-bit.service - Fluent Bit
       Loaded: loaded (/usr/lib/systemd/system/fluent-bit.service; disabled; vendor preset: disabled)
       Active: active (running) since Thu 2016-07-07 02:08:01 BST; 9s ago
     Main PID: 3820 (fluent-bit)
       CGroup: /system.slice/fluent-bit.service
               └─3820 /opt/fluent-bit/bin/fluent-bit -c etc/fluent-bit/fluent-bit.conf
    ...
    [fluent-bit]
    name = Fluent Bit
    baseurl = https://packages.fluentbit.io/centos/$releasever/$basearch/
    ...
    [fluent-bit]
    name = Fluent Bit
    baseurl = https://packages.fluentbit.io/centos/7/$basearch/
    gpgcheck=1
    gpgkey=https://packages.fluentbit.io/fluentbit.key
    repo_gpgcheck=1
    enabled=1
    [SERVICE]
        Flush 1
    
    @INCLUDE inputs.conf
    @INCLUDE outputs.conf
    [INPUT]
        Name cpu
        Tag  mycpu
    
    [INPUT]
        Name tail
        Path /var/log/*.log
        Tag  varlog.*
    [OUTPUT]
        Name   stdout
        Match  mycpu
    
    [OUTPUT]
        Name            es
        Match           varlog.*
        Host            127.0.0.1
        Port            9200
        Logstash_Format On
    @SET my_input=cpu
    @SET my_output=stdout
    
    [SERVICE]
        Flush 1
    
    [INPUT]
        Name ${my_input}
    
    [OUTPUT]
        Name ${my_output}
    /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
    brew install fluent-bit
    brew install git cmake openssl bison
    git clone https://github.com/fluent/fluent-bit
    cd fluent-bit
    git checkout v1.8.13
    export OPENSSL_ROOT_DIR=`brew --prefix openssl`
    export PATH=`brew --prefix bison`/bin:$PATH
    cd build/
    cmake -DFLB_DEV=on -DCMAKE_INSTALL_PREFIX=/opt/fluent-bit ../
    make -j 16
    sudo make install
    git clone https://github.com/fluent/fluent-bit
    cd fluent-bit
    git checkout v1.9.2
    export OPENSSL_ROOT_DIR=`brew --prefix openssl`
    export PATH=`brew --prefix bison`/bin:$PATH
    export MACOSX_DEPLOYMENT_TARGET=11.3
    cd build/
    cmake -DCPACK_GENERATOR=productbuild -DCMAKE_INSTALL_PREFIX=/opt/fluent-bit ../
    make -j 16
    cpack -G productbuild
    CPack: Create package using productbuild
    CPack: Install projects
    CPack: - Run preinstall target for: fluent-bit
    CPack: - Install project: fluent-bit []
    CPack: -   Install component: binary
    CPack: -   Install component: library
    CPack: -   Install component: headers
    CPack: -   Install component: headers-extra
    CPack: Create package
    CPack: -   Building component package: /Users/fluent-bit-builder/GitHub/fluent-bit/build/_CPack_Packages/Darwin/productbuild//Users/fluent-bit-builder/GitHub/fluent-bit/build/fluent-bit-1.9.2-apple/Contents/Packages/fluent-bit-1.9.2-apple-binary.pkg
    CPack: -   Building component package: /Users/fluent-bit-builder/GitHub/fluent-bit/build/_CPack_Packages/Darwin/productbuild//Users/fluent-bit-builder/GitHub/fluent-bit/build/fluent-bit-1.9.2-apple/Contents/Packages/fluent-bit-1.9.2-apple-headers.pkg
    CPack: -   Building component package: /Users/fluent-bit-builder/GitHub/fluent-bit/build/_CPack_Packages/Darwin/productbuild//Users/fluent-bit-builder/GitHub/fluent-bit/build/fluent-bit-1.9.2-apple/Contents/Packages/fluent-bit-1.9.2-apple-headers-extra.pkg
    CPack: -   Building component package: /Users/fluent-bit-builder/GitHub/fluent-bit/build/_CPack_Packages/Darwin/productbuild//Users/fluent-bit-builder/GitHub/fluent-bit/build/fluent-bit-1.9.2-apple/Contents/Packages/fluent-bit-1.9.2-apple-library.pkg
    CPack: - package: /Users/fluent-bit-builder/GitHub/fluent-bit/build/fluent-bit-1.9.2-apple.pkg generated.
    export PATH=/opt/fluent-bit/bin:$PATH
     fluent-bit -i dummy -o stdout -f 1
    Fluent Bit v1.9.0
    * Copyright (C) 2015-2021 The Fluent Bit Authors
    * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
    * https://fluentbit.io
    
    [2022/02/08 17:13:52] [ info] [engine] started (pid=14160)
    [2022/02/08 17:13:52] [ info] [storage] version=1.1.6, initializing...
    [2022/02/08 17:13:52] [ info] [storage] in-memory
    [2022/02/08 17:13:52] [ info] [storage] normal synchronization mode, checksum disabled, max_chunks_up=128
    [2022/02/08 17:13:52] [ info] [cmetrics] version=0.2.2
    [2022/02/08 17:13:52] [ info] [sp] stream processor started
    [0] dummy.0: [1644362033.676766000, {"message"=>"dummy"}]
    [0] dummy.0: [1644362034.676914000, {"message"=>"dummy"}]
    [SERVICE]
        Flush            5
        Daemon           off
        Log_Level        debug
        scheduler.base   3
        scheduler.cap    30
    [OUTPUT]
        Name        http
        Host        192.168.5.6
        Port        8080
        Retry_Limit False
    
    [OUTPUT]
        Name            es
        Host            192.168.5.20
        Port            9200
        Logstash_Format On
        Retry_Limit     5
    $ bin/fluent-bit -i thermal -t my_thermal -o stdout -m '*'
    Fluent Bit v1.x.x
    * Copyright (C) 2019-2020 The Fluent Bit Authors
    * Copyright (C) 2015-2018 Treasure Data
    * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
    * https://fluentbit.io
    
    [2019/08/18 13:39:43] [ info] [storage] initializing...
    ...
    [0] my_thermal: [1566099584.000085820, {"name"=>"thermal_zone0", "type"=>"x86_pkg_temp", "temp"=>60.000000}]
    [1] my_thermal: [1566099585.000136466, {"name"=>"thermal_zone0", "type"=>"x86_pkg_temp", "temp"=>59.000000}]
    [2] my_thermal: [1566099586.000083156, {"name"=>"thermal_zone0", "type"=>"x86_pkg_temp", "temp"=>59.000000}]
    $ bin/fluent-bit -i thermal -t my_thermal -p "interval_sec=60" -p "name_regex=thermal_zone0" -o stdout -m '*'
    Fluent Bit v1.3.0
    Copyright (C) Treasure Data
    
    [2019/08/18 13:39:43] [ info] [storage] initializing...
    ...
    [0] my_temp: [1565759542.001053749, {"name"=>"thermal_zone0", "type"=>"pch_skylake", "temp"=>48.500000}]
    [0] my_temp: [1565759602.001661061, {"name"=>"thermal_zone0", "type"=>"pch_skylake", "temp"=>48.500000}]
    [INPUT]
        Name thermal
        Tag  my_thermal
    
    [OUTPUT]
        Name  stdout
        Match *
    $ fluent-bit -i health -p host=127.0.0.1 -p port=80 -o stdout
    [INPUT]
        Name          health
        Host          127.0.0.1
        Port          80
        Interval_Sec  1
        Interval_NSec 0
    
    [OUTPUT]
        Name   stdout
        Match  *
    $ fluent-bit -i health -p host=127.0.0.1 -p port=80 -o stdout
    Fluent Bit v1.8.0
    * Copyright (C) 2019-2021 The Fluent Bit Authors
    * Copyright (C) 2015-2018 Treasure Data
    * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
    * https://fluentbit.io
    
    [2021/06/20 08:39:47] [ info] [engine] started (pid=4621)
    [2021/06/20 08:39:47] [ info] [storage] version=1.1.1, initializing...
    [2021/06/20 08:39:47] [ info] [storage] in-memory
    [2021/06/20 08:39:47] [ info] [storage] normal synchronization mode, checksum disabled, max_chunks_up=128
    [2021/06/20 08:39:47] [ info] [sp] stream processor started
    [0] health.0: [1624145988.305640385, {"alive"=>true}]
    [1] health.0: [1624145989.305575360, {"alive"=>true}]
    [2] health.0: [1624145990.306498573, {"alive"=>true}]
    [3] health.0: [1624145991.305595498, {"alive"=>true}]
    Configuration Parameters

    The plugin supports the following configuration parameters:

    Key
    Description

    Command

    The command to execute.

    Parser

    Specify the name of a parser to interpret the entry as a structured message.

    Interval_Sec

    Polling interval (seconds).

    Interval_NSec

    Polling interval (nanosecond).

    Buf_Size

Size of the buffer (check unit sizes for allowed values).

    Getting Started

    You can run the plugin from the command line or through the configuration file:

    Command Line

    The following example will read events from the output of ls.

    Configuration File

    In your main configuration file append the following Input & Output sections:

    Path

Optional path to the Systemd journal directory. If not set, the plugin will use default paths to read local-only logs.

    Max_Fields

    Set a maximum number of fields (keys) allowed per record.

    8000

    Max_Entries

When Fluent Bit starts, the Journal might have a high number of logs in the queue. In order to avoid delays and reduce memory usage, this option lets you specify the maximum number of log entries that can be processed per round. Once the limit is reached, Fluent Bit will continue processing the remaining log entries once Journald performs the notification.

    5000

    Systemd_Filter

Allows performing a query over logs that contain specific Journald key/value pairs, e.g: _SYSTEMD_UNIT=UNIT. The Systemd_Filter option can be specified multiple times in the input section to apply multiple filters as required.

    Systemd_Filter_Type

Define the filter type when Systemd_Filter is specified multiple times. Allowed values are And and Or. With And, a record is matched only when all of the Systemd_Filter entries match. With Or, a record is matched when any of the Systemd_Filter entries matches.
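For illustration, a sketch of an input section combining two unit filters with Or semantics (docker.service and cron.service are only example unit names):

    [INPUT]
        Name                systemd
        Tag                 host.*
        Systemd_Filter      _SYSTEMD_UNIT=docker.service
        Systemd_Filter      _SYSTEMD_UNIT=cron.service
        Systemd_Filter_Type Or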

    Getting Started

    In order to receive Systemd messages, you can run the plugin from the command line or through the configuration file:

    Command Line

    From the command line you can let Fluent Bit listen for Systemd messages with the following options:

    In the example above we are collecting all messages coming from the Docker service.

    Configuration File

    In your main configuration file append the following Input & Output sections:

    Configuration

To define an Upstream it's required to create a specific configuration file that contains an UPSTREAM section and one or multiple NODE sections. The following table describes the properties associated with each section. Note that all of them are mandatory:

    Section
    Key
    Description

    UPSTREAM

    name

    Defines a name for the Upstream in question.

    NODE

    name

    Defines a name for the Node in question.

    host

    IP address or hostname of the target host.

    Nodes and specific plugin configuration

A Node might contain additional configuration keys required by the plugin; in that way we provide enough flexibility for the output plugin. A common use case is the Forward output: if TLS is enabled, it requires a shared key (more details in the example below).

    Nodes and TLS (Transport Layer Security)

    In addition to the properties defined in the table above, the network operations against a defined node can optionally be done through the use of TLS for further encryption and certificates use.

The TLS options available are described in the TLS/SSL section and can be added to any Node section.

    Configuration File Example

The following example defines an Upstream called forward-balancing, which aims to be used by the Forward output plugin; it registers three Nodes:

    • node-1: connects to 127.0.0.1:43000

    • node-2: connects to 127.0.0.1:44000

    • node-3: connects to 127.0.0.1:45000 using TLS without verification. It also defines a specific configuration option required by Forward output called shared_key.

Note that every Upstream definition must exist in its own configuration file in the file system; defining multiple Upstreams in the same file is not allowed.
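For reference, an output plugin that supports Upstream (such as Forward) points to that file from its own configuration. A minimal sketch, assuming the Upstream definition below is saved as forward-balancing.conf:

    [OUTPUT]
        Name     forward
        Match    *
        Upstream forward-balancing.conf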

    Forward
Key
Description
Default

    Listen

    Listener network interface.

    0.0.0.0

    Port

    TCP port to listen for incoming connections.

    24224

    Unix_Path

Specify the path to a Unix socket to receive Forward messages. If set, Listen and Port are ignored.

    Unix_Perm

    Set the permission of the unix socket file. If Unix_Path is not set, this parameter is ignored.

    Buffer_Max_Size

Specify the maximum buffer memory size used to receive a Forward message. The value must conform to the Unit Size specification.

    Getting Started

    In order to receive Forward messages, you can run the plugin from the command line or through the configuration file as shown in the following examples.

    Command Line

    From the command line you can let Fluent Bit listen for Forward messages with the following options:

By default the service will listen on all interfaces (0.0.0.0) through TCP port 24224; optionally you can change this directly, e.g:

In the example, Forward messages will only arrive through the network interface with the 192.168.3.2 address and TCP port 9090.

    Configuration File

    In your main configuration file append the following Input & Output sections:

    Testing

Once Fluent Bit is running, you can send some messages using the fluent-cat tool (provided by Fluentd):

    In Fluent Bit we should see the following output:


    Record Accessor

    A full feature set to access content of your records

Fluent Bit works internally with structured records, which can be composed of an unlimited number of keys and values. Values can be anything like a number, string, array, or a map.

Having a way to select a specific part of the record is critical for certain core functionalities and plugins; this feature is called Record Accessor.

Consider Record Accessor a simple grammar to specify record content and other miscellaneous values.

    Format

A record accessor rule starts with the character $. Using the structured content above as an example, the following table describes how to access the record:

The following table describes some accessing rules and the expected returned values:

    Format
    Accessed Value

If the accessor key does not exist in the record, like the last example $labels['undefined'], the operation is simply omitted; no exception will occur.

    Usage Example

The feature is enabled on a per-plugin basis; not all plugins enable it. As an example, consider a configuration that uses the grep filter to match only records whose labels have a color set to blue:

    The file content to process in test.log is the following:

Running Fluent Bit with the configuration above, the output will be:

    CPU Log Based Metrics

The cpu input plugin measures the CPU usage of a process or, by default, of the whole system (considering each CPU core). It reports values in percentage units for every configured interval of time. At the moment this plugin is only available for Linux.

The following tables describe the information generated by the plugin. The keys below represent the data used by the overall system; all values associated with the keys are in percentage units (0 to 100%):

The CPU metrics plugin creates metrics that are log-based (i.e., a JSON payload). If you are looking for Prometheus-based metrics, please see the Node Exporter Metrics input plugin.

    key
    description

    cpu_p

CPU usage of the overall system; this value is the sum of time spent in user and kernel space. The result takes into consideration the number of CPU cores in the system.

In addition to the keys reported in the above table, similar content is created per CPU core. The cores are listed from 0 to N as the Kernel reports them:

    key
    description

    Configuration Parameters

    The plugin supports the following configuration parameters:

    Key
    Description
    Default

    Getting Started

    In order to get the statistics of the CPU usage of your system, you can run the plugin from the command line or through the configuration file:

    Command Line

As described above, the CPU input plugin gathers the overall usage every second and flushes the information to the output on the fifth second. In this example we used the stdout plugin to demonstrate the output records. In a real use case you may want to flush this information to a central aggregator such as Fluentd or Elasticsearch.

    Configuration File

    In your main configuration file append the following Input & Output sections:

    Head

The head input plugin allows reading events from the head of a file. Its behavior is similar to the head command.

    Configuration Parameters

    The plugin supports the following configuration parameters:

    Key
    Description

    Split Line Mode

This mode is useful to get a specific line. The following example gets the CPU frequency from /proc/cpuinfo.

/proc/cpuinfo is a special file that provides CPU information.

The CPU frequency appears as "cpu MHz : 2791.009". We can get that line with this configuration file.

The output is:

    Getting Started

    In order to read the head of a file, you can run the plugin from the command line or through the configuration file:

    Command Line

    The following example will read events from the /proc/uptime file, tag the records with the uptime name and flush them back to the stdout plugin:

    Configuration File

    In your main configuration file append the following Input & Output sections:

    Note: Total interval (sec) = Interval_Sec + (Interval_Nsec / 1000000000).

    e.g. 1.5s = 1s + 500000000ns

    Prometheus Scrape Metrics

Fluent Bit 1.9 includes additional metrics features to allow you to collect both logs and metrics with the same collector.

The initial release of the Prometheus Scrape metrics input allows you to collect metrics from a Prometheus-based endpoint at a set interval. These metrics can be routed to metrics-capable endpoints such as Prometheus Exporter, InfluxDB, or Prometheus Remote Write.

    Configuration

    Key
    Description
    Default

    Example

If an endpoint exposes Prometheus metrics, we can specify the configuration to scrape it and then output the metrics. In the following example, we retrieve metrics from the HashiCorp Vault application.

    Example Output

    TCP

The tcp input plugin allows you to retrieve structured JSON or raw messages over a TCP network interface (TCP port).

    Configuration Parameters

    The plugin supports the following configuration parameters:

    Key
    Description
    Default

    Getting Started

    In order to receive JSON messages over TCP, you can run the plugin from the command line or through the configuration file:

    Command Line

    From the command line you can let Fluent Bit listen for JSON messages with the following options:

By default the service will listen on all interfaces (0.0.0.0) through TCP port 5170; optionally you can change this directly, e.g:

In the example, JSON messages will only arrive through the network interface with the 192.168.3.2 address and TCP port 9090.

    Configuration File

    In your main configuration file append the following Input & Output sections:

    Testing

Once Fluent Bit is running, you can send some messages using netcat:

In Fluent Bit we should see the following output:

    Performance Considerations

When receiving payloads in JSON format, there are high performance penalties. Parsing JSON is a very expensive task, so you can expect CPU usage to increase under high-load environments.

To get faster data ingestion, consider using the option Format none to avoid JSON parsing if it is not needed.
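A minimal sketch of such a configuration, where records are split using the default separator:

    [INPUT]
        Name   tcp
        Listen 0.0.0.0
        Port   5170
        Format none

    [OUTPUT]
        Name  stdout
        Match *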

    OpenTelemetry

    An input plugin to ingest OTLP Logs, Metrics, and Traces

    The OpenTelemetry plugin allows you to ingest telemetry data as per the OTLP specification, from various OpenTelemetry exporters, the OpenTelemetry Collector, or Fluent Bit's OpenTelemetry output plugin.

    Configuration

    Key
    Description
    default

Important note: Raw traces means that any data forwarded to the traces endpoint (/v1/traces) will be packed and forwarded as a log message and will NOT be processed by Fluent Bit. The traces endpoint expects a valid protobuf-encoded payload by default, but you can set the raw_traces option if you want to send trace telemetry data to any of Fluent Bit's supported outputs.
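A minimal sketch with raw traces enabled, so payloads sent to /v1/traces are packed and forwarded as log messages:

    [INPUT]
        name       opentelemetry
        listen     0.0.0.0
        port       4318
        raw_traces true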

    Getting started

    The OpenTelemetry plugin currently supports the following telemetry data types:

    Type
    HTTP/JSON
    HTTP/Protobuf

    A sample config file to get started will look something like the following:

    With the above configuration, Fluent Bit will listen on port 4318 for data. You can now send telemetry data to the endpoints /v1/metrics, /v1/traces, and /v1/logs for metrics, traces, and logs respectively.

    A sample curl request to POST json encoded log data would be:

    Windows Exporter Metrics

    A plugin based on Prometheus Windows Exporter to collect system / host level metrics

Prometheus Windows Exporter is a popular way to collect system-level metrics from Microsoft Windows, such as CPU / Disk / Network / Process statistics. Fluent Bit 1.9.0 includes the windows_exporter_metrics plugin, which builds on the Prometheus design to collect system-level metrics without having to manage two separate processes or agents.

    The initial release of Windows Exporter Metrics contains a single collector available from Prometheus Windows Exporter and we plan to expand it over time.

    Important note: Metrics collected with Windows Exporter Metrics flow through a separate pipeline from logs and current filters do not operate on top of metrics.

    Configuration

    Key
    Description
    Default

    Collectors available

The following table describes the available collectors that are part of this plugin. All of them are enabled by default and respect the original metric names, descriptions, and types from Prometheus Windows Exporter, so you can use your current dashboards without any compatibility problems.

Note: the Version column specifies the Fluent Bit version where the collector became available.

    Name
    Description
    OS
    Version

    Getting Started

    Simple Configuration File

In the following configuration file, the windows_exporter_metrics input plugin collects metrics every 2 seconds and exposes them through our Prometheus Exporter output plugin on HTTP/TCP port 2021.

You can test that the metrics are exposed by using curl:

    Enhancement Requests

Our current plugin implements a subset of the collectors available in the original Prometheus Windows Exporter. If you would like us to prioritize a specific collector, please open a GitHub issue using the in_windows_exporter_metrics issue template.

    Windows Event Log (winevtlog)

The winevtlog input plugin allows you to read Windows Event Log using the new API from winevt.h.

    Configuration Parameters

    The plugin supports the following configuration parameters:

    Key
    Description
    Default

    Note that if you do not set db, the plugin will tail channels on each startup.

    Configuration Examples

    Configuration File

    Here is a minimum configuration example.

Note that some Windows Event Log channels (like Security) require administrator privileges for reading. In this case, you need to run fluent-bit as an administrator.

    Command Line

    If you want to do a quick test, you can run this plugin from the command line.

Note that the winevtlog plugin will tail channels on each startup. If you want to confirm whether the plugin is working or not, you should specify the -p 'Read_Existing_Events=true' parameter.

    Serial Interface

The serial input plugin allows retrieving messages/data from a Serial interface.

    Configuration Parameters

    Key
    Description

    File

    Getting Started

    In order to retrieve messages over the Serial interface, you can run the plugin from the command line or through the configuration file:

    Command Line

The following example loads the serial input plugin with a Bitrate of 9600, listening on the /dev/tnt0 interface and using the custom tag data to route the messages.

The above interface (/dev/tnt0) is an emulation of a serial interface (more details at the bottom); for demonstration purposes we will write a message to the other end of the interface, in this case /dev/tnt1, e.g:

    In Fluent Bit you should see an output like this:

    Now using the Separator configuration, we could send multiple messages at once (run this command after starting Fluent Bit):

    Configuration File

    In your main configuration file append the following Input & Output sections:

    Emulating Serial Interface on Linux

The following content is some extra information that will allow you to emulate a serial interface on your Linux system, so you can test the Serial input plugin locally in case you don't have such an interface in your computer. The following procedure has been tested on Ubuntu 15.04 running Linux Kernel 4.0.

    Build and install the tty0tty module

    Download the sources

    Unpack and compile

    Copy the new kernel module into the kernel modules directory

    Load the module

You should see new serial ports in /dev/ (ls /dev/tnt*). Give appropriate permissions to the new serial ports:

    When the module is loaded, it will interconnect the following virtual interfaces:

    Supported Platforms

    The following operating systems and architectures are supported in Fluent Bit.

    Operating System
    Distribution
    Architectures
    $ fluent-bit -i exec -p 'command=ls /var/log' -o stdout
    Fluent Bit v1.x.x
    * Copyright (C) 2019-2020 The Fluent Bit Authors
    * Copyright (C) 2015-2018 Treasure Data
    * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
    * https://fluentbit.io
    
    [2018/03/21 17:46:49] [ info] [engine] started
    [0] exec.0: [1521622010.013470159, {"exec"=>"ConsoleKit"}]
    [1] exec.0: [1521622010.013490313, {"exec"=>"Xorg.0.log"}]
    [2] exec.0: [1521622010.013492079, {"exec"=>"Xorg.0.log.old"}]
    [3] exec.0: [1521622010.013493443, {"exec"=>"anaconda.ifcfg.log"}]
    [4] exec.0: [1521622010.013494707, {"exec"=>"anaconda.log"}]
    [5] exec.0: [1521622010.013496016, {"exec"=>"anaconda.program.log"}]
    [6] exec.0: [1521622010.013497225, {"exec"=>"anaconda.storage.log"}]
    [INPUT]
        Name          exec
        Tag           exec_ls
        Command       ls /var/log
        Interval_Sec  1
        Interval_NSec 0
        Buf_Size      8mb
        Oneshot       false
    
    [OUTPUT]
        Name   stdout
        Match  *
    $ fluent-bit -i systemd \
                 -p systemd_filter=_SYSTEMD_UNIT=docker.service \
                 -p tag='host.*' -o stdout
    [SERVICE]
        Flush        1
        Log_Level    info
        Parsers_File parsers.conf
    
    [INPUT]
        Name            systemd
        Tag             host.*
        Systemd_Filter  _SYSTEMD_UNIT=docker.service
    
    [OUTPUT]
        Name   stdout
        Match  *
    [UPSTREAM]
        name       forward-balancing
    
    [NODE]
        name       node-1
        host       127.0.0.1
        port       43000
    
    [NODE]
        name       node-2
        host       127.0.0.1
        port       44000
    
    [NODE]
        name       node-3
        host       127.0.0.1
        port       45000
        tls        on
        tls.verify off
        shared_key secret
    $ fluent-bit -i forward -o stdout
    $ fluent-bit -i forward -p listen="192.168.3.2" -p port=9090 -o stdout
    [INPUT]
        Name              forward
        Listen            0.0.0.0
        Port              24224
        Buffer_Chunk_Size 1M
        Buffer_Max_Size   6M
    
    [OUTPUT]
        Name   stdout
        Match  *
    $ echo '{"key 1": 123456789, "key 2": "abcdefg"}' | fluent-cat my_tag
    $ bin/fluent-bit -i forward -o stdout
    Fluent-Bit v0.9.0
    Copyright (C) Treasure Data
    
    [2016/10/07 21:49:40] [ info] [engine] started
    [2016/10/07 21:49:40] [ info] [in_fw] binding 0.0.0.0:24224
    [0] my_tag: [1475898594, {"key 1"=>123456789, "key 2"=>"abcdefg"}]

    Or

    Tag

The tag is used to route messages, but on the Systemd plugin there is extra functionality: if the tag includes a star/wildcard, it will be expanded with the Systemd Unit file (_SYSTEMD_UNIT, e.g. host.* => host.UNIT_NAME) or unknown (e.g. host.unknown) if _SYSTEMD_UNIT is missing.

    DB

    Specify the absolute path of a database file to keep track of Journald cursor.

    DB.Sync

Set a default synchronization (I/O) method. Values: Extra, Full, Normal, Off. This flag affects how the internal SQLite engine does synchronization to disk; for more details about each option please refer to this section. Note: this option was introduced in Fluent Bit v1.4.6.

    Full

    Read_From_Tail

    Start reading new entries. Skip entries already stored in Journald.

    Off

    Lowercase

    Lowercase the Journald field (key).

    Off

    Strip_Underscores

    Remove the leading underscore of the Journald field (key). For example the Journald field _PID becomes the key PID.

    Off

    port

    TCP port of the target service.

    Oneshot

Only run once at startup. This allows collection of data preceding Fluent Bit's startup (bool, default: false)

6144000

    Buffer_Chunk_Size

By default, the buffer to store incoming Forward messages does not allocate the maximum memory allowed up front; instead it allocates memory as required. The rounds of allocation are set by Buffer_Chunk_Size. The value must conform to the Unit Size specification.

    1024000

    Tag_Prefix

    Prefix incoming tag with the defined value.


    $log

    "some message"

    $labels['color']

    "blue"

    $labels['project']['env']

    "production"

    $labels['unset']

    null

    $labels['undefined']


    user_p

CPU usage in User mode; for short, it means the CPU usage by user-space programs. The result takes into consideration the number of CPU cores in the system.

system_p

CPU usage in Kernel mode; for short, it means the CPU usage by the Kernel. The result takes into consideration the number of CPU cores in the system.

    cpuN.p_cpu

    Represents the total CPU usage by core N.

    cpuN.p_user

Total CPU time spent in user mode or user-space programs associated with this core.

cpuN.p_system

Total CPU time spent in system or kernel mode associated with this core.

    Interval_Sec

    Polling interval in seconds

    1

    Interval_NSec

    Polling interval in nanoseconds

    0

    PID

Specify the ID (PID) of a running process in the system. By default the plugin monitors the whole system, but if this option is set, it will only monitor the given process ID.


    File

    Absolute path to the target file, e.g: /proc/uptime

    Buf_Size

    Buffer size to read the file.

    Interval_Sec

    Polling interval (seconds).

    Interval_NSec

    Polling interval (nanosecond).

    Add_Path

If enabled, the file path is appended to each record. Default value is false.

    Key

    Rename a key. Default: head.

    Lines

Line number to read. If the number N is set, in_head reads the first N lines, like head(1) -n.

    Split_line

    If enabled, in_head generates key-value pair per line.

    host

    The host of the prometheus metric endpoint that you want to scrape

    port

    The port of the prometheus metric endpoint that you want to scrape

    scrape_interval

    The interval to scrape metrics

    10s

    metrics_path

The metrics URI endpoint, which must start with a forward slash. Note: parameters can also be added to the path by using ?

    /metrics

    Separator

When the expected Format is set to none, Fluent Bit needs a separator string to split the records. By default it uses the line feed character (LF, 0x0A).

    Listen

    Listener network interface.

    0.0.0.0

    Port

TCP port to listen on for incoming connections.

    5170

    Buffer_Size

    Specify the maximum buffer size in KB to receive a JSON message. If not set, the default size will be the value of Chunk_Size.

    Chunk_Size

By default, the buffer to store incoming JSON messages does not allocate the maximum memory allowed up front; instead it allocates memory as required. The rounds of allocation are set by Chunk_Size, in KB. If not set, Chunk_Size is equal to 32 (32KB).

    32

    Format

Specify the expected payload format. It supports the options json and none. When using json, it expects JSON maps; when set to none, it will split every record using the defined Separator (option below).


    json

    listen

    The address to listen on

    0.0.0.0

    port

    The port for Fluent Bit to listen on

    4318

    tag_key

    Specify the key name to overwrite a tag. If set, the tag will be overwritten by a value of the key.

    raw_traces

    Route trace data as a log message

    false

    buffer_max_size

    Specify the maximum buffer size in KB to receive a JSON message.

    4M

    buffer_chunk_size

This sets the chunk size for incoming JSON messages. These chunks are then stored/managed in the space made available by buffer_max_size.

    512K

    successful_response_code

Allows setting the successful response code. Supported values are 200, 201, and 204.

    201

Logs: HTTP/JSON Stable; HTTP/Protobuf Stable

Metrics: HTTP/JSON Unimplemented; HTTP/Protobuf Stable

Traces: HTTP/JSON Unimplemented; HTTP/Protobuf Stable

    scrape_interval

    The rate at which metrics are collected from the host operating system

    5 seconds

    cpu

    Exposes CPU statistics.

    Windows

    v1.9


    String_Inserts

    Whether to include StringInserts in output records. (optional)

    True

    Render_Event_As_XML

Whether to render the system part of the event as an XML string or not. (optional)

    False

    Use_ANSI

    Use ANSI encoding on eventlog messages. If you have issues receiving blank strings with old Windows versions (Server 2012 R2), setting this to True may solve the problem. (optional)

    False

    Channels

    A comma-separated list of channels to read from.

    Interval_Sec

    Set the polling interval for each channel. (optional)

    1

    Interval_NSec

Set the polling interval for each channel (sub-seconds). (optional)

    0

    Read_Existing_Events

Whether to read existing events from the head, or to tail new events upon subscribing. (optional)

    False

    DB

    Set the path to save the read offsets. (optional)

    Absolute path to the device entry, e.g: /dev/ttyS0

    Bitrate

    The bitrate for the communication, e.g: 9600, 38400, 115200, etc

    Min_Bytes

The serial interface will expect at least Min_Bytes to be available before processing the message (default: 1)

    Separator

Allows specifying a separator string that is used to determine when a message ends.

    Format

Specify the format of the incoming data stream. The only option available is json. Note that Format and Separator cannot be used at the same time.

    {
      "log": "some message",
      "stream": "stdout",
      "labels": {
         "color": "blue", 
         "unset": null,
         "project": {
             "env": "production"
          }
      }
    }
    [SERVICE]
        flush        1
        log_level    info
        parsers_file parsers.conf
    
    [INPUT]
        name      tail
        path      test.log
        parser    json
    
    [FILTER]
        name      grep
        match     *
        regex     $labels['color'] ^blue$
    
    [OUTPUT]
        name      stdout
        match     *
        format    json_lines
    {"log": "message 1", "labels": {"color": "blue"}}
    {"log": "message 2", "labels": {"color": "red"}}
    {"log": "message 3", "labels": {"color": "green"}}
    {"log": "message 4", "labels": {"color": "blue"}}
    $ bin/fluent-bit -c fluent-bit.conf 
    Fluent Bit v1.x.x
    * Copyright (C) 2019-2020 The Fluent Bit Authors
    * Copyright (C) 2015-2018 Treasure Data
    * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
    * https://fluentbit.io
    
    [2020/09/11 16:11:07] [ info] [engine] started (pid=1094177)
    [2020/09/11 16:11:07] [ info] [storage] version=1.0.5, initializing...
    [2020/09/11 16:11:07] [ info] [storage] in-memory
    [2020/09/11 16:11:07] [ info] [storage] normal synchronization mode, checksum disabled, max_chunks_up=128
    [2020/09/11 16:11:07] [ info] [sp] stream processor started
    [2020/09/11 16:11:07] [ info] inotify_fs_add(): inode=55716713 watch_fd=1 name=test.log
    {"date":1599862267.483684,"log":"message 1","labels":{"color":"blue"}}
    {"date":1599862267.483692,"log":"message 4","labels":{"color":"blue"}}
    $ build/bin/fluent-bit -i cpu -t my_cpu -o stdout -m '*'
    Fluent Bit v1.x.x
    * Copyright (C) 2019-2020 The Fluent Bit Authors
    * Copyright (C) 2015-2018 Treasure Data
    * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
    * https://fluentbit.io
    
    [2019/09/02 10:46:29] [ info] starting engine
    [0] [1452185189, {"cpu_p"=>7.00, "user_p"=>5.00, "system_p"=>2.00, "cpu0.p_cpu"=>10.00, "cpu0.p_user"=>8.00, "cpu0.p_system"=>2.00, "cpu1.p_cpu"=>6.00, "cpu1.p_user"=>4.00, "cpu1.p_system"=>2.00}]
    [1] [1452185190, {"cpu_p"=>6.50, "user_p"=>5.00, "system_p"=>1.50, "cpu0.p_cpu"=>6.00, "cpu0.p_user"=>5.00, "cpu0.p_system"=>1.00, "cpu1.p_cpu"=>7.00, "cpu1.p_user"=>5.00, "cpu1.p_system"=>2.00}]
    [2] [1452185191, {"cpu_p"=>7.50, "user_p"=>5.00, "system_p"=>2.50, "cpu0.p_cpu"=>7.00, "cpu0.p_user"=>3.00, "cpu0.p_system"=>4.00, "cpu1.p_cpu"=>6.00, "cpu1.p_user"=>6.00, "cpu1.p_system"=>0.00}]
    [3] [1452185192, {"cpu_p"=>4.50, "user_p"=>3.50, "system_p"=>1.00, "cpu0.p_cpu"=>6.00, "cpu0.p_user"=>5.00, "cpu0.p_system"=>1.00, "cpu1.p_cpu"=>5.00, "cpu1.p_user"=>3.00, "cpu1.p_system"=>2.00}]
    [INPUT]
        Name cpu
        Tag  my_cpu
    
    [OUTPUT]
        Name  stdout
        Match *
    processor    : 0
    vendor_id    : GenuineIntel
    cpu family   : 6
    model        : 42
    model name   : Intel(R) Core(TM) i7-2640M CPU @ 2.80GHz
    stepping     : 7
    microcode    : 41
    cpu MHz      : 2791.009
    cache size   : 4096 KB
    physical id  : 0
    siblings     : 1
    [INPUT]
        Name           head
        Tag            head.cpu
        File           /proc/cpuinfo
        Lines          8
        Split_line     true
        # {"line0":"processor    : 0", "line1":"vendor_id    : GenuineIntel" ...}
    
    [FILTER]
        Name           record_modifier
        Match          *
        Whitelist_key  line7
    
    [OUTPUT]
        Name           stdout
        Match          *
    $ bin/fluent-bit -c head.conf 
    Fluent Bit v1.x.x
    * Copyright (C) 2019-2020 The Fluent Bit Authors
    * Copyright (C) 2015-2018 Treasure Data
    * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
    * https://fluentbit.io
    
    [2017/06/26 22:38:24] [ info] [engine] started
    [0] head.cpu: [1498484305.000279805, {"line7"=>"cpu MHz        : 2791.009"}]
    [1] head.cpu: [1498484306.011680137, {"line7"=>"cpu MHz        : 2791.009"}]
    [2] head.cpu: [1498484307.010042482, {"line7"=>"cpu MHz        : 2791.009"}]
    [3] head.cpu: [1498484308.008447978, {"line7"=>"cpu MHz        : 2791.009"}]
    $ fluent-bit -i head -t uptime -p File=/proc/uptime -o stdout -m '*'
    Fluent Bit v1.x.x
    * Copyright (C) 2019-2020 The Fluent Bit Authors
    * Copyright (C) 2015-2018 Treasure Data
    * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
    * https://fluentbit.io
    
    [2016/05/17 21:53:54] [ info] starting engine
    [0] uptime: [1463543634, {"head"=>"133517.70 194870.97"}]
    [1] uptime: [1463543635, {"head"=>"133518.70 194872.85"}]
    [2] uptime: [1463543636, {"head"=>"133519.70 194876.63"}]
    [3] uptime: [1463543637, {"head"=>"133520.70 194879.72"}]
    [INPUT]
        Name          head
        Tag           uptime
        File          /proc/uptime
        Buf_Size      256
        Interval_Sec  1
        Interval_NSec 0
    
    [OUTPUT]
        Name   stdout
        Match  *
    [INPUT]
        name prometheus_scrape
        host 0.0.0.0 
        port 8201
        tag vault 
        metrics_path /v1/sys/metrics?format=prometheus 
        scrape_interval 10s
    
    [OUTPUT]
        name stdout
        match *
    
    2022-03-26T23:01:29.836663788Z go_memstats_alloc_bytes_total = 31891336
    2022-03-26T23:01:29.836663788Z go_memstats_frees_total = 313264
    2022-03-26T23:01:29.836663788Z go_memstats_lookups_total = 0
    2022-03-26T23:01:29.836663788Z go_memstats_mallocs_total = 378992
    2022-03-26T23:01:29.836663788Z process_cpu_seconds_total = 1.6200000000000001
    2022-03-26T23:01:29.836663788Z go_goroutines = 19
    2022-03-26T23:01:29.836663788Z go_info{version="go1.17.7"} = 1
    2022-03-26T23:01:29.836663788Z go_memstats_alloc_bytes = 12547800
    2022-03-26T23:01:29.836663788Z go_memstats_buck_hash_sys_bytes = 1468900
    2022-03-26T23:01:29.836663788Z go_memstats_gc_cpu_fraction = 8.1509688352783453e-06
    2022-03-26T23:01:29.836663788Z go_memstats_gc_sys_bytes = 5875576
    2022-03-26T23:01:29.836663788Z go_memstats_heap_alloc_bytes = 12547800
    2022-03-26T23:01:29.836663788Z go_memstats_heap_idle_bytes = 2220032
    2022-03-26T23:01:29.836663788Z go_memstats_heap_inuse_bytes = 14000128
    2022-03-26T23:01:29.836663788Z go_memstats_heap_objects = 65728
    2022-03-26T23:01:29.836663788Z go_memstats_heap_released_bytes = 2187264
    2022-03-26T23:01:29.836663788Z go_memstats_heap_sys_bytes = 16220160
    2022-03-26T23:01:29.836663788Z go_memstats_last_gc_time_seconds = 1648335593.2483871
    2022-03-26T23:01:29.836663788Z go_memstats_mcache_inuse_bytes = 2400
    2022-03-26T23:01:29.836663788Z go_memstats_mcache_sys_bytes = 16384
    2022-03-26T23:01:29.836663788Z go_memstats_mspan_inuse_bytes = 150280
    2022-03-26T23:01:29.836663788Z go_memstats_mspan_sys_bytes = 163840
    2022-03-26T23:01:29.836663788Z go_memstats_next_gc_bytes = 16586496
    2022-03-26T23:01:29.836663788Z go_memstats_other_sys_bytes = 422572
    2022-03-26T23:01:29.836663788Z go_memstats_stack_inuse_bytes = 557056
    2022-03-26T23:01:29.836663788Z go_memstats_stack_sys_bytes = 557056
    2022-03-26T23:01:29.836663788Z go_memstats_sys_bytes = 24724488
    2022-03-26T23:01:29.836663788Z go_threads = 8
    2022-03-26T23:01:29.836663788Z process_max_fds = 65536
    2022-03-26T23:01:29.836663788Z process_open_fds = 12
    2022-03-26T23:01:29.836663788Z process_resident_memory_bytes = 200638464
    2022-03-26T23:01:29.836663788Z process_start_time_seconds = 1648333791.45
    2022-03-26T23:01:29.836663788Z process_virtual_memory_bytes = 865849344
    2022-03-26T23:01:29.836663788Z process_virtual_memory_max_bytes = 1.8446744073709552e+19
    2022-03-26T23:01:29.836663788Z vault_runtime_alloc_bytes = 12482136
    2022-03-26T23:01:29.836663788Z vault_runtime_free_count = 313256
    2022-03-26T23:01:29.836663788Z vault_runtime_heap_objects = 65465
    2022-03-26T23:01:29.836663788Z vault_runtime_malloc_count = 378721
    2022-03-26T23:01:29.836663788Z vault_runtime_num_goroutines = 12
    2022-03-26T23:01:29.836663788Z vault_runtime_sys_bytes = 24724488
    2022-03-26T23:01:29.836663788Z vault_runtime_total_gc_pause_ns = 1917611
    2022-03-26T23:01:29.836663788Z vault_runtime_total_gc_runs = 19
    $ fluent-bit -i tcp -o stdout
    $ fluent-bit -i tcp://192.168.3.2:9090 -o stdout
    [INPUT]
        Name        tcp
        Listen      0.0.0.0
        Port        5170
        Chunk_Size  32
        Buffer_Size 64
        Format      json
    
    [OUTPUT]
        Name        stdout
        Match       *
    $ echo '{"key 1": 123456789, "key 2": "abcdefg"}' | nc 127.0.0.1 5170
    $ bin/fluent-bit -i tcp -o stdout -f 1
    Fluent Bit v1.x.x
    * Copyright (C) 2019-2020 The Fluent Bit Authors
    * Copyright (C) 2015-2018 Treasure Data
    * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
    * https://fluentbit.io
    
    [2019/10/03 09:19:34] [ info] [storage] initializing...
    [2019/10/03 09:19:34] [ info] [storage] in-memory
    [2019/10/03 09:19:34] [ info] [engine] started (pid=14569)
    [2019/10/03 09:19:34] [ info] [in_tcp] binding 0.0.0.0:5170
    [2019/10/03 09:19:34] [ info] [sp] stream processor started
    [0] tcp.0: [1570115975.581246030, {"key 1"=>123456789, "key 2"=>"abcdefg"}]
    [INPUT]
    	name opentelemetry
    	listen 127.0.0.1
    	port 4318
    
    [OUTPUT]
    	name stdout
    	match *
    curl --header "Content-Type: application/json" --request POST --data '{"resourceLogs":[{"resource":{},"scopeLogs":[{"scope":{},"logRecords":[{"timeUnixNano":"1660296023390371588","body":{"stringValue":"{\"message\":\"dummy\"}"},"traceId":"","spanId":""}]}]}]}'   http://0.0.0.0:4318/v1/logs
# Windows Exporter Metrics + Prometheus Exporter
# -------------------------------------------
# The following example collects host metrics on Windows and exposes
# them through a Prometheus HTTP end-point.
    #
    # After starting the service try it with:
    #
    # $ curl http://127.0.0.1:2021/metrics
    #
    [SERVICE]
        flush           1
        log_level       info
    
    [INPUT]
        name            windows_exporter_metrics
        tag             node_metrics
        scrape_interval 2
    
    [OUTPUT]
        name            prometheus_exporter
        match           node_metrics
        host            0.0.0.0
        port            2021
    
            
    curl http://127.0.0.1:2021/metrics
    [INPUT]
        Name         winevtlog
        Channels     Setup,Windows PowerShell
        Interval_Sec 1
        DB           winevtlog.sqlite
    
    [OUTPUT]
        Name   stdout
        Match  *
    $ fluent-bit -i winevtlog -p 'channels=Setup' -p 'Read_Existing_Events=true' -o stdout
    $ fluent-bit -i serial -t data -p File=/dev/tnt0 -p BitRate=9600 -o stdout -m '*'
    $ echo 'this is some message' > /dev/tnt1
    $ fluent-bit -i serial -t data -p File=/dev/tnt0 -p BitRate=9600 -o stdout -m '*'
    Fluent Bit v1.x.x
    * Copyright (C) 2019-2020 The Fluent Bit Authors
    * Copyright (C) 2015-2018 Treasure Data
    * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
    * https://fluentbit.io
    
    [2016/05/20 15:44:39] [ info] starting engine
    [0] data: [1463780680, {"msg"=>"this is some message"}]
    $ echo 'aaXbbXccXddXee' > /dev/tnt1
    $ fluent-bit -i serial -t data -p File=/dev/tnt0 -p BitRate=9600 -p Separator=X -o stdout -m '*'
    Fluent-Bit v0.8.0
    Copyright (C) Treasure Data
    
    [2016/05/20 16:04:51] [ info] starting engine
    [0] data: [1463781902, {"msg"=>"aa"}]
    [1] data: [1463781902, {"msg"=>"bb"}]
    [2] data: [1463781902, {"msg"=>"cc"}]
    [3] data: [1463781902, {"msg"=>"dd"}]
    [INPUT]
        Name      serial
        Tag       data
        File      /dev/tnt0
        BitRate   9600
        Separator X
    
    [OUTPUT]
        Name   stdout
        Match  *
    $ git clone https://github.com/freemed/tty0tty
    $ cd tty0tty/module
    $ make
    $ sudo cp tty0tty.ko /lib/modules/$(uname -r)/kernel/drivers/misc/
    $ sudo depmod
    $ sudo modprobe tty0tty
    $ sudo chmod 666 /dev/tnt*
    /dev/tnt0 <=> /dev/tnt1
    /dev/tnt2 <=> /dev/tnt3
    /dev/tnt4 <=> /dev/tnt5
    /dev/tnt6 <=> /dev/tnt7

Linux

Amazon Linux 2022: x86_64, Arm64v8
Amazon Linux 2: x86_64, Arm64v8
Centos 9 Stream: x86_64, Arm64v8
Centos 8: x86_64, Arm64v8
Centos 7: x86_64, Arm64v8
Rocky Linux 8: x86_64, Arm64v8
Alma Linux 8: x86_64, Arm64v8
Debian 12 (Bookworm): x86_64, Arm64v8
Debian 11 (Bullseye): x86_64, Arm64v8
Debian 10 (Buster): x86_64, Arm64v8
Ubuntu 22.04 (Jammy Jellyfish): x86_64, Arm64v8
Ubuntu 20.04 (Focal Fossa): x86_64, Arm64v8
Ubuntu 18.04 (Bionic Beaver): x86_64, Arm64v8
Ubuntu 16.04 (Xenial Xerus): x86_64
Raspbian 11 (Bullseye): Arm32v7
Raspbian 10 (Buster): Arm32v7

macOS

*: x86_64, Apple M1

Windows

Windows Server 2019: x86_64, x86
Windows 10 1903: x86_64, x86

From an architecture support perspective, Fluent Bit is fully functional on x86_64, Arm64v8, and Arm32v7 based processors.

Fluent Bit also works on macOS and *BSD systems, but not all plugins will be available on all platforms. Official support will expand based on community demand. Fluent Bit may run on older operating systems, though it will need to be built from source, or use custom packages from enterprise providers.


    Getting Started with Fluent Bit

The following serves as a guide on how to install, deploy, and upgrade Fluent Bit.

    Container Deployment

    Deployment Type
    Instructions

    Kubernetes

    Install on Linux (Packages)

    Operating System
    Installation Instructions

    Install on Windows (Packages)

    Operating System
    Installation Instructions

    Install on macOS (Packages)

    Operating System
    Installation Instructions

    Compile from Source (Linux, Windows, FreeBSD, macOS)

    Operating System
    Installation Instructions

    Sandbox Environment

If you are interested in learning about Fluent Bit, you can try out the sandbox environment.

    Enterprise Packages

Fluent Bit packages are also provided by enterprise providers for older, end-of-life versions, Unix systems, and additional support and features, including aspects like CVE backporting. A list is provided at fluentbit.io/enterprise.

    Configuring Fluent Bit

    Currently, Fluent Bit supports two configuration formats:

    • Classic mode.

• Yaml. (YAML configuration is a tech preview, so it's not recommended for production; see the sketch below.)
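As a quick illustration, here is a minimal equivalent pipeline in both formats (a sketch; the dummy input is only illustrative):

Classic mode:

    [INPUT]
        Name dummy

    [OUTPUT]
        Name  stdout
        Match *

YAML:

    pipeline:
      inputs:
        - name: dummy
      outputs:
        - name: stdout
          match: '*'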

    CLI flags

    Fluent Bit also supports a CLI interface with various flags matching up to the configuration options available.

    Validating your Data and Structure

Fluent Bit is a powerful log processing tool that supports different sources and formats; in addition, it provides several filters that can be used to perform custom modifications. This flexibility is great, but as your pipeline grows it's strongly recommended to validate your data and structure.

We encourage Fluent Bit users to integrate data validation in their CI systems.

A simplified view of our data processing pipeline is as follows: data flows from inputs through parsers and filters into the buffer, and is then routed to the configured outputs.

In a normal production environment, many Inputs, Filters, and Outputs are defined in the configuration, so integrating continuous validation of your configuration against expected results is a must. For this requirement, Fluent Bit provides a specific Filter called Expect, which can be used to validate expected Keys and Values from your records and take some action when an exception is found.

    How it Works

As an example, consider the following pipeline where your source of data is a normal file with JSON content, followed by two filters: grep to exclude certain records and record_modifier to alter the record content by adding and removing specific keys.

Ideally you want to add validation checkpoints for your data between each step, so you can know if your data structure is correct; we do this by using the expect filter.

The expect filter sets rules that aim to validate certain criteria, like:

• does the record contain a key A?

• does the record not contain key A?

• does the record key A value equal NULL?

• does the record key A value differ from NULL?

• does the record key A value equal B?

Every expect filter configuration can expose specific rules to validate the content of your records; it supports the following configuration properties:

    Property
    Description

    Start Testing

    Consider the following JSON file called data.log with the following content:
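A minimal example consistent with the checks below is a single record containing both keys (illustrative content):

    {"color": "blue", "label": {"name": "abc"}}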

The following Fluent Bit configuration file configures a pipeline to consume the log above, applying an expect filter to validate that the keys color and label exist:
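A sketch of such a pipeline, assuming parsers.conf defines the json parser; note that the parser directive lands on line 9, referenced below:

    [SERVICE]
        flush        1
        log_level    info
        parsers_file parsers.conf

    [INPUT]
        name   tail
        path   ./data.log
        parser json

    [FILTER]
        name       expect
        match      *
        key_exists color
        key_exists label
        action     exit

    [OUTPUT]
        name  stdout
        match *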

Note that if for some reason the JSON parser fails or is missing in the tail input (line 9), the expect filter will trigger the exit action. As a test, go ahead and comment out or remove line 9.

As a second step, we will extend our pipeline by adding a grep filter to match records whose label map contains a key called name with the value abc, followed by an expect filter to re-validate that condition:
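A sketch of the two additional filters, using the record accessor to reach the nested key:

    [FILTER]
        name   grep
        match  *
        regex  $label['name'] ^abc$

    [FILTER]
        name       expect
        match      *
        key_val_eq $label['name'] abc
        action     exit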

    Deploying in Production

When deploying your configuration in production, you might want to remove the expect filters from your configuration, since they add unnecessary extra work unless you want 100% coverage of checks at runtime.

    Networking

Fluent Bit implements a unified networking interface that is exposed to components like plugins. This interface abstracts all the complexity of general I/O and is fully configurable.

A common use case is when a component or plugin needs to connect to a service to send and receive data. Although this mode of operation sounds easy to deal with, there are many factors that can make things hard, like unresponsive services, network latency, or any kind of connectivity error. The networking interface aims to abstract and simplify network I/O handling, minimize risks, and optimize performance.

    Concepts

    TCP Connect Timeout

Most of the time, creating a new TCP connection to a remote server is straightforward and takes a few milliseconds. But there are cases where DNS resolution, a slow network, or an incomplete TLS handshake might create long delays or leave the connection in an incomplete status.

The net.connect_timeout option allows configuring the maximum time to wait for a connection to be established; note that this value already considers the TLS handshake process.

The net.connect_timeout_log_error option indicates whether an error should be logged in case of connect timeout. If disabled, the timeout is logged as a debug-level message instead.

    TCP Source Address

On environments with multiple network interfaces, it might be desirable to choose which interface to use for the data that will flow through the network.

The net.source_address option allows specifying which network address must be used for a TCP connection and data flow.

    Connection Keepalive

TCP is a connection-oriented channel; to deliver and receive data from a remote endpoint, in most cases we use a TCP connection. This TCP connection can be created and destroyed once it's no longer needed, an approach with pros and cons; here we will refer to the opposite case: keeping the connection open.

The concept of Connection Keepalive refers to the ability of the client (Fluent Bit in this case) to keep the TCP connection open in a persistent way; once the connection has been created and used, instead of closing it, it can be recycled. This feature offers many performance benefits, since communication channels are already established beforehand.

Any component that uses TCP channels, like HTTP or TLS, can take advantage of this feature. For configuration purposes use the net.keepalive property.

    Connection Keepalive Idle Timeout

When keepalive is enabled, a connection might stay unused for long periods of time. Idle keepalive connections are not helpful; it's preferable to keep connections alive only while they are being used.

In order to control how long a keepalive connection can remain idle, we expose the configuration property net.keepalive_idle_timeout.

    DNS mode

If a transport layer protocol is specified, the plugin whose configuration section contains the net.dns.mode setting overrides the global dns.mode value and issues DNS requests using the specified protocol, which can be either TCP or UDP.
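A sketch, assuming the global value is set through dns.mode in the [SERVICE] section and overridden by a single output (the http output and example.com endpoint are only illustrative):

    [SERVICE]
        flush    1
        dns.mode TCP

    [OUTPUT]
        name         http
        match        *
        host         example.com
        port         443
        net.dns.mode UDP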

    Configuration Options

For plugins that rely on networking I/O, the following section describes the available network configuration properties and how they can be used to optimize performance or adjust to different configuration needs:

    Property
    Description
    Default

    Example

As an example, we will send 5 random messages through a TCP output connection; on the remote side we will use the nc (netcat) utility to see the data.

Put the following configuration snippet in a file called fluent-bit.conf:
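A sketch along these lines (the random input and its sample count are illustrative):

    [SERVICE]
        flush     1
        log_level info

    [INPUT]
        name    random
        samples 5

    [OUTPUT]
        name                       tcp
        match                      *
        host                       127.0.0.1
        port                       9090
        format                     json_lines
        net.keepalive              on
        net.keepalive_idle_timeout 10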

    In another terminal, start nc and make it listen for messages on TCP port 9090:
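On most systems:

    nc -l 9090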

    Now start Fluent Bit with the configuration file written above and you will see the data flowing to netcat:

If the net.keepalive option is not enabled, Fluent Bit will close the TCP connection after the flush and netcat will quit; here we can see how the keepalive connection works.

After the 5 records arrive, the connection will stay idle, and after 10 seconds it will be closed due to net.keepalive_idle_timeout.

    Node Exporter Metrics

    A plugin based on Prometheus Node Exporter to collect system / host level metrics

Prometheus Node Exporter is a popular way to collect system-level metrics from operating systems, such as CPU / Disk / Network / Process statistics. Fluent Bit 1.8.0 includes the node_exporter_metrics plugin, which builds on the Prometheus design to collect system-level metrics without having to manage two separate processes or agents.

    The initial release of Node Exporter Metrics contains a subset of collectors and metrics available from Prometheus Node Exporter and we plan to expand them over time.

    Important note: Metrics collected with Node Exporter Metrics flow through a separate pipeline from logs and current filters do not operate on top of metrics.

This plugin is currently only supported on Linux-based operating systems.



    action

Action to take when a rule does not match. The available options are warn or exit. With warn, a warning message is sent to the logging layer when a mismatch against the rules above is found; using exit makes Fluent Bit abort with status code 255.

    key_exists

    Check if a key with a given name exists in the record.

    key_not_exists

    Check if a key does not exist in the record.

key_val_is_null

Check that the value of the key is NULL.

key_val_is_not_null

Check that the value of the key is NOT NULL.

key_val_eq

Check that the value of the key equals the given value in the configuration.


net.dns.prefer_ipv4

Prioritize IPv4 DNS results when trying to establish a connection.

false

    net.dns.resolver

    Select the primary DNS resolver type (LEGACY or ASYNC).

    net.keepalive

    Enable or disable connection keepalive support. Accepts a boolean value: on / off.

    on

    net.keepalive_idle_timeout

    Set maximum time expressed in seconds for an idle keepalive connection.

    30

    net.keepalive_max_recycle

    Set maximum number of times a keepalive connection can be used before it is retired.

    2000

    net.source_address

    Specify network address to bind for data traffic.

    net.connect_timeout

Set maximum time expressed in seconds to wait for a TCP connection to be established; this includes the TLS handshake time.

    10

    net.connect_timeout_log_error

    On connection timeout, specify if it should log an error. When disabled, the timeout is logged as a debug message.

    true

    net.dns.mode

    Select the primary DNS connection type (TCP or UDP). Can be set in the [SERVICE] section and overridden on a per plugin basis if desired.


    License

    Strong Commitment to the Openness and Collaboration

Fluent Bit, including its core, plugins, and tools, is distributed under the terms of the Apache License v2.0:

    $ docker run --rm -it fluent/fluent-bit --help
    Usage: /fluent-bit/bin/fluent-bit [OPTION]
    
    Available Options
      -b  --storage_path=PATH specify a storage buffering path
      -c  --config=FILE       specify an optional configuration file
      -d, --daemon            run Fluent Bit in background mode
      -D, --dry-run           dry run
      -f, --flush=SECONDS     flush timeout in seconds (default: 1)
      -C, --custom=CUSTOM     enable a custom plugin
      -i, --input=INPUT       set an input
      -F  --filter=FILTER     set a filter
      -m, --match=MATCH       set plugin match, same as '-p match=abc'
      -o, --output=OUTPUT     set an output
      -p, --prop="A=B"        set plugin configuration property
      -R, --parser=FILE       specify a parser configuration file
      -e, --plugin=FILE       load an external plugin (shared lib)
      -l, --log_file=FILE     write log info to a file
      -t, --tag=TAG           set plugin tag, same as '-p tag=abc'
      -T, --sp-task=SQL       define a stream processor task
      -v, --verbose           increase logging verbosity (default: info)
      -w, --workdir           set the working directory
      -H, --http              enable monitoring HTTP server
      -P, --port              set HTTP server TCP port (default: 2020)
      -s, --coro_stack_size   set coroutines stack size in bytes (default: 24576)
      -q, --quiet             quiet mode
      -S, --sosreport         support report for Enterprise customers
      -V, --version           show version number
      -h, --help              print this help
    
    Inputs
      cpu                     CPU Usage
      mem                     Memory Usage
      thermal                 Thermal
      kmsg                    Kernel Log Buffer
      proc                    Check Process health
      disk                    Diskstats
      systemd                 Systemd (Journal) reader
      netif                   Network Interface Usage
      docker                  Docker containers metrics
      docker_events           Docker events
      node_exporter_metrics   Node Exporter Metrics (Prometheus Compatible)
      fluentbit_metrics       Fluent Bit internal metrics
      prometheus_scrape       Scrape metrics from Prometheus Endpoint
      tail                    Tail files
      dummy                   Generate dummy data
      dummy_thread            Generate dummy data in a separate thread
      head                    Head Input
      health                  Check TCP server health
      http                    HTTP
      collectd                collectd input plugin
      statsd                  StatsD input plugin
      opentelemetry           OpenTelemetry
      nginx_metrics           Nginx status metrics
      serial                  Serial input
      stdin                   Standard Input
      syslog                  Syslog
      tcp                     TCP
      mqtt                    MQTT, listen for Publish messages
      forward                 Fluentd in-forward
      random                  Random
    
    Filters
      alter_size              Alter incoming chunk size
      aws                     Add AWS Metadata
      checklist               Check records and flag them
      record_modifier         modify record
      throttle                Throttle messages using sliding window algorithm
      type_converter          Data type converter
      kubernetes              Filter to append Kubernetes metadata
      modify                  modify records by applying rules
      multiline               Concatenate multiline messages
      nest                    nest events by specified field values
      parser                  Parse events
      expect                  Validate expected keys and values
      grep                    grep events by specified field values
      rewrite_tag             Rewrite records tags
      lua                     Lua Scripting Filter
      stdout                  Filter events to STDOUT
      geoip2                  add geoip information to records
      nightfall               scans records for sensitive content
    
    Outputs
      azure                   Send events to Azure HTTP Event Collector
      azure_blob              Azure Blob Storage
      azure_kusto             Send events to Kusto (Azure Data Explorer)
      bigquery                Send events to BigQuery via streaming insert
      counter                 Records counter
      datadog                 Send events to DataDog HTTP Event Collector
      es                      Elasticsearch
      exit                    Exit after a number of flushes (test purposes)
      file                    Generate log file
      forward                 Forward (Fluentd protocol)
      http                    HTTP Output
      influxdb                InfluxDB Time Series
      logdna                  LogDNA
      loki                    Loki
      kafka                   Kafka
      kafka-rest              Kafka REST Proxy
      nats                    NATS Server
      nrlogs                  New Relic
      null                    Throws away events
      opensearch              OpenSearch
      plot                    Generate data file for GNU Plot
      pgsql                   PostgreSQL
      skywalking              Send logs into log collector on SkyWalking OAP
      slack                   Send events to a Slack channel
      splunk                  Send events to Splunk HTTP Event Collector
      stackdriver             Send events to Google Stackdriver Logging
      stdout                  Prints events to STDOUT
      syslog                  Syslog
      tcp                     TCP Output
      td                      Treasure Data
      flowcounter             FlowCounter
      gelf                    GELF Output
      websocket               Websocket
      cloudwatch_logs         Send logs to Amazon CloudWatch
      kinesis_firehose        Send logs to Amazon Kinesis Firehose
      kinesis_streams         Send logs to Amazon Kinesis Streams
      opentelemetry           OpenTelemetry
      prometheus_exporter     Prometheus Exporter
      prometheus_remote_write Prometheus remote write
      s3                      Send to S3
    {"color": "blue", "label": {"name": null}}
    {"color": "red", "label": {"name": "abc"}, "meta": "data"}
    {"color": "green", "label": {"name": "abc"}, "meta": null}
    [SERVICE]
        flush        1
        log_level    info
        parsers_file parsers.conf
    
    [INPUT]
        name        tail
        path        ./data.log
        parser      json
        exit_on_eof on
    
    # First 'expect' filter to validate that our data was structured properly
    [FILTER]
        name        expect
        match       *
        key_exists  color
        key_exists  $label['name']
        action      exit
    
    [OUTPUT]
        name        stdout
        match       *
    [SERVICE]
        flush        1
        log_level    info
        parsers_file parsers.conf
    
    [INPUT]
        name         tail
        path         ./data.log
        parser       json
        exit_on_eof  on
    
    # First 'expect' filter to validate that our data was structured properly
    [FILTER]
        name       expect
        match      *
        key_exists color
        key_exists label
        action     exit
    
    # Match records that only contains map 'label' with key 'name' = 'abc'
    [FILTER]
        name       grep
        match      *
        regex      $label['name'] ^abc$
    
    # Check that every record contains 'label' with a non-null value
    [FILTER]
        name       expect
        match      *
        key_val_eq $label['name'] abc
        action     exit
    
    # Append a new key to the record using an environment variable
    [FILTER]
        name       record_modifier
        match      *
        record     hostname ${HOSTNAME}
    
    # Check that every record contains 'hostname' key
    [FILTER]
        name       expect
        match      *
        key_exists hostname
        action     exit
    
    [OUTPUT]
        name       stdout
        match      *
    [SERVICE]
        flush     1
        log_level info
    
    [INPUT]
        name      random
        samples   5
    
    [OUTPUT]
        name      tcp
        match     *
        host      127.0.0.1
        port      9090
        format    json_lines
        # Networking Setup
        net.dns.mode                TCP
        net.connect_timeout         5
        net.source_address          127.0.0.1
        net.keepalive               on
        net.keepalive_idle_timeout  10
    $ nc -l 9090
    $ nc -l 9090
    {"date":1587769732.572266,"rand_value":9704012962543047466}
    {"date":1587769733.572354,"rand_value":7609018546050096989}
    {"date":1587769734.572388,"rand_value":17035865539257638950}
    {"date":1587769735.572419,"rand_value":17086151440182975160}
    {"date":1587769736.572277,"rand_value":527581343064950185}
                                     Apache License
                               Version 2.0, January 2004
                            http://www.apache.org/licenses/
    
       TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
    
       1. Definitions.
    
          "License" shall mean the terms and conditions for use, reproduction,
          and distribution as defined by Sections 1 through 9 of this document.
    
          "Licensor" shall mean the copyright owner or entity authorized by
          the copyright owner that is granting the License.
    
          "Legal Entity" shall mean the union of the acting entity and all
          other entities that control, are controlled by, or are under common
          control with that entity. For the purposes of this definition,
          "control" means (i) the power, direct or indirect, to cause the
          direction or management of such entity, whether by contract or
          otherwise, or (ii) ownership of fifty percent (50%) or more of the
          outstanding shares, or (iii) beneficial ownership of such entity.
    
          "You" (or "Your") shall mean an individual or Legal Entity
          exercising permissions granted by this License.
    
          "Source" form shall mean the preferred form for making modifications,
          including but not limited to software source code, documentation
          source, and configuration files.
    
          "Object" form shall mean any form resulting from mechanical
          transformation or translation of a Source form, including but
          not limited to compiled object code, generated documentation,
          and conversions to other media types.
    
          "Work" shall mean the work of authorship, whether in Source or
          Object form, made available under the License, as indicated by a
          copyright notice that is included in or attached to the work
          (an example is provided in the Appendix below).
    
          "Derivative Works" shall mean any work, whether in Source or Object
          form, that is based on (or derived from) the Work and for which the
          editorial revisions, annotations, elaborations, or other modifications
          represent, as a whole, an original work of authorship. For the purposes
          of this License, Derivative Works shall not include works that remain
          separable from, or merely link (or bind by name) to the interfaces of,
          the Work and Derivative Works thereof.
    
          "Contribution" shall mean any work of authorship, including
          the original version of the Work and any modifications or additions
          to that Work or Derivative Works thereof, that is intentionally
          submitted to Licensor for inclusion in the Work by the copyright owner
          or by an individual or Legal Entity authorized to submit on behalf of
          the copyright owner. For the purposes of this definition, "submitted"
          means any form of electronic, verbal, or written communication sent
          to the Licensor or its representatives, including but not limited to
          communication on electronic mailing lists, source code control systems,
          and issue tracking systems that are managed by, or on behalf of, the
          Licensor for the purpose of discussing and improving the Work, but
          excluding communication that is conspicuously marked or otherwise
          designated in writing by the copyright owner as "Not a Contribution."
    
          "Contributor" shall mean Licensor and any individual or Legal Entity
          on behalf of whom a Contribution has been received by Licensor and
          subsequently incorporated within the Work.
    
       2. Grant of Copyright License. Subject to the terms and conditions of
          this License, each Contributor hereby grants to You a perpetual,
          worldwide, non-exclusive, no-charge, royalty-free, irrevocable
          copyright license to reproduce, prepare Derivative Works of,
          publicly display, publicly perform, sublicense, and distribute the
          Work and such Derivative Works in Source or Object form.
    
       3. Grant of Patent License. Subject to the terms and conditions of
          this License, each Contributor hereby grants to You a perpetual,
          worldwide, non-exclusive, no-charge, royalty-free, irrevocable
          (except as stated in this section) patent license to make, have made,
          use, offer to sell, sell, import, and otherwise transfer the Work,
          where such license applies only to those patent claims licensable
          by such Contributor that are necessarily infringed by their
          Contribution(s) alone or by combination of their Contribution(s)
          with the Work to which such Contribution(s) was submitted. If You
          institute patent litigation against any entity (including a
          cross-claim or counterclaim in a lawsuit) alleging that the Work
          or a Contribution incorporated within the Work constitutes direct
          or contributory patent infringement, then any patent licenses
          granted to You under this License for that Work shall terminate
          as of the date such litigation is filed.
    
       4. Redistribution. You may reproduce and distribute copies of the
          Work or Derivative Works thereof in any medium, with or without
          modifications, and in Source or Object form, provided that You
          meet the following conditions:
    
          (a) You must give any other recipients of the Work or
              Derivative Works a copy of this License; and
    
          (b) You must cause any modified files to carry prominent notices
              stating that You changed the files; and
    
          (c) You must retain, in the Source form of any Derivative Works
              that You distribute, all copyright, patent, trademark, and
              attribution notices from the Source form of the Work,
              excluding those notices that do not pertain to any part of
              the Derivative Works; and
    
          (d) If the Work includes a "NOTICE" text file as part of its
              distribution, then any Derivative Works that You distribute must
              include a readable copy of the attribution notices contained
              within such NOTICE file, excluding those notices that do not
              pertain to any part of the Derivative Works, in at least one
              of the following places: within a NOTICE text file distributed
              as part of the Derivative Works; within the Source form or
              documentation, if provided along with the Derivative Works; or,
              within a display generated by the Derivative Works, if and
              wherever such third-party notices normally appear. The contents
              of the NOTICE file are for informational purposes only and
              do not modify the License. You may add Your own attribution
              notices within Derivative Works that You distribute, alongside
              or as an addendum to the NOTICE text from the Work, provided
              that such additional attribution notices cannot be construed
              as modifying the License.
    
          You may add Your own copyright statement to Your modifications and
          may provide additional or different license terms and conditions
          for use, reproduction, or distribution of Your modifications, or
          for any such Derivative Works as a whole, provided Your use,
          reproduction, and distribution of the Work otherwise complies with
          the conditions stated in this License.
    
       5. Submission of Contributions. Unless You explicitly state otherwise,
          any Contribution intentionally submitted for inclusion in the Work
          by You to the Licensor shall be under the terms and conditions of
          this License, without any additional terms or conditions.
          Notwithstanding the above, nothing herein shall supersede or modify
          the terms of any separate license agreement you may have executed
          with Licensor regarding such Contributions.
    
       6. Trademarks. This License does not grant permission to use the trade
          names, trademarks, service marks, or product names of the Licensor,
          except as required for reasonable and customary use in describing the
          origin of the Work and reproducing the content of the NOTICE file.
    
       7. Disclaimer of Warranty. Unless required by applicable law or
          agreed to in writing, Licensor provides the Work (and each
          Contributor provides its Contributions) on an "AS IS" BASIS,
          WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
          implied, including, without limitation, any warranties or conditions
          of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
          PARTICULAR PURPOSE. You are solely responsible for determining the
          appropriateness of using or redistributing the Work and assume any
          risks associated with Your exercise of permissions under this License.
    
       8. Limitation of Liability. In no event and under no legal theory,
          whether in tort (including negligence), contract, or otherwise,
          unless required by applicable law (such as deliberate and grossly
          negligent acts) or agreed to in writing, shall any Contributor be
          liable to You for damages, including any direct, indirect, special,
          incidental, or consequential damages of any character arising as a
          result of this License or out of the use or inability to use the
          Work (including but not limited to damages for loss of goodwill,
          work stoppage, computer failure or malfunction, or any and all
          other commercial damages or losses), even if such Contributor
          has been advised of the possibility of such damages.
    
       9. Accepting Warranty or Additional Liability. While redistributing
          the Work or Derivative Works thereof, You may choose to offer,
          and charge a fee for, acceptance of support, warranty, indemnity,
          or other liability obligations and/or rights consistent with this
          License. However, in accepting such obligations, You may act only
          on Your own behalf and on Your sole responsibility, not on behalf
          of any other Contributor, and only if You agree to indemnify,
          defend, and hold each Contributor harmless for any liability
          incurred by, or claims asserted against, such Contributor by reason
          of your accepting any such warranty or additional liability.
    
       END OF TERMS AND CONDITIONS

    Yocto / Embedded Linux

    Docker

    Deploy with Docker

    Containers on AWS

    Deploy on Containers on AWS

    CentOS / Red Hat

    CentOS 7, CentOS 8, CentOS 9 Stream

    Ubuntu

    Ubuntu 16.04 LTS, Ubuntu 18.04 LTS, Ubuntu 20.04 LTS, Ubuntu 22.04 LTS

    Debian

    Debian 10, Debian 11, Debian 12

    Amazon Linux

    Amazon Linux 2, Amazon Linux 2022

    Raspbian / Raspberry Pi

    Raspbian 10, Raspbian 11

    Windows Server 2019

    Windows Server EXE, Windows Server ZIP

    Windows 10 2019.03

    Windows EXE, Windows ZIP

    macOS

    Homebrew

    Linux, FreeBSD

    Compile from source

    macOS

    Compile from source

    Windows

    Compile from Source

    enterprise providers
    Calyptia Fluent Bit LTS
    Deploy on Kubernetes
    Configuration
    Key
    Description
    Default

    scrape_interval

    The rate at which metrics are collected from the host operating system

    5 seconds

    path.procfs

    The mount point used to collect process information and metrics

    /proc/

    path.sysfs

    The path in the filesystem used to collect system metrics

    /sys/

    Collectors available

    The following table describes the available collectors as part of this plugin. All of them are enabled by default and respect the original metric names, descriptions, and types from Prometheus Exporter, so you can use your current dashboards without any compatibility problem.

    note: the Version column specifies the Fluent Bit version where the collector is available.

    Name
    Description
    OS
    Version

    cpu

    Exposes CPU statistics.

    Linux

    v1.8

    cpufreq

    Exposes CPU frequency statistics.

    Linux

    v1.8

    diskstats

    Getting Started

    Simple Configuration File

    In the following configuration file, the input plugin node_exporter_metrics collects metrics every 2 seconds and exposes them through our Prometheus Exporter output plugin on HTTP/TCP port 2021.

    You can test that the metrics are exposed by using curl:

    Container to Collect Host Metrics

    When deploying Fluent Bit in a container you will need to specify additional settings to ensure that Fluent Bit has access to the host operating system. The following docker command deploys Fluent Bit with specific mount paths and settings enabled to ensure that Fluent Bit can collect from the host. These are then exposed over port 2021.

    Fluent Bit + Prometheus + Grafana

    If you like dashboards for monitoring, Grafana is one of the preferred options. In our Fluent Bit source code repository, we have pushed a simple docker-compose example. Steps:

    Get a copy of Fluent Bit source code

    Start the service and view your Dashboard

    Now open your browser at the address http://127.0.0.1:3000. When asked for the credentials to access Grafana, just use the admin username and admin password.

    Note that by default the Grafana dashboard plots the data from the last 24 hours, so change it to Last 5 minutes to see the recent data being collected.

    Stop the Service

    Enhancement Requests

    Our current plugin implements a subset of the collectors available in the original Prometheus Node Exporter. If you would like us to prioritize a specific collector, please open a GitHub issue using the following template: in_node_exporter_metrics.

    Prometheus Node Exporter

    Kubernetes

    Kubernetes Production Grade Log Processor

    Fluent Bit is a lightweight and extensible Log Processor that comes with full support for Kubernetes:

    • Process Kubernetes containers logs from the file system or Systemd/Journald.

    • Enrich logs with Kubernetes Metadata.

    Syslog

    The Syslog input plugin allows you to collect Syslog messages through a Unix socket server (UDP or TCP) or over the network using TCP or UDP.

    Configuration Parameters

    The plugin supports the following configuration parameters:

    Key
    Description
    Default
    # Node Exporter Metrics + Prometheus Exporter
    # -------------------------------------------
    # The following example collect host metrics on Linux and expose
    # them through a Prometheus HTTP end-point.
    #
    # After starting the service try it with:
    #
    # $ curl http://127.0.0.1:2021/metrics
    #
    [SERVICE]
        flush           1
        log_level       info
    
    [INPUT]
        name            node_exporter_metrics
        tag             node_metrics
        scrape_interval 2
    
    [OUTPUT]
        name            prometheus_exporter
        match           node_metrics
        host            0.0.0.0
        port            2021
    
            
    curl http://127.0.0.1:2021/metrics
    docker run -ti -v /proc:/host/proc \
                   -v /sys:/host/sys   \
                   -p 2021:2021        \
                   fluent/fluent-bit:1.8.0 \
                   /fluent-bit/bin/fluent-bit \
                             -i node_exporter_metrics -p path.procfs=/host/proc -p path.sysfs=/host/sys \
                             -o prometheus_exporter -p "add_label=host $HOSTNAME" \
                             -f 1
    git clone https://github.com/fluent/fluent-bit
    cd fluent-bit/docker_compose/node-exporter-dashboard/
    docker-compose up --force-recreate -d --build
    docker-compose down

    Exposes disk I/O statistics.

    Linux

    v1.8

    filefd

    Exposes file descriptor statistics from /proc/sys/fs/file-nr.

    Linux

    v1.8.2

    loadavg

    Exposes load average.

    Linux

    v1.8

    meminfo

    Exposes memory statistics.

    Linux

    v1.8

    netdev

    Exposes network interface statistics such as bytes transferred.

    Linux

    v1.8.2

    stat

    Exposes various statistics from /proc/stat. This includes boot time, forks, and interrupts.

    Linux

    v1.8

    time

    Exposes the current system time.

    Linux

    v1.8

    uname

    Exposes system information as provided by the uname system call.

    Linux

    v1.8

    vmstat

    Exposes statistics from /proc/vmstat.

    Linux

    v1.8.2

    Centralize your logs in third party storage services like Elasticsearch, InfluxDB, HTTP, etc.

    Concepts

    Before getting started it is important to understand how Fluent Bit will be deployed. Kubernetes manages a cluster of nodes, so our log agent tool will need to run on every node to collect logs from every POD, hence Fluent Bit is deployed as a DaemonSet (a POD that runs on every node of the cluster).

    When Fluent Bit runs, it will read, parse and filter the logs of every POD and will enrich each entry with the following information (metadata):

    • Pod Name

    • Pod ID

    • Container Name

    • Container ID

    • Labels

    • Annotations

    To obtain this information, a built-in filter plugin called kubernetes talks to the Kubernetes API Server to retrieve relevant information such as the pod_id, labels, and annotations; other fields such as pod_name, container_id, and container_name are retrieved locally from the log file names. All of this is handled automatically; no intervention is required from a configuration aspect.

    Our Kubernetes Filter plugin is fully inspired by the Fluentd Kubernetes Metadata Filter written by Jimmi Dyson.

    Installation

    Fluent Bit should be deployed as a DaemonSet so that it will be available on every node of your Kubernetes cluster.

    The recommended way to deploy Fluent Bit is with the official Helm Chart: https://github.com/fluent/helm-charts

    Note for OpenShift

    If you are using Red Hat OpenShift you will also need to set up security context constraints (SCC):

    Installing with Helm Chart

    Helm is a package manager for Kubernetes and allows you to quickly deploy application packages into your running cluster. Fluent Bit is distributed via a helm chart found in the Fluent Helm Charts repo: https://github.com/fluent/helm-charts.

    To add the Fluent Helm Charts repo, use the following command:

    To validate that the repo was added, run helm search repo fluent to ensure the charts are listed. The default chart can then be installed by running the following:

    Default Values

    The default chart values include configuration to read container logs (with Docker parsing) and systemd logs, apply Kubernetes metadata enrichment, and finally output to an Elasticsearch cluster. You can modify the included values file (https://github.com/fluent/helm-charts/blob/master/charts/fluent-bit/values.yaml) to specify additional outputs, health checks, monitoring endpoints, or other configuration options.

    Details

    The default configuration of Fluent Bit ensures the following:

    • Consume all containers logs from the running Node.

    • The Tail input plugin will not append more than 5MB into the engine until the data is flushed to the Elasticsearch backend. This limit aims to provide a workaround for backpressure scenarios.

    • The Kubernetes filter will enrich the logs with Kubernetes metadata, specifically labels and annotations. The filter only goes to the API Server when it cannot find the cached info, otherwise it uses the cache.

    • The default backend in the configuration is Elasticsearch, set by the Elasticsearch output plugin. It uses the Logstash format to ingest the logs. If you need a different Index and Type, please refer to the plugin options and make your own adjustments.

    • There is an option called Retry_Limit set to False, which means that if Fluent Bit cannot flush the records to Elasticsearch, it will retry indefinitely until it succeeds.

    Container Runtime Interface (CRI) parser

    Fluent Bit by default assumes that logs are formatted by the Docker interface standard. However, when using CRI you can run into issues with malformed JSON if you do not modify the parser used. Fluent Bit includes a CRI log parser that can be used instead. An example of the parser is seen below:

    To use this parser, change the Input section of your configuration from docker to cri.

    Windows Deployment

    Since v1.5.0, Fluent Bit supports deployment to Windows pods.

    Log files overview

    When deploying Fluent Bit to Kubernetes, there are three log files that you need to pay attention to.

    C:\k\kubelet.err.log

    • This is the error log file from kubelet daemon running on host.

    • You will need to retain this file for future troubleshooting (to debug deployment failures etc.)

    C:\var\log\containers\<pod>_<namespace>_<container>-<docker>.log

    • This is the main log file you need to watch. Configure Fluent Bit to follow this file.

    • It is actually a symlink to the Docker log file in C:\ProgramData\, with some additional metadata on its file name.

    C:\ProgramData\Docker\containers\<docker>\<docker>.log

    • This is the log file produced by Docker.

    • Normally you don't directly read from this file, but you need to make sure that this file is visible from Fluent Bit.

    Typically, your deployment yaml contains the following volume configuration.

    Configure Fluent Bit

    Assuming the basic volume configuration described above, you can apply the following config to start logging.

    Mitigate unstable network on Windows pods

    Windows pods often lack working DNS immediately after boot (#78479). To mitigate this issue, filter_kubernetes provides a built-in mechanism to wait until the network starts up:

    • DNS_Retries - Retries N times until the network starts working (6)

    • DNS_Wait_Time - Lookup interval between network status checks (30)

    By default, Fluent Bit waits for 3 minutes (30 seconds x 6 times). If it's not enough for you, tweak the configuration as follows.


    Mode

    Defines transport protocol mode: unix_udp (UDP over Unix socket), unix_tcp (TCP over Unix socket), tcp or udp

    unix_udp

    Listen

    If Mode is set to tcp or udp, specify the network interface to bind.

    0.0.0.0

    Port

    If Mode is set to tcp or udp, specify the TCP port to listen for incoming connections.

    5140

    Path

    If Mode is set to unix_tcp or unix_udp, set the absolute path to the Unix socket file.

    Unix_Perm

    If Mode is set to unix_tcp or unix_udp, set the permission of the Unix socket file.

    Considerations

    • When using the Syslog input plugin, Fluent Bit requires access to the parsers.conf file; the path to this file can be specified with the option -R or through the Parsers_File key in the [SERVICE] section (more details below).

    • When udp or unix_udp is used, the buffer size to receive messages is configurable only through the Buffer_Chunk_Size option which defaults to 32kb.

    Getting Started

    In order to receive Syslog messages, you can run the plugin from the command line or through the configuration file:

    Command Line

    From the command line you can let Fluent Bit listen for Syslog messages with the following options:

    By default the service will create and listen for Syslog messages on the Unix socket /tmp/in_syslog.

    Configuration File

    In your main configuration file append the following Input & Output sections:

    Testing

    Once Fluent Bit is running, you can send some messages using the logger tool:

    In Fluent Bit we should see the following output:

    Recipes

    The following content aims to provide configuration examples for different use cases to integrate Fluent Bit and make it listen for Syslog messages from your systems.

    Rsyslog to Fluent Bit: Network mode over TCP

    Fluent Bit Configuration

    Put the following content in your fluent-bit.conf file:

    then start Fluent Bit.

    RSyslog Configuration

    Add a new file to your rsyslog config rules called 60-fluent-bit.conf inside the directory /etc/rsyslog.d/ and add the following content:

    then make sure to restart your rsyslog daemon:

    Rsyslog to Fluent Bit: Unix socket mode over UDP

    Fluent Bit Configuration

    Put the following content in your fluent-bit.conf file:

    then start Fluent Bit.

    RSyslog Configuration

    Add a new file to your rsyslog config rules called 60-fluent-bit.conf inside the directory /etc/rsyslog.d/ and place the following content:

    Make sure that the socket file is readable by rsyslog (tweak the Unix_Perm option shown above).

    Yocto / Embedded Linux

    Configuration File

    This page describes the main configuration file used by Fluent Bit

    One of the ways to configure Fluent Bit is using a main configuration file. Fluent Bit allows the use of one configuration file that works at a global scope and uses the Format and Schema defined previously.

    The main configuration file supports four types of sections:

    • Service

    • Input

    • Filter

    • Output

    In addition, it's also possible to split the main configuration file into multiple files using the feature to include external files:

    • Include File

    Service

    The Service section defines global properties of the service, the keys available as of this version are described in the following table:

    Key
    Description
    Default Value

    The following is an example of a SERVICE section:
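
    A minimal sketch using keys described in the table below (the values shown are illustrative):

    [SERVICE]
        flush        5
        daemon       off
        log_level    debug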

    For scheduler and retry details, please check the scheduling and retries section.

    Input

    An INPUT section defines a source (related to an input plugin). Here we will describe the base configuration for each INPUT section. Note that each input plugin may add its own configuration keys:

    Key
    Description

    The Name is mandatory and lets Fluent Bit know which input plugin should be loaded. The Tag is mandatory for all plugins except for the forward input plugin (as it provides dynamic tags).

    Example

    The following is an example of an INPUT section:
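
    A minimal sketch using the cpu input (the plugin and tag values are illustrative):

    [INPUT]
        name cpu
        tag  my_cpu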

    Filter

    A FILTER section defines a filter (related to a filter plugin). Here we will describe the base configuration for each FILTER section. Note that each filter plugin may add its own configuration keys:

    Key
    Description

    The Name is mandatory and lets Fluent Bit know which filter plugin should be loaded. The Match or Match_Regex is mandatory for all plugins. If both are specified, Match_Regex takes precedence.

    Example

    The following is an example of a FILTER section:
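
    A minimal sketch using the grep filter (the filter choice and its regex property are illustrative):

    [FILTER]
        name  grep
        match *
        regex log aa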

    Output

    The OUTPUT section specifies a destination that certain records should follow after a Tag match. Currently, Fluent Bit can route up to 256 OUTPUT plugins. The configuration supports the following keys:

    Key
    Description

    Example

    The following is an example of an OUTPUT section:
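
    A minimal sketch routing all records tagged my_cpu to the standard output (values are illustrative):

    [OUTPUT]
        name  stdout
        match my*cpu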

    Example: collecting CPU metrics

    The following configuration file example demonstrates how to collect CPU metrics and flush the results every five seconds to the standard output:
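
    A sketch combining the sections above (illustrative values):

    [SERVICE]
        flush     5
        daemon    off
        log_level debug

    [INPUT]
        name  cpu
        tag   my_cpu

    [OUTPUT]
        name  stdout
        match my*cpu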

    Visualize

    You can also visualize Fluent Bit INPUT, FILTER, and OUTPUT configuration via https://cloud.calyptia.com

    Include File

    To avoid long, complicated configuration files, it is better to split specific parts into different files and include them from one main file.

    Starting from Fluent Bit 0.12 the new configuration command @INCLUDE has been added and can be used in the following way:
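
    A sketch of a main file including an external file (somefile.conf is a placeholder name, used again in the explanation below):

    [SERVICE]
        flush 1

    @INCLUDE somefile.conf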

    The configuration reader will try to open the path somefile.conf; if not found, it will assume it's a relative path based on the path of the base configuration file, e.g.:

    • Main configuration file path: /tmp/main.conf

    • Included file: somefile.conf

    • Fluent Bit will try to open somefile.conf, if it fails it will try /tmp/somefile.conf.

    The @INCLUDE command only works at the top level of the configuration; it cannot be used inside sections.

    Wildcard character (*) is supported to include multiple files, e.g:
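
    For example, to include every file matching a common prefix (the file naming is illustrative):

    @INCLUDE input_*.conf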

    Note that files matching the wildcard character are included unsorted. If plugin ordering between files needs to be preserved, the files should be included explicitly.

    Buffering & Storage

    The end-goal of Fluent Bit is to collect, parse, filter, and ship logs to a central place. In this workflow there are many phases, and one of the critical pieces is the ability to do buffering: a mechanism to place processed data into a temporary location until it is ready to be shipped.

    By default when Fluent Bit processes data, it uses Memory as a primary and temporary place to store the records, but there are certain scenarios where it would be ideal to have a persistent buffering mechanism based on the filesystem to provide aggregation and data safety capabilities.

    Choosing the right configuration is critical, and the behavior of the service can be conditioned by the backpressure settings. Before we jump into the configuration, let's make sure we understand the relationship between Chunks, Memory, Filesystem, and Backpressure.

    Chunks, Memory, Filesystem and Backpressure

    Understanding the chunks, buffering and backpressure concepts is critical for a proper configuration. Let's do a recap of the meaning of these concepts.

    Chunks

    When an input plugin (source) emits records, the engine groups the records together in a Chunk. A Chunk size usually is around 2MB. By configuration, the engine decides where to place this Chunk, the default is that all chunks are created only in memory.

    Buffering and Memory

    As mentioned above, the Chunks generated by the engine are placed in memory but this is configurable.

    If memory is the only mechanism set for the input plugin, it will store as much data as it can in memory. This is the fastest mechanism with the least system overhead, but if the service is not able to deliver the records fast enough because of a slow network or an unresponsive remote service, Fluent Bit memory usage will increase since it will accumulate more data than it can deliver.

    In a high-load environment with backpressure, the risk of high memory usage is the chance of getting killed by the Kernel (OOM Killer). A workaround for this backpressure scenario is to limit the amount of memory in records that an input plugin can register; this configuration property is called mem_buf_limit. If a plugin has enqueued more than the mem_buf_limit, it won't be able to ingest more until that data can be delivered or flushed properly. In this scenario the input plugin in question is paused. When the input is paused, records will not be ingested until it is resumed. For some inputs, such as TCP and tail, pausing the input will almost certainly lead to log loss. For the tail input, Fluent Bit can save its current offset in the current file it is reading, and pick back up when the input is resumed.

    Look for messages in the Fluent Bit log output like:
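
    For example, for a hypothetical tcp input instance named tcp.1 (the plugin name and index depend on your configuration):

    [input] tcp.1 paused (mem buf overlimit)
    [input] tcp.1 resume (mem buf overlimit)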

    The workaround of mem_buf_limit is good for certain scenarios and environments; it helps to control the memory usage of the service, but at the cost that if a file gets rotated while paused, you might lose that data since it won't be able to register new records. This can happen with any input source plugin. The goal of mem_buf_limit is memory control and survival of the service.

    For full data safety guarantee, use filesystem buffering.

    Here is an example input definition:
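
    A sketch using the tcp input (the listener address, port, tag, and 50MB limit are illustrative):

    [INPUT]
        name          tcp
        listen        0.0.0.0
        port          5170
        format        none
        tag           tcp-logs
        mem_buf_limit 50MB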

    If this input uses more than 50MB memory to buffer logs, you will get a warning like this in the Fluent Bit logs:
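
    For the illustrative tcp input above, the warning looks similar to:

    [warn] [input] tcp.1 paused (mem buf overlimit)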

    Please note that Mem_Buf_Limit only applies when storage.type is set to the default value of memory. The section below explains the limits that apply when you enable storage.type filesystem.

    Filesystem buffering to the rescue

    Enabling filesystem buffering helps with backpressure and overall memory control.

    Behind the scenes, Memory and Filesystem buffering mechanisms are not mutually exclusive. Indeed, when enabling filesystem buffering for your input plugin (source) you are getting the best of both worlds: performance and data safety.

    When Filesystem buffering is enabled, the behavior of the engine is different. Upon Chunk creation, the engine stores the content in memory and also maps a copy on disk (through mmap(2)). The newly created Chunk is (1) active in memory, (2) backed up on disk, and (3) said to be up, which means "the chunk content is up in memory".

    How does the Filesystem buffering mechanism deal with high memory usage and backpressure? Fluent Bit controls the number of Chunks that are up in memory.

    By default, the engine allows us to have 128 Chunks up in memory in total (considering all Chunks); this value is controlled by the service property storage.max_chunks_up. The Chunks that are up are the active ones: those ready for delivery and those still receiving records. Any other remaining Chunk is in a down state, which means that it is only in the filesystem and won't be up in memory unless it is ready to be delivered. Remember, chunks are never much larger than 2 MB; thus, with the default storage.max_chunks_up value of 128, each input is limited to roughly 256 MB of memory.

    If the input plugin has enabled storage.type as filesystem, when reaching the storage.max_chunks_up threshold, instead of the plugin being paused, all new data will go to Chunks that are down in the filesystem. This allows us to control the memory usage by the service and also provides a guarantee that the service won't lose any data. By default, the enforcement of the storage.max_chunks_up limit is best-effort. Fluent Bit can only append new data to chunks that are up; when the limit is reached chunks will be temporarily brought up in memory to ingest new data, and then put to a down state afterwards. In general, Fluent Bit will work to keep the total number of up chunks at or below storage.max_chunks_up.

    If storage.pause_on_chunks_overlimit is enabled (default is off), the input plugin will be paused upon exceeding storage.max_chunks_up. Thus, with this option, storage.max_chunks_up becomes a hard limit for the input. When the input is paused, records will not be ingested until it is resumed. For some inputs, such as TCP and tail, pausing the input will almost certainly lead to log loss. For the tail input, Fluent Bit can save its current offset in the current file it is reading, and pick back up when the input is resumed.

    Look for messages in the Fluent Bit log output like:
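
    For example, for a hypothetical tcp input instance named tcp.1:

    [input] tcp.1 paused (storage buf overlimit)
    [input] tcp.1 resume (storage buf overlimit)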

    Limiting Filesystem space for Chunks

    Fluent Bit implements the concept of logical queues: based on its Tag, a Chunk can be routed to multiple destinations. Thus, we keep an internal reference from where a Chunk was created and where it needs to go.

    It's common to find cases where if we have multiple destinations for a Chunk, one of the destinations might be slower than the other, or maybe one is generating backpressure and not all of them. In this scenario, how do we limit the amount of filesystem Chunks that we are logically queueing?

    Starting from Fluent Bit v1.6, we introduced the new configuration property for output plugins called storage.total_limit_size which limits the number of Chunks that exist in the filesystem for a certain logical output destination. If one of the destinations reaches the storage.total_limit_size, the oldest Chunk from its queue for that logical output destination will be discarded.

    Configuration

    The storage layer configuration takes place in three areas:

    • Service Section

    • Input Section

    • Output Section

    The known Service section configures a global environment for the storage layer, the Input sections define which buffering mechanism to use, and the Output sections define the limits for the logical filesystem queues.

    Service Section Configuration

    The Service section refers to the section defined in the main configuration file:

    Key
    Description
    Default

    A Service section will look like this:
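
    A sketch matching the description below (the storage path is illustrative):

    [SERVICE]
        flush                     1
        log_level                 info
        storage.path              /var/log/flb-storage/
        storage.sync              normal
        storage.checksum          off
        storage.backlog.mem_limit 5M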

    That configuration sets an optional buffering mechanism where the root path for the data is /var/log/flb-storage/. It will use normal synchronization mode, without running a checksum, and up to a maximum of 5MB of memory when processing backlog data.

    Input Section Configuration

    Optionally, any Input plugin can configure its storage preference. The following table describes the options available:

    Key
    Description
    Default

    The following example configures a service that offers filesystem buffering capabilities and two Input plugins: the first based on the filesystem and the second with memory only.
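
    A sketch under those assumptions (the storage path and plugin choices are illustrative):

    [SERVICE]
        flush                     1
        log_level                 info
        storage.path              /var/log/flb-storage/
        storage.sync              normal
        storage.checksum          off
        storage.max_chunks_up     128
        storage.backlog.mem_limit 5M

    [INPUT]
        name          cpu
        storage.type  filesystem

    [INPUT]
        name          mem
        storage.type  memory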

    Output Section Configuration

    If certain chunks are filesystem storage.type based, it's possible to control the size of the logical queue for an output plugin. The following table describes the options available:

    Key
    Description
    Default

    The following example creates records with CPU usage samples in the filesystem, which are then delivered to the Google Stackdriver service while limiting the logical queue (buffering) to 5M:
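
    A sketch under those assumptions (the storage path is illustrative; the stackdriver output requires its own credentials configuration, omitted here):

    [SERVICE]
        flush                     1
        log_level                 info
        storage.path              /var/log/flb-storage/
        storage.sync              normal
        storage.checksum          off
        storage.max_chunks_up     128

    [INPUT]
        name                      cpu
        storage.type              filesystem

    [OUTPUT]
        name                      stackdriver
        match                     *
        storage.total_limit_size  5M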

    If for some reason Fluent Bit gets offline because of a network issue, it will continue buffering CPU samples, keeping only a maximum of 5M of the newest data.

    Transport Security

    Fluent Bit provides integrated support for Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL). In this section, we will refer to both implementations as TLS.

    Both input and output plugins that perform Network I/O can optionally enable TLS and configure the behavior. The following table describes the properties available:

    Property
    Description
    Default

    tls

    enable or disable TLS support

    Off

    Note: in order to use TLS on input plugins, the user is expected to provide both a certificate and a private key.

    The listed properties can be enabled in the configuration file, specifically on each output plugin section or directly through the command line.

    The following output plugins can take advantage of the TLS feature:

    The following input plugins can take advantage of the TLS feature:

    In addition, other plugins implement a subset of TLS support, meaning restricted configuration:

    Example: enable TLS on HTTP input

    By default the HTTP input plugin uses plain TCP. Enabling TLS from the command line can be done with:

    In the command line above, the two properties tls and tls.verify were enabled for demonstration purposes (we strongly suggest always keeping verification on).

    The same behavior can be accomplished using a configuration file:

    Example: enable TLS on HTTP output

    By default the HTTP output plugin uses plain TCP. Enabling TLS from the command line can be done with:

    In the command line above, the two properties tls and tls.verify were enabled for demonstration purposes (we strongly suggest always keeping verification on).

    The same behavior can be accomplished using a configuration file:

    Tips and Tricks

    Generate your own self-signed certificates for testing purposes.

    This will generate a 4096-bit RSA key pair and a certificate signed using SHA-256, with the expiration date set to 30 days in the future, test.host.net set as the common name, and, since we opted out of DES, the private key stored in plain text.

    Connect to virtual servers using TLS

    Fluent Bit supports TLS server name indication (SNI). If you are serving multiple hostnames on a single IP address (a.k.a. virtual hosting), you can make use of tls.vhost to connect to a specific hostname.

    $ kubectl create -f https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/fluent-bit-openshift-security-context-constraints.yaml
    helm repo add fluent https://fluent.github.io/helm-charts
    helm upgrade --install fluent-bit fluent/fluent-bit
    # CRI Parser
    [PARSER]
        # http://rubular.com/r/tjUt3Awgg4
        Name cri
        Format regex
        Regex ^(?<time>[^ ]+) (?<stream>stdout|stderr) (?<logtag>[^ ]*) (?<message>.*)$
        Time_Key    time
        Time_Format %Y-%m-%dT%H:%M:%S.%L%z
    [INPUT]
        Name tail
        Path /var/log/containers/*.log
        Parser cri
        Tag kube.*
        Mem_Buf_Limit 5MB
        Skip_Long_Lines On
    spec:
      containers:
      - name: fluent-bit
        image: my-repo/fluent-bit:1.8.4
        volumeMounts:
        - mountPath: C:\k
          name: k
        - mountPath: C:\var\log
          name: varlog
        - mountPath: C:\ProgramData
          name: progdata
      volumes:
      - name: k
        hostPath:
          path: C:\k
      - name: varlog
        hostPath:
          path: C:\var\log
      - name: progdata
        hostPath:
          path: C:\ProgramData
    fluent-bit.conf: |
        [SERVICE]
          Parsers_File      C:\\fluent-bit\\parsers.conf
    
        [INPUT]
          Name              tail
          Tag               kube.*
          Path              C:\\var\\log\\containers\\*.log
          Parser            docker
          DB                C:\\fluent-bit\\tail_docker.db
          Mem_Buf_Limit     7MB
          Refresh_Interval  10
    
        [INPUT]
          Name              tail
          Tag               kubelet.err
          Path              C:\\k\\kubelet.err.log
          DB                C:\\fluent-bit\\tail_kubelet.db
    
        [FILTER]
          Name              kubernetes
          Match             kube.*
          Kube_URL          https://kubernetes.default.svc.cluster.local:443
    
        [OUTPUT]
          Name  stdout
          Match *
    
    parsers.conf: |
        [PARSER]
            Name         docker
            Format       json
            Time_Key     time
            Time_Format  %Y-%m-%dT%H:%M:%S.%L
            Time_Keep    On
    [filter]
        Name kubernetes
        ...
        DNS_Retries 10
        DNS_Wait_Time 30
    $ fluent-bit -R /path/to/parsers.conf -i syslog -p path=/tmp/in_syslog -o stdout
    [SERVICE]
        Flush               1
        Log_Level           info
        Parsers_File        parsers.conf
    
    [INPUT]
        Name                syslog
        Path                /tmp/in_syslog
        Buffer_Chunk_Size   32000
        Buffer_Max_Size     64000
        Receive_Buffer_Size 512000
    
    [OUTPUT]
        Name   stdout
        Match  *
    $ logger -u /tmp/in_syslog my_ident my_message
    $ bin/fluent-bit -R ../conf/parsers.conf -i syslog -p path=/tmp/in_syslog -o stdout
    Fluent Bit v1.x.x
    * Copyright (C) 2019-2020 The Fluent Bit Authors
    * Copyright (C) 2015-2018 Treasure Data
    * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
    * https://fluentbit.io
    
    [2017/03/09 02:23:27] [ info] [engine] started
    [0] syslog.0: [1489047822, {"pri"=>"13", "host"=>"edsiper:", "ident"=>"my_ident", "pid"=>"", "message"=>"my_message"}]
    [SERVICE]
        Flush        1
        Parsers_File parsers.conf
    
    [INPUT]
        Name     syslog
        Parser   syslog-rfc3164
        Listen   0.0.0.0
        Port     5140
        Mode     tcp
    
    [OUTPUT]
        Name     stdout
        Match    *
    action(type="omfwd" Target="127.0.0.1" Port="5140" Protocol="tcp")
    $ sudo service rsyslog restart
    [SERVICE]
        Flush        1
        Parsers_File parsers.conf
    
    [INPUT]
        Name      syslog
        Parser    syslog-rfc3164
        Path      /tmp/fluent-bit.sock
        Mode      unix_udp
        Unix_Perm 0644
    
    [OUTPUT]
        Name      stdout
        Match     *
    $ModLoad omuxsock
    $OMUxSockSocket /tmp/fluent-bit.sock
    *.* :omuxsock:

    0644

    Parser

    Specify an alternative parser for the message. If Mode is set to tcp or udp, the default parser is syslog-rfc5424; otherwise, syslog-rfc3164-local is used. If your syslog messages have fractional seconds, set this Parser value to syslog-rfc5424 instead.

    Buffer_Chunk_Size

    By default, the buffer to store the incoming Syslog messages does not allocate the maximum memory allowed up front; instead, it allocates memory when required. The allocation increments are set by Buffer_Chunk_Size. If not set, Buffer_Chunk_Size is equal to 32000 bytes (32KB). Read the considerations below when using udp or unix_udp mode.

    Buffer_Max_Size

    Specify the maximum buffer size to receive a Syslog message. If not set, the default size will be the value of Buffer_Chunk_Size.

    Receive_Buffer_Size

    Specify the maximum socket receive buffer size. If not set, the default value is OS-dependent, but generally too low to accept thousands of syslog messages per second without loss on udp or unix_udp sockets. Note that on Linux the value is capped by sysctl net.core.rmem_max.


    Azure

  • BigQuery

  • Datadog

  • Elasticsearch

  • Forward

  • GELF

  • HTTP

  • InfluxDB

  • Kafka REST Proxy

  • Loki

  • Slack

  • Splunk

  • Stackdriver

  • Syslog

  • TCP & TLS

  • Treasure Data

    tls.verify

    force certificate validation

    On

    tls.debug

    Set TLS debug verbosity level. It accepts the following values: 0 (No debug), 1 (Error), 2 (State change), 3 (Informational), and 4 (Verbose).

    1

    tls.ca_file

    absolute path to CA certificate file

    tls.ca_path

    absolute path to scan for certificate files

    tls.crt_file

    absolute path to Certificate file

    tls.key_file

    absolute path to private Key file

    tls.key_passwd

    optional password for tls.key_file file

    tls.vhost

    hostname to be used for TLS SNI extension

    Amazon CloudWatch
    Amazon Kinesis Data Firehose
    Amazon Kinesis Data Streams
    Amazon S3
    MQTT
    TCP
    HTTP
    OpenTelemetry
    Kubernetes Filter
    ./bin/fluent-bit -i http \
               -p port=9999 \
               -p tls=on \
               -p tls.verify=off \
               -p tls.crt_file=self_signed.crt \
               -p tls.key_file=self_signed.key \
               -o stdout \
               -m '*'
    [INPUT]
        name http
        port 9999
        tls on
        tls.verify off
        tls.crt_file self_signed.crt
        tls.key_file self_signed.key
    
    [OUTPUT]
        Name       stdout
        Match      *
    $ fluent-bit -i cpu -t cpu -o http://192.168.2.3:80/something \
        -p tls=on         \
        -p tls.verify=off \
        -m '*'
    [INPUT]
        Name  cpu
        Tag   cpu
    
    [OUTPUT]
        Name       http
        Match      *
        Host       192.168.2.3
        Port       80
        URI        /something
        tls        On
        tls.verify Off
    openssl req -x509 \
                -newkey rsa:4096 \
                -sha256 \
                -nodes \
                -keyout self_signed.key \
                -out self_signed.crt \
                -subj "/CN=test.host.net"
    [INPUT]
        Name  cpu
        Tag   cpu
    
    [OUTPUT]
        Name        forward
        Match       *
        Host        192.168.10.100
        Port        24224
        tls         On
        tls.verify  On
        tls.ca_file /etc/certs/fluent.crt
        tls.vhost   fluent.example.com

    dns.mode

    Set the primary transport layer protocol used by the asynchronous DNS resolver, which can be overridden on a per-plugin basis.

    UDP

    log_file

    Absolute path for an optional log file. By default all logs are redirected to the standard error interface (stderr).

    log_level

    Set the logging verbosity level. Allowed values are: off, error, warn, info, debug, and trace. Values are cumulative, e.g.: if debug is set, it will include error, warning, info, and debug. Note that trace mode is only available if Fluent Bit was built with the WITH_TRACE option enabled.

    info

    parsers_file

    Path for a parsers configuration file. Multiple Parsers_File entries can be defined within the section.

    plugins_file

    Path for a plugins configuration file. A plugins configuration file allows the definition of paths for external plugins.

    streams_file

    Path for the Stream Processor configuration file. See the Stream Processing section to learn more about its configuration.

    http_server

    Enable built-in HTTP Server

    Off

    http_listen

    Set listening interface for HTTP Server when it's enabled

    0.0.0.0

    http_port

    Set TCP Port for the HTTP Server

    2020

    coro_stack_size

    Set the coroutines stack size in bytes. The value must be greater than the page size of the running system. Don't set it too small (say, 4096), or coroutine threads can overrun the stack buffer. Do not change the default value of this parameter unless you know what you are doing.

    24576

    scheduler.cap

    Set the maximum retry time in seconds. The property is supported from v1.8.7.

    2000

    scheduler.base

    Set the base of exponential backoff. The property is supported from v1.8.7.

    5

    json.convert_nan_to_null

    If enabled, NaN is converted to null when fluent-bit converts msgpack to json.

    false

    flush

    Set the flush time in seconds.nanoseconds. The engine loop uses a Flush timeout to define when it is required to flush the records ingested by input plugins through the defined output plugins.

    5

    grace

    Set the grace time in seconds as an integer value. The engine loop uses a Grace timeout to define the wait time on exit.

    5

    daemon

    Boolean value to set if Fluent Bit should run as a Daemon (background) or not. Allowed values are: yes, no, on, and off. Note: if you are using a systemd-based unit like the one we provide in our packages, do not turn on this option.

    Off

    Name

    Name of the input plugin.

    Tag

    Tag name associated with all records coming from this plugin.

    Log_Level

    Set the plugin's logging verbosity level. Allowed values are: off, error, warn, info, debug and trace. Defaults to the SERVICE section's Log_Level.

    Name

    Name of the filter plugin.

    Match

    A pattern to match against the tags of incoming records. It's case sensitive and supports the star (*) character as a wildcard.

    Match_Regex

    A regular expression to match against the tags of incoming records. Use this option if you want to use the full regex syntax.

    Log_Level

    Set the plugin's logging verbosity level. Allowed values are: off, error, warn, info, debug and trace. Defaults to the SERVICE section's Log_Level.

    Name

    Name of the output plugin.

    Match

    A pattern to match against the tags of incoming records. It's case sensitive and supports the star (*) character as a wildcard.

    Match_Regex

    A regular expression to match against the tags of incoming records. Use this option if you want to use the full regex syntax.

    Log_Level

    Set the plugin's logging verbosity level. Allowed values are: off, error, warn, info, debug and trace. Defaults to the SERVICE section's Log_Level.


    storage.max_chunks_up

    If the input plugin has enabled filesystem storage type, this property sets the maximum number of Chunks that can be up in memory. This is the setting to use to control memory usage when you enable storage.type filesystem.

    128

    storage.backlog.mem_limit

    If storage.path is set, Fluent Bit will look for data chunks that were not delivered and are still in the storage layer; these are called backlog data. Backlog chunks are filesystem chunks that were left over from a previous Fluent Bit run; chunks that could not be sent before exit that Fluent Bit will pick up when restarted. Fluent Bit will check the storage.backlog.mem_limit value against the current memory usage from all up chunks for the input. If the up chunks currently consume less memory than the limit, it will bring the backlog chunks up into memory so they can be sent by outputs.

    5M

    storage.metrics

    If the http_server option has been enabled in the main [SERVICE] section, this option registers a new endpoint where internal metrics of the storage layer can be consumed. For more details refer to the Monitoring section.

    off

    storage.path

    Set an optional location in the file system to store streams and chunks of data. If this parameter is not set, Input plugins can only use in-memory buffering.

    storage.sync

    Configure the synchronization mode used to store the data in the file system. It can take the values normal or full. Using full increases the reliability of the filesystem buffer and ensures that data is synced to the filesystem even if Fluent Bit crashes. On Linux, full corresponds to the MAP_SYNC option for memory mapped files.

    normal

    storage.checksum

    Enable the data integrity check when writing and reading data from the filesystem. The storage layer uses the CRC32 algorithm.

    Off

    storage.type

    Specifies the buffering mechanism to use. It can be memory or filesystem.

    memory

    storage.pause_on_chunks_overlimit

    Specifies if the input plugin should be paused (stop ingesting new data) when the storage.max_chunks_up value is reached.

    off

    storage.total_limit_size

    Limit the maximum number of Chunks in the filesystem for the current output logical destination.


    Windows

    Fluent Bit is distributed as the fluent-bit package for Windows and as a Windows container on Docker Hub. Fluent Bit has two flavours of Windows installers: a ZIP archive (for quick testing) and an EXE installer (for system installation).

    Configuration

    Make sure to provide a valid Windows configuration with the installation; a sample one is shown below:

    Multiline Parsing

    In an ideal world, applications would log their messages within a single line, but in reality applications generate multiple log messages that sometimes belong to the same context. When it is time to process such information, it gets really complex. Consider application stack traces, which always have multiple log lines.

    Starting from Fluent Bit v1.8, we have implemented a unified Multiline core functionality to solve all the user corner cases. In this section, you will learn about the features and configuration options available.

    Concepts

    The Multiline parser engine exposes two ways to configure and use the functionality:

    Configuration File

    This page describes the yaml configuration file used by Fluent Bit

    One of the ways to configure Fluent Bit is using a YAML configuration file that works at a global scope.

    The yaml configuration file supports the following sections:

    • Env

    • Service

    [SERVICE]
        Flush           5
        Daemon          off
        Log_Level       debug
    [INPUT]
        Name cpu
        Tag  my_cpu
    [FILTER]
        Name  grep
        Match *
        Regex log aa
    [OUTPUT]
        Name  stdout
        Match my*cpu
    [SERVICE]
        Flush     5
        Daemon    off
        Log_Level debug
    
    [INPUT]
        Name  cpu
        Tag   my_cpu
    
    [OUTPUT]
        Name  stdout
        Match my*cpu
    @INCLUDE somefile.conf
    @INCLUDE input_*.conf
    [input] tail.1 paused (mem buf overlimit)
    [input] tail.1 resume (mem buf overlimit)
    [INPUT]
        Name          tcp
        Listen        0.0.0.0
        Port          5170
        Format        none
        Tag           tcp-logs
        Mem_Buf_Limit 50MB
    [input] tcp.1 paused (mem buf overlimit)
    [input] tail.1 paused (storage buf overlimit)
    [input] tail.1 resume (storage buf overlimit)
    [SERVICE]
        flush                     1
        log_level                 info
        storage.path              /var/log/flb-storage/
        storage.sync              normal
        storage.checksum          off
        storage.backlog.mem_limit 5M
    [SERVICE]
        flush                     1
        log_level                 info
        storage.path              /var/log/flb-storage/
        storage.sync              normal
        storage.checksum          off
        storage.max_chunks_up     128
        storage.backlog.mem_limit 5M
    
    [INPUT]
        name          cpu
        storage.type  filesystem
    
    [INPUT]
        name          mem
        storage.type  memory
    [SERVICE]
        flush                     1
        log_level                 info
        storage.path              /var/log/flb-storage/
        storage.sync              normal
        storage.checksum          off
        storage.max_chunks_up     128
        storage.backlog.mem_limit 5M
    
    [INPUT]
        name                      cpu
        storage.type              filesystem 
    
    [OUTPUT]
        name                      stackdriver
        match                     *
        storage.total_limit_size  5M
    Migration to Fluent Bit

    From version 1.9, td-agent-bit is a deprecated package and was removed after 1.9.9. The correct package name to use now is fluent-bit.

    Installation Packages

    The latest stable version is 2.0.9; each version is available on the GitHub releases page as well as at https://releases.fluentbit.io/<Major Version>/fluent-bit-<Full Version>-win[32|64].[exe|zip]:

    INSTALLERS
    SHA256 CHECKSUMS

    To check the integrity, use the Get-FileHash cmdlet in PowerShell.

    Installing from ZIP archive

    Download a ZIP archive from above. There are installers for 32-bit and 64-bit environments, so choose one suitable for your environment.

    Then you need to expand the ZIP archive. You can do this by clicking "Extract All" in Explorer or, if you're using PowerShell, with the Expand-Archive cmdlet.

    The ZIP package contains the following set of files.

    Now, launch cmd.exe or PowerShell on your machine, and execute fluent-bit.exe as follows.

    If you see the following output, it's working fine!

    To halt the process, press CTRL-C in the terminal.

    Installing from EXE installer

    Download an EXE installer from the download page. It has both 32-bit and 64-bit builds. Choose one which is suitable for you.

    Double-click the EXE installer you've downloaded. The installation wizard will automatically start.

    Installation wizard screenshot

    Click Next and proceed. By default, Fluent Bit is installed into C:\Program Files\fluent-bit\, so you should be able to launch fluent-bit as follows after installation.

    Installer options

    The Windows installer is built by CPack using NSIS (https://cmake.org/cmake/help/latest/cpack_gen/nsis.html) and so supports the default options that all NSIS installers do for silent installation and for choosing the installation directory.

    To silently install to C:\fluent-bit directory here is an example:

    The automatically provided uninstaller also supports a silent uninstall using the same /S flag. This may be useful for provisioning with automation like Ansible, Puppet, etc.

    Windows Service Support

    Windows services are equivalent to "daemons" in UNIX (i.e. long-running background processes). Since v1.5.0, Fluent Bit has native support for Windows services.

    Suppose you have the following installation layout:

    To register Fluent Bit as a Windows service, you need to execute the following command on Command Prompt. Please be careful that a single space is required after binpath=.

    Now Fluent Bit can be started and managed as a normal Windows service.

    To halt the Fluent Bit service, just execute the "stop" command.

    To start Fluent Bit automatically on boot, execute the following:

    [FAQ] Fluent Bit fails to start up when installed under C:\Program Files

    Quotation marks are required if file paths contain spaces. Here is an example:

    [FAQ] How can I manage Fluent Bit service via PowerShell?

    Instead of sc.exe, PowerShell can be used to manage Windows services.

    Create a Fluent Bit service:

    Start the service:

    Query the service status:

    Stop the service:

    Remove the service (requires PowerShell 6.0 or later)

    Compile from Source

    If you need to create a custom executable, you can use the following procedure to compile Fluent Bit by yourself.

    Preparation

    First, you need Microsoft Visual C++ to compile Fluent Bit. You can install the minimum toolkit with the following command:

    When asked which packages to install, choose "C++ Build Tools" (make sure that "C++ CMake tools for Windows" is selected too) and wait until the process finishes.

    You also need to install flex and bison. One way to install them on Windows is to use winflexbison.

    Add the path C:\WinFlexBison to your system's "Path" environment variable.

    You also need to install Git to pull the source code from the repository.

    Compilation

    Open the start menu on Windows and type "Developer Command Prompt".

    Clone the source code of Fluent Bit.

    Compile the source code.

    Now you should be able to run Fluent Bit:

    Packaging

    To create a ZIP package, call cpack as follows:


  • Built-in multiline parser

  • Configurable multiline parser

    Built-in Multiline Parsers

    Without any extra configuration, Fluent Bit exposes certain pre-configured (built-in) parsers to solve specific multiline parser cases, e.g.:

    Parser
    Description

    docker

    Process a log entry generated by a Docker container engine. This parser supports the concatenation of log entries split by Docker.

    cri

    Process a log entry generated by the CRI-O container engine. Like the docker parser, it supports concatenation of log entries.

    go

    Process log entries generated by a Go based language application and perform concatenation if multiline messages are detected.

    python

    Process log entries generated by a Python based language application and perform concatenation if multiline messages are detected.

    java

    Process log entries generated by a Google Cloud Java language application and perform concatenation if multiline messages are detected.

    Configurable Multiline Parsers

    Besides the built-in parsers listed above, it is possible to define your own Multiline parsers, with their own rules, through the configuration files.

    A multiline parser is defined in a parsers configuration file by using a [MULTILINE_PARSER] section definition. The Multiline parser must have a unique name and a type plus other configured properties associated with each type.

    To understand which Multiline parser type is required for your use case, you have to know beforehand what conditions in the content determine the beginning of a multiline message and the continuation of subsequent lines. We provide a regex-based configuration that supports states to handle cases from the simplest to the most difficult.

    Property
    Description
    Default

    name

    Specify a unique name for the Multiline Parser definition. A good practice is to prefix the name with the word multiline_ to avoid confusion with normal parser definitions.

    type

    Set the multiline mode. For now, we only support the type regex.

    parser

    Name of a pre-defined parser that must be applied to the incoming content before applying the regex rule. If no parser is defined, it's assumed that the content is raw text and not a structured message.

    Note: when a parser is applied to a raw text, then the regex is applied against a specific key of the structured message by using the key_content configuration property (see below).

    Lines and States

    Before you start configuring your parser, you need to know the answers to the following questions:

    1. What is the regular expression (regex) that matches the first line of a multiline message?

    2. What are the regular expressions (regex) that match the continuation lines of a multiline message?

    When matching with regex, we have to define states: some states define the start of a multiline message while others are states for the continuation of multiline messages. You can have multiple continuation state definitions to solve complex cases.

    The state of the first regex, the one that matches the start of a multiline message, is called start_state; the regexes for continuation lines can have different state names.

    Rules Definition

    A rule specifies how to match a multiline pattern and perform the concatenation. A rule is defined by 3 specific components:

    1. state name

    2. regular expression pattern

    3. next state

    A rule might be defined as follows (comments added to simplify the definition):

    In the example above we have defined two rules, each one with its own state name, regex pattern, and next state name. Every field that composes a rule must be inside double quotes.

    The state name of the first rule must always be start_state and its regex pattern must match the first line of a multiline message; a next state must also be set to specify what the possible continuation lines would look like.

    To simplify the configuration of regular expressions, you can use the Rubular web site. We have posted an example by using the regex described above plus a log line that matches the pattern: https://rubular.com/r/NDuyKwlTGOvq2g

    Configuration Example

    The following example provides a full Fluent Bit configuration file for multiline parsing by using the definition explained above.

    The following example files can be located at: https://github.com/fluent/fluent-bit/tree/master/documentation/examples/multiline/regex-001

    Example files content:

    This is the primary Fluent Bit configuration file. It includes the parsers_multiline.conf and tails the file test.log, applying the multiline parser multiline-regex-test. Then it sends the processed records to the standard output.

    This second file defines a multiline parser for the example.

    An example file with multiline content:

    By running Fluent Bit with the given configuration file you will obtain:

    The lines that did not match a pattern are not considered part of the multiline message, while the ones that matched the rules were concatenated properly.

    Limitations

    The multiline parser is a very powerful feature, but it has some limitations that you should be aware of:

    • The multiline parser is not affected by the buffer_max_size configuration option, allowing the composed log record to grow beyond this size. Hence, the skip_long_lines option will not be applied to multiline messages.

    • It is not possible to get the time key from the body of the multiline message. However, it can be extracted and set as a new key by using a filter.

    Get structured data from multiline message

    Fluent Bit supports the /pat/m regex option, which allows . to match a new line. It is useful for parsing multiline logs.

    The following example gets the date and message from a concatenated log.

    Example files content:

    This is the primary Fluent Bit configuration file. It includes the parsers_multiline.conf and tails the file test.log, applying the multiline parser multiline-regex-test. It also parses the concatenated log by applying the parser named-capture-test. Then it sends the processed records to the standard output.

    This second file defines a multiline parser for the example.

    An example file with multiline content:

    By running Fluent Bit with the given configuration file you will obtain:

    Pipeline
    • Inputs

    • Filters

    • Outputs

    YAML configuration is used in the smoke tests for containers, so an always-correct, up-to-date example is available here: https://github.com/fluent/fluent-bit/blob/master/packaging/testing/smoke/container/fluent-bit.yaml.

    Env

    The env section allows you to configure variables that will be used later in this configuration file.

    Example:

    Service

    The service section defines global properties of the service, the keys available as of this version are described in the following table:

    Key
    Description
    Default Value

    flush

    Set the flush time in seconds.nanoseconds. The engine loop uses the Flush timeout to define when it is required to flush the records ingested by input plugins through the defined output plugins.

    5

    grace

    Set the grace time in seconds as an integer value. The engine loop uses the Grace timeout to define the wait time on exit.

    5

    daemon

    Boolean value to set whether Fluent Bit should run as a daemon (background) or not. Allowed values are: yes, no, on and off. Note: if you are using a systemd-based unit such as the one we provide in our packages, do not turn on this option.

    Off

    The following is an example of a service section:

    For scheduler and retry details, please check scheduling and retries.
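    As an illustrative sketch (not from the official examples; the values shown are the defaults), a service section that tunes the retry backoff with the scheduler keys could look like this:

    service:
        flush: 5
        scheduler.cap: 2000
        scheduler.base: 5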

    Pipeline

    A pipeline section will define a complete pipeline configuration, including inputs, filters and outputs subsections.

    Each of the subsections for inputs, filters and outputs is an array of maps holding the parameters for each plugin.

    As an example, this pipeline consists of two inputs: a tail plugin and an http server plugin. Each plugin has its own map in the array of inputs, consisting of simple properties. To use more advanced properties that consist of multiple values, the property itself can be defined using an array, i.e. the record and allowlist_key properties for the record_modifier filter:

    In cases where each entry in a list requires two values, they must be separated by a space, as in the record property for the record_modifier filter.

    Input

    An input section defines a source (related to an input plugin). Here we describe the base configuration for each input section. Note that each input plugin may add its own configuration keys:

    Key
    Description

    Name

    Name of the input plugin. Defined as subsection of the inputs section.

    Tag

    Tag name associated with all records coming from this plugin.

    Log_Level

    Set the plugin's logging verbosity level. Allowed values are: off, error, warn, info, debug and trace. Defaults to the SERVICE section's Log_Level.

    The Name is mandatory and it lets Fluent Bit know which input plugin should be loaded. The Tag is mandatory for all plugins except for the input forward plugin (as it provides dynamic tags).

    Example input

    The following is an example of an input section for the cpu plugin.

    Filter

    A filter section defines a filter (related to a filter plugin). Here we will describe the base configuration for each filter section. Note that each filter plugin may add its own configuration keys:

    Key
    Description

    Name

    Name of the filter plugin. Defined as a subsection of the filters section.

    Match

    A pattern to match against the tags of incoming records. It's case-sensitive and supports the star (*) character as a wildcard.

    Match_Regex

    A regular expression to match against the tags of incoming records. Use this option if you want to use the full regex syntax.

    Log_Level

    Set the plugin's logging verbosity level. Allowed values are: off, error, warn, info, debug and trace. Defaults to the SERVICE section's Log_Level.

    The Name is mandatory and it lets Fluent Bit know which filter plugin should be loaded. The Match or Match_Regex is mandatory for all plugins. If both are specified, Match_Regex takes precedence.

    Example filter

    The following is an example of a filter section for the grep plugin:

    Output

    The outputs section specifies a destination that certain records should follow after a Tag match. Currently, Fluent Bit can route up to 256 OUTPUT plugins. The configuration supports the following keys:

    Key
    Description

    Name

    Name of the output plugin. Defined as a subsection of the outputs section.

    Match

    A pattern to match against the tags of incoming records. It's case-sensitive and supports the star (*) character as a wildcard.

    Match_Regex

    A regular expression to match against the tags of incoming records. Use this option if you want to use the full regex syntax.

    Log_Level

    Set the plugin's logging verbosity level. Allowed values are: off, error, warn, info, debug and trace. The output log level defaults to the SERVICE section's Log_Level.

    Example output

    The following is an example of an output section:

    Example: collecting CPU metrics

    The following configuration file example demonstrates how to collect CPU metrics and flush the results every five seconds to the standard output:

    [SERVICE]
        # Flush
        # =====
        # set an interval of seconds before to flush records to a destination
        flush        5
    
        # Daemon
        # ======
        # instruct Fluent Bit to run in foreground or background mode.
        daemon       Off
    
        # Log_Level
        # =========
        # Set the verbosity level of the service, values can be:
        #
        # - error
        # - warning
        # - info
        # - debug
        # - trace
        #
        # by default 'info' is set, that means it includes 'error' and 'warning'.
        log_level    info
    
        # Parsers File
        # ============
        # specify an optional 'Parsers' configuration file
        parsers_file parsers.conf
    
        # Plugins File
        # ============
        # specify an optional 'Plugins' configuration file to load external plugins.
        plugins_file plugins.conf
    
        # HTTP Server
        # ===========
        # Enable/Disable the built-in HTTP Server for metrics
        http_server  Off
        http_listen  0.0.0.0
        http_port    2020
    
        # Storage
        # =======
        # Fluent Bit can use memory and filesystem buffering based mechanisms
        #
        # - https://docs.fluentbit.io/manual/administration/buffering-and-storage
        #
        # storage metrics
        # ---------------
        # publish storage pipeline metrics in '/api/v1/storage'. The metrics are
        # exported only if the 'http_server' option is enabled.
        #
        storage.metrics on
    
    [INPUT]
        Name         winlog
        Channels     Setup,Windows PowerShell
        Interval_Sec 1
    
    [OUTPUT]
        name  stdout
        match *
    PS> Get-FileHash fluent-bit-2.0.9-win32.exe
    PS> Expand-Archive fluent-bit-2.0.9-win64.zip
    fluent-bit
    ├── bin
    │   ├── fluent-bit.dll
    │   └── fluent-bit.exe
    │   └── fluent-bit.pdb
    ├── conf
    │   ├── fluent-bit.conf
    │   ├── parsers.conf
    │   └── plugins.conf
    └── include
        │   ├── flb_api.h
        │   ├── ...
        │   └── flb_worker.h
        └── fluent-bit.h
    PS> .\bin\fluent-bit.exe -i dummy -o stdout
    PS> .\bin\fluent-bit.exe  -i dummy -o stdout
    Fluent Bit v2.0.x
    * Copyright (C) 2019-2020 The Fluent Bit Authors
    * Copyright (C) 2015-2018 Treasure Data
    * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
    * https://fluentbit.io
    
    [2019/06/28 10:13:04] [ info] [storage] initializing...
    [2019/06/28 10:13:04] [ info] [storage] in-memory
    [2019/06/28 10:13:04] [ info] [storage] normal synchronization mode, checksum disabled, max_chunks_up=128
    [2019/06/28 10:13:04] [ info] [engine] started (pid=10324)
    [2019/06/28 10:13:04] [ info] [sp] stream processor started
    [0] dummy.0: [1561684385.443823800, {"message"=>"dummy"}]
    [1] dummy.0: [1561684386.428399000, {"message"=>"dummy"}]
    [2] dummy.0: [1561684387.443641900, {"message"=>"dummy"}]
    [3] dummy.0: [1561684388.441405800, {"message"=>"dummy"}]
    PS> C:\Program Files\fluent-bit\bin\fluent-bit.exe -i dummy -o stdout
    PS> <installer exe> /S /D=C:\fluent-bit
    C:\fluent-bit\
    ├── conf
    │   ├── fluent-bit.conf
    │   └── parsers.conf
    │   └── plugins.conf
    └── bin
        ├── fluent-bit.dll
        └── fluent-bit.exe
        └── fluent-bit.pdb
    % sc.exe create fluent-bit binpath= "\fluent-bit\bin\fluent-bit.exe -c \fluent-bit\conf\fluent-bit.conf"
    % sc.exe start fluent-bit
    % sc.exe query fluent-bit
    SERVICE_NAME: fluent-bit
        TYPE               : 10  WIN32_OWN_PROCESS
        STATE              : 4 Running
        ...
    % sc.exe stop fluent-bit
    % sc.exe config fluent-bit start= auto
    % sc.exe create fluent-bit binpath= "\"C:\Program Files\fluent-bit\bin\fluent-bit.exe\" -c \"C:\Program Files\fluent-bit\conf\fluent-bit.conf\""
    PS> New-Service fluent-bit -BinaryPathName "C:\fluent-bit\bin\fluent-bit.exe -c C:\fluent-bit\conf\fluent-bit.conf" -StartupType Automatic
    PS> Start-Service fluent-bit
    PS> Get-Service fluent-bit | Format-List
    Name                : fluent-bit
    DisplayName         : fluent-bit
    Status              : Running
    DependentServices   : {}
    ServicesDependedOn  : {}
    CanPauseAndContinue : False
    CanShutdown         : False
    CanStop             : True
    ServiceType         : Win32OwnProcess
    PS> Stop-Service fluent-bit
    PS> Remove-Service fluent-bit
    PS> wget -o vs.exe https://aka.ms/vs/16/release/vs_buildtools.exe
    PS> start vs.exe
    PS> wget -o winflexbison.zip https://github.com/lexxmark/winflexbison/releases/download/v2.5.22/win_flex_bison-2.5.22.zip
    PS> Expand-Archive winflexbison.zip -Destination C:\WinFlexBison
    PS> cp -Path C:\WinFlexBison\win_bison.exe C:\WinFlexBison\bison.exe
    PS> cp -Path C:\WinFlexBison\win_flex.exe C:\WinFlexBison\flex.exe
    PS> wget -o git.exe https://github.com/git-for-windows/git/releases/download/v2.28.0.windows.1/Git-2.28.0-64-bit.exe
    PS> start git.exe
    % git clone https://github.com/fluent/fluent-bit
    % cd fluent-bit/build
    % cmake .. -G "NMake Makefiles"
    % cmake --build .
    % .\bin\debug\fluent-bit.exe -i dummy -o stdout
    % cpack -G ZIP
    [SERVICE]
        flush        1
        log_level    info
        parsers_file parsers_multiline.conf
    
    [INPUT]
        name             tail
        path             test.log
        read_from_head   true
        multiline.parser multiline-regex-test
    
    [OUTPUT]
        name             stdout
        match            *
    [MULTILINE_PARSER]
        name          multiline-regex-test
        type          regex
        flush_timeout 1000
        #
        # Regex rules for multiline parsing
        # ---------------------------------
        #
        # configuration hints:
        #
        #  - first state always has the name: start_state
        #  - every field in the rule must be inside double quotes
        #
        # rules |   state name  | regex pattern                  | next state
        # ------|---------------|--------------------------------------------
        rule      "start_state"   "/([a-zA-Z]+ \d+ \d+\:\d+\:\d+)(.*)/"  "cont"
        rule      "cont"          "/^\s+at.*/"                     "cont"
    single line...
    Dec 14 06:41:08 Exception in thread "main" java.lang.RuntimeException: Something has gone wrong, aborting!
        at com.myproject.module.MyProject.badMethod(MyProject.java:22)
        at com.myproject.module.MyProject.oneMoreMethod(MyProject.java:18)
        at com.myproject.module.MyProject.anotherMethod(MyProject.java:14)
        at com.myproject.module.MyProject.someMethod(MyProject.java:10)
        at com.myproject.module.MyProject.main(MyProject.java:6)
    another line...
    
    [SERVICE]
        flush        1
        log_level    info
        parsers_file parsers_multiline.conf
    
    [INPUT]
        name             tail
        path             test.log
        read_from_head   true
        multiline.parser multiline-regex-test
    
    [FILTER]
        name             parser
        match            *
        key_name         log
        parser           named-capture-test
    
    [OUTPUT]
        name             stdout
        match            *
    [MULTILINE_PARSER]
        name          multiline-regex-test
        type          regex
        flush_timeout 1000
        #
        # Regex rules for multiline parsing
        # ---------------------------------
        #
        # configuration hints:
        #
        #  - first state always has the name: start_state
        #  - every field in the rule must be inside double quotes
        #
        # rules |   state name  | regex pattern                  | next state
        # ------|---------------|--------------------------------------------
        rule      "start_state"   "/([a-zA-Z]+ \d+ \d+\:\d+\:\d+)(.*)/"  "cont"
        rule      "cont"          "/^\s+at.*/"                     "cont"
    
    [PARSER]
        Name named-capture-test
        Format regex
        Regex /^(?<date>[a-zA-Z]+ \d+ \d+\:\d+\:\d+) (?<message>.*)/m
    single line...
    Dec 14 06:41:08 Exception in thread "main" java.lang.RuntimeException: Something has gone wrong, aborting!
        at com.myproject.module.MyProject.badMethod(MyProject.java:22)
        at com.myproject.module.MyProject.oneMoreMethod(MyProject.java:18)
        at com.myproject.module.MyProject.anotherMethod(MyProject.java:14)
        at com.myproject.module.MyProject.someMethod(MyProject.java:10)
        at com.myproject.module.MyProject.main(MyProject.java:6)
    another line...
    
    # rules   |   state name   | regex pattern                   | next state
    # --------|----------------|---------------------------------------------
    rule         "start_state"   "/([a-zA-Z]+ \d+ \d+\:\d+\:\d+)(.*)/"   "cont"
    rule         "cont"          "/^\s+at.*/"                      "cont"
    $ fluent-bit -c fluent-bit.conf 
    
    [0] tail.0: [0.000000000, {"log"=>"single line...
    "}]
    [1] tail.0: [1626634867.472226330, {"log"=>"Dec 14 06:41:08 Exception in thread "main" java.lang.RuntimeException: Something has gone wrong, aborting!
        at com.myproject.module.MyProject.badMethod(MyProject.java:22)
        at com.myproject.module.MyProject.oneMoreMethod(MyProject.java:18)
        at com.myproject.module.MyProject.anotherMethod(MyProject.java:14)
        at com.myproject.module.MyProject.someMethod(MyProject.java:10)
        at com.myproject.module.MyProject.main(MyProject.java:6)
    "}]
    [2] tail.0: [1626634867.472226330, {"log"=>"another line...
    "}]
    
    $ fluent-bit -c fluent-bit.conf
    
    [0] tail.0: [1669160706.737650473, {"log"=>"single line...
    "}]
    [1] tail.0: [1669160706.737657687, {"date"=>"Dec 14 06:41:08", "message"=>"Exception in thread "main" java.lang.RuntimeException: Something has gone wrong, aborting!
        at com.myproject.module.MyProject.badMethod(MyProject.java:22)
        at com.myproject.module.MyProject.oneMoreMethod(MyProject.java:18)
        at com.myproject.module.MyProject.anotherMethod(MyProject.java:14)
        at com.myproject.module.MyProject.someMethod(MyProject.java:10)
        at com.myproject.module.MyProject.main(MyProject.java:6)
    "}]
    [2] tail.0: [1669160706.737657687, {"log"=>"another line...
    "}]
    # setting up a local environment variable
    env:
        flush_interval: 1
    
    # service configuration
    service:
        flush:       ${flush_interval}
        log_level:   info
        http_server: on
    service:
        flush: 5
        daemon: off
        log_level: debug
    pipeline:
        inputs:
            ...
        filters:
            ...
        outputs:
            ...
    pipeline:
        inputs:
            - name: tail
              tag: syslog
              path: /var/log/syslog
            - name: http
              tag: http_server
              port: 8080
    pipeline:
        inputs:
            - name: tail
              tag: syslog
              path: /var/log/syslog
        filters:
            - name: record_modifier
              match: syslog
              record:
                  - powered_by calyptia
            - name: record_modifier
              match: syslog
              allowlist_key:
                  - powered_by
                  - message
    pipeline:
        inputs:
            - name: cpu
              tag: my_cpu
    pipeline:
        filters:
            - name: grep
              match: '*'
              regex: log aa
    pipeline:
        outputs:
            - name: stdout
              match: 'my*cpu'
    service:
        flush: 5
        daemon: off
        log_level: debug
    
    pipeline:
        inputs:
            - name: cpu
              tag: my_cpu
        outputs:
            - name: stdout
              match: 'my*cpu'

    key_content

    For an incoming structured message, specify the key that contains the data that should be processed by the regular expression and possibly concatenated.

    flush_timeout

    Timeout in milliseconds to flush a non-terminated multiline buffer. Default is set to 5 seconds.

    5s

    rule

    Configure a rule to match a multiline pattern. The rule has a specific format described below. Multiple rules can be defined.
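    As a sketch of how these properties combine (the parser name, key and regex patterns below are illustrative, not from the official examples), a multiline parser that first applies a pre-defined parser and then matches a specific key could look like this:

    [MULTILINE_PARSER]
        name          multiline_illustrative
        type          regex
        parser        docker
        key_content   log
        flush_timeout 1000
        # rules |  state name   | regex pattern  | next state
        rule      "start_state"   "/^Traceback/"   "cont"
        rule      "cont"          "/^\s+.*/"       "cont"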

    dns.mode

    Set the primary transport layer protocol used by the asynchronous DNS resolver, which can be overridden on a per-plugin basis.

    UDP

    log_file

    Absolute path for an optional log file. By default all logs are redirected to the standard error interface (stderr).

    log_level

    Set the logging verbosity level. Allowed values are: off, error, warn, info, debug and trace. Values are accumulative, e.g. if 'debug' is set, it will include error, warning, info and debug. Note that trace mode is only available if Fluent Bit was built with the WITH_TRACE option enabled.

    info

    parsers_file

    Path for a parsers configuration file. Multiple Parsers_File entries can be defined within the section.

    plugins_file

    Path for a plugins configuration file. A plugins configuration file allows to define paths for external plugins, for an example see here.

    streams_file

    Path for the Stream Processor configuration file. To learn more about Stream Processing configuration go here.

    http_server

    Enable built-in HTTP Server

    Off

    http_listen

    Set listening interface for HTTP Server when it's enabled

    0.0.0.0

    http_port

    Set TCP Port for the HTTP Server

    2020

    coro_stack_size

    Set the coroutines stack size in bytes. The value must be greater than the page size of the running system. Don't set it to too small a value (say 4096), or coroutine threads can overrun the stack buffer. Do not change the default value of this parameter unless you know what you are doing.

    24576

    scheduler.cap

    Set a maximum retry time in seconds. This property is supported from v1.8.7.

    2000

    scheduler.base

    Set the base of exponential backoff. This property is supported from v1.8.7.

    5

    json.convert_nan_to_null

    If enabled, NaN is converted to null when fluent-bit converts msgpack to json.

    false

    fluent-bit-2.0.9-win32.exe
    a6c1a74acc00ce6211694f4f0a037b1b6ce3ab8dd4e6d857ea7d0d4cbadec682
    fluent-bit-2.0.9-win32.zip
    8c0935a89337d073d4eae3440c65f55781bc097cdefa8819d2475db6c1befc9c
    fluent-bit-2.0.9-win64.exe
    7970350f5bd0212be7d87ad51046a6d1600f3516c6209cd69af6d95759d280df
    fluent-bit-2.0.9-win64.zip
    94750cf1faf6f5594047f70c585577ee38d8cdd4d6e098eefb3e665c98c3709f

    Docker

    Fluent Bit container images are available on Docker Hub, ready for production usage. The currently available images can be deployed on multiple architectures.

    Quick Start

    Get started by simply typing the following command:

    Tags and Versions

    The following table describes the Linux container tags that are available on Docker Hub fluent/fluent-bit repository:

    Tag(s)
    Manifest Architectures
    Description

    2.0.9

    x86_64, arm64v8, arm32v7

    Release v2.0.9

    2.0.9-debug

    x86_64, arm64v8, arm32v7

    v2.0.x releases (production + debug)

    2.0.8

    x86_64, arm64v8, arm32v7

    Release v2.0.8

    It is strongly suggested that you always use the latest image of Fluent Bit.

    Windows container images are provided from v2.0.6 for Windows Server 2019 and Windows Server 2022. These can be found as tags on the same Docker Hub registry above.

    Multi Architecture Images

    Our production stable images are based on Distroless, focusing on security: they contain just the Fluent Bit binary, minimal system libraries, and basic configuration. We also provide debug images for all architectures (from 1.9.0+) which contain a full (Debian) shell and package manager that can be used for troubleshooting or testing purposes.

    From a deployment perspective, there is no need to specify an architecture; the container client tool that pulls the image gets the proper layer for the running architecture.

    Verify signed container images

    1.9 and 2.0 container images are signed using Cosign/Sigstore. These signatures can be verified using cosign (install guide):

    Note: replace cosign above with the binary installed if it has a different name (e.g. cosign-linux-amd64).

    Keyless signing is also provided but this is still experimental:

    Note: COSIGN_EXPERIMENTAL=1 is used to allow verification of images signed in KEYLESS mode. To learn more about keyless signing, please refer to Keyless Signatures.

    Getting Started

    Download the latest stable image from the 2.0 series:

    Once the image is in place, run the following (useless) test, which makes Fluent Bit measure CPU usage in the container:

    That command will let Fluent Bit measure CPU usage every second and flush the results to the standard output, e.g.:

    F.A.Q

    Why is there no Fluent Bit Docker image based on Alpine Linux?

    Alpine Linux uses the Musl C library instead of Glibc. Musl is not fully compatible with Glibc, which generated many issues in the following areas when used with Fluent Bit:

    • Memory Allocator: to run Fluent Bit properly in high-load environments, we use Jemalloc as the default memory allocator, which reduces fragmentation and provides better performance for our needs. Jemalloc cannot run smoothly with Musl and requires extra work.

    • Alpine Linux Musl function bootstrapping has a compatibility issue when loading Golang shared libraries; this generates problems when trying to load Golang output plugins in Fluent Bit.

    • The Alpine Linux Musl time format parser does not support Glibc extensions.

    • The maintainers' preference in terms of base image, due to security and maintenance reasons, is Distroless and Debian.

    Why use distroless containers?

    Briefly tackled in a blog post which links out to the following possibly opposing views:

    • https://hackernoon.com/distroless-containers-hype-or-true-value-2rfl3wat

    • https://www.redhat.com/en/blog/why-distroless-containers-arent-security-solution-you-think-they-are

    The reasons for using Distroless are fairly well covered here: https://github.com/GoogleContainerTools/distroless#why-should-i-use-distroless-images

    • Only include what you need, reduce the attack surface available.

    • Reduces size, so improves performance as well.

    • Reduces false positives on scans (and reduces resources required for scanning).

    • Reduces supply chain security requirements to just what you need.

    • Helps prevent unauthorised processes or users interacting with the container.

    • Less need to harden the container (and container runtime, K8S, etc.).

    • Faster CI/CD processes.

    With any choice of course there are downsides:

    • No shell or package manager to update/add things.

      • Generally, though, dynamic updating is a bad idea in containers as the time it is done affects the outcome: two containers started at different times using the same base image may perform differently or get different dependencies, etc.

      • A better approach is to rebuild a new image version; you can do this with Distroless as well, although it is harder, requiring multistage builds or similar to provide the new dependencies.

    • Debugging can be harder.

      • More specifically you need applications set up to properly expose information for debugging rather than rely on traditional debug approaches of connecting to processes or dumping memory. This can be an upfront cost vs a runtime cost but does shift left in the development process so hopefully is a reduction overall.

    • Assumption that Distroless is secure: nothing is secure (just more or less secure) and there are still exploits so it does not remove the need for securing your system.

    • Sometimes you need to use a common base image, e.g. with audit/security/health/etc. hooks integrated, or common base tooling (this could still be Distroless though).

    One other important thing to note is that exec'ing into a container will potentially impact resource limits.

    For debugging, debug containers are available now in K8S: https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/#ephemeral-container

    • This can be a quite different container from the one you want to investigate (e.g. lots of extra tools or even a different base).

    • No resource limits applied to this container - can be good or bad.

    • Runs in pod namespaces, just another container that can access everything the others can.

    • May need architecture of the pod to share volumes, etc.

    • Requires more recent versions of K8S and the container runtime plus RBAC allowing it.

    Troubleshooting

    Tap Functionality

    docker run -ti cr.fluentbit.io/fluent/fluent-bit
    $ cosign verify --key "https://packages.fluentbit.io/fluentbit-cosign.pub" fluent/fluent-bit:2.0.6
    
    Verification for index.docker.io/fluent/fluent-bit:2.0.6 --
    The following checks were performed on each of these signatures:
      - The cosign claims were validated
      - The signatures were verified against the specified public key
    
    [{"critical":{"identity":{"docker-reference":"index.docker.io/fluent/fluent-bit"},"image":{"docker-manifest-digest":"sha256:c740f90b07f42823d4ecf4d5e168f32ffb4b8bcd87bc41df8f5e3d14e8272903"},"type":"cosign container image signature"},"optional":{"release":"2.0.6","repo":"fluent/fluent-bit","workflow":"Release from staging"}}]
    COSIGN_EXPERIMENTAL=1 cosign verify fluent/fluent-bit:2.0.6
    docker pull cr.fluentbit.io/fluent/fluent-bit:2.0
    docker run -ti cr.fluentbit.io/fluent/fluent-bit:2.0 \
      -i cpu -o stdout -f 1
    [2019/10/01 12:29:02] [ info] [engine] started
    [0] cpu.0: [1504290543.000487750, {"cpu_p"=>0.750000, "user_p"=>0.250000, "system_p"=>0.500000, "cpu0.p_cpu"=>0.000000, "cpu0.p_user"=>0.000000, "cpu0.p_system"=>0.000000, "cpu1.p_cpu"=>1.000000, "cpu1.p_user"=>0.000000, "cpu1.p_system"=>1.000000, "cpu2.p_cpu"=>1.000000, "cpu2.p_user"=>1.000000, "cpu2.p_system"=>0.000000, "cpu3.p_cpu"=>0.000000, "cpu3.p_user"=>0.000000, "cpu3.p_system"=>0.000000}]

    2.0.8-debug

    x86_64, arm64v8, arm32v7

    v2.0.x releases (production + debug)

    2.0.6

    x86_64, arm64v8, arm32v7

    Release v2.0.6

    2.0.6-debug

    x86_64, arm64v8, arm32v7

    v2.0.x releases (production + debug)

    2.0.5

    x86_64, arm64v8, arm32v7

    Release v2.0.5

    2.0.5-debug

    x86_64, arm64v8, arm32v7

    v2.0.x releases (production + debug)

    2.0.4

    x86_64, arm64v8, arm32v7

    Release v2.0.4

    2.0.4-debug

    x86_64, arm64v8, arm32v7

    v2.0.x releases (production + debug)

    2.0.3

    x86_64, arm64v8, arm32v7

    Release v2.0.3

    2.0.3-debug

    x86_64, arm64v8, arm32v7

    v2.0.x releases (production + debug)

    2.0.2

    x86_64, arm64v8, arm32v7

    Release v2.0.2

    2.0.2-debug

    x86_64, arm64v8, arm32v7

    v2.0.x releases (production + debug)

    2.0.1

    x86_64, arm64v8, arm32v7

    Release v2.0.1

    2.0.1-debug

    x86_64, arm64v8, arm32v7

    v2.0.x releases (production + debug)

    2.0.0

    x86_64, arm64v8, arm32v7

    Release v2.0.0

    2.0.0-debug

    x86_64, arm64v8, arm32v7

    v2.0.x releases (production + debug)

    1.9.9

    x86_64, arm64v8, arm32v7

    Release v1.9.9

    1.9.9-debug

    x86_64, arm64v8, arm32v7

    v1.9.x releases (production + debug)

    1.9.8

    x86_64, arm64v8, arm32v7

    Release v1.9.8

    1.9.8-debug

    x86_64, arm64v8, arm32v7

    v1.9.x releases (production + debug)

    1.9.7

    x86_64, arm64v8, arm32v7

    Release v1.9.7

    1.9.7-debug

    x86_64, arm64v8, arm32v7

    v1.9.x releases (production + debug)

    1.9.6

    x86_64, arm64v8, arm32v7

    Release v1.9.6

    1.9.6-debug

    x86_64, arm64v8, arm32v7

    v1.9.x releases (production + debug)

    1.9.5

    x86_64, arm64v8, arm32v7

    Release v1.9.5

    1.9.5-debug

    x86_64, arm64v8, arm32v7

    v1.9.x releases (production + debug)

    1.9.4

    x86_64, arm64v8, arm32v7

    Release v1.9.4

    1.9.4-debug

    x86_64, arm64v8, arm32v7

    v1.9.x releases (production + debug)

    1.9.3

    x86_64, arm64v8, arm32v7

    Release v1.9.3

    1.9.3-debug

    x86_64, arm64v8, arm32v7

    v1.9.x releases (production + debug)

    1.9.2

    x86_64, arm64v8, arm32v7

    Release v1.9.2

    1.9.2-debug

    x86_64, arm64v8, arm32v7

    v1.9.x releases (production + debug)

    1.9.1

    x86_64, arm64v8, arm32v7

    Release v1.9.1

    1.9.1-debug

    x86_64, arm64v8, arm32v7

    v1.9.x releases (production + debug)

    1.9.0

    x86_64, arm64v8, arm32v7

    Release v1.9.0

    1.9.0-debug

    x86_64, arm64v8, arm32v7

    v1.9.x releases (production + debug)

    v2.0.9
    v2.0.8
    Tap can be used to generate events or records detailing what messages pass through Fluent Bit, at what time and what filters affect them.

    Simple example

    First, we will make sure that the container image we are going to use actually supports Fluent Bit Tap (available in Fluent Bit 2.0+):
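    One way to check is to search the help output for the option (a minimal sketch; the latest image tag is an assumption, adjust it as needed):

    docker run --rm -ti fluent/fluent-bit:latest --help | grep chunk-trace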

    If the --enable-chunk-trace option is present, it means Fluent Bit has support for Fluent Bit Tap, but it is disabled by default, so remember to enable it with this option.

    Tap support is enabled and disabled via the embedded web server, so enable it like so (or the equivalent option in the configuration file):
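    A sketch of such an invocation (the dummy input and the alias input_dummy are illustrative; -H enables the embedded web server on its default port 2020):

    docker run --rm -ti -p 2020:2020 fluent/fluent-bit:latest \
      --enable-chunk-trace -H -i dummy -p alias=input_dummy -o stdout -f 1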

    In another terminal, we can activate Tap by using either the instance id of the input (dummy.0) or its alias.

    Since the alias is more predictable, that is what we will use:
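    Assuming the alias input_dummy from the sketch above and the default HTTP port, the activation request and a typical success response look like this:

    $ curl 127.0.0.1:2020/api/v1/trace/input_dummy
    {"status":"ok"}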

    This response means we have activated Tap; the terminal running Fluent Bit should now begin printing the trace records.

    All the records that now appear are those emitted by the activities of the dummy plugin.

    Complex example

    This example takes the same steps but demonstrates that the same mechanism works with more complicated configurations. In this example we will follow a single input of many, which passes through several filters.

    To make sure the window is not cluttered by the actual records generated by the input plugins, we send all of them to the null output.

    We activate with the following 'curl' command:

    Now we should start seeing output similar to the following:

    Parameters for the output in Tap

    When activating Tap, any plugin parameter can be given. These can be used to modify, for example, the output format, the name of the time key, the format of the date, etc.

    In the next example we will use the parameter "format": "json" to demonstrate how the Tap output of stdout can be shown in JSON format.

    First, run Fluent Bit enabling Tap:

    Next, in another terminal, we activate Tap including the output, in this case stdout, and the parameters wanted, in this case "format": "json":

    In the first terminal, we should see output similar to the following:

    This parameter shows the stdout output in JSON format; as mentioned before, parameters can be passed to any plugin.

    Please visit the following link for more information on other output plugins: https://docs.fluentbit.io/manual/pipeline/outputs

    Analysis of a single Tap record

    Here we analyze a single record from a filter event to explain the meaning of each field in detail. We chose a filter record since it includes the most details of all the record types.
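    The record itself is not reproduced here, but based on the fields described below it has a shape similar to the following (all values are illustrative):

    {
      "type": 2,
      "start_time": 1669160706,
      "end_time": 1669160707,
      "trace_id": "trace.1",
      "plugin_instance": "filter.0",
      "plugin_alias": "",
      "records": [
        {
          "timestamp": 1669160706,
          "record": {"message": "dummy"}
        }
      ]
    }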

    type

    The type defines at what stage the event is generated:

    • type=1: input record

      • this is the unadulterated input record

    • type=2: filtered record

      • this is a record once it has been filtered. One record is generated per filter.

    • type=3: pre-output record

      • this is the record right before it is sent for output.

    Since this is a record generated by the manipulation of a record by a filter, it has type 2.

    start_time and end_time

    These fields record the start and end of an event; the meaning is a bit different for each event type:

    • type 1: when the input is received, both the start and end time.

    • type 2: the time when filtering is matched until it has finished processing.

    • type 3: the time when the input is received and when it is finally slated for output.

    trace_id

    This is a string composed of a prefix and a number which is incremented with each record received by the input during the Tap session.

    plugin_instance

    This is the plugin instance name as it is generated by Fluent Bit at runtime.

    plugin_alias

    If an alias is set this field will contain the alias set for a plugin.

    records

    This is an array of all the records being sent. Since Fluent Bit handles records in chunks of multiple records, and chunks are indivisible, the same is done in the Tap output. Each record consists of its timestamp followed by the actual data, which is a composite type of keys and values.

    Dump Internals / Signal

    When the service is running, we can export metrics to see the overall status of the data flow of the service. But there are other use cases where we would like to know the current status of the internals of the service, specifically to answer questions like what's the current status of the internal buffers? The Dump Internals feature is the answer.

    Fluent Bit v1.4 introduces the Dump Internals feature, which can be triggered easily from the command line by sending the CONT Unix signal.

    note: this feature is only available on Linux and BSD family operating systems

    Usage

    Run the following kill command to signal Fluent Bit:
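    kill -CONT `pidof fluent-bit`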

    The command pidof looks up the process ID of Fluent Bit; you can replace the backtick expression with the process ID directly if you already know it.

    Fluent Bit will dump the following information to the standard output interface (stdout):

    Input Plugins Dump

    The dump provides insights for every input instance configured.

    Status

    Overall ingestion status of the plugin.

    Entry
    Sub-entry
    Description

    overlimit

    If the plugin has been configured with Mem_Buf_Limit, this entry will report whether the plugin is over the limit at the moment of the dump. If it is over the limit, it will print yes, otherwise no.

    mem_size

    Current in-memory size in use by the input plugin.

    mem_limit

    Limit set by Mem_Buf_Limit.

    Tasks

    When an input plugin ingests data into the engine, a Chunk is created. A Chunk can contain multiple records. At flush time, the engine creates a Task that contains the routes for the Chunk in question.

    The Task dump describes the tasks associated with the input plugin:

    Entry
    Description

    total_tasks

    Total number of active tasks associated with data generated by the input plugin.

    new

    Number of tasks not assigned yet to an output plugin. Tasks are in new status for a very short period of time (most of the time this value is very low or zero).

    running

    Number of active tasks being processed by output plugins.

    size

    Amount of memory used by the Chunks being processed (Total chunks size).

    Chunks

    The Chunks dump gives more details about all the chunks that the input plugin has generated and that are still being processed.

    Depending on the buffering strategy and limits imposed by configuration, some Chunks might be up (in memory) or down (filesystem).

    Entry
    Sub-entry
    Description

    total_chunks

    Total number of Chunks generated by the input plugin that are still being processed by the engine.

    up_chunks

    Total number of Chunks that are loaded in memory.

    down_chunks

    Total number of Chunks that are stored in the filesystem but not loaded in memory yet.

    Storage Layer Dump

    Fluent Bit relies on a custom storage layer interface designed for hybrid buffering. The Storage Layer entry contains a total summary of Chunks registered by Fluent Bit:

    Entry
    Sub-Entry
    Description

    total chunks

    Total number of Chunks

    mem chunks

    Total number of Chunks memory-based

    fs chunks

    Total number of Chunks filesystem based


    Tail

    The tail input plugin allows you to monitor one or several text files. It has behavior similar to the tail -f shell command.

    The plugin reads every matched file in the Path pattern and, for every new line found (separated by a newline character (\n)), it generates a new record. Optionally, a database file can be used so the plugin can keep a history of tracked files and a state of offsets; this is very useful to resume a state if the service is restarted.

    Configuration Parameters

    The plugin supports the following configuration parameters:

    Key
    Description
    Default

    Note that if the database parameter DB is not specified, by default the plugin will start reading each target file from the beginning. This might also cause some unwanted behavior; for example, when a line is bigger than Buffer_Chunk_Size and Skip_Long_Lines is not turned on, the file will be read from the beginning at each Refresh_Interval until the file is rotated.

    Multiline Support

    Starting from Fluent Bit v1.8 we have introduced new Multiline core functionality. For the tail input plugin, this means that it now supports the old configuration mechanism as well as the new one. In order to avoid breaking changes, we will keep both but encourage our users to use the latest one. We will refer to the two mechanisms as:

    • Multiline Core

    • Old Multiline

    Multiline Core (v1.8)

    The new multiline core is exposed by the following configuration:

    Key
    Description

    As stated in the Multiline Parser documentation, we now provide built-in configuration modes. Note that when using a new multiline.parser definition, you must disable the old configuration options in your tail section, namely:

    • parser

    • parser_firstline

    • parser_N

    • multiline

    • multiline_flush

    • docker_mode

    Multiline and Containers (v1.8)

    If you are running Fluent Bit to process logs coming from containers like Docker or CRI, you can use the new built-in modes for such purposes. This will help to reassemble multiline messages originally split by Docker or CRI:

    Specifying the two options separated by a comma means multi-format: try the docker and cri multiline formats.

    We are still working on extending multiline support for nested stack traces and similar cases; the documentation will be updated over the Fluent Bit v1.8.x release cycle.

    Old Multiline Configuration Parameters

    For the old multiline configuration, the following options exist to configure the handling of multiline logs:

    Key
    Description
    Default

    Old Docker Mode Configuration Parameters

    Docker mode exists to recombine JSON log lines split by the Docker daemon due to its line length limit. To use this feature, configure the tail plugin with the corresponding parser and then enable Docker mode:

    Key
    Description
    Default
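    A minimal sketch of an old-style Docker mode setup; the parser name docker is an assumption here and must be registered in your parsers.conf:

    [INPUT]
        Name               tail
        Path               /var/log/containers/*.log
        Parser             docker
        Docker_Mode        On
        Docker_Mode_Flush  4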

    Getting Started

    In order to tail text or log files, you can run the plugin from the command line or through the configuration file:

    Command Line

    From the command line you can let Fluent Bit parse text files with the following options:

    Configuration File

    In your main configuration file append the following Input & Output sections. An example visualization can be found here.

    Old Multi-line example

    When using multi-line configuration you need to first specify Multiline On in the configuration, then use the Parser_Firstline parameter and additional Parser_N parameters if needed. Suppose we are trying to read the following Java stack trace as a single event:

    We need to specify a Parser_Firstline parameter that matches the first line of a multi-line event. Once a match is made, Fluent Bit will read all subsequent lines until another match with Parser_Firstline is made.

    In the case above we can use the following parser, which extracts the time as time and the remaining portion of the multiline as log:

    If we want to further parse the entire event we can add additional parsers with Parser_N where N is an integer. The final Fluent Bit configuration looks like the following:

    Our output will be as follows.

    Tailing files keeping state

    The tail input plugin has a feature to save the state of the tracked files; it is strongly suggested that you enable it. For this purpose the db property is available, e.g.:

    When running, the database file /path/to/logs.db will be created. This database is backed by SQLite3, so if you are interested in exploring its content you can open it with the SQLite client tool, e.g.:

    Make sure to explore the database while Fluent Bit is not actively writing to it, otherwise you will see some Error: database is locked messages.

    Formatting SQLite

    By default the SQLite client tool does not format the columns in a human-readable way, so to explore the in_tail_files table you can create a config file in ~/.sqliterc with the following content:

    SQLite and Write Ahead Logging

    Fluent Bit keeps the state, or checkpoint, of each file through a SQLite database file, so if the service is restarted it can continue consuming files from its last checkpoint position (offset). The default options are set for high performance and corruption safety.

    The SQLite journaling mode enabled is Write Ahead Log, or WAL. This improves the performance of read and write operations to disk. When enabled, you will see additional files created on your file system; consider the following configuration statement:

    The above configuration enables a database file called test.db, and in the same path SQLite will create two additional files:

    • test.db-shm

    • test.db-wal

    Those two files support the WAL mechanism, which helps to improve performance and reduce the number of system calls required. The -wal file stores the new changes to be committed; at some point the WAL file transactions are moved back into the real database file. The -shm file is a shared-memory file that allows concurrent users of the WAL file.

    WAL and Memory Usage

    The WAL mechanism gives us higher performance but might also increase Fluent Bit's memory usage. Most of this usage comes from memory-mapped and cached pages. In some cases memory usage may stay somewhat high, giving the impression of a memory leak, but this is generally not a problem unless you need your memory metrics back to normal. Starting from Fluent Bit v1.7.3 we introduced the option db.journal_mode, which sets the journal mode for databases; by default it is WAL (Write-Ahead Logging). The currently allowed values for db.journal_mode are DELETE | TRUNCATE | PERSIST | MEMORY | WAL | OFF.
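    For example, a minimal sketch that switches the journal mode away from the default WAL (useful when the database file lives on a shared network file system, where WAL is not supported); the path and file names are placeholders:

    [INPUT]
        Name             tail
        Path             /var/log/containers/*.log
        DB               test.db
        DB.journal_mode  DELETE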

    File Rotation

    File rotation is properly handled, including logrotate's copytruncate mode.

    Note that the Path patterns cannot match the rotated files. Otherwise, the rotated file would be read again and lead to duplicate records.

    $ docker run --rm -ti fluent/fluent-bit:latest --help | grep -- -Z
      -Z, --enable-chunk-trace     enable chunk tracing. activating it requires using the HTTP Server API.
    $ docker run --rm -ti -p 2020:2020 fluent/fluent-bit:latest -Z -H -i dummy -p alias=input_dummy -o stdout -f 1
    Fluent Bit v2.0.0
    * Copyright (C) 2015-2022 The Fluent Bit Authors
    * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
    * https://fluentbit.io
    
    [2022/10/21 10:03:16] [ info] [fluent bit] version=2.0.0, commit=3000f699f2, pid=1
    [2022/10/21 10:03:16] [ info] [output:stdout:stdout.0] worker #0 started
    [2022/10/21 10:03:16] [ info] [storage] ver=1.3.0, type=memory, sync=normal, checksum=off, max_chunks_up=128
    [2022/10/21 10:03:16] [ info] [cmetrics] version=0.5.2
    [2022/10/21 10:03:16] [ info] [input:dummy:input_dummy] initializing
    [2022/10/21 10:03:16] [ info] [input:dummy:input_dummy] storage_strategy='memory' (memory only)
    [2022/10/21 10:03:16] [ info] [http_server] listen iface=0.0.0.0 tcp_port=2020
    [2022/10/21 10:03:16] [ info] [sp] stream processor started
    [0] dummy.0: [1666346597.203307010, {"message"=>"dummy"}]
    [0] dummy.0: [1666346598.204103793, {"message"=>"dummy"}]
    ...
    
    $ curl 127.0.0.1:2020/api/v1/trace/input_dummy
    {"status":"ok"}
    [0] dummy.0: [1666346615.203253156, {"message"=>"dummy"}]
    [2022/10/21 10:03:36] [ info] [fluent bit] version=2.0.0, commit=3000f699f2, pid=1
    [2022/10/21 10:03:36] [ info] [storage] ver=1.3.0, type=memory, sync=normal, checksum=off, max_chunks_up=128
    [2022/10/21 10:03:36] [ info] [cmetrics] version=0.5.2
    [2022/10/21 10:03:36] [ info] [input:emitter:trace-emitter] initializing
    [2022/10/21 10:03:36] [ info] [input:emitter:trace-emitter] storage_strategy='memory' (memory only)
    [2022/10/21 10:03:36] [ info] [sp] stream processor started
    [2022/10/21 10:03:36] [ info] [output:stdout:stdout.0] worker #0 started
    [0] dummy.0: [1666346616.203551736, {"message"=>"dummy"}]
    [0] trace: [1666346617.205221952, {"type"=>1, "trace_id"=>"trace.0", "plugin_instance"=>"dummy.0", "plugin_alias"=>"input_dummy", "records"=>[{"timestamp"=>1666346617, "record"=>{"message"=>"dummy"}}], "start_time"=>1666346617, "end_time"=>1666346617}]
    [0] dummy.0: [1666346617.205131790, {"message"=>"dummy"}]
    [0] trace: [1666346617.205419358, {"type"=>3, "trace_id"=>"trace.0", "plugin_instance"=>"dummy.0", "plugin_alias"=>"input_dummy", "records"=>[{"timestamp"=>1666346617, "record"=>{"message"=>"dummy"}}], "start_time"=>1666346617, "end_time"=>1666346617}]
    [0] trace: [1666346618.204110867, {"type"=>1, "trace_id"=>"trace.1", "plugin_instance"=>"dummy.0", "plugin_alias"=>"input_dummy", "records"=>[{"timestamp"=>1666346618, "record"=>{[0] dummy.0: [1666346618.204049246, {"message"=>"dummy"}]
    "message"=>"dummy"}}], "start_time"=>1666346618, "end_time"=>1666346618}]
    [0] trace: [1666346618.204198654, {"type"=>3, "trace_id"=>"trace.1", "plugin_instance"=>"dummy.0", "plugin_alias"=>"input_dummy", "records"=>[{"timestamp"=>1666346618, "record"=>{"message"=>"dummy"}}], "start_time"=>1666346618, "end_time"=>1666346618}]
    
    $ docker run --rm -ti -p 2020:2020 \
    	fluent/fluent-bit:latest \
    	-Z -H \
    		-i dummy -p alias=dummy_0 -p \
    			dummy='{"dummy": "dummy_0", "key_name": "foo", "key_cnt": "1"}' \
    		-i dummy -p alias=dummy_1 -p dummy='{"dummy": "dummy_1"}' \
    		-i dummy -p alias=dummy_2 -p dummy='{"dummy": "dummy_2"}' \
    		-F record_modifier -m 'dummy.0' -p record="powered_by fluent" \
    		-F record_modifier -m 'dummy.1' -p record="powered_by fluent-bit" \
    		-F nest -m 'dummy.0' \
    			-p operation=nest -p wildcard='key_*' -p nest_under=data \
    		-o null -m '*' -f 1
    $ curl 127.0.0.1:2020/api/v1/trace/dummy_0
    {"status":"ok"}
    [0] trace: [1666349359.325597543, {"type"=>1, "trace_id"=>"trace.0", "plugin_instance"=>"dummy.0", "plugin_alias"=>"dummy_0", "records"=>[{"timestamp"=>1666349359, "record"=>{"dummy"=>"dummy_0", "key_name"=>"foo", "key_cnt"=>"1"}}], "start_time"=>1666349359, "end_time"=>1666349359}]
    [0] trace: [1666349359.325723747, {"type"=>2, "start_time"=>1666349359, "end_time"=>1666349359, "trace_id"=>"trace.0", "plugin_instance"=>"record_modifier.0", "records"=>[{"timestamp"=>1666349359, "record"=>{"dummy"=>"dummy_0", "key_name"=>"foo", "key_cnt"=>"1", "powered_by"=>"fluent"}}]}]
    [0] trace: [1666349359.325783954, {"type"=>2, "start_time"=>1666349359, "end_time"=>1666349359, "trace_id"=>"trace.0", "plugin_instance"=>"nest.2", "records"=>[{"timestamp"=>1666349359, "record"=>{"dummy"=>"dummy_0", "powered_by"=>"fluent", "data"=>{"key_name"=>"foo", "key_cnt"=>"1"}}}]}]
    [0] trace: [1666349359.325913783, {"type"=>3, "trace_id"=>"trace.0", "plugin_instance"=>"dummy.0", "plugin_alias"=>"dummy_0", "records"=>[{"timestamp"=>1666349359, "record"=>{"dummy"=>"dummy_0", "powered_by"=>"fluent", "data"=>{"key_name"=>"foo", "key_cnt"=>"1"}}}], "start_time"=>1666349359, "end_time"=>1666349359}]
    [0] trace: [1666349360.323826619, {"type"=>1, "trace_id"=>"trace.1", "plugin_instance"=>"dummy.0", "plugin_alias"=>"dummy_0", "records"=>[{"timestamp"=>1666349360, "record"=>{"dummy"=>"dummy_0", "key_name"=>"foo", "key_cnt"=>"1"}}], "start_time"=>1666349360, "end_time"=>1666349360}]
    [0] trace: [1666349360.323859618, {"type"=>2, "start_time"=>1666349360, "end_time"=>1666349360, "trace_id"=>"trace.1", "plugin_instance"=>"record_modifier.0", "records"=>[{"timestamp"=>1666349360, "record"=>{"dummy"=>"dummy_0", "key_name"=>"foo", "key_cnt"=>"1", "powered_by"=>"fluent"}}]}]
    [0] trace: [1666349360.323900784, {"type"=>2, "start_time"=>1666349360, "end_time"=>1666349360, "trace_id"=>"trace.1", "plugin_instance"=>"nest.2", "records"=>[{"timestamp"=>1666349360, "record"=>{"dummy"=>"dummy_0", "powered_by"=>"fluent", "data"=>{"key_name"=>"foo", "key_cnt"=>"1"}}}]}]
    [0] trace: [1666349360.323926366, {"type"=>3, "trace_id"=>"trace.1", "plugin_instance"=>"dummy.0", "plugin_alias"=>"dummy_0", "records"=>[{"timestamp"=>1666349360, "record"=>{"dummy"=>"dummy_0", "powered_by"=>"fluent", "data"=>{"key_name"=>"foo", "key_cnt"=>"1"}}}], "start_time"=>1666349360, "end_time"=>1666349360}]
    [0] trace: [1666349361.324223752, {"type"=>1, "trace_id"=>"trace.2", "plugin_instance"=>"dummy.0", "plugin_alias"=>"dummy_0", "records"=>[{"timestamp"=>1666349361, "record"=>{"dummy"=>"dummy_0", "key_name"=>"foo", "key_cnt"=>"1"}}], "start_time"=>1666349361, "end_time"=>1666349361}]
    [0] trace: [1666349361.324263959, {"type"=>2, "start_time"=>1666349361, "end_time"=>1666349361, "trace_id"=>"trace.2", "plugin_instance"=>"record_modifier.0", "records"=>[{"timestamp"=>1666349361, "record"=>{"dummy"=>"dummy_0", "key_name"=>"foo", "key_cnt"=>"1", "powered_by"=>"fluent"}}]}]
    [0] trace: [1666349361.324283250, {"type"=>2, "start_time"=>1666349361, "end_time"=>1666349361, "trace_id"=>"trace.2", "plugin_instance"=>"nest.2", "records"=>[{"timestamp"=>1666349361, "record"=>{"dummy"=>"dummy_0", "powered_by"=>"fluent", "data"=>{"key_name"=>"foo", "key_cnt"=>"1"}}}]}]
    [0] trace: [1666349361.324294291, {"type"=>3, "trace_id"=>"trace.2", "plugin_instance"=>"dummy.0", "plugin_alias"=>"dummy_0", "records"=>[{"timestamp"=>1666349361, "record"=>{"dummy"=>"dummy_0", "powered_by"=>"fluent", "data"=>{"key_name"=>"foo", "key_cnt"=>"1"}}}], "start_time"=>1666349361, "end_time"=>1666349361}]
    ^C[2022/10/21 10:49:23] [engine] caught signal (SIGINT)
    [2022/10/21 10:49:23] [ warn] [engine] service will shutdown in max 5 seconds
    [2022/10/21 10:49:23] [ info] [input] pausing dummy_0
    [2022/10/21 10:49:23] [ info] [input] pausing dummy_1
    [2022/10/21 10:49:23] [ info] [input] pausing dummy_2
    [2022/10/21 10:49:23] [ info] [engine] service has stopped (0 pending tasks)
    [2022/10/21 10:49:23] [ info] [input] pausing dummy_0
    [2022/10/21 10:49:23] [ info] [input] pausing dummy_1
    [2022/10/21 10:49:23] [ info] [input] pausing dummy_2
    [0] trace: [1666349362.323272011, {"type"=>1, "trace_id"=>"trace.3", "plugin_instance"=>"dummy.0", "plugin_alias"=>"dummy_0", "records"=>[{"timestamp"=>1666349362, "record"=>{"dummy"=>"dummy_0", "key_name"=>"foo", "key_cnt"=>"1"}}], "start_time"=>1666349362, "end_time"=>1666349362}]
    [0] trace: [1666349362.323306843, {"type"=>2, "start_time"=>1666349362, "end_time"=>1666349362, "trace_id"=>"trace.3", "plugin_instance"=>"record_modifier.0", "records"=>[{"timestamp"=>1666349362, "record"=>{"dummy"=>"dummy_0", "key_name"=>"foo", "key_cnt"=>"1", "powered_by"=>"fluent"}}]}]
    [0] trace: [1666349362.323323884, {"type"=>2, "start_time"=>1666349362, "end_time"=>1666349362, "trace_id"=>"trace.3", "plugin_instance"=>"nest.2", "records"=>[{"timestamp"=>1666349362, "record"=>{"dummy"=>"dummy_0", "powered_by"=>"fluent", "data"=>{"key_name"=>"foo", "key_cnt"=>"1"}}}]}]
    [0] trace: [1666349362.323334509, {"type"=>3, "trace_id"=>"trace.3", "plugin_instance"=>"dummy.0", "plugin_alias"=>"dummy_0", "records"=>[{"timestamp"=>1666349362, "record"=>{"dummy"=>"dummy_0", "powered_by"=>"fluent", "data"=>{"key_name"=>"foo", "key_cnt"=>"1"}}}], "start_time"=>1666349362, "end_time"=>1666349362}]
    [2022/10/21 10:49:24] [ warn] [engine] service will shutdown in max 1 seconds
    [2022/10/21 10:49:25] [ info] [engine] service has stopped (0 pending tasks)
    [2022/10/21 10:49:25] [ info] [output:stdout:stdout.0] thread worker #0 stopping...
    [2022/10/21 10:49:25] [ info] [output:stdout:stdout.0] thread worker #0 stopped
    [2022/10/21 10:49:25] [ info] [output:null:null.0] thread worker #0 stopping...
    [2022/10/21 10:49:25] [ info] [output:null:null.0] thread worker #0 stopped
    $ docker run --rm -ti -p 2020:2020 fluent/fluent-bit:latest -Z -H -i dummy -p alias=input_dummy -o stdout -f 1
    Fluent Bit v2.0.8
    * Copyright (C) 2015-2022 The Fluent Bit Authors
    * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
    * https://fluentbit.io
    
    [2023/01/27 07:44:25] [ info] [fluent bit] version=2.0.8, commit=9444fdc5ee, pid=1
    [2023/01/27 07:44:25] [ info] [storage] ver=1.4.0, type=memory, sync=normal, checksum=off, max_chunks_up=128
    [2023/01/27 07:44:25] [ info] [cmetrics] version=0.5.8
    [2023/01/27 07:44:25] [ info] [ctraces ] version=0.2.7
    [2023/01/27 07:44:25] [ info] [input:dummy:input_dummy] initializing
    [2023/01/27 07:44:25] [ info] [input:dummy:input_dummy] storage_strategy='memory' (memory only)
    [2023/01/27 07:44:25] [ info] [output:stdout:stdout.0] worker #0 started
    [2023/01/27 07:44:25] [ info] [http_server] listen iface=0.0.0.0 tcp_port=2020
    [2023/01/27 07:44:25] [ info] [sp] stream processor started
    [0] dummy.0: [1674805465.976012761, {"message"=>"dummy"}]
    [0] dummy.0: [1674805466.973669512, {"message"=>"dummy"}]
    ...
    $ curl 127.0.0.1:2020/api/v1/trace/input_dummy -d '{"output":"stdout", "params": {"format": "json"}}'
    {"status":"ok"}
    [0] dummy.0: [1674805635.972373840, {"message"=>"dummy"}]
    [{"date":1674805634.974457,"type":1,"trace_id":"0","plugin_instance":"dummy.0","plugin_alias":"input_dummy","records":[{"timestamp":1674805634,"record":{"message":"dummy"}}],"start_time":1674805634,"end_time":1674805634},{"date":1674805634.974605,"type":3,"trace_id":"0","plugin_instance":"dummy.0","plugin_alias":"input_dummy","records":[{"timestamp":1674805634,"record":{"message":"dummy"}}],"start_time":1674805634,"end_time":1674805634},{"date":1674805635.972398,"type":1,"trace_id":"1","plugin_instance":"dummy.0","plugin_alias":"input_dummy","records":[{"timestamp":1674805635,"record":{"message":"dummy"}}],"start_time":1674805635,"end_time":1674805635},{"date":1674805635.972413,"type":3,"trace_id":"1","plugin_instance":"dummy.0","plugin_alias":"input_dummy","records":[{"timestamp":1674805635,"record":{"message":"dummy"}}],"start_time":1674805635,"end_time":1674805635}]
    [0] dummy.0: [1674805636.973970215, {"message"=>"dummy"}]
    [{"date":1674805636.974008,"type":1,"trace_id":"2","plugin_instance":"dummy.0","plugin_alias":"input_dummy","records":[{"timestamp":1674805636,"record":{"message":"dummy"}}],"start_time":1674805636,"end_time":1674805636},{"date":1674805636.974034,"type":3,"trace_id":"2","plugin_instance":"dummy.0","plugin_alias":"input_dummy","records":[{"timestamp":1674805636,"record":{"message":"dummy"}}],"start_time":1674805636,"end_time":1674805636}]
    {
    	"type": 2,
    	"start_time": 1666349231,
    	"end_time": 1666349231,
    	"trace_id": "trace.1",
    	"plugin_instance": "nest.2", 
    	"records": [{
    		"timestamp": 1666349231,
    		"record": {
    			"dummy": "dummy_0",
    			"powered_by": "fluent",
    			"data": {
    				"key_name": "foo", 
    				"key_cnt": "1"
    			}
    		}
    	}]
    }
    kill -CONT `pidof fluent-bit`
    [engine] caught signal (SIGCONT)
    [2020/03/23 17:39:02] Fluent Bit Dump
    
    ===== Input =====
    syslog_debug (syslog)
    │
    ├─ status
    │  └─ overlimit     : no
    │     ├─ mem size   : 60.8M (63752145 bytes)
    │     └─ mem limit  : 61.0M (64000000 bytes)
    │
    ├─ tasks
    │  ├─ total tasks   : 92
    │  ├─ new           : 0
    │  ├─ running       : 92
    │  └─ size          : 171.1M (179391504 bytes)
    │
    └─ chunks
       └─ total chunks  : 92
          ├─ up chunks  : 35
          ├─ down chunks: 57
          └─ busy chunks: 92
             ├─ size    : 60.8M (63752145 bytes)
             └─ size err: 0
    
    ===== Storage Layer =====
    total chunks     : 92
    ├─ mem chunks    : 0
    └─ fs chunks     : 92
       ├─ up         : 35
       └─ down       : 57

    busy_chunks

    Chunks marked as busy (being flushed) or locked. Busy Chunks are immutable and are likely ready to be (or being) processed.

    size

    Amount of bytes used by the Chunk.

    size err

    Number of Chunks in an error state where their size could not be retrieved.

    up

    Total number of filesystem chunks up in memory

    down

    Total number of filesystem chunks down (not loaded in memory)

    Path_Key

    If enabled, it appends the name of the monitored file as part of the record. The value assigned becomes the key in the map.

    Exclude_Path

    Set one or multiple shell patterns separated by commas to exclude files matching certain criteria, e.g: Exclude_Path *.gz,*.zip

    Offset_Key

    If enabled, Fluent Bit appends the offset of the current monitored file as part of the record. The value assigned becomes the key in the map

    Read_from_Head

    For newly discovered files on start (without a database offset/position), read the content from the head of the file, not the tail.

    False

    Refresh_Interval

    The interval of refreshing the list of watched files in seconds.

    60

    Rotate_Wait

    Specify the number of extra seconds to monitor a file once it is rotated, in case some pending data needs to be flushed.

    5

    Ignore_Older

    Ignores records older than ignore_older. Supports m, h, d (minutes, hours, days) syntax. Default behavior is to read all records. Option only available when a Parser is specified and it can parse the time of a record.

    Skip_Long_Lines

    When a monitored file reaches its buffer capacity due to a very long line (Buffer_Max_Size), the default behavior is to stop monitoring that file. Skip_Long_Lines alters that behavior and instructs Fluent Bit to skip long lines and continue processing other lines that fit into the buffer size.

    Off

    Skip_Empty_Lines

    Skips empty lines in the log file from any further processing or output.

    Off

    DB

    Specify the database file to keep track of monitored files and offsets.

    DB.sync

    Set a default synchronization (I/O) method. Values: Extra, Full, Normal, Off. This flag affects how the internal SQLite engine synchronizes to disk; for more details about each option please refer to the SQLite documentation. Most workload scenarios will be fine with normal mode, but if you really need full synchronization after every write operation you should set full mode. Note that full has a high I/O performance cost.

    normal

    DB.locking

    Specify that the database will be accessed only by Fluent Bit. Enabling this feature helps to increase performance when accessing the database, but it restricts any external tool from querying the content.

    false

    DB.journal_mode

    Sets the journal mode for databases. Enabling WAL provides higher performance. Note that WAL is not compatible with shared network file systems.

    WAL

    Mem_Buf_Limit

    Set a limit on the amount of memory the Tail plugin can use when appending data to the Engine. If the limit is reached, ingestion is paused; when the data is flushed it resumes.

    Exit_On_Eof

    When reading a file, exit as soon as the end of the file is reached. Useful for bulk loading and tests.

    false

    Parser

    Specify the name of a parser to interpret the entry as a structured message.

    Key

    When a message is unstructured (no parser applied), it's appended as a string under the key name log. This option allows you to define an alternative name for that key.

    log

    Inotify_Watcher

    Set to false to use file stat watcher instead of inotify.

    true

    Tag

    Set a tag (with regex-extract fields) that will be placed on lines read. E.g. kube.<namespace_name>.<pod_name>.<container_name>. Note that "tag expansion" is supported: if the tag includes an asterisk (*), that asterisk will be replaced with the absolute path of the monitored file (also see Workflow of Tail + Kubernetes Filter).

    Tag_Regex

    Set a regex to extract fields from the file name. E.g. (?<pod_name>[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*)_(?<namespace_name>[^_]+)_(?<container_name>.+)-

    Static_Batch_Size

    Set the maximum number of bytes to process per iteration for monitored static files (files that already exist when Fluent Bit starts).

    50M

    Parser_N

    Optional extra parser to interpret and structure multiline entries. This option can be used to define multiple parsers, e.g.: Parser_1 ab1, Parser_2 ab2, Parser_N abN.

    Buffer_Chunk_Size

    Set the initial buffer size to read file data. This value is also used to increase the buffer size as needed. The value must conform to the Unit Size specification.

    32k

    Buffer_Max_Size

    Set the limit of the buffer size per monitored file. When a buffer needs to be increased (e.g. very long lines), this value is used to restrict how much the memory buffer can grow. If reading a file exceeds this limit, the file is removed from the monitored file list. The value must conform to the Unit Size specification.

    32k

    Path

    Pattern specifying a specific log file or multiple ones through the use of common wildcards. Multiple patterns separated by commas are also allowed.

    multiline.parser

    Specify one or multiple Multiline Parser definitions to apply to the content.

    Multiline

    If enabled, the plugin will try to discover multiline messages and use the proper parsers to compose the outgoing messages. Note that when this option is enabled the Parser option is not used.

    Off

    Multiline_Flush

    Wait period time in seconds to process queued multiline messages

    4

    Parser_Firstline

    Name of the parser that matches the beginning of a multiline message. Note that the regular expression defined in the parser must include a group name (named capture), and the value of the last match group must be a string

    Docker_Mode

    If enabled, the plugin will recombine split Docker log lines before passing them to any parser as configured above. This mode cannot be used at the same time as Multiline.

    Off

    Docker_Mode_Flush

    Wait period time in seconds to flush queued unfinished split lines.

    4

    Docker_Mode_Parser

    Specify an optional parser for the first line of the docker multiline mode. The parser name to be specified must be registered in the parsers.conf file.



    [INPUT]
        name              tail
        path              /var/log/containers/*.log
        multiline.parser  docker, cri
    pipeline:
      inputs:
        - tail:
          path: /var/log/containers/*.log
          multiline.parser: docker, cri
    $ fluent-bit -i tail -p path=/var/log/syslog -o stdout
    [INPUT]
        Name        tail
        Path        /var/log/syslog
    
    [OUTPUT]
        Name   stdout
        Match  *
    pipeline:
      inputs:
        - tail:
          path: /var/log/syslog
          
      outputs:
        - stdout:
          match: "*"
    Dec 14 06:41:08 Exception in thread "main" java.lang.RuntimeException: Something has gone wrong, aborting!
        at com.myproject.module.MyProject.badMethod(MyProject.java:22)
        at com.myproject.module.MyProject.oneMoreMethod(MyProject.java:18)
        at com.myproject.module.MyProject.anotherMethod(MyProject.java:14)
        at com.myproject.module.MyProject.someMethod(MyProject.java:10)
        at com.myproject.module.MyProject.main(MyProject.java:6)
    [PARSER]
        Name multiline
        Format regex
        Regex /(?<time>Dec \d+ \d+\:\d+\:\d+)(?<message>.*)/
        Time_Key  time
        Time_Format %b %d %H:%M:%S
    # Note this is generally added to parsers.conf and referenced in [SERVICE]
    [PARSER]
        Name multiline
        Format regex
        Regex /(?<time>Dec \d+ \d+\:\d+\:\d+)(?<message>.*)/
        Time_Key  time
        Time_Format %b %d %H:%M:%S
    
    [INPUT]
        Name             tail
        Multiline        On
        Parser_Firstline multiline
        Path             /var/log/java.log
    
    [OUTPUT]
        Name             stdout
        Match            *
    [0] tail.0: [1607928428.466041977, {"message"=>"Exception in thread "main" java.lang.RuntimeException: Something has gone wrong, aborting!
        at com.myproject.module.MyProject.badMethod(MyProject.java:22)
        at com.myproject.module.MyProject.oneMoreMethod(MyProject.java:18)
        at com.myproject.module.MyProject.anotherMethod(MyProject.java:14)
        at com.myproject.module.MyProject.someMethod(MyProject.java:10)", "message"=>"at com.myproject.module.MyProject.main(MyProject.java:6)"}]
    $ fluent-bit -i tail -p path=/var/log/syslog -p db=/path/to/logs.db -o stdout
    $ sqlite3 tail.db
    -- Loading resources from /home/edsiper/.sqliterc
    
    SQLite version 3.14.1 2016-08-11 18:53:32
    Enter ".help" for usage hints.
    sqlite> SELECT * FROM in_tail_files;
    id     name                              offset        inode         created
    -----  --------------------------------  ------------  ------------  ----------
    1      /var/log/syslog                   73453145      23462108      1480371857
    sqlite>
    .headers on
    .mode column
    .width 5 32 12 12 10
    [INPUT]
        name    tail
        path    /var/log/containers/*.log
        db      test.db

    Monitoring

    Learn how to monitor your Fluent Bit data pipelines

    Fluent Bit comes with built-in features that allow you to monitor the internals of your pipeline, integrate with Prometheus and Grafana, perform health checks, and connect to external services for such purposes:

    • HTTP Server: JSON and Prometheus Exporter-style metrics

    • Grafana Dashboards and Alerts

    • Health Checks

    • Calyptia Cloud: hosted service to monitor and visualize your pipelines

    HTTP Server

    Fluent Bit comes with a built-in HTTP Server that can be used to query internal information and monitor metrics of each running plugin.

    The monitoring interface can be easily integrated with Prometheus, since we support its native format.

    Getting Started

    To get started, the first step is to enable the HTTP Server from the configuration file:

    the above configuration snippet instructs Fluent Bit to start its HTTP Server on TCP port 2020, listening on all network interfaces:

    now a simple curl command is enough to gather some information:

    Note that we are piping the curl command output into the jq program, which helps make the JSON data easy to read in the terminal. Fluent Bit does not aim to do JSON pretty-printing.

    REST API Interface

    Fluent Bit aims to expose useful interfaces for monitoring; as of Fluent Bit v0.14 the following endpoints are available:

    URI
    Description
    Data Format

    Metric Descriptions

    The following are detailed descriptions of the metrics output in Prometheus format by /api/v1/metrics/prometheus.

    The following definitions are key to understanding:

    • record: a single message collected from a source, such as a single long line in a file.

    • chunk: Fluent Bit input plugin instances ingest log records and store them in chunks. A batch of records in a chunk are tracked together as a single unit; the Fluent Bit engine attempts to fit records into chunks of at most 2 MB, but the size can vary at runtime. Chunks are then sent to an output. An output plugin instance can either successfully send the full chunk to the destination and mark it as successful, or it can fail the chunk entirely if an unrecoverable error is encountered, or it can ask for the chunk to be retried.

    Metric Name
    Labels
    Description
    Type
    Unit

    The following are detailed descriptions of the metrics output in JSON format by /api/v1/storage.

    Metric Key
    Description
    Unit

    Uptime Example

    Query the service uptime with the following command:

    it should print output similar to this:

    Metrics Examples

    Query internal metrics in JSON format with the following command:

    it should print output similar to this:

    Metrics in Prometheus format

    Query internal metrics in Prometheus Text 0.0.4 format:

    this time the same metrics will be in Prometheus format instead of JSON:

    Configuring Aliases

    By default, plugins configured at runtime get an internal name in the format plugin_name.ID. For monitoring purposes, this can be confusing if many plugins of the same type are configured. To distinguish them, each configured input or output section can be given an alias that will be used as the parent name for the metric.

    The following example sets an alias on the INPUT section, which is using the CPU input plugin:

    Now when querying the metrics we get the aliases in place instead of the plugin name:

    Grafana Dashboard and Alerts

    Fluent Bit's exposed Prometheus-style metrics can be leveraged to create dashboards and alerts.

    The provided example dashboard is heavily inspired by Banzai Cloud's logging operator dashboard, but with a few key differences, such as the use of the instance label, stacked graphs, and a focus on Fluent Bit metrics.

    Alerts

    Sample alerts are available here.

    Health Check for Fluent Bit

    Fluent Bit now supports four new configuration options to set up the health check.

    Config Name
    Description
    Default Value

    Note: not every error log line is counted as an error; the error and retry failure counts apply only to the specific errors shown as examples in the config table descriptions.

    The feature works as follows: based on the configured HC_Period, if the real error count is over HC_Errors_Count or the retry failure count is over HC_Retry_Failure_Count, Fluent Bit will be considered unhealthy. The health endpoint will return HTTP status 500 and the string error. Otherwise it is healthy and will return HTTP status 200 and the string ok.

    The equation is:

    Note: HC_Errors_Count and HC_Retry_Failure_Count only count output plugin events, and they sum errors and retry failures across all running output plugins.

    See the config example:

    The command to call the health endpoint:

    Based on the Fluent Bit status, the result will be:

    • HTTP status 200 and "ok" in response to healthy status

    • HTTP status 500 and "error" in response for unhealthy status

    With the example config, the health status is determined by the following equation:

    If (HC_Errors_Count > 5) OR (HC_Retry_Failure_Count > 5) IN 5 seconds is TRUE, then it's unhealthy.

    If (HC_Errors_Count > 5) OR (HC_Retry_Failure_Count > 5) IN 5 seconds is FALSE, then it's healthy.

    Calyptia Cloud

    Calyptia Cloud is a hosted service that allows you to monitor your Fluent Bit agents, including data flow, metrics, and configurations.

    Get Started with Calyptia Cloud

    Registering your Fluent Bit agent takes less than one minute:

    • Go to cloud.calyptia.com and sign in

    • On the left menu click on Settings and generate/copy your API key

    In your Fluent Bit configuration file, append the following configuration section:

    Make sure to replace the API key in the configuration. A few seconds after restarting your Fluent Bit agent, the Calyptia Cloud Dashboard will list your agent. Metrics will take around 30 seconds to show up.

    Contact Calyptia

    If you want to get in touch with the Calyptia team, just send an email to [email protected].

    /api/v1/metrics/prometheus

    Internal metrics per loaded plugin, ready to be consumed by a Prometheus server

    Prometheus Text 0.0.4

    /api/v1/storage

    Get internal metrics of the storage layer / buffered data. This endpoint is available only if the storage.metrics property has been enabled in the SERVICE section.

    JSON

    /api/v1/health

    Fluent Bit health check result

    String

    records

    fluentbit_output_dropped_records_total

    name: the name or alias for the output instance

    The number of log records that have been dropped by the output. This means they met an unrecoverable error or retries expired for their chunk.

    counter

    records

    fluentbit_output_errors_total

    name: the name or alias for the output instance

    The number of chunks that have faced an error (either unrecoverable or retriable). This is the number of times a chunk has failed, and does not correspond with the number of error messages you see in the Fluent Bit log output.

    counter

    chunks

    fluentbit_output_proc_bytes_total

    name: the name or alias for the output instance

    The number of bytes of log records that this output instance has successfully sent. This is the total byte size of all unique chunks sent by this output. If a record is not sent due to some error, then it will not count towards this metric.

    counter

    bytes

    fluentbit_output_proc_records_total

    name: the name or alias for the output instance

    The number of log records that this output instance has successfully sent. This is the total record count of all unique chunks sent by this output. If a record is not successfully sent, it does not count towards this metric.

    counter

    records

    fluentbit_output_retried_records_total

    name: the name or alias for the output instance

    The number of log records that experienced a retry. Note that this is calculated at the chunk level; the count is increased when an entire chunk is marked for retry. An output plugin may or may not perform multiple actions that generate many error messages when uploading a single chunk.

    counter

    records

    fluentbit_output_retries_failed_total

    name: the name or alias for the output instance

    The number of times that retries expired for a chunk. Each plugin configures a Retry_Limit which applies to chunks. Once the Retry_Limit has been reached for a chunk it is discarded and this metric is incremented.

    counter

    chunks

    fluentbit_output_retries_total

    name: the name or alias for the output instance

    The number of times this output instance requested a retry for a chunk.

    counter

    chunks

    fluentbit_uptime

    The number of seconds that Fluent Bit has been running.

    counter

    seconds

    process_start_time_seconds

    The Unix Epoch time stamp for when Fluent Bit started.

    gauge

    seconds

    A chunk is "up" if it is in memory. So this is the count of chunks that are both in filesystem and in memory.

    chunks

    chunks.fs_chunks_down

    The count of chunks that are "down" and thus are only in the filesystem.

    chunks

    input_chunks.{plugin name}.status.overlimit

    Is this input instance over its configured Mem_Buf_Limit?

    boolean

    input_chunks.{plugin name}.status.mem_size

    The size of memory that this input is consuming to buffer logs in chunks.

    bytes

    input_chunks.{plugin name}.status.mem_limit

    The buffer memory limit (Mem_Buf_Limit) that applies to this input plugin.

    bytes

    input_chunks.{plugin name}.chunks.total

    The current total number of chunks owned by this input instance.

    chunks

    input_chunks.{plugin name}.chunks.up

    The current number of chunks that are "up" in memory for this input. Chunks that are "up" will also be in the filesystem layer as well if filesystem storage is enabled.

    chunks

    input_chunks.{plugin name}.chunks.down

    The current number of chunks that are "down" in the filesystem for this input.

    chunks

    input_chunks.{plugin name}.chunks.busy

    "Busy" chunks are chunks that are being processed/sent by outputs and are not eligible to have new data appended.

    chunks

    input_chunks.{plugin name}.chunks.busy_size

    The sum of the byte size of each chunk which is currently marked as busy.

    bytes

    HC_Period

    The time period, in seconds, over which to count the error and retry failure data points.

    60

    /

    Fluent Bit build information

    JSON

    /api/v1/uptime

    Get uptime information in seconds and human readable format

    JSON

    /api/v1/metrics

    Internal metrics per loaded plugin

    JSON

    fluentbit_input_bytes_total

    name: the name or alias for the input instance

    The number of bytes of log records that this input instance has successfully ingested

    counter

    bytes

    fluentbit_input_records_total

    name: the name or alias for the input instance

    The number of log records this input has successfully ingested

    counter

    records

    chunks.total_chunks

    The total number of chunks of records that Fluent Bit is currently buffering

    chunks

    chunks.mem_chunks

    The total number of chunks that are buffered in memory at this time. Note that chunks can be both in memory and on the file system at the same time.

    chunks

    chunks.fs_chunks

    The total number of chunks saved to the filesystem.

    chunks

    Health_Check

    Enable the health check feature.

    Off

    HC_Errors_Count

    The error count needed to meet the unhealthy requirement; this is a sum across all output plugins within a defined HC_Period. Example of an output error: [2022/02/16 10:44:10] [ warn] [engine] failed to flush chunk '1-1645008245.491540684.flb', retry in 7 seconds: task_id=0, input=forward.1 > output=cloudwatch_logs.3 (out_id=3)

    5

    HC_Retry_Failure_Count

    The retry failure count needed to meet the unhealthy requirement; this is a sum across all output plugins within a defined HC_Period. Example of a retry failure: [2022/02/16 20:11:36] [ warn] [engine] chunk '1-1645042288.260516436.flb' cannot be retried: task_id=0, input=tcp.3 > output=cloudwatch_logs.1

    5



    [SERVICE]
        HTTP_Server  On
        HTTP_Listen  0.0.0.0
        HTTP_PORT    2020
    
    [INPUT]
        Name cpu
    
    [OUTPUT]
        Name  stdout
        Match *
    $ bin/fluent-bit -c fluent-bit.conf
    Fluent Bit v1.4.0
    * Copyright (C) 2019-2020 The Fluent Bit Authors
    * Copyright (C) 2015-2018 Treasure Data
    * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
    * https://fluentbit.io
    
    [2020/03/10 19:08:24] [ info] [engine] started
    [2020/03/10 19:08:24] [ info] [http_server] listen iface=0.0.0.0 tcp_port=2020
    $ curl -s http://127.0.0.1:2020 | jq
    {
      "fluent-bit": {
        "version": "0.13.0",
        "edition": "Community",
        "flags": [
          "FLB_HAVE_TLS",
          "FLB_HAVE_METRICS",
          "FLB_HAVE_SQLDB",
          "FLB_HAVE_TRACE",
          "FLB_HAVE_HTTP_SERVER",
          "FLB_HAVE_FLUSH_LIBCO",
          "FLB_HAVE_SYSTEMD",
          "FLB_HAVE_VALGRIND",
          "FLB_HAVE_FORK",
          "FLB_HAVE_PROXY_GO",
          "FLB_HAVE_REGEX",
          "FLB_HAVE_C_TLS",
          "FLB_HAVE_SETJMP",
          "FLB_HAVE_ACCEPT4",
          "FLB_HAVE_INOTIFY"
        ]
      }
    }
    $ curl -s http://127.0.0.1:2020/api/v1/uptime | jq
    {
      "uptime_sec": 8950000,
      "uptime_hr": "Fluent Bit has been running:  103 days, 14 hours, 6 minutes and 40 seconds"
    }
    $ curl -s http://127.0.0.1:2020/api/v1/metrics | jq
    {
      "input": {
        "cpu.0": {
          "records": 8,
          "bytes": 2536
        }
      },
      "output": {
        "stdout.0": {
          "proc_records": 5,
          "proc_bytes": 1585,
          "errors": 0,
          "retries": 0,
          "retries_failed": 0
        }
      }
    }
    $ curl -s http://127.0.0.1:2020/api/v1/metrics/prometheus
    fluentbit_input_records_total{name="cpu.0"} 57 1509150350542
    fluentbit_input_bytes_total{name="cpu.0"} 18069 1509150350542
    fluentbit_output_proc_records_total{name="stdout.0"} 54 1509150350542
    fluentbit_output_proc_bytes_total{name="stdout.0"} 17118 1509150350542
    fluentbit_output_errors_total{name="stdout.0"} 0 1509150350542
    fluentbit_output_retries_total{name="stdout.0"} 0 1509150350542
    fluentbit_output_retries_failed_total{name="stdout.0"} 0 1509150350542
    [SERVICE]
        HTTP_Server  On
        HTTP_Listen  0.0.0.0
        HTTP_PORT    2020
    
    [INPUT]
        Name  cpu
        Alias server1_cpu
    
    [OUTPUT]
        Name  stdout
        Alias raw_output
        Match *
    {
      "input": {
        "server1_cpu": {
          "records": 8,
          "bytes": 2536
        }
      },
      "output": {
        "raw_output": {
          "proc_records": 5,
          "proc_bytes": 1585,
          "errors": 0,
          "retries": 0,
          "retries_failed": 0
        }
      }
    }
    health status = (HC_Errors_Count > HC_Errors_Count config value) OR (HC_Retry_Failure_Count > HC_Retry_Failure_Count config value) IN the HC_Period interval
    [SERVICE]
        HTTP_Server  On
        HTTP_Listen  0.0.0.0
        HTTP_PORT    2020
        Health_Check On 
        HC_Errors_Count 5 
        HC_Retry_Failure_Count 5 
        HC_Period 5 
    
    [INPUT]
        Name  cpu
    
    [OUTPUT]
        Name  stdout
        Match *
    $ curl -s http://127.0.0.1:2020/api/v1/health
    Health status = (HC_Errors_Count > 5) OR (HC_Retry_Failure_Count > 5) IN 5 seconds
    [CUSTOM]
        name     calyptia
        api_key  <YOUR_API_KEY>

    NGINX Exporter Metrics

    The NGINX Exporter Metrics input plugin scrapes metrics from the NGINX stub status handler.

    Configuration Parameters

    The plugin supports the following configuration parameters:

    Key
    Description
    Default

    Getting Started

    NGINX must be configured with a location that invokes the stub status handler. Here is an example configuration with such a location:

    Configuration with NGINX Plus REST API

    A much more powerful and flexible metrics API is available with NGINX Plus. A path needs to be configured in NGINX Plus first.

    Command Line

    From the command line you can let Fluent Bit scrape NGINX metrics with the following options:

    To gather metrics from the command line with the NGINX Plus REST API we need to turn on the nginx_plus property, like so:

    Configuration File

    In your main configuration file append the following Input & Output sections:

    And for NGINX Plus API:

    Testing

    You can quickly test against the NGINX server running on localhost by invoking it directly from the command line:

    Exported Metrics

    This documentation is copied from the NGINX Prometheus exporter metrics documentation: https://github.com/nginxinc/nginx-prometheus-exporter/blob/master/README.md.

    Common metrics:

    Name
    Type
    Description
    Labels

    Metrics for NGINX OSS:

    Name
    Type
    Description
    Labels

    Metrics for NGINX Plus:

    Name
    Type
    Description
    Labels

    Name
    Type
    Description
    Labels

    Name
    Type
    Description
    Labels

    Name
    Type
    Description
    Labels

    Name
    Type
    Description
    Labels

    Note: for the state metric, the string values are converted to float64 using the following rule: "up" -> 1.0, "draining" -> 2.0, "down" -> 3.0, "unavail" -> 4.0, "checking" -> 5.0, "unhealthy" -> 6.0.

    Name
    Type
    Description
    Labels

    Note: for the state metric, the string values are converted to float64 using the following rule: "up" -> 1.0, "down" -> 3.0, "unavail" -> 4.0, "checking" -> 5.0, "unhealthy" -> 6.0.

    Name
    Type
    Description
    Labels

    Name
    Type
    Description
    Labels

    nginx_connections_handled

    Counter

    Handled client connections.

    []

    nginx_connections_reading

    Gauge

    Connections where NGINX is reading the request header.

    []

    nginx_connections_waiting

    Gauge

    Idle client connections.

    []

    nginx_connections_writing

    Gauge

    Connections where NGINX is writing the response back to the client.

    []

    nginx_http_requests_total

    Counter

    Total http requests.

    []

    nginxplus_connections_dropped

    Counter

    Dropped client connections

    []

    nginxplus_connections_idle

    Gauge

    Idle client connections

    []

    nginxplus_ssl_session_reuses

    Counter

    Session reuses during SSL handshake

    []

    nginxplus_server_zone_responses

    Counter

    Total responses sent to clients

    code (the response status code. The values are: 1xx, 2xx, 3xx, 4xx and 5xx), server_zone

    nginxplus_server_zone_discarded

    Counter

    Requests completed without sending a response

    server_zone

    nginxplus_server_zone_received

    Counter

    Bytes received from clients

    server_zone

    nginxplus_server_zone_sent

    Counter

    Bytes sent to clients

    server_zone

    nginxplus_stream_server_zone_sessions

    Counter

    Total sessions completed

    code (the response status code. The values are: 2xx, 4xx, and 5xx), server_zone

    nginxplus_stream_server_zone_discarded

    Counter

    Connections completed without creating a session

    server_zone

    nginxplus_stream_server_zone_received

    Counter

    Bytes received from clients

    server_zone

    nginxplus_stream_server_zone_sent

    Counter

    Bytes sent to clients

    server_zone

    nginxplus_upstream_server_limit

    Gauge

    Limit for connections, which corresponds to the max_conns parameter of the upstream server. A zero value means there is no limit

    server, upstream

    nginxplus_upstream_server_requests

    Counter

    Total client requests

    server, upstream

    nginxplus_upstream_server_responses

    Counter

    Total responses sent to clients

    code (the response status code. The values are: 1xx, 2xx, 3xx, 4xx and 5xx), server, upstream

    nginxplus_upstream_server_sent

    Counter

    Bytes sent to this server

    server, upstream

    nginxplus_upstream_server_received

    Counter

    Bytes received to this server

    server, upstream

    nginxplus_upstream_server_fails

    Counter

    Number of unsuccessful attempts to communicate with the server

    server, upstream

    nginxplus_upstream_server_unavail

    Counter

    How many times the server became unavailable for client requests (state 'unavail') due to the number of unsuccessful attempts reaching the max_fails threshold

    server, upstream

    nginxplus_upstream_server_header_time

    Gauge

    Average time to get the response header from the server

    server, upstream

    nginxplus_upstream_server_response_time

    Gauge

    Average time to get the full response from the server

    server, upstream

    nginxplus_upstream_keepalives

    Gauge

    Idle keepalive connections

    upstream

    nginxplus_upstream_zombies

    Gauge

    Servers removed from the group but still processing active client requests

    upstream

    nginxplus_stream_upstream_server_limit

    Gauge

    Limit for connections, which corresponds to the max_conns parameter of the upstream server. A zero value means there is no limit

    server, upstream

    nginxplus_stream_upstream_server_connections

    Counter

    Total number of client connections forwarded to this server

    server, upstream

    nginxplus_stream_upstream_server_connect_time

    Gauge

    Average time to connect to the upstream server

    server, upstream

    nginxplus_stream_upstream_server_first_byte_time

    Gauge

    Average time to receive the first byte of data

    server, upstream

    nginxplus_stream_upstream_server_response_time

    Gauge

    Average time to receive the last byte of data

    server, upstream

    nginxplus_stream_upstream_server_sent

    Counter

    Bytes sent to this server

    server, upstream

    nginxplus_stream_upstream_server_received

    Counter

    Bytes received from this server

    server, upstream

    nginxplus_stream_upstream_server_fails

    Counter

    Number of unsuccessful attempts to communicate with the server

    server, upstream

    nginxplus_stream_upstream_server_unavail

    Counter

    How many times the server became unavailable for client connections (state 'unavail') due to the number of unsuccessful attempts reaching the max_fails threshold

    server, upstream

    nginxplus_stream_upstream_zombies

    Gauge

    Servers removed from the group but still processing active client connections

    upstream

    nginxplus_location_zone_discarded

    Counter

    Requests completed without sending a response

    location_zone

    nginxplus_location_zone_received

    Counter

    Bytes received from clients

    location_zone

    nginxplus_location_zone_sent

    Counter

    Bytes sent to clients

    location_zone

    Host

    Name of the target host or IP address to check.

    localhost

    Port

    Port of the target nginx service to connect to.

    80

    Status_URL

    The URL of the Stub Status Handler.

    /status

    Nginx_Plus

    Turn on NGINX plus mode.

    true

    nginx_up

    Gauge

    Shows the status of the last metric scrape: 1 for a successful scrape and 0 for a failed one

    []

    nginx_connections_accepted

    Counter

    Accepted client connections.

    []

    nginx_connections_active

    Gauge

    Active client connections.

    []

    nginxplus_connections_accepted

    Counter

    Accepted client connections

    []

    nginxplus_connections_active

    Gauge

    Active client connections

    []

    nginxplus_http_requests_total

    Counter

    Total http requests

    []

    nginxplus_http_requests_current

    Gauge

    Current http requests

    []

    nginxplus_ssl_handshakes

    Counter

    Successful SSL handshakes

    []

    nginxplus_ssl_handshakes_failed

    Counter

    Failed SSL handshakes

    []

    nginxplus_server_zone_processing

    Gauge

    Client requests that are currently being processed

    server_zone

    nginxplus_server_zone_requests

    Counter

    Total client requests

    server_zone

    nginxplus_stream_server_zone_processing

    Gauge

    Client connections that are currently being processed

    server_zone

    nginxplus_stream_server_zone_connections

    Counter

    Total connections

    server_zone

    nginxplus_upstream_server_state

    Gauge

    Current state

    server, upstream

    nginxplus_upstream_server_active

    Gauge

    Active connections

    server, upstream

    nginxplus_stream_upstream_server_state

    Gauge

    Current state

    server, upstream

    nginxplus_stream_upstream_server_active

    Gauge

    Active connections

    server, upstream

    nginxplus_location_zone_requests

    Counter

    Total client requests

    location_zone

    nginxplus_location_zone_responses

    Counter

    Total responses sent to clients

    code (the response status code. The values are: 1xx, 2xx, 3xx, 4xx and 5xx), location_zone


    server {
        listen       80;
        listen  [::]:80;
        server_name  localhost;
        location / {
            root   /usr/share/nginx/html;
            index  index.html index.htm;
        }
        # configure the stub status handler.
        location /status {
            stub_status;
        }
    }
    server {
    	listen       80;
    	listen  [::]:80;
    	server_name  localhost;
    
    	# enable /api/ location with appropriate access control in order
    	# to make use of NGINX Plus API
    	#
    	location /api/ {
    		api write=on;
    		# configure to allow requests from the server running fluent-bit
    		allow 192.168.1.*;
    		deny all;
    	}
    }
    $ fluent-bit -i nginx_metrics -p host=127.0.0.1 -p port=80 -p status_url=/status -p nginx_plus=off -o stdout
    $ fluent-bit -i nginx_metrics -p host=127.0.0.1 -p port=80 -p nginx_plus=on -p status_url=/api -o stdout
    [INPUT]
        Name          nginx_metrics
        Host          127.0.0.1
        Port          80
        Status_URL    /status
        Nginx_Plus    off
    
    [OUTPUT]
        Name   stdout
        Match  *
    [INPUT]
        Name          nginx_metrics
        Nginx_Plus    on
        Host          127.0.0.1
        Port          80
        Status_URL    /api
    
    [OUTPUT]
        Name   stdout
        Match  *
    $ fluent-bit -i nginx_metrics -p host=127.0.0.1 -p nginx_plus=off -o stdout -p match=* -f 1
    Fluent Bit v2.x.x
    * Copyright (C) 2019-2020 The Fluent Bit Authors
    * Copyright (C) 2015-2018 Treasure Data
    * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
    * https://fluentbit.io
    
    2021-10-14T19:37:37.228691854Z nginx_connections_accepted = 788253884
    2021-10-14T19:37:37.228691854Z nginx_connections_handled = 788253884
    2021-10-14T19:37:37.228691854Z nginx_http_requests_total = 42045501
    2021-10-14T19:37:37.228691854Z nginx_connections_active = 2009
    2021-10-14T19:37:37.228691854Z nginx_connections_reading = 0
    2021-10-14T19:37:37.228691854Z nginx_connections_writing = 1
    2021-10-14T19:37:37.228691854Z nginx_connections_waiting = 2008
    2021-10-14T19:37:35.229919621Z nginx_up = 1

    Build and Install

    Fluent Bit uses CMake as its build system. The suggested procedure to prepare the build system consists of the following steps:

    Requirements

    • CMake >= 3.0

    • Flex

    • Bison

    • YAML headers

    • OpenSSL headers

    Prepare environment

In the following steps you can find the exact commands to build and install the project with the default options. If you already know how CMake works, you can skip this part and look at the build options available. Note that Fluent Bit requires CMake 3.x; you may need to use cmake3 instead of cmake to complete the following steps on your system.

Change to the build/ directory inside the Fluent Bit sources:

$ cd build/

Configure the project, specifying where the root path is located:

$ cmake ../
-- The C compiler identification is GNU 4.9.2
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- The CXX compiler identification is GNU 4.9.2
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
...
-- Could NOT find Doxygen (missing:  DOXYGEN_EXECUTABLE)
-- Looking for accept4
-- Looking for accept4 - not found
-- Configuring done
-- Generating done
-- Build files have been written to: /home/edsiper/coding/fluent-bit/build

Now you are ready to start the compilation process through the simple make command:

$ make
Scanning dependencies of target msgpack
[  2%] Building C object lib/msgpack-1.1.0/CMakeFiles/msgpack.dir/src/unpack.c.o
[  4%] Building C object lib/msgpack-1.1.0/CMakeFiles/msgpack.dir/src/objectc.c.o
[  7%] Building C object lib/msgpack-1.1.0/CMakeFiles/msgpack.dir/src/version.c.o
...
[ 19%] Building C object lib/monkey/mk_core/CMakeFiles/mk_core.dir/mk_file.c.o
[ 21%] Building C object lib/monkey/mk_core/CMakeFiles/mk_core.dir/mk_rconf.c.o
[ 23%] Building C object lib/monkey/mk_core/CMakeFiles/mk_core.dir/mk_string.c.o
...
Scanning dependencies of target fluent-bit-static
[ 66%] Building C object src/CMakeFiles/fluent-bit-static.dir/flb_pack.c.o
[ 69%] Building C object src/CMakeFiles/fluent-bit-static.dir/flb_input.c.o
[ 71%] Building C object src/CMakeFiles/fluent-bit-static.dir/flb_output.c.o
...
Linking C executable ../bin/fluent-bit
[100%] Built target fluent-bit-bin

To install the binary on the system, run:

$ make install

You will likely need root privileges for this step, so prefix the command with sudo if necessary.
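CMake installs under /usr/local by default. To install somewhere else, set the standard CMAKE_INSTALL_PREFIX variable at configure time; a minimal sketch (the prefix path is illustrative):

$ cmake -DCMAKE_INSTALL_PREFIX=/opt/fluent-bit ../
$ make
$ sudo make install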

    Build Options

Fluent Bit provides a number of CMake options that can be enabled or disabled at configure time. Refer to the tables in the General Options, Development Options, Optimization Options, Input Plugins, Filter Plugins, and Output Plugins sections below.
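Each option is passed to cmake as a -D definition at configure time. For example, a configure invocation that enables debug symbols, turns on the Kafka output plugin, and embeds a static configuration directory might look like this (the directory path is illustrative):

$ cmake -DFLB_DEBUG=On -DFLB_OUT_KAFKA=On -DFLB_STATIC_CONF=/etc/fluent-bit/static ../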

    General Options

| option | description | default |
| :--- | :--- | :--- |
| FLB_ALL | Enable all features available | No |
| FLB_JEMALLOC | Use Jemalloc as default memory allocator | No |
| FLB_TLS | Build with SSL/TLS support | Yes |
| FLB_BINARY | Build executable | Yes |
| FLB_EXAMPLES | Build examples | Yes |
| FLB_SHARED_LIB | Build shared library | Yes |
| FLB_MTRACE | Enable mtrace support | No |
| FLB_INOTIFY | Enable Inotify support | Yes |
| FLB_POSIX_TLS | Force POSIX thread storage | No |
| FLB_SQLDB | Enable SQL embedded database support | No |
| FLB_HTTP_SERVER | Enable HTTP Server | No |
| FLB_LUAJIT | Enable Lua scripting support | Yes |
| FLB_RECORD_ACCESSOR | Enable record accessor | Yes |
| FLB_SIGNV4 | Enable AWS Signv4 support | Yes |
| FLB_STATIC_CONF | Build binary using static configuration files. The value of this option must be a directory containing configuration files. |  |
| FLB_STREAM_PROCESSOR | Enable Stream Processor | Yes |
| FLB_CONFIG_YAML | Enable YAML configuration support | Yes |
| FLB_WASM | Build with WASM runtime support | Yes |
| FLB_WAMRC | Build with WASM AOT compiler executable | No |

    Development Options

| option | description | default |
| :--- | :--- | :--- |
| FLB_DEBUG | Build binaries with debug symbols | No |
| FLB_VALGRIND | Enable Valgrind support | No |
| FLB_TRACE | Enable trace mode | No |
| FLB_SMALL | Minimise binary size | No |
| FLB_TESTS_RUNTIME | Enable runtime tests | No |
| FLB_TESTS_INTERNAL | Enable internal tests | No |
| FLB_TESTS | Enable tests | No |
| FLB_BACKTRACE | Enable backtrace/stacktrace support | Yes |

    Optimization Options

| option | description | default |
| :--- | :--- | :--- |
| FLB_MSGPACK_TO_JSON_INIT_BUFFER_SIZE | Determine initial buffer size for msgpack to json conversion in terms of memory used by payload. | 2.0 |
| FLB_MSGPACK_TO_JSON_REALLOC_BUFFER_SIZE | Determine percentage of reallocation size when msgpack to json conversion buffer runs out of memory. | 0.1 |

    Input Plugins

The input plugins provide features to gather information from a specific source type, which can be a network interface, a built-in metric, or a specific input device. The following input plugins are available:

| option | description | default |
| :--- | :--- | :--- |
| FLB_IN_COLLECTD | Enable Collectd input plugin | On |
| FLB_IN_CPU | Enable CPU input plugin | On |
| FLB_IN_DISK | Enable Disk I/O Metrics input plugin | On |
| FLB_IN_DOCKER | Enable Docker metrics input plugin | On |
| FLB_IN_EXEC | Enable Exec input plugin | On |
| FLB_IN_EXEC_WASI | Enable Exec WASI input plugin | On |
| FLB_IN_FLUENTBIT_METRICS | Enable Fluent Bit metrics input plugin | On |
| FLB_IN_FORWARD | Enable Forward input plugin | On |
| FLB_IN_HEAD | Enable Head input plugin | On |
| FLB_IN_HEALTH | Enable Health input plugin | On |
| FLB_IN_KMSG | Enable Kernel log input plugin | On |
| FLB_IN_MEM | Enable Memory input plugin | On |
| FLB_IN_MQTT | Enable MQTT Server input plugin | On |
| FLB_IN_NETIF | Enable Network I/O metrics input plugin | On |
| FLB_IN_PROC | Enable Process monitoring input plugin | On |
| FLB_IN_RANDOM | Enable Random input plugin | On |
| FLB_IN_SERIAL | Enable Serial input plugin | On |
| FLB_IN_STDIN | Enable Standard input plugin | On |
| FLB_IN_SYSLOG | Enable Syslog input plugin | On |
| FLB_IN_SYSTEMD | Enable Systemd / Journald input plugin | On |
| FLB_IN_TAIL | Enable Tail (follow files) input plugin | On |
| FLB_IN_TCP | Enable TCP input plugin | On |
| FLB_IN_THERMAL | Enable system temperature(s) input plugin | On |
| FLB_IN_WINLOG | Enable Windows Event Log input plugin (Windows Only) | On |
| FLB_IN_WINEVTLOG | Enable Windows Event Log input plugin using winevt.h API (Windows Only) | On |
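Since every plugin is a separate CMake switch, unneeded inputs can be disabled to slim down the build; for example, a sketch combining options from the table above:

$ cmake -DFLB_IN_MQTT=Off -DFLB_IN_SERIAL=Off ../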

    Filter Plugins

The filter plugins allow records to be modified, enriched, or dropped. The following table describes the filters available in this version:

| option | description | default |
| :--- | :--- | :--- |
| FLB_FILTER_AWS | Enable AWS metadata filter | On |
| FLB_FILTER_ECS | Enable AWS ECS metadata filter | On |
| FLB_FILTER_EXPECT | Enable Expect data test filter | On |
| FLB_FILTER_GREP | Enable Grep filter | On |
| FLB_FILTER_KUBERNETES | Enable Kubernetes metadata filter | On |
| FLB_FILTER_LUA | Enable Lua scripting filter | On |
| FLB_FILTER_MODIFY | Enable Modify filter | On |
| FLB_FILTER_NEST | Enable Nest filter | On |
| FLB_FILTER_PARSER | Enable Parser filter | On |
| FLB_FILTER_RECORD_MODIFIER | Enable Record Modifier filter | On |
| FLB_FILTER_REWRITE_TAG | Enable Rewrite Tag filter | On |
| FLB_FILTER_STDOUT | Enable Stdout filter | On |
| FLB_FILTER_THROTTLE | Enable Throttle filter | On |
| FLB_FILTER_WASM | Enable WASM filter | On |

    Output Plugins

The output plugins provide the ability to flush the information to an external interface, service, or terminal. The following table describes the output plugins available as of this version:

| option | description | default |
| :--- | :--- | :--- |
| FLB_OUT_AZURE | Enable Microsoft Azure output plugin | On |
| FLB_OUT_AZURE_KUSTO | Enable Azure Kusto output plugin | On |
| FLB_OUT_BIGQUERY | Enable Google BigQuery output plugin | On |
| FLB_OUT_COUNTER | Enable Counter output plugin | On |
| FLB_OUT_CLOUDWATCH_LOGS | Enable Amazon CloudWatch output plugin | On |
| FLB_OUT_DATADOG | Enable Datadog output plugin | On |
| FLB_OUT_ES | Enable Elastic Search output plugin | On |
| FLB_OUT_FILE | Enable File output plugin | On |
| FLB_OUT_KINESIS_FIREHOSE | Enable Amazon Kinesis Data Firehose output plugin | On |
| FLB_OUT_KINESIS_STREAMS | Enable Amazon Kinesis Data Streams output plugin | On |
| FLB_OUT_FLOWCOUNTER | Enable Flowcounter output plugin | On |
| FLB_OUT_FORWARD | Enable Fluentd output plugin | On |
| FLB_OUT_GELF | Enable Gelf output plugin | On |
| FLB_OUT_HTTP | Enable HTTP output plugin | On |
| FLB_OUT_INFLUXDB | Enable InfluxDB output plugin | On |
| FLB_OUT_KAFKA | Enable Kafka output | Off |
| FLB_OUT_KAFKA_REST | Enable Kafka REST Proxy output plugin | On |
| FLB_OUT_LIB | Enable Lib output plugin | On |
| FLB_OUT_NATS | Enable NATS output plugin | On |
| FLB_OUT_NULL | Enable NULL output plugin | On |
| FLB_OUT_PGSQL | Enable PostgreSQL output plugin | On |
| FLB_OUT_PLOT | Enable Plot output plugin | On |
| FLB_OUT_SLACK | Enable Slack output plugin | On |
| FLB_OUT_S3 | Enable Amazon S3 output plugin | On |
| FLB_OUT_SPLUNK | Enable Splunk output plugin | On |
| FLB_OUT_STACKDRIVER | Enable Google Stackdriver output plugin | On |
| FLB_OUT_STDOUT | Enable STDOUT output plugin | On |
| FLB_OUT_TCP | Enable TCP/TLS output plugin | On |
| FLB_OUT_TD | Enable Treasure Data output plugin | On |
