Requirements

Fluent Bit has very low CPU and memory consumption and is compatible with most x86, x86_64, arm32v7 and arm64v8 based platforms. In order to build it you need the following components in your system:

  • Compiler: GCC or clang

  • CMake

  • Flex & Bison: only if you enable the Stream Processor or Record Accessor feature (both enabled by default)

The core has no other dependencies. Certain features that depend on third-party components, like output plugins with special backend libraries (e.g. Kafka), are included in the main source code repository.

Amazon EC2

Learn how to install Fluent Bit and the AWS output plugins on Amazon Linux 2 via AWS Systems Manager.


Download Source Code

Stable

For production systems, we strongly suggest that you always use the latest stable release from our web site. You can get the official tarballs (.tar.gz) from the following link:

https://fluentbit.io/download/

Development

For people who aim to contribute to the project by testing or extending the code base, the development version is available from our Git repository:

$ git clone https://github.com/fluent/fluent-bit

Note that our master branch is where the development of Fluent Bit happens. Since it's a development version, expect issues when compiling or at run time.

We encourage everybody to help us test every development version; in the end, this is what will become stable.

Input

The way to gather data from your sources

Fluent Bit provides different input plugins to gather information from different sources; some just collect data from log files, while others can gather metrics information from the operating system. There are many plugins for different needs.

When an input plugin is loaded, an internal instance is created. Every instance has its own independent configuration. Configuration keys are often called properties.

Every input plugin has its own documentation section where it's specified how it can be used and what properties are available.

For more details, please refer to the Input Plugins section.

A Brief History of Fluent Bit

Every project has a story

In 2014, the team at Treasure Data forecasted the need for a lightweight log processor for constrained environments like embedded Linux and gateways. The project aimed to be part of the Fluentd ecosystem, and we called it Fluent Bit, fully open source and available under the terms of the Apache License v2.0.

After the project had been around for some time, it gained traction in the embedded market, but we also started getting requests for several features from the Cloud community, like more inputs, filters, and outputs. Not long after that, Fluent Bit became one of the preferred solutions to solve logging challenges in Cloud environments.

Output

Destinations for your data: databases, cloud services and more!

The output interface allows us to define destinations for the data. Common destinations are remote services, the local file system, or a standard interface with other programs. Outputs are implemented as plugins and there are many available.

When an output plugin is loaded, an internal instance is created. Every instance has its own independent configuration. Configuration keys are often called properties.

Every output plugin has its own documentation section specifying how it can be used and what properties are available.

For more details, please refer to the Output Plugins section.

Containers on AWS

AWS maintains a distribution of Fluent Bit combining the latest official release with a set of Go Plugins for sending logs to AWS services. AWS and Fluent Bit are working together to rewrite their plugins for inclusion in the official Fluent Bit distribution.

Plugins

Currently, the image contains Go plugins for:

  • Amazon CloudWatch Logs

  • Amazon Kinesis Firehose

  • Amazon Kinesis Streams

Buffering

Performance and Data Safety

When Fluent Bit processes data, it uses the system memory (heap) as a primary and temporary place to store the record logs before they get delivered. The records are processed in this private memory area.

Buffering refers to the ability to store the records somewhere and, while they are processed and delivered, still be able to store more. Buffering in memory is the fastest mechanism, but there are certain scenarios where the mechanism requires special strategies to deal with backpressure and data safety, or to reduce memory consumption by the service in constrained environments.

Network failures or latency in third-party services are pretty common, and in scenarios where we cannot deliver data as fast as we receive it, we will likely face backpressure.

Our buffering strategies are designed to solve problems associated with backpressure and general delivery failures.

Filter

Modify, Enrich or Drop your records

In production environments we want to have full control of the data we are collecting. Filtering is an important feature that allows us to alter the data before delivering it to some destination.

Filtering is implemented through plugins, so each filter available could be used to match, exclude or enrich your logs with some specific metadata.

We support many filters. A common use case for filtering is Kubernetes deployments, where every Pod log needs the proper metadata associated with it.

Very similar to input plugins, filters run in an instance context, which has its own independent configuration. Configuration keys are often called properties.

For more details about the filters available and their usage, please refer to the Filters section.


As part of its buffering strategies, Fluent Bit offers a primary buffering mechanism in memory and an optional secondary one using the file system. With this hybrid solution you can accommodate any use case safely and keep high performance while processing your data.

Both mechanisms are not exclusive: when data is ready to be processed or delivered it will always be in memory, while other data in the queue might remain in the file system until it is ready to be processed and moved up to memory.

To learn more about the buffering configuration in Fluent Bit, please jump to the Buffering & Storage section.

Fluent Bit
backpressure

Versions and Regional Repositories

AWS vends their container image via Docker Hub, and a set of highly available regional Amazon ECR repositories. For more information, see the AWS for Fluent Bit GitHub repo.

The AWS for Fluent Bit image uses a custom versioning scheme because it contains multiple projects. To see what each release contains, check out the release notes on GitHub.

SSM Public Parameters

AWS vends SSM Public Parameters with the regional repository link for each image. These parameters can be queried by any AWS account.

To see a list of available version tags in a given region, run the following command:

$ aws ssm get-parameters-by-path --region eu-central-1 --path /aws/service/aws-for-fluent-bit/ --query 'Parameters[*].Name'

To see the ECR repository URI for a given image tag in a given region, run the following:

$ aws ssm get-parameter --region ap-northeast-1 --name /aws/service/aws-for-fluent-bit/2.0.0

You can use these SSM public parameters as parameters in your CloudFormation templates:

Parameters:
  FireLensImage:
    Description: Fluent Bit image for the FireLens Container
    Type: AWS::SSM::Parameter::Value<String>
    Default: /aws/service/aws-for-fluent-bit/latest

What is Fluent Bit?

Fluent Bit is a CNCF sub-project under the umbrella of Fluentd

Fluent Bit is an open source and multi-platform log processor tool which aims to be a generic Swiss Army knife for log processing and distribution.

    Nowadays the number of sources of information in our environments is ever increasing. Handling data collection at scale is complex, and collecting and aggregating diverse data requires a specialized tool that can deal with:

    • Different sources of information

    • Different data formats

    • Data Reliability

    • Security

    • Flexible Routing

    • Multiple destinations

Fluent Bit has been designed with performance and low resource consumption in mind.

    Parser

    Convert Unstructured to Structured messages

    Dealing with raw strings or unstructured messages is a constant pain; having a structure is highly desired. Ideally we want to set a structure to the incoming data by the Input Plugins as soon as they are collected:

    The Parser allows you to convert from unstructured to structured data. As a demonstrative example consider the following Apache (HTTP Server) log entry:

    192.168.2.20 - - [28/Jul/2006:10:27:10 -0300] "GET /cgi-bin/try/ HTTP/1.0" 200 3395

The above log line is a raw string without format; ideally we would like to give it a structure that can be processed easily later. With the proper configuration, the log entry could be converted to:

    {
      "host":    "192.168.2.20",
      "user":    "-",
      "method":  "GET",
      "path":    "/cgi-bin/try/",
      "code":    "200",
      "size":    "3395",
      "referer": "",
      "agent":   ""
     }
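To make the transformation concrete, here is an illustrative Python sketch of the same parsing step (this is not Fluent Bit's implementation; the pattern below is a simplified stand-in for an Apache access-log regex):

```python
import re

# A simplified pattern in the spirit of an Apache common-log parser
# (illustrative stand-in; Fluent Bit ships its own regex in C).
APACHE_PATTERN = re.compile(
    r'^(?P<host>[^ ]*) [^ ]* (?P<user>[^ ]*) \[(?P<time>[^\]]*)\] '
    r'"(?P<method>\S+) (?P<path>[^ ]*) [^"]*" '
    r'(?P<code>[^ ]*) (?P<size>[^ ]*)$'
)

def parse_apache_line(line: str) -> dict:
    """Turn one raw Apache access-log line into a structured record."""
    match = APACHE_PATTERN.match(line)
    if match is None:
        raise ValueError("line does not match the Apache common log format")
    return match.groupdict()

record = parse_apache_line(
    '192.168.2.20 - - [28/Jul/2006:10:27:10 -0300] "GET /cgi-bin/try/ HTTP/1.0" 200 3395'
)
```

Running it on the log line above yields the host, user, method, path, code and size fields shown in the structured record.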

Parsers are fully configurable and are independently and optionally handled by each input plugin. For more details, please refer to the Parsers section.

    Amazon Linux

    Install on Amazon Linux 2

Fluent Bit is distributed as the td-agent-bit package and is available for the latest Amazon Linux 2. The following architectures are supported:

    • x86_64

    • aarch64 / arm64v8

    Configure Yum

We provide td-agent-bit through a Yum repository. To add the repository reference to your system, create a new file called td-agent-bit.repo in /etc/yum.repos.d/ with the following content:

Note: we encourage you to always enable gpgcheck for security reasons. All our packages are signed.

    The GPG Key fingerprint is F209 D876 2A60 CD49 E680 633B 4FF8 368B 6EA0 722A

    Install

Once your repository is configured, run the following command to install it:

The next step is to instruct systemd to enable the service:

If you do a status check, you should see output similar to this:

The default configuration of td-agent-bit collects CPU usage metrics and sends the records to the standard output; you can see the outgoing data in your /var/log/messages file.

    Variables

    Fluent Bit supports the usage of environment variables in any value associated to a key when using a configuration file.

    The variables are case sensitive and can be used in the following format:

    ${MY_VARIABLE}

    When Fluent Bit starts, the configuration reader will detect any request for ${MY_VARIABLE} and will try to resolve its value.
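The resolution step can be illustrated with a short Python sketch (illustrative only; how Fluent Bit treats unset variables is not covered by this section, so the empty-string fallback below is an assumption of the sketch):

```python
import os
import re

def resolve_variables(text: str) -> str:
    """Replace every ${NAME} with the value of environment variable NAME.

    Sketch of the behavior described above; substituting an empty string
    for unset variables is an assumption, not documented Fluent Bit
    behavior. Variable names are case sensitive, as in the docs.
    """
    return re.sub(r"\$\{(\w+)\}",
                  lambda m: os.environ.get(m.group(1), ""),
                  text)

os.environ["MY_VARIABLE"] = "stdout"
resolved = resolve_variables("Name  ${MY_VARIABLE}")
```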

    Example

    Create the following configuration file (fluent-bit.conf):

    Open a terminal and set the environment variable:

The above command sets the value 'stdout' to the variable MY_OUTPUT.

    Run Fluent Bit with the recently created configuration file:

As you can see, the service worked properly, as the configuration was valid.

    Raspbian / Raspberry Pi

Fluent Bit is distributed as the td-agent-bit package and is available for the Raspberry Pi, specifically for the Raspbian distribution. The following versions are supported:

    • Raspbian Buster (10)

    • Raspbian Stretch (9)

    • Raspbian Jessie (8)

    Server GPG key

The first step is to add our server GPG key to your keyring; that way you can get our signed packages:

$ wget -qO - https://packages.fluentbit.io/fluentbit.key | sudo apt-key add -

    Update your sources lists

On Debian and derived systems such as Raspbian, you need to add our APT server entry to your sources lists. Please add the following content at the bottom of your /etc/apt/sources.list file:

    Raspbian 10 (Buster)

    Raspbian 9 (Stretch)

    Raspbian 8 (Jessie)

    Update your repositories database

    Now let your system update the apt database:

    Install TD-Agent Bit

Using the following apt-get command, you can now install the latest td-agent-bit:

The next step is to instruct systemd to enable the service:

If you do a status check, you should see output similar to this:

The default configuration of td-agent-bit collects CPU usage metrics and sends the records to the standard output; you can see the outgoing data in your /var/log/syslog file.

    Unit Sizes

Certain configuration directives in Fluent Bit refer to unit sizes, for example when defining the size of a buffer or specific limits. These can be found in plugins like Tail Input and Forward Input, or in generic properties like Mem_Buf_Limit.

Starting from Fluent Bit v0.11.10, all unit sizes have been standardized across the core and plugins. The following table describes the options that can be used and what they mean:

Suffix        Description                                               Example
(none)        When a suffix is not specified, the value is assumed      Specifying a value of 32000 means 32000 bytes.
              to be a number of bytes.
k, K, KB, kb  Kilobyte: a unit of memory equal to 1,000 bytes.          32k means 32000 bytes.
m, M, MB, mb  Megabyte: a unit of memory equal to 1,000,000 bytes.      1M means 1000000 bytes.
g, G, GB, gb  Gigabyte: a unit of memory equal to 1,000,000,000 bytes.  1G means 1000000000 bytes.
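As an illustration of these rules (not Fluent Bit's actual implementation), a small Python parser for unit-size strings:

```python
import re

# Multipliers for the suffixes in the table above (decimal units):
# k/K/KB/kb = 1,000; m/M/MB/mb = 1,000,000; g/G/GB/gb = 1,000,000,000.
_MULTIPLIERS = {"k": 10**3, "m": 10**6, "g": 10**9}

def parse_unit_size(value: str) -> int:
    """Return the number of bytes encoded by a unit-size string."""
    match = re.fullmatch(r"(\d+)\s*([kKmMgG][bB]?)?", value.strip())
    if match is None:
        raise ValueError(f"invalid unit size: {value!r}")
    number, suffix = match.groups()
    # No suffix means the value is already a byte count.
    factor = _MULTIPLIERS[suffix[0].lower()] if suffix else 1
    return int(number) * factor
```

For example, parse_unit_size("32k") and parse_unit_size("32000") both yield 32000 bytes, matching the table.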

    Debian

Fluent Bit is distributed as the td-agent-bit package and is available for the latest (and older) stable Debian systems: Buster, Stretch and Jessie.

    Server GPG key

The first step is to add our server GPG key to your keyring; that way you can get our signed packages:

$ wget -qO - https://packages.fluentbit.io/fluentbit.key | sudo apt-key add -

    Update your sources lists

On Debian, you need to add our APT server entry to your sources lists. Please add the following content at the bottom of your /etc/apt/sources.list file:

    Debian 10 (Buster)

    Debian 9 (Stretch)

    Debian 8 (Jessie)

    Update your repositories database

    Now let your system update the apt database:

    Install TD Agent Bit

Using the following apt-get command, you can now install the latest td-agent-bit:

The next step is to instruct systemd to enable the service:

If you do a status check, you should see output similar to this:

The default configuration of td-agent-bit collects CPU usage metrics and sends the records to the standard output; you can see the outgoing data in your /var/log/syslog file.

    Yocto / Embedded Linux

Fluent Bit source code provides BitBake recipes to configure, build and package the software for a Yocto-based image. Note that the specific steps for using these recipes in your Yocto environment (Poky) are out of the scope of this documentation.

We distribute two main recipes: one for testing/development purposes and another with the latest stable release.

Version  Recipe               Description
devel    fluent-bit_git.bb    Build Fluent Bit from Git master. This recipe is intended for development and testing purposes only.
v1.5.7   fluent-bit_1.5.7.bb  Build the latest stable version of Fluent Bit.

It's strongly recommended to always use the stable release recipe of Fluent Bit, and not the one from Git master, for production deployments.

    Fluent Bit and other architectures

    Fluent Bit >= v1.1.x fully supports x86_64, x86, arm32v7 and arm64v8.

    Scheduling and Retries

Fluent Bit has an engine that helps coordinate data ingestion from input plugins, and it calls the scheduler to decide when it's time to flush the data through one or multiple output plugins. The scheduler flushes new data at a fixed interval of seconds, and schedules retries when asked.

Once an output plugin gets called to flush some data, after processing that data it can notify the engine of three possible return statuses:

    • OK

    • Retry

    • Error

If the return status was OK, it means the data was successfully processed and flushed. If it returned Error, an unrecoverable error happened and the engine should not try to flush that data again. If a Retry was requested, the engine will ask the scheduler to retry flushing that data; the scheduler decides how many seconds to wait before the retry happens.

    Configuring Retries

The scheduler provides a simple configuration option called Retry_Limit, which can be set independently for each output section. This option allows you to disable retries or impose a limit of N attempts, after which the data is discarded:

    Example

The following example configures two outputs, where the HTTP plugin has an unlimited number of retries and the Elasticsearch plugin has a limit of 5:
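A sketch of such a configuration (the Match values are illustrative assumptions; only the Retry_Limit entries come from the description above):

```ini
[OUTPUT]
    Name        http
    Match       *
    Retry_Limit False

[OUTPUT]
    Name        es
    Match       *
    Retry_Limit 5
```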

    Collectd

The collectd input plugin allows you to receive datagrams from the collectd service.

    Configuration Parameters

The plugin supports the following configuration parameters:

Key      Description                      Default
Listen   Set the address to listen to     0.0.0.0
Port     Set the port to listen to        25826
TypesDB  Set the data specification file  /usr/share/collectd/types.db

    Configuration Examples

    Here is a basic configuration example.
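A minimal sketch based on the parameters documented above, using their default values (illustrative, not necessarily the exact upstream snippet):

```ini
[INPUT]
    Name    collectd
    Listen  0.0.0.0
    Port    25826
    TypesDB /usr/share/collectd/types.db

[OUTPUT]
    Name  stdout
    Match *
```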

    With this configuration, Fluent Bit listens to 0.0.0.0:25826, and outputs incoming datagram packets to stdout.

    You must set the same types.db files that your collectd server uses. Otherwise, Fluent Bit may not be able to interpret the payload properly.

    Router

    Create flexible routing rules

Routing is a core feature that allows you to route your data through filters and finally to one or multiple destinations. The router relies on the concept of Tags and Matching rules.

    There are two important concepts in Routing:

    • Tag

    • Match

When data is generated by an input plugin, it comes with a Tag (most of the time the Tag is configured manually). The Tag is a human-readable indicator that helps to identify the data source.

    In order to define where the data should be routed, a Match rule must be specified in the output configuration.

    Consider the following configuration example that aims to deliver CPU metrics to an Elasticsearch database and Memory metrics to the standard output interface:
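A sketch of such a configuration (the tag names my_cpu and my_mem, and the plugin choices, are illustrative assumptions):

```ini
[INPUT]
    Name cpu
    Tag  my_cpu

[INPUT]
    Name mem
    Tag  my_mem

[OUTPUT]
    Name  es
    Match my_cpu

[OUTPUT]
    Name  stdout
    Match my_mem
```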

    Note: the above is a simple example demonstrating how Routing is configured.

Routing works automatically by reading the Input Tags and the Output Match rules. If some data has a Tag that doesn't match at routing time, that data is deleted.

    Routing with Wildcard

    Routing is flexible enough to support wildcard in the Match pattern. The below example defines a common destination for both sources of data:
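A sketch of such a configuration (the tag names my_cpu and my_mem are illustrative assumptions):

```ini
[INPUT]
    Name cpu
    Tag  my_cpu

[INPUT]
    Name mem
    Tag  my_mem

[OUTPUT]
    Name  stdout
    Match my_*
```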

    The match rule is set to my_* which means it will match any Tag that starts with my_.

    Buffer

    Data processing with reliability

Previously defined in the Buffering concept section, the buffer phase in the pipeline aims to provide a unified and persistent mechanism to store your data, either using the primary in-memory model or the filesystem-based mode.

The buffer phase already contains the data in an immutable state, meaning no other filter can be applied.

    Note that buffered data is not raw text, it's in Fluent Bit's internal binary representation.

    Fluent Bit offers a buffering mechanism in the file system that acts as a backup system to avoid data loss in case of system failures.

    Ubuntu

    Fluent Bit is distributed as td-agent-bit package and is available for the latest stable Ubuntu system: Focal Fossa.

    Server GPG key

The first step is to add our server GPG key to your keyring; that way you can get our signed packages:

$ wget -qO - https://packages.fluentbit.io/fluentbit.key | sudo apt-key add -

    Backpressure

In certain environments it is common to see that the logs or data being ingested arrive faster than they can be flushed to some destinations. A common case is reading from big log files and dispatching the logs to a backend over the network, which takes some time to respond. This generates backpressure, leading to high memory consumption in the service.

In order to avoid backpressure, Fluent Bit implements a mechanism in the engine that restricts the amount of data that an input plugin can ingest. This is done through the configuration parameter Mem_Buf_Limit.

As described in the Buffering concepts section, Fluent Bit offers a hybrid mode for data handling: in-memory and filesystem (optional).

    In memory

    Format and Schema

Fluent Bit may optionally use a configuration file to define how the service will behave. Before proceeding, we need to understand how the configuration schema works.

    The schema is defined by three concepts:

    • Sections

    • Entries: Key/Value

    Build with Static Configuration

Fluent Bit in normal operation mode can be configured through text files or by using specific arguments in the command line. While this is the ideal deployment case, there are scenarios where a more restricted configuration is required: static configuration mode.

    Static configuration mode aims to include a built-in configuration in the final binary of Fluent Bit, disabling the usage of external files or flags at runtime.

    Getting Started

    Memory Management

In certain scenarios it is ideal to estimate how much memory Fluent Bit could be using. This is very useful for containerized environments where memory limits are a must.

In order to estimate, we will assume that the input plugins have set the Mem_Buf_Limit option (you can learn more about it in the Backpressure section).

Estimating

Input plugins append data independently, so in order to do an estimation, a limit should be imposed through the Mem_Buf_Limit option. If the limit was set to 10MB, we should estimate that in the worst case the output plugin could use an additional 20MB.

    Redhat / CentOS

    Install on Redhat / CentOS

Fluent Bit is distributed as the td-agent-bit package and is available for the latest stable CentOS system. The following architectures are supported:

    • x86_64



In-memory buffering is always available and can be restricted with Mem_Buf_Limit. If your plugin gets restricted because of this configuration and you are under a backpressure scenario, you won't be able to ingest more data until the data chunks that are in memory can be flushed.

Depending on the input plugin type in use, this might lead to discarding incoming data (e.g., the TCP input plugin), but you can rely on the secondary filesystem buffering to be safe.

If, in addition to Mem_Buf_Limit, the input plugin defined a storage.type of filesystem (as described in Buffering & Storage), then when the limit is reached, all new data will be stored safely in the file system.

    Mem_Buf_Limit

This option is disabled by default and can be applied to all input plugins. Let's explain its behavior using the following scenario:

    • Mem_Buf_Limit is set to 1MB (one megabyte)

    • input plugin tries to append 700KB

    • engine route the data to an output plugin

    • output plugin backend (HTTP Server) is down

    • engine scheduler will retry the flush after 10 seconds

    • input plugin tries to append 500KB

At this exact point, the engine will allow appending those 500KB of data; in total we have 1.2MB. The option works in a permissive mode until the limit is reached; once the limit is exceeded, the following actions are taken:

    • block local buffers for the input plugin (cannot append more data)

    • notify the input plugin invoking a pause callback

The engine will protect itself and will not append more data coming from the input plugin in question. Note that it is the plugin's responsibility to keep its state and decide what to do in that paused state.

After some seconds, if the scheduler was able to flush the initial 700KB of data or it gave up after retrying, that amount of memory is released, and internally the following actions happen:

    • Upon data buffer release (700KB), the internal counters get updated

    • Counters now are set at 500KB

    • Since 500KB is < 1MB it checks the input plugin state

    • If the plugin is paused, it invokes a resume callback

    • input plugin can continue appending more data
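The scenario above can be modeled with a small Python sketch (a toy model of the accounting, not Fluent Bit's engine code):

```python
class InputBuffer:
    """Toy model of the permissive Mem_Buf_Limit accounting described
    above. Illustrative only; this is not Fluent Bit's actual engine."""

    def __init__(self, mem_buf_limit: int):
        self.limit = mem_buf_limit
        self.used = 0
        self.paused = False

    def append(self, size: int) -> bool:
        if self.paused:
            return False         # local buffers blocked: data is rejected
        self.used += size        # permissive: the crossing append is accepted
        if self.used >= self.limit:
            self.paused = True   # then the pause callback is invoked
        return True

    def release(self, size: int) -> None:
        """A successful flush (or an abandoned retry) frees buffered data."""
        self.used -= size
        if self.paused and self.used < self.limit:
            self.paused = False  # resume callback: ingestion continues

KB = 1000
buf = InputBuffer(mem_buf_limit=1000 * KB)  # Mem_Buf_Limit = 1MB
buf.append(700 * KB)   # accepted, still under the limit
buf.append(500 * KB)   # accepted (1.2MB total), then the plugin is paused
```

After releasing the first 700KB the counters drop to 500KB, which is below the 1MB limit, so the plugin is resumed and can append again.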

    About pause and resume Callbacks

Each plugin is independent, and not all of them implement the pause and resume callbacks. As said, these callbacks are just a notification mechanism for the plugin.

One plugin that implements these callbacks and keeps good state is the Tail input plugin. When the pause callback is triggered, it stops its collectors and stops appending data. Upon resume, it re-enables the collectors.


    [td-agent-bit]
    name = TD Agent Bit
    baseurl = https://packages.fluentbit.io/amazonlinux/2/$basearch/
    gpgcheck=1
    gpgkey=https://packages.fluentbit.io/fluentbit.key
    enabled=1
    [SERVICE]
        Flush        1
        Daemon       Off
        Log_Level    info
    
    [INPUT]
        Name cpu
        Tag  cpu.local
    
    [OUTPUT]
        Name  ${MY_OUTPUT}
        Match *
    $ export MY_OUTPUT=stdout
    $ bin/fluent-bit -c fluent-bit.conf
    Fluent Bit v1.4.0
    * Copyright (C) 2019-2020 The Fluent Bit Authors
    * Copyright (C) 2015-2018 Treasure Data
    * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
    * https://fluentbit.io
    
    [2020/03/03 12:25:25] [ info] [engine] started
    [0] cpu.local: [1491243925, {"cpu_p"=>1.750000, "user_p"=>1.750000, "system_p"=>0.000000, "cpu0.p_cpu"=>3.000000, "cpu0.p_user"=>2.000000, "cpu0.p_system"=>1.000000, "cpu1.p_cpu"=>0.000000, "cpu1.p_user"=>0.000000, "cpu1.p_system"=>0.000000, "cpu2.p_cpu"=>4.000000, "cpu2.p_user"=>4.000000, "cpu2.p_system"=>0.000000, "cpu3.p_cpu"=>1.000000, "cpu3.p_user"=>1.000000, "cpu3.p_system"=>0.000000}]

Value  Description
N      Integer value to set the maximum number of retries allowed. N must be >= 1 (default: 1).
False  When Retry_Limit is set to False, there is no limit to the number of retries the scheduler can perform.


    Update your sources lists

On Ubuntu, you need to add our APT server entry to your sources lists. Please add the following content at the bottom of your /etc/apt/sources.list file:

    Ubuntu 20.04 LTS (Focal Fossa)

    Ubuntu 18.04 LTS (Bionic Beaver)

    Ubuntu 16.04 LTS (Xenial Xerus)

    Update your repositories database

    Now let your system update the apt database:

    Install TD-Agent Bit

Using the following apt-get command, you can now install the latest td-agent-bit:

The next step is to instruct systemd to enable the service:

If you do a status check, you should see output similar to this:

The default configuration of td-agent-bit collects CPU usage metrics and sends the records to the standard output; you can see the outgoing data in your /var/log/syslog file.

    Indented Configuration Mode

    A simple example of a configuration file is as follows:
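A minimal sketch of such a file, consistent with the [SERVICE] entries discussed in this section (illustrative, not the exact original snippet):

```ini
[SERVICE]
    # this is a commented line inside the section
    Daemon       off
    Log_Level    debug
```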

    Sections

    A section is defined by a name or title inside brackets. Looking at the example above, a Service section has been set using [SERVICE] definition. Section rules:

    • All section content must be indented (4 spaces ideally).

    • Multiple sections can exist on the same file.

  • A section is expected to have comments and entries; it cannot be empty.

    • Any commented line under a section, must be indented too.

    Entries: Key/Value

A section may contain entries; an entry is defined by a line of text that contains a Key and a Value. Using the example above, the [SERVICE] section contains two entries: one is the key Daemon with the value off, and the other is the key Log_Level with the value debug. Entry rules:

    • An entry is defined by a key and a value.

    • A key must be indented.

  • A key must contain a value, which ends at the line break.

    • Multiple keys with the same name can exist.

Commented lines are set by prefixing the # character; those lines are not processed, but they must be indented too.

    Indented Configuration Mode

Fluent Bit configuration files are based on a strict indented mode: each configuration file must follow the same pattern of alignment from left to right when writing text. By default, an indentation level of four spaces from left to right is suggested. Example:

    As you can see there are two sections with multiple entries and comments, note also that empty lines are allowed and they do not need to be indented.

    Requirements

The following steps assume you are familiar with configuring Fluent Bit using text files and that you have experience building it from scratch, as described in the Build and Install section.

    Configuration Directory

In your file system, prepare a specific directory that will be used as an entry point for the build system to look up and parse the configuration files. This directory must contain at least one configuration file called fluent-bit.conf, containing the required SERVICE, INPUT and OUTPUT sections. As an example, create a new fluent-bit.conf file with the following content:
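A minimal sketch matching that description (a SERVICE section, a cpu input and a stdout output; treat the exact values as illustrative):

```ini
[SERVICE]
    Flush     1
    Daemon    off

[INPUT]
    Name      cpu

[OUTPUT]
    Name      stdout
    Match     *
```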

The configuration provided above will calculate CPU metrics from the running system and print them to the standard output interface.

    Build with Custom Configuration

Inside the Fluent Bit source code, get into the build/ directory and run CMake, appending the FLB_STATIC_CONF option pointing to the configuration directory created previously, e.g.:

    then build it:

At this point, the generated fluent-bit binary is ready to run without any further configuration:

Fluent Bit has an internal binary representation for the data being processed; when this data reaches an output plugin, that plugin will likely create its own representation in a new memory buffer for processing. The best examples are the InfluxDB and Elasticsearch output plugins: both need to convert the binary representation to their respective custom JSON formats before talking to their backend servers.

So, if we impose a limit of 10MB for the input plugins and consider the worst-case scenario of the output plugin consuming 20MB extra, as a minimum we need (30MB x 1.2) = 36MB.
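The arithmetic can be captured in a small Python helper (the 2x output factor and 20% overhead are the document's rule of thumb, not guarantees):

```python
# Rule of thumb from the text above: the output plugin may need roughly
# twice the input limit for its own serialization buffers (worst case for
# formats like JSON), plus about 20% overhead from the memory allocator.
def estimate_memory_mb(mem_buf_limit_mb: int) -> int:
    input_mb = mem_buf_limit_mb
    output_mb = 2 * mem_buf_limit_mb
    return (input_mb + output_mb) * 12 // 10  # +20%, integer arithmetic

estimate = estimate_memory_mb(10)  # the 10MB example above
```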

    Glibc and Memory Fragmentation

It is well known that in intensive environments, where memory allocations happen in large numbers, the default memory allocator provided by Glibc can lead to high fragmentation, causing the service to report high memory usage.

    It's strongly suggested that in any production environment, Fluent Bit should be built with jemalloc enabled (e.g. -DFLB_JEMALLOC=On). Jemalloc is an alternative memory allocator that can reduce fragmentation (among other things), resulting in better performance.

    You can check if Fluent Bit has been built with Jemalloc using the following command:

    The output should look like:

    If the FLB_HAVE_JEMALLOC option is listed in Build Flags, then Jemalloc is enabled.

    Backpressure
    $ bin/fluent-bit -h|grep JEMALLOC

    aarch64 / arm64v8

    Configure Yum

    We provide td-agent-bit through a Yum repository. To add the repository reference to your system, add a new file called td-agent-bit.repo in /etc/yum.repos.d/ with the following content:

    Note: we encourage you to always enable gpgcheck for security reasons. All our packages are signed.

    The GPG Key fingerprint is F209 D876 2A60 CD49 E680 633B 4FF8 368B 6EA0 722A

    Install

    Once your repository is configured, run the following command to install it:

    Now the following step is to instruct Systemd to enable the service:

    If you do a status check, you should see a similar output like this:

    The default configuration of td-agent-bit collects CPU usage metrics and sends the records to the standard output; you can see the outgoing data in your /var/log/messages file.

    [td-agent-bit]
    name = TD Agent Bit
    baseurl = https://packages.fluentbit.io/centos/7/$basearch/
    gpgcheck=1
    gpgkey=https://packages.fluentbit.io/fluentbit.key
    enabled=1
    $ yum install td-agent-bit
    $ sudo service td-agent-bit start
    $ service td-agent-bit status
    Redirecting to /bin/systemctl status  td-agent-bit.service
    ● td-agent-bit.service - TD Agent Bit
       Loaded: loaded (/usr/lib/systemd/system/td-agent-bit.service; disabled; vendor preset: disabled)
       Active: active (running) since Thu 2016-07-07 02:08:01 BST; 9s ago
     Main PID: 3820 (td-agent-bit)
       CGroup: /system.slice/td-agent-bit.service
               └─3820 /opt/td-agent-bit/bin/td-agent-bit -c etc/td-agent-bit/td-agent-bit.conf
    ...
    $ wget -qO - https://packages.fluentbit.io/fluentbit.key | sudo apt-key add -
    deb https://packages.fluentbit.io/raspbian/buster buster main
    deb https://packages.fluentbit.io/raspbian/stretch stretch main
    deb https://packages.fluentbit.io/raspbian/jessie jessie main
    $ sudo apt-get update
    $ sudo apt-get install td-agent-bit
    $ sudo service td-agent-bit start
    sudo service td-agent-bit status
    ● td-agent-bit.service - TD Agent Bit
       Loaded: loaded (/lib/systemd/system/td-agent-bit.service; disabled; vendor preset: enabled)
       Active: active (running) since mié 2016-07-06 16:58:25 CST; 2h 45min ago
     Main PID: 6739 (td-agent-bit)
        Tasks: 1
       Memory: 656.0K
          CPU: 1.393s
       CGroup: /system.slice/td-agent-bit.service
               └─6739 /opt/td-agent-bit/bin/td-agent-bit -c /etc/td-agent-bit/td-agent-bit.conf
    ...
    deb https://packages.fluentbit.io/debian/buster buster main
    deb https://packages.fluentbit.io/debian/stretch stretch main
    deb https://packages.fluentbit.io/debian/jessie jessie main
    $ sudo apt-get update
    $ sudo apt-get install td-agent-bit
    $ sudo service td-agent-bit start
    sudo service td-agent-bit status
    ● td-agent-bit.service - TD Agent Bit
       Loaded: loaded (/lib/systemd/system/td-agent-bit.service; disabled; vendor preset: enabled)
       Active: active (running) since mié 2016-07-06 16:58:25 CST; 2h 45min ago
     Main PID: 6739 (td-agent-bit)
        Tasks: 1
       Memory: 656.0K
          CPU: 1.393s
       CGroup: /system.slice/td-agent-bit.service
               └─6739 /opt/td-agent-bit/bin/td-agent-bit -c /etc/td-agent-bit/td-agent-bit.conf
    ...
    [OUTPUT]
        Name        http
        Host        192.168.5.6
        Port        8080
        Retry_Limit False
    
    [OUTPUT]
        Name            es
        Host            192.168.5.20
        Port            9200
        Logstash_Format On
        Retry_Limit     5
    [INPUT]
        Name         collectd
        Listen       0.0.0.0
        Port         25826
        TypesDB      /usr/share/collectd/types.db,/etc/collectd/custom.db
    
    [OUTPUT]
        Name   stdout
        Match  *
    $ wget -qO - https://packages.fluentbit.io/fluentbit.key | sudo apt-key add -
    deb https://packages.fluentbit.io/ubuntu/focal focal main
    deb https://packages.fluentbit.io/ubuntu/bionic bionic main
    deb https://packages.fluentbit.io/ubuntu/xenial xenial main
    $ sudo apt-get update
    $ sudo apt-get install td-agent-bit
    $ sudo service td-agent-bit start
    sudo service td-agent-bit status
    ● td-agent-bit.service - TD Agent Bit
       Loaded: loaded (/lib/systemd/system/td-agent-bit.service; disabled; vendor preset: enabled)
       Active: active (running) since mié 2016-07-06 16:58:25 CST; 2h 45min ago
     Main PID: 6739 (td-agent-bit)
        Tasks: 1
       Memory: 656.0K
          CPU: 1.393s
       CGroup: /system.slice/td-agent-bit.service
               └─6739 /opt/td-agent-bit/bin/td-agent-bit -c /etc/td-agent-bit/td-agent-bit.conf
    ...
    [SERVICE]
        # This is a commented line
        Daemon    off
        log_level debug
    [FIRST_SECTION]
        # This is a commented line
        Key1  some value
        Key2  another value
        # more comments
    
    [SECOND_SECTION]
        KeyN  3.14
    [SERVICE]
        Flush     1
        Daemon    off
        Log_Level info
    
    [INPUT]
        Name      cpu
    
    [OUTPUT]
        Name      stdout
        Match     *
    $ cd fluent-bit/build/
    $ cmake -DFLB_STATIC_CONF=/path/to/my/confdir/
    $ make
    $ bin/fluent-bit 
    Fluent-Bit v0.15.0
    Copyright (C) Treasure Data
    
    [2018/10/19 15:32:31] [ info] [engine] started (pid=15186)
    [0] cpu.local: [1539984752.000347547, {"cpu_p"=>0.750000, "user_p"=>0.500000, "system_p"=>0.250000, "cpu0.p_cpu"=>1.000000, "cpu0.p_user"=>1.000000, "cpu0.p_system"=>0.000000, "cpu1.p_cpu"=>0.000000, "cpu1.p_user"=>0.000000, "cpu1.p_system"=>0.000000, "cpu2.p_cpu"=>0.000000, "cpu2.p_user"=>0.000000, "cpu2.p_system"=>0.000000, "cpu3.p_cpu"=>1.000000, "cpu3.p_user"=>1.000000, "cpu3.p_system"=>0.000000}]
    Build Flags =  JSMN_PARENT_LINKS JSMN_STRICT FLB_HAVE_TLS FLB_HAVE_SQLDB
    FLB_HAVE_TRACE FLB_HAVE_FLUSH_LIBCO FLB_HAVE_VALGRIND FLB_HAVE_FORK
    FLB_HAVE_PROXY_GO FLB_HAVE_JEMALLOC JEMALLOC_MANGLE FLB_HAVE_REGEX
    FLB_HAVE_C_TLS FLB_HAVE_SETJMP FLB_HAVE_ACCEPT4 FLB_HAVE_INOTIFY
    $ yum install td-agent-bit
    $ sudo service td-agent-bit start
    $ service td-agent-bit status
    Redirecting to /bin/systemctl status  td-agent-bit.service
    ● td-agent-bit.service - TD Agent Bit
       Loaded: loaded (/usr/lib/systemd/system/td-agent-bit.service; disabled; vendor preset: disabled)
       Active: active (running) since Thu 2016-07-07 02:08:01 BST; 9s ago
     Main PID: 3820 (td-agent-bit)
       CGroup: /system.slice/td-agent-bit.service
               └─3820 /opt/td-agent-bit/bin/td-agent-bit -c etc/td-agent-bit/td-agent-bit.conf
    ...
    [INPUT]
        Name cpu
        Tag  my_cpu
    
    [INPUT]
        Name mem
        Tag  my_mem
    
    [OUTPUT]
        Name   es
        Match  my_cpu
    
    [OUTPUT]
        Name   stdout
        Match  my_mem

    Fluentd & Fluent Bit

    The Production Grade Ecosystem

    Logging and data processing in general can be complex, and substantially more so at scale. That's why Fluentd was born. Now, Fluentd is more than a simple tool. It's a full ecosystem that contains SDKs for different languages and sub projects like Fluent Bit.

    On this page, we will describe the relationship between the Fluentd and Fluent Bit open source projects. As a summary, we can say both are:

    • Licensed under the terms of Apache License v2.0

    • Hosted projects by the Cloud Native Computing Foundation (CNCF)

    • Production Grade solutions: deployed thousands of times every single day, millions per month.

    • Community driven projects

    • Widely Adopted by the Industry: trusted by major companies like AWS, Microsoft and Google Cloud, and hundreds of others.

    • Originally created by Treasure Data.

    Both projects have a lot of similarities: Fluent Bit is fully designed and built on top of the best ideas of Fluentd's architecture and general design. Choosing which one to use depends on the end-user needs.

    The following table describes a comparison in different areas of the projects:

    Both Fluentd and Fluent Bit can work as Aggregators or Forwarders. They can complement each other or be used as standalone solutions.

    Fluent Bit v1.5 Documentation

    High Performance Logs Processor

    Fluent Bit is a Fast and Lightweight Log Processor, Stream Processor and Forwarder for Linux, OSX, Windows and BSD family operating systems. It has been made with a strong focus on performance to allow the collection of events from different sources without complexity.

    Features

    • High Performance

    • Data Parsing

      • Convert your unstructured messages using our parsers: JSON, Regex, LTSV and Logfmt

    • Reliability and Data Integrity

      • Backpressure Handling

      • Data Buffering in memory and file system

    • Networking

      • Security: built-in TLS/SSL support

      • Asynchronous I/O

    • Pluggable Architecture and Extensibility: Inputs, Filters and Outputs

      • More than 50 built-in plugins available

      • Extensibility

    • Monitoring: expose internal metrics over HTTP in JSON and Prometheus format

    • Stream Processing: perform data selection and transformation using simple SQL queries

      • Create new streams of data using query results

      • Aggregation Windows

      • Data analysis and prediction: Timeseries forecasting

    • Portable: runs on Linux, MacOS, Windows and BSD systems

    Fluent Bit, Fluentd and CNCF

    Fluent Bit is a sub-component of the Fluentd project ecosystem, licensed under the terms of the Apache License v2.0. This project was created by Treasure Data, which is its current primary sponsor.

    Nowadays Fluent Bit gets contributions from several companies and individuals and, like Fluentd, it's hosted as a CNCF subproject.

    Running a Logging Pipeline Locally

    You may wish to test a logging pipeline locally to observe how it deals with log messages. The following is a walk-through for running Fluent Bit and Elasticsearch locally with Docker Compose which can serve as an example for testing other plugins locally.

    Create a Configuration File

    Refer to the Configuration File section to create a configuration to test.

    fluent-bit.conf:

    [INPUT]
      Name dummy
      Dummy {"top": {".dotted": "value"}}
    
    [OUTPUT]
      Name es
      Host elasticsearch
      Replace_Dots On

    Docker Compose

    Use Docker Compose to run Fluent Bit (with the configuration file mounted) and Elasticsearch.

    docker-compose.yaml:

    View indexed logs

    To view indexed logs run:

    To "start fresh", delete the index by running:

    Commands

    Configuration files must be flexible enough for any deployment need, but they must keep a clean and readable format.

    Fluent Bit Commands extend a configuration file with specific built-in features. The list of commands available as of the Fluent Bit 0.12 series is:

    Command     Prototype        Description

    @INCLUDE    @INCLUDE FILE    Include a configuration file

    @SET        @SET KEY=VAL     Set a configuration variable

    @INCLUDE Command

    Configuring a logging pipeline might lead to an extensive configuration file. In order to keep the configuration human-readable, it's suggested to split it into multiple files.

    The @INCLUDE command allows the configuration reader to include an external configuration file, e.g:
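    For example, a main configuration file that delegates inputs and outputs to separate files:

    ```
    [SERVICE]
        Flush 1

    @INCLUDE inputs.conf
    @INCLUDE outputs.conf
    ```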

    The above example defines the main service configuration file and also includes two files to continue the configuration:

    inputs.conf

    outputs.conf

    Note that regardless of the order of inclusion, Fluent Bit will ALWAYS respect the following order:

    • Service

    • Inputs

    • Filters

    • Outputs

    @SET Command

    Fluent Bit supports configuration variables. One way to expose these variables to Fluent Bit is by setting a Shell environment variable; the other is through the @SET command.

    The @SET command can only be used at the root level of each line, meaning it cannot be used inside a section, e.g.:
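    For example, variables defined with @SET at the root level can be referenced inside sections:

    ```
    @SET my_input=cpu
    @SET my_output=stdout

    [SERVICE]
        Flush 1

    [INPUT]
        Name ${my_input}

    [OUTPUT]
        Name ${my_output}
    ```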

    Key Concepts

    There are a few key concepts that are really important to understand how Fluent Bit operates.

    Before diving into Fluent Bit it's good to get acquainted with some of the key concepts of the service. This document provides a gentle introduction to those concepts and common terminology. We've provided a list below of all the terms we'll cover, but we recommend reading this document from start to finish to gain a more general understanding of our log and stream processor.

    • Event or Record

    • Filtering

    Upstream Servers

    It's common for Fluent Bit to connect to external services to deliver logs over the network; this is the case of the HTTP, Elasticsearch and Forward output plugins, among others. Being able to connect to one node (host) is normal and enough for most of the use cases, but there are other scenarios where balancing across different nodes is required. The Upstream feature provides such capability.

    An Upstream defines a set of nodes that will be targeted by an output plugin; by the nature of the implementation, an output plugin must support the Upstream feature. The following plugin has Upstream support:

    • Forward

    The current balancing mode implemented is round-robin.

    Record Accessor

    A full feature set to access content of your records

    Fluent Bit works internally with structured records, which can be composed of an unlimited number of keys and values. Values can be anything like a number, a string, an array, or a map.

    Having a way to select a specific part of the record is critical for certain core functionalities and plugins; this feature is called Record Accessor.

    Consider Record Accessor a simple grammar to specify record content and other miscellaneous values.

    Format

    Buffering & Storage

    The end-goal of Fluent Bit is to collect, parse, filter and ship logs to a central place. In this workflow there are many phases, and one of the critical pieces is the ability to do buffering: a mechanism to place processed data into a temporary location until it is ready to be shipped.

    By default, when Fluent Bit processes data it uses memory as the primary and temporary place to store the records, but there are certain scenarios where it would be ideal to have a persistent buffering mechanism based on the filesystem to provide aggregation and data safety capabilities.

    Starting with Fluent Bit v1.0, we introduced a new storage layer that can either work in memory or in the file system. Input plugins can be configured to use one or the other upon demand at start time.

    Configuration

    Logfmt

    The logfmt parser allows parsing the logfmt format described in https://brandur.org/logfmt. A more formal description is in https://godoc.org/github.com/kr/logfmt.

    Here is an example configuration:

    The following log entry is a valid content for the parser defined above:

    After processing, its internal representation will be:

    [INPUT]
        Name cpu
        Tag  my_cpu
    
    [INPUT]
        Name mem
        Tag  my_mem
    
    [OUTPUT]
        Name   stdout
        Match  my_*
    curl "localhost:9200/_search?pretty" \
      -H 'Content-Type: application/json' \
      -d'{ "query": { "match_all": {} }}'
    [PARSER]
        Name        logfmt
        Format      logfmt
    key1=val1 key2=val2
    [1540936693, {"key1"=>"val1",
                  "key2"=>"val2"}]
    https://brandur.org/logfmt
    https://godoc.org/github.com/kr/logfmt

    JSON

    The JSON parser is the simplest option: if the original log source is a JSON map string, it will take its structure and convert it directly to the internal binary representation.

    A simple configuration that can be found in the default parsers configuration file, is the entry to parse Docker log files (when the tail input plugin is used):

    [PARSER]
        Name        docker
        Format      json
        Time_Key    time
        Time_Format %Y-%m-%dT%H:%M:%S %z

    The following log entry is a valid content for the parser defined above:

    {"key1": 12345, "key2": "abc", "time": "2006-07-28T13:22:04Z"}

    After processing, its internal representation will be:

    [1154103724, {"key1"=>12345, "key2"=>"abc"}]

    The time has been converted to Unix timestamp (UTC) and the map reduced to each component of the original message.

    LTSV

    The ltsv parser allows parsing LTSV-formatted texts.

    Labeled Tab-separated Values (LTSV) format is a variant of Tab-separated Values (TSV). Each record in an LTSV file is represented as a single line; each field is separated by TAB and has a label and a value, separated by ':'.

    Here is an example of how to use this format with the Apache access log.

    Configure this in httpd.conf:

    LogFormat "host:%h\tident:%l\tuser:%u\ttime:%t\treq:%r\tstatus:%>s\tsize:%b\treferer:%{Referer}i\tua:%{User-Agent}i" combined_ltsv
    CustomLog "logs/access_log" combined_ltsv

    The parser.conf:

    [PARSER]
        Name        access_log_ltsv
        Format      ltsv
        Time_Key    time
        Time_Format [%d/%b/%Y:%H:%M:%S %z]
        Types       status:integer size:integer

    The following log entry is a valid content for the parser defined above:

    host:127.0.0.1  ident:- user:-  time:[10/Jul/2018:13:27:05 +0200]       req:GET / HTTP/1.1      status:200      size:16218      referer:http://127.0.0.1/       ua:Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:59.0) Gecko/20100101 Firefox/59.0
    host:127.0.0.1  ident:- user:-  time:[10/Jul/2018:13:27:05 +0200]       req:GET /assets/plugins/bootstrap/css/bootstrap.min.css HTTP/1.1        status:200      size:121200     referer:http://127.0.0.1/       ua:Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:59.0) Gecko/20100101 Firefox/59.0
    host:127.0.0.1  ident:- user:-  time:[10/Jul/2018:13:27:05 +0200]       req:GET /assets/css/headers/header-v6.css HTTP/1.1      status:200      size:37706      referer:http://127.0.0.1/       ua:Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:59.0) Gecko/20100101 Firefox/59.0
    host:127.0.0.1  ident:- user:-  time:[10/Jul/2018:13:27:05 +0200]       req:GET /assets/css/style.css HTTP/1.1  status:200      size:1279       referer:http://127.0.0.1/       ua:Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:59.0) Gecko/20100101 Firefox/59.0

    After processing, its internal representation will be:

    The time has been converted to Unix timestamp (UTC).

  • Tag

  • Timestamp

  • Match

  • Structured Message

    Event or Record

    Every incoming piece of data that belongs to a log or a metric that is retrieved by Fluent Bit is considered an Event or a Record.

    As an example consider the following content of a Syslog file:
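    Sample file content:

    ```
    Jan 18 12:52:16 flb systemd[2222]: Starting GNOME Terminal Server
    Jan 18 12:52:16 flb dbus-daemon[2243]: [session uid=1000 pid=2243] Successfully activated service 'org.gnome.Terminal'
    Jan 18 12:52:16 flb systemd[2222]: Started GNOME Terminal Server.
    Jan 18 12:52:16 flb gsd-media-keys[2640]: # watch_fast: "/org/gnome/terminal/legacy/" (establishing: 0, active: 0)
    ```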

    It contains four lines and each of them represents an independent Event, for a total of four Events.

    Internally, an Event always has two components (in an array form):
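    That array form is:

    ```
    [TIMESTAMP, MESSAGE]
    ```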

    Filtering

    Some use cases require modifying the Event content; modifying Events by altering, enriching or dropping them is called Filtering.

    There are many use cases when Filtering is required like:

    • Append specific information to the Event like an IP address or metadata.

    • Select a specific piece of the Event content.

    • Drop Events that match a certain pattern.

    Tag

    Every Event that gets into Fluent Bit is assigned a Tag. This tag is an internal string used at a later stage by the Router to decide which Filter or Output phase it must go through.

    Most tags are assigned manually in the configuration. If a tag is not specified, Fluent Bit will assign the name of the Input plugin instance where that Event was generated.

    The only input plugin that doesn't assign Tags is Forward input. This plugin speaks the Fluentd wire protocol called Forward where every Event already comes with a Tag associated. Fluent Bit will always use the incoming Tag set by the client.

    A Tagged record must always have a Matching rule. To learn more about Tags and Matches check the Routing section.

    Timestamp

    The Timestamp represents the time when an Event was created. Every Event contains an associated Timestamp. The Timestamp is a numeric fractional integer in the format:
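    Concretely:

    ```
    SECONDS.NANOSECONDS
    ```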

    Seconds

    It is the number of seconds that have elapsed since the Unix epoch.

    Nanoseconds

    Fractional second or one thousand-millionth of a second.

    A timestamp always exists, either set by the Input plugin or discovered through a data parsing process.

    Match

    Fluent Bit allows you to deliver your collected and processed Events to one or multiple destinations; this is done through a routing phase. A Match represents a simple rule to select Events whose Tags match a defined rule.

    To learn more about Tags and Matches check the Routing section.

    Structured Messages

    Source events may or may not have a structure. A structure defines a set of keys and values inside the Event message. As an example, consider the following two messages:

    No structured message

    Structured Message
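    Respectively, the unstructured and the structured version:

    ```
    "Project Fluent Bit created on 1398289291"
    {"project": "Fluent Bit", "created": 1398289291}
    ```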

    At a low level both are just arrays of bytes, but the structured message defines keys and values; having a structure helps to implement faster operations on data modifications.

    Fluent Bit always handles every Event message as a structured message. For performance reasons, we use a binary serialization data format called MessagePack.

    Consider MessagePack as a binary version of JSON on steroids.

    A record accessor rule starts with the character $. Using the structured content above as an example, the following table describes how to access a record:

    The following table describes some accessing rules and the expected returned value:

    Format                       Accessed Value

    $log                         "some message"

    $labels['color']             "blue"

    $labels['project']['env']    "production"

    $labels['unset']             null

    $labels['undefined']
    If the accessor key does not exist in the record, like the last example $labels['undefined'], the operation is simply omitted; no exception will occur.

    Usage Example

    This feature is enabled on a per-plugin basis; not all plugins support it. As an example, consider a configuration that aims to filter records using grep, matching only records whose labels have the color blue:

    The file content to process in test.log is the following:

    Running Fluent Bit with the configuration above the output will be:

    version: "3.7"
    
    services:
      fluent-bit:
        image: fluent/fluent-bit
        volumes:
          - ./fluent-bit.conf:/fluent-bit/etc/fluent-bit.conf
        depends_on:
          - elasticsearch
      elasticsearch:
        image: elasticsearch:7.6.2
        ports:
          - "9200:9200"
        environment:
          - discovery.type=single-node
    curl -X DELETE "localhost:9200/fluent-bit?pretty"
    [SERVICE]
        Flush 1
    
    @INCLUDE inputs.conf
    @INCLUDE outputs.conf
    [INPUT]
        Name cpu
        Tag  mycpu
    
    [INPUT]
        Name tail
        Path /var/log/*.log
        Tag  varlog.*
    [OUTPUT]
        Name   stdout
        Match  mycpu
    
    [OUTPUT]
        Name            es
        Match           varlog.*
        Host            127.0.0.1
        Port            9200
        Logstash_Format On
    @SET my_input=cpu
    @SET my_output=stdout
    
    [SERVICE]
        Flush 1
    
    [INPUT]
        Name ${my_input}
    
    [OUTPUT]
        Name ${my_output}
    [1531222025.000000000, {"host"=>"127.0.0.1", "ident"=>"-", "user"=>"-", "req"=>"GET / HTTP/1.1", "status"=>200, "size"=>16218, "referer"=>"http://127.0.0.1/", "ua"=>"Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:59.0) Gecko/20100101 Firefox/59.0"}]
    [1531222025.000000000, {"host"=>"127.0.0.1", "ident"=>"-", "user"=>"-", "req"=>"GET /assets/plugins/bootstrap/css/bootstrap.min.css HTTP/1.1", "status"=>200, "size"=>121200, "referer"=>"http://127.0.0.1/", "ua"=>"Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:59.0) Gecko/20100101 Firefox/59.0"}]
    [1531222025.000000000, {"host"=>"127.0.0.1", "ident"=>"-", "user"=>"-", "req"=>"GET /assets/css/headers/header-v6.css HTTP/1.1", "status"=>200, "size"=>37706, "referer"=>"http://127.0.0.1/", "ua"=>"Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:59.0) Gecko/20100101 Firefox/59.0"}]
    [1531222025.000000000, {"host"=>"127.0.0.1", "ident"=>"-", "user"=>"-", "req"=>"GET /assets/css/style.css HTTP/1.1", "status"=>200, "size"=>1279, "referer"=>"http://127.0.0.1/", "ua"=>"Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:59.0) Gecko/20100101 Firefox/59.0"}]
    Jan 18 12:52:16 flb systemd[2222]: Starting GNOME Terminal Server
    Jan 18 12:52:16 flb dbus-daemon[2243]: [session uid=1000 pid=2243] Successfully activated service 'org.gnome.Terminal'
    Jan 18 12:52:16 flb systemd[2222]: Started GNOME Terminal Server.
    Jan 18 12:52:16 flb gsd-media-keys[2640]: # watch_fast: "/org/gnome/terminal/legacy/" (establishing: 0, active: 0)
    [TIMESTAMP, MESSAGE]
    SECONDS.NANOSECONDS
    "Project Fluent Bit created on 1398289291"
    {"project": "Fluent Bit", "created": 1398289291}
    {
        "log": "some message",
        "stream": "stdout",
        "labels": {
            "color": "blue",
            "unset": null,
            "project": {
                "env": "production"
            }
        }
    }
    [SERVICE]
        flush        1
        log_level    info
        parsers_file parsers.conf
    
    [INPUT]
        name      tail
        path      test.log
        parser    json
    
    [FILTER]
        name      grep
        match     *
        regex     $labels['color'] ^blue$
    
    [OUTPUT]
        name      stdout
        match     *
        format    json_lines
    {"log": "message 1", "labels": {"color": "blue"}}
    {"log": "message 2", "labels": {"color": "red"}}
    {"log": "message 3", "labels": {"color": "green"}}
    {"log": "message 4", "labels": {"color": "blue"}}
    $ bin/fluent-bit -c fluent-bit.conf 
    Fluent Bit v1.x.x
    * Copyright (C) 2019-2020 The Fluent Bit Authors
    * Copyright (C) 2015-2018 Treasure Data
    * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
    * https://fluentbit.io
    
    [2020/09/11 16:11:07] [ info] [engine] started (pid=1094177)
    [2020/09/11 16:11:07] [ info] [storage] version=1.0.5, initializing...
    [2020/09/11 16:11:07] [ info] [storage] in-memory
    [2020/09/11 16:11:07] [ info] [storage] normal synchronization mode, checksum disabled, max_chunks_up=128
    [2020/09/11 16:11:07] [ info] [sp] stream processor started
    [2020/09/11 16:11:07] [ info] inotify_fs_add(): inode=55716713 watch_fd=1 name=test.log
    {"date":1599862267.483684,"log":"message 1","labels":{"color":"blue"}}
    {"date":1599862267.483692,"log":"message 4","labels":{"color":"blue"}}

                  Fluentd                                                      Fluent Bit

    Scope         Containers / Servers                                         Embedded Linux / Containers / Servers

    Language      C & Ruby                                                     C

    Memory        ~40MB                                                        ~650KB

    Performance   High Performance                                             High Performance

    Dependencies  Built as a Ruby Gem, it requires a certain number of gems.   Zero dependencies, unless some special plugin requires them.

    Plugins       More than 1000 plugins available                             Around 70 plugins available

    License       Apache License v2.0                                          Apache License v2.0

  • Write any input, filter or output plugin in C language

  • Bonus: write Filters in Lua or Output plugins in Golang

    Configuration

    To define an Upstream it's required to create a specific configuration file that contains an UPSTREAM section and one or multiple NODE sections. The following table describes the properties associated with each section. Note that all of them are mandatory:

    Section    Key    Description

    UPSTREAM   name   Defines a name for the Upstream in question.

    NODE       name   Defines a name for the Node in question.

    NODE       host   IP address or hostname of the target host.

    NODE       port   TCP port of the target service.

    Nodes and specific plugin configuration

    A Node might contain additional configuration keys required by the plugin; this provides flexibility for the output plugin. A common use case is the Forward output: if TLS is enabled, it requires a shared key (more details in the example below).

    Nodes and TLS (Transport Layer Security)

    In addition to the properties defined in the table above, the network operations against a defined node can optionally be done through the use of TLS for further encryption and certificates use.

    The TLS options available are described in the TLS/SSL section and can be added to any Node section.

    Configuration File Example

    The following example defines an Upstream called forward-balancing, aimed to be used by the Forward output plugin; it registers three Nodes:

    • node-1: connects to 127.0.0.1:43000

    • node-2: connects to 127.0.0.1:44000

    • node-3: connects to 127.0.0.1:45000 using TLS without verification. It also defines a specific configuration option required by Forward output called shared_key.
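    A sketch of such an Upstream configuration file, consistent with the node list above (the shared_key value is illustrative):

    ```
    [UPSTREAM]
        name       forward-balancing

    [NODE]
        name       node-1
        host       127.0.0.1
        port       43000

    [NODE]
        name       node-2
        host       127.0.0.1
        port       44000

    [NODE]
        name       node-3
        host       127.0.0.1
        port       45000
        tls        on
        tls.verify off
        shared_key secret
    ```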

    Note that every Upstream definition must exist in its own configuration file in the file system. Adding multiple Upstreams in the same file or in different files is not allowed.

    The storage layer configuration takes place in two areas:
    • Service Section

    • Input Section

    The Service section configures a global environment for the storage layer, while the Input sections define which mechanism to use.

    Service Section Configuration

    The Service section refers to the section defined in the main configuration file:

    Key

    Description

    Default

    storage.path

    Set an optional location in the file system to store streams and chunks of data. If this parameter is not set, Input plugins can only use in-memory buffering.

    storage.sync

    Configure the synchronization mode used to store the data into the file system. It can take the values normal or full.

    normal

    storage.checksum

    Enable the data integrity check when writing and reading data from the filesystem. The storage layer uses the CRC32 algorithm.

    Off

    storage.backlog.mem_limit

    If storage.path is set, Fluent Bit looks for pending data chunks (backlog) at start time; this option sets the maximum amount of memory to use when processing those records.

    5M

    A Service section will look like this:
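    A sketch matching the keys described above:

    ```
    [SERVICE]
        flush                     1
        log_level                 info
        storage.path              /var/log/flb-storage/
        storage.sync              normal
        storage.checksum          off
        storage.backlog.mem_limit 5M
    ```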

    That configuration sets an optional buffering mechanism where the root path for data is /var/log/flb-storage/; it will use normal synchronization mode, without checksum, and up to a maximum of 5MB of memory when processing backlog data.

    Input Section Configuration

    Optionally, any Input plugin can configure its storage preference; the following table describes the options available:

    Key

    Description

    Default

    storage.type

    Specify the buffering mechanism to use. It can be memory or filesystem.

    memory

    The following example configures a service that offers filesystem buffering capabilities, and two Input plugins: the first based on the filesystem and the second with memory only.
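    A sketch of such a configuration:

    ```
    [SERVICE]
        flush        1
        log_level    info
        storage.path /var/log/flb-storage/

    [INPUT]
        name         cpu
        storage.type filesystem

    [INPUT]
        name         mem
        storage.type memory
    ```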


    Kubernetes

    Kubernetes Production Grade Log Processor

    Fluent Bit is a lightweight and extensible Log Processor that comes with full support for Kubernetes:

    • Process Kubernetes containers logs from the file system or Systemd/Journald.

    • Enrich logs with Kubernetes Metadata.

    • Centralize your logs in third party storage services like Elasticsearch, InfluxDB, HTTP, etc.

    Concepts

    Before getting started it is important to understand how Fluent Bit will be deployed. Kubernetes manages a cluster of nodes, so our log agent tool will need to run on every node to collect logs from every POD, hence Fluent Bit is deployed as a DaemonSet (a POD that runs on every node of the cluster).

    When Fluent Bit runs, it will read, parse and filter the logs of every POD and will enrich each entry with the following information (metadata):

    • Pod Name

    • Pod ID

    • Container Name

    • Container ID

    To obtain this information, a built-in filter plugin called kubernetes talks to the Kubernetes API Server to retrieve relevant information such as the pod_id, labels and annotations; other fields such as pod_name, container_id and container_name are retrieved locally from the log file names. All of this is handled automatically; no intervention is required from a configuration point of view.

    Our Kubernetes Filter plugin is fully inspired on the written by .

    Installation

    Fluent Bit must be deployed as a DaemonSet so that it will be available on every node of your Kubernetes cluster. To get started, run the following commands to create the namespace, service account and role setup:

    The next step is to create a ConfigMap that will be used by our Fluent Bit DaemonSet:

    Note for Kubernetes < v1.16

    For Kubernetes versions older than v1.16, the DaemonSet resource is not available on apps/v1; the resource is available on apiVersion: extensions/v1beta1. Our current DaemonSet YAML files use the new apiVersion.

    If you are using an older Kubernetes version, manually grab a copy of your DaemonSet YAML file and replace the value of apiVersion from:

    to
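As a sketch, the replacement involves only the apiVersion line of the DaemonSet manifest, using the two values named above:

```yaml
# Current DaemonSet YAML files (Kubernetes v1.16 and newer):
apiVersion: apps/v1

# For Kubernetes versions older than v1.16, change that line to:
apiVersion: extensions/v1beta1
```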

    You can read more about this deprecation on Kubernetes v1.14 Changelog here:

    Fluent Bit to Elasticsearch

    Fluent Bit DaemonSet ready to be used with Elasticsearch on a normal Kubernetes Cluster:

    Fluent Bit to Elasticsearch on Minikube

    If you are using Minikube for testing purposes, use the following alternative DaemonSet manifest:

    Details

    The default configuration of Fluent Bit ensures the following:

    • Consume all containers logs from the running Node.

    • The Tail input plugin will not append more than 5MB into the engine until they are flushed to the Elasticsearch backend. This limit aims to provide a workaround for backpressure scenarios.

    • The Kubernetes filter will enrich the logs with Kubernetes metadata, specifically labels and annotations. The filter only goes to the API Server when it cannot find the cached info, otherwise it uses the cache.

    Upgrade Notes

    The following article covers the relevant notes for users upgrading from previous Fluent Bit versions. We aim to cover the compatibility changes you must be aware of.

    For more details about changes on each release please refer to the Official Release Notes.

    Fluent Bit v1.5

    The migration from v1.4 to v1.5 is pretty straightforward.

    • If you enabled keepalive mode in your configuration, note that this configuration property has been renamed to net.keepalive. All network I/O keepalive is now enabled by default; to learn more about this and other associated configuration properties, read the Networking Administration section.

    • If you use the Elasticsearch output plugin, note that the default value of Type changed from flb_type to _doc. Many versions of Elasticsearch will tolerate this, but ES v5.6 through v6.1 require a type without a leading underscore. See the Elasticsearch output plugin documentation FAQ entry for more details.

    Fluent Bit v1.4

    If you are migrating from Fluent Bit v1.3, there are no breaking changes. Just new exciting features to enjoy :)

    Fluent Bit v1.3

    If you are migrating from Fluent Bit v1.2 to v1.3, there are no breaking changes. If you are upgrading from an older version please review the incremental changes below.

    Fluent Bit v1.2

    Docker, JSON, Parsers and Decoders

    In Fluent Bit v1.2 we fixed many issues associated with JSON encoding and decoding; hence, when parsing Docker logs it is no longer necessary to use decoders. The new Docker parser looks like this:

    Note: again, do not use decoders.

    Kubernetes Filter

    We have also improved how the Kubernetes filter handles the stringified log message. If the Merge_Log option is enabled, it will try to handle the log content as a JSON map; if so, it will add the keys to the root map.

    In addition, we have fixed and improved the option called Merge_Log_Key. If the log merge succeeds, all new keys will be packed under the key specified by this option; a suggested configuration is as follows:

    As an example, if the original log content is the following map:

    the final record will be composed as follows:

    Fluent Bit v1.1

    If you are upgrading from Fluent Bit <= 1.0.x, you should take into consideration the following relevant changes when switching to the Fluent Bit v1.1 series:

    Kubernetes Filter

    We introduced a new configuration property called Kube_Tag_Prefix to help with Tag prefix resolution and to address an unexpected behavior that landed in previous versions.

    During the 1.0.x release cycle, a commit in the Tail input plugin changed the default behavior of how the Tag was composed when using the wildcard for expansion, breaking compatibility with other services. Consider the following configuration example:

    The expected behavior is that the Tag will be expanded to:

    but the change introduced in the 1.0 series switched from the absolute path to the base file name only:

    In the Fluent Bit v1.1 release we restored the previous default behavior: the Tag is now composed using the absolute path of the monitored file.

    Having the absolute path in the Tag is relevant for routing and flexible configuration, and it also helps to keep compatibility with Fluentd behavior.

    This behavior switch in the Tail input plugin affects how the Kubernetes filter operates. When the filter is used, it needs to perform local metadata lookups derived from the file names when using Tail as a source. With the new Kube_Tag_Prefix option you can specify the prefix used in the Tail input plugin; for the configuration example above, the new configuration will look as follows:

    The proper Kube_Tag_Prefix value must be composed of the Tag prefix set in the Tail input plugin plus the monitored directory path with slashes replaced by dots.

    Supported Platforms

    The following operating systems and architectures are supported in Fluent Bit.

    Linux

    • Amazon Linux 2: x86_64, Arm64v8

    • Centos 8: x86_64, Arm64v8

    • Centos 7: x86_64, Arm64v8

    • Debian 10 (Buster): x86_64, Arm64v8

    • Debian 9 (Stretch): x86_64, Arm64v8

    • Debian 8 (Jessie): x86_64, Arm64v8

    • Nixos: x86_64, Arm64v8

    • Ubuntu 20.04 (Focal Fossa): x86_64, Arm64v8

    • Ubuntu 18.04 (Bionic Beaver): x86_64, Arm64v8

    • Ubuntu 16.04 (Xenial Xerus): x86_64

    • Raspbian 10 (Buster): Arm32v7

    • Raspbian 9 (Stretch): Arm32v7

    • Raspbian 8 (Jessie): Arm32v7

    Windows

    • Windows Server 2019: x86_64, x86

    • Windows 10 1903: x86_64, x86

    From an architecture support perspective, Fluent Bit is fully functional on x86_64, Arm64v8 and Arm32v7 based processors.

    Fluent Bit can also work on OSX and *BSD systems, but not all plugins will be available on all platforms. Official support will be expanded based on community demand.

    Windows


    Fluent Bit is distributed as the td-agent-bit package for Windows. There are two flavours of Windows installers: a ZIP archive (for quick testing) and an EXE installer (for system installation).

    Installation Packages

    The latest stable version is 1.5.7:

    INSTALLERS

    To check the integrity, use the Get-FileHash cmdlet in PowerShell.
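For example, assuming you downloaded the 64-bit ZIP archive (the file name below is illustrative; use the name of the file you actually downloaded):

```powershell
PS> Get-FileHash td-agent-bit-1.5.7-win64.zip -Algorithm SHA256
```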

    Installing from ZIP archive

    Download a ZIP archive. There are installers for 32-bit and 64-bit environments, so choose the one suitable for your environment.

    Then expand the ZIP archive. You can do this by clicking "Extract All" in Explorer or, if you're using PowerShell, with the Expand-Archive cmdlet.

    The ZIP package contains the following set of files.

    Now, launch cmd.exe or PowerShell on your machine, and execute fluent-bit.exe as follows.

    If you see the following output, it's working fine!

    To halt the process, press CTRL-C in the terminal.

    Installing from EXE installer

    Download an EXE installer from the . It has both 32-bit and 64-bit builds. Choose one which is suitable for you.

    Then, double-click the EXE installer you've downloaded. The installation wizard will start automatically.

    Click Next and proceed. By default, Fluent Bit is installed into C:\Program Files\td-agent-bit\, so you should be able to launch fluent-bit as follows after installation.

    Windows Service Support

    Windows services are equivalent to "daemons" in UNIX (i.e. long-running background processes). Since v1.5.0, Fluent Bit has native support for running as a Windows service.

    Suppose you have the following installation layout:

    To register Fluent Bit as a Windows service, you need to execute the following command on Command Prompt. Please be careful that a single space is required after binpath=.
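Assuming Fluent Bit was installed under C:\fluent-bit\ (a hypothetical layout; adjust the paths to your installation), the registration would look like:

```
sc.exe create fluent-bit binpath= "C:\fluent-bit\bin\fluent-bit.exe -c C:\fluent-bit\conf\fluent-bit.conf"
```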

    Now Fluent Bit can be started and managed as a normal Windows service.

    To halt the Fluent Bit service, just execute the "stop" command.
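For example, assuming the service was registered under the name fluent-bit:

```
sc.exe stop fluent-bit
```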

    Networking

    Fluent Bit implements a unified networking interface that is exposed to components like plugins. This interface abstracts all the complexity of general I/O and is fully configurable.

    A common use case is when a component or plugin needs to connect to a service to send and receive data. Although this operational mode sounds easy to deal with, there are many factors that can make things hard, like unresponsive services, network latency or connectivity errors. The networking interface aims to abstract and simplify network I/O handling, minimize risks and optimize performance.

    Concepts

    TCP Connect Timeout

    Most of the time, creating a new TCP connection to a remote server is straightforward and takes a few milliseconds. But there are cases where DNS resolution, slow networks or incomplete TLS handshakes can cause long delays or hung connections.

    The net.connect_timeout property allows you to configure the maximum time to wait for a connection to be established; note that this value already includes the TLS handshake process.

    TCP Source Address

    In environments with multiple network interfaces, it might be desirable to choose which interface to use for the data that flows through the network.

    The net.source_address property allows you to specify which network address must be used for a TCP connection and data flow.

    TCP Keepalive

    TCP is a connection-oriented channel; to deliver and receive data from a remote endpoint, in most cases we use a TCP connection. This TCP connection can be created and destroyed once it is no longer needed; this approach has pros and cons. Here we refer to the opposite case: keeping the connection open.

    The concept of TCP keepalive refers to the ability of the client (Fluent Bit in this case) to keep the TCP connection open in a persistent way, meaning that once the connection has been created and used, instead of closing it, it can be recycled. This feature offers many performance benefits, since communication channels are already established beforehand.

    Any component that uses TCP channels like HTTP or , can take advantage of this feature. For configuration purposes use the net.keepalive property.

    TCP Keepalive Idle Timeout

    If keepalive is enabled for a TCP connection, there might be scenarios where the connection is unused for long periods of time. Keeping an idle keepalive connection open is not helpful; it is advisable to keep connections alive only while they are being used.

    To control how long a keepalive connection can remain idle, we expose the configuration property net.keepalive_idle_timeout.

    Configuration Options

    For plugins that rely on networking I/O, the following section describes the network configuration properties available and how they can be used to optimize performance or adjust to different configuration needs:

    Example

    As an example, we will send 5 random messages through a TCP output connection; on the remote side we will use the nc (netcat) utility to see the data.

    Put the following configuration snippet in a file called fluent-bit.conf:

    In another terminal, start nc and make it listen for messages on TCP port 9090:

    Now start Fluent Bit with the configuration file written above and you will see the data flowing to netcat:

    If the net.keepalive option were not enabled, Fluent Bit would close the TCP connection and netcat would quit; here we can see how the keepalive connection works.

    After the 5 records arrive, the connection will remain idle, and after 10 seconds it will be closed due to net.keepalive_idle_timeout.

    CPU Metrics

    The cpu input plugin measures the CPU usage of a process or, by default, the whole system (considering each CPU core). It reports values as percentages for every configured interval of time. At the moment this plugin is only available on Linux.

    The following tables describe the information generated by the plugin. The keys below represent data for the overall system; all values associated with these keys are percentages (0 to 100%):

    key

    description

    cpu_p

    CPU usage of the overall system; this value is the sum of time spent in user and kernel space. The result takes into consideration the number of CPU cores in the system.

    user_p

    CPU usage in user mode; in short, the CPU usage by user space programs. The result takes into consideration the number of CPU cores in the system.

    In addition to the keys reported in the table above, similar content is created per CPU core. The cores are listed from 0 to N as the kernel reports them:

    Configuration Parameters

    The plugin supports the following configuration parameters:

    Getting Started

    In order to get the statistics of the CPU usage of your system, you can run the plugin from the command line or through the configuration file:

    Command Line

    As described above, the CPU input plugin gathers the overall usage every second and flushes the information to the output on the fifth second. In this example we used the stdout plugin to demonstrate the output records. In a real use case you may want to flush this information to a central aggregator such as Fluentd or Elasticsearch.

    Configuration File

    In your main configuration file append the following Input & Output sections:

    Windows Event Log

    The winlog input plugin allows you to read Windows Event Log.

    Configuration Parameters

    The plugin supports the following configuration parameters:

    Key

    Description

    Default

    Note that if you do not set db, the plugin will read channels from the beginning on each startup.

    Configuration Examples

    Configuration File

    Here is a minimum configuration example.

    Note that some Windows Event Log channels (like Security) require administrator privileges for reading. In this case, you need to run fluent-bit as an administrator.

    Command Line

    If you want to do a quick test, you can run this plugin from the command line.

    Memory Metrics

    The mem input plugin gathers information about the memory and swap usage of the running system at a fixed interval and reports the total amount of memory and the amount of free memory available.

    Getting Started

    In order to get memory and swap usage from your system, you can run the plugin from the command line or through the configuration file:

    Command Line

    Configuration File

    In your main configuration file append the following Input & Output sections:

    Kernel Logs

    The kmsg input plugin reads the Linux kernel log buffer from the beginning; it retrieves every record and parses its fields: priority, sequence, seconds, useconds, and message.

    Getting Started

    In order to start getting the Linux Kernel messages, you can run the plugin from the command line or through the configuration file:

    Command Line

    As described above, the plugin processes all messages that the Linux kernel reports; the output above has been truncated for clarity.

    Configuration File

    In your main configuration file append the following Input & Output sections:

    Docker Events

    The docker events input plugin uses the Docker API to capture server events. A complete list of possible events returned by this plugin can be found in the Docker API documentation.

    Configuration Parameters

    This plugin supports the following configuration parameters:

    Disk I/O Metrics

    The disk input plugin gathers information about the disk throughput of the running system at a fixed interval and reports it.

    Configuration Parameters

    The plugin supports the following configuration parameters:

    Network I/O Metrics

    The netif input plugin gathers network traffic information of the running system at a fixed interval and reports it.

    Configuration Parameters

    The plugin supports the following configuration parameters:

    Standard Output

    The stdout output plugin prints the data received through the input plugins to the standard output. Its usage is very simple, as follows:
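For instance, pairing it with the CPU input plugin from the command line (a minimal invocation):

```
$ fluent-bit -i cpu -o stdout
```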

    Configuration Parameters

    Expect

    Made for testing: make sure that your records contain the expected key and values

    The expect filter plugin allows you to validate that records match certain criteria in their structure, such as validating that a key exists or that its value matches an expected one.

    This page describes only the available configuration properties; for a detailed explanation of usage and use cases, please refer to the following page:
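As a quick illustration only, a minimal configuration might look like the sketch below; key_exists and action are among the rules documented for this filter, and key1 is a hypothetical key name:

```
[FILTER]
    Name        expect
    Match       *
    key_exists  key1
    action      warn
```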

    Dummy

    The dummy input plugin generates dummy events. It is useful for testing, debugging, benchmarking and getting started with Fluent Bit.

    Configuration Parameters

    The plugin supports the following configuration parameters:

    Random

    The random input plugin generates very simple random value samples using the device interface /dev/urandom; if that is not available, it will use a Unix timestamp as the value.

    Configuration Parameters

    The plugin supports the following configuration parameters:

    [UPSTREAM]
        name       forward-balancing
    
    [NODE]
        name       node-1
        host       127.0.0.1
        port       43000
    
    [NODE]
        name       node-2
        host       127.0.0.1
        port       44000
    
    [NODE]
        name       node-3
        host       127.0.0.1
        port       45000
        tls        on
        tls.verify off
        shared_key secret
    [SERVICE]
        flush                     1
        log_Level                 info
        storage.path              /var/log/flb-storage/
        storage.sync              normal
        storage.checksum          off
        storage.backlog.mem_limit 5M
    [SERVICE]
        flush                     1
        log_Level                 info
        storage.path              /var/log/flb-storage/
        storage.sync              normal
        storage.checksum          off
        storage.backlog.mem_limit 5M
    
    [INPUT]
        name          cpu
        storage.type  filesystem
    
    [INPUT]
        name          mem
        storage.type  memory

    port

    TCP port of the target service.

    If storage.path is set, Fluent Bit will look for data chunks that were not delivered and are still in the storage layer; these are called backlog data. This option sets a hint for the maximum amount of memory to use when processing these records.

    5M

    storage.metrics

    If the http_server option has been enabled in the main [SERVICE] section, this option registers a new endpoint where internal metrics of the storage layer can be consumed. For more details refer to the Monitoring section.

    off

    Apache License v2.0


    Property

    Description

    Default

    net.connect_timeout

    Set the maximum time, expressed in seconds, to wait for a TCP connection to be established. This includes the TLS handshake time.

    10

    net.source_address

    Specify the network address (interface) to use for connection and data traffic.

    net.keepalive

    Enable or disable TCP keepalive support. Accepts a boolean value: on / off.

    on

    net.keepalive_idle_timeout

    Set the maximum time, expressed in seconds, for an idle keepalive connection.

    30

    system_p

    CPU usage in kernel mode; in short, the CPU usage by the kernel. The result takes into consideration the number of CPU cores in the system.

    key

    description

    cpuN.p_cpu

    Represents the total CPU usage by core N.

    cpuN.p_user

    Total CPU spent in user mode or user space programs associated with this core.

    cpuN.p_system

    Total CPU spent in system or kernel mode associated with this core.

    Key

    Description

    Default

    Interval_Sec

    Polling interval in seconds

    1

    Interval_NSec

    Polling interval in nanoseconds

    0

    PID

    Specify the ID (PID) of a running process in the system. By default the plugin monitors the whole system, but if this option is set, it will only monitor the given process ID.


    Channels

    A comma-separated list of channels to read from.

    Interval_Sec

    Set the polling interval for each channel. (optional)

    1

    DB

    Set the path to save the read offsets. (optional)

    $ fluent-bit -i mem -t memory -o stdout -m '*'
    Fluent Bit v1.x.x
    * Copyright (C) 2019-2020 The Fluent Bit Authors
    * Copyright (C) 2015-2018 Treasure Data
    * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
    * https://fluentbit.io
    
    [2017/03/03 21:12:35] [ info] [engine] started
    [0] memory: [1488543156, {"Mem.total"=>1016044, "Mem.used"=>841388, "Mem.free"=>174656, "Swap.total"=>2064380, "Swap.used"=>139888, "Swap.free"=>1924492}]
    [1] memory: [1488543157, {"Mem.total"=>1016044, "Mem.used"=>841420, "Mem.free"=>174624, "Swap.total"=>2064380, "Swap.used"=>139888, "Swap.free"=>1924492}]
    [2] memory: [1488543158, {"Mem.total"=>1016044, "Mem.used"=>841420, "Mem.free"=>174624, "Swap.total"=>2064380, "Swap.used"=>139888, "Swap.free"=>1924492}]
    [3] memory: [1488543159, {"Mem.total"=>1016044, "Mem.used"=>841420, "Mem.free"=>174624, "Swap.total"=>2064380, "Swap.used"=>139888, "Swap.free"=>1924492}]
    [INPUT]
        Name   mem
        Tag    memory
    
    [OUTPUT]
        Name   stdout
        Match  *
    $ bin/fluent-bit -i kmsg -t kernel -o stdout -m '*'
    Fluent Bit v1.x.x
    * Copyright (C) 2019-2020 The Fluent Bit Authors
    * Copyright (C) 2015-2018 Treasure Data
    * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
    * https://fluentbit.io
    
    [0] kernel: [1463421823, {"priority"=>3, "sequence"=>1814, "sec"=>11706, "usec"=>732233, "msg"=>"ERROR @wl_cfg80211_get_station : Wrong Mac address, mac = 34:a8:4e:d3:40:ec profile =20:3a:07:9e:4a:ac"}]
    [1] kernel: [1463421823, {"priority"=>3, "sequence"=>1815, "sec"=>11706, "usec"=>732300, "msg"=>"ERROR @wl_cfg80211_get_station : Wrong Mac address, mac = 34:a8:4e:d3:40:ec profile =20:3a:07:9e:4a:ac"}]
    [2] kernel: [1463421829, {"priority"=>3, "sequence"=>1816, "sec"=>11712, "usec"=>729728, "msg"=>"ERROR @wl_cfg80211_get_station : Wrong Mac address, mac = 34:a8:4e:d3:40:ec profile =20:3a:07:9e:4a:ac"}]
    [3] kernel: [1463421829, {"priority"=>3, "sequence"=>1817, "sec"=>11712, "usec"=>729802, "msg"=>"ERROR @wl_cfg80211_get_station : Wrong Mac address, mac = 34:a8:4e:d3:40:ec
    ...
    [INPUT]
        Name   kmsg
        Tag    kernel
    
    [OUTPUT]
        Name   stdout
        Match  *

    Standard Input

    The stdin plugin allows you to retrieve valid JSON text messages over the standard input interface (stdin). To use it, specify the plugin name as the input, e.g.:

    $ fluent-bit -i stdin -o stdout

    As input data, the stdin plugin recognizes the following JSON data formats:

    1. { map => val, map => val, map => val }
    2. [ time, { map => val, map => val, map => val } ]

    A better way to demonstrate how it works is through a shell script that generates messages and writes them to Fluent Bit. Write the following content in a file named test.sh:

    #!/bin/sh
    
    while :; do
      echo -n "{\"key\": \"some value\"}"
      sleep 1
    done

    Give the script execution permission:

    $ chmod 755 test.sh

    Now let's start the script and Fluent Bit in the following way:

    Unix_Path

    The docker socket unix path

    /var/run/docker.sock

    Buffer_Size

    The size of the buffer used to read docker events (in bytes)

    8192

    Parser

    Specify the name of a parser to interpret the entry as a structured message.

    None

    Key

    When a message is unstructured (no parser applied), it's appended as a string under the key name message.

    message

    Command Line

    Configuration File

    In your main configuration file append the following Input & Output sections:

    Key

    Description

    Default

    Interval_Sec

    Polling interval (seconds).

    1

    Interval_NSec

    Polling interval (nanoseconds).

    0

    Dev_Name

    Device name to limit the target (e.g. sda). If not set, in_disk gathers information from all disks and partitions.

    Getting Started

    In order to get disk usage from your system, you can run the plugin from the command line or through the configuration file:

    Command Line

    Configuration File

    In your main configuration file append the following Input & Output sections:

    Note: Total interval (sec) = Interval_Sec + (Interval_Nsec / 1000000000).

    e.g. 1.5s = 1s + 500000000ns
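For example, a polling interval of 1.5 seconds for the disk input plugin can be sketched as:

```
[INPUT]
    Name          disk
    Tag           disk
    Interval_Sec  1
    Interval_NSec 500000000
```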

    Key

    Description

    Default

    Interface

    Specify the network interface to monitor, e.g. eth0.

    Interval_Sec

    Polling interval (seconds).

    1

    Interval_NSec

    Polling interval (nanoseconds).

    0

    Verbose

    If true, gather metrics precisely.

    false

    Getting Started

    In order to monitor network traffic from your system, you can run the plugin from the command line or through the configuration file:

    Command Line

    Configuration File

    In your main configuration file append the following Input & Output sections:

    Note: Total interval (sec) = Interval_Sec + (Interval_Nsec / 1000000000).

    e.g. 1.5s = 1s + 500000000ns

    Key

    Description

    Default

    Format

    Specify the data format to be printed. Supported formats are msgpack, json, json_lines and json_stream.

    msgpack

    json_date_key

    Specify the name of the date field in the output.

    date

    json_date_format

    Specify the format of the date. Supported formats are double, iso8601 (e.g. 2018-05-30T09:39:52.000681Z) and epoch.

    double

    Command Line

    We specified gathering CPU usage metrics and printing them to the standard output in a human-readable way:

    No more, no less, it just works.


    Getting Started

    You can run the plugin from the command line or through the configuration file:

    Key

    Description

    Dummy

    Dummy JSON record. Default: {"message":"dummy"}

    Start_time_sec

    Dummy base timestamp in seconds. Default: 0

    Start_time_nsec

    Dummy base timestamp in nanoseconds. Default: 0

    Rate

    Events number generated per second. Default: 1

    Command Line
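A minimal invocation from the command line could look like this (the plugin needs no mandatory options):

```
$ fluent-bit -i dummy -o stdout
```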

    Configuration File

    In your main configuration file append the following Input & Output sections:
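A minimal sketch, using the Dummy property described above with its default record:

```
[INPUT]
    Name   dummy
    Dummy  {"message":"dummy"}

[OUTPUT]
    Name   stdout
    Match  *
```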

    Samples

    If set, the plugin only generates the given number of samples. By default this value is -1, which generates unlimited samples.

    Interval_Sec

    Interval in seconds between sample generation. Default value is 1.

    Interval_NSec

    Specify a nanosecond interval for sample generation; it works in conjunction with the Interval_Sec configuration key. Default value is 0.

    Getting Started

    In order to start generating random samples, you can run the plugin from the command line or through the configuration file:

    Command Line

    From the command line you can let Fluent Bit generate the samples with the following options:
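A minimal invocation could look like this:

```
$ fluent-bit -i random -o stdout
```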

    Configuration File

    In your main configuration file append the following Input & Output sections:
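A minimal sketch using the Samples and Interval_Sec properties described above:

```
[INPUT]
    Name          random
    Samples       10
    Interval_Sec  1

[OUTPUT]
    Name          stdout
    Match         *
```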

    Testing

    Once Fluent Bit is running, you will see the reports in the output interface similar to this:

    Key

    Description

    [PARSER]
        Name         docker
        Format       json
        Time_Key     time
        Time_Format  %Y-%m-%dT%H:%M:%S.%L
        Time_Keep    On
    [FILTER]
        Name             Kubernetes
        Match            kube.*
        Kube_Tag_Prefix  kube.var.log.containers.
        Merge_Log        On
        Merge_Log_Key    log_processed
    {"key1": "val1", "key2": "val2"}
    {
        "log": "{\"key1\": \"val1\", \"key2\": \"val2\"}",
        "log_processed": {
            "key1": "val1",
            "key2": "val2"
        }
    }
    [INPUT]
        Name  tail
        Path  /var/log/containers/*.log
        Tag   kube.*
    kube.var.log.containers.apache.log
    kube.apache.log
    [INPUT]
        Name  tail
        Path  /var/log/containers/*.log
        Tag   kube.*
    
    [FILTER]
        Name             kubernetes
        Match            *
        Kube_Tag_Prefix  kube.var.log.containers.
    [SERVICE]
        flush     1
        log_level info
    
    [INPUT]
        name      random
        samples   5
    
    [OUTPUT]
        name      tcp
        match     *
        host      127.0.0.1
        port      9090
        format    json_lines
        # Networking Setup
        net.connect_timeout         5
        net.source_address          127.0.0.1
        net.keepalive               on
        net.keepalive_idle_timeout  10
    $ nc -l 9090
    $ nc -l 9090
    {"date":1587769732.572266,"rand_value":9704012962543047466}
    {"date":1587769733.572354,"rand_value":7609018546050096989}
    {"date":1587769734.572388,"rand_value":17035865539257638950}
    {"date":1587769735.572419,"rand_value":17086151440182975160}
    {"date":1587769736.572277,"rand_value":527581343064950185}
    $ build/bin/fluent-bit -i cpu -t my_cpu -o stdout -m '*'
    Fluent Bit v1.x.x
    * Copyright (C) 2019-2020 The Fluent Bit Authors
    * Copyright (C) 2015-2018 Treasure Data
    * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
    * https://fluentbit.io
    
    [2019/09/02 10:46:29] [ info] starting engine
    [0] [1452185189, {"cpu_p"=>7.00, "user_p"=>5.00, "system_p"=>2.00, "cpu0.p_cpu"=>10.00, "cpu0.p_user"=>8.00, "cpu0.p_system"=>2.00, "cpu1.p_cpu"=>6.00, "cpu1.p_user"=>4.00, "cpu1.p_system"=>2.00}]
    [1] [1452185190, {"cpu_p"=>6.50, "user_p"=>5.00, "system_p"=>1.50, "cpu0.p_cpu"=>6.00, "cpu0.p_user"=>5.00, "cpu0.p_system"=>1.00, "cpu1.p_cpu"=>7.00, "cpu1.p_user"=>5.00, "cpu1.p_system"=>2.00}]
    [2] [1452185191, {"cpu_p"=>7.50, "user_p"=>5.00, "system_p"=>2.50, "cpu0.p_cpu"=>7.00, "cpu0.p_user"=>3.00, "cpu0.p_system"=>4.00, "cpu1.p_cpu"=>6.00, "cpu1.p_user"=>6.00, "cpu1.p_system"=>0.00}]
    [3] [1452185192, {"cpu_p"=>4.50, "user_p"=>3.50, "system_p"=>1.00, "cpu0.p_cpu"=>6.00, "cpu0.p_user"=>5.00, "cpu0.p_system"=>1.00, "cpu1.p_cpu"=>5.00, "cpu1.p_user"=>3.00, "cpu1.p_system"=>2.00}]
    [INPUT]
        Name cpu
        Tag  my_cpu
    
    [OUTPUT]
        Name  stdout
        Match *
    [INPUT]
        Name         winlog
        Channels     Setup,Windows PowerShell
        Interval_Sec 1
        DB           winlog.sqlite
    
    [OUTPUT]
        Name   stdout
        Match  *
    $ fluent-bit -i winlog -p 'channels=Setup' -o stdout
    $ ./test.sh | fluent-bit -i stdin -o stdout
    Fluent Bit v1.x.x
    * Copyright (C) 2019-2020 The Fluent Bit Authors
    * Copyright (C) 2015-2018 Treasure Data
    * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
    * https://fluentbit.io
    
    [2016/10/07 21:44:46] [ info] [engine] started
    [0] stdin.0: [1475898286, {"key"=>"some value"}]
    [1] stdin.0: [1475898287, {"key"=>"some value"}]
    [2] stdin.0: [1475898288, {"key"=>"some value"}]
    [3] stdin.0: [1475898289, {"key"=>"some value"}]
    [4] stdin.0: [1475898290, {"key"=>"some value"}]
    $ fluent-bit -i docker_events -o stdout
    [INPUT]
        Name   docker_events
    
    [OUTPUT]
        Name   stdout
        Match  *
    $ fluent-bit -i disk -o stdout
    Fluent Bit v1.x.x
    * Copyright (C) 2019-2020 The Fluent Bit Authors
    * Copyright (C) 2015-2018 Treasure Data
    * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
    * https://fluentbit.io
    
    [2017/01/28 16:58:16] [ info] [engine] started
    [0] disk.0: [1485590297, {"read_size"=>0, "write_size"=>0}]
    [1] disk.0: [1485590298, {"read_size"=>0, "write_size"=>0}]
    [2] disk.0: [1485590299, {"read_size"=>0, "write_size"=>0}]
    [3] disk.0: [1485590300, {"read_size"=>0, "write_size"=>11997184}]
    [INPUT]
        Name          disk
        Tag           disk
        Interval_Sec  1
        Interval_NSec 0
    [OUTPUT]
        Name   stdout
        Match  *
    $ bin/fluent-bit -i netif -p interface=eth0 -o stdout
    Fluent Bit v1.x.x
    * Copyright (C) 2019-2020 The Fluent Bit Authors
    * Copyright (C) 2015-2018 Treasure Data
    * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
    * https://fluentbit.io
    
    [2017/07/08 23:34:18] [ info] [engine] started
    [0] netif.0: [1499524459.001698260, {"eth0.rx.bytes"=>89769869, "eth0.rx.packets"=>73357, "eth0.rx.errors"=>0, "eth0.tx.bytes"=>4256474, "eth0.tx.packets"=>24293, "eth0.tx.errors"=>0}]
    [1] netif.0: [1499524460.002541885, {"eth0.rx.bytes"=>98, "eth0.rx.packets"=>1, "eth0.rx.errors"=>0, "eth0.tx.bytes"=>98, "eth0.tx.packets"=>1, "eth0.tx.errors"=>0}]
    [2] netif.0: [1499524461.001142161, {"eth0.rx.bytes"=>98, "eth0.rx.packets"=>1, "eth0.rx.errors"=>0, "eth0.tx.bytes"=>98, "eth0.tx.packets"=>1, "eth0.tx.errors"=>0}]
    [3] netif.0: [1499524462.002612971, {"eth0.rx.bytes"=>98, "eth0.rx.packets"=>1, "eth0.rx.errors"=>0, "eth0.tx.bytes"=>98, "eth0.tx.packets"=>1, "eth0.tx.errors"=>0}]
    [INPUT]
        Name          netif
        Tag           netif
        Interval_Sec  1
        Interval_NSec 0
        Interface     eth0
    [OUTPUT]
        Name   stdout
        Match  *
    $ bin/fluent-bit -i cpu -o stdout -v
    $ bin/fluent-bit -i cpu -o stdout -p format=msgpack -v
    Fluent Bit v1.x.x
    * Copyright (C) 2019-2020 The Fluent Bit Authors
    * Copyright (C) 2015-2018 Treasure Data
    * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
    * https://fluentbit.io
    
    [2016/10/07 21:52:01] [ info] [engine] started
    [0] cpu.0: [1475898721, {"cpu_p"=>0.500000, "user_p"=>0.250000, "system_p"=>0.250000, "cpu0.p_cpu"=>0.000000, "cpu0.p_user"=>0.000000, "cpu0.p_system"=>0.000000, "cpu1.p_cpu"=>0.000000, "cpu1.p_user"=>0.000000, "cpu1.p_system"=>0.000000, "cpu2.p_cpu"=>0.000000, "cpu2.p_user"=>0.000000, "cpu2.p_system"=>0.000000, "cpu3.p_cpu"=>1.000000, "cpu3.p_user"=>0.000000, "cpu3.p_system"=>1.000000}]
    [1] cpu.0: [1475898722, {"cpu_p"=>0.250000, "user_p"=>0.250000, "system_p"=>0.000000, "cpu0.p_cpu"=>0.000000, "cpu0.p_user"=>0.000000, "cpu0.p_system"=>0.000000, "cpu1.p_cpu"=>1.000000, "cpu1.p_user"=>1.000000, "cpu1.p_system"=>0.000000, "cpu2.p_cpu"=>0.000000, "cpu2.p_user"=>0.000000, "cpu2.p_system"=>0.000000, "cpu3.p_cpu"=>0.000000, "cpu3.p_user"=>0.000000, "cpu3.p_system"=>0.000000}]
    [2] cpu.0: [1475898723, {"cpu_p"=>0.750000, "user_p"=>0.250000, "system_p"=>0.500000, "cpu0.p_cpu"=>2.000000, "cpu0.p_user"=>1.000000, "cpu0.p_system"=>1.000000, "cpu1.p_cpu"=>0.000000, "cpu1.p_user"=>0.000000, "cpu1.p_system"=>0.000000, "cpu2.p_cpu"=>1.000000, "cpu2.p_user"=>0.000000, "cpu2.p_system"=>1.000000, "cpu3.p_cpu"=>0.000000, "cpu3.p_user"=>0.000000, "cpu3.p_system"=>0.000000}]
    [3] cpu.0: [1475898724, {"cpu_p"=>1.000000, "user_p"=>0.750000, "system_p"=>0.250000, "cpu0.p_cpu"=>1.000000, "cpu0.p_user"=>1.000000, "cpu0.p_system"=>0.000000, "cpu1.p_cpu"=>2.000000, "cpu1.p_user"=>1.000000, "cpu1.p_system"=>1.000000, "cpu2.p_cpu"=>1.000000, "cpu2.p_user"=>1.000000, "cpu2.p_system"=>0.000000, "cpu3.p_cpu"=>1.000000, "cpu3.p_user"=>1.000000, "cpu3.p_system"=>0.000000}]
    $ fluent-bit -i dummy -o stdout
    Fluent Bit v1.x.x
    * Copyright (C) 2019-2020 The Fluent Bit Authors
    * Copyright (C) 2015-2018 Treasure Data
    * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
    * https://fluentbit.io
    
    [2017/07/06 21:55:29] [ info] [engine] started
    [0] dummy.0: [1499345730.015265366, {"message"=>"dummy"}]
    [1] dummy.0: [1499345731.002371371, {"message"=>"dummy"}]
    [2] dummy.0: [1499345732.000267932, {"message"=>"dummy"}]
    [3] dummy.0: [1499345733.000757746, {"message"=>"dummy"}]
    [INPUT]
        Name   dummy
        Tag    dummy.log
    
    [OUTPUT]
        Name   stdout
        Match  *
    $ fluent-bit -i random -o stdout
    [INPUT]
        Name          random
        Samples      -1
        Interval_Sec  1
        Interval_NSec 0
    
    [OUTPUT]
        Name   stdout
        Match  *
    $ fluent-bit -i random -o stdout
    Fluent Bit v1.x.x
    * Copyright (C) 2019-2020 The Fluent Bit Authors
    * Copyright (C) 2015-2018 Treasure Data
    * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
    * https://fluentbit.io
    
    [2016/10/07 20:27:34] [ info] [engine] started
    [0] random.0: [1475893654, {"rand_value"=>1863375102915681408}]
    [1] random.0: [1475893655, {"rand_value"=>425675645790600970}]
    [2] random.0: [1475893656, {"rand_value"=>7580417447354808203}]
    [3] random.0: [1475893657, {"rand_value"=>1501010137543905482}]
    [4] random.0: [1475893658, {"rand_value"=>16238242822364375212}]

    Labels

  • Annotations

  • The default backend in the configuration is Elasticsearch, set by the Elasticsearch Output Plugin. It uses the Logstash format to ingest the logs. If you need a different Index and Type, please refer to the plugin options and make your own adjustments.

  • There is an option called Retry_Limit set to False, which means that if Fluent Bit cannot flush the records to Elasticsearch it will retry indefinitely until it succeeds.
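As a sketch, an Elasticsearch output section combining the Logstash format and the unlimited retry behavior described above could look like the following (the host and port values are illustrative):

```
[OUTPUT]
    Name            es
    Match           *
    Host            elasticsearch
    Port            9200
    Logstash_Format On
    Retry_Limit     False
```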

    Configuration Parameters

    The plugin supports the following configuration parameters:

    | Property | Description |
    | --- | --- |
    | key_exists | Check if a key with a given name exists in the record. |
    | key_not_exists | Check if a key does not exist in the record. |
    | key_val_is_null | Check that the value of the key is NULL. |
    | key_val_is_not_null | Check that the value of the key is NOT NULL. |
    | key_val_eq | Check that the value of the key equals the given value in the configuration. |

    Getting Started

    As mentioned above, refer to the following page for specific details on the usage of this filter:

    • Validating your Data and Structure


    SHA256 CHECKSUMS

    | File | SHA256 Checksum |
    | --- | --- |
    | td-agent-bit-1.5.7-win32.exe | 907514b34ea8c8a59209f70d7d5ec8b0ad09cfa3e7cc850bc64dcbac992b89c6 |
    | td-agent-bit-1.5.7-win32.zip | ba388f89a8519b221f6ea23151df7070cc95088486d5ed037b33a36b51bc95ee |
    | td-agent-bit-1.5.7-win64.exe | 2d48534ed3dca1ec6dd97cc4b1bed4bb226c3aa5e8240b29be3cdc0cd7e9cec8 |
    | td-agent-bit-1.5.7-win64.zip | 3fab0f852a079861b946cd8785706b650d1f6ada4389f85ff3e50f98cb4f62d3 |

    These checksums correspond to the packages available from the download page.

    Validating your Data and Structure

    Fluent Bit is a powerful log processing tool that can deal with different sources and formats, and in addition provides several filters that can be used to perform custom modifications. This flexibility is valuable, but as your pipeline grows it's strongly recommended to validate your data and structure.

    We encourage Fluent Bit users to integrate data validation in their CI systems.

    A simplified view of our data processing pipeline is as follows:

    In a normal production environment, many Inputs, Filters, and Outputs are defined in the configuration, so integrating continuous validation of your configuration against expected results is a must. For this requirement, Fluent Bit provides a specific filter called Expect, which can be used to validate expected Keys and Values in your records and take some action when an exception is found.

    How it Works

    As an example, consider the following pipeline where your source of data is a plain file with JSON content, followed by two filters: one to exclude certain records and one to alter the record content by adding and removing specific keys.

    Ideally you want to add validation checkpoints between each step, so you know whether your data structure is correct; we do this by using the expect filter.

    The expect filter sets rules that aim to validate certain criteria, such as:

    • does the record contain key A?

    • does the record not contain key A?

    • is the value of key A NULL?

    • is the value of key A different from NULL?

    Every expect filter configuration can expose specific rules to validate the content of your records. It supports the following configuration properties:

    Start Testing

    Consider the following JSON file called data.log with the following content:
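For illustration, a data.log consistent with the rest of this exercise could contain entries such as the following (the values are made up):

```
{"color": "blue", "label": {"name": "abc"}}
{"color": "red", "label": {"name": "def"}}
```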

    The following Fluent Bit configuration file configures a pipeline to consume the log above and applies an expect filter to validate that the keys color and label exist:
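A sketch of such a pipeline, assuming the data.log path and a json parser defined in your parsers file:

```
[INPUT]
    name    tail
    path    ./data.log
    parser  json

[FILTER]
    name        expect
    match       *
    key_exists  color
    key_exists  label
    action      exit

[OUTPUT]
    name   stdout
    match  *
```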

    Note that if for some reason the JSON parser fails or is missing in the tail input (line 9), the expect filter will trigger the exit action. As a test, go ahead and comment out or remove line 9.

    As a second step, we will extend our pipeline by adding a grep filter to match records whose label map contains a key called name with value abc, followed by an expect filter to re-validate that condition:
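A sketch of the extended pipeline, using record accessor syntax to reach the nested label['name'] key (an assumption about the record layout):

```
[FILTER]
    name    grep
    match   *
    regex   $label['name'] ^abc$

[FILTER]
    name        expect
    match       *
    key_val_eq  $label['name'] abc
    action      exit
```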

    Deploying in Production

    When deploying your configuration in production, you might want to remove the expect filters from your configuration, since they add unnecessary extra work unless you want 100% coverage of checks at runtime.

    Security

    Fluent Bit provides integrated support for Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL). In this section we will refer to both implementations as TLS.

    Each output plugin that performs network I/O can optionally enable TLS and configure its behavior. The following table describes the properties available:

    | Property | Description | Default |
    | --- | --- | --- |
    | tls | enable or disable TLS support | Off |
    | tls.verify | force certificate validation | On |

    The listed properties can be enabled in the configuration file, specifically in each output plugin section, or directly through the command line.

    The following output plugins can take advantage of the TLS feature:

    In addition, other plugins implement a subset of TLS support, meaning a restricted configuration:

    Example: enable TLS on HTTP output

    By default the HTTP output plugin uses plain TCP. Enabling TLS from the command line can be done with:

    In the command line above, the two properties tls and tls.verify were enabled for demonstration purposes (we strongly suggest always keeping verification on).

    The same behavior can be accomplished using a configuration file:

    Tips and Tricks

    Connect to virtual servers using TLS

    Fluent Bit supports TLS server name indication. If you are serving multiple hostnames on a single IP address (a.k.a. virtual hosting), you can make use of tls.vhost to connect to a specific hostname.

    Exec

    The exec input plugin allows you to execute an external program and collect its output as event logs.

    Configuration Parameters

    The plugin supports the following configuration parameters:

    Key

    Description

    Getting Started

    You can run the plugin from the command line or through the configuration file:

    Command Line

    The following example will read events from the output of ls.

    Configuration File

    In your main configuration file append the following Input & Output sections:

    Thermal

    The thermal input plugin reports system temperatures periodically, every second by default. Currently this plugin is only available on Linux.

    The following table describes the information generated by the plugin.

    | key | description |
    | --- | --- |
    | name | The name of the thermal zone, such as thermal_zone0 |
    | type | The type of the thermal zone, such as x86_pkg_temp |
    | temp | Current temperature in Celsius |

    Configuration Parameters

    The plugin supports the following configuration parameters:

    Getting Started

    In order to get the temperature(s) of your system, you can run the plugin from the command line or through the configuration file:

    Command Line

    Some systems provide multiple thermal zones. This example monitors only thermal_zone0 by name, once per minute.

    Configuration File

    In your main configuration file append the following Input & Output sections:

    Regular Expression

    The regex parser allows you to define a custom Ruby regular expression that uses named capture groups to define which content belongs to which key name.

    Fluent Bit uses the Onigmo regular expression library in Ruby mode. For testing purposes you can use the following web editor to test your expressions:

    http://rubular.com/

    Important: do not attempt to add multiline support in your regular expressions if you are using the Tail input plugin, since each line is handled as a separate entity. Instead use the Tail Multiline support configuration feature.

    Security Warning: Onigmo is a backtracking regex engine. You need to be careful not to use expensive regex patterns, or Onigmo can take a very long time to perform pattern matching. For details, please read the article "ReDoS" on OWASP.

    Note: understanding how regular expressions works is out of the scope of this content.

    From a configuration perspective, when the format is set to regex, it is mandatory and expected that a Regex configuration key exists.

    The following parser configuration example aims to provide rules that can be applied to an Apache HTTP Server log entry:

    As an example, take the following Apache HTTP Server log entry:

    The above content does not provide a defined structure for Fluent Bit, but by enabling the proper parser we can produce a structured representation of it:

    A common pitfall is that you cannot use characters other than letters, numbers, and underscores in group names. For example, a group name like (?<user-name>.*) will cause an error because it contains an invalid character (-).
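Replacing the invalid character with an underscore avoids the error:

```
(?<user_name>.*)
```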

    In order to understand, learn and test regular expressions like the example above, we suggest you try the following Ruby Regular Expression Editor:

    MQTT

    The MQTT input plugin allows you to retrieve messages/data from MQTT control packets over a TCP connection. The incoming payload must be a JSON map.

    Configuration Parameters

    The plugin supports the following configuration parameters:

    Key

    Description

    Getting Started

    In order to start listening for MQTT messages, you can run the plugin from the command line or through the configuration file:

    Command Line

    Since the MQTT input plugin lets Fluent Bit behave as a server, we need to dispatch some messages using an MQTT client; in the following example the mosquitto tool is used for this purpose:

    The following command line will send a message to the MQTT input plugin:

    Configuration File

    In your main configuration file append the following Input & Output sections:

    AWS Metadata

    The AWS filter enriches logs with AWS metadata. Currently the plugin adds the EC2 instance ID and availability zone to log records. To use this plugin, you must be running in EC2 and have the instance metadata service enabled.

    Configuration Parameters

    The plugin supports the following configuration parameters:

    Key

    Description

    Note: If you run Fluent Bit in a container, you may have to use instance metadata v1. The plugin behaves the same regardless of which version is used.

    Usage

    Metadata Fields

    Currently, the plugin only adds the instance ID and availability zone. AWS plans to expand this plugin in the future.

    Command Line

    Configuration File

    Docker

    Fluent Bit container images are available on Docker Hub, ready for production usage. Currently available images can be deployed on multiple architectures.

    Tags and Versions

    The following table describes the tags that are available in the Docker Hub repository:

    Dump Internals / Signal

    When the service is running, we can export metrics to see the overall status of the data flow. But there are other use cases where we would like to know the current status of the service internals, specifically to answer questions like what's the current status of the internal buffers? The Dump Internals feature is the answer.

    Fluent Bit v1.4 introduces the Dump Internals feature, which can be triggered easily from the command line by sending the CONT Unix signal.
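A minimal sketch of triggering the dump; here a dummy background process stands in for a running Fluent Bit daemon, whose PID you would target instead:

```shell
# A dummy background process stands in for the Fluent Bit daemon here
sleep 30 &
pid=$!

# Sending SIGCONT to a running Fluent Bit PID triggers the Dump Internals report
kill -CONT "$pid"

# Clean up the dummy process
kill "$pid" 2>/dev/null
echo "dump signal sent to pid $pid"
```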

    Note: this feature is only available on Linux and BSD family operating systems.

    Systemd

    The Systemd input plugin allows you to collect log messages from the Journald daemon on Linux environments.
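A minimal configuration sketch for this plugin (the unit name in Systemd_Filter is an assumption):

```
[INPUT]
    Name            systemd
    Tag             host.*
    Systemd_Filter  _SYSTEMD_UNIT=docker.service

[OUTPUT]
    Name   stdout
    Match  *
```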

    Configuration Parameters

    The plugin supports the following configuration parameters:

    Grep

    Select or exclude records using patterns

    The Grep Filter plugin allows you to match or exclude specific records based on regular expression patterns for values or nested values.

    Configuration Parameters

    The plugin supports the following configuration parameters:

    Record Modifier

    The Record Modifier Filter plugin allows you to append fields or to exclude specific fields.

    Configuration Parameters

    The plugin supports the following configuration parameters. Remove_key and Whitelist_key are exclusive.

    Process

    The Process input plugin allows you to check how healthy a process is. It does so by checking the process at a given interval of time.

    Configuration Parameters

    The plugin supports the following configuration parameters:

    $ kubectl create namespace logging
    $ kubectl create -f https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/fluent-bit-service-account.yaml
    $ kubectl create -f https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/fluent-bit-role.yaml
    $ kubectl create -f https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/fluent-bit-role-binding.yaml
    $ kubectl create -f https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/output/elasticsearch/fluent-bit-configmap.yaml
    apiVersion: apps/v1
    apiVersion: extensions/v1beta1
    $ kubectl create -f https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/output/elasticsearch/fluent-bit-ds.yaml
    $ kubectl create -f https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/output/elasticsearch/fluent-bit-ds-minikube.yaml
    PS> Get-FileHash td-agent-bit-1.5.7-win32.exe
    PS> Expand-Archive td-agent-bit-1.5.7-win64.zip
    td-agent-bit
    ├── bin
    │   ├── fluent-bit.dll
    │   └── fluent-bit.exe
    ├── conf
    │   ├── fluent-bit.conf
    │   ├── parsers.conf
    │   └── plugins.conf
    └── include
        │   ├── flb_api.h
        │   ├── ...
        │   └── flb_worker.h
        └── fluent-bit.h
    PS> .\bin\fluent-bit.exe -i dummy -o stdout
    Fluent Bit v1.5.x
    * Copyright (C) 2019-2020 The Fluent Bit Authors
    * Copyright (C) 2015-2018 Treasure Data
    * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
    * https://fluentbit.io
    
    [2019/06/28 10:13:04] [ info] [storage] initializing...
    [2019/06/28 10:13:04] [ info] [storage] in-memory
    [2019/06/28 10:13:04] [ info] [storage] normal synchronization mode, checksum disabled, max_chunks_up=128
    [2019/06/28 10:13:04] [ info] [engine] started (pid=10324)
    [2019/06/28 10:13:04] [ info] [sp] stream processor started
    [0] dummy.0: [1561684385.443823800, {"message"=>"dummy"}]
    [1] dummy.0: [1561684386.428399000, {"message"=>"dummy"}]
    [2] dummy.0: [1561684387.443641900, {"message"=>"dummy"}]
    [3] dummy.0: [1561684388.441405800, {"message"=>"dummy"}]
    PS> C:\Program Files\td-agent-bit\bin\fluent-bit.exe -i dummy -o stdout
    C:\fluent-bit\
    ├── conf
    │   ├── fluent-bit.conf
    │   └── parsers.conf
    └── bin
        ├── fluent-bit.dll
        └── fluent-bit.exe
    % sc.exe create fluent-bit binpath= "\fluent-bit\bin\fluent-bit.exe -c \fluent-bit\conf\fluent-bit.conf"
    % sc.exe start fluent-bit
    % sc.exe query fluent-bit
    SERVICE_NAME: fluent-bit
        TYPE               : 10  WIN32_OWN_PROCESS
        STATE              : 4 Running
        ...
    % sc.exe stop fluent-bit

    action

    Action to take when a rule does not match. The available options are warn or exit. With warn, a warning message is sent to the logging layer when a mismatch of the rules above is found; using exit makes Fluent Bit abort with status code 255.

    Elasticsearch

  • Forward

  • GELF

  • HTTP

  • InfluxDB

  • Kafka REST Proxy

  • Slack

  • Splunk

  • Stackdriver

  • TCP & TLS

  • Treasure Data

    | Property | Description | Default |
    | --- | --- | --- |
    | tls.verify | force certificate validation | On |
    | tls.debug | Set TLS debug verbosity level. It accepts the following values: 0 (No debug), 1 (Error), 2 (State change), 3 (Informational) and 4 (Verbose) | 1 |
    | tls.ca_file | absolute path to CA certificate file | |
    | tls.ca_path | absolute path to scan for certificate files | |
    | tls.crt_file | absolute path to Certificate file | |
    | tls.key_file | absolute path to private Key file | |
    | tls.key_passwd | optional password for tls.key_file file | |
    | tls.vhost | hostname to be used for TLS SNI extension | |

    Amazon CloudWatch
    Azure
    BigQuery
    Datadog
    Kubernetes Filter
    TLS server name indication

    | Key | Description |
    | --- | --- |
    | Command | The command to execute. |
    | Parser | Specify the name of a parser to interpret the entry as a structured message. |
    | Interval_Sec | Polling interval (seconds). |
    | Interval_NSec | Polling interval (nanoseconds). |
    | Buf_Size | Size of the buffer (check unit sizes for allowed values) |

    | Key | Description |
    | --- | --- |
    | Interval_Sec | Polling interval (seconds). default: 1 |
    | Interval_NSec | Polling interval (nanoseconds). default: 0 |
    | name_regex | Optional name filter regex. default: None |
    | type_regex | Optional type filter regex. default: None |

    192.168.2.20 - - [29/Jul/2015:10:27:10 -0300] "GET /cgi-bin/try/ HTTP/1.0" 200 3395
    http://rubular.com/r/X7BH0M4Ivm

    | Key | Description |
    | --- | --- |
    | Listen | Listener network interface, default: 0.0.0.0 |
    | Port | TCP port where listening for connections, default: 1883 |

    | Key | Description | Default |
    | --- | --- | --- |
    | imds_version | Specify which version of the instance metadata service to use. Valid values are 'v1' or 'v2'. | v2 |

    | Key | Value |
    | --- | --- |
    | az | The availability zone; for example, "us-east-1a". |
    | ec2_instance_id | The EC2 instance ID. |


    | Key | Value Format | Description |
    | --- | --- | --- |
    | Regex | KEY REGEX | Keep records in which the content of KEY matches the regular expression. |
    | Exclude | KEY REGEX | Exclude records in which the content of KEY matches the regular expression. |

    Record Accessor Enabled

    This plugin enables the Record Accessor feature to specify the KEY. Using the record accessor is suggested if you want to match values against nested values.

    Getting Started

    In order to start filtering records, you can run the filter from the command line or through the configuration file. The following example assumes that you have a file called lines.txt with the following content:

    Command Line

    Note: using the command line mode requires special attention to quote the regular expressions properly. It's suggested to use a configuration file.

    The following command will load the tail plugin and read the content of lines.txt. Then the grep filter will apply a regular expression rule over the log field (created by the tail plugin) and only pass records whose field value starts with aa:

    Configuration File

    The filter allows you to use multiple rules, which are applied in order; you can have as many Regex and Exclude entries as required.

    Nested fields example

    If you want to match or exclude records based on nested values, you can use a Record Accessor format as the KEY name. Consider the following record example:

    If you want to exclude records that match a given nested field (for example kubernetes.labels.app), you can use the following rule:

    | Key | Description |
    | --- | --- |
    | Record | Append fields. This parameter needs a key and value pair. |
    | Remove_key | If the key is matched, that field is removed. |
    | Whitelist_key | If the key is not matched, that field is removed. |

    Getting Started

    In order to start filtering records, you can run the filter from the command line or through the configuration file.

    This is a sample in_mem record to filter.

    Append fields

    The following configuration file appends the product name and hostname (via an environment variable) to the record.

    You can also run the filter from the command line.
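A sketch of such an invocation (the appended product value is illustrative):

```
$ fluent-bit -i mem -o stdout -F record_modifier -p 'Record=hostname ${HOSTNAME}' -p 'Record=product Awesome_Tool' -m '*'
```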

    The output will be

    Remove fields with Remove_key

    The following configuration file removes the 'Swap.*' fields.

    You can also run the filter from the command line.

    The output will be

    Remove fields with Whitelist_key

    The following configuration file retains only the 'Mem.*' fields.

    You can also run the filter from the command line.

    The output will be

    | Key | Description |
    | --- | --- |
    | Proc_Name | Name of the target Process to check. |
    | Interval_Sec | Interval in seconds between the service checks. Default value is 1. |
    | Interval_NSec | Specify a nanoseconds interval for service checks; it works in conjunction with the Interval_Sec configuration key. Default value is 0. |
    | Alert | If enabled, it will only generate messages if the target process is down. By default this option is disabled. |
    | Fd | If enabled, the number of file descriptors is appended to each record. Default value is true. |
    | Mem | If enabled, memory usage of the process is appended to each record. Default value is true. |

    Getting Started

    In order to start performing the checks, you can run the plugin from the command line or through the configuration file:

    The following example will check the health of the crond process.
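A sketch of such an invocation, assuming the plugin is loaded as proc:

```
$ fluent-bit -i proc -p proc_name=crond -o stdout
```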

    Configuration File

    In your main configuration file append the following Input & Output sections:
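A sketch of such a configuration, again using crond as the monitored process:

```
[INPUT]
    Name          proc
    Proc_Name     crond
    Interval_Sec  1
    Interval_NSec 0

[OUTPUT]
    Name   stdout
    Match  *
```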

    Testing

    Once Fluent Bit is running, you will see reports on the health of the process:

    Key

    Description

    Proc_Name

    $ fluent-bit -i cpu -t cpu -o http://192.168.2.3:80/something \
        -p tls=on         \
        -p tls.verify=off \
        -m '*'
    [INPUT]
        Name  cpu
        Tag   cpu
    
    [OUTPUT]
        Name       http
        Match      *
        Host       192.168.2.3
        Port       80
        URI        /something
        tls        On
        tls.verify Off
    [INPUT]
        Name  cpu
        Tag   cpu
    
    [OUTPUT]
        Name        forward
        Match       *
        Host        192.168.10.100
        Port        24224
        tls         On
        tls.verify  On
        tls.ca_file /etc/certs/fluent.crt
        tls.vhost   fluent.example.com
    $ fluent-bit -i exec -p 'command=ls /var/log' -o stdout
    Fluent Bit v1.x.x
    * Copyright (C) 2019-2020 The Fluent Bit Authors
    * Copyright (C) 2015-2018 Treasure Data
    * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
    * https://fluentbit.io
    
    [2018/03/21 17:46:49] [ info] [engine] started
    [0] exec.0: [1521622010.013470159, {"exec"=>"ConsoleKit"}]
    [1] exec.0: [1521622010.013490313, {"exec"=>"Xorg.0.log"}]
    [2] exec.0: [1521622010.013492079, {"exec"=>"Xorg.0.log.old"}]
    [3] exec.0: [1521622010.013493443, {"exec"=>"anaconda.ifcfg.log"}]
    [4] exec.0: [1521622010.013494707, {"exec"=>"anaconda.log"}]
    [5] exec.0: [1521622010.013496016, {"exec"=>"anaconda.program.log"}]
    [6] exec.0: [1521622010.013497225, {"exec"=>"anaconda.storage.log"}]
    [INPUT]
        Name          exec
        Tag           exec_ls
        Command       ls /var/log
        Interval_Sec  1
        Interval_NSec 0
        Buf_Size      8mb
    
    [OUTPUT]
        Name   stdout
        Match  *
    $ bin/fluent-bit -i thermal -t my_thermal -o stdout -m '*'
    Fluent Bit v1.x.x
    * Copyright (C) 2019-2020 The Fluent Bit Authors
    * Copyright (C) 2015-2018 Treasure Data
    * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
    * https://fluentbit.io
    
    [2019/08/18 13:39:43] [ info] [storage] initializing...
    ...
    [0] my_thermal: [1566099584.000085820, {"name"=>"thermal_zone0", "type"=>"x86_pkg_temp", "temp"=>60.000000}]
    [1] my_thermal: [1566099585.000136466, {"name"=>"thermal_zone0", "type"=>"x86_pkg_temp", "temp"=>59.000000}]
    [2] my_thermal: [1566099586.000083156, {"name"=>"thermal_zone0", "type"=>"x86_pkg_temp", "temp"=>59.000000}]
    $ bin/fluent-bit -i thermal -t my_thermal -p "interval_sec=60" -p "name_regex=thermal_zone0" -o stdout -m '*'
    Fluent Bit v1.3.0
    Copyright (C) Treasure Data
    
    [2019/08/18 13:39:43] [ info] [storage] initializing...
    ...
    [0] my_temp: [1565759542.001053749, {"name"=>"thermal_zone0", "type"=>"pch_skylake", "temp"=>48.500000}]
    [0] my_temp: [1565759602.001661061, {"name"=>"thermal_zone0", "type"=>"pch_skylake", "temp"=>48.500000}]
    [INPUT]
        Name thermal
        Tag  my_thermal
    
    [OUTPUT]
        Name  stdout
        Match *
    [PARSER]
        Name   apache
        Format regex
        Regex  ^(?<host>[^ ]*) [^ ]* (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*?)(?: +\S*)?)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$
        Time_Key time
        Time_Format %d/%b/%Y:%H:%M:%S %z
    [1154104030, {"host"=>"192.168.2.20",
                  "user"=>"-",
                  "method"=>"GET",
                  "path"=>"/cgi-bin/try/",
                  "code"=>"200",
                  "size"=>"3395",
                  "referer"=>"",
                  "agent"=>""
                  }
    ]
    $ fluent-bit -i mqtt -t data -o stdout -m '*'
    Fluent Bit v1.x.x
    * Copyright (C) 2019-2020 The Fluent Bit Authors
    * Copyright (C) 2015-2018 Treasure Data
    * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
    * https://fluentbit.io
    
    [2016/05/20 14:22:52] [ info] starting engine
    [0] data: [1463775773, {"topic"=>"some/topic", "key1"=>123, "key2"=>456}]
    $ mosquitto_pub  -m '{"key1": 123, "key2": 456}' -t some/topic
    [INPUT]
        Name   mqtt
        Tag    data
        Listen 0.0.0.0
        Port   1883
    
    [OUTPUT]
        Name   stdout
        Match  *
    $ bin/fluent-bit -i dummy -F aws -m '*' -o stdout
    
    [2020/01/17 07:57:17] [ info] [engine] started (pid=32744)
    [0] dummy.0: [1579247838.000171227, {"message"=>"dummy", "az"=>"us-west-2b", "ec2_instance_id"=>"i-06bc83dbc2ac2fdf8"}]
    [1] dummy.0: [1579247839.000125097, {"message"=>"dummy", "az"=>"us-west-2b", "ec2_instance_id"=>"i-06bc87dbc2ac3fdf8"}]
    [INPUT]
        Name dummy
        Tag dummy
    
    [FILTER]
        Name aws
        Match *
        imds_version v1
    
    [OUTPUT]
        Name stdout
        Match *
    {"log": "aaa"}
    {"log": "aab"}
    {"log": "bbb"}
    {"log": "ccc"}
    {"log": "ddd"}
    {"log": "eee"}
    {"log": "fff"}
    {"log": "ggg"}
    $ bin/fluent-bit -i tail -p 'path=lines.txt' -F grep -p 'regex=log aa' -m '*' -o stdout
    [INPUT]
        name   tail
        path   lines.txt
        parser json
    
    [FILTER]
        name   grep
        match  *
        regex  log aa
    
    [OUTPUT]
        name   stdout
        match  *
    {
        "log": "something",
        "kubernetes": {
            "pod_name": "myapp-0",
            "namespace_name": "default",
            "pod_id": "216cd7ae-1c7e-11e8-bb40-000c298df552",
            "labels": {
                "app": "myapp"
            },
            "host": "minikube",
            "container_name": "myapp",
            "docker_id": "370face382c7603fdd309d8c6aaaf434fd98b92421ce"
        }
    }
    [FILTER]
        Name    grep
        Match   *
        Exclude $kubernetes['labels']['app'] myapp
    {"Mem.total"=>1016024, "Mem.used"=>716672, "Mem.free"=>299352, "Swap.total"=>2064380, "Swap.used"=>32656, "Swap.free"=>2031724}
    [INPUT]
        Name mem
        Tag  mem.local
    
    [OUTPUT]
        Name  stdout
        Match *
    
    [FILTER]
        Name record_modifier
        Match *
        Record hostname ${HOSTNAME}
        Record product Awesome_Tool
    $ fluent-bit -i mem -o stdout -F record_modifier -p 'Record=hostname ${HOSTNAME}' -p 'Record=product Awesome_Tool' -m '*'
    [0] mem.local: [1492436882.000000000, {"Mem.total"=>1016024, "Mem.used"=>716672, "Mem.free"=>299352, "Swap.total"=>2064380, "Swap.used"=>32656, "Swap.free"=>2031724, "hostname"=>"localhost.localdomain", "product"=>"Awesome_Tool"}]
    [INPUT]
        Name mem
        Tag  mem.local
    
    [OUTPUT]
        Name  stdout
        Match *
    
    [FILTER]
        Name record_modifier
        Match *
        Remove_key Swap.total
        Remove_key Swap.used
        Remove_key Swap.free
    $ fluent-bit -i mem -o stdout -F  record_modifier -p 'Remove_key=Swap.total' -p 'Remove_key=Swap.free' -p 'Remove_key=Swap.used' -m '*'
    [0] mem.local: [1492436998.000000000, {"Mem.total"=>1016024, "Mem.used"=>716672, "Mem.free"=>295332}]
    [INPUT]
        Name mem
        Tag  mem.local
    
    [OUTPUT]
        Name  stdout
        Match *
    
    [FILTER]
        Name record_modifier
        Match *
        Whitelist_key Mem.total
        Whitelist_key Mem.used
        Whitelist_key Mem.free
    $ fluent-bit -i mem -o stdout -F  record_modifier -p 'Whitelist_key=Mem.total' -p 'Whitelist_key=Mem.free' -p 'Whitelist_key=Mem.used' -m '*'
    [0] mem.local: [1492436998.000000000, {"Mem.total"=>1016024, "Mem.used"=>716672, "Mem.free"=>295332}]
    $ fluent-bit -i proc -p proc_name=crond -o stdout
    [INPUT]
        Name          proc
        Proc_Name     crond
        Interval_Sec  1
        Interval_NSec 0
        Fd            true
        Mem           true
    
    [OUTPUT]
        Name   stdout
        Match  *
    $ fluent-bit -i proc -p proc_name=fluent-bit -o stdout
    Fluent Bit v1.x.x
    * Copyright (C) 2019-2020 The Fluent Bit Authors
    * Copyright (C) 2015-2018 Treasure Data
    * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
    * https://fluentbit.io
    
    [2017/01/30 21:44:56] [ info] [engine] started
    [0] proc.0: [1485780297, {"alive"=>true, "proc_name"=>"fluent-bit", "pid"=>10964, "mem.VmPeak"=>14740000, "mem.VmSize"=>14740000, "mem.VmLck"=>0, "mem.VmHWM"=>1120000, "mem.VmRSS"=>1120000, "mem.VmData"=>2276000, "mem.VmStk"=>88000, "mem.VmExe"=>1768000, "mem.VmLib"=>2328000, "mem.VmPTE"=>68000, "mem.VmSwap"=>0, "fd"=>18}]
    [1] proc.0: [1485780298, {"alive"=>true, "proc_name"=>"fluent-bit", "pid"=>10964, "mem.VmPeak"=>14740000, "mem.VmSize"=>14740000, "mem.VmLck"=>0, "mem.VmHWM"=>1148000, "mem.VmRSS"=>1148000, "mem.VmData"=>2276000, "mem.VmStk"=>88000, "mem.VmExe"=>1768000, "mem.VmLib"=>2328000, "mem.VmPTE"=>68000, "mem.VmSwap"=>0, "fd"=>18}]
    [2] proc.0: [1485780299, {"alive"=>true, "proc_name"=>"fluent-bit", "pid"=>10964, "mem.VmPeak"=>14740000, "mem.VmSize"=>14740000, "mem.VmLck"=>0, "mem.VmHWM"=>1152000, "mem.VmRSS"=>1148000, "mem.VmData"=>2276000, "mem.VmStk"=>88000, "mem.VmExe"=>1768000, "mem.VmLib"=>2328000, "mem.VmPTE"=>68000, "mem.VmSwap"=>0, "fd"=>18}]
    [3] proc.0: [1485780300, {"alive"=>true, "proc_name"=>"fluent-bit", "pid"=>10964, "mem.VmPeak"=>14740000, "mem.VmSize"=>14740000, "mem.VmLck"=>0, "mem.VmHWM"=>1152000, "mem.VmRSS"=>1148000, "mem.VmData"=>2276000, "mem.VmStk"=>88000, "mem.VmExe"=>1768000, "mem.VmLib"=>2328000, "mem.VmPTE"=>68000, "mem.VmSwap"=>0, "fd"=>18}]

Does the value of record key A equal B?

The action to take when a rule does not match. The available options are warn or exit. With warn, a warning message is sent to the logging layer when a rule mismatch is found; exit makes Fluent Bit abort with status code 255.

Property | Description
key_exists | Check if a key with a given name exists in the record.
key_not_exists | Check if a key does not exist in the record.
key_val_is_null | Check that the value of the key is NULL.
key_val_is_not_null | Check that the value of the key is NOT NULL.
key_val_eq | Check that the value of the key equals the given value in the configuration.


Tag(s) | Manifest Architectures | Description
1.5 | x86_64, arm64v8, arm32v7 | Latest release of the 1.5.x series.
1.5.7 | x86_64, arm64v8, arm32v7 | Release v1.5.7
1.5-debug, 1.5.7-debug | x86_64 | v1.5.x releases + Busybox
1.5.6 | x86_64, arm64v8, arm32v7 | Release v1.5.6
1.5-debug, 1.5.6-debug | x86_64 | v1.5.x releases + Busybox
1.5.5 | x86_64, arm64v8, arm32v7 | Release v1.5.5
1.5-debug, 1.5.5-debug | x86_64 | v1.5.x releases + Busybox
1.5.4 | x86_64, arm64v8, arm32v7 | Release v1.5.4
1.5-debug, 1.5.4-debug | x86_64 | v1.5.x releases + Busybox
1.5.3 | x86_64, arm64v8, arm32v7 | Release v1.5.3
1.5-debug, 1.5.3-debug | x86_64 | v1.5.x releases + Busybox
1.5.2 | x86_64, arm64v8, arm32v7 | Release v1.5.2
1.5-debug, 1.5.2-debug | x86_64 | v1.5.x releases + Busybox
1.5.1 | x86_64, arm64v8, arm32v7 | Release v1.5.1
1.5-debug, 1.5.1-debug | x86_64 | v1.5.x releases + Busybox
1.5.0 | x86_64, arm64v8, arm32v7 | Release v1.5.0
1.5-debug, 1.5.0-debug | x86_64 | v1.5.x releases + Busybox

    It's strongly suggested that you always use the latest image of Fluent Bit.

    Multi Architecture Images

    Our x86_64 stable image is based on Distroless, focusing on security: it contains just the Fluent Bit binary, minimal system libraries and basic configuration. Optionally, we provide debug images for x86_64 which contain Busybox and can be used for troubleshooting or testing purposes.

    In addition, the main manifest provides images for the arm64v8 and arm32v7 architectures. From a deployment perspective, there is no need to specify an architecture: the container client tool that pulls the image fetches the proper layer for the running architecture.

    For every architecture we build the layers using the following base images:

Architecture | Base Image
x86_64 | Distroless
arm64v8 | arm64v8/debian:buster-slim
arm32v7 | arm32v7/debian:buster-slim

    Getting Started

    Download the latest stable image from the 1.5 series:

    Once the image is in place, run the following (admittedly useless) test, which makes Fluent Bit measure CPU usage in the container:

    That command makes Fluent Bit measure CPU usage every second and flush the results to the standard output, e.g.:

    F.A.Q

    Why is there no Fluent Bit Docker image based on Alpine Linux?

    Alpine Linux uses the Musl C library instead of Glibc. Musl is not fully compatible with Glibc, which has generated many issues in the following areas when used with Fluent Bit:

    • Memory allocator: to run Fluent Bit properly in high-load environments, we use Jemalloc as the default memory allocator, which reduces fragmentation and provides better performance for our needs. Jemalloc cannot run smoothly with Musl and requires extra work.

    • Alpine Linux's Musl function bootstrap has a compatibility issue when loading Golang shared libraries; this generates problems when trying to load Golang output plugins in Fluent Bit.

    • Alpine Linux's Musl time format parser does not support Glibc extensions.

    • The maintainers' preference in terms of base images, for security and maintenance reasons, is Distroless and Debian.

    Where does the 'latest' tag point to?

    Our Docker container images are deployed thousands of times per day; we take security and stability very seriously.

    The latest tag most of the time points to the latest stable image. When we release a major update to Fluent Bit, for example from v1.3.x to v1.4.0, we don't move the latest tag until two weeks after the release. That gives us extra time to verify with our community that everything works as expected.


    Usage

    Run the following kill command to signal Fluent Bit:

    The pidof command looks up the process ID of Fluent Bit. You can replace the

    Fluent Bit will dump the following information to the standard output interface (stdout):

    Input Plugins Dump

    The dump provides insights for every input instance configured.

Entry | Sub-entry | Description
status | | Overall ingestion status of the plugin.
 | overlimit | If the plugin has been configured with Mem_Buf_Limit, this entry reports whether the plugin is over the limit at the moment of the dump: it prints yes if it is over the limit, otherwise no.
 | mem_size | Current memory size in use by the input plugin in-memory.
 | mem_limit | Limit set by Mem_Buf_Limit.

    Tasks

    When an input plugin ingests data into the engine, a Chunk is created. A Chunk can contain multiple records. At flush time, the engine creates a Task that contains the routes for the Chunk in question.

    The Task dump describes the tasks associated with the input plugin:

Entry | Description
total_tasks | Total number of active tasks associated with data generated by the input plugin.
new | Number of tasks not yet assigned to an output plugin. Tasks stay in the new status for a very short period of time (this value is usually very low or zero).
running | Number of active tasks being processed by output plugins.
size | Amount of memory used by the Chunks being processed (total chunk size).

    Chunks

    The Chunks dump gives more details about all the chunks that the input plugin has generated and that are still being processed.

    Depending on the buffering strategy and the limits imposed by configuration, some Chunks might be up (in memory) or down (filesystem).

Entry | Description
total_chunks | Total number of Chunks generated by the input plugin that are still being processed by the engine.
up_chunks | Total number of Chunks loaded in memory.
down_chunks | Total number of Chunks stored in the filesystem but not yet loaded in memory.

    busy_chunks

    Storage Layer Dump

    Fluent Bit relies on a custom storage layer interface designed for hybrid buffering. The Storage Layer entry contains a total summary of Chunks registered by Fluent Bit:

Entry | Description
total chunks | Total number of Chunks.
mem chunks | Total number of memory-based Chunks.
fs chunks | Total number of filesystem-based Chunks.

    metrics

Key | Description | Default
Path | Optional path to the Systemd journal directory; if not set, the plugin uses default paths to read local-only logs. |
Max_Fields | Set the maximum number of fields (keys) allowed per record. | 8000
Max_Entries | When Fluent Bit starts, the Journal might have a high number of logs in the queue. In order to avoid delays and reduce memory usage, this option sets the maximum number of log entries that can be processed per round. Once the limit is reached, Fluent Bit continues processing the remaining log entries once Journald performs the notification. | 5000
Systemd_Filter | Perform a query over logs that contain specific Journald key/value pairs, e.g. _SYSTEMD_UNIT=UNIT. The Systemd_Filter option can be specified multiple times in the input section to apply multiple filters as required. |
Systemd_Filter_Type | Define the filter type when Systemd_Filter is specified multiple times. Allowed values are And and Or. With And, a record matches only when all of the Systemd_Filter entries match; with Or, a record matches when any of the Systemd_Filter entries matches. | Or
Tag | The tag used to route messages. The Systemd plugin adds one extra feature: if the tag includes a star/wildcard, it is expanded with the Systemd unit name (e.g. host.* => host.UNIT_NAME). |
DB | Specify the absolute path of a database file used to keep track of the Journald cursor. |
DB.Sync | Set the default synchronization (I/O) method. Values: Extra, Full, Normal, Off. This flag affects how the internal SQLite engine synchronizes to disk. Note: this option was introduced in Fluent Bit v1.4.6. | Full
Read_From_Tail | Start reading new entries, skipping entries already stored in Journald. | Off
Strip_Underscores | Remove the leading underscore of each Journald field (key). For example, the Journald field _PID becomes the key PID. | Off

    Getting Started

    In order to receive Systemd messages, you can run the plugin from the command line or through the configuration file:

    Command Line

    From the command line you can let Fluent Bit listen for Systemd messages with the following options:

    In the example above we are collecting all messages coming from the Docker service.

    Configuration File

    In your main configuration file append the following Input & Output sections:


    Health

    The Health input plugin allows you to check how healthy a TCP server is. It performs the check by issuing a TCP connection at regular intervals.

    Configuration Parameters

    The plugin supports the following configuration parameters:

    Key

    Description

    Getting Started

    In order to start performing the checks, you can run the plugin from the command line or through the configuration file:

    Command Line

    From the command line you can let Fluent Bit generate the checks with the following options:

    Configuration File

    In your main configuration file append the following Input & Output sections:

    Testing

    Once Fluent Bit is running, you will see some random values in the output interface similar to this:

    TCP

    The tcp input plugin allows you to retrieve structured JSON or raw messages over a TCP network interface (TCP port).

    Configuration Parameters

    The plugin supports the following configuration parameters:

    Key

    Description

    Getting Started

    In order to receive JSON messages over TCP, you can run the plugin from the command line or through the configuration file:

    Command Line

    From the command line you can let Fluent Bit listen for JSON messages with the following options:

    By default the service will listen on all interfaces (0.0.0.0) through TCP port 5170. Optionally, you can change this directly, e.g.:

    In this example, JSON messages will only be accepted through the network interface with address 192.168.3.2 and TCP port 9090.

    Configuration File

    In your main configuration file append the following Input & Output sections:

    Testing

    Once Fluent Bit is running, you can send some messages using netcat:

    In Fluent Bit we should see the following output:

    Performance Considerations

    When receiving payloads in JSON format, there is a high performance penalty: parsing JSON is a very expensive task, so you can expect CPU usage to increase under high-load environments.

    To get faster data ingestion, consider using the option Format none to avoid JSON parsing when it is not needed.
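A minimal sketch of such a setup, assuming the default port; only Format none (with the default Separator) is added on top of the basic tcp input:

```
[INPUT]
    Name      tcp
    Listen    0.0.0.0
    Port      5170
    Format    none
    Separator \n

[OUTPUT]
    Name  stdout
    Match *
```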

    Throttle

    The Throttle filter plugin sets an average Rate of messages per Interval, based on a leaky bucket and sliding window algorithm. In case of flooding, it will leak messages within the configured rate.

    Configuration Parameters

    The plugin supports the following configuration parameters:

    Key

    Functional description

    Let's imagine we have configured:

    and we received 1 message in the first second, 3 messages in the second, and 5 in the third. As you can see, even though the Window is actually 5, we use a "slow" start to prevent flooding during startup.

    But as soon as we reach Window size * Interval, we have a true sliding window with aggregation over the complete window.

    Once the average over the window exceeds the Rate, we start dropping messages, so that

    will become:

    As you can see, the last pane of the window was overwritten and 1 message was dropped.

    Interval vs Window size

    You might have noticed the possibility of configuring the Interval of the window shift. It is counter-intuitive, but there is a difference between the two examples above:

    and

    Even though both examples allow a maximum Rate of 60 messages per minute, the first example may get all 60 messages within the first second and drop the rest for the entire minute:

    The second example will not allow more than 1 message per second, every second, making the output rate smoother:

    It may drop some data if the rate is ragged. I would recommend using a bigger Interval and Rate for streams of rare but important events, while keeping the Window bigger and the Interval small for constantly intensive inputs.
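The pane/average mechanics described above can be sketched with a toy model. This is an illustration only, not the actual Throttle implementation; the class name and structure are invented for the example:

```python
from collections import deque

class SlidingWindowThrottle:
    """Toy model of the Throttle filter: the window is split into panes,
    one per Interval; a record passes only while the average count per
    pane stays below Rate."""

    def __init__(self, rate, window):
        self.rate = rate
        self.window = window
        # All panes start empty: this is the "slow start" described above,
        # since empty panes pull the average down during startup.
        self.panes = deque([0] * window, maxlen=window)

    def shift(self):
        """Called once per Interval: the oldest pane leaves the window."""
        self.panes.append(0)

    def allow(self):
        """Admit a record into the current pane if the window average permits."""
        if sum(self.panes) / self.window >= self.rate:
            return False  # average over the window reached Rate: drop
        self.panes[-1] += 1
        return True


throttle = SlidingWindowThrottle(rate=1, window=5)
accepted = sum(throttle.allow() for _ in range(10))
print(accepted)  # a burst in one pane is capped at rate * window = 5
```

Note how a burst is capped by the whole window (rate * window) but then keeps later panes blocked until the burst pane slides out, which is the "leak" behaviour described above.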

    Command Line

    Note: It's suggested to use a configuration file.

    The following command will load the tail plugin and read the content of the lines.txt file. Then the throttle filter will apply a rate limit and only pass records that are read below that rate:

    Configuration File

    The example above will pass an average of 1000 messages per second over 300 seconds.

    Serial Interface

    The serial input plugin allows you to retrieve messages/data from a serial interface.

    Configuration Parameters

    Key

    Description

    File

    Getting Started

    In order to retrieve messages over the Serial interface, you can run the plugin from the command line or through the configuration file:

    Command Line

    The following example loads the serial input plugin with a Bitrate of 9600, listens on the /dev/tnt0 interface and uses the custom tag data to route the messages.

    The interface above (/dev/tnt0) is an emulation of a serial interface (more details at the bottom). For demonstration purposes, we will write a message to the other end of the interface, in this case /dev/tnt1, e.g.:

    In Fluent Bit you should see an output like this:

    Now, using the Separator configuration, we can send multiple messages at once (run this command after starting Fluent Bit):

    Configuration File

    In your main configuration file append the following Input & Output sections:

    Emulating Serial Interface on Linux

    The following is some extra information that will allow you to emulate a serial interface on your Linux system, so you can test this serial input plugin locally in case you don't have such an interface on your computer. The following procedure has been tested on Ubuntu 15.04 running Linux kernel 4.0.

    Build and install the tty0tty module

    Download the sources

    Unpack and compile

    Copy the new kernel module into the kernel modules directory

    Load the module

    You should see the new serial ports in /dev/ (ls /dev/tnt*). Give appropriate permissions to the new serial ports:

    When the module is loaded, it will interconnect the following virtual interfaces:

    Decoders

    There are certain cases where the log messages being parsed contain encoded data. A typical use case is containerized environments with Docker: the application logs its data in JSON format, but the payload arrives as an escaped string. Consider the following example.

    Original message generated by the application:

    Then the Docker log message becomes encapsulated as follows:

    As you can see, the original message is handled as an escaped string. Ideally, in Fluent Bit we would like to keep the original structured message, not a string.
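The double decoding involved can be reproduced in a few lines of Python; a quick sketch using the Docker line shown above:

```python
import json

# The Docker log line from above: the application's JSON payload arrives
# as an escaped string under the "log" key.
line = '{"log":"{\\"status\\": \\"up and running\\"}\\r\\n","stream":"stdout","time":"2018-03-09T01:01:44.851160855Z"}'

outer = json.loads(line)          # first pass: the Docker envelope
inner = json.loads(outer["log"])  # second pass: the escaped payload
print(inner["status"])            # -> up and running
```

This second pass over the "log" field is what the decoders described below automate inside the parser.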

    Getting Started

    Decoders are a built-in feature available through the Parsers file; each parser definition can optionally set one or multiple decoders. There are two types of decoders:

    • Decode_Field: if the content can be decoded into a structured message, append the structured message (keys and values) to the original log message.

    • Decode_Field_As: any content decoded (unstructured or structured) will be replaced in the same key/value, no extra keys are added.

    Our pre-defined Docker parser has the following definition:

    Each line in the parser with a Decode_Field key instructs the parser to apply a specific decoder on a given field; optionally, it offers the option to take an extra action if the decoder fails.

    Decoders

    Optional Actions

    If a decoder fails to decode the field, or you want to try the next decoder, it is possible to define an optional action. Available actions are:

    Note that actions are affected by some restrictions:

    • On Decode_Field_As, if it succeeded, another decoder of the same type can be applied to the same field only if the data continues to be an unstructured message (raw text).

    • On Decode_Field, if it succeeded, it can only be applied once to the same field, since by nature Decode_Field aims to decode a structured message.

    Examples

    escaped_utf8

    Example input (from /path/to/log.log in configuration below)

    Example output

    Configuration file

    The fluent-bit-parsers.conf file,

    Monitoring

    Gather Metrics from Fluent Bit pipeline

    Fluent Bit comes with a built-in HTTP Server that can be used to query internal information and monitor metrics of each running plugin.

    The monitoring interface can be easily integrated with Prometheus, since we support its native format.

    Getting Started

    To get started, the first step is to enable the HTTP Server from the configuration file:

    The above configuration snippet instructs Fluent Bit to start its HTTP server on TCP port 2020, listening on all network interfaces:
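For reference, a minimal SERVICE section enabling the built-in HTTP server could look like this (using the port and listener mentioned above):

```
[SERVICE]
    HTTP_Server  On
    HTTP_Listen  0.0.0.0
    HTTP_Port    2020
```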

    Head

    The head input plugin allows you to read events from the head of a file. Its behavior is similar to the head command.

    Configuration Parameters

    The plugin supports the following configuration parameters:

    {"color": "blue", "label": {"name": null}}
    {"color": "red", "label": {"name": "abc"}, "meta": "data"}
    {"color": "green", "label": {"name": "abc"}, "meta": null}
    [SERVICE]
        flush        1
        log_level    info
        parsers_file parsers.conf
    
    [INPUT]
        name        tail
        path        ./data.log
        parser      json
        exit_on_eof on
    
    # First 'expect' filter to validate that our data was structured properly
    [FILTER]
        name        expect
        match       *
        key_exists  color
        key_exists  $label['name']
        action      exit
    
    [OUTPUT]
        name        stdout
        match       *
    [SERVICE]
        flush        1
        log_level    info
        parsers_file parsers.conf
    
    [INPUT]
        name         tail
        path         ./data.log
        parser       json
        exit_on_eof  on
    
    # First 'expect' filter to validate that our data was structured properly
    [FILTER]
        name       expect
        match      *
        key_exists color
        key_exists label
        action     exit
    
    # Match records that only contains map 'label' with key 'name' = 'abc'
    [FILTER]
        name       grep
        match      *
        regex      $label['name'] ^abc$
    
    # Check that every record contains 'label' with a non-null value
    [FILTER]
        name       expect
        match      *
        key_val_eq $label['name'] abc
        action     exit
    
    # Append a new key to the record using an environment variable
    [FILTER]
        name       record_modifier
        match      *
        record     hostname ${HOSTNAME}
    
    # Check that every record contains 'hostname' key
    [FILTER]
        name       expect
        match      *
        key_exists hostname
        action     exit
    
    [OUTPUT]
        name       stdout
        match      *
    $ docker pull fluent/fluent-bit:1.5
    $ docker run -ti fluent/fluent-bit:1.5 /fluent-bit/bin/fluent-bit -i cpu -o stdout -f 1
    Fluent-Bit v1.5.x
    Copyright (C) Treasure Data
    
    [2019/10/01 12:29:02] [ info] [engine] started
    [0] cpu.0: [1504290543.000487750, {"cpu_p"=>0.750000, "user_p"=>0.250000, "system_p"=>0.500000, "cpu0.p_cpu"=>0.000000, "cpu0.p_user"=>0.000000, "cpu0.p_system"=>0.000000, "cpu1.p_cpu"=>1.000000, "cpu1.p_user"=>0.000000, "cpu1.p_system"=>1.000000, "cpu2.p_cpu"=>1.000000, "cpu2.p_user"=>1.000000, "cpu2.p_system"=>0.000000, "cpu3.p_cpu"=>0.000000, "cpu3.p_user"=>0.000000, "cpu3.p_system"=>0.000000}]
    kill -CONT `pidof fluent-bit`
    [engine] caught signal (SIGCONT)
    [2020/03/23 17:39:02] Fluent Bit Dump
    
    ===== Input =====
    syslog_debug (syslog)
    │
    ├─ status
    │  └─ overlimit     : no
    │     ├─ mem size   : 60.8M (63752145 bytes)
    │     └─ mem limit  : 61.0M (64000000 bytes)
    │
    ├─ tasks
    │  ├─ total tasks   : 92
    │  ├─ new           : 0
    │  ├─ running       : 92
    │  └─ size          : 171.1M (179391504 bytes)
    │
    └─ chunks
       └─ total chunks  : 92
          ├─ up chunks  : 35
          ├─ down chunks: 57
          └─ busy chunks: 92
             ├─ size    : 60.8M (63752145 bytes)
             └─ size err: 0
    
    ===== Storage Layer =====
    total chunks     : 92
    ├─ mem chunks    : 0
    └─ fs chunks     : 92
       ├─ up         : 35
       └─ down       : 57
    $ fluent-bit -i systemd \
                 -p systemd_filter=_SYSTEMD_UNIT=docker.service \
                 -p tag='host.*' -o stdout
    [SERVICE]
        Flush        1
        Log_Level    info
        Parsers_File parsers.conf
    
    [INPUT]
        Name            systemd
        Tag             host.*
        Systemd_Filter  _SYSTEMD_UNIT=docker.service
    
    [OUTPUT]
        Name   stdout
        Match  *
    {"status": "up and running"}
    {"log":"{\"status\": \"up and running\"}\r\n","stream":"stdout","time":"2018-03-09T01:01:44.851160855Z"}

    Chunks marked as busy (being flushed) or locked. Busy Chunks are immutable and likely ready to be (or being) processed.

size | Amount of bytes used by the Chunk.
size err | Number of Chunks in an error state where their size could not be retrieved.
up | Total number of filesystem chunks up in memory.
down | Total number of filesystem chunks down (not loaded in memory).


Host | Name of the target host or IP address to check.
Port | TCP port on which to perform the connection check.
Interval_Sec | Interval in seconds between service checks. Default: 1.
Interval_NSec | Specify a nanoseconds interval for service checks; it works in conjunction with the Interval_Sec configuration key. Default: 0.
Alert | If enabled, it will only generate messages when the target TCP service is down. By default this option is disabled.
Add_Host | If enabled, the hostname is appended to each record. Default: false.
Add_Port | If enabled, the port number is appended to each record. Default: false.

Key | Description | Default
Listen | Listener network interface. | 0.0.0.0
Port | TCP port to listen on for incoming connections. | 5170
Buffer_Size | Specify the maximum buffer size in KB to receive a JSON message. If not set, the default size is the value of Chunk_Size. |
Chunk_Size | By default, the buffer that stores incoming JSON messages does not allocate the maximum memory allowed up front; instead, it allocates memory as required, in rounds set by Chunk_Size (in KB). If not set, Chunk_Size is 32 (32KB). | 32
Format | Specify the expected payload format. It supports the options json and none. With json it expects JSON maps; with none, it splits every record using the defined Separator (option below). | json
Separator | When Format is set to none, Fluent Bit needs a separator string to split the records. By default it uses the line-feed character \n (LF, 0x0A). | \n


Key | Value Format | Description
Rate | Integer | Amount of messages allowed per time interval.
Window | Integer | Amount of intervals to calculate the average over. Default: 5.
Interval | String | Time interval, expressed in "sleep" format, e.g. 3s, 1.5m, 0.5h.
Print_Status | Bool | Whether to print status messages with the current rate and limits to the information logs.

File | Absolute path to the device entry, e.g. /dev/ttyS0.
Bitrate | The bitrate for the communication, e.g. 9600, 38400, 115200.
Min_Bytes | The serial interface expects at least Min_Bytes to be available before processing the message (default: 1).
Separator | An optional separator string used to determine when a message ends.
Format | Specify the format of the incoming data stream. The only option available is json. Note that Format and Separator cannot be used at the same time.

Name | Description
json | Handle the field content as a JSON map. If it finds a JSON map, it replaces the content with the structured map.
escaped | Decode an escaped string.
escaped_utf8 | Decode a UTF-8 escaped string.

Name | Description
try_next | If the decoder failed, apply the next decoder in the list to the same field.
do_next | Whether the decoder succeeded or failed, apply the next decoder in the list to the same field.

    License

    Strong Commitment to the Openness and Collaboration

    Fluent Bit, including its core, plugins and tools, is distributed under the terms of the Apache License v2.0:

    Absolute path to the target file, e.g: /proc/uptime

    Buf_Size

    Buffer size to read the file.

    Interval_Sec

    Polling interval (seconds).

    Interval_NSec

    Polling interval (nanosecond).

    Add_Path

    If enabled, filepath is appended to each records. Default value is false.

    Key

    Rename a key. Default: head.

    Lines

    Line number to read. If the number N is set, in_head reads first N lines like head(1) -n.

    Split_line

    If enabled, in_head generates a key-value pair per line.

    Split Line Mode

    This mode is useful to get a specific line. This is an example to get CPU frequency from /proc/cpuinfo.

    /proc/cpuinfo is a special file to get cpu information.

    The CPU frequency line is "cpu MHz : 2791.009". We can get this line with the following configuration file.

    Output is

    Getting Started

    In order to read the head of a file, you can run the plugin from the command line or through the configuration file:

    Command Line

    The following example will read events from the /proc/uptime file, tag the records with the uptime name and flush them back to the stdout plugin:

    Configuration File

    In your main configuration file append the following Input & Output sections:

    Note: Total interval (sec) = Interval_Sec + (Interval_Nsec / 1000000000).

    e.g. 1.5s = 1s + 500000000ns
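The total-interval formula above can be sketched in a couple of lines; a minimal check of the arithmetic:

```python
# Total polling interval combines the seconds and nanoseconds parts:
# Interval_Sec + (Interval_NSec / 1000000000)
def total_interval(interval_sec, interval_nsec):
    return interval_sec + interval_nsec / 1_000_000_000

print(total_interval(1, 500000000))  # 1.5 seconds
```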

    Key

    Description

    File

    $ fluent-bit -i health://127.0.0.1:80 -o stdout
    [INPUT]
        Name          health
        Host          127.0.0.1
        Port          80
        Interval_Sec  1
        Interval_NSec 0
    
    [OUTPUT]
        Name   stdout
        Match  *
    $ fluent-bit -i health://127.0.0.1:80 -o stdout
    Fluent Bit v1.x.x
    * Copyright (C) 2019-2020 The Fluent Bit Authors
    * Copyright (C) 2015-2018 Treasure Data
    * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
    * https://fluentbit.io
    
    [2016/10/07 21:37:51] [ info] [engine] started
    [0] health.0: [1475897871, {"alive"=>true}]
    [1] health.0: [1475897872, {"alive"=>true}]
    [2] health.0: [1475897873, {"alive"=>true}]
    [3] health.0: [1475897874, {"alive"=>true}]
    $ fluent-bit -i tcp -o stdout
    $ fluent-bit -i tcp://192.168.3.2:9090 -o stdout
    [INPUT]
        Name        tcp
        Listen      0.0.0.0
        Port        5170
        Chunk_Size  32
        Buffer_Size 64
        Format      json
    
    [OUTPUT]
        Name        stdout
        Match       *
    $ echo '{"key 1": 123456789, "key 2": "abcdefg"}' | nc 127.0.0.1 5170
    $ bin/fluent-bit -i tcp -o stdout -f 1
    Fluent Bit v1.x.x
    * Copyright (C) 2019-2020 The Fluent Bit Authors
    * Copyright (C) 2015-2018 Treasure Data
    * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
    * https://fluentbit.io
    
    [2019/10/03 09:19:34] [ info] [storage] initializing...
    [2019/10/03 09:19:34] [ info] [storage] in-memory
    [2019/10/03 09:19:34] [ info] [engine] started (pid=14569)
    [2019/10/03 09:19:34] [ info] [in_tcp] binding 0.0.0.0:5170
    [2019/10/03 09:19:34] [ info] [sp] stream processor started
    [0] tcp.0: [1570115975.581246030, {"key 1"=>123456789, "key 2"=>"abcdefg"}]
    Rate 5
    Window 5
    Interval 1s
    +-------+-+-+-+ 
    |1|3|5| | | | | 
    +-------+-+-+-+ 
    |  3  |         average = 3, and not 1.8 if you calculate 0 for last 2 panes. 
    +-----+
    +-------------+ 
    |1|3|5|7|3|4| | 
    +-------------+ 
      |  4.4    |   
      ----------+
    +-------------+
    |1|3|5|7|3|4|7|
    +-------------+
        |   5.2   |
        +---------+
    +-------------+
    |1|3|5|7|3|4|6|
    +-------------+
        |   5     |
        +---------+
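The averaging illustrated in the panes above can be sketched in a few lines: the average is taken over the most recent Window panes, and while the window is still filling up only the filled panes count (hence 3, not 1.8, in the first diagram). This is an illustrative sketch of the idea, not the plugin's actual implementation:

```python
def window_average(panes, window=5):
    """Average over the last `window` panes; empty panes during warm-up are ignored."""
    recent = panes[-window:]
    return sum(recent) / len(recent)

print(window_average([1, 3, 5]))                 # 3.0 (warm-up: only 3 panes filled)
print(window_average([1, 3, 5, 7, 3, 4]))        # 4.4
print(window_average([1, 3, 5, 7, 3, 4, 7]))     # 5.2
print(window_average([1, 3, 5, 7, 3, 4, 6]))     # 5.0
```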
    Rate 60
    Window 5
    Interval 1m
    Rate 1
    Window 300
    Interval 1s
    XX        XX        XX
    XX        XX        XX
    XX        XX        XX
    XX        XX        XX
    XX        XX        XX
    XX        XX        XX
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      X    X     X    X    X    X
    XXXX XXXX  XXXX XXXX XXXX XXXX
    +-+-+-+-+-+--+-+-+-+-+-+-+-+-+-+
    $ bin/fluent-bit -i tail -p 'path=lines.txt' -F throttle -p 'rate=1' -m '*' -o stdout
    [INPUT]
        Name   tail
        Path   lines.txt
    
    [FILTER]
        Name     throttle
        Match    *
        Rate     1000
        Window   300
        Interval 1s
    
    [OUTPUT]
        Name   stdout
        Match  *
    $ fluent-bit -i serial -t data -p File=/dev/tnt0 -p BitRate=9600 -o stdout -m '*'
    $ echo 'this is some message' > /dev/tnt1
    $ fluent-bit -i serial -t data -p File=/dev/tnt0 -p BitRate=9600 -o stdout -m '*'
    Fluent Bit v1.x.x
    * Copyright (C) 2019-2020 The Fluent Bit Authors
    * Copyright (C) 2015-2018 Treasure Data
    * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
    * https://fluentbit.io
    
    [2016/05/20 15:44:39] [ info] starting engine
    [0] data: [1463780680, {"msg"=>"this is some message"}]
    $ echo 'aaXbbXccXddXee' > /dev/tnt1
    $ fluent-bit -i serial -t data -p File=/dev/tnt0 -p BitRate=9600 -p Separator=X -o stdout -m '*'
    Fluent-Bit v0.8.0
    Copyright (C) Treasure Data
    
    [2016/05/20 16:04:51] [ info] starting engine
    [0] data: [1463781902, {"msg"=>"aa"}]
    [1] data: [1463781902, {"msg"=>"bb"}]
    [2] data: [1463781902, {"msg"=>"cc"}]
    [3] data: [1463781902, {"msg"=>"dd"}]
    [INPUT]
        Name      serial
        Tag       data
        File      /dev/tnt0
        BitRate   9600
        Separator X
    
    [OUTPUT]
        Name   stdout
        Match  *
    $ git clone https://github.com/freemed/tty0tty
    $ cd tty0tty/module
    $ make
    $ sudo cp tty0tty.ko /lib/modules/$(uname -r)/kernel/drivers/misc/
    $ sudo depmod
    $ sudo modprobe tty0tty
    $ sudo chmod 666 /dev/tnt*
    /dev/tnt0 <=> /dev/tnt1
    /dev/tnt2 <=> /dev/tnt3
    /dev/tnt4 <=> /dev/tnt5
    /dev/tnt6 <=> /dev/tnt7
    [PARSER]
        Name         docker
        Format       json
        Time_Key     time
        Time_Format  %Y-%m-%dT%H:%M:%S.%L
        Time_Keep    On
        # Command       |  Decoder  | Field | Optional Action   |
        # ==============|===========|=======|===================|
        Decode_Field_As    escaped     log
    {"log":"\u0009Checking indexes...\n","stream":"stdout","time":"2018-02-19T23:25:29.1845444Z"}
    {"log":"\u0009\u0009Validated: _audit _internal _introspection _telemetry _thefishbucket history main snmp_data summary\n","stream":"stdout","time":"2018-02-19T23:25:29.1845536Z"}
    {"log":"\u0009Done\n","stream":"stdout","time":"2018-02-19T23:25:29.1845622Z"}
    [24] tail.0: [1519082729.184544400, {"log"=>"   Checking indexes...                                                   
    ", "stream"=>"stdout", "time"=>"2018-02-19T23:25:29.1845444Z"}]
    [25] tail.0: [1519082729.184553600, {"log"=>"           Validated: _audit _internal _introspection _telemetry _thefishbucket history main snmp_data summary
    ", "stream"=>"stdout", "time"=>"2018-02-19T23:25:29.1845536Z"}]
    [26] tail.0: [1519082729.184562200, {"log"=>"   Done                  
    ", "stream"=>"stdout", "time"=>"2018-02-19T23:25:29.1845622Z"}]
    [SERVICE]
        Parsers_File fluent-bit-parsers.conf
    
    [INPUT]
        Name        tail
        Parser      docker
        Path        /path/to/log.log
    
    [OUTPUT]
        Name   stdout
        Match  *
    [PARSER]
        Name        docker
        Format      json
        Time_Key    time
        Time_Format %Y-%m-%dT%H:%M:%S %z
        Decode_Field_as escaped_utf8 log
                                     Apache License
                               Version 2.0, January 2004
                            http://www.apache.org/licenses/
    
       TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
    
       1. Definitions.
    
          "License" shall mean the terms and conditions for use, reproduction,
          and distribution as defined by Sections 1 through 9 of this document.
    
          "Licensor" shall mean the copyright owner or entity authorized by
          the copyright owner that is granting the License.
    
          "Legal Entity" shall mean the union of the acting entity and all
          other entities that control, are controlled by, or are under common
          control with that entity. For the purposes of this definition,
          "control" means (i) the power, direct or indirect, to cause the
          direction or management of such entity, whether by contract or
          otherwise, or (ii) ownership of fifty percent (50%) or more of the
          outstanding shares, or (iii) beneficial ownership of such entity.
    
          "You" (or "Your") shall mean an individual or Legal Entity
          exercising permissions granted by this License.
    
          "Source" form shall mean the preferred form for making modifications,
          including but not limited to software source code, documentation
          source, and configuration files.
    
          "Object" form shall mean any form resulting from mechanical
          transformation or translation of a Source form, including but
          not limited to compiled object code, generated documentation,
          and conversions to other media types.
    
          "Work" shall mean the work of authorship, whether in Source or
          Object form, made available under the License, as indicated by a
          copyright notice that is included in or attached to the work
          (an example is provided in the Appendix below).
    
          "Derivative Works" shall mean any work, whether in Source or Object
          form, that is based on (or derived from) the Work and for which the
          editorial revisions, annotations, elaborations, or other modifications
          represent, as a whole, an original work of authorship. For the purposes
          of this License, Derivative Works shall not include works that remain
          separable from, or merely link (or bind by name) to the interfaces of,
          the Work and Derivative Works thereof.
    
          "Contribution" shall mean any work of authorship, including
          the original version of the Work and any modifications or additions
          to that Work or Derivative Works thereof, that is intentionally
          submitted to Licensor for inclusion in the Work by the copyright owner
          or by an individual or Legal Entity authorized to submit on behalf of
          the copyright owner. For the purposes of this definition, "submitted"
          means any form of electronic, verbal, or written communication sent
          to the Licensor or its representatives, including but not limited to
          communication on electronic mailing lists, source code control systems,
          and issue tracking systems that are managed by, or on behalf of, the
          Licensor for the purpose of discussing and improving the Work, but
          excluding communication that is conspicuously marked or otherwise
          designated in writing by the copyright owner as "Not a Contribution."
    
          "Contributor" shall mean Licensor and any individual or Legal Entity
          on behalf of whom a Contribution has been received by Licensor and
          subsequently incorporated within the Work.
    
       2. Grant of Copyright License. Subject to the terms and conditions of
          this License, each Contributor hereby grants to You a perpetual,
          worldwide, non-exclusive, no-charge, royalty-free, irrevocable
          copyright license to reproduce, prepare Derivative Works of,
          publicly display, publicly perform, sublicense, and distribute the
          Work and such Derivative Works in Source or Object form.
    
       3. Grant of Patent License. Subject to the terms and conditions of
          this License, each Contributor hereby grants to You a perpetual,
          worldwide, non-exclusive, no-charge, royalty-free, irrevocable
          (except as stated in this section) patent license to make, have made,
          use, offer to sell, sell, import, and otherwise transfer the Work,
          where such license applies only to those patent claims licensable
          by such Contributor that are necessarily infringed by their
          Contribution(s) alone or by combination of their Contribution(s)
          with the Work to which such Contribution(s) was submitted. If You
          institute patent litigation against any entity (including a
          cross-claim or counterclaim in a lawsuit) alleging that the Work
          or a Contribution incorporated within the Work constitutes direct
          or contributory patent infringement, then any patent licenses
          granted to You under this License for that Work shall terminate
          as of the date such litigation is filed.
    
       4. Redistribution. You may reproduce and distribute copies of the
          Work or Derivative Works thereof in any medium, with or without
          modifications, and in Source or Object form, provided that You
          meet the following conditions:
    
          (a) You must give any other recipients of the Work or
              Derivative Works a copy of this License; and
    
          (b) You must cause any modified files to carry prominent notices
              stating that You changed the files; and
    
          (c) You must retain, in the Source form of any Derivative Works
              that You distribute, all copyright, patent, trademark, and
              attribution notices from the Source form of the Work,
              excluding those notices that do not pertain to any part of
              the Derivative Works; and
    
          (d) If the Work includes a "NOTICE" text file as part of its
              distribution, then any Derivative Works that You distribute must
              include a readable copy of the attribution notices contained
              within such NOTICE file, excluding those notices that do not
              pertain to any part of the Derivative Works, in at least one
              of the following places: within a NOTICE text file distributed
              as part of the Derivative Works; within the Source form or
              documentation, if provided along with the Derivative Works; or,
              within a display generated by the Derivative Works, if and
              wherever such third-party notices normally appear. The contents
              of the NOTICE file are for informational purposes only and
              do not modify the License. You may add Your own attribution
              notices within Derivative Works that You distribute, alongside
              or as an addendum to the NOTICE text from the Work, provided
              that such additional attribution notices cannot be construed
              as modifying the License.
    
          You may add Your own copyright statement to Your modifications and
          may provide additional or different license terms and conditions
          for use, reproduction, or distribution of Your modifications, or
          for any such Derivative Works as a whole, provided Your use,
          reproduction, and distribution of the Work otherwise complies with
          the conditions stated in this License.
    
       5. Submission of Contributions. Unless You explicitly state otherwise,
          any Contribution intentionally submitted for inclusion in the Work
          by You to the Licensor shall be under the terms and conditions of
          this License, without any additional terms or conditions.
          Notwithstanding the above, nothing herein shall supersede or modify
          the terms of any separate license agreement you may have executed
          with Licensor regarding such Contributions.
    
       6. Trademarks. This License does not grant permission to use the trade
          names, trademarks, service marks, or product names of the Licensor,
          except as required for reasonable and customary use in describing the
          origin of the Work and reproducing the content of the NOTICE file.
    
       7. Disclaimer of Warranty. Unless required by applicable law or
          agreed to in writing, Licensor provides the Work (and each
          Contributor provides its Contributions) on an "AS IS" BASIS,
          WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
          implied, including, without limitation, any warranties or conditions
          of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
          PARTICULAR PURPOSE. You are solely responsible for determining the
          appropriateness of using or redistributing the Work and assume any
          risks associated with Your exercise of permissions under this License.
    
       8. Limitation of Liability. In no event and under no legal theory,
          whether in tort (including negligence), contract, or otherwise,
          unless required by applicable law (such as deliberate and grossly
          negligent acts) or agreed to in writing, shall any Contributor be
          liable to You for damages, including any direct, indirect, special,
          incidental, or consequential damages of any character arising as a
          result of this License or out of the use or inability to use the
          Work (including but not limited to damages for loss of goodwill,
          work stoppage, computer failure or malfunction, or any and all
          other commercial damages or losses), even if such Contributor
          has been advised of the possibility of such damages.
    
       9. Accepting Warranty or Additional Liability. While redistributing
          the Work or Derivative Works thereof, You may choose to offer,
          and charge a fee for, acceptance of support, warranty, indemnity,
          or other liability obligations and/or rights consistent with this
          License. However, in accepting such obligations, You may act only
          on Your own behalf and on Your sole responsibility, not on behalf
          of any other Contributor, and only if You agree to indemnify,
          defend, and hold each Contributor harmless for any liability
          incurred by, or claims asserted against, such Contributor by reason
          of your accepting any such warranty or additional liability.
    
       END OF TERMS AND CONDITIONS
    processor    : 0
    vendor_id    : GenuineIntel
    cpu family   : 6
    model        : 42
    model name   : Intel(R) Core(TM) i7-2640M CPU @ 2.80GHz
    stepping     : 7
    microcode    : 41
    cpu MHz      : 2791.009
    cache size   : 4096 KB
    physical id  : 0
    siblings     : 1
    [INPUT]
        Name           head
        Tag            head.cpu
        File           /proc/cpuinfo
        Lines          8
        Split_line     true
        # {"line0":"processor    : 0", "line1":"vendor_id    : GenuineIntel" ...}
    
    [FILTER]
        Name           record_modifier
        Match          *
        Whitelist_key  line7
    
    [OUTPUT]
        Name           stdout
        Match          *
    $ bin/fluent-bit -c head.conf 
    Fluent Bit v1.x.x
    * Copyright (C) 2019-2020 The Fluent Bit Authors
    * Copyright (C) 2015-2018 Treasure Data
    * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
    * https://fluentbit.io
    
    [2017/06/26 22:38:24] [ info] [engine] started
    [0] head.cpu: [1498484305.000279805, {"line7"=>"cpu MHz        : 2791.009"}]
    [1] head.cpu: [1498484306.011680137, {"line7"=>"cpu MHz        : 2791.009"}]
    [2] head.cpu: [1498484307.010042482, {"line7"=>"cpu MHz        : 2791.009"}]
    [3] head.cpu: [1498484308.008447978, {"line7"=>"cpu MHz        : 2791.009"}]
    $ fluent-bit -i head -t uptime -p File=/proc/uptime -o stdout -m '*'
    Fluent Bit v1.x.x
    * Copyright (C) 2019-2020 The Fluent Bit Authors
    * Copyright (C) 2015-2018 Treasure Data
    * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
    * https://fluentbit.io
    
    [2016/05/17 21:53:54] [ info] starting engine
    [0] uptime: [1463543634, {"head"=>"133517.70 194870.97"}]
    [1] uptime: [1463543635, {"head"=>"133518.70 194872.85"}]
    [2] uptime: [1463543636, {"head"=>"133519.70 194876.63"}]
    [3] uptime: [1463543637, {"head"=>"133520.70 194879.72"}]
    [INPUT]
        Name          head
        Tag           uptime
        File          /proc/uptime
        Buf_Size      256
        Interval_Sec  1
        Interval_NSec 0
    
    [OUTPUT]
        Name   stdout
        Match  *
    now with a simple
    curl
    command is enough to gather some information:

    Note that we are sending the curl command output to the jq program, which helps to make the JSON data easy to read from the terminal. Fluent Bit does not aim to do JSON pretty-printing.

    REST API Interface

    Fluent Bit aims to expose useful interfaces for monitoring. As of Fluent Bit v0.14, the following endpoints are available:

    URI

    Description

    Data Format

    /

    Fluent Bit build information

    JSON

    /api/v1/uptime

    Get uptime information in seconds and human readable format

    JSON

    /api/v1/metrics

    Internal metrics per loaded plugin

    JSON

    /api/v1/metrics/prometheus

    Uptime Example

    Query the service uptime with the following command:

    it should print output similar to this:

    Metrics Examples

    Query internal metrics in JSON format with the following command:

    it should print output similar to this:

    Metrics in Prometheus format

    Query internal metrics in Prometheus Text 0.0.4 format:

    this time the same metrics will be in Prometheus format instead of JSON:

    Configuring Aliases

    By default, plugins configured at runtime get an internal name in the format plugin_name.ID. For monitoring purposes this can be confusing if many plugins of the same type are configured. To distinguish them, each configured input or output section can be given an alias that will be used as the parent name for the metric.

    The following example sets an alias on the INPUT section, which uses the CPU input plugin:

    Now when querying the metrics we get the aliases in place instead of the plugin name:

    Dashboard and Alerts

    Fluent Bit's exposed Prometheus-style metrics can be leveraged to create dashboards and alerts.

    Grafana Dashboard

    The provided example dashboard is heavily inspired by Banzai Cloud's logging operator dashboard but with a few key differences such as the use of the instance label (see why here), stacked graphs and a focus on Fluent Bit metrics.

    dashboard

    Alerts

    Sample alerts are available here.

    Parser

    The Parser Filter plugin allows parsing fields in event records.

    Configuration Parameters

    The plugin supports the following configuration parameters:

    Key

    Description

    Default

    Getting Started

    Configuration File

    This is an example of parsing the record {"data":"100 0.5 true This is example"}.

    The plugin needs a parser file which defines how to parse the field.

    The path of the parser file should be specified in the [SERVICE] section of the main configuration file.

    The output is

    You can see that the record {"data":"100 0.5 true This is example"} is parsed.
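A configuration along these lines produces that result; the parser name dummy_test, the regex, and the match pattern are illustrative:

```
[PARSER]
    Name   dummy_test
    Format regex
    Regex  ^(?<INT>[^ ]+) (?<FLOAT>[^ ]+) (?<BOOL>[^ ]+) (?<STRING>.+)$
```

```
[SERVICE]
    Parsers_File /path/to/parsers.conf

[FILTER]
    Name     parser
    Match    dummy.*
    Key_Name data
    Parser   dummy_test

[OUTPUT]
    Name  stdout
    Match *
```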

    Preserve original fields

    By default, the parser plugin only keeps the parsed fields in its output.

    If you enable Reserve_Data, all other fields are preserved:

    This will produce the output:

    If you enable Reserve_Data and Preserve_Key, the original key field will be preserved as well:

    This will produce the output:

    Lua

    Lua Filter allows you to modify the incoming records using custom Scripts.

    Due to the necessity of having a flexible filtering mechanism, it is now possible to extend Fluent Bit's capabilities by writing simple filters using the Lua programming language. Using a Lua-based filter takes two steps:

    • Configure the Filter in the main configuration

    • Prepare a Lua script that will be used by the Filter

    Syslog

    The Syslog input plugin allows collecting Syslog messages through a Unix socket server (UDP or TCP) or over the network using TCP or UDP.

    Configuration Parameters

    The plugin supports the following configuration parameters:

    [SERVICE]
        HTTP_Server  On
        HTTP_Listen  0.0.0.0
        HTTP_PORT    2020
    
    [INPUT]
        Name cpu
    
    [OUTPUT]
        Name  stdout
        Match *
    $ bin/fluent-bit -c fluent-bit.conf
    Fluent Bit v1.4.0
    * Copyright (C) 2019-2020 The Fluent Bit Authors
    * Copyright (C) 2015-2018 Treasure Data
    * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
    * https://fluentbit.io
    
    [2020/03/10 19:08:24] [ info] [engine] started
    [2020/03/10 19:08:24] [ info] [http_server] listen iface=0.0.0.0 tcp_port=2020
    $ curl -s http://127.0.0.1:2020 | jq
    {
      "fluent-bit": {
        "version": "0.13.0",
        "edition": "Community",
        "flags": [
          "FLB_HAVE_TLS",
          "FLB_HAVE_METRICS",
          "FLB_HAVE_SQLDB",
          "FLB_HAVE_TRACE",
          "FLB_HAVE_HTTP_SERVER",
          "FLB_HAVE_FLUSH_LIBCO",
          "FLB_HAVE_SYSTEMD",
          "FLB_HAVE_VALGRIND",
          "FLB_HAVE_FORK",
          "FLB_HAVE_PROXY_GO",
          "FLB_HAVE_REGEX",
          "FLB_HAVE_C_TLS",
          "FLB_HAVE_SETJMP",
          "FLB_HAVE_ACCEPT4",
          "FLB_HAVE_INOTIFY"
        ]
      }
    }
    $ curl -s http://127.0.0.1:2020/api/v1/uptime | jq
    {
      "uptime_sec": 8950000,
      "uptime_hr": "Fluent Bit has been running:  103 days, 14 hours, 6 minutes and 40 seconds"
    }
    $ curl -s http://127.0.0.1:2020/api/v1/metrics | jq
    {
      "input": {
        "cpu.0": {
          "records": 8,
          "bytes": 2536
        }
      },
      "output": {
        "stdout.0": {
          "proc_records": 5,
          "proc_bytes": 1585,
          "errors": 0,
          "retries": 0,
          "retries_failed": 0
        }
      }
    }
    $ curl -s http://127.0.0.1:2020/api/v1/metrics/prometheus
    fluentbit_input_records_total{name="cpu.0"} 57 1509150350542
    fluentbit_input_bytes_total{name="cpu.0"} 18069 1509150350542
    fluentbit_output_proc_records_total{name="stdout.0"} 54 1509150350542
    fluentbit_output_proc_bytes_total{name="stdout.0"} 17118 1509150350542
    fluentbit_output_errors_total{name="stdout.0"} 0 1509150350542
    fluentbit_output_retries_total{name="stdout.0"} 0 1509150350542
    fluentbit_output_retries_failed_total{name="stdout.0"} 0 1509150350542
    [SERVICE]
        HTTP_Server  On
        HTTP_Listen  0.0.0.0
        HTTP_PORT    2020
    
    [INPUT]
        Name  cpu
        Alias server1_cpu
    
    [OUTPUT]
        Name  stdout
        Alias raw_output
        Match *
    {
      "input": {
        "server1_cpu": {
          "records": 8,
          "bytes": 2536
        }
      },
      "output": {
        "raw_output": {
          "proc_records": 5,
          "proc_bytes": 1585,
          "errors": 0,
          "retries": 0,
          "retries_failed": 0
        }
      }
    }

    Key_Name

    Specify the field name in the record to parse.

    Parser

    Specify the parser name to interpret the field. Multiple Parser entries are allowed (one per line).

    Preserve_Key

    Keep original Key_Name field in the parsed result. If false, the field will be removed.

    False

    Reserve_Data

    Keep all other original fields in the parsed result. If false, all other original fields will be removed.

    False

    Unescape_Key

    If the key is an escaped string (e.g. stringified JSON), unescape the string before applying the parser.

    False

    Configuration Parameters

    The plugin supports the following configuration parameters:

    Key

    Description

    Script

    Path to the Lua script that will be used.

    Call

    Lua function name that will be triggered to do filtering. It's assumed that the function is declared inside the Script defined above.

    Type_int_key

    If these keys are matched, the fields are converted to integer. If there is more than one key, delimit them by space.

    Protected_mode

    If enabled, the Lua script will be executed in protected mode. This prevents Fluent Bit from crashing when an invalid Lua script is executed. Default is true.

    Getting Started

    In order to test the filter, you can run the plugin from the command line or through the configuration file. The following examples use the dummy input plugin for data ingestion, invoke the Lua filter using the test.lua script, and call the cb_print() function, which only prints the same information to the standard output:

    Command Line

    From the command line you can use the following options:

    Configuration File

    In your main configuration file append the following Input, Filter & Output sections:

    Lua Script Filter API

    The life cycle of a filter has the following steps:

    • Upon Tag matching by filter_lua, it may process or bypass the record.

    • If filter_lua accepts the record, it will invoke the function defined in the call property which basically is the name of a function defined in the Lua script.

    • Invoke Lua function passing each record in JSON format.

    • Upon return, validate return value and take some action (described above)

    Callback Prototype

    The Lua script can have one or multiple callbacks that can be used by filter_lua; its prototype is as follows:

    Function Arguments

    name

    description

    tag

    Name of the tag associated with the incoming record.

    timestamp

    Unix timestamp with nanoseconds associated with the incoming record. The original format is a double (seconds.nanoseconds)

    record

    Lua table with the record content

    Return Values

    Each callback must return three values:

    name

    data type

    description

    code

    integer

    The code return value represents the result and the further action that may follow. If code equals -1, filter_lua must drop the record. If code equals 0, the record will not be modified. If code equals 1, the original timestamp and record have been modified, so they must be replaced by the returned values from timestamp (second return value) and record (third return value). If code equals 2, the original timestamp is not modified and the record has been modified, so it must be replaced by the returned value from record (third return value). Code 2 is supported from v1.4.3.

    timestamp

    double

    If code equals 1, the original record timestamp will be replaced with this new value.

    record

    table

    If code equals 1, the original record information will be replaced with this new value. Note that the format of this value must be a valid Lua table.
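Following the return-value contract above, a small callback that appends a field and keeps the original timestamp might look like this (illustrative sketch; the function and field names are hypothetical):

```lua
function append_tag(tag, timestamp, record)
    -- Add a new field containing the tag to the record
    record["source_tag"] = tag
    -- code 2: keep the original timestamp, replace the record
    return 2, timestamp, record
end
```

In the [FILTER] section, Call would be set to append_tag and Script to the file containing this function.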

    Code Examples

    For functional examples of this interface, please refer to the code samples provided in the source code of the project located here:

    https://github.com/fluent/fluent-bit/tree/master/scripts

    Number Type

    In Lua, Fluent Bit treats numbers as doubles. This means an integer field (e.g. IDs, log levels) will be converted to a double. To avoid this type conversion, the Type_int_key property is available.
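For example, to keep a hypothetical id field as an integer when it passes through the Lua filter (script and function names are illustrative):

```
[FILTER]
    Name         lua
    Match        *
    Script       test.lua
    Call         cb_print
    Type_int_key id
```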

    Protected Mode

    Fluent Bit supports protected mode to prevent crashes when executing an invalid Lua script. See also Error Handling in Application Code.

    Lua

    Mode

    Defines transport protocol mode: unix_udp (UDP over Unix socket), unix_tcp (TCP over Unix socket), tcp or udp

    unix_udp

    Listen

    If Mode is set to tcp or udp, specify the network interface to bind.

    0.0.0.0

    Port

    If Mode is set to tcp or udp, specify the TCP port to listen for incoming connections.

    5140

    Path

    If Mode is set to unix_tcp or unix_udp, set the absolute path to the Unix socket file.

    Unix_Perm

    If Mode is set to unix_tcp or unix_udp, set the permission of the Unix socket file.

    0644

    Parser

    Specify an alternative parser for the message. If Mode is set to tcp or udp, the default parser is syslog-rfc5424; otherwise syslog-rfc3164-local is used. If your syslog messages have fractional seconds, set this Parser value to syslog-rfc5424 instead.

    Buffer_Chunk_Size

    By default, the buffer that stores incoming Syslog messages does not allocate the maximum memory allowed up front; instead, it allocates memory as required. The rounds of allocation are controlled by Buffer_Chunk_Size. If not set, Buffer_Chunk_Size is equal to 32000 bytes (32KB). Read the considerations below when using udp or unix_udp mode.

    Buffer_Max_Size

    Specify the maximum buffer size to receive a Syslog message. If not set, the default size will be the value of Buffer_Chunk_Size.

    Considerations

    • When using the Syslog input plugin, Fluent Bit requires access to the parsers.conf file; the path to this file can be specified with the -R option or through the Parsers_File key in the [SERVICE] section (more details below).

    • When udp or unix_udp is used, the buffer size to receive messages is configurable only through the Buffer_Chunk_Size option, which defaults to 32kb.

    Getting Started

    In order to receive Syslog messages, you can run the plugin from the command line or through the configuration file:

    Command Line

    From the command line you can let Fluent Bit listen for Syslog messages with the following options:

    By default the service will create and listen for Syslog messages on the unix socket /tmp/in_syslog

    Configuration File

    In your main configuration file append the following Input & Output sections:

    Testing

    Once Fluent Bit is running, you can send some messages using the logger tool:

    In Fluent Bit we should see the following output:

    Recipes

    The following recipes provide configuration examples for different use cases, showing how to make Fluent Bit listen for Syslog messages from your systems.

    Rsyslog to Fluent Bit: Network mode over TCP

    Fluent Bit Configuration

    Put the following content in your fluent-bit.conf file:

    then start Fluent Bit.

    RSyslog Configuration

    Add a new file to your rsyslog config rules called 60-fluent-bit.conf inside the directory /etc/rsyslog.d/ and add the following content:

    then make sure to restart your rsyslog daemon:

    Rsyslog to Fluent Bit: Unix socket mode over UDP

    Fluent Bit Configuration

    Put the following content in your fluent-bit.conf file:

    then start Fluent Bit.

    RSyslog Configuration

    Add a new file to your rsyslog config rules called 60-fluent-bit.conf inside the directory /etc/rsyslog.d/ and place the following content:

    Make sure that the socket file is readable by rsyslog (tweak the Unix_Perm option shown above).

    Key

    Description

    Default

    [PARSER]
        Name dummy_test
        Format regex
        Regex ^(?<INT>[^ ]+) (?<FLOAT>[^ ]+) (?<BOOL>[^ ]+) (?<STRING>.+)$
    [SERVICE]
        Parsers_File /path/to/parsers.conf
    
    [INPUT]
        Name dummy
        Tag  dummy.data
        Dummy {"data":"100 0.5 true This is example"}
    
    [FILTER]
        Name parser
        Match dummy.*
        Key_Name data
        Parser dummy_test
    
    [OUTPUT]
        Name stdout
        Match *
    $ fluent-bit -c dummy.conf
    Fluent Bit v1.x.x
    * Copyright (C) 2019-2020 The Fluent Bit Authors
    * Copyright (C) 2015-2018 Treasure Data
    * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
    * https://fluentbit.io
    
    [2017/07/06 22:33:12] [ info] [engine] started
    [0] dummy.data: [1499347993.001371317, {"INT"=>"100", "FLOAT"=>"0.5", "BOOL"=>"true", "STRING"=>"This is example"}]
    [1] dummy.data: [1499347994.001303118, {"INT"=>"100", "FLOAT"=>"0.5", "BOOL"=>"true", "STRING"=>"This is example"}]
    [2] dummy.data: [1499347995.001296133, {"INT"=>"100", "FLOAT"=>"0.5", "BOOL"=>"true", "STRING"=>"This is example"}]
    [3] dummy.data: [1499347996.001320284, {"INT"=>"100", "FLOAT"=>"0.5", "BOOL"=>"true", "STRING"=>"This is example"}]
    [PARSER]
        Name dummy_test
        Format regex
        Regex ^(?<INT>[^ ]+) (?<FLOAT>[^ ]+) (?<BOOL>[^ ]+) (?<STRING>.+)$
    [SERVICE]
        Parsers_File /path/to/parsers.conf
    
    [INPUT]
        Name dummy
        Tag  dummy.data
        Dummy {"data":"100 0.5 true This is example", "key1":"value1", "key2":"value2"}
    
    [FILTER]
        Name parser
        Match dummy.*
        Key_Name data
        Parser dummy_test
        Reserve_Data On
    $ fluent-bit -c dummy.conf
    Fluent-Bit v0.12.0
    Copyright (C) Treasure Data
    
    [2017/07/06 22:33:12] [ info] [engine] started
    [0] dummy.data: [1499347993.001371317, {"INT"=>"100", "FLOAT"=>"0.5", "BOOL"=>"true", "STRING"=>"This is example"}, "key1":"value1", "key2":"value2"]
    [1] dummy.data: [1499347994.001303118, {"INT"=>"100", "FLOAT"=>"0.5", "BOOL"=>"true", "STRING"=>"This is example"}, "key1":"value1", "key2":"value2"]
    [2] dummy.data: [1499347995.001296133, {"INT"=>"100", "FLOAT"=>"0.5", "BOOL"=>"true", "STRING"=>"This is example"}, "key1":"value1", "key2":"value2"]
    [3] dummy.data: [1499347996.001320284, {"INT"=>"100", "FLOAT"=>"0.5", "BOOL"=>"true", "STRING"=>"This is example"}, "key1":"value1", "key2":"value2"]
    [PARSER]
        Name dummy_test
        Format regex
        Regex ^(?<INT>[^ ]+) (?<FLOAT>[^ ]+) (?<BOOL>[^ ]+) (?<STRING>.+)$
    [SERVICE]
        Parsers_File /path/to/parsers.conf
    
    [INPUT]
        Name dummy
        Tag  dummy.data
        Dummy {"data":"100 0.5 true This is example", "key1":"value1", "key2":"value2"}
    
    [FILTER]
        Name parser
        Match dummy.*
        Key_Name data
        Parser dummy_test
        Reserve_Data On
        Preserve_Key On
    $ fluent-bit -c dummy.conf
    Fluent-Bit v0.12.0
    Copyright (C) Treasure Data
    
    [2017/07/06 22:33:12] [ info] [engine] started
    [0] dummy.data: [1499347993.001371317, {"data":"100 0.5 true This is example", "INT"=>"100", "FLOAT"=>"0.5", "BOOL"=>"true", "STRING"=>"This is example"}]
    [1] dummy.data: [1499347994.001303118, {"data":"100 0.5 true This is example", "INT"=>"100", "FLOAT"=>"0.5", "BOOL"=>"true", "STRING"=>"This is example"}]
    [2] dummy.data: [1499347995.001296133, {"data":"100 0.5 true This is example", "INT"=>"100", "FLOAT"=>"0.5", "BOOL"=>"true", "STRING"=>"This is example"}]
    [3] dummy.data: [1499347996.001320284, {"data":"100 0.5 true This is example", "INT"=>"100", "FLOAT"=>"0.5", "BOOL"=>"true", "STRING"=>"This is example"}]
    $ fluent-bit -i dummy -F lua -p script=test.lua -p call=cb_print -m '*' -o null
    [INPUT]
        Name   dummy
    
    [FILTER]
        Name    lua
        Match   *
        script  test.lua
        call    cb_print
    
    [OUTPUT]
        Name   null
        Match  *
    function cb_print(tag, timestamp, record)
       -- code: -1 drops the record, 0 keeps it unchanged,
       -- 1 replaces the timestamp and record with the returned values
       return code, timestamp, record
    end
    $ fluent-bit -R /path/to/parsers.conf -i syslog -p path=/tmp/in_syslog -o stdout
    [SERVICE]
        Flush               1
        Log_Level           info
        Parsers_File        parsers.conf
    
    [INPUT]
        Name                syslog
        Path                /tmp/in_syslog
        Buffer_Chunk_Size   32000
        Buffer_Max_Size     64000
    
    [OUTPUT]
        Name   stdout
        Match  *
    $ logger -u /tmp/in_syslog my_ident my_message
    $ bin/fluent-bit -R ../conf/parsers.conf -i syslog -p path=/tmp/in_syslog -o stdout
    Fluent Bit v1.x.x
    * Copyright (C) 2019-2020 The Fluent Bit Authors
    * Copyright (C) 2015-2018 Treasure Data
    * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
    * https://fluentbit.io
    
    [2017/03/09 02:23:27] [ info] [engine] started
    [0] syslog.0: [1489047822, {"pri"=>"13", "host"=>"edsiper:", "ident"=>"my_ident", "pid"=>"", "message"=>"my_message"}]
    [SERVICE]
        Flush        1
        Parsers_File parsers.conf
    
    [INPUT]
        Name     syslog
        Parser   syslog-rfc3164
        Listen   0.0.0.0
        Port     5140
        Mode     tcp
    
    [OUTPUT]
        Name     stdout
        Match    *
    action(type="omfwd" Target="127.0.0.1" Port="5140" Protocol="tcp")
    $ sudo service rsyslog restart
    [SERVICE]
        Flush        1
        Parsers_File parsers.conf
    
    [INPUT]
        Name      syslog
        Parser    syslog-rfc3164
        Path      /tmp/fluent-bit.sock
        Mode      unix_udp
        Unix_Perm 0644
    
    [OUTPUT]
        Name      stdout
        Match     *
    $ModLoad omuxsock
    $OMUxSockSocket /tmp/fluent-bit.sock
    *.* :omuxsock:

    Internal metrics per loaded plugin ready to be consumed by a Prometheus Server

    Prometheus Text 0.0.4

    /api/v1/storage

    Get internal metrics of the storage layer / buffered data. This endpoint is available only if the storage.metrics property has been enabled in the SERVICE section.

    JSON

    Configuration File

    This page describes the main configuration file used by Fluent Bit

    One of the ways to configure Fluent Bit is using a main configuration file. Fluent Bit allows one configuration file that works at a global scope and uses the Format and Schema defined previously.

    The main configuration file supports four types of sections:

    • Service

    • Input

    • Filter

    • Output

    In addition, it's also possible to split the main configuration file in multiple files using the feature to include external files:

    • Include File

    Service

    The Service section defines global properties of the service, the keys available as of this version are described in the following table:

    The following is an example of a SERVICE section:

    Input

    An INPUT section defines a source (related to an input plugin). Here we describe the base configuration for each INPUT section. Note that each input plugin may add its own configuration keys:

    The Name is mandatory and it lets Fluent Bit know which input plugin should be loaded. The Tag is mandatory for all plugins except for the forward input plugin (as it provides dynamic tags).

    Example

    The following is an example of an INPUT section:

    Filter

    A FILTER section defines a filter (related to a filter plugin). Here we describe the base configuration for each FILTER section. Note that each filter plugin may add its own configuration keys:

    The Name is mandatory and it lets Fluent Bit know which filter plugin should be loaded. Either Match or Match_Regex is mandatory for all plugins. If both are specified, Match_Regex takes precedence.

    Example

    The following is an example of a FILTER section:

    Output

    The OUTPUT section specifies a destination that certain records should follow after a Tag match. The configuration supports the following keys:

    Example

    The following is an example of an OUTPUT section:

    Example: collecting CPU metrics

    The following configuration file example demonstrates how to collect CPU metrics and flush the results every five seconds to the standard output:

    Include File

    To avoid long, complicated configuration files, it is better to split specific parts into different files and include them from one main file.

    Starting from Fluent Bit 0.12 the new configuration command @INCLUDE has been added and can be used in the following way:

    The configuration reader will try to open the path somefile.conf; if not found, it will assume it's a relative path based on the path of the base configuration file, e.g:

    • Main configuration file path: /tmp/main.conf

    • Included file: somefile.conf

    • Fluent Bit will try to open somefile.conf, if it fails it will try /tmp/somefile.conf.

    The @INCLUDE command only works at the top level of the configuration; it cannot be used inside sections.

    The wildcard character (*) is supported to include multiple files, e.g:

    Rewrite Tag

    Powerful and flexible routing

    Tags are what make routing possible. Tags are set in the configuration of the Input definitions where the records are generated, but there are certain scenarios where it might be useful to modify the Tag in the pipeline so we can perform more advanced and flexible routing.

    The rewrite_tag filter allows you to re-emit a record under a new Tag. Once a record has been re-emitted, the original record can be preserved or discarded.

    How it Works

    The filter works by defining rules that match specific record key content against a regular expression; if a match exists, a new record with the defined Tag is emitted. Multiple rules can be specified and they are processed in order until one of them matches.

    Key

    Description

    Default Value

    Flush

    Set the flush time in seconds.nanoseconds. The engine loop uses the Flush timeout to define when it is required to flush the records ingested by input plugins through the defined output plugins.

    5

    Daemon

    Boolean value to set whether Fluent Bit should run as a Daemon (background) or not. Allowed values are: yes, no, on and off. Note: if you are running Fluent Bit under a Systemd unit such as the one we provide in our packages, do not turn on this option.

    Off

    Log_File

    Absolute path for an optional log file. By default all logs are redirected to the standard output interface (stdout).

    Log_Level

    Set the logging verbosity level. Allowed values are: error, warning, info, debug and trace. Values are cumulative, e.g: if debug is set, it will include error, warning, info and debug. Note that trace mode is only available if Fluent Bit was built with the WITH_TRACE option enabled.

    info

    Parsers_File

    Path for a parsers configuration file. Multiple Parsers_File entries can be defined within the section.

    Plugins_File

    Path for a plugins configuration file. A plugins configuration file allows you to define paths for external plugins; for an example see here.

    Streams_File

    Path for the Stream Processor configuration file. To learn more about Stream Processing configuration go here.

    HTTP_Server

    Enable built-in HTTP Server

    Off

    HTTP_Listen

    Set listening interface for HTTP Server when it's enabled

    0.0.0.0

    HTTP_Port

    Set TCP Port for the HTTP Server

    2020

    Coro_Stack_Size

    Set the coroutine stack size in bytes. The value must be greater than the page size of the running system. Do not set a value that is too small (say 4096), or coroutine threads can overrun the stack buffer.

    Do not change the default value of this parameter unless you know what you are doing.

    24576

    Key

    Description

    Name

    Name of the input plugin.

    Tag

    Tag name associated to all records coming from this plugin.

    Key

    Description

    Name

    Name of the filter plugin.

    Match

    A pattern to match against the tags of incoming records. It's case sensitive and supports the star (*) character as a wildcard.

    Match_Regex

    A regular expression to match against the tags of incoming records. Use this option if you want to use the full regex syntax.

    Key

    Description

    Name

    Name of the output plugin.

    Match

    A pattern to match against the tags of incoming records. It's case sensitive and supports the star (*) character as a wildcard.

    Match_Regex

    A regular expression to match against the tags of incoming records. Use this option if you want to use the full regex syntax.

    The new Tag to define can be composed by:

    • Alphabet characters & Numbers

    • Original Tag string or part of it

    • Regular Expressions groups capture

    • Any key or sub-key of the processed record

    • Environment variables

    Configuration Parameters

    The rewrite_tag filter supports the following configuration parameters:

    Key

    Description

    Rule

    Defines the matching criteria and the format of the Tag for the matching record. The Rule format has four components: KEY REGEX NEW_TAG KEEP. For more specific details of the Rule format and its composition, read the next section.

    Emitter_Name

    When the filter emits a record under the new Tag, an internal emitter plugin takes care of the job. Since this emitter exposes metrics like any other component of the pipeline, you can use this property to configure an optional name for it.

    Emitter_Storage.type

    Define a buffering mechanism for the new records created. Note these records are part of the emitter plugin. This option supports the values memory (default) or filesystem. If the destination for the new records might face backpressure due to latency or a slow network, we strongly recommend enabling the filesystem mode.

    Emitter_Mem_Buf_Limit

    Set a limit on the amount of memory the tag-rewrite emitter can consume if the outputs provide backpressure. The default limit is 10M. Once the buffer exceeds this value, the pipeline pauses and remains paused until the output drains the buffer below the limit.
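    As a sketch, the emitter properties above can be combined like this (the match pattern, rule and emitter name are assumptions for illustration):

```
[FILTER]
    Name                   rewrite_tag
    Match                  app.*
    Rule                   $level ^(error)$ alerts.$TAG true
    Emitter_Name           alert_emitter
    Emitter_Storage.type   filesystem
    Emitter_Mem_Buf_Limit  10M
```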

    Rules

    A rule aims to define matching criteria and specify how to create a new Tag for a record. You can define one or multiple rules in the same configuration section. The rules have the following format:

    Key

    The key represents the name of the record key that holds the value we want to match against our regular expression. A key name is specified prefixed with a $. Consider the following structured record (formatted for readability):

    If we want to match against the value of the key name, we must use $name. The key selector is flexible enough to match nested levels of sub-maps in the structure. If we want to check the value of the nested key s2, we can specify $ss['s1']['s2']; in short:

    • $name = "abc-123"

    • $ss['s1']['s2'] = "flb"

    Note that a key must point to a value that contains a string; it's not valid for numbers, booleans, maps or arrays.

    Regex

    Using a simple regular expression we can specify a matching pattern against the value of the key specified above; we can also take advantage of group capturing to create custom placeholder values.

    If we want to match any record whose $name contains a value in the string-number format shown above, we might use:

    Note that in our example we are using parentheses; this means that we are specifying capture groups. If the pattern matches the value, placeholders will be created that can be consumed by the NEW_TAG section.

    If $name equals abc-123, then the following placeholders will be created:

    • $0 = "abc-123"

    • $1 = "abc"

    • $2 = "123"

    If the regular expression does not match an incoming record, the rule will be skipped and the next rule (if any) will be processed.

    New Tag

    If a regular expression has matched the value of the defined key in the rule, we are ready to compose a new Tag for that specific record. The tag is a concatenated string that can contain any of the following characters: a-z, A-Z, 0-9, and the symbols . , -.

    A Tag can take any string value from the matching record, the original tag itself, an environment variable or a general placeholder.

    Consider the following incoming data on the rule:

    • Tag = aa.bb.cc

    • Record = {"name": "abc-123", "ss": {"s1": {"s2": "flb"}}}

    • Environment variable $HOSTNAME = fluent

    With such information we could create a very custom Tag for our record like the following:

    the expected Tag to be generated will be:

    We make use of placeholders, record content and environment variables.

    Keep

    If a rule matches the criteria, the filter will emit a copy of the record with the newly defined Tag. The property keep takes a boolean value to define whether the original record with the old Tag must be preserved and continue in the pipeline, or be discarded.

    You can use true or false to decide the expected behavior. There is no default value and this is a mandatory field in the rule.

    Configuration Example

    The following configuration example will emit a dummy (hand-crafted) record, the filter will rewrite the tag, discard the old record and print the new record to the standard output interface:

    The original tag test_tag will be rewritten as from.test_tag.new.fluent.bit.out:

    Monitoring

    As described in the Monitoring section, every component of the Fluent Bit pipeline exposes metrics. The basic metrics exposed by this filter are drop_records and add_records; they summarize the total number of records dropped from the incoming data chunk and the number of new records added.

    Since rewrite_tag emits new records that go through the beginning of the pipeline, it exposes an additional metric called emit_records that summarizes the total number of emitted records.

    Understanding the Metrics

    Using the configuration provided above, if we query the metrics exposed in the HTTP interface we will see the following:

    Command:

    Metrics output:

    The dummy input generated two records, the filter dropped two from the chunks and emitted two new ones under a different Tag.

    The records generated are handled by the internal Emitter, so the new records are summarized in the Emitter metrics; take a look at the entry called emitter_for_rewrite_tag.0.

    What is the Emitter ?

    The Emitter is an internal Fluent Bit plugin that allows other components of the pipeline to emit custom records. In this case rewrite_tag creates an Emitter instance to use exclusively for emitting records; that way we can have granular control of who is emitting what.

    The Emitter name in the metrics can be changed by setting the Emitter_Name configuration property described above.

    [SERVICE]
        Flush           5
        Daemon          off
        Log_Level       debug
    [INPUT]
        Name cpu
        Tag  my_cpu
    [FILTER]
        Name  stdout
        Match *
    [OUTPUT]
        Name  stdout
        Match my*cpu
    [SERVICE]
        Flush     5
        Daemon    off
        Log_Level debug
    
    [INPUT]
        Name  cpu
        Tag   my_cpu
    
    [OUTPUT]
        Name  stdout
        Match my*cpu
    @INCLUDE somefile.conf
    @INCLUDE input_*.conf
    $KEY  REGEX  NEW_TAG  KEEP
    {
      "name": "abc-123",
      "ss": {
        "s1": {
          "s2": "flb"
        }
      }
    }
    ^([a-z]+)-([0-9]+)$
    newtag.$TAG.$TAG[1].$1.$ss['s1']['s2'].out.${HOSTNAME}
    newtag.aa.bb.cc.bb.abc.flb.out.fluent
    [SERVICE]
        Flush     1
        Log_Level info
    
    [INPUT]
        NAME   dummy
        Dummy  {"tool": "fluent", "sub": {"s1": {"s2": "bit"}}}
        Tag    test_tag
    
    [FILTER]
        Name          rewrite_tag
        Match         test_tag
        Rule          $tool ^(fluent)$  from.$TAG.new.$tool.$sub['s1']['s2'].out false
        Emitter_Name  re_emitted
    
    [OUTPUT]
        Name   stdout
        Match  from.*
    $ bin/fluent-bit -c example.conf
    Fluent Bit v1.x.x
    * Copyright (C) 2019-2020 The Fluent Bit Authors
    * Copyright (C) 2015-2018 Treasure Data
    * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
    * https://fluentbit.io
    
    ...
    [0] from.test_tag.new.fluent.bit.out: [1580436933.000050569, {"tool"=>"fluent", "sub"=>{"s1"=>{"s2"=>"bit"}}}]
    $ curl  http://127.0.0.1:2020/api/v1/metrics/ | jq
    {
      "input": {
        "dummy.0": {
          "records": 2,
          "bytes": 80
        },
        "emitter_for_rewrite_tag.0": {
          "records": 1,
          "bytes": 40
        }
      },
      "filter": {
        "rewrite_tag.0": {
          "drop_records": 2,
          "add_records": 0,
          "emit_records": 2
        }
      },
      "output": {
        "stdout.0": {
          "proc_records": 1,
          "proc_bytes": 40,
          "errors": 0,
          "retries": 0,
          "retries_failed": 0
        }
      }
    }

    Tail

    The tail input plugin allows you to monitor one or several text files. Its behavior is similar to the tail -f shell command.

    The plugin reads every matched file in the Path pattern and for every new line found (separated by a \n), it generates a new record. Optionally, a database file can be used so the plugin can keep a history of tracked files and a state of offsets; this is very useful to resume state if the service is restarted.

    Configuration Parameters

    The plugin supports the following configuration parameters:

    Note that if the database parameter db is not specified, by default the plugin will start reading each target file from the beginning.

    Multiline Configuration Parameters

    Additionally, the following options exist to configure the handling of multi-line files:

    Docker Mode Configuration Parameters

    Docker mode exists to recombine JSON log lines split by the Docker daemon due to its line length limit. To use this feature, configure the tail plugin with the corresponding parser and then enable Docker mode:
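    A minimal sketch of such a configuration (the path and the parser name docker are assumptions based on a typical Docker setup):

```
[INPUT]
    Name               tail
    Path               /var/lib/docker/containers/*/*.log
    Parser             docker
    Docker_Mode        On
    Docker_Mode_Flush  4
```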

    Getting Started

    In order to tail text or log files, you can run the plugin from the command line or through the configuration file:

    Command Line

    From the command line you can let Fluent Bit parse text files with the following options:

    Configuration File

    In your main configuration file append the following Input & Output sections:

    Tailing files keeping state

    The tail input plugin has a feature to save the state of the tracked files; it is strongly suggested you enable it. For this purpose the db property is available, e.g:

    When running, the database file /path/to/logs.db will be created. This database is backed by SQLite3, so if you are interested in exploring its content, you can open it with the SQLite client tool, e.g:

    Make sure to explore the database when Fluent Bit is not busy writing to it; otherwise you will see some Error: database is locked messages.

    Formatting SQLite

    By default the SQLite client tool does not format the columns in a human-readable way, so to explore the in_tail_files table you can create a config file in ~/.sqliterc with the following content:

    Files Rotation

    File rotation is properly handled, including the logrotate copytruncate mode.

    Key

    Description

    Default

    Buffer_Chunk_Size

    Set the initial buffer size to read file data. This value is also used to increase the buffer size. The value must conform to the Unit Size specification.

    32k

    Buffer_Max_Size

    Set the limit of the buffer size per monitored file. When a buffer needs to be increased (e.g: very long lines), this value restricts how much the memory buffer can grow. If reading a file exceeds this limit, the file is removed from the monitored file list. The value must conform to the Unit Size specification.

    Buffer_Chunk_Size

    Path

    Pattern specifying one or more log files through the use of common wildcards.

    Path_Key

    If enabled, it appends the name of the monitored file as part of the record. The value assigned becomes the key in the map.

    Exclude_Path

    Set one or multiple shell patterns separated by commas to exclude files matching a certain criteria, e.g: Exclude_Path *.gz,*.zip

    Refresh_Interval

    The interval of refreshing the list of watched files in seconds.

    60

    Rotate_Wait

    Specify the extra time in seconds to monitor a file once it is rotated, in case some pending data needs to be flushed.

    5

    Ignore_Older

    Ignores records which are older than this time in seconds. Supports m, h, d (minutes, hours, days) syntax. The default behavior is to read all records from the specified files. Only available when a Parser is specified and it can parse the time of a record.

    Skip_Long_Lines

    When a monitored file reaches its buffer capacity due to a very long line (Buffer_Max_Size), the default behavior is to stop monitoring that file. Skip_Long_Lines alters that behavior and instructs Fluent Bit to skip long lines and continue processing other lines that fit into the buffer size.

    Off

    DB

    Specify the database file to keep track of monitored files and offsets.

    DB.Sync

    Set a default synchronization (I/O) method. Values: Extra, Full, Normal, Off. This flag affects how the internal SQLite engine synchronizes to disk; for more details about each option please refer to this section.

    Full

    Mem_Buf_Limit

    Set a limit on the memory the Tail plugin can use when appending data to the Engine. If the limit is reached, the plugin is paused; when the data is flushed it resumes.

    exit_on_eof

    Exit Fluent Bit when reaching EOF on the monitored files.

    false

    Parser

    Specify the name of a parser to interpret the entry as a structured message.

    Key

    When a message is unstructured (no parser applied), it's appended as a string under the key name log. This option allows you to define an alternative name for that key.

    log

    Tag

    Set a tag (with regex-extract fields) that will be placed on lines read. E.g. kube.<namespace_name>.<pod_name>.<container_name>. Note that "tag expansion" is supported: if the tag includes an asterisk (*), that asterisk will be replaced with the absolute path of the monitored file (also see Workflow of Tail + Kubernetes Filter).

    Tag_Regex

    Set a regex to extract fields from the file. E.g. (?<pod_name>[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*)_(?<namespace_name>[^_]+)_(?<container_name>.+)-

    Key

    Description

    Default

    Multiline

    If enabled, the plugin will try to discover multiline messages and use the proper parsers to compose the outgoing messages. Note that when this option is enabled the Parser option is not used.

    Off

    Multiline_Flush

    Wait period time in seconds to process queued multiline messages

    4

    Parser_Firstline

    Name of the parser that matches the beginning of a multiline message. Note that the regular expression defined in the parser must include a group name (named capture).

    Parser_N

    Optional extra parser to interpret and structure multiline entries. This option can be used to define multiple parsers, e.g: Parser_1 ab1, Parser_2 ab2, Parser_N abN.

    Key

    Description

    Default

    Docker_Mode

    If enabled, the plugin will recombine split Docker log lines before passing them to any parser as configured above. This mode cannot be used at the same time as Multiline.

    Off

    Docker_Mode_Flush

    Wait period time in seconds to flush queued unfinished split lines.

    4

    Docker_Mode_Parser

    Specify an optional parser for the first line of the Docker multiline mode. The parser name specified must be registered in the parsers.conf file.

    $ fluent-bit -i tail -p path=/var/log/syslog -o stdout
    [INPUT]
        Name        tail
        Path        /var/log/syslog
    
    [OUTPUT]
        Name   stdout
        Match  *
    $ fluent-bit -i tail -p path=/var/log/syslog -p db=/path/to/logs.db -o stdout
    $ sqlite3 tail.db
    -- Loading resources from /home/edsiper/.sqliterc
    
    SQLite version 3.14.1 2016-08-11 18:53:32
    Enter ".help" for usage hints.
    sqlite> SELECT * FROM in_tail_files;
    id     name                              offset        inode         created
    -----  --------------------------------  ------------  ------------  ----------
    1      /var/log/syslog                   73453145      23462108      1480371857
    sqlite>
    .headers on
    .mode column
    .width 5 32 12 12 10
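    As an additional sketch, the multiline parameters described above can be combined like this (the parser names multiline_head and multiline_cont are assumptions and must be defined in your parsers.conf):

```
[INPUT]
    Name              tail
    Path              /var/log/app.log
    Multiline         On
    Multiline_Flush   4
    Parser_Firstline  multiline_head
    Parser_1          multiline_cont
```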

    Nest

    The Nest Filter plugin allows you to operate on or with nested data. Its modes of operation are:

    • nest - Take a set of records and place them in a map

    • lift - Take a map by key and lift its records up

    Example usage (nest)

    As an example using JSON notation, to nest keys matching the Wildcard value Key* under a new key NestKey, the transformation becomes:

    Example (input)
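    A sketch of a possible input record (key and value names are assumptions chosen to match the Wildcard value Key*):

```json
{
  "Key1": "value1",
  "Key2": "value2",
  "OtherKey": "value3"
}
```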

    Example (output)
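    With the same assumed record, the keys matching Key* end up nested under NestKey:

```json
{
  "OtherKey": "value3",
  "NestKey": {
    "Key1": "value1",
    "Key2": "value2"
  }
}
```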

    Example usage (lift)

    As an example using JSON notation, to lift keys nested under the Nested_under value NestKey*, the transformation becomes:

    Example (input)
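    A sketch of a possible input record (key and value names are assumptions; NestKey matches the Nested_under value NestKey*):

```json
{
  "OtherKey": "value3",
  "NestKey": {
    "Key1": "value1",
    "Key2": "value2"
  }
}
```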

    Example (output)
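    With the same assumed record, the keys under NestKey are lifted to the top level:

```json
{
  "OtherKey": "value3",
  "Key1": "value1",
  "Key2": "value2"
}
```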

    Configuration Parameters

    The plugin supports the following configuration parameters:

    Each parameter is listed as Key (Value Format; operation): description.

    • Operation (ENUM [nest or lift]): select the operation, nest or lift.

    • Wildcard (FIELD WILDCARD; operation: nest): nest records whose field matches the wildcard.

    Getting Started

    In order to start filtering records, you can run the filter from the command line or through the configuration file. The following invokes the Memory Usage Input Plugin, which outputs the following (example),

    Example #1 - nest

    Command Line

    Note: Using the command line mode requires quotes to parse the wildcard properly. The use of a configuration file is recommended.

    The following command will load the mem plugin. Then the nest filter will match the wildcard rule to the keys and nest the keys matching Mem.* under the new key NEST.

    Configuration File

    Result

    The output of both the command line and configuration invocations should be identical and result in the following output.

    Example #1 - nest and lift undo

    This example nests all Mem.* and Swap.* items under the Stats key and then reverses these actions with a lift operation. The output appears unchanged.

    Configuration File

    Result

    Example #2 - nest 3 levels deep

    This example takes the keys starting with Mem.* and nests them under LAYER1, which itself is then nested under LAYER2, which is nested under LAYER3.

    Configuration File

    Result

    Example #3 - multiple nest and lift filters with prefix

    This example starts with the 3-level deep nesting of Example 2 and applies the lift filter three times to reverse the operations. The end result is that all records are at the top level, without nesting, again. One prefix is added for each level that is lifted.

    Configuration file

    Result

    Forward

    Forward is the protocol used by Fluentd to route messages between peers. The forward output plugin provides interoperability between Fluent Bit and Fluentd. There are no configuration steps required besides specifying where Fluentd is located, which can be on the local host or on a remote machine.

    This plugin offers two different transports and modes:

    • Forward (TCP): it uses a plain TCP connection.

    • Secure Forward (TLS): when TLS is enabled, the plugin switches to Secure Forward mode.

    Configuration Parameters

    The following parameters are mandatory for either Forward or Secure Forward mode:

    Secure Forward Mode Configuration Parameters

    Secure Forward mode requires TLS to be enabled. The following additional configuration parameters are available:

    Forward Setup

    Before proceeding, make sure that Fluentd is installed in your system. If it's not, please refer to the Fluentd Installation document before going ahead.

    Once Fluentd is installed, create the following configuration file example that will allow us to stream data into it:

    That configuration file specifies that Fluentd will listen for TCP connections on port 24224 through the forward input type. Then, for every message with a fluent_bit TAG, it will print the message to the standard output.

    In one terminal, launch Fluentd specifying the new configuration file created (in_fluent-bit.conf):

    Fluent Bit + Forward Setup

    Now that Fluentd is ready to receive messages, we need to specify where the forward output plugin will flush the information, using the following format:

    If the TAG parameter is not set, the plugin will set the tag to fluent_bit. Keep in mind that TAG is important for routing rules inside Fluentd.

    Using the CPU input plugin as an example, we will flush CPU metrics to Fluentd:

    Now on the Fluentd side, you will see the CPU metrics gathered in the last seconds:

    So we gathered CPU metrics and flushed them out to Fluentd properly.

    Fluent Bit + Secure Forward Setup

    DISCLAIMER: the following example does not cover the generation of certificates for proper use in production environments.

    Secure Forward aims to provide a secure channel of communication with the remote Fluentd service using TLS. Below is a minimalist configuration for testing purposes.

    Fluent Bit

    Paste this content in a file called flb.conf:

    Fluentd

    Paste this content in a file called fld.conf:

    If you're using Fluentd v1, set it up as below:

    Test Communication

    Start Fluentd:

    Start Fluent Bit:

    After five seconds, Fluent Bit will write the records to Fluentd. In Fluentd output you will see a message like this:

    {
      "Key1"     : "Value1",
      "Key2"     : "Value2",
      "OtherKey" : "Value3"
    }
    {
      "OtherKey" : "Value3",
      "NestKey"  : {
        "Key1"     : "Value1",
        "Key2"     : "Value2"
      }
    }
    {
      "OtherKey" : "Value3",
      "NestKey"  : {
        "Key1"     : "Value1",
        "Key2"     : "Value2"
      }
    }
    {
      "Key1"     : "Value1",
      "Key2"     : "Value2",
      "OtherKey" : "Value3"
    }
    [0] memory: [1488543156, {"Mem.total"=>1016044, "Mem.used"=>841388, "Mem.free"=>174656, "Swap.total"=>2064380, "Swap.used"=>139888, "Swap.free"=>1924492}]
    $ bin/fluent-bit -i mem -p 'tag=mem.local' -F nest -p 'Operation=nest' -p 'Wildcard=Mem.*' -p 'Nest_under=Memstats' -p 'Remove_prefix=Mem.' -m '*' -o stdout
    [INPUT]
        Name mem
        Tag  mem.local
    
    [OUTPUT]
        Name  stdout
        Match *
    
    [FILTER]
        Name nest
        Match *
        Operation nest
        Wildcard Mem.*
        Nest_under Memstats
        Remove_prefix Mem.
    [2018/04/06 01:35:13] [ info] [engine] started
    [0] mem.local: [1522978514.007359767, {"Swap.total"=>1046524, "Swap.used"=>0, "Swap.free"=>1046524, "Memstats"=>{"total"=>4050908, "used"=>714984, "free"=>3335924}}]
    [INPUT]
        Name mem
        Tag  mem.local
    
    [OUTPUT]
        Name  stdout
        Match *
    
    [FILTER]
        Name nest
        Match *
        Operation nest
        Wildcard Mem.*
        Wildcard Swap.*
        Nest_under Stats
        Add_prefix NESTED
    
    [FILTER]
        Name nest
        Match *
        Operation lift
        Nested_under Stats
        Remove_prefix NESTED
    [2018/06/21 17:42:37] [ info] [engine] started (pid=17285)
    [0] mem.local: [1529566958.000940636, {"Mem.total"=>8053656, "Mem.used"=>6940380, "Mem.free"=>1113276, "Swap.total"=>16532988, "Swap.used"=>1286772, "Swap.free"=>15246216}]
    [INPUT]
        Name mem
        Tag  mem.local
    
    [OUTPUT]
        Name  stdout
        Match *
    
    [FILTER]
        Name nest
        Match *
        Operation nest
        Wildcard Mem.*
        Nest_under LAYER1
    
    [FILTER]
        Name nest
        Match *
        Operation nest
        Wildcard LAYER1*
        Nest_under LAYER2
    
    [FILTER]
        Name nest
        Match *
        Operation nest
        Wildcard LAYER2*
        Nest_under LAYER3
    [0] mem.local: [1524795923.009867831, {"Swap.total"=>1046524, "Swap.used"=>0, "Swap.free"=>1046524, "LAYER3"=>{"LAYER2"=>{"LAYER1"=>{"Mem.total"=>4050908, "Mem.used"=>1112036, "Mem.free"=>2938872}}}}]
    
    
    {
      "Swap.total"=>1046524,
      "Swap.used"=>0,
      "Swap.free"=>1046524,
      "LAYER3"=>{
        "LAYER2"=>{
          "LAYER1"=>{
            "Mem.total"=>4050908,
            "Mem.used"=>1112036,
            "Mem.free"=>2938872
          }
        }
      }
    }
    [INPUT]
        Name mem
        Tag  mem.local
    
    [OUTPUT]
        Name  stdout
        Match *
    
    [FILTER]
        Name nest
        Match *
        Operation nest
        Wildcard Mem.*
        Nest_under LAYER1
    
    [FILTER]
        Name nest
        Match *
        Operation nest
        Wildcard LAYER1*
        Nest_under LAYER2
    
    [FILTER]
        Name nest
        Match *
        Operation nest
        Wildcard LAYER2*
        Nest_under LAYER3
    
    [FILTER]
        Name nest
        Match *
        Operation lift
        Nested_under LAYER3
        Add_prefix Lifted3_
    
    [FILTER]
        Name nest
        Match *
        Operation lift
        Nested_under Lifted3_LAYER2
        Add_prefix Lifted3_Lifted2_
    
    [FILTER]
        Name nest
        Match *
        Operation lift
        Nested_under Lifted3_Lifted2_LAYER1
        Add_prefix Lifted3_Lifted2_Lifted1_
    [0] mem.local: [1524862951.013414798, {"Swap.total"=>1046524, "Swap.used"=>0, "Swap.free"=>1046524, "Lifted3_Lifted2_Lifted1_Mem.total"=>4050908, "Lifted3_Lifted2_Lifted1_Mem.used"=>1253912, "Lifted3_Lifted2_Lifted1_Mem.free"=>2796996}]
    
    
    {
      "Swap.total"=>1046524, 
      "Swap.used"=>0, 
      "Swap.free"=>1046524, 
      "Lifted3_Lifted2_Lifted1_Mem.total"=>4050908, 
      "Lifted3_Lifted2_Lifted1_Mem.used"=>1253912, 
      "Lifted3_Lifted2_Lifted1_Mem.free"=>2796996
    }

    • Nest_under (FIELD STRING; operation: nest): nest records matching the Wildcard under this key.

    • Nested_under (FIELD STRING; operation: lift): lift records nested under the Nested_under key.

    • Add_prefix (FIELD STRING; operation: ANY): prefix affected keys with this string.

    • Remove_prefix (FIELD STRING; operation: ANY): remove the prefix from affected keys if it matches this string.

    • Tag: overwrite the tag as we transmit. This allows the receiving pipeline to start fresh, or to attribute the source.

    • Send_options: always send options (with "size"=count of messages). Default: False.

    • Require_ack_response: send the "chunk" option and wait for an "ack" response from the server. This enables at-least-once delivery and lets the receiving server control the rate of traffic. Requires a Fluentd v0.14.0+ server. Default: False.

    • Self_Hostname: default value of the auto-generated certificate common name (CN). Default: localhost.

    • tls: enable or disable TLS support. Default: Off.

    • tls.verify: force certificate validation. Default: On.

    • tls.debug: set the TLS debug verbosity level. It accepts the following values: 0 (No debug), 1 (Error), 2 (State change), 3 (Informational) and 4 (Verbose). Default: 1.

    • tls.ca_file: absolute path to the CA certificate file.

    • tls.crt_file: absolute path to the Certificate file.

    • tls.key_file: absolute path to the private Key file.

    • tls.key_passwd: optional password for the tls.key_file file.

    Each parameter is listed as Key: description. Default.

    • Host: target host where Fluent Bit or Fluentd are listening for Forward messages. Default: 127.0.0.1.

    • Port: TCP port of the target service. Default: 24224.

    • Time_as_Integer: set timestamps in integer format; this enables compatibility mode for the Fluentd v0.12 series. Default: False.

    Each parameter is listed as Key: description. Default.

    • Shared_Key: a key string known by the remote Fluentd, used for authorization.

    • Empty_Shared_Key: use this option to connect to Fluentd with a zero-length secret. Default: False.

    • Username: specify the username to present to a Fluentd server that enables user_auth.


    • Password: specify the password corresponding to the username.

    • Upstream: if Forward will connect to an Upstream instead of a simple host, this property defines the absolute path of the Upstream configuration file. For more details, refer to the Upstream Servers documentation section.
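An Upstream configuration file of the kind referenced above might look like this sketch (the node names, hosts and ports are all hypothetical):

```
[UPSTREAM]
    name forward-balancing

[NODE]
    name node-1
    host 127.0.0.1
    port 43000

[NODE]
    name node-2
    host 127.0.0.1
    port 43001
```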

    Modify

    The Modify Filter plugin allows you to change records using rules and conditions.

    Example usage

    As an example using JSON notation to,

    • Rename Key2 to RenamedKey

    • Add a key OtherKey with value Value3 if OtherKey does not yet exist

    Example (input)

    Example (output)

    Configuration Parameters

    Rules

    The plugin supports the following rules:

    • Rules are case insensitive, parameters are not

    • Any number of rules can be set in a filter instance.

    • Rules are applied in the order they appear, with each rule operating on the result of the previous rule.

    Conditions

    The plugin supports the following conditions:

    • Conditions are case insensitive, parameters are not

    • Any number of conditions can be set.

    • Conditions apply to the whole filter instance and all its rules, not to individual rules.

    • All conditions have to be true for the rules to be applied.

    Example #1 - Add and Rename

    In order to start filtering records, you can run the filter from the command line or through the configuration file. The following invokes the Memory Usage Input Plugin, which outputs the following (example):

    Using command Line

    Note: Using the command line mode requires quotes to parse the wildcard properly. The use of a configuration file is recommended.

    Configuration File

    Result

    The output of both the command line and configuration invocations should be identical and result in the following output.

    Example #2 - Conditionally Add and Remove

    Configuration File

    Result

    Example #3 - Emoji

    Configuration File

    Result

    <source>
      type forward
      bind 0.0.0.0
      port 24224
    </source>
    
    <match fluent_bit>
      type stdout
    </match>
    $ fluentd -c test.conf
    2017-03-23 11:50:43 -0600 [info]: reading config file path="test.conf"
    2017-03-23 11:50:43 -0600 [info]: starting fluentd-0.12.33
    2017-03-23 11:50:43 -0600 [info]: gem 'fluent-mixin-config-placeholders' version '0.3.1'
    2017-03-23 11:50:43 -0600 [info]: gem 'fluent-plugin-docker' version '0.1.0'
    2017-03-23 11:50:43 -0600 [info]: gem 'fluent-plugin-elasticsearch' version '1.4.0'
    2017-03-23 11:50:43 -0600 [info]: gem 'fluent-plugin-flatten-hash' version '0.2.0'
    2017-03-23 11:50:43 -0600 [info]: gem 'fluent-plugin-flowcounter-simple' version '0.0.4'
    2017-03-23 11:50:43 -0600 [info]: gem 'fluent-plugin-influxdb' version '0.2.8'
    2017-03-23 11:50:43 -0600 [info]: gem 'fluent-plugin-json-in-json' version '0.1.4'
    2017-03-23 11:50:43 -0600 [info]: gem 'fluent-plugin-mongo' version '0.7.10'
    2017-03-23 11:50:43 -0600 [info]: gem 'fluent-plugin-out-http' version '0.1.3'
    2017-03-23 11:50:43 -0600 [info]: gem 'fluent-plugin-parser' version '0.6.0'
    2017-03-23 11:50:43 -0600 [info]: gem 'fluent-plugin-record-reformer' version '0.7.0'
    2017-03-23 11:50:43 -0600 [info]: gem 'fluent-plugin-rewrite-tag-filter' version '1.5.1'
    2017-03-23 11:50:43 -0600 [info]: gem 'fluent-plugin-stdin' version '0.1.1'
    2017-03-23 11:50:43 -0600 [info]: gem 'fluent-plugin-td' version '0.10.27'
    2017-03-23 11:50:43 -0600 [info]: adding match pattern="fluent_bit" type="stdout"
    2017-03-23 11:50:43 -0600 [info]: adding source type="forward"
    2017-03-23 11:50:43 -0600 [info]: using configuration file: <ROOT>
      <source>
        type forward
        bind 0.0.0.0
        port 24224
      </source>
      <match fluent_bit>
        type stdout
      </match>
    </ROOT>
    2017-03-23 11:50:43 -0600 [info]: listening fluent socket on 0.0.0.0:24224
    bin/fluent-bit -i INPUT -o forward://HOST:PORT
    $ bin/fluent-bit -i cpu -t fluent_bit -o forward://127.0.0.1:24224
    2017-03-23 11:53:06 -0600 fluent_bit: {"cpu_p":0.0,"user_p":0.0,"system_p":0.0,"cpu0.p_cpu":0.0,"cpu0.p_user":0.0,"cpu0.p_system":0.0,"cpu1.p_cpu":0.0,"cpu1.p_user":0.0,"cpu1.p_system":0.0,"cpu2.p_cpu":0.0,"cpu2.p_user":0.0,"cpu2.p_system":0.0,"cpu3.p_cpu":1.0,"cpu3.p_user":1.0,"cpu3.p_system":0.0}
    2017-03-23 11:53:07 -0600 fluent_bit: {"cpu_p":2.25,"user_p":2.0,"system_p":0.25,"cpu0.p_cpu":3.0,"cpu0.p_user":3.0,"cpu0.p_system":0.0,"cpu1.p_cpu":1.0,"cpu1.p_user":1.0,"cpu1.p_system":0.0,"cpu2.p_cpu":1.0,"cpu2.p_user":1.0,"cpu2.p_system":0.0,"cpu3.p_cpu":3.0,"cpu3.p_user":2.0,"cpu3.p_system":1.0}
    2017-03-23 11:53:08 -0600 fluent_bit: {"cpu_p":1.75,"user_p":1.0,"system_p":0.75,"cpu0.p_cpu":2.0,"cpu0.p_user":1.0,"cpu0.p_system":1.0,"cpu1.p_cpu":3.0,"cpu1.p_user":1.0,"cpu1.p_system":2.0,"cpu2.p_cpu":3.0,"cpu2.p_user":2.0,"cpu2.p_system":1.0,"cpu3.p_cpu":2.0,"cpu3.p_user":1.0,"cpu3.p_system":1.0}
    2017-03-23 11:53:09 -0600 fluent_bit: {"cpu_p":4.75,"user_p":3.5,"system_p":1.25,"cpu0.p_cpu":4.0,"cpu0.p_user":3.0,"cpu0.p_system":1.0,"cpu1.p_cpu":5.0,"cpu1.p_user":4.0,"cpu1.p_system":1.0,"cpu2.p_cpu":3.0,"cpu2.p_user":2.0,"cpu2.p_system":1.0,"cpu3.p_cpu":5.0,"cpu3.p_user":4.0,"cpu3.p_system":1.0}
    [SERVICE]
        Flush      5
        Daemon     off
        Log_Level  info
    
    [INPUT]
        Name       cpu
        Tag        cpu_usage
    
    [OUTPUT]
        Name          forward
        Match         *
        Host          127.0.0.1
        Port          24284
        Shared_Key    secret
        Self_Hostname flb.local
        tls           on
        tls.verify    off
    <source>
      @type         secure_forward
      self_hostname myserver.local
      shared_key    secret
      secure no
    </source>
    
    <match **>
     @type stdout
    </match>
    <source>
      @type forward
      <transport tls>
        cert_path /etc/td-agent/certs/fluentd.crt
        private_key_path /etc/td-agent/certs/fluentd.key
        private_key_passphrase password
      </transport>
      <security>
        self_hostname myserver.local
        shared_key secret
      </security>
    </source>
    
    <match **>
     @type stdout
    </match>
    $ fluentd -c fld.conf
    $ fluent-bit -c flb.conf
    2017-03-23 13:34:40 -0600 [info]: using configuration file: <ROOT>
      <source>
        @type secure_forward
        self_hostname myserver.local
        shared_key xxxxxx
        secure no
      </source>
      <match **>
        @type stdout
      </match>
    </ROOT>
    2017-03-23 13:34:41 -0600 cpu_usage: {"cpu_p":1.0,"user_p":0.75,"system_p":0.25,"cpu0.p_cpu":1.0,"cpu0.p_user":1.0,"cpu0.p_system":0.0,"cpu1.p_cpu":2.0,"cpu1.p_user":1.0,"cpu1.p_system":1.0,"cpu2.p_cpu":1.0,"cpu2.p_user":1.0,"cpu2.p_system":0.0,"cpu3.p_cpu":2.0,"cpu3.p_user":1.0,"cpu3.p_system":1.0}
    2017-03-23 13:34:42 -0600 cpu_usage: {"cpu_p":1.75,"user_p":1.75,"system_p":0.0,"cpu0.p_cpu":3.0,"cpu0.p_user":3.0,"cpu0.p_system":0.0,"cpu1.p_cpu":2.0,"cpu1.p_user":2.0,"cpu1.p_system":0.0,"cpu2.p_cpu":0.0,"cpu2.p_user":0.0,"cpu2.p_system":0.0,"cpu3.p_cpu":1.0,"cpu3.p_user":1.0,"cpu3.p_system":0.0}
    2017-03-23 13:34:43 -0600 cpu_usage: {"cpu_p":1.75,"user_p":1.25,"system_p":0.5,"cpu0.p_cpu":3.0,"cpu0.p_user":3.0,"cpu0.p_system":0.0,"cpu1.p_cpu":2.0,"cpu1.p_user":2.0,"cpu1.p_system":0.0,"cpu2.p_cpu":0.0,"cpu2.p_user":0.0,"cpu2.p_system":0.0,"cpu3.p_cpu":1.0,"cpu3.p_user":0.0,"cpu3.p_system":1.0}
    2017-03-23 13:34:44 -0600 cpu_usage: {"cpu_p":5.0,"user_p":3.25,"system_p":1.75,"cpu0.p_cpu":4.0,"cpu0.p_user":2.0,"cpu0.p_system":2.0,"cpu1.p_cpu":8.0,"cpu1.p_user":5.0,"cpu1.p_system":3.0,"cpu2.p_cpu":4.0,"cpu2.p_user":3.0,"cpu2.p_system":1.0,"cpu3.p_cpu":4.0,"cpu3.p_user":2.0,"cpu3.p_system":2.0}

    • Remove (STRING:KEY, NONE): remove a key/value pair with key KEY if it exists.

    • Remove_wildcard (WILDCARD:KEY, NONE): remove all key/value pairs with keys matching wildcard KEY.

    • Remove_regex (REGEXP:KEY, NONE): remove all key/value pairs with keys matching regexp KEY.

    • Rename (STRING:KEY, STRING:RENAMED_KEY): rename a key/value pair with key KEY to RENAMED_KEY if KEY exists AND RENAMED_KEY does not exist.

    • Hard_rename (STRING:KEY, STRING:RENAMED_KEY): rename a key/value pair with key KEY to RENAMED_KEY if KEY exists. If RENAMED_KEY already exists, this field is overwritten.

    • Copy (STRING:KEY, STRING:COPIED_KEY): copy a key/value pair with key KEY to COPIED_KEY if KEY exists AND COPIED_KEY does not exist.

    • Hard_copy (STRING:KEY, STRING:COPIED_KEY): copy a key/value pair with key KEY to COPIED_KEY if KEY exists. If COPIED_KEY already exists, this field is overwritten.

    • A_key_matches (REGEXP:KEY, NONE): true if a key matches regex KEY.

    • No_key_matches (REGEXP:KEY, NONE): true if no key matches regex KEY.

    • Key_value_equals (STRING:KEY, STRING:VALUE): true if KEY exists and its value is VALUE.

    • Key_value_does_not_equal (STRING:KEY, STRING:VALUE): true if KEY exists and its value is not VALUE.

    • Key_value_matches (STRING:KEY, REGEXP:VALUE): true if key KEY exists and its value matches VALUE.

    • Key_value_does_not_match (STRING:KEY, REGEXP:VALUE): true if key KEY exists and its value does not match VALUE.

    • Matching_keys_have_matching_values (REGEXP:KEY, REGEXP:VALUE): true if all keys matching KEY have values that match VALUE.

    • Matching_keys_do_not_have_matching_values (REGEXP:KEY, REGEXP:VALUE): true if all keys matching KEY have values that do not match VALUE.


    Each rule is listed as Operation (Parameter 1, Parameter 2): description.

    • Set (STRING:KEY, STRING:VALUE): add a key/value pair with key KEY and value VALUE. If KEY already exists, this field is overwritten.

    • Add (STRING:KEY, STRING:VALUE): add a key/value pair with key KEY and value VALUE if KEY does not exist.

    Each condition is listed as Condition (Parameter, Parameter 2): description.

    • Key_exists (STRING:KEY, NONE): true if KEY exists.

    • Key_does_not_exist (STRING:KEY, STRING:VALUE): true if KEY does not exist.

    {
      "Key1"     : "Value1",
      "Key2"     : "Value2"
    }
    {
      "Key1"       : "Value1",
      "RenamedKey" : "Value2",
      "OtherKey"   : "Value3"
    }
    [0] memory: [1488543156, {"Mem.total"=>1016044, "Mem.used"=>841388, "Mem.free"=>174656, "Swap.total"=>2064380, "Swap.used"=>139888, "Swap.free"=>1924492}]
    [1] memory: [1488543157, {"Mem.total"=>1016044, "Mem.used"=>841420, "Mem.free"=>174624, "Swap.total"=>2064380, "Swap.used"=>139888, "Swap.free"=>1924492}]
    [2] memory: [1488543158, {"Mem.total"=>1016044, "Mem.used"=>841420, "Mem.free"=>174624, "Swap.total"=>2064380, "Swap.used"=>139888, "Swap.free"=>1924492}]
    [3] memory: [1488543159, {"Mem.total"=>1016044, "Mem.used"=>841420, "Mem.free"=>174624, "Swap.total"=>2064380, "Swap.used"=>139888, "Swap.free"=>1924492}]
    bin/fluent-bit -i mem \
      -p 'tag=mem.local' \
      -F modify \
      -p 'Add=Service1 SOMEVALUE' \
      -p 'Add=Service3 SOMEVALUE3' \
      -p 'Add=Mem.total2 TOTALMEM2' \
      -p 'Rename=Mem.free MEMFREE' \
      -p 'Rename=Mem.used MEMUSED' \
      -p 'Rename=Swap.total SWAPTOTAL' \
      -p 'Add=Mem.total TOTALMEM' \
      -m '*' \
      -o stdout
    [INPUT]
        Name mem
        Tag  mem.local
    
    [OUTPUT]
        Name  stdout
        Match *
    
    [FILTER]
        Name modify
        Match *
        Add Service1 SOMEVALUE
        Add Service3 SOMEVALUE3
        Add Mem.total2 TOTALMEM2
        Rename Mem.free MEMFREE
        Rename Mem.used MEMUSED
        Rename Swap.total SWAPTOTAL
        Add Mem.total TOTALMEM
    [2018/04/06 01:35:13] [ info] [engine] started
    [0] mem.local: [1522980610.006892802, {"Mem.total"=>4050908, "MEMUSED"=>738100, "MEMFREE"=>3312808, "SWAPTOTAL"=>1046524, "Swap.used"=>0, "Swap.free"=>1046524, "Service1"=>"SOMEVALUE", "Service3"=>"SOMEVALUE3", "Mem.total2"=>"TOTALMEM2"}]
    [1] mem.local: [1522980611.000658288, {"Mem.total"=>4050908, "MEMUSED"=>738068, "MEMFREE"=>3312840, "SWAPTOTAL"=>1046524, "Swap.used"=>0, "Swap.free"=>1046524, "Service1"=>"SOMEVALUE", "Service3"=>"SOMEVALUE3", "Mem.total2"=>"TOTALMEM2"}]
    [2] mem.local: [1522980612.000307652, {"Mem.total"=>4050908, "MEMUSED"=>738068, "MEMFREE"=>3312840, "SWAPTOTAL"=>1046524, "Swap.used"=>0, "Swap.free"=>1046524, "Service1"=>"SOMEVALUE", "Service3"=>"SOMEVALUE3", "Mem.total2"=>"TOTALMEM2"}]
    [3] mem.local: [1522980613.000122671, {"Mem.total"=>4050908, "MEMUSED"=>738068, "MEMFREE"=>3312840, "SWAPTOTAL"=>1046524, "Swap.used"=>0, "Swap.free"=>1046524, "Service1"=>"SOMEVALUE", "Service3"=>"SOMEVALUE3", "Mem.total2"=>"TOTALMEM2"}]
    [INPUT]
        Name mem
        Tag  mem.local
        Interval_Sec 1
    
    [FILTER]
        Name    modify
        Match   mem.*
    
        Condition Key_Does_Not_Exist cpustats
        Condition Key_Exists Mem.used
    
        Set cpustats UNKNOWN
    
    [FILTER]
        Name    modify
        Match   mem.*
    
        Condition Key_Value_Does_Not_Equal cpustats KNOWN
    
        Add sourcetype memstats
    
    [FILTER]
        Name    modify
        Match   mem.*
    
        Condition Key_Value_Equals cpustats UNKNOWN
    
        Remove_wildcard Mem
        Remove_wildcard Swap
        Add cpustats_more STILL_UNKNOWN
    
    [OUTPUT]
        Name           stdout
        Match          *
    [2018/06/14 07:37:34] [ info] [engine] started (pid=1493)
    [0] mem.local: [1528925855.000223110, {"cpustats"=>"UNKNOWN", "sourcetype"=>"memstats", "cpustats_more"=>"STILL_UNKNOWN"}]
    [1] mem.local: [1528925856.000064516, {"cpustats"=>"UNKNOWN", "sourcetype"=>"memstats", "cpustats_more"=>"STILL_UNKNOWN"}]
    [2] mem.local: [1528925857.000165965, {"cpustats"=>"UNKNOWN", "sourcetype"=>"memstats", "cpustats_more"=>"STILL_UNKNOWN"}]
    [3] mem.local: [1528925858.000152319, {"cpustats"=>"UNKNOWN", "sourcetype"=>"memstats", "cpustats_more"=>"STILL_UNKNOWN"}]
    [INPUT]
        Name mem
        Tag  mem.local
    
    [OUTPUT]
        Name  stdout
        Match *
    
    [FILTER]
        Name modify
        Match *
    
        Remove_Wildcard Mem
        Remove_Wildcard Swap
        Set This_plugin_is_on 🔥
        Set 🔥 is_hot
        Copy 🔥 💦
        Rename  💦 ❄️
        Set ❄️ is_cold
        Set 💦 is_wet
    [2018/06/14 07:46:11] [ info] [engine] started (pid=21875)
    [0] mem.local: [1528926372.000197916, {"This_plugin_is_on"=>"🔥", "🔥"=>"is_hot", "❄️"=>"is_cold", "💦"=>"is_wet"}]
    [1] mem.local: [1528926373.000107868, {"This_plugin_is_on"=>"🔥", "🔥"=>"is_hot", "❄️"=>"is_cold", "💦"=>"is_wet"}]
    [2] mem.local: [1528926374.000181042, {"This_plugin_is_on"=>"🔥", "🔥"=>"is_hot", "❄️"=>"is_cold", "💦"=>"is_wet"}]
    [3] mem.local: [1528926375.000090841, {"This_plugin_is_on"=>"🔥", "🔥"=>"is_hot", "❄️"=>"is_cold", "💦"=>"is_wet"}]
    [0] mem.local: [1528926376.000610974, {"This_plugin_is_on"=>"🔥", "🔥"=>"is_hot", "❄️"=>"is_cold", "💦"=>"is_wet"}]

    Kubernetes

    The Fluent Bit Kubernetes Filter allows you to enrich your log files with Kubernetes metadata.

    When Fluent Bit is deployed in Kubernetes as a DaemonSet and configured to read the log files from the containers (using tail or systemd input plugins), this filter aims to perform the following operations:

    • Analyze the Tag and extract the following metadata:

      • Pod Name

      • Namespace

      • Container Name

      • Container ID

    • Query Kubernetes API Server to obtain extra metadata for the POD in question:

      • Pod ID

      • Labels

      • Annotations

    The data is cached locally in memory and appended to each record.

    Configuration Parameters

    The plugin supports the following configuration parameters:

    Processing the 'log' value

    Kubernetes Filter aims to provide several ways to process the data contained in the log key. The following explanation of the workflow assumes that your original Docker parser defined in parsers.conf is as follows:
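Fluent Bit ships with a docker parser in its stock parsers.conf along these lines (check your installed file, as the exact definition may differ):

```
[PARSER]
    Name        docker
    Format      json
    Time_Key    time
    Time_Format %Y-%m-%dT%H:%M:%S.%L
    Time_Keep   On
```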

    Since Fluent Bit v1.2, we do not suggest the use of decoders (Decode_Field_As) if you are using an Elasticsearch database in the output, to avoid data type conflicts.

    To perform processing of the log key, it's mandatory to enable the Merge_Log configuration property in this filter; then the following processing order applies:

    • If the Pod suggests a parser, the filter will use that parser to process the content of log.

    • If the option Merge_Parser was set and the Pod did not suggest a parser, process the log content using the parser set in the configuration.

    • If the Pod did not suggest a parser and Merge_Parser is not set, try to handle the content as JSON.

    If log value processing fails, the value is untouched. The order above is not chained: it's exclusive, and the filter will try only one of the options above, not all of them.
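A filter section enabling this processing could be sketched as follows (the Merge_Log_Key value log_processed is an arbitrary example):

```
[FILTER]
    Name          kubernetes
    Match         kube.*
    Merge_Log     On
    Merge_Log_Key log_processed
    Keep_Log      Off
```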

    Kubernetes Annotations

    A flexible feature of the Fluent Bit Kubernetes filter is that it allows Kubernetes Pods to suggest certain behaviors for the log processor pipeline when processing the records. At the moment it supports:

    • Suggest a pre-defined parser

    • Request to exclude logs

    The following annotations are available:

    Annotation Examples in Pod definition

    Suggest a parser

    The following Pod definition runs a Pod that emits Apache logs to the standard output. In the Annotations it suggests that the data should be processed using the pre-defined parser called apache:
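A sketch of such a Pod definition (assuming the documented annotation key fluentbit.io/parser; the Pod and image names are illustrative):

```
apiVersion: v1
kind: Pod
metadata:
  name: apache-logs
  labels:
    app: apache-logs
  annotations:
    fluentbit.io/parser: apache
spec:
  containers:
  - name: apache
    image: edsiper/apache_logs
```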

    Request to exclude logs

    There are certain situations where the user would like to request that the log processor simply skip the logs from the Pod in question:

    Note that the annotation value is boolean, which can take a true or false value, and must be quoted.
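A sketch of a Pod requesting exclusion (assuming the documented annotation key fluentbit.io/exclude; the Pod and image names are illustrative):

```
apiVersion: v1
kind: Pod
metadata:
  name: apache-logs
  annotations:
    fluentbit.io/exclude: "true"
spec:
  containers:
  - name: apache
    image: edsiper/apache_logs
```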

    Workflow of Tail + Kubernetes Filter

    Kubernetes Filter depends on either the Tail or Systemd input plugins to process and enrich records with Kubernetes metadata. Here we will explain the workflow of Tail and how its configuration is correlated with the Kubernetes filter. Consider the following configuration example (just for demo purposes, not production):
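A configuration consistent with the description below could be sketched as (the tag, path and parser follow the text; everything else is an assumption):

```
[INPUT]
    Name      tail
    Tag       kube.*
    Path      /var/log/containers/*.log
    Parser    docker

[FILTER]
    Name      kubernetes
    Match     kube.*
    Merge_Log On
```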

    In the input section, the Tail plugin will monitor all files ending in .log in the path /var/log/containers/. For every file it will read every line, apply the docker parser, and then emit the records to the next step with an expanded tag.

    Tail supports tag expansion: if a tag contains a star character (*), the star is replaced with the absolute path of the monitored file. So if your file name and path is:

    then the Tag for every record of that file becomes:

    Note that slashes are replaced with dots.
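For example, with a hypothetical monitored file:

```
/var/log/containers/my-app_default_app-abc123.log
```

the expanded tag (assuming Tag kube.*) would become:

```
kube.var.log.containers.my-app_default_app-abc123.log
```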

    When the Kubernetes filter runs, it will try to match all records whose tag starts with kube. (note the ending dot), so records from the file mentioned above will hit the matching rule and the filter will try to enrich them.

    The Kubernetes filter does not care where the logs come from, but it does care about the absolute name of the monitored file, because that information contains the pod name and namespace name that are used to retrieve the associated metadata for the running Pod from the Kubernetes Master/API Server.

    If you have large pod specifications (which can be caused by a large number of environment variables, etc.), be sure to increase the Buffer_Size parameter of the kubernetes filter. If object sizes exceed this buffer, some metadata will fail to be injected into the logs.

    If the configuration property Kube_Tag_Prefix was configured (available in Fluent Bit >= 1.1.x), it will use that value to remove the prefix that was appended to the Tag in the previous Input section. Note that the configuration property defaults to kube.var.log.containers., so the previous Tag content will be transformed from:

    to:

    The transformation above does not modify the original Tag; it just creates a new representation for the filter to perform the metadata lookup.

    That new value is used by the filter to look up the pod name and namespace; for that purpose it uses an internal regular expression:

    If you want to know more details, check the source code of that definition.

    You can see how this operation is performed; check the following demo link:

    Custom Regex

    Under certain and not common conditions, a user would want to alter that hard-coded regular expression. For that purpose, the option Regex_Parser can be used (documented above).

    Final Comments

    So at this point the filter is able to gather the values of pod_name and namespace. With that information, it will check in the local cache (an internal hash table) whether some metadata for that key pair exists. If so, it will enrich the record with the metadata value; otherwise, it will connect to the Kubernetes Master/API Server and retrieve that information.

    | Key | Description | Default |
    | :--- | :--- | :--- |
    | Buffer_Size | Set the buffer size for the HTTP client when reading responses from the Kubernetes API server. The value must conform to the Unit Size specification. A value of 0 results in no limit, and the buffer will expand as needed. Note that if pod specifications exceed the buffer limit, the API response will be discarded when retrieving metadata, and some Kubernetes metadata will fail to be injected into the logs. | 32k |
    | Kube_URL | API Server end-point | https://kubernetes.default.svc:443 |
    | Kube_CA_File | CA certificate file | /var/run/secrets/kubernetes.io/serviceaccount/ca.crt |
    | Kube_CA_Path | Absolute path to scan for certificate files | |
    | Kube_Token_File | Token file | /var/run/secrets/kubernetes.io/serviceaccount/token |
    | Kube_Tag_Prefix | When the source records come from the Tail input plugin, this option specifies the prefix used in the Tail configuration. | kube.var.log.containers. |
    | Merge_Log | When enabled, check if the log field content is a JSON string map; if so, append the map fields as part of the log structure. | Off |
    | Merge_Log_Key | When Merge_Log is enabled, the filter assumes the log field of the incoming message is a JSON string and makes a structured representation of it at the same level as the log field in the map. If Merge_Log_Key is set (a string name), all the new structured fields taken from the original log content are inserted under that new key. | |
    | Merge_Log_Trim | When Merge_Log is enabled, trim (remove possible \n or \r) field values. | On |
    | Merge_Parser | Optional parser name to specify how to parse the data contained in the log key. Recommended for developers or testing only. | |
    | Keep_Log | When Keep_Log is disabled, the log field is removed from the incoming message once it has been successfully merged (Merge_Log must be enabled as well). | On |
    | tls.debug | Debug level between 0 (nothing) and 4 (every detail). | -1 |
    | tls.verify | When enabled, turns on certificate validation when connecting to the Kubernetes API server. | On |
    | Use_Journal | When enabled, the filter reads logs coming in Journald format. | Off |
    | Regex_Parser | Set an alternative Parser to process the record Tag and extract pod_name, namespace_name, container_name and docker_id. The parser must be registered in a parsers file (refer to the parser filter-kube-test as an example). | |
    | K8S-Logging.Parser | Allow Kubernetes Pods to suggest a pre-defined Parser (read more about it in the Kubernetes Annotations section). | Off |
    | K8S-Logging.Exclude | Allow Kubernetes Pods to exclude their logs from the log processor (read more about it in the Kubernetes Annotations section). | Off |
    | Labels | Include Kubernetes resource labels in the extra metadata. | On |
    | Annotations | Include Kubernetes resource annotations in the extra metadata. | On |
    | Kube_meta_preload_cache_dir | If set, Kubernetes metadata can be cached/pre-loaded from files in JSON format in this directory, named as namespace-pod.meta | |
    | Dummy_Meta | If set, use dummy-meta data (for test/dev purposes) | Off |
    | DNS_Retries | DNS lookup retries N times until the network starts working | 6 |
    | DNS_Wait_Time | DNS lookup interval between network status checks | 30 |

    | Annotation | Description | Default |
    | :--- | :--- | :--- |
    | fluentbit.io/parser[_stream][-container] | Suggest a pre-defined parser. The parser must be registered already by Fluent Bit. This option is processed only if the Fluent Bit configuration (Kubernetes Filter) has enabled the option K8S-Logging.Parser. If present, the _stream suffix (stdout or stderr) restricts the parser to that specific stream. If present, the -container suffix overrides the parser for a specific container in the Pod. | |
    | fluentbit.io/exclude[_stream][-container] | Request Fluent Bit to exclude (or not) the logs generated by the Pod. This option is processed only if the Fluent Bit configuration (Kubernetes Filter) has enabled the option K8S-Logging.Exclude. | False |
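    As an illustration of how Merge_Log and Merge_Log_Key reshape a record, the following Python sketch mimics the behavior. The record content and the log_processed key are illustrative, not taken from a real deployment:

    ```python
    import json

    # Incoming record whose "log" field holds a JSON string (illustrative).
    record = {"log": '{"status": 200, "path": "/index.html"}'}

    merge_log_key = "log_processed"  # value of Merge_Log_Key

    # Merge_Log On: try to parse the log field as a JSON map.
    try:
        parsed = json.loads(record["log"])
    except ValueError:
        parsed = None

    if isinstance(parsed, dict):
        if merge_log_key:
            # With Merge_Log_Key set, parsed fields land under that key.
            record[merge_log_key] = parsed
        else:
            # Without it, the fields are merged at the top level of the record.
            record.update(parsed)

    print(record["log_processed"]["status"])  # 200
    ```

    Note that with Keep_Log On (the default), the original log field is preserved next to the merged fields, as in the sketch above.
    
    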

    [PARSER]
        Name         docker
        Format       json
        Time_Key     time
        Time_Format  %Y-%m-%dT%H:%M:%S.%L
        Time_Keep    On
    apiVersion: v1
    kind: Pod
    metadata:
      name: apache-logs
      labels:
        app: apache-logs
      annotations:
        fluentbit.io/parser: apache
    spec:
      containers:
      - name: apache
        image: edsiper/apache_logs
    apiVersion: v1
    kind: Pod
    metadata:
      name: apache-logs
      labels:
        app: apache-logs
      annotations:
        fluentbit.io/exclude: "true"
    spec:
      containers:
      - name: apache
        image: edsiper/apache_logs
    [INPUT]
        Name    tail
        Tag     kube.*
        Path    /var/log/containers/*.log
        Parser  docker
    
    [FILTER]
        Name             kubernetes
        Match            kube.*
        Kube_URL         https://kubernetes.default.svc:443
        Kube_CA_File     /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        Kube_Token_File  /var/run/secrets/kubernetes.io/serviceaccount/token
        Kube_Tag_Prefix  kube.var.log.containers.
        Merge_Log        On
        Merge_Log_Key    log_processed
    /var/log/container/apache-logs-annotated_default_apache-aeeccc7a9f00f6e4e066aeff0434cf80621215071f1b20a51e8340aa7c35eac6.log
    kube.var.log.containers.apache-logs-annotated_default_apache-aeeccc7a9f00f6e4e066aeff0434cf80621215071f1b20a51e8340aa7c35eac6.log
    kube.var.log.containers.apache-logs-annotated_default_apache-aeeccc7a9f00f6e4e066aeff0434cf80621215071f1b20a51e8340aa7c35eac6.log
    apache-logs-annotated_default_apache-aeeccc7a9f00f6e4e066aeff0434cf80621215071f1b20a51e8340aa7c35eac6.log
    (?<pod_name>[a-z0-9](?:[-a-z0-9]*[a-z0-9])?(?:\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*)_(?<namespace_name>[^_]+)_(?<container_name>.+)-(?<docker_id>[a-z0-9]{64})\.log$

    Build and Install

    Fluent Bit uses CMake as its build system. The suggested procedure to prepare the build system consists of the following steps.

    Prepare environment

    In the following steps you can find the exact commands to build and install the project with the default options. If you already know how CMake works you can skip this part and look at the build options available. Note that Fluent Bit requires CMake 3.x; you may need to use cmake3 instead of cmake to complete the following steps on your system.

    Change to the build/ directory inside the Fluent Bit sources:

    $ cd build/

    Let's configure the project, specifying where the root path is located:

    $ cmake ../
    -- The C compiler identification is GNU 4.9.2
    -- Check for working C compiler: /usr/bin/cc
    -- Check for working C compiler: /usr/bin/cc -- works
    -- Detecting C compiler ABI info
    -- Detecting C compiler ABI info - done
    -- The CXX compiler identification is GNU 4.9.2
    -- Check for working CXX compiler: /usr/bin/c++
    -- Check for working CXX compiler: /usr/bin/c++ -- works
    ...
    -- Could NOT find Doxygen (missing:  DOXYGEN_EXECUTABLE)
    -- Looking for accept4
    -- Looking for accept4 - not found
    -- Configuring done
    -- Generating done
    -- Build files have been written to: /home/edsiper/coding/fluent-bit/build

    Now you are ready to start the compilation process through the simple make command:

    $ make
    Scanning dependencies of target msgpack
    [  2%] Building C object lib/msgpack-1.1.0/CMakeFiles/msgpack.dir/src/unpack.c.o
    [  4%] Building C object lib/msgpack-1.1.0/CMakeFiles/msgpack.dir/src/objectc.c.o
    [  7%] Building C object lib/msgpack-1.1.0/CMakeFiles/msgpack.dir/src/version.c.o
    ...
    [ 19%] Building C object lib/monkey/mk_core/CMakeFiles/mk_core.dir/mk_file.c.o
    [ 21%] Building C object lib/monkey/mk_core/CMakeFiles/mk_core.dir/mk_rconf.c.o
    [ 23%] Building C object lib/monkey/mk_core/CMakeFiles/mk_core.dir/mk_string.c.o
    ...
    Scanning dependencies of target fluent-bit-static
    [ 66%] Building C object src/CMakeFiles/fluent-bit-static.dir/flb_pack.c.o
    [ 69%] Building C object src/CMakeFiles/fluent-bit-static.dir/flb_input.c.o
    [ 71%] Building C object src/CMakeFiles/fluent-bit-static.dir/flb_output.c.o
    ...
    Linking C executable ../bin/fluent-bit
    [100%] Built target fluent-bit-bin

    To continue installing the binary on the system, just run:

    $ make install

    You will likely need root privileges, so prefix the command with sudo.

    Build Options

    Fluent Bit provides certain options to CMake that can be enabled or disabled when configuring. Please refer to the tables in the General Options, Development Options, Input Plugins, Filter Plugins and Output Plugins sections below.

    General Options

    | option | description | default |
    | :--- | :--- | :--- |
    | FLB_ALL | Enable all features available | No |
    | FLB_JEMALLOC | Use Jemalloc as default memory allocator | No |
    | FLB_TLS | Build with SSL/TLS support | No |
    | FLB_BINARY | Build executable | Yes |
    | FLB_EXAMPLES | Build examples | Yes |
    | FLB_SHARED_LIB | Build shared library | Yes |
    | FLB_MTRACE | Enable mtrace support | No |
    | FLB_INOTIFY | Enable Inotify support | Yes |
    | FLB_POSIX_TLS | Force POSIX thread storage | No |
    | FLB_SQLDB | Enable SQL embedded database support | No |
    | FLB_HTTP_SERVER | Enable HTTP Server | No |
    | FLB_LUAJIT | Enable Lua scripting support | Yes |
    | FLB_RECORD_ACCESSOR | Enable record accessor | Yes |
    | FLB_SIGNV4 | Enable AWS Signv4 support | Yes |
    | FLB_STATIC_CONF | Build binary using static configuration files. The value of this option must be a directory containing configuration files. | |
    | FLB_STREAM_PROCESSOR | Enable Stream Processor | Yes |
    | FLB_TESTS_RUNTIME | Enable runtime tests | No |
    | FLB_TESTS_INTERNAL | Enable internal tests | No |
    | FLB_TESTS | Enable tests | No |
    | FLB_BACKTRACE | Enable backtrace/stacktrace support | Yes |

    Development Options

    | option | description | default |
    | :--- | :--- | :--- |
    | FLB_DEBUG | Build binaries with debug symbols | No |
    | FLB_VALGRIND | Enable Valgrind support | No |
    | FLB_TRACE | Enable trace mode | No |
    | FLB_SMALL | Minimise binary size | |

    Input Plugins

    The input plugins allow gathering information from a specific source type, which can be a network interface, some built-in metric or a specific input device. The following input plugins are available:

    | option | description | default |
    | :--- | :--- | :--- |
    | FLB_IN_COLLECTD | Enable Collectd input plugin | On |
    | FLB_IN_CPU | Enable CPU input plugin | On |
    | FLB_IN_DISK | Enable Disk I/O Metrics input plugin | On |
    | FLB_IN_DOCKER | Enable Docker metrics input plugin | On |
    | FLB_IN_EXEC | Enable Exec input plugin | On |
    | FLB_IN_FORWARD | Enable Forward input plugin | On |
    | FLB_IN_HEAD | Enable Head input plugin | On |
    | FLB_IN_HEALTH | Enable Health input plugin | On |
    | FLB_IN_KMSG | Enable Kernel log input plugin | On |
    | FLB_IN_MEM | Enable Memory input plugin | On |
    | FLB_IN_MQTT | Enable MQTT Server input plugin | On |
    | FLB_IN_NETIF | Enable Network I/O metrics input plugin | On |
    | FLB_IN_PROC | Enable Process monitoring input plugin | On |
    | FLB_IN_RANDOM | Enable Random input plugin | On |
    | FLB_IN_SERIAL | Enable Serial input plugin | On |
    | FLB_IN_STDIN | Enable Standard input plugin | On |
    | FLB_IN_SYSLOG | Enable Syslog input plugin | On |
    | FLB_IN_SYSTEMD | Enable Systemd / Journald input plugin | On |
    | FLB_IN_TAIL | Enable Tail (follow files) input plugin | On |
    | FLB_IN_TCP | Enable TCP input plugin | On |
    | FLB_IN_THERMAL | Enable system temperature(s) input plugin | On |
    | FLB_IN_WINLOG | Enable Windows Event Log input plugin (Windows Only) | On |

    Filter Plugins

    The filter plugins allow you to modify, enrich or drop records. The following table describes the filters available on this version:

    | option | description | default |
    | :--- | :--- | :--- |
    | FLB_FILTER_AWS | Enable AWS metadata filter | On |
    | FLB_FILTER_EXPECT | Enable Expect data test filter | On |
    | FLB_FILTER_GREP | Enable Grep filter | On |
    | FLB_FILTER_KUBERNETES | Enable Kubernetes metadata filter | On |
    | FLB_FILTER_LUA | Enable Lua scripting filter | On |
    | FLB_FILTER_MODIFY | Enable Modify filter | On |
    | FLB_FILTER_NEST | Enable Nest filter | On |
    | FLB_FILTER_PARSER | Enable Parser filter | On |
    | FLB_FILTER_RECORD_MODIFIER | Enable Record Modifier filter | On |
    | FLB_FILTER_REWRITE_TAG | Enable Rewrite Tag filter | On |
    | FLB_FILTER_STDOUT | Enable Stdout filter | On |
    | FLB_FILTER_THROTTLE | Enable Throttle filter | On |

    Output Plugins

    The output plugins allow flushing the information to some external interface, service or terminal. The following table describes the output plugins available as of this version:

    | option | description | default |
    | :--- | :--- | :--- |
    | FLB_OUT_AZURE | Enable Microsoft Azure output plugin | On |
    | FLB_OUT_BIGQUERY | Enable Google BigQuery output plugin | On |
    | FLB_OUT_COUNTER | Enable Counter output plugin | On |
    | FLB_OUT_CLOUDWATCH_LOGS | Enable Amazon CloudWatch output plugin | On |
    | FLB_OUT_DATADOG | Enable Datadog output plugin | On |
    | FLB_OUT_ES | Enable Elastic Search output plugin | On |
    | FLB_OUT_FILE | Enable File output plugin | On |
    | FLB_OUT_FLOWCOUNTER | Enable Flowcounter output plugin | On |
    | FLB_OUT_FORWARD | Enable Forward (Fluentd) output plugin | On |
    | FLB_OUT_GELF | Enable Gelf output plugin | On |
    | FLB_OUT_HTTP | Enable HTTP output plugin | On |
    | FLB_OUT_INFLUXDB | Enable InfluxDB output plugin | On |
    | FLB_OUT_KAFKA | Enable Kafka output | Off |
    | FLB_OUT_KAFKA_REST | Enable Kafka REST Proxy output plugin | On |
    | FLB_OUT_LIB | Enable Lib output plugin | On |
    | FLB_OUT_NATS | Enable NATS output plugin | Off |
    | FLB_OUT_NULL | Enable NULL output plugin | On |
    | FLB_OUT_PGSQL | Enable PostgreSQL output plugin | On |
    | FLB_OUT_PLOT | Enable Plot output plugin | On |
    | FLB_OUT_SLACK | Enable Slack output plugin | On |
    | FLB_OUT_SPLUNK | Enable Splunk output plugin | On |
    | FLB_OUT_STACKDRIVER | Enable Google Stackdriver output plugin | On |
    | FLB_OUT_STDOUT | Enable STDOUT output plugin | On |
    | FLB_OUT_TCP | Enable TCP/TLS output plugin | On |
    | FLB_OUT_TD | Enable Treasure Data output plugin | On |
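    Any of the build options above can be toggled at configure time by passing it to cmake with the -D prefix. For example, to enable the built-in HTTP server when configuring (option name taken from the General Options table):

    $ cmake -DFLB_HTTP_SERVER=On ../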