Introduction

Fluent Bit is a Fast and Lightweight Log Processor and Forwarder for Linux, OSX and BSD family operating systems. It has been made with a strong focus on performance to allow the collection of events from different sources without complexity.

Fluent Bit is part of the Fluentd project ecosystem; it's licensed under the terms of the Apache License v2.0. This project is made and sponsored by Treasure Data.

About

Fluent Bit is an open source and multi-platform log forwarder tool which aims to be a generic Swiss knife for log collection and distribution.

We, Treasure Data, as a Big Data company, provide an analytics infrastructure in the Cloud where we offer an end-to-end solution to collect, store and do analytics over the data. Fluent Bit is an integral part of this pipeline where it solves the log collection needs.

Being an open source project, it has been widely adopted to solve logging needs in Cloud Native environments where Docker and Kubernetes are key components; Fluent Bit is a natural fit.


Service

Fluent Bit has a 'Service' which runs the filter chain from input to output. Global configuration here includes whether to daemonise, diagnostic logging, flush interval, etc.

For more details, please refer to the Service section.
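As a brief illustration (the values below are only examples), a Service section in the main configuration file could look like this:

[SERVICE]
    Flush     5
    Daemon    Off
    Log_Level info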

Why Fluent Bit?

Data collection and log forwarding is hard.

Nowadays the number of sources of information in our environments is ever increasing. Handling data collection at scale is complex, and collecting and aggregating diverse data requires a specialized tool that can deal with:

  • Different sources of information.

  • Different data formats.

  • Multiple destinations.

Fluent Bit was born to address the need for a high performance and optimized tool that can collect data from any input source, unify that data and deliver it to multiple destinations.

CentOS Packages

Install on Redhat / CentOS

Fluent Bit is distributed as the td-agent-bit package and is available for the latest stable CentOS system. This stable Fluent Bit distribution package is maintained by Treasure Data, Inc.

Parser

Dealing with raw strings is a constant pain; having a structure is highly desired. Ideally we want to set a structure to the incoming data by the Input Plugins as soon as they are collected:

The Parser allows you to convert from unstructured to structured data. As a demonstrative example consider the following Apache (HTTP Server) log entry:

The above log line is a raw string without a defined format; ideally we would like to give it a structure that can be easily processed later. If the proper configuration is used, the log entry could be converted to:

Parsers are fully configurable and are independently and optionally handled by each input plugin; for more details please refer to the Parsers section.

Installation

The following section will guide you through the steps to download, build and install Fluent Bit from sources, as well as specific instructions to install the binaries that we already distribute for Debian/Ubuntu/Redhat/CentOS and Raspberry Pi.

If you find a problem on a certain step, don't hesitate to report it on our bug tracker: https://github.com/fluent/fluent-bit/issues

Buffer

When the data or logs are ready to be routed to some destination, by default they are buffered in memory.

Note that buffered data is no longer raw text; instead it's in Fluent Bit's internal binary representation.

Optionally Fluent Bit offers a buffering mechanism in the file system that acts as a backup system to avoid data loss in case of system failures.

Output

The output interface allows us to define destinations for the data. Common destinations are remote services, local file system or standard interface with others. Outputs are implemented as plugins and there are many available.

When an output plugin is loaded, an internal instance is created. Every instance has its own independent configuration. Configuration keys are often called properties.

Every output plugin has its own documentation section specifying how it can be used and what properties are available.

For more details, please refer to the Output Plugins section.
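As an illustration, two output instances could be defined, each with its own properties (the Match patterns, host and port below are placeholders):

[OUTPUT]
    Name  stdout
    Match cpu.*

[OUTPUT]
    Name  es
    Match app.*
    Host  127.0.0.1
    Port  9200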

Input

Fluent Bit provides different Input Plugins to gather information from different sources; some of them just collect data from log files while others can gather metrics information from the operating system. There are many plugins for different needs.

When an input plugin is loaded, an internal instance is created. Every instance has its own and independent configuration. Configuration keys are often called properties.

Every input plugin has its own documentation section specifying how it can be used and what properties are available.

For more details, please refer to the Input Plugins section.
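For example, an input plugin instance is declared in the configuration file by its Name plus optional properties such as a Tag (as also shown later in the Routing section):

[INPUT]
    Name cpu
    Tag  my_cpu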


Requirements

Fluent Bit has very low CPU and memory consumption and is compatible with most x86, x86_64, AArch32 and AArch64 based platforms. In order to build it you need the following components in your system:

  • Compiler: GCC or clang

  • CMake

  • Flex (only if Stream Processor is enabled)

  • Bison (only if Stream Processor is enabled)

There are no other dependencies besides libc and pthreads in the most basic mode. Certain features that depend on third party components are included in the main source code repository.

Configure Yum

We provide td-agent-bit through a Yum repository. In order to add the repository reference to your system, please add a new file called td-agent-bit.repo in /etc/yum.repos.d/ with the following content:

Note: we encourage you to always enable gpgcheck for security reasons. All our packages are signed.

Install

Once your repository is configured, run the following command to install it:

The next step is to instruct systemd to enable the service:

If you do a status check, you should see output similar to this:

The default configuration of td-agent-bit collects CPU usage metrics and sends the records to the standard output; you can see the outgoing data in your /var/log/messages file.

Treasure Data, Inc
[td-agent-bit]
name = TD Agent Bit
baseurl = http://packages.fluentbit.io/centos/7
gpgcheck=1
gpgkey=http://packages.fluentbit.io/fluentbit.key
enabled=1
$ yum install td-agent-bit
$ service td-agent-bit start
$ service td-agent-bit status
Redirecting to /bin/systemctl status  td-agent-bit.service
● td-agent-bit.service - TD Agent Bit
   Loaded: loaded (/usr/lib/systemd/system/td-agent-bit.service; disabled; vendor preset: disabled)
   Active: active (running) since Thu 2016-07-07 02:08:01 BST; 9s ago
 Main PID: 3820 (td-agent-bit)
   CGroup: /system.slice/td-agent-bit.service
           └─3820 /opt/td-agent-bit/bin/td-agent-bit -c etc/td-agent-bit/td-agent-bit.conf
...
192.168.2.20 - - [28/Jul/2006:10:27:10 -0300] "GET /cgi-bin/try/ HTTP/1.0" 200 3395
{
  "host":    "192.168.2.20",
  "user":    "-",
  "method":  "GET",
  "path":    "/cgi-bin/try/",
  "code":    "200",
  "size":    "3395",
  "referer": "",
  "agent":   ""
 }

Ubuntu Packages

Fluent Bit is distributed as td-agent-bit package and is available for the latest stable Ubuntu system: Xenial Xerus. This stable Fluent Bit distribution package is maintained by Treasure Data, Inc.

Server GPG key

The first step is to add our server GPG key to your keyring so that you can get our signed packages:

Update your sources lists

On Ubuntu, you need to add our APT server entry to your sources list. Please add the following content at the bottom of your /etc/apt/sources.list file:

Ubuntu 18.04 LTS (Bionic Beaver)

Ubuntu 16.04 LTS (Xenial Xerus)

Update your repositories database

Now let your system update the apt database:

Install TD-Agent Bit

Using the following apt-get command you can now install the latest td-agent-bit:

The next step is to instruct systemd to enable the service:

If you do a status check, you should see output similar to this:

The default configuration of td-agent-bit collects CPU usage metrics and sends the records to the standard output; you can see the outgoing data in your /var/log/syslog file.

Scheduler

Fluent Bit has an Engine that helps to coordinate the data ingestion from input plugins and calls the Scheduler to decide when it is time to flush the data through one or multiple output plugins. The Scheduler flushes new data at a fixed interval of seconds and schedules retries when asked.

Once an output plugin gets called to flush some data, after processing that data it can notify the Engine using one of three possible return statuses:

  • OK

  • Retry

  • Error

If the return status was OK, it means the output was successfully able to process and flush the data. If it returned an Error status, it means that an unrecoverable error happened and the engine should not try to flush that data again. If a Retry was requested, the Engine will ask the Scheduler to retry flushing that data; the Scheduler will decide how many seconds to wait before that happens.

Configuring Retries

The Scheduler provides a simple configuration option called Retry_Limit which can be set independently on each output section. This option allows you to disable retries or impose a limit of N tries, discarding the data once that limit is reached:

Example

The following example configures two outputs, where the HTTP plugin has an unlimited number of retries and the Elasticsearch plugin has a limit of 5 retries:

Configuration

Fluent Bit is flexible enough to be configured either from the command line or through a configuration file. For production environments, we strongly recommend using the configuration file approach.

Note that all configuration files use a specific, fixed and strict schema; please proceed to the following sections for a better understanding:

  • File Schema (must read)

  • Configuration Files

Kernel Log Buffer

The kmsg input plugin reads the Linux Kernel log buffer from the beginning; it gets every record and parses fields such as priority, sequence, seconds, useconds and message.

Getting Started

In order to start getting the Linux Kernel messages, you can run the plugin from the command line or through the configuration file:

Command Line

When run from the command line, the plugin processes all messages that the Linux Kernel reports; the output has been truncated for clarity.

Configuration File

In your main configuration file append the following Input & Output sections:

Memory Usage

In certain scenarios it would be ideal to estimate how much memory Fluent Bit could be using; this is very useful for containerized environments where memory limits are a must.

In order to estimate we will assume that the input plugins have set the Mem_Buf_Limit option (you can learn more about it in the Backpressure section).

Estimating

Input plugins append data independently, so in order to do an estimation a limit should be imposed through the Mem_Buf_Limit option. If the limit was set to 10MB, we need to estimate that in the worst case the output plugin could likely use 20MB.

Fluent Bit has an internal binary representation for the data being processed, but when this data reaches an output plugin, that plugin will likely create its own representation in a new memory buffer for processing. The best examples are the InfluxDB and Elasticsearch output plugins; both need to convert the binary representation to their respective custom JSON formats before talking to their backend servers.

So, if we impose a limit of 10MB for the input plugins and consider the worst case scenario of the output plugin consuming 20MB extra, as a minimum we need (30MB x 1.2) = 36MB.
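For instance, such a 10MB limit could be set on an input plugin like this (a sketch; the Tail path below is just an example):

[INPUT]
    Name          tail
    Path          /var/log/app.log
    Mem_Buf_Limit 10MB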

Glibc and Memory Fragmentation

It is well known that in intensive environments where memory allocations happen at a high rate, the default memory allocator provided by Glibc could lead to high fragmentation, reporting a high memory usage by the service.

It's strongly suggested that in any production environment, Fluent Bit should be built with jemalloc enabled (e.g. -DFLB_JEMALLOC=On). Jemalloc is an alternative memory allocator that can reduce fragmentation (among other things), resulting in better performance.
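For reference, enabling the flag at build time follows the same CMake pattern used elsewhere in this documentation (a sketch, run from the build directory):

$ cd build/
$ cmake -DFLB_JEMALLOC=On ../
$ make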

You can check if Fluent Bit has been built with Jemalloc using the following command:

The output should look like:

If the FLB_JEMALLOC option is listed in Build Flags, everything will be fine.

Unit Sizes

Certain configuration directives in Fluent Bit refer to unit sizes, such as when defining the size of a buffer or specific limits; we can find these in plugins like Tail Input, Forward Input, or in generic properties like Mem_Buf_Limit.

Starting from Fluent Bit v0.11.10, all unit sizes have been standardized across the core and plugins; the following table describes the options that can be used and what they mean:

  • No suffix: when a suffix is not specified, it's assumed that the value given is a bytes representation. Example: specifying a value of 32000 means 32000 bytes.
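For example, the following sketch uses the k suffix on the Mem_Buf_Limit property (the plugin and path are used here only as an illustration):

[INPUT]
    Name          tail
    Path          /var/log/app.log
    # 32000 and 32k are equivalent: both mean 32000 bytes
    Mem_Buf_Limit 32k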

Counter

Counter is a very simple plugin that counts how many records it receives upon flush time. The plugin output is as follows:

[TIMESTAMP, NUMBER_OF_RECORDS_NOW] (total = RECORDS_SINCE_IT_STARTED)

Getting Started

You can run the plugin from the command line or through the configuration file:

Command Line

From the command line you can let Fluent Bit count records with the following options:

Configuration File

In your main configuration file append the following Input & Output sections:

Testing

Once Fluent Bit is running, you will see the reports in the output interface similar to this:

Download Sources

Stable

For production systems, we strongly suggest that you always get the latest stable release from our web site. You can get the official tarballs (.tar.gz) from the following link:

http://fluentbit.io/download/

Development

People who aim to contribute to the project by testing or extending the code base can get the development version from our GIT repository:

Note that our master branch is where the development of Fluent Bit happens. Since it's a development version, expect issues when compiling or at run time.

We encourage everybody to help us test every development version; in the end, this is what will become stable.

Build with Static Configuration

Fluent Bit in normal operation mode allows configuration through text files or specific arguments in the command line. While this is the ideal deployment case, there are scenarios where a more restricted configuration is required: static configuration mode.

Static configuration mode aims to include a built-in configuration in the final binary of Fluent Bit, disabling the usage of external files or flags at runtime.

Getting Started

Debian Packages

Fluent Bit is distributed as the td-agent-bit package and is available for the latest stable Debian system: Jessie. This stable Fluent Bit distribution package is maintained by Treasure Data, Inc.

Server GPG key

The first step is to add our server GPG key to your keyring so that you can get our signed packages:

Supported Platforms

The following operating systems and architectures are supported in Fluent Bit.

Upgrade Notes

If you are upgrading from Fluent Bit <= 1.0.x you should take into consideration the following relevant changes when switching to the Fluent Bit v1.1 series:

Kubernetes Filter

We introduced a new configuration property called Kube_Tag_Prefix to help Tag prefix resolution and address an unexpected behavior that landed in previous versions.

During the 1.0.x release cycle, a commit in the Tail input plugin changed the default behavior of how the Tag was composed when using the wildcard for expansion, breaking compatibility with other services. Consider the following configuration example:

Raspberry Pi

Fluent Bit is distributed as the td-agent-bit package and is available for the Raspberry Pi, specifically for Raspbian 8. This stable Fluent Bit distribution package is maintained by Treasure Data, Inc.

Server GPG key

The first step is to add our server GPG key to your keyring so that you can get our signed packages:

Configuration Variables

Fluent Bit supports the usage of environment variables in any value associated to a key when using a configuration file.

The variables are case sensitive and can be used in the following format:

When Fluent Bit starts, the configuration reader will detect any request for ${MY_VARIABLE} and will try to resolve its value.

Example

Create the following configuration file (fluent-bit.conf):

Open a terminal and set the environment variable:

Unit Tests

Fluent Bit comes with some unit test programs that use the library mode to ingest data and test the output. The tests are based on the Google Test suite and require a C++ compiler.

Requirements

In order to build and run the tests, your system needs a C++ compiler and an installed version of Google Test. On Debian/Ubuntu systems the following commands will install the dependencies:

Note that libgtest-dev will only install the sources of the test suite; you need to take some extra steps to make it work:

Random

The Random input plugin generates very simple random value samples using the device interface /dev/urandom; if not available, it will use a Unix timestamp as the value.

Configuration Parameters

The plugin supports the following configuration parameters:

Disk Usage

The disk input plugin gathers information about the disk usage of the running system at a certain interval of time and reports it.

Configuration Parameters

The plugin supports the following configuration parameters:

Routing

Routing is a core feature that allows you to route your data through Filters and finally to one or multiple destinations.

There are two important concepts in Routing:

  • Tag

  • Match

When data is generated by the input plugins, it comes with a Tag (most of the time the Tag is configured manually). The Tag is a human-readable indicator that helps to identify the data source.

Standard Output

The Standard Output Filter plugin allows printing the data received through the input plugin to the standard output.

Configuration Parameters

There are no parameters.

Standard Input

The stdin plugin allows retrieving valid JSON text messages over the standard input interface (stdin). In order to use it, specify the plugin name as the input, e.g.:

As input data, the stdin plugin recognizes the following JSON data formats:

A better example to demonstrate how it works is through a Bash script that generates messages and writes them to Fluent Bit. Write the following content in a file named test.sh:

Give the script execution permission:

Now let's start the script and Fluent Bit in the following way:

Fluent Bit for Developers

Fluent Bit has been designed and built to be used not only as a standalone tool; it can also be embedded in your C or C++ applications. The following section presents details about how you can use it inside your own programs. We assume that you have some basic knowledge of the C language, ideally experience compiling programs on Unix/Linux environments.

$ wget -qO - https://packages.fluentbit.io/fluentbit.key | sudo apt-key add -

  • k, K, KB, kb: Kilobyte, a unit of memory equal to 1,000 bytes. Example: 32k means 32000 bytes.
  • m, M, MB, mb: Megabyte, a unit of memory equal to 1,000,000 bytes. Example: 1M means 1000000 bytes.
  • g, G, GB, gb: Gigabyte, a unit of memory equal to 1,000,000,000 bytes. Example: 1G means 1000000000 bytes.


Operating System: Linux

  • Centos 7 - x86_64
  • Debian 8 (Jessie) - x86_64
  • Debian 9 (Stretch) - x86_64
  • Raspbian 8 (Debian Jessie) - AArch32
  • Raspbian 9 (Debian Stretch) - AArch32
  • Ubuntu 16.04 (Xenial Xerus) - x86_64
  • Ubuntu 18.04 (Bionic Beaver) - x86_64

From an architecture support perspective, Fluent Bit is fully functional on x86, x86_64, AArch32 and AArch64 based processors.

Fluent Bit can also work on OSX and *BSD systems, but not all plugins will be available on all platforms. Official support will be expanded based on community demand.

The accepted values for Retry_Limit are:

  • Retry_Limit N: integer value to set the maximum number of retries allowed. N must be >= 1 (default: 2).
  • Retry_Limit False: no limit is imposed on the number of retries that the Scheduler can perform.

$ bin/fluent-bit -i kmsg -t kernel -o stdout -m '*'
Fluent-Bit v0.8.0
Copyright (C) Treasure Data

[0] kernel: [1463421823, {"priority"=>3, "sequence"=>1814, "sec"=>11706, "usec"=>732233, "msg"=>"ERROR @wl_cfg80211_get_station : Wrong Mac address, mac = 34:a8:4e:d3:40:ec profile =20:3a:07:9e:4a:ac"}]
[1] kernel: [1463421823, {"priority"=>3, "sequence"=>1815, "sec"=>11706, "usec"=>732300, "msg"=>"ERROR @wl_cfg80211_get_station : Wrong Mac address, mac = 34:a8:4e:d3:40:ec profile =20:3a:07:9e:4a:ac"}]
[2] kernel: [1463421829, {"priority"=>3, "sequence"=>1816, "sec"=>11712, "usec"=>729728, "msg"=>"ERROR @wl_cfg80211_get_station : Wrong Mac address, mac = 34:a8:4e:d3:40:ec profile =20:3a:07:9e:4a:ac"}]
[3] kernel: [1463421829, {"priority"=>3, "sequence"=>1817, "sec"=>11712, "usec"=>729802, "msg"=>"ERROR @wl_cfg80211_get_station : Wrong Mac address, mac = 34:a8:4e:d3:40:ec
...
[INPUT]
    Name   kmsg
    Tag    kernel

[OUTPUT]
    Name   stdout
    Match  *
$ bin/fluent-bit -h|grep JEMALLOC
$ fluent-bit -i cpu -o counter
[INPUT]
    Name cpu
    Tag  cpu

[OUTPUT]
    Name  counter
    Match *
$ git clone https://github.com/fluent/fluent-bit
$ fluent-bit -i stdin -o stdout
1. { map => val, map => val, map => val }
2. [ time, { map => val, map => val, map => val } ]
#!/bin/sh

while :; do
  echo -n "{\"key\": \"some value\"}"
  sleep 1
done
$ chmod 755 test.sh

JSON Parser

The JSON parser is the simplest option: if the original log source is a JSON map string, it will take its structure and convert it directly to the internal binary representation.

A simple configuration that can be found in the default parsers configuration file is the entry to parse Docker log files (when the Tail input plugin is used):

[PARSER]
    Name        docker
    Format      json
    Time_Key    time
    Time_Format %Y-%m-%dT%H:%M:%S %z

The following log entry is a valid content for the parser defined above:

{"key1": 12345, "key2": "abc", "time": "2006-07-28T13:22:04Z"}

After processing, its internal representation will be:

[1154103724, {"key1"=>12345, "key2"=>"abc"}]

The time has been converted to Unix timestamp (UTC) and the map reduced to each component of the original message.

Requirements

The following steps assume you are familiar with configuring Fluent Bit using text files and that you have experience building it from scratch as described in the Build and Install section.

Configuration Directory

In your file system, prepare a specific directory that will be used as an entry point for the build system to look up and parse the configuration files. It is mandatory that this directory contains at a minimum one configuration file called fluent-bit.conf with the required SERVICE, INPUT and OUTPUT sections. As an example, create a new fluent-bit.conf file with the following content:

The configuration provided above will calculate CPU metrics from the running system and print them to the standard output interface.

Build with Custom Configuration

Inside the Fluent Bit source code, get into the build/ directory and run CMake appending the FLB_STATIC_CONF option pointing to the configuration directory recently created, e.g.:

Then build it:

At this point the generated fluent-bit binary is ready to run without the need for further configuration:

Update your sources lists

On Debian, you need to add our APT server entry to your sources list. Please add the following content at the bottom of your /etc/apt/sources.list file:

Debian 9 (Stretch)

Debian 8 (Jessie)

Update your repositories database

Now let your system update the apt database:

Install TD-Agent Bit

Using the following apt-get command you can now install the latest td-agent-bit:

The next step is to instruct systemd to enable the service:

If you do a status check, you should see output similar to this:

The default configuration of td-agent-bit collects CPU usage metrics and sends the records to the standard output; you can see the outgoing data in your /var/log/syslog file.

The expected behavior is that Tag will be expanded to:

but the change introduced in 1.0 series switched from absolute path to the base file name only:

In the Fluent Bit v1.1 release we restored the default behavior, and now the Tag is composed using the absolute path of the monitored file.

Having the absolute path in the Tag is relevant for routing and flexible configuration, and it also helps to keep compatibility with Fluentd behavior.

This behavior switch in the Tail input plugin affects how the Kubernetes Filter operates. As you know, when the filter is used it needs to perform local metadata lookups that come from the file names when using Tail as a source. Now, with the new Kube_Tag_Prefix option, you can specify the prefix used in the Tail input plugin; for the configuration example above, the new configuration will look as follows:

So the proper Kube_Tag_Prefix value must be composed of the Tag prefix set in the Tail input plugin plus the monitored directory path converted by replacing slashes with dots.

[INPUT]
    Name  tail
    Path  /var/log/containers/*.log
    Tag   kube.*
kube.var.log.containers.apache.log
kube.apache.log
Update your sources lists

On Debian and derived systems such as Raspbian, you need to add our APT server entry to your sources list. Please add the following content at the bottom of your /etc/apt/sources.list file:

Raspbian 9 (Stretch)

Raspbian 8 (Jessie)

Update your repositories database

Now let your system update the apt database:

Install TD-Agent Bit

Using the following apt-get command you can now install the latest td-agent-bit:

The next step is to instruct systemd to enable the service:

If you do a status check, you should see output similar to this:

The default configuration of td-agent-bit collects CPU usage metrics and sends the records to the standard output; you can see the outgoing data in your /var/log/syslog file.


The above command sets the value 'stdout' to the variable MY_OUTPUT.

Run Fluent Bit with the recently created configuration file:

As you can see the service worked properly as the configuration was valid.

${MY_VARIABLE}
[SERVICE]
    Flush        1
    Daemon       Off
    Log_Level    info

[INPUT]
    Name cpu
    Tag  cpu.local

[OUTPUT]
    Name  ${MY_OUTPUT}
    Match *
$ export MY_OUTPUT=stdout
Enable Tests

By default Fluent Bit has the tests disabled; you need to append the FLB_TESTS option to your cmake line, e.g.:

Running Tests

To run the tests just issue the following command:

$ sudo apt-get install g++ libgtest-dev

  • Samples: if set, the plugin will only generate a specific number of samples. By default this value is set to -1, which will generate unlimited samples.
  • Interval_Sec: interval in seconds between samples generation. Default value is 1.
  • Interval_NSec: specify a nanoseconds interval for samples generation; it works in conjunction with the Interval_Sec configuration key. Default value is 0.

Getting Started

In order to start generating random samples, you can run the plugin from the command line or through the configuration file:

Command Line

From the command line you can let Fluent Bit generate the samples with the following options:

Configuration File

In your main configuration file append the following Input & Output sections:

Testing

Once Fluent Bit is running, you will see the reports in the output interface similar to this:

  • Interval_Sec: polling interval (seconds). Default: 1
  • Interval_NSec: polling interval (nanoseconds). Default: 0
  • Dev_Name: device name to limit the target (e.g. sda). If not set, in_disk gathers information from all disks and partitions.

Getting Started

In order to get disk usage from your system, you can run the plugin from the command line or through the configuration file:

Command Line

Configuration File

In your main configuration file append the following Input & Output sections:

Note: Total interval (sec) = Interval_Sec + (Interval_Nsec / 1000000000).

e.g. 1.5s = 1s + 500000000ns

Getting Started

In order to start filtering records, you can run the filter from the command line or through the configuration file.

Command Line

Configuration File

In your main configuration file append the following FILTER sections:

$ fluent-bit -i cpu -t cpu.local -F stdout -m '*' -o null -m '*'
[INPUT]
    Name cpu
    Tag  cpu.local

[FILTER]
    Name  stdout
    Match *

[OUTPUT]
    Name  null
    Match *
deb https://packages.fluentbit.io/ubuntu/bionic bionic main
deb https://packages.fluentbit.io/ubuntu/xenial xenial main
$ sudo apt-get update
$ sudo apt-get install td-agent-bit
$ sudo service td-agent-bit start
sudo service td-agent-bit status
● td-agent-bit.service - TD Agent Bit
   Loaded: loaded (/lib/systemd/system/td-agent-bit.service; disabled; vendor preset: enabled)
   Active: active (running) since mié 2016-07-06 16:58:25 CST; 2h 45min ago
 Main PID: 6739 (td-agent-bit)
    Tasks: 1
   Memory: 656.0K
      CPU: 1.393s
   CGroup: /system.slice/td-agent-bit.service
           └─6739 /opt/td-agent-bit/bin/td-agent-bit -c /etc/td-agent-bit/td-agent-bit.conf
...
[OUTPUT]
    Name        http
    Host        192.168.5.6
    Port        8080
    Retry_Limit False

[OUTPUT]
    Name            es
    Host            192.168.5.20
    Port            9200
    Logstash_Format On
    Retry_Limit     5
Build Flags =  JSMN_PARENT_LINKS JSMN_STRICT FLB_HAVE_TLS FLB_HAVE_SQLDB
FLB_HAVE_TRACE FLB_HAVE_FLUSH_LIBCO FLB_HAVE_VALGRIND FLB_HAVE_FORK
FLB_HAVE_PROXY_GO FLB_HAVE_JEMALLOC JEMALLOC_MANGLE FLB_HAVE_REGEX
FLB_HAVE_C_TLS FLB_HAVE_SETJMP FLB_HAVE_ACCEPT4 FLB_HAVE_INOTIFY
$ bin/fluent-bit -i cpu -o counter -f 1
Fluent-Bit v0.12.0
Copyright (C) Treasure Data

[2017/07/19 11:19:02] [ info] [engine] started
1500484743,1 (total = 1)
1500484744,1 (total = 2)
1500484745,1 (total = 3)
1500484746,1 (total = 4)
1500484747,1 (total = 5)
$ ./test.sh | fluent-bit -i stdin -o stdout
Fluent-Bit v0.9.0
Copyright (C) Treasure Data

[2016/10/07 21:44:46] [ info] [engine] started
[0] stdin.0: [1475898286, {"key"=>"some value"}]
[1] stdin.0: [1475898287, {"key"=>"some value"}]
[2] stdin.0: [1475898288, {"key"=>"some value"}]
[3] stdin.0: [1475898289, {"key"=>"some value"}]
[4] stdin.0: [1475898290, {"key"=>"some value"}]
[SERVICE]
    Flush     1
    Daemon    off
    Log_Level info

[INPUT]
    Name      cpu

[OUTPUT]
    Name      stdout
    Match     *
$ cd fluent-bit/build/
$ cmake -DFLB_STATIC_CONF=/path/to/my/confdir/
$ make
$ bin/fluent-bit 
Fluent-Bit v0.15.0
Copyright (C) Treasure Data

[2018/10/19 15:32:31] [ info] [engine] started (pid=15186)
[0] cpu.local: [1539984752.000347547, {"cpu_p"=>0.750000, "user_p"=>0.500000, "system_p"=>0.250000, "cpu0.p_cpu"=>1.000000, "cpu0.p_user"=>1.000000, "cpu0.p_system"=>0.000000, "cpu1.p_cpu"=>0.000000, "cpu1.p_user"=>0.000000, "cpu1.p_system"=>0.000000, "cpu2.p_cpu"=>0.000000, "cpu2.p_user"=>0.000000, "cpu2.p_system"=>0.000000, "cpu3.p_cpu"=>1.000000, "cpu3.p_user"=>1.000000, "cpu3.p_system"=>0.000000}]
$ wget -qO - https://packages.fluentbit.io/fluentbit.key | sudo apt-key add -
deb https://packages.fluentbit.io/debian/stretch stretch main
deb https://packages.fluentbit.io/debian/jessie jessie main
$ sudo apt-get update
$ sudo apt-get install td-agent-bit
$ sudo service td-agent-bit start
sudo service td-agent-bit status
● td-agent-bit.service - TD Agent Bit
   Loaded: loaded (/lib/systemd/system/td-agent-bit.service; disabled; vendor preset: enabled)
   Active: active (running) since mié 2016-07-06 16:58:25 CST; 2h 45min ago
 Main PID: 6739 (td-agent-bit)
    Tasks: 1
   Memory: 656.0K
      CPU: 1.393s
   CGroup: /system.slice/td-agent-bit.service
           └─6739 /opt/td-agent-bit/bin/td-agent-bit -c /etc/td-agent-bit/td-agent-bit.conf
...
[INPUT]
    Name  tail
    Path  /var/log/containers/*.log
    Tag   kube.*

[FILTER]
    Name             kubernetes
    Match            *
    Kube_Tag_Prefix  kube.var.log.containers.
$ wget -qO - https://packages.fluentbit.io/fluentbit.key | sudo apt-key add -
deb https://packages.fluentbit.io/raspbian/stretch stretch main
deb https://packages.fluentbit.io/raspbian/jessie jessie main
$ sudo apt-get update
$ sudo apt-get install td-agent-bit
$ sudo service td-agent-bit start
sudo service td-agent-bit status
● td-agent-bit.service - TD Agent Bit
   Loaded: loaded (/lib/systemd/system/td-agent-bit.service; disabled; vendor preset: enabled)
   Active: active (running) since mié 2016-07-06 16:58:25 CST; 2h 45min ago
 Main PID: 6739 (td-agent-bit)
    Tasks: 1
   Memory: 656.0K
      CPU: 1.393s
   CGroup: /system.slice/td-agent-bit.service
           └─6739 /opt/td-agent-bit/bin/td-agent-bit -c /etc/td-agent-bit/td-agent-bit.conf
...
$ bin/fluent-bit -c fluent-bit.conf
Fluent-Bit v0.11.0
Copyright (C) Treasure Data

[2017/04/03 12:25:25] [ info] [engine] started
[0] cpu.local: [1491243925, {"cpu_p"=>1.750000, "user_p"=>1.750000, "system_p"=>0.000000, "cpu0.p_cpu"=>3.000000, "cpu0.p_user"=>2.000000, "cpu0.p_system"=>1.000000, "cpu1.p_cpu"=>0.000000, "cpu1.p_user"=>0.000000, "cpu1.p_system"=>0.000000, "cpu2.p_cpu"=>4.000000, "cpu2.p_user"=>4.000000, "cpu2.p_system"=>0.000000, "cpu3.p_cpu"=>1.000000, "cpu3.p_user"=>1.000000, "cpu3.p_system"=>0.000000}]
$ cd /usr/src/gtest
$ sudo cmake .
$ sudo make
$ sudo cp libg* /usr/lib/
$ cd build/
$ cmake -DFLB_TESTS=ON ../
$ make test
$ fluent-bit -i random -o stdout
[INPUT]
    Name          random
    Samples      -1
    Interval_Sec  1
    Interval_NSec 0

[OUTPUT]
    Name   stdout
    Match  *
$ fluent-bit -i random -o stdout
Fluent-Bit v0.9.0
Copyright (C) Treasure Data

[2016/10/07 20:27:34] [ info] [engine] started
[0] random.0: [1475893654, {"rand_value"=>1863375102915681408}]
[1] random.0: [1475893655, {"rand_value"=>425675645790600970}]
[2] random.0: [1475893656, {"rand_value"=>7580417447354808203}]
[3] random.0: [1475893657, {"rand_value"=>1501010137543905482}]
[4] random.0: [1475893658, {"rand_value"=>16238242822364375212}]
$ fluent-bit -i disk -o stdout
Fluent-Bit v0.11.0
Copyright (C) Treasure Data

[2017/01/28 16:58:16] [ info] [engine] started
[0] disk.0: [1485590297, {"read_size"=>0, "write_size"=>0}]
[1] disk.0: [1485590298, {"read_size"=>0, "write_size"=>0}]
[2] disk.0: [1485590299, {"read_size"=>0, "write_size"=>0}]
[3] disk.0: [1485590300, {"read_size"=>0, "write_size"=>11997184}]
[INPUT]
    Name          disk
    Tag           disk
    Interval_Sec  1
    Interval_NSec 0
[OUTPUT]
    Name   stdout
    Match  *

In order to define where the data should be routed, a Match rule must be specified in the output configuration.

Consider the following configuration example that aims to deliver CPU metrics to an Elasticsearch database and Memory metrics to the standard output interface:

Note: the above is a simple example demonstrating how Routing is configured.

Routing works automatically reading the Input Tags and the Output Match rules. If some data has a Tag that doesn't match upon routing time, the data is deleted.

Routing with Wildcard

Routing is flexible enough to support wildcards in the Match pattern. The example below defines a common destination for both sources of data:

The match rule is set to my_* which means it will match any Tag that starts with my_.

[INPUT]
    Name cpu
    Tag  my_cpu

[INPUT]
    Name mem
    Tag  my_mem

[OUTPUT]
    Name   es
    Match  my_cpu

[OUTPUT]
    Name   stdout
    Match  my_mem
[INPUT]
    Name cpu
    Tag  my_cpu

[INPUT]
    Name mem
    Tag  my_mem

[OUTPUT]
    Name   stdout
    Match  my_*

Fluentd & Fluent Bit

Data collection matters, and nowadays the scenarios where information can come from are highly variable. Hence, to be more flexible for certain market needs, we may need different options. On this page, we will describe the relationship between the Fluentd and Fluent Bit open source projects.

Fluentd and Fluent Bit projects are both created and sponsored by Treasure Data and they aim to solve the collection, processing, and delivery of Logs.

Both projects share a lot of similarities; Fluent Bit is fully based on the design and experience of Fluentd's architecture and general design. Choosing which one to use depends on the final needs; from an architecture perspective we can consider:

  • Fluentd is a log collector, processor, and aggregator.

  • Fluent Bit is a log collector and processor (it doesn't have strong aggregation features like Fluentd).

The following table describes a comparison in different areas of the projects:

Considering Fluentd mainly as an Aggregator and Fluent Bit as a Log Forwarder, we can see that both projects complement each other, providing a full and reliable solution.

TD Agent Bit

We distribute Fluent Bit as packages for specific Enterprise Linux distributions under the name of td-agent-bit. These packages are maintained by Treasure Data, Inc.

The following distributions are supported:

  • Ubuntu 18.04 (Bionic Beaver)
  • Ubuntu 16.04 (Xenial Xerus)
  • Debian 9 (Stretch)
  • Debian 8 (Jessie)
  • Raspbian 8 (Jessie)
  • CentOS 7

Configuration Schema

Fluent Bit may optionally use a configuration file to define how the service will behave, and before proceeding we need to understand how the configuration schema works. The schema is defined by three concepts:

  • Sections

  • Entries: Key/Value

  • Indented Configuration Mode

A simple example of a configuration file is as follows:

Sections

A section is defined by a name or title inside brackets. Looking at the example above, a Service section has been set using [SERVICE] definition. Section rules:

  • All section content must be indented (4 spaces ideally).

  • Multiple sections can exist on the same file.

  • A section is expected to have comments and entries; it cannot be empty.

  • Any commented line under a section must be indented too.

Entries: Key/Value

A section may contain Entries; an entry is defined by a line of text that contains a Key and a Value. Using the above example, the [SERVICE] section contains two entries: one is the key Daemon with value off, and the other is the key Log_Level with the value debug. Entries rules:

  • An entry is defined by a key and a value.

  • A key must be indented.

  • A key must have a value, which ends at the line break.

  • Multiple keys with the same name can exist.

Commented lines are set by prefixing them with the # character; those lines are not processed but they must be indented too.

Indented Configuration Mode

Fluent Bit configuration files are based on a strict Indented Mode, which means that each configuration file must follow the same pattern of alignment from left to right when writing text. By default, an indentation level of four spaces from left to right is suggested. Example:

As you can see there are two sections with multiple entries and comments; note also that empty lines are allowed and they do not need to be indented.

Memory Usage

The mem input plugin gathers information about the memory and swap usage of the running system at a certain interval of time and reports the total amount of memory and the amount of free memory available.

Getting Started

In order to get memory and swap usage from your system, you can run the plugin from the command line or through the configuration file:

Command Line

Configuration File

In your main configuration file append the following Input & Output sections:

Getting Started

Fluent Bit is a straightforward tool, and to get started with it we need to understand its basic workflow. Consider the following diagram as a global overview of it:

Fluent Bit Workflow

  • Input: the entry point of data. Implemented through Input Plugins, this interface allows gathering or receiving data, e.g. log file content, data over TCP, built-in metrics, etc.
  • Parser: Parsers allow converting unstructured data gathered from the Input interface into structured data. Parsers are optional and depend on Input plugins.
  • Filter: the filtering mechanism allows altering the data ingested by the Input plugins. Filters are implemented as plugins.

Dummy

The dummy input plugin generates dummy events. It is useful for testing, debugging, benchmarking and getting started with Fluent Bit.

Configuration Parameters

The plugin supports the following configuration parameters:

Getting Started

You can run the plugin from the command line or through the configuration file:

Command Line

Configuration File

In your main configuration file append the following Input & Output sections:

MQTT

The MQTT input plugin allows retrieving messages/data from MQTT control packets over a TCP connection. The incoming data to receive must be a JSON map.

Configuration Parameters

The plugin supports the following configuration parameters:

Key

Description

Getting Started

In order to start listening for MQTT messages, you can run the plugin from the command line or through the configuration file:

Command Line

Since the MQTT input plugin lets Fluent Bit behave as a server, we need to dispatch some messages using an MQTT client; in the following example the mosquitto tool is used for this purpose:

The following command line will send a message to the MQTT input plugin:

Configuration File

In your main configuration file append the following Input & Output sections:

Null

The null output plugin just throws away events.

Configuration Parameters

The plugin doesn't support configuration parameters.

Getting Started

You can run the plugin from the command line or through the configuration file:

Command Line

From the command line you can let Fluent Bit throw away events with the following options:

Configuration File

In your main configuration file append the following Input & Output sections:

Filter

In production environments we want to have full control of the data we are collecting; filtering is an important feature that allows us to alter the data before delivering it to some destination.

Filtering is implemented through plugins, so each filter available could be used to match, exclude or enrich your logs with some specific metadata.

Very similar to the input plugins, Filters run in an instance context, which has its own independent configuration. Configuration keys are often called properties.

For more details about the Filters available and their usage, please refer to the Filters section.

Yocto Project

The Fluent Bit source code provides Bitbake recipes to configure, build and package the software for a Yocto based image. Note that the specific steps for using these recipes in your Yocto environment (Poky) are out of the scope of this documentation.

We distribute two main recipes, one for testing/dev purposes and the other with the latest stable release.

TLS / SSL

Fluent Bit provides integrated support for Transport Layer Security (TLS) and its predecessor Secure Sockets Layer (SSL). In this section we will refer to both implementations as TLS.

Each output plugin that needs to perform network I/O can optionally enable TLS and configure its behavior. The following table describes the properties available:

Docker Images

Fluent Bit container images are available on Docker Hub, ready for production usage. Our stable images focus on security, containing just the Fluent Bit binary, minimal system libraries and a basic configuration.

Optionally, we provide debug images which contain Busybox and can be used for troubleshooting or testing purposes.

The following table describes the tags that are available on the repository:

Configuration Commands

Configuration files must be flexible enough for any deployment need, but they must keep a clean and readable format.

Fluent Bit Commands extend a configuration file with specific built-in features. The commands available as of the Fluent Bit 0.12 series are:

Grep

The Grep Filter plugin allows matching or excluding specific records based on regular expression patterns.

Configuration Parameters

The plugin supports the following configuration parameters:

Health

The Health input plugin allows you to check how healthy a TCP server is. It does the check by issuing a TCP connection at a certain interval of time.

Configuration Parameters

The plugin supports the following configuration parameters:

Network Traffic

The netif input plugin gathers information about the network traffic of the running system at a certain interval of time and reports it.

Configuration Parameters

The plugin supports the following configuration parameters:

Azure

The Azure output plugin allows ingesting your records into the Azure Log Analytics service.

To get more details about how to set up Azure Log Analytics, please refer to the Azure Log Analytics documentation.

Configuration Parameters

Backpressure

In certain environments it is common to see that logs or data being ingested are faster than the ability to flush them to some destinations. A common case is reading from big log files and dispatching the logs to a backend over the network, which takes some time to respond; this generates backpressure, leading to high memory consumption in the service.

In order to avoid backpressure, Fluent Bit implements a mechanism in the engine that restricts the amount of data that an input plugin can ingest; this is done through the configuration parameter Mem_Buf_Limit.

Mem_Buf_Limit

This option is disabled by default and can be applied to all input plugins. Let's explain its behavior using the following scenario:

Exec

The exec input plugin allows executing an external program and collecting event logs.

Configuration Parameters

The plugin supports the following configuration parameters:

Standard Output

The stdout output plugin allows printing the data received through the input plugin to the standard output. Its usage is very simple, as follows:

Configuration Parameters

Mem_Buf_Limit is set to 1MB (one megabyte)

  • input plugin tries to append 700KB

  • engine routes the data to an output plugin

  • output plugin backend (HTTP Server) is down

  • engine scheduler will retry the flush after 10 seconds

  • input plugin tries to append 500KB

  • At this exact point, the engine will allow those 500KB of data to be appended; in total we have 1.2MB. The option works in a permissive mode until it reaches the limit, but once the limit is exceeded the following actions are taken:

    • block local buffers for the input plugin (cannot append more data)

    • notify the input plugin invoking a pause callback

    The engine will protect itself and will not append more data coming from the input plugin in question; note that it is the plugin's responsibility to keep its state and take decisions about what to do in that paused state.

    After some seconds, if the scheduler was able to flush the initial 700KB of data or it gave up after retrying, that amount of memory is released and internally the following actions happen:

    • Upon data buffer release (700KB), the internal counters get updated

    • Counters now are set at 500KB

    • Since 500KB is < 1MB it checks the input plugin state

    • If the plugin is paused, it invokes a resume callback

    • input plugin can continue appending more data

    About pause and resume Callbacks

    Each plugin is independent and not all of them implement the pause and resume callbacks. As said, these callbacks are just a notification mechanism for the plugin.

    One plugin that implements these callbacks and keeps a good state is the Tail input plugin. When the pause callback is triggered, it stops its collectors and stops appending data. Upon resume, it re-enables the collectors.

    $ fluent-bit -i mem -t memory -o stdout -m '*'
    Fluent-Bit v0.11.0
    Copyright (C) Treasure Data
    
    [2017/03/03 21:12:35] [ info] [engine] started
    [0] memory: [1488543156, {"Mem.total"=>1016044, "Mem.used"=>841388, "Mem.free"=>174656, "Swap.total"=>2064380, "Swap.used"=>139888, "Swap.free"=>1924492}]
    [1] memory: [1488543157, {"Mem.total"=>1016044, "Mem.used"=>841420, "Mem.free"=>174624, "Swap.total"=>2064380, "Swap.used"=>139888, "Swap.free"=>1924492}]
    [2] memory: [1488543158, {"Mem.total"=>1016044, "Mem.used"=>841420, "Mem.free"=>174624, "Swap.total"=>2064380, "Swap.used"=>139888, "Swap.free"=>1924492}]
    [3] memory: [1488543159, {"Mem.total"=>1016044, "Mem.used"=>841420, "Mem.free"=>174624, "Swap.total"=>2064380, "Swap.used"=>139888, "Swap.free"=>1924492}]
    [INPUT]
        Name   mem
        Tag    memory
    
    [OUTPUT]
        Name   stdout
        Match  *

    Dummy input plugin parameters:

      • Dummy: dummy JSON record. Default: {"message":"dummy"}
      • Rate: number of events generated per second. Default: 1

    MQTT input plugin parameters:

      • Listen: listener network interface. Default: 0.0.0.0
      • Port: TCP port where to listen for connections. Default: 1883

    $ fluent-bit -i cpu -o null
    [INPUT]
        Name cpu
        Tag  cpu
    
    [OUTPUT]
        Name null
        Match *

    The TLS properties are:

      • tls: enable or disable TLS support. Default: Off
      • tls.verify: force certificate validation. Default: On
      • tls.debug: set TLS debug verbosity level. It accepts the following values: 0 (No debug), 1 (Error), 2 (State change), 3 (Informational) and 4 (Verbose). Default: 1
      • tls.ca_file: absolute path to CA certificate file.
      • tls.ca_path: absolute path to scan for certificate files.
      • tls.crt_file: absolute path to Certificate file.
      • tls.key_file: absolute path to private Key file.
      • tls.key_passwd: optional password for the tls.key_file file.

    The listed properties can be enabled in the configuration file, specifically on each output plugin section, or directly through the command line. The following output plugins can take advantage of the TLS feature:

      • Elasticsearch

      • Forward

      • HTTP

      • Splunk

    Example: enable TLS on HTTP output

    By default the HTTP output plugin uses plain TCP; enabling TLS from the command line can be done with:

    In the command line above, the two properties tls and tls.verify were enabled for demonstration purposes (we strongly suggest always keeping verification On).

    The same behavior can be accomplished using a configuration file:

    @INCLUDE Command

    Configuring a logging pipeline might lead to an extensive configuration file. In order to maintain a human-readable configuration, it's suggested to split the configuration in multiple files.

    The @INCLUDE command allows the configuration reader to include an external configuration file, e.g:

    The above example defines the main service configuration file and also includes two files to continue the configuration:

    inputs.conf

    outputs.conf

    Note that despite the order of inclusion, Fluent Bit will ALWAYS respect the following order:

    • Service

    • Inputs

    • Filters

    • Outputs

    @SET Command

    Fluent Bit supports configuration variables; one way to expose these variables to Fluent Bit is by setting a Shell environment variable, the other is through the @SET command.

    The @SET command can only be used at the root level of a line, meaning it cannot be used inside a section, e.g.:

    The available commands are:

      • @INCLUDE FILE: include a configuration file.
      • @SET KEY=VAL: set a configuration variable.

      • Regex FIELD REGEX: keep records in which the content of FIELD matches the regular expression.
      • Exclude FIELD REGEX: exclude records in which the content of FIELD matches the regular expression.

    Getting Started

    In order to start filtering records, you can run the filter from the command line or through the configuration file. The following example assumes that you have a file called lines.txt with the following content:

    Command Line

    Note: using the command line mode requires special attention to quote the regular expressions properly. It's suggested to use a configuration file.

    The following command will load the tail plugin and read the content of the lines.txt file. Then the grep filter will apply a regular expression rule over the log field (created by the tail plugin) and only pass records whose field value starts with aa:

    Configuration File

    The filter allows using multiple rules which are applied in order; you can have as many Regex and Exclude entries as required.
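    A sketch combining one Regex and one Exclude rule over the same log field (re-using the lines.txt example above) could look like this:

    [INPUT]
        Name   tail
        Path   lines.txt

    [FILTER]
        Name    grep
        Match   *
        Regex   log aa
        Exclude log aaa

    [OUTPUT]
        Name   stdout
        Match  *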

      • Host: name of the target host or IP address to check.
      • Port: TCP port where to perform the connection check.
      • Interval_Sec: interval in seconds between the service checks. Default value is 1.
      • Interval_NSec: specify a nanoseconds interval for service checks; it works in conjunction with the Interval_Sec configuration key. Default value is 0.
      • Alert: if enabled, it will only generate messages if the target TCP service is down. By default this option is disabled.
      • Add_Host: if enabled, the hostname is appended to each record. Default value is false.
      • Add_Port: if enabled, the port number is appended to each record. Default value is false.

    Getting Started

    In order to start performing the checks, you can run the plugin from the command line or through the configuration file:

    Command Line

    From the command line you can let Fluent Bit generate the checks with the following options:

    Configuration File

    In your main configuration file append the following Input & Output sections:

    Testing

    Once Fluent Bit is running, you will see some random values in the output interface similar to this:

      • Interface: specify the network interface to monitor, e.g. eth0.
      • Interval_Sec: polling interval (seconds). Default: 1
      • Interval_NSec: polling interval (nanoseconds). Default: 0
      • Verbose: if true, gather metrics precisely. Default: false

    Getting Started

    In order to monitor network traffic from your system, you can run the plugin from the command line or through the configuration file:

    Command Line

    Configuration File

    In your main configuration file append the following Input & Output sections:
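    A minimal sketch, assuming eth0 as the interface to monitor, could be:

    [INPUT]
        Name      netif
        Tag       netif
        Interface eth0

    [OUTPUT]
        Name   stdout
        Match  *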

    Note: Total interval (sec) = Interval_Sec + (Interval_Nsec / 1000000000).

    e.g. 1.5s = 1s + 500000000ns

      • Customer_ID: customer ID or WorkspaceID string.
      • Shared_Key: the primary or the secondary Connected Sources client authentication key.
      • Log_Type: the name of the event type. Default: fluentbit

    Getting Started

    In order to insert records into Azure Log Analytics, you can run the plugin from the command line or through the configuration file:

    Command Line

    The azure plugin can read the parameters from the command line through the -p argument (property), e.g.:

    Configuration File

    In your main configuration file append the following Input & Output sections:
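    A hedged sketch (the Customer_ID and Shared_Key values below are placeholders, not real credentials):

    [INPUT]
        Name  cpu

    [OUTPUT]
        Name        azure
        Match       *
        Customer_ID your-workspace-id
        Shared_Key  your-shared-key
        Log_Type    fluentbit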

      • Command: the command to execute.
      • Parser: specify the name of a parser to interpret the entry as a structured message.
      • Interval_Sec: polling interval (seconds).
      • Interval_NSec: polling interval (nanoseconds).

    Getting Started

    You can run the plugin from the command line or through the configuration file:

    Command Line

    The following example will read events from the output of ls.

    Configuration File

    In your main configuration file append the following Input & Output sections:

Format: Specify the data format to be printed. Supported formats are msgpack and json_lines. Default: msgpack.
json_date_key: Specify the name of the date field in the output. Default: date.
json_date_format: Specify the format of the date. Supported formats are double and iso8601 (e.g. 2018-05-30T09:39:52.000681Z). Default: double.

    Command Line

    We have specified to gather CPU usage metrics and print them out to the standard output in a human readable way:

    No more, no less, it just works.


    $ bin/fluent-bit -i cpu -o stdout -v
    [SERVICE]
        # This is a commented line
        Daemon    off
        log_level debug
    [FIRST_SECTION]
        # This is a commented line
        Key1  some value
        Key2  another value
        # more comments
    
    [SECOND_SECTION]
        KeyN  3.14
    $ fluent-bit -i dummy -o stdout
    Fluent-Bit v0.12.0
    Copyright (C) Treasure Data
    
    [2017/07/06 21:55:29] [ info] [engine] started
    [0] dummy.0: [1499345730.015265366, {"message"=>"dummy"}]
    [1] dummy.0: [1499345731.002371371, {"message"=>"dummy"}]
    [2] dummy.0: [1499345732.000267932, {"message"=>"dummy"}]
    [3] dummy.0: [1499345733.000757746, {"message"=>"dummy"}]
    [INPUT]
        Name   dummy
        Tag    dummy.log
    
    [OUTPUT]
        Name   stdout
        Match  *
    $ fluent-bit -i mqtt -t data -o stdout -m '*'
    Fluent-Bit v0.8.0
    Copyright (C) Treasure Data
    
    [2016/05/20 14:22:52] [ info] starting engine
    [0] data: [1463775773, {"topic"=>"some/topic", "key1"=>123, "key2"=>456}]
    $ mosquitto_pub  -m '{"key1": 123, "key2": 456}' -t some/topic
    [INPUT]
        Name   mqtt
        Tag    data
        Listen 0.0.0.0
        Port   1883
    
    [OUTPUT]
        Name   stdout
        Match  *
    $ fluent-bit -i cpu -t cpu -o http://192.168.2.3:80/something \
        -p tls=on         \
        -p tls.verify=off \
        -m '*'
    [INPUT]
        Name  cpu
        Tag   cpu
    
    [OUTPUT]
        Name       http
        Match      *
        Host       192.168.2.3
        Port       80
        URI        /something
        tls        On
        tls.verify Off
    [SERVICE]
        Flush 1
    
    @INCLUDE inputs.conf
    @INCLUDE outputs.conf
    [INPUT]
        Name cpu
        Tag  mycpu
    
    [INPUT]
        Name tail
        Path /var/log/*.log
        Tag  varlog.*
    [OUTPUT]
        Name   stdout
        Match  mycpu
    
    [OUTPUT]
        Name            es
        Match           varlog.*
        Host            127.0.0.1
        Port            9200
        Logstash_Format On
    @SET my_input=cpu
    @SET my_output=stdout
    
    [SERVICE]
        Flush 1
    
    [INPUT]
        Name ${my_input}
    
    [OUTPUT]
        Name ${my_output}
    aaa
    aab
    bbb
    ccc
    ddd
    eee
    fff
    ggg
    $ bin/fluent-bit -i tail -p 'path=lines.txt' -F grep -p 'regex=log aa' -m '*' -o stdout
    [INPUT]
        Name   tail
        Path   lines.txt
    
    [FILTER]
        Name   grep
        Match  *
        Regex  log aa
    
    [OUTPUT]
        Name   stdout
        Match  *
    $ fluent-bit -i health://127.0.0.1:80 -o stdout
    [INPUT]
        Name          health
        Host          127.0.0.1
        Port          80
        Interval_Sec  1
        Interval_NSec 0
    
    [OUTPUT]
        Name   stdout
        Match  *
    $ fluent-bit -i health://127.0.0.1:80 -o stdout
    Fluent-Bit v0.9.0
    Copyright (C) Treasure Data
    
    [2016/10/07 21:37:51] [ info] [engine] started
    [0] health.0: [1475897871, {"alive"=>true}]
    [1] health.0: [1475897872, {"alive"=>true}]
    [2] health.0: [1475897873, {"alive"=>true}]
    [3] health.0: [1475897874, {"alive"=>true}]
    $ bin/fluent-bit -i netif -p interface=eth0 -o stdout
    Fluent-Bit v0.12.0
    Copyright (C) Treasure Data
    
    [2017/07/08 23:34:18] [ info] [engine] started
    [0] netif.0: [1499524459.001698260, {"eth0.rx.bytes"=>89769869, "eth0.rx.packets"=>73357, "eth0.rx.errors"=>0, "eth0.tx.bytes"=>4256474, "eth0.tx.packets"=>24293, "eth0.tx.errors"=>0}]
    [1] netif.0: [1499524460.002541885, {"eth0.rx.bytes"=>98, "eth0.rx.packets"=>1, "eth0.rx.errors"=>0, "eth0.tx.bytes"=>98, "eth0.tx.packets"=>1, "eth0.tx.errors"=>0}]
    [2] netif.0: [1499524461.001142161, {"eth0.rx.bytes"=>98, "eth0.rx.packets"=>1, "eth0.rx.errors"=>0, "eth0.tx.bytes"=>98, "eth0.tx.packets"=>1, "eth0.tx.errors"=>0}]
    [3] netif.0: [1499524462.002612971, {"eth0.rx.bytes"=>98, "eth0.rx.packets"=>1, "eth0.rx.errors"=>0, "eth0.tx.bytes"=>98, "eth0.tx.packets"=>1, "eth0.tx.errors"=>0}]
    [INPUT]
        Name          netif
        Tag           netif
        Interval_Sec  1
        Interval_NSec 0
        Interface     eth0
    [OUTPUT]
        Name   stdout
        Match  *
    $ fluent-bit -i cpu -o azure -p customer_id=abc -p shared_key=def -m '*' -f 1
    [INPUT]
        Name  cpu
    
    [OUTPUT]
        Name        azure
        Match       *
        Customer_ID abc
        Shared_Key  def
    $ fluent-bit -i exec -p 'command=ls /var/log' -o stdout
    Fluent-Bit v0.13.0
    Copyright (C) Treasure Data
    
    [2018/03/21 17:46:49] [ info] [engine] started
    [0] exec.0: [1521622010.013470159, {"exec"=>"ConsoleKit"}]
    [1] exec.0: [1521622010.013490313, {"exec"=>"Xorg.0.log"}]
    [2] exec.0: [1521622010.013492079, {"exec"=>"Xorg.0.log.old"}]
    [3] exec.0: [1521622010.013493443, {"exec"=>"anaconda.ifcfg.log"}]
    [4] exec.0: [1521622010.013494707, {"exec"=>"anaconda.log"}]
    [5] exec.0: [1521622010.013496016, {"exec"=>"anaconda.program.log"}]
    [6] exec.0: [1521622010.013497225, {"exec"=>"anaconda.storage.log"}]
    [INPUT]
        Name          exec
        Tag           exec_ls
        Command       ls /var/log
        Interval_Sec  1
        Interval_NSec 0
    
    [OUTPUT]
        Name   stdout
        Match  *
    $ bin/fluent-bit -i cpu -o stdout -v
    Fluent-Bit v0.9.0
    Copyright (C) Treasure Data
    
    [2016/10/07 21:52:01] [ info] [engine] started
    [0] cpu.0: [1475898721, {"cpu_p"=>0.500000, "user_p"=>0.250000, "system_p"=>0.250000, "cpu0.p_cpu"=>0.000000, "cpu0.p_user"=>0.000000, "cpu0.p_system"=>0.000000, "cpu1.p_cpu"=>0.000000, "cpu1.p_user"=>0.000000, "cpu1.p_system"=>0.000000, "cpu2.p_cpu"=>0.000000, "cpu2.p_user"=>0.000000, "cpu2.p_system"=>0.000000, "cpu3.p_cpu"=>1.000000, "cpu3.p_user"=>0.000000, "cpu3.p_system"=>1.000000}]
    [1] cpu.0: [1475898722, {"cpu_p"=>0.250000, "user_p"=>0.250000, "system_p"=>0.000000, "cpu0.p_cpu"=>0.000000, "cpu0.p_user"=>0.000000, "cpu0.p_system"=>0.000000, "cpu1.p_cpu"=>1.000000, "cpu1.p_user"=>1.000000, "cpu1.p_system"=>0.000000, "cpu2.p_cpu"=>0.000000, "cpu2.p_user"=>0.000000, "cpu2.p_system"=>0.000000, "cpu3.p_cpu"=>0.000000, "cpu3.p_user"=>0.000000, "cpu3.p_system"=>0.000000}]
    [2] cpu.0: [1475898723, {"cpu_p"=>0.750000, "user_p"=>0.250000, "system_p"=>0.500000, "cpu0.p_cpu"=>2.000000, "cpu0.p_user"=>1.000000, "cpu0.p_system"=>1.000000, "cpu1.p_cpu"=>0.000000, "cpu1.p_user"=>0.000000, "cpu1.p_system"=>0.000000, "cpu2.p_cpu"=>1.000000, "cpu2.p_user"=>0.000000, "cpu2.p_system"=>1.000000, "cpu3.p_cpu"=>0.000000, "cpu3.p_user"=>0.000000, "cpu3.p_system"=>0.000000}]
    [3] cpu.0: [1475898724, {"cpu_p"=>1.000000, "user_p"=>0.750000, "system_p"=>0.250000, "cpu0.p_cpu"=>1.000000, "cpu0.p_user"=>1.000000, "cpu0.p_system"=>0.000000, "cpu1.p_cpu"=>2.000000, "cpu1.p_user"=>1.000000, "cpu1.p_system"=>1.000000, "cpu2.p_cpu"=>1.000000, "cpu2.p_user"=>1.000000, "cpu2.p_system"=>0.000000, "cpu3.p_cpu"=>1.000000, "cpu3.p_user"=>1.000000, "cpu3.p_system"=>0.000000}]

Fluentd and Fluent Bit compared:

Scope
  Fluentd: Containers / Servers
  Fluent Bit: Containers / Servers

Language
  Fluentd: C & Ruby
  Fluent Bit: C

Memory
  Fluentd: ~40MB
  Fluent Bit: ~450KB

Performance
  Fluentd: High Performance
  Fluent Bit: High Performance

Dependencies
  Fluentd: Built as a Ruby Gem, it requires a certain number of gems.
  Fluent Bit: Zero dependencies, unless some special plugin requires them.

Plugins
  Fluentd: More than 650 plugins available
  Fluent Bit: Around 35 plugins available

License
  Fluentd: Apache License v2.0
  Fluent Bit: Apache License v2.0

    Build latest stable version of Fluent Bit.

    It's strongly recommended to always use the stable release of Fluent Bit recipe and not the one from GIT master for production deployments.

    Notes about AArch64

When Fluent Bit series v1.0.x is built for an AArch64 target platform, the default backend mechanism for co-routines is sigaltstack(2). If the compiler flags specify _FORTIFY_SOURCE, the binary will generate an explicit crash with an error message similar to this one:

The workaround for this problem is to remove _FORTIFY_SOURCE from the build system.

    Fluent Bit v1.1 and native AArch64 support

Fluent Bit >= v1.1.x already integrates native AArch64 support, where stack switches for co-routines are done through native ASM calls. In this scenario there are no issues like the one faced with _FORTIFY_SOURCE in the previous 1.0.x series.

    Version

    Recipe

    Description

    devel

    fluent-bit_git.bb

    Build Fluent Bit from GIT master. This recipe aims to be used for development and testing purposes only.

    v1.1.3

    Fluent Bit

    1.1.2, 1.1.2-debug

    Container image of Fluent Bit

    1.1.1, 1.1.1-debug

    Container image of Fluent Bit

    1.1.0, 1.1.0-debug

    Container image of Fluent Bit

    It's strongly suggested that you always use the latest image of Fluent Bit.

    Getting Started

    Download the last stable image from 1.1 series:

Once the image is in place, run the following (useless) test which makes Fluent Bit measure CPU usage by the container:

That command will let Fluent Bit measure CPU usage every second and flush the results to the standard output, e.g:

    F.A.Q

Why is there no Fluent Bit Docker image based on Alpine Linux?

Alpine Linux uses the Musl C library instead of Glibc. Musl is not fully compatible with Glibc, which generated many issues in the following areas when used with Fluent Bit:

• Memory Allocator: to run Fluent Bit properly in high-load environments, we use Jemalloc as the default memory allocator, which reduces fragmentation and provides better performance for our needs. Jemalloc cannot run smoothly with Musl and requires extra work.

• Alpine Linux Musl functions bootstrap has a compatibility issue when loading Golang shared libraries; this generates problems when trying to load Golang output plugins in Fluent Bit.

• Alpine Linux Musl Time format parser does not support Glibc extensions.

• Maintainers' preference in terms of base image, for security and maintenance reasons, is Distroless and Debian.

    Tag(s)

    Description

    1.1, 1.1-debug

    Latest release of 1.1.x series

    1.1.3, 1.1.3-debug

    Distroless
    fluent/fluent-bit

    Container image of Fluent Bit

    Buffer

By default, the data ingested by the Input plugins resides in memory until it is routed and delivered to an Output interface.

    Routing

Data ingested by an Input interface is tagged: a Tag is assigned to it, and that Tag is used to determine where the data should be routed based on a match rule.

    Output

    An output defines a destination for the data. Destinations are handled by output plugins. Note that thanks to the Routing interface, the data can be delivered to multiple destinations.

    Input
    Parser
    Filter

    Buffering / Storage

The end-goal of Fluent Bit is to collect, parse, filter and ship logs to a central place. In this workflow there are many phases, and one of the critical pieces is the ability to do buffering: a mechanism to place processed data into a temporary location until it is ready to be shipped.

By default, when Fluent Bit processes data it uses memory as the primary and temporary place to store the record logs, but there are certain scenarios where it would be ideal to have a persistent buffering mechanism based on the filesystem to provide aggregation and data safety capabilities.

    Starting with Fluent Bit v1.0, we introduced a new storage layer that can either work in memory or in the file system. Input plugins can be configured to use one or the other upon demand at start time.

    Configuration

    The storage layer configuration takes place in two areas:

    • Service Section

    • Input Section

The known Service section configures a global environment for the storage layer, and then each Input section defines which mechanism to use.

    Service Section Configuration

A Service section will look like this:
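A minimal sketch using the storage keys described below:

    [SERVICE]
        flush                     1
        log_Level                 info
        storage.path              /var/log/flb-storage/
        storage.sync              normal
        storage.checksum          off
        storage.backlog.mem_limit 5M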

That configuration sets an optional buffering mechanism where the root path for data is /var/log/flb-storage/; it will use normal synchronization mode, without checksum, and up to a maximum of 5MB of memory when processing backlog data.

    Input Section Configuration

Optionally, any Input plugin can configure its storage preference; the following table describes the options available:

The following example configures a service that offers filesystem buffering capabilities and two Input plugins, the first based in memory and the second using the filesystem:
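A sketch of that combination:

    [SERVICE]
        flush                     1
        log_Level                 info
        storage.path              /var/log/flb-storage/
        storage.sync              normal
        storage.checksum          off
        storage.backlog.mem_limit 5M

    [INPUT]
        name          cpu
        storage.type  filesystem

    [INPUT]
        name          mem
        storage.type  memory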

    Upstream Servers

It's common for Fluent Bit output plugins to connect to external services to deliver logs over the network; this is the case for HTTP, Elasticsearch and Forward, among others. Being able to connect to one node (host) is normal and enough for most use cases, but there are other scenarios where balancing across different nodes is required. The Upstream feature provides such a capability.

An Upstream defines a set of nodes that will be targeted by an output plugin; by the nature of the implementation, an output plugin must support the Upstream feature. The following plugin(s) have Upstream support:

    • Forward

    The current balancing mode implemented is round-robin.

    Configuration

To define an Upstream it's required to create a specific configuration file that contains an UPSTREAM and one or multiple NODE sections. The following table describes the properties associated with each section. Note that all of them are mandatory:

    Nodes and specific plugin configuration

A Node might contain additional configuration keys required by the plugin; in that way we provide enough flexibility for the output plugin. A common use case is the Forward output: if TLS is enabled, it requires a shared_key (more details in the example below).

    Nodes and TLS (Transport Layer Security)

    In addition to the properties defined in the table above, the network operations against a defined node can optionally be done through the use of TLS for further encryption and certificates use.

The TLS options available are described in the TLS/SSL section and can be added to any Node section.

    Configuration File Example

The following example defines an Upstream called forward-balancing, which aims to be used by the Forward output plugin. It registers three Nodes:

    • node-1: connects to 127.0.0.1:43000

    • node-2: connects to 127.0.0.1:44000

    • node-3: connects to 127.0.0.1:45000 using TLS without verification. It also defines a specific configuration option required by Forward output called shared_key.
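Put together in its own file, the definition looks like this:

    [UPSTREAM]
        name       forward-balancing

    [NODE]
        name       node-1
        host       127.0.0.1
        port       43000

    [NODE]
        name       node-2
        host       127.0.0.1
        port       44000

    [NODE]
        name       node-3
        host       127.0.0.1
        port       45000
        tls        on
        tls.verify off
        shared_key secret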

Note that every Upstream definition must exist in its own configuration file in the file system. Adding multiple Upstreams in the same file or in different files is not allowed.
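As a sketch of how an output would reference it, assuming the Forward output exposes an Upstream property pointing to that file (the file name upstream.conf is just an example):

    [OUTPUT]
        Name      forward
        Match     *
        Upstream  ./upstream.conf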

    Kubernetes

    Fluent Bit is a lightweight and extensible Log Processor that comes with full support for Kubernetes:

    • Read Kubernetes/Docker log files from the file system or through Systemd Journal.

    • Enrich logs with Kubernetes metadata.

    • Deliver logs to third party storage services like Elasticsearch, InfluxDB, HTTP, etc.

    Content:

    Concepts

Before getting started it is important to understand how Fluent Bit will be deployed. Kubernetes manages a cluster of nodes, so our log agent tool will need to run on every node to collect logs from every POD; hence Fluent Bit is deployed as a DaemonSet (a POD that runs on every node of the cluster).

    When Fluent Bit runs, it will read, parse and filter the logs of every POD and will enrich each entry with the following information (metadata):

    • POD Name

    • POD ID

    • Container Name

    • Container ID

To obtain this information, a built-in filter plugin called kubernetes talks to the Kubernetes API Server to retrieve relevant information such as the pod_id, labels and annotations; other fields such as pod_name, container_id and container_name are retrieved locally from the log file names. All of this is handled automatically, no intervention is required from a configuration aspect.

    Our Kubernetes Filter plugin is fully inspired on the written by .

    Installation

Fluent Bit must be deployed as a DaemonSet, so that it will be available on every node of your Kubernetes cluster. To get started, run the following commands to create the namespace, service account and role setup:
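The exact manifests are published by the Fluent Bit project; as an illustrative sketch (the manifest file names below are placeholders), the sequence is along these lines:

    $ kubectl create namespace logging
    $ kubectl create -f fluent-bit-service-account.yaml
    $ kubectl create -f fluent-bit-role.yaml
    $ kubectl create -f fluent-bit-role-binding.yaml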

    The next step is to create a ConfigMap that will be used by our Fluent Bit DaemonSet:

    Fluent Bit to Elasticsearch

    Fluent Bit DaemonSet ready to be used with Elasticsearch on a normal Kubernetes Cluster:

    Fluent Bit to Elasticsearch on Minikube

    If you are using Minikube for testing purposes, use the following alternative DaemonSet manifest:

    Details

    The default configuration of Fluent Bit makes sure of the following:

    • Consume all containers logs from the running Node.

    • The will not append more than 5MB into the engine until they are flushed to the Elasticsearch backend. This limit aims to provide a workaround for scenarios.

    • The Kubernetes filter will enrich the logs with Kubernetes metadata, specifically labels and annotations. The filter only goes to the API Server when it cannot find the cached info, otherwise it uses the cache.

    Systemd

The Systemd input plugin allows you to collect log messages from the Journald daemon on Linux environments.

    Configuration Parameters

    The plugin supports the following configuration parameters:

    Key

    Description

    Getting Started

    In order to receive Systemd messages, you can run the plugin from the command line or through the configuration file:

    Command Line

    From the command line you can let Fluent Bit listen for Systemd messages with the following options:

    In the example above we are collecting all messages coming from the Docker service.

    Configuration File

    In your main configuration file append the following Input & Output sections:

    Record Modifier

The Record Modifier Filter plugin allows you to append fields or to exclude specific fields.

    Configuration Parameters

The plugin supports the following configuration parameters. Note that Remove_key and Whitelist_key are mutually exclusive.

    Key

    Description

    Getting Started

    In order to start filtering records, you can run the filter from the command line or through the configuration file.

    This is a sample in_mem record to filter.

    Append fields

The following configuration file appends the product name and hostname (via environment variable) to each record.

    You can also run the filter from command line.

    The output will be

    Remove fields with Remove_key

The following configuration file removes the 'Swap.*' fields.

    You can also run the filter from command line.

    The output will be

    Remove fields with Whitelist_key

The following configuration file retains only the 'Mem.*' fields.

    You can also run the filter from command line.

    The output will be

    Forward

    Forward is the protocol used by Fluent Bit and Fluentd to route messages between peers. This plugin implements the input service to listen for Forward messages.

    Configuration Parameters

    The plugin supports the following configuration parameters:

    Key

    Description

    Getting Started

    In order to receive Forward messages, you can run the plugin from the command line or through the configuration file:

    Command Line

    From the command line you can let Fluent Bit listen for Forward messages with the following options:

By default the service will listen on all interfaces (0.0.0.0) through TCP port 24224; optionally you can change this directly, e.g:

    In the example the Forward messages will only arrive through network interface under 192.168.3.2 address and TCP Port 9090.

    Configuration File

    In your main configuration file append the following Input & Output sections:

    Testing

Once Fluent Bit is running, you can send some messages using the fluent-cat tool (this tool is provided by Fluentd):

In Fluent Bit we should see the following output:

    Service

    The SERVICE defines the global behaviour of the Fluent Bit engine.

Buffer_Path (Str): Path to write buffered chunks if enabled.
Buffer_Workers (Int): Number of workers to operate on buffer chunks.

    The Parsers_File and Plugins_File are both relative to the directory the main config file is in.

    Example
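For example, a SERVICE section combining some of the keys above:

    [SERVICE]
        Flush        5
        Daemon       Off
        Config_Watch On
        Parsers_File parsers.conf
        Parsers_File custom_parsers.conf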

    CPU Usage

The cpu input plugin measures the CPU usage of the overall system and per CPU core, as a percentage, every second. At the moment this plugin is only available for Linux.

The following tables describe the information generated by the plugin. The keys below represent the data for the overall system; all values associated with the keys are in percentage units (0 to 100%):

cpu_p: CPU usage of the overall system; this value is the summation of time spent in user and kernel space. The result takes into consideration the number of CPU cores in the system.
user_p: CPU usage in User mode; for short, it means the CPU usage by user space programs. The result of this value takes into consideration the number of CPU cores in the system.
system_p: CPU usage in Kernel mode; for short, it means the CPU usage by the Kernel. The result of this value takes into consideration the number of CPU cores in the system.

In addition to the keys reported in the above table, similar content is created per CPU core. The cores are listed from 0 to N as the Kernel reports:

    Getting Started

    In order to get the statistics of the CPU usage of your system, you can run the plugin from the command line or through the configuration file:

    Command Line

As described above, the CPU input plugin gathers the overall usage every second and flushes the information to the output on the fifth second. In this example we used the stdout plugin to demonstrate the output records. In a real use-case you may want to flush this information to some central aggregator such as Fluentd or Elasticsearch.

    Configuration File

    In your main configuration file append the following Input & Output sections:

    File

The file output plugin allows you to write the data received through the input plugin to a file.

    Configuration Parameters

    The plugin supports the following configuration parameters:

    Key

    Description

    Format

    out_file format

Output time, tag and JSON records. There are no configuration parameters for out_file.

    plain format

Output the records as JSON (without additional tag and timestamp attributes). There are no configuration parameters for the plain format.

    csv format

    Output the records as csv. Csv supports an additional configuration parameter.

    ltsv format

    Output the records as LTSV. LTSV supports an additional configuration parameter.

    Getting Started

    You can run the plugin from the command line or through the configuration file:

    Command Line

From the command line you can let Fluent Bit write data to a file with the following options:

    Configuration File

    In your main configuration file append the following Input & Output sections:

    TCP

The tcp input plugin allows you to listen for JSON messages through a network interface (TCP port).

    Configuration Parameters

    The plugin supports the following configuration parameters:

    Key

    Description

    Getting Started

    In order to receive JSON messages over TCP, you can run the plugin from the command line or through the configuration file:

    Command Line

    From the command line you can let Fluent Bit listen for JSON messages with the following options:

By default the service will listen on all interfaces (0.0.0.0) through TCP port 5170; optionally you can change this directly, e.g:

    In the example the JSON messages will only arrive through network interface under 192.168.3.2 address and TCP Port 9090.

    Configuration File

    In your main configuration file append the following Input & Output sections:

    Testing

Once Fluent Bit is running, you can send some messages using netcat:

In Fluent Bit we should see the following output:

    Kafka

The Kafka output plugin allows you to ingest your records into an Apache Kafka service. This plugin uses the official librdkafka C library (built-in dependency).

    Configuration Parameters

    Key

    Description

    default

Setting rdkafka.log.connection.close to false and rdkafka.request.required.acks to 1 are examples of recommended settings of librdkafka properties.
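For example, an output section carrying those two librdkafka properties (broker address and topic are illustrative) could look like this:

    [OUTPUT]
        Name                          kafka
        Match                         *
        Brokers                       192.168.1.3:9092
        Topics                        test
        rdkafka.log.connection.close  false
        rdkafka.request.required.acks 1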

    Getting Started

    In order to insert records into Apache Kafka, you can run the plugin from the command line or through the configuration file:

    Command Line

The kafka plugin can read the parameters from the command line through the -p argument (property), e.g:

    Configuration File

    In your main configuration file append the following Input & Output sections:

    FlowCounter

FlowCounter is the protocol to count records. The flowcounter output plugin allows you to count records and their size.

    Configuration Parameters

    The plugin supports the following configuration parameters:

    Key

    Description

    Getting Started

    You can run the plugin from the command line or through the configuration file:

    Command Line

From the command line you can let Fluent Bit count up data with the following options:

    Configuration File

    In your main configuration file append the following Input & Output sections:

    Testing

    Once Fluent Bit is running, you will see the reports in the output interface similar to this:

    Treasure Data

The td output plugin allows you to flush your records into the Treasure Data cloud service.

    Configuration Parameters

    The plugin supports the following configuration parameters:

    Key

    Description

    Getting Started

In order to start inserting records into Treasure Data, you can run the plugin from the command line or through the configuration file:

    Command Line:

Ideally you don't want to expose your API key on the command line; using a configuration file is highly desirable.

    Configuration File

    In your main configuration file append the following Input & Output sections:

    Head

The head input plugin allows you to read events from the head of a file. Its behavior is similar to the head command.

    Configuration Parameters

    The plugin supports the following configuration parameters:

    BigQuery

The BigQuery output plugin is an experimental plugin that allows you to stream records into the Google BigQuery service. The implementation does not support the following, which would be expected in a full production version:

    • .

    • using insertId.

    Process

The Process input plugin allows you to check how healthy a process is. It performs the check at a fixed interval of time.

    Configuration Parameters

    The plugin supports the following configuration parameters:

    Stackdriver

The Stackdriver output plugin allows you to ingest your records into the Google Stackdriver Logging service.

Before getting started with the plugin configuration, make sure to obtain the proper credentials to access the service. We strongly recommend using a common JSON credentials file, reference link:

    Your goal is to obtain a credentials JSON file that will be used later by Fluent Bit Stackdriver output plugin.
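As a minimal sketch, assuming the credentials file is passed through a google_service_credentials property (both the property name and the path below are assumptions to verify against the plugin reference):

    [OUTPUT]
        Name                        stackdriver
        Match                       *
        google_service_credentials  /path/to/my_google_credentials.json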

    Filter Plugins

Filter plugins allow you to alter the incoming data generated by the input plugins. As of this version the following filter plugins are available:

    Decoders

There are certain cases where the log messages being parsed contain encoded data. A typical use case can be found in containerized environments with Docker: the application logs its data in JSON format, but it becomes an escaped string. Consider the following example.

    Original message generated by the application:

Then the Docker log message becomes encapsulated as follows:

As you can see, the original message is handled as an escaped string. Ideally, in Fluent Bit we would like to keep the original structured message and not a string.

    Getting Started

    Fluent Bit and Golang Plugins

    Fluent Bit currently supports integration of Golang plugins built as shared objects for output plugins only. The interface for the Golang plugins is currently under development but is functional.

    Getting Started

    Compile Fluent Bit with Golang support, e.g:

    Once compiled, we can see a new option in the binary -e which stands for external plugin, e.g:

    Kafka REST Proxy

The kafka-rest output plugin allows you to flush your records into a Kafka REST Proxy server. The following instructions assume that you have fully operational Kafka REST Proxy and Kafka services running in your environment.

    Configuration Parameters

    NATS

The nats output plugin allows you to flush your records into a NATS Server end point. The following instructions assume that you have a fully operational NATS Server in place.

In order to flush records, the nats plugin requires two parameters:

    Ingest Records Manually

There are some cases where the Fluent Bit library is used to send records from the caller application to some destination; this process is called manual data ingestion.

For this purpose a specific input plugin called lib exists and can be used in conjunction with the flb_lib_push() API function.

    Data Format

The lib input plugin expects the data to come in a fixed JSON format, as follows:

    Every record must be a JSON array that contains at least two entries. The first one is the UNIX_TIMESTAMP

    *** longjmp causes uninitialized stack frame ***: ...
    $ docker pull fluent/fluent-bit:1.1
    $ docker run -ti fluent/fluent-bit:1.1 /fluent-bit/bin/fluent-bit -i cpu -o stdout -f 1
    Fluent-Bit v1.1.x
    Copyright (C) Treasure Data
    
    [2017/11/07 14:29:02] [ info] [engine] started
    [0] cpu.0: [1504290543.000487750, {"cpu_p"=>0.750000, "user_p"=>0.250000, "system_p"=>0.500000, "cpu0.p_cpu"=>0.000000, "cpu0.p_user"=>0.000000, "cpu0.p_system"=>0.000000, "cpu1.p_cpu"=>1.000000, "cpu1.p_user"=>0.000000, "cpu1.p_system"=>1.000000, "cpu2.p_cpu"=>1.000000, "cpu2.p_user"=>1.000000, "cpu2.p_system"=>0.000000, "cpu3.p_cpu"=>0.000000, "cpu3.p_user"=>0.000000, "cpu3.p_system"=>0.000000}]
    fluent-bit_1.1.3.bb
    v1.1.3
    v1.1.2
    v1.1.1
    v1.1.0

Service section keys:

storage.path: Set an optional location in the file system to store streams and chunks of data. If this parameter is not set, Input plugins can only use in-memory buffering.
storage.sync: Configure the synchronization mode used to store the data into the file system. It can take the values normal or full. Default: normal.
storage.checksum: Enable the data integrity check when writing and reading data from the filesystem. The storage layer uses the CRC32 algorithm. Default: Off.
storage.backlog.mem_limit: If storage.path is set, Fluent Bit will look for data chunks that were not delivered and are still in the storage layer; these are called backlog data. This option configures a hint of the maximum amount of memory to use when processing these records. Default: 5M.

Input section keys:

storage.type: Specify the buffering mechanism to use. It can be memory or filesystem. Default: memory.

UPSTREAM section:
  name: Defines a name for the Upstream in question.

NODE section:
  name: Defines a name for the Node in question.
  host: IP address or hostname of the target host.
  port: TCP port of the target service.

Path: Optional path to the Systemd journal directory; if not set, the plugin will use default paths to read local-only logs.
Max_Fields: Set a maximum number of fields (keys) allowed per record. Default: 8000.
Max_Entries: When Fluent Bit starts, the Journal might have a high number of logs in the queue. In order to avoid delays and reduce memory usage, this option allows you to specify the maximum number of log entries that can be processed per round. Once the limit is reached, Fluent Bit will continue processing the remaining log entries once Journald performs the notification. Default: 5000.
Systemd_Filter: Allows you to perform a query over logs that contain specific Journald key/value pairs, e.g: _SYSTEMD_UNIT=UNIT. The Systemd_Filter option can be specified multiple times in the input section to apply multiple filters as required.
Systemd_Filter_Type: Define the filter type when Systemd_Filter is specified multiple times. Allowed values are And and Or. With And a record is matched only when all of the Systemd_Filter entries have a match. With Or a record is matched when any of the Systemd_Filter entries has a match. Default: Or.
Tag: The tag is used to route messages, but on the Systemd plugin there is an extra functionality: if the tag includes a star/wildcard, it will be expanded with the Systemd Unit file (e.g: host.* => host.UNIT_NAME).
DB: Specify the absolute path of a database file to keep track of the Journald cursor.
Read_From_Tail: Start reading new entries. Skip entries already stored in Journald. Default: Off.
Strip_Underscores: Remove the leading underscore of the Journald field (key). For example the Journald field _PID becomes the key PID. Default: Off.

Record: Append fields. This parameter needs a key/value pair.
Remove_key: If the key is matched, that field is removed.
Whitelist_key: If the key is not matched, that field is removed.

Listen: Listener network interface. Default: 0.0.0.0.
Port: TCP port to listen on for incoming connections. Default: 24224.
Buffer_Max_Size: Specify the maximum buffer memory size used to receive a Forward message. The value must conform to the Unit Size specification. Default: Buffer_Chunk_Size.
Buffer_Chunk_Size: By default the buffer to store the incoming Forward messages does not allocate the maximum memory allowed; instead it allocates memory when required. The rounds of allocations are set by Buffer_Chunk_Size. The value must conform to the Unit Size specification. Default: 32KB.

    Fluentd
    Fluent Bit

Config_Watch (Bool): If true, exit on change in config directory.
Daemon (Bool): If true, go to background on start.
Flush (Int): Interval to flush output (seconds).
Grace (Int): Wait time (seconds) on exit.
HTTP_Listen (Str): Address to listen on (e.g. 0.0.0.0).
HTTP_Port (Int): Port to listen on (e.g. 8888).
HTTP_Server (Bool): If true, enable the statistics HTTP server.
Log_File (Str): File to log diagnostic output.
Log_Level (Int): Diagnostic level (error/warning/info/debug/trace).
Parsers_File (Str): Optional 'parsers' config file (can be multiple).
Plugins_File (Str): Optional 'plugins' config file (can be multiple).

cpuN.p_cpu: Represents the total CPU usage by core N.
cpuN.p_user: Total CPU spent in user mode or user space programs associated with this core.
cpuN.p_system: Total CPU spent in system or kernel mode associated with this core.

    Fluentd
    Elasticsearch

Path: File path to output. If not set, the filename will be the tag name.
Format: The format of the file content. See also the Format section. Default: out_file.

csv format parameter:

Delimiter: The character to separate each data item. Default: ','

ltsv format parameters:

Delimiter: The character to separate each pair. Default: '\t' (TAB)
Label_Delimiter: The character to separate the label and the value. Default: ':'

    Listen

    Listener network interface, default: 0.0.0.0.

    Port

    TCP port where listening for connections, default: 5170.

    Buffer_Size

    Specify the maximum buffer size in KB to receive a JSON message. If not set, the default size will be the value of Chunk_Size.

    Chunk_Size

By default the buffer to store the incoming JSON messages does not allocate the maximum memory allowed; instead it allocates memory when required. The rounds of allocations are set by Chunk_Size in KB. If not set, Chunk_Size is equal to 32 (32KB).

    Fluent Bit

    Format

    Specify data format, options available: json, msgpack.

    json

    Message_Key

    Optional key to store the message

    Timestamp_Key

    Set the key to store the record timestamp

    @timestamp

    Timestamp_Format

    'iso8601' or 'double'

    double

    Brokers

Single entry or list of Kafka Brokers, e.g: 192.168.1.3:9092, 192.168.1.4:9092.

    Topics

    Single entry or list of topics separated by comma (,) that Fluent Bit will use to send messages to Kafka. If only one topic is set, that one will be used for all records. Instead if multiple topics exists, the one set in the record by Topic_Key will be used.

    fluent-bit

    Topic_Key

If multiple Topics exist, the value of Topic_Key in the record will indicate the topic to use. E.g.: if Topic_Key is router and the record is {"key1": 123, "router": "route2"}, Fluent Bit will use the topic route2. Note that the topic must be registered in the Topics list.

    rdkafka.{property}

    {property} can be any librdkafka properties

    Default

    Unit

    The unit of duration. (second/minute/hour/day)

    minute

    Default

    API

    The Treasure Data API key. To obtain it please log into the Console and in the API keys box, copy the API key hash.

    Database

    Specify the name of your target database.

    Table

    Specify the name of your target table where the records will be stored.

    Region

    Set the service region, available values: US and JP

    US

    Treasure Data

    Absolute path to the target file, e.g: /proc/uptime

    Buf_Size

    Buffer size to read the file.

    Interval_Sec

    Polling interval (seconds).

    Interval_NSec

    Polling interval (nanosecond).

    Add_Path

If enabled, the file path is appended to each record. Default value is false.

    Key

    Rename a key. Default: head.

    Lines

    Line number to read. If the number N is set, in_head reads first N lines like head(1) -n.

    Split_line

    If enabled, in_head generates key-value pair per line.

    Split Line Mode

This mode is useful to get a specific line. This is an example of getting the CPU frequency from /proc/cpuinfo.

/proc/cpuinfo is a special file that exposes CPU information.

The CPU frequency line is "cpu MHz : 2791.009". We can get that line with this configuration file:

    Output is

    Getting Started

    In order to read the head of a file, you can run the plugin from the command line or through the configuration file:

    Command Line

    The following example will read events from the /proc/uptime file, tag the records with the uptime name and flush them back to the stdout plugin:

    Configuration File

    In your main configuration file append the following Input & Output sections:

    Note: Total interval (sec) = Interval_Sec + (Interval_Nsec / 1000000000).

    e.g. 1.5s = 1s + 500000000ns

    Key

    Description

    File

    Name of the target Process to check.

    Interval_Sec

    Interval in seconds between the service checks. Default value is 1.

Interval_NSec

Specify a nanoseconds interval for service checks; it works in conjunction with the Interval_Sec configuration key. Default value is 0.

    Alert

    If enabled, it will only generate messages if the target process is down. By default this option is disabled.

    Fd

If enabled, the number of file descriptors is appended to each record. Default value is true.

Mem

If enabled, the memory usage of the process is appended to each record. Default value is true.

    Getting Started

    In order to start performing the checks, you can run the plugin from the command line or through the configuration file:

    The following example will check the health of crond process.

    Configuration File

    In your main configuration file append the following Input & Output sections:

    Testing

    Once Fluent Bit is running, you will see the health of process:

    Key

    Description

    Proc_Name

Decoders are a built-in feature available through the Parsers file; each Parser definition can optionally set one or multiple decoders. There are two types of decoders:
• Decode_Field: if the content can be decoded into a structured message, append that structured message (keys and values) to the original log message.

    • Decode_Field_As: any content decoded (unstructured or structured) will be replaced in the same key/value, no extra keys are added.

Our pre-defined Docker Parser has the following definition:

Each line in the parser with a key Decode_Field instructs the parser to apply a specific decoder on a given field; optionally, it offers the option to take an extra action if the decoder cannot succeed.

    Decoders

    Name

    Description

    json

Handle the field content as a JSON map. If it finds a JSON map, it will replace the content with a structured map.

    escaped

    decode an escaped string.

    escaped_utf8

    decode a UTF8 escaped string.

    Optional Actions

If a decoder fails to decode the field, or you want to try another decoder, it is possible to define an optional action. Available actions are:

    Name

    Description

    try_next

    if the decoder failed, apply the next Decoder in the list for the same field.

    do_next

    if the decoder succeeded or failed, apply the next Decoder in the list for the same field.

    Note that actions are affected by some restrictions:

• On Decode_Field_As, if it succeeded, another decoder of the same type in the same field can be applied only if the data continues to be an unstructured message (raw text).

• On Decode_Field, if it succeeded, it can only be applied once to the same field. By nature, Decode_Field aims to decode a structured message.

    Examples

    escaped_utf8

    Example input (from /path/to/log.log in configuration below)

    Example output

    Configuration file

    The fluent-bit-parsers.conf file,

    Build a Go Plugin

    The fluent-bit-go package is available to assist developers in creating Go plugins.

    https://github.com/fluent/fluent-bit-go

    At a minimum, a Go plugin looks like this:

The code above is a template to write an output plugin; it's really important to keep the package name as main and add an explicit main() function. This is a requirement as the code will be built as a shared library.

    To build the code above, use the following line:
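Assuming the source file is named out_gstdout.go, a typical invocation is:

    $ go build -buildmode=c-shared -o out_gstdout.so out_gstdout.go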

Once built, a shared library called out_gstdout.so will be available. It's really important to double-check that the final .so file is what we expect. Running ldd over the library we should see something similar to this:

    Run Fluent Bit with the new plugin

    $ cd build/
    $ cmake -DFLB_DEBUG=On -DFLB_PROXY_GO=On ../
    $ make
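With the plugin built, a run along these lines loads it (assuming the shared object is out_gstdout.so and the plugin registers itself as gstdout, as in the template above):

    $ bin/fluent-bit -e /path/to/out_gstdout.so -i cpu -o gstdout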

    IP address or hostname of the target Kafka REST Proxy server

    127.0.0.1

    Port

    TCP port of the target Kafka REST Proxy server

    8082

    Topic

    Set the Kafka topic

    fluent-bit

    Partition

    Set the partition number (optional)

    Message_Key

    Set a message key (optional)

    Time_Key

    The Time_Key property defines the name of the field that holds the record timestamp.

    @timestamp

    Time_Key_Format

    Defines the format of the timestamp.

    %Y-%m-%dT%H:%M:%S

    Include_Tag_Key

    Append the Tag name to the final record.

    Off

    Tag_Key

    If Include_Tag_Key is enabled, this property defines the key name for the tag.

    _flb-key

    TLS / SSL

The Kafka REST Proxy output plugin supports TLS/SSL; for more details about the properties available and general configuration, please refer to the TLS/SSL section.

    Getting Started

    In order to insert records into a Kafka REST Proxy service, you can run the plugin from the command line or through the configuration file:

    Command Line

The kafka-rest plugin can read its parameters from the command line through the -p argument (property), e.g:

    Configuration File

    In your main configuration file append the following Input & Output sections:

    Key

    Description

    default

    Kafka REST Proxy

    Host

    4222

    In order to override the default configuration values, the plugin uses the optional Fluent Bit network address format, e.g:

    Running

Fluent Bit only needs to know that it must use the nats output plugin; if no extra information is given, it will use the default values specified in the above table.

    As described above, the target service and storage point can be changed, e.g:
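For example (the address and port below are illustrative, following the nats://host:port address format):

    $ fluent-bit -i cpu -o nats://192.168.1.3:4222 -m '*'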

    Data format

    For every set of records flushed to a NATS Server, Fluent Bit uses the following JSON format:

Each record is an individual entity represented in a JSON array that contains a UNIX_TIMESTAMP and a JSON map with a set of key/values. A summarized output of the CPU input plugin will look like this:
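An abridged, illustrative payload with two CPU samples (timestamps and values here are made up) would look like this:

    [
      [1457108504, {"cpu_p": 1.50, "user_p": 0.75, "system_p": 0.75}],
      [1457108505, {"cpu_p": 2.25, "user_p": 1.50, "system_p": 0.75}]
    ]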

    parameter

    description

    default

    host

    IP address or hostname of the NATS Server

    127.0.0.1

    port

    NATS Server

    TCP port of the target NATS Server

    which is a number representing time associated to the event generation (Epoch time) and the second entry is a JSON map with a list of key/values. A valid entry can be the following:

    Usage

    The following C code snippet shows how to insert a few JSON records into a running Fluent Bit engine:

    [UNIX_TIMESTAMP, MAP]
    [1449505010, {"key1": "some value", "key2": false}]
    [SERVICE]
        flush                     1
        log_Level                 info
        storage.path              /var/log/flb-storage/
        storage.sync              normal
        storage.checksum          off
        storage.backlog.mem_limit 5M
    [SERVICE]
        flush                     1
        log_Level                 info
        storage.path              /var/log/flb-storage/
        storage.sync              normal
        storage.checksum          off
        storage.backlog.mem_limit 5M
    
    [INPUT]
        name          cpu
        storage.type  filesystem
    
    [INPUT]
        name          mem
        storage.type  memory
    [UPSTREAM]
        name       forward-balancing
    
    [NODE]
        name       node-1
        host       127.0.0.1
        port       43000
    
    [NODE]
        name       node-2
        host       127.0.0.1
        port       44000
    
    [NODE]
        name       node-3
        host       127.0.0.1
        port       45000
        tls        on
        tls.verify off
        shared_key secret
    $ fluent-bit -i systemd \
                 -p systemd_filter=_SYSTEMD_UNIT=docker.service \
                 -p tag='host.*' -o stdout
    [SERVICE]
        Flush        1
        Log_Level    info
        Parsers_File parsers.conf
    
    [INPUT]
        Name            systemd
        Tag             host.*
        Systemd_Filter  _SYSTEMD_UNIT=docker.service
    
    [OUTPUT]
        Name   stdout
        Match  *
    {"Mem.total"=>1016024, "Mem.used"=>716672, "Mem.free"=>299352, "Swap.total"=>2064380, "Swap.used"=>32656, "Swap.free"=>2031724}
    [INPUT]
        Name mem
        Tag  mem.local
    
    [OUTPUT]
        Name  stdout
        Match *
    
    [FILTER]
        Name record_modifier
        Match *
        Record hostname ${HOSTNAME}
        Record product Awesome_Tool
    $ fluent-bit -i mem -o stdout -F record_modifier -p 'Record=hostname ${HOSTNAME}' -p 'Record=product Awesome_Tool' -m '*'
    [0] mem.local: [1492436882.000000000, {"Mem.total"=>1016024, "Mem.used"=>716672, "Mem.free"=>299352, "Swap.total"=>2064380, "Swap.used"=>32656, "Swap.free"=>2031724, "hostname"=>"localhost.localdomain", "product"=>"Awesome_Tool"}]
    [INPUT]
        Name mem
        Tag  mem.local
    
    [OUTPUT]
        Name  stdout
        Match *
    
    [FILTER]
        Name record_modifier
        Match *
        Remove_key Swap.total
        Remove_key Swap.used
        Remove_key Swap.free
    $ fluent-bit -i mem -o stdout -F  record_modifier -p 'Remove_key=Swap.total' -p 'Remove_key=Swap.free' -p 'Remove_key=Swap.used' -m '*'
    [0] mem.local: [1492436998.000000000, {"Mem.total"=>1016024, "Mem.used"=>716672, "Mem.free"=>295332}]
    [INPUT]
        Name mem
        Tag  mem.local
    
    [OUTPUT]
        Name  stdout
        Match *
    
    [FILTER]
        Name record_modifier
        Match *
        Whitelist_key Mem.total
        Whitelist_key Mem.used
        Whitelist_key Mem.free
    $ fluent-bit -i mem -o stdout -F  record_modifier -p 'Whitelist_key=Mem.total' -p 'Whitelist_key=Mem.free' -p 'Whitelist_key=Mem.used' -m '*'
    [0] mem.local: [1492436998.000000000, {"Mem.total"=>1016024, "Mem.used"=>716672, "Mem.free"=>295332}]
    $ fluent-bit -i forward -o stdout
    $ fluent-bit -i forward://192.168.3.2:9090 -o stdout
    [INPUT]
        Name              forward
        Listen            0.0.0.0
        Port              24224
        Buffer_Chunk_Size 32KB
        Buffer_Max_Size   64KB
    
    [OUTPUT]
        Name   stdout
        Match  *
    $ echo '{"key 1": 123456789, "key 2": "abcdefg"}' | fluent-cat my_tag
    $ bin/fluent-bit -i forward -o stdout
    Fluent-Bit v0.9.0
    Copyright (C) Treasure Data
    
    [2016/10/07 21:49:40] [ info] [engine] started
    [2016/10/07 21:49:40] [ info] [in_fw] binding 0.0.0.0:24224
    [0] my_tag: [1475898594, {"key 1"=>123456789, "key 2"=>"abcdefg"}]
    [SERVICE]
        Flush        5
        Daemon       Off
        Config_Watch On
        Parsers_File parsers.conf
        Parsers_File custom_parsers.conf
    $ build/bin/fluent-bit -i cpu -t my_cpu -o stdout -m '*'
    Fluent-Bit v0.8.0
    Copyright (C) Treasure Data
    
    [2016/01/07 10:46:29] [ info] starting engine
    [0] [1452185189, {"cpu_p"=>7.00, "user_p"=>5.00, "system_p"=>2.00, "cpu0.p_cpu"=>10.00, "cpu0.p_user"=>8.00, "cpu0.p_system"=>2.00, "cpu1.p_cpu"=>6.00, "cpu1.p_user"=>4.00, "cpu1.p_system"=>2.00}]
    [1] [1452185190, {"cpu_p"=>6.50, "user_p"=>5.00, "system_p"=>1.50, "cpu0.p_cpu"=>6.00, "cpu0.p_user"=>5.00, "cpu0.p_system"=>1.00, "cpu1.p_cpu"=>7.00, "cpu1.p_user"=>5.00, "cpu1.p_system"=>2.00}]
    [2] [1452185191, {"cpu_p"=>7.50, "user_p"=>5.00, "system_p"=>2.50, "cpu0.p_cpu"=>7.00, "cpu0.p_user"=>3.00, "cpu0.p_system"=>4.00, "cpu1.p_cpu"=>6.00, "cpu1.p_user"=>6.00, "cpu1.p_system"=>0.00}]
    [3] [1452185192, {"cpu_p"=>4.50, "user_p"=>3.50, "system_p"=>1.00, "cpu0.p_cpu"=>6.00, "cpu0.p_user"=>5.00, "cpu0.p_system"=>1.00, "cpu1.p_cpu"=>5.00, "cpu1.p_user"=>3.00, "cpu1.p_system"=>2.00}]
    [INPUT]
        Name cpu
        Tag  my_cpu
    
    [OUTPUT]
        Name  stdout
        Match *
    tag: [time, {"key1":"value1", "key2":"value2", "key3":"value3"}]
    {"key1":"value1", "key2":"value2", "key3":"value3"}
    time[delimiter]"value1"[delimiter]"value2"[delimiter]"value3"
    field1[label_delimiter]value1[delimiter]field2[label_delimiter]value2\n
    $ fluent-bit -i cpu -o file -p path=output.txt
    [INPUT]
        Name cpu
        Tag  cpu
    
    [OUTPUT]
        Name file
        Match *
        Path output.txt
    $ fluent-bit -i tcp -o stdout
    $ fluent-bit -i tcp://192.168.3.2:9090 -o stdout
    [INPUT]
        Name        tcp
        Listen      0.0.0.0
        Port        5170
        Chunk_Size  32
        Buffer_Size 64
    
    [OUTPUT]
        Name   stdout
        Match  *
    $ echo '{"key 1": 123456789, "key 2": "abcdefg"}' | nc 127.0.0.1 5170
    $ bin/fluent-bit -i tcp -o stdout
    Fluent-Bit v0.11.0
    Copyright (C) Treasure Data
    
    [2017/01/02 10:57:44] [ info] [engine] started
    [2017/01/02 10:57:44] [ info] [in_tcp] binding 0.0.0.0:5170
    [0] tcp.0: [1483376268, {"msg"=>{"key 1"=>123456789, "key 2"=>"abcdefg"}}]
    $ fluent-bit -i cpu -o kafka -p brokers=192.168.1.3:9092 -p topics=test
    [INPUT]
        Name  cpu
    
    [OUTPUT]
        Name        kafka
        Match       *
        Brokers     192.168.1.3:9092
        Topics      test
    $ fluent-bit -i cpu -o flowcounter
    [INPUT]
        Name cpu
        Tag  cpu
    
    [OUTPUT]
        Name flowcounter
        Match *
        Unit second
    $ fluent-bit -i cpu -o flowcounter  
    Fluent-Bit v0.10.0
    Copyright (C) Treasure Data
    
    [2016/12/23 11:01:20] [ info] [engine] started
    [out_flowcounter] cpu.0:[1482458540, {"counts":60, "bytes":7560, "counts/minute":1, "bytes/minute":126 }]
    $ fluent-bit -i cpu -o td -p API="abc" -p Database="fluentbit" -p Table="cpu_samples"
    [INPUT]
        Name cpu
        Tag  my_cpu
    
    [OUTPUT]
        Name     td
        Match    *
        API      5713/e75be23caee19f8041dfa635ddfbd0dcd8c8d981
        Database fluentbit
        Table    cpu_samples
    processor    : 0
    vendor_id    : GenuineIntel
    cpu family    : 6
    model        : 42
    model name    : Intel(R) Core(TM) i7-2640M CPU @ 2.80GHz
    stepping    : 7
    microcode    : 41
    cpu MHz        : 2791.009
    cache size    : 4096 KB
    physical id    : 0
    siblings    : 1
    [INPUT]
        Name           head
        Tag            head.cpu
        File           /proc/cpuinfo
        Lines          8
        Split_line     true
        # {"line0":"processor    : 0", "line1":"vendor_id    : GenuineIntel" ...}
    
    [FILTER]
        Name           record_modifier
        Match          *
        Whitelist_key  line7
    
    [OUTPUT]
        Name           stdout
        Match          *
    $ bin/fluent-bit -c head.conf 
    Fluent-Bit v0.12.0
    Copyright (C) Treasure Data
    
    [2017/06/26 22:38:24] [ info] [engine] started
    [0] head.cpu: [1498484305.000279805, {"line7"=>"cpu MHz        : 2791.009"}]
    [1] head.cpu: [1498484306.011680137, {"line7"=>"cpu MHz        : 2791.009"}]
    [2] head.cpu: [1498484307.010042482, {"line7"=>"cpu MHz        : 2791.009"}]
    [3] head.cpu: [1498484308.008447978, {"line7"=>"cpu MHz        : 2791.009"}]
    $ fluent-bit -i head -t uptime -p File=/proc/uptime -o stdout -m '*'
    Fluent-Bit v0.8.0
    Copyright (C) Treasure Data
    
    [2016/05/17 21:53:54] [ info] starting engine
    [0] uptime: [1463543634, {"head"=>"133517.70 194870.97"}]
    [1] uptime: [1463543635, {"head"=>"133518.70 194872.85"}]
    [2] uptime: [1463543636, {"head"=>"133519.70 194876.63"}]
    [3] uptime: [1463543637, {"head"=>"133520.70 194879.72"}]
    [INPUT]
        Name          head
        Tag           uptime
        File          /proc/uptime
        Buf_Size      256
        Interval_Sec  1
        Interval_NSec 0
    
    [OUTPUT]
        Name   stdout
        Match  *
    $ fluent-bit -i proc -p proc_name=crond -o stdout
    [INPUT]
        Name          proc
        Proc_Name     crond
        Interval_Sec  1
        Interval_NSec 0
        Fd            true
        Mem           true
    
    [OUTPUT]
        Name   stdout
        Match  *
    $ fluent-bit -i proc -p proc_name=fluent-bit -o stdout
    Fluent-Bit v0.11.0
    Copyright (C) Treasure Data
    
    [2017/01/30 21:44:56] [ info] [engine] started
    [0] proc.0: [1485780297, {"alive"=>true, "proc_name"=>"fluent-bit", "pid"=>10964, "mem.VmPeak"=>14740000, "mem.VmSize"=>14740000, "mem.VmLck"=>0, "mem.VmHWM"=>1120000, "mem.VmRSS"=>1120000, "mem.VmData"=>2276000, "mem.VmStk"=>88000, "mem.VmExe"=>1768000, "mem.VmLib"=>2328000, "mem.VmPTE"=>68000, "mem.VmSwap"=>0, "fd"=>18}]
    [1] proc.0: [1485780298, {"alive"=>true, "proc_name"=>"fluent-bit", "pid"=>10964, "mem.VmPeak"=>14740000, "mem.VmSize"=>14740000, "mem.VmLck"=>0, "mem.VmHWM"=>1148000, "mem.VmRSS"=>1148000, "mem.VmData"=>2276000, "mem.VmStk"=>88000, "mem.VmExe"=>1768000, "mem.VmLib"=>2328000, "mem.VmPTE"=>68000, "mem.VmSwap"=>0, "fd"=>18}]
    [2] proc.0: [1485780299, {"alive"=>true, "proc_name"=>"fluent-bit", "pid"=>10964, "mem.VmPeak"=>14740000, "mem.VmSize"=>14740000, "mem.VmLck"=>0, "mem.VmHWM"=>1152000, "mem.VmRSS"=>1148000, "mem.VmData"=>2276000, "mem.VmStk"=>88000, "mem.VmExe"=>1768000, "mem.VmLib"=>2328000, "mem.VmPTE"=>68000, "mem.VmSwap"=>0, "fd"=>18}]
    [3] proc.0: [1485780300, {"alive"=>true, "proc_name"=>"fluent-bit", "pid"=>10964, "mem.VmPeak"=>14740000, "mem.VmSize"=>14740000, "mem.VmLck"=>0, "mem.VmHWM"=>1152000, "mem.VmRSS"=>1148000, "mem.VmData"=>2276000, "mem.VmStk"=>88000, "mem.VmExe"=>1768000, "mem.VmLib"=>2328000, "mem.VmPTE"=>68000, "mem.VmSwap"=>0, "fd"=>18}]
    {"status": "up and running"}
    {"log":"{\"status\": \"up and running\"}\r\n","stream":"stdout","time":"2018-03-09T01:01:44.851160855Z"}
    [PARSER]
        Name         docker
        Format       json
        Time_Key     time
        Time_Format  %Y-%m-%dT%H:%M:%S.%L
        Time_Keep    On
        # Command       |  Decoder  | Field | Optional Action   |
        # ==============|===========|=======|===================|
        Decode_Field_As    escaped     log
    {"log":"\u0009Checking indexes...\n","stream":"stdout","time":"2018-02-19T23:25:29.1845444Z"}
    {"log":"\u0009\u0009Validated: _audit _internal _introspection _telemetry _thefishbucket history main snmp_data summary\n","stream":"stdout","time":"2018-02-19T23:25:29.1845536Z"}
    {"log":"\u0009Done\n","stream":"stdout","time":"2018-02-19T23:25:29.1845622Z"}
    [24] tail.0: [1519082729.184544400, {"log"=>"   Checking indexes...                                                   
    ", "stream"=>"stdout", "time"=>"2018-02-19T23:25:29.1845444Z"}]
    [25] tail.0: [1519082729.184553600, {"log"=>"           Validated: _audit _internal _introspection _telemetry _thefishbucket history main snmp_data summary
    ", "stream"=>"stdout", "time"=>"2018-02-19T23:25:29.1845536Z"}]
    [26] tail.0: [1519082729.184562200, {"log"=>"   Done                  
    ", "stream"=>"stdout", "time"=>"2018-02-19T23:25:29.1845622Z"}]
    [SERVICE]
        Parsers_File fluent-bit-parsers.conf
    
    [INPUT]
        Name        tail
        Parser      docker
        Path        /path/to/log.log
    
    [OUTPUT]
        Name   stdout
        Match  *
    [PARSER]
        Name        docker
        Format      json
        Time_Key    time
        Time_Format %Y-%m-%dT%H:%M:%S %z
        Decode_Field_as escaped_utf8 log
    $ bin/fluent-bit -h
    Usage: fluent-bit [OPTION]
    
    Available Options
      -c  --config=FILE    specify an optional configuration file
      -d, --daemon        run Fluent Bit in background mode
      -f, --flush=SECONDS    flush timeout in seconds (default: 5)
      -i, --input=INPUT    set an input
      -m, --match=MATCH    set plugin match, same as '-p match=abc'
      -o, --output=OUTPUT    set an output
      -p, --prop="A=B"    set plugin configuration property
      -e, --plugin=FILE    load an external plugin (shared lib)
      ...
    package main

    import (
        "C"
        "unsafe"

        "github.com/fluent/fluent-bit-go/output"
    )

    //export FLBPluginRegister
    func FLBPluginRegister(def unsafe.Pointer) int {
        // Gets called only once when the plugin.so is loaded
        return output.FLBPluginRegister(def, "gstdout", "Stdout GO!")
    }

    //export FLBPluginInit
    func FLBPluginInit(plugin unsafe.Pointer) int {
        // Gets called only once for each instance you have configured.
        return output.FLB_OK
    }

    //export FLBPluginFlushCtx
    func FLBPluginFlushCtx(ctx, data unsafe.Pointer, length C.int, tag *C.char) int {
        // Gets called with a batch of records to be written to an instance.
        return output.FLB_OK
    }

    //export FLBPluginExit
    func FLBPluginExit() int {
        return output.FLB_OK
    }

    func main() {
    }
    $ go build -buildmode=c-shared -o out_gstdout.so out_gstdout.go
    $ ldd out_gstdout.so
        linux-vdso.so.1 =>  (0x00007fff561dd000)
        libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007fc4aeef0000)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fc4aeb27000)
        /lib64/ld-linux-x86-64.so.2 (0x000055751a4fd000)
    $ bin/fluent-bit -e /path/to/out_gstdout.so -i cpu -o gstdout
    $ fluent-bit -i cpu -t cpu -o kafka-rest -p host=127.0.0.1 -p port=8082 -m '*'
    [INPUT]
        Name  cpu
        Tag   cpu
    
    [OUTPUT]
        Name        kafka-rest
        Match       *
        Host        127.0.0.1
        Port        8082
        Topic       fluent-bit
        Message_Key my_key
    nats://host:port
    $ bin/fluent-bit -i cpu -o nats -V -f 5
    Fluent-Bit v0.7.0
    Copyright (C) Treasure Data
    
    [2016/03/04 10:17:33] [ info] Configuration
    flush time     : 5 seconds
    input plugins  : cpu
    collectors     :
    [2016/03/04 10:17:33] [ info] starting engine
    cpu[all] all=3.250000 user=2.500000 system=0.750000
    cpu[i=0] all=3.000000 user=1.000000 system=2.000000
    cpu[i=1] all=3.000000 user=2.000000 system=1.000000
    cpu[i=2] all=2.000000 user=2.000000 system=0.000000
    cpu[i=3] all=6.000000 user=5.000000 system=1.000000
    [2016/03/04 10:17:33] [debug] [in_cpu] CPU 3.25%
    ...
    [
      [UNIX_TIMESTAMP, JSON_MAP_1],
      [UNIX_TIMESTAMP, JSON_MAP_2],
      [UNIX_TIMESTAMP, JSON_MAP_N],
    ]
    [
      [1457108504,{"tag":"fluentbit","cpu_p":1.500000,"user_p":1,"system_p":0.500000}],
      [1457108505,{"tag":"fluentbit","cpu_p":4.500000,"user_p":3,"system_p":1.500000}],
      [1457108506,{"tag":"fluentbit","cpu_p":6.500000,"user_p":4.500000,"system_p":2}]
    ]
    #include <fluent-bit.h>
    
    #define JSON_1   "[1449505010, {\"key1\": \"some value\"}]"
    #define JSON_2   "[1449505620, {\"key1\": \"some new value\"}]"
    
    int main()
    {
        int ret;
        int in_ffd;
        int out_ffd;
        flb_ctx_t *ctx;
    
        /* Create library context */
        ctx = flb_create();
        if (!ctx) {
            return -1;
        }
    
        /* Enable the input plugin for manual data ingestion */
        in_ffd = flb_input(ctx, "lib", NULL);
        if (in_ffd == -1) {
            flb_destroy(ctx);
            return -1;
        }
    
        /* Enable output plugin 'stdout' (print records to the standard output) */
        out_ffd = flb_output(ctx, "stdout", NULL);
        if (out_ffd == -1) {
            flb_destroy(ctx);
            return -1;
        }
    
        /* Start the engine */
        ret = flb_start(ctx);
        if (ret == -1) {
            flb_destroy(ctx);
            return -1;
        }
    
        /* Ingest data manually */
        flb_lib_push(ctx, in_ffd, JSON_1, sizeof(JSON_1) - 1);
        flb_lib_push(ctx, in_ffd, JSON_2, sizeof(JSON_2) - 1);
    
        /* Stop the engine (5 seconds to flush remaining data) */
        flb_stop(ctx);
    
        /* Destroy library context, release all resources */
        flb_destroy(ctx);
    
        return 0;
    }


  • The default backend in the configuration is Elasticsearch, set by the Elasticsearch Output Plugin. It uses the Logstash format to ingest the logs. If you need a different Index and Type, please refer to the plugin options and make your own adjustments.

  • There is an option called Retry_Limit set to False, which means that if Fluent Bit cannot flush the records to Elasticsearch, it will retry indefinitely until it succeeds.


    Google Cloud Configuration

    Fluent Bit streams data into an existing BigQuery table using a service account that you specify. Therefore, before using the BigQuery output plugin, you must create a service account, create a BigQuery dataset and table, authorize the service account to write to the table, and provide the service account credentials to Fluent Bit.

    Creating a Service Account

    To stream data into BigQuery, the first step is to create a Google Cloud service account for Fluent Bit:

    • Creating a Google Cloud Service Account

    Creating a BigQuery Dataset and Table

    Fluent Bit does not create datasets or tables for your data, so you must create these ahead of time. You must also grant the service account WRITER permission on the dataset:

    • Creating and using datasets

    Within the dataset you will need to create a table for the data to reside in. You can follow these instructions for creating your table. Pay close attention to the schema; it must match the schema of your output JSON. Unfortunately, since BigQuery does not allow dots in field names, you will need to use a filter to change the fields for many of the standard inputs (e.g. mem or cpu).

    • Creating and using tables

    Retrieving Service Account Credentials

    The Fluent Bit BigQuery output plugin uses a JSON credentials file for authentication. Download the credentials file by following these instructions:

    • Creating and Managing Service Account Keys

    Configuration Parameters

    Key

    Description

    default

    google_service_credentials

    Absolute path to a Google Cloud credentials JSON file

    Value of the environment variable $GOOGLE_SERVICE_CREDENTIALS

    project_id

    The project id containing the BigQuery dataset to stream into.

    The value of the project_id in the credentials file

    dataset_id

    The dataset id of the BigQuery dataset to write into. This dataset must exist in your project.

    table_id

    Configuration File

    If you are using a Google Cloud Credentials File, the following configuration is enough to get you started:

    Configuration Parameters

    Key

    Description

    default

    google_service_credentials

    Absolute path to a Google Cloud credentials JSON file

    Value of environment variable $GOOGLE_SERVICE_CREDENTIALS

    service_account_email

    Account email associated to the service. Only available if no credentials file has been provided.

    Value of environment variable $SERVICE_ACCOUNT_EMAIL

    service_account_secret

    Private key content associated with the service account. Only available if no credentials file has been provided.

    Value of environment variable $SERVICE_ACCOUNT_SECRET

    resource

    Configuration File

    If you are using a Google Cloud Credentials File, the following configuration is enough to get started:

    Troubleshooting Notes

    Upstream connection error

    Github reference: #761

    An upstream connection error means Fluent Bit was not able to reach Google services, the error looks like this:

    This is caused by a network issue in the environment where Fluent Bit is running; make sure that from the Host, Container or Pod you can reach the following Google endpoints:

    • https://www.googleapis.com

    • https://logging.googleapis.com

    Other implementations

    Stackdriver officially supports a logging agent based on Fluentd.

    Google Cloud Stackdriver Logging
    Creating a Google Service Account for Stackdriver

    Lua

    Filter records using Lua Scripts.

    Parser

    Parse record.

    Record Modifier

    Modify record.

    Stdout

    Print records to the standard output interface.

    Throttle

    Apply rate limit to event flow.

    Nest

    Nest records under a specified key

    Modify

    Modifications to record.

    In order for a Filter to be applied over some data, the Match rule must exist and it must match the Tag of the incoming data.

    name

    title

    description

    grep

    Grep

    Match or exclude specific records by patterns.

    kubernetes

    Kubernetes

    Enrich logs with Kubernetes Metadata.

    Stream Processor

    Fluent Bit v1.1 comes with a new and optional Stream Processor Engine that allows you to perform data processing through SQL queries. This article covers the format of the expected configuration file.

    For more details about using the Stream Processor Engine, please refer to the following guide:

    • https://docs.fluentbit.io/stream-processing/

    Concepts

    The stream processor can be configured through defining Tasks which have a name and an execution SQL statement:

    Streams File Configuration

    The Stream Processor is configured through a streams file that is referenced from the main fluent-bit.conf configuration file through the Streams_File key. The content of the streams file must have the following format specified in the table below:

    Configuration Example

    Consider the following fluent-bit.conf configuration file:

    Now create a stream_processor.conf configuration file with the following content:

    In the query there are a few things happening:

    • Fluent Bit will gather CPU usage metrics through the CPU input plugin (metrics are calculated by default every second).

    • The Stream Processor has a Task attached to the incoming stream of data called cpu_data (check the alias set in the Input section).

    • The Stream Processor will aggregate the value of the cpu_p record field and calculate its average over a window of 5 seconds.

    You should see the following output in your terminal:

    If you want to learn more about our Stream Processor engine, please read the official guide.

    Serial Interface

    The serial input plugin allows you to retrieve messages/data from a Serial interface.

    Configuration Parameters

    Key

    Description

    File

    Getting Started

    In order to retrieve messages over the Serial interface, you can run the plugin from the command line or through the configuration file:

    Command Line

    The following example loads the serial input plugin with a Bitrate of 9600, listens on the /dev/tnt0 interface and uses the custom tag data to route the messages.

    The above interface (/dev/tnt0) is an emulation of the serial interface (more details at the bottom); for demonstration purposes we will write some messages to the other end of the interface, in this case /dev/tnt1, e.g:

    In Fluent Bit you should see an output like this:

    Now using the Separator configuration, we could send multiple messages at once (run this command after starting Fluent Bit):

    Configuration File

    In your main configuration file append the following Input & Output sections:

    Emulating Serial Interface on Linux

    The following content is some extra information that will allow you to emulate a serial interface on your Linux system, so you can test this Serial input plugin locally in case you don't have such an interface on your computer. The following procedure has been tested on Ubuntu 15.04 running Linux Kernel 4.0.

    Build and install the tty0tty module

    Download the sources

    Unpack and compile

    Copy the new kernel module into the kernel modules directory

    Load the module

    You should see new serial ports in /dev/ (ls /dev/tnt*). Give appropriate permissions to the new serial ports:

    When the module is loaded, it will interconnect the following virtual interfaces:

    Output Plugins

    The output plugins define where Fluent Bit should flush the information it gathers from the input. At the moment the available options are the following:

    $ kubectl create namespace logging
    $ kubectl create -f https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/fluent-bit-service-account.yaml
    $ kubectl create -f https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/fluent-bit-role.yaml
    $ kubectl create -f https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/fluent-bit-role-binding.yaml
    $ kubectl create -f https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/output/elasticsearch/fluent-bit-configmap.yaml
    $ kubectl create -f https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/output/elasticsearch/fluent-bit-ds.yaml
    $ kubectl create -f https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/output/elasticsearch/fluent-bit-ds-minikube.yaml
    [INPUT]
        Name  dummy
        Tag   dummy
    
    [OUTPUT]
        Name       bigquery
        Match      *
        dataset_id my_dataset
        table_id   dummy_table
    [INPUT]
        Name  cpu
        Tag   cpu
    
    [OUTPUT]
        Name        stackdriver
        Match       *
    [2019/01/07 23:24:09] [error] [oauth2] could not get an upstream connection

    The table id of the BigQuery table to write into. This table must exist in the specified dataset and the schema must match the output.

    Set resource type of data. Only global is supported.

    global

    lua
    parser
    record_modifier
    stdout
    throttle
    nest
    modify

    SQL statement to be executed by the task. Note that the SQL statement must end with a semicolon and must be set on one single line (no multiline support in the configuration).

    Yes

    • Every 5 seconds, the Stream Processor will send the results back into the Fluent Bit pipeline with a tag called results.
  • The Fluent Bit output section will match records tagged results and print them to the standard output interface.

  • Concept

    Description

    Task

    Definition of a Stream Processor task to be executed. A task is defined through a section called STREAM_TASK.

    Name

    Tasks have a name for debugging and testing purposes.

    Exec

    SQL statement to be executed when a Task runs.

    Section

    Key

    Description

    Mandatory?

    STREAM_TASK

    Name

    Set a name for the task in question. The value is used as a reference only.

    Yes


    Exec

    Absolute path to the device entry, e.g: /dev/ttyS0

    Bitrate

    The bitrate for the communication, e.g: 9600, 38400, 115200, etc

    Min_Bytes

    The serial interface will expect at least Min_Bytes to be available before processing the message (default: 1)

    Separator

    Allows you to specify a separator string that is used to determine when a message ends.

    Format

    Specify the format of the incoming data stream. The only option available is 'json'. Note that Format and Separator cannot be used at the same time.

    [SERVICE]
        Flush        1
        Log_Level    info
        Streams_File stream_processor.conf
    
    [INPUT]
        Name         cpu
        alias        cpu_data
    
    [OUTPUT]
        Name         stdout
        Match        results
    [STREAM_TASK]
        Name   cpu_test
        Exec   CREATE STREAM cpu WITH (tag='results') AS SELECT AVG(cpu_p) from STREAM:cpu_data WINDOW TUMBLING (5 SECOND);
    $ bin/fluent-bit -c fluent-bit.conf 
    Fluent Bit v1.1.0
    Copyright (C) Treasure Data
    
    [2019/05/17 11:26:34] [ info] [storage] initializing...
    [2019/05/17 11:26:34] [ info] [storage] in-memory
    [2019/05/17 11:26:34] [ info] [storage] normal synchronization mode, checksum disabled
    [2019/05/17 11:26:34] [ info] [engine] started (pid=16769)
    [2019/05/17 11:26:34] [ info] [sp] stream processor started
    [2019/05/17 11:26:34] [ info] [sp] registered task: cpu_test
    [0] results: [1558085199.000175517, {"AVG(cpu_p)"=>2.750000}]
    [0] results: [1558085204.000151430, {"AVG(cpu_p)"=>3.400000}]
    [0] results: [1558085209.000131753, {"AVG(cpu_p)"=>1.700000}]
    [0] results: [1558085214.000147562, {"AVG(cpu_p)"=>3.500000}]
    [0] results: [1558085219.000118591, {"AVG(cpu_p)"=>2.050000}]
    [0] results: [1558085224.000179645, {"AVG(cpu_p)"=>26.375000}]
    $ fluent-bit -i serial -t data -p File=/dev/tnt0 -p BitRate=9600 -o stdout -m '*'
    $ echo 'this is some message' > /dev/tnt1
    $ fluent-bit -i serial -t data -p File=/dev/tnt0 -p BitRate=9600 -o stdout -m '*'
    Fluent-Bit v0.8.0
    Copyright (C) Treasure Data
    
    [2016/05/20 15:44:39] [ info] starting engine
    [0] data: [1463780680, {"msg"=>"this is some message"}]
    $ echo 'aaXbbXccXddXee' > /dev/tnt1
    $ fluent-bit -i serial -t data -p File=/dev/tnt0 -p BitRate=9600 -p Separator=X -o stdout -m '*'
    Fluent-Bit v0.8.0
    Copyright (C) Treasure Data
    
    [2016/05/20 16:04:51] [ info] starting engine
    [0] data: [1463781902, {"msg"=>"aa"}]
    [1] data: [1463781902, {"msg"=>"bb"}]
    [2] data: [1463781902, {"msg"=>"cc"}]
    [3] data: [1463781902, {"msg"=>"dd"}]
    [INPUT]
        Name      serial
        Tag       data
        File      /dev/tnt0
        BitRate   9600
        Separator X
    
    [OUTPUT]
        Name   stdout
        Match  *
    $ git clone https://github.com/freemed/tty0tty
    $ cd tty0tty/module
    $ make
    $ sudo cp tty0tty.ko /lib/modules/$(uname -r)/kernel/drivers/misc/
    $ sudo depmod
    $ sudo modprobe tty0tty
    $ sudo chmod 666 /dev/tnt*
    /dev/tnt0 <=> /dev/tnt1
    /dev/tnt2 <=> /dev/tnt3
    /dev/tnt4 <=> /dev/tnt5
    /dev/tnt6 <=> /dev/tnt7

    Count Records

    Simple records counter.

    Elasticsearch

    Flush records to an Elasticsearch server.

    File

    Flush records to a file.

    FlowCounter

    Count records.

    Forward

    Fluentd forward protocol.

    HTTP

    Flush records to an HTTP end point.

    InfluxDB

    Flush records to InfluxDB time series database.

    Apache Kafka

    Flush records to Apache Kafka

    Kafka REST Proxy

    Flush records to a Kafka REST Proxy server.

    Google Stackdriver Logging

    Flush records to Google Stackdriver Logging service.

    Standard Output

    Flush records to the standard output.

    Splunk

    Flush records to a Splunk Enterprise service

    Flush records to the Treasure Data cloud service for analytics.

    NATS

    flush records to a NATS server.

    NULL

    throw away events.

    name

    title

    description

    azure

    Azure Log Analytics

    Ingest records into Azure Log Analytics

    bigquery

    BigQuery

    Ingest records into Google BigQuery

    Fluent Bit

    Parser

    The Parser Filter plugin allows parsing fields in event records.

    Configuration Parameters

    The plugin supports the following configuration parameters:

    Key

    Description

    Default

    Getting Started

    Configuration File

    This is an example of parsing a record {"data":"100 0.5 true This is example"}.

    The plugin needs a parser file which defines how to parse the field.

    The path of the parser file should be specified in the [SERVICE] section of the configuration file.

    The output is

    You can see the record {"data":"100 0.5 true This is example"} is parsed.

    Preserve original fields

    By default, the parser plugin only keeps the parsed fields in its output.

    If you enable Preserve_Key, the original key field is preserved:

    This will produce the output:

    If you enable Reserve_Data, all other fields are preserved:

    This will produce the output:

    Splunk

    The Splunk output plugin allows you to ingest your records into a Splunk Enterprise service through the HTTP Event Collector (HEC) interface.

    To get more details about how to setup the HEC in Splunk please refer to the following documentation: Splunk / Use the HTTP Event Collector

    Configuration Parameters

    Key

    Description

    TLS / SSL

    The Splunk output plugin supports TLS/SSL; for more details about the properties available and general configuration, please refer to the TLS/SSL section.

    Getting Started

    In order to insert records into a Splunk service, you can run the plugin from the command line or through the configuration file:

    Command Line

    The splunk plugin can read the parameters from the command line through the -p argument (property), e.g:

    Configuration File

    In your main configuration file append the following Input & Output sections:

    Data format

    By default, the Splunk output plugin nests the record under the event key in the payload sent to the HEC. It will also append the time of the record to a top level time key.

    If you would like to customize any of the Splunk event metadata, such as the host or target index, you can set Splunk_Send_Raw On in the plugin configuration, and add the metadata as keys/values in the record. Note: with Splunk_Send_Raw enabled, you are responsible for creating and populating the event section of the payload.

    For example, to add a custom index and hostname:

    This will create a payload that looks like:

    For more information on the Splunk HEC payload format and all event metadata Splunk accepts, see here:

    Monitoring

    Fluent Bit comes with a built-in HTTP Server that can be used to query internal information and monitor metrics of each running plugin.

    Getting Started

    To get started, the first step is to enable the HTTP Server from the configuration file:
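    A minimal sketch of such a [SERVICE] section (the listen address and port shown are illustrative defaults):

    [SERVICE]
        HTTP_Server  On
        HTTP_Listen  0.0.0.0
        HTTP_Port    2020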

    The above configuration snippet will instruct Fluent Bit to start its HTTP Server on TCP port 2020, listening on all network interfaces:

    Now a simple curl command is enough to gather some information:
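    For example, querying the root endpoint (assuming the HTTP Server was enabled on port 2020 as above):

    $ curl -s http://127.0.0.1:2020 | jq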

    Note that we are sending the curl command output to the jq program, which helps to make the JSON data easy to read from the terminal. Fluent Bit does not aim to do JSON pretty-printing.

    Elasticsearch

    The es output plugin allows you to flush your records into an Elasticsearch database. The following instructions assume that you have a fully operational Elasticsearch service running in your environment.

    Configuration Parameters

    Lua

    Lua Filter allows you to modify the incoming records using custom Lua scripts.

    Due to the necessity of having a flexible filtering mechanism, it is now possible to extend Fluent Bit capabilities by writing simple filters using the Lua programming language. A Lua-based filter takes two steps:

    • Configure the Filter in the main configuration

    • Prepare a Lua script that will be used by the Filter

    Content:

    InfluxDB

    The influxdb output plugin allows you to flush your records into an InfluxDB time series database. The following instructions assume that you have a fully operational InfluxDB service running in your system.

    Configuration Parameters

    HTTP

    The http output plugin allows you to flush your records into an HTTP endpoint. For now the functionality is pretty basic: it issues a POST request with the data records in MessagePack (or JSON) format.

    Configuration Parameters

    Input Plugins

    The input plugins define the sources from which Fluent Bit can collect data; a source can be a network interface, radio hardware or some built-in metric. As of this version, the following input plugins are available:

    Regular Expression Parser

    The regex parser allows you to define a custom Ruby Regular Expression that uses named capture groups to define which content belongs to which key name.

    Fluent Bit uses the Onigmo regular expression library in Ruby mode; for testing purposes you can use the following web editor to test your expressions: http://rubular.com/

    Important: do not attempt to add multiline support in your regular expressions if you are using the Tail input plugin, since each line is handled as a separate entity. Instead use the Tail Multiline support configuration feature.

    Note: understanding how regular expressions work is out of the scope of this content.

    From a configuration perspective, when the format is set to regex, it is mandatory and expected that a Regex configuration key exists.

    counter
    es
    file
    flowcounter
    forward
    http
    influxdb
    kafka
    kafka-rest
    stackdriver
    stdout
    splunk
    td
    Treasure Data
    Treasure Data
    nats
    null

    Key_Name

    Specify field name in record to parse.

    Parser

    Specify the parser name to interpret the field. Multiple Parser entries are allowed (one per line).

    Preserve_Key

    Keep original Key_Name field in the parsed result. If false, the field will be removed.

    False

    Reserve_Data

    Keep all other original fields in the parsed result. If false, all other original fields will be removed.

    False

    Unescape_Key

    If the key is an escaped string (e.g. stringified JSON), unescape the string before applying the parser.

    False

    default

    Host

    IP address or hostname of the target Splunk service.

    127.0.0.1

    Port

    TCP port of the target Splunk service.

    8088

    Splunk_Token

    Specify the Authentication Token for the HTTP Event Collector interface.

    Splunk_Send_Raw

    When enabled, the record keys and values are set in the top level of the map instead of under the event key.

    Off

    HTTP_User

    Optional username for Basic Authentication on HEC

    HTTP_Passwd

    Password for user defined in HTTP_User

    TLS/SSL
    http://docs.splunk.com/Documentation/Splunk/latest/Data/AboutHEC

    Configuration Parameters

  • Getting Started

  • Lua Script Filter API

  • Configuration Parameters

    The plugin supports the following configuration parameters:

    Key

    Description

    Script

    Path to the Lua script that will be used.

    Call

    Lua function name that will be triggered to do filtering. It's assumed that the function is declared inside the Script defined above.

    Type_int_key

    If the key is matched, that field will be converted to integer.

    Getting Started

    In order to test the filter, you can run the plugin from the command line or through the configuration file. The following examples use the dummy input plugin for data ingestion, invoke the Lua filter using the test.lua script and call the cb_print() function, which only prints the same information to the standard output:

    Command Line

    From the command line you can use the following options:

    Configuration File

    In your main configuration file append the following Input, Filter & Output sections:

    Lua Script Filter API

    The life cycle of a filter has the following steps:

    • Upon Tag matching by filter_lua, it may process or bypass the record.

    • If filter_lua accepts the record, it will invoke the function defined in the call property which basically is the name of a function defined in the Lua script.

    • Invoke Lua function passing each record in JSON format.

    • Upon return, validate return value and take some action (described above)

    Callback Prototype

    The Lua script can have one or multiple callbacks that can be used by filter_lua; the prototype is as follows:

    Function Arguments

    name

    description

    tag

    Name of the tag associated with the incoming record.

    timestamp

    Unix timestamp with nanoseconds associated with the incoming record. The original format is a double (seconds.nanoseconds)

    record

    Lua table with the record content

    Return Values

    Each callback must return three values:

    name

    data type

    description

    code

    integer

    The code return value represents the result and the further action that may follow. If code equals -1, it means that filter_lua must drop the record. If code equals 0, the record will not be modified. Otherwise, if code equals 1, it means the original timestamp or record have been modified, so they must be replaced by the returned values from timestamp (second return value) and record (third return value).

    timestamp

    double

    If code equals 1, the original record timestamp will be replaced with this new value.

    record

    table

    If code equals 1, the original record information will be replaced with this new value. Note that the format of this value must be a valid Lua table.

    Code Examples

    For functional examples of this interface, please refer to the code samples provided in the source code of the project located here:

    https://github.com/fluent/fluent-bit/tree/master/scripts

    Number Type

    In Lua, Fluent Bit treats numbers as double. This means an integer field (e.g. IDs, log levels) will be converted to double. To avoid this type conversion, the Type_int_key property is available.

    Lua

    IP address or hostname of the target InfluxDB service

    127.0.0.1

    Port

    TCP port of the target InfluxDB service

    8086

    Database

    InfluxDB database name where records will be inserted

    fluentbit

    Sequence_Tag

    The name of the tag whose value is incremented for the consecutive simultaneous events.

    _seq

    HTTP_User

    Optional username for HTTP Basic Authentication

    HTTP_Passwd

    Password for user defined in HTTP_User

    Tag_Keys

    Space separated list of keys that need to be tagged

    Auto_Tags

    Automatically tag keys where value is string. This option takes a boolean value: True/False, On/Off.

    Off

    TLS / SSL

    The InfluxDB output plugin supports TLS/SSL; for more details about the properties available and general configuration, please refer to the TLS/SSL section.

    Getting Started

    In order to start inserting records into an InfluxDB service, you can run the plugin from the command line or through the configuration file:

    Command Line

    The influxdb plugin can read the parameters from the command line in two ways: through the -p argument (property) or by setting them directly through the service URI. The URI format is the following:

    Using the format specified, you could start Fluent Bit through:

    Configuration File

    In your main configuration file append the following Input & Output sections:

    Tagging

    Basic example of Tag_Keys usage:

    Using Auto_Tags=On in this example would cause an error, because every parsed field value is a string. The best use of this option is with metric-like records where one or more field values are not string typed.

    Testing

    Before starting Fluent Bit, make sure the target database exists on InfluxDB. Using the above example, we will insert the data into a fluentbit database.

    1. Create database

    Log into InfluxDB console:

    Create the database:

    Check the database exists:

    2. Run Fluent Bit

    The following command will gather CPU metrics from the system and send the data to InfluxDB database every five seconds:

    Note that all records coming from the cpu input plugin have the tag cpu; this tag is used to generate the measurement in InfluxDB.

    3. Query the data

    From InfluxDB console, choose your database:

    Now query some specific fields:

    The CPU input plugin gathers more metrics per CPU core; in the above example we just selected three specific metrics. The following query will give a full result:

    4. View tags

    Query tagged keys:

    And now query method key values:

    Key

    Description

    default

    InfluxDB

    Host

    The following parser configuration example aims to provide rules that can be applied to an Apache HTTP Server log entry:

    As an example, take the following Apache HTTP Server log entry:

    The above content does not provide a defined structure for Fluent Bit, but by enabling the proper parser we can obtain a structured representation of it:

    In order to understand, learn and test regular expressions like the example above, we suggest you try the following Ruby Regular Expression Editor: http://rubular.com/r/X7BH0M4Ivm

    [PARSER]
        Name   apache
        Format regex
        Regex  ^(?<host>[^ ]*) [^ ]* (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*?)(?: +\S*)?)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$
        Time_Key time
        Time_Format %d/%b/%Y:%H:%M:%S %z
    192.168.2.20 - - [29/Jul/2015:10:27:10 -0300] "GET /cgi-bin/try/ HTTP/1.0" 200 3395
    [PARSER]
        Name dummy_test
        Format regex
        Regex ^(?<INT>[^ ]+) (?<FLOAT>[^ ]+) (?<BOOL>[^ ]+) (?<STRING>.+)$
    [SERVICE]
        Parsers_File /path/to/parsers.conf
    
    [INPUT]
        Name dummy
        Tag  dummy.data
        Dummy {"data":"100 0.5 true This is example"}
    
    [FILTER]
        Name parser
        Match dummy.*
        Key_Name data
        Parser dummy_test
    
    [OUTPUT]
        Name stdout
        Match *
    $ fluent-bit -c dummy.conf
    Fluent-Bit v0.12.0
    Copyright (C) Treasure Data
    
    [2017/07/06 22:33:12] [ info] [engine] started
    [0] dummy.data: [1499347993.001371317, {"INT"=>"100", "FLOAT"=>"0.5", "BOOL"=>"true", "STRING"=>"This is example"}]
    [1] dummy.data: [1499347994.001303118, {"INT"=>"100", "FLOAT"=>"0.5", "BOOL"=>"true", "STRING"=>"This is example"}]
    [2] dummy.data: [1499347995.001296133, {"INT"=>"100", "FLOAT"=>"0.5", "BOOL"=>"true", "STRING"=>"This is example"}]
    [3] dummy.data: [1499347996.001320284, {"INT"=>"100", "FLOAT"=>"0.5", "BOOL"=>"true", "STRING"=>"This is example"}]
    [PARSER]
        Name dummy_test
        Preserve_Key On
        Format regex
        Regex ^(?<INT>[^ ]+) (?<FLOAT>[^ ]+) (?<BOOL>[^ ]+) (?<STRING>.+)$
    $ fluent-bit -c dummy.conf
    Fluent-Bit v0.12.0
    Copyright (C) Treasure Data
    
    [2017/07/06 22:33:12] [ info] [engine] started
    [0] dummy.data: [1499347993.001371317, {"data":"100 0.5 true This is example", "INT"=>"100", "FLOAT"=>"0.5", "BOOL"=>"true", "STRING"=>"This is example"}]
    [1] dummy.data: [1499347994.001303118, {"data":"100 0.5 true This is example", "INT"=>"100", "FLOAT"=>"0.5", "BOOL"=>"true", "STRING"=>"This is example"}]
    [2] dummy.data: [1499347995.001296133, {"data":"100 0.5 true This is example", "INT"=>"100", "FLOAT"=>"0.5", "BOOL"=>"true", "STRING"=>"This is example"}]
    [3] dummy.data: [1499347996.001320284, {"data":"100 0.5 true This is example", "INT"=>"100", "FLOAT"=>"0.5", "BOOL"=>"true", "STRING"=>"This is example"}]
    [PARSER]
        Name dummy_test
        Reserve_Data On
        Format regex
        Regex ^(?<INT>[^ ]+) (?<FLOAT>[^ ]+) (?<BOOL>[^ ]+) (?<STRING>.+)$
    [SERVICE]
        Parsers_File /path/to/parsers.conf
    
    [INPUT]
        Name dummy
        Tag  dummy.data
        Dummy {"data":"100 0.5 true This is example", "key1":"value1", "key2":"value2"}
    $ fluent-bit -c dummy.conf
    Fluent-Bit v0.12.0
    Copyright (C) Treasure Data
    
    [2017/07/06 22:33:12] [ info] [engine] started
    [0] dummy.data: [1499347993.001371317, {"INT"=>"100", "FLOAT"=>"0.5", "BOOL"=>"true", "STRING"=>"This is example"}, "key1":"value1", "key2":"value2"]
    [1] dummy.data: [1499347994.001303118, {"INT"=>"100", "FLOAT"=>"0.5", "BOOL"=>"true", "STRING"=>"This is example"}, "key1":"value1", "key2":"value2"]
    [2] dummy.data: [1499347995.001296133, {"INT"=>"100", "FLOAT"=>"0.5", "BOOL"=>"true", "STRING"=>"This is example"}, "key1":"value1", "key2":"value2"]
    [3] dummy.data: [1499347996.001320284, {"INT"=>"100", "FLOAT"=>"0.5", "BOOL"=>"true", "STRING"=>"This is example"}, "key1":"value1", "key2":"value2"]
    $ fluent-bit -i cpu -t cpu -o splunk -p host=127.0.0.1 -p port=8088 \
      -p tls=on -p tls.verify=off -m '*'
    [INPUT]
        Name  cpu
        Tag   cpu
    
    [OUTPUT]
        Name        splunk
        Match       *
        Host        127.0.0.1
        Port        8088
        TLS         On
        TLS.Verify  Off
        Message_Key my_key
    [INPUT]
        Name  cpu
        Tag   cpu
    
    # nest the record under the 'event' key
    [FILTER]
        Name nest
        Match *
        Operation nest
        Wildcard *
        Nest_under event
    
    # add event metadata
    [FILTER]
        Name      modify
        Match     *
        Add index my-splunk-index
        Add host  my-host
    
    [OUTPUT]
        Name        splunk
        Match       *
        Host        127.0.0.1
        Splunk_Token xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx
        Splunk_Send_Raw On
    {
        "time": "1535995058.003385189",
        "index": "my-splunk-index",
        "host": "my-host",
        "event": {
            "cpu_p":0.000000,
            "user_p":0.000000,
            "system_p":0.000000
        }
    }
    $ fluent-bit -i dummy -F lua -p script=test.lua -p call=cb_print -m '*' -o null
    [INPUT]
        Name   dummy
    
    [FILTER]
        Name    lua
        Match   *
        script  test.lua
        call    cb_print
    
    [OUTPUT]
        Name   null
        Match  *
    function cb_print(tag, timestamp, record)
       return code, timestamp, record
    end
    influxdb://host:port
    $ fluent-bit -i cpu -t cpu -o influxdb://127.0.0.1:8086 -m '*'
    [INPUT]
        Name  cpu
        Tag   cpu
    
    [OUTPUT]
        Name          influxdb
        Match         *
        Host          127.0.0.1
        Port          8086
        Database      fluentbit
        Sequence_Tag  _seq
    [INPUT]
        Name            tail
        Tag             apache.access
        parser          apache2
        path            /var/log/apache2/access.log
    
    [OUTPUT]
        Name          influxdb
        Match         *
        Host          127.0.0.1
        Port          8086
        Database      fluentbit
        Sequence_Tag  _seq
        # make tags from method and path fields
        Tag_Keys      method path
    $ influx
    Visit https://enterprise.influxdata.com to register for updates, InfluxDB server management, and monitoring.
    Connected to http://localhost:8086 version 1.1.0
    InfluxDB shell version: 1.1.0
    >
    > create database fluentbit
    >
    > show databases
    name: databases
    name
    ----
    _internal
    fluentbit
    
    >
    $ bin/fluent-bit -i cpu -t cpu -o influxdb -m '*'
    > use fluentbit
    Using database fluentbit
    > SELECT cpu_p, system_p, user_p FROM cpu
    name: cpu
    time                  cpu_p   system_p    user_p
    ----                  -----   --------    ------
    1481132860000000000   2.75        0.5      2.25
    1481132861000000000   2           0.5      1.5
    1481132862000000000   4.75        1.5      3.25
    1481132863000000000   6.75        1.25     5.5
    1481132864000000000   11.25       3.75     7.5
    > SELECT * FROM cpu
    > SHOW TAG KEYS ON fluentbit FROM "apache.access"
    name: apache.access
    tagKey
    ------
    _seq
    method
    path
    > SHOW TAG VALUES ON fluentbit FROM "apache.access" WITH KEY = "method"
    name: apache.access
    key    value
    ---    -----
    method "MATCH"
    method "POST"
    [1154104030, {"host"=>"192.168.2.20",
                  "user"=>"-",
                  "method"=>"GET",
                  "path"=>"/cgi-bin/try/",
                  "code"=>"200",
                  "size"=>"3395",
                  "referer"=>"",
                  "agent"=>""
                  }
    ]

    REST API Interface

    Fluent Bit aims to expose useful interfaces for monitoring; as of Fluent Bit v0.14 the following endpoints are available:

    URI

    Description

    Data Format

    /

    Fluent Bit build information

    JSON

    /api/v1/uptime

    Get uptime information in seconds and human readable format

    JSON

    /api/v1/metrics

    Internal metrics per loaded plugin

    JSON

    /api/v1/metrics/prometheus

    Uptime Example

    Query the service uptime with the following command:
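    For example, assuming the HTTP Server is listening on 127.0.0.1:2020:

    $ curl -s http://127.0.0.1:2020/api/v1/uptime | jq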

    It should print an output similar to this:

    Metrics Examples

    Query internal metrics in JSON format with the following command:
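    For example (again assuming the HTTP Server listens on 127.0.0.1:2020):

    $ curl -s http://127.0.0.1:2020/api/v1/metrics | jq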

    It should print an output similar to this:

    Metrics in Prometheus format

    Query internal metrics in Prometheus Text 0.0.4 format:
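    For example (same address and port assumed):

    $ curl -s http://127.0.0.1:2020/api/v1/metrics/prometheus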

    this time the same metrics will be in Prometheus format instead of JSON:

    Configuring Aliases

    By default, plugins configured at runtime get an internal name in the format plugin_name.ID. For monitoring purposes this can be confusing if many plugins of the same type were configured. To make a distinction, each configured input or output section can get an alias that will be used as the parent name for the metric.

    The following example sets an alias on the INPUT section which is using the CPU input plugin:
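    A sketch of such a configuration; the alias name server1_cpu is only an example:

    [SERVICE]
        HTTP_Server  On
        HTTP_Listen  0.0.0.0
        HTTP_Port    2020

    [INPUT]
        Name   cpu
        Alias  server1_cpu

    [OUTPUT]
        Name   stdout
        Match  *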

    Now when querying the metrics we get the aliases in place instead of the plugin name:

    IP address or hostname of the target Elasticsearch instance

    127.0.0.1

    Port

    TCP port of the target Elasticsearch instance

    9200

    Path

    Elasticsearch accepts new data on HTTP query path "/_bulk". But it is also possible to serve Elasticsearch behind a reverse proxy on a subpath. This option defines such path on the fluent-bit side. It simply adds a path prefix in the indexing HTTP POST URI.

    Empty string

    Buffer_Size

    Specify the buffer size used to read the response from the Elasticsearch HTTP service. This option is useful for debugging purposes where it is required to read full responses; note that the response size grows depending on the number of records inserted. To set an unlimited amount of memory set this value to False, otherwise the value must follow the Unit Size specification.

    4KB

    Pipeline

    Newer versions of Elasticsearch allow setting up filters called pipelines. This option allows defining which pipeline the database should use. For performance reasons it is strongly suggested to do parsing and filtering on the Fluent Bit side and avoid pipelines.

    HTTP_User

    Optional username credential for Elastic X-Pack access

    HTTP_Passwd

    Password for user defined in HTTP_User

    Index

    Index name

    fluentbit

    Type

    Type name

    flb_type

    Logstash_Format

    Enable Logstash format compatibility. This option takes a boolean value: True/False, On/Off

    Off

    Logstash_Prefix

    When Logstash_Format is enabled, the Index name is composed using a prefix and the date, e.g: if Logstash_Prefix is equal to 'mydata' your index will become 'mydata-YYYY.MM.DD'. The last string appended belongs to the date when the data is being generated.

    logstash

    Logstash_DateFormat

    Time format (based on strftime) to generate the second part of the Index name.

    %Y.%m.%d

    Time_Key

    When Logstash_Format is enabled, each record will get a new timestamp field. The Time_Key property defines the name of that field.

    @timestamp

    Time_Key_Format

    When Logstash_Format is enabled, this property defines the format of the timestamp.

    %Y-%m-%dT%H:%M:%S

    Include_Tag_Key

    When enabled, it appends the Tag name to the record.

    Off

    Tag_Key

    When Include_Tag_Key is enabled, this property defines the key name for the tag.

    _flb-key

    Generate_ID

    When enabled, generate _id for outgoing records. This prevents duplicate records when retrying ES.

    Off

    Replace_Dots

    When enabled, replace field name dots with underscore, required by Elasticsearch 2.0-2.3.

    Off

    Trace_Output

    When enabled, print the Elasticsearch API calls to stdout (for diagnostics only)

    Off

    Current_Time_Index

    Use current time for index generation instead of message record

    Off

    Logstash_Prefix_Key

    Prefix keys with this string

    The parameters index and type can be confusing if you are new to Elastic, if you have used a common relational database before, they can be compared to the database and table concepts.

    TLS / SSL

    The Elasticsearch output plugin supports TLS/SSL; for more details about the properties available and general configuration, please refer to the TLS/SSL section.

    Getting Started

    In order to insert records into an Elasticsearch service, you can run the plugin from the command line or through the configuration file:

    Command Line

    The es plugin can read the parameters from the command line in two ways: through the -p argument (property) or by setting them directly through the service URI. The URI format is the following:

    Using the format specified, you could start Fluent Bit through:

    which is similar to do:

    Configuration File

    In your main configuration file append the following Input & Output sections:
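    A sketch of such a configuration using the properties from the table above (host, index and type values are placeholders):

    [INPUT]
        Name  cpu
        Tag   cpu

    [OUTPUT]
        Name   es
        Match  *
        Host   192.168.2.3
        Port   9200
        Index  my_index
        Type   my_type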

    About Elasticsearch field names

    Some input plugins may generate messages where the field names contain dots. Since Elasticsearch 2.0 this is no longer allowed, so the current es plugin replaces them with an underscore, e.g:

    becomes

    Key

    Description

    default

    Elasticsearch

    Host

    IP address or hostname of the target HTTP Server

    127.0.0.1

    HTTP_User

    Basic Auth Username

    HTTP_Passwd

    Basic Auth Password. Requires HTTP_User to be set

    Port

    TCP port of the target HTTP Server

    80

    Proxy

    Specify an HTTP Proxy. The expected format of this value is http://host:port. Note that https is not supported yet.

    URI

    Specify an optional HTTP URI for the target web server, e.g: /something

    /

    Format

    Specify the data format to be used in the HTTP request body; by default it uses msgpack. Other supported formats are json, json_stream, json_lines and gelf.

    msgpack

    header_tag

    Specify an optional HTTP header field for the original message tag.

    Header

    Add a HTTP header key/value pair. Multiple headers can be set.

    json_date_key

    Specify the name of the date field in output

    date

    json_date_format

    Specify the format of the date. Supported formats are double and iso8601 (eg: 2018-05-30T09:39:52.000681Z)

    double

    gelf_timestamp_key

    Specify the key to use for timestamp in gelf format

    gelf_host_key

    Specify the key to use for the host in gelf format

    gelf_short_message_key

    Specify the key to use as the short message in gelf format

    gelf_full_message_key

    Specify the key to use for the full message in gelf format

    gelf_level_key

    Specify the key to use for the level in gelf format

    TLS / SSL

    The HTTP output plugin supports TLS/SSL; for more details about the properties available and general configuration, please refer to the TLS/SSL section.

    Getting Started

    In order to insert records into an HTTP server, you can run the plugin from the command line or through the configuration file:

    Command Line

    The http plugin can read the parameters from the command line in two ways: through the -p argument (property) or by setting them directly through the service URI. The URI format is the following:

    Using the format specified, you could start Fluent Bit through:

    Configuration File

    In your main configuration file, append the following Input & Output sections:
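    A sketch of such a configuration using the properties from the table above (host, port and URI values are placeholders):

    [INPUT]
        Name  cpu
        Tag   cpu

    [OUTPUT]
        Name    http
        Match   *
        Host    192.168.2.3
        Port    80
        URI     /something
        Format  json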

    By default, the URI becomes the tag of the message and the original tag is ignored. To retain the tag, multiple configuration sections have to be defined, each flushing to a different URI.

    Another approach we also support is sending the original message tag in a configurable header. It's up to the receiver to do what it wants with that header field: parse it and use it as the tag, for example.

    To configure this behaviour, add this config:
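    A sketch of an output section using the header_tag property described above; the header name FLUENT-TAG is only an example:

    [OUTPUT]
        Name        http
        Match       *
        Host        192.168.2.3
        Port        80
        URI         /something
        Format      json
        header_tag  FLUENT-TAG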

    Provided you are using Fluentd as data receiver, you can combine in_http and out_rewrite_tag_filter to make use of this HTTP header.

    Notice how we override the tag, which comes from the URI path, with our custom header.

    Example : Add a header

    Key

    Description

    default

    MessagePack

    Host

    Dummy

    generate dummy event.

    Exec

    executes external program and collects event logs.

    Forward

    Fluentd forward protocol.

    Head

    read first part of files.

    Health

    Check health of TCP services.

    Kernel Log Buffer

    read the Linux Kernel log buffer messages.

    Memory Usage

    measure the total amount of memory used on the system.

    MQTT

    start a MQTT server and receive publish messages.

    Network Traffic

    measure network traffic.

    Process

    Check health of Process.

    Random

    Generate Random samples.

    Serial Interface

    read data information from the serial interface.

    Standard Input

    read data from the standard input.

    Syslog

    read syslog messages from a Unix socket.

    Systemd

    read logs from Systemd/Journald.

    Tail

    Tail log files

    TCP

    Listen for JSON messages over TCP.

    name

    title

    description

    cpu

    CPU Usage

    measure total CPU usage of the system.

    disk

    Disk Usage

    measure Disk I/Os.

    Fluent Bit

    Library API

    The Fluent Bit library is written in the C language and can be used from any C or C++ application. Before digging into the specification, it is recommended to understand the workflow involved in the runtime.

    Workflow

    Fluent Bit runs as a service, meaning that the API exposed for developers provides interfaces to create and manage a context, specify inputs/outputs, set configuration parameters and set routing paths for the events/records. A typical usage of the library involves:

    • Create library instance/context and set properties.

    • Enable input plugin(s) and set properties.

    • Enable output plugin(s) and set properties.

    • Start the library runtime.

    • Optionally ingest records manually.

    • Stop the library runtime.

    • Destroy library instance/context.

    Data Types

    Starting from Fluent Bit v0.9, there is only one data type exposed by the library, by convention prefixed with flb_.

    API Reference

    Library Context Creation

    As described earlier, the first step to use the library is to create a context for it; for this purpose the function flb_create() is used.

    Prototype

    Return Value

    On success, flb_create() returns the library context; on error, it returns NULL.

    Usage

    Set Service Properties

    Using the flb_service_set() function it is possible to set context properties.

    Prototype

    Return Value

    On success it returns 0; on error it returns a negative number.

    Usage

    flb_service_set() allows you to set one or more properties in a key/value string mode, e.g:
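    A minimal sketch that sets the Flush property (the value "1" is illustrative):

    flb_service_set(ctx, "Flush", "1", NULL);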

    The above example specifies the value for the Flush property. Note that the value is always a string (char *) and once there are no more parameters, a NULL argument must be added at the end of the list.

    Enable Input Plugin Instance

    When built, the library contains a certain number of built-in input plugins. In order to enable an input plugin, the function flb_input() is used to create an instance of it.

    For plugins, an instance means a context of the plugin enabled. You can create multiple instances of the same plugin.

    Prototype

    The argument ctx represents the library context created by flb_create(), then name is the name of the input plugin that is required to enable.

    The third argument data can be used to pass a custom reference to the plugin instance, this is mostly used by custom or third party plugins, for generic plugins passing NULL is OK.

    Return Value

    On success, flb_input() returns an integer value >= zero (similar to a file descriptor); on error, it returns a negative number.

    Usage

    Set Input Plugin Properties

    A plugin instance created through flb_input() may provide some configuration properties. Using the flb_input_set() function it is possible to set these properties.

    Prototype

    Return Value

    On success it returns 0; on error it returns a negative number.

    Usage

    flb_input_set() allows you to set one or more properties in a key/value string mode, e.g:
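    A sketch consistent with the description below; the property values "my_records" and "false" are placeholders:

    flb_input_set(ctx, in_ffd, "tag", "my_records", "ssl", "false", NULL);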

    The argument ctx represents the library context created by flb_create(). The above example specifies the values for the properties tag and ssl. Note that the value is always a string (char *) and once there are no more parameters, a NULL argument must be added at the end of the list.

    The properties allowed per input plugin are specified on each specific plugin documentation.

    Enable Output Plugin Instance

    When built, the library contains a certain number of built-in output plugins. In order to enable an output plugin, the function flb_output() is used to create an instance of it.

    For plugins, an instance means a context of the plugin enabled. You can create multiple instances of the same plugin.

    Prototype

    The argument ctx represents the library context created by flb_create(), then name is the name of the output plugin that is required to enable.

    The third argument data can be used to pass a custom reference to the plugin instance, this is mostly used by custom or third party plugins, for generic plugins passing NULL is OK.

    Return Value

    On success, flb_output() returns an integer value >= zero (similar to a file descriptor); on error, it returns a negative number.

    Usage

    Set Output Plugin Properties

    A plugin instance created through flb_output() may provide some configuration properties. Using the flb_output_set() function it is possible to set these properties.

    Prototype

    Return Value

    On success it returns an integer value >= zero (similar to a file descriptor); on error it returns a negative number.

    Usage

    flb_output_set() allows setting one or more properties as key/value string pairs, e.g:

    The argument ctx represents the library context created by flb_create(). The above example sets values for the properties tag and ssl; note that a value is always a string (char *), and once there are no more parameters a NULL argument must be added at the end of the list.

    The properties allowed for each output plugin are specified in its plugin documentation.

    Start Fluent Bit Engine

    Once the library context has been created and the input/output plugin instances are set, the next step is to start the engine. When started, the engine runs inside a new thread (POSIX thread) without blocking the caller application. To start the engine the function flb_start() is used.

    Prototype

    Return Value

    On success it returns 0; on error it returns a negative number.

    Usage

    This simple call only needs the argument ctx, which is the reference to the context created at the beginning with flb_create():

    Stop Fluent Bit Engine

    To stop a running Fluent Bit engine, the call flb_stop() is provided.

    Prototype

    The argument ctx is a reference to the context created at the beginning with flb_create() and previously started with flb_start().

    When the call is invoked, the engine will wait a maximum of five seconds to flush buffers and release the resources in use. A stopped context can be re-started at any time, but it will not keep any of its previous data.
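    A minimal sketch of this behavior, re-using the ctx reference obtained from flb_create():

    int ret;

    /* Stop the engine: buffers are flushed (up to five seconds) and resources released */
    ret = flb_stop(ctx);

    /* The same context can be started again later, but it carries no previous data */
    ret = flb_start(ctx);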

    Return Value

    On success it returns 0; on error it returns a negative number.

    Usage

    Destroy Library Context

    A library context must be destroyed once it is no longer needed; note that a previous flb_stop() call is mandatory. When destroyed, all associated resources are released.

    Prototype

    The argument ctx is a reference to the context created at the beginning with flb_create().

    Return Value

    No return value.

    Usage

    Ingest Data Manually

    There are some cases where the caller application may want to ingest data into Fluent Bit; for this purpose the function flb_lib_push() exists.

    Prototype

    The first argument is the context created previously through flb_create(). in_ffd is the numeric reference of the input plugin (in this case it must be an input plugin of type lib), data is a reference to the message to be ingested, and len is the number of bytes to take from it.

    Return Value

    On success, it returns the number of bytes written; on error it returns -1.

    Usage

    For more details and an example of how to use this function properly, please refer to the next section, Ingest Records Manually.
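    As a reference, the following is a minimal, self-contained sketch that puts the calls of this section together. It assumes the lib input and stdout output plugins are built in, and that the record pushed through flb_lib_push() is a JSON array of the form [timestamp, map] as explained in the Ingest Records Manually section:

    #include <string.h>
    #include <fluent-bit.h>

    int main(void)
    {
        flb_ctx_t *ctx;
        int in_ffd;
        int out_ffd;
        /* One record encoded as [timestamp, map] */
        char *record = "[1449505010, {\"key\": \"some value\"}]";

        /* Create the library context */
        ctx = flb_create();
        if (!ctx) {
            return -1;
        }

        /* Flush records every second */
        flb_service_set(ctx, "Flush", "1", NULL);

        /* Enable the lib input plugin and tag its records */
        in_ffd = flb_input(ctx, "lib", NULL);
        flb_input_set(ctx, in_ffd, "tag", "test", NULL);

        /* Enable the stdout output plugin and match the tag above */
        out_ffd = flb_output(ctx, "stdout", NULL);
        flb_output_set(ctx, out_ffd, "match", "test", NULL);

        /* Start the engine; it runs in its own POSIX thread */
        if (flb_start(ctx) != 0) {
            flb_destroy(ctx);
            return -1;
        }

        /* Ingest one record manually */
        flb_lib_push(ctx, in_ffd, record, strlen(record));

        /* Stop the engine and destroy the context */
        flb_stop(ctx);
        flb_destroy(ctx);
        return 0;
    }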

    Syslog

    The Syslog input plugin allows you to collect Syslog messages through a Unix socket server (UDP or TCP) or over the network using TCP or UDP.

    Configuration Parameters

    The plugin supports the following configuration parameters:

    Key

    Description

    Considerations

    • When using the Syslog input plugin, Fluent Bit requires access to the parsers.conf file; the path to this file can be specified with the option -R or through the Parsers_File key in the [SERVICE] section (more details below).

    • When udp or unix_udp is used, the buffer size to receive messages is configurable only through the Buffer_Chunk_Size option, which defaults to 32KB (see the sketch below).
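    As an illustrative sketch only (values are arbitrary), an input section raising the receive buffer size over a Unix socket in UDP mode could look like this:

    [INPUT]
        Name              syslog
        Mode              unix_udp
        Path              /tmp/in_syslog
        Buffer_Chunk_Size 64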

    Getting Started

    In order to receive Syslog messages, you can run the plugin from the command line or through the configuration file:

    Command Line

    From the command line you can let Fluent Bit listen for Syslog messages with the following options:

    By default the service will create and listen for Syslog messages on the unix socket /tmp/in_syslog

    Configuration File

    In your main configuration file append the following Input & Output sections:

    Testing

    Once Fluent Bit is running, you can send some messages using the logger tool:

    In Fluent Bit we should see the following output:

    Recipes

    The following content aims to provide configuration examples for different use cases to integrate Fluent Bit and make it listen for Syslog messages from your systems.

    Rsyslog to Fluent Bit: Network mode over TCP

    Fluent Bit Configuration

    Put the following content in your fluent-bit.conf file:

    then start Fluent Bit.

    RSyslog Configuration

    Add a new file to your rsyslog config rules called 60-fluent-bit.conf inside the directory /etc/rsyslog.d/ and add the following content:

    then make sure to restart your rsyslog daemon:

    Rsyslog to Fluent Bit: Unix socket mode over UDP

    Fluent Bit Configuration

    Put the following content in your fluent-bit.conf file:

    then start Fluent Bit.

    RSyslog Configuration

    Add a new file to your rsyslog config rules called 60-fluent-bit.conf inside the directory /etc/rsyslog.d/ and place the following content:

    then make sure to set proper permissions to the socket and restart your rsyslog daemon:

    Configuration File

    There are some cases where using the command line to start Fluent Bit is not ideal. When running Fluent Bit as a service, a configuration file is preferred.

    Fluent Bit allows the use of one configuration file that works at a global scope and uses the schema defined previously.

    The configuration file supports four types of sections:

    License

    Fluent Bit, including its core, plugins and tools, is distributed under the terms of the Apache License v2.0:

    Throttle

    The Throttle Filter plugin sets the average Rate of messages per Interval, based on a leaky bucket and sliding window algorithm. In case of flooding, it will leak messages within the configured rate.

    Configuration Parameters

    The plugin supports the following configuration parameters:

    [SERVICE]
        HTTP_Server  On
        HTTP_Listen  0.0.0.0
        HTTP_PORT    2020
    
    [INPUT]
        Name cpu
    
    [OUTPUT]
        Name  stdout
        Match *
    $ bin/fluent-bit -c fluent-bit.conf
    Fluent-Bit v0.14.x
    Copyright (C) Treasure Data
    
    [2017/10/27 19:08:24] [ info] [engine] started
    [2017/10/27 19:08:24] [ info] [http_server] listen iface=0.0.0.0 tcp_port=2020
    $ curl -s http://127.0.0.1:2020 | jq
    {
      "fluent-bit": {
        "version": "0.13.0",
        "edition": "Community",
        "flags": [
          "FLB_HAVE_TLS",
          "FLB_HAVE_METRICS",
          "FLB_HAVE_SQLDB",
          "FLB_HAVE_TRACE",
          "FLB_HAVE_HTTP_SERVER",
          "FLB_HAVE_FLUSH_LIBCO",
          "FLB_HAVE_SYSTEMD",
          "FLB_HAVE_VALGRIND",
          "FLB_HAVE_FORK",
          "FLB_HAVE_PROXY_GO",
          "FLB_HAVE_REGEX",
          "FLB_HAVE_C_TLS",
          "FLB_HAVE_SETJMP",
          "FLB_HAVE_ACCEPT4",
          "FLB_HAVE_INOTIFY"
        ]
      }
    }
    $ curl -s http://127.0.0.1:2020/api/v1/uptime | jq
    {
      "uptime_sec": 8950000,
      "uptime_hr": "Fluent Bit has been running:  103 days, 14 hours, 6 minutes and 40 seconds"
    }
    $ curl -s http://127.0.0.1:2020/api/v1/metrics | jq
    {
      "input": {
        "cpu.0": {
          "records": 8,
          "bytes": 2536
        }
      },
      "output": {
        "stdout.0": {
          "proc_records": 5,
          "proc_bytes": 1585,
          "errors": 0,
          "retries": 0,
          "retries_failed": 0
        }
      }
    }
    $ curl -s http://127.0.0.1:2020/api/v1/metrics/prometheus
    fluentbit_input_records_total{name="cpu.0"} 57 1509150350542
    fluentbit_input_bytes_total{name="cpu.0"} 18069 1509150350542
    fluentbit_output_proc_records_total{name="stdout.0"} 54 1509150350542
    fluentbit_output_proc_bytes_total{name="stdout.0"} 17118 1509150350542
    fluentbit_output_errors_total{name="stdout.0"} 0 1509150350542
    fluentbit_output_retries_total{name="stdout.0"} 0 1509150350542
    fluentbit_output_retries_failed_total{name="stdout.0"} 0 1509150350542
    [SERVICE]
        HTTP_Server  On
        HTTP_Listen  0.0.0.0
        HTTP_PORT    2020
    
    [INPUT]
        Name  cpu
        Alias server1_cpu
    
    [OUTPUT]
        Name  stdout
        Alias raw_output
        Match *
    {
      "input": {
        "server1_cpu": {
          "records": 8,
          "bytes": 2536
        }
      },
      "output": {
        "raw_output": {
          "proc_records": 5,
          "proc_bytes": 1585,
          "errors": 0,
          "retries": 0,
          "retries_failed": 0
        }
      }
    }
    es://host:port/index/type
    $ fluent-bit -i cpu -t cpu -o es://192.168.2.3:9200/my_index/my_type \
        -o stdout -m '*'
    $ fluent-bit -i cpu -t cpu -o es -p Host=192.168.2.3 -p Port=9200 \
        -p Index=my_index -p Type=my_type -o stdout -m '*'
    [INPUT]
        Name  cpu
        Tag   cpu
    
    [OUTPUT]
        Name  es
        Match *
        Host  192.168.2.3
        Port  9200
        Index my_index
        Type  my_type
    {"cpu0.p_cpu"=>17.000000}
    {"cpu0_p_cpu"=>17.000000}
    http://host:port/something
    $ fluent-bit -i cpu -t cpu -o http://192.168.2.3:80/something -m '*'
    [INPUT]
        Name  cpu
        Tag   cpu
    
    [OUTPUT]
        Name  http
        Match *
        Host  192.168.2.3
        Port  80
        URI   /something
    [OUTPUT]
        Name  http
        Match *
        Host  192.168.2.3
        Port  80
        URI   /something
        Format json
        header_tag  FLUENT-TAG
    <source>
      @type http
      add_http_headers true
    </source>
    
    <match something>
      @type rewrite_tag_filter
      <rule>
        key HTTP_FLUENT_TAG
        pattern /^(.*)$/
        tag $1
      </rule>
    </match>
    [OUTPUT]
        Name           http
        Match          *
        Host           127.0.0.1
        Port           9000
        Header         X-Key-A Value_A
        Header         X-Key-B Value_B
        URI            /something

    Internal metrics per loaded plugin ready to be consumed by a Prometheus Server

    Prometheus Text 0.0.4

    Unit Size
    strftime
    http://host:port
    dummy
    exec
    forward
    head
    health
    kmsg
    mem
    mqtt
    netif
    proc
    random
    serial
    stdin
    syslog
    systemd
    tail
    tcp

    Type

    Description

    flb_ctx_t

    Main library context. It references the context returned by flb_create().


    Default

    Mode

    Defines transport protocol mode: unix_udp (UDP over Unix socket), unix_tcp (TCP over Unix socket), tcp or udp

    unix_udp

    Listen

    If Mode is set to tcp, specify the network interface to bind.

    0.0.0.0

    Port

    If Mode is set to tcp, specify the TCP port to listen for incoming connections.

    5140

    Path

    If Mode is set to unix_tcp or unix_udp, set the absolute path to the Unix socket file.

    Parser

    Specify an alternative parser for the message. By default, the plugin uses the parser syslog-rfc3164. If your syslog messages have fractional seconds set this Parser value to syslog-rfc5424 instead.

    Buffer_Chunk_Size

    By default, the buffer that stores incoming Syslog messages does not allocate the maximum memory allowed up front; instead, it allocates memory as required. The allocation increments are set by this option in KB. If not set, it is equal to 32 (32KB). Read the considerations above when using udp or unix_udp mode.

    Buffer_Max_Size

    Specify the maximum buffer size in KB to receive a Syslog message. If not set, the default size will be the value of Chunk_Size.


    Description

    Rate

    Integer

    Amount of messages allowed per Interval.

    Window

    Integer

    Amount of intervals to calculate average over. Default 5.

    Interval

    String

    Time interval, expressed in "sleep" format. e.g 3s, 1.5m, 0.5h etc

    Print_Status

    Bool

    Whether to print status messages with current rate and the limits to information logs

    Functional description

    Let's imagine we have configured:

    and we received 1 message in the first second, 3 messages in the second, and 5 in the third. As you can see, even though the Window is actually 5, we use a "slow" start to prevent over-flooding during startup.

    But as soon as we reach Window size * Interval, we have a true sliding window with aggregation over the complete window.

    When the average over the window is greater than the Rate, we start dropping messages, so that

    will become:

    As you can see, the last pane of the window was overwritten and 1 message was dropped.

    Interval vs Window size

    You might have noticed that it is possible to configure the Interval of the Window shift. It may seem counter-intuitive, but there is a difference between the two examples:

    and

    Even though both examples allow a maximum Rate of 60 messages per minute, the first example may get all 60 messages within the first second and will drop all the rest for the entire minute:

    While the second example will not allow more than 1 message per second every second, making the output rate smoother:

    It may drop some data if the rate is bursty. We recommend using a bigger Interval and Rate for streams of rare but important events, while keeping the Window bigger and the Interval small for constantly intensive inputs.

    Command Line

    Note: It's suggested to use a configuration file.

    The following command will load the tail plugin and read the content of the lines.txt file. Then the throttle filter will apply a rate limit and only pass records that are read below the configured rate:

    Configuration File

    The example above will pass 1000 messages per second on average over 300 seconds.

    Key

    Value Format

    flb_ctx_t *flb_create();
    flb_ctx_t *ctx;
    
    ctx = flb_create();
    if (!ctx) {
        return NULL;
    }
    int flb_service_set(flb_ctx_t *ctx, ...);
    int ret;
    
    ret = flb_service_set(ctx, "Flush", "1", NULL);
    int flb_input(flb_ctx_t *ctx, char *name, void *data);
    int in_ffd;
    
    in_ffd = flb_input(ctx, "cpu", NULL);
    int flb_input_set(flb_ctx_t *ctx, int in_ffd, ...);
    int ret;
    
    ret = flb_input_set(ctx, in_ffd,
                        "tag", "my_records",
                        "ssl", "false",
                        NULL);
    int flb_output(flb_ctx_t *ctx, char *name, void *data);
    int out_ffd;
    
    out_ffd = flb_output(ctx, "stdout", NULL);
    int flb_output_set(flb_ctx_t *ctx, int out_ffd, ...);
    int ret;
    
    ret = flb_output_set(ctx, out_ffd,
                         "tag", "my_records",
                         "ssl", "false",
                         NULL);
    int flb_start(flb_ctx_t *ctx);
    int ret;
    
    ret = flb_start(ctx);
    int flb_stop(flb_ctx_t *ctx);
    int ret;
    
    ret = flb_stop(ctx);
    void flb_destroy(flb_ctx_t *ctx);
    flb_destroy(ctx);
    int flb_lib_push(flb_ctx_t *ctx, int in_ffd, void *data, size_t len);
    $ fluent-bit -R /path/to/parsers.conf -i syslog -p path=/tmp/in_syslog -o stdout
    [SERVICE]
        Flush        1
        Log_Level    info
        Parsers_File parsers.conf
    
    [INPUT]
        Name         syslog
        Path         /tmp/in_syslog
        Chunk_Size   32
        Buffer_Size  64
    
    [OUTPUT]
        Name   stdout
        Match  *
    $ logger -u /tmp/in_syslog my_ident my_message
    $ bin/fluent-bit -R ../conf/parsers.conf -i syslog -p path=/tmp/in_syslog -o stdout
    Fluent-Bit v0.11.0
    Copyright (C) Treasure Data
    
    [2017/03/09 02:23:27] [ info] [engine] started
    [0] syslog.0: [1489047822, {"pri"=>"13", "host"=>"edsiper:", "ident"=>"my_ident", "pid"=>"", "message"=>"my_message"}]
    [SERVICE]
        Flush        1
        Parsers_File parsers.conf
    
    [INPUT]
        Name     syslog
        Parser   syslog-rfc3164
        Listen   0.0.0.0
        Port     5140
        Mode     tcp
    
    [OUTPUT]
        Name     stdout
        Match    *
    action(type="omfwd" Target="127.0.0.1" Port="5140" Protocol="tcp")
    $ sudo service rsyslog restart
    [SERVICE]
        Flush        1
        Parsers_File parsers.conf
    
    [INPUT]
        Name     syslog
        Parser   syslog-rfc3164
        Path     /tmp/fluent-bit.sock
        Mode     unix_udp
    
    [OUTPUT]
        Name     stdout
        Match    *
    $ModLoad omuxsock
    $OMUxSockSocket /tmp/fluent-bit.sock
    *.* :omuxsock:
    $ sudo chmod 666 /tmp/fluent-bit.sock
    $ sudo service rsyslog restart
                                     Apache License
                               Version 2.0, January 2004
                            http://www.apache.org/licenses/
    
       TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
    
       1. Definitions.
    
          "License" shall mean the terms and conditions for use, reproduction,
          and distribution as defined by Sections 1 through 9 of this document.
    
          "Licensor" shall mean the copyright owner or entity authorized by
          the copyright owner that is granting the License.
    
          "Legal Entity" shall mean the union of the acting entity and all
          other entities that control, are controlled by, or are under common
          control with that entity. For the purposes of this definition,
          "control" means (i) the power, direct or indirect, to cause the
          direction or management of such entity, whether by contract or
          otherwise, or (ii) ownership of fifty percent (50%) or more of the
          outstanding shares, or (iii) beneficial ownership of such entity.
    
          "You" (or "Your") shall mean an individual or Legal Entity
          exercising permissions granted by this License.
    
          "Source" form shall mean the preferred form for making modifications,
          including but not limited to software source code, documentation
          source, and configuration files.
    
          "Object" form shall mean any form resulting from mechanical
          transformation or translation of a Source form, including but
          not limited to compiled object code, generated documentation,
          and conversions to other media types.
    
          "Work" shall mean the work of authorship, whether in Source or
          Object form, made available under the License, as indicated by a
          copyright notice that is included in or attached to the work
          (an example is provided in the Appendix below).
    
          "Derivative Works" shall mean any work, whether in Source or Object
          form, that is based on (or derived from) the Work and for which the
          editorial revisions, annotations, elaborations, or other modifications
          represent, as a whole, an original work of authorship. For the purposes
          of this License, Derivative Works shall not include works that remain
          separable from, or merely link (or bind by name) to the interfaces of,
          the Work and Derivative Works thereof.
    
          "Contribution" shall mean any work of authorship, including
          the original version of the Work and any modifications or additions
          to that Work or Derivative Works thereof, that is intentionally
          submitted to Licensor for inclusion in the Work by the copyright owner
          or by an individual or Legal Entity authorized to submit on behalf of
          the copyright owner. For the purposes of this definition, "submitted"
          means any form of electronic, verbal, or written communication sent
          to the Licensor or its representatives, including but not limited to
          communication on electronic mailing lists, source code control systems,
          and issue tracking systems that are managed by, or on behalf of, the
          Licensor for the purpose of discussing and improving the Work, but
          excluding communication that is conspicuously marked or otherwise
          designated in writing by the copyright owner as "Not a Contribution."
    
          "Contributor" shall mean Licensor and any individual or Legal Entity
          on behalf of whom a Contribution has been received by Licensor and
          subsequently incorporated within the Work.
    
       2. Grant of Copyright License. Subject to the terms and conditions of
          this License, each Contributor hereby grants to You a perpetual,
          worldwide, non-exclusive, no-charge, royalty-free, irrevocable
          copyright license to reproduce, prepare Derivative Works of,
          publicly display, publicly perform, sublicense, and distribute the
          Work and such Derivative Works in Source or Object form.
    
       3. Grant of Patent License. Subject to the terms and conditions of
          this License, each Contributor hereby grants to You a perpetual,
          worldwide, non-exclusive, no-charge, royalty-free, irrevocable
          (except as stated in this section) patent license to make, have made,
          use, offer to sell, sell, import, and otherwise transfer the Work,
          where such license applies only to those patent claims licensable
          by such Contributor that are necessarily infringed by their
          Contribution(s) alone or by combination of their Contribution(s)
          with the Work to which such Contribution(s) was submitted. If You
          institute patent litigation against any entity (including a
          cross-claim or counterclaim in a lawsuit) alleging that the Work
          or a Contribution incorporated within the Work constitutes direct
          or contributory patent infringement, then any patent licenses
          granted to You under this License for that Work shall terminate
          as of the date such litigation is filed.
    
       4. Redistribution. You may reproduce and distribute copies of the
          Work or Derivative Works thereof in any medium, with or without
          modifications, and in Source or Object form, provided that You
          meet the following conditions:
    
          (a) You must give any other recipients of the Work or
              Derivative Works a copy of this License; and
    
          (b) You must cause any modified files to carry prominent notices
              stating that You changed the files; and
    
          (c) You must retain, in the Source form of any Derivative Works
              that You distribute, all copyright, patent, trademark, and
              attribution notices from the Source form of the Work,
              excluding those notices that do not pertain to any part of
              the Derivative Works; and
    
          (d) If the Work includes a "NOTICE" text file as part of its
              distribution, then any Derivative Works that You distribute must
              include a readable copy of the attribution notices contained
              within such NOTICE file, excluding those notices that do not
              pertain to any part of the Derivative Works, in at least one
              of the following places: within a NOTICE text file distributed
              as part of the Derivative Works; within the Source form or
              documentation, if provided along with the Derivative Works; or,
              within a display generated by the Derivative Works, if and
              wherever such third-party notices normally appear. The contents
              of the NOTICE file are for informational purposes only and
              do not modify the License. You may add Your own attribution
              notices within Derivative Works that You distribute, alongside
              or as an addendum to the NOTICE text from the Work, provided
              that such additional attribution notices cannot be construed
              as modifying the License.
    
          You may add Your own copyright statement to Your modifications and
          may provide additional or different license terms and conditions
          for use, reproduction, or distribution of Your modifications, or
          for any such Derivative Works as a whole, provided Your use,
          reproduction, and distribution of the Work otherwise complies with
          the conditions stated in this License.
    
       5. Submission of Contributions. Unless You explicitly state otherwise,
          any Contribution intentionally submitted for inclusion in the Work
          by You to the Licensor shall be under the terms and conditions of
          this License, without any additional terms or conditions.
          Notwithstanding the above, nothing herein shall supersede or modify
          the terms of any separate license agreement you may have executed
          with Licensor regarding such Contributions.
    
       6. Trademarks. This License does not grant permission to use the trade
          names, trademarks, service marks, or product names of the Licensor,
          except as required for reasonable and customary use in describing the
          origin of the Work and reproducing the content of the NOTICE file.
    
       7. Disclaimer of Warranty. Unless required by applicable law or
          agreed to in writing, Licensor provides the Work (and each
          Contributor provides its Contributions) on an "AS IS" BASIS,
          WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
          implied, including, without limitation, any warranties or conditions
          of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
          PARTICULAR PURPOSE. You are solely responsible for determining the
          appropriateness of using or redistributing the Work and assume any
          risks associated with Your exercise of permissions under this License.
    
       8. Limitation of Liability. In no event and under no legal theory,
          whether in tort (including negligence), contract, or otherwise,
          unless required by applicable law (such as deliberate and grossly
          negligent acts) or agreed to in writing, shall any Contributor be
          liable to You for damages, including any direct, indirect, special,
          incidental, or consequential damages of any character arising as a
          result of this License or out of the use or inability to use the
          Work (including but not limited to damages for loss of goodwill,
          work stoppage, computer failure or malfunction, or any and all
          other commercial damages or losses), even if such Contributor
          has been advised of the possibility of such damages.
    
       9. Accepting Warranty or Additional Liability. While redistributing
          the Work or Derivative Works thereof, You may choose to offer,
          and charge a fee for, acceptance of support, warranty, indemnity,
          or other liability obligations and/or rights consistent with this
          License. However, in accepting such obligations, You may act only
          on Your own behalf and on Your sole responsibility, not on behalf
          of any other Contributor, and only if You agree to indemnify,
          defend, and hold each Contributor harmless for any liability
          incurred by, or claims asserted against, such Contributor by reason
          of your accepting any such warranty or additional liability.
    
       END OF TERMS AND CONDITIONS
    Rate 5
    Window 5
    Interval 1s
    +-------+-+-+-+ 
    |1|3|5| | | | | 
    +-------+-+-+-+ 
    |  3  |         average = 3, and not 1.8 if you calculate 0 for last 2 panes. 
    +-----+
    +-------------+ 
    |1|3|5|7|3|4| | 
    +-------------+ 
      |  4.4    |   
      ----------+
    +-------------+
    |1|3|5|7|3|4|7|
    +-------------+
        |   5.2   |
        +---------+
    +-------------+
    |1|3|5|7|3|4|6|
    +-------------+
        |   5     |
        +---------+
    Rate 60
    Window 5
    Interval 1m
    Rate 1
    Window 300
    Interval 1s
    XX        XX        XX
    XX        XX        XX
    XX        XX        XX
    XX        XX        XX
    XX        XX        XX
    XX        XX        XX
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      X    X     X    X    X    X
    XXXX XXXX  XXXX XXXX XXXX XXXX
    +-+-+-+-+-+--+-+-+-+-+-+-+-+-+-+
    $ bin/fluent-bit -i tail -p 'path=lines.txt' -F throttle -p 'rate=1' -m '*' -o stdout
    [INPUT]
        Name   tail
        Path   lines.txt
    
    [FILTER]
        Name     throttle
        Match    *
        Rate     1000
        Window   300
        Interval 1s
    
    [OUTPUT]
        Name   stdout
        Match  *

    • Service

    • Input

    • Filter

    • Output

    In addition, there is a feature to include external files:

    • Include File

    Service

    The Service section defines global properties of the service, the keys available as of this version are described in the following table:

    Key

    Description

    Default Value

    Flush

    Set the flush time in seconds. Every time it times out, the engine will flush the records to the output plugin.

    5

    Daemon

    Boolean value to set if Fluent Bit should run as a Daemon (background) or not. Allowed values are: yes, no, on and off.

    Off

    Log_File

    Absolute path for an optional log file.

    Log_Level

    Example

    The following is an example of a SERVICE section:

    Input

    An INPUT section defines a source (related to an input plugin); here we will describe the base configuration for each INPUT section. Note that each input plugin may add its own configuration keys:

    Key

    Description

    Name

    Name of the input plugin.

    Tag

    Tag name associated with all records coming from this plugin.

    The Name is mandatory and it lets Fluent Bit know which input plugin should be loaded. The Tag is mandatory for all plugins except for the input forward plugin (as it provides dynamic tags).

    Example

    The following is an example of an INPUT section:

    Filter

    A FILTER section defines a filter (related to a filter plugin); here we will describe the base configuration for each FILTER section. Note that each filter plugin may add its own configuration keys:

    Key

    Description

    Name

    Name of the filter plugin.

    Match

    It sets a pattern to match against the records Tag. It's case sensitive and supports the star (*) character as a wildcard.

    Match_Regex

    It sets a regular expression to match against the records Tag.

    The Name is mandatory and it lets Fluent Bit know which filter plugin should be loaded. Match or Match_Regex is mandatory for all plugins. If both are specified, Match_Regex takes precedence.
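    As an illustrative sketch (the tag pattern here is hypothetical), a FILTER section can select records by regular expression instead of a wildcard:

    [FILTER]
        Name        stdout
        Match_Regex ^my_.*cpu$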

    Example

    The following is an example of a FILTER section:

    Output

    The OUTPUT section specifies a destination that certain records should follow after a Tag match. The configuration supports the following keys:

    Key

    Description

    Name

    Name of the output plugin.

    Match

    It sets a pattern to match against the records Tag. It's case sensitive and supports the star (*) character as a wildcard.

    Match_Regex

    It sets a regular expression to match against the records Tag.

    Example

    The following is an example of an OUTPUT section:

    Example: collecting CPU metrics

    The following configuration file example demonstrates how to collect CPU metrics and flush the results every five seconds to the standard output:

    Include File

    To avoid complicated, long configuration files, it is better to split specific parts into different files and include them from one main file.

    Starting from Fluent Bit 0.12 the new configuration command @INCLUDE has been added and can be used in the following way:

    The configuration reader will try to open the path somefile.conf, if not found, it will assume it's a relative path based on the path of the base configuration file, e.g:

    • Main configuration file path: /tmp/main.conf

    • Included file: somefile.conf

    • Fluent Bit will try to open somefile.conf, if it fails it will try /tmp/somefile.conf.

    The @INCLUDE command only works at the top level of the configuration file; it cannot be used inside sections.

    Wildcard character (*) is supported to include multiple files, e.g:

    [SERVICE]
        Flush           5
        Daemon          off
        Log_Level       debug
    [INPUT]
        Name cpu
        Tag  my_cpu
    [FILTER]
        Name  stdout
        Match *
    [OUTPUT]
        Name  stdout
        Match my*cpu
    [SERVICE]
        Flush     5
        Daemon    off
        Log_Level debug
    
    [INPUT]
        Name  cpu
        Tag   my_cpu
    
    [OUTPUT]
        Name  stdout
        Match my*cpu
    @INCLUDE somefile.conf
    @INCLUDE input_*.conf

    Set the logging verbosity level. Allowed values are: error, info, debug and trace. Values are accumulative, e.g: if 'debug' is set, it will include error, info and debug. Note that trace mode is only available if Fluent Bit was built with the FLB_TRACE option enabled.

    info

    Parsers_File

    Path for a parsers configuration file. Multiple Parsers_File entries can be used.

    Plugins_File

    Path for a plugins configuration file. A plugins configuration file allows you to define paths for external plugins; for an example see here.

    Streams_File

    Path for the Stream Processor configuration file. For details about the format of SP configuration file see here.

    HTTP_Server

    Enable built-in HTTP Server

    Off

    HTTP_Listen

    Set listening interface for HTTP Server when it's enabled

    0.0.0.0

    HTTP_Port

    Set TCP Port for the HTTP Server

    2020

    Coro_Stack_Size

    Set the coroutines stack size in bytes. The value must be greater than the page size of the running system.

    24576

    Nest

    The Nest Filter plugin allows you to operate on or with nested data. Its modes of operation are

    • nest - Take a set of records and place them in a map

    • lift - Take a map by key and lift its records up

    Example usage (nest)

    As an example using JSON notation, to nest keys matching the Wildcard value Key* under a new key NestKey the transformation becomes,

    Example (input)

    Example (output)

    Example usage (lift)

    As an example using JSON notation, to lift keys nested under the Nested_under value NestKey* the transformation becomes,

    Example (input)

    Example (output)

    Configuration Parameters

    The plugin supports the following configuration parameters:

    Getting Started

    In order to start filtering records, you can run the filter from the command line or through the configuration file. The following invokes the Memory Usage Input Plugin, which outputs the following (example),

    Example #1 - nest

    Command Line

    Note: Using the command line mode requires quotes to parse the wildcard properly. The use of a configuration file is recommended.

    The following command will load the mem plugin. Then the nest filter will match the wildcard rule to the keys and nest the keys matching Mem.* under the new key Memstats.

    Configuration File

    Result

    The output of both the command line and configuration invocations should be identical and result in the following output.

    Example #1 - nest and lift undo

    This example nests all Mem.* and Swap.* items under the Stats key and then reverses these actions with a lift operation. The output appears unchanged.

    Configuration File

    Result

    Example #2 - nest 3 levels deep

    This example takes the keys starting with Mem.* and nests them under LAYER1, which itself is then nested under LAYER2, which is nested under LAYER3.

    Configuration File

    Result

    Example #3 - multiple nest and lift filters with prefix

    This example starts with the 3-level deep nesting of Example 2 and applies the lift filter three times to reverse the operations. The end result is that all records are at the top level, without nesting, again. One prefix is added for each level that is lifted.

    Configuration file

    Result

    Kubernetes

    The Fluent Bit Kubernetes Filter allows you to enrich your log files with Kubernetes metadata.

    When Fluent Bit is deployed in Kubernetes as a DaemonSet and configured to read the log files from the containers (using tail or systemd input plugins), this filter aims to perform the following operations:

    • Analyze the Tag and extract the following metadata:

      • Pod Name

    nest

    Nest records matching the Wildcard under this key

    Nested_under

    FIELD STRING

    lift

    Lift records nested under the Nested_under key

    Add_prefix

    FIELD STRING

    ANY

    Prefix affected keys with this string

    Remove_prefix

    FIELD STRING

    ANY

    Remove prefix from affected keys if it matches this string

    Key

    Value Format

    Operation

    Description

    Operation

    ENUM [nest or lift]

    Select the operation nest or lift

    Wildcard

    FIELD WILDCARD

    nest

    Nest records which field matches the wildcard

    Nest_under


    FIELD STRING

    {
      "Key1"     : "Value1",
      "Key2"     : "Value2",
      "OtherKey" : "Value3"
    }
    {
      "OtherKey" : "Value3"
      "NestKey"  : {
        "Key1"     : "Value1",
        "Key2"     : "Value2",
      }
    }
    {
      "OtherKey" : "Value3"
      "NestKey"  : {
        "Key1"     : "Value1",
        "Key2"     : "Value2",
      }
    }
    {
      "Key1"     : "Value1",
      "Key2"     : "Value2",
      "OtherKey" : "Value3"
    }
    [0] memory: [1488543156, {"Mem.total"=>1016044, "Mem.used"=>841388, "Mem.free"=>174656, "Swap.total"=>2064380, "Swap.used"=>139888, "Swap.free"=>1924492}]
    $ bin/fluent-bit -i mem -p 'tag=mem.local' -F nest -p 'Operation=nest' -p 'Wildcard=Mem.*' -p 'Nest_under=Memstats' -p 'Remove_prefix=Mem.' -m '*' -o stdout
    [INPUT]
        Name mem
        Tag  mem.local
    
    [OUTPUT]
        Name  stdout
        Match *
    
    [FILTER]
        Name nest
        Match *
        Operation nest
        Wildcard Mem.*
        Nest_under Memstats
        Remove_prefix Mem.
    [2018/04/06 01:35:13] [ info] [engine] started
    [0] mem.local: [1522978514.007359767, {"Swap.total"=>1046524, "Swap.used"=>0, "Swap.free"=>1046524, "Memstats"=>{"total"=>4050908, "used"=>714984, "free"=>3335924}}]
    [INPUT]
        Name mem
        Tag  mem.local
    
    [OUTPUT]
        Name  stdout
        Match *
    
    [FILTER]
        Name nest
        Match *
        Operation nest
        Wildcard Mem.*
        Wildcard Swap.*
        Nest_under Stats
        Add_prefix NESTED
    
    [FILTER]
        Name nest
        Match *
        Operation lift
        Nested_under Stats
        Remove_prefix NESTED
    [2018/06/21 17:42:37] [ info] [engine] started (pid=17285)
    [0] mem.local: [1529566958.000940636, {"Mem.total"=>8053656, "Mem.used"=>6940380, "Mem.free"=>1113276, "Swap.total"=>16532988, "Swap.used"=>1286772, "Swap.free"=>15246216}]
    [INPUT]
        Name mem
        Tag  mem.local
    
    [OUTPUT]
        Name  stdout
        Match *
    
    [FILTER]
        Name nest
        Match *
        Operation nest
        Wildcard Mem.*
        Nest_under LAYER1
    
    [FILTER]
        Name nest
        Match *
        Operation nest
        Wildcard LAYER1*
        Nest_under LAYER2
    
    [FILTER]
        Name nest
        Match *
        Operation nest
        Wildcard LAYER2*
        Nest_under LAYER3
    [0] mem.local: [1524795923.009867831, {"Swap.total"=>1046524, "Swap.used"=>0, "Swap.free"=>1046524, "LAYER3"=>{"LAYER2"=>{"LAYER1"=>{"Mem.total"=>4050908, "Mem.used"=>1112036, "Mem.free"=>2938872}}}}]
    
    
    {
      "Swap.total"=>1046524,
      "Swap.used"=>0,
      "Swap.free"=>1046524,
      "LAYER3"=>{
        "LAYER2"=>{
          "LAYER1"=>{
            "Mem.total"=>4050908,
            "Mem.used"=>1112036,
            "Mem.free"=>2938872
          }
        }
      }
    }
    [INPUT]
        Name mem
        Tag  mem.local
    
    [OUTPUT]
        Name  stdout
        Match *
    
    [FILTER]
        Name nest
        Match *
        Operation nest
        Wildcard Mem.*
        Nest_under LAYER1
    
    [FILTER]
        Name nest
        Match *
        Operation nest
        Wildcard LAYER1*
        Nest_under LAYER2
    
    [FILTER]
        Name nest
        Match *
        Operation nest
        Wildcard LAYER2*
        Nest_under LAYER3
    
    [FILTER]
        Name nest
        Match *
        Operation lift
        Nested_under LAYER3
        Add_prefix Lifted3_
    
    [FILTER]
        Name nest
        Match *
        Operation lift
        Nested_under Lifted3_LAYER2
        Add_prefix Lifted3_Lifted2_
    
    [FILTER]
        Name nest
        Match *
        Operation lift
        Nested_under Lifted3_Lifted2_LAYER1
        Add_prefix Lifted3_Lifted2_Lifted1_
    [0] mem.local: [1524862951.013414798, {"Swap.total"=>1046524, "Swap.used"=>0, "Swap.free"=>1046524, "Lifted3_Lifted2_Lifted1_Mem.total"=>4050908, "Lifted3_Lifted2_Lifted1_Mem.used"=>1253912, "Lifted3_Lifted2_Lifted1_Mem.free"=>2796996}]
    
    
    {
      "Swap.total"=>1046524, 
      "Swap.used"=>0, 
      "Swap.free"=>1046524, 
      "Lifted3_Lifted2_Lifted1_Mem.total"=>4050908, 
      "Lifted3_Lifted2_Lifted1_Mem.used"=>1253912, 
      "Lifted3_Lifted2_Lifted1_Mem.free"=>2796996
    }

    Namespace

  • Container Name

  • Container ID

  • Query Kubernetes API Server to obtain extra metadata for the POD in question:

    • Pod ID

    • Labels

    • Annotations

  • The data is cached locally in memory and appended to each record.

    Configuration Parameters

    The plugin supports the following configuration parameters:

    Key

    Description

    Default

    Buffer_Size

    Set the buffer size for the HTTP client when reading responses from the Kubernetes API server. The value must conform to the Unit Size specification.

    32k

    Kube_URL

    API Server end-point

    Kube_CA_File

    CA certificate file

    /var/run/secrets/kubernetes.io/serviceaccount/ca.crt

    Kube_CA_Path

    Kubernetes Annotations

    A flexible feature of the Fluent Bit Kubernetes filter is that it allows Kubernetes Pods to suggest certain behaviors for the log processor pipeline when processing their records. At the moment it supports:

    • Suggest a pre-defined parser

    • Request to exclude logs

    The following annotations are available:

    Annotation

    Description

    Default

    fluentbit.io/parser[_stream][-container]

    Suggest a pre-defined parser. The parser must already be registered by Fluent Bit. This option will only be processed if the Fluent Bit configuration (Kubernetes Filter) has enabled the option K8S-Logging.Parser. If present, the stream (stdout or stderr) restricts the suggestion to that specific stream. If present, the container restricts the suggestion to a specific container in the Pod.

    fluentbit.io/exclude

    Request Fluent Bit to exclude (or not) the logs generated by the Pod. This option will only be processed if the Fluent Bit configuration (Kubernetes Filter) has enabled the option K8S-Logging.Exclude.

    False

    Annotation Examples in Pod definition

    Suggest a parser

    The following Pod definition runs a Pod that emits Apache logs to the standard output; in the Annotations it suggests that the data should be processed using the pre-defined parser called apache:

    Request to exclude logs

    There are certain situations where the user would like to request that the log processor simply skip the logs from the Pod in question:

    Note that the annotation value is a boolean that can take a true or false value and must be quoted.

    Workflow of Tail + Kubernetes Filter

    Kubernetes Filter depends on either the Tail or Systemd input plugins to process and enrich records with Kubernetes metadata. Here we will explain the workflow of Tail and how its configuration is correlated with the Kubernetes filter. Consider the following configuration example (just for demo purposes, not production):

    In the input section, the Tail plugin will monitor all files ending in .log in path /var/log/containers/. For every file it will read every line and apply the docker parser. Then the records are emitted to the next step with an expanded tag.

    Tail supports Tag expansion, which means that if a Tag has a star character (*), it will be replaced with the absolute path of the monitored file, so if your file name and path is:

    then the Tag for every record of that file becomes:

    note that slashes are replaced with dots.

    When the Kubernetes Filter runs, it will try to match all records that start with kube. (note the ending dot), so records from the file mentioned above will hit the matching rule and the filter will try to enrich the records.

    The Kubernetes Filter does not care where the logs come from, but it does care about the absolute name of the monitored file, because that information contains the pod name and namespace name that are used to retrieve the metadata associated with the running Pod from the Kubernetes Master/API Server.

    If the configuration property Kube_Tag_Prefix was configured (available on Fluent Bit >= 1.1.x), it will use that value to remove the prefix that was appended to the Tag in the previous Input section. Note that the configuration property defaults to kube.var.log.containers., so the previous Tag content will be transformed from:

    to:

    The transformation above does not modify the original Tag; it just creates a new representation for the filter to perform the metadata lookup.

    That new value is used by the filter to look up the pod name and namespace; for that purpose it uses an internal regular expression:

    If you want to know more details, check the source code of that definition here.

    You can see on the Rubular.com web site how this operation is performed; check the following demo link:

    • https://rubular.com/r/HZz3tYAahj6JCd

    Custom Regex

    Under certain uncommon conditions, a user may want to alter that hard-coded regular expression; for that purpose the option Regex_Parser can be used (documented above).

    Final Comments

    At this point the filter is able to gather the values of pod_name and namespace. With that information it checks the local cache (an internal hash table) to see whether metadata for that key pair already exists; if so, it enriches the record with the metadata value, otherwise it connects to the Kubernetes Master/API Server and retrieves that information.

    Build and Install

    Fluent Bit uses CMake as its build system. The suggested procedure to prepare the build system consists of the following steps:

    Prepare environment

    In the following steps you can find exact commands to build and install the project with the default options. If you already know how CMake works you can skip this part and look at the build options available.

    Change to the build/ directory inside the Fluent Bit sources:

    Let's configure the project, specifying where the root path is located:

    Now you are ready to start the compilation process through the simple make command:

    To continue installing the binary on the system, just do:

    It's likely you may need root privileges, so you can try prefixing the command with sudo.
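    For reference, the typical sequence of commands follows the standard CMake workflow (paths are illustrative; the source root is assumed to be one directory above build/):

    $ cd build/
    $ cmake ../
    $ make
    $ sudo make install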

    Build Options

    Fluent Bit provides certain options to CMake that can be enabled or disabled when configuring, please refer to the following tables under the General Options, Input Plugins and Output Plugins sections.

    General Options

    Input Plugins

    The input plugins provide features to gather information from a specific source type, which can be a network interface, a built-in metric or a specific input device; the following input plugins are available:

    Output Plugins

    The output plugins give the capacity to flush the information to an external interface, service or terminal; the following table describes the output plugins available as of this version:

    Parsers

    Parsers are an important component of Fluent Bit; with them you can take any unstructured log entry and give it a structure that makes processing and further filtering easier.

    The parser engine is fully configurable and can process log entries based on two types of format:

    • JSON Maps

    • Regular Expressions (named capture)

    By default, Fluent Bit provides a set of pre-configured parsers that can be used for different use cases such as logs from:

    • Apache

    • Nginx

    • Docker

    • Syslog rfc5424

    Parsers are defined in one or multiple configuration files that are loaded at start time, either from the command line or through the main Fluent Bit configuration file.

    Note: if you are using Regular Expressions, note that Fluent Bit uses Ruby-based regular expressions and we encourage you to use the Rubular web site as an online editor to test them.

    Configuration Parameters

    Multiple parsers can be defined and each section has its own properties. The following table describes the available options for each parser definition:

    Parsers Configuration File

    All parsers must be defined in a parsers.conf file, not in the Fluent Bit global configuration file. The parsers file exposes all the available parsers that can be used by the Input plugins that are aware of this feature. A parsers file can have multiple entries like this:

    For more information about the parsers available, please refer to the default parsers file distributed with Fluent Bit source code:

    Time Resolution and Fractional Seconds

    Some timestamps might have fractional seconds like 2017-05-17T15:44:31.187512963Z. Since Fluent Bit v0.12 we have full support for nanoseconds resolution, the %L format option for Time_Format is provided as a way to indicate that content must be interpreted as fractional seconds.

    Note: The option %L is only valid when used after seconds (%S) or seconds since the Epoch (%s), e.g: %S.%L or %s.%L

    Modify

    The Modify Filter plugin allows you to change records using rules and conditions.

    Example usage

    As an example using JSON notation to,

  • Rename Key2 to RenamedKey

    apiVersion: v1
    kind: Pod
    metadata:
      name: apache-logs
      labels:
        app: apache-logs
      annotations:
        fluentbit.io/parser: apache
    spec:
      containers:
      - name: apache
        image: edsiper/apache_logs
    apiVersion: v1
    kind: Pod
    metadata:
      name: apache-logs
      labels:
        app: apache-logs
      annotations:
        fluentbit.io/exclude: "true"
    spec:
      containers:
      - name: apache
        image: edsiper/apache_logs
    [INPUT]
        Name    tail
        Tag     kube.*
        Path    /var/log/containers/*.log
        Parser  docker
    
    [FILTER]
        Name             kubernetes
        Match            kube.*
        Kube_URL         https://kubernetes.default.svc:443
        Kube_CA_File     /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        Kube_Token_File  /var/run/secrets/kubernetes.io/serviceaccount/token
        Kube_Tag_Prefix  kube.
    /var/log/container/apache-logs-annotated_default_apache-aeeccc7a9f00f6e4e066aeff0434cf80621215071f1b20a51e8340aa7c35eac6.log
    kube.var.log.containers.apache-logs-annotated_default_apache-aeeccc7a9f00f6e4e066aeff0434cf80621215071f1b20a51e8340aa7c35eac6.log
    kube.var.log.containers.apache-logs-annotated_default_apache-aeeccc7a9f00f6e4e066aeff0434cf80621215071f1b20a51e8340aa7c35eac6.log
    apache-logs-annotated_default_apache-aeeccc7a9f00f6e4e066aeff0434cf80621215071f1b20a51e8340aa7c35eac6.log
    (?<pod_name>[a-z0-9](?:[-a-z0-9]*[a-z0-9])?(?:\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*)_(?<namespace_name>[^_]+)_(?<container_name>.+)-(?<docker_id>[a-z0-9]{64})\.log$
    $ cd build/

    Absolute path to scan for certificate files

    Kube_Token_File

    Token file

    /var/run/secrets/kubernetes.io/serviceaccount/token

    Kube_Tag_Prefix

    When the source records come from the Tail input plugin, this option allows you to specify the prefix used in the Tail configuration.

    kube.var.log.containers.

    Merge_Log

    When enabled, it checks if the log field content is a JSON string map; if so, it appends the map fields as part of the log structure.

    Off

    Merge_Log_Key

    When Merge_Log is enabled, the filter assumes the log field from the incoming message is a JSON string and makes a structured representation of it at the same level as the log field in the map. If Merge_Log_Key is set (a string name), all the new structured fields taken from the original log content are inserted under the new key.

    Merge_Log_Trim

    When Merge_Log is enabled, trim (remove possible \n or \r) field values.

    On

    Keep_Log

    When Keep_Log is disabled, the log field is removed from the incoming message once it has been successfully merged (Merge_Log must be enabled as well).

    On

    tls.debug

    Debug level between 0 (nothing) and 4 (every detail).

    -1

    tls.verify

    When enabled, turns on certificate validation when connecting to the Kubernetes API server.

    On

    Use_Journal

    When enabled, the filter reads logs coming in Journald format.

    Off

    Regex_Parser

    Set an alternative Parser to process record Tag and extract pod_name, namespace_name, container_name and docker_id. The parser must be registered in a parsers file (refer to parser filter-kube-test as an example).

    K8S-Logging.Parser

    Allow Kubernetes Pods to suggest a pre-defined Parser (read more about it in Kubernetes Annotations section)

    Off

    K8S-Logging.Exclude

    Allow Kubernetes Pods to exclude their logs from the log processor (read more about it in Kubernetes Annotations section).

    Off

    Labels

    Include Kubernetes resource labels in the extra metadata.

    On

    Annotations

    Include Kubernetes resource annotations in the extra metadata.

    On

    Kube_meta_preload_cache_dir

    If set, Kubernetes meta-data can be cached/pre-loaded from files in JSON format in this directory, named as namespace-pod.meta

    Dummy_Meta

    If set, use dummy-meta data (for test/dev purposes)

    Off

    Unit Size
    https://kubernetes.default.svc:443
    Syslog rfc3164

    Specify a fixed UTC time offset (e.g. -0600, +0200, etc.) for local dates.

    Time_Keep

    By default, when a time key is recognized and parsed, the parser will drop the original time field. Enabling this option will make the parser keep the original time field and its value in the log entry.

    Types

    Specify the data type of parsed fields. The syntax is types <field_name_1>:<type_name_1> <field_name_2>:<type_name_2> .... The supported types are string (default), integer, bool, float, hex.

    Decode_Field

    Decode a field value, the only decoder available is json. The syntax is: Decode_Field json <field_name>.

    Key

    Description

    Name

    Set a unique name for the parser in question.

    Format

    Specify the format of the parser, the available options here are: json or regex.

    Regex

    If format is regex, this option must be set specifying the Ruby Regular Expression that will be used to parse and compose the structured message.

    Time_Key

    If the log entry provides a field with a timestamp, this option specifies the name of that field.

    Time_Format

    Specify the format of the time field so it can be recognized and analyzed properly. Fluent Bit uses strptime(3) to parse time, so you can refer to the strptime documentation for available modifiers.

    https://github.com/fluent/fluent-bit/blob/master/conf/parsers.conf

    Time_Offset

    [PARSER]
        Name        docker
        Format      json
        Time_Key    time
        Time_Format %Y-%m-%dT%H:%M:%S.%L
        Time_Keep   On
    
    [PARSER]
        Name        syslog-rfc5424
        Format      regex
        Regex       ^\<(?<pri>[0-9]{1,5})\>1 (?<time>[^ ]+) (?<host>[^ ]+) (?<ident>[^ ]+) (?<pid>[-0-9]+) (?<msgid>[^ ]+) (?<extradata>(\[(.*)\]|-)) (?<message>.+)$
        Time_Key    time
        Time_Format %Y-%m-%dT%H:%M:%S.%L
        Time_Keep   On
        Types pid:integer

    No

    FLB_BINARY

    Build executable

    Yes

    FLB_EXAMPLES

    Build examples

    Yes

    FLB_SHARED_LIB

    Build shared library

    Yes

    FLB_VALGRIND

    Enable Valgrind support

    No

    FLB_TRACE

    Enable trace mode

    No

    FLB_TESTS_RUNTIME

    Enable runtime tests

    No

    FLB_TESTS_INTERNAL

    Enable internal tests

    No

    FLB_TESTS

    Enable tests

    No

    FLB_MTRACE

    Enable mtrace support

    No

    FLB_INOTIFY

    Enable Inotify support

    Yes

    FLB_POSIX_TLS

    Force POSIX thread storage

    No

    FLB_SQLDB

    Enable SQL embedded database support

    No

    FLB_HTTP_SERVER

    Enable HTTP Server

    No

    FLB_BACKTRACE

    Enable backtrace/stacktrace support

    Yes

    FLB_LUAJIT

    Enable Lua scripting support

    Yes

    FLB_STATIC_CONF

    Build binary using static configuration files. The value of this option must be a directory containing configuration files.

    On

    FLB_IN_KMSG

    Enable Kernel log input plugin

    On

    FLB_IN_MEM

    Enable Memory input plugin

    On

    FLB_IN_RANDOM

    Enable Random input plugin

    On

    FLB_IN_SERIAL

    Enable Serial input plugin

    On

    FLB_IN_STDIN

    Enable Standard input plugin

    On

    FLB_IN_TCP

    Enable TCP input plugin

    On

    FLB_IN_MQTT

    Enable MQTT input plugin

    On

    FLB_IN_XBEE

    Enable Xbee input plugin

    Off

    Off

    FLB_OUT_PLOT

    Enable Plot output plugin

    On

    FLB_OUT_STDOUT

    Enable STDOUT output plugin

    On

    FLB_OUT_TD

    Enable Treasure Data output plugin

    On

    FLB_OUT_NULL

    Enable /dev/null output plugin

    On

    option

    description

    default

    FLB_ALL

    Enable all features available

    No

    FLB_DEBUG

    Build binaries with debug symbols

    No

    FLB_JEMALLOC

    Use Jemalloc as default memory allocator

    No

    FLB_TLS

    Build with SSL/TLS support

    option

    description

    default

    FLB_IN_CPU

    Enable CPU input plugin

    On

    FLB_IN_FORWARD

    Enable Forward input plugin

    On

    FLB_IN_HEAD

    Enable Head input plugin

    On

    FLB_IN_HEALTH

    Enable Health input plugin

    option

    description

    default

    FLB_OUT_ES

    Enable Elastic Search output plugin

    On

    FLB_OUT_FORWARD

    Enable Fluentd output plugin

    On

    FLB_OUT_HTTP

    Enable HTTP output plugin

    On

    FLB_OUT_NATS

    Enable NATS output plugin

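    All of the build options above are passed to cmake at configure time using the -D flag. As a sketch, the following enables debug symbols and the HTTP output plugin while disabling the Xbee input plugin; the chosen flags are only examples.

    # Example only: pick the FLB_* options relevant to your build
    $ cd build/
    $ cmake -DFLB_DEBUG=On -DFLB_OUT_HTTP=On -DFLB_IN_XBEE=Off ../
    $ make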

    RenamedKey
  • Add a key OtherKey with value Value3 if OtherKey does not yet exist

  • Example (input)

    Example (output)

    Configuration Parameters

    Rules

    The plugin supports the following rules:

    Operation

    Parameter 1

    Parameter 2

    Description

    Set

    STRING:KEY

    STRING:VALUE

    Add a key/value pair with key KEY and value VALUE. If KEY already exists, this field is overwritten

    Add

    STRING:KEY

    STRING:VALUE

    Add a key/value pair with key KEY and value VALUE if KEY does not exist

    Remove

    • Rules are case insensitive, parameters are not

    • Any number of rules can be set in a filter instance.

    • Rules are applied in the order they appear, with each rule operating on the result of the previous rule.

    Conditions

    The plugin supports the following conditions:

    Condition

    Parameter

    Parameter 2

    Description

    Key_exists

    STRING:KEY

    NONE

    Is true if KEY exists

    Key_does_not_exist

    STRING:KEY

    STRING:VALUE

    Is true if KEY does not exist

    A_key_matches

    • Conditions are case insensitive, parameters are not

    • Any number of conditions can be set.

    • Conditions apply to the whole filter instance and all its rules, not to individual rules.

    • All conditions have to be true for the rules to be applied.

    Example #1 - Add and Rename

    In order to start filtering records, you can run the filter from the command line or through the configuration file. The following invokes the Memory Usage Input Plugin, which outputs records like the example below.

    Using command Line

    Note: Using the command line mode requires quotes to parse the wildcard properly. The use of a configuration file is recommended.

    Configuration File

    Result

    The output of both the command line and configuration invocations should be identical and result in the following output.

    Example #2 - Conditionally Add and Remove

    Configuration File

    Result

    Example #3 - Emoji

    Configuration File

    Result

    $ cmake ../
    -- The C compiler identification is GNU 4.9.2
    -- Check for working C compiler: /usr/bin/cc
    -- Check for working C compiler: /usr/bin/cc -- works
    -- Detecting C compiler ABI info
    -- Detecting C compiler ABI info - done
    -- The CXX compiler identification is GNU 4.9.2
    -- Check for working CXX compiler: /usr/bin/c++
    -- Check for working CXX compiler: /usr/bin/c++ -- works
    ...
    -- Could NOT find Doxygen (missing:  DOXYGEN_EXECUTABLE)
    -- Looking for accept4
    -- Looking for accept4 - not found
    -- Configuring done
    -- Generating done
    -- Build files have been written to: /home/edsiper/coding/fluent-bit/build
    $ make
    Scanning dependencies of target msgpack
    [  2%] Building C object lib/msgpack-1.1.0/CMakeFiles/msgpack.dir/src/unpack.c.o
    [  4%] Building C object lib/msgpack-1.1.0/CMakeFiles/msgpack.dir/src/objectc.c.o
    [  7%] Building C object lib/msgpack-1.1.0/CMakeFiles/msgpack.dir/src/version.c.o
    ...
    [ 19%] Building C object lib/monkey/mk_core/CMakeFiles/mk_core.dir/mk_file.c.o
    [ 21%] Building C object lib/monkey/mk_core/CMakeFiles/mk_core.dir/mk_rconf.c.o
    [ 23%] Building C object lib/monkey/mk_core/CMakeFiles/mk_core.dir/mk_string.c.o
    ...
    Scanning dependencies of target fluent-bit-static
    [ 66%] Building C object src/CMakeFiles/fluent-bit-static.dir/flb_pack.c.o
    [ 69%] Building C object src/CMakeFiles/fluent-bit-static.dir/flb_input.c.o
    [ 71%] Building C object src/CMakeFiles/fluent-bit-static.dir/flb_output.c.o
    ...
    Linking C executable ../bin/fluent-bit
    [100%] Built target fluent-bit-bin
    $ make install
    {
      "Key1"     : "Value1",
      "Key2"     : "Value2"
    }
    {
      "Key1"       : "Value1",
      "RenamedKey" : "Value2",
      "OtherKey"   : "Value3"
    }
    [0] memory: [1488543156, {"Mem.total"=>1016044, "Mem.used"=>841388, "Mem.free"=>174656, "Swap.total"=>2064380, "Swap.used"=>139888, "Swap.free"=>1924492}]
    [1] memory: [1488543157, {"Mem.total"=>1016044, "Mem.used"=>841420, "Mem.free"=>174624, "Swap.total"=>2064380, "Swap.used"=>139888, "Swap.free"=>1924492}]
    [2] memory: [1488543158, {"Mem.total"=>1016044, "Mem.used"=>841420, "Mem.free"=>174624, "Swap.total"=>2064380, "Swap.used"=>139888, "Swap.free"=>1924492}]
    [3] memory: [1488543159, {"Mem.total"=>1016044, "Mem.used"=>841420, "Mem.free"=>174624, "Swap.total"=>2064380, "Swap.used"=>139888, "Swap.free"=>1924492}]
    bin/fluent-bit -i mem \
      -p 'tag=mem.local' \
      -F modify \
      -p 'Add=Service1 SOMEVALUE' \
      -p 'Add=Service2 SOMEVALUE3' \
      -p 'Add=Mem.total2 TOTALMEM2' \
      -p 'Rename=Mem.free MEMFREE' \
      -p 'Rename=Mem.used MEMUSED' \
      -p 'Rename=Swap.total SWAPTOTAL' \
      -p 'Add=Mem.total TOTALMEM' \
      -m '*' \
      -o stdout
    [INPUT]
        Name mem
        Tag  mem.local
    
    [OUTPUT]
        Name  stdout
        Match *
    
    [FILTER]
        Name modify
        Match *
        Add Service1 SOMEVALUE
        Add Service3 SOMEVALUE3
        Add Mem.total2 TOTALMEM2
        Rename Mem.free MEMFREE
        Rename Mem.used MEMUSED
        Rename Swap.total SWAPTOTAL
        Add Mem.total TOTALMEM
    [2018/04/06 01:35:13] [ info] [engine] started
    [0] mem.local: [1522980610.006892802, {"Mem.total"=>4050908, "MEMUSED"=>738100, "MEMFREE"=>3312808, "SWAPTOTAL"=>1046524, "Swap.used"=>0, "Swap.free"=>1046524, "Service1"=>"SOMEVALUE", "Service3"=>"SOMEVALUE3", "Mem.total2"=>"TOTALMEM2"}]
    [1] mem.local: [1522980611.000658288, {"Mem.total"=>4050908, "MEMUSED"=>738068, "MEMFREE"=>3312840, "SWAPTOTAL"=>1046524, "Swap.used"=>0, "Swap.free"=>1046524, "Service1"=>"SOMEVALUE", "Service3"=>"SOMEVALUE3", "Mem.total2"=>"TOTALMEM2"}]
    [2] mem.local: [1522980612.000307652, {"Mem.total"=>4050908, "MEMUSED"=>738068, "MEMFREE"=>3312840, "SWAPTOTAL"=>1046524, "Swap.used"=>0, "Swap.free"=>1046524, "Service1"=>"SOMEVALUE", "Service3"=>"SOMEVALUE3", "Mem.total2"=>"TOTALMEM2"}]
    [3] mem.local: [1522980613.000122671, {"Mem.total"=>4050908, "MEMUSED"=>738068, "MEMFREE"=>3312840, "SWAPTOTAL"=>1046524, "Swap.used"=>0, "Swap.free"=>1046524, "Service1"=>"SOMEVALUE", "Service3"=>"SOMEVALUE3", "Mem.total2"=>"TOTALMEM2"}]
    [INPUT]
        Name mem
        Tag  mem.local
        Interval_Sec 1
    
    [FILTER]
        Name    modify
        Match   mem.*
    
        Condition Key_Does_Not_Exist cpustats
        Condition Key_Exists Mem.used
    
        Set cpustats UNKNOWN
    
    [FILTER]
        Name    modify
        Match   mem.*
    
        Condition Key_Value_Does_Not_Equal cpustats KNOWN
    
        Add sourcetype memstats
    
    [FILTER]
        Name    modify
        Match   mem.*
    
        Condition Key_Value_Equals cpustats UNKNOWN
    
        Remove_wildcard Mem
        Remove_wildcard Swap
        Add cpustats_more STILL_UNKNOWN
    
    [OUTPUT]
        Name           stdout
        Match          *
    [2018/06/14 07:37:34] [ info] [engine] started (pid=1493)
    [0] mem.local: [1528925855.000223110, {"cpustats"=>"UNKNOWN", "sourcetype"=>"memstats", "cpustats_more"=>"STILL_UNKNOWN"}]
    [1] mem.local: [1528925856.000064516, {"cpustats"=>"UNKNOWN", "sourcetype"=>"memstats", "cpustats_more"=>"STILL_UNKNOWN"}]
    [2] mem.local: [1528925857.000165965, {"cpustats"=>"UNKNOWN", "sourcetype"=>"memstats", "cpustats_more"=>"STILL_UNKNOWN"}]
    [3] mem.local: [1528925858.000152319, {"cpustats"=>"UNKNOWN", "sourcetype"=>"memstats", "cpustats_more"=>"STILL_UNKNOWN"}]
    [INPUT]
        Name mem
        Tag  mem.local
    
    [OUTPUT]
        Name  stdout
        Match *
    
    [FILTER]
        Name modify
        Match *
    
        Remove_Wildcard Mem
        Remove_Wildcard Swap
        Set This_plugin_is_on 🔥
        Set 🔥 is_hot
        Copy 🔥 💦
        Rename  💦 ❄️
        Set ❄️ is_cold
        Set 💦 is_wet
    [2018/06/14 07:46:11] [ info] [engine] started (pid=21875)
    [0] mem.local: [1528926372.000197916, {"This_plugin_is_on"=>"🔥", "🔥"=>"is_hot", "❄️"=>"is_cold", "💦"=>"is_wet"}]
    [1] mem.local: [1528926373.000107868, {"This_plugin_is_on"=>"🔥", "🔥"=>"is_hot", "❄️"=>"is_cold", "💦"=>"is_wet"}]
    [2] mem.local: [1528926374.000181042, {"This_plugin_is_on"=>"🔥", "🔥"=>"is_hot", "❄️"=>"is_cold", "💦"=>"is_wet"}]
    [3] mem.local: [1528926375.000090841, {"This_plugin_is_on"=>"🔥", "🔥"=>"is_hot", "❄️"=>"is_cold", "💦"=>"is_wet"}]
    [0] mem.local: [1528926376.000610974, {"This_plugin_is_on"=>"🔥", "🔥"=>"is_hot", "❄️"=>"is_cold", "💦"=>"is_wet"}]

    STRING:KEY

    NONE

    Remove a key/value pair with key KEY if it exists

    Remove_wildcard

    WILDCARD:KEY

    NONE

    Remove all key/value pairs with key matching wildcard KEY

    Remove_regex

    REGEXP:KEY

    NONE

    Remove all key/value pairs with key matching regexp KEY

    Rename

    STRING:KEY

    STRING:RENAMED_KEY

    Rename a key/value pair with key KEY to RENAMED_KEY if KEY exists AND RENAMED_KEY does not exist

    Hard_rename

    STRING:KEY

    STRING:RENAMED_KEY

    Rename a key/value pair with key KEY to RENAMED_KEY if KEY exists. If RENAMED_KEY already exists, this field is overwritten

    Copy

    STRING:KEY

    STRING:COPIED_KEY

    Copy a key/value pair with key KEY to COPIED_KEY if KEY exists AND COPIED_KEY does not exist

    Hard_copy

    STRING:KEY

    STRING:COPIED_KEY

    Copy a key/value pair with key KEY to COPIED_KEY if KEY exists. If COPIED_KEY already exists, this field is overwritten

    REGEXP:KEY

    NONE

    Is true if a key matches regex KEY

    No_key_matches

    REGEXP:KEY

    NONE

    Is true if no key matches regex KEY

    Key_value_equals

    STRING:KEY

    STRING:VALUE

    Is true if KEY exists and its value is VALUE

    Key_value_does_not_equal

    STRING:KEY

    STRING:VALUE

    Is true if KEY exists and its value is not VALUE

    Key_value_matches

    STRING:KEY

    REGEXP:VALUE

    Is true if key KEY exists and its value matches VALUE

    Key_value_does_not_match

    STRING:KEY

    REGEXP:VALUE

    Is true if key KEY exists and its value does not match VALUE

    Matching_keys_have_matching_values

    REGEXP:KEY

    REGEXP:VALUE

    Is true if all keys matching KEY have values that match VALUE

    Matching_keys_do_not_have_matching_values

    REGEXP:KEY

    REGEXP:VALUE

    Is true if all keys matching KEY have values that do not match VALUE


    Tail

    The tail input plugin allows monitoring one or several text files. It has a behavior similar to the tail -f shell command.

    The plugin reads every matched file in the Path pattern and for every new line found (separated by a \n), it generates a new record. Optionally a database file can be used so the plugin can keep a history of tracked files and a state of offsets; this is very useful to resume the state if the service is restarted.

    Content:

    • Configuration Parameters

    • Multiline Parameters

    • Docker Mode Parameters

    • Getting Started

    • Tailing Files Keeping State

    Configuration Parameters

    The plugin supports the following configuration parameters:

    Note that if the database parameter db is not specified, by default the plugin will start reading each target file from the beginning.

    Multiline Configuration Parameters

    Additionally, the following options exist to configure the handling of multi-line files:

    Docker Mode Configuration Parameters

    Docker mode exists to recombine JSON log lines split by the Docker daemon due to its line length limit. To use this feature, configure the tail plugin with the corresponding parser and then enable Docker mode:
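
    A minimal sketch of such a configuration follows; the path is an illustrative assumption and the docker parser is the one defined in the parsers file shown earlier.

    [INPUT]
        # Illustrative path; point it at your container log files
        Name              tail
        Path              /var/lib/docker/containers/*/*.log
        Parser            docker
        Docker_Mode       On
        Docker_Mode_Flush 4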

    Getting Started

    In order to tail text or log files, you can run the plugin from the command line or through the configuration file:

    Command Line

    From the command line you can let Fluent Bit parse text files with the following options:

    Configuration File

    In your main configuration file append the following Input & Output sections:

    Tailing files keeping state

    The tail input plugin provides a feature to save the state of the tracked files; it is strongly suggested that you enable it. For this purpose the db property is available, e.g:

    When running, the database file /path/to/logs.db will be created. This database is backed by SQLite3, so if you are interested in exploring its content, you can open it with the SQLite client tool, e.g:

    Make sure to explore it while Fluent Bit is not actively working on the database file, otherwise you will see some Error: database is locked messages.

    Formatting SQLite

    By default the SQLite client tool does not format the columns in a human-readable way, so to explore the in_tail_files table you can create a config file in ~/.sqliterc with the following content:

    Rotation

    Rotation with truncation (e.g. logrotate's copytruncate mode) is not supported.

    Forward

    Forward is the protocol used by Fluentd to route messages between peers. The forward output plugin provides interoperability between Fluent Bit and Fluentd. There are no configuration steps required besides specifying where Fluentd is located; it can be on the local host or in a remote machine.

    This plugin offers two different transports and modes:

    • Forward (TCP): It uses a plain TCP connection.

    • Secure Forward (TLS): when TLS is enabled, the plugin switches to Secure Forward mode.

    Exclude_Path

    Set one or multiple shell patterns separated by commas to exclude files matching a certain criteria, e.g: exclude_path=*.gz,*.zip

    Refresh_Interval

    The interval of refreshing the list of watched files in seconds.

    60

    Rotate_Wait

    Specify the extra time, in seconds, to keep monitoring a file once it is rotated, in case some pending data still needs to be flushed.

    5

    Ignore_Older

    Ignores files that were last modified more than the specified time ago, in seconds. Supports m, h, d (minutes, hours, days) syntax. The default behavior is to read all specified files.

    Skip_Long_Lines

    When a monitored file reaches its buffer capacity due to a very long line (Buffer_Max_Size), the default behavior is to stop monitoring that file. Skip_Long_Lines alters that behavior and instructs Fluent Bit to skip long lines and continue processing other lines that fit into the buffer size.

    Off

    DB

    Specify the database file to keep track of monitored files and offsets.

    DB.Sync

    Set a default synchronization (I/O) method. Values: Extra, Full, Normal, Off. This flag affects how the internal SQLite engine does synchronization to disk; for more details about each option please refer to the SQLite documentation.

    Full

    Mem_Buf_Limit

    Set a limit of memory that the Tail plugin can use when appending data to the Engine. If the limit is reached, ingestion is paused; when the data is flushed it resumes.

    Parser

    Specify the name of a parser to interpret the entry as a structured message.

    Key

    When a message is unstructured (no parser applied), it's appended as a string under the key name log. This option allows defining an alternative name for that key.

    log

    Tag

    Set a tag (with regex-extract fields) that will be placed on lines read. E.g. kube.<namespace_name>.<pod_name>.<container_name>

    Tag_Regex

    Set a regex to extract fields from the file. E.g. (?<pod_name>[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*)_(?<namespace_name>[^_]+)_(?<container_name>.+)-
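
    Used together, Tag_Regex extracts named fields from the monitored file name and Tag references them to build the record tag. The sketch below reuses the example expressions above; the Path value is an illustrative assumption.

    [INPUT]
        # Illustrative path; the regex and tag come from the examples above
        Name      tail
        Path      /var/log/containers/*.log
        Tag_Regex (?<pod_name>[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*)_(?<namespace_name>[^_]+)_(?<container_name>.+)-
        Tag       kube.<namespace_name>.<pod_name>.<container_name>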

    Key

    Description

    Default

    Buffer_Chunk_Size

    Set the initial buffer size to read file data. This value is also used to increase the buffer size. The value must conform to the Unit Size specification.

    32k

    Buffer_Max_Size

    Set the limit of the buffer size per monitored file. When a buffer needs to be increased (e.g: very long lines), this value is used to restrict how much the memory buffer can grow. If reading a file exceeds this limit, the file is removed from the monitored file list. The value must conform to the Unit Size specification.

    Buffer_Chunk_Size

    Path

    Pattern specifying a specific log file or multiple ones through the use of common wildcards.

    Path_Key

    If enabled, it appends the name of the monitored file as part of the record. The value assigned becomes the key in the map.

    Key

    Description

    Default

    Multiline

    If enabled, the plugin will try to discover multiline messages and use the proper parsers to compose the outgoing messages. Note that when this option is enabled the Parser option is not used.

    Off

    Multiline_Flush

    Wait period (in seconds) to process queued multiline messages.

    4

    Parser_Firstline

    Name of the parser that matches the beginning of a multiline message. Note that the regular expression defined in the parser must include a group name (named capture).

    Parser_N

    Optional extra parser to interpret and structure multiline entries. This option can be used to define multiple parsers, e.g: Parser_1 ab1, Parser_2 ab2, Parser_N abN.

    Key

    Description

    Default

    Docker_Mode

    If enabled, the plugin will recombine split Docker log lines before passing them to any parser as configured above. This mode cannot be used at the same time as Multiline.

    Off

    Docker_Mode_Flush

    Wait period (in seconds) to flush queued unfinished split lines.

    4



    Configuration Parameters

    The following parameters are mandatory for either Forward or Secure Forward modes:

    Key

    Description

    Default

    Host

    Target host where Fluent-Bit or Fluentd are listening for Forward messages.

    127.0.0.1

    Port

    TCP Port of the target service.

    24224

    Time_as_Integer

    Set timestamps in integer format; this enables compatibility mode for the Fluentd v0.12 series.

    False

    Upstream

    Secure Forward Mode Configuration Parameters

    When using Secure Forward mode, TLS must be enabled. The following additional configuration parameters are available:

    Key

    Description

    Default

    Shared_Key

    A key string known by the remote Fluentd used for authorization.

    Self_Hostname

    Default value of the auto-generated certificate common name (CN).

    tls

    Enable or disable TLS support

    Off

    tls.verify

    Forward Setup

    Before proceeding, make sure that Fluentd is installed on your system; if that's not the case, please refer to the Fluentd Installation document first.

    Once Fluentd is installed, create the following configuration file example that will allow us to stream data into it:

    That configuration file specifies that Fluentd will listen for TCP connections on port 24224 through the forward input type. Then, for every message with a fluent_bit TAG, it will print the message to the standard output.

    In one terminal launch Fluentd specifying the new configuration file created (in_fluent-bit.conf):

    Fluent Bit + Forward Setup

    Now that Fluentd is ready to receive messages, we need to specify where the forward output plugin will flush the information using the following format:

    If the TAG parameter is not set, the plugin will set the tag as fluent_bit. Keep in mind that TAG is important for routing rules inside Fluentd.

    Using the CPU input plugin as an example we will flush CPU metrics to Fluentd:

    Now on the Fluentd side, you will see the CPU metrics gathered in the last seconds:

    So we gathered CPU metrics and flushed them out to Fluentd properly.

    Fluent Bit + Secure Forward Setup

    DISCLAIMER: the following example does not cover the generation of certificates required for proper use in production environments.

    Secure Forward aims to provide a secure channel of communication with the remote Fluentd service using TLS. Below is a minimal configuration for testing purposes.

    Fluent Bit

    Paste this content in a file called flb.conf:

    Fluentd

    Paste this content in a file called fld.conf:

    If you're using Fluentd v1, set it up as below:

    Test Communication

    Start Fluentd:

    Start Fluent Bit:

    After five seconds, Fluent Bit will write the records to Fluentd. In Fluentd output you will see a message like this:

    $ fluent-bit -i tail -p path=/var/log/syslog -o stdout
    [INPUT]
        Name        tail
        Path        /var/log/syslog
    
    [OUTPUT]
        Name   stdout
        Match  *
    $ fluent-bit -i tail -p path=/var/log/syslog -p db=/path/to/logs.db -o stdout
    $ sqlite3 tail.db
    -- Loading resources from /home/edsiper/.sqliterc
    
    SQLite version 3.14.1 2016-08-11 18:53:32
    Enter ".help" for usage hints.
    sqlite> SELECT * FROM in_tail_files;
    id     name                              offset        inode         created
    -----  --------------------------------  ------------  ------------  ----------
    1      /var/log/syslog                   73453145      23462108      1480371857
    sqlite>
    .headers on
    .mode column
    .width 5 32 12 12 10
    <source>
      type forward
      bind 0.0.0.0
      port 24224
    </source>
    
    <match fluent_bit>
      type stdout
    </match>
    $ fluentd -c test.conf
    2017-03-23 11:50:43 -0600 [info]: reading config file path="test.conf"
    2017-03-23 11:50:43 -0600 [info]: starting fluentd-0.12.33
    2017-03-23 11:50:43 -0600 [info]: gem 'fluent-mixin-config-placeholders' version '0.3.1'
    2017-03-23 11:50:43 -0600 [info]: gem 'fluent-plugin-docker' version '0.1.0'
    2017-03-23 11:50:43 -0600 [info]: gem 'fluent-plugin-elasticsearch' version '1.4.0'
    2017-03-23 11:50:43 -0600 [info]: gem 'fluent-plugin-flatten-hash' version '0.2.0'
    2017-03-23 11:50:43 -0600 [info]: gem 'fluent-plugin-flowcounter-simple' version '0.0.4'
    2017-03-23 11:50:43 -0600 [info]: gem 'fluent-plugin-influxdb' version '0.2.8'
    2017-03-23 11:50:43 -0600 [info]: gem 'fluent-plugin-json-in-json' version '0.1.4'
    2017-03-23 11:50:43 -0600 [info]: gem 'fluent-plugin-mongo' version '0.7.10'
    2017-03-23 11:50:43 -0600 [info]: gem 'fluent-plugin-out-http' version '0.1.3'
    2017-03-23 11:50:43 -0600 [info]: gem 'fluent-plugin-parser' version '0.6.0'
    2017-03-23 11:50:43 -0600 [info]: gem 'fluent-plugin-record-reformer' version '0.7.0'
    2017-03-23 11:50:43 -0600 [info]: gem 'fluent-plugin-rewrite-tag-filter' version '1.5.1'
    2017-03-23 11:50:43 -0600 [info]: gem 'fluent-plugin-stdin' version '0.1.1'
    2017-03-23 11:50:43 -0600 [info]: gem 'fluent-plugin-td' version '0.10.27'
    2017-03-23 11:50:43 -0600 [info]: adding match pattern="fluent_bit" type="stdout"
    2017-03-23 11:50:43 -0600 [info]: adding source type="forward"
    2017-03-23 11:50:43 -0600 [info]: using configuration file: <ROOT>
      <source>
        type forward
        bind 0.0.0.0
        port 24224
      </source>
      <match fluent_bit>
        type stdout
      </match>
    </ROOT>
    2017-03-23 11:50:43 -0600 [info]: listening fluent socket on 0.0.0.0:24224
    bin/fluent-bit -i INPUT -o forward://HOST:PORT
    $ bin/fluent-bit -i cpu -t fluent_bit -o forward://127.0.0.1:24224
    2017-03-23 11:53:06 -0600 fluent_bit: {"cpu_p":0.0,"user_p":0.0,"system_p":0.0,"cpu0.p_cpu":0.0,"cpu0.p_user":0.0,"cpu0.p_system":0.0,"cpu1.p_cpu":0.0,"cpu1.p_user":0.0,"cpu1.p_system":0.0,"cpu2.p_cpu":0.0,"cpu2.p_user":0.0,"cpu2.p_system":0.0,"cpu3.p_cpu":1.0,"cpu3.p_user":1.0,"cpu3.p_system":0.0}
    2017-03-23 11:53:07 -0600 fluent_bit: {"cpu_p":2.25,"user_p":2.0,"system_p":0.25,"cpu0.p_cpu":3.0,"cpu0.p_user":3.0,"cpu0.p_system":0.0,"cpu1.p_cpu":1.0,"cpu1.p_user":1.0,"cpu1.p_system":0.0,"cpu2.p_cpu":1.0,"cpu2.p_user":1.0,"cpu2.p_system":0.0,"cpu3.p_cpu":3.0,"cpu3.p_user":2.0,"cpu3.p_system":1.0}
    2017-03-23 11:53:08 -0600 fluent_bit: {"cpu_p":1.75,"user_p":1.0,"system_p":0.75,"cpu0.p_cpu":2.0,"cpu0.p_user":1.0,"cpu0.p_system":1.0,"cpu1.p_cpu":3.0,"cpu1.p_user":1.0,"cpu1.p_system":2.0,"cpu2.p_cpu":3.0,"cpu2.p_user":2.0,"cpu2.p_system":1.0,"cpu3.p_cpu":2.0,"cpu3.p_user":1.0,"cpu3.p_system":1.0}
    2017-03-23 11:53:09 -0600 fluent_bit: {"cpu_p":4.75,"user_p":3.5,"system_p":1.25,"cpu0.p_cpu":4.0,"cpu0.p_user":3.0,"cpu0.p_system":1.0,"cpu1.p_cpu":5.0,"cpu1.p_user":4.0,"cpu1.p_system":1.0,"cpu2.p_cpu":3.0,"cpu2.p_user":2.0,"cpu2.p_system":1.0,"cpu3.p_cpu":5.0,"cpu3.p_user":4.0,"cpu3.p_system":1.0}
    [SERVICE]
        Flush      5
        Daemon     off
        Log_Level  info
    
    [INPUT]
        Name       cpu
        Tag        cpu_usage
    
    [OUTPUT]
        Name          forward
        Match         *
        Host          127.0.0.1
        Port          24284
        Shared_Key    secret
        Self_Hostname flb.local
        tls           on
        tls.verify    off
    <source>
      @type         secure_forward
      self_hostname myserver.local
      shared_key    secret
      secure no
    </source>
    
    <match **>
     @type stdout
    </match>
    <source>
      @type forward
      <transport tls>
        cert_path /etc/td-agent/certs/fluentd.crt
        private_key_path /etc/td-agent/certs/fluentd.key
        private_key_passphrase password
      </transport>
      <security>
        self_hostname myserver.local
        shared_key secret
      </security>
    </source>
    
    <match **>
     @type stdout
    </match>
    $ fluentd -c fld.conf
    $ fluent-bit -c flb.conf
    2017-03-23 13:34:40 -0600 [info]: using configuration file: <ROOT>
      <source>
        @type secure_forward
        self_hostname myserver.local
        shared_key xxxxxx
        secure no
      </source>
      <match **>
        @type stdout
      </match>
    </ROOT>
    2017-03-23 13:34:41 -0600 cpu_usage: {"cpu_p":1.0,"user_p":0.75,"system_p":0.25,"cpu0.p_cpu":1.0,"cpu0.p_user":1.0,"cpu0.p_system":0.0,"cpu1.p_cpu":2.0,"cpu1.p_user":1.0,"cpu1.p_system":1.0,"cpu2.p_cpu":1.0,"cpu2.p_user":1.0,"cpu2.p_system":0.0,"cpu3.p_cpu":2.0,"cpu3.p_user":1.0,"cpu3.p_system":1.0}
    2017-03-23 13:34:42 -0600 cpu_usage: {"cpu_p":1.75,"user_p":1.75,"system_p":0.0,"cpu0.p_cpu":3.0,"cpu0.p_user":3.0,"cpu0.p_system":0.0,"cpu1.p_cpu":2.0,"cpu1.p_user":2.0,"cpu1.p_system":0.0,"cpu2.p_cpu":0.0,"cpu2.p_user":0.0,"cpu2.p_system":0.0,"cpu3.p_cpu":1.0,"cpu3.p_user":1.0,"cpu3.p_system":0.0}
    2017-03-23 13:34:43 -0600 cpu_usage: {"cpu_p":1.75,"user_p":1.25,"system_p":0.5,"cpu0.p_cpu":3.0,"cpu0.p_user":3.0,"cpu0.p_system":0.0,"cpu1.p_cpu":2.0,"cpu1.p_user":2.0,"cpu1.p_system":0.0,"cpu2.p_cpu":0.0,"cpu2.p_user":0.0,"cpu2.p_system":0.0,"cpu3.p_cpu":1.0,"cpu3.p_user":0.0,"cpu3.p_system":1.0}
    2017-03-23 13:34:44 -0600 cpu_usage: {"cpu_p":5.0,"user_p":3.25,"system_p":1.75,"cpu0.p_cpu":4.0,"cpu0.p_user":2.0,"cpu0.p_system":2.0,"cpu1.p_cpu":8.0,"cpu1.p_user":5.0,"cpu1.p_system":3.0,"cpu2.p_cpu":4.0,"cpu2.p_user":3.0,"cpu2.p_system":1.0,"cpu3.p_cpu":4.0,"cpu3.p_user":2.0,"cpu3.p_system":2.0}

    If Forward will connect to an Upstream instead of a simple host, this property defines the absolute path of the Upstream configuration file; for more details refer to the Upstream Servers documentation section.
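
    For reference, an Upstream configuration file defines a set of nodes that the forward output can balance across. The sketch below is an assumption-based illustration; the file name, node name, host and port are placeholders.

    # e.g. referenced from the output section as: Upstream /path/to/upstream.conf
    [UPSTREAM]
        name forward-balancing

    [NODE]
        name node-1
        host 127.0.0.1
        port 43000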

    Force certificate validation

    On

    tls.debug

    Set TLS debug verbosity level. It accepts the following values: 0 (No debug), 1 (Error), 2 (State change), 3 (Informational) and 4 (Verbose).

    1

    tls.ca_file

    Absolute path to CA certificate file

    tls.crt_file

    Absolute path to Certificate file.

    tls.key_file

    Absolute path to private Key file.

    tls.key_passwd

    Optional password for tls.key_file file.
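
    When full certificate validation is required, these keys are added to the forward output section. The following is only a sketch; the host name and certificate paths are placeholders.

    [OUTPUT]
        # Placeholder host and certificate paths
        Name          forward
        Match         *
        Host          fluentd.example.com
        Port          24224
        tls           on
        tls.verify    on
        tls.ca_file   /etc/ssl/certs/ca.crt
        tls.crt_file  /etc/ssl/certs/client.crt
        tls.key_file  /etc/ssl/private/client.key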
