Fluent Bit v1.8 Documentation

High Performance Log and Metrics Processor

Fluent Bit is a Fast and Lightweight Logs and Metrics Processor and Forwarder for Linux, OSX, Windows and BSD family operating systems. It has been made with a strong focus on performance to allow the collection of events from different sources without complexity.

Features

  • High Performance

  • Data Parsing

    • Convert your unstructured messages using our parsers: JSON, Regex, LTSV and Logfmt

  • Metrics Collection (Prometheus compatible)

  • Reliability and Data Integrity

    • Backpressure Handling

    • Data Buffering in memory and file system

  • Networking

    • Security: built-in TLS/SSL support

    • Asynchronous I/O

  • Pluggable Architecture and Extensibility: Inputs, Filters and Outputs

    • More than 80 built-in plugins available

    • Extensibility

      • Write any input, filter or output plugin in C language

      • Bonus: write Filters in Lua or Output plugins in Golang

  • Monitoring: expose internal metrics over HTTP in JSON and Prometheus format

  • Stream Processing: Perform data selection and transformation using simple SQL queries

    • Create new streams of data using query results

    • Aggregation Windows

    • Data analysis and prediction: Timeseries forecasting

  • Portable: runs on Linux, MacOS, Windows and BSD systems

Fluent Bit, Fluentd and CNCF

Fluent Bit is a sub-project under the umbrella of Fluentd, hosted by the CNCF, and is licensed under the terms of the Apache License v2.0. This project was originally created by Treasure Data and is currently a vendor-neutral and community-driven project.

What is Fluent Bit?

Fluent Bit is a CNCF sub-project under the umbrella of Fluentd

Fluent Bit is an open source and multi-platform log processor tool which aims to be a generic Swiss knife for logs processing and distribution.

Nowadays the number of sources of information in our environments is ever increasing. Handling data collection at scale is complex, and collecting and aggregating diverse data requires a specialized tool that can deal with:

  • Different sources of information

  • Different data formats

  • Data Reliability

  • Security

  • Flexible Routing

  • Multiple destinations

Fluent Bit has been designed with performance and low resource consumption in mind.

Input

The way to gather data from your sources

Fluent Bit provides different Input Plugins to gather information from different sources: some of them just collect data from log files while others can gather metrics information from the operating system. There are many plugins for different needs.

When an input plugin is loaded, an internal instance is created. Every instance has its own and independent configuration. Configuration keys are often called properties.

Every input plugin has its own documentation section where it's specified how it can be used and what properties are available.

For more details, please refer to the Input Plugins section.
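As an illustrative sketch of two independent input plugin instances, each with its own properties (the tail and cpu plugins and the keys shown are standard ones, while the tag names, path and interval values are placeholders chosen for this example):

[INPUT]
    # first instance: follow a log file
    Name          tail
    Tag           app.log
    Path          /var/log/app.log

[INPUT]
    # second instance: collect CPU metrics every 5 seconds
    Name          cpu
    Tag           metrics.cpu
    Interval_Sec  5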

A Brief History of Fluent Bit

Every project has a story

In 2014, the team at Treasure Data forecasted the need for a lightweight log processor for constrained environments like Embedded Linux and Gateways. The project aimed to be part of the Fluentd ecosystem, and we called it Fluent Bit, fully open source and available under the terms of the Apache License v2.0.

After the project was around for some time, it got some traction in the Embedded market, but we also started getting requests for several features from the Cloud community, like more inputs, filters, and outputs. Not long after that, Fluent Bit became one of the preferred solutions to solve the logging challenges in Cloud environments.

Buffering

Performance and Data Safety

When Fluent Bit processes data, it uses the system memory (heap) as a primary and temporary place to store the record logs before they get delivered; the records are processed in this private memory area.

Buffering refers to the ability to store the records somewhere, and while they are processed and delivered, still be able to store more. Buffering in memory is the fastest mechanism, but there are certain scenarios where the mechanism requires special strategies to deal with backpressure, data safety, or reduced memory consumption by the service in constrained environments.

Network failures or latency on a third party service are pretty common, and in scenarios where we cannot deliver data as fast as we receive new data to process, we will likely face backpressure.

Our buffering strategies are designed to solve problems associated with backpressure and general delivery failures.

As part of its buffering strategies, Fluent Bit offers a primary buffering mechanism in memory and an optional secondary one using the file system. With this hybrid solution you can accommodate any use case safely and keep high performance while processing your data.

The two mechanisms are not exclusive: when the data is ready to be processed or delivered it will always be in memory, while other data in the queue might be in the file system until it is ready to be processed and moved up to memory.

To learn more about the buffering configuration in Fluent Bit, please jump to the Buffering & Storage section.
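A minimal sketch of the hybrid setup described above, assuming the storage.* keys documented in the Buffering & Storage section (the path is a placeholder):

[SERVICE]
    Flush                     1
    # directory where filesystem chunks are persisted
    storage.path              /var/log/flb-storage/
    storage.sync              normal
    storage.backlog.mem_limit 5M

[INPUT]
    Name          cpu
    # buffer this input's data in the filesystem, not only in memory
    storage.type  filesystem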

Key Concepts

Before diving into Fluent Bit it's good to get acquainted with some of the key concepts of the service. This document provides a gentle introduction to those concepts and common terminology. We've provided a list below of all the terms we'll cover, but we recommend reading this document from start to finish to gain a more general understanding of our log and stream processor.

There are a few key concepts that are really important to understand how Fluent Bit operates.

  • Event or Record

  • Filtering

  • Tag

  • Timestamp

  • Match

  • Structured Message

Event or Record

Every incoming piece of data that belongs to a log or a metric that is retrieved by Fluent Bit is considered an Event or a Record.

As an example consider the following content of a Syslog file:

Jan 18 12:52:16 flb systemd[2222]: Starting GNOME Terminal Server
Jan 18 12:52:16 flb dbus-daemon[2243]: [session uid=1000 pid=2243] Successfully activated service 'org.gnome.Terminal'
Jan 18 12:52:16 flb systemd[2222]: Started GNOME Terminal Server.
Jan 18 12:52:16 flb gsd-media-keys[2640]: # watch_fast: "/org/gnome/terminal/legacy/" (establishing: 0, active: 0)

It contains four lines, and they represent four independent Events.

Internally, an Event always has two components (in an array form):

[TIMESTAMP, MESSAGE]
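For illustration only (the timestamp below is a placeholder, not a value computed from the log above, and the "log" key assumes the file is read with the default key used by the Tail input), the first Syslog line could be represented internally roughly as:

[1600000000.000000000, {"log"=>"Jan 18 12:52:16 flb systemd[2222]: Starting GNOME Terminal Server"}]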

Filtering

In some cases it is necessary to perform modifications on the content of Events. The process to alter, enrich or drop Events is called Filtering.

There are many use cases where Filtering is required, for example:

  • Append specific information to the Event like an IP address or metadata.

  • Select a specific piece of the Event content.

  • Drop Events that match a certain pattern.

Tag

Every Event that gets into Fluent Bit gets assigned a Tag. This tag is an internal string that is used in a later stage by the Router to decide which Filter or Output phase it must go through.

Most of the Tags are assigned manually in the configuration. If a Tag is not specified, Fluent Bit will assign the name of the Input plugin instance where that Event was generated.

The only input plugin that does NOT assign Tags is the Forward input. This plugin speaks the Fluentd wire protocol called Forward, where every Event already comes with an associated Tag. Fluent Bit will always use the incoming Tag set by the client.

A Tagged record must always have a Matching rule. To learn more about Tags and Matches check the Routing section.

Timestamp

The Timestamp represents the time when an Event was created. Every Event has an associated Timestamp. The Timestamp is a numeric fractional integer in the format:

SECONDS.NANOSECONDS

Seconds

It is the number of seconds that have elapsed since the Unix epoch.

Nanoseconds

Fractional second or one thousand-millionth of a second.

A timestamp always exists, either set by the Input plugin or discovered through a data parsing process.

Match

Fluent Bit allows you to deliver your collected and processed Events to one or multiple destinations; this is done through a routing phase. A Match represents a simple rule to select Events whose Tag matches a defined rule.

To learn more about Tags and Matches check the Routing section.

Structured Messages

Source events can have a structure or not. A structure defines a set of keys and values inside the Event message. As an example consider the following two messages:

No structured message

"Project Fluent Bit created on 1398289291"

Structured Message

{"project": "Fluent Bit", "created": 1398289291}

At a low level both are just arrays of bytes, but the structured message defines keys and values; having a structure helps to implement faster operations on data modifications.

Fluent Bit always handles every Event message as a structured message. For performance reasons, we use a binary serialization data format called MessagePack.

Consider MessagePack as a binary version of JSON on steroids.

Output

Destinations for your data: databases, cloud services and more!

The output interface allows us to define destinations for the data. Common destinations are remote services, the local file system, or a standard interface with other programs. Outputs are implemented as plugins and there are many available.

When an output plugin is loaded, an internal instance is created. Every instance has its own independent configuration. Configuration keys are often called properties.

Every output plugin has its own documentation section specifying how it can be used and what properties are available.

For more details, please refer to the Output Plugins section.
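As a brief illustration of an output instance and its properties (the http plugin and the keys shown are standard Fluent Bit ones, while the hostname, URI and tag are placeholders for this example):

[OUTPUT]
    # deliver records tagged app.* to a remote HTTP endpoint over TLS
    Name   http
    Match  app.*
    Host   logs.example.com
    Port   443
    URI    /ingest
    tls    On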

Buffer

Data processing with reliability

Previously defined in the Buffering concept section, the buffer phase in the pipeline aims to provide a unified and persistent mechanism to store your data, either using the primary in-memory model or using the filesystem-based mode.

The buffer phase already contains the data in an immutable state, meaning that no other filter can be applied.

Note that buffered data is not raw text; it's in Fluent Bit's internal binary representation.

Fluent Bit offers a buffering mechanism in the file system that acts as a backup system to avoid data loss in case of system failures.

Filter

Modify, Enrich or Drop your records

In production environments we want to have full control of the data we are collecting; filtering is an important feature that allows us to alter the data before delivering it to some destination.

Filtering is implemented through plugins, so each filter available can be used to match, exclude or enrich your logs with some specific metadata.

We support many filters. A common use case for filtering is Kubernetes deployments, where every Pod log needs to get the proper metadata associated with it.

Very similar to the input plugins, Filters run in an instance context, which has its own independent configuration. Configuration keys are often called properties.

For more details about the Filters available and their usage, please refer to the Filters section.
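As a small sketch of how filter instances are declared (the grep and record_modifier filters and their Regex/Record properties are standard, but the tag, field names and values below are illustrative placeholders):

[FILTER]
    # keep only records whose log field contains the word "error"
    Name    grep
    Match   app.*
    Regex   log error

[FILTER]
    # enrich the remaining records with a static key/value pair
    Name    record_modifier
    Match   app.*
    Record  hostname web-01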

Fluentd & Fluent Bit

The Production Grade Ecosystem

Logging and data processing in general can be complex, and at scale a bit more, and that's why Fluentd was born. Fluentd has become more than a simple tool; it has grown into a fullscale ecosystem that contains SDKs for different languages and sub-projects like Fluent Bit.

On this page, we will describe the relationship between the Fluentd and Fluent Bit open source projects. As a summary, we can say both are:

  • Licensed under the terms of Apache License v2.0

  • Hosted projects by the Cloud Native Computing Foundation (CNCF)

  • Originally created by Treasure Data

  • Production Grade solutions: deployed thousands of times every single day, millions per month.

  • Community driven projects

  • Widely Adopted by the Industry: trusted by all major companies like AWS, Microsoft, Google Cloud and hundreds of others.

Both projects share a lot of similarities: Fluent Bit is fully designed and built on top of the best ideas of Fluentd architecture and general design. Choosing which one to use depends on the end-user needs.

The following table describes a comparison of different areas of the projects:

Fluentd
Fluent Bit

Scope

Containers / Servers

Embedded Linux / Containers / Servers

Language

C & Ruby

C

Memory

~40MB

~650KB

Performance

High Performance

High Performance

Dependencies

Built as a Ruby Gem, it requires a certain number of gems.

Zero dependencies, unless some special plugin requires them.

Plugins

More than 1000 plugins available

Around 70 plugins available

License

Apache License v2.0

Apache License v2.0

Both Fluentd and Fluent Bit can work as Aggregators or Forwarders, and they can complement each other or be used as standalone solutions.

Parser

Convert Unstructured to Structured messages

Dealing with raw strings or unstructured messages is a constant pain; having a structure is highly desired. Ideally we want to set a structure on the incoming data by the Input Plugins as soon as it is collected.

The Parser allows you to convert from unstructured to structured data. As a demonstrative example consider the following Apache (HTTP Server) log entry:

192.168.2.20 - - [28/Jul/2006:10:27:10 -0300] "GET /cgi-bin/try/ HTTP/1.0" 200 3395

The above log line is a raw string without format; ideally we would like to give it a structure that can be processed more easily later. If the proper configuration is used, the log entry could be converted to:

{
  "host":    "192.168.2.20",
  "user":    "-",
  "method":  "GET",
  "path":    "/cgi-bin/try/",
  "code":    "200",
  "size":    "3395",
  "referer": "",
  "agent":   ""
 }
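A structure like the one above can be produced with a regex-based parser. The following sketch is modeled on the apache2 parser shipped in Fluent Bit's default parsers.conf; treat it as an illustration and verify it against the parsers.conf installed on your system:

[PARSER]
    Name        apache2_example
    Format      regex
    # named capture groups become keys in the structured record
    Regex       ^(?<host>[^ ]*) [^ ]* (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^ ]*) +\S*)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$
    Time_Key    time
    Time_Format %d/%b/%Y:%H:%M:%S %z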

Parsers are fully configurable and are independently and optionally handled by each input plugin. For more details please refer to the Parsers section.

Router

Create flexible routing rules

Routing is a core feature that allows you to route your data through Filters and finally to one or multiple destinations. The router relies on the concepts of Tags and Matching rules.

There are two important concepts in Routing:

  • Tag

  • Match

When the data is generated by the input plugins, it comes with a Tag (most of the time the Tag is configured manually). The Tag is a human-readable indicator that helps to identify the data source.

In order to define where the data should be routed, a Match rule must be specified in the output configuration.

Consider the following configuration example that aims to deliver CPU metrics to an Elasticsearch database and Memory metrics to the standard output interface:

[INPUT]
    Name cpu
    Tag  my_cpu

[INPUT]
    Name mem
    Tag  my_mem

[OUTPUT]
    Name   es
    Match  my_cpu

[OUTPUT]
    Name   stdout
    Match  my_mem

Note: the above is a simple example demonstrating how Routing is configured.

Routing works by automatically reading the Input Tags and the Output Match rules. If some data has a Tag that doesn't match at routing time, the data is deleted.

Routing with Wildcard

Routing is flexible enough to support wildcard in the Match pattern. The below example defines a common destination for both sources of data:

[INPUT]
    Name cpu
    Tag  my_cpu

[INPUT]
    Name mem
    Tag  my_mem

[OUTPUT]
    Name   stdout
    Match  my_*

The match rule is set to my_* which means it will match any Tag that starts with my_.


Download Source Code

Stable

For production systems, we strongly suggest that you always get the latest stable release of the source code in either zip or tarball format from Github using the following link pattern:

https://github.com/fluent/fluent-bit/archive/refs/tags/v<release version>.tar.gz
https://github.com/fluent/fluent-bit/archive/refs/tags/v<release version>.zip

For example, for version 1.8.12 the link is the following:

https://github.com/fluent/fluent-bit/archive/refs/tags/v1.8.12.tar.gz
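To fetch and unpack that tarball from the command line (any standard curl and tar will do; GitHub archives extract into a <project>-<version> directory):

$ curl -L -o fluent-bit-1.8.12.tar.gz https://github.com/fluent/fluent-bit/archive/refs/tags/v1.8.12.tar.gz
$ tar -xzf fluent-bit-1.8.12.tar.gz
$ cd fluent-bit-1.8.12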

Development

For anyone who aims to contribute to the project by testing or extending the code base, you can get the development version from our GIT repository:

$ git clone https://github.com/fluent/fluent-bit

Note that our master branch is where the development of Fluent Bit happens. Since it's a development version, expect issues when compiling or at run time.

We encourage everybody to help us test every development version; in the end, this is what will become the next stable release.


Build and Install

Prepare environment

Fluent Bit uses CMake as its build system. Note that Fluent Bit requires CMake 3.x, so you may need to use cmake3 instead of cmake to complete the steps below. In the following steps you can find the exact commands to build and install the project with the default options. If you already know how CMake works you can skip this part and look at the build options available.

Change to the build/ directory inside the Fluent Bit sources and let CMake configure the project, specifying where the root path is located:

$ cd build/
$ cmake ../
-- The C compiler identification is GNU 4.9.2
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- The CXX compiler identification is GNU 4.9.2
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
...
-- Could NOT find Doxygen (missing:  DOXYGEN_EXECUTABLE)
-- Looking for accept4
-- Looking for accept4 - not found
-- Configuring done
-- Generating done
-- Build files have been written to: /home/edsiper/coding/fluent-bit/build

Now you are ready to start the compilation process through the simple make command:

$ make
Scanning dependencies of target msgpack
[  2%] Building C object lib/msgpack-1.1.0/CMakeFiles/msgpack.dir/src/unpack.c.o
[  4%] Building C object lib/msgpack-1.1.0/CMakeFiles/msgpack.dir/src/objectc.c.o
[  7%] Building C object lib/msgpack-1.1.0/CMakeFiles/msgpack.dir/src/version.c.o
...
[ 19%] Building C object lib/monkey/mk_core/CMakeFiles/mk_core.dir/mk_file.c.o
[ 21%] Building C object lib/monkey/mk_core/CMakeFiles/mk_core.dir/mk_rconf.c.o
[ 23%] Building C object lib/monkey/mk_core/CMakeFiles/mk_core.dir/mk_string.c.o
...
Scanning dependencies of target fluent-bit-static
[ 66%] Building C object src/CMakeFiles/fluent-bit-static.dir/flb_pack.c.o
[ 69%] Building C object src/CMakeFiles/fluent-bit-static.dir/flb_input.c.o
[ 71%] Building C object src/CMakeFiles/fluent-bit-static.dir/flb_output.c.o
...
Linking C executable ../bin/fluent-bit
[100%] Built target fluent-bit-bin

To continue installing the binary on the system, just do:

$ make install

It's likely you will need root privileges, so you can try prefixing the command with sudo.

Build Options

Fluent Bit provides certain options to CMake that can be enabled or disabled when configuring. Please refer to the following tables under the General Options, Development Options, Input Plugins and Output Plugins sections.

General Options

option
description
default

FLB_ALL

Enable all features available

No

FLB_JEMALLOC

Use Jemalloc as default memory allocator

No

FLB_TLS

Build with SSL/TLS support

Yes

FLB_BINARY

Build executable

Yes

FLB_EXAMPLES

Build examples

Yes

FLB_SHARED_LIB

Build shared library

Yes

FLB_MTRACE

Enable mtrace support

No

FLB_INOTIFY

Enable Inotify support

Yes

FLB_POSIX_TLS

Force POSIX thread storage

No

FLB_SQLDB

Enable SQL embedded database support

No

FLB_HTTP_SERVER

Enable HTTP Server

No

FLB_LUAJIT

Enable Lua scripting support

Yes

FLB_RECORD_ACCESSOR

Enable record accessor

Yes

FLB_SIGNV4

Enable AWS Signv4 support

Yes

FLB_STATIC_CONF

Build binary using static configuration files. The value of this option must be a directory containing configuration files.

FLB_STREAM_PROCESSOR

Enable Stream Processor

Yes
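For example, to toggle some of the options above when configuring (the specific combination of flags here is only an illustration), the cmake step shown earlier could be run as:

$ cd build/
$ cmake -DFLB_HTTP_SERVER=On -DFLB_JEMALLOC=On ../
$ make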

Development Options

option
description
default

FLB_DEBUG

Build binaries with debug symbols

No

FLB_VALGRIND

Enable Valgrind support

No

FLB_TRACE

Enable trace mode

No

FLB_SMALL

Minimise binary size

No

FLB_TESTS_RUNTIME

Enable runtime tests

No

FLB_TESTS_INTERNAL

Enable internal tests

No

FLB_TESTS

Enable tests

No

FLB_BACKTRACE

Enable backtrace/stacktrace support

Yes

Input Plugins

The input plugins provide the means to gather information from a specific source type, which can be a network interface, some built-in metric or a specific input device. The following input plugins are available:

option
description
default

FLB_IN_COLLECTD

Enable Collectd input plugin

On

FLB_IN_CPU

Enable CPU input plugin

On

FLB_IN_DISK

Enable Disk I/O Metrics input plugin

On

FLB_IN_DOCKER

Enable Docker metrics input plugin

On

FLB_IN_EXEC

Enable Exec input plugin

On

FLB_IN_FORWARD

Enable Forward input plugin

On

FLB_IN_HEAD

Enable Head input plugin

On

FLB_IN_HEALTH

Enable Health input plugin

On

FLB_IN_KMSG

Enable Kernel log input plugin

On

FLB_IN_MEM

Enable Memory input plugin

On

FLB_IN_MQTT

Enable MQTT Server input plugin

On

FLB_IN_NETIF

Enable Network I/O metrics input plugin

On

FLB_IN_PROC

Enable Process monitoring input plugin

On

FLB_IN_RANDOM

Enable Random input plugin

On

FLB_IN_SERIAL

Enable Serial input plugin

On

FLB_IN_STDIN

Enable Standard input plugin

On

FLB_IN_SYSLOG

Enable Syslog input plugin

On

FLB_IN_SYSTEMD

Enable Systemd / Journald input plugin

On

FLB_IN_TAIL

Enable Tail (follow files) input plugin

On

FLB_IN_TCP

Enable TCP input plugin

On

FLB_IN_THERMAL

Enable system temperature(s) input plugin

On

FLB_IN_WINLOG

Enable Windows Event Log input plugin (Windows Only)

On

Filter Plugins

The filter plugins allow you to modify, enrich or drop records. The following table describes the filters available on this version:

option
description
default

FLB_FILTER_AWS

Enable AWS metadata filter

On

FLB_FILTER_EXPECT

Enable Expect data test filter

On

FLB_FILTER_GREP

Enable Grep filter

On

FLB_FILTER_KUBERNETES

Enable Kubernetes metadata filter

On

FLB_FILTER_LUA

Enable Lua scripting filter

On

FLB_FILTER_MODIFY

Enable Modify filter

On

FLB_FILTER_NEST

Enable Nest filter

On

FLB_FILTER_PARSER

Enable Parser filter

On

FLB_FILTER_RECORD_MODIFIER

Enable Record Modifier filter

On

FLB_FILTER_REWRITE_TAG

Enable Rewrite Tag filter

On

FLB_FILTER_STDOUT

Enable Stdout filter

On

FLB_FILTER_THROTTLE

Enable Throttle filter

On

Output Plugins

The output plugins give the capacity to flush the information to some external interface, service or terminal. The following table describes the output plugins available as of this version:

option
description
default

FLB_OUT_AZURE

Enable Microsoft Azure output plugin

On

FLB_OUT_BIGQUERY

Enable Google BigQuery output plugin

On

FLB_OUT_COUNTER

Enable Counter output plugin

On

FLB_OUT_CLOUDWATCH_LOGS

Enable Amazon CloudWatch output plugin

On

FLB_OUT_DATADOG

Enable Datadog output plugin

On

FLB_OUT_ES

Enable Elastic Search output plugin

On

FLB_OUT_FILE

Enable File output plugin

On

FLB_OUT_KINESIS_FIREHOSE

Enable Amazon Kinesis Data Firehose output plugin

On

FLB_OUT_KINESIS_STREAMS

Enable Amazon Kinesis Data Streams output plugin

On

FLB_OUT_FLOWCOUNTER

Enable Flowcounter output plugin

On

FLB_OUT_FORWARD

Enable Fluentd output plugin

On

FLB_OUT_GELF

Enable Gelf output plugin

On

FLB_OUT_HTTP

Enable HTTP output plugin

On

FLB_OUT_INFLUXDB

Enable InfluxDB output plugin

On

FLB_OUT_KAFKA

Enable Kafka output

Off

FLB_OUT_KAFKA_REST

Enable Kafka REST Proxy output plugin

On

FLB_OUT_LIB

Enable Lib output plugin

On

FLB_OUT_NATS

Enable NATS output plugin

On

FLB_OUT_NULL

Enable NULL output plugin

On

FLB_OUT_PGSQL

Enable PostgreSQL output plugin

On

FLB_OUT_PLOT

Enable Plot output plugin

On

FLB_OUT_SLACK

Enable Slack output plugin

On

FLB_OUT_S3

Enable Amazon S3 output plugin

On

FLB_OUT_SPLUNK

Enable Splunk output plugin

On

FLB_OUT_STACKDRIVER

Enable Google Stackdriver output plugin

On

FLB_OUT_STDOUT

Enable STDOUT output plugin

On

FLB_OUT_TCP

Enable TCP/TLS output plugin

On

FLB_OUT_TD

Enable Treasure Data output plugin

On

Amazon Linux

Install on Amazon Linux 2

Fluent Bit is distributed as the td-agent-bit package and is available for the latest Amazon Linux 2. The following architectures are supported:

  • x86_64

  • aarch64 / arm64v8

Configure Yum

We provide td-agent-bit through a Yum repository. In order to add the repository reference to your system, please add a new file called td-agent-bit.repo in /etc/yum.repos.d/ with the following content:

[td-agent-bit]
name = TD Agent Bit
baseurl = https://packages.fluentbit.io/amazonlinux/2/$basearch/
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.fluentbit.io/fluentbit.key
enabled=1

Note: we encourage you to always enable gpgcheck for security reasons. All our packages are signed.

Updated key from March 2022

The GPG Key fingerprint of the new key is:

C3C0 A285 34B9 293E AF51  FABD 9F9D DC08 3888 C1CD
Fluentbit releases (Releases signing key) <releases@fluentbit.io>

The GPG Key fingerprint of the old key is:

F209 D876 2A60 CD49 E680 633B 4FF8 368B 6EA0 722A

Install

Once your repository is configured, run the following command to install it:

$ yum install td-agent-bit

The next step is to instruct systemd to start the service:

$ sudo service td-agent-bit start

If you do a status check, you should see output similar to this:

$ service td-agent-bit status
Redirecting to /bin/systemctl status  td-agent-bit.service
● td-agent-bit.service - TD Agent Bit
   Loaded: loaded (/usr/lib/systemd/system/td-agent-bit.service; disabled; vendor preset: disabled)
   Active: active (running) since Thu 2016-07-07 02:08:01 BST; 9s ago
 Main PID: 3820 (td-agent-bit)
   CGroup: /system.slice/td-agent-bit.service
           └─3820 /opt/td-agent-bit/bin/td-agent-bit -c etc/td-agent-bit/td-agent-bit.conf
...

The default configuration of td-agent-bit collects metrics of CPU usage and sends the records to the standard output. You can see the outgoing data in your /var/log/messages file.

Upgrade Notes

The following article covers the relevant notes for users upgrading from previous Fluent Bit versions. We aim to cover compatibility changes that you must be aware of. For more details about changes in each release, please refer to the Official Release Notes.

Fluent Bit v1.6

If you are migrating from a previous version of Fluent Bit, please review the following important changes:

Tail Input Plugin

By default the plugin now follows a file from the end once the service starts (the old behavior was to always read from the beginning). Every file found at start is followed from its last position; new files discovered at runtime, or rotated files, are read from the beginning.

If you desire to keep the old behavior you can set the option read_from_head to true.

Stackdriver Output Plugin

The project_id of resource in LogEntry sent to Google Cloud Logging would be set to the project ID rather than the project number. To learn the difference between a Project ID and a project number, see this for more details.

If you have any existing queries based on the resource's project_id, please update your query accordingly.

Fluent Bit v1.5

The migration from v1.4 to v1.5 is pretty straightforward.

If you enabled keepalive mode in your configuration, note that this configuration property has been renamed to net.keepalive. Now all Network I/O keepalive is enabled by default; to learn more about this and other associated configuration properties, read the Networking Administration section.

If you use the Elasticsearch output plugin, note the default value of type changed from flb_type to _doc. Many versions of Elasticsearch will tolerate this, but ES v5.6 through v6.1 require a type without a leading underscore. See the Elasticsearch output plugin documentation FAQ entry for more.

Fluent Bit v1.4

If you are migrating from Fluent Bit v1.3, there are no breaking changes. Just new exciting features to enjoy :)

Fluent Bit v1.3

If you are migrating from Fluent Bit v1.2 to v1.3, there are no breaking changes. If you are upgrading from an older version please review the incremental changes below.

Fluent Bit v1.2

Docker, JSON, Parsers and Decoders

On Fluent Bit v1.2 we have fixed many issues associated with JSON encoding and decoding; hence, when parsing Docker logs it is no longer necessary to use decoders. The new Docker parser looks like this:

[PARSER]
    Name         docker
    Format       json
    Time_Key     time
    Time_Format  %Y-%m-%dT%H:%M:%S.%L
    Time_Keep    On

Note: again, do not use decoders.

Kubernetes Filter

We have also improved how the Kubernetes Filter handles the stringified log message. If the option Merge_Log is enabled, it will try to handle the log content as a JSON map; if so, it will add the keys to the root map.

In addition, we have fixed and improved the option called Merge_Log_Key. If a merge log succeeds, all new keys will be packaged under the key specified by this option; a suggested configuration is as follows:

[FILTER]
    Name             Kubernetes
    Match            kube.*
    Kube_Tag_Prefix  kube.var.log.containers.
    Merge_Log        On
    Merge_Log_Key    log_processed

As an example, if the original log content is the following map:

{"key1": "val1", "key2": "val2"}

the final record will be composed as follows:

{
    "log": "{\"key1\": \"val1\", \"key2\": \"val2\"}",
    "log_processed": {
        "key1": "val1",
        "key2": "val2"
    }
}

Fluent Bit v1.1

If you are upgrading from Fluent Bit <= 1.0.x you should take into consideration the following relevant changes when switching to the Fluent Bit v1.1 series:

Kubernetes Filter

We introduced a new configuration property called Kube_Tag_Prefix to help with Tag prefix resolution and address an unexpected behavior that landed in previous versions.

During the 1.0.x release cycle, a commit in the Tail input plugin changed the default behavior of how the Tag was composed when using the wildcard for expansion, breaking compatibility with other services. Consider the following configuration example:

[INPUT]
    Name  tail
    Path  /var/log/containers/*.log
    Tag   kube.*

The expected behavior is that the Tag will be expanded to:

kube.var.log.containers.apache.log

but the change introduced in the 1.0 series switched from the absolute path to the base file name only:

kube.apache.log

In the Fluent Bit v1.1 release we restored the default behavior; the Tag is now composed using the absolute path of the monitored file.

Having the absolute path in the Tag is relevant for routing and flexible configuration, and it also helps to keep compatibility with Fluentd behavior.

This behavior switch in the Tail input plugin affects how the Kubernetes Filter operates. As you know, when the filter is used it needs to perform a local metadata lookup based on the file names when using Tail as a source. With the new Kube_Tag_Prefix option you can specify the prefix used in the Tail input plugin; for the configuration example above, the new configuration will look as follows:

[INPUT]
    Name  tail
    Path  /var/log/containers/*.log
    Tag   kube.*

[FILTER]
    Name             kubernetes
    Match            *
    Kube_Tag_Prefix  kube.var.log.containers.

So the proper value for Kube_Tag_Prefix must be composed of the Tag prefix set in the Tail input plugin plus the monitored directory converted to Tag format (replacing slashes with dots).

Requirements

Fluent Bit uses very low CPU and Memory consumption and is compatible with most x86, x86_64, arm32v7 and arm64v8 based platforms. In order to build it you need the following components in your system:

  • Compiler: GCC or clang

  • CMake

  • Flex & Bison: only if you enable the Stream Processor or Record Accessor feature (both enabled by default)

In the core there are no other dependencies. Certain features depend on third party components, like output plugins with special backend libraries (e.g. Kafka); those are included in the main source code repository.

Getting Started with Fluent Bit

The following serves as a guide on how to install, deploy and upgrade Fluent Bit.

Container Deployment

  • Kubernetes

  • Docker

  • Containers on AWS

Install on Linux (Packages)

  • CentOS / Red Hat

  • Ubuntu

  • Debian

  • Amazon Linux

  • Raspbian / Raspberry Pi

  • Yocto / Embedded Linux

Install on Windows (Packages)

  • Windows Server 2019

  • Windows 10 2019.03

Compile from Source

  • Linux, FreeBSD, MacOS

  • Windows

Supported Platforms

The following operating systems and architectures are supported in Fluent Bit. Refer to the supported platform documentation to see which platforms are supported in each release.

From an architecture support perspective, Fluent Bit is fully functional on x86_64, Arm64v8 and Arm32v7 based processors.

Fluent Bit can also work on OSX and *BSD systems, but not all plugins will be available on all platforms. Official support will be expanded based on community demand. Fluent Bit may run on older operating systems, though it will need to be built from source, or use custom packages from enterprise providers.

License

Strong Commitment to the Openness and Collaboration


Fluent Bit, including its core, plugins and tools, is distributed under the terms of the Apache License v2.0:

                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS
Build with Static Configuration

Static configuration mode aims to include a built-in configuration in the final binary of Fluent Bit, disabling the usage of external files or flags at runtime.

Fluent Bit in normal operation mode is configurable through text files or specific arguments in the command line. While this is the ideal deployment case, there are scenarios where a more restricted configuration is required: static configuration mode.

Getting Started

Requirements

The following steps assume you are familiar with configuring Fluent Bit using text files and that you have experience building it from scratch as described in the Build and Install section.

Configuration Directory

In your file system prepare a specific directory that will be used as an entry point for the build system to look up and parse the configuration files. It is mandatory that this directory contains at least one configuration file called fluent-bit.conf with the required SERVICE, INPUT and OUTPUT sections. As an example, create a new fluent-bit.conf file with the following content:

[SERVICE]
    Flush     1
    Daemon    off
    Log_Level info

[INPUT]
    Name      cpu

[OUTPUT]
    Name      stdout
    Match     *

The configuration provided above will calculate CPU metrics from the running system and print them to the standard output interface.

Build with Custom Configuration

Inside the Fluent Bit source code, get into the build/ directory and run CMake appending the FLB_STATIC_CONF option pointing to the configuration directory recently created, e.g.:

$ cd fluent-bit/build/
$ cmake -DFLB_STATIC_CONF=/path/to/my/confdir/

then build it:

$ make

At this point the generated fluent-bit binary is ready to run without the need for further configuration:

$ bin/fluent-bit 
Fluent-Bit v0.15.0
Copyright (C) Treasure Data

[2018/10/19 15:32:31] [ info] [engine] started (pid=15186)
[0] cpu.local: [1539984752.000347547, {"cpu_p"=>0.750000, "user_p"=>0.500000, "system_p"=>0.250000, "cpu0.p_cpu"=>1.000000, "cpu0.p_user"=>1.000000, "cpu0.p_system"=>0.000000, "cpu1.p_cpu"=>0.000000, "cpu1.p_user"=>0.000000, "cpu1.p_system"=>0.000000, "cpu2.p_cpu"=>0.000000, "cpu2.p_user"=>0.000000, "cpu2.p_system"=>0.000000, "cpu3.p_cpu"=>1.000000, "cpu3.p_user"=>1.000000, "cpu3.p_system"=>0.000000}]

Format and Schema

Fluent Bit might optionally use a configuration file to define how the service will behave, and before proceeding we need to understand how the configuration schema works.

The schema is defined by three concepts:

  • Sections

  • Entries: Key/Value

  • Indented Configuration Mode

A simple example of a configuration file is as follows:

[SERVICE]
    # This is a commented line
    Daemon    off
    log_level debug

Sections

A section is defined by a name or title inside brackets. Looking at the example above, a Service section has been set using [SERVICE] definition. Section rules:

  • All section content must be indented (4 spaces ideally).

  • Multiple sections can exist on the same file.

  • A section is expected to have comments and entries; it cannot be empty.

  • Any commented line under a section must be indented too.

Entries: Key/Value

A section may contain Entries; an entry is defined by a line of text that contains a Key and a Value. Using the above example, the [SERVICE] section contains two entries: one is the key Daemon with value off, and the other is the key Log_Level with the value debug. Entry rules:

  • An entry is defined by a key and a value.

  • A key must be indented.

  • A key must have a value; the value ends at the line break.

  • Multiple keys with the same name can exist.

Commented lines are set by prefixing the # character; those lines are not processed, but they must be indented too.

Indented Configuration Mode

Fluent Bit configuration files are based on a strict Indented Mode. This means that each configuration file must follow the same pattern of alignment from left to right when writing text. By default an indentation level of four spaces from left to right is suggested. Example:

[FIRST_SECTION]
    # This is a commented line
    Key1  some value
    Key2  another value
    # more comments

[SECOND_SECTION]
    KeyN  3.14

As you can see there are two sections with multiple entries and comments, note also that empty lines are allowed and they do not need to be indented.



Redhat / CentOS

Install on Redhat / CentOS

Fluent Bit is distributed as the td-agent-bit package and is available for the latest stable CentOS system. The following architectures are supported:

  • x86_64

  • aarch64 / arm64v8

Configure Yum

We provide td-agent-bit through a Yum repository. In order to add the repository reference to your system, please add a new file called td-agent-bit.repo in /etc/yum.repos.d/ with the following content:

[td-agent-bit]
name = TD Agent Bit
baseurl = https://packages.fluentbit.io/centos/7/$basearch/
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.fluentbit.io/fluentbit.key
enabled=1

Note: we encourage you to always enable gpgcheck for security reasons. All our packages are signed.

Updated key from March 2022

The GPG Key fingerprint of the new key is:

C3C0 A285 34B9 293E AF51  FABD 9F9D DC08 3888 C1CD
Fluentbit releases (Releases signing key) <releases@fluentbit.io>

The GPG Key fingerprint of the old key is:

F209 D876 2A60 CD49 E680 633B 4FF8 368B 6EA0 722A

Install

Once your repository is configured, run the following command to install it:

$ yum install td-agent-bit

The next step is to instruct systemd to start the service:

$ sudo service td-agent-bit start

If you do a status check, you should see output similar to this:

$ service td-agent-bit status
Redirecting to /bin/systemctl status  td-agent-bit.service
● td-agent-bit.service - TD Agent Bit
   Loaded: loaded (/usr/lib/systemd/system/td-agent-bit.service; disabled; vendor preset: disabled)
   Active: active (running) since Thu 2016-07-07 02:08:01 BST; 9s ago
 Main PID: 3820 (td-agent-bit)
   CGroup: /system.slice/td-agent-bit.service
           └─3820 /opt/td-agent-bit/bin/td-agent-bit -c etc/td-agent-bit/td-agent-bit.conf
...

The default configuration of td-agent-bit collects metrics of CPU usage and sends the records to the standard output. You can see the outgoing data in your /var/log/messages file.

Debian

Fluent Bit is distributed as the td-agent-bit package and is available for the latest (and older) stable Debian systems: Buster, Stretch and Jessie.

Server GPG key

The first step is to add our server GPG key to your keyring to ensure you can get our signed packages:

curl https://packages.fluentbit.io/fluentbit.key | sudo apt-key add -

Updated key from March 2022

The GPG Key fingerprint of the new key is:

C3C0 A285 34B9 293E AF51  FABD 9F9D DC08 3888 C1CD
Fluentbit releases (Releases signing key) <releases@fluentbit.io>

The GPG Key fingerprint of the old key is:

F209 D876 2A60 CD49 E680 633B 4FF8 368B 6EA0 722A

Update your sources lists

On Debian, you need to add our APT server entry to your sources lists. Please add the following content at the bottom of your /etc/apt/sources.list file:

Debian 10 (Buster)

deb https://packages.fluentbit.io/debian/buster buster main

Debian 9 (Stretch)

deb https://packages.fluentbit.io/debian/stretch stretch main

Update your repositories database

Now let your system update the apt database:

$ sudo apt-get update

We recommend upgrading your system (sudo apt-get upgrade). This could avoid potential issues with expired certificates.

Install TD Agent Bit

With the following apt-get command you can now install the latest td-agent-bit:

$ sudo apt-get install td-agent-bit

The next step is to instruct systemd to start the service:

$ sudo service td-agent-bit start

If you do a status check, you should see output similar to this:

sudo service td-agent-bit status
● td-agent-bit.service - TD Agent Bit
   Loaded: loaded (/lib/systemd/system/td-agent-bit.service; disabled; vendor preset: enabled)
   Active: active (running) since mié 2016-07-06 16:58:25 CST; 2h 45min ago
 Main PID: 6739 (td-agent-bit)
    Tasks: 1
   Memory: 656.0K
      CPU: 1.393s
   CGroup: /system.slice/td-agent-bit.service
           └─6739 /opt/td-agent-bit/bin/td-agent-bit -c /etc/td-agent-bit/td-agent-bit.conf
...

The default configuration of td-agent-bit collects metrics of CPU usage and sends the records to the standard output. You can see the outgoing data in your /var/log/syslog file.

Ubuntu

Fluent Bit is distributed as the td-agent-bit package and is available for the latest stable Ubuntu system: Focal Fossa.

Server GPG key

The first step is to add our server GPG key to your keyring to ensure you can get our signed packages:

$ wget -qO - https://packages.fluentbit.io/fluentbit.key | sudo apt-key add -

Updated key from March 2022

The GPG Key fingerprint of the new key is:

C3C0 A285 34B9 293E AF51  FABD 9F9D DC08 3888 C1CD
Fluentbit releases (Releases signing key) <releases@fluentbit.io>

The GPG Key fingerprint of the old key is:

F209 D876 2A60 CD49 E680 633B 4FF8 368B 6EA0 722A

Update your sources lists

On Ubuntu, you need to add our APT server entry to your sources lists. Please add the following content at the bottom of your /etc/apt/sources.list file:

Ubuntu 20.04 LTS (Focal Fossa)

deb https://packages.fluentbit.io/ubuntu/focal focal main

Ubuntu 18.04 LTS (Bionic Beaver)

deb https://packages.fluentbit.io/ubuntu/bionic bionic main

Ubuntu 16.04 LTS (Xenial Xerus)

deb https://packages.fluentbit.io/ubuntu/xenial xenial main

Update your repositories database

Now let your system update the apt database:

sudo apt-get update

We recommend upgrading your system (sudo apt-get upgrade). This could avoid potential issues with expired certificates.

Install TD-Agent Bit

With the following apt-get command you can now install the latest td-agent-bit:

sudo apt-get install td-agent-bit

The next step is to instruct systemd to start the service:

sudo service td-agent-bit start

If you do a status check, you should see output similar to this:

sudo service td-agent-bit status
● td-agent-bit.service - TD Agent Bit
   Loaded: loaded (/lib/systemd/system/td-agent-bit.service; disabled; vendor preset: enabled)
   Active: active (running) since mié 2016-07-06 16:58:25 CST; 2h 45min ago
 Main PID: 6739 (td-agent-bit)
    Tasks: 1
   Memory: 656.0K
      CPU: 1.393s
   CGroup: /system.slice/td-agent-bit.service
           └─6739 /opt/td-agent-bit/bin/td-agent-bit -c /etc/td-agent-bit/td-agent-bit.conf
...

The default configuration of td-agent-bit collects metrics of CPU usage and sends the records to the standard output. You can see the outgoing data in your /var/log/syslog file.

Containers on AWS

AWS maintains a distribution of Fluent Bit combining the latest official release with a set of Go Plugins for sending logs to AWS services. AWS and Fluent Bit are working together to rewrite their plugins for inclusion in the official Fluent Bit distribution.

Plugins

The AWS for Fluent Bit image currently contains Go plugins for Amazon CloudWatch, Amazon Kinesis Data Firehose and Amazon Kinesis Data Streams. Fluent Bit now includes higher performance C implementations of these plugins: the Amazon CloudWatch Logs plugin named cloudwatch_logs, the Amazon Kinesis Firehose plugin named kinesis_firehose and the Amazon Kinesis Data Streams plugin named kinesis_streams.

Fluent Bit also includes an Amazon S3 output plugin named s3.
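
As a reference, a minimal OUTPUT sketch for the cloudwatch_logs plugin could look like the following; the region, log group name and stream prefix are placeholders you would replace for your environment:

[OUTPUT]
    Name              cloudwatch_logs
    Match             *
    region            us-east-1
    log_group_name    fluent-bit-logs
    log_stream_prefix from-fluent-bit-
    auto_create_group On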

Versions and Regional Repositories

AWS vends their container image via Docker Hub and a set of highly available regional Amazon ECR repositories. For more information, see the AWS for Fluent Bit GitHub repo.

The AWS for Fluent Bit image uses a custom versioning scheme because it contains multiple projects. To see what each release contains, check out the release notes on GitHub.

SSM Public Parameters

AWS vends SSM Public Parameters with the regional repository link for each image. These parameters can be queried by any AWS account.

To see a list of available version tags in a given region, run the following command:

aws ssm get-parameters-by-path --region eu-central-1 --path /aws/service/aws-for-fluent-bit/ --query 'Parameters[*].Name'

To see the ECR repository URI for a given image tag in a given region, run the following:

$ aws ssm get-parameter --region ap-northeast-1 --name /aws/service/aws-for-fluent-bit/2.0.0

You can use these SSM public parameters as parameters in your CloudFormation templates:

Parameters:
  FireLensImage:
    Description: Fluent Bit image for the FireLens Container
    Type: AWS::SSM::Parameter::Value<String>
    Default: /aws/service/aws-for-fluent-bit/latest

Linux Packages

From version 1.9, td-agent-bit is a deprecated package and will be removed in the future. The correct package name to use now is fluent-bit. Both are currently provided to allow migration.

Refer to the supported platform documentation to see which platforms are supported in each release.

GPG key updates

From the 1.9.0 and 1.8.15 releases please note that the GPG key has been updated at https://packages.fluentbit.io/fluentbit.key so ensure this new one is added.

The previous key is still available at https://packages.fluentbit.io/fluentbit-legacy.key and may be required to install previous versions.

The GPG Key fingerprint of the new key is:

C3C0 A285 34B9 293E AF51  FABD 9F9D DC08 3888 C1CD
Fluentbit releases (Releases signing key) <releases@fluentbit.io>

The GPG Key fingerprint of the old key is:

F209 D876 2A60 CD49 E680 633B 4FF8 368B 6EA0 722A

Raspbian / Raspberry Pi

Fluent Bit is distributed as the td-agent-bit package and is available for the Raspberry Pi. Specifically for the Raspbian distribution, the following versions are supported:

  • Raspbian Buster (10)

  • Raspbian Stretch (9)

  • Raspbian Jessie (8)

Server GPG key

The first step is to add our server GPG key to your keyring so that you can get our signed packages:

curl https://packages.fluentbit.io/fluentbit.key | sudo apt-key add -

Updated key from March 2022

The GPG Key fingerprint of the new key is:

C3C0 A285 34B9 293E AF51  FABD 9F9D DC08 3888 C1CD
Fluentbit releases (Releases signing key) <releases@fluentbit.io>

The GPG Key fingerprint of the old key is:

F209 D876 2A60 CD49 E680 633B 4FF8 368B 6EA0 722A

Update your sources lists

On Debian and derivative systems such as Raspbian, you need to add our APT server entry to your sources lists. Please add the following content at the bottom of your /etc/apt/sources.list file:

Raspbian 10 (Buster)

deb https://packages.fluentbit.io/raspbian/buster buster main

Update your repositories database

Now let your system update the apt database:

$ sudo apt-get update

We recommend upgrading your system (sudo apt-get upgrade). This could avoid potential issues with expired certificates.

Install TD-Agent Bit

Use the following apt-get command to install the latest td-agent-bit:

$ sudo apt-get install td-agent-bit

The next step is to instruct systemd to start the service:

$ sudo service td-agent-bit start

If you do a status check, you should see output similar to this:

sudo service td-agent-bit status
● td-agent-bit.service - TD Agent Bit
   Loaded: loaded (/lib/systemd/system/td-agent-bit.service; disabled; vendor preset: enabled)
   Active: active (running) since mié 2016-07-06 16:58:25 CST; 2h 45min ago
 Main PID: 6739 (td-agent-bit)
    Tasks: 1
   Memory: 656.0K
      CPU: 1.393s
   CGroup: /system.slice/td-agent-bit.service
           └─6739 /opt/td-agent-bit/bin/td-agent-bit -c /etc/td-agent-bit/td-agent-bit.conf
...

The default configuration of td-agent-bit collects CPU usage metrics and sends the records to the standard output. You can see the outgoing data in your /var/log/syslog file.

Docker

Fluent Bit container images are available on Docker Hub (fluent/fluent-bit) and are ready for production usage. The currently available images can be deployed on multiple architectures.

Tags and Versions

The following table describes the tags that are available on the fluent/fluent-bit Docker Hub repository:

Tag(s)                     Manifest Architectures      Description
1.8, 1.8.15                x86_64, arm64v8, arm32v7    Release v1.8.15
1.8-debug, 1.8.15-debug    x86_64                      v1.8.x releases (production + debug)
1.8.14                     x86_64, arm64v8, arm32v7    Release v1.8.14
1.8.14-debug               x86_64                      v1.8.x releases (production + debug)
1.8.13                     x86_64, arm64v8, arm32v7    Release v1.8.13
1.8.13-debug               x86_64                      v1.8.x releases (production + debug)
1.8.12                     x86_64, arm64v8, arm32v7    Release v1.8.12
1.8.12-debug               x86_64                      v1.8.x releases (production + debug)
1.8.11                     x86_64, arm64v8, arm32v7    Release v1.8.11
1.8.11-debug               x86_64                      v1.8.x releases + Busybox
1.8.10                     x86_64, arm64v8, arm32v7    Release v1.8.10
1.8.10-debug               x86_64                      v1.8.x releases + Busybox
1.8.9                      x86_64, arm64v8, arm32v7    Release v1.8.9
1.8.9-debug                x86_64                      v1.8.x releases + Busybox
1.8.8                      x86_64, arm64v8, arm32v7    Release v1.8.8
1.8.8-debug                x86_64                      v1.8.x releases + Busybox
1.8.7                      x86_64, arm64v8, arm32v7    Release v1.8.7
1.8.7-debug                x86_64                      v1.8.x releases + Busybox
1.8.6                      x86_64, arm64v8, arm32v7    Release v1.8.6
1.8.6-debug                x86_64                      v1.8.x releases + Busybox
1.8.5                      x86_64, arm64v8, arm32v7    Release v1.8.5
1.8.5-debug                x86_64                      v1.8.x releases + Busybox
1.8.4                      x86_64, arm64v8, arm32v7    Release v1.8.4
1.8.4-debug                x86_64                      v1.8.x releases + Busybox
1.8.3                      x86_64, arm64v8, arm32v7    Release v1.8.3
1.8.3-debug                x86_64                      v1.8.x releases + Busybox
1.8.2                      x86_64, arm64v8, arm32v7    Release v1.8.2
1.8.2-debug                x86_64                      v1.8.x releases + Busybox
1.8.1                      x86_64, arm64v8, arm32v7    Release v1.8.1
1.8.1-debug                x86_64                      v1.8.x releases + Busybox

It's strongly suggested that you always use the latest image of Fluent Bit.

Multi Architecture Images

In addition, the main manifest provides images for the arm64v8 and arm32v7 architectures. From a deployment perspective, there is no need to specify an architecture: the container client tool that pulls the image gets the proper layer for the running architecture.

For every architecture we build the layers using the following base images:

Architecture    Base Image
x86_64          Distroless
arm64v8         arm64v8/debian:bullseye-slim
arm32v7         arm32v7/debian:bullseye-slim

Our x86_64 stable image is based on Distroless, focusing on security, and contains just the Fluent Bit binary, minimal system libraries and basic configuration. Optionally, we provide debug images for x86_64 which contain a full shell and package manager that can be used to troubleshoot or for testing purposes.

Getting Started

Download the latest stable image from the 1.8 series:

docker pull fluent/fluent-bit:1.8

Once the image is in place, run the following test, which makes Fluent Bit measure CPU usage in the container:

docker run -ti fluent/fluent-bit:1.8 /fluent-bit/bin/fluent-bit -i cpu -o stdout -f 1

That command will let Fluent Bit measure CPU usage every second and flush the results to the standard output, e.g:

Fluent-Bit v1.8.x
Copyright (C) Treasure Data

[2019/10/01 12:29:02] [ info] [engine] started
[0] cpu.0: [1504290543.000487750, {"cpu_p"=>0.750000, "user_p"=>0.250000, "system_p"=>0.500000, "cpu0.p_cpu"=>0.000000, "cpu0.p_user"=>0.000000, "cpu0.p_system"=>0.000000, "cpu1.p_cpu"=>1.000000, "cpu1.p_user"=>0.000000, "cpu1.p_system"=>1.000000, "cpu2.p_cpu"=>1.000000, "cpu2.p_user"=>1.000000, "cpu2.p_system"=>0.000000, "cpu3.p_cpu"=>0.000000, "cpu3.p_user"=>0.000000, "cpu3.p_system"=>0.000000}]

F.A.Q

Why is there no Fluent Bit Docker image based on Alpine Linux?

Alpine Linux uses the Musl C library instead of Glibc. Musl is not fully compatible with Glibc, which generates many issues in the following areas when used with Fluent Bit:

  • Memory Allocator: to run Fluent Bit properly in high-load environments, we use Jemalloc as the default memory allocator, which reduces fragmentation and provides better performance for our needs. Jemalloc cannot run smoothly with Musl and requires extra work.

  • The Alpine Linux Musl functions bootstrap has a compatibility issue when loading Golang shared libraries; this generates problems when trying to load Golang output plugins in Fluent Bit.

  • The Alpine Linux Musl time format parser does not support Glibc extensions.

  • The maintainers' preferred base images, for security and maintenance reasons, are Distroless and Debian.

Where does the 'latest' tag point to?

Our Docker container images are deployed thousands of times per day, and we take security and stability very seriously.

The latest tag most of the time points to the latest stable image. When we release a major update to Fluent Bit, for example from v1.3.x to v1.4.0, we don't move the latest tag until 2 weeks after the release. That gives us extra time to verify with our community that everything works as expected.

Configuration File

This page describes the main configuration file used by Fluent Bit.

One of the ways to configure Fluent Bit is using a main configuration file. Fluent Bit allows the use of one configuration file that works at a global scope and uses the Format and Schema defined previously.

The main configuration file supports four types of sections:

  • Service

  • Input

  • Filter

  • Output

In addition, it's also possible to split the main configuration file in multiple files using the feature to include external files:

  • Include File

Service

The Service section defines global properties of the service, the keys available as of this version are described in the following table:

Key
Description
Default Value

flush

Set the flush time in seconds.nanoseconds format. The engine loop uses the Flush timeout to define when it is required to flush the records ingested by input plugins through the defined output plugins.

5

grace

Set the grace time in seconds as Integer value. The engine loop uses a Grace timeout to define wait time on exit

5

daemon

Boolean value to set if Fluent Bit should run as a Daemon (background) or not. Allowed values are: yes, no, on and off. Note: if you are using a Systemd based unit such as the one we provide in our packages, do not turn on this option.

Off

dns.mode

Set the primary transport layer protocol used by the asynchronous DNS resolver, which can be overridden on a per plugin basis

UDP

log_file

Absolute path for an optional log file. By default all logs are redirected to the standard error interface (stderr).

log_level

Set the logging verbosity level. Allowed values are: off, error, warn, info, debug and trace. Values are accumulative, e.g: if 'debug' is set, it will include error, warning, info and debug. Note that trace mode is only available if Fluent Bit was built with the WITH_TRACE option enabled.

info

parsers_file

Path for a parsers configuration file. Multiple Parsers_File entries can be defined within the section.

plugins_file

Path for a plugins configuration file. A plugins configuration file allows the definition of paths for external plugins.

streams_file

Path for the Stream Processor configuration file.

http_server

Enable built-in HTTP Server

Off

http_listen

Set listening interface for HTTP Server when it's enabled

0.0.0.0

http_port

Set TCP Port for the HTTP Server

2020

coro_stack_size

Set the coroutines stack size in bytes. The value must be greater than the page size of the running system. Don't set it to a very small value (say 4096), or coroutine threads can overrun the stack buffer. Do not change the default value of this parameter unless you know what you are doing.

24576

scheduler.cap

Set the maximum retry time in seconds. The property is supported from v1.8.7.

2000

scheduler.base

Set a base of exponential backoff. The property is supported from v1.8.7.

5

The following is an example of a SERVICE section:

[SERVICE]
    Flush           5
    Daemon          off
    Log_Level       debug
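
The scheduler.base and scheduler.cap properties from the table above control the exponential backoff used for retries. A minimal sketch combining them with the example above (the values are illustrative only):

[SERVICE]
    Flush           5
    Daemon          off
    Log_Level       debug
    scheduler.base  3
    scheduler.cap   30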

Input

An INPUT section defines a source (related to an input plugin). Here we will describe the base configuration for each INPUT section. Note that each input plugin may add its own configuration keys:

Key
Description

Name

Name of the input plugin.

Tag

Tag name associated to all records coming from this plugin.

The Name is mandatory and it lets Fluent Bit know which input plugin should be loaded. The Tag is mandatory for all plugins except for the input forward plugin (as it provides dynamic tags).

Example

The following is an example of an INPUT section:

[INPUT]
    Name cpu
    Tag  my_cpu

Filter

A FILTER section defines a filter (related to a filter plugin). Here we will describe the base configuration for each FILTER section. Note that each filter plugin may add its own configuration keys:

Key
Description

Name

Name of the filter plugin.

Match

A pattern to match against the tags of incoming records. It's case sensitive and supports the star (*) character as a wildcard.

Match_Regex

A regular expression to match against the tags of incoming records. Use this option if you want to use the full regex syntax.

The Name is mandatory and it lets Fluent Bit know which filter plugin should be loaded. The Match or Match_Regex is mandatory for all plugins. If both are specified, Match_Regex takes precedence.

Example

The following is an example of a FILTER section:

[FILTER]
    Name  stdout
    Match *

Output

The OUTPUT section specifies a destination that certain records should follow after a Tag match. The configuration supports the following keys:

Key
Description

Name

Name of the output plugin.

Match

A pattern to match against the tags of incoming records. It's case sensitive and supports the star (*) character as a wildcard.

Match_Regex

A regular expression to match against the tags of incoming records. Use this option if you want to use the full regex syntax.

Example

The following is an example of an OUTPUT section:

[OUTPUT]
    Name  stdout
    Match my*cpu

Example: collecting CPU metrics

The following configuration file example demonstrates how to collect CPU metrics and flush the results every five seconds to the standard output:

[SERVICE]
    Flush     5
    Daemon    off
    Log_Level debug

[INPUT]
    Name  cpu
    Tag   my_cpu

[OUTPUT]
    Name  stdout
    Match my*cpu

Visualize

You can also visualize Fluent Bit INPUT, FILTER, and OUTPUT configuration via https://cloud.calyptia.com.

Include File

To avoid complicated long configuration files, it is better to split specific parts into different files and call them (include) from one main file.

Starting from Fluent Bit 0.12 the new configuration command @INCLUDE has been added and can be used in the following way:

@INCLUDE somefile.conf

The configuration reader will try to open the path somefile.conf; if it is not found, it will assume it is a relative path based on the path of the base configuration file, e.g:

  • Main configuration file path: /tmp/main.conf

  • Included file: somefile.conf

  • Fluent Bit will try to open somefile.conf, if it fails it will try /tmp/somefile.conf.

The @INCLUDE command only works at the top level of the configuration; it cannot be used inside sections.

Wildcard character (*) is supported to include multiple files, e.g:

@INCLUDE input_*.conf

Yocto / Embedded Linux

The Fluent Bit source code provides Bitbake recipes to configure, build and package the software for a Yocto based image. Note that the specific steps for using these recipes in your Yocto environment (Poky) are out of the scope of this documentation.

We distribute two main recipes, one for testing/dev purposes and another with the latest stable release.

It's strongly recommended to always use the stable release of Fluent Bit recipe and not the one from GIT master for production deployments.

Fluent Bit and other architectures

Fluent Bit >= v1.1.x fully supports x86_64, x86, arm32v7 and arm64v8.

Commands

Configuration files must be flexible enough for any deployment need, but they must keep a clean and readable format.

Fluent Bit Commands extend a configuration file with specific built-in features. The commands available as of the Fluent Bit 0.12 series are:

  • @INCLUDE FILE: include a configuration file.

  • @SET KEY=VAL: set a configuration variable.

@INCLUDE Command

Configuring a logging pipeline might lead to an extensive configuration file. In order to maintain a human-readable configuration, it's suggested to split the configuration in multiple files.

The @INCLUDE command allows the configuration reader to include an external configuration file, e.g:

[SERVICE]
    Flush 1

@INCLUDE inputs.conf
@INCLUDE outputs.conf

The above example defines the main service configuration file and also includes two files to continue the configuration:

inputs.conf

[INPUT]
    Name cpu
    Tag  mycpu

[INPUT]
    Name tail
    Path /var/log/*.log
    Tag  varlog.*

outputs.conf

[OUTPUT]
    Name   stdout
    Match  mycpu

[OUTPUT]
    Name            es
    Match           varlog.*
    Host            127.0.0.1
    Port            9200
    Logstash_Format On

Note that despite the order of inclusion, Fluent Bit will ALWAYS respect the following order:

  • Service

  • Inputs

  • Filters

  • Outputs

@SET Command

Fluent Bit supports configuration variables; one way to expose these variables to Fluent Bit is through setting a Shell environment variable, the other is through the @SET command.

The @SET command can only be used at root level of each line, meaning it cannot be used inside a section, e.g:

@SET my_input=cpu
@SET my_output=stdout

[SERVICE]
    Flush 1

[INPUT]
    Name ${my_input}

[OUTPUT]
    Name ${my_output}

Variables

Fluent Bit supports the usage of environment variables in any value associated to a key when using a configuration file.

The variables are case sensitive and can be used in the following format:

${MY_VARIABLE}

When Fluent Bit starts, the configuration reader will detect any request for ${MY_VARIABLE} and will try to resolve its value.

Example

Create the following configuration file (fluent-bit.conf):

[SERVICE]
    Flush        1
    Daemon       Off
    Log_Level    info

[INPUT]
    Name cpu
    Tag  cpu.local

[OUTPUT]
    Name  ${MY_OUTPUT}
    Match *

Open a terminal and set the environment variable:

$ export MY_OUTPUT=stdout

The above command sets the value 'stdout' for the variable MY_OUTPUT.

Run Fluent Bit with the recently created configuration file:

$ bin/fluent-bit -c fluent-bit.conf
Fluent Bit v1.4.0
* Copyright (C) 2019-2020 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2020/03/03 12:25:25] [ info] [engine] started
[0] cpu.local: [1491243925, {"cpu_p"=>1.750000, "user_p"=>1.750000, "system_p"=>0.000000, "cpu0.p_cpu"=>3.000000, "cpu0.p_user"=>2.000000, "cpu0.p_system"=>1.000000, "cpu1.p_cpu"=>0.000000, "cpu1.p_user"=>0.000000, "cpu1.p_system"=>0.000000, "cpu2.p_cpu"=>4.000000, "cpu2.p_user"=>4.000000, "cpu2.p_system"=>0.000000, "cpu3.p_cpu"=>1.000000, "cpu3.p_user"=>1.000000, "cpu3.p_system"=>0.000000}]

As you can see the service worked properly as the configuration was valid.


Kubernetes

Kubernetes Production Grade Log Processor

  • Process Kubernetes containers logs from the file system or Systemd/Journald.

  • Enrich logs with Kubernetes Metadata.

  • Centralize your logs in third party storage services like Elasticsearch, InfluxDB, HTTP, etc.

Concepts

Before getting started it is important to understand how Fluent Bit will be deployed. Kubernetes manages a cluster of nodes, so our log agent tool will need to run on every node to collect logs from every POD, hence Fluent Bit is deployed as a DaemonSet (a POD that runs on every node of the cluster).

When Fluent Bit runs, it will read, parse and filter the logs of every POD and will enrich each entry with the following information (metadata):

  • Pod Name

  • Pod ID

  • Container Name

  • Container ID

  • Labels

  • Annotations

To obtain this information, a built-in filter plugin called kubernetes talks to the Kubernetes API Server to retrieve relevant information such as the pod_id, labels and annotations; other fields such as pod_name, container_id and container_name are retrieved locally from the log file names. All of this is handled automatically, no intervention is required from a configuration aspect.
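
As a reference, a minimal sketch of such a filter definition is shown below; the kube.* tag prefix must match the tag used by your tail input, and Merge_Log is an optional property shown only for illustration:

[FILTER]
    Name      kubernetes
    Match     kube.*
    Kube_URL  https://kubernetes.default.svc.cluster.local:443
    Merge_Log On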

Installation

For Kubernetes v1.21 and below

$ kubectl create namespace logging
$ kubectl create -f https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/fluent-bit-service-account.yaml
$ kubectl create -f https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/fluent-bit-role.yaml
$ kubectl create -f https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/fluent-bit-role-binding.yaml

For Kubernetes v1.22

$ kubectl create namespace logging
$ kubectl create -f https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/fluent-bit-service-account.yaml
$ kubectl create -f https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/fluent-bit-role-1.22.yaml
$ kubectl create -f https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/fluent-bit-role-binding-1.22.yaml

The next step is to create a ConfigMap that will be used by our Fluent Bit DaemonSet:

$ kubectl create -f https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/output/elasticsearch/fluent-bit-configmap.yaml

Note for OpenShift

If you are using Red Hat OpenShift you will also need to run the following

$ kubectl create -f https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/fluent-bit-openshift-security-context-constraints.yaml

Note for Kubernetes < v1.16

For Kubernetes versions older than v1.16, the DaemonSet resource is not available on apps/v1; the resource is available on apiVersion: extensions/v1beta1. Our current DaemonSet Yaml files use the new apiVersion.

If you are using an older Kubernetes version, manually grab a copy of your DaemonSet Yaml file and replace the value of apiVersion from:

apiVersion: apps/v1

to

apiVersion: extensions/v1beta1

You can read more about this deprecation on Kubernetes v1.14 Changelog here:

Fluent Bit to Elasticsearch

Fluent Bit DaemonSet ready to be used with Elasticsearch on a normal Kubernetes Cluster:

$ kubectl create -f https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/output/elasticsearch/fluent-bit-ds.yaml

Fluent Bit to Elasticsearch on Minikube

If you are using Minikube for testing purposes, use the following alternative DaemonSet manifest:

$ kubectl create -f https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/output/elasticsearch/fluent-bit-ds-minikube.yaml

Installing with Helm Chart

To add the Fluent Helm Charts repo use the following command

helm repo add fluent https://fluent.github.io/helm-charts

To validate that the repo was added you can run helm search repo fluent to ensure the charts were added. The default chart can then be installed by running the following

helm install fluent-bit fluent/fluent-bit


The default configuration of Fluent Bit ensures the following:

  • Consume all containers logs from the running Node.

  • The Kubernetes filter will enrich the logs with Kubernetes metadata, specifically labels and annotations. The filter only goes to the API Server when it cannot find the cached info, otherwise it uses the cache.

  • There is an option called Retry_Limit set to False, which means that if Fluent Bit cannot flush the records to Elasticsearch it will retry indefinitely until it succeeds.

Container Runtime Interface (CRI) parser

Fluent Bit by default assumes that logs are formatted by the Docker interface standard. However, when using CRI you can run into issues with malformed JSON if you do not modify the parser used. Fluent Bit includes a CRI log parser that can be used instead. An example of the parser is seen below:

# CRI Parser
[PARSER]
    # http://rubular.com/r/tjUt3Awgg4
    Name cri
    Format regex
    Regex ^(?<time>[^ ]+) (?<stream>stdout|stderr) (?<logtag>[^ ]*) (?<message>.*)$
    Time_Key    time
    Time_Format %Y-%m-%dT%H:%M:%S.%L%z

To use this parser change the Input section for your configuration from docker to cri

[INPUT]
    Name tail
    Path /var/log/containers/*.log
    Parser cri
    Tag kube.*
    Mem_Buf_Limit 5MB
    Skip_Long_Lines On

Windows Deployment

Since v1.5.0, Fluent Bit supports deployment to Windows pods.

Log files overview

When deploying Fluent Bit to Kubernetes, there are three log files that you need to pay attention to.

C:\k\kubelet.err.log

  • This is the error log file from kubelet daemon running on host.

  • You will need to retain this file for future troubleshooting (to debug deployment failures etc.)

C:\var\log\containers\<pod>_<namespace>_<container>-<docker>.log

  • This is the main log file you need to watch. Configure Fluent Bit to follow this file.

  • It is actually a symlink to the Docker log file in C:\ProgramData\, with some additional metadata on its file name.

C:\ProgramData\Docker\containers\<docker>\<docker>.log

  • This is the log file produced by Docker.

  • Normally you don't directly read from this file, but you need to make sure that this file is visible from Fluent Bit.

Typically, your deployment yaml contains the following volume configuration.

spec:
  containers:
  - name: fluent-bit
    image: my-repo/fluent-bit:1.8.4
    volumeMounts:
    - mountPath: C:\k
      name: k
    - mountPath: C:\var\log
      name: varlog
    - mountPath: C:\ProgramData
      name: progdata
  volumes:
  - name: k
    hostPath:
      path: C:\k
  - name: varlog
    hostPath:
      path: C:\var\log
  - name: progdata
    hostPath:
      path: C:\ProgramData

Configure Fluent Bit

fluent-bit.conf: |
    [SERVICE]
      Parsers_File      C:\\fluent-bit\\parsers.conf

    [INPUT]
      Name              tail
      Tag               kube.*
      Path              C:\\var\\log\\containers\\*.log
      Parser            docker
      DB                C:\\fluent-bit\\tail_docker.db
      Mem_Buf_Limit     7MB
      Refresh_Interval  10

    [INPUT]
      Name              tail
      Tag               kubelet.err
      Path              C:\\k\\kubelet.err.log
      DB                C:\\fluent-bit\\tail_kubelet.db

    [FILTER]
      Name              kubernetes
      Match             kube.*
      Kube_URL          https://kubernetes.default.svc.cluster.local:443

    [OUTPUT]
      Name  stdout
      Match *

parsers.conf: |
    [PARSER]
        Name         docker
        Format       json
        Time_Key     time
        Time_Format  %Y-%m-%dT%H:%M:%S.%L
        Time_Keep    On

Mitigate unstable network on Windows pods

The kubernetes filter provides the following options to handle a network that is not yet available when the pod starts (default values in parentheses):

  • DNS_Retries - Retries N times until the network start working (6)

  • DNS_Wait_Time - Lookup interval between network status checks (30)

By default, Fluent Bit waits for 3 minutes (30 seconds x 6 times). If it's not enough for you, tweak the configuration as follows.

[filter]
    Name kubernetes
    ...
    DNS_Retries 10
    DNS_Wait_Time 30

Windows

Fluent Bit is distributed as td-agent-bit package for Windows. Fluent Bit has two flavours of Windows installers: a ZIP archive (for quick testing) and an EXE installer (for system installation).

Configuration

Currently the default configuration is intended for Linux only, so it will not function on Windows. Make sure to provide a valid Windows configuration with the installation; a sample one is shown below:

[SERVICE]
    # Flush
    # =====
    # set an interval of seconds before to flush records to a destination
    flush        5

    # Daemon
    # ======
    # instruct Fluent Bit to run in foreground or background mode.
    daemon       Off

    # Log_Level
    # =========
    # Set the verbosity level of the service, values can be:
    #
    # - error
    # - warning
    # - info
    # - debug
    # - trace
    #
    # by default 'info' is set, that means it includes 'error' and 'warning'.
    log_level    info

    # Parsers File
    # ============
    # specify an optional 'Parsers' configuration file
    parsers_file parsers.conf

    # Plugins File
    # ============
    # specify an optional 'Plugins' configuration file to load external plugins.
    plugins_file plugins.conf

    # HTTP Server
    # ===========
    # Enable/Disable the built-in HTTP Server for metrics
    http_server  Off
    http_listen  0.0.0.0
    http_port    2020

    # Storage
    # =======
    # Fluent Bit can use memory and filesystem buffering based mechanisms
    #
    # - https://docs.fluentbit.io/manual/administration/buffering-and-storage
    #
    # storage metrics
    # ---------------
    # publish storage pipeline metrics in '/api/v1/storage'. The metrics are
    # exported only if the 'http_server' option is enabled.
    #
    storage.metrics on

[INPUT]
    Name         winlog
    Channels     Setup,Windows PowerShell
    Interval_Sec 1

[OUTPUT]
    name  stdout
    match *

Installation Packages

The latest stable version is 1.8.15. Each version is available on the GitHub releases page as well as at https://fluentbit.io/releases/<Major Version>/fluent-bit-<Full Version>-win[32|64].exe:


Legacy td-agent-bit packages are also available, just substitute fluent-bit with td-agent-bit in the URLs above.

To check the integrity, use Get-FileHash cmdlet on PowerShell.

PS> Get-FileHash fluent-bit-1.8.15-win32.exe

Installing from ZIP archive

Download a ZIP archive from above. There are installers for 32-bit and 64-bit environments, so choose one suitable for your environment.

Then you need to expand the ZIP archive. You can do this by clicking "Extract All" on Explorer, or if you're using PowerShell, you can use Expand-Archive cmdlet.

PS> Expand-Archive td-agent-bit-1.8.12-win64.zip

The ZIP package contains the following set of files.

td-agent-bit
├── bin
│   ├── fluent-bit.dll
│   └── fluent-bit.exe
├── conf
│   ├── fluent-bit.conf
│   ├── parsers.conf
│   └── plugins.conf
└── include
    │   ├── flb_api.h
    │   ├── ...
    │   └── flb_worker.h
    └── fluent-bit.h

Now, launch cmd.exe or PowerShell on your machine, and execute fluent-bit.exe as follows.

PS> .\bin\fluent-bit.exe -i dummy -o stdout

If you see the following output, it's working fine!

PS> .\bin\fluent-bit.exe  -i dummy -o stdout
Fluent Bit v1.8.x
* Copyright (C) 2019-2020 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2019/06/28 10:13:04] [ info] [storage] initializing...
[2019/06/28 10:13:04] [ info] [storage] in-memory
[2019/06/28 10:13:04] [ info] [storage] normal synchronization mode, checksum disabled, max_chunks_up=128
[2019/06/28 10:13:04] [ info] [engine] started (pid=10324)
[2019/06/28 10:13:04] [ info] [sp] stream processor started
[0] dummy.0: [1561684385.443823800, {"message"=>"dummy"}]
[1] dummy.0: [1561684386.428399000, {"message"=>"dummy"}]
[2] dummy.0: [1561684387.443641900, {"message"=>"dummy"}]
[3] dummy.0: [1561684388.441405800, {"message"=>"dummy"}]

To halt the process, press CTRL-C in the terminal.

Installing from EXE installer

Download an EXE installer from the links above, then double-click it. The installation wizard will automatically start.

Click Next and proceed. By default, Fluent Bit is installed into C:\Program Files\td-agent-bit\, so you should be able to launch fluent-bit as follows after installation.

PS> C:\Program Files\td-agent-bit\bin\fluent-bit.exe -i dummy -o stdout

Installer options

To silently install to C:\fluent-bit directory here is an example:

PS> <installer exe> /S /D=C:\fluent-bit

The uninstaller automatically provided also supports a silent un-install using the same /S flag. This may be useful for provisioning with automation like Ansible, Puppet, etc.

Windows Service Support

Windows services are equivalent to "daemons" in UNIX (i.e. long-running background processes). Since v1.5.0, Fluent Bit has native support for running as a Windows service.

Suppose you have the following installation layout:

C:\fluent-bit\
├── conf
│   ├── fluent-bit.conf
│   └── parsers.conf
└── bin
    ├── fluent-bit.dll
    └── fluent-bit.exe

To register Fluent Bit as a Windows service, you need to execute the following command on Command Prompt. Please be careful that a single space is required after binpath=.

% sc.exe create fluent-bit binpath= "\fluent-bit\bin\fluent-bit.exe -c \fluent-bit\conf\fluent-bit.conf"

Now Fluent Bit can be started and managed as a normal Windows service.

% sc.exe start fluent-bit
% sc.exe query fluent-bit
SERVICE_NAME: fluent-bit
    TYPE               : 10  WIN32_OWN_PROCESS
    STATE              : 4 Running
    ...

To halt the Fluent Bit service, just execute the "stop" command.

% sc.exe stop fluent-bit

To start Fluent Bit automatically on boot, execute the following:

% sc.exe config fluent-bit start= auto

[FAQ] Fluent Bit fails to start up when installed under C:\Program Files

Quotations are required if file paths contain spaces. Here is an example:

% sc.exe create fluent-bit binpath= "\"C:\Program Files\fluent-bit\bin\fluent-bit.exe\" -c \"C:\Program Files\fluent-bit\conf\fluent-bit.conf\""

[FAQ] How can I manage Fluent Bit service via PowerShell?

Instead of sc.exe, PowerShell can be used to manage Windows services.

Create a Fluent Bit service:

PS> New-Service fluent-bit -BinaryPathName "C:\fluent-bit\bin\fluent-bit.exe -c C:\fluent-bit\conf\fluent-bit.conf" -StartupType Automatic

Start the service:

PS> Start-Service fluent-bit

Query the service status:

PS> get-Service fluent-bit | format-list
Name                : fluent-bit
DisplayName         : fluent-bit
Status              : Running
DependentServices   : {}
ServicesDependedOn  : {}
CanPauseAndContinue : False
CanShutdown         : False
CanStop             : True
ServiceType         : Win32OwnProcess

Stop the service:

PS> Stop-Service fluent-bit

Remove the service (requires PowerShell 6.0 or later)

PS> Remove-Service fluent-bit

Compile from Source

If you need to create a custom executable, you can use the following procedure to compile Fluent Bit by yourself.

Preparation

First, you need Microsoft Visual C++ to compile Fluent Bit. You can install the minimum toolkit by the following command:

PS> wget -o vs.exe https://aka.ms/vs/16/release/vs_buildtools.exe
PS> start vs.exe

When asked which packages to install, choose "C++ Build Tools" (make sure that "C++ CMake tools for Windows" is selected too) and wait until the process finishes.

You also need flex, bison and Git. One way to install flex and bison on Windows is the winflexbison package (make sure the directory containing bison.exe and flex.exe is added to your Path environment variable):

PS> wget -o winflexbison.zip https://github.com/lexxmark/winflexbison/releases/download/v2.5.22/win_flex_bison-2.5.22.zip
PS> Expand-Archive winflexbison.zip -Destination C:\WinFlexBison
PS> cp -Path C:\WinFlexBison\win_bison.exe C:\WinFlexBison\bison.exe
PS> cp -Path C:\WinFlexBison\win_flex.exe C:\WinFlexBison\flex.exe
PS> wget -o git.exe https://github.com/git-for-windows/git/releases/download/v2.28.0.windows.1/Git-2.28.0-64-bit.exe
PS> start git.exe

Compilation

Open the start menu on Windows and type "Developer Command Prompt".

Clone the source code of Fluent Bit.

% git clone https://github.com/fluent/fluent-bit
% cd fluent-bit/build

Compile the source code.

% cmake .. -G "NMake Makefiles"
% cmake --build .

Now you should be able to run Fluent Bit:

% .\bin\debug\fluent-bit.exe -i dummy -o stdout

Packaging

To create a ZIP package, call cpack as follows:

% cpack -G ZIP

Record Accessor

A full feature set to access content of your records

Fluent Bit works internally with structured records, which can be composed of an unlimited number of keys and values. Values can be anything like a number, string, array, or a map.

Having a way to select a specific part of the record is critical for certain core functionalities or plugins; this feature is called Record Accessor.

Consider Record Accessor a simple grammar to specify record content and other miscellaneous values.

Format

A record accessor rule starts with the character $. Consider the following structured content as an example:

{
  "log": "some message",
  "stream": "stdout",
  "labels": {
     "color": "blue", 
     "unset": null,
     "project": {
         "env": "production"
      }
  }
}

The following table describes some accessing rules and the expected returned values:

Format
Accessed Value

$log

"some message"

$labels['color']

"blue"

$labels['project']['env']

"production"

$labels['unset']

null

$labels['undefined']

If the accessor key does not exist in the record like the last example $labels['undefined'] , the operation is simply omitted, no exception will occur.

Usage Example

[SERVICE]
    flush        1
    log_level    info
    parsers_file parsers.conf

[INPUT]
    name      tail
    path      test.log
    parser    json

[FILTER]
    name      grep
    match     *
    regex     $labels['color'] ^blue$

[OUTPUT]
    name      stdout
    match     *
    format    json_lines

The file content to process in test.log is the following:

{"log": "message 1", "labels": {"color": "blue"}}
{"log": "message 2", "labels": {"color": "red"}}
{"log": "message 3", "labels": {"color": "green"}}
{"log": "message 4", "labels": {"color": "blue"}}

Running Fluent Bit with the configuration above the output will be:

$ bin/fluent-bit -c fluent-bit.conf 
Fluent Bit v1.x.x
* Copyright (C) 2019-2020 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2020/09/11 16:11:07] [ info] [engine] started (pid=1094177)
[2020/09/11 16:11:07] [ info] [storage] version=1.0.5, initializing...
[2020/09/11 16:11:07] [ info] [storage] in-memory
[2020/09/11 16:11:07] [ info] [storage] normal synchronization mode, checksum disabled, max_chunks_up=128
[2020/09/11 16:11:07] [ info] [sp] stream processor started
[2020/09/11 16:11:07] [ info] inotify_fs_add(): inode=55716713 watch_fd=1 name=test.log
{"date":1599862267.483684,"log":"message 1","labels":{"color":"blue"}}
{"date":1599862267.483692,"log":"message 4","labels":{"color":"blue"}}

Multiline Parsing

In an ideal world, applications might log their messages within a single line, but in reality applications generate multiple log messages that sometimes belong to the same context. When it is time to process such information, it gets really complex. Consider application stack traces, which always have multiple log lines.

Starting from Fluent Bit v1.8, we have implemented a unified Multiline core functionality to solve all the user corner cases. In this section, you will learn about the features and configuration options available.

Concepts

The Multiline parser engine exposes two ways to configure and use the functionality:

  • Built-in multiline parser

  • Configurable multiline parser

Built-in Multiline Parsers

Without any extra configuration, Fluent Bit exposes certain pre-configured parsers (built-in) to solve specific multiline parser cases, e.g:

Parser
Description

docker

Process a log entry generated by a Docker container engine. This parser supports the concatenation of log entries split by Docker.

cri

Process a log entry generated by CRI-O container engine. Same as the docker parser, it supports concatenation of log entries

go

Process log entries generated by a Go based language application and perform concatenation if multiline messages are detected.

python

Process log entries generated by a Python based language application and perform concatenation if multiline messages are detected.

java

Process log entries generated by a Google Cloud Java language application and perform concatenation if multiline messages are detected.

Configurable Multiline Parsers

Besides the built-in parsers listed above, it is possible to define your own Multiline parsers with their own rules through the configuration files.

A multiline parser is defined in a parsers configuration file by using a [MULTILINE_PARSER] section definition. The Multiline parser must have a unique name and a type plus other configured properties associated with each type.

To understand which Multiline parser type is required for your use case, you have to know beforehand what conditions in the content determine the beginning of a multiline message and the continuation of subsequent lines. We provide a regex based configuration that supports states to handle from the simplest to the most difficult cases.

Property
Description
Default

name

Specify a unique name for the Multiline Parser definition. A good practice is to prefix the name with the word multiline_ to avoid confusion with normal parser's definitions.

type

Set the multiline mode, for now, we support the type regex.

parser

Name of a pre-defined parser that must be applied to the incoming content before applying the regex rule. If no parser is defined, it's assumed that the content is raw text and not a structured message.

Note: when a parser is applied to a raw text, then the regex is applied against a specific key of the structured message by using the key_content configuration property (see below).

key_content

For an incoming structured message, specify the key that contains the data that should be processed by the regular expression and possibly concatenated.

flush_timeout

Timeout in milliseconds to flush a non-terminated multiline buffer. Default is set to 5 seconds.

5s

rule

Configure a rule to match a multiline pattern. The rule has a specific format described below. Multiple rules can be defined.

Lines and States

Before you start configuring your parser you need to know the answers to the following questions:

  1. What is the regular expression (regex) that matches the first line of a multiline message ?

  2. What are the regular expressions (regex) that match the continuation lines of a multiline message ?

When matching regex, we have to define states, some states define the start of a multiline message while others are states for the continuation of multiline messages. You can have multiple continuation states definitions to solve complex cases.

The first regex that matches the start of a multiline message is called start_state; the regexes for continuation lines can have different state names.

Rules Definition

A rule specifies how to match a multiline pattern and perform the concatenation. A rule is defined by 3 specific components:

  1. state name

  2. regular expression pattern

  3. next state

A rule might be defined as follows (comments added to simplify the definition) :

# rules   |   state name   | regex pattern                   | next state
# --------|----------------|---------------------------------------------
rule         "start_state"   "/(Dec \d+ \d+\:\d+\:\d+)(.*)/"   "cont"
rule         "cont"          "/^\s+at.*/"                      "cont"

In the example above, we have defined two rules: each one has its own state name, regex pattern, and next state name. Every field that composes a rule must be inside double quotes.

The state name of the first rule must always be start_state, and its regex pattern must match the first line of a multiline message; a next state must also be set to specify what the possible continuation lines would look like.

Configuration Example

The following example provides a full Fluent Bit configuration file for multiline parsing by using the definition explained above.

Example files content:

This is the primary Fluent Bit configuration file. It includes the parsers_multiline.conf file and tails the file test.log by applying the multiline parser multiline-regex-test. Then it sends the processed records to the standard output.

[SERVICE]
    flush        1
    log_level    info
    parsers_file parsers_multiline.conf

[INPUT]
    name             tail
    path             test.log
    read_from_head   true
    multiline.parser multiline-regex-test

[OUTPUT]
    name             stdout
    match            *

This second file defines a multiline parser for the example.

[MULTILINE_PARSER]
    name          multiline-regex-test
    type          regex
    flush_timeout 1000
    #
    # Regex rules for multiline parsing
    # ---------------------------------
    #
    # configuration hints:
    #
    #  - first state always has the name: start_state
    #  - every field in the rule must be inside double quotes
    #
    # rules |   state name  | regex pattern                  | next state
    # ------|---------------|--------------------------------------------
    rule      "start_state"   "/(Dec \d+ \d+\:\d+\:\d+)(.*)/"  "cont"
    rule      "cont"          "/^\s+at.*/"                     "cont"

An example file with multiline content:

single line...
Dec 14 06:41:08 Exception in thread "main" java.lang.RuntimeException: Something has gone wrong, aborting!
    at com.myproject.module.MyProject.badMethod(MyProject.java:22)
    at com.myproject.module.MyProject.oneMoreMethod(MyProject.java:18)
    at com.myproject.module.MyProject.anotherMethod(MyProject.java:14)
    at com.myproject.module.MyProject.someMethod(MyProject.java:10)
    at com.myproject.module.MyProject.main(MyProject.java:6)
another line...

By running Fluent Bit with the given configuration file you will obtain:

$ fluent-bit -c fluent-bit.conf 

[0] tail.0: [0.000000000, {"log"=>"single line...
"}]
[1] tail.0: [1626634867.472226330, {"log"=>"Dec 14 06:41:08 Exception in thread "main" java.lang.RuntimeException: Something has gone wrong, aborting!
    at com.myproject.module.MyProject.badMethod(MyProject.java:22)
    at com.myproject.module.MyProject.oneMoreMethod(MyProject.java:18)
    at com.myproject.module.MyProject.anotherMethod(MyProject.java:14)
    at com.myproject.module.MyProject.someMethod(MyProject.java:10)
    at com.myproject.module.MyProject.main(MyProject.java:6)
"}]
[2] tail.0: [1626634867.472226330, {"log"=>"another line...
"}]

The lines that did not match a pattern are not considered as part of the multiline message, while the ones that matched the rules were concatenated properly.

Upstream Servers

An Upstream defines a set of nodes that will be targeted by an output plugin; by the nature of the implementation, an output plugin must support the Upstream feature. The following plugin has Upstream support:

  • Forward

The current balancing mode implemented is round-robin.

Configuration

To define an Upstream it's required to create a specific configuration file that contains an UPSTREAM section and one or multiple NODE sections. The following table describes the properties associated with each section. Note that all of them are mandatory:

Section
Key
Description

UPSTREAM

name

Defines a name for the Upstream in question.

NODE

name

Defines a name for the Node in question.

host

IP address or hostname of the target host.

port

TCP port of the target service.

Nodes and specific plugin configuration

A Node might contain additional configuration keys required by the plugin; in that way we provide enough flexibility for the output plugin. A common use case is the Forward output: if TLS is enabled, it requires a shared key (more details in the example below).

Nodes and TLS (Transport Layer Security)

In addition to the properties defined in the table above, the network operations against a defined node can optionally be done through the use of TLS for further encryption and certificates use.

Configuration File Example

The following example defines an Upstream called forward-balancing which aims to be used by the Forward output plugin. It registers three nodes:

  • node-1: connects to 127.0.0.1:43000

  • node-2: connects to 127.0.0.1:44000

  • node-3: connects to 127.0.0.1:45000 using TLS without verification. It also defines a specific configuration option required by Forward output called shared_key.

[UPSTREAM]
    name       forward-balancing

[NODE]
    name       node-1
    host       127.0.0.1
    port       43000

[NODE]
    name       node-2
    host       127.0.0.1
    port       44000

[NODE]
    name       node-3
    host       127.0.0.1
    port       45000
    tls        on
    tls.verify off
    shared_key secret
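
To use this Upstream definition, the Forward output plugin references the file through its Upstream property. A minimal sketch, assuming the definition above was saved as upstream.conf:

[OUTPUT]
    Name     forward
    Match    *
    Upstream ./upstream.conf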

Note that every Upstream definition must exist in its own configuration file in the file system. Adding multiple Upstreams in the same file or different files is not allowed.

Unit Sizes

Certain configuration directives in Fluent Bit refer to unit sizes, for example when setting memory buffer limits. The following suffixes are supported:

Suffix
Description
Example

(no suffix)

When a suffix is not specified, it's assumed that the value given is a bytes representation.

Specifying a value of 32000 means 32000 bytes.

k, K, KB, kb

Kilobyte: a unit of memory equal to 1,000 bytes.

32k means 32000 bytes.

m, M, MB, mb

Megabyte: a unit of memory equal to 1,000,000 bytes.

1M means 1000000 bytes.

g, G, GB, gb

Gigabyte: a unit of memory equal to 1,000,000,000 bytes.

1G means 1000000000 bytes.
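
As a usage reference, unit sizes appear in properties such as an input plugin's memory buffer limit. A minimal sketch using a suffix value:

[INPUT]
    Name          tail
    Path          /var/log/*.log
    Mem_Buf_Limit 64KB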

Buffering & Storage

By default when Fluent Bit processes data, it uses memory as a primary and temporary place to store the records, but there are certain scenarios where it would be ideal to have a persistent buffering mechanism based on the filesystem to provide aggregation and data safety capabilities.

Choosing the right configuration is critical and the behavior of the service can be conditioned by the backpressure settings. Before jumping into the configuration properties, let's understand the relationship between Chunks, Memory, Filesystem and Backpressure.

Chunks, Memory, Filesystem and Backpressure

Understanding the chunks, buffering and backpressure concepts is critical for a proper configuration. Let's do a recap of the meaning of these concepts.

Chunks

When an input plugin (source) emits records, the engine groups the records together in a Chunk. A Chunk size is usually around 2MB. By configuration, the engine decides where to place this Chunk; the default is that all chunks are created only in memory.

Buffering and Memory

As mentioned above, the Chunks generated by the engine are placed in memory but this is configurable.

If memory is the only mechanism set for the input plugin, it will simply store as much data as it can in memory. This is the fastest mechanism with the least system overhead, but if the service is not able to deliver the records fast enough because of a slow network or an unresponsive remote service, Fluent Bit memory usage will increase since it will accumulate more data than it can deliver.

In a high load environment with backpressure, the risk of high memory usage is the chance of getting killed by the Kernel (OOM Killer). A workaround for this backpressure scenario is to limit the amount of memory in records that an input plugin can register; this configuration property is called mem_buf_limit: if a plugin has enqueued more than mem_buf_limit, it won't be able to ingest more until its data can be delivered or flushed properly. In this scenario the input plugin in question is paused.

The mem_buf_limit workaround is good for certain scenarios and environments; it helps to control the memory usage of the service, but at the cost that if a file gets rotated while the plugin is paused, you might lose that data since it won't be able to register new records. This can happen with any input source plugin. The goal of mem_buf_limit is memory control and survival of the service.

For full data safety guarantee, use filesystem buffering.

Filesystem buffering to the rescue

Enabling filesystem buffering helps with backpressure and overall memory control.

Behind the scenes, the Memory and Filesystem buffering mechanisms are not mutually exclusive. Indeed, when enabling filesystem buffering for your input plugin (source) you are getting the best of the two worlds: performance and data safety.

How does this filesystem buffering mechanism deal with high memory usage and backpressure? Fluent Bit controls the number of Chunks that are up in memory.

By default, the engine allows a total of 128 Chunks up in memory (considering all Chunks); this value is controlled by the service property storage.max_chunks_up. The active Chunks that are up are either ready for delivery or still receiving records. Any other remaining Chunk is in a down state, which means it only exists in the filesystem and won't be up in memory unless it is ready to be delivered.

If the input plugin has enabled mem_buf_limit and storage.type as filesystem, when reaching the mem_buf_limit threshold, instead of the plugin being paused, all new data will go to Chunks that are down in the filesystem. This allows the service to control memory usage while also providing a guarantee that no data will be lost.

Limiting Filesystem space for Chunks

Fluent Bit implements the concept of logical queues: a Chunk, based on its Tag, can be routed to multiple destinations, so internally we keep a reference from where a Chunk was created and where it needs to go.

It's common to find cases where, if we have multiple destinations for a Chunk, one of the destinations might be slower than the others, or maybe only one of the destinations is generating backpressure. In this scenario, how do we limit the amount of filesystem Chunks that we are logically queueing?

Starting from Fluent Bit v1.6, we introduced a new configuration property for output plugins called storage.total_limit_size which limits the number of Chunks that exist in the filesystem for a certain logical output destination. If one destination reaches the storage.total_limit_size limit, the oldest Chunk from its queue for that logical output destination will be discarded.

Configuration

The storage layer configuration takes place in three areas:

  • Service Section

  • Input Section

  • Output Section

The already-known Service section configures the global environment for the storage layer, the Input sections define which buffering mechanism to use, and the Output sections set the limits for the logical queues.

Service Section Configuration

A Service section will look like this:

That configuration sets up an optional buffering mechanism where the root path for data is /var/log/flb-storage/; it will use normal synchronization mode, without checksums, and up to a maximum of 5MB of memory when processing backlog data.

Input Section Configuration

Optionally, any Input plugin can configure its storage preference; the following table describes the options available:

The following example configures a service that offers filesystem buffering capabilities and two Input plugins: the first based on filesystem and the second with memory only.

Output Section Configuration

If certain chunks are filesystem-based (storage.type filesystem), it's possible to control the size of the logical queue for an output plugin. The following table describes the options available:

The following example creates records with CPU usage samples in the filesystem, which are then delivered to the Google Stackdriver service, limiting the logical queue (buffering) to 5M:

If for some reason Fluent Bit gets offline because of a network issue, it will continue buffering CPU samples, keeping only a maximum of 5M of the newest data.

Security

Fluent Bit provides integrated support for Transport Layer Security (TLS) and its predecessor Secure Sockets Layer (SSL). In this section, we will refer to both implementations as TLS.

Each output plugin that requires performing network I/O can optionally enable TLS and configure its behavior. The following table describes the properties available:

The listed properties can be enabled in the configuration file, specifically on each output plugin section or directly through the command line.

The following output plugins can take advantage of the TLS feature:

In addition, other plugins implement a subset of TLS support, meaning they offer a restricted configuration:

Example: enable TLS on HTTP output

By default the HTTP output plugin uses plain TCP; enabling TLS from the command line can be done with:

In the command line above, the two properties tls and tls.verify were enabled for demonstration purposes (we strongly suggest always keeping verification on).

The same behavior can be accomplished using a configuration file:

Tips and Tricks

Connect to virtual servers using TLS

Backpressure

In certain environments it's common to see that logs or data are being ingested faster than they can be flushed to some destinations. A common case is reading from big log files and dispatching the logs to a backend over the network, which takes some time to respond; this generates backpressure, leading to high memory consumption in the service.

In order to avoid backpressure, Fluent Bit implements a mechanism in the engine that restricts the amount of data that an input plugin can ingest; this is done through the configuration parameter Mem_Buf_Limit.

In-memory buffering is always available and can be restricted with Mem_Buf_Limit. If your plugin gets restricted because of this configuration and you are under a backpressure scenario, you won't be able to ingest more data until the data chunks that are in memory can be flushed.

Depending on the input plugin type in use, this might lead to discarding incoming data (e.g. the TCP input plugin), but you can rely on the secondary filesystem buffering to be safe.

Mem_Buf_Limit

This option is disabled by default and can be applied to all input plugins. Let's explain its behavior using the following scenario:

  • Mem_Buf_Limit is set to 1MB (one megabyte)

  • input plugin tries to append 700KB

  • engine routes the data to an output plugin

  • output plugin backend (HTTP Server) is down

  • engine scheduler will retry the flush after 10 seconds

  • input plugin tries to append 500KB

At this exact point, the engine will allow appending those 500KB of data into the engine: in total we have 1.2MB. The option works in a permissive mode until the limit is reached, but once the limit is exceeded the following actions are taken:

  • block local buffers for the input plugin (cannot append more data)

  • notify the input plugin invoking a pause callback

The engine will protect itself and will not append more data coming from the input plugin in question; note that it is the plugin's responsibility to keep its state and decide what to do in that paused state.

After some seconds, if the scheduler was able to flush the initial 700KB of data or it gave up after retrying, that amount of memory is released and internally the following actions happen:

  • Upon data buffer release (700KB), the internal counters get updated

  • Counters now are set at 500KB

  • Since 500KB is < 1MB it checks the input plugin state

  • If the plugin is paused, it invokes a resume callback

  • input plugin can continue appending more data
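
As a reference, a minimal input section matching the scenario above could look like this; the plugin and path are illustrative assumptions only:

[INPUT]
    Name          tail
    Path          /var/log/app/*.log
    Mem_Buf_Limit 1MB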

About pause and resume Callbacks

Each plugin is independent and not all of them implement the pause and resume callbacks. As said, these callbacks are just a notification mechanism for the plugin.

Memory Management

In certain scenarios it would be ideal to estimate how much memory Fluent Bit could be using; this is very useful for containerized environments where memory limits are a must.

Estimating

Input plugins append data independently, so in order to do an estimation, a limit should be imposed through the Mem_Buf_Limit option. If the limit was set to 10MB, we need to estimate that in the worst case, the output plugin could likely use 20MB.

So, if we impose a limit of 10MB for the input plugins and consider the worst-case scenario of the output plugin consuming 20MB extra, as a minimum we need (30MB x 1.2) = 36MB.

Glibc and Memory Fragmentation

It is well known that in intensive environments where memory allocations happen at a high rate, the default memory allocator provided by Glibc could lead to high fragmentation, causing the service to report a high memory usage.

You can check if Fluent Bit has been built with Jemalloc using the following command:

The output should look like this:

If the FLB_HAVE_JEMALLOC option is listed in Build Flags, everything will be fine.

devel

Build Fluent Bit from GIT master. This recipe aims to be used for development and testing purposes only.

v1.8.12

Build latest stable version of Fluent Bit.

is a lightweight and extensible Log Processor that comes with full support for Kubernetes:

Our Kubernetes Filter plugin is fully inspired by the written by .

must be deployed as a DaemonSet, so that it will be available on every node of your Kubernetes cluster. To get started, run the following commands to create the namespace, service account and role setup:

The default configmap assumes that dockershim is used for the cluster. If a CRI runtime such as containerd or CRI-O is being used, the should be used instead. More specifically, change the Parser described in input-kubernetes.conf from docker to cri.

is a package manager for Kubernetes and allows you to quickly deploy application packages into your running cluster. Fluent Bit is distributed via a helm chart found in the Fluent Helm Charts repo: .

The default chart values include configuration to read container logs (with Docker parsing), collect systemd logs, apply Kubernetes metadata enrichment, and finally output to an Elasticsearch cluster. You can modify the values file included to specify additional outputs, health checks, monitoring endpoints, or other configuration options.

The will not append more than 5MB into the engine until they are flushed to the Elasticsearch backend. This limit aims to provide a workaround for scenarios.

The default backend in the configuration is Elasticsearch set by the . It uses the Logstash format to ingest the logs. If you need a different Index and Type, please refer to the plugin option and do your own adjustments.

Assuming the basic volume configuration described above, you can apply the following config to start logging. You can visualize this configuration

Windows pods often lack working DNS immediately after boot (). To mitigate this issue, filter_kubernetes provides a built-in mechanism to wait until the network starts up:

Download an EXE installer from the . It has both 32-bit and 64-bit builds. Choose one which is suitable for you.

The Windows installer is built by CPack using NSIS (https://cmake.org/cmake/help/latest/cpack_gen/nsis.html) and so supports the that all NSIS installers do for silent installation and for setting the directory to install to.
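
For example, an unattended installation using the standard NSIS switches (/S for silent mode, /D to set the target directory) might look like the following; the installer file name is illustrative:

fluent-bit-1.8.15-win64.exe /S /D=C:\fluent-bit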

Also you need to install flex and bison. One way to install them on Windows is to use .

Add the path C:\WinFlexBison to your system's environment variable "Path". .

Also you need to install to pull the source code from the repository.

The feature is enabled on a per-plugin basis; not all plugins enable it. As an example, consider a configuration that aims to filter records using grep, matching only records whose labels have the color blue:
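
A minimal sketch of such a filter could look like the following; the record accessor pattern $labels['color'] is an assumption used only to illustrate the idea:

[FILTER]
    name   grep
    match  *
    regex  $labels['color'] ^blue$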

To simplify the configuration of regular expressions, you can use the Rubular web site. We have posted an example using the regex described above plus a log line that matches the pattern:

The following example files can be located at:

It's common that Fluent Bit aims to connect to external services to deliver logs over the network; this is the case of , and , among others. Being able to connect to one node (host) is normal and enough for most of the use cases, but there are other scenarios where balancing across different nodes is required. The Upstream feature provides such a capability.

The TLS options available are described in the section and can be added to any Node section.

Certain configuration directives in Fluent Bit refer to unit sizes, such as when defining the size of a buffer or specific limits; we can find these in plugins like , , or in generic properties like .

Starting from v0.11.10, all unit sizes have been standardized across the core and plugins; the following table describes the options that can be used and what they mean:
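
As an illustration, the following input section uses unit-size suffixes for its buffer properties; the plugin, path and values are examples only:

[INPUT]
    Name              tail
    Path              /var/log/app/*.log
    Buffer_Chunk_Size 32k
    Buffer_Max_Size   1MB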

The end-goal of is to collect, parse, filter and ship logs to a central place. In this workflow there are many phases, and one of the critical pieces is the ability to do buffering: a mechanism to place processed data into a temporary location until it is ready to be shipped.

When Filesystem buffering is enabled, the behavior of the engine is different: upon Chunk creation, it stores the content in memory but also maps a copy on disk (through ). A Chunk that is active in memory and backed up on disk is said to be up, which means "the chunk content is up in memory".

The Service section refers to the section defined in the main :


Fluent Bit supports . If you are serving multiple hostnames on a single IP address (a.k.a. virtual hosting), you can make use of tls.vhost to connect to a specific hostname.

As described in the concepts section, Fluent Bit offers a hybrid mode for data handling: in-memory and filesystem (optional).

If in addition to Mem_Buf_Limit the input plugin has defined a storage.type of filesystem (as described in ), when the limit is reached, all new data will be stored safely in the filesystem.

The plugin that implements this and keeps a good state is the plugin. When the pause callback is triggered, it stops its collectors and stops appending data. Upon resume, it re-enables the collectors.

In order to estimate memory usage, we will assume that the input plugins have set the Mem_Buf_Limit option (you can learn more about it in the section).

Fluent Bit has an internal binary representation for the data being processed, but when this data reaches an output plugin, the plugin will likely create its own representation in a new memory buffer for processing. The best examples are the and output plugins; both need to convert the binary representation to their respective custom JSON formats before talking to their backend servers.

It's strongly suggested that in any production environment, Fluent Bit should be built with enabled (e.g. -DFLB_JEMALLOC=On). Jemalloc is an alternative memory allocator that can reduce fragmentation (among other things), resulting in better performance.
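
As a reference, building from source with Jemalloc enabled would look roughly like this (a sketch; paths and additional CMake options depend on your environment):

$ cd fluent-bit/build
$ cmake -DFLB_JEMALLOC=On ../
$ make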

fluent-bit_git.bb
fluent-bit_1.8.12.bb
Fluent Bit
Fluentd Kubernetes Metadata Filter
Jimmi Dyson
Fluent Bit
CRI parser
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.14.md#deprecations
Helm
https://github.com/fluent/helm-charts
https://github.com/fluent/helm-charts/blob/master/charts/fluent-bit/values.yaml
Tail input plugin
backpressure
Elasticsearch Output Plugin
here
#78479
download page
default options
winflexbison
Here's how to do that
git
grep
https://rubular.com/r/NDuyKwlTGOvq2g
https://github.com/fluent/fluent-bit/tree/master/documentation/examples/multiline/regex-001
output plugins
HTTP
Elasticsearch
Forward
Forward
TLS/SSL
Tail Input
Forward Input
Mem_Buf_Limit
Fluent Bit
@INCLUDE
@SET
[SERVICE]
    flush                     1
    log_Level                 info
    storage.path              /var/log/flb-storage/
    storage.sync              normal
    storage.checksum          off
    storage.backlog.mem_limit 5M

storage.type

Specify the buffering mechanism to use. It can be memory or filesystem.

memory

storage.max_chunks_pause

Specify if file storage is to be paused when reaching the chunk limit.

off

[SERVICE]
    flush                     1
    log_Level                 info
    storage.path              /var/log/flb-storage/
    storage.sync              normal
    storage.checksum          off
    storage.backlog.mem_limit 5M

[INPUT]
    name          cpu
    storage.type  filesystem

[INPUT]
    name          mem
    storage.type  memory

storage.total_limit_size

Limit the maximum number of Chunks in the filesystem for the current output logical destination.

[SERVICE]
    flush                     1
    log_Level                 info
    storage.path              /var/log/flb-storage/
    storage.sync              normal
    storage.checksum          off
    storage.backlog.mem_limit 5M

[INPUT]
    name                      cpu
    storage.type              filesystem 

[OUTPUT]
    name                      stackdriver
    match                     *
    storage.total_limit_size  5M

tls

enable or disable TLS support

Off

tls.verify

force certificate validation

On

tls.debug

Set TLS debug verbosity level. It accepts the following values: 0 (No debug), 1 (Error), 2 (State change), 3 (Informational) and 4 (Verbose)

1

tls.ca_file

absolute path to CA certificate file

tls.ca_path

absolute path to scan for certificate files

tls.crt_file

absolute path to Certificate file

tls.key_file

absolute path to private Key file

tls.key_passwd

optional password for tls.key_file file

tls.vhost

hostname to be used for TLS SNI extension

$ fluent-bit -i cpu -t cpu -o http://192.168.2.3:80/something \
    -p tls=on         \
    -p tls.verify=off \
    -m '*'
[INPUT]
    Name  cpu
    Tag   cpu

[OUTPUT]
    Name       http
    Match      *
    Host       192.168.2.3
    Port       80
    URI        /something
    tls        On
    tls.verify Off
[INPUT]
    Name  cpu
    Tag   cpu

[OUTPUT]
    Name        forward
    Match       *
    Host        192.168.10.100
    Port        24224
    tls         On
    tls.verify  On
    tls.ca_file /etc/certs/fluent.crt
    tls.vhost   fluent.example.com
$ bin/fluent-bit -h|grep JEMALLOC
Build Flags =  JSMN_PARENT_LINKS JSMN_STRICT FLB_HAVE_TLS FLB_HAVE_SQLDB
FLB_HAVE_TRACE FLB_HAVE_FLUSH_LIBCO FLB_HAVE_VALGRIND FLB_HAVE_FORK
FLB_HAVE_PROXY_GO FLB_HAVE_JEMALLOC JEMALLOC_MANGLE FLB_HAVE_REGEX
FLB_HAVE_C_TLS FLB_HAVE_SETJMP FLB_HAVE_ACCEPT4 FLB_HAVE_INOTIFY

Disk I/O Metrics

The disk input plugin gathers information about the disk throughput of the running system at a set interval of time and reports it.

The Disk I/O metrics plugin creates metrics that are log-based (i.e. a JSON payload). If you are looking for Prometheus-based metrics, please see the Node Exporter Metrics input plugin.

Configuration Parameters

The plugin supports the following configuration parameters:

Key
Description
Default

Interval_Sec

Polling interval (seconds).

1

Interval_NSec

Polling interval (nanosecond).

0

Dev_Name

Device name to limit the target. (e.g. sda). If not set, in_disk gathers information from all of disks and partitions.

all disks

Getting Started

In order to get disk usage from your system, you can run the plugin from the command line or through the configuration file:

Command Line

$ fluent-bit -i disk -o stdout
Fluent Bit v1.x.x
* Copyright (C) 2019-2020 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2017/01/28 16:58:16] [ info] [engine] started
[0] disk.0: [1485590297, {"read_size"=>0, "write_size"=>0}]
[1] disk.0: [1485590298, {"read_size"=>0, "write_size"=>0}]
[2] disk.0: [1485590299, {"read_size"=>0, "write_size"=>0}]
[3] disk.0: [1485590300, {"read_size"=>0, "write_size"=>11997184}]

Configuration File

In your main configuration file append the following Input & Output sections:

[INPUT]
    Name          disk
    Tag           disk
    Interval_Sec  1
    Interval_NSec 0
[OUTPUT]
    Name   stdout
    Match  *

Note: Total interval (sec) = Interval_Sec + (Interval_Nsec / 1000000000).

e.g. 1.5s = 1s + 500000000ns
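
For example, to poll every 1.5 seconds you could combine both properties as follows:

[INPUT]
    Name          disk
    Tag           disk
    Interval_Sec  1
    Interval_NSec 500000000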

fluent-bit-1.8.15-win32.exe
dd72b525b9a9a5b053bdb7cf67c9d4efe3fcde47dba2fcbf88b33b13687eddf5
fluent-bit-1.8.15-win32.zip
4646f0c7f5ed91d264f756ccfc144cd58b47d6c17f5bc6f96057bac68ead8613
fluent-bit-1.8.15-win64.exe
b625f4bf56dc63836f996f802c55e36b7748c4876f95c07fe0a086f735ec9a7e
fluent-bit-1.8.15-win64.zip
af799fc8c33d07f467e839b26936a24e20d527ff99fb7f97ecaad1b7782349c8
Fluent Bit
mmap(2)
configuration file
Amazon CloudWatch
Amazon Kinesis Data Firehose
Amazon Kinesis Data Streams
Amazon S3
Azure
BigQuery
Datadog
Elasticsearch
Forward
GELF
HTTP
InfluxDB
Kafka REST Proxy
Loki
Slack
Splunk
Stackdriver
TCP & TLS
Treasure Data
Kubernetes Filter
TLS server name indication
Buffering
Buffering & Storage
Tail Input
Backpressure
InfluxDB
Elasticsearch
jemalloc

storage.path

Set an optional location in the file system to store streams and chunks of data. If this parameter is not set, Input plugins can only use in-memory buffering.

storage.sync

Configure the synchronization mode used to store the data into the file system. It can take the values normal or full.

normal

storage.checksum

Enable the data integrity check when writing and reading data from the filesystem. The storage layer uses the CRC32 algorithm.

Off

storage.max_chunks_up

If the input plugin has enabled filesystem storage type, this property sets the maximum number of Chunks that can be up in memory. This helps to control memory usage.

128

storage.backlog.mem_limit

If storage.path is set, Fluent Bit will look for data chunks that were not delivered and are still in the storage layer; these are called backlog data. This option configures a hint of the maximum amount of memory to use when processing these records.

5M

storage.metrics

off

Networking

A common use case is when a component or plugin needs to connect to a service to send and receive data. Although this operational mode sounds easy to deal with, there are many factors that can make things hard, like unresponsive services, network latency or any kind of connectivity error. The networking interface aims to abstract and simplify the network I/O handling, minimize risks and optimize performance.

Concepts

TCP Connect Timeout

Most of the time creating a new TCP connection to a remote server is straightforward and takes a few milliseconds. But there are cases where DNS resolving, slow network or incomplete TLS handshakes might create long delays, or incomplete connection statuses.

The net.connect_timeout property allows you to configure the maximum time to wait for a connection to be established; note that this value already considers the TLS handshake process.

The net.connect_timeout_log_error property indicates whether an error should be logged in case of a connect timeout. If disabled, the timeout is logged as a debug-level message instead.

TCP Source Address

In environments with multiple network interfaces, it might be desirable to choose which interface to use for the data that will flow through the network.

The net.source_address property allows you to specify which network address must be used for a TCP connection and data flow.

Connection Keepalive

TCP is a connection-oriented channel: to deliver and receive data from a remote end-point, in most cases we use a TCP connection. This TCP connection can be created and destroyed once it is no longer needed; this approach has pros and cons. Here we will refer to the opposite case: keeping the connection open.

The concept of Connection Keepalive refers to the ability of the client (Fluent Bit in this case) to keep the TCP connection open in a persistent way; that means that once the connection is created and used, instead of closing it, it can be recycled. This feature offers many benefits in terms of performance, since communication channels are always established beforehand.

Connection Keepalive Idle Timeout

If a connection is keepalive enabled, there might be scenarios where the connection can be unused for long periods of time. Having an idle keepalive connection is not helpful, so it's only worth keeping connections alive while they are being used.

In order to control how long a keepalive connection can be idle, we expose the configuration property called net.keepalive_idle_timeout.

DNS mode

If a transport layer protocol is specified, the plugin whose configuration section contains the net.dns.mode setting overrides the global dns.mode value and issues DNS requests using the specified protocol, which can be either TCP or UDP.

Configuration Options

For plugins that rely on networking I/O, the following section describes the network configuration properties available and how they can be used to optimize performance or adjust to different configuration needs:

Property
Description
Default

net.connect_timeout

Set the maximum time expressed in seconds to wait for a TCP connection to be established; this includes the TLS handshake time.

10

net.connect_timeout_log_error

On connection timeout, specify if it should log an error. When disabled, the timeout is logged as a debug message

true

net.source_address

Specify network address (interface) to use for connection and data traffic.

net.keepalive

Enable or disable connection keepalive support. Accepts a boolean value: on / off.

on

net.keepalive_idle_timeout

Set maximum time expressed in seconds for an idle keepalive connection.

30

net.keepalive_max_recycle

Set the maximum number of times a keepalive connection can be used before it is destroyed.

0

net.dns.mode

Set the primary transport layer protocol used by the asynchronous DNS resolver for connections established in the plugin where this configuration value is used

UDP

Example

As an example, we will send 5 random messages through a TCP output connection; on the remote side we will use the nc (netcat) utility to see the data.

Put the following configuration snippet in a file called fluent-bit.conf:

[SERVICE]
    flush     1
    log_level info

[INPUT]
    name      random
    samples   5

[OUTPUT]
    name      tcp
    match     *
    host      127.0.0.1
    port      9090
    format    json_lines
    # Networking Setup
    net.dns.mode                TCP
    net.connect_timeout         5
    net.source_address          127.0.0.1
    net.keepalive               on
    net.keepalive_idle_timeout  10

In another terminal, start nc and make it listen for messages on TCP port 9090:

$ nc -l 9090

Now start Fluent Bit with the configuration file written above and you will see the data flowing to netcat:

$ nc -l 9090
{"date":1587769732.572266,"rand_value":9704012962543047466}
{"date":1587769733.572354,"rand_value":7609018546050096989}
{"date":1587769734.572388,"rand_value":17035865539257638950}
{"date":1587769735.572419,"rand_value":17086151440182975160}
{"date":1587769736.572277,"rand_value":527581343064950185}

If the net.keepalive option were not enabled, Fluent Bit would close the TCP connection and netcat would quit; here we can see how the keepalive connection works.

After the 5 records arrive, the connection will stay idle, and after 10 seconds it will be closed due to net.keepalive_idle_timeout.

Scheduling and Retries

Once an output plugin gets called to flush some data, after processing that data it can notify the Engine of three possible return statuses:

  • OK

  • Retry

  • Error

If the return status was OK, it means the data was successfully processed and flushed; if it returned an Error status, it means an unrecoverable error happened and the engine should not try to flush that data again. If a Retry was requested, the Engine will ask the Scheduler to retry flushing that data, and the Scheduler will decide how many seconds to wait before that happens.

Configuring Retries

The Scheduler provides a simple configuration option called Retry_Limit, which can be set independently for each output section. This option allows you to disable retries or impose a limit of N tries, discarding the data after reaching that limit:

Value
Description

Retry_Limit

N

Integer value to set the maximum number of retries allowed. N must be >= 1 (default: 1)

Retry_Limit

no_limits or False

When Retry_Limit is set to no_limits or False, it means that there is no limit on the number of retries that the Scheduler can do.

Retry_Limit

no_retries

When Retry_Limit is set to no_retries, it means that retries are disabled and the Scheduler will not try to send data to the destination if the first attempt failed.

Example

The following example configures two outputs, where the HTTP plugin has an unlimited number of retries and the Elasticsearch plugin has a limit of 5 retries:

[OUTPUT]
    Name        http
    Host        192.168.5.6
    Port        8080
    Retry_Limit False

[OUTPUT]
    Name            es
    Host            192.168.5.20
    Port            9200
    Logstash_Format On
    Retry_Limit     5

Monitoring

Learn how to monitor your Fluent Bit data pipelines

Fluent Bit comes with built-in features to allow you to monitor the internals of your pipeline, connect to Prometheus and Grafana, perform health checks, and use connectors to external services for such purposes:

HTTP Server

Fluent Bit comes with a built-in HTTP Server that can be used to query internal information and monitor metrics of each running plugin.

The monitoring interface can be easily integrated with Prometheus since we support its native format.

Getting Started

To get started, the first step is to enable the HTTP Server from the configuration file:

[SERVICE]
    HTTP_Server  On
    HTTP_Listen  0.0.0.0
    HTTP_PORT    2020

[INPUT]
    Name cpu

[OUTPUT]
    Name  stdout
    Match *

the above configuration snippet will instruct Fluent Bit to start its HTTP Server on TCP port 2020, listening on all network interfaces:

$ bin/fluent-bit -c fluent-bit.conf
Fluent Bit v1.4.0
* Copyright (C) 2019-2020 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2020/03/10 19:08:24] [ info] [engine] started
[2020/03/10 19:08:24] [ info] [http_server] listen iface=0.0.0.0 tcp_port=2020

now a simple curl command is enough to gather some information:

$ curl -s http://127.0.0.1:2020 | jq
{
  "fluent-bit": {
    "version": "0.13.0",
    "edition": "Community",
    "flags": [
      "FLB_HAVE_TLS",
      "FLB_HAVE_METRICS",
      "FLB_HAVE_SQLDB",
      "FLB_HAVE_TRACE",
      "FLB_HAVE_HTTP_SERVER",
      "FLB_HAVE_FLUSH_LIBCO",
      "FLB_HAVE_SYSTEMD",
      "FLB_HAVE_VALGRIND",
      "FLB_HAVE_FORK",
      "FLB_HAVE_PROXY_GO",
      "FLB_HAVE_REGEX",
      "FLB_HAVE_C_TLS",
      "FLB_HAVE_SETJMP",
      "FLB_HAVE_ACCEPT4",
      "FLB_HAVE_INOTIFY"
    ]
  }
}

Note that we are piping the curl command output into the jq program, which helps to make the JSON data easy to read from the terminal. Fluent Bit doesn't aim to do JSON pretty-printing.

REST API Interface

Fluent Bit aims to expose useful interfaces for monitoring, as of Fluent Bit v0.14 the following end points are available:

URI
Description
Data Format

/

Fluent Bit build information

JSON

/api/v1/uptime

Get uptime information in seconds and human readable format

JSON

/api/v1/metrics

Internal metrics per loaded plugin

JSON

/api/v1/metrics/prometheus

Internal metrics per loaded plugin ready to be consumed by a Prometheus Server

Prometheus Text 0.0.4

/api/v1/storage

Get internal metrics of the storage layer / buffered data. This option is enabled only if in the SERVICE section the property storage.metrics has been enabled

JSON

/api/v1/health

Fluent Bit health check result

String

Uptime Example

Query the service uptime with the following command:

$ curl -s http://127.0.0.1:2020/api/v1/uptime | jq

it should print a similar output like this:

{
  "uptime_sec": 8950000,
  "uptime_hr": "Fluent Bit has been running:  103 days, 14 hours, 6 minutes and 40 seconds"
}

Metrics Examples

Query internal metrics in JSON format with the following command:

$ curl -s http://127.0.0.1:2020/api/v1/metrics | jq

it should print a similar output like this:

{
  "input": {
    "cpu.0": {
      "records": 8,
      "bytes": 2536
    }
  },
  "output": {
    "stdout.0": {
      "proc_records": 5,
      "proc_bytes": 1585,
      "errors": 0,
      "retries": 0,
      "retries_failed": 0
    }
  }
}

Metrics in Prometheus format

Query internal metrics in Prometheus Text 0.0.4 format:

$ curl -s http://127.0.0.1:2020/api/v1/metrics/prometheus

this time the same metrics will be in Prometheus format instead of JSON:

fluentbit_input_records_total{name="cpu.0"} 57 1509150350542
fluentbit_input_bytes_total{name="cpu.0"} 18069 1509150350542
fluentbit_output_proc_records_total{name="stdout.0"} 54 1509150350542
fluentbit_output_proc_bytes_total{name="stdout.0"} 17118 1509150350542
fluentbit_output_errors_total{name="stdout.0"} 0 1509150350542
fluentbit_output_retries_total{name="stdout.0"} 0 1509150350542
fluentbit_output_retries_failed_total{name="stdout.0"} 0 1509150350542

Configuring Aliases

By default, plugins configured at runtime get an internal name in the format plugin_name.ID. For monitoring purposes, this can be confusing if many plugins of the same type were configured. To make a distinction, each configured input or output section can set an alias that will be used as the parent name for the metric.

[SERVICE]
    HTTP_Server  On
    HTTP_Listen  0.0.0.0
    HTTP_PORT    2020

[INPUT]
    Name  cpu
    Alias server1_cpu

[OUTPUT]
    Name  stdout
    Alias raw_output
    Match *

Now when querying the metrics we get the aliases in place instead of the plugin name:

{
  "input": {
    "server1_cpu": {
      "records": 8,
      "bytes": 2536
    }
  },
  "output": {
    "raw_output": {
      "proc_records": 5,
      "proc_bytes": 1585,
      "errors": 0,
      "retries": 0,
      "retries_failed": 0
    }
  }
}

Grafana Dashboard and Alerts

Alerts

Health Check for Fluent Bit

Fluent Bit now supports four new configuration options to set up the health check.

Config Name
Description
Default Value

Health_Check

enable Health check feature

Off

HC_Errors_Count

The error count needed to meet the unhealthy requirement; this is a sum across all output plugins within a defined HC_Period. Example of an output error: [2022/02/16 10:44:10] [ warn] [engine] failed to flush chunk '1-1645008245.491540684.flb', retry in 7 seconds: task_id=0, input=forward.1 > output=cloudwatch_logs.3 (out_id=3)

5

HC_Retry_Failure_Count

The retry failure count needed to meet the unhealthy requirement; this is a sum across all output plugins within a defined HC_Period. Example of a retry failure: [2022/02/16 20:11:36] [ warn] [engine] chunk '1-1645042288.260516436.flb' cannot be retried: task_id=0, input=tcp.3 > output=cloudwatch_logs.1

5

HC_Period

The time period, in seconds, over which to count the error and retry failure data points

60

Note: Not every error log is counted as an error; the error and retry failure counters only increase for the specific error types shown in the examples in the configuration table above.

So the feature works as follows: based on the HC_Period set, if the real error count is over HC_Errors_Count or the retry failure count is over HC_Retry_Failure_Count, Fluent Bit is considered unhealthy. The health endpoint will return HTTP status 500 and the string error; otherwise it's healthy and will return HTTP status 200 and the string ok.

The equation is:

health status = (error count > HC_Errors_Count config value) OR (retry failure count > HC_Retry_Failure_Count config value) within the HC_Period interval

Note: HC_Errors_Count and HC_Retry_Failure_Count only count output plugin events, summing errors and retry failures from all running output plugins.

See the config example:

[SERVICE]
    HTTP_Server  On
    HTTP_Listen  0.0.0.0
    HTTP_PORT    2020
    Health_Check On 
    HC_Errors_Count 5 
    HC_Retry_Failure_Count 5 
    HC_Period 5 

[INPUT]
    Name  cpu

[OUTPUT]
    Name  stdout
    Match *

The command to call health endpoint

$ curl -s http://127.0.0.1:2020/api/v1/health

Based on the fluent bit status, the result will be:

  • HTTP status 200 and "ok" in response to healthy status

  • HTTP status 500 and "error" in response for unhealthy status

With the example config, the health status is determined by following equation:

Health status = (HC_Errors_Count > 5) OR (HC_Retry_Failure_Count > 5) IN 5 seconds

If (HC_Errors_Count > 5) OR (HC_Retry_Failure_Count > 5) IN 5 seconds is TRUE, then it's unhealthy.

If (HC_Errors_Count > 5) OR (HC_Retry_Failure_Count > 5) IN 5 seconds is FALSE, then it's healthy.

Calyptia Cloud

Get Started with Calyptia Cloud

Registering your Fluent Bit agent takes less than one minute. Steps:

In your Fluent Bit configuration file, append the following configuration section:

[CUSTOM]
    name     calyptia
    api_key  <YOUR_API_KEY>

Make sure to replace the API key in the configuration with your own. A few seconds after restarting your Fluent Bit agent, the Calyptia Cloud Dashboard will list your agent. Metrics will take around 30 seconds to show up.

Contact Calyptia

Dump Internals / Signal

Fluent Bit v1.4 introduces the Dump Internals feature, which can be triggered easily from the command line by sending the CONT Unix signal.

note: this feature is only available on Linux and BSD family operating systems

Usage

Run the following kill command to signal Fluent Bit:

kill -CONT `pidof fluent-bit`

The pidof command looks up the Process ID of Fluent Bit; you can replace it with the actual process ID if you already know it.

Fluent Bit will dump the following information to the standard output interface (stdout):

[engine] caught signal (SIGCONT)
[2020/03/23 17:39:02] Fluent Bit Dump

===== Input =====
syslog_debug (syslog)
│
├─ status
│  └─ overlimit     : no
│     ├─ mem size   : 60.8M (63752145 bytes)
│     └─ mem limit  : 61.0M (64000000 bytes)
│
├─ tasks
│  ├─ total tasks   : 92
│  ├─ new           : 0
│  ├─ running       : 92
│  └─ size          : 171.1M (179391504 bytes)
│
└─ chunks
   └─ total chunks  : 92
      ├─ up chunks  : 35
      ├─ down chunks: 57
      └─ busy chunks: 92
         ├─ size    : 60.8M (63752145 bytes)
         └─ size err: 0

===== Storage Layer =====
total chunks     : 92
├─ mem chunks    : 0
└─ fs chunks     : 92
   ├─ up         : 35
   └─ down       : 57

Input Plugins Dump

The dump provides insights for every input instance configured.

Status

Overall ingestion status of the plugin.

Entry
Sub-entry
Description

overlimit

mem_size

Current memory size in use by the input plugin in-memory.

mem_limit

Limit set by Mem_Buf_Limit.

Tasks

When an input plugin ingests data into the engine, a Chunk is created. A Chunk can contain multiple records. At flush time, the engine creates a Task that contains the routes for the Chunk in question.

The Task dump describes the tasks associated to the input plugin:

Entry
Description

total_tasks

Total number of active tasks associated to data generated by the input plugin.

new

Number of tasks not assigned yet to an output plugin. Tasks are in new status for a very short period of time (most of the time this value is very low or zero).

running

Number of active tasks being processed by output plugins.

size

Amount of memory used by the Chunks being processed (Total chunks size).

Chunks

The Chunks dump gives more details about all the chunks that the input plugin has generated and that are still being processed.

Depending on the buffering strategy and limits imposed by configuration, some Chunks might be up (in memory) or down (filesystem).

Entry
Sub-entry
Description

total_chunks

Total number of Chunks generated by the input plugin that are still being processed by the engine.

up_chunks

Total number of Chunks that are loaded in memory.

down_chunks

Total number of Chunks that are stored in the filesystem but not loaded in memory yet.

busy_chunks

Chunks marked as busy (being flushed) or locked. Busy Chunks are immutable and likely are ready to (or being) processed.

size

Amount of bytes used by the Chunk.

size err

Number of Chunks in an error state where their size could not be retrieved.

Storage Layer Dump

Fluent Bit relies on a custom storage layer interface designed for hybrid buffering. The Storage Layer entry contains a total summary of Chunks registered by Fluent Bit:

Entry
Sub-Entry
Description

total chunks

Total number of Chunks

mem chunks

Total number of Chunks memory-based

fs chunks

Total number of Chunks filesystem based

up

Total number of filesystem chunks up in memory

down

Total number of filesystem chunks down (not loaded in memory)

HTTP Proxy

Enable traffic through a proxy server via HTTP_PROXY environment variable

HTTP Proxy

Fluent Bit supports setting up an HTTP proxy for all egress HTTP/HTTPS traffic by setting the HTTP_PROXY environment variable:

  • You can set up basic authentication with HTTP_PROXY=http://<username>:<password>@<proxy host>:<port> to provide your username and password when connecting to the proxy.

  • You can also set up HTTP_PROXY=http://<proxy host>:<port> to omit username and password if there is none.

NO_PROXY

In some environments, we don't want HTTP traffic for certain domains to go through the HTTP_PROXY; this is where the NO_PROXY environment variable comes in.

One typical use case for NO_PROXY is when running fluent-bit in a Kubernetes environment, where we want:

  • All real egress traffic goes through a HTTP proxy.

  • All "Kubernetes local" traffic does not go through the HTTP proxy.

  • We can set NO_PROXY=127.0.0.1,localhost,kubernetes.default.svc in this case.
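
For example, the environment for the Fluent Bit process could be set along these lines; the proxy address and credentials are illustrative:

export HTTP_PROXY=http://user:password@proxy.example.com:8080
export NO_PROXY=127.0.0.1,localhost,kubernetes.default.svc
fluent-bit -c /fluent-bit/etc/fluent-bit.conf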

Validating your Data and Structure

Fluent Bit is a powerful log processing tool that can deal with different sources and formats; in addition, it provides several filters that can be used to perform custom modifications. This flexibility is really good, but as your pipeline grows, it's strongly recommended to validate your data and structure.

We encourage Fluent Bit users to integrate data validation in their CI systems

A simplified view of our data processing pipeline is as follows:

In a normal production environment, many Inputs, Filters, and Outputs are defined in the configuration, so integrating a continuous validation of your configuration against expected results is a must. For this requirement, Fluent Bit provides a specific filter called Expect, which can be used to validate expected Keys and Values from your records and take some action when an exception is found.

How it Works

Ideally you want to add validation checkpoints between each step so you can know whether your data structure is correct; we do this by using the expect filter.

The expect filter sets rules that aim to validate certain criteria, like:

  • does the record contain a key A?

  • does the record not contain key A?

  • is the value of key A equal to NULL?

  • is the value of key A different from NULL?

  • is the value of key A equal to B?

Every expect filter configuration can expose specific rules to validate the content of your records; it supports the following configuration properties:

Property
Description

key_exists

Check if a key with a given name exists in the record.

key_not_exists

Check if a key does not exist in the record.

key_val_is_null

check that the value of the key is NULL.

key_val_is_not_null

check that the value of the key is NOT NULL.

key_val_eq

check that the value of the key equals the given value in the configuration.

action

action to take when a rule does not match. The available options are warn or exit. On warn, a warning message is sent to the logging layer when a mismatch of the rules above is found; using exit makes Fluent Bit abort with status code 255.

Start Testing

Consider the following JSON file called data.log with the following content:

{"color": "blue", "label": {"name": null}}
{"color": "red", "label": {"name": "abc"}, "meta": "data"}
{"color": "green", "label": {"name": "abc"}, "meta": null}

The following Fluent Bit configuration file sets up a pipeline to consume the log above and applies an expect filter to validate that the keys color and label exist:

[SERVICE]
    flush        1
    log_level    info
    parsers_file parsers.conf

[INPUT]
    name        tail
    path        ./data.log
    parser      json
    exit_on_eof on

# First 'expect' filter to validate that our data was structured properly
[FILTER]
    name        expect
    match       *
    key_exists  color
    key_exists  $label['name']
    action      exit

[OUTPUT]
    name        stdout
    match       *

note that if for some reason the JSON parser fails or is missing in the tail input (line 9), the expect filter will trigger the exit action. As a test, go ahead and comment out or remove line 9.

As a second step, we will extend our pipeline by adding a grep filter to match records whose label map contains a key called name with value abc, and then an expect filter to re-validate that condition:

[SERVICE]
    flush        1
    log_level    info
    parsers_file parsers.conf

[INPUT]
    name         tail
    path         ./data.log
    parser       json
    exit_on_eof  on

# First 'expect' filter to validate that our data was structured properly
[FILTER]
    name       expect
    match      *
    key_exists color
    key_exists label
    action     exit

# Match records that only contains map 'label' with key 'name' = 'abc'
[FILTER]
    name       grep
    match      *
    regex      $label['name'] ^abc$

# Check that every record contains 'label' with a non-null value
[FILTER]
    name       expect
    match      *
    key_val_eq $label['name'] abc
    action     exit

# Append a new key to the record using an environment variable
[FILTER]
    name       record_modifier
    match      *
    record     hostname ${HOSTNAME}

# Check that every record contains 'hostname' key
[FILTER]
    name       expect
    match      *
    key_exists hostname
    action     exit

[OUTPUT]
    name       stdout
    match      *

Deploying in Production

When deploying your configuration in production, you might want to remove the expect filters from your configuration, since they add unnecessary extra work unless you want 100% coverage of checks at runtime.

Pipeline Monitoring

Learn how to monitor your data pipeline with external services

A Data Pipeline represents a flow of data that goes through the inputs (sources), filters, and outputs (sinks). There are a couple of ways to monitor the pipeline. We recommend the following sections for a better understanding and steps to get started:

Running a Logging Pipeline Locally

Create a Configuration File

fluent-bit.conf:

[INPUT]
  Name dummy
  Dummy {"top": {".dotted": "value"}}

[OUTPUT]
  Name es
  Host elasticsearch
  Replace_Dots On

Docker Compose

docker-compose.yaml:

version: "3.7"

services:
  fluent-bit:
    image: fluent/fluent-bit
    volumes:
      - ./fluent-bit.conf:/fluent-bit/etc/fluent-bit.conf
    depends_on:
      - elasticsearch
  elasticsearch:
    image: elasticsearch:7.6.2
    ports:
      - "9200:9200"
    environment:
      - discovery.type=single-node

View indexed logs

To view indexed logs run:

curl "localhost:9200/_search?pretty" \
  -H 'Content-Type: application/json' \
  -d'{ "query": { "match_all": {} }}'

To "start fresh", delete the index by running:

curl -X DELETE "localhost:9200/fluent-bit?pretty"

Node Exporter Metrics

A plugin based on Prometheus Node Exporter to collect system / host level metrics

The initial release of Node Exporter Metrics contains a subset of the collectors and metrics available in Prometheus Node Exporter, and we plan to expand them over time.

Important note: Metrics collected with Node Exporter Metrics flow through a separate pipeline from logs and current filters do not operate on top of metrics.

This plugin is currently only supported on Linux based operating systems.

Configuration

Key
Description
Default

scrape_interval

The rate at which metrics are collected from the host operating system

5 seconds

path.procfs

The mount point used to collect process information and metrics

/proc/

path.sysfs

The path in the filesystem used to collect system metrics

/sys/

Collectors available

The following table describes the collectors available as part of this plugin. All of them are enabled by default and respect the original metric names, descriptions, and types from Prometheus Node Exporter, so you can use your current dashboards without any compatibility problems.

note: the Version column specifies the Fluent Bit version where the collector is available.

Name
Description
OS
Version

cpu

Exposes CPU statistics.

Linux

v1.8

cpufreq

Exposes CPU frequency statistics.

Linux

v1.8

diskstats

Exposes disk I/O statistics.

Linux

v1.8

filefd

Exposes file descriptor statistics from /proc/sys/fs/file-nr.

Linux

v1.8.2

loadavg

Exposes load average.

Linux

v1.8

meminfo

Exposes memory statistics.

Linux

v1.8

netdev

Exposes network interface statistics such as bytes transferred.

Linux

v1.8.2

stat

Exposes various statistics from /proc/stat. This includes boot time, forks, and interruptions.

Linux

v1.8

time

Exposes the current system time.

Linux

v1.8

uname

Exposes system information as provided by the uname system call.

Linux

v1.8

vmstat

Exposes statistics from /proc/vmstat.

Linux

v1.8.2

Getting Started

Simple Configuration File

# Node Exporter Metrics + Prometheus Exporter
# -------------------------------------------
# The following example collect host metrics on Linux and expose
# them through a Prometheus HTTP end-point.
#
# After starting the service try it with:
#
# $ curl http://127.0.0.1:2021/metrics
#
[SERVICE]
    flush           1
    log_level       info

[INPUT]
    name            node_exporter_metrics
    tag             node_metrics
    scrape_interval 2

[OUTPUT]
    name            prometheus_exporter
    match           node_metrics
    listen          0.0.0.0
    port            2021

        

You can test the expose of the metrics by using curl:

curl http://127.0.0.1:2021/metrics

Container to Collect Host Metrics

When deploying Fluent Bit in a container you will need to specify additional settings to ensure that Fluent Bit has access to the host operating system. The following docker command deploys Fluent Bit with specific mount paths and settings enabled to ensure that Fluent Bit can collect from the host. These are then exposed over port 2021.

docker run -ti -v /proc:/host/proc \
               -v /sys:/host/sys   \
               -p 2021:2021        \
               fluent/fluent-bit:1.8.0 \
               /fluent-bit/bin/fluent-bit \
                         -i node_exporter_metrics -p path.procfs=/host/proc -p path.sysfs=/host/sys \
                         -o prometheus_exporter -p "add_label=host $HOSTNAME" \
                         -f 1

Fluent Bit + Prometheus + Grafana

If you like dashboards for monitoring, Grafana is one of the preferred options. In our Fluent Bit source code repository, we have pushed a simple docker-compose example. Steps:

Get a copy of Fluent Bit source code

git clone https://github.com/fluent/fluent-bit
cd fluent-bit/docker_compose/node-exporter-dashboard/

Start the service and view your Dashboard

docker-compose up --force-recreate -d --build

Now open your browser at the address http://127.0.0.1:3000. When asked for the credentials to access Grafana, just use the admin username and admin password.

Note that by default the Grafana dashboard plots the data from the last 24 hours, so just change it to Last 5 minutes to see the recent data being collected.

Stop the Service

docker-compose down

Enhancement Requests

Docker Events

Configuration Parameters

This plugin supports the following configuration parameters:

Command Line

Configuration File

In your main configuration file append the following Input & Output sections:

Dummy

The dummy input plugin generates dummy events. It is useful for testing, debugging, benchmarking and getting started with Fluent Bit.

Configuration Parameters

The plugin supports the following configuration parameters:

Getting Started

You can run the plugin from the command line or through the configuration file:

Command Line

Configuration File

In your main configuration file append the following Input & Output sections:

Collectd

The collectd input plugin allows you to receive datagrams from collectd service.

Configuration Parameters

The plugin supports the following configuration parameters:

Configuration Examples

Here is a basic configuration example.

With this configuration, Fluent Bit listens to 0.0.0.0:25826, and outputs incoming datagram packets to stdout.

You must set the same types.db files that your collectd server uses. Otherwise, Fluent Bit may not be able to interpret the payload properly.

Docker Metrics

The docker input plugin allows you to collect Docker container metrics such as memory usage and CPU consumption.

Content:

Configuration Parameters

The plugin supports the following configuration parameters:

If you set neither Include nor Exclude, the plugin will try to get metrics from all the running containers.

Configuration File

Here is an example configuration that collects metrics from two docker instances (6bab19c3a0f9 and 14159be4ca2c).

This configuration will produce records like below.

Exec

The exec input plugin allows you to execute an external program and collect event logs from its output.

Container support

This plugin will not function in the distroless production images (currently AMD64), since it needs a functional /bin/sh, which is not present. It will function in the 1.8.12 and later -debug images, as well as the ARM production images, as these include a full shell.

Configuration Parameters

The plugin supports the following configuration parameters:

Getting Started

You can run the plugin from the command line or through the configuration file:

Command Line

The following example will read events from the output of ls.
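
A minimal invocation could look like this; the command property value (ls /var/log) is just an example target:

$ fluent-bit -i exec -p 'command=ls /var/log' -o stdout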

Configuration File

In your main configuration file append the following Input & Output sections:

CPU Metrics

The cpu input plugin measures the CPU usage of a process or, by default, the whole system (considering each CPU core). It reports values as percentages for every configured interval of time. At the moment this plugin is only available for Linux.

The following table describes the information generated by the plugin. The keys below represent the data used by the overall system; all values associated with the keys are percentages (0 to 100%):

The CPU metrics plugin creates metrics that are log-based (i.e. a JSON payload). If you are looking for Prometheus-based metrics, please see the Node Exporter Metrics input plugin.

In addition to the keys reported in the above table, similar content is created per CPU core. The cores are listed from 0 to N as the Kernel reports them:

Configuration Parameters

The plugin supports the following configuration parameters:

Getting Started

In order to get the statistics of the CPU usage of your system, you can run the plugin from the command line or through the configuration file:

Command Line

Configuration File

In your main configuration file append the following Input & Output sections:

If the http_server option has been enabled in the main [SERVICE] section, this option registers a new endpoint where internal metrics of the storage layer can be consumed. For more details refer to the section.

implements a unified networking interface that is exposed to components like plugins. This interface abstracts all the complexity of general I/O and is fully configurable.

Any component that uses TCP channels like HTTP or , can take advantage of this feature. For configuration purposes use the net.keepalive property.

has an Engine that helps to coordinate the data ingestion from input plugins and calls the Scheduler to decide when it is time to flush the data through one or multiple output plugins. The Scheduler flushes new data at a fixed interval of seconds and schedules retries when asked.

The following example set an alias to the INPUT section which is using the input plugin:

Fluent Bit's exposed can be leveraged to create dashboards and alerts.

The provided is heavily inspired by 's but with a few key differences such as the use of the instance label (see ), stacked graphs and a focus on Fluent Bit metrics.

Sample alerts are available .

is a hosted service that allows you to monitor your Fluent Bit agents including data flow, metrics and configurations.

Go to and sign-in

On the left menu click on and generate/copy your API key

If you want to get in touch with the Calyptia team, just send an email to

When the service is running, we can export to see the overall status of the data flow of the service. But there are other use cases where we would like to know the current status of the internals of the service, specifically to answer questions like what's the current status of the internal buffers? The Dump Internals feature is the answer.

If the plugin has been configured with , this entry will report whether the plugin is over the limit at the moment of the dump. If it is over the limit, it will print yes, otherwise no.

The HTTP_PROXY environment variable is a for setting a HTTP proxy in a containerized environment, and it is also natively supported by any application written in Go. Therefore, we follow and implement the same convention for Fluent Bit.

Note: HTTP proxy is also supported using the . This configuration continues to work, however it should not be used together with the HTTP_PROXY environment variable. This is because under the hood, the HTTP_PROXY environment variable based proxy support is implemented by setting up a TCP connection tunnel via . Unlike the plugin's implementation, this supports both HTTP and HTTPS egress traffic.

NO_PROXY is a comma-separated list of host names that shouldn't go through any proxy that is set (only an asterisk, *, matches all hosts), e.g. foo.com,bar.com. This is the same as the .

As an example, consider the following pipeline where your source of data is a normal file with JSON content, and then two filters: to exclude certain records and to alter the record content by adding and removing specific keys.

You may wish to test a logging pipeline locally to observe how it deals with log messages. The following is a walk-through for running Fluent Bit and Elasticsearch locally with which can serve as an example for testing other plugins locally.

Refer to the to create a configuration to test.

Use to run Fluent Bit (with the configuration file mounted) and Elasticsearch.

is a popular way to collect system level metrics from operating systems, such as CPU / Disk / Network / Process statistics. Fluent Bit 1.8.0 includes node exporter metrics plugin that builds off the Prometheus design to collect system level metrics without having to manage two separate processes or agents.

In the following configuration file, the input plugin node_exporter_metrics collects metrics every 2 seconds and exposes them through our output plugin on HTTP/TCP port 2021.

Our current plugin implements a subset of the collectors available in the original Prometheus Node Exporter. If you would like us to prioritize a specific collector, please open a Github issue by using the following template: -

The docker events input plugin uses the Docker API to capture server events. A complete list of possible events returned by this plugin can be found here.


As described above, the CPU input plugin gathers the overall usage every second and flushes the information to the output on the fifth second. In this example we used the stdout plugin to demonstrate the output records. In a real use-case you may want to flush this information to a central aggregator such as Fluentd or Elasticsearch.


| Key | Description | Default |
| --- | --- | --- |
| Unix_Path | The docker socket unix path | /var/run/docker.sock |
| Buffer_Size | The size of the buffer used to read docker events (in bytes) | 8192 |
| Parser | Specify the name of a parser to interpret the entry as a structured message. | None |
| Key | When a message is unstructured (no parser applied), it's appended as a string under the key name message. | message |
| Reconnect.Retry_limits | The maximum number of retries allowed. The plugin tries to reconnect with the docker socket when EOF is detected. | 5 |
| Reconnect.Retry_interval | The retrying interval. Unit is second. | 1 |

$ fluent-bit -i docker_events -o stdout
[INPUT]
    Name   docker_events

[OUTPUT]
    Name   stdout
    Match  *

| Key | Description |
| --- | --- |
| Dummy | Dummy JSON record. Default: {"message":"dummy"} |
| Start_time_sec | Dummy base timestamp in seconds. Default: 0 |
| Start_time_nsec | Dummy base timestamp in nanoseconds. Default: 0 |
| Rate | Events number generated per second. Default: 1 |
| Samples | If set, the events number will be limited. e.g. If Samples=3, the plugin only generates three events and stops. |

$ fluent-bit -i dummy -o stdout
Fluent Bit v1.x.x
* Copyright (C) 2019-2020 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2017/07/06 21:55:29] [ info] [engine] started
[0] dummy.0: [1499345730.015265366, {"message"=>"dummy"}]
[1] dummy.0: [1499345731.002371371, {"message"=>"dummy"}]
[2] dummy.0: [1499345732.000267932, {"message"=>"dummy"}]
[3] dummy.0: [1499345733.000757746, {"message"=>"dummy"}]
[INPUT]
    Name   dummy
    Tag    dummy.log

[OUTPUT]
    Name   stdout
    Match  *

| Key | Description | Default |
| --- | --- | --- |
| Listen | Set the address to listen to | 0.0.0.0 |
| Port | Set the port to listen to | 25826 |
| TypesDB | Set the data specification file | /usr/share/collectd/types.db |

[INPUT]
    Name         collectd
    Listen       0.0.0.0
    Port         25826
    TypesDB      /usr/share/collectd/types.db,/etc/collectd/custom.db

[OUTPUT]
    Name   stdout
    Match  *

| Key | Description | Default |
| --- | --- | --- |
| Interval_Sec | Polling interval in seconds | 1 |
| Include | A space-separated list of containers to include | |
| Exclude | A space-separated list of containers to exclude | |

[INPUT]
    Name         docker
    Include      6bab19c3a0f9 14159be4ca2c
[OUTPUT]
    Name   stdout
    Match  *
[1] docker.0: [1571994772.00555745, {"id"=>"6bab19c3a0f9", "name"=>"postgresql", "cpu_used"=>172102435, "mem_used"=>5693400, "mem_limit"=>4294963200}]
$ fluent-bit -i exec -p 'command=ls /var/log' -o stdout
Fluent Bit v1.x.x
* Copyright (C) 2019-2020 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2018/03/21 17:46:49] [ info] [engine] started
[0] exec.0: [1521622010.013470159, {"exec"=>"ConsoleKit"}]
[1] exec.0: [1521622010.013490313, {"exec"=>"Xorg.0.log"}]
[2] exec.0: [1521622010.013492079, {"exec"=>"Xorg.0.log.old"}]
[3] exec.0: [1521622010.013493443, {"exec"=>"anaconda.ifcfg.log"}]
[4] exec.0: [1521622010.013494707, {"exec"=>"anaconda.log"}]
[5] exec.0: [1521622010.013496016, {"exec"=>"anaconda.program.log"}]
[6] exec.0: [1521622010.013497225, {"exec"=>"anaconda.storage.log"}]
[INPUT]
    Name          exec
    Tag           exec_ls
    Command       ls /var/log
    Interval_Sec  1
    Interval_NSec 0
    Buf_Size      8mb
    Oneshot       false

[OUTPUT]
    Name   stdout
    Match  *

| Key | Description |
| --- | --- |
| cpu_p | CPU usage of the overall system; this value is the sum of time spent in user and kernel space. The result takes into account the number of CPU cores in the system. |
| user_p | CPU usage in User mode; in short, the CPU usage by user space programs. The result takes into account the number of CPU cores in the system. |
| system_p | CPU usage in Kernel mode; in short, the CPU usage by the Kernel. The result takes into account the number of CPU cores in the system. |
| cpuN.p_cpu | Represents the total CPU usage by core N. |
| cpuN.p_user | Total CPU spent in user mode or user space programs associated to this core. |
| cpuN.p_system | Total CPU spent in system or kernel mode associated to this core. |

The plugin supports the following configuration parameters:

| Key | Description | Default |
| --- | --- | --- |
| Interval_Sec | Polling interval in seconds | 1 |
| Interval_NSec | Polling interval in nanoseconds | 0 |
| PID | Specify the ID (PID) of a running process in the system. By default the plugin monitors the whole system, but if this option is set it will only monitor the given process ID. | |

$ build/bin/fluent-bit -i cpu -t my_cpu -o stdout -m '*'
Fluent Bit v1.x.x
* Copyright (C) 2019-2020 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2019/09/02 10:46:29] [ info] starting engine
[0] [1452185189, {"cpu_p"=>7.00, "user_p"=>5.00, "system_p"=>2.00, "cpu0.p_cpu"=>10.00, "cpu0.p_user"=>8.00, "cpu0.p_system"=>2.00, "cpu1.p_cpu"=>6.00, "cpu1.p_user"=>4.00, "cpu1.p_system"=>2.00}]
[1] [1452185190, {"cpu_p"=>6.50, "user_p"=>5.00, "system_p"=>1.50, "cpu0.p_cpu"=>6.00, "cpu0.p_user"=>5.00, "cpu0.p_system"=>1.00, "cpu1.p_cpu"=>7.00, "cpu1.p_user"=>5.00, "cpu1.p_system"=>2.00}]
[2] [1452185191, {"cpu_p"=>7.50, "user_p"=>5.00, "system_p"=>2.50, "cpu0.p_cpu"=>7.00, "cpu0.p_user"=>3.00, "cpu0.p_system"=>4.00, "cpu1.p_cpu"=>6.00, "cpu1.p_user"=>6.00, "cpu1.p_system"=>0.00}]
[3] [1452185192, {"cpu_p"=>4.50, "user_p"=>3.50, "system_p"=>1.00, "cpu0.p_cpu"=>6.00, "cpu0.p_user"=>5.00, "cpu0.p_system"=>1.00, "cpu1.p_cpu"=>5.00, "cpu1.p_user"=>3.00, "cpu1.p_system"=>2.00}]
[INPUT]
    Name cpu
    Tag  my_cpu

[OUTPUT]
    Name  stdout
    Match *

Kernel Logs

The kmsg input plugin reads the Linux Kernel log buffer from the beginning; it gets every record and parses its fields as priority, sequence, seconds, useconds, and message.

Getting Started

In order to start getting the Linux Kernel messages, you can run the plugin from the command line or through the configuration file:

Command Line

$ bin/fluent-bit -i kmsg -t kernel -o stdout -m '*'
Fluent Bit v1.x.x
* Copyright (C) 2019-2020 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[0] kernel: [1463421823, {"priority"=>3, "sequence"=>1814, "sec"=>11706, "usec"=>732233, "msg"=>"ERROR @wl_cfg80211_get_station : Wrong Mac address, mac = 34:a8:4e:d3:40:ec profile =20:3a:07:9e:4a:ac"}]
[1] kernel: [1463421823, {"priority"=>3, "sequence"=>1815, "sec"=>11706, "usec"=>732300, "msg"=>"ERROR @wl_cfg80211_get_station : Wrong Mac address, mac = 34:a8:4e:d3:40:ec profile =20:3a:07:9e:4a:ac"}]
[2] kernel: [1463421829, {"priority"=>3, "sequence"=>1816, "sec"=>11712, "usec"=>729728, "msg"=>"ERROR @wl_cfg80211_get_station : Wrong Mac address, mac = 34:a8:4e:d3:40:ec profile =20:3a:07:9e:4a:ac"}]
[3] kernel: [1463421829, {"priority"=>3, "sequence"=>1817, "sec"=>11712, "usec"=>729802, "msg"=>"ERROR @wl_cfg80211_get_station : Wrong Mac address, mac = 34:a8:4e:d3:40:ec
...

As described above, the plugin processed all messages that the Linux Kernel reported; the output above has been truncated for brevity.

Configuration File

In your main configuration file append the following Input & Output sections:

[INPUT]
    Name   kmsg
    Tag    kernel

[OUTPUT]
    Name   stdout
    Match  *

Health

The Health input plugin allows you to check how healthy a TCP server is. It does the check by issuing a TCP connection at a fixed interval of time.

Configuration Parameters

The plugin supports the following configuration parameters:

| Key | Description |
| --- | --- |
| Host | Name of the target host or IP address to check. |
| Port | TCP port where to perform the connection check. |
| Interval_Sec | Interval in seconds between the service checks. Default value is 1. |
| Interval_NSec | Specify a nanoseconds interval for service checks; it works in conjunction with the Interval_Sec configuration key. Default value is 0. |
| Alert | If enabled, it will only generate messages if the target TCP service is down. By default this option is disabled. |
| Add_Host | If enabled, the hostname is appended to each record. Default value is false. |
| Add_Port | If enabled, the port number is appended to each record. Default value is false. |

Getting Started

In order to start performing the checks, you can run the plugin from the command line or through the configuration file:

Command Line

From the command line you can let Fluent Bit generate the checks with the following options:

$ fluent-bit -i health -p host=127.0.0.1 -p port=80 -o stdout

Configuration File

In your main configuration file append the following Input & Output sections:

[INPUT]
    Name          health
    Host          127.0.0.1
    Port          80
    Interval_Sec  1
    Interval_NSec 0

[OUTPUT]
    Name   stdout
    Match  *

Testing

Once Fluent Bit is running, you will see reports in the output interface similar to this:

$ fluent-bit -i health -p host=127.0.0.1 -p port=80 -o stdout
Fluent Bit v1.8.0
* Copyright (C) 2019-2021 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2021/06/20 08:39:47] [ info] [engine] started (pid=4621)
[2021/06/20 08:39:47] [ info] [storage] version=1.1.1, initializing...
[2021/06/20 08:39:47] [ info] [storage] in-memory
[2021/06/20 08:39:47] [ info] [storage] normal synchronization mode, checksum disabled, max_chunks_up=128
[2021/06/20 08:39:47] [ info] [sp] stream processor started
[0] health.0: [1624145988.305640385, {"alive"=>true}]
[1] health.0: [1624145989.305575360, {"alive"=>true}]
[2] health.0: [1624145990.306498573, {"alive"=>true}]
[3] health.0: [1624145991.305595498, {"alive"=>true}]

Head

The head input plugin allows reading events from the head of a file. Its behavior is similar to the head command.

Configuration Parameters

The plugin supports the following configuration parameters:

| Key | Description |
| --- | --- |
| File | Absolute path to the target file, e.g: /proc/uptime |
| Buf_Size | Buffer size to read the file. |
| Interval_Sec | Polling interval (seconds). |
| Interval_NSec | Polling interval (nanoseconds). |
| Add_Path | If enabled, the file path is appended to each record. Default value is false. |
| Key | Rename a key. Default: head. |
| Lines | Line number to read. If the number N is set, in_head reads the first N lines like head(1) -n. |
| Split_line | If enabled, in_head generates a key-value pair per line. |

Split Line Mode

This mode is useful to get a specific line. This is an example to get CPU frequency from /proc/cpuinfo.

/proc/cpuinfo is a special file to get cpu information.

processor    : 0
vendor_id    : GenuineIntel
cpu family   : 6
model        : 42
model name   : Intel(R) Core(TM) i7-2640M CPU @ 2.80GHz
stepping     : 7
microcode    : 41
cpu MHz      : 2791.009
cache size   : 4096 KB
physical id  : 0
siblings     : 1

Cpu frequency is "cpu MHz : 2791.009". We can get the line with this configuration file.

[INPUT]
    Name           head
    Tag            head.cpu
    File           /proc/cpuinfo
    Lines          8
    Split_line     true
    # {"line0":"processor    : 0", "line1":"vendor_id    : GenuineIntel" ...}

[FILTER]
    Name           record_modifier
    Match          *
    Whitelist_key  line7

[OUTPUT]
    Name           stdout
    Match          *

Output is

$ bin/fluent-bit -c head.conf 
Fluent Bit v1.x.x
* Copyright (C) 2019-2020 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2017/06/26 22:38:24] [ info] [engine] started
[0] head.cpu: [1498484305.000279805, {"line7"=>"cpu MHz        : 2791.009"}]
[1] head.cpu: [1498484306.011680137, {"line7"=>"cpu MHz        : 2791.009"}]
[2] head.cpu: [1498484307.010042482, {"line7"=>"cpu MHz        : 2791.009"}]
[3] head.cpu: [1498484308.008447978, {"line7"=>"cpu MHz        : 2791.009"}]

Getting Started

In order to read the head of a file, you can run the plugin from the command line or through the configuration file:

Command Line

The following example will read events from the /proc/uptime file, tag the records with the uptime name and flush them back to the stdout plugin:

$ fluent-bit -i head -t uptime -p File=/proc/uptime -o stdout -m '*'
Fluent Bit v1.x.x
* Copyright (C) 2019-2020 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2016/05/17 21:53:54] [ info] starting engine
[0] uptime: [1463543634, {"head"=>"133517.70 194870.97"}]
[1] uptime: [1463543635, {"head"=>"133518.70 194872.85"}]
[2] uptime: [1463543636, {"head"=>"133519.70 194876.63"}]
[3] uptime: [1463543637, {"head"=>"133520.70 194879.72"}]

Configuration File

In your main configuration file append the following Input & Output sections:

[INPUT]
    Name          head
    Tag           uptime
    File          /proc/uptime
    Buf_Size      256
    Interval_Sec  1
    Interval_NSec 0

[OUTPUT]
    Name   stdout
    Match  *

Note: Total interval (sec) = Interval_Sec + (Interval_Nsec / 1000000000).

e.g. 1.5s = 1s + 500000000ns

MQTT

The MQTT input plugin allows retrieving messages/data from MQTT control packets over a TCP connection. The incoming data must be a JSON map.

Configuration Parameters

The plugin supports the following configuration parameters:

| Key | Description |
| --- | --- |
| Listen | Listener network interface, default: 0.0.0.0 |
| Port | TCP port where listening for connections, default: 1883 |

Getting Started

In order to start listening for MQTT messages, you can run the plugin from the command line or through the configuration file:

Command Line

Since the MQTT input plugin lets Fluent Bit behave as a server, we need to dispatch some messages using an MQTT client. In the following example, the mosquitto tool is used for this purpose:

$ fluent-bit -i mqtt -t data -o stdout -m '*'
Fluent Bit v1.x.x
* Copyright (C) 2019-2020 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2016/05/20 14:22:52] [ info] starting engine
[0] data: [1463775773, {"topic"=>"some/topic", "key1"=>123, "key2"=>456}]

The following command line will send a message to the MQTT input plugin:

$ mosquitto_pub  -m '{"key1": 123, "key2": 456}' -t some/topic

Configuration File

In your main configuration file append the following Input & Output sections:

[INPUT]
    Name   mqtt
    Tag    data
    Listen 0.0.0.0
    Port   1883

[OUTPUT]
    Name   stdout
    Match  *

| Key | Description |
| --- | --- |
| Command | The command to execute. |
| Parser | Specify the name of a parser to interpret the entry as a structured message. |
| Interval_Sec | Polling interval (seconds). |
| Interval_NSec | Polling interval (nanoseconds). |
| Buf_Size | Size of the buffer (check unit sizes for allowed values). |
| Oneshot | Only run once at startup. This allows collection of data prior to Fluent Bit's startup (bool, default: false). |

Forward

Forward is the protocol used by Fluent Bit and Fluentd to route messages between peers. This plugin implements the input service to listen for Forward messages.

Configuration Parameters

The plugin supports the following configuration parameters:

| Key | Description | Default |
| --- | --- | --- |
| Listen | Listener network interface. | 0.0.0.0 |
| Port | TCP port to listen for incoming connections. | 24224 |
| Unix_Path | Specify the path to a unix socket to receive Forward messages. If set, Listen and Port are ignored. | |
| Buffer_Max_Size | Specify the maximum buffer memory size used to receive a Forward message. The value must be according to the unit sizes specification. | 6144000 |
| Buffer_Chunk_Size | By default the buffer to store the incoming Forward messages does not allocate the maximum memory allowed; instead it allocates memory when required. The rounds of allocations are set by Buffer_Chunk_Size. The value must be according to the unit sizes specification. | 1024000 |
| Tag_Prefix | Prefix incoming tag with the defined value. | |

Getting Started

In order to receive Forward messages, you can run the plugin from the command line or through the configuration file as shown in the following examples.

Command Line

From the command line you can let Fluent Bit listen for Forward messages with the following options:

$ fluent-bit -i forward -o stdout

By default the service will listen on all interfaces (0.0.0.0) through TCP port 24224; optionally you can change this directly, e.g.:

$ fluent-bit -i forward -p listen="192.168.3.2" -p port=9090 -o stdout

In this example, Forward messages will only arrive through the network interface with the 192.168.3.2 address and TCP port 9090.

Configuration File

In your main configuration file append the following Input & Output sections:

[INPUT]
    Name              forward
    Listen            0.0.0.0
    Port              24224
    Buffer_Chunk_Size 1M
    Buffer_Max_Size   6M

[OUTPUT]
    Name   stdout
    Match  *

Testing

Once Fluent Bit is running, you can send some messages using the fluent-cat tool (this tool is provided by Fluentd):

$ echo '{"key 1": 123456789, "key 2": "abcdefg"}' | fluent-cat my_tag

In Fluent Bit we should see the following output:

$ bin/fluent-bit -i forward -o stdout
Fluent-Bit v0.9.0
Copyright (C) Treasure Data

[2016/10/07 21:49:40] [ info] [engine] started
[2016/10/07 21:49:40] [ info] [in_fw] binding 0.0.0.0:24224
[0] my_tag: [1475898594, {"key 1"=>123456789, "key 2"=>"abcdefg"}]

Fluent Bit Metrics

A plugin to collect Fluent Bit's own metrics. Fluent Bit exposes its own metrics to allow you to monitor the internals of your pipeline. The collected metrics can be processed similarly to those from the Prometheus Node Exporter input plugin, and they can be sent to output plugins including Prometheus Exporter or Prometheus Remote Write.

Important note: Metrics collected with Node Exporter Metrics flow through a separate pipeline from logs and current filters do not operate on top of metrics.

Configuration

| Key | Description | Default |
| --- | --- | --- |
| scrape_interval | The rate at which metrics are collected from the host operating system | 2 seconds |
| scrape_on_start | Scrape metrics upon start, useful to avoid waiting for 'scrape_interval' for the first round of metrics. | false |

Getting Started

Simple Configuration File

# Fluent Bit Metrics + Prometheus Exporter
# -------------------------------------------
# The following example collects Fluent Bit metrics and exposes
# them through a Prometheus HTTP end-point.
#
# After starting the service try it with:
#
# $ curl http://127.0.0.1:2021/metrics
#
[SERVICE]
    flush           1
    log_level       info

[INPUT]
    name            fluentbit_metrics
    tag             internal_metrics
    scrape_interval 2

[OUTPUT]
    name            prometheus_exporter
    match           internal_metrics
    host            0.0.0.0
    port            2021

You can test the exposed metrics by using curl:

curl http://127.0.0.1:2021/metrics

HTTP

The HTTP input plugin allows you to send custom records to an HTTP endpoint.

Configuration Parameters

| Key | Description | Default |
| --- | --- | --- |
| host | The address to listen on | 0.0.0.0 |
| port | The port for Fluent Bit to listen on | 9880 |
| buffer_max_size | Specify the maximum buffer size in KB to receive a JSON message. | 4M |
| buffer_chunk_size | This sets the chunk size for incoming JSON messages. These chunks are then stored/managed in the space available by buffer_max_size. | 512K |

Getting Started

The http input plugin allows Fluent Bit to open up an HTTP port that you can then route data to in a dynamic way. This plugin supports dynamic tags, which allow you to send data with different tags through the same input. An example video and curl message can be seen below.

How to set tag

The tag for the HTTP input plugin is set by adding the tag to the end of the request URL. This tag is then used to route the event through the system. For example, in the following curl message below the tag set is app.log. If you do not set the tag, http.0 is automatically used. If you have multiple HTTP inputs then they will follow a pattern of http.N, where N is an integer representing the input.

Example Curl message

curl -d @app.log -XPOST -H "content-type: application/json" http://localhost:8888/app.log

Configuration File

[INPUT]
    name http
    host 0.0.0.0
    port 8888

[OUTPUT]
    name stdout
    match *

Command Line

$> fluent-bit -i http -p port=8888 -o stdout

Process Metrics

The Process input plugin allows you to check how healthy a process is. It does so by performing a service check at a fixed interval of time specified by the user.

The Process metrics plugin creates metrics that are log-based (i.e. a JSON payload). If you are looking for Prometheus-based metrics, please see the Node Exporter Metrics input plugin.

Configuration Parameters

The plugin supports the following configuration parameters:

Getting Started

In order to start performing the checks, you can run the plugin from the command line or through the configuration file:

The following example will check the health of crond process.

Configuration File

In your main configuration file append the following Input & Output sections:

Testing

Once Fluent Bit is running, you will see the health of the process:

Standard Input

The stdin plugin allows retrieving valid JSON text messages over the standard input interface (stdin). In order to use it, specify the plugin name as the input, e.g.:

As input data the stdin plugin recognizes the following JSON data formats:

Configuration Parameters

The plugin supports the following configuration parameters:

StatsD

The statsd input plugin allows you to receive metrics via StatsD protocol.

Content:

Configuration Parameters

The plugin supports the following configuration parameters:

Configuration Examples

Here is a configuration example.

Now you can input metrics through the UDP port as follows:

Fluent Bit will produce the following records:

Memory Metrics

The mem input plugin gathers information about the memory and swap usage of the running system at a fixed interval of time and reports the total amount of memory and the amount of free memory available.

Getting Started

In order to get memory and swap usage from your system, you can run the plugin from the command line or through the configuration file:

Command Line

Configuration File

In your main configuration file append the following Input & Output sections:

Network I/O Metrics

The netif input plugin gathers network traffic information of the running system at a fixed interval of time and reports it.

The Network I/O Metrics plugin creates metrics that are log-based (i.e. a JSON payload). If you are looking for Prometheus-based metrics, please see the Node Exporter Metrics input plugin.

Configuration Parameters

The plugin supports the following configuration parameters:

Getting Started

In order to monitor network traffic from your system, you can run the plugin from the command line or through the configuration file:

Command Line

Configuration File

In your main configuration file append the following Input & Output sections:

Note: Total interval (sec) = Interval_Sec + (Interval_Nsec / 1000000000).

e.g. 1.5s = 1s + 500000000ns

Random

The Random input plugin generates very simple random value samples using the device interface /dev/urandom; if it is not available, it will use a unix timestamp as the value.

Configuration Parameters

The plugin supports the following configuration parameters:

Getting Started

In order to start generating random samples, you can run the plugin from the command line or through the configuration file:

Command Line

From the command line you can let Fluent Bit generate the samples with the following options:

Configuration File

In your main configuration file append the following Input & Output sections:

Testing

Once Fluent Bit is running, you will see the reports in the output interface similar to this:

Serial Interface

The serial input plugin allows retrieving messages/data from a Serial interface.

Configuration Parameters

Getting Started

In order to retrieve messages over the Serial interface, you can run the plugin from the command line or through the configuration file:

Command Line

The following example loads the serial input plugin, sets a Bitrate of 9600, listens on the /dev/tnt0 interface and uses the custom tag data to route the message.

The above interface (/dev/tnt0) is an emulation of the serial interface (more details at the bottom); for demonstration purposes we will write a message to the other end of the interface, in this case /dev/tnt1, e.g.:

In Fluent Bit you should see an output like this:

Now using the Separator configuration, we could send multiple messages at once (run this command after starting Fluent Bit):

Configuration File

In your main configuration file append the following Input & Output sections:

Emulating Serial Interface on Linux

The following content is some extra information that will allow you to emulate a serial interface on your Linux system, so you can test this Serial input plugin locally in case you don't have such an interface in your computer. The following procedure has been tested on Ubuntu 15.04 running a Linux Kernel 4.0.

Build and install the tty0tty module

Download the sources

Unpack and compile

Copy the new kernel module into the kernel modules directory

Load the module

You should see new serial ports in /dev/ (ls /dev/tnt*). Give appropriate permissions to the new serial ports:

When the module is loaded, it will interconnect the following virtual interfaces:


| Key | Description |
| --- | --- |
| Proc_Name | Name of the target Process to check. |
| Interval_Sec | Interval in seconds between the service checks. Default value is 1. |
| Interval_Nsec | Specify a nanoseconds interval for service checks; it works in conjunction with the Interval_Sec configuration key. Default value is 0. |
| Alert | If enabled, it will only generate messages if the target process is down. By default this option is disabled. |
| Fd | If enabled, the number of file descriptors is appended to each record. Default value is true. |
| Mem | If enabled, memory usage of the process is appended to each record. Default value is true. |

$ fluent-bit -i proc -p proc_name=crond -o stdout
[INPUT]
    Name          proc
    Proc_Name     crond
    Interval_Sec  1
    Interval_NSec 0
    Fd            true
    Mem           true

[OUTPUT]
    Name   stdout
    Match  *
$ fluent-bit -i proc -p proc_name=fluent-bit -o stdout
Fluent Bit v1.x.x
* Copyright (C) 2019-2020 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2017/01/30 21:44:56] [ info] [engine] started
[0] proc.0: [1485780297, {"alive"=>true, "proc_name"=>"fluent-bit", "pid"=>10964, "mem.VmPeak"=>14740000, "mem.VmSize"=>14740000, "mem.VmLck"=>0, "mem.VmHWM"=>1120000, "mem.VmRSS"=>1120000, "mem.VmData"=>2276000, "mem.VmStk"=>88000, "mem.VmExe"=>1768000, "mem.VmLib"=>2328000, "mem.VmPTE"=>68000, "mem.VmSwap"=>0, "fd"=>18}]
[1] proc.0: [1485780298, {"alive"=>true, "proc_name"=>"fluent-bit", "pid"=>10964, "mem.VmPeak"=>14740000, "mem.VmSize"=>14740000, "mem.VmLck"=>0, "mem.VmHWM"=>1148000, "mem.VmRSS"=>1148000, "mem.VmData"=>2276000, "mem.VmStk"=>88000, "mem.VmExe"=>1768000, "mem.VmLib"=>2328000, "mem.VmPTE"=>68000, "mem.VmSwap"=>0, "fd"=>18}]
[2] proc.0: [1485780299, {"alive"=>true, "proc_name"=>"fluent-bit", "pid"=>10964, "mem.VmPeak"=>14740000, "mem.VmSize"=>14740000, "mem.VmLck"=>0, "mem.VmHWM"=>1152000, "mem.VmRSS"=>1148000, "mem.VmData"=>2276000, "mem.VmStk"=>88000, "mem.VmExe"=>1768000, "mem.VmLib"=>2328000, "mem.VmPTE"=>68000, "mem.VmSwap"=>0, "fd"=>18}]
[3] proc.0: [1485780300, {"alive"=>true, "proc_name"=>"fluent-bit", "pid"=>10964, "mem.VmPeak"=>14740000, "mem.VmSize"=>14740000, "mem.VmLck"=>0, "mem.VmHWM"=>1152000, "mem.VmRSS"=>1148000, "mem.VmData"=>2276000, "mem.VmStk"=>88000, "mem.VmExe"=>1768000, "mem.VmLib"=>2328000, "mem.VmPTE"=>68000, "mem.VmSwap"=>0, "fd"=>18}]
$ fluent-bit -i stdin -o stdout

1. { map => val, map => val, map => val }
2. [ time, { map => val, map => val, map => val } ]

A better example to demonstrate how it works is through a Bash script that generates messages and writes them to Fluent Bit. Write the following content in a file named test.sh:

#!/bin/sh

while :; do
  echo -n "{\"key\": \"some value\"}"
  sleep 1
done

Give the script execution permission:

$ chmod 755 test.sh

Now let's start the script and Fluent Bit in the following way:

$ ./test.sh | fluent-bit -i stdin -o stdout
Fluent Bit v1.x.x
* Copyright (C) 2019-2020 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2016/10/07 21:44:46] [ info] [engine] started
[0] stdin.0: [1475898286, {"key"=>"some value"}]
[1] stdin.0: [1475898287, {"key"=>"some value"}]
[2] stdin.0: [1475898288, {"key"=>"some value"}]
[3] stdin.0: [1475898289, {"key"=>"some value"}]
[4] stdin.0: [1475898290, {"key"=>"some value"}]

| Key | Description | Default |
| --- | --- | --- |
| Listen | Listener network interface. | 0.0.0.0 |
| Port | UDP port where listening for connections | 8125 |

[INPUT]
    Name   statsd
    Listen 0.0.0.0
    Port   8125

[OUTPUT]
    Name   stdout
    Match  *
echo "click:10|c|@0.1" | nc -q0 -u 127.0.0.1 8125
echo "active:99|g"     | nc -q0 -u 127.0.0.1 8125
[0] statsd.0: [1574905088.971380537, {"type"=>"counter", "bucket"=>"click", "value"=>10.000000, "sample_rate"=>0.100000}]
[0] statsd.0: [1574905141.863344517, {"type"=>"gauge", "bucket"=>"active", "value"=>99.000000, "incremental"=>0}]
$ fluent-bit -i mem -t memory -o stdout -m '*'
Fluent Bit v1.x.x
* Copyright (C) 2019-2020 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2017/03/03 21:12:35] [ info] [engine] started
[0] memory: [1488543156, {"Mem.total"=>1016044, "Mem.used"=>841388, "Mem.free"=>174656, "Swap.total"=>2064380, "Swap.used"=>139888, "Swap.free"=>1924492}]
[1] memory: [1488543157, {"Mem.total"=>1016044, "Mem.used"=>841420, "Mem.free"=>174624, "Swap.total"=>2064380, "Swap.used"=>139888, "Swap.free"=>1924492}]
[2] memory: [1488543158, {"Mem.total"=>1016044, "Mem.used"=>841420, "Mem.free"=>174624, "Swap.total"=>2064380, "Swap.used"=>139888, "Swap.free"=>1924492}]
[3] memory: [1488543159, {"Mem.total"=>1016044, "Mem.used"=>841420, "Mem.free"=>174624, "Swap.total"=>2064380, "Swap.used"=>139888, "Swap.free"=>1924492}]
[INPUT]
    Name   mem
    Tag    memory

[OUTPUT]
    Name   stdout
    Match  *

| Key | Description |
| --- | --- |
| Interface | Specify the network interface to monitor. e.g. eth0 |
| Interval_Sec | Polling interval (seconds). default: 1 |
| Interval_NSec | Polling interval (nanoseconds). default: 0 |
| Verbose | If true, gather metrics precisely. default: false |

$ bin/fluent-bit -i netif -p interface=eth0 -o stdout
Fluent Bit v1.x.x
* Copyright (C) 2019-2020 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2017/07/08 23:34:18] [ info] [engine] started
[0] netif.0: [1499524459.001698260, {"eth0.rx.bytes"=>89769869, "eth0.rx.packets"=>73357, "eth0.rx.errors"=>0, "eth0.tx.bytes"=>4256474, "eth0.tx.packets"=>24293, "eth0.tx.errors"=>0}]
[1] netif.0: [1499524460.002541885, {"eth0.rx.bytes"=>98, "eth0.rx.packets"=>1, "eth0.rx.errors"=>0, "eth0.tx.bytes"=>98, "eth0.tx.packets"=>1, "eth0.tx.errors"=>0}]
[2] netif.0: [1499524461.001142161, {"eth0.rx.bytes"=>98, "eth0.rx.packets"=>1, "eth0.rx.errors"=>0, "eth0.tx.bytes"=>98, "eth0.tx.packets"=>1, "eth0.tx.errors"=>0}]
[3] netif.0: [1499524462.002612971, {"eth0.rx.bytes"=>98, "eth0.rx.packets"=>1, "eth0.rx.errors"=>0, "eth0.tx.bytes"=>98, "eth0.tx.packets"=>1, "eth0.tx.errors"=>0}]
[INPUT]
    Name          netif
    Tag           netif
    Interval_Sec  1
    Interval_NSec 0
    Interface     eth0
[OUTPUT]
    Name   stdout
    Match  *

| Key | Description |
| --- | --- |
| Samples | If set, it will only generate a specific number of samples. By default this value is set to -1, which will generate unlimited samples. |
| Interval_Sec | Interval in seconds between samples generation. Default value is 1. |
| Interval_NSec | Specify a nanoseconds interval for samples generation; it works in conjunction with the Interval_Sec configuration key. Default value is 0. |

$ fluent-bit -i random -o stdout
[INPUT]
    Name          random
    Samples      -1
    Interval_Sec  1
    Interval_NSec 0

[OUTPUT]
    Name   stdout
    Match  *
$ fluent-bit -i random -o stdout
Fluent Bit v1.x.x
* Copyright (C) 2019-2020 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2016/10/07 20:27:34] [ info] [engine] started
[0] random.0: [1475893654, {"rand_value"=>1863375102915681408}]
[1] random.0: [1475893655, {"rand_value"=>425675645790600970}]
[2] random.0: [1475893656, {"rand_value"=>7580417447354808203}]
[3] random.0: [1475893657, {"rand_value"=>1501010137543905482}]
[4] random.0: [1475893658, {"rand_value"=>16238242822364375212}]

| Key | Description |
| --- | --- |
| File | Absolute path to the device entry, e.g: /dev/ttyS0 |
| Bitrate | The bitrate for the communication, e.g: 9600, 38400, 115200, etc |
| Min_Bytes | The serial interface will expect at least Min_Bytes to be available before processing the message (default: 1) |
| Separator | Allows to specify a separator string that's used to determine when a message ends. |
| Format | Specify the format of the incoming data stream. The only option available is 'json'. Note that Format and Separator cannot be used at the same time. |

$ fluent-bit -i serial -t data -p File=/dev/tnt0 -p BitRate=9600 -o stdout -m '*'
$ echo 'this is some message' > /dev/tnt1
$ fluent-bit -i serial -t data -p File=/dev/tnt0 -p BitRate=9600 -o stdout -m '*'
Fluent Bit v1.x.x
* Copyright (C) 2019-2020 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2016/05/20 15:44:39] [ info] starting engine
[0] data: [1463780680, {"msg"=>"this is some message"}]
$ echo 'aaXbbXccXddXee' > /dev/tnt1
$ fluent-bit -i serial -t data -p File=/dev/tnt0 -p BitRate=9600 -p Separator=X -o stdout -m '*'
Fluent-Bit v0.8.0
Copyright (C) Treasure Data

[2016/05/20 16:04:51] [ info] starting engine
[0] data: [1463781902, {"msg"=>"aa"}]
[1] data: [1463781902, {"msg"=>"bb"}]
[2] data: [1463781902, {"msg"=>"cc"}]
[3] data: [1463781902, {"msg"=>"dd"}]
[INPUT]
    Name      serial
    Tag       data
    File      /dev/tnt0
    BitRate   9600
    Separator X

[OUTPUT]
    Name   stdout
    Match  *
$ git clone https://github.com/freemed/tty0tty
$ cd tty0tty/module
$ make
$ sudo cp tty0tty.ko /lib/modules/$(uname -r)/kernel/drivers/misc/
$ sudo depmod
$ sudo modprobe tty0tty
$ sudo chmod 666 /dev/tnt*
/dev/tnt0 <=> /dev/tnt1
/dev/tnt2 <=> /dev/tnt3
/dev/tnt4 <=> /dev/tnt5
/dev/tnt6 <=> /dev/tnt7

Buffer_Size

16k

Tail

The tail input plugin allows monitoring one or several text files. It has a behavior similar to the tail -f shell command.

The plugin reads every matched file in the Path pattern and for every new line found (separated by a newline character, \n), it generates a new record. Optionally a database file can be used so the plugin can keep a history of tracked files and a state of offsets; this is very useful to resume a state if the service is restarted.

Configuration Parameters

The plugin supports the following configuration parameters:

Key
Description
Default

Buffer_Chunk_Size

32k

Buffer_Max_Size

32k

Path

Pattern specifying a specific log file or multiple ones through the use of common wildcards. Multiple patterns separated by commas are also allowed.

Path_Key

If enabled, it appends the name of the monitored file as part of the record. The value assigned becomes the key in the map.

Exclude_Path

Set one or multiple shell patterns separated by commas to exclude files matching certain criteria, e.g: Exclude_Path *.gz,*.zip

Offset_Key

If enabled, Fluent Bit appends the offset of the current monitored file as part of the record. The value assigned becomes the key in the map

Read_from_Head

For new discovered files on start (without a database offset/position), read the content from the head of the file, not tail.

False

Refresh_Interval

The interval of refreshing the list of watched files in seconds.

60

Rotate_Wait

Specify the amount of extra time, in seconds, to keep monitoring a file once it is rotated, in case some pending data needs to be flushed.

5

Ignore_Older

Ignores files whose modification date is older than this time in seconds. Supports m, h, d (minutes, hours, days) syntax.

Skip_Long_Lines

When a monitored file reaches its buffer capacity due to a very long line (Buffer_Max_Size), the default behavior is to stop monitoring that file. Skip_Long_Lines alters that behavior and instructs Fluent Bit to skip long lines and continue processing other lines that fit into the buffer size.

Off

Skip_Empty_Lines

Skips empty lines in the log file from any further processing or output.

Off

DB

Specify the database file to keep track of monitored files and offsets.

DB.sync

normal

DB.locking

Specify that the database will be accessed only by Fluent Bit. Enabling this feature helps to increase performance when accessing the database, but it restricts any external tool from querying the content.

false

DB.journal_mode

sets the journal mode for databases (WAL). Enabling WAL provides higher performance. Note that WAL is not compatible with shared network file systems.

WAL

Mem_Buf_Limit

Set a limit on the amount of memory the Tail plugin can use when appending data to the Engine. If the limit is reached, it will be paused; when the data is flushed it resumes.

Exit_On_Eof

When reading a file, exit as soon as the end of the file is reached. Useful for bulk loading and tests.

false

Parser

Specify the name of a parser to interpret the entry as a structured message.

Key

When a message is unstructured (no parser applied), it's appended as a string under the key name log. This option allows to define an alternative name for that key.

log

Inotify_Watcher

Set to false to use file stat watcher instead of inotify.

true

Tag

Tag_Regex

Set a regex to extract fields from the file name. E.g. (?<pod_name>[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*)_(?<namespace_name>[^_]+)_(?<container_name>.+)-

Static_Batch_Size

Set the maximum number of bytes to process per iteration for the monitored static files (files that already exists upon Fluent Bit start).

50M

Note that if the database parameter DB is not specified, by default the plugin will start reading each target file from the beginning. This might also cause some unwanted behavior: for example, when a line is bigger than Buffer_Chunk_Size and Skip_Long_Lines is not turned on, the file will be read from the beginning at each Refresh_Interval until the file is rotated.

Multiline Support

Starting from Fluent Bit v1.8 we have introduced new Multiline core functionality. For the Tail input plugin, this means that it now supports the old configuration mechanism as well as the new one. In order to avoid breaking changes, we will keep both but encourage our users to use the latest one. We will refer to the two mechanisms as:

  • Multiline Core

  • Old Multiline

Multiline Core (v1.8)

The new multiline core is exposed by the following configuration:

| Key | Description |
| --- | --- |
| multiline.parser | Specify one or multiple multiline parser definitions to apply to the content. |

When the new multiline.parser option is used, the following old configuration options should not be set at the same time:

  • parser

  • parser_firstline

  • parser_N

  • multiline

  • multiline_flush

  • docker_mode

Multiline and Containers (v1.8)

If you are running Fluent Bit to process logs coming from containers like Docker or CRI, you can use the new built-in modes for such purposes. This will help to reassemble multiline messages originally split by Docker or CRI:

[INPUT]
    name              tail
    path              /var/log/containers/*.log
    multiline.parser  docker, cri

The two options separated by a comma mean multi-format: try the docker and cri multiline formats.

We are still working on extending support to do multiline for nested stack traces and such. Over the Fluent Bit v1.8.x release cycle we will be updating the documentation.

Old Multiline Configuration Parameters

For the old multiline configuration, the following options exist to configure the handling of multilines logs:

| Key | Description | Default |
| --- | --- | --- |
| Multiline | If enabled, the plugin will try to discover multiline messages and use the proper parsers to compose the outgoing messages. Note that when this option is enabled the Parser option is not used. | Off |
| Multiline_Flush | Wait period time in seconds to process queued multiline messages | 4 |
| Parser_Firstline | Name of the parser that matches the beginning of a multiline message. Note that the regular expression defined in the parser must include a group name (named capture), and the value of the last match group must be a string. | |
| Parser_N | Optional extra parser to interpret and structure multiline entries. This option can be used to define multiple parsers, e.g: Parser_1 ab1, Parser_2 ab2, Parser_N abN. | |

Old Docker Mode Configuration Parameters

Docker mode exists to recombine JSON log lines split by the Docker daemon due to its line length limit. To use this feature, configure the tail plugin with the corresponding parser and then enable Docker mode:

| Key | Description | Default |
| --- | --- | --- |
| Docker_Mode | If enabled, the plugin will recombine split Docker log lines before passing them to any parser as configured above. This mode cannot be used at the same time as Multiline. | Off |
| Docker_Mode_Flush | Wait period time in seconds to flush queued unfinished split lines. | 4 |
| Docker_Mode_Parser | Specify an optional parser for the first line of the docker multiline mode. The parser name to be specified must be registered in the parsers.conf file. | |
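A minimal sketch combining these options (it assumes a docker JSON parser; in practice the [PARSER] block usually lives in parsers.conf):

# Typically placed in parsers.conf and referenced from [SERVICE]
[PARSER]
    Name         docker
    Format       json
    Time_Key     time
    Time_Format  %Y-%m-%dT%H:%M:%S.%L

[INPUT]
    Name              tail
    Path              /var/log/containers/*.log
    Docker_Mode       On
    Docker_Mode_Flush 4
    Parser            docker

[OUTPUT]
    Name   stdout
    Match  *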

Getting Started

In order to tail text or log files, you can run the plugin from the command line or through the configuration file:

Command Line

From the command line you can let Fluent Bit parse text files with the following options:

$ fluent-bit -i tail -p path=/var/log/syslog -o stdout

Configuration File

[INPUT]
    Name        tail
    Path        /var/log/syslog

[OUTPUT]
    Name   stdout
    Match  *

Old Multi-line example

When using multi-line configuration you need to first specify Multiline On in the configuration and use the Parser_Firstline and additional parser parameters Parser_N if needed. If we are trying to read the following Java Stacktrace as a single event

Dec 14 06:41:08 Exception in thread "main" java.lang.RuntimeException: Something has gone wrong, aborting!
    at com.myproject.module.MyProject.badMethod(MyProject.java:22)
    at com.myproject.module.MyProject.oneMoreMethod(MyProject.java:18)
    at com.myproject.module.MyProject.anotherMethod(MyProject.java:14)
    at com.myproject.module.MyProject.someMethod(MyProject.java:10)
    at com.myproject.module.MyProject.main(MyProject.java:6)

We need to specify a Parser_Firstline parameter that matches the first line of a multi-line event. Once a match is made, Fluent Bit will read all subsequent lines until another match with Parser_Firstline is made.

In the case above we can use the following parser, that extracts the Time as time and the remaining portion of the multiline as log

[PARSER]
    Name multiline
    Format regex
    Regex /(?<time>Dec \d+ \d+\:\d+\:\d+)(?<message>.*)/
    Time_Key  time
    Time_Format %b %d %H:%M:%S

If we want to further parse the entire event we can add additional parsers with Parser_N where N is an integer. The final Fluent Bit configuration looks like the following:

# Note this is generally added to parsers.conf and referenced in [SERVICE]
[PARSER]
    Name multiline
    Format regex
    Regex /(?<time>Dec \d+ \d+\:\d+\:\d+)(?<message>.*)/
    Time_Key  time
    Time_Format %b %d %H:%M:%S

[INPUT]
    Name             tail
    Multiline        On
    Parser_Firstline multiline
    Path             /var/log/java.log

[OUTPUT]
    Name             stdout
    Match            *

Our output will be as follows.

[0] tail.0: [1607928428.466041977, {"message"=>"Exception in thread "main" java.lang.RuntimeException: Something has gone wrong, aborting!
    at com.myproject.module.MyProject.badMethod(MyProject.java:22)
    at com.myproject.module.MyProject.oneMoreMethod(MyProject.java:18)
    at com.myproject.module.MyProject.anotherMethod(MyProject.java:14)
    at com.myproject.module.MyProject.someMethod(MyProject.java:10)", "message"=>"at com.myproject.module.MyProject.main(MyProject.java:6)"}]

Tailing files keeping state

The tail input plugin has a feature to save the state of the tracked files; it is strongly suggested you enable it. For this purpose the db property is available, e.g.:

$ fluent-bit -i tail -p path=/var/log/syslog -p db=/path/to/logs.db -o stdout

When running, the database file /path/to/logs.db will be created. This database is backed by SQLite3, so if you are interested in exploring the content, you can open it with the SQLite client tool, e.g.:

$ sqlite3 tail.db
-- Loading resources from /home/edsiper/.sqliterc

SQLite version 3.14.1 2016-08-11 18:53:32
Enter ".help" for usage hints.
sqlite> SELECT * FROM in_tail_files;
id     name                              offset        inode         created
-----  --------------------------------  ------------  ------------  ----------
1      /var/log/syslog                   73453145      23462108      1480371857
sqlite>

Make sure to explore the database when Fluent Bit is not actively working on the database file, otherwise you will see some Error: database is locked messages.

Formatting SQLite

By default the SQLite client tool does not format the columns in a human-readable way, so to explore the in_tail_files table you can create a config file in ~/.sqliterc with the following content:

.headers on
.mode column
.width 5 32 12 12 10

SQLite and Write Ahead Logging

Fluent Bit keeps the state or checkpoint of each file by using a SQLite database file, so if the service is restarted, it can continue consuming files from its last checkpoint position (offset). The default options are set for high performance and corruption safety.

The SQLite journaling mode enabled is Write Ahead Log or WAL. This improves the performance of read and write operations to disk. When enabled, you will see additional files being created in your file system; consider the following configuration statement:

[INPUT]
    name    tail
    path    /var/log/containers/*.log
    db      test.db

The above configuration enables a database file called test.db and in the same path for that file SQLite will create two additional files:

  • test.db-shm

  • test.db-wal

Those two files support the WAL mechanism, which helps to improve performance and reduce the number of system calls required. The -wal file stores the new changes to be committed; at some point the WAL file transactions are moved back to the real database file. The -shm file is a shared-memory file that allows concurrent users of the WAL file.

WAL and Memory Usage

The WAL mechanism gives us higher performance but might also increase memory usage by Fluent Bit. Most of this usage comes from memory-mapped and cached pages. In some cases you might see that memory usage stays somewhat high, giving the impression of a memory leak, but it is not a problem unless you need your memory metrics back to normal. Starting from Fluent Bit v1.7.3 we introduced the option db.journal_mode, which sets the journal mode for databases; by default it is WAL (Write-Ahead Logging). The currently allowed values for db.journal_mode are DELETE | TRUNCATE | PERSIST | MEMORY | WAL | OFF.

File Rotation

File rotation is properly handled, including logrotate's copytruncate mode.

Note that the Path patterns cannot match the rotated files. Otherwise, the rotated file would be read again and lead to duplicate records.

Systemd

The Systemd input plugin allows collecting log messages from the Journald daemon on Linux environments.

Configuration Parameters

The plugin supports the following configuration parameters:

Key
Description
Default

Path

Optional path to the Systemd journal directory, if not set, the plugin will use default paths to read local-only logs.

Max_Fields

Set a maximum number of fields (keys) allowed per record.

8000

Max_Entries

When Fluent Bit starts, the Journal might have a high number of logs in the queue. In order to avoid delays and reduce memory usage, this option allows to specify the maximum number of log entries that can be processed per round. Once the limit is reached, Fluent Bit will continue processing the remaining log entries once Journald performs the notification.

5000

Systemd_Filter

Allows to perform a query over logs that contains a specific Journald key/value pairs, e.g: _SYSTEMD_UNIT=UNIT. The Systemd_Filter option can be specified multiple times in the input section to apply multiple filters as required.

Systemd_Filter_Type

Define the filter type when Systemd_Filter is specified multiple times. Allowed values are And and Or. With And a record is matched only when all of the Systemd_Filter have a match. With Or a record is matched when any of the Systemd_Filter has a match.

Or

Tag

The tag is used to route messages but on Systemd plugin there is an extra functionality: if the tag includes a star/wildcard, it will be expanded with the Systemd Unit file (_SYSTEMD_UNIT, e.g. host.* => host.UNIT_NAME) or unknown (e.g. host.unknown) if _SYSTEMD_UNIT is missing.

DB

Specify the absolute path of a database file to keep track of Journald cursor.

DB.Sync

Full

Read_From_Tail

Start reading new entries. Skip entries already stored in Journald.

Off

Lowercase

Lowercase the Journald field (key).

Off

Strip_Underscores

Remove the leading underscore of the Journald field (key). For example the Journald field _PID becomes the key PID.

Off

Getting Started

In order to receive Systemd messages, you can run the plugin from the command line or through the configuration file:

Command Line

From the command line you can let Fluent Bit listen for Systemd messages with the following options:

$ fluent-bit -i systemd \
             -p systemd_filter=_SYSTEMD_UNIT=docker.service \
             -p tag='host.*' -o stdout

In the example above we are collecting all messages coming from the Docker service.

Configuration File

In your main configuration file append the following Input & Output sections:

[SERVICE]
    Flush        1
    Log_Level    info
    Parsers_File parsers.conf

[INPUT]
    Name            systemd
    Tag             host.*
    Systemd_Filter  _SYSTEMD_UNIT=docker.service

[OUTPUT]
    Name   stdout
    Match  *

TCP

The tcp input plugin allows retrieving structured JSON or raw messages over a TCP network interface (TCP port).

Configuration Parameters

The plugin supports the following configuration parameters:

| Key | Description | Default |
| --- | --- | --- |
| Listen | Listener network interface. | 0.0.0.0 |
| Port | TCP port where listening for connections | 5170 |
| Buffer_Size | Specify the maximum buffer size in KB to receive a JSON message. If not set, the default size will be the value of Chunk_Size. | |
| Chunk_Size | By default the buffer to store the incoming JSON messages does not allocate the maximum memory allowed; instead it allocates memory when required. The rounds of allocations are set by Chunk_Size in KB. If not set, Chunk_Size is equal to 32 (32KB). | 32 |
| Format | Specify the expected payload format. It supports the options json and none. When using json, it expects JSON maps; when set to none, it will split every record using the defined Separator (option below). | json |
| Separator | When the expected Format is set to none, Fluent Bit needs a separator string to split the records. By default it uses the newline character (LF, 0x0A). | |

Getting Started

In order to receive JSON messages over TCP, you can run the plugin from the command line or through the configuration file:

Command Line

From the command line you can let Fluent Bit listen for JSON messages with the following options:

$ fluent-bit -i tcp -o stdout

By default the service will listen on all interfaces (0.0.0.0) through TCP port 5170; optionally you can change this directly, e.g.:

$ fluent-bit -i tcp://192.168.3.2:9090 -o stdout

In this example, JSON messages will only arrive through the network interface with the 192.168.3.2 address and TCP port 9090.

Configuration File

In your main configuration file append the following Input & Output sections:

[INPUT]
    Name        tcp
    Listen      0.0.0.0
    Port        5170
    Chunk_Size  32
    Buffer_Size 64
    Format      json

[OUTPUT]
    Name        stdout
    Match       *

Testing

Once Fluent Bit is running, you can send some messages using netcat:

$ echo '{"key 1": 123456789, "key 2": "abcdefg"}' | nc 127.0.0.1 5170
$ bin/fluent-bit -i tcp -o stdout -f 1
Fluent Bit v1.x.x
* Copyright (C) 2019-2020 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2019/10/03 09:19:34] [ info] [storage] initializing...
[2019/10/03 09:19:34] [ info] [storage] in-memory
[2019/10/03 09:19:34] [ info] [engine] started (pid=14569)
[2019/10/03 09:19:34] [ info] [in_tcp] binding 0.0.0.0:5170
[2019/10/03 09:19:34] [ info] [sp] stream processor started
[0] tcp.0: [1570115975.581246030, {"key 1"=>123456789, "key 2"=>"abcdefg"}]

Performance Considerations

When receiving payloads in JSON format, there are high performance penalties. Parsing JSON is a very expensive task, so you can expect CPU usage to increase under high-load environments.

To get faster data ingestion, consider using the option Format none to avoid JSON parsing if it is not needed.

Windows Event Log

The winlog input plugin allows you to read Windows Event Log.

Configuration Parameters

The plugin supports the following configuration parameters:

Note that if you do not set db, the plugin will read channels from the beginning on each startup.

Configuration Examples

Configuration File

Here is a minimum configuration example.
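A minimal sketch (the channel names and the database file name are illustrative):

[INPUT]
    Name         winlog
    Channels     Setup,Windows PowerShell
    Interval_Sec 1
    DB           winlog.sqlite

[OUTPUT]
    Name   stdout
    Match  *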

Note that some Windows Event Log channels (like Security) require admin privileges for reading. In this case, you need to run fluent-bit as an administrator.

Command Line

If you want to do a quick test, you can run this plugin from the command line.
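For example (the channel name is illustrative):

$ fluent-bit -i winlog -p 'channels=Setup' -o stdout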

Thermal

The thermal input plugin reports system temperatures periodically (every second by default). Currently this plugin is only available for Linux.

The following tables describe the information generated by the plugin.

Configuration Parameters

The plugin supports the following configuration parameters:

Getting Started

In order to get temperature(s) of your system, you can run the plugin from the command line or through the configuration file:

Command Line

Some systems provide multiple thermal zones. In this example monitor only thermal_zone0 by name, once per minute.
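A possible invocation for that scenario (the name_regex and interval_sec parameter names are assumptions; check the parameter table of this plugin for the exact spelling):

$ fluent-bit -i thermal -p name_regex=thermal_zone0 -p interval_sec=60 -o stdout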

Configuration File

In your main configuration file append the following Input & Output sections:

Syslog

The syslog input plugin allows to collect Syslog messages through a Unix socket server (UDP or TCP) or over the network using TCP or UDP.

Configuration Parameters

The plugin supports the following configuration parameters:

Considerations

  • When using Syslog input plugin, Fluent Bit requires access to the parsers.conf file, the path to this file can be specified with the option -R or through the Parsers_File key on the [SERVICE] section (more details below).

  • When udp or unix_udp is used, the buffer size to receive messages is configurable only through the Buffer_Chunk_Size option, which defaults to 32KB (see the sketch below).
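As a sketch (values are illustrative), a UDP listener with a larger receive buffer could be configured as:

[INPUT]
    Name              syslog
    Mode              udp
    Listen            0.0.0.0
    Port              5140
    Parser            syslog-rfc5424
    Buffer_Chunk_Size 64000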

Getting Started

In order to receive Syslog messages, you can run the plugin from the command line or through the configuration file:

Command Line

From the command line you can let Fluent Bit listen for Syslog messages with the following options:

By default the service will create and listen for Syslog messages on the unix socket /tmp/in_syslog

Configuration File

In your main configuration file append the following Input & Output sections:

Testing

Once Fluent Bit is running, you can send some messages using the logger tool:

Recipes

The following content aims to provide configuration examples for different use cases to integrate Fluent Bit and make it listen for Syslog messages from your systems.

Rsyslog to Fluent Bit: Network mode over TCP

Fluent Bit Configuration

Put the following content in your fluent-bit.conf file:

then start Fluent Bit.

RSyslog Configuration

Add a new file to your rsyslog config rules called 60-fluent-bit.conf inside the directory /etc/rsyslog.d/ and add the following content:

then make sure to restart your rsyslog daemon:

Rsyslog to Fluent Bit: Unix socket mode over UDP

Fluent Bit Configuration

Put the following content in your fluent-bit.conf file:

then start Fluent Bit.

RSyslog Configuration

Add a new file to your rsyslog config rules called 60-fluent-bit.conf inside the directory /etc/rsyslog.d/ and place the following content:

Make sure that the socket file is readable by rsyslog (tweak the Unix_Perm option shown above).

Configuring Parser

The parser engine is fully configurable and can process log entries based on two types of format:

  • JSON Maps

  • Regular Expressions (named capture)

By default, Fluent Bit provides a set of pre-configured parsers that can be used for different use cases such as logs from:

  • Apache

  • Nginx

  • Docker

  • Syslog rfc5424

  • Syslog rfc3164

Parsers are defined in one or multiple configuration files that are loaded at start time, either from the command line or through the main Fluent Bit configuration file.

Configuration Parameters

Multiple parsers can be defined and each section has its own properties. The following table describes the available options for each parser definition:

Parsers Configuration File

All parsers must be defined in a parsers.conf file, not in the Fluent Bit global configuration file. The parsers file exposes all available parsers that can be used by the input plugins that are aware of this feature. A parsers file can have multiple entries like this:

For more information about the parsers available, please refer to the default parsers file distributed with Fluent Bit source code: https://github.com/fluent/fluent-bit/blob/master/conf/parsers.conf

Time Resolution and Fractional Seconds

In addition, we extended our time resolution to support fractional seconds like 2017-05-17T15:44:31.187512963Z. Since Fluent Bit v0.12 we have full support for nanosecond resolution; the %L format option for Time_Format is provided as a way to indicate that content must be interpreted as fractional seconds.

Note: The option %L is only valid when used after seconds (%S) or seconds since the Epoch (%s), e.g: %S.%L or %s.%L
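For instance, the docker parser shown later in this document uses %L right after the seconds in its Time_Format to capture the fractional part of the timestamp:

    Time_Key    time
    Time_Format %Y-%m-%dT%H:%M:%S.%L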

Regular Expression

The regex parser allows to define a custom Ruby Regular Expression that will use a named capture feature to define which content belongs to which key name.

Note: understanding how regular expressions work is out of the scope of this content.

From a configuration perspective, when the format is set to regex, it is mandatory that a Regex configuration key exists.

The following parser configuration example aims to provide rules that can be applied to an Apache HTTP Server log entry:

As an example, take the following Apache HTTP Server log entry:

The above content does not provide a defined structure for Fluent Bit, but by enabling the proper parser we can build a structured representation of it:

A common pitfall is that you cannot use characters other than letters, numbers and underscore in group names. For example, a group name like (?<user-name>.*) will cause an error due to containing an invalid character (-).
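A minimal fix is to replace the dash with an underscore in the group name:

(?<user_name>.*)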

JSON

The JSON parser is the simplest option: if the original log source is a JSON map string, it will take its structure and convert it directly to the internal binary representation.

A simple configuration that can be found in the default parsers configuration file, is the entry to parse Docker log files (when the tail input plugin is used):

The following log entry is a valid content for the parser defined above:

After processing, its internal representation will be:

The time has been converted to Unix timestamp (UTC) and the map reduced to each component of the original message.

Set the buffer size to read data. This value is used to increase the buffer size. The value must be set according to the Unit Size specification.

Set the initial buffer size to read file data. This value is used to increase the buffer size. The value must be set according to the Unit Size specification.

Set the limit of the buffer size per monitored file. When a buffer needs to be increased (e.g. very long lines), this value is used to restrict how much the memory buffer can grow. If reading a file exceeds this limit, the file is removed from the monitored file list. The value must be set according to the Unit Size specification.

Set a default synchronization (I/O) method. Values: Extra, Full, Normal, Off. This flag affects how the internal SQLite engine does synchronization to disk; for more details about each option please refer to this section. Most workload scenarios will be fine with normal mode, but if you really need full synchronization after every write operation you should set full mode. Note that full has a high I/O performance cost.

Set a tag (with regex-extract fields) that will be placed on lines read. E.g. kube.<namespace_name>.<pod_name>.<container_name>. Note that "tag expansion" is supported: if the tag includes an asterisk (*), that asterisk will be replaced with the absolute path of the monitored file (also see the Workflow of Tail + Kubernetes Filter).

Specify one or multiple Multiline Parser definitions to apply to the content.

As stated in the Multiline Parser documentation, we now provide built-in configuration modes. Note that when using a new multiline.parser definition, you must disable the old configuration from your tail section, as in the sketch below:
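A minimal sketch, assuming the built-in docker and cri multiline parsers and an illustrative path, with the old multiline keys removed (shown commented out):

[INPUT]
    Name              tail
    Path              /var/log/containers/*.log
    multiline.parser  docker, cri
    # Old multiline configuration, must not be mixed with multiline.parser:
    # Multiline         On
    # Parser_Firstline  custom_parser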

In your main configuration file append the following Input & Output sections. An example visualization can be found here.

Set a default synchronization (I/O) method. Values: Extra, Full, Normal, Off. This flag affects how the internal SQLite engine does synchronization to disk; for more details about each option please refer to this section. Note: this option was introduced in Fluent Bit v1.4.6.


Parsers are an important component of Fluent Bit; with them you can take any unstructured log entry and give it a structure that makes processing and further filtering easier.

Note: If you are using Regular Expressions, note that Fluent Bit uses Ruby-based regular expressions, and we encourage you to use the Rubular web site as an online editor to test them.

Time resolution and the formats supported are handled by using the strftime(3) libc system function.

Fluent Bit uses the Onigmo regular expression library in Ruby mode; for testing purposes you can use the following web editor to test your expressions: http://rubular.com/

Important: do not attempt to add multiline support in your regular expressions if you are using the Tail input plugin, since each line is handled as a separate entity. Instead use the Tail Multiline configuration feature.

Security Warning: Onigmo is a backtracking regex engine. You need to be careful not to use expensive regex patterns, or Onigmo can take a very long time to perform pattern matching. For details, please read the "ReDoS" article on OWASP.

In order to understand, learn and test regular expressions like the example above, we suggest you try the following Ruby Regular Expression Editor: http://rubular.com/r/X7BH0M4Ivm


Channels

A comma-separated list of channels to read from.

Interval_Sec

Set the polling interval for each channel. (optional)

1

DB

Set the path to save the read offsets. (optional)

[INPUT]
    Name         winlog
    Channels     Setup,Windows PowerShell
    Interval_Sec 1
    DB           winlog.sqlite

[OUTPUT]
    Name   stdout
    Match  *
$ fluent-bit -i winlog -p 'channels=Setup' -o stdout

name

The name of the thermal zone, such as thermal_zone0

type

The type of the thermal zone, such as x86_pkg_temp

temp

Current temperature in celsius

Interval_Sec

Polling interval (seconds). default: 1

Interval_NSec

Polling interval (nanoseconds). default: 0

name_regex

Optional name filter regex. default: None

type_regex

Optional type filter regex. default: None

$ bin/fluent-bit -i thermal -t my_thermal -o stdout -m '*'
Fluent Bit v1.x.x
* Copyright (C) 2019-2020 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2019/08/18 13:39:43] [ info] [storage] initializing...
...
[0] my_thermal: [1566099584.000085820, {"name"=>"thermal_zone0", "type"=>"x86_pkg_temp", "temp"=>60.000000}]
[1] my_thermal: [1566099585.000136466, {"name"=>"thermal_zone0", "type"=>"x86_pkg_temp", "temp"=>59.000000}]
[2] my_thermal: [1566099586.000083156, {"name"=>"thermal_zone0", "type"=>"x86_pkg_temp", "temp"=>59.000000}]
$ bin/fluent-bit -i thermal -t my_thermal -p "interval_sec=60" -p "name_regex=thermal_zone0" -o stdout -m '*'
Fluent Bit v1.3.0
Copyright (C) Treasure Data

[2019/08/18 13:39:43] [ info] [storage] initializing...
...
[0] my_temp: [1565759542.001053749, {"name"=>"thermal_zone0", "type"=>"pch_skylake", "temp"=>48.500000}]
[0] my_temp: [1565759602.001661061, {"name"=>"thermal_zone0", "type"=>"pch_skylake", "temp"=>48.500000}]
[INPUT]
    Name thermal
    Tag  my_thermal

[OUTPUT]
    Name  stdout
    Match *

Mode

Defines transport protocol mode: unix_udp (UDP over Unix socket), unix_tcp (TCP over Unix socket), tcp or udp

unix_udp

Listen

If Mode is set to tcp or udp, specify the network interface to bind.

0.0.0.0

Port

If Mode is set to tcp or udp, specify the TCP port to listen for incoming connections.

5140

Path

If Mode is set to unix_tcp or unix_udp, set the absolute path to the Unix socket file.

Unix_Perm

If Mode is set to unix_tcp or unix_udp, set the permission of the Unix socket file.

0644

Parser

Specify an alternative parser for the message. If Mode is set to tcp or udp then the default parser is syslog-rfc5424 otherwise syslog-rfc3164-local is used. If your syslog messages have fractional seconds set this Parser value to syslog-rfc5424 instead.

Buffer_Chunk_Size

By default the buffer to store the incoming Syslog messages does not allocate the maximum memory allowed; instead it allocates memory as it is required. The rounds of allocations are set by Buffer_Chunk_Size. If not set, Buffer_Chunk_Size is equal to 32000 bytes (32KB). Read the Considerations section when using udp or unix_udp mode.

Buffer_Max_Size

Specify the maximum buffer size to receive a Syslog message. If not set, the default size will be the value of Buffer_Chunk_Size.

$ fluent-bit -R /path/to/parsers.conf -i syslog -p path=/tmp/in_syslog -o stdout
[SERVICE]
    Flush               1
    Log_Level           info
    Parsers_File        parsers.conf

[INPUT]
    Name                syslog
    Path                /tmp/in_syslog
    Buffer_Chunk_Size   32000
    Buffer_Max_Size     64000

[OUTPUT]
    Name   stdout
    Match  *
$ logger -u /tmp/in_syslog my_ident my_message
$ bin/fluent-bit -R ../conf/parsers.conf -i syslog -p path=/tmp/in_syslog -o stdout
Fluent Bit v1.x.x
* Copyright (C) 2019-2020 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2017/03/09 02:23:27] [ info] [engine] started
[0] syslog.0: [1489047822, {"pri"=>"13", "host"=>"edsiper:", "ident"=>"my_ident", "pid"=>"", "message"=>"my_message"}]
[SERVICE]
    Flush        1
    Parsers_File parsers.conf

[INPUT]
    Name     syslog
    Parser   syslog-rfc3164
    Listen   0.0.0.0
    Port     5140
    Mode     tcp

[OUTPUT]
    Name     stdout
    Match    *
action(type="omfwd" Target="127.0.0.1" Port="5140" Protocol="tcp")
$ sudo service rsyslog restart
[SERVICE]
    Flush        1
    Parsers_File parsers.conf

[INPUT]
    Name      syslog
    Parser    syslog-rfc3164
    Path      /tmp/fluent-bit.sock
    Mode      unix_udp
    Unix_Perm 0644

[OUTPUT]
    Name      stdout
    Match     *
$ModLoad omuxsock
$OMUxSockSocket /tmp/fluent-bit.sock
*.* :omuxsock:
[PARSER]
    Name        docker
    Format      json
    Time_Key    time
    Time_Format %Y-%m-%dT%H:%M:%S.%L
    Time_Keep   On

[PARSER]
    Name        syslog-rfc5424
    Format      regex
    Regex       ^\<(?<pri>[0-9]{1,5})\>1 (?<time>[^ ]+) (?<host>[^ ]+) (?<ident>[^ ]+) (?<pid>[-0-9]+) (?<msgid>[^ ]+) (?<extradata>(\[(.*)\]|-)) (?<message>.+)$
    Time_Key    time
    Time_Format %Y-%m-%dT%H:%M:%S.%L
    Time_Keep   On
    Types pid:integer
[PARSER]
    Name   apache
    Format regex
    Regex  ^(?<host>[^ ]*) [^ ]* (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*?)(?: +\S*)?)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$
    Time_Key time
    Time_Format %d/%b/%Y:%H:%M:%S %z
    Types code:integer size:integer
192.168.2.20 - - [29/Jul/2015:10:27:10 -0300] "GET /cgi-bin/try/ HTTP/1.0" 200 3395
[1154104030, {"host"=>"192.168.2.20",
              "user"=>"-",
              "method"=>"GET",
              "path"=>"/cgi-bin/try/",
              "code"=>"200",
              "size"=>"3395",
              "referer"=>"",
              "agent"=>""
              }
]
[PARSER]
    Name        docker
    Format      json
    Time_Key    time
    Time_Format %Y-%m-%dT%H:%M:%S %z
{"key1": 12345, "key2": "abc", "time": "2006-07-28T13:22:04Z"}
[1154103724, {"key1"=>12345, "key2"=>"abc"}]

Decoders

There are certain cases where the log messages being parsed contain encoded data. A typical use case can be found in containerized environments with Docker: the application logs its data in JSON format, but it ends up as an escaped string. Consider the following example.

Original message generated by the application:

{"status": "up and running"}

Then the Docker log message become encapsulated as follows:

{"log":"{\"status\": \"up and running\"}\r\n","stream":"stdout","time":"2018-03-09T01:01:44.851160855Z"}

As you can see, the original message is handled as an escaped string. Ideally, in Fluent Bit we would like to keep the original structured message and not a string.

Getting Started

Decoders are a built-in feature available through the Parsers file; each parser definition can optionally set one or multiple decoders. There are two types of decoders:

  • Decode_Field: if the content can be decoded into a structured message, append that structured message (keys and values) to the original log message.

  • Decode_Field_As: any content decoded (unstructured or structured) will be replaced in the same key/value, no extra keys are added.

Our pre-defined Docker parser has the following definition:

[PARSER]
    Name         docker
    Format       json
    Time_Key     time
    Time_Format  %Y-%m-%dT%H:%M:%S.%L
    Time_Keep    On
    # Command       |  Decoder  | Field | Optional Action   |
    # ==============|===========|=======|===================|
    Decode_Field_As    escaped     log

Each line in the parser with a Decode_Field key instructs the parser to apply a specific decoder on a given field; optionally, it offers the option to take an extra action if the decoder does not succeed.

Decoders

Name
Description

json

handle the field content as a JSON map. If it finds a JSON map, it will replace the content with a structured map.

escaped

decode an escaped string.

escaped_utf8

decode a UTF8 escaped string.

Optional Actions

If a decoder fails to decode the field, or you want to try the next decoder, it is possible to define an optional action. Available actions are:

Name
Description

try_next

if the decoder failed, apply the next Decoder in the list for the same field.

do_next

if the decoder succeeded or failed, apply the next Decoder in the list for the same field.

Note that actions are affected by some restrictions:

  • on Decode_Field_As, if it succeeded, another decoder of the same type on the same field can be applied only if the data is still an unstructured message (raw text).

  • on Decode_Field, if it succeeded, it can only be applied once to the same field. By nature, Decode_Field aims to decode a structured message.
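As an illustrative sketch (the parser name is hypothetical), decoders can be chained with do_next so the log field is first unescaped and the result is then interpreted as a JSON map:

[PARSER]
    Name             docker_decoded
    Format           json
    Time_Key         time
    Time_Format      %Y-%m-%dT%H:%M:%S.%L
    # Unescape the string and continue with the next decoder on the same field
    Decode_Field_As  escaped  log  do_next
    # If the unescaped content is a JSON map, append its keys to the record
    Decode_Field     json     log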

Examples

escaped_utf8

Example input (from /path/to/log.log in configuration below)

{"log":"\u0009Checking indexes...\n","stream":"stdout","time":"2018-02-19T23:25:29.1845444Z"}
{"log":"\u0009\u0009Validated: _audit _internal _introspection _telemetry _thefishbucket history main snmp_data summary\n","stream":"stdout","time":"2018-02-19T23:25:29.1845536Z"}
{"log":"\u0009Done\n","stream":"stdout","time":"2018-02-19T23:25:29.1845622Z"}

Example output

[24] tail.0: [1519082729.184544400, {"log"=>"   Checking indexes...                                                   
", "stream"=>"stdout", "time"=>"2018-02-19T23:25:29.1845444Z"}]
[25] tail.0: [1519082729.184553600, {"log"=>"           Validated: _audit _internal _introspection _telemetry _thefishbucket history main snmp_data summary
", "stream"=>"stdout", "time"=>"2018-02-19T23:25:29.1845536Z"}]
[26] tail.0: [1519082729.184562200, {"log"=>"   Done                  
", "stream"=>"stdout", "time"=>"2018-02-19T23:25:29.1845622Z"}]

Configuration file

[SERVICE]
    Parsers_File fluent-bit-parsers.conf

[INPUT]
    Name        tail
    Parser      docker
    Path        /path/to/log.log

[OUTPUT]
    Name   stdout
    Match  *

The fluent-bit-parsers.conf file,

[PARSER]
    Name        docker
    Format      json
    Time_Key    time
    Time_Format %Y-%m-%dT%H:%M:%S %z
    Decode_Field_as escaped_utf8 log

CheckList

The following plugin looks up if a value in a specified list exists and then allows the addition of a record to indicate if it was found. Introduced in version 1.8.4.

Configuration Parameters

The plugin supports the following configuration parameters

Key
Description

file

The single value file that Fluent Bit will use as a lookup table to determine if the specified lookup_key exists

lookup_key

The specific key to look up and determine if it exists, supports record accessor

record

The record to add if the lookup_key is found in the specified file. Note you may add multiple record parameters.

Example Configuration

[INPUT]
    name           tail
    tag            test1
    path           test1.log
    read_from_head true
    parser         json

[FILTER]
    name       checklist
    match      test1
    file       ip_list.txt
    lookup_key $remote_addr
    record     ioc    abc
    record     badurl null
    log_level  debug

[OUTPUT]
    name       stdout
    match      test1

In the configuration above we read the file test1.log, which includes the following values:

{"remote_addr": true, "ioc":"false", "url":"https://badurl.com/payload.htm","badurl":"no"}
{"remote_addr": "7.7.7.2", "ioc":"false", "url":"https://badurl.com/payload.htm","badurl":"no"}
{"remote_addr": "7.7.7.3", "ioc":"false", "url":"https://badurl.com/payload.htm","badurl":"no"}
{"remote_addr": "7.7.7.4", "ioc":"false", "url":"https://badurl.com/payload.htm","badurl":"no"}
{"remote_addr": "7.7.7.5", "ioc":"false", "url":"https://badurl.com/payload.htm","badurl":"no"}
{"remote_addr": "7.7.7.6", "ioc":"false", "url":"https://badurl.com/payload.htm","badurl":"no"}
{"remote_addr": "7.7.7.7", "ioc":"false", "url":"https://badurl.com/payload.htm","badurl":"no"}

Additionally, we will use the following lookup file which contains a list of malicious IPs (ip_list.txt)

1.2.3.4
6.6.4.232
7.7.7.7

In the configuration we are using $remote_addr as the lookup key, and 7.7.7.7 is present in the malicious IP list. This means the output for the last record would look like the following:

{"remote_addr": "7.7.7.7", "ioc":"abc", "url":"https://badurl.com/payload.htm","badurl":"null"}

Name

Set a unique name for the parser in question.

Format

Regex

If format is regex, this option must be set specifying the Ruby Regular Expression that will be used to parse and compose the structured message.

Time_Key

If the log entry provides a field with a timestamp, this option specifies the name of that field.

Time_Format

Time_Offset

Specify a fixed UTC time offset (e.g. -0600, +0200, etc.) for local dates.

Time_Keep

By default, when a time key is recognized and parsed, the parser will drop the original time field. Enabling this option will make the parser keep the original time field and its value in the log entry.

Types

Specify the data type of parsed fields. The syntax is types <field_name_1>:<type_name_1> <field_name_2>:<type_name_2> .... The supported types are string (default), integer, bool, float and hex. This option is supported by the ltsv, logfmt and regex parsers.

Decode_Field

Decode a field value, the only decoder available is json. The syntax is: Decode_Field json <field_name>.
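As a small sketch combining some of these keys (the parser name, regex and field names are illustrative), a regex parser could cast two fields to integers and decode a nested JSON string:

[PARSER]
    Name         my_app
    Format       regex
    Regex        ^(?<code>[^ ]*) (?<size>[^ ]*) (?<log>.*)$
    Types        code:integer size:integer
    Decode_Field json log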

LTSV

Labeled Tab-separated Values (LTSV) format is a variant of Tab-separated Values (TSV). Each record in an LTSV file is represented as a single line. Each field is separated by TAB and has a label and a value. The label and the value are separated by ':'.

Here is an example of how to use this format for the Apache access log.

Configure this in httpd.conf:

LogFormat "host:%h\tident:%l\tuser:%u\ttime:%t\treq:%r\tstatus:%>s\tsize:%b\treferer:%{Referer}i\tua:%{User-Agent}i" combined_ltsv
CustomLog "logs/access_log" combined_ltsv

The parser.conf:

[PARSER]
    Name        access_log_ltsv
    Format      ltsv
    Time_Key    time
    Time_Format [%d/%b/%Y:%H:%M:%S %z]
    Types       status:integer size:integer

The following log entry is a valid content for the parser defined above:

host:127.0.0.1  ident:- user:-  time:[10/Jul/2018:13:27:05 +0200]       req:GET / HTTP/1.1      status:200      size:16218      referer:http://127.0.0.1/       ua:Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:59.0) Gecko/20100101 Firefox/59.0
host:127.0.0.1  ident:- user:-  time:[10/Jul/2018:13:27:05 +0200]       req:GET /assets/plugins/bootstrap/css/bootstrap.min.css HTTP/1.1        status:200      size:121200     referer:http://127.0.0.1/       ua:Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:59.0) Gecko/20100101 Firefox/59.0
host:127.0.0.1  ident:- user:-  time:[10/Jul/2018:13:27:05 +0200]       req:GET /assets/css/headers/header-v6.css HTTP/1.1      status:200      size:37706      referer:http://127.0.0.1/       ua:Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:59.0) Gecko/20100101 Firefox/59.0
host:127.0.0.1  ident:- user:-  time:[10/Jul/2018:13:27:05 +0200]       req:GET /assets/css/style.css HTTP/1.1  status:200      size:1279       referer:http://127.0.0.1/       ua:Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:59.0) Gecko/20100101 Firefox/59.0

After processing, its internal representation will be:

[1531222025.000000000, {"host"=>"127.0.0.1", "ident"=>"-", "user"=>"-", "req"=>"GET / HTTP/1.1", "status"=>200, "size"=>16218, "referer"=>"http://127.0.0.1/", "ua"=>"Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:59.0) Gecko/20100101 Firefox/59.0"}]
[1531222025.000000000, {"host"=>"127.0.0.1", "ident"=>"-", "user"=>"-", "req"=>"GET /assets/plugins/bootstrap/css/bootstrap.min.css HTTP/1.1", "status"=>200, "size"=>121200, "referer"=>"http://127.0.0.1/", "ua"=>"Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:59.0) Gecko/20100101 Firefox/59.0"}]
[1531222025.000000000, {"host"=>"127.0.0.1", "ident"=>"-", "user"=>"-", "req"=>"GET /assets/css/headers/header-v6.css HTTP/1.1", "status"=>200, "size"=>37706, "referer"=>"http://127.0.0.1/", "ua"=>"Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:59.0) Gecko/20100101 Firefox/59.0"}]
[1531222025.000000000, {"host"=>"127.0.0.1", "ident"=>"-", "user"=>"-", "req"=>"GET /assets/css/style.css HTTP/1.1", "status"=>200, "size"=>1279, "referer"=>"http://127.0.0.1/", "ua"=>"Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:59.0) Gecko/20100101 Firefox/59.0"}]

The time has been converted to Unix timestamp (UTC).

Logfmt

Here is an example configuration:

[PARSER]
    Name        logfmt
    Format      logfmt

The following log entry is a valid content for the parser defined above:

key1=val1 key2=val2

After processing, its internal representation will be:

[1540936693, {"key1"=>"val1",
              "key2"=>"val2"}]

AWS Metadata

Configuration Parameters

The plugin supports the following configuration parameters:

Key
Description
Default

imds_version

Specify which version of the instance metadata service to use. Valid values are 'v1' or 'v2'.

v2

az

The availability zone; for example, "us-east-1a".

true

ec2_instance_id

The EC2 instance ID.

true

ec2_instance_type

The EC2 instance type.

false

private_ip

The EC2 instance private ip.

false

ami_id

The EC2 instance image id.

false

account_id

The account ID for current EC2 instance.

false

hostname

The hostname for current EC2 instance.

false

vpc_id

The VPC ID for current EC2 instance.

false

Note: If you run Fluent Bit in a container, you may have to use instance metadata v1. The plugin behaves the same regardless of which version is used.

Command Line

$ bin/fluent-bit -c /PATH_TO_CONF_FILE/fluent-bit.conf

[2020/01/17 07:57:17] [ info] [engine] started (pid=32744)
[0] dummy: [1579247838.000171227, {"message"=>"dummy", "az"=>"us-west-2c", "ec2_instance_id"=>"i-0c862eca9038f5aae", "ec2_instance_type"=>"t2.medium", "private_ip"=>"172.31.6.59", "vpc_id"=>"vpc-7ea11c06", "ami_id"=>"ami-0841edc20334f9287", "account_id"=>"YOUR_ACCOUNT_ID", "hostname"=>"ip-172-31-6-59.us-west-2.compute.internal"}]
[0] dummy: [1601274509.970235760, {"message"=>"dummy", "az"=>"us-west-2c", "ec2_instance_id"=>"i-0c862eca9038f5aae", "ec2_instance_type"=>"t2.medium", "private_ip"=>"172.31.6.59", "vpc_id"=>"vpc-7ea11c06", "ami_id"=>"ami-0841edc20334f9287", "account_id"=>"YOUR_ACCOUNT_ID", "hostname"=>"ip-172-31-6-59.us-west-2.compute.internal"}]

Configuration File

[INPUT]
    Name dummy
    Tag dummy

[FILTER]
    Name aws
    Match *
    imds_version v1
    az true
    ec2_instance_id true
    ec2_instance_type true
    private_ip true
    ami_id true
    account_id true
    hostname true
    vpc_id true

[OUTPUT]
    Name stdout
    Match *

Specify the format of the parser, the available options here are: json, regex, ltsv or logfmt.

Specify the format of the time field so it can be recognized and analyzed properly. Fluent Bit uses strptime(3) to parse time, so you can refer to the strptime documentation for available modifiers.

The ltsv parser allows to parse LTSV formatted texts.

The logfmt parser allows to parse the logfmt format described in https://brandur.org/logfmt. A more formal description is in https://godoc.org/github.com/kr/logfmt.

The AWS filter enriches logs with AWS metadata. Currently the plugin adds the EC2 instance ID and availability zone to log records. To use this plugin, you must be running in EC2 and have the instance metadata service enabled.

