Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
Fluent Bit is an open source, multi-platform log processor that aims to be a generic Swiss Army knife for log processing and distribution.
Nowadays the number of sources of information in our environments is ever increasing. Handling data collection at scale is complex, and collecting and aggregating diverse data requires a specialized tool that can deal with:
Different sources of information
Different data formats
Data Reliability
Security
Flexible Routing
Multiple destinations
Fluent Bit has been designed with performance and low resource consumption in mind.
Strong Commitment to Openness and Collaboration
Fluent Bit, including its core, plugins and tools, is distributed under the terms of the Apache License v2.0.
The Production Grade Ecosystem
Logging and data processing in general can be complex, and even more so at scale; that's why Fluentd was born. Fluentd is now more than a simple tool, it's a full ecosystem that contains SDKs for different languages and sub-projects like Fluent Bit.
On this page, we will describe the relationship between the Fluentd and Fluent Bit open source projects. As a summary, we can say both are:
Licensed under the terms of Apache License v2.0
Hosted projects by the Cloud Native Computing Foundation (CNCF)
Production Grade solutions: deployed thousands of times every single day, millions per month.
Community driven projects
Widely Adopted by the Industry: trusted by major companies like AWS, Microsoft, Google Cloud and hundreds of others.
Originally created by Treasure Data.
Both projects share a lot of similarities: Fluent Bit is fully designed and built on top of the best ideas of Fluentd's architecture and general design. Choosing which one to use depends on the end-user needs.
The following table describes a comparison in different areas of the projects:
| | Fluentd | Fluent Bit |
| --- | --- | --- |
| Scope | Containers / Servers | Embedded Linux / Containers / Servers |
| Language | C & Ruby | C |
| Memory | ~40MB | ~650KB |
| Performance | High Performance | High Performance |
| Dependencies | Built as a Ruby Gem, it requires a certain number of gems. | Zero dependencies, unless some special plugin requires them. |
| Plugins | More than 1000 plugins available | Around 70 plugins available |
| License | Apache License v2.0 | Apache License v2.0 |
Both Fluentd and Fluent Bit can work as Aggregators or Forwarders; they can complement each other or be used as standalone solutions.
Convert Unstructured to Structured messages
Dealing with raw strings or unstructured messages is a constant pain; having a structure is highly desired. Ideally we want to set a structure on the incoming data via the Input Plugins as soon as it is collected:
The Parser allows you to convert from unstructured to structured data. As a demonstrative example consider the following Apache (HTTP Server) log entry:
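A representative raw entry looks like this (illustrative sample; the specific values are hypothetical):

```
192.168.2.20 - - [28/Jul/2006:10:27:10 -0300] "GET /cgi-bin/try/ HTTP/1.0" 200 3395
```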
The above log line is a raw string without format, ideally we would like to give it a structure that can be processed later easily. If the proper configuration is used, the log entry could be converted to:
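A structured version of the same entry could look like this (the key names are an assumption based on a typical Apache access-log parser):

```
{
  "host":   "192.168.2.20",
  "user":   "-",
  "method": "GET",
  "path":   "/cgi-bin/try/",
  "code":   "200",
  "size":   "3395"
}
```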
There are a few key concepts that are really important to understand how Fluent Bit operates.
Before diving in, it's good to get acquainted with some of the key concepts of the service. This document provides a gentle introduction to those concepts and common terminology. We've provided a list below of all the terms we'll cover, but we recommend reading this document from start to finish to gain a more general understanding of our log and stream processor.
Event or Record
Filtering
Tag
Timestamp
Match
Structured Message
Every incoming piece of data that belongs to a log or a metric that is retrieved by Fluent Bit is considered an Event or a Record.
As an example consider the following content of a Syslog file:
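Illustrative sample (the actual content is whatever your system logged):

```
Jan 18 12:52:16 flb systemd[2222]: Starting GNOME Terminal Server
Jan 18 12:52:16 flb dbus-daemon[2243]: Successfully activated service 'org.gnome.Terminal'
Jan 18 12:52:16 flb systemd[2222]: Started GNOME Terminal Server.
Jan 18 12:52:16 flb gsd-media-keys[2640]: Failed to grab accelerator for keybinding settings:playback-forward
```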
It contains four lines and all of them represent four independent Events.
Internally, an Event always has two components (in an array form):
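Conceptually the layout is the following (a sketch of the documented internal representation):

```
[TIMESTAMP, MESSAGE]
```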
In some cases it is required to perform modifications on the Events content; the process to alter, enrich or drop Events is called Filtering.
There are many use cases when Filtering is required like:
Append specific information to the Event like an IP address or metadata.
Select a specific piece of the Event content.
Drop Events that match a certain pattern.
Every Event that gets into Fluent Bit gets assigned a Tag. This tag is an internal string that is used in a later stage by the Router to decide which Filter or Output phase it must go through.
Most of the tags are assigned manually in the configuration. If a tag is not specified, Fluent Bit will assign the name of the Input plugin instance from where that Event was generated from.
The Timestamp represents the time when an Event was created. Every Event contains an associated Timestamp. The Timestamp is a numeric fractional integer in the format SECONDS.NANOSECONDS, where:
Seconds is the number of seconds that have elapsed since the Unix epoch.
Nanoseconds is the fractional second, or one thousand-millionth of a second.
A timestamp always exists, either set by the Input plugin or discovered through a data parsing process.
Fluent Bit allows you to deliver your collected and processed Events to one or multiple destinations; this is done through a routing phase. A Match represents a simple rule to select Events whose Tag matches a defined rule.
Source events may or may not have a structure. A structure defines a set of keys and values inside the Event message. As an example consider the following two messages:
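For example (illustrative sample messages):

```
No structured message:
"Project Fluent Bit created on 1398289291"

Structured message:
{"project": "Fluent Bit", "created": 1398289291}
```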
At a low level both are just an array of bytes, but the structured message defines keys and values; having a structure helps to implement faster operations on data modifications.
Modify, Enrich or Drop your records
In production environments we want to have full control of the data we are collecting, filtering is an important feature that allows us to alter the data before delivering it to some destination.
Filtering is implemented through plugins, so each filter available could be used to match, exclude or enrich your logs with some specific metadata.
We support many filters. A common use case for filtering is Kubernetes deployments, where every Pod log needs to get the proper metadata associated with it.
Very similar to the input plugins, Filters run in an instance context, which has its own independent configuration. Configuration keys are often called properties.
High Performance Logs Processor
Fluent Bit is a fast and lightweight Log Processor, Stream Processor and Forwarder for Linux, macOS, Windows and BSD family operating systems. It has been made with a strong focus on performance to allow the collection of events from different sources without complexity.
High Performance
Data Parsing
Reliability and Data Integrity
Networking
Security: built-in TLS/SSL support
Asynchronous I/O
More than 80 built-in plugins available
Extensibility
Write any input, filter or output plugin in C language
Create new streams of data using query results
Aggregation Windows
Data analysis and prediction: Timeseries forecasting
Portable: runs on Linux, MacOS, Windows and BSD systems
Parsers are fully configurable and are independently and optionally handled by each input plugin; for more details please refer to the Parsers section.
The only input plugin that does NOT assign tags is the Forward input. This plugin speaks the Fluentd wire protocol called Forward where every Event already comes with a Tag associated. Fluent Bit will always use the incoming Tag set by the client.
A Tagged record must always have a Matching rule. To learn more about Tags and Matches check the Routing section.
Fluent Bit always handles every Event message as a structured message. For performance reasons, we use a binary serialization data format called MessagePack.
Consider MessagePack as a binary version of JSON on steroids.
For more details about the Filters available and their usage, please refer to the Filters section.
Convert your unstructured messages using our parsers: JSON, Regex, LTSV and Logfmt
Backpressure Handling
Data Buffering in memory and file system
Pluggable Architecture and Extensibility: Inputs, Filters and Outputs
Bonus: write Filters in Lua or Output plugins in Golang
Monitoring: expose internal metrics over HTTP in JSON and Prometheus format
Stream Processing: perform data selection and transformation using simple SQL queries
Fluent Bit is a sub-component of the Fluentd project ecosystem; it's licensed under the terms of the Apache License v2.0. This project was created by Treasure Data, which is its current primary sponsor.
Nowadays Fluent Bit gets contributions from several companies and individuals and, same as Fluentd, it's hosted as a CNCF subproject.
The following article covers the relevant notes for users upgrading from previous Fluent Bit versions. We aim to cover compatibility changes that you must be aware of.
For more details about changes on each release please refer to the Official Release Notes.
If you are migrating from a previous version of Fluent Bit, please review the following important changes:
By default the tail plugin now follows a file from the end once the service starts (the old behavior was to always read from the beginning). Every file found at start is followed from its last position; new files discovered at runtime or rotated files are read from the beginning.
If you want to keep the old behavior you can set the option read_from_head to true.
The project_id of the resource in LogEntry sent to Google Cloud Logging will be set to the project ID rather than the project number. To learn the difference between project ID and project number, see the Google Cloud documentation for more details.
If you have any existing queries based on the resource's project_id, please update your query accordingly.
The migration from v1.4 to v1.5 is pretty straightforward.
If you enabled keepalive mode in your configuration, note that this configuration property has been renamed to net.keepalive. Now all Network I/O keepalive is enabled by default; to learn more about this and other associated configuration properties read the Networking Administration section.
If you use the Elasticsearch output plugin, note the default value of type changed from flb_type to _doc. Many versions of Elasticsearch will tolerate this, but ES v5.6 through v6.1 require a type without a leading underscore. See the Elasticsearch output plugin documentation FAQ entry for more.
If you are migrating from Fluent Bit v1.3, there are no breaking changes. Just new exciting features to enjoy :)
If you are migrating from Fluent Bit v1.2 to v1.3, there are no breaking changes. If you are upgrading from an older version please review the incremental changes below.
In Fluent Bit v1.2 we have fixed many issues associated with JSON encoding and decoding; hence, when parsing Docker logs it is no longer necessary to use decoders. The new Docker parser looks like this:
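A sketch of the parser definition (time format shown as commonly documented; verify against your release):

```
[PARSER]
    Name         docker
    Format       json
    Time_Key     time
    Time_Format  %Y-%m-%dT%H:%M:%S.%L
    Time_Keep    On
```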
Note: again, do not use decoders.
We have also made improvements on how the Kubernetes Filter handles the stringified log message. If the option Merge_Log is enabled, it will try to handle the log content as a JSON map; if so, it will add the keys to the root map.
In addition, we have fixed and improved the option called Merge_Log_Key. If a merge log succeeds, all new keys will be packaged under the key specified by this option; a suggested configuration is as follows:
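A minimal sketch of such a filter section (the key name log_processed is an assumption for illustration):

```
[FILTER]
    Name           kubernetes
    Match          kube.*
    Merge_Log      On
    Merge_Log_Key  log_processed
```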
As an example, if the original log content is the following map:
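(Illustrative values:)

```
{"key1": 123, "key2": "abc"}
```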
the final record will be composed as follows:
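(Continuing the illustration, assuming Merge_Log_Key is set to log_processed:)

```
{
  "log": "{\"key1\": 123, \"key2\": \"abc\"}",
  "log_processed": {
    "key1": 123,
    "key2": "abc"
  }
}
```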
If you are upgrading from Fluent Bit <= 1.0.x you should take into consideration the following relevant changes when switching to the Fluent Bit v1.1 series:
We introduced a new configuration property called Kube_Tag_Prefix to help Tag prefix resolution and address an unexpected behavior that landed in previous versions.
During the 1.0.x release cycle, a commit in the Tail input plugin changed the default behavior of how the Tag was composed when using the wildcard for expansion, breaking compatibility with other services. Consider the following configuration example:
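A sketch of such a configuration (paths are illustrative):

```
[INPUT]
    Name  tail
    Path  /var/log/containers/*.log
    Tag   kube.*
```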
The expected behavior is that Tag will be expanded to:
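For a monitored file such as /var/log/containers/apache.log (hypothetical name), that means:

```
kube.var.log.containers.apache.log
```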
but the change introduced in 1.0 series switched from absolute path to the base file name only:
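(Same hypothetical file:)

```
kube.apache.log
```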
On Fluent Bit v1.1 release we restored to our default behavior and now the Tag is composed using the absolute path of the monitored file.
Having absolute path in the Tag is relevant for routing and flexible configuration where it also helps to keep compatibility with Fluentd behavior.
This behavior switch in the Tail input plugin affects how the Kubernetes Filter operates. When the filter is used it needs to perform a local metadata lookup based on the file names when using Tail as a source. Now with the new Kube_Tag_Prefix option you can specify the prefix used in the Tail input plugin; for the configuration example above the new configuration will look as follows:
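A sketch combining the Tail input above with the filter (values follow from the example paths):

```
[INPUT]
    Name  tail
    Path  /var/log/containers/*.log
    Tag   kube.*

[FILTER]
    Name             kubernetes
    Match            *
    Kube_Tag_Prefix  kube.var.log.containers.
```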
So the proper Kube_Tag_Prefix value must be composed of the Tag prefix set in the Tail input plugin plus the monitored directory converted by replacing slashes with dots.
Performance and Data Safety
When Fluent Bit processes data, it uses the system memory (heap) as a primary and temporary place to store the record logs before they get delivered; in this private memory area the records are processed.
Buffering refers to the ability to store the records somewhere, and while they are processed and delivered, still be able to store more. Buffering in memory is the fastest mechanism, but there are certain scenarios where the mechanism requires special strategies to deal with backpressure, data safety or reduce memory consumption by the service in constraint environments.
Network failures or latency on third party services are pretty common, and in scenarios where we cannot deliver data as fast as we receive it, we will likely face backpressure.
Our buffering strategies are designed to solve problems associated with backpressure and general delivery failures.
As buffering strategies, Fluent Bit offers a primary buffering mechanism in memory and an optional secondary one using the file system. With this hybrid solution you can accommodate any use case safely and keep high performance while processing your data.
Both mechanisms are not exclusive: when data is ready to be processed or delivered it will always be in memory, while other data in the queue might be in the file system until it is ready to be processed and moved up to memory.
To learn more about the buffering configuration in Fluent Bit, please jump to the Buffering & Storage section.
Fluent Bit is distributed as the td-agent-bit package and is available for the latest Amazon Linux 2. The following architectures are supported:
x86_64
aarch64 / arm64v8
We provide td-agent-bit through a Yum repository. In order to add the repository reference to your system, please add a new file called td-agent-bit.repo in /etc/yum.repos.d/ with the following content:
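A sketch of the repo definition (the baseurl shown is an assumption; confirm it against the official packages server):

```
[td-agent-bit]
name = TD Agent Bit
baseurl = https://packages.fluentbit.io/amazonlinux/2/
gpgcheck=1
gpgkey=https://packages.fluentbit.io/fluentbit.key
enabled=1
```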
Note: we encourage you to always enable gpgcheck for security reasons. All our packages are signed.
The GPG Key fingerprint is F209 D876 2A60 CD49 E680 633B 4FF8 368B 6EA0 722A
Once your repository is configured, run the following command to install it:
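For example:

```
sudo yum install td-agent-bit
```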
Now the following step is to instruct systemd to enable the service:
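For example, using systemd directly (a sketch; the service name follows the package name):

```
sudo systemctl start td-agent-bit
```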
If you do a status check, you should see a similar output like this:
The default configuration of td-agent-bit collects CPU usage metrics and sends the records to the standard output; you can see the outgoing data in your /var/log/messages file.
For production systems, we strongly suggest that you always get the latest stable release from our web site. You can get the official tarballs (.tar.gz) using the following link pattern: https://fluentbit.io/releases/1.7/fluent-bit-<release version>.tar.gz. For example, for version 1.7.4 the link is the following: https://fluentbit.io/releases/1.7/fluent-bit-1.7.4.tar.gz
People who aim to contribute to the project by testing or extending the code base can get the development version from our Git repository:
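For example:

```
git clone https://github.com/fluent/fluent-bit
```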
Note that our master branch is where the development of Fluent Bit happens. Since it's a development version, expect issues when compiling or at run time.
We encourage everybody to help us test every development version; at the end this is what will become stable.
In normal operation mode Fluent Bit can be configured through text files or using specific arguments in the command line. While this is the ideal deployment case, there are scenarios where a more restricted configuration is required: static configuration mode.
Static configuration mode aims to include a built-in configuration in the final binary of Fluent Bit, disabling the usage of external files or flags at runtime.
The following steps assume you are familiar with configuring Fluent Bit using text files and that you have experience building it from scratch as described in the Build and Install section.
In your file system prepare a specific directory that will be used as an entry point for the build system to look up and parse the configuration files. It is mandatory that this directory contain at a minimum one configuration file called fluent-bit.conf with the required SERVICE, INPUT and OUTPUT sections. As an example, create a new fluent-bit.conf file with the following content:
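A minimal sketch consistent with the description below:

```
[SERVICE]
    Flush     1
    Daemon    off
    Log_Level info

[INPUT]
    Name      cpu

[OUTPUT]
    Name      stdout
    Match     *
```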
The configuration provided above will calculate CPU metrics from the running system and print them to the standard output interface.
Inside the Fluent Bit source code, get into the build/ directory and run CMake appending the FLB_STATIC_CONF option pointing to the configuration directory recently created, e.g.:
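For example (the directory path is a placeholder):

```
cd fluent-bit/build/
cmake -DFLB_STATIC_CONF=/path/to/my/confdir ../
```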
then build it:
At this point the generated fluent-bit binary is ready to run without the need for further configuration:
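For example:

```
bin/fluent-bit
```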
Fluent Bit has very low CPU and memory consumption and is compatible with most x86, x86_64, arm32v7 and arm64v8 based platforms. In order to build it you need the following components in your system:
Compiler: GCC or clang
CMake
Flex & Bison: only if you enable the Stream Processor or Record Accessor feature (both enabled by default)
In the core there are no other dependencies. Certain features depend on third party components, like output plugins with special backend libraries (e.g. Kafka); those are included in the main source code repository.
Fluent Bit is distributed as the td-agent-bit package and is available for the latest stable CentOS system. The following architectures are supported:
x86_64
aarch64 / arm64v8
We provide td-agent-bit through a Yum repository. In order to add the repository reference to your system, please add a new file called td-agent-bit.repo in /etc/yum.repos.d/ with the following content:
Note: we encourage you to always enable gpgcheck for security reasons. All our packages are signed.
The GPG Key fingerprint is F209 D876 2A60 CD49 E680 633B 4FF8 368B 6EA0 722A
Once your repository is configured, run the following command to install it:
Now the following step is to instruct Systemd to enable the service:
If you do a status check, you should see a similar output like this:
The default configuration of td-agent-bit collects CPU usage metrics and sends the records to the standard output; you can see the outgoing data in your /var/log/messages file.
The following serves as a guide on how to install/deploy/upgrade Fluent Bit
Deployment Type
Instructions
Kubernetes
Docker
Containers on AWS
Operating System
Installation Instructions
CentOS / Red Hat
Ubuntu
Debian
Amazon Linux
Raspbian / Raspberry Pi
Yocto / Embedded Linux
Operating System
Installation Instructions
Windows Server 2019
Windows 10 2019.03
Operating System
Installation Instructions
Linux, FreeBSD, MacOS
Windows
Fluent Bit packages are also provided by enterprise providers for older end-of-life versions, Unix systems, and additional support and features. A list is provided at fluentbit.io/enterprise.
Fluent Bit uses CMake as its build system. The suggested procedure to prepare the build system consists of the following steps:
In the following steps you can find the exact commands to build and install the project with the default options. If you already know how CMake works you can skip this part and look at the build options available. Note that Fluent Bit requires CMake 3.x; you may need to use cmake3 instead of cmake to complete the following steps on your system.
Change to the build/ directory inside the Fluent Bit sources:
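For example:

```
cd build/
```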
Let CMake configure the project specifying where the root path is located:
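For example:

```
cmake ../
```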
Now you are ready to start the compilation process through the simple make command:
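That is:

```
make
```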
To continue installing the binary on the system just do:
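That is:

```
make install
```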
It's likely you may need root privileges, so you can try prefixing the command with sudo.
Fluent Bit provides certain options to CMake that can be enabled or disabled when configuring; please refer to the following tables under the General Options, Development Options, Input Plugins and Output Plugins sections.
| Option | Description | Default |
| --- | --- | --- |
| FLB_ALL | Enable all features available | No |
| FLB_JEMALLOC | Use Jemalloc as default memory allocator | No |
| FLB_TLS | Build with SSL/TLS support | Yes |
| FLB_BINARY | Build executable | Yes |
| FLB_EXAMPLES | Build examples | Yes |
| FLB_SHARED_LIB | Build shared library | Yes |
| FLB_MTRACE | Enable mtrace support | No |
| FLB_INOTIFY | Enable Inotify support | Yes |
| FLB_POSIX_TLS | Force POSIX thread storage | No |
| FLB_SQLDB | Enable SQL embedded database support | No |
| FLB_HTTP_SERVER | Enable HTTP Server | No |
| FLB_LUAJIT | Enable Lua scripting support | Yes |
| FLB_RECORD_ACCESSOR | Enable record accessor | Yes |
| FLB_SIGNV4 | Enable AWS Signv4 support | Yes |
| FLB_STATIC_CONF | Build binary using static configuration files. The value of this option must be a directory containing configuration files. | |
| FLB_STREAM_PROCESSOR | Enable Stream Processor | Yes |
| Option | Description | Default |
| --- | --- | --- |
| FLB_DEBUG | Build binaries with debug symbols | No |
| FLB_VALGRIND | Enable Valgrind support | No |
| FLB_TRACE | Enable trace mode | No |
| FLB_SMALL | Minimise binary size | No |
| FLB_TESTS_RUNTIME | Enable runtime tests | No |
| FLB_TESTS_INTERNAL | Enable internal tests | No |
| FLB_TESTS | Enable tests | No |
| FLB_BACKTRACE | Enable backtrace/stacktrace support | Yes |
The input plugins provide features to gather information from a specific source type, which can be a network interface, a built-in metric or a specific input device. The following input plugins are available:
| Option | Description | Default |
| --- | --- | --- |
| FLB_IN_COLLECTD | Enable Collectd input plugin | On |
| FLB_IN_CPU | Enable CPU input plugin | On |
| FLB_IN_DISK | Enable Disk I/O Metrics input plugin | On |
| FLB_IN_DOCKER | Enable Docker metrics input plugin | On |
| FLB_IN_EXEC | Enable Exec input plugin | On |
| FLB_IN_FORWARD | Enable Forward input plugin | On |
| FLB_IN_HEAD | Enable Head input plugin | On |
| FLB_IN_HEALTH | Enable Health input plugin | On |
| FLB_IN_KMSG | Enable Kernel log input plugin | On |
| FLB_IN_MEM | Enable Memory input plugin | On |
| FLB_IN_MQTT | Enable MQTT Server input plugin | On |
| FLB_IN_NETIF | Enable Network I/O metrics input plugin | On |
| FLB_IN_PROC | Enable Process monitoring input plugin | On |
| FLB_IN_RANDOM | Enable Random input plugin | On |
| FLB_IN_SERIAL | Enable Serial input plugin | On |
| FLB_IN_STDIN | Enable Standard input plugin | On |
| FLB_IN_SYSLOG | Enable Syslog input plugin | On |
| FLB_IN_SYSTEMD | Enable Systemd / Journald input plugin | On |
| FLB_IN_TAIL | Enable Tail (follow files) input plugin | On |
| FLB_IN_TCP | Enable TCP input plugin | On |
| FLB_IN_THERMAL | Enable system temperature(s) input plugin | On |
| FLB_IN_WINLOG | Enable Windows Event Log input plugin (Windows Only) | On |
The filter plugins allow you to modify, enrich or drop records. The following table describes the filters available in this version:
| Option | Description | Default |
| --- | --- | --- |
| FLB_FILTER_AWS | Enable AWS metadata filter | On |
| FLB_FILTER_EXPECT | Enable Expect data test filter | On |
| FLB_FILTER_GREP | Enable Grep filter | On |
| FLB_FILTER_KUBERNETES | Enable Kubernetes metadata filter | On |
| FLB_FILTER_LUA | Enable Lua scripting filter | On |
| FLB_FILTER_MODIFY | Enable Modify filter | On |
| FLB_FILTER_NEST | Enable Nest filter | On |
| FLB_FILTER_PARSER | Enable Parser filter | On |
| FLB_FILTER_RECORD_MODIFIER | Enable Record Modifier filter | On |
| FLB_FILTER_REWRITE_TAG | Enable Rewrite Tag filter | On |
| FLB_FILTER_STDOUT | Enable Stdout filter | On |
| FLB_FILTER_THROTTLE | Enable Throttle filter | On |
The output plugins give the ability to flush the information to some external interface, service or terminal. The following table describes the output plugins available as of this version:
| Option | Description | Default |
| --- | --- | --- |
| FLB_OUT_AZURE | Enable Microsoft Azure output plugin | On |
| FLB_OUT_BIGQUERY | Enable Google BigQuery output plugin | On |
| FLB_OUT_COUNTER | Enable Counter output plugin | On |
| FLB_OUT_CLOUDWATCH_LOGS | Enable Amazon CloudWatch output plugin | On |
| FLB_OUT_DATADOG | Enable Datadog output plugin | On |
| FLB_OUT_ES | Enable Elasticsearch output plugin | On |
| FLB_OUT_FILE | Enable File output plugin | On |
| FLB_OUT_KINESIS_FIREHOSE | Enable Amazon Kinesis Data Firehose output plugin | On |
| FLB_OUT_KINESIS_STREAMS | Enable Amazon Kinesis Data Streams output plugin | On |
| FLB_OUT_FLOWCOUNTER | Enable Flowcounter output plugin | On |
| FLB_OUT_FORWARD | Enable Forward output plugin | On |
| FLB_OUT_GELF | Enable Gelf output plugin | On |
| FLB_OUT_HTTP | Enable HTTP output plugin | On |
| FLB_OUT_INFLUXDB | Enable InfluxDB output plugin | On |
| FLB_OUT_KAFKA | Enable Kafka output | Off |
| FLB_OUT_KAFKA_REST | Enable Kafka REST Proxy output plugin | On |
| FLB_OUT_LIB | Enable Lib output plugin | On |
| FLB_OUT_NATS | Enable NATS output plugin | Off |
| FLB_OUT_NULL | Enable NULL output plugin | On |
| FLB_OUT_PGSQL | Enable PostgreSQL output plugin | On |
| FLB_OUT_PLOT | Enable Plot output plugin | On |
| FLB_OUT_SLACK | Enable Slack output plugin | On |
| FLB_OUT_S3 | Enable Amazon S3 output plugin | On |
| FLB_OUT_SPLUNK | Enable Splunk output plugin | On |
| FLB_OUT_STACKDRIVER | Enable Google Stackdriver output plugin | On |
| FLB_OUT_STDOUT | Enable STDOUT output plugin | On |
| FLB_OUT_TCP | Enable TCP/TLS output plugin | On |
| FLB_OUT_TD | Enable Treasure Data output plugin | On |
Data processing with reliability
Previously defined in the Buffering concept section, the buffer phase in the pipeline aims to provide a unified and persistent mechanism to store your data, either using the primary in-memory model or using the filesystem-based mode.
The buffer phase already contains the data in an immutable state, meaning no other filter can be applied.
Note that buffered data is not raw text, it's in Fluent Bit's internal binary representation.
Fluent Bit offers a buffering mechanism in the file system that acts as a backup system to avoid data loss in case of system failures.
Destinations for your data: databases, cloud services and more!
The output interface allows us to define destinations for the data. Common destinations are remote services, the local file system or a standard interface. Outputs are implemented as plugins and there are many available.
When an output plugin is loaded, an internal instance is created. Every instance has its own independent configuration. Configuration keys are often called properties.
Every output plugin has its own documentation section specifying how it can be used and what properties are available.
For more details, please refer to the Output Plugins section.
The way to gather data from your sources
Fluent Bit provides different Input Plugins to gather information from different sources, some of them just collect data from log files while others can gather metrics information from the operating system. There are many plugins for different needs.
When an input plugin is loaded, an internal instance is created. Every instance has its own and independent configuration. Configuration keys are often called properties.
Every input plugin has its own documentation section where it's specified how it can be used and what properties are available.
For more details, please refer to the Input Plugins section.
Create flexible routing rules
Routing is a core feature that allows you to route your data through Filters and finally to one or multiple destinations. The router relies on the concept of Tags and Matching rules.
There are two important concepts in Routing:
Tag
Match
When data is generated by the input plugins, it comes with a Tag (most of the time the Tag is configured manually); the Tag is a human-readable indicator that helps to identify the data source.
In order to define where the data should be routed, a Match rule must be specified in the output configuration.
Consider the following configuration example that aims to deliver CPU metrics to an Elasticsearch database and Memory metrics to the standard output interface:
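A sketch of such a configuration (tags and the Elasticsearch connection details are illustrative):

```
[INPUT]
    Name   cpu
    Tag    my_cpu

[INPUT]
    Name   mem
    Tag    my_mem

[OUTPUT]
    Name   es
    Match  my_cpu

[OUTPUT]
    Name   stdout
    Match  my_mem
```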
Note: the above is a simple example demonstrating how Routing is configured.
Routing works automatically by reading the Input Tags and the Output Match rules. If some data has a Tag that doesn't match at routing time, the data is deleted.
Routing is flexible enough to support wildcards in the Match pattern. The example below defines a common destination for both sources of data:
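Continuing the sketch above:

```
[INPUT]
    Name   cpu
    Tag    my_cpu

[INPUT]
    Name   mem
    Tag    my_mem

[OUTPUT]
    Name   stdout
    Match  my_*
```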
The match rule is set to my_* which means it will match any Tag that starts with my_.
The following operating systems and architectures are supported in Fluent Bit.
Operating System
Distribution
Architectures
Linux
x86_64, Arm64v8
x86_64, Arm64v8
x86_64, Arm64v8
x86_64, Arm64v8
x86_64, Arm64v8
x86_64, Arm64v8
x86_64, Arm64v8
x86_64, Arm64v8
x86_64, Arm64v8
x86_64
Arm32v7
Arm32v7
Arm32v7
Windows
x86_64, x86
x86_64, x86
From an architecture support perspective, Fluent Bit is fully functional on x86_64, Arm64v8 and Arm32v7 based processors.
Fluent Bit can also work on macOS and *BSD systems, but not all plugins will be available on all platforms. Official support will be expanding based on community demand. Fluent Bit may run on older operating systems, though it will need to be built from source, or use custom packages from enterprise providers.
Fluent Bit is distributed as td-agent-bit package and is available for the latest stable Ubuntu system: Focal Fossa.
The first step is to add our server GPG key to your keyring to ensure you can get our signed packages:
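A sketch of the usual command (the key URL is an assumption; confirm against the official docs):

```
wget -qO - https://packages.fluentbit.io/fluentbit.key | sudo apt-key add -
```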
On Ubuntu, you need to add our APT server entry to your sources list; please add the following content at the bottom of your /etc/apt/sources.list file:
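A sketch for Ubuntu 20.04 (Focal); the repository URL is an assumption:

```
deb https://packages.fluentbit.io/ubuntu/focal focal main
```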
Now let your system update the apt database:
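For example:

```
sudo apt-get update
```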
Using the following apt-get command you are now able to install the latest td-agent-bit:
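For example:

```
sudo apt-get install td-agent-bit
```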
Now the following step is to instruct systemd to enable the service:
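For example:

```
sudo service td-agent-bit start
```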
If you do a status check, you should see a similar output like this:
The default configuration of td-agent-bit collects CPU usage metrics and sends the records to the standard output; you can see the outgoing data in your /var/log/syslog file.
Learn how to .
Fluent Bit is distributed as the td-agent-bit package and is available for Raspberry Pi, specifically for the Raspbian distribution. The following versions are supported:
Raspbian Buster (10)
Raspbian Stretch (9)
Raspbian Jessie (8)
The first step is to add our server GPG key to your keyring to ensure you can get our signed packages:
On Debian and derivative systems such as Raspbian, you need to add our APT server entry to your sources list; please add the following content at the bottom of your /etc/apt/sources.list file:
Now let your system update the apt database:
Using the following apt-get command you are now able to install the latest td-agent-bit:
Now the following step is to instruct systemd to enable the service:
If you do a status check, you should see a similar output like this:
The default configuration of td-agent-bit collects CPU usage metrics and sends the records to the standard output; you can see the outgoing data in your /var/log/syslog file.
Fluent Bit is distributed as td-agent-bit package and is available for the latest (and old) stable Debian systems: Buster, Stretch and Jessie.
The first step is to add our server GPG key to your keyring to ensure you can get our signed packages:
On Debian, you need to add our APT server entry to your sources list; please add the following content at the bottom of your /etc/apt/sources.list file:
Now let your system update the apt database:
Using the following apt-get command you are now able to install the latest td-agent-bit:
Now the following step is to instruct systemd to enable the service:
If you do a status check, you should see a similar output like this:
The default configuration of td-agent-bit collects CPU usage metrics and sends the records to the standard output; you can see the outgoing data in your /var/log/syslog file.
The Fluent Bit source code provides Bitbake recipes to configure, build and package the software for a Yocto based image. Note that the specific steps for using these recipes in your Yocto environment (Poky) are out of the scope of this documentation.
We distribute two main recipes, one for testing/dev purposes and another with the latest stable release.
It's strongly recommended to always use the stable release recipe of Fluent Bit, and not the one from Git master, for production deployments.
Fluent Bit >= v1.1.x fully supports x86_64, x86, arm32v7 and arm64v8.
Fluent Bit container images are available on Docker Hub ready for production usage. Current available images can be deployed in multiple architectures.
The following table describes the tags that are available on the Docker Hub repository:
It's strongly suggested that you always use the latest image of Fluent Bit.
In addition, the main manifest provides images for arm64v8 and arm32v7 architectures. From a deployment perspective there is no need to specify an architecture, the container client tool that pulls the image gets the proper layer for the running architecture.
For every architecture we build the layers using the following base images:
Download the last stable image from 1.7 series:
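For example, assuming the fluent/fluent-bit repository:

```
docker pull fluent/fluent-bit:1.7
```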
Once the image is in place, now run the following (useless) test which makes Fluent Bit measure CPU usage by the container:
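A sketch of such a test run (the binary path inside the image is an assumption):

```
docker run -ti fluent/fluent-bit:1.7 \
  /fluent-bit/bin/fluent-bit -i cpu -o stdout -f 1
```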
That command will let Fluent Bit measure CPU usage every second and flush the results to the standard output, e.g:
Alpine Linux uses the Musl C library instead of Glibc. Musl is not fully compatible with Glibc, which generated many issues in the following areas when used with Fluent Bit:
Memory Allocator: to run Fluent Bit properly in high-load environments, we use Jemalloc as the default memory allocator, which reduces fragmentation and provides better performance for our needs. Jemalloc cannot run smoothly with Musl and requires extra work.
Alpine Linux's Musl function bootstrapping has a compatibility issue when loading Golang shared libraries; this generates problems when trying to load Golang output plugins in Fluent Bit.
Alpine Linux's Musl time format parser does not support Glibc extensions.
The maintainers' preference in terms of base image, due to security and maintenance reasons, is Distroless and Debian.
Our Docker container images are deployed thousands of times per day; we take security and stability very seriously.
The latest tag most of the time points to the latest stable image. When we release a major update to Fluent Bit, for example from v1.3.x to v1.4.0, we don't move the latest tag until 2 weeks after the release. That gives us extra time to verify with our community that everything works as expected.
AWS maintains a distribution of Fluent Bit combining the latest official release with a set of Go Plugins for sending logs to AWS services. AWS and Fluent Bit are working together to rewrite their plugins for inclusion in the official Fluent Bit distribution.
Currently, the image contains Go Plugins for:
AWS vends their container image via Docker Hub and a set of highly available regional Amazon ECR repositories. For more information, see the AWS for Fluent Bit GitHub repository.
The AWS for Fluent Bit image uses a custom versioning scheme because it contains multiple projects. To see what each release contains, check out the release notes on GitHub.
AWS vends SSM Public Parameters with the regional repository link for each image. These parameters can be queried by any AWS account.
To see a list of available version tags in a given region, run the following command:
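A sketch of such a query (the parameter path is an assumption based on AWS's published SSM parameters; adjust the region as needed):

```
aws ssm get-parameters-by-path \
  --region us-east-1 \
  --path /aws/service/aws-for-fluent-bit/
```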
To see the ECR repository URI for a given image tag in a given region, run the following:
You can use these SSM public parameters as parameters in your CloudFormation templates:
Kubernetes Production Grade Log Processor
Fluent Bit is a lightweight and extensible Log Processor that comes with full support for Kubernetes:
Process Kubernetes containers logs from the file system or Systemd/Journald.
Enrich logs with Kubernetes Metadata.
Centralize your logs in third party storage services like Elasticsearch, InfluxDB, HTTP, etc.
Before getting started it is important to understand how Fluent Bit will be deployed. Kubernetes manages a cluster of nodes, so our log agent tool will need to run on every node to collect logs from every POD; hence Fluent Bit is deployed as a DaemonSet (a POD that runs on every node of the cluster).
When Fluent Bit runs, it will read, parse and filter the logs of every POD and will enrich each entry with the following information (metadata):
Pod Name
Pod ID
Container Name
Container ID
Labels
Annotations
To obtain this information, a built-in filter plugin called kubernetes talks to the Kubernetes API Server to retrieve relevant information such as the pod_id, labels and annotations; other fields such as pod_name, container_id and container_name are retrieved locally from the log file names. All of this is handled automatically; no intervention is required from a configuration aspect.
The next step is to create a ConfigMap that will be used by our Fluent Bit DaemonSet:
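A sketch using the manifests from the fluent/fluent-bit-kubernetes-logging repository (URL assumed; adapt to your own manifests):

```
kubectl create -f https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/output/elasticsearch/fluent-bit-configmap.yaml
```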
For Kubernetes versions older than v1.16, the DaemonSet resource is not available on apps/v1; the resource is available on apiVersion: extensions/v1beta1. Our current DaemonSet YAML files use the new apiVersion.
If you are using an older Kubernetes version, manually grab a copy of your DaemonSet YAML file and replace the value of apiVersion from apps/v1 to extensions/v1beta1.
You can read more about this deprecation in the Kubernetes v1.14 Changelog.
Fluent Bit DaemonSet ready to be used with Elasticsearch on a normal Kubernetes Cluster:
If you are using Minikube for testing purposes, use the following alternative DaemonSet manifest:
To add the Fluent Helm Charts repo use the following command
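For example (repository URL as published by the Fluent Helm Charts project):

```
helm repo add fluent https://fluent.github.io/helm-charts
```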
To validate that the repo was added you can run helm search repo fluent to ensure the charts were added. The default chart can then be installed by running the following:
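For example (the release name fluent-bit is a placeholder):

```
helm upgrade --install fluent-bit fluent/fluent-bit
```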
The default configuration of Fluent Bit makes sure of the following:
Consume all containers logs from the running Node.
The Kubernetes filter will enrich the logs with Kubernetes metadata, specifically labels and annotations. The filter only goes to the API Server when it cannot find the cached info, otherwise it uses the cache.
There is an option called Retry_Limit set to False, which means that if Fluent Bit cannot flush the records to Elasticsearch it will retry indefinitely until it succeeds.
Fluent Bit by default assumes that logs are formatted by the Docker interface standard. However, when using CRI you can run into issues with malformed JSON if you do not modify the parser used. Fluent Bit includes a CRI log parser that can be used instead. An example of the parser is seen below:
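A sketch of the CRI parser (regex as commonly documented; verify against your release):

```
[PARSER]
    Name        cri
    Format      regex
    Regex       ^(?<time>[^ ]+) (?<stream>stdout|stderr) (?<logtag>[^ ]*) (?<message>.*)$
    Time_Key    time
    Time_Format %Y-%m-%dT%H:%M:%S.%L%z
```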
To use this parser, change the Input section of your configuration from docker to cri.
Since v1.5.0, Fluent Bit supports deployment to Windows pods.
When deploying Fluent Bit to Kubernetes, there are three log files that you need to pay attention to.
C:\k\kubelet.err.log
This is the error log file from kubelet daemon running on host.
You will need to retain this file for future troubleshooting (to debug deployment failures etc.)
C:\var\log\containers\<pod>_<namespace>_<container>-<docker>.log
This is the main log file you need to watch. Configure Fluent Bit to follow this file.
It is actually a symlink to the Docker log file in C:\ProgramData\, with some additional metadata in its file name.
C:\ProgramData\Docker\containers\<docker>\<docker>.log
This is the log file produced by Docker.
Normally you don't directly read from this file, but you need to make sure that this file is visible from Fluent Bit.
Typically, your deployment yaml contains the following volume configuration.
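A sketch of such volume mounts (the image name and overall structure are illustrative, not the exact manifest):

```
spec:
  containers:
    - name: fluent-bit
      image: my-repo/fluent-bit:windows    # hypothetical image
      volumeMounts:
        - mountPath: C:\k
          name: k
        - mountPath: C:\var\log
          name: varlog
        - mountPath: C:\ProgramData
          name: progdata
  volumes:
    - name: k
      hostPath:
        path: C:\k
    - name: varlog
      hostPath:
        path: C:\var\log
    - name: progdata
      hostPath:
        path: C:\ProgramData
```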
DNS_Retries - Retries N times until the network starts working (default: 6)
DNS_Wait_Time - Lookup interval between network status checks (default: 30)
By default, Fluent Bit waits for 3 minutes (30 seconds x 6 times). If it's not enough for you, tweak the configuration as follows.
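A sketch of tweaking these values in the filter section:

```
[FILTER]
    Name           kubernetes
    Match          kube.*
    DNS_Retries    10
    DNS_Wait_Time  30
```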
Our x86_64 stable image is based on Distroless, focusing on security; it contains just the Fluent Bit binary, minimal system libraries and basic configuration. Optionally, we provide debug images for x86_64 which contain Busybox and can be used for troubleshooting or testing purposes.
Our Kubernetes Filter plugin is fully inspired by the Fluentd Kubernetes Metadata Filter.
Fluent Bit must be deployed as a DaemonSet so that it is available on every node of your Kubernetes cluster. To get started run the following commands to create the namespace, service account and role setup:
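A sketch using the manifests from the fluent/fluent-bit-kubernetes-logging repository (URLs assumed; adapt to your own manifests):

```
kubectl create namespace logging
kubectl create -f https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/fluent-bit-service-account.yaml
kubectl create -f https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/fluent-bit-role.yaml
kubectl create -f https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/fluent-bit-role-binding.yaml
```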
The default ConfigMap assumes that dockershim is utilized for the cluster. If a CRI runtime such as containerd or CRI-O is being utilized, the CRI parser should be used instead. More specifically, change the Parser described in input-kubernetes.conf from docker to cri.
Helm is a package manager for Kubernetes and allows you to quickly deploy application packages into your running cluster. Fluent Bit is distributed via a helm chart found in the Fluent Helm Charts repo: https://github.com/fluent/helm-charts.
The default chart values include configuration to read container logs (with Docker parsing) and systemd logs, apply Kubernetes metadata enrichment, and finally output to an Elasticsearch cluster. You can modify the values file included to specify additional outputs, health checks, monitoring endpoints, or other configuration options.
The Tail input plugin will not append more than 5MB into the engine until the records are flushed to the Elasticsearch backend. This limit aims to provide a workaround for backpressure scenarios.
The default backend in the configuration is Elasticsearch, set by the Elasticsearch output plugin. It uses the Logstash format to ingest the logs. If you need a different Index and Type, please refer to the plugin option and do your own adjustments.
Assuming the basic volume configuration described above, you can apply the following config to start logging:
Windows pods often lack working DNS immediately after boot. To mitigate this issue, filter_kubernetes provides a built-in mechanism to wait until the network starts up:
| Architecture | Base Image |
| --- | --- |
| x86_64 | Distroless |
| arm64v8 | arm64v8/debian:buster-slim |
| arm32v7 | arm32v7/debian:buster-slim |
| Version | Recipe | Description |
| --- | --- | --- |
| devel | | Build Fluent Bit from Git master. This recipe is intended for development and testing purposes only. |
| v1.7.9 | | Build the latest stable version of Fluent Bit. |
| Tag(s) | Manifest Architectures | Description |
| --- | --- | --- |
| 1.7 | x86_64, arm64v8, arm32v7 | Latest release of the 1.7.x series. |
| 1.7.9 | x86_64, arm64v8, arm32v7 | Release v1.7.9 |
| 1.7-debug, 1.7.9-debug | x86_64 | v1.7.x releases + Busybox |
| 1.7.8 | x86_64, arm64v8, arm32v7 | Release v1.7.8 |
| 1.7-debug, 1.7.8-debug | x86_64 | v1.7.x releases + Busybox |
| 1.7.7 | x86_64, arm64v8, arm32v7 | Release v1.7.7 |
| 1.7-debug, 1.7.7-debug | x86_64 | v1.7.x releases + Busybox |
| 1.7.6 | x86_64, arm64v8, arm32v7 | Release v1.7.6 |
| 1.7-debug, 1.7.6-debug | x86_64 | v1.7.x releases + Busybox |
| 1.7.5 | x86_64, arm64v8, arm32v7 | Release v1.7.5 |
| 1.7-debug, 1.7.5-debug | x86_64 | v1.7.x releases + Busybox |
| 1.7.4 | x86_64, arm64v8, arm32v7 | Release v1.7.4 |
| 1.7-debug, 1.7.4-debug | x86_64 | v1.7.x releases + Busybox |
| 1.7.3 | x86_64, arm64v8, arm32v7 | Release v1.7.3 |
| 1.7-debug, 1.7.3-debug | x86_64 | v1.7.x releases + Busybox |
| 1.7.2 | x86_64, arm64v8, arm32v7 | Release v1.7.2 |
| 1.7-debug, 1.7.2-debug | x86_64 | v1.7.x releases + Busybox |
| 1.7.1 | x86_64, arm64v8, arm32v7 | Release v1.7.1 |
| 1.7-debug, 1.7.1-debug | x86_64 | v1.7.x releases + Busybox |
| 1.7.0 | x86_64, arm64v8, arm32v7 | Release v1.7.0 |
| 1.7-debug, 1.7.0-debug | x86_64 | v1.7.x releases + Busybox |
Fluent Bit might optionally use a configuration file to define how the service will behave. Before proceeding we need to understand how the configuration schema works.
The schema is defined by three concepts:
Sections
Entries: Key/Value
Indented Configuration Mode
A simple example of a configuration file is as follows:
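A minimal sketch consistent with the description below:

```
[SERVICE]
    # This is a commented line
    Daemon    off
    Log_Level debug
```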
A section is defined by a name or title inside brackets. Looking at the example above, a Service section has been set using [SERVICE] definition. Section rules:
All section content must be indented (4 spaces ideally).
Multiple sections can exist on the same file.
A section is expected to have comments and entries, it cannot be empty.
Any commented line under a section, must be indented too.
A section may contain Entries; an entry is defined by a line of text that contains a Key and a Value. Using the above example, the [SERVICE] section contains two entries: one is the key Daemon with value off and the other is the key Log_Level with the value debug. Entries rules:
An entry is defined by a key and a value.
A key must be indented.
A key must have a value, which ends at the line break.
Multiple keys with the same name can exist.
Commented lines are set by prefixing them with the # character; those lines are not processed but they must be indented too.
Fluent Bit configuration files are based on a strict Indented Mode; that means that each configuration file must follow the same pattern of alignment from left to right when writing text. By default an indentation level of four spaces from left to right is suggested. Example:
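A sketch with two sections, comments and entries:

```
[SERVICE]
    # This is a commented line
    Flush     1
    Daemon    off
    Log_Level debug

[INPUT]
    # This is a commented line
    Name  cpu
    Tag   my_cpu
```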
As you can see there are two sections with multiple entries and comments, note also that empty lines are allowed and they do not need to be indented.
A full feature set to access content of your records
Fluent Bit works internally with structured records, and a record can be composed of an unlimited number of keys and values. Values can be anything like a number, string, array, or a map.
Having a way to select a specific part of the record is critical for certain core functionalities or plugins, this feature is called Record Accessor.
Consider Record Accessor a simple grammar to specify record content and other miscellaneous values.
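For reference, a record like the following matches the access rules shown in the table below (values reconstructed from those rules):

```
{
  "log": "some message",
  "labels": {
    "color": "blue",
    "unset": null,
    "project": {
      "env": "production"
    }
  }
}
```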
A record accessor rule starts with the character $. Using the structured content above as an example, the following table describes some access rules and the values they return:
| Format | Accessed Value |
| --- | --- |
| $log | "some message" |
| $labels['color'] | "blue" |
| $labels['project']['env'] | "production" |
| $labels['unset'] | null |
| $labels['undefined'] | |
If the accessor key does not exist in the record, as in the last example $labels['undefined'], the operation is simply omitted; no exception will occur.
The feature is enabled on a per-plugin basis; not all plugins enable this feature. As an example, consider a configuration that aims to filter records using grep and only match records where labels have the color blue:
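A sketch of such a configuration (file name and parser are illustrative):

```
[SERVICE]
    Flush        1
    Log_Level    info
    Parsers_File parsers.conf

[INPUT]
    Name    tail
    Path    test.log
    Parser  json

[FILTER]
    Name    grep
    Match   *
    Regex   $labels['color'] ^blue$

[OUTPUT]
    Name    stdout
    Match   *
```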
The file content to process in test.log is the following:
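(Illustrative content:)

```
{"log": "message 1", "labels": {"color": "blue"}}
{"log": "message 2", "labels": {"color": "red"}}
{"log": "message 3", "labels": {"color": "green"}}
{"log": "message 4", "labels": {"color": "blue"}}
```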
Running Fluent Bit with the configuration above, the output will contain only the records whose labels have the color blue.
Fluent Bit supports the usage of environment variables in any value associated to a key when using a configuration file.
The variables are case sensitive and can be used in the following format: ${MY_VARIABLE}. When Fluent Bit starts, the configuration reader will detect any request for ${MY_VARIABLE} and will try to resolve its value.
Create the following configuration file (fluent-bit.conf):
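A sketch consistent with the steps below:

```
[SERVICE]
    Flush     1
    Daemon    off
    Log_Level info

[INPUT]
    Name  cpu
    Tag   cpu.local

[OUTPUT]
    Name  ${MY_OUTPUT}
    Match *
```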
Open a terminal and set the environment variable:
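For example:

```
export MY_OUTPUT=stdout
```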
The above command sets the value 'stdout' for the variable MY_OUTPUT.
Run Fluent Bit with the recently created configuration file:
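For example (adjust the binary path to your installation):

```
fluent-bit -c fluent-bit.conf
```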
As you can see the service worked properly as the configuration was valid.
It's common for Fluent Bit output plugins to connect to external services to deliver the logs over the network; this is the case for HTTP, Elasticsearch and Forward, among others. Being able to connect to one node (host) is normal and enough for most use cases, but there are other scenarios where balancing across different nodes is required. The Upstream feature provides such capability.
An Upstream defines a set of nodes that will be targeted by an output plugin; by the nature of the implementation, an output plugin must support the Upstream feature. The following plugin has Upstream support: Forward.
The current balancing mode implemented is round-robin.
To define an Upstream it's required to create a specific configuration file that contains an UPSTREAM and one or multiple NODE sections. The following table describes the properties associated with each section. Note that all of them are mandatory:
| Section | Key | Description |
| --- | --- | --- |
| UPSTREAM | name | Defines a name for the Upstream in question. |
| NODE | name | Defines a name for the Node in question. |
| | host | IP address or hostname of the target host. |
| | port | TCP port of the target service. |
A Node might contain additional configuration keys required by the plugin; in that way we provide enough flexibility for the output plugin. A common use case is the Forward output where, if TLS is enabled, it requires a shared key (more details in the example below).
In addition to the properties defined in the table above, the network operations against a defined node can optionally be done through the use of TLS for further encryption and certificates use.
The TLS options available are described in the TLS/SSL section and can be added to any Node section.
The following example defines an Upstream called forward-balancing which aims to be used by the Forward output plugin; it registers three Nodes:
node-1: connects to 127.0.0.1:43000
node-2: connects to 127.0.0.1:44000
node-3: connects to 127.0.0.1:45000 using TLS without verification. It also defines a specific configuration option required by Forward output called shared_key.
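A sketch of such an Upstream file (the shared_key value is a placeholder):

```
[UPSTREAM]
    name       forward-balancing

[NODE]
    name       node-1
    host       127.0.0.1
    port       43000

[NODE]
    name       node-2
    host       127.0.0.1
    port       44000

[NODE]
    name       node-3
    host       127.0.0.1
    port       45000
    tls        on
    tls.verify off
    shared_key secret
```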
Note that every Upstream definition must exist in its own configuration file in the file system. Adding multiple Upstreams in the same file or different files is not allowed.
Fluent Bit implements a unified networking interface that is exposed to components like plugins. This interface abstract all the complexity of general I/O and is fully configurable.
A common use case is when a component or plugin needs to connect to a service to send and receive data. Despite the operational mode sounds easy to deal with, there are many factors that can make things hard like unresponsive services, networking latency or any kind of connectivity error. The networking interface aims to abstract and simplify the network I/O handling, minimize risks and optimize performance.
Most of the time creating a new TCP connection to a remote server is straightforward and takes a few milliseconds. But there are cases where DNS resolving, slow network or incomplete TLS handshakes might create long delays, or incomplete connection statuses.
The net.connect_timeout property allows you to configure the maximum time to wait for a connection to be established; note that this value already considers the TLS handshake process.
In environments with multiple network interfaces, it might be desirable to choose which interface to use for the data that will flow through the network.
The net.source_address property allows you to specify which network address must be used for a TCP connection and data flow.
TCP is a connection-oriented channel; to deliver and receive data from a remote end-point, in most cases we use a TCP connection. This TCP connection can be created and destroyed once it is no longer needed; this approach has pros and cons. Here we will refer to the opposite case: keeping the connection open.
The concept of Connection Keepalive refers to the ability of the client (Fluent Bit in this case) to keep the TCP connection open in a persistent way; that means that once the connection is created and used, instead of closing it, it can be recycled. This feature offers many benefits in terms of performance since communication channels are always established beforehand.
Any component that uses TCP channels like HTTP or TLS can take advantage of this feature. For configuration purposes use the net.keepalive property.
If a connection is keepalive enabled, there might be scenarios where the connection can be unused for long periods of time. Having an idle keepalive connection is not helpful, so it's recommended to keep connections alive only if they are being used.
In order to control how long a keepalive connection can be idle, we expose the configuration property called net.keepalive_idle_timeout.
An open TCP connection to a remote server is subject to being silently dropped by intermediate equipment in the network (e.g., routers) if it's quiet for too long. What "too long" means depends on manufacturers and configurations outside of the control of Fluent Bit.
If you're using the Connection Keepalive feature but not achieving the desired connectivity rates, you might want to try setting net.tcp_keepalive to on. This will configure the socket to periodically send keepalive probes if the connection is silent. These probes are sent all the way to the server, making the equipment in between consider the connection as active. It is then expected that the server will acknowledge the probe, allowing Fluent Bit to detect a broken connection right away.
If TCP keepalive is used, net.tcp_keepalive_time allows you to override the OS default configuration with the desired period to wait between the last data packet sent and the start of TCP keepalive probing.
If TCP keepalive is used, net.tcp_keepalive_interval allows you to override the OS default configuration with the desired period between probes if the first one fails to be acknowledged.
If TCP keepalive is used, net.tcp_keepalive_probes allows you to override the OS default configuration with the desired number of unacknowledged probes before deeming a connection dead.
If a TCP connection is keepalive enabled and has very high traffic, the connection may never be killed. In a situation where the remote endpoint is load-balanced in some way, this may lead to an unequal distribution of traffic. Setting net.keepalive_max_recycle causes keepalive connections to be recycled after a number of messages are sent over that connection. Once this limit is reached, the connection is terminated gracefully, and a new connection will be created for subsequent messages.
For plugins that rely on networking I/O, the following section describes the network configuration properties available and how they can be used to optimize performance or adjust to different configuration needs:
| Property | Description | Default |
| --- | --- | --- |
| net.connect_timeout | Set maximum time expressed in seconds to wait for a TCP connection to be established, this includes the TLS handshake time. | 10 |
| net.source_address | Specify network address (interface) to use for connection and data traffic. | |
| net.keepalive | Enable or disable connection keepalive support. Accepts a boolean value: on / off. | on |
| net.keepalive_idle_timeout | Set maximum time expressed in seconds for an idle keepalive connection. | 30 |
| net.tcp_keepalive | Enable or disable TCP keepalive support. Accepts a boolean value: on / off. | off |
| net.tcp_keepalive_time | Interval between the last data packet sent and the first TCP keepalive probe. | |
| net.tcp_keepalive_interval | Interval between TCP keepalive probes when no response is received on a keepidle probe. | |
| net.tcp_keepalive_probes | Number of unacknowledged probes to consider a connection dead. | |
| net.keepalive_max_recycle | Set the maximum number of times a keepalive connection can be used before it is destroyed. | 0 |
As an example, we will send 5 random messages through a TCP output connection; on the remote side we will use the nc (netcat) utility to see the data. Put the following configuration snippet in a file called fluent-bit.conf:
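A sketch of such a configuration (host and port are illustrative and match the netcat example below):

```
[SERVICE]
    Flush     1
    Log_Level info

[INPUT]
    Name     random
    Samples  5

[OUTPUT]
    Name    tcp
    Match   *
    Host    127.0.0.1
    Port    9090
    Format  json_lines
    # keepalive settings referenced in this section
    net.keepalive               on
    net.keepalive_idle_timeout  10
```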
In another terminal, start nc and make it listen for messages on TCP port 9090:
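For example (flags vary by netcat implementation):

```
nc -l 9090
```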
Now start Fluent Bit with the configuration file written above and you will see the data flowing to netcat:
If the net.keepalive option is not enabled, Fluent Bit will close the TCP connection and netcat will quit; here we can see how the keepalive connection works.
After the 5 records arrive, the connection will stay idle, and after 10 seconds it will be closed due to net.keepalive_idle_timeout.
In certain environments it is common to see that logs or data are being ingested faster than they can be flushed to some destinations. A common case is reading from big log files and dispatching the logs to a backend over the network, which takes some time to respond; this generates backpressure, leading to high memory consumption in the service.
In order to avoid backpressure, Fluent Bit implements a mechanism in the engine that restricts the amount of data that an input plugin can ingest; this is done through the configuration parameter Mem_Buf_Limit.
As described in the Buffering concepts section, Fluent Bit offers a hybrid mode for data handling: in-memory and filesystem (optional).
In-memory buffering is always available and can be restricted with Mem_Buf_Limit. If your plugin gets restricted because of this configuration and you are under a backpressure scenario, you won't be able to ingest more data until the data chunks that are in memory can be flushed.
Depending on the input plugin type in use, this might lead to discarding incoming data (e.g. the TCP input plugin), but you can rely on the secondary filesystem buffering to be safe.
If, in addition to Mem_Buf_Limit, the input plugin defined a storage.type of filesystem (as described in Buffering & Storage), when the limit is reached all the new data will be stored safely in the file system.
This option is disabled by default and can be applied to all input plugins. Let's explain its behavior using the following scenario:
Mem_Buf_Limit is set to 1MB (one megabyte)
input plugin tries to append 700KB
engine route the data to an output plugin
output plugin backend (HTTP Server) is down
engine scheduler will retry the flush after 10 seconds
input plugin tries to append 500KB
At this exact point, the engine allows appending those 500KB of data, so in total we have 1.2MB buffered. The option works in a permissive mode until the limit is reached; once the limit is exceeded, the following actions are taken:
block local buffers for the input plugin (cannot append more data)
notify the input plugin invoking a pause callback
The engine will protect itself and will not append more data coming from the input plugin in question; note that it is the plugin's responsibility to keep its state and decide what to do while in that paused state.
After some seconds, if the scheduler was able to flush the initial 700KB of data or gave up after retrying, that amount of memory is released and internally the following actions happen:
Upon data buffer release (700KB), the internal counters get updated
Counters now are set at 500KB
Since 500KB is < 1MB it checks the input plugin state
If the plugin is paused, it invokes a resume callback
input plugin can continue appending more data
Each plugin is independent and not all of them implement the pause and resume callbacks. As said, these callbacks are just a notification mechanism for the plugin.
One plugin that implements the callbacks and keeps a good state is the Tail input plugin. When the pause callback is triggered, it stops its collectors and stops appending data. Upon resume, it re-enables the collectors.
Fluent Bit provides integrated support for Transport Layer Security (TLS) and its predecessor Secure Sockets Layer (SSL). In this section, the term TLS refers to both implementations.
Each output plugin that performs network I/O can optionally enable TLS and configure its behavior. The following table describes the properties available:
Property
Description
Default
tls
enable or disable TLS support
Off
tls.verify
force certificate validation
On
tls.debug
Set TLS debug verbosity level. It accepts the following values: 0 (No debug), 1 (Error), 2 (State change), 3 (Informational) and 4 (Verbose).
1
tls.ca_file
absolute path to CA certificate file
tls.ca_path
absolute path to scan for certificate files
tls.crt_file
absolute path to Certificate file
tls.key_file
absolute path to private Key file
tls.key_passwd
optional password for tls.key_file file
tls.vhost
hostname to be used for TLS SNI extension
The listed properties can be enabled in the configuration file, specifically on each output plugin section or directly through the command line.
The following output plugins can take advantage of the TLS feature:
In addition, other plugins implement a sub-set of TLS support, meaning a restricted configuration:
By default the HTTP output plugin uses plain TCP; enabling TLS from the command line can be done with:
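For example, against a hypothetical HTTP endpoint at 192.168.2.3 (verification is disabled here only for demonstration):

fluent-bit -i cpu -t cpu -o http://192.168.2.3:80/something -p tls=on -p tls.verify=off -m '*'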
In the command line above, the two properties tls and tls.verify were enabled for demonstration purposes (we strongly suggest always keeping verification ON).
The same behavior can be accomplished using a configuration file:
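A sketch of the equivalent configuration file (same hypothetical endpoint):

[INPUT]
    Name  cpu
    Tag   cpu

[OUTPUT]
    Name       http
    Match      *
    Host       192.168.2.3
    Port       80
    URI        /something
    tls        On
    tls.verify Off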
Fluent Bit supports TLS server name indication. If you are serving multiple hostnames on a single IP address (a.k.a. virtual hosting), you can make use of tls.vhost
to connect to a specific hostname.
Fluent Bit has an Engine that helps to coordinate the data ingestion from input plugins and calls the Scheduler to decide when it is time to flush the data through one or multiple output plugins. The Scheduler flushes new data at a fixed interval of seconds and schedules retries when asked.
Once an output plugin gets called to flush some data, after processing that data it can notify the Engine of three possible return statuses:
OK
Retry
Error
If the return status was OK, it means the plugin was successfully able to process and flush the data. If it returned an Error status, it means that an unrecoverable error happened and the engine should not try to flush that data again. If a Retry was requested, the Engine will ask the Scheduler to retry flushing that data; the Scheduler will decide how many seconds to wait before that happens.
The Scheduler provides a simple configuration option called Retry_Limit, which can be set independently on each output section. This option allows disabling retries or imposing a limit of N tries, after which the data is discarded:
Value
Description
Retry_Limit
N
Integer value to set the maximum number of retries allowed. N must be >= 1 (default: 1)
Retry_Limit
False
When Retry_Limit is set to False, it means that there is no limit to the number of retries that the Scheduler can do.
The following example configures two outputs, where the HTTP plugin has an unlimited number of retries and the Elasticsearch plugin has a limit of 5 retries:
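A sketch of such a configuration (the hosts are placeholders):

[OUTPUT]
    Name        http
    Match       *
    Host        192.168.5.6
    Port        80
    Retry_Limit False

[OUTPUT]
    Name        es
    Match       *
    Host        192.168.5.20
    Port        9200
    Retry_Limit 5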
Configuration files must be flexible enough for any deployment need, but they must keep a clean and readable format.
Fluent Bit Commands extend a configuration file with specific built-in features. The commands available as of the Fluent Bit 0.12 series are:
Command
Prototype
Description
@INCLUDE FILE
Include a configuration file
@SET KEY=VAL
Set a configuration variable
Configuring a logging pipeline might lead to an extensive configuration file. In order to maintain a human-readable configuration, it's suggested to split the configuration in multiple files.
The @INCLUDE command allows the configuration reader to include an external configuration file, e.g:
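For example (the included file names are hypothetical):

[SERVICE]
    Flush 1

@INCLUDE inputs.conf
@INCLUDE outputs.conf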
The above example defines the main service configuration and also includes two files to continue the configuration:
Note that despite the order of inclusion, Fluent Bit will ALWAYS respect the following order:
Service
Inputs
Filters
Outputs
Fluent Bit supports configuration variables. One way to expose these variables to Fluent Bit is by setting a shell environment variable; the other is through the @SET command.
The @SET command can only be used at the root level of a line, meaning it cannot be used inside a section, e.g:
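A sketch where two variables are defined at the top of the file and referenced later (the variable names are arbitrary):

@SET my_input=cpu
@SET my_output=stdout

[SERVICE]
    Flush 1

[INPUT]
    Name ${my_input}

[OUTPUT]
    Name ${my_output}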
This page describes the main configuration file used by Fluent Bit
One of the ways to configure Fluent Bit is using a main configuration file. Fluent Bit allows one configuration file that works at a global scope and uses the Format and Schema defined previously.
The main configuration file supports four types of sections:
Service
Input
Filter
Output
In addition, it's also possible to split the main configuration file in multiple files using the feature to include external files:
Include File
The Service section defines global properties of the service, the keys available as of this version are described in the following table:
Key
Description
Default Value
Flush
Set the flush time in seconds.nanoseconds. The engine loop uses a Flush timeout to define when it is required to flush the records ingested by input plugins through the defined output plugins.
5
Grace
Set the grace time in seconds as an Integer value. The engine loop uses a Grace timeout to define the wait time on exit.
5
Daemon
Boolean value to set if Fluent Bit should run as a Daemon (background) or not. Allowed values are: yes, no, on and off. note: If you are using a Systemd based unit as the one we provide in our packages, do not turn on this option.
Off
Log_File
Absolute path for an optional log file. By default all logs are redirected to the standard error interface (stderr).
Log_Level
Set the logging verbosity level. Allowed values are: error, warn, info, debug and trace. Values are accumulative, e.g: if 'debug' is set, it will include error, warning, info and debug. Note that trace mode is only available if Fluent Bit was built with the WITH_TRACE option enabled.
info
Parsers_File
Path for a parsers
configuration file. Multiple Parsers_File entries can be defined within the section.
Plugins_File
Streams_File
HTTP_Server
Enable built-in HTTP Server
Off
HTTP_Listen
Set listening interface for HTTP Server when it's enabled
0.0.0.0
HTTP_Port
Set TCP Port for the HTTP Server
2020
Coro_Stack_Size
Set the coroutines stack size in bytes. The value must be greater than the page size of the running system. Don't set too small value (say 4096), or coroutine threads can overrun the stack buffer. Do not change the default value of this parameter unless you know what you are doing.
24576
The following is an example of a SERVICE section:
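A short sketch:

[SERVICE]
    Flush     5
    Daemon    off
    Log_Level debug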
An INPUT section defines a source (related to an input plugin). Here we will describe the base configuration for each INPUT section. Note that each input plugin may add its own configuration keys:
Key
Description
Name
Name of the input plugin.
Tag
Tag name associated to all records coming from this plugin.
The Name is mandatory and it lets Fluent Bit know which input plugin should be loaded. The Tag is mandatory for all plugins except for the input forward plugin (as it provides dynamic tags).
The following is an example of an INPUT section:
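A short sketch:

[INPUT]
    Name cpu
    Tag  my_cpu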
A FILTER section defines a filter (related to a filter plugin). Here we will describe the base configuration for each FILTER section. Note that each filter plugin may add its own configuration keys:
Key
Description
Name
Name of the filter plugin.
Match
A pattern to match against the tags of incoming records. It's case sensitive and supports the star (*) character as a wildcard.
Match_Regex
A regular expression to match against the tags of incoming records. Use this option if you want to use the full regex syntax.
The Name is mandatory and it lets Fluent Bit know which filter plugin should be loaded. Match or Match_Regex is mandatory for all filters; if both are specified, Match_Regex takes precedence.
The following is an example of a FILTER section:
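A short sketch using the grep filter (the regular expression is illustrative):

[FILTER]
    Name  grep
    Match *
    Regex log aa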
The OUTPUT section specifies a destination that certain records should follow after a Tag match. The configuration supports the following keys:
Key
Description
Name
Name of the output plugin.
Match
A pattern to match against the tags of incoming records. It's case sensitive and supports the star (*) character as a wildcard.
Match_Regex
A regular expression to match against the tags of incoming records. Use this option if you want to use the full regex syntax.
The following is an example of an OUTPUT section:
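A short sketch:

[OUTPUT]
    Name  stdout
    Match my*cpu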
The following configuration file example demonstrates how to collect CPU metrics and flush the results every five seconds to the standard output:
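A sketch combining the sections shown above:

[SERVICE]
    Flush     5
    Daemon    off
    Log_Level debug

[INPUT]
    Name cpu
    Tag  my_cpu

[OUTPUT]
    Name  stdout
    Match my*cpu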
You can also visualize Fluent Bit INPUT, FILTER, and OUTPUT configuration via https://config.calyptia.com
To avoid complicated long configuration files, it is better to split specific parts into different files and call them (include them) from one main file.
Starting from Fluent Bit 0.12 the new configuration command @INCLUDE has been added and can be used in the following way:
The configuration reader will try to open the path somefile.conf, if not found, it will assume it's a relative path based on the path of the base configuration file, e.g:
Main configuration file path: /tmp/main.conf
Included file: somefile.conf
Fluent Bit will try to open somefile.conf, if it fails it will try /tmp/somefile.conf.
The @INCLUDE command only works at the top (leftmost) level of a configuration line; it cannot be used inside sections.
Wildcard character (*) is supported to include multiple files, e.g:
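For example (the file name pattern is hypothetical):

@INCLUDE input_*.conf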
The end goal of Fluent Bit is to collect, parse, filter and ship logs to a central place. In this workflow there are many phases, and one of the critical pieces is the ability to do buffering: a mechanism to place processed data into a temporary location until it is ready to be shipped.
By default, when Fluent Bit processes data, it uses memory as the primary and temporary place to store the records, but there are certain scenarios where it would be ideal to have a persistent buffering mechanism based on the filesystem to provide aggregation and data safety capabilities.
Choosing the right configuration is critical, and the behavior of the service can be conditioned by the backpressure settings. Before jumping into the configuration properties, let's understand the relationship between Chunks, Memory, Filesystem and Backpressure.
Understanding the chunks, buffering and backpressure concepts is critical for a proper configuration. Let's do a recap of the meaning of these concepts.
When an input plugin (source) emits records, the engine groups the records together in a Chunk. A Chunk's size is usually around 2MB. By configuration, the engine decides where to place this Chunk; the default is that all chunks are created only in memory.
As mentioned above, the Chunks generated by the engine are placed in memory but this is configurable.
If memory is the only mechanism set for the input plugin, it will simply store as much data as it can there (in memory). This is the fastest mechanism with the least system overhead, but if the service is not able to deliver the records fast enough because of a slow network or an unresponsive remote service, Fluent Bit's memory usage will increase since it will accumulate more data than it can deliver.
In a high load environment with backpressure, the risk of high memory usage is the chance of getting killed by the Kernel (OOM Killer). A workaround for this backpressure scenario is to limit the amount of memory in records that an input plugin can register; this configuration property is called mem_buf_limit
: if a plugin has enqueued more than mem_buf_limit
, it won't be able to ingest more until its data can be delivered or flushed properly. In this scenario the input plugin in question is paused.
The mem_buf_limit
workaround is good for certain scenarios and environments; it helps to control the memory usage of the service, but at the cost that if a file gets rotated while the plugin is paused, you might lose that data since it won't be able to register new records. This can happen with any input source plugin. The goal of mem_buf_limit
is memory control and survival of the service.
For full data safety guarantee, use filesystem buffering.
Filesystem buffering enabled helps with backpressure and overall memory control.
Behind the scenes, the Memory and Filesystem buffering mechanisms are not mutually exclusive. Indeed, when enabling filesystem buffering for your input plugin (source) you get the best of both worlds: performance and data safety.
When Filesystem buffering is enabled, the behavior of the engine is different: upon Chunk creation, it stores the content in memory but also maps a copy on disk (through mmap(2)). A Chunk that is active in memory and backed up on disk is said to be up
, which means "the chunk content is up in memory".
How does this Filesystem buffering mechanism deal with high memory usage and backpressure? Fluent Bit controls the number of Chunks that are up
in memory.
By default, the engine allows 128 Chunks to be up
in memory in total (considering all Chunks); this value is controlled by the service property storage.max_chunks_up
. The active Chunks that are up
are ready for delivery or are still receiving records. Any other remaining Chunk is in a down
state, which means that it exists only in the filesystem and won't be up
in memory unless it is ready to be delivered.
If the input plugin has enabled mem_buf_limit
and storage.type
of filesystem
, when reaching the mem_buf_limit
threshold, instead of the plugin being paused, all new data will go to Chunks that are down
in the filesystem. This allows controlling the memory usage of the service while also providing a guarantee that the service won't lose any data.
Limiting Filesystem space for Chunks
Fluent Bit implements the concept of logical queues: a Chunk based on its Tag, can be routed to multiple destinations, so internally we keep a reference from where a Chunk was created and where it needs to go.
It's common to find cases where, if we have multiple destinations for a Chunk, one of the destinations might be slower than the others, or maybe only one of the destinations is generating backpressure. In this scenario, how do we limit the amount of filesystem Chunks that we are logically queueing?
Starting from Fluent Bit v1.6, we introduced a new configuration property for output plugins called storage.total_limit_size
, which limits the number of Chunks that exist in the filesystem for a certain logical output destination. If a destination reaches the storage.total_limit_size
limit, the oldest Chunk from its queue for that logical output destination will be discarded.
The storage layer configuration takes place in three areas:
Service Section
Input Section
Output Section
The Service section configures a global environment for the storage layer, the Input sections define which buffering mechanism to use, and the Output sections define the limits for the logical queues.
The Service section refers to the section defined in the main configuration file:
Key
Description
Default
storage.path
Set an optional location in the file system to store streams and chunks of data. If this parameter is not set, Input plugins can only use in-memory buffering.
storage.sync
Configure the synchronization mode used to store the data into the file system. It can take the values normal or full.
normal
storage.checksum
Enable the data integrity check when writing and reading data from the filesystem. The storage layer uses the CRC32 algorithm.
Off
storage.max_chunks_up
If the input plugin has enabled filesystem
storage type, this property sets the maximum number of Chunks that can be up
in memory. This helps to control memory usage.
128
storage.backlog.mem_limit
If storage.path is set, Fluent Bit will look for data chunks that were not delivered and are still in the storage layer; these are called backlog data. This option configures a hint of the maximum amount of memory to use when processing these records.
5M
storage.metrics
off
A Service section will look like this:
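A sketch matching the description below:

[SERVICE]
    flush                     1
    log_level                 info
    storage.path              /var/log/flb-storage/
    storage.sync              normal
    storage.checksum          off
    storage.backlog.mem_limit 5M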
That configuration sets an optional buffering mechanism where the root path for data is /var/log/flb-storage/; it will use normal synchronization mode, without checksum, and up to a maximum of 5MB of memory when processing backlog data.
Optionally, any Input plugin can configure its storage preference; the following table describes the options available:
Key
Description
Default
storage.type
Specify the buffering mechanism to use. It can be memory or filesystem.
memory
The following example configures a service that offers filesystem buffering capabilities and two Input plugins, the first one based on the filesystem and the second one with memory only.
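A sketch of such a setup (the plugin choices are illustrative):

[SERVICE]
    flush        1
    storage.path /var/log/flb-storage/
    storage.sync normal

[INPUT]
    name         cpu
    storage.type filesystem

[INPUT]
    name         mem
    storage.type memory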
If certain chunks are filesystem storage.type based, it's possible to control the size of the logical queue for an output plugin. The following table describes the options available:
Key
Description
Default
storage.total_limit_size
Limit the maximum number of Chunks in the filesystem for the current output logical destination.
The following example creates records with CPU usage samples in the filesystem, which are then delivered to the Google Stackdriver service, limiting the logical queue (buffering) to 5M:
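A sketch of this setup (the Stackdriver output typically needs additional credential options not shown here):

[SERVICE]
    flush        1
    storage.path /var/log/flb-storage/

[INPUT]
    name         cpu
    storage.type filesystem

[OUTPUT]
    name                     stackdriver
    match                    *
    storage.total_limit_size 5M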
If for some reason Fluent Bit gets offline because of a network issue, it will continue buffering CPU samples, keeping only a maximum of 5M of the newest data.
Enable traffic through a proxy server via HTTP_PROXY environment variable
Fluent Bit supports setting up a HTTP proxy for all egress HTTP/HTTPS traffic by setting HTTP_PROXY
environment variable:
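For example (the proxy address is a placeholder):

HTTP_PROXY='http://proxy.example.com:8080' fluent-bit -c fluent-bit.conf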
You can set up basic authentication with HTTP_PROXY=http://<username>:<password>@<proxy host>:<port>
to provide your username
and password
when connecting to the proxy.
You can also set up HTTP_PROXY=http://<proxy host>:<port>
to omit username
and password
if there is none.
The HTTP_PROXY
environment variable is a standard way of setting an HTTP proxy in a containerized environment, and it is also natively supported by any application written in Go. Therefore, we follow and implement the same convention for Fluent Bit.
Note: HTTP proxy is also supported using the . This configuration continues to work, however it should not be used together with the HTTP_PROXY
environment variable. This is because under the hood, the HTTP_PROXY
environment variable based proxy support is implemented by setting up a TCP connection tunnel via . Unlike the plugin's implementation, this supports both HTTP and HTTPS egress traffic.
The collectd input plugin allows you to receive datagrams from collectd service.
The plugin supports the following configuration parameters:
Here is a basic configuration example.
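A sketch matching the description below:

[INPUT]
    Name    collectd
    Listen  0.0.0.0
    Port    25826
    TypesDB /usr/share/collectd/types.db

[OUTPUT]
    Name  stdout
    Match *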
With this configuration, Fluent Bit listens to 0.0.0.0:25826
, and outputs incoming datagram packets to stdout.
You must set the same types.db files that your collectd server uses. Otherwise, Fluent Bit may not be able to interpret the payload properly.
The docker input plugin allows you to collect Docker container metrics such as memory usage and CPU consumption.
The plugin supports the following configuration parameters:
If you set neither Include
nor Exclude
, the plugin will try to get metrics from all the running containers.
Here is an example configuration that collects metrics from two docker instances (6bab19c3a0f9
and 14159be4ca2c
).
This configuration will produce records like below.
The dummy input plugin generates dummy events. It is useful for testing, debugging, benchmarking and getting started with Fluent Bit.
The plugin supports the following configuration parameters:
You can run the plugin from the command line or through the configuration file:
In your main configuration file append the following Input & Output sections:
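A sketch that prints the default dummy record to stdout:

[INPUT]
    Name  dummy
    Tag   dummy.log

[OUTPUT]
    Name  stdout
    Match *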
The cpu input plugin measures the CPU usage of a process or, by default, the whole system (considering each CPU core). It reports values as percentages at every configured interval of time. At the moment this plugin is only available for Linux.
The following tables describes the information generated by the plugin. The keys below represent the data used by the overall system, all values associated to the keys are in a percentage unit (0 to 100%):
In addition to the keys reported in the above table, a similar content is created per CPU core. The cores are listed from 0 to N as the Kernel reports:
The plugin supports the following configuration parameters:
In order to get the statistics of the CPU usage of your system, you can run the plugin from the command line or through the configuration file:
In your main configuration file append the following Input & Output sections:
Gather Metrics from Fluent Bit pipeline
Fluent Bit comes with a built-in HTTP Server that can be used to query internal information and monitor metrics of each running plugin.
The monitoring interface can be easily integrated with Prometheus since we support its native format.
NOTE: The Windows version does not support the HTTP monitoring feature yet as of v1.7.0
To get started, the first step is to enable the HTTP Server from the configuration file:
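A sketch enabling the server with its defaults:

[SERVICE]
    HTTP_Server On
    HTTP_Listen 0.0.0.0
    HTTP_Port   2020

[INPUT]
    Name cpu

[OUTPUT]
    Name  stdout
    Match *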
The above configuration snippet will instruct Fluent Bit to start its HTTP Server on TCP port 2020 and listen on all network interfaces.
Now a simple curl command is enough to gather some information:
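For example:

curl -s http://127.0.0.1:2020 | jq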
Note that we are piping the curl command output to the jq program, which helps to make the JSON data easy to read from the terminal. Fluent Bit doesn't aim to do JSON pretty-printing.
Fluent Bit aims to expose useful interfaces for monitoring; as of Fluent Bit v0.14 the following endpoints are available:
Query the service uptime with the following command:
it should print a similar output like this:
Query internal metrics in JSON format with the following command:
it should print a similar output like this:
Query internal metrics in Prometheus Text 0.0.4 format:
this time the same metrics will be in Prometheus format instead of JSON:
By default configured plugins on runtime get an internal name in the format plugin_name.ID. For monitoring purposes this can be confusing if many plugins of the same type were configured. To make a distinction each configured input or output section can get an alias that will be used as the parent name for the metric.
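A sketch where both an input and an output get an Alias (the alias names are arbitrary):

[SERVICE]
    HTTP_Server On
    HTTP_Listen 0.0.0.0
    HTTP_Port   2020

[INPUT]
    Name  cpu
    Alias server1_cpu

[OUTPUT]
    Name  stdout
    Alias raw_output
    Match *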
Now when querying the metrics we get the aliases in place instead of the plugin name:
Fluent Bit is a powerful log processing tool that can deal with different sources and formats; in addition it provides several filters that can be used to perform custom modifications. This flexibility is really good, but as your pipeline grows, it's strongly recommended to validate your data and structure.
We encourage Fluent Bit users to integrate data validation in their CI systems
A simplified view of our data processing pipeline is as follows:
In a normal production environment, many Inputs, Filters, and Outputs are defined in the configuration, so integrating a continuous validation of your configuration against expected results is a must. For this requirement, Fluent Bit provides a specific Filter called Expect which can be used to validate expected Keys and Values from your records and takes some action when an exception is found.
Ideally you want to add validation checkpoints for your data between each step, so you can know if your data structure is correct; we do this by using the expect filter.
The expect filter sets rules that aim to validate certain criteria, such as:
does the record contain a key A?
does the record not contain key A?
is the record key A value NULL?
is the record key A value different from NULL?
does the record key A value equal B?
Every expect filter configuration can expose specific rules to validate the content of your records, it supports the following configuration properties:
Consider the following JSON file called data.log
with the following content:
The following Fluent Bit configuration file configures a pipeline to consume the log above and applies an expect filter to validate that the keys color
and label
exist:
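A sketch of such a pipeline, assuming the log file sits next to the configuration and that a json parser is defined in parsers.conf; saved as a file, the parser entry of the tail input falls on line 9:

[SERVICE]
    flush        1
    parsers_file parsers.conf

[INPUT]
    name   tail
    path   ./data.log
    tag    data
    parser json

[FILTER]
    name       expect
    match      *
    key_exists color
    key_exists label
    action     exit

[OUTPUT]
    name  stdout
    match *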
Note that if for some reason the JSON parser fails or is missing in the tail
input (the parser entry, line 9 in the sketch above), the expect
filter will trigger the exit
action. As a test, go ahead and comment out or remove that line.
As a second step, we will extend our pipeline by adding a grep filter to match records whose label
map contains a key called name
with value abc
, then an expect filter to re-validate that condition:
When deploying your configuration in production, you might want to remove the expect filters from your configuration, since they add unnecessary extra work unless you want to have 100% coverage of checks at runtime.
The disk input plugin gathers information about the disk throughput of the running system at a fixed interval of time and reports it.
The plugin supports the following configuration parameters:
In order to get disk usage from your system, you can run the plugin from the command line or through the configuration file:
In your main configuration file append the following Input & Output sections:
Note: Total interval (sec) = Interval_Sec + (Interval_Nsec / 1000000000).
e.g. 1.5s = 1s + 500000000ns
Path for a plugins
configuration file. A plugins configuration file allows you to define paths for external plugins.
Path for the Stream Processor configuration file. To learn more, refer to the Stream Processing configuration documentation.
If the http_server
option has been enabled in the main [SERVICE]
section, this option registers a new endpoint where internal metrics of the storage layer can be consumed. For more details refer to the Monitoring section.
As described above, the CPU input plugin gathers the overall usage every second and flushes the information to the output on the fifth second. In this example we used the stdout plugin to demonstrate the output records. In a real use case you may want to flush this information to a central aggregator.
The following example sets an alias on the INPUT section of an input plugin:
Fluent Bit's exposed metrics can be leveraged to create dashboards and alerts.
The provided is heavily inspired by 's but with a few key differences such as the use of the instance
label (see ), stacked graphs and a focus on Fluent Bit metrics.
Sample alerts are available .
As an example, consider the following pipeline where your source of data is a normal file with JSON content on it and then two filters: to exclude certain records and to alter the record content adding and removing specific keys.
key
description
cpu_p
CPU usage of the overall system; this value is the sum of time spent in user and kernel space. The result takes into consideration the number of CPU cores in the system.
user_p
CPU usage in User mode, i.e. the CPU usage by user space programs. The result takes into consideration the number of CPU cores in the system.
system_p
CPU usage in Kernel mode, i.e. the CPU usage by the Kernel. The result takes into consideration the number of CPU cores in the system.
key
description
cpuN.p_cpu
Represents the total CPU usage by core N.
cpuN.p_user
Total CPU spent in user mode or user space programs associated to this core.
cpuN.p_system
Total CPU spent in system or kernel mode associated to this core.
Key
Description
Default
Interval_Sec
Polling interval in seconds
1
Interval_NSec
Polling interval in nanoseconds
0
PID
Specify the ID (PID) of a running process in the system. By default the plugin monitors the whole system but if this option is set, it will only monitor the given process ID.
URI
Description
Data Format
/
Fluent Bit build information
JSON
/api/v1/uptime
Get uptime information in seconds and human readable format
JSON
/api/v1/metrics
Internal metrics per loaded plugin
JSON
/api/v1/metrics/prometheus
Internal metrics per loaded plugin ready to be consumed by a Prometheus Server
Prometheus Text 0.0.4
/api/v1/storage
Get internal metrics of the storage layer / buffered data. This option is enabled only if in the SERVICE
section the property storage.metrics
has been enabled
JSON
Property
Description
key_exists
Check if a key with a given name exists in the record.
key_not_exists
Check if a key does not exist in the record.
key_val_is_null
check that the value of the key is NULL.
key_val_is_not_null
check that the value of the key is NOT NULL.
key_val_eq
check that the value of the key equals the given value in the configuration.
action
action to take when a rule does not match. The available options are warn
or exit
. On warn
, a warning message is sent to the logging layer when a mismatch of the rules above is found; using exit
makes Fluent Bit abort with status code 255
.
Key
Description
Default
Listen
Set the address to listen to
0.0.0.0
Port
Set the port to listen to
25826
TypesDB
Set the data specification file
/usr/share/collectd/types.db
Key
Description
Default
Interval_Sec
Polling interval in seconds
1
Include
A space-separated list of containers to include
Exclude
A space-separated list of containers to exclude
Key
Description
Dummy
Dummy JSON record. Default: {"message":"dummy"}
Start_time_sec
Dummy base timestamp in seconds. Default: 0
Start_time_nsec
Dummy base timestamp in nanoseconds. Default: 0
Rate
Events number generated per second. Default: 1
Samples
If set, the events number will be limited. e.g. If Samples=3, the plugin only generates three events and stops.
Key
Description
Default
Interval_Sec
Polling interval (seconds).
1
Interval_NSec
Polling interval (nanosecond).
0
Dev_Name
Device name to limit the target. (e.g. sda). If not set, in_disk gathers information from all of disks and partitions.
all disks
The health input plugin allows you to check how healthy a TCP server is. It does the check by issuing a TCP connection at a fixed interval of time.
The plugin supports the following configuration parameters:
Key
Description
Host
Name of the target host or IP address to check.
Port
TCP port where to perform the connection check.
Interval_Sec
Interval in seconds between the service checks. Default value is 1.
Interval_NSec
Specify a nanoseconds interval for service checks, it works in conjunction with the Interval_Sec configuration key. Default value is 0.
Alert
If enabled, it will only generate messages if the target TCP service is down. By default this option is disabled.
Add_Host
If enabled, the hostname is appended to each record. Default value is false.
Add_Port
If enabled, the port number is appended to each record. Default value is false.
In order to start performing the checks, you can run the plugin from the command line or through the configuration file:
From the command line you can let Fluent Bit generate the checks with the following options:
In your main configuration file append the following Input & Output sections:
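A sketch checking a hypothetical local service on port 80:

[INPUT]
    Name          health
    Host          127.0.0.1
    Port          80
    Interval_Sec  1
    Interval_NSec 0

[OUTPUT]
    Name  stdout
    Match *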
Once Fluent Bit is running, you will see some random values in the output interface similar to this:
The head input plugin allows reading events from the head of a file. Its behavior is similar to the head command.
The plugin supports the following configuration parameters:
Key
Description
File
Absolute path to the target file, e.g: /proc/uptime
Buf_Size
Buffer size to read the file.
Interval_Sec
Polling interval (seconds).
Interval_NSec
Polling interval (nanosecond).
Add_Path
If enabled, the filepath is appended to each record. Default value is false.
Key
Rename a key. Default: head.
Lines
Line number to read. If the number N is set, in_head reads first N lines like head(1) -n.
Split_line
If enabled, in_head generates key-value pair per line.
This mode is useful to get a specific line. This is an example to get CPU frequency from /proc/cpuinfo.
/proc/cpuinfo is a special file to get cpu information.
Cpu frequency is "cpu MHz : 2791.009". We can get the line with this configuration file.
Output is
In order to read the head of a file, you can run the plugin from the command line or through the configuration file:
The following example will read events from the /proc/uptime file, tag the records with the uptime name and flush them back to the stdout plugin:
In your main configuration file append the following Input & Output sections:
Note: Total interval (sec) = Interval_Sec + (Interval_Nsec / 1000000000).
e.g. 1.5s = 1s + 500000000ns
The mem input plugin gathers information about the memory and swap usage of the running system at a fixed interval of time and reports the total amount of memory and the amount of free memory available.
In order to get memory and swap usage from your system, you can run the plugin from the command line or through the configuration file:
In your main configuration file append the following Input & Output sections:
The HTTP input plugin allows you to send custom records to an HTTP endpoint.
Key
Description
default
host
The address to listen on
0.0.0.0
port
The port for Fluent Bit to listen on
9880
buffer_max_size
Specify the maximum buffer size in KB to receive a JSON message.
4M
buffer_chunk_size
This sets the chunk size for incoming JSON messages. These chunks are then stored/managed in the space available, defined by buffer_max_size.
512K
The http input plugin allows Fluent Bit to open up an HTTP port that you can then route data to in a dynamic way. This plugin supports dynamic tags which allow you to send data with different tags through the same input. An example video and curl message can be seen below
How to set tag
The tag for the HTTP input plugin is set by adding the tag to the end of the request URL. This tag is then used to route the event through the system. For example, in the following curl message below the tag set is app.log
. If you do not set the tag http.0
is automatically used. If you have multiple HTTP inputs then they will follow a pattern of http.N
where N is an integer representing the input.
Example Curl message
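A sketch sending one JSON record to the hypothetical tag app.log:

curl -d '{"key1":"value1","key2":"value2"}' -XPOST -H "content-type: application/json" http://localhost:9880/app.log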
The MQTT input plugin allows retrieving messages/data from MQTT control packets over a TCP connection. The incoming data must be a JSON map.
The plugin supports the following configuration parameters:
Key
Description
Listen
Listener network interface, default: 0.0.0.0
Port
TCP port where listening for connections, default: 1883
In order to start listening for MQTT messages, you can run the plugin from the command line or through the configuration file:
Since the MQTT input plugin lets Fluent Bit behave as a server, we need to dispatch some messages using an MQTT client; in the following example the mosquitto tool is used for this purpose:
The following command line will send a message to the MQTT input plugin:
In your main configuration file append the following Input & Output sections:
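A sketch using the defaults:

[INPUT]
    Name   mqtt
    Tag    data
    Listen 0.0.0.0
    Port   1883

[OUTPUT]
    Name  stdout
    Match *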
The netif input plugin gathers network traffic information of the running system at a fixed interval of time and reports it.
The plugin supports the following configuration parameters:
Key
Description
Interface
Specify the network interface to monitor. e.g. eth0
Interval_Sec
Polling interval (seconds). default: 1
Interval_NSec
Polling interval (nanosecond). default: 0
Verbose
If true, gather metrics precisely. default: false
In order to monitor network traffic from your system, you can run the plugin from the command line or through the configuration file:
In your main configuration file append the following Input & Output sections:
Note: Total interval (sec) = Interval_Sec + (Interval_Nsec / 1000000000).
e.g. 1.5s = 1s + 500000000ns
The stdin plugin allows retrieving valid JSON text messages over the standard input interface (stdin). In order to use it, specify the plugin name as the input, e.g:
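fluent-bit -i stdin -o stdout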
As input data, the stdin plugin recognizes the following JSON data formats:
A better way to demonstrate how it works is through a Bash script that generates messages and writes them to Fluent Bit. Write the following content in a file named test.sh:
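A minimal sketch of such a script (the message content is arbitrary):

#!/bin/sh

# Emit one JSON map per second on standard output
while :; do
  echo -n '{"key": "some value"}'
  sleep 1
done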
Give the script execution permission:
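chmod 755 test.sh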
Now let's start the script and Fluent Bit in the following way:
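./test.sh | fluent-bit -i stdin -o stdout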
The kmsg input plugin reads the Linux Kernel log buffer from the beginning; it gets every record and parses it into the fields priority, sequence, seconds, useconds, and message.
In order to start getting the Linux Kernel messages, you can run the plugin from the command line or through the configuration file:
As described above, the plugin processed all messages that the Linux Kernel reported, the output has been truncated for clarification.
In your main configuration file append the following Input & Output sections:
The statsd input plugin allows you to receive metrics via StatsD protocol.
The plugin supports the following configuration parameters:
Key
Description
Default
Listen
Listener network interface.
0.0.0.0
Port
UDP port where listening for connections
8125
Here is a configuration example.
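A sketch using the defaults:

[INPUT]
    Name   statsd
    Listen 0.0.0.0
    Port   8125

[OUTPUT]
    Name  stdout
    Match *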
Now you can input metrics through the UDP port as follows:
Fluent Bit will produce the following records:
The exec input plugin allows executing an external program and collecting its output as event logs.
The plugin supports the following configuration parameters:
Key
Description
Command
The command to execute.
Parser
Specify the name of a parser to interpret the entry as a structured message.
Interval_Sec
Polling interval (seconds).
Interval_NSec
Polling interval (nanosecond).
Buf_Size
You can run the plugin from the command line or through the configuration file:
The following example will read events from the output of ls.
In your main configuration file append the following Input & Output sections:
The Process input plugin allows you to check how healthy a process is. It does so by performing a service check at a fixed interval of time specified by the user.
The plugin supports the following configuration parameters:
In order to start performing the checks, you can run the plugin from the command line or through the configuration file:
The following example will check the health of crond process.
In your main configuration file append the following Input & Output sections:
Once Fluent Bit is running, you will see the health of process:
The tail input plugin allows monitoring one or several text files. Its behavior is similar to the tail -f
shell command.
The plugin reads every matched file in the Path
pattern and for every new line found (separated by a \n
), it generates a new record. Optionally, a database file can be used so the plugin can keep a history of tracked files and their offsets; this is very useful to resume the state if the service is restarted.
The plugin supports the following configuration parameters:
Note that if the database parameter DB
is not specified, by default the plugin will start reading each target file from the beginning. This might also cause some unwanted behavior: for example, when a line is bigger than Buffer_Chunk_Size
and Skip_Long_Lines
is not turned on, the file will be read from the beginning on each Refresh_Interval
until the file is rotated.
Additionally, the following options exist to configure the handling of multi-line files:
Docker mode exists to recombine JSON log lines split by the Docker daemon due to its line length limit. To use this feature, configure the tail plugin with the corresponding parser and then enable Docker mode:
In order to tail text or log files, you can run the plugin from the command line or through the configuration file:
From the command line you can let Fluent Bit parse text files with the following options:
When using multi-line configuration you need to first specify Multiline On
in the configuration and use the Parser_Firstline
and additional parser parameters Parser_N
if needed. If we are trying to read the following Java Stacktrace as a single event
We need to specify a Parser_Firstline
parameter that matches the first line of a multi-line event. Once a match is made Fluent Bit will read all future lines until another match with Parser_Firstline
is made .
In the case above we can use the following parser, that extracts the Time as time
and the remaining portion of the multiline as log
If we want to further parse the entire event we can add additional parsers with Parser_N
where N is an integer. The final Fluent Bit configuration looks like the following:
Our output will be as follows.
The tail input plugin has a feature to save the state of the tracked files; it is strongly suggested you enable this. For this purpose the db property is available, e.g:
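For example, from the command line (the monitored path is illustrative):

fluent-bit -i tail -p path=/var/log/syslog -p db=/path/to/logs.db -o stdout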
When running, the database file /path/to/logs.db will be created. This database is backed by SQLite3, so if you are interested in exploring its content, you can open it with the SQLite client tool, e.g:
Make sure to explore the database when Fluent Bit is not busy writing to it, otherwise you will see some Error: database is locked messages.
By default the SQLite client tool does not format the columns in a human-readable way, so to explore the in_tail_files table you can create a config file in ~/.sqliterc with the following content:
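A minimal sketch of that file:

.headers on
.mode column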
Fluent Bit keeps the state or checkpoint of each file through a SQLite database file, so if the service is restarted, it can continue consuming files from its last checkpoint position (offset). The default options are set for high performance and corruption safety.
The SQLite journaling mode enabled is Write Ahead Log
or WAL
. This improves the performance of read and write operations to disk. When enabled, you will see additional files being created in your file system; consider the following configuration statement:
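A sketch matching the description below (the monitored path is illustrative):

[INPUT]
    name tail
    path /var/log/containers/*.log
    db   test.db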
The above configuration enables a database file called test.db
and in the same path for that file SQLite will create two additional files:
test.db-shm
test.db-wal
Those two files support the WAL
mechanism, which helps to improve performance and reduce the number of system calls required. The -wal
file stores the new changes to be committed; at some point the WAL
file transactions are moved back to the real database file. The -shm
file is a shared-memory file that allows concurrent users of the WAL
file.
The WAL
mechanism gives us higher performance but might also increase the memory usage of Fluent Bit. Most of this usage comes from memory-mapped and cached pages. In some cases you might see that memory usage stays a bit high, giving the impression of a memory leak, but it is actually not relevant unless you need your memory metrics back to normal. Starting from Fluent Bit v1.7.3 we introduced the new option db.journal_mode
that sets the journal mode for databases; by default it is WAL (Write-Ahead Logging)
, and the currently allowed values for db.journal_mode
are DELETE | TRUNCATE | PERSIST | MEMORY | WAL | OFF
.
File rotation is properly handled, including logrotate's copytruncate mode.
Note that the Path
patterns cannot match the rotated files. Otherwise, the rotated file would be read again and lead to duplicate records.
The Syslog input plugin allows collecting Syslog messages through a Unix socket server (UDP or TCP) or over the network using TCP or UDP.
The plugin supports the following configuration parameters:
When using the Syslog input plugin, Fluent Bit requires access to the parsers.conf file; the path to this file can be specified with the option -R or through the Parsers_File key in the [SERVICE] section (more details below).
When udp or unix_udp is used, the buffer size to receive messages is configurable only through the Buffer_Chunk_Size option which defaults to 32kb.
In order to receive Syslog messages, you can run the plugin from the command line or through the configuration file:
From the command line you can let Fluent Bit listen for Syslog messages with the following options:
By default the service will create and listen for Syslog messages on the unix socket /tmp/in_syslog
In your main configuration file append the following Input & Output sections:
Once Fluent Bit is running, you can send some messages using the logger tool:
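For example, writing one test message to the Unix socket created above:

logger -u /tmp/in_syslog "hello from Fluent Bit test"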
The following content aims to provide configuration examples for different use cases to integrate Fluent Bit and make it listen for Syslog messages from your systems.
Put the following content in your fluent-bit.conf file:
then start Fluent Bit.
Add a new file to your rsyslog config rules called 60-fluent-bit.conf inside the directory /etc/rsyslog.d/ and add the following content:
then make sure to restart your rsyslog daemon:
Put the following content in your fluent-bit.conf file:
then start Fluent Bit.
Add a new file to your rsyslog config rules called 60-fluent-bit.conf inside the directory /etc/rsyslog.d/ and place the following content:
Make sure that the socket file is readable by rsyslog (tweak the Unix_Perm
option shown above).
The thermal input plugin reports system temperatures periodically, every second by default. Currently this plugin is only available for Linux.
The following tables describes the information generated by the plugin.
The plugin supports the following configuration parameters:
In order to get temperature(s) of your system, you can run the plugin from the command line or through the configuration file:
Some systems provide multiple thermal zones. This example monitors only thermal_zone0 by name, once per minute.
In your main configuration file append the following Input & Output sections:
The winlog input plugin allows you to read Windows Event Log.
The plugin supports the following configuration parameters:
Note that if you do not set db, the plugin will read channels from the beginning on each startup.
Here is a minimum configuration example.
Note that some Windows Event Log channels (like Security
) require admin privileges for reading. In this case, you need to run fluent-bit as an administrator.
If you want to do a quick test, you can run this plugin from the command line.
The random input plugin generates very simple random value samples using the device interface /dev/urandom; if it is not available, a Unix timestamp is used as the value.
The plugin supports the following configuration parameters:
In order to start generating random samples, you can run the plugin from the command line or through the configuration file:
From the command line you can let Fluent Bit generate the samples with the following options:
In your main configuration file append the following Input & Output sections:
Once Fluent Bit is running, you will see the reports in the output interface similar to this:
There are certain cases where the log messages being parsed contain encoded data. A typical use case can be found in containerized environments with Docker: an application logs its data in JSON format, but it becomes an escaped string. Consider the following example.
Original message generated by the application:
Then the Docker log message becomes encapsulated as follows:
As you can see, the original message is handled as an escaped string. Ideally in Fluent Bit we would like to keep the original structured message and not a string.
Decoders are a built-in feature available through the Parsers file; each Parser definition can optionally set one or multiple decoders. There are two types of decoders:
Decode_Field: if the content can be decoded in a structured message, append that structure message (keys and values) to the original log message.
Decode_Field_As: any content decoded (unstructured or structured) will be replaced in the same key/value, no extra keys are added.
Our pre-defined Docker Parser has the following definition:
Each line in the parser with a Decode_Field key instructs the parser to apply a specific decoder to a given field; optionally, it offers the option to take an extra action if the decoder cannot succeed.
If a decoder fails to decode the field, or you want to try the next decoder, it is possible to define an optional action. Available actions are:
Note that actions are affected by some restrictions:
on Decode_Field_As, if succeeded, another decoder of the same type in the same field can be applied only if the data continues being an unstructured message (raw text).
on Decode_Field, if succeeded, can only be applied once for the same field. By nature Decode_Field aims to decode a structured message.
Example input (from /path/to/log.log
in configuration below)
Example output
Configuration file
The fluent-bit-parsers.conf
file,
The tcp input plugin allows retrieving structured JSON or raw messages over a TCP network interface (TCP port).
The plugin supports the following configuration parameters:
In order to receive JSON messages over TCP, you can run the plugin from the command line or through the configuration file:
From the command line you can let Fluent Bit listen for JSON messages with the following options:
By default the service will listen on all interfaces (0.0.0.0) on TCP port 5170; optionally you can change this directly, e.g:
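For example (the address and port are illustrative):

fluent-bit -i tcp://192.168.3.2:9090 -o stdout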
In the example, the JSON messages will only arrive through the network interface with address 192.168.3.2 and TCP port 9090.
In your main configuration file append the following Input & Output sections:
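A sketch using the defaults:

[INPUT]
    Name   tcp
    Listen 0.0.0.0
    Port   5170
    Format json

[OUTPUT]
    Name  stdout
    Match *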
Once Fluent Bit is running, you can send some messages using the netcat:
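For example:

echo '{"key 1": 123456789, "key 2": "abc"}' | nc 127.0.0.1 5170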
When receiving payloads in JSON format, there are high performance penalties. Parsing JSON is a very expensive task, so you could expect your CPU usage to increase under high load environments.
To get faster data ingestion, consider using the option Format none
to avoid JSON parsing when it is not needed.
Size of the buffer (check for allowed values)
In your main configuration file append the following Input & Output sections. An example visualization can be found
In we should see the following output:
In we should see the following output:
Key
Description
Default
Buffer_Chunk_Size
Set the initial buffer size to read files data. This value is used to increase buffer size. The value must be according to the Unit Size specification.
32k
Buffer_Max_Size
Set the limit of the buffer size per monitored file. When a buffer needs to be increased (e.g: very long lines), this value is used to restrict how much the memory buffer can grow. If reading a file exceeds this limit, the file is removed from the monitored file list. The value must be according to the Unit Size specification.
32k
Path
Pattern specifying a specific log file or multiple ones through the use of common wildcards. Multiple patterns separated by commas are also allowed.
Path_Key
If enabled, it appends the name of the monitored file as part of the record. The value assigned becomes the key in the map.
Exclude_Path
Set one or multiple shell patterns separated by commas to exclude files matching certain criteria, e.g: Exclude_Path *.gz,*.zip
Offset_Key
If enabled, Fluent Bit appends the offset of the current monitored file as part of the record. The value assigned becomes the key in the map
Read_from_Head
For new discovered files on start (without a database offset/position), read the content from the head of the file, not tail.
False
Refresh_Interval
The interval of refreshing the list of watched files in seconds.
60
Rotate_Wait
Specify the additional time in seconds to keep monitoring a file once it is rotated, in case some pending data needs to be flushed.
5
Ignore_Older
Ignores records which are older than this time in seconds. Supports m,h,d (minutes, hours, days) syntax. Default behavior is to read all records from specified files. Only available when a Parser is specified and it can parse the time of a record.
Skip_Long_Lines
When a monitored file reaches its buffer capacity due to a very long line (Buffer_Max_Size), the default behavior is to stop monitoring that file. Skip_Long_Lines alters that behavior and instructs Fluent Bit to skip long lines and continue processing other lines that fit into the buffer size.
Off
DB
Specify the database file to keep track of monitored files and offsets.
DB.sync
Set a default synchronization (I/O) method. Values: Extra, Full, Normal, Off. This flag affects how the internal SQLite engine synchronizes to disk; for more details about each option please refer to this section. Most workload scenarios will be fine with normal
mode, but if you really need full synchronization after every write operation you should set full
mode. Note that full
has a high I/O performance cost.
normal
DB.locking
Specify that the database will be accessed only by Fluent Bit. Enabling this feature helps to increase performance when accessing the database, but it restricts any external tool from querying the content.
false
DB.journal_mode
sets the journal mode for databases (WAL). Enabling WAL provides higher performance. Note that WAL is not compatible with shared network file systems.
WAL
Mem_Buf_Limit
Set a limit of memory that the Tail plugin can use when appending data to the Engine. If the limit is reached, the plugin is paused; when the data is flushed it resumes.
exit_on_eof
When reading a file, the plugin will exit as soon as it reaches the end of the file. Useful for bulk loading and tests.
false
Parser
Specify the name of a parser to interpret the entry as a structured message.
Key
When a message is unstructured (no parser applied), it's appended as a string under the key name log. This option allows to define an alternative name for that key.
log
Tag
Set a tag (with regex-extract fields) that will be placed on lines read. E.g. kube.<namespace_name>.<pod_name>.<container_name>
. Note that "tag expansion" is supported: if the tag includes an asterisk (*), that asterisk will be replaced with the absolute path of the monitored file (also see Workflow of Tail + Kubernetes Filter).
Tag_Regex
Set a regex to extract fields from the file name. E.g. (?<pod_name>[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*)_(?<namespace_name>[^_]+)_(?<container_name>.+)-
Key
Description
Default
Multiline
If enabled, the plugin will try to discover multiline messages and use the proper parsers to compose the outgoing messages. Note that when this option is enabled the Parser option is not used.
Off
Multiline_Flush
Wait period time in seconds to process queued multiline messages
4
Parser_Firstline
Name of the parser that matches the beginning of a multiline message. Note that the regular expression defined in the parser must include a group name (named capture), and the value of the last match group must be a string
Parser_N
Optional-extra parser to interpret and structure multiline entries. This option can be used to define multiple parsers, e.g: Parser_1 ab1, Parser_2 ab2, Parser_N abN.
Key
Description
Default
Docker_Mode
If enabled, the plugin will recombine split Docker log lines before passing them to any parser as configured above. This mode cannot be used at the same time as Multiline.
Off
Docker_Mode_Flush
Wait period time in seconds to flush queued unfinished split lines.
4
Docker_Mode_Parser
Specify an optional parser for the first line of the docker multiline mode. The parser name to be specified must be registered in the parsers.conf
file.
Key
Description
Default
Mode
Defines transport protocol mode: unix_udp (UDP over Unix socket), unix_tcp (TCP over Unix socket), tcp or udp
unix_udp
Listen
If Mode is set to tcp, specify the network interface to bind.
0.0.0.0
Port
If Mode is set to tcp, specify the TCP port to listen for incoming connections.
5140
Path
If Mode is set to unix_tcp or unix_udp, set the absolute path to the Unix socket file.
Unix_Perm
If Mode is set to unix_tcp or unix_udp, set the permission of the Unix socket file.
0644
Parser
Specify an alternative parser for the message. If Mode is set to tcp or udp then the default parser is syslog-rfc5424 otherwise syslog-rfc3164-local is used. If your syslog messages have fractional seconds set this Parser value to syslog-rfc5424 instead.
Buffer_Chunk_Size
By default, the buffer to store the incoming Syslog messages does not allocate the maximum memory allowed; instead it allocates memory as it is required. The rounds of allocations are set by Buffer_Chunk_Size. If not set, Buffer_Chunk_Size is equal to 32000 bytes (32KB). Read the considerations below when using udp or unix_udp mode.
Buffer_Max_Size
Specify the maximum buffer size to receive a Syslog message. If not set, the default size will be the value of Buffer_Chunk_Size.
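The options above can be combined into a small Syslog input sketch like the one below; the TCP mode, port and buffer sizes are illustrative values, not requirements:

```
[INPUT]
    Name               syslog
    Mode               tcp
    Listen             0.0.0.0
    Port               5140
    Parser             syslog-rfc5424
    Buffer_Chunk_Size  32KB
    Buffer_Max_Size    64KB

[OUTPUT]
    Name   stdout
    Match  *
```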
key
description
name
The name of the thermal zone, such as thermal_zone0
type
The type of the thermal zone, such as x86_pkg_temp
temp
Current temperature in celsius
Key
Description
Interval_Sec
Polling interval (seconds). default: 1
Interval_NSec
Polling interval (nanoseconds). default: 0
name_regex
Optional name filter regex. default: None
type_regex
Optional type filter regex. default: None
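Assuming these options belong to the thermal input plugin, a minimal polling sketch could look like the following; the tag and the zone name used in name_regex are assumptions for illustration:

```
[INPUT]
    Name          thermal
    Tag           temperature
    Interval_Sec  1
    name_regex    thermal_zone0   # only report the assumed zone thermal_zone0

[OUTPUT]
    Name   stdout
    Match  temperature
```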
Name
Description
json
Handle the field content as a JSON map. If it finds a JSON map, it will replace the content with a structured map.
escaped
decode an escaped string.
escaped_utf8
decode a UTF8 escaped string.
Name
Description
try_next
If the decoder fails, apply the next decoder in the list for the same field.
do_next
Whether the decoder succeeded or failed, apply the next decoder in the list for the same field.
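Decoders and their optional rules are typically declared inside a [PARSER] definition with the Decode_Field_As statement. The sketch below follows that common pattern; the parser name and the target field log are assumptions for the example:

```
[PARSER]
    Name             docker_decoded
    Format           json
    Time_Key         time
    Time_Format      %Y-%m-%dT%H:%M:%S.%L
    # Try to unescape the UTF-8 string first, then continue with the next decoder
    Decode_Field_As  escaped_utf8  log  do_next
    # Then try to interpret the field content as a JSON map
    Decode_Field_As  json          log
```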
Key
Description
Proc_Name
Name of the target Process to check.
Interval_Sec
Interval in seconds between the service checks. Default value is 1.
Interval_Nsec
Specify a nanosecond interval for service checks; it works in conjunction with the Interval_Sec configuration key. Default value is 0.
Alert
If enabled, it will only generate messages if the target process is down. By default this option is disabled.
Fd
If enabled, the number of file descriptors used by the process is appended to each record. Default value is true.
Mem
If enabled, the memory usage of the process is appended to each record. Default value is true.
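A minimal sketch of this process-check input could look like the following, assuming the plugin name proc and a hypothetical target process called crond:

```
[INPUT]
    Name          proc
    Proc_Name     crond     # assumed process to monitor
    Interval_Sec  1
    Alert         false
    Fd            true
    Mem           true

[OUTPUT]
    Name   stdout
    Match  *
```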
Key
Description
Default
Channels
A comma-separated list of channels to read from.
Interval_Sec
Set the polling interval for each channel. (optional)
1
DB
Set the path to save the read offsets. (optional)
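A minimal sketch of this Windows Event Log input could look like the following, assuming the plugin name winlog; the channel list and database file are illustrative:

```
[INPUT]
    Name          winlog
    Channels      Setup,Windows PowerShell
    Interval_Sec  1
    DB            winlog.sqlite    # assumed offsets database file

[OUTPUT]
    Name   stdout
    Match  *
```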
Key
Description
Samples
If set, it will only generate a specific number of samples. By default this value is set to -1, which will generate unlimited samples.
Interval_Sec
Interval in seconds between samples generation. Default value is 1.
Internal_Nsec
Specify a nanosecond interval for sample generation; it works in conjunction with the Interval_Sec configuration key. Default value is 0.
Key
Description
Default
Listen
Listener network interface.
0.0.0.0
Port
TCP port to listen for incoming connections.
5170
Buffer_Size
Specify the maximum buffer size in KB to receive a JSON message. If not set, the default size will be the value of Chunk_Size.
Chunk_Size
By default, the buffer to store the incoming JSON messages does not allocate the maximum memory allowed; instead it allocates memory as it is required. The rounds of allocations are set by Chunk_Size in KB. If not set, Chunk_Size is equal to 32 (32KB).
32
Format
Specify the expected payload format. It supports the options json and none. When using json, it expects JSON maps; when set to none, it will split every record using the defined Separator (option below).
json
Separator
When the expected Format is set to none, Fluent Bit needs a separator string to split the records. By default it uses the newline character \n (LF or 0x0A).
\n
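Putting the options above together, a minimal TCP input sketch could look like this; the listener and port repeat the defaults, while the buffer values are illustrative:

```
[INPUT]
    Name         tcp
    Listen       0.0.0.0
    Port         5170
    Chunk_Size   32
    Buffer_Size  64
    Format       json

[OUTPUT]
    Name   stdout
    Match  *
```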
Made for testing: make sure that your records contain the expected key and values
The expect filter plugin allows you to validate that records match certain criteria in their structure, like validating that a key exists or that it has a specific value.
This page only describes the available configuration properties; for a detailed explanation of usage and use cases, please refer to the following page:
The plugin supports the following configuration parameters:
Property
Description
key_exists
Check if a key with a given name exists in the record.
key_not_exists
Check if a key does not exist in the record.
key_val_is_null
check that the value of the key is NULL.
key_val_is_not_null
check that the value of the key is NOT NULL.
key_val_eq
check that the value of the key equals the given value in the configuration.
action
Action to take when a rule does not match. The available options are warn or exit. On warn, a warning message is sent to the logging layer when a mismatch of the rules above is found; using exit makes Fluent Bit abort with status code 255.
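A minimal sketch of how these rules can be combined is shown below; the dummy record and the checked key/value are assumptions for the example:

```
[INPUT]
    Name   dummy
    Dummy  {"color": "blue", "label": "test"}

[FILTER]
    Name        expect
    Match       *
    key_exists  color
    key_val_eq  color blue
    action      warn

[OUTPUT]
    Name   stdout
    Match  *
```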
As mentioned on top, refer to the following page for specific details of usage of this filter:
Look up Geo data from IP
The GeoIP2 Filter allows you to enrich the incoming data stream using location data from a GeoIP2 database.
This plugin supports the following configuration parameters:
Key
Description
database
Path to the GeoIP2 database.
lookup_key
Field name to process
record
Defines the KEY LOOKUP_KEY VALUE
triplet. See below for how to set up this option.
The following configuration will process the incoming remote_addr field, and append country information retrieved from the GeoLite2 database.
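A configuration along those lines could look like the sketch below; the database path and the sample remote_addr value are assumptions for illustration:

```
[INPUT]
    Name   dummy
    Dummy  {"remote_addr": "8.8.8.8"}

[FILTER]
    Name        geoip2
    Match       *
    database    GeoLite2-City.mmdb
    lookup_key  remote_addr
    record      country remote_addr %{country.names.en}

[OUTPUT]
    Name   stdout
    Match  *
```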
Each Record
parameter above specifies the following triplet:
The field name to be added to records (country
)
The lookup key to process (remote_addr
)
The query for GeoIP2 database (%{country.names.en}
)
By running Fluent Bit with the configuration above, you will see the following output:
Note that the GeoLite2-City.mmdb
database is available from MaxMind's official site.
The logfmt parser allows parsing of the logfmt format described at https://brandur.org/logfmt. A more formal description is at https://godoc.org/github.com/kr/logfmt.
Here is an example configuration:
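A minimal parser definition for this format could look like the following; the parser name is illustrative:

```
[PARSER]
    Name    logfmt
    Format  logfmt
```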
The following log entry is a valid content for the parser defined above:
After processing, its internal representation will be:
The ltsv parser allows parsing of LTSV formatted text.
Labeled Tab-separated Values (LTSV) format is a variant of Tab-separated Values (TSV). Each record in an LTSV file is represented as a single line. Each field is separated by a TAB and has a label and a value. The label and the value are separated by ':'.
Here is an example of how to use this format for the Apache access log.
Configure this in httpd.conf:
The parser.conf:
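A parser.conf entry for this format could look like the sketch below; the time format and type mappings assume the Apache LTSV LogFormat mentioned above:

```
[PARSER]
    Name         ltsv
    Format       ltsv
    Time_Key     time
    Time_Format  %d/%b/%Y:%H:%M:%S %z
    Types        status:integer size:integer
```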
The following log entry is a valid content for the parser defined above:
After processing, its internal representation will be:
The time has been converted to Unix timestamp (UTC).
The JSON parser is the simplest option: if the original log source is a JSON map string, it will take its structure and convert it directly to the internal binary representation.
A simple configuration that can be found in the default parsers configuration file is the entry to parse Docker log files (when the tail input plugin is used):
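A sketch of such an entry is shown below; the exact Time_Format in your default parsers file may differ slightly:

```
[PARSER]
    Name         docker
    Format       json
    Time_Key     time
    Time_Format  %Y-%m-%dT%H:%M:%S.%L
    Time_Keep    On
```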
The following log entry is a valid content for the parser defined above:
After processing, its internal representation will be:
The time has been converted to Unix timestamp (UTC) and the map reduced to each component of the original message.
Select or exclude records per patterns
The Grep Filter plugin allows you to match or exclude specific records based on regular expression patterns for values or nested values.
The plugin supports the following configuration parameters:
Key
Value Format
Description
Regex
KEY REGEX
Keep records in which the content of KEY matches the regular expression.
Exclude
KEY REGEX
Exclude records in which the content of KEY matches the regular expression.
This plugin enables the Record Accessor feature to specify the KEY. Using the record accessor is suggested if you want to match values against nested values.
In order to start filtering records, you can run the filter from the command line or through the configuration file. The following example assumes that you have a file called lines.txt
with the following content:
Note: using the command line mode requires special attention to quote the regular expressions properly. It's suggested to use a configuration file.
The following command will load the tail plugin and read the content of lines.txt
file. Then the grep filter will apply a regular expression rule over the log field (created by the tail plugin) and only pass records whose field value starts with aa:
The filter allows multiple rules which are applied in order; you can have as many Regex and Exclude entries as required.
If you want to match or exclude records based on nested values, you can use a Record Accessor format as the KEY name. Consider the following record example:
If you want to exclude records that match a given nested field (for example kubernetes.labels.app), you can use a rule like the following:
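Using the Record Accessor syntax, such a rule could look like the sketch below; the label value myapp is an assumption for illustration:

```
[FILTER]
    Name     grep
    Match    *
    Exclude  $kubernetes['labels']['app'] myapp
```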
The Parser Filter plugin allows for parsing fields in event records.
The plugin supports the following configuration parameters:
Key
Description
Default
Key_Name
Specify field name in record to parse.
Parser
Specify the parser name to interpret the field. Multiple Parser entries are allowed (one per line).
Preserve_Key
Keep original Key_Name
field in the parsed result. If false, the field will be removed.
False
Reserve_Data
Keep all other original fields in the parsed result. If false, all other original fields will be removed.
False
Unescape_Key
If the key is an escaped string (e.g: stringify JSON), unescape the string before applying the parser.
False
This is an example of parsing a record {"data":"100 0.5 true This is example"}
.
The plugin needs a parser file which defines how to parse each field.
The path of the parser file should be written in configuration file under the [SERVICE] section.
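A sketch of the parser file and the matching filter configuration could look like the following; the parser name dummy_test, the dummy input and the file paths are assumptions for the example:

```
# parsers.conf (referenced from the [SERVICE] section)
[PARSER]
    Name    dummy_test
    Format  regex
    Regex   ^(?<INT>[^ ]+) (?<FLOAT>[^ ]+) (?<BOOL>[^ ]+) (?<STRING>.+)$

# fluent-bit.conf
[SERVICE]
    Parsers_File  parsers.conf

[INPUT]
    Name   dummy
    Tag    dummy.data
    Dummy  {"data":"100 0.5 true This is example"}

[FILTER]
    Name      parser
    Match     dummy.*
    Key_Name  data
    Parser    dummy_test

[OUTPUT]
    Name   stdout
    Match  *
```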
The output is
You can see that the record {"data":"100 0.5 true This is example"} has been parsed.
By default, the parser plugin only keeps the parsed fields in its output.
If you enable Reserve_Data
, all other fields are preserved:
This will produce the output:
If you enable Reserve_Data
and Preserve_Key
, the original key field will be preserved as well:
This will produce the following output:
Lua Filter allows you to modify the incoming records using custom Lua Scripts.
Due to the necessity of a flexible filtering mechanism, it is now possible to extend Fluent Bit capabilities by writing simple filters using the Lua programming language. A Lua-based filter takes two steps:
Configure the Filter in the main configuration
Prepare a Lua script that will be used by the Filter
The plugin supports the following configuration parameters:
Key
Description
script
Path to the Lua script that will be used.
call
Lua function name that will be triggered to do filtering. It's assumed that the function is declared inside the Script defined above.
type_int_key
If these keys are matched, the fields are converted to integer. If more than one key is set, delimit them by space. Note that starting from Fluent Bit v1.6 integer data types are preserved and not converted to double as in previous versions.
protected_mode
If enabled, the Lua script will be executed in protected mode. This prevents Fluent Bit from crashing when an invalid Lua script is executed. Default is true.
time_as_table
By default, when the Lua script is invoked, the record timestamp is passed as a floating point number, which might lead to precision loss when the data is converted back. If you need timestamp precision, enabling this option will pass the timestamp as a Lua table with the keys sec for seconds since epoch and nsec for nanoseconds.
In order to test the filter, you can run the plugin from the command line or through the configuration file. The following examples use the dummy input plugin for data ingestion, invoke the Lua filter using the test.lua script and call the cb_print() function, which only prints the same information to the standard output:
From the command line you can use the following options:
In your main configuration file append the following Input, Filter & Output sections:
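A configuration sketch for this scenario is shown below; it assumes that test.lua lives next to the configuration file and defines a function named cb_print:

```
[INPUT]
    Name   dummy

[FILTER]
    Name    lua
    Match   *
    script  test.lua
    call    cb_print

[OUTPUT]
    Name   stdout
    Match  *
```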
The life cycle of a filter has the following steps:
Upon Tag matching by filter_lua, it may process or bypass the record.
If filter_lua accepts the record, it will invoke the function defined in the call property which basically is the name of a function defined in the Lua script.
Invoke Lua function passing each record in JSON format.
Upon return, validate the return value and take the corresponding action (described below).
The Lua script can have one or multiple callbacks that can be used by filter_lua; the prototype is as follows:
name
description
tag
Name of the tag associated with the incoming record.
timestamp
Unix timestamp with nanoseconds associated with the incoming record. The original format is a double (seconds.nanoseconds)
record
Lua table with the record content
Each callback must return three values:
name
data type
description
code
integer
The code return value represents the result and the further action that may follow. If code equals -1, filter_lua must drop the record. If code equals 0, the record will not be modified. If code equals 1, the original timestamp and record have been modified, so they must be replaced by the returned values from timestamp (second return value) and record (third return value). If code equals 2, the original timestamp is not modified and the record has been modified, so it must be replaced by the returned value from record (third return value). Code 2 is supported from v1.4.3.
timestamp
double
If code equals 1, the original record timestamp will be replaced with this new value.
record
table
If code equals 1, the original record information will be replaced with this new value. Note that the format of this value must be a valid Lua table.
For functional examples of this interface, please refer to the code samples provided in the source code of the project located here:
https://github.com/fluent/fluent-bit/tree/master/scripts
In Lua, Fluent Bit treats numbers as doubles. This means an integer field (e.g. IDs, log levels) will be converted to a double. To avoid this type conversion, the type_int_key property is available.
Fluent Bit supports protected mode to prevent crashes when executing an invalid Lua script. See also Error Handling in Application Code.
The Fluent Bit Kubernetes Filter allows you to enrich your log files with Kubernetes metadata.
When Fluent Bit is deployed in Kubernetes as a DaemonSet and configured to read the log files from the containers (using tail or systemd input plugins), this filter aims to perform the following operations:
Analyze the Tag and extract the following metadata:
Pod Name
Namespace
Container Name
Container ID
Query Kubernetes API Server to obtain extra metadata for the POD in question:
Pod ID
Labels
Annotations
The data is cached locally in memory and appended to each record.
The plugin supports the following configuration parameters:
Key
Description
Default
Buffer_Size
Set the buffer size for the HTTP client when reading responses from the Kubernetes API server. The value must be according to the unit size specification. A value of 0 results in no limit, and the buffer will expand as needed. Note that if pod specifications exceed the buffer limit, the API response will be discarded when retrieving metadata, and some Kubernetes metadata will fail to be injected into the logs.
32k
Kube_URL
API Server end-point
Kube_CA_File
CA certificate file
/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
Kube_CA_Path
Absolute path to scan for certificate files
Kube_Token_File
Token file
/var/run/secrets/kubernetes.io/serviceaccount/token
Kube_Tag_Prefix
When the source records come from the Tail input plugin, this option allows specifying the prefix used in the Tail configuration.
kube.var.log.containers.
Merge_Log
When enabled, it checks if the log field content is a JSON string map; if so, it appends the map fields as part of the log structure.
Off
Merge_Log_Key
When Merge_Log is enabled, the filter tries to assume the log field from the incoming message is a JSON string message and makes a structured representation of it at the same level as the log field in the map. If Merge_Log_Key is set (a string name), all the new structured fields taken from the original log content are inserted under the new key.
Merge_Log_Trim
When Merge_Log
is enabled, trim (remove possible \n or \r) field values.
On
Merge_Parser
Optional parser name to specify how to parse the data contained in the log key. Recommended use is for developers or testing only.
Keep_Log
When Keep_Log
is disabled, the log
field is removed from the incoming message once it has been successfully merged (Merge_Log
must be enabled as well).
On
tls.debug
Debug level between 0 (nothing) and 4 (every detail).
-1
tls.verify
When enabled, turns on certificate validation when connecting to the Kubernetes API server.
On
Use_Journal
When enabled, the filter reads logs coming in Journald format.
Off
Cache_Use_Docker_Id
When enabled, metadata will be fetched from K8s when docker_id is changed.
Off
Regex_Parser
Set an alternative Parser to process the record Tag and extract pod_name, namespace_name, container_name and docker_id. The parser must be registered in a parsers file (refer to parser filter-kube-test as an example).
K8S-Logging.Parser
Allow Kubernetes Pods to suggest a pre-defined Parser (read more about it in Kubernetes Annotations section)
Off
K8S-Logging.Exclude
Allow Kubernetes Pods to exclude their logs from the log processor (read more about it in Kubernetes Annotations section).
Off
Labels
Include Kubernetes resource labels in the extra metadata.
On
Annotations
Include Kubernetes resource annotations in the extra metadata.
On
Kube_meta_preload_cache_dir
If set, Kubernetes meta-data can be cached/pre-loaded from files in JSON format in this directory, named as namespace-pod.meta
Dummy_Meta
If set, use dummy-meta data (for test/dev purposes)
Off
DNS_Retries
DNS lookup retries N times until the network starts working.
6
DNS_Wait_Time
DNS lookup interval between network status checks
30
Use_Kubelet
This is an optional feature flag to get metadata information from the kubelet instead of calling the Kubernetes API Server to enrich the logs. This can reduce the load on the Kubernetes API server in large clusters.
Off
Kubelet_Port
Kubelet port to use for the HTTP request; this only works when Use_Kubelet is set to On.
10250
Kubernetes Filter aims to provide several ways to process the data contained in the log key. The following explanation of the workflow assumes that your original Docker parser defined in parsers.conf is as follows:
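A docker parser entry of that kind typically looks like the sketch below; the exact entry shipped in your parsers.conf may differ slightly:

```
[PARSER]
    Name         docker
    Format       json
    Time_Key     time
    Time_Format  %Y-%m-%dT%H:%M:%S.%L
    Time_Keep    On
```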
Since Fluent Bit v1.2, we do not suggest the use of decoders (Decode_Field_As) if you are using an Elasticsearch database in the output, in order to avoid data type conflicts.
To perform processing of the log key, it's mandatory to enable the Merge_Log configuration property in this filter, then the following processing order will be done:
If a Pod suggests a parser, the filter will use that parser to process the content of log.
If the option Merge_Parser was set and the Pod did not suggest a parser, process the log content using the parser set in the configuration.
If the Pod did not suggest a parser and no Merge_Parser is set, try to handle the content as JSON.
If log value processing fails, the value is untouched. The order above is not chained, meaning it's exclusive and the filter will try only one of the options above, not all of them.
A flexible feature of the Fluent Bit Kubernetes filter is that it allows Kubernetes Pods to suggest certain behaviors for the log processor pipeline when processing the records. At the moment it supports:
Suggest a pre-defined parser
Request to exclude logs
The following annotations are available:
Annotation
Description
Default
fluentbit.io/parser[_stream][-container]
Suggest a pre-defined parser. The parser must be registered already by Fluent Bit. This option will only be processed if the Fluent Bit configuration (Kubernetes Filter) has enabled the option K8S-Logging.Parser. If present, the stream (stdout or stderr) restricts the parser to that specific stream. If present, the container can override the parser for a specific container in a Pod.
fluentbit.io/exclude[_stream][-container]
Request that Fluent Bit excludes (or not) the logs generated by the Pod. This option will only be processed if the Fluent Bit configuration (Kubernetes Filter) has enabled the option K8S-Logging.Exclude.
False
The following Pod definition runs a Pod that emits Apache logs to the standard output; in the Annotations it suggests that the data should be processed using the pre-defined parser called apache:
There are certain situations where the user would like to request that the log processor simply skip the logs from the Pod in question:
Note that the annotation value is a boolean which can take true or false and must be quoted.
Kubernetes Filter depends on either the Tail or Systemd input plugins to process and enrich records with Kubernetes metadata. Here we will explain the workflow of Tail and how its configuration is correlated with the Kubernetes filter. Consider the following configuration example (just for demo purposes, not production):
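A demo configuration matching the description below could look like this sketch; the database path is an assumption, while the path, parser, tag prefix and Kube_URL values follow the defaults discussed in this section:

```
[INPUT]
    Name           tail
    Tag            kube.*
    Path           /var/log/containers/*.log
    Parser         docker
    DB             /var/log/flb_kube.db     # assumed offsets database
    Mem_Buf_Limit  5MB

[FILTER]
    Name                kubernetes
    Match               kube.*
    Kube_URL            https://kubernetes.default.svc:443
    Kube_Tag_Prefix     kube.var.log.containers.
    Merge_Log           On
    K8S-Logging.Parser  On

[OUTPUT]
    Name   stdout
    Match  *
```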
In the input section, the Tail plugin will monitor all files ending in .log in the path /var/log/containers/. For every file it will read every line and apply the docker parser. Then the records are emitted to the next step with an expanded tag.
Tail supports tag expansion, which means that if a tag has a star character (*), it will be replaced with the absolute path of the monitored file, so if your file name and path is:
then the Tag for every record of that file becomes:
Note that slashes are replaced with dots.
When the Kubernetes Filter runs, it will try to match all records that start with kube. (note the ending dot), so records from the file mentioned above will hit the matching rule and the filter will try to enrich the records.
The Kubernetes Filter does not care where the logs come from, but it does care about the absolute name of the monitored file, because that information contains the pod name and namespace name that are used to retrieve the metadata associated with the running Pod from the Kubernetes Master/API Server.
If you have large pod specifications (can be caused by large numbers of environment variables, etc.), be sure to increase the
Buffer_Size
parameter of the kubernetes filter. If object sizes exceed this buffer, some metadata will fail to be injected to the logs.
If the configuration property Kube_Tag_Prefix was configured (available on Fluent Bit >= 1.1.x), it will use that value to remove the prefix that was appended to the Tag in the previous Input section. Note that the configuration property defaults to kube.var.log.containers. , so the previous Tag content will be transformed from:
to:
The transformation above does not modify the original Tag; it just creates a new representation for the filter to perform the metadata lookup.
That new value is used by the filter to look up the pod name and namespace; for that purpose it uses an internal regular expression:
If you want to know more details, check the source code of that definition here.
You can see on the Rubular.com web site how this operation is performed; check the following demo link:
Under certain uncommon conditions, a user may want to alter that hard-coded regular expression; for that purpose the option Regex_Parser can be used (documented above).
So at this point the filter is able to gather the values of pod_name and namespace. With that information it will check in the local cache (an internal hash table) if some metadata for that key pair already exists; if so, it will enrich the record with the metadata value, otherwise it will connect to the Kubernetes Master/API Server and retrieve that information.
There is a reported issue about the kube-apiserver falling over and becoming unresponsive when the cluster is too large and too many requests are sent to it. With this feature, the Fluent Bit Kubernetes filter sends requests to the kubelet /pods endpoint instead of the kube-apiserver to retrieve the pod information and use it to enrich the logs. Since the kubelet runs locally on each node, the request is answered faster and each node only receives one request at a time. This saves kube-apiserver capacity to handle other requests. When this feature is enabled, you should see no difference in the Kubernetes metadata added to logs, but the kube-apiserver bottleneck is avoided when the cluster is large.
Some configuration setup is needed for this feature.
Role Configuration for Fluent Bit DaemonSet Example:
The difference is that the kubelet needs a special permission for the resource nodes/proxy to accept the HTTP requests. When creating the role or clusterRole, you need to add nodes/proxy into the rule for resources.
Fluent Bit Configuration Example:
So for the Fluent Bit configuration, you need to set Use_Kubelet to true to enable this feature.
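A filter section sketch with this feature enabled could look like the following; only Use_Kubelet is strictly required here, the other values repeat defaults discussed above:

```
[FILTER]
    Name          kubernetes
    Match         kube.*
    Use_Kubelet   true
    Kubelet_Port  10250
    Buffer_Size   0     # unlimited HTTP client buffer, see Buffer_Size above
```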
DaemonSet config Example:
The key point is to set hostNetwork to true and dnsPolicy to ClusterFirstWithHostNet so that the Fluent Bit DaemonSet can call the kubelet locally. Otherwise it cannot resolve DNS for the kubelet.
Now you are good to use this new feature!
Basically you should see no difference in your experience of enriching your log files with Kubernetes metadata.
To check if Fluent Bit is using the kubelet, check the Fluent Bit logs; there should be a log line like this:
And if you are in debug mode, you could see more:
The following section goes over specific log messages you may run into and how to solve them to ensure that Fluent Bit's Kubernetes filter is operating properly.
If you are not seeing metadata added to your kubernetes logs and see the following in your log message, then you may be facing connectivity issues with the Kubernetes API server.
Potential fix #1: Check Kubernetes roles
When Fluent Bit is deployed as a DaemonSet it generally runs with specific roles that allow the application to talk to the Kubernetes API server. If you are deployed in a more restricted environment check that all the Kubernetes roles are set correctly.
Potential fix #2: Check Kubernetes IPv6
There may be cases where you have IPv6 enabled in the environment and you need to enable this within Fluent Bit. Under the service section, please set the option ipv6 to on.
Potential fix #3: Check connectivity to Kube_URL
By default the Kube_URL is set to https://kubernetes.default.svc:443
. Ensure that you have connectivity to this endpoint from within the cluster and that there are no special permissions interfering with the connection.
In some cases, you may only see some objects being appended with metadata while other objects are not enriched. This can occur when local data is cached and does not contain the correct id for the Kubernetes object that requires enrichment. For most Kubernetes objects the Kubernetes API server is updated, which will then be reflected in the Fluent Bit logs; however, in some cases for Pod objects this refresh to the Kubernetes API server can be skipped, causing metadata to be skipped as well.
Every project has a story
In 2014, the Fluentd team at Treasure Data forecasted the need for a lightweight log processor for constrained environments like Embedded Linux and Gateways. The project aimed to be part of the Fluentd Ecosystem and we called it Fluent Bit, fully open source and available under the terms of the Apache License v2.0.
After the project had been around for some time, it got some traction in the Embedded market, but we also started getting requests for several features from the Cloud community, like more inputs, filters, and outputs. Not long after that, Fluent Bit became one of the preferred solutions to solve the logging challenges in Cloud environments.
Certain configuration directives in Fluent Bit refer to unit sizes, for example when defining the size of a buffer or specific limits; we can find these in plugins like Tail Input, Forward Input or in generic properties like Mem_Buf_Limit.
Starting from Fluent Bit v0.11.10, all unit sizes have been standardized across the core and plugins. The following table describes the options that can be used and what they mean:
Suffix
Description
Example
When a suffix is not specified, it's assumed that the value given is a bytes representation.
Specifying a value of 32000 means 32000 bytes.
k, K, KB, kb
Kilobyte: a unit of memory equal to 1,000 bytes.
32k means 32000 bytes.
m, M, MB, mb
Megabyte: a unit of memory equal to 1,000,000 bytes
1M means 1000000 bytes
g, G, GB, gb
Gigabyte: a unit of memory equal to 1,000,000,000 bytes
1G means 1000000000 bytes
In certain scenarios it is ideal to estimate how much memory Fluent Bit could be using; this is very useful for containerized environments where memory limits are a must.
In order to estimate we will assume that the input plugins have set the Mem_Buf_Limit option (you can learn more about it in the Backpressure section).
Input plugins append data independently, so in order to do an estimation, a limit should be imposed through the Mem_Buf_Limit option. If the limit is set to 10MB, we need to estimate that in the worst case the output plugin will likely use 20MB.
Fluent Bit has an internal binary representation for the data being processed, but when this data reaches an output plugin, the plugin will likely create its own representation in a new memory buffer for processing. The best examples are the InfluxDB and Elasticsearch output plugins; both need to convert the binary representation to their respective custom JSON formats before talking to their backend servers.
So, if we impose a limit of 10MB for the input plugins and consider the worst-case scenario of the output plugin consuming 20MB extra, as a minimum we need (30MB x 1.2) = 36MB.
It is well known that in intensive environments where memory allocations happen in high volume, the default memory allocator provided by Glibc can lead to high fragmentation, reporting a high memory usage by the service.
It's strongly suggested that in any production environment, Fluent Bit should be built with jemalloc enabled (e.g. -DFLB_JEMALLOC=On). Jemalloc is an alternative memory allocator that can reduce fragmentation (among other things), resulting in better performance.
You can check if Fluent Bit has been built with Jemalloc using the following command:
The output should look like:
If the FLB_HAVE_JEMALLOC option is listed in Build Flags, everything will be fine.
You may wish to test a logging pipeline locally to observe how it deals with log messages. The following is a walk-through for running Fluent Bit and Elasticsearch locally with Docker Compose which can serve as an example for testing other plugins locally.
Refer to the Configuration File section to create a configuration to test.
fluent-bit.conf
:
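A small test configuration could look like the sketch below; the dummy message is arbitrary and the host name elasticsearch assumes that is the service name used in the Docker Compose file:

```
[SERVICE]
    Flush  1

[INPUT]
    Name   dummy
    Dummy  {"message": "hello from fluent-bit"}

[OUTPUT]
    Name   es
    Match  *
    Host   elasticsearch   # assumed Docker Compose service name
    Port   9200
    Index  fluent-bit
```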
Use Docker Compose to run Fluent Bit (with the configuration file mounted) and Elasticsearch.
docker-compose.yaml
:
To view indexed logs run:
To "start fresh", delete the index by running:
The docker events input plugin uses the Docker API to capture server events. A complete list of possible events returned by this plugin can be found here.
This plugin supports the following configuration parameters:
Key
Description
Default
Unix_Path
The docker socket unix path
/var/run/docker.sock
Buffer_Size
The size of the buffer used to read docker events (in bytes)
8192
Parser
Specify the name of a parser to interpret the entry as a structured message.
None
Key
When a message is unstructured (no parser applied), it's appended as a string under the key name message.
message
In your main configuration file append the following Input & Output sections:
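An Input & Output sketch for this plugin could look like the following, assuming the plugin name docker_events:

```
[INPUT]
    Name       docker_events
    Unix_Path  /var/run/docker.sock

[OUTPUT]
    Name   stdout
    Match  *
```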
The serial input plugin allows retrieving messages/data from a serial interface.
Key
Description
File
Absolute path to the device entry, e.g: /dev/ttyS0
Bitrate
The bitrate for the communication, e.g: 9600, 38400, 115200, etc
Min_Bytes
The serial interface will expect at least Min_Bytes to be available before processing the message (default: 1).
Separator
Allows specifying a separator string that's used to determine when a message ends.
Format
Specify the format of the incoming data stream. The only option available is 'json'. Note that Format and Separator cannot be used at the same time.
In order to retrieve messages over the Serial interface, you can run the plugin from the command line or through the configuration file:
The following example loads the input serial plugin with a Bitrate of 9600, listens on the /dev/tnt0 interface and uses the custom tag data to route the messages.
The above interface (/dev/tnt0) is an emulation of the serial interface (more details at the bottom); for demonstration purposes we will write a message to the other end of the interface, in this case /dev/tnt1, e.g:
In Fluent Bit you should see an output like this:
Now using the Separator configuration, we could send multiple messages at once (run this command after starting Fluent Bit):
In your main configuration file append the following Input & Output sections:
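An Input & Output sketch matching the example above could look like this; the separator string X is an assumption for the multi-message test:

```
[INPUT]
    Name       serial
    Tag        data
    File       /dev/tnt0
    Bitrate    9600
    Separator  X        # assumed separator string

[OUTPUT]
    Name   stdout
    Match  data
```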
The following content is some extra information that will allow you to emulate a serial interface on your Linux system, so you can test this Serial input plugin locally in case you don't have such an interface in your computer. The following procedure has been tested on Ubuntu 15.04 running a Linux Kernel 4.0.
Download the sources
Unpack and compile
Copy the new kernel module into the kernel modules directory
Load the module
You should see new serial ports in /dev/ (ls /dev/tnt*). Give appropriate permissions to the new serial ports:
When the module is loaded, it will interconnect the following virtual interfaces:
Fluent Bit is distributed as the td-agent-bit package for Windows. Fluent Bit has two flavours of Windows installers: a ZIP archive (for quick testing) and an EXE installer (for system installation).
The latest stable version is 1.7.9:
INSTALLERS
SHA256 CHECKSUMS
cc6606ac4f8f32a8c466e0fe11cea60792365568625f462938f72dc14e03ee9d
bf7f514ac3b0e02148661486dfaa895230920d2b44edafe27eb782518e3d469b
45d304809638d5baab2b2689b942fc4d7361a8faabf137b28dec5274547ee11e
20d53bc5092164378d8b31fa431f7c88c8d33d49641d2e7a9238f2d95c1fc027
To check the integrity, use Get-FileHash
cmdlet on PowerShell.
Download a ZIP archive from the download page. There are installers for 32-bit and 64-bit environments, so choose one suitable for your environment.
Then you need to expand the ZIP archive. You can do this by clicking "Extract All" on Explorer, or if you're using PowerShell, you can use Expand-Archive
cmdlet.
The ZIP package contains the following set of files.
Now, launch cmd.exe or PowerShell on your machine, and execute fluent-bit.exe
as follows.
If you see the following output, it's working fine!
To halt the process, press CTRL-C in the terminal.
Download an EXE installer from the download page. It has both 32-bit and 64-bit builds. Choose one which is suitable for you.
Then, double-click the EXE installer you've downloaded. The installation wizard will start automatically.
Click Next and proceed. By default, Fluent Bit is installed into C:\Program Files\td-agent-bit\
, so you should be able to launch fluent-bit as follows after installation.
Windows services are equivalent to "daemons" in UNIX (i.e. long-running background processes). Since v1.5.0, Fluent Bit has native support for Windows services.
Suppose you have the following installation layout:
To register Fluent Bit as a Windows service, you need to execute the following command on Command Prompt. Please be careful that a single space is required after binpath=
.
Now Fluent Bit can be started and managed as a normal Windows service.
To halt the Fluent Bit service, just execute the "stop" command.
If you need to create a custom executable, you can use the following procedure to compile Fluent Bit by yourself.
First, you need Microsoft Visual C++ to compile Fluent Bit. You can install the minimum toolkit by the following command:
When asked which packages to install, choose "C++ Build Tools" (make sure that "C++ CMake tools for Windows" is selected too) and wait until the process finishes.
Also you need to install flex and bison. One way to install them on Windows is to use winflexbison.
Also you need to install git to pull the source code from the repository.
Open the start menu on Windows and type "Developer Command Prompt".
Clone the source code of Fluent Bit.
Compile the source code.
Now you should be able to run Fluent Bit:
To create a ZIP package, call cpack
as follows:
Forward is the protocol used by Fluent Bit and Fluentd to route messages between peers. This plugin implements the input service to listen for Forward messages.
The plugin supports the following configuration parameters:
Key
Description
Default
Listen
Listener network interface.
0.0.0.0
Port
TCP port to listen for incoming connections.
24224
Buffer_Max_Size
Specify the maximum buffer memory size used to receive a Forward message. The value must be according to the unit size specification.
Buffer_Chunk_Size
Buffer_Chunk_Size
By default, the buffer to store the incoming Forward messages does not allocate the maximum memory allowed; instead it allocates memory as it is required. The rounds of allocations are set by Buffer_Chunk_Size. The value must be according to the unit size specification.
32KB
In order to receive Forward messages, you can run the plugin from the command line or through the configuration file:
From the command line you can let Fluent Bit listen for Forward messages with the following options:
By default the service will listen on all interfaces (0.0.0.0) through TCP port 24224; optionally you can change this directly, e.g:
In the example, the Forward messages will only arrive through the network interface with address 192.168.3.2 and TCP port 9090.
In your main configuration file append the following Input & Output sections:
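An Input & Output sketch using the default interface and port could look like this:

```
[INPUT]
    Name    forward
    Listen  0.0.0.0
    Port    24224

[OUTPUT]
    Name   stdout
    Match  *
```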
Once Fluent Bit is running, you can send some messages using the fluent-cat tool (this tool is provided by Fluentd):
In Fluent Bit we should see the following output:
The Systemd input plugin allows collecting log messages from the Journald daemon in Linux environments.
The plugin supports the following configuration parameters:
Key
Description
Default
Path
Optional path to the Systemd journal directory, if not set, the plugin will use default paths to read local-only logs.
Max_Fields
Set a maximum number of fields (keys) allowed per record.
8000
Max_Entries
When Fluent Bit starts, the Journal might have a high number of logs in the queue. In order to avoid delays and reduce memory usage, this option allows specifying the maximum number of log entries that can be processed per round. Once the limit is reached, Fluent Bit will continue processing the remaining log entries once Journald performs the notification.
5000
Systemd_Filter
Allows performing a query over logs that contain specific Journald key/value pairs, e.g: _SYSTEMD_UNIT=UNIT. The Systemd_Filter option can be specified multiple times in the input section to apply multiple filters as required.
Systemd_Filter_Type
Define the filter type when Systemd_Filter is specified multiple times. Allowed values are And and Or. With And a record is matched only when all of the Systemd_Filter have a match. With Or a record is matched when any of the Systemd_Filter has a match.
Or
Tag
The tag is used to route messages, but the Systemd plugin has extra functionality: if the tag includes a star/wildcard, it will be expanded with the Systemd unit name (e.g: host.* => host.UNIT_NAME).
DB
Specify the absolute path of a database file to keep track of Journald cursor.
DB.Sync
Set a default synchronization (I/O) method. Values: Extra, Full, Normal, Off. This flag affects how the internal SQLite engine does synchronization to disk; for more details about each option please refer to this section. Note: this option was introduced in Fluent Bit v1.4.6.
Full
Read_From_Tail
Start reading new entries. Skip entries already stored in Journald.
Off
Strip_Underscores
Remove the leading underscore of the Journald field (key). For example the Journald field _PID becomes the key PID.
Off
In order to receive Systemd messages, you can run the plugin from the command line or through the configuration file:
From the command line you can let Fluent Bit listen for Systemd messages with the following options:
In the example above we are collecting all messages coming from the Docker service.
In your main configuration file append the following Input & Output sections:
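An Input & Output sketch matching the Docker service example above could look like this:

```
[INPUT]
    Name            systemd
    Tag             host.*
    Systemd_Filter  _SYSTEMD_UNIT=docker.service

[OUTPUT]
    Name   stdout
    Match  *
```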
When the service is running we can export metrics to see the overall status of the data flow of the service. But there are other use cases where we would like to know the current status of the internals of the service, specifically to answer questions like what's the current status of the internal buffers? The Dump Internals feature is the answer.
Fluent Bit v1.4 introduces the Dump Internals feature, which can be triggered easily from the command line by sending the CONT Unix signal.
note: this feature is only available on Linux and BSD family operating systems
Run the following kill
command to signal Fluent Bit:
The command pidof looks up the process ID of Fluent Bit. You can replace the pidof lookup with the Fluent Bit process ID directly if you already know it.
Fluent Bit will dump the following information to the standard output interface (stdout):
The dump provides insights for every input instance configured.
Overall ingestion status of the plugin.
Entry
Sub-entry
Description
overlimit
If the plugin has been configured with Mem_Buf_Limit, this entry will report whether the plugin is over the limit or not at the moment of the dump. If it is over the limit, it will print yes, otherwise no.
mem_size
Current memory size in use by the input plugin in-memory.
mem_limit
Limit set by Mem_Buf_Limit.
When an input plugin ingests data into the engine, a Chunk is created. A Chunk can contain multiple records. At flush time, the engine creates a Task that contains the routes for the Chunk in question.
The Task dump describes the tasks associated with the input plugin:
Entry
Description
total_tasks
Total number of active tasks associated to data generated by the input plugin.
new
Number of tasks not assigned yet to an output plugin. Tasks are in new
status for a very short period of time (most of the time this value is very low or zero).
running
Number of active tasks being processed by output plugins.
size
Amount of memory used by the Chunks being processed (Total chunks size).
The Chunks dump gives more details about all the chunks that the input plugin has generated and that are still being processed.
Depending on the buffering strategy and the limits imposed by configuration, some Chunks might be up
(in memory) or down
(filesystem).
Entry
Sub-entry
Description
total_chunks
Total number of Chunks generated by the input plugin that are still being processed by the engine.
up_chunks
Total number of Chunks that are loaded in memory.
down_chunks
Total number of Chunks that are stored in the filesystem but not loaded in memory yet.
busy_chunks
Chunks marked as busy (being flushed) or locked. Busy Chunks are immutable and likely ready to be (or being) processed.
size
Amount of bytes used by the Chunk.
size err
Number of Chunks in an error state where their size could not be retrieved.
Fluent Bit relies on a custom storage layer interface designed for hybrid buffering. The Storage Layer
entry contains a total summary of Chunks registered by Fluent Bit:
Entry
Sub-Entry
Description
total chunks
Total number of Chunks
mem chunks
Total number of Chunks memory-based
fs chunks
Total number of Chunks filesystem based
up
Total number of filesystem chunks up in memory
down
Total number of filesystem chunks down (not loaded in memory)
The regex parser allows defining a custom Ruby regular expression that uses a named capture feature to define which content belongs to which key name.
Fluent Bit uses the Onigmo regular expression library in Ruby mode; for testing purposes you can use the following web editor to test your expressions:
Important: do not attempt to add multiline support in your regular expressions if you are using the Tail input plugin, since each line is handled as a separate entity. Instead, use the Tail Multiline support configuration feature.
Security Warning: Onigmo is a backtracking regex engine. You need to be careful not to use expensive regex patterns, or Onigmo can take a very long time to perform pattern matching. For details, please read the article "ReDoS" on OWASP.
Note: understanding how regular expressions work is outside the scope of this content.
From a configuration perspective, when the format is set to regex, it is mandatory and expected that a Regex configuration key exists.
The following parser configuration example aims to provide rules that can be applied to an Apache HTTP Server log entry:
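A sketch along the lines of the apache parser shipped in the default parsers file is shown below; your exact regular expression and time format may differ:

```
[PARSER]
    Name         apache
    Format       regex
    Regex        ^(?<host>[^ ]*) [^ ]* (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*?)(?: +\S*)?)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$
    Time_Key     time
    Time_Format  %d/%b/%Y:%H:%M:%S %z
```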
As an example, take the following Apache HTTP Server log entry:
The above content does not provide a defined structure for Fluent Bit, but by enabling the proper parser we can build a structured representation of it:
A common pitfall is that you cannot use characters other than letters, numbers and underscores in group names. For example, a group name like (?<user-name>.*) will cause an error because it contains an invalid character (-).
In order to understand, learn and test regular expressions like the example above, we suggest you try the following Ruby Regular Expression Editor: http://rubular.com/r/X7BH0M4Ivm
The AWS Filter enriches logs with AWS metadata. Currently the plugin adds the EC2 instance ID and availability zone to log records. To use this plugin, you must be running in EC2 and have the instance metadata service enabled.
The plugin supports the following configuration parameters:
Key
Description
Default
imds_version
Specify which version of the instance metadata service to use. Valid values are 'v1' or 'v2'.
v2
az
The availability zone; for example, "us-east-1a".
true
ec2_instance_id
The EC2 instance ID.
true
ec2_instance_type
The EC2 instance type.
false
private_ip
The EC2 instance private IP.
false
ami_id
The EC2 instance image ID.
false
account_id
The account ID for current EC2 instance.
false
hostname
The hostname for current EC2 instance.
false
vpc_id
The VPC ID for current EC2 instance.
false
Note: If you run Fluent Bit in a container, you may have to use instance metadata v1. The plugin behaves the same regardless of which version is used.
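Combining the options above, a minimal filter sketch could look like this; the enabled fields are illustrative choices:

```
[INPUT]
    Name  dummy

[FILTER]
    Name               aws
    Match              *
    imds_version       v2
    az                 true
    ec2_instance_id    true
    ec2_instance_type  true
    private_ip         false

[OUTPUT]
    Name   stdout
    Match  *
```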