Every project has a story
In 2014, the Fluentd team at Treasure Data was forecasting the need for a lightweight log processor for constrained environments like embedded Linux and gateways. The project aimed to be part of the Fluentd ecosystem. At that moment, Eduardo Silva created Fluent Bit, a new open source solution, written from scratch and available under the terms of the Apache License v2.0.
After the project matured, it gained traction for normal Linux systems. With the new containerized world, the Cloud Native community asked to extend the project scope to support more sources, filters, and destinations. Not long after, Fluent Bit became one of the preferred solutions to solve the logging challenges in Cloud environments.
Performance and data safety
When Fluent Bit processes data, it uses the system memory (heap) as a primary and temporary place to store the record logs before they get delivered. The records are processed in this private memory area.
Buffering is the ability to store the records, and continue storing incoming data while previous data is processed and delivered. Buffering in memory is the fastest mechanism, but there are scenarios requiring special strategies to deal with backpressure, data safety, or to reduce memory consumption by the service in constrained environments.
Network failures or latency in third-party services are common. When data can't be delivered fast enough and new data to process arrives, the system can face backpressure.
Fluent Bit buffering strategies are designed to solve problems associated with backpressure and general delivery failures. Fluent Bit offers a primary buffering mechanism in memory and an optional secondary one using the file system. With this hybrid solution you can accommodate any use case safely and keep a high performance while processing your data.
These mechanisms aren't mutually exclusive. When data is ready to be processed or delivered, it's always in memory, while other data in the queue might remain in the file system until it's ready to be processed and moved up to memory.
To learn more about the buffering configuration in Fluent Bit, see Buffering & Storage.
Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
Fluent Bit is an open source telemetry agent specifically designed to efficiently handle the challenges of collecting and processing telemetry data across a wide range of environments, from constrained systems to complex cloud infrastructures. Managing telemetry data from various sources and formats can be a constant challenge, particularly when performance is a critical factor.
Rather than serving as a drop-in replacement, Fluent Bit enhances the observability strategy for your infrastructure by adapting and optimizing your existing logging layer, and adding metrics and traces processing. Fluent Bit supports a vendor-neutral approach, seamlessly integrating with other ecosystems such as Prometheus and OpenTelemetry. Trusted by major cloud providers, banks, and companies in need of a ready-to-use telemetry agent solution, Fluent Bit effectively manages diverse data sources and formats while maintaining optimal performance and keeping resource consumption low.
Fluent Bit can be deployed as an edge agent for localized telemetry data handling or utilized as a central aggregator/collector for managing telemetry data across multiple sources and environments.
The production grade telemetry ecosystem
Telemetry data processing can be complex, especially at scale. That's why Fluentd was created. Fluentd is more than a simple tool; it's grown into a full-scale ecosystem that contains SDKs for different languages and subprojects like Fluent Bit.
Here, we describe the relationship between the Fluentd and Fluent Bit open source projects.
Both projects are:
Licensed under the terms of Apache License v2.0.
Graduated hosted projects by the Cloud Native Computing Foundation (CNCF).
Production grade solutions: Deployed millions of times every single day.
Vendor neutral and community driven.
Widely adopted by the industry: Trusted by major companies like AWS, Microsoft, Google Cloud, and hundreds of others.
The projects have many similarities: Fluent Bit is designed and built on top of the best ideas of Fluentd architecture and general design. Which one you choose depends on your end-users' needs.
The following table describes a comparison of different areas of the projects:
| Attribute | Fluentd | Fluent Bit |
| --- | --- | --- |
| Scope | Containers / Servers | Embedded Linux / Containers / Servers |
| Language | C & Ruby | C |
| Memory | Greater than 60 MB | Approximately 1 MB |
| Performance | Medium performance | High performance |
| Dependencies | Built as a Ruby Gem, depends on other gems. | Zero dependencies, unless required by a plugin. |
| Plugins | Over 1,000 external plugins available. | Over 100 built-in plugins available. |
| License | Apache License v2.0 | Apache License v2.0 |
Both Fluentd and Fluent Bit can work as Aggregators or Forwarders, and can complement each other or be used as standalone solutions.
In recent years, cloud providers have switched from Fluentd to Fluent Bit for performance and compatibility. Fluent Bit is now considered the next-generation solution.
Convert unstructured messages to structured messages
Dealing with raw strings or unstructured messages is difficult. Having a structure makes data more usable. Set a structure to the incoming data by using input plugins as data is collected:
The parser converts unstructured data to structured data. As an example, consider the following Apache (HTTP Server) log entry:
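A representative raw access-log line (the exact values are illustrative) looks like:

```text
192.168.2.20 - - [28/Jul/2006:10:27:10 -0300] "GET /cgi-bin/try/ HTTP/1.0" 200 3395
```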
This log line is a raw string without format. Structuring the log makes it easier to process the data later. If the regular expression parser is used, the log entry could be converted to:
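Assuming a typical Apache regex parser, the structured version would look roughly like:

```json
{
  "host": "192.168.2.20",
  "user": "-",
  "method": "GET",
  "path": "/cgi-bin/try/",
  "code": "200",
  "size": "3395"
}
```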
Parsers are fully configurable and are independently and optionally handled by each input plugin. For more details, see Parsers.
The way to gather data from your sources
Fluent Bit provides input plugins to gather information from different sources. Some plugins collect data from log files, while others can gather metrics information from the operating system. There are many plugins to suit different needs.
When an input plugin loads, an internal instance is created. Each instance has its own independent configuration. Configuration keys are often called properties.
Every input plugin has its own documentation section that specifies how to use it and what properties are available.
For more details, see Input Plugins.
Modify, enrich or drop your records
In production environments you need full control of the data you're collecting. Filtering lets you alter the collected data before delivering it to a destination.
Filtering is implemented through plugins. Each available filter can be used to match, exclude, or enrich your logs with specific metadata.
Fluent Bit supports many filters. A common use case for filtering is Kubernetes deployments, where every pod log needs the proper metadata associated with it.
Like input plugins, filters run in an instance context, which has its own independent configuration. Configuration keys are often called properties.
For more details about the Filters available and their usage, see Filters.
Data processing with reliability
The buffer phase in the pipeline aims to provide a unified and persistent mechanism to store your data, using the primary in-memory model or the file system-based mode.
The buffer phase contains the data in an immutable state, meaning that no other filter can be applied.
Buffered data uses the Fluent Bit internal binary representation, which isn't raw text.
Fluent Bit offers a buffering mechanism in the file system that acts as a backup system to avoid data loss in case of system failures.
High Performance Telemetry Agent for Logs, Metrics and Traces
Fluent Bit is a fast and lightweight telemetry agent for logs, metrics, and traces for Linux, macOS, Windows, and BSD family operating systems. Fluent Bit has been made with a strong focus on performance to allow the collection and processing of telemetry data from different sources without complexity.
High performance: High throughput with low resources consumption
Metrics support: Prometheus and OpenTelemetry compatible
Reliability and data integrity
Backpressure handling
Data buffering in memory and file system
Networking
Security: Built-in TLS/SSL support
Asynchronous I/O
Pluggable architecture and extensibility: Inputs, Filters and Outputs:
Connect nearly any source to nearly any destination using preexisting plugins
Extensibility:
Write input, filter, or output plugins in the C language
WASM: WASM Filter Plugins or WASM Input Plugins
Write Filters in Lua or Output plugins in Golang
Monitoring: Expose internal metrics over HTTP in JSON and Prometheus format
Stream Processing: Perform data selection and transformation using simple SQL queries
Create new streams of data using query results
Aggregation windows
Data analysis and prediction: Timeseries forecasting
Portable: Runs on Linux, macOS, Windows and BSD systems
Fluent Bit is a CNCF graduated sub-project under the umbrella of Fluentd. Fluent Bit is licensed under the terms of the Apache License v2.0.
Fluent Bit was originally created by Eduardo Silva and is now sponsored by Chronosphere. As a CNCF-hosted project, it is a fully vendor-neutral and community-driven project.
Learn about destinations for your data, such as databases and cloud services.
The output interface lets you define destinations for your data. Common destinations are remote services, local file systems, or other standard interfaces. Outputs are implemented as plugins.
When an output plugin is loaded, an internal instance is created. Every instance has its own independent configuration. Configuration keys are often called properties.
Every output plugin has its own documentation section specifying how it can be used and what properties are available.
For more details, see Output Plugins.
Create flexible routing rules
Routing is a core feature that lets you route your data through filters and then to one or multiple destinations. The router relies on the concepts of Tags and Matching rules.
There are two important concepts in Routing:
Tag
Match
When data is generated by an input plugin, it comes with a Tag. A Tag is a human-readable indicator that helps to identify the data source. Tags are usually configured manually.
To define where to route data, specify a Match rule in the output configuration.
Consider the following configuration example that delivers CPU metrics to an Elasticsearch database and memory (`mem`) metrics to the standard output interface:
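A minimal sketch in classic configuration format (the Elasticsearch host and port are placeholders):

```text
[INPUT]
    Name cpu
    Tag  my_cpu

[INPUT]
    Name mem
    Tag  my_mem

[OUTPUT]
    Name  es
    Match my_cpu
    Host  192.168.2.3
    Port  9200

[OUTPUT]
    Name  stdout
    Match my_mem
```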
Routing reads the Input `Tag` and the Output `Match` rules. If data has a Tag that doesn't match at routing time, the data is deleted.
Routing is flexible enough to support wildcards in the `Match` pattern. The following example defines a common destination for both sources of data:
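For instance, reusing the two inputs above, a single output can match both tags:

```text
[INPUT]
    Name cpu
    Tag  my_cpu

[INPUT]
    Name mem
    Tag  my_mem

[OUTPUT]
    Name  stdout
    Match my_*
```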
The match rule is set to `my_*`, which matches any Tag starting with `my_`.
Routing also provides support for regular expressions with the `Match_Regex` pattern, allowing for more complex and precise matching criteria. The following example demonstrates how to route data from sources based on a regular expression:
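A sketch with two hypothetical inputs whose tags end in `_sensor_A` and `_sensor_B`:

```text
[INPUT]
    Name dummy
    Tag  temperature_sensor_A

[INPUT]
    Name dummy
    Tag  humidity_sensor_B

[OUTPUT]
    Name        stdout
    Match_regex .*_sensor_[AB]
```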
In this configuration, the `Match_regex` rule is set to `.*_sensor_[AB]`. This regular expression matches any Tag that ends with `_sensor_A` or `_sensor_B`, regardless of what precedes it. This approach provides a more flexible and powerful way to handle different source tags with a single routing rule.
Fluent Bit supports the following operating systems and architectures:
Operating System | Distribution | Architectures |
---|
From an architecture support perspective, Fluent Bit is fully functional on x86_64, Arm64v8, and Arm32v7 based processors.
Fluent Bit can also work on macOS and Berkeley Software Distribution (BSD) systems, but not all plugins are available on all platforms.
Fluent Bit is supported for Linux on IBM Z (s390x) environments with some restrictions, but only container images are provided for these targets officially.
A guide on how to install, deploy, and upgrade Fluent Bit
Deployment Type | instructions |
---|
Operating System | Installation instructions |
---|
Operating System | Installation instructions |
---|
If you are interested in learning about Fluent Bit you can try out the sandbox environment:
Fluent Bit uses CMake as its build system.
CMake 3.12 or greater. You might need to use `cmake3` instead of `cmake`.
Flex
Bison 3 or greater
YAML headers
OpenSSL headers
If you already know how CMake works, you can skip this section and review the available build options.
The following steps explain how to build and install the project with the default options.
Change to the build/
directory inside the Fluent Bit sources:
Let CMake configure the project, specifying where the root path is located:
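Assuming you're inside the build/ directory, pointing CMake at the parent source directory looks like:

```shell
cmake ../
```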
This command displays a series of results similar to:
Start the compilation process using the `make` command:
This command displays results similar to:
To continue installing the binary on the system, use `make install`:
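In the default case this is simply:

```shell
make install
```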
If the command indicates insufficient permissions, prefix the command with `sudo`.
Fluent Bit provides configurable options to CMake that can be enabled or disabled.
Input plugins gather information from a specific source type like network interfaces, some built-in metrics, or through a specific input device. The following input plugins are available:
Filter plugins let you modify, enrich or drop records. The following table describes the filters available on this version:
Output plugins let you flush the information to some external interface, service, or terminal. The following table describes the output plugins available:
Processor plugins handle the events within the processor pipelines to allow modifying, enriching, or dropping events.
The following table describes the processors available:
Learn these key concepts to understand how Fluent Bit operates.
Before diving into Fluent Bit you might want to get acquainted with some of the key concepts of the service. This document provides an introduction to those concepts and common terminology. Reading this document will help you gain a more general understanding of the following topics:
Event or Record
Filtering
Tag
Timestamp
Match
Structured Message
Every incoming piece of data that belongs to a log or a metric that's retrieved by Fluent Bit is considered an Event or a Record.
As an example, consider the following content of a Syslog file:
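An illustrative sample (the exact content doesn't matter, only that each line is an independent record):

```text
Jan 18 12:52:16 flb systemd[2222]: Starting GNOME Terminal Server
Jan 18 12:52:16 flb dbus-daemon[2243]: [session uid=1000 pid=2243] Activating service name='org.gnome.Terminal'
Jan 18 12:52:16 flb systemd[2222]: Started GNOME Terminal Server.
Jan 18 12:52:16 flb gsd-media-keys[2640]: # watch_fast: "/org/gnome/terminal/legacy/" (establishing: 0)
```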
It contains four lines that represent four independent Events.
An Event is comprised of:
timestamp
key/value metadata (v2.1.0 and greater)
payload
The Fluent Bit wire protocol represents an Event as a two-element array with a nested array as the first element:
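Schematically, using the placeholder names defined below:

```text
[[TIMESTAMP, METADATA], MESSAGE]
```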
where:
- `TIMESTAMP` is a timestamp in seconds as an integer or floating point value (not a string).
- `METADATA` is an object containing event metadata, and might be empty.
- `MESSAGE` is an object containing the event body.
Fluent Bit versions prior to v2.1.0 used:
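That is, a two-element pair without the metadata element:

```text
[TIMESTAMP, MESSAGE]
```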
to represent events. This format is still supported for reading input event streams.
Use filtering to:
Append specific information to the Event like an IP address or metadata.
Select a specific piece of the Event content.
Drop Events that match a certain pattern.
The timestamp represents the time an Event was created. Every Event contains an associated timestamp, set by the input plugin or discovered through a data parsing process.
The timestamp is a numeric fractional integer in the format:
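That is:

```text
SECONDS.NANOSECONDS
```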
where:
- `SECONDS` is the number of seconds that have elapsed since the Unix epoch.
- `NANOSECONDS` is a fractional second or one thousand-millionth of a second.
Fluent Bit lets you route your collected and processed Events to one or multiple destinations. A Match represents a rule to select Events where a Tag matches a defined rule.
Source events can have a structure. A structure defines a set of keys and values inside the Event message to implement faster operations on data modifications. Fluent Bit treats every Event message as a structured message.
Consider the following two messages:
No structured message
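For example, a plain string:

```text
"Project Fluent Bit created on 1398289291"
```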
With a structured message
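For example, the same information as a key/value map:

```json
{"project": "Fluent Bit", "created": 1398289291}
```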
Fluent Bit license description
Fluent Bit, including its core, plugins, and tools, is distributed under the terms of the Apache License v2.0:
Fluent Bit has very low CPU and memory consumption. It's compatible with most x86-, x86_64-, arm32v7-, and arm64v8-based platforms.
The build process requires the following components:
Compiler: GCC or clang
CMake
Flex and Bison: required for the Stream Processor and Record Accessor features
Libyaml development headers and libraries
Core has no other dependencies. Some features depend on third-party components. For example, output plugins with special backend libraries like Kafka include those libraries in the main source code repository.
Fluent Bit is supported on Linux on IBM Z(s390x), but the WASM and LUA filter plugins aren't.
The following article covers the relevant notes for users upgrading from previous Fluent Bit versions. We aim to cover compatibility changes that you must be aware of.
For more details about the changes in each release, refer to the official release notes.
Note: Release notes are prepared in advance of the Git tag for a release, so an official release provides both a tag and a release note together, letting users verify and understand the release contents.
The tag drives the overall binary release process, so release binaries (containers and packages) appear after a tag and its associated release note. This lets users expect the new release binary and allow, deny, or update it as appropriate in their infrastructure.
The `td-agent-bit` package is no longer provided after this release. Users should switch to the `fluent-bit` package.
If you are migrating from a previous version of Fluent Bit, please review the following important changes:
By default, the Tail input plugin now follows a file from the end once the service starts (the old behavior was to always read from the beginning). Every file found at start is followed from its last known position; new files discovered at runtime, or rotated files, are read from the beginning.
If you want to keep the old behavior, set the option `read_from_head` to true.
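For example, in classic configuration format (the path is a placeholder):

```text
[INPUT]
    Name           tail
    Path           /var/log/*.log
    Read_from_Head true
```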
The project_id of the resource in the LogEntry sent to Google Cloud Logging is now set to the project ID rather than the project number. To learn the difference between project ID and project number, see the Google Cloud documentation.
If you have any existing queries based on the resource's project_id, please update your query accordingly.
The migration from v1.4 to v1.5 is pretty straightforward.
If you are migrating from Fluent Bit v1.3, there are no breaking changes. Just new exciting features to enjoy :)
If you are migrating from Fluent Bit v1.2 to v1.3, there are no breaking changes. If you are upgrading from an older version please review the incremental changes below.
In Fluent Bit v1.2 we fixed many issues associated with JSON encoding and decoding; as a result, when parsing Docker logs it's no longer necessary to use decoders. The new Docker parser looks like this:
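A sketch of that parser definition (no decoders needed):

```text
[PARSER]
    Name         docker
    Format       json
    Time_Key     time
    Time_Format  %Y-%m-%dT%H:%M:%S.%L
    Time_Keep    On
```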
Note: again, do not use decoders.
We've also improved how the Kubernetes filter handles stringified log messages. If the option Merge_Log is enabled, the filter tries to handle the log content as a JSON map and, if so, adds the keys to the root map.
In addition, we fixed and improved the option Merge_Log_Key. If a log merge succeeds, all new keys are packaged under the key specified by this option. A suggested configuration is as follows:
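A sketch of such a filter entry (the match pattern and key name are illustrative):

```text
[FILTER]
    Name          kubernetes
    Match         kube.*
    Merge_Log     On
    Merge_Log_Key log_processed
```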
As an example, if the original log content is the following map:
the final record will be composed as follows:
If you are upgrading from Fluent Bit <= 1.0.x, take into consideration the following relevant changes when switching to the Fluent Bit v1.1 series:
We introduced a new configuration property called Kube_Tag_Prefix to help Tag prefix resolution and address an unexpected behavior that landed in previous versions.
During the 1.0.x release cycle, a commit in the Tail input plugin changed the default behavior of how the Tag was composed when using the wildcard for expansion, breaking compatibility with other services. Consider the following configuration example:
The expected behavior is that Tag will be expanded to:
but the change introduced in 1.0 series switched from absolute path to the base file name only:
In the Fluent Bit v1.1 release we restored the default behavior: the Tag is now composed using the absolute path of the monitored file.
Having the absolute path in the Tag is relevant for routing and flexible configuration, and it also helps to keep compatibility with Fluentd behavior.
This behavior switch in the Tail input plugin affects how the Kubernetes filter operates. When the filter is used, it needs to perform a local metadata lookup based on the file names when using Tail as a source. With the new Kube_Tag_Prefix option you can specify the prefix used in the Tail input plugin; for the configuration example above, the new configuration looks as follows:
The proper Kube_Tag_Prefix value must be composed of the Tag prefix set in the Tail input plugin plus the converted monitored directory, replacing slashes with dots.
In normal operation mode, Fluent Bit can be configured through text files or specific command line arguments. While this is the ideal deployment case, there are scenarios where a more restricted configuration is required: static configuration mode.
Static configuration mode aims to include a built-in configuration in the final binary of Fluent Bit, disabling the usage of external files or flags at runtime.
The following steps assume you are familiar with configuring Fluent Bit using text files and that you have experience building it from scratch, as described in the Build and Install section.
In your file system, prepare a specific directory that will be used as an entry point for the build system to look up and parse the configuration files. It's mandatory that this directory contain at a minimum one configuration file called fluent-bit.conf with the required SERVICE, INPUT, and OUTPUT sections. As an example, create a new fluent-bit.conf file with the following content:
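A minimal sketch matching that description:

```text
[SERVICE]
    Flush     1
    Daemon    off
    Log_Level info

[INPUT]
    Name cpu

[OUTPUT]
    Name  stdout
    Match *
```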
The configuration provided above calculates CPU metrics from the running system and prints them to the standard output interface.
Inside the Fluent Bit source code, get into the build/ directory and run CMake, appending the FLB_STATIC_CONF option pointing to the configuration directory you just created, for example:
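Something along these lines (the configuration directory path is a placeholder):

```shell
cd fluent-bit/build/
cmake -DFLB_STATIC_CONF=/path/to/my/confdir/ ../
```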
then build it:
At this point the generated fluent-bit binary is ready to run without any further configuration:
You can download the most recent stable or development source code.
For production systems, it's strongly suggested that you get the latest stable release of the source code in either zip file or tarball file format from GitHub using the following link pattern:
For example, for version 1.8.12 the link is:
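Using GitHub's standard tag-archive URL format, the links would be:

```text
https://github.com/fluent/fluent-bit/archive/refs/tags/v1.8.12.tar.gz
https://github.com/fluent/fluent-bit/archive/refs/tags/v1.8.12.zip
```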
If you want to contribute to Fluent Bit, you should use the most recent code. You can get the development version from the Git repository:
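For example:

```shell
git clone https://github.com/fluent/fluent-bit.git
```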
The master
branch is where the development of Fluent Bit happens. Development version users should expect issues when compiling or at run time.
Fluent Bit users are encouraged to help test every development version to ensure a stable release.
Fluent Bit is distributed as fluent-bit package and is available for long-term support releases of Ubuntu. The latest officially supported version is Noble Numbat (24.04).
A simple installation script is provided to be used for most Linux targets. This will always install the most recent version released.
This is purely a convenience helper and should always be validated prior to use. The recommended secure deployment approach is to follow the instructions below.
The first step is to add our server GPG key to your keyring to ensure you can get our signed packages. Follow the official Debian wiki guidance:
From the 1.9.0 and 1.8.15 releases please note that the GPG key has been updated at https://packages.fluentbit.io/fluentbit.key, so ensure this new one is added.
The GPG Key fingerprint of the new key is:
The previous key is still available at https://packages.fluentbit.io/fluentbit-legacy.key and may be required to install previous versions.
The GPG Key fingerprint of the old key is:
Now let your system update the apt database:
We recommend upgrading your system (`sudo apt-get upgrade`). This could avoid potential issues with expired certificates.
If you see the error "Certificate verification failed", check that the package `ca-certificates` is properly installed (`sudo apt-get install ca-certificates`).
Use the following apt-get command to install the latest fluent-bit:
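Typically:

```shell
sudo apt-get install fluent-bit
```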
The next step is to instruct systemd to enable the service:
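For example:

```shell
sudo systemctl start fluent-bit
# optionally, start it automatically on boot:
sudo systemctl enable fluent-bit
```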
If you do a status check, you should see output similar to this:
The default configuration of fluent-bit collects CPU usage metrics and sends the records to the standard output. You can see the outgoing data in your /var/log/syslog file.
Fluent Bit is distributed as the fluent-bit package and is available for Raspberry Pi, specifically for the Raspbian distribution. The following versions are supported:
Raspbian Bullseye (11)
Raspbian Buster (10)
The first step is to add our server GPG key to your keyring so that you can get our signed packages:
From the 1.9.0 and 1.8.15 releases please note that the GPG key has been updated at https://packages.fluentbit.io/fluentbit.key, so ensure this new one is added.
The GPG Key fingerprint of the new key is:
The previous key is still available at https://packages.fluentbit.io/fluentbit-legacy.key and may be required to install previous versions.
The GPG Key fingerprint of the old key is:
Refer to the supported platform documentation to see which platforms are supported in each release.
On Debian and derivative systems such as Raspbian, you need to add our APT server entry to your sources list. Add the following content at the bottom of your /etc/apt/sources.list file:
Now let your system update the apt database:
We recommend upgrading your system (`sudo apt-get upgrade`). This could avoid potential issues with expired certificates.
Use the following apt-get command to install the latest fluent-bit:
The next step is to instruct systemd to enable the service:
If you do a status check, you should see output similar to this:
The default configuration of fluent-bit collects CPU usage metrics and sends the records to the standard output. You can see the outgoing data in your /var/log/syslog file.
Official support is based on community demand. Fluent Bit might run on older operating systems, but must be built from source or installed using custom packages.
Operating System | Installation instructions |
---|
Operating system | Installation instructions |
---|
Fluent Bit packages are also provided by enterprise providers for older end-of-life versions, Unix systems, and additional support and features, including aspects like CVE backporting.
Option | Description | Default |
---|
Option | Description | Default |
---|
Option | Description | Default |
---|
Option | Description | Default |
---|
Option | Description | Default |
---|
Option | Description | Default |
---|
| Option | Description | Default |
| :--- | :--- | :--- |
| | Enable metrics selector processor | On |
| | Enable metrics label manipulation processor | On |
You might need to perform modifications on an Event's content. The process to alter, append to, or drop Events is called filtering.
Every Event ingested by Fluent Bit is assigned a Tag. This tag is an internal string used in a later stage by the Router to decide which Filter or Output phase it must go through.
Most tags are assigned manually in the configuration. If a tag isn't specified, Fluent Bit assigns the name of the plugin instance where that Event was generated from.
The Forward input plugin doesn't assign tags. This plugin speaks the Fluentd wire protocol called Forward, where every Event already comes with a Tag associated. Fluent Bit will always use the incoming Tag set by the client.
A tagged record must always have a Matching rule. To learn more about Tags and Matches, see Routing.
For performance reasons, Fluent Bit uses a binary serialization data format called MessagePack.
If you enabled `keepalive` mode in your configuration, note that this configuration property has been renamed to `net.keepalive`. All network I/O keepalive is now enabled by default; to learn more about this and other associated configuration properties, read the Networking section.
If you use the Elasticsearch output plugin, note that the default value of `type` changed to `_doc`. Many versions of Elasticsearch will tolerate this, but ES v5.6 through v6.1 require a type without a leading underscore. See the Elasticsearch output plugin documentation for more.
Refer to the supported platform documentation to see which platforms are supported in each release.
On Ubuntu, you need to add our APT server entry to your sources list. Add the following content at the bottom of your /etc/apt/sources.list file, making sure to set `CODENAME` to your specific Ubuntu release name (for example, `focal` for Ubuntu 20.04):
| Enable all features available | No |
| Use Jemalloc as default memory allocator | No |
| Build with SSL/TLS support | Yes |
| Build executable | Yes |
| Build examples | Yes |
| Build shared library | Yes |
| Enable mtrace support | No |
| Enable Inotify support | Yes |
| Force POSIX thread storage | No |
| Enable SQL embedded database support | No |
| Enable HTTP Server | No |
| Enable Lua scripting support | Yes |
| Enable record accessor | Yes |
| Enable AWS Signv4 support | Yes |
| Build binary using static configuration files. The value of this option must be a directory containing configuration files. |
| Enable Stream Processor | Yes |
| Enable YAML configuration support | Yes |
| Build with WASM runtime support | Yes |
| Build with WASM AOT compiler executable | No |
| Build binaries with debug symbols | No |
| Enable Valgrind support | No |
| Enable trace mode | No |
| Minimise binary size | No |
| Enable runtime tests | No |
| Enable internal tests | No |
| Enable tests | No |
| Enable backtrace/stacktrace support | Yes |
| Determine initial buffer size for |
|
| Determine percentage of reallocation size when |
|
The most secure option is to create the repositories according to the instructions for your specific OS.
A simple installation script is provided to be used for most Linux targets. This will by default install the most recent version released.
This is purely a convenience helper and should always be validated prior to use.
From the 1.9.0 and 1.8.15 releases please note that the GPG key has been updated at https://packages.fluentbit.io/fluentbit.key so ensure this new one is added.
The GPG Key fingerprint of the new key is:
The previous key is still available at https://packages.fluentbit.io/fluentbit-legacy.key and may be required to install previous versions.
The GPG Key fingerprint of the old key is:
Refer to the supported platform documentation to see which platforms are supported in each release.
From version 1.9, `td-agent-bit` is a deprecated package and is removed after 1.9.9. The correct package name to use now is `fluent-bit`.
Learn how to install Fluent Bit and the AWS output plugins on Amazon Linux 2 using AWS Systems Manager.
Fluent Bit is distributed as the fluent-bit package and is available for the latest Amazon Linux 2 and Amazon Linux 2023. The following architectures are supported:
x86_64
aarch64 / arm64v8
A simple installation script is provided to be used for most Linux targets. This will always install the most recent version released.
This is purely a convenience helper and should always be validated prior to use. The recommended secure deployment approach is to follow the instructions below.
Amazon Linux 2022 was previously supported, but support was removed when Amazon Linux 2023 became generally available.
We provide fluent-bit through a Yum repository. In order to add the repository reference to your system, please add a new file called fluent-bit.repo in /etc/yum.repos.d/ with the following content:
Note: We encourage you to always enable gpgcheck for security reasons. All our packages are signed.
From the 1.9.0 and 1.8.15 releases please note that the GPG key has been updated at https://packages.fluentbit.io/fluentbit.key so ensure this new one is added.
The GPG Key fingerprint of the new key is:
The previous key is still available at https://packages.fluentbit.io/fluentbit-legacy.key and may be required to install previous versions.
The GPG Key fingerprint of the old key is:
Refer to the supported platform documentation to see which platforms are supported in each release.
Once your repository is configured, run the following command to install it:
The next step is to instruct systemd to enable the service:
If you do a status check, you should see output similar to this:
The default configuration of fluent-bit collects CPU usage metrics and sends the records to the standard output. You can see the outgoing data in your /var/log/messages file.
AWS maintains a distribution of Fluent Bit combining the latest official release with a set of Go Plugins for sending logs to AWS services. AWS and Fluent Bit are working together to rewrite their plugins for inclusion in the official Fluent Bit distribution.
Currently, the AWS for Fluent Bit image contains Go plugins for Amazon CloudWatch Logs, Amazon Kinesis Data Firehose, and Amazon Kinesis Data Streams.
Fluent Bit includes an Amazon CloudWatch Logs plugin named `cloudwatch_logs`, an Amazon Kinesis Firehose plugin named `kinesis_firehose`, and an Amazon Kinesis Data Streams plugin named `kinesis_streams`, which offer higher performance than the Go plugins.
Fluent Bit also includes an S3 output plugin named `s3`.
AWS vends their container image via Docker Hub, and a set of highly available regional Amazon ECR repositories. For more information, see the AWS for Fluent Bit GitHub repo.
The AWS for Fluent Bit image uses a custom versioning scheme because it contains multiple projects. To see what each release contains, check out the release notes on GitHub.
AWS vends SSM Public Parameters with the regional repository link for each image. These parameters can be queried by any AWS account.
To see a list of available version tags in a given region, run the following command:
To see the ECR repository URI for a given image tag in a given region, run the following:
You can use these SSM public parameters as parameters in your CloudFormation templates:
Fluent Bit is distributed as the fluent-bit package and is available for the latest stable CentOS system. The following architectures are supported:
x86_64
aarch64 / arm64v8
For CentOS 9+ we use CentOS Stream as the canonical base system.
A simple installation script is provided to be used for most Linux targets. This will always install the most recent version released.
This is purely a convenience helper and should always be validated prior to use. The recommended secure deployment approach is to follow the instructions below.
CentOS 8 is now EOL, so the default Yum repositories are unavailable.
Make sure to configure Yum to use an appropriate vault or mirror, for example:
An alternative is to use Rocky or Alma Linux which should be equivalent.
We provide fluent-bit through a Yum repository. In order to add the repository reference to your system, please add a new file called fluent-bit.repo in /etc/yum.repos.d/ with the following content:
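A sketch of that repository definition (check packages.fluentbit.io for the exact baseurl for your release and architecture):

```text
[fluent-bit]
name = Fluent Bit
baseurl = https://packages.fluentbit.io/centos/$releasever/$basearch/
gpgcheck=1
gpgkey=https://packages.fluentbit.io/fluentbit.key
repo_gpgcheck=1
enabled=1
```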
It is best practice to always enable the gpgcheck and repo_gpgcheck for security reasons. We sign our repository metadata as well as all of our packages.
From the 1.9.0 and 1.8.15 releases please note that the GPG key has been updated at https://packages.fluentbit.io/fluentbit.key so ensure this new one is added.
The GPG Key fingerprint of the new key is:
The previous key is still available at https://packages.fluentbit.io/fluentbit-legacy.key and may be required to install previous versions.
The GPG Key fingerprint of the old key is:
Refer to the supported platform documentation to see which platforms are supported in each release.
Once your repository is configured, run the following command to install it:
The next step is to instruct systemd to enable the service:
If you do a status check, you should see output similar to this:
The default configuration of fluent-bit collects CPU usage metrics and sends the records to the standard output. You can see the outgoing data in your /var/log/messages file.
The fluent-bit.repo file for the latest installations of Fluent-Bit uses a $releasever variable to determine the correct version of the package to install to your system:
Depending on your Red Hat distribution version, this variable may return a value other than the OS major release version (e.g., RHEL7 Server distributions return "7Server" instead of just "7"). The Fluent-Bit package url uses just the major OS release version, so any other value here will cause a 404.
In order to resolve this issue, you can replace the $releasever variable with your system's OS major release version. For example:
Fluent Bit is distributed as the fluent-bit package and is available for the latest (and legacy) stable Debian systems: Bookworm and Bullseye. The following architectures are supported:
x86_64
aarch64 / arm64v8
A simple installation script is provided to be used for most Linux targets. This will always install the most recent version released.
This is purely a convenience helper and should always be validated prior to use. The recommended secure deployment approach is to follow the instructions below.
The first step is to add our server GPG key to your keyring, on that way you can get our signed packages. Follow the official Debian wiki guidance: https://wiki.debian.org/DebianRepository/UseThirdParty#OpenPGP\_Key\_distribution
From the 1.9.0 and 1.8.15 releases please note that the GPG key has been updated at https://packages.fluentbit.io/fluentbit.key so ensure this new one is added.
The GPG Key fingerprint of the new key is:
The previous key is still available at https://packages.fluentbit.io/fluentbit-legacy.key and may be required to install previous versions.
The GPG Key fingerprint of the old key is:
Refer to the supported platform documentation to see which platforms are supported in each release.
On Debian, you need to add our APT server entry to your sources list. Add the following content at the bottom of your /etc/apt/sources.list file, making sure to set `CODENAME` to your specific Debian release name (for example, `bookworm` for Debian 12):
Now let your system update the apt database:
We recommend upgrading your system (`sudo apt-get upgrade`). This could avoid potential issues with expired certificates.
Use the following apt-get command to install the latest fluent-bit:
The next step is to instruct systemd to enable the service:
If you do a status check, you should see output similar to this:
The default configuration of fluent-bit collects CPU usage metrics and sends the records to the standard output. You can see the outgoing data in your /var/log/syslog file.
Currently, Fluent Bit supports two configuration formats:
YAML: the standard configuration format as of v3.2.
Classic mode: to be deprecated at the end of 2025.
Fluent Bit exposes most of its features through the command line interface. Run the `-h` option to get a list of the available options:
Parsers enable Fluent Bit components to transform unstructured data into a structured internal representation. You can define parsers either directly in the main configuration file or in separate external files for better organization.
This page provides a general overview of how to declare parsers.
The main section name is `parsers`, and it allows you to define a list of parser configurations. The following example demonstrates how to set up two simple parsers:
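A sketch of two such parser definitions (the regex and time formats are illustrative):

```yaml
parsers:
  - name: json
    format: json
  - name: docker
    format: json
    time_key: time
    time_format: '%Y-%m-%dT%H:%M:%S.%L'
    time_keep: true
```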
You can define multiple parsers sections, either within the main configuration file or distributed across included files.
For more detailed information on parser options and advanced configurations, please refer to the Configuring Parsers section.
While Fluent Bit comes with a variety of built-in plugins, it also supports loading external plugins at runtime. This feature is especially useful for loading Go or Wasm plugins that are built as shared object files (.so). Fluent Bit's YAML configuration provides two ways to load these external plugins:
You can specify external plugins directly within your main YAML configuration file using the `plugins` section. Here's an example:
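A sketch (the .so path and plugin name are placeholders):

```yaml
plugins:
  - /path/to/out_gstdout.so

service:
  log_level: info

pipeline:
  inputs:
    - name: dummy
  outputs:
    - name: gstdout
```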
Alternatively, you can load external plugins from a separate YAML file by specifying the plugins_file option in the service section. Here’s how to configure this:
In this setup, the `extra_plugins.yaml` file might contain the following plugins section:
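For instance, again with a placeholder path:

```yaml
plugins:
  - /other/path/to/out_gstdout.so
```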
Built-in vs. External: Fluent Bit comes with many built-in plugins, but you can load external plugins at runtime to extend the tool’s functionality.
Loading Mechanism: External plugins must be shared object files (.so). You can define them inline in the main YAML configuration or include them from a separate YAML file for better modularity.
Multiline parsers are used to combine logs that span multiple events into a single, cohesive message. This is particularly useful for handling stack traces, error logs, or any log entry that contains multiple lines of information.
In YAML configuration, the syntax for defining multiline parsers differs slightly from the classic configuration format, introducing minor breaking changes, specifically in how the rules are defined.
Below is an example demonstrating how to define a multiline parser directly in the main configuration file, as well as how to include additional definitions from external files:
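A sketch consistent with the description below (the regular expressions are illustrative):

```yaml
multiline_parsers:
  - name: multiline-regex-test
    type: regex
    flush_timeout: 1000
    rules:
      - state: start_state
        regex: '/([A-Za-z]+ \d+ \d+:\d+:\d+)(.*)/'
        next_state: cont
      - state: cont
        regex: '/^\s+at.*/'
        next_state: cont
```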
The example above defines a multiline parser named `multiline-regex-test` that uses regular expressions to handle multi-event logs. The parser contains two rules: the first rule transitions from start_state to cont when a matching log entry is detected, and the second rule continues to match subsequent lines.
For more detailed information on configuring multiline parsers, including advanced options and use cases, please refer to the Configuring Multiline Parsers section.
The `includes` section allows you to specify additional YAML configuration files to be merged into the current configuration. These files are identified as a list of filenames and can include relative or absolute paths. If no absolute path is provided, the file is assumed to be located in a directory relative to the file that references it.
This feature is useful for organizing complex configurations into smaller, manageable files and including them as needed.
Below is an example demonstrating how to include additional YAML files using relative path references. This is the file system path structure
The content of fluent-bit.yaml
Relative Paths: If a path is not specified as absolute, it will be treated as relative to the file that includes it.
Organized Configurations: Using the includes section helps keep your configuration modular and easier to maintain.
note: Ensure that the included files are formatted correctly and contain valid YAML configurations for seamless integration.
The Upstream Servers section defines a group of endpoints, referred to as nodes, which are used by output plugins to distribute data in a round-robin fashion. This is particularly useful for plugins that require load balancing when sending data. Examples of plugins that support this capability include Forward and Elasticsearch.
In YAML, this section is named `upstream_servers` and requires specifying a `name` for the group and a list of `nodes`. Below is an example that defines two upstream server groups, `forward-balancing` and `forward-balancing-2`:
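A sketch of those two groups (hosts, ports, and the shared key are placeholders):

```yaml
upstream_servers:
  - name: forward-balancing
    nodes:
      - name: node-1
        host: 127.0.0.1
        port: 43000
      - name: node-2
        host: 127.0.0.1
        port: 44000
  - name: forward-balancing-2
    nodes:
      - name: node-3
        host: 127.0.0.1
        port: 45000
        tls: true
        tls_verify: false
        shared_key: secret
```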
Nodes: Each node in the upstream_servers group must specify a name, host, and port. Additional settings like tls, tls_verify, and shared_key can be configured as needed for secure communication.
While the `upstream_servers` section can be defined globally, some output plugins may require the configuration to be specified in a separate YAML file. Be sure to consult the documentation for each specific output plugin to understand its requirements.
For more details, refer to the documentation of the respective output plugins.
Install Fluent Bit in your embedded Linux system.
To install, select Fluent Bit in your `defconfig`. See the `Config.in` file for all configuration options.
The default configuration file is written to:
Fluent Bit is started by the `S99fluent-bit` script.
All configurations with a toolchain that supports threads and dynamic library linking are supported.
Fluent Bit is compatible with latest Apple macOS system on x86_64 and Apple Silicon architectures.
The packages can be found here: https://packages.fluentbit.io/macos/
For the next steps, you will need to have Homebrew installed on your system. If it's not there, you can install it with the following command:
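The standard Homebrew install command (from brew.sh) is:

```shell
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
```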
The Fluent Bit package on Homebrew is not officially supported, but should work for basic use cases and testing. It can be installed using:
Run the following brew command in your terminal to retrieve the dependencies:
Grab a fresh copy of the Fluent Bit source code (upstream):
Optionally, if you want to use a specific version, check out the proper tag. For example, to use `v1.8.13`:
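That is:

```shell
git checkout v1.8.13
```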
In order to prepare the build system, we need to expose certain environment variables so Fluent Bit CMake build rules can pick the right libraries:
Change to the build/ directory inside the Fluent Bit sources:
Build Fluent Bit. Note that we are indicating to the build system "where" the final binaries and config files should be installed:
Install Fluent Bit to the directory specified above. Note that this requires root privileges due to the directory we will write information to:
The binaries and configuration examples can be located at /opt/fluent-bit/
.
Grab a fresh copy of the Fluent Bit source code (upstream):
Optionally, if you want to use a specific version, check out the proper tag. For example, to use `v1.9.2`:
In order to prepare the build system, we need to expose certain environment variables so Fluent Bit CMake build rules can pick the right libraries:
Then create the specific macOS SDK target (for example, specifying the macOS Big Sur (11.3) SDK environment):
Change to the build/ directory inside the Fluent Bit sources:
Build the Fluent Bit macOS installer.
Then the macOS installer will be generated as:
Finally, `fluent-bit-<fluent-bit version>-(intel or apple).pkg` will be generated.
The created installer will put the binaries at `/opt/fluent-bit/`.
To make the Fluent Bit binary easier to access, extend the `PATH` variable in your terminal:
Now as a simple test, try Fluent Bit by generating a simple dummy message which will be printed to the standard output interface every 1 second:
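For example:

```shell
fluent-bit -i dummy -o stdout -f 1
```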
You will see an output similar to this:
To halt the process, press `ctrl-c` in the terminal.
Fluent Bit might optionally use a configuration file to define how the service will behave.
Before proceeding we need to understand how the configuration schema works.
The schema is defined by three concepts:
Sections
Entries: Key/Value
Indented Configuration Mode
A simple example of a configuration file is as follows:
A section is defined by a name or title inside brackets. Looking at the example above, a Service section has been set using [SERVICE] definition. Section rules:
All section content must be indented (4 spaces ideally).
Multiple sections can exist on the same file.
A section is expected to have comments and entries, it cannot be empty.
Any commented line under a section, must be indented too.
End-of-line comments are not supported, only full-line comments.
A section may contain Entries; an entry is defined by a line of text that contains a Key and a Value. Using the example above, the `[SERVICE]` section contains two entries: one is the key Daemon with value off, and the other is the key Log_Level with the value debug. Entries rules:
An entry is defined by a key and a value.
A key must be indented.
A key must contain a value which ends in the breakline.
Multiple keys with the same name can exist.
Commented lines are set by prefixing them with the # character. Those lines aren't processed, but they must be indented too.
Fluent Bit configuration files are based on a strict Indented Mode. This means that each configuration file must follow the same pattern of alignment from left to right when writing text. By default, an indentation level of four spaces from left to right is suggested. Example:
As you can see, there are two sections with multiple entries and comments. Note also that empty lines are allowed and they don't need to be indented.
Linux | x86_64, Arm64v8 |
x86_64, Arm64v8 |
x86_64, Arm64v8 |
x86_64, Arm64v8 |
x86_64, Arm64v8 |
x86_64, Arm64v8 |
x86_64, Arm64v8 |
x86_64, Arm64v8 |
x86_64, Arm64v8 |
x86_64, Arm64v8 |
x86_64, Arm64v8 |
x86_64, Arm64v8 |
x86_64, Arm64v8 |
x86_64, Arm64v8 |
x86_64 |
Arm32v7 |
Arm32v7 |
macOS | * | x86_64, Apple M1 |
Windows | x86_64, x86 |
x86_64, x86 |
macOS |
Linux, FreeBSD |
macOS |
Windows |
Enable Collectd input plugin | On |
Enable CPU input plugin | On |
Enable Disk I/O Metrics input plugin | On |
Enable Docker metrics input plugin | On |
Enable Exec input plugin | On |
Enable Exec WASI input plugin | On |
Enable Fluent Bit metrics input plugin | On |
Enable Elasticsearch/OpenSearch Bulk input plugin | On |
Enable Forward input plugin | On |
Enable Head input plugin | On |
Enable Health input plugin | On |
Enable Kernel log input plugin | On |
Enable Memory input plugin | On |
Enable MQTT Server input plugin | On |
Enable Network I/O metrics input plugin | On |
Enable Process monitoring input plugin | On |
Enable Random input plugin | On |
Enable Serial input plugin | On |
Enable Standard input plugin | On |
Enable Syslog input plugin | On |
Enable Systemd / Journald input plugin | On |
Enable Tail (follow files) input plugin | On |
Enable TCP input plugin | On |
Enable system temperature input plugin | On |
Enable UDP input plugin | On |
Enable Windows Event Log input plugin (Windows Only) | On |
Enable Windows Event Log input plugin using | On |
Enable AWS metadata filter | On |
Enable AWS metadata filter | On |
| Enable Expect data test filter | On |
Enable Grep filter | On |
Enable Kubernetes metadata filter | On |
Enable Lua scripting filter | On |
Enable Modify filter | On |
Enable Nest filter | On |
Enable Parser filter | On |
Enable Record Modifier filter | On |
Enable Rewrite Tag filter | On |
Enable Stdout filter | On |
Enable Sysinfo filter | On |
Enable Throttle filter | On |
Enable Type Converter filter | On |
Enable WASM filter | On |
Enable Microsoft Azure output plugin | On |
Enable Azure Kusto output plugin | On |
Enable Google BigQuery output plugin | On |
Enable Counter output plugin | On |
Enable Amazon CloudWatch output plugin | On |
Enable Datadog output plugin | On |
On |
Enable File output plugin | On |
Enable Amazon Kinesis Data Firehose output plugin | On |
Enable Amazon Kinesis Data Streams output plugin | On |
Enable Flowcounter output plugin | On |
On |
Enable Gelf output plugin | On |
Enable HTTP output plugin | On |
Enable InfluxDB output plugin | On |
Enable Kafka output | Off |
Enable Kafka REST Proxy output plugin | On |
| Enable Lib output plugin | On |
On |
| Enable NULL output plugin | On |
| Enable PostgreSQL output plugin | On |
| Enable Plot output plugin | On |
| Enable Slack output plugin | On |
Enable Amazon S3 output plugin | On |
Enable Splunk output plugin | On |
Enable Google Stackdriver output plugin | On |
Enable STDOUT output plugin | On |
| Enable TCP/TLS output plugin | On |
On |
Fluent Bit is distributed as fluent-bit package for Windows and as a Windows container on Docker Hub. Fluent Bit has two flavours of Windows installers: a ZIP archive (for quick testing) and an EXE installer (for system installation).
Not all plugins are supported on Windows: the CMake configuration shows the default set of supported plugins.
Make sure to provide a valid Windows configuration with the installation, a sample one is shown below:
From version 1.9, `td-agent-bit` is a deprecated package and was removed after 1.9.9. The correct package name to use now is `fluent-bit`.
The latest stable version is 3.2.1. Each version is available via the following download URLs.
Note that these now use the GitHub Actions-built versions; the legacy AppVeyor builds are still available (AMD 32/64 only) at releases.fluentbit.io but are deprecated.
MSI installers are also available:
To check the integrity, use the `Get-FileHash` cmdlet in PowerShell.
Download a ZIP archive from above. There are installers for 32-bit and 64-bit environments, so choose one suitable for your environment.
Then you need to expand the ZIP archive. You can do this by clicking "Extract All" in Explorer or, if you're using PowerShell, with the `Expand-Archive` cmdlet.
The ZIP package contains the following set of files.
Now, launch cmd.exe or PowerShell on your machine, and execute `fluent-bit.exe` as follows:
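Assuming you extracted the archive into a folder named fluent-bit, something like:

```powershell
.\fluent-bit\bin\fluent-bit.exe -i dummy -o stdout
```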
If you see the following output, it's working fine!
To halt the process, press CTRL-C in the terminal.
Download an EXE installer from above. It has both 32-bit and 64-bit builds. Choose one which is suitable for you.
Double-click the EXE installer you've downloaded. The installation wizard will automatically start.
Click Next and proceed. By default, Fluent Bit is installed into `C:\Program Files\fluent-bit\`, so you should be able to launch fluent-bit as follows after installation:
The Windows installer is built by CPack using NSIS (https://cmake.org/cmake/help/latest/cpack_gen/nsis.html), and so supports the default options that all NSIS installers do for silent installation and choosing the directory to install to.
To silently install to the `C:\fluent-bit` directory, here is an example:
The automatically provided uninstaller also supports a silent uninstall using the same `/S` flag. This may be useful for provisioning with automation like Ansible, Puppet, and so on.
Windows services are equivalent to "daemons" in UNIX (i.e. long-running background processes). Since v1.5.0, Fluent Bit has the native support for Windows Service.
Suppose you have the following installation layout:
To register Fluent Bit as a Windows service, execute the following command in Command Prompt. Be careful: a single space is required after `binpath=`.
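A sketch, assuming an installation layout under C:\fluent-bit (adjust the paths to your own installation):

```powershell
sc.exe create fluent-bit binpath= "C:\fluent-bit\bin\fluent-bit.exe -c C:\fluent-bit\conf\fluent-bit.conf"
```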
Now Fluent Bit can be started and managed as a normal Windows service.
To halt the Fluent Bit service, just execute the "stop" command.
To start Fluent Bit automatically on boot, execute the following:
Quotation marks are required if file paths contain spaces, for example under `C:\Program Files`. Here is an example:
Instead of `sc.exe`, PowerShell can be used to manage Windows services.
Create a Fluent Bit service:
Start the service:
Query the service status:
Stop the service:
Remove the service (requires PowerShell 6.0 or later)
If you need to create a custom executable, you can use the following procedure to compile Fluent Bit by yourself.
First, you need Microsoft Visual C++ to compile Fluent Bit. You can install the minimum toolkit by the following command:
When asked which packages to install, choose "C++ Build Tools" (make sure that "C++ CMake tools for Windows" is selected too) and wait until the process finishes.
Also you need to install flex and bison. One way to install them on Windows is to use winflexbison.
Add the path `C:\WinFlexBison` to your system's environment variable "Path". Here's how to do that.
It is important to have installed OpenSSL binaries, at least the library files and headers.
Also you need to install git to pull the source code from the repository.
Open the start menu on Windows and type "Command Prompt for VS". From the result list select the one that corresponds to your target system ( x86 or x64).
Note: Check that the installed OpenSSL library files match the selected target. You can check the library files by using the dumpbin command with the /headers option .
Clone the source code of Fluent Bit.
Compile the source code.
Now you should be able to run Fluent Bit:
To create a ZIP package, call `cpack` as follows:
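For the ZIP generator, that's:

```powershell
cpack -G ZIP
```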
The `env` section allows you to define environment variables directly within the configuration file. These variables can then be used to dynamically replace values throughout your configuration using the `${VARIABLE_NAME}` syntax.
Values set in the env
section are case-sensitive. However, as a best practice, we recommend using uppercase names for environment variables. The example below defines two variables, FLUSH_INTERVAL
and STDOUT_FMT
, which can be accessed in the configuration using ${FLUSH_INTERVAL}
and ${STDOUT_FMT}
:
Fluent Bit provides a set of predefined environment variables that can be used in your configuration:
In addition to variables defined in the configuration file or the predefined ones, Fluent Bit can access system environment variables set in the user space. These external variables can be referenced in the configuration using the same ${VARIABLE_NAME} pattern.
For example, to set the FLUSH_INTERVAL system environment variable to 2 and use it in your configuration:
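For example, on a Linux shell:

```shell
export FLUSH_INTERVAL=2
```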
In the configuration file, you can then access this value as follows:
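A minimal sketch of how the variable could be consumed in YAML:

```yaml
service:
  flush: ${FLUSH_INTERVAL}
```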
This approach allows you to easily manage and override configuration values using environment variables, providing flexibility in various deployment environments.
Kubernetes |
Docker |
Containers on AWS |
CentOS / Red Hat |
Ubuntu |
Debian |
Amazon Linux |
Raspbian / Raspberry Pi |
Yocto / Embedded Linux |
Buildroot / Embedded Linux |
Windows Server 2019 |
Windows 10 2019.03 |
Fluent Bit container images are available on Docker Hub ready for production usage. Current available images can be deployed in multiple architectures.
Get started by simply typing the following command:
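For example, pulling the image from Docker Hub:

```shell
docker pull fluent/fluent-bit
```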
The following table describes the Linux container tags that are available on Docker Hub fluent/fluent-bit repository:
Tag(s) | Manifest Architectures | Description |
---|---|---|
It is strongly suggested that you always use the latest image of Fluent Bit.
Windows container images are provided from v2.0.6 for Windows Server 2019 and Windows Server 2022. These can be found as tags on the same Docker Hub registry above.
Our production stable images are based on Distroless focusing on security containing just the Fluent Bit binary and minimal system libraries and basic configuration. We also provide debug images for all architectures (from 1.9.0+) which contain a full (Debian) shell and package manager that can be used to troubleshoot or for testing purposes.
From a deployment perspective, there's no need to specify an architecture; the container client tool that pulls the image fetches the proper layer for the running architecture.
1.9 and 2.0 container images are signed using Cosign/Sigstore. These signatures can be verified using cosign
(install guide):
Note: replace cosign
above with the binary installed if it has a different name (e.g. cosign-linux-amd64
).
Keyless signing is also provided but this is still experimental:
Note: COSIGN_EXPERIMENTAL=1
is used to allow verification of images signed in KEYLESS mode. To learn more about keyless signing, please refer to Keyless Signatures.
Download the last stable image from 2.0 series:
Once the image is in place, now run the following (useless) test which makes Fluent Bit measure CPU usage by the container:
That command will let Fluent Bit measure CPU usage every second and flush the results to the standard output, e.g:
Alpine Linux uses the Musl C library instead of Glibc. Musl is not fully compatible with Glibc, which generates many issues in the following areas when used with Fluent Bit:
Memory allocator: to run Fluent Bit properly in high-load environments, we use jemalloc as the default memory allocator, which reduces fragmentation and provides better performance for our needs. Jemalloc cannot run smoothly with Musl and requires extra work.
Alpine Linux's Musl bootstrap functions have a compatibility issue when loading Golang shared libraries, which causes problems when trying to load Golang output plugins in Fluent Bit.
Alpine Linux's Musl time format parser doesn't support Glibc extensions.
The maintainers' preferred base images, for security and maintenance reasons, are Distroless and Debian.
Briefly tackled in a blog post which links out to the following possibly opposing views:
The reasons for using Distroless are fairly well covered here: https://github.com/GoogleContainerTools/distroless#why-should-i-use-distroless-images
Only include what you need, reduce the attack surface available.
Reduces size, which improves performance as well.
Reduces false positives on scans (and reduces resources required for scanning).
Reduces supply chain security requirements to just what you need.
Helps prevent unauthorised processes or users interacting with the container.
Less need to harden the container (and container runtime, K8S, etc.).
Faster CICD processes.
With any choice of course there are downsides:
No shell or package manager to update/add things.
Generally, though, dynamic updating is a bad idea in containers because the time at which it's done affects the outcome: two containers started at different times using the same base image may perform differently or get different dependencies, etc.
A better approach is to rebuild a new image version, which you can also do with Distroless; however, it's harder, requiring multi-stage builds or similar to provide the new dependencies.
Debugging can be harder.
More specifically you need applications set up to properly expose information for debugging rather than rely on traditional debug approaches of connecting to processes or dumping memory. This can be an upfront cost vs a runtime cost but does shift left in the development process so hopefully is a reduction overall.
Assumption that Distroless is secure: nothing is secure (just more or less secure) and there are still exploits so it does not remove the need for securing your system.
Sometimes you need to use a common base image, e.g. with audit/security/health/etc. hooks integrated, or common base tooling (this could still be Distroless though).
One other important thing to note is that exec'ing into a container can impact resource limits.
For debugging, debug containers are available now in K8S: https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/#ephemeral-container
This can be a quite different container from the one you want to investigate (e.g. lots of extra tools or even a different base).
No resource limits applied to this container - can be good or bad.
Runs in pod namespaces, just another container that can access everything the others can.
May need architecture of the pod to share volumes, etc.
Requires more recent versions of K8S and the container runtime plus RBAC allowing it.
Fluent Bit traditionally offered a classic
configuration mode, a custom configuration format that we are gradually phasing out. While classic
mode has served well for many years, it has several limitations. Its basic design only supports grouping sections with key-value pairs and lacks the ability to handle sub-sections or complex data structures like lists.
YAML, now a mainstream configuration format, has become essential in a cloud ecosystem where everything is configured this way. To minimize friction and provide a more intuitive experience for creating data pipelines, we strongly encourage users to transition to YAML. The YAML format enables features, such as processors, that are not possible to configure in classic
mode.
As of Fluent Bit v3.2, you can configure everything in YAML.
Configuring Fluent Bit with YAML introduces the following root-level sections:
Section Name | Description |
---|---|
To access detailed configuration guides for each section, use the following links:
Overview of global settings, configuration options, and examples.
Detailed guide on defining parsers and supported formats.
Multiline Parsers Section documentation
Explanation of multiline parsing configuration.
Pipeline Section documentation
Details on setting up pipelines and using processors.
How to load external plugins.
Upstream Servers Section documentation
Guide on setting up and using upstream nodes with supported plugins.
Environment Variables Section documentation
Information on setting environment variables and their scope within Fluent Bit.
Includes Section documentation
Description on how to include external YAML files.
Kubernetes Production Grade Log Processor
Fluent Bit is a lightweight and extensible Log Processor that comes with full support for Kubernetes:
Process Kubernetes containers logs from the file system or Systemd/Journald.
Enrich logs with Kubernetes Metadata.
Centralize your logs in third party storage services like Elasticsearch, InfluxDB, HTTP, etc.
Before getting started, it's important to understand how Fluent Bit will be deployed. Kubernetes manages a cluster of nodes, so the log agent needs to run on every node to collect logs from every pod. For this reason, Fluent Bit is deployed as a DaemonSet (a pod that runs on every node of the cluster).
When Fluent Bit runs, it will read, parse and filter the logs of every POD and will enrich each entry with the following information (metadata):
Pod Name
Pod ID
Container Name
Container ID
Labels
Annotations
To obtain this information, a built-in filter plugin called kubernetes talks to the Kubernetes API Server to retrieve relevant information such as the pod_id, labels, and annotations. Other fields, such as pod_name, container_id, and container_name, are retrieved locally from the log file names. All of this is handled automatically; no configuration intervention is required.
Our Kubernetes Filter plugin is fully inspired by the Fluentd Kubernetes Metadata Filter written by Jimmi Dyson.
Fluent Bit should be deployed as a DaemonSet, so it will be available on every node of your Kubernetes cluster.
The recommended way to deploy Fluent Bit is with the official Helm Chart: https://github.com/fluent/helm-charts
If you are using Red Hat OpenShift you will also need to set up security context constraints (SCC) using the relevant option in the helm chart.
Helm is a package manager for Kubernetes and allows you to quickly deploy application packages into your running cluster. Fluent Bit is distributed via a helm chart found in the Fluent Helm Charts repo: https://github.com/fluent/helm-charts.
To add the Fluent Helm Charts repo, use the following command:
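For example, assuming the standard Fluent charts repository URL:

```shell
helm repo add fluent https://fluent.github.io/helm-charts
```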
To validate that the repo was added, you can run helm search repo fluent to ensure the charts were added. The default chart can then be installed by running the following:
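A sketch of those commands; the release name fluent-bit is arbitrary:

```shell
# Confirm the charts are visible
helm search repo fluent

# Install (or upgrade) the default chart
helm upgrade --install fluent-bit fluent/fluent-bit
```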
The default chart values include configuration to read container logs (with Docker parsing), read systemd logs, apply Kubernetes metadata enrichment, and finally output to an Elasticsearch cluster. You can modify the included values file (https://github.com/fluent/helm-charts/blob/master/charts/fluent-bit/values.yaml) to specify additional outputs, health checks, monitoring endpoints, or other configuration options.
The default configuration of Fluent Bit ensures the following:
Consume all container logs from the running node and parse them with either the docker or cri multiline parser.
Persist how far it has read into each file it's tailing, so that if a pod is restarted, it picks up from where it left off.
The Kubernetes filter enriches the logs with Kubernetes metadata, specifically labels and annotations. The filter only queries the API Server when it can't find the cached info; otherwise it uses the cache.
The default backend in the configuration is Elasticsearch, set by the Elasticsearch output plugin. It uses the Logstash format to ingest the logs. If you need a different index and type, refer to the plugin options and make your own adjustments.
There's an option called Retry_Limit set to False, which means that if Fluent Bit can't flush the records to Elasticsearch, it will retry indefinitely until it succeeds.
Since v1.5.0, Fluent Bit supports deployment to Windows pods.
When deploying Fluent Bit to Kubernetes, there are three log files that you need to pay attention to.
C:\k\kubelet.err.log
This is the error log file from kubelet daemon running on host.
You will need to retain this file for future troubleshooting (to debug deployment failures etc.)
C:\var\log\containers\<pod>_<namespace>_<container>-<docker>.log
This is the main log file you need to watch. Configure Fluent Bit to follow this file.
It is actually a symlink to the Docker log file in C:\ProgramData\
, with some additional metadata on its file name.
C:\ProgramData\Docker\containers\<docker>\<docker>.log
This is the log file produced by Docker.
Normally you don't directly read from this file, but you need to make sure that this file is visible from Fluent Bit.
Typically, your deployment yaml contains the following volume configuration.
Assuming the basic volume configuration described above, you can apply the following config to start logging. You can visualize this configuration here (Sign-up required)
Windows pods often lack working DNS immediately after boot (#78479). To mitigate this issue, filter_kubernetes provides a built-in mechanism to wait until the network starts up:
DNS_Retries - Retries N times until the network starts working (default: 6).
DNS_Wait_Time - Lookup interval between network status checks (default: 30).
By default, Fluent Bit waits for 3 minutes (30 seconds x 6 times). If it's not enough for you, tweak the configuration as follows.
Fluent Bit source code provides Bitbake recipes to configure, build, and package the software for a Yocto-based image. Note that the specific steps for using these recipes in your Yocto environment (Poky) are out of the scope of this documentation.
We distribute two main recipes: one for testing/development purposes and another for the latest stable release.
Version | Recipe | Description |
---|---|---|
It's strongly recommended to always use the stable release of Fluent Bit recipe and not the one from GIT master for production deployments.
Fluent Bit >= v1.1.x fully supports x86_64, x86, arm32v7 and arm64v8.
The pipeline
section defines the flow of how data is collected, processed, and sent to its final destination. It encompasses the following core concepts:
Name | Description |
---|---|
Here’s a simple example of a pipeline configuration:
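A minimal YAML sketch; the file path is a placeholder:

```yaml
pipeline:
  inputs:
    - name: tail
      path: /var/log/app.log
  outputs:
    - name: stdout
      match: '*'
```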
Processors operate on specific signals such as logs, metrics, and traces. They are attached to an input plugin and must specify the signal type they will process.
In the example below, the content_modifier processor inserts or updates (upserts) the key my_new_key with the value 123 for all log records generated by the tail plugin. This processor is only applied to log signals:
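A sketch of what that configuration could look like in YAML; the tailed file path is a placeholder:

```yaml
pipeline:
  inputs:
    - name: tail
      path: /var/log/app.log
      processors:
        logs:
          - name: content_modifier
            action: upsert
            key: my_new_key
            value: 123
  outputs:
    - name: stdout
      match: '*'
```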
Here is a more complete example with multiple processors:
You might have noticed that processors can be attached not only to inputs but also to outputs.
While processors and filters are similar in that they can transform, enrich, or drop data from the pipeline, there is a significant difference in how they operate:
Processors: Run in the same thread as the input plugin when the input plugin is configured to be threaded (threaded: true). This design provides better performance, especially in multi-threaded setups.
Filters: Run in the main event loop. When multiple filters are used, they can introduce performance overhead, particularly under heavy workloads.
You can configure existing Filters to run as processors. There are no specific changes needed; you simply use the filter name as if it were a native processor.
In the example below, the grep filter is used as a processor to filter log events based on a pattern:
Fluent Bit supports the usage of environment variables in any value associated to a key when using a configuration file.
The variables are case sensitive and can be used in the following format:
When Fluent Bit starts, the configuration reader will detect any request for ${MY_VARIABLE}
and will try to resolve its value.
When Fluent Bit is running under systemd (using the official packages), environment variables can be set in the following files:
/etc/default/fluent-bit
(Debian based system)
/etc/sysconfig/fluent-bit
(Others)
These files are ignored if they do not exist.
Create the following configuration file (fluent-bit.conf):
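A minimal sketch of such a file, where the output plugin name is taken from the MY_OUTPUT variable:

```text
[SERVICE]
    Flush        1
    Log_Level    info

[INPUT]
    Name   cpu
    Tag    cpu.local

[OUTPUT]
    Name   ${MY_OUTPUT}
    Match  *
```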
Open a terminal and set the environment variable:
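For example, on a Linux shell:

```shell
export MY_OUTPUT=stdout
```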
The above command sets the value stdout for the variable MY_OUTPUT.
Run Fluent Bit with the recently created configuration file:
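For example:

```shell
fluent-bit -c fluent-bit.conf
```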
As you can see the service worked properly as the configuration was valid.
This page describes the main configuration file used by Fluent Bit.
One of the ways to configure Fluent Bit is using a main configuration file. Fluent Bit allows the use of one configuration file that works at a global scope, using the defined format and schema.
The main configuration file supports four sections:
Service
Input
Filter
Output
It's also possible to split the main configuration file into multiple files using the Include File feature to include external files.
The Service
section defines global properties of the service. The following keys are:
Key | Description | Default Value |
---|---|---|
The following is an example of a SERVICE
section:
For scheduler and retry details, see scheduling and retries.
The INPUT
section defines a source (related to an input plugin). Each input plugin can add its own configuration keys:
Name
is mandatory and tells Fluent Bit which input plugin to load. Tag
is mandatory for all plugins except for the input forward
plugin, which provides dynamic tags.
The following is an example of an INPUT
section:
The FILTER section defines a filter (related to a filter plugin). Each filter plugin can add its own configuration keys. The base configuration for each FILTER section contains:
Name is mandatory and lets Fluent Bit know which filter plugin should be loaded. Match or Match_Regex is mandatory for all plugins. If both are specified, Match_Regex takes precedence.
The following is an example of a FILTER
section:
The OUTPUT
section specifies a destination that certain records should go to after a Tag
match. Fluent Bit can route up to 256 OUTPUT
plugins. The configuration supports the following keys:
The following is an example of an OUTPUT
section:
The following configuration file example demonstrates how to collect CPU metrics and flush the results every five seconds to the standard output:
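A sketch of that configuration in the classic format:

```text
[SERVICE]
    Flush     5
    Daemon    off
    Log_Level debug

[INPUT]
    Name  cpu
    Tag   my_cpu

[OUTPUT]
    Name  stdout
    Match my_cpu
```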
To avoid complicated, long configuration files, it's better to split specific parts into different files and include them from one main file. The @INCLUDE command can be used in the following way:
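For example, a main file could include another file like this:

```text
[SERVICE]
    Flush 1

@INCLUDE somefile.conf
```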
The configuration reader will try to open the path somefile.conf
. If not found, the reader assumes the file is on a relative path based on the path of the base configuration file:
Main configuration path: /tmp/main.conf
Included file: somefile.conf
Fluent Bit will try to open somefile.conf; if that fails, it will try /tmp/somefile.conf.
The @INCLUDE command only works at the top level of the configuration and can't be used inside sections.
Wildcard character (*
) supports including multiple files. For example:
Files matching the wildcard character are included unsorted. If plugin ordering between files needs to be preserved, the files should be included explicitly.
Certain configuration directives in Fluent Bit refer to unit sizes, such as when defining the size of a buffer or specific limits. These can be found in plugins like Tail Input and Forward Input, or in generic properties like Mem_Buf_Limit.
Starting from Fluent Bit v0.11.10, all unit sizes have been standardized across the core and plugins. The following table describes the options that can be used and what they mean:
Suffix | Description | Example |
---|---|---|
Fluent Bit provides integrated support for Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL). In this section, we refer to both implementations as TLS.
Both input and output plugins that perform Network I/O can optionally enable TLS and configure the behavior. The following table describes the properties available:
Property | Description | Default |
---|---|---|
Note: in order to use TLS on input plugins, you must provide both a certificate and a private key.
The listed properties can be enabled in the configuration file, specifically on each output plugin section or directly through the command line.
The following output plugins can take advantage of the TLS feature:
The following input plugins can take advantage of the TLS feature:
In addition, other plugins implement a subset of TLS support, meaning restricted configuration options:
By default HTTP input plugin uses plain TCP, enabling TLS from the command line can be done with:
In the command line above, the two properties tls and tls.verify were enabled for demonstration purposes (we strongly suggest always keeping verification on).
The same behavior can be accomplished using a configuration file:
By default HTTP output plugin uses plain TCP, enabling TLS from the command line can be done with:
In the command line above, the two properties tls and tls.verify were enabled for demonstration purposes (we strongly suggest always keeping verification on).
The same behavior can be accomplished using a configuration file:
This will generate a 4096-bit RSA key pair and a certificate signed using SHA-256, with the expiration date set to 30 days in the future, test.host.net set as the common name, and, since we opted out of DES, the private key stored in plain text.
Fluent Bit supports TLS server name indication. If you are serving multiple hostnames on a single IP address (a.k.a. virtual hosting), you can make use of tls.vhost
to connect to a specific hostname.
By default, TLS verification of hostnames is not done automatically. As an example, we can extract the X509v3 Subject Alternative Name from a certificate:
As you can see, this certificate covers only my.fluent-aggregator.net
so if we use a different hostname it should fail.
To fully verify the alternative name and demonstrate the failure, we enable tls.verify_hostname:
This outgoing connection will fail and be disconnected:
A full feature set to access content of your records
Fluent Bit works internally with structured records, which can be composed of an unlimited number of keys and values. Values can be anything: a number, a string, an array, or a map.
Having a way to select a specific part of the record is critical for certain core functionalities and plugins; this feature is called Record Accessor.
Consider Record Accessor a simple grammar to specify record content and other miscellaneous values.
A record accessor rule starts with the character $
. Using the structured content above as an example the following table describes how to access a record:
The following table describes some accessing rules and the expected returned values:
Format | Accessed Value |
---|---|
If the accessor key doesn't exist in the record, as in the last example $labels['undefined'], the operation is simply omitted; no exception will occur.
The feature is enabled on a per-plugin basis; not all plugins support it. As an example, consider a configuration that uses the grep filter to match only records whose labels have the color blue:
The file content to process in test.log
is the following:
Running Fluent Bit with the configuration above the output will be:
The Fluent Bit record_accessor library has a limitation in the characters that can separate template variables: only dots and commas (. and ,) can come after a template variable. This is because the templating library must parse the template and determine the end of a variable.
The following would be invalid templates because the two template variables are not separated by commas or dots:
$TaskID-$ECSContainerName
$TaskID/$ECSContainerName
$TaskID_$ECSContainerName
$TaskIDfooo$ECSContainerName
However, the following are valid:
$TaskID.$ECSContainerName
$TaskID.ecs_resource.$ECSContainerName
$TaskID.fooo.$ECSContainerName
And the following are valid since they only contain one template variable with nothing after it:
fooo$TaskID
fooo____$TaskID
fooo/bar$TaskID
It's common for Fluent Bit output plugins to connect to external services to deliver logs over the network; this is the case for HTTP, Elasticsearch, and Forward, among others. Being able to connect to one node (host) is normal and enough for most use cases, but there are other scenarios where balancing across different nodes is required. The Upstream feature provides that capability.
An Upstream defines a set of nodes that will be targeted by an output plugin. By the nature of the implementation, an output plugin must support the Upstream feature. The following plugin(s) have Upstream support:
The current balancing mode implemented is round-robin.
To define an Upstream, you must create a specific configuration file that contains an UPSTREAM section and one or multiple NODE sections. The following table describes the properties associated with each section. Note that all of them are mandatory:
Section | Key | Description |
---|---|---|
A Node might contain additional configuration keys required by the plugin; this provides enough flexibility for the output plugin. A common use case is the Forward output: if TLS is enabled, it requires a shared_key (more details in the example below).
In addition to the properties defined in the table above, the network operations against a defined node can optionally be done through the use of TLS for further encryption and certificates use.
The TLS options available are described in the TLS/SSL section and can be added to any Node section.
The following example defines an Upstream called forward-balancing, which aims to be used by the Forward output plugin. It registers three nodes:
node-1: connects to 127.0.0.1:43000
node-2: connects to 127.0.0.1:44000
node-3: connects to 127.0.0.1:45000 using TLS without verification. It also defines a specific configuration option required by Forward output called shared_key.
Note that every Upstream definition must exist in its own configuration file in the file system. Adding multiple Upstream definitions to the same file isn't allowed.
Fluent Bit has an engine that helps to coordinate the data ingestion from input plugins. The engine calls the scheduler to decide when it's time to flush the data through one or multiple output plugins. The scheduler flushes new data at a fixed number of seconds, and retries when asked.
When an output plugin gets called to flush some data, after processing that data it can notify the engine using these possible return statuses:
OK
: Data successfully processed and flushed.
Retry
: If a retry is requested, the engine asks the scheduler to retry flushing that data. The scheduler decides how many seconds to wait before retry.
Error
: An unrecoverable error occurred and the engine shouldn't try to flush that data again.
The scheduler provides two configuration options, called scheduler.cap
and scheduler.base
, which can be set in the Service section. These determine the waiting time before a retry happens.
Key | Description | Default |
---|---|---|
The scheduler.base
determines the lower bound of time and the scheduler.cap
determines the upper bound for each retry.
Fluent Bit uses an exponential backoff and jitter algorithm to determine the waiting time before a retry. The waiting time is a random number between a configurable upper and lower bound. For a detailed explanation of the exponential backoff and jitter algorithm, see Exponential Backoff And Jitter.
For example:
For the Nth retry, the lower bound of the random number will be base, and the upper bound will be min(base * 2^N, cap).
For example:
When base
is set to 3 and cap
is set to 30:
First retry: The lower bound will be 3. The upper bound will be 3 * 2 = 6. The waiting time will be a random number between (3, 6).
Second retry: The lower bound will be 3. The upper bound will be 3 * (2 * 2) = 12. The waiting time will be a random number between (3, 12).
Third retry: The lower bound will be 3. The upper bound will be 3 * (2 * 2 * 2) = 24. The waiting time will be a random number between (3, 24).
Fourth retry: The lower bound will be 3. Because 3 * (2 * 2 * 2 * 2) = 48 exceeds the cap of 30, the upper bound will be 30. The waiting time will be a random number between (3, 30).
The following example configures scheduler.base as 3 seconds and scheduler.cap as 30 seconds.
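A sketch of that service configuration in the classic format:

```text
[SERVICE]
    Flush           5
    Log_Level       info
    scheduler.base  3
    scheduler.cap   30
```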
The waiting time will be:
The scheduler provides a configuration option called Retry_Limit
, which can be set independently for each output section. This option lets you disable retries or impose a limit to try N times and then discard the data after reaching that limit:
The following example configures two outputs, where the HTTP plugin has an unlimited number of retries and the Elasticsearch plugin has a limit of 5 retries:
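A sketch of those two output sections; hosts and ports are placeholders:

```text
[OUTPUT]
    Name         http
    Match        *
    Host         192.168.5.6
    Port         8080
    Retry_Limit  False

[OUTPUT]
    Name         es
    Match        *
    Host         192.168.5.20
    Port         9200
    Retry_Limit  5
```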
Learn how to run Fluent Bit in multiple threads for improved scalability.
Fluent Bit has one event loop to handle critical operations, like managing timers, receiving internal messages, scheduling flushes, and handling retries. This event loop runs in the main Fluent Bit thread.
To free up resources in the main thread, you can configure inputs and outputs to run in their own self-contained threads. However, inputs and outputs implement multithreading in distinct ways: inputs can run in threaded mode, and outputs can use one or more workers.
Threading also affects certain processes related to inputs and outputs. For example, filters always run in the main thread, while processors run in the self-contained threads of their respective inputs or outputs, if applicable.
When inputs collect telemetry data, they can either perform this process inside the main Fluent Bit thread or inside a separate dedicated thread. You can configure this behavior by enabling or disabling the threaded
setting.
All inputs are capable of running in threaded mode, but certain inputs always run in threaded mode regardless of configuration. These always-threaded inputs are:
Inputs aren't internally aware of multithreading. If an input runs in threaded mode, Fluent Bit manages the logistics of that input's thread.
When outputs flush data, they can either perform this operation inside Fluent Bit's main thread or inside a separate dedicated thread called a worker. Each output can have one or more workers running in parallel, and each worker can handle multiple concurrent flushes. You can configure this behavior by changing the value of the workers
setting.
All outputs are capable of running in multiple workers, and each output has a default value of 0
, 1
, or 2
workers. However, even if an output uses workers by default, you can safely reduce the number of workers below the default or disable workers entirely.
It's possible for logs or data to be ingested or created faster than the ability to flush it to some destinations. A common scenario is when reading from big log files, especially with a large backlog, and dispatching the logs to a backend over the network, which takes time to respond. This generates backpressure, leading to high memory consumption in the service.
To avoid backpressure, Fluent Bit implements a mechanism in the engine that restricts the amount of data an input plugin can ingest. Restriction is done through the configuration parameters Mem_Buf_Limit
and storage.Max_Chunks_Up
.
As described in the concepts section, Fluent Bit offers two modes for data handling: in-memory only (default) and in-memory and filesystem (optional).
The default storage.type memory buffer can be restricted with Mem_Buf_Limit. If memory reaches this limit and you face a backpressure scenario, you won't be able to ingest more data until the data chunks that are in memory can be flushed. The input pauses and Fluent Bit emits a [warn] [input] {input name or alias} paused (mem buf overlimit) log message.
Depending on the input plugin in use, this might cause incoming data to be discarded (for example, with the TCP input plugin). The tail plugin can handle pauses without data loss, storing its current file offset and resuming reading later. When buffer memory is available again, the input resumes accepting logs and Fluent Bit emits an [info] [input] {input name or alias} resume (mem buf overlimit) message.
Mitigate the risk of data loss by configuring secondary storage on the filesystem using storage.type filesystem (as described in Buffering & Storage). Initially, logs will be buffered to both memory and the filesystem. When the storage.max_chunks_up limit is reached, all new data will be stored in the filesystem. Fluent Bit stops queueing new data in memory and buffers only to the filesystem. When storage.type filesystem is set, the Mem_Buf_Limit setting no longer has any effect. Instead, the [SERVICE]-level storage.max_chunks_up setting controls the size of the memory buffer.
Mem_Buf_Limit
Mem_Buf_Limit
applies only with the default storage.type memory
. This option is disabled by default and can be applied to all input plugins.
As an example situation:
Mem_Buf_Limit
is set to 1MB
.
The input plugin tries to append 700 KB.
The engine routes the data to an output plugin.
The output plugin backend (HTTP Server) is down.
Engine scheduler retries the flush after 10 seconds.
The input plugin tries to append 500 KB.
In this situation, the engine allows appending those 500 KB of data into the memory, with a total of 1.2 MB of data buffered. The limit is permissive and will allow a single write past the limit. When the limit is exceeded, the following actions are taken:
Block local buffers for the input plugin (can't append more data).
Notify the input plugin, invoking a pause
callback.
The engine protects itself and won't append more data coming from the input plugin in question. It's the responsibility of the plugin to keep state and decide what to do in a paused
state.
In a few seconds, if the scheduler was able to flush the initial 700 KB of data or it has given up after retrying, that amount of memory is released and the following actions occur:
Upon data buffer release (700 KB), the internal counters get updated.
Counters now are set at 500 KB.
Because 500 KB is less than 1 MB, the engine checks the input plugin state.
If the plugin is paused, it invokes a resume
callback.
The input plugin can continue appending more data.
storage.max_chunks_up
The [SERVICE]
level storage.max_chunks_up
setting controls the size of the memory buffer. When storage.type filesystem
is set, the Mem_Buf_Limit
setting no longer has an effect.
This setting behaves similarly to the Mem_Buf_Limit scenario when the non-default storage.pause_on_chunks_overlimit option is enabled.
When storage.pause_on_chunks_overlimit is disabled (the default), the input won't pause when the memory limit is reached. Instead, it switches to buffering logs only in the filesystem. Limit the disk space used for filesystem buffering with storage.total_limit_size.
Each plugin is independent and not all of them implement pause
and resume
callbacks. These callbacks are a notification mechanism for the plugin.
With the default storage.type memory
and Mem_Buf_Limit
, the following log messages emit for pause
and resume
:
With storage.type filesystem
and storage.max_chunks_up
, the following log messages emit for pause
and resume
:
You might need to estimate how much memory Fluent Bit could be using in scenarios like containerized environments where memory limits are essential.
To make an estimate, in-use input plugins must set the Mem_Buf_Limit option. Learn more about it in the Backpressure section.
Input plugins append data independently. To make an estimation, impose a limit with the Mem_Buf_Limit
option. If the limit was set to 10MB
, you can estimate that in the worst case, the output plugin likely could use 20MB
.
Fluent Bit has an internal binary representation for the data being processed. When this data reaches an output plugin, the plugin can create its own representation in a new memory buffer for processing. The best examples are output plugins that need to convert the binary representation to their respective custom JSON formats before sending data to the backend servers.
When imposing a limit of 10MB
for the input plugins, and a worst case scenario of the output plugin consuming 20MB
, you need to allocate a minimum (30MB
x 1.2) = 36MB
.
In intensive environments where memory allocations happen at very high rates, the default memory allocator provided by Glibc can lead to high fragmentation, causing the service to report high memory usage.
It's strongly suggested that in any production environment, Fluent Bit should be built with jemalloc enabled (-DFLB_JEMALLOC=On). The jemalloc implementation of malloc is an alternative memory allocator that can reduce fragmentation, resulting in better performance.
Use the following command to determine if Fluent Bit has been built with jemalloc:
The output should look like:
If the FLB_HAVE_JEMALLOC
option is listed in Build Flags
, jemalloc is enabled.
Fluent Bit is designed for high performance and minimal resource usage. Depending on your use case, you can optimize further using specific configuration options to achieve faster performance or reduce resource consumption.
The Tail
input plugin is used to read data from files on the filesystem. By default, it uses a small memory buffer of 32KB
per monitored file. While this is sufficient for most generic use cases and helps keep memory usage low when monitoring many files, there are scenarios where you may want to increase performance by using more memory.
If your files are typically larger than 32KB, consider increasing the buffer size to speed up file reading. For example, you can experiment with a buffer size of 128KB:
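A sketch of a tail input with a larger buffer; the path is a placeholder, and the exact buffer property names should be checked against the Tail plugin reference for your version:

```yaml
pipeline:
  inputs:
    - name: tail
      path: /var/log/app.log
      buffer_chunk_size: 128KB
      buffer_max_size: 128KB
  outputs:
    - name: stdout
      match: '*'
```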
By increasing the buffer size, Fluent Bit will make fewer system calls (read(2)) to read the data, reducing CPU usage and improving performance.
Starting in Fluent Bit v3.2, performance improvements have been introduced for JSON encoding. Plugins that convert logs from Fluent Bit’s internal binary representation to JSON can now do so up to 30% faster using SIMD (Single Instruction, Multiple Data) optimizations.
Ensure that your Fluent Bit binary is built with SIMD support. This feature is available for architectures such as x86_64, amd64, aarch64, and arm64. As of now, SIMD is only enabled by default in Fluent Bit container images.
You can check if SIMD is enabled by looking for the following log entry when Fluent Bit starts:
Look for the simd entry, which will indicate the SIMD support type, such as SSE2, NEON, or none.
If your Fluent Bit binary was not built with SIMD enabled and you are using a supported platform, you can build Fluent Bit from source using the CMake option -DFLB_SIMD=On
.
By default, most input plugins run in the same system thread as the main event loop. However, through configuration you can instruct them to run in a separate thread, which allows you to take advantage of other CPU cores in your system.
To run an input plugin in threaded mode, just add threaded: true as in the example below:
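For example, in YAML:

```yaml
pipeline:
  inputs:
    - name: tail
      path: /var/log/app.log
      threaded: true
  outputs:
    - name: stdout
      match: '*'
```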
Fluent Bit implements a unified networking interface that's exposed to components like plugins. This interface abstracts the complexity of general I/O and is fully configurable.
A common use case is when a component or plugin needs to connect with a service to send and receive data. There are many challenges to handle like unresponsive services, networking latency, or any kind of connectivity error. The networking interface aims to abstract and simplify the network I/O handling, minimize risks, and optimize performance.
Fluent Bit uses the following networking concepts:
Typically, creating a new TCP connection to a remote server is straightforward and takes a few milliseconds. However, there are cases where DNS resolving, a slow network, or incomplete TLS handshakes might create long delays, or incomplete connection statuses.
net.connect_timeout
lets you configure the maximum time to wait for a connection to be established. This value already considers the TLS handshake process.
net.connect_timeout_log_error
indicates if an error should be logged in case of connect timeout. If disabled, the timeout is logged as a debug level message.
On environments with multiple network interfaces, you can choose which interface to use for Fluent Bit data that will flow through the network.
Use net.source_address
to specify which network address to use for a TCP connection and data flow.
A connection keepalive refers to the ability of a client to keep the TCP connection open in a persistent way. This feature offers many benefits in terms of performance because communication channels are always established beforehand.
Any component that uses TCP channels, like HTTP, can take advantage of this feature. For configuration purposes, use the net.keepalive property.
If a connection keepalive is enabled, there might be scenarios where the connection can be unused for long periods of time. Unused connections can be removed. To control how long a keepalive connection can be idle, Fluent Bit uses a configuration property called net.keepalive_idle_timeout
.
The global dns.mode
value issues DNS requests using the specified protocol, either TCP or UDP. If a transport layer protocol is specified, plugins that configure the net.dns.mode
setting override the global setting.
For optimal performance, Fluent Bit tries to deliver data quickly and create TCP connections on-demand and in keepalive mode. In highly scalable environments, you might limit how many connections are created in parallel.
Use the net.max_worker_connections
property in the output plugin section to set the maximum number of allowed connections. This property acts at the worker level. For example, if you have five workers and net.max_worker_connections
is set to 10, a maximum of 50 connections is allowed. If the limit is reached, the output plugin issues a retry.
The following table describes the network configuration properties available and their usage in optimizing performance or adjusting configuration needs for plugins that rely on networking I/O:
This example sends five random messages through a TCP output connection. The remote side uses the nc
(netcat) utility to see the data.
Put the following configuration snippet in a file called fluent-bit.conf:
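A sketch of such a configuration, using the random input to generate five sample records; the property names should be checked against the random and tcp plugin references:

```text
[SERVICE]
    Flush     1
    Log_Level info

[INPUT]
    Name     random
    Samples  5

[OUTPUT]
    Name    tcp
    Match   *
    Host    127.0.0.1
    Port    9090
    Format  json_lines
    net.keepalive               on
    net.keepalive_idle_timeout  10
```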
In another terminal, start nc and make it listen for messages on TCP port 9090:
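For example (depending on your netcat variant, the listen syntax might instead be nc -l -p 9090):

```shell
nc -l 9090
```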
Start Fluent Bit with the configuration file you defined previously to see data flowing to netcat:
If the net.keepalive
option isn't enabled, Fluent Bit closes the TCP connection and netcat quits.
After the five records arrive, the connection idles. After 10 seconds, the connection closes due to net.keepalive_idle_timeout
.
Enable traffic through a proxy server using the HTTP_PROXY environment variable.
Fluent Bit supports configuring an HTTP proxy for all egress HTTP/HTTPS traffic using the HTTP_PROXY
or http_proxy
environment variable.
The format for the HTTP proxy environment variable is http://USER:PASS@HOST:PORT
, where:
USER
is the username when using basic authentication.
PASS
is the password when using basic authentication.
HOST
is the HTTP proxy hostname or IP address.
PORT
is the port the HTTP proxy is listening on.
To use an HTTP proxy with basic authentication, provide the username and password:
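For example, with placeholder credentials and proxy host:

```shell
export HTTP_PROXY="http://user:password@proxy.example.com:8080"
```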
When no authentication is required, omit the username and password:
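For example:

```shell
export HTTP_PROXY="http://proxy.example.com:8080"
```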
The HTTP_PROXY environment variable is a standard way of setting an HTTP proxy in a containerized environment, and it's also natively supported by any application written in Go. Fluent Bit implements the same convention. The http_proxy environment variable is also supported. When both the HTTP_PROXY and http_proxy environment variables are provided, HTTP_PROXY is preferred.
Some output plugins also support configuring an HTTP proxy through their own options. That configuration works, but it shouldn't be used together with the HTTP_PROXY or http_proxy environment variable. The environment variable-based proxy configuration is implemented by creating a TCP connection tunnel. Unlike the plugin-level implementation, it supports both HTTP and HTTPS egress traffic.
NO_PROXY
Use the NO_PROXY
environment variable when traffic shouldn't flow through the HTTP proxy. The no_proxy
environment variable is also supported. When both NO_PROXY
and no_proxy
environment variables are provided, NO_PROXY
takes precedence.
The format for the no_proxy
environment variable is a comma-separated list of host names or IP addresses.
A domain name matches itself and all of its subdomains (for example, example.com
matches both example.com
and test.example.com
):
A domain with a leading dot (.
) matches only its subdomains (for example, .example.com
matches test.example.com
but not example.com
):
As an example, you might use NO_PROXY when running Fluent Bit in a Kubernetes environment where you want:
All real egress traffic to flow through an HTTP proxy.
All local Kubernetes traffic to not flow through the HTTP proxy.
In this case, set:
Learn how to monitor your data pipeline with external services
A Data Pipeline represents a flow of data that goes through the inputs (sources), filters, and output (sinks). There are a couple of ways to monitor the pipeline. We recommend the following sections for a better understanding and steps to get started:
Learn how to monitor your Fluent Bit data pipelines
Fluent Bit includes features for monitoring the internals of your pipeline, in addition to connecting to Prometheus and Grafana, Health checks, and connectors to use external services:
Fluent Bit includes an HTTP server for querying internal information and monitoring metrics of each running plugin.
You can integrate the monitoring interface with Prometheus.
To get started, enable the HTTP server from the configuration file. The following configuration instructs Fluent Bit to start an HTTP server on TCP port 2020 and listen on all network interfaces:
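A sketch of that service configuration in the classic format:

```text
[SERVICE]
    HTTP_Server  On
    HTTP_Listen  0.0.0.0
    HTTP_Port    2020

[INPUT]
    Name cpu

[OUTPUT]
    Name  stdout
    Match *
```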
Apply the configuration file:
Fluent Bit starts and generates output in your terminal:
Use curl
to gather information about the HTTP server. The following command sends the command output to the jq
program, which outputs human-readable JSON data to the terminal.
Fluent Bit exposes the following endpoints for monitoring.
The following descriptions apply to v1 metric endpoints.
/api/v1/metrics/prometheus
endpoint
The following descriptions apply to metrics outputted in Prometheus format by the /api/v1/metrics/prometheus
endpoint.
The following terms are key to understanding how Fluent Bit processes metrics:
Record: a single message collected from a source, such as a single long line in a file.
Chunk: log records ingested and stored by Fluent Bit input plugin instances. A batch of records in a chunk are tracked together as a single unit.
The Fluent Bit engine attempts to fit records into chunks of at most 2 MB
, but the size can vary at runtime. Chunks are then sent to an output. An output plugin instance can either successfully send the full chunk to the destination and mark it as successful, or it can fail the chunk entirely if an unrecoverable error is encountered, or it can ask for the chunk to be retried.
/api/v1/storage
endpoint
The following descriptions apply to metrics outputted in JSON format by the /api/v1/storage
endpoint.
The following descriptions apply to v2 metric endpoints.
/api/v2/metrics/prometheus
or /api/v2/metrics
endpoint
The following descriptions apply to metrics outputted in Prometheus format by the /api/v2/metrics/prometheus
or /api/v2/metrics
endpoints.
The following terms are key to understanding how Fluent Bit processes metrics:
Record: a single message collected from a source, such as a single long line in a file.
Chunk: log records ingested and stored by Fluent Bit input plugin instances. A batch of records in a chunk are tracked together as a single unit.
The Fluent Bit engine attempts to fit records into chunks of at most 2 MB
, but the size can vary at runtime. Chunks are then sent to an output. An output plugin instance can either successfully send the full chunk to the destination and mark it as successful, or it can fail the chunk entirely if an unrecoverable error is encountered, or it can ask for the chunk to be retried.
Storage layer
The following are detailed descriptions for the metrics collected by the storage layer.
Query the service uptime with the following command:
The command prints a similar output like this:
Query internal metrics in JSON format with the following command:
The command prints a similar output like this:
Query internal metrics in Prometheus Text 0.0.4 format:
This command returns the same metrics in Prometheus format instead of JSON:
By default, plugins configured at runtime get an internal name in the format plugin_name.ID. For monitoring purposes, this can be confusing if many plugins of the same type were configured. To distinguish them, each configured input or output section can be given an alias that is used as the parent name for the metric.
. For monitoring purposes, this can be confusing if many plugins of the same type were configured. To make a distinction each configured input or output section can get an alias that will be used as the parent name for the metric.
When querying the related metrics, the aliases are returned instead of the plugin name:
You can create Grafana dashboards and alerts using Fluent Bit's exposed Prometheus style metrics.
Fluent Bit supports four configuration options to set up health checks.
Not every error log counts toward the health status. Errors and retry failures are counted only for specific cases, as described in the configuration table.
Based on the HC_Period setting, if the real error count is over HC_Errors_Count, or the retry failure count is over HC_Retry_Failure_Count, Fluent Bit is considered unhealthy. The health endpoint returns an HTTP status 500 and an error message. Otherwise, the endpoint returns HTTP status 200 and an ok message.
The equation to calculate this behavior is:
HC_Errors_Count and HC_Retry_Failure_Count apply only to output plugins and are summed across errors and retry failures from all running output plugins.
The following configuration file example shows how to define these settings:
Use the following command to call the health endpoint:
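For example, assuming the HTTP server is listening on the default port 2020:

```shell
curl -s http://127.0.0.1:2020/api/v1/health
```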
With the example config, the health status is determined by the following equation:
If this equation evaluates to TRUE
, then Fluent Bit is unhealthy.
If this equation evaluates to FALSE
, then Fluent Bit is healthy.
In an ideal world, applications would log their messages in a single line, but in reality applications generate multiple log messages that sometimes belong to the same context. When it's time to process such information, it gets really complex. Consider application stack traces, which always span multiple log lines.
Starting from Fluent Bit v1.8, we have implemented a unified Multiline core functionality to solve all the user corner cases. In this section, you will learn about the features and configuration options available.
The Multiline parser engine exposes two ways to configure and use the functionality:
Built-in multiline parser
Configurable multiline parser
Without any extra configuration, Fluent Bit exposes certain pre-configured parsers (built-in) to solve specific multiline parser cases, e.g:
Parser | Description |
---|
Besides the built-in parsers listed above, it's possible to define your own multiline parsers with their own rules through the configuration files.
A multiline parser is defined in a parsers configuration file by using a [MULTILINE_PARSER]
section definition. The Multiline parser must have a unique name and a type plus other configured properties associated with each type.
To understand which multiline parser type is required for your use case, you have to know beforehand what conditions in the content determine the beginning of a multiline message and the continuation of subsequent lines. We provide a regex-based configuration that supports states to handle everything from the simplest to the most difficult cases.
Before you start configuring your parser, you need to know the answers to the following questions:
What is the regular expression (regex) that matches the first line of a multiline message?
What are the regular expressions (regex) that match the continuation lines of a multiline message?
When matching with regexes, we have to define states. Some states define the start of a multiline message, while others are states for the continuation of multiline messages. You can have multiple continuation state definitions to handle complex cases.
The first regex that matches the start of a multiline message is called start_state; other regexes for continuation lines can have different state names.
A rule specifies how to match a multiline pattern and perform the concatenation. A rule is defined by 3 specific components:
state name
regular expression pattern
next state
A rule might be defined as follows (comments added to simplify the definition) :
In the example above, we have defined two rules; each one has its own state name, regex pattern, and next state name. Every field that composes a rule must be inside double quotes.
The state name of the first rule must always be start_state, and its regex pattern must match the first line of a multiline message. A next state must also be set to specify what the possible continuation lines would look like.
The following example provides a full Fluent Bit configuration file for multiline parsing by using the definition explained above.
Example files content:
This is the primary Fluent Bit configuration file. It includes the parsers_multiline.conf
and tails the file test.log
by applying the multiline parser multiline-regex-test
. Then it sends the processing to the standard output.
This second file defines a multiline parser for the example.
An example file with multiline content:
By running Fluent Bit with the given configuration file you will obtain:
The lines that did not match a pattern are not considered as part of the multiline message, while the ones that matched the rules were concatenated properly.
The multiline parser is a very powerful feature, but it has some limitations that you should be aware of:
The multiline parser is not affected by the buffer_max_size
configuration option, allowing the composed log record to grow beyond this size. Hence, the skip_long_lines
option will not be applied to multiline messages.
It is not possible to get the time key from the body of the multiline message. However, it can be extracted and set as a new key by using a filter.
Fluent Bit supports the /pat/m regex option, which allows . to match a new line. It's useful for parsing multiline logs.
The following example gets the date and message fields from a concatenated log.
Example files content:
This is the primary Fluent Bit configuration file. It includes the parsers_multiline.conf
and tails the file test.log
by applying the multiline parser multiline-regex-test
. It also parses concatenated log by applying parser named-capture-test
. Then it sends the processing to the standard output.
This second file defines a multiline parser for the example.
An example file with multiline content:
By running Fluent Bit with the given configuration file you will obtain:
Configuration files must be flexible enough for any deployment need, but they must keep a clean and readable format.
Fluent Bit commands extend a configuration file with specific built-in features. The commands available as of the Fluent Bit 0.12 series are:
Command | Prototype | Description |
---|
Configuring a logging pipeline might lead to an extensive configuration file. In order to maintain a human-readable configuration, it's suggested to split the configuration in multiple files.
The @INCLUDE command allows the configuration reader to include an external configuration file, e.g:
The above example defines the main service configuration file and also includes two files to continue the configuration:
Note that despite the order of inclusion, Fluent Bit will ALWAYS respect the following order:
Service
Inputs
Filters
Outputs
The @SET command can only be used at root level of each line, meaning it cannot be used inside a section, e.g:
Fluent Bit is a powerful log processing tool that supports multiple sources and formats. In addition, it provides filters that can be used to perform custom modifications. As your pipeline grows, it's important to validate your data and structure.
Fluent Bit users are encouraged to integrate data validation in their continuous integration (CI) systems.
In a normal production environment, inputs, filters, and outputs are defined in the configuration. Fluent Bit provides the expect filter, which can be used to validate keys and values from your records and take action when an exception is found.
A simplified view of the data processing pipeline is as follows:
Consider the following pipeline, where your source of data is a file with JSON content and two filters:
A filter to exclude certain records.
A filter to alter the record content by adding and removing specific keys.
Add data validation between each step to ensure your data structure is correct.
This example uses the expect
filter.
Expect
filters set rules aiming to validate criteria like:
Does the record contain a key A?
Does the record not contain key A?
Does the record key A value equal NULL?
Is the record key A value not NULL?
Does the record key A value equal B?
Consider a JSON file data.log with the following content:

The following Fluent Bit configuration file configures a pipeline to consume the log, while applying an expect filter to validate that the keys color and label exist:
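A sketch of such a pipeline, assuming the expect filter's key_exists rule and exit action:

```
[SERVICE]
    flush         1
    log_level     info
    parsers_file  parsers.conf

[INPUT]
    name          tail
    path          ./data.log
    parser        json
    exit_on_eof   on

[FILTER]
    name          expect
    match         *
    key_exists    color
    key_exists    label
    action        exit

[OUTPUT]
    name          stdout
    match         *
```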
If the JSON parser fails or is missing in the tail input (parser json), the expect filter triggers the exit action.
To extend the pipeline, add a grep filter to match records whose label map contains a key called name with the value abc, and add an expect filter to re-validate that condition:
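A sketch of the two additional filters, assuming both grep and expect accept a record accessor pattern such as $label['name'] (verify against your Fluent Bit version):

```
[FILTER]
    name        grep
    match       *
    regex       $label['name'] abc

[FILTER]
    name        expect
    match       *
    key_val_eq  $label['name'] abc
    action      exit
```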
When deploying in production, consider removing the expect filters from your configuration. These filters are unnecessary unless you need 100% coverage of checks at runtime.
Tap can be used to generate events or records detailing what messages pass through Fluent Bit, at what time and what filters affect them.
Ensure that the container image supports Fluent Bit Tap (available in Fluent Bit 2.0+):
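One way to check, assuming you run the official container image, is to look for the option in the help output:

```shell
docker run --rm -ti fluent/fluent-bit:latest --help | grep chunk-trace
```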
If the --enable-chunk-trace option is present, your Fluent Bit version supports Fluent Bit Tap, but it's disabled by default. Use this option to enable it.
You can start Fluent Bit with tracing activated from the beginning by using the trace-input and trace-output properties:
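For example, a sketch using the dummy input and stdout output:

```shell
fluent-bit -Z -i dummy -o stdout -f 1 --trace-input=dummy.0 --trace-output=stdout
```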
The following warning indicates the -Z or --enable-chunk-trace option is missing:
Set properties for the output using the --trace-output-property option:

With that option set, the stdout plugin emits traces in json_lines format:
All three options can also be defined using the more flexible --trace option:

This example defines the Tap pipeline using the configuration input=dummy.0 output=stdout output.format=json_lines, which defines the following:

- input: dummy.0 listens to the tag or alias dummy.0.
- output: stdout outputs to a stdout plugin.
- output.format: json_lines sets the stdout format to json_lines.
Tap support can also be activated and deactivated using the embedded web server:
In another terminal, activate Tap by either using the instance id of the input (dummy.0) or its alias. The alias is more predictable, and is used here:
This response means Tap is active. The terminal with Fluent Bit running should now look like this:
All the records that display are those emitted by the activities of the dummy plugin.
This example takes the same steps but demonstrates how the mechanism works with more complicated configurations. It follows a single input, out of many, which passes through several filters. To ensure the window isn't cluttered by the records generated by the other input plugins, send all of their records to the null output.
Activate with the following curl command:
You should start seeing output similar to the following:
When activating Tap, any plugin parameter can be given. These parameters can be used to modify the output format, the name of the time key, the format of the date, and other details.
The following example uses the parameter "format": "json" to demonstrate how to show stdout in JSON format.
First, run Fluent Bit enabling Tap:
In another terminal, activate Tap, including the output (stdout) and the desired parameters ("format": "json"):
In the first terminal, you should see the output similar to the following:
This parameter shows stdout in JSON format.
This filter record is an example to explain the details of a Tap record:

- type: Defines the stage at which the event is generated:
  - 1: Input record. This is the unadulterated input record.
  - 2: Filtered record. This is a record after it was filtered. One record is generated per filter.
  - 3: Pre-output record. This is the record right before it's sent for output.

  This example is a record generated by the manipulation of a record by a filter, so it has type 2.
- start_time and end_time: Record the start and end of an event, and differ by event type:
  - type 1: The time when the input is received, for both the start and end time.
  - type 2: The time when filtering is matched until it has finished processing.
  - type 3: The time when the input is received and when it's finally slated for output.
- trace_id: A string composed of a prefix and a number which is incremented with each record received by the input during the Tap session.
- plugin_instance: The plugin instance name as generated by Fluent Bit at runtime.
- plugin_alias: If an alias is set, this field contains the alias set for the plugin.
- records: An array of all the records being sent. Fluent Bit handles records in chunks of multiple records; chunks are indivisible, and the same applies to the Tap output. Each record consists of its timestamp followed by the actual data, which is a composite type of keys and values.
Fluent Bit v1.4 introduced the Dump Internals feature, which can be triggered from the command line by sending the CONT Unix signal.
This feature is only available on Linux and BSD operating systems.
Run the following kill command to signal Fluent Bit:
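For example, combining kill with pidof:

```shell
kill -CONT $(pidof fluent-bit)
```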
The pidof command identifies the process ID of Fluent Bit.
Fluent Bit will dump the following information to the standard output interface (stdout):
The input plugins dump provides insights for every input instance configured.
Overall ingestion status of the plugin.
When an input plugin ingests data into the engine, a Chunk is created. A Chunk can contain multiple records. At flush time, the engine creates a Task that contains the routes for the Chunk in question.
The Task dump describes the tasks associated to the input plugin:
The Chunks dump gives more details about all the chunks that the input plugin has generated and that are still being processed. Depending on the buffering strategy and the limits imposed by configuration, some Chunks might be up (in memory) or down (filesystem).
Fluent Bit relies on a custom storage layer interface designed for hybrid buffering. The Storage Layer entry contains a total summary of the Chunks registered by Fluent Bit:
The end goal of Fluent Bit is to collect, parse, filter, and ship logs to a central place. In this workflow there are many phases, and one of the critical pieces is the ability to do buffering: a mechanism to place processed data into a temporary location until it's ready to be shipped.
By default, when Fluent Bit processes data it uses memory as a primary and temporary place to store the records, but there are certain scenarios where it would be ideal to have a persistent buffering mechanism based on the filesystem to provide aggregation and data safety capabilities.

Choosing the right configuration is critical, and the behavior of the service can be conditioned by the backpressure settings. Before jumping into the configuration, let's make sure we understand the relationship between Chunks, Memory, Filesystem, and Backpressure.
Understanding the chunks, buffering and backpressure concepts is critical for a proper configuration. Let's do a recap of the meaning of these concepts.
When an input plugin (source) emits records, the engine groups the records together in a Chunk. A Chunk size usually is around 2MB. By configuration, the engine decides where to place this Chunk, the default is that all chunks are created only in memory.
There are two scenarios where Fluent Bit marks chunks as irrecoverable:

- When Fluent Bit encounters a bad layout in a chunk. A bad layout is a chunk that doesn't conform to the expected format.
- When Fluent Bit encounters an incorrect or invalid chunk header size.

In both scenarios Fluent Bit will log an error message and then discard the irrecoverable chunks.
As mentioned above, the Chunks generated by the engine are placed in memory but this is configurable.
If memory is the only mechanism set for the input plugin, it will store as much data as it can in memory. This is the fastest mechanism with the least system overhead, but if the service is not able to deliver the records fast enough because of a slow network or an unresponsive remote service, Fluent Bit memory usage will increase since it accumulates more data than it can deliver.
In a high-load environment with backpressure, the risk of high memory usage is the chance of getting killed by the kernel (OOM Killer). A workaround for this backpressure scenario is to limit the amount of memory in records that an input plugin can register, through the configuration property mem_buf_limit. If a plugin has enqueued more than its mem_buf_limit, it won't be able to ingest more until that data can be delivered or flushed properly. In this scenario the input plugin in question is paused. When the input is paused, records will not be ingested until it's resumed. For some inputs, such as TCP and tail, pausing the input will almost certainly lead to log loss. For the tail input, Fluent Bit can save its current offset in the file it's reading and pick back up when the input is resumed.
Look for messages in the Fluent Bit log output like:
The mem_buf_limit workaround is good for certain scenarios and environments. It helps to control the memory usage of the service, but at the cost that if a file gets rotated while the input is paused, you might lose that data, since the input won't be able to register new records. This can happen with any input source plugin. The goal of mem_buf_limit is memory control and survival of the service.
For full data safety guarantee, use filesystem buffering.
Here is an example input definition:
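A sketch of a tail input with a 50MB memory buffer limit; the tag and path are illustrative:

```
[INPUT]
    name           tail
    tag            my_logs
    path           /var/log/my-app.log
    mem_buf_limit  50MB
```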
If this input uses more than 50MB memory to buffer logs, you will get a warning like this in the Fluent Bit logs:
Mem_Buf_Limit applies only when storage.type is set to the default value of memory. The following section explains the applicable limits when you enable storage.type filesystem.
Enabling filesystem buffering helps with backpressure and overall memory control.
Behind the scenes, memory and filesystem buffering mechanisms are not mutually exclusive. When enabling filesystem buffering for your input plugin (source), you get the best of both worlds: performance and data safety.

How does the filesystem buffering mechanism deal with high memory usage and backpressure? Fluent Bit controls the number of Chunks that are up in memory.
By default, the engine allows a total of 128 Chunks up in memory (considering all Chunks); this value is controlled by the service property storage.max_chunks_up. The active Chunks that are up are the ones ready for delivery and the ones still receiving records. Any other remaining Chunk is in a down state, which means it's only in the filesystem and won't be up in memory unless it's ready to be delivered. Remember, chunks are never much larger than 2 MB, so with the default storage.max_chunks_up value of 128, each input is limited to roughly 256 MB of memory.
If the input plugin has enabled storage.type as filesystem, when reaching the storage.max_chunks_up threshold, instead of the plugin being paused, all new data will go to Chunks that are down in the filesystem. This allows you to control the memory usage of the service and also provides a guarantee that the service won't lose any data. By default, the enforcement of the storage.max_chunks_up limit is best-effort. Fluent Bit can only append new data to chunks that are up; when the limit is reached, chunks will be temporarily brought up in memory to ingest new data, and then put in a down state afterwards. In general, Fluent Bit will work to keep the total number of up chunks at or below storage.max_chunks_up.
If storage.pause_on_chunks_overlimit is enabled (default is off), the input plugin will be paused upon exceeding storage.max_chunks_up. With this option, storage.max_chunks_up becomes a hard limit for the input. When the input is paused, records will not be ingested until it's resumed. For some inputs, such as TCP and tail, pausing the input will almost certainly lead to log loss. For the tail input, Fluent Bit can save its current offset in the file it's reading and pick back up when the input is resumed.
Look for messages in the Fluent Bit log output like:
Limiting Filesystem space for Chunks
Fluent Bit implements the concept of logical queues: based on its Tag, a Chunk can be routed to multiple destinations. Thus, we keep an internal reference from where a Chunk was created and where it needs to go.
It's common to find cases where there are multiple destinations for a Chunk and one of the destinations is slower than the others, or one is generating backpressure while the others aren't. In this scenario, how do we limit the amount of filesystem Chunks that are being logically queued?
Starting from Fluent Bit v1.6, the configuration property storage.total_limit_size for output plugins limits the total size in bytes of chunks that can exist in the filesystem for a certain logical output destination. If one of the destinations reaches the configured storage.total_limit_size, the oldest Chunk from its queue for that logical output destination will be discarded to make room for new data.
The storage layer configuration takes place in three areas:
Service Section
Input Section
Output Section
The Service section configures a global environment for the storage layer, the Input sections define which buffering mechanism to use, and the Output sections define the limits for the logical filesystem queues.

A Service section will look like this:
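A sketch matching the description below:

```
[SERVICE]
    flush                     1
    log_level                 info
    storage.path              /var/log/flb-storage/
    storage.sync              normal
    storage.checksum          off
    storage.backlog.mem_limit 5M
```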
That configuration sets an optional buffering mechanism where the root path for the data is /var/log/flb-storage/. It uses normal synchronization mode, doesn't run a checksum, and uses up to a maximum of 5MB of memory when processing backlog data.
Optionally, any Input plugin can configure its storage preference. The following table describes the options available:

The following example configures a service that offers filesystem buffering capabilities and two Input plugins: the first using filesystem buffering and the second memory only.
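A sketch of such a setup; the cpu and mem input plugins are illustrative choices:

```
[SERVICE]
    flush                     1
    log_level                 info
    storage.path              /var/log/flb-storage/
    storage.sync              normal
    storage.checksum          off
    storage.max_chunks_up     128
    storage.backlog.mem_limit 5M

[INPUT]
    name          cpu
    storage.type  filesystem

[INPUT]
    name          mem
    storage.type  memory
```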
If certain chunks are filesystem-based (storage.type filesystem), it's possible to control the size of the logical queue for an output plugin. The following table describes the options available:
The following example creates records with CPU usage samples in the filesystem, which are then delivered to the Google Stackdriver service while limiting the logical queue (buffering) to 5M:
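A sketch of such a pipeline; the stackdriver output will also need credentials, omitted here:

```
[SERVICE]
    flush         1
    log_level     info
    storage.path  /var/log/flb-storage/

[INPUT]
    name          cpu
    storage.type  filesystem

[OUTPUT]
    name                      stackdriver
    match                     *
    storage.total_limit_size  5M
```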
If Fluent Bit goes offline because of a network issue, it will continue buffering CPU samples, keeping a maximum of 5MB of the newest data.
You may wish to test a logging pipeline locally to observe how it deals with log messages. The following is a walk-through for running Fluent Bit and Elasticsearch locally with Docker Compose, which can serve as an example for testing other plugins locally.
Refer to the to create a configuration to test.
fluent-bit.conf
:
Use Docker Compose to run Fluent Bit (with the configuration file mounted) and Elasticsearch.
docker-compose.yaml
:
To view indexed logs run:
To "start fresh", delete the index by running:
Enable hot reload through SIGHUP signal or an HTTP endpoint
Fluent Bit supports the reloading feature when enabled in the configuration file or on the command line with the -Y or --enable-hot-reload option.
Hot reloading is supported on Linux, macOS, and Windows operating systems.
To get started with reloading over HTTP, enable the HTTP Server in the configuration file:
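A sketch of a service section with the HTTP server and hot reload enabled (key names follow the classic configuration format; verify against your Fluent Bit version):

```
[SERVICE]
    flush        1
    log_level    info
    http_server  on
    http_listen  0.0.0.0
    http_port    2020
    hot_reload   on
```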
After updating the configuration, use one of the following methods to perform a hot reload:
Use the following HTTP endpoints to perform a hot reload:
PUT /api/v2/reload
POST /api/v2/reload
When using curl to reload Fluent Bit, you must specify an empty request body. For example:
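Assuming the HTTP server listens on the default port 2020:

```shell
curl -X POST -d '{}' localhost:2020/api/v2/reload
```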
Hot reloading can also be triggered with the SIGHUP signal. The SIGHUP signal isn't supported on Windows.
Use one of the following methods to confirm the reload occurred.
Obtain a count of hot reloads using the HTTP endpoint:

GET /api/v2/reload

The endpoint returns hot_reload_count as follows:
The default value of the counter is 0
.
The docker events input plugin uses the Docker API to capture server events. A complete list of possible events returned by this plugin can be found in the Docker documentation.
This plugin supports the following configuration parameters:
Key | Description | Default |
---|
In your main configuration file append the following Input & Output sections:
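A minimal sketch:

```
[INPUT]
    name   docker_events

[OUTPUT]
    name   stdout
    match  *
```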
The docker input plugin allows you to collect Docker container metrics such as memory usage and CPU consumption.
The plugin supports the following configuration parameters:
Key | Description | Default |
---|
If you set neither Include nor Exclude, the plugin will try to get metrics from all the running containers.
Here is an example configuration that collects metrics from two docker instances (6bab19c3a0f9 and 14159be4ca2c).
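A sketch of that configuration:

```
[INPUT]
    name     docker
    include  6bab19c3a0f9 14159be4ca2c

[OUTPUT]
    name   stdout
    match  *
```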
This configuration will produce records like the following:
The dummy input plugin generates dummy events. It's useful for testing, debugging, benchmarking, and getting started with Fluent Bit.
The plugin supports the following configuration parameters:
Key | Description | Default |
---|
You can run the plugin from the command line or through the configuration file:
In your main configuration file append the following Input & Output sections:
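A minimal sketch generating a custom JSON message:

```
[INPUT]
    name   dummy
    dummy  {"message": "custom dummy"}

[OUTPUT]
    name   stdout
    match  *
```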
The exec_wasi input plugin allows you to execute a WASM program that targets WASI as an external program and collect event logs from it.
The plugin supports the following configuration parameters:
Key | Description |
---|
Here is a configuration example. in_exec_wasi can handle parsers. To retrieve structured data from the WASM program, you have to create a parsers.conf:

Note that Time_Format should be aligned with the format of the timestamp you're using. This document assumes that the WASM program writes JSON-style strings to stdout.
Then you can reference the above parsers.conf in the main Fluent Bit configuration:
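A sketch of the main configuration; the WASM path, accessible paths, parser name, and tag are illustrative:

```
[SERVICE]
    flush         1
    parsers_file  parsers.conf

[INPUT]
    name              exec_wasi
    tag               exec.wasi.local
    wasi_path         /path/to/wasi_program.wasm
    accessible_paths  .,/path/to/data
    parser            wasi

[OUTPUT]
    name   stdout
    match  *
```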
INSTALLERS | SHA256 CHECKSUMS |
---|---|
Name | Description |
---|---|
Key | Description |
---|---|
Key | Description |
---|---|
Key | Description |
---|---|
Nth retry | Waiting time range (seconds) |
---|---|
Value | Description | |
---|---|---|
See docs for more information.
One example of a plugin that implements these callbacks and keeps state correctly is the tail plugin. When the pause callback triggers, it pauses its collectors and stops appending data. Upon resume, it resumes the collectors and continues ingesting data. Tail tracks the current file offset when it pauses and resumes at the same position. If the file hasn't been deleted or moved, it can still be read.
Property | Description | Default |
---|
URI | Description | Data format |
---|
Metric name | Labels | Description | Type | Unit |
---|
Metric Key | Description | Unit |
---|
Metric Name | Labels | Description | Type | Unit |
---|
Metric Name | Labels | Description | Type | Unit |
---|
The following example sets an alias in the INPUT section of the configuration file, which is using the input plugin:
The provided is heavily inspired by 's with a few key differences, such as the use of the instance
label, stacked graphs, and a focus on Fluent Bit metrics. See for more information.
Sample alerts are available.
Configuration name | Description | Default |
---|
is a hosted service that allows you to monitor your Fluent Bit agents including data flow, metrics, and configurations.
Property | Description | Default |
---|
To simplify the configuration of regular expressions, you can use the Rubular website. We have posted an example using the regex described above plus a log line that matches the pattern:
The following example files can be located at:
Fluent Bit supports configuration variables. One way to expose these variables to Fluent Bit is by setting a shell environment variable; the other is through the @SET command.
Every expect filter configuration exposes rules to validate the content of your records.
See for additional information.
When the service is running, you can export metrics to see the overall status of the data flow of the service. There are other use cases where you might need to know the current status of the service internals, like the current status of the internal buffers. Dump Internals can help provide this information.
Entry | Sub-entry | Description |
---|
Entry | Description |
---|
Entry | Sub-entry | Description |
---|
Entry | Sub-Entry | Description |
---|
When filesystem buffering is enabled, the behavior of the engine is different. Upon Chunk creation, the engine stores the content in memory and also maps a copy on disk (through mmap(2)). The newly created Chunk is (1) active in memory, (2) backed up on disk, and (3) called to be up, which means "the chunk content is up in memory".
The Service section refers to the section defined in the main configuration file:
Key | Description | Default |
---|
Key | Description | Default |
---|
Key | Description | Default |
---|
| Tag | Architectures | Description |
| --- | --- | --- |
| 3.2.1-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.2.1 | x86_64, arm64v8, arm32v7, s390x | Release v3.2.1 |
| 3.1.10-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.1.10 | x86_64, arm64v8, arm32v7, s390x | Release v3.1.10 |
| 3.1.9-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.1.9 | x86_64, arm64v8, arm32v7, s390x | Release v3.1.9 |
| 3.1.8-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.1.8 | x86_64, arm64v8, arm32v7, s390x | Release v3.1.8 |
| 3.1.7-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.1.7 | x86_64, arm64v8, arm32v7, s390x | Release v3.1.7 |
| 3.1.6-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.1.6 | x86_64, arm64v8, arm32v7, s390x | Release v3.1.6 |
| 3.1.5-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.1.5 | x86_64, arm64v8, arm32v7, s390x | Release v3.1.5 |
| 3.1.4-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.1.4 | x86_64, arm64v8, arm32v7, s390x | Release v3.1.4 |
| 3.1.3-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.1.3 | x86_64, arm64v8, arm32v7, s390x | Release v3.1.3 |
| 3.1.2-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.1.2 | x86_64, arm64v8, arm32v7, s390x | Release v3.1.2 |
| 3.1.1-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.1.1 | x86_64, arm64v8, arm32v7, s390x | Release v3.1.1 |
| 3.1.0-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.1.0 | x86_64, arm64v8, arm32v7, s390x | Release v3.1.0 |
| 3.0.7-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.0.7 | x86_64, arm64v8, arm32v7, s390x | Release v3.0.7 |
| 3.0.6-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.0.6 | x86_64, arm64v8, arm32v7, s390x | Release v3.0.6 |
| 3.0.5-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.0.5 | x86_64, arm64v8, arm32v7, s390x | Release v3.0.5 |
| 3.0.4-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.0.4 | x86_64, arm64v8, arm32v7, s390x | Release v3.0.4 |
| 3.0.3-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.0.3 | x86_64, arm64v8, arm32v7, s390x | Release v3.0.3 |
| 3.0.2-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.0.2 | x86_64, arm64v8, arm32v7, s390x | Release v3.0.2 |
| 3.0.1-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.0.1 | x86_64, arm64v8, arm32v7, s390x | Release v3.0.1 |
| 3.0.0-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.0.0 | x86_64, arm64v8, arm32v7, s390x | Release v3.0.0 |
| 2.2.2-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 2.2.2 | x86_64, arm64v8, arm32v7, s390x | Release v2.2.2 |
| 2.2.1-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 2.2.1 | x86_64, arm64v8, arm32v7, s390x | Release v2.2.1 |
| 2.2.0-debug | x86_64, arm64v8, arm32v7 | Debug images |
| 2.2.0 | x86_64, arm64v8, arm32v7 | Release v2.2.0 |
| 2.1.10-debug | x86_64, arm64v8, arm32v7 | Debug images |
| 2.1.10 | x86_64, arm64v8, arm32v7 | Release v2.1.10 |
| 2.1.9-debug | x86_64, arm64v8, arm32v7 | Debug images |
| 2.1.9 | x86_64, arm64v8, arm32v7 | Release v2.1.9 |
| 2.1.8-debug | x86_64, arm64v8, arm32v7 | Debug images |
| 2.1.8 | x86_64, arm64v8, arm32v7 | Release v2.1.8 |
| 2.1.7-debug | x86_64, arm64v8, arm32v7 | Debug images |
| 2.1.7 | x86_64, arm64v8, arm32v7 | Release v2.1.7 |
| 2.1.6-debug | x86_64, arm64v8, arm32v7 | Debug images |
| 2.1.6 | x86_64, arm64v8, arm32v7 | Release v2.1.6 |
| 2.1.5 | x86_64, arm64v8, arm32v7 | Release v2.1.5 |
| 2.1.5-debug | x86_64, arm64v8, arm32v7 | Debug images |
| 2.1.4 | x86_64, arm64v8, arm32v7 | Release v2.1.4 |
| 2.1.4-debug | x86_64, arm64v8, arm32v7 | Debug images |
| 2.1.3 | x86_64, arm64v8, arm32v7 | Release v2.1.3 |
| 2.1.3-debug | x86_64, arm64v8, arm32v7 | Debug images |
| 2.1.2 | x86_64, arm64v8, arm32v7 | Release v2.1.2 |
| 2.1.2-debug | x86_64, arm64v8, arm32v7 | Debug images |
| 2.1.1 | x86_64, arm64v8, arm32v7 | Release v2.1.1 |
| 2.1.1-debug | x86_64, arm64v8, arm32v7 | v2.1.x releases (production + debug) |
| 2.1.0 | x86_64, arm64v8, arm32v7 | Release v2.1.0 |
| 2.1.0-debug | x86_64, arm64v8, arm32v7 | v2.1.x releases (production + debug) |
| 2.0.11 | x86_64, arm64v8, arm32v7 | Release v2.0.11 |
| 2.0.11-debug | x86_64, arm64v8, arm32v7 | v2.0.x releases (production + debug) |
| 2.0.10 | x86_64, arm64v8, arm32v7 | Release v2.0.10 |
| 2.0.10-debug | x86_64, arm64v8, arm32v7 | v2.0.x releases (production + debug) |
| 2.0.9 | x86_64, arm64v8, arm32v7 | Release v2.0.9 |
| 2.0.9-debug | x86_64, arm64v8, arm32v7 | v2.0.x releases (production + debug) |
| 2.0.8 | x86_64, arm64v8, arm32v7 | Release v2.0.8 |
| 2.0.8-debug | x86_64, arm64v8, arm32v7 | v2.0.x releases (production + debug) |
| 2.0.6 | x86_64, arm64v8, arm32v7 | Release v2.0.6 |
| 2.0.6-debug | x86_64, arm64v8, arm32v7 | v2.0.x releases (production + debug) |
| 2.0.5 | x86_64, arm64v8, arm32v7 | Release v2.0.5 |
| 2.0.5-debug | x86_64, arm64v8, arm32v7 | v2.0.x releases (production + debug) |
| 2.0.4 | x86_64, arm64v8, arm32v7 | Release v2.0.4 |
| 2.0.4-debug | x86_64, arm64v8, arm32v7 | v2.0.x releases (production + debug) |
| 2.0.3 | x86_64, arm64v8, arm32v7 | Release v2.0.3 |
| 2.0.3-debug | x86_64, arm64v8, arm32v7 | v2.0.x releases (production + debug) |
| 2.0.2 | x86_64, arm64v8, arm32v7 | Release v2.0.2 |
| 2.0.2-debug | x86_64, arm64v8, arm32v7 | v2.0.x releases (production + debug) |
| 2.0.1 | x86_64, arm64v8, arm32v7 | Release v2.0.1 |
| 2.0.1-debug | x86_64, arm64v8, arm32v7 | v2.0.x releases (production + debug) |
| 2.0.0 | x86_64, arm64v8, arm32v7 | Release v2.0.0 |
| 2.0.0-debug | x86_64, arm64v8, arm32v7 | v2.0.x releases (production + debug) |
| 1.9.9 | x86_64, arm64v8, arm32v7 | Release v1.9.9 |
| 1.9.9-debug | x86_64, arm64v8, arm32v7 | v1.9.x releases (production + debug) |
| 1.9.8 | x86_64, arm64v8, arm32v7 | Release v1.9.8 |
| 1.9.8-debug | x86_64, arm64v8, arm32v7 | v1.9.x releases (production + debug) |
| 1.9.7 | x86_64, arm64v8, arm32v7 | Release v1.9.7 |
| 1.9.7-debug | x86_64, arm64v8, arm32v7 | v1.9.x releases (production + debug) |
| 1.9.6 | x86_64, arm64v8, arm32v7 | Release v1.9.6 |
| 1.9.6-debug | x86_64, arm64v8, arm32v7 | v1.9.x releases (production + debug) |
| 1.9.5 | x86_64, arm64v8, arm32v7 | Release v1.9.5 |
| 1.9.5-debug | x86_64, arm64v8, arm32v7 | v1.9.x releases (production + debug) |
| 1.9.4 | x86_64, arm64v8, arm32v7 | Release v1.9.4 |
| 1.9.4-debug | x86_64, arm64v8, arm32v7 | v1.9.x releases (production + debug) |
| 1.9.3 | x86_64, arm64v8, arm32v7 | Release v1.9.3 |
| 1.9.3-debug | x86_64, arm64v8, arm32v7 | v1.9.x releases (production + debug) |
| 1.9.2 | x86_64, arm64v8, arm32v7 | Release v1.9.2 |
| 1.9.2-debug | x86_64, arm64v8, arm32v7 | v1.9.x releases (production + debug) |
| 1.9.1 | x86_64, arm64v8, arm32v7 | Release v1.9.1 |
| 1.9.1-debug | x86_64, arm64v8, arm32v7 | v1.9.x releases (production + debug) |
| 1.9.0 | x86_64, arm64v8, arm32v7 | Release v1.9.0 |
| 1.9.0-debug | x86_64, arm64v8, arm32v7 | v1.9.x releases (production + debug) |
| Section | Description |
| --- | --- |
| service | Describes the global configuration for the Fluent Bit service. This section is optional; if not set, default values will apply. Only one service section can be defined. |
| parsers | Lists parsers to be used by components like inputs, processors, filters, or output plugins. You can define multiple parsers sections, which can also be loaded from external files included in the main YAML configuration. |
| multiline_parsers | Lists multiline parsers, functioning similarly to parsers. Multiple definitions can exist either in the root or in included files. |
| pipeline | Defines a pipeline composed of inputs, processors, filters, and output plugins. You can define multiple pipeline sections, but they will not operate independently. Instead, all components will be merged into a single pipeline internally. |
| plugins | Specifies the path to external plugins (.so files) to be loaded by Fluent Bit at runtime. |
| upstream_servers | Refers to a group of node endpoints that can be referenced by output plugins that support this feature. |
| env | Sets a list of environment variables for Fluent Bit. Note that system environment variables are available, while the ones defined in the configuration apply only to Fluent Bit. |
devel
Build Fluent Bit from GIT master. This recipe aims to be used for development and testing purposes only.
v1.8.11
Build latest stable version of Fluent Bit.
flush
Sets the flush time in seconds.nanoseconds
. The engine loop uses a flush timeout to determine when to flush records ingested by input plugins to output plugins.
1
grace
Sets the grace time in seconds
as an integer value. The engine loop uses a grace timeout to define the wait time before exiting.
5
daemon
Boolean. Specifies whether Fluent Bit should run as a daemon (background process). Allowed values are: yes
, no
, on
, and off
. Do not enable when using a Systemd-based unit, such as the one provided in Fluent Bit packages.
off
dns.mode
Sets the primary transport layer protocol used by the asynchronous DNS resolver. Can be overridden on a per-plugin basis.
UDP
log_file
Absolute path for an optional log file. By default, all logs are redirected to the standard error interface (stderr).
none
log_level
Sets the logging verbosity level. Allowed values are: off
, error
, warn
, info
, debug
, and trace
. Values are cumulative. If debug
is set, it will include error
, warn
, info
, and debug
. Trace mode is only available if Fluent Bit was built with the WITH_TRACE
option enabled.
info
parsers_file
Path for a parsers
configuration file. Multiple parsers_file
entries can be defined within the section. However, with the new YAML configuration schema, defining parsers using this key is now optional. Parsers can be declared directly in the parsers
section of your YAML configuration, offering a more streamlined and integrated approach.
none
plugins_file
Path for a plugins
configuration file. This file specifies the paths to external plugins (.so files) that Fluent Bit can load at runtime. With the new YAML schema, the plugins_file
key is optional. External plugins can now be referenced directly within the plugins
section, simplifying the plugin management process. See an example.
none
streams_file
Path for the Stream Processor configuration file. This file defines the rules and operations for stream processing within Fluent Bit. The streams_file
key is optional, as Stream Processor configurations can be defined directly in the streams
section of the YAML schema. This flexibility allows for easier and more centralized configuration. Learn more about Stream Processing configuration.
none
http_server
Enables the built-in HTTP Server.
off
http_listen
Sets the listening interface for the HTTP Server when it's enabled.
0.0.0.0
http_port
Sets the TCP port for the HTTP Server.
2020
coro_stack_size
Sets the coroutine stack size in bytes. The value must be greater than the page size of the running system. Setting the value too small (4096
) can cause coroutine threads to overrun the stack buffer. The default value of this parameter should not be changed.
24576
scheduler.cap
Sets a maximum retry time in seconds. Supported in v1.8.7 and greater.
2000
scheduler.base
Sets the base of exponential backoff. Supported in v1.8.7 and greater.
5
json.convert_nan_to_null
If enabled, NaN
is converted to null
when Fluent Bit converts msgpack
to json
.
false
sp.convert_from_str_to_num
If enabled, the Stream Processor converts strings that represent numbers to a numeric type.
true
inputs
Specifies the name of the plugin responsible for collecting or receiving data. This component serves as the data source in the pipeline. Examples of input plugins include tail
, http
, and random
.
processors
Unique to YAML configuration, processors are specialized plugins that handle data processing directly attached to input plugins. Unlike filters, processors are not dependent on tag or matching rules. Instead, they work closely with the input to modify or enrich the data before it reaches the filtering or output stages. Processors are defined within an input plugin section.
filters
Filters are used to transform, enrich, or discard events based on specific criteria. They allow matching tags using strings or regular expressions, providing a more flexible way to manipulate data. Filters run as part of the main event loop and can be applied across multiple inputs and filters. Examples of filters include modify
, grep
, and nest
.
outputs
Defines the destination for processed data. Outputs specify where the data will be sent, such as to a remote server, a file, or another service. Each output plugin is configured with matching rules to determine which events are sent to that destination. Common output plugins include stdout
, elasticsearch
, and kafka
.
Name
Name of the input plugin.
Tag
Tag name associated to all records coming from this plugin.
Log_Level
Set the plugin's logging verbosity level. Allowed values are: off
, error
, warn
, info
, debug
, and trace
. Defaults to the SERVICE
section's Log_Level
.
Name
Name of the filter plugin.
Match
A pattern to match against the tags of incoming records. Case sensitive, supports asterisk (*
) as a wildcard.
Match_Regex
A regular expression to match against the tags of incoming records. Use this option if you want to use the full regular expression syntax.
Log_Level
Set the plugin's logging verbosity level. Allowed values are: off
, error
, warn
, info
, debug
, and trace
. Defaults to the SERVICE
section's Log_Level
.
Name
Name of the output plugin.
Match
A pattern to match against the tags of incoming records. Case sensitive and supports the asterisk (*
) character as a wildcard.
Match_Regex
A regular expression to match against the tags of incoming records. Use this option if you want to use the full regular expression syntax.
Log_Level
Set the plugin's logging verbosity level. Allowed values are: off
, error
, warn
, info
, debug
, and trace
. Defaults to the SERVICE
section's Log_Level
.
When a suffix is not specified, it's assumed that the value given is a bytes representation.
Specifying a value of 32000, means 32000 bytes
k, K, KB, kb
Kilobyte: a unit of memory equal to 1,000 bytes.
32k means 32000 bytes.
m, M, MB, mb
Megabyte: a unit of memory equal to 1,000,000 bytes
1M means 1000000 bytes
g, G, GB, gb
Gigabyte: a unit of memory equal to 1,000,000,000 bytes
1G means 1000000000 bytes
${HOSTNAME}
The system’s hostname.
tls
enable or disable TLS support
Off
tls.verify
force certificate validation
On
tls.verify_hostname
force TLS verification of hostnames
Off
tls.debug
Set TLS debug verbosity level. It accept the following values: 0 (No debug), 1 (Error), 2 (State change), 3 (Informational) and 4 Verbose
1
tls.ca_file
absolute path to CA certificate file
tls.ca_path
absolute path to scan for certificate files
tls.crt_file
absolute path to Certificate file
tls.key_file
absolute path to private Key file
tls.key_passwd
optional password for tls.key_file file
tls.vhost
hostname to be used for TLS SNI extension
$log
"some message"
$labels['color']
"blue"
$labels['project']['env']
"production"
$labels['unset']
null
$labels['undefined']
UPSTREAM
name
Defines a name for the Upstream in question.
NODE
name
Defines a name for the Node in question.
host
IP address or hostname of the target host.
port
TCP port of the target service.
1
(3, 6)
2
(3, 12)
3
(3, 24)
4
(3, 30)
Retry_Limit
N
Integer value to set the maximum number of retries allowed. N must be >= 1 (default: 1
)
Retry_Limit
no_limits
or False
When set there no limit for the number of retries that the scheduler can do.
Retry_Limit
no_retries
When set, retries are disabled and scheduler doesn't try to send data to the destination if it failed the first time.
| Set maximum time expressed in seconds to wait for a TCP connection to be established, including the TLS handshake time. |
|
| On connection timeout, specify if it should log an error. When disabled, the timeout is logged as a debug message. |
|
| Select the primary DNS connection type (TCP or UDP). Can be set in the | none |
| Prioritize IPv4 DNS results when trying to establish a connection. |
|
| Select the primary DNS resolver type ( | none |
| Enable or disable connection keepalive support. Accepts a Boolean value: |
|
| Set maximum time expressed in seconds for an idle keepalive connection. |
|
| Set maximum number of times a keepalive connection can be used before it's retired. |
|
| Set maximum number of TCP connections that can be established per worker. |
|
| Specify network address to bind for data traffic. | none |
| name: the name or alias for the input instance | The number of bytes of log records that this input instance has ingested successfully. | counter | bytes |
| name: the name or alias for the input instance | The number of log records this input ingested successfully. | counter | records |
| name: the name or alias for the output instance | The number of log records dropped by the output. These records hit an unrecoverable error or retries expired for their chunk. | counter | records |
| name: the name or alias for the output instance | The number of chunks with an error that's either unrecoverable or unable to retry. This metric represents the number of times a chunk failed, and doesn't correspond with the number of error messages visible in the Fluent Bit log output. | counter | chunks |
| name: the name or alias for the output instance | The number of bytes of log records that this output instance sent successfully. This metric represents the total byte size of all unique chunks sent by this output. If a record is not sent due to some error, it doesn't count towards this metric. | counter | bytes |
| name: the name or alias for the output instance | The number of log records that this output instance sent successfully. This metric represents the total record count of all unique chunks sent by this output. If a record is not sent successfully, it doesn't count towards this metric. | counter | records |
| name: the name or alias for the output instance | The number of log records that experienced a retry. This metric is calculated at the chunk level, the count increased when an entire chunk is marked for retry. An output plugin might perform multiple actions that generate many error messages when uploading a single chunk. | counter | records |
| name: the name or alias for the output instance | The number of times that retries expired for a chunk. Each plugin configures a | counter | chunks |
| name: the name or alias for the output instance | The number of times this output instance requested a retry for a chunk. | counter | chunks |
| The number of seconds that Fluent Bit has been running. | counter | seconds |
| The Unix Epoch timestamp for when Fluent Bit started. | gauge | seconds |
| The total number of chunks of records that Fluent Bit is currently buffering. | chunks |
| The total number of chunks that are currently buffered in memory. Chunks can be both in memory and on the file system at the same time. | chunks |
| The total number of chunks saved to the filesystem. | chunks |
| The count of chunks that are both in file system and in memory. | chunks |
| The count of chunks that are only in the file system. | chunks |
| Indicates whether the input instance exceeded its configured | boolean |
| The size of memory that this input is consuming to buffer logs in chunks. | bytes |
| The buffer memory limit ( | bytes |
| The current total number of chunks owned by this input instance. | chunks |
| The current number of chunks that are in memory for this input. If file system storage is enabled, chunks that are "up" are also stored in the filesystem layer. | chunks |
| The current number of chunks that are "down" in the filesystem for this input. | chunks |
| Chunks are that are being processed or sent by outputs and are not eligible to have new data appended. | chunks |
| The sum of the byte size of each chunk which is currently marked as busy. | bytes |
| name: the name or alias for the input instance | The number of bytes of log records that this input instance has ingested successfully. | counter | bytes |
| name: the name or alias for the input instance | The number of log records this input ingested successfully. | counter | records |
| name: the name or alias for the filter instance | The number of bytes of log records that this filter instance has ingested successfully. | counter | bytes |
| name: the name or alias for the filter instance | The number of log records this filter has ingested successfully. | counter | records |
| name: the name or alias for the filter instance | The number of log records added by the filter into the data pipeline. | counter | records |
| name: the name or alias for the filter instance | The number of log records dropped by the filter and removed from the data pipeline. | counter | records |
| name: the name or alias for the output instance | The number of log records dropped by the output. These records hit an unrecoverable error or retries expired for their chunk. | counter | records |
| name: the name or alias for the output instance | The number of chunks with an error that's either unrecoverable or unable to retry. This metric represents the number of times a chunk failed, and doesn't correspond with the number of error messages visible in the Fluent Bit log output. | counter | chunks |
| name: the name or alias for the output instance | The number of bytes of log records that this output instance sent successfully. This metric represents the total byte size of all unique chunks sent by this output. If a record is not sent due to some error, it doesn't count towards this metric. | counter | bytes |
| name: the name or alias for the output instance | The number of log records that this output instance sent successfully. This metric represents the total record count of all unique chunks sent by this output. If a record is not sent successfully, it doesn't count towards this metric. | counter | records |
| name: the name or alias for the output instance | The number of log records that experienced a retry. This metric is calculated at the chunk level, the count increased when an entire chunk is marked for retry. An output plugin might perform multiple actions that generate many error messages when uploading a single chunk. | counter | records |
| name: the name or alias for the output instance | The number of times that retries expired for a chunk. Each plugin configures a | counter | chunks |
| name: the name or alias for the output instance | The number of times this output instance requested a retry for a chunk. | counter | chunks |
| hostname: the hostname on running Fluent Bit | The number of seconds that Fluent Bit has been running. | counter | seconds |
| hostname: the hostname on running Fluent Bit | The Unix Epoch time stamp for when Fluent Bit started. | gauge | seconds |
| hostname: the hostname, version: the version of Fluent Bit, os: OS type | Build version information. The returned value is originated from initializing the Unix Epoch time stamp of configuration context. | gauge | seconds |
| hostname: the hostname on running Fluent Bit | Collect the count of hot reloaded times. | gauge | seconds |
| None | The total number of chunks of records that Fluent Bit is currently buffering. | gauge | chunks |
| None | The total number of chunks that are currently buffered in memory. Chunks can be both in memory and on the file system at the same time. | gauge | chunks |
| None | The total number of chunks saved to the file system. | gauge | chunks |
| None | The count of chunks that are both in file system and in memory. | gauge | chunks |
| None | The count of chunks that are only in the file system. | gauge | chunks |
| None | The total number of chunks are in a busy state. | gauge | chunks |
| None | The total bytes of chunks are in a busy state. | gauge | bytes |
| name: the name or alias for the input instance | Indicates whether the input instance exceeded its configured | gauge | boolean |
| name: the name or alias for the input instance | The size of memory that this input is consuming to buffer logs in chunks. | gauge | bytes |
| name: the name or alias for the input instance | The current total number of chunks owned by this input instance. | gauge | chunks |
| name: the name or alias for the input instance | The current number of chunks that are in memory for this input. If file system storage is enabled, chunks that are "up" are also stored in the filesystem layer. | gauge | chunks |
| name: the name or alias for the input instance | The current number of chunks that are "down" in the filesystem for this input. | gauge | chunks |
| name: the name or alias for the input instance | Chunks are that are being processed or sent by outputs and are not eligible to have new data appended. | gauge | chunks |
| name: the name or alias for the input instance | The sum of the byte size of each chunk which is currently marked as busy. | gauge | bytes |
| name: the name or alias for the output instance | The sum of the connection count of each output plugins. | gauge | bytes |
| name: the name or alias for the output instance | The sum of the connection count in a busy state of each output plugins. | gauge | bytes |
| enable Health check feature | Off |
| the error count to meet the unhealthy requirement, this is a sum for all output plugins in a defined HC_Period, example for output error: | 5 |
| the retry failure count to meet the unhealthy requirement, this is a sum for all output plugins in a defined HC_Period, example for retry failure: | 5 |
| The time period by second to count the error and retry failure data point | 60 |
name | Specify a unique name for the Multiline Parser definition. A good practice is to prefix the name with the word |
type | Set the multiline mode, for now, we support the type |
parser | Name of a pre-defined parser that must be applied to the incoming content before applying the regex rule. If no parser is defined, it's assumed that's a raw text and not a structured message. Note: when a parser is applied to a raw text, then the regex is applied against a specific key of the structured message by using the |
key_content | For an incoming structured message, specify the key that contains the data that should be processed by the regular expression and possibly concatenated. |
flush_timeout | Timeout in milliseconds to flush a non-terminated multiline buffer. Default is set to 5 seconds. | 5s |
rule | Configure a rule to match a multiline pattern. The rule has a specific format described below. Multiple rules can be defined. |
| Total number of active tasks associated to data generated by the input plugin. |
| Number of tasks not yet assigned to an output plugin. Tasks are in |
| Number of active tasks being processed by output plugins. |
| Amount of memory used by the Chunks being processed (total chunk size). |
| Total number of Chunks generated by the input plugin that are still being processed by the engine. |
| Total number of Chunks loaded in memory. |
| Total number of Chunks stored in the filesystem but not loaded in memory yet. |
| Chunks marked as busy (being flushed) or locked. Busy Chunks are immutable and likely are ready to be or are being processed. |
| Amount of bytes used by the Chunk. |
| Number of Chunks in an error state where its size couldn't be retrieved. |
| Total number of Chunks. |
| Total number of Chunks memory-based. |
| Total number of Chunks filesystem based. |
| Total number of filesystem chunks up in memory. |
| Total number of filesystem chunks down (not loaded in memory). |
storage.type | Specifies the buffering mechanism to use. It can be memory or filesystem. | memory |
storage.pause_on_chunks_overlimit | Specifies if the input plugin should be paused (stop ingesting new data) when the | off |
storage.total_limit_size | Limit the maximum disk space size in bytes for buffering chunks in the filesystem for the current output logical destination. |
Note: This plugin is experimental and may be unstable. Use it in development or testing environments only, as its features and behavior are subject to change.
The in_ebpf
input plugin is an experimental plugin for Fluent Bit that uses eBPF (extended Berkeley Packet Filter) to capture low-level system events. This plugin allows Fluent Bit to monitor kernel-level activities such as process executions, file accesses, memory allocations, network connections, and signal handling. It provides valuable insights into system behavior for debugging, monitoring, and security analysis.
The in_ebpf
plugin leverages eBPF to trace kernel events in real-time. By specifying trace points, users can collect targeted system-level metrics and events, which can be particularly useful for gaining visibility into operating system interactions and performance characteristics.
To enable in_ebpf, ensure the following dependencies are installed on your system:

- Kernel version: 4.18 or higher with eBPF support enabled.
- Required packages:
  - bpftool: Used to manage and debug eBPF programs.
  - libbpf-dev: Provides the libbpf library for loading and interacting with eBPF programs.
  - CMake 3.13 or higher: Required for building the plugin.
To enable the in_ebpf plugin, follow these steps to build Fluent Bit from source:
Clone the Fluent Bit Repository
Configure the Build with in_ebpf
Create a build directory and run cmake with the -DFLB_IN_EBPF=On flag to enable the in_ebpf plugin:
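A sketch of those steps:

```shell
cd fluent-bit
mkdir -p build && cd build
cmake -DFLB_IN_EBPF=On ..
```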
Compile the Source
Run Fluent Bit
Run Fluent Bit with elevated permissions (e.g., sudo), as loading eBPF programs requires root access or appropriate privileges:
Here's a basic example of how to configure the plugin:
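A sketch, assuming the plugin is referenced by the name used in this section and each trace is enabled with a Trace directive:

```
[INPUT]
    name   in_ebpf
    trace  trace_signal
    trace  trace_malloc
    trace  trace_bind

[OUTPUT]
    name   stdout
    match  *
```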
The configuration above enables tracing for:

- Signal handling events (trace_signal)
- Memory allocation events (trace_malloc)
- Network bind operations (trace_bind)

You can enable multiple traces by adding multiple Trace directives to your configuration. The full list of existing traces can be seen here: Fluent Bit eBPF Traces
/ | Fluent Bit build information. | JSON |
/api/v1/uptime | Return uptime information in seconds. | JSON |
/api/v1/metrics | Display internal metrics per loaded plugin. | JSON |
/api/v1/metrics/prometheus | Display internal metrics per loaded plugin in Prometheus Server format. | Prometheus Text 0.0.4 |
/api/v1/storage | Get internal metrics of the storage layer / buffered data. This option is enabled only if in the | JSON |
/api/v1/health | Display the Fluent Bit health check result. | String |
/api/v2/metrics | Display internal metrics per loaded plugin. |
/api/v2/metrics/prometheus | Display internal metrics per loaded plugin ready in Prometheus Server format. | Prometheus Text 0.0.4 |
/api/v2/reload | JSON |
@INCLUDE FILE | Include a configuration file |
@SET KEY=VAL | Set a configuration variable |
|
| Current memory size in use by the input plugin in-memory. |
| Limit set by |
storage.path | Set an optional location in the file system to store streams and chunks of data. If this parameter is not set, Input plugins can only use in-memory buffering. |
storage.sync | normal |
storage.checksum | Enable the data integrity check when writing and reading data from the filesystem. The storage layer uses the CRC32 algorithm. | Off |
storage.max_chunks_up | If the input plugin has enabled | 128 |
storage.backlog.mem_limit | If storage.path is set, Fluent Bit will look for data chunks that were not delivered and are still in the storage layer, these are called backlog data. Backlog chunks are filesystem chunks that were left over from a previous Fluent Bit run; chunks that could not be sent before exit that Fluent Bit will pick up when restarted. Fluent Bit will check the | 5M |
storage.metrics | off |
storage.delete_irrecoverable_chunks | Off |
Unix_Path | The docker socket unix path | /var/run/docker.sock |
Buffer_Size | The size of the buffer used to read docker events (in bytes) | 8192 |
Parser | Specify the name of a parser to interpret the entry as a structured message. | None |
Key | When a message is unstructured (no parser applied), it's appended as a string under the key name message. | message |
Reconnect.Retry_limits | The maximum number of retries allowed. The plugin tries to reconnect with docker socket when EOF is detected. | 5 |
Reconnect.Retry_interval | The retrying interval. Unit is second. | 1 |
Threaded |
|
Interval_Sec | Polling interval in seconds | 1 |
Include | A space-separated list of containers to include |
Exclude | A space-separated list of containers to exclude |
Threaded |
|
path.containers | Used to specify the container directory if Docker is configured with a custom "data-root" directory. |
|
Dummy | Dummy JSON record. |
|
Metadata | Dummy JSON metadata. |
|
Start_time_sec | Dummy base timestamp, in seconds. |
|
Start_time_nsec | Dummy base timestamp, in nanoseconds. |
|
Rate | Rate at which messages are generated expressed in how many times per second. |
|
Interval_sec | Set time interval, in seconds, at which every message is generated. If set, |
|
Interval_nsec | Set time interval, in nanoseconds, at which every message is generated. If set, |
|
Samples | If set, the events number will be limited. For example, if Samples=3, the plugin generates only three events and stops. | none |
Copies | Number of messages to generate each time they are generated. |
|
Flush_on_startup | If set to |
|
Threaded |
|
WASI_Path | The place of a WASM program file. |
Parser | Specify the name of a parser to interpret the entry as a structured message. |
Accessible_Paths | Specify the whitelist of paths to be able to access paths from WASM programs. |
Interval_Sec | Polling interval (seconds). |
Interval_NSec | Polling interval (nanosecond). |
Wasm_Heap_Size |
Wasm_Stack_Size |
Buf_Size |
Oneshot | Only run once at startup. This allows collection of data precedent to fluent-bit's startup (bool, default: false) |
Threaded |
flush
Set the flush time in seconds.nanoseconds
. The engine loop uses a Flush timeout to define when it's required to flush the records ingested by input plugins through the defined output plugins.
1
grace
Set the grace time in seconds
as an integer value. The engine loop uses a grace timeout to define wait time on exit.
5
daemon
Boolean. Determines whether Fluent Bit should run as a Daemon (background). Allowed values are: yes
, no
, on
, and off
. Don't enable when using a Systemd based unit, such as the one provided in Fluent Bit packages.
Off
dns.mode
Set the primary transport layer protocol used by the asynchronous DNS resolver. Can be overridden on a per plugin basis.
UDP
log_file
Absolute path for an optional log file. By default all logs are redirected to the standard error interface (stderr).
none
log_level
Set the logging verbosity level. Allowed values are: off
, error
, warn
, info
, debug
, and trace
. Values are cumulative. If debug
is set, it will include error
, warning
, info
, and debug
. Trace mode is only available if Fluent Bit was built with the WITH_TRACE
option enabled.
info
parsers_file
Path for a parsers
configuration file. Multiple Parsers_File
entries can be defined within the section.
none
plugins_file
Path for a plugins
configuration file. A plugins
configuration file defines paths for external plugins. See an example.
none
streams_file
Path for the Stream Processor configuration file. Learn more about Stream Processing configuration.
none
http_server
Enable the built-in HTTP Server.
Off
http_listen
Set listening interface for HTTP Server when it's enabled.
0.0.0.0
http_port
Set TCP Port for the HTTP Server.
2020
coro_stack_size
Set the coroutines stack size in bytes. The value must be greater than the page size of the running system. Setting the value too small (4096
) can cause coroutine threads to overrun the stack buffer. The default value of this parameter shouldn't be changed.
24576
scheduler.cap
Set a maximum retry time in seconds. Supported in v1.8.7 and greater.
2000
scheduler.base
Set a base of exponential backoff. Supported in v1.8.7 and greater.
5
json.convert_nan_to_null
If enabled, NaN
converts to null
when Fluent Bit converts msgpack
to json
.
false
sp.convert_from_str_to_num
If enabled, Stream processor converts from number string to number type.
true
scheduler.cap
Set a maximum retry time in seconds. Supported in v1.8.7 or later.
2000
scheduler.base
Set a base of exponential backoff. Supported in v1.8.7 or later.
5
docker | Process a log entry generated by a Docker container engine. This parser supports the concatenation of log entries split by Docker. |
cri | Process a log entry generated by CRI-O container engine. Same as the docker parser, it supports concatenation of log entries |
go | Process log entries generated by a Go based language application and perform concatenation if multiline messages are detected. |
python | Process log entries generated by a Python based language application and perform concatenation if multiline messages are detected. |
java | Process log entries generated by a Google Cloud Java language application and perform concatenation if multiline messages are detected. |
The elasticsearch input plugin handles both Elasticsearch and OpenSearch Bulk API requests.
The plugin supports the following configuration parameters:
Key | Description | Default value |
---|---|---|
Note: Elasticsearch clusters use "sniffing" to optimize the connections between the cluster and its clients: the cluster can dynamically generate and return a list of connection nodes. The hostname configured in this plugin is the value returned as sniffing information, and this is handled by the sniffing endpoint.
To start handling Bulk API requests, you can run the plugin from the command line or through the configuration file:
From the command line you can configure Fluent Bit to handle Bulk API requests with the following options:
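A command-line sketch, assuming the plugin's port property and the commonly used Bulk API port 9200, with the stdout output used only for demonstration:

```
fluent-bit -i elasticsearch -p port=9200 -o stdout
```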
In your main configuration file append the following Input & Output sections:
As described above, the plugin will handle ingested Bulk API requests. For large bulk ingestions, you may have to increase buffer size with buffer_max_size and buffer_chunk_size parameters:
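For example, a configuration sketch with increased buffer sizes; the listen/port values and the 20M/5M sizes are illustrative assumptions:

```
[INPUT]
    Name              elasticsearch
    Listen            0.0.0.0
    Port              9200
    Buffer_Max_Size   20M
    Buffer_Chunk_Size 5M

[OUTPUT]
    Name   stdout
    Match  *
```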
Ingesting from Beats-series agents is also supported. For example, Filebeat, Metricbeat, and Winlogbeat are able to ingest their collected data through this plugin.
Note that Fluent Bit reports its node information as Elasticsearch 8.0.0.
Users therefore have to specify the following settings in their Beats configurations:
For large log ingestion from these Beats agents, users might have to configure rate limiting on the Beats side when Fluent Bit indicates that the application is exceeding the size limit for HTTP requests:
The health input plugin allows you to check how healthy a TCP server is. It performs the check by issuing a TCP connection attempt at a fixed interval.
The plugin supports the following configuration parameters:
Key | Description |
---|---|
In order to start performing the checks, you can run the plugin from the command line or through the configuration file:
From the command line you can let Fluent Bit generate the checks with the following options:
In your main configuration file append the following Input & Output sections:
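A minimal configuration sketch, assuming a local service listening on TCP port 80 and using the stdout output for demonstration:

```
[INPUT]
    Name         health
    Host         127.0.0.1
    Port         80
    Interval_Sec 1

[OUTPUT]
    Name   stdout
    Match  *
```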
Once Fluent Bit is running, you will see some random values in the output interface similar to this:
A plugin to collect Fluent Bit's own metrics
Fluent Bit exposes its own metrics to allow you to monitor the internals of your pipeline. The collected metrics can be processed similarly to those from the Prometheus Node Exporter input plugin. They can be sent to output plugins including Prometheus Exporter, Prometheus Remote Write, or OpenTelemetry.
Important note: Metrics collected with this plugin flow through a separate pipeline from logs, and current filters don't operate on metrics.
Key | Description | Default |
---|---|---|
In the following configuration file, the input plugin fluentbit_metrics collects metrics every 2 seconds and exposes them through our Prometheus Exporter output plugin on HTTP/TCP port 2021.
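A configuration sketch of that pipeline; the tag name is illustrative, and the Host/Port keys of the Prometheus Exporter output are assumptions:

```
[INPUT]
    Name            fluentbit_metrics
    Tag             internal_metrics
    Scrape_Interval 2

[OUTPUT]
    Name   prometheus_exporter
    Match  internal_metrics
    Host   0.0.0.0
    Port   2021
```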
You can verify that the metrics are being exposed by using curl:
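For example, assuming the Prometheus Exporter output from the previous sketch is listening on port 2021:

```
curl http://127.0.0.1:2021/metrics
```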
The disk input plugin gathers information about the disk throughput of the running system at a fixed interval and reports it.
The Disk I/O metrics plugin creates metrics that are log-based, such as JSON payload. For Prometheus-based metrics, see the Node Exporter Metrics input plugin.
The plugin supports the following configuration parameters:
Key | Description | Default |
---|---|---|
In order to get disk usage from your system, you can run the plugin from the command line or through the configuration file:
In your main configuration file append the following Input & Output sections:
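A minimal configuration sketch using the stdout output for demonstration:

```
[INPUT]
    Name          disk
    Tag           disk
    Interval_Sec  1
    Interval_NSec 0

[OUTPUT]
    Name   stdout
    Match  *
```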
Note: Total interval (sec) = Interval_Sec + (Interval_Nsec / 1000000000).
e.g. 1.5s = 1s + 500000000ns
The collectd input plugin allows you to receive datagrams from collectd service.
The plugin supports the following configuration parameters:
Key | Description | Default |
---|---|---|
Here is a basic configuration example.
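A sketch of such a configuration, using the documented defaults and the stdout output for demonstration:

```
[INPUT]
    Name    collectd
    Listen  0.0.0.0
    Port    25826
    TypesDB /usr/share/collectd/types.db

[OUTPUT]
    Name   stdout
    Match  *
```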
With this configuration, Fluent Bit listens to 0.0.0.0:25826
, and outputs incoming datagram packets to stdout.
You must set the same types.db files that your collectd server uses. Otherwise, Fluent Bit may not be able to interpret the payload properly.
Forward is the protocol used by Fluent Bit and Fluentd to route messages between peers. This plugin implements the input service to listen for Forward messages.
The plugin supports the following configuration parameters:
Key | Description | Default |
---|---|---|
In order to receive Forward messages, you can run the plugin from the command line or through the configuration file as shown in the following examples.
From the command line you can let Fluent Bit listen for Forward messages with the following options:
By default the service will listen on all interfaces (0.0.0.0) through TCP port 24224. Optionally you can change this directly, e.g.:
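Two command-line sketches, first with the defaults and then with a custom listener address and port; the stdout output is used only for demonstration:

```
fluent-bit -i forward -o stdout
fluent-bit -i forward -p listen=192.168.3.2 -p port=9090 -o stdout
```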
In that example, Forward messages will only arrive through the network interface bound to the 192.168.3.2 address on TCP port 9090.
In your main configuration file append the following Input & Output sections:
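A minimal configuration sketch with the default listener settings:

```
[INPUT]
    Name   forward
    Listen 0.0.0.0
    Port   24224

[OUTPUT]
    Name   stdout
    Match  *
```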
Since Fluent Bit v3, in_forward can handle the secure forward protocol. To use user-password authentication, specify at least one pair in security.users. To use a shared key, specify shared_key in both the forward output and the forward input. self_hostname can't be specified with the same hostname between the fluent servers and clients.
Once Fluent Bit is running, you can send some messages using the fluent-cat tool (this tool is provided by Fluentd):
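A sketch that sends one JSON record tagged my_tag to the default Forward port on localhost; the record content and the tag are illustrative:

```
echo '{"key 1": 123456789, "key 2": "abcdefg"}' | fluent-cat my_tag
```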
In Fluent Bit we should see the following output:
The cpu input plugin measures the CPU usage of a process or of the whole system by default (considering each CPU core). It reports values as percentages at every configured interval. At the moment this plugin is only available for Linux.
The following table describes the information generated by the plugin. The keys represent data for the overall system, and all values associated with the keys are percentages (0 to 100%):
The CPU metrics plugin creates metrics that are log-based, such as JSON payload. For Prometheus-based metrics, see the Node Exporter Metrics input plugin.
key | description |
---|---|
In addition to the keys reported in the table above, similar content is created per CPU core. The cores are listed from 0 to N as the kernel reports them:
key | description |
---|---|
The plugin supports the following configuration parameters:
Key | Description | Default |
---|---|---|
In order to get the statistics of the CPU usage of your system, you can run the plugin from the command line or through the configuration file:
As described above, the CPU input plugin gathers the overall usage every second and flushes the information to the output every five seconds. In this example the stdout plugin is used to demonstrate the output records. In a real use case you may want to flush this information to a central aggregator such as Fluentd or Elasticsearch.
In your main configuration file append the following Input & Output sections:
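A configuration sketch matching that description (1-second collection, 5-second flush, stdout for demonstration):

```
[SERVICE]
    Flush 5

[INPUT]
    Name         cpu
    Tag          my_cpu
    Interval_Sec 1

[OUTPUT]
    Name   stdout
    Match  *
```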
The HTTP input plugin allows you to send custom records to an HTTP endpoint.
The HTTP input plugin supports TLS/SSL. For more details about the available properties and general configuration, refer to the Transport Security section.
The HTTP input plugin will accept and automatically handle gzipped content as of v2.2.1, as long as the header Content-Encoding: gzip is set on the received data.
The http input plugin allows Fluent Bit to open up an HTTP port that you can then route data to in a dynamic way. This plugin supports dynamic tags, which allow you to send data with different tags through the same input. An example curl message can be seen below.
The tag for the HTTP input plugin is set by adding it to the end of the request URL. This tag is then used to route the event through the system. For example, in the following curl message the tag is set to app.log because the request path ends in /app.log:
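A curl sketch, assuming the plugin is listening on its default port 9880 on localhost; the JSON body is illustrative:

```
curl -d '{"key1":"value1","key2":"value2"}' -XPOST -H "content-type: application/json" http://localhost:9880/app.log
```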
If you don't set a tag, http.0 is automatically used. If you have multiple HTTP inputs, they follow the pattern http.N, where N is an integer representing the input.
The tag_key configuration option lets you specify the key name that will be used to overwrite the tag. The tag's value will be replaced with the value associated with the specified key. For example, if tag_key is set to custom_tag and the log event contains a JSON field with the key custom_tag, Fluent Bit will use that field's value as the new tag for routing the event through the system.
The success_header parameter allows you to set multiple HTTP headers on success. The format is:
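A configuration snippet sketching the key/value format; the second header is purely illustrative:

```
[INPUT]
    Name           http
    Success_Header X-Custom custom-answer
    # A second, purely illustrative header
    Success_Header X-Another another-answer
```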
The head input plugin allows you to read events from the head of a file. Its behavior is similar to the head command.
The plugin supports the following configuration parameters:
Key | Description |
---|---|
This mode is useful to get a specific line. This is an example to get CPU frequency from /proc/cpuinfo.
/proc/cpuinfo is a special file that exposes CPU information.
The CPU frequency appears in a line like "cpu MHz : 2791.009". We can get that line with this configuration file.
Output is
In order to read the head of a file, you can run the plugin from the command line or through the configuration file:
The following example will read events from the /proc/uptime file, tag the records with the uptime name and flush them back to the stdout plugin:
In your main configuration file append the following Input & Output sections:
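A configuration sketch matching the /proc/uptime example; the buffer size is illustrative:

```
[INPUT]
    Name          head
    Tag           uptime
    File          /proc/uptime
    Buf_Size      256
    Interval_Sec  1
    Interval_NSec 0

[OUTPUT]
    Name   stdout
    Match  *
```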
Note: Total interval (sec) = Interval_Sec + (Interval_Nsec / 1000000000).
e.g. 1.5s = 1s + 500000000ns
The exec input plugin allows you to execute an external program and collect its output as event logs.
WARNING: Because this plugin invokes commands via a shell, its inputs are subject to shell metacharacter substitution. Careless use of untrusted input in command arguments could lead to malicious command execution.
This plugin will not function in the distroless production images, as it needs a functional /bin/sh, which is not present. The debug images use the same binaries, so even though they include a shell, this plugin isn't supported there either because it's compiled out.
The plugin supports the following configuration parameters:
Key | Description |
---|---|
You can run the plugin from the command line or through the configuration file:
The following example will read events from the output of ls.
In your main configuration file append the following Input & Output sections:
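A configuration sketch for the ls example, using stdout for demonstration:

```
[INPUT]
    Name          exec
    Tag           exec_ls
    Command       ls
    Interval_Sec  1
    Interval_NSec 0

[OUTPUT]
    Name   stdout
    Match  *
```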
To use fluent-bit with the exec plugin to wrap another command, use the Exit_After_Oneshot and Propagate_Exit_Code options, e.g.:
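A configuration sketch; the wrapped command is a hypothetical placeholder that prints a few lines and then exits with status 1:

```
[INPUT]
    Name                exec
    Tag                 exec_oneshot_demo
    # Hypothetical command: prints three lines, then fails with exit code 1
    Command             for s in 1 2 3; do echo "count: $s"; done; exit 1
    Oneshot             true
    Exit_After_Oneshot  true
    Propagate_Exit_Code true

[OUTPUT]
    Name   stdout
    Match  *
```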
fluent-bit will emit the wrapped command's output as records and then exit with exit code 1.
Translation of command exit code(s) to the fluent-bit exit code follows the usual shell rules for exit code handling. Like with a shell, there is no way to differentiate between the command exiting on a signal and the shell exiting on a signal, and no way to differentiate between normal exits with codes greater than 125 and abnormal or signal exits reported by fluent-bit or the shell. Wrapped commands should use exit codes between 0 and 125 inclusive to allow reliable identification of normal exit. If the command is a pipeline, the exit code will be the exit code of the last command in the pipeline unless overridden by shell options.
By default the exec plugin emits one message per command output line, with a single field exec containing the full message. Use the Parser directive to specify the name of a parser configuration to use to process the command input.
Take great care with shell quoting and escaping when wrapping commands. A wrapper script that interpolates untrusted arguments directly into the Command value can ruin your day if someone passes it the argument $(rm -rf /my/important/files; echo "deleted your stuff!").
Such a script would be safer if it quoted and escaped its arguments before embedding them in the command, but it's generally best to avoid dynamically generating the command or handling untrusted arguments to it at all.
The kmsg input plugin reads the Linux kernel log buffer from the beginning. It gets every record and parses it into the fields priority, sequence, seconds, useconds, and message.
Key | Description | Default |
---|---|---|
In order to start getting the Linux Kernel messages, you can run the plugin from the command line or through the configuration file:
As described above, the plugin processes every message the Linux kernel reports; the example output has been truncated for clarity.
In your main configuration file append the following Input & Output sections:
The mem input plugin gathers information about the memory and swap usage of the running system at a fixed interval and reports the total amount of memory and the amount of free memory available.
In order to get memory and swap usage from your system, you can run the plugin from the command line or through the configuration file:
In your main configuration file append the following Input & Output sections:
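A minimal configuration sketch using stdout for demonstration:

```
[INPUT]
    Name mem
    Tag  memory

[OUTPUT]
    Name   stdout
    Match  *
```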
Collects Kubernetes Events
Kubernetes exports its events through the API server. This input plugin allows you to retrieve those events as logs and get them processed through the pipeline.
Key | Description | Default |
---|---|---|
* As of Fluent Bit 3.1, this plugin uses a Kubernetes watch stream instead of polling, and the interval parameters are used for reconnecting the Kubernetes watch stream.
The Kubernetes service account used by Fluent Bit must have get
, list
, and watch
permissions to namespaces
and pods
for the namespaces watched in the kube_namespace
configuration parameter. If you're using the helm chart to configure Fluent Bit, this role is included.
Event timestamps are created from the first existing field, based on the following order of precedence:
lastTimestamp
firstTimestamp
metadata.creationTimestamp
The netif input plugin gathers network traffic information from the running system at a fixed interval and reports it.
The Network I/O Metrics plugin creates metrics that are log-based, such as JSON payload. For Prometheus-based metrics, see the Node Exporter Metrics input plugin.
The plugin supports the following configuration parameters:
Key | Description | Default |
---|---|---|
In order to monitor network traffic from your system, you can run the plugin from the command line or through the configuration file:
In your main configuration file append the following Input & Output sections:
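A configuration sketch, assuming the interface to monitor is eth0 and using stdout for demonstration:

```
[INPUT]
    Name          netif
    Tag           netif
    Interface     eth0
    Interval_Sec  1
    Interval_NSec 0

[OUTPUT]
    Name   stdout
    Match  *
```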
Note: Total interval (sec) = Interval_Sec + (Interval_Nsec / 1000000000).
e.g. 1.5s = 1s + 500000000ns
The Kafka input plugin allows subscribing to one or more Kafka topics to collect messages from an Apache Kafka service. This plugin uses the official librdkafka client library (built-in dependency).
Key | Description | default |
---|---|---|
In order to subscribe/collect messages from Apache Kafka, you can run the plugin from the command line or through the configuration file:
The kafka plugin can read parameters through the -p argument (property), e.g:
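A command-line sketch; the broker address matches the parameter table below, and the topic name is illustrative:

```
fluent-bit -i kafka -o stdout -p brokers=192.168.1.3:9092 -p topics=some-topic
```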
In your main configuration file append the following Input & Output sections:
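The equivalent configuration-file sketch, again with an illustrative topic name:

```
[INPUT]
    Name    kafka
    Brokers 192.168.1.3:9092
    Topics  some-topic

[OUTPUT]
    Name   stdout
    Match  *
```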
The Fluent Bit source repository contains a full example of using Fluent Bit to process Kafka records:
The above will connect to the broker listening on kafka-broker:9092 and subscribe to the fb-source topic, polling for new messages every 100 milliseconds.
Since the payload will be in JSON format, we ask the plugin to automatically parse the payload with format json.
Every message received is then processed with kafka.lua and sent back to the fb-sink topic of the same broker.
The example can be executed locally with make start in the examples/kafka_filter directory (Docker Compose is used).
The MQTT input plugin allows you to retrieve messages/data from MQTT control packets over a TCP connection. The incoming data to receive must be a JSON map.
The plugin supports the following configuration parameters:
Key | Description | Default |
---|---|---|
In order to start listening for MQTT messages, you can run the plugin from the command line or through the configuration file:
Since the MQTT input plugin lets Fluent Bit behave as a server, we need to dispatch some messages using an MQTT client. In the following example, the mosquitto tool is used for this purpose:
The following command line will send a message to the MQTT input plugin:
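A mosquitto_pub sketch; the topic, payload, and port are illustrative and should match the port your MQTT input is listening on:

```
mosquitto_pub -h 127.0.0.1 -p 1883 -t some/topic -m '{"temperature": 21, "humidity": 50}'
```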
In your main configuration file append the following Input & Output sections:
The NGINX Exporter Metrics input plugin scrapes metrics from the NGINX stub status handler.
The plugin supports the following configuration parameters:
Key | Description | Default |
---|---|---|
NGINX must be configured with a location that invokes the stub status handler. Here is an example configuration with such a location:
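A minimal NGINX server block sketch with such a location:

```
server {
    listen 80;
    # Expose the stub status handler on /status
    location /status {
        stub_status;
    }
}
```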
Another metrics API is available with NGINX Plus. You must first configure a path in NGINX Plus.
From the command line you can let Fluent Bit generate the checks with the following options:
To gather metrics from the command line with the NGINX Plus REST API we need to turn on the nginx_plus property, like so:
In your main configuration file append the following Input & Output sections:
And for NGINX Plus API:
You can quickly test against the NGINX server running on localhost by invoking it directly from the command line:
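For example, assuming the input plugin is named nginx_metrics and NGINX exposes its stub status on localhost as configured above:

```
fluent-bit -i nginx_metrics -p host=127.0.0.1 -p port=80 -p status_url=/status -p nginx_plus=off -o stdout
```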
Note: for the state metric, the string values are converted to float64 using the following rule: "up" -> 1.0, "draining" -> 2.0, "down" -> 3.0, "unavail" -> 4.0, "checking" -> 5.0, "unhealthy" -> 6.0.
Note: for the state metric, the string values are converted to float64 using the following rule: "up" -> 1.0, "down" -> 3.0, "unavail" -> 4.0, "checking" -> 5.0, "unhealthy" -> 6.0.
A plugin based on Prometheus Node Exporter to collect system / host level metrics
Prometheus Node Exporter is a popular way to collect system level metrics from operating systems, such as CPU / Disk / Network / Process statistics. Fluent Bit 1.8.0 includes a node exporter metrics plugin that builds off the Prometheus design to collect system level metrics without having to manage two separate processes or agents.
The initial release of Node Exporter Metrics contains a subset of collectors and metrics available from Prometheus Node Exporter and we plan to expand them over time.
Important note: Metrics collected with Node Exporter Metrics flow through a separate pipeline from logs and current filters do not operate on top of metrics.
This plugin is supported on Linux-based operating systems for the most part with macOS offering a reduced subset of metrics. The table below indicates which collector is supported on macOS.
Key | Description | Default |
---|---|---|
Note: The plugin's top-level scrape_interval setting is the global default, with any custom settings for individual scrape_intervals then overriding just that specific metric scraping interval. Each collector.xxx.scrape_interval option only overrides the interval for that specific collector and updates the associated set of provided metrics.
The overridden intervals only change the collection interval, not the interval for publishing the metrics which is taken from the global setting. For example, if the global interval is set to 5s and an override interval of 60s is used then the published metrics will be reported every 5s but for the specific collector they will stay the same for 60s until it is collected again. This feature aims to help with down-sampling when collecting metrics.
The following table describes the available collectors as part of this plugin. All of them are enabled by default and respect the original metric names, descriptions, and types from Prometheus Node Exporter, so you can use your current dashboards without any compatibility problems.
Note: the Version column specifies the Fluent Bit version where the collector became available.
You can verify that the metrics are being exposed by using curl:
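For example, assuming the Prometheus Exporter output is listening on port 2021:

```
curl http://127.0.0.1:2021/metrics
```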
When deploying Fluent Bit in a container you will need to specify additional settings to ensure that Fluent Bit has access to the host operating system. The following docker command deploys Fluent Bit with specific mount paths and settings enabled to ensure that Fluent Bit can collect from the host. These are then exposed over port 2021.
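A docker run sketch along those lines; the image tag is illustrative, and the host /proc and /sys paths are mounted so the collectors can read them:

```
docker run -ti --rm -v /proc:/host/proc -v /sys:/host/sys -p 2021:2021 \
    fluent/fluent-bit:latest \
    /fluent-bit/bin/fluent-bit \
    -i node_exporter_metrics -p path.procfs=/host/proc -p path.sysfs=/host/sys \
    -o prometheus_exporter -p port=2021
```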
If you like dashboards for monitoring, Grafana is one of the preferred options. In our Fluent Bit source code repository we have pushed a simple docker-compose example. Steps:
Now open your browser at http://127.0.0.1:3000. When asked for the credentials to access Grafana, use the admin username and admin password.
Note that by default the Grafana dashboard plots the data from the last 24 hours, so change it to Last 5 minutes to see the recent data being collected.
Execute hot reloading or get the status of hot reloading. See the .
If the plugin has been configured with a memory buffer limit, this entry reports whether the plugin is over the limit at the moment of the dump: over the limit prints yes, otherwise no.
Configure the synchronization mode used to store the data into the file system. It can take the values normal or full. Using full increases the reliability of the filesystem buffer and ensures that data is guaranteed to be synced to the filesystem even if Fluent Bit crashes. On Linux, full corresponds to the MAP_SYNC option for memory-mapped files.
If the http_server option has been enabled in the main [SERVICE] section, this option registers a new endpoint where internal metrics of the storage layer can be consumed. For more details refer to the Monitoring section.
When enabled, irrecoverable chunks will be deleted during runtime, and any other irrecoverable chunk located in the configured storage path directory will be deleted when Fluent Bit starts.
You can enable the threaded setting to run this input in its own thread.
This input always runs in its own thread.
In the following configuration file, the input plugin kubernetes_events collects events every 5 seconds (default for interval_nsec) and exposes them through the standard output plugin on the console.
This documentation is copied from the on GitHub.
Name | Description | OS | Version |
---|---|---|---|
This input always runs in its own thread.
In the following configuration file, the input plugin node_exporter_metrics collects metrics every 2 seconds and exposes them through our Prometheus Exporter output plugin on HTTP/TCP port 2021.
Our current plugin implements a subset of the available collectors in the original Prometheus Node Exporter. If you would like us to prioritize a specific collector, please open a GitHub issue using the corresponding feature request template.
buffer_max_size | Set the maximum size of buffer. | 4M |
buffer_chunk_size | Set the buffer chunk size. | 512K |
tag_key | Specify a key name for extracting as a tag. | NULL |
meta_key | Specify a key name for meta information. | "@meta" |
hostname | Specify hostname or FQDN. This parameter can be used for "sniffing" (auto-discovery of) cluster node information. | "localhost" |
version | Specify Elasticsearch server version. This parameter is effective for checking the version of the Elasticsearch/OpenSearch server. | "8.0.0" |
threaded | Indicates whether to run this input in its own thread. | false |
Host | Name of the target host or IP address to check. |
Port | TCP port where to perform the connection check. |
Interval_Sec | Interval in seconds between the service checks. Default value is 1. |
Interval_NSec | Specify a nanoseconds interval for service checks; it works in conjunction with the Interval_Sec configuration key. Default value is 0. |
Alert | If enabled, it will only generate messages if the target TCP service is down. By default this option is disabled. |
Add_Host | If enabled, the hostname is appended to each record. Default value is false. |
Add_Port | If enabled, the port number is appended to each record. Default value is false. |
Threaded | Indicates whether to run this input in its own thread. Default: false. |
scrape_interval | The rate at which metrics are collected from the host operating system. | 2 seconds |
scrape_on_start | Scrape metrics upon start, useful to avoid waiting for scrape_interval for the first round of metrics. | false |
threaded | Indicates whether to run this input in its own thread. | false |
Interval_Sec | Polling interval (seconds). | 1 |
Interval_NSec | Polling interval (nanosecond). | 0 |
Dev_Name | Device name to limit the target (e.g. sda). If not set, in_disk gathers information from all disks and partitions. | all disks |
Threaded | Indicates whether to run this input in its own thread. | false |
Listen | Set the address to listen to. | 0.0.0.0 |
Port | Set the port to listen to. | 25826 |
TypesDB | Set the data specification file. | /usr/share/collectd/types.db |
Threaded | Indicates whether to run this input in its own thread. | false |
Listen | Listener network interface. | 0.0.0.0 |
Port | TCP port to listen for incoming connections. | 24224 |
Unix_Path | Specify the path to a unix socket to receive Forward messages. If set, Listen and Port are ignored. |
Unix_Perm | Set the permission of the unix socket file. If Unix_Path is not set, this parameter is ignored. |
Buffer_Max_Size | Specify the maximum buffer memory size used to receive a Forward message. The value must follow the Unit Size specification. | 6144000 |
Buffer_Chunk_Size | By default the buffer that stores incoming Forward messages doesn't allocate the maximum memory allowed; instead it allocates memory as required. The allocation rounds are set by Buffer_Chunk_Size. The value must follow the Unit Size specification. | 1024000 |
Tag_Prefix | Prefix incoming tag with the defined value. |
Tag | Override the tag of the forwarded events with the defined value. |
Shared_Key | Shared key for secure forward authentication. |
Self_Hostname | Hostname for secure forward authentication. |
Security.Users | Specify the username and password pairs for secure forward authentication. |
Threaded | Indicates whether to run this input in its own thread. | false |
cpu_p | CPU usage of the overall system. This value is the summation of time spent in user and kernel space. The result takes into consideration the number of CPU cores in the system. |
user_p | CPU usage in user mode, in short the CPU usage by user space programs. The result takes into consideration the number of CPU cores in the system. |
system_p | CPU usage in kernel mode, in short the CPU usage by the kernel. The result takes into consideration the number of CPU cores in the system. |
threaded | Indicates whether to run this input in its own thread. Default: false. |
cpuN.p_cpu | Represents the total CPU usage by core N. |
cpuN.p_user | Total CPU spent in user mode or user space programs associated with this core. |
cpuN.p_system | Total CPU spent in system or kernel mode associated with this core. |
Interval_Sec | Polling interval in seconds. | 1 |
Interval_NSec | Polling interval in nanoseconds. | 0 |
PID | Specify the ID (PID) of a running process in the system. By default the plugin monitors the whole system, but if this option is set, it will only monitor the given process ID. |
Key | Description | Default |
---|---|---|
listen | The address to listen on. | 0.0.0.0 |
port | The port for Fluent Bit to listen on. | 9880 |
tag_key | Specify the key name to overwrite a tag. If set, the tag will be overwritten by the value of the key. |
buffer_max_size | Specify the maximum buffer size in KB to receive a JSON message. | 4M |
buffer_chunk_size | Sets the chunk size for incoming JSON messages. These chunks are then stored/managed in the space available by buffer_max_size. | 512K |
successful_response_code | Allows setting a successful response code. 200, 201, and 204 are supported. | 201 |
success_header | Add an HTTP header key/value pair on success. Multiple headers can be set. Example: X-Custom custom-answer |
threaded | Indicates whether to run this input in its own thread. | false |
File | Absolute path to the target file, e.g. /proc/uptime. |
Buf_Size | Buffer size to read the file. |
Interval_Sec | Polling interval (seconds). |
Interval_NSec | Polling interval (nanosecond). |
Add_Path | If enabled, the filepath is appended to each record. Default value is false. |
Key | Rename a key. Default: head. |
Lines | Line number to read. If the number N is set, in_head reads the first N lines like head(1) -n. |
Split_line | If enabled, in_head generates a key-value pair per line. |
Threaded | Indicates whether to run this input in its own thread. Default: false. |
Command | The command to execute, passed to popen(...) without any additional escaping or processing. May include pipelines, redirection, command-substitution, etc. |
Parser | Specify the name of a parser to interpret the entry as a structured message. |
Interval_Sec | Polling interval (seconds). |
Interval_NSec | Polling interval (nanosecond). |
Buf_Size | Size of the buffer (check unit sizes for allowed values). |
Oneshot | Only run once at startup. This allows collection of data precedent to fluent-bit's startup (bool, default: false). |
Exit_After_Oneshot | Exit as soon as the one-shot command exits. This allows the exec plugin to be used as a wrapper for another command, sending the target command's output to any fluent-bit sink(s) then exiting. (bool, default: false) |
Propagate_Exit_Code | When exiting due to Exit_After_Oneshot, cause fluent-bit to exit with the exit code of the command exited by this plugin. Follows shell conventions for exit code propagation. (bool, default: false) |
Threaded | Indicates whether to run this input in its own thread. Default: false. |
db | Set a database file to keep track of recorded Kubernetes events |
db.sync | Set a database sync method. values: extra, full, normal and off | normal |
interval_sec | Set the reconnect interval (seconds)* | 0 |
interval_nsec | Set the reconnect interval (sub seconds: nanoseconds)* | 500000000 |
kube_url | API Server end-point | https://kubernetes.default.svc |
kube_ca_file | Kubernetes TLS CA file | /var/run/secrets/kubernetes.io/serviceaccount/ca.crt |
kube_ca_path | Kubernetes TLS ca path |
kube_token_file | Kubernetes authorization token file. | /var/run/secrets/kubernetes.io/serviceaccount/token |
kube_token_ttl | kubernetes token ttl, until it is reread from the token file. | 10m |
kube_request_limit | kubernetes limit parameter for events query, no limit applied when set to 0. | 0 |
kube_retention_time | Kubernetes retention time for events. | 1h |
kube_namespace | Kubernetes namespace to query events from. Gets events from all namespaces by default |
tls.debug | Debug level between 0 (nothing) and 4 (every detail). | 0 |
tls.verify | Enable or disable verification of TLS peer certificate. | On |
tls.vhost | Set optional TLS virtual host. |
| Gauge | Shows the status of the last metric scrape: | [] |
| Counter | Accepted client connections. | [] |
| Gauge | Active client connections. | [] |
| Counter | Handled client connections. | [] |
| Gauge | Connections where NGINX is reading the request header. | [] |
| Gauge | Idle client connections. | [] |
| Gauge | Connections where NGINX is writing the response back to the client. | [] |
| Counter | Total http requests. | [] |
| Counter | Accepted client connections | [] |
| Gauge | Active client connections | [] |
| Counter | Dropped client connections dropped | [] |
| Gauge | Idle client connections | [] |
| Counter | Total http requests | [] |
| Gauge | Current http requests | [] |
| Counter | Successful SSL handshakes | [] |
| Counter | Failed SSL handshakes | [] |
| Counter | Session reuses during SSL handshake | [] |
| Gauge | Client requests that are currently being processed | |
| Counter | Total client requests | |
| Counter | Total responses sent to clients | |
| Counter | Requests completed without sending a response | |
| Counter | Bytes received from clients | |
| Counter | Bytes sent to clients | |
| Gauge | Client connections that are currently being processed | |
| Counter | Total connections | |
| Counter | Total sessions completed | |
| Counter | Connections completed without creating a session | |
| Counter | Bytes received from clients | |
| Counter | Bytes sent to clients | |
| Gauge | Current state | |
| Gauge | Active connections | |
| Gauge | Limit for connections which corresponds to the max_conns parameter of the upstream server. Zero value means there is no limit | |
| Counter | Total client requests | |
| Counter | Total responses sent to clients | |
| Counter | Bytes sent to this server | |
| Counter | Bytes received to this server | |
| Counter | Number of unsuccessful attempts to communicate with the server | |
| Counter | How many times the server became unavailable for client requests (state 'unavail') due to the number of unsuccessful attempts reaching the max_fails threshold | |
| Gauge | Average time to get the response header from the server | |
| Gauge | Average time to get the full response from the server | |
| Gauge | Idle keepalive connections | |
| Gauge | Servers removed from the group but still processing active client requests | |
| Gauge | Current state | |
| Gauge | Active connections | |
| Gauge | Limit for connections which corresponds to the max_conns parameter of the upstream server. Zero value means there is no limit | |
| Counter | Total number of client connections forwarded to this server | |
| Gauge | Average time to connect to the upstream server | |
| Gauge | Average time to receive the first byte of data | |
| Gauge | Average time to receive the last byte of data | |
| Counter | Bytes sent to this server | |
| Counter | Bytes received from this server | |
| Counter | Number of unsuccessful attempts to communicate with the server | |
| Counter | How many times the server became unavailable for client connections (state 'unavail') due to the number of unsuccessful attempts reaching the max_fails threshold | |
| Gauge | Servers removed from the group but still processing active client connections | |
| Counter | Total client requests | |
| Counter | Total responses sent to clients | |
| Counter | Requests completed without sending a response | |
| Counter | Bytes received from clients | |
| Counter | Bytes sent to clients | |
cpu | Exposes CPU statistics. | Linux,macOS | v1.8 |
cpufreq | Exposes CPU frequency statistics. | Linux | v1.8 |
diskstats | Exposes disk I/O statistics. | Linux,macOS | v1.8 |
filefd | Exposes file descriptor statistics from | Linux | v1.8.2 |
filesystem | Exposes filesystem statistics from | Linux | v2.0.9 |
loadavg | Exposes load average. | Linux,macOS | v1.8 |
meminfo | Exposes memory statistics. | Linux,macOS | v1.8 |
netdev | Exposes network interface statistics such as bytes transferred. | Linux,macOS | v1.8.2 |
stat | Exposes various statistics from | Linux | v1.8 |
time | Exposes the current system time. | Linux | v1.8 |
uname | Exposes system information as provided by the uname system call. | Linux,macOS | v1.8 |
vmstat | Exposes statistics from | Linux | v1.8.2 |
systemd collector | Exposes statistics from systemd. | Linux | v2.1.3 |
thermal_zone | Expose thermal statistics from | Linux | v2.2.1 |
nvme | Exposes nvme statistics from | Linux | v2.2.0 |
processes | Exposes processes statistics from | Linux | v2.2.0 |
scrape_interval | The rate at which metrics are collected from the host operating system | 5 seconds |
path.procfs | The mount point used to collect process information and metrics | /proc/ |
path.sysfs | The path in the filesystem used to collect system metrics | /sys/ |
collector.cpu.scrape_interval | The rate in seconds at which cpu metrics are collected from the host operating system. If a value greater than 0 is used then it overrides the global default otherwise the global default is used. | 0 seconds |
collector.cpufreq.scrape_interval | The rate in seconds at which cpufreq metrics are collected from the host operating system. If a value greater than 0 is used then it overrides the global default otherwise the global default is used. | 0 seconds |
collector.meminfo.scrape_interval | The rate in seconds at which meminfo metrics are collected from the host operating system. If a value greater than 0 is used then it overrides the global default otherwise the global default is used. | 0 seconds |
collector.diskstats.scrape_interval | The rate in seconds at which diskstats metrics are collected from the host operating system. If a value greater than 0 is used then it overrides the global default otherwise the global default is used. | 0 seconds |
collector.filesystem.scrape_interval | The rate in seconds at which filesystem metrics are collected from the host operating system. If a value greater than 0 is used then it overrides the global default otherwise the global default is used. | 0 seconds |
collector.uname.scrape_interval | The rate in seconds at which uname metrics are collected from the host operating system. If a value greater than 0 is used then it overrides the global default otherwise the global default is used. | 0 seconds |
collector.stat.scrape_interval | The rate in seconds at which stat metrics are collected from the host operating system. If a value greater than 0 is used then it overrides the global default otherwise the global default is used. | 0 seconds |
collector.time.scrape_interval | The rate in seconds at which time metrics are collected from the host operating system. If a value greater than 0 is used then it overrides the global default otherwise the global default is used. | 0 seconds |
collector.loadavg.scrape_interval | The rate in seconds at which loadavg metrics are collected from the host operating system. If a value greater than 0 is used then it overrides the global default otherwise the global default is used. | 0 seconds |
collector.vmstat.scrape_interval | The rate in seconds at which vmstat metrics are collected from the host operating system. If a value greater than 0 is used then it overrides the global default otherwise the global default is used. | 0 seconds |
collector.thermal_zone.scrape_interval | The rate in seconds at which thermal_zone metrics are collected from the host operating system. If a value greater than 0 is used then it overrides the global default otherwise the global default is used. | 0 seconds |
collector.filefd.scrape_interval | The rate in seconds at which filefd metrics are collected from the host operating system. If a value greater than 0 is used then it overrides the global default otherwise the global default is used. | 0 seconds |
collector.nvme.scrape_interval | The rate in seconds at which nvme metrics are collected from the host operating system. If a value greater than 0 is used then it overrides the global default otherwise the global default is used. | 0 seconds |
collector.processes.scrape_interval | The rate in seconds at which system level of process metrics are collected from the host operating system. If a value greater than 0 is used then it overrides the global default otherwise the global default is used. | 0 seconds |
metrics | To specify which metrics are collected from the host operating system. These metrics depend on |
|
filesystem.ignore_mount_point_regex | Specify the regex for the mount points to prevent collection of/ignore. | `^/(dev |
filesystem.ignore_filesystem_type_regex | Specify the regex for the filesystem types to prevent collection of/ignore. | `^(autofs |
diskstats.ignore_device_regex | Specify the regex for the diskstats to prevent collection of/ignore. | `^(ram |
systemd_service_restart_metrics | Determines if the collector will include service restart metrics | false |
systemd_unit_start_time_metrics | Determines if the collector will include unit start time metrics | false |
systemd_include_service_task_metrics | Determines if the collector will include service task metrics | false |
systemd_include_pattern | regex to determine which units are included in the metrics produced by the systemd collector | It is not applied unless explicitly set |
systemd_exclude_pattern | regex to determine which units are excluded in the metrics produced by the systemd collector | `.+\.(automount |
Prio_Level | The log level to filter. The kernel log is dropped if its priority is more than prio_level. Allowed values are 0-8. Default is 8. 8 means all logs are saved. | 8 |
Threaded | Indicates whether to run this input in its own thread. | false |
Interface | Specify the network interface to monitor. e.g. eth0 |
Interval_Sec | Polling interval (seconds). | 1 |
Interval_NSec | Polling interval (nanosecond). | 0 |
Verbose | If true, gather metrics precisely. | false |
Test_At_Init | If true, test whether the network interface is valid at initialization. | false |
Threaded | Indicates whether to run this input in its own thread. | false |
brokers | Single or multiple list of Kafka Brokers, e.g: 192.168.1.3:9092, 192.168.1.4:9092. |
topics | Single entry or list of topics separated by comma (,) that Fluent Bit will subscribe to. |
format | Serialization format of the messages. If set to "json", the payload will be parsed as json. | none |
client_id | Client id passed to librdkafka. |
group_id | Group id passed to librdkafka. | fluent-bit |
poll_ms | Kafka brokers polling interval in milliseconds. | 500 |
Buffer_Max_Size | Specify the maximum size of buffer per cycle to poll kafka messages from subscribed topics. To increase throughput, specify larger size. | 4M |
rdkafka.{property} | {property} can be any librdkafka property. |
threaded | Indicates whether to run this input in its own thread. | false |
Listen | Listener network interface. | |
Port | TCP port where listening for connections. | |
Payload_Key | Specify the key where the payload key/value will be preserved. | none |
Threaded | Indicates whether to run this input in its own thread. | false |
Host | Name of the target host or IP address to check. | localhost |
Port | Port of the target nginx service to connect to. | 80 |
Status_URL | The URL of the Stub Status Handler. | /status |
Nginx_Plus | Turn on NGINX plus mode. | true |
Threaded | Indicates whether to run this input in its own thread. | false |