Currently, Fluent Bit supports two configuration formats:
YAML: the standard configuration format as of v3.2.
Classic mode: to be deprecated at the end of 2025.
Fluent Bit exposes most of its features through the command line interface. Run it with the -h option to get a list of the available options.
Fluent Bit traditionally offered a classic
configuration mode, a custom configuration format that we are gradually phasing out. While classic
mode has served well for many years, it has several limitations. Its basic design only supports grouping sections with key-value pairs and lacks the ability to handle sub-sections or complex data structures like lists.
YAML, now a mainstream configuration format, has become essential in a cloud ecosystem where everything is configured this way. To minimize friction and provide a more intuitive experience for creating data pipelines, we strongly encourage users to transition to YAML. The YAML format enables features, such as processors, that are not possible to configure in classic
mode.
As of Fluent Bit v3.2, you can configure everything in YAML.
Configuring Fluent Bit with YAML introduces the following root-level sections:
| Section Name | Description |
| --- | --- |
| service | Describes the global configuration for the Fluent Bit service. This section is optional; if not set, default values will apply. Only one service section can be defined. |
| parsers | Lists parsers to be used by components like inputs, processors, filters, or output plugins. You can define multiple parsers sections, which can also be loaded from external files included in the main YAML configuration. |
| multiline_parsers | Lists multiline parsers, functioning similarly to parsers. Multiple definitions can exist either in the root or in included files. |
| pipeline | Defines a pipeline composed of inputs, processors, filters, and output plugins. You can define multiple pipeline sections, but they will not operate independently. Instead, all components will be merged into a single pipeline internally. |
| plugins | Specifies the path to external plugins (.so files) to be loaded by Fluent Bit at runtime. |
| upstream_servers | Refers to a group of node endpoints that can be referenced by output plugins that support this feature. |
| env | Sets a list of environment variables for Fluent Bit. Note that system environment variables are available, while the ones defined in the configuration apply only to Fluent Bit. |
To access detailed configuration guides for each section, use the following links:
Service Section documentation: Overview of global settings, configuration options, and examples.
Parsers Section documentation: Detailed guide on defining parsers and supported formats.
Multiline Parsers Section documentation: Explanation of multiline parsing configuration.
Pipeline Section documentation: Details on setting up pipelines and using processors.
Plugins Section documentation: How to load external plugins.
Upstream Servers Section documentation: Guide on setting up and using upstream nodes with supported plugins.
Environment Variables Section documentation: Information on setting environment variables and their scope within Fluent Bit.
Includes Section documentation: Description of how to include external YAML files.
While Fluent Bit comes with a variety of built-in plugins, it also supports loading external plugins at runtime. This feature is especially useful for loading Go or Wasm plugins that are built as shared object files (.so). Fluent Bit's YAML configuration provides two ways to load these external plugins:
You can specify external plugins directly within your main YAML configuration file using the plugins
section. Here’s an example:
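A minimal sketch of this approach (the shared object path below is a placeholder, not a path from the original page):

```yaml
# Load an external plugin built as a shared object (.so).
# The path is illustrative; point it at your own plugin file.
plugins:
    - /path/to/my_plugin.so

service:
    log_level: info
```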
Alternatively, you can load external plugins from a separate YAML file by specifying the plugins_file option in the service section. Here’s how to configure this:
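A sketch of that setup, using the extra_plugins.yaml name referenced below:

```yaml
service:
    log_level: info
    # Load additional plugin definitions from a separate YAML file.
    plugins_file: extra_plugins.yaml
```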
In this setup, the extra_plugins.yaml
file might contain the following plugins section:
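A possible content for that file (the shared object path is a placeholder):

```yaml
# extra_plugins.yaml
plugins:
    - /other/path/to/my_plugin.so
```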
Built-in vs. External: Fluent Bit comes with many built-in plugins, but you can load external plugins at runtime to extend the tool’s functionality.
Loading Mechanism: External plugins must be shared object files (.so). You can define them inline in the main YAML configuration or include them from a separate YAML file for better modularity.
Multiline parsers are used to combine logs that span multiple events into a single, cohesive message. This is particularly useful for handling stack traces, error logs, or any log entry that contains multiple lines of information.
In YAML configuration, the syntax for defining multiline parsers differs slightly from the classic configuration format and introduces minor breaking changes, specifically in how the rules are defined.
Below is an example demonstrating how to define a multiline parser directly in the main configuration file, as well as how to include additional definitions from external files:
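A sketch of an inline definition. The parser name and the start_state/cont states follow the description below; the regex patterns and the exact rule key names are assumptions made to illustrate the layout and should be checked against the Multiline Parsers section reference:

```yaml
multiline_parsers:
    - name: multiline-regex-test
      type: regex
      flush_timeout: 1000
      rules:
          # First rule: starts from 'start_state' and points to the next state.
          - state: start_state
            regex: '/([A-Za-z]+ \d+ \d+\:\d+\:\d+)(.*)/'
            next_state: cont
          # Continuation rule: keeps appending matching lines to the same record.
          - state: cont
            regex: '/^\s+at.*/'
            next_state: cont
```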
The example above defines a multiline parser named multiline-regex-test
that uses regular expressions to handle multi-event logs. The parser contains two rules: the first rule transitions from start_state to cont when a matching log entry is detected, and the second rule continues to match subsequent lines.
For more detailed information on configuring multiline parsers, including advanced options and use cases, please refer to the Configuring Multiline Parsers section.
Parsers enable Fluent Bit components to transform unstructured data into a structured internal representation. You can define parsers either directly in the main configuration file or in separate external files for better organization.
This page provides a general overview of how to declare parsers.
The main section name is parsers
, and it allows you to define a list of parser configurations. The following example demonstrates how to set up two simple parsers:
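A sketch of what such a section could look like (parser names and formats are illustrative):

```yaml
parsers:
    # A parser that decodes JSON-encoded log lines.
    - name: json
      format: json

    # A simple regex parser with named capture groups.
    - name: simple_kv
      format: regex
      regex: '^(?<key>[^ ]*) (?<value>.*)$'
```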
You can define multiple parsers sections, either within the main configuration file or distributed across included files.
For more detailed information on parser options and advanced configurations, please refer to the Parsers section documentation.
The pipeline
section defines the flow of how data is collected, processed, and sent to its final destination. It encompasses the following core concepts:
| Name | Description |
| --- | --- |
| inputs | Specifies the name of the plugin responsible for collecting or receiving data. This component serves as the data source in the pipeline. |
| processors | Unique to YAML configuration, processors are specialized plugins that handle data processing directly attached to input plugins. Unlike filters, processors are not dependent on tag or matching rules. Instead, they work closely with the input to modify or enrich the data before it reaches the filtering or output stages. Processors are defined within an input plugin section. |
| filters | Filters are used to transform, enrich, or discard events based on specific criteria. They allow matching tags using strings or regular expressions, providing a more flexible way to manipulate data. Filters run as part of the main event loop and can be applied across multiple inputs and filters. |
| outputs | Defines the destination for processed data. Outputs specify where the data will be sent, such as to a remote server, a file, or another service. Each output plugin is configured with matching rules to determine which events are sent to that destination. |
Here’s a simple example of a pipeline configuration:
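A minimal sketch (plugin names, tag, and log path are illustrative):

```yaml
pipeline:
    inputs:
        - name: tail
          path: /var/log/example.log
          tag: app.logs

    filters:
        - name: grep
          match: 'app.*'
          regex: log aa

    outputs:
        - name: stdout
          match: '*'
```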
Processors operate on specific signals such as logs, metrics, and traces. They are attached to an input plugin and must specify the signal type they will process.
In the example below, the content_modifier processor inserts or updates (upserts) the key my_new_key with the value 123 for all log records generated by the tail plugin. This processor is only applied to log signals:
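A sketch of that configuration (the tailed path is a placeholder):

```yaml
pipeline:
    inputs:
        - name: tail
          path: /var/log/example.log
          processors:
              logs:
                  # Upsert my_new_key=123 into every log record from this input.
                  - name: content_modifier
                    action: upsert
                    key: my_new_key
                    value: 123

    outputs:
        - name: stdout
          match: '*'
```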
Here is a more complete example with multiple processors:
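A sketch with processors attached to both an input and an output (keys, values, and the path are illustrative):

```yaml
pipeline:
    inputs:
        - name: tail
          path: /var/log/example.log
          processors:
              logs:
                  # Enrich records as they are ingested.
                  - name: content_modifier
                    action: upsert
                    key: my_new_key
                    value: 123
                  # A filter used as a processor.
                  - name: grep
                    regex: log aa

    outputs:
        - name: stdout
          match: '*'
          processors:
              logs:
                  # Drop a key right before the records are delivered.
                  - name: content_modifier
                    action: delete
                    key: unwanted_key
```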
You might have noticed that processors can be attached not only to inputs but also to outputs.
While processors and filters are similar in that they can transform, enrich, or drop data from the pipeline, there is a significant difference in how they operate:
Processors: Run in the same thread as the input plugin when the input plugin is configured to be threaded (threaded: true). This design provides better performance, especially in multi-threaded setups.
Filters: Run in the main event loop. When multiple filters are used, they can introduce performance overhead, particularly under heavy workloads.
In the example below, the grep filter is used as a processor to filter log events based on a pattern:
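A sketch of that setup (the path and the key/pattern pair are illustrative):

```yaml
pipeline:
    inputs:
        - name: tail
          path: /var/log/app.log
          processors:
              logs:
                  # The grep filter runs here as a processor attached to the input.
                  - name: grep
                    regex: log aa

    outputs:
        - name: stdout
          match: '*'
```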
The Upstream Servers
section defines a group of endpoints, referred to as nodes, which are used by output plugins to distribute data in a round-robin fashion. This is particularly useful for plugins that require load balancing when sending data, such as the Forward output plugin.
In YAML, this section is named upstream_servers
and requires specifying a name
for the group and a list of nodes
. Below is an example that defines two upstream server groups: forward-balancing
and forward-balancing-2
:
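A sketch of these two groups. Hosts, ports, and the shared key are illustrative; the node fields and TLS keys follow the note below:

```yaml
upstream_servers:
    - name: forward-balancing
      nodes:
          - name: node-1
            host: 127.0.0.1
            port: 43000
          - name: node-2
            host: 127.0.0.1
            port: 44000
          - name: node-3
            host: 127.0.0.1
            port: 45000
            tls: true
            tls_verify: false
            shared_key: secret

    - name: forward-balancing-2
      nodes:
          - name: node-a
            host: 192.168.1.10
            port: 24224
          - name: node-b
            host: 192.168.1.11
            port: 24224
```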
Nodes: Each node in the upstream_servers group must specify a name, host, and port. Additional settings like tls, tls_verify, and shared_key can be configured as needed for secure communication.
While the upstream_servers
section can be defined globally, some output plugins may require the configuration to be specified in a separate YAML file. Be sure to consult the documentation for each specific output plugin to understand its requirements.
For more details, refer to the documentation of the respective output plugins.
You can configure existing filters to run as processors. There are no specific changes needed; you simply use the filter name as if it were a native processor.
The includes
section allows you to specify additional YAML configuration files to be merged into the current configuration. These files are identified as a list of filenames and can include relative or absolute paths. If no absolute path is provided, the file is assumed to be located in a directory relative to the file that references it.
This feature is useful for organizing complex configurations into smaller, manageable files and including them as needed.
Below is an example demonstrating how to include additional YAML files using relative path references. The file system path structure and the content of fluent-bit.yaml are sketched below:
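A sketch, assuming a layout where the included files sit next to the main file (all file names are placeholders):

```yaml
# Assumed layout:
#   fluent-bit.yaml
#   inclusion-1.yaml
#   subdir/inclusion-2.yaml

# fluent-bit.yaml
includes:
    - inclusion-1.yaml
    - subdir/inclusion-2.yaml
```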
Relative Paths: If a path is not specified as absolute, it will be treated as relative to the file that includes it.
Organized Configurations: Using the includes section helps keep your configuration modular and easier to maintain.
note: Ensure that the included files are formatted correctly and contain valid YAML configurations for seamless integration.
Fluent Bit might optionally use a configuration file to define how the service will behave.
Before proceeding we need to understand how the configuration schema works.
The schema is defined by three concepts:
Sections
Entries: Key/Value
Indented Configuration Mode
A simple example of a configuration file is as follows:
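A minimal sketch matching the entries discussed below:

```
[SERVICE]
    # This is a commented line
    Daemon     off
    Log_Level  debug
```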
A section is defined by a name or title inside brackets. Looking at the example above, a Service section has been set using the [SERVICE] definition. Section rules:
All section content must be indented (four spaces ideally).
Multiple sections can exist in the same file.
A section is expected to have comments and entries; it cannot be empty.
Any commented line under a section must be indented too.
End-of-line comments are not supported, only full-line comments.
A section may contain Entries; an entry is defined by a line of text that contains a Key and a Value. Using the above example, the [SERVICE] section contains two entries: one is the key Daemon with value off, and the other is the key Log_Level with the value debug. Entries rules:
An entry is defined by a key and a value.
A key must be indented.
A key must contain a value which ends at the line break.
Multiple keys with the same name can exist.
Commented lines are set by prefixing them with the # character; those lines are not processed, but they must be indented too.
Fluent Bit configuration files are based on a strict Indented Mode, which means that each configuration file must follow the same pattern of alignment from left to right when writing text. By default, an indentation level of four spaces from left to right is suggested. Example:
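An illustrative sketch (section and key names are placeholders):

```
[FIRST_SECTION]
    # This is a commented line
    Key1  some value
    Key2  another value

[SECOND_SECTION]
    KeyN  3.14
```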
As you can see there are two sections with multiple entries and comments, note also that empty lines are allowed and they do not need to be indented.
The service section supports the following keys:

| Key | Description | Default |
| --- | --- | --- |
| flush | Sets the flush time in seconds.nanoseconds format. The engine loop uses a flush timeout to define when it's required to flush the records ingested by input plugins through the defined output plugins. | 1 |
| grace | Sets the grace time in seconds as an integer value. The engine loop uses a grace timeout to define the wait time on exit. | 5 |
| daemon | Boolean. Specifies whether Fluent Bit should run as a daemon (background process). Allowed values are: yes, no, on, and off. | off |
| dns.mode | Sets the primary transport layer protocol used by the asynchronous DNS resolver. Can be overridden on a per-plugin basis. | UDP |
| log_file | Absolute path for an optional log file. By default, all logs are redirected to the standard error interface (stderr). | none |
| log_level | Sets the logging verbosity level. Allowed values are: off, error, warn, info, debug, and trace. | info |
| parsers_file | Path for a parsers configuration file. Multiple parsers_file entries can be defined within the section. | none |
| plugins_file | Path for a plugins configuration file. This file specifies the paths to external plugins (.so files) that Fluent Bit can load at runtime. With the new YAML schema, the plugins_file key is optional: external plugins can be referenced directly within the plugins section, simplifying the plugin management process. | none |
| streams_file | Path for the Stream Processor configuration file. This file defines the rules and operations for stream processing within Fluent Bit. The streams_file key is optional, as Stream Processor configurations can be defined directly in the streams section of the YAML schema. | none |
| http_server | Enables the built-in HTTP Server. | off |
| http_listen | Sets the listening interface for the HTTP Server when it's enabled. | 0.0.0.0 |
| http_port | Sets the TCP port for the HTTP Server. | 2020 |
| coro_stack_size | Sets the coroutine stack size in bytes. The value must be greater than the page size of the running system. Setting the value too small (for example, 4096) can cause coroutine threads to overrun the stack buffer. The default value of this parameter shouldn't be changed. | 24576 |
| scheduler.cap | Sets a maximum retry time in seconds. Supported in v1.8.7 and greater. | 2000 |
| scheduler.base | Sets the base of exponential backoff. Supported in v1.8.7 and greater. | 5 |
| json.convert_nan_to_null | If enabled, NaN is converted to null when Fluent Bit converts msgpack to JSON. | false |
| sp.convert_from_str_to_num | If enabled, the Stream Processor converts strings that represent numbers to a numeric type. | true |
Fluent Bit supports the usage of environment variables in any value associated to a key when using a configuration file.
The variables are case sensitive and are referenced using the ${VARIABLE_NAME} format.
When Fluent Bit starts, the configuration reader will detect any request for ${MY_VARIABLE}
and will try to resolve its value.
When Fluent Bit is running under systemd (using the official packages), environment variables can be set in the following files:
/etc/default/fluent-bit
(Debian based system)
/etc/sysconfig/fluent-bit
(Others)
These files are ignored if they do not exist.
Create the following configuration file (fluent-bit.conf
):
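A sketch of such a file (the input plugin is illustrative; ${MY_OUTPUT} is the variable referenced below):

```
[SERVICE]
    Flush        1
    Daemon       off
    Log_Level    info

[INPUT]
    Name cpu
    Tag  cpu.local

[OUTPUT]
    Name  ${MY_OUTPUT}
    Match *
```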
Open a terminal and set the environment variable:
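For example, in a Bash-compatible shell:

```
export MY_OUTPUT=stdout
```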
The above command sets the value stdout for the variable MY_OUTPUT.
Run Fluent Bit with the recently created configuration file:
As you can see the service worked properly as the configuration was valid.
This page describes the main configuration file used by Fluent Bit.
One of the ways to configure Fluent Bit is using a main configuration file. Fluent Bit allows the use of one configuration file that works at a global scope and uses the defined Format and Schema.
The main configuration file supports four sections:
Service
Input
Filter
Output
It's also possible to split the main configuration file into multiple files using the Include File feature to include external files.
The Service
section defines global properties of the service. The following keys are:
| Key | Description | Default Value |
| --- | --- | --- |
| flush | Set the flush time in seconds.nanoseconds. The engine loop uses a Flush timeout to define when it's required to flush the records ingested by input plugins through the defined output plugins. | 1 |
| grace | Set the grace time in seconds as an integer value. The engine loop uses a grace timeout to define the wait time on exit. | 5 |
| daemon | Boolean. Determines whether Fluent Bit should run as a Daemon (background). Allowed values are: yes, no, on, and off. Don't enable when using a Systemd based unit, such as the one provided in Fluent Bit packages. | Off |
| dns.mode | Set the primary transport layer protocol used by the asynchronous DNS resolver. Can be overridden on a per plugin basis. | UDP |
| log_file | Absolute path for an optional log file. By default all logs are redirected to the standard error interface (stderr). | none |
| log_level | Set the logging verbosity level. Allowed values are: off, error, warn, info, debug, and trace. Values are cumulative. If debug is set, it will include error, warning, info, and debug. Trace mode is only available if Fluent Bit was built with the WITH_TRACE option enabled. | info |
| parsers_file | Path for a parsers configuration file. Multiple Parsers_File entries can be defined within the section. | none |
| plugins_file | Path for a plugins configuration file. A plugins configuration file defines paths for external plugins. | none |
| streams_file | Path for the Stream Processor configuration file. | none |
| http_server | Enable the built-in HTTP Server. | Off |
| http_listen | Set the listening interface for the HTTP Server when it's enabled. | 0.0.0.0 |
| http_port | Set the TCP port for the HTTP Server. | 2020 |
| coro_stack_size | Set the coroutines stack size in bytes. The value must be greater than the page size of the running system. Setting the value too small (4096) can cause coroutine threads to overrun the stack buffer. The default value of this parameter shouldn't be changed. | 24576 |
| scheduler.cap | Set a maximum retry time in seconds. Supported in v1.8.7 and greater. | 2000 |
| scheduler.base | Set a base of exponential backoff. Supported in v1.8.7 and greater. | 5 |
| json.convert_nan_to_null | If enabled, NaN converts to null when Fluent Bit converts msgpack to json. | false |
| sp.convert_from_str_to_num | If enabled, the Stream Processor converts strings that represent numbers to a numeric type. | true |
The following is an example of a SERVICE
section:
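A minimal sketch (values are illustrative):

```
[SERVICE]
    Flush        5
    Daemon       off
    Log_Level    debug
```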
For scheduler and retry details, see scheduling and retries.
The INPUT
section defines a source (related to an input plugin). Each input plugin can add its own configuration keys:

| Key | Description |
| --- | --- |
| Name | Name of the input plugin. |
| Tag | Tag name associated to all records coming from this plugin. |
| Log_Level | Set the plugin's logging verbosity level. Allowed values are: off, error, warn, info, debug, and trace. Defaults to the SERVICE section's Log_Level. |
Name
is mandatory and tells Fluent Bit which input plugin to load. Tag
is mandatory for all plugins except for the input forward
plugin, which provides dynamic tags.
The following is an example of an INPUT
section:
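A minimal sketch using the cpu input plugin (the tag is illustrative):

```
[INPUT]
    Name cpu
    Tag  my_cpu
```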
The FILTER
section defines a filter (related to a filter plugin). Each filter plugin can add its own configuration keys. The base configuration for each FILTER section contains:

| Key | Description |
| --- | --- |
| Name | Name of the filter plugin. |
| Match | A pattern to match against the tags of incoming records. Case sensitive, supports asterisk (*) as a wildcard. |
| Match_Regex | A regular expression to match against the tags of incoming records. Use this option if you want to use the full regular expression syntax. |
| Log_Level | Set the plugin's logging verbosity level. Allowed values are: off, error, warn, info, debug, and trace. Defaults to the SERVICE section's Log_Level. |
Name
is mandatory and lets Fluent Bit know which filter plugin should be loaded. Match
or Match_Regex
is mandatory for all plugins. If both are specified, Match_Regex
takes precedence.
The following is an example of a FILTER
section:
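A minimal sketch using the grep filter (the key/pattern pair is illustrative):

```
[FILTER]
    Name  grep
    Match *
    Regex log aa
```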
The OUTPUT
section specifies a destination that certain records should go to after a Tag match. Fluent Bit can route up to 256 OUTPUT plugins. The configuration supports the following keys:

| Key | Description |
| --- | --- |
| Name | Name of the output plugin. |
| Match | A pattern to match against the tags of incoming records. Case sensitive and supports the asterisk (*) character as a wildcard. |
| Match_Regex | A regular expression to match against the tags of incoming records. Use this option if you want to use the full regular expression syntax. |
| Log_Level | Set the plugin's logging verbosity level. Allowed values are: off, error, warn, info, debug, and trace. Defaults to the SERVICE section's Log_Level. |
The following is an example of an OUTPUT
section:
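A minimal sketch using the stdout output (the match pattern is illustrative):

```
[OUTPUT]
    Name  stdout
    Match my*cpu
```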
The following configuration file example demonstrates how to collect CPU metrics and flush the results every five seconds to the standard output:
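A sketch of such a configuration (the tag and match values are illustrative):

```
[SERVICE]
    Flush     5
    Daemon    off
    Log_Level debug

[INPUT]
    Name  cpu
    Tag   my_cpu

[OUTPUT]
    Name  stdout
    Match my*cpu
```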
To avoid long, complicated configuration files, it is better to split specific parts into different files and call them (include them) from one main file. The @INCLUDE command can be used in the following way:
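For example (using the somefile.conf name referenced below):

```
[SERVICE]
    Flush 1

@INCLUDE somefile.conf
```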
The configuration reader will try to open the path somefile.conf
. If not found, the reader assumes the file is on a relative path based on the path of the base configuration file:
Main configuration path: /tmp/main.conf
Included file: somefile.conf
Fluent Bit will try to open somefile.conf
, if it fails it will try /tmp/somefile.conf
.
The @INCLUDE command only works at the top level of the configuration and can't be used inside sections.
Wildcard character (*
) supports including multiple files. For example:
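A sketch (the file name pattern is illustrative):

```
@INCLUDE input_*.conf
```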
Files matching the wildcard character are included unsorted. If plugin ordering between files needs to be preserved, the files should be included explicitly.
Fluent Bit output plugins commonly aim to connect to external services to deliver logs over the network; this is the case for HTTP, Elasticsearch, and Forward, among others. Being able to connect to one node (host) is enough for most use cases, but there are other scenarios where balancing across different nodes is required. The Upstream feature provides such a capability.
An Upstream defines a set of nodes that will be targeted by an output plugin; by the nature of the implementation, an output plugin must support the Upstream feature. The Forward output plugin has Upstream support.
The current balancing mode implemented is round-robin.
To define an Upstream, you must create a specific configuration file that contains an UPSTREAM and one or multiple NODE sections. The following table describes the properties associated with each section. Note that all of them are mandatory:
| Section | Key | Description |
| --- | --- | --- |
| UPSTREAM | name | Defines a name for the Upstream in question. |
| NODE | name | Defines a name for the Node in question. |
| NODE | host | IP address or hostname of the target host. |
| NODE | port | TCP port of the target service. |
A Node might contain additional configuration keys required by the plugin; this provides enough flexibility for the output plugin. A common use case is the Forward output: if TLS is enabled, it requires a shared_key (more details in the example below).
In addition to the properties defined in the table above, the network operations against a defined node can optionally be done through the use of TLS for further encryption and certificate use.
The TLS options available are described in the TLS/SSL section and can be added to any Node section.
The following example defines an Upstream called forward-balancing, which aims to be used by the Forward output plugin; it registers three nodes (a full configuration sketch follows the list):
node-1: connects to 127.0.0.1:43000
node-2: connects to 127.0.0.1:44000
node-3: connects to 127.0.0.1:45000 using TLS without verification. It also defines a specific configuration option required by Forward output called shared_key.
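A configuration sketch matching that description (the shared key value is a placeholder):

```
[UPSTREAM]
    name       forward-balancing

[NODE]
    name       node-1
    host       127.0.0.1
    port       43000

[NODE]
    name       node-2
    host       127.0.0.1
    port       44000

[NODE]
    name       node-3
    host       127.0.0.1
    port       45000
    tls        on
    tls_verify off
    shared_key secret
```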
Note that every Upstream definition must exist in its own configuration file in the file system. Adding multiple Upstreams in the same file or different files is not allowed.
The env
section allows you to define environment variables directly within the configuration file. These variables can then be used to dynamically replace values throughout your configuration using the ${VARIABLE_NAME}
syntax.
Values set in the env
section are case-sensitive. However, as a best practice, we recommend using uppercase names for environment variables. The example below defines two variables, FLUSH_INTERVAL
and STDOUT_FMT
, which can be accessed in the configuration using ${FLUSH_INTERVAL}
and ${STDOUT_FMT}
:
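A sketch of such a configuration (variable values and the pipeline are illustrative):

```yaml
env:
    FLUSH_INTERVAL: 1
    STDOUT_FMT: json_lines

service:
    flush: ${FLUSH_INTERVAL}
    log_level: info

pipeline:
    inputs:
        - name: random
    outputs:
        - name: stdout
          match: '*'
          format: ${STDOUT_FMT}
```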
Fluent Bit provides a set of predefined environment variables that can be used in your configuration:

| Name | Description |
| --- | --- |
| ${HOSTNAME} | The system's hostname. |
In addition to variables defined in the configuration file or the predefined ones, Fluent Bit can access system environment variables set in the user space. These external variables can be referenced in the configuration using the same ${VARIABLE_NAME} pattern.
For example, to set the FLUSH_INTERVAL system environment variable to 2 and use it in your configuration:
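For example, in a Bash-compatible shell:

```
export FLUSH_INTERVAL=2
```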
In the configuration file, you can then access this value as follows:
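A sketch, assuming FLUSH_INTERVAL was exported as above:

```yaml
service:
    flush: ${FLUSH_INTERVAL}
    log_level: info
```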
This approach allows you to easily manage and override configuration values using environment variables, providing flexibility in various deployment environments.
Configuration files must be flexible enough for any deployment need, but they must keep a clean and readable format.
Fluent Bit commands extend a configuration file with specific built-in features. The list of commands available as of the Fluent Bit 0.12 series is:

| Command | Prototype | Description |
| --- | --- | --- |
| @INCLUDE | @INCLUDE FILE | Include a configuration file |
| @SET | @SET KEY=VAL | Set a configuration variable |
Configuring a logging pipeline might lead to an extensive configuration file. In order to maintain a human-readable configuration, it's suggested to split the configuration in multiple files.
The @INCLUDE command allows the configuration reader to include an external configuration file, e.g:
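A sketch (the included file names are placeholders):

```
[SERVICE]
    Flush 1

@INCLUDE inputs.conf
@INCLUDE outputs.conf
```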
The above example defines the main service configuration file and also includes two files to continue the configuration.
Note that despite the order of inclusion, Fluent Bit will ALWAYS respect the following order:
Service
Inputs
Filters
Outputs
The @SET command can only be used at the root level of each line, meaning it cannot be used inside a section. For example:
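A sketch (variable names and values are illustrative):

```
@SET my_input=cpu
@SET my_output=stdout

[SERVICE]
    Flush 1

[INPUT]
    Name ${my_input}

[OUTPUT]
    Name  ${my_output}
    Match *
```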
Certain configuration directives in Fluent Bit refer to unit sizes, such as when defining the size of a buffer or specific limits; these appear in several input plugins and in generic properties such as memory buffer limits.
Starting from v0.11.10, all unit sizes have been standardized across the core and plugins. The following table describes the options that can be used and what they mean:
| Suffix | Description | Example |
| --- | --- | --- |
| (none) | When a suffix is not specified, it's assumed that the value given is a bytes representation. | Specifying a value of 32000 means 32000 bytes. |
| k, K, KB, kb | Kilobyte: a unit of memory equal to 1,000 bytes. | 32k means 32000 bytes. |
| m, M, MB, mb | Megabyte: a unit of memory equal to 1,000,000 bytes. | 1M means 1000000 bytes. |
| g, G, GB, gb | Gigabyte: a unit of memory equal to 1,000,000,000 bytes. | 1G means 1000000000 bytes. |
A full feature set to access content of your records
Fluent Bit works internally with structured records, which can be composed of an unlimited number of keys and values. Values can be anything such as a number, string, array, or map.
Having a way to select a specific part of the record is critical for certain core functionalities and plugins; this feature is called Record Accessor.
Consider Record Accessor a simple grammar to specify record content and other miscellaneous values.
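The structured record referenced below could look like the following (reconstructed from the accessor table that follows; it is only illustrative):

```json
{
    "log": "some message",
    "labels": {
        "color": "blue",
        "unset": null,
        "project": {
            "env": "production"
        }
    }
}
```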
A record accessor rule starts with the character $
. Using the structured content above as an example the following table describes how to access a record:
The following table describes some accessing rules and the expected returned value:

| Format | Accessed Value |
| --- | --- |
| $log | "some message" |
| $labels['color'] | "blue" |
| $labels['project']['env'] | "production" |
| $labels['unset'] | null |
| $labels['undefined'] | |
If the accessor key does not exist in the record like the last example $labels['undefined']
, the operation is simply omitted, no exception will occur.
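As a sketch of how an accessor pattern is used in practice, the following classic configuration uses the grep filter to keep only records whose $labels['color'] value is blue (the input plugin, the parser, and the file names are assumptions; parsers.conf is assumed to define a json parser):

```
[SERVICE]
    flush        1
    log_level    info
    parsers_file parsers.conf

[INPUT]
    name   tail
    path   test.log
    parser json

[FILTER]
    name   grep
    match  *
    regex  $labels['color'] ^blue$

[OUTPUT]
    name   stdout
    match  *
```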
The file content to process in test.log
is the following:
Running Fluent Bit with the configuration above the output will be:
The Fluent Bit record_accessor library has a limitation in the characters that can separate template variables: only dots and commas (. and ,) can come after a template variable. This is because the templating library must parse the template and determine the end of a variable.
The following would be invalid templates because the two template variables are not separated by commas or dots:
$TaskID-$ECSContainerName
$TaskID/$ECSContainerName
$TaskID_$ECSContainerName
$TaskIDfooo$ECSContainerName
However, the following are valid:
$TaskID.$ECSContainerName
$TaskID.ecs_resource.$ECSContainerName
$TaskID.fooo.$ECSContainerName
And the following are valid since they only contain one template variable with nothing after it:
fooo$TaskID
fooo____$TaskID
fooo/bar$TaskID
Fluent Bit supports configuration variables; one way to expose these variables to Fluent Bit is through setting a shell environment variable, the other is through the @SET command.
The record accessor feature is enabled on a per-plugin basis; not all plugins enable it. As an example, consider a configuration that aims to filter records using the grep filter so that it only matches records where the labels have a color blue.
In an ideal world, applications might log their messages within a single line, but in reality applications generate multiple log messages that sometimes belong to the same context. When it is time to process such information, it gets really complex. Consider application stack traces, which always have multiple log lines.
Starting from Fluent Bit v1.8, we have implemented a unified Multiline core functionality to solve all the user corner cases. In this section, you will learn about the features and configuration options available.
The Multiline parser engine exposes two ways to configure and use the functionality:
Built-in multiline parser
Configurable multiline parser
Without any extra configuration, Fluent Bit exposes certain pre-configured parsers (built-in) to solve specific multiline parser cases, e.g:
| Parser | Description |
| --- | --- |
| docker | Process a log entry generated by a Docker container engine. This parser supports the concatenation of log entries split by Docker. |
| cri | Process a log entry generated by the CRI-O container engine. Same as the docker parser, it supports concatenation of log entries. |
| go | Process log entries generated by a Go based language application and perform concatenation if multiline messages are detected. |
| python | Process log entries generated by a Python based language application and perform concatenation if multiline messages are detected. |
| java | Process log entries generated by a Google Cloud Java language application and perform concatenation if multiline messages are detected. |
Besides the built-in parsers listed above, it is possible to define your own Multiline parsers, with their own rules, through the configuration files.
A multiline parser is defined in a parsers configuration file by using a [MULTILINE_PARSER]
section definition. The Multiline parser must have a unique name and a type, plus other configured properties associated with each type:

| Property | Description | Default |
| --- | --- | --- |
| name | Specify a unique name for the Multiline Parser definition. A good practice is to prefix the name with multiline_. | |
| type | Set the multiline mode. For now, the supported type is regex. | |
| parser | Name of a pre-defined parser that must be applied to the incoming content before applying the regex rule. If no parser is defined, it's assumed that the content is raw text and not a structured message. Note: when a parser is applied to raw text, the regex is applied against a specific key of the structured message by using the key_content configuration property. | |
| key_content | For an incoming structured message, specify the key that contains the data that should be processed by the regular expression and possibly concatenated. | |
| flush_timeout | Timeout in milliseconds to flush a non-terminated multiline buffer. Default is set to 5 seconds. | 5s |
| rule | Configure a rule to match a multiline pattern. The rule has a specific format described below. Multiple rules can be defined. | |
To understand which Multiline parser type is required for your use case, you have to know beforehand what conditions in the content determine the beginning of a multiline message and the continuation of subsequent lines. We provide a regex-based configuration that supports states to handle everything from the simplest to the most difficult cases.
Before you start configuring your parser, you need to know the answers to the following questions:
What is the regular expression (regex) that matches the first line of a multiline message?
What are the regular expressions (regex) that match the continuation lines of a multiline message?
When matching regex, we have to define states, some states define the start of a multiline message while others are states for the continuation of multiline messages. You can have multiple continuation states definitions to solve complex cases.
The first regex that matches the start of a multiline message is called start_state; the regexes for continuation lines can have different state names.
A rule specifies how to match a multiline pattern and perform the concatenation. A rule is defined by 3 specific components:
state name
regular expression pattern
next state
A rule might be defined as follows (comments added to simplify the definition) :
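A sketch of two such rules inside a [MULTILINE_PARSER] section (the regex patterns are illustrative; they match a timestamped first line followed by indented "at ..." continuation lines):

```
[MULTILINE_PARSER]
    name          multiline-regex-test
    type          regex
    flush_timeout 1000
    #
    # rules   |   state name   | regex pattern                          | next state
    # --------|----------------|----------------------------------------------------
    rule         "start_state"   "/([A-Za-z]+ \d+ \d+\:\d+\:\d+)(.*)/"    "cont"
    rule         "cont"          "/^\s+at.*/"                             "cont"
```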
In the example above, we have defined two rules, each one has its own state name, regex patterns, and the next state name. Every field that composes a rule must be inside double quotes.
The state name of the first rule must always be start_state, and its regex pattern must match the first line of a multiline message; a next state must also be set to specify what the possible continuation lines would look like.
To simplify the configuration of regular expressions, you can use the Rubular web site. We have posted an example by using the regex described above plus a log line that matches the pattern.
The following example provides a full Fluent Bit configuration file for multiline parsing by using the definition explained above.
Example files content:
This is the primary Fluent Bit configuration file. It includes the parsers_multiline.conf
and tails the file test.log
by applying the multiline parser multiline-regex-test
. Then it sends the processing to the standard output.
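A sketch of that main file (paths are relative to the working directory; parsers_multiline.conf is assumed to hold a [MULTILINE_PARSER] definition named multiline-regex-test, like the one sketched earlier):

```
[SERVICE]
    flush            1
    log_level        info
    parsers_file     parsers_multiline.conf

[INPUT]
    name             tail
    path             test.log
    read_from_head   true
    multiline.parser multiline-regex-test

[OUTPUT]
    name             stdout
    match            *
```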
This second file defines a multiline parser for the example.
An example file with multiline content:
By running Fluent Bit with the given configuration file you will obtain:
The lines that did not match a pattern are not considered as part of the multiline message, while the ones that matched the rules were concatenated properly.
The multiline parser is a very powerful feature, but it has some limitations that you should be aware of:
The multiline parser is not affected by the buffer_max_size
configuration option, allowing the composed log record to grow beyond this size. Hence, the skip_long_lines
option will not be applied to multiline messages.
It is not possible to get the time key from the body of the multiline message. However, it can be extracted and set as a new key by using a filter.
Fluent Bit supports the /pat/m regex option, which allows . to match a new line. It is useful for parsing multiline logs.
The following example gets date and message from a concatenated log.
Example files content:
This is the primary Fluent Bit configuration file. It includes the parsers_multiline.conf
and tails the file test.log
by applying the multiline parser multiline-regex-test
. It also parses concatenated log by applying parser named-capture-test
. Then it sends the processing to the standard output.
This second file defines a multiline parser for the example.
An example file with multiline content:
By running Fluent Bit with the given configuration file you will obtain:
The following example files can be located at: