When the data or logs are ready to be routed to some destination, by default they are buffered in memory.
Note that buffered data is no longer raw text; instead it's in Fluent Bit's internal binary representation.
Optionally Fluent Bit offers a buffering mechanism in the file system that acts as a backup system to avoid data loss in case of system failures.
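As a minimal sketch, filesystem buffering is enabled per input through the storage.type property, combined with a global storage.path in the Service section (the path below is only an illustrative value):

```
[SERVICE]
    # directory where buffered chunks are persisted (illustrative path)
    storage.path  /var/log/flb-storage/

[INPUT]
    Name          cpu
    Tag           my_cpu
    # back this input's in-memory data with the filesystem
    storage.type  filesystem
```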
Fluent Bit has a 'Service' that runs the pipeline from input, through filters, to output. Its global configuration includes whether to daemonize, the diagnostic log level, the flush interval, and so on.
For more details, please refer to the Service section.
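As a rough sketch, such global settings live in a Service section like the following (the values are illustrative):

```
[SERVICE]
    # run in the foreground; set to 'on' to daemonize
    Daemon     off
    # verbosity of diagnostic logging: error, warn, info, debug or trace
    Log_Level  info
    # flush buffered data to the outputs every 5 seconds
    Flush      5
```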
In production environments we want to have full control of the data we are collecting; filtering is an important feature that allows us to alter the data before delivering it to a destination.
Filtering is implemented through plugins, so each available filter can be used to match, exclude, or enrich your logs with specific metadata.
Very similar to the input plugins, Filters run in an instance context, which has its own independent configuration. Configuration keys are often called properties.
For more details about the Filters available and their usage, please refer to the Filters section.
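For example, a grep filter can keep or exclude records based on a pattern, and a record_modifier filter can enrich them with extra metadata; the field names and values below are illustrative:

```
[FILTER]
    # keep only records whose 'log' field contains the word 'error'
    Name    grep
    Match   *
    Regex   log error

[FILTER]
    # enrich every record with a static 'hostname' field (illustrative value)
    Name    record_modifier
    Match   *
    Record  hostname web-01
```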
Fluent Bit provides different Input Plugins to gather information from different sources: some of them just collect data from log files, while others can gather metrics information from the operating system. There are many plugins for different needs.
When an input plugin is loaded, an internal instance is created. Every instance has its own independent configuration. Configuration keys are often called properties.
Every input plugin has its own documentation section specifying how it can be used and what properties are available.
For more details, please refer to the Input Plugins section.
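For example, the cpu input collects metrics from the operating system while the tail input follows a log file; the path and tags below are illustrative:

```
[INPUT]
    # collect CPU usage metrics from the operating system
    Name  cpu
    Tag   my_cpu

[INPUT]
    # read new lines appended to a log file (illustrative path)
    Name  tail
    Path  /var/log/syslog
    Tag   system.logs
```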
Fluent Bit is a straightforward tool, and to get started with it we need to understand its basic workflow. The following table gives a global overview of its interfaces:
| Interface | Description |
| --------- | ----------- |
| Input | Entry point of data. Implemented through Input Plugins, this interface allows gathering or receiving data, e.g. log file content, data over TCP, built-in metrics, etc. |
| Parser | Parsers convert unstructured data gathered from the Input interface into structured data. Parsers are optional and depend on the Input plugins used. |
| Filter | The filtering mechanism allows altering the data ingested by the Input plugins. Filters are implemented as plugins. |
| Buffer | By default, the data ingested by the Input plugins resides in memory until it is routed and delivered to an Output interface. |
| Router | Data ingested by an Input interface is tagged: a Tag is assigned, and it is used to determine where the data should be routed based on a Match rule. |
| Output | An output defines a destination for the data. Destinations are handled by Output plugins. Thanks to the Routing interface, the data can be delivered to multiple destinations. |
Dealing with raw strings is a constant pain; having a structure is highly desirable. Ideally we want to impose a structure on the incoming data as soon as it is collected by the Input Plugins.
The Parser allows you to convert from unstructured to structured data. As a demonstrative example consider the following Apache (HTTP Server) log entry:
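```
192.168.2.20 - - [28/Jul/2006:10:27:10 -0300] "GET /cgi-bin/try/ HTTP/1.0" 200 3395
```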
The above log line is a raw string without format; ideally we would like to give it a structure that can be processed easily later. With the proper configuration, the log entry could be converted to:
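```
{
  "host":   "192.168.2.20",
  "user":   "-",
  "method": "GET",
  "path":   "/cgi-bin/try/",
  "code":   "200",
  "size":   "3395"
}
```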
Parsers are fully configurable and are independently and optionally handled by each input plugin; for more details please refer to the Parsers section.
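As a sketch, a parser could be declared in the parsers file and then referenced by name from an input plugin; the regular expression below is a simplified, illustrative one for the Apache example above:

```
[PARSER]
    # simplified, illustrative regex for the Apache example above
    Name        apache_demo
    Format      regex
    Regex       ^(?<host>[^ ]*) [^ ]* (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+) (?<path>[^"]*) [^"]*" (?<code>[^ ]*) (?<size>[^ ]*)$
    Time_Key    time
    Time_Format %d/%b/%Y:%H:%M:%S %z

[INPUT]
    # the tail input applies the parser to every line it reads (illustrative path);
    # the file holding the [PARSER] block must be loaded via Parsers_File in the Service section
    Name    tail
    Path    /var/log/apache2/access.log
    Parser  apache_demo
```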
The output interface allows us to define destinations for the data. Common destinations are remote services, the local file system, or the standard output interface, among others. Outputs are implemented as plugins and there are many available.
When an output plugin is loaded, an internal instance is created. Every instance has its own independent configuration. Configuration keys are often called properties.
Every output plugin has its own documentation section specifying how it can be used and what properties are available.
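As a sketch, one output instance could print everything to the standard output while another ships a subset of records to an Elasticsearch service; the tag pattern, host and port below are illustrative:

```
[OUTPUT]
    # print every record to the standard output
    Name   stdout
    Match  *

[OUTPUT]
    # ship records tagged 'app.*' to Elasticsearch (illustrative host and port)
    Name   es
    Match  app.*
    Host   127.0.0.1
    Port   9200
```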
Routing is a core feature that allows you to route your data through Filters and finally to one or multiple destinations.
There are two important concepts in Routing:

* Tag
* Match
When the data is generated by the input plugins, it comes with a Tag (most of the time the Tag is configured manually); the Tag is a human-readable indicator that helps to identify the data source.
In order to define where the data should be routed, a Match rule must be specified in the output configuration.
Consider the following configuration example that aims to deliver CPU metrics to an Elasticsearch database and Memory metrics to the standard output interface:
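```
[INPUT]
    Name  cpu
    Tag   my_cpu

[INPUT]
    Name  mem
    Tag   my_mem

[OUTPUT]
    # CPU metrics go to Elasticsearch
    Name  es
    Match my_cpu

[OUTPUT]
    # Memory metrics go to the standard output
    Name  stdout
    Match my_mem
```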
Note: the above is a simple example demonstrating how Routing is configured.
Routing works automatically by reading the Input Tags and the Output Match rules. If some data has a Tag that doesn't match any rule at routing time, the data is deleted.
Routing is flexible enough to support wildcards in the Match pattern. The example below defines a common destination for both sources of data:
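```
[INPUT]
    Name  cpu
    Tag   my_cpu

[INPUT]
    Name  mem
    Tag   my_mem

[OUTPUT]
    # the wildcard matches both my_cpu and my_mem
    Name  stdout
    Match my_*
```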
The match rule is set to my_*, which means it will match any Tag that starts with my_.
For more details, please refer to the Routing section.