The Nest Filter plugin allows you to operate on or with nested data. Its modes of operation are:

- `nest`: take a set of records and place them in a map
- `lift`: take a map by key and lift its records up
As an example using JSON notation, to nest keys matching the Wildcard value `Key*` under a new key `NestKey`, the transformation is shown below (example input followed by example output).
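A minimal sketch of this transformation, with hypothetical keys `Key1`, `Key2` and `OtherKey` standing in for the record content:

Example (input):

```json
{
  "Key1"     : "Value1",
  "Key2"     : "Value2",
  "OtherKey" : "Value3"
}
```

Example (output):

```json
{
  "OtherKey" : "Value3",
  "NestKey"  : {
    "Key1" : "Value1",
    "Key2" : "Value2"
  }
}
```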
As an example using JSON notation, to lift keys nested under the `Nested_under` value `NestKey*`, the transformation is shown below (example input followed by example output).
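A minimal sketch of the lift transformation, again with hypothetical keys, assuming `Nested_under` is set to `NestKey`:

Example (input):

```json
{
  "OtherKey" : "Value3",
  "NestKey"  : {
    "Key1" : "Value1",
    "Key2" : "Value2"
  }
}
```

Example (output):

```json
{
  "OtherKey" : "Value3",
  "Key1"     : "Value1",
  "Key2"     : "Value2"
}
```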
The plugin supports the following configuration parameters:
In order to start filtering records, you can run the filter from the command line or through the configuration file. The following invokes the Memory Usage Input Plugin, which outputs records like the following (example):
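The tag, timestamp and values below are illustrative only:

```
[0] mem.local: [1488543156.000000000, {"Mem.total"=>1016044, "Mem.used"=>841388, "Mem.free"=>174656, "Swap.total"=>2064380, "Swap.used"=>139888, "Swap.free"=>1924492}]
```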
Note: using command line mode requires quotes in order to parse the wildcard properly. The use of a configuration file is recommended.
The following command will load the mem plugin. The nest filter will then match the wildcard rule against the keys and nest the keys matching `Mem.*` under the new key `NEST`.
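A sketch of the command line and the equivalent configuration file, assuming the parameters described above (Wildcard `Mem.*`, Nest_under `NEST`):

```
bin/fluent-bit -i mem -p 'tag=mem.local' -F nest -p 'Operation=nest' -p 'Wildcard=Mem.*' -p 'Nest_under=NEST' -m '*' -o stdout
```

```
[INPUT]
    Name  mem
    Tag   mem.local

[FILTER]
    Name       nest
    Match      *
    Operation  nest
    Wildcard   Mem.*
    Nest_under NEST

[OUTPUT]
    Name  stdout
    Match *
```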
The output of both the command line and configuration invocations should be identical and result in the following output.
This example nests all `Mem.*` and `Swap.*` items under the `Stats` key and then reverses these actions with a `lift` operation. The output appears unchanged.
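A sketch of the two filter sections for this example (input and output sections omitted):

```
[FILTER]
    Name       nest
    Match      *
    Operation  nest
    Wildcard   Mem.*
    Wildcard   Swap.*
    Nest_under Stats

[FILTER]
    Name         nest
    Match        *
    Operation    lift
    Nested_under Stats
```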
This example takes the keys starting with `Mem.*` and nests them under `LAYER1`, which itself is then nested under `LAYER2`, which is nested under `LAYER3`.
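A sketch of the three nest filter sections (input and output sections omitted):

```
[FILTER]
    Name       nest
    Match      *
    Operation  nest
    Wildcard   Mem.*
    Nest_under LAYER1

[FILTER]
    Name       nest
    Match      *
    Operation  nest
    Wildcard   LAYER1
    Nest_under LAYER2

[FILTER]
    Name       nest
    Match      *
    Operation  nest
    Wildcard   LAYER2
    Nest_under LAYER3
```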
This example starts with the 3-level deep nesting of Example 2 and applies the `lift` filter three times to reverse the operations. The end result is that all records are back at the top level, without nesting. One prefix is added for each level that is lifted.
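A sketch of the three lift filter sections, with hypothetical `Add_prefix` values to show where each level came from (note that each `Nested_under` must account for the prefix added by the previous lift):

```
[FILTER]
    Name         nest
    Match        *
    Operation    lift
    Nested_under LAYER3
    Add_prefix   Lifted3_

[FILTER]
    Name         nest
    Match        *
    Operation    lift
    Nested_under Lifted3_LAYER2
    Add_prefix   Lifted3_Lifted2_

[FILTER]
    Name         nest
    Match        *
    Operation    lift
    Nested_under Lifted3_Lifted2_LAYER1
    Add_prefix   Lifted3_Lifted2_Lifted1_
```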
The Grep Filter plugin allows you to match or exclude specific records based on regular expression patterns.
The plugin supports the following configuration parameters:
In order to start filtering records, you can run the filter from the command line or through the configuration file. The following example assumes that you have a file called lines.txt with content like the following:
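For instance, a hypothetical lines.txt might contain:

```
aa
bb
cc
dd
```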
Note: command line mode needs special attention to quote the regular expressions properly. It's suggested to use a configuration file.
The following command will load the tail plugin and read the content of the lines.txt file. The grep filter will then apply a regular expression rule over the log field (created by the tail plugin) and only pass the records whose field value starts with aa:
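A sketch of such a command, anchoring the regular expression so that only values starting with aa pass:

```
bin/fluent-bit -i tail -p 'path=lines.txt' -F grep -p 'regex=log ^aa' -m '*' -o stdout
```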
The filter allows the use of multiple rules, which are applied in order; you can have as many Regex and Exclude entries as required.
The Lua Filter allows you to modify incoming records using custom Lua scripts. To provide a flexible filtering mechanism, it is now possible to extend Fluent Bit capabilities by writing simple filters using the Lua programming language. A Lua-based filter takes two steps:
1. Configure the Filter in the main configuration
2. Prepare a Lua script that will be used by the Filter
The plugin supports the following configuration parameters:
In order to test the filter, you can run the plugin from the command line or through the configuration file. The following examples use the dummy input plugin for data ingestion, invoke the Lua filter using the test.lua script, and call the cb_print() function, which simply prints the same information to the standard output:
From the command line you can use the following options:
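A sketch of such a command, assuming test.lua sits in the current directory:

```
bin/fluent-bit -i dummy -F lua -p script=test.lua -p call=cb_print -m '*' -o null
```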
In your main configuration file append the following Input, Filter & Output sections:
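A sketch of the corresponding sections; the null output is used here because cb_print already writes to the standard output (adjust as needed):

```
[INPUT]
    Name   dummy

[FILTER]
    Name   lua
    Match  *
    script test.lua
    call   cb_print

[OUTPUT]
    Name   null
    Match  *
```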
The life cycle of the filter has the following steps:
1. Upon Tag matching by filter_lua, it may process or bypass the record.
2. If filter_lua accepts the record, it will invoke the function defined in the call property, which is simply the name of a function defined in the Lua script.
3. The Lua function is invoked, passing each record in JSON format.
4. Upon return, the return value is validated and the corresponding action is taken (see the return values described below).
The Lua script can have one or multiple callbacks that can be used by filter_lua; the prototype is as follows:
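A sketch of the prototype, using the cb_print name from the examples above and a trivial body for illustration:

```lua
function cb_print(tag, timestamp, record)
    -- print the record content to the standard output (illustrative only)
    for key, value in pairs(record) do
        print(tag, key, value)
    end
    -- return code 0: keep the record unmodified (see the return values described below)
    return 0, timestamp, record
end
```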
Each callback must return three values:
For functional examples of this interface, please refer to the code samples provided in the source code of the project located here:
https://github.com/fluent/fluent-bit/tree/master/scripts
In Lua, Fluent Bit treats numbers as doubles. This means an integer field (e.g. IDs, log levels) will be converted to a double. To avoid this type conversion, the Type_int_key property is available.
The Record Modifier Filter plugin allows you to append fields or to exclude specific fields.
The plugin supports the following configuration parameters. Note that Remove_key and Whitelist_key are mutually exclusive.
In order to start filtering records, you can run the filter from the command line or through the configuration file.
This is a sample in_mem record to filter.
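An illustrative in_mem record (tag, timestamp and values are placeholders):

```
[0] mem.local: [1491349088.000000000, {"Mem.total"=>1016024, "Mem.used"=>716672, "Mem.free"=>299352, "Swap.total"=>2064380, "Swap.used"=>32656, "Swap.free"=>2031724}]
```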
The following configuration file appends a product name and the hostname (via an environment variable) to the record.
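A sketch of such a configuration; the product value Awesome_Tool is hypothetical:

```
[INPUT]
    Name mem
    Tag  mem.local

[FILTER]
    Name   record_modifier
    Match  *
    Record hostname ${HOSTNAME}
    Record product Awesome_Tool

[OUTPUT]
    Name  stdout
    Match *
```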
You can also run the filter from the command line:
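A roughly equivalent command line (same hypothetical values):

```
bin/fluent-bit -i mem -o stdout -F record_modifier -p 'Record=hostname ${HOSTNAME}' -p 'Record=product Awesome_Tool' -m '*'
```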
The output will be
The following configuration file removes the 'Swap.*' fields.
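A sketch of the configuration, removing the three Swap fields from the sample record above:

```
[INPUT]
    Name mem
    Tag  mem.local

[FILTER]
    Name       record_modifier
    Match      *
    Remove_key Swap.total
    Remove_key Swap.used
    Remove_key Swap.free

[OUTPUT]
    Name  stdout
    Match *
```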
You can also run the filter from the command line:
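A roughly equivalent command line:

```
bin/fluent-bit -i mem -o stdout -F record_modifier -p 'Remove_key=Swap.total' -p 'Remove_key=Swap.used' -p 'Remove_key=Swap.free' -m '*'
```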
The output will be
The following configuration file retains only the 'Mem.*' fields.
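A sketch of the configuration, keeping only the three Mem fields:

```
[INPUT]
    Name mem
    Tag  mem.local

[FILTER]
    Name          record_modifier
    Match         *
    Whitelist_key Mem.total
    Whitelist_key Mem.used
    Whitelist_key Mem.free

[OUTPUT]
    Name  stdout
    Match *
```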
You can also run the filter from the command line:
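A roughly equivalent command line:

```
bin/fluent-bit -i mem -o stdout -F record_modifier -p 'Whitelist_key=Mem.total' -p 'Whitelist_key=Mem.used' -p 'Whitelist_key=Mem.free' -m '*'
```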
The output will be
Filter plugins allow you to alter the incoming data generated by the input plugins. As of this version, the following filter plugins are available:
In order for a Filter to be applied to some data, a Match rule must exist and it must match the Tag of the incoming data.
The Modify Filter plugin allows you to change records using rules and conditions.
As an example using JSON notation, to:

- Rename `Key2` to `RenamedKey`
- Add a key `OtherKey` with value `Value3` if `OtherKey` does not yet exist

the transformation is shown below (example input followed by example output).
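A minimal sketch of the rules and the resulting transformation, with hypothetical keys `Key1` and `Key2`:

Example (input):

```json
{
  "Key1" : "Value1",
  "Key2" : "Value2"
}
```

A filter section implementing the two rules:

```
[FILTER]
    Name   modify
    Match  *
    Rename Key2 RenamedKey
    Add    OtherKey Value3
```

Example (output):

```json
{
  "Key1"       : "Value1",
  "RenamedKey" : "Value2",
  "OtherKey"   : "Value3"
}
```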
The plugin supports the following rules:
Rules are case-insensitive; parameters are not.
Any number of rules can be set in a filter instance.
Rules are applied in the order they appear, with each rule operating on the result of the previous rule.
The plugin supports the following conditions:
Conditions are case-insensitive; parameters are not.
Any number of conditions can be set.
Conditions apply to the whole filter instance and all its rules, not to individual rules. All conditions have to be true for the rules to be applied.
Note: using command line mode requires quotes in order to parse the wildcard properly. The use of a configuration file is recommended.
The output of both the command line and configuration invocations should be identical and result in the following output.
The Parser Filter plugin allows you to parse a field in event records.
The plugin supports the following configuration parameters:
This is an example of parsing a record `{"data":"100 0.5 true This is example"}`.
The plugin needs a parser file which defines how to parse the field. The path of the parser file should be set in the configuration file under the [SERVICE] section.
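A sketch of the two files, assuming a dummy input that emits the record above and a hypothetical parser named dummy_test:

parsers.conf:

```
[PARSER]
    Name   dummy_test
    Format regex
    Regex  ^(?<INT>[^ ]+) (?<FLOAT>[^ ]+) (?<BOOL>[^ ]+) (?<STRING>.+)$
```

fluent-bit.conf:

```
[SERVICE]
    Parsers_File /path/to/parsers.conf

[INPUT]
    Name  dummy
    Tag   dummy.data
    Dummy {"data":"100 0.5 true This is example"}

[FILTER]
    Name     parser
    Match    dummy.*
    Key_Name data
    Parser   dummy_test

[OUTPUT]
    Name  stdout
    Match *
```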
In the output you can see that the record `{"data":"100 0.5 true This is example"}` has been parsed.
By default, the parser plugin only keeps the parsed fields in its output.
If you enable `Preserve_Key`, the original key field is preserved in the output:
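A sketch of the filter section with the option enabled (parser name taken from the sketch above):

```
[FILTER]
    Name         parser
    Match        dummy.*
    Key_Name     data
    Parser       dummy_test
    Preserve_Key On
```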
If you enable `Reserve_Data`, all other fields are preserved in the output:
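A sketch of the filter section with the option enabled:

```
[FILTER]
    Name         parser
    Match        dummy.*
    Key_Name     data
    Parser       dummy_test
    Reserve_Data On
```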
In order to start filtering records, you can run the filter from the command line or through the configuration file. The following invokes an input plugin, which outputs the following (example):
| Key | Value Format | Description |
| --- | --- | --- |
| Regex | FIELD REGEX | Keep records in which the field matches the regular expression. |
| Exclude | FIELD REGEX | Exclude records in which the field matches the regular expression. |
| Key | Description |
| --- | --- |
| Script | Path to the Lua script that will be used. |
| Call | Lua function name that will be triggered to do filtering. It's assumed that the function is declared inside the Script defined above. |
| Type_int_key | If the key is matched, that field will be converted to integer. |
| name | description |
| --- | --- |
| tag | Name of the tag associated with the incoming record. |
| timestamp | Unix timestamp with nanoseconds associated with the incoming record. The original format is a double (seconds.nanoseconds). |
| record | Lua table with the record content. |
| name | data type | description |
| --- | --- | --- |
| code | integer | The code return value represents the result and the further action that may follow. If code equals -1, filter_lua must drop the record. If code equals 0, the record will not be modified. If code equals 1, the original timestamp or record has been modified, so it must be replaced by the returned values from timestamp (second return value) and record (third return value). |
| timestamp | double | If code equals 1, the original record timestamp will be replaced with this new value. |
| record | table | If code equals 1, the original record information will be replaced with this new value. Note that the format of this value must be a valid Lua table. |
| Key | Description |
| --- | --- |
| Record | Append fields. This parameter needs a key/value pair. |
| Remove_key | If the key is matched, that field is removed. |
| Whitelist_key | If the key is not matched, that field is removed. |
| Name | Description |
| --- | --- |
| Grep | Match or exclude specific records by patterns. |
| Kubernetes | Enrich logs with Kubernetes Metadata. |
| Lua | Filter records using Lua Scripts. |
| Parser | Parse record. |
| Record Modifier | Modify record. |
| Stdout | Print records to the standard output interface. |
| Throttle | Apply rate limit to event flow. |
| Nest | Nest records under a specified key. |
| Modify | Modifications to record. |
| Key | Value Format | Operation | Description |
| --- | --- | --- | --- |
| Operation | ENUM [nest or lift] | | Select the operation nest or lift |
| Wildcard | FIELD WILDCARD | nest | Nest records which field matches the wildcard |
| Nest_under | FIELD STRING | nest | Nest records matching the Wildcard under this key |
| Nested_under | FIELD STRING | lift | Lift records nested under the Nested_under key |
| Add_prefix | FIELD STRING | ANY | Prefix affected keys with this string |
| Remove_prefix | FIELD STRING | ANY | Remove prefix from affected keys if it matches this string |
| Operation | Parameter 1 | Parameter 2 | Description |
| --- | --- | --- | --- |
| Set | STRING:KEY | STRING:VALUE | Add a key/value pair with key KEY and value VALUE. If KEY already exists, this field is overwritten. |
| Add | STRING:KEY | STRING:VALUE | Add a key/value pair with key KEY and value VALUE if KEY does not exist. |
| Remove | STRING:KEY | NONE | Remove a key/value pair with key KEY if it exists. |
| Remove_wildcard | WILDCARD:KEY | NONE | Remove all key/value pairs with a key matching the wildcard KEY. |
| Remove_regex | REGEXP:KEY | NONE | Remove all key/value pairs with a key matching the regexp KEY. |
| Rename | STRING:KEY | STRING:RENAMED_KEY | Rename a key/value pair with key KEY to RENAMED_KEY if KEY exists and RENAMED_KEY does not exist. |
| Hard_rename | STRING:KEY | STRING:RENAMED_KEY | Rename a key/value pair with key KEY to RENAMED_KEY if KEY exists. If RENAMED_KEY already exists, it is overwritten. |
| Copy | STRING:KEY | STRING:COPIED_KEY | Copy a key/value pair with key KEY to COPIED_KEY if KEY exists and COPIED_KEY does not exist. |
| Hard_copy | STRING:KEY | STRING:COPIED_KEY | Copy a key/value pair with key KEY to COPIED_KEY if KEY exists. If COPIED_KEY already exists, it is overwritten. |
| Condition | Parameter 1 | Parameter 2 | Description |
| --- | --- | --- | --- |
| Key_exists | STRING:KEY | NONE | Is true if KEY exists. |
| Key_does_not_exist | STRING:KEY | STRING:VALUE | Is true if KEY does not exist. |
| A_key_matches | REGEXP:KEY | NONE | Is true if a key matches the regex KEY. |
| No_key_matches | REGEXP:KEY | NONE | Is true if no key matches the regex KEY. |
| Key_value_equals | STRING:KEY | STRING:VALUE | Is true if KEY exists and its value is VALUE. |
| Key_value_does_not_equal | STRING:KEY | STRING:VALUE | Is true if KEY exists and its value is not VALUE. |
| Key_value_matches | STRING:KEY | REGEXP:VALUE | Is true if KEY exists and its value matches the regex VALUE. |
| Key_value_does_not_match | STRING:KEY | REGEXP:VALUE | Is true if KEY exists and its value does not match the regex VALUE. |
| Matching_keys_have_matching_values | REGEXP:KEY | REGEXP:VALUE | Is true if all keys matching KEY have values that match VALUE. |
| Matching_keys_do_not_have_matching_values | REGEXP:KEY | REGEXP:VALUE | Is true if all keys matching KEY have values that do not match VALUE. |
| Key | Description | Default |
| --- | --- | --- |
| Key_Name | Specify the field name in the record to parse. | |
| Parser | Specify the parser name to interpret the field. Multiple Parser entries are allowed (one per line). | |
| Preserve_Key | Keep the original Key_Name field in the parsed result. If false, the field will be removed. | False |
| Reserve_Data | Keep all other original fields in the parsed result. If false, all other original fields will be removed. | False |
| Unescape_Key | If the key is an escaped string (e.g. stringified JSON), unescape the string before applying the parser. | False |
The Standard Output Filter plugin prints the data received through the input plugin to the standard output.
There are no parameters.
In order to start filtering records, you can run the filter from the command line or through the configuration file.
In your main configuration file append the following FILTER sections:
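A minimal sketch:

```
[FILTER]
    Name  stdout
    Match *
```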
The Throttle Filter plugin sets the average Rate of messages per Interval, based on a leaky bucket and sliding window algorithm. If flooded, it will leak at a certain rate.
The plugin supports the following configuration parameters:
Let's imagine we have configured:
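A hypothetical configuration (only the Window value of 5 matters for the discussion that follows):

```
[FILTER]
    Name     throttle
    Match    *
    Rate     5
    Window   5
    Interval 1s
```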
and we received 1 message in the first second, 3 messages in the second, and 5 in the third. As you can see, even though Window is actually 5, we use a "slow" start to prevent flooding during startup.
But as soon as we reach Window size * Interval, we will have a true sliding window with aggregation over the complete window.
When the average over the window is greater than the Rate, we will start dropping messages, so that
will become:
As you can see, the last pane of the window was overwritten and 1 message was dropped.
You might have noticed the possibility to configure the Interval of the Window shift. It is counter-intuitive, but there is a difference between the following two examples:
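First, a window shifted once per minute (the Window value is chosen for illustration):

```
Rate     60
Window   5
Interval 1m
```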
and
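a window shifted every second (again, the Window value is illustrative):

```
Rate     1
Window   5
Interval 1s
```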
Even though both examples allow a maximum Rate of 60 messages per minute, the first example may get all 60 messages within the first second and will then drop all the rest for the entire minute:
The second example will not allow more than 1 message per second, every second, making the output rate smoother:
It may still drop some data if the rate is ragged. I would recommend using a bigger Interval and Rate for streams of rare but important events, while keeping the Window big and the Interval small for constantly intensive inputs.
Note: It's suggested to use a configuration file.
The following command will load the tail plugin and read the content of the lines.txt file. The throttle filter will then apply a rate limit and only pass the records that are read below a certain rate:
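A sketch of such a command, with rate, window and interval chosen to match the description below:

```
bin/fluent-bit -i tail -p 'path=lines.txt' -F throttle -p 'rate=1000' -p 'window=300' -p 'interval=1s' -m '*' -o stdout
```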
The example above will pass 1000 messages per second on average over 300 seconds.
The Kubernetes Filter allows you to enrich your log files with Kubernetes metadata.
When Fluent Bit is deployed in Kubernetes as a DaemonSet and configured to read the log files from the containers (using the tail plugin), this filter aims to perform the following operations:
- Analyze the Tag and extract the following metadata:
  - POD Name
  - Namespace
  - Container Name
  - Container ID
- Query the Kubernetes API Server to obtain extra metadata for the POD in question:
  - POD ID
  - Labels
  - Annotations
The data is cached locally in memory and appended to each record.
The plugin supports the following configuration parameters:
A flexible feature of the Fluent Bit Kubernetes filter is that it allows Kubernetes Pods to suggest certain behaviors for the log processor pipeline when processing the records. At the moment it supports:
- Suggest a pre-defined parser
- Request to exclude logs
The following annotations are available:
The following Pod definition runs a Pod that emits Apache logs to the standard output; in the Annotations it suggests that the data should be processed using the pre-defined parser called apache:
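A sketch of such a Pod definition; the container image name is a placeholder:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: apache-logs
  labels:
    app: apache-logs
  annotations:
    fluentbit.io/parser: apache
spec:
  containers:
  - name: apache
    image: edsiper/apache_logs
```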
There are certain situations where the user would like to request that the log processor simply skip the logs from the Pod in question:
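A sketch using the exclude annotation (again, the container image is a placeholder):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: apache-logs
  labels:
    app: apache-logs
  annotations:
    fluentbit.io/exclude: "true"
spec:
  containers:
  - name: apache
    image: edsiper/apache_logs
```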
Note that the annotation value is a boolean, which can take a value of true or false and must be quoted.
| Key | Value Format | Description |
| --- | --- | --- |
| Rate | Integer | Amount of messages for the time. |
| Window | Integer | Amount of intervals to calculate the average over. Default 5. |
| Interval | String | Time interval, expressed in "sleep" format, e.g. 3s, 1.5m, 0.5h, etc. |
| Print_Status | Bool | Whether to print status messages with the current rate and the limits to information logs. |
| Key | Description | Default |
| --- | --- | --- |
| Buffer_Size | Set the buffer size for the HTTP client when reading responses from the Kubernetes API server. The value must conform to the Unit Size specification. | 32k |
| Kube_URL | API Server end-point. | |
| Kube_CA_File | CA certificate file. | /var/run/secrets/kubernetes.io/serviceaccount/ca.crt |
| Kube_CA_Path | Absolute path to scan for certificate files. | |
| Kube_Token_File | Token file. | /var/run/secrets/kubernetes.io/serviceaccount/token |
| Merge_Log | When enabled, check whether the log field content is a JSON string map; if so, append the map fields as part of the log structure. | Off |
| Merge_Log_Key | When Merge_Log is enabled, the filter assumes the log field from the incoming message is a JSON string and makes a structured representation of it at the same level as the log field in the map. If Merge_Log_Key is set (a string name), all the new structured fields taken from the original log content are inserted under this new key. | |
| Merge_Log_Trim | When Merge_Log is enabled, trim (remove possible \n or \r) field values. | On |
| tls.debug | Debug level between 0 (nothing) and 4 (every detail). | -1 |
| tls.verify | When enabled, turns on certificate validation when connecting to the Kubernetes API server. | On |
| Use_Journal | When enabled, the filter reads logs coming in Journald format. | Off |
| Regex_Parser | Set an alternative Parser to process the record Tag and extract pod_name, namespace_name, container_name and docker_id. The parser must be registered in a parsers file (refer to the parser filter-kube-test as an example). | |
| K8S-Logging.Parser | Allow Kubernetes Pods to suggest a pre-defined Parser (read more about it in the Kubernetes Annotations section). | Off |
| K8S-Logging.Exclude | Allow Kubernetes Pods to exclude their logs from the log processor (read more about it in the Kubernetes Annotations section). | Off |
| Annotations | Include Kubernetes resource annotations in the extra metadata. | On |
| Kube_meta_preload_cache_dir | If set, Kubernetes meta-data can be cached/pre-loaded from files in JSON format in this directory, named as namespace-pod.meta. | |
| Dummy_Meta | If set, use dummy-meta data (for test/dev purposes). | Off |
| Annotation | Description | Default |
| --- | --- | --- |
| fluentbit.io/parser[_stream][-container] | Suggest a pre-defined parser. The parser must already be registered by Fluent Bit. This option will only be processed if the Fluent Bit configuration (Kubernetes Filter) has enabled the option K8S-Logging.Parser. If present, the stream (stdout or stderr) restricts the annotation to that specific stream. If present, the container can override the parser for a specific container in the Pod. | |
| fluentbit.io/exclude | Request that Fluent Bit exclude (or not) the logs generated by the Pod. This option will only be processed if the Fluent Bit configuration (Kubernetes Filter) has enabled the option K8S-Logging.Exclude. | False |