Filters

AWS Metadata

The AWS filter enriches logs with AWS metadata. Currently the plugin adds the EC2 instance ID and availability zone to log records. To use this plugin, you must be running in EC2 and have the instance metadata service enabled.

Configuration Parameters

The plugin supports the following configuration parameters:

  • imds_version: Specify which version of the instance metadata service to use. Valid values are 'v1' or 'v2'. Default: v2.

Note: If you run Fluent Bit in a container, you may have to use instance metadata v1. The plugin behaves the same regardless of which version is used.

Usage

Metadata Fields

Currently, the plugin only adds the instance ID and availability zone. AWS plans to expand this plugin in the future.

  • az: The availability zone; for example, "us-east-1a".

  • ec2_instance_id: The EC2 instance ID.

Command Line

$ bin/fluent-bit -i dummy -F aws -m '*' -o stdout

[2020/01/17 07:57:17] [ info] [engine] started (pid=32744)
[0] dummy.0: [1579247838.000171227, {"message"=>"dummy", "az"=>"us-west-2b", "ec2_instance_id"=>"i-06bc83dbc2ac2fdf8"}]
[1] dummy.0: [1579247839.000125097, {"message"=>"dummy", "az"=>"us-west-2b", "ec2_instance_id"=>"i-06bc87dbc2ac3fdf8"}]

Configuration File

[INPUT]
    Name dummy
    Tag dummy

[FILTER]
    Name aws
    Match *
    imds_version v1

[OUTPUT]
    Name stdout
    Match *

Grep

The Grep Filter plugin allows you to match or exclude specific records based on regular expression patterns.

Configuration Parameters

The plugin supports the following configuration parameters:

  • Regex (FIELD REGEX): Keep records in which the field matches the regular expression.

  • Exclude (FIELD REGEX): Exclude records in which the field matches the regular expression.

Getting Started

In order to start filtering records, you can run the filter from the command line or through the configuration file. The following example assumes that you have a file called lines.txt with the following content:

aaa
aab
bbb
ccc
ddd
eee
fff
ggg

Command Line

Note: using the command-line mode requires special attention to quote the regular expressions properly. It's suggested to use a configuration file.

The following command will load the tail plugin and read the content of the lines.txt file. Then the grep filter will apply a regular expression rule over the log field (created by the tail plugin) and only pass records whose field value starts with aa:

$ bin/fluent-bit -i tail -p 'path=lines.txt' -F grep -p 'regex=log aa' -m '*' -o stdout

Configuration File

[INPUT]
    Name   tail
    Path   lines.txt

[FILTER]
    Name   grep
    Match  *
    Regex  log aa

[OUTPUT]
    Name   stdout
    Match  *

The filter allows the use of multiple rules, which are applied in order; you can have as many Regex and Exclude entries as required.
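
For instance, a hypothetical filter section that keeps records whose log field starts with aa and then drops any that also contain ggg could chain both rule types (the field name and patterns are illustrative):

[FILTER]
    Name     grep
    Match    *
    # rules are applied in order: first keep lines starting with "aa",
    # then drop any of those that contain "ggg"
    Regex    log ^aa
    Exclude  log ggg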

Nested fields example

Currently, nested fields are not supported. If you have records in the following format:

{
    "kubernetes": {
        "pod_name": "myapp-0",
        "namespace_name": "default",
        "pod_id": "216cd7ae-1c7e-11e8-bb40-000c298df552",
        "labels": {
            "app": "myapp"
        },
        "host": "minikube",
        "container_name": "myapp",
        "docker_id": "370face382c7603fdd309d8c6aaaf434fd98b92421ce7c7c8aafe7697d4aa362"
    }
}

and you want to exclude records that match a given nested field (for example kubernetes.labels.app), you can use a combination of the nest and grep filters. Here is an example that will exclude records that match kubernetes.labels.app: myapp:

[FILTER]
    Name         nest
    Match        *
    Operation    lift
    Nested_under kubernetes

[FILTER]
    Name         nest
    Match        *
    Operation    lift
    Nested_under labels

[FILTER]
    Name    grep
    Match   *
    Exclude app myapp

Record Modifier

The Record Modifier Filter plugin allows you to append fields or to exclude specific fields.

Configuration Parameters

The plugin supports the following configuration parameters. Note that Remove_key and Whitelist_key are exclusive:

  • Record: Append fields. This parameter needs a key and value pair.

  • Remove_key: If the key is matched, that field is removed.

  • Whitelist_key: If the key is not matched, that field is removed.

Getting Started

In order to start filtering records, you can run the filter from the command line or through the configuration file.

This is a sample in_mem record to filter:

{"Mem.total"=>1016024, "Mem.used"=>716672, "Mem.free"=>299352, "Swap.total"=>2064380, "Swap.used"=>32656, "Swap.free"=>2031724}

Append fields

The following configuration file appends the product name and hostname (via environment variable) to the record:

[INPUT]
    Name mem
    Tag  mem.local

[OUTPUT]
    Name  stdout
    Match *

[FILTER]
    Name record_modifier
    Match *
    Record hostname ${HOSTNAME}
    Record product Awesome_Tool

You can also run the filter from the command line:

$ fluent-bit -i mem -o stdout -F record_modifier -p 'Record=hostname ${HOSTNAME}' -p 'Record=product Awesome_Tool' -m '*'

The output will be:

[0] mem.local: [1492436882.000000000, {"Mem.total"=>1016024, "Mem.used"=>716672, "Mem.free"=>299352, "Swap.total"=>2064380, "Swap.used"=>32656, "Swap.free"=>2031724, "hostname"=>"localhost.localdomain", "product"=>"Awesome_Tool"}]

Remove fields with Remove_key

The following configuration file removes the 'Swap.*' fields:

[INPUT]
    Name mem
    Tag  mem.local

[OUTPUT]
    Name  stdout
    Match *

[FILTER]
    Name record_modifier
    Match *
    Remove_key Swap.total
    Remove_key Swap.used
    Remove_key Swap.free

You can also run the filter from the command line:

$ fluent-bit -i mem -o stdout -F record_modifier -p 'Remove_key=Swap.total' -p 'Remove_key=Swap.free' -p 'Remove_key=Swap.used' -m '*'

The output will be:

[0] mem.local: [1492436998.000000000, {"Mem.total"=>1016024, "Mem.used"=>716672, "Mem.free"=>295332}]

Remove fields with Whitelist_key

The following configuration file keeps only the 'Mem.*' fields:

[INPUT]
    Name mem
    Tag  mem.local

[OUTPUT]
    Name  stdout
    Match *

[FILTER]
    Name record_modifier
    Match *
    Whitelist_key Mem.total
    Whitelist_key Mem.used
    Whitelist_key Mem.free

You can also run the filter from the command line:

$ fluent-bit -i mem -o stdout -F record_modifier -p 'Whitelist_key=Mem.total' -p 'Whitelist_key=Mem.free' -p 'Whitelist_key=Mem.used' -m '*'

The output will be:

[0] mem.local: [1492436998.000000000, {"Mem.total"=>1016024, "Mem.used"=>716672, "Mem.free"=>295332}]

Throttle

The Throttle Filter plugin sets the average Rate of messages per Interval, based on a leaky bucket and sliding window algorithm. In case of flooding, it will leak at a certain rate.

Configuration Parameters

The plugin supports the following configuration parameters:

  • Rate (Integer): Amount of messages allowed per Interval.

  • Window (Integer): Amount of intervals to calculate the average over. Default 5.

  • Interval (String): Time interval, expressed in "sleep" format, e.g. 3s, 1.5m, 0.5h.

  • Print_Status (Bool): Whether to print status messages with the current rate and the limits to the information logs.

Functional description

Let's imagine we have configured:

Rate 5
Window 5
Interval 1s

and we received 1 message in the first second, 3 messages in the second, and 5 in the third. Even though Window is actually 5, we use a "slow" start to prevent flooding during startup.

+-------+-+-+-+ 
|1|3|5| | | | | 
+-------+-+-+-+ 
|  3  |         average = 3, and not 1.8 if you calculate 0 for last 2 panes. 
+-----+

But as soon as we reach Window size * Interval, we have a true sliding window with aggregation over the complete window.

+-------------+ 
|1|3|5|7|3|4| | 
+-------------+ 
  |  4.4    |   
  ----------+

When the average over the window is more than Rate, we start dropping messages, so that

+-------------+
|1|3|5|7|3|4|7|
+-------------+
    |   5.2   |
    +---------+

will become:

+-------------+
|1|3|5|7|3|4|6|
+-------------+
    |   5     |
    +---------+

As you can see, the last pane of the window was overwritten and 1 message was dropped.

Interval vs Window size

You might have noticed the possibility of configuring the Interval of the window shift. It is counterintuitive, but there is a difference between the following two examples:

Rate 60
Window 5
Interval 1m

and

Rate 1
Window 300
Interval 1s

Even though both examples allow a maximum Rate of 60 messages per minute, the first example may get all 60 messages within the first second and then drop everything else for the entire minute:

XX        XX        XX
XX        XX        XX
XX        XX        XX
XX        XX        XX
XX        XX        XX
XX        XX        XX
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

The second example, meanwhile, will not allow more than 1 message per second, making the output rate smoother:

  X    X     X    X    X    X
XXXX XXXX  XXXX XXXX XXXX XXXX
+-+-+-+-+-+--+-+-+-+-+-+-+-+-+-+

It may drop some data if the rate is ragged. It is recommended to use a bigger Interval and Rate for streams of rare but important events, and to keep Window big and Interval small for constantly intensive inputs.
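
As a sketch of that recommendation, a low-volume but important stream might be throttled with a long Interval while a constantly busy stream uses a short one (the tags and values below are illustrative):

# rare but important events: allow bursts, averaged over whole minutes
[FILTER]
    Name     throttle
    Match    audit.*
    Rate     60
    Window   5
    Interval 1m

# constantly intensive input: smooth to roughly 100 messages per second
[FILTER]
    Name     throttle
    Match    app.*
    Rate     100
    Window   300
    Interval 1s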

Command Line

Note: It's suggested to use a configuration file.

The following command will load the tail plugin and read the content of the lines.txt file. Then the throttle filter will apply a rate limit and only pass records that are read below the configured rate:

$ bin/fluent-bit -i tail -p 'path=lines.txt' -F throttle -p 'rate=1' -m '*' -o stdout

Configuration File

[INPUT]
    Name   tail
    Path   lines.txt

[FILTER]
    Name     throttle
    Match    *
    Rate     1000
    Window   300
    Interval 1s

[OUTPUT]
    Name   stdout
    Match  *

The example above will pass an average of 1000 messages per second, calculated over a 300-second window.

Lua

The Lua filter allows you to modify incoming records using custom scripts.

Due to the necessity of having a flexible filtering mechanism, it is now possible to extend Fluent Bit capabilities by writing simple filters using the Lua programming language. A Lua-based filter takes two steps:

  • Configure the Filter in the main configuration

  • Prepare a Lua script that will be used by the Filter

Configuration Parameters

The plugin supports the following configuration parameters:

  • Script: Path to the Lua script that will be used.

  • Call: Lua function name that will be triggered to do filtering. It's assumed that the function is declared inside the Script defined above.

  • Type_int_key: If these keys are matched, the fields are converted to integer. If more than one key, delimit by space.

  • Protected_mode: If enabled, the Lua script will be executed in protected mode, which prevents Fluent Bit from crashing when an invalid Lua script is executed. Default is true.

Getting Started

In order to test the filter, you can run the plugin from the command line or through the configuration file. The following examples use the dummy input plugin for data ingestion, invoke the Lua filter using the test.lua script and call the cb_print() function, which only prints the same information to the standard output.

Command Line

From the command line you can use the following options:

$ fluent-bit -i dummy -F lua -p script=test.lua -p call=cb_print -m '*' -o null

Configuration File

In your main configuration file append the following Input, Filter & Output sections:

[INPUT]
    Name   dummy

[FILTER]
    Name    lua
    Match   *
    script  test.lua
    call    cb_print

[OUTPUT]
    Name   null
    Match  *

Lua Script Filter API

The life cycle of the filter has the following steps:

  • Upon Tag matching by filter_lua, it may process or bypass the record.

  • If filter_lua accepts the record, it will invoke the function defined in the call property, which is the name of a function defined in the Lua script.

  • Invoke the Lua function, passing each record in JSON format.

  • Upon return, validate the return value and take the corresponding action (described below).

Callback Prototype

The Lua script can have one or multiple callbacks that can be used by filter_lua. The prototype is as follows:

function cb_print(tag, timestamp, record)
   return code, timestamp, record
end

Function Arguments

  • tag: Name of the tag associated with the incoming record.

  • timestamp: Unix timestamp with nanoseconds associated with the incoming record. The original format is a double (seconds.nanoseconds).

  • record: Lua table with the record content.

Return Values

Each callback must return three values:

  • code (integer): Represents the result and the further action that may follow. If code equals -1, filter_lua must drop the record. If code equals 0, the record will not be modified. If code equals 1, the original timestamp and record have been modified, so they must be replaced by the returned values of timestamp (second return value) and record (third return value). If code equals 2, the original timestamp is not modified but the record has been modified, so it must be replaced by the returned value of record (third return value). Code 2 is supported from v1.4.3.

  • timestamp (double): If code equals 1, the original record timestamp will be replaced with this new value.

  • record (table): If code equals 1, the original record information will be replaced with this new value. Note that this value must be a valid Lua table.

Code Examples

For functional examples of this interface, please refer to the code samples provided in the source code of the project, located here: https://github.com/fluent/fluent-bit/tree/master/scripts

Number Type

In Lua, Fluent Bit treats numbers as double. This means an integer field (e.g. IDs, log levels) will be converted to double. To avoid this type conversion, the Type_int_key property is available.

Protected Mode

Fluent Bit supports protected mode to prevent crashes when an invalid Lua script is executed. See also Error Handling in Application Code in the Lua documentation.
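
To tie the callback prototype and the return codes together, here is a minimal Lua sketch (the file name append_tag.lua and the function name append_tag are assumptions; the function name must match the call property):

-- append_tag.lua: illustrative only
function append_tag(tag, timestamp, record)
    -- copy the tag into the record so downstream outputs can see it
    record["fluentbit_tag"] = tag
    -- code 2: keep the original timestamp, replace the record with the modified table
    return 2, timestamp, record
end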

Parser

The Parser Filter plugin allows you to parse a field in event records.

Configuration Parameters

The plugin supports the following configuration parameters:

  • Key_Name: Specify the field name in the record to parse.

  • Parser: Specify the parser name to interpret the field. Multiple Parser entries are allowed (one per line).

  • Preserve_Key: Keep the original Key_Name field in the parsed result. If false, the field will be removed. Default: False.

  • Reserve_Data: Keep all other original fields in the parsed result. If false, all other original fields will be removed. Default: False.

  • Unescape_Key: If the key is an escaped string (e.g. stringified JSON), unescape the string before applying the parser. Default: False.
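
As a sketch of the multiple-Parser form, a filter section can simply list more than one Parser entry, one per line (the parser names first_try and dummy_test are hypothetical and must be declared in your parsers file):

[FILTER]
    Name      parser
    Match     dummy.*
    Key_Name  data
    Parser    first_try
    Parser    dummy_test
    Reserve_Data On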

Getting Started

Configuration File

This is an example of parsing a record {"data":"100 0.5 true This is example"}.

The plugin needs a parser file which defines how to parse the field:

[PARSER]
    Name dummy_test
    Format regex
    Regex ^(?<INT>[^ ]+) (?<FLOAT>[^ ]+) (?<BOOL>[^ ]+) (?<STRING>.+)$

The path of the parser file should be specified in the [SERVICE] section of the main configuration file.

[SERVICE]
    Parsers_File /path/to/parsers.conf

[INPUT]
    Name dummy
    Tag  dummy.data
    Dummy {"data":"100 0.5 true This is example"}

[FILTER]
    Name parser
    Match dummy.*
    Key_Name data
    Parser dummy_test

[OUTPUT]
    Name stdout
    Match *

The output is

$ fluent-bit -c dummy.conf
Fluent Bit v1.x.x
* Copyright (C) 2019-2020 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2017/07/06 22:33:12] [ info] [engine] started
[0] dummy.data: [1499347993.001371317, {"INT"=>"100", "FLOAT"=>"0.5", "BOOL"=>"true", "STRING"=>"This is example"}]
[1] dummy.data: [1499347994.001303118, {"INT"=>"100", "FLOAT"=>"0.5", "BOOL"=>"true", "STRING"=>"This is example"}]
[2] dummy.data: [1499347995.001296133, {"INT"=>"100", "FLOAT"=>"0.5", "BOOL"=>"true", "STRING"=>"This is example"}]
[3] dummy.data: [1499347996.001320284, {"INT"=>"100", "FLOAT"=>"0.5", "BOOL"=>"true", "STRING"=>"This is example"}]

You can see that the record {"data":"100 0.5 true This is example"} is parsed.

Preserve original fields

By default, the parser plugin only keeps the parsed fields in its output.

If you enable Reserve_Data, all other fields are preserved:

[PARSER]
    Name dummy_test
    Format regex
    Regex ^(?<INT>[^ ]+) (?<FLOAT>[^ ]+) (?<BOOL>[^ ]+) (?<STRING>.+)$
[SERVICE]
    Parsers_File /path/to/parsers.conf

[INPUT]
    Name dummy
    Tag  dummy.data
    Dummy {"data":"100 0.5 true This is example", "key1":"value1", "key2":"value2"}

[FILTER]
    Name parser
    Match dummy.*
    Key_Name data
    Parser dummy_test
    Reserve_Data On

This will produce the output:

$ fluent-bit -c dummy.conf
Fluent-Bit v0.12.0
Copyright (C) Treasure Data

[2017/07/06 22:33:12] [ info] [engine] started
[0] dummy.data: [1499347993.001371317, {"INT"=>"100", "FLOAT"=>"0.5", "BOOL"=>"true", "STRING"=>"This is example"}, "key1":"value1", "key2":"value2"]
[1] dummy.data: [1499347994.001303118, {"INT"=>"100", "FLOAT"=>"0.5", "BOOL"=>"true", "STRING"=>"This is example"}, "key1":"value1", "key2":"value2"]
[2] dummy.data: [1499347995.001296133, {"INT"=>"100", "FLOAT"=>"0.5", "BOOL"=>"true", "STRING"=>"This is example"}, "key1":"value1", "key2":"value2"]
[3] dummy.data: [1499347996.001320284, {"INT"=>"100", "FLOAT"=>"0.5", "BOOL"=>"true", "STRING"=>"This is example"}, "key1":"value1", "key2":"value2"]

If you enable Reserve_Data and Preserve_Key, the original key field will be preserved as well:

[PARSER]
    Name dummy_test
    Format regex
    Regex ^(?<INT>[^ ]+) (?<FLOAT>[^ ]+) (?<BOOL>[^ ]+) (?<STRING>.+)$
[SERVICE]
    Parsers_File /path/to/parsers.conf

[INPUT]
    Name dummy
    Tag  dummy.data
    Dummy {"data":"100 0.5 true This is example", "key1":"value1", "key2":"value2"}

[FILTER]
    Name parser
    Match dummy.*
    Key_Name data
    Parser dummy_test
    Reserve_Data On
    Preserve_Key On

This will produce the output:

$ fluent-bit -c dummy.conf
Fluent-Bit v0.12.0
Copyright (C) Treasure Data

[2017/07/06 22:33:12] [ info] [engine] started
[0] dummy.data: [1499347993.001371317, {"data":"100 0.5 true This is example", "INT"=>"100", "FLOAT"=>"0.5", "BOOL"=>"true", "STRING"=>"This is example"}]
[1] dummy.data: [1499347994.001303118, {"data":"100 0.5 true This is example", "INT"=>"100", "FLOAT"=>"0.5", "BOOL"=>"true", "STRING"=>"This is example"}]
[2] dummy.data: [1499347995.001296133, {"data":"100 0.5 true This is example", "INT"=>"100", "FLOAT"=>"0.5", "BOOL"=>"true", "STRING"=>"This is example"}]
[3] dummy.data: [1499347996.001320284, {"data":"100 0.5 true This is example", "INT"=>"100", "FLOAT"=>"0.5", "BOOL"=>"true", "STRING"=>"This is example"}]

Time Resolution and Fractional Seconds

Some timestamps might have fractional seconds, like 2017-05-17T15:44:31.187512963Z. The %L format option for Time_Format is provided as a way to indicate that content must be interpreted as fractional seconds. To parse the previous example, you could specify Time_Format %Y-%m-%dT%H:%M:%S.%LZ.

The option %L is only valid when used after seconds (%S) or seconds since the Epoch (%s), e.g: %S.%L or %s.%L.

Support for %L was added in Fluent Bit 0.12.
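
For instance, a hypothetical JSON parser for records whose time field looks like 2017-05-17T15:44:31.187512963Z could declare its time format as follows:

[PARSER]
    Name        json_nano
    Format      json
    Time_Key    time
    # %L after %S captures the fractional seconds
    Time_Format %Y-%m-%dT%H:%M:%S.%LZ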

Standard Output

The stdout output plugin prints the data received through the input plugin to the standard output. Its usage is very simple, as follows:

Configuration Parameters

  • Format: Specify the data format to be printed. Supported formats are msgpack, json, json_lines and json_stream. Default: msgpack.

  • json_date_key: Specify the name of the date field in the output. Default: date.

  • json_date_format: Specify the format of the date. Supported formats are double, iso8601 (e.g. 2018-05-30T09:39:52.000681Z) and epoch. Default: double.

Command Line

$ bin/fluent-bit -i cpu -o stdout -v

We have specified to gather CPU usage metrics and print them out to the standard output in a human readable way:

$ bin/fluent-bit -i cpu -o stdout -p format=msgpack -v
Fluent Bit v1.x.x
* Copyright (C) 2019-2020 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2016/10/07 21:52:01] [ info] [engine] started
[0] cpu.0: [1475898721, {"cpu_p"=>0.500000, "user_p"=>0.250000, "system_p"=>0.250000, "cpu0.p_cpu"=>0.000000, "cpu0.p_user"=>0.000000, "cpu0.p_system"=>0.000000, "cpu1.p_cpu"=>0.000000, "cpu1.p_user"=>0.000000, "cpu1.p_system"=>0.000000, "cpu2.p_cpu"=>0.000000, "cpu2.p_user"=>0.000000, "cpu2.p_system"=>0.000000, "cpu3.p_cpu"=>1.000000, "cpu3.p_user"=>0.000000, "cpu3.p_system"=>1.000000}]
[1] cpu.0: [1475898722, {"cpu_p"=>0.250000, "user_p"=>0.250000, "system_p"=>0.000000, "cpu0.p_cpu"=>0.000000, "cpu0.p_user"=>0.000000, "cpu0.p_system"=>0.000000, "cpu1.p_cpu"=>1.000000, "cpu1.p_user"=>1.000000, "cpu1.p_system"=>0.000000, "cpu2.p_cpu"=>0.000000, "cpu2.p_user"=>0.000000, "cpu2.p_system"=>0.000000, "cpu3.p_cpu"=>0.000000, "cpu3.p_user"=>0.000000, "cpu3.p_system"=>0.000000}]
[2] cpu.0: [1475898723, {"cpu_p"=>0.750000, "user_p"=>0.250000, "system_p"=>0.500000, "cpu0.p_cpu"=>2.000000, "cpu0.p_user"=>1.000000, "cpu0.p_system"=>1.000000, "cpu1.p_cpu"=>0.000000, "cpu1.p_user"=>0.000000, "cpu1.p_system"=>0.000000, "cpu2.p_cpu"=>1.000000, "cpu2.p_user"=>0.000000, "cpu2.p_system"=>1.000000, "cpu3.p_cpu"=>0.000000, "cpu3.p_user"=>0.000000, "cpu3.p_system"=>0.000000}]
[3] cpu.0: [1475898724, {"cpu_p"=>1.000000, "user_p"=>0.750000, "system_p"=>0.250000, "cpu0.p_cpu"=>1.000000, "cpu0.p_user"=>1.000000, "cpu0.p_system"=>0.000000, "cpu1.p_cpu"=>2.000000, "cpu1.p_user"=>1.000000, "cpu1.p_system"=>1.000000, "cpu2.p_cpu"=>1.000000, "cpu2.p_user"=>1.000000, "cpu2.p_system"=>0.000000, "cpu3.p_cpu"=>1.000000, "cpu3.p_user"=>1.000000, "cpu3.p_system"=>0.000000}]

No more, no less, it just works.
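
If you prefer line-delimited JSON instead of the msgpack-style dump shown above, the format parameters described earlier can be combined, for example:

$ bin/fluent-bit -i cpu -o stdout -p format=json_lines -p json_date_format=iso8601

Each record is then printed as one JSON object per line, with the timestamp rendered in the date field as an ISO 8601 string.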

Rewrite Tag

Powerful and flexible routing

Tags are what makes routing possible. Tags are set in the configuration of the Input definitions where the records are generated, but there are certain scenarios where it might be useful to modify the Tag in the pipeline so we can perform more advanced and flexible routing.

The rewrite_tag filter allows you to re-emit a record under a new Tag. Once a record has been re-emitted, the original record can be preserved or discarded.

How it Works

It works by defining rules that match specific record key content against a regular expression; if a match exists, a new record with the defined Tag will be emitted. Multiple rules can be specified and they are processed in order until one of them matches.

The new Tag to define can be composed by:

  • Alphabet characters & Numbers

  • Original Tag string or part of it

  • Regular expression group captures

  • Any key or sub-key of the processed record

  • Environment variables

Configuration Parameters

The rewrite_tag filter supports the following configuration parameters:

  • Rule: Defines the matching criteria and the format of the Tag for the matching record. The Rule format has four components: KEY REGEX NEW_TAG KEEP. For more specific details of the Rule format and its composition, read the next section.

  • Emitter_Name: When the filter emits a record under the new Tag, an internal emitter plugin takes care of the job. Since this emitter exposes metrics like any other component of the pipeline, you can use this property to configure an optional name for it.

  • Emitter_Storage.type: Define a buffering mechanism for the new records created. Note these records are part of the emitter plugin. This option supports the values memory (default) or filesystem. If the destination for the new records might face backpressure due to latency or a slow network, we strongly recommend enabling filesystem mode.
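
A minimal sketch of a filter section that names the internal emitter and switches it to filesystem buffering could look like this (the match pattern and rule are illustrative):

[FILTER]
    Name                  rewrite_tag
    Match                 app.*
    Rule                  $level ^(error)$ alerts.$TAG false
    Emitter_Name          alert_emitter
    Emitter_Storage.type  filesystem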

Rules

A rule aims to define matching criteria and specify how to create a new Tag for a record. You can define one or multiple rules in the same configuration section. The rules have the following format:

$KEY  REGEX  NEW_TAG  KEEP

Key

The key represents the name of the record key that holds the value that we want to use to match our regular expression. A key name is specified and prefixed with a $. Consider the following structured record (formatted for readability):

{
  "name": "abc-123",
  "ss": {
    "s1": {
      "s2": "flb"
    }
  }
}

If we wanted to match against the value of the key name, we must use $name. The key selector is flexible enough to allow matching nested levels of sub-maps from the structure. If we wanted to check the value of the nested key s2, we can do it by specifying $ss['s1']['s2']. In short:

  • $name = "abc-123"

  • $ss['s1']['s2'] = "flb"

Note that a key must point to a value that contains a string; it's not valid for numbers, booleans, maps or arrays.

Regex

Using a simple regular expression we can specify a matching pattern to use against the value of the key specified above; we can also take advantage of group capturing to create custom placeholder values.

If we wanted to match any record whose $name contains a value of the form string-number, like the example provided above, we might use:

^([a-z]+)-([0-9]+)$

Note that in our example we are using parentheses; this means that we are specifying groups of data. If the pattern matches the value, a placeholder will be created that can be consumed by the NEW_TAG section.

If $name equals abc-123 , then the following placeholders will be created:

  • $0 = "abc-123"

  • $1 = "abc"

  • $2 = "123"

If the regular expression does not match an incoming record, the rule will be skipped and the next rule (if any) will be processed.

New Tag

If a regular expression has matched the value of the defined key in the rule, we are ready to compose a new Tag for that specific record. The Tag is a concatenated string that can contain any of the following characters: a-z, A-Z, 0-9 and .-,.

A Tag can take any string value from the matching record, the original Tag itself, an environment variable or a general placeholder.

Consider the following incoming data on the rule:

  • Tag = aa.bb.cc

  • Record = {"name": "abc-123", "ss": {"s1": {"s2": "flb"}}}

  • Environment variable $HOSTNAME = fluent

With such information we could create a very custom Tag for our record like the following:

newtag.$TAG.$TAG[1].$1.$ss['s1']['s2'].out.${HOSTNAME}

the expected Tag to be generated will be:

newtag.aa.bb.cc.bb.abc.flb.out.fluent

We make use of placeholders, record content and environment variables.

Keep

If a rule matches the criteria, the filter will emit a copy of the record with the newly defined Tag. The KEEP property takes a boolean value that defines whether the original record with the old Tag must be preserved and continue in the pipeline, or be discarded.

You can use true or false to decide the expected behavior. There is no default value and this is a mandatory field in the rule.

Configuration Example

The following configuration example will emit a dummy (hand-crafted) record, the filter will rewrite the tag, discard the old record and print the new record to the standard output interface:

[SERVICE]
    Flush     1
    Log_Level info

[INPUT]
    Name   dummy
    Dummy  {"tool": "fluent", "sub": {"s1": {"s2": "bit"}}}
    Tag    test_tag

[FILTER]
    Name          rewrite_tag
    Match         test_tag
    Rule          $tool ^(fluent)$  from.$TAG.new.$tool.$sub['s1']['s2'].out false
    Emitter_Name  re_emitted

[OUTPUT]
    Name   stdout
    Match  from.*

The original tag test_tag will be rewritten as from.test_tag.new.fluent.bit.out:

$ bin/fluent-bit -c example.conf
Fluent Bit v1.x.x
* Copyright (C) 2019-2020 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

...
[0] from.test_tag.new.fluent.bit.out: [1580436933.000050569, {"tool"=>"fluent", "sub"=>{"s1"=>{"s2"=>"bit"}}}]

Monitoring

As described in the Monitoring section, every component of the Fluent Bit pipeline exposes metrics. The basic metrics exposed by this filter are drop_records and add_records; they summarize the total number of records dropped from the incoming data chunk and of new records added.

Since rewrite_tag emits new records that go through the beginning of the pipeline, it exposes an additional metric called emit_records that summarizes the total number of emitted records.

Understanding the Metrics

Using the configuration provided above, if we query the metrics exposed in the HTTP interface we will see the following:

Command:

$ curl  http://127.0.0.1:2020/api/v1/metrics/ | jq

Metrics output:

{
  "input": {
    "dummy.0": {
      "records": 2,
      "bytes": 80
    },
    "emitter_for_rewrite_tag.0": {
      "records": 1,
      "bytes": 40
    }
  },
  "filter": {
    "rewrite_tag.0": {
      "drop_records": 2,
      "add_records": 0,
      "emit_records": 2
    }
  },
  "output": {
    "stdout.0": {
      "proc_records": 1,
      "proc_bytes": 40,
      "errors": 0,
      "retries": 0,
      "retries_failed": 0
    }
  }
}

The dummy input generated two records; the filter dropped them from the chunk and emitted two new ones under a different Tag.

The records generated are handled by the internal Emitter, so the new records are summarized in the Emitter metrics; take a look at the entry called emitter_for_rewrite_tag.0.

What is the Emitter ?

The Emitter is an internal Fluent Bit plugin that allows other components of the pipeline to emit custom records. In this case, rewrite_tag creates an Emitter instance to use exclusively for emitting records; that way we have granular control of who is emitting what.

The Emitter name in the metrics can be changed by setting the Emitter_Name configuration property described above.

Modify

The Modify Filter plugin allows you to change records using rules and conditions.

Example usage

As an example, using JSON notation, to:

  • Rename Key2 to RenamedKey

  • Add a key OtherKey with value Value3 if OtherKey does not yet exist

Example (input)

{
  "Key1"     : "Value1",
  "Key2"     : "Value2"
}

Example (output)

{
  "Key1"       : "Value1",
  "RenamedKey" : "Value2",
  "OtherKey"   : "Value3"
}

Configuration Parameters

Rules

The plugin supports the following rules:

  • Set (STRING:KEY, STRING:VALUE): Add a key/value pair with key KEY and value VALUE. If KEY already exists, this field is overwritten.

  • Add (STRING:KEY, STRING:VALUE): Add a key/value pair with key KEY and value VALUE if KEY does not exist.

  • Remove (STRING:KEY): Remove a key/value pair with key KEY if it exists.

  • Remove_wildcard (WILDCARD:KEY): Remove all key/value pairs with key matching wildcard KEY.

  • Remove_regex (REGEXP:KEY): Remove all key/value pairs with key matching regexp KEY.

  • Rename (STRING:KEY, STRING:RENAMED_KEY): Rename a key/value pair with key KEY to RENAMED_KEY if KEY exists AND RENAMED_KEY does not exist.

  • Hard_rename (STRING:KEY, STRING:RENAMED_KEY): Rename a key/value pair with key KEY to RENAMED_KEY if KEY exists. If RENAMED_KEY already exists, this field is overwritten.

  • Copy (STRING:KEY, STRING:COPIED_KEY): Copy a key/value pair with key KEY to COPIED_KEY if KEY exists AND COPIED_KEY does not exist.

  • Hard_copy (STRING:KEY, STRING:COPIED_KEY): Copy a key/value pair with key KEY to COPIED_KEY if KEY exists. If COPIED_KEY already exists, this field is overwritten.

  • Rules are case insensitive, parameters are not

  • Any number of rules can be set in a filter instance.

  • Rules are applied in the order they appear, with each rule operating on the result of the previous rule.

Conditions

The plugin supports the following conditions:

  • Key_exists (STRING:KEY): True if KEY exists.

  • Key_does_not_exist (STRING:KEY, STRING:VALUE): True if KEY does not exist.

  • A_key_matches (REGEXP:KEY): True if a key matches regex KEY.

  • No_key_matches (REGEXP:KEY): True if no key matches regex KEY.

  • Key_value_equals (STRING:KEY, STRING:VALUE): True if KEY exists and its value is VALUE.

  • Key_value_does_not_equal (STRING:KEY, STRING:VALUE): True if KEY exists and its value is not VALUE.

  • Key_value_matches (STRING:KEY, REGEXP:VALUE): True if KEY exists and its value matches VALUE.

  • Key_value_does_not_match (STRING:KEY, REGEXP:VALUE): True if KEY exists and its value does not match VALUE.

  • Matching_keys_have_matching_values (REGEXP:KEY, REGEXP:VALUE): True if all keys matching KEY have values that match VALUE.

  • Matching_keys_do_not_have_matching_values (REGEXP:KEY, REGEXP:VALUE): True if all keys matching KEY have values that do not match VALUE.

  • Conditions are case insensitive, parameters are not

  • Any number of conditions can be set.

  • Conditions apply to the whole filter instance and all its rules. Not to individual rules.

  • All conditions have to be true for the rules to be applied.

Example #1 - Add and Rename

In order to start filtering records, you can run the filter from the command line or through the configuration file. The following invokes the Memory Usage Input Plugin, which outputs the following (example),

[0] memory: [1488543156, {"Mem.total"=>1016044, "Mem.used"=>841388, "Mem.free"=>174656, "Swap.total"=>2064380, "Swap.used"=>139888, "Swap.free"=>1924492}]
[1] memory: [1488543157, {"Mem.total"=>1016044, "Mem.used"=>841420, "Mem.free"=>174624, "Swap.total"=>2064380, "Swap.used"=>139888, "Swap.free"=>1924492}]
[2] memory: [1488543158, {"Mem.total"=>1016044, "Mem.used"=>841420, "Mem.free"=>174624, "Swap.total"=>2064380, "Swap.used"=>139888, "Swap.free"=>1924492}]
[3] memory: [1488543159, {"Mem.total"=>1016044, "Mem.used"=>841420, "Mem.free"=>174624, "Swap.total"=>2064380, "Swap.used"=>139888, "Swap.free"=>1924492}]

Using command Line

Note: Using the command-line mode requires quotes to parse the wildcard properly. The use of a configuration file is recommended.

bin/fluent-bit -i mem \
  -p 'tag=mem.local' \
  -F modify \
  -p 'Add=Service1 SOMEVALUE' \
  -p 'Add=Service2 SOMEVALUE3' \
  -p 'Add=Mem.total2 TOTALMEM2' \
  -p 'Rename=Mem.free MEMFREE' \
  -p 'Rename=Mem.used MEMUSED' \
  -p 'Rename=Swap.total SWAPTOTAL' \
  -p 'Add=Mem.total TOTALMEM' \
  -m '*' \
  -o stdout

Configuration File

[INPUT]
    Name mem
    Tag  mem.local

[OUTPUT]
    Name  stdout
    Match *

[FILTER]
    Name modify
    Match *
    Add Service1 SOMEVALUE
    Add Service3 SOMEVALUE3
    Add Mem.total2 TOTALMEM2
    Rename Mem.free MEMFREE
    Rename Mem.used MEMUSED
    Rename Swap.total SWAPTOTAL
    Add Mem.total TOTALMEM

Result

The output of both the command line and configuration invocations should be identical and result in the following output.

[2018/04/06 01:35:13] [ info] [engine] started
[0] mem.local: [1522980610.006892802, {"Mem.total"=>4050908, "MEMUSED"=>738100, "MEMFREE"=>3312808, "SWAPTOTAL"=>1046524, "Swap.used"=>0, "Swap.free"=>1046524, "Service1"=>"SOMEVALUE", "Service3"=>"SOMEVALUE3", "Mem.total2"=>"TOTALMEM2"}]
[1] mem.local: [1522980611.000658288, {"Mem.total"=>4050908, "MEMUSED"=>738068, "MEMFREE"=>3312840, "SWAPTOTAL"=>1046524, "Swap.used"=>0, "Swap.free"=>1046524, "Service1"=>"SOMEVALUE", "Service3"=>"SOMEVALUE3", "Mem.total2"=>"TOTALMEM2"}]
[2] mem.local: [1522980612.000307652, {"Mem.total"=>4050908, "MEMUSED"=>738068, "MEMFREE"=>3312840, "SWAPTOTAL"=>1046524, "Swap.used"=>0, "Swap.free"=>1046524, "Service1"=>"SOMEVALUE", "Service3"=>"SOMEVALUE3", "Mem.total2"=>"TOTALMEM2"}]
[3] mem.local: [1522980613.000122671, {"Mem.total"=>4050908, "MEMUSED"=>738068, "MEMFREE"=>3312840, "SWAPTOTAL"=>1046524, "Swap.used"=>0, "Swap.free"=>1046524, "Service1"=>"SOMEVALUE", "Service3"=>"SOMEVALUE3", "Mem.total2"=>"TOTALMEM2"}]

Example #2 - Conditionally Add and Remove

Configuration File

[INPUT]
    Name mem
    Tag  mem.local
    Interval_Sec 1

[FILTER]
    Name    modify
    Match   mem.*

    Condition Key_Does_Not_Exist cpustats
    Condition Key_Exists Mem.used

    Set cpustats UNKNOWN

[FILTER]
    Name    modify
    Match   mem.*

    Condition Key_Value_Does_Not_Equal cpustats KNOWN

    Add sourcetype memstats

[FILTER]
    Name    modify
    Match   mem.*

    Condition Key_Value_Equals cpustats UNKNOWN

    Remove_wildcard Mem
    Remove_wildcard Swap
    Add cpustats_more STILL_UNKNOWN

[OUTPUT]
    Name           stdout
    Match          *

Result

[2018/06/14 07:37:34] [ info] [engine] started (pid=1493)
[0] mem.local: [1528925855.000223110, {"cpustats"=>"UNKNOWN", "sourcetype"=>"memstats", "cpustats_more"=>"STILL_UNKNOWN"}]
[1] mem.local: [1528925856.000064516, {"cpustats"=>"UNKNOWN", "sourcetype"=>"memstats", "cpustats_more"=>"STILL_UNKNOWN"}]
[2] mem.local: [1528925857.000165965, {"cpustats"=>"UNKNOWN", "sourcetype"=>"memstats", "cpustats_more"=>"STILL_UNKNOWN"}]
[3] mem.local: [1528925858.000152319, {"cpustats"=>"UNKNOWN", "sourcetype"=>"memstats", "cpustats_more"=>"STILL_UNKNOWN"}]

Example #3 - Emoji

Configuration File

[INPUT]
    Name mem
    Tag  mem.local

[OUTPUT]
    Name  stdout
    Match *

[FILTER]
    Name modify
    Match *

    Remove_Wildcard Mem
    Remove_Wildcard Swap
    Set This_plugin_is_on 🔥
    Set 🔥 is_hot
    Copy 🔥 💦
    Rename  💦 ❄️
    Set ❄️ is_cold
    Set 💦 is_wet

Result

[2018/06/14 07:46:11] [ info] [engine] started (pid=21875)
[0] mem.local: [1528926372.000197916, {"This_plugin_is_on"=>"🔥", "🔥"=>"is_hot", "❄️"=>"is_cold", "💦"=>"is_wet"}]
[1] mem.local: [1528926373.000107868, {"This_plugin_is_on"=>"🔥", "🔥"=>"is_hot", "❄️"=>"is_cold", "💦"=>"is_wet"}]
[2] mem.local: [1528926374.000181042, {"This_plugin_is_on"=>"🔥", "🔥"=>"is_hot", "❄️"=>"is_cold", "💦"=>"is_wet"}]
[3] mem.local: [1528926375.000090841, {"This_plugin_is_on"=>"🔥", "🔥"=>"is_hot", "❄️"=>"is_cold", "💦"=>"is_wet"}]
[0] mem.local: [1528926376.000610974, {"This_plugin_is_on"=>"🔥", "🔥"=>"is_hot", "❄️"=>"is_cold", "💦"=>"is_wet"}]

Kubernetes

The Fluent Bit Kubernetes Filter allows you to enrich your log files with Kubernetes metadata.

When Fluent Bit is deployed in Kubernetes as a DaemonSet and configured to read the log files from the containers (using tail or systemd input plugins), this filter aims to perform the following operations:

  • Analyze the Tag and extract the following metadata:

    • Pod Name

    • Namespace

    • Container Name

    • Container ID

  • Query Kubernetes API Server to obtain extra metadata for the POD in question:

    • Pod ID

    • Labels

    • Annotations

The data is cached locally in memory and appended to each record.

Configuration Parameters

The plugin supports the following configuration parameters:

  • Buffer_Size: Set the buffer size for the HTTP client when reading responses from the Kubernetes API server. The value must follow the Unit Size specification. Default: 32k.

  • Kube_URL: API Server end-point. Default: https://kubernetes.default.svc:443.

  • Kube_CA_File: CA certificate file. Default: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt.

  • Kube_CA_Path: Absolute path to scan for certificate files.

  • Kube_Token_File: Token file. Default: /var/run/secrets/kubernetes.io/serviceaccount/token.

  • Kube_Tag_Prefix: When the source records come from the Tail input plugin, this option specifies the prefix used in the Tail configuration. Default: kube.var.log.containers.

  • Merge_Log: When enabled, check if the log field content is a JSON string map; if so, append the map fields as part of the log structure. Default: Off.

  • Merge_Log_Key: When Merge_Log is enabled, the filter assumes the log field of the incoming message is a JSON string and builds a structured representation of it at the same level as the log field in the map. If Merge_Log_Key is set (a string name), all the new structured fields taken from the original log content are instead inserted under this key.

  • Merge_Log_Trim: When Merge_Log is enabled, trim (remove possible \n or \r) field values. Default: On.

  • Merge_Parser: Optional parser name to specify how to parse the data contained in the log key. Recommended for developers or testing only.

  • Keep_Log: When Keep_Log is disabled, the log field is removed from the incoming message once it has been successfully merged (Merge_Log must be enabled as well). Default: On.

  • tls.debug: Debug level between 0 (nothing) and 4 (every detail). Default: -1.

  • tls.verify: When enabled, turns on certificate validation when connecting to the Kubernetes API server. Default: On.

  • Use_Journal: When enabled, the filter reads logs coming in Journald format. Default: Off.

  • Regex_Parser: Set an alternative Parser to process the record Tag and extract pod_name, namespace_name, container_name and docker_id. The parser must be registered in a parsers file (refer to the parser filter-kube-test as an example).

  • K8S-Logging.Parser: Allow Kubernetes Pods to suggest a pre-defined Parser (read more about it in the Kubernetes Annotations section). Default: Off.

  • K8S-Logging.Exclude: Allow Kubernetes Pods to exclude their logs from the log processor (read more about it in the Kubernetes Annotations section). Default: Off.

  • Labels: Include Kubernetes resource labels in the extra metadata. Default: On.

  • Annotations: Include Kubernetes resource annotations in the extra metadata. Default: On.

  • Kube_meta_preload_cache_dir: If set, Kubernetes metadata can be cached/pre-loaded from files in JSON format in this directory, named as namespace-pod.meta.

  • Dummy_Meta: If set, use dummy metadata (for test/dev purposes). Default: Off.
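
As a sketch, a typical DaemonSet-style filter section combining several of these parameters might look like this (values are illustrative):

[FILTER]
    Name                 kubernetes
    Match                kube.*
    Kube_URL             https://kubernetes.default.svc:443
    Buffer_Size          64k
    Merge_Log            On
    Keep_Log             Off
    Labels               On
    Annotations          Off
    K8S-Logging.Parser   On
    K8S-Logging.Exclude  On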

Processing the 'log' value

Kubernetes Filter aims to provide several ways to process the data contained in the log key. The following explanation of the workflow assumes that your original Docker parser defined in parsers.conf is as follows:

[PARSER]
    Name         docker
    Format       json
    Time_Key     time
    Time_Format  %Y-%m-%dT%H:%M:%S.%L
    Time_Keep    On

Since Fluent Bit v1.2, we do not suggest the use of decoders (Decode_Field_As) if you are using an Elasticsearch database in the output, in order to avoid data type conflicts.

To perform processing of the log key, it's mandatory to enable the Merge_Log configuration property in this filter; then the following processing order applies:

  • If a Pod suggests a parser, the filter will use that parser to process the content of log.

  • If the option Merge_Parser was set and the Pod did not suggest a parser, process the log content using the parser suggested in the configuration.

  • If the Pod did not suggest a parser and no Merge_Parser is set, try to handle the content as JSON.

If log value processing fails, the value is left untouched. The order above is not chained; it's exclusive and the filter will try only one of the options above, not all of them.

Kubernetes Annotations

A flexible feature of the Fluent Bit Kubernetes filter is that it allows Kubernetes Pods to suggest certain behaviors for the log processor pipeline when processing the records. At the moment it supports:

  • Suggest a pre-defined parser

  • Request to exclude logs

The following annotations are available:

  • fluentbit.io/parser[_stream][-container]: Suggest a pre-defined parser. The parser must be registered already by Fluent Bit. This option will only be processed if the Fluent Bit configuration (Kubernetes Filter) has enabled the option K8S-Logging.Parser. If present, the stream (stdout or stderr) restricts the annotation to that specific stream. If present, the container can override a specific container in a Pod.

  • fluentbit.io/exclude[_stream][-container]: Request Fluent Bit to exclude (or not) the logs generated by the Pod. This option will only be processed if the Fluent Bit configuration (Kubernetes Filter) has enabled the option K8S-Logging.Exclude. Default: False.

Annotation Examples in Pod definition

Suggest a parser

The following Pod definition runs a Pod that emits Apache logs to the standard output. In the Annotations it suggests that the data should be processed using the pre-defined parser called apache:

apiVersion: v1
kind: Pod
metadata:
  name: apache-logs
  labels:
    app: apache-logs
  annotations:
    fluentbit.io/parser: apache
spec:
  containers:
  - name: apache
    image: edsiper/apache_logs

Request to exclude logs

There are certain situations where the user would like to request that the log processor simply skip the logs from the Pod in question:

apiVersion: v1
kind: Pod
metadata:
  name: apache-logs
  labels:
    app: apache-logs
  annotations:
    fluentbit.io/exclude: "true"
spec:
  containers:
  - name: apache
    image: edsiper/apache_logs

Note that the annotation value is a boolean, which can be true or false, and must be quoted.
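
For example, following the [_stream] suffix described in the annotations table, a Pod could (hypothetically) suggest a parser only for its standard error stream:

  annotations:
    fluentbit.io/parser_stderr: apache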

Workflow of Tail + Kubernetes Filter

Kubernetes Filter depends on either the Tail or Systemd input plugins to process and enrich records with Kubernetes metadata. Here we will explain the workflow of Tail and how its configuration is correlated with the Kubernetes filter. Consider the following configuration example (just for demo purposes, not production):

[INPUT]
    Name    tail
    Tag     kube.*
    Path    /var/log/containers/*.log
    Parser  docker

[FILTER]
    Name             kubernetes
    Match            kube.*
    Kube_URL         https://kubernetes.default.svc:443
    Kube_CA_File     /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    Kube_Token_File  /var/run/secrets/kubernetes.io/serviceaccount/token
    Kube_Tag_Prefix  kube.var.log.containers.
    Merge_Log        On
    Merge_Log_Key    log_processed

In the input section, the Tail plugin will monitor all files ending in .log in the path /var/log/containers/. For every file it will read every line and apply the docker parser. Then the records are emitted to the next step with an expanded Tag.

Tail supports Tag expansion, which means that if a Tag has a star character (*), it will be replaced with the absolute path of the monitored file, so if your file name and path is:

/var/log/container/apache-logs-annotated_default_apache-aeeccc7a9f00f6e4e066aeff0434cf80621215071f1b20a51e8340aa7c35eac6.log

then the Tag for every record of that file becomes:

kube.var.log.containers.apache-logs-annotated_default_apache-aeeccc7a9f00f6e4e066aeff0434cf80621215071f1b20a51e8340aa7c35eac6.log

note that slashes are replaced with dots.

When the Kubernetes Filter runs, it will try to match all records that start with kube. (note the ending dot), so records from the file mentioned above will hit the matching rule and the filter will try to enrich the records.

The Kubernetes Filter does not care where the logs come from, but it does care about the absolute name of the monitored file, because that information contains the pod name and namespace name that are used to retrieve the metadata associated with the running Pod from the Kubernetes Master/API Server.

If the configuration property Kube_Tag_Prefix was configured (available in Fluent Bit >= 1.1.x), it will use that value to remove the prefix that was appended to the Tag in the previous Input section. Note that the configuration property defaults to kube.var.log.containers., so the previous Tag content will be transformed from:

kube.var.log.containers.apache-logs-annotated_default_apache-aeeccc7a9f00f6e4e066aeff0434cf80621215071f1b20a51e8340aa7c35eac6.log

to:

apache-logs-annotated_default_apache-aeeccc7a9f00f6e4e066aeff0434cf80621215071f1b20a51e8340aa7c35eac6.log

The transformation above does not modify the original Tag; it just creates a new representation for the filter to perform the metadata lookup.

That new value is used by the filter to look up the pod name and namespace; for that purpose it uses an internal regular expression:

(?<pod_name>[a-z0-9](?:[-a-z0-9]*[a-z0-9])?(?:\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*)_(?<namespace_name>[^_]+)_(?<container_name>.+)-(?<docker_id>[a-z0-9]{64})\.log$

If you want to know more details, check the source code of that definition here.

You can see on the Rubular.com web site how this operation is performed; check the following demo link:

  • https://rubular.com/r/HZz3tYAahj6JCd

Custom Regex

Under certain uncommon conditions, a user may want to alter that hard-coded regular expression; for that purpose the option Regex_Parser can be used (documented above).

Final Comments

So at this point the filter is able to gather the values of pod_name and namespace. With that information it will check in the local cache (an internal hash table) whether metadata for that key pair exists; if so, it will enrich the record with the cached metadata, otherwise it will connect to the Kubernetes Master/API Server and retrieve that information.

Nest

The Nest Filter plugin allows you to operate on or with nested data. Its modes of operation are:

  • nest - Take a set of records and place them in a map

  • lift - Take a map by key and lift its records up

Example usage (nest)

As an example using JSON notation, to nest keys matching the Wildcard value Key* under a new key NestKey the transformation becomes,

Example (input)

{
  "Key1"     : "Value1",
  "Key2"     : "Value2",
  "OtherKey" : "Value3"
}

Example (output)

{
  "OtherKey" : "Value3"
  "NestKey"  : {
    "Key1"     : "Value1",
    "Key2"     : "Value2",
  }
}

Example usage (lift)

As an example using JSON notation, to lift keys nested under the Nested_under value NestKey* the transformation becomes,

Example (input)

{
  "OtherKey" : "Value3"
  "NestKey"  : {
    "Key1"     : "Value1",
    "Key2"     : "Value2",
  }
}

Example (output)

{
  "Key1"     : "Value1",
  "Key2"     : "Value2",
  "OtherKey" : "Value3"
}

Configuration Parameters

The plugin supports the following configuration parameters:

  • Operation (ENUM [nest or lift]): Select the operation, nest or lift.

  • Wildcard (FIELD WILDCARD, operation: nest): Nest records whose field matches the wildcard.

  • Nest_under (FIELD STRING, operation: nest): Nest records matching the Wildcard under this key.

  • Nested_under (FIELD STRING, operation: lift): Lift records nested under the Nested_under key.

  • Add_prefix (FIELD STRING, operation: any): Prefix affected keys with this string.

  • Remove_prefix (FIELD STRING, operation: any): Remove the prefix from affected keys if it matches this string.

Getting Started

In order to start filtering records, you can run the filter from the command line or through the configuration file. The following invokes the Memory Usage Input Plugin, which outputs the following (example),

[0] memory: [1488543156, {"Mem.total"=>1016044, "Mem.used"=>841388, "Mem.free"=>174656, "Swap.total"=>2064380, "Swap.used"=>139888, "Swap.free"=>1924492}]

Example #1 - nest

Command Line

Note: Using the command-line mode requires quotes to parse the wildcard properly. The use of a configuration file is recommended.

The following command will load the mem plugin. Then the nest filter will match the wildcard rule to the keys and nest the keys matching Mem.* under the new key Memstats.

$ bin/fluent-bit -i mem -p 'tag=mem.local' -F nest -p 'Operation=nest' -p 'Wildcard=Mem.*' -p 'Nest_under=Memstats' -p 'Remove_prefix=Mem.' -m '*' -o stdout

Configuration File

[INPUT]
    Name mem
    Tag  mem.local

[OUTPUT]
    Name  stdout
    Match *

[FILTER]
    Name nest
    Match *
    Operation nest
    Wildcard Mem.*
    Nest_under Memstats
    Remove_prefix Mem.

Result

The output of both the command line and configuration invocations should be identical and result in the following output.

[2018/04/06 01:35:13] [ info] [engine] started
[0] mem.local: [1522978514.007359767, {"Swap.total"=>1046524, "Swap.used"=>0, "Swap.free"=>1046524, "Memstats"=>{"total"=>4050908, "used"=>714984, "free"=>3335924}}]

Example #1 - nest and lift undo

This example nests all Mem.* and Swap.* items under the Stats key and then reverses these actions with a lift operation. The output appears unchanged.

Configuration File

[INPUT]
    Name mem
    Tag  mem.local

[OUTPUT]
    Name  stdout
    Match *

[FILTER]
    Name nest
    Match *
    Operation nest
    Wildcard Mem.*
    Wildcard Swap.*
    Nest_under Stats
    Add_prefix NESTED

[FILTER]
    Name nest
    Match *
    Operation lift
    Nested_under Stats
    Remove_prefix NESTED

Result

[2018/06/21 17:42:37] [ info] [engine] started (pid=17285)
[0] mem.local: [1529566958.000940636, {"Mem.total"=>8053656, "Mem.used"=>6940380, "Mem.free"=>1113276, "Swap.total"=>16532988, "Swap.used"=>1286772, "Swap.free"=>15246216}]

Example #2 - nest 3 levels deep

This example takes the keys starting with Mem.* and nests them under LAYER1, which itself is then nested under LAYER2, which is nested under LAYER3.

Configuration File

[INPUT]
    Name mem
    Tag  mem.local

[OUTPUT]
    Name  stdout
    Match *

[FILTER]
    Name nest
    Match *
    Operation nest
    Wildcard Mem.*
    Nest_under LAYER1

[FILTER]
    Name nest
    Match *
    Operation nest
    Wildcard LAYER1*
    Nest_under LAYER2

[FILTER]
    Name nest
    Match *
    Operation nest
    Wildcard LAYER2*
    Nest_under LAYER3

Result

[0] mem.local: [1524795923.009867831, {"Swap.total"=>1046524, "Swap.used"=>0, "Swap.free"=>1046524, "LAYER3"=>{"LAYER2"=>{"LAYER1"=>{"Mem.total"=>4050908, "Mem.used"=>1112036, "Mem.free"=>2938872}}}}]


{
  "Swap.total"=>1046524,
  "Swap.used"=>0,
  "Swap.free"=>1046524,
  "LAYER3"=>{
    "LAYER2"=>{
      "LAYER1"=>{
        "Mem.total"=>4050908,
        "Mem.used"=>1112036,
        "Mem.free"=>2938872
      }
    }
  }
}

Example #3 - multiple nest and lift filters with prefix

This example starts with the 3-level deep nesting of Example 2 and applies the lift filter three times to reverse the operations. The end result is that all records are at the top level, without nesting, again. One prefix is added for each level that is lifted.

Configuration file

[INPUT]
    Name mem
    Tag  mem.local

[OUTPUT]
    Name  stdout
    Match *

[FILTER]
    Name nest
    Match *
    Operation nest
    Wildcard Mem.*
    Nest_under LAYER1

[FILTER]
    Name nest
    Match *
    Operation nest
    Wildcard LAYER1*
    Nest_under LAYER2

[FILTER]
    Name nest
    Match *
    Operation nest
    Wildcard LAYER2*
    Nest_under LAYER3

[FILTER]
    Name nest
    Match *
    Operation lift
    Nested_under LAYER3
    Add_prefix Lifted3_

[FILTER]
    Name nest
    Match *
    Operation lift
    Nested_under Lifted3_LAYER2
    Add_prefix Lifted3_Lifted2_

[FILTER]
    Name nest
    Match *
    Operation lift
    Nested_under Lifted3_Lifted2_LAYER1
    Add_prefix Lifted3_Lifted2_Lifted1_

Result

[0] mem.local: [1524862951.013414798, {"Swap.total"=>1046524, "Swap.used"=>0, "Swap.free"=>1046524, "Lifted3_Lifted2_Lifted1_Mem.total"=>4050908, "Lifted3_Lifted2_Lifted1_Mem.used"=>1253912, "Lifted3_Lifted2_Lifted1_Mem.free"=>2796996}]


{
  "Swap.total"=>1046524, 
  "Swap.used"=>0, 
  "Swap.free"=>1046524, 
  "Lifted3_Lifted2_Lifted1_Mem.total"=>4050908, 
  "Lifted3_Lifted2_Lifted1_Mem.used"=>1253912, 
  "Lifted3_Lifted2_Lifted1_Mem.free"=>2796996
}