Fluent Bit is a powerful log processing tool that can deal with different sources and formats. In addition, it provides several filters that can be used to perform custom modifications. This flexibility is great, but as your pipeline grows it's strongly recommended to validate your data and structure.
We encourage Fluent Bit users to integrate data validation in their CI systems.
A simplified view of our data processing pipeline is as follows:
In a normal production environment, many inputs, filters, and outputs are defined in the configuration, so integrating continuous validation of your configuration against expected results is a must. For this requirement, Fluent Bit provides a specific filter called expect, which can be used to validate expected keys and values in your records and to take an action when a rule is not met.
As an example, consider the following pipeline, where the source of data is a plain file with JSON content followed by two filters: grep to exclude certain records, and record_modifier to alter the record content by adding and removing specific keys.
Ideally you want to add validation checkpoints between each step, so you can know whether your data structure is correct; we do this by using the expect filter.
The expect filter sets rules that aim to validate certain criteria, such as:

- Does the record contain a key A?
- Does the record not contain a key A?
- Is the value of the record key A equal to NULL?
- Is the value of the record key A different from NULL?
- Is the value of the record key A equal to B?
Every expect filter configuration can expose specific rules to validate the content of your records; it supports the following configuration properties:
| Property | Description |
| --- | --- |
| key_exists | Check if a key with a given name exists in the record. |
| key_not_exists | Check if a key does not exist in the record. |
| key_val_is_null | Check that the value of the key is NULL. |
| key_val_is_not_null | Check that the value of the key is NOT NULL. |
| key_val_eq | Check that the value of the key equals the given value in the configuration. |
| action | Action to take when a rule does not match. The available options are warn or exit. |
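As an illustrative sketch, a single expect filter can declare several of these rules at once and merely warn instead of aborting; the key names message and level used here are hypothetical and not part of the example that follows:

```
# Hypothetical checks: warn when 'message' is missing or null,
# or when a 'level' key is still present in the record
[FILTER]
    name                expect
    match               *
    key_exists          message
    key_val_is_not_null message
    key_not_exists      level
    action              warn
```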
Consider the following JSON file called data.log with the following content:

```
{"color": "blue", "label": {"name": null}}
{"color": "red", "label": {"name": "abc"}, "meta": "data"}
{"color": "green", "label": {"name": "abc"}, "meta": null}
```
The following Fluent Bit configuration file sets up a pipeline to consume the log above and apply an expect filter to validate that the keys color and label exist:
```
[SERVICE]
    flush        1
    log_level    info
    parsers_file parsers.conf

[INPUT]
    name         tail
    path         ./data.log
    parser       json
    exit_on_eof  on

# First 'expect' filter to validate that our data was structured properly
[FILTER]
    name         expect
    match        *
    key_exists   color
    key_exists   $label['name']
    action       exit

[OUTPUT]
    name         stdout
    match        *
```
Note that if for some reason the JSON parser fails or is missing in the tail input (line 9, the parser json entry), the expect filter will trigger the exit action. As a test, go ahead and comment out or remove that line.
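For reference, the test amounts to something like the following sketch of the input section. Without a parser, the tail input stores each raw line under a log key, so the key_exists color rule no longer matches and the exit action aborts the process:

```
[INPUT]
    name         tail
    path         ./data.log
    # parser     json
    # With the parser disabled, each record only carries the raw 'log' key,
    # so the first 'expect' filter's 'key_exists color' rule fails.
    exit_on_eof  on
```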
As a second step, we will extend the pipeline by adding a grep filter to match records whose label map contains a key called name with the value abc, followed by an expect filter to re-validate that condition:
```
[SERVICE]
    flush        1
    log_level    info
    parsers_file parsers.conf

[INPUT]
    name         tail
    path         ./data.log
    parser       json
    exit_on_eof  on

# First 'expect' filter to validate that our data was structured properly
[FILTER]
    name         expect
    match        *
    key_exists   color
    key_exists   label
    action       exit

# Match only records whose map 'label' contains a key 'name' = 'abc'
[FILTER]
    name         grep
    match        *
    regex        $label['name'] ^abc$

# Check that in every record the value of 'label'['name'] equals 'abc'
[FILTER]
    name         expect
    match        *
    key_val_eq   $label['name'] abc
    action       exit

# Append a new key to the record using an environment variable
[FILTER]
    name         record_modifier
    match        *
    record       hostname ${HOSTNAME}

# Check that every record contains the 'hostname' key
[FILTER]
    name         expect
    match        *
    key_exists   hostname
    action       exit

[OUTPUT]
    name         stdout
    match        *
```
When deploying your configuration in production, you might want to remove the expect filters from your configuration, since they add unnecessary extra work unless you want 100% coverage of checks at runtime.
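One way to avoid maintaining two copies of the pipeline, sketched here under the assumption that the checks can be grouped together in a separate file (the file name validation.conf is hypothetical), is to @INCLUDE the expect filters only in the configuration used for testing. Keep in mind that filter ordering matters, so this works best when the checks can sit at one point of the pipeline:

```
# Test/CI variant of the configuration; the production variant
# simply omits the @INCLUDE line below.
[SERVICE]
    flush        1
    log_level    info
    parsers_file parsers.conf

[INPUT]
    name         tail
    path         ./data.log
    parser       json
    exit_on_eof  on

# Pull in the 'expect' filters only for the test configuration
@INCLUDE validation.conf

[OUTPUT]
    name         stdout
    match        *
```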