Inputs

Collectd

The collectd input plugin allows you to receive datagrams from the collectd service.

Configuration Parameters

The plugin supports the following configuration parameters:

Key
Description
Default

Configuration Examples

Here is a basic configuration example.

With this configuration, Fluent Bit listens on 0.0.0.0:25826 and outputs incoming datagram packets to stdout.

You must set the same types.db files that your collectd server uses. Otherwise, Fluent Bit may not be able to interpret the payload properly.

Docker Log Based Metrics

The docker input plugin allows you to collect Docker container metrics such as memory usage and CPU consumption.

Configuration Parameters

The plugin supports the following configuration parameters:

Key
Description
Default

Docker Events

The docker events input plugin uses the docker API to capture server events. A complete list of possible events returned by this plugin can be found in the Docker API documentation.

Configuration Parameters

This plugin supports the following configuration parameters:

Key
Description

Listen

Set the address to listen to

0.0.0.0

Port

Set the port to listen to

25826

TypesDB

Set the data specification file

/usr/share/collectd/types.db

Threaded

Indicates whether to run this input in its own thread.

false

[INPUT]
    Name         collectd
    Listen       0.0.0.0
    Port         25826
    TypesDB      /usr/share/collectd/types.db,/etc/collectd/custom.db

[OUTPUT]
    Name   stdout
    Match  *
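The same example expressed in YAML format, following the YAML snippets shown for other plugins in this document (the lowercase key names are an assumption based on those snippets):

```
pipeline:
    inputs:
        - name: collectd
          listen: 0.0.0.0
          port: 25826
          typesdb: /usr/share/collectd/types.db,/etc/collectd/custom.db
    outputs:
        - name: stdout
          match: '*'
```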

Polling interval in seconds

1

Include

A space-separated list of containers to include

Exclude

A space-separated list of containers to exclude

Threaded

Indicates whether to run this input in its own thread.

false

If you set neither Include nor Exclude, the plugin will try to get metrics from all the running containers.

Configuration File

Here is an example configuration that collects metrics from two docker instances (6bab19c3a0f9 and 14159be4ca2c).

[INPUT]
    Name         docker
    Include      6bab19c3a0f9 14159be4ca2c

[OUTPUT]
    Name   stdout
    Match  *

This configuration will produce records like below.

Interval_Sec

Default

Unix_Path

The docker socket unix path

/var/run/docker.sock

Buffer_Size

The size of the buffer used to read docker events (in bytes)

8192

Parser

Specify the name of a parser to interpret the entry as a structured message.

None

Key

When a message is unstructured (no parser applied), it's appended as a string under the key name message.

message

Reconnect.Retry_limits

The maximum number of retries allowed. The plugin tries to reconnect to the docker socket when EOF is detected.

5

Reconnect.Retry_interval

Command Line

Configuration File

In your main configuration file append the following Input & Output sections:

[INPUT]
    Name   docker_events

[OUTPUT]
    Name   stdout
    Match  *

Disk I/O Log Based Metrics

The disk input plugin gathers information about the disk throughput of the running system at a fixed interval and reports it.

The Disk I/O metrics plugin creates metrics that are log-based, such as JSON payload. For Prometheus-based metrics, see the Node Exporter Metrics input plugin.

Configuration Parameters

The plugin supports the following configuration parameters:

Key
Description
Default

Getting Started

In order to get disk usage from your system, you can run the plugin from the command line or through the configuration file:

Command Line

Configuration File

In your main configuration file append the following Input & Output sections:

Note: Total interval (sec) = Interval_Sec + (Interval_Nsec / 1000000000).

e.g. 1.5s = 1s + 500000000ns

CPU Log Based Metrics

The cpu input plugin measures the CPU usage of a process or, by default, the whole system (considering each CPU core). It reports values as percentages for every configured interval of time. At the moment this plugin is only available for Linux.

The following table describes the information generated by the plugin. The keys below represent the data used by the overall system; all values associated with the keys are percentages (0 to 100%):

The CPU metrics plugin creates metrics that are log-based, such as JSON payload. For Prometheus-based metrics, see the Node Exporter Metrics input plugin.

key
description
[1] docker.0: [1571994772.00555745, {"id"=>"6bab19c3a0f9", "name"=>"postgresql", "cpu_used"=>172102435, "mem_used"=>5693400, "mem_limit"=>4294963200}]
$ fluent-bit -i docker_events -o stdout
pipeline:
    inputs:
        - name: docker_events
    outputs:
        - name: stdout
          match: '*'

The retry interval, in seconds.

1

Threaded

Indicates whether to run this input in its own thread.

false

pipeline:
    inputs:
        - name: docker
          include: 6bab19c3a0f9 14159be4ca2c
    outputs:
        - name: stdout
          match: '*'

Interval_Sec

Polling interval (seconds).

1

Interval_NSec

Polling interval (nanosecond).

0

Dev_Name

Device name to limit the target (e.g. sda). If not set, in_disk gathers information from all disks and partitions.

all disks

Threaded

Indicates whether to run this input in its own thread.

false
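For instance, to limit collection to a single device using the Dev_Name parameter above, a minimal sketch (sda is an illustrative device name):

```
[INPUT]
    Name     disk
    Tag      disk
    Dev_Name sda

[OUTPUT]
    Name   stdout
    Match  *
```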

$ fluent-bit -i disk -o stdout
Fluent Bit v1.x.x
* Copyright (C) 2019-2020 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2017/01/28 16:58:16] [ info] [engine] started
[0] disk.0: [1485590297, {"read_size"=>0, "write_size"=>0}]
[1] disk.0: [1485590298, {"read_size"=>0, "write_size"=>0}]
[2] disk.0: [1485590299, {"read_size"=>0, "write_size"=>0}]
[3] disk.0: [1485590300, {"read_size"=>0, "write_size"=>11997184}]
[INPUT]
    Name          disk
    Tag           disk
    Interval_Sec  1
    Interval_NSec 0
[OUTPUT]
    Name   stdout
    Match  *
pipeline:
    inputs:
        - name: disk
          tag: disk
          interval_sec: 1
          interval_nsec: 0
    outputs:
        - name: stdout
          match: '*'

user_p

CPU usage in User mode; in short, the CPU usage by user-space programs. This value takes into account the number of CPU cores in the system.

system_p

CPU usage in Kernel mode; in short, the CPU usage by the Kernel. This value takes into account the number of CPU cores in the system.

threaded

Indicates whether to run this input in its own thread. Default: false.

In addition to the keys reported in the above table, similar content is created per CPU core. The cores are listed from 0 to N as the Kernel reports:

key
description

cpuN.p_cpu

Represents the total CPU usage by core N.

cpuN.p_user

Total CPU spent in user mode or user-space programs associated with this core.

cpuN.p_system

Total CPU spent in system or kernel mode associated with this core.

Configuration Parameters

The plugin supports the following configuration parameters:

Key
Description
Default

Interval_Sec

Polling interval in seconds

1

Interval_NSec

Polling interval in nanoseconds

0

PID

Specify the ID (PID) of a running process in the system. By default the plugin monitors the whole system but if this option is set, it will only monitor the given process ID.
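For example, to monitor a single process rather than the whole system, a sketch using the PID parameter (the PID value is hypothetical):

```
[INPUT]
    Name cpu
    Tag  my_proc
    PID  1234

[OUTPUT]
    Name  stdout
    Match *
```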

Getting Started

In order to get the statistics of the CPU usage of your system, you can run the plugin from the command line or through the configuration file:

Command Line

As described above, the CPU input plugin gathers the overall usage every second and flushes the information to the output on the fifth second. In this example we used the stdout plugin to demonstrate the output records. In a real use case you may want to flush this information to a central aggregator such as Fluentd or Elasticsearch.

Configuration File

In your main configuration file append the following Input & Output sections:

cpu_p

CPU usage of the overall system; this value is the sum of time spent in user and kernel space. It takes into account the number of CPU cores in the system.

Dummy

The dummy input plugin generates dummy events. It is useful for testing, debugging, benchmarking, and getting started with Fluent Bit.

Configuration Parameters

The plugin supports the following configuration parameters:

Key
Description
Default
[INPUT]
    Name cpu
    Tag  my_cpu

[OUTPUT]
    Name  stdout
    Match *
pipeline:
    inputs:
        - name: cpu
          tag: my_cpu

    outputs:
        - name: stdout
          match: '*'
$ build/bin/fluent-bit -i cpu -t my_cpu -o stdout -m '*'
Fluent Bit v1.x.x
* Copyright (C) 2019-2020 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2019/09/02 10:46:29] [ info] starting engine
[0] [1452185189, {"cpu_p"=>7.00, "user_p"=>5.00, "system_p"=>2.00, "cpu0.p_cpu"=>10.00, "cpu0.p_user"=>8.00, "cpu0.p_system"=>2.00, "cpu1.p_cpu"=>6.00, "cpu1.p_user"=>4.00, "cpu1.p_system"=>2.00}]
[1] [1452185190, {"cpu_p"=>6.50, "user_p"=>5.00, "system_p"=>1.50, "cpu0.p_cpu"=>6.00, "cpu0.p_user"=>5.00, "cpu0.p_system"=>1.00, "cpu1.p_cpu"=>7.00, "cpu1.p_user"=>5.00, "cpu1.p_system"=>2.00}]
[2] [1452185191, {"cpu_p"=>7.50, "user_p"=>5.00, "system_p"=>2.50, "cpu0.p_cpu"=>7.00, "cpu0.p_user"=>3.00, "cpu0.p_system"=>4.00, "cpu1.p_cpu"=>6.00, "cpu1.p_user"=>6.00, "cpu1.p_system"=>0.00}]
[3] [1452185192, {"cpu_p"=>4.50, "user_p"=>3.50, "system_p"=>1.00, "cpu0.p_cpu"=>6.00, "cpu0.p_user"=>5.00, "cpu0.p_system"=>1.00, "cpu1.p_cpu"=>5.00, "cpu1.p_user"=>3.00, "cpu1.p_system"=>2.00}]

Dummy

Dummy JSON record.

{"message":"dummy"}

Metadata

Dummy JSON metadata.

{}

Start_time_sec

Dummy base timestamp, in seconds.

0

Start_time_nsec

Dummy base timestamp, in nanoseconds.

0

Rate

The rate at which messages are generated, expressed as messages per second.

1

Interval_sec

Getting Started

You can run the plugin from the command line or through the configuration file:

Command Line

Configuration File

In your main configuration file append the following Input & Output sections:

MQTT

The MQTT input plugin allows you to retrieve messages/data from MQTT control packets over a TCP connection. The incoming data must be a JSON map.

Configuration Parameters

The plugin supports the following configuration parameters:

Key
Description
Default

Getting Started

In order to start listening for MQTT messages, you can run the plugin from the command line or through the configuration file:

Command Line

Since the MQTT input plugin lets Fluent Bit behave as a server, we need to dispatch some messages using an MQTT client. In the following example the mosquitto tool is used for this purpose:

The following command line will send a message to the MQTT input plugin:

Configuration File

In your main configuration file append the following Input & Output sections:

Kernel Logs

The kmsg input plugin reads the Linux Kernel log buffer from the beginning; it gets every record and parses its fields as priority, sequence, seconds, useconds, and message.

Configuration Parameters

Key
Description
Default

Getting Started

In order to start getting the Linux Kernel messages, you can run the plugin from the command line or through the configuration file:

Command Line

As described above, the plugin processes all messages that the Linux Kernel reported; the output has been truncated for clarity.

Configuration File

In your main configuration file append the following Input & Output sections:

Memory Metrics

The mem input plugin gathers information about the memory and swap usage of the running system at a fixed interval and reports the total amount of memory and the amount free.

Getting Started

In order to get memory and swap usage from your system, you can run the plugin from the command line or through the configuration file:

Exec Wasi

The exec_wasi input plugin allows you to execute a WASM program (WASI target) like an external program and collect event logs from it.

Configuration Parameters

The plugin supports the following configuration parameters:

Key
Description

StatsD

The statsd input plugin allows you to receive metrics via the StatsD protocol.


[INPUT]
    Name   dummy
    Dummy {"message": "custom dummy"}

[OUTPUT]
    Name   stdout
    Match  *
pipeline:
  inputs:
    - name: dummy
      dummy: '{"message": "custom dummy"}'
  outputs:
    - name: stdout
      match: '*'
$ fluent-bit -i dummy -o stdout
Fluent Bit v2.x.x
* Copyright (C) 2015-2022 The Fluent Bit Authors
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[0] dummy.0: [[1686451466.659962491, {}], {"message"=>"dummy"}]
[0] dummy.0: [[1686451467.659679509, {}], {"message"=>"dummy"}]

Set time interval, in seconds, at which every message is generated. If set, Rate configuration is ignored.

0

Interval_nsec

Set time interval, in nanoseconds, at which every message is generated. If set, Rate configuration is ignored.

0

Samples

If set, the number of events is limited. For example, if Samples=3, the plugin generates only three events and stops.

none

Copies

Number of messages to generate each time they are generated.

1

Flush_on_startup

If set to true, the first dummy event is generated at startup.

false

Threaded

Indicates whether to run this input in its own thread.

false
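Combining the parameters above, a sketch that emits two custom events per second and stops after three events (the message text and values are illustrative):

```
[INPUT]
    Name    dummy
    Dummy   {"message": "test"}
    Rate    2
    Samples 3

[OUTPUT]
    Name   stdout
    Match  *
```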

Listen

Listener network interface.

0.0.0.0

Port

TCP port to listen for connections.

1883

Payload_Key

Specify the key where the payload key/value will be preserved.

none

Threaded

Indicates whether to run this input in its own thread.

false

Prio_Level

The log level to filter. The kernel log is dropped if its priority is more than prio_level. Allowed values are 0-8. Default is 8. 8 means all logs are saved.

8

Threaded

Indicates whether to run this input in its own thread.

false

Command Line

Threading

You can enable the threaded setting to run this input in its own thread.

Configuration File

In your main configuration file append the following Input & Output sections:

$ fluent-bit -i mqtt -t data -o stdout -m '*'
Fluent Bit v1.x.x
* Copyright (C) 2019-2020 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2016/05/20 14:22:52] [ info] starting engine
[0] data: [1463775773, {"topic"=>"some/topic", "key1"=>123, "key2"=>456}]
$ mosquitto_pub  -m '{"key1": 123, "key2": 456}' -t some/topic
[INPUT]
    Name   mqtt
    Tag    data
    Listen 0.0.0.0
    Port   1883

[OUTPUT]
    Name   stdout
    Match  *
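The equivalent in YAML format, matching the YAML snippets used elsewhere in this document (the lowercase key names are an assumption based on those snippets):

```
pipeline:
    inputs:
        - name: mqtt
          tag: data
          listen: 0.0.0.0
          port: 1883
    outputs:
        - name: stdout
          match: '*'
```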
$ bin/fluent-bit -i kmsg -t kernel -o stdout -m '*'
Fluent Bit v1.x.x
* Copyright (C) 2019-2020 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[0] kernel: [1463421823, {"priority"=>3, "sequence"=>1814, "sec"=>11706, "usec"=>732233, "msg"=>"ERROR @wl_cfg80211_get_station : Wrong Mac address, mac = 34:a8:4e:d3:40:ec profile =20:3a:07:9e:4a:ac"}]
[1] kernel: [1463421823, {"priority"=>3, "sequence"=>1815, "sec"=>11706, "usec"=>732300, "msg"=>"ERROR @wl_cfg80211_get_station : Wrong Mac address, mac = 34:a8:4e:d3:40:ec profile =20:3a:07:9e:4a:ac"}]
[2] kernel: [1463421829, {"priority"=>3, "sequence"=>1816, "sec"=>11712, "usec"=>729728, "msg"=>"ERROR @wl_cfg80211_get_station : Wrong Mac address, mac = 34:a8:4e:d3:40:ec profile =20:3a:07:9e:4a:ac"}]
[3] kernel: [1463421829, {"priority"=>3, "sequence"=>1817, "sec"=>11712, "usec"=>729802, "msg"=>"ERROR @wl_cfg80211_get_station : Wrong Mac address, mac = 34:a8:4e:d3:40:ec
...
[INPUT]
    Name   kmsg
    Tag    kernel

[OUTPUT]
    Name   stdout
    Match  *
pipeline:
    inputs:
        - name: kmsg
          tag: kernel
    outputs:
        - name: stdout
          match: '*'
[INPUT]
    Name   mem
    Tag    memory

[OUTPUT]
    Name   stdout
    Match  *
pipeline:
    inputs:
        - name: mem
          tag: memory
    outputs:
        - name: stdout
          match: '*'
$ fluent-bit -i mem -t memory -o stdout -m '*'
Fluent Bit v1.x.x
* Copyright (C) 2019-2020 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2017/03/03 21:12:35] [ info] [engine] started
[0] memory: [1488543156, {"Mem.total"=>1016044, "Mem.used"=>841388, "Mem.free"=>174656, "Swap.total"=>2064380, "Swap.used"=>139888, "Swap.free"=>1924492}]
[1] memory: [1488543157, {"Mem.total"=>1016044, "Mem.used"=>841420, "Mem.free"=>174624, "Swap.total"=>2064380, "Swap.used"=>139888, "Swap.free"=>1924492}]
[2] memory: [1488543158, {"Mem.total"=>1016044, "Mem.used"=>841420, "Mem.free"=>174624, "Swap.total"=>2064380, "Swap.used"=>139888, "Swap.free"=>1924492}]
[3] memory: [1488543159, {"Mem.total"=>1016044, "Mem.used"=>841420, "Mem.free"=>174624, "Swap.total"=>2064380, "Swap.used"=>139888, "Swap.free"=>1924492}]

WASI_Path

The path of the WASM program file.

Parser

Specify the name of a parser to interpret the entry as a structured message.

Accessible_Paths

Specify the whitelist of paths accessible from within WASM programs.

Interval_Sec

Polling interval (seconds).

Interval_NSec

Polling interval (nanosecond).

Wasm_Heap_Size

Size of the heap for Wasm execution. Review unit sizes for allowed values.

Wasm_Stack_Size

Size of the stack for Wasm execution. Review unit sizes for allowed values.

Buf_Size

Size of the buffer (check unit sizes for allowed values).

Configuration Examples

Here is a configuration example. in_exec_wasi can use a parser. To retrieve structured data from the WASM program, you have to create a parsers.conf:

Note that Time_Format should match the format of the timestamp you are using. In this document, we assume that the WASM program writes JSON-style strings to stdout.

Then, you can specify the above parsers.conf in the main fluent-bit configuration:

Configuration Parameters

The plugin supports the following configuration parameters:

Key
Description
Default

Listen

Listener network interface.

0.0.0.0

Port

UDP port to listen for connections.

8125

Threaded

Indicates whether to run this input in its own thread.

false

Configuration Examples

Here is a configuration example.

[INPUT]
    Name   statsd
    Listen 0.0.0.0
    Port   8125

[OUTPUT]
    Name   stdout
    Match  *

Now you can input metrics through the UDP port as follows:

Fluent Bit will produce the following records:


Health

The health input plugin allows you to check how healthy a TCP server is. It does the check by issuing a TCP connection at a fixed interval.

Configuration Parameters

The plugin supports the following configuration parameters:

Key
Description

Getting Started

In order to start performing the checks, you can run the plugin from the command line or through the configuration file:

Command Line

From the command line you can let Fluent Bit generate the checks with the following options:

Configuration File

In your main configuration file append the following Input & Output sections:

Testing

Once Fluent Bit is running, you will see some random values in the output interface similar to this:

Fluent Bit Metrics

A plugin to collect Fluent Bit's own metrics

Fluent Bit exposes its own metrics to allow you to monitor the internals of your pipeline. The collected metrics can be processed similarly to those from the Prometheus Node Exporter input plugin. They can be sent to output plugins including Prometheus Exporter, Prometheus Remote Write, or OpenTelemetry.

Important note: Metrics collected with Node Exporter Metrics flow through a separate pipeline from logs and current filters do not operate on top of metrics.

Configuration

Key
Description
Default

Getting Started

Simple Configuration File

In the following configuration file, the input plugin fluentbit_metrics collects metrics every 2 seconds and exposes them through the Prometheus Exporter output plugin on HTTP/TCP port 2021.

You can test the exposed metrics using curl:

Windows Event Log

The winlog input plugin allows you to read Windows Event Log.

Configuration Parameters

The plugin supports the following configuration parameters:

Key
Description
Default

Note that if you do not set db, the plugin will read channels from the beginning on each startup.

Configuration Examples

Configuration File

Here is a minimum configuration example.

Note that some Windows Event Log channels (like Security) require administrator privileges for reading. In this case, you need to run fluent-bit as an administrator.

Command Line

If you want to do a quick test, you can run this plugin from the command line.

Network I/O Log Based Metrics

The netif input plugin gathers network traffic information of the running system at a fixed interval and reports it.

The Network I/O Metrics plugin creates metrics that are log-based, such as JSON payload. For Prometheus-based metrics, see the Node Exporter Metrics input plugin.

Configuration Parameters

The plugin supports the following configuration parameters:

Elasticsearch

The elasticsearch input plugin handles both Elasticsearch and OpenSearch Bulk API requests.

Configuration Parameters

The plugin supports the following configuration parameters:

Key
Description
Default value

Podman Metrics

The Podman Metrics input plugin allows you to collect metrics from podman containers, so they can be exposed later as, for example, Prometheus counters and gauges.

Configuration Parameters

Serial Interface

The serial input plugin allows you to retrieve messages/data from a serial interface.

Configuration Parameters

Key
Description

Windows Event Log (winevtlog)

The winevtlog input plugin allows you to read Windows Event Log using the newer API from winevt.h.

Configuration Parameters

The plugin supports the following configuration parameters:

Key
Description

Random

The random input plugin generates very simple random value samples using the device interface /dev/urandom; if it is not available, a unix timestamp is used as the value.

Configuration Parameters

The plugin supports the following configuration parameters:

Key
Description
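A minimal configuration sketch for the random plugin (the tag and the use of stdout are illustrative):

```
[INPUT]
    Name random
    Tag  rand

[OUTPUT]
    Name   stdout
    Match  *
```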
[PARSER]
    Name        wasi
    Format      json
    Time_Key    time
    Time_Format %Y-%m-%dT%H:%M:%S.%L %z
[SERVICE]
    Flush        1
    Daemon       Off
    Parsers_File parsers.conf
    Log_Level    info
    HTTP_Server  Off
    HTTP_Listen  0.0.0.0
    HTTP_Port    2020

[INPUT]
    Name exec_wasi
    Tag  exec.wasi.local
    WASI_Path /path/to/wasi/program.wasm
    Accessible_Paths .,/path/to/accessible
    Parser wasi

[OUTPUT]
    Name  stdout
    Match *
echo "click:10|c|@0.1" | nc -q0 -u 127.0.0.1 8125
echo "active:99|g"     | nc -q0 -u 127.0.0.1 8125
[0] statsd.0: [1574905088.971380537, {"type"=>"counter", "bucket"=>"click", "value"=>10.000000, "sample_rate"=>0.100000}]
[0] statsd.0: [1574905141.863344517, {"type"=>"gauge", "bucket"=>"active", "value"=>99.000000, "incremental"=>0}]
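The same datagrams can be emitted from code. Here is a minimal Python sketch (the host, port, and metric names are illustrative) that formats StatsD lines and sends them over UDP, like the nc examples above:

```python
import socket

def statsd_line(bucket, value, mtype, sample_rate=None):
    """Format a metric in the StatsD wire format: <bucket>:<value>|<type>[|@<rate>]."""
    line = f"{bucket}:{value}|{mtype}"
    if sample_rate is not None:
        line += f"|@{sample_rate}"
    return line

def send_statsd(line, host="127.0.0.1", port=8125):
    """Send one metric line as a fire-and-forget UDP datagram."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(line.encode("ascii"), (host, port))

# send_statsd(statsd_line("click", 10, "c", 0.1))  # counter, 10% sample rate
# send_statsd(statsd_line("active", 99, "g"))      # gauge
```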

Oneshot

Only run once at startup. This allows collection of data preceding Fluent Bit's startup (bool, default: false).

Threaded

Indicates whether to run this input in its own thread. Default: false.

pipeline:
    inputs:
        - name: statsd
          listen: 0.0.0.0
          port: 8125
    outputs:
        - name: stdout
          match: '*'

Host

Name of the target host or IP address to check.

Port

TCP port where to perform the connection check.

Interval_Sec

Interval in seconds between the service checks. Default value is 1.

Interval_NSec

Specify a nanosecond interval for service checks; it works in conjunction with the Interval_Sec configuration key. Default value is 0.

Alert

If enabled, it will only generate messages if the target TCP service is down. By default this option is disabled.

Add_Host

If enabled, the hostname is appended to each record. Default value is false.

Add_Port

If enabled, the port number is appended to each record. Default value is false.

Threaded

Indicates whether to run this input in its own thread. Default: false.

scrape_interval

The rate at which metrics are collected from the host operating system

2 seconds

scrape_on_start

Scrape metrics upon start, useful to avoid waiting for 'scrape_interval' for the first round of metrics.

false

threaded

Indicates whether to run this input in its own thread.

false

# Fluent Bit Metrics + Prometheus Exporter
# -------------------------------------------
# The following example collects Fluent Bit metrics and exposes
# them through a Prometheus HTTP end-point.
#
# After starting the service try it with:
#
# $ curl http://127.0.0.1:2021/metrics
#
[SERVICE]
    flush           1
    log_level       info

[INPUT]
    name            fluentbit_metrics
    tag             internal_metrics
    scrape_interval 2

[OUTPUT]
    name            prometheus_exporter
    match           internal_metrics
    host            0.0.0.0
    port            2021

Channels

A comma-separated list of channels to read from.

Interval_Sec

Set the polling interval for each channel. (optional)

1

DB

Set the path to save the read offsets. (optional)

Threaded

Indicates whether to run this input in its own thread.

false

$ fluent-bit -i health -p host=127.0.0.1 -p port=80 -o stdout
[INPUT]
    Name          health
    Host          127.0.0.1
    Port          80
    Interval_Sec  1
    Interval_NSec 0

[OUTPUT]
    Name   stdout
    Match  *
pipeline:
    inputs:
        - name: health
          host: 127.0.0.1
          port: 80
          interval_sec: 1
          interval_nsec: 0
    outputs:
        - name: stdout
          match: '*'
$ fluent-bit -i health -p host=127.0.0.1 -p port=80 -o stdout
Fluent Bit v1.8.0
* Copyright (C) 2019-2021 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2021/06/20 08:39:47] [ info] [engine] started (pid=4621)
[2021/06/20 08:39:47] [ info] [storage] version=1.1.1, initializing...
[2021/06/20 08:39:47] [ info] [storage] in-memory
[2021/06/20 08:39:47] [ info] [storage] normal synchronization mode, checksum disabled, max_chunks_up=128
[2021/06/20 08:39:47] [ info] [sp] stream processor started
[0] health.0: [1624145988.305640385, {"alive"=>true}]
[1] health.0: [1624145989.305575360, {"alive"=>true}]
[2] health.0: [1624145990.306498573, {"alive"=>true}]
[3] health.0: [1624145991.305595498, {"alive"=>true}]
service:
    flush: 1
    log_level: info
pipeline:
    inputs:
        - name: fluentbit_metrics
          tag: internal_metrics
          scrape_interval: 2

    outputs:
        - name: prometheus_exporter
          match: internal_metrics
          host: 0.0.0.0
          port: 2021
curl http://127.0.0.1:2021/metrics
[INPUT]
    Name         winlog
    Channels     Setup,Windows PowerShell
    Interval_Sec 1
    DB           winlog.sqlite

[OUTPUT]
    Name   stdout
    Match  *
$ fluent-bit -i winlog -p 'channels=Setup' -o stdout
Key
Description
Default

Interface

Specify the network interface to monitor. e.g. eth0

Interval_Sec

Polling interval (seconds).

1

Interval_NSec

Polling interval (nanosecond).

0

Verbose

If true, gather metrics precisely.

false

Test_At_Init

If true, test whether the network interface is valid at initialization.

false

Threaded

Indicates whether to run this input in its own thread.

false

Getting Started

In order to monitor network traffic from your system, you can run the plugin from the command line or through the configuration file:

Command Line

Configuration File

In your main configuration file append the following Input & Output sections:

Note: Total interval (sec) = Interval_Sec + (Interval_Nsec / 1000000000).

e.g. 1.5s = 1s + 500000000ns
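A sketch of such Input & Output sections based on the parameters above (eth0 is an illustrative interface name):

```
[INPUT]
    Name          netif
    Tag           netif
    Interface     eth0
    Interval_Sec  1
    Interval_NSec 0

[OUTPUT]
    Name   stdout
    Match  *
```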

buffer_max_size

Set the maximum size of buffer.

4M

buffer_chunk_size

Set the buffer chunk size.

512K

tag_key

Specify a key name whose value is extracted as the record tag.

NULL

meta_key

Specify a key name for meta information.

"@meta"

hostname

Specify hostname or FQDN. This parameter can be used for "sniffing" (auto-discovery of) cluster node information.

"localhost"

version

Note: Elasticsearch clients use "sniffing" to optimize connections between the cluster and clients: Elasticsearch can dynamically generate a connection list from its cluster, which is called "sniffing". The hostname will be used for sniffing information, and this is handled by the sniffing endpoint.

Getting Started

In order to start performing the checks, you can run the plugin from the command line or through the configuration file:

Command Line

From the command line you can configure Fluent Bit to handle Bulk API requests with the following options:

Configuration File

In your main configuration file append the following Input & Output sections:

As described above, the plugin will handle ingested Bulk API requests. For large bulk ingestions, you may have to increase buffer size with buffer_max_size and buffer_chunk_size parameters:
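A sketch of such a configuration (the listen address, port, and buffer sizes are illustrative):

```
[INPUT]
    name              elasticsearch
    listen            0.0.0.0
    port              9200
    buffer_max_size   20M
    buffer_chunk_size 5M

[OUTPUT]
    name  stdout
    match *
```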

Ingesting from beats series

Ingesting from the Beats series of agents is also supported. For example, Filebeat, Metricbeat, and Winlogbeat are able to ingest their collected data through this plugin.

Note that Fluent Bit's node information is returned as Elasticsearch 8.0.0.

So, users have to specify the following settings in their Beats configurations:

For large log ingestion on these beat plugins, users might have to configure rate limiting on those beats plugins when Fluent Bit indicates that the application is exceeding the size limit for HTTP requests:

scrape_on_start

Whether this plugin scrapes podman data after it is started.

false

path.config

Custom path to podman containers configuration file

/var/lib/containers/storage/overlay-containers/containers.json

path.sysfs

Custom path to sysfs subsystem directory

/sys/fs/cgroup

path.procfs

Custom path to proc subsystem directory

/proc

threaded

Indicates whether to run this input in its own thread.

false

Getting Started

The podman metrics input plugin allows Fluent Bit to gather podman container metrics. The entire procedure of collecting the container list and gathering the associated data is based on filesystem data. This plugin does not execute podman commands or send HTTP requests to the podman API; instead, it reads the podman configuration file and the metrics exposed by the /sys and /proc filesystems.

This plugin supports and automatically detects both cgroups v1 and v2.

Example Curl message for one running container

Configuration File
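A minimal sketch using the parameters described in this section, paired with the Prometheus Exporter output as suggested above (the interval value is illustrative):

```
[INPUT]
    name            podman_metrics
    scrape_interval 30
    scrape_on_start true

[OUTPUT]
    name  prometheus_exporter
    match *
```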

Command Line

Exposed metrics

Currently supported counters are:

  • container_memory_usage_bytes

  • container_memory_max_usage_bytes

  • container_memory_rss

  • container_spec_memory_limit_bytes

  • container_cpu_user_seconds_total

  • container_cpu_usage_seconds_total

  • container_network_receive_bytes_total

  • container_network_receive_errors_total

  • container_network_transmit_bytes_total

  • container_network_transmit_errors_total

This plugin mimics the naming convention of the Docker metrics exposed by the cadvisor project.

Key

Description

Default

scrape_interval

Interval between each scrape of podman data (in seconds)

30

Bitrate

The bitrate for the communication, e.g. 9600, 38400, 115200, etc.

Min_Bytes

The serial interface will expect at least Min_Bytes to be available before processing the message (default: 1).

Separator

Allows specifying a separator string used to determine when a message ends.

Format

Specify the format of the incoming data stream. The only available option is json. Note that Format and Separator cannot be used at the same time.

Threaded

Indicates whether to run this input in its own thread. Default: false.

hashtag
Getting Started

In order to retrieve messages over the Serial interface, you can run the plugin from the command line or through the configuration file:

hashtag
Command Line

The following example loads the serial input plugin, setting a bitrate of 9600, listening on the /dev/tnt0 interface, and using the custom tag data to route the messages.

The above interface (/dev/tnt0) is an emulation of a serial interface (more details at the bottom). For demonstration purposes, we will write a message to the other end of the interface, in this case /dev/tnt1, e.g.:

In Fluent Bit you should see an output like this:

Now using the Separator configuration, we could send multiple messages at once (run this command after starting Fluent Bit):

hashtag
Configuration File

In your main configuration file append the following Input & Output sections:

hashtag
Emulating Serial Interface on Linux

The following is some extra information that will allow you to emulate a serial interface on your Linux system, so you can test this serial input plugin locally in case you don't have such an interface on your computer. The following procedure has been tested on Ubuntu 15.04 running Linux kernel 4.0.

hashtag
Build and install the tty0tty module

Download the sources

Unpack and compile

Copy the new kernel module into the kernel modules directory

Load the module

You should see new serial ports in /dev/ (ls /dev/tnt*). Give appropriate permissions to the new serial ports:

When the module is loaded, it will interconnect the following virtual interfaces:

File

Absolute path to the device entry, e.g: /dev/ttyS0

Default

Channels

A comma-separated list of channels to read from.

Interval_Sec

Set the polling interval for each channel. (optional)

1

Interval_NSec

Set the polling interval for each channel in sub-seconds. (optional)

0

Read_Existing_Events

Whether to read existing events from the head, or to tail only new events, when subscribing. (optional)

False

DB

Set the path to save the read offsets. (optional)

String_Inserts

Note that if you do not set db, the plugin will tail channels on each startup.

hashtag
Configuration Examples

hashtag
Configuration File

Here is a minimal configuration example.

Note that some Windows Event Log channels (like Security) require administrator privileges for reading. In this case, you need to run fluent-bit as an administrator.

The default value of Read_Limit_Per_Cycle is 512KiB. Note that 512KiB (512 × 1024 = 524,288 bytes) is not equal to 512KB (512 × 1000 = 512,000 bytes). To increase the number of events processed per second, specify a value larger than 512KiB.
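For example, the limit could be raised in the input section as sketched below; the 5M value is illustrative, not a recommendation, and the size-suffix syntax is assumed from the documented 512KiB default:

```
[INPUT]
    Name                 winevtlog
    Channels             Application
    Read_Limit_Per_Cycle 5M
```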

hashtag
Query Languages for Event_Query Parameter

The Event_Query parameter can be used to specify the XML query for filtering Windows EventLog during collection. The supported query types are XPath and XML Query. For further details, please refer to the MSDN docs.
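As a sketch, an XPath query that collects only events with a specific event ID might look like the following; the channel and the event ID 6005 are illustrative:

```
[INPUT]
    Name        winevtlog
    Channels    System
    Event_Query *[System[EventID=6005]]
```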

hashtag
Command Line

If you want to do a quick test, you can run this plugin from the command line.

Note that the winevtlog plugin will tail channels on each startup. If you want to confirm whether this plugin is working or not, you should specify the -p 'Read_Existing_Events=true' parameter.

Samples

If set, it will only generate a specific number of samples. By default this value is set to -1, which will generate unlimited samples.

Interval_Sec

Interval in seconds between samples generation. Default value is 1.

Interval_Nsec

Specify a nanosecond interval for sample generation; it works in conjunction with the Interval_Sec configuration key. Default value is 0.

Threaded

Indicates whether to run this input in its own thread. Default: false.

hashtag
Getting Started

In order to start generating random samples, you can run the plugin from the command line or through the configuration file:

hashtag
Command Line

From the command line you can let Fluent Bit generate the samples with the following options:

hashtag
Configuration File

In your main configuration file append the following Input & Output sections:

[INPUT]
    Name          random
    Samples      -1
    Interval_Sec  1
    Interval_NSec 0

[OUTPUT]
    Name  stdout
    Match *
pipeline:
    inputs:
        - name: random
          samples: -1
          interval_sec: 1
          interval_nsec: 0
    outputs:
        - name: stdout
          match: '*'

hashtag
Testing

Once Fluent Bit is running, you will see the reports in the output interface similar to this:

Kafka

The Kafka input plugin allows subscribing to one or more Kafka topics to collect messages from an Apache Kafka service. This plugin uses the official librdkafka C library (built-in dependency).

hashtag
Configuration Parameters

Key
Description
default

hashtag
Getting Started

In order to subscribe/collect messages from Apache Kafka, you can run the plugin from the command line or through the configuration file:

hashtag
Command Line

The kafka plugin can read parameters through the -p argument (property), e.g:

hashtag
Configuration File

In your main configuration file append the following Input & Output sections:

hashtag
Example of using kafka input/output plugins

The Fluent Bit source repository contains a full example of using Fluent Bit to process Kafka records:

The above will connect to the broker listening on kafka-broker:9092 and subscribe to the fb-source topic, polling for new messages every 100 milliseconds.

Since the payload will be in json format, we ask the plugin to automatically parse the payload with format json.

Every message received is then processed with kafka.lua and sent back to the fb-sink topic of the same broker.

The example can be executed locally with make start in the examples/kafka_filter directory (docker/compose is used).

OpenTelemetry

An input plugin to ingest OTLP Logs, Metrics, and Traces

The OpenTelemetry input plugin allows you to receive data as per the OTLP specification, from various OpenTelemetry exporters, the OpenTelemetry Collector, or Fluent Bit's OpenTelemetry output plugin.

Our compliant implementation fully supports OTLP/HTTP and OTLP/GRPC. Note that the single configured port, which defaults to 4318, supports both transports.

hashtag
Configuration

Key
Description
default

Important note: Raw traces means that any data forwarded to the traces endpoint (/v1/traces) will be packed and forwarded as a log message and will NOT be processed by Fluent Bit. The traces endpoint by default expects a valid protobuf-encoded payload, but you can set the raw_traces option in case you want to forward trace telemetry data unprocessed to any of Fluent Bit's supported outputs.
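A minimal sketch enabling this behavior on the input; the listen address and port mirror the defaults described above:

```
[INPUT]
    Name       opentelemetry
    Listen     0.0.0.0
    Port       4318
    raw_traces true
```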

hashtag
OTLP Transport Protocol Endpoints

Depending on the desired OTLP protocol, Fluent Bit exposes the following endpoints for data ingestion:

OTLP/HTTP

  • Logs

    • /v1/logs

  • Metrics

OTLP/GRPC

  • Logs

    • /opentelemetry.proto.collector.log.v1.LogService/Export

    • /opentelemetry.proto.collector.logs.v1.LogsService/Export

hashtag
Getting started

The OpenTelemetry input plugin supports the following telemetry data types:

Type
HTTP1/JSON
HTTP1/Protobuf
HTTP2/GRPC

A sample config file to get started will look something like the following:

With the above configuration, Fluent Bit will listen on port 4318 for data. You can now send telemetry data to the endpoints /v1/metrics, /v1/traces, and /v1/logs for metrics, traces, and logs respectively.

A sample curl request to POST json encoded log data would be:
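As an illustrative sketch, assuming Fluent Bit listens on localhost:4318 and using the OTLP/JSON log record shape:

```shell
# POST one OTLP/JSON log record to the logs endpoint
curl -s -X POST http://localhost:4318/v1/logs \
  -H "Content-Type: application/json" \
  -d '{"resourceLogs":[{"resource":{},"scopeLogs":[{"scope":{},"logRecords":[{"timeUnixNano":"0","body":{"stringValue":"hello from curl"}}]}]}]}'
```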

Prometheus Scrape Metrics

Fluent Bit 1.9 includes additional metrics features to allow you to collect both logs and metrics with the same collector.

The initial release of Prometheus Scrape metrics allows you to collect metrics from a Prometheus-based endpoint at a set interval. These metrics can be routed to metric-supported endpoints such as Prometheus Exporter, InfluxDB, or Prometheus Remote Write.

hashtag
Configuration

Key
Description
Default

hashtag
Example

If an endpoint exposes Prometheus metrics, we can specify the configuration to scrape it and then output the metrics. In the following example, we retrieve metrics from the HashiCorp Vault application.

Example Output

Splunk

The splunk input plugin handles Splunk HTTP Event Collector (HEC) requests.

hashtag
Configuration Parameters

Key

Description

default

listen

hashtag
Getting Started

In order to start handling Splunk HEC requests, you can run the plugin from the command line or through the configuration file.

hashtag
How to set tag

The tag for the Splunk input plugin is set by adding the tag to the end of the request URL by default. This tag is then used to route the event through the system. The default behavior of the splunk input sets the tags for the following endpoints:

  • /services/collector

  • /services/collector/event

  • /services/collector/raw

The requests for these endpoints are interpreted as services_collector, services_collector_event, and services_collector_raw.

If you want to use other tags with multiple instances of the splunk input plugin, you have to specify the tag property on each splunk plugin configuration to prevent data pipeline collisions.
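For instance, two splunk inputs can be kept apart with explicit tags; the ports and tag names below are illustrative:

```
[INPUT]
    Name   splunk
    Listen 0.0.0.0
    Port   8088
    Tag    splunk.prod

[INPUT]
    Name   splunk
    Listen 0.0.0.0
    Port   8089
    Tag    splunk.dev

[OUTPUT]
    Name  stdout
    Match splunk.*
```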

hashtag
Command Line

From the command line you can configure Fluent Bit to handle HTTP HEC requests with the following options:

hashtag
Configuration File

In your main configuration file append the following Input & Output sections:

Kubernetes Events

Collects Kubernetes Events

Kubernetes exports its events through the API server. This input plugin allows you to retrieve those events as logs and get them processed through the pipeline.

hashtag
Configuration

Key
Description
Default
* As of Fluent Bit 3.1, this plugin uses a Kubernetes watch stream instead of polling. From 3.1 onward, the interval parameters are used for reconnecting the Kubernetes watch stream.

hashtag
Threading

This input always runs in its own thread.

hashtag
Getting Started

hashtag
Kubernetes Service Account

The Kubernetes service account used by Fluent Bit must have get, list, and watch permissions to namespaces and pods for the namespaces watched in the kube_namespace configuration parameter. If you're using the helm chart to configure Fluent Bit, this role is included.
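If you manage RBAC manually instead of using the helm chart, a minimal ClusterRole sketch granting those verbs could look like the following; the role name is illustrative, and you should adjust the resources to your setup:

```
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluent-bit-events-reader
rules:
  - apiGroups: [""]
    resources: ["namespaces", "pods", "events"]
    verbs: ["get", "list", "watch"]
```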

hashtag
Simple Configuration File

In the following configuration file, the input plugin kubernetes_events collects events every 5 seconds (default for interval_nsec) and exposes them through the standard output plugin on the console.

hashtag
Event Timestamp

Event timestamps are created from the first existing field, based on the following order of precedence:

  1. lastTimestamp

  2. firstTimestamp

  3. metadata.creationTimestamp
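The lookup order above can be sketched in Python; this is only an illustration of the precedence, not Fluent Bit's actual C implementation, and the field names follow the Kubernetes Event object:

```python
def event_timestamp(event: dict):
    """Pick the event timestamp using the documented order of precedence."""
    # 1. lastTimestamp, 2. firstTimestamp, 3. metadata.creationTimestamp
    for path in (("lastTimestamp",), ("firstTimestamp",), ("metadata", "creationTimestamp")):
        node = event
        for key in path:
            if not isinstance(node, dict) or key not in node:
                node = None
                break
            node = node[key]
        if node:
            return node
    return None

# An event without lastTimestamp falls back to firstTimestamp:
print(event_timestamp({"firstTimestamp": "2024-01-01T00:00:00Z",
                       "metadata": {"creationTimestamp": "2023-12-31T23:59:00Z"}}))
# → 2024-01-01T00:00:00Z
```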

Thermal

The thermal input plugin reports system temperatures periodically, once per second by default. Currently this plugin is only available on Linux.

The following table describes the information generated by the plugin.

key
description

name

The name of the thermal zone, such as thermal_zone0

type

The type of the thermal zone, such as x86_pkg_temp

temp

Current temperature in celsius

hashtag
Configuration Parameters

The plugin supports the following configuration parameters:

Key
Description

hashtag
Getting Started

In order to get temperature(s) of your system, you can run the plugin from the command line or through the configuration file:

hashtag
Command Line

Some systems provide multiple thermal zones. In this example monitor only thermal_zone0 by name, once per minute.

hashtag
Configuration File

In your main configuration file append the following Input & Output sections:

Process Exporter Metrics

A plugin based on Process Exporter to collect process-level system metrics

Process Exporter is a popular way to collect system-level metrics from operating systems, such as CPU / Disk / Network / Process statistics. Fluent Bit 2.2 onwards includes a process exporter plugin that builds on the Prometheus design to collect process-level metrics without having to manage two separate processes or agents.

The Process Exporter Metrics plugin implements collection of the various metrics available from the proc filesystem, and these will be expanded over time as needed.

Important note: All metrics including those collected with this plugin flow through a separate pipeline from logs and current filters do not operate on top of metrics.

This plugin is only supported on Linux based operating systems as it uses the proc filesystem to access the relevant metrics.

macOS does not have the proc filesystem, so this plugin will not work on it.

Process Log Based Metrics

The Process input plugin allows you to check how healthy a process is. It does so by performing a service check at a regular interval specified by the user.

The Process metrics plugin creates metrics that are log-based, such as JSON payload. For Prometheus-based metrics, see the Node Exporter Metrics input plugin.
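A minimal log-based sketch checking a single process by name; the proc plugin name and the crond target are assumptions based on common Fluent Bit usage, so verify them against your version:

```
[INPUT]
    Name         proc
    Proc_Name    crond
    Interval_Sec 1

[OUTPUT]
    Name  stdout
    Match *
```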

hashtag
Configuration Parameters

Systemd

The Systemd input plugin allows you to collect log messages from the Journald daemon on Linux environments.
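A minimal sketch collecting entries for a single unit; the Systemd_Filter value and the docker.service unit are illustrative:

```
[INPUT]
    Name           systemd
    Tag            host.*
    Systemd_Filter _SYSTEMD_UNIT=docker.service

[OUTPUT]
    Name  stdout
    Match *
```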

hashtag
Configuration Parameters

The plugin supports the following configuration parameters:

Key
Description
Default

UDP

The udp input plugin allows you to retrieve structured JSON or raw messages over a UDP network interface (UDP port).
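A minimal sketch listening for JSON datagrams; the port is illustrative:

```
[INPUT]
    Name   udp
    Listen 0.0.0.0
    Port   5170
    Format json

[OUTPUT]
    Name  stdout
    Match *
```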

hashtag
Configuration Parameters

The plugin supports the following configuration parameters:

Key
Description
Default

Prometheus Remote Write

An input plugin to ingest payloads of Prometheus remote write

This input plugin allows you to ingest a payload in the Prometheus remote-write format, i.e. a remote write sender can transmit data to Fluent Bit.
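A minimal sketch accepting remote-write payloads and printing the resulting metrics; the port is illustrative:

```
[INPUT]
    Name   prometheus_remote_write
    Listen 0.0.0.0
    Port   8080

[OUTPUT]
    Name  stdout
    Match *
```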

hashtag
Configuration

Key
Description
default
[INPUT]
    Name          netif
    Tag           netif
    Interval_Sec  1
    Interval_NSec 0
    Interface     eth0
[OUTPUT]
    Name   stdout
    Match  *
pipeline:
    inputs:
        - name: netif
          tag: netif
          interval_sec: 1
          interval_nsec: 0
          interface: eth0
    outputs:
        - name: stdout
          match: '*'
$ bin/fluent-bit -i netif -p interface=eth0 -o stdout
Fluent Bit v1.x.x
* Copyright (C) 2019-2020 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2017/07/08 23:34:18] [ info] [engine] started
[0] netif.0: [1499524459.001698260, {"eth0.rx.bytes"=>89769869, "eth0.rx.packets"=>73357, "eth0.rx.errors"=>0, "eth0.tx.bytes"=>4256474, "eth0.tx.packets"=>24293, "eth0.tx.errors"=>0}]
[1] netif.0: [1499524460.002541885, {"eth0.rx.bytes"=>98, "eth0.rx.packets"=>1, "eth0.rx.errors"=>0, "eth0.tx.bytes"=>98, "eth0.tx.packets"=>1, "eth0.tx.errors"=>0}]
[2] netif.0: [1499524461.001142161, {"eth0.rx.bytes"=>98, "eth0.rx.packets"=>1, "eth0.rx.errors"=>0, "eth0.tx.bytes"=>98, "eth0.tx.packets"=>1, "eth0.tx.errors"=>0}]
[3] netif.0: [1499524462.002612971, {"eth0.rx.bytes"=>98, "eth0.rx.packets"=>1, "eth0.rx.errors"=>0, "eth0.tx.bytes"=>98, "eth0.tx.packets"=>1, "eth0.tx.errors"=>0}]
[INPUT]
    name elasticsearch
    listen 0.0.0.0
    port 9200

[OUTPUT]
    name stdout
    match *
pipeline:
    inputs:
        - name: elasticsearch
          listen: 0.0.0.0
          port: 9200

    outputs:
        - name: stdout
          match: '*'
[INPUT]
    name elasticsearch
    listen 0.0.0.0
    port 9200
    buffer_max_size 20M
    buffer_chunk_size 5M

[OUTPUT]
    name stdout
    match *
pipeline:
    inputs:
        - name: elasticsearch
          listen: 0.0.0.0
          port: 9200
          buffer_max_size: 20M
          buffer_chunk_size: 5M

    outputs:
        - name: stdout
          match: '*'
$ fluent-bit -i elasticsearch -p port=9200 -o stdout
output.elasticsearch:
  allow_older_versions: true
  ilm: false
processors:
  - rate_limit:
      limit: "200/s"
$> curl 0.0.0.0:2021/metrics
# HELP fluentbit_input_bytes_total Number of input bytes.
# TYPE fluentbit_input_bytes_total counter
fluentbit_input_bytes_total{name="podman_metrics.0"} 0
# HELP fluentbit_input_records_total Number of input records.
# TYPE fluentbit_input_records_total counter
fluentbit_input_records_total{name="podman_metrics.0"} 0
# HELP container_memory_usage_bytes Container memory usage in bytes
# TYPE container_memory_usage_bytes counter
container_memory_usage_bytes{id="858319c39f3f52cd44aa91a520aafb84ded3bc4b4a1e04130ccf87043149bbbf",name="blissful_wescoff",image="docker.io/library/ubuntu:latest"} 884736
# HELP container_cpu_user_seconds_total Container cpu usage in seconds in user mode
# TYPE container_cpu_user_seconds_total counter
container_cpu_user_seconds_total{id="858319c39f3f52cd44aa91a520aafb84ded3bc4b4a1e04130ccf87043149bbbf",name="blissful_wescoff",image="docker.io/library/ubuntu:latest"} 0
# HELP container_cpu_usage_seconds_total Container cpu usage in seconds
# TYPE container_cpu_usage_seconds_total counter
container_cpu_usage_seconds_total{id="858319c39f3f52cd44aa91a520aafb84ded3bc4b4a1e04130ccf87043149bbbf",name="blissful_wescoff",image="docker.io/library/ubuntu:latest"} 0
# HELP container_network_receive_bytes_total Network received bytes
# TYPE container_network_receive_bytes_total counter
container_network_receive_bytes_total{id="858319c39f3f52cd44aa91a520aafb84ded3bc4b4a1e04130ccf87043149bbbf",name="blissful_wescoff",image="docker.io/library/ubuntu:latest",interface="eth0"} 8515
# HELP container_network_receive_errors_total Network received errors
# TYPE container_network_receive_errors_total counter
container_network_receive_errors_total{id="858319c39f3f52cd44aa91a520aafb84ded3bc4b4a1e04130ccf87043149bbbf",name="blissful_wescoff",image="docker.io/library/ubuntu:latest",interface="eth0"} 0
# HELP container_network_transmit_bytes_total Network transmited bytes
# TYPE container_network_transmit_bytes_total counter
container_network_transmit_bytes_total{id="858319c39f3f52cd44aa91a520aafb84ded3bc4b4a1e04130ccf87043149bbbf",name="blissful_wescoff",image="docker.io/library/ubuntu:latest",interface="eth0"} 962
# HELP container_network_transmit_errors_total Network transmitedd errors
# TYPE container_network_transmit_errors_total counter
container_network_transmit_errors_total{id="858319c39f3f52cd44aa91a520aafb84ded3bc4b4a1e04130ccf87043149bbbf",name="blissful_wescoff",image="docker.io/library/ubuntu:latest",interface="eth0"} 0
# HELP fluentbit_input_storage_overlimit Is the input memory usage overlimit ?.
# TYPE fluentbit_input_storage_overlimit gauge
fluentbit_input_storage_overlimit{name="podman_metrics.0"} 0
# HELP fluentbit_input_storage_memory_bytes Memory bytes used by the chunks.
# TYPE fluentbit_input_storage_memory_bytes gauge
fluentbit_input_storage_memory_bytes{name="podman_metrics.0"} 0
# HELP fluentbit_input_storage_chunks Total number of chunks.
# TYPE fluentbit_input_storage_chunks gauge
fluentbit_input_storage_chunks{name="podman_metrics.0"} 0
# HELP fluentbit_input_storage_chunks_up Total number of chunks up in memory.
# TYPE fluentbit_input_storage_chunks_up gauge
fluentbit_input_storage_chunks_up{name="podman_metrics.0"} 0
# HELP fluentbit_input_storage_chunks_down Total number of chunks down.
# TYPE fluentbit_input_storage_chunks_down gauge
fluentbit_input_storage_chunks_down{name="podman_metrics.0"} 0
# HELP fluentbit_input_storage_chunks_busy Total number of chunks in a busy state.
# TYPE fluentbit_input_storage_chunks_busy gauge
fluentbit_input_storage_chunks_busy{name="podman_metrics.0"} 0
# HELP fluentbit_input_storage_chunks_busy_bytes Total number of bytes used by chunks in a busy state.
# TYPE fluentbit_input_storage_chunks_busy_bytes gauge
fluentbit_input_storage_chunks_busy_bytes{name="podman_metrics.0"} 0
[INPUT]
    name podman_metrics
    scrape_interval 10
    scrape_on_start true
[OUTPUT]
    name prometheus_exporter
$> fluent-bit -i podman_metrics -o prometheus_exporter
$ fluent-bit -i serial -t data -p File=/dev/tnt0 -p BitRate=9600 -o stdout -m '*'
$ echo 'this is some message' > /dev/tnt1
$ fluent-bit -i serial -t data -p File=/dev/tnt0 -p BitRate=9600 -o stdout -m '*'
Fluent Bit v1.x.x
* Copyright (C) 2019-2020 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2016/05/20 15:44:39] [ info] starting engine
[0] data: [1463780680, {"msg"=>"this is some message"}]
$ echo 'aaXbbXccXddXee' > /dev/tnt1
$ fluent-bit -i serial -t data -p File=/dev/tnt0 -p BitRate=9600 -p Separator=X -o stdout -m '*'
Fluent-Bit v0.8.0
Copyright (C) Treasure Data

[2016/05/20 16:04:51] [ info] starting engine
[0] data: [1463781902, {"msg"=>"aa"}]
[1] data: [1463781902, {"msg"=>"bb"}]
[2] data: [1463781902, {"msg"=>"cc"}]
[3] data: [1463781902, {"msg"=>"dd"}]
[INPUT]
    Name      serial
    Tag       data
    File      /dev/tnt0
    BitRate   9600
    Separator X

[OUTPUT]
    Name   stdout
    Match  *
$ git clone https://github.com/freemed/tty0tty
$ cd tty0tty/module
$ make
$ sudo cp tty0tty.ko /lib/modules/$(uname -r)/kernel/drivers/misc/
$ sudo depmod
$ sudo modprobe tty0tty
$ sudo chmod 666 /dev/tnt*
/dev/tnt0 <=> /dev/tnt1
/dev/tnt2 <=> /dev/tnt3
/dev/tnt4 <=> /dev/tnt5
/dev/tnt6 <=> /dev/tnt7
[INPUT]
    Name         winevtlog
    Channels     Setup,Windows PowerShell
    Interval_Sec 1
    DB           winevtlog.sqlite

[OUTPUT]
    Name   stdout
    Match  *
$ fluent-bit -i winevtlog -p 'channels=Setup' -p 'Read_Existing_Events=true' -o stdout
$ fluent-bit -i random -o stdout
$ fluent-bit -i random -o stdout
Fluent Bit v1.x.x
* Copyright (C) 2019-2020 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2016/10/07 20:27:34] [ info] [engine] started
[0] random.0: [1475893654, {"rand_value"=>1863375102915681408}]
[1] random.0: [1475893655, {"rand_value"=>425675645790600970}]
[2] random.0: [1475893656, {"rand_value"=>7580417447354808203}]
[3] random.0: [1475893657, {"rand_value"=>1501010137543905482}]
[4] random.0: [1475893658, {"rand_value"=>16238242822364375212}]

Indicates whether to run this input in its own thread.

false

Specify the Elasticsearch server version. This parameter determines the Elasticsearch/OpenSearch server version that Fluent Bit reports to clients.

"8.0.0"

threaded

Indicates whether to run this input in its own thread.

false

Whether to include StringInserts in output records. (optional)

True

Render_Event_As_XML

Whether to render the system part of the event as an XML string or not. (optional)

False

Use_ANSI

Use ANSI encoding on eventlog messages. If you have issues receiving blank strings with old Windows versions (Server 2012 R2), setting this to True may solve the problem. (optional)

False

Event_Query

Specify XML query for filtering events.

*

Read_Limit_Per_Cycle

Specify read limit per cycle.

512KiB

Threaded

Indicates whether to run this input in its own thread.

false


brokers

A single Kafka broker or a comma-separated list of brokers, e.g. 192.168.1.3:9092, 192.168.1.4:9092.

topics

Single entry or list of topics separated by comma (,) that Fluent Bit will subscribe to.

format

Serialization format of the messages. If set to "json", the payload will be parsed as json.

none

client_id

Client id passed to librdkafka.

group_id

Group id passed to librdkafka.

fluent-bit

poll_ms

Kafka brokers polling interval in milliseconds.

500

Buffer_Max_Size

Specify the maximum buffer size per cycle to poll Kafka messages from subscribed topics. To increase throughput, specify a larger size.

4M

rdkafka.{property}

{property} can be any librdkafka property

threaded

Indicates whether to run this input in its own thread.

false

host

The host of the prometheus metric endpoint that you want to scrape

port

The port of the prometheus metric endpoint that you want to scrape

scrape_interval

The interval to scrape metrics

10s

metrics_path

The metrics URI endpoint, that must start with a forward slash. Note: Parameters can also be added to the path by using ?

/metrics

threaded

Indicates whether to run this input in its own thread.

false

The address to listen on

0.0.0.0

port

The port for Fluent Bit to listen on

9880

tag_key

Specify the key name to overwrite a tag. If set, the tag will be overwritten by a value of the key.

buffer_max_size

Specify the maximum buffer size in KB to receive a JSON message.

4M

buffer_chunk_size

This sets the chunk size for incoming JSON messages. These chunks are then stored and managed within the space set by buffer_max_size.

512K

successful_response_code

Allows setting the successful response code. 200, 201 and 204 are supported.

201

splunk_token

Specify a Splunk token for HTTP HEC authentication. If multiple tokens are specified (with commas and no spaces), usage will be divided across each of the tokens.

store_token_in_metadata

Store Splunk HEC tokens in the Fluent Bit metadata. If set to false, they will be stored as normal key-value pairs in the record data.

true

splunk_token_key

Use the specified key for storing the Splunk token for HTTP HEC. This is only effective when store_token_in_metadata is false.

@splunk_token

Threaded

Indicates whether to run this input in its own thread.

false

db

Set a database file to keep track of recorded Kubernetes events

db.sync

Set a database sync method. values: extra, full, normal and off

normal

interval_sec

Set the reconnect interval (seconds)*

0

interval_nsec

Set the reconnect interval (sub seconds: nanoseconds)*

500000000

kube_url

API Server end-point

https://kubernetes.default.svc

kube_ca_file

Kubernetes TLS CA file

/var/run/secrets/kubernetes.io/serviceaccount/ca.crt

kube_ca_path

Kubernetes TLS ca path

kube_token_file

Kubernetes authorization token file.

/var/run/secrets/kubernetes.io/serviceaccount/token

kube_token_ttl

Kubernetes token TTL, until it is re-read from the token file.

10m

kube_request_limit

Kubernetes limit parameter for the events query; no limit is applied when set to 0.

0

kube_retention_time

Kubernetes retention time for events.

1h

kube_namespace

Kubernetes namespace to query events from. Gets events from all namespaces by default

tls.debug

Debug level between 0 (nothing) and 4 (every detail).

0

tls.verify

Enable or disable verification of TLS peer certificate.

On

tls.vhost

Set optional TLS virtual host.



Interval_Sec

Polling interval (seconds). default: 1

Interval_NSec

Polling interval (nanoseconds). default: 0

name_regex

Optional name filter regex. default: None

type_regex

Optional type filter regex. default: None

Threaded

Indicates whether to run this input in its own thread. Default: false.

$ fluent-bit -i kafka -o stdout -p brokers=192.168.1.3:9092 -p topics=some-topic
[INPUT]
    Name        kafka
    Brokers     192.168.1.3:9092
    Topics      some-topic
    poll_ms     100

[OUTPUT]
    Name        stdout
[INPUT]
    Name kafka
    brokers kafka-broker:9092
    topics fb-source
    poll_ms 100
    format json

[FILTER]
    Name    lua
    Match   *
    script  kafka.lua
    call    modify_kafka_message

[OUTPUT]
    Name kafka
    brokers kafka-broker:9092
    topics fb-sink
[INPUT]
    name prometheus_scrape
    host 0.0.0.0
    port 8201
    tag vault
    metrics_path /v1/sys/metrics?format=prometheus
    scrape_interval 10s

[OUTPUT]
    name stdout
    match *
2022-03-26T23:01:29.836663788Z go_memstats_alloc_bytes_total = 31891336
2022-03-26T23:01:29.836663788Z go_memstats_frees_total = 313264
2022-03-26T23:01:29.836663788Z go_memstats_lookups_total = 0
2022-03-26T23:01:29.836663788Z go_memstats_mallocs_total = 378992
2022-03-26T23:01:29.836663788Z process_cpu_seconds_total = 1.6200000000000001
2022-03-26T23:01:29.836663788Z go_goroutines = 19
2022-03-26T23:01:29.836663788Z go_info{version="go1.17.7"} = 1
2022-03-26T23:01:29.836663788Z go_memstats_alloc_bytes = 12547800
2022-03-26T23:01:29.836663788Z go_memstats_buck_hash_sys_bytes = 1468900
2022-03-26T23:01:29.836663788Z go_memstats_gc_cpu_fraction = 8.1509688352783453e-06
2022-03-26T23:01:29.836663788Z go_memstats_gc_sys_bytes = 5875576
2022-03-26T23:01:29.836663788Z go_memstats_heap_alloc_bytes = 12547800
2022-03-26T23:01:29.836663788Z go_memstats_heap_idle_bytes = 2220032
2022-03-26T23:01:29.836663788Z go_memstats_heap_inuse_bytes = 14000128
2022-03-26T23:01:29.836663788Z go_memstats_heap_objects = 65728
2022-03-26T23:01:29.836663788Z go_memstats_heap_released_bytes = 2187264
2022-03-26T23:01:29.836663788Z go_memstats_heap_sys_bytes = 16220160
2022-03-26T23:01:29.836663788Z go_memstats_last_gc_time_seconds = 1648335593.2483871
2022-03-26T23:01:29.836663788Z go_memstats_mcache_inuse_bytes = 2400
2022-03-26T23:01:29.836663788Z go_memstats_mcache_sys_bytes = 16384
2022-03-26T23:01:29.836663788Z go_memstats_mspan_inuse_bytes = 150280
2022-03-26T23:01:29.836663788Z go_memstats_mspan_sys_bytes = 163840
2022-03-26T23:01:29.836663788Z go_memstats_next_gc_bytes = 16586496
2022-03-26T23:01:29.836663788Z go_memstats_other_sys_bytes = 422572
2022-03-26T23:01:29.836663788Z go_memstats_stack_inuse_bytes = 557056
2022-03-26T23:01:29.836663788Z go_memstats_stack_sys_bytes = 557056
2022-03-26T23:01:29.836663788Z go_memstats_sys_bytes = 24724488
2022-03-26T23:01:29.836663788Z go_threads = 8
2022-03-26T23:01:29.836663788Z process_max_fds = 65536
2022-03-26T23:01:29.836663788Z process_open_fds = 12
2022-03-26T23:01:29.836663788Z process_resident_memory_bytes = 200638464
2022-03-26T23:01:29.836663788Z process_start_time_seconds = 1648333791.45
2022-03-26T23:01:29.836663788Z process_virtual_memory_bytes = 865849344
2022-03-26T23:01:29.836663788Z process_virtual_memory_max_bytes = 1.8446744073709552e+19
2022-03-26T23:01:29.836663788Z vault_runtime_alloc_bytes = 12482136
2022-03-26T23:01:29.836663788Z vault_runtime_free_count = 313256
2022-03-26T23:01:29.836663788Z vault_runtime_heap_objects = 65465
2022-03-26T23:01:29.836663788Z vault_runtime_malloc_count = 378721
2022-03-26T23:01:29.836663788Z vault_runtime_num_goroutines = 12
2022-03-26T23:01:29.836663788Z vault_runtime_sys_bytes = 24724488
2022-03-26T23:01:29.836663788Z vault_runtime_total_gc_pause_ns = 1917611
2022-03-26T23:01:29.836663788Z vault_runtime_total_gc_runs = 19
$ fluent-bit -i splunk -p port=8088 -o stdout
[INPUT]
    name splunk
    listen 0.0.0.0
    port 8088

[OUTPUT]
    name stdout
    match *
[SERVICE]
    flush           1
    log_level       info

[INPUT]
    name            kubernetes_events
    tag             k8s_events
    kube_url        https://kubernetes.default.svc

[OUTPUT]
    name            stdout
    match           *
$ bin/fluent-bit -i thermal -t my_thermal -o stdout -m '*'
Fluent Bit v1.x.x
* Copyright (C) 2019-2020 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2019/08/18 13:39:43] [ info] [storage] initializing...
...
[0] my_thermal: [1566099584.000085820, {"name"=>"thermal_zone0", "type"=>"x86_pkg_temp", "temp"=>60.000000}]
[1] my_thermal: [1566099585.000136466, {"name"=>"thermal_zone0", "type"=>"x86_pkg_temp", "temp"=>59.000000}]
[2] my_thermal: [1566099586.000083156, {"name"=>"thermal_zone0", "type"=>"x86_pkg_temp", "temp"=>59.000000}]
$ bin/fluent-bit -i thermal -t my_thermal -p "interval_sec=60" -p "name_regex=thermal_zone0" -o stdout -m '*'
Fluent Bit v1.3.0
Copyright (C) Treasure Data

[2019/08/18 13:39:43] [ info] [storage] initializing...
...
[0] my_temp: [1565759542.001053749, {"name"=>"thermal_zone0", "type"=>"pch_skylake", "temp"=>48.500000}]
[0] my_temp: [1565759602.001661061, {"name"=>"thermal_zone0", "type"=>"pch_skylake", "temp"=>48.500000}]
[INPUT]
    Name thermal
    Tag  my_thermal

[OUTPUT]
    Name  stdout
    Match *
pipeline:
    inputs:
        - name: thermal
          tag: my_thermal
    outputs:
        - name: stdout
          match: '*'

512K

successful_response_code

Allows setting the successful response code. The values 200, 201 and 204 are supported.

201

tag_from_uri

If true, the tag will be created from the URI, e.g. v1_metrics from /v1/metrics.

true

threaded

Indicates whether to run this input in its own thread.

false

/v1/metrics

  • Traces

    • /v1/traces

  • Metrics

    • /opentelemetry.proto.collector.metric.v1.MetricService/Export

    • /opentelemetry.proto.collector.metrics.v1.MetricsService/Export

  • Traces

    • /opentelemetry.proto.collector.trace.v1.TraceService/Export

    • /opentelemetry.proto.collector.traces.v1.TracesService/Export

  • listen

    The network address to listen on.

    0.0.0.0

    port

    The port for Fluent Bit to listen on for incoming connections. Note that as of Fluent Bit v3.0.2 this port is used for both the OTLP/HTTP and OTLP/gRPC transports.

    4318

    tag_key

    Specify the key name to overwrite a tag. If set, the tag will be overwritten by the value of the key.

    raw_traces

    Route trace data as a log

    false

    buffer_max_size

    Specify the maximum buffer size in KB/MB/GB for the HTTP payload.

    4M

    buffer_chunk_size

    Initial size and allocation strategy to store the payload (advanced users only).

    512K

    Signal support by transport:

    Signal
    HTTP/JSON
    HTTP/Protobuf
    gRPC

    Logs
    Stable
    Stable
    Stable

    Metrics
    Unimplemented
    Stable
    Stable

    Traces
    Unimplemented
    Stable
    Stable

    Note: macOS does not have the proc filesystem, so this plugin will not work on it.

    hashtag
    Configuration

    Key
    Description
    Default

    scrape_interval

    The rate at which metrics are collected.

    5 seconds

    path.procfs

    The mount point used to collect process information and metrics. Read-only access is sufficient.

    /proc/

    process_include_pattern

    A regex that determines which process names are included in the metrics produced by this plugin.

    Applied to all processes unless explicitly set. Default is .+.

    process_exclude_pattern

    A regex that determines which process names are excluded from the metrics produced by this plugin.

    Not applied unless explicitly set. Default is NULL.
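As an illustration, the two patterns can be combined; include is applied first, then exclude. The process names below are hypothetical and only show the matching behavior:

```
[INPUT]
    name                    process_exporter_metrics
    scrape_interval         5
    # hypothetical example: collect sshd and cron*, but skip cron-helper
    process_include_pattern (sshd|cron.*)
    process_exclude_pattern cron-helper
```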

    hashtag
    Metrics Available

    Name
    Description

    cpu

    Exposes CPU statistics from /proc.

    io

    Exposes I/O statistics from /proc.

    memory

    Exposes memory statistics from /proc.

    state

    Exposes process state statistics from /proc.

    context_switches

    Exposes context_switches statistics from /proc.

    fd

    Exposes file descriptors statistics from /proc.

    hashtag
    Threading

    This input always runs in its own thread.

    hashtag
    Getting Started

    hashtag
    Simple Configuration File

    In the following configuration file, the input plugin process_exporter_metrics collects metrics every 2 seconds and exposes them through our Prometheus Exporter output plugin on HTTP/TCP port 2021.

    You can see the metrics by using curl:

    hashtag
    Container to Collect Host Metrics

    When deploying Fluent Bit in a container you will need to specify additional settings to ensure that Fluent Bit has access to the process details. The following docker command deploys Fluent Bit with a specific mount path for procfs and settings enabled to ensure that Fluent Bit can collect from the host. These are then exposed over port 2021.

    hashtag
    Enhancement Requests

    Development prioritises a subset of the collectors available in the 3rd party implementation of Prometheus Process Exporterarrow-up-right; to request others, please open a GitHub issue by using the following template: - in_process_exporter_metricsarrow-up-right

    The plugin supports the following configuration parameters:
    Key
    Description

    Proc_Name

    Name of the target process to check.

    Interval_Sec

    Interval in seconds between the service checks. Default value is 1.

    Interval_Nsec

    Specify a nanosecond interval for service checks; it works in conjunction with the Interval_Sec configuration key. Default value is 0.

    Alert

    If enabled, messages are generated only when the target process is down. By default this option is disabled.

    Fd

    If enabled, the number of file descriptors is appended to each record. Default value is true.

    Mem

    If enabled, the memory usage of the process is appended to each record. Default value is true.

    hashtag
    Getting Started

    In order to start performing the checks, you can run the plugin from the command line or through the configuration file:

    The following example will check the health of the crond process.

    hashtag
    Configuration File

    In your main configuration file append the following Input & Output sections:

    hashtag
    Testing

    Once Fluent Bit is running, you will see the health of the process:

    Path

    Optional path to the Systemd journal directory; if not set, the plugin will use default paths to read local-only logs.

    Max_Fields

    Set a maximum number of fields (keys) allowed per record.

    8000

    Max_Entries

    When Fluent Bit starts, the Journal might have a high number of logs in the queue. To avoid delays and reduce memory usage, this option allows specifying the maximum number of log entries that can be processed per round. Once the limit is reached, Fluent Bit will continue processing the remaining log entries once Journald performs the notification.

    5000

    Systemd_Filter

    Allows performing a query over logs that contain specific Journald key/value pairs, e.g. _SYSTEMD_UNIT=UNIT. The Systemd_Filter option can be specified multiple times in the input section to apply multiple filters as required.

    Systemd_Filter_Type

    Define the filter type when Systemd_Filter is specified multiple times. Allowed values are And and Or. With And, a record matches only when all of the Systemd_Filter entries match. With Or, a record matches when any of the Systemd_Filter entries matches.

    Or
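For example, a sketch combining multiple filters with And; the unit and priority values here are illustrative:

```
[INPUT]
    Name                systemd
    Tag                 host.*
    Systemd_Filter      _SYSTEMD_UNIT=docker.service
    Systemd_Filter      PRIORITY=3
    Systemd_Filter_Type And
```

With this configuration a record is emitted only when it comes from docker.service and has priority 3.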

    Tag

    hashtag
    Getting Started

    In order to receive Systemd messages, you can run the plugin from the command line or through the configuration file:

    hashtag
    Command Line

    From the command line you can let Fluent Bit listen for Systemd messages with the following options:

    In the example above we are collecting all messages coming from the Docker service.

    hashtag
    Configuration File

    In your main configuration file append the following Input & Output sections:

    Listen

    Listener network interface.

    0.0.0.0

    Port

    UDP port to listen for connections on

    5170

    Buffer_Size

    Specify the maximum buffer size in KB to receive a JSON message. If not set, the default size will be the value of Chunk_Size.

    Chunk_Size

    By default, the buffer that stores incoming JSON messages does not allocate the maximum memory allowed; instead, it allocates memory as required. The allocations happen in rounds of Chunk_Size KB. If not set, Chunk_Size is equal to 32 (32KB).

    32

    Format

    Specify the expected payload format. It supports the options json and none. When set to json, it expects JSON maps; when set to none, it will split every record using the defined Separator (option below).

    json

    Separator

    hashtag
    Getting Started

    In order to receive JSON messages over UDP, you can run the plugin from the command line or through the configuration file:

    hashtag
    Command Line

    From the command line you can let Fluent Bit listen for JSON messages with the following options:

    By default the service will listen on all interfaces (0.0.0.0) through UDP port 5170; optionally you can change this directly, e.g:

    In the example, the JSON messages will only arrive through the network interface at address 192.168.3.2 and UDP port 9090.

    hashtag
    Configuration File

    In your main configuration file append the following Input & Output sections:

    hashtag
    Testing

    Once Fluent Bit is running, you can send some messages using netcat:

    In Fluent Bitarrow-up-right we should see the following output:

    hashtag
    Performance Considerations

    When receiving payloads in JSON format, there are high performance penalties. Parsing JSON is an expensive task, so you can expect CPU usage to increase under high load environments.

    For faster data ingestion, consider using the option Format none to avoid JSON parsing when it is not needed.
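A minimal sketch of a udp input that skips JSON parsing; it assumes the incoming records are raw text split on the default line break separator:

```
[INPUT]
    Name   udp
    Listen 0.0.0.0
    Port   5170
    Format none

[OUTPUT]
    Name   stdout
    Match  *
```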

    The address to listen on

    0.0.0.0

    port

    The port for Fluent Bit to listen on

    8080

    buffer_max_size

    Specify the maximum buffer size in KB to receive a JSON message.

    4M

    buffer_chunk_size

    This sets the chunk size for incoming JSON messages. These chunks are then stored/managed in the space made available by buffer_max_size.

    512K

    successful_response_code

    Allows setting the successful response code. The values 200, 201 and 204 are supported.

    201

    tag_from_uri

    If true, the tag will be created from the URI, e.g. api_prom_push from /api/prom/push, and any tag specified in the configuration will be ignored. If false, a tag must be provided in the configuration for this input.

    true

    uri

    Specify an optional HTTP URI for the target web server listening for Prometheus remote write payloads, e.g. /api/prom/push

    threaded

    Indicates whether to run this input in its own thread.

    false

    A sample config file to get started will look something like the following:

    [INPUT]
        name prometheus_remote_write
        listen 127.0.0.1
        port 8080
        uri /api/prom/push
    
    [OUTPUT]
        name stdout
        match *
    pipeline:
        inputs:
            - name: prometheus_remote_write
              listen: 127.0.0.1
              port: 8080
              uri: /api/prom/push
        outputs:
            - name: stdout
              match: '*'
    

    With the above configuration, Fluent Bit will listen on port 8080 for data. You can now send payloads in Prometheus remote write format to the endpoint /api/prom/push.

    hashtag
    Examples

    hashtag
    Communicate with TLS

    Prometheus Remote Write input plugin supports TLS/SSL, for more details about the properties available and general configuration, please refer to the TLS/SSL section.

    To communicate with TLS, you will need to use the tls-related parameters:

    Now, you should be able to send data over TLS to the remote write input.
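As a sketch, a client could then post a payload over TLS with curl. Here payload.bin is a hypothetical file containing a snappy-compressed Prometheus remote write protobuf message, and -k is used only because the example certificate is assumed to be self-signed:

```
curl -k -XPOST https://127.0.0.1:8080/api/prom/push \
     -H "Content-Type: application/x-protobuf" \
     -H "Content-Encoding: snappy" \
     -H "X-Prometheus-Remote-Write-Version: 0.1.0" \
     --data-binary @payload.bin
```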


    Standard Input

    The stdin plugin supports retrieving a message stream from the standard input interface (stdin) of the Fluent Bit process. In order to use it, specify the plugin name as the input, e.g:

    If the stdin stream is closed (end-of-file), the stdin plugin will instruct Fluent Bit to exit with success (0) after flushing any pending output.

    hashtag
    Input formats

    If no parser is configured for the stdin plugin, it expects valid JSON input data in one of the following formats:

    1. A JSON object with one or more key-value pairs: { "key": "value", "key2": "value2" }

    2. A 2-element JSON array in Fluent Bit Event format, which may be:

    • [TIMESTAMP, { "key": "value" }] where TIMESTAMP is a floating point value representing a timestamp in seconds; or

    • from Fluent Bit v2.1.0, [[TIMESTAMP, METADATA], { "key": "value" }] where TIMESTAMP has the same meaning as above and METADATA is a JSON object.

    Multi-line input JSON is supported.

    Any input data that is not in one of the above formats will cause the plugin to log errors like:

    To handle inputs in other formats, a parser must be explicitly specified in the configuration for the stdin plugin. See the parser input example below for a sample configuration.

    hashtag
    Log event timestamps

    The Fluent Bit event timestamp will be set from the input record if the 2-element event input is used or a custom parser configuration supplies a timestamp. Otherwise the event timestamp will be set to the timestamp at which the record is read by the stdin plugin.

    hashtag
    Examples

    hashtag
    Json input example

    A better way to demonstrate how it works is through a Bash script that generates messages and writes them to the standard input of Fluent Bit. Write the following content in a file named test.sh:

    Now let's start the script and Fluent Bit:

    hashtag
    Json input with timestamp example

    An input event timestamp may also be supplied. Replace test.sh with:

    Re-run the sample command. Note that the timestamps output by Fluent Bit are now one day old because Fluent Bit used the input message timestamp.

    hashtag
    Json input with metadata example

    Additional metadata is also supported on Fluent Bit v2.1.0 and above by replacing the timestamp with a 2-element array, e.g.:

    On older Fluent Bit versions records in this format will be discarded. Fluent Bit will log:

    if the log level permits.

    hashtag
    Parser input example

    To capture inputs in other formats, specify a parser configuration for the stdin plugin.

    For example, if you want to read raw messages line-by-line and forward them you could use a parser.conf that captures the whole message line:

    then use that in the parser clause of the stdin plugin in the fluent-bit.conf:

    Fluent Bit will now read each line and emit a single message for each input line:

    In real-world deployments it is best to use a more realistic parser that splits messages into real fields and adds appropriate tags.

    hashtag
    Configuration Parameters

    The plugin supports the following configuration parameters:

    Key
    Description
    Default

    HTTP

    The HTTP input plugin allows you to send custom records to an HTTP endpoint.

    hashtag
    Configuration Parameters

    Key

    Description

    default

    listen

    The address to listen on

    0.0.0.0

    hashtag
    TLS / SSL

    The HTTP input plugin supports TLS/SSL; for more details about the properties available and general configuration, please refer to the Transport Security section.

    hashtag
    gzipped content

    The HTTP input plugin will accept and automatically handle gzipped content as of v2.2.1 as long as the header Content-Encoding: gzip is set on the received data.
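For instance, a request with a gzip-compressed body might look like the following sketch (the payload file name is hypothetical):

```
# compress a JSON record, then send it with the gzip content encoding
echo '{"key1":"value1","key2":"value2"}' | gzip > payload.json.gz

curl --data-binary @payload.json.gz \
     -H "Content-Encoding: gzip" \
     -H "Content-Type: application/json" \
     http://localhost:8888/app.log
```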

    hashtag
    Getting Started

    The http input plugin allows Fluent Bit to open up an HTTP port that you can then route data to in a dynamic way. This plugin supports dynamic tags, which allow you to send data with different tags through the same input. An example video and curl message can be seen below.

    hashtag
    How to set tag

    The tag for the HTTP input plugin is set by adding the tag to the end of the request URL. This tag is then used to route the event through the system. For example, in the following curl message below the tag set is app.log because the end path is /app.log:

    hashtag
    Curl request

    hashtag
    Configuration File

    If you do not set the tag, http.0 is automatically used. If you have multiple HTTP inputs then they will follow a pattern of http.N, where N is an integer representing the input.

    hashtag
    Curl request

    hashtag
    Configuration File

    hashtag
    How to set tag_key

    The tag_key configuration option allows specifying the key name that will be used to overwrite a tag. The tag's value will be replaced with the value associated with the specified key. For example, if tag_key is set to "custom_tag" and the log event contains a JSON field with the key "custom_tag", Fluent Bit will use the value of that field as the new tag for routing the event through the system.

    hashtag
    Curl request

    hashtag
    Configuration File

    hashtag
    How to set multiple custom HTTP header on success

    The success_header parameter allows setting multiple HTTP headers on success. The format is:

    hashtag
    Example Curl message

    hashtag
    Configuration File

    hashtag
    Command Line

    TCP

    The tcp input plugin allows retrieving structured JSON or raw messages over a TCP network interface (TCP port).

    hashtag
    Configuration Parameters

    The plugin supports the following configuration parameters:

    Key
    Description
    Default

    hashtag
    Getting Started

    In order to receive JSON messages over TCP, you can run the plugin from the command line or through the configuration file:

    hashtag
    Command Line

    From the command line you can let Fluent Bit listen for JSON messages with the following options:

    By default the service will listen on all interfaces (0.0.0.0) through TCP port 5170; optionally you can change this directly, e.g:

    In the example, the JSON messages will only arrive through the network interface at address 192.168.3.2 and TCP port 9090.

    hashtag
    Configuration File

    In your main configuration file append the following Input & Output sections:

    hashtag
    Testing

    Once Fluent Bit is running, you can send some messages using netcat:

    In Fluent Bit we should see the following output:

    hashtag
    Performance Considerations

    When receiving payloads in JSON format, there are high performance penalties. Parsing JSON is an expensive task, so you can expect CPU usage to increase under high load environments.

    For faster data ingestion, consider using the option Format none to avoid JSON parsing when it is not needed.
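To try the faster raw mode from the command line, a sketch like the following could be used; with format none each received line becomes one record:

```
$ fluent-bit -i tcp -p format=none -o stdout
$ echo 'raw text line' | nc 127.0.0.1 5170
```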

    Head

    The head input plugin allows reading events from the head of a file. Its behavior is similar to the head command.

    hashtag
    Configuration Parameters

    The plugin supports the following configuration parameters:

    Key
    Description

    hashtag
    Split Line Mode

    This mode is useful to get a specific line. This is an example to get the CPU frequency from /proc/cpuinfo.

    /proc/cpuinfo is a special file that exposes CPU information.

    The CPU frequency is the line "cpu MHz : 2791.009". We can get that line with this configuration file.
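A sketch of such a configuration; the plugin reads the first lines of the file and Split_line breaks them into individual records. The number of lines shown is illustrative:

```
[INPUT]
    Name       head
    Tag        head.cpu
    File       /proc/cpuinfo
    Lines      8
    Split_line true

[OUTPUT]
    Name   stdout
    Match  *
```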

    The output is:

    hashtag
    Getting Started

    In order to read the head of a file, you can run the plugin from the command line or through the configuration file:

    hashtag
    Command Line

    The following example will read events from the /proc/uptime file, tag the records with the uptime name and flush them back to the stdout plugin:

    hashtag
    Configuration File

    In your main configuration file append the following Input & Output sections:

    Note: Total interval (sec) = Interval_Sec + (Interval_Nsec / 1000000000).

    e.g. 1.5s = 1s + 500000000ns
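Following that formula, a 1.5 second interval could be sketched like this (file and tag are illustrative):

```
[INPUT]
    Name          head
    Tag           uptime
    File          /proc/uptime
    Interval_Sec  1
    Interval_NSec 500000000

[OUTPUT]
    Name   stdout
    Match  *
```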

    Forward

    Forward is the protocol used by Fluent Bit and Fluentd to route messages between peers. This plugin implements the input service to listen for Forward messages.

    hashtag
    Configuration Parameters

    The plugin supports the following configuration parameters:

    Key
    Description
    pipeline:
        inputs:
            - name: opentelemetry
              listen: 127.0.0.1
              port: 4318
        outputs:
            - name: stdout
              match: '*'
    [INPUT]
        name opentelemetry
        listen 127.0.0.1
        port 4318
    
    [OUTPUT]
        name stdout
        match *
    curl --header "Content-Type: application/json" --request POST --data '{"resourceLogs":[{"resource":{},"scopeLogs":[{"scope":{},"logRecords":[{"timeUnixNano":"1660296023390371588","body":{"stringValue":"{\"message\":\"dummy\"}"},"traceId":"","spanId":""}]}]}]}'   http://0.0.0.0:4318/v1/logs
    # Process Exporter Metrics + Prometheus Exporter
    # -------------------------------------------
    # The following example collect host metrics on Linux and expose
    # them through a Prometheus HTTP end-point.
    #
    # After starting the service try it with:
    #
    # $ curl http://127.0.0.1:2021/metrics
    #
    [SERVICE]
        flush           1
        log_level       info
    
    [INPUT]
        name            process_exporter_metrics
        tag             process_metrics
        scrape_interval 2
    
    [OUTPUT]
        name            prometheus_exporter
        match           process_metrics
        host            0.0.0.0
        port            2021
    curl http://127.0.0.1:2021/metrics
    docker run -ti -v /proc:/host/proc:ro \
                   -p 2021:2021        \
                   fluent/fluent-bit:2.2 \
                   /fluent-bit/bin/fluent-bit \
                             -i process_exporter_metrics -p path.procfs=/host/proc  \
                             -o prometheus_exporter \
                             -f 1
    $ fluent-bit -i proc -p proc_name=crond -o stdout
    [INPUT]
        Name          proc
        Proc_Name     crond
        Interval_Sec  1
        Interval_NSec 0
        Fd            true
        Mem           true
    
    [OUTPUT]
        Name   stdout
        Match  *
    $ fluent-bit -i proc -p proc_name=fluent-bit -o stdout
    Fluent Bit v1.x.x
    * Copyright (C) 2019-2020 The Fluent Bit Authors
    * Copyright (C) 2015-2018 Treasure Data
    * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
    * https://fluentbit.io
    
    [2017/01/30 21:44:56] [ info] [engine] started
    [0] proc.0: [1485780297, {"alive"=>true, "proc_name"=>"fluent-bit", "pid"=>10964, "mem.VmPeak"=>14740000, "mem.VmSize"=>14740000, "mem.VmLck"=>0, "mem.VmHWM"=>1120000, "mem.VmRSS"=>1120000, "mem.VmData"=>2276000, "mem.VmStk"=>88000, "mem.VmExe"=>1768000, "mem.VmLib"=>2328000, "mem.VmPTE"=>68000, "mem.VmSwap"=>0, "fd"=>18}]
    [1] proc.0: [1485780298, {"alive"=>true, "proc_name"=>"fluent-bit", "pid"=>10964, "mem.VmPeak"=>14740000, "mem.VmSize"=>14740000, "mem.VmLck"=>0, "mem.VmHWM"=>1148000, "mem.VmRSS"=>1148000, "mem.VmData"=>2276000, "mem.VmStk"=>88000, "mem.VmExe"=>1768000, "mem.VmLib"=>2328000, "mem.VmPTE"=>68000, "mem.VmSwap"=>0, "fd"=>18}]
    [2] proc.0: [1485780299, {"alive"=>true, "proc_name"=>"fluent-bit", "pid"=>10964, "mem.VmPeak"=>14740000, "mem.VmSize"=>14740000, "mem.VmLck"=>0, "mem.VmHWM"=>1152000, "mem.VmRSS"=>1148000, "mem.VmData"=>2276000, "mem.VmStk"=>88000, "mem.VmExe"=>1768000, "mem.VmLib"=>2328000, "mem.VmPTE"=>68000, "mem.VmSwap"=>0, "fd"=>18}]
    [3] proc.0: [1485780300, {"alive"=>true, "proc_name"=>"fluent-bit", "pid"=>10964, "mem.VmPeak"=>14740000, "mem.VmSize"=>14740000, "mem.VmLck"=>0, "mem.VmHWM"=>1152000, "mem.VmRSS"=>1148000, "mem.VmData"=>2276000, "mem.VmStk"=>88000, "mem.VmExe"=>1768000, "mem.VmLib"=>2328000, "mem.VmPTE"=>68000, "mem.VmSwap"=>0, "fd"=>18}]
    [SERVICE]
        Flush        1
        Log_Level    info
        Parsers_File parsers.conf
    
    [INPUT]
        Name            systemd
        Tag             host.*
        Systemd_Filter  _SYSTEMD_UNIT=docker.service
    
    [OUTPUT]
        Name   stdout
        Match  *
    service:
        flush: 1
        log_level: info
        parsers_file: parsers.conf
    pipeline:
        inputs:
            - name: systemd
              tag: host.*
              systemd_filter: _SYSTEMD_UNIT=docker.service
        outputs:
            - name: stdout
              match: '*'
    $ fluent-bit -i systemd \
                 -p systemd_filter=_SYSTEMD_UNIT=docker.service \
                 -p tag='host.*' -o stdout
    [INPUT]
        Name        udp
        Listen      0.0.0.0
        Port        5170
        Chunk_Size  32
        Buffer_Size 64
        Format      json
    
    [OUTPUT]
        Name        stdout
        Match       *
    pipeline:
        inputs:
            - name: udp
              listen: 0.0.0.0
              port: 5170
              chunk_size: 32
              buffer_size: 64
              format: json
        outputs:
            - name: stdout
              match: '*'
    $ fluent-bit -i udp -o stdout
    $ fluent-bit -i udp -pport=9090 -o stdout
    $ echo '{"key 1": 123456789, "key 2": "abcdefg"}' | nc -u 127.0.0.1 5170
    $ bin/fluent-bit -i udp -o stdout -f 1
    Fluent Bit v2.x.x
    * Copyright (C) 2015-2022 The Fluent Bit Authors
    * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
    * https://fluentbit.io
    
    [2023/07/21 13:01:03] [ info] [fluent bit] version=2.1.7, commit=2474ccc759, pid=9677
    [2023/07/21 13:01:03] [ info] [storage] ver=1.2.0, type=memory, sync=normal, checksum=off, max_chunks_up=128
    [2023/07/21 13:01:03] [ info] [cmetrics] version=0.6.3
    [2023/07/21 13:01:03] [ info] [ctraces ] version=0.3.1
    [2023/07/21 13:01:03] [ info] [input:udp:udp.0] initializing
    [2023/07/21 13:01:03] [ info] [input:udp:udp.0] storage_strategy='memory' (memory only)
    [2023/07/21 13:01:03] [ info] [output:stdout:stdout.0] worker #0 started
    [2023/07/21 13:01:03] [ info] [sp] stream processor started
    [0] udp.0: [[1689912069.078189000, {}], {"key 1"=>123456789, "key 2"=>"abcdefg"}]
    [INPUT]
        Name prometheus_remote_write
        Listen 127.0.0.1
        Port 8080
        Uri /api/prom/push
        Tls On
        tls.crt_file /path/to/certificate.crt
        tls.key_file /path/to/certificate.key
    $ fluent-bit -i stdin -o stdout

    metrics

    Specifies which process-level metrics are collected from the host operating system. These metrics depend on /proc; the actual values will be read from /proc when needed. The cpu, io, memory, state, context_switches, fd, start_time, thread_wchan and thread metrics depend on procfs.

    cpu,io,memory,state,context_switches,fd,start_time,thread_wchan,thread
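For instance, a hypothetical reduced set that only tracks CPU, memory and file descriptor metrics could be sketched as:

```
[INPUT]
    name    process_exporter_metrics
    metrics cpu,memory,fd
```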

    start_time

    Exposes start_time statistics from /proc.

    thread_wchan

    Exposes thread_wchan from /proc.

    thread

    Exposes thread statistics from /proc.

    The tag is used to route messages but on Systemd plugin there is an extra functionality: if the tag includes a star/wildcard, it will be expanded with the Systemd Unit file (_SYSTEMD_UNIT, e.g. host.* => host.UNIT_NAME) or unknown (e.g. host.unknown) if _SYSTEMD_UNIT is missing.
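For example, with the sketch below, records coming from docker.service are tagged host.docker.service, so the output can match them specifically (the unit name is illustrative):

```
[INPUT]
    Name systemd
    Tag  host.*

[OUTPUT]
    Name  stdout
    Match host.docker.service
```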

    DB

    Specify the absolute path of a database file to keep track of Journald cursor.

    DB.Sync

    Set a default synchronization (I/O) method. Values: Extra, Full, Normal, Off. This flag affects how the internal SQLite engine does synchronization to disk; for more details about each option please refer to this sectionarrow-up-right. Note: this option was introduced in Fluent Bit v1.4.6.

    Full

    Read_From_Tail

    Start reading new entries. Skip entries already stored in Journald.

    Off

    Lowercase

    Lowercase the Journald field (key).

    Off

    Strip_Underscores

    Remove the leading underscore of the Journald field (key). For example the Journald field _PID becomes the key PID.

    Off

    Threaded

    Indicates whether to run this input in its own thread.

    false

    When the expected Format is set to none, Fluent Bit needs a separator string to split the records. By default it uses the line break character (LF, 0x0A).

    Source_Address_Key

    Specify the key where the source address will be injected.

    Threaded

    Indicates whether to run this input in its own thread.

    false


    Buffer_Size

    Set the buffer size to read data. This value is used to increase the buffer size. The value must conform to the Unit Size specification.

    16k

    Parser

    The name of the parser to invoke instead of the default JSON input parser

    Threaded

    Indicates whether to run this input in its own thread.

    false


    port

    The port for Fluent Bit to listen on

    9880

    tag_key

    Specify the key name to overwrite a tag. If set, the tag will be overwritten by a value of the key.

    buffer_max_size

    Specify the maximum buffer size in KB to receive a JSON message.

    4M

    buffer_chunk_size

    This sets the chunk size for incoming JSON messages. These chunks are then stored/managed in the space made available by buffer_max_size.

    512K

    successful_response_code

    Allows setting the successful response code. The values 200, 201 and 204 are supported.

    201

    success_header

    Add an HTTP header key/value pair on success. Multiple headers can be set. Example: X-Custom custom-answer

    threaded

    Indicates whether to run this input in its own thread.

    false

    [debug] [input:stdin:stdin.0] invalid JSON message, skipping
    [error] [input:stdin:stdin.0] invalid record found, it's not a JSON map or array
    #!/bin/bash
    
    for ((i=0; i<=5; i++)); do
      echo -n "{\"key\": \"some value\"}"
      sleep 1
    done
    $ bash test.sh | fluent-bit -q -i stdin -o stdout
    [0] stdin.0: [[1684196745.942883835, {}], {"key"=>"some value"}]
    [0] stdin.0: [[1684196746.938949056, {}], {"key"=>"some value"}]
    [0] stdin.0: [[1684196747.940162493, {}], {"key"=>"some value"}]
    [0] stdin.0: [[1684196748.941392297, {}], {"key"=>"some value"}]
    [0] stdin.0: [[1684196749.942644238, {}], {"key"=>"some value"}]
    [0] stdin.0: [[1684196750.943721442, {}], {"key"=>"some value"}]
    #!/bin/bash
    
    for ((i=0; i<=5; i++)); do
      echo -n "
        [
          $(date '+%s.%N' -d '1 day ago'),
          {
            \"realtimestamp\": $(date '+%s.%N')
          }
        ]
      "
      sleep 1
    done
    $ bash test.sh | fluent-bit -q -i stdin -o stdout
    [0] stdin.0: [[1684110480.028171300, {}], {"realtimestamp"=>1684196880.030070}]
    [0] stdin.0: [[1684110481.033753395, {}], {"realtimestamp"=>1684196881.034741}]
    [0] stdin.0: [[1684110482.036730051, {}], {"realtimestamp"=>1684196882.037704}]
    [0] stdin.0: [[1684110483.039903879, {}], {"realtimestamp"=>1684196883.041081}]
    [0] stdin.0: [[1684110484.044719457, {}], {"realtimestamp"=>1684196884.046404}]
    [0] stdin.0: [[1684110485.048710107, {}], {"realtimestamp"=>1684196885.049651}]
    #!/bin/bash
    for ((i=0; i<=5; i++)); do
      echo -n "
        [
          [
            $(date '+%s.%N' -d '1 day ago'),
    	{\"metakey\": \"metavalue\"}
          ],
          {
            \"realtimestamp\": $(date '+%s.%N')
          }
        ]
      "
      sleep 1
    done
    $ bash ./test.sh | fluent-bit -q -i stdin -o stdout
    [0] stdin.0: [[1684110513.060139417, {"metakey"=>"metavalue"}], {"realtimestamp"=>1684196913.061017}]
    [0] stdin.0: [[1684110514.063085317, {"metakey"=>"metavalue"}], {"realtimestamp"=>1684196914.064145}]
    [0] stdin.0: [[1684110515.066210508, {"metakey"=>"metavalue"}], {"realtimestamp"=>1684196915.067155}]
    [0] stdin.0: [[1684110516.069149971, {"metakey"=>"metavalue"}], {"realtimestamp"=>1684196916.070132}]
    [0] stdin.0: [[1684110517.072484016, {"metakey"=>"metavalue"}], {"realtimestamp"=>1684196917.073636}]
    [0] stdin.0: [[1684110518.075428724, {"metakey"=>"metavalue"}], {"realtimestamp"=>1684196918.076292}]
    [ warn] unknown time format 6
    [PARSER]
        name        stringify_message
        format      regex
        Key_Name    message
        regex       ^(?<message>.*)
    [INPUT]
        Name    stdin
        Tag     stdin
        Parser  stringify_message
    
    [OUTPUT]
        Name   stdout
        Match  *
    pipeline:
        inputs:
            - name: stdin
              tag: stdin
              parser: stringify_message
        outputs:
            - name: stdout
              match: '*'
    $ seq 1 5 | /opt/fluent-bit/bin/fluent-bit -c fluent-bit.conf -R parser.conf -q
    [0] stdin: [1681358780.517029169, {"message"=>"1"}]
    [1] stdin: [1681358780.517068334, {"message"=>"2"}]
    [2] stdin: [1681358780.517072116, {"message"=>"3"}]
    [3] stdin: [1681358780.517074758, {"message"=>"4"}]
    [4] stdin: [1681358780.517077392, {"message"=>"5"}]
    $
    curl -d '{"key1":"value1","key2":"value2"}' -XPOST -H "content-type: application/json" http://localhost:8888/app.log
    [INPUT]
        name http
        listen 0.0.0.0
        port 8888
    
    [OUTPUT]
        name stdout
        match app.log
    pipeline:
        inputs:
            - name: http
              listen: 0.0.0.0
              port: 8888
        outputs:
            - name: stdout
              match: app.log
    curl -d '{"key1":"value1","key2":"value2"}' -XPOST -H "content-type: application/json" http://localhost:8888
    [INPUT]
        name http
        listen 0.0.0.0
        port 8888
    
    [OUTPUT]
        name  stdout
        match  http.0
    pipeline:
        inputs:
            - name: http
              listen: 0.0.0.0
              port: 8888
        outputs:
            - name: stdout
              match: http.0
    curl -d '{"key1":"value1","key2":"value2"}' -XPOST -H "content-type: application/json" http://localhost:8888/app.log
    [INPUT]
        name http
        listen 0.0.0.0
        port 8888
        tag_key key1
    
    [OUTPUT]
        name stdout
        match value1
    pipeline:
        inputs:
            - name: http
              listen: 0.0.0.0
              port: 8888
              tag_key: key1
        outputs:
            - name: stdout
              match: value1
    [INPUT]
        name http
        success_header X-Custom custom-answer
        success_header X-Another another-answer
        inputs:
            - name: http
              success_header: X-Custom custom-answer
              success_header: X-Another another-answer
    curl -d @app.log -XPOST -H "content-type: application/json" http://localhost:8888/app.log
    [INPUT]
        name http
        listen 0.0.0.0
        port 8888
    
    [OUTPUT]
        name stdout
        match *
    pipeline:
        inputs:
            - name: http
              listen: 0.0.0.0
              port: 8888
    
        outputs:
            - name: stdout
              match: '*'
    $ fluent-bit -i http -p port=8888 -o stdout

    Source_Address_Key

    Specify the key where the source address will be injected.

    Threaded

    Indicates whether to run this input in its own thread.

    false

    Listen

    Listener network interface.

    0.0.0.0

    Port

    TCP port where listening for connections

    5170

    Buffer_Size

    Specify the maximum buffer size in KB to receive a JSON message. If not set, the default size will be the value of Chunk_Size.

    Chunk_Size

    By default, the buffer that stores incoming JSON messages does not allocate the maximum memory allowed up front; instead it allocates memory as required. The rounds of allocation are set by Chunk_Size, in KB. If not set, Chunk_Size is 32 (32KB).

    32

    Format

    Specify the expected payload format. It supports the options json and none. With json, it expects JSON maps; with none, it splits every record using the defined Separator (option below).

    json

    Separator

    When the expected Format is set to none, Fluent Bit needs a separator string to split the records. By default it uses the line feed character (LF, 0x0A).

    Threaded

    Indicates whether to run this input in its own thread. Default: false.
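    When Format is set to none, each separator-delimited chunk becomes its own record. As a minimal sketch (the port is illustrative, and the default newline Separator is relied upon):

```
[INPUT]
    Name    tcp
    Listen  0.0.0.0
    Port    5170
    Format  none

[OUTPUT]
    Name   stdout
    Match  *
```

    With this configuration, a payload of two newline-delimited lines sent to port 5170 produces two separate records.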

    File

    Absolute path to the target file, e.g: /proc/uptime

    Buf_Size

    Buffer size to read the file.

    Interval_Sec

    Polling interval (seconds).

    Interval_NSec

    Polling interval (nanosecond).

    Add_Path

    If enabled, the file path is appended to each record. Default value is false.

    Key

    Set the key name for the record. Default: head.

    Lines

    Number of lines to read. If N is set, in_head reads the first N lines, like head(1) -n.

    Split_line

    If enabled, in_head generates key-value pair per line.

    Default

    Listen

    Listener network interface.

    0.0.0.0

    Port

    TCP port to listen for incoming connections.

    24224

    Unix_Path

    Specify the path to unix socket to receive a Forward message. If set, Listen and Port are ignored.

    Unix_Perm

    Set the permission of the unix socket file. If Unix_Path is not set, this parameter is ignored.

    Buffer_Max_Size

    Specify the maximum buffer memory size used to receive a Forward message. The value must follow the Unit Size specification.

    6144000

    Buffer_Chunk_Size

    hashtag
    Getting Started

    In order to receive Forward messages, you can run the plugin from the command line or through the configuration file as shown in the following examples.

    hashtag
    Command Line

    From the command line you can let Fluent Bit listen for Forward messages with the following options:

    By default the service will listen on all interfaces (0.0.0.0) through TCP port 24224; optionally, you can change this directly, e.g.:

    In the example, Forward messages will only arrive through the network interface with address 192.168.3.2 and TCP port 9090.

    hashtag
    Configuration File

    In your main configuration file append the following Input & Output sections:

    hashtag
    Fluent Bit + Secure Forward Setup

    Since Fluent Bit v3, in_forward can handle secure forward protocol.

    To use user-password authentication, specify at least one pair under security.users. To use a shared key, specify shared_key in both the forward output and the forward input. self_hostname must not be set to the same hostname on the Fluent servers and clients.

    hashtag
    Testing

    Once Fluent Bit is running, you can send some messages using the fluent-cat tool (this tool is provided by Fluentd):

    In the Fluent Bit console we should see the following output:


    Threaded

    Indicates whether to run this input in its own thread. Default: false.

    Exec

    The exec input plugin allows you to execute external programs and collect event logs.

    WARNING: Because this plugin invokes commands via a shell, its inputs are subject to shell metacharacter substitution. Careless use of untrusted input in command arguments could lead to malicious command execution.

    hashtag
    Container support

    This plugin will not function in the distroless production images, as it needs a functional /bin/sh, which is not present. The debug images use the same binaries, so even though they include a shell, this plugin is not supported there because it is compiled out.

    hashtag
    Configuration Parameters

    The plugin supports the following configuration parameters:

    Key
    Description

    hashtag
    Getting Started

    You can run the plugin from the command line or through the configuration file:

    hashtag
    Command Line

    The following example will read events from the output of ls.

    hashtag
    Configuration File

    In your main configuration file append the following Input & Output sections:

    hashtag
    Use as a command wrapper

    To use fluent-bit with the exec plugin to wrap another command, use the Exit_After_Oneshot and Propagate_Exit_Code options, e.g.:

    fluent-bit will output

    then exit with exit code 1.

    Translation of command exit code(s) to the fluent-bit exit code follows the usual shell rules for exit code handling. Like with a shell, there is no way to differentiate between the command exiting on a signal and the shell exiting on a signal, and no way to differentiate between normal exits with codes greater than 125 and abnormal or signal exits reported by fluent-bit or the shell. Wrapped commands should use exit codes between 0 and 125 inclusive to allow reliable identification of normal exit. If the command is a pipeline, the exit code will be the exit code of the last command in the pipeline unless overridden by shell options.
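    The pipeline rule above can be checked in any shell, independent of fluent-bit:

```shell
# A pipeline's exit code is that of its last command, unless a
# shell option such as pipefail overrides it.
bash -c 'false | true'
echo "exit: $?"    # exit: 0 -- last command (true) succeeded

bash -c 'true | false'
echo "exit: $?"    # exit: 1 -- last command (false) failed
```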

    hashtag
    Parsing command output

    By default the exec plugin emits one message per command output line, with a single field exec containing the full message. Use the Parser directive to specify the name of a parser configuration to use to process the command input.
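    As an illustrative sketch (the parser name and regex here are hypothetical, and the [PARSER] block belongs in the file loaded via Parsers_File), the two fields of /proc/uptime could be split into a structured record like this:

```
[PARSER]
    Name   uptime_fields
    Format regex
    Regex  ^(?<uptime_sec>[\d.]+)\s+(?<idle_sec>[\d.]+)

[INPUT]
    Name    exec
    Tag     uptime_parsed
    Command cat /proc/uptime
    Parser  uptime_fields

[OUTPUT]
    Name  stdout
    Match *
```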

    hashtag
    Security concerns

    Take great care with shell quoting and escaping when wrapping commands. A script like

    can ruin your day if someone passes it the argument $(rm -rf /my/important/files; echo "deleted your stuff!")

    The above script would be safer if written with:

    ... but it's generally best to avoid dynamically generating the command or handling untrusted arguments to it at all.
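    The printf '%q' idiom used above can be seen in isolation; this standalone bash snippet (unrelated to any fluent-bit API) shows how it neutralizes shell metacharacters:

```shell
# %q escapes the argument so a shell treats it as literal text
# instead of executing the embedded command substitution.
arg='$(echo pwned)'
quoted=$(bash -c 'printf "%q" "$0"' "$arg")
echo "$quoted"    # \$\(echo\ pwned\)
```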

    $ fluent-bit -i tcp -o stdout
    $ fluent-bit -i tcp://192.168.3.2:9090 -o stdout
    [INPUT]
        Name        tcp
        Listen      0.0.0.0
        Port        5170
        Chunk_Size  32
        Buffer_Size 64
        Format      json
    
    [OUTPUT]
        Name        stdout
        Match       *
    pipeline:
        inputs:
            - name: tcp
              listen: 0.0.0.0
              port: 5170
              chunk_size: 32
              buffer_size: 64
              format: json
        outputs:
            - name: stdout
              match: '*'
    $ echo '{"key 1": 123456789, "key 2": "abcdefg"}' | nc 127.0.0.1 5170
    $ bin/fluent-bit -i tcp -o stdout -f 1
    Fluent Bit v1.x.x
    * Copyright (C) 2019-2020 The Fluent Bit Authors
    * Copyright (C) 2015-2018 Treasure Data
    * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
    * https://fluentbit.io
    
    [2019/10/03 09:19:34] [ info] [storage] initializing...
    [2019/10/03 09:19:34] [ info] [storage] in-memory
    [2019/10/03 09:19:34] [ info] [engine] started (pid=14569)
    [2019/10/03 09:19:34] [ info] [in_tcp] binding 0.0.0.0:5170
    [2019/10/03 09:19:34] [ info] [sp] stream processor started
    [0] tcp.0: [1570115975.581246030, {"key 1"=>123456789, "key 2"=>"abcdefg"}]
    processor    : 0
    vendor_id    : GenuineIntel
    cpu family   : 6
    model        : 42
    model name   : Intel(R) Core(TM) i7-2640M CPU @ 2.80GHz
    stepping     : 7
    microcode    : 41
    cpu MHz      : 2791.009
    cache size   : 4096 KB
    physical id  : 0
    siblings     : 1
    [INPUT]
        Name           head
        Tag            head.cpu
        File           /proc/cpuinfo
        Lines          8
        Split_line     true
        # {"line0":"processor    : 0", "line1":"vendor_id    : GenuineIntel" ...}
    
    [FILTER]
        Name           record_modifier
        Match          *
        Whitelist_key  line7
    
    [OUTPUT]
        Name           stdout
        Match          *
    pipeline:
        inputs:
            - name: head
              tag: head.cpu
              file: /proc/cpuinfo
              lines: 8
              split_line: true
        filters:
            - name: record_modifier
              match: '*'
              whitelist_key: line7
        outputs:
            - name: stdout
              match: '*'
    $ bin/fluent-bit -c head.conf
    Fluent Bit v1.x.x
    * Copyright (C) 2019-2020 The Fluent Bit Authors
    * Copyright (C) 2015-2018 Treasure Data
    * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
    * https://fluentbit.io
    
    [2017/06/26 22:38:24] [ info] [engine] started
    [0] head.cpu: [1498484305.000279805, {"line7"=>"cpu MHz        : 2791.009"}]
    [1] head.cpu: [1498484306.011680137, {"line7"=>"cpu MHz        : 2791.009"}]
    [2] head.cpu: [1498484307.010042482, {"line7"=>"cpu MHz        : 2791.009"}]
    [3] head.cpu: [1498484308.008447978, {"line7"=>"cpu MHz        : 2791.009"}]
    $ fluent-bit -i head -t uptime -p File=/proc/uptime -o stdout -m '*'
    Fluent Bit v1.x.x
    * Copyright (C) 2019-2020 The Fluent Bit Authors
    * Copyright (C) 2015-2018 Treasure Data
    * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
    * https://fluentbit.io
    
    [2016/05/17 21:53:54] [ info] starting engine
    [0] uptime: [1463543634, {"head"=>"133517.70 194870.97"}]
    [1] uptime: [1463543635, {"head"=>"133518.70 194872.85"}]
    [2] uptime: [1463543636, {"head"=>"133519.70 194876.63"}]
    [3] uptime: [1463543637, {"head"=>"133520.70 194879.72"}]
    [INPUT]
        Name          head
        Tag           uptime
        File          /proc/uptime
        Buf_Size      256
        Interval_Sec  1
        Interval_NSec 0
    
    [OUTPUT]
        Name   stdout
        Match  *
    pipeline:
        inputs:
            - name: head
              tag: uptime
              file: /proc/uptime
              buf_size: 256
              interval_sec: 1
              interval_nsec: 0
        outputs:
            - name: stdout
              match: '*'
    [INPUT]
        Name              forward
        Listen            0.0.0.0
        Port              24224
        Buffer_Chunk_Size 1M
        Buffer_Max_Size   6M
    
    [OUTPUT]
        Name   stdout
        Match  *
    pipeline:
        inputs:
            - name: forward
              listen: 0.0.0.0
              port: 24224
              buffer_chunk_size: 1M
              buffer_max_size: 6M
        outputs:
            - name: stdout
              match: '*'
    [INPUT]
        Name              forward
        Listen            0.0.0.0
        Port              24224
        Buffer_Chunk_Size 1M
        Buffer_Max_Size   6M
        Security.Users fluentbit changeme
        Shared_Key secret
        Self_Hostname flb.server.local
    
    [OUTPUT]
        Name   stdout
        Match  *
    pipeline:
        inputs:
            - name: forward
              listen: 0.0.0.0
              port: 24224
              buffer_chunk_size: 1M
              buffer_max_size: 6M
              security.users: fluentbit changeme
              shared_key: secret
              self_hostname: flb.server.local
        outputs:
            - name: stdout
              match: '*'
    $ fluent-bit -i forward -o stdout
    $ fluent-bit -i forward -p listen="192.168.3.2" -p port=9090 -o stdout
    $ echo '{"key 1": 123456789, "key 2": "abcdefg"}' | fluent-cat my_tag
    $ bin/fluent-bit -i forward -o stdout
    Fluent-Bit v0.9.0
    Copyright (C) Treasure Data
    
    [2016/10/07 21:49:40] [ info] [engine] started
    [2016/10/07 21:49:40] [ info] [in_fw] binding 0.0.0.0:24224
    [0] my_tag: [1475898594, {"key 1"=>123456789, "key 2"=>"abcdefg"}]

    By default, the buffer that stores incoming Forward messages does not allocate the maximum memory allowed up front; instead it allocates memory as required. The rounds of allocation are set by Buffer_Chunk_Size. The value must follow the Unit Size specification.

    1024000

    Tag_Prefix

    Prefix incoming tag with the defined value.

    Tag

    Override the tag of the forwarded events with the defined value.

    Shared_Key

    Shared key for secure forward authentication.

    Self_Hostname

    Hostname for secure forward authentication.

    Security.Users

    Specify the username and password pairs for secure forward authentication.

    Threaded

    Indicates whether to run this input in its own thread.

    false

    Exit_After_Oneshot

    Exit as soon as the one-shot command exits. This allows the exec plugin to be used as a wrapper for another command, sending the target command's output to any fluent-bit sink(s) and then exiting. (bool, default: false)

    Propagate_Exit_Code

    When exiting due to Exit_After_Oneshot, cause fluent-bit to exit with the exit code of the command exited by this plugin. Follows the usual shell conventions for exit code propagation. (bool, default: false)

    Threaded

    Indicates whether to run this input in its own thread. Default: false.

    Command

    The command to execute, passed to popen(3) without any additional escaping or processing. May include pipelines, redirection, command substitution, etc.

    Parser

    Specify the name of a parser to interpret the entry as a structured message.

    Interval_Sec

    Polling interval (seconds).

    Interval_NSec

    Polling interval (nanosecond).

    Buf_Size

    Size of the buffer (check unit sizes for allowed values)

    Oneshot

    Only run once at startup. This allows collection of data prior to fluent-bit's startup. (bool, default: false)


    $ fluent-bit -i exec -p 'command=ls /var/log' -o stdout
    Fluent Bit v1.x.x
    * Copyright (C) 2019-2020 The Fluent Bit Authors
    * Copyright (C) 2015-2018 Treasure Data
    * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
    * https://fluentbit.io
    
    [2018/03/21 17:46:49] [ info] [engine] started
    [0] exec.0: [1521622010.013470159, {"exec"=>"ConsoleKit"}]
    [1] exec.0: [1521622010.013490313, {"exec"=>"Xorg.0.log"}]
    [2] exec.0: [1521622010.013492079, {"exec"=>"Xorg.0.log.old"}]
    [3] exec.0: [1521622010.013493443, {"exec"=>"anaconda.ifcfg.log"}]
    [4] exec.0: [1521622010.013494707, {"exec"=>"anaconda.log"}]
    [5] exec.0: [1521622010.013496016, {"exec"=>"anaconda.program.log"}]
    [6] exec.0: [1521622010.013497225, {"exec"=>"anaconda.storage.log"}]
    [INPUT]
        Name          exec
        Tag           exec_ls
        Command       ls /var/log
        Interval_Sec  1
        Interval_NSec 0
        Buf_Size      8mb
        Oneshot       false
    
    [OUTPUT]
        Name   stdout
        Match  *
    pipeline:
        inputs:
            - name: exec
              tag: exec_ls
              command: ls /var/log
              interval_sec: 1
              interval_nsec: 0
              buf_size: 8mb
              oneshot: false
    
        outputs:
            - name: stdout
              match: '*'
    [INPUT]
        Name                exec
        Tag                 exec_oneshot_demo
        Command             for s in $(seq 1 10); do echo "count: $s"; sleep 1; done; exit 1
        Oneshot             true
        Exit_After_Oneshot  true
        Propagate_Exit_Code true
    
    [OUTPUT]
        Name   stdout
        Match  *
    pipeline:
        inputs:
            - name: exec
              tag: exec_oneshot_demo
              command: 'for s in $(seq 1 10); do echo "count: $s"; sleep 1; done; exit 1'
              oneshot: true
              exit_after_oneshot: true
              propagate_exit_code: true
    
        outputs:
            - name: stdout
              match: '*'
    [0] exec_oneshot_demo: [[1681702172.950574027, {}], {"exec"=>"count: 1"}]
    [1] exec_oneshot_demo: [[1681702173.951663666, {}], {"exec"=>"count: 2"}]
    [2] exec_oneshot_demo: [[1681702174.953873724, {}], {"exec"=>"count: 3"}]
    [3] exec_oneshot_demo: [[1681702175.955760865, {}], {"exec"=>"count: 4"}]
    [4] exec_oneshot_demo: [[1681702176.956840282, {}], {"exec"=>"count: 5"}]
    [5] exec_oneshot_demo: [[1681702177.958292246, {}], {"exec"=>"count: 6"}]
    [6] exec_oneshot_demo: [[1681702178.959508200, {}], {"exec"=>"count: 7"}]
    [7] exec_oneshot_demo: [[1681702179.961715745, {}], {"exec"=>"count: 8"}]
    [8] exec_oneshot_demo: [[1681702180.963924140, {}], {"exec"=>"count: 9"}]
    [9] exec_oneshot_demo: [[1681702181.965852990, {}], {"exec"=>"count: 10"}]
    #!/bin/bash
    # This is a DANGEROUS example of what NOT to do, NEVER DO THIS
    exec fluent-bit \
      -o stdout \
      -i exec \
      -p exit_after_oneshot=true \
      -p propagate_exit_code=true \
      -p command='myscript $*'

      -p command='echo '"$(printf '%q' "$@")"

    Windows Exporter Metrics

    A plugin based on Prometheus Windows Exporter to collect system / host level metrics

    Prometheus Windows Exporter is a popular way to collect system-level metrics from Microsoft Windows, such as CPU / Disk / Network / Process statistics. Fluent Bit 1.9.0 includes the windows_exporter_metrics plugin, which builds on the Prometheus design to collect system-level metrics without having to manage two separate processes or agents.

    The initial release of Windows Exporter Metrics contains a single collector available from Prometheus Windows Exporter and we plan to expand it over time.

    Important note: Metrics collected with Windows Exporter Metrics flow through a separate pipeline from logs and current filters do not operate on top of metrics.

    hashtag
    Configuration

    Key
    Description
    Default

    hashtag
    Collectors available

    The following table describes the collectors available as part of this plugin. All of them are enabled by default and respect the original metric names, descriptions, and types from Prometheus Windows Exporter, so you can use your current dashboards without any compatibility problems.

    note: the Version column specifies the Fluent Bit version where the collector is available.

    Name
    Description
    OS
    Version

    hashtag
    Threading

    This input always runs in its own thread.

    hashtag
    Getting Started

    hashtag
    Simple Configuration File

    In the following configuration file, the input plugin windows_exporter_metrics collects metrics every 2 seconds and exposes them through the prometheus_exporter output plugin on HTTP/TCP port 2021.

    You can test the exposed metrics using curl:

    hashtag
    Service where clause

    The Windows service collector retrieves all service information for the local node or container. we.service.where, we.service.include, and we.service.exclude can be used to filter the service metrics.

    To filter these metrics, users should specify a WHERE clause. This syntax is defined in the WMI Query Language (WQL).

    Here is how these parameters should work:

    hashtag
    we.service.where

    we.service.where is handled as a raw WHERE clause. For example, when a user specifies the parameter as follows:

    This creates a WMI query like so:

    The WMI mechanism will then handle it and return the information which has a "not OK" status in this example.

    hashtag
    we.service.include

    When defined, we.service.include is interpreted into a WHERE clause. If multiple key-value pairs are specified, the values are concatenated with OR. If a value contains the % character, a LIKE operator will be used in the clause instead of the = operator. When a user specifies the parameter as follows:

    The parameter will be interpreted as:

    The WMI query will be called with the translated parameter as:

    hashtag
    we.service.exclude

    When defined, we.service.exclude is interpreted into a WHERE clause. If multiple key-value pairs are specified, the values are concatenated with AND.

    If a value contains the % character, a NOT ... LIKE operator will be used in the translated clause instead of the != operator. When a user specifies the parameter as follows:

    The parameter will be interpreted as:

    The WMI query will be called with the translated parameter as:

    hashtag
    Advanced usage

    we.service.where, we.service.include, and we.service.exclude can all be used at the same time subject to the following rules.

    1. we.service.include is translated and applied in the WHERE clause of the service collector.

    2. we.service.exclude is translated and applied in the WHERE clause of the service collector.

      1. If we.service.include is applied, the translated we.service.include and we.service.exclude conditions are concatenated with AND.

    For example, when a user specifies the parameter as follows:

    The WMI query will be called with the translated parameter as:

    hashtag
    Enhancement Requests

    Our current plugin implements a subset of the collectors available in the original Prometheus Windows Exporter. If you would like us to prioritize a specific collector, please open a GitHub issue using the in_windows_exporter_metrics issue template.

    Syslog

    The syslog input plugin allows you to collect Syslog messages through a Unix socket server (UDP or TCP) or over the network using TCP or UDP.

    hashtag
    Configuration Parameters

    The plugin supports the following configuration parameters:

    Key
    Description
    Default

    hashtag
    Considerations

    • When using the Syslog input plugin, Fluent Bit requires access to the parsers.conf file; the path to this file can be specified with the -R option or through the Parsers_File key in the [SERVICE] section (more details below).

    • When udp or unix_udp is used, the buffer size to receive messages is configurable only through the Buffer_Chunk_Size option, which defaults to 32KB.

    hashtag
    Getting Started

    In order to receive Syslog messages, you can run the plugin from the command line or through the configuration file:

    hashtag
    Command Line

    From the command line you can let Fluent Bit listen for Syslog messages with the following options:
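    For example (a sketch based on the plugin's documented options; adjust the parsers file path to your installation):

```
$ fluent-bit -R /path/to/parsers.conf -i syslog -p path=/tmp/in_syslog -p parser=syslog-rfc3164 -o stdout
```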

    By default the service will create and listen for Syslog messages on the unix socket /tmp/in_syslog

    hashtag
    Configuration File

    In your main configuration file append the following Input & Output sections:

    hashtag
    Testing

    Once Fluent Bit is running, you can send some messages using the logger tool:
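    For example, in the default unix socket mode (socket path, ident, and message are illustrative):

```
$ logger -u /tmp/in_syslog my_ident my_message
```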

    In the Fluent Bit console we should see the following output:

    hashtag
    Recipes

    The following content aims to provide configuration examples for different use cases to integrate Fluent Bit and make it listen for Syslog messages from your systems.

    hashtag
    Rsyslog to Fluent Bit: Network mode over TCP

    hashtag
    Fluent Bit Configuration

    Put the following content in your configuration file:
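    A minimal sketch (the port is an example; syslog-rfc3164 is one of the parsers shipped in the default parsers.conf):

```
[SERVICE]
    Flush        1
    Parsers_File parsers.conf

[INPUT]
    Name   syslog
    Parser syslog-rfc3164
    Listen 0.0.0.0
    Port   5140
    Mode   tcp

[OUTPUT]
    Name  stdout
    Match *
```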

    then start Fluent Bit.

    hashtag
    RSyslog Configuration

    Add a new file to your rsyslog config rules called 60-fluent-bit.conf inside the directory /etc/rsyslog.d/ and add the following content:
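    A minimal sketch of such a rule, forwarding all messages to a Fluent Bit listener on TCP port 5140 (the target address and port are examples that must match your Fluent Bit input):

```
action(type="omfwd" Target="127.0.0.1" Port="5140" Protocol="tcp")
```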

    then make sure to restart your rsyslog daemon:
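    On systemd-based distributions, for example:

```
$ sudo systemctl restart rsyslog
```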

    hashtag
    Rsyslog to Fluent Bit: Unix socket mode over UDP

    hashtag
    Fluent Bit Configuration

    Put the following content in your fluent-bit.conf file:
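    A minimal sketch (the socket path and permissions are examples; Unix_Perm must allow rsyslog to write to the socket):

```
[SERVICE]
    Flush        1
    Parsers_File parsers.conf

[INPUT]
    Name      syslog
    Parser    syslog-rfc3164
    Mode      unix_udp
    Path      /tmp/fluent-bit.sock
    Unix_Perm 0644

[OUTPUT]
    Name  stdout
    Match *
```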

    then start Fluent Bit.

    hashtag
    RSyslog Configuration

    Add a new file to your rsyslog config rules called 60-fluent-bit.conf inside the directory /etc/rsyslog.d/ and place the following content:
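    A sketch using rsyslog's omuxsock output module, pointed at the socket path configured for Fluent Bit (the path is an example):

```
$ModLoad omuxsock
$OMUxSockSocket /tmp/fluent-bit.sock
*.* :omuxsock:
```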

    Make sure that the socket file is readable by rsyslog (tweak the Unix_Perm option shown above).

    we.service.where

    Specify the WHERE clause for retrieving service metrics.

    NULL

    we.service.include

    Specify the key value pairs for the include condition for the WHERE clause of service metrics.

    NULL

    we.service.exclude

    Specify the key value pairs for the exclude condition for the WHERE clause of service metrics.

    NULL

    we.process.allow_process_regex

    Specify the regex covering the process metrics to collect. Collect all by default.

    "/.+/"

    we.process.deny_process_regex

    Specify the regex for process metrics to exclude from collection. Allows all by default.

    NULL

    collector.cpu.scrape_interval

    The rate in seconds at which cpu metrics are collected from the host operating system. If a value greater than 0 is used then it overrides the global default otherwise the global default is used.

    0 seconds

    collector.net.scrape_interval

    The rate in seconds at which net metrics are collected from the host operating system. If a value greater than 0 is used then it overrides the global default otherwise the global default is used.

    0 seconds

    collector.logical_disk.scrape_interval

    The rate in seconds at which logical_disk metrics are collected from the host operating system. If a value greater than 0 is used then it overrides the global default otherwise the global default is used.

    0 seconds

    collector.cs.scrape_interval

    The rate in seconds at which cs metrics are collected from the host operating system. If a value greater than 0 is used then it overrides the global default otherwise the global default is used.

    0 seconds

    collector.os.scrape_interval

    The rate in seconds at which os metrics are collected from the host operating system. If a value greater than 0 is used then it overrides the global default otherwise the global default is used.

    0 seconds

    collector.thermalzone.scrape_interval

    The rate in seconds at which thermalzone metrics are collected from the host operating system. If a value greater than 0 is used then it overrides the global default otherwise the global default is used.

    0 seconds

    collector.cpu_info.scrape_interval

    The rate in seconds at which cpu_info metrics are collected from the host operating system. If a value greater than 0 is used then it overrides the global default otherwise the global default is used.

    0 seconds

    collector.logon.scrape_interval

    The rate in seconds at which logon metrics are collected from the host operating system. If a value greater than 0 is used then it overrides the global default otherwise the global default is used.

    0 seconds

    collector.system.scrape_interval

    The rate in seconds at which system metrics are collected from the host operating system. If a value greater than 0 is used then it overrides the global default otherwise the global default is used.

    0 seconds

    collector.service.scrape_interval

    The rate in seconds at which service metrics are collected from the host operating system. If a value greater than 0 is used then it overrides the global default otherwise the global default is used.

    0 seconds

    collector.memory.scrape_interval

    The rate in seconds at which memory metrics are collected from the host operating system. If a value greater than 0 is used then it overrides the global default otherwise the global default is used.

    0 seconds

    collector.paging_file.scrape_interval

    The rate in seconds at which paging_file metrics are collected from the host operating system. If a value greater than 0 is used then it overrides the global default otherwise the global default is used.

    0 seconds

    collector.process.scrape_interval

    The rate in seconds at which process metrics are collected from the host operating system. If a value greater than 0 is used then it overrides the global default otherwise the global default is used.

    0 seconds

    metrics

    To specify which metrics are collected from the host operating system.

    "cpu,cpu_info,os,net,logical_disk,cs,thermalzone,logon,system,service"

    cs

    Exposes cs statistics.

    Windows

    v2.0.8

    os

    Exposes OS statistics.

    Windows

    v2.0.8

    thermalzone

    Exposes thermalzone statistics.

    Windows

    v2.0.8

    cpu_info

    Exposes cpu_info statistics.

    Windows

    v2.0.8

    logon

    Exposes logon statistics.

    Windows

    v2.0.8

    system

    Exposes system statistics.

    Windows

    v2.0.8

    service

    Exposes service statistics.

    Windows

    v2.1.6

    memory

    Exposes memory statistics.

    Windows

    v2.1.9

    paging_file

    Exposes paging_file statistics.

    Windows

    v2.1.9

    process

    Exposes process statistics.

    Windows

    v2.1.9

    3. we.service.where is handled as-is in the WHERE clause of the service collector.

      1. If either of the above parameters is applied, the clause will be combined with AND (the value of we.service.where).

    scrape_interval

    The rate at which metrics are collected from the host operating system

    5 seconds

    we.logical_disk.allow_disk_regex

    Specify the regex for logical disk metrics to allow collection of. Collect all by default.

    "/.+/"

    we.logical_disk.deny_disk_regex

    Specify the regex for logical disk metrics to prevent collection of/ignore. Allow all by default.

    NULL

    we.net.allow_nic_regex

    Specify a regex that matches NIC names for network metrics collection. By default all NICs are captured; adjust the regex to exclude specific NICs.

    "/.+/"

    cpu

    Exposes CPU statistics.

    Windows

    v1.9

    net

    Exposes Network statistics.

    Windows

    v2.0.8

    logical_disk

    Exposes logical_disk statistics.

    Windows


    v2.0.8

    # Windows Exporter Metrics + Prometheus Exporter
    # -------------------------------------------
    # The following example collects host metrics on Windows and exposes
    # them through a Prometheus HTTP end-point.
    #
    # After starting the service try it with:
    #
    # $ curl http://127.0.0.1:2021/metrics
    #
    [SERVICE]
        flush           1
        log_level       info
    
    [INPUT]
        name            windows_exporter_metrics
        tag             node_metrics
        scrape_interval 2
    
    [OUTPUT]
        name            prometheus_exporter
        match           node_metrics
        host            0.0.0.0
        port            2021
    
    
    curl http://127.0.0.1:2021/metrics
    we.service.where Status!='OK'
    SELECT * FROM Win32_Service WHERE Status!='OK'
    we.service.include {"Name":"docker","Name":"%Svc%", "Name":"%Service"}
    (Name='docker' OR Name LIKE '%Svc%' OR Name LIKE '%Service')
    SELECT * FROM Win32_Service WHERE (Name='docker' OR Name LIKE '%Svc%' OR Name LIKE '%Service')
    we.service.exclude {"Name":"UdkUserSvc%","Name":"webthreatdefusersvc%","Name":"XboxNetApiSvc"}
    (NOT Name LIKE 'UdkUserSvc%' AND NOT Name LIKE 'webthreatdefusersvc%' AND Name!='XboxNetApiSvc')
    SELECT * FROM Win32_Service WHERE (NOT Name LIKE 'UdkUserSvc%' AND NOT Name LIKE 'webthreatdefusersvc%' AND Name!='XboxNetApiSvc')
    we.service.include {"Name":"docker","Name":"%Svc%", "Name":"%Service"}
    we.service.exclude {"Name":"UdkUserSvc%","Name":"XboxNetApiSvc"}
    we.service.where NOT Name LIKE 'webthreatdefusersvc%'
     SELECT * FROM Win32_Service WHERE (Name='docker' OR Name LIKE '%Svc%' OR Name LIKE '%Service') AND (NOT Name LIKE 'UdkUserSvc%' AND Name!='XboxNetApiSvc') AND (NOT Name LIKE 'webthreatdefusersvc%')

    Buffer_Chunk_Size

    By default, the buffer that stores incoming Syslog messages does not allocate the maximum memory allowed up front; instead, memory is allocated as required. The rounds of allocation are set by Buffer_Chunk_Size. If not set, Buffer_Chunk_Size is equal to 32000 bytes (32KB). Read the considerations below when using udp or unix_udp mode.

    Buffer_Max_Size

    Specify the maximum buffer size to receive a Syslog message. If not set, the default size will be the value of Buffer_Chunk_Size.

    Receive_Buffer_Size

    Specify the maximum socket receive buffer size. If not set, the default value is OS-dependent, but generally too low to accept thousands of syslog messages per second without loss on udp or unix_udp sockets. Note that on Linux the value is capped by sysctl net.core.rmem_max.

    Source_Address_Key

    Specify the key where the source address will be injected.

    Threaded

    Indicates whether to run this input in its own thread.

    false

    Mode

    Defines transport protocol mode: unix_udp (UDP over Unix socket), unix_tcp (TCP over Unix socket), tcp or udp

    unix_udp

    Listen

    If Mode is set to tcp or udp, specify the network interface to bind.

    0.0.0.0

    Port

    If Mode is set to tcp or udp, specify the TCP port to listen for incoming connections.

    5140

    Path

    If Mode is set to unix_tcp or unix_udp, set the absolute path to the Unix socket file.

    Unix_Perm

    If Mode is set to unix_tcp or unix_udp, set the permission of the Unix socket file.

    0644

    Parser


    Specify an alternative parser for the message. If Mode is set to tcp or udp then the default parser is syslog-rfc5424 otherwise syslog-rfc3164-local is used. If your syslog messages have fractional seconds set this Parser value to syslog-rfc5424 instead.

    $ fluent-bit -R /path/to/parsers.conf -i syslog -p path=/tmp/in_syslog -o stdout
    [SERVICE]
        Flush               1
        Log_Level           info
        Parsers_File        parsers.conf
    
    [INPUT]
        Name                syslog
        Path                /tmp/in_syslog
        Buffer_Chunk_Size   32000
        Buffer_Max_Size     64000
        Receive_Buffer_Size 512000
    
    [OUTPUT]
        Name   stdout
        Match  *
    service:
        flush: 1
        log_level: info
        parsers_file: parsers.conf
    pipeline:
        inputs:
            - name: syslog
              path: /tmp/in_syslog
              buffer_chunk_size: 32000
              buffer_max_size: 64000
              receive_buffer_size: 512000
        outputs:
            - name: stdout
              match: '*'
    $ logger -u /tmp/in_syslog my_ident my_message
    $ bin/fluent-bit -R ../conf/parsers.conf -i syslog -p path=/tmp/in_syslog -o stdout
    Fluent Bit v1.x.x
    * Copyright (C) 2019-2020 The Fluent Bit Authors
    * Copyright (C) 2015-2018 Treasure Data
    * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
    * https://fluentbit.io
    
    [2017/03/09 02:23:27] [ info] [engine] started
    [0] syslog.0: [1489047822, {"pri"=>"13", "host"=>"edsiper:", "ident"=>"my_ident", "pid"=>"", "message"=>"my_message"}]
    [SERVICE]
        Flush        1
        Parsers_File parsers.conf
    
    [INPUT]
        Name     syslog
        Parser   syslog-rfc3164
        Listen   0.0.0.0
        Port     5140
        Mode     tcp
    
    [OUTPUT]
        Name     stdout
        Match    *
    service:
        flush: 1
        parsers_file: parsers.conf
    pipeline:
        inputs:
            - name: syslog
              parser: syslog-rfc3164
              listen: 0.0.0.0
              port: 5140
              mode: tcp
        outputs:
            - name: stdout
              match: '*'
    action(type="omfwd" Target="127.0.0.1" Port="5140" Protocol="tcp")
    $ sudo service rsyslog restart
    [SERVICE]
        Flush        1
        Parsers_File parsers.conf
    
    [INPUT]
        Name      syslog
        Parser    syslog-rfc3164
        Path      /tmp/fluent-bit.sock
        Mode      unix_udp
        Unix_Perm 0644
    
    [OUTPUT]
        Name      stdout
        Match     *
    service:
        flush: 1
        parsers_file: parsers.conf
    pipeline:
        inputs:
            - name: syslog
              parser: syslog-rfc3164
              path: /tmp/fluent-bit.sock
              mode: unix_udp
              unix_perm: 0644
        outputs:
            - name: stdout
              match: '*'
    $ModLoad omuxsock
    $OMUxSockSocket /tmp/fluent-bit.sock
    *.* :omuxsock:

    Tail

    The tail input plugin allows you to monitor one or several text files. Its behavior is similar to the tail -f shell command.

    The plugin reads every file matched by the Path pattern, and for every new line found (delimited by a newline character, \n), it generates a new record. Optionally, a database file can be used so the plugin keeps a history of tracked files and a state of offsets; this is very useful for resuming a state if the service is restarted.

    hashtag
    Configuration Parameters

    The plugin supports the following configuration parameters:

    Key
    Description
    Default

    Note that if the database parameter DB is not specified, by default the plugin will start reading each target file from the beginning. This might also cause unwanted behavior: for example, when a line is bigger than Buffer_Chunk_Size and Skip_Long_Lines is not turned on, the file will be read from the beginning at each Refresh_Interval until the file is rotated.
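    For instance, a minimal sketch (paths are illustrative) that persists file offsets in a database so restarts resume where they left off, while skipping over-long lines:

```
[INPUT]
    Name            tail
    Path            /var/log/app/*.log
    DB              /var/lib/fluent-bit/tail.db
    Skip_Long_Lines On

[OUTPUT]
    Name   stdout
    Match  *
```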

    hashtag
    Multiline Support

    Starting from Fluent Bit v1.8 we introduced new Multiline core functionality. For the Tail input plugin, this means it now supports the old configuration mechanism as well as the new one. To avoid breaking changes, we keep both, but encourage our users to use the latest one. We refer to the two mechanisms as:

    • Multiline Core

    • Old Multiline

    hashtag
    Multiline Core (v1.8)

    The new multiline core is exposed by the following configuration:

    Key
    Description

    As stated in the Multiline Parser documentation, built-in configuration modes are now provided. Note that when using a new multiline.parser definition, you must disable the old configuration options in your tail section:

    • parser

    • parser_firstline

    • parser_N

    • multiline

    hashtag
    Multiline and Containers (v1.8)

    If you are running Fluent Bit to process logs coming from containers like Docker or CRI, you can use the new built-in modes for such purposes. This will help to reassemble multiline messages originally split by Docker or CRI:

    The two options separated by a comma mean multi-format: try the docker and cri multiline formats.

    We are still working on extending support to do multiline for nested stack traces and similar cases. Over the Fluent Bit v1.8.x release cycle we will be updating the documentation.

    hashtag
    Old Multiline Configuration Parameters

    For the old multiline configuration, the following options exist to configure the handling of multiline logs:

    Key
    Description
    Default

    hashtag
    Old Docker Mode Configuration Parameters

    Docker mode exists to recombine JSON log lines split by the Docker daemon due to its line length limit. To use this feature, configure the tail plugin with the corresponding parser and then enable Docker mode:
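    A minimal sketch of such a configuration (the path is illustrative, and it assumes a docker parser is defined in parsers.conf):

```
[INPUT]
    Name               tail
    Path               /var/log/containers/*.log
    Parser             docker
    Docker_Mode        On
    Docker_Mode_Flush  4
```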

    Key
    Description
    Default

    hashtag
    Getting Started

    In order to tail text or log files, you can run the plugin from the command line or through the configuration file:

    hashtag
    Command Line

    From the command line you can let Fluent Bit parse text files with the following options:

    hashtag
    Configuration File

    In your main configuration file, append the following Input and Output sections:

    hashtag
    Old Multi-line example

    When using the multi-line configuration you need to first specify Multiline On and use the Parser_Firstline parameter, plus additional Parser_N parameters if needed. Suppose we are trying to read the following Java stacktrace as a single event:

    We need to specify a Parser_Firstline parameter that matches the first line of a multi-line event. Once a match is made, Fluent Bit will read all subsequent lines until another match with Parser_Firstline is made.

    In the case above we can use the following parser, which extracts the time as time and the remaining portion of the multiline as log:

    If we want to further parse the entire event we can add additional parsers with Parser_N where N is an integer. The final Fluent Bit configuration looks like the following:

    Our output will be as follows.

    hashtag
    Tailing files keeping state

    The tail input plugin provides a feature to save the state of tracked files; it is strongly suggested that you enable it. For this purpose the db property is available, e.g.:

    When running, the database file /path/to/logs.db will be created. This database is backed by SQLite3, so if you are interested in exploring its content, you can open it with the SQLite client tool, e.g.:

    Make sure to explore the database only when Fluent Bit is not working hard on it; otherwise you will see some Error: database is locked messages.

    hashtag
    Formatting SQLite

    By default the SQLite client tool does not format the columns in a human-readable way, so to explore the in_tail_files table you can create a config file in ~/.sqliterc with the following content:

    hashtag
    SQLite and Write Ahead Logging

    Fluent Bit keeps the state, or checkpoint, of each file through a SQLite database file, so if the service is restarted, it can continue consuming files from its last checkpoint position (offset). The default options are set for high performance and corruption safety.

    The SQLite journaling mode enabled is Write Ahead Log, or WAL. This improves the performance of read and write operations to disk. When enabled, you will see additional files being created in your file system; consider the following configuration statement:

    The above configuration enables a database file called test.db, and in the same path SQLite will create two additional files:

    • test.db-shm

    • test.db-wal

    These two files support the WAL mechanism, which helps to improve performance and reduce the number of system calls required. The -wal file stores the new changes to be committed; at some point the WAL file transactions are moved back to the real database file. The -shm file is a shared-memory area that allows concurrent users of the WAL file.

    hashtag
    WAL and Memory Usage

    The WAL mechanism gives us higher performance, but might also increase the memory usage of Fluent Bit. Most of this usage comes from memory-mapped and cached pages. In some cases you might see that memory usage stays a bit high, giving the impression of a memory leak, but it is not a concern unless you need your memory metrics back to normal. Starting from Fluent Bit v1.7.3 we introduced the db.journal_mode option, which sets the journal mode for databases; by default it is WAL (Write-Ahead Logging). The currently allowed values for db.journal_mode are DELETE | TRUNCATE | PERSIST | MEMORY | WAL | OFF.
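    For example, if WAL is undesirable (e.g. the database lives on a shared network file system), the journal mode can be switched in the input section; this is a sketch, not a recommended default:

```
[INPUT]
    name            tail
    path            /var/log/containers/*.log
    db              test.db
    db.journal_mode delete
```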

    hashtag
    File Rotation

    File rotation is properly handled, including logrotate's copytruncate mode.

    Note that the Path patterns must not match the rotated files. Otherwise, the rotated file would be read again and lead to duplicate records.
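    For example, when logrotate keeps rotated copies next to the live file (hypothetical names ending in .1 or compressed as .gz), the rotated copies can be excluded so they are never re-read:

```
[INPUT]
    Name         tail
    Path         /var/log/app/*
    Exclude_Path *.gz,*.1
    Rotate_Wait  5
```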

    Exclude_Path

    Set one or multiple shell patterns separated by commas to exclude files matching certain criteria, e.g: Exclude_Path *.gz,*.zip

    Offset_Key

    If enabled, Fluent Bit appends the offset of the current monitored file as part of the record. The value assigned becomes the key in the map

    Read_from_Head

    For newly discovered files on start (without a database offset/position), read the content from the head of the file instead of the tail.

    False

    Refresh_Interval

    The interval of refreshing the list of watched files in seconds.

    60

    Rotate_Wait

    Specify the amount of extra time, in seconds, to keep monitoring a file after it is rotated, in case some pending data needs to be flushed.

    5

    Ignore_Older

    Ignores files older than ignore_older. Supports m, h, d (minutes, hours, days) syntax. Default behavior is to read all.

    Skip_Long_Lines

    When a monitored file reaches its buffer capacity due to a very long line (Buffer_Max_Size), the default behavior is to stop monitoring that file. Skip_Long_Lines alters that behavior and instructs Fluent Bit to skip long lines and continue processing other lines that fit into the buffer size.

    Off

    Skip_Empty_Lines

    Skips empty lines in the log file from any further processing or output.

    Off

    DB

    Specify the database file to keep track of monitored files and offsets.

    DB.sync

    Set a default synchronization (I/O) method. Values: Extra, Full, Normal, Off. This flag affects how the internal SQLite engine synchronizes to disk; for more details about each option please refer to . Most workload scenarios will be fine with normal mode, but if you really need full synchronization after every write operation you should set full mode. Note that full has a high I/O performance cost.

    normal

    DB.locking

    Specify that the database will be accessed only by Fluent Bit. Enabling this feature helps to increase performance when accessing the database, but it restricts any external tool from querying the content.

    false

    DB.journal_mode

    Sets the journal mode for databases. Enabling WAL provides higher performance. Note that WAL is not compatible with shared network file systems.

    WAL

    DB.compare_filename

    This option determines whether to check both the inode and the filename when retrieving stored file information from the database. 'true' verifies both the inode and filename, while 'false' checks only the inode (default). To check the inode and filename in the database, refer .

    false

    Mem_Buf_Limit

    Set a limit on the memory the Tail plugin can use when appending data to the Engine. If the limit is reached, ingestion is paused; it resumes once the data is flushed.

    Exit_On_Eof

    When reading a file, exit as soon as the end of the file is reached. Useful for bulk loading and tests.

    false

    Parser

    Specify the name of a parser to interpret the entry as a structured message.

    Key

    When a message is unstructured (no parser applied), it is appended as a string under the key name log. This option allows you to define an alternative name for that key.

    log

    Inotify_Watcher

    Set to false to use file stat watcher instead of inotify.

    true

    Tag

    Set a tag (with regex-extract fields) that will be placed on lines read. E.g. kube.<namespace_name>.<pod_name>.<container_name>.<container_id>. Note that "tag expansion" is supported: if the tag includes an asterisk (*), that asterisk will be replaced with the absolute path of the monitored file, with slashes replaced by dots (also see ).

    Tag_Regex

    Set a regex to extract fields from the file name. E.g. (?<pod_name>[a-z0-9](?:[-a-z0-9]*[a-z0-9])?(?:\\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*)_(?<namespace_name>[^_]+)_(?<container_name>.+)-(?<container_id>[a-z0-9]{64})\.log$

    Static_Batch_Size

    Set the maximum number of bytes to process per iteration for the monitored static files (files that already exist when Fluent Bit starts).

    50M

    File_Cache_Advise

    Apply posix_fadvise with POSIX_FADV_DONTNEED mode. This reduces usage of the kernel file cache. This option is ignored when not running on Linux.

    On

    Threaded

    Indicates whether to run this input in its own thread.

    false

    Buffer_Chunk_Size

    Set the initial buffer size to read file data. This value is also used to increase the buffer size when needed. The value must conform to the Unit Size specification.

    32k

    Buffer_Max_Size

    Set the limit of the buffer size per monitored file. When a buffer needs to be increased (e.g. for very long lines), this value restricts how much the memory buffer can grow. If reading a file exceeds this limit, the file is removed from the monitored file list. The value must conform to the Unit Size specification.

    32k

    Path

    Pattern specifying a specific log file or multiple ones through the use of common wildcards. Multiple patterns separated by commas are also allowed.

    Path_Key

    If enabled, it appends the name of the monitored file as part of the record. The value assigned becomes the key in the map.

    multiline.parser

    Specify one or multiple Multiline Parser definitions to apply to the content.

    Multiline

    If enabled, the plugin will try to discover multiline messages and use the proper parsers to compose the outgoing messages. Note that when this option is enabled the Parser option is not used.

    Off

    Multiline_Flush

    Wait period, in seconds, before processing queued multiline messages.

    4

    Parser_Firstline

    Name of the parser that matches the beginning of a multiline message. Note that the regular expression defined in the parser must include a group name (named capture), and the value of the last match group must be a string.

    Parser_N

    Optional-extra parser to interpret and structure multiline entries. This option can be used to define multiple parsers, e.g: Parser_1 ab1, Parser_2 ab2, Parser_N abN.

    Docker_Mode

    If enabled, the plugin will recombine split Docker log lines before passing them to any parser as configured above. This mode cannot be used at the same time as Multiline.

    Off

    Docker_Mode_Flush

    Wait period, in seconds, before flushing queued unfinished split lines.

    4

    Docker_Mode_Parser

    Specify an optional parser for the first line of the docker multiline mode. The parser name to be specified must be registered in the parsers.conf file.


    NGINX Exporter Metrics

    NGINX Exporter Metrics input plugin scrapes metrics from the NGINX stub status handler.

    hashtag
    Configuration Parameters

    The plugin supports the following configuration parameters:

    Key
    Description
    Default
    [INPUT]
        name              tail
        path              /var/log/containers/*.log
        multiline.parser  docker, cri
    pipeline:
      inputs:
        - name: tail
          path: /var/log/containers/*.log
          multiline.parser: docker, cri
    $ fluent-bit -i tail -p path=/var/log/syslog -o stdout
    [INPUT]
        Name        tail
        Path        /var/log/syslog
    
    [OUTPUT]
        Name   stdout
        Match  *
    pipeline:
      inputs:
        - name: tail
          path: /var/log/syslog
    
      outputs:
        - name: stdout
          match: '*'
    Dec 14 06:41:08 Exception in thread "main" java.lang.RuntimeException: Something has gone wrong, aborting!
        at com.myproject.module.MyProject.badMethod(MyProject.java:22)
        at com.myproject.module.MyProject.oneMoreMethod(MyProject.java:18)
        at com.myproject.module.MyProject.anotherMethod(MyProject.java:14)
        at com.myproject.module.MyProject.someMethod(MyProject.java:10)
        at com.myproject.module.MyProject.main(MyProject.java:6)
    [PARSER]
        Name multiline
        Format regex
        Regex /(?<time>[A-Za-z]+ \d+ \d+\:\d+\:\d+)(?<message>.*)/
        Time_Key  time
        Time_Format %b %d %H:%M:%S
    # Note this is generally added to parsers.conf and referenced in [SERVICE]
    [PARSER]
        Name multiline
        Format regex
        Regex /(?<time>[A-Za-z]+ \d+ \d+\:\d+\:\d+)(?<message>.*)/
        Time_Key  time
        Time_Format %b %d %H:%M:%S
    
    [INPUT]
        Name             tail
        Multiline        On
        Parser_Firstline multiline
        Path             /var/log/java.log
    
    [OUTPUT]
        Name             stdout
        Match            *
    [0] tail.0: [1607928428.466041977, {"message"=>"Exception in thread "main" java.lang.RuntimeException: Something has gone wrong, aborting!
        at com.myproject.module.MyProject.badMethod(MyProject.java:22)
        at com.myproject.module.MyProject.oneMoreMethod(MyProject.java:18)
        at com.myproject.module.MyProject.anotherMethod(MyProject.java:14)
        at com.myproject.module.MyProject.someMethod(MyProject.java:10)", "message"=>"at com.myproject.module.MyProject.main(MyProject.java:6)"}]
    $ fluent-bit -i tail -p path=/var/log/syslog -p db=/path/to/logs.db -o stdout
    $ sqlite3 tail.db
    -- Loading resources from /home/edsiper/.sqliterc
    
    SQLite version 3.14.1 2016-08-11 18:53:32
    Enter ".help" for usage hints.
    sqlite> SELECT * FROM in_tail_files;
    id     name                              offset        inode         created
    -----  --------------------------------  ------------  ------------  ----------
    1      /var/log/syslog                   73453145      23462108      1480371857
    sqlite>
    .headers on
    .mode column
    .width 5 32 12 12 10
    [INPUT]
        name    tail
        path    /var/log/containers/*.log
        db      test.db

    Host

    Name of the target host or IP address to check.

    localhost

    Port

    Port of the target nginx service to connect to.

    80

    Status_URL

    The URL of the Stub Status Handler.

    /status

    Nginx_Plus

    Turn on NGINX plus mode.

    true

    Threaded

    Indicates whether to run this input in its own thread.

    false

    hashtag
    Getting Started

    NGINX must be configured with a location that invokes the stub status handler. Here is an example configuration with such a location:

    hashtag
    Configuration with NGINX Plus REST API

    Another metrics API is available with NGINX Plus. You must first configure a path in NGINX Plus.

    hashtag
    Command Line

    From the command line you can let Fluent Bit gather the metrics with the following options:

    To gather metrics from the command line with the NGINX Plus REST API we need to turn on the nginx_plus property, like so:

    hashtag
    Configuration File

    In your main configuration file append the following Input & Output sections:

    And for NGINX Plus API:

    hashtag
    Testing

    You can quickly test against the NGINX server running on localhost by invoking it directly from the command line:

    hashtag
    Exported Metrics

    This documentation is copied from the NGINX Prometheus Exporter metrics documentation on GitHub.

    hashtag
    Common metrics:

    Name
    Type
    Description
    Labels

    nginx_up

    Gauge

    Shows the status of the last metric scrape: 1 for a successful scrape and 0 for a failed one

    []

    hashtag
    Metrics for NGINX OSS:

    hashtag
    Stub status metrics

    Name
    Type
    Description
    Labels

    nginx_connections_accepted

    Counter

    Accepted client connections.

    []

    nginx_connections_active

    Gauge

    Active client connections.

    []

    nginx_connections_handled

    Counter

    Handled client connections.

    hashtag
    Metrics for NGINX Plus:

    hashtag
    Connections

    Name
    Type
    Description
    Labels

    nginxplus_connections_accepted

    Counter

    Accepted client connections

    []

    nginxplus_connections_active

    Gauge

    Active client connections

    []

    nginxplus_connections_dropped

    Counter

    Dropped client connections

    hashtag
    HTTP

    Name
    Type
    Description
    Labels

    nginxplus_http_requests_total

    Counter

    Total http requests

    []

    nginxplus_http_requests_current

    Gauge

    Current http requests

    []

    hashtag
    SSL

    Name
    Type
    Description
    Labels

    nginxplus_ssl_handshakes

    Counter

    Successful SSL handshakes

    []

    nginxplus_ssl_handshakes_failed

    Counter

    Failed SSL handshakes

    []

    nginxplus_ssl_session_reuses

    Counter

    Session reuses during SSL handshake

    hashtag
    HTTP Server Zones

    Name
    Type
    Description
    Labels

    nginxplus_server_zone_processing

    Gauge

    Client requests that are currently being processed

    server_zone

    nginxplus_server_zone_requests

    Counter

    Total client requests

    server_zone

    nginxplus_server_zone_responses

    Counter

    Total responses sent to clients

    hashtag
    Stream Server Zones

    Name
    Type
    Description
    Labels

    nginxplus_stream_server_zone_processing

    Gauge

    Client connections that are currently being processed

    server_zone

    nginxplus_stream_server_zone_connections

    Counter

    Total connections

    server_zone

    nginxplus_stream_server_zone_sessions

    Counter

    Total sessions completed

    hashtag
    HTTP Upstreams

    Note: for the state metric, the string values are converted to float64 using the following rule: "up" -> 1.0, "draining" -> 2.0, "down" -> 3.0, "unavail" -> 4.0, "checking" -> 5.0, "unhealthy" -> 6.0.

    Name
    Type
    Description
    Labels

    nginxplus_upstream_server_state

    Gauge

    Current state

    server, upstream

    nginxplus_upstream_server_active

    Gauge

    Active connections

    server, upstream

    nginxplus_upstream_server_limit

    Gauge

    Limit for connections which corresponds to the max_conns parameter of the upstream server. Zero value means there is no limit

    hashtag
    Stream Upstreams

    Note: for the state metric, the string values are converted to float64 using the following rule: "up" -> 1.0, "down" -> 3.0, "unavail" -> 4.0, "checking" -> 5.0, "unhealthy" -> 6.0.

    Name
    Type
    Description
    Labels

    nginxplus_stream_upstream_server_state

    Gauge

    Current state

    server, upstream

    nginxplus_stream_upstream_server_active

    Gauge

    Active connections

    server, upstream

    nginxplus_stream_upstream_server_limit

    Gauge

    Limit for connections which corresponds to the max_conns parameter of the upstream server. Zero value means there is no limit

    hashtag
    Location Zones

    Name
    Type
    Description
    Labels

    nginxplus_location_zone_requests

    Counter

    Total client requests

    location_zone

    nginxplus_location_zone_responses

    Counter

    Total responses sent to clients

    code (the response status code. The values are: 1xx, 2xx, 3xx, 4xx and 5xx), location_zone

    nginxplus_location_zone_discarded

    Counter

    Requests completed without sending a response

    server {
        listen       80;
        listen  [::]:80;
        server_name  localhost;
        location / {
            root   /usr/share/nginx/html;
            index  index.html index.htm;
        }
        # configure the stub status handler.
        location /status {
            stub_status;
        }
    }
    server {
    	listen       80;
    	listen  [::]:80;
    	server_name  localhost;
    
    	# enable /api/ location with appropriate access control in order
    	# to make use of NGINX Plus API
    	#
    	location /api/ {
    		api write=on;
    		# configure to allow requests from the server running fluent-bit
    		allow 192.168.1.*;
    		deny all;
    	}
    }
    $ fluent-bit -i nginx_metrics -p host=127.0.0.1 -p port=80 -p status_url=/status -p nginx_plus=off -o stdout
    $ fluent-bit -i nginx_metrics -p host=127.0.0.1 -p port=80 -p nginx_plus=on -p status_url=/api -o stdout
    [INPUT]
        Name          nginx_metrics
        Host          127.0.0.1
        Port          80
        Status_URL    /status
        Nginx_Plus    off
    
    [OUTPUT]
        Name   stdout
        Match  *
    [INPUT]
        Name          nginx_metrics
        Nginx_Plus    on
        Host          127.0.0.1
        Port          80
        Status_URL    /api
    
    [OUTPUT]
        Name   stdout
        Match  *
    $ fluent-bit -i nginx_metrics -p host=127.0.0.1 -p nginx_plus=off -o stdout -p match=* -f 1
    Fluent Bit v2.x.x
    * Copyright (C) 2019-2020 The Fluent Bit Authors
    * Copyright (C) 2015-2018 Treasure Data
    * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
    * https://fluentbit.io
    
    2021-10-14T19:37:37.228691854Z nginx_connections_accepted = 788253884
    2021-10-14T19:37:37.228691854Z nginx_connections_handled = 788253884
    2021-10-14T19:37:37.228691854Z nginx_http_requests_total = 42045501
    2021-10-14T19:37:37.228691854Z nginx_connections_active = 2009
    2021-10-14T19:37:37.228691854Z nginx_connections_reading = 0
    2021-10-14T19:37:37.228691854Z nginx_connections_writing = 1
    2021-10-14T19:37:37.228691854Z nginx_connections_waiting = 2008
    2021-10-14T19:37:35.229919621Z nginx_up = 1

    The exported NGINX (stub_status) metrics:

    | Metric Name | Type | Description | Labels |
    | --- | --- | --- | --- |
    | nginx_connections_reading | Gauge | Connections where NGINX is reading the request header. | |
    | nginx_connections_waiting | Gauge | Idle client connections. | |
    | nginx_connections_writing | Gauge | Connections where NGINX is writing the response back to the client. | |
    | nginx_http_requests_total | Counter | Total HTTP requests. | |

    The exported NGINX Plus metrics:

    | Metric Name | Type | Description | Labels |
    | --- | --- | --- | --- |
    | nginxplus_connections_idle | Gauge | Idle client connections | |
    | nginxplus_server_zone_discarded | Counter | Requests completed without sending a response | server_zone |
    | nginxplus_server_zone_received | Counter | Bytes received from clients | server_zone |
    | nginxplus_server_zone_sent | Counter | Bytes sent to clients | server_zone |
    | nginxplus_stream_server_zone_discarded | Counter | Connections completed without creating a session | server_zone |
    | nginxplus_stream_server_zone_received | Counter | Bytes received from clients | server_zone |
    | nginxplus_stream_server_zone_sent | Counter | Bytes sent to clients | server_zone |
    | nginxplus_upstream_server_requests | Counter | Total client requests | server, upstream |
    | nginxplus_upstream_server_responses | Counter | Total responses sent to clients | code (1xx, 2xx, 3xx, 4xx, 5xx), server, upstream |
    | nginxplus_upstream_server_sent | Counter | Bytes sent to this server | server, upstream |
    | nginxplus_upstream_server_received | Counter | Bytes received from this server | server, upstream |
    | nginxplus_upstream_server_fails | Counter | Number of unsuccessful attempts to communicate with the server | server, upstream |
    | nginxplus_upstream_server_unavail | Counter | How many times the server became unavailable for client requests (state 'unavail') because the number of unsuccessful attempts reached the max_fails threshold | server, upstream |
    | nginxplus_upstream_server_header_time | Gauge | Average time to get the response header from the server | server, upstream |
    | nginxplus_upstream_server_response_time | Gauge | Average time to get the full response from the server | server, upstream |
    | nginxplus_upstream_keepalives | Gauge | Idle keepalive connections | upstream |
    | nginxplus_upstream_zombies | Gauge | Servers removed from the group but still processing active client requests | upstream |
    | nginxplus_stream_upstream_server_connections | Counter | Total number of client connections forwarded to this server | server, upstream |
    | nginxplus_stream_upstream_server_connect_time | Gauge | Average time to connect to the upstream server | server, upstream |
    | nginxplus_stream_upstream_server_first_byte_time | Gauge | Average time to receive the first byte of data | server, upstream |
    | nginxplus_stream_upstream_server_response_time | Gauge | Average time to receive the last byte of data | server, upstream |
    | nginxplus_stream_upstream_server_sent | Counter | Bytes sent to this server | server, upstream |
    | nginxplus_stream_upstream_server_received | Counter | Bytes received from this server | server, upstream |
    | nginxplus_stream_upstream_server_fails | Counter | Number of unsuccessful attempts to communicate with the server | server, upstream |
    | nginxplus_stream_upstream_server_unavail | Counter | How many times the server became unavailable for client connections (state 'unavail') because the number of unsuccessful attempts reached the max_fails threshold | server, upstream |
    | nginxplus_stream_upstream_zombies | Gauge | Servers removed from the group but still processing active client connections | upstream |
    | nginxplus_location_zone_received | Counter | Bytes received from clients | location_zone |
    | nginxplus_location_zone_sent | Counter | Bytes sent to clients | location_zone |
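The metrics above are served in the Prometheus text exposition format. As an illustrative sketch (not part of the plugin), here is a tiny Python parser for single sample lines, using metric names from the tables above; it deliberately ignores the escapes, comments, and timestamps that the real format also allows:

```python
# Minimal parser for simple Prometheus text-format sample lines, e.g.
#   nginxplus_upstream_keepalives{upstream="backend"} 4
# Handles an optional {label="value",...} block; no escape handling.
import re

LINE = re.compile(r'^(?P<name>[a-zA-Z_:][a-zA-Z0-9_:]*)'
                  r'(\{(?P<labels>[^}]*)\})?\s+(?P<value>\S+)$')

def parse_sample(line):
    m = LINE.match(line.strip())
    labels = {}
    if m.group("labels"):
        for part in m.group("labels").split(","):
            key, value = part.split("=", 1)
            labels[key] = value.strip('"')
    return m.group("name"), labels, float(m.group("value"))

# "backend" is a hypothetical upstream name for illustration only.
print(parse_sample('nginxplus_upstream_keepalives{upstream="backend"} 4'))
print(parse_sample('nginx_connections_active 2009'))
```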

    Node Exporter Metrics

    A plugin based on Prometheus Node Exporter to collect system / host level metrics

    Prometheus Node Exporter is a popular way to collect system-level metrics from operating systems, such as CPU / Disk / Network / Process statistics. Fluent Bit 1.8.0 includes a node exporter metrics plugin that builds on the Prometheus design to collect system-level metrics without having to manage two separate processes or agents.

    The initial release of Node Exporter Metrics contains a subset of the collectors and metrics available in Prometheus Node Exporter, and we plan to expand coverage over time.

    Important note: Metrics collected with Node Exporter Metrics flow through a separate pipeline from logs and current filters do not operate on top of metrics.

    This plugin is supported primarily on Linux-based operating systems, with macOS offering a reduced subset of metrics. The table below indicates which collectors are supported on macOS.

    hashtag
    Configuration

    Key
    Description
    Default

    Note: the plugin's top-level scrape_interval setting is the global default; each collector.xxx.scrape_interval option overrides the interval only for that specific collector and its associated metrics.

    An overridden interval changes only the collection interval, not the publishing interval, which always follows the global setting. For example, with a global interval of 5s and a collector override of 60s, metrics are still published every 5s, but the values for that collector stay the same for 60s, until it is collected again. This feature helps with down-sampling when collecting metrics.
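The interplay of the two intervals described above can be sketched as a short simulation; the 5s/60s values match the example in the text:

```python
# Sketch of the interval-override behaviour: metrics are published on the
# global interval, but a collector with an override only refreshes its
# value when its own (longer) interval elapses.
GLOBAL_INTERVAL = 5     # seconds; controls when metrics are published
OVERRIDE_INTERVAL = 60  # seconds; a collector.xxx.scrape_interval override

publish_ticks = list(range(0, 121, GLOBAL_INTERVAL))  # two minutes of ticks
refresh_ticks = [t for t in publish_ticks if t % OVERRIDE_INTERVAL == 0]

print(len(publish_ticks))  # 25 publish events over two minutes
print(refresh_ticks)       # the collector's value only changes at [0, 60, 120]
```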

    hashtag
    Collectors available

    The following table describes the collectors available in this plugin. All of them are enabled by default and respect the original metric names, descriptions, and types from the Prometheus Node Exporter, so you can keep using your current dashboards without any compatibility problem.

    Note: the Version column specifies the Fluent Bit version in which the collector became available.

    Name
    Description
    OS
    Version

    hashtag
    Threading

    This input always runs in its own thread.

    hashtag
    Getting Started

    hashtag
    Simple Configuration File

    In the following configuration file, the node_exporter_metrics input plugin collects metrics every 2 seconds and exposes them through the prometheus_exporter output plugin on HTTP/TCP port 2021.

    You can verify that the metrics are exposed by using curl:

    hashtag
    Container to Collect Host Metrics

    When deploying Fluent Bit in a container you will need to specify additional settings to ensure that Fluent Bit has access to the host operating system. The following docker command deploys Fluent Bit with specific mount paths and settings enabled to ensure that Fluent Bit can collect from the host. These are then exposed over port 2021.

    hashtag
    Fluent Bit + Prometheus + Grafana

    If you like dashboards for monitoring, Grafana is one of the preferred options. Our Fluent Bit source code repository includes a simple docker-compose example. Steps:

    hashtag
    Get a copy of Fluent Bit source code

    hashtag
    Start the service and view your Dashboard

    Now open your browser at the address http://127.0.0.1:3000. When asked for credentials to access Grafana, use the username admin and the password admin.

    Note that by default the Grafana dashboard plots data from the last 24 hours; change the range to Last 5 minutes to see the data currently being collected.

    hashtag
    Stop the Service

    hashtag
    Enhancement Requests

    Our current plugin implements a subset of the collectors available in the original Prometheus Node Exporter. If you would like us to prioritize a specific collector, please open a GitHub issue using the following template:

    The configuration parameters:

    | Key | Description | Default |
    | --- | --- | --- |
    | scrape_interval | The rate at which metrics are collected from the host operating system. | 5 seconds |
    | path.procfs | The mount point used to collect process information and metrics. | /proc/ |
    | path.sysfs | The path in the filesystem used to collect system metrics. | /sys/ |
    | collector.cpu.scrape_interval | The rate in seconds at which cpu metrics are collected from the host operating system. A value greater than 0 overrides the global default; otherwise the global default is used. | 0 seconds |
    | collector.cpufreq.scrape_interval | As above, for cpufreq metrics. | 0 seconds |
    | collector.meminfo.scrape_interval | As above, for meminfo metrics. | 0 seconds |
    | collector.diskstats.scrape_interval | As above, for diskstats metrics. | 0 seconds |
    | collector.filesystem.scrape_interval | As above, for filesystem metrics. | 0 seconds |
    | collector.uname.scrape_interval | As above, for uname metrics. | 0 seconds |
    | collector.stat.scrape_interval | As above, for stat metrics. | 0 seconds |
    | collector.time.scrape_interval | As above, for time metrics. | 0 seconds |
    | collector.loadavg.scrape_interval | As above, for loadavg metrics. | 0 seconds |
    | collector.vmstat.scrape_interval | As above, for vmstat metrics. | 0 seconds |
    | collector.thermal_zone.scrape_interval | As above, for thermal_zone metrics. | 0 seconds |
    | collector.filefd.scrape_interval | As above, for filefd metrics. | 0 seconds |
    | collector.nvme.scrape_interval | As above, for nvme metrics. | 0 seconds |
    | collector.processes.scrape_interval | As above, for system-level process metrics. | 0 seconds |
    | metrics | Specifies which metrics are collected from the host operating system. These metrics depend on /proc or /sys; the actual values are read from /proc or /sys when needed. cpu, cpufreq, meminfo, diskstats, filesystem, stat, loadavg, vmstat, netdev, and filefd depend on procfs; cpufreq metrics depend on sysfs. | "cpu,cpufreq,meminfo,diskstats,filesystem,uname,stat,time,loadavg,vmstat,netdev,filefd" |
    | filesystem.ignore_mount_point_regex | Regex for mount points to ignore (exclude from collection). | `^/(dev` |
    | filesystem.ignore_filesystem_type_regex | Regex for filesystem types to ignore (exclude from collection). | `^(autofs` |
    | diskstats.ignore_device_regex | Regex for devices to ignore in diskstats. | `^(ram` |
    | systemd_service_restart_metrics | Determines whether the collector includes service restart metrics. | false |
    | systemd_unit_start_time_metrics | Determines whether the collector includes unit start time metrics. | false |
    | systemd_include_service_task_metrics | Determines whether the collector includes service task metrics. | false |
    | systemd_include_pattern | Regex determining which units are included in the metrics produced by the systemd collector. | Not applied unless explicitly set |
    | systemd_exclude_pattern | Regex determining which units are excluded from the metrics produced by the systemd collector. | `.+\.(automount` |

    The available collectors:

    | Name | Description | OS | Version |
    | --- | --- | --- | --- |
    | cpu | Exposes CPU statistics. | Linux, macOS | v1.8 |
    | cpufreq | Exposes CPU frequency statistics. | Linux | v1.8 |
    | diskstats | Exposes disk I/O statistics. | Linux, macOS | v1.8 |
    | filefd | Exposes file descriptor statistics from /proc/sys/fs/file-nr. | Linux | v1.8.2 |
    | filesystem | Exposes filesystem statistics from /proc/*/mounts. | Linux | v2.0.9 |
    | loadavg | Exposes load average. | Linux, macOS | v1.8 |
    | meminfo | Exposes memory statistics. | Linux, macOS | v1.8 |
    | netdev | Exposes network interface statistics such as bytes transferred. | Linux, macOS | v1.8.2 |
    | nvme | Exposes nvme statistics from /proc. | Linux | v2.2.0 |
    | processes | Exposes processes statistics from /proc. | Linux | v2.2.0 |
    | stat | Exposes various statistics from /proc/stat. This includes boot time, forks, and interrupts. | Linux | v1.8 |
    | systemd | Exposes statistics from systemd. | Linux | v2.1.3 |
    | thermal_zone | Exposes thermal statistics from /sys/class/thermal/thermal_zone/*. | Linux | v2.2.1 |
    | time | Exposes the current system time. | Linux | v1.8 |
    | uname | Exposes system information as provided by the uname system call. | Linux, macOS | v1.8 |
    | vmstat | Exposes statistics from /proc/vmstat. | Linux | v1.8.2 |
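    As a usage sketch of the `metrics` key described above, the following classic-mode configuration restricts collection to a few collectors; the collector names come from the table above, and the exporter output mirrors the earlier examples:

    ```
    [SERVICE]
        flush           1
        log_level       info

    [INPUT]
        name            node_exporter_metrics
        tag             node_metrics
        scrape_interval 5
        # collect only CPU, memory, and load average metrics
        metrics         cpu,meminfo,loadavg

    [OUTPUT]
        name            prometheus_exporter
        match           node_metrics
        host            0.0.0.0
        port            2021
    ```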

    # Node Exporter Metrics + Prometheus Exporter
    # -------------------------------------------
    # The following example collect host metrics on Linux and expose
    # them through a Prometheus HTTP end-point.
    #
    # After starting the service try it with:
    #
    # $ curl http://127.0.0.1:2021/metrics
    #
    [SERVICE]
        flush           1
        log_level       info
    
    [INPUT]
        name            node_exporter_metrics
        tag             node_metrics
        scrape_interval 2
    
    [OUTPUT]
        name            prometheus_exporter
        match           node_metrics
        host            0.0.0.0
        port            2021
    
    
    # Node Exporter Metrics + Prometheus Exporter
    # -------------------------------------------
    # The following example collect host metrics on Linux and expose
    # them through a Prometheus HTTP end-point.
    #
    # After starting the service try it with:
    #
    # $ curl http://127.0.0.1:2021/metrics
    #
    service:
        flush: 1
        log_level: info
    pipeline:
        inputs:
            - name: node_exporter_metrics
              tag:  node_metrics
              scrape_interval: 2
        outputs:
            - name: prometheus_exporter
              match: node_metrics
              host: 0.0.0.0
              port: 2021
    curl http://127.0.0.1:2021/metrics
    docker run -ti -v /proc:/host/proc \
                   -v /sys:/host/sys   \
                   -p 2021:2021        \
                   fluent/fluent-bit:1.8.0 \
                   /fluent-bit/bin/fluent-bit \
                             -i node_exporter_metrics -p path.procfs=/host/proc -p path.sysfs=/host/sys \
                             -o prometheus_exporter -p "add_label=host $HOSTNAME" \
                             -f 1
    git clone https://github.com/fluent/fluent-bit
    cd fluent-bit/docker_compose/node-exporter-dashboard/
    docker-compose up --force-recreate -d --build
    docker-compose down