The cpu input plugin measures the CPU usage of a process or the whole system (by default the whole system, considering values per CPU core). It reports values in percentage units for every set interval of time. At the moment this plugin is only available for Linux.
The following tables describe the information generated by the plugin. The keys below represent the data used by the overall system; all values associated with the keys are percentages (0 to 100%):
The CPU metrics plugin creates metrics that are log-based (i.e. JSON payload). If you are looking for Prometheus-based metrics please see the Node Exporter Metrics input plugin.
key | description |
---|---|
cpu_p | CPU usage of the overall system; this value is the summation of time spent in user and kernel space. The result takes into consideration the number of CPU cores in the system. |
user_p | CPU usage in User mode; in short, the CPU usage by user space programs. The result of this value takes into consideration the number of CPU cores in the system. |
system_p | CPU usage in Kernel mode; in short, the CPU usage by the Kernel. The result of this value takes into consideration the number of CPU cores in the system. |
In addition to the keys reported in the table above, similar content is created per CPU core. The cores are listed from 0 to N as the Kernel reports:
key | description |
---|---|
cpuN.p_cpu | Represents the total CPU usage by core N. |
cpuN.p_user | Total CPU time spent in user mode or user space programs associated with this core. |
cpuN.p_system | Total CPU time spent in system or kernel mode associated with this core. |
The plugin supports the following configuration parameters:
Key | Description | Default |
---|---|---|
Interval_Sec | Polling interval in seconds | 1 |
Interval_NSec | Polling interval in nanoseconds | 0 |
PID | Specify the ID (PID) of a running process in the system. By default the plugin monitors the whole system, but if this option is set, it will only monitor the given process ID. | |
In order to get the statistics of the CPU usage of your system, you can run the plugin from the command line or through the configuration file:
As described above, the CPU input plugin gathers the overall usage every second and flushes the information to the output on the fifth second. In this example we use the stdout plugin to demonstrate the output records. In a real use-case you may want to flush this information to a central aggregator such as Fluentd or Elasticsearch.
In your main configuration file append the following Input & Output sections:
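A sketch of those sections, assuming the defaults documented above (the tag my_cpu is illustrative):

```
[INPUT]
    # Collect overall CPU usage every second
    Name         cpu
    Tag          my_cpu
    Interval_Sec 1

[OUTPUT]
    Name  stdout
    Match *
```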
The docker input plugin allows you to collect Docker container metrics such as memory usage and CPU consumption.
The plugin supports the following configuration parameters:
Key | Description | Default |
---|---|---|
Interval_Sec | Polling interval in seconds | 1 |
Include | A space-separated list of containers to include | |
Exclude | A space-separated list of containers to exclude | |
If you set neither Include nor Exclude, the plugin will try to get metrics from all the running containers.
Here is an example configuration that collects metrics from two docker instances (6bab19c3a0f9 and 14159be4ca2c).
This configuration will produce records like below.
The disk input plugin gathers information about the disk throughput of the running system at a fixed interval of time and reports it.
The Disk I/O metrics plugin creates metrics that are log-based (i.e. JSON payload). If you are looking for Prometheus-based metrics please see the Node Exporter Metrics input plugin.
The plugin supports the following configuration parameters:
Key | Description | Default |
---|---|---|
Interval_Sec | Polling interval (seconds). | 1 |
Interval_NSec | Polling interval (nanoseconds). | 0 |
Dev_Name | Device name to limit the target (e.g. sda). If not set, in_disk gathers information from all disks and partitions. | all disks |
In order to get disk usage from your system, you can run the plugin from the command line or through the configuration file:
In your main configuration file append the following Input & Output sections:
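A sketch of those sections, assuming the defaults documented above (the tag is illustrative):

```
[INPUT]
    Name          disk
    Tag           disk
    Interval_Sec  1
    Interval_NSec 0

[OUTPUT]
    Name  stdout
    Match *
```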
Note: Total interval (sec) = Interval_Sec + (Interval_Nsec / 1000000000).
e.g. 1.5s = 1s + 500000000ns
The docker events input plugin uses the Docker API to capture server events. A complete list of possible events returned by this plugin can be found here.
This plugin supports the following configuration parameters:
Key | Description | Default |
---|---|---|
Unix_Path | The docker socket unix path | /var/run/docker.sock |
Buffer_Size | The size of the buffer used to read docker events (in bytes) | 8192 |
Parser | Specify the name of a parser to interpret the entry as a structured message. | None |
Key | When a message is unstructured (no parser applied), it's appended as a string under the key name message. | message |
Reconnect.Retry_limits | The maximum number of retries allowed. The plugin tries to reconnect with the docker socket when EOF is detected. | 5 |
Reconnect.Retry_interval | The retry interval, in seconds. | 1 |
In your main configuration file append the following Input & Output sections:
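A sketch of those sections, using the default socket path:

```
[INPUT]
    Name      docker_events
    Unix_Path /var/run/docker.sock

[OUTPUT]
    Name  stdout
    Match *
```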
The dummy input plugin generates dummy events. It is useful for testing, debugging, benchmarking and getting started with Fluent Bit.
The plugin supports the following configuration parameters:
You can run the plugin from the command line or through the configuration file:
Key | Description |
---|---|
Dummy | Dummy JSON record. Default: {"message":"dummy"} |
Start_time_sec | Dummy base timestamp in seconds. Default: 0 |
Start_time_nsec | Dummy base timestamp in nanoseconds. Default: 0 |
Rate | Number of events generated per second. Default: 1 |
Samples | If set, the number of events is limited. e.g. if Samples=3, the plugin generates only three events and stops. |
In your main configuration file append the following Input & Output sections:
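A sketch of those sections (the record content is illustrative):

```
[INPUT]
    Name  dummy
    Dummy {"message":"dummy"}

[OUTPUT]
    Name  stdout
    Match *
```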
The head input plugin allows reading events from the head of a file. Its behavior is similar to the head command.
The plugin supports the following configuration parameters:
Key | Description |
---|---|
File | Absolute path to the target file, e.g: /proc/uptime |
Buf_Size | Buffer size to read the file. |
Interval_Sec | Polling interval (seconds). |
Interval_NSec | Polling interval (nanoseconds). |
Add_Path | If enabled, the file path is appended to each record. Default value is false. |
Key | Rename a key. Default: head. |
Lines | Line number to read. If the number N is set, in_head reads the first N lines like head(1) -n. |
Split_line | If enabled, in_head generates a key-value pair per line. |
This mode is useful to get a specific line. This is an example to get the CPU frequency from /proc/cpuinfo.

/proc/cpuinfo is a special file that provides CPU information.

The CPU frequency is reported as "cpu MHz : 2791.009". We can get that line with this configuration file.
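A sketch of such a configuration; the Lines value below is an assumption and should cover however many lines precede the frequency line on your machine:

```
[INPUT]
    Name       head
    Tag        head.cpu
    File       /proc/cpuinfo
    Lines      8
    Split_line true

[OUTPUT]
    Name  stdout
    Match *
```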
Output is
In order to read the head of a file, you can run the plugin from the command line or through the configuration file:
The following example will read events from the /proc/uptime file, tag the records with the uptime name and flush them back to the stdout plugin:
In your main configuration file append the following Input & Output sections:
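A sketch of those sections (buffer size is illustrative):

```
[INPUT]
    Name         head
    Tag          uptime
    File         /proc/uptime
    Buf_Size     256
    Interval_Sec 1

[OUTPUT]
    Name  stdout
    Match *
```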
Note: Total interval (sec) = Interval_Sec + (Interval_Nsec / 1000000000).
e.g. 1.5s = 1s + 500000000ns
A plugin based on Prometheus Node Exporter to collect system / host level metrics
Prometheus Node Exporter is a popular way to collect system level metrics from operating systems, such as CPU / Disk / Network / Process statistics. Fluent Bit 1.8.0 includes the node exporter metrics plugin, which builds on the Prometheus design to collect system level metrics without having to manage two separate processes or agents.
The initial release of Node Exporter Metrics contains a subset of collectors and metrics available from Prometheus Node Exporter and we plan to expand them over time.
Important note: Metrics collected with Node Exporter Metrics flow through a separate pipeline from logs and current filters do not operate on top of metrics.
This plugin is currently only supported on Linux based operating systems.
Key | Description | Default |
---|---|---|
scrape_interval | The rate at which metrics are collected from the host operating system | 5 seconds |
path.procfs | The mount point used to collect process information and metrics | /proc/ |
path.sysfs | The path in the filesystem used to collect system metrics | /sys/ |
The following table describes the available collectors as part of this plugin. All of them are enabled by default and respect the original metric names, descriptions, and types from Prometheus Exporter, so you can use your current dashboards without any compatibility problem.
note: the Version column specifies the Fluent Bit version where the collector is available.
Name | Description | OS | Version |
---|---|---|---|
cpu | Exposes CPU statistics. | Linux | v1.8 |
cpufreq | Exposes CPU frequency statistics. | Linux | v1.8 |
diskstats | Exposes disk I/O statistics. | Linux | v1.8 |
filefd | Exposes file descriptor statistics from /proc/sys/fs/file-nr. | Linux | v1.8.2 |
loadavg | Exposes load average. | Linux | v1.8 |
meminfo | Exposes memory statistics. | Linux | v1.8 |
netdev | Exposes network interface statistics such as bytes transferred. | Linux | v1.8.2 |
stat | Exposes various statistics from /proc/stat. | Linux | v1.8 |
time | Exposes the current system time. | Linux | v1.8 |
uname | Exposes system information as provided by the uname system call. | Linux | v1.8 |
vmstat | Exposes statistics from /proc/vmstat. | Linux | v1.8.2 |
In the following configuration file, the input plugin node_exporter_metrics collects metrics every 2 seconds and exposes them through our Prometheus Exporter output plugin on HTTP/TCP port 2021. You can then test that the metrics are exposed by using curl:
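A sketch of that configuration (the tag name is illustrative):

```
[INPUT]
    name            node_exporter_metrics
    tag             node_metrics
    scrape_interval 2

[OUTPUT]
    name  prometheus_exporter
    match node_metrics
    host  0.0.0.0
    port  2021
```

With that running, query the exposed endpoint:

```
curl http://127.0.0.1:2021/metrics
```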
When deploying Fluent Bit in a container you will need to specify additional settings to ensure that Fluent Bit has access to the host operating system. The following docker command deploys Fluent Bit with specific mount paths and settings enabled to ensure that Fluent Bit can collect from the host. These are then exposed over port 2021.
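A sketch of such a command; the image tag and host mount paths are illustrative:

```
docker run -ti -v /proc:/host/proc -v /sys:/host/sys -p 2021:2021 \
    fluent/fluent-bit:1.8.0 /fluent-bit/bin/fluent-bit \
    -i node_exporter_metrics -p path.procfs=/host/proc -p path.sysfs=/host/sys \
    -o prometheus_exporter -p port=2021
```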
If you like dashboards for monitoring, Grafana is one of the preferred options. In our Fluent Bit source code repository, we have pushed a simple **docker-compose** example. Steps:
Now open your browser at the address http://127.0.0.1:3000. When asked for the credentials to access Grafana, just use the **admin** username and **admin** password.
Note that by default the Grafana dashboard plots the data from the last 24 hours, so just change it to Last 5 minutes to see the recent data being collected.

Our current plugin implements a sub-set of the available collectors in the original Prometheus Node Exporter. If you would like us to prioritize a specific collector, please open a GitHub issue by using the corresponding template.
The collectd input plugin allows you to receive datagrams from the collectd service.
The plugin supports the following configuration parameters:
Key | Description | Default |
---|---|---|
Listen | Set the address to listen to | 0.0.0.0 |
Port | Set the port to listen to | 25826 |
TypesDB | Set the data specification file | /usr/share/collectd/types.db |
Here is a basic configuration example.
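A sketch, using the defaults documented above:

```
[INPUT]
    Name    collectd
    Listen  0.0.0.0
    Port    25826
    TypesDB /usr/share/collectd/types.db

[OUTPUT]
    Name  stdout
    Match *
```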
With this configuration, Fluent Bit listens to 0.0.0.0:25826, and outputs incoming datagram packets to stdout.
You must set the same types.db files that your collectd server uses. Otherwise, Fluent Bit may not be able to interpret the payload properly.
The exec input plugin allows executing an external program and collecting event logs.
This plugin will not function in the distroless production images (AMD64 currently) as it needs a functional /bin/sh, which is not present. It will function in the 1.8.12 and later -debug images, as well as the ARM production images, as these include a full shell.
The plugin supports the following configuration parameters:
Key | Description |
---|---|
Command | The command to execute. |
Parser | Specify the name of a parser to interpret the entry as a structured message. |
Interval_Sec | Polling interval (seconds). |
Interval_NSec | Polling interval (nanoseconds). |
Buf_Size | Size of the buffer (check unit sizes for allowed values). |
Oneshot | Only run once at startup. This allows collection of data precedent to fluent-bit's startup (bool, default: false) |
You can run the plugin from the command line or through the configuration file:
The following example will read events from the output of ls.
In your main configuration file append the following Input & Output sections:
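A sketch of those sections (the command and tag are illustrative):

```
[INPUT]
    Name         exec
    Tag          exec_ls
    Command      ls /var/log
    Interval_Sec 1

[OUTPUT]
    Name  stdout
    Match *
```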
A plugin to collect Fluent Bit's own metrics
Fluent Bit exposes its own metrics to allow you to monitor the internals of your pipeline. The collected metrics can be processed similarly to those from the Prometheus Node Exporter input plugin. They can be sent to output plugins including Prometheus Exporter or Prometheus Remote Write.
Important note: Metrics collected with this plugin flow through a separate pipeline from logs, and current filters do not operate on top of metrics.
Key | Description | Default |
---|---|---|
scrape_interval | The rate at which metrics are collected from the host operating system | 2 seconds |
scrape_on_start | Scrape metrics upon start, useful to avoid waiting for 'scrape_interval' for the first round of metrics. | false |
In the following configuration file, the input plugin fluentbit_metrics collects metrics every 2 seconds and exposes them through our Prometheus Exporter output plugin on HTTP/TCP port 2021.
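A sketch of that configuration (the tag name is illustrative):

```
[INPUT]
    name            fluentbit_metrics
    tag             internal_metrics
    scrape_interval 2

[OUTPUT]
    name  prometheus_exporter
    match internal_metrics
    host  0.0.0.0
    port  2021
```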
You can test that the metrics are exposed by using curl:
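For example, against the configuration above:

```
curl http://127.0.0.1:2021/metrics
```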
The HTTP input plugin allows you to send custom records to an HTTP endpoint.
The http input plugin allows Fluent Bit to open up an HTTP port that you can then route data to in a dynamic way. This plugin supports dynamic tags, which allow you to send data with different tags through the same input. An example video and curl message can be seen below.

The plugin supports the following configuration parameters:

Key | Description | Default |
---|---|---|
host | The address to listen on | 0.0.0.0 |
port | The port for Fluent Bit to listen on | 9880 |
buffer_max_size | Specify the maximum buffer size in KB to receive a JSON message. | 4M |
buffer_chunk_size | This sets the chunk size for incoming JSON messages. These chunks are then stored/managed in the space available by buffer_max_size. | 512K |
How to set tag
The tag for the HTTP input plugin is set by adding the tag to the end of the request URL. This tag is then used to route the event through the system. For example, in the curl message below the tag set is app.log. If you do not set the tag, http.0 is automatically used. If you have multiple HTTP inputs then they will follow a pattern of http.N, where N is an integer representing the input.
Example Curl message
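A sketch, assuming the default port 9880 and the app.log tag described above:

```
curl -d '{"key1":"value1","key2":"value2"}' -XPOST -H "content-type: application/json" http://localhost:9880/app.log
```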
Forward is the protocol used by Fluentd and Fluent Bit to route messages between peers. This plugin implements the input service to listen for Forward messages.
The plugin supports the following configuration parameters:
Key | Description | Default |
---|---|---|
Listen | Listener network interface. | 0.0.0.0 |
Port | TCP port to listen for incoming connections. | 24224 |
Unix_Path | Specify the path to a unix socket to receive Forward messages. If set, Listen and Port are not used. | |
Buffer_Max_Size | Specify the maximum buffer memory size used to receive a Forward message. The value must be according to the unit size specification. | 6144000 |
Buffer_Chunk_Size | By default the buffer to store the incoming Forward messages does not allocate the maximum memory allowed; instead it allocates memory when required. The rounds of allocations are set by Buffer_Chunk_Size. The value must be according to the unit size specification. | 1024000 |
Tag_Prefix | Prefix incoming tag with the defined value. | |
In order to receive Forward messages, you can run the plugin from the command line or through the configuration file as shown in the following examples.
From the command line you can let Fluent Bit listen for Forward messages with the following options:
By default the service will listen on all interfaces (0.0.0.0) through TCP port 24224; optionally you can change this directly, e.g:
In the example, the Forward messages will only arrive through the network interface with the 192.168.3.2 address and TCP port 9090.
In your main configuration file append the following Input & Output sections:
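A sketch of those sections, using the defaults documented above:

```
[INPUT]
    Name   forward
    Listen 0.0.0.0
    Port   24224

[OUTPUT]
    Name  stdout
    Match *
```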
Once Fluent Bit is running, you can send some messages using the fluent-cat tool (this tool is provided by Fluentd):
In Fluent Bit we should see the following output:
The mem input plugin gathers information about the memory and swap usage of the running system at a fixed interval of time and reports the total amount of memory and the amount of free memory available.
In order to get memory and swap usage from your system, you can run the plugin from the command line or through the configuration file:
In your main configuration file append the following Input & Output sections:
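A sketch of those sections (the tag is illustrative):

```
[INPUT]
    Name mem
    Tag  memory

[OUTPUT]
    Name  stdout
    Match *
```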
The kmsg input plugin reads the Linux Kernel log buffer from the beginning; it gets every record and parses its fields: priority, sequence, seconds, useconds, and message.
In order to start getting the Linux Kernel messages, you can run the plugin from the command line or through the configuration file:
As described above, the plugin processes all messages that the Linux Kernel reported; the output has been truncated for clarity.
In your main configuration file append the following Input & Output sections:
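A sketch of those sections (the tag is illustrative):

```
[INPUT]
    Name kmsg
    Tag  kernel

[OUTPUT]
    Name  stdout
    Match *
```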
The stdin plugin allows retrieving valid JSON text messages over the standard input interface (stdin). In order to use it, specify the plugin name as the input, e.g:
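```
fluent-bit -i stdin -o stdout
```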
As input data, the stdin plugin recognizes the following JSON data formats:
A better example to demonstrate how it works is through a Bash script that generates messages and writes them to Fluent Bit. Write the following content in a file named test.sh:
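A minimal sketch of such a script (the JSON content is illustrative):

```
#!/bin/sh

# Emit one JSON map per second; each map becomes one Fluent Bit record.
while :; do
  echo -n "{\"key\": \"some value\"}"
  sleep 1
done
```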
Give the script execution permission:
Now let's start the script and Fluent Bit in the following way:
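```
./test.sh | fluent-bit -i stdin -o stdout
```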
The plugin supports the following configuration parameters:

Key | Description | Default |
---|---|---|
Buffer_Size | Set the buffer size to read data. This value is used to increase the buffer size. The value must be according to the unit size specification. | 16k |
The health input plugin allows you to check how healthy a TCP server is. It does the check by issuing a TCP connection at a fixed interval of time.
The plugin supports the following configuration parameters:
Key | Description |
---|---|
Host | Name of the target host or IP address to check. |
Port | TCP port where to perform the connection check. |
Interval_Sec | Interval in seconds between the service checks. Default value is 1. |
Internal_Nsec | Specify a nanoseconds interval for service checks; it works in conjunction with the Interval_Sec configuration key. Default value is 0. |
Alert | If enabled, it will only generate messages if the target TCP service is down. By default this option is disabled. |
Add_Host | If enabled, the hostname is appended to each record. Default value is false. |
Add_Port | If enabled, the port number is appended to each record. Default value is false. |
In order to start performing the checks, you can run the plugin from the command line or through the configuration file:
From the command line you can let Fluent Bit generate the checks with the following options:
In your main configuration file append the following Input & Output sections:
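A sketch of those sections, checking a local service (host and port are illustrative):

```
[INPUT]
    Name         health
    Host         127.0.0.1
    Port         80
    Interval_Sec 1

[OUTPUT]
    Name  stdout
    Match *
```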
Once Fluent Bit is running, you will see some random values in the output interface similar to this:
The MQTT input plugin allows retrieving messages/data from MQTT control packets over a TCP connection. The incoming data to receive must be a JSON map.
The plugin supports the following configuration parameters:
Key | Description |
---|---|
Listen | Listener network interface, default: 0.0.0.0 |
Port | TCP port where listening for connections, default: 1883 |
In order to start listening for MQTT messages, you can run the plugin from the command line or through the configuration file:
Since the MQTT input plugin lets Fluent Bit behave as a server, we need to dispatch some messages using an MQTT client; in the following example the mosquitto tool is used for this purpose:
The following command line will send a message to the MQTT input plugin:
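For example (topic and payload are illustrative):

```
mosquitto_pub -m '{"key1": 123, "key2": 456}' -t some/topic
```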
In your main configuration file append the following Input & Output sections:
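A sketch of those sections, using the defaults documented above (the tag is illustrative):

```
[INPUT]
    Name   mqtt
    Tag    data
    Listen 0.0.0.0
    Port   1883

[OUTPUT]
    Name  stdout
    Match *
```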
The netif input plugin gathers network traffic information of the running system at a fixed interval of time and reports it.
The Network I/O Metrics plugin creates metrics that are log-based (i.e. JSON payload). If you are looking for Prometheus-based metrics please see the Node Exporter Metrics input plugin.
The plugin supports the following configuration parameters:
Key | Description |
---|---|
Interface | Specify the network interface to monitor, e.g. eth0 |
Interval_Sec | Polling interval (seconds). default: 1 |
Interval_NSec | Polling interval (nanoseconds). default: 0 |
Verbose | If true, gather metrics precisely. default: false |
In order to monitor network traffic from your system, you can run the plugin from the command line or through the configuration file:
In your main configuration file append the following Input & Output sections:
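A sketch of those sections; the interface name is illustrative and should match one present on your system:

```
[INPUT]
    Name         netif
    Tag          netif
    Interface    eth0
    Interval_Sec 1

[OUTPUT]
    Name  stdout
    Match *
```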
Note: Total interval (sec) = Interval_Sec + (Interval_Nsec / 1000000000).
e.g. 1.5s = 1s + 500000000ns
The serial input plugin allows retrieving messages/data from a serial interface.
Key | Description |
---|---|
File | Absolute path to the device entry, e.g: /dev/ttyS0 |
Bitrate | The bitrate for the communication, e.g: 9600, 38400, 115200, etc |
Min_Bytes | The serial interface will expect at least Min_Bytes to be available before processing the message (default: 1) |
Separator | Allows specifying a separator string that's used to determine when a message ends. |
Format | Specify the format of the incoming data stream. The only option available is 'json'. Note that Format and Separator cannot be used at the same time. |
In order to retrieve messages over the Serial interface, you can run the plugin from the command line or through the configuration file:
The following example loads the input serial plugin, sets a bitrate of 9600, listens on the /dev/tnt0 interface and uses the custom tag data to route the message.
The above interface (/dev/tnt0) is an emulation of a serial interface (more details at the bottom); for demonstration purposes we will write some messages to the other end of the interface, in this case /dev/tnt1, e.g:
In Fluent Bit you should see an output like this:
Now using the Separator configuration, we could send multiple messages at once (run this command after starting Fluent Bit):
In your main configuration file append the following Input & Output sections:
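A sketch of those sections, matching the example described above:

```
[INPUT]
    Name      serial
    Tag       data
    File      /dev/tnt0
    Bitrate   9600
    Min_Bytes 1

[OUTPUT]
    Name  stdout
    Match *
```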
The following content is some extra information that will allow you to emulate a serial interface on your Linux system, so you can test this serial input plugin locally in case you don't have such an interface in your computer. The following procedure has been tested on Ubuntu 15.04 running a Linux Kernel 4.0.
Download the sources
Unpack and compile
Copy the new kernel module into the kernel modules directory
Load the module
You should see new serial ports in /dev/ (ls /dev/tnt*). Give appropriate permissions to the new serial ports:
When the module is loaded, it will interconnect the following virtual interfaces:
The random input plugin generates very simple random value samples using the device interface /dev/urandom; if it is not available, a unix timestamp is used as the value.
The plugin supports the following configuration parameters:
Key | Description |
---|---|
Samples | If set, it will only generate a specific number of samples. By default this value is set to -1, which will generate unlimited samples. |
Interval_Sec | Interval in seconds between samples generation. Default value is 1. |
Internal_Nsec | Specify a nanoseconds interval for samples generation; it works in conjunction with the Interval_Sec configuration key. Default value is 0. |
In order to start generating random samples, you can run the plugin from the command line or through the configuration file:
From the command line you can let Fluent Bit generate the samples with the following options:
In your main configuration file append the following Input & Output sections:
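A sketch of those sections, using the defaults documented above (the tag is illustrative):

```
[INPUT]
    Name         random
    Tag          random
    Samples      -1
    Interval_Sec 1

[OUTPUT]
    Name  stdout
    Match *
```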
Once Fluent Bit is running, you will see the reports in the output interface similar to this:
The process input plugin allows you to check how healthy a process is. It does so by performing a service check at a fixed interval of time specified by the user.
The Process metrics plugin creates metrics that are log-based (i.e. JSON payload). If you are looking for Prometheus-based metrics please see the Node Exporter Metrics input plugin.
The plugin supports the following configuration parameters:
Key | Description |
---|---|
Proc_Name | Name of the target process to check. |
Interval_Sec | Interval in seconds between the service checks. Default value is 1. |
Interval_Nsec | Specify a nanoseconds interval for service checks; it works in conjunction with the Interval_Sec configuration key. Default value is 0. |
Alert | If enabled, it will only generate messages if the target process is down. By default this option is disabled. |
Fd | If enabled, the number of file descriptors is appended to each record. Default value is true. |
Mem | If enabled, memory usage of the process is appended to each record. Default value is true. |
In order to start performing the checks, you can run the plugin from the command line or through the configuration file:
The following example will check the health of the crond process.
In your main configuration file append the following Input & Output sections:
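A sketch of those sections, matching the crond example above:

```
[INPUT]
    Name         proc
    Proc_Name    crond
    Interval_Sec 1
    Fd           true
    Mem          true

[OUTPUT]
    Name  stdout
    Match *
```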
Once Fluent Bit is running, you will see the health of the process:
The statsd input plugin allows you to receive metrics via StatsD protocol.
The plugin supports the following configuration parameters:
Key | Description | Default |
---|---|---|
Listen | Listener network interface. | 0.0.0.0 |
Port | UDP port where listening for connections | 8125 |
Here is a configuration example.
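A sketch, using the defaults documented above:

```
[INPUT]
    Name   statsd
    Listen 0.0.0.0
    Port   8125

[OUTPUT]
    Name  stdout
    Match *
```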
Now you can input metrics through the UDP port as follows:
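For example, using netcat (metric names and values are illustrative):

```
echo "click:10|c|@0.1" | nc -q0 -u 127.0.0.1 8125
echo "active:99|g"     | nc -q0 -u 127.0.0.1 8125
```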
Fluent Bit will produce the following records:
The systemd input plugin allows collecting log messages from the Journald daemon on Linux environments.
The plugin supports the following configuration parameters:
Key | Description | Default |
---|---|---|
Path | Optional path to the Systemd journal directory; if not set, the plugin will use default paths to read local-only logs. | |
Max_Fields | Set a maximum number of fields (keys) allowed per record. | 8000 |
Max_Entries | When Fluent Bit starts, the Journal might have a high number of logs in the queue. In order to avoid delays and reduce memory usage, this option allows specifying the maximum number of log entries that can be processed per round. Once the limit is reached, Fluent Bit will continue processing the remaining log entries once Journald performs the notification. | 5000 |
Systemd_Filter | Allows performing a query over logs that contain a specific Journald key/value pair, e.g: _SYSTEMD_UNIT=UNIT. The Systemd_Filter option can be specified multiple times in the input section to apply multiple filters as required. | |
Systemd_Filter_Type | Define the filter type when Systemd_Filter is specified multiple times. Allowed values are And and Or. With And a record is matched only when all of the Systemd_Filter have a match. With Or a record is matched when any of the Systemd_Filter has a match. | Or |
Tag | The tag is used to route messages, but on the Systemd plugin there is an extra functionality: if the tag includes a star/wildcard, it will be expanded with the Systemd Unit file. | |
DB | Specify the absolute path of a database file to keep track of the Journald cursor. | |
DB.Sync | Set a default synchronization (I/O) method. Values: Extra, Full, Normal, Off. This flag affects how the internal SQLite engine does synchronization to disk. Note: this option was introduced on Fluent Bit v1.4.6. | Full |
Read_From_Tail | Start reading new entries. Skip entries already stored in Journald. | Off |
Lowercase | Lowercase the Journald field (key). | Off |
Strip_Underscores | Remove the leading underscore of the Journald field (key). For example the Journald field _PID becomes the key PID. | Off |
In order to receive Systemd messages, you can run the plugin from the command line or through the configuration file:
From the command line you can let Fluent Bit listen for Systemd messages with the following options:
In the example above we are collecting all messages coming from the Docker service.
In your main configuration file append the following Input & Output sections:
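A sketch of those sections, matching the Docker service example above:

```
[INPUT]
    Name           systemd
    Tag            host.*
    Systemd_Filter _SYSTEMD_UNIT=docker.service

[OUTPUT]
    Name  stdout
    Match *
```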
The syslog input plugin allows collecting Syslog messages through a Unix socket server (UDP or TCP) or over the network using TCP or UDP.
The plugin supports the following configuration parameters:
Key | Description | Default |
---|---|---|
Mode | Defines transport protocol mode: unix_udp (UDP over Unix socket), unix_tcp (TCP over Unix socket), tcp or udp | unix_udp |
Listen | If Mode is set to tcp or udp, specify the network interface to bind. | 0.0.0.0 |
Port | If Mode is set to tcp or udp, specify the TCP port to listen for incoming connections. | 5140 |
Path | If Mode is set to unix_tcp or unix_udp, set the absolute path to the Unix socket file. | |
Unix_Perm | If Mode is set to unix_tcp or unix_udp, set the permission of the Unix socket file. | 0644 |
Parser | Specify an alternative parser for the message. If Mode is set to tcp or udp then the default parser is syslog-rfc5424, otherwise syslog-rfc3164-local is used. If your syslog messages have fractional seconds, set this Parser value to syslog-rfc5424 instead. | |
Buffer_Chunk_Size | By default the buffer to store the incoming Syslog messages does not allocate the maximum memory allowed; instead it allocates memory when required. The rounds of allocations are set by Buffer_Chunk_Size. If not set, Buffer_Chunk_Size is equal to 32000 bytes (32KB). Read the considerations below when using udp or unix_udp mode. | |
Buffer_Max_Size | Specify the maximum buffer size to receive a Syslog message. If not set, the default size will be the value of Buffer_Chunk_Size. | |
When using the Syslog input plugin, Fluent Bit requires access to the parsers.conf file; the path to this file can be specified with the option -R or through the Parsers_File key in the [SERVICE] section (more details below).
When udp or unix_udp is used, the buffer size to receive messages is configurable only through the Buffer_Chunk_Size option, which defaults to 32KB.
In order to receive Syslog messages, you can run the plugin from the command line or through the configuration file:
From the command line you can let Fluent Bit listen for Syslog messages with the following options:
By default the service will create and listen for Syslog messages on the unix socket /tmp/in_syslog
In your main configuration file append the following Input & Output sections:
Once Fluent Bit is running, you can send some messages using the logger tool:
The following content aims to provide configuration examples for different use cases to integrate Fluent Bit and make it listen for Syslog messages from your systems.
Put the following content in your fluent-bit.conf file:
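A sketch of that file, listening for Syslog over TCP (mode and port can be adapted):

```
[SERVICE]
    Flush        1
    Parsers_File parsers.conf

[INPUT]
    Name   syslog
    Parser syslog-rfc3164
    Mode   tcp
    Listen 0.0.0.0
    Port   5140

[OUTPUT]
    Name  stdout
    Match *
```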
then start Fluent Bit.
Add a new file to your rsyslog config rules called 60-fluent-bit.conf inside the directory /etc/rsyslog.d/ and add the following content:
then make sure to restart your rsyslog daemon:
Put the following content in your fluent-bit.conf file:
then start Fluent Bit.
Add a new file to your rsyslog config rules called 60-fluent-bit.conf inside the directory /etc/rsyslog.d/ and place the following content:
Make sure that the socket file is readable by rsyslog (tweak the Unix_Perm option shown above).
The tail input plugin allows monitoring one or several text files. It has similar behavior to the tail -f shell command.
The plugin reads every matched file in the Path pattern and for every new line found (separated by a newline character, \n), it generates a new record. Optionally a database file can be used so the plugin can keep a history of tracked files and a state of offsets; this is very useful to resume a state if the service is restarted.
The plugin supports the following configuration parameters:
Key | Description | Default |
---|---|---|
Buffer_Chunk_Size | Set the initial buffer size to read file data. This value is used to increase the buffer size. The value must be according to the unit size specification. | 32k |
Buffer_Max_Size | Set the limit of the buffer size per monitored file. When a buffer needs to be increased (e.g: very long lines), this value is used to restrict how much the memory buffer can grow. If reading a file exceeds this limit, the file is removed from the monitored file list. The value must be according to the unit size specification. | 32k |
Path | Pattern specifying a specific log file or multiple ones through the use of common wildcards. Multiple patterns separated by commas are also allowed. | |
Path_Key | If enabled, it appends the name of the monitored file as part of the record. The value assigned becomes the key in the map. | |
Exclude_Path | Set one or multiple shell patterns separated by commas to exclude files matching certain criteria. | |
Offset_Key | If enabled, Fluent Bit appends the offset of the current monitored file as part of the record. The value assigned becomes the key in the map. | |
Read_from_Head | For newly discovered files on start (without a database offset/position), read the content from the head of the file, not the tail. | False |
Refresh_Interval | The interval of refreshing the list of watched files, in seconds. | 60 |
Rotate_Wait | Specify the number of extra seconds to monitor a file once it is rotated in case some pending data is flushed. | 5 |
Ignore_Older | Ignores files whose modification date is older than this time in seconds. Supports m, h, d (minutes, hours, days) syntax. | |
Skip_Long_Lines | When a monitored file reaches its buffer capacity due to a very long line (Buffer_Max_Size), the default behavior is to stop monitoring that file. Skip_Long_Lines alters that behavior and instructs Fluent Bit to skip long lines and continue processing other lines that fit into the buffer size. | Off |
Skip_Empty_Lines | Skips empty lines in the log file from any further processing or output. | Off |
DB | Specify the database file to keep track of monitored files and offsets. | |
DB.sync | Set a default synchronization (I/O) method. Values: Extra, Full, Normal, Off. This flag affects how the internal SQLite engine does synchronization to disk. Most workload scenarios will be fine with normal mode, but if you really need full synchronization after every write operation you should set full mode. Note that full has a high I/O performance cost. | normal |
DB.locking | Specify that the database will be accessed only by Fluent Bit. Enabling this feature helps to increase performance when accessing the database, but it restricts any external tool from querying the content. | false |
DB.journal_mode | Sets the journal mode for databases (WAL). Enabling WAL provides higher performance. Note that WAL is not compatible with shared network file systems. | WAL |
Mem_Buf_Limit | Set a limit of memory that the Tail plugin can use when appending data to the Engine. If the limit is reached, it will be paused; when the data is flushed it resumes. | |
Exit_On_Eof | When reading a file, exit as soon as it reaches the end of the file. Useful for bulk loads and tests. | false |
Parser | Specify the name of a parser to interpret the entry as a structured message. | |
Key | When a message is unstructured (no parser applied), it's appended as a string under the key name log. This option allows defining an alternative name for that key. | log |
Inotify_Watcher | Set to false to use a file stat watcher instead of inotify. | true |
Tag | Set a tag (with regex-extract fields) that will be placed on lines read. E.g. kube.<namespace_name>.<pod_name>.<container_name>. Note that "tag expansion" is supported: if the tag includes an asterisk (*), that asterisk will be replaced with the absolute path of the monitored file. | |
Tag_Regex | Set a regex to extract fields from the file name. | |
Static_Batch_Size | Set the maximum number of bytes to process per iteration for the monitored static files (files that already exist upon Fluent Bit start). | 50M |
multiline.parser | Specify one or multiple multiline parsers to apply to the content. | |
Note that if the database parameter DB is not specified, by default the plugin will start reading each target file from the beginning. This also might cause some unwanted behavior; for example, when a line is bigger than Buffer_Chunk_Size and Skip_Long_Lines is not turned on, the file will be read from the beginning at each Refresh_Interval until the file is rotated.
Starting from Fluent Bit v1.8 we have introduced new Multiline core functionality. For the Tail input plugin, this means it now supports the old configuration mechanism as well as the new one. In order to avoid breaking changes, we will keep both, but we encourage our users to use the latest one. We will refer to the two mechanisms as:
Multiline Core
Old Multiline
The new multiline core is exposed by the following configuration:

Key | Description |
---|---|
multiline.parser | Specify one or multiple multiline parsers to apply to the content. |

Note that when using a new multiline.parser definition, you must disable the old configuration from your tail section, that is, the following options:

parser
parser_firstline
parser_N
multiline
multiline_flush
docker_mode
If you are running Fluent Bit to process logs coming from containers like Docker or CRI, you can use the new built-in modes for such purposes. This will help to reassemble multiline messages originally split by Docker or CRI. Specifying the two options separated by a comma (docker, cri) means multi-format: try the docker and cri multiline formats.
We are still working on extending support to do multiline for nested stack traces and such. Over the Fluent Bit v1.8.x release cycle we will be updating the documentation.
For the old multiline configuration, the following options exist to configure the handling of multiline logs:

Key | Description | Default |
---|---|---|
Multiline | If enabled, the plugin will try to discover multiline messages and use the proper parsers to compose the outgoing messages. Note that when this option is enabled the Parser option is not used. | Off |
Multiline_Flush | Wait period, in seconds, to process queued multiline messages. | 4 |
Parser_Firstline | Name of the parser that matches the beginning of a multiline message. Note that the regular expression defined in the parser must include a group name (named capture), and the value of the last match group must be a string. | |
Parser_N | Optional extra parser to interpret and structure multiline entries. This option can be used to define multiple parsers, e.g: Parser_1 ab1, Parser_2 ab2, Parser_N abN. | |
Docker_Mode | If enabled, the plugin will recombine split Docker log lines before passing them to any parser as configured above. This mode cannot be used at the same time as Multiline. | Off |
Docker_Mode_Flush | Wait period, in seconds, to flush queued unfinished split lines. | 4 |
Docker_Mode_Parser | Specify an optional parser for the first line of the docker multiline mode. The parser name to be specified must be registered in the parsers file. | |
Docker mode exists to recombine JSON log lines split by the Docker daemon due to its line length limit. To use this feature, configure the tail plugin with the corresponding parser and then enable Docker mode:
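A sketch, assuming a docker JSON parser is defined in your parsers.conf:

```
[SERVICE]
    Parsers_File parsers.conf

[INPUT]
    Name        tail
    Path        /var/log/containers/*.log
    Parser      docker
    Docker_Mode On
    Tag         docker.*

[OUTPUT]
    Name  stdout
    Match *
```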
In order to tail text or log files, you can run the plugin from the command line or through the configuration file:
From the command line you can let Fluent Bit parse text files with the following options:
When using multi-line configuration you need to first specify Multiline On in the configuration and use the Parser_Firstline and additional parser parameters Parser_N if needed. If we are trying to read the following Java stacktrace as a single event:

We need to specify a Parser_Firstline parameter that matches the first line of a multi-line event. Once a match is made, Fluent Bit will read all subsequent lines until another match with Parser_Firstline is made.
In the case above we can use the following parser, which extracts the Time as time and the remaining portion of the multiline as log.

If we want to further parse the entire event we can add additional parsers with Parser_N, where N is an integer. The final Fluent Bit configuration looks like the following:
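A sketch of that final configuration; the path and parser names are illustrative and the parsers must exist in your parsers.conf:

```
[SERVICE]
    Parsers_File parsers.conf

[INPUT]
    Name             tail
    Path             /var/log/java-app.log
    Multiline        On
    Parser_Firstline multiline_firstline
    Parser_1         multiline_rest

[OUTPUT]
    Name  stdout
    Match *
```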
Our output will be as follows.
The tail input plugin has a feature to save the state of tracked files; it is strongly suggested that you enable it. For this purpose the db property is available, e.g:
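A sketch, from the command line (paths are illustrative):

```
fluent-bit -i tail -p path=/var/log/syslog -p db=/path/to/logs.db -o stdout
```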
When running, the database file /path/to/logs.db will be created. This database is backed by SQLite3, so if you are interested in exploring the content, you can open it with the SQLite client tool, e.g:
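For example:

```
sqlite3 /path/to/logs.db
sqlite> SELECT * FROM in_tail_files;
```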
Make sure to explore it when Fluent Bit is not actively writing to the database file; otherwise you will see some Error: database is locked messages.
By default the SQLite client tool does not format the columns in a human-readable way, so to explore the in_tail_files table you can create a config file in ~/.sqliterc with the following content:
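A minimal sketch of that file:

```
.headers on
.mode column
```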
Fluent Bit keeps the state or checkpoint of each file through a SQLite database file, so if the service is restarted, it can continue consuming files from its last checkpoint position (offset). The default options are set for high performance and corruption safety.
The SQLite journaling mode enabled is Write Ahead Log, or WAL. This improves the performance of read and write operations to disk. When enabled, you will see additional files being created in your file system; consider the following configuration statement:
The above configuration enables a database file called test.db and, in the same path as that file, SQLite will create two additional files:

test.db-shm
test.db-wal

Those two files support the WAL mechanism, which helps to improve performance and reduce the number of system calls required. The -wal file stores the new changes to be committed; at some point the WAL file transactions are moved back to the real database file. The -shm file is a shared-memory area that allows concurrent users access to the WAL file.
The WAL mechanism gives us higher performance but might also increase the memory usage by Fluent Bit. Most of this usage comes from memory-mapped and cached pages. In some cases you might see that memory usage stays somewhat high, giving the impression of a memory leak, but actually it is not relevant unless you need your memory metrics back to normal. Starting from Fluent Bit v1.7.3 we introduced the new option db.journal_mode, which sets the journal mode for databases; by default it is WAL (Write-Ahead Logging). The currently allowed values for db.journal_mode are DELETE | TRUNCATE | PERSIST | MEMORY | WAL | OFF.
File rotation is properly handled, including logrotate's copytruncate mode.

Note that the Path patterns cannot match the rotated files; otherwise, the rotated file would be read again and lead to duplicate records.
The thermal input plugin reports system temperatures periodically, each second by default. Currently this plugin is only available for Linux.
The following table describes the information generated by the plugin.

key | description |
---|---|
name | The name of the thermal zone, such as thermal_zone0 |
type | The type of the thermal zone, such as x86_pkg_temp |
temp | Current temperature in celsius |
The plugin supports the following configuration parameters:
Key | Description |
---|---|
Interval_Sec | Polling interval (seconds). default: 1 |
Interval_NSec | Polling interval (nanoseconds). default: 0 |
name_regex | Optional name filter regex. default: None |
type_regex | Optional type filter regex. default: None |
In order to get temperature(s) of your system, you can run the plugin from the command line or through the configuration file:
Some systems provide multiple thermal zones. In this example, only thermal_zone0 is monitored by name, once per minute.
In your main configuration file append the following Input & Output sections:
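A sketch of those sections, matching the thermal_zone0 example above (the tag is illustrative):

```
[INPUT]
    Name         thermal
    Tag          temperature
    Interval_Sec 60
    name_regex   thermal_zone0

[OUTPUT]
    Name  stdout
    Match *
```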
The winlog input plugin allows you to read Windows Event Log.
The plugin supports the following configuration parameters:
Key | Description | Default |
---|---|---|
Channels | A comma-separated list of channels to read from. | |
Interval_Sec | Set the polling interval for each channel. (optional) | 1 |
DB | Set the path to save the read offsets. (optional) | |
Note that if you do not set DB, the plugin will read channels from the beginning on each startup.
Here is a minimum configuration example.
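A sketch (channel names and the database path are illustrative):

```
[INPUT]
    Name     winlog
    Channels Setup,Windows PowerShell
    DB       winlog.sqlite

[OUTPUT]
    Name  stdout
    Match *
```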
Note that some Windows Event Log channels (like Security) require admin privileges for reading. In this case, you need to run fluent-bit as an administrator.
If you want to do a quick test, you can run this plugin from the command line.
The tcp input plugin allows retrieving structured JSON or raw messages over a TCP network interface (TCP port).
The plugin supports the following configuration parameters:
Key | Description | Default |
---|---|---|
Listen | Listener network interface. | 0.0.0.0 |
Port | TCP port where listening for connections | 5170 |
Buffer_Size | Specify the maximum buffer size in KB to receive a JSON message. If not set, the default size will be the value of Chunk_Size. | |
Chunk_Size | By default the buffer to store the incoming JSON messages does not allocate the maximum memory allowed; instead it allocates memory when required. The rounds of allocations are set by Chunk_Size, in KB. If not set, Chunk_Size is equal to 32 (32KB). | 32 |
Format | Specify the expected payload format. It supports the options json and none. When using json, it expects JSON maps; when set to none, it will split every record using the defined Separator (option below). | json |
Separator | When the expected Format is set to none, Fluent Bit needs a separator string to split the records. By default it uses the breakline character (LF, 0x0A). | |
In order to receive JSON messages over TCP, you can run the plugin from the command line or through the configuration file:
From the command line you can let Fluent Bit listen for JSON messages with the following options:
By default the service will listen on all interfaces (0.0.0.0) through TCP port 5170; optionally you can change this directly, e.g:
In the example, the JSON messages will only arrive through the network interface with the 192.168.3.2 address and TCP port 9090.
In your main configuration file append the following Input & Output sections:
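A sketch of those sections, using the defaults documented above:

```
[INPUT]
    Name   tcp
    Listen 0.0.0.0
    Port   5170
    Format json

[OUTPUT]
    Name  stdout
    Match *
```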
Once Fluent Bit is running, you can send some messages using netcat:
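A sketch of such a message (the payload is illustrative):

```
echo '{"key 1": 123456789, "key 2": "abcdefg"}' | nc 127.0.0.1 5170
```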
When receiving payloads in JSON format, there are high performance penalties. Parsing JSON is a very expensive task, so you can expect your CPU usage to increase under high-load environments. To get faster data ingestion, consider using the option Format none to avoid JSON parsing if it is not needed.