The docker input plugin allows you to collect Docker container metrics such as memory usage and CPU consumption.
The plugin supports the following configuration parameters:
Key | Description | Default |
---|---|---|
Interval_Sec | Polling interval in seconds. | 1 |
Include | A space-separated list of containers to include. | |
Exclude | A space-separated list of containers to exclude. | |
Threaded | Indicates whether to run this input in its own thread. | false |
path.containers | Used to specify the container directory if Docker is configured with a custom "data-root" directory. | /var/lib/docker/containers |
If you set neither `Include` nor `Exclude`, the plugin will try to get metrics from all the running containers.

Here is an example configuration that collects metrics from two docker instances (`6bab19c3a0f9` and `14159be4ca2c`).
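A minimal sketch of such a configuration (the container IDs below are the ones mentioned above; adjust them and the output to your environment):

```
[INPUT]
    Name         docker
    Include      6bab19c3a0f9 14159be4ca2c
    Interval_Sec 1

[OUTPUT]
    Name   stdout
    Match  *
```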
This configuration will produce records like below.
The docker events input plugin uses the docker API to capture server events. A complete list of possible events returned by this plugin can be found in the Docker API documentation.
This plugin supports the following configuration parameters:
Key | Description | Default |
---|---|---|
Unix_Path | The docker socket unix path. | /var/run/docker.sock |
Buffer_Size | The size of the buffer used to read docker events (in bytes). | 8192 |
Parser | Specify the name of a parser to interpret the entry as a structured message. | None |
Key | When a message is unstructured (no parser applied), it's appended as a string under the key name message. | message |
Reconnect.Retry_limits | The maximum number of retries allowed. The plugin tries to reconnect with the docker socket when EOF is detected. | 5 |
Reconnect.Retry_interval | The retrying interval. Unit is second. | 1 |
Threaded | Indicates whether to run this input in its own thread. | false |
In your main configuration file append the following Input & Output sections:
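A minimal sketch, assuming the default Docker socket path:

```
[INPUT]
    Name      docker_events
    Unix_Path /var/run/docker.sock

[OUTPUT]
    Name   stdout
    Match  *
```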
The tail input plugin allows you to monitor one or several text files. Its behavior is similar to the `tail -f` shell command.

The plugin reads every file matched by the `Path` pattern, and for every new line found (separated by a newline character, `\n`), it generates a new record. Optionally, a database file can be used so the plugin keeps a history of tracked files and a state of offsets; this is very useful for resuming the state if the service is restarted.
The plugin supports the following configuration parameters:
Key | Description | Default |
---|---|---|
Buffer_Chunk_Size | Set the initial buffer size to read files data. This value is also used to increase the buffer size. The value must be according to the Unit Size specification. | 32k |
Buffer_Max_Size | Set the limit of the buffer size per monitored file. When a buffer needs to be increased (e.g: very long lines), this value is used to restrict how much the memory buffer can grow. If reading a file exceeds this limit, the file is removed from the monitored file list. The value must be according to the Unit Size specification. | 32k |
Path | Pattern specifying a specific log file or multiple ones through the use of common wildcards. Multiple patterns separated by commas are also allowed. | |
Path_Key | If enabled, it appends the name of the monitored file as part of the record. The value assigned becomes the key in the map. | |
Exclude_Path | Set one or multiple shell patterns separated by commas to exclude files matching certain criteria, e.g: Exclude_Path *.gz,*.zip | |
Offset_Key | If enabled, Fluent Bit appends the offset of the current monitored file as part of the record. The value assigned becomes the key in the map. | |
Read_from_Head | For newly discovered files on start (without a database offset/position), read the content from the head of the file, not the tail. | False |
Refresh_Interval | The interval of refreshing the list of watched files in seconds. | 60 |
Rotate_Wait | Specify the number of extra seconds to monitor a file once it is rotated, in case some pending data is flushed. | 5 |
Ignore_Older | Ignores files older than ignore_older. Supports m, h, d (minutes, hours, days) syntax. Default behavior is to read all. | |
Skip_Long_Lines | When a monitored file reaches its buffer capacity due to a very long line (Buffer_Max_Size), the default behavior is to stop monitoring that file. Skip_Long_Lines alters that behavior and instructs Fluent Bit to skip long lines and continue processing other lines that fit into the buffer size. | Off |
Skip_Empty_Lines | Skips empty lines in the log file from any further processing or output. | Off |
DB | Specify the database file to keep track of monitored files and offsets. | |
DB.sync | Set a default synchronization (I/O) method. Values: Extra, Full, Normal, Off. This flag affects how the internal SQLite engine does synchronization to disk; for more details about each option please refer to this section. Most workload scenarios will be fine with normal mode, but if you really need full synchronization after every write operation you should set full mode. Note that full has a high I/O performance cost. | normal |
DB.locking | Specify that the database will be accessed only by Fluent Bit. Enabling this feature helps to increase performance when accessing the database, but it restricts any external tool from querying the content. | false |
DB.journal_mode | Sets the journal mode for databases (WAL). Enabling WAL provides higher performance. Note that WAL is not compatible with shared network file systems. | WAL |
DB.compare_filename | This option determines whether to check both the inode and the filename when retrieving stored file information from the database. 'true' verifies both the inode and filename, while 'false' checks only the inode (default). | false |
Mem_Buf_Limit | Set a limit of memory that the Tail plugin can use when appending data to the Engine. If the limit is reached, it will be paused; when the data is flushed it resumes. | |
Exit_On_Eof | When reading a file, exit as soon as it reaches the end of the file. Useful for bulk load and tests. | false |
Parser | Specify the name of a parser to interpret the entry as a structured message. | |
Key | When a message is unstructured (no parser applied), it's appended as a string under the key name log. This option allows defining an alternative name for that key. | log |
Inotify_Watcher | Set to false to use the file stat watcher instead of inotify. | true |
Tag | Set a tag (with regex-extract fields) that will be placed on lines read. E.g. kube.<namespace_name>.<pod_name>.<container_name>.<container_id>. Note that "tag expansion" is supported: if the tag includes an asterisk (*), that asterisk will be replaced with the absolute path of the monitored file, with slashes replaced by dots (also see Workflow of Tail + Kubernetes Filter). | |
Tag_Regex | Set a regex to extract fields from the file name. E.g. `(?<pod_name>[a-z0-9](?:[-a-z0-9]*[a-z0-9])?(?:\\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*)_(?<namespace_name>[^_]+)_(?<container_name>.+)-(?<container_id>[a-z0-9]{64})\.log$` | |
Static_Batch_Size | Set the maximum number of bytes to process per iteration for the monitored static files (files that already exist when Fluent Bit starts). | 50M |
File_Cache_Advise | Set posix_fadvise to POSIX_FADV_DONTNEED mode. This will reduce the usage of the kernel file cache. This option is ignored if not running on Linux. | On |
Threaded | Indicates whether to run this input in its own thread. | false |
multiline.parser | Specify one or multiple Multiline Parser definitions to apply to the content. | |
Multiline | If enabled, the plugin will try to discover multiline messages and use the proper parsers to compose the outgoing messages. Note that when this option is enabled the Parser option is not used. | Off |
Multiline_Flush | Wait period time in seconds to process queued multiline messages. | 4 |
Parser_Firstline | Name of the parser that matches the beginning of a multiline message. Note that the regular expression defined in the parser must include a group name (named capture), and the value of the last match group must be a string. | |
Parser_N | Optional extra parser to interpret and structure multiline entries. This option can be used to define multiple parsers, e.g: Parser_1 ab1, Parser_2 ab2, Parser_N abN. | |
Docker_Mode | If enabled, the plugin will recombine split Docker log lines before passing them to any parser as configured above. This mode cannot be used at the same time as Multiline. | Off |
Docker_Mode_Flush | Wait period time in seconds to flush queued unfinished split lines. | 4 |
Docker_Mode_Parser | Specify an optional parser for the first line of the docker multiline mode. The parser name to be specified must be registered in the parsers.conf file. | |
Note that if the database parameter `DB` is not specified, by default the plugin will start reading each target file from the beginning. This might also cause some unwanted behavior: for example, when a line is bigger than `Buffer_Chunk_Size` and `Skip_Long_Lines` is not turned on, the file will be read from the beginning at each `Refresh_Interval` until the file is rotated.

Starting from Fluent Bit v1.8 we introduced new Multiline core functionality. For the Tail input plugin, this means it now supports the old configuration mechanism as well as the new one. To avoid breaking changes, we will keep both, but we encourage users to use the latest one. We will refer to the two mechanisms as:
Multiline Core
Old Multiline
The new multiline core is exposed by the following configuration:
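A minimal sketch of what this looks like in a tail section (the path is illustrative, and `java` is one of the built-in multiline parsers):

```
[INPUT]
    Name              tail
    Path              /var/log/myapp/*.log
    multiline.parser  java
```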
As stated in the Multiline Parser documentation, we now provide built-in configuration modes. Note that when using a new `multiline.parser` definition, you must disable the old configuration options in your tail section, such as:
parser
parser_firstline
parser_N
multiline
multiline_flush
docker_mode
If you are running Fluent Bit to process logs coming from containers like Docker or CRI, you can use the new built-in modes for such purposes. This will help to reassemble multiline messages originally split by Docker or CRI:
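A sketch of such a configuration for container logs (the path shown is an assumption; adjust it to where your container runtime writes its logs):

```
[INPUT]
    Name              tail
    Path              /var/log/containers/*.log
    multiline.parser  docker, cri
```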
The two options separated by a comma mean that Fluent Bit will try each parser in the list in order, applying the first one that matches the log. It will use the first parser which has a `start_state` that matches the log. For example, it will first try `docker`, and if `docker` does not match, it will then try `cri`.
We are still working on extending support to do multiline for nested stack traces and such. Over the Fluent Bit v1.8.x release cycle we will be updating the documentation.
For the old multiline configuration, the following options exist to configure the handling of multilines logs:
Docker mode exists to recombine JSON log lines split by the Docker daemon due to its line length limit. To use this feature, configure the tail plugin with the corresponding parser and then enable Docker mode:
In order to tail text or log files, you can run the plugin from the command line or through the configuration file:
From the command line you can let Fluent Bit parse text files with the following options:
In your main configuration file, append the following `Input` and `Output` sections:
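A minimal sketch (the path and tag are illustrative):

```
[INPUT]
    Name  tail
    Path  /var/log/syslog
    Tag   syslog

[OUTPUT]
    Name   stdout
    Match  *
```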
When using multi-line configuration you need to first specify `Multiline On` in the configuration and use the `Parser_Firstline` plus additional parser parameters `Parser_N` if needed. Suppose we are trying to read the following Java stack trace as a single event.

We need to specify a `Parser_Firstline` parameter that matches the first line of a multi-line event. Once a match is made, Fluent Bit will read all subsequent lines until another match with `Parser_Firstline` is made.

In the case above we can use the following parser, which extracts the time as `time` and the remaining portion of the multiline as `log`.

If we want to further parse the entire event, we can add additional parsers with `Parser_N`, where N is an integer. The final Fluent Bit configuration looks like the following:
Our output will be as follows.
The tail input plugin has a feature to save the state of the tracked files; it is strongly suggested that you enable it. For this purpose the `db` property is available, e.g:
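A sketch of enabling the database file (the path is illustrative and matches the one discussed below):

```
[INPUT]
    Name  tail
    Path  /var/log/containers/*.log
    DB    /path/to/logs.db
```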
When running, the database file /path/to/logs.db will be created. This database is backed by SQLite3, so if you are interested in exploring its content, you can open it with the SQLite client tool, e.g:

Make sure to explore it when Fluent Bit is not busy writing to the database file; otherwise you will see some `Error: database is locked` messages.
By default the SQLite client tool does not format the columns in a human-readable way, so to explore the in_tail_files table you can create a config file in ~/.sqliterc with the following content:
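For example, the following two settings enable column headers and column-aligned output (a minimal sketch; additional .width settings can be added to taste):

```
.headers on
.mode column
```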
Fluent Bit keeps the state or checkpoint of each file by using a SQLite database file, so if the service is restarted, it can continue consuming files from its last checkpoint position (offset). The default options are set for high performance and corruption safety.
The SQLite journaling mode enabled is Write Ahead Log or `WAL`. This improves the performance of read and write operations to disk. When enabled, you will see additional files being created in your file system. Consider the following configuration statement:
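A sketch of such a statement (the path is illustrative):

```
[INPUT]
    Name  tail
    Path  /var/log/containers/*.log
    DB    /path/to/test.db
```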
The above configuration enables a database file called `test.db`, and in the same path SQLite will create two additional files:

test.db-shm
test.db-wal

Those two files support the `WAL` mechanism, which helps to improve performance and reduce the number of system calls required. The `-wal` file stores the new changes to be committed; at some point the WAL file transactions are moved back to the real database file. The `-shm` file is a shared-memory area that allows concurrent users of the WAL file.
The `WAL` mechanism gives us higher performance but might also increase the memory usage of Fluent Bit. Most of this usage comes from memory-mapped and cached pages. In some cases you might see that memory usage stays a bit high, giving the impression of a memory leak, but this is generally not a problem unless you need your memory metrics back to normal. Starting from Fluent Bit v1.7.3 we introduced the new option `db.journal_mode`, which sets the journal mode for databases; by default it is WAL (Write-Ahead Logging). The currently allowed values for `db.journal_mode` are `DELETE | TRUNCATE | PERSIST | MEMORY | WAL | OFF`.
File rotation is properly handled, including logrotate's copytruncate mode.
Note that the `Path` patterns cannot match the rotated files. Otherwise, the rotated file would be read again and lead to duplicate records.
Note: This plugin is experimental and may be unstable. Use it in development or testing environments only, as its features and behavior are subject to change.
The `in_ebpf` input plugin is an experimental plugin for Fluent Bit that uses eBPF (extended Berkeley Packet Filter) to capture low-level system events. This plugin allows Fluent Bit to monitor kernel-level activities such as process executions, file accesses, memory allocations, network connections, and signal handling. It provides valuable insights into system behavior for debugging, monitoring, and security analysis.

The `in_ebpf` plugin leverages eBPF to trace kernel events in real time. By specifying trace points, users can collect targeted system-level metrics and events, which can be particularly useful for gaining visibility into operating system interactions and performance characteristics.

To enable `in_ebpf`, ensure the following dependencies are installed on your system:
Kernel Version: 4.18 or higher with eBPF support enabled.
Required Packages:
`bpftool`: Used to manage and debug eBPF programs.
`libbpf-dev`: Provides the `libbpf` library for loading and interacting with eBPF programs.
CMake 3.13 or higher: Required for building the plugin.

To enable the `in_ebpf` plugin, follow these steps to build Fluent Bit from source:
Clone the Fluent Bit Repository
Configure the Build with in_ebpf
Create a build directory and run `cmake` with the `-DFLB_IN_EBPF=On` flag to enable the `in_ebpf` plugin:
Compile the Source
Run Fluent Bit
Run Fluent Bit with elevated permissions (e.g., `sudo`), as loading eBPF programs requires root access or appropriate privileges:
Here's a basic example of how to configure the plugin:
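A minimal sketch that matches the traces discussed below (the plugin name `ebpf` and the `Trace` property follow the conventions described in this section; adjust to your build):

```
[INPUT]
    Name  ebpf
    Trace trace_signal
    Trace trace_malloc
    Trace trace_bind

[OUTPUT]
    Name  stdout
    Match *
```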
The configuration above enables tracing for:
Signal handling events (`trace_signal`)
Memory allocation events (`trace_malloc`)
Network bind operations (`trace_bind`)

You can enable multiple traces by adding multiple `Trace` directives in your configuration. The full list of existing traces can be seen here: Fluent Bit eBPF Traces.
The elasticsearch input plugin handles both Elasticsearch and OpenSearch Bulk API requests.
The plugin supports the following configuration parameters:
Key | Description | Default value |
---|---|---|
buffer_max_size | Set the maximum size of buffer. | 4M |
buffer_chunk_size | Set the buffer chunk size. | 512K |
tag_key | Specify a key name for extracting as a tag. | NULL |
meta_key | Specify a key name for meta information. | "@meta" |
hostname | Specify the hostname or FQDN. This parameter can be used for "sniffing" (auto-discovery of) cluster node information. | "localhost" |
version | Specify the Elasticsearch server version. This parameter is effective for checking the version of the Elasticsearch/OpenSearch server. | "8.0.0" |
threaded | Indicates whether to run this input in its own thread. | false |

Note: The Elasticsearch cluster uses "sniffing" to optimize the connections between its cluster and clients. Elasticsearch can build its cluster and dynamically generate a connection list, which is called "sniffing". The `hostname` will be used for sniffing information, and this is handled by the sniffing endpoint.
In order to start performing the checks, you can run the plugin from the command line or through the configuration file:
From the command line you can configure Fluent Bit to handle Bulk API requests with the following options:
In your main configuration file append the following Input & Output sections:
As described above, the plugin will handle ingested Bulk API requests. For large bulk ingestions, you may have to increase buffer size with buffer_max_size and buffer_chunk_size parameters:
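A sketch of such a configuration (the buffer values are illustrative):

```
[INPUT]
    Name              elasticsearch
    Buffer_Max_Size   20M
    Buffer_Chunk_Size 5M

[OUTPUT]
    Name   stdout
    Match  *
```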
Ingesting from Beats series agents is also supported. For example, Filebeat, Metricbeat, and Winlogbeat are able to ingest their collected data through this plugin.

Note that Fluent Bit's node information is returned as Elasticsearch 8.0.0, so users have to specify the following settings in their Beats configurations:
For large log ingestion on these beat plugins, users might have to configure rate limiting on those beats plugins when Fluent Bit indicates that the application is exceeding the size limit for HTTP requests:
The exec_wasi input plugin allows you to execute a WASM program that targets WASI as an external program and collect event logs from it.
The plugin supports the following configuration parameters:
Key | Description |
---|---|
WASI_Path | The location of the WASM program file. |
Parser | Specify the name of a parser to interpret the entry as a structured message. |
Accessible_Paths | Specify the whitelist of paths that the WASM programs are able to access. |
Interval_Sec | Polling interval (seconds). |
Interval_NSec | Polling interval (nanoseconds). |
Wasm_Heap_Size | Size of the heap for Wasm execution. Review unit sizes for allowed values. |
Wasm_Stack_Size | Size of the stack for Wasm execution. Review unit sizes for allowed values. |
Buf_Size | Size of the buffer (check unit sizes for allowed values). |
Oneshot | Only run once at startup. This allows collection of data prior to Fluent Bit's startup (bool, default: false). |
Threaded | Indicates whether to run this input in its own thread. Default: false. |
Here is a configuration example. in_exec_wasi can handle a parser. To retrieve structured data from the WASM program, you have to create a parser.conf:

Note that `Time_Format` should match the format of the timestamp you are using. In this document, we assume that the WASM program writes JSON-style strings to stdout.
Then, you can specify the above parsers.conf in the main fluent-bit configuration:
The disk input plugin gathers information about the disk throughput of the running system at a given interval of time and reports it.
The Disk I/O metrics plugin creates metrics that are log-based, such as JSON payload. For Prometheus-based metrics, see the Node Exporter Metrics input plugin.
The plugin supports the following configuration parameters:
Key | Description | Default |
---|---|---|
Interval_Sec | Polling interval (seconds). | 1 |
Interval_NSec | Polling interval (nanoseconds). | 0 |
Dev_Name | Device name to limit the target (e.g. sda). If not set, in_disk gathers information from all disks and partitions. | all disks |
Threaded | Indicates whether to run this input in its own thread. | false |
In order to get disk usage from your system, you can run the plugin from the command line or through the configuration file:
In your main configuration file append the following Input & Output sections:
Note: Total interval (sec) = Interval_Sec + (Interval_Nsec / 1000000000).
e.g. 1.5s = 1s + 500000000ns
The dummy input plugin generates dummy events. It is useful for testing, debugging, benchmarking and getting started with Fluent Bit.
The plugin supports the following configuration parameters:
Key | Description | Default |
---|---|---|
Dummy | Dummy JSON record. | {"message":"dummy"} |
Metadata | Dummy JSON metadata. | {} |
Start_time_sec | Dummy base timestamp, in seconds. | 0 |
Start_time_nsec | Dummy base timestamp, in nanoseconds. | 0 |
Rate | Rate at which messages are generated, expressed in how many times per second. | 1 |
Interval_sec | Set the time interval, in seconds, at which every message is generated. If set, Rate configuration is ignored. | 0 |
Interval_nsec | Set the time interval, in nanoseconds, at which every message is generated. If set, Rate configuration is ignored. | 0 |
Samples | If set, the number of events will be limited. For example, if Samples=3, the plugin generates only three events and stops. | none |
Copies | Number of messages to generate each time they are generated. | 1 |
Flush_on_startup | If set to true, the first dummy event is generated at startup. | false |
Threaded | Indicates whether to run this input in its own thread. | false |
You can run the plugin from the command line or through the configuration file:
In your main configuration file append the following Input & Output sections:
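A minimal sketch:

```
[INPUT]
    Name  dummy
    Dummy {"message":"custom dummy"}

[OUTPUT]
    Name   stdout
    Match  *
```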
The collectd input plugin allows you to receive datagrams from the collectd service.
The plugin supports the following configuration parameters:
Key | Description | Default |
---|---|---|
Listen | Set the address to listen to. | 0.0.0.0 |
Port | Set the port to listen to. | 25826 |
TypesDB | Set the data specification file. | /usr/share/collectd/types.db |
Threaded | Indicates whether to run this input in its own thread. | false |
Here is a basic configuration example.
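A minimal sketch using the defaults documented above:

```
[INPUT]
    Name    collectd
    Listen  0.0.0.0
    Port    25826
    TypesDB /usr/share/collectd/types.db

[OUTPUT]
    Name   stdout
    Match  *
```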
With this configuration, Fluent Bit listens to `0.0.0.0:25826`, and outputs incoming datagram packets to stdout.
You must set the same types.db files that your collectd server uses. Otherwise, Fluent Bit may not be able to interpret the payload properly.
The cpu input plugin measures the CPU usage of a process or, by default, the whole system (considering each CPU core). It reports values as percentages at every configured interval of time. At the moment this plugin is only available for Linux.

The following tables describe the information generated by the plugin. The keys below represent the data used by the overall system; all values associated with the keys are in percentage units (0 to 100%):
The CPU metrics plugin creates metrics that are log-based, such as JSON payload. For Prometheus-based metrics, see the Node Exporter Metrics input plugin.
key | description |
---|---|
cpu_p | CPU usage of the overall system; this value is the summation of time spent in user and kernel space. The result takes into consideration the number of CPU cores in the system. |
user_p | CPU usage in User mode; in short, the CPU usage by user space programs. The result takes into consideration the number of CPU cores in the system. |
system_p | CPU usage in Kernel mode; in short, the CPU usage by the Kernel. The result takes into consideration the number of CPU cores in the system. |
In addition to the keys reported in the above table, a similar content is created per CPU core. The cores are listed from 0 to N as the Kernel reports:
key | description |
---|---|
cpuN.p_cpu | Represents the total CPU usage by core N. |
cpuN.p_user | Total CPU spent in user mode or user space programs associated with this core. |
cpuN.p_system | Total CPU spent in system or kernel mode associated with this core. |
The plugin supports the following configuration parameters:
Key | Description | Default |
---|---|---|
Interval_Sec | Polling interval in seconds. | 1 |
Interval_NSec | Polling interval in nanoseconds. | 0 |
PID | Specify the ID (PID) of a running process in the system. By default the plugin monitors the whole system, but if this option is set, it will only monitor the given process ID. | |
Threaded | Indicates whether to run this input in its own thread. | false |
In order to get the statistics of the CPU usage of your system, you can run the plugin from the command line or through the configuration file:
As described above, the CPU input plugin gathers the overall usage every second and flushes the information to the output on the fifth second. In this example we used the stdout plugin to demonstrate the output records. In a real use case you may want to flush this information to some central aggregator such as Fluentd or Elasticsearch.
In your main configuration file append the following Input & Output sections:
The exec input plugin allows you to execute an external program and collect event logs from it.
WARNING: Because this plugin invokes commands via a shell, its inputs are subject to shell metacharacter substitution. Careless use of untrusted input in command arguments could lead to malicious command execution.
This plugin will not function in all the distroless production images as it needs a functional `/bin/sh`, which is not present. The debug images use the same binaries, so even though they have a shell, there is no support for this plugin as it is compiled out.
The plugin supports the following configuration parameters:
Key | Description |
---|---|
Command | The command to execute, passed to popen(...) without any additional escaping or processing. May include pipelines, redirection, command-substitution, etc. |
Parser | Specify the name of a parser to interpret the entry as a structured message. |
Interval_Sec | Polling interval (seconds). |
Interval_NSec | Polling interval (nanoseconds). |
Buf_Size | Size of the buffer (check unit sizes for allowed values). |
Oneshot | Only run once at startup. This allows collection of data prior to Fluent Bit's startup (bool, default: false). |
Exit_After_Oneshot | Exit as soon as the one-shot command exits. This allows the exec plugin to be used as a wrapper for another command, sending the target command's output to any Fluent Bit sink(s) and then exiting. (bool, default: false) |
Propagate_Exit_Code | When exiting due to Exit_After_Oneshot, cause Fluent Bit to exit with the exit code of the command run by this plugin. Follows shell conventions for exit code propagation. (bool, default: false) |
Threaded | Indicates whether to run this input in its own thread. Default: false. |
You can run the plugin from the command line or through the configuration file:
The following example will read events from the output of ls.
In your main configuration file append the following Input & Output sections:
To use `fluent-bit` with the `exec` plugin to wrap another command, use the `Exit_After_Oneshot` and `Propagate_Exit_Code` options, e.g.:
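A sketch of a wrapper configuration (the command is illustrative and deliberately exits with status 1, matching the behavior described below):

```
[INPUT]
    Name                exec
    Command             echo "hello from the wrapped command"; exit 1
    Oneshot             true
    Exit_After_Oneshot  true
    Propagate_Exit_Code true

[OUTPUT]
    Name   stdout
    Match  *
```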
`fluent-bit` will output the collected record(s) and then exit with exit code 1.

Translation of command exit code(s) to the `fluent-bit` exit code follows the usual shell rules for exit code handling. Like with a shell, there is no way to differentiate between the command exiting on a signal and the shell exiting on a signal, and no way to differentiate between normal exits with codes greater than 125 and abnormal or signal exits reported by fluent-bit or the shell. Wrapped commands should use exit codes between 0 and 125 inclusive to allow reliable identification of normal exit. If the command is a pipeline, the exit code will be the exit code of the last command in the pipeline unless overridden by shell options.
By default the `exec` plugin emits one message per command output line, with a single field `exec` containing the full message. Use the `Parser` directive to specify the name of a parser configuration to use to process the command input.
Take great care with shell quoting and escaping when wrapping commands. A script can ruin your day if someone passes it the argument `$(rm -rf /my/important/files; echo "deleted your stuff!")`.
The above script would be safer if written with:
... but it's generally best to avoid dynamically generating the command or handling untrusted arguments to it at all.
The HTTP input plugin allows you to send custom records to an HTTP endpoint.
The HTTP input plugin supports TLS/SSL; for more details about the available properties and general configuration, please refer to the Transport Security section of the documentation.

The HTTP input plugin will accept and automatically handle gzipped content as of v2.2.1, as long as the header Content-Encoding: gzip is set on the received data.

The plugin supports the following configuration parameters:

Key | Description | Default |
---|---|---|
listen | The address to listen on. | 0.0.0.0 |
port | The port for Fluent Bit to listen on. | 9880 |
tag_key | Specify the key name to overwrite a tag. If set, the tag will be overwritten by a value of the key. | |
buffer_max_size | Specify the maximum buffer size in KB to receive a JSON message. | 4M |
buffer_chunk_size | This sets the chunk size for incoming JSON messages. These chunks are then stored/managed in the space available by buffer_max_size. | 512K |
successful_response_code | Allows setting a successful response code. | 201 |
success_header | Add an HTTP header key/value pair on success. Multiple headers can be set. | |
threaded | Indicates whether to run this input in its own thread. | false |
The http input plugin allows Fluent Bit to open up an HTTP port that you can then route data to in a dynamic way. This plugin supports dynamic tags which allow you to send data with different tags through the same input. An example video and curl message can be seen below
The tag for the HTTP input plugin is set by adding the tag to the end of the request URL. This tag is then used to route the event through the system. For example, in the following curl message the tag set is app.log, because the end path is /app_log:
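A sketch of such a request, assuming the plugin is listening on the default port 9880:

```
curl -X POST -H "Content-Type: application/json" -d '{"key":"value"}' http://localhost:9880/app_log
```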
If you do not set the tag, `http.0` is automatically used. If you have multiple HTTP inputs then they will follow a pattern of `http.N`, where N is an integer representing the input.

The tag_key configuration option allows specifying the key name that will be used to overwrite the tag. The tag's value will be replaced with the value associated with the specified key. For example, if tag_key is set to "custom_tag" and the log event contains a JSON field with the key "custom_tag", Fluent Bit will use the value of that field as the new tag for routing the event through the system.
The `success_header` parameter allows setting multiple HTTP headers on success. The format is:
The Kafka input plugin allows subscribing to one or more Kafka topics to collect messages from an Apache Kafka service. This plugin uses the official librdkafka C client library (built-in dependency).

Key | Description | default |
---|---|---|
brokers | Single or multiple list of Kafka Brokers, e.g: 192.168.1.3:9092, 192.168.1.4:9092. | |
topics | Single entry or list of topics separated by comma (,) that Fluent Bit will subscribe to. | |
format | Serialization format of the messages. If set to "json", the payload will be parsed as JSON. | none |
client_id | Client id passed to librdkafka. | |
group_id | Group id passed to librdkafka. | fluent-bit |
poll_ms | Kafka brokers polling interval in milliseconds. | 500 |
Buffer_Max_Size | Specify the maximum size of buffer per cycle to poll Kafka messages from subscribed topics. To increase throughput, specify a larger size. | 4M |
rdkafka.{property} | {property} can be any librdkafka property. | |
threaded | Indicates whether to run this input in its own thread. | false |
In order to subscribe/collect messages from Apache Kafka, you can run the plugin from the command line or through the configuration file:
The kafka plugin can read parameters through the -p argument (property), e.g:
In your main configuration file append the following Input & Output sections:
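A minimal sketch (the broker address and topic are illustrative):

```
[INPUT]
    Name    kafka
    Brokers 192.168.1.3:9092
    Topics  some-topic

[OUTPUT]
    Name   stdout
    Match  *
```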
The Fluent Bit source repository contains a full example of using Fluent Bit to process Kafka records:
The above will connect to the broker listening on `kafka-broker:9092` and subscribe to the `fb-source` topic, polling for new messages every 100 milliseconds.

Since the payload will be in json format, we ask the plugin to automatically parse the payload with `format json`.

Every message received is then processed with `kafka.lua` and sent back to the `fb-sink` topic of the same broker.

The example can be executed locally with `make start` in the `examples/kafka_filter` directory (docker/compose is used).
Forward is the protocol used by Fluent Bit and Fluentd to route messages between peers. This plugin implements the input service to listen for Forward messages.
The plugin supports the following configuration parameters:
Key | Description | Default |
---|---|---|
Listen | Listener network interface. | 0.0.0.0 |
Port | TCP port to listen for incoming connections. | 24224 |
Unix_Path | Specify the path to a unix socket to receive Forward messages. | |
Unix_Perm | Set the permission of the unix socket file. | |
Buffer_Max_Size | Specify the maximum buffer memory size used to receive a Forward message. The value must be according to the Unit Size specification. | 6144000 |
Buffer_Chunk_Size | By default the buffer to store the incoming Forward messages does not allocate the maximum memory allowed; instead it allocates memory when required. The rounds of allocations are set by Buffer_Chunk_Size. The value must be according to the Unit Size specification. | 1024000 |
Tag_Prefix | Prefix incoming tag with the defined value. | |
Tag | Override the tag of the forwarded events with the defined value. | |
Shared_Key | Shared key for secure forward authentication. | |
Self_Hostname | Hostname for secure forward authentication. | |
Security.Users | Specify the username and password pairs for secure forward authentication. | |
Threaded | Indicates whether to run this input in its own thread. | false |
In order to receive Forward messages, you can run the plugin from the command line or through the configuration file as shown in the following examples.
From the command line you can let Fluent Bit listen for Forward messages with the following options:
By default the service will listen on all interfaces (0.0.0.0) through TCP port 24224; optionally you can change this directly, e.g:

In the example, Forward messages will only arrive through the network interface with the 192.168.3.2 address and TCP port 9090.
In your main configuration file append the following Input & Output sections:
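A minimal sketch using the documented defaults:

```
[INPUT]
    Name   forward
    Listen 0.0.0.0
    Port   24224

[OUTPUT]
    Name   stdout
    Match  *
```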
Since Fluent Bit v3, in_forward can handle the secure forward protocol.

For user-password authentication, you need to specify at least one pair in `security.users`. For shared-key authentication, you need to specify `shared_key` in both the forward output and the forward input. `self_hostname` cannot be set to the same hostname on both the fluent server and client.
The head input plugin allows reading events from the head of a file. Its behavior is similar to the head command.
The plugin supports the following configuration parameters:
Key | Description |
---|---|
File | Absolute path to the target file, e.g: /proc/uptime |
Buf_Size | Buffer size to read the file. |
Interval_Sec | Polling interval (seconds). |
Interval_NSec | Polling interval (nanoseconds). |
Add_Path | If enabled, the filepath is appended to each record. Default value is false. |
Key | Rename a key. Default: head. |
Lines | Line number to read. If the number N is set, in_head reads the first N lines, like head(1) -n. |
Split_line | If enabled, in_head generates a key-value pair per line. |
Threaded | Indicates whether to run this input in its own thread. Default: false. |
This mode is useful to get a specific line. This is an example to get CPU frequency from /proc/cpuinfo.
/proc/cpuinfo is a special file to get cpu information.
Cpu frequency is "cpu MHz : 2791.009". We can get the line with this configuration file.
Output is
In order to read the head of a file, you can run the plugin from the command line or through the configuration file:
The following example will read events from the /proc/uptime file, tag the records with the uptime name and flush them back to the stdout plugin:
In your main configuration file append the following Input & Output sections:
Note: Total interval (sec) = Interval_Sec + (Interval_Nsec / 1000000000).
e.g. 1.5s = 1s + 500000000ns
The health input plugin allows you to check how healthy a TCP server is. It does the check by issuing a TCP connection at a given interval of time.
The plugin supports the following configuration parameters:
Key | Description |
---|---|
Host | Name of the target host or IP address to check. |
Port | TCP port where to perform the connection check. |
Interval_Sec | Interval in seconds between the service checks. Default value is 1. |
Internal_Nsec | Specify a nanoseconds interval for service checks; it works in conjunction with the Interval_Sec configuration key. Default value is 0. |
Alert | If enabled, it will only generate messages if the target TCP service is down. By default this option is disabled. |
Add_Host | If enabled, the hostname is appended to each record. Default value is false. |
Add_Port | If enabled, the port number is appended to each record. Default value is false. |
Threaded | Indicates whether to run this input in its own thread. |
In order to start performing the checks, you can run the plugin from the command line or through the configuration file:
From the command line you can let Fluent Bit generate the checks with the following options:
In your main configuration file append the following Input & Output sections:
Once Fluent Bit is running, you will see some random values in the output interface similar to this:
A plugin to collect Fluent Bit's own metrics
Fluent Bit exposes its own metrics to allow you to monitor the internals of your pipeline. The collected metrics can be processed similarly to those from the Prometheus Node Exporter input plugin. They can be sent to output plugins, including Prometheus Exporter, Prometheus Remote Write, or OpenTelemetry.
Important note: Metrics collected with Node Exporter Metrics flow through a separate pipeline from logs and current filters do not operate on top of metrics.
Key | Description | Default |
---|---|---|
scrape_interval | The rate at which metrics are collected from the host operating system. | 2 seconds |
scrape_on_start | Scrape metrics upon start, useful to avoid waiting for 'scrape_interval' for the first round of metrics. | false |
threaded | Indicates whether to run this input in its own thread. | false |
In the following configuration file, the input plugin fluentbit_metrics collects metrics every 2 seconds and exposes them through our Prometheus Exporter output plugin on HTTP/TCP port 2021.

You can test the exposed metrics by using curl:

Once Fluent Bit is running, you can send some messages using the fluent-cat tool (this tool is provided by Fluentd):

In the output we should see the following:
The mem input plugin gathers information about the memory and swap usage of the running system at a given interval of time and reports the total amount of memory and the amount of free memory available.
In order to get memory and swap usage from your system, you can run the plugin from the command line or through the configuration file:
You can enable the `threaded` setting to run this input in its own thread.
In your main configuration file append the following Input & Output sections:
The kmsg input plugin reads the Linux kernel log buffer from the beginning. It gets every record and parses its fields as priority, sequence, seconds, useconds, and message.
Key | Description | Default |
---|---|---|
In order to start getting the Linux Kernel messages, you can run the plugin from the command line or through the configuration file:
As described above, the plugin processes all messages that the Linux kernel reports; the output shown has been truncated for clarity.
In your main configuration file append the following Input & Output sections:
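A minimal sketch (the tag is illustrative):

```
[INPUT]
    Name kmsg
    Tag  kernel

[OUTPUT]
    Name  stdout
    Match *
```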
NGINX Exporter Metrics input plugin scrapes metrics from the NGINX stub status handler.
The plugin supports the following configuration parameters:
Key | Description | Default |
---|---|---|
NGINX must be configured with a location that invokes the stub status handler. Here is an example configuration with such a location:
Another metrics API is available with NGINX Plus. You must first configure a path in NGINX Plus.
From the command line you can let Fluent Bit generate the checks with the following options:
To gather metrics from the command line with the NGINX Plus REST API we need to turn on the nginx_plus property, like so:
In your main configuration file append the following Input & Output sections:
And for NGINX Plus API:
You can quickly test against the NGINX server running on localhost by invoking it directly from the command line:
This documentation is copied from the NGINX Prometheus Exporter metrics documentation on GitHub.
Note: for the state metric, the string values are converted to float64 using the following rule: "up" -> 1.0, "draining" -> 2.0, "down" -> 3.0, "unavail" -> 4.0, "checking" -> 5.0, "unhealthy" -> 6.0.

Note: for the state metric, the string values are converted to float64 using the following rule: "up" -> 1.0, "down" -> 3.0, "unavail" -> 4.0, "checking" -> 5.0, "unhealthy" -> 6.0.
Collects Kubernetes Events
Kubernetes exports its events through the API server. This input plugin allows you to retrieve those events as logs and have them processed through the pipeline.
Key | Description | Default |
---|---|---|
* As of Fluent-Bit 3.1, this plugin uses a Kubernetes watch stream instead of polling. In versions before 3.1, the interval parameters are used for reconnecting the Kubernetes watch stream.
This input always runs in its own thread.
The Kubernetes service account used by Fluent Bit must have `get`, `list`, and `watch` permissions to `namespaces` and `pods` for the namespaces watched in the `kube_namespace` configuration parameter. If you're using the helm chart to configure Fluent Bit, this role is included.
In the following configuration file, the input plugin kubernetes_events collects events every 5 seconds (default for interval_nsec) and exposes them through the standard output plugin on the console.
Event timestamps are created from the first existing field, based on the following order of precedence:
lastTimestamp
firstTimestamp
metadata.creationTimestamp
The MQTT input plugin allows retrieving messages/data from MQTT control packets over a TCP connection. The incoming data must be a JSON map.
The plugin supports the following configuration parameters:
Key | Description | Default |
---|---|---|
In order to start listening for MQTT messages, you can run the plugin from the command line or through the configuration file:
Since the MQTT input plugin lets Fluent Bit behave as a server, we need to dispatch some messages using an MQTT client; in the following example the mosquitto tool is used for this purpose.
The following command line will send a message to the MQTT input plugin:
In your main configuration file append the following Input & Output sections:
The netif input plugin gathers network traffic information from the running system at a given interval of time, and reports it.
The Network I/O Metrics plugin creates metrics that are log-based, such as JSON payload. For Prometheus-based metrics, see the Node Exporter Metrics input plugin.
The plugin supports the following configuration parameters:
Key | Description | Default |
---|---|---|
In order to monitor network traffic from your system, you can run the plugin from the command line or through the configuration file:
In your main configuration file append the following Input & Output sections:
Note: Total interval (sec) = Interval_Sec + (Interval_Nsec / 1000000000).
e.g. 1.5s = 1s + 500000000ns
An input plugin to ingest OTLP Logs, Metrics, and Traces
The OpenTelemetry input plugin allows you to receive data as per the OTLP specification, from various OpenTelemetry exporters, the OpenTelemetry Collector, or Fluent Bit's OpenTelemetry output plugin.
Our compliant implementation fully supports OTLP/HTTP and OTLP/GRPC. Note that the single `port` configured, which defaults to 4318, supports both transports.
Key | Description | default |
---|---|---|
Important note: Raw traces means that any data forwarded to the traces endpoint (`/v1/traces`) will be packed and forwarded as a log message, and will NOT be processed by Fluent Bit. The traces endpoint by default expects a valid protobuf encoded payload, but you can set the `raw_traces` option in case you want to get trace telemetry data to any of Fluent Bit's supported outputs.
Fluent Bit based on the OTLP desired protocol exposes the following endpoints for data ingestion:
OTLP/HTTP
Logs: /v1/logs
Metrics: /v1/metrics
Traces: /v1/traces

OTLP/GRPC
Logs: /opentelemetry.proto.collector.log.v1.LogService/Export
Metrics: /opentelemetry.proto.collector.metric.v1.MetricService/Export, /opentelemetry.proto.collector.metrics.v1.MetricsService/Export
Traces: /opentelemetry.proto.collector.trace.v1.TraceService/Export, /opentelemetry.proto.collector.traces.v1.TracesService/Export
The OpenTelemetry input plugin supports the following telemetry data types:
A sample config file to get started will look something like the following:
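A minimal sketch using the default port:

```
[INPUT]
    Name   opentelemetry
    Listen 0.0.0.0
    Port   4318

[OUTPUT]
    Name   stdout
    Match  *
```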
With the above configuration, Fluent Bit will listen on port `4318` for data. You can now send telemetry data to the endpoints `/v1/metrics`, `/v1/traces`, and `/v1/logs` for metrics, traces, and logs respectively.
A sample curl request to POST json encoded log data would be:
The process input plugin allows you to check how healthy a process is. It does so by performing a service check at a given interval of time specified by the user.
The Process metrics plugin creates metrics that are log-based, such as JSON payload. For Prometheus-based metrics, see the Node Exporter Metrics input plugin.
The plugin supports the following configuration parameters:
Key | Description |
---|---|
In order to start performing the checks, you can run the plugin from the command line or through the configuration file:
The following example will check the health of crond process.
In your main configuration file append the following Input & Output sections:
Once Fluent Bit is running, you will see the health of process:
The Podman Metrics input plugin allows you to collect metrics from podman containers, so they can be exposed later as, for example, Prometheus counters and gauges.
The podman metrics input plugin allows Fluent Bit to gather podman container metrics. The entire procedure of collecting the container list and gathering data associated with the containers is based on filesystem data. This plugin does not execute podman commands or send HTTP requests to the podman API; instead it reads the podman configuration file and the metrics exposed by the /sys and /proc filesystems.
This plugin supports and automatically detects both cgroups v1 and v2.
Example Curl message for one running container
Currently supported counters are:
container_memory_usage_bytes
container_memory_max_usage_bytes
container_memory_rss
container_spec_memory_limit_bytes
container_cpu_user_seconds_total
container_cpu_usage_seconds_total
container_network_receive_bytes_total
container_network_receive_errors_total
container_network_transmit_bytes_total
container_network_transmit_errors_total
This plugin mimics the naming convention of the docker metrics exposed by the cadvisor project.
A plugin based on Process Exporter to collect process-level metrics.
Prometheus Node Exporter is a popular way to collect system level metrics from operating systems, such as CPU / Disk / Network / Process statistics. Fluent Bit 2.2 onwards includes a process exporter plugin that builds off the Prometheus design to collect process level metrics without having to manage two separate processes or agents.
The Process Exporter Metrics plugin implements collecting of the various metrics available from the 3rd party implementation of Prometheus Process Exporter and these will be expanded over time as needed.
Important note: All metrics including those collected with this plugin flow through a separate pipeline from logs and current filters do not operate on top of metrics.
This plugin is only supported on Linux-based operating systems, as it uses the `proc` filesystem to access the relevant metrics. macOS does not have the `proc` filesystem, so this plugin will not work for it.
Key | Description | Default |
---|---|---|
Name | Description |
---|---|
This input always runs in its own thread.
In the following configuration file, the input plugin process_exporter_metrics collects metrics every 2 seconds and exposes them through our Prometheus Exporter output plugin on HTTP/TCP port 2021.
You can see the metrics by using curl:
When deploying Fluent Bit in a container you will need to specify additional settings to ensure that Fluent Bit has access to the process details. The following `docker` command deploys Fluent Bit with a specific mount path for `procfs` and settings enabled to ensure that Fluent Bit can collect from the host. These are then exposed over port 2021.
Development prioritises a subset of the available collectors in the third-party implementation of Prometheus Process Exporter; to request others, please open a GitHub issue using the following template: in_process_exporter_metrics.
A plugin based on Prometheus Node Exporter to collect system / host level metrics
Prometheus Node Exporter is a popular way to collect system level metrics from operating systems, such as CPU / Disk / Network / Process statistics. Fluent Bit 1.8.0 includes node exporter metrics plugin that builds off the Prometheus design to collect system level metrics without having to manage two separate processes or agents.
The initial release of Node Exporter Metrics contains a subset of collectors and metrics available from Prometheus Node Exporter and we plan to expand them over time.
Important note: Metrics collected with Node Exporter Metrics flow through a separate pipeline from logs and current filters do not operate on top of metrics.
This plugin is supported on Linux-based operating systems for the most part with macOS offering a reduced subset of metrics. The table below indicates which collector is supported on macOS.
Key | Description | Default |
---|---|---|
Note: The plugin top-level `scrape_interval` setting is the global default, with any custom settings for individual `scrape_intervals` then overriding just that specific metric scraping interval. Each `collector.xxx.scrape_interval` option only overrides the interval for that specific collector and updates the associated set of provided metrics.
The overridden intervals only change the collection interval, not the interval for publishing the metrics which is taken from the global setting. For example, if the global interval is set to 5s and an override interval of 60s is used then the published metrics will be reported every 5s but for the specific collector they will stay the same for 60s until it is collected again. This feature aims to help with down-sampling when collecting metrics.
The following table describes the available collectors as part of this plugin. All of them are enabled by default and respects the original metrics name, descriptions, and types from Prometheus Exporter, so you can use your current dashboards without any compatibility problem.
note: the Version column specifies the Fluent Bit version where the collector is available.
This input always runs in its own thread.
In the following configuration file, the input plugin node_exporter_metrics collects metrics every 2 seconds and exposes them through our Prometheus Exporter output plugin on HTTP/TCP port 2021.

You can test the exposed metrics by using curl:
When deploying Fluent Bit in a container you will need to specify additional settings to ensure that Fluent Bit has access to the host operating system. The following docker command deploys Fluent Bit with specific mount paths and settings enabled to ensure that Fluent Bit can collect from the host. These are then exposed over port 2021.
If you like dashboards for monitoring, Grafana is one of the preferred options. In our Fluent Bit source code repository, we have pushed a simple docker-compose example. Steps:

Now open your browser at http://127.0.0.1:3000. When asked for the credentials to access Grafana, just use the admin username and admin password.
Note that by default Grafana dashboard plots the data from the last 24 hours, so just change it to Last 5 minutes to see the recent data being collected.
Our current plugin implements a subset of the available collectors in the original Prometheus Node Exporter; if you would like us to prioritize a specific collector, please open a GitHub issue using the following template: in_node_exporter_metrics.
An input plugin to ingest payloads of Prometheus remote write
This input plugin allows you to ingest a payload in the Prometheus remote-write format, i.e. a remote write sender can transmit data to Fluent Bit.
Key | Description | default |
---|
A sample config file to get started will look something like the following:
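A minimal sketch (the listen address and port are illustrative; depending on your Fluent Bit version you may also need to configure the endpoint path explicitly):

```
[INPUT]
    Name   prometheus_remote_write
    Listen 0.0.0.0
    Port   8080

[OUTPUT]
    Name   stdout
    Match  *
```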
With the above configuration, Fluent Bit will listen on port `8080` for data. You can now send payloads in Prometheus remote write format to the endpoint `/api/prom/push`.

To communicate with TLS, you will need to use the tls-related parameters:
Now, you should be able to send data over TLS to the remote write input.
The serial input plugin allows retrieving messages/data from a serial interface.
Key | Description |
---|
In order to retrieve messages over the Serial interface, you can run the plugin from the command line or through the configuration file:
The following example loads the input serial plugin, setting a bitrate of 9600, listening on the /dev/tnt0 interface and using the custom tag data to route the messages.

The above interface (/dev/tnt0) is an emulation of a serial interface (more details at the bottom); for demonstration purposes we will write some messages to the other end of the interface, in this case /dev/tnt1, e.g:
In Fluent Bit you should see an output like this:
Now using the Separator configuration, we could send multiple messages at once (run this command after starting Fluent Bit):
In your main configuration file append the following Input & Output sections:
The following content is some extra information that will allow you to emulate a serial interface on your Linux system, so you can test this Serial input plugin locally in case you don't have such interface in your computer. The following procedure has been tested on Ubuntu 15.04 running a Linux Kernel 4.0.
Download the sources
Unpack and compile
Copy the new kernel module into the kernel modules directory
Load the module
You should see new serial ports in /dev/ (ls /dev/tnt*) Give appropriate permissions to the new serial ports:
When the module is loaded, it will interconnect the following virtual interfaces:
The splunk input plugin handles Splunk HTTP HEC requests.
In order to start performing the checks, you can run the plugin from the command line or through the configuration file.
The tag for the Splunk input plugin is set by adding the tag to the end of the request URL by default. This tag is then used to route the event through the system. The default behavior of the splunk input sets the tags for the following endpoints:
/services/collector
/services/collector/event
/services/collector/raw
The requests for these endpoints are interpreted as `services_collector`, `services_collector_event`, and `services_collector_raw`.

If you want to use other tags for multiple instances of the splunk input plugin, you have to specify the `tag` property on each of the splunk plugin configurations to prevent collisions in the data pipeline.
From the command line you can configure Fluent Bit to handle HTTP HEC requests with the following options:
In your main configuration file append the following Input & Output sections:
The stdin plugin supports retrieving a message stream from the standard input interface (stdin) of the Fluent Bit process. In order to use it, specify the plugin name as the input, e.g:
If the stdin stream is closed (end-of-file), the stdin plugin will instruct Fluent Bit to exit with success (0) after flushing any pending output.
If no parser is configured for the stdin plugin, it expects valid JSON input data in one of the following formats:
A JSON object with one or more key-value pairs: { "key": "value", "key2": "value2" }
A 2-element JSON array in the Fluent Bit event format, which may be:
[TIMESTAMP, { "key": "value" }]
where TIMESTAMP is a floating point value representing a timestamp in seconds; or
from Fluent Bit v2.1.0, [[TIMESTAMP, METADATA], { "key": "value" }]
where TIMESTAMP has the same meaning as above and METADATA is a JSON object.
Multi-line input JSON is supported.
Any input data that is not in one of the above formats will cause the plugin to log errors like:
To handle inputs in other formats, a parser must be explicitly specified in the configuration for the stdin plugin. See below for a sample configuration.
The Fluent Bit event timestamp will be set from the input record if the 2-element event input is used or a custom parser configuration supplies a timestamp. Otherwise the event timestamp will be set to the timestamp at which the record is read by the stdin plugin.
An input event timestamp may also be supplied. Replace test.sh
with:
Re-run the sample command. Note that the timestamps output by Fluent Bit are now one day old because Fluent Bit used the input message timestamp.
Additional metadata is also supported on Fluent Bit v2.1.0 and above by replacing the timestamp with a 2-element object, e.g.:
On older Fluent Bit versions records in this format will be discarded. Fluent Bit will log:
if the log level permits.
To capture inputs in other formats, specify a parser configuration for the stdin
plugin.
For example, if you want to read raw messages line-by-line and forward them, you could use a parser.conf that captures the whole message line, and then reference that parser in the parser clause of the stdin plugin in fluent-bit.conf:
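A minimal sketch of the two files, assuming a regex parser named line (a hypothetical name) that captures the whole line into a single field:

```
# parser.conf: capture the complete input line into the "line" field
[PARSER]
    Name    line
    Format  regex
    Regex   ^(?<line>.*)$

# fluent-bit.conf: use the parser above for the stdin input
[SERVICE]
    Parsers_File  parser.conf

[INPUT]
    Name    stdin
    Parser  line

[OUTPUT]
    Name    stdout
    Match   *
```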
Fluent Bit will now read each line and emit a single message for each input line:
In real-world deployments it is best to use a more realistic parser that splits messages into real fields and adds appropriate tags.
The plugin supports the following configuration parameters:
The random input plugin generates very simple random value samples using the device interface /dev/urandom; if that is not available, it will use a Unix timestamp as the value.
The plugin supports the following configuration parameters:
Key | Description |
---|---|
In order to start generating random samples, you can run the plugin from the command line or through the configuration file:
From the command line you can let Fluent Bit generate the samples with the following options:
In your main configuration file append the following Input & Output sections:
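A sketch of such a configuration, using the defaults described in the parameter table (unlimited samples, one per second):

```
[INPUT]
    # Emit one random value sample per second
    Name           random
    Samples        -1
    Interval_Sec   1
    Interval_Nsec  0

[OUTPUT]
    # Print the generated samples
    Name   stdout
    Match  *
```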
Once Fluent Bit is running, you will see the reports in the output interface similar to this:
Fluent Bit 1.9 includes additional metrics features to allow you to collect both logs and metrics with the same collector.
The initial release of the Prometheus Scrape plugin allows you to collect metrics from a Prometheus-based endpoint at a set interval. These metrics can then be routed to metric-capable output endpoints.
Key | Description | Default |
---|---|---|
If an endpoint exposes Prometheus metrics, we can specify a configuration to scrape it and then output the metrics. In the following example, we retrieve metrics from the HashiCorp Vault application.
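A sketch of such a configuration; the host, port, and metrics path are assumptions for a local Vault instance (Vault typically exposes Prometheus metrics at /v1/sys/metrics?format=prometheus on port 8200) and would need to match your deployment:

```
[INPUT]
    # Scrape a Prometheus-format endpoint every 10 seconds
    name             prometheus_scrape
    host             127.0.0.1
    port             8200
    tag              vault
    metrics_path     /v1/sys/metrics?format=prometheus
    scrape_interval  10s

[OUTPUT]
    # Print the scraped metrics for inspection
    name   stdout
    match  *
```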
Example Output
The Prometheus Remote Write input plugin supports TLS/SSL. For more details about the available properties and general configuration, please refer to the transport security section.
A better example to demonstrate how it works is a Bash script that generates messages and writes them to Fluent Bit. Write the following content in a file named test.sh:
Now let's start the script and Fluent Bit:
Key | Description | Default |
---|---|---|
Prio_Level | The log level to filter. The kernel log is dropped if its priority is more than prio_level. Allowed values are 0-8 (8 means all logs are saved). | 8 |
Threaded | Indicates whether to run this input in its own thread. | false |

Key | Description | Default |
---|---|---|
Host | Name of the target host or IP address to check. | localhost |
Port | Port of the target nginx service to connect to. | 80 |
Status_URL | The URL of the Stub Status Handler. | /status |
Nginx_Plus | Turn on NGINX plus mode. | true |
Threaded | Indicates whether to run this input in its own thread. | false |
Name | Type | Description | Labels |
---|---|---|---|
nginx_up | Gauge | Shows the status of the last metric scrape: 1 for a successful scrape and 0 for a failed one | [] |
nginx_connections_accepted | Counter | Accepted client connections. | [] |
nginx_connections_active | Gauge | Active client connections. | [] |
nginx_connections_handled | Counter | Handled client connections. | [] |
nginx_connections_reading | Gauge | Connections where NGINX is reading the request header. | [] |
nginx_connections_waiting | Gauge | Idle client connections. | [] |
nginx_connections_writing | Gauge | Connections where NGINX is writing the response back to the client. | [] |
nginx_http_requests_total | Counter | Total http requests. | [] |

Name | Type | Description | Labels |
---|---|---|---|
nginxplus_connections_accepted | Counter | Accepted client connections | [] |
nginxplus_connections_active | Gauge | Active client connections | [] |
nginxplus_connections_dropped | Counter | Dropped client connections | [] |
nginxplus_connections_idle | Gauge | Idle client connections | [] |
nginxplus_http_requests_total | Counter | Total http requests | [] |
nginxplus_http_requests_current | Gauge | Current http requests | [] |
nginxplus_ssl_handshakes | Counter | Successful SSL handshakes | [] |
nginxplus_ssl_handshakes_failed | Counter | Failed SSL handshakes | [] |
nginxplus_ssl_session_reuses | Counter | Session reuses during SSL handshake | [] |

Name | Type | Description | Labels |
---|---|---|---|
nginxplus_server_zone_processing | Gauge | Client requests that are currently being processed | server_zone |
nginxplus_server_zone_requests | Counter | Total client requests | server_zone |
nginxplus_server_zone_responses | Counter | Total responses sent to clients | code (the response status code: 1xx, 2xx, 3xx, 4xx and 5xx), server_zone |
nginxplus_server_zone_discarded | Counter | Requests completed without sending a response | server_zone |
nginxplus_server_zone_received | Counter | Bytes received from clients | server_zone |
nginxplus_server_zone_sent | Counter | Bytes sent to clients | server_zone |

Name | Type | Description | Labels |
---|---|---|---|
nginxplus_stream_server_zone_processing | Gauge | Client connections that are currently being processed | server_zone |
nginxplus_stream_server_zone_connections | Counter | Total connections | server_zone |
nginxplus_stream_server_zone_sessions | Counter | Total sessions completed | code (the response status code: 2xx, 4xx and 5xx), server_zone |
nginxplus_stream_server_zone_discarded | Counter | Connections completed without creating a session | server_zone |
nginxplus_stream_server_zone_received | Counter | Bytes received from clients | server_zone |
nginxplus_stream_server_zone_sent | Counter | Bytes sent to clients | server_zone |

Name | Type | Description | Labels |
---|---|---|---|
nginxplus_upstream_server_state | Gauge | Current state | server, upstream |
nginxplus_upstream_server_active | Gauge | Active connections | server, upstream |
nginxplus_upstream_server_limit | Gauge | Limit for connections which corresponds to the max_conns parameter of the upstream server. Zero value means there is no limit | server, upstream |
nginxplus_upstream_server_requests | Counter | Total client requests | server, upstream |
nginxplus_upstream_server_responses | Counter | Total responses sent to clients | code (the response status code: 1xx, 2xx, 3xx, 4xx and 5xx), server, upstream |
nginxplus_upstream_server_sent | Counter | Bytes sent to this server | server, upstream |
nginxplus_upstream_server_received | Counter | Bytes received from this server | server, upstream |
nginxplus_upstream_server_fails | Counter | Number of unsuccessful attempts to communicate with the server | server, upstream |
nginxplus_upstream_server_unavail | Counter | How many times the server became unavailable for client requests (state 'unavail') due to the number of unsuccessful attempts reaching the max_fails threshold | server, upstream |
nginxplus_upstream_server_header_time | Gauge | Average time to get the response header from the server | server, upstream |
nginxplus_upstream_server_response_time | Gauge | Average time to get the full response from the server | server, upstream |
nginxplus_upstream_keepalives | Gauge | Idle keepalive connections | upstream |
nginxplus_upstream_zombies | Gauge | Servers removed from the group but still processing active client requests | upstream |

Name | Type | Description | Labels |
---|---|---|---|
nginxplus_stream_upstream_server_state | Gauge | Current state | server, upstream |
nginxplus_stream_upstream_server_active | Gauge | Active connections | server, upstream |
nginxplus_stream_upstream_server_limit | Gauge | Limit for connections which corresponds to the max_conns parameter of the upstream server. Zero value means there is no limit | server, upstream |
nginxplus_stream_upstream_server_connections | Counter | Total number of client connections forwarded to this server | server, upstream |
nginxplus_stream_upstream_server_connect_time | Gauge | Average time to connect to the upstream server | server, upstream |
nginxplus_stream_upstream_server_first_byte_time | Gauge | Average time to receive the first byte of data | server, upstream |
nginxplus_stream_upstream_server_response_time | Gauge | Average time to receive the last byte of data | server, upstream |
nginxplus_stream_upstream_server_sent | Counter | Bytes sent to this server | server, upstream |
nginxplus_stream_upstream_server_received | Counter | Bytes received from this server | server, upstream |
nginxplus_stream_upstream_server_fails | Counter | Number of unsuccessful attempts to communicate with the server | server, upstream |
nginxplus_stream_upstream_server_unavail | Counter | How many times the server became unavailable for client connections (state 'unavail') due to the number of unsuccessful attempts reaching the max_fails threshold | server, upstream |
nginxplus_stream_upstream_zombies | Gauge | Servers removed from the group but still processing active client connections | upstream |

Name | Type | Description | Labels |
---|---|---|---|
nginxplus_location_zone_requests | Counter | Total client requests | location_zone |
nginxplus_location_zone_responses | Counter | Total responses sent to clients | code (the response status code: 1xx, 2xx, 3xx, 4xx and 5xx), location_zone |
nginxplus_location_zone_discarded | Counter | Requests completed without sending a response | location_zone |
nginxplus_location_zone_received | Counter | Bytes received from clients | location_zone |
nginxplus_location_zone_sent | Counter | Bytes sent to clients | location_zone |
Key | Description | Default |
---|---|---|
db | Set a database file to keep track of recorded Kubernetes events | |
db.sync | Set a database sync method. Values: extra, full, normal and off | normal |
interval_sec | Set the reconnect interval (seconds)* | 0 |
interval_nsec | Set the reconnect interval (sub seconds: nanoseconds)* | 500000000 |
kube_url | API Server end-point | https://kubernetes.default.svc |
kube_ca_file | Kubernetes TLS CA file | /var/run/secrets/kubernetes.io/serviceaccount/ca.crt |
kube_ca_path | Kubernetes TLS CA path | |
kube_token_file | Kubernetes authorization token file. | /var/run/secrets/kubernetes.io/serviceaccount/token |
kube_token_ttl | Kubernetes token TTL, until it is reread from the token file. | 10m |
kube_request_limit | Kubernetes limit parameter for events query; no limit applied when set to 0. | 0 |
kube_retention_time | Kubernetes retention time for events. | 1h |
kube_namespace | Kubernetes namespace to query events from. Gets events from all namespaces by default. | |
tls.debug | Debug level between 0 (nothing) and 4 (every detail). | 0 |
tls.verify | Enable or disable verification of TLS peer certificate. | On |
tls.vhost | Set optional TLS virtual host. | |
Key | Description | Default |
---|---|---|
Listen | Listener network interface. | 0.0.0.0 |
Port | TCP port where listening for connections. | 1883 |
Payload_Key | Specify the key where the payload key/value will be preserved. | none |
Threaded | Indicates whether to run this input in its own thread. | false |
Key | Description | Default |
---|---|---|
Interface | Specify the network interface to monitor, e.g. eth0 | |
Interval_Sec | Polling interval (seconds). | 1 |
Interval_NSec | Polling interval (nanoseconds). | 0 |
Verbose | If true, gather metrics precisely. | false |
Test_At_Init | If true, test whether the network interface is valid at initialization. | false |
Threaded | Indicates whether to run this input in its own thread. | false |
Key | Description | Default |
---|---|---|
listen | The network address to listen on. | 0.0.0.0 |
port | The port for Fluent Bit to listen for incoming connections. Note that as of Fluent Bit v3.0.2 this port is used for both transport OTLP/HTTP and OTLP/GRPC. | 4318 |
tag_key | Specify the key name to overwrite a tag. If set, the tag will be overwritten by a value of the key. | |
raw_traces | Route trace data as a log | false |
buffer_max_size | Specify the maximum buffer size in KB/MB/GB for the HTTP payload. | 4M |
buffer_chunk_size | Initial size and allocation strategy to store the payload (advanced users only) | 512K |
successful_response_code | Allows setting the successful response code. 200, 201 and 204 are supported. | 201 |
tag_from_uri | If true, the tag will be created from the URI, e.g. v1_metrics from /v1/metrics. | true |
threaded | Indicates whether to run this input in its own thread. | false |
Type | HTTP1/JSON | HTTP1/Protobuf | HTTP2/GRPC |
---|---|---|---|
Logs | Stable | Stable | Stable |
Metrics | Unimplemented | Stable | Stable |
Traces | Unimplemented | Stable | Stable |
Key | Description | Default |
---|---|---|
scrape_interval | Interval between each scrape of podman data (in seconds) | 30 |
scrape_on_start | Should this plugin scrape podman data after it is started | false |
path.config | Custom path to podman containers configuration file | /var/lib/containers/storage/overlay-containers/containers.json |
path.sysfs | Custom path to sysfs subsystem directory | /sys/fs/cgroup |
path.procfs | Custom path to proc subsystem directory | /proc |
threaded | Indicates whether to run this input in its own thread. | false |
Key | Description | Default |
---|---|---|
scrape_interval | The rate at which metrics are collected. | 5 seconds |
path.procfs | The mount point used to collect process information and metrics. Read-only access is enough. | /proc/ |
process_include_pattern | Regex to determine which process names are included in the metrics produced by this plugin. It is applied to all processes unless explicitly set. | .+ |
process_exclude_pattern | Regex to determine which process names are excluded from the metrics produced by this plugin. It is not applied unless explicitly set. | NULL |
metrics | Specify which process-level metrics are collected from the host operating system. These metrics depend on the /proc fs; the actual values will be read from /proc when needed. cpu, io, memory, state, context_switches, fd, start_time, thread_wchan and thread depend on procfs. | cpu,io,memory,state,context_switches,fd,start_time,thread_wchan,thread |

Metric | Description |
---|---|
cpu | Exposes CPU statistics from /proc. |
io | Exposes I/O statistics from /proc. |
memory | Exposes memory statistics from /proc. |
state | Exposes process state statistics from /proc. |
context_switches | Exposes context_switches statistics from /proc. |
fd | Exposes file descriptor statistics from /proc. |
start_time | Exposes start_time statistics from /proc. |
thread_wchan | Exposes thread_wchan from /proc. |
thread | Exposes thread statistics from /proc. |
Key | Description | Default |
---|---|---|
scrape_interval | The rate at which metrics are collected from the host operating system | 5 seconds |
path.procfs | The mount point used to collect process information and metrics | /proc/ |
path.sysfs | The path in the filesystem used to collect system metrics | /sys/ |
collector.cpu.scrape_interval | The rate in seconds at which cpu metrics are collected from the host operating system. If a value greater than 0 is used then it overrides the global default, otherwise the global default is used. | 0 seconds |
collector.cpufreq.scrape_interval | The rate in seconds at which cpufreq metrics are collected from the host operating system. If a value greater than 0 is used then it overrides the global default, otherwise the global default is used. | 0 seconds |
collector.meminfo.scrape_interval | The rate in seconds at which meminfo metrics are collected from the host operating system. If a value greater than 0 is used then it overrides the global default, otherwise the global default is used. | 0 seconds |
collector.diskstats.scrape_interval | The rate in seconds at which diskstats metrics are collected from the host operating system. If a value greater than 0 is used then it overrides the global default, otherwise the global default is used. | 0 seconds |
collector.filesystem.scrape_interval | The rate in seconds at which filesystem metrics are collected from the host operating system. If a value greater than 0 is used then it overrides the global default, otherwise the global default is used. | 0 seconds |
collector.uname.scrape_interval | The rate in seconds at which uname metrics are collected from the host operating system. If a value greater than 0 is used then it overrides the global default, otherwise the global default is used. | 0 seconds |
collector.stat.scrape_interval | The rate in seconds at which stat metrics are collected from the host operating system. If a value greater than 0 is used then it overrides the global default, otherwise the global default is used. | 0 seconds |
collector.time.scrape_interval | The rate in seconds at which time metrics are collected from the host operating system. If a value greater than 0 is used then it overrides the global default, otherwise the global default is used. | 0 seconds |
collector.loadavg.scrape_interval | The rate in seconds at which loadavg metrics are collected from the host operating system. If a value greater than 0 is used then it overrides the global default, otherwise the global default is used. | 0 seconds |
collector.vmstat.scrape_interval | The rate in seconds at which vmstat metrics are collected from the host operating system. If a value greater than 0 is used then it overrides the global default, otherwise the global default is used. | 0 seconds |
collector.thermal_zone.scrape_interval | The rate in seconds at which thermal_zone metrics are collected from the host operating system. If a value greater than 0 is used then it overrides the global default, otherwise the global default is used. | 0 seconds |
collector.filefd.scrape_interval | The rate in seconds at which filefd metrics are collected from the host operating system. If a value greater than 0 is used then it overrides the global default, otherwise the global default is used. | 0 seconds |
collector.nvme.scrape_interval | The rate in seconds at which nvme metrics are collected from the host operating system. If a value greater than 0 is used then it overrides the global default, otherwise the global default is used. | 0 seconds |
collector.processes.scrape_interval | The rate in seconds at which system-level process metrics are collected from the host operating system. If a value greater than 0 is used then it overrides the global default, otherwise the global default is used. | 0 seconds |
metrics | Specify which metrics are collected from the host operating system. These metrics depend on the /proc or /sys fs; the actual values will be read from /proc or /sys when needed. cpu, cpufreq, meminfo, diskstats, filesystem, stat, loadavg, vmstat, netdev, and filefd depend on procfs. cpufreq metrics depend on sysfs. | "cpu,cpufreq,meminfo,diskstats,filesystem,uname,stat,time,loadavg,vmstat,netdev,filefd" |
filesystem.ignore_mount_point_regex | Specify the regex for the mount points to prevent collection of/ignore. | `^/(dev |
filesystem.ignore_filesystem_type_regex | Specify the regex for the filesystem types to prevent collection of/ignore. | `^(autofs |
diskstats.ignore_device_regex | Specify the regex for the diskstats to prevent collection of/ignore. | `^(ram |
systemd_service_restart_metrics | Determines if the collector will include service restart metrics | false |
systemd_unit_start_time_metrics | Determines if the collector will include unit start time metrics | false |
systemd_include_service_task_metrics | Determines if the collector will include service task metrics | false |
systemd_include_pattern | Regex to determine which units are included in the metrics produced by the systemd collector. It is not applied unless explicitly set. | |
systemd_exclude_pattern | Regex to determine which units are excluded from the metrics produced by the systemd collector | `.+\.(automount |
Name | Description | OS | Version |
---|---|---|---|
cpu | Exposes CPU statistics. | Linux,macOS | v1.8 |
cpufreq | Exposes CPU frequency statistics. | Linux | v1.8 |
diskstats | Exposes disk I/O statistics. | Linux,macOS | v1.8 |
filefd | Exposes file descriptor statistics from /proc/sys/fs/file-nr. | Linux | v1.8.2 |
filesystem | Exposes filesystem statistics from /proc/*/mounts. | Linux | v2.0.9 |
loadavg | Exposes load average. | Linux,macOS | v1.8 |
meminfo | Exposes memory statistics. | Linux,macOS | v1.8 |
netdev | Exposes network interface statistics such as bytes transferred. | Linux,macOS | v1.8.2 |
stat | Exposes various statistics from /proc/stat. This includes boot time, forks, and interruptions. | Linux | v1.8 |
time | Exposes the current system time. | Linux | v1.8 |
uname | Exposes system information as provided by the uname system call. | Linux,macOS | v1.8 |
vmstat | Exposes statistics from /proc/vmstat. | Linux | v1.8.2 |
systemd collector | Exposes statistics from systemd. | Linux | v2.1.3 |
thermal_zone | Exposes thermal statistics from /sys/class/thermal/thermal_zone/*. | Linux | v2.2.1 |
nvme | Exposes nvme statistics from /proc. | Linux | v2.2.0 |
processes | Exposes processes statistics from /proc. | Linux | v2.2.0 |
listen | The address to listen on | 0.0.0.0 |
port | The port for Fluent Bit to listen on | 8080 |
buffer_max_size | Specify the maximum buffer size in KB to receive a JSON message. | 4M |
buffer_chunk_size | This sets the chunk size for incoming JSON messages. These chunks are then stored/managed in the space available by buffer_max_size. | 512K |
successful_response_code | It allows to set successful response code. | 201 |
tag_from_uri | If true, tag will be created from uri, e.g. api_prom_push from /api/prom/push, and any tag specified in the config will be ignored. If false then a tag must be provided in the config for this input. | true |
uri | Specify an optional HTTP URI for the target web server listening for prometheus remote write payloads, e.g: /api/prom/push |
threaded | Indicates whether to run this input in its own thread. | false |
File | Absolute path to the device entry, e.g: /dev/ttyS0 |
Bitrate | The bitrate for the communication, e.g: 9600, 38400, 115200, etc |
Min_Bytes | The serial interface will expect at least Min_Bytes to be available before processing the message (default: 1) |
Separator | Allows you to specify a separator string that is used to determine when a message ends. |
Format | Specify the format of the incoming data stream. The only option available is 'json'. Note that Format and Separator cannot be used at the same time. |
Threaded | Indicates whether to run this input in its own thread (default: false). |
Key | Description | Default |
---|---|---|
listen | The address to listen on | 0.0.0.0 |
port | The port for Fluent Bit to listen on | 9880 |
tag_key | Specify the key name to overwrite a tag. If set, the tag will be overwritten by a value of the key. |
buffer_max_size | Specify the maximum buffer size in KB to receive a JSON message. | 4M |
buffer_chunk_size | This sets the chunk size for incoming JSON messages. These chunks are then stored/managed in the space available by buffer_max_size. | 512K |
successful_response_code | It allows to set successful response code. | 201 |
splunk_token | Specify a Splunk token for HTTP HEC authentication. If multiple tokens are specified (with commas and no spaces), usage will be divided across each of the tokens. |
store_token_in_metadata | Store Splunk HEC tokens in the Fluent Bit metadata. If set false, they will be stored as normal key-value pairs in the record data. | true |
splunk_token_key | Use the specified key for storing the Splunk token for HTTP HEC. This is only effective when | @splunk_token |
Threaded | Indicates whether to run this input in its own thread. | false |
Buffer_Size | Set the buffer size to read data. This value is used to increase the buffer size. | 16k |
Parser | The name of the parser to invoke instead of the default JSON input parser |
Threaded | Indicates whether to run this input in its own thread. | false |
Samples | If set, it will only generate a specific number of samples. By default this value is set to -1, which will generate unlimited samples. |
Interval_Sec | Interval in seconds between samples generation. Default value is 1. |
Interval_Nsec | Specify a nanoseconds interval for samples generation, it works in conjunction with the Interval_Sec configuration key. Default value is 0. |
Threaded | Indicates whether to run this input in its own thread (default: false). |
host | The host of the prometheus metric endpoint that you want to scrape |
port | The port of the prometheus metric endpoint that you want to scrape |
scrape_interval | The interval to scrape metrics | 10s |
metrics_path | The metrics URI endpoint, which must start with a forward slash. Note: parameters can also be added to the path. | /metrics |
threaded | Indicates whether to run this input in its own thread. | false |
Key | Description |
---|---|
Proc_Name | Name of the target process to check. |
Interval_Sec | Interval in seconds between the service checks. Default value is 1. |
Interval_Nsec | Specify a nanoseconds interval for service checks; it works in conjunction with the Interval_Sec configuration key. Default value is 0. |
Alert | If enabled, it will only generate messages if the target process is down. By default this option is disabled. |
Fd | If enabled, the number of file descriptors is appended to each record. Default value is true. |
Mem | If enabled, memory usage of the process is appended to each record. Default value is true. |
Threaded | Indicates whether to run this input in its own thread. Default: false. |
The systemd input plugin allows you to collect log messages from the Journald daemon on Linux environments.
The plugin supports the following configuration parameters:
Key | Description | Default |
---|---|---|
In order to receive Systemd messages, you can run the plugin from the command line or through the configuration file:
From the command line you can let Fluent Bit listen for Systemd messages with the following options:
In the example above we are collecting all messages coming from the Docker service.
In your main configuration file append the following Input & Output sections:
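A sketch of such a configuration, matching the Docker service example mentioned above; the tag is an illustrative choice:

```
[INPUT]
    # Read Journald entries, filtered to the Docker unit
    Name            systemd
    Tag             host.*
    Systemd_Filter  _SYSTEMD_UNIT=docker.service

[OUTPUT]
    # Print the collected journal records
    Name   stdout
    Match  *
```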
The tcp input plugin allows you to retrieve structured JSON or raw messages over a TCP network interface (TCP port).
The plugin supports the following configuration parameters:
Key | Description | Default |
---|---|---|
In order to receive JSON messages over TCP, you can run the plugin from the command line or through the configuration file:
From the command line you can let Fluent Bit listen for JSON messages with the following options:
By default the service will listen on all interfaces (0.0.0.0) through TCP port 5170; optionally you can change this directly, e.g.:
In this example, the JSON messages will only arrive through the network interface at address 192.168.3.2 and TCP port 9090.
In your main configuration file append the following Input & Output sections:
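A minimal sketch, using the defaults described above (all interfaces, port 5170, JSON payloads):

```
[INPUT]
    # Accept JSON messages over TCP
    Name    tcp
    Listen  0.0.0.0
    Port    5170
    Format  json

[OUTPUT]
    # Print every received record
    Name   stdout
    Match  *
```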
Once Fluent Bit is running, you can send some messages using netcat:
In Fluent Bit we should see the following output:
When receiving payloads in JSON format, there are high performance penalties. Parsing JSON is a very expensive task, so you can expect CPU usage to increase under high-load environments.
To get faster data ingestion, consider using the option Format none to avoid JSON parsing if it is not needed.
The syslog input plugin allows you to collect Syslog messages through a Unix socket server (UDP or TCP) or over the network using TCP or UDP.
The plugin supports the following configuration parameters:
Key | Description | Default |
---|---|---|
When using Syslog input plugin, Fluent Bit requires access to the parsers.conf file, the path to this file can be specified with the option -R or through the Parsers_File key on the [SERVICE] section (more details below).
When udp or unix_udp is used, the buffer size to receive messages is configurable only through the Buffer_Chunk_Size option which defaults to 32kb.
In order to receive Syslog messages, you can run the plugin from the command line or through the configuration file:
From the command line you can let Fluent Bit listen for Syslog messages with the following options:
By default the service will create and listen for Syslog messages on the Unix socket /tmp/in_syslog.
In your main configuration file append the following Input & Output sections:
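A minimal sketch, using the default Unix socket mode and pointing the service at the parsers file as noted above (the parsers.conf path must match your installation):

```
[SERVICE]
    # The syslog input needs the syslog parsers to be loaded
    Parsers_File  parsers.conf

[INPUT]
    # Listen for Syslog messages on the default Unix socket
    Name    syslog
    Mode    unix_udp
    Path    /tmp/in_syslog

[OUTPUT]
    # Print the parsed syslog records
    Name   stdout
    Match  *
```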
Once Fluent Bit is running, you can send some messages using the logger tool:
In Fluent Bit we should see the following output:
The following content aims to provide configuration examples for different use cases to integrate Fluent Bit and make it listen for Syslog messages from your systems.
Put the following content in your configuration file:
then start Fluent Bit.
Add a new file to your rsyslog config rules called 60-fluent-bit.conf inside the directory /etc/rsyslog.d/ and add the following content:
then make sure to restart your rsyslog daemon:
Put the following content in your fluent-bit.conf file:
then start Fluent Bit.
Add a new file to your rsyslog config rules called 60-fluent-bit.conf inside the directory /etc/rsyslog.d/ and place the following content:
Make sure that the socket file is readable by rsyslog (tweak the Unix_Perm
option shown above).
The winevtlog input plugin allows you to read the Windows Event Log with the new API from winevt.h.
The plugin supports the following configuration parameters:
Key | Description | Default |
---|---|---|
Note that if you do not set db, the plugin will tail channels on each startup.
Here is a minimum configuration example.
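A sketch of a minimum configuration; the channel names and database path are placeholders you would adjust for your environment:

```
[INPUT]
    # Read two example Windows Event Log channels once per second
    Name          winevtlog
    Channels      Setup,Windows PowerShell
    Interval_Sec  1
    DB            winevtlog.sqlite

[OUTPUT]
    # Print the collected events
    Name   stdout
    Match  *
```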
Note that some Windows Event Log channels (like Security) require administrator privileges for reading. In this case, you need to run Fluent Bit as an administrator.
The default value of Read_Limit_Per_Cycle is 512KiB. Note that 512KiB (= 512 * 1024 bytes) is not the same as 512KB (= 512 * 1000 bytes). To process more events per second with this plugin, specify a value larger than 512KiB.
The Event_Query
parameter can be used to specify the XML query for filtering Windows EventLog during collection. The supported query types are XPath and XML Query. For further details, please refer to the MSDN doc.
If you want to do a quick test, you can run this plugin from the command line.
Note that the winevtlog plugin will tail channels on each startup. If you want to confirm that this plugin is working, you should specify the -p 'Read_Existing_Events=true' parameter.
The thermal input plugin reports system temperatures periodically, every second by default. Currently this plugin is only available for Linux.
The following table describes the information generated by the plugin.
key | description |
---|---|
The plugin supports the following configuration parameters:
Key | Description |
---|---|
In order to get temperature(s) of your system, you can run the plugin from the command line or through the configuration file:
Some systems provide multiple thermal zones. This example monitors only thermal_zone0 by name, once per minute.
In your main configuration file append the following Input & Output sections:
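A sketch of such a configuration, matching the example above (thermal_zone0, polled once per minute); the tag is an illustrative choice:

```
[INPUT]
    # Report the temperature of thermal_zone0 every 60 seconds
    Name          thermal
    Tag           temperature
    Interval_Sec  60
    name_regex    thermal_zone0

[OUTPUT]
    # Print the temperature records
    Name   stdout
    Match  *
```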
The udp input plugin allows you to retrieve structured JSON or raw messages over a UDP network interface (UDP port).
The plugin supports the following configuration parameters:
Key | Description | Default |
---|---|---|
In order to receive JSON messages over UDP, you can run the plugin from the command line or through the configuration file:
From the command line you can let Fluent Bit listen for JSON messages with the following options:
By default the service will listen on all interfaces (0.0.0.0) through UDP port 5170; optionally you can change this directly, e.g.:
In this example, the JSON messages will only arrive through the network interface at address 192.168.3.2 and UDP port 9090.
In your main configuration file append the following Input & Output sections:
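A minimal sketch, using the defaults described above (all interfaces, port 5170, JSON payloads):

```
[INPUT]
    # Accept JSON messages over UDP
    Name    udp
    Listen  0.0.0.0
    Port    5170
    Format  json

[OUTPUT]
    # Print every received record
    Name   stdout
    Match  *
```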
Once Fluent Bit is running, you can send some messages using netcat:
In Fluent Bit we should see the following output:
When receiving payloads in JSON format, there are high performance penalties. Parsing JSON is a very expensive task, so you can expect CPU usage to increase under high-load environments.
To get faster data ingestion, consider using the option Format none to avoid JSON parsing if it is not needed.
The winlog input plugin allows you to read Windows Event Log.
The plugin supports the following configuration parameters:
Key | Description | Default |
---|---|---|
Note that if you do not set db, the plugin will read channels from the beginning on each startup.
Here is a minimum configuration example.
Note that some Windows Event Log channels (like Security
) requires an admin privilege for reading. In this case, you need to run fluent-bit as an administrator.
If you want to do a quick test, you can run this plugin from the command line.
The statsd input plugin allows you to receive metrics via StatsD protocol.
The plugin supports the following configuration parameters:
Key | Description | Default |
---|---|---|
Here is a configuration example.
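A minimal sketch, using the defaults from the parameter table (all interfaces, UDP port 8125):

```
[INPUT]
    # Receive metrics via the StatsD protocol
    Name    statsd
    Listen  0.0.0.0
    Port    8125

[OUTPUT]
    # Print the received metrics
    Name   stdout
    Match  *
```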
Now you can input metrics through the UDP port as follows:
Fluent Bit will produce the following records:
A plugin based on Prometheus Windows Exporter to collect system / host level metrics
Prometheus Windows Exporter is a popular way to collect system-level metrics from Microsoft Windows, such as CPU / Disk / Network / Process statistics. Fluent Bit 1.9.0 includes a windows exporter metrics plugin that builds on the Prometheus design to collect system-level metrics without having to manage two separate processes or agents.
The initial release of Windows Exporter Metrics contains a single collector available from Prometheus Windows Exporter and we plan to expand it over time.
Important note: Metrics collected with Windows Exporter Metrics flow through a separate pipeline from logs and current filters do not operate on top of metrics.
Key | Description | Default |
---|---|---|
The following table describes the available collectors as part of this plugin. All of them are enabled by default and respect the original metric names, descriptions, and types from Prometheus Windows Exporter, so you can use your current dashboards without any compatibility problems.
Note: the Version column specifies the Fluent Bit version where the collector became available.
This input always runs in its own thread.
In the following configuration file, the input plugin windows_exporter_metrics collects metrics every 2 seconds and exposes them through our Prometheus exporter output plugin on HTTP/TCP port 2021.
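A sketch of such a configuration; the tag name is an illustrative choice:

```
[SERVICE]
    flush            1
    log_level        info

[INPUT]
    # Collect Windows system metrics every 2 seconds
    name             windows_exporter_metrics
    tag              node_metrics
    scrape_interval  2

[OUTPUT]
    # Expose the collected metrics in Prometheus format on port 2021
    name   prometheus_exporter
    match  node_metrics
    host   0.0.0.0
    port   2021
```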
You can test the exposed metrics by using curl:
The Windows service collector retrieves all of the service information for the local node or container. we.service.where, we.service.include, and we.service.exclude can be used to filter the service metrics.
To filter these metrics, users should specify a WHERE clause. This syntax is defined in the WMI Query Language (WQL).
Here is how these parameters should work:
we.service.where is handled as a raw WHERE clause. For example, when a user specifies the parameter as follows:
This creates a WMI query like so:
The WMI mechanism will then handle it and return the information which has a "not OK" status in this example.
When defined, we.service.include is interpreted into a WHERE clause. If multiple key-value pairs are specified, the values will be concatenated with OR. Also, if a value contains the % character, a LIKE operator will be used in the clause instead of the = operator. When a user specifies the parameter as follows:
The parameter will be interpreted as:
The WMI query will be called with the translated parameter as:
When defined, we.service.exclude is interpreted into a WHERE clause. If multiple key-value pairs are specified, the values will be concatenated with AND. Also, if a value contains the % character, a LIKE operator will be used in the translated clause instead of the != operator. When a user specifies the parameter as follows:
The parameter will be interpreted as:
The WMI query will be called with the translated parameter as:
we.service.where, we.service.include, and we.service.exclude can all be used at the same time, subject to the following rules:
- we.service.include is translated and applied into the WHERE clause in the service collector.
- we.service.exclude is translated and applied into the WHERE clause in the service collector.
- If we.service.include is applied, the translated we.service.include and we.service.exclude conditions are concatenated with AND.
- we.service.where is handled as-is in the WHERE clause in the service collector. If either of the above parameters is applied, the clause will be appended with AND (the value of we.service.where).
For example, when a user specifies the parameter as follows:
The WMI query will be called with the translated parameter as:
Our current plugin implements a sub-set of the available collectors in the original Prometheus Windows Exporter. If you would like us to prioritize a specific collector, please open a GitHub issue using the following template: - in_windows_exporter_metrics
Key | Description | Default |
---|---|---|
Path | Optional path to the Systemd journal directory. If not set, the plugin will use default paths to read local-only logs. | |
Max_Fields | Set a maximum number of fields (keys) allowed per record. | 8000 |
Max_Entries | When Fluent Bit starts, the Journal might have a high number of logs in the queue. In order to avoid delays and reduce memory usage, this option allows you to specify the maximum number of log entries that can be processed per round. Once the limit is reached, Fluent Bit will continue processing the remaining log entries once Journald performs the notification. | 5000 |
Systemd_Filter | Allows you to perform a query over logs that contain specific Journald key/value pairs, e.g.: _SYSTEMD_UNIT=UNIT. The Systemd_Filter option can be specified multiple times in the input section to apply multiple filters as required. | |
Systemd_Filter_Type | Define the filter type when Systemd_Filter is specified multiple times. Allowed values are And and Or. With And a record is matched only when all of the Systemd_Filter have a match. With Or a record is matched when any of the Systemd_Filter has a match. | Or |
Tag | The tag is used to route messages, but on the systemd plugin there is extra functionality: if the tag includes a star/wildcard, it will be expanded with the Systemd Unit file (_SYSTEMD_UNIT, e.g. host.* => host.UNIT_NAME) or unknown (e.g. host.unknown) if _SYSTEMD_UNIT is missing. | |
DB | Specify the absolute path of a database file to keep track of the Journald cursor. | |
DB.Sync | Set a default synchronization (I/O) method. Values: Extra, Full, Normal, Off. This flag affects how the internal SQLite engine does synchronization to disk; for more details about each option please refer to this section. Note: this option was introduced in Fluent Bit v1.4.6. | Full |
Read_From_Tail | Start reading new entries. Skip entries already stored in Journald. | Off |
Lowercase | Lowercase the Journald field (key). | Off |
Strip_Underscores | Remove the leading underscore of the Journald field (key). For example, the Journald field _PID becomes the key PID. | Off |
Threaded | Indicates whether to run this input in its own thread. | false |
Key | Description | Default |
---|---|---|
Listen | Listener network interface. | 0.0.0.0 |
Port | TCP port where listening for connections | 5170 |
Buffer_Size | Specify the maximum buffer size in KB to receive a JSON message. If not set, the default size will be the value of Chunk_Size. | |
Chunk_Size | By default the buffer that stores the incoming JSON messages does not allocate the maximum memory allowed; instead it allocates memory when required. The rounds of allocations are set by Chunk_Size in KB. If not set, Chunk_Size is equal to 32 (32KB). | 32 |
Format | Specify the expected payload format. It supports the options json and none. When using json, it expects JSON maps; when set to none, it will split every record using the defined Separator (option below). | json |
Separator | When the expected Format is set to none, Fluent Bit needs a separator string to split the records. By default it uses the line feed character (LF or 0x0A). | |
Source_Address_Key | Specify the key where the source address will be injected. | |
Threaded | Indicates whether to run this input in its own thread. | false |
Key | Description | Default |
---|---|---|
Mode | Defines the transport protocol mode: unix_udp (UDP over Unix socket), unix_tcp (TCP over Unix socket), tcp or udp | unix_udp |
Listen | If Mode is set to tcp or udp, specify the network interface to bind. | 0.0.0.0 |
Port | If Mode is set to tcp or udp, specify the TCP port to listen for incoming connections. | 5140 |
Path | If Mode is set to unix_tcp or unix_udp, set the absolute path to the Unix socket file. | |
Unix_Perm | If Mode is set to unix_tcp or unix_udp, set the permission of the Unix socket file. | 0644 |
Parser | Specify an alternative parser for the message. If Mode is set to tcp or udp then the default parser is syslog-rfc5424, otherwise syslog-rfc3164-local is used. If your syslog messages have fractional seconds, set this Parser value to syslog-rfc5424 instead. | |
Buffer_Chunk_Size | By default the buffer that stores the incoming Syslog messages does not allocate the maximum memory allowed; instead it allocates memory when required. The rounds of allocations are set by Buffer_Chunk_Size. If not set, Buffer_Chunk_Size is equal to 32000 bytes (32KB). Read the considerations below when using udp or unix_udp mode. | |
Buffer_Max_Size | Specify the maximum buffer size to receive a Syslog message. If not set, the default size will be the value of Buffer_Chunk_Size. | |
Receive_Buffer_Size | Specify the maximum socket receive buffer size. If not set, the default value is OS-dependent, but generally too low to accept thousands of syslog messages per second without loss on udp or unix_udp sockets. Note that on Linux the value is capped by sysctl net.core.rmem_max. | |
Source_Address_Key | Specify the key where the source address will be injected. | |
Threaded | Indicates whether to run this input in its own thread. | false |
Key | Description | Default |
---|---|---|
Channels | A comma-separated list of channels to read from. | |
Interval_Sec | Set the polling interval for each channel. (optional) | 1 |
Interval_NSec | Set the polling interval for each channel (sub seconds). (optional) | 0 |
Read_Existing_Events | Whether to read existing events from the head or to tail new events on subscribing. (optional) | False |
DB | Set the path to save the read offsets. (optional) | |
String_Inserts | Whether to include StringInserts in output records. (optional) | True |
Render_Event_As_XML | Whether to render the system part of the event as an XML string or not. (optional) | False |
Use_ANSI | Use ANSI encoding on eventlog messages. If you have issues receiving blank strings with old Windows versions (Server 2012 R2), setting this to True may solve the problem. (optional) | False |
Event_Query | Specify the XML query for filtering events. | * |
Read_Limit_Per_Cycle | Specify the read limit per cycle. | 512KiB |
Threaded | Indicates whether to run this input in its own thread. | false |
key | description |
---|---|
name | The name of the thermal zone, such as thermal_zone0 |
type | The type of the thermal zone, such as x86_pkg_temp |
temp | Current temperature in Celsius |
Key | Description |
---|---|
Interval_Sec | Polling interval (seconds). Default: 1 |
Interval_NSec | Polling interval (nanoseconds). Default: 0 |
name_regex | Optional name filter regex. Default: None |
type_regex | Optional type filter regex. Default: None |
Threaded | Indicates whether to run this input in its own thread. Default: false |
Key | Description | Default |
---|---|---|
Listen | Listener network interface. | 0.0.0.0 |
Port | UDP port where listening for connections | 5170 |
Buffer_Size | Specify the maximum buffer size in KB to receive a JSON message. If not set, the default size will be the value of Chunk_Size. | |
Chunk_Size | By default the buffer that stores the incoming JSON messages does not allocate the maximum memory allowed; instead it allocates memory when required. The rounds of allocations are set by Chunk_Size in KB. If not set, Chunk_Size is equal to 32 (32KB). | 32 |
Format | Specify the expected payload format. It supports the options json and none. When using json, it expects JSON maps; when set to none, it will split every record using the defined Separator (option below). | json |
Separator | When the expected Format is set to none, Fluent Bit needs a separator string to split the records. By default it uses the line feed character (LF or 0x0A). | |
Source_Address_Key | Specify the key where the source address will be injected. | |
Threaded | Indicates whether to run this input in its own thread. | false |
Key | Description | Default |
---|---|---|
Channels | A comma-separated list of channels to read from. | |
Interval_Sec | Set the polling interval for each channel. (optional) | 1 |
DB | Set the path to save the read offsets. (optional) | |
Threaded | Indicates whether to run this input in its own thread. | false |
Key | Description | Default |
---|---|---|
Listen | Listener network interface. | 0.0.0.0 |
Port | UDP port where listening for connections | 8125 |
Threaded | Indicates whether to run this input in its own thread. | false |
Key | Description | Default |
---|---|---|
scrape_interval | The rate at which metrics are collected from the host operating system | 5 seconds |
we.logical_disk.allow_disk_regex | Specify the regex for logical disk metrics to allow collection of. Collect all by default. | "/.+/" |
we.logical_disk.deny_disk_regex | Specify the regex for logical disk metrics to prevent collection of/ignore. Allow all by default. | NULL |
we.net.allow_nic_regex | Specify the regex for network metrics captured by the name of the NIC. Captures all NICs by default; adjust the regex to exclude some. | "/.+/" |
we.service.where | Specify the WHERE clause for retrieving service metrics. | NULL |
we.service.include | Specify the key value pairs for the include condition for the WHERE clause of service metrics. | NULL |
we.service.exclude | Specify the key value pairs for the exclude condition for the WHERE clause of service metrics. | NULL |
we.process.allow_process_regex | Specify the regex covering the process metrics to collect. Collect all by default. | "/.+/" |
we.process.deny_process_regex | Specify the regex for process metrics to prevent collection of/ignore. Allow all by default. | NULL |
collector.cpu.scrape_interval | The rate in seconds at which cpu metrics are collected from the host operating system. If a value greater than 0 is used then it overrides the global default, otherwise the global default is used. | 0 seconds |
collector.net.scrape_interval | The rate in seconds at which net metrics are collected from the host operating system. If a value greater than 0 is used then it overrides the global default, otherwise the global default is used. | 0 seconds |
collector.logical_disk.scrape_interval | The rate in seconds at which logical_disk metrics are collected from the host operating system. If a value greater than 0 is used then it overrides the global default, otherwise the global default is used. | 0 seconds |
collector.cs.scrape_interval | The rate in seconds at which cs metrics are collected from the host operating system. If a value greater than 0 is used then it overrides the global default, otherwise the global default is used. | 0 seconds |
collector.os.scrape_interval | The rate in seconds at which os metrics are collected from the host operating system. If a value greater than 0 is used then it overrides the global default, otherwise the global default is used. | 0 seconds |
collector.thermalzone.scrape_interval | The rate in seconds at which thermalzone metrics are collected from the host operating system. If a value greater than 0 is used then it overrides the global default, otherwise the global default is used. | 0 seconds |
collector.cpu_info.scrape_interval | The rate in seconds at which cpu_info metrics are collected from the host operating system. If a value greater than 0 is used then it overrides the global default, otherwise the global default is used. | 0 seconds |
collector.logon.scrape_interval | The rate in seconds at which logon metrics are collected from the host operating system. If a value greater than 0 is used then it overrides the global default, otherwise the global default is used. | 0 seconds |
collector.system.scrape_interval | The rate in seconds at which system metrics are collected from the host operating system. If a value greater than 0 is used then it overrides the global default, otherwise the global default is used. | 0 seconds |
collector.service.scrape_interval | The rate in seconds at which service metrics are collected from the host operating system. If a value greater than 0 is used then it overrides the global default, otherwise the global default is used. | 0 seconds |
collector.memory.scrape_interval | The rate in seconds at which memory metrics are collected from the host operating system. If a value greater than 0 is used then it overrides the global default, otherwise the global default is used. | 0 seconds |
collector.paging_file.scrape_interval | The rate in seconds at which paging_file metrics are collected from the host operating system. If a value greater than 0 is used then it overrides the global default, otherwise the global default is used. | 0 seconds |
collector.process.scrape_interval | The rate in seconds at which process metrics are collected from the host operating system. If a value greater than 0 is used then it overrides the global default, otherwise the global default is used. | 0 seconds |
metrics | Specify which metrics are collected from the host operating system. | "cpu,cpu_info,os,net,logical_disk,cs,thermalzone,logon,system,service" |
Name | Description | OS | Version |
---|---|---|---|
cpu | Exposes CPU statistics. | Windows | v1.9 |
net | Exposes Network statistics. | Windows | v2.0.8 |
logical_disk | Exposes logical_disk statistics. | Windows | v2.0.8 |
cs | Exposes cs statistics. | Windows | v2.0.8 |
os | Exposes OS statistics. | Windows | v2.0.8 |
thermalzone | Exposes thermalzone statistics. | Windows | v2.0.8 |
cpu_info | Exposes cpu_info statistics. | Windows | v2.0.8 |
logon | Exposes logon statistics. | Windows | v2.0.8 |
system | Exposes system statistics. | Windows | v2.0.8 |
service | Exposes service statistics. | Windows | v2.1.6 |
memory | Exposes memory statistics. | Windows | v2.1.9 |
paging_file | Exposes paging_file statistics. | Windows | v2.1.9 |
process | Exposes process statistics. | Windows | v2.1.9 |