Elasticsearch
The elasticsearch input plugin handles both Elasticsearch and OpenSearch Bulk API requests.
The plugin supports the following configuration parameters:
| Key | Description |
| --- | --- |
| buffer_max_size | Set the maximum size of the buffer. |
| buffer_chunk_size | Set the buffer chunk size. |
| tag_key | Specify a key name whose value is extracted and used as the tag. |
| meta_key | Specify a key name for meta information. |
| hostname | Specify the hostname or FQDN. This parameter is effective when sniffing node information. |
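For reference, the sketch below shows how these parameters fit into an input definition; the tag_key value, the meta_key value, and the hostname are placeholder assumptions rather than required settings:

```
[INPUT]
    name      elasticsearch
    listen    0.0.0.0
    port      9200
    # Use the value of the record key "tag" as the tag (key name is a placeholder)
    tag_key   tag
    # Key name under which meta information is expected (placeholder)
    meta_key  @meta
    # Hostname/FQDN reported when node information is sniffed (placeholder)
    hostname  fluent-bit.local
```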
To start handling Bulk API requests, you can run the plugin from the command line or through the configuration file:
From the command line, you can configure Fluent Bit to handle Bulk API requests with the following options:

```
$ fluent-bit -i elasticsearch -p port=9200 -o stdout
```
In your main configuration file, append the following Input and Output sections:
```
[INPUT]
    name    elasticsearch
    listen  0.0.0.0
    port    9200

[OUTPUT]
    name    stdout
    match   *
```
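Once Fluent Bit is running, you can verify the setup by sending a small Bulk API request yourself. The sketch below uses curl against the listener configured above; the index name `test` is only a placeholder, and the response body may differ from what a real Elasticsearch cluster returns:

```
$ curl -s -X POST "http://127.0.0.1:9200/_bulk" \
    -H "Content-Type: application/x-ndjson" \
    --data-binary $'{"index":{"_index":"test"}}\n{"message":"hello from curl"}\n'
```

The ingested record should then appear on the stdout output.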
As described above, the plugin handles ingested Bulk API requests. For large bulk ingestions, you may have to increase the buffer size with the buffer_max_size and buffer_chunk_size parameters:
```
[INPUT]
    name               elasticsearch
    listen             0.0.0.0
    port               9200
    buffer_max_size    20M
    buffer_chunk_size  5M

[OUTPUT]
    name    stdout
    match   *
```
Ingesting from Elastic Beats agents is also supported. For example, Filebeat, Metricbeat, and Winlogbeat can send their collected data through this plugin.
Note that Fluent Bit reports its node information as Elasticsearch 8.0.0, so users have to specify the following settings in their Beats configuration:
```
output.elasticsearch:
  allow_older_versions: true
  ilm: false
```
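Put together, a Filebeat output section pointed at the Fluent Bit listener from the examples above might look like the following sketch; the host address is an assumption and should match your listen and port settings:

```
# filebeat.yml (output section only); the host address is a placeholder
output.elasticsearch:
  # Address where the Fluent Bit elasticsearch input is listening
  hosts: ["http://127.0.0.1:9200"]
  # Required because Fluent Bit identifies itself as Elasticsearch 8.0.0
  allow_older_versions: true
  ilm: false
```

With this in place, Filebeat treats the Fluent Bit listener as an Elasticsearch endpoint and ships events over the Bulk API.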
For large log ingestion, users might have to configure rate limiting in their Beats agents when Fluent Bit indicates that incoming requests exceed the size limit for HTTP requests:
```
processors:
  - rate_limit:
      limit: "200/s"
```