Send logs to Amazon Kinesis Firehose
The Amazon Kinesis Data Firehose output plugin lets you ingest your records into the Firehose service.
This is the documentation for the core Fluent Bit Firehose plugin written in C. It can replace the aws/amazon-kinesis-firehose-for-fluent-bit Golang Fluent Bit plugin released last year. The Golang plugin was named firehose; this new high performance and highly efficient plugin is called kinesis_firehose to prevent conflicts and confusion.
See here for details on how AWS credentials are fetched.
To send records to Amazon Kinesis Data Firehose, you can run the plugin from the command line or through the configuration file.
The firehose plugin can read the parameters from the command line through the -p argument (property), for example:
In your main configuration file, append the following Output section:
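A minimal sketch; the delivery stream name and region are placeholders you would replace with your own values:

```
[OUTPUT]
    Name             kinesis_firehose
    Match            *
    region           us-east-1
    delivery_stream  my-stream
```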
The following AWS IAM permissions are required to use this plugin:
Fluent Bit 1.7 adds a new feature called workers
which enables outputs to have dedicated threads. This kinesis_firehose
plugin fully supports workers.
Example:
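For instance, a sketch with one dedicated worker thread (the stream name and region are placeholders):

```
[OUTPUT]
    Name             kinesis_firehose
    Match            *
    region           us-east-1
    delivery_stream  my-stream
    workers          1
```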
If you enable a single worker, you are enabling a dedicated thread for your Firehose output. We recommend starting without workers, evaluating the performance, and then adding workers one at a time until you reach your desired/needed throughput. For most users, no workers or a single worker will be sufficient.
Amazon distributes a container image with Fluent Bit and these plugins.
github.com/aws/aws-for-fluent-bit
Our images are available in the Amazon ECR Public Gallery. You can download images with different tags using the following command:
For example, you can pull the image with the latest version by running:
If you see errors for image pull limits, try signing in to public ECR with your AWS credentials:
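For reference, pulling and authenticating generally looks like the following sketch; the repository path is the commonly published aws-for-fluent-bit image and the tag is a placeholder:

```
# Pull a specific tag (replace the tag as needed)
docker pull public.ecr.aws/aws-observability/aws-for-fluent-bit:latest

# If you hit pull-rate limits, authenticate to Amazon ECR Public first
aws ecr-public get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin public.ecr.aws
```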
You can check the Amazon ECR Public official doc for more details.
You can use our SSM Public Parameters to find the Amazon ECR image URI in your region:
For more information, see the AWS for Fluent Bit GitHub repo.
Key | Description |
---|---|
region | The AWS region. |
delivery_stream | The name of the Kinesis Firehose Delivery stream that you want log records sent to. |
time_key | Add the timestamp to the record under this key. By default the timestamp from Fluent Bit will not be added to records sent to Kinesis. |
time_key_format | strftime compliant format string for the timestamp; for example, the default is '%Y-%m-%dT%H:%M:%S'. Supports millisecond precision with '%3N' and nanosecond precision with '%9N' and '%L'; for example, add '%3N' for milliseconds: '%Y-%m-%dT%H:%M:%S.%3N'. This option is used with time_key. |
log_key | By default, the whole log record will be sent to Firehose. If you specify a key name with this option, then only the value of that key will be sent to Firehose. For example, if you are using the Fluentd Docker log driver, you can specify log_key log and only the log message will be sent to Firehose. |
compression | Compression type for Firehose records. Each log record is individually compressed and sent to Firehose. 'gzip' and 'arrow' are the supported values. 'arrow' is only available if Apache Arrow was enabled at compile time. Defaults to no compression. |
role_arn | ARN of an IAM role to assume (for cross account access). |
endpoint | Specify a custom endpoint for the Firehose API. |
sts_endpoint | Custom endpoint for the STS API. |
auto_retry_requests | Immediately retry failed requests to AWS services once. This option does not affect the normal Fluent Bit retry mechanism with backoff. Instead, it enables an immediate retry with no delay for networking errors, which may help improve throughput when there are transient/random networking issues. This option defaults to true. |
external_id | Specify an external ID for the STS API. Can be used with the role_arn parameter if your role requires an external ID. |
profile | AWS profile name to use. Defaults to default. |
workers | The number of workers to perform flush operations for this output. Default: 1. |
Send logs to Azure Data Explorer (Kusto)
The Kusto output plugin lets you ingest your logs into an Azure Data Explorer cluster via the Queued Ingestion mechanism. This output plugin can also be used to ingest logs into an Eventhouse cluster in Microsoft Fabric Real Time Analytics.
You can create an Azure Data Explorer cluster in one of the following ways:
You can create an Eventhouse cluster and a KQL database by following these steps:
Fluent Bit will use the application's credentials to ingest data into your cluster.
Fluent Bit ingests the event data into Kusto in JSON format, which by default includes three properties:
- log - the actual event payload.
- tag - the event tag.
- timestamp - the event timestamp.
A table with the expected schema must exist in order for data to be ingested properly.
By default, Kusto will insert incoming ingestions into a table by inferring the mapped table columns from the payload properties. However, this mapping can be customized by creating a JSON ingestion mapping. The plugin can be configured to use an ingestion mapping via the ingestion_mapping_reference configuration key.
Get started quickly with this configuration file:
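A minimal sketch using the options documented for this plugin; the tenant, client, secret, endpoint, database, and table values are placeholders:

```
[OUTPUT]
    Name                azure_kusto
    Match               *
    Tenant_Id           <app_tenant_id>
    Client_Id           <app_client_id>
    Client_Secret       <app_client_secret>
    Ingestion_Endpoint  https://ingest-<cluster>.<region>.kusto.windows.net
    Database_Name       <database_name>
    Table_Name          <table_name>
```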
If you get a 403 Forbidden
error response, make sure that:
You provided the correct AAD registered application credentials.
You authorized the application to ingest into your database or table.
Send logs, data, and metrics to Amazon S3
The Amazon S3 output plugin lets you ingest records into the S3 cloud object store.
The plugin can upload data to S3 using the multipart upload API or PutObject
. Multipart is the default and is recommended. Fluent Bit will stream data in a series of parts. This limits the amount of data buffered on disk at any point in time. By default, every time 5 MiB of data have been received, a new part will be uploaded. The plugin can create files up to gigabytes in size from many small chunks or parts using the multipart API. All aspects of the upload process are configurable.
The plugin lets you specify a maximum file size, and a timeout for uploads. A file will be created in S3 when the maximum size or the timeout is reached, whichever comes first.
Records are stored in files in S3 as newline delimited JSON.
See AWS Credentials for details about fetching AWS credentials.
The Prometheus success/retry/error metrics values output by the built-in http server in Fluent Bit are meaningless for S3 output. S3 has its own buffering and retry mechanisms. The Fluent Bit AWS S3 maintainers apologize for this feature gap; you can track our progress fixing it on GitHub.
To skip TLS verification, set tls.verify
as false
. For more details about the properties available and general configuration, refer to TLS/SSL.
The plugin requires the following AWS IAM permissions:
The S3 output plugin is used to upload large files to an Amazon S3 bucket, whereas most other outputs send many requests to upload data in batches of a few megabytes or less.
When Fluent Bit receives logs, it stores them in chunks, either in memory or the filesystem depending on your settings. Chunks are usually around 2 MB in size. Fluent Bit sends chunks, in order, to each output that matches their tag. Most outputs then send the chunk immediately to their destination. A chunk is sent to the output's flush
callback function, which must return one of FLB_OK
, FLB_RETRY
, or FLB_ERROR
. Fluent Bit keeps count of the return values from each output's flush
callback function. These counters are the data source for Fluent Bit's error, retry, and success metrics available in Prometheus format through its monitoring interface.
The S3 output plugin conforms to the Fluent Bit output plugin specification. Since S3's use case is to upload large files (over 2 MB), its behavior is different. S3's flush
callback function buffers the incoming chunk to the filesystem, and returns an FLB_OK
. This means Prometheus metrics available from the Fluent Bit HTTP server are meaningless for S3. In addition, the storage.total_limit_size
parameter is not meaningful for S3 since it has its own buffering system in the store_dir
. Instead, use store_dir_limit_size
. S3 requires a writeable filesystem. Running Fluent Bit on a read-only filesystem won't work with the S3 output.
S3 uploads are primarily initiated by the S3 timer callback function, which runs separately from its flush.
S3 has its own buffering system and its own callback to upload data, so the normal sequential data ordering of chunks provided by the Fluent Bit engine may be compromised. S3 has the preserve_data_ordering option which ensures data is uploaded in the original order it was collected by Fluent Bit.
The HTTP Monitoring interface output metrics are not meaningful for S3. AWS understands that this is non-ideal; we have opened an issue with a design to allow S3 to manage its own output metrics.
You must use store_dir_limit_size
to limit the space on disk used by S3 buffer files.
The original ordering of data input to Fluent Bit may not be preserved unless you enable preserve_data_ordering On.
In Fluent Bit, all logs have an associated tag. The s3_key_format
option lets you inject the tag into the S3 key using the following syntax:
- $TAG: The full tag.
- $TAG[n]: The nth part of the tag (index starting at zero). This syntax is copied from the rewrite tag filter. By default, parts of the tag are separated with dots, but you can change this with s3_key_format_tag_delimiters.
In the following example, assume the date is January 1st, 2020 00:00:00 and the tag associated with the logs in question is my_app_name-logs.prod
.
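A configuration fragment consistent with this example might look like the following sketch (bucket and region are placeholders):

```
[OUTPUT]
    Name                          s3
    Match                         *
    bucket                        my-bucket
    region                        us-west-2
    s3_key_format                 /$TAG[2]/$TAG[0]/%Y/%m/%d/%H/%M/%S/$UUID.gz
    s3_key_format_tag_delimiters  .-
```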
With the delimiters . and -, the tag splits into the following parts:

- $TAG[0] = my_app_name
- $TAG[1] = logs
- $TAG[2] = prod

The key in S3 will be /prod/my_app_name/2020/01/01/00/00/00/bgdHN1NM.gz.
The Fluent Bit S3 output was designed to ensure that previous uploads will never be overwritten by a subsequent upload. The s3_key_format
supports time formatters, $UUID
, and $INDEX
. $INDEX
is special because it is saved in the store_dir
. If you restart Fluent Bit with the same disk, it can continue incrementing the index from its last value in the previous run.
For files uploaded with the PutObject
API, the S3 output requires that a unique random string be present in the S3 key. Many of the use cases for PutObject
uploads involve a short time period between uploads, so a timestamp in the S3 key may not be unique enough between uploads. For example, if you only specify minute granularity timestamps in the S3 key, with a small upload size, it is possible to have two uploads that have timestamps set in the same minute. This requirement can be disabled with static_file_path On
.
The PutObject
API is used in these cases:
When you explicitly set use_put_object On
.
On startup when the S3 output finds old buffer files in the store_dir
from a previous run and attempts to send all of them at once.
On shutdown. To prevent data loss the S3 output attempts to send all currently buffered data at once.
You should always specify $UUID
somewhere in your S3 key format. Otherwise, if the PutObject
API is used, S3 appends a random eight-character UUID to the end of your S3 key. This means that a file extension set at the end of an S3 key will have the random UUID appended to it. Disable this with static_file_path On.
For example, we attempt to set a .gz
extension without specifying $UUID
:
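For instance, a sketch with a .gz suffix but no $UUID (bucket, region, and key prefix are placeholders):

```
[OUTPUT]
    Name            s3
    Match           *
    bucket          my-bucket
    region          us-west-2
    use_put_object  On
    s3_key_format   /my-logs/$TAG/%Y/%m/%d/%H/%M/%S.gz
```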
In the case where pending data is uploaded on shutdown, if the tag was app
, the S3 key in the S3 bucket might be:
The S3 output appended a random string to the file extension, since this upload on shutdown used the PutObject
API.
There are two ways of disabling this behavior:
Use static_file_path
:
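A sketch with static_file_path enabled (bucket, region, and key prefix are placeholders):

```
[OUTPUT]
    Name              s3
    Match             *
    bucket            my-bucket
    region            us-west-2
    use_put_object    On
    static_file_path  On
    s3_key_format     /my-logs/$TAG/%Y/%m/%d/%H/%M/%S.gz
```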
Explicitly define where the random UUID will go in the S3 key format:
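For example, a sketch that places $UUID immediately before the extension (placeholders as above):

```
[OUTPUT]
    Name            s3
    Match           *
    bucket          my-bucket
    region          us-west-2
    use_put_object  On
    s3_key_format   /my-logs/$TAG/%Y/%m/%d/%H/%M/%S/$UUID.gz
```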
The store_dir
is used to temporarily store data before upload. If Fluent Bit stops suddenly, it will try to send all data and complete all uploads before it shuts down. If it can not send some data, on restart it will look in the store_dir
for existing data and try to send it.
Multipart uploads are ideal for most use cases because they allow the plugin to upload data in small chunks over time. For example, a 1 GB file can be created from 200 5 MB chunks. While the file size in S3 will be 1 GB, only 5 MB will be buffered on disk at any one point in time.
One drawback to multipart uploads is that the file and data aren't visible in S3 until the upload is completed with a CompleteMultipartUpload call. The plugin attempts to make this call whenever Fluent Bit is shut down to ensure your data is available in S3. It also stores metadata about each upload in the store_dir
, ensuring that uploads can be completed when Fluent Bit restarts (assuming it has access to persistent disk and the store_dir
files will still be present on restart).
If you run Fluent Bit in an environment without persistent disk, or without the ability to restart Fluent Bit and give it access to the data stored in the store_dir
from previous executions, some considerations apply. This might occur if you run Fluent Bit on AWS Fargate.
In these situations, we recommend using the PutObject
API and sending data frequently, to avoid local buffering as much as possible. This will limit data loss in the event Fluent Bit is killed unexpectedly.
The following settings are recommended for this use case:
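A sketch along those lines, with small files and a short timeout; the values are illustrative rather than prescriptive:

```
[OUTPUT]
    Name             s3
    Match            *
    bucket           my-bucket
    region           us-west-2
    use_put_object   On
    total_file_size  1M
    upload_timeout   1m
```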
With use_put_object Off
(default), S3 will attempt to send files using multipart uploads. For each file, S3 first calls CreateMultipartUpload, then a series of calls to UploadPart for each fragment (targeted to be upload_chunk_size
bytes), and finally CompleteMultipartUpload to create the final file in S3.
S3 requires each UploadPart fragment to be at least 5,242,880 bytes; otherwise, the upload is rejected, so the S3 output must sometimes fall back to the PutObject API.
Uploads are triggered by these settings:
total_file_size
and upload_chunk_size
: When S3 has buffered data in the store_dir
that meets the desired total_file_size
(for use_put_object On
) or the upload_chunk_size
(for Multipart), it will trigger an upload operation.
upload_timeout
: Whenever locally buffered data has been present on the filesystem in the store_dir
longer than the configured upload_timeout
, it will be sent even when the desired byte size hasn't been reached. If you configure a small upload_timeout
, your files may be smaller than the total_file_size
. The timeout is evaluated against the time at which S3 started buffering data for each unique tag (that is, the time when new data was buffered for the unique tag after the last upload). The timeout is also evaluated against the CreateMultipartUpload time, so a multipart upload will be completed after upload_timeout
has elapsed, even if the desired size has not yet been reached.
If your upload_timeout
triggers an upload before the pending buffered data reaches the upload_chunk_size
, it may be too small for a multipart upload. S3 will fallback to use the PutObject
API.
When you enable compression, S3 applies the compression algorithm at send time. The size settings trigger uploads based on the size of buffered data, not the final compressed size. It's possible that after compression, buffered data no longer meets the required minimum S3 UploadPart size. If this occurs, you will see a log message like:
If you encounter this frequently, use the numbers in the messages to guess your compression factor. In this example, the buffered data was reduced from 5,630,650 bytes to 1,063,320 bytes. The compressed size is one-fifth the actual data size. Configuring upload_chunk_size 30M
should ensure each part is large enough after compression to be over the minimum required part size of 5,242,880 bytes.
The S3 API allows the last part in an upload to be less than the 5,242,880 byte minimum. If a part is too small for an existing upload, the S3 output will upload that part and then complete the upload.
upload_timeout constrains the total multipart upload time for a single file. The upload_timeout is evaluated against the CreateMultipartUpload time, so a multipart upload will be completed after upload_timeout elapses, even if the desired size has not yet been reached.
When CreateMultipartUpload is called, an UploadID
is returned. S3 stores these IDs for active uploads in the store_dir
. Until CompleteMultipartUpload is called, the uploaded data isn't visible in S3.
On shutdown, S3 output attempts to complete all pending uploads. If an upload fails to complete, the ID remains buffered in the store_dir
in a directory called multipart_upload_metadata
. If you restart the S3 output with the same store_dir
it will discover the old UploadIDs and complete the pending uploads. The S3 documentation has suggestions on discovering and deleting or completing dangling uploads in your buckets.
MinIO is a high-performance, S3 compatible object storage service, so you can build your app with S3 functionality without using S3 itself.
The following example runs a MinIO server at localhost:9000 and creates a bucket named your-bucket.
Example:
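A sketch pointing the plugin at the local MinIO endpoint; credentials would come from your MinIO setup, for example via the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables:

```
[OUTPUT]
    Name      s3
    Match     *
    bucket    your-bucket
    endpoint  http://localhost:9000
```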
The records are stored in the MinIO server.
To send records into Amazon S3, you can run the plugin from the command line or through the configuration file.
The S3 plugin reads parameters from the command line through the -p
argument:
In your main configuration file append the following Output
section:
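A minimal sketch; the bucket and region are placeholders:

```
[OUTPUT]
    Name             s3
    Match            *
    bucket           your-bucket
    region           us-east-1
    total_file_size  50M
    upload_timeout   10m
```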
An example using PutObject
instead of multipart:
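A comparable sketch that forces PutObject uploads (placeholders as above):

```
[OUTPUT]
    Name             s3
    Match            *
    bucket           your-bucket
    region           us-east-1
    use_put_object   On
    total_file_size  10M
    upload_timeout   5m
```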
Amazon distributes a container image with Fluent Bit and plugins.
github.com/aws/aws-for-fluent-bit
Our images are available in the Amazon ECR Public Gallery as aws-for-fluent-bit.
You can download images with different tags using the following command:
For example, you can pull the image with the latest version with:
If you see errors for image pull limits, try signing in to public ECR with your AWS credentials:
See the Amazon ECR Public official documentation for more details.
amazon/aws-for-fluent-bit is also available from the Docker Hub.
Use our SSM Public Parameters to find the Amazon ECR image URI in your region:
For more information, see the AWS for Fluent Bit GitHub repo.
With Fluent Bit v1.8 or greater, the Amazon S3 plugin includes support for Apache Arrow. Support isn't enabled by default and depends on a shared version of libarrow.
To use this feature, FLB_ARROW
must be turned on at compile time. Use the following commands:
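A typical out-of-source CMake build with the flag enabled might look like this sketch, assuming you are in a build directory of the Fluent Bit source tree:

```
cd build/
cmake -DFLB_ARROW=On ..
make
```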
After being compiled, Fluent Bit can upload incoming data to S3 in Apache Arrow format.
For example:
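A sketch enabling Arrow compression; bucket and region are placeholders, and use_put_object is enabled because Arrow compression requires it:

```
[OUTPUT]
    Name            s3
    Match           *
    bucket          your-bucket
    region          us-east-1
    use_put_object  On
    Compression     arrow
```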
Setting Compression to arrow makes Fluent Bit convert the payload into Apache Arrow format.
Load, analyze, and process stored data using popular data processing tools such as Python pandas, Apache Spark, and TensorFlow.
The following example uses pyarrow
to analyze the uploaded data:
Official and Microsoft Certified Azure Storage Blob connector
The Azure Blob output plugin lets you ingest your records into the Azure Blob Storage service. This connector is designed to use the Append Blob and Block Blob API.
The plugin works with the official Azure service and can also be configured to be used with a service emulator such as Azurite.
Before getting started, make sure you already have an Azure Storage account. As a reference, the following link explains step-by-step how to set up your account:
We expose different configuration properties. The following table lists all the options available, and the next section has specific configuration details for the official service or the emulator.
Key | Description | default |
---|---|---|
As mentioned above, you can either deliver records to the official service or an emulator. Below we have an example for each use case.
The following configuration example generates a random message with a custom tag:
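A sketch using the dummy input to generate a random message with a custom tag; the account name, shared key, and container name are placeholders:

```
[INPUT]
    Name   dummy
    Dummy  {"name": "Fluent Bit", "year": 2020}
    Tag    var.log.containers.app-default-96cbdef2340.log

[OUTPUT]
    Name                   azure_blob
    Match                  *
    account_name           myaccountname
    shared_key             <your_base64_shared_key>
    container_name         logs
    auto_create_container  on
    tls                    on
```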
After you run the configuration file above, you will be able to query the data using the Azure Storage Explorer. The example above will generate the following content in the explorer:
The quickest way to get started is to install Azurite using npm:
then run the service:
Azurite comes with a default account_name and shared_key, so make sure to use the specific values provided in the example below (do an exact copy/paste):
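A sketch using Azurite's well-known development account (devstoreaccount1) and its published development key:

```
[INPUT]
    Name   dummy
    Dummy  {"name": "Fluent Bit", "year": 2020}

[OUTPUT]
    Name            azure_blob
    Match           *
    account_name    devstoreaccount1
    shared_key      Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==
    container_name  logs
    emulator_mode   on
    endpoint        http://127.0.0.1:10000
    tls             off
```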
After running that Fluent Bit configuration, you will see the data flowing into Azurite:
Counter is a very simple plugin that counts how many records it receives at flush time. The plugin's output looks like this:
You can run the plugin from the command line or through the configuration file:
From the command line, you can have Fluent Bit count your data with the following options:
In your main configuration file append the following Input & Output sections:
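A sketch using the cpu input for demonstration:

```
[INPUT]
    Name  cpu
    Tag   cpu

[OUTPUT]
    Name   counter
    Match  *
```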
Once Fluent Bit is running, you will see the reports in the output interface similar to this:
The file output plugin lets you write the data received through the input plugin to a file.
The plugin supports the following configuration parameters:
Key | Description | Default |
---|
Output time, tag, and JSON records. There are no configuration parameters for out_file.
Output the records as JSON (without additional tag and timestamp attributes). There are no configuration parameters for the plain format.
Output the records as CSV. CSV supports an additional configuration parameter.
Output the records as LTSV. LTSV supports an additional configuration parameter.
Output the records using a custom format template.
This accepts a formatting template and fills placeholders using corresponding values in a record.
For example, if you set up the configuration as below:
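A sketch with the mem input; the field names in the template are assumed to follow the mem input's record keys:

```
[INPUT]
    Name mem

[OUTPUT]
    Name     file
    Match    *
    Format   template
    Template {time} used={Mem.used} free={Mem.free} total={Mem.total}
```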
You will get the following output:
You can run the plugin from the command line or through the configuration file:
From the command line, you can write data to a file with the following options:
In your main configuration file append the following Input & Output sections:
FlowCounter is a protocol for counting records. The flowcounter output plugin counts records and their size.
The plugin supports the following configuration parameters:
Key | Description | Default |
---|
You can run the plugin from the command line or through the configuration file:
From the command line, you can have Fluent Bit count your data with the following options:
In your main configuration file append the following Input & Output sections:
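A sketch counting CPU metrics per minute:

```
[INPUT]
    Name  cpu
    Tag   cpu

[OUTPUT]
    Name   flowcounter
    Match  *
    Unit   minute
```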
Once Fluent Bit is running, you will see the reports in the output interface similar to this:
Send logs and metrics to Amazon CloudWatch
The Amazon CloudWatch output plugin lets you ingest your records into the CloudWatch Logs service. Support for CloudWatch Metrics is also provided via Embedded Metric Format (EMF).
This is the documentation for the core Fluent Bit CloudWatch plugin written in C. It can replace the Golang Fluent Bit plugin released last year. The Golang plugin was named cloudwatch; this new high performance CloudWatch plugin is called cloudwatch_logs to prevent conflicts/confusion. Check the Amazon repo for the Golang plugin for details on the deprecation/migration plan for the original plugin.
See AWS Credentials for details on how AWS credentials are fetched.
To send records to Amazon CloudWatch, you can run the plugin from the command line or through the configuration file:
The cloudwatch plugin can read the parameters from the command line through the -p argument (property), for example:
In your main configuration file append the following Output section:
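A minimal sketch; the group and stream names are placeholders:

```
[OUTPUT]
    Name               cloudwatch_logs
    Match              *
    region             us-east-1
    log_group_name     fluent-bit-cloudwatch
    log_stream_prefix  from-fluent-bit-
    auto_create_group  On
```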
For an instance of Localstack running at http://localhost:4566
, the following configuration needs to be added to the [OUTPUT]
section:
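For example, a sketch adding the custom endpoint, assuming Localstack's default edge port:

```
[OUTPUT]
    Name               cloudwatch_logs
    Match              *
    region             us-east-1
    log_group_name     fluent-bit-cloudwatch
    log_stream_prefix  from-fluent-bit-
    auto_create_group  On
    endpoint           http://localhost:4566
```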
Any testing credentials can be exported as local variables, such as AWS_ACCESS_KEY_ID
and AWS_SECRET_ACCESS_KEY
.
The following AWS IAM permissions are required to use this plugin:
Here is an example usage for a common use case: templating log group and stream names based on Kubernetes metadata.
Recall that the kubernetes filter can add metadata which will look like the following:
Using record_accessor, we can build a template based on this object.
Here is our output configuration:
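A sketch consistent with the names shown below, using record accessor templates on the kubernetes metadata; the fallback group and stream names are placeholders:

```
[OUTPUT]
    Name                 cloudwatch_logs
    Match                *
    region               us-east-2
    log_group_name       fallback-group
    log_stream_prefix    fallback-stream-
    auto_create_group    On
    log_group_template   application-logs-$kubernetes['host'].$kubernetes['namespace_name']
    log_stream_template  $kubernetes['pod_name'].$kubernetes['container_name']
```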
With the above kubernetes metadata, the log group name will be application-logs-ip-10-1-128-166.us-east-2.compute.internal.my-namespace
. And the log stream name will be myapp-5468c5d4d7-n2swr.myapp
.
If the kubernetes structure is not found in the log record, then the log_group_name
and log_stream_prefix
will be used instead, and Fluent Bit will log an error like:
Notice in the example above that the template values are separated by dot characters. This is important; the Fluent Bit record_accessor library has a limitation in the characters that can separate template variables: only dots and commas (. and ,) can come after a template variable. This is because the templating library must parse the template and determine the end of a variable.
Assume that your log records contain the metadata keys container_name
and task
. The following would be invalid templates because the two template variables are not separated by commas or dots:
$task-$container_name
$task/$container_name
$task_$container_name
$taskfooo$container_name
However, the following are valid:
$task.$container_name
$task.resource.$container_name
$task.fooo.$container_name
And the following are valid since they only contain one template variable with nothing after it:
fooo$task
fooo____$task
fooo/bar$container_name
Fluent Bit has different input plugins (cpu, mem, disk, netif) to collect host resource usage metrics. The cloudwatch_logs output plugin can be used to send these host metrics to CloudWatch in Embedded Metric Format (EMF). If data comes from any of the above-mentioned input plugins, the cloudwatch_logs output plugin will convert it to EMF format and send it to CloudWatch as a JSON log. Additionally, if we set json/emf as the value of the log_format config option, CloudWatch will extract custom metrics from the embedded JSON payload.
Note: Right now, only cpu
and mem
metrics can be sent to CloudWatch.
To use the mem input plugin and send memory usage metrics to CloudWatch, consider the following example config file. Here, we use the aws filter, which adds ec2_instance_id and az (availability zone) to the log records. Later, in the output config section, we set ec2_instance_id as our metric dimension.
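A sketch of such a configuration; the namespace, group, and stream names are illustrative:

```
[INPUT]
    Name mem
    Tag  mem

[FILTER]
    Name  aws
    Match *

[OUTPUT]
    Name               cloudwatch_logs
    Match              *
    region             us-west-2
    log_group_name     fluent-bit-emf
    log_stream_prefix  from-fluent-bit-
    auto_create_group  On
    log_format         json/emf
    metric_namespace   fluent-metrics
    metric_dimensions  ec2_instance_id
```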
The following config will set two dimensions for all of our metrics: ec2_instance_id and az.
Amazon distributes a container image with Fluent Bit and these plugins.
Our images are available in the Amazon ECR Public Gallery. You can download images with different tags using the following command:
For example, you can pull the image with the latest version by running:
If you see errors for image pull limits, try signing in to public ECR with your AWS credentials:
You can use our SSM Public Parameters to find the Amazon ECR image URI in your region:
Send logs to Elasticsearch (including Amazon OpenSearch Service)
The es output plugin lets you ingest your records into an Elasticsearch database. The following instructions assume that you have a fully operational Elasticsearch service running in your environment.
Key | Description | default |
---|
The write_operation can be any of:
Please note that Id_Key or Generate_ID is required for the update and upsert scenarios.
To insert records into an Elasticsearch service, you can run the plugin from the command line or through the configuration file:
The es plugin can read the parameters from the command line in two ways: through the -p argument (property) or by setting them directly through the service URI. The URI format is the following:
Using the format specified, you could start Fluent Bit through:
which is similar to doing:
Some input plugins may generate messages where the field names contain dots. Since Elasticsearch 2.0 this is no longer allowed, so the current es plugin replaces them with an underscore. For example:
becomes
Since Elasticsearch 6.0, you cannot create multiple types in a single index. This means that you cannot set up your configuration as below anymore.
If you see an error message like below, you'll need to fix your configuration to use a single type on each index.
Rejecting mapping update to [search] as the final mapping would have more than 1 type
The Amazon OpenSearch Service adds an extra security layer where HTTP requests must be signed with AWS Sigv4. Fluent Bit v1.5 introduced full support for Amazon OpenSearch Service with IAM Authentication.
Example configuration:
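A sketch with Sigv4 signing enabled; the host is a placeholder for your Amazon OpenSearch Service domain endpoint:

```
[OUTPUT]
    Name        es
    Match       *
    Host        my-domain.us-west-2.es.amazonaws.com
    Port        443
    Index       my_index
    Type        my_type
    AWS_Auth    On
    AWS_Region  us-west-2
    tls         On
```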
Notice that the Port
is set to 443
, tls
is enabled, and AWS_Region
is set.
Example configuration:
Since v1.8.2, Fluent Bit started using create
method (instead of index
) for data submission. This makes Fluent Bit compatible with Datastream introduced in Elasticsearch 7.9.
If you see action_request_validation_exception
errors on your pipeline with Fluent Bit >= v1.8.2, you can fix it up by turning on Generate_ID
as follows:
Elastic Cloud is now on version 8 so the type option must be removed by setting Suppress_Type_Name On
as indicated above.
Without this you will see errors like:
The following snippet demonstrates using the namespace name as extracted by the kubernetes
filter as logstash prefix:
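A sketch using Logstash_Prefix_Key with a record accessor pattern, assuming the kubernetes filter has already enriched the record; host and port are placeholders:

```
[OUTPUT]
    Name                 es
    Match                *
    Host                 elasticsearch
    Port                 9200
    Logstash_Format      On
    Logstash_Prefix      logstash
    Logstash_Prefix_Key  $kubernetes['namespace_name']
```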
For records that do not have the field kubernetes.namespace_name, the default prefix logstash will be used.
GELF is the Graylog Extended Log Format. The GELF output plugin lets you send logs in GELF format directly to a Graylog input using the TLS, TCP, or UDP protocols.
The following instructions assume that you have a fully operational Graylog server running in your environment.
According to the GELF Payload Specification, there are some mandatory and optional fields which are used by Graylog in GELF format. These fields are set with the Gelf_*_Key properties in this plugin.
Key | Description | default |
---|
If you're using Fluent Bit to collect Docker logs, note that Docker places your log in JSON under key log
. So you can set log
as your Gelf_Short_Message_Key
to send everything in Docker logs to Graylog. In this case, you need your log
value to be a string, so don't parse it using a JSON parser.
The order of looking up the timestamp in this plugin is as follows:
Value of Gelf_Timestamp_Key
provided in configuration
Value of timestamp
key
If the timestamp is not set by Fluent Bit, your Graylog server will set it to the current timestamp (now).
The version of the GELF message is also mandatory; Fluent Bit sets it to 1.1, which is the latest version of GELF.
If you use udp
as transport protocol and set Compress
to true
, Fluent Bit compresses your packets in GZIP format, which is the default compression that Graylog offers. This can be used to trade more CPU load for saving network bandwidth.
If you're using Fluent Bit for shipping Kubernetes logs, you can use something like this as your configuration file:
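A sketch of such a pipeline; the Graylog host is a placeholder and the tail/kubernetes/docker parser setup assumes a typical Kubernetes deployment:

```
[INPUT]
    Name    tail
    Path    /var/log/containers/*.log
    Parser  docker
    Tag     kube.*

[FILTER]
    Name       kubernetes
    Match      kube.*
    Merge_Log  On

[OUTPUT]
    Name                    gelf
    Match                   kube.*
    Host                    graylog.example.com
    Port                    12201
    Mode                    tcp
    Gelf_Short_Message_Key  log
```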
By default, GELF TCP uses port 12201, and Docker places your logs in the /var/log/containers directory. The log content is placed in the value of the log key. For example, this is a log saved by Docker:
Now, this is what happens to this log:
The Fluent Bit GELF plugin adds "version": "1.1" to it.
We used the data key as Gelf_Short_Message_Key, so the GELF plugin renames it to short_message.
Timestamp is generated.
Finally, this is what our Graylog server input sees:
The Chronicle output plugin lets you ingest security logs into the Google Chronicle service. This connector is designed to send unstructured security logs.
Fluent Bit streams data into an existing Google Chronicle tenant using a service account that you specify. Therefore, before using the Chronicle output plugin, you must create a service account, create a Google Chronicle tenant, authorize the service account to write to the tenant, and provide the service account credentials to Fluent Bit.
To stream security logs into Google Chronicle, the first step is to create a Google Cloud service account for Fluent Bit:
Fluent Bit does not create a tenant of Google Chronicle for your security logs, so you must create this ahead of time.
Fluent Bit's Chronicle output plugin uses a JSON credentials file for authentication credentials. Download the credentials file by following these instructions:
Key | Description | default |
---|
If you are using a Google Cloud Credentials File, the following configuration is enough to get you started:
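A sketch assuming the plugin is registered as chronicle and using the keys documented below; the credentials path, customer ID, and log type are placeholders:

```
[OUTPUT]
    Name                        chronicle
    Match                       *
    google_service_credentials  /path/to/my_service_credentials.json
    customer_id                 my_customer_id
    log_type                    my_log_type
```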
Send logs to Amazon Kinesis Streams
The Amazon Kinesis Data Streams output plugin lets you ingest your records into the Amazon Kinesis Data Streams service.
This is the documentation for the core Fluent Bit Kinesis plugin written in C. It has all the core features of the Golang Fluent Bit plugin released in 2019. The Golang plugin was named kinesis; this new high performance and highly efficient plugin is called kinesis_streams to prevent conflicts/confusion.
Currently, the kinesis_streams plugin always uses a random partition key when uploading records to Kinesis via the PutRecords API.
To send records to Amazon Kinesis Data Streams, you can run the plugin from the command line or through the configuration file:
The kinesis_streams plugin can read the parameters from the command line through the -p argument (property), for example:
In your main configuration file append the following Output section:
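A minimal sketch; the stream name and region are placeholders:

```
[OUTPUT]
    Name    kinesis_streams
    Match   *
    region  us-east-1
    stream  my-stream
```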
The following AWS IAM permissions are required to use this plugin:
Amazon distributes a container image with Fluent Bit and these plugins.
Our images are available in the Amazon ECR Public Gallery. You can download images with different tags using the following command:
For example, you can pull the image with the latest version by running:
If you see errors for image pull limits, try signing in to public ECR with your AWS credentials:
You can use our SSM Public Parameters to find the Amazon ECR image URI in your region:
Send logs to Azure Log Analytics using Logs Ingestion API with DCE and DCR
The Azure Logs Ingestion plugin lets you ingest your records using the Logs Ingestion API into supported Azure tables or into custom tables that you create.
The Logs ingestion API requires the following components:
A Data Collection Endpoint (DCE)
A Data Collection Rule (DCR) and
A Log Analytics Workspace
To get more details about how to set up these components, refer to the following documentation:
To send records into Azure Log Analytics using the Logs Ingestion API, the following resources need to be created:
A Data Collection Endpoint (DCE) for ingestion
A Data Collection Rule (DCR) for data transformation
An app registration with client secrets (for DCR access).
Use this configuration to quickly get started:
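A sketch; the tenant, client, secret, DCE URL, DCR ID, and table name are placeholders:

```
[OUTPUT]
    Name            azure_logs_ingestion
    Match           *
    tenant_id       <app_tenant_id>
    client_id       <app_client_id>
    client_secret   <app_client_secret>
    dce_url         https://my-dce.eastus-1.ingest.monitor.azure.com
    dcr_id          dcr-00000000000000000000000000000000
    table_name      Custom-MyTable
    time_generated  true
```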
Set up your DCR transformation based on the JSON output from Fluent Bit's pipeline (input, parser, filter, output).
Send logs, metrics to Azure Log Analytics
The Azure output plugin lets you ingest your records into the Azure Log Analytics service.
To get more details about how to set up Azure Log Analytics, refer to the following documentation:
To insert records into an Azure Log Analytics instance, you can run the plugin from the command line or through the configuration file:
The azure plugin can read the parameters from the command line through the -p argument (property), for example:
In your main configuration file append the following Input & Output sections:
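A sketch; the workspace ID and shared key are placeholders:

```
[INPUT]
    Name cpu

[OUTPUT]
    Name         azure
    Match        *
    Customer_ID  <your_workspace_id>
    Shared_Key   <your_shared_key>
    Log_Type     fluentbit
```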
Key | Description | Default |
---|---|---|
Key | Description | Default |
---|---|---|
Key | Description |
---|
Key | Description |
---|
Key | Description |
---|
Key | Description |
---|
Sometimes, you may want the log group or stream name to be based on the contents of the log record itself. This plugin supports templating log group and stream names using Fluent Bit syntax.
You can check the for more details
For more see .
The parameters index and type can be confusing if you are new to Elastic. If you have used a common relational database before, they can be compared to the database and table concepts.
The Elasticsearch output plugin supports TLS/SSL. For more details about the properties available and general configuration, refer to the TLS/SSL section.
Operation | Description |
---|
In your main configuration file append the following Input & Output sections. You can visualize this configuration
For details, please read .
Fluent Bit v1.5 changed the default mapping type from flb_type to _doc, which matches the recommendation from Elasticsearch from version 6.2 forward. This doesn't work in Elasticsearch versions 5.6 through 6.1. Ensure you set an explicit map (such as doc or flb_type) in the configuration, as seen on the last line:
See AWS Credentials for details on how AWS credentials are fetched.
Fluent Bit supports connecting to Elastic Cloud by providing just the cloud_id and the cloud_auth settings. cloud_auth uses the elastic user and password provided when the cluster was created; for details, refer to the Elastic Cloud documentation.
If you get a 403 Forbidden
error response, double check that you have a valid and that you have .
The GELF output plugin supports TLS/SSL. For more details about the properties available and general configuration, refer to the TLS/SSL section.
If the parser you're using can parse time, the parsed time is used as the timestamp of the message. If all of the above fail, Fluent Bit tries to get the timestamp extracted by your parser.
Your log timestamp has to be in UNIX epoch timestamp format. If the Gelf_Timestamp_Key value of your log is not in this format, your Graylog server will ignore it.
If you're using Fluent Bit in Kubernetes with the Kubernetes filter, this plugin adds the host value to your log by default, and you don't need to add it yourself.
If you use a parser like the docker parser shown above, it decodes your message and extracts the data field (and any others present). This is how this log looks after decoding:
The filter, as configured, unnests fields inside the log key. In our example, it puts data alongside stream and time.
The host name is also added.
Any custom field (not present in the GELF specification) is prefixed with an underscore.
See Google's documentation for further details.
See AWS Credentials for details on how AWS credentials are fetched.
Key | Description |
---|
You can check the for more details.
For more see .
Note: According to , all resources should be in the same region.
To visualize basic Logs Ingestion operation, see the following image:
Key | Description | Default |
---|
Either an or
You can follow the referenced tutorial to set up the DCE, DCR, app registration, and a custom table.
Key | Description | default |
---|
Another example uses Log_Type_Key, which will read the table name (or event type) dynamically from the Kubernetes label app instead of Log_Type:
Key | Description | Default |
---|---|---|
region | The AWS region of your S3 bucket. | us-east-1 |
bucket | S3 Bucket name. | none |
json_date_key | Specify the time key name in the output record. To disable the time key, set the value to false. | date |
json_date_format | Specify the format of the date. Accepted values: double, epoch, iso8601 (2018-05-30T09:39:52.000681Z), _java_sql_timestamp_ (2018-05-30 09:39:52.000681). | iso8601 |
total_file_size | Specify file size in S3. Minimum size is 1M. With use_put_object On the maximum size is 1G. With multipart uploads, the maximum size is 50G. | 100M |
upload_chunk_size | The size of each part for multipart uploads. Max: 50M. | 5,242,880 bytes |
upload_timeout | When this amount of time elapses, Fluent Bit uploads and creates a new file in S3. Set to 60m to upload a new file every hour. | 10m |
store_dir | Directory to locally buffer data before sending. When using multipart uploads, data buffers until reaching the upload_chunk_size. S3 stores metadata about in-progress multipart uploads in this directory, allowing pending uploads to be completed if Fluent Bit stops and restarts. It also stores the current $INDEX value if enabled in the S3 key format so the $INDEX keeps incrementing from its previous value after Fluent Bit restarts. | /tmp/fluent-bit/s3 |
store_dir_limit_size | Size limit for disk usage in S3. Limits the S3 buffers in the store_dir to limit disk usage. Use store_dir_limit_size instead of storage.total_limit_size, which can be used for other plugins. | 0 (unlimited) |
s3_key_format | Format string for keys in S3. This option supports a UUID, strftime time formatters, and a syntax for selecting parts of the Fluent log tag inspired by the rewrite_tag filter. Add $UUID in the format string to insert a random string. Add $INDEX in the format string to insert an integer that increments on each upload. The $INDEX value is saved in the store_dir. Add $TAG in the format string to insert the full log tag. Add $TAG[0] to insert the first part of the tag in the S3 key. The tag is split into parts using the characters specified with the s3_key_format_tag_delimiters option. Add the extension directly after the last piece of the format string to insert a key suffix. To specify a key suffix in use_put_object mode, you must specify $UUID. See S3 Key Format. Time in s3_key is the timestamp of the first record in the S3 file. | /fluent-bit-logs/$TAG/%Y/%m/%d/%H/%M/%S |
s3_key_format_tag_delimiters | A series of characters used to split the tag into parts for use with the s3_key_format option. | . |
static_file_path | Disables the behavior where a UUID string is appended to the end of the S3 key name when $UUID is not provided in s3_key_format. $UUID, time formatters, $TAG, and other dynamic key formatters all work as expected while this feature is set to true. | false |
use_put_object | Use the S3 PutObject API instead of the multipart upload API. When enabled, the key extension is only available when $UUID is specified in s3_key_format. If $UUID isn't included, a random string is appended to the format string and the key extension can't be customized. | false |
role_arn | ARN of an IAM role to assume (for example, for cross account access). | none |
endpoint | Custom endpoint for the S3 API. Endpoints can contain scheme and port. | none |
sts_endpoint | Custom endpoint for the STS API. | none |
profile | Option to specify an AWS Profile for credentials. | default |
canned_acl | Predefined Canned ACL policy for S3 objects. | none |
compression | Compression type for S3 objects. gzip is currently the only supported value by default. If Apache Arrow support was enabled at compile time, you can also use arrow. For gzip compression, the Content-Encoding HTTP header will be set to gzip. Gzip compression can be enabled when use_put_object is on or off (PutObject and Multipart). Arrow compression can only be enabled with use_put_object On. | none |
content_type | A standard MIME type for the S3 object, set as the Content-Type HTTP header. | none |
send_content_md5 | Send the Content-MD5 header with PutObject and UploadPart requests, as is required when Object Lock is enabled. | false |
auto_retry_requests | Immediately retry failed requests to AWS services once. This option doesn't affect the normal Fluent Bit retry mechanism with backoff. Instead, it enables an immediate retry with no delay for networking errors, which may help improve throughput during transient network issues. | true |
log_key | By default, the whole log record will be sent to S3. When specifying a key name with this option, only the value of that key is sent to S3. For example, when using Docker you can specify log_key log and only the log message is sent to S3. | none |
preserve_data_ordering | When an upload request fails, the last received chunk might swap with a later chunk, resulting in data shuffling. This feature prevents shuffling by using a queue logic for uploads. | true |
storage_class | Specify the storage class for S3 objects. If this option isn't specified, objects are stored with the default STANDARD storage class. | none |
retry_limit | Integer value to set the maximum number of retries allowed. Requires versions 1.9.10 and 2.0.1 or later. For previous versions, the number of retries is 5 and isn't configurable. | 1 |
external_id | Specify an external ID for the STS API. Can be used with the role_arn parameter if your role requires an external ID. | none |
workers | The number of workers to perform flush operations for this output. | 1 |
Key | Description | Default |
---|---|---|
tenant_id | Required - The tenant/domain ID of the AAD registered application. | |
client_id | Required - The client ID of the AAD registered application. | |
client_secret | Required - The client secret of the AAD registered application (App Secret). | |
ingestion_endpoint | Required - The cluster's ingestion endpoint, usually in the form https://ingest-cluster_name.region.kusto.windows.net | |
database_name | Required - The database name. | |
table_name | Required - The table name. | |
ingestion_mapping_reference | Optional - The name of a JSON ingestion mapping that will be used to map the ingested payload into the table columns. | |
log_key | Key name of the log content. | log |
include_tag_key | If enabled, a tag is appended to the output. The key name is set by the tag_key property. | On |
tag_key | The key name of the tag. If include_tag_key is false, this property is ignored. | tag |
include_time_key | If enabled, a timestamp is appended to the output. The key name is set by the time_key property. | On |
time_key | The key name of the time. If include_time_key is false, this property is ignored. | timestamp |
workers | The number of workers to perform flush operations for this output. | 0 |
Key | Description | Default |
---|---|---|
account_name | Azure Storage account name. This configuration property is mandatory. | |
auth_type | Specify the type to authenticate against the service. Fluent Bit supports key and sas. | key |
shared_key | Specify the Azure Storage Shared Key to authenticate against the service. This configuration property is mandatory when auth_type is key. | |
sas_token | Specify the Azure Storage shared access signatures to authenticate against the service. This configuration property is mandatory when auth_type is sas. | |
container_name | Name of the container that will contain the blobs. This configuration property is mandatory. | |
blob_type | Specify the desired blob type. Fluent Bit supports appendblob and blockblob. | appendblob |
auto_create_container | If container_name does not exist in the remote service, enabling this option will handle the exception and auto-create the container. | on |
path | Optional path to store your blobs. If your blob name is myblob, you can specify sub-directories where to store it using path, so setting path to /logs/kubernetes will store your blob in /logs/kubernetes/myblob. | |
emulator_mode | If you want to send data to an Azure emulator service like Azurite, enable this option so the plugin will format the requests to the expected format. | off |
endpoint | If you are using an emulator, this option allows you to specify the absolute HTTP address of such service. e.g: http://127.0.0.1:10000. | |
tls | Enable or disable TLS encryption. Note that Azure service requires this to be turned on. | off |
workers | The number of workers to perform flush operations for this output. | 0 |
Delimiter | The character to separate each data. Accepted values are "\t" (or "tab"), "space" or "comma". Other values are ignored and will use default silently. Default: ',' |
Delimiter | The character to separate each pair. Default: '\t'(TAB) |
Label_Delimiter | The character to separate label and the value. Default: ':' |
Template | The format string. Default: '{time} {message}' |
create (default) | adds new data - if the data already exists (based on its id), the op is skipped. |
index | new data is added while existing data (based on its id) is replaced (reindexed). |
update | updates existing data (based on its id). If no data is found, the op is skipped. |
upsert | known as merge or insert if the data does not exist, updates if the data exists (based on its id). |
Path | Directory path to store files. If not set, Fluent Bit will write the files in its own positioned directory. Note: this option was added in Fluent Bit v1.4.6. |
File | Set file name to store the records. If not set, the file name will be the tag associated with the records. |
Format | The format of the file content. See also Format section. Default: out_file. |
Mkdir | Recursively create output directory if it does not exist. Permissions set to 0755. |
Workers |
|
Unit | The unit of duration. (second/minute/hour/day) | minute |
Workers |
|
region | The AWS region. |
log_group_name | The name of the CloudWatch Log Group that you want log records sent to. |
log_group_template |
log_stream_name | The name of the CloudWatch Log Stream that you want log records sent to. |
log_stream_prefix | Prefix for the Log Stream name. The tag is appended to the prefix to construct the full log stream name. Not compatible with the log_stream_name option. |
log_stream_template |
log_key | By default, the whole log record will be sent to CloudWatch. If you specify a key name with this option, then only the value of that key will be sent to CloudWatch. For example, if you are using the Fluentd Docker log driver, you can specify |
log_format |
role_arn | ARN of an IAM role to assume (for cross account access). |
auto_create_group | Automatically create the log group. Valid values are "true" or "false" (case insensitive). Defaults to false. |
log_retention_days | If set to a number greater than zero, any newly created log group's retention policy is set to this many days. Valid values are: [1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1827, 3653] |
endpoint | Specify a custom endpoint for the CloudWatch Logs API. |
metric_namespace | An optional string representing the CloudWatch namespace for the metrics. See |
metric_dimensions |
sts_endpoint | Specify a custom STS endpoint for the AWS STS API. |
profile | Option to specify an AWS Profile for credentials. Defaults to |
auto_retry_requests | Immediately retry failed requests to AWS services once. This option does not affect the normal Fluent Bit retry mechanism with backoff. Instead, it enables an immediate retry with no delay for networking errors, which may help improve throughput when there are transient/random networking issues. This option defaults to |
external_id | Specify an external ID for the STS API, can be used with the role_arn parameter if your role requires an external ID. |
workers |
Host | IP address or hostname of the target Elasticsearch instance | 127.0.0.1 |
Port | TCP port of the target Elasticsearch instance | 9200 |
Path | Elasticsearch accepts new data on HTTP query path "/_bulk". But it is also possible to serve Elasticsearch behind a reverse proxy on a subpath. This option defines such path on the fluent-bit side. It simply adds a path prefix in the indexing HTTP POST URI. | Empty string |
compress | Set payload compression mechanism. Option available is 'gzip' |
Buffer_Size | 512KB |
Pipeline | Newer versions of Elasticsearch allows to setup filters called pipelines. This option allows to define which pipeline the database should use. For performance reasons is strongly suggested to do parsing and filtering on Fluent Bit side, avoid pipelines. |
AWS_Auth | Enable AWS Sigv4 Authentication for Amazon OpenSearch Service | Off |
AWS_Region | Specify the AWS region for Amazon OpenSearch Service |
AWS_STS_Endpoint | Specify the custom sts endpoint to be used with STS API for Amazon OpenSearch Service |
AWS_Role_ARN | AWS IAM Role to assume to put records to your Amazon cluster |
AWS_External_ID | External ID for the AWS IAM Role specified with |
AWS_Service_Name | es |
AWS_Profile | AWS profile name | default |
Cloud_ID | If you are using Elastic's Elasticsearch Service you can specify the cloud_id of the cluster running. The Cloud ID string has the format |
Cloud_Auth | Specify the credentials to use to connect to Elastic's Elasticsearch Service running on Elastic Cloud |
HTTP_User | Optional username credential for Elastic X-Pack access |
HTTP_Passwd | Password for user defined in HTTP_User |
Index | Index name | fluent-bit |
Type | Type name | _doc |
Logstash_Format | Enable Logstash format compatibility. This option takes a boolean value: True/False, On/Off | Off |
Logstash_Prefix | When Logstash_Format is enabled, the Index name is composed using a prefix and the date, e.g: If Logstash_Prefix is equals to 'mydata' your index will become 'mydata-YYYY.MM.DD'. The last string appended belongs to the date when the data is being generated. | logstash |
Logstash_Prefix_Key |
Logstash_Prefix_Separator | Set a separator between logstash_prefix and date. | - |
Logstash_DateFormat | %Y.%m.%d |
Time_Key | When Logstash_Format is enabled, each record will get a new timestamp field. The Time_Key property defines the name of that field. | @timestamp |
Time_Key_Format | When Logstash_Format is enabled, this property defines the format of the timestamp. | %Y-%m-%dT%H:%M:%S |
Time_Key_Nanos | When Logstash_Format is enabled, enabling this property sends nanosecond precision timestamps. | Off |
Include_Tag_Key | When enabled, it appends the Tag name to the record. | Off |
Tag_Key | When Include_Tag_Key is enabled, this property defines the key name for the tag. | _flb-key |
Generate_ID | When enabled, generate | Off |
Id_Key | If set, |
Write_Operation | The write_operation can be any of: create (default), index, update, upsert. | create |
Replace_Dots | When enabled, replace field name dots with underscore, required by Elasticsearch 2.0-2.3. | Off |
Trace_Output | Print all elasticsearch API request payloads to stdout (for diag only) | Off |
Trace_Error | If elasticsearch return an error, print the elasticsearch API request and response (for diag only) | Off |
Current_Time_Index | Use current time for index generation instead of message record | Off |
Suppress_Type_Name | Off |
Workers |
|
Host | Required - The Datadog server where you are sending your logs. |
|
TLS | Required - End-to-end security communications security protocol. Datadog recommends setting this to |
|
compress | Recommended - compresses the payload in GZIP format, Datadog supports and recommends setting this to |
apikey |
Proxy |
provider | To activate the remapping, specify configuration flag provider with value |
json_date_key | Date key name for output. |
|
include_tag_key | If enabled, a tag is appended to output. The key name is used |
|
tag_key | The key name of tag. If |
|
dd_service |
dd_source |
dd_tags |
dd_message_key | By default, the plugin searches for the key 'log' and remaps the value to the key 'message'. If the property is set, the plugin will search the property name key. |
workers |
|
Match | Pattern to match which tags of logs to be outputted by this plugin |
Host | IP address or hostname of the target Graylog server | 127.0.0.1 |
Port | The port that your Graylog GELF input is listening on | 12201 |
Mode | The protocol to use ( | udp |
Gelf_Tag_Key | Key to be used for tag. (Optional in GELF) |
Gelf_Short_Message_Key | A short descriptive message (MUST be set in GELF) | short_message |
Gelf_Timestamp_Key | Your log timestamp (SHOULD be set in GELF) | timestamp |
Gelf_Host_Key | Key which its value is used as the name of the host, source or application that sent this message. (MUST be set in GELF) | host |
Gelf_Full_Message_Key | Key to use as the long message that can i.e. contain a backtrace. (Optional in GELF) | full_message |
Gelf_Level_Key | level |
Packet_Size | If transport protocol is | 1420 |
Compress | If transport protocol is | true |
Workers |
|
google_service_credentials | Absolute path to a Google Cloud credentials JSON file. | Value of the environment variable $GOOGLE_SERVICE_CREDENTIALS |
service_account_email | Account email associated with the service. Only available if no credentials file has been provided. | Value of environment variable $SERVICE_ACCOUNT_EMAIL |
service_account_secret | Private key content associated with the service account. Only available if no credentials file has been provided. | Value of environment variable $SERVICE_ACCOUNT_SECRET |
project_id | The project id containing the tenant of Google Chronicle to stream into. | The value of the |
customer_id | The customer id to identify the tenant of Google Chronicle to stream into. The value of the |
log_type |
region | The GCP region in which to store security logs. Currently, there are several supported regions: |
log_key | By default, the whole log record will be sent to Google Chronicle. If you specify a key name with this option, then only the value of that key will be sent to Google Chronicle. |
workers |
|
region | The AWS region. |
stream | The name of the Kinesis Streams Delivery stream that you want log records sent to. |
time_key | Add the timestamp to the record under this key. By default the timestamp from Fluent Bit will not be added to records sent to Kinesis. |
time_key_format | strftime compliant format string for the timestamp; for example, the default is '%Y-%m-%dT%H:%M:%S'. Supports millisecond precision with '%3N' and supports nanosecond precision with '%9N' and '%L'; for example, adding '%3N' to support millisecond '%Y-%m-%dT%H:%M:%S.%3N'. This option is used with time_key. |
log_key | By default, the whole log record will be sent to Kinesis. If you specify a key name with this option, then only the value of that key will be sent to Kinesis. For example, if you are using the Fluentd Docker log driver, you can specify |
role_arn | ARN of an IAM role to assume (for cross account access). |
endpoint | Specify a custom endpoint for the Kinesis API. |
sts_endpoint | Custom endpoint for the STS API. |
auto_retry_requests | Immediately retry failed requests to AWS services once. This option does not affect the normal Fluent Bit retry mechanism with backoff. Instead, it enables an immediate retry with no delay for networking errors, which may help improve throughput when there are transient/random networking issues. This option defaults to |
external_id | Specify an external ID for the STS API, can be used with the role_arn parameter if your role requires an external ID. |
profile | AWS profile name to use. Defaults to |
workers |
tenant_id | Required - The tenant ID of the AAD application. |
client_id | Required - The client ID of the AAD application. |
client_secret |
dce_url | Required - Data Collection Endpoint(DCE) URL. |
dcr_id |
table_name | Required - The name of the custom log table (include the |
time_key | Optional - Specify the key name where the timestamp will be stored. |
|
time_generated | Optional - If enabled, will generate a timestamp and append it to JSON. The key name is set by the 'time_key' parameter. |
|
compress | Optional - Enable HTTP payload gzip compression. |
|
workers |
|
Customer_ID | Customer ID or WorkspaceID string. |
Shared_Key | The primary or the secondary Connected Sources client authentication key. |
Log_Type | The name of the event type. | fluentbit |
Log_Type_Key | If included, the value for this key will be looked up in the record and, if present, will overwrite the |
Time_Key | Optional parameter to specify the key name where the timestamp will be stored. | @timestamp |
Time_Generated | If enabled, the HTTP request header 'time-generated-field' will be included so Azure can override the timestamp with the key specified by 'time_key' option. | off |
Workers |
|
The BigQuery output plugin is an experimental plugin that allows you to stream records into the Google Cloud BigQuery service. The implementation does not support the following, which would be expected in a full production version:
Data deduplication using insertId
.
Template tables using templateSuffix
.
Fluent Bit streams data into an existing BigQuery table using a service account that you specify. Therefore, before using the BigQuery output plugin, you must create a service account, create a BigQuery dataset and table, authorize the service account to write to the table, and provide the service account credentials to Fluent Bit.
To stream data into BigQuery, the first step is to create a Google Cloud service account for Fluent Bit:
Fluent Bit does not create datasets or tables for your data, so you must create these ahead of time. You must also grant the service account WRITER
permission on the dataset:
Within the dataset you will need to create a table for the data to reside in. You can follow these instructions for creating your table. Pay close attention to the schema: it must match the schema of your output JSON. Unfortunately, since BigQuery does not allow dots in field names, you will need to use a filter to change the fields for many of the standard inputs (e.g., mem or cpu).
Fluent Bit BigQuery output plugin uses a JSON credentials file for authentication credentials. Download the credentials file by following these instructions:
Using identity federation, you can grant on-premises or multi-cloud workloads access to Google Cloud resources, without using a service account key. It can be used as a more secure alternative to service account credentials. Google Cloud's workload identity federation supports several identity providers (see documentation) but Fluent Bit BigQuery plugin currently supports Amazon Web Services (AWS) only.
You must configure workload identity federation in GCP before using it with Fluent Bit.
See Google's official documentation for further details.
If you are using a Google Cloud Credentials File, the following configuration is enough to get you started:
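A minimal sketch of such a configuration; the credentials path, project, dataset and table names are placeholders you must replace with your own, and the dataset and table must already exist:

```
[INPUT]
    # Dummy input used only to generate a sample record
    Name   dummy
    Dummy  {"message": "hello from fluent-bit"}

[OUTPUT]
    Name                       bigquery
    Match                      *
    # Path to the service account JSON key (placeholder)
    google_service_credentials /path/to/service-account-credentials.json
    # Placeholder project, dataset and table identifiers
    project_id                 my-gcp-project
    dataset_id                 my_dataset
    table_id                   my_table
```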
The influxdb output plugin allows you to flush your records into an InfluxDB time series database. The following instructions assume that you have a fully operational InfluxDB service running in your system.
Key | Description | default |
---|---|---|
InfluxDB output plugin supports TLS/SSL, for more details about the properties available and general configuration, please refer to the TLS/SSL section.
In order to start inserting records into an InfluxDB service, you can run the plugin from the command line or through the configuration file:
The influxdb plugin can read the parameters from the command line in two ways: through the -p argument (property) or by setting them directly through the service URI. The URI format is the following:
Using the format specified, you could start Fluent Bit through:
In your main configuration file append the following Input & Output sections:
Basic example of Tag_Keys
usage:
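A possible sketch of such a configuration; the dummy input and the method and status keys are illustrative assumptions, the point being that every key listed in Tag_Keys is stored as an InfluxDB tag rather than a field:

```
[INPUT]
    # Hypothetical input producing records that contain 'method' and 'status' keys
    Name   dummy
    Dummy  {"method": "GET", "status": 200, "latency": 12}
    Tag    http

[OUTPUT]
    Name      influxdb
    Match     http
    Host      127.0.0.1
    Port      8086
    Database  fluentbit
    # Store 'method' and 'status' as InfluxDB tags instead of fields
    Tag_Keys  method status
```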
Using Auto_Tags=On in this example would cause an error, because every parsed field value is a string. This option is best used with metric-like records where one or more field values are not strings.
Basic example of Tags_List_Key
usage:
Before starting Fluent Bit, make sure the target database exists on InfluxDB. Using the above example, we will insert the data into a fluentbit database.
Log into InfluxDB console:
Create the database:
Check the database exists:
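Assuming an InfluxDB 1.x installation with the influx CLI available, those steps might look like this:

```
$ influx
> CREATE DATABASE fluentbit
> SHOW DATABASES
```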
The following command will gather CPU metrics from the system and send the data to InfluxDB database every five seconds:
Note that all records coming from the cpu input plugin have the tag cpu; this tag is used to generate the measurement in InfluxDB.
From InfluxDB console, choose your database:
Now query some specific fields:
The CPU input plugin gathers more metrics per CPU core; in the above example we selected just three specific metrics. The following query will give the full result:
Query tagged keys:
And now query method key values:
The kafka-rest output plugin allows you to flush your records into a Kafka REST Proxy server. The following instructions assume that you have fully operational Kafka REST Proxy and Kafka services running in your environment.
Key | Description | default |
---|---|---|
Kafka REST Proxy output plugin supports TLS/SSL, for more details about the properties available and general configuration, please refer to the TLS/SSL section.
In order to insert records into a Kafka REST Proxy service, you can run the plugin from the command line or through the configuration file:
The kafka-rest plugin can read the parameters from the command line through the -p argument (property), e.g.:
In your main configuration file append the following Input & Output sections:
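A minimal sketch of those sections, assuming a Kafka REST Proxy reachable on localhost and a topic named fluent-bit (adjust host, port and topic to your environment):

```
[INPUT]
    Name  cpu
    Tag   cpu

[OUTPUT]
    Name   kafka-rest
    Match  *
    Host   127.0.0.1
    Port   8082
    Topic  fluent-bit
```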
The http output plugin allows you to flush your records into an HTTP endpoint. For now the functionality is pretty basic: it issues a POST request with the data records in MessagePack (or JSON) format.
Key | Description | default |
---|---|---|
HTTP output plugin supports TLS/SSL, for more details about the properties available and general configuration, please refer to the TLS/SSL section.
In order to insert records into an HTTP server, you can run the plugin from the command line or through the configuration file:
The http plugin can read the parameters from the command line in two ways: through the -p argument (property) or by setting them directly through the service URI. The URI format is the following:
Using the format specified, you could start Fluent Bit through:
In your main configuration file, append the following Input & Output sections:
By default, the URI becomes the tag of the message and the original tag is ignored. To retain the tag, you have to create multiple configuration sections, each flushing to a different URI.
Another supported approach is sending the original message tag in a configurable header. It's up to the receiver to decide what to do with that header field: for example, parse it and use it as the tag.
To configure this behaviour, add this config:
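A sketch of what that configuration could look like; the header name FLUENT-TAG and the listener port shown here are assumptions for illustration:

```
[OUTPUT]
    Name        http
    Match       *
    Host        127.0.0.1
    Port        9880
    # Send the original record tag in this HTTP header
    header_tag  FLUENT-TAG
```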
Provided you are using Fluentd as data receiver, you can combine in_http
and out_rewrite_tag_filter
to make use of this HTTP header.
Notice how we override the tag, which comes from the URI path, with our custom header.
Suggested configuration for Sumo Logic using json_lines with iso8601 timestamps. The PrivateKey is specific to a configured HTTP collector.
A sample Sumo Logic query for the CPU input. (Requires the json_lines format with the iso8601 date format for the timestamp field.)
LogDNA is an intuitive cloud-based log management system that provides you with an easy interface to query your logs once they are stored.
The Fluent Bit logdna
output plugin allows you to send your log or events to a LogDNA compliant service like:
Before getting started with the plugin configuration, make sure you have an account with access to the service. You can start with a free trial at the following link:
Key | Description | Default |
---|---|---|
One of the features of the Fluent Bit + LogDNA integration is the ability to auto-enrich each record with further context.
When the plugin processes each record (or log), it looks for specific key names that might contain context for the record in question. The following table describes the keys and the discovery logic:
The following configuration example will emit a dummy example record and ingest it on LogDNA. Copy and paste the following content into a file called logdna.conf:
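A sketch of what logdna.conf could contain; the API key is a placeholder and the dummy record keys are only illustrative:

```
[SERVICE]
    Flush  1

[INPUT]
    Name   dummy
    Dummy  {"log": "a simple log message", "context": "something important"}

[OUTPUT]
    Name     logdna
    Match    *
    # Placeholder: use your own LogDNA ingestion key
    api_key  YOUR-LOGDNA-API-KEY
    tags     aa, bb
```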
run Fluent Bit with the new configuration file:
Fluent Bit output:
Your record will be available and visible in your LogDNA dashboard after a few seconds.
In your LogDNA dashboard, go to the top filters and mark the tags aa and bb; you will then be able to see your records as in the example below:
The Kafka output plugin allows you to ingest your records into an Apache Kafka service. This plugin uses the official librdkafka C library (built-in dependency).
Key | Description | default |
---|---|---|
Setting rdkafka.log.connection.close to false and rdkafka.request.required.acks to 1 are examples of recommended settings for librdkafka properties.
In order to insert records into Apache Kafka, you can run the plugin from the command line or through the configuration file:
The kafka plugin can read parameters through the -p argument (property), e.g:
In your main configuration file append the following Input & Output sections:
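A minimal sketch of those sections; the broker address and topic name are placeholders:

```
[INPUT]
    Name  cpu
    Tag   cpu

[OUTPUT]
    Name     kafka
    Match    *
    Brokers  192.168.1.3:9092
    Topics   test
```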
Fluent Bit comes with support for Avro encoding for the out_kafka plugin. Avro support is optional and must be activated at build time by using a build definition with CMake: -DFLB_AVRO_ENCODER=On, as in the following example which activates:
out_kafka with avro encoding
fluent-bit's prometheus
metrics via an embedded http endpoint
debugging support
builds the test suites
This example Fluent Bit configuration tails Kubernetes logs, decorates the log lines with Kubernetes metadata via the kubernetes filter, and then sends the fully decorated log lines to a Kafka broker, encoded with a specific Avro schema.
This example Fluent Bit configuration file creates example records with the payloadkey and msgkey keys. The msgkey value is used as the Kafka message key, and the payloadkey value as the payload.
Forward is the protocol used by Fluentd to route messages between peers. The forward output plugin provides interoperability between Fluent Bit and Fluentd. There are no configuration steps required besides specifying where Fluentd is located, which can be a local or a remote destination.
This plugin offers two different transports and modes:
Forward (TCP): It uses a plain TCP connection.
Secure Forward (TLS): when TLS is enabled, the plugin switches to Secure Forward mode.
The following parameters are mandatory for either Forward or Secure Forward mode:
Key | Description | Default |
---|---|---|
When using Secure Forward mode, TLS must be enabled. The following additional configuration parameters are available:
Before proceeding, make sure that Fluentd is installed. If it isn't, please refer to the Fluentd Installation document before continuing.
Once Fluentd is installed, create the following configuration file example that will allow us to stream data into it:
That configuration file specifies that Fluentd will listen for TCP connections on port 24224 through the forward input type. Then, for every message with a fluent_bit tag, it will print the message to the standard output.
In one terminal launch Fluentd specifying the new configuration file created:
Now that Fluentd is ready to receive messages, we need to specify where the forward output plugin will flush the information using the following format:
If the TAG parameter is not set, the plugin will retain the tag. Keep in mind that TAG is important for routing rules inside Fluentd.
Using the CPU input plugin as an example we will flush CPU metrics to Fluentd with tag fluent_bit:
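A sketch of that command line, assuming Fluentd is listening on the default forward port on localhost:

```
$ fluent-bit -i cpu -t fluent_bit -o forward://127.0.0.1:24224
```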
Now on the Fluentd side, you will see the CPU metrics gathered in the last seconds:
So we gathered CPU metrics and flushed them out to Fluentd properly.
DISCLAIMER: the following example does not consider the generation of certificates for best practice on production environments.
Secure Forward aims to provide a secure channel of communication with the remote Fluentd service using TLS.
Paste this content in a file called flb.conf:
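A sketch of what flb.conf could look like; the shared key, hostname and TLS settings are placeholders, and certificate verification is relaxed only because this example does not generate proper certificates:

```
[SERVICE]
    Flush  5

[INPUT]
    Name  cpu
    Tag   fluent_bit

[OUTPUT]
    Name           forward
    Match          *
    Host           127.0.0.1
    Port           24224
    # Placeholder shared secret; must match the Fluentd configuration
    Shared_Key     secret
    Self_Hostname  flb.local
    tls            on
    tls.verify     off
```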
Paste this content in a file called fld.conf:
If you're using Fluentd v1, set it up as below:
Start Fluentd:
Start Fluent Bit:
After five seconds, Fluent Bit will write records to Fluentd. In Fluentd output you will see a message like this:
The nats output plugin allows you to flush your records into a NATS Server endpoint. The following instructions assume that you have a fully operational NATS Server in place.
parameter | description | default |
---|---|---|
In order to override the default configuration values, the plugin uses the optional Fluent Bit network address format, e.g:
Fluent Bit only needs to know that it should use the nats output plugin; if no extra information is given, it will use the default values specified in the above table.
As described above, the target service and storage point can be changed, e.g:
For every set of records flushed to a NATS Server, Fluent Bit uses the following JSON format:
Each record is an individual entity represented as a JSON array that contains a UNIX_TIMESTAMP and a JSON map with a set of key/values. A summarized output of the CPU input plugin will look like this:
Observe employs the http output plugin, allowing you to flush your records into Observe.
For now the functionality is pretty basic and it issues a POST request with the data records in MessagePack (or JSON) format.
The following are the specific HTTP parameters to employ:
Key | Description | default |
---|---|---|
In your main configuration file, append the following Input & Output sections:
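A sketch of those sections, using placeholder values for the Observe customer ID and token (set OBSERVE_TOKEN in your environment or replace it inline):

```
[INPUT]
    Name  cpu
    Tag   cpu

[OUTPUT]
    Name      http
    Match     *
    # Placeholder: replace with your Observe customer collection endpoint
    Host      my-customer-id.collect.observeinc.com
    Port      443
    tls       on
    uri       /v1/http/fluentbit
    format    msgpack
    header    Authorization Bearer ${OBSERVE_TOKEN}
    header    X-Observe-Decoder fluent
    compress  gzip
```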
New Relic is a data management platform that gives you real-time insights into your data for developers, operations, and management teams.
The Fluent Bit nrlogs
output plugin allows you to send your logs to New Relic service.
Before getting started with the plugin configuration, make sure you have an account with access to the service. You can register and start with a free trial at the following link:
compress | Set the compression mechanism for the payload. This option allows two values: gzip (enabled by default) or false to disable compression. | gzip |
The following configuration example will emit a dummy example record and ingest it on New Relic. Copy and paste the following content into a file called newrelic.conf:
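A sketch of what newrelic.conf could contain; the api_key is a placeholder:

```
[SERVICE]
    Flush  1

[INPUT]
    Name   dummy
    Dummy  {"message": "a dummy log record"}

[OUTPUT]
    Name     nrlogs
    Match    *
    # Placeholder: use your New Relic ingestion (API) key
    api_key  YOUR-NEW-RELIC-API-KEY
```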
run Fluent Bit with the new configuration file:
Fluent Bit output:
PostgreSQL is a very popular and versatile open source database management system that supports the SQL language and is capable of storing both structured and unstructured data, such as JSON objects.
Given that Fluent Bit is designed to work with JSON objects, the pgsql
output plugin allows users to send their data to a PostgreSQL database and store it using the JSONB
type.
PostgreSQL 9.4 or higher is required.
According to the parameters you have set in the configuration file, the plugin will create the table defined by the table
option in the database defined by the database
option hosted on the server defined by the host
option. It will use the PostgreSQL user defined by the user
option, which needs to have the right privileges to create such a table in that database.
NOTE: If you are not familiar with how PostgreSQL's users and grants system works, you might find it useful to read the recommended links in the "References" section at the bottom.
A typical installation normally consists of a self-contained database for Fluent Bit in which you can store the output of one or more pipelines. Ultimately, it is your choice whether to store them in the same table, in separate tables, or even in separate databases based on several factors, including workload, scalability, data protection and security.
In this example, for the sake of simplicity, we use a single table called fluentbit
in a database called fluentbit
that is owned by the user fluentbit
. Feel free to use different names. Preferably, for security reasons, do not use the postgres
user (which has SUPERUSER
privileges).
Create the fluentbit user
Generate a robust random password (e.g. pwgen 20 1) and store it safely. Then, as the postgres system user on the server where PostgreSQL is installed, execute:
At the prompt, please provide the password that you previously generated.
As a result, the user fluentbit
without superuser privileges will be created.
If you prefer, instead of the createuser
application, you can directly use the SQL command .
Create the fluentbit database
As the postgres system user, please run:
This will create a database called fluentbit
owned by the fluentbit
user. As a result, the fluentbit
user will be able to safely create the data table.
In your main configuration file add the following section:
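A sketch of that section, assuming the fluentbit user, database and table names created above; the password is a placeholder:

```
[OUTPUT]
    Name      pgsql
    Match     *
    Host      127.0.0.1
    Port      5432
    User      fluentbit
    # Placeholder: the password generated for the fluentbit user
    Password  YOUR-PASSWORD-HERE
    Database  fluentbit
    Table     fluentbit
```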
The output plugin automatically creates a table with the name specified by the table
configuration option and made up of the following fields:
tag TEXT
time TIMESTAMP WITHOUT TIME ZONE
data JSONB
As you can see, the timestamp does not contain any information about the time zone; it therefore refers to the time zone used by the connection to PostgreSQL (the timezone setting).
PostgreSQL 10 introduces support for declarative partitioning. In order to improve the vertical scalability of the database, you can decide to partition your tables on time ranges (for example, on a monthly basis). PostgreSQL also supports subpartitions, allowing you to partition records by hash (version 11+), as well as default partitions (version 11+).
If you are starting now, our recommendation at the moment is to choose the latest major version of PostgreSQL.
PostgreSQL is a really powerful and extensible database engine. More expert users can take advantage of BEFORE INSERT triggers on the main table and re-route records to normalised tables, depending on tags and the content of the actual JSON objects.
For example, you can use Fluent Bit to send HTTP log records to the landing table defined in the configuration file. This table contains a BEFORE INSERT trigger (a function in the plpgsql language) that normalises the content of the JSON object and inserts the record into another table (with its own structure and partitioning model). This kind of trigger allows you to discard the record from the landing table by returning NULL.
Here follows a list of useful resources from the PostgreSQL documentation:
An output plugin to expose Prometheus Metrics
The prometheus exporter allows you to take metrics from Fluent Bit and expose them such that a Prometheus instance can scrape them.
Important Note: The prometheus exporter only works with metric plugins, such as Node Exporter Metrics
Key | Description | Default |
---|
The Prometheus exporter only works with metrics captured from metric plugins. In the following example, host metrics are captured by the node exporter metrics plugin and then routed to the prometheus exporter. Within the output plugin two labels are added: app="fluent-bit" and color="blue".
Send logs to Amazon OpenSearch Service
The opensearch output plugin allows you to ingest your records into an OpenSearch database. The following instructions assume that you have a fully operational OpenSearch service running in your environment.
Key | Description | default |
---|
The write_operation can be any of:
Please note that Id_Key or Generate_ID is required for the update and upsert scenarios.
In order to insert records into an OpenSearch service, you can run the plugin from the command line or through the configuration file:
The opensearch plugin can read the parameters from the command line in two ways: through the -p argument (property) or by setting them directly through the service URI. The URI format is the following:
Using the format specified, you could start Fluent Bit through:
which is similar to do:
Some input plugins may generate messages where the field names contain dots. This opensearch plugin replaces them with an underscore, e.g.:
becomes
The following snippet demonstrates using the namespace name, as extracted by the kubernetes filter, as the logstash prefix:
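A possible sketch of such a snippet; the record accessor below assumes the kubernetes filter has already added a kubernetes map with a namespace_name key, and the host and port are placeholders:

```
[OUTPUT]
    Name                 opensearch
    Match                *
    Host                 127.0.0.1
    Port                 9200
    Logstash_Format      On
    Logstash_Prefix      logstash
    # Use the namespace name from the record as the index prefix
    Logstash_Prefix_Key  $kubernetes['namespace_name']
```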
For records that do not have the field kubernetes.namespace_name, the default prefix logstash will be used.
The Amazon OpenSearch Service adds an extra security layer where HTTP requests must be signed with AWS Sigv4. This plugin supports Amazon OpenSearch Service with IAM Authentication.
Example configuration:
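A sketch of such a configuration; the domain endpoint, index and region are placeholders for your own Amazon OpenSearch Service values:

```
[OUTPUT]
    Name        opensearch
    Match       *
    # Placeholder: your Amazon OpenSearch Service domain endpoint
    Host        my-domain.us-west-2.es.amazonaws.com
    Port        443
    Index       my_index
    AWS_Auth    On
    AWS_Region  us-west-2
    tls         On
```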
Notice that the Port is set to 443, tls is enabled, and AWS_Region is set.
Similarly to Elastic Cloud, OpenSearch version 2.0 and above needs the type option to be removed by setting Suppress_Type_Name On.
Without this you will see errors like:
Amazon OpenSearch Serverless is an offering that eliminates your need to manage OpenSearch clusters. All existing Fluent Bit OpenSearch output plugin options work with OpenSearch Serverless. For Fluent Bit, the only difference is that you must specify the service name as aoss (Amazon OpenSearch Serverless) when you enable AWS_Auth:
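A sketch of that configuration, assuming a serverless collection endpoint in us-east-1 (all values are placeholders):

```
[OUTPUT]
    Name              opensearch
    Match             *
    # Placeholder: your OpenSearch Serverless collection endpoint
    Host              my-collection-id.us-east-1.aoss.amazonaws.com
    Port              443
    Index             my_index
    AWS_Auth          On
    AWS_Region        us-east-1
    AWS_Service_Name  aoss
    tls               On
```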
Data Access Permissions
With data access permissions, IAM policies are not needed to access the collection.
For example, in this scenario the logs show that a connection was successfully established with the OpenSearch domain, and yet an error is still returned:
This behavior could be indicative of a hard-to-detect issue with index shard usage in the OpenSearch domain.
While OpenSearch index shards and disk space are related, they are not directly tied to one another.
OpenSearch domains are limited to 1000 index shards per data node, regardless of the size of the nodes. And, importantly, shard usage is not proportional to disk usage: an individual index shard can hold anywhere from a few kilobytes to dozens of gigabytes of data.
In other words, depending on the way index creation and shard allocation are configured in the OpenSearch domain, all of the available index shards could be used long before the data nodes run out of disk space and begin exhibiting disk-related performance issues (e.g. nodes crashing, data corruption, or the dashboard going offline).
The primary issue that arises when a domain is out of available index shards is that new indexes can no longer be created (though logs can still be added to existing indexes).
When that happens, the Fluent Bit OpenSearch output may begin showing confusing behavior. For example:
Errors suddenly appear (outputs were previously working and there were no changes to the Fluent Bit configuration when the errors began)
Errors are not consistently occurring (some logs are still reaching the OpenSearch domain)
The Fluent Bit service logs show errors, but without any detail as to the root cause
If any of those symptoms are present, consider using the OpenSearch domain's API endpoints to troubleshoot possible shard issues.
Running this command will show both the shard count and disk usage on all of the nodes in the domain.
Index creation issues will begin to appear if any hot data nodes have around 1000 shards OR if the total number of shards spread across hot and ultrawarm data nodes in the cluster is greater than 1000 times the total number of nodes (e.g., in a cluster with 6 nodes, the maximum shard count would be 6000).
Alternatively, running this command to manually create a new index will return an explicit error related to shard count if the maximum has been exceeded.
There are multiple ways to resolve excessive shard usage in an OpenSearch domain such as deleting or combining indexes, adding more data nodes to the cluster, or updating the domain's index creation and sharding strategy. Consult the OpenSearch documentation for more information on how to use these strategies.
Send logs to Oracle Cloud Infrastructure Logging Analytics Service
The Oracle Cloud Infrastructure Logging Analytics output plugin allows you to ingest your log records into the OCI Logging Analytics service.
Oracle Cloud Infrastructure Logging Analytics is a machine learning-based cloud service that monitors, aggregates, indexes, and analyzes all log data from on-premises and multicloud environments, enabling users to search, explore, and correlate this data to troubleshoot and resolve problems faster and derive insights to make better operational decisions.
For details about OCI Logging Analytics refer to https://docs.oracle.com/en-us/iaas/logging-analytics/index.html
Following are the top level configuration properties of the plugin:
Key | Description | default |
---|
The following parameters are to set the Logging Analytics resources that must be used to process your logs by OCI Logging Analytics.
Key | Description | default |
---|
In order to insert records into the OCI Logging Analytics service, you can run the plugin from the command line or through the configuration file:
The OCI Logging Analytics plugin can read the parameters from the command line through the -p argument (property), e.g.:
In your main configuration file append the following Input & Output sections:
In case of multiple inputs, where oci_la_* properties can differ, you can add the properties in the record itself and instruct the plugin to read these properties from the record. The option oci_config_in_record, when set to true in the output config, will make the plugin read the mandatory and optional oci_la properties from the incoming record. The user must ensure that the necessary configs have been inserted using relevant filters, otherwise the respective chunk will be dropped. Below is an example to insert oci_la_log_source_name and oci_la_log_group_id in the record:
You can attach certain metadata to the log events collected from various inputs.
The above configuration will generate a payload that looks like this
The multiple oci_la_global_metadata and oci_la_metadata options are turned into a JSON object of key value pairs, nested under the key metadata.
With oci_config_in_record option set to true, the metadata key-value pairs will need to be injected in the record as an object of key value pair nested under the respective metadata field. Below is an example of one such configuration
The above configuration first injects the necessary metadata keys and values in the record directly, with a prefix olgm. attached to the keys in order to segregate the metadata keys from rest of the record keys. Then, using a nest filter only the metadata keys are selected by the filter and nested under oci_la_global_metadata key in the record, and the prefix olgm. is removed from the metadata keys.
An output plugin to submit Logs, Metrics, or Traces to an OpenTelemetry endpoint
The OpenTelemetry plugin allows you to take logs, metrics, and traces from Fluent Bit and submit them to an OpenTelemetry HTTP endpoint.
Important Note: At the moment only HTTP endpoints are supported.
Key | Description | Default |
---|
The OpenTelemetry plugin works with logs and only the metrics collected from one of the metric input plugins. In the following example, log records generated by the dummy plugin and the host metrics collected by the node exporter metrics plugin are exported by the OpenTelemetry output plugin.
Loki is a multi-tenant log aggregation system inspired by Prometheus. It is designed to be very cost effective and easy to operate.
The Fluent Bit loki
built-in output plugin allows you to send your log or events to a Loki service. It supports data enrichment with Kubernetes labels, custom label keys and Tenant ID within others.
Be aware there is a separate Golang output plugin provided by Grafana with different configuration options.
Key | Description | Default |
---|
Loki stores the record logs inside streams; a stream is defined by a set of labels, and at least one label is required.
Fluent Bit implements a flexible mechanism to set labels using fixed key/value pairs of text, while also allowing certain keys that exist as part of the records being processed to be set as labels. Consider the following JSON record (pretty printed for readability):
If you decide that your Loki stream will be composed of two labels called job and the value of the record key called stream, your labels configuration property might look as follows:
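Under those assumptions, the labels property could look like the following sketch:

```
[OUTPUT]
    Name    loki
    Match   *
    # Fixed label 'job' plus a label taken from the nested record key sub.stream
    labels  job=fluentbit, $sub['stream']
```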
When processing the above configuration, the final labels for the stream in question become:
Another feature of label management is the ability to provide custom key names: using the same record accessor pattern, we can specify the key name manually and let the value be populated automatically at runtime, e.g.:
When processing that new configuration, the internal labels will be:
label_keys property
The additional configuration property label_keys allows you to specify multiple record keys that need to be placed as part of the outgoing stream labels. This is similar to the labels property explained above; consider it another way to set a record key in the stream, with the limitation that you cannot use a custom name for the key value.
The following configuration examples generate the same Stream Labels:
The above configuration accomplishes the same as this one:
Both will generate the following stream labels:
label_map_path property
The configuration property label_map_path reads a JSON file that defines how to extract labels from each record.
The file should contain a JSON object. Each key defines how to get the label value from a nested record, and each value is used as the label name.
The following configuration examples generate the same Stream Labels:
map.json:
The following configuration examples generate the same Stream Labels:
The above configuration accomplishes the same as this one:
Both will generate the following stream labels:
Note that if you are running in a Kubernetes environment, you might want to enable the option auto_kubernetes_labels, which will auto-populate the streams with the Pod labels for you. Consider the following configuration:
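A sketch of that configuration; the host and port are placeholders for your Loki endpoint:

```
[OUTPUT]
    Name                    loki
    Match                   *
    Host                    127.0.0.1
    Port                    3100
    labels                  job=fluentbit
    auto_kubernetes_labels  on
```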
Based on the JSON example provided above, the internal stream labels will be:
If there is only one key remaining after removing keys, you can use the drop_single_key property to send its value to Loki, rather than a single key=value pair.
Consider this simple JSON example:
If the value is a string, line_format is json, and drop_single_key is true, it will be sent as a quoted string.
The outputted line would show in Loki as:
If drop_single_key is raw, or line_format is key_value, it will show in Loki as:
If you want both structured JSON and plain-text logs in Loki, you should set drop_single_key to raw and line_format to json. Loki does not interpret a quoted string as valid JSON, and so to remove the quotes without drop_single_key set to raw, you would need to use a query like this:
If drop_single_key is off, it will show in Loki as:
You can get the same behavior this flag provides in Loki with drop_single_key set to off with this query:
The following configuration:
Defines fixed values for the cluster and region labels.
Uses the record accessor pattern to set the namespace label to the namespace name as determined by the Kubernetes metadata filter (not shown).
Uses a structured metadata field to hold the Kubernetes pod name.
Other common uses for structured metadata include trace and span IDs, process and thread IDs, and log levels.
Structured metadata is officially supported starting with Loki 3.0, and shouldn't be used with Loki deployments prior to Loki 3.0.
This plugin inherits core Fluent Bit features to customize the network behavior and optionally enable TLS in the communication channel. For more details about the specific options available, refer to the following articles:
Note that all options mentioned in the articles above must be enabled in the plugin configuration in question.
An example configuration - make sure to set the credentials and ensure the host URL matches the correct one for your deployment:
The following configuration example will emit a dummy example record and ingest it on Loki. Copy and paste the following content into a file called out_loki.conf:
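A sketch of what out_loki.conf could contain; the dummy record mirrors the JSON example used earlier in this section and the Loki endpoint is a placeholder:

```
[SERVICE]
    Flush  1

[INPUT]
    Name   dummy
    Dummy  {"key": 1, "sub": {"stream": "stdout", "id": "some id"}}

[OUTPUT]
    Name    loki
    Match   *
    Host    127.0.0.1
    Port    3100
    labels  job=fluentbit, $sub['stream']
```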
run Fluent Bit with the new configuration file:
Fluent Bit output:
The null output plugin just throws away events.
The plugin doesn't support configuration parameters.
You can run the plugin from the command line or through the configuration file:
From the command line you can let Fluent Bit throw away events with the following options:
In your main configuration file append the following Input & Output sections:
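A minimal sketch of those sections:

```
[INPUT]
    Name  cpu
    Tag   cpu

[OUTPUT]
    Name   null
    Match  *
```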
Send logs to OpenObserve using Fluent Bit
Use the OpenObserve output plugin to ingest logs into OpenObserve.
Before you begin, you need an , an HTTP_User
, and an HTTP_Passwd
. You can find these fields under Ingestion in OpenObserve Cloud. Alternatively, you can achieve this with various installation types as mentioned in the
Key | Description | Default |
---|
Use this configuration file to get started:
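A sketch of such a configuration, using the http output as described in the table above; the host, port, credentials and the organization/stream segments of the URI are placeholders for your own OpenObserve values:

```
[OUTPUT]
    Name              http
    Match             *
    # Placeholder host/port of your OpenObserve instance
    Host              localhost
    Port              5080
    tls               on
    # Placeholder URI: /api/<organization>/<stream>/_json
    URI               /api/default/default/_json
    Format            json
    Json_date_key     _timestamp
    Json_date_format  iso8601
    HTTP_User         YOUR-OPENOBSERVE-USER
    HTTP_Passwd       YOUR-OPENOBSERVE-PASSWORD
    compress          gzip
```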
The number of workers to perform flush operations for this output.
The number of workers to perform flush operations for this output.
Template for Log Group name using Fluent Bit syntax. This field is optional and if configured it overrides the log_group_name
. If the template translation fails, an error is logged and the log_group_name
(which is still required) is used instead. See the tutorial below for an example.
Template for Log Stream name using Fluent Bit syntax. This field is optional and if configured it overrides the other log stream options. If the template translation fails, an error is logged and the log_stream_name or log_stream_prefix are used instead (and thus one of those fields is still required to be configured). See the tutorial below for an example.
An optional parameter that can be used to tell CloudWatch the format of the data. A value of json/emf enables CloudWatch to extract custom metrics embedded in a JSON payload. See the .
A list of lists containing the dimension keys that will be applied to all metrics. The values within a dimension set MUST also be members on the root-node. For more information about dimensions, see and . In the fluent-bit config, metric_dimensions is a comma and semicolon separated string. If you have only one list of dimensions, put the values as a comma separated string. If you want to put list of lists, use the list as semicolon separated strings. For example, if you set the value as 'dimension_1,dimension_2;dimension_3', we will convert it as [[dimension_1, dimension_2],[dimension_3]]
The number of workers to perform flush operations for this output. Default: 1.
Specify the buffer size used to read the response from the Elasticsearch HTTP service. This option is useful for debugging purposes where it is required to read full responses; note that the response size grows depending on the number of records inserted. To set an unlimited amount of memory, set this value to False; otherwise the value must be set according to the specification.
Service name to be used in AWS Sigv4 signature. For integration with Amazon OpenSearch Serverless, set to aoss
. See the section on Amazon OpenSearch Serverless for more information.
When included: the value of the key in the record will be evaluated as key reference and overrides Logstash_Prefix for index generation. If the key/value is not found in the record then the Logstash_Prefix option will act as a fallback. The parameter is expected to be a .
Time format (based on ) to generate the second part of the Index name.
When enabled, the mapping type is removed and the Type option is ignored. Elasticsearch 8.0.0 and higher no longer supports mapping types, so this option shall be set to On for those versions.
The number of workers to perform flush operations for this output.
Required - Your .
Optional - Specify an HTTP Proxy. The expected format of this value is . Note that https is not supported yet.
Recommended - The human readable name for your service generating the logs (e.g. the name of your application or database). If unset, Datadog will look for the service using ."
Recommended - A human readable name for the underlying technology of your service (e.g. postgres
or nginx
). If unset, Datadog will look for the source in the .
Optional - The you want to assign to your logs in Datadog. If unset, Datadog will look for the tags in the .
The number of workers to perform flush operations for this output.
Key to be used as the log level. Its value must be a standard syslog severity level (between 0 and 7). (Optional in GELF)
The number of workers to perform flush operations for this output.
The log type to parse logs as. Google Chronicle supports parsing for .
The number of workers to perform flush operations for this output.
The number of workers to perform flush operations for this output. Default: 1.
Required - The client secret of the AAD application ().
Required - Data Collection Rule (DCR) immutable ID (see to collect the immutable id)
The number of workers to perform flush operations for this output.
The number of workers to perform flush operations for this output.
Key | Description | default |
---|---|---|
Key | Description |
---|---|
Key | Description | Default |
---|---|---|
Alternatively, you can use the SQL command .
Make sure that the fluentbit
user can connect to the fluentbit
database on the specified target host. This might require you to properly configure the file.
Key | Description | Default |
---|
Fluent Bit relies on , the PostgreSQL native client API, written in C language. For this reason, default values might be affected by and compilation settings. The above table, in brackets, list the most common default values for each connection option.
For security reasons, it is advised to follow the directives included in the section.
For more information on the JSONB
data type in PostgreSQL, please refer to the page in the official documentation, where you can find instructions on how to index or query the objects (including jsonpath
introduced in PostgreSQL 12).
For more information on horizontal partitioning in PostgreSQL, please refer to the page in the official documentation.
The parameters index and type can be confusing if you are new to OpenSearch. If you have used a common relational database before, they can be compared to the database and table concepts. Also see
OpenSearch output plugin supports TLS/SSL, for more details about the properties available and general configuration, please refer to the section.
Operation | Description |
---|
In your main configuration file append the following Input & Output sections. You can visualize this configuration
See for details on how AWS credentials are fetched.
When sending logs to OpenSearch Serverless, your AWS IAM entity needs . Give your IAM entity the following data access permissions to your serverless collection:
Occasionally the Fluent Bit service may generate errors without any additional detail in the logs to explain the source of the issue, even with the service's log_level attribute set to .
OCI Logging Analytics output plugin supports TLS/SSL, for more details about the properties available and general configuration, please refer to the section.
The OCI Logging Analytics service must be onboarded with the minimum required policies in the OCI region where you want to monitor. Refer to the OCI documentation for details.
Create OCI Logging Analytics LogGroup(s) if not done already. Refer for details.
As you can see, the label job has the value fluentbit, and the second label is configured to access the nested map called sub, targeting the value of the key stream. Note that the second label name must start with a $, which means it is a record accessor pattern that gives you the ability to retrieve values from nested maps by using the key names.
Structured metadata lets you attach custom fields to individual log lines without embedding the information in the content of the log line. This capability works well for high-cardinality data that isn't suited for using labels. While not a label, the structured_metadata configuration parameter operates similarly to the labels parameter. Both parameters are comma-delimited key=value lists, and both can use record accessors to reference keys within the record being processed.
Networking: timeouts, keepalive and source address
TLS / SSL: all about TLS configuration and certificates
Fluent Bit supports sending logs (and metrics) to Grafana Cloud by providing the appropriate URL and ensuring TLS is enabled.
Key | Description | default |
---|---|---|
Host | IP address or hostname of the target InfluxDB service | 127.0.0.1 |
Port | TCP port of the target InfluxDB service | 8086 |
Database | InfluxDB database name where records will be inserted | fluentbit |
Bucket | InfluxDB bucket name where records will be inserted - if specified, database is ignored and v2 of the API is used | |
Org | InfluxDB organization name where the bucket is (v2 only) | fluent |
Sequence_Tag | The name of the tag whose value is incremented for consecutive simultaneous events. | _seq |
HTTP_User | Optional username for HTTP Basic Authentication | |
HTTP_Passwd | Password for user defined in HTTP_User | |
HTTP_Token | Authentication token used with InfluxDB v2 - if specified, both HTTP_User and HTTP_Passwd are ignored | |
HTTP_Header | Add a HTTP header key/value pair. Multiple headers can be set | |
Tag_Keys | Space separated list of keys that need to be tagged | |
Auto_Tags | Automatically tag keys where the value is a string. This option takes a boolean value: True/False, On/Off. | Off |
Uri | Custom URI endpoint | |
Workers | The number of workers to perform flush operations for this output. | 0 |
Key | Description | default |
---|---|---|
Host | IP address or hostname of the target Kafka REST Proxy server | 127.0.0.1 |
Port | TCP port of the target Kafka REST Proxy server | 8082 |
Topic | Set the Kafka topic | fluent-bit |
Partition | Set the partition number (optional) | |
Message_Key | Set a message key (optional) | |
Time_Key | The Time_Key property defines the name of the field that holds the record timestamp. | @timestamp |
Time_Key_Format | Defines the format of the timestamp. | %Y-%m-%dT%H:%M:%S |
Include_Tag_Key | Append the Tag name to the final record. | Off |
Tag_Key | If Include_Tag_Key is enabled, this property defines the key name for the tag. | _flb-key |
Workers | The number of workers to perform flush operations for this output. | 0 |
google_service_credentials
Absolute path to a Google Cloud credentials JSON file.
Value of the environment variable $GOOGLE_SERVICE_CREDENTIALS
project_id
The project id containing the BigQuery dataset to stream into.
The value of the project_id
in the credentials file
dataset_id
The dataset id of the BigQuery dataset to write into. This dataset must exist in your project.
table_id
The table id of the BigQuery table to write into. This table must exist in the specified dataset and the schema must match the output.
skip_invalid_rows
Insert all valid rows of a request, even if invalid rows exist. The default value is false, which causes the entire request to fail if any invalid rows exist.
Off
ignore_unknown_values
Accept rows that contain values that do not match the schema. The unknown values are ignored. Default is false, which treats unknown values as errors.
Off
enable_workload_identity_federation
Enables workload identity federation as an alternative authentication method. Cannot be used with service account credentials file or environment variable. AWS is the only identity provider currently supported.
Off
aws_region
Used to construct a regional endpoint for AWS STS to verify AWS credentials obtained by Fluent Bit. Regional endpoints are recommended by AWS.
project_number
GCP project number where the identity provider was created. Used to construct the full resource name of the identity provider.
pool_id
GCP workload identity pool where the identity provider was created. Used to construct the full resource name of the identity provider.
provider_id
GCP workload identity provider. Used to construct the full resource name of the identity provider. Currently only AWS accounts are supported.
google_service_account
Email address of the Google service account to impersonate. The workload identity provider must have permissions to impersonate this service account, and the service account must have permissions to access Google BigQuery resources (e.g. write
access to tables)
workers
The number of workers to perform flush operations for this output.
0
host
IP address or hostname of the target HTTP Server
127.0.0.1
http_User
Basic Auth Username
http_Passwd
Basic Auth Password. Requires HTTP_User to be set
AWS_Auth
Enable AWS SigV4 authentication
false
AWS_Service
Specify the AWS service code, i.e. es, xray, etc., of your service, used by SigV4 authentication. Usually can be found in the service endpoint's subdomains, protocol://service-code.region-code.amazonaws.com
AWS_Region
Specify the AWS region of your service, used by SigV4 authentication
AWS_STS_Endpoint
Specify the custom sts endpoint to be used with STS API, used with the AWS_Role_ARN option, used by SigV4 authentication
AWS_Role_ARN
AWS IAM Role to assume, used by SigV4 authentication
AWS_External_ID
External ID for the AWS IAM Role specified with aws_role_arn
, used by SigV4 authentication
port
TCP port of the target HTTP Server
80
Proxy
Specify an HTTP Proxy. The expected format of this value is http://HOST:PORT
. Note that HTTPS is not currently supported. It is recommended not to set this and to configure the HTTP proxy environment variables instead as they support both HTTP and HTTPS.
uri
Specify an optional HTTP URI for the target web server, e.g: /something
/
compress
Set payload compression mechanism. Option available is 'gzip'
format
Specify the data format to be used in the HTTP request body, by default it uses msgpack. Other supported formats are json, json_stream and json_lines and gelf.
msgpack
allow_duplicated_headers
Specify if duplicated headers are allowed. If a duplicated header is found, the latest key/value set is preserved.
true
log_response_payload
Specify if the response paylod should be logged or not.
true
header_tag
Specify an optional HTTP header field for the original message tag.
header
Add a HTTP header key/value pair. Multiple headers can be set.
json_date_key
Specify the name of the time key in the output record. To disable the time key just set the value to false
.
date
json_date_format
Specify the format of the date. Supported formats are double, epoch, iso8601 (eg: 2018-05-30T09:39:52.000681Z) and java_sql_timestamp (eg: 2018-05-30 09:39:52.000681)
double
gelf_timestamp_key
Specify the key to use for timestamp
in gelf format
gelf_host_key
Specify the key to use for the host
in gelf format
gelf_short_message_key
Specify the key to use as the short
message in gelf format
gelf_full_message_key
Specify the key to use for the full
message in gelf format
gelf_level_key
Specify the key to use for the level
in gelf format
body_key
Specify the key to use as the body of the request (must prefix with "$"). The key must contain either a binary or raw string, and the content type can be specified using headers_key (which must be passed whenever body_key is present). When this option is present, each msgpack record will create a separate request.
headers_key
Specify the key to use as the headers of the request (must prefix with "$"). The key must contain a map, which will have the contents merged on the request headers. This can be used for many purposes, such as specifying the content-type of the data contained in body_key.
workers
The number of workers to perform flush operations for this output.
2
logdna_host
LogDNA API host address
logs.logdna.com
logdna_port
LogDNA TCP Port
443
api_key
API key to get access to the service. This property is mandatory.
hostname
Name of the local machine or device where Fluent Bit is running.
When this value is not set, Fluent Bit lookup the hostname and auto populate the value. If it cannot be found, an unknown
value will be set instead.
mac
Mac address. This value is optional.
ip
IP address of the local hostname. This value is optional.
tags
A list of comma separated strings to group records in LogDNA and simplify the query with filters.
file
Optional name of a file being monitored. Note that this value is only set if the record do not contain a reference to it.
app
Name of the application. This value is auto discovered on each record, if not found, the default value is used.
Fluent Bit
workers
The number of workers to perform flush operations for this output.
`0`
level
If the record contains a key called level
or severity
, it will populate the context level
key with that value. If not found, the context key is not set.
file
if the record contains a key called file
, it will populate the context file
with the value found, otherwise If the plugin configuration provided a file
property, that value will be used instead (see table above).
app
If the record contains a key called app
, it will populate the context app
with the value found, otherwise it will use the value set for app
in the configuration property (see table above).
meta
if the record contains a key called meta
, it will populate the context meta
with the value found.
format
Specify data format, options available: json, msgpack, raw.
json
message_key
Optional key to store the message
message_key_field
If set, the value of Message_Key_Field in the record will indicate the message key. If not set nor found in the record, Message_Key will be used (if set).
timestamp_key
Set the key to store the record timestamp
@timestamp
timestamp_format
Specify timestamp format, should be 'double', 'iso8601' (seconds precision) or 'iso8601_ns' (fractional seconds precision)
double
brokers
Single or multiple list of Kafka Brokers, e.g: 192.168.1.3:9092, 192.168.1.4:9092.
topics
Single entry or list of topics separated by comma (,) that Fluent Bit will use to send messages to Kafka. If only one topic is set, that one will be used for all records. Instead if multiple topics exists, the one set in the record by Topic_Key will be used.
fluent-bit
topic_key
If multiple Topics exists, the value of Topic_Key in the record will indicate the topic to use. E.g: if Topic_Key is router and the record is {"key1": 123, "router": "route_2"}, Fluent Bit will use topic route_2. Note that if the value of Topic_Key is not present in Topics, then by default the first topic in the Topics list will indicate the topic to be used.
dynamic_topic
adds unknown topics (found in Topic_Key) to Topics. So in Topics only a default topic needs to be configured
Off
queue_full_retries
Fluent Bit queues data into rdkafka library, if for some reason the underlying library cannot flush the records the queue might fills up blocking new addition of records. The queue_full_retries
option set the number of local retries to enqueue the data. The default value is 10 times, the interval between each retry is 1 second. Setting the queue_full_retries
value to 0
set's an unlimited number of retries.
10
rdkafka.{property}
{property}
can be any librdkafka properties
raw_log_key
When using the raw format and set, the value of raw_log_key in the record will be send to kafka as the payload.
workers
The number of workers to perform flush operations for this output.
0
Host
Target host where Fluent-Bit or Fluentd are listening for Forward messages.
127.0.0.1
Port
TCP Port of the target service.
24224
Time_as_Integer
Set timestamps in integer format, it enable compatibility mode for Fluentd v0.12 series.
False
Upstream
If Forward will connect to an Upstream instead of a simple host, this property defines the absolute path for the Upstream configuration file, for more details about this refer to the Upstream Servers documentation section.
Unix_Path
Specify the path to unix socket to send a Forward message. If set, Upstream
is ignored.
Tag
Overwrite the tag as we transmit. This allows the receiving pipeline start fresh, or to attribute source.
Send_options
Always send options (with "size"=count of messages)
False
Require_ack_response
Send "chunk"-option and wait for "ack" response from server. Enables at-least-once and receiving server can control rate of traffic. (Requires Fluentd v0.14.0+ server)
False
Compress
Set to 'gzip' to enable gzip compression. Incompatible with Time_as_Integer=True
and tags set dynamically using the Rewrite Tag filter. Requires Fluentd server v0.14.7 or later.
none
Workers
The number of workers to perform flush operations for this output.
2
Shared_Key
A key string known by the remote Fluentd used for authorization.
Empty_Shared_Key
Use this option to connect to Fluentd with a zero-length secret.
False
Username
Specify the username to present to a Fluentd server that enables user_auth
.
Password
Specify the password corresponding to the username.
Self_Hostname
Default value of the auto-generated certificate common name (CN).
localhost
tls
Enable or disable TLS support
Off
tls.verify
Force certificate validation
On
tls.debug
Set TLS debug verbosity level. It accept the following values: 0 (No debug), 1 (Error), 2 (State change), 3 (Informational) and 4 Verbose
1
tls.ca_file
Absolute path to CA certificate file
tls.crt_file
Absolute path to Certificate file.
tls.key_file
Absolute path to private Key file.
tls.key_passwd
Optional password for tls.key_file file.
parameter | description | default |
---|---|---|
host | IP address or hostname of the NATS Server | 127.0.0.1 |
port | TCP port of the target NATS Server | 4222 |
workers | The number of workers to perform flush operations for this output. | 0 |
host
IP address or hostname of Observe's data collection endpoint. $(OBSERVE_CUSTOMER) is your Customer ID
OBSERVE_CUSTOMER.collect.observeinc.com
port
TCP port of to employ when sending to Observe
443
tls
Specify to use tls
on
uri
Specify the HTTP URI for the Observe's data ingest
/v1/http/fluentbit
format
The data format to be used in the HTTP request body
msgpack
header
The specific header that provides the Observe token needed to authorize sending data into a datastream.
Authorization Bearer ${OBSERVE_TOKEN}
header
The specific header to instructs Observe how to decode incoming payloads
X-Observe-Decoder fluent
compress
Set payload compression mechanism. Option available is 'gzip'
gzip
tls.ca_file
For use with Windows: provide path to root cert
workers
The number of workers to perform flush operations for this output.
0
base_uri
Full address of New Relic API end-point. By default the value points to the US end-point.
If you want to use the EU end-point you can set this key to the following value: https://log-api.eu.newrelic.com/log/v1
api_key
Your key for data ingestion. The API key is also called the ingestion key, you can get more details on how to generated in the official documentation here.
From a configuration perspective either an api_key
or an license_key
is required. New Relic suggest to use primary the api_key
.
license_key
Optional authentication parameter for data ingestion.
Note that New Relic suggest to use the api_key
instead. You can read more about the License Key here.
workers
The number of workers to perform flush operations for this output.
0
create (default) | adds new data - if the data already exists (based on its id), the op is skipped. |
index | new data is added while existing data (based on its id) is replaced (reindexed). |
update | updates existing data (based on its id). If no data is found, the op is skipped. |
upsert | known as merge or insert if the data does not exist, updates if the data exists (based on its id). |
Host | Required. The OpenObserve server where you are sending logs. |
|
TLS | Required: Enable end-to-end security using TLS. Set to |
|
compress | Recommended: Compresses the payload in GZIP format. OpenObserve supports and recommends setting this to | none |
HTTP_User | Required: Username for HTTP authentication. | none |
HTTP_Passwd | Required: Password for HTTP authentication. | none |
URI | Required: The API path used to send logs. |
|
Format | Required: The format of the log payload. OpenObserve expects JSON. |
|
json_date_key | Optional: The JSON key used for timestamps in the logs. |
|
json_date_format | Optional: The format of the date in logs. OpenObserve supports ISO 8601. |
|
include_tag_key | If |
|
host | Hostname/IP address of the PostgreSQL instance | - (127.0.0.1) |
port | PostgreSQL port | - (5432) |
user | PostgreSQL username | - (current user) |
password | Password of PostgreSQL username | - |
database | Database name to connect to | - (current user) |
table | Table name where to store data | - |
connection_options | Specifies any valid PostgreSQL connection options | - |
timestamp_key | Key in the JSON object containing the record timestamp | date |
async | Define if we will use async or sync connections | false |
min_pool_size | Minimum number of connections in async mode | 1 |
max_pool_size | Maximum amount of connections in async mode | 4 |
cockroachdb | Set to true if you will connect the plugin to a CockroachDB database | false |
host | This is the address Fluent Bit will bind to when hosting Prometheus metrics. Note: | 0.0.0.0 |
port | This is the port Fluent Bit will bind to when hosting prometheus metrics | 2021 |
add_label | This allows you to add custom labels to all metrics exposed through the prometheus exporter. You may have multiple of these fields |
workers | The number of workers to perform flush operations for this output. |
Host | IP address or hostname of the target OpenSearch instance | 127.0.0.1 |
Port | TCP port of the target OpenSearch instance | 9200 |
Path | OpenSearch accepts new data on HTTP query path "/_bulk". But it is also possible to serve OpenSearch behind a reverse proxy on a subpath. This option defines such path on the fluent-bit side. It simply adds a path prefix in the indexing HTTP POST URI. | Empty string |
Buffer_Size | 4KB |
Pipeline | OpenSearch allows to set up filters called pipelines. This option defines which pipeline the database should use. For performance reasons, it is strongly suggested to do parsing and filtering on the Fluent Bit side and avoid pipelines. |
AWS_Auth | Enable AWS Sigv4 Authentication for Amazon OpenSearch Service | Off |
AWS_Region | Specify the AWS region for Amazon OpenSearch Service |
AWS_STS_Endpoint | Specify the custom sts endpoint to be used with STS API for Amazon OpenSearch Service |
AWS_Role_ARN | AWS IAM Role to assume to put records to your Amazon cluster |
AWS_External_ID | External ID for the AWS IAM Role specified with |
AWS_Service_Name | es |
AWS_Profile | AWS profile name | default |
HTTP_User | Optional username credential for access |
HTTP_Passwd | Password for user defined in HTTP_User |
Index | fluent-bit |
Type | Type name. This option is ignored if | _doc |
Logstash_Format | Enable Logstash format compatibility. This option takes a boolean value: True/False, On/Off | Off |
Logstash_Prefix | When Logstash_Format is enabled, the Index name is composed using a prefix and the date, e.g: If Logstash_Prefix is equal to 'mydata' your index will become 'mydata-YYYY.MM.DD'. The last string appended belongs to the date when the data is being generated. | logstash |
Logstash_Prefix_Key |
Logstash_Prefix_Separator | Set a separator between logstash_prefix and date. | - |
Logstash_DateFormat | %Y.%m.%d |
Time_Key | When Logstash_Format is enabled, each record will get a new timestamp field. The Time_Key property defines the name of that field. | @timestamp |
Time_Key_Format | When Logstash_Format is enabled, this property defines the format of the timestamp. | %Y-%m-%dT%H:%M:%S |
Time_Key_Nanos | When Logstash_Format is enabled, enabling this property sends nanosecond precision timestamps. | Off |
Include_Tag_Key | When enabled, it appends the Tag name to the record. | Off |
Tag_Key | When Include_Tag_Key is enabled, this property defines the key name for the tag. | _flb-key |
Generate_ID | When enabled, generate _id for outgoing records. This prevents duplicate records when retrying. | Off |
Id_Key | If set, _id will be the value of the key from the incoming record and the Generate_ID option is ignored. |
Write_Operation | Operation to use to write in bulk requests. | create |
Replace_Dots | When enabled, replace field name dots with underscore. | Off |
Trace_Output | When enabled print the OpenSearch API calls to stdout (for diag only) | Off |
Trace_Error | When enabled print the OpenSearch API calls to stdout when OpenSearch returns an error (for diag only) | Off |
Current_Time_Index | Use current time for index generation instead of message record | Off |
Suppress_Type_Name | When enabled, mapping types is removed and the Type option is ignored. | Off |
Workers | The number of workers to perform flush operations for this output. |
Compress | Set payload compression mechanism. The only available option is gzip. |
config_file_location | The location of the configuration file containing OCI authentication details. Reference for generating the configuration file - https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdkconfig.htm#SDK_and_CLI_Configuration_File |
profile_name | OCI Config Profile Name to be used from the configuration file | DEFAULT |
namespace | OCI Tenancy Namespace in which the collected log data is to be uploaded |
proxy | Define a proxy if required, in http://host:port format. Only the HTTP protocol is supported. |
workers | The number of workers to perform flush operations for this output. |
oci_config_in_record | If set to true, the following oci_la_* will be read from the record itself instead of the output plugin configuration. | false |
oci_la_log_group_id | The OCID of the Logging Analytics Log Group where the logs must be stored. This is a mandatory parameter |
oci_la_log_source_name | The Logging Analytics Source that must be used to process the log records. This is a mandatory parameter |
oci_la_entity_id | The OCID of the Logging Analytics Entity |
oci_la_entity_type | The entity type of the Logging Analytics Entity |
oci_la_log_path | Specify the original location of the log files |
oci_la_global_metadata | Use this parameter to specify additional global metadata along with original log content to Logging Analytics. The format is 'key_name value'. This option can be set multiple times |
oci_la_metadata | Use this parameter to specify additional metadata for a log event along with original log content to Logging Analytics. The format is 'key_name value'. This option can be set multiple times |
host | IP address or hostname of the target HTTP Server | 127.0.0.1 |
http_user | Basic Auth Username |
http_passwd | Basic Auth Password. Requires HTTP_user to be set |
port | TCP port of the target HTTP Server | 80 |
proxy |
metrics_uri | Specify an optional HTTP URI for the target web server listening for metrics, e.g: /v1/metrics | / |
logs_uri | Specify an optional HTTP URI for the target web server listening for logs, e.g: /v1/logs | / |
traces_uri | Specify an optional HTTP URI for the target web server listening for traces, e.g: /v1/traces | / |
header | Add an HTTP header key/value pair. Multiple headers can be set. |
log_response_payload | Log the response payload within the Fluent Bit log | false |
logs_body_key | The log body key to look up in the log events body/message. Sets the Body field of the OpenTelemetry logs data model. | message |
logs_trace_id_message_key | The trace id key to look up in the log events body/message. Sets the TraceId field of the OpenTelemetry logs data model. | traceId |
logs_span_id_message_key | The span id key to look up in the log events body/message. Sets the SpanId field of the OpenTelemetry logs data model. | spanId |
logs_severity_text_message_key | The severity text key to look up in the log events body/message. Sets the SeverityText field of the OpenTelemetry logs data model. | severityText |
logs_severity_number_message_key | The severity number key to look up in the log events body/message. Sets the SeverityNumber field of the OpenTelemetry logs data model. | severityNumber |
add_label | This allows you to add custom labels to all metrics exposed through the OpenTelemetry exporter. You may have multiple of these fields |
compress | Set payload compression mechanism. The only available option is 'gzip'. |
logs_observed_timestamp_metadata_key | Specify an ObservedTimestamp key to look up in the metadata. | $ObservedKey |
logs_timestamp_metadata_key | Specify a Timestamp key to look up in the metadata. | $Timestamp |
logs_severity_key_metadata_key | Specify a SeverityText key to look up in the metadata. | $SeverityText |
logs_severity_number_metadata_key | Specify a SeverityNumber key to look up in the metadata. | $SeverityNumber |
logs_trace_flags_metadata_key | Specify a Flags key to look up in the metadata. | $Flags |
logs_span_id_metadata_key | Specify a SpanId key to look up in the metadata. | $SpanId |
logs_trace_id_metadata_key | Specify a TraceId key to look up in the metadata. | $TraceId |
logs_attributes_metadata_key | Specify an Attributes key to look up in the metadata. | $Attributes |
workers | The number of workers to perform flush operations for this output. |
host | Loki hostname or IP address. Do not include the subpath, i.e. | 127.0.0.1 |
uri | Specify a custom HTTP URI. It must start with forward slash. | /loki/api/v1/push |
port | Loki TCP port | 3100 |
tls | Use TLS authentication | off |
http_user | Set HTTP basic authentication user name |
http_passwd | Set HTTP basic authentication password |
bearer_token | Set bearer token authentication token value. |
header | Add additional arbitrary HTTP header key/value pair. Multiple headers can be set. |
tenant_id | Tenant ID used by default to push logs to Loki. If omitted or empty it assumes Loki is running in single-tenant mode and no X-Scope-OrgID header is sent. |
labels | Stream labels for API request. It can be multiple comma-separated strings specifying key=value pairs. | job=fluent-bit |
label_keys | Optional list of record keys that will be placed as stream labels. This configuration property is for records key only. More details in the Labels section. |
label_map_path | Specify the label map file path. The file defines how to extract labels from each record. More details in the Labels section. |
structured_metadata |
remove_keys | Optional list of keys to remove. |
drop_single_key | If set to true and after extracting labels only a single key remains, the log line sent to Loki will be the value of that key in line_format. If set to raw and the log line is a string, the log line will be sent unquoted. | off |
line_format | Format to use when flattening the record to a log line. Valid values are | json |
auto_kubernetes_labels | If set to true, it will add all Kubernetes labels to the Stream labels | off |
tenant_id_key | Specify the name of the key from the original record that contains the Tenant ID. The value of the key is set as |
compress | Set payload compression mechanism. The only available option is gzip. Default = "", which means no compression. |
workers | The number of workers to perform flush operations for this output. |
The Stackdriver output plugin allows you to ingest your records into the Google Cloud Stackdriver Logging service.
Before getting started with the plugin configuration, make sure to obtain the proper credentials to access the service. We strongly recommend using a common JSON credentials file, reference link:
Your goal is to obtain a credentials JSON file that will be used later by Fluent Bit Stackdriver output plugin.
Key | Description | default |
---|---|---|
If you are using a Google Cloud Credentials File, the following configuration is enough to get started:
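A minimal sketch of such a configuration (the credentials path is a placeholder, and a dummy input is used just to have something to ship) could be:

```
[INPUT]
    Name  dummy
    Tag   dummy.log

[OUTPUT]
    Name                        stackdriver
    Match                       *
    google_service_credentials  /path/to/my_google_service_credentials.json
```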
Example configuration file for k8s resource type:
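A sketch for a k8s resource type; the cluster name and location values are placeholders that you would replace with your own:

```
[OUTPUT]
    Name                        stackdriver
    Match                       kube.*
    google_service_credentials  /path/to/my_google_service_credentials.json
    resource                    k8s_container
    k8s_cluster_name            my-cluster
    k8s_cluster_location        us-central1-a
```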
local_resource_id is used by the Stackdriver output plugin to set the labels field for different k8s resource types. The Stackdriver plugin will try to find the local_resource_id field in the log entry. If there is no logging.googleapis.com/local_resource_id field in the log, the plugin will construct it from the tag value of the log.
The local_resource_id should be in format:
k8s_container.<namespace_name>.<pod_name>.<container_name>
k8s_node.<node_name>
k8s_pod.<namespace_name>.<pod_name>
This implies that if there is no local_resource_id in the log entry, then the tag of the logs should match this format. Note that there is a tag_prefix option, so it is not mandatory to use k8s_container (node/pod) as the tag prefix.
Currently, there are four ways in which Fluent Bit assigns fields to the resource/labels section:
Resource Labels API
Monitored Resource API
Local Resource Id
Credentials / Config Parameters
If resource_labels
is correctly configured, then fluent-bit will attempt to populate all resource/labels using the entries specified. Otherwise, fluent-bit will attempt to use the monitored resource API. Similarly, if the monitored resource API cannot be used, then fluent-bit will attempt to populate resource/labels using configuration parameters and/or credentials specific to the resource type. As mentioned in the Configuration File section, fluent-bit will attempt to use or construct a local resource ID for a K8s resource type which does not use the resource labels or monitored resource API.
Note that the project_id
resource label will always be set from the service credentials or fetched from the metadata server and cannot be overridden.
The resource_labels
configuration parameter offers an alternative API for assigning the resource labels. To use, input a list of comma separated strings specifying resource labels plaintext assignments (new=value
), mappings from an original field in the log entry to a destination field (destination=$original
) and/or environment variable assignments (new=${var}
).
For instance, consider the following log entry:
Combined with the following Stackdriver configuration:
This will produce the following log:
This makes the resource_labels
API the recommended choice for supporting new or existing resource types that have all resource labels known before runtime or available on the payload during runtime.
For instance, for a K8s resource type, resource_labels
can be used in tandem with the Kubernetes filter to pack all six resource labels. Below is an example of what this could look like for a k8s_container
resource:
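The following is only a sketch, assuming the Kubernetes filter has already enriched each record with a kubernetes map; the cluster_name and location values are placeholders, and project_id is omitted because it is always taken from the credentials:

```
[FILTER]
    Name   kubernetes
    Match  kube.*

[OUTPUT]
    Name             stackdriver
    Match            kube.*
    resource         k8s_container
    # plaintext assignments plus record accessor mappings from the kubernetes map
    resource_labels  cluster_name=my-cluster,location=us-central1-a,namespace_name=$kubernetes['namespace_name'],pod_name=$kubernetes['pod_name'],container_name=$kubernetes['container_name']
```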
resource_labels
also supports validation for required labels based on the input resource type. This allows fluent-bit to check if all specified labels are present for a given configuration before runtime. If validation is not currently supported for a resource type that you would like to use this API with, we encourage you to open a pull request for it. Adding validation for a new resource type is simple - all that is needed is to specify the resources associated with the type alongside the required labels here.
Github reference: #761
An upstream connection error means Fluent Bit was not able to reach Google services. The error looks like this:
This is caused by a network issue in the environment where Fluent Bit is running. Make sure that from the Host, Container or Pod you can reach the following Google end-points:
The error looks like this:
Perform the following checks:
If the log entry does not contain the local_resource_id field, does the tag of the log match the required format?
If tag_prefix is configured, does the prefix of tag specified in the input plugin match the tag_prefix?
Workers
Github reference: #7552
When the number of Workers is greater than 1, Fluent Bit may intermittently crash.
Stackdriver officially supports a logging agent based on Fluentd.
We plan to support some special fields in structured payloads. Use cases of special fields are described here.
The stdout output plugin prints the data received through the input plugin to the standard output. Its usage is very simple, as follows:
Key | Description | default |
---|---|---|
We have specified to gather CPU usage metrics and print them out to the standard output in a human readable way:
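A configuration along these lines (a sketch, not the only way to do it) would be:

```
[INPUT]
    Name  cpu
    Tag   cpu.local

[OUTPUT]
    Name   stdout
    Match  *
```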
No more, no less, it just works.
The Apache SkyWalking output plugin allows you to flush your records to an Apache SkyWalking OAP. The following instructions assume that you have a fully operational Apache SkyWalking OAP in place.
parameter | description | default |
---|---|---|
The Apache SkyWalking output plugin supports TLS/SSL. For more details about the available properties and general configuration, please refer to the TLS/SSL section.
In order to start inserting records into an Apache SkyWalking service, you can run the plugin through the configuration file:
In your main configuration file append the following Input & Output sections:
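A sketch of such a configuration, using a dummy input and the default SkyWalking OAP port, could be:

```
[INPUT]
    Name  dummy
    Tag   dummy.log

[OUTPUT]
    Name           skywalking
    Match          *
    host           127.0.0.1
    port           12800
    svc_name       sw-service
    svc_inst_name  fluent-bit
```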
The format of the plugin output follows the data collect protocol.
For example, if we get a log as follows,
This message is packed into the following protocol format and written to the OAP via the REST API.
The tcp output plugin allows you to send records to a remote TCP server. The payload can be formatted in different ways as required.
Key | Description | default |
---|---|---|
The following parameters are available to configure a secure channel connection through TLS:
Key | Description | Default |
---|---|---|
We have specified to gather CPU usage metrics and send them in JSON lines mode to a remote end-point using netcat service.
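A sketch of that configuration (host and port are placeholders matching the netcat example below) could be:

```
[INPUT]
    Name  cpu
    Tag   cpu.local

[OUTPUT]
    Name    tcp
    Match   *
    Host    127.0.0.1
    Port    5170
    Format  json_lines
```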
Run the following in a separate terminal, netcat
will start listening for messages on TCP port 5170. Once it connects to Fluent Bit you should see the output as above in JSON format:
Repeat the JSON approach but using the msgpack
output format.
We could send this to stdout but as it is a serialized format you would end up with strange output. This should really be handled by a msgpack receiver to unpack as per the details in the developer documentation here. As an example we use the Python msgpack library to deal with it:
An output plugin to submit Prometheus Metrics using the remote write protocol
The prometheus remote write plugin allows you to take metrics from Fluent Bit and submit them to a Prometheus server through the remote write mechanism.
Important Note: The Prometheus remote write plugin only works with metric plugins, such as Node Exporter Metrics.
Key | Description | Default |
---|---|---|
The Prometheus remote write plugin only works with metrics collected by one of the metric input plugins. In the following example, host metrics are collected by the node exporter metrics plugin and then delivered by the prometheus remote write output plugin.
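A sketch of such a pipeline; the endpoint host, URI and credentials are placeholders for your remote write target:

```
[INPUT]
    Name  node_exporter_metrics
    Tag   node_metrics

[OUTPUT]
    Name         prometheus_remote_write
    Match        node_metrics
    Host         metrics.example.com
    Port         443
    Uri          /api/v1/write
    Tls          On
    http_user    my-user
    http_passwd  my-password
```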
The following are examples of using Prometheus remote write with the hosted services below.
With Grafana Cloud hosted metrics you will need to use the specific host that is mentioned as well as specify the HTTP username and password given within the Grafana Cloud page.
With Logz.io hosted prometheus you will need to make use of the header option and add the Authorization Bearer with the proper key. The host and port may also differ within your specific hosted instance.
With Coralogix Metrics you may need to customize the URI. Additionally, you will make use of the header key with Coralogix private key.
With Levitate, you must use the Levitate cluster-specific write URL and specify the HTTP username and password for the token created for your Levitate cluster.
Ordinary prometheus clients add some of the labels as below:
instance
label can be emulated with add_label instance ${HOSTNAME}
. And other labels can be added with add_label <key> <value>
setting.
Send logs to Splunk HTTP Event Collector
The Splunk output plugin allows you to ingest your records into a Splunk Enterprise service through the HTTP Event Collector (HEC) interface.
To get more details about how to set up the HEC in Splunk, please refer to the following documentation: Splunk / Use the HTTP Event Collector
Connectivity, transport and authentication configuration properties:
Key | Description | default |
---|---|---|
Content and Splunk metadata (fields) handling configuration properties:
Key | Description | default |
---|---|---|
The Splunk output plugin supports TLS/SSL. For more details about the available properties and general configuration, please refer to the TLS/SSL section.
In order to insert records into a Splunk service, you can run the plugin from the command line or through the configuration file:
The splunk plugin can read the parameters from the command line through the -p argument (property), e.g:
In your main configuration file append the following Input & Output sections:
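A sketch of such a configuration, with the HEC token as a placeholder, could be:

```
[INPUT]
    Name  cpu
    Tag   cpu

[OUTPUT]
    Name          splunk
    Match         *
    Host          127.0.0.1
    Port          8088
    Splunk_Token  xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
    TLS           On
    TLS.Verify    Off
```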
By default, the Splunk output plugin nests the record under the event
key in the payload sent to the HEC. It will also append the time of the record to a top level time
key.
If you would like to customize any of the Splunk event metadata, such as the host or target index, you can set Splunk_Send_Raw On
in the plugin configuration, and add the metadata as keys/values in the record. Note: with Splunk_Send_Raw
enabled, you are responsible for creating and populating the event
section of the payload.
For example, to add a custom index and hostname:
This will create a payload that looks like:
For more information on the Splunk HEC payload format and all event metadata Splunk accepts, see here: http://docs.splunk.com/Documentation/Splunk/latest/Data/AboutHEC
If the option splunk_send_raw
has been enabled, the user must take care to put all log details in the event field, and only specify fields known to Splunk in the top level event. If there is a mismatch, Splunk will return an HTTP 400 (bad request) error.
Consider the following example:
splunk_send_raw off
splunk_send_raw on
For up to date information about the valid keys in the top level object, refer to the Splunk documentation:
http://docs.splunk.com/Documentation/Splunk/latest/Data/AboutHEC
With Splunk version 8.0 and above, you can also use the Fluent Bit Splunk output plugin to send data to metric indices. This allows you to perform visualizations, metric queries, and analysis with other metrics you may be collecting. This is based on Splunk 8.0's support for multiple metrics via a single JSON payload; more details can be found on Splunk's documentation page.
Sending to a Splunk Metric index requires the use of Splunk_send_raw
option being enabled and formatting the message properly. This includes three specific operations:
Nest metric events under a "fields" property
Add metric_name:
to all metrics
Add index, source, sourcetype as fields in the message
The following configuration gathers CPU metrics, nests the appropriate field, adds the required identifiers and then sends to Splunk.
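A sketch of what that pipeline could look like, using the nest and modify filters; the index, source, sourcetype and token values are placeholders:

```
[INPUT]
    Name  cpu
    Tag   cpu

# Nest all record keys under "fields" and prefix each key with "metric_name:"
[FILTER]
    Name        nest
    Match       cpu
    Operation   nest
    Wildcard    *
    Nest_under  fields
    Add_prefix  metric_name:

# Add the Splunk metadata expected at the top level of the raw event
[FILTER]
    Name   modify
    Match  cpu
    Add    index       cpu-metrics
    Add    source      fluent-bit
    Add    sourcetype  custom

[OUTPUT]
    Name             splunk
    Match            cpu
    Host             127.0.0.1
    Port             8088
    Splunk_Token     xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
    Splunk_Send_Raw  On
    TLS              On
    TLS.Verify       Off
```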
With Fluent Bit 2.0, you can also send Fluent Bit's metrics type of events into Splunk via the Splunk HEC. This allows you to perform visualizations, metric queries, and analysis with directly sent Fluent Bit metrics. This is based on Splunk 8.0's support for multiple metrics via a single concatenated JSON payload.
Sending Fluent Bit's metrics into Splunk requires collecting them with one of Fluent Bit's metrics plugins. Note that log and metric events are distinguished automatically, so you don't need to pay attention to the type of events. This example includes two specific operations:
Collect node or Fluent Bit's internal metrics
Send metrics as single concatenated JSON payload
The Syslog output plugin allows you to deliver messages to Syslog servers. It supports RFC3164 and RFC5424 formats through different transports such as UDP, TCP or TLS.
As of Fluent Bit v1.5.3 the configuration is very strict. You must be aware of the structure of your original record so you can configure the plugin to use specific keys to compose your outgoing Syslog message.
Future versions of Fluent Bit are expanding this plugin feature set to support better handling of keys and message composing.
Key | Description | Default |
---|---|---|
The Syslog output plugin supports TLS/SSL. For more details about the properties available and general configuration, please refer to the TLS/SSL section.
Get started quickly with this configuration file:
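A sketch of a working configuration, with a placeholder hostname, placeholder key names and a dummy input for testing, could be:

```
[INPUT]
    Name   dummy
    Tag    dummy.log
    Dummy  {"hostname":"node-1","appname":"my-app","log":"hello from Fluent Bit"}

[OUTPUT]
    Name                 syslog
    Match                *
    Host                 syslog.example.com
    Port                 514
    Mode                 udp
    Syslog_Format        rfc5424
    Syslog_Maxsize       2048
    Syslog_Hostname_Key  hostname
    Syslog_Appname_Key   appname
    Syslog_Message_Key   log
```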
The following is an example of how to configure the syslog_sd_key
to send Structured Data to the remote Syslog server.
Example log:
Example configuration file:
Example output:
Some services use the structured data field to pass authentication tokens (e.g. [<token>@41018]
), which would need to be added to each log message dynamically. However, this requires setting the token as a key rather than as a value. Here's an example of how that might be achieved, using AUTH_TOKEN
as a variable:
The Slack output plugin delivers records or messages to your preferred Slack channel. It formats the outgoing content in JSON format for readability.
This connector uses the Slack Incoming Webhooks feature to post messages to Slack channels. Using this plugin in conjunction with the Stream Processor is a good combination for alerting.
Before configuring this plugin, make sure to set up your Incoming Webhook. For detailed step-by-step instructions, review the following official documentation:
Once you have obtained the Webhook address you can place it in the configuration below.
Key | Description | Default |
---|---|---|
Get started quickly with this configuration file:
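A minimal sketch, with the Webhook URL as a placeholder for the one obtained above:

```
[OUTPUT]
    Name     slack
    Match    *
    webhook  https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX
```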
The websocket output plugin allows you to flush your records into a WebSocket endpoint. For now the functionality is pretty basic: it issues an HTTP GET request to do the handshake, and then uses TCP connections to send the data records in either JSON or MessagePack format.
Key | Description | default |
---|---|---|
In order to insert records into an HTTP server, you can run the plugin from the command line or through the configuration file:
The websocket plugin can read the parameters from the command line in two ways: through the -p argument (property) or by setting them directly through the service URI. The URI format is the following:
Using the format specified, you could start Fluent Bit through:
In your main configuration file, append the following Input & Output sections:
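A sketch of such a configuration, with placeholder port and URI values for the WebSocket server, could be:

```
[INPUT]
    Name  cpu
    Tag   cpu.local

[OUTPUT]
    Name    websocket
    Match   *
    Host    127.0.0.1
    Port    8080
    URI     /stream
    Format  json
```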
The websocket plugin works with TCP keepalive mode; please refer to the networking section for details. Since websocket is a stateful plugin, it decides when to send the handshake to the server side, for example when the plugin has just started or after the connection with the server has been dropped. In general, the interval to initiate a new websocket handshake is less than the keepalive interval. With that strategy, it can detect and resume websocket connections.
Once Fluent Bit is running, you can send some messages using netcat:
In Fluent Bit we should see the following output:
From the Fluent Bit log output, we can see that once data has been ingested into Fluent Bit, the plugin performs the handshake. After a while with no data or traffic, the TCP connection is aborted. When another piece of data arrives, a retry for the websocket plugin is triggered, with another handshake and data flush.
There is another scenario: if the websocket server flaps (goes down and up) in a short time, Fluent Bit will resume the TCP connection immediately. But in that case, the websocket output plugin is left in a malfunctioning state, and Fluent Bit needs to be restarted to get back to work.
Vivo Exporter is an output plugin that exposes logs, metrics, and traces through an HTTP endpoint. This plugin aims to be used in conjunction with the Vivo project.
Key | Description | Default |
---|---|---|
Here is a simple configuration of Vivo Exporter; note that this example is not based on defaults.
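A sketch of such a non-default configuration (the queue size and CORS values are arbitrary examples) could be:

```
[INPUT]
    Name  dummy
    Tag   events

[OUTPUT]
    Name                    vivo_exporter
    Match                   *
    empty_stream_on_read    off
    stream_queue_size       50M
    http_cors_allow_origin  *
```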
Vivo Exporter provides buffers that serve as streams for each telemetry data type, in this case, logs
, metrics
, and traces
. Each buffer contains a fixed capacity in terms of size (20M by default). When the data arrives at a stream, it’s appended to the end. If the buffer is full, it removes the older entries to make room for new data.
The data that arrives is a chunk. A chunk is a group of events that belongs to the same type (logs, metrics or traces) and contains the same tag. Every chunk placed in a stream is assigned an auto-incremented id.
By using a simple HTTP request, you can retrieve the data from the streams. The following are the endpoints available:
The example below will generate dummy log events which will be consumed using the curl HTTP command line client:
Configure and start Fluent Bit
Retrieve the data
We are using the -i curl option to also print the HTTP response headers.
Curl output would look like this:
As mentioned above, on each stream we buffer chunks that contain N events; each chunk has its own ID, which is unique inside the stream.
When we receive the HTTP response, Vivo Exporter also reports the range of chunk IDs that were served in the response via the HTTP headers Vivo-Stream-Start-ID
and Vivo-Stream-End-ID
.
The values of these headers can be used by the client application to specify a range between IDs or set limits for the number of chunks to retrieve from the stream.
A client might be interested in always retrieving the latest chunks available and skipping previous ones that were already processed. In a first request without any given range, Vivo Exporter will provide all the content that exists in the buffer for the specific stream; from that response the client might want to keep the last ID (Vivo-Stream-End-ID) that was received.
To query ranges or to start from specific chunk IDs (remember that they are incremental), you can use a mix of the following options:
The following example specifies the range from chunk ID 1 to chunk ID 3 and limits the output to a single chunk:
curl -i "http://127.0.0.1:2025/logs?from=1&to=3&limit=1"
Output:
The td output plugin allows you to flush your records into the Treasure Data cloud service.
The plugin supports the following configuration parameters:
Key | Description | Default |
---|---|---|
In order to start inserting records into Treasure Data, you can run the plugin from the command line or through the configuration file:
Ideally you don't want to expose your API key on the command line; using a configuration file is highly recommended.
In your main configuration file append the following Input & Output sections:
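A sketch of such a configuration, with placeholder API key, database and table names, could be:

```
[INPUT]
    Name  cpu
    Tag   cpu

[OUTPUT]
    Name      td
    Match     *
    API       <your-treasure-data-api-key>
    Database  fluentbit
    Table     cpu_samples
    Region    US
```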
Specifies any valid
The number of workers to perform flush operations for this output.
The number of workers to perform flush operations for this output.
Specify the buffer size used to read the response from the OpenSearch HTTP service. This option is useful for debugging purposes where it is required to read full responses; note that the response size grows depending on the number of records inserted. To set an unlimited amount of memory, set this value to False; otherwise the value must be according to the Unit Size specification.
Service name to be used in AWS Sigv4 signature. For integration with Amazon OpenSearch Serverless, set to aoss
. See the section on Amazon OpenSearch Serverless for more information.
Index name, supports Record Accessor syntax from 2.0.5 onwards.
When included: the value of the key in the record will be evaluated as a key reference and overrides Logstash_Prefix for index generation. If the key/value is not found in the record then the Logstash_Prefix option will act as a fallback. The parameter is expected to be a record accessor.
Time format (based on strftime) to generate the second part of the Index name.
The number of workers to perform flush operations for this output.
The number of workers to perform flush operations for this output.
Specify an HTTP Proxy. The expected format of this value is http://HOST:PORT
. Note that HTTPS is not currently supported. It is recommended not to set this and to configure the HTTP proxy environment variables instead, as they support both HTTP and HTTPS.
The number of workers to perform flush operations for this output.
Optional comma-separated list of key=value
strings specifying structured metadata for the log line. Like the labels
parameter, values can reference record keys using record accessors. See for more information.
The number of workers to perform flush operations for this output.
endpoint | Description |
---|---|
Query string option | Description |
---|---|
google_service_credentials
Absolute path to a Google Cloud credentials JSON file
Value of environment variable $GOOGLE_APPLICATION_CREDENTIALS
service_account_email
Account email associated to the service. Only available if no credentials file has been provided.
Value of environment variable $SERVICE_ACCOUNT_EMAIL
service_account_secret
Private key content associated with the service account. Only available if no credentials file has been provided.
Value of environment variable $SERVICE_ACCOUNT_SECRET
metadata_server
Prefix for a metadata server. Can also set environment variable $METADATA_SERVER.
location
The GCP or AWS region in which to store data about the resource. If the resource type is one of the generic_node or generic_task, then this field is required.
namespace
A namespace identifier, such as a cluster name or environment. If the resource type is one of the generic_node or generic_task, then this field is required.
node_id
A unique identifier for the node within the namespace, such as hostname or IP address. If the resource type is generic_node, then this field is required.
job
An identifier for a grouping of related task, such as the name of a microservice or distributed batch. If the resource type is generic_task, then this field is required.
task_id
A unique identifier for the task within the namespace and job, such as a replica index identifying the task within the job. If the resource type is generic_task, then this field is required.
export_to_project_id
The GCP project that should receive these logs.
Defaults to the project ID of the google_service_credentials file, or the project_id from Google's metadata.google.internal server.
resource
Set resource type of data. Supported resource types: k8s_container, k8s_node, k8s_pod, k8s_cluster, global, generic_node, generic_task, and gce_instance.
global, gce_instance
k8s_cluster_name
The name of the cluster that the container (node or pod based on the resource type) is running in. If the resource type is one of the k8s_container, k8s_node or k8s_pod, then this field is required.
k8s_cluster_location
The physical location of the cluster that contains (node or pod based on the resource type) the container. If the resource type is one of the k8s_container, k8s_node or k8s_pod, then this field is required.
labels_key
The value of this field is used by the Stackdriver output plugin to find the related labels from jsonPayload and then extract the value of it to set the LogEntry Labels.
logging.googleapis.com/labels
. See Stackdriver Special Fields for more info.
labels
Optional list of comma-separated strings specifying key=value
pairs. The resulting labels
will be combined with the elements obtained from labels_key
to set the LogEntry Labels. Elements from labels
will override duplicate values from labels_key
.
log_name_key
The value of this field is used by the Stackdriver output plugin to extract logName from jsonPayload and set the logName field.
logging.googleapis.com/logName
. See Stackdriver Special Fields for more info.
tag_prefix
Set the tag_prefix used to validate the tag of logs with a k8s resource type. Without this option, the tag of the log must be in the format of k8s_container(pod/node).* in order to use the k8s_container resource type. The tag prefix is now configurable by this option (note the ending dot).
k8s_container., k8s_pod., k8s_node.
severity_key
Specify the name of the key from the original record that contains the severity information.
logging.googleapis.com/severity
. See Stackdriver Special Fields for more info.
project_id_key
The value of this field is used by the Stackdriver output plugin to find the gcp project id from jsonPayload and then extract the value of it to set the PROJECT_ID within LogEntry logName, which controls the gcp project that should receive these logs.
logging.googleapis.com/projectId
. See Stackdriver Special Fields for more info.
autoformat_stackdriver_trace
Rewrite the trace field to include the projectID and format it for use with Cloud Trace. When this flag is enabled, the user can get the correct result by printing only the traceID (usually 32 characters).
false
workers
The number of workers to perform flush operations for this output.
1
custom_k8s_regex
Set a custom regex to extract fields like pod_name, namespace_name, container_name and docker_id from the local_resource_id in logs. This is helpful if the value of pod or node name contains dots.
(?<pod_name>[a-z0-9](?:[-a-z0-9]*[a-z0-9])?(?:\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*)_(?<namespace_name>[^_]+)_(?<container_name>.+)-(?<docker_id>[a-z0-9]{64})\.log$
resource_labels
An optional list of comma separated strings specifying resource labels plaintext assignments (new=value
) and/or mappings from an original field in the log entry to a destination field (destination=$original
). Nested fields and environment variables are also supported using the record accessor syntax. If configured, all resource labels will be assigned using this API only, with the exception of project_id
. See Resource Labels for more details.
compress
Set payload compression mechanism. The only available option is gzip
. Default = "", which means no compression.
Format
Specify the data format to be printed. Supported formats are msgpack, json, json_lines and json_stream.
msgpack
json_date_key
Specify the name of the time key in the output record. To disable the time key just set the value to false
.
date
json_date_format
Specify the format of the date. Supported formats are double, epoch, iso8601 (eg: 2018-05-30T09:39:52.000681Z) and java_sql_timestamp (eg: 2018-05-30 09:39:52.000681)
double
workers
The number of workers to perform flush operations for this output.
1
host
Hostname of Apache SkyWalking OAP
127.0.0.1
port
TCP port of the Apache SkyWalking OAP
12800
auth_token
Authentication token if needed for Apache SkyWalking OAP
None
svc_name
Service name that fluent-bit belongs to
sw-service
svc_inst_name
Service instance name of fluent-bit
fluent-bit
workers
The number of workers to perform flush operations for this output.
0
Host
Target host where Fluent-Bit or Fluentd are listening for Forward messages.
127.0.0.1
Port
TCP Port of the target service.
5170
Format
Specify the data format to be printed. Supported formats are msgpack, json, json_lines and json_stream.
msgpack
json_date_key
Specify the name of the time key in the output record. To disable the time key just set the value to false
.
date
json_date_format
Specify the format of the date. Supported formats are double, epoch, iso8601 (eg: 2018-05-30T09:39:52.000681Z) and java_sql_timestamp (eg: 2018-05-30 09:39:52.000681)
double
workers
The number of workers to perform flush operations for this output.
2
tls
Enable or disable TLS support
Off
tls.verify
Force certificate validation
On
tls.debug
Set TLS debug verbosity level. It accepts the following values: 0 (No debug), 1 (Error), 2 (State change), 3 (Informational) and 4 (Verbose)
1
tls.ca_file
Absolute path to CA certificate file
tls.crt_file
Absolute path to Certificate file.
tls.key_file
Absolute path to private Key file.
tls.key_passwd
Optional password for tls.key_file file.
host
IP address or hostname of the target HTTP Server
127.0.0.1
http_user
Basic Auth Username
http_passwd
Basic Auth Password. Requires HTTP_user to be set
AWS_Auth
Enable AWS SigV4 authentication
false
AWS_Service
For Amazon Managed Service for Prometheus, the service name is aps
aps
AWS_Region
Region of your Amazon Managed Service for Prometheus workspace
AWS_STS_Endpoint
Specify the custom sts endpoint to be used with STS API, used with the AWS_Role_ARN option, used by SigV4 authentication
AWS_Role_ARN
AWS IAM Role to assume, used by SigV4 authentication
AWS_External_ID
External ID for the AWS IAM Role specified with aws_role_arn
, used by SigV4 authentication
port
TCP port of the target HTTP Server
80
proxy
Specify an HTTP Proxy. The expected format of this value is http://HOST:PORT
. Note that HTTPS is not currently supported. It is recommended not to set this and to configure the HTTP proxy environment variables instead as they support both HTTP and HTTPS.
uri
Specify an optional HTTP URI for the target web server, e.g: /something
/
header
Add an HTTP header key/value pair. Multiple headers can be set.
log_response_payload
Log the response payload within the Fluent Bit log
false
add_label
This allows you to add custom labels to all metrics exposed through the prometheus exporter. You may have multiple of these fields
workers
The number of workers to perform flush operations for this output.
2
host
IP address or hostname of the target Splunk service.
127.0.0.1
port
TCP port of the target Splunk service.
8088
splunk_token
Specify the Authentication Token for the HTTP Event Collector interface.
http_user
Optional username for Basic Authentication on HEC
http_passwd
Password for user defined in HTTP_User
http_buffer_size
Buffer size used to receive Splunk HTTP responses
2M
compress
Set payload compression mechanism. The only available option is gzip
.
channel
Specify X-Splunk-Request-Channel Header for the HTTP Event Collector interface.
http_debug_bad_request
If the HTTP server response code is 400 (bad request) and this flag is enabled, it will print the full HTTP request and response to the stdout interface. This feature is available for debugging purposes.
workers
The number of workers to perform flush operations for this output.
2
splunk_send_raw
When enabled, the record keys and values are set in the top level of the map instead of under the event key. Refer to the Sending Raw Events section from the docs for more details to make this option work properly.
off
event_key
Specify the key name that will be used to send a single value as part of the record.
event_host
Specify the key name that contains the host value. This option allows a record accessors pattern.
event_source
Set the source value to assign to the event data.
event_sourcetype
Set the sourcetype value to assign to the event data.
event_sourcetype_key
Set a record key that will populate 'sourcetype'. If the key is found, it will have precedence over the value set in event_sourcetype
.
event_index
The name of the index by which the event data is to be indexed.
event_index_key
Set a record key that will populate the index
field. If the key is found, it will have precedence over the value set in event_index
.
event_field
Set event fields for the record. This option can be set multiple times and the format is key_name record_accessor_pattern
.
host
Domain or IP address of the remote Syslog server.
127.0.0.1
port
TCP or UDP port of the remote Syslog server.
514
mode
Desired transport type. Available options are tcp
and udp
.
udp
syslog_format
The Syslog protocol format to use. Available options are rfc3164
and rfc5424
.
rfc5424
syslog_maxsize
The maximum size allowed per message. The value must be an integer representing the number of bytes allowed. If no value is provided, the default size is set depending on the protocol version specified by syslog_format
.
rfc3164
sets max size to 1024 bytes.
rfc5424
sets the size to 2048 bytes.
syslog_severity_key
The key name from the original record that contains the Syslog severity number. This configuration is optional.
syslog_severity_preset
The preset severity number. It will be overwritten if syslog_severity_key
is set and a key of a record is matched. This configuration is optional.
6
syslog_facility_key
The key name from the original record that contains the Syslog facility number. This configuration is optional.
syslog_facility_preset
The preset facility number. It will be overwritten if syslog_facility_key
is set and a key of a record is matched. This configuration is optional.
1
syslog_hostname_key
The key name from the original record that contains the hostname that generated the message. This configuration is optional.
syslog_hostname_preset
The preset hostname. It will be overwritten if syslog_hostname_key
is set and a key of a record is matched. This configuration is optional.
syslog_appname_key
The key name from the original record that contains the application name that generated the message. This configuration is optional.
syslog_appname_preset
The preset application name. It will be overwritten if syslog_appname_key
is set and a key of a record is matched. This configuration is optional.
syslog_procid_key
The key name from the original record that contains the Process ID that generated the message. This configuration is optional.
syslog_procid_preset
The preset process ID. It will be overwritten if syslog_procid_key
is set and a key of a record is matched. This configuration is optional.
syslog_msgid_key
The key name from the original record that contains the Message ID associated to the message. This configuration is optional.
syslog_msgid_preset
The preset message ID. It will be overwritten if syslog_msgid_key
is set and a key of a record is matched. This configuration is optional.
syslog_sd_key
The key name from the original record that contains a map of key/value pairs to use as Structured Data (SD) content. The key name is included in the resulting SD field as shown in examples below. This configuration is optional.
syslog_message_key
The key name from the original record that contains the message to deliver. Note that this property is mandatory, otherwise the message will be empty.
allow_longer_sd_id
If true, Fluent Bit allows an SD-ID that is longer than 32 characters. Such a long SD-ID violates RFC 5424.
false
workers
The number of workers to perform flush operations for this output.
0
webhook
Absolute address of the Webhook provided by Slack
workers
The number of workers to perform flush operations for this output.
0
Host
IP address or hostname of the target WebSocket Server
127.0.0.1
Port
TCP port of the target WebSocket Server
80
URI
Specify an optional HTTP URI for the target websocket server, e.g: /something
/
Header
Add an HTTP header key/value pair. Multiple headers can be set.
Format
Specify the data format to be used in the HTTP request body; by default it uses msgpack. Other supported formats are json, json_stream, json_lines and gelf.
msgpack
json_date_key
Specify the name of the date field in output
date
json_date_format
Specify the format of the date. Supported formats are double, epoch, iso8601 (eg: 2018-05-30T09:39:52.000681Z) and java_sql_timestamp (eg: 2018-05-30 09:39:52.000681)
double
workers
The number of workers to perform flush operations for this output.
0
empty_stream_on_read
If enabled, when an HTTP client consumes the data from a stream, the stream content will be removed.
Off
stream_queue_size
Specify the maximum queue size per stream. Each specific stream for logs, metrics and traces can hold up to stream_queue_size
bytes.
20M
http_cors_allow_origin
Specify the value for the HTTP Access-Control-Allow-Origin header (CORS).
workers
The number of workers to perform flush operations for this output.
1
/logs
Exposes log events in JSON format. Each event contains a timestamp, metadata and the event content.
/metrics
Exposes metrics events in JSON format. Each metric contains name, metadata, metric type and labels (dimensions).
/traces
Exposes traces events in JSON format. Each trace contains a name, resource spans, spans, attributes, events information, etc.
from
Specify the first chunk ID that is desired to be retrieved. Note that if the chunk
ID does not exist, the next one in the queue will be provided.
to
The last chunk ID desired. If not found, the whole stream will be provided (starting from from, if it was set).
limit
Limit the output to a specific number of chunks. The default value is 0
, which means: send everything.
API
The Treasure Data API key. To obtain it please log into the Console and in the API keys box, copy the API key hash.
Database
Specify the name of your target database.
Table
Specify the name of your target table where the records will be stored.
Region
Set the service region, available values: US and JP
US
Workers
The number of workers to perform flush operations for this output.
0