Outputs

Amazon Kinesis Data Firehose

Send logs to Amazon Kinesis Firehose

The Amazon Kinesis Data Firehose output plugin allows you to ingest your records into the Firehose service.

This is the documentation for the core Fluent Bit Firehose plugin written in C. It can replace the aws/amazon-kinesis-firehose-for-fluent-bit Golang Fluent Bit plugin released last year. The Golang plugin was named firehose; this new high-performance and highly efficient plugin is called kinesis_firehose to prevent conflicts and confusion.

See here for details on how AWS credentials are fetched.

hashtag
Configuration Parameters

Key
Description

hashtag
Getting Started

In order to send records into Amazon Kinesis Data Firehose, you can run the plugin from the command line or through the configuration file:

hashtag
Command Line

The firehose plugin can read the parameters from the command line through the -p argument (property), for example:

hashtag
Configuration File

In your main configuration file append the following Output section:

hashtag
Permissions

The following AWS IAM permissions are required to use this plugin:

hashtag
Worker support

Fluent Bit 1.7 adds a new feature called workers which enables outputs to have dedicated threads. This kinesis_firehose plugin fully supports workers.

Example:

If you enable a single worker, you are enabling a dedicated thread for your Firehose output. We recommend starting without workers, evaluating the performance, and then adding workers one at a time until you reach your desired throughput. For most users, no workers or a single worker will be sufficient.

hashtag
AWS for Fluent Bit

Amazon distributes a container image with Fluent Bit and these plugins.

hashtag
GitHub

github.com/aws/aws-for-fluent-bit

hashtag
Amazon ECR Public Gallery

aws-for-fluent-bit

Our images are available in the Amazon ECR Public Gallery. You can download images with different tags using the following command:

For example, you can pull the image with the latest version:

If you see errors for image pull limits, try logging into public ECR with your AWS credentials:

You can check the Amazon ECR Public official doc for more details.

hashtag
Docker Hub

amazon/aws-for-fluent-bit

hashtag
Amazon ECR

You can use our SSM Public Parameters to find the Amazon ECR image URI in your region:

For more information, see the AWS for Fluent Bit GitHub repo.

Amazon CloudWatch

Send logs and metrics to Amazon CloudWatch

The Amazon CloudWatch output plugin allows you to ingest your records into the CloudWatch Logs service. Support for CloudWatch Metrics is also provided via EMF.

This is the documentation for the core Fluent Bit CloudWatch plugin written in C. It can replace the aws/amazon-cloudwatch-logs-for-fluent-bit Golang Fluent Bit plugin released last year. The Golang plugin was named cloudwatch; this new high-performance CloudWatch plugin is called cloudwatch_logs to prevent conflicts and confusion. Check the Amazon repo for the Golang plugin for details on the deprecation/migration plan for the original plugin.

See here for details on how AWS credentials are fetched.

Amazon Kinesis Data Streams

Send logs to Amazon Kinesis Streams

The Amazon Kinesis Data Streams output plugin allows you to ingest your records into the Kinesis service.

This is the documentation for the core Fluent Bit Kinesis plugin written in C. It has all the core features of the aws/amazon-kinesis-streams-for-fluent-bit Golang Fluent Bit plugin released in 2019. The Golang plugin was named kinesis; this new high-performance and highly efficient kinesis plugin is called kinesis_streams to prevent conflicts and confusion.

Currently, this kinesis_streams plugin will always use a random partition key when uploading records to Kinesis via the PutRecords API.

See here for details on how AWS credentials are fetched.

role_arn

ARN of an IAM role to assume (for cross account access).

endpoint

Specify a custom endpoint for the Firehose API.

sts_endpoint

Custom endpoint for the STS API.

auto_retry_requests

Immediately retry failed requests to AWS services once. This option does not affect the normal Fluent Bit retry mechanism with backoff. Instead, it enables an immediate retry with no delay for networking errors, which may help improve throughput when there are transient/random networking issues. This option defaults to true.

external_id

Specify an external ID for the STS API, can be used with the role_arn parameter if your role requires an external ID.

profile

AWS profile name to use. Defaults to default.

workers

The number of workers to perform flush operations for this output. Default: 1.

region

The AWS region.

delivery_stream

The name of the Kinesis Firehose Delivery stream that you want log records sent to.

time_key

Add the timestamp to the record under this key. By default the timestamp from Fluent Bit will not be added to records sent to Kinesis.

time_key_format

strftime compliant format string for the timestamp; for example, the default is '%Y-%m-%dT%H:%M:%S'. Supports millisecond precision with '%3N' and nanosecond precision with '%9N' and '%L'; for example, to add millisecond precision use '%Y-%m-%dT%H:%M:%S.%3N'. This option is used with time_key.

log_key

By default, the whole log record will be sent to Firehose. If you specify a key name with this option, then only the value of that key will be sent to Firehose. For example, if you are using the Fluentd Docker log driver, you can specify log_key log and only the log message will be sent to Firehose.

compression

Compression type for Firehose records. Each log record is individually compressed and sent to Firehose. 'gzip' and 'arrow' are the supported values. 'arrow' is only an available if Apache Arrow was enabled at compile time. Defaults to no compression.

hashtag
Configuration Parameters
Key
Description

region

The AWS region.

log_group_name

The name of the CloudWatch Log Group that you want log records sent to.

log_group_template

Template for Log Group name using Fluent Bit record_accessor syntax. This field is optional and if configured it overrides the log_group_name. If the template translation fails, an error is logged and the log_group_name (which is still required) is used instead. See the tutorial below for an example.

log_stream_name

The name of the CloudWatch Log Stream that you want log records sent to.

log_stream_prefix

Prefix for the Log Stream name. The tag is appended to the prefix to construct the full log stream name. Not compatible with the log_stream_name option.

log_stream_template

Template for Log Stream name using Fluent Bit record_accessor syntax. This field is optional and if configured it overrides the other log stream options. If the template translation fails, an error is logged and the log_stream_name or log_stream_prefix is used instead (so one of those fields is still required to be configured). See the tutorial below for an example.

hashtag
Getting Started

In order to send records into Amazon Cloudwatch, you can run the plugin from the command line or through the configuration file:

hashtag
Command Line

The cloudwatch plugin, can read the parameters from the command line through the -p argument (property), e.g:

hashtag
Configuration File

In your main configuration file append the following Output section:

hashtag
Integration with Localstack (CloudWatch Logs)

For an instance of Localstack running at http://localhost:4566, the following configuration needs to be added to the [OUTPUT] section:

Any testing credentials can be exported as local variables, such as AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
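Putting this together, the following is a minimal sketch of a cloudwatch_logs output section pointed at Localstack; the region, log group, and stream prefix are placeholders, while the endpoint and port values come from the snippet above:

[OUTPUT]
    Name              cloudwatch_logs
    Match             *
    region            us-east-1
    log_group_name    fluent-bit-cloudwatch
    log_stream_prefix from-fluent-bit-
    auto_create_group On
    # point the plugin at the local Localstack endpoint
    endpoint          localhost
    port              4566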

hashtag
Permissions

The following AWS IAM permissions are required to use this plugin:

hashtag
Log Stream and Group Name templating using record_accessor syntax

Sometimes, you may want the log group or stream name to be based on the contents of the log record itself. This plugin supports templating log group and stream names using Fluent Bit record_accessor syntax.

Here is an example for a common use case: templating log group and stream names based on Kubernetes metadata.

Recall that the kubernetes filter can add metadata which will look like the following:

Using record_accessor, we can build a template based on this object.

Here is our output configuration:

With the above kubernetes metadata, the log group name will be application-logs-ip-10-1-128-166.us-east-2.compute.internal.my-namespace. And the log stream name will be myapp-5468c5d4d7-n2swr.myapp.

If the kubernetes structure is not found in the log record, then the log_group_name and log_stream_prefix will be used instead, and Fluent Bit will log an error like:

hashtag
Limitations of record_accessor syntax

Notice in the example above that the template values are separated by dot characters. This is important; the Fluent Bit record_accessor library has a limitation in the characters that can separate template variables: only dots and commas (. and ,) can come after a template variable. This is because the templating library must parse the template and determine the end of a variable.

Assume that your log records contain the metadata keys container_name and task. The following would be invalid templates because the two template variables are not separated by commas or dots:

  • $task-$container_name

  • $task/$container_name

  • $task_$container_name

  • $taskfooo$container_name

However, the following are valid:

  • $task.$container_name

  • $task.resource.$container_name

  • $task.fooo.$container_name

And the following are valid since they only contain one template variable with nothing after it:

  • fooo$task

  • fooo____$task

  • fooo/bar$container_name

hashtag
Metrics Tutorial

Fluent Bit has different input plugins (cpu, mem, disk, netif) to collect host resource usage metrics. The cloudwatch_logs output plugin can be used to send these host metrics to CloudWatch in Embedded Metric Format (EMF). If data comes from any of the above-mentioned input plugins, the cloudwatch_logs output plugin will convert it to EMF format and send it to CloudWatch as a JSON log. Additionally, if we set json/emf as the value of the log_format config option, CloudWatch will extract custom metrics from the embedded JSON payload.

Note: Right now, only cpu and mem metrics can be sent to CloudWatch.

To use the mem input plugin and send memory usage metrics to CloudWatch, consider the following example config file. Here, we use the aws filter, which adds ec2_instance_id and az (availability zone) to the log records. Later, in the output config section, we set ec2_instance_id as our metric dimension.

The following config will set two dimensions on all of our metrics: ec2_instance_id and az.

hashtag
AWS for Fluent Bit

Amazon distributes a container image with Fluent Bit and these plugins.

hashtag
GitHub

github.com/aws/aws-for-fluent-bitarrow-up-right

hashtag
Amazon ECR Public Gallery

aws-for-fluent-bitarrow-up-right

Our images are available in the Amazon ECR Public Gallery. You can download images with different tags using the following command:

For example, you can pull the image with the latest version:

If you see errors for image pull limits, try logging into public ECR with your AWS credentials:

You can check the Amazon ECR Public official doc for more details.

hashtag
Docker Hub

amazon/aws-for-fluent-bitarrow-up-right

hashtag
Amazon ECR

You can use our SSM Public Parameters to find the Amazon ECR image URI in your region:

For more information, see the AWS for Fluent Bit GitHub repo.

hashtag
Configuration Parameters
Key
Description

region

The AWS region.

stream

The name of the Kinesis data stream that you want log records sent to.

time_key

Add the timestamp to the record under this key. By default the timestamp from Fluent Bit will not be added to records sent to Kinesis.

time_key_format

strftime compliant format string for the timestamp; for example, the default is '%Y-%m-%dT%H:%M:%S'. Supports millisecond precision with '%3N' and nanosecond precision with '%9N' and '%L'; for example, to add millisecond precision use '%Y-%m-%dT%H:%M:%S.%3N'. This option is used with time_key.

log_key

By default, the whole log record will be sent to Kinesis. If you specify a key name with this option, then only the value of that key will be sent to Kinesis. For example, if you are using the Fluentd Docker log driver, you can specify log_key log and only the log message will be sent to Kinesis.

role_arn

ARN of an IAM role to assume (for cross account access).

hashtag
Getting Started

In order to send records into Amazon Kinesis Data Streams, you can run the plugin from the command line or through the configuration file:

hashtag
Command Line

The kinesis_streams plugin, can read the parameters from the command line through the -p argument (property), e.g:

hashtag
Configuration File

In your main configuration file append the following Output section:

hashtag
Permissions

The following AWS IAM permissions are required to use this plugin:

hashtag
AWS for Fluent Bit

Amazon distributes a container image with Fluent Bit and these plugins.

hashtag
GitHub

github.com/aws/aws-for-fluent-bitarrow-up-right

hashtag
Amazon ECR Public Gallery

aws-for-fluent-bitarrow-up-right

Our images are available in the Amazon ECR Public Gallery. You can download images with different tags using the following command:

For example, you can pull the image with the latest version:

If you see errors for image pull limits, try logging into public ECR with your AWS credentials:

You can check the Amazon ECR Public official doc for more details.

hashtag
Docker Hub

amazon/aws-for-fluent-bitarrow-up-right

hashtag
Amazon ECR

You can use our SSM Public Parameters to find the Amazon ECR image URI in your region:

For more information, see the AWS for Fluent Bit GitHub repo.

$ fluent-bit -i cpu -o kinesis_firehose -p delivery_stream=my-stream -p region=us-west-2 -m '*' -f 1
[OUTPUT]
    Name  kinesis_firehose
    Match *
    region us-east-1
    delivery_stream my-stream
{
	"Version": "2012-10-17",
	"Statement": [{
		"Effect": "Allow",
		"Action": [
			"firehose:PutRecordBatch"
		],
		"Resource": "*"
	}]
}
[OUTPUT]
    Name  kinesis_firehose
    Match *
    region us-east-1
    delivery_stream my-stream
    workers 2
docker pull public.ecr.aws/aws-observability/aws-for-fluent-bit:<tag>
docker pull public.ecr.aws/aws-observability/aws-for-fluent-bit:latest
aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin public.ecr.aws
aws ssm get-parameters-by-path --path /aws/service/aws-for-fluent-bit/
$ fluent-bit -i cpu -o cloudwatch_logs -p log_group_name=group -p log_stream_name=stream -p region=us-west-2 -m '*' -f 1
[OUTPUT]
    Name cloudwatch_logs
    Match   *
    region us-east-1
    log_group_name fluent-bit-cloudwatch
    log_stream_prefix from-fluent-bit-
    auto_create_group On
endpoint localhost
port 4566
{
	"Version": "2012-10-17",
	"Statement": [{
		"Effect": "Allow",
		"Action": [
			"logs:CreateLogStream",
			"logs:CreateLogGroup",
			"logs:PutLogEvents"
		],
		"Resource": "*"
	}]
}
kubernetes: {
    annotations: {
        "kubernetes.io/psp": "eks.privileged"
    },
    container_hash: "<some hash>",
    container_name: "myapp",
    docker_id: "<some id>",
    host: "ip-10-1-128-166.us-east-2.compute.internal",
    labels: {
        app: "myapp",
        "pod-template-hash": "<some hash>"
    },
    namespace_name: "my-namespace",
    pod_id: "198f7dd2-2270-11ea-be47-0a5d932f5920",
    pod_name: "myapp-5468c5d4d7-n2swr"
}
[OUTPUT]
    Name cloudwatch_logs
    Match   *
    region us-east-1
    log_group_name fallback-group
    log_stream_prefix fallback-stream
    auto_create_group On
    log_group_template application-logs-$kubernetes['host'].$kubernetes['namespace_name']
    log_stream_template $kubernetes['pod_name'].$kubernetes['container_name']
[2022/06/30 06:09:29] [ warn] [record accessor] translation failed, root key=kubernetes
[SERVICE]
    Log_Level info

[INPUT]
    Name mem
    Tag mem

[FILTER]
    Name aws
    Match *

[OUTPUT]
    Name cloudwatch_logs
    Match *
    log_stream_name fluent-bit-cloudwatch
    log_group_name fluent-bit-cloudwatch
    region us-west-2
    log_format json/emf
    metric_namespace fluent-bit-metrics
    metric_dimensions ec2_instance_id
    auto_create_group true
[FILTER]
    Name aws
    Match *

[OUTPUT]
    Name cloudwatch_logs
    Match *
    log_stream_name fluent-bit-cloudwatch
    log_group_name fluent-bit-cloudwatch
    region us-west-2
    log_format json/emf
    metric_namespace fluent-bit-metrics
    metric_dimensions ec2_instance_id,az
    auto_create_group true
docker pull public.ecr.aws/aws-observability/aws-for-fluent-bit:<tag>
docker pull public.ecr.aws/aws-observability/aws-for-fluent-bit:latest
aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin public.ecr.aws
aws ssm get-parameters-by-path --path /aws/service/aws-for-fluent-bit/
$ fluent-bit -i cpu -o kinesis_streams -p stream=my-stream -p region=us-west-2 -m '*' -f 1
[OUTPUT]
    Name  kinesis_streams
    Match *
    region us-east-1
    stream my-stream
{
	"Version": "2012-10-17",
	"Statement": [{
		"Effect": "Allow",
		"Action": [
			"kinesis:PutRecords"
		],
		"Resource": "*"
	}]
}
docker pull public.ecr.aws/aws-observability/aws-for-fluent-bit:<tag>
docker pull public.ecr.aws/aws-observability/aws-for-fluent-bit:latest
aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin public.ecr.aws
aws ssm get-parameters-by-path --path /aws/service/aws-for-fluent-bit/

endpoint

Specify a custom endpoint for the Kinesis API.

sts_endpoint

Custom endpoint for the STS API.

auto_retry_requests

Immediately retry failed requests to AWS services once. This option does not affect the normal Fluent Bit retry mechanism with backoff. Instead, it enables an immediate retry with no delay for networking errors, which may help improve throughput when there are transient/random networking issues. This option defaults to true.

external_id

Specify an external ID for the STS API, can be used with the role_arn parameter if your role requires an external ID.

profile

AWS profile name to use. Defaults to default.

workers

The number of workers to perform flush operations for this output. Default: 1.

workers

log_key

By default, the whole log record will be sent to CloudWatch. If you specify a key name with this option, then only the value of that key will be sent to CloudWatch. For example, if you are using the Fluentd Docker log driver, you can specify log_key log and only the log message will be sent to CloudWatch.

log_format

An optional parameter that can be used to tell CloudWatch the format of the data. A value of json/emf enables CloudWatch to extract custom metrics embedded in a JSON payload. See the Embedded Metric Formatarrow-up-right.

role_arn

ARN of an IAM role to assume (for cross account access).

auto_create_group

Automatically create the log group. Valid values are "true" or "false" (case insensitive). Defaults to false.

log_retention_days

If set to a number greater than zero, the newly created log group's retention policy is set to this many days. Valid values are: [1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1827, 3653]

endpoint

Specify a custom endpoint for the CloudWatch Logs API.

metric_namespace

An optional string representing the CloudWatch namespace for the metrics. See Metrics Tutorial section below for a full configuration.

metric_dimensions

A list of lists containing the dimension keys that will be applied to all metrics. The values within a dimension set MUST also be members of the root node. For more information about dimensions, see Dimension and Dimensions. In the Fluent Bit config, metric_dimensions is a comma- and semicolon-separated string. If you have only one list of dimensions, put the values as a comma-separated string. If you want to pass a list of lists, separate the lists with semicolons. For example, if you set the value to 'dimension_1,dimension_2;dimension_3', it will be converted to [[dimension_1, dimension_2],[dimension_3]].

sts_endpoint

Specify a custom STS endpoint for the AWS STS API.

profile

Option to specify an AWS Profile for credentials. Defaults to default

auto_retry_requests

Immediately retry failed requests to AWS services once. This option does not affect the normal Fluent Bit retry mechanism with backoff. Instead, it enables an immediate retry with no delay for networking errors, which may help improve throughput when there are transient/random networking issues. This option defaults to true.

external_id

Specify an external ID for the STS API, can be used with the role_arn parameter if your role requires an external ID.

workers

The number of workers to perform flush operations for this output. Default: 1.

Counter

Counter is a very simple plugin that counts how many records it's getting upon flush time. Plugin output is as follows:

[TIMESTAMP, NUMBER_OF_RECORDS_NOW] (total = RECORDS_SINCE_IT_STARTED)

hashtag
Getting Started

You can run the plugin from the command line or through the configuration file:

hashtag
Command Line

From the command line you can let Fluent Bit count up data with the following options:

hashtag
Configuration File

In your main configuration file append the following Input & Output sections:

hashtag
Testing

Once Fluent Bit is running, you will see the reports in the output interface similar to this:

Azure Blob

Official and Microsoft Certified Azure Storage Blob connector

The Azure Blob output plugin allows ingesting your records into the Azure Blob Storage service. This connector is designed to use the Append Blob and Block Blob API.

Our plugin works with the official Azure service and can also be configured to be used with a service emulator such as Azurite.

hashtag
Azure Storage Account

Before getting started, make sure you already have an Azure Storage account. As a reference, the following link explains step-by-step how to set up your account:

Azure Blob Storage Tutorial (Video)

FlowCounter

FlowCounter is the protocol to count records. The flowcounter output plugin allows you to count records and their size.

hashtag
Configuration Parameters

The plugin supports the following configuration parameters:

Key
Description
Default

NULL

The null output plugin just throws away events.

hashtag
Configuration Parameters

The plugin doesn't support configuration parameters.

hashtag

$ fluent-bit -i cpu -o counter
[INPUT]
    Name cpu
    Tag  cpu

[OUTPUT]
    Name  counter
    Match *
Getting Started

You can run the plugin from the command line or through the configuration file:

hashtag
Command Line

From the command line you can let Fluent Bit throw away events with the following options:

hashtag
Configuration File

In your main configuration file append the following Input & Output sections:

$ fluent-bit -i cpu -o null
[INPUT]
    Name cpu
    Tag  cpu

[OUTPUT]
    Name null
    Match *
$ bin/fluent-bit -i cpu -o counter -f 1
Fluent Bit v1.x.x
* Copyright (C) 2019-2020 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2017/07/19 11:19:02] [ info] [engine] started
1500484743,1 (total = 1)
1500484744,1 (total = 2)
1500484745,1 (total = 3)
1500484746,1 (total = 4)
1500484747,1 (total = 5)

hashtag
Configuration Parameters

We expose different configuration properties. The following table lists all the options available, and the next section has specific configuration details for the official service or the emulator.

Key
Description
default

account_name

Azure Storage account name. This configuration property is mandatory

auth_type

Specify the type to authenticate against the service. Fluent Bit supports key and sas.

key

shared_key

Specify the Azure Storage Shared Key to authenticate against the service. This configuration property is mandatory when auth_type is key.

sas_token

Specify the Azure Storage shared access signatures to authenticate against the service. This configuration property is mandatory when auth_type is sas.

hashtag
Getting Started

As mentioned above, you can either deliver records to the official service or an emulator. Below we have an example for each use case.

hashtag
Configuration for Azure Storage Service

The following configuration example generates a random message with a custom tag:

After you run the configuration file above, you will be able to query the data using the Azure Storage Explorer. The example above will generate the following content in the explorer:

hashtag
Configuring and using Azure Emulator: Azurite

hashtag
Install and run Azurite

The quickest way to get started is to install Azurite using npm:

then run the service:

hashtag
Configuring Fluent Bit for Azurite

Azuritearrow-up-right comes with a default account_name and shared_key, so make sure to use the specific values provided in the example below (do an exact copy/paste):

after running that Fluent Bit configuration you will see the data flowing into Azurite:

Unit

The unit of duration. (second/minute/hour/day)

minute

Workers

The number of workers to perform flush operations for this output.

0

hashtag
Getting Started

You can run the plugin from the command line or through the configuration file:

hashtag
Command Line

From the command line you can let Fluent Bit count up data with the following options:

hashtag
Configuration File

In your main configuration file append the following Input & Output sections:

hashtag
Testing

Once Fluent Bit is running, you will see the reports in the output interface similar to this:

Azure Log Analytics

Send logs, metrics to Azure Log Analytics

The Azure output plugin allows you to ingest your records into the Azure Log Analytics service.

To get more details about how to set up Azure Log Analytics, please refer to the following documentation: Azure Log Analytics

hashtag
Configuration Parameters

Key
Description
default

hashtag
Getting Started

In order to insert records into an Azure Log Analytics instance, you can run the plugin from the command line or through the configuration file:

hashtag
Command Line

The azure plugin can read the parameters from the command line through the -p argument (property), for example:

hashtag
Configuration File

In your main configuration file append the following Input & Output sections:

Another example using Log_Type_Key with record accessor, which will read the table name (or event type) dynamically from the Kubernetes label app, instead of Log_Type:

SkyWalking

The Apache SkyWalking output plugin allows you to flush your records to an Apache SkyWalking OAP. The following instructions assume that you have a fully operational Apache SkyWalking OAP in place.

hashtag
Configuration Parameters

parameter
description
default

hashtag
TLS / SSL

The Apache SkyWalking output plugin supports TLS/SSL. For more details about the properties available and general configuration, please refer to the TLS/SSL section.

hashtag
Getting Started

In order to start inserting records into an Apache SkyWalking service, you can run the plugin through the configuration file:

hashtag
Configuration File

In your main configuration file append the following Input & Output sections:

hashtag
Output Format

The format of the plugin output follows the data collect protocol.

For example, if we get a log like the following,

This message is packed into the following protocol format and written to the OAP via the REST API.

Treasure Data

The td output plugin allows you to flush your records into the Treasure Data cloud service.

hashtag
Configuration Parameters

The plugin supports the following configuration parameters:

Key
Description
Default

hashtag
Getting Started

In order to start inserting records into Treasure Data, you can run the plugin from the command line or through the configuration file:

hashtag
Command Line:

Ideally you don't want to expose your API key on the command line; using a configuration file is highly recommended.

hashtag
Configuration File

In your main configuration file append the following Input & Output sections:

NATS

The nats output plugin allows you to flush your records into a NATS Server end point. The following instructions assume that you have a fully operational NATS Server in place.

hashtag
Configuration parameters

parameter
description
default

host

In order to override the default configuration values, the plugin uses the optional Fluent Bit network address format, e.g:

hashtag
Running

Fluent Bit only requires knowing that it needs to use the nats output plugin; if no extra information is given, it will use the default values specified in the above table.

As described above, the target service and storage point can be changed, e.g:

hashtag
Data format

For every set of records flushed to a NATS Server, Fluent Bit uses the following JSON format:

Each record is an individual entity represented in a JSON array that contains a UNIX_TIMESTAMP and a JSON map with a set of key/values. A summarized output of the CPU input plugin will look like this:

Observe

Observe employs the http output plugin, allowing you to flush your records into Observe.

For now the functionality is pretty basic: it issues a POST request with the data records in MessagePack (or JSON) format.

The following are the specific HTTP parameters to employ:

hashtag
Configuration Parameters

Key
Description

Prometheus Exporter

An output plugin to expose Prometheus Metrics

The prometheus exporter allows you to take metrics from Fluent Bit and expose them such that a Prometheus instance can scrape them.

Important Note: The prometheus exporter only works with metric plugins, such as Node Exporter Metrics

Key
Description
Default
[SERVICE]
    flush     1
    log_level info

[INPUT]
    name      dummy
    dummy     {"name": "Fluent Bit", "year": 2020}
    samples   1
    tag       var.log.containers.app-default-96cbdef2340.log

[OUTPUT]
    name                  azure_blob
    match                 *
    account_name          YOUR_ACCOUNT_NAME
    shared_key            YOUR_SHARED_KEY
    path                  kubernetes
    container_name        logs
    auto_create_container on
    tls                   on
$ npm install -g azurite
$ azurite
Azurite Blob service is starting at http://127.0.0.1:10000
Azurite Blob service is successfully listening at http://127.0.0.1:10000
Azurite Queue service is starting at http://127.0.0.1:10001
Azurite Queue service is successfully listening at http://127.0.0.1:10001
[SERVICE]
    flush     1
    log_level info

[INPUT]
    name      dummy
    dummy     {"name": "Fluent Bit", "year": 2020}
    samples   1
    tag       var.log.containers.app-default-96cbdef2340.log

[OUTPUT]
    name                  azure_blob
    match                 *
    account_name          devstoreaccount1
    shared_key            Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==
    path                  kubernetes
    container_name        logs
    auto_create_container on
    tls                   off
    emulator_mode         on
    endpoint              http://127.0.0.1:10000
$ azurite
Azurite Blob service is starting at http://127.0.0.1:10000
Azurite Blob service is successfully listening at http://127.0.0.1:10000
Azurite Queue service is starting at http://127.0.0.1:10001
Azurite Queue service is successfully listening at http://127.0.0.1:10001
127.0.0.1 - - [03/Sep/2020:17:40:03 +0000] "GET /devstoreaccount1/logs?restype=container HTTP/1.1" 404 -
127.0.0.1 - - [03/Sep/2020:17:40:03 +0000] "PUT /devstoreaccount1/logs?restype=container HTTP/1.1" 201 -
127.0.0.1 - - [03/Sep/2020:17:40:03 +0000] "PUT /devstoreaccount1/logs/kubernetes/var.log.containers.app-default-96cbdef2340.log?comp=appendblock HTTP/1.1" 404 -
127.0.0.1 - - [03/Sep/2020:17:40:03 +0000] "PUT /devstoreaccount1/logs/kubernetes/var.log.containers.app-default-96cbdef2340.log HTTP/1.1" 201 -
127.0.0.1 - - [03/Sep/2020:17:40:04 +0000] "PUT /devstoreaccount1/logs/kubernetes/var.log.containers.app-default-96cbdef2340.log?comp=appendblock HTTP/1.1" 201 -
$ fluent-bit -i cpu -o flowcounter
[INPUT]
    Name cpu
    Tag  cpu

[OUTPUT]
    Name flowcounter
    Match *
    Unit second
$ fluent-bit -i cpu -o flowcounter
Fluent Bit v1.x.x
* Copyright (C) 2019-2020 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2016/12/23 11:01:20] [ info] [engine] started
[out_flowcounter] cpu.0:[1482458540, {"counts":60, "bytes":7560, "counts/minute":1, "bytes/minute":126 }]
workers

container_name

Name of the container that will contain the blobs. This configuration property is mandatory

blob_type

Specify the desired blob type. Fluent Bit supports appendblob and blockblob.

appendblob

auto_create_container

If container_name does not exist in the remote service, enabling this option will handle the exception and auto-create the container.

on

path

Optional path to store your blobs. If your blob name is myblob, you can specify sub-directories where to store it using path, so setting path to /logs/kubernetes will store your blob in /logs/kubernetes/myblob.

emulator_mode

If you want to send data to an Azure emulator service like Azuritearrow-up-right, enable this option so the plugin will format the requests to the expected format.

off

endpoint

If you are using an emulator, this option allows you to specify the absolute HTTP address of such service. e.g: http://127.0.0.1:10000arrow-up-right.

tls

Enable or disable TLS encryption. Note that Azure service requires this to be turned on.

off

workers

The number of workers to perform flush operations for this output.

0

host

Hostname of Apache SkyWalking OAP

127.0.0.1

port

TCP port of the Apache SkyWalking OAP

12800

auth_token

Authentication token if needed for Apache SkyWalking OAP

None

svc_name

Service name that fluent-bit belongs to

sw-service

svc_inst_name

Service instance name of fluent-bit

fluent-bit

workers

The number of workers to perform flush operations for this output.

0

API

The Treasure Dataarrow-up-right API key. To obtain it please log into the Consolearrow-up-right and in the API keys box, copy the API key hash.

Database

Specify the name of your target database.

Table

Specify the name of your target table where the records will be stored.

Region

Set the service region, available values: US and JP

US

Workers

The number of workers to perform flush operations for this output.

0

IP address or hostname of the NATS Server

127.0.0.1

port

TCP port of the target NATS Server

4222

workers

The number of workers to perform flush operations for this output.

0

[INPUT]
    Name cpu

[OUTPUT]
    Name skywalking
    svc_name dummy-service
    svc_inst_name dummy-service-fluentbit
{
   "log": "This is the original log message"
}
[{
  "timestamp": 123456789,
  "service": "dummy-service",
  "serviceInstance": "dummy-service-fluentbit",
  "body": {
    "json": {
      "json": "{\"log\": \"This is the original log message\"}"
    }
  }
}]
$ fluent-bit -i cpu -o td -p API="abc" -p Database="fluentbit" -p Table="cpu_samples"
[INPUT]
    Name cpu
    Tag  my_cpu

[OUTPUT]
    Name     td
    Match    *
    API      5713/e75be23caee19f8041dfa635ddfbd0dcd8c8d981
    Database fluentbit
    Table    cpu_samples
nats://host:port
$ bin/fluent-bit -i cpu -o nats -V -f 5
Fluent Bit v1.x.x
* Copyright (C) 2019-2020 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2016/03/04 10:17:33] [ info] Configuration
flush time     : 5 seconds
input plugins  : cpu
collectors     :
[2016/03/04 10:17:33] [ info] starting engine
cpu[all] all=3.250000 user=2.500000 system=0.750000
cpu[i=0] all=3.000000 user=1.000000 system=2.000000
cpu[i=1] all=3.000000 user=2.000000 system=1.000000
cpu[i=2] all=2.000000 user=2.000000 system=0.000000
cpu[i=3] all=6.000000 user=5.000000 system=1.000000
[2016/03/04 10:17:33] [debug] [in_cpu] CPU 3.25%
...
[
  [UNIX_TIMESTAMP, JSON_MAP_1],
  [UNIX_TIMESTAMP, JSON_MAP_2],
  [UNIX_TIMESTAMP, JSON_MAP_N],
]
[
  [1457108504,{"tag":"fluentbit","cpu_p":1.500000,"user_p":1,"system_p":0.500000}],
  [1457108505,{"tag":"fluentbit","cpu_p":4.500000,"user_p":3,"system_p":1.500000}],
  [1457108506,{"tag":"fluentbit","cpu_p":6.500000,"user_p":4.500000,"system_p":2}]
]

off

Workers

The number of workers to perform flush operations for this output.

0

Customer_ID

Customer ID or WorkspaceID string.

Shared_Key

The primary or the secondary Connected Sources client authentication key.

Log_Type

The name of the event type.

fluentbit

Log_Type_Key

If included, the value of this key will be looked up in the record and, if present, will overwrite the log_type. If not found, the log_type value will be used.

Time_Key

Optional parameter to specify the key name where the timestamp will be stored.

@timestamp

Time_Generated

If enabled, the HTTP request header 'time-generated-field' will be included so Azure can override the timestamp with the key specified by 'time_key' option.

default

host

IP address or hostname of Observe's data collection endpoint, where $(OBSERVE_CUSTOMER) is your Customer ID.

OBSERVE_CUSTOMER.collect.observeinc.com

port

TCP port to employ when sending to Observe

443

tls

Specify whether to use TLS

on

uri

Specify the HTTP URI for the Observe's data ingest

/v1/http/fluentbit

format

The data format to be used in the HTTP request body

msgpack

header

hashtag
Configuration File

In your main configuration file, append the following Input & Output sections:

This allows you to add custom labels to all metrics exposed through the prometheus exporter. You may have multiple of these fields

workers

The number of workers to perform flush operations for this output.

1

hashtag
Getting Started

The Prometheus exporter only works with metrics captured from metric plugins. In the following example, host metrics are captured by the node exporter metrics plugin and then routed to the prometheus exporter. Within the output plugin, two labels are added: app="fluent-bit" and color="blue".

# Node Exporter Metrics + Prometheus Exporter
# -------------------------------------------
# The following example collect host metrics on Linux and expose
# them through a Prometheus HTTP end-point.
#
# After starting the service try it with:
#
# $ curl http://127.0.0.1:2021/metrics
#
[SERVICE]
    flush           1
    log_level       info

[INPUT]
    name            node_exporter_metrics
    tag             node_metrics
    scrape_interval 2

[OUTPUT]
    name            prometheus_exporter
    match           node_metrics
    host            0.0.0.0
    port            2021
    # add user-defined labels
    add_label       app fluent-bit
    add_label       color blue

host

This is the address Fluent Bit will bind to when hosting Prometheus metrics. Note: the listen parameter is deprecated as of v1.9.0.

0.0.0.0

port

This is the port Fluent Bit will bind to when hosting prometheus metrics

2021

add_label

Google Chronicle

The Chronicle output plugin allows ingesting security logs into the Google Chronicle service. This connector is designed to send unstructured security logs.

hashtag
Google Cloud Configuration

Fluent Bit streams data into an existing Google Chronicle tenant using a service account that you specify. Therefore, before using the Chronicle output plugin, you must create a service account, create a Google Chronicle tenant, authorize the service account to write to the tenant, and provide the service account credentials to Fluent Bit.

hashtag
Creating a Service Account

To stream security logs into Google Chronicle, the first step is to create a Google Cloud service account for Fluent Bit:

Creating a Google Cloud Service Account

hashtag
Creating a Tenant of Google Chronicle

Fluent Bit does not create a tenant of Google Chronicle for your security logs, so you must create this ahead of time.

hashtag
Retrieving Service Account Credentials

Fluent Bit's Chronicle output plugin uses a JSON credentials file for authentication credentials. Download the credentials file by following these instructions:

Creating and Managing Service Account Keys

hashtag
Configurations Parameters

Key
Description
default

See Google's official documentation for further details.

hashtag
Configuration File

If you are using a Google Cloud Credentials File, the following configuration is enough to get you started:
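Below is a minimal sketch of such a configuration; it assumes a dummy input and the plugin name chronicle, and the credentials path, project, customer ID, and log type values are placeholders to replace with the values for your own tenant:

[INPUT]
    Name  dummy
    Tag   security.logs

[OUTPUT]
    # placeholder values; use the parameters documented above
    Name                        chronicle
    Match                       *
    google_service_credentials  /path/to/my_credentials.json
    project_id                  my-gcp-project
    customer_id                 my-chronicle-customer-id
    log_type                    MY_LOG_TYPE
    region                      US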

File

The file output plugin allows you to write the data received through the input plugin to a file.

hashtag
Configuration Parameters

The plugin supports the following configuration parameters:

Key
Description
Default

hashtag
Format

hashtag
out_file format

Output time, tag and JSON records. There are no configuration parameters for out_file.

hashtag
plain format

Output the records as JSON (without additional tag and timestamp attributes). There are no configuration parameters for the plain format.

hashtag
csv format

Output the records as csv. Csv supports an additional configuration parameter.

Key
Description

hashtag
ltsv format

Output the records as LTSV. LTSV supports an additional configuration parameter.

Key
Description

hashtag
template format

Output the records using a custom format template.

Key
Description

This accepts a formatting template and fills placeholders using corresponding values in a record.

For example, if you set up the configuration as below:

You will get the following output:

hashtag
Getting Started

You can run the plugin from the command line or through the configuration file:

hashtag
Command Line

From the command line you can let Fluent Bit write data to a file with the following options:

hashtag
Configuration File

In your main configuration file append the following Input & Output sections:

OpenObserve

Send logs to OpenObserve using Fluent Bit

Use the OpenObserve output plugin to ingest logs into OpenObserve.

Before you begin, you need an OpenObserve account, an HTTP_User, and an HTTP_Passwd. You can find these fields under Ingestion in OpenObserve Cloud. Alternatively, you can achieve this with various installation types as mentioned in the OpenObserve documentation.

hashtag
Configuration Parameters

Key
Description
Default

hashtag
Configuration File

Use this configuration file to get started:

Standard Output

The stdout output plugin prints the data received through the input plugin to the standard output. Its usage is very simple:

hashtag
Configuration Parameters

Key
Description
default

hashtag
Command Line

We have specified to gather CPU usage metrics and print them out to the standard output in a human-readable way:

No more, no less, it just works.

Datadog

Send logs to Datadog

The Datadog output plugin allows you to ingest your logs into Datadog.

Before you begin, you need a Datadog account, a Datadog API key, and you need to activate Datadog Logs Management.

hashtag
Configuration Parameters

Key
Description
Default

Vivo Exporter

Vivo Exporter is an output plugin that exposes logs, metrics, and traces through an HTTP endpoint. This plugin aims to be used in conjunction with the Vivo project.

hashtag
Configuration Parameters

Key
Description
Default

TCP & TLS

The tcp output plugin allows you to send records to a remote TCP server. The payload can be formatted in different ways as required.
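As a quick illustration, here is a minimal sketch (not an official example) that sends CPU metrics as JSON lines to a TCP server assumed to be listening on 127.0.0.1 port 5170:

[INPUT]
    Name cpu
    Tag  cpu

[OUTPUT]
    Name   tcp
    Match  *
    Host   127.0.0.1
    Port   5170
    Format json_lines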

hashtag
Configuration Parameters

Key
Description
default

New Relic

New Relic is a data management platform that gives you real-time insights into your data for developers, operations, and management teams.

The Fluent Bit nrlogs output plugin allows you to send your logs to the New Relic service.

Before getting started with the plugin configuration, make sure you have an account with access to the service. You can register and start with a free trial at the following link:
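As an illustration only, the following is a minimal sketch of an nrlogs output section; it assumes an api_key parameter, and the key value is a placeholder you must replace with your own New Relic key:

[INPUT]
    Name cpu
    Tag  my_cpu

[OUTPUT]
    # placeholder key; replace with your New Relic key
    Name    nrlogs
    Match   *
    api_key YOUR_NEW_RELIC_API_KEY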

hashtag

Kafka REST Proxy

The kafka-rest output plugin allows you to flush your records into a Kafka REST Proxy server. The following instructions assume that you have fully operational Kafka REST Proxy and Kafka services running in your environment.
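As an illustration only, the following is a minimal sketch of a kafka-rest output section; the Host, Port and Topic values are assumptions pointing at a Kafka REST Proxy listening locally on port 8082, so adjust them for your environment:

[INPUT]
    Name cpu
    Tag  my_cpu

[OUTPUT]
    # assumed local Kafka REST Proxy and topic name
    Name  kafka-rest
    Match *
    Host  127.0.0.1
    Port  8082
    Topic fluent-bit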

hashtag
Configuration Parameters

Key
Description
default
$ fluent-bit -i cpu -o azure -p customer_id=abc -p shared_key=def -m '*' -f 1
[INPUT]
    Name  cpu

[OUTPUT]
    Name        azure
    Match       *
    Customer_ID abc
    Shared_Key  def
[INPUT]
    Name  cpu

[OUTPUT]
    Name        azure
    Match       *
    Log_Type_Key $kubernetes['labels']['app']
    Customer_ID abc
    Shared_Key  def
[OUTPUT]
    name         http
    match        *
    host         my-observe-customer-id.collect.observeinc.com
    port         443
    tls          on

    uri          /v1/http/fluentbit

    format       msgpack
    header       Authorization     Bearer ${OBSERVE_TOKEN}
    header       X-Observe-Decoder fluent
    compress     gzip

    # For Windows: provide path to root cert
    #tls.ca_file  C:\fluent-bit\isrgrootx1.pem
# Node Exporter Metrics + Prometheus Exporter
# -------------------------------------------
# The following example collect host metrics on Linux and expose
# them through a Prometheus HTTP end-point.
#
# After starting the service try it with:
#
# $ curl http://127.0.0.1:2021/metrics
#
service:
    flush: 1
    log_level: info
pipeline:
    inputs:
        - name: node_exporter_metrics
          tag:  node_metrics
          scrape_interval: 2
    outputs:
        - name: prometheus_exporter
          match: node_metrics
          host: 0.0.0.0
          port: 2021
          # add user-defined labels
          add_label:
            - app fluent-bit
            - color blue
workers

The specific header that provides the Observe token needed to authorize sending data into a datastreamarrow-up-right.

Authorization Bearer ${OBSERVE_TOKEN}

header

The specific header that instructs Observe how to decode incoming payloads.

X-Observe-Decoder fluent

compress

Set payload compression mechanism. Option available is 'gzip'

gzip

tls.ca_file

For use with Windows: provide path to root cert

workers

The number of workers to perform flush operations for this output.

0

Path

Directory path to store files. If not set, Fluent Bit will write the files in the directory where it is running. Note: this option was added in Fluent Bit v1.4.6.

File

Set file name to store the records. If not set, the file name will be the tag associated with the records.

Format

The format of the file content. See also Format section. Default: out_file.

Mkdir

Recursively create output directory if it does not exist. Permissions set to 0755.

Workers

The number of workers to perform flush operations for this output.

1

Delimiter

The character used to separate each value. Accepted values are "\t" (or "tab"), "space" or "comma". Other values are ignored and the default will be used silently. Default: ','

Delimiter

The character to separate each pair. Default: '\t'(TAB)

Label_Delimiter

The character to separate label and the value. Default: ':'

Template

The format string. Default: '{time} {message}'

/api/default/default/_json

Format

Required: The format of the log payload. OpenObserve expects JSON.

json

json_date_key

Optional: The JSON key used for timestamps in the logs.

timestamp

json_date_format

Optional: The format of the date in logs. OpenObserve supports ISO 8601.

iso8601

include_tag_key

If true, a tag is appended to the output. The key name is used in the tag_key property.

false

Host

Required. The OpenObserve server where you are sending logs.

localhost

TLS

Required: Enable end-to-end security using TLS. Set to on to enable TLS communication with OpenObserve.

on

compress

Recommended: Compresses the payload in GZIP format. OpenObserve supports and recommends setting this to gzip for optimized log ingestion.

none

HTTP_User

Required: Username for HTTP authentication.

none

HTTP_Passwd

Required: Password for HTTP authentication.

none

URI

Required: The API path used to send logs.

Format

Specify the data format to be printed. Supported formats are msgpack, json, json_lines and json_stream.

msgpack

json_date_key

Specify the name of the time key in the output record. To disable the time key just set the value to false.

date

json_date_format

Specify the format of the date. Supported formats are double, epoch, iso8601 (eg: 2018-05-30T09:39:52.000681Z) and java_sql_timestamp (eg: 2018-05-30 09:39:52.000681)

double

workers

The number of workers to perform flush operations for this output.

1

tag: [time, {"key1":"value1", "key2":"value2", "key3":"value3"}]
{"key1":"value1", "key2":"value2", "key3":"value3"}
time[delimiter]"value1"[delimiter]"value2"[delimiter]"value3"
field1[label_delimiter]value1[delimiter]field2[label_delimiter]value2\n
[INPUT]
  Name mem

[OUTPUT]
  Name file
  Format template
  Template {time} used={Mem.used} free={Mem.free} total={Mem.total}
1564462620.000254 used=1045448 free=31760160 total=32805608
$ fluent-bit -i cpu -o file -p path=output.txt
[INPUT]
    Name cpu
    Tag  cpu

[OUTPUT]
    Name file
    Match *
    Path output_dir
[OUTPUT]
  Name http
  Match *
  URI /api/default/default/_json
  Host localhost
  Port 5080
  tls on
  Format json
  Json_date_key    timestamp
  Json_date_format iso8601
  HTTP_User <YOUR_HTTP_USER>
  HTTP_Passwd <YOUR_HTTP_PASSWORD>
  compress gzip
$ bin/fluent-bit -i cpu -o stdout -v
$ bin/fluent-bit -i cpu -o stdout -p format=msgpack -v
Fluent Bit v1.x.x
* Copyright (C) 2019-2020 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2016/10/07 21:52:01] [ info] [engine] started
[0] cpu.0: [1475898721, {"cpu_p"=>0.500000, "user_p"=>0.250000, "system_p"=>0.250000, "cpu0.p_cpu"=>0.000000, "cpu0.p_user"=>0.000000, "cpu0.p_system"=>0.000000, "cpu1.p_cpu"=>0.000000, "cpu1.p_user"=>0.000000, "cpu1.p_system"=>0.000000, "cpu2.p_cpu"=>0.000000, "cpu2.p_user"=>0.000000, "cpu2.p_system"=>0.000000, "cpu3.p_cpu"=>1.000000, "cpu3.p_user"=>0.000000, "cpu3.p_system"=>1.000000}]
[1] cpu.0: [1475898722, {"cpu_p"=>0.250000, "user_p"=>0.250000, "system_p"=>0.000000, "cpu0.p_cpu"=>0.000000, "cpu0.p_user"=>0.000000, "cpu0.p_system"=>0.000000, "cpu1.p_cpu"=>1.000000, "cpu1.p_user"=>1.000000, "cpu1.p_system"=>0.000000, "cpu2.p_cpu"=>0.000000, "cpu2.p_user"=>0.000000, "cpu2.p_system"=>0.000000, "cpu3.p_cpu"=>0.000000, "cpu3.p_user"=>0.000000, "cpu3.p_system"=>0.000000}]
[2] cpu.0: [1475898723, {"cpu_p"=>0.750000, "user_p"=>0.250000, "system_p"=>0.500000, "cpu0.p_cpu"=>2.000000, "cpu0.p_user"=>1.000000, "cpu0.p_system"=>1.000000, "cpu1.p_cpu"=>0.000000, "cpu1.p_user"=>0.000000, "cpu1.p_system"=>0.000000, "cpu2.p_cpu"=>1.000000, "cpu2.p_user"=>0.000000, "cpu2.p_system"=>1.000000, "cpu3.p_cpu"=>0.000000, "cpu3.p_user"=>0.000000, "cpu3.p_system"=>0.000000}]
[3] cpu.0: [1475898724, {"cpu_p"=>1.000000, "user_p"=>0.750000, "system_p"=>0.250000, "cpu0.p_cpu"=>1.000000, "cpu0.p_user"=>1.000000, "cpu0.p_system"=>0.000000, "cpu1.p_cpu"=>2.000000, "cpu1.p_user"=>1.000000, "cpu1.p_system"=>1.000000, "cpu2.p_cpu"=>1.000000, "cpu2.p_user"=>1.000000, "cpu2.p_system"=>0.000000, "cpu3.p_cpu"=>1.000000, "cpu3.p_user"=>1.000000, "cpu3.p_system"=>0.000000}]

customer_id

The customer id to identify the tenant of Google Chronicle to stream into. The value of the customer_id should be specified in the configuration file.

log_type

The log type to parse logs as. Google Chronicle supports parsing for specific log types only.

region

The GCP region in which to store security logs. Currently, there are several supported regions: US, EU, UK, ASIA. Blank is handled as US.

log_key

By default, the whole log record will be sent to Google Chronicle. If you specify a key name with this option, then only the value of that key will be sent to Google Chronicle.

workers

The number of workers to perform flush operations for this output.

0

google_service_credentials

Absolute path to a Google Cloud credentials JSON file.

Value of the environment variable $GOOGLE_SERVICE_CREDENTIALS

service_account_email

Account email associated with the service. Only available if no credentials file has been provided.

Value of environment variable $SERVICE_ACCOUNT_EMAIL

service_account_secret

Private key content associated with the service account. Only available if no credentials file has been provided.

Value of environment variable $SERVICE_ACCOUNT_SECRET

project_id

The project id containing the tenant of Google Chronicle to stream into.

The value of the project_id in the credentials file

Host

Required - The Datadog server where you are sending your logs.

http-intake.logs.datadoghq.com

TLS

Required - End-to-end security communications security protocol. Datadog recommends setting this to on.

off

compress

Recommended - compresses the payload in GZIP format, Datadog supports and recommends setting this to gzip.

apikey

Required - Your Datadog API key.

Proxy

Optional - Specify an HTTP Proxy. The expected format of this value is . Note that https is not supported yet.

provider

To activate the remapping, specify configuration flag provider with value ecs.

json_date_key

Date key name for output.

timestamp

include_tag_key

If enabled, a tag is appended to the output. The key name is set with the tag_key property.

false

tag_key

The key name of the tag. If include_tag_key is false, this property is ignored.

tagkey

dd_service

Recommended - The human readable name for your service generating the logs (e.g. the name of your application or database). If unset, Datadog will look for the service using ."

dd_source

Recommended - A human readable name for the underlying technology of your service (e.g. postgres or nginx). If unset, Datadog will look for the source in the .

dd_tags

Optional - The you want to assign to your logs in Datadog. If unset, Datadog will look for the tags in the .

dd_message_key

By default, the plugin searches for the key 'log' and remaps its value to the key 'message'. If this property is set, the plugin will search for that key name instead.

workers

The number of workers to perform flush operations for this output.

0

hashtag
Configuration File

Get started quickly with this configuration file:
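The following is a minimal sketch of such a file, assuming the output plugin name datadog and a dummy input; the API key, service, source and tags values are placeholders to replace with your own:

[INPUT]
    Name  dummy

[OUTPUT]
    # placeholder values; see the parameters documented above
    Name        datadog
    Match       *
    Host        http-intake.logs.datadoghq.com
    TLS         on
    compress    gzip
    apikey      <YOUR_DATADOG_API_KEY>
    dd_service  <APPLICATION_SERVICE>
    dd_source   <SOURCE>
    dd_tags     <TAGS>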

hashtag
Troubleshooting

hashtag
403 Forbidden

If you get a 403 Forbidden error response, double check that you have a valid Datadog API keyarrow-up-right and that you have activated Datadog Logs Managementarrow-up-right.

Off

stream_queue_size

Specify the maximum queue size per stream. Each specific stream for logs, metrics and traces can hold up to stream_queue_size bytes.

20M

http_cors_allow_origin

Specify the value for the HTTP Access-Control-Allow-Origin header (CORS).

workers

The number of workers to perform flush operations for this output.

1

hashtag
Getting Started

Here is a simple configuration of Vivo Exporter; note that this example is not based on the defaults.
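The following is a minimal sketch of such a configuration; it assumes the output plugin name vivo_exporter and uses the parameters documented on this page with non-default values:

[INPUT]
    name  dummy

[OUTPUT]
    name                    vivo_exporter
    match                   *
    # non-default values, adjust as needed
    empty_stream_on_read    on
    stream_queue_size       50M
    http_cors_allow_origin  *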

hashtag
How it works

Vivo Exporter provides buffers that serve as streams for each telemetry data type, in this case, logs, metrics, and traces. Each buffer contains a fixed capacity in terms of size (20M by default). When the data arrives at a stream, it’s appended to the end. If the buffer is full, it removes the older entries to make room for new data.

The data that arrives is a chunk. A chunk is a group of events that belongs to the same type (logs, metrics or traces) and contains the same tag. Every chunk placed in a stream is assigned with an auto-incremented id.

hashtag
Requesting data from the streams

By using a simple HTTP request, you can retrieve the data from the streams. The following are the endpoints available:

endpoint
Description

/logs

Exposes log events in JSON format. Each event contains a timestamp, metadata and the event content.

/metrics

Exposes metrics events in JSON format. Each metric contains name, metadata, metric type and labels (dimensions).

/traces

Exposes traces events in JSON format. Each trace contains a name, resource spans, spans, attributes, events information, etc.

The example below generates dummy log events, which are then consumed by using the curl HTTP command-line client:

Configure and start Fluent Bit

Retrieve the data

We use the -i curl option to also print the HTTP response headers.

The curl output will look like this:

hashtag
Streams and IDs

As mentioned above, each stream buffers chunks that contain N events, and each chunk has its own ID which is unique inside the stream.

When we receive the HTTP response, Vivo Exporter also reports the range of chunk IDs that were served in the response via the HTTP headers Vivo-Stream-Start-ID and Vivo-Stream-End-ID.

The values of these headers can be used by the client application to specify a range between IDs or set limits for the number of chunks to retrieve from the stream.
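For example, assuming a previous response ended with Vivo-Stream-End-ID: 3, the client could ask only for newer chunks by using the from query option described in the next section (a minimal sketch, using the same local endpoint as the other examples on this page):

curl -i "http://127.0.0.1:2025/logs?from=4"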

hashtag
Retrieve ranges and use limits

A client might want to always retrieve the latest chunks available and skip previous ones that were already processed. In a first request without any given range, Vivo Exporter will provide all the content that exists in the buffer for the specific stream; from that response, the client might want to keep the last ID (Vivo-Stream-End-ID) that was received.

To query ranges or start from specific chunk IDs (remember that they are incremental), you can use a mix of the following options:

Query string option
Description

from

Specify the first chunk ID to retrieve. Note that if the chunk ID does not exist, the next one in the queue will be provided.

to

The last chunk ID to retrieve. If not found, the whole stream will be provided (starting from from if it was set).

limit

Limit the output to a specific number of chunks. The default value is 0, which means send everything.

The following example specifies the range from chunk ID 1 to chunk ID 3, limited to 1 chunk:

curl -i "http://127.0.0.1:2025/logs?from=1&to=3&limit=1"

Output:

empty_stream_on_read

Vivo projectarrow-up-right

If enabled, when an HTTP client consumes the data from a stream, the stream content will be removed.

127.0.0.1

Port

TCP Port of the target service.

5170

Format

Specify the data format to be printed. Supported formats are msgpack, json, json_lines and json_stream.

msgpack

json_date_key

Specify the name of the time key in the output record. To disable the time key just set the value to false.

date

json_date_format

Specify the format of the date. Supported formats are double, epoch, iso8601 (eg: 2018-05-30T09:39:52.000681Z) and java_sql_timestamp (eg: 2018-05-30 09:39:52.000681)

double

workers

The number of workers to perform flush operations for this output.

2

hashtag
TLS Configuration Parameters

The following parameters are available to configure a secure channel connection through TLS:

Key
Description
Default

tls

Enable or disable TLS support

Off

tls.verify

Force certificate validation

On

tls.debug

Set TLS debug verbosity level. It accepts the following values: 0 (No debug), 1 (Error), 2 (State change), 3 (Informational) and 4 (Verbose).

1

tls.ca_file

Absolute path to CA certificate file

hashtag
Command Line

hashtag
JSON format

We have specified to gather CPUarrow-up-right usage metrics and send them in JSON lines mode to a remote end-point using netcat service.

Run the following in a separate terminal; netcat will start listening for messages on TCP port 5170. Once it connects to Fluent Bit you should see the output in JSON format:

hashtag
Msgpack format

Repeat the JSON approach but using the msgpack output format.

We could send this to stdout but as it is a serialized format you would end up with strange output. This should really be handled by a msgpack receiver to unpack as per the details in the developer documentation herearrow-up-right. As an example we use the Python msgpack libraryarrow-up-right to deal with it:

Host

Target host where Fluent-Bit or Fluentd are listening for Forward messages.

Configuration Parameters

base_uri

Full address of New Relic API end-point. By default the value points to the US end-point.

If you want to use the EU end-point you can set this key to the following value:

api_key

Your key for data ingestion. The API key is also called the ingestion key; you can get more details on how to generate it in the official documentation .

From a configuration perspective, either an api_key or a license_key is required. New Relic suggests primarily using the api_key.

license_key

Optional authentication parameter for data ingestion.

Note that New Relic suggests using the api_key instead. You can read more about the License Key .

compress

Set the compression mechanism for the payload. This option allows two values: gzip (enabled by default) or false to disable compression.

gzip

workers

The number of workers to perform flush operations for this output.

0
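
If you want to use the EU end-point, a minimal sketch of the base_uri setting (assuming the nrlogs output used in the example below; the API key is a placeholder) would be:

[OUTPUT]
    name      nrlogs
    match     *
    api_key   YOUR_API_KEY_HERE
    base_uri  https://log-api.eu.newrelic.com/log/v1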

The following configuration example will emit a dummy example record and ingest it into New Relic. Copy and paste the following content in a file called newrelic.conf:

Run Fluent Bit with the new configuration file:

Fluent Bit output:

New Relicarrow-up-right
New Relic Sign Uparrow-up-right

IP address or hostname of the target Kafka REST Proxy server

127.0.0.1

Port

TCP port of the target Kafka REST Proxy server

8082

Topic

Set the Kafka topic

fluent-bit

Partition

Set the partition number (optional)

Message_Key

Set a message key (optional)

Time_Key

The Time_Key property defines the name of the field that holds the record timestamp.

@timestamp

Time_Key_Format

Defines the format of the timestamp.

%Y-%m-%dT%H:%M:%S

Include_Tag_Key

Append the Tag name to the final record.

Off

Tag_Key

If Include_Tag_Key is enabled, this property defines the key name for the tag.

_flb-key

Workers

The number of workers to perform flush operations for this output.

0

hashtag
TLS / SSL

The Kafka REST Proxy output plugin supports TLS/SSL. For more details about the available properties and general configuration, please refer to the TLS/SSL section.

hashtag
Getting Started

In order to insert records into a Kafka REST Proxy service, you can run the plugin from the command line or through the configuration file:

hashtag
Command Line

The kafka-rest plugin can read the parameters from the command line through the -p argument (property), e.g.:

hashtag
Configuration File

In your main configuration file append the following Input & Output sections:

Kafka REST Proxyarrow-up-right

Host

Kafka

The Kafka output plugin allows you to ingest your records into an Apache Kafkaarrow-up-right service. This plugin uses the official librdkafka C libraryarrow-up-right (built-in dependency).

hashtag
Configuration Parameters

Key
Description
default

format

Setting rdkafka.log.connection.close to false and rdkafka.request.required.acks to 1 are examples of recommended settings of librdkafka properties.
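
As a sketch, such properties can be passed through the rdkafka.{property} mechanism listed in the table above; the broker and topic values below are placeholders:

[OUTPUT]
    Name                          kafka
    Match                         *
    Brokers                       192.168.1.3:9092
    Topics                        test
    rdkafka.log.connection.close  false
    rdkafka.request.required.acks 1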

hashtag
Getting Started

In order to insert records into Apache Kafka, you can run the plugin from the command line or through the configuration file:

hashtag
Command Line

The kafka plugin can read parameters through the -p argument (property), e.g:

hashtag
Configuration File

In your main configuration file append the following Input & Output sections:

hashtag
Avro Support

Fluent Bit comes with support for Avro encoding for the out_kafka plugin. Avro support is optional and must be activated at build time by using a build definition with cmake: -DFLB_AVRO_ENCODER=On, such as in the following example which activates:

  • out_kafka with avro encoding

  • fluent-bit's prometheus metrics via an embedded http endpoint

  • debugging support

hashtag
Kafka Configuration File with Avro Encoding

This example Fluent Bit configuration tails Kubernetes logs, decorates the log lines with Kubernetes metadata via the kubernetes filter, and then sends the fully decorated log lines to a Kafka broker, encoded with a specific Avro schema.

hashtag
Kafka Configuration File with Raw format

This example Fluent Bit configuration file creates example records with the payloadkey and msgkey keys. The msgkey value is used as the Kafka message key, and the payloadkey value as the payload.

OpenTelemetry

An output plugin to submit Logs, Metrics, or Traces to an OpenTelemetry endpoint

The OpenTelemetry plugin allows you to take logs, metrics, and traces from Fluent Bit and submit them to an OpenTelemetry HTTP endpoint.

Important Note: At the moment only HTTP endpoints are supported.

Key
Description
Default

host

IP address or hostname of the target HTTP Server

127.0.0.1

http_user

Basic Auth Username

hashtag
Getting Started

The OpenTelemetry plugin works with logs, and only with metrics collected from one of the metric input plugins. In the following example, log records generated by the dummy plugin and the host metrics collected by the node exporter metrics plugin are exported by the OpenTelemetry output plugin.

WebSocket

The websocket output plugin allows you to flush your records into a WebSocket endpoint. For now the functionality is pretty basic: it issues an HTTP GET request to do the handshake, and then uses TCP connections to send the data records in either JSON or MessagePackarrow-up-right format.

hashtag
Configuration Parameters

Key
Description
default

hashtag
Getting Started

In order to insert records into a WebSocket server, you can run the plugin from the command line or through the configuration file:

hashtag
Command Line

The websocket plugin can read the parameters from the command line in two ways: through the -p argument (property) or by setting them directly through the service URI. The URI format is the following:

Using the format specified, you could start Fluent Bit through:

hashtag
Configuration File

In your main configuration file, append the following Input & Output sections:

The websocket plugin works with TCP keepalive mode; please refer to the networking section for details. Since websocket is a stateful plugin, it decides when to send the handshake to the server side, for example when the plugin has just started or after the connection with the server has been dropped. In general, the interval to initiate a new websocket handshake should be less than the keepalive interval. With that strategy, it can detect and resume websocket connections.

hashtag
Testing

hashtag
Configuration File

Once Fluent Bit is running, you can send some messages using netcat:

In Fluent Bit we should see the following output:

hashtag
Scenario Description

From the Fluent Bit log output, we see that once data has been ingested into Fluent Bit, the plugin performs the handshake. After a while, when no data or traffic is passing through, the TCP connection is aborted. Then, when another piece of data arrives, a retry for the websocket plugin is triggered, with another handshake and data flush.

There is another scenario: if the websocket server flaps, meaning it goes down and comes back up within a short time, Fluent Bit resumes the TCP connection immediately. In that case, however, the websocket output plugin ends up in a malfunctioning state, and Fluent Bit needs to be restarted to get back to work.

LogDNA

LogDNAarrow-up-right is an intuitive cloud-based log management system that provides an easy interface to query your logs once they are stored.

The Fluent Bit logdna output plugin allows you to send your logs or events to a LogDNAarrow-up-right compliant service like:

  • LogDNAarrow-up-right

  • IBM Log Analysisarrow-up-right

Before getting started with the plugin configuration, make sure to obtain the proper account to get access to the service. You can start with a free trial at the following link:

hashtag
Configuration Parameters

Key
Description
Default

hashtag
Auto Enrichment & Data Discovery

One of the features of Fluent Bit + LogDNA integration is the ability to auto enrich each record with further context.

When the plugin processes each record (or log), it tries to look up specific key names that might contain specific context for the record in question. The following table describes the keys and the discovery logic:

Key
Description

hashtag
Getting Started

The following configuration example will emit a dummy example record and ingest it into LogDNA. Copy and paste the following content in a file called logdna.conf:

Run Fluent Bit with the new configuration file:

Fluent Bit output:

Your record will be available and visible in your LogDNA dashboard after a few seconds.

hashtag
Query your Data in LogDNA

In your LogDNA dashboard, go to the top filters and mark the Tags aa and bb; you will then be able to see your records as in the example below:

Azure Data Explorer

Send logs to Azure Data Explorer (Kusto)

The Kusto output plugin allows you to ingest your logs into an Azure Data Explorerarrow-up-right cluster via the Queued Ingestionarrow-up-right mechanism. This output plugin can also be used to ingest logs into an Eventhousearrow-up-right cluster in Microsoft Fabric Real Time Analytics.

hashtag
For ingesting into Azure Data Explorer: Creating a Kusto Cluster and Database

You can create an Azure Data Explorer cluster in one of the following ways:

hashtag
For ingesting into Microsoft Fabric Real Time Analytics: Creating an Eventhouse Cluster and KQL Database

You can create an Eventhouse cluster and a KQL database by following these steps:

hashtag
Creating an Azure Registered Application

Fluent Bit will use the application's credentials to ingest data into your cluster.

hashtag
Creating a Table

Fluent Bit ingests the event data into Kusto in a JSON format that by default includes 3 properties:

  • log - the actual event payload.

  • tag - the event tag.

  • timestamp - the event timestamp.

A table with the expected schema must exist in order for data to be ingested properly.

hashtag
Optional - Creating an Ingestion Mapping

By default, Kusto will insert incoming ingestions into a table by inferring the mapped table columns from the payload properties. However, this mapping can be customized by creating a . The plugin can be configured to use an ingestion mapping via the ingestion_mapping_reference configuration key.
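
For illustration, assuming the FluentBit table created earlier on this page, a JSON ingestion mapping could be created with a Kusto command along these lines (the mapping name fluentbit_mapping is a placeholder, and the columns mirror the log, tag and timestamp properties described above):

.create table FluentBit ingestion json mapping "fluentbit_mapping" '[{"column":"log","path":"$.log"},{"column":"tag","path":"$.tag"},{"column":"timestamp","path":"$.timestamp"}]'

The mapping name can then be referenced through the ingestion_mapping_reference configuration key.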

hashtag
Configuration Parameters

Key
Description
Default

hashtag
Configuration File

Get started quickly with this configuration file:

hashtag
Troubleshooting

hashtag
403 Forbidden

If you get a 403 Forbidden error response, make sure that:

  • You provided the correct AAD registered application credentials.

  • You authorized the application to ingest into your database or table.

Azure Logs Ingestion API

Send logs to Azure Log Analytics using Logs Ingestion API with DCE and DCR

The Azure Logs Ingestion plugin allows you to ingest your records using the Logs Ingestion API in Azure Monitorarrow-up-right, into supported Azure tablesarrow-up-right or custom tablesarrow-up-right that you create.

The Logs ingestion API requires the following components:

  • A Data Collection Endpoint (DCE)

  • A Data Collection Rule (DCR) and

  • A Log Analytics Workspace

Note: According to , all resources should be in the same region.

To visualize basic Logs Ingestion operation, see the following image:

To get more details about how to setup these components, please refer to the following documentations:

hashtag
Configuration Parameters

Key
Description
Default

hashtag
Getting Started

To send records into Azure Log Analytics using the Logs Ingestion API, the following resources need to be created:

  • A Data Collection Endpoint (DCE) for ingestion

  • A Data Collection Rule (DCR) for data transformation

  • Either an Azure table or a custom table

  • An app registration with client secrets (for DCR access).

You can follow to set up the DCE, DCR, app registration and a custom table.

hashtag
Configuration File

Use this configuration to quickly get started:

Set up your DCR transformation accordingly based on the JSON output from Fluent Bit's pipeline (input, parser, filter, output).

Google Cloud BigQuery

The BigQuery output plugin is an experimental plugin that allows you to stream records into the Google Cloud BigQueryarrow-up-right service. The implementation does not support the following, which would be expected in a full production version:

  • Application Default Credentialsarrow-up-right.

  • Data deduplicationarrow-up-right using insertId.

  • Template tablesarrow-up-right using templateSuffix.

hashtag
Google Cloud Configuration

Fluent Bit streams data into an existing BigQuery table using a service account that you specify. Therefore, before using the BigQuery output plugin, you must create a service account, create a BigQuery dataset and table, authorize the service account to write to the table, and provide the service account credentials to Fluent Bit.

hashtag
Creating a Service Account

To stream data into BigQuery, the first step is to create a Google Cloud service account for Fluent Bit:

hashtag
Creating a BigQuery Dataset and Table

Fluent Bit does not create datasets or tables for your data, so you must create these ahead of time. You must also grant the service account WRITER permission on the dataset:

Within the dataset you will need to create a table for the data to reside in. You can follow these instructions for creating your table. Pay close attention to the schema: it must match the schema of your output JSON. Unfortunately, since BigQuery does not allow dots in field names, you will need to use a filter to rename the fields for many of the standard inputs (e.g., mem or cpu).
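
As a quick sketch (not taken from this guide), the bq command-line tool can be used to create the dataset and table; the project, dataset, table and schema file names below are placeholders, and the schema must match your output JSON:

$ bq mk --dataset my_project:fluentbit
$ bq mk --table my_project:fluentbit.logs ./table_schema.json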

hashtag
Retrieving Service Account Credentials

Fluent Bit BigQuery output plugin uses a JSON credentials file for authentication credentials. Download the credentials file by following these instructions:

hashtag
Workload Identity Federation

Using identity federation, you can grant on-premises or multi-cloud workloads access to Google Cloud resources without using a service account key. It can be used as a more secure alternative to service account credentials. Google Cloud's workload identity federation supports several identity providers (see documentation), but the Fluent Bit BigQuery plugin currently supports Amazon Web Services (AWS) only.

You must configure workload identity federation in GCP before using it with Fluent Bit.
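
A minimal sketch of the related parameters (every value below is a placeholder) might look like:

[OUTPUT]
    name                                bigquery
    match                               *
    enable_workload_identity_federation on
    aws_region                          us-east-1
    project_number                      123456789
    pool_id                             my-pool
    provider_id                         my-aws-provider
    google_service_account              fluent-bit@my-project.iam.gserviceaccount.com
    project_id                          my-project
    dataset_id                          my_dataset
    table_id                            my_table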

hashtag
Configuration Parameters

Key
Description
default

See Google's official documentation for further details.

hashtag
Configuration File

If you are using a Google Cloud Credentials File, the following configuration is enough to get you started:
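
A minimal sketch (the credentials path, project, dataset and table identifiers are placeholders) could be:

[INPUT]
    name  dummy
    tag   dummy

[OUTPUT]
    name                       bigquery
    match                      *
    google_service_credentials /path/to/service_account_credentials.json
    project_id                 my-project
    dataset_id                 my_dataset
    table_id                   my_table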

Prometheus Remote Write

An output plugin to submit Prometheus Metrics using the remote write protocol

The prometheus remote write plugin allows you to take metrics from Fluent Bit and submit them to a Prometheus server through the remote write mechanism.

Important Note: The Prometheus remote write plugin only works with metric plugins, such as Node Exporter Metrics.

Key
Description
Default
[INPUT]
    Name  dummy
    Tag   dummy

[OUTPUT]
    Name       chronicle
    Match      *
    customer_id my_customer_id
    log_type my_super_awesome_type
[OUTPUT]
    Name        datadog
    Match       *
    Host        http-intake.logs.datadoghq.com
    TLS         on
    compress    gzip
    apikey      <my-datadog-api-key>
    dd_service  <my-app-service>
    dd_source   <my-app-source>
    dd_tags     team:logs,foo:bar
[INPUT]
    name  dummy
    tag   events
    rate  2

[OUTPUT]
    name                   vivo_exporter
    match                  *
    empty_stream_on_read   off
    stream_queue_size      20M
    http_cors_allow_origin *
[INPUT]
    name  dummy
    tag   events
    rate  2

[OUTPUT]
    name   vivo_exporter
    match  *
curl -i http://127.0.0.1:2025/logs
HTTP/1.1 200 OK
Server: Monkey/1.7.0
Date: Tue, 21 Mar 2023 16:42:28 GMT
Transfer-Encoding: chunked
Content-Type: application/json
Vivo-Stream-Start-ID: 0
Vivo-Stream-End-ID: 3

[[1679416945459254000,{"_tag":"events"}],{"message":"dummy"}]
[[1679416945959398000,{"_tag":"events"}],{"message":"dummy"}]
[[1679416946459271000,{"_tag":"events"}],{"message":"dummy"}]
[[1679416946959943000,{"_tag":"events"}],{"message":"dummy"}]
[[1679416947459806000,{"_tag":"events"}],{"message":"dummy"}]
[[1679416947958777000,{"_tag":"events"}],{"message":"dummy"}]
[[1679416948459391000,{"_tag":"events"}],{"message":"dummy"}]
HTTP/1.1 200 OK
Server: Monkey/1.7.0
Date: Tue, 21 Mar 2023 16:45:05 GMT
Transfer-Encoding: chunked
Content-Type: application/json
Vivo-Stream-Start-ID: 1
Vivo-Stream-End-ID: 1

[[1679416945959398000,{"_tag":"events"}],{"message":"dummy"}]
[[1679416946459271000,{"_tag":"events"}],{"message":"dummy"}]
$ bin/fluent-bit -i cpu -o tcp://127.0.0.1:5170 -p format=json_lines -v
$ nc -l 5170
{"date":1644834856.905985,"cpu_p":1.1875,"user_p":0.5625,"system_p":0.625,"cpu0.p_cpu":0.0,"cpu0.p_user":0.0,"cpu0.p_system":0.0,"cpu1.p_cpu":1.0,"cpu1.p_user":1.0,"cpu1.p_system":0.0,"cpu2.p_cpu":4.0,"cpu2.p_user":2.0,"cpu2.p_system":2.0,"cpu3.p_cpu":1.0,"cpu3.p_user":0.0,"cpu3.p_system":1.0,"cpu4.p_cpu":1.0,"cpu4.p_user":0.0,"cpu4.p_system":1.0,"cpu5.p_cpu":1.0,"cpu5.p_user":1.0,"cpu5.p_system":0.0,"cpu6.p_cpu":0.0,"cpu6.p_user":0.0,"cpu6.p_system":0.0,"cpu7.p_cpu":3.0,"cpu7.p_user":1.0,"cpu7.p_system":2.0,"cpu8.p_cpu":0.0,"cpu8.p_user":0.0,"cpu8.p_system":0.0,"cpu9.p_cpu":1.0,"cpu9.p_user":0.0,"cpu9.p_system":1.0,"cpu10.p_cpu":1.0,"cpu10.p_user":0.0,"cpu10.p_system":1.0,"cpu11.p_cpu":0.0,"cpu11.p_user":0.0,"cpu11.p_system":0.0,"cpu12.p_cpu":0.0,"cpu12.p_user":0.0,"cpu12.p_system":0.0,"cpu13.p_cpu":3.0,"cpu13.p_user":2.0,"cpu13.p_system":1.0,"cpu14.p_cpu":1.0,"cpu14.p_user":1.0,"cpu14.p_system":0.0,"cpu15.p_cpu":0.0,"cpu15.p_user":0.0,"cpu15.p_system":0.0}
$ bin/fluent-bit -i cpu -o tcp://127.0.0.1:5170 -p format=msgpack -v
#Python3
import socket
import msgpack

unpacker = msgpack.Unpacker(use_list=False, raw=False)
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind(("127.0.0.1", 5170))
s.listen(1)
connection, address = s.accept()

while True:
    data = connection.recv(1024)
    if not data:
        break
    unpacker.feed(data)
    for unpacked in unpacker:
        print(unpacked)
$ pip install msgpack
$ python3 test.py
(ExtType(code=0, data=b'b\n5\xc65\x05\x14\xac'), {'cpu_p': 0.1875, 'user_p': 0.125, 'system_p': 0.0625, 'cpu0.p_cpu': 0.0, 'cpu0.p_user': 0.0, 'cpu0.p_system': 0.0, 'cpu1.p_cpu': 0.0, 'cpu1.p_user': 0.0, 'cpu1.p_system': 0.0, 'cpu2.p_cpu': 1.0, 'cpu2.p_user': 0.0, 'cpu2.p_system': 1.0, 'cpu3.p_cpu': 0.0, 'cpu3.p_user': 0.0, 'cpu3.p_system': 0.0, 'cpu4.p_cpu': 0.0, 'cpu4.p_user': 0.0, 'cpu4.p_system': 0.0, 'cpu5.p_cpu': 0.0, 'cpu5.p_user': 0.0, 'cpu5.p_system': 0.0, 'cpu6.p_cpu': 0.0, 'cpu6.p_user': 0.0, 'cpu6.p_system': 0.0, 'cpu7.p_cpu': 0.0, 'cpu7.p_user': 0.0, 'cpu7.p_system': 0.0, 'cpu8.p_cpu': 0.0, 'cpu8.p_user': 0.0, 'cpu8.p_system': 0.0, 'cpu9.p_cpu': 1.0, 'cpu9.p_user': 1.0, 'cpu9.p_system': 0.0, 'cpu10.p_cpu': 0.0, 'cpu10.p_user': 0.0, 'cpu10.p_system': 0.0, 'cpu11.p_cpu': 0.0, 'cpu11.p_user': 0.0, 'cpu11.p_system': 0.0, 'cpu12.p_cpu': 0.0, 'cpu12.p_user': 0.0, 'cpu12.p_system': 0.0, 'cpu13.p_cpu': 0.0, 'cpu13.p_user': 0.0, 'cpu13.p_system': 0.0, 'cpu14.p_cpu': 0.0, 'cpu14.p_user': 0.0, 'cpu14.p_system': 0.0, 'cpu15.p_cpu': 0.0, 'cpu15.p_user': 0.0, 'cpu15.p_system': 0.0})
[SERVICE]
    flush     1
    log_level info

[INPUT]
    name      dummy
    dummy     {"message":"a simple message", "temp": "0.74", "extra": "false"}
    samples   1

[OUTPUT]
    name      nrlogs
    match     *
    api_key   YOUR_API_KEY_HERE
$ fluent-bit -c newrelic.conf
Fluent Bit v1.5.0
* Copyright (C) 2019-2020 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2020/04/10 10:58:32] [ info] [storage] version=1.0.3, initializing...
[2020/04/10 10:58:32] [ info] [storage] in-memory
[2020/04/10 10:58:32] [ info] [storage] normal synchronization mode, checksum disabled, max_chunks_up=128
[2020/04/10 10:58:32] [ info] [engine] started (pid=2772591)
[2020/04/10 10:58:32] [ info] [output:newrelic:newrelic.0] configured, hostname=log-api.newrelic.com:443
[2020/04/10 10:58:32] [ info] [sp] stream processor started
[2020/04/10 10:58:35] [ info] [output:nrlogs:nrlogs.0] log-api.newrelic.com:443, HTTP status=202
{"requestId":"feb312fe-004e-b000-0000-0171650764ac"}
$ fluent-bit -i cpu -t cpu -o kafka-rest -p host=127.0.0.1 -p port=8082 -m '*'
[INPUT]
    Name  cpu
    Tag   cpu

[OUTPUT]
    Name        kafka-rest
    Match       *
    Host        127.0.0.1
    Port        8082
    Topic       fluent-bit
    Message_Key my_key
specific log types onlyarrow-up-right
workers
Datadog API keyarrow-up-right
http://host:portarrow-up-right
Service Remapperarrow-up-right
ddsource attributearrow-up-right
tagsarrow-up-right
`ddtags' attributearrow-up-right
workers
workers

tls.crt_file

Absolute path to Certificate file.

tls.key_file

Absolute path to private Key file.

tls.key_passwd

Optional password for tls.key_file file.

workers
https://log-api.eu.newrelic.com/log/v1arrow-up-right
https://log-api.newrelic.com/log/v1arrow-up-right
herearrow-up-right
herearrow-up-right
workers
workers
builds the test suites

Specify data format, options available: json, msgpack, raw.

json

message_key

Optional key to store the message

message_key_field

If set, the value of Message_Key_Field in the record will indicate the message key. If not set nor found in the record, Message_Key will be used (if set).

timestamp_key

Set the key to store the record timestamp

@timestamp

timestamp_format

Specify timestamp format, should be 'double', 'iso8601arrow-up-right' (seconds precision) or 'iso8601_ns' (fractional seconds precision)

double

brokers

Single or multiple list of Kafka Brokers, e.g: 192.168.1.3:9092, 192.168.1.4:9092.

topics

Single entry or list of topics separated by comma (,) that Fluent Bit will use to send messages to Kafka. If only one topic is set, that one will be used for all records. Instead, if multiple topics exist, the one set in the record by Topic_Key will be used.

fluent-bit

topic_key

If multiple Topics exist, the value of Topic_Key in the record will indicate the topic to use. E.g.: if Topic_Key is router and the record is {"key1": 123, "router": "route_2"}, Fluent Bit will use the topic route_2. Note that if the value of Topic_Key is not present in Topics, then by default the first topic in the Topics list will be used.

dynamic_topic

Adds unknown topics (found in Topic_Key) to Topics, so only a default topic needs to be configured in Topics.

Off

queue_full_retries

Fluent Bit queues data into the rdkafka library; if for some reason the underlying library cannot flush the records, the queue might fill up, blocking new addition of records. The queue_full_retries option sets the number of local retries to enqueue the data. The default value is 10 times, and the interval between each retry is 1 second. Setting queue_full_retries to 0 sets an unlimited number of retries.

10

rdkafka.{property}

{property} can be any librdkafka propertiesarrow-up-right

raw_log_key

When using the raw format, if set, the value of raw_log_key in the record will be sent to Kafka as the payload.

workers

The number of workers to perform flush operations for this output.

0

http_passwd

Basic Auth Password. Requires HTTP_user to be set

port

TCP port of the target HTTP Server

80

proxy

Specify an HTTP Proxy. The expected format of this value is http://HOST:PORT. Note that HTTPS is not currently supported. It is recommended not to set this and to configure the HTTP proxy environment variablesarrow-up-right instead as they support both HTTP and HTTPS.

metrics_uri

Specify an optional HTTP URI for the target web server listening for metrics, e.g: /v1/metrics

/

logs_uri

Specify an optional HTTP URI for the target web server listening for logs, e.g: /v1/logs

/

traces_uri

Specify an optional HTTP URI for the target web server listening for traces, e.g: /v1/traces

/

header

Add a HTTP header key/value pair. Multiple headers can be set.

log_response_payload

Log the response payload within the Fluent Bit log

false

logs_body_key

The log body key to look up in the log events body/message. Sets the Body field of the OpenTelemetry logs data model.

message

logs_trace_id_message_key

The trace id key to look up in the log events body/message. Sets the TraceId field of the OpenTelemetry logs data model.

traceId

logs_span_id_message_key

The span id key to look up in the log events body/message. Sets the SpanId field of the OpenTelemetry logs data model.

spanId

logs_severity_text_message_key

The severity text key to look up in the log events body/message. Sets the SeverityText field of the OpenTelemetry logs data model.

severityText

logs_severity_number_message_key

The severity number key to look up in the log events body/message. Sets the SeverityNumber field of the OpenTelemetry logs data model.

severityNumber

add_label

This allows you to add custom labels to all metrics exposed through the OpenTelemetry exporter. You may have multiple of these fields

compress

Set payload compression mechanism. Option available is 'gzip'

logs_observed_timestamp_metadata_key

Specify an ObservedTimestamp key to look up in the metadata.

$ObservedKey

logs_timestamp_metadata_key

Specify a Timestamp key to look up in the metadata.

$Timestamp

logs_severity_key_metadata_key

Specify a SeverityText key to look up in the metadata.

$SeverityText

logs_severity_number_metadata_key

Specify a SeverityNumber key to look up in the metadata.

$SeverityNumber

logs_trace_flags_metadata_key

Specify a Flags key to look up in the metadata.

$Flags

logs_span_id_metadata_key

Specify a SpanId key to look up in the metadata.

$SpanId

logs_trace_id_metadata_key

Specify a TraceId key to look up in the metadata.

$TraceId

logs_attributes_metadata_key

Specify an Attributes key to look up in the metadata.

$Attributes

workers

The number of workers to perform flush operations for this output.

0

Host

IP address or hostname of the target WebSocket Server

127.0.0.1

Port

TCP port of the target WebSocket Server

80

URI

Specify an optional HTTP URI for the target websocket server, e.g: /something

/

Header

Add a HTTP header key/value pair. Multiple headers can be set.

Format

Specify the data format to be used in the HTTP request body, by default it uses msgpack. Other supported formats are json, json_stream and json_lines and gelf.

msgpack

json_date_key

Specify the name of the date field in output

date

json_date_format

Specify the format of the date. Supported formats are double, epoch, iso8601 (eg: 2018-05-30T09:39:52.000681Z) and java_sql_timestamp (eg: 2018-05-30 09:39:52.000681)

double

workers

The number of workers to perform flush operations for this output.

0

networkingarrow-up-right
Fluent Bitarrow-up-right
$ fluent-bit -i cpu -o kafka -p brokers=192.168.1.3:9092 -p topics=test
[INPUT]
    Name  cpu

[OUTPUT]
    Name        kafka
    Match       *
    Brokers     192.168.1.3:9092
    Topics      test
cmake -DFLB_DEV=On -DFLB_OUT_KAFKA=On -DFLB_TLS=On -DFLB_TESTS_RUNTIME=On -DFLB_TESTS_INTERNAL=On -DCMAKE_BUILD_TYPE=Debug -DFLB_HTTP_SERVER=true -DFLB_AVRO_ENCODER=On ../
[INPUT]
    Name              tail
    Tag               kube.*
    Alias             some-alias
    Path              /logdir/*.log
    DB                /dbdir/some.db
    Skip_Long_Lines   On
    Refresh_Interval  10
    Parser some-parser

[FILTER]
    Name                kubernetes
    Match               kube.*
    Kube_URL            https://some_kube_api:443
    Kube_CA_File        /certs/ca.crt
    Kube_Token_File     /tokens/token
    Kube_Tag_Prefix     kube.var.log.containers.
    Merge_Log           On
    Merge_Log_Key       log_processed

[OUTPUT]
    Name        kafka
    Match       *
    Brokers     192.168.1.3:9092
    Topics      test
    Schema_str  {"name":"avro_logging","type":"record","fields":[{"name":"timestamp","type":"string"},{"name":"stream","type":"string"},{"name":"log","type":"string"},{"name":"kubernetes","type":{"name":"krec","type":"record","fields":[{"name":"pod_name","type":"string"},{"name":"namespace_name","type":"string"},{"name":"pod_id","type":"string"},{"name":"labels","type":{"type":"map","values":"string"}},{"name":"annotations","type":{"type":"map","values":"string"}},{"name":"host","type":"string"},{"name":"container_name","type":"string"},{"name":"docker_id","type":"string"},{"name":"container_hash","type":"string"},{"name":"container_image","type":"string"}]}},{"name":"cluster_name","type":"string"},{"name":"fabric","type":"string"}]}
    Schema_id some_schema_id
    rdkafka.client.id some_client_id
    rdkafka.debug All
    rdkafka.enable.ssl.certificate.verification true

    rdkafka.ssl.certificate.location /certs/some.cert
    rdkafka.ssl.key.location /certs/some.key
    rdkafka.ssl.ca.location /certs/some-bundle.crt
    rdkafka.security.protocol ssl
    rdkafka.request.required.acks 1
    rdkafka.log.connection.close false

    Format avro
    rdkafka.log_level 7
    rdkafka.metadata.broker.list 192.168.1.3:9092
[INPUT]
    Name example
    Tag  example.data
    Dummy {"payloadkey":"Data to send to kafka", "msgkey": "Key to use in the message"}


[OUTPUT]
    Name        kafka
    Match       *
    Brokers     192.168.1.3:9092
    Topics      test
    Format      raw

    Raw_Log_Key       payloadkey
    Message_Key_Field msgkey
# Dummy Logs & traces with Node Exporter Metrics export using OpenTelemetry output plugin
# -------------------------------------------
# The following example collects host metrics on Linux and dummy logs & traces and delivers
# them through the OpenTelemetry plugin to a local collector :
#
[SERVICE]
    Flush                1
    Log_level            info

[INPUT]
    Name                 node_exporter_metrics
    Tag                  node_metrics
    Scrape_interval      2

[INPUT]
    Name                 dummy
    Tag                  dummy.log
    Rate                 3

[INPUT]
    Name                 event_type
    Type                 traces

[OUTPUT]
    Name                 opentelemetry
    Match                *
    Host                 localhost
    Port                 443
    Metrics_uri          /v1/metrics
    Logs_uri             /v1/logs
    Traces_uri           /v1/traces
    Log_response_payload True
    Tls                  On
    Tls.verify           Off
    logs_body_key $message
    logs_span_id_message_key span_id
    logs_trace_id_message_key trace_id
    logs_severity_text_message_key loglevel
    logs_severity_number_message_key lognum
    # add user-defined labels
    add_label            app fluent-bit
    add_label            color blue
http://host:port/something
$ fluent-bit -i cpu -t cpu -o websocket://192.168.2.3:80/something -m '*'
[INPUT]
    Name  cpu
    Tag   cpu

[OUTPUT]
    Name  websocket
    Match *
    Host  192.168.2.3
    Port  80
    URI   /something
    Format json
[INPUT]
    Name        tcp
    Listen      0.0.0.0
    Port        5170
    Format      json

[OUTPUT]
    Name           websocket
    Match          *
    Host           127.0.0.1
    Port           8080
    URI            /
    Format         json
    workers	   4
    net.keepalive               on
    net.keepalive_idle_timeout  30
$ echo '{"key 1": 123456789, "key 2": "abcdefg"}' | nc 127.0.0.1 5170; sleep 35; echo '{"key 1": 123456789, "key 2": "abcdefg"}' | nc 127.0.0.1 5170
bin/fluent-bit   -c ../conf/out_ws.conf
Fluent Bit v1.7.0
* Copyright (C) 2019-2020 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2021/02/05 22:17:09] [ info] [engine] started (pid=6056)
[2021/02/05 22:17:09] [ info] [storage] version=1.1.0, initializing...
[2021/02/05 22:17:09] [ info] [storage] in-memory
[2021/02/05 22:17:09] [ info] [storage] normal synchronization mode, checksum disabled, max_chunks_up=128
[2021/02/05 22:17:09] [ info] [input:tcp:tcp.0] listening on 0.0.0.0:5170
[2021/02/05 22:17:09] [ info] [out_ws] we have following parameter /, 127.0.0.1, 8080, 25
[2021/02/05 22:17:09] [ info] [output:websocket:websocket.0] worker #1 started
[2021/02/05 22:17:09] [ info] [output:websocket:websocket.0] worker #0 started
[2021/02/05 22:17:09] [ info] [sp] stream processor started
[2021/02/05 22:17:09] [ info] [output:websocket:websocket.0] worker #3 started
[2021/02/05 22:17:09] [ info] [output:websocket:websocket.0] worker #2 started
[2021/02/05 22:17:33] [ info] [out_ws] handshake for ws
[2021/02/05 22:18:08] [ warn] [engine] failed to flush chunk '6056-1612534687.673438119.flb', retry in 7 seconds: task_id=0, input=tcp.0 > output=websocket.0 (out_id=0)
[2021/02/05 22:18:15] [ info] [out_ws] handshake for ws
^C[2021/02/05 22:18:23] [engine] caught signal (SIGINT)
[2021/02/05 22:18:23] [ warn] [engine] service will stop in 5 seconds
[2021/02/05 22:18:27] [ info] [engine] service stopped
[2021/02/05 22:18:27] [ info] [output:websocket:websocket.0] thread worker #0 stopping...
[2021/02/05 22:18:27] [ info] [output:websocket:websocket.0] thread worker #0 stopped
[2021/02/05 22:18:27] [ info] [output:websocket:websocket.0] thread worker #1 stopping...
[2021/02/05 22:18:27] [ info] [output:websocket:websocket.0] thread worker #1 stopped
[2021/02/05 22:18:27] [ info] [output:websocket:websocket.0] thread worker #2 stopping...
[2021/02/05 22:18:27] [ info] [output:websocket:websocket.0] thread worker #2 stopped
[2021/02/05 22:18:27] [ info] [output:websocket:websocket.0] thread worker #3 stopping...
[2021/02/05 22:18:27] [ info] [output:websocket:websocket.0] thread worker #3 stopped
[2021/02/05 22:18:27] [ info] [out_ws] flb_ws_conf_destroy

mac

Mac address. This value is optional.

ip

IP address of the local hostname. This value is optional.

tags

A list of comma separated strings to group records in LogDNA and simplify the query with filters.

file

Optional name of a file being monitored. Note that this value is only set if the record does not contain a reference to it.

app

Name of the application. This value is auto discovered on each record, if not found, the default value is used.

Fluent Bit

workers

The number of workers to perform flush operations for this output.

`0`

logdna_host

LogDNA API host address

logs.logdna.com

logdna_port

LogDNA TCP Port

443

api_key

API key to get access to the service. This property is mandatory.

hostname

Name of the local machine or device where Fluent Bit is running.

When this value is not set, Fluent Bit looks up the hostname and auto-populates the value. If it cannot be found, an unknown value will be set instead.

level

If the record contains a key called level or severity, it will populate the context level key with that value. If not found, the context key is not set.

file

If the record contains a key called file, it will populate the context file with the value found; otherwise, if the plugin configuration provided a file property, that value will be used instead (see table above).

app

If the record contains a key called app, it will populate the context app with the value found; otherwise it will use the value set for app in the configuration property (see table above).

meta

If the record contains a key called meta, it will populate the context meta with the value found.

LogDNA Sign Uparrow-up-right

database_name

Required - The database name.

table_name

Required - The table name.

ingestion_mapping_reference

Optional - The name of an ingestion mapping that will be used to map the ingested payload into the table columns.

log_key

Key name of the log content.

log

include_tag_key

If enabled, a tag is appended to the output. The key name is set by the tag_key property.

On

tag_key

The key name of the tag. If include_tag_key is false, this property is ignored.

tag

include_time_key

If enabled, a timestamp is appended to the output. The key name is set by the time_key property.

On

time_key

The key name of the time. If include_time_key is false, this property is ignored.

timestamp

workers

The number of workers to perform flush operations for this output.

0

tenant_id

Required - The tenant/domain ID of the AAD registered application.

client_id

Required - The client ID of the AAD registered application.

client_secret

Required - The client secret of the AAD registered application (App Secretarrow-up-right).

ingestion_endpoint

Required - The cluster's ingestion endpoint, usually in the form https://ingest-cluster_name.region.kusto.windows.net

Create a free-tier clusterarrow-up-right
Create a fully-featured clusterarrow-up-right
Create an Eventhouse clusterarrow-up-right
Create a KQL databasearrow-up-right
Register an Applicationarrow-up-right
Add a client secretarrow-up-right
Authorize the app in your databasearrow-up-right
JSON ingestion mappingarrow-up-right

dcr_id

Required - Data Collection Rule (DCR) immutable ID (see to collect the immutable id)

table_name

Required - The name of the custom log table (include the _CL suffix as well if applicable)

time_key

Optional - Specify the key name where the timestamp will be stored.

@timestamp

time_generated

Optional - If enabled, will generate a timestamp and append it to JSON. The key name is set by the 'time_key' parameter.

true

compress

Optional - Enable HTTP payload gzip compression.

true

workers

The number of workers to perform flush operations for this output.

0

tenant_id

Required - The tenant ID of the AAD application.

client_id

Required - The client ID of the AAD application.

client_secret

Required - The client secret of the AAD application (App Secretarrow-up-right).

dce_url

Required - Data Collection Endpoint (DCE) URL.

this documentarrow-up-right
Azure Logs Ingestion APIarrow-up-right
Send data to Azure Monitor Logs with Logs ingestion API (setup DCE, DCR and Log Analytics)arrow-up-right
Azure tablesarrow-up-right
custom tablesarrow-up-right
this guidelinearrow-up-right

skip_invalid_rows

Insert all valid rows of a request, even if invalid rows exist. The default value is false, which causes the entire request to fail if any invalid rows exist.

Off

ignore_unknown_values

Accept rows that contain values that do not match the schema. The unknown values are ignored. Default is false, which treats unknown values as errors.

Off

enable_workload_identity_federation

Enables workload identity federation as an alternative authentication method. Cannot be used with service account credentials file or environment variable. AWS is the only identity provider currently supported.

Off

aws_region

Used to construct a regional endpoint for AWS STS to verify AWS credentials obtained by Fluent Bit. Regional endpoints are recommended by AWS.

project_number

GCP project number where the identity provider was created. Used to construct the full resource name of the identity provider.

pool_id

GCP workload identity pool where the identity provider was created. Used to construct the full resource name of the identity provider.

provider_id

GCP workload identity provider. Used to construct the full resource name of the identity provider. Currently only AWS accounts are supported.

google_service_account

Email address of the Google service account to impersonate. The workload identity provider must have permissions to impersonate this service account, and the service account must have permissions to access Google BigQuery resources (e.g. write access to tables)

workers

The number of workers to perform flush operations for this output.

0

google_service_credentials

Absolute path to a Google Cloud credentials JSON file.

Value of the environment variable $GOOGLE_SERVICE_CREDENTIALS

project_id

The project id containing the BigQuery dataset to stream into.

The value of the project_id in the credentials file

dataset_id

The dataset id of the BigQuery dataset to write into. This dataset must exist in your project.

table_id

The table id of the BigQuery table to write into. This table must exist in the specified dataset and the schema must match the output.

Creating a Google Cloud Service Accountarrow-up-right
Creating and using datasetsarrow-up-right
Creating and using tablesarrow-up-right
Creating and Managing Service Account Keysarrow-up-right
Workload Identity Federation overviewarrow-up-right
Configuring workload identity federationarrow-up-right
Obtaining short-lived credentials with identity federationarrow-up-right
official documentationarrow-up-right

http_passwd

Basic Auth Password. Requires HTTP_user to be set

AWS_Auth

Enable AWS SigV4 authentication

false

AWS_Service

For Amazon Managed Service for Prometheus, the service name is aps

aps

AWS_Region

Region of your Amazon Managed Service for Prometheus workspace

AWS_STS_Endpoint

Specify the custom sts endpoint to be used with STS API, used with the AWS_Role_ARN option, used by SigV4 authentication

AWS_Role_ARN

AWS IAM Role to assume, used by SigV4 authentication

AWS_External_ID

External ID for the AWS IAM Role specified with aws_role_arn, used by SigV4 authentication

port

TCP port of the target HTTP Server

80

proxy

Specify an HTTP Proxy. The expected format of this value is http://HOST:PORT. Note that HTTPS is not currently supported. It is recommended not to set this and to configure the HTTP proxy environment variables instead as they support both HTTP and HTTPS.

uri

Specify an optional HTTP URI for the target web server, e.g: /something

/

header

Add a HTTP header key/value pair. Multiple headers can be set.

log_response_payload

Log the response payload within the Fluent Bit log

false

add_label

This allows you to add custom labels to all metrics exposed through the prometheus exporter. You may have multiple of these fields

workers

The number of workers to perform flush operations for this output.

2

hashtag
Getting Started

The Prometheus remote write plugin only works with metrics collected by one of the metric input plugins. In the following example, host metrics are collected by the node exporter metrics plugin and then delivered by the Prometheus remote write output plugin.

hashtag
Examples

The following are examples of using Prometheus remote write with the hosted services below.

hashtag
Grafana Cloud

With Grafana Cloudarrow-up-right hosted metrics you will need to use the specific host that is mentioned as well as specify the HTTP username and password given within the Grafana Cloud page.
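
A sketch of what this could look like, using the parameters from the table above (the host, URI, username and API key are placeholders to be taken from your Grafana Cloud account page):

[OUTPUT]
    name        prometheus_remote_write
    match       *
    host        <your-grafana-cloud-prometheus-host>
    uri         <your-remote-write-path>
    port        443
    tls         on
    http_user   <your-grafana-cloud-username>
    http_passwd <your-grafana-cloud-api-key>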

hashtag
Logz.io Infrastructure Monitoring

With Logz.io hosted prometheusarrow-up-right you will need to make use of the header option and add the Authorization Bearer with the proper key. The host and port may also differ within your specific hosted instance.

hashtag
Coralogix

With Coralogix Metricsarrow-up-right you may need to customize the URI. Additionally, you will make use of the header key with the Coralogix private key.

hashtag
Levitate

With Levitatearrow-up-right, you must use the Levitate cluster-specific write URL and specify the HTTP username and password for the token created for your Levitate cluster.

hashtag
Add Prometheus like Labels

Ordinary prometheus clients add some of the labels as below:

The instance label can be emulated with add_label instance ${HOSTNAME}. Other labels can be added with the add_label <key> <value> setting.
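
For example, a short sketch of emulating typical Prometheus client labels inside the output section (the job value is an arbitrary example):

    add_label instance ${HOSTNAME}
    add_label job      fluent-bit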

host

IP address or hostname of the target HTTP Server

127.0.0.1

http_user

Basic Auth Username

InfluxDB

The influxdb output plugin allows you to flush your records into an InfluxDBarrow-up-right time series database. The following instructions assume that you have a fully operational InfluxDB service running in your system.

hashtag
Configuration Parameters

Key
Description
default

hashtag
TLS / SSL

The InfluxDB output plugin supports TLS/SSL. For more details about the available properties and general configuration, please refer to the section.

hashtag
Getting Started

In order to start inserting records into an InfluxDB service, you can run the plugin from the command line or through the configuration file:

hashtag
Command Line

The influxdb plugin can read the parameters from the command line in two ways: through the -p argument (property) or by setting them directly through the service URI. The URI format is the following:

Using the format specified, you could start Fluent Bit through:

hashtag
Configuration File

In your main configuration file append the following Input & Output sections:

hashtag
Tagging

Basic example of Tag_Keys usage:
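
A minimal sketch (this is an assumption rather than the original example: it supposes records that contain method and path fields, which matches the method tag queried in the "View tags" step below):

[INPUT]
    Name    http
    Host    0.0.0.0
    Port    8888

[OUTPUT]
    Name      influxdb
    Match     *
    Host      127.0.0.1
    Port      8086
    Database  fluentbit
    Tag_Keys  method path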

With Auto_Tags=On, this example would cause an error, because every parsed field value type is a string. This option is best used with metrics-like records where one or more field values are not string typed.

Basic example of Tags_List_Key usage:

hashtag
Testing

Before starting Fluent Bit, make sure the target database exists on InfluxDB. Using the above example, we will insert the data into a fluentbit database.

hashtag
1. Create database

Log into InfluxDB console:

Create the database:

Check the database exists:
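
A sketch of these three steps using the influx command-line client (assuming an InfluxDB 1.x installation and its InfluxQL shell):

$ influx
> CREATE DATABASE fluentbit
> SHOW DATABASES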

hashtag
2. Run Fluent Bit

The following command will gather CPU metrics from the system and send the data to the InfluxDB database every five seconds:
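
A sketch of such a command (the host and port are assumptions based on a local default installation; -f 5 sets the five-second flush interval):

$ fluent-bit -i cpu -t cpu -o influxdb -p host=127.0.0.1 -p port=8086 -p database=fluentbit -f 5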

Note that all records coming from the cpu input plugin have a tag cpu; this tag is used to generate the measurement in InfluxDB.

hashtag
3. Query the data

From InfluxDB console, choose your database:

Now query some specific fields:
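
For instance, assuming the field names produced by the cpu input shown earlier in this document, choosing the database and querying three fields could look like this in InfluxQL:

> USE fluentbit
> SELECT cpu_p, system_p, user_p FROM cpu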

The CPU input plugin gathers more metrics per CPU core; in the above example we just selected three specific metrics. The following query will give a full result:

hashtag
4. View tags

Query tagged keys:

And now query method key values:

PostgreSQL

PostgreSQLarrow-up-right is a very popular and versatile open source database management system that supports the SQL language and that is capable of storing both structured and unstructured data, such as JSON objects.

Given that Fluent Bit is designed to work with JSON objects, the pgsql output plugin allows users to send their data to a PostgreSQL database and store it using the JSONB type.

PostgreSQL 9.4 or higher is required.

hashtag
Preliminary steps

According to the parameters you have set in the configuration file, the plugin will create the table defined by the table option in the database defined by the database option hosted on the server defined by the host option. It will use the PostgreSQL user defined by the user option, which needs to have the right privileges to create such a table in that database.

NOTE: If you are not familiar with how PostgreSQL's users and grants system works, you might find useful reading the recommended links in the "References" section at the bottom.

A typical installation normally consists of a self-contained database for Fluent Bit in which you can store the output of one or more pipelines. Ultimately, it is your choice to store them in the same table, in separate tables, or even in separate databases based on several factors, including workload, scalability, data protection and security.

In this example, for the sake of simplicity, we use a single table called fluentbit in a database called fluentbit that is owned by the user fluentbit. Feel free to use different names. Preferably, for security reasons, do not use the postgres user (which has SUPERUSER privileges).

hashtag
Create the fluentbit user

Generate a robust random password (e.g. pwgen 20 1) and store it safely. Then, as the postgres system user on the server where PostgreSQL is installed, execute:
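
A sketch of this step with the createuser utility (-P prompts for the password):

$ createuser -P fluentbit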

At the prompt, please provide the password that you previously generated.

As a result, the user fluentbit without superuser privileges will be created.

If you prefer, instead of the createuser application, you can directly use the SQL command .

hashtag
Create the fluentbit database

As the postgres system user, please run:
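
For example, with the createdb utility (-O sets the owner):

$ createdb -O fluentbit fluentbit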

This will create a database called fluentbit owned by the fluentbit user. As a result, the fluentbit user will be able to safely create the data table.

Alternatively, you can use the SQL command .

hashtag
Connection

Make sure that the fluentbit user can connect to the fluentbit database on the specified target host. This might require you to properly configure the pg_hba.conf file.

hashtag
Configuration Parameters

Key
Description
Default

hashtag
Libpq

Fluent Bit relies on libpq, the PostgreSQL native client API, written in the C language. For this reason, default values might be affected by environment variables and compilation settings. The above table, in brackets, lists the most common default values for each connection option.

For security reasons, it is advised to follow the directives included in the section.

hashtag
Configuration Example

In your main configuration file add the following section:
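
A minimal sketch (host and password are placeholders; the keys follow the configuration parameters listed above):

[OUTPUT]
    Name      pgsql
    Match     *
    Host      172.17.0.2
    Port      5432
    User      fluentbit
    Password  YOUR_PASSWORD
    Database  fluentbit
    Table     fluentbit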

hashtag
The output table

The output plugin automatically creates a table with the name specified by the table configuration option and made up of the following fields:

  • tag TEXT

  • time TIMESTAMP WITHOUT TIME ZONE

  • data JSONB

As you can see, the timestamp does not contain any information about the time zone; it is therefore relative to the time zone used by the connection to PostgreSQL (the timezone setting).

For more information on the JSONB data type in PostgreSQL, please refer to the page in the official documentation, where you can find instructions on how to index or query the objects (including jsonpath introduced in PostgreSQL 12).

hashtag
Scalability

PostgreSQL 10 introduces support for declarative partitioning. In order to improve vertical scalability of the database, you can decide to partition your tables on time ranges (for example on a monthly basis). PostgreSQL also supports sub-partitions, allowing you to even partition your records by hash (version 11+), as well as default partitions (version 11+).

For more information on horizontal partitioning in PostgreSQL, please refer to the page in the official documentation.

If you are starting now, our recommendation at the moment is to choose the latest major version of PostgreSQL.

hashtag
There's more ...

PostgreSQL is a really powerful and extensible database engine. More expert users can indeed take advantage of BEFORE INSERT triggers on the main table and re-route records on normalised tables, depending on tags and content of the actual JSON objects.

For example, you can use Fluent Bit to send HTTP log records to the landing table defined in the configuration file. This table contains a BEFORE INSERT trigger (a function in the plpgsql language) that normalises the content of the JSON object and inserts the record in another table (with its own structure and partitioning model). This kind of trigger allows you to discard the record from the landing table by returning NULL.

hashtag
References

Here follows a list of useful resources from the PostgreSQL documentation:

GELF

GELF is the Graylogarrow-up-right Extended Log Format. The GELF output plugin allows you to send logs in GELF format directly to a Graylog input using the TLS, TCP or UDP protocols.

The following instructions assume that you have a fully operational Graylog server running in your environment.

hashtag
Configuration Parameters

According to the GELF Payload Specificationarrow-up-right, there are some mandatory and optional fields which are used by Graylog in GELF format. These fields are set with the Gelf_*_Key keys in this plugin.

Key
Description
default

hashtag
TLS / SSL

The GELF output plugin supports TLS/SSL. For more details about the available properties and general configuration, please refer to the section.

hashtag
Notes

  • If you're using Fluent Bit to collect Docker logs, note that Docker places your log in JSON under the key log. So you can set log as your Gelf_Short_Message_Key to send everything in Docker logs to Graylog. In this case, you need your log value to be a string, so don't parse it using a JSON parser.

  • The order of looking up the timestamp in this plugin is as follows:

hashtag
Configuration File Example

If you're using Fluent Bit for shipping Kubernetes logs, you can use something like this as your configuration file:
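
A minimal sketch of just the output section (it assumes a tail input tagged kube.* has already been configured, and the Graylog hostname is a placeholder; see the port and key notes below):

[OUTPUT]
    Name                    gelf
    Match                   kube.*
    Host                    graylog.example.com
    Port                    12201
    Mode                    tcp
    Gelf_Short_Message_Key  log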

By default, GELF TCP uses port 12201 and Docker places your logs in the /var/log/containers directory. The logs are placed in the value of the log key. For example, this is a log saved by Docker:

If you use the tail input plugin and a parser like the docker parser shown above, it decodes your message and extracts the data field (and any other present). This is how this log looks in Fluent Bit after decoding:

Now, this is what happens to this log:

  1. Fluent Bit GELF plugin adds "version": "1.1" to it.

  2. The Nest filter, with the lift operation shown above, un-nests the fields inside the log key. In our example, it puts data alongside stream and time.

  3. We used this data key as Gelf_Short_Message_Key, so the GELF plugin changes it to short_message.

  4. Kubernetes Filter adds the host name.

  5. Timestamp is generated.

  6. Any custom field (not present in the GELF Payload Specificationarrow-up-right) is prefixed by an underscore.

Finally, this is what our Graylog server input sees:

Oracle Log Analytics

Send logs to Oracle Cloud Infrastructure Logging Analytics Service

The Oracle Cloud Infrastructure Logging Analytics output plugin allows you to ingest your log records into the OCI Logging Analytics service.

Oracle Cloud Infrastructure Logging Analytics is a machine learning-based cloud service that monitors, aggregates, indexes, and analyzes all log data from on-premises and multicloud environments. It enables users to search, explore, and correlate this data to troubleshoot and resolve problems faster and to derive insights for better operational decisions.

For details about OCI Logging Analytics refer to https://docs.oracle.com/en-us/iaas/logging-analytics/index.html

hashtag
Configuration Parameters

Following are the top level configuration properties of the plugin:

[SERVICE]
    flush     1
    log_level info

[INPUT]
    name      dummy
    dummy     {"log":"a simple log message", "severity": "INFO", "meta": {"s1": 12345, "s2": true}, "app": "Fluent Bit"}
    samples   1

[OUTPUT]
    name      logdna
    match     *
    api_key   YOUR_API_KEY_HERE
    hostname  my-hostname
    ip        192.168.1.2
    mac       aa:bb:cc:dd:ee:ff
    tags      aa, bb
$ fluent-bit -c logdna.conf
Fluent Bit v1.5.0
* Copyright (C) 2019-2020 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2020/04/07 17:44:37] [ info] [storage] version=1.0.3, initializing...
[2020/04/07 17:44:37] [ info] [storage] in-memory
[2020/04/07 17:44:37] [ info] [storage] normal synchronization mode, checksum disabled, max_chunks_up=128
[2020/04/07 17:44:37] [ info] [engine] started (pid=2157706)
[2020/04/07 17:44:37] [ info] [output:logdna:logdna.0] configured, hostname=monox-fluent-bit-2
[2020/04/07 17:44:37] [ info] [sp] stream processor started
[2020/04/07 17:44:38] [ info] [output:logdna:logdna.0] logs.logdna.com:443, HTTP status=200
{"status":"ok","batchID":"f95849a8-ec6c-4775-9d52-30763604df9b:40710:ld72"}
.create table FluentBit (log:dynamic, tag:string, timestamp:datetime)
[OUTPUT]
    Match *
    Name azure_kusto
    Tenant_Id <app_tenant_id>
    Client_Id <app_client_id>
    Client_Secret <app_secret>
    Ingestion_Endpoint https://ingest-<cluster>.<region>.kusto.windows.net
    Database_Name <database_name>
    Table_Name <table_name>
    Ingestion_Mapping_Reference <mapping_name>
[INPUT]
    Name    tail
    Path    /path/to/your/sample.log
    Tag     sample
    Key     RawData
# Or use other input plugins
# [INPUT]
#     Name    cpu
#     Tag     sample

[FILTER]
    Name modify
    Match sample
    # Add a json key named "Application":"fb_log"
    Add Application fb_log

# Enable this section to see your json-log format
#[OUTPUT]
#    Name stdout
#    Match *
[OUTPUT]
    Name            azure_logs_ingestion
    Match           sample
    client_id       XXXXXXXX-xxxx-yyyy-zzzz-xxxxyyyyzzzzxyzz
    client_secret   some.secret.xxxzzz
    tenant_id       XXXXXXXX-xxxx-yyyy-zzzz-xxxxyyyyzzzzxyzz
    dce_url         https://log-analytics-dce-XXXX.region-code.ingest.monitor.azure.com
    dcr_id          dcr-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    table_name      ladcr_CL
    time_generated  true
    time_key        Time
    Compress        true
[INPUT]
    Name  dummy
    Tag   dummy

[OUTPUT]
    Name       bigquery
    Match      *
    dataset_id my_dataset
    table_id   dummy_table
# Node Exporter Metrics + Prometheus remote write output plugin
# -------------------------------------------
# The following example collects host metrics on Linux and delivers
# them through the Prometheus remote write plugin to New Relic:
#
[SERVICE]
    Flush                1
    Log_level            info

[INPUT]
    Name                 node_exporter_metrics
    Tag                  node_metrics
    Scrape_interval      2

[OUTPUT]
    Name                 prometheus_remote_write
    Match                node_metrics
    Host                 metric-api.newrelic.com
    Port                 443
    Uri                  /prometheus/v1/write?prometheus_server=YOUR_DATA_SOURCE_NAME
    Header               Authorization Bearer YOUR_LICENSE_KEY
    Log_response_payload True
    Tls                  On
    Tls.verify           On
    # add user-defined labels
    add_label            app fluent-bit
    add_label            color blue

# Note : it would be necessary to replace both YOUR_DATA_SOURCE_NAME and YOUR_LICENSE_KEY
# with real values for this example to work.
[OUTPUT]
    name prometheus_remote_write
    host prometheus-us-central1.grafana.net
    match *
    uri /api/prom/push
    port 443
    tls on
    tls.verify on
    http_user <GRAFANA Username>
    http_passwd <GRAFANA Password>
[OUTPUT]
    name prometheus_remote_write
    host listener.logz.io
    port 8053
    match *
    header Authorization Bearer <LOGZIO Key>
    tls on
    tls.verify on
    log_response_payload true
[OUTPUT]
    name prometheus_remote_write
    host metrics-api.coralogix.com
    uri prometheus/api/v1/write?appLabelName=path&subSystemLabelName=path&severityLabelName=severity
    match *
    port 443
    tls on
    tls.verify on
    header Authorization Bearer <CORALOGIX Key>
[OUTPUT]
    name prometheus_remote_write
    host app-tsdb.last9.io
    match *
    uri /v1/metrics/82xxxx/sender/org-slug/write
    port 443
    tls on
    tls.verify on
    http_user <Levitate Cluster Username>
    http_passwd <Levitate Cluster Password>
[OUTPUT]
    Name                 prometheus_remote_write
    Match                your.metric
    Host                 xxxxxxx.yyyyy.zzzz
    Port                 443
    Uri                  /api/v1/write
    Header               Authorization Bearer YOUR_LICENSE_KEY
    Log_response_payload True
    Tls                  On
    Tls.verify           On
    # add user-defined labels
    add_label instance ${HOSTNAME}
    add_label job fluent-bit

Host

IP address or hostname of the target InfluxDB service

127.0.0.1

Port

TCP port of the target InfluxDB service

8086

Database

InfluxDB database name where records will be inserted

fluentbit

Bucket

InfluxDB bucket name where records will be inserted - if specified, database is ignored and v2 of API is used

Org

InfluxDB organization name where the bucket is (v2 only)

fluent

Sequence_Tag

The name of the tag whose value is incremented for the consecutive simultaneous events.

_seq

HTTP_User

Optional username for HTTP Basic Authentication

HTTP_Passwd

Password for user defined in HTTP_User

HTTP_Token

Authentication token used with InfluxDB v2 - if specified, both HTTP_User and HTTP_Passwd are ignored

HTTP_Header

Add an HTTP header key/value pair. Multiple headers can be set

Tag_Keys

Space-separated list of keys that need to be tagged

Auto_Tags

Automatically tag keys where value is string. This option takes a boolean value: True/False, On/Off.

Off

Uri

Custom URI endpoint

Workers

The number of workers to perform flush operations for this output.

0

TLS/SSL
influxdb://host:port
$ fluent-bit -i cpu -t cpu -o influxdb://127.0.0.1:8086 -m '*'
[INPUT]
    Name  cpu
    Tag   cpu

[OUTPUT]
    Name          influxdb
    Match         *
    Host          127.0.0.1
    Port          8086
    Database      fluentbit
    Sequence_Tag  _seq
[INPUT]
    Name            tail
    Tag             apache.access
    parser          apache2
    path            /var/log/apache2/access.log

[OUTPUT]
    Name          influxdb
    Match         *
    Host          127.0.0.1
    Port          8086
    Database      fluentbit
    Sequence_Tag  _seq
    # make tags from method and path fields
    Tag_Keys      method path
[INPUT]
    Name              dummy
    # tagged fields: level, ID, businessObjectID, status
    Dummy             {"msg": "Transfer completed", "level": "info", "ID": "1234", "businessObjectID": "qwerty", "status": "OK", "tags": ["ID", "businessObjectID"]}

[OUTPUT]
    Name          influxdb
    Match         *
    Host          127.0.0.1
    Port          8086
    Bucket        My_Bucket
    Org           My_Org
    Sequence_Tag  _seq
    HTTP_Token    My_Token
    # tag all fields inside tags string array
    Tags_List_Enabled True
    Tags_List_Key tags
    # tag level, status fields
    Tag_Keys level status
$ influx
Visit https://enterprise.influxdata.com to register for updates, InfluxDB server management, and monitoring.
Connected to http://localhost:8086 version 1.1.0
InfluxDB shell version: 1.1.0
>
> create database fluentbit
>
> show databases
name: databases
name
----
_internal
fluentbit

>
$ bin/fluent-bit -i cpu -t cpu -o influxdb -m '*'
> use fluentbit
Using database fluentbit
> SELECT cpu_p, system_p, user_p FROM cpu
name: cpu
time                  cpu_p   system_p    user_p
----                  -----   --------    ------
1481132860000000000   2.75        0.5      2.25
1481132861000000000   2           0.5      1.5
1481132862000000000   4.75        1.5      3.25
1481132863000000000   6.75        1.25     5.5
1481132864000000000   11.25       3.75     7.5
> SELECT * FROM cpu
> SHOW TAG KEYS ON fluentbit FROM "apache.access"
name: apache.access
tagKey
------
_seq
method
path
> SHOW TAG VALUES ON fluentbit FROM "apache.access" WITH KEY = "method"
name: apache.access
key    value
---    -----
method "MATCH"
method "POST"

Database

Database name to connect to

- (current user)

Table

Table name where to store data

-

Connection_Options

Specifies any valid PostgreSQL connection options

-

Timestamp_Key

Key in the JSON object containing the record timestamp

date

Async

Define if we will use async or sync connections

false

min_pool_size

Minimum number of connections in async mode

1

max_pool_size

Maximum number of connections in async mode

4

cockroachdb

Set to true if you will connect the plugin to a CockroachDB database

false

workers

The number of workers to perform flush operations for this output.

0

  • The pg_hba.conf filearrow-up-right
  • JSON typesarrow-up-right

  • Date/Time functions and operatorsarrow-up-right

  • Table partitioningarrow-up-right

  • libpq - C API for PostgreSQLarrow-up-right

  • libpq - Environment variablesarrow-up-right

  • libpq - password filearrow-up-right

  • Trigger functionsarrow-up-right

  • Host

    Hostname/IP address of the PostgreSQL instance

    - (127.0.0.1)

    Port

    PostgreSQL port

    - (5432)

    User

    PostgreSQL username

    - (current user)

    Password

    Password of PostgreSQL username

    -


    short_message

    Gelf_Timestamp_Key

    Your log timestamp (SHOULD be set in GELF)

    timestamp

    Gelf_Host_Key

    Key whose value is used as the name of the host, source or application that sent this message. (MUST be set in GELF)

    host

    Gelf_Full_Message_Key

    Key to use as the long message, which can contain e.g. a backtrace. (Optional in GELF)

    full_message

    Gelf_Level_Key

    Key to be used as the log level. Its value must be a standard syslog level (between 0 and 7). (Optional in GELF)

    level

    Packet_Size

    If transport protocol is udp, you can set the size of packets to be sent.

    1420

    Compress

    If transport protocol is udp, you can set this if you want your UDP packets to be compressed.

    true

    Workers

    The number of workers to perform flush operations for this output.

    0

  1. Value of Gelf_Timestamp_Key provided in configuration

  2. Value of timestamp key

  3. If you're using the Docker JSON parser, it can parse the time and use it as the timestamp of the message. If all of the above fail, Fluent Bit tries to use the timestamp extracted by your parser.

  4. The timestamp is not set by Fluent Bit. In this case, your Graylog server will set it to the current timestamp (now).

  • Your log timestamp has to be in UNIX Epoch Timestamparrow-up-right format. If the Gelf_Timestamp_Key value of your log is not in this format, your Graylog server will ignore it.

  • If you're using Fluent Bit in Kubernetes and you're using Kubernetes Filter Plugin, this plugin adds host value to your log by default, and you don't need to add it by your own.

  • The version of GELF message is also mandatory and Fluent Bit sets it to 1.1 which is the current latest version of GELF.

  • If you use udp as transport protocol and set Compress to true, Fluent Bit compresses your packets in GZIP format, which is the default compression that Graylog offers. This can be used to trade more CPU load for saving network bandwidth.


  • Match

    Pattern to match which tags of records should be output by this plugin

    Host

    IP address or hostname of the target Graylog server

    127.0.0.1

    Port

    The port that your Graylog GELF input is listening on

    12201

    Mode

    The protocol to use (tls, tcp or udp)

    udp

    Gelf_Tag_Key

    Key to be used for tag. (Optional in GELF)

    Gelf_Short_Message_Key


    A short descriptive message (MUST be set in GELF)

    Key
    Description
    default

    config_file_location

    The location of the configuration file containing OCI authentication details. Reference for generating the configuration file - https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdkconfig.htm#SDK_and_CLI_Configuration_File

    profile_name

    OCI Config Profile Name to be used from the configuration file

    DEFAULT

    namespace

    OCI Tenancy Namespace in which the collected log data is to be uploaded

    proxy

    Define a proxy if required, in http://host:port format; only the http protocol is supported

    The following parameters are to set the Logging Analytics resources that must be used to process your logs by OCI Logging Analytics.

    Key
    Description
    default

    oci_config_in_record

    If set to true, the oci_la_* properties below will be read from the record itself instead of from the output plugin configuration.

    false

    oci_la_log_group_id

    The OCID of the Logging Analytics Log Group where the logs must be stored. This is a mandatory parameter

    oci_la_log_source_name

    The Logging Analytics Source that must be used to process the log records. This is a mandatory parameter

    oci_la_entity_id

    The OCID of the Logging Analytics Entity

    hashtag
    TLS/SSL

    OCI Logging Analytics output plugin supports TLS/SSL, for more details about the properties available and general configuration, please refer to the TLS/SSL section.

    hashtag
    Getting Started

    hashtag
    Prerequisites

    • OCI Logging Analytics service must be onboarded with the minimum required policies, in the OCI region where you want to monitor. Refer to Logging Analytics Quick Startarrow-up-right for details.

    • Create OCI Logging Analytics Log Group(s) if not done already. Refer to Create Log Grouparrow-up-right for details.

    hashtag
    Running the output plugin

    In order to insert records into the OCI Logging Analytics service, you can run the plugin from the command line or through the configuration file:

    hashtag
    Command line

    The OCI Logging Analytics plugin can read the parameters from the command line through the -p argument (property), e.g:

    hashtag
    Configuration file

    In your main configuration file append the following Input & Output sections:

    hashtag
    Insert oci_la configs in the record

    In case of multiple inputs, where oci_la_* properties can differ, you can add the properties in the record itself and instruct the plugin to read these properties from the record. The option oci_config_in_record, when set to true in the output config, will make the plugin read the mandatory and optional oci_la properties from the incoming record. The user must ensure that the necessary configs have been inserted using relevant filters, otherwise the respective chunk will be dropped. Below is an example to insert oci_la_log_source_name and oci_la_log_group_id in the record:

    hashtag
    Add optional metadata

    You can attach certain metadata to the log events collected from various inputs.

    The above configuration will generate a payload that looks like this

    The multiple oci_la_global_metadata and oci_la_metadata options are turned into a JSON object of key value pairs, nested under the key metadata.

    With oci_config_in_record option set to true, the metadata key-value pairs will need to be injected in the record as an object of key value pair nested under the respective metadata field. Below is an example of one such configuration

    The above configuration first injects the necessary metadata keys and values in the record directly, with a prefix olgm. attached to the keys in order to segregate the metadata keys from rest of the record keys. Then, using a nest filter only the metadata keys are selected by the filter and nested under oci_la_global_metadata key in the record, and the prefix olgm. is removed from the metadata keys.

    OCI Logging Analyticsarrow-up-right

    Slack

    The Slack output plugin delivers records or messages to your preferred Slack channel. It formats the outgoing content in JSON format for readability.

    This connector uses the Slack Incoming Webhooks feature to post messages to Slack channels. Using this plugin in conjunction with the Stream Processor is a good combination for alerting.

    hashtag
    Slack Webhook

    Before configuring this plugin, make sure to set up your Incoming Webhook. For detailed step-by-step instructions, review the following official documentation:

    Once you have obtained the Webhook address you can place it in the configuration below.

    hashtag
    Configuration Parameters

    Key
    Description
    Default

    hashtag
    Configuration File

    Get started quickly with this configuration file:

    HTTP

    The http output plugin allows you to flush your records into an HTTP endpoint. For now the functionality is pretty basic: it issues a POST request with the data records in MessagePack (or JSON) format.

    hashtag
    Configuration Parameters

    Key
    Description
    default

    Syslog

    The Syslog output plugin allows you to deliver messages to Syslog servers. It supports RFC3164 and RFC5424 formats through different transports such as UDP, TCP or TLS.

    As of Fluent Bit v1.5.3 the configuration is very strict. You must be aware of the structure of your original record so you can configure the plugin to use specific keys to compose your outgoing Syslog message.

    Future versions of Fluent Bit are expanding this plugin feature set to support better handling of keys and message composing.

    hashtag
    Configuration Parameters

    Key
    createuser -P fluentbit
    createdb -O fluentbit fluentbit
    [OUTPUT]
        Name                pgsql
        Match               *
        Host                172.17.0.2
        Port                5432
        User                fluentbit
        Password            YourCrazySecurePassword
        Database            fluentbit
        Table               fluentbit
        Connection_Options  -c statement_timeout=0
        Timestamp_Key       ts
    [INPUT]
        Name                    tail
        Tag                     kube.*
        Path                    /var/log/containers/*.log
        Parser                  docker
        DB                      /var/log/flb_kube.db
        Mem_Buf_Limit           5MB
        Refresh_Interval        10
    
    [FILTER]
        Name                    kubernetes
        Match                   kube.*
        Merge_Log_Key           log
        Merge_Log               On
        Keep_Log                Off
        Annotations             Off
        Labels                  Off
    
    [FILTER]
        Name                    nest
        Match                   *
        Operation               lift
        Nested_under            log
    
    [OUTPUT]
        Name                    gelf
        Match                   kube.*
        Host                    <your-graylog-server>
        Port                    12201
        Mode                    tcp
        Gelf_Short_Message_Key  data
    
    [PARSER]
        Name                    docker
        Format                  json
        Time_Key                time
        Time_Format             %Y-%m-%dT%H:%M:%S.%L
        Time_Keep               Off
    {"log":"{\"data\": \"This is an example.\"}","stream":"stderr","time":"2019-07-21T12:45:11.273315023Z"}
    [0] kube.log: [1565770310.000198491, {"log"=>{"data"=>"This is an example."}, "stream"=>"stderr", "time"=>"2019-07-21T12:45:11.273315023Z"}]
    {"version":"1.1", "short_message":"This is an example.", "host": "<Your Node Name>", "_stream":"stderr", "timestamp":1565770310.000199}
    $ fluent-bit -i dummy -t dummy -o oci_logan -p config_file_location=<location> -p namespace=<namespace> \
      -p oci_la_log_group_id=<lg_id> -p oci_la_log_source_name=<ls_name> -p tls=on -p tls.verify=off -m '*'
    [INPUT]
        Name dummy
        Tag dummy
    [Output]
        Name oracle_log_analytics
        Match *
        Namespace <namespace>
        config_file_location <location>
        profile_name ADMIN
        oci_la_log_source_name <log-source-name>
        oci_la_log_group_id <log-group-ocid>
        tls On
        tls.verify Off
    [INPUT]
        Name dummy
        Tag dummy
    
    [Filter]
        Name modify
        Match *
        Add oci_la_log_source_name <LOG_SOURCE_NAME>
        Add oci_la_log_group_id <LOG_GROUP_OCID>
    
    [Output]
        Name oracle_log_analytics
        Match *
        config_file_location <oci_file_path>
        profile_name ADMIN
        oci_config_in_record true
        tls On
        tls.verify Off
    [INPUT]
        Name dummy
        Tag dummy
    
    [Output]
        Name oracle_log_analytics
        Match *
        Namespace example_namespace
        config_file_location /Users/example_file_location
        profile_name ADMIN
        oci_la_log_source_name example_log_source
        oci_la_log_group_id ocid.xxxxxx
        oci_la_global_metadata glob_key1 value1
        oci_la_global_metadata glob_key2 value2
        oci_la_metadata key1 value1
        oci_la_metadata key2 value2
        tls On
        tls.verify Off
    {
      "metadata": {
        "glob_key1": "value1",
        "glob_key2": "value2"
      },
      "logEvents": [
        {
          "metadata": {
            "key1": "value1",
            "key2": "value2"
          },
          "logSourceName": "example_log_source",
          "logRecords": [
            "dummy"
          ]
        }
      ]
    }
    [INPUT]
        Name dummy
        Tag dummy
    
    [FILTER]
        Name Modify
        Match *
        Add olgm.key1 val1
        Add olgm.key2 val2
    
    [FILTER]
        Name nest
        Match *
        Operation nest
        Wildcard olgm.*
        Nest_under oci_la_global_metadata
        Remove_prefix olgm.
    
    [Filter]
        Name modify
        Match *
        Add oci_la_log_source_name <LOG_SOURCE_NAME>
        Add oci_la_log_group_id <LOG_GROUP_OCID>
    
    [Output]
        Name oracle_log_analytics
        Match *
        config_file_location <oci_file_path>
        namespace <oci_tenancy_namespace>
        profile_name ADMIN
        oci_config_in_record true
        tls On
        tls.verify Off

    workers

    The number of workers to perform flush operations for this output.

    1

    oci_la_entity_type

    The entity type of the Logging Analytics Entity

    oci_la_log_path

    Specify the original location of the log files

    oci_la_global_metadata

    Use this parameter to specify additional global metadata along with original log content to Logging Analytics. The format is 'key_name value'. This option can be set multiple times

    oci_la_metadata

    Use this parameter to specify additional metadata for a log event along with original log content to Logging Analytics. The format is 'key_name value'. This option can be set multiple times


    webhook

    Absolute address of the Webhook provided by Slack

    workers

    The number of workers to perform flush operations for this output.

    0

    https://api.slack.com/messaging/webhooks#getting_startedarrow-up-right
    [OUTPUT]
        name                 slack
        match                *
        webhook              https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX

    IP address or hostname of the target HTTP Server

    127.0.0.1

    http_User

    Basic Auth Username

    http_Passwd

    Basic Auth Password. Requires HTTP_User to be set

    AWS_Auth

    Enable AWS SigV4 authentication

    false

    AWS_Service

    Specify the AWS service code, i.e. es, xray, etc., of your service, used by SigV4 authentication. Usually can be found in the service endpoint's subdomains, protocol://service-code.region-code.amazonaws.com

    AWS_Region

    Specify the AWS region of your service, used by SigV4 authentication

    AWS_STS_Endpoint

    Specify the custom sts endpoint to be used with STS API, used with the AWS_Role_ARN option, used by SigV4 authentication

    AWS_Role_ARN

    AWS IAM Role to assume, used by SigV4 authentication

    AWS_External_ID

    External ID for the AWS IAM Role specified with aws_role_arn, used by SigV4 authentication

    port

    TCP port of the target HTTP Server

    80

    Proxy

    Specify an HTTP Proxy. The expected format of this value is http://HOST:PORT. Note that HTTPS is not currently supported. It is recommended not to set this and to configure the HTTP proxy environment variables instead, as they support both HTTP and HTTPS.

    uri

    Specify an optional HTTP URI for the target web server, e.g: /something

    /

    compress

    Set payload compression mechanism. Option available is 'gzip'

    format

    Specify the data format to be used in the HTTP request body; by default it uses msgpack. Other supported formats are json, json_stream, json_lines and gelf.

    msgpack

    allow_duplicated_headers

    Specify if duplicated headers are allowed. If a duplicated header is found, the latest key/value set is preserved.

    true

    log_response_payload

    Specify if the response payload should be logged or not.

    true

    header_tag

    Specify an optional HTTP header field for the original message tag.

    header

    Add an HTTP header key/value pair. Multiple headers can be set.

    json_date_key

    Specify the name of the time key in the output record. To disable the time key just set the value to false.

    date

    json_date_format

    Specify the format of the date. Supported formats are double, epoch, iso8601 (eg: 2018-05-30T09:39:52.000681Z) and java_sql_timestamp (eg: 2018-05-30 09:39:52.000681)

    double

    gelf_timestamp_key

    Specify the key to use for timestamp in gelf format

    gelf_host_key

    Specify the key to use for the host in gelf format

    gelf_short_message_key

    Specify the key to use as the short message in gelf format

    gelf_full_message_key

    Specify the key to use for the full message in gelf format

    gelf_level_key

    Specify the key to use for the level in gelf format

    body_key

    Specify the key to use as the body of the request (must prefix with "$"). The key must contain either a binary or raw string, and the content type can be specified using headers_key (which must be passed whenever body_key is present). When this option is present, each msgpack record will create a separate request.

    headers_key

    Specify the key to use as the headers of the request (must prefix with "$"). The key must contain a map, which will have the contents merged on the request headers. This can be used for many purposes, such as specifying the content-type of the data contained in body_key.

    workers

    The number of workers to perform flush operations for this output.

    2

    hashtag
    TLS / SSL

    The HTTP output plugin supports TLS/SSL; for more details about the properties available and general configuration, please refer to the TLS/SSL section.

    hashtag
    Getting Started

    In order to insert records into an HTTP server, you can run the plugin from the command line or through the configuration file:

    hashtag
    Command Line

    The http plugin can read the parameters from the command line in two ways: through the -p argument (property) or by setting them directly through the service URI. The URI format is the following:

    Using the format specified, you could start Fluent Bit through:

    hashtag
    Configuration File

    In your main configuration file, append the following Input & Output sections:

    By default, the URI becomes the tag of the message and the original tag is ignored. To retain the tag, multiple configuration sections have to be defined and flushed to different URIs.

    Another supported approach is sending the original message tag in a configurable header. It's up to the receiver to decide what to do with that header field: parse it and use it as the tag, for example.

    To configure this behaviour, add this config:

    Provided you are using Fluentd as data receiver, you can combine in_http and out_rewrite_tag_filter to make use of this HTTP header.

    Notice how we override the tag, which comes from the URI path, with our custom header:

    hashtag
    Example : Add a header

    hashtag
    Example : Sumo Logic HTTP Collector

    Suggested configuration for Sumo Logic using json_lines with iso8601 timestamps. The PrivateKey is specific to a configured HTTP collector.

    A sample Sumo Logic query for the CPU input. (Requires json_lines format with iso8601 date format for the timestamp field).


    host

    Description
    Default

    host

    Domain or IP address of the remote Syslog server.

    127.0.0.1

    port

    TCP or UDP port of the remote Syslog server.

    514

    mode

    Desired transport type. Available options are tcp and udp.

    udp

    syslog_format

    The Syslog protocol format to use. Available options are rfc3164 and rfc5424.

    rfc5424

    syslog_maxsize

    The maximum size allowed per message. The value must be an integer representing the number of bytes allowed. If no value is provided, the default size is set depending on the protocol version specified by syslog_format: rfc3164 sets the max size to 1024 bytes, while rfc5424 sets it to 2048 bytes.

    syslog_severity_key

    hashtag
    TLS / SSL

    The Syslog output plugin supports TLS/SSL. For more details about the properties available and general configuration, please refer to the TLS/SSL section.

    hashtag
    Examples

    hashtag
    Configuration File

    Get started quickly with this configuration file:

    hashtag
    Structured Data

    The following is an example of how to configure the syslog_sd_key to send Structured Data to the remote Syslog server.

    Example log:

    Example configuration file:

    Example output:

    hashtag
    Adding Structured Data Authentication Token

    Some services use the structured data field to pass authentication tokens (e.g. [<token>@41018]), which would need to be added to each log message dynamically. However, this requires setting the token as a key rather than as a value. Here's an example of how that might be achieved, using AUTH_TOKEN as a variable:

    Forward

    Forward is the protocol used by Fluentdarrow-up-right to route messages between peers. The forward output plugin provides interoperability between Fluent Bitarrow-up-right and Fluentdarrow-up-right. There are no configuration steps required besides specifying where Fluentdarrow-up-right is located, which can be a local or a remote destination.

    This plugin offers two different transports and modes:

    • Forward (TCP): It uses a plain TCP connection.

    • Secure Forward (TLS): when TLS is enabled, the plugin switches to Secure Forward mode.

    hashtag
    Configuration Parameters

    The following parameters are mandatory for either Forward or Secure Forward mode:

    Key
    Description
    Default

    hashtag
    Secure Forward Mode Configuration Parameters

    When using Secure Forward mode, TLS must be enabled. The following additional configuration parameters are available:

    Key
    Description
    Default

    hashtag
    Forward Setup

    Before proceeding, make sure that Fluentd is installed. If it's not, please refer to the Fluentd Installation document and go ahead with that.

    Once Fluentd is installed, create the following configuration file example that will allow us to stream data into it:

    That configuration file specifies that it will listen for TCP connections on port 24224 through the forward input type. Then, for every message with a fluent_bit tag, Fluentd will print the message to the standard output.

    In one terminal, launch Fluentd specifying the new configuration file created:

    hashtag
    Fluent Bit + Forward Setup

    Now that Fluentd is ready to receive messages, we need to specify where the forward output plugin will flush the information using the following format:

    If the TAG parameter is not set, the plugin will retain the tag. Keep in mind that TAG is important for routing rules inside Fluentd.

    Using the CPU input plugin as an example, we will flush CPU metrics to Fluentd with the tag fluent_bit:
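    A minimal sketch of that setup, assuming Fluentd is listening on 127.0.0.1:24224:

    [INPUT]
        Name  cpu
        Tag   fluent_bit

    [OUTPUT]
        Name  forward
        Match fluent_bit
        Host  127.0.0.1
        Port  24224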

    Now, on the Fluentd side, you will see the CPU metrics gathered in the last seconds:

    So we gathered CPU metrics and flushed them out to Fluentd properly.

    hashtag
    Fluent Bit + Secure Forward Setup

    DISCLAIMER: the following example does not cover the generation of certificates following best practices for production environments.

    Secure Forward aims to provide a secure channel of communication with the remote Fluentd service using TLS.

    hashtag
    Fluent Bit

    Paste this content in a file called flb.conf:
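    A minimal sketch of what flb.conf could contain, assuming Fluentd listens on 127.0.0.1:24224 and that the shared key and hostname shown are placeholders:

    [SERVICE]
        Flush     5
        Log_Level info

    [INPUT]
        Name cpu
        Tag  fluent_bit

    [OUTPUT]
        Name          forward
        Match         fluent_bit
        Host          127.0.0.1
        Port          24224
        Shared_Key    secret
        Self_Hostname flb.local
        tls           on
        tls.verify    off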

    hashtag
    Fluentd

    Paste this content in a file called fld.conf:

    If you're using Fluentd v1, set it up as below:

    hashtag
    Test Communication

    Start Fluentd:

    Start Fluent Bit:

    After five seconds, Fluent Bit will write records to Fluentd. In Fluentd output you will see a message like this:

    Splunk

    Send logs to Splunk HTTP Event Collector

    The Splunk output plugin allows you to ingest your records into a Splunk Enterprisearrow-up-right service through the HTTP Event Collector (HEC) interface.

    To get more details about how to set up the HEC in Splunk, please refer to the following documentation: Splunk / Use the HTTP Event Collectorarrow-up-right

    hashtag
    Configuration Parameters

    Connectivity, transport and authentication configuration properties:

    Key
    Description
    default

    Content and Splunk metadata (fields) handling configuration properties:

    Key
    Description
    default

    hashtag
    TLS / SSL

    The Splunk output plugin supports TLS/SSL; for more details about the properties available and general configuration, please refer to the TLS/SSL section.

    hashtag
    Getting Started

    In order to insert records into a Splunk service, you can run the plugin from the command line or through the configuration file:

    hashtag
    Command Line

    The splunk plugin can read the parameters from the command line through the -p argument (property), e.g:

    hashtag
    Configuration File

    In your main configuration file append the following Input & Output sections:
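    A minimal sketch, assuming the Splunk HEC is reachable on 127.0.0.1:8088 and YOUR_HEC_TOKEN is a placeholder:

    [INPUT]
        Name cpu
        Tag  cpu

    [OUTPUT]
        Name         splunk
        Match        *
        Host         127.0.0.1
        Port         8088
        Splunk_Token YOUR_HEC_TOKEN
        tls          on
        tls.verify   off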

    hashtag
    Data format

    By default, the Splunk output plugin nests the record under the event key in the payload sent to the HEC. It will also append the time of the record to a top level time key.

    If you would like to customize any of the Splunk event metadata, such as the host or target index, you can set Splunk_Send_Raw On in the plugin configuration, and add the metadata as keys/values in the record. Note: with Splunk_Send_Raw enabled, you are responsible for creating and populating the event section of the payload.

    For example, to add a custom index and hostname:
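    A sketch of one way to do this, assuming the record is first nested under the event key and that my-index, my-hostname, 127.0.0.1:8088 and YOUR_HEC_TOKEN are placeholders:

    [FILTER]
        # nest the original record keys under "event"
        Name       nest
        Match      *
        Operation  nest
        Wildcard   *
        Nest_under event

    [FILTER]
        # add Splunk event metadata at the top level of the record
        Name  modify
        Match *
        Add   index my-index
        Add   host  my-hostname

    [OUTPUT]
        Name            splunk
        Match           *
        Host            127.0.0.1
        Port            8088
        Splunk_Token    YOUR_HEC_TOKEN
        Splunk_Send_Raw On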

    This will create a payload where index and host appear as top-level HEC metadata alongside the event key.

    For more information on the Splunk HEC payload format and all the event metadata Splunk accepts, see: http://docs.splunk.com/Documentation/Splunk/latest/Data/AboutHEC

    hashtag
    Sending Raw Events

    If the option splunk_send_raw has been enabled, the user must take care to put all log details in the event field and to only specify fields known to Splunk at the top level of the event; if there is a mismatch, Splunk will return an HTTP 400 error.

    Consider the following example:

    splunk_send_raw off

    splunk_send_raw on

    For up to date information about the valid keys in the top level object, refer to the Splunk documentation:

    hashtag
    Splunk Metric Index

    With Splunk version 8.0 and above, you can also use the Fluent Bit Splunk output plugin to send data to metric indices. This allows you to perform visualizations, metric queries, and analysis with other metrics you may be collecting. This is based on Splunk 8.0 support for multi-metric ingestion via a single JSON payload; more details can be found on Splunk's documentation page.

    Sending to a Splunk metric index requires the splunk_send_raw option to be enabled and the message to be formatted properly. This includes three specific operations:

    • Nest metric events under a "fields" property

    • Add metric_name: to all metrics

    • Add index, source, sourcetype as fields in the message

    hashtag
    Example Configuration

    The following configuration gathers CPU metrics, nests the appropriate field, adds the required identifiers and then sends to Splunk.
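    A sketch of such a configuration; the host, port and token are placeholders, and the nest filter's Add_prefix option is used here to prepend metric_name: to the nested keys:

    [INPUT]
        Name cpu
        Tag  cpu

    [FILTER]
        # nest all metric keys under "fields" and prefix them with metric_name:
        Name       nest
        Match      cpu
        Operation  nest
        Wildcard   *
        Nest_under fields
        Add_prefix metric_name:

    [FILTER]
        # add the Splunk metric index, source and sourcetype identifiers
        Name  modify
        Match cpu
        Add   index my-metrics-index
        Add   source fluent-bit
        Add   sourcetype custom

    [OUTPUT]
        Name            splunk
        Match           cpu
        Host            127.0.0.1
        Port            8088
        Splunk_Token    YOUR_HEC_TOKEN
        Splunk_Send_Raw On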

    hashtag
    Send Metrics Events of Fluent Bit

    With Fluent Bit 2.0, you can also send Fluent Bit's own metrics events into Splunk via the Splunk HEC. This allows you to perform visualizations, metric queries, and analysis with directly sent Fluent Bit metrics. This is based on Splunk 8.0 support for multi-metric ingestion via a single concatenated JSON payload.

    Sending Fluent Bit's metrics into Splunk requires the use of Fluent Bit's metrics-collecting plugins. Note that log events and metrics events are distinguished automatically, so you don't need to pay attention to the type of events. This example includes two specific operations:

    • Collect node or Fluent Bit's internal metrics

    • Send metrics as single concatenated JSON payload

    Loki

    Loki is a multi-tenant log aggregation system inspired by Prometheus. It is designed to be very cost effective and easy to operate.

    The Fluent Bit loki built-in output plugin allows you to send your log or events to a Loki service. It supports data enrichment with Kubernetes labels, custom label keys and Tenant ID, among others.

    Be aware there is a separate Golang output plugin provided by Grafana with different configuration options.

    hashtag
    Configuration Parameters

    Key

    Stackdriver

    The Stackdriver output plugin allows you to ingest your records into the Google Cloud Stackdriver Logging service.

    Before getting started with the plugin configuration, make sure to obtain the proper credentials to get access to the service. We strongly recommend using a common JSON credentials file; reference link:

    Your goal is to obtain a credentials JSON file that will be used later by Fluent Bit Stackdriver output plugin.

    hashtag

    [INPUT]
        Name  cpu
        Tag   cpu
    
    [OUTPUT]
        Name  http
        Match *
        Host  192.168.2.3
        Port  80
        URI   /something
    pipeline:
        inputs:
            - name: cpu
              tag:  cpu
        outputs:
            - name: http
              match: '*'
              host: 192.168.2.3
              port: 80
              URI: /something
    [OUTPUT]
        Name  http
        Match *
        Host  192.168.2.3
        Port  80
        URI   /something
        Format json
        header_tag  FLUENT-TAG
        outputs:
            - name: http
              match: '*'
              host: 192.168.2.3
              port: 80
              URI: /something
              format: json
              header_tag: FLUENT-TAG
    [OUTPUT]
        Name           http
        Match          *
        Host           127.0.0.1
        Port           9000
        Header         X-Key-A Value_A
        Header         X-Key-B Value_B
        URI            /something
        outputs:
            - name: http
              match: '*'
              host: 127.0.0.1
              port: 9000
              header:
                - X-Key-A Value_A
                - X-Key-B Value_B
              URI: /something
    [OUTPUT]
        Name             http
        Match            *
        Host             collectors.au.sumologic.com
        Port             443
        URI              /receiver/v1/http/[PrivateKey]
        Format           json_lines
        Json_date_key    timestamp
        Json_date_format iso8601
        outputs:
            - name: http
              match: '*'
              host: collectors.au.sumologic.com
              port: 443
              URI: /receiver/v1/http/[PrivateKey]
              format: json_lines
              json_date_key: timestamp
              json_date_format: iso8601
    http://host:port/something
    $ fluent-bit -i cpu -t cpu -o http://192.168.2.3:80/something -m '*'
    <source>
      @type http
      add_http_headers true
    </source>
    
    <match something>
      @type rewrite_tag_filter
      <rule>
        key HTTP_FLUENT_TAG
        pattern /^(.*)$/
        tag $1
      </rule>
    </match>
    _sourcecategory="my_fluent_bit"
    | json "cpu_p" as cpu
    | timeslice 1m
    | max(cpu) as cpu group by _timeslice
    [OUTPUT]
        name                 syslog
        match                *
        host                 syslog.yourserver.com
        port                 514
        mode                 udp
        syslog_format        rfc5424
        syslog_maxsize       2048
        syslog_severity_key  severity
        syslog_facility_key  facility
        syslog_hostname_key  hostname
        syslog_appname_key   appname
        syslog_procid_key    procid
        syslog_msgid_key     msgid
        syslog_sd_key        sd
        syslog_message_key   message
        outputs:
            - name: syslog
              match: "*"
              host: syslog.yourserver.com
              port: 514
              mode: udp
              syslog_format: rfc5424
              syslog_maxsize: 2048
              syslog_severity_key: severity
              syslog_facility_key: facility
              syslog_hostname_key: hostname
              syslog_appname_key: appname
              syslog_procid_key: procid
              syslog_msgid_key: msgid
              syslog_sd_key: sd
              syslog_message_key: message
    [OUTPUT]
        name                 syslog
        match                *
        host                 syslog.yourserver.com
        port                 514
        mode                 udp
        syslog_format        rfc5424
        syslog_maxsize       2048
        syslog_hostname_key  hostname
        syslog_appname_key   appname
        syslog_procid_key    procid
        syslog_msgid_key     msgid
        syslog_sd_key        uls@0
        syslog_message_key   log
      outputs:
        - name: syslog
          match: "*"
          host: syslog.yourserver.com
          port: 514
          mode: udp
          syslog_format: rfc5424
          syslog_maxsize: 2048
          syslog_hostname_key: hostname
          syslog_appname_key: appname
          syslog_procid_key: procid
          syslog_msgid_key: msgid
          syslog_sd_key: uls@0
          syslog_message_key: log
    [FILTER]
        name  lua
        match *
        call  append_token
        code  function append_token(tag, timestamp, record) record["${AUTH_TOKEN}"] = {} return 2, timestamp, record end
    
    [OUTPUT]
        name                    syslog
        match                   *
        host                    syslog.yourserver.com
        port                    514
        mode                    tcp
        syslog_format           rfc5424
        syslog_hostname_preset  my-hostname
        syslog_appname_preset   my-appname
        syslog_message_key      log
        allow_longer_sd_id      true
        syslog_sd_key           ${AUTH_TOKEN}
        tls                     on
        tls.crt_file            /path/to/my.crt
      filters:
        - name:  lua
          match: "*"
          call:  append_token
          code:  |
            function append_token(tag, timestamp, record)
                record["${AUTH_TOKEN}"] = {}
                return 2, timestamp, record
            end
    
      outputs:
        - name: syslog
          match: "*"
          host: syslog.yourserver.com
          port: 514
          mode: tcp
          syslog_format: rfc5424
          syslog_hostname_preset: myhost
          syslog_appname_preset: myapp
          syslog_message_key: log
          allow_longer_sd_id: true
          syslog_sd_key: ${AUTH_TOKEN}
          tls: on
          tls.crt_file: /path/to/my.crt
    {
        "hostname": "myhost",
        "appname": "myapp",
        "procid": "1234",
        "msgid": "ID98",
        "uls@0": {
            "logtype": "access",
            "clustername": "mycluster",
            "namespace": "mynamespace"
        },
        "log": "Sample app log message."
    }
    <14>1 2021-07-12T14:37:35.569848Z myhost myapp 1234 ID98 [uls@0 logtype="access" clustername="mycluster" namespace="mynamespace"] Sample app log message.

    The key name from the original record that contains the Syslog severity number. This configuration is optional.

    syslog_severity_preset

    The preset severity number. It will be overwritten if syslog_severity_key is set and a key of a record is matched. This configuration is optional.

    6

    syslog_facility_key

    The key name from the original record that contains the Syslog facility number. This configuration is optional.

    syslog_facility_preset

    The preset facility number. It will be overwritten if syslog_facility_key is set and a key of a record is matched. This configuration is optional.

    1

    syslog_hostname_key

    The key name from the original record that contains the hostname that generated the message. This configuration is optional.

    syslog_hostname_preset

    The preset hostname. It will be overwritten if syslog_hostname_key is set and a key of a record is matched. This configuration is optional.

    syslog_appname_key

    The key name from the original record that contains the application name that generated the message. This configuration is optional.

    syslog_appname_preset

    The preset application name. It will be overwritten if syslog_appname_key is set and a key of a record is matched. This configuration is optional.

    syslog_procid_key

    The key name from the original record that contains the Process ID that generated the message. This configuration is optional.

    syslog_procid_preset

    The preset process ID. It will be overwritten if syslog_procid_key is set and a key of a record is matched. This configuration is optional.

    syslog_msgid_key

    The key name from the original record that contains the Message ID associated to the message. This configuration is optional.

    syslog_msgid_preset

    The preset message ID. It will be overwritten if syslog_msgid_key is set and a key of a record is matched. This configuration is optional.

    syslog_sd_key

    The key name from the original record that contains a map of key/value pairs to use as Structured Data (SD) content. The key name is included in the resulting SD field as shown in examples below. This configuration is optional.

    syslog_message_key

    The key name from the original record that contains the message to deliver. Note that this property is mandatory, otherwise the message will be empty.

    allow_longer_sd_id

    If true, Fluent Bit allows an SD-ID that is longer than 32 characters. Note that such a long SD-ID violates RFC 5424.

    false

    workers

    The number of workers to perform flush operations for this output.

    0


    Unix_Path

    Specify the path to unix socket to send a Forward message. If set, Upstream is ignored.

    Tag

    Overwrite the tag as we transmit. This allows the receiving pipeline to start fresh, or to attribute the source.

    Send_options

    Always send options (with "size"=count of messages)

    False

    Require_ack_response

    Send "chunk"-option and wait for "ack" response from server. Enables at-least-once and receiving server can control rate of traffic. (Requires Fluentd v0.14.0+ server)

    False

    Compress

    Set to 'gzip' to enable gzip compression. Incompatible with Time_as_Integer=True and with tags set dynamically using the Rewrite Tag filter. Requires Fluentd server v0.14.7 or later.

    none

    Workers

    The number of workers to perform flush operations for this output.

    2

    Self_Hostname

    Default value of the auto-generated certificate common name (CN).

    localhost

    tls

    Enable or disable TLS support

    Off

    tls.verify

    Force certificate validation

    On

    tls.debug

    Set TLS debug verbosity level. It accepts the following values: 0 (No debug), 1 (Error), 2 (State change), 3 (Informational) and 4 (Verbose).

    1

    tls.ca_file

    Absolute path to CA certificate file

    tls.crt_file

    Absolute path to Certificate file.

    tls.key_file

    Absolute path to private Key file.

    tls.key_passwd

    Optional password for tls.key_file file.

    Host

    Target host where Fluent-Bit or Fluentd are listening for Forward messages.

    127.0.0.1

    Port

    TCP Port of the target service.

    24224

    Time_as_Integer

    Set timestamps in integer format; it enables compatibility mode for the Fluentd v0.12 series.

    False

    Upstream

    If Forward will connect to an Upstream instead of a simple host, this property defines the absolute path for the Upstream configuration file, for more details about this refer to the Upstream Servers documentation section.

    Shared_Key

    A key string known by the remote Fluentd used for authorization.

    Empty_Shared_Key

    Use this option to connect to Fluentd with a zero-length secret.

    False

    Username

    Specify the username to present to a Fluentd server that enables user_auth.

    Password

    Specify the password corresponding to the username.


    2M

    compress

    Set payload compression mechanism. The only available option is gzip.

    channel

    Specify X-Splunk-Request-Channel Header for the HTTP Event Collector interface.

    http_debug_bad_request

    If the HTTP server response code is 400 (bad request) and this flag is enabled, it will print the full HTTP request and response to the stdout interface. This feature is available for debugging purposes.

    workers

    The number of workers to perform flush operations for this output.

    2

    event_sourcetype

    Set the sourcetype value to assign to the event data.

    event_sourcetype_key

    Set a record key that will populate 'sourcetype'. If the key is found, it will have precedence over the value set in event_sourcetype.

    event_index

    The name of the index by which the event data is to be indexed.

    event_index_key

    Set a record key that will populate the index field. If the key is found, it will have precedence over the value set in event_index.

    event_field

    Set event fields for the record. This option can be set multiple times and the format is key_name record_accessor_pattern.

    host

    IP address or hostname of the target Splunk service.

    127.0.0.1

    port

    TCP port of the target Splunk service.

    8088

    splunk_token

    Specify the Authentication Token for the HTTP Event Collector interface.

    http_user

    Optional username for Basic Authentication on HEC

    http_passwd

    Password for user defined in HTTP_User

    http_buffer_size

    splunk_send_raw

    When enabled, the record keys and values are set in the top level of the map instead of under the event key. Refer to the Sending Raw Events section from the docs for more details to make this option work properly.

    off

    event_key

    Specify the key name that will be used to send a single value as part of the record.

    event_host

    Specify the key name that contains the host value. This option allows a record accessor pattern.

    event_source

    Set the source value to assign to the event data.

    TLS/SSL
    http://docs.splunk.com/Documentation/Splunk/latest/Data/AboutHECarrow-up-right
    http://docs.splunk.com/Documentation/Splunk/latest/Data/AboutHECarrow-up-right
    Splunk's documentation pagearrow-up-right

    Buffer size used to receive Splunk HTTP responses

    Description
    Default

    host

    Loki hostname or IP address. Do not include the subpath, i.e. loki/api/v1/push, but just the base hostname/URL.

    127.0.0.1

    uri

    Specify a custom HTTP URI. It must start with a forward slash.

    /loki/api/v1/push

    port

    Loki TCP port

    3100

    tls

    Use TLS authentication

    off

    http_user

    Set HTTP basic authentication user name

    http_passwd

    hashtag
    Labels

    Loki stores the record logs inside streams. A stream is defined by a set of labels, and at least one label is required.

    Fluent Bit implements a flexible mechanism to set labels by using fixed key/value pairs of text, but it also allows you to set as labels certain keys that exist as part of the records being processed. Consider the following JSON record (pretty printed for readability):

    If you decide that your Loki stream will be composed of two labels, called job and the value of the record key called stream, your labels configuration properties might look as follows:
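    A minimal sketch of those label properties (only the labels line matters here; the name and match entries are the usual output settings):

    [OUTPUT]
        name   loki
        match  *
        labels job=fluentbit, $sub['stream']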

    As you can see, the label job has the value fluentbit and the second label is configured to access the nested map called sub, targeting the value of the key stream. Note that the second label name must start with a $: that means it's a Record Accessor pattern, which provides the ability to retrieve values from nested maps by using the key names.

    When processing the above configuration, the resulting labels for the stream in question internally become:

    Another feature of label management is the ability to provide custom key names: using the same record accessor pattern, we can specify the key name manually and let the value be populated automatically at runtime, e.g:
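    A sketch using a hypothetical custom key name, mystream, for the same record key:

    [OUTPUT]
        name   loki
        match  *
        labels job=fluentbit, mystream=$sub['stream']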

    When processing that new configuration, the internal labels will be:

    hashtag
    Using the label_keys property

    The additional configuration property called label_keys allows you to specify multiple record keys that need to be placed as part of the outgoing stream labels. This is a similar feature to the one explained above in the labels property; consider it another way to set a record key in the stream, but with the limitation that you cannot use a custom name for the key value.

    The following two configurations generate the same stream labels: one sets a fixed label in labels and uses label_keys for the record key, while the other lists both entries in labels. Both forms produce the same stream labels; a sketch of the two equivalent output sections follows.
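    A minimal sketch of the two equivalent forms (all other output settings are omitted; the record key $sub['stream'] is taken from the JSON example above):

    # Form 1: fixed label plus label_keys
    [OUTPUT]
        name       loki
        match      *
        labels     job=fluentbit
        label_keys $sub['stream']

    # Form 2: everything in labels
    [OUTPUT]
        name   loki
        match  *
        labels job=fluentbit, $sub['stream']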

    hashtag
    Using the label_map_path property

    The configuration property label_map_path is to read a JSON file that defines how to extract labels from each record.

    The file should contain a JSON object. Each key defines how to get the label value from a nested record, and each value is used as the label name.

    The following configuration examples generate the same Stream Labels:
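
    A sketch, assuming the map file lives at /path/to/map.json (the path is illustrative):

    [OUTPUT]
        name           loki
        match          *
        labels         job=fluentbit
        label_map_path /path/to/map.json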

    map.json:
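
    (A sketch consistent with the illustrative record, mapping the nested sub.stream value to a label named stream:)

    {
        "sub": {
            "stream": "stream"
        }
    }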


    The above configuration accomplishes the same as this one:
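
    That is, the same labels-based sketch used earlier:

    [OUTPUT]
        name   loki
        match  *
        labels job=fluentbit, $sub['stream']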

    Both will generate the following stream labels:
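
    With the illustrative record:

    job="fluentbit", stream="stdout"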

    hashtag
    Kubernetes & Labels

    Note that if you are running in a Kubernetes environment, you might want to enable the option auto_kubernetes_labels which will auto-populate the streams with the Pod labels for you. Consider the following configuration:
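
    A sketch enabling the option (the fixed job label is optional):

    [OUTPUT]
        name                   loki
        match                  *
        labels                 job=fluentbit
        auto_kubernetes_labels on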

    Based on the JSON example provided above, the internal stream labels will be:
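
    With the kubernetes.labels map from the illustrative record:

    job="fluentbit", team="Santiago Wanderers"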

    hashtag
    Drop Single Key

    If there is only one key remaining after removing keys, you can use the drop_single_key property to send its value to Loki, rather than a single key=value pair.

    Consider this simple JSON example:
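
    (An illustrative record with a single key:)

    {
        "key": "value"
    }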

    If the value is a string, line_format is json, and drop_single_key is true, it will be sent as a quoted string.

    The output line would show in Loki as:
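
    Assuming the single-key record above:

    "value"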

    If drop_single_key is raw, or line_format is key_value, it will show in Loki as:
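
    Again assuming the record above:

    value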

    If you want both structured JSON and plain-text logs in Loki, you should set drop_single_key to raw and line_format to json. Loki does not interpret a quoted string as valid JSON, and so to remove the quotes without drop_single_key set to raw, you would need to use a query like this:
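
    A sketch of such a LogQL query; the stream selector {job="fluentbit"} is an assumption and should match your own labels:

    {job="fluentbit"} | regexp `^"?(?P<log>.*?)"?$` | line_format "{{.log}}"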

    If drop_single_key is off, it will show in Loki as:
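
    Assuming the record above:

    {"key":"value"}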

    You can get the same behavior this flag provides in Loki with drop_single_key set to off with this query:
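
    A sketch, again with an assumed {job="fluentbit"} selector; the json parser extracts key and line_format prints only its value:

    {job="fluentbit"} | json | line_format "{{.key}}"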

    hashtag
    Structured metadata

    Structured metadataarrow-up-right lets you attach custom fields to individual log lines without embedding the information in the content of the log line. This capability works well for high cardinality data that isn't suited for using labels. While not a label, the structured_metadata configuration parameter operates similarly to the labels parameter. Both parameters are comma-delimited key=value lists, and both can use record accessors to reference keys within the record being processed.

    The following configuration:
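
    A sketch matching the description below; the cluster and region values are illustrative, and the $kubernetes keys assume the record has been enriched by the Kubernetes metadata filter:

    [OUTPUT]
        name                loki
        match               *
        labels              cluster=my-cluster, region=us-east-1, namespace=$kubernetes['namespace_name']
        structured_metadata pod=$kubernetes['pod_name']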

    • Defines fixed values for the cluster and region labels.

    • Uses the record accessor pattern to set the namespace label to the namespace name as determined by the Kubernetes metadata filter (not shown).

    • Uses a structured metadata field to hold the Kubernetes pod name.

    Other common uses for structured metadata include trace and span IDs, process and thread IDs, and log levels.

    Structured metadata is officially supported starting with Loki 3.0, and shouldn't be used with Loki deployments prior to Loki 3.0.

    hashtag
    Networking and TLS Configuration

    This plugin inherits core Fluent Bit features to customize the network behavior and optionally enable TLS in the communication channel. For more details about the specific options available, refer to the following articles:

    • Networking Setup: timeouts, keepalive and source address

    • Security & TLS: all about TLS configuration and certificates

    Note that all options mentioned in the articles above must be enabled in the plugin configuration in question.

    hashtag
    Fluent Bit + Grafana Cloud

    Fluent Bit supports sending logs (and metrics) to Grafana Cloudarrow-up-right by providing the appropriate URL and ensuring TLS is enabled.

    An example configuration follows; make sure to set the credentials and ensure the host URL matches the correct one for your deployment:
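
    A sketch; the host shown is an example Grafana Cloud Logs endpoint, and the user and password placeholders must be replaced with your own credentials:

    [OUTPUT]
        name        loki
        match       *
        host        logs-prod-us-central1.grafana.net
        port        443
        tls         on
        tls.verify  on
        http_user   <grafana-cloud-user-id>
        http_passwd <grafana-cloud-api-key>
        labels      job=fluentbit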

    hashtag
    Getting Started

    The following configuration example will emit a dummy example record and ingest it into Loki. Copy and paste the following content into a file called out_loki.conf:
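
    A sketch of what out_loki.conf could contain; the dummy record mirrors the illustrative JSON used earlier, and host/port assume a local Loki instance:

    [SERVICE]
        flush     1
        log_level info

    [INPUT]
        name    dummy
        dummy   {"key": 1, "sub": {"stream": "stdout", "id": "some id"}, "kubernetes": {"labels": {"team": "Santiago Wanderers"}}}
        samples 1

    [OUTPUT]
        name                   loki
        match                  *
        host                   127.0.0.1
        port                   3100
        labels                 job=fluentbit
        label_keys             $sub['stream']
        auto_kubernetes_labels on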

    Run Fluent Bit with the new configuration file:
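
    $ fluent-bit -c out_loki.conf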

    Fluent Bit output:

    Lokiarrow-up-right
    Grafanaarrow-up-right
    Configuration Parameters
    Key
    Description
    default

    google_service_credentials

    Absolute path to a Google Cloud credentials JSON file

    Value of environment variable $GOOGLE_APPLICATION_CREDENTIALS

    service_account_email

    Account email associated with the service. Only used if no credentials file has been provided.

    Value of environment variable $SERVICE_ACCOUNT_EMAIL

    service_account_secret

    Private key content associated with the service account. Only available if no credentials file has been provided.

    Value of environment variable $SERVICE_ACCOUNT_SECRET

    metadata_server

    Prefix for the metadata server. Can also be set with the environment variable $METADATA_SERVER.

    hashtag
    Configuration File

    If you are using a Google Cloud Credentials File, the following configuration is enough to get started:
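
    A minimal sketch; the credentials path is illustrative and the cpu input is just an example source:

    [SERVICE]
        Flush     1
        Log_Level info

    [INPUT]
        Name  cpu
        Tag   cpu

    [OUTPUT]
        Name                       stackdriver
        Match                      *
        google_service_credentials /path/to/my/credentials.json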

    Example configuration file for k8s resource type:
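
    A sketch for a k8s_container resource; the cluster name, location and tag prefix are illustrative values:

    [OUTPUT]
        Name                 stackdriver
        Match                *
        resource             k8s_container
        k8s_cluster_name     my-cluster
        k8s_cluster_location us-central1-a
        tag_prefix           kube.var.log.containers.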

    local_resource_id is used by the stackdriver output plugin to set the labels field for different k8s resource types. The plugin will try to find the local_resource_id field in the log entry; if there is no logging.googleapis.com/local_resource_id field in the log, the plugin will construct it from the tag value of the log.

    The local_resource_id should be in one of the following formats:

    • k8s_container.<namespace_name>.<pod_name>.<container_name>

    • k8s_node.<node_name>

    • k8s_pod.<namespace_name>.<pod_name>

    This implies that if there is no local_resource_id in the log entry, then the tag of the log should match one of these formats. Note that the tag_prefix option exists, so it is not mandatory to use k8s_container (or k8s_node / k8s_pod) as the tag prefix; see the illustration below.
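
    For example, with illustrative names, a record tagged:

        k8s_container.default.my-pod.my-container

    and carrying no logging.googleapis.com/local_resource_id field would get that same value as its local_resource_id, i.e. namespace default, pod my-pod, container my-container.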

    hashtag
    Resource Labels

    Currently, there are four ways in which fluent-bit assigns fields to the resource/labels section:

    1. Resource Labels API

    2. Monitored Resource API

    3. Local Resource Id

    4. Credentials / Config Parameters

    If resource_labels is correctly configured, then fluent-bit will attempt to populate all resource/labels using the entries specified. Otherwise, fluent-bit will attempt to use the monitored resource API. Similarly, if the monitored resource API cannot be used, then fluent-bit will attempt to populate resource/labels using configuration parameters and/or credentials specific to the resource type. As mentioned in the Configuration File section, fluent-bit will attempt to use or construct a local resource ID for a K8s resource type which does not use the resource labels or monitored resource API.

    Note that the project_id resource label will always be set from the service credentials or fetched from the metadata server and cannot be overridden.

    hashtag
    Using the resource_labels parameter

    The resource_labels configuration parameter offers an alternative API for assigning the resource labels. To use it, provide a comma-separated list of strings specifying plaintext label assignments (new=value), mappings from an original field in the log entry to a destination label (destination=$original), and/or environment variable assignments (new=${var}).

    For instance, consider the following log entry:
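
    (The record and its values below are illustrative.)

    {
        "kubernetes": {
            "pod_name": "myPod"
        },
        "msg": "hello"
    }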

    Combined with the following Stackdriver configuration:
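
    A sketch using the gce_instance resource type, whose resource labels are project_id, instance_id and zone; the values are illustrative:

    [OUTPUT]
        Name            stackdriver
        Match           *
        resource        gce_instance
        resource_labels instance_id=1234, zone=$kubernetes['pod_name']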

    This will produce the following log:
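
    Roughly, the resulting log entry would carry the assigned resource labels; project_id is taken from the credentials, as noted earlier:

    {
        "resource": {
            "type": "gce_instance",
            "labels": {
                "project_id": "my-project",
                "instance_id": "1234",
                "zone": "myPod"
            }
        },
        "jsonPayload": {
            "kubernetes": {
                "pod_name": "myPod"
            },
            "msg": "hello"
        }
    }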

    This makes the resource_labels API the recommended choice for supporting new or existing resource types that have all resource labels known before runtime or available on the payload during runtime.

    For instance, for a K8s resource type, resource_labels can be used in tandem with the Kubernetes filterarrow-up-right to pack all six resource labels. Below is an example of what this could look like for a k8s_container resource:
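
    A sketch; the fixed values are illustrative, the $kubernetes keys assume the Kubernetes filter has enriched the record, and the sixth label, project_id, is taken from the credentials:

    [OUTPUT]
        Name            stackdriver
        Match           *
        resource        k8s_container
        resource_labels cluster_name=my-cluster, location=us-central1-a, namespace_name=$kubernetes['namespace_name'], pod_name=$kubernetes['pod_name'], container_name=$kubernetes['container_name']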

    resource_labels also supports validation for required labels based on the input resource type. This allows fluent-bit to check whether all specified labels are present for a given configuration before runtime. If validation is not currently supported for a resource type that you would like to use this API with, we encourage you to open a pull request for it. Adding validation for a new resource type is simple: all that is needed is to specify the resources associated with the type alongside the required labels herearrow-up-right.

    hashtag
    Troubleshooting Notes

    hashtag
    Upstream connection error

    Github reference: #761arrow-up-right

    An upstream connection error means Fluent Bit was not able to reach Google services. The error looks like this:

    This indicates a network issue in the environment where Fluent Bit is running. Make sure that from the Host, Container or Pod you can reach the following Google endpoints:

    • https://www.googleapis.comarrow-up-right

    • https://logging.googleapis.comarrow-up-right

    hashtag
    Fail to process local_resource_id

    The error looks like this:

    Perform the following checks:

    • If the log entry does not contain the local_resource_id field, does the tag of the log match the expected format?

    • If tag_prefix is configured, does the prefix of the tag specified in the input plugin match the tag_prefix?

    hashtag
    Occasional Crashing with >1 Workers

    Github reference: #7552arrow-up-right

    When the number of Workers is greater than 1, Fluent Bit may intermittently crash.

    hashtag
    Other implementations

    Stackdriver officially supports a logging agent based on Fluentdarrow-up-right.

    We plan to support some special fields in structured payloadsarrow-up-right. Use cases for special fields are described herearrow-up-right.

    Google Cloud Stackdriver Loggingarrow-up-right
    Creating a Google Service Account for Stackdriverarrow-up-right
    <source>
      type forward
      bind 0.0.0.0
      port 24224
    </source>
    
    <match fluent_bit>
      type stdout
    </match>
    $ fluentd -c test.conf
    2017-03-23 11:50:43 -0600 [info]: reading config file path="test.conf"
    2017-03-23 11:50:43 -0600 [info]: starting fluentd-0.12.33
    2017-03-23 11:50:43 -0600 [info]: gem 'fluent-mixin-config-placeholders' version '0.3.1'
    2017-03-23 11:50:43 -0600 [info]: gem 'fluent-plugin-docker' version '0.1.0'
    2017-03-23 11:50:43 -0600 [info]: gem 'fluent-plugin-elasticsearch' version '1.4.0'
    2017-03-23 11:50:43 -0600 [info]: gem 'fluent-plugin-flatten-hash' version '0.2.0'
    2017-03-23 11:50:43 -0600 [info]: gem 'fluent-plugin-flowcounter-simple' version '0.0.4'
    2017-03-23 11:50:43 -0600 [info]: gem 'fluent-plugin-influxdb' version '0.2.8'
    2017-03-23 11:50:43 -0600 [info]: gem 'fluent-plugin-json-in-json' version '0.1.4'
    2017-03-23 11:50:43 -0600 [info]: gem 'fluent-plugin-mongo' version '0.7.10'
    2017-03-23 11:50:43 -0600 [info]: gem 'fluent-plugin-out-http' version '0.1.3'
    2017-03-23 11:50:43 -0600 [info]: gem 'fluent-plugin-parser' version '0.6.0'
    2017-03-23 11:50:43 -0600 [info]: gem 'fluent-plugin-record-reformer' version '0.7.0'
    2017-03-23 11:50:43 -0600 [info]: gem 'fluent-plugin-rewrite-tag-filter' version '1.5.1'
    2017-03-23 11:50:43 -0600 [info]: gem 'fluent-plugin-stdin' version '0.1.1'
    2017-03-23 11:50:43 -0600 [info]: gem 'fluent-plugin-td' version '0.10.27'
    2017-03-23 11:50:43 -0600 [info]: adding match pattern="fluent_bit" type="stdout"
    2017-03-23 11:50:43 -0600 [info]: adding source type="forward"
    2017-03-23 11:50:43 -0600 [info]: using configuration file: <ROOT>
      <source>
        type forward
        bind 0.0.0.0
        port 24224
      </source>
      <match fluent_bit>
        type stdout
      </match>
    </ROOT>
    2017-03-23 11:50:43 -0600 [info]: listening fluent socket on 0.0.0.0:24224
    bin/fluent-bit -i INPUT -o forward://HOST:PORT
    $ bin/fluent-bit -i cpu -t fluent_bit -o forward://127.0.0.1:24224
    2017-03-23 11:53:06 -0600 fluent_bit: {"cpu_p":0.0,"user_p":0.0,"system_p":0.0,"cpu0.p_cpu":0.0,"cpu0.p_user":0.0,"cpu0.p_system":0.0,"cpu1.p_cpu":0.0,"cpu1.p_user":0.0,"cpu1.p_system":0.0,"cpu2.p_cpu":0.0,"cpu2.p_user":0.0,"cpu2.p_system":0.0,"cpu3.p_cpu":1.0,"cpu3.p_user":1.0,"cpu3.p_system":0.0}
    2017-03-23 11:53:07 -0600 fluent_bit: {"cpu_p":2.25,"user_p":2.0,"system_p":0.25,"cpu0.p_cpu":3.0,"cpu0.p_user":3.0,"cpu0.p_system":0.0,"cpu1.p_cpu":1.0,"cpu1.p_user":1.0,"cpu1.p_system":0.0,"cpu2.p_cpu":1.0,"cpu2.p_user":1.0,"cpu2.p_system":0.0,"cpu3.p_cpu":3.0,"cpu3.p_user":2.0,"cpu3.p_system":1.0}
    2017-03-23 11:53:08 -0600 fluent_bit: {"cpu_p":1.75,"user_p":1.0,"system_p":0.75,"cpu0.p_cpu":2.0,"cpu0.p_user":1.0,"cpu0.p_system":1.0,"cpu1.p_cpu":3.0,"cpu1.p_user":1.0,"cpu1.p_system":2.0,"cpu2.p_cpu":3.0,"cpu2.p_user":2.0,"cpu2.p_system":1.0,"cpu3.p_cpu":2.0,"cpu3.p_user":1.0,"cpu3.p_system":1.0}
    2017-03-23 11:53:09 -0600 fluent_bit: {"cpu_p":4.75,"user_p":3.5,"system_p":1.25,"cpu0.p_cpu":4.0,"cpu0.p_user":3.0,"cpu0.p_system":1.0,"cpu1.p_cpu":5.0,"cpu1.p_user":4.0,"cpu1.p_system":1.0,"cpu2.p_cpu":3.0,"cpu2.p_user":2.0,"cpu2.p_system":1.0,"cpu3.p_cpu":5.0,"cpu3.p_user":4.0,"cpu3.p_system":1.0}
    [SERVICE]
        Flush      5
        Daemon     off
        Log_Level  info
    
    [INPUT]
        Name       cpu
        Tag        cpu_usage
    
    [OUTPUT]
        Name          forward
        Match         *
        Host          127.0.0.1
        Port          24284
        Shared_Key    secret
        Self_Hostname flb.local
        tls           on
        tls.verify    off
    <source>
      @type         secure_forward
      self_hostname myserver.local
      shared_key    secret
      secure no
    </source>
    
    <match **>
     @type stdout
    </match>
    <source>
      @type forward
      <transport tls>
        cert_path /etc/td-agent/certs/fluentd.crt
        private_key_path /etc/td-agent/certs/fluentd.key
        private_key_passphrase password
      </transport>
      <security>
        self_hostname myserver.local
        shared_key secret
      </security>
    </source>
    
    <match **>
     @type stdout
    </match>
    $ fluentd -c fld.conf
    $ fluent-bit -c flb.conf
    2017-03-23 13:34:40 -0600 [info]: using configuration file: <ROOT>
      <source>
        @type secure_forward
        self_hostname myserver.local
        shared_key xxxxxx
        secure no
      </source>
      <match **>
        @type stdout
      </match>
    </ROOT>
    2017-03-23 13:34:41 -0600 cpu_usage: {"cpu_p":1.0,"user_p":0.75,"system_p":0.25,"cpu0.p_cpu":1.0,"cpu0.p_user":1.0,"cpu0.p_system":0.0,"cpu1.p_cpu":2.0,"cpu1.p_user":1.0,"cpu1.p_system":1.0,"cpu2.p_cpu":1.0,"cpu2.p_user":1.0,"cpu2.p_system":0.0,"cpu3.p_cpu":2.0,"cpu3.p_user":1.0,"cpu3.p_system":1.0}
    2017-03-23 13:34:42 -0600 cpu_usage: {"cpu_p":1.75,"user_p":1.75,"system_p":0.0,"cpu0.p_cpu":3.0,"cpu0.p_user":3.0,"cpu0.p_system":0.0,"cpu1.p_cpu":2.0,"cpu1.p_user":2.0,"cpu1.p_system":0.0,"cpu2.p_cpu":0.0,"cpu2.p_user":0.0,"cpu2.p_system":0.0,"cpu3.p_cpu":1.0,"cpu3.p_user":1.0,"cpu3.p_system":0.0}
    2017-03-23 13:34:43 -0600 cpu_usage: {"cpu_p":1.75,"user_p":1.25,"system_p":0.5,"cpu0.p_cpu":3.0,"cpu0.p_user":3.0,"cpu0.p_system":0.0,"cpu1.p_cpu":2.0,"cpu1.p_user":2.0,"cpu1.p_system":0.0,"cpu2.p_cpu":0.0,"cpu2.p_user":0.0,"cpu2.p_system":0.0,"cpu3.p_cpu":1.0,"cpu3.p_user":0.0,"cpu3.p_system":1.0}
    2017-03-23 13:34:44 -0600 cpu_usage: {"cpu_p":5.0,"user_p":3.25,"system_p":1.75,"cpu0.p_cpu":4.0,"cpu0.p_user":2.0,"cpu0.p_system":2.0,"cpu1.p_cpu":8.0,"cpu1.p_user":5.0,"cpu1.p_system":3.0,"cpu2.p_cpu":4.0,"cpu2.p_user":3.0,"cpu2.p_system":1.0,"cpu3.p_cpu":4.0,"cpu3.p_user":2.0,"cpu3.p_system":2.0}
    $ fluent-bit -i cpu -t cpu -o splunk -p host=127.0.0.1 -p port=8088 \
      -p tls=on -p tls.verify=off -m '*'