Official and Microsoft Certified Azure Storage Blob connector
The Azure Blob output plugin lets you ingest your records into the Azure Blob Storage service. This connector is designed to use the Append Blob and Block Blob APIs.
Our plugin works with the official Azure service and can also be configured to be used with a service emulator such as Azurite.
Before getting started, make sure you already have an Azure Storage account. As a reference, the following link explains step-by-step how to set up your account:
We expose different configuration properties. The following table lists all the options available, and the next section has specific configuration details for the official service or the emulator.
As mentioned above, you can either deliver records to the official service or an emulator. Below we have an example for each use case.
The following configuration example generates a random message with a custom tag:
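One possible sketch of such a configuration, using the random input plugin; the account name, shared key, and container name are placeholders you must replace with your own values:

```
[INPUT]
    name    random
    # generate a few random records under a custom tag
    tag     test
    samples 10

[OUTPUT]
    name                  azure_blob
    match                 *
    account_name          YOUR_ACCOUNT_NAME
    shared_key            YOUR_SHARED_KEY
    container_name        logs
    auto_create_container on
    tls                   on
```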
After you run the configuration file above, you will be able to query the data using the Azure Storage Explorer. The example above will generate the following content in the explorer:
The quickest way to get started is to install Azurite using npm:
then run the service:
Azurite comes with a default account_name and shared_key, so make sure to use the specific values provided in the example below (do an exact copy/paste):
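The following sketch uses Azurite's well-known development credentials; the container name is a placeholder:

```
[INPUT]
    name    random
    tag     test
    samples 10

[OUTPUT]
    name                  azure_blob
    match                 *
    # Azurite's default development account credentials
    account_name          devstoreaccount1
    shared_key            Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==
    container_name        logs
    emulator_mode         on
    endpoint              http://127.0.0.1:10000
    auto_create_container on
```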
After running that Fluent Bit configuration you will see the data flowing into Azurite:
Send logs and metrics to Amazon CloudWatch
The Amazon CloudWatch output plugin lets you ingest your records into the CloudWatch Logs service. Support for CloudWatch Metrics is also provided via EMF.
This is the documentation for the core Fluent Bit CloudWatch plugin, written in C. It can replace the aws/amazon-cloudwatch-logs-for-fluent-bit Golang Fluent Bit plugin released last year. The Golang plugin was named cloudwatch; this new high-performance CloudWatch plugin is called cloudwatch_logs to prevent conflicts/confusion. Check the amazon repo for the Golang plugin for details on the deprecation/migration plan for the original plugin.
In order to send records into Amazon CloudWatch, you can run the plugin from the command line or through the configuration file:
The cloudwatch_logs plugin can read the parameters from the command line through the -p argument (property), for example:
In your main configuration file append the following Output section:
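A minimal sketch of that Output section; the region and log group/stream names are placeholders:

```
[OUTPUT]
    Name              cloudwatch_logs
    Match             *
    region            us-east-1
    log_group_name    fluent-bit-cloudwatch
    log_stream_prefix from-fluent-bit-
    # create the log group if it does not exist yet
    auto_create_group true
```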
Fluent Bit has different input plugins (cpu, mem, disk, netif) to collect host resource usage metrics. The cloudwatch_logs output plugin can be used to send these host metrics to CloudWatch in Embedded Metric Format (EMF). If the data comes from any of the above-mentioned input plugins, the cloudwatch_logs output plugin will convert it to EMF format and send it to CloudWatch as JSON logs. Additionally, if you set json/emf as the value of the log_format configuration option, CloudWatch will extract custom metrics from the embedded JSON payload.
Note: Right now, only cpu and mem metrics can be sent to CloudWatch.
To use the mem input plugin and send memory usage metrics to CloudWatch, consider the following example configuration file. Here, we use the aws filter, which adds ec2_instance_id and az (availability zone) to the log records. Later, in the output configuration section, we set ec2_instance_id as our metric dimension.
The following config will set two dimensions for all of our metrics: ec2_instance_id and az.
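A sketch of such a configuration, assuming the aws filter is available to enrich the records with ec2_instance_id and az; the region, namespace, and group names are placeholders:

```
[INPUT]
    Name mem
    Tag  mem

[FILTER]
    Name  aws
    Match *

[OUTPUT]
    Name              cloudwatch_logs
    Match             *
    region            us-west-2
    log_group_name    fluent-bit-emf
    log_stream_prefix from-fluent-bit-
    auto_create_group true
    # emit Embedded Metric Format so CloudWatch extracts custom metrics
    log_format        json/emf
    metric_namespace  fluent-bit-metrics
    metric_dimensions ec2_instance_id,az
```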
Amazon distributes a container image with Fluent Bit and these plugins.
github.com/aws/aws-for-fluent-bit
Our images are available in the Amazon ECR Public Gallery. You can download images with different tags using the following command:
For example, you can pull the image with the latest version by running:
If you see errors for image pull limits, try logging into public ECR with your AWS credentials:
You can check the Amazon ECR Public official documentation for more details.
You can use our SSM Public Parameters to find the Amazon ECR image URI in your region:
For more see the AWS for Fluent Bit github repo.
Send logs to Elasticsearch (including Amazon Elasticsearch Service)
The es output plugin lets you ingest your records into an Elasticsearch database. The following instructions assume that you have a fully operational Elasticsearch service running in your environment.
The parameters index and type can be confusing if you are new to Elastic; if you have used a common relational database before, they can be compared to the database and table concepts. Also see the FAQ below.
The Elasticsearch output plugin supports TLS/SSL; for more details about the properties available and general configuration, please refer to the TLS/SSL section.
In order to insert records into an Elasticsearch service, you can run the plugin from the command line or through the configuration file:
The es plugin can read the parameters from the command line in two ways: through the -p argument (property) or by setting them directly through the service URI. The URI format is the following:
Using the format specified, you could start Fluent Bit through:
which is equivalent to:
In your main configuration file append the following Input & Output sections. You can visualize this configuration here
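A minimal sketch of those sections; the host, index, and type values are placeholders:

```
[INPUT]
    Name cpu
    Tag  cpu

[OUTPUT]
    Name  es
    Match *
    Host  192.168.2.3
    Port  9200
    Index my_index
    Type  my_type
```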
Some input plugins may generate messages where the field names contain dots. Since Elasticsearch 2.0 this is no longer allowed, so the current es plugin replaces them with an underscore, e.g.:
becomes
Since Elasticsearch 6.0, you cannot create multiple types in a single index. This means that you cannot set up your configuration as below anymore.
If you see an error message like below, you'll need to fix your configuration to use a single type on each index.
Rejecting mapping update to [search] as the final mapping would have more than 1 type
For details, please read the official blog post on that issue.
Fluent Bit v1.5 changed the default mapping type from flb_type to _doc, which matches the recommendation from Elasticsearch from version 6.2 forwards (see commit with rationale). This doesn't work in Elasticsearch versions 5.6 through 6.1 (see Elasticsearch discussion and fix). Ensure you set an explicit map (such as doc or flb_type) in the configuration, as seen on the last line:
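For example, a sketch of an output section that sets the type explicitly on the last line (the host and index are placeholders):

```
[OUTPUT]
    Name  es
    Match *
    Host  elasticsearch.example.com
    Port  9200
    Index my_index
    # explicit mapping type, needed for Elasticsearch 5.6 through 6.1
    Type  _doc
```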
The Amazon Elasticsearch Service adds an extra security layer where HTTP requests must be signed with AWS Sigv4. Fluent Bit v1.5 introduced full support for Amazon Elasticsearch Service with IAM Authentication.
Fluent Bit supports sourcing AWS credentials from any of the standard sources (for example, an Amazon EKS IAM Role for a Service Account).
Example configuration:
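A sketch of such a configuration; the domain endpoint, region, and index are placeholders:

```
[OUTPUT]
    Name       es
    Match      *
    Host       my-domain.us-west-2.es.amazonaws.com
    Port       443
    Index      my_index
    Type       _doc
    AWS_Auth   On
    AWS_Region us-west-2
    tls        On
```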
Notice that Port is set to 443, tls is enabled, and AWS_Region is set.
Send logs to Amazon Kinesis Firehose
The Amazon Kinesis Data Firehose output plugin lets you ingest your records into the Firehose service.
This is the documentation for the core Fluent Bit Firehose plugin, written in C. It can replace the Golang Fluent Bit plugin released last year. The Golang plugin was named firehose; this new high-performance and highly efficient Firehose plugin is called kinesis_firehose to prevent conflicts/confusion.
In order to send records into Amazon Kinesis Data Firehose, you can run the plugin from the command line or through the configuration file:
The firehose plugin can read the parameters from the command line through the -p argument (property), for example:
In your main configuration file append the following Output section:
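A minimal sketch of that Output section; the region and delivery stream name are placeholders:

```
[OUTPUT]
    Name            kinesis_firehose
    Match           *
    region          us-east-1
    delivery_stream my-stream
```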
Amazon distributes a container image with Fluent Bit and these plugins.
Our images are available in the Amazon ECR Public Gallery. You can download images with different tags using the following command:
For example, you can pull the image with the latest version by running:
If you see errors for image pull limits, try logging into public ECR with your AWS credentials:
You can use our SSM Public Parameters to find the Amazon ECR image URI in your region:
Counter is a very simple plugin that counts how many records it receives at flush time. The plugin output is as follows:
You can run the plugin from the command line or through the configuration file:
From the command line you can let Fluent Bit count records with the following options:
In your main configuration file append the following Input & Output sections:
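A minimal sketch of those sections:

```
[INPUT]
    Name cpu
    Tag  cpu

[OUTPUT]
    Name  counter
    Match *
```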
Once Fluent Bit is running, you will see the reports in the output interface similar to this:
The file output plugin lets you write the data received through the input plugin to a file.
The plugin supports the following configuration parameters:
Outputs time, tag, and JSON records. There are no configuration parameters for out_file.
Outputs the records as JSON (without the additional tag and timestamp attributes). There are no configuration parameters for the plain format.
Outputs the records as CSV. CSV supports an additional configuration parameter.
Outputs the records as LTSV. LTSV supports an additional configuration parameter.
Outputs the records using a custom format template.
This accepts a formatting template and fills placeholders using corresponding values in a record.
For example, if you set up the configuration as below:
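A sketch of such a configuration, assuming the mem input plugin so that the record contains Mem.used, Mem.free and Mem.total keys:

```
[INPUT]
    Name mem

[OUTPUT]
    Name     file
    Match    *
    Format   template
    # placeholders are replaced with the corresponding record values
    Template {time} used={Mem.used} free={Mem.free} total={Mem.total}
```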
You will get the following output:
You can run the plugin from the command line or through the configuration file:
From the command line you can let Fluent Bit write data to a file with the following options:
In your main configuration file append the following Input & Output sections:
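A minimal sketch of those sections; the path is a placeholder:

```
[INPUT]
    Name cpu
    Tag  cpu

[OUTPUT]
    Name  file
    Match *
    Path  /tmp/fluent-bit-output
```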
The Kafka output plugin lets you ingest your records into an Apache Kafka service. This plugin uses the official librdkafka library as a built-in dependency.
Setting rdkafka.log.connection.close to false and rdkafka.request.required.acks to 1 are examples of recommended settings for librdkafka properties.
In order to insert records into Apache Kafka, you can run the plugin from the command line or through the configuration file:
The kafka plugin can read the parameters from the command line through the -p argument (property), for example:
In your main configuration file append the following Input & Output sections:
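A minimal sketch of those sections; the broker address and topic are placeholders:

```
[INPUT]
    Name cpu

[OUTPUT]
    Name    kafka
    Match   *
    brokers 192.168.1.3:9092
    topics  test
```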
You can check the for more details.
For more see .
If you get a 403 Forbidden error response, double check that you have a valid and that you have .
Key | Description | default |
---|---|---|
account_name | Azure Storage account name. This configuration property is mandatory. | |
shared_key | Specify the Azure Storage Shared Key to authenticate against the service. This configuration property is mandatory. | |
container_name | Name of the container that will contain the blobs. This configuration property is mandatory. | |
blob_type | Specify the desired blob type. Fluent Bit supports appendblob and blockblob. | appendblob |
auto_create_container | If container_name does not exist in the remote service, enabling this option will handle the exception and auto-create the container. | on |
path | Optional path to store your blobs. If your blob name is myblob, you can specify sub-directories where to store it using path, so setting path to /logs/kubernetes will store your blob in /logs/kubernetes/myblob. | |
emulator_mode | If you want to send data to an Azure emulator service like Azurite, enable this option so the plugin will format the requests to the expected format. | off |
endpoint | If you are using an emulator, this option allows you to specify the absolute HTTP address of such service, e.g. http://127.0.0.1:10000. | |
tls | Enable or disable TLS encryption. Note that the Azure service requires this to be turned on. | off |
Key | Description |
---|---|
region | The AWS region. |
log_group_name | The name of the CloudWatch Log Group that you want log records sent to. |
log_stream_name | The name of the CloudWatch Log Stream that you want log records sent to. |
log_stream_prefix | Prefix for the Log Stream name. The tag is appended to the prefix to construct the full log stream name. Not compatible with the log_stream_name option. |
log_key | By default, the whole log record will be sent to CloudWatch. If you specify a key name with this option, then only the value of that key will be sent to CloudWatch. For example, if you are using the Fluentd Docker log driver, you can specify log_key log and only the log message will be sent to CloudWatch. |
log_format | An optional parameter that can be used to tell CloudWatch the format of the data. A value of json/emf enables CloudWatch to extract custom metrics embedded in a JSON payload. See the Embedded Metric Format. |
role_arn | ARN of an IAM role to assume (for cross account access). |
auto_create_group | Automatically create the log group. Valid values are "true" or "false" (case insensitive). Defaults to false. |
endpoint | Specify a custom endpoint for the CloudWatch Logs API. |
metric_namespace | An optional string representing the CloudWatch namespace for the metrics. See the Metrics Tutorial section below for a full configuration. |
metric_dimensions | A list of lists containing the dimension keys that will be applied to all metrics. The values within a dimension set MUST also be members of the root node. For more information about dimensions, see Dimension and Dimensions. In the Fluent Bit config, metric_dimensions is a comma- and semicolon-separated string. If you have only one list of dimensions, put the values as a comma-separated string. If you want to put a list of lists, separate the lists with semicolons. For example, if you set the value to 'dimension_1,dimension_2;dimension_3', it will be converted to [[dimension_1, dimension_2],[dimension_3]]. |
sts_endpoint | Specify a custom STS endpoint for the AWS STS API. |
Key | Description | default |
---|---|---|
Host | IP address or hostname of the target Elasticsearch instance | 127.0.0.1 |
Port | TCP port of the target Elasticsearch instance | 9200 |
Path | Elasticsearch accepts new data on HTTP query path "/_bulk". But it is also possible to serve Elasticsearch behind a reverse proxy on a subpath. This option defines such path on the Fluent Bit side. It simply adds a path prefix in the indexing HTTP POST URI. | Empty string |
Buffer_Size | Specify the buffer size used to read the response from the Elasticsearch HTTP service. This option is useful for debugging purposes where it is required to read full responses; note that the response size grows depending on the number of records inserted. To set an unlimited amount of memory, set this value to False; otherwise the value must be according to the Unit Size specification. | 4KB |
Pipeline | Newer versions of Elasticsearch allow setting up filters called pipelines. This option allows defining which pipeline the database should use. For performance reasons it is strongly suggested to do parsing and filtering on the Fluent Bit side and avoid pipelines. | |
AWS_Auth | Enable AWS Sigv4 Authentication for Amazon Elasticsearch Service | Off |
AWS_Region | Specify the AWS region for Amazon Elasticsearch Service | |
AWS_STS_Endpoint | Specify the custom STS endpoint to be used with the STS API for Amazon Elasticsearch Service | |
AWS_Role_ARN | AWS IAM Role to assume to put records to your Amazon ES cluster | |
AWS_External_ID | External ID for the AWS IAM Role specified with aws_role_arn | |
HTTP_User | Optional username credential for Elastic X-Pack access | |
HTTP_Passwd | Password for user defined in HTTP_User | |
Index | Index name | fluent-bit |
Type | Type name | _doc |
Logstash_Format | Enable Logstash format compatibility. This option takes a boolean value: True/False, On/Off | Off |
Logstash_Prefix | When Logstash_Format is enabled, the Index name is composed using a prefix and the date, e.g. if Logstash_Prefix is equal to 'mydata' your index will become 'mydata-YYYY.MM.DD'. The last string appended belongs to the date when the data is being generated. | logstash |
Logstash_DateFormat | Time format (based on strftime) to generate the second part of the Index name. | %Y.%m.%d |
Time_Key | When Logstash_Format is enabled, each record will get a new timestamp field. The Time_Key property defines the name of that field. | @timestamp |
Time_Key_Format | When Logstash_Format is enabled, this property defines the format of the timestamp. | %Y-%m-%dT%H:%M:%S |
Time_Key_Nanos | When Logstash_Format is enabled, enabling this property sends nanosecond precision timestamps. | Off |
Include_Tag_Key | When enabled, it appends the Tag name to the record. | Off |
Tag_Key | When Include_Tag_Key is enabled, this property defines the key name for the tag. | _flb-key |
Generate_ID | When enabled, generate _id for outgoing records. This prevents duplicate records when retrying ES. | Off |
Replace_Dots | When enabled, replace field name dots with underscores, as required by Elasticsearch 2.0-2.3. | Off |
Trace_Output | When enabled, print the Elasticsearch API calls to stdout (for diagnostics only). | Off |
Trace_Error | When enabled, print the Elasticsearch API calls to stdout when Elasticsearch returns an error (for diagnostics only). | Off |
Current_Time_Index | Use current time for index generation instead of the message record. | Off |
Logstash_Prefix_Key | When included: the value in the record that belongs to the key will be looked up and overwrite the Logstash_Prefix for index generation. If the key/value is not found in the record then the Logstash_Prefix option will act as a fallback. Nested keys are not supported (if desired, you can use the nest filter plugin to remove nesting). | |
Key | Description |
---|---|
region | The AWS region. |
delivery_stream | The name of the Kinesis Firehose Delivery stream that you want log records sent to. |
time_key | Add the timestamp to the record under this key. By default the timestamp from Fluent Bit will not be added to records sent to Kinesis. |
time_key_format | strftime compliant format string for the timestamp; for example, the default is '%Y-%m-%dT%H:%M:%S'. This option is used with time_key. |
log_key | By default, the whole log record will be sent to Firehose. If you specify a key name with this option, then only the value of that key will be sent to Firehose. For example, if you are using the Fluentd Docker log driver, you can specify log_key log and only the log message will be sent to Firehose. |
role_arn | ARN of an IAM role to assume (for cross account access). |
endpoint | Specify a custom endpoint for the Firehose API. |
sts_endpoint | Custom endpoint for the STS API. |
Key | Description |
Delimiter | The character to separate each data. Default: ',' |
Key | Description |
Delimiter | The character to separate each pair. Default: '\t'(TAB) |
Label_Delimiter | The character to separate label and the value. Default: ':' |
Key | Description |
Template | The format string. Default: '{time} {message}' |
Key | Description |
Path | Absolute directory path to store files. If not set, Fluent Bit will write the files in its own working directory. Note: this option was added in Fluent Bit v1.4.6 |
File | Set file name to store the records. If not set, the file name will be the tag associated with the records. |
Format | The format of the file content. See also Format section. Default: out_file. |
Key | Description | Default |
Host | Required - The Datadog server where you are sending your logs. |
|
TLS | Required - End-to-end communications security protocol. Datadog recommends setting this to |
|
compress | Recommended - compresses the payload in GZIP format, Datadog supports and recommends setting this to |
apikey |
dd_service | Recommended - The human readable name for your service generating the logs - the name of your application or database. |
dd_source | Recommended - A human readable name for the underlying technology of your service. For example, |
dd_tags |
Key | Description | default |
format | Specify data format, options available: json, msgpack. | json |
message_key | Optional key to store the message |
message_key_field | If set, the value of Message_Key_Field in the record will indicate the message key. If not set nor found in the record, Message_Key will be used (if set). |
timestamp_key | Set the key to store the record timestamp | @timestamp |
timestamp_format | 'iso8601' or 'double' | double |
brokers | Single or multiple list of Kafka Brokers, e.g. 192.168.1.3:9092, 192.168.1.4:9092. |
topics | Single entry or list of topics separated by comma (,) that Fluent Bit will use to send messages to Kafka. If only one topic is set, that one will be used for all records. Instead if multiple topics exists, the one set in the record by Topic_Key will be used. | fluent-bit |
topic_key | If multiple Topics exist, the value of Topic_Key in the record will indicate the topic to use. E.g. if Topic_Key is router and the record is {"key1": 123, "router": "route_2"}, Fluent Bit will use the topic route_2. Note that if the value of Topic_Key is not present in Topics, then by default the first topic in the Topics list will be used. |
dynamic_topic | Adds unknown topics (found in Topic_Key) to Topics, so in Topics only a default topic needs to be configured. | Off |
queue_full_retries | Fluent Bit queues data into the rdkafka library; if for some reason the underlying library cannot flush the records, the queue might fill up, blocking new additions of records. The queue_full_retries option sets the number of local retries to enqueue the data. | 10 |
rdkafka.{property} |
PostgreSQL is a very popular and versatile open source database management system that supports the SQL language and that is capable of storing both structured and unstructured data, such as JSON objects.
Given that Fluent Bit is designed to work with JSON objects, the pgsql output plugin allows users to send their data to a PostgreSQL database and store it using the JSONB type.
PostgreSQL 9.4 or higher is required.
According to the parameters you have set in the configuration file, the plugin will create the table defined by the table option in the database defined by the database option, hosted on the server defined by the host option. It will use the PostgreSQL user defined by the user option, which needs to have the right privileges to create such a table in that database.
NOTE: If you are not familiar with how PostgreSQL's users and grants system works, you might find it useful to read the recommended links in the "References" section at the bottom.
A typical installation normally consists of a self-contained database for Fluent Bit in which you can store the output of one or more pipelines. Ultimately, it is your choice whether to store them in the same table, in separate tables, or even in separate databases based on several factors, including workload, scalability, data protection and security.
In this example, for the sake of simplicity, we use a single table called fluentbit in a database called fluentbit that is owned by the user fluentbit. Feel free to use different names. Preferably, for security reasons, do not use the postgres user (which has SUPERUSER privileges).
Create the fluentbit user
Generate a robust random password (e.g. pwgen 20 1) and store it safely. Then, as the postgres system user on the server where PostgreSQL is installed, execute:
At the prompt, please provide the password that you previously generated.
As a result, the user fluentbit without superuser privileges will be created.
If you prefer, instead of the createuser application, you can directly use the SQL command CREATE USER.
Create the fluentbit database
As the postgres system user, please run:
This will create a database called fluentbit owned by the fluentbit user. As a result, the fluentbit user will be able to safely create the data table.
Alternatively, you can use the SQL command CREATE DATABASE.
Make sure that the fluentbit user can connect to the fluentbit database on the specified target host. This might require you to properly configure the pg_hba.conf file.
Fluent Bit relies on libpq, the PostgreSQL native client API, written in the C language. For this reason, default values might be affected by environment variables and compilation settings. The above table lists, in brackets, the most common default values for each connection option.
For security reasons, it is advised to follow the directives included in the password file section.
In your main configuration file add the following section:
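A sketch of such a section, using the user, database and table created above; the host and password are placeholders, and the option names follow the connection parameters discussed here (host, port, user, password, database, table):

```
[INPUT]
    Name cpu
    Tag  cpu

[OUTPUT]
    Name     pgsql
    Match    *
    Host     172.17.0.2
    Port     5432
    User     fluentbit
    Password YOUR_GENERATED_PASSWORD
    Database fluentbit
    Table    fluentbit
```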
The output plugin automatically creates a table with the name specified by the table configuration option, made up of the following fields:
tag TEXT
time TIMESTAMP WITHOUT TIMEZONE
data JSONB
As you can see, the timestamp does not contain any information about the time zone; it is therefore referred to the time zone used by the connection to PostgreSQL (the timezone setting).
For more information on the JSONB data type in PostgreSQL, please refer to the JSON types page in the official documentation, where you can find instructions on how to index or query the objects (including jsonpath, introduced in PostgreSQL 12).
PostgreSQL 10 introduces support for declarative partitioning. In order to improve vertical scalability of the database, you can decide to partition your tables on time ranges (for example on a monthly basis). PostgreSQL also supports sub-partitions, allowing you to even partition your records by hash (version 11+), and default partitions (version 11+).
For more information on horizontal partitioning in PostgreSQL, please refer to the Table partitioning page in the official documentation.
If you are starting now, our recommendation at the moment is to choose the latest major version of PostgreSQL.
PostgreSQL is a really powerful and extensible database engine. More expert users can indeed take advantage of BEFORE INSERT triggers on the main table and re-route records to normalised tables, depending on tags and the content of the actual JSON objects.
For example, you can use Fluent Bit to send HTTP log records to the landing table defined in the configuration file. This table contains a BEFORE INSERT trigger (a function in the plpgsql language) that normalises the content of the JSON object and inserts the record in another table (with its own structure and partitioning model). This kind of trigger allows you to discard the record from the landing table by returning NULL.
Here follows a list of useful resources from the PostgreSQL documentation:
Send logs, metrics to Azure Log Analytics
The azure output plugin lets you ingest your records into the Azure Log Analytics service.
To get more details about how to set up Azure Log Analytics, please refer to the following documentation: Azure Log Analytics
In order to insert records into an Azure Log Analytics instance, you can run the plugin from the command line or through the configuration file:
The azure plugin can read the parameters from the command line through the -p argument (property), for example:
In your main configuration file append the following Input & Output sections:
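A sketch of those sections; the workspace ID and shared key are placeholders, and the property names assumed here are Customer_ID and Shared_Key:

```
[INPUT]
    Name cpu

[OUTPUT]
    Name        azure
    Match       *
    Customer_ID your-log-analytics-workspace-id
    Shared_Key  your-log-analytics-shared-key
```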
GELF is the Graylog Extended Log Format. The GELF output plugin lets you send logs in GELF format directly to a Graylog input using TLS, TCP or UDP protocols.
The following instructions assume that you have a fully operational Graylog server running in your environment.
According to the GELF Payload Specification, there are some mandatory and optional fields which are used by Graylog in GELF format. These fields are set with the Gelf_*_Key properties in this plugin.
The GELF output plugin supports TLS/SSL; for more details about the properties available and general configuration, please refer to the TLS/SSL section.
If you're using Fluent Bit to collect Docker logs, note that Docker places your log in JSON under the key log. So you can set log as your Gelf_Short_Message_Key to send everything in Docker logs to Graylog. In this case, you need your log value to be a string, so don't parse it using a JSON parser.
The order of looking up the timestamp in this plugin is as follows:
Value of Gelf_Timestamp_Key provided in configuration
Value of the timestamp key
If you're using the Docker JSON parser, this parser can parse the time and use it as the timestamp of the message. If all of the above fail, Fluent Bit tries to get the timestamp extracted by your parser.
If the timestamp is not set by Fluent Bit, your Graylog server will set it to the current timestamp (now).
Your log timestamp has to be in UNIX epoch timestamp format. If the Gelf_Timestamp_Key value of your log is not in this format, your Graylog server will ignore it.
If you're using Fluent Bit in Kubernetes and you're using the Kubernetes Filter Plugin, this plugin adds the host value to your log by default, and you don't need to add it on your own.
The version of the GELF message is also mandatory and Fluent Bit sets it to 1.1, which is the current latest version of GELF.
If you use udp as the transport protocol and set Compress to true, Fluent Bit compresses your packets in GZIP format, which is the default compression that Graylog offers. This can be used to trade more CPU load for saving network bandwidth.
If you're using Fluent Bit for shipping Kubernetes logs, you can use something like this as your configuration file:
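A sketch of such a configuration; the paths, the Graylog host, and the exact filter chain are assumptions that depend on your environment (the nest filter here lifts the keys nested under log, as described below):

```
[INPUT]
    Name   tail
    Path   /var/log/containers/*.log
    Parser docker
    Tag    kube.*

[FILTER]
    Name  kubernetes
    Match kube.*

[FILTER]
    Name         nest
    Match        kube.*
    Operation    lift
    Nested_under log

[OUTPUT]
    Name                   gelf
    Match                  kube.*
    Host                   graylog.example.com
    Port                   12201
    Mode                   tcp
    Gelf_Short_Message_Key data
```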
By default, GELF over TCP uses port 12201 and Docker places your logs in the /var/log/containers directory. The logs are placed in the value of the log key. For example, this is a log saved by Docker:
If you use the Tail input and a parser like the docker parser shown above, it decodes your message and extracts data (and any other present) fields. This is how this log looks in stdout after decoding:
Now, this is what happens to this log:
The Fluent Bit GELF plugin adds "version": "1.1" to it.
The Nest filter un-nests fields inside the log key. In our example, it puts data alongside stream and time.
We used this data key as Gelf_Short_Message_Key, so the GELF plugin changes it to short_message.
The Kubernetes filter adds the host name.
Timestamp is generated.
Any custom field (not present in the GELF Payload Specification) is prefixed by an underscore.
Finally, this is what our Graylog server input sees:
The influxdb output plugin lets you flush your records into an InfluxDB time series database. The following instructions assume that you have a fully operational InfluxDB service running in your system.
The InfluxDB output plugin supports TLS/SSL; for more details about the properties available and general configuration, please refer to the TLS/SSL section.
In order to start inserting records into an InfluxDB service, you can run the plugin from the command line or through the configuration file:
The influxdb plugin can read the parameters from the command line in two ways: through the -p argument (property) or by setting them directly through the service URI. The URI format is the following:
Using the format specified, you could start Fluent Bit through:
In your main configuration file append the following Input & Output sections:
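A minimal sketch of those sections; the host and database are placeholders:

```
[INPUT]
    Name cpu
    Tag  cpu

[OUTPUT]
    Name     influxdb
    Match    *
    Host     127.0.0.1
    Port     8086
    Database fluentbit
```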
Basic example of Tag_Keys usage:
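A sketch of a Tag_Keys setup, assuming an Apache access log parsed with an apache2 parser so that the records contain method and path keys:

```
[INPUT]
    Name   tail
    Tag    apache.access
    Parser apache2
    Path   /var/log/apache2/access.log

[OUTPUT]
    Name     influxdb
    Match    *
    Host     127.0.0.1
    Port     8086
    Database fluentbit
    # store these record keys as InfluxDB tags instead of fields
    Tag_Keys method path
```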
Using Auto_Tags=On in this example would cause an error, because every parsed field value type is a string. This option is best used with metric-like records where one or more field values are not string typed.
Before starting Fluent Bit, make sure the target database exists on InfluxDB. Using the above example, we will insert the data into a fluentbit database.
Log into InfluxDB console:
Create the database:
Check the database exists:
The following command will gather CPU metrics from the system and send the data to InfluxDB database every five seconds:
Note that all records coming from the cpu input plugin have the tag cpu; this tag is used to generate the measurement in InfluxDB.
From InfluxDB console, choose your database:
Now query some specific fields:
The CPU input plugin gathers more metrics per CPU core; in the above example we just selected three specific metrics. The following query will give a full result:
Query tagged keys:
And now query method key values:
LogDNA is an intuitive cloud-based log management system that provides an easy interface to query your logs once they are stored.
The Fluent Bit logdna output plugin allows you to send your logs or events to a LogDNA compliant service like:
Before getting started with the plugin configuration, make sure to obtain the proper account to get access to the service. You can start with a free trial at the following link:
One of the features of the Fluent Bit + LogDNA integration is the ability to auto-enrich each record with further context.
When the plugin processes each record (or log), it tries to look up specific key names that might contain context for the record in question. The following table describes the keys and the discovery logic:
The following configuration example will emit a dummy example record and ingest it in LogDNA. Copy and paste the following content into a file called logdna.conf:
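A sketch of that file; the API key is a placeholder and the tags match the aa and bb tags mentioned below:

```
[SERVICE]
    Flush 1

[INPUT]
    Name    dummy
    Dummy   {"log": "a dummy log line", "app": "fluent-bit-test"}
    Samples 1

[OUTPUT]
    Name    logdna
    Match   *
    api_key YOUR_LOGDNA_API_KEY
    tags    aa,bb
```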
Run Fluent Bit with the new configuration file:
Fluent Bit output:
Your record will be available and visible in your LogDNA dashboard after a few seconds.
In your LogDNA dashboard, go to the top filters and mark the Tags aa and bb; then you will be able to see your records as in the example below:
Send logs, data, metrics to Amazon S3
The Amazon S3 output plugin allows you to ingest your records into the S3 cloud object store.
The plugin can upload data to S3 using the multipart upload API or using S3 PutObject. Multipart is the default and is recommended; Fluent Bit will stream data in a series of 'parts'. This limits the amount of data it has to buffer on disk at any point in time. By default, every time 5 MiB of data have been received, a new 'part' will be uploaded. The plugin can create files up to gigabytes in size from many small chunks/parts using the multipart API. All aspects of the upload process are configurable using the configuration options.
The plugin allows you to specify a maximum file size, and a timeout for uploads. A file will be created in S3 when the max size is reached, or the timeout is reached, whichever comes first.
Records are stored in files in S3 as newline delimited JSON.
In Fluent Bit, all logs have an associated tag. The s3_key_format option lets you inject the tag into the S3 key using the following syntax:
$TAG => the full tag
$TAG[n] => the nth part of the tag (index starting at zero). This syntax is copied from the rewrite tag filter. By default, "parts" of the tag are separated with dots, but you can change this with s3_key_format_tag_delimiters.
In the example below, assume the date is January 1st, 2020 00:00:00 and the tag associated with the logs in question is my_app_name-logs.prod.
With the delimiters as . and -, the tag will be split into parts as follows:
$TAG[0] = my_app_name
$TAG[1] = logs
$TAG[2] = prod
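A key format consistent with the result below would look like the following (an assumed example, since the original value is not shown here):

```
s3_key_format /$TAG[2]/$TAG[0]/%Y/%m/%d/%H/%M/%S
```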
So the key in S3 will be prod/my_app_name/2020/01/01/00/00/00.
The store_dir is used to temporarily store data before it is uploaded. If Fluent Bit is stopped suddenly, it will try to send all data and complete all uploads before it shuts down. If it cannot send some data, on restart it will look in the store_dir for existing data and will try to send it.
Multipart uploads are ideal for most use cases because they allow the plugin to upload data in small chunks over time. For example, a 1 GB file can be created from 200 5 MB chunks. While the file size in S3 will be 1 GB, only 5 MB will be buffered on disk at any one point in time.
There is one minor drawback to multipart uploads: the file and data will not be visible in S3 until the upload is completed with a CompleteMultipartUpload call. The plugin will attempt to make this call whenever Fluent Bit is shut down to ensure your data is available in S3. It will also store metadata about each upload in the store_dir, ensuring that uploads can be completed when Fluent Bit restarts (assuming it has access to persistent disk and the store_dir files will still be present on restart).
If you run Fluent Bit in an environment without persistent disk, or without the ability to restart Fluent Bit and give it access to the data stored in the store_dir from previous executions, some considerations apply. This might occur if you run Fluent Bit on AWS Fargate.
In these situations, we recommend using the PutObject API, and sending data frequently, to avoid local buffering as much as possible. This will limit data loss in the event Fluent Bit is killed unexpectedly.
The following settings are recommended for this use case:
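A sketch of such settings; the bucket and region are placeholders, and the small file size plus short timeout keep local buffering to a minimum:

```
[OUTPUT]
    Name            s3
    Match           *
    bucket          your-bucket
    region          us-east-1
    # send data frequently via PutObject to minimize local buffering
    use_put_object  On
    total_file_size 1M
    upload_timeout  1m
```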
In order to send records into Amazon S3, you can run the plugin from the command line or through the configuration file.
The s3 plugin can read the parameters from the command line through the -p argument (property), for example:
In your main configuration file append the following Output section:
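A minimal sketch of that Output section; the bucket, region, and store_dir are placeholders:

```
[OUTPUT]
    Name            s3
    Match           *
    bucket          your-bucket
    region          us-east-1
    store_dir       /var/fluent-bit/s3-buffer
    total_file_size 50M
    upload_timeout  10m
```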
An example that uses PutObject instead of multipart:
Amazon distributes a container image with Fluent Bit and these plugins.
github.com/aws/aws-for-fluent-bit
You can use our SSM Public Parameters to find the Amazon ECR image URI in your region:
For more see the AWS for Fluent Bit github repo.
The http output plugin lets you flush your records into an HTTP endpoint. For now the functionality is pretty basic and it issues a POST request with the data records in MessagePack (or JSON) format.
The HTTP output plugin supports TLS/SSL; for more details about the properties available and general configuration, please refer to the TLS/SSL section.
In order to insert records into an HTTP server, you can run the plugin from the command line or through the configuration file:
The http plugin can read the parameters from the command line in two ways: through the -p argument (property) or by setting them directly through the service URI. The URI format is the following:
Using the format specified, you could start Fluent Bit through:
In your main configuration file, append the following Input & Output sections:
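A minimal sketch of those sections; the host, port, and URI are placeholders:

```
[INPUT]
    Name cpu
    Tag  cpu

[OUTPUT]
    Name   http
    Match  *
    host   192.168.2.3
    port   80
    uri    /something
    format json
```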
By default, the URI becomes the tag of the message and the original tag is ignored. To retain the tag, multiple configuration sections have to be made, each flushing to a different URI.
Another approach we also support is sending the original message tag in a configurable header. It's up to the receiver to do what it wants with that header field: parse it and use it as the tag, for example.
To configure this behaviour, add this config:
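A sketch of that configuration; the receiver host, port, URI, and header name are placeholders:

```
[OUTPUT]
    Name       http
    Match      *
    host       fluentd.example.com
    port       9880
    uri        /app.log
    format     json_lines
    # carry the original Fluent Bit tag in this HTTP header
    header_tag FLUENT-TAG
```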
Provided you are using Fluentd as the data receiver, you can combine in_http and out_rewrite_tag_filter to make use of this HTTP header.
Notice how we override the tag, which comes from the URI path, with our custom header.
Suggested configuration for Sumo Logic using json_lines with iso8601 timestamps. The PrivateKey is specific to a configured HTTP collector.
A sample Sumo Logic query for the CPU input (requires the json_lines format with the iso8601 date format for the timestamp field).
The kafka-rest output plugin lets you flush your records into a Kafka REST Proxy server. The following instructions assume that you have a fully operational Kafka REST Proxy and Kafka services running in your environment.
The Kafka REST Proxy output plugin supports TLS/SSL; for more details about the properties available and general configuration, please refer to the TLS/SSL section.
In order to insert records into a Kafka REST Proxy service, you can run the plugin from the command line or through the configuration file:
The kafka-rest plugin can read the parameters from the command line through the -p argument (property), for example:
In your main configuration file append the following Input & Output sections:
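A minimal sketch of those sections; the proxy address and topic are placeholders:

```
[INPUT]
    Name cpu
    Tag  cpu

[OUTPUT]
    Name  kafka-rest
    Match *
    Host  127.0.0.1
    Port  8082
    Topic fluent-bit
```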
Forward is the protocol used by Fluentd to route messages between peers. The forward output plugin provides interoperability between Fluent Bit and Fluentd. There are no configuration steps required besides specifying where Fluentd is located; it can be on the local host or on a remote machine.
This plugin offers two different transports and modes:
Forward (TCP): It uses a plain TCP connection.
Secure Forward (TLS): when TLS is enabled, the plugin switches to Secure Forward mode.
The following parameters are mandatory for either Forward or Secure Forward modes:
When using Secure Forward mode, TLS mode is required to be enabled. The following additional configuration parameters are available:
Before proceeding, make sure that Fluentd is installed in your system; if that's not the case, please refer to the Fluentd Installation document and go ahead with that.
Once Fluentd is installed, create the following configuration file example that will allow us to stream data into it:
That configuration file specifies that it will listen for TCP connections on port 24224 through the forward input type. Then, for every message with a fluent_bit TAG, it will print the message to the standard output.
In one terminal launch Fluentd specifying the new configuration file created (in_fluent-bit.conf):
Now that Fluentd is ready to receive messages, we need to specify where the forward output plugin will flush the information using the following format:
If the TAG parameter is not set, the plugin will set the tag as fluent_bit. Keep in mind that TAG is important for routing rules inside Fluentd.
Using the CPU input plugin as an example we will flush CPU metrics to Fluentd:
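An equivalent configuration-file form of that command, assuming Fluentd is listening on 127.0.0.1:24224:

```
[INPUT]
    Name cpu
    Tag  fluent_bit

[OUTPUT]
    Name  forward
    Match *
    Host  127.0.0.1
    Port  24224
```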
Now on the Fluentd side, you will see the CPU metrics gathered in the last seconds:
So we gathered CPU metrics and flushed them out to Fluentd properly.
DISCLAIMER: the following example does not cover the generation of certificates for proper usage in production environments.
Secure Forward aims to provide a secure channel of communication with the remote Fluentd service using TLS. Below there is a minimalist configuration for testing purposes.
Paste this content in a file called flb.conf:
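A sketch of flb.conf; the shared key, hostname, and target address are placeholders, and certificate verification is disabled here only because this is a test setup:

```
[SERVICE]
    Flush     5
    Daemon    off
    Log_Level info

[INPUT]
    Name cpu
    Tag  fluent_bit

[OUTPUT]
    Name          forward
    Match         *
    Host          127.0.0.1
    Port          24224
    Shared_Key    secret
    Self_Hostname flb.local
    tls           on
    # testing only: do not disable verification in production
    tls.verify    off
```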
Paste this content in a file called fld.conf:
If you're using Fluentd v1, set it up as below:
Start Fluentd:
Start Fluent Bit:
After five seconds, Fluent Bit will write the records to Fluentd. In Fluentd output you will see a message like this:
FlowCounter is a protocol to count records. The flowcounter output plugin lets you count records and their size.
The plugin supports the following configuration parameters:
You can run the plugin from the command line or through the configuration file:
From the command line you can let Fluent Bit count records with the following options:
In your main configuration file append the following Input & Output sections:
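A minimal sketch of those sections; the Unit value assumed here controls the counting interval:

```
[INPUT]
    Name cpu
    Tag  cpu

[OUTPUT]
    Name  flowcounter
    Match *
    Unit  second
```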
Once Fluent Bit is running, you will see the reports in the output interface similar to this:
The Slack output plugin delivers records or messages to your preferred Slack channel. It formats the outgoing content in JSON format for readability.
This connector uses the Slack Incoming Webhooks feature to post messages to Slack channels. Using this plugin in conjunction with the Stream Processor is a good combination for alerting.
Before configuring this plugin, make sure to set up your Incoming Webhook. For detailed step-by-step instructions, review the following official document:
Once you have obtained the Webhook address you can place it in the configuration below.
Get started quickly with this configuration file:
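A minimal sketch of that file; the webhook URL is a placeholder that you must replace with the one obtained above:

```
[INPUT]
    Name cpu

[OUTPUT]
    Name    slack
    Match   *
    webhook https://hooks.slack.com/services/XXXX/YYYY/ZZZZ
```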
The null output plugin just throws away events.
The plugin doesn't support configuration parameters.
You can run the plugin from the command line or through the configuration file:
From the command line you can let Fluent Bit throw away events with the following options:
In your main configuration file append the following Input & Output sections:
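A minimal sketch of those sections:

```
[INPUT]
    Name cpu
    Tag  cpu

[OUTPUT]
    Name  null
    Match *
```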
The nats output plugin lets you flush your records into a NATS Server endpoint. The following instructions assume that you have a fully operational NATS Server in place.
In order to flush records, the nats plugin requires two parameters:
In order to override the default configuration values, the plugin uses the optional Fluent Bit network address format, e.g:
Fluent Bit only requires knowing that it needs to use the nats output plugin; if no extra information is given, it will use the default values specified in the above table.
As described above, the target service and storage point can be changed, e.g:
For every set of records flushed to a NATS Server, Fluent Bit uses the following JSON format:
Each record is an individual entity represented in a JSON array that contains a UNIX_TIMESTAMP and a JSON map with a set of key/values. A summarized output of the CPU input plugin will look like this:
New Relic is a data management platform that gives you real-time insights into your data for developers, operations and management teams.
The Fluent Bit nrlogs output plugin allows you to send your logs to the New Relic service.
Before getting started with the plugin configuration, make sure to obtain the proper account to get access to the service. You can register and start with a free trial at the following link:
The following configuration example will emit a dummy example record and ingest it in New Relic. Copy and paste the following content into a file called newrelic.conf:
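A sketch of that file; the API key is a placeholder:

```
[SERVICE]
    Flush 1

[INPUT]
    Name  dummy
    Dummy {"message": "a simple test"}

[OUTPUT]
    Name    nrlogs
    Match   *
    api_key YOUR_NEW_RELIC_API_KEY
```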
Run Fluent Bit with the new configuration file:
Fluent Bit output:
The Stackdriver output plugin lets you ingest your records into the Google Cloud Stackdriver service.
Before getting started with the plugin configuration, make sure to obtain the proper credentials to get access to the service. We strongly recommend using a common JSON credentials file, reference link:
Your goal is to obtain a credentials JSON file that will be used later by Fluent Bit Stackdriver output plugin.
If you are using a Google Cloud Credentials File, the following configuration is enough to get started:
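A minimal sketch of such a configuration; the credentials file path is a placeholder:

```
[OUTPUT]
    Name                       stackdriver
    Match                      *
    google_service_credentials /path/to/my_google_service_credentials.json
```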
Example configuration file for k8s resource type:
local_resource_id is used by the Stackdriver output plugin to set the labels field for different k8s resource types. The Stackdriver plugin will try to find the local_resource_id field in the log entry. If there is no logging.googleapis.com/local_resource_id field in the log, the plugin will then construct it by using the tag value of the log.
The local_resource_id should be in format:
k8s_container.<namespace_name>.<pod_name>.<container_name>
k8s_node.<node_name>
k8s_pod.<namespace_name>.<pod_name>
This implies that if there is no local_resource_id in the log entry then the tag of the logs should match this format. Note that we have an option tag_prefix, so it is not mandatory to use k8s_container (or k8s_node/k8s_pod) as the prefix for the tag.
An upstream connection error means Fluent Bit was not able to reach Google services; the error looks like this:
This is caused by a network issue in the environment where Fluent Bit is running. Make sure that from the Host, Container or Pod you can reach the following Google endpoints:
The error looks like this:
Do the following checks:
If the log entry does not contain the local_resource_id field, does the tag of the log match the format?
If tag_prefix is configured, does the prefix of tag specified in the input plugin match the tag_prefix?
Other implementations
The Syslog output plugin allows you to deliver messages to Syslog servers; it supports RFC3164 and RFC5424 formats through different transports such as UDP, TCP or TLS.
As of Fluent Bit v1.5.3, the configuration is very strict in the sense that you must be aware of the structure of your original record, so you can configure the plugin to use specific keys to compose your outgoing Syslog message.
Future versions of Fluent Bit are expanding this plugin feature set to support better handling of keys and message composing.
Get started quickly with this configuration file:
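A sketch of such a file; the server address and the key mappings are placeholders that must match your record structure:

```
[INPUT]
    Name cpu

[OUTPUT]
    Name               syslog
    Match              *
    host               syslog.example.com
    port               514
    mode               udp
    syslog_format      rfc5424
    # record key that will be used as the Syslog message body
    syslog_message_key log
```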
Loki is a multi-tenant log aggregation system inspired by Prometheus. It is designed to be very cost effective and easy to operate.
The Fluent Bit loki built-in output plugin allows you to send your logs or events to a Loki service. It supports data enrichment with Kubernetes labels, custom label keys and Tenant ID, among others.
Loki stores the record logs inside Streams; a stream is defined by a set of labels, and at least one label is required.
Fluent Bit implements a flexible mechanism to set labels by using fixed key/value pairs of text, but also allows you to set as labels certain keys that exist as part of the records that are being processed. Consider the following JSON record (pretty printed for readability):
If you decide that your Loki Stream will be composed of two labels called job and the value of the record key called stream, your labels configuration property might look as follows:
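A sketch of that property inside the output section (the fixed job value shown is only an example):

```
[OUTPUT]
    Name   loki
    Match  *
    labels job=fluentbit, $stream
```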
When processing the above configuration, internally the final labels for the stream in question become:
Another feature of label management is the ability to provide custom key names; using the same record accessor pattern, we can specify the key name manually and let the value be populated automatically at runtime, e.g.:
When processing that new configuration, the internal labels will be:
The label_keys property
The additional configuration property called label_keys allows you to specify multiple record keys that need to be placed as part of the outgoing Stream Labels. Yes, this is a similar feature to the one explained above in the labels property. Consider this another way to set a record key in the Stream, but with the limitation that you cannot use a custom name for the key value.
The following configuration examples generate the same Stream Labels:
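For example, a sketch using label_keys:

```
[OUTPUT]
    Name       loki
    Match      *
    labels     job=fluentbit
    label_keys $stream
```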
The above configuration accomplishes the same as this one:
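A sketch of the equivalent labels-only form:

```
[OUTPUT]
    Name   loki
    Match  *
    labels job=fluentbit, $stream
```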
Both will generate the following Stream labels:
Note that if you are running in a Kubernetes environment, you might want to enable the option auto_kubernetes_labels, which will auto-populate the streams with the Pod labels for you. Consider the following configuration:
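A sketch of that option in use:

```
[OUTPUT]
    Name                   loki
    Match                  *
    labels                 job=fluentbit
    # also attach the Kubernetes Pod labels as stream labels
    auto_kubernetes_labels on
```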
Based on the JSON example provided above, the internal stream labels will be:
This plugin inherits core Fluent Bit features to customize the network behavior and optionally enable TLS in the communication channel. For more details about the specific options available, refer to the following articles:
Note that all options mentioned in the articles above must be enabled in the plugin configuration in question.
The following configuration example will emit a dummy example record and ingest it in Loki. Copy and paste the following content into a file called out_loki.conf:
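A sketch of that file; the Loki address is a placeholder, and the dummy record layout is only an assumption consistent with the nested sub/stream example discussed above:

```
[SERVICE]
    Flush 1

[INPUT]
    Name  dummy
    Dummy {"key": 1, "sub": {"stream": "stdout", "id": "some id"}}

[OUTPUT]
    Name       loki
    Match      *
    host       127.0.0.1
    port       3100
    labels     job=fluentbit
    label_keys $sub['stream']
```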
Run Fluent Bit with the new configuration file:
Fluent Bit output:
The stdout output plugin prints the data received through the input plugin to the standard output. Its usage is very simple, as follows:
We have specified to gather usage metrics and print them out to the standard output in a human readable way:
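A configuration-file equivalent might look like this sketch:

```
[INPUT]
    Name cpu

[OUTPUT]
    Name  stdout
    Match *
```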
No more, no less, it just works.
The tcp output plugin lets you send records to a remote TCP server. The payload can be formatted in different ways as required.
The following parameters are available to configure a secure channel connection through TLS:
Run the following in a separate terminal; netcat will start listening for messages on TCP port 5170:
Start Fluent Bit
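A sketch of a matching configuration; the address mirrors the netcat listener above and json_lines is one of the supported formats:

```
[INPUT]
    Name cpu

[OUTPUT]
    Name   tcp
    Match  *
    Host   127.0.0.1
    Port   5170
    Format json_lines
```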
No more, no less, it just works.
Send logs to Splunk HTTP Event Collector
The Splunk output plugin lets you ingest your records into a Splunk service through the HTTP Event Collector (HEC) interface.
To get more details about how to set up the HEC in Splunk, please refer to the following documentation:
The Splunk output plugin supports TLS/SSL; for more details about the properties available and general configuration, please refer to the TLS/SSL section.
In order to insert records into a Splunk service, you can run the plugin from the command line or through the configuration file:
The splunk plugin can read the parameters from the command line through the -p argument (property), for example:
In your main configuration file append the following Input & Output sections:
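A sketch of those sections; the HEC host and token are placeholders:

```
[INPUT]
    Name cpu
    Tag  cpu

[OUTPUT]
    Name         splunk
    Match        *
    Host         127.0.0.1
    Port         8088
    Splunk_Token YOUR_HEC_TOKEN
    TLS          On
    TLS.Verify   Off
```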
By default, the Splunk output plugin nests the record under the event key in the payload sent to the HEC. It will also append the time of the record to a top-level time key.
If you would like to customize any of the Splunk event metadata, such as the host or target index, you can set Splunk_Send_Raw On in the plugin configuration, and add the metadata as keys/values in the record. Note: with Splunk_Send_Raw enabled, you are responsible for creating and populating the event section of the payload.
For example, to add a custom index and hostname:
This will create a payload that looks like:
If the option splunk_send_raw has been enabled, the user must take care to put all log details in the event field, and only specify fields known to Splunk in the top level event; if there is a mismatch, Splunk will return an HTTP error 400.
Consider the following example:
splunk_send_raw off
splunk_send_raw on
For up to date information about the valid keys in the top level object, refer to the Splunk documentation:
The td output plugin lets you flush your records into the Treasure Data cloud service.
The plugin supports the following configuration parameters:
In order to start inserting records into Treasure Data, you can run the plugin from the command line or through the configuration file:
Ideally you don't want to expose your API key on the command line; using a configuration file is highly desirable.
In your main configuration file append the following Input & Output sections:
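A sketch of those sections; the API key, database and table names are placeholders:

```
[INPUT]
    Name cpu
    Tag  cpu

[OUTPUT]
    Name     td
    Match    *
    API      YOUR_TREASURE_DATA_API_KEY
    Database fluentbit
    Table    cpu_samples
```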
Required - Your Datadog API key.
Optional - The tags you want to assign to your logs in Datadog.
{property} can be any librdkafka configuration property.
Github reference:
Stackdriver officially supports a .
We plan to support some . Use cases of special fields is .
As you can see, the label job has the value fluentbit and the second label is configured to access the nested map called sub, targeting the value of the key stream. Note that the second label name must start with a $; that means it's a pattern, so it provides you the ability to retrieve values from nested maps by using the key names.
: timeouts, keepalive and source address
: all about TLS configuration and certificates
We have specified to gather usage metrics and send them in JSON lines mode to a remote end-point using netcat service, e.g:
For more information on the Splunk HEC payload format and all event metadata Splunk accepts, see here:
Key | Description | default |
---|---|---|
Match | Pattern to match which tags of logs to be outputted by this plugin | |
Host | IP address or hostname of the target Graylog server | 127.0.0.1 |
Port | The port that your Graylog GELF input is listening on | 12201 |
Mode | The protocol to use (tls, tcp or udp) | udp |
Gelf_Short_Message_Key | A short descriptive message (MUST be set in GELF) | short_message |
Gelf_Timestamp_Key | Your log timestamp (SHOULD be set in GELF) | timestamp |
Gelf_Host_Key | Key whose value is used as the name of the host, source or application that sent this message. (MUST be set in GELF) | host |
Gelf_Full_Message_Key | Key to use as the long message that can i.e. contain a backtrace. (Optional in GELF) | full_message |
Gelf_Level_Key | Key to be used as the log level. Its value must be in standard syslog levels (between 0 and 7). (Optional in GELF) | level |
Packet_Size | If transport protocol is udp, you can set the size of packets to be sent. | 1420 |
Compress | If transport protocol is udp, you can set this if you want your UDP packets to be compressed. | true |
Key | Description | default |
---|---|---|
Host | IP address or hostname of the target InfluxDB service | 127.0.0.1 |
Port | TCP port of the target InfluxDB service | 8086 |
Database | InfluxDB database name where records will be inserted | fluentbit |
Sequence_Tag | The name of the tag whose value is incremented for consecutive simultaneous events. | _seq |
HTTP_User | Optional username for HTTP Basic Authentication | |
HTTP_Passwd | Password for user defined in HTTP_User | |
Tag_Keys | Space separated list of keys that need to be tagged | |
Auto_Tags | Automatically tag keys where the value is a string. This option takes a boolean value: True/False, On/Off. | Off |
Key | Description | Default |
---|---|---|
logdna_host | LogDNA API host address | logs.logdna.com |
logdna_port | LogDNA TCP port | 443 |
api_key | API key to get access to the service. This property is mandatory. | |
hostname | Name of the local machine or device where Fluent Bit is running. When this value is not set, Fluent Bit looks up the hostname and auto-populates the value. If it cannot be found, an unknown value will be set instead. | |
mac | MAC address. This value is optional. | |
ip | IP address of the local hostname. This value is optional. | |
tags | A list of comma separated strings to group records in LogDNA and simplify the query with filters. | |
file | Optional name of a file being monitored. Note that this value is only set if the record does not contain a reference to it. | |
app | Name of the application. This value is auto-discovered on each record; if not found, the default value is used. | Fluent Bit |
Key | Description |
---|---|
level | If the record contains a key called level or severity, it will populate the context level key with that value. If not found, the context key is not set. |
file | If the record contains a key called file, it will populate the context file with the value found; otherwise, if the plugin configuration provided a file property, that value will be used instead (see table above). |
app | If the record contains a key called app, it will populate the context app with the value found; otherwise it will use the value set for app in the configuration property (see table above). |
meta | If the record contains a key called meta, it will populate the context meta with the value found. |
Key | Description | Default |
region | The AWS region of your S3 bucket. | us-east-1 |
bucket | S3 Bucket name. | None |
json_date_format | Specifies the format of the date. Supported formats are double, iso8601 and epoch. | iso8601 |
total_file_size | Specifies the size of files in S3. Maximum size is 50G, minimum is 1M. | 100M |
upload_chunk_size | The size of each 'part' for multipart uploads. Max: 50M. | 5,242,880 bytes |
upload_timeout | Whenever this amount of time has elapsed, Fluent Bit will complete an upload and create a new file in S3. For example, set this value to 60m and you will get a new file every hour. | 10m |
store_dir | Directory to locally buffer data before sending. When multipart uploads are used, data will only be buffered until the upload_chunk_size is reached. | /tmp/fluent-bit/s3 |
s3_key_format | Format string for keys in S3. This option supports strftime time formatters and a syntax for selecting parts of the Fluent log tag, inspired by the rewrite_tag filter. Add $TAG in the format string to insert the full log tag; add $TAG[0] to insert the first part of the tag in the S3 key. The tag is split into 'parts' using the characters specified with the s3_key_format_tag_delimiters option. See the in-depth examples and tutorial in the documentation. | /fluent-bit-logs/$TAG/%Y/%m/%d/%H/%M/%S |
s3_key_format_tag_delimiters | A series of characters which will be used to split the tag into 'parts' for use with the s3_key_format option. See the in-depth examples and tutorial in the documentation. | . |
use_put_object | Use the S3 PutObject API instead of the multipart upload API. | false |
role_arn | ARN of an IAM role to assume (e.g. for cross-account access). | None |
endpoint | Custom endpoint for the S3 API. | None |
sts_endpoint | Custom endpoint for the STS API. | None |
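For reference, a sketch of an s3 output section built from the options above might look like this; the bucket name is a placeholder and the key format simply reuses the documented $TAG and strftime tokens.

```
[OUTPUT]
    Name            s3
    Match           *
    region          us-east-1
    # Placeholder bucket, replace with your own
    bucket          my-log-bucket
    total_file_size 100M
    upload_timeout  10m
    store_dir       /tmp/fluent-bit/s3
    s3_key_format   /fluent-bit-logs/$TAG/%Y/%m/%d/%H/%M/%S
```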
Key | Description | default |
host | IP address or hostname of the target HTTP Server. | 127.0.0.1 |
http_User | Basic Auth Username. |
http_Passwd | Basic Auth Password. Requires HTTP_User to be set. |
port | TCP port of the target HTTP Server. | 80 |
Proxy | Specify an HTTP Proxy. The expected format of this value is http://host:port. Note that https is not supported yet. Please consider not setting this and using the HTTP_PROXY environment variable instead, which supports both http and https. |
uri | Specify an optional HTTP URI for the target web server, e.g: /something | / |
compress | Set payload compression mechanism. Option available is 'gzip'. |
format | Specify the data format to be used in the HTTP request body; by default it uses msgpack. Other supported formats are json, json_stream, json_lines and gelf. | msgpack |
allow_duplicated_headers | Specify if duplicated headers are allowed. If a duplicated header is found, the latest key/value set is preserved. | true |
header_tag | Specify an optional HTTP header field for the original message tag. |
header | Add an HTTP header key/value pair. Multiple headers can be set. |
json_date_key | Specify the name of the time key in the output record. To disable the time key just set the value to false. | date |
json_date_format | Specify the format of the date. Supported formats are double, epoch and iso8601 (eg: 2018-05-30T09:39:52.000681Z). | double |
gelf_timestamp_key | Specify the key to use for the timestamp in gelf format. |
gelf_host_key | Specify the key to use for the host in gelf format. |
gelf_short_message_key | Specify the key to use as the short message in gelf format. |
gelf_full_message_key | Specify the key to use for the full message in gelf format. |
gelf_level_key | Specify the key to use for the level in gelf format. |
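As an illustrative sketch, the following http output section posts all records as JSON lines to a hypothetical endpoint; the host, URI and custom header are placeholder assumptions.

```
[OUTPUT]
    Name             http
    Match            *
    # Placeholder HTTP endpoint and path
    host             192.168.2.3
    port             80
    uri              /ingest
    format           json_lines
    json_date_key    timestamp
    json_date_format iso8601
    # Placeholder custom header
    header           X-Api-Key my-key
```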
Key | Description | default |
Host | IP address or hostname of the target Kafka REST Proxy server. | 127.0.0.1 |
Port | TCP port of the target Kafka REST Proxy server. | 8082 |
Topic | Set the Kafka topic. | fluent-bit |
Partition | Set the partition number (optional). |
Message_Key | Set a message key (optional). |
Time_Key | The Time_Key property defines the name of the field that holds the record timestamp. | @timestamp |
Time_Key_Format | Defines the format of the timestamp. | %Y-%m-%dT%H:%M:%S |
Include_Tag_Key | Append the Tag name to the final record. | Off |
Tag_Key | If Include_Tag_Key is enabled, this property defines the key name for the tag. | _flb-key |
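A minimal sketch of a kafka-rest output section using these options could look like the following; it assumes a Kafka REST Proxy reachable on the default port.

```
[OUTPUT]
    Name            kafka-rest
    Match           *
    Host            127.0.0.1
    Port            8082
    Topic           fluent-bit
    Time_Key        @timestamp
    Include_Tag_Key On
    Tag_Key         _flb-key
```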
Key | Description | Default |
Host | Target host where Fluent Bit or Fluentd are listening for Forward messages. | 127.0.0.1 |
Port | TCP Port of the target service. | 24224 |
Time_as_Integer | Set timestamps in integer format; this enables compatibility mode for the Fluentd v0.12 series. | False |
Upstream | If Forward will connect to an Upstream instead of a simple host, this property defines the absolute path of the Upstream configuration file. For more details refer to the Upstream Servers documentation section. |
Tag | Overwrite the tag as we transmit. This allows the receiving pipeline to start fresh, or to attribute the source. |
Send_options | Always send options (with "size"=count of messages). | False |
Require_ack_response | Send the "chunk" option and wait for an "ack" response from the server. This enables at-least-once delivery and lets the receiving server control the rate of traffic. (Requires a Fluentd v0.14.0+ server.) | False |
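For example, a basic forward output section built from the options above might look like this; the aggregator address is a placeholder.

```
[OUTPUT]
    Name            forward
    Match           *
    # Placeholder Fluentd/Fluent Bit aggregator address
    Host            fluentd.example.com
    Port            24224
    Time_as_Integer False
```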
Key | Description | Default |
Shared_Key | A key string known by the remote Fluentd, used for authorization. |
Empty_Shared_Key | Use this option to connect to Fluentd with a zero-length secret. | False |
Username | Specify the username to present to a Fluentd server that enables user_auth. |
Password | Specify the password corresponding to the username. |
Self_Hostname | Default value of the auto-generated certificate common name (CN). | localhost |
tls | Enable or disable TLS support. | Off |
tls.verify | Force certificate validation. | On |
tls.debug | Set TLS debug verbosity level. It accepts the following values: 0 (No debug), 1 (Error), 2 (State change), 3 (Informational) and 4 (Verbose). | 1 |
tls.ca_file | Absolute path to CA certificate file. |
tls.crt_file | Absolute path to Certificate file. |
tls.key_file | Absolute path to private Key file. |
tls.key_passwd | Optional password for tls.key_file file. |
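Combining these security options with the basic forward settings, a sketch of a TLS-enabled, shared-key-authenticated output could look like the following; the host, shared key and hostname are placeholders and assume the receiving Fluentd is configured with a matching secret.

```
[OUTPUT]
    Name          forward
    Match         *
    # Placeholder aggregator address
    Host          fluentd.example.com
    Port          24224
    # Placeholder shared secret, must match the Fluentd side
    Shared_Key    secret
    Self_Hostname flb.local
    tls           On
    tls.verify    On
```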
Key | Description | Default |
Unit | The unit of duration (second/minute/hour/day). | minute |
Key | Description | Default |
Host | Hostname/IP address of the PostgreSQL instance. | - (127.0.0.1) |
Port | PostgreSQL port. | - (5432) |
User | PostgreSQL username. | - (current user) |
Password | Password of the PostgreSQL username. | - |
Database | Database name to connect to. | - (current user) |
Table | Table name where to store data. | - |
Timestamp_Key | Key in the JSON object containing the record timestamp. | date |
Async | Define whether to use async or sync connections. | false |
min_pool_size | Minimum number of connections in async mode. | 1 |
max_pool_size | Maximum number of connections in async mode. | 4 |
cockroachdb | Set to true if the plugin will connect to a CockroachDB. | false |
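As a rough sketch, the snippet below writes records to a PostgreSQL table using the options above; the host, credentials and table name are placeholders.

```
[OUTPUT]
    Name          pgsql
    Match         *
    # Placeholder connection details
    Host          172.17.0.2
    Port          5432
    User          fluentbit
    Password      secret
    Database      fluentbit
    # Placeholder table name
    Table         fluentbit_logs
    Timestamp_Key ts
```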
Key | Description | Default |
host | Domain or IP address of the remote Syslog server. | 127.0.0.1 |
port | TCP or UDP port of the remote Syslog server. | 514 |
mode | Set the desired transport type; the available options are tcp and udp. | udp |
syslog_format | Specify the Syslog protocol format to use; the available options are rfc3164 and rfc5424. | rfc5424 |
syslog_maxsize | Set the maximum size allowed per message. The value must be an integer representing the number of bytes allowed. If no value is provided, the default size is set depending on the protocol version specified by syslog_format. |
syslog_severity_key | Specify the name of the key from the original record that contains the Syslog severity number. This configuration is optional. |
syslog_facility_key | Specify the name of the key from the original record that contains the Syslog facility number. This configuration is optional. |
syslog_hostname_key | Specify the key name from the original record that contains the hostname that generated the message. This configuration is optional. |
syslog_appname_key | Specify the key name from the original record that contains the application name that generated the message. This configuration is optional. |
syslog_procid_key | Specify the key name from the original record that contains the Process ID that generated the message. This configuration is optional. |
syslog_msgid_key | Specify the key name from the original record that contains the Message ID associated to the message. This configuration is optional. |
syslog_sd_key | Specify the key name from the original record that contains the Structured Data (SD) content. This configuration is optional. |
syslog_message_key | Specify the key name that contains the message to deliver. Note that if this key is not set, the delivered message will be empty. |
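To illustrate the options above, here is a minimal sketch of a syslog output section; the server address is a placeholder and the key mappings assume the records actually contain hostname, app and log keys.

```
[OUTPUT]
    Name                syslog
    Match               *
    # Placeholder Syslog server
    host                syslog.example.com
    port                514
    mode                udp
    syslog_format       rfc5424
    # The following mappings assume these keys exist in your records
    syslog_hostname_key hostname
    syslog_appname_key  app
    syslog_message_key  log
```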
Key | Description | default |
Host | Target host where Fluent-Bit or Fluentd are listening for Forward messages. | 127.0.0.1 |
Port | TCP Port of the target service. | 5170 |
Format | Specify the data format to be printed. Supported formats are msgpack, json, json_lines and json_stream. | msgpack |
json_date_key | Specify the name of the time key in the output record. To disable the time key just set the value to false. | date |
json_date_format | Specify the format of the date. Supported formats are double, epoch and iso8601 (eg: 2018-05-30T09:39:52.000681Z) | double |
Key | Description | Default |
tls | Enable or disable TLS support | Off |
tls.verify | Force certificate validation | On |
tls.debug | Set TLS debug verbosity level. It accepts the following values: 0 (No debug), 1 (Error), 2 (State change), 3 (Informational) and 4 (Verbose). | 1 |
tls.ca_file | Absolute path to CA certificate file |
tls.crt_file | Absolute path to Certificate file. |
tls.key_file | Absolute path to private Key file. |
tls.key_passwd | Optional password for tls.key_file file. |
Key | Description | Default |
webhook | Absolute address of the Webhook provided by Slack |
Key | Description | default |
google_service_credentials | Absolute path to a Google Cloud credentials JSON file | Value of environment variable $GOOGLE_SERVICE_CREDENTIALS |
service_account_email | Account email associated to the service. Only available if no credentials file has been provided. | Value of environment variable $SERVICE_ACCOUNT_EMAIL |
service_account_secret | Private key content associated with the service account. Only available if no credentials file has been provided. | Value of environment variable $SERVICE_ACCOUNT_SECRET |
resource | Set resource type of data. Supported resource types: k8s_container, k8s_node, k8s_pod, global and gce_instance. | global, gce_instance |
k8s_cluster_name | The name of the cluster that the container (node or pod based on the resource type) is running in. If the resource type is one of the k8s_container, k8s_node or k8s_pod, then this field is required. |
k8s_cluster_location | The physical location of the cluster that contains the container (node or pod, based on the resource type). If the resource type is one of k8s_container, k8s_node or k8s_pod, then this field is required. |
labels_key | The value of this field is used by the Stackdriver output plugin to find the related labels from jsonPayload and then extract the value of it to set the LogEntry Labels. | logging.googleapis.com/labels |
tag_prefix | Set the tag_prefix used to validate the tag of logs with k8s resource type. Without this option, the tag of the log must be in format of k8s_container(pod/node).* in order to use the k8s_container resource type. Now the tag prefix is configurable by this option (note the ending dot). | k8s_container., k8s_pod., k8s_node. |
severity_key | Specify the name of the key from the original record that contains the severity information. |
parameter | description | default |
host | IP address or hostname of the NATS Server | 127.0.0.1 |
port | TCP port of the target NATS Server | 4222 |
Key | Description | Default |
host | Loki hostname or IP address | 127.0.0.1 |
port | Loki TCP port | 3100 |
http_user | Set HTTP basic authentication user name |
http_passwd | Set HTTP basic authentication password |
tenant_id | Tenant ID used by default to push logs to Loki. If omitted or empty it assumes Loki is running in single-tenant mode and no X-Scope-OrgID header is sent. |
labels | Stream labels for the API request. It can accept multiple comma-separated strings specifying key=value pairs. | job=fluentbit |
label_keys | Optional list of record keys that will be placed as stream labels. This configuration property is for records key only. More details in the Labels section. |
line_format | Format to use when flattening the record to a log line. Valid values are json or key_value. | json |
auto_kubernetes_labels | If set to true, it will add all Kubernetes labels to the Stream labels | off |
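A minimal sketch of a loki output section using these options could look like the following; the extra label and the label_keys entry are placeholders that assume your records contain a container_name key.

```
[OUTPUT]
    Name                   loki
    Match                  *
    host                   127.0.0.1
    port                   3100
    # 'env=dev' is a placeholder stream label
    labels                 job=fluentbit, env=dev
    # Assumes the record contains a 'container_name' key
    label_keys             $container_name
    auto_kubernetes_labels off
```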
Key | Description | default |
Format | Specify the data format to be printed. Supported formats are msgpack, json, json_lines and json_stream. | msgpack |
json_date_key | Specify the name of the time key in the output record. To disable the time key just set the value to false. | date |
json_date_format | Specify the format of the date. Supported formats are double, epoch and iso8601 (eg: 2018-05-30T09:39:52.000681Z) | double |
Key | Description | Default |
base_uri | Full address of the New Relic API end-point. By default the value points to the US end-point; to use the EU end-point, set this key to the EU API address instead. |
api_key | Your key for data ingestion. The API key is also called the ingestion key; you can get more details on how to generate it in the official documentation. From a configuration perspective, either an api_key or a license_key is required. |
license_key | Optional authentication parameter for data ingestion. Note that New Relic suggests using the api_key instead. You can read more about the License Key in the official documentation. |
compress | Set the compression mechanism for the payload. This option allows two values: gzip (enabled by default) or false (disabled). | gzip |
Key | Description | default |
Host | IP address or hostname of the target Splunk service. | 127.0.0.1 |
Port | TCP port of the target Splunk service. | 8088 |
Splunk_Token | Specify the Authentication Token for the HTTP Event Collector interface. |
Splunk_Send_Raw | When enabled, the record keys and values are set in the top level of the map instead of under the event key. note: refer to the Sending Raw Events section below for more details to make this option work properly. | Off |
HTTP_User | Optional username for Basic Authentication on HEC |
HTTP_Passwd | Password for user defined in HTTP_User |
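For reference, a sketch of a splunk output section sending records to a HEC endpoint over TLS might look like this; the host and token are placeholders you must replace with your own.

```
[OUTPUT]
    Name         splunk
    Match        *
    # Placeholder HEC endpoint and token
    Host         splunk.example.com
    Port         8088
    Splunk_Token 11111111-2222-3333-4444-555555555555
    tls          On
    tls.verify   Off
```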
Key | Description | Default |
API | The API key. To obtain it, log in to the Treasure Data console and, in the API keys box, copy the API key hash. |
Database | Specify the name of your target database. |
Table | Specify the name of your target table where the records will be stored. |
Region | Set the service region, available values: US and JP | US |
The BigQuery output plugin is an experimental plugin that allows you to stream records into the Google Cloud BigQuery service. The implementation does not support the following features, which would be expected in a full production version:

* Data deduplication using insertId.
* Template tables using templateSuffix.
Fluent Bit streams data into an existing BigQuery table using a service account that you specify. Therefore, before using the BigQuery output plugin, you must create a service account, create a BigQuery dataset and table, authorize the service account to write to the table, and provide the service account credentials to Fluent Bit.
To stream data into BigQuery, the first step is to create a Google Cloud service account for Fluent Bit:
Fluent Bit does not create datasets or tables for your data, so you must create these ahead of time. You must also grant the service account WRITER
permission on the dataset:
Within the dataset you will need to create a table for the data to reside in. You can follow these instructions to create your table; pay close attention to the schema, as it must match the schema of your output JSON. Unfortunately, since BigQuery does not allow dots in field names, you will need to use a filter to rename the fields produced by many of the standard inputs (e.g. mem or cpu).
The Fluent Bit BigQuery output plugin uses a JSON credentials file for authentication. Download the credentials file by following these instructions:
If you are using a Google Cloud Credentials File, the following configuration is enough to get you started:
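As an illustrative sketch, a minimal output section could look like this; the credentials path, project, dataset and table ids are placeholders to be replaced with your own values.

```
[OUTPUT]
    Name                       bigquery
    Match                      *
    # Placeholder credentials file path and BigQuery identifiers
    google_service_credentials /path/to/my_credentials.json
    project_id                 my-gcp-project
    dataset_id                 my_dataset
    table_id                   fluentbit_logs
```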
Key | Description | default |
Customer_ID | Customer ID or WorkspaceID string. |
Shared_Key | The primary or the secondary Connected Sources client authentication key. |
Log_Type | The name of the event type. | fluentbit |
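As a rough sketch, a Log Analytics output section using these options could look like the following; the Workspace ID and shared key shown are placeholders, not real credentials.

```
[OUTPUT]
    Name        azure
    Match       *
    # Placeholder Workspace ID and client authentication key
    Customer_ID abc12345-6789-aaaa-bbbb-cccccccccccc
    Shared_Key  aGVsbG8gZmx1ZW50IGJpdA==
    Log_Type    fluentbit
```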
Key | Description | default |
google_service_credentials | Absolute path to a Google Cloud credentials JSON file. | Value of the environment variable $GOOGLE_SERVICE_CREDENTIALS |
project_id | The project id containing the BigQuery dataset to stream into. | The value of project_id in the credentials file |
dataset_id | The dataset id of the BigQuery dataset to write into. This dataset must exist in your project. |
table_id | The table id of the BigQuery table to write into. This table must exist in the specified dataset and the schema must match the output. |