FlowCounter is a simple way to count records. The flowcounter output plugin counts the records it receives and their size.
The plugin supports the following configuration parameters:
You can run the plugin from the command line or through the configuration file:
From the command line you can let Fluent Bit count records with the following options:
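For example, a minimal invocation might look like this (the cpu input plugin is used here only as an assumed data source):

```bash
$ fluent-bit -i cpu -o flowcounter
```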
In your main configuration file append the following Input & Output sections:
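A minimal configuration sketch along those lines, assuming the cpu input plugin as the source:

```
[INPUT]
    Name cpu
    Tag  cpu

[OUTPUT]
    Name  flowcounter
    Match *
    Unit  minute
```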
Once Fluent Bit is running, you will see the reports in the output interface similar to this:
| Key | Description | Default |
| --- | --- | --- |
| Unit | The unit of duration. (second/minute/hour/day) | minute |
The azure output plugin allows you to ingest your records into the Azure Log Analytics service.
To get more details about how to set up Azure Log Analytics, please refer to the following documentation: Azure Log Analytics
In order to insert records into Azure Log Analytics, you can run the plugin from the command line or through the configuration file:
The azure plugin can read the parameters from the command line through the -p argument (property), e.g.:
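A hedged sketch of such an invocation; the customer_id and shared_key values are placeholders you must replace with your own workspace credentials:

```bash
$ fluent-bit -i cpu -o azure -p customer_id=YOUR_WORKSPACE_ID -p shared_key=YOUR_SHARED_KEY -m '*'
```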
In your main configuration file append the following Input & Output sections:
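A minimal configuration sketch with placeholder credentials, assuming the cpu input plugin:

```
[INPUT]
    Name cpu
    Tag  cpu

[OUTPUT]
    Name        azure
    Match       *
    Customer_ID YOUR_WORKSPACE_ID
    Shared_Key  YOUR_SHARED_KEY
    Log_Type    fluentbit
```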
| Key | Description | default |
| --- | --- | --- |
| Customer_ID | Customer ID or WorkspaceID string. | |
| Shared_Key | The primary or the secondary Connected Sources client authentication key. | |
| Log_Type | The name of the event type. | fluentbit |
Counter is a very simple plugin that counts how many records it receives upon flush time. The plugin output looks as follows:
You can run the plugin from the command line or through the configuration file:
From the command line you can let Fluent Bit count records with the following options:
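For example, assuming the cpu input plugin as the data source:

```bash
$ fluent-bit -i cpu -o counter
```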
In your main configuration file append the following Input & Output sections:
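A minimal configuration sketch, again assuming the cpu input plugin:

```
[INPUT]
    Name cpu
    Tag  cpu

[OUTPUT]
    Name  counter
    Match *
```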
Once Fluent Bit is running, you will see the reports in the output interface similar to this:
The null output plugin just throws away events.
The plugin doesn't support configuration parameters.
You can run the plugin from the command line or through the configuration file:
From the command line you can let Fluent Bit throw away events with the following options:
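For example, assuming the cpu input plugin as the data source:

```bash
$ fluent-bit -i cpu -o null
```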
In your main configuration file append the following Input & Output sections:
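A minimal configuration sketch:

```
[INPUT]
    Name cpu
    Tag  cpu

[OUTPUT]
    Name  null
    Match *
```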
BigQuery output plugin is an experimental plugin that allows you to stream records into the Google Cloud BigQuery service. The implementation does not support the following, which would be expected in a full production version:
Data deduplication using insertId.
Template tables using templateSuffix.
Fluent Bit streams data into an existing BigQuery table using a service account that you specify. Therefore, before using the BigQuery output plugin, you must create a service account, create a BigQuery dataset and table, authorize the service account to write to the table, and provide the service account credentials to Fluent Bit.
To stream data into BigQuery, the first step is to create a Google Cloud service account for Fluent Bit:
Fluent Bit does not create datasets or tables for your data, so you must create these ahead of time. You must also grant the service account WRITER permission on the dataset:
Within the dataset you will need to create a table for the data to reside in. You can follow these instructions for creating your table. Pay close attention to the schema: it must match the schema of your output JSON. Unfortunately, since BigQuery does not allow dots in field names, you will need to use a filter to rename the fields for many of the standard inputs (e.g., mem or cpu).
The Fluent Bit BigQuery output plugin uses a JSON credentials file for authentication. Download the credentials file by following these instructions:
If you are using a Google Cloud Credentials File, the following configuration is enough to get you started:
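A minimal sketch, where the credentials path, project, dataset and table names are placeholders that must match the resources you created above (the dummy input plugin is used only as an assumed source):

```
[INPUT]
    Name dummy
    Tag  dummy

[OUTPUT]
    Name                       bigquery
    Match                      *
    google_service_credentials /path/to/my_credentials.json
    project_id                 my-project
    dataset_id                 my_dataset
    table_id                   my_table
```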
The output plugins define where Fluent Bit should flush the information it gathers from the input. At the moment the available options are the following:
The influxdb output plugin allows you to flush your records into an InfluxDB time series database. The following instructions assume that you have a fully operational InfluxDB service running in your system.
The InfluxDB output plugin supports TLS/SSL; for more details about the properties available and general configuration, please refer to the TLS/SSL section.
In order to start inserting records into an InfluxDB service, you can run the plugin from the command line or through the configuration file:
The influxdb plugin can read the parameters from the command line in two ways: through the -p argument (property), or by setting them directly through the service URI. The URI format is the following:
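The general shape of that URI, where host and port are placeholders:

```
influxdb://host:port
```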
Using the format specified, you could start Fluent Bit through:
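For example, assuming a local InfluxDB instance on the default port and the cpu input plugin:

```bash
$ fluent-bit -i cpu -t cpu -o influxdb://127.0.0.1:8086 -m '*'
```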
In your main configuration file append the following Input & Output sections:
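A configuration sketch equivalent to the command line above:

```
[INPUT]
    Name cpu
    Tag  cpu

[OUTPUT]
    Name     influxdb
    Match    *
    Host     127.0.0.1
    Port     8086
    Database fluentbit
```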
Basic example of Tag_Keys usage:
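A hedged sketch, assuming the incoming records contain a method field that should become an InfluxDB tag:

```
[OUTPUT]
    Name     influxdb
    Match    *
    Host     127.0.0.1
    Port     8086
    Database fluentbit
    Tag_Keys method
```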
Setting Auto_Tags=On in this example would cause an error, because every parsed field value is of string type. This option works best with metric-like records where one or more field values are not strings.
Before starting Fluent Bit, make sure the target database exists on InfluxDB. Using the above example, we will insert the data into a fluentbit database.
Log into InfluxDB console:
Create the database:
Check the database exists:
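A sketch of those three steps using the influx CLI; the database name fluentbit matches the example above:

```
$ influx
> CREATE DATABASE fluentbit
> SHOW DATABASES
```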
The following command will gather CPU metrics from the system and send the data to InfluxDB database every five seconds:
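For instance, a command along these lines (the five-second interval comes from the -f flag):

```bash
$ fluent-bit -i cpu -t cpu -o influxdb://127.0.0.1:8086 -m '*' -f 5
```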
Note that all records coming from the cpu input plugin have the tag cpu; this tag is used to generate the measurement name in InfluxDB.
From InfluxDB console, choose your database:
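For example, using the fluentbit database created above:

```
> USE fluentbit
```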
Now query some specific fields:
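A hedged example; cpu_p, system_p and user_p are field names produced by the cpu input plugin:

```
> SELECT cpu_p, system_p, user_p FROM cpu
```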
The CPU input plugin gathers more metrics per CPU core; in the above example we just selected three specific metrics. The following query will give a full result:
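For instance:

```
> SELECT * FROM cpu
```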
Query tagged keys:
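A sketch using InfluxQL's SHOW TAG KEYS statement:

```
> SHOW TAG KEYS
```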
And now query method key values:
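A hedged sketch, assuming method was one of the keys listed in Tag_Keys:

```
> SHOW TAG VALUES WITH KEY = "method"
```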
The kafka output plugin allows you to ingest your records into an Apache Kafka service. This plugin uses the official librdkafka C library (built-in dependency).
Setting rdkafka.log.connection.close to false and rdkafka.request.required.acks to 1 are examples of recommended settings for librdkafka properties.
In order to insert records into Apache Kafka, you can run the plugin from the command line or through the configuration file:
The kafka plugin can read the parameters from the command line through the -p argument (property), e.g.:
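A sketch, assuming a local broker and the cpu input plugin; the broker address and topic are placeholders:

```bash
$ fluent-bit -i cpu -o kafka -p brokers=127.0.0.1:9092 -p topics=test
```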
In your main configuration file append the following Input & Output sections:
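A configuration sketch equivalent to the command line above:

```
[INPUT]
    Name cpu
    Tag  cpu

[OUTPUT]
    Name    kafka
    Match   *
    Brokers 127.0.0.1:9092
    Topics  test
```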
The nats output plugin allows you to flush your records into a NATS Server end point. The following instructions assume that you have a fully operational NATS Server in place.
In order to flush records, the nats plugin requires two parameters:
In order to override the default configuration values, the plugin uses the optional Fluent Bit network address format, e.g:
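The general shape of that address, where host and port are placeholders:

```
nats://host:port
```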
Fluent Bit only needs to know that it must use the nats output plugin; if no extra information is given, it will use the default values specified in the above table.
As described above, the target service and storage point can be changed, e.g:
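For example, pointing the output at a NATS Server on another host (the address is a placeholder):

```bash
$ fluent-bit -i cpu -o nats://192.168.1.3:4222 -m '*'
```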
For every set of records flushed to a NATS Server, Fluent Bit uses the following JSON format:
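A hedged sketch of that layout, where each entry carries a timestamp and a map of key/values:

```
[
  [UNIX_TIMESTAMP, {"key1": "some value", "key2": "some value"}],
  [UNIX_TIMESTAMP, {"key1": "some value", "key2": "some value"}]
]
```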
Each record is an individual entity represented in a JSON array that contains a UNIX_TIMESTAMP and a JSON map with a set of key/values. A summarized output of the CPU input plugin will look like this:
The es output plugin allows you to flush your records into an Elasticsearch database. The following instructions assume that you have a fully operational Elasticsearch service running in your environment.
The parameters index and type can be confusing if you are new to Elastic; if you have used a common relational database before, they can be compared to the database and table concepts.
Elasticsearch output plugin supports TLS/SSL; for more details about the properties available and general configuration, please refer to the TLS/SSL section.
In order to insert records into an Elasticsearch service, you can run the plugin from the command line or through the configuration file:
The es plugin can read the parameters from the command line in two ways: through the -p argument (property), or by setting them directly through the service URI. The URI format is the following:
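The general shape of that URI, where host, port, index and type are placeholders:

```
es://host:port/index/type
```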
Using the format specified, you could start Fluent Bit through:
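For example, assuming a local Elasticsearch instance and placeholder index/type names:

```bash
$ fluent-bit -i cpu -t cpu -o es://127.0.0.1:9200/my_index/my_type -m '*'
```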
which is similar to doing:
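That is, the same parameters passed as properties:

```bash
$ fluent-bit -i cpu -t cpu -o es -p Host=127.0.0.1 -p Port=9200 -p Index=my_index -p Type=my_type -m '*'
```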
In your main configuration file append the following Input & Output sections:
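A configuration sketch equivalent to the command lines above:

```
[INPUT]
    Name cpu
    Tag  cpu

[OUTPUT]
    Name  es
    Match *
    Host  127.0.0.1
    Port  9200
    Index my_index
    Type  my_type
```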
Some input plugins may generate messages where the field names contain dots. Since Elasticsearch 2.0 this is no longer allowed, so the current es plugin replaces them with an underscore:
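An illustrative sketch of the renaming (the field name is taken from the cpu input plugin, shown in the plugin's debug notation):

```
{"cpu0.p_cpu"=>17.000000}    # original field name with a dot
{"cpu0_p_cpu"=>17.000000}    # field name after the replacement
```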
The http output plugin allows you to flush your records into an HTTP endpoint. For now the functionality is pretty basic: it issues a POST request with the data records in MessagePack (or JSON) format.
HTTP output plugin supports TLS/SSL; for more details about the properties available and general configuration, please refer to the TLS/SSL section.
In order to insert records into an HTTP server, you can run the plugin from the command line or through the configuration file:
The http plugin can read the parameters from the command line in two ways: through the -p argument (property), or by setting them directly through the service URI. The URI format is the following:
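The general shape of that URI, where host, port and the URI path are placeholders:

```
http://host:port/something
```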
Using the format specified, you could start Fluent Bit through:
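For example, assuming an HTTP endpoint reachable at a placeholder address:

```bash
$ fluent-bit -i cpu -t cpu -o http://192.168.2.3:80/something -m '*'
```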
In your main configuration file, append the following Input & Output sections:
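A configuration sketch equivalent to the command line above:

```
[INPUT]
    Name cpu
    Tag  cpu

[OUTPUT]
    Name  http
    Match *
    Host  192.168.2.3
    Port  80
    URI   /something
```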
By default, the URI becomes the tag of the message and the original tag is ignored. To retain the tag, multiple configuration sections have to be defined, each flushing to a different URI.
Another approach we also support is sending the original message tag in a configurable header. It's up to the receiver to do what it wants with that header field: parse it and use it as the tag, for example.
To configure this behaviour, add this config:
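A sketch of such a configuration; the header name FLUENT-TAG is only an assumed example, the receiver decides what to do with it:

```
[OUTPUT]
    Name       http
    Match      *
    Host       192.168.2.3
    Port       80
    URI        /something
    header_tag FLUENT-TAG
```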
Provided you are using Fluentd as the data receiver, you can combine in_http and out_rewrite_tag_filter to make use of this HTTP header.
Notice how we override the tag, which comes from the URI path, with our custom header.
The Splunk output plugin allows you to ingest your records into a Splunk service through the HTTP Event Collector (HEC) interface.
To get more details about how to set up the HEC in Splunk, please refer to the Splunk documentation on the HTTP Event Collector.
Splunk output plugin supports TLS/SSL; for more details about the properties available and general configuration, please refer to the TLS/SSL section.
In order to insert records into a Splunk service, you can run the plugin from the command line or through the configuration file:
The splunk plugin can read the parameters from the command line through the -p argument (property), e.g.:
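A sketch of such an invocation; the token shown is a placeholder for your own HEC token:

```bash
$ fluent-bit -i cpu -t cpu -o splunk -p host=127.0.0.1 -p port=8088 -p splunk_token=YOUR_HEC_TOKEN -m '*'
```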
In your main configuration file append the following Input & Output sections:
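A configuration sketch equivalent to the command line above:

```
[INPUT]
    Name cpu
    Tag  cpu

[OUTPUT]
    Name         splunk
    Match        *
    Host         127.0.0.1
    Port         8088
    Splunk_Token YOUR_HEC_TOKEN
```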
By default, the Splunk output plugin nests the record under the event key in the payload sent to the HEC. It will also append the time of the record to a top-level time key.
If you would like to customize any of the Splunk event metadata, such as the host or target index, you can set Splunk_Send_Raw On in the plugin configuration, and add the metadata as keys/values in the record. Note: with Splunk_Send_Raw enabled, you are responsible for creating and populating the event section of the payload.
For example, to add a custom index and hostname:
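One possible sketch: here the dummy input plugin is used only to fabricate a record that already carries the index, host and event keys; in a real pipeline you would add such keys with a filter:

```
[INPUT]
    Name  dummy
    Tag   my_logs
    Dummy {"index": "my-index", "host": "my-host", "event": {"message": "something happened"}}

[OUTPUT]
    Name            splunk
    Match           *
    Host            127.0.0.1
    Port            8088
    Splunk_Token    YOUR_HEC_TOKEN
    Splunk_Send_Raw On
```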
This will create a payload that looks like:
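Under those assumptions, the payload would look roughly like this (the time value is illustrative):

```json
{
  "time": 1535995058.003,
  "index": "my-index",
  "host": "my-host",
  "event": {"message": "something happened"}
}
```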
The td output plugin allows you to flush your records into the Treasure Data cloud service.
The plugin supports the following configuration parameters:
In order to start inserting records into Treasure Data, you can run the plugin from the command line or through the configuration file:
Ideally you don't want to expose your API key on the command line, so using a configuration file is highly desirable.
In your main configuration file append the following Input & Output sections:
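A minimal configuration sketch with placeholder values for the API key, database and table:

```
[INPUT]
    Name cpu
    Tag  cpu

[OUTPUT]
    Name     td
    Match    *
    API      YOUR_API_KEY_HASH
    Database my_database
    Table    my_table
```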
The Stackdriver output plugin allows you to ingest your records into the Google Cloud Stackdriver Logging service.
Before getting started with the plugin configuration, make sure to obtain the proper credentials to get access to the service. We strongly recommend using a common JSON credentials file, reference link:
Your goal is to obtain a credentials JSON file that will be used later by Fluent Bit Stackdriver output plugin.
If you are using a Google Cloud Credentials File, the following configuration is enough to get started:
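A minimal sketch; the credentials path is a placeholder for the JSON file obtained above:

```
[INPUT]
    Name cpu
    Tag  cpu

[OUTPUT]
    Name                       stackdriver
    Match                      *
    google_service_credentials /path/to/my_credentials.json
    resource                   global
```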
An upstream connection error means Fluent Bit was not able to reach Google services, the error looks like this:
This is caused by a network issue in the environment where Fluent Bit is running; make sure that from the Host, Container or Pod you can reach the following Google end-points:
The file output plugin allows you to write the data received through the input plugin to a file.
The plugin supports the following configuration parameters:
Output time, tag and JSON records. There are no configuration parameters for out_file.
Output the records as JSON (without additional tag and timestamp attributes). There are no configuration parameters for the plain format.
Output the records as CSV. The csv format supports an additional configuration parameter.
Output the records as LTSV. The ltsv format supports an additional configuration parameter.
You can run the plugin from the command line or through the configuration file:
From the command line you can let Fluent Bit write records to a file with the following options:
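For example, assuming the cpu input plugin and a placeholder output path:

```bash
$ fluent-bit -i cpu -o file -p path=output.txt
```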
In your main configuration file append the following Input & Output sections:
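A configuration sketch equivalent to the command line above:

```
[INPUT]
    Name cpu
    Tag  cpu

[OUTPUT]
    Name  file
    Match *
    Path  output.txt
```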
The stdout output plugin allows you to print to the standard output the data received through the input plugin. Its usage is very simple:
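For example, assuming the cpu input plugin as the data source:

```bash
$ fluent-bit -i cpu -o stdout
```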
We have specified to gather CPU usage metrics and print them out to the standard output in a human-readable way:
No more, no less, it just works.
The kafka-rest output plugin allows you to flush your records into a Kafka REST Proxy server. The following instructions assume that you have fully operational Kafka REST Proxy and Kafka services running in your environment.
Kafka REST Proxy output plugin supports TLS/SSL; for more details about the properties available and general configuration, please refer to the TLS/SSL section.
In order to insert records into a Kafka REST Proxy service, you can run the plugin from the command line or through the configuration file:
The kafka-rest plugin can read the parameters from the command line through the -p argument (property), e.g.:
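A sketch of such an invocation, assuming a local Kafka REST Proxy on the default port:

```bash
$ fluent-bit -i cpu -t cpu -o kafka-rest -p host=127.0.0.1 -p port=8082 -m '*'
```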
In your main configuration file append the following Input & Output sections:
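A configuration sketch equivalent to the command line above; the topic name is a placeholder:

```
[INPUT]
    Name cpu
    Tag  cpu

[OUTPUT]
    Name  kafka-rest
    Match *
    Host  127.0.0.1
    Port  8082
    Topic my-topic
```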
Forward is the protocol used by Fluentd to route messages between peers. The forward output plugin provides interoperability between Fluent Bit and Fluentd. There are no configuration steps required besides specifying where Fluentd is located; it can be on the local host or on a remote machine.
This plugin offers two different transports and modes:
Forward (TCP): It uses a plain TCP connection.
Secure Forward (TLS): when TLS is enabled, the plugin switches to Secure Forward mode.
The following parameters are mandatory for both Forward and Secure Forward modes:
When using Secure Forward mode, TLS support is required to be enabled. The following additional configuration parameters are available:
That configuration file specifies that it will listen for TCP connections on port 24224 through the forward input type. Then, for every message with a fluent_bit TAG, it will print the message to the standard output.
DISCLAIMER: the following example does not consider the generation of certificates for proper usage in production environments.
Paste this content in a file called flb.conf:
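A sketch of a possible flb.conf for this test; the shared key, hostname and TLS settings are placeholders intended for testing only:

```
[SERVICE]
    Flush 5

[INPUT]
    Name cpu
    Tag  cpu

[OUTPUT]
    Name          forward
    Match         *
    Host          127.0.0.1
    Port          24224
    Shared_Key    secret
    Self_Hostname flb.local
    tls           on
    tls.verify    off
```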
Paste this content in a file called fld.conf:
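A sketch of a possible fld.conf, assuming the fluent-plugin-secure-forward input plugin is installed; self_hostname and shared_key must match the values used on the Fluent Bit side:

```
<source>
  @type secure_forward
  self_hostname myserver.local
  shared_key    secret
  secure        no
</source>

<match **>
  @type stdout
</match>
```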
If you're using Fluentd v1, set it up as below:
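A sketch using Fluentd v1's built-in forward input with TLS transport; the certificate paths are placeholders:

```
<source>
  @type forward
  <transport tls>
    cert_path        /path/to/fluentd.crt
    private_key_path /path/to/fluentd.key
  </transport>
  <security>
    self_hostname myserver.local
    shared_key    secret
  </security>
</source>

<match **>
  @type stdout
</match>
```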
Start Fluentd:
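Assuming the configuration file created above:

```bash
$ fluentd -c fld.conf
```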
Start Fluent Bit:
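And in another terminal:

```bash
$ fluent-bit -c flb.conf
```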
After five seconds, Fluent Bit will write the records to Fluentd. In Fluentd output you will see a message like this:
For more information on the Splunk HEC payload format and all the event metadata Splunk accepts, see here:
Github reference:
Stackdriver officially supports a logging agent based on Fluentd.
Before proceeding, make sure that Fluentd is installed in your system; if it's not the case, please refer to the following document and go ahead with that.
Once Fluentd is installed, create the following configuration file example that will allow us to stream data into it:
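A sketch of in_fluent-bit.conf matching that description: listen for forward connections on port 24224 and print every message tagged fluent_bit to the standard output:

```
<source>
  @type forward
  bind  0.0.0.0
  port  24224
</source>

<match fluent_bit>
  @type stdout
</match>
```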
In one terminal launch Fluentd specifying the new configuration file created (in_fluent-bit.conf):
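For example:

```bash
$ fluentd -c in_fluent-bit.conf
```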
Now that Fluentd is ready to receive messages, we need to specify where the forward output plugin will flush the information using the following format:
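The general shape of that invocation, where INPUT, HOST and PORT are placeholders:

```bash
$ fluent-bit -i INPUT -o forward://HOST:PORT
```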
If the TAG parameter is not set, the plugin will set the tag as fluent_bit. Keep in mind that TAG is important for routing rules inside Fluentd.
Using the CPU input plugin as an example, we will flush CPU metrics to Fluentd:
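For example, assuming the Fluentd instance started above on the local host:

```bash
$ fluent-bit -i cpu -t fluent_bit -o forward://127.0.0.1:24224
```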
Now on the Fluentd side, you will see the CPU metrics gathered in the last seconds:
So we gathered CPU metrics and flushed them out to Fluentd properly.
Secure Forward aims to provide a secure channel of communication with the remote Fluentd service using TLS. Above there is a minimalist configuration for testing purposes.
| name | description |
| --- | --- |
| Azure Log Analytics | Ingest records into Azure Log Analytics |
| BigQuery | Ingest records into Google BigQuery |
| Count Records | Simple records counter. |
| Elasticsearch | Flush records to an Elasticsearch server. |
| File | Flush records to a file. |
| FlowCounter | Count records. |
| Forward | Fluentd forward protocol. |
| HTTP | Flush records to an HTTP end point. |
| InfluxDB | Flush records to InfluxDB time series database. |
| Apache Kafka | Flush records to Apache Kafka. |
| Kafka REST Proxy | Flush records to a Kafka REST Proxy server. |
| Google Stackdriver Logging | Flush records to Google Stackdriver Logging service. |
| Standard Output | Flush records to the standard output. |
| Splunk | Flush records to a Splunk Enterprise service. |
| Treasure Data | Flush records to the Treasure Data cloud service for analytics. |
| NATS | Flush records to a NATS server. |
| NULL | Throw away events. |
| Key | Description | default |
| --- | --- | --- |
| Host | IP address or hostname of the target InfluxDB service | 127.0.0.1 |
| Port | TCP port of the target InfluxDB service | 8086 |
| Database | InfluxDB database name where records will be inserted | fluentbit |
| Sequence_Tag | The name of the tag whose value is incremented for the consecutive simultaneous events. | _seq |
| HTTP_User | Optional username for HTTP Basic Authentication | |
| HTTP_Passwd | Password for user defined in HTTP_User | |
| Tag_Keys | Space separated list of keys that need to be tagged | |
| Auto_Tags | Automatically tag keys where value is string. This option takes a boolean value: True/False, On/Off. | Off |
| Key | Description | default |
| --- | --- | --- |
| google_service_credentials | Absolute path to a Google Cloud credentials JSON file | Value of the environment variable $GOOGLE_SERVICE_CREDENTIALS |
| project_id | The project id containing the BigQuery dataset to stream into. | The value of the project_id in the credentials file |
| dataset_id | The dataset id of the BigQuery dataset to write into. This dataset must exist in your project. | |
| table_id | The table id of the BigQuery table to write into. This table must exist in the specified dataset and the schema must match the output. | |
| Key | Description | default |
| --- | --- | --- |
| Format | Specify data format, options available: json, msgpack. | json |
| Message_Key | Optional key to store the message | |
| Timestamp_Key | Set the key to store the record timestamp | @timestamp |
| Timestamp_Format | 'iso8601' or 'double' | double |
| Brokers | Single or multiple list of Kafka Brokers, e.g: 192.168.1.3:9092, 192.168.1.4:9092. | |
| Topics | Single entry or list of topics separated by comma (,) that Fluent Bit will use to send messages to Kafka. If only one topic is set, that one will be used for all records. If multiple topics exist, the one set in the record by Topic_Key will be used. | fluent-bit |
| Topic_Key | If multiple Topics exist, the value of Topic_Key in the record will indicate the topic to use. E.g: if Topic_Key is router and the record is {"key1": 123, "router": "route_2"}, Fluent Bit will use topic route_2. Note that the topic must be registered in the Topics list. | |
| rdkafka.{property} | {property} can be any librdkafka property | |
| parameter | description | default |
| --- | --- | --- |
| host | IP address or hostname of the NATS Server | 127.0.0.1 |
| port | TCP port of the target NATS Server | 4222 |
| Key | Description | default |
| --- | --- | --- |
| Host | IP address or hostname of the target Elasticsearch instance | 127.0.0.1 |
| Port | TCP port of the target Elasticsearch instance | 9200 |
| Path | Elasticsearch accepts new data on HTTP query path "/_bulk". But it is also possible to serve Elasticsearch behind a reverse proxy on a subpath. This option defines such path on the fluent-bit side. It simply adds a path prefix in the indexing HTTP POST URI. | Empty string |
| Buffer_Size | Specify the buffer size used to read the response from the Elasticsearch HTTP service. This option is useful for debugging purposes where it is required to read full responses; note that the response size grows depending on the number of records inserted. To set an unlimited amount of memory set this value to False, otherwise the value must be according to the Unit Size specification. | 4KB |
| Pipeline | Newer versions of Elasticsearch allow setting up filters called pipelines. This option allows to define which pipeline the database should use. For performance reasons it is strongly suggested to do parsing and filtering on the Fluent Bit side and avoid pipelines. | |
| HTTP_User | Optional username credential for Elastic X-Pack access | |
| HTTP_Passwd | Password for user defined in HTTP_User | |
| Index | Index name | fluentbit |
| Type | Type name | flb_type |
| Logstash_Format | Enable Logstash format compatibility. This option takes a boolean value: True/False, On/Off | Off |
| Logstash_Prefix | When Logstash_Format is enabled, the Index name is composed using a prefix and the date, e.g: If Logstash_Prefix is equal to 'mydata' your index will become 'mydata-YYYY.MM.DD'. The last string appended belongs to the date when the data is being generated. | logstash |
| Logstash_DateFormat | Time format (based on strftime) to generate the second part of the Index name. | %Y.%m.%d |
| Time_Key | When Logstash_Format is enabled, each record will get a new timestamp field. The Time_Key property defines the name of that field. | @timestamp |
| Time_Key_Format | When Logstash_Format is enabled, this property defines the format of the timestamp. | %Y-%m-%dT%H:%M:%S |
| Include_Tag_Key | When enabled, it appends the Tag name to the record. | Off |
| Tag_Key | When Include_Tag_Key is enabled, this property defines the key name for the tag. | _flb-key |
| Generate_ID | When enabled, generate _id for outgoing records. This prevents duplicate records when retrying ES. | Off |
| Replace_Dots | When enabled, replace field name dots with underscore, required by Elasticsearch 2.0-2.3. | Off |
| Trace_Output | When enabled, print the Elasticsearch API calls to stdout (for diag only) | Off |
| Current_Time_Index | Use current time for index generation instead of message record | Off |
| Logstash_Prefix_Key | Prefix keys with this string | |
| Key | Description | default |
| --- | --- | --- |
| Host | IP address or hostname of the target HTTP Server | 127.0.0.1 |
| HTTP_User | Basic Auth Username | |
| HTTP_Passwd | Basic Auth Password. Requires HTTP_User to be set | |
| Port | TCP port of the target HTTP Server | 80 |
| Proxy | Specify an HTTP Proxy. The expected format of this value is http://host:port. Note that https is not supported yet. | |
| URI | Specify an optional HTTP URI for the target web server, e.g: /something | / |
| Format | Specify the data format to be used in the HTTP request body, by default it uses msgpack. Other supported formats are json, json_stream, json_lines and gelf. | msgpack |
| header_tag | Specify an optional HTTP header field for the original message tag. | |
| Header | Add a HTTP header key/value pair. Multiple headers can be set. | |
| json_date_key | Specify the name of the date field in output | date |
| json_date_format | Specify the format of the date. Supported formats are double and iso8601 (eg: 2018-05-30T09:39:52.000681Z) | double |
| gelf_timestamp_key | Specify the key to use for the timestamp in gelf format | |
| gelf_host_key | Specify the key to use for the host in gelf format | |
| gelf_short_message_key | Specify the key to use as the short message in gelf format | |
| gelf_full_message_key | Specify the key to use for the full message in gelf format | |
| gelf_level_key | Specify the key to use for the level in gelf format | |
| Key | Description |
| --- | --- |
| Delimiter | The character to separate each data. Default: ',' |
| Key | Description |
| --- | --- |
| Delimiter | The character to separate each pair. Default: '\t'(TAB) |
| Label_Delimiter | The character to separate label and the value. Default: ':' |
| Key | Description | default |
| --- | --- | --- |
| google_service_credentials | Absolute path to a Google Cloud credentials JSON file | Value of environment variable $GOOGLE_SERVICE_CREDENTIALS |
| service_account_email | Account email associated to the service. Only available if no credentials file has been provided. | Value of environment variable $SERVICE_ACCOUNT_EMAIL |
| service_account_secret | Private key content associated with the service account. Only available if no credentials file has been provided. | Value of environment variable $SERVICE_ACCOUNT_SECRET |
| resource | Set resource type of data. Only global is supported. | global |
| Key | Description | default |
| --- | --- | --- |
| Host | IP address or hostname of the target Kafka REST Proxy server | 127.0.0.1 |
| Port | TCP port of the target Kafka REST Proxy server | 8082 |
| Topic | Set the Kafka topic | fluent-bit |
| Partition | Set the partition number (optional) | |
| Message_Key | Set a message key (optional) | |
| Time_Key | The Time_Key property defines the name of the field that holds the record timestamp. | @timestamp |
| Time_Key_Format | Defines the format of the timestamp. | %Y-%m-%dT%H:%M:%S |
| Include_Tag_Key | Append the Tag name to the final record. | Off |
| Tag_Key | If Include_Tag_Key is enabled, this property defines the key name for the tag. | _flb-key |
| Key | Description |
| --- | --- |
| Path | File path to output. If not set, the filename will be the tag name. |
| Format | The format of the file content. See also the Format section. Default: out_file. |
| Key | Description | default |
| --- | --- | --- |
| Format | Specify the data format to be printed. Supported formats are msgpack and json_lines. | msgpack |
| json_date_key | Specify the name of the date field in output | date |
| json_date_format | Specify the format of the date. Supported formats are double and iso8601 (eg: 2018-05-30T09:39:52.000681Z) | double |
| Key | Description | default |
| --- | --- | --- |
| Host | IP address or hostname of the target Splunk service. | 127.0.0.1 |
| Port | TCP port of the target Splunk service. | 8088 |
| Splunk_Token | Specify the Authentication Token for the HTTP Event Collector interface. | |
| Splunk_Send_Raw | When enabled, the record keys and values are set in the top level of the map instead of under the event key. | Off |
| HTTP_User | Optional username for Basic Authentication on HEC | |
| HTTP_Passwd | Password for user defined in HTTP_User | |
| Key | Description | Default |
| --- | --- | --- |
| API | The API key. To obtain it please log into the Treasure Data Console and, in the API keys box, copy the API key hash. | |
| Database | Specify the name of your target database. | |
| Table | Specify the name of your target table where the records will be stored. | |
| Region | Set the service region, available values: US and JP | US |
| Key | Description | Default |
| --- | --- | --- |
| Host | Target host where Fluent-Bit or Fluentd are listening for Forward messages. | 127.0.0.1 |
| Port | TCP Port of the target service. | 24224 |
| Time_as_Integer | Set timestamps in integer format; it enables compatibility mode for the Fluentd v0.12 series. | False |
| Upstream | If Forward will connect to an Upstream instead of a simple host, this property defines the absolute path for the Upstream configuration file; for more details about this refer to the documentation section. | |
| Key | Description | Default |
| --- | --- | --- |
| Shared_Key | A key string known by the remote Fluentd used for authorization. | |
| Self_Hostname | Default value of the auto-generated certificate common name (CN). | |
| tls | Enable or disable TLS support | Off |
| tls.verify | Force certificate validation | On |
| tls.debug | Set TLS debug verbosity level. It accepts the following values: 0 (No debug), 1 (Error), 2 (State change), 3 (Informational) and 4 (Verbose) | 1 |
| tls.ca_file | Absolute path to CA certificate file | |
| tls.crt_file | Absolute path to Certificate file. | |
| tls.key_file | Absolute path to private Key file. | |
| tls.key_passwd | Optional password for tls.key_file file. | |