Elasticsearch

The es output plugin allows flushing your records into an Elasticsearch database. The following instructions assume that you have a fully operational Elasticsearch service running in your environment.

Configuration Parameters

| Key | Description | Default |
| --- | --- | --- |
| Host | IP address or hostname of the target Elasticsearch instance | 127.0.0.1 |
| Port | TCP port of the target Elasticsearch instance | 9200 |
| Path | Elasticsearch accepts new data on the HTTP query path "/_bulk", but it is also possible to serve Elasticsearch behind a reverse proxy on a subpath. This option defines that path on the Fluent Bit side; it simply adds a path prefix in the indexing HTTP POST URI. | Empty string |
| Buffer_Size | Specify the buffer size used to read the response from the Elasticsearch HTTP service. This option is useful for debugging purposes when it is required to read full responses; note that the response size grows depending on the number of records inserted. To set an unlimited amount of memory, set this value to False; otherwise the value must conform to the Unit Size specification. | 4KB |
| Pipeline | Newer versions of Elasticsearch allow setting up filters called pipelines. This option defines which pipeline the database should use. For performance reasons it is strongly suggested to do parsing and filtering on the Fluent Bit side and avoid pipelines. | |
| HTTP_User | Optional username credential for Elastic X-Pack access | |
| HTTP_Passwd | Password for the user defined in HTTP_User | |
| Index | Index name | fluentbit |
| Type | Type name | flb_type |
| Logstash_Format | Enable Logstash format compatibility. This option takes a boolean value: True/False, On/Off | Off |
| Logstash_Prefix | When Logstash_Format is enabled, the Index name is composed using a prefix and the date, e.g. if Logstash_Prefix is equal to 'mydata', your index will become 'mydata-YYYY.MM.DD'. The last string appended belongs to the date when the data is being generated. | logstash |
| Logstash_DateFormat | Time format (based on strftime) to generate the second part of the Index name. | %Y.%m.%d |
| Time_Key | When Logstash_Format is enabled, each record will get a new timestamp field. The Time_Key property defines the name of that field. | @timestamp |
| Time_Key_Format | When Logstash_Format is enabled, this property defines the format of the timestamp. | %Y-%m-%dT%H:%M:%S |
| Include_Tag_Key | When enabled, it appends the Tag name to the record. | Off |
| Tag_Key | When Include_Tag_Key is enabled, this property defines the key name for the tag. | _flb-key |
| Generate_ID | When enabled, generate _id for outgoing records. This prevents duplicate records when retrying ES. | Off |
| Replace_Dots | When enabled, replace field name dots with underscores (required by Elasticsearch 2.0-2.3). | Off |
| Trace_Output | When enabled, print the Elasticsearch API calls to stdout (for diagnostics only). | Off |
| Current_Time_Index | Use the current time for index generation instead of the message record time. | Off |
| Logstash_Prefix_Key | Prefix keys with this string | |

The parameters index and type can be confusing if you are new to Elastic; if you have used a common relational database before, they can be compared to the database and table concepts.
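
For reference, here is a sketch of an [OUTPUT] section that combines several of the parameters above; the host address and prefix are illustrative values only, not defaults or requirements:

[OUTPUT]
    Name                es
    Match               *
    Host                192.168.2.3
    Port                9200
    Logstash_Format     On
    Logstash_Prefix     mydata
    Logstash_DateFormat %Y.%m.%d
    Time_Key            @timestamp
    Include_Tag_Key     On
    Tag_Key             _flb-key

With Logstash_Format enabled, records are written to time-based indexes such as 'mydata-YYYY.MM.DD', as described for Logstash_Prefix above.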

TLS / SSL

The Elasticsearch output plugin supports TLS/SSL; for more details about the available properties and general configuration, please refer to the TLS/SSL section.
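
As a minimal sketch, assuming the target Elasticsearch endpoint serves HTTPS with a certificate issued by the CA stored at /etc/ssl/certs/es-ca.crt (a hypothetical path), the TLS properties documented in that section can be set directly in the [OUTPUT] block:

[OUTPUT]
    Name        es
    Match       *
    Host        192.168.2.3
    Port        9200
    Index       my_index
    Type        my_type
    tls         On
    tls.verify  On
    tls.ca_file /etc/ssl/certs/es-ca.crt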

Getting Started

In order to insert records into an Elasticsearch service, you can run the plugin from the command line or through the configuration file:

Command Line

The es plugin can read its parameters from the command line in two ways: through the -p argument (property) or by setting them directly in the service URI. The URI format is the following:

es://host:port/index/type

Using the format specified, you can start Fluent Bit with:

$ fluent-bit -i cpu -t cpu -o es://192.168.2.3:9200/my_index/my_type \
    -o stdout -m '*'

which is equivalent to:

$ fluent-bit -i cpu -t cpu -o es -p Host=192.168.2.3 -p Port=9200 \
    -p Index=my_index -p Type=my_type -o stdout -m '*'

Configuration File

In your main configuration file append the following Input & Output sections:

[INPUT]
    Name  cpu
    Tag   cpu

[OUTPUT]
    Name  es
    Match *
    Host  192.168.2.3
    Port  9200
    Index my_index
    Type  my_type
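
Assuming the sections above are saved in a file such as fluent-bit.conf (the file name is only an example), Fluent Bit can then be started against that file:

$ fluent-bit -c fluent-bit.conf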

About Elasticsearch field names

Some input plugins may generate messages where the field names contain dots. Since Elasticsearch 2.0 this is no longer allowed, so the current es plugin replaces them with an underscore, e.g.:

{"cpu0.p_cpu"=>17.000000}

becomes

{"cpu0_p_cpu"=>17.000000}
