OpenSearch

Send logs to Amazon OpenSearch Service


The opensearch output plugin allows you to ingest your records into an OpenSearch database. The following instructions assume that you have a fully operational OpenSearch service running in your environment.

Configuration Parameters

| Key | Description | Default |
| :--- | :--- | :--- |
| Host | IP address or hostname of the target OpenSearch instance | 127.0.0.1 |
| Port | TCP port of the target OpenSearch instance | 9200 |
| Path | OpenSearch accepts new data on HTTP query path "/_bulk", but it is also possible to serve OpenSearch behind a reverse proxy on a subpath. This option defines such a path on the Fluent Bit side. It simply adds a path prefix in the indexing HTTP POST URI. | Empty string |
| Buffer_Size | Specify the buffer size used to read the response from the OpenSearch HTTP service. This option is useful for debugging purposes where it is required to read full responses; note that the response size grows depending on the number of records inserted. To set an unlimited amount of memory, set this value to False; otherwise the value must conform to the Unit Size specification. | 4KB |
| Pipeline | OpenSearch allows you to set up filters called pipelines. This option defines which pipeline the database should use. For performance reasons, it is strongly suggested to do parsing and filtering on the Fluent Bit side and avoid pipelines. | |
| AWS_Auth | Enable AWS Sigv4 Authentication for Amazon OpenSearch Service | Off |
| AWS_Region | Specify the AWS region for Amazon OpenSearch Service | |
| AWS_STS_Endpoint | Specify the custom STS endpoint to be used with STS API for Amazon OpenSearch Service | |
| AWS_Role_ARN | AWS IAM Role to assume to put records to your Amazon cluster | |
| AWS_External_ID | External ID for the AWS IAM Role specified with aws_role_arn | |
| HTTP_User | Optional username credential for access | |
| HTTP_Passwd | Password for user defined in HTTP_User | |
| Index | Index name | fluent-bit |
| Type | Type name | _doc |
| Logstash_Format | Enable Logstash format compatibility. This option takes a boolean value: True/False, On/Off | Off |
| Logstash_Prefix | When Logstash_Format is enabled, the Index name is composed using a prefix and the date, e.g. if Logstash_Prefix is equal to 'mydata', your index will become 'mydata-YYYY.MM.DD'. The last string appended belongs to the date when the data is being generated. | logstash |
| Logstash_DateFormat | Time format (based on strftime) to generate the second part of the Index name. | %Y.%m.%d |
| Time_Key | When Logstash_Format is enabled, each record will get a new timestamp field. The Time_Key property defines the name of that field. | @timestamp |
| Time_Key_Format | When Logstash_Format is enabled, this property defines the format of the timestamp. | %Y-%m-%dT%H:%M:%S |
| Time_Key_Nanos | When Logstash_Format is enabled, enabling this property sends nanosecond precision timestamps. | Off |
| Include_Tag_Key | When enabled, it appends the Tag name to the record. | Off |
| Tag_Key | When Include_Tag_Key is enabled, this property defines the key name for the tag. | _flb-key |
| Generate_ID | When enabled, generate _id for outgoing records. This prevents duplicate records when retrying. | Off |
| Id_Key | If set, _id will be the value of the key from the incoming record and the Generate_ID option is ignored. | |
| Write_Operation | Operation to use to write in bulk requests. See the write_operation section below. | create |
| Replace_Dots | When enabled, replace field name dots with underscores. | Off |
| Trace_Output | When enabled, print the OpenSearch API calls to stdout (for diagnostics only). | Off |
| Trace_Error | When enabled, print the OpenSearch API calls to stdout when OpenSearch returns an error (for diagnostics only). | Off |
| Current_Time_Index | Use current time for index generation instead of the message record. | Off |
| Logstash_Prefix_Key | When included: the value of the key in the record will be looked up and will overwrite the Logstash_Prefix for index generation. If the key/value is not found in the record, the Logstash_Prefix option will act as a fallback. Nested keys are not supported (if desired, you can use the nest filter plugin to remove nesting). | |
| Suppress_Type_Name | When enabled, the mapping type is removed and the Type option is ignored. | Off |
| Workers | Enables dedicated thread(s) for this output. The default of 2 applies since version 1.8.13; in previous versions the default is 0. | 2 |
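
For example, to write one index per day in Logstash style, the Logstash_* and Time_Key options can be combined. A minimal sketch (the host and prefix values are illustrative only):

[OUTPUT]
    Name                opensearch
    Match               *
    Host                127.0.0.1
    Port                9200
    Logstash_Format     On
    Logstash_Prefix     mydata
    Logstash_DateFormat %Y.%m.%d
    Time_Key            @timestamp

With this configuration, a record ingested on a given day is routed to an index such as mydata-2022.01.15.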

TLS / SSL

The OpenSearch output plugin supports TLS/SSL. For more details about the properties available and general configuration, please refer to the TLS/SSL section.
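
As a quick illustration, the generic TLS properties can be set directly on the output section. A minimal sketch, assuming a CA bundle at a placeholder path:

[OUTPUT]
    Name        opensearch
    Match       *
    Host        192.168.2.3
    Port        9200
    tls         On
    tls.verify  On
    tls.ca_file /etc/ssl/certs/ca.pem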

write_operation

The write_operation can be any of:

| Operation | Description |
| :--- | :--- |
| create (default) | Adds new data; if the data already exists (based on its id), the op is skipped. |
| index | New data is added while existing data (based on its id) is replaced (reindexed). |
| update | Updates existing data (based on its id). If no data is found, the op is skipped. |
| upsert | Known as merge or insert: updates the data if it exists (based on its id), inserts it otherwise. |

Please note that Id_Key or Generate_ID is required for the update and upsert operations.
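
For instance, to update-or-insert documents keyed by a field taken from each record, Write_Operation can be combined with Id_Key. A minimal sketch (the key name my_id_field is illustrative):

[OUTPUT]
    Name            opensearch
    Match           *
    Host            127.0.0.1
    Port            9200
    Index           my_index
    Write_Operation upsert
    Id_Key          my_id_field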

Getting Started

In order to insert records into an OpenSearch service, you can run the plugin from the command line or through the configuration file:

Command Line

The opensearch plugin can read the parameters from the command line in two ways: through the -p argument (property) or by setting them directly through the service URI. The URI format is the following:

es://host:port/index/type

The parameters index and type can be confusing if you are new to OpenSearch; if you have used a common relational database before, they can be compared to the database and table concepts. Also see the FAQ below.

Using the format specified, you could start Fluent Bit through:

$ fluent-bit -i cpu -t cpu -o es://192.168.2.3:9200/my_index/my_type \
    -o stdout -m '*'

which is similar to doing:

$ fluent-bit -i cpu -t cpu -o opensearch -p Host=192.168.2.3 -p Port=9200 \
    -p Index=my_index -p Type=my_type -o stdout -m '*'

Configuration File

In your main configuration file append the following Input & Output sections. You can visualize this configuration with the example configuration visualization from config.calyptia.com.

[INPUT]
    Name  cpu
    Tag   cpu

[OUTPUT]
    Name  opensearch
    Match *
    Host  192.168.2.3
    Port  9200
    Index my_index
    Type  my_type
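
Once saved, for example as fluent-bit.conf (the filename is arbitrary), you can start Fluent Bit pointing at it:

$ fluent-bit -c fluent-bit.conf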

About OpenSearch field names

Some input plugins may generate messages where the field names contain dots. This opensearch plugin replaces them with an underscore, e.g.:

{"cpu0.p_cpu"=>17.000000}

becomes

{"cpu0_p_cpu"=>17.000000}

FAQ

Fluent Bit + Amazon OpenSearch Service

The Amazon OpenSearch Service adds an extra security layer where HTTP requests must be signed with AWS Sigv4. This plugin supports Amazon OpenSearch Service with IAM Authentication.

Example configuration:

[OUTPUT]
    Name  opensearch
    Match *
    Host  vpc-test-domain-ke7thhzoo7jawsrhmm6mb7ite7y.us-west-2.es.amazonaws.com
    Port  443
    Index my_index
    Type  my_type
    AWS_Auth On
    AWS_Region us-west-2
    tls     On

Notice that the Port is set to 443, tls is enabled, and AWS_Region is set. See the AWS credentials documentation for details on how AWS credentials are fetched.
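
If the cluster lives in another account, the AWS_Role_ARN and AWS_External_ID parameters from the table above can be added so the plugin assumes an IAM role before writing. A minimal sketch (the role ARN and external ID are placeholders):

[OUTPUT]
    Name            opensearch
    Match           *
    Host            vpc-test-domain-ke7thhzoo7jawsrhmm6mb7ite7y.us-west-2.es.amazonaws.com
    Port            443
    Index           my_index
    AWS_Auth        On
    AWS_Region      us-west-2
    AWS_Role_ARN    arn:aws:iam::123456789012:role/example-opensearch-writer
    AWS_External_ID example-external-id
    tls             On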

Action/metadata contains an unknown parameter type

Similarly to Elastic Cloud, OpenSearch in version 2.0 and above needs the type option to be removed by setting Suppress_Type_Name On.

Without this you will see errors like:

{"error":{"root_cause":[{"type":"illegal_argument_exception","reason":"Action/metadata line [1] contains an unknown parameter [_type]"}],"type":"illegal_argument_exception","reason":"Action/metadata line [1] contains an unknown parameter [_type]"},"status":400}
