Amazon Kinesis Data Firehose

Send logs to Amazon Kinesis Firehose


The Amazon Kinesis Data Firehose output plugin allows you to ingest your records into the Firehose service.

This is the documentation for the core Fluent Bit Firehose plugin written in C. It can replace the aws/amazon-kinesis-firehose-for-fluent-bit Golang plugin released last year. The Golang plugin was named firehose; this new high-performance and highly efficient plugin is called kinesis_firehose to prevent conflicts/confusion.

See here for details on how AWS credentials are fetched.
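
The AWS plugins resolve credentials from the standard sources (environment variables, the shared credentials file, or an instance/task role). As a minimal local-testing sketch, assuming you have a valid key pair, you can export credentials as environment variables before starting Fluent Bit; the values below are placeholders:

export AWS_ACCESS_KEY_ID=AKIAEXAMPLE          # placeholder value
export AWS_SECRET_ACCESS_KEY=wJalrEXAMPLEKEY  # placeholder value
fluent-bit -i cpu -o kinesis_firehose -p delivery_stream=my-stream -p region=us-west-2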

Configuration Parameters

  • region: The AWS region.
  • delivery_stream: The name of the Kinesis Firehose Delivery stream that you want log records sent to.
  • time_key: Add the timestamp to the record under this key. By default the timestamp from Fluent Bit will not be added to records sent to Kinesis.
  • time_key_format: A strftime-compliant format string for the timestamp; the default is '%Y-%m-%dT%H:%M:%S'. This option is used together with time_key.
  • log_key: By default, the whole log record will be sent to Firehose. If you specify a key name with this option, then only the value of that key will be sent to Firehose. For example, if you are using the Fluentd Docker log driver, you can specify log_key log and only the log message will be sent to Firehose.
  • role_arn: ARN of an IAM role to assume (for cross-account access).
  • endpoint: Specify a custom endpoint for the Firehose API.
  • sts_endpoint: Custom endpoint for the STS API.
  • auto_retry_requests: Immediately retry failed requests to AWS services once. This option does not affect the normal Fluent Bit retry mechanism with backoff. Instead, it enables an immediate retry with no delay for networking errors, which may help improve throughput when there are transient/random networking issues.
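
To illustrate the optional keys above, here is a sketch of an output section that stamps each record with a formatted timestamp under time_key and forwards only the log field; the stream name and region are placeholder values:

[OUTPUT]
    Name            kinesis_firehose
    Match           *
    region          us-west-2
    delivery_stream my-stream
    # add the Fluent Bit timestamp under the 'timestamp' key
    time_key        timestamp
    time_key_format %Y-%m-%dT%H:%M:%S
    # send only the 'log' field of each record
    log_key         log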

Getting Started

In order to send records into Amazon Kinesis Data Firehose, you can run the plugin from the command line or through the configuration file:

Command Line

The kinesis_firehose plugin can read the parameters from the command line through the -p argument (property), for example:

$ fluent-bit -i cpu -o kinesis_firehose -p delivery_stream=my-stream -p region=us-west-2 -m '*' -f 1

Configuration File

In your main configuration file append the following Output section:

[OUTPUT]
    Name  kinesis_firehose
    Match *
    region us-east-1
    delivery_stream my-stream
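
While validating a new pipeline, one option is to append a second [OUTPUT] section with the same Match pattern; Fluent Bit delivers each record to every output whose Match applies, so a stdout section like the sketch below mirrors to the console exactly what is sent to Firehose:

[OUTPUT]
    # print the same records locally for inspection
    Name  stdout
    Match *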

Worker support

Fluent Bit 1.7 adds a new feature called workers, which enables outputs to have dedicated threads. The kinesis_firehose plugin fully supports workers.

Example:

[OUTPUT]
    Name  kinesis_firehose
    Match *
    region us-east-1
    delivery_stream my-stream
    workers 2

If you enable a single worker, you are enabling a dedicated thread for your Firehose output. We recommend starting without workers, evaluating the performance, and then adding workers one at a time until you reach your desired/needed throughput. For most users, no workers or a single worker will be sufficient.

AWS for Fluent Bit

Amazon distributes a container image with Fluent Bit and these plugins.

GitHub

github.com/aws/aws-for-fluent-bit

Amazon ECR Public Gallery

Our images are available in the Amazon ECR Public Gallery at public.ecr.aws/aws-observability/aws-for-fluent-bit. You can download images with different tags using the following command:

docker pull public.ecr.aws/aws-observability/aws-for-fluent-bit:<tag>

For example, you can pull the image with the latest version by running:

docker pull public.ecr.aws/aws-observability/aws-for-fluent-bit:latest

If you see errors for image pull limits, try logging in to public ECR with your AWS credentials:

aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin public.ecr.aws

Docker Hub

Our images are also available on Docker Hub as amazon/aws-for-fluent-bit.

Amazon ECR

You can use our SSM Public Parameters to find the Amazon ECR image URI in your region:

aws ssm get-parameters-by-path --path /aws/service/aws-for-fluent-bit/
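
For example, assuming each released tag is published as its own parameter under that path (the 'latest' parameter name below is an assumption based on the repository's documentation), you can fetch a single image URI directly:

aws ssm get-parameters --names /aws/service/aws-for-fluent-bit/latest --region us-east-1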

You can check the Amazon ECR Public official doc for more details.

For more information, see the AWS for Fluent Bit GitHub repo.
