Standard Input

The stdin plugin retrieves a message stream from the standard input interface (stdin) of the Fluent Bit process. To use it, specify the plugin name as the input, for example:

$ fluent-bit -i stdin -o stdout

If the stdin stream is closed (end-of-file), the stdin plugin will instruct Fluent Bit to exit with success (0) after flushing any pending output.
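For example, a single record piped in from a shell is printed and then Fluent Bit exits cleanly. A sketch (the timestamp in the output will reflect the moment the record was read):

$ echo '{"message": "hello"}' | fluent-bit -q -i stdin -o stdout
[0] stdin.0: [[1684196745.000000000, {}], {"message"=>"hello"}]
$ echo $?
0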

Input formats

If no parser is configured for the stdin plugin, it expects valid JSON input data in one of the following formats:

  1. A JSON object with one or more key-value pairs: { "key": "value", "key2": "value2" }

  2. A 2-element JSON array, which may be either:

  • [TIMESTAMP, { "key": "value" }] where TIMESTAMP is a floating point value representing a timestamp in seconds; or

  • from Fluent Bit v2.1.0, [[TIMESTAMP, METADATA], { "key": "value" }] where TIMESTAMP has the same meaning as above and METADATA is a JSON object.

Multi-line input JSON is supported.
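As a quick sketch, each accepted shape can be exercised directly from a shell; the timestamps below are arbitrary epoch values, and the metadata form requires Fluent Bit v2.1.0 or later:

$ echo '{"key": "value", "key2": "value2"}' | fluent-bit -q -i stdin -o stdout
$ echo '[1684196745.0, {"key": "value"}]' | fluent-bit -q -i stdin -o stdout
$ echo '[[1684196745.0, {"meta": "data"}], {"key": "value"}]' | fluent-bit -q -i stdin -o stdout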

Any input data that is not in one of the above formats will cause the plugin to log errors like:

[debug] [input:stdin:stdin.0] invalid JSON message, skipping
[error] [input:stdin:stdin.0] invalid record found, it's not a JSON map or array

To handle inputs in other formats, a parser must be explicitly specified in the configuration for the stdin plugin. See the parser input example below for a sample configuration.

Log event timestamps

The Fluent Bit event timestamp will be set from the input record if the 2-element event input is used or a custom parser configuration supplies a timestamp. Otherwise the event timestamp will be set to the timestamp at which the record is read by the stdin plugin.
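For instance (a sketch; epoch 1000000000.0 corresponds to 2001-09-09 UTC):

# No timestamp in the record: the event is stamped at read time.
$ echo '{"msg": "now"}' | fluent-bit -q -i stdin -o stdout

# Timestamp supplied: the event keeps 1000000000.0 instead of the read time.
$ echo '[1000000000.0, {"msg": "then"}]' | fluent-bit -q -i stdin -o stdout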

Examples

JSON input example

A better way to demonstrate how the plugin works is through a Bash script that generates messages and writes them to Fluent Bit. Write the following content in a file named test.sh:

#!/bin/bash

# Emit one JSON record per second, six records in total.
for ((i=0; i<=5; i++)); do
  echo -n "{\"key\": \"some value\"}"
  sleep 1
done

Now let's start the script and Fluent Bit:

$ bash test.sh | fluent-bit -q -i stdin -o stdout
[0] stdin.0: [[1684196745.942883835, {}], {"key"=>"some value"}]
[0] stdin.0: [[1684196746.938949056, {}], {"key"=>"some value"}]
[0] stdin.0: [[1684196747.940162493, {}], {"key"=>"some value"}]
[0] stdin.0: [[1684196748.941392297, {}], {"key"=>"some value"}]
[0] stdin.0: [[1684196749.942644238, {}], {"key"=>"some value"}]
[0] stdin.0: [[1684196750.943721442, {}], {"key"=>"some value"}]

JSON input with timestamp example

An input event timestamp may also be supplied. Replace test.sh with:

#!/bin/bash

# Emit records whose event timestamp is one day in the past;
# the current time is carried in the "realtimestamp" field.
for ((i=0; i<=5; i++)); do
  echo -n "
    [
      $(date '+%s.%N' -d '1 day ago'),
      {
        \"realtimestamp\": $(date '+%s.%N')
      }
    ]
  "
  sleep 1
done

Re-run the sample command. Note that the timestamps output by Fluent Bit are now one day old because Fluent Bit used the input message timestamp.

$ bash test.sh | fluent-bit -q -i stdin -o stdout
[0] stdin.0: [[1684110480.028171300, {}], {"realtimestamp"=>1684196880.030070}]
[0] stdin.0: [[1684110481.033753395, {}], {"realtimestamp"=>1684196881.034741}]
[0] stdin.0: [[1684110482.036730051, {}], {"realtimestamp"=>1684196882.037704}]
[0] stdin.0: [[1684110483.039903879, {}], {"realtimestamp"=>1684196883.041081}]
[0] stdin.0: [[1684110484.044719457, {}], {"realtimestamp"=>1684196884.046404}]
[0] stdin.0: [[1684110485.048710107, {}], {"realtimestamp"=>1684196885.049651}]

JSON input with metadata example

In Fluent Bit v2.1.0 and above, additional metadata can be supplied by replacing the timestamp with a 2-element array containing the timestamp and a metadata object, e.g.:

#!/bin/bash

# Emit records carrying a metadata object alongside the (one-day-old) timestamp.
for ((i=0; i<=5; i++)); do
  echo -n "
    [
      [
        $(date '+%s.%N' -d '1 day ago'),
        {\"metakey\": \"metavalue\"}
      ],
      {
        \"realtimestamp\": $(date '+%s.%N')
      }
    ]
  "
  sleep 1
done

$ bash ./test.sh | fluent-bit -q -i stdin -o stdout
[0] stdin.0: [[1684110513.060139417, {"metakey"=>"metavalue"}], {"realtimestamp"=>1684196913.061017}]
[0] stdin.0: [[1684110514.063085317, {"metakey"=>"metavalue"}], {"realtimestamp"=>1684196914.064145}]
[0] stdin.0: [[1684110515.066210508, {"metakey"=>"metavalue"}], {"realtimestamp"=>1684196915.067155}]
[0] stdin.0: [[1684110516.069149971, {"metakey"=>"metavalue"}], {"realtimestamp"=>1684196916.070132}]
[0] stdin.0: [[1684110517.072484016, {"metakey"=>"metavalue"}], {"realtimestamp"=>1684196917.073636}]
[0] stdin.0: [[1684110518.075428724, {"metakey"=>"metavalue"}], {"realtimestamp"=>1684196918.076292}]

On older Fluent Bit versions, records in this format are discarded. If the log level permits, Fluent Bit will log:

[ warn] unknown time format 6

Parser input example

To capture inputs in other formats, specify a parser configuration for the stdin plugin.

For example, to read raw messages line-by-line and forward them, you could use a parser.conf that captures the whole message line:

[PARSER]
    name        stringify_message
    format      regex
    Key_Name    message
    regex       ^(?<message>.*)

then use it in the Parser option of the stdin plugin in fluent-bit.conf:

[INPUT]
    Name    stdin
    Tag     stdin
    Parser  stringify_message

[OUTPUT]
    Name   stdout
    Match  *

The equivalent YAML configuration is:

pipeline:
    inputs:
        - name: stdin
          tag: stdin
          parser: stringify_message
    outputs:
        - name: stdout
          match: '*'

Fluent Bit will now emit a single message for each input line:

$ seq 1 5 | /opt/fluent-bit/bin/fluent-bit -c fluent-bit.conf -R parser.conf -q
[0] stdin: [1681358780.517029169, {"message"=>"1"}]
[1] stdin: [1681358780.517068334, {"message"=>"2"}]
[2] stdin: [1681358780.517072116, {"message"=>"3"}]
[3] stdin: [1681358780.517074758, {"message"=>"4"}]
[4] stdin: [1681358780.517077392, {"message"=>"5"}]
$

In real-world deployments, use a parser that splits messages into their constituent fields and applies appropriate tags.
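For instance, a hypothetical parser for lines such as "ERROR disk full" could split the severity from the message body. A sketch (the level_message name and field layout are illustrative only):

[PARSER]
    name    level_message
    format  regex
    regex   ^(?<level>[A-Z]+)\s+(?<message>.*)$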

Configuration Parameters

The plugin supports the following configuration parameters:

| Key | Description | Default |
| :-- | :---------- | :------ |
| Buffer_Size | Set the buffer size to read data. This value is used to increase the buffer size. The value must be set according to the Unit Size specification. | 16k |
| Parser | The name of the parser to invoke instead of the default JSON input parser. | |
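Both parameters are set in the input section. A sketch that raises the buffer size and reuses the stringify_message parser defined earlier:

[INPUT]
    Name         stdin
    Tag          stdin
    Buffer_Size  64k
    Parser       stringify_message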
