File

The file output plugin writes the data received through the input plugin to a file.

Configuration Parameters

The plugin supports the following configuration parameters:

| Key | Description | Default |
| --- | ----------- | ------- |
| Path | Directory path to store files. If not set, Fluent Bit writes the files in its own working directory. Note: this option was added in Fluent Bit v1.4.6. | |
| File | Set the file name to store the records. If not set, the file name will be the tag associated with the records. | |
| Format | The format of the file content. See the Format section below. | out_file |
| Mkdir | Recursively create the output directory if it does not exist. Permissions are set to 0755. | |
| Workers | The number of dedicated threads for this output. The default of 1 applies since version 1.8.13; in earlier versions the default is 0. | 1 |
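For example, a minimal sketch that combines these parameters in a classic-mode OUTPUT section (the directory and file name are illustrative assumptions, not defaults):

[OUTPUT]
    Name   file
    Match  *
    Path   /var/log/fluent-bit
    File   records.out
    Format out_file
    Mkdir  On

With Mkdir set to On, the /var/log/fluent-bit directory is created recursively (mode 0755) if it does not already exist.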

Format

out_file format

Outputs the time, tag, and JSON record in the following form. There are no configuration parameters for out_file.

tag: [time, {"key1":"value1", "key2":"value2", "key3":"value3"}]

plain format

Outputs the records as JSON, without the additional tag and timestamp attributes. There are no configuration parameters for the plain format.

{"key1":"value1", "key2":"value2", "key3":"value3"}

csv format

Outputs the records as CSV. The CSV format supports an additional configuration parameter:

| Key | Description |
| --- | ----------- |
| Delimiter | The character used to separate fields. Accepted values are "\t" (or "tab"), "space", or "comma". Other values are silently ignored and the default is used. Default: ',' |

time[delimiter]"value1"[delimiter]"value2"[delimiter]"value3"
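For example, to write tab-separated values (a minimal sketch; the match pattern and path are illustrative assumptions):

[OUTPUT]
    Name      file
    Match     *
    Path      output_dir
    Format    csv
    Delimiter tab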

ltsv format

Outputs the records as LTSV. The LTSV format supports two additional configuration parameters:

| Key | Description |
| --- | ----------- |
| Delimiter | The character used to separate each pair. Default: '\t' (tab) |
| Label_Delimiter | The character used to separate the label from the value. Default: ':' |

field1[label_delimiter]value1[delimiter]field2[label_delimiter]value2\n
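For example, a minimal sketch selecting the LTSV format with its default delimiters (the match pattern and path are illustrative assumptions):

[OUTPUT]
    Name   file
    Match  *
    Path   output_dir
    Format ltsv

With the defaults, each record is written as tab-separated label:value pairs.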

template format

Outputs the records using a custom format template.

| Key | Description |
| --- | ----------- |
| Template | The format string. Default: '{time} {message}' |

This accepts a formatting template and fills placeholders using corresponding values in a record.

For example, if you set up the configuration as below:

[INPUT]
    Name mem

[OUTPUT]
    Name     file
    Match    *
    Format   template
    Template {time} used={Mem.used} free={Mem.free} total={Mem.total}

You will get the following output:

1564462620.000254 used=1045448 free=31760160 total=32805608

Getting Started

You can run the plugin from the command line or through the configuration file:

Command Line

From the command line you can let Fluent Bit write CPU metrics to a file with the following options:

$ fluent-bit -i cpu -o file -p path=output.txt
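With the default out_file format, each line of output.txt carries the tag, the timestamp, and the record; for example (the timestamp and metric values shown are illustrative):

cpu.0: [1564462620.000254, {"cpu_p":0.250000, "user_p":0.125000, "system_p":0.125000}]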

Configuration File

In your main configuration file append the following Input & Output sections:

[INPUT]
    Name cpu
    Tag  cpu

[OUTPUT]
    Name file
    Match *
    Path output_dir
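Then run Fluent Bit pointing at that file (the configuration file name fluent-bit.conf is an assumption):

$ fluent-bit -c fluent-bit.conf

Because File is not set, the records are written under output_dir using their tag (cpu) as the file name.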
