Backpressure

It's possible for logs or data to be ingested or created faster than they can be flushed to their destinations. A common scenario is reading from big log files, especially with a large backlog, while dispatching the logs to a backend over the network, which takes time to respond. This generates backpressure, leading to high memory consumption in the service.

To avoid backpressure, Fluent Bit implements a mechanism in the engine that restricts the amount of data an input plugin can ingest. Restriction is done through the configuration parameters Mem_Buf_Limit and storage.max_chunks_up.

As described in the Buffering concepts section, Fluent Bit offers two modes for data handling: in-memory only (the default) and in-memory plus filesystem (optional).

The default storage.type memory buffer can be restricted with Mem_Buf_Limit. If memory reaches this limit and you reach a backpressure scenario, you won't be able to ingest more data until the data chunks that are in memory can be flushed. The input pauses and Fluent Bit emits a [warn] [input] {input name or alias} paused (mem buf overlimit) log message.

Depending on the input plugin in use, this might cause incoming data to be discarded (for example, with the TCP input plugin). The tail plugin can handle pauses without data loss, storing its current file offset and resuming reading later. When buffer memory is available again, the input resumes accepting logs and Fluent Bit emits an [info] [input] {input name or alias} resume (mem buf overlimit) message.

Mitigate the risk of data loss by configuring secondary storage on the filesystem using storage.type filesystem (as described in Buffering & Storage). Initially, logs are buffered to both memory and the filesystem. When the storage.max_chunks_up limit is reached, all new data is stored in the filesystem: Fluent Bit stops queueing new data in memory and buffers only to the filesystem. When storage.type filesystem is set, the Mem_Buf_Limit setting no longer has any effect. Instead, the [SERVICE] level storage.max_chunks_up setting controls the size of the memory buffer.
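
The following classic-mode configuration is a minimal sketch of filesystem buffering; the storage path and the tail input shown here are illustrative, and 128 is the default value of storage.max_chunks_up:

[SERVICE]
    Flush                 1
    # Location on disk where buffered chunks are stored
    storage.path          /var/log/flb-storage/
    # Maximum number of chunks allowed to stay in memory (default: 128)
    storage.max_chunks_up 128

[INPUT]
    Name                  tail
    Path                  /var/log/app/*.log
    # Opt this input into combined memory and filesystem buffering
    storage.type          filesystem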

Mem_Buf_Limit

Mem_Buf_Limit applies only with the default storage.type memory. This option is disabled by default and can be applied to all input plugins.
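
For example, here is a minimal sketch that caps an input's memory buffer at 1 MB, matching the scenario below; the tail input and file path are illustrative:

[INPUT]
    Name          tail
    Path          /var/log/app/*.log
    # Pause this input once about 1 MB of its data is buffered in memory
    Mem_Buf_Limit 1MB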

As an example situation:

  • Mem_Buf_Limit is set to 1 MB.

  • The input plugin tries to append 700 KB.

  • The engine routes the data to an output plugin.

  • The output plugin backend (HTTP Server) is down.

  • Engine scheduler retries the flush after 10 seconds.

  • The input plugin tries to append 500 KB.

In this situation, the engine allows appending those 500 KB of data into the memory, with a total of 1.2 MB of data buffered. The limit is permissive and will allow a single write past the limit. When the limit is exceeded, the following actions are taken:

  • Block local buffers for the input plugin (can't append more data).

  • Notify the input plugin, invoking a pause callback.

The engine protects itself and won't append more data coming from the input plugin in question. It's the responsibility of the plugin to keep state and decide what to do in a paused state.

In a few seconds, if the scheduler was able to flush the initial 700 KB of data or it has given up after retrying, that amount of memory is released and the following actions occur:

  • Upon data buffer release (700 KB), the internal counters get updated.

  • Counters now are set at 500 KB.

  • Because 500 KB is less than 1 MB, the engine checks the input plugin state.

  • If the plugin is paused, it invokes a resume callback.

  • The input plugin can continue appending more data.

storage.max_chunks_up

The [SERVICE] level storage.max_chunks_up setting controls the size of the memory buffer. When storage.type filesystem is set, the Mem_Buf_Limit setting no longer has an effect.

The setting behaves similarly to the Mem_Buf_Limit scenario when the non-default storage.pause_on_chunks_overlimit setting is enabled.

When storage.pause_on_chunks_overlimit is disabled (the default), the input won't pause when the memory limit is reached. Instead, it switches to buffering logs only in the filesystem. Limit the disk space used for filesystem buffering with the storage.total_limit_size output option.
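
As a hedged end-to-end sketch, the following combines both behaviors; the paths and plugin choices are illustrative:

[SERVICE]
    storage.path                      /var/log/flb-storage/
    storage.max_chunks_up             128

[INPUT]
    Name                              tail
    Path                              /var/log/app/*.log
    storage.type                      filesystem
    # Non-default: pause the input instead of buffering only to the filesystem
    storage.pause_on_chunks_overlimit on

[OUTPUT]
    Name                              stdout
    Match                             *
    # Cap the disk space this output's filesystem buffer may use
    storage.total_limit_size          500M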

About pause and resume callbacks

Each plugin is independent and not all of them implement pause and resume callbacks. These callbacks are a notification mechanism for the plugin.

With the default storage.type memory and Mem_Buf_Limit, the following log messages are emitted for pause and resume events:

[warn] [input] {input name or alias} paused (mem buf overlimit)
[info] [input] {input name or alias} resume (mem buf overlimit)

With storage.type filesystem and storage.max_chunks_up, the following log messages are emitted for pause and resume events:

[input] {input name or alias} paused (storage buf overlimit)
[input] {input name or alias} resume (storage buf overlimit)

See the Buffering & Storage documentation for more information.

One example of a plugin that implements these callbacks and keeps state correctly is the Tail input plugin. When the pause callback triggers, it pauses its collectors and stops appending data. Upon resume, it resumes the collectors and continues ingesting data. Tail tracks the current file offset when it pauses and resumes at the same position; if the file hasn't been deleted or moved, it can still be read.
