Buffering & Storage
The end-goal of Fluent Bit is to collect, parse, filter and ship logs to a central place. In this workflow there are many phases, and one of the critical pieces is the ability to do buffering: a mechanism to place processed data into a temporary location until it is ready to be shipped.
By default, when Fluent Bit processes data it uses memory as a primary and temporary place to store the records, but there are certain scenarios where it would be ideal to have a persistent buffering mechanism based on the filesystem to provide aggregation and data safety capabilities.
Choosing the right configuration is critical, and the behavior of the service can be conditioned by the backpressure settings. Before we jump into the configuration, let's make sure we understand the relationship between Chunks, Memory, Filesystem and Backpressure.
Chunks, Memory, Filesystem and Backpressure
Understanding the chunks, buffering and backpressure concepts is critical for a proper configuration. Let's do a recap of the meaning of these concepts.
Chunks
When an input plugin (source) emits records, the engine groups the records together in a Chunk. A Chunk's size is usually around 2MB. The engine decides where to place this Chunk based on configuration; the default is that all Chunks are created only in memory.
Buffering and Memory
As mentioned above, the Chunks generated by the engine are placed in memory but this is configurable.
If memory is the only mechanism set for the input plugin, it will store as much data as possible in memory. This is the fastest mechanism with the least system overhead, but if the service is not able to deliver the records fast enough, because of a slow network or an unresponsive remote service, Fluent Bit's memory usage will increase since it accumulates more data than it can deliver.
In a high load environment with backpressure, the risk of high memory usage is the chance of getting killed by the Kernel (OOM Killer). A workaround for this backpressure scenario is to limit the amount of memory in records that an input plugin can register, through the configuration property `mem_buf_limit`: if a plugin has enqueued more data than its `mem_buf_limit` allows, it won't be able to ingest more until that data can be delivered or flushed properly. In this scenario the input plugin in question is paused.

The `mem_buf_limit` workaround is good for certain scenarios and environments: it helps to control the memory usage of the service. But it comes at a cost: if a file gets rotated while the plugin is paused, that data can be lost, since the plugin won't be able to register new records. This can happen with any input source plugin. The goal of `mem_buf_limit` is memory control and survival of the service.
For full data safety guarantee, use filesystem buffering.
Here is an example input definition (a sketch; the tcp input and its listener settings are only for illustration):
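```
[INPUT]
    # The tcp input and its listener settings are illustrative; any
    # input plugin that supports Mem_Buf_Limit behaves the same way.
    Name          tcp
    Listen        0.0.0.0
    Port          5170
    Format        none
    Tag           tcp-logs
    Mem_Buf_Limit 50MB
```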
If this input uses more than 50MB of memory to buffer logs, a warning similar to the following will appear in the Fluent Bit log output (the exact wording may vary between versions):
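```
[input] tcp.1 paused (mem buf overlimit)
```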
Filesystem buffering to the rescue
Enabling filesystem buffering helps with backpressure and overall memory control.
Behind the scenes, the memory and filesystem buffering mechanisms are not mutually exclusive; indeed, when enabling filesystem buffering for your input plugin (source) you get the best of both worlds: performance and data safety.
When Filesystem buffering is enabled, the behavior of the engine is different: upon Chunk creation, the engine stores the content in memory and also maps a copy on disk (through mmap(2)). A Chunk that is active in memory and backed up on disk is said to be `up`, which means "the chunk content is up in memory".

How does this filesystem buffering mechanism deal with high memory usage and backpressure? Fluent Bit controls the number of Chunks that are `up` in memory.

By default, the engine allows a total of 128 Chunks to be `up` in memory (considering all Chunks); this value is controlled by the service property `storage.max_chunks_up`. The Chunks that are `up` are the active ones: those ready for delivery and those still receiving records. Any other Chunk is in a `down` state, meaning it exists only in the filesystem and won't be loaded `up` into memory until it is ready to be delivered.
If the input plugin has enabled `mem_buf_limit` and `storage.type` is set to `filesystem`, then when the `mem_buf_limit` threshold is reached, instead of the plugin being paused all new data will go to Chunks that are `down` in the filesystem. This allows the service to control its memory usage while also guaranteeing that no data is lost.
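As a sketch, an input that combines both settings could look like this (the tail input, its path and the limit value are assumptions for illustration):

```
[INPUT]
    # Illustrative example: tail a set of application log files.
    Name          tail
    Path          /var/log/app/*.log
    Tag           app-logs
    # With storage.type filesystem, hitting this limit sends new data
    # to "down" Chunks on disk instead of pausing the plugin.
    Mem_Buf_Limit 50MB
    storage.type  filesystem
```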
Limiting Filesystem space for Chunks
Fluent Bit implements the concept of logical queues: based on its Tag, a Chunk can be routed to multiple destinations, so internally the engine keeps a reference from where a Chunk was created and where it needs to go.
It's common to find cases where, if we have multiple destinations for a Chunk, one of the destinations is slower than the others, or one of them is generating backpressure while the rest are not. In this scenario, how do we limit the amount of filesystem Chunks that we are logically queueing?
Starting from Fluent Bit v1.6, output plugins support the configuration property `storage.total_limit_size`, which limits the total size of Chunks that can exist in the filesystem for a certain logical output destination. If a destination reaches its `storage.total_limit_size` limit, the oldest Chunk from its queue for that logical output destination is discarded.
Configuration
The storage layer configuration takes place in three areas:
- Service Section
- Input Section
- Output Section
The Service section configures a global environment for the storage layer, the Input sections define which buffering mechanism to use, and the Output sections set the limits for the logical queues.
Service Section Configuration
The Service section refers to the section defined in the main configuration file:
| Key | Description | Default |
| :--- | :--- | :--- |
| storage.path | Set an optional location in the file system to store streams and chunks of data. If this parameter is not set, Input plugins can only use in-memory buffering. | |
| storage.sync | Configure the synchronization mode used to store the data into the file system. It can take the values normal or full. | normal |
| storage.checksum | Enable the data integrity check when writing and reading data from the filesystem. The storage layer uses the CRC32 algorithm. | Off |
| storage.max_chunks_up | If the input plugin has enabled filesystem storage type, this property sets the maximum number of Chunks that can be up in memory. This helps to control memory usage. | 128 |
| storage.backlog.mem_limit | If storage.path is set, Fluent Bit will look for data chunks that were not delivered and are still in the storage layer; these are called backlog data. This option sets a hint for the maximum amount of memory to use when processing these records. | 5M |
| storage.metrics | If the http_server option has been enabled in the main Service section, this option registers a new endpoint where internal metrics of the storage layer can be consumed. | off |
A Service section with these options will look like this (a minimal sketch; the `flush` and `log_Level` values are illustrative):
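```
[SERVICE]
    # flush and log_Level are unrelated to storage; shown for context.
    flush                     1
    log_Level                 info
    storage.path              /var/log/flb-storage/
    storage.sync              normal
    storage.checksum          off
    storage.backlog.mem_limit 5M
```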
That configuration sets an optional buffering mechanism where the data is stored under /var/log/flb-storage/, using normal synchronization mode, without running a checksum, and with up to a maximum of 5MB of memory when processing backlog data.
Input Section Configuration
Optionally, any Input plugin can configure its storage preference; the following table describes the options available:
| Key | Description | Default |
| :--- | :--- | :--- |
| storage.type | Specifies the buffering mechanism to use. It can be memory or filesystem. | memory |
| storage.pause_on_chunks_overlimit | Specifies if file storage is to be paused when reaching the chunk limit. | off |
The following example configures a service that offers filesystem buffering capabilities and two Input plugins, the first using the filesystem mechanism and the second memory only (a sketch; the cpu and mem inputs are chosen only for illustration):
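```
[SERVICE]
    flush                     1
    log_Level                 info
    storage.path              /var/log/flb-storage/
    storage.sync              normal
    storage.checksum          off
    storage.max_chunks_up     128
    storage.backlog.mem_limit 5M

[INPUT]
    # First input: Chunks are buffered in the filesystem.
    name          cpu
    storage.type  filesystem

[INPUT]
    # Second input: Chunks live in memory only (the default).
    name          mem
    storage.type  memory
```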
Output Section Configuration
If certain Chunks are filesystem-based (`storage.type filesystem`), it's possible to control the size of the logical queue for an output plugin. The following table describes the options available:
| Key | Description | Default |
| :--- | :--- | :--- |
| storage.total_limit_size | Limit the maximum size of Chunks that can exist in the filesystem for the current output logical destination. | |
The following example creates records with CPU usage samples in the filesystem and delivers them to the Google Stackdriver service, limiting the logical queue (buffering) to 5M (a sketch; Stackdriver authentication options are omitted):
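```
[SERVICE]
    flush                     1
    log_Level                 info
    storage.path              /var/log/flb-storage/
    storage.sync              normal
    storage.checksum          off
    storage.max_chunks_up     128
    storage.backlog.mem_limit 5M

[INPUT]
    name                      cpu
    storage.type              filesystem

[OUTPUT]
    # Stackdriver authentication options (e.g. service credentials)
    # are omitted in this sketch.
    name                      stackdriver
    match                     *
    storage.total_limit_size  5M
```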
If for some reason Fluent Bit gets offline because of a network issue, it will continue buffering CPU samples, keeping only a maximum of 5M of the newest data.