Send logs, data, metrics to Amazon S3
The plugin can upload data to S3 using the multipart upload API or using S3 PutObject. Multipart is the default and is recommended: Fluent Bit streams data in a series of 'parts', which limits the amount of data it has to buffer on disk at any point in time. By default, every time 5 MiB of data have been received, a new 'part' is uploaded. The plugin can create files up to gigabytes in size from many small chunks/parts using the multipart API. All aspects of the upload process are configurable.
The plugin allows you to specify a maximum file size and a timeout for uploads. A file will be created in S3 when the maximum size or the timeout is reached, whichever comes first.
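These limits map onto the plugin's `total_file_size` and `upload_timeout` options, and the part size onto `upload_chunk_size`. A minimal sketch of the relevant `[OUTPUT]` properties follows; the bucket, region, and all values are illustrative:

```
[OUTPUT]
    Name              s3
    Match             *
    # placeholders; set to your bucket and region
    bucket            your-bucket
    region            us-east-1
    # size of each multipart 'part' buffered locally before upload (default 5M)
    upload_chunk_size 5M
    # roll to a new file in S3 once this much data has been uploaded...
    total_file_size   100M
    # ...or after this long, whichever comes first
    upload_timeout    10m
```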
Records are stored in files in S3 as newline-delimited JSON.
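For example, a single line in an uploaded file might look like the following (an illustrative record from the `cpu` input; the field values are made up):

```
{"date":"2021-04-27T09:33:53.539346Z","cpu_p":1.0,"user_p":1.0,"system_p":0.0}
```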
The plugin requires the following AWS IAM permissions:
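`s3:PutObject` should be the only action needed; in IAM it also authorizes the multipart upload calls. A minimal identity-based policy sketch, with a placeholder bucket name to replace with your own:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::your-bucket/*"
        }
    ]
}
```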
In Fluent Bit, all logs have an associated tag. The `s3_key_format` option lets you inject the tag into the S3 key using the following syntax:

`$TAG` => the full tag
`$TAG[n]` => the nth part of the tag (index starting at zero). This syntax is copied from the rewrite_tag filter. By default, “parts” of the tag are separated with dots, but you can change this with the `s3_key_format_tag_delimiters` option.
In the example below, assume the date is January 1st, 2020 00:00:00, the tag associated with the logs in question is `my_app_prod-env.logs` (a hypothetical tag chosen for illustration), `s3_key_format` is `/$TAG[2]/$TAG[0]/%Y/%m/%d/%H/%M/%S`, and `s3_key_format_tag_delimiters` is `.-`.

With the delimiters as `.` and `-`, the tag will be split into parts as follows:

`$TAG[0]` = my_app_prod
`$TAG[1]` = env
`$TAG[2]` = logs

So the key in S3 will be `/logs/my_app_prod/2020/01/01/00/00/00`.
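A configuration sketch that would produce this key (the bucket and region are placeholders):

```
[OUTPUT]
    Name                          s3
    Match                         *
    # placeholders; set to your bucket and region
    bucket                        my-bucket
    region                        us-west-2
    s3_key_format                 /$TAG[2]/$TAG[0]/%Y/%m/%d/%H/%M/%S
    s3_key_format_tag_delimiters  .-
```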
`store_dir` is used to temporarily store data before it is uploaded. If Fluent Bit is stopped suddenly, it will try to send all data and complete all uploads before it shuts down. If it cannot send some data, on restart it will look in the `store_dir` for existing data and will try to send it.
Multipart uploads are ideal for most use cases because they allow the plugin to upload data in small chunks over time. For example, a 1 GB file can be created from 200 chunks of 5 MiB each. While the file size in S3 will be 1 GB, only 5 MiB will be buffered on disk at any one point in time.
There is one minor drawback to multipart uploads: the file and data will not be visible in S3 until the upload is completed with a CompleteMultipartUpload call. The plugin will attempt to make this call whenever Fluent Bit is shut down to ensure your data is available in S3. It will also store metadata about each upload in the `store_dir`, ensuring that uploads can be completed when Fluent Bit restarts (assuming it has access to persistent disk and the `store_dir` files will still be present on restart).
If you run Fluent Bit in an environment without persistent disk, or without the ability to restart Fluent Bit and give it access to the data stored in the `store_dir` from previous executions, some considerations apply. This might occur if you run Fluent Bit on AWS Fargate.

In these situations, we recommend using the PutObject API and sending data frequently, to avoid local buffering as much as possible. This will limit data loss in the event Fluent Bit is killed unexpectedly.
The following settings are recommended for this use case (a sketch; the sizes and timeouts are illustrative and should be tuned to your workload):
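```
[OUTPUT]
    Name            s3
    Match           *
    # placeholders; set to your bucket and region
    bucket          your-bucket
    region          us-east-1
    # PutObject mode: whole objects are sent in one call,
    # so keep files small and flush often
    use_put_object  On
    total_file_size 1M
    upload_timeout  1m
```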
Fluent Bit 1.7 adds a new feature called `workers`, which enables outputs to have dedicated threads. The `s3` plugin has partial support for workers: it can only support a single worker, and enabling multiple workers will lead to errors/indeterminate behavior.
If you enable a single worker, you are enabling a dedicated thread for your S3 output. We recommend starting without workers, evaluating the performance, and then enabling a worker if needed. For most users, the plugin can provide sufficient throughput without workers.
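For example, a minimal sketch of enabling the single supported worker (the bucket and region are placeholders):

```
[OUTPUT]
    Name    s3
    Match   *
    # placeholders; set to your bucket and region
    bucket  your-bucket
    region  us-east-1
    # the s3 plugin supports at most one worker
    workers 1
```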
The plugin can also send records to an S3-compatible object store such as MinIO by pointing the `endpoint` option at your server; the records will then be stored in the MinIO server.
In order to send records into Amazon S3, you can run the plugin from the command line or through the configuration file.
The s3 plugin can read the parameters from the command line through the -p argument (property). For example:
$ fluent-bit -i cpu -o s3 -p bucket=my-bucket -p region=us-west-2 -m '*' -f 1
In your main configuration file, append the following `OUTPUT` section (a sketch; adapt the bucket, region, and `store_dir` to your environment):
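```
[OUTPUT]
    Name            s3
    Match           *
    # placeholders; set to your bucket and region
    bucket          your-bucket
    region          us-east-1
    # local buffer directory; must be on persistent disk
    store_dir       /home/ec2-user/buffer
    total_file_size 50M
    upload_timeout  10m
```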
An example that uses PutObject instead of multipart (also a sketch; note `use_put_object On` and the smaller file size):
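```
[OUTPUT]
    Name            s3
    Match           *
    # placeholders; set to your bucket and region
    bucket          your-bucket
    region          us-east-1
    total_file_size 1M
    upload_timeout  1m
    use_put_object  On
```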
Amazon distributes a container image with Fluent Bit and these plugins.

Our images are available in the Amazon ECR Public Gallery. You can download images with different tags using the following command:
docker pull public.ecr.aws/aws-observability/aws-for-fluent-bit:<tag>
For example, you can pull the image with the latest version by running:
docker pull public.ecr.aws/aws-observability/aws-for-fluent-bit:latest
If you see errors for image pull limits, try logging into public ECR with your AWS credentials:
aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin public.ecr.aws
You can use our SSM Public Parameters to find the Amazon ECR image URI in your region:
aws ssm get-parameters-by-path --path /aws/service/aws-for-fluent-bit/
Starting from Fluent Bit v1.8, the Amazon S3 plugin includes support for Apache Arrow. The support is currently not enabled by default, as it depends on a shared version of `libarrow` as the prerequisite.

To use this feature, `FLB_ARROW` must be turned on at compile time:
$ cd build/
$ cmake -DFLB_ARROW=On ..
$ cmake --build .
Once compiled, Fluent Bit can upload incoming data to S3 in Apache Arrow format. For example (a sketch; the bucket and region are placeholders):
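```
[INPUT]
    Name  cpu

[OUTPUT]
    Name            s3
    Match           *
    # placeholders; set to your bucket and region
    bucket          your-bucket
    region          us-east-1
    total_file_size 1M
    use_put_object  On
    upload_timeout  60s
    # convert the payload to Apache Arrow before upload
    Compression     arrow
```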
As shown in this example, setting `Compression` to `arrow` makes Fluent Bit convert the payload into Apache Arrow format.
The stored data is very easy to load, analyze, and process using popular data processing tools (such as Python pandas, Apache Spark, and TensorFlow). The following code uses `pyarrow` to analyze the uploaded data:
>>> import pyarrow.feather as feather
>>> import pyarrow.fs as fs
>>> s3 = fs.S3FileSystem()
>>> file = s3.open_input_file("my-bucket/fluent-bit-logs/cpu.0/2021/04/27/09/36/15-object969o67ZF")
>>> df = feather.read_feather(file)
>>> print(df.head())
date cpu_p user_p system_p cpu0.p_cpu cpu0.p_user cpu0.p_system
0 2021-04-27T09:33:53.539346Z 1.0 1.0 0.0 1.0 1.0 0.0
1 2021-04-27T09:33:54.539330Z 0.0 0.0 0.0 0.0 0.0 0.0
2 2021-04-27T09:33:55.539305Z 1.0 0.0 1.0 1.0 0.0 1.0
3 2021-04-27T09:33:56.539430Z 0.0 0.0 0.0 0.0 0.0 0.0
4 2021-04-27T09:33:57.539803Z 0.0 0.0 0.0 0.0 0.0 0.0