Amazon S3

Send logs, data, and metrics to Amazon S3


The Amazon S3 output plugin lets you ingest records into the cloud object store.

The plugin can upload data to S3 using the multipart upload API or PutObject. Multipart is the default and is recommended. Fluent Bit streams data in a series of parts, which limits the amount of data buffered on disk at any point in time. By default, every time 5 MiB of data has been received, a new part is uploaded. Using the multipart API, the plugin can create files up to gigabytes in size from many small chunks or parts. All aspects of the upload process are configurable.

The plugin lets you specify a maximum file size, and a timeout for uploads. A file will be created in S3 when the maximum size or the timeout is reached, whichever comes first.

Records are stored in files in S3 as newline delimited JSON.

Configuration Parameters

Key | Description | Default
--- | --- | ---
region | The AWS region of your S3 bucket. | us-east-1
bucket | S3 bucket name. | none
json_date_key | Specify the time key name in the output record. To disable the time key, set the value to false. | date
json_date_format | Specify the format of the date. Accepted values: double, epoch, iso8601 (2018-05-30T09:39:52.000681Z), java_sql_timestamp (2018-05-30 09:39:52.000681). | iso8601
total_file_size | Specify the file size in S3. Minimum size is 1M. With use_put_object On, the maximum size is 1G. With multipart uploads, the maximum size is 50G. | 100M
upload_chunk_size | The size of each part for multipart uploads. Max: 50M. | 5,242,880 bytes
upload_timeout | When this amount of time elapses, Fluent Bit uploads and creates a new file in S3. Set to 60m to upload a new file every hour. | 10m
store_dir | Directory to locally buffer data before sending. When using multipart uploads, data buffers until reaching the upload_chunk_size. S3 stores metadata about in-progress multipart uploads in this directory, allowing pending uploads to be completed if Fluent Bit stops and restarts. It also stores the current $INDEX value, if enabled in the S3 key format, so the $INDEX keeps incrementing from its previous value after Fluent Bit restarts. | /tmp/fluent-bit/s3
store_dir_limit_size | Size limit for disk usage in S3. Limits the S3 buffers in the store_dir to cap disk usage. Use store_dir_limit_size instead of storage.total_limit_size, which can be used for other plugins. | 0 (unlimited)
s3_key_format | Format string for keys in S3. This option supports a UUID, strftime time formatters, and a syntax for selecting parts of the Fluent log tag, inspired by the rewrite_tag filter. Add $UUID in the format string to insert a random string. Add $INDEX in the format string to insert an integer that increments on each upload; the $INDEX value is saved in the store_dir. Add $TAG in the format string to insert the full log tag. Add $TAG[0] to insert the first part of the tag in the S3 key. The tag is split into parts using the characters specified with the s3_key_format_tag_delimiters option. Add an extension directly after the last piece of the format string to insert a key suffix. To specify a key suffix in use_put_object mode, you must specify $UUID. See S3 Key Format and Tag Delimiters. The time in the S3 key is the timestamp of the first record in the S3 file. | /fluent-bit-logs/$TAG/%Y/%m/%d/%H/%M/%S
s3_key_format_tag_delimiters | A series of characters used to split the tag into parts for use with the s3_key_format option. | .
static_file_path | Disables the behavior in which a UUID string is appended to the end of the S3 key name when $UUID is not provided in s3_key_format. $UUID, time formatters, $TAG, and other dynamic key formatters all work as expected while this feature is set to true. | false
use_put_object | Use the S3 PutObject API instead of the multipart upload API. When enabled, the key extension is only available when $UUID is specified in s3_key_format. If $UUID isn't included, a random string is appended to the format string and the key extension can't be customized. | false
role_arn | ARN of an IAM role to assume (for example, for cross-account access). | none
endpoint | Custom endpoint for the S3 API. Endpoints can contain scheme and port. | none
sts_endpoint | Custom endpoint for the STS API. | none
profile | Option to specify an AWS profile for credentials. | default
canned_acl | Predefined Canned ACL policy for S3 objects. | none
compression | Compression type for S3 objects. gzip is currently the only value supported by default. If Apache Arrow support was enabled at compile time, you can also use arrow. For gzip compression, the Content-Encoding HTTP header is set to gzip. Gzip compression can be enabled when use_put_object is on or off (PutObject and multipart). Arrow compression can only be enabled with use_put_object On. | none
content_type | A standard MIME type for the S3 object, set as the Content-Type HTTP header. | none
send_content_md5 | Send the Content-MD5 header with PutObject and UploadPart requests, as required when Object Lock is enabled. | false
auto_retry_requests | Immediately retry failed requests to AWS services once. This option doesn't affect the normal Fluent Bit retry mechanism with backoff. Instead, it enables an immediate retry with no delay for networking errors, which may help improve throughput during transient network issues. | true
log_key | By default, the whole log record is sent to S3. When specifying a key name with this option, only the value of that key is sent to S3. For example, when using Docker you can specify log_key log and only the log message is sent to S3. | none
preserve_data_ordering | When an upload request fails, the last received chunk might swap with a later chunk, resulting in data shuffling. This feature prevents shuffling by using a queue logic for uploads. | true
storage_class | Specify the storage class for S3 objects. If this option isn't specified, objects are stored with the default STANDARD storage class. | none
retry_limit | Integer value to set the maximum number of retries allowed. Requires versions 1.9.10 and 2.0.1 or later. For previous versions, the number of retries is 5 and isn't configurable. | 1
external_id | Specify an external ID for the STS API. Can be used with the role_arn parameter if your role requires an external ID. | none
workers | The number of workers to perform flush operations for this output. | 1
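As an illustration of how several of these parameters work together, here is a minimal classic-mode sketch, not a definitive configuration; the bucket name, region, paths, and key format are placeholders. It sends only the value of the log key, gzip-compresses objects, and caps local buffering:

# Sketch: combine log_key, compression, and buffering limits (placeholder values)
[OUTPUT]
    Name                 s3
    Match                *
    bucket               your-bucket
    region               us-east-1
    total_file_size      100M
    upload_timeout       10m
    # Upload only the value of the "log" key from each record
    log_key              log
    # Gzip-compress objects; Content-Encoding is set to gzip
    compression          gzip
    # Local buffer location and size cap
    store_dir            /var/fluent-bit/s3
    store_dir_limit_size 512M
    s3_key_format        /logs/$TAG/%Y/%m/%d/%H/$UUID.gz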

TLS / SSL

To skip TLS verification, set tls.verify as false. For more details about the available properties and general configuration, refer to the Transport Security (TLS/SSL) documentation.
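For example, if you point the plugin at an S3-compatible endpoint that uses a self-signed certificate (such as a local MinIO instance), a minimal sketch of disabling verification might look like the following. The endpoint hostname is a placeholder:

# Sketch: skip TLS verification for a custom, self-signed endpoint (placeholder endpoint)
[OUTPUT]
    Name       s3
    Match      *
    bucket     your-bucket
    region     us-east-1
    endpoint   https://minio.example.internal:9000
    tls.verify false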

Permissions

The plugin requires the following AWS IAM permissions. See the AWS Credentials documentation for details about how credentials are fetched:

{
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "s3:PutObject"
        ],
        "Resource": "*"
    }]
}
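The Resource element above is a wildcard for brevity. If you prefer to scope the permission, a sketch restricting uploads to a single bucket could look like the following; the bucket name is a placeholder:

{
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "s3:PutObject"
        ],
        "Resource": "arn:aws:s3:::your-bucket/*"
    }]
}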

Differences between S3 and other Fluent Bit outputs

The S3 output plugin is designed to upload large files to an Amazon S3 bucket, while most other outputs send many requests to upload data in batches of a few megabytes or less.

When Fluent Bit receives logs, it stores them in chunks, either in memory or the filesystem depending on your settings. Chunks are usually around 2 MB in size. Fluent Bit sends chunks, in order, to each output that matches their tag. Most outputs then send the chunk immediately to their destination. A chunk is sent to the output's flush callback function, which must return one of FLB_OK, FLB_RETRY, or FLB_ERROR. Fluent Bit keeps count of the return values from each output's flush callback function. These counters are the data source for Fluent Bit's error, retry, and success metrics available in Prometheus format through its monitoring interface.

The S3 output plugin conforms to the Fluent Bit output plugin specification. Because S3's use case is to upload large files (over 2 MB), its behavior is different. S3's flush callback function buffers the incoming chunk to the filesystem and returns FLB_OK. Uploads are primarily initiated by a timer callback in the S3 plugin, which runs separately from its flush callback. This means the Prometheus success, retry, and error metrics available from the Fluent Bit HTTP server are meaningless for S3; the maintainers are aware of this feature gap, and you can track progress on fixing it on GitHub. In addition, the storage.total_limit_size parameter isn't meaningful for S3 since it has its own buffering system in the store_dir. Instead, use store_dir_limit_size. S3 requires a writeable filesystem: running Fluent Bit on a read-only filesystem won't work with the S3 output.

S3 has its own buffering system and its own callback to upload data, so the normal sequential ordering of chunks provided by the Fluent Bit engine may be compromised. S3 provides the preserve_data_ordering option, which ensures data is uploaded in the original order it was collected by Fluent Bit.

Summary: Uniqueness in S3 Plugin

  • You must use store_dir_limit_size to limit the space on disk used by S3 buffer files.

  • The original ordering of data input to Fluent Bit may not be preserved unless preserve_data_ordering is enabled.

S3 Key Format and Tag Delimiters

In Fluent Bit, all logs have an associated tag. The s3_key_format option lets you inject the tag into the S3 key using the following syntax:

  • $TAG: The full tag.

  • $TAG[n]: The nth part of the tag (index starting at zero). This syntax is copied from the rewrite tag filter. By default, “parts” of the tag are separated with dots, but you can change this with s3_key_format_tag_delimiters.

In the following example, assume the date is January 1st, 2020 00:00:00 and the tag associated with the logs in question is my_app_name-logs.prod.

[OUTPUT]
    Name  s3
    Match *
    bucket                       my-bucket
    region                       us-west-2
    total_file_size              250M
    s3_key_format                /$TAG[2]/$TAG[0]/%Y/%m/%d/%H/%M/%S/$UUID.gz
    s3_key_format_tag_delimiters .-

With the delimiters as . and -, the tag splits into parts as follows:

  • $TAG[0] = my_app_name

  • $TAG[1] = logs

  • $TAG[2] = prod

The key in S3 will be /prod/my_app_name/2020/01/01/00/00/00/bgdHN1NM.gz.

Allowing a file extension in the S3 Key Format with $UUID

The Fluent Bit S3 output was designed to ensure that previous uploads will never be overwritten by a subsequent upload. The s3_key_format supports time formatters, $UUID, and $INDEX. $INDEX is special because it is saved in the store_dir. If you restart Fluent Bit with the same disk, it can continue incrementing the index from its last value in the previous run.
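For example, here is a minimal sketch of a key format that combines $INDEX with $UUID to produce sequentially numbered, unique objects per tag. The bucket, region, and key prefix are placeholders:

# Sketch: sequential object numbering with $INDEX plus $UUID for uniqueness (placeholder values)
[OUTPUT]
    Name          s3
    Match         *
    bucket        your-bucket
    region        us-west-2
    # $INDEX persists in store_dir and keeps incrementing across restarts
    store_dir     /var/fluent-bit/s3
    s3_key_format /logs/$TAG/%Y/%m/%d/$INDEX-$UUID.gz
    compression   gzip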

For files uploaded with the PutObject API, the S3 output requires that a unique random string be present in the S3 key. Many of the use cases for PutObject uploads involve a short time period between uploads, so a timestamp in the S3 key may not be unique enough between uploads. For example, if you only specify minute granularity timestamps in the S3 key, with a small upload size, it is possible to have two uploads that have timestamps set in the same minute. This requirement can be disabled with static_file_path On.

The PutObject API is used in these cases:

  • When you explicitly set use_put_object On.

  • On startup when the S3 output finds old buffer files in the store_dir from a previous run and attempts to send all of them at once.

  • On shutdown. To prevent data loss the S3 output attempts to send all currently buffered data at once.

You should always specify $UUID somewhere in your S3 key format. Otherwise, if the PutObject API is used, S3 appends a random eight-character UUID to the end of your S3 key. This means that a file extension set at the end of an S3 key will have the random UUID appended to it. Disable this behavior with static_file_path On.

For example, the following configuration attempts to set a .gz extension without specifying $UUID:

[OUTPUT]
    Name  s3
    Match *
    bucket                       my-bucket
    region                       us-west-2
    total_file_size              50M
    use_put_object               Off
    compression                  gzip
    s3_key_format                /$TAG/%Y/%m/%d/%H_%M_%S.gz

In the case where pending data is uploaded on shutdown, if the tag was app, the S3 key in the S3 bucket might be:

/app/2022/12/25/00_00_00.gz-apwgylqg

The S3 output appended a random string to the file extension, since this upload on shutdown used the PutObject API.

There are two ways of disabling this behavior:

  • Use static_file_path:

    [OUTPUT]
        Name  s3
        Match *
        bucket                       my-bucket
        region                       us-west-2
        total_file_size              50M
        use_put_object               Off
        compression                  gzip
        s3_key_format                /$TAG/%Y/%m/%d/%H_%M_%S.gz
        static_file_path             On
  • Explicitly define where the random UUID will go in the S3 key format:

    [OUTPUT]
        Name  s3
        Match *
        bucket                       my-bucket
        region                       us-west-2
        total_file_size              50M
        use_put_object               Off
        compression                  gzip
        s3_key_format                /$TAG/%Y/%m/%d/%H_%M_%S/$UUID.gz

Reliability

The store_dir is used to temporarily store data before upload. If Fluent Bit stops suddenly, it will try to send all data and complete all uploads before it shuts down. If it cannot send some data, on restart it will look in the store_dir for existing data and try to send it.

Multipart uploads are ideal for most use cases because they allow the plugin to upload data in small chunks over time. For example, a 1 GB file can be created from 200 5 MB chunks. While the file size in S3 will be 1 GB, only 5 MB is buffered on disk at any one point in time.

Using S3 without persisted disk

If you run Fluent Bit in an environment without persistent disk, or without the ability to restart Fluent Bit and give it access to the data stored in the store_dir from previous executions, some considerations apply. This might occur if you run Fluent Bit on AWS Fargate.

In these situations, we recommend using the PutObject API and sending data frequently, to avoid local buffering as much as possible. This limits data loss in the event Fluent Bit is killed unexpectedly.

The following settings are recommended for this use case:

[OUTPUT]
    Name s3
    Match *
    bucket your-bucket
    region us-east-1
    total_file_size 1M
    upload_timeout 1m
    use_put_object On

S3 Multipart Uploads

With use_put_object Off (the default), S3 will attempt to send files using multipart uploads. For each file, S3 first calls CreateMultipartUpload, then a series of calls to UploadPart for each fragment (targeted to be upload_chunk_size bytes), and finally CompleteMultipartUpload to create the final file in S3.

One drawback to multipart uploads is that the file and data aren't visible in S3 until the upload is completed with a CompleteMultipartUpload call. The plugin attempts to make this call whenever Fluent Bit is shut down to ensure your data is available in S3. It also stores metadata about each upload in the store_dir, ensuring that uploads can be completed when Fluent Bit restarts (assuming it has access to a persistent disk and the store_dir files are still present on restart).

Fallback to PutObject

S3 requires each UploadPart fragment to be at least 5,242,880 bytes, otherwise the upload is rejected, so the S3 output must sometimes fall back to the PutObject API.

Uploads are triggered by these settings:

  • total_file_size and upload_chunk_size: When S3 has buffered data in the store_dir that meets the desired total_file_size (for use_put_object On) or the upload_chunk_size (for multipart), it will trigger an upload operation.

  • upload_timeout: Whenever locally buffered data has been present on the filesystem in the store_dir longer than the configured upload_timeout, it will be sent even if the desired byte size hasn't been reached. If you configure a small upload_timeout, your files may be smaller than the total_file_size. The timeout is evaluated against the time at which S3 started buffering data for each unique tag (that is, the time when new data was buffered for the unique tag after the last upload). The timeout is also evaluated against the CreateMultipartUpload time, so a multipart upload will be completed after upload_timeout has elapsed, even if the desired size has not yet been reached.

If your upload_timeout triggers an upload before the pending buffered data reaches the upload_chunk_size, it may be too small for a multipart upload, and S3 will fall back to the PutObject API.

When you enable compression, S3 applies the compression algorithm at send time. The size settings trigger uploads based on the size of buffered data, not the final compressed size. It's possible that after compression, the buffered data no longer meets the required minimum S3 part size. If this occurs, you will see a log message like:

[ info] [output:s3:s3.0] Pre-compression upload_chunk_size= 5630650, After compression, chunk is only 1063320 bytes, the chunk was too small, using PutObject to upload

If you encounter this frequently, use the numbers in the messages to estimate your compression factor. In this example, the buffered data was reduced from 5,630,650 bytes to 1,063,320 bytes, so the compressed size is roughly one-fifth of the uncompressed size. Configuring upload_chunk_size 30M should ensure each part is large enough after compression to exceed the minimum required part size of 5,242,880 bytes.

The S3 API allows the last part in an upload to be less than the 5,242,880-byte minimum. If a part is too small for an existing upload, the S3 output will upload that part and then complete the upload.

upload_timeout constrains total multipart upload time for a single file

The upload_timeout is evaluated against the CreateMultipartUpload time. A multipart upload will be completed after upload_timeout elapses, even if the desired size has not yet been reached.

Completing uploads

When CreateMultipartUpload is called, an UploadID is returned. S3 stores these IDs for active uploads in the store_dir. Until CompleteMultipartUpload is called, the uploaded data isn't visible in S3.

On shutdown, the S3 output attempts to complete all pending uploads. If an upload fails to complete, the UploadID remains buffered in the store_dir in a directory called multipart_upload_metadata. If you restart the S3 output with the same store_dir, it will discover the old UploadIDs and complete the pending uploads. The S3 documentation has suggestions on discovering and deleting or completing dangling uploads in your buckets.
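If you need to inspect or clean up dangling uploads yourself, the AWS CLI can list and abort them. A short sketch follows; the bucket name, object key, and upload ID are placeholders:

# List multipart uploads that were started but never completed
aws s3api list-multipart-uploads --bucket your-bucket

# Abort a specific dangling upload so its parts stop accruing storage costs
aws s3api abort-multipart-upload --bucket your-bucket --key your-key --upload-id your-upload-id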

Usage with MinIO

MinIO is a high-performance, S3-compatible object store, so you can build your app with S3 functionality without using S3 itself.

The following example assumes a MinIO server running at localhost:9000 with a bucket named your-bucket:

[OUTPUT]
   Name s3
   Match *
   bucket your-bucket
   endpoint http://localhost:9000

The records will then be stored in the MinIO server.

Get Started

To send records into Amazon S3, you can run the plugin from the command line or through the configuration file.

Command Line

The S3 plugin reads parameters from the command line through the -p argument:

fluent-bit -i cpu -o s3 -p bucket=my-bucket -p region=us-west-2 -m '*' -f 1

Configuration File

In your main configuration file, append the following [OUTPUT] section:

[OUTPUT]
    Name s3
    Match *
    bucket your-bucket
    region us-east-1
    store_dir /home/ec2-user/buffer
    total_file_size 50M
    upload_timeout 10m

An example using PutObject instead of multipart:

[OUTPUT]
    Name s3
    Match *
    bucket your-bucket
    region us-east-1
    store_dir /home/ec2-user/buffer
    use_put_object On
    total_file_size 10M
    upload_timeout 10m
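If you use the YAML configuration format instead of classic mode, an equivalent sketch of the first example above might look like the following; the bucket name and buffer path are placeholders:

# Sketch: YAML-format equivalent of the classic-mode example (placeholder values)
pipeline:
  outputs:
    - name: s3
      match: '*'
      bucket: your-bucket
      region: us-east-1
      store_dir: /home/ec2-user/buffer
      total_file_size: 50M
      upload_timeout: 10m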

AWS for Fluent Bit

Amazon distributes a container image with Fluent Bit and plugins.

GitHub

See the AWS for Fluent Bit GitHub repo (github.com/aws/aws-for-fluent-bit) for more details.

Amazon ECR Public Gallery

Our images are available in the Amazon ECR Public Gallery as aws-for-fluent-bit. You can download images with different tags using the following command:

docker pull public.ecr.aws/aws-observability/aws-for-fluent-bit:<tag>

For example, you can pull the latest image with:

docker pull public.ecr.aws/aws-observability/aws-for-fluent-bit:latest

If you see errors for image pull limits, try signing in to public ECR with your AWS credentials:

aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin public.ecr.aws

For more information, see the Amazon ECR Public official documentation.

Docker Hub

amazon/aws-for-fluent-bit is also available from Docker Hub.

Amazon ECR

Use our SSM Public Parameters to find the Amazon ECR image URI in your region:

aws ssm get-parameters-by-path --path /aws/service/aws-for-fluent-bit/
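For example, to look up a single image URI directly, a sketch like the following may work, assuming a latest parameter is published under that path in your region:

# Sketch: fetch one SSM public parameter (parameter name is an assumption)
aws ssm get-parameter --name /aws/service/aws-for-fluent-bit/latest --region us-east-1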

Advanced usage

Use Apache Arrow for in-memory data processing

With Fluent Bit v1.8 or greater, the Amazon S3 plugin includes support for Apache Arrow. Support isn't enabled by default and depends on a shared version of libarrow.

To use this feature, FLB_ARROW must be turned on at compile time. Use the following commands:

cd build/
cmake -DFLB_ARROW=On ..
cmake --build .

After being compiled, Fluent Bit can upload incoming data to S3 in Apache Arrow format.

For example:

[INPUT]
    Name cpu

[OUTPUT]
    Name s3
    Bucket your-bucket-name
    total_file_size 1M
    use_put_object On
    upload_timeout 60s
    Compression arrow

Setting Compression to arrow makes Fluent Bit convert the payload into Apache Arrow format.

You can load, analyze, and process the stored data using popular data-processing tools such as Python pandas, Apache Spark, and TensorFlow.

The following example uses pyarrow to analyze the uploaded data:

>>> import pyarrow.feather as feather
>>> import pyarrow.fs as fs
>>>
>>> s3 = fs.S3FileSystem()
>>> file = s3.open_input_file("my-bucket/fluent-bit-logs/cpu.0/2021/04/27/09/36/15-object969o67ZF")
>>> df = feather.read_feather(file)
>>> print(df.head())
   date  cpu_p  user_p  system_p  cpu0.p_cpu  cpu0.p_user  cpu0.p_system
0  2021-04-27T09:33:53.539346Z    1.0     1.0       0.0         1.0          1.0            0.0
1  2021-04-27T09:33:54.539330Z    0.0     0.0       0.0         0.0          0.0            0.0
2  2021-04-27T09:33:55.539305Z    1.0     0.0       1.0         1.0          0.0            1.0
3  2021-04-27T09:33:56.539430Z    0.0     0.0       0.0         0.0          0.0            0.0
4  2021-04-27T09:33:57.539803Z    0.0     0.0       0.0         0.0          0.0            0.0
