Introduction

Fluent Bit is a Fast and Lightweight Log Processor and Forwarder for Linux, OSX and BSD family operating systems. It has been made with a strong focus on performance to allow the collection of events from different sources without complexity.

Fluent Bit is part of the Fluentd project ecosystem and is licensed under the terms of the Apache License v2.0. This project is made and sponsored by Treasure Data.

Supported Platforms

The following operating systems and architectures are supported in Fluent Bit.

| Operating System | Distribution | Architecture |
| --- | --- | --- |
| Linux | CentOS 7 | x86_64 |
| | Debian 8 (Jessie) | x86_64 |
| | Debian 9 (Stretch) | x86_64 |
| | Raspbian 8 (Debian Jessie) | AArch32 |
| | Raspbian 9 (Debian Stretch) | AArch32 |
| | Ubuntu 16.04 (Xenial Xerus) | x86_64 |
| | Ubuntu 18.04 (Bionic Beaver) | x86_64 |

From an architecture support perspective, Fluent Bit is fully functional on x86, x86_64, AArch32 and AArch64 based processors.

Fluent Bit can also work on OSX and *BSD systems, but not all plugins will be available on all platforms. Official support will be expanded based on community demand.

License

                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

Download Sources

Stable

For production systems, we strongly suggest that you always get the latest stable release from our web site. You can get the official tarballs (.tar.gz) from the following link:

http://fluentbit.io/download/

Development

People who aim to contribute to the project by testing or extending the code base can get the development version from our GIT repository:

$ git clone https://github.com/fluent/fluent-bit

Note that our master branch is where the development of Fluent Bit happens. Since it's a development version, expect issues when compiling or at run time.

We encourage everybody to help us test every development version; in the end, this is what will become stable.

Fluent Bit, including its core, plugins and tools, is distributed under the terms of the Apache License v2.0.

About

Fluent Bit is an open source and multi-platform log forwarder tool which aims to be a generic Swiss knife for log collection and distribution.

We, Treasure Data, as a Big Data company, provide an analytics infrastructure in the Cloud offering an end-to-end solution to collect, store and do analytics over the data. Fluent Bit is an integral part of this pipeline, where it solves the log collection needs.

Being an open source project, it has been widely adopted to solve logging needs in Cloud Native environments where Docker and Kubernetes are key components; Fluent Bit is a natural fit.

Why Fluent Bit?

Data collection and log forwarding is hard.

Nowadays the number of sources of information in our environments is ever increasing. Handling data collection at scale is complex, and collecting and aggregating diverse data requires a specialized tool that can deal with:

  • Different sources of information.

  • Different data formats.

  • Multiple destinations.

Fluent Bit was born to address the need for a high performance and optimized tool that can collect data from any input source, unify that data and deliver it to multiple destinations.

Build and Install

Prepare environment

In the following steps you can find the exact commands to build and install the project with the default options. If you already know how CMake works, you can skip this part and look at the build options available.

Fluent Bit uses CMake as its build system. The suggested procedure to prepare the build system consists of the following steps.

Change to the build/ directory inside the Fluent Bit sources:

$ cd build/

Let's configure the project specifying where the root path is located:

$ cmake ../
-- The C compiler identification is GNU 4.9.2
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- The CXX compiler identification is GNU 4.9.2
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
...
-- Could NOT find Doxygen (missing:  DOXYGEN_EXECUTABLE)
-- Looking for accept4
-- Looking for accept4 - not found
-- Configuring done
-- Generating done
-- Build files have been written to: /home/edsiper/coding/fluent-bit/build

Now you are ready to start the compilation process through the simple make command:

$ make
Scanning dependencies of target msgpack
[  2%] Building C object lib/msgpack-1.1.0/CMakeFiles/msgpack.dir/src/unpack.c.o
[  4%] Building C object lib/msgpack-1.1.0/CMakeFiles/msgpack.dir/src/objectc.c.o
[  7%] Building C object lib/msgpack-1.1.0/CMakeFiles/msgpack.dir/src/version.c.o
...
[ 19%] Building C object lib/monkey/mk_core/CMakeFiles/mk_core.dir/mk_file.c.o
[ 21%] Building C object lib/monkey/mk_core/CMakeFiles/mk_core.dir/mk_rconf.c.o
[ 23%] Building C object lib/monkey/mk_core/CMakeFiles/mk_core.dir/mk_string.c.o
...
Scanning dependencies of target fluent-bit-static
[ 66%] Building C object src/CMakeFiles/fluent-bit-static.dir/flb_pack.c.o
[ 69%] Building C object src/CMakeFiles/fluent-bit-static.dir/flb_input.c.o
[ 71%] Building C object src/CMakeFiles/fluent-bit-static.dir/flb_output.c.o
...
Linking C executable ../bin/fluent-bit
[100%] Built target fluent-bit-bin

To continue installing the binary on the system, just do:

$ make install

It's likely you may need root privileges, so you can try prefixing the command with sudo.

Build Options

Fluent Bit provides certain options to CMake that can be enabled or disabled when configuring. Please refer to the following tables under the General Options, Input Plugins and Output Plugins sections.

General Options

| option | description | default |
| --- | --- | --- |
| FLB_ALL | Enable all features available | No |
| FLB_DEBUG | Build binaries with debug symbols | No |
| FLB_JEMALLOC | Use Jemalloc as default memory allocator | No |
| FLB_TLS | Builds with SSL/TLS support | No |
| FLB_BINARY | Build executable | Yes |
| FLB_EXAMPLES | Build examples | Yes |
| FLB_SHARED_LIB | Build shared library | Yes |
| FLB_VALGRIND | Enable Valgrind support | No |
| FLB_TRACE | Enable trace mode | No |
| FLB_TESTS_RUNTIME | Enable runtime tests | No |
| FLB_TESTS_INTERNAL | Enable internal tests | No |
| FLB_TESTS | Enable tests | No |
| FLB_MTRACE | Enable mtrace support | No |
| FLB_INOTIFY | Enable Inotify support | Yes |
| FLB_POSIX_TLS | Force POSIX thread storage | No |
| FLB_SQLDB | Enable SQL embedded database support | No |
| FLB_HTTP_SERVER | Enable HTTP Server | No |
| FLB_BACKTRACE | Enable backtrace/stacktrace support | Yes |
| FLB_LUAJIT | Enable Lua scripting support | Yes |
| FLB_STATIC_CONF | Build binary using static configuration files. The value of this option must be a directory containing configuration files. | |

Input Plugins

The input plugins provide certain features to gather information from a specific source type, which can be a network interface, some built-in metric or a specific input device. The following input plugins are available:

| option | description | default |
| --- | --- | --- |
| FLB_IN_CPU | Enable CPU input plugin | On |
| FLB_IN_FORWARD | Enable Forward input plugin | On |
| FLB_IN_HEAD | Enable Head input plugin | On |
| FLB_IN_HEALTH | Enable Health input plugin | On |
| FLB_IN_KMSG | Enable Kernel log input plugin | On |
| FLB_IN_MEM | Enable Memory input plugin | On |
| FLB_IN_RANDOM | Enable Random input plugin | On |
| FLB_IN_SERIAL | Enable Serial input plugin | On |
| FLB_IN_STDIN | Enable Standard input plugin | On |
| FLB_IN_TCP | Enable TCP input plugin | On |
| FLB_IN_MQTT | Enable MQTT input plugin | On |
| FLB_IN_XBEE | Enable Xbee input plugin | Off |

Output Plugins

The output plugins give the capacity to flush the information to some external interface, service or terminal. The following table describes the output plugins available as of this version:

| option | description | default |
| --- | --- | --- |
| FLB_OUT_ES | Enable Elastic Search output plugin | On |
| FLB_OUT_FORWARD | Enable Fluentd output plugin | On |
| FLB_OUT_HTTP | Enable HTTP output plugin | On |
| FLB_OUT_NATS | Enable NATS output plugin | Off |
| FLB_OUT_PLOT | Enable Plot output plugin | On |
| FLB_OUT_STDOUT | Enable STDOUT output plugin | On |
| FLB_OUT_TD | Enable Treasure Data output plugin | On |
| FLB_OUT_NULL | Enable /dev/null output plugin | On |

Build with Static Configuration

Static configuration mode aims to include a built-in configuration in the final binary of Fluent Bit, disabling the usage of external files or flags at runtime.

Fluent Bit in normal operation mode allows to be configured through text files or using specific arguments in the command line. While this is the ideal deployment case, there are scenarios where a more restricted configuration is required: static configuration mode.

Getting Started

Requirements

The following steps assume you are familiar with configuring Fluent Bit using text files and that you have experience building it from scratch as described in the Build and Install section.

Configuration Directory

In your file system prepare a specific directory that will be used as an entry point for the build system to lookup and parse the configuration files. It is mandatory that this directory contains, as a minimum, one configuration file called fluent-bit.conf with the required SERVICE, INPUT and OUTPUT sections. As an example, create a new fluent-bit.conf file with the following content:

[SERVICE]
    Flush     1
    Daemon    off
    Log_Level info

[INPUT]
    Name      cpu

[OUTPUT]
    Name      stdout
    Match     *

The configuration provided above will calculate CPU metrics from the running system and print them to the standard output interface.

Build with Custom Configuration

Inside the Fluent Bit source code, get into the build/ directory and run CMake appending the FLB_STATIC_CONF option pointing to the configuration directory recently created, e.g:

$ cd fluent-bit/build/
$ cmake -DFLB_STATIC_CONF=/path/to/my/confdir/

then build it:

$ make

At this point the fluent-bit binary generated is ready to run without the need of further configuration:

$ bin/fluent-bit 
Fluent-Bit v0.15.0
Copyright (C) Treasure Data

[2018/10/19 15:32:31] [ info] [engine] started (pid=15186)
[0] cpu.local: [1539984752.000347547, {"cpu_p"=>0.750000, "user_p"=>0.500000, "system_p"=>0.250000, "cpu0.p_cpu"=>1.000000, "cpu0.p_user"=>1.000000, "cpu0.p_system"=>0.000000, "cpu1.p_cpu"=>0.000000, "cpu1.p_user"=>0.000000, "cpu1.p_system"=>0.000000, "cpu2.p_cpu"=>0.000000, "cpu2.p_user"=>0.000000, "cpu2.p_system"=>0.000000, "cpu3.p_cpu"=>1.000000, "cpu3.p_user"=>1.000000, "cpu3.p_system"=>0.000000}]



Installation

The following section will guide you through the steps to download, build and install Fluent Bit from sources, as well as specific instructions for the installation of binaries that we already distribute for Debian/Ubuntu/Redhat/CentOS and Raspberry Pi.

If you find a problem on a certain step, don't hesitate to report it on our bug tracker:

https://github.com/fluent/fluent-bit/issues

Fluentd & Fluent Bit

Data collection matters, and nowadays the scenarios from where the information can come from are very variable. Hence, to be more flexible in certain market needs, we may need different options. On this page, we will describe the relationship between the Fluentd and Fluent Bit open source projects.

Fluentd and Fluent Bit are both created and sponsored by Treasure Data, and they aim to solve the collection, processing, and delivery of Logs.

Both projects share a lot of similarities: Fluent Bit is fully based on the design and experience of Fluentd's architecture and general design. Choosing which one to use depends on the final needs; from an architecture perspective we can consider:

  • Fluentd is a log collector, processor, and aggregator.

  • Fluent Bit is a log collector and processor (it doesn't have strong aggregation features like Fluentd).

The following table describes a comparison in different areas of the projects:

| | Fluentd | Fluent Bit |
| --- | --- | --- |
| Scope | Containers / Servers | Containers / Servers |
| Language | C & Ruby | C |
| Memory | ~40MB | ~450KB |
| Performance | High Performance | High Performance |
| Dependencies | Built as a Ruby Gem, it requires a certain number of gems. | Zero dependencies, unless some special plugin requires them. |
| Plugins | More than 650 plugins available | Around 35 plugins available |
| License | Apache License v2.0 | Apache License v2.0 |

Considering Fluentd mainly as an Aggregator and Fluent Bit as a Log Forwarder, we can see both projects complement each other, providing a full reliable solution.

Requirements

Fluent Bit uses very low CPU and Memory consumption; it's compatible with most x86, x86_64, AArch32 and AArch64 based platforms. In order to build it you need the following components in your system:

  • Compiler: GCC or clang

  • CMake

  • Flex (only if Stream Processor is enabled)

  • Bison (only if Stream Processor is enabled)

There are no other dependencies besides libc and pthreads in the most basic mode. Certain features that depend on third party components have those components included in the main source code repository.
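For reference, on a Debian or Ubuntu based system these requirements can usually be installed with a command along the following lines (package names are illustrative and may vary between distributions and releases):

$ sudo apt-get install build-essential cmake flex bison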

Upgrade Notes

The following article covers the relevant notes for users upgrading from previous Fluent Bit versions. We aim to cover compatibility changes that you must be aware of. For more details about changes on each release, please refer to the Official Release Notes.

Fluent Bit v1.3

If you are migrating from Fluent Bit v1.2 to v1.3, there are no breaking changes. If you are upgrading from an older version, please review the incremental changes below.

Fluent Bit v1.2

Docker, JSON, Parsers and Decoders

In Fluent Bit v1.2 we fixed many issues associated with JSON encoding and decoding; hence, when parsing Docker logs it is no longer necessary to use decoders. The new Docker parser looks like this:

[PARSER]
    Name         docker
    Format       json
    Time_Key     time
    Time_Format  %Y-%m-%dT%H:%M:%S.%L
    Time_Keep    On

Note: again, do not use decoders.

Kubernetes Filter

We have also improved how the Kubernetes Filter handles the stringified log message. If the option Merge_Log is enabled, it will try to handle the log content as a JSON map; if so, it will add the keys to the root map.

In addition, we have fixed and improved the option called Merge_Log_Key. If a merge log succeeds, all new keys will be packaged under the key specified by this option. A suggested configuration is as follows:

[FILTER]
    Name             Kubernetes
    Match            kube.*
    Kube_Tag_Prefix  kube.var.log.containers.
    Merge_Log        On
    Merge_Log_Key    log_processed

As an example, if the original log content is the following map:

{"key1": "val1", "key2": "val2"}

the final record will be composed as follows:

{
    "log": "{\"key1\": \"val1\", \"key2\": \"val2\"}",
    "log_processed": {
        "key1": "val1",
        "key2": "val2"
    }
}

Fluent Bit v1.1

If you are upgrading from Fluent Bit <= 1.0.x you should take into consideration the following relevant changes when switching to the Fluent Bit v1.1 series:

Kubernetes Filter

We introduced a new configuration property called Kube_Tag_Prefix to help with Tag prefix resolution and to address an unexpected behavior that landed in previous versions.

During the 1.0.x release cycle, a commit in the Tail input plugin changed the default behavior of how the Tag was composed when using the wildcard for expansion, breaking compatibility with other services. Consider the following configuration example:

[INPUT]
    Name  tail
    Path  /var/log/containers/*.log
    Tag   kube.*

The expected behavior is that Tag will be expanded to:

kube.var.log.containers.apache.log

but the change introduced in the 1.0 series switched from the absolute path to the base file name only:

kube.apache.log

In the Fluent Bit v1.1 release we restored our default behavior; the Tag is now composed using the absolute path of the monitored file.

Having the absolute path in the Tag is relevant for routing and flexible configuration, and it also helps to keep compatibility with Fluentd behavior.

This behavior switch in the Tail input plugin affects how the Kubernetes Filter operates. When the filter is used, it needs to perform a local metadata lookup based on the file names when using Tail as a source. With the new Kube_Tag_Prefix option you can specify the prefix used in the Tail input plugin; for the configuration example above, the new configuration will look as follows:

[INPUT]
    Name  tail
    Path  /var/log/containers/*.log
    Tag   kube.*

[FILTER]
    Name             kubernetes
    Match            *
    Kube_Tag_Prefix  kube.var.log.containers.

So the proper value for Kube_Tag_Prefix must be composed of the Tag prefix set in the Tail input plugin plus the monitored directory converted by replacing slashes with dots.

Docker Images

Fluent Bit container images are available on Docker Hub ready for production usage. Current available images can be deployed in multiple architectures.

Tags and Versions

The following table describes the tags that are available on the fluent/fluent-bit Docker Hub repository:

| Tag(s) | Manifest Architectures | Description |
| --- | --- | --- |
| 1.3 | x86_64, arm64v8, arm32v7 | Latest release of 1.3.x series. |
| 1.3-debug | x86_64 | v1.3.x releases + Busybox |
| 1.3.11 | x86_64, arm64v8, arm32v7 | Release v1.3.11 |
| 1.3.11-debug | x86_64 | v1.3.11 release + Busybox |
| 1.3.10 | x86_64, arm64v8, arm32v7 | Release v1.3.10 |
| 1.3.10-debug | x86_64 | v1.3.10 release + Busybox |
| 1.3.9 | x86_64, arm64v8, arm32v7 | Release v1.3.9 |
| 1.3.9-debug | x86_64 | v1.3.9 release + Busybox |
| 1.3.8 | x86_64, arm64v8, arm32v7 | Release v1.3.8 |
| 1.3.8-debug | x86_64 | v1.3.8 release + Busybox |
| 1.3.7 | x86_64, arm64v8, arm32v7 | Release v1.3.7 |
| 1.3.7-debug | x86_64 | v1.3.7 release + Busybox |
| 1.3.6 | x86_64, arm64v8, arm32v7 | Release v1.3.6 |
| 1.3.6-debug | x86_64 | v1.3.6 release + Busybox |
| 1.3.5 | x86_64, arm64v8, arm32v7 | Release v1.3.5 |
| 1.3.5-debug | x86_64 | v1.3.5 release + Busybox |
| 1.3.4 | x86_64, arm64v8, arm32v7 | Release v1.3.4 |
| 1.3.4-debug | x86_64 | v1.3.4 release + Busybox |
| 1.3.3 | x86_64, arm64v8, arm32v7 | Release v1.3.3 |
| 1.3.3-debug | x86_64 | v1.3.3 release + Busybox |
| 1.3.2 | x86_64, arm64v8, arm32v7 | Release v1.3.2 |
| 1.3.2-debug | x86_64 | v1.3.2 release + Busybox |
| 1.3.1 | x86_64, arm64v8, arm32v7 | Release v1.3.1 |
| 1.3.1-debug | x86_64 | v1.3.1 release + Busybox |
| 1.3.0 | x86_64, arm64v8, arm32v7 | Release v1.3.0 |
| 1.3.0-debug | x86_64 | v1.3.0 release + Busybox |

Our x86_64 stable image is based on Distroless, focusing on security: it contains just the Fluent Bit binary, minimal system libraries and basic configuration. Optionally, we provide debug images for x86_64 which contain Busybox and can be used for troubleshooting or testing purposes.

It's strongly suggested that you always use the latest image of Fluent Bit.

Multi Architecture Images

In addition, the main manifest provides images for the arm64v8 and arm32v7 architectures. From a deployment perspective there is no need to specify an architecture; the container client tool that pulls the image gets the proper layer for the running architecture.

For every architecture we build the layers using the following base images:

| Architecture | Base Image |
| --- | --- |
| x86_64 | Distroless |
| arm64v8 | arm64v8/debian:buster-slim |
| arm32v7 | arm32v7/debian:buster-slim |

Getting Started

Download the latest stable image from the 1.3 series:

$ docker pull fluent/fluent-bit:1.3

Once the image is in place, run the following (useless) test which makes Fluent Bit measure CPU usage by the container:

$ docker run -ti fluent/fluent-bit:1.3 /fluent-bit/bin/fluent-bit -i cpu -o stdout -f 1

That command will let Fluent Bit measure CPU usage every second and flush the results to the standard output, e.g:

Fluent-Bit v1.3.x
Copyright (C) Treasure Data

[2019/10/01 12:29:02] [ info] [engine] started
[0] cpu.0: [1504290543.000487750, {"cpu_p"=>0.750000, "user_p"=>0.250000, "system_p"=>0.500000, "cpu0.p_cpu"=>0.000000, "cpu0.p_user"=>0.000000, "cpu0.p_system"=>0.000000, "cpu1.p_cpu"=>1.000000, "cpu1.p_user"=>0.000000, "cpu1.p_system"=>1.000000, "cpu2.p_cpu"=>1.000000, "cpu2.p_user"=>1.000000, "cpu2.p_system"=>0.000000, "cpu3.p_cpu"=>0.000000, "cpu3.p_user"=>0.000000, "cpu3.p_system"=>0.000000}]

F.A.Q

Why is there no Fluent Bit Docker image based on Alpine Linux?

Alpine Linux uses the Musl C library instead of Glibc. Musl is not fully compatible with Glibc, which generated many issues in the following areas when used with Fluent Bit:

  • Memory Allocator: to run Fluent Bit properly in high-load environments, we use Jemalloc as the default memory allocator, which reduces fragmentation and provides better performance for our needs. Jemalloc cannot run smoothly with Musl and requires extra work.

  • The Alpine Linux Musl functions bootstrap has a compatibility issue when loading Golang shared libraries; this generates problems when trying to load Golang output plugins in Fluent Bit.

  • The Alpine Linux Musl time format parser does not support Glibc extensions.

  • The maintainers' preference in terms of base images, due to security and maintenance reasons, is Distroless and Debian.

Where does the 'latest' Tag point to?

Our Docker container images are deployed thousands of times per day, and we take security and stability very seriously.

The latest tag most of the time points to the latest stable image. When we release a major update to Fluent Bit, for example from v1.2.x to v1.3.0, we don't move the latest tag until 2 weeks after the release. That gives us extra time to verify with our community that everything works as expected.

Kubernetes

Fluent Bit is a lightweight and extensible Log Processor that comes with full support for Kubernetes:

  • Read Kubernetes/Docker log files from the file system or through Systemd Journal.

  • Enrich logs with Kubernetes metadata.

  • Deliver logs to third party storage services like Elasticsearch, InfluxDB, HTTP, etc.

Content:

  • Concepts

  • Installation Steps

Concepts

Before getting started it is important to understand how Fluent Bit will be deployed. Kubernetes manages a cluster of nodes, so our log agent tool will need to run on every node to collect logs from every POD; hence Fluent Bit is deployed as a DaemonSet (a POD that runs on every node of the cluster).

When Fluent Bit runs, it will read, parse and filter the logs of every POD and will enrich each entry with the following information (metadata):

  • POD Name

  • POD ID

  • Container Name

  • Container ID

  • Labels

  • Annotations

To obtain this information, a built-in filter plugin called kubernetes talks to the Kubernetes API Server to retrieve relevant information such as the pod_id, labels and annotations; other fields such as pod_name, container_id and container_name are retrieved locally from the log file names. All of this is handled automatically, no intervention is required from a configuration aspect.

Our Kubernetes Filter plugin is fully inspired by the Fluentd Kubernetes Metadata Filter written by Jimmi Dyson.

Installation

Fluent Bit must be deployed as a DaemonSet, so it will be available on every node of your Kubernetes cluster. To get started, run the following commands to create the namespace, service account and role setup:

$ kubectl create namespace logging
$ kubectl create -f https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/fluent-bit-service-account.yaml
$ kubectl create -f https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/fluent-bit-role.yaml
$ kubectl create -f https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/fluent-bit-role-binding.yaml

The next step is to create a ConfigMap that will be used by our Fluent Bit DaemonSet:

$ kubectl create -f https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/output/elasticsearch/fluent-bit-configmap.yaml

Note for Kubernetes v1.16

Starting from Kubernetes v1.16, DaemonSet resources are no longer served from extensions/v1beta1. Our current DaemonSet YAML files use the old apiVersion.

If you are using Kubernetes v1.16, manually grab a copy of your DaemonSet YAML file and replace the value of apiVersion from:

apiVersion: extensions/v1beta1

to

apiVersion: apps/v1

You can read more about this deprecation in the Kubernetes v1.14 Changelog here:

https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.14.md#deprecations

Fluent Bit to Elasticsearch

Fluent Bit DaemonSet ready to be used with Elasticsearch on a normal Kubernetes Cluster:

$ kubectl create -f https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/output/elasticsearch/fluent-bit-ds.yaml

Fluent Bit to Elasticsearch on Minikube

If you are using Minikube for testing purposes, use the following alternative DaemonSet manifest:

$ kubectl create -f https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/output/elasticsearch/fluent-bit-ds-minikube.yaml

Details

The default configuration of Fluent Bit makes sure of the following:

  • Consume all containers logs from the running Node.

  • The Tail input plugin will not append more than 5MB into the engine until they are flushed to the Elasticsearch backend. This limit aims to provide a workaround for backpressure scenarios.

  • The Kubernetes filter will enrich the logs with Kubernetes metadata, specifically labels and annotations. The filter only goes to the API Server when it cannot find the cached info, otherwise it uses the cache.

  • The default backend in the configuration is Elasticsearch, set by the Elasticsearch Output Plugin. It uses the Logstash format to ingest the logs. If you need a different Index and Type, please refer to the plugin options and do your own adjustments.

  • There is an option called Retry_Limit set to False, which means that if Fluent Bit cannot flush the records to Elasticsearch it will retry indefinitely until it succeeds.

Debian Packages

Fluent Bit is distributed as the td-agent-bit package and is available for the latest (and older) stable Debian systems: Buster, Stretch and Jessie. This stable Fluent Bit distribution package is maintained by Treasure Data, Inc.

Server GPG key

The first step is to add our server GPG key to your keyring; this way you can get our signed packages:

$ wget -qO - https://packages.fluentbit.io/fluentbit.key | sudo apt-key add -

Update your sources lists

On Debian, you need to add our APT server entry to your sources lists. Please add the following content at the bottom of your /etc/apt/sources.list file:

Debian 10 (Buster)

deb https://packages.fluentbit.io/debian/buster buster main

Debian 9 (Stretch)

deb https://packages.fluentbit.io/debian/stretch stretch main

Debian 8 (Jessie)

deb https://packages.fluentbit.io/debian/jessie jessie main

Update your repositories database

Now let your system update the apt database:

$ sudo apt-get update

Install TD Agent Bit

Using the following apt-get command you are now able to install the latest td-agent-bit:

$ sudo apt-get install td-agent-bit

The next step is to instruct systemd to enable the service:

$ sudo service td-agent-bit start

If you do a status check, you should see a similar output like this:

sudo service td-agent-bit status
● td-agent-bit.service - TD Agent Bit
   Loaded: loaded (/lib/systemd/system/td-agent-bit.service; disabled; vendor preset: enabled)
   Active: active (running) since mié 2016-07-06 16:58:25 CST; 2h 45min ago
 Main PID: 6739 (td-agent-bit)
    Tasks: 1
   Memory: 656.0K
      CPU: 1.393s
   CGroup: /system.slice/td-agent-bit.service
           └─6739 /opt/td-agent-bit/bin/td-agent-bit -c /etc/td-agent-bit/td-agent-bit.conf
...

The default configuration of td-agent-bit collects metrics of CPU usage and sends the records to the standard output; you can see the outgoing data in your /var/log/syslog file.

Ubuntu Packages

Fluent Bit is distributed as the td-agent-bit package and is available for the latest stable Ubuntu system: Xenial Xerus. This stable Fluent Bit distribution package is maintained by Treasure Data, Inc.

Server GPG key

The first step is to add our server GPG key to your keyring; this way you can get our signed packages:

$ wget -qO - https://packages.fluentbit.io/fluentbit.key | sudo apt-key add -

Update your sources lists

On Ubuntu, you need to add our APT server entry to your sources lists. Please add the following content at the bottom of your /etc/apt/sources.list file:

Ubuntu 18.04 LTS (Bionic Beaver)

deb https://packages.fluentbit.io/ubuntu/bionic bionic main

Ubuntu 16.04 LTS (Xenial Xerus)

deb https://packages.fluentbit.io/ubuntu/xenial xenial main

Update your repositories database

Now let your system update the apt database:

$ sudo apt-get update

Install TD-Agent Bit

Using the following apt-get command you are now able to install the latest td-agent-bit:

$ sudo apt-get install td-agent-bit

The next step is to instruct systemd to enable the service:

$ sudo service td-agent-bit start

If you do a status check, you should see a similar output like this:

sudo service td-agent-bit status
● td-agent-bit.service - TD Agent Bit
   Loaded: loaded (/lib/systemd/system/td-agent-bit.service; disabled; vendor preset: enabled)
   Active: active (running) since mié 2016-07-06 16:58:25 CST; 2h 45min ago
 Main PID: 6739 (td-agent-bit)
    Tasks: 1
   Memory: 656.0K
      CPU: 1.393s
   CGroup: /system.slice/td-agent-bit.service
           └─6739 /opt/td-agent-bit/bin/td-agent-bit -c /etc/td-agent-bit/td-agent-bit.conf
...

The default configuration of td-agent-bit collects metrics of CPU usage and sends the records to the standard output; you can see the outgoing data in your /var/log/syslog file.

Raspberry Pi

Fluent Bit is distributed as the td-agent-bit package and is available for the Raspberry Pi, specifically for Raspbian 8. This stable Fluent Bit distribution package is maintained by Treasure Data, Inc.

Server GPG key

The first step is to add our server GPG key to your keyring; this way you can get our signed packages:

$ wget -qO - https://packages.fluentbit.io/fluentbit.key | sudo apt-key add -

Update your sources lists

On Debian and derived systems such as Raspbian, you need to add our APT server entry to your sources lists. Please add the following content at the bottom of your /etc/apt/sources.list file:

Raspbian 9 (Stretch)

deb https://packages.fluentbit.io/raspbian/stretch stretch main

Raspbian 8 (Jessie)

deb https://packages.fluentbit.io/raspbian/jessie jessie main

Update your repositories database

Now let your system update the apt database:

$ sudo apt-get update

Install TD-Agent Bit

Using the following apt-get command you are now able to install the latest td-agent-bit:

$ sudo apt-get install td-agent-bit

The next step is to instruct systemd to enable the service:

$ sudo service td-agent-bit start

If you do a status check, you should see a similar output like this:

sudo service td-agent-bit status
● td-agent-bit.service - TD Agent Bit
   Loaded: loaded (/lib/systemd/system/td-agent-bit.service; disabled; vendor preset: enabled)
   Active: active (running) since mié 2016-07-06 16:58:25 CST; 2h 45min ago
 Main PID: 6739 (td-agent-bit)
    Tasks: 1
   Memory: 656.0K
      CPU: 1.393s
   CGroup: /system.slice/td-agent-bit.service
           └─6739 /opt/td-agent-bit/bin/td-agent-bit -c /etc/td-agent-bit/td-agent-bit.conf
...

The default configuration of td-agent-bit collects metrics of CPU usage and sends the records to the standard output; you can see the outgoing data in your /var/log/syslog file.

CentOS Packages

Install on Redhat / CentOS

Fluent Bit is distributed as the td-agent-bit package and is available for the latest stable CentOS system. This stable Fluent Bit distribution package is maintained by Treasure Data, Inc.

Configure Yum

We provide td-agent-bit through a Yum repository. In order to add the repository reference to your system, please add a new file called td-agent-bit.repo in /etc/yum.repos.d/ with the following content:

[td-agent-bit]
name = TD Agent Bit
baseurl = https://packages.fluentbit.io/centos/7/$basearch/
gpgcheck=1
gpgkey=https://packages.fluentbit.io/fluentbit.key
enabled=1

Note: we encourage you to always enable gpgcheck for security reasons. All our packages are signed.

Install

Once your repository is configured, run the following command to install it:

$ yum install td-agent-bit

The next step is to instruct systemd to enable the service:

$ service td-agent-bit start

If you do a status check, you should see a similar output like this:

$ service td-agent-bit status
Redirecting to /bin/systemctl status  td-agent-bit.service
● td-agent-bit.service - TD Agent Bit
   Loaded: loaded (/usr/lib/systemd/system/td-agent-bit.service; disabled; vendor preset: disabled)
   Active: active (running) since Thu 2016-07-07 02:08:01 BST; 9s ago
 Main PID: 3820 (td-agent-bit)
   CGroup: /system.slice/td-agent-bit.service
           └─3820 /opt/td-agent-bit/bin/td-agent-bit -c etc/td-agent-bit/td-agent-bit.conf
...

The default configuration of td-agent-bit collects metrics of CPU usage and sends the records to the standard output; you can see the outgoing data in your /var/log/messages file.

TD Agent Bit

We distribute Fluent Bit as packages for specific Enterprise Linux distributions under the name of td-agent-bit. These packages are maintained by Treasure Data, Inc. The following distributions are supported:

| Distribution | Version | Codename |
| --- | --- | --- |
| Ubuntu | 18.04 | Bionic Beaver |
| Ubuntu | 16.04 | Xenial Xerus |
| Debian | 10 | Buster |
| Debian | 9 | Stretch |
| Debian | 8 | Jessie |
| Raspbian | 8 | Jessie |
| CentOS | 7 | |


Yocto Project

The Fluent Bit source code provides Bitbake recipes to configure, build and package the software for a Yocto based image. Note that the specific steps to use these recipes in your Yocto environment (Poky) are out of the scope of this documentation.

We distribute two main recipes: one for testing/development purposes and another with the latest stable release.

| Version | Recipe | Description |
| --- | --- | --- |
| devel | | Build Fluent Bit from GIT master. This recipe aims to be used for development and testing purposes only. |
| v1.3.11 | | Build the latest stable version of Fluent Bit. |

It's strongly recommended to always use the stable release of the Fluent Bit recipe, and not the one from GIT master, for production deployments.

Fluent Bit v1.1 and native ARMv8 (aarch64) support

Fluent Bit >= v1.1.x already integrates native AArch64 support, where stack switches for co-routines are done through native ASM calls; in this scenario there are no issues like the ones faced in previous series.

Service

Fluent Bit has a 'Service' which runs the filter chain from input to output. Global configuration here includes whether to daemonise, diagnostic logging, flush interval, etc.

For more details, please refer to the Configuration File section.

Windows

Fluent Bit is distributed as the td-agent-bit package for Windows. There are two flavours of Windows installers: a ZIP archive (for quick testing) and an EXE installer (for system installation).

Installation Packages

The latest stable version is v1.3.11:

64 Bits

| Installers | SHA256 Checksums |
| --- | --- |
| | 9846538ba849cb2a0e77a75f247e6a59703536b916fb371fc0ac91ee6c372dce |
| | c58128aeff74c98504e871a6b2051f7248d01a77e2a72264e4f3525c21f6b9c8 |

32 Bits

| Installers | SHA256 Checksums |
| --- | --- |
| | 8811e01e25678d20d07e70dddc7846048fdeb08c85292d636ee32d81bcd58ec5 |
| | 57c2a95e99fab83e2f9d7b834ce110a44c88656f8c38d4a6388c39599314f1bb |

To check the integrity, use Get-FileHash command on PowerShell.

PS> Get-FileHash td-agent-bit-1.3.11-win64.zip

Installing from ZIP archive

Download a ZIP archive from the list above, then expand it. You can do this by clicking "Extract All" on Explorer, or if you're using PowerShell, you can use the Expand-Archive cmdlet.

PS> Expand-Archive td-agent-bit-1.3.11-win64.zip

The ZIP package contains the following set of files.

td-agent-bit
├── bin
│   ├── fluent-bit.dll
│   └── fluent-bit.exe
├── conf
│   ├── fluent-bit.conf
│   ├── parsers.conf
│   └── plugins.conf
└── include
    │   ├── flb_api.h
    │   ├── ...
    │   └── flb_worker.h
    └── fluent-bit.h

Now, launch cmd.exe or PowerShell on your machine, and execute fluent-bit.exe as follows.

PS> .\bin\fluent-bit.exe -i dummy -o stdout

If you see the following output, it's working fine!

PS> .\bin\fluent-bit.exe  -i dummy -o stdout
Fluent Bit v1.3.11
Copyright (C) Treasure Data

[2019/06/28 10:13:04] [ info] [storage] initializing...
[2019/06/28 10:13:04] [ info] [storage] in-memory
[2019/06/28 10:13:04] [ info] [storage] normal synchronization mode, checksum disabled, max_chunks_up=128
[2019/06/28 10:13:04] [ info] [engine] started (pid=10324)
[2019/06/28 10:13:04] [ info] [sp] stream processor started
[0] dummy.0: [1561684385.443823800, {"message"=>"dummy"}]
[1] dummy.0: [1561684386.428399000, {"message"=>"dummy"}]
[2] dummy.0: [1561684387.443641900, {"message"=>"dummy"}]
[3] dummy.0: [1561684388.441405800, {"message"=>"dummy"}]

To halt the process, press CTRL-C in the terminal.

Installing from EXE installer

Download an EXE installer from the links above and double-click it. The installation wizard will start automatically.

Click Next and proceed. By default, Fluent Bit is installed into C:\Program Files\td-agent-bit\, so you should be able to launch fluent-bit as follows after installation.

PS> C:\Program Files\td-agent-bit\bin\fluent-bit.exe -i dummy -o stdout

Input

Fluent Bit provides different Input Plugins to gather information from different sources; some of them just collect data from log files while others can gather metrics information from the operating system. There are many plugins for different needs.

When an input plugin is loaded, an internal instance is created. Every instance has its own and independent configuration. Configuration keys are often called properties.

Every input plugin has its own documentation section where it is specified how it can be used and what properties are available. For more details, please refer to the Input Plugins section.

Parser

Dealing with raw strings is a constant pain; having a structure is highly desired. Ideally we want to set a structure to the incoming data by the Input Plugins as soon as they are collected:

The Parser allows you to convert from unstructured to structured data. As a demonstrative example consider the following Apache (HTTP Server) log entry:

192.168.2.20 - - [28/Jul/2006:10:27:10 -0300] "GET /cgi-bin/try/ HTTP/1.0" 200 3395

The above log line is a raw string without format; ideally we would like to give it a structure that can be processed easily later. If the proper configuration is used, the log entry could be converted to:

{
  "host":    "192.168.2.20",
  "user":    "-",
  "method":  "GET",
  "path":    "/cgi-bin/try/",
  "code":    "200",
  "size":    "3395",
  "referer": "",
  "agent":   ""
 }

Parsers are fully configurable and are independently and optionally handled by each input plugin. For more details, please refer to the Parsers section.
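As a reference only, a parser able to produce a structure like the one above could be defined with a regular expression using named capture groups. The following sketch (the parser name apache_demo and the regex are illustrative, not the bundled apache2 parser) shows the idea:

[PARSER]
    Name        apache_demo
    Format      regex
    Regex       ^(?<host>[^ ]*) [^ ]* (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+) (?<path>[^ ]*) [^"]*" (?<code>[^ ]*) (?<size>[^ ]*)$
    Time_Key    time
    Time_Format %d/%b/%Y:%H:%M:%S %z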

Filter

In production environments we want to have full control of the data we are collecting, filtering is an important feature that allows us to alter the data before delivering it to some destination.

Filtering is implemented through plugins, so each filter available could be used to match, exclude or enrich your logs with some specific metadata.

Very similar to the input plugins, Filters run in an instance context, which has its own independent configuration. Configuration keys are often called properties.
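As an illustration, a sketch using the grep filter to keep only records whose log field contains the word error (the key and pattern are illustrative):

[FILTER]
    Name   grep
    Match  *
    Regex  log error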

Configuration

Fluent Bit is flexible enough to be configured either from the command line or through a configuration file. For production environments, we strongly recommend to use the configuration file approach.

Note that all configuration files use a specific fixed and strict schema, please proceed to the following sections for a better understanding:

Output

The output interface allows us to define destinations for the data. Common destinations are remote services, local file system or standard interface with others. Outputs are implemented as plugins and there are many available.

When an output plugin is loaded, an internal instance is created. Every instance has its own independent configuration. Configuration keys are often called properties.

Every output plugin has its own documentation section specifying how it can be used and what properties are available.

Buffer

When the data or logs are ready to be routed to some destination, by default they are buffered in memory.

Note that buffered data is no longer raw text; instead it's in Fluent Bit's internal binary representation.

Optionally Fluent Bit offers a buffering mechanism in the file system that acts as a backup system to avoid data loss in case of system failures.

Routing

Routing is a core feature that allows you to route your data through Filters and finally to one or multiple destinations.

There are two important concepts in Routing:

  • Tag

  • Match

When the data is generated by the input plugins, it comes with a Tag (most of the time the Tag is configured manually). The Tag is a human-readable indicator that helps to identify the data source.

In order to define where the data should be routed, a Match rule must be specified in the output configuration.

Consider the following configuration example that aims to deliver CPU metrics to an Elasticsearch database and Memory metrics to the standard output interface:

[INPUT]
    Name cpu
    Tag  my_cpu

[INPUT]
    Name mem
    Tag  my_mem

[OUTPUT]
    Name   es
    Match  my_cpu

[OUTPUT]
    Name   stdout
    Match  my_mem

Note: the above is a simple example demonstrating how Routing is configured.

Routing works automatically by reading the Input Tags and the Output Match rules. If some data has a Tag that doesn't match at routing time, the data is deleted.

Routing with Wildcard

Routing is flexible enough to support wildcards in the Match pattern. The example below defines a common destination for both sources of data:

[INPUT]
    Name cpu
    Tag  my_cpu

[INPUT]
    Name mem
    Tag  my_mem

[OUTPUT]
    Name   stdout
    Match  my_*

The match rule is set to my_* which means it will match any Tag that starts with my_.

Configuration Schema

Fluent Bit may optionally use a configuration file to define how the service will behave, and before proceeding we need to understand how the configuration schema works. The schema is defined by three concepts:

  • Sections

  • Entries: Key/Value

  • Indented Configuration Mode

A simple example of a configuration file is as follows:

[SERVICE]
    # This is a commented line
    Daemon    off
    log_level debug

Sections

A section is defined by a name or title inside brackets. Looking at the example above, a Service section has been set using [SERVICE] definition. Section rules:

  • All section content must be indented (4 spaces ideally).

  • Multiple sections can exist on the same file.

  • A section is expected to have comments and entries, it cannot be empty.

  • Any commented line under a section, must be indented too.

Entries: Key/Value

A section may contain Entries; an entry is defined by a line of text that contains a Key and a Value. Using the above example, the [SERVICE] section contains two entries: one is the key Daemon with value off, and the other is the key Log_Level with the value debug. Entries rules:

  • An entry is defined by a key and a value.

  • A key must be indented.

  • A key must contain a value which ends in the breakline.

  • Multiple keys with the same name can exist.

Commented lines are set by prefixing them with the # character; those lines are not processed, but they must be indented too.

Indented Configuration Mode

Fluent Bit configuration files are based on a strict Indented Mode: each configuration file must follow the same pattern of alignment from left to right when writing text. By default an indentation level of four spaces from left to right is suggested. Example:

[FIRST_SECTION]
    # This is a commented line
    Key1  some value
    Key2  another value
    # more comments

[SECOND_SECTION]
    KeyN  3.14

As you can see there are two sections with multiple entries and comments, note also that empty lines are allowed and they do not need to be indented.

Unit Sizes
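Certain configuration properties that expect a size accept a numeric value with an optional unit suffix (for example k, m or g). As an illustrative sketch, assuming the Mem_Buf_Limit property of the tail input plugin:

[INPUT]
    Name          tail
    Path          /var/log/syslog
    Mem_Buf_Limit 5MB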

Configuration File

There are some cases where using the command line to start Fluent Bit is not ideal. When running Fluent Bit as a service, a configuration file is preferred.

The configuration file supports four types of sections:

  • Service

  • Input

  • Filter

  • Output

In addition there is an additional feature to include external files:

  • Include File

Service

The Service section defines global properties of the service; the keys available as of this version are described in the following table:

Example

The following is an example of a SERVICE section:
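A minimal sketch (the optional HTTP monitoring keys shown here are illustrative):

[SERVICE]
    Flush        5
    Daemon       off
    Log_Level    debug
    HTTP_Server  on
    HTTP_Listen  0.0.0.0
    HTTP_Port    2020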

Input

An INPUT section defines a source (related to an input plugin). Here we will describe the base configuration for each INPUT section. Note that each input plugin may add its own configuration keys:

The Name is mandatory and it lets Fluent Bit know which input plugin should be loaded. The Tag is mandatory for all plugins except for the input forward plugin (as it provides dynamic tags).

Example

The following is an example of an INPUT section:
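A minimal sketch using the cpu input (the tag my_cpu is illustrative):

[INPUT]
    Name cpu
    Tag  my_cpu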

Filter

A FILTER section defines a filter (related to a filter plugin). Here we will describe the base configuration for each FILTER section. Note that each filter plugin may add its own configuration keys:

The Name is mandatory and it lets Fluent Bit know which filter plugin should be loaded. The Match or Match_Regex is mandatory for all plugins. If both are specified, Match_Regex takes precedence.

Example

The following is an example of a FILTER section:
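A minimal sketch using the grep filter (the key and pattern are illustrative):

[FILTER]
    Name   grep
    Match  *
    Regex  log aa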

Output

The OUTPUT section specifies a destination that certain records should follow after a Tag match. The configuration supports the following keys:

Example

The following is an example of an OUTPUT section:
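A minimal sketch delivering everything to the standard output:

[OUTPUT]
    Name   stdout
    Match  *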

Example: collecting CPU metrics

The following configuration file example demonstrates how to collect CPU metrics and flush the results every five seconds to the standard output:
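A sketch of such a configuration:

[SERVICE]
    Flush     5
    Daemon    off
    Log_Level debug

[INPUT]
    Name cpu
    Tag  my_cpu

[OUTPUT]
    Name   stdout
    Match  my_*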

Include File

To avoid complicated long configuration files, it is better to split specific parts into different files and call them (include) from one main file.

Starting from Fluent Bit 0.12 the new configuration command @INCLUDE has been added and can be used in the following way:
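For example:

@INCLUDE somefile.conf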

The configuration reader will try to open the path somefile.conf, if not found, it will assume it's a relative path based on the path of the base configuration file, e.g:

  • Main configuration file path: /tmp/main.conf

  • Included file: somefile.conf

  • Fluent Bit will try to open somefile.conf, if it fails it will try /tmp/somefile.conf.

The @INCLUDE command only works at the top level of the configuration; it cannot be used inside sections.

Wildcard character (*) is supported to include multiple files, e.g:
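For example (the file name pattern is illustrative):

@INCLUDE input_*.conf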

TLS / SSL

Fluent Bit provides integrated support for Transport Layer Security (TLS) and its predecessor Secure Sockets Layer (SSL). In this section we will refer to both implementations as TLS.

Each output plugin that performs network I/O can optionally enable TLS and configure its behavior. The following table describes the properties available:

The listed properties can be enabled in the configuration file, specifically on each output plugin section, or directly through the command line. The following output plugins can take advantage of the TLS feature:

  • Elasticsearch
  • Forward
  • GELF
  • HTTP
  • Splunk

Example: enable TLS on HTTP output

By default HTTP output plugin uses plain TCP, enabling TLS from the command line can be done with:

In the command line above, the two properties tls and tls.verify were enabled for demonstration purposes (we strongly suggest always keeping verification on).

The same behavior can be accomplished using a configuration file:

Tips and Tricks

Connect to virtual servers using TLS

Buffering / Storage

By default, when Fluent Bit processes data, it uses memory as the primary and temporary place to store the records. There are certain scenarios where it would be ideal to have a persistent buffering mechanism based on the filesystem to provide aggregation and data safety capabilities.

Starting with Fluent Bit v1.0, we introduced a new storage layer that can either work in memory or in the file system. Input plugins can be configured to use one or the other upon demand at start time.

Configuration

The storage layer configuration takes place in two areas:

  • Service Section

  • Input Section

The known Service section configures a global environment for the storage layer, and then each Input section defines which mechanism to use.

Service Section Configuration

A Service section will look like this:

That configuration sets up an optional buffering mechanism whose data root is /var/log/flb-storage/; it will use the normal synchronization mode, without checksum, and up to a maximum of 5MB of memory when processing backlog data.

Input Section Configuration

Optionally, any Input plugin can configure its storage preference; the following table describes the options available:

The following example configures a service that offers filesystem buffering capabilities and two Input plugins, the first using memory buffering and the second using the filesystem:

Distribution    Version    Codename
Ubuntu          18.04      Bionic Beaver
Ubuntu          16.04      Xenial Xerus
Debian          10         Buster
Debian          9          Stretch
Debian          8          Jessie
Raspbian        8          Jessie
CentOS          7

Fluent Bit source code provides Bitbake recipes to configure, build and package the software for a Yocto based image. Note that the specific steps for using these recipes in your Yocto environment (Poky) are out of the scope of this documentation.

Version    Recipe                  Description
devel      fluent-bit_git.bb       Build Fluent Bit from GIT master. This recipe aims to be used for development and testing purposes only.
v1.3.11    fluent-bit_1.3.11.bb    Build the latest stable version of Fluent Bit.

Fluent Bit has a 'Service' which runs the filter chain from input to output. Global configuration here includes whether to daemonise, diagnostic logging, flush interval, etc.

For more details, please refer to the Service section.

Fluent Bit provides different Input Plugins to gather information from different sources: some of them just collect data from log files while others can gather metrics information from the operating system. There are many plugins for different needs.

For more details, please refer to the Input Plugins section.

Parsers are fully configurable and are independently and optionally handled by each input plugin; for more details please refer to the Parsers section.

For more details about the Filters available and their usage, please refer to the Filters section.

For more details, please refer to the Configuration Files section (must read).

Certain configuration directives in Fluent Bit refer to unit sizes, for example when defining the size of a buffer or specific limits; we can find these in plugins like Tail Input or Forward Input, or in generic properties like Mem_Buf_Limit.

Starting from v0.11.10, all unit sizes have been standardized across the core and plugins. The following table describes the options that can be used and what they mean:

Fluent Bit allows the use of one configuration file which works at a global scope and uses the schema defined previously.

Fluent Bit supports TLS server name indication (SNI). If you are serving multiple hostnames on a single IP address (a.k.a. virtual hosting), you can make use of tls.vhost to connect to a specific hostname.

The end-goal of Fluent Bit is to collect, parse, filter and ship logs to a central place. In this workflow there are many phases, and one of the critical pieces is the ability to do buffering: a mechanism to place processed data into a temporary location until it is ready to be shipped.


  • (no suffix): When a suffix is not specified, it's assumed that the value given is a bytes representation. Specifying a value of 32000 means 32000 bytes.
  • k, K, KB, kb: Kilobyte, a unit of memory equal to 1,000 bytes. 32k means 32000 bytes.
  • m, M, MB, mb: Megabyte, a unit of memory equal to 1,000,000 bytes. 1M means 1000000 bytes.
  • g, G, GB, gb: Gigabyte, a unit of memory equal to 1,000,000,000 bytes. 1G means 1000000000 bytes.
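For instance, a sketch of unit sizes applied to real properties (assuming the Tail input and an illustrative path; Mem_Buf_Limit is covered in the Backpressure section):

[INPUT]
    Name            tail
    Path            /var/log/syslog
    Buffer_Max_Size 32k
    Mem_Buf_Limit   5MB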

[SERVICE]
    Flush           5
    Daemon          off
    Log_Level       debug

  • Name: Name of the input plugin.
  • Tag: Tag name associated to all records coming from this plugin.

[INPUT]
    Name cpu
    Tag  my_cpu

  • Name: Name of the filter plugin.
  • Match: A pattern to match against the tags of incoming records. It's case sensitive and supports the star (*) character as a wildcard.
  • Match_Regex: A regular expression to match against the tags of incoming records. Use this option if you want to use the full regex syntax.

[FILTER]
    Name  stdout
    Match *

  • Name: Name of the output plugin.
  • Match: A pattern to match against the tags of incoming records. It's case sensitive and supports the star (*) character as a wildcard.
  • Match_Regex: A regular expression to match against the tags of incoming records. Use this option if you want to use the full regex syntax.

[OUTPUT]
    Name  stdout
    Match my*cpu
[SERVICE]
    Flush     5
    Daemon    off
    Log_Level debug

[INPUT]
    Name  cpu
    Tag   my_cpu

[OUTPUT]
    Name  stdout
    Match my*cpu
@INCLUDE somefile.conf
@INCLUDE input_*.conf

  • tls: Enable or disable TLS support. Default: Off.
  • tls.verify: Force certificate validation. Default: On.
  • tls.debug: Set TLS debug verbosity level. It accepts the following values: 0 (No debug), 1 (Error), 2 (State change), 3 (Informational) and 4 (Verbose). Default: 1.
  • tls.ca_file: Absolute path to CA certificate file.
  • tls.ca_path: Absolute path to scan for certificate files.
  • tls.crt_file: Absolute path to Certificate file.
  • tls.key_file: Absolute path to private Key file.
  • tls.key_passwd: Optional password for the tls.key_file file.
  • tls.vhost: Hostname to be used for the TLS SNI extension.

$ fluent-bit -i cpu -t cpu -o http://192.168.2.3:80/something \
    -p tls=on         \
    -p tls.verify=off \
    -m '*'
[INPUT]
    Name  cpu
    Tag   cpu

[OUTPUT]
    Name       http
    Match      *
    Host       192.168.2.3
    Port       80
    URI        /something
    tls        On
    tls.verify Off
[INPUT]
    Name  cpu
    Tag   cpu

[OUTPUT]
    Name        forward
    Match       *
    Host        192.168.10.100
    Port        24224
    tls         On
    tls.verify  On
    tls.ca_file /etc/certs/fluent.crt
    tls.vhost   fluent.example.com

  • storage.path: Set an optional location in the file system to store streams and chunks of data. If this parameter is not set, Input plugins can only use in-memory buffering.
  • storage.sync: Configure the synchronization mode used to store the data into the file system. It can take the values normal or full. Default: normal.
  • storage.checksum: Enable the data integrity check when writing and reading data from the filesystem. The storage layer uses the CRC32 algorithm. Default: Off.
  • storage.backlog.mem_limit: If storage.path is set, Fluent Bit will look for data chunks that were not delivered and are still in the storage layer; these are called backlog data. This option configures a hint of the maximum amount of memory to use when processing these records. Default: 5M.

[SERVICE]
    flush                     1
    log_Level                 info
    storage.path              /var/log/flb-storage/
    storage.sync              normal
    storage.checksum          off
    storage.backlog.mem_limit 5M

  • storage.type: Specify the buffering mechanism to use. It can be memory or filesystem. Default: memory.

[SERVICE]
    flush                     1
    log_Level                 info
    storage.path              /var/log/flb-storage/
    storage.sync              normal
    storage.checksum          off
    storage.backlog.mem_limit 5M

[INPUT]
    name          cpu
    storage.type  filesystem

[INPUT]
    name          mem
    storage.type  memory

Configuration Variables

Fluent Bit supports the usage of environment variables in any value associated to a key when using a configuration file.

The variables are case sensitive and can be used in the following format:

${MY_VARIABLE}

When Fluent Bit starts, the configuration reader will detect any request for ${MY_VARIABLE} and will try to resolve its value.

Example

Create the following configuration file (fluent-bit.conf):

[SERVICE]
    Flush        1
    Daemon       Off
    Log_Level    info

[INPUT]
    Name cpu
    Tag  cpu.local

[OUTPUT]
    Name  ${MY_OUTPUT}
    Match *

Open a terminal and set the environment variable:

$ export MY_OUTPUT=stdout

The above command sets the value 'stdout' for the variable MY_OUTPUT.

Run Fluent Bit with the recently created configuration file:

$ bin/fluent-bit -c fluent-bit.conf
Fluent-Bit v0.11.0
Copyright (C) Treasure Data

[2017/04/03 12:25:25] [ info] [engine] started
[0] cpu.local: [1491243925, {"cpu_p"=>1.750000, "user_p"=>1.750000, "system_p"=>0.000000, "cpu0.p_cpu"=>3.000000, "cpu0.p_user"=>2.000000, "cpu0.p_system"=>1.000000, "cpu1.p_cpu"=>0.000000, "cpu1.p_user"=>0.000000, "cpu1.p_system"=>0.000000, "cpu2.p_cpu"=>4.000000, "cpu2.p_user"=>4.000000, "cpu2.p_system"=>0.000000, "cpu3.p_cpu"=>1.000000, "cpu3.p_user"=>1.000000, "cpu3.p_system"=>0.000000}]

As you can see the service worked properly as the configuration was valid.

Dummy

The dummy input plugin generates dummy events. It is useful for testing, debugging, benchmarking and getting started with Fluent Bit.

Configuration Parameters

The plugin supports the following configuration parameters:

Getting Started

You can run the plugin from the command line or through the configuration file:

  • Dummy: Dummy JSON record. Default: {"message":"dummy"}
  • Rate: Number of events generated per second. Default: 1
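As a minimal sketch (the record content and rate are illustrative), both parameters can also be set directly from the command line:

$ fluent-bit -i dummy -p 'dummy={"message":"hello"}' -p 'rate=2' -o stdout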

Command Line

$ fluent-bit -i dummy -o stdout
Fluent-Bit v0.12.0
Copyright (C) Treasure Data

[2017/07/06 21:55:29] [ info] [engine] started
[0] dummy.0: [1499345730.015265366, {"message"=>"dummy"}]
[1] dummy.0: [1499345731.002371371, {"message"=>"dummy"}]
[2] dummy.0: [1499345732.000267932, {"message"=>"dummy"}]
[3] dummy.0: [1499345733.000757746, {"message"=>"dummy"}]

Configuration File

In your main configuration file append the following Input & Output sections:

[INPUT]
    Name   dummy
    Tag    dummy.log

[OUTPUT]
    Name   stdout
    Match  *

Disk Usage

The disk input plugin gathers information about the disk throughput of the running system at a certain interval of time and reports it.

Configuration Parameters

The plugin supports the following configuration parameters:

  • Interval_Sec: Polling interval (seconds). Default: 1.
  • Interval_NSec: Polling interval (nanoseconds). Default: 0.
  • Dev_Name: Device name to limit the target (e.g. sda). If not set, in_disk gathers information from all disks and partitions.
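For example, a sketch limiting collection to a single device (the device name sda is illustrative):

[INPUT]
    Name     disk
    Tag      disk.sda
    Dev_Name sda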

Getting Started

In order to get disk usage from your system, you can run the plugin from the command line or through the configuration file:

Command Line

$ fluent-bit -i disk -o stdout
Fluent-Bit v0.11.0
Copyright (C) Treasure Data

[2017/01/28 16:58:16] [ info] [engine] started
[0] disk.0: [1485590297, {"read_size"=>0, "write_size"=>0}]
[1] disk.0: [1485590298, {"read_size"=>0, "write_size"=>0}]
[2] disk.0: [1485590299, {"read_size"=>0, "write_size"=>0}]
[3] disk.0: [1485590300, {"read_size"=>0, "write_size"=>11997184}]

Configuration File

In your main configuration file append the following Input & Output sections:

[INPUT]
    Name          disk
    Tag           disk
    Interval_Sec  1
    Interval_NSec 0
[OUTPUT]
    Name   stdout
    Match  *

Note: Total interval (sec) = Interval_Sec + (Interval_Nsec / 1000000000).

e.g. 1.5s = 1s + 500000000ns
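A minimal sketch of a 1.5 second polling interval using that formula:

[INPUT]
    Name          disk
    Interval_Sec  1
    Interval_NSec 500000000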


  • Flush: Set the flush time in seconds. Every time it times out, the engine will flush the records to the output plugin. Default: 5.
  • Daemon: Boolean value to set if Fluent Bit should run as a Daemon (background) or not. Allowed values are: yes, no, on and off. Default: Off.
  • Log_File: Absolute path for an optional log file.
  • Log_Level: Set the logging verbosity level. Allowed values are: error, warning, info, debug and trace. Values are accumulative, e.g: if 'debug' is set, it will include error, warning, info and debug. Note that trace mode is only available if Fluent Bit was built with the WITH_TRACE option enabled. Default: info.
  • Parsers_File: Path for a parsers configuration file. Multiple Parsers_File entries can be used.
  • Plugins_File: Path for a plugins configuration file. A plugins configuration file allows to define paths for external plugins.
  • Streams_File: Path for the Stream Processor configuration file.
  • HTTP_Server: Enable the built-in HTTP Server. Default: Off.
  • HTTP_Listen: Set the listening interface for the HTTP Server when it's enabled. Default: 0.0.0.0.
  • HTTP_Port: Set the TCP Port for the HTTP Server. Default: 2020.
  • Coro_Stack_Size: Set the coroutines stack size in bytes. The value must be greater than the page size of the running system. Don't set too small a value (say 4096), or coroutine threads can overrun the stack buffer. Default: 24576.

Backpressure

In certain environments it is common to see that logs or data are being ingested faster than they can be flushed to some destinations. A common case is reading from big log files and dispatching the logs to a backend over the network, which takes some time to respond; this generates backpressure, leading to high memory consumption in the service.

In order to avoid backpressure, Fluent Bit implements a mechanism in the engine that restricts the amount of data that an input plugin can ingest; this is done through the configuration parameter Mem_Buf_Limit.

Mem_Buf_Limit

This option is disabled by default and can be applied to all input plugins. Let's explain its behavior using the following scenario:

  • Mem_Buf_Limit is set to 1MB (one megabyte)

  • input plugin tries to append 700KB

  • engine route the data to an output plugin

  • output plugin backend (HTTP Server) is down

  • engine scheduler will retry the flush after 10 seconds

  • input plugin tries to append 500KB

At this exact point, the engine will allow those 500KB of data to be appended, so in total we have 1.2MB. The option works in a permissive mode up until the limit is reached; once the limit is exceeded, the following actions are taken:

  • block local buffers for the input plugin (cannot append more data)

  • notify the input plugin invoking a pause callback

The engine will protect itself and will not append more data coming from the input plugin in question; note that it is the plugin's responsibility to keep its state and decide what to do while paused.

After some seconds, if the scheduler was able to flush the initial 700KB of data or gave up after retrying, that amount of memory is released and internally the following actions happen:

  • Upon data buffer release (700KB), the internal counters get updated

  • Counters now are set at 500KB

  • Since 500KB is < 1MB it checks the input plugin state

  • If the plugin is paused, it invokes a resume callback

  • input plugin can continue appending more data
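A minimal sketch of setting this limit on an input plugin (assuming the Tail input and an illustrative path), matching the 1MB scenario above:

[INPUT]
    Name          tail
    Path          /var/log/app.log
    Mem_Buf_Limit 1MB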

About pause and resume Callbacks

Each plugin is independent and not all of them implement the pause and resume callbacks. As said, these callbacks are just a notification mechanism for the plugin.

Scheduler

Once an output plugin gets called to flush some data, after processing that data it can notify the Engine of three possible return statuses:

  • OK

  • Retry

  • Error

If the return status was OK, it means the data was successfully processed and flushed; if it returned an Error status, it means that an unrecoverable error happened and the engine should not try to flush that data again. If a Retry was requested, the Engine will ask the Scheduler to retry flushing that data; the Scheduler will decide how many seconds to wait before that happens.

Configuring Retries

The Scheduler provides a simple configuration option called Retry_Limit, which can be set independently on each output section. This option allows disabling retries or imposing a limit to try N times and then discard the data after reaching that limit:

  • Retry_Limit N: Integer value to set the maximum number of retries allowed. N must be >= 1 (default: 2).
  • Retry_Limit False: When Retry_Limit is set to False, there is no limit for the number of retries that the Scheduler can do.

Example

The following example configures two outputs, where the HTTP plugin has an unlimited number of retries and the Elasticsearch plugin has a limit of 5 retries:

[OUTPUT]
    Name        http
    Host        192.168.5.6
    Port        8080
    Retry_Limit False

[OUTPUT]
    Name            es
    Host            192.168.5.20
    Port            9200
    Logstash_Format On
    Retry_Limit     5

Upstream Servers

An Upstream defines a set of nodes that will be targeted by an output plugin; by the nature of the implementation, an output plugin must support the Upstream feature. The following plugin(s) have Upstream support:

  • Forward

The current balancing mode implemented is round-robin.

Configuration

To define an Upstream it's required to create a specific configuration file that contains an UPSTREAM and one or multiple NODE sections. The following table describes the properties associated with each section. Note that all of them are mandatory:

  • UPSTREAM name: Defines a name for the Upstream in question.
  • NODE name: Defines a name for the Node in question.
  • NODE host: IP address or hostname of the target host.
  • NODE port: TCP port of the target service.

Nodes and specific plugin configuration

A Node might contain additional configuration keys required by the plugin; this provides enough flexibility for the output plugin. A common use case is the Forward output: if TLS is enabled, it requires a shared key (more details in the example below).

Nodes and TLS (Transport Layer Security)

In addition to the properties defined in the table above, the network operations against a defined node can optionally be done through the use of TLS for further encryption and certificates use.

Configuration File Example

The following example defines an Upstream called forward-balancing, which aims to be used by the Forward output plugin; it registers three Nodes:

  • node-1: connects to 127.0.0.1:43000

  • node-2: connects to 127.0.0.1:44000

  • node-3: connects to 127.0.0.1:45000 using TLS without verification. It also defines a specific configuration option required by Forward output called shared_key.

[UPSTREAM]
    name       forward-balancing

[NODE]
    name       node-1
    host       127.0.0.1
    port       43000

[NODE]
    name       node-2
    host       127.0.0.1
    port       44000

[NODE]
    name       node-3
    host       127.0.0.1
    port       45000
    tls        on
    tls.verify off
    shared_key secret

Note that every Upstream definition must exist in its own configuration file in the file system. Adding multiple Upstreams in the same file or in different files is not allowed.

Monitoring

Fluent Bit comes with a built-in HTTP Server that can be used to query internal information and monitor metrics of each running plugin.

Getting Started

To get started, the first step is to enable the HTTP Server from the configuration file:

[SERVICE]
    HTTP_Server  On
    HTTP_Listen  0.0.0.0
    HTTP_PORT    2020

[INPUT]
    Name cpu

[OUTPUT]
    Name  stdout
    Match *

The above configuration snippet will instruct Fluent Bit to start its HTTP Server on TCP port 2020 and listen on all network interfaces:

$ bin/fluent-bit -c fluent-bit.conf
Fluent-Bit v0.14.x
Copyright (C) Treasure Data

[2017/10/27 19:08:24] [ info] [engine] started
[2017/10/27 19:08:24] [ info] [http_server] listen iface=0.0.0.0 tcp_port=2020

Now a simple curl command is enough to gather some information:

$ curl -s http://127.0.0.1:2020 | jq
{
  "fluent-bit": {
    "version": "0.13.0",
    "edition": "Community",
    "flags": [
      "FLB_HAVE_TLS",
      "FLB_HAVE_METRICS",
      "FLB_HAVE_SQLDB",
      "FLB_HAVE_TRACE",
      "FLB_HAVE_HTTP_SERVER",
      "FLB_HAVE_FLUSH_LIBCO",
      "FLB_HAVE_SYSTEMD",
      "FLB_HAVE_VALGRIND",
      "FLB_HAVE_FORK",
      "FLB_HAVE_PROXY_GO",
      "FLB_HAVE_REGEX",
      "FLB_HAVE_C_TLS",
      "FLB_HAVE_SETJMP",
      "FLB_HAVE_ACCEPT4",
      "FLB_HAVE_INOTIFY"
    ]
  }
}

Note that we are sending the curl command output to the jq program, which helps to make the JSON data easy to read from the terminal. Fluent Bit doesn't aim to do JSON pretty-printing.

REST API Interface

Fluent Bit aims to expose useful interfaces for monitoring, as of Fluent Bit v0.14 the following end points are available:

  • /: Fluent Bit build information (JSON).
  • /api/v1/uptime: Get uptime information in seconds and human readable format (JSON).
  • /api/v1/metrics: Internal metrics per loaded plugin (JSON).
  • /api/v1/metrics/prometheus: Internal metrics per loaded plugin, ready to be consumed by a Prometheus Server (Prometheus Text 0.0.4).

Uptime Example

Query the service uptime with the following command:

$ curl -s http://127.0.0.1:2020/api/v1/uptime | jq

it should print a similar output like this:

{
  "uptime_sec": 8950000,
  "uptime_hr": "Fluent Bit has been running:  103 days, 14 hours, 6 minutes and 40 seconds"
}

Metrics Examples

Query internal metrics in JSON format with the following command:

$ curl -s http://127.0.0.1:2020/api/v1/metrics | jq

it should print a similar output like this:

{
  "input": {
    "cpu.0": {
      "records": 8,
      "bytes": 2536
    }
  },
  "output": {
    "stdout.0": {
      "proc_records": 5,
      "proc_bytes": 1585,
      "errors": 0,
      "retries": 0,
      "retries_failed": 0
    }
  }
}

Metrics in Prometheus format

Query internal metrics in Prometheus Text 0.0.4 format:

$ curl -s http://127.0.0.1:2020/api/v1/metrics/prometheus

this time the same metrics will be in Prometheus format instead of JSON:

fluentbit_input_records_total{name="cpu.0"} 57 1509150350542
fluentbit_input_bytes_total{name="cpu.0"} 18069 1509150350542
fluentbit_output_proc_records_total{name="stdout.0"} 54 1509150350542
fluentbit_output_proc_bytes_total{name="stdout.0"} 17118 1509150350542
fluentbit_output_errors_total{name="stdout.0"} 0 1509150350542
fluentbit_output_retries_total{name="stdout.0"} 0 1509150350542
fluentbit_output_retries_failed_total{name="stdout.0"} 0 1509150350542

Configuring Aliases

By default, configured plugins get an internal name at runtime in the format plugin_name.ID. For monitoring purposes this can be confusing if many plugins of the same type were configured. To make a distinction, each configured input or output section can get an alias that will be used as the parent name for the metric.

[SERVICE]
    HTTP_Server  On
    HTTP_Listen  0.0.0.0
    HTTP_PORT    2020

[INPUT]
    Name  cpu
    Alias server1_cpu

[OUTPUT]
    Name  stdout
    Alias raw_output
    Match *

Now when querying the metrics we get the aliases in place instead of the plugin name:

{
  "input": {
    "server1_cpu": {
      "records": 8,
      "bytes": 2536
    }
  },
  "output": {
    "raw_output": {
      "proc_records": 5,
      "proc_bytes": 1585,
      "errors": 0,
      "retries": 0,
      "retries_failed": 0
    }
  }
}

Service

  • Daemon (Bool): If true, go to background on start.
  • Flush (Int): Interval to flush output (seconds).
  • Grace (Int): Wait time (seconds) on exit.
  • HTTP_Listen (Str): Address to listen on (e.g. 0.0.0.0).
  • HTTP_Port (Int): Port to listen on (e.g. 8888).
  • HTTP_Server (Bool): If true, enable the statistics HTTP server.
  • Log_File (Str): File to log diagnostic output.
  • Log_Level (Str): Diagnostic level (error/warning/info/debug/trace).
  • Parsers_File (Str): Optional 'parsers' config file (can be multiple).
  • Plugins_File (Str): Optional 'plugins' config file (can be multiple).

Note that Parsers_File and Plugins_File are both relative to the directory the main config file is in.

Storage and Buffering Configuration

In addition to the properties listed in the table above, the Storage and Buffering options are extensively documented in the Buffering / Storage section.

Configuration Example

[SERVICE]
    Flush        1
    Daemon       Off
    Config_Watch On
    Parsers_File parsers.conf
    Parsers_File custom_parsers.conf

Memory Usage

In certain scenarios it would be ideal to estimate how much memory Fluent Bit could be using; this is very useful for containerized environments where memory limits are a must.

Estimating

Input plugins append data independently, so in order to do an estimation a limit should be imposed through the Mem_Buf_Limit option. If the limit was set to 10MB, we need to estimate that in the worst case the output plugin will likely use up to 20MB.

So, if we impose a limit of 10MB for the input plugins and consider the worst case scenario of the output plugin consuming 20MB extra, as a minimum we need (30MB x 1.2) = 36MB.

Glibc and Memory Fragmentation

It is well known that in intensive environments, where memory allocations happen at a very high rate, the default memory allocator provided by Glibc could lead to high fragmentation, reporting high memory usage by the service.

You can check if Fluent Bit has been built with Jemalloc using the following command:

$ bin/fluent-bit -h|grep JEMALLOC

The output should look like:

Build Flags =  JSMN_PARENT_LINKS JSMN_STRICT FLB_HAVE_TLS FLB_HAVE_SQLDB
FLB_HAVE_TRACE FLB_HAVE_FLUSH_LIBCO FLB_HAVE_VALGRIND FLB_HAVE_FORK
FLB_HAVE_PROXY_GO FLB_HAVE_JEMALLOC JEMALLOC_MANGLE FLB_HAVE_REGEX
FLB_HAVE_C_TLS FLB_HAVE_SETJMP FLB_HAVE_ACCEPT4 FLB_HAVE_INOTIFY

If the FLB_HAVE_JEMALLOC option is listed in Build Flags, everything will be fine.
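As a sketch, assuming a from-source CMake build, jemalloc support is enabled through the FLB_JEMALLOC build option mentioned in this section:

$ cd build/
$ cmake -DFLB_JEMALLOC=On ../
$ make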

CPU Usage

The cpu input plugin measures the CPU usage of a process or, by default, the whole system (considering per CPU core values). It reports values in percentage units for every interval of time set. At the moment this plugin is only available for Linux.

The following tables describe the information generated by the plugin. The keys below represent the data used by the overall system; all values associated to the keys are in percentage units (0 to 100%):

  • cpu_p: CPU usage of the overall system; this value is the sum of time spent in user and kernel space. The result takes into consideration the number of CPU cores in the system.
  • user_p: CPU usage in User mode; in short, the CPU usage by user space programs. The result takes into consideration the number of CPU cores in the system.
  • system_p: CPU usage in Kernel mode; in short, the CPU usage by the Kernel. The result takes into consideration the number of CPU cores in the system.

In addition to the keys reported in the above table, a similar content is created per CPU core. The cores are listed from 0 to N as the Kernel reports:

  • cpuN.p_cpu: Represents the total CPU usage by core N.
  • cpuN.p_user: Total CPU time spent in user mode or user space programs associated to this core.
  • cpuN.p_system: Total CPU time spent in system or kernel mode associated to this core.

Configuration Parameters

The plugin supports the following configuration parameters:

  • Interval_Sec: Polling interval in seconds. Default: 1.
  • Interval_NSec: Polling interval in nanoseconds. Default: 0.
  • PID: Specify the ID (PID) of a running process in the system. By default the plugin monitors the whole system, but if this option is set, it will only monitor the given process ID.
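For example, a sketch that monitors a single process (the PID value is illustrative):

[INPUT]
    Name         cpu
    Tag          my_cpu
    PID          1234
    Interval_Sec 1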

Getting Started

In order to get the statistics of the CPU usage of your system, you can run the plugin from the command line or through the configuration file:

Command Line

$ build/bin/fluent-bit -i cpu -t my_cpu -o stdout -m '*'
Fluent-Bit v1.x.x
Copyright (C) Treasure Data

[2019/09/02 10:46:29] [ info] starting engine
[0] [1452185189, {"cpu_p"=>7.00, "user_p"=>5.00, "system_p"=>2.00, "cpu0.p_cpu"=>10.00, "cpu0.p_user"=>8.00, "cpu0.p_system"=>2.00, "cpu1.p_cpu"=>6.00, "cpu1.p_user"=>4.00, "cpu1.p_system"=>2.00}]
[1] [1452185190, {"cpu_p"=>6.50, "user_p"=>5.00, "system_p"=>1.50, "cpu0.p_cpu"=>6.00, "cpu0.p_user"=>5.00, "cpu0.p_system"=>1.00, "cpu1.p_cpu"=>7.00, "cpu1.p_user"=>5.00, "cpu1.p_system"=>2.00}]
[2] [1452185191, {"cpu_p"=>7.50, "user_p"=>5.00, "system_p"=>2.50, "cpu0.p_cpu"=>7.00, "cpu0.p_user"=>3.00, "cpu0.p_system"=>4.00, "cpu1.p_cpu"=>6.00, "cpu1.p_user"=>6.00, "cpu1.p_system"=>0.00}]
[3] [1452185192, {"cpu_p"=>4.50, "user_p"=>3.50, "system_p"=>1.00, "cpu0.p_cpu"=>6.00, "cpu0.p_user"=>5.00, "cpu0.p_system"=>1.00, "cpu1.p_cpu"=>5.00, "cpu1.p_user"=>3.00, "cpu1.p_system"=>2.00}]

Configuration File

In your main configuration file append the following Input & Output sections:

[INPUT]
    Name cpu
    Tag  my_cpu

[OUTPUT]
    Name  stdout
    Match *

Stream Processor

Fluent Bit v1.1 comes with a new and optional Stream Processor Engine that allows data processing through SQL queries. This article covers the format of the expected configuration file.

For more details about the Stream Processor Engine use please refer to the following guide:

Concepts

The stream processor can be configured through defining Tasks which have a name and an execution SQL statement:

  • Task: Definition of a Stream Processor task to be executed. A task is defined through a section called STREAM_TASK.
  • Name: Tasks have a name for debugging and testing purposes.
  • Exec: SQL statement to be executed when a Task runs.

Streams File Configuration

The Stream Processor is configured through a streams file that is referenced from the main fluent-bit.conf configuration file through the Streams_File key. The content of the streams file must have the following format specified in the table below:

  • STREAM_TASK Name (mandatory): Set a name for the task in question. The value is used as a reference only.
  • STREAM_TASK Exec (mandatory): SQL statement to be executed by the task. Note that the SQL statement must end with a semicolon and must be set on one single line (no multiline support in the configuration).

Configuration Example

Consider the following fluent-bit.conf configuration file:

[SERVICE]
    Flush        1
    Log_Level    info
    Streams_File stream_processor.conf

[INPUT]
    Name         cpu
    alias        cpu_data

[OUTPUT]
    Name         stdout
    Match        results

Now create a stream_processor.conf configuration file with the following content:

[STREAM_TASK]
    Name   cpu_test
    Exec   CREATE STREAM cpu WITH (tag='results') AS SELECT AVG(cpu_p) from STREAM:cpu_data WINDOW TUMBLING (5 SECOND);

On the query there are a few things happening:

  • The Stream Processor has a Task attached to the incoming stream of data called cpu_data (check the alias set in the Input section).

  • The Stream Processor will aggregate the value of the cpu_p record field and calculate its average over a window of 5 seconds.

  • Every 5 seconds, the Stream Processor will send the results back into the Fluent Bit pipeline with the tag results.

  • The Fluent Bit output section will match records tagged results and print them to the standard output interface.

You should see the following output in your terminal:

$ bin/fluent-bit -c fluent-bit.conf 
Fluent Bit v1.1.0
Copyright (C) Treasure Data

[2019/05/17 11:26:34] [ info] [storage] initializing...
[2019/05/17 11:26:34] [ info] [storage] in-memory
[2019/05/17 11:26:34] [ info] [storage] normal synchronization mode, checksum disabled
[2019/05/17 11:26:34] [ info] [engine] started (pid=16769)
[2019/05/17 11:26:34] [ info] [sp] stream processor started
[2019/05/17 11:26:34] [ info] [sp] registered task: cpu_test
[0] results: [1558085199.000175517, {"AVG(cpu_p)"=>2.750000}]
[0] results: [1558085204.000151430, {"AVG(cpu_p)"=>3.400000}]
[0] results: [1558085209.000131753, {"AVG(cpu_p)"=>1.700000}]
[0] results: [1558085214.000147562, {"AVG(cpu_p)"=>3.500000}]
[0] results: [1558085219.000118591, {"AVG(cpu_p)"=>2.050000}]
[0] results: [1558085224.000179645, {"AVG(cpu_p)"=>26.375000}]

Input Plugins

  • Collectd: Listen for UDP packets from Collectd.
  • CPU Usage: Measure total CPU usage of the system.
  • Disk Usage: Measure Disk I/Os.
  • Dummy: Generate dummy events.
  • Exec: Execute external programs and collect event logs.
  • Forward: Fluentd forward protocol.
  • Head: Read the first part of files.
  • Health: Check health of TCP services.
  • Kernel Log Buffer: Read the Linux Kernel log buffer messages.
  • Memory Usage: Measure the total amount of memory used on the system.
  • MQTT: Start an MQTT server and receive publish messages.
  • Network Traffic: Measure network traffic.
  • Process: Check health of a Process.
  • Random: Generate random samples.
  • Serial Interface: Read data information from the serial interface.
  • Standard Input: Read data from the standard input.
  • Syslog: Read syslog messages from a Unix socket.
  • Systemd: Read logs from Systemd/Journald.
  • Tail: Tail log files.
  • TCP: Listen for JSON messages over TCP.
  • Thermal: Measure system temperature(s).

Health

The Health input plugin allows you to check how healthy a TCP server is. It does the check by issuing a TCP connection at a certain interval of time.

Configuration Parameters

The plugin supports the following configuration parameters:

Getting Started

In order to start performing the checks, you can run the plugin from the command line or through the configuration file:

Command Line

From the command line you can let Fluent Bit generate the checks with the following options:

Configuration File

In your main configuration file append the following Input & Output sections:

Testing

Once Fluent Bit is running, you will see some random values in the output interface similar to this:

Forward

Configuration Parameters

The plugin supports the following configuration parameters:

Getting Started

In order to receive Forward messages, you can run the plugin from the command line or through the configuration file:

Command Line

From the command line you can let Fluent Bit listen for Forward messages with the following options:

By default the service will listen on all interfaces (0.0.0.0) through TCP port 24224; optionally you can change this directly, e.g:

In the example, Forward messages will only arrive through the network interface with the 192.168.3.2 address and TCP port 9090.

Configuration File

In your main configuration file append the following Input & Output sections:

Testing

Kernel Log Buffer

The kmsg input plugin reads the Linux Kernel log buffer from the beginning; it gets every record and parses its fields as priority, sequence, seconds, useconds, and message.

Getting Started

In order to start getting the Linux Kernel messages, you can run the plugin from the command line or through the configuration file:

Command Line

As described above, the plugin processes all messages that the Linux Kernel has reported; the output below has been truncated for clarity.

Configuration File

In your main configuration file append the following Input & Output sections:

Exec

The exec input plugin allows you to execute external programs and collect event logs.

Configuration Parameters

The plugin supports the following configuration parameters:

Getting Started

You can run the plugin from the command line or through the configuration file:

Command Line

The following example will read events from the output of ls.

Configuration File

In your main configuration file append the following Input & Output sections:

Memory Usage

The mem input plugin gathers information about the memory and swap usage of the running system at a certain interval of time and reports the total amount of memory and the amount of free memory available.

Getting Started

In order to get memory and swap usage from your system, you can run the plugin from the command line or through the configuration file:

Command Line

Configuration File

In your main configuration file append the following Input & Output sections:

Head

The head input plugin allows you to read events from the head of a file. Its behavior is similar to the head command.

Configuration Parameters

The plugin supports the following configuration parameters:

Split Line Mode

This mode is useful to get a specific line. This is an example to get the CPU frequency from /proc/cpuinfo.

/proc/cpuinfo is a special file to get cpu information.

The CPU frequency line is "cpu MHz : 2791.009". We can get that line with this configuration file.

Output is

Getting Started

In order to read the head of a file, you can run the plugin from the command line or through the configuration file:

Command Line

The following example will read events from the /proc/uptime file, tag the records with the uptime name and flush them back to the stdout plugin:

Configuration File

In your main configuration file append the following Input & Output sections:

Note: Total interval (sec) = Interval_Sec + (Interval_Nsec / 1000000000).

e.g. 1.5s = 1s + 500000000ns


The plugin that implements and keeps a good state is the Tail Input plugin. When the pause callback is triggered, it stops its collectors and stops appending data. Upon resume, it re-enables the collectors.

Fluent Bit has an Engine that helps to coordinate the data ingestion from input plugins and calls the Scheduler to decide when it is time to flush the data through one or multiple output plugins. The Scheduler flushes new data at a fixed interval of seconds and schedules retries when asked.

It's common for Fluent Bit output plugins to connect to external services to deliver the logs over the network; this is the case of HTTP, Elasticsearch and Forward, among others. Being able to connect to one node (host) is normal and enough for most of the use cases, but there are other scenarios where balancing across different nodes is required. The Upstream feature provides such capability.

The TLS options available are described in the TLS/SSL section and can be added to any Node section.

The following example sets an alias on an INPUT section which is using the CPU input plugin:

The SERVICE section defines the global behaviour of the engine.

In order to estimate, we will assume that the input plugins have set the Mem_Buf_Limit option (you can learn more about it in the Backpressure section).

Fluent Bit has an internal binary representation for the data being processed, but when this data reaches an output plugin, the plugin will likely create its own representation in a new memory buffer for processing. The best examples are the InfluxDB and Elasticsearch output plugins; both need to convert the binary representation to their respective custom JSON formats before talking to their backend servers.

It's strongly suggested that in any production environment Fluent Bit should be built with jemalloc enabled (e.g. -DFLB_JEMALLOC=On). Jemalloc is an alternative memory allocator that can reduce fragmentation (among other things), resulting in better performance.

As described above, the CPU input plugin gathers the overall usage every second and flushes the information to the output on the fifth second. In this example we used the stdout plugin to demonstrate the output records. In a real use-case you may want to flush this information to some central aggregator such as Fluentd or Elasticsearch.

Fluent Bit will gather CPU usage metrics through the CPU input plugin (metrics are calculated by default every second).

If you want to learn more about our Stream Processor engine, please read the official guide at https://docs.fluentbit.io/stream-processing/.

The input plugins define the sources from where Fluent Bit can collect data: it can be through a network interface, radio hardware or some built-in metric. As of this version the following input plugins are available:

Forward is the protocol used by Fluentd and Fluent Bit to route messages between peers. This plugin implements the input service to listen for Forward messages.

Once Fluent Bit is running, you can send some messages using the fluent-cat tool (this tool is provided by Fluentd):

In Fluent Bit we should see the following output:


  • Host: Name of the target host or IP address to check.
  • Port: TCP port where to perform the connection check.
  • Interval_Sec: Interval in seconds between the service checks. Default: 1.
  • Interval_NSec: Specify a nanoseconds interval for service checks; it works in conjunction with the Interval_Sec configuration key. Default: 0.
  • Alert: If enabled, it will only generate messages if the target TCP service is down. By default this option is disabled.
  • Add_Host: If enabled, the hostname is appended to each record. Default: false.
  • Add_Port: If enabled, the port number is appended to each record. Default: false.
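As a minimal sketch (host and port as in the examples below), an alert-only check that also appends the host and port to each record could look like this:

[INPUT]
    Name          health
    Host          127.0.0.1
    Port          80
    Interval_Sec  1
    Alert         On
    Add_Host      true
    Add_Port      true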

$ fluent-bit -i health://127.0.0.1:80 -o stdout
[INPUT]
    Name          health
    Host          127.0.0.1
    Port          80
    Interval_Sec  1
    Interval_NSec 0

[OUTPUT]
    Name   stdout
    Match  *
$ fluent-bit -i health://127.0.0.1:80 -o stdout
Fluent-Bit v0.9.0
Copyright (C) Treasure Data

[2016/10/07 21:37:51] [ info] [engine] started
[0] health.0: [1475897871, {"alive"=>true}]
[1] health.0: [1475897872, {"alive"=>true}]
[2] health.0: [1475897873, {"alive"=>true}]
[3] health.0: [1475897874, {"alive"=>true}]
$ fluent-bit -i forward -o stdout
$ fluent-bit -i forward://192.168.3.2:9090 -o stdout
[INPUT]
    Name              forward
    Listen            0.0.0.0
    Port              24224
    Buffer_Chunk_Size 32KB
    Buffer_Max_Size   64KB

[OUTPUT]
    Name   stdout
    Match  *
$ echo '{"key 1": 123456789, "key 2": "abcdefg"}' | fluent-cat my_tag
$ bin/fluent-bit -i forward -o stdout
Fluent-Bit v0.9.0
Copyright (C) Treasure Data

[2016/10/07 21:49:40] [ info] [engine] started
[2016/10/07 21:49:40] [ info] [in_fw] binding 0.0.0.0:24224
[0] my_tag: [1475898594, {"key 1"=>123456789, "key 2"=>"abcdefg"}]
$ bin/fluent-bit -i kmsg -t kernel -o stdout -m '*'
Fluent-Bit v0.8.0
Copyright (C) Treasure Data

[0] kernel: [1463421823, {"priority"=>3, "sequence"=>1814, "sec"=>11706, "usec"=>732233, "msg"=>"ERROR @wl_cfg80211_get_station : Wrong Mac address, mac = 34:a8:4e:d3:40:ec profile =20:3a:07:9e:4a:ac"}]
[1] kernel: [1463421823, {"priority"=>3, "sequence"=>1815, "sec"=>11706, "usec"=>732300, "msg"=>"ERROR @wl_cfg80211_get_station : Wrong Mac address, mac = 34:a8:4e:d3:40:ec profile =20:3a:07:9e:4a:ac"}]
[2] kernel: [1463421829, {"priority"=>3, "sequence"=>1816, "sec"=>11712, "usec"=>729728, "msg"=>"ERROR @wl_cfg80211_get_station : Wrong Mac address, mac = 34:a8:4e:d3:40:ec profile =20:3a:07:9e:4a:ac"}]
[3] kernel: [1463421829, {"priority"=>3, "sequence"=>1817, "sec"=>11712, "usec"=>729802, "msg"=>"ERROR @wl_cfg80211_get_station : Wrong Mac address, mac = 34:a8:4e:d3:40:ec
...
[INPUT]
    Name   kmsg
    Tag    kernel

[OUTPUT]
    Name   stdout
    Match  *

  • Command: The command to execute.
  • Parser: Specify the name of a parser to interpret the entry as a structured message.
  • Interval_Sec: Polling interval (seconds).
  • Interval_NSec: Polling interval (nanoseconds).

$ fluent-bit -i exec -p 'command=ls /var/log' -o stdout
Fluent-Bit v0.13.0
Copyright (C) Treasure Data

[2018/03/21 17:46:49] [ info] [engine] started
[0] exec.0: [1521622010.013470159, {"exec"=>"ConsoleKit"}]
[1] exec.0: [1521622010.013490313, {"exec"=>"Xorg.0.log"}]
[2] exec.0: [1521622010.013492079, {"exec"=>"Xorg.0.log.old"}]
[3] exec.0: [1521622010.013493443, {"exec"=>"anaconda.ifcfg.log"}]
[4] exec.0: [1521622010.013494707, {"exec"=>"anaconda.log"}]
[5] exec.0: [1521622010.013496016, {"exec"=>"anaconda.program.log"}]
[6] exec.0: [1521622010.013497225, {"exec"=>"anaconda.storage.log"}]
[INPUT]
    Name          exec
    Tag           exec_ls
    Command       ls /var/log
    Interval_Sec  1
    Interval_NSec 0

[OUTPUT]
    Name   stdout
    Match  *
$ fluent-bit -i mem -t memory -o stdout -m '*'
Fluent-Bit v0.11.0
Copyright (C) Treasure Data

[2017/03/03 21:12:35] [ info] [engine] started
[0] memory: [1488543156, {"Mem.total"=>1016044, "Mem.used"=>841388, "Mem.free"=>174656, "Swap.total"=>2064380, "Swap.used"=>139888, "Swap.free"=>1924492}]
[1] memory: [1488543157, {"Mem.total"=>1016044, "Mem.used"=>841420, "Mem.free"=>174624, "Swap.total"=>2064380, "Swap.used"=>139888, "Swap.free"=>1924492}]
[2] memory: [1488543158, {"Mem.total"=>1016044, "Mem.used"=>841420, "Mem.free"=>174624, "Swap.total"=>2064380, "Swap.used"=>139888, "Swap.free"=>1924492}]
[3] memory: [1488543159, {"Mem.total"=>1016044, "Mem.used"=>841420, "Mem.free"=>174624, "Swap.total"=>2064380, "Swap.used"=>139888, "Swap.free"=>1924492}]
[INPUT]
    Name   mem
    Tag    memory

[OUTPUT]
    Name   stdout
    Match  *

  • File: Absolute path to the target file, e.g: /proc/uptime.
  • Buf_Size: Buffer size to read the file.
  • Interval_Sec: Polling interval (seconds).
  • Interval_NSec: Polling interval (nanoseconds).
  • Add_Path: If enabled, the file path is appended to each record. Default: false.
  • Key: Rename a key. Default: head.
  • Lines: Number of lines to read. If the number N is set, in_head reads the first N lines, like head(1) -n.
  • Split_line: If enabled, in_head generates a key-value pair per line.

processor    : 0
vendor_id    : GenuineIntel
cpu family    : 6
model        : 42
model name    : Intel(R) Core(TM) i7-2640M CPU @ 2.80GHz
stepping    : 7
microcode    : 41
cpu MHz        : 2791.009
cache size    : 4096 KB
physical id    : 0
siblings    : 1
[INPUT]
    Name           head
    Tag            head.cpu
    File           /proc/cpuinfo
    Lines          8
    Split_line     true
    # {"line0":"processor    : 0", "line1":"vendor_id    : GenuineIntel" ...}

[FILTER]
    Name           record_modifier
    Match          *
    Whitelist_key  line7

[OUTPUT]
    Name           stdout
    Match          *
$ bin/fluent-bit -c head.conf 
Fluent-Bit v0.12.0
Copyright (C) Treasure Data

[2017/06/26 22:38:24] [ info] [engine] started
[0] head.cpu: [1498484305.000279805, {"line7"=>"cpu MHz        : 2791.009"}]
[1] head.cpu: [1498484306.011680137, {"line7"=>"cpu MHz        : 2791.009"}]
[2] head.cpu: [1498484307.010042482, {"line7"=>"cpu MHz        : 2791.009"}]
[3] head.cpu: [1498484308.008447978, {"line7"=>"cpu MHz        : 2791.009"}]
$ fluent-bit -i head -t uptime -p File=/proc/uptime -o stdout -m '*'
Fluent-Bit v0.8.0
Copyright (C) Treasure Data

[2016/05/17 21:53:54] [ info] starting engine
[0] uptime: [1463543634, {"head"=>"133517.70 194870.97"}]
[1] uptime: [1463543635, {"head"=>"133518.70 194872.85"}]
[2] uptime: [1463543636, {"head"=>"133519.70 194876.63"}]
[3] uptime: [1463543637, {"head"=>"133520.70 194879.72"}]
[INPUT]
    Name          head
    Tag           uptime
    File          /proc/uptime
    Buf_Size      256
    Interval_Sec  1
    Interval_NSec 0

[OUTPUT]
    Name   stdout
    Match  *

Serial Interface

The serial input plugin allows you to retrieve messages/data from a serial interface.

Configuration Parameters

  • File: Absolute path to the device entry, e.g: /dev/ttyS0.
  • Bitrate: The bitrate for the communication, e.g: 9600, 38400, 115200, etc.
  • Min_Bytes: The serial interface will expect at least Min_Bytes to be available before processing the message (default: 1).
  • Separator: Allows to specify a separator string that is used to determine when a message ends.
  • Format: Specify the format of the incoming data stream. The only option available is 'json'. Note that Format and Separator cannot be used at the same time.

Getting Started

In order to retrieve messages over the Serial interface, you can run the plugin from the command line or through the configuration file:

Command Line

The following example loads the input serial plugin, setting a bitrate of 9600, listening on the /dev/tnt0 interface and using the custom tag data to route the messages.

$ fluent-bit -i serial -t data -p File=/dev/tnt0 -p BitRate=9600 -o stdout -m '*'

The above interface (/dev/tnt0) is an emulation of a serial interface (more details at the bottom); for demonstration purposes we will write some messages to the other end of the interface, in this case /dev/tnt1, e.g:

$ echo 'this is some message' > /dev/tnt1

In Fluent Bit you should see an output like this:

$ fluent-bit -i serial -t data -p File=/dev/tnt0 -p BitRate=9600 -o stdout -m '*'
Fluent-Bit v0.8.0
Copyright (C) Treasure Data

[2016/05/20 15:44:39] [ info] starting engine
[0] data: [1463780680, {"msg"=>"this is some message"}]

Now using the Separator configuration, we could send multiple messages at once (run this command after starting Fluent Bit):

$ echo 'aaXbbXccXddXee' > /dev/tnt1
$ fluent-bit -i serial -t data -p File=/dev/tnt0 -p BitRate=9600 -p Separator=X -o stdout -m '*'
Fluent-Bit v0.8.0
Copyright (C) Treasure Data

[2016/05/20 16:04:51] [ info] starting engine
[0] data: [1463781902, {"msg"=>"aa"}]
[1] data: [1463781902, {"msg"=>"bb"}]
[2] data: [1463781902, {"msg"=>"cc"}]
[3] data: [1463781902, {"msg"=>"dd"}]

Configuration File

In your main configuration file append the following Input & Output sections:

[INPUT]
    Name      serial
    Tag       data
    File      /dev/tnt0
    BitRate   9600
    Separator X

[OUTPUT]
    Name   stdout
    Match  *

Emulating Serial Interface on Linux

The following content is some extra information that will allow you to emulate a serial interface on your Linux system, so you can test this Serial input plugin locally in case you don't have such interface in your computer. The following procedure has been tested on Ubuntu 15.04 running a Linux Kernel 4.0.

Build and install the tty0tty module

Download the sources

$ git clone https://github.com/freemed/tty0tty

Unpack and compile

$ cd tty0tty/module
$ make

Copy the new kernel module into the kernel modules directory

$ sudo cp tty0tty.ko /lib/modules/$(uname -r)/kernel/drivers/misc/

Load the module

$ sudo depmod
$ sudo modprobe tty0tty

You should see new serial ports in /dev/ (ls /dev/tnt*). Give appropriate permissions to the new serial ports:

$ sudo chmod 666 /dev/tnt*

When the module is loaded, it will interconnect the following virtual interfaces:

/dev/tnt0 <=> /dev/tnt1
/dev/tnt2 <=> /dev/tnt3
/dev/tnt4 <=> /dev/tnt5
/dev/tnt6 <=> /dev/tnt7

Random

The Random input plugin generates very simple random value samples using the device interface /dev/urandom; if not available, it will use a Unix timestamp as the value.

Configuration Parameters

The plugin supports the following configuration parameters:

  • Samples: If set, it will only generate a specific number of samples. By default this value is set to -1, which will generate unlimited samples.
  • Interval_Sec: Interval in seconds between samples generation. Default: 1.
  • Interval_NSec: Specify a nanoseconds interval for samples generation; it works in conjunction with the Interval_Sec configuration key. Default: 0.

Getting Started

In order to start generating random samples, you can run the plugin from the command line or through the configuration file:

Command Line

From the command line you can let Fluent Bit generate the samples with the following options:

$ fluent-bit -i random -o stdout

Configuration File

In your main configuration file append the following Input & Output sections:

[INPUT]
    Name          random
    Samples      -1
    Interval_Sec  1
    Interval_NSec 0

[OUTPUT]
    Name   stdout
    Match  *

Testing

Once Fluent Bit is running, you will see the reports in the output interface similar to this:

$ fluent-bit -i random -o stdout
Fluent-Bit v0.9.0
Copyright (C) Treasure Data

[2016/10/07 20:27:34] [ info] [engine] started
[0] random.0: [1475893654, {"rand_value"=>1863375102915681408}]
[1] random.0: [1475893655, {"rand_value"=>425675645790600970}]
[2] random.0: [1475893656, {"rand_value"=>7580417447354808203}]
[3] random.0: [1475893657, {"rand_value"=>1501010137543905482}]
[4] random.0: [1475893658, {"rand_value"=>16238242822364375212}]

Network Traffic

The netif input plugin gathers network traffic information of the running system at a certain interval of time and reports it.

Configuration Parameters

The plugin supports the following configuration parameters:

  • Interface: Specify the network interface to monitor, e.g. eth0.
  • Interval_Sec: Polling interval (seconds). Default: 1.
  • Interval_NSec: Polling interval (nanoseconds). Default: 0.
  • Verbose: If true, gather metrics precisely. Default: false.

Getting Started

In order to monitor network traffic from your system, you can run the plugin from the command line or through the configuration file:

Command Line

$ bin/fluent-bit -i netif -p interface=eth0 -o stdout
Fluent-Bit v0.12.0
Copyright (C) Treasure Data

[2017/07/08 23:34:18] [ info] [engine] started
[0] netif.0: [1499524459.001698260, {"eth0.rx.bytes"=>89769869, "eth0.rx.packets"=>73357, "eth0.rx.errors"=>0, "eth0.tx.bytes"=>4256474, "eth0.tx.packets"=>24293, "eth0.tx.errors"=>0}]
[1] netif.0: [1499524460.002541885, {"eth0.rx.bytes"=>98, "eth0.rx.packets"=>1, "eth0.rx.errors"=>0, "eth0.tx.bytes"=>98, "eth0.tx.packets"=>1, "eth0.tx.errors"=>0}]
[2] netif.0: [1499524461.001142161, {"eth0.rx.bytes"=>98, "eth0.rx.packets"=>1, "eth0.rx.errors"=>0, "eth0.tx.bytes"=>98, "eth0.tx.packets"=>1, "eth0.tx.errors"=>0}]
[3] netif.0: [1499524462.002612971, {"eth0.rx.bytes"=>98, "eth0.rx.packets"=>1, "eth0.rx.errors"=>0, "eth0.tx.bytes"=>98, "eth0.tx.packets"=>1, "eth0.tx.errors"=>0}]

Configuration File

In your main configuration file append the following Input & Output sections:

[INPUT]
    Name          netif
    Tag           netif
    Interval_Sec  1
    Interval_NSec 0
    Interface     eth0
[OUTPUT]
    Name   stdout
    Match  *

Note: Total interval (sec) = Interval_Sec + (Interval_Nsec / 1000000000).

e.g. 1.5s = 1s + 500000000ns

Process

The Process input plugin allows you to check how healthy a process is. It does the check by monitoring the process at a certain interval of time.

Configuration Parameters

The plugin supports the following configuration parameters:

  • Proc_Name: Name of the target Process to check.
  • Interval_Sec: Interval in seconds between the service checks. Default: 1.
  • Interval_NSec: Specify a nanoseconds interval for service checks; it works in conjunction with the Interval_Sec configuration key. Default: 0.
  • Alert: If enabled, it will only generate messages if the target process is down. By default this option is disabled.
  • Fd: If enabled, the number of file descriptors is appended to each record. Default: true.
  • Mem: If enabled, the memory usage of the process is appended to each record. Default: true.

Getting Started

In order to start performing the checks, you can run the plugin from the command line or through the configuration file:

The following example will check the health of crond process.

$ fluent-bit -i proc -p proc_name=crond -o stdout

Configuration File

In your main configuration file append the following Input & Output sections:

[INPUT]
    Name          proc
    Proc_Name     crond
    Interval_Sec  1
    Interval_NSec 0
    Fd            true
    Mem           true

[OUTPUT]
    Name   stdout
    Match  *

Testing

Once Fluent Bit is running, you will see the health of process:

$ fluent-bit -i proc -p proc_name=fluent-bit -o stdout
Fluent-Bit v0.11.0
Copyright (C) Treasure Data

[2017/01/30 21:44:56] [ info] [engine] started
[0] proc.0: [1485780297, {"alive"=>true, "proc_name"=>"fluent-bit", "pid"=>10964, "mem.VmPeak"=>14740000, "mem.VmSize"=>14740000, "mem.VmLck"=>0, "mem.VmHWM"=>1120000, "mem.VmRSS"=>1120000, "mem.VmData"=>2276000, "mem.VmStk"=>88000, "mem.VmExe"=>1768000, "mem.VmLib"=>2328000, "mem.VmPTE"=>68000, "mem.VmSwap"=>0, "fd"=>18}]
[1] proc.0: [1485780298, {"alive"=>true, "proc_name"=>"fluent-bit", "pid"=>10964, "mem.VmPeak"=>14740000, "mem.VmSize"=>14740000, "mem.VmLck"=>0, "mem.VmHWM"=>1148000, "mem.VmRSS"=>1148000, "mem.VmData"=>2276000, "mem.VmStk"=>88000, "mem.VmExe"=>1768000, "mem.VmLib"=>2328000, "mem.VmPTE"=>68000, "mem.VmSwap"=>0, "fd"=>18}]
[2] proc.0: [1485780299, {"alive"=>true, "proc_name"=>"fluent-bit", "pid"=>10964, "mem.VmPeak"=>14740000, "mem.VmSize"=>14740000, "mem.VmLck"=>0, "mem.VmHWM"=>1152000, "mem.VmRSS"=>1148000, "mem.VmData"=>2276000, "mem.VmStk"=>88000, "mem.VmExe"=>1768000, "mem.VmLib"=>2328000, "mem.VmPTE"=>68000, "mem.VmSwap"=>0, "fd"=>18}]
[3] proc.0: [1485780300, {"alive"=>true, "proc_name"=>"fluent-bit", "pid"=>10964, "mem.VmPeak"=>14740000, "mem.VmSize"=>14740000, "mem.VmLck"=>0, "mem.VmHWM"=>1152000, "mem.VmRSS"=>1148000, "mem.VmData"=>2276000, "mem.VmStk"=>88000, "mem.VmExe"=>1768000, "mem.VmLib"=>2328000, "mem.VmPTE"=>68000, "mem.VmSwap"=>0, "fd"=>18}]

Systemd

The Systemd input plugin allows you to collect log messages from the Journald daemon on Linux environments.

Configuration Parameters

The plugin supports the following configuration parameters:

Key

Description

Default

Path

Optional path to the Systemd journal directory, if not set, the plugin will use default paths to read local-only logs.

Max_Fields

Set a maximum number of fields (keys) allowed per record.

8000

Max_Entries

When Fluent Bit starts, the Journal might have a high number of logs in the queue. In order to avoid delays and reduce memory usage, this option allows you to specify the maximum number of log entries that can be processed per round. Once the limit is reached, Fluent Bit will continue processing the remaining log entries once Journald sends the notification.

5000

Systemd_Filter

Allows you to perform a query over logs that contain specific Journald key/value pairs, e.g: _SYSTEMD_UNIT=UNIT. The Systemd_Filter option can be specified multiple times in the input section to apply multiple filters as required.

Systemd_Filter_Type

Define the filter type when Systemd_Filter is specified multiple times. Allowed values are And and Or. With And a record is matched only when all of the Systemd_Filter have a match. With Or a record is matched when any of the Systemd_Filter has a match.

Or

Tag

The tag is used to route messages, but on the Systemd plugin there is extra functionality: if the tag includes a star/wildcard, it will be expanded with the Systemd Unit file name (e.g: host.* => host.UNIT_NAME).

DB

Specify the absolute path of a database file to keep track of Journald cursor.

Read_From_Tail

Start reading new entries. Skip entries already stored in Journald.

Off

Strip_Underscores

Remove the leading underscore of the Journald field (key). For example the Journald field _PID becomes the key PID.

Off

Getting Started

In order to receive Systemd messages, you can run the plugin from the command line or through the configuration file:

Command Line

From the command line you can let Fluent Bit listen for Systemd messages with the following options:

$ fluent-bit -i systemd \
             -p systemd_filter=_SYSTEMD_UNIT=docker.service \
             -p tag='host.*' -o stdout

In the example above we are collecting all messages coming from the Docker service.

Configuration File

In your main configuration file append the following Input & Output sections:

[SERVICE]
    Flush        1
    Log_Level    info
    Parsers_File parsers.conf

[INPUT]
    Name            systemd
    Tag             host.*
    Systemd_Filter  _SYSTEMD_UNIT=docker.service

[OUTPUT]
    Name   stdout
    Match  *

JSON Parser

The JSON parser is the simplest option: if the original log source is a JSON map string, it will take its structure and convert it directly to the internal binary representation.

A simple configuration, which can be found in the default parsers configuration file, is the entry to parse Docker log files (when the tail input plugin is used):

[PARSER]
    Name        docker
    Format      json
    Time_Key    time
    Time_Format %Y-%m-%dT%H:%M:%S %z

The following log entry is a valid content for the parser defined above:

{"key1": 12345, "key2": "abc", "time": "2006-07-28T13:22:04Z"}

After processing, its internal representation will be:

[1154103724, {"key1"=>12345, "key2"=>"abc"}]

The time has been converted to Unix timestamp (UTC) and the map reduced to each component of the original message.

Thermal

The thermal input plugin reports system temperatures periodically -- each second by default. Currently this plugin is only available for Linux.

The following table describes the information generated by the plugin.

key

description

name

The name of the thermal zone, such as thermal_zone0

type

The type of the thermal zone, such as x86_pkg_temp

temp

Current temperature in Celsius

Configuration Parameters

The plugin supports the following configuration parameters:

Key

Description

Interval_Sec

Polling interval (seconds). default: 1

Interval_NSec

Polling interval (nanoseconds). default: 0

name_regex

Optional name filter regex. default: None

type_regex

Optional type filter regex. default: None

Getting Started

In order to get temperature(s) of your system, you can run the plugin from the command line or through the configuration file:

Command Line

$ bin/fluent-bit -i thermal -t my_thermal -o stdout -m '*'
Fluent Bit v1.3.0
Copyright (C) Treasure Data

[2019/08/18 13:39:43] [ info] [storage] initializing...
...
[0] my_thermal: [1566099584.000085820, {"name"=>"thermal_zone0", "type"=>"x86_pkg_temp", "temp"=>60.000000}]
[1] my_thermal: [1566099585.000136466, {"name"=>"thermal_zone0", "type"=>"x86_pkg_temp", "temp"=>59.000000}]
[2] my_thermal: [1566099586.000083156, {"name"=>"thermal_zone0", "type"=>"x86_pkg_temp", "temp"=>59.000000}]

Some systems provide multiple thermal zones. In this example we monitor only thermal_zone0 by name, once per minute.

$ bin/fluent-bit -i thermal -t my_thermal -p "interval_sec=60" -p "name_regex=thermal_zone0" -o stdout -m '*'
Fluent Bit v1.3.0
Copyright (C) Treasure Data

[2019/08/18 13:39:43] [ info] [storage] initializing...
...
[0] my_temp: [1565759542.001053749, {"name"=>"thermal_zone0", "type"=>"pch_skylake", "temp"=>48.500000}]
[0] my_temp: [1565759602.001661061, {"name"=>"thermal_zone0", "type"=>"pch_skylake", "temp"=>48.500000}]

Configuration File

In your main configuration file append the following Input & Output sections:

[INPUT]
    Name thermal
    Tag  my_thermal

[OUTPUT]
    Name  stdout
    Match *
collectd
cpu
disk
dummy
exec
forward
head
health
kmsg
mem
mqtt
netif
proc
random
serial
stdin
syslog
systemd
tail
tcp
thermal

Key

Description

Default

Listen

Listener network interface.

0.0.0.0

Port

TCP port to listen for incoming connections.

24224

Buffer_Max_Size

Specify the maximum buffer memory size used to receive a Forward message. The value must be set according to the Unit Size specification.

Buffer_Chunk_Size

Buffer_Chunk_Size

By default the buffer to store the incoming Forward messages does not allocate the maximum memory allowed; instead it allocates memory as it is required. The rounds of allocation are set by Buffer_Chunk_Size. The value must be set according to the Unit Size specification.

32KB

Standard Input

The stdin plugin allows you to retrieve valid JSON text messages over the standard input interface (stdin). In order to use it, specify the plugin name as the input, e.g:

$ fluent-bit -i stdin -o stdout

As input data the stdin plugin recognizes the following JSON data formats:

1. { map => val, map => val, map => val }
2. [ time, { map => val, map => val, map => val } ]

A better way to demonstrate how it works is through a Bash script that generates messages and writes them to Fluent Bit. Write the following content in a file named test.sh:

#!/bin/sh

while :; do
  echo -n "{\"key\": \"some value\"}"
  sleep 1
done

Give the script execution permission:

$ chmod 755 test.sh
$ ./test.sh | fluent-bit -i stdin -o stdout
Fluent-Bit v0.9.0
Copyright (C) Treasure Data

[2016/10/07 21:44:46] [ info] [engine] started
[0] stdin.0: [1475898286, {"key"=>"some value"}]
[1] stdin.0: [1475898287, {"key"=>"some value"}]
[2] stdin.0: [1475898288, {"key"=>"some value"}]
[3] stdin.0: [1475898289, {"key"=>"some value"}]
[4] stdin.0: [1475898290, {"key"=>"some value"}]

Syslog

The syslog input plugin allows you to collect Syslog messages through a Unix socket server (UDP or TCP) or over the network using TCP or UDP.

Configuration Parameters

The plugin supports the following configuration parameters:

Key

Description

Default

Mode

Defines transport protocol mode: unix_udp (UDP over Unix socket), unix_tcp (TCP over Unix socket), tcp or udp

unix_udp

Listen

If Mode is set to tcp, specify the network interface to bind.

0.0.0.0

Port

If Mode is set to tcp, specify the TCP port to listen for incoming connections.

5140

Path

If Mode is set to unix_tcp or unix_udp, set the absolute path to the Unix socket file.

Unix_Perm

If Mode is set to unix_tcp or unix_udp, set the permission of the Unix socket file.

0644

Parser

Specify an alternative parser for the message. By default, the plugin uses the parser syslog-rfc3164. If your syslog messages have fractional seconds set this Parser value to syslog-rfc5424 instead.

Buffer_Chunk_Size

By default the buffer to store the incoming Syslog messages does not allocate the maximum memory allowed; instead it allocates memory as it is required. The rounds of allocation are set by Buffer_Chunk_Size in KB. If not set, Buffer_Chunk_Size is equal to 32 (32KB). Read the considerations below when using udp or unix_udp mode.

Buffer_Max_Size

Specify the maximum buffer size in KB to receive a Syslog message. If not set, the default size will be the value of Buffer_Chunk_Size.

Considerations

  • When using the Syslog input plugin, Fluent Bit requires access to the parsers.conf file; the path to this file can be specified with the option -R or through the Parsers_File key in the [SERVICE] section (more details below).

  • When udp or unix_udp is used, the buffer size to receive messages is configurable only through the Buffer_Chunk_Size option, which defaults to 32KB.

Getting Started

In order to receive Syslog messages, you can run the plugin from the command line or through the configuration file:

Command Line

From the command line you can let Fluent Bit listen for Forward messages with the following options:

$ fluent-bit -R /path/to/parsers.conf -i syslog -p path=/tmp/in_syslog -o stdout

By default the service will create and listen for Syslog messages on the Unix socket /tmp/in_syslog.

Configuration File

In your main configuration file append the following Input & Output sections:

[SERVICE]
    Flush               1
    Log_Level           info
    Parsers_File        parsers.conf

[INPUT]
    Name                syslog
    Path                /tmp/in_syslog
    Buffer_Chunk_Size   32
    Buffer_Max_Size     64

[OUTPUT]
    Name   stdout
    Match  *

Testing

Once Fluent Bit is running, you can send some messages using the logger tool:

$ logger -u /tmp/in_syslog my_ident my_message
$ bin/fluent-bit -R ../conf/parsers.conf -i syslog -p path=/tmp/in_syslog -o stdout
Fluent-Bit v0.11.0
Copyright (C) Treasure Data

[2017/03/09 02:23:27] [ info] [engine] started
[0] syslog.0: [1489047822, {"pri"=>"13", "host"=>"edsiper:", "ident"=>"my_ident", "pid"=>"", "message"=>"my_message"}]

Recipes

The following content aims to provide configuration examples for different use cases to integrate Fluent Bit and make it listen for Syslog messages from your systems.

Rsyslog to Fluent Bit: Network mode over TCP

Fluent Bit Configuration

Put the following content in your fluent-bit.conf file:

[SERVICE]
    Flush        1
    Parsers_File parsers.conf

[INPUT]
    Name     syslog
    Parser   syslog-rfc3164
    Listen   0.0.0.0
    Port     5140
    Mode     tcp

[OUTPUT]
    Name     stdout
    Match    *

then start Fluent Bit.

RSyslog Configuration

Add a new file to your rsyslog config rules called 60-fluent-bit.conf inside the directory /etc/rsyslog.d/ and add the following content:

action(type="omfwd" Target="127.0.0.1" Port="5140" Protocol="tcp")

then make sure to restart your rsyslog daemon:

$ sudo service rsyslog restart

Rsyslog to Fluent Bit: Unix socket mode over UDP

Fluent Bit Configuration

Put the following content in your fluent-bit.conf file:

[SERVICE]
    Flush        1
    Parsers_File parsers.conf

[INPUT]
    Name      syslog
    Parser    syslog-rfc3164
    Path      /tmp/fluent-bit.sock
    Mode      unix_udp
    Unix_Perm 0644

[OUTPUT]
    Name      stdout
    Match     *

then start Fluent Bit.

RSyslog Configuration

Add a new file to your rsyslog config rules called 60-fluent-bit.conf inside the directory /etc/rsyslog.d/ and place the following content:

$ModLoad omuxsock
$OMUxSockSocket /tmp/fluent-bit.sock
*.* :omuxsock:

Make sure that the socket file is readable by rsyslog (tweak the Unix_Perm option shown above).

TCP

The tcp input plugin allows you to retrieve structured JSON or raw messages over a TCP network interface (TCP port).

Configuration Parameters

The plugin supports the following configuration parameters:

Key

Description

Default

Listen

Listener network interface.

0.0.0.0

Port

TCP port to listen for incoming connections.

5170

Buffer_Size

Specify the maximum buffer size in KB to receive a JSON message. If not set, the default size will be the value of Chunk_Size.

Chunk_Size

By default the buffer to store the incoming JSON messages does not allocate the maximum memory allowed; instead it allocates memory as it is required. The rounds of allocation are set by Chunk_Size in KB. If not set, Chunk_Size is equal to 32 (32KB).

32

Format

Specify the expected payload format. It supports the options json and none. When set to json, it expects JSON maps; when set to none, it will split every record using the defined Separator (option below).

json

Separator

When the expected Format is set to none, Fluent Bit needs a separator string to split the records. By default it uses the line break character \n (LF or 0x0A).

\n

Getting Started

In order to receive JSON messages over TCP, you can run the plugin from the command line or through the configuration file:

Command Line

From the command line you can let Fluent Bit listen for JSON messages with the following options:

$ fluent-bit -i tcp -o stdout

By default the service will listen on all interfaces (0.0.0.0) through TCP port 5170; optionally you can change this directly, e.g:

$ fluent-bit -i tcp://192.168.3.2:9090 -o stdout

In this example, JSON messages will only arrive through the network interface with address 192.168.3.2 and TCP port 9090.

Configuration File

In your main configuration file append the following Input & Output sections:

[INPUT]
    Name        tcp
    Listen      0.0.0.0
    Port        5170
    Chunk_Size  32
    Buffer_Size 64
    Format      json

[OUTPUT]
    Name        stdout
    Match       *

Testing

Once Fluent Bit is running, you can send some messages using netcat:

$ echo '{"key 1": 123456789, "key 2": "abcdefg"}' | nc 127.0.0.1 5170
$ bin/fluent-bit -i tcp -o stdout -f 1
Fluent Bit v1.3.x
Copyright (C) Treasure Data

[2019/10/03 09:19:34] [ info] [storage] initializing...
[2019/10/03 09:19:34] [ info] [storage] in-memory
[2019/10/03 09:19:34] [ info] [engine] started (pid=14569)
[2019/10/03 09:19:34] [ info] [in_tcp] binding 0.0.0.0:5170
[2019/10/03 09:19:34] [ info] [sp] stream processor started
[0] tcp.0: [1570115975.581246030, {"key 1"=>123456789, "key 2"=>"abcdefg"}]

Performance Considerations

When receiving payloads in JSON format, there is a significant performance penalty. Parsing JSON is a very expensive task, so you can expect CPU usage to increase under high-load environments.

To get faster data ingestion, consider using the option Format none to avoid JSON parsing when it is not needed.

Windows Event Log

The winlog input plugin allows you to read Windows Event Log.

Content:

Configuration Parameters

The plugin supports the following configuration parameters:

Key

Description

Default

Channels

A comma-separated list of channels to read from.

Interval_Sec

Set the polling interval for each channel. (optional)

1

DB

Set the path to save the read offsets. (optional)

Note that if you do not set db, the plugin will read channels from the beginning on each startup.

Configuration Examples

Configuration File

Here is a minimum configuration example.

[INPUT]
    Name         winlog
    Channels     Setup,Windows PowerShell
    Interval_Sec 1
    DB           winlog.sqlite

[OUTPUT]
    Name   stdout
    Match  *

Note that some Windows Event Log channels (like Security) require administrator privileges for reading. In this case, you need to run fluent-bit as an administrator.

Command Line

If you want to do a quick test, you can run this plugin from the command line.

$ fluent-bit -i winlog -p 'channels=Setup' -o stdout

Parsers

The parser engine is fully configurable and can process log entries based on two types of format: JSON maps and Regular Expressions (named capture).

By default, Fluent Bit provides a set of pre-configured parsers that can be used for different use cases such as logs from:

  • Apache

  • Nginx

  • Docker

  • Syslog rfc5424

  • Syslog rfc3164

Parsers are defined in one or multiple configuration files that are loaded at start time, either from the command line or through the main Fluent Bit configuration file.

Configuration Parameters

Multiple parsers can be defined and each section has its own properties. The following table describes the available options for each parser definition:

Key

Description

Name

Set a unique name for the parser in question.

Format

Regex

If format is regex, this option must be set specifying the Ruby Regular Expression that will be used to parse and compose the structured message.

Time_Key

If the log entry provides a field with a timestamp, this option specifies the name of that field.

Time_Format

Time_Offset

Specify a fixed UTC time offset (e.g. -0600, +0200, etc.) for local dates.

Time_Keep

By default when a time key is recognized and parsed, the parser will drop the original time field. Enabling this option makes the parser keep the original time field and its value in the log entry.

Types

Specify the data type of parsed fields. The syntax is types <field_name_1>:<type_name_1> <field_name_2>:<type_name_2> .... The supported types are string (default), integer, bool, float and hex. The ltsv, logfmt and regex formats support this option.

Decode_Field

Decode a field value, the only decoder available is json. The syntax is: Decode_Field json <field_name>.

Parsers Configuration File

All parsers must be defined in a parsers.conf file, not in the Fluent Bit global configuration file. The parsers file exposes all available parsers that can be used by the input plugins that are aware of this feature. A parsers file can have multiple entries like this:

[PARSER]
    Name        docker
    Format      json
    Time_Key    time
    Time_Format %Y-%m-%dT%H:%M:%S.%L
    Time_Keep   On

[PARSER]
    Name        syslog-rfc5424
    Format      regex
    Regex       ^\<(?<pri>[0-9]{1,5})\>1 (?<time>[^ ]+) (?<host>[^ ]+) (?<ident>[^ ]+) (?<pid>[-0-9]+) (?<msgid>[^ ]+) (?<extradata>(\[(.*)\]|-)) (?<message>.+)$
    Time_Key    time
    Time_Format %Y-%m-%dT%H:%M:%S.%L
    Time_Keep   On
    Types pid:integer

For more information about the parsers available, please refer to the default parsers file distributed with the Fluent Bit source code:

https://github.com/fluent/fluent-bit/blob/master/conf/parsers.conf

Time Resolution and Fractional Seconds

In addition, we extended our time resolution to support fractional seconds like 2017-05-17T15:44:31.187512963Z. Since Fluent Bit v0.12 we have full support for nanosecond resolution; the %L format option for Time_Format is provided as a way to indicate that the content must be interpreted as fractional seconds.

Note: The option %L is only valid when used after seconds (%S) or seconds since the Epoch (%s), e.g: %S.%L or %s.%L

Logfmt Parser

Here is an example configuration:

[PARSER]
    Name        logfmt
    Format      logfmt

The following log entry is a valid content for the parser defined above:

key1=val1 key2=val2

After processing, its internal representation will be:

[1540936693, {"key1"=>"val1",
              "key2"=>"val2"}]

LTSV Parser

Labeled Tab-separated Values (LTSV) format is a variant of Tab-separated Values (TSV). Each record in an LTSV file is represented as a single line. Each field is separated by a TAB and has a label and a value. The label and the value are separated by ':'.

Here is an example of how to use this format for the Apache access log.

Configure this in httpd.conf:

LogFormat "host:%h\tident:%l\tuser:%u\ttime:%t\treq:%r\tstatus:%>s\tsize:%b\treferer:%{Referer}i\tua:%{User-Agent}i" combined_ltsv
CustomLog "logs/access_log" combined_ltsv

The parsers.conf:

[PARSER]
    Name        access_log_ltsv
    Format      ltsv
    Time_Key    time
    Time_Format [%d/%b/%Y:%H:%M:%S %z]
    Types       status:integer size:integer

The following log entries are valid content for the parser defined above:

host:127.0.0.1  ident:- user:-  time:[10/Jul/2018:13:27:05 +0200]       req:GET / HTTP/1.1      status:200      size:16218      referer:http://127.0.0.1/       ua:Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:59.0) Gecko/20100101 Firefox/59.0
host:127.0.0.1  ident:- user:-  time:[10/Jul/2018:13:27:05 +0200]       req:GET /assets/plugins/bootstrap/css/bootstrap.min.css HTTP/1.1        status:200      size:121200     referer:http://127.0.0.1/       ua:Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:59.0) Gecko/20100101 Firefox/59.0
host:127.0.0.1  ident:- user:-  time:[10/Jul/2018:13:27:05 +0200]       req:GET /assets/css/headers/header-v6.css HTTP/1.1      status:200      size:37706      referer:http://127.0.0.1/       ua:Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:59.0) Gecko/20100101 Firefox/59.0
host:127.0.0.1  ident:- user:-  time:[10/Jul/2018:13:27:05 +0200]       req:GET /assets/css/style.css HTTP/1.1  status:200      size:1279       referer:http://127.0.0.1/       ua:Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:59.0) Gecko/20100101 Firefox/59.0

After processing, the internal representation of the first entry will be:

[1531222025.000000000, {"host"=>"127.0.0.1", "ident"=>"-", "user"=>"-", "req"=>"GET / HTTP/1.1", "status"=>200, "size"=>16218, "referer"=>"http://127.0.0.1/", "ua"=>"Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:59.0) Gecko/20100101 Firefox/59.0"}]

The time has been converted to Unix timestamp (UTC).

Tail

The tail input plugin allows you to monitor one or several text files. It has behavior similar to the tail -f shell command.

The plugin reads every matched file in the Path pattern and for every new line found (separated by a \n) it generates a new record. Optionally a database file can be used so the plugin can keep a history of tracked files and a state of offsets; this is very useful to resume the state if the service is restarted.

Content:

Configuration Parameters

The plugin supports the following configuration parameters:

Note that if the database parameter db is not specified, by default the plugin will start reading each target file from the beginning.

Multiline Configuration Parameters

Additionally the following options exist to configure the handling of multi-line files:

Docker Mode Configuration Parameters

Docker mode exists to recombine JSON log lines split by the Docker daemon due to its line length limit. To use this feature, configure the tail plugin with the corresponding parser and then enable Docker mode:
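A minimal sketch of such a setup, assuming the stock docker parser and an illustrative containers path:

[INPUT]
    Name         tail
    Path         /var/log/containers/*.log
    Parser       docker
    Docker_Mode  On

[OUTPUT]
    Name   stdout
    Match  *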

Getting Started

In order to tail text or log files, you can run the plugin from the command line or through the configuration file:

Command Line

From the command line you can let Fluent Bit parse text files with the following options:
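For example, to follow a single file (the path /var/log/syslog is only illustrative):

$ fluent-bit -i tail -p path=/var/log/syslog -o stdout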

Configuration File

In your main configuration file append the following Input & Output sections:
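A minimal sketch, again using /var/log/syslog as an illustrative path:

[INPUT]
    Name   tail
    Path   /var/log/syslog

[OUTPUT]
    Name   stdout
    Match  *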

Tailing files keeping state

The tail input plugin has a feature to save the state of the tracked files; it is strongly suggested that you enable it. For this purpose the db property is available, e.g:
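A sketch of the same input with the database enabled (the paths are illustrative):

[INPUT]
    Name   tail
    Path   /var/log/syslog
    DB     /path/to/logs.db

[OUTPUT]
    Name   stdout
    Match  *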

When running, the database file /path/to/logs.db will be created. This database is backed by SQLite3, so if you are interested in exploring its content, you can open it with the SQLite client tool, e.g:
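For example, assuming the database path used above:

$ sqlite3 /path/to/logs.db
sqlite> SELECT * FROM in_tail_files;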

Make sure to explore it when Fluent Bit is not busy writing to the database file, otherwise you will see some Error: database is locked messages.

Formatting SQLite

By default the SQLite client tool does not format the columns in a human-readable way, so to explore the in_tail_files table you can create a config file in ~/.sqliterc with the following content:
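A minimal example of such a file:

.headers on
.mode column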

Files Rotation

File rotation is properly handled, including the logrotate copytruncate mode.

Lua

Due to the necessity of having a flexible filtering mechanism, it is now possible to extend Fluent Bit capabilities by writing simple filters using the Lua programming language. A Lua-based filter takes two steps:

  • Configure the Filter in the main configuration

  • Prepare a Lua script that will be used by the Filter

Content:

Configuration Parameters

The plugin supports the following configuration parameters:

Getting Started

Command Line

From the command line you can use the following options:
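For illustration, assuming a hypothetical script test.lua that defines a function cb_print:

$ fluent-bit -i dummy -F lua -p script=test.lua -p call=cb_print -m '*' -o stdout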

Configuration File

In your main configuration file append the following Input, Filter & Output sections:
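A sketch of an equivalent configuration, using the same hypothetical script and function names:

[INPUT]
    Name    dummy

[FILTER]
    Name    lua
    Match   *
    script  test.lua
    call    cb_print

[OUTPUT]
    Name    stdout
    Match   *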

Lua Script Filter API

The life cycle of a filter have the following steps:

  • Upon Tag matching by filter_lua, it may process or bypass the record.

  • If filter_lua accepts the record, it will invoke the function defined in the call property which basically is the name of a function defined in the Lua script.

  • Invoke Lua function passing each record in JSON format.

  • Upon return, validate return value and take some action (described above)

Callback Prototype

The Lua script can have one or multiple callbacks that can be used by filter_lua; its prototype is as follows:
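A minimal sketch of the prototype (the function name cb_print is only illustrative; it must match the call property):

function cb_print(tag, timestamp, record)
    -- code: -1 = drop the record,
    --        0 = keep the record unmodified,
    --        1 = the record and/or timestamp were modified
    return 1, timestamp, record
end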

Function Arguments

Return Values

Each callback must return three values:

Code Examples

For functional examples of this interface, please refer to the code samples provided in the source code of the project located here:

Number Type

In Lua, Fluent Bit treats numbers as double. It means an integer field (e.g. IDs, log levels) will be converted to double. To avoid this type conversion, the Type_int_key property is available.

Filter Plugins

Filter plugins allow you to alter the incoming data generated by the input plugins. As of this version the following filter plugins are available:

In order to let a filter be applied over some data, the Match rule must exist and it must match the Tag of the incoming data.

Parser

The Parser Filter plugin allows you to parse fields in event records.

Configuration Parameters

The plugin supports the following configuration parameters:

Getting Started

Configuration File

This is an example of parsing a record {"data":"100 0.5 true This is example"}.

The plugin needs a parser file which defines how to parse each field.

The path of the parser file should be written in the configuration file under the [SERVICE] section.
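A sketch of what this could look like, assuming a hypothetical parser named dummy_test defined in parsers.conf and the dummy input used to produce the sample record:

[PARSER]
    Name    dummy_test
    Format  regex
    Regex   ^(?<INT>[^ ]+) (?<FLOAT>[^ ]+) (?<BOOL>[^ ]+) (?<STRING>.+)$

And the main configuration file:

[SERVICE]
    Parsers_File  parsers.conf

[INPUT]
    Name   dummy
    Tag    dummy.data
    Dummy  {"data":"100 0.5 true This is example"}

[FILTER]
    Name      parser
    Match     dummy.*
    Key_Name  data
    Parser    dummy_test

[OUTPUT]
    Name   stdout
    Match  *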

The output is

You can see that the record {"data":"100 0.5 true This is example"} is parsed.

Preserve original fields

By default, the parser plugin only keeps the parsed fields in its output.

If you enable Reserve_Data, all other fields are preserved:

This will produce the output:

If you enable Reserve_Data and Preserve_Key, the original key field will be preserved as well:

This will produce the output:

Record Modifier

The Record Modifier Filter plugin allows you to append fields or to exclude specific fields.

Configuration Parameters

The plugin supports the following configuration parameters. Note that Remove_key and Whitelist_key are mutually exclusive.

Getting Started

In order to start filtering records, you can run the filter from the command line or through the configuration file.

This is a sample in_mem record to filter.

Append fields

The following configuration file appends the product name and hostname (via an environment variable) to the record.
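A sketch under those assumptions (the product value is just a placeholder):

[INPUT]
    Name  mem
    Tag   mem.local

[FILTER]
    Name    record_modifier
    Match   *
    Record  hostname ${HOSTNAME}
    Record  product Awesome_Tool

[OUTPUT]
    Name   stdout
    Match  *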

You can also run the filter from command line.

The output will be

Remove fields with Remove_key

The following configuration file removes the 'Swap.*' fields.

You can also run the filter from command line.

The output will be

Remove fields with Whitelist_key

The following configuration file retains only the 'Mem.*' fields.

You can also run the filter from command line.

The output will be

Nest

The Nest Filter plugin allows you to operate on or with nested data. Its modes of operation are

  • nest - Take a set of records and place them in a map

  • lift - Take a map by key and lift its records up

Example usage (nest)

As an example using JSON notation, to nest keys matching the Wildcard value Key* under a new key NestKey the transformation becomes,

Example (input)

Example (output)

Example usage (lift)

As an example using JSON notation, to lift keys nested under the Nested_under value NestKey* the transformation becomes,

Example (input)

Example (output)

Configuration Parameters

The plugin supports the following configuration parameters:

Getting Started

Example #1 - nest

Command Line

Note: Using the command line mode requires quotes to parse the wildcard properly. The use of a configuration file is recommended.

The following command will load the mem plugin. Then the nest filter will match the wildcard rule to the keys and nest the keys matching Mem.* under the new key NEST.

Configuration File

Result

The output of both the command line and configuration invocations should be identical and result in the following output.

Example #2 - nest and lift undo

This example nests all Mem.* and Swap.* items under the Stats key and then reverses these actions with a lift operation. The output appears unchanged.

Configuration File

Result

Example #3 - nest 3 levels deep

This example takes the keys starting with Mem.* and nests them under LAYER1, which itself is then nested under LAYER2, which is nested under LAYER3.

Configuration File

Result

Example #4 - multiple nest and lift filters with prefix

This example starts with the 3-level deep nesting of Example #3 and applies the lift filter three times to reverse the operations. The end result is that all records are at the top level, without nesting, again. One prefix is added for each level that is lifted.

Configuration file

Result

Throttle

The Throttle Filter plugin sets the average Rate of messages per Interval, based on a leaky bucket and sliding window algorithm. In case of flooding, it will leak messages within the configured rate.

Configuration Parameters

The plugin supports the following configuration parameters:

Functional description

Let's imagine we have configured:

We received 1 message in the first second, 3 messages in the second, and 5 in the third. As you can see, even though the Window is actually 5, we use a "slow" start to prevent flooding during startup.

But as soon as Window size * Interval has elapsed, we will have a true sliding window with aggregation over the complete window.

When the average over the window is more than the Rate, we will start dropping messages, so that

will become:

As you can see, the last pane of the window was overwritten and 1 message was dropped.

Interval vs Window size

You might have noticed the possibility to configure the Interval of the window shift. It is counter-intuitive, but there is a difference between the two examples above:

and

Even though both examples allow a maximum Rate of 60 messages per minute, the first example may get all 60 messages within the first second and will drop all the rest for the entire minute:

While the second example will not allow more than 1 message per second every second, making the output rate smoother:

It may drop some data if the rate is ragged. We recommend using a bigger Interval and Rate for streams of rare but important events, while keeping the Window big and the Interval small for constantly intensive inputs.

Command Line

Note: It's suggested to use a configuration file.

The following command will load the tail plugin and read the content of the lines.txt file. Then the throttle filter will apply a rate limit and only pass records that arrive below the configured rate:
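A sketch of such an invocation (the rate value is illustrative):

$ bin/fluent-bit -i tail -p 'path=lines.txt' -F throttle -p 'rate=1' -m '*' -o stdout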

Configuration File
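A sketch matching the description below (values taken from that description):

[FILTER]
    Name      throttle
    Match     *
    Rate      1000
    Window    300
    Interval  1s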

The example above will pass 1000 messages per second in average over 300 seconds.

Output Plugins

Kubernetes

Fluent Bit Kubernetes Filter allows you to enrich your log files with Kubernetes metadata.

When Fluent Bit is deployed in Kubernetes as a DaemonSet and configured to read the log files from the containers (using tail or systemd input plugins), this filter aims to perform the following operations:

  • Analyze the Tag and extract the following metadata:

    • Pod Name

    • Namespace

    • Container Name

    • Container ID

  • Query Kubernetes API Server to obtain extra metadata for the POD in question:

    • Pod ID

    • Labels

    • Annotations

The data is cached locally in memory and appended to each record.

Configuration Parameters

The plugin supports the following configuration parameters:

Processing the 'log' value

Kubernetes Filter aims to provide several ways to process the data contained in the log key. The following explanation of the workflow assumes that your original Docker parser defined in parsers.conf is as follows:
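For reference, that parser definition (shown earlier in the Parsers section) is:

[PARSER]
    Name        docker
    Format      json
    Time_Key    time
    Time_Format %Y-%m-%dT%H:%M:%S.%L
    Time_Keep   On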

Since Fluent Bit v1.2 we do not suggest the use of decoders (Decode_Field_As) if you are using an Elasticsearch database in the output, to avoid data type conflicts.

To perform processing of the log key, it's mandatory to enable the Merge_Log configuration property in this filter, then the following processing order will be done:

  • If a Pod suggests a parser, the filter will use that parser to process the content of log.

  • If the option Merge_Parser was set and the Pod did not suggest a parser, process the log content using the suggested parser in the configuration.

  • If the Pod did not suggest a parser and Merge_Parser is not set, try to handle the content as JSON.

If the log value processing fails, the value is untouched. The order above is not chained, meaning it is exclusive and the filter will try only one of the options above, not all of them.

Kubernetes Annotations

A flexible feature of the Fluent Bit Kubernetes filter is that it allows Kubernetes Pods to suggest certain behaviors for the log processor pipeline when processing the records. At the moment it supports:

  • Suggest a pre-defined parser

  • Request to exclude logs

The following annotations are available:

Annotation Examples in Pod definition

Suggest a parser

The following Pod definition runs a Pod that emits Apache logs to the standard output. In the Annotations it suggests that the data should be processed using the pre-defined parser called apache:

Request to exclude logs

There are certain situations where the user would like to request that the log processor simply skip the logs from the Pod in question:

Note that the annotation value is boolean, which can take a true or false value and must be quoted.

Workflow of Tail + Kubernetes Filter

Tail supports Tag expansion, which means that if a tag has a star character (*), it will be replaced with the absolute path of the monitored file, so if your file name and path is:

then the Tag for every record of that file becomes:

note that slashes are replaced with dots.

Kubernetes Filter does not care where the logs come from, but it cares about the absolute name of the monitored file, because that information contains the pod name and namespace name that are used to retrieve the associated metadata for the running Pod from the Kubernetes Master/API Server.

If the configuration property Kube_Tag_Prefix was configured (available on Fluent Bit >= 1.1.x), it will use that value to remove the prefix that was appended to the Tag in the previous Input section. Note that the configuration property defaults to kube.var.log.containers. , so the previous Tag content will be transformed from:

to:

The transformation above does not modify the original Tag, it just creates a new representation for the filter to perform metadata lookup.

That new value is used by the filter to look up the pod name and namespace; for that purpose it uses an internal regular expression:

Custom Regex

Under certain uncommon conditions, a user might want to alter that hard-coded regular expression; for that purpose the option Regex_Parser can be used (documented above).

Final Comments

So at this point the filter is able to gather the values of pod_name and namespace. With that information it will check in the local cache (an internal hash table) if some metadata for that key pair exists; if so, it will enrich the record with the metadata value, otherwise it will connect to the Kubernetes Master/API Server and retrieve that information.

Helm Charts and Yaml

The Fluent Bit configuration reader expects that every line in the configuration files ends with a \n (LF or 0x0A). When composing Yaml files for a Helm chart, always enable the multiline mode, example:

Azure

Configuration Parameters

Getting Started

In order to insert records into an Azure Log Analytics instance, you can run the plugin from the command line or through the configuration file:

Command Line

The azure plugin can read the parameters from the command line through the -p argument (property), e.g:
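A sketch of such an invocation (the customer id and shared key are placeholders):

$ fluent-bit -i cpu -o azure -p customer_id=YOUR_CUSTOMER_ID -p shared_key=YOUR_SHARED_KEY -m '*'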

Configuration File

In your main configuration file append the following Input & Output sections:

Decoders

There are certain cases where the log messages being parsed contain encoded data. A typical use case can be found in containerized environments with Docker: the application logs its data in JSON format, but it becomes an escaped string. Consider the following example.

Original message generated by the application:

Then the Docker log message become encapsulated as follows:

As you can see, the original message is handled as an escaped string. Ideally in Fluent Bit we would like to keep the original structured message and not a string.

Getting Started

Decoders are a built-in feature available through the Parsers file; each Parser definition can optionally set one or multiple decoders. There are two types of decoders:

  • Decode_Field: if the content can be decoded in a structured message, append that structure message (keys and values) to the original log message.

  • Decode_Field_As: any content decoded (unstructured or structured) will be replaced in the same key/value, no extra keys are added.

Our pre-defined Docker parser has the following definition:

Each line in the parser with a key Decode_Field instructs the parser to apply a specific decoder on a given field; optionally it offers the option to take an extra action if the decoder does not succeed.

Decoders

Optional Actions

By default, if a decoder fails to decode the field or you want to try the next decoder, it is possible to define an optional action. Available actions are:

Note that actions are affected by some restrictions:

  • on Decode_Field_As, if it succeeded, another decoder of the same type in the same field can be applied only if the data continues to be an unstructured message (raw text).

  • on Decode_Field, if it succeeded, it can only be applied once to the same field. By nature Decode_Field aims to decode a structured message.

Examples

escaped_utf8

Example input (from /path/to/log.log in configuration below)

Example output

Configuration file

The fluent-bit-parsers.conf file,

BigQuery

Google Cloud Configuration

Fluent Bit streams data into an existing BigQuery table using a service account that you specify. Therefore, before using the BigQuery output plugin, you must create a service account, create a BigQuery dataset and table, authorize the service account to write to the table, and provide the service account credentials to Fluent Bit.

Creating a Service Account

To stream data into BigQuery, the first step is to create a Google Cloud service account for Fluent Bit:

Creating a BigQuery Dataset and Table

Fluent Bit does not create datasets or tables for your data, so you must create these ahead of time. You must also grant the service account WRITER permission on the dataset:

Within the dataset you will need to create a table for the data to reside in. You can follow these instructions for creating your table. Pay close attention to the schema: it must match the schema of your output JSON. Unfortunately, since BigQuery does not allow dots in field names, you will need to use a filter to change the fields for many of the standard inputs (e.g, mem or cpu).

Retrieving Service Account Credentials

The Fluent Bit BigQuery output plugin uses a JSON credentials file for authentication. Download the credentials file by following these instructions:

Configuration Parameters

Configuration File

If you are using a Google Cloud Credentials File, the following configuration is enough to get you started:
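A minimal sketch, assuming placeholder project, dataset and table names:

[INPUT]
    Name  dummy
    Tag   dummy

[OUTPUT]
    Name                        bigquery
    Match                       *
    google_service_credentials  /path/to/my_credentials.json
    project_id                  my_project_id
    dataset_id                  my_dataset
    table_id                    my_table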

File

The file output plugin allows you to write the data received through the input plugins to a file.

Configuration Parameters

The plugin supports the following configuration parameters:

Format

out_file format

Outputs time, tag and JSON records. There are no configuration parameters for out_file.

plain format

Outputs the records as JSON (without additional tag and timestamp attributes). There are no configuration parameters for the plain format.

csv format

Outputs the records as CSV. CSV supports an additional configuration parameter.

ltsv format

Outputs the records as LTSV. LTSV supports an additional configuration parameter.

template format

Outputs the records using a custom format template.

This accepts a formatting template and fills placeholders using the corresponding values in a record.

For example, if you set up the configuration as below:
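A sketch of such a configuration, using the mem input and illustrative placeholders in the template:

[INPUT]
    Name  mem

[OUTPUT]
    Name      file
    Match     *
    Format    template
    Template  {time} used={Mem.used} free={Mem.free} total={Mem.total}

With this template, each flushed record is rendered as a single line containing the timestamp and the selected memory fields.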

You will get the following output:

Getting Started

You can run the plugin from the command line or through the configuration file:

Command Line

From the command line you can let Fluent Bit write data to a file with the following options:

Configuration File

In your main configuration file append the following Input & Output sections:

FlowCounter

FlowCounter is a protocol to count records. The flowcounter output plugin allows you to count records and their size.

Configuration Parameters

The plugin supports the following configuration parameters:

Getting Started

You can run the plugin from the command line or through the configuration file:

Command Line

From the command line you can let Fluent Bit count up data with the following options:

Configuration File

In your main configuration file append the following Input & Output sections:

Testing

Once Fluent Bit is running, you will see the reports in the output interface similar to this:

Counter

Counter is a very simple plugin that counts how many records it is receiving at flush time. The plugin output is as follows:

Getting Started

You can run the plugin from the command line or through the configuration file:

Command Line

From the command line you can let Fluent Bit count up data with the following options:

Configuration File

In your main configuration file append the following Input & Output sections:

Testing

Once Fluent Bit is running, you will see the reports in the output interface similar to this:

Elasticsearch

Configuration Parameters

TLS / SSL

Getting Started

In order to insert records into an Elasticsearch service, you can run the plugin from the command line or through the configuration file:

Command Line

The es plugin can read the parameters from the command line in two ways: through the -p argument (property) or by setting them directly through the service URI. The URI format is the following:

Using the format specified, you could start Fluent Bit through:
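For illustration, with placeholder host, index and type values:

$ fluent-bit -i cpu -t cpu -o es://192.168.2.3:9200/my_index/my_type -o stdout -m '*'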

which is similar to do:
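The same placeholders expressed as properties:

$ fluent-bit -i cpu -t cpu -o es -p Host=192.168.2.3 -p Port=9200 -p Index=my_index -p Type=my_type -o stdout -m '*'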

Configuration File

In your main configuration file append the following Input & Output sections:
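A sketch with the same placeholder values:

[INPUT]
    Name  cpu
    Tag   cpu

[OUTPUT]
    Name   es
    Match  *
    Host   192.168.2.3
    Port   9200
    Index  my_index
    Type   my_type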

About Elasticsearch field names

Some input plugins may generate messages where the field names contain dots. Since Elasticsearch 2.0 this is no longer allowed, so the current es plugin replaces them with an underscore, e.g. a field name such as cpu0.p_cpu

becomes cpu0_p_cpu

FAQ

Elasticsearch rejects requests saying "the final mapping would have more than 1 type"

Since Elasticsearch 6.0, you cannot create multiple types in a single index. This means that you cannot set up your configuration as below anymore.

If you see an error message like below, you'll need to fix your configuration to use a single type on each index.

Rejecting mapping update to [search] as the final mapping would have more than 1 type

Fluent Bit + AWS Elasticsearch

AWS Elasticsearch adds an extra security layer where HTTP requests must be signed with AWS SigV4. As of Fluent Bit v1.3 this is not yet supported; the feature is planned for Fluent Bit v1.4 (along with integrations with other AWS services), expected at the end of January 2020.

As a workaround, you can use the following tool as a proxy:

More details about this AWS requirement can be found here:

GELF

The following instructions assumes that you have a fully operational Graylog server running in your environment.

Configuration Parameters

TLS / SSL

Notes

  • If you're using Fluent Bit to collect Docker logs, note that Docker places your log in JSON under key log. So you can set log as your Gelf_Short_Message_Key to send everything in Docker logs to Graylog. In this case, you need your log value to be a string; so don't parse it using JSON parser.

  • The order of looking up the timestamp in this plugin is as follows:

    1. Value of Gelf_Timestamp_Key provided in configuration

    2. Value of timestamp key

    3. If the timestamp is not set by Fluent Bit, your Graylog server will set it to the current timestamp (now).

  • The version of the GELF message is also mandatory and Fluent Bit sets it to 1.1, which is the current latest version of GELF.

  • If you use udp as transport protocol and set Compress to true, Fluent Bit compresses your packets in GZIP format, which is the default compression that Graylog offers. This can be used to trade more CPU load for saving network bandwidth.

Configuration File Example

If you're using Fluent Bit for shipping Kubernetes logs, you can use something like this as your configuration file:
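A sketch of such a configuration, assuming a placeholder Graylog host:

[INPUT]
    Name    tail
    Path    /var/log/containers/*.log
    Parser  docker
    Tag     kube.*

[FILTER]
    Name       kubernetes
    Match      kube.*
    Merge_Log  On

[OUTPUT]
    Name                    gelf
    Match                   kube.*
    Host                    graylog.example.com
    Port                    12201
    Mode                    tcp
    Gelf_Short_Message_Key  log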

By default, GELF tcp uses port 12201 and Docker places your logs in /var/log/containers directory. The logs are placed in value of the log key. For example, this is a log saved by Docker:

Now, this is what happens to this log:

  1. Fluent Bit GELF plugin adds "version": "1.1" to it.

  2. We used this data key as Gelf_Short_Message_Key; so GELF plugin changes it to short_message.

  3. Timestamp is generated.

Finally, this is what our Graylog server input sees:

Modify

The Modify Filter plugin allows you to change records using rules and conditions.

Example usage

As an example using JSON notation to,

  • Rename Key2 to RenamedKey

  • Add a key OtherKey with value Value3 if OtherKey does not yet exist

Example (input)

Example (output)
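For illustration, a hypothetical record such as:

{"Key1": "Value1", "Key2": "Value2"}

would become:

{"Key1": "Value1", "RenamedKey": "Value2", "OtherKey": "Value3"}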

Configuration Parameters

Rules

The plugin supports the following rules:

  • Rules are case insensitive, parameters are not

  • Any number of rules can be set in a filter instance.

  • Rules are applied in the order they appear, with each rule operating on the result of the previous rule.

Conditions

The plugin supports the following conditions:

  • Conditions are case insensitive, parameters are not

  • Any number of conditions can be set.

  • Conditions apply to the whole filter instance and all its rules. Not to individual rules.

  • All conditions have to be true for the rules to be applied.

Example #1 - Add and Rename

Using command Line

Note: Using the command line mode requires quotes to parse the wildcard properly. The use of a configuration file is recommended.

Configuration File

Result

The output of both the command line and configuration invocations should be identical and result in the following output.

Example #2 - Conditionally Add and Remove

Configuration File

Result

Example #3 - Emoji

Configuration File

Result

Datadog

Configuration Parameters

Configuration File

Get started quickly with this configuration file:
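A minimal sketch (the API key is a placeholder you must replace with your own):

[OUTPUT]
    Name    datadog
    Match   *
    TLS     on
    apikey  <YOUR_DATADOG_API_KEY>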

Troubleshooting

403 Forbidden

HTTP

Configuration Parameters

TLS / SSL

Getting Started

In order to insert records into a HTTP server, you can run the plugin from the command line or through the configuration file:

Command Line

The http plugin can read the parameters from the command line in two ways: through the -p argument (property) or by setting them directly through the service URI. The URI format is the following:

Using the format specified, you could start Fluent Bit through:

Configuration File

In your main configuration file, append the following Input & Output sections:
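A sketch with placeholder host and URI values:

[INPUT]
    Name  cpu
    Tag   cpu

[OUTPUT]
    Name    http
    Match   *
    Host    192.168.2.3
    Port    80
    URI     /something
    Format  json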

By default, the URI becomes the tag of the message and the original tag is ignored. To retain the tag, multiple configuration sections have to be defined, each flushing to a different URI.

Another approach we also support is sending the original message tag in a configurable header. It's up to the receiver to do what it wants with that header field: parse it and use it as the tag, for example.

To configure this behaviour, add this config:

Provided you are using Fluentd as data receiver, you can combine in_http and out_rewrite_tag_filter to make use of this HTTP header.

Notice how we override the tag, which comes from the URI path, with our custom header.

Example : Add a header

Forward

This plugin offers two different transports and modes:

  • Forward (TCP): It uses a plain TCP connection.

  • Secure Forward (TLS): when TLS is enabled, the plugin switch to Secure Forward mode.

Configuration Parameters

The following parameters are mandatory for either Forward or Secure Forward mode:

Secure Forward Mode Configuration Parameters

Forward Setup

That configuration file specifies that it will listen for TCP connections on port 24224 through the forward input type. Then, for every message with a fluent_bit TAG, it will print the message to the standard output.

Fluent Bit + Forward Setup

Fluent Bit + Secure Forward Setup

DISCLAIMER: the following example does not cover the generation of certificates suitable for production environments.

Fluent Bit

Paste this content in a file called flb.conf:
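A minimal sketch of such a file (the shared key and hostname are illustrative):

[SERVICE]
    Flush      5
    Daemon     off
    Log_Level  info

[INPUT]
    Name  cpu
    Tag   cpu_usage

[OUTPUT]
    Name           forward
    Match          *
    Host           127.0.0.1
    Port           24224
    Shared_Key     secret
    Self_Hostname  flb.local
    tls            on
    tls.verify     off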

Fluentd

Paste this content in a file called fld.conf:

If you're using Fluentd v1, set it up as below:

Test Communication

Start Fluentd:

Start Fluent Bit:

After five seconds, Fluent Bit will write the records to Fluentd. In Fluentd output you will see a message like this:

InfluxDB

Configuration Parameters

TLS / SSL

Getting Started

In order to start inserting records into an InfluxDB service, you can run the plugin from the command line or through the configuration file:

Command Line

The influxdb plugin can read the parameters from the command line in two ways: through the -p argument (property) or by setting them directly through the service URI. The URI format is the following:

Using the format specified, you could start Fluent Bit through:

Configuration File

In your main configuration file append the following Input & Output sections:
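A sketch using the fluentbit database mentioned in the Testing section below:

[INPUT]
    Name  cpu
    Tag   cpu

[OUTPUT]
    Name      influxdb
    Match     *
    Host      127.0.0.1
    Port      8086
    Database  fluentbit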

Tagging

Basic example of Tag_Keys usage:

With Auto_Tags=On this example causes an error, because every parsed field value type is string. The best use of this option is with metrics-like records where one or more field values are not string-typed.

Testing

Before starting Fluent Bit, make sure the target database exists on InfluxDB. Using the above example, we will insert the data into a fluentbit database.

1. Create database

Log into InfluxDB console:

Create the database:

Check the database exists:

2. Run Fluent Bit

The following command will gather CPU metrics from the system and send the data to InfluxDB database every five seconds:

Note that all records coming from the cpu input plugin have the tag cpu; this tag is used to generate the measurement in InfluxDB.

3. Query the data

From InfluxDB console, choose your database:

Now query some specific fields:

The CPU input plugin gathers more metrics per CPU core; in the above example we just selected three specific metrics. The following query will give a full result:

4. View tags

Query tagged keys:

And now query method key values:



Parsers are an important component of Fluent Bit; with them you can take any unstructured log entry and give it a structure that makes it easier to process and filter.

(named capture)

Note: if you are using Regular Expressions, note that Fluent Bit uses Ruby-based regular expressions, and we encourage you to use the Rubular web site as an online editor to test them.

Specify the format of the parser; the available options are: json, regex, ltsv or logfmt.

Specify the format of the time field so it can be recognized and analyzed properly. Fluent Bit uses strptime(3) to parse time, so you can refer to the strptime(3) documentation for available modifiers.

Time resolution and its supported format are handled by using the strftime(3) libc system function.

The logfmt parser allows you to parse the logfmt format described in https://brandur.org/logfmt. A more formal description is in https://godoc.org/github.com/kr/logfmt.

The ltsv parser allows you to parse LTSV formatted text.

The Lua filter allows you to modify the incoming records using custom Lua scripts.

In order to test the filter, you can run the plugin from the command line or through the configuration file. The following example uses an input plugin for data ingestion, invokes the Lua filter with a script, and calls a function which only prints the same information to the standard output:

In order to start filtering records, you can run the filter from the command line or through the configuration file. The following invokes the , which outputs the following (example),

Output plugins define where Fluent Bit should flush the information it gathers from the input. At the moment the available options are the following:

Kubernetes Filter depends on either the Tail or Systemd input plugins to process and enrich records with Kubernetes metadata. Here we will explain the workflow of Tail and how its configuration is correlated with the Kubernetes filter. Consider the following configuration example (just for demo purposes, not production):

In the input section, the Tail plugin will monitor all files ending in .log in the path /var/log/containers/. For every file it will read every line and apply the docker parser. Then the records are emitted to the next step with an expanded tag.

When the Kubernetes filter runs, it will try to match all records that start with kube. (note the ending dot), so records from the file mentioned above will hit the matching rule and the filter will try to enrich the records.

If you want to know more details, check the source code of that definition.

You can see on the Rubular web site how this operation is performed; check the following demo link:

The Azure output plugin allows you to ingest your records into the Azure Log Analytics service.

To get more details about how to setup Azure Log Analytics, please refer to the following documentation:

The BigQuery output plugin is an experimental plugin that allows you to stream records into Google Cloud BigQuery. The implementation does not support the following, which would be expected in a full production version:

.

using insertId.

using templateSuffix.

The es output plugin allows you to ingest your records into an Elasticsearch database. The following instructions assume that you have a fully operational Elasticsearch service running in your environment.

The parameters index and type can be confusing if you are new to Elastic, if you have used a common relational database before, they can be compared to the database and table concepts. Also see

The Elasticsearch output plugin supports TLS/SSL; for more details about the properties available and general configuration, please refer to the TLS/SSL section.

For details, please read .

GELF is the Graylog Extended Log Format. The GELF output plugin allows you to send logs in GELF format directly to a Graylog input using TLS, TCP or UDP protocols.

According to the GELF payload specification, there are some mandatory and optional fields which are used by Graylog in GELF format. These fields are determined with the Gelf_*_Key keys in this plugin.

The GELF output plugin supports TLS/SSL; for more details about the properties available and general configuration, please refer to the TLS/SSL section.

If you're using , this parser can parse time and use it as timestamp of message. If all above fail, Fluent Bit tries to get timestamp extracted by your parser.

Your log timestamp has to be in UNIX epoch timestamp format. If the Gelf_Timestamp_Key value of your log is not in this format, your Graylog server will ignore it.

If you're using Fluent Bit in Kubernetes and you're using the Kubernetes filter, this plugin adds the host value to your log by default, and you don't need to add it on your own.

If you use and use a Parser like the docker parser shown above, it decodes your message and extracts data (and any other present) field. This is how this log in looks like after decoding:

The , unnests fields inside log key. In our example, it puts data alongside stream and time.

adds host name.

Any custom field (not present in ) is prefixed by an underline.

In order to start filtering records, you can run the filter from the command line or through the configuration file. The following invokes the , which outputs the following (example),

The Datadog output plugin allows you to ingest your logs into Datadog.

Before you begin, you need a , a , and you need to .

If you get a 403 Forbidden error response, double check that you have a valid and that you have .

The http output plugin allows you to flush your records into an HTTP endpoint. For now the functionality is pretty basic and it issues a POST request with the data records in MessagePack (or JSON) format.

The HTTP output plugin supports TLS/SSL; for more details about the properties available and general configuration, please refer to the TLS/SSL section.

Forward is the protocol used by Fluentd to route messages between peers. The forward output plugin provides interoperability between Fluent Bit and Fluentd. There are no configuration steps required besides specifying where Fluentd is located; it can be on the local host or in a remote machine.

When using Secure Forward mode, TLS support is required to be enabled. The following additional configuration parameters are available:

Before proceeding, make sure that Fluentd is installed in your system; if that is not the case, please refer to the Fluentd Installation document and go ahead with that.

Once Fluentd is installed, create the following configuration file example that will allow us to stream data into it:

In one terminal, launch Fluentd specifying the new configuration file created (in_fluent-bit.conf):

Now that Fluentd is ready to receive messages, we need to specify where the forward output plugin will flush the information using the following format:

If the TAG parameter is not set, the plugin will set the tag as fluent_bit. Keep in mind that TAG is important for routing rules inside Fluentd.

Using the CPU input plugin as an example, we will flush CPU metrics to Fluentd:

Now on the Fluentd side, you will see the CPU metrics gathered in the last seconds:

So we gathered CPU metrics and flushed them out to Fluentd properly.

Secure Forward aims to provide a secure channel of communication with the remote Fluentd service using TLS. Above there is a minimalist configuration for testing purposes.

The influxdb output plugin allows you to flush your records into an InfluxDB time series database. The following instructions assume that you have a fully operational InfluxDB service running in your system.

The InfluxDB output plugin supports TLS/SSL; for more details about the properties available and general configuration, please refer to the TLS/SSL section.

LogFormat "host:%h\tident:%l\tuser:%u\ttime:%t\treq:%r\tstatus:%>s\tsize:%b\treferer:%{Referer}i\tua:%{User-Agent}i" combined_ltsv
CustomLog "logs/access_log" combined_ltsv
[PARSER]
    Name        access_log_ltsv
    Format      ltsv
    Time_Key    time
    Time_Format [%d/%b/%Y:%H:%M:%S %z]
    Types       status:integer size:integer
host:127.0.0.1  ident:- user:-  time:[10/Jul/2018:13:27:05 +0200]       req:GET / HTTP/1.1      status:200      size:16218      referer:http://127.0.0.1/       ua:Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:59.0) Gecko/20100101 Firefox/59.0
host:127.0.0.1  ident:- user:-  time:[10/Jul/2018:13:27:05 +0200]       req:GET /assets/plugins/bootstrap/css/bootstrap.min.css HTTP/1.1        status:200      size:121200     referer:http://127.0.0.1/       ua:Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:59.0) Gecko/20100101 Firefox/59.0
host:127.0.0.1  ident:- user:-  time:[10/Jul/2018:13:27:05 +0200]       req:GET /assets/css/headers/header-v6.css HTTP/1.1      status:200      size:37706      referer:http://127.0.0.1/       ua:Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:59.0) Gecko/20100101 Firefox/59.0
host:127.0.0.1  ident:- user:-  time:[10/Jul/2018:13:27:05 +0200]       req:GET /assets/css/style.css HTTP/1.1  status:200      size:1279       referer:http://127.0.0.1/       ua:Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:59.0) Gecko/20100101 Firefox/59.0
[1531222025.000000000, {"host"=>"127.0.0.1", "ident"=>"-", "user"=>"-", "req"=>"GET / HTTP/1.1", "status"=>200, "size"=>16218, "referer"=>"http://127.0.0.1/", "ua"=>"Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:59.0) Gecko/20100101 Firefox/59.0"}]
[1531222025.000000000, {"host"=>"127.0.0.1", "ident"=>"-", "user"=>"-", "req"=>"GET /assets/plugins/bootstrap/css/bootstrap.min.css HTTP/1.1", "status"=>200, "size"=>121200, "referer"=>"http://127.0.0.1/", "ua"=>"Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:59.0) Gecko/20100101 Firefox/59.0"}]
[1531222025.000000000, {"host"=>"127.0.0.1", "ident"=>"-", "user"=>"-", "req"=>"GET /assets/css/headers/header-v6.css HTTP/1.1", "status"=>200, "size"=>37706, "referer"=>"http://127.0.0.1/", "ua"=>"Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:59.0) Gecko/20100101 Firefox/59.0"}]
[1531222025.000000000, {"host"=>"127.0.0.1", "ident"=>"-", "user"=>"-", "req"=>"GET /assets/css/style.css HTTP/1.1", "status"=>200, "size"=>1279, "referer"=>"http://127.0.0.1/", "ua"=>"Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:59.0) Gecko/20100101 Firefox/59.0"}]

Key

Description

Default

Multiline

If enabled, the plugin will try to discover multiline messages and use the proper parsers to compose the outgoing messages. Note that when this option is enabled the Parser option is not used.

Off

Multiline_Flush

Wait period time in seconds to process queued multiline messages

4

Parser_Firstline

Name of the parser that matches the beginning of a multiline message. Note that the regular expression defined in the parser must include a group name (named capture)

Parser_N

Optional extra parser to interpret and structure multiline entries. This option can be used to define multiple parsers, e.g.: Parser_1 ab1, Parser_2 ab2, Parser_N abN.
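As a minimal sketch of the multiline options above (the path and the parser names multiline_head and multiline_rest are placeholders and must exist in your parsers file), a tail input could look like this:

[INPUT]
    Name              tail
    Path              /var/log/app/*.log
    Multiline         On
    Multiline_Flush   5
    Parser_Firstline  multiline_head
    Parser_1          multiline_rest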

Key

Description

Default

Docker_Mode

If enabled, the plugin will recombine split Docker log lines before passing them to any parser as configured above. This mode cannot be used at the same time as Multiline.

Off

Docker_Mode_Flush

Wait period time in seconds to flush queued unfinished split lines.

4
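A minimal sketch of Docker mode, assuming the docker parser shown elsewhere in this document is registered; it recombines split Docker log lines before the parser is applied:

[INPUT]
    Name               tail
    Path               /var/log/containers/*.log
    Parser             docker
    Docker_Mode        On
    Docker_Mode_Flush  4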

$ fluent-bit -i tail -p path=/var/log/syslog -o stdout
[INPUT]
    Name        tail
    Path        /var/log/syslog

[OUTPUT]
    Name   stdout
    Match  *
$ fluent-bit -i tail -p path=/var/log/syslog -p db=/path/to/logs.db -o stdout
$ sqlite3 tail.db
-- Loading resources from /home/edsiper/.sqliterc

SQLite version 3.14.1 2016-08-11 18:53:32
Enter ".help" for usage hints.
sqlite> SELECT * FROM in_tail_files;
id     name                              offset        inode         created
-----  --------------------------------  ------------  ------------  ----------
1      /var/log/syslog                   73453145      23462108      1480371857
sqlite>
.headers on
.mode column
.width 5 32 12 12 10

Key

Description

Script

Path to the Lua script that will be used.

Call

Lua function name that will be triggered to do filtering. It's assumed that the function is declared inside the Script defined above.

Type_int_key

If these keys are matched, the fields are converted to integer. If more than one key is given, delimit them by a space.

$ fluent-bit -i dummy -F lua -p script=test.lua -p call=cb_print -m '*' -o null
[INPUT]
    Name   dummy

[FILTER]
    Name    lua
    Match   *
    script  test.lua
    call    cb_print

[OUTPUT]
    Name   null
    Match  *
function cb_print(tag, timestamp, record)
   return code, timestamp, record
end

name

description

tag

Name of the tag associated with the incoming record.

timestamp

Unix timestamp with nanoseconds associated with the incoming record. The original format is a double (seconds.nanoseconds)

record

Lua table with the record content

name

data type

description

code

integer

The code return value represents the result and further action that may follow. If code equals -1, filter_lua must drop the record. If code equals 0, the record will not be modified. If code equals 1, the original timestamp or record has been modified, so it must be replaced by the returned values from timestamp (second return value) and record (third return value).

timestamp

double

If code equals 1, the original record timestamp will be replaced with this new value.

record

table

If code equals 1, the original record information will be replaced with this new value. Note that the format of this value must be a valid Lua table.
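As a minimal sketch building on the cb_print() skeleton above (the added_by field is purely illustrative), a script that modifies the record and signals the change with code 1 could look like this:

function cb_print(tag, timestamp, record)
    -- add an illustrative field to the record
    record["added_by"] = "filter_lua"
    -- return code 1 so the original record is replaced by the returned one,
    -- keeping the original timestamp
    return 1, timestamp, record
end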

Key

Description

Default

Key_Name

Specify field name in record to parse.

Parser

Specify the parser name to interpret the field. Multiple Parser entries are allowed (one per line).

Preserve_Key

Keep original Key_Name field in the parsed result. If false, the field will be removed.

False

Reserve_Data

Keep all other original fields in the parsed result. If false, all other original fields will be removed.

False

Unescape_Key

If the key is an escaped string (e.g.: stringified JSON), unescape the string before applying the parser.

False

[PARSER]
    Name dummy_test
    Format regex
    Regex ^(?<INT>[^ ]+) (?<FLOAT>[^ ]+) (?<BOOL>[^ ]+) (?<STRING>.+)$
[SERVICE]
    Parsers_File /path/to/parsers.conf

[INPUT]
    Name dummy
    Tag  dummy.data
    Dummy {"data":"100 0.5 true This is example"}

[FILTER]
    Name parser
    Match dummy.*
    Key_Name data
    Parser dummy_test

[OUTPUT]
    Name stdout
    Match *
$ fluent-bit -c dummy.conf
Fluent-Bit v0.12.0
Copyright (C) Treasure Data

[2017/07/06 22:33:12] [ info] [engine] started
[0] dummy.data: [1499347993.001371317, {"INT"=>"100", "FLOAT"=>"0.5", "BOOL"=>"true", "STRING"=>"This is example"}]
[1] dummy.data: [1499347994.001303118, {"INT"=>"100", "FLOAT"=>"0.5", "BOOL"=>"true", "STRING"=>"This is example"}]
[2] dummy.data: [1499347995.001296133, {"INT"=>"100", "FLOAT"=>"0.5", "BOOL"=>"true", "STRING"=>"This is example"}]
[3] dummy.data: [1499347996.001320284, {"INT"=>"100", "FLOAT"=>"0.5", "BOOL"=>"true", "STRING"=>"This is example"}]
[PARSER]
    Name dummy_test
    Format regex
    Regex ^(?<INT>[^ ]+) (?<FLOAT>[^ ]+) (?<BOOL>[^ ]+) (?<STRING>.+)$
[SERVICE]
    Parsers_File /path/to/parsers.conf

[INPUT]
    Name dummy
    Tag  dummy.data
    Dummy {"data":"100 0.5 true This is example", "key1":"value1", "key2":"value2"}

[FILTER]
    Name parser
    Match dummy.*
    Key_Name data
    Parser dummy_test
    Reserve_Data On
$ fluent-bit -c dummy.conf
Fluent-Bit v0.12.0
Copyright (C) Treasure Data

[2017/07/06 22:33:12] [ info] [engine] started
[0] dummy.data: [1499347993.001371317, {"INT"=>"100", "FLOAT"=>"0.5", "BOOL"=>"true", "STRING"=>"This is example"}, "key1":"value1", "key2":"value2"]
[1] dummy.data: [1499347994.001303118, {"INT"=>"100", "FLOAT"=>"0.5", "BOOL"=>"true", "STRING"=>"This is example"}, "key1":"value1", "key2":"value2"]
[2] dummy.data: [1499347995.001296133, {"INT"=>"100", "FLOAT"=>"0.5", "BOOL"=>"true", "STRING"=>"This is example"}, "key1":"value1", "key2":"value2"]
[3] dummy.data: [1499347996.001320284, {"INT"=>"100", "FLOAT"=>"0.5", "BOOL"=>"true", "STRING"=>"This is example"}, "key1":"value1", "key2":"value2"]
[PARSER]
    Name dummy_test
    Format regex
    Regex ^(?<INT>[^ ]+) (?<FLOAT>[^ ]+) (?<BOOL>[^ ]+) (?<STRING>.+)$
[SERVICE]
    Parsers_File /path/to/parsers.conf

[INPUT]
    Name dummy
    Tag  dummy.data
    Dummy {"data":"100 0.5 true This is example", "key1":"value1", "key2":"value2"}

[FILTER]
    Name parser
    Match dummy.*
    Key_Name data
    Parser dummy_test
    Reserve_Data On
    Preserve_Key On
$ fluent-bit -c dummy.conf
Fluent-Bit v0.12.0
Copyright (C) Treasure Data

[2017/07/06 22:33:12] [ info] [engine] started
[0] dummy.data: [1499347993.001371317, {"data":"100 0.5 true This is example", "INT"=>"100", "FLOAT"=>"0.5", "BOOL"=>"true", "STRING"=>"This is example"}]
[1] dummy.data: [1499347994.001303118, {"data":"100 0.5 true This is example", "INT"=>"100", "FLOAT"=>"0.5", "BOOL"=>"true", "STRING"=>"This is example"}]
[2] dummy.data: [1499347995.001296133, {"data":"100 0.5 true This is example", "INT"=>"100", "FLOAT"=>"0.5", "BOOL"=>"true", "STRING"=>"This is example"}]
[3] dummy.data: [1499347996.001320284, {"data":"100 0.5 true This is example", "INT"=>"100", "FLOAT"=>"0.5", "BOOL"=>"true", "STRING"=>"This is example"}]

Key

Description

Record

Append fields. This parameter needs a key and value pair.

Remove_key

If the key is matched, that field is removed.

Whitelist_key

If the key is not matched, that field is removed.

{"Mem.total"=>1016024, "Mem.used"=>716672, "Mem.free"=>299352, "Swap.total"=>2064380, "Swap.used"=>32656, "Swap.free"=>2031724}
[INPUT]
    Name mem
    Tag  mem.local

[OUTPUT]
    Name  stdout
    Match *

[FILTER]
    Name record_modifier
    Match *
    Record hostname ${HOSTNAME}
    Record product Awesome_Tool
$ fluent-bit -i mem -o stdout -F record_modifier -p 'Record=hostname ${HOSTNAME}' -p 'Record=product Awesome_Tool' -m '*'
[0] mem.local: [1492436882.000000000, {"Mem.total"=>1016024, "Mem.used"=>716672, "Mem.free"=>299352, "Swap.total"=>2064380, "Swap.used"=>32656, "Swap.free"=>2031724, "hostname"=>"localhost.localdomain", "product"=>"Awesome_Tool"}]
[INPUT]
    Name mem
    Tag  mem.local

[OUTPUT]
    Name  stdout
    Match *

[FILTER]
    Name record_modifier
    Match *
    Remove_key Swap.total
    Remove_key Swap.used
    Remove_key Swap.free
$ fluent-bit -i mem -o stdout -F  record_modifier -p 'Remove_key=Swap.total' -p 'Remove_key=Swap.free' -p 'Remove_key=Swap.used' -m '*'
[0] mem.local: [1492436998.000000000, {"Mem.total"=>1016024, "Mem.used"=>716672, "Mem.free"=>295332}]
[INPUT]
    Name mem
    Tag  mem.local

[OUTPUT]
    Name  stdout
    Match *

[FILTER]
    Name record_modifier
    Match *
    Whitelist_key Mem.total
    Whitelist_key Mem.used
    Whitelist_key Mem.free
$ fluent-bit -i mem -o stdout -F  record_modifier -p 'Whitelist_key=Mem.total' -p 'Whitelist_key=Mem.free' -p 'Whitelist_key=Mem.used' -m '*'
[0] mem.local: [1492436998.000000000, {"Mem.total"=>1016024, "Mem.used"=>716672, "Mem.free"=>295332}]
{
  "Key1"     : "Value1",
  "Key2"     : "Value2",
  "OtherKey" : "Value3"
}
{
  "OtherKey" : "Value3"
  "NestKey"  : {
    "Key1"     : "Value1",
    "Key2"     : "Value2",
  }
}
{
  "OtherKey" : "Value3"
  "NestKey"  : {
    "Key1"     : "Value1",
    "Key2"     : "Value2",
  }
}
{
  "Key1"     : "Value1",
  "Key2"     : "Value2",
  "OtherKey" : "Value3"
}

Key

Value Format

Operation

Description

Operation

ENUM [nest or lift]

Select the operation nest or lift

Wildcard

FIELD WILDCARD

nest

Nest records whose field matches the wildcard

Nest_under

FIELD STRING

nest

Nest records matching the Wildcard under this key

Nested_under

FIELD STRING

lift

Lift records nested under the Nested_under key

Add_prefix

FIELD STRING

ANY

Prefix affected keys with this string

Remove_prefix

FIELD STRING

ANY

Remove prefix from affected keys if it matches this string

[0] memory: [1488543156, {"Mem.total"=>1016044, "Mem.used"=>841388, "Mem.free"=>174656, "Swap.total"=>2064380, "Swap.used"=>139888, "Swap.free"=>1924492}]
$ bin/fluent-bit -i mem -p 'tag=mem.local' -F nest -p 'Operation=nest' -p 'Wildcard=Mem.*' -p 'Nest_under=Memstats' -p 'Remove_prefix=Mem.' -m '*' -o stdout
[INPUT]
    Name mem
    Tag  mem.local

[OUTPUT]
    Name  stdout
    Match *

[FILTER]
    Name nest
    Match *
    Operation nest
    Wildcard Mem.*
    Nest_under Memstats
    Remove_prefix Mem.
[2018/04/06 01:35:13] [ info] [engine] started
[0] mem.local: [1522978514.007359767, {"Swap.total"=>1046524, "Swap.used"=>0, "Swap.free"=>1046524, "Memstats"=>{"total"=>4050908, "used"=>714984, "free"=>3335924}}]
[INPUT]
    Name mem
    Tag  mem.local

[OUTPUT]
    Name  stdout
    Match *

[FILTER]
    Name nest
    Match *
    Operation nest
    Wildcard Mem.*
    Wildcard Swap.*
    Nest_under Stats
    Add_prefix NESTED

[FILTER]
    Name nest
    Match *
    Operation lift
    Nested_under Stats
    Remove_prefix NESTED
[2018/06/21 17:42:37] [ info] [engine] started (pid=17285)
[0] mem.local: [1529566958.000940636, {"Mem.total"=>8053656, "Mem.used"=>6940380, "Mem.free"=>1113276, "Swap.total"=>16532988, "Swap.used"=>1286772, "Swap.free"=>15246216}]
[INPUT]
    Name mem
    Tag  mem.local

[OUTPUT]
    Name  stdout
    Match *

[FILTER]
    Name nest
    Match *
    Operation nest
    Wildcard Mem.*
    Nest_under LAYER1

[FILTER]
    Name nest
    Match *
    Operation nest
    Wildcard LAYER1*
    Nest_under LAYER2

[FILTER]
    Name nest
    Match *
    Operation nest
    Wildcard LAYER2*
    Nest_under LAYER3
[0] mem.local: [1524795923.009867831, {"Swap.total"=>1046524, "Swap.used"=>0, "Swap.free"=>1046524, "LAYER3"=>{"LAYER2"=>{"LAYER1"=>{"Mem.total"=>4050908, "Mem.used"=>1112036, "Mem.free"=>2938872}}}}]


{
  "Swap.total"=>1046524,
  "Swap.used"=>0,
  "Swap.free"=>1046524,
  "LAYER3"=>{
    "LAYER2"=>{
      "LAYER1"=>{
        "Mem.total"=>4050908,
        "Mem.used"=>1112036,
        "Mem.free"=>2938872
      }
    }
  }
}
[INPUT]
    Name mem
    Tag  mem.local

[OUTPUT]
    Name  stdout
    Match *

[FILTER]
    Name nest
    Match *
    Operation nest
    Wildcard Mem.*
    Nest_under LAYER1

[FILTER]
    Name nest
    Match *
    Operation nest
    Wildcard LAYER1*
    Nest_under LAYER2

[FILTER]
    Name nest
    Match *
    Operation nest
    Wildcard LAYER2*
    Nest_under LAYER3

[FILTER]
    Name nest
    Match *
    Operation lift
    Nested_under LAYER3
    Add_prefix Lifted3_

[FILTER]
    Name nest
    Match *
    Operation lift
    Nested_under Lifted3_LAYER2
    Add_prefix Lifted3_Lifted2_

[FILTER]
    Name nest
    Match *
    Operation lift
    Nested_under Lifted3_Lifted2_LAYER1
    Add_prefix Lifted3_Lifted2_Lifted1_
[0] mem.local: [1524862951.013414798, {"Swap.total"=>1046524, "Swap.used"=>0, "Swap.free"=>1046524, "Lifted3_Lifted2_Lifted1_Mem.total"=>4050908, "Lifted3_Lifted2_Lifted1_Mem.used"=>1253912, "Lifted3_Lifted2_Lifted1_Mem.free"=>2796996}]


{
  "Swap.total"=>1046524, 
  "Swap.used"=>0, 
  "Swap.free"=>1046524, 
  "Lifted3_Lifted2_Lifted1_Mem.total"=>4050908, 
  "Lifted3_Lifted2_Lifted1_Mem.used"=>1253912, 
  "Lifted3_Lifted2_Lifted1_Mem.free"=>2796996
}

Key

Value Format

Description

Rate

Integer

Amount of messages allowed per time interval.

Window

Integer

Amount of intervals to calculate average over. Default 5.

Interval

String

Time interval, expressed in "sleep" format, e.g. 3s, 1.5m, 0.5h, etc.

Print_Status

Bool

Whether to print status messages with current rate and the limits to information logs

Rate 5
Window 5
Interval 1s
+-------+-+-+-+ 
|1|3|5| | | | | 
+-------+-+-+-+ 
|  3  |         average = 3, and not 1.8 if you calculate 0 for last 2 panes. 
+-----+
+-------------+ 
|1|3|5|7|3|4| | 
+-------------+ 
  |  4.4    |   
  ----------+
+-------------+
|1|3|5|7|3|4|7|
+-------------+
    |   5.2   |
    +---------+
+-------------+
|1|3|5|7|3|4|6|
+-------------+
    |   5     |
    +---------+
Rate 60
Window 5
Interval 1m
Rate 1
Window 300
Interval 1s
XX        XX        XX
XX        XX        XX
XX        XX        XX
XX        XX        XX
XX        XX        XX
XX        XX        XX
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
  X    X     X    X    X    X
XXXX XXXX  XXXX XXXX XXXX XXXX
+-+-+-+-+-+--+-+-+-+-+-+-+-+-+-+
$ bin/fluent-bit -i tail -p 'path=lines.txt' -F throttle -p 'rate=1' -m '*' -o stdout
[INPUT]
    Name   tail
    Path   lines.txt

[FILTER]
    Name     throttle
    Match    *
    Rate     1000
    Window   300
    Interval 1s

[OUTPUT]
    Name   stdout
    Match  *
[PARSER]
    Name         docker
    Format       json
    Time_Key     time
    Time_Format  %Y-%m-%dT%H:%M:%S.%L
    Time_Keep    On

Annotation

Description

Default

fluentbit.io/parser[_stream][-container]

Suggest a pre-defined parser. The parser must be registered already by Fluent Bit. This option will only be processed if the Fluent Bit configuration (Kubernetes Filter) has enabled the option K8S-Logging.Parser. If present, the stream (stdout or stderr) restricts the parser to that specific stream. If present, the container can override the parser for a specific container in a Pod.

fluentbit.io/exclude[_stream][-container]

Request to Fluent Bit to exclude or not the logs generated by the Pod. This option will only be processed if the Fluent Bit configuration (Kubernetes Filter) has enabled the option K8S-Logging.Exclude.

False

apiVersion: v1
kind: Pod
metadata:
  name: apache-logs
  labels:
    app: apache-logs
  annotations:
    fluentbit.io/parser: apache
spec:
  containers:
  - name: apache
    image: edsiper/apache_logs
apiVersion: v1
kind: Pod
metadata:
  name: apache-logs
  labels:
    app: apache-logs
  annotations:
    fluentbit.io/exclude: "true"
spec:
  containers:
  - name: apache
    image: edsiper/apache_logs
[INPUT]
    Name    tail
    Tag     kube.*
    Path    /var/log/containers/*.log
    Parser  docker

[FILTER]
    Name             kubernetes
    Match            kube.*
    Kube_URL         https://kubernetes.default.svc:443
    Kube_CA_File     /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    Kube_Token_File  /var/run/secrets/kubernetes.io/serviceaccount/token
    Kube_Tag_Prefix  kube.var.log.containers.
    Merge_Log        On
    Merge_Log_Key    log_processed
/var/log/containers/apache-logs-annotated_default_apache-aeeccc7a9f00f6e4e066aeff0434cf80621215071f1b20a51e8340aa7c35eac6.log
kube.var.log.containers.apache-logs-annotated_default_apache-aeeccc7a9f00f6e4e066aeff0434cf80621215071f1b20a51e8340aa7c35eac6.log
kube.var.log.containers.apache-logs-annotated_default_apache-aeeccc7a9f00f6e4e066aeff0434cf80621215071f1b20a51e8340aa7c35eac6.log
apache-logs-annotated_default_apache-aeeccc7a9f00f6e4e066aeff0434cf80621215071f1b20a51e8340aa7c35eac6.log
(?<pod_name>[a-z0-9](?:[-a-z0-9]*[a-z0-9])?(?:\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*)_(?<namespace_name>[^_]+)_(?<container_name>.+)-(?<docker_id>[a-z0-9]{64})\.log$
apiVersion: v1
data:
  fluent-bit.conf: |
    [SERVICE]
        Flush           2
        Daemon          off
        Log_Level       info
        Parsers_File    parsers.conf
        Plugins_File    plugins.conf
        HTTP_Server     On
        HTTP_Listen     0.0.0.0
        HTTP_Port       2020

Key

Description

default

Customer_ID

Customer ID or WorkspaceID string.

Shared_Key

The primary or the secondary Connected Sources client authentication key.

Log_Type

The name of the event type.

fluentbit

$ fluent-bit -i cpu -o azure -p customer_id=abc -p shared_key=def -m '*' -f 1
[INPUT]
    Name  cpu

[OUTPUT]
    Name        azure
    Match       *
    Customer_ID abc
    Shared_Key  def
{"status": "up and running"}
{"log":"{\"status\": \"up and running\"}\r\n","stream":"stdout","time":"2018-03-09T01:01:44.851160855Z"}
[PARSER]
    Name         docker
    Format       json
    Time_Key     time
    Time_Format  %Y-%m-%dT%H:%M:%S.%L
    Time_Keep    On
    # Command       |  Decoder  | Field | Optional Action   |
    # ==============|===========|=======|===================|
    Decode_Field_As    escaped     log

Name

Description

json

handle the field content as a JSON map. If it finds a JSON map, it will replace the content with a structured map.

escaped

decode an escaped string.

escaped_utf8

decode a UTF8 escaped string.

Name

Description

try_next

if the decoder failed, apply the next Decoder in the list for the same field.

do_next

if the decoder succeeded or failed, apply the next Decoder in the list for the same field.

{"log":"\u0009Checking indexes...\n","stream":"stdout","time":"2018-02-19T23:25:29.1845444Z"}
{"log":"\u0009\u0009Validated: _audit _internal _introspection _telemetry _thefishbucket history main snmp_data summary\n","stream":"stdout","time":"2018-02-19T23:25:29.1845536Z"}
{"log":"\u0009Done\n","stream":"stdout","time":"2018-02-19T23:25:29.1845622Z"}
[24] tail.0: [1519082729.184544400, {"log"=>"   Checking indexes...                                                   
", "stream"=>"stdout", "time"=>"2018-02-19T23:25:29.1845444Z"}]
[25] tail.0: [1519082729.184553600, {"log"=>"           Validated: _audit _internal _introspection _telemetry _thefishbucket history main snmp_data summary
", "stream"=>"stdout", "time"=>"2018-02-19T23:25:29.1845536Z"}]
[26] tail.0: [1519082729.184562200, {"log"=>"   Done                  
", "stream"=>"stdout", "time"=>"2018-02-19T23:25:29.1845622Z"}]
[SERVICE]
    Parsers_File fluent-bit-parsers.conf

[INPUT]
    Name        tail
    Parser      docker
    Path        /path/to/log.log

[OUTPUT]
    Name   stdout
    Match  *
[PARSER]
    Name        docker
    Format      json
    Time_Key    time
    Time_Format %Y-%m-%dT%H:%M:%S %z
    Decode_Field_as escaped_utf8 log

Key

Description

default

google_service_credentials

Absolute path to a Google Cloud credentials JSON file

Value of the environment variable $GOOGLE_SERVICE_CREDENTIALS

project_id

The project id containing the BigQuery dataset to stream into.

The value of the project_id in the credentials file

dataset_id

The dataset id of the BigQuery dataset to write into. This dataset must exist in your project.

table_id

The table id of the BigQuery table to write into. This table must exist in the specified dataset and the schema must match the output.

[INPUT]
    Name  dummy
    Tag   dummy

[OUTPUT]
    Name       bigquery
    Match      *
    dataset_id my_dataset
    table_id   dummy_table

Key

Description

Path

File path to output. If not set, the filename will be the tag name.

Format

The format of the file content. See also Format section. Default: out_file.

tag: [time, {"key1":"value1", "key2":"value2", "key3":"value3"}]
{"key1":"value1", "key2":"value2", "key3":"value3"}

Key

Description

Delimiter

The character to separate each data. Default: ','

time[delimiter]"value1"[delimiter]"value2"[delimiter]"value3"

Key

Description

Delimiter

The character to separate each pair. Default: '\t'(TAB)

Label_Delimiter

The character to separate label and the value. Default: ':'

field1[label_delimiter]value1[delimiter]field2[label_delimiter]value2\n

Key

Description

Template

The format string. Default: '{time} {message}'

[INPUT]
  Name mem

[OUTPUT]
  Name file
  Format template
  Template {time} used={Mem.used} free={Mem.free} total={Mem.total}
1564462620.000254 used=1045448 free=31760160 total=32805608
$ fluent-bit -i cpu -o file -p path=output.txt
[INPUT]
    Name cpu
    Tag  cpu

[OUTPUT]
    Name file
    Match *
    Path output.txt

Key

Description

Default

Unit

The unit of duration. (second/minute/hour/day)

minute

$ fluent-bit -i cpu -o flowcounter
[INPUT]
    Name cpu
    Tag  cpu

[OUTPUT]
    Name flowcounter
    Match *
    Unit second
$ fluent-bit -i cpu -o flowcounter  
Fluent-Bit v0.10.0
Copyright (C) Treasure Data

[2016/12/23 11:01:20] [ info] [engine] started
[out_flowcounter] cpu.0:[1482458540, {"counts":60, "bytes":7560, "counts/minute":1, "bytes/minute":126 }]
[TIMESTAMP, NUMBER_OF_RECORDS_NOW] (total = RECORDS_SINCE_IT_STARTED)
$ fluent-bit -i cpu -o counter
[INPUT]
    Name cpu
    Tag  cpu

[OUTPUT]
    Name  counter
    Match *
$ bin/fluent-bit -i cpu -o counter -f 1
Fluent-Bit v0.12.0
Copyright (C) Treasure Data

[2017/07/19 11:19:02] [ info] [engine] started
1500484743,1 (total = 1)
1500484744,1 (total = 2)
1500484745,1 (total = 3)
1500484746,1 (total = 4)
1500484747,1 (total = 5)
es://host:port/index/type
$ fluent-bit -i cpu -t cpu -o es://192.168.2.3:9200/my_index/my_type \
    -o stdout -m '*'
$ fluent-bit -i cpu -t cpu -o es -p Host=192.168.2.3 -p Port=9200 \
    -p Index=my_index -p Type=my_type -o stdout -m '*'
[INPUT]
    Name  cpu
    Tag   cpu

[OUTPUT]
    Name  es
    Match *
    Host  192.168.2.3
    Port  9200
    Index my_index
    Type  my_type
{"cpu0.p_cpu"=>17.000000}
{"cpu0_p_cpu"=>17.000000}
[OUTPUT]
    Name  es
    Match foo.*
    Index search
    Type  type1

[OUTPUT]
    Name  es
    Match bar.*
    Index search
    Type  type2
[INPUT]
    Name                    tail
    Tag                     kube.*
    Path                    /var/log/containers/*.log
    Parser                  docker
    DB                      /var/log/flb_kube.db
    Mem_Buf_Limit           5MB
    Refresh_Interval        10

[FILTER]
    Name                    kubernetes
    Match                   kube.*
    Merge_Log_Key           log
    Merge_Log               On
    Keep_Log                Off
    Annotations             Off
    Labels                  Off

[FILTER]
    Name                    nest
    Match                   *
    Operation               lift
    Nested_under            log

[OUTPUT]
    Name                    gelf
    Match                   kube.*
    Host                    <your-graylog-server>
    Port                    12201
    Mode                    tcp
    Gelf_Short_Message_Key  data

[PARSER]
    Name                    docker
    Format                  json
    Time_Key                time
    Time_Format             %Y-%m-%dT%H:%M:%S.%L
    Time_Keep               Off
{"log":"{\"data\": \"This is an example.\"}","stream":"stderr","time":"2019-07-21T12:45:11.273315023Z"}
[0] kube.log: [1565770310.000198491, {"log"=>{"data"=>"This is an example."}, "stream"=>"stderr", "time"=>"2019-07-21T12:45:11.273315023Z"}]
{"version":"1.1", "short_message":"This is an example.", "host": "<Your Node Name>", "_stream":"stderr", "timestamp":1565770310.000199}
{
  "Key1"     : "Value1",
  "Key2"     : "Value2"
}
{
  "Key1"       : "Value1",
  "RenamedKey" : "Value2",
  "OtherKey"   : "Value3"
}

Operation

Parameter 1

Parameter 2

Description

Set

STRING:KEY

STRING:VALUE

Add a key/value pair with key KEY and value VALUE. If KEY already exists, this field is overwritten

Add

STRING:KEY

STRING:VALUE

Add a key/value pair with key KEY and value VALUE if KEY does not exist

Remove

STRING:KEY

NONE

Remove a key/value pair with key KEY if it exists

Remove_wildcard

WILDCARD:KEY

NONE

Remove all key/value pairs with key matching wildcard KEY

Remove_regex

REGEXP:KEY

NONE

Remove all key/value pairs with key matching regexp KEY

Rename

STRING:KEY

STRING:RENAMED_KEY

Rename a key/value pair with key KEY to RENAMED_KEY if KEY exists AND RENAMED_KEY does not exist

Hard_rename

STRING:KEY

STRING:RENAMED_KEY

Rename a key/value pair with key KEY to RENAMED_KEY if KEY exists. If RENAMED_KEY already exists, this field is overwritten

Copy

STRING:KEY

STRING:COPIED_KEY

Copy a key/value pair with key KEY to COPIED_KEY if KEY exists AND COPIED_KEY does not exist

Hard_copy

STRING:KEY

STRING:COPIED_KEY

Copy a key/value pair with key KEY to COPIED_KEY if KEY exists. If COPIED_KEY already exists, this field is overwritten

Condition

Parameter

Parameter 2

Description

Key_exists

STRING:KEY

NONE

Is true if KEY exists

Key_does_not_exist

STRING:KEY

STRING:VALUE

Is true if KEY does not exist

A_key_matches

REGEXP:KEY

NONE

Is true if a key matches regex KEY

No_key_matches

REGEXP:KEY

NONE

Is true if no key matches regex KEY

Key_value_equals

STRING:KEY

STRING:VALUE

Is true if KEY exists and its value is VALUE

Key_value_does_not_equal

STRING:KEY

STRING:VALUE

Is true if KEY exists and its value is not VALUE

Key_value_matches

STRING:KEY

REGEXP:VALUE

Is true if key KEY exists and its value matches VALUE

Key_value_does_not_match

STRING:KEY

REGEXP:VALUE

Is true if key KEY exists and its value does not match VALUE

Matching_keys_have_matching_values

REGEXP:KEY

REGEXP:VALUE

Is true if all keys matching KEY have values that match VALUE

Matching_keys_do_not_have_matching_values

REGEXP:KEY

REGEXP:VALUE

Is true if all keys matching KEY have values that do not match VALUE

[0] memory: [1488543156, {"Mem.total"=>1016044, "Mem.used"=>841388, "Mem.free"=>174656, "Swap.total"=>2064380, "Swap.used"=>139888, "Swap.free"=>1924492}]
[1] memory: [1488543157, {"Mem.total"=>1016044, "Mem.used"=>841420, "Mem.free"=>174624, "Swap.total"=>2064380, "Swap.used"=>139888, "Swap.free"=>1924492}]
[2] memory: [1488543158, {"Mem.total"=>1016044, "Mem.used"=>841420, "Mem.free"=>174624, "Swap.total"=>2064380, "Swap.used"=>139888, "Swap.free"=>1924492}]
[3] memory: [1488543159, {"Mem.total"=>1016044, "Mem.used"=>841420, "Mem.free"=>174624, "Swap.total"=>2064380, "Swap.used"=>139888, "Swap.free"=>1924492}]
bin/fluent-bit -i mem \
  -p 'tag=mem.local' \
  -F modify \
  -p 'Add=Service1 SOMEVALUE' \
  -p 'Add=Service2 SOMEVALUE3' \
  -p 'Add=Mem.total2 TOTALMEM2' \
  -p 'Rename=Mem.free MEMFREE' \
  -p 'Rename=Mem.used MEMUSED' \
  -p 'Rename=Swap.total SWAPTOTAL' \
  -p 'Add=Mem.total TOTALMEM' \
  -m '*' \
  -o stdout
[INPUT]
    Name mem
    Tag  mem.local

[OUTPUT]
    Name  stdout
    Match *

[FILTER]
    Name modify
    Match *
    Add Service1 SOMEVALUE
    Add Service3 SOMEVALUE3
    Add Mem.total2 TOTALMEM2
    Rename Mem.free MEMFREE
    Rename Mem.used MEMUSED
    Rename Swap.total SWAPTOTAL
    Add Mem.total TOTALMEM
[2018/04/06 01:35:13] [ info] [engine] started
[0] mem.local: [1522980610.006892802, {"Mem.total"=>4050908, "MEMUSED"=>738100, "MEMFREE"=>3312808, "SWAPTOTAL"=>1046524, "Swap.used"=>0, "Swap.free"=>1046524, "Service1"=>"SOMEVALUE", "Service3"=>"SOMEVALUE3", "Mem.total2"=>"TOTALMEM2"}]
[1] mem.local: [1522980611.000658288, {"Mem.total"=>4050908, "MEMUSED"=>738068, "MEMFREE"=>3312840, "SWAPTOTAL"=>1046524, "Swap.used"=>0, "Swap.free"=>1046524, "Service1"=>"SOMEVALUE", "Service3"=>"SOMEVALUE3", "Mem.total2"=>"TOTALMEM2"}]
[2] mem.local: [1522980612.000307652, {"Mem.total"=>4050908, "MEMUSED"=>738068, "MEMFREE"=>3312840, "SWAPTOTAL"=>1046524, "Swap.used"=>0, "Swap.free"=>1046524, "Service1"=>"SOMEVALUE", "Service3"=>"SOMEVALUE3", "Mem.total2"=>"TOTALMEM2"}]
[3] mem.local: [1522980613.000122671, {"Mem.total"=>4050908, "MEMUSED"=>738068, "MEMFREE"=>3312840, "SWAPTOTAL"=>1046524, "Swap.used"=>0, "Swap.free"=>1046524, "Service1"=>"SOMEVALUE", "Service3"=>"SOMEVALUE3", "Mem.total2"=>"TOTALMEM2"}]
[INPUT]
    Name mem
    Tag  mem.local
    Interval_Sec 1

[FILTER]
    Name    modify
    Match   mem.*

    Condition Key_Does_Not_Exist cpustats
    Condition Key_Exists Mem.used

    Set cpustats UNKNOWN

[FILTER]
    Name    modify
    Match   mem.*

    Condition Key_Value_Does_Not_Equal cpustats KNOWN

    Add sourcetype memstats

[FILTER]
    Name    modify
    Match   mem.*

    Condition Key_Value_Equals cpustats UNKNOWN

    Remove_wildcard Mem
    Remove_wildcard Swap
    Add cpustats_more STILL_UNKNOWN

[OUTPUT]
    Name           stdout
    Match          *
[2018/06/14 07:37:34] [ info] [engine] started (pid=1493)
[0] mem.local: [1528925855.000223110, {"cpustats"=>"UNKNOWN", "sourcetype"=>"memstats", "cpustats_more"=>"STILL_UNKNOWN"}]
[1] mem.local: [1528925856.000064516, {"cpustats"=>"UNKNOWN", "sourcetype"=>"memstats", "cpustats_more"=>"STILL_UNKNOWN"}]
[2] mem.local: [1528925857.000165965, {"cpustats"=>"UNKNOWN", "sourcetype"=>"memstats", "cpustats_more"=>"STILL_UNKNOWN"}]
[3] mem.local: [1528925858.000152319, {"cpustats"=>"UNKNOWN", "sourcetype"=>"memstats", "cpustats_more"=>"STILL_UNKNOWN"}]
[INPUT]
    Name mem
    Tag  mem.local

[OUTPUT]
    Name  stdout
    Match *

[FILTER]
    Name modify
    Match *

    Remove_Wildcard Mem
    Remove_Wildcard Swap
    Set This_plugin_is_on 🔥
    Set 🔥 is_hot
    Copy 🔥 💦
    Rename  💦 ❄️
    Set ❄️ is_cold
    Set 💦 is_wet
[2018/06/14 07:46:11] [ info] [engine] started (pid=21875)
[0] mem.local: [1528926372.000197916, {"This_plugin_is_on"=>"🔥", "🔥"=>"is_hot", "❄️"=>"is_cold", "💦"=>"is_wet"}]
[1] mem.local: [1528926373.000107868, {"This_plugin_is_on"=>"🔥", "🔥"=>"is_hot", "❄️"=>"is_cold", "💦"=>"is_wet"}]
[2] mem.local: [1528926374.000181042, {"This_plugin_is_on"=>"🔥", "🔥"=>"is_hot", "❄️"=>"is_cold", "💦"=>"is_wet"}]
[3] mem.local: [1528926375.000090841, {"This_plugin_is_on"=>"🔥", "🔥"=>"is_hot", "❄️"=>"is_cold", "💦"=>"is_wet"}]
[0] mem.local: [1528926376.000610974, {"This_plugin_is_on"=>"🔥", "🔥"=>"is_hot", "❄️"=>"is_cold", "💦"=>"is_wet"}]
[OUTPUT]
    Name        datadog
    Match       *
    Host        http-intake.logs.datadoghq.com
    TLS         on
    compress    gzip
    apikey      <my-datadog-api-key>
    dd_service  <my-app-service>
    dd_source   <my-app-source>
    dd_tags     team:logs,foo:bar
http://host:port/something
$ fluent-bit -i cpu -t cpu -o http://192.168.2.3:80/something -m '*'
[INPUT]
    Name  cpu
    Tag   cpu

[OUTPUT]
    Name  http
    Match *
    Host  192.168.2.3
    Port  80
    URI   /something
[OUTPUT]
    Name  http
    Match *
    Host  192.168.2.3
    Port  80
    URI   /something
    Format json
    header_tag  FLUENT-TAG
<source>
  @type http
  add_http_headers true
</source>

<match something>
  @type rewrite_tag_filter
  <rule>
    key HTTP_FLUENT_TAG
    pattern /^(.*)$/
    tag $1
  </rule>
</match>
[OUTPUT]
    Name           http
    Match          *
    Host           127.0.0.1
    Port           9000
    Header         X-Key-A Value_A
    Header         X-Key-B Value_B
    URI            /something

Key

Description

Default

Shared_Key

A key string known by the remote Fluentd used for authorization.

Empty_Shared_Key

Use this option to connect to Fluentd with a zero-length secret.

False

Username

Specify the username to present to a Fluentd server that enables user_auth.

Password

Specify the password corresponding to the username.

Self_Hostname

Default value of the auto-generated certificate common name (CN).

tls

Enable or disable TLS support

Off

tls.verify

Force certificate validation

On

tls.debug

Set TLS debug verbosity level. It accepts the following values: 0 (No debug), 1 (Error), 2 (State change), 3 (Informational) and 4 (Verbose)

1

tls.ca_file

Absolute path to CA certificate file

tls.crt_file

Absolute path to Certificate file.

tls.key_file

Absolute path to private Key file.

tls.key_passwd

Optional password for tls.key_file file.

<source>
  type forward
  bind 0.0.0.0
  port 24224
</source>

<match fluent_bit>
  type stdout
</match>
$ fluentd -c test.conf
2017-03-23 11:50:43 -0600 [info]: reading config file path="test.conf"
2017-03-23 11:50:43 -0600 [info]: starting fluentd-0.12.33
2017-03-23 11:50:43 -0600 [info]: gem 'fluent-mixin-config-placeholders' version '0.3.1'
2017-03-23 11:50:43 -0600 [info]: gem 'fluent-plugin-docker' version '0.1.0'
2017-03-23 11:50:43 -0600 [info]: gem 'fluent-plugin-elasticsearch' version '1.4.0'
2017-03-23 11:50:43 -0600 [info]: gem 'fluent-plugin-flatten-hash' version '0.2.0'
2017-03-23 11:50:43 -0600 [info]: gem 'fluent-plugin-flowcounter-simple' version '0.0.4'
2017-03-23 11:50:43 -0600 [info]: gem 'fluent-plugin-influxdb' version '0.2.8'
2017-03-23 11:50:43 -0600 [info]: gem 'fluent-plugin-json-in-json' version '0.1.4'
2017-03-23 11:50:43 -0600 [info]: gem 'fluent-plugin-mongo' version '0.7.10'
2017-03-23 11:50:43 -0600 [info]: gem 'fluent-plugin-out-http' version '0.1.3'
2017-03-23 11:50:43 -0600 [info]: gem 'fluent-plugin-parser' version '0.6.0'
2017-03-23 11:50:43 -0600 [info]: gem 'fluent-plugin-record-reformer' version '0.7.0'
2017-03-23 11:50:43 -0600 [info]: gem 'fluent-plugin-rewrite-tag-filter' version '1.5.1'
2017-03-23 11:50:43 -0600 [info]: gem 'fluent-plugin-stdin' version '0.1.1'
2017-03-23 11:50:43 -0600 [info]: gem 'fluent-plugin-td' version '0.10.27'
2017-03-23 11:50:43 -0600 [info]: adding match pattern="fluent_bit" type="stdout"
2017-03-23 11:50:43 -0600 [info]: adding source type="forward"
2017-03-23 11:50:43 -0600 [info]: using configuration file: <ROOT>
  <source>
    type forward
    bind 0.0.0.0
    port 24224
  </source>
  <match fluent_bit>
    type stdout
  </match>
</ROOT>
2017-03-23 11:50:43 -0600 [info]: listening fluent socket on 0.0.0.0:24224
bin/fluent-bit -i INPUT -o forward://HOST:PORT
$ bin/fluent-bit -i cpu -t fluent_bit -o forward://127.0.0.1:24224
2017-03-23 11:53:06 -0600 fluent_bit: {"cpu_p":0.0,"user_p":0.0,"system_p":0.0,"cpu0.p_cpu":0.0,"cpu0.p_user":0.0,"cpu0.p_system":0.0,"cpu1.p_cpu":0.0,"cpu1.p_user":0.0,"cpu1.p_system":0.0,"cpu2.p_cpu":0.0,"cpu2.p_user":0.0,"cpu2.p_system":0.0,"cpu3.p_cpu":1.0,"cpu3.p_user":1.0,"cpu3.p_system":0.0}
2017-03-23 11:53:07 -0600 fluent_bit: {"cpu_p":2.25,"user_p":2.0,"system_p":0.25,"cpu0.p_cpu":3.0,"cpu0.p_user":3.0,"cpu0.p_system":0.0,"cpu1.p_cpu":1.0,"cpu1.p_user":1.0,"cpu1.p_system":0.0,"cpu2.p_cpu":1.0,"cpu2.p_user":1.0,"cpu2.p_system":0.0,"cpu3.p_cpu":3.0,"cpu3.p_user":2.0,"cpu3.p_system":1.0}
2017-03-23 11:53:08 -0600 fluent_bit: {"cpu_p":1.75,"user_p":1.0,"system_p":0.75,"cpu0.p_cpu":2.0,"cpu0.p_user":1.0,"cpu0.p_system":1.0,"cpu1.p_cpu":3.0,"cpu1.p_user":1.0,"cpu1.p_system":2.0,"cpu2.p_cpu":3.0,"cpu2.p_user":2.0,"cpu2.p_system":1.0,"cpu3.p_cpu":2.0,"cpu3.p_user":1.0,"cpu3.p_system":1.0}
2017-03-23 11:53:09 -0600 fluent_bit: {"cpu_p":4.75,"user_p":3.5,"system_p":1.25,"cpu0.p_cpu":4.0,"cpu0.p_user":3.0,"cpu0.p_system":1.0,"cpu1.p_cpu":5.0,"cpu1.p_user":4.0,"cpu1.p_system":1.0,"cpu2.p_cpu":3.0,"cpu2.p_user":2.0,"cpu2.p_system":1.0,"cpu3.p_cpu":5.0,"cpu3.p_user":4.0,"cpu3.p_system":1.0}
[SERVICE]
    Flush      5
    Daemon     off
    Log_Level  info

[INPUT]
    Name       cpu
    Tag        cpu_usage

[OUTPUT]
    Name          forward
    Match         *
    Host          127.0.0.1
    Port          24284
    Shared_Key    secret
    Self_Hostname flb.local
    tls           on
    tls.verify    off
<source>
  @type         secure_forward
  self_hostname myserver.local
  shared_key    secret
  secure no
</source>

<match **>
 @type stdout
</match>
<source>
  @type forward
  <transport tls>
    cert_path /etc/td-agent/certs/fluentd.crt
    private_key_path /etc/td-agent/certs/fluentd.key
    private_key_passphrase password
  </transport>
  <security>
    self_hostname myserver.local
    shared_key secret
  </security>
</source>

<match **>
 @type stdout
</match>
$ fluentd -c fld.conf
$ fluent-bit -c flb.conf
2017-03-23 13:34:40 -0600 [info]: using configuration file: <ROOT>
  <source>
    @type secure_forward
    self_hostname myserver.local
    shared_key xxxxxx
    secure no
  </source>
  <match **>
    @type stdout
  </match>
</ROOT>
2017-03-23 13:34:41 -0600 cpu_usage: {"cpu_p":1.0,"user_p":0.75,"system_p":0.25,"cpu0.p_cpu":1.0,"cpu0.p_user":1.0,"cpu0.p_system":0.0,"cpu1.p_cpu":2.0,"cpu1.p_user":1.0,"cpu1.p_system":1.0,"cpu2.p_cpu":1.0,"cpu2.p_user":1.0,"cpu2.p_system":0.0,"cpu3.p_cpu":2.0,"cpu3.p_user":1.0,"cpu3.p_system":1.0}
2017-03-23 13:34:42 -0600 cpu_usage: {"cpu_p":1.75,"user_p":1.75,"system_p":0.0,"cpu0.p_cpu":3.0,"cpu0.p_user":3.0,"cpu0.p_system":0.0,"cpu1.p_cpu":2.0,"cpu1.p_user":2.0,"cpu1.p_system":0.0,"cpu2.p_cpu":0.0,"cpu2.p_user":0.0,"cpu2.p_system":0.0,"cpu3.p_cpu":1.0,"cpu3.p_user":1.0,"cpu3.p_system":0.0}
2017-03-23 13:34:43 -0600 cpu_usage: {"cpu_p":1.75,"user_p":1.25,"system_p":0.5,"cpu0.p_cpu":3.0,"cpu0.p_user":3.0,"cpu0.p_system":0.0,"cpu1.p_cpu":2.0,"cpu1.p_user":2.0,"cpu1.p_system":0.0,"cpu2.p_cpu":0.0,"cpu2.p_user":0.0,"cpu2.p_system":0.0,"cpu3.p_cpu":1.0,"cpu3.p_user":0.0,"cpu3.p_system":1.0}
2017-03-23 13:34:44 -0600 cpu_usage: {"cpu_p":5.0,"user_p":3.25,"system_p":1.75,"cpu0.p_cpu":4.0,"cpu0.p_user":2.0,"cpu0.p_system":2.0,"cpu1.p_cpu":8.0,"cpu1.p_user":5.0,"cpu1.p_system":3.0,"cpu2.p_cpu":4.0,"cpu2.p_user":3.0,"cpu2.p_system":1.0,"cpu3.p_cpu":4.0,"cpu3.p_user":2.0,"cpu3.p_system":2.0}

Key

Description

default

Host

IP address or hostname of the target InfluxDB service

127.0.0.1

Port

TCP port of the target InfluxDB service

8086

Database

InfluxDB database name where records will be inserted

fluentbit

Sequence_Tag

The name of the tag whose value is incremented for the consecutive simultaneous events.

_seq

HTTP_User

Optional username for HTTP Basic Authentication

HTTP_Passwd

Password for user defined in HTTP_User

Tag_Keys

Space separated list of keys that need to be tagged

Auto_Tags

Automatically tag keys where value is string. This option takes a boolean value: True/False, On/Off.

Off

influxdb://host:port
$ fluent-bit -i cpu -t cpu -o influxdb://127.0.0.1:8086 -m '*'
[INPUT]
    Name  cpu
    Tag   cpu

[OUTPUT]
    Name          influxdb
    Match         *
    Host          127.0.0.1
    Port          8086
    Database      fluentbit
    Sequence_Tag  _seq
[INPUT]
    Name            tail
    Tag             apache.access
    parser          apache2
    path            /var/log/apache2/access.log

[OUTPUT]
    Name          influxdb
    Match         *
    Host          127.0.0.1
    Port          8086
    Database      fluentbit
    Sequence_Tag  _seq
    # make tags from method and path fields
    Tag_Keys      method path
$ influx
Visit https://enterprise.influxdata.com to register for updates, InfluxDB server management, and monitoring.
Connected to http://localhost:8086 version 1.1.0
InfluxDB shell version: 1.1.0
>
> create database fluentbit
>
> show databases
name: databases
name
----
_internal
fluentbit

>
$ bin/fluent-bit -i cpu -t cpu -o influxdb -m '*'
> use fluentbit
Using database fluentbit
> SELECT cpu_p, system_p, user_p FROM cpu
name: cpu
time                  cpu_p   system_p    user_p
----                  -----   --------    ------
1481132860000000000   2.75        0.5      2.25
1481132861000000000   2           0.5      1.5
1481132862000000000   4.75        1.5      3.25
1481132863000000000   6.75        1.25     5.5
1481132864000000000   11.25       3.75     7.5
> SELECT * FROM cpu
> SHOW TAG KEYS ON fluentbit FROM "apache.access"
name: apache.access
tagKey
------
_seq
method
path
> SHOW TAG VALUES ON fluentbit FROM "apache.access" WITH KEY = "method"
name: apache.access
key    value
---    -----
method "MATCH"
method "POST"

Null

The null output plugin just throws away events.

Configuration Parameters

The plugin doesn't support configuration parameters.

Getting Started

You can run the plugin from the command line or through the configuration file:

Command Line

From the command line you can let Fluent Bit throw away events with the following options:

$ fluent-bit -i cpu -o null

Configuration File

In your main configuration file append the following Input & Output sections:

[INPUT]
    Name cpu
    Tag  cpu

[OUTPUT]
    Name null
    Match *

Key

Description

Default

Buffer_Chunk_Size

32k

Buffer_Max_Size

Buffer_Chunk_Size

Path

Pattern specifying a specific log file or multiple ones through the use of common wildcards.

Path_Key

If enabled, it appends the name of the monitored file as part of the record. The value assigned becomes the key in the map.

Exclude_Path

Set one or multiple shell patterns separated by commas to exclude files matching certain criteria, e.g: exclude_path=*.gz,*.zip

Refresh_Interval

The interval of refreshing the list of watched files in seconds.

60

Rotate_Wait

Specify the additional time in seconds to monitor a file once it is rotated, in case some pending data needs to be flushed.

5

Ignore_Older

Ignores records which are older than this time in seconds. Supports m,h,d (minutes, hours, days) syntax. Default behavior is to read all records from specified files. Only available when a Parser is specified and it can parse the time of a record.

Skip_Long_Lines

When a monitored file reaches its buffer capacity due to a very long line (Buffer_Max_Size), the default behavior is to stop monitoring that file. Skip_Long_Lines alters that behavior and instructs Fluent Bit to skip long lines and continue processing other lines that fit into the buffer size.

Off

DB

Specify the database file to keep track of monitored files and offsets.

DB.Sync

Full

Mem_Buf_Limit

Set a limit of memory that the Tail plugin can use when appending data to the Engine. If the limit is reached, it will be paused; when the data is flushed it resumes.

Parser

Specify the name of a parser to interpret the entry as a structured message.

Key

When a message is unstructured (no parser applied), it's appended as a string under the key name log. This option allows to define an alternative name for that key.

log

Tag

Set a tag (with regex-extract fields) that will be placed on lines read. E.g. kube.<namespace_name>.<pod_name>.<container_name>

Tag_Regex

Set a regex to extract fields from the file. E.g. (?<pod_name>[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*)_(?<namespace_name>[^_]+)_(?<container_name>.+)-
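A minimal sketch combining the Tag and Tag_Regex options above for Kubernetes-style file names (the path is a placeholder; the regex is the example given above):

[INPUT]
    Name      tail
    Path      /var/log/containers/*.log
    Tag       kube.<namespace_name>.<pod_name>.<container_name>
    Tag_Regex (?<pod_name>[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*)_(?<namespace_name>[^_]+)_(?<container_name>.+)-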

name

title

description

Azure Log Analytics

Ingest records into Azure Log Analytics

BigQuery

Ingest records into Google BigQuery

Count Records

Simple records counter.

Datadog

Ingest logs into Datadog.

Elasticsearch

flush records to a Elasticsearch server.

File

Flush records to a file.

FlowCounter

Count records.

Forward

Fluentd forward protocol.

HTTP

Flush records to an HTTP end point.

InfluxDB

Flush records to InfluxDB time series database.

Apache Kafka

Flush records to Apache Kafka

Kafka REST Proxy

Flush records to a Kafka REST Proxy server.

Google Stackdriver Logging

Flush records to Google Stackdriver Logging service.

Standard Output

Flush records to the standard output.

Splunk

Flush records to a Splunk Enterprise service

TCP & TLS

flush records to a TCP server.

NATS

flush records to a NATS server.

NULL

throw away events.

Key

Description

Default

Buffer_Size

32k

Kube_URL

API Server end-point

Kube_CA_File

CA certificate file

/var/run/secrets/kubernetes.io/serviceaccount/ca.crt

Kube_CA_Path

Absolute path to scan for certificate files

Kube_Token_File

Token file

/var/run/secrets/kubernetes.io/serviceaccount/token

Kube_Tag_Prefix

When the source records come from the Tail input plugin, this option allows specifying the prefix used in the Tail configuration.

kube.var.log.containers.

Merge_Log

When enabled, it checks if the log field content is a JSON string map; if so, it appends the map fields as part of the log structure.

Off

Merge_Log_Key

When Merge_Log is enabled, the filter tries to assume the log field from the incoming message is a JSON string message and makes a structured representation of it at the same level as the log field in the map. If Merge_Log_Key is set (a string name), all the new structured fields taken from the original log content are inserted under the new key.

Merge_Log_Trim

When Merge_Log is enabled, trim (remove possible \n or \r) field values.

On

Merge_Parser

Optional parser name to specify how to parse the data contained in the log key. Recommended use is for developers or testing only.

Keep_Log

When Keep_Log is disabled, the log field is removed from the incoming message once it has been successfully merged (Merge_Log must be enabled as well).

On

tls.debug

Debug level between 0 (nothing) and 4 (every detail).

-1

tls.verify

When enabled, turns on certificate validation when connecting to the Kubernetes API server.

On

Use_Journal

When enabled, the filter reads logs coming in Journald format.

Off

Regex_Parser

K8S-Logging.Parser

Allow Kubernetes Pods to suggest a pre-defined Parser (read more about it in Kubernetes Annotations section)

Off

K8S-Logging.Exclude

Allow Kubernetes Pods to exclude their logs from the log processor (read more about it in Kubernetes Annotations section).

Off

Labels

Include Kubernetes resource labels in the extra metadata.

On

Annotations

Include Kubernetes resource annotations in the extra metadata.

On

Kube_meta_preload_cache_dir

If set, Kubernetes meta-data can be cached/pre-loaded from files in JSON format in this directory, named as namespace-pod.meta

Dummy_Meta

If set, use dummy-meta data (for test/dev purposes)

Off

Key

Description

default

Host

IP address or hostname of the target Elasticsearch instance

127.0.0.1

Port

TCP port of the target Elasticsearch instance

9200

Path

Elasticsearch accepts new data on HTTP query path "/_bulk". But it is also possible to serve Elasticsearch behind a reverse proxy on a subpath. This option defines such path on the fluent-bit side. It simply adds a path prefix in the indexing HTTP POST URI.

Empty string

Buffer_Size

4KB

Pipeline

Newer versions of Elasticsearch allow setting up filters called pipelines. This option defines which pipeline the database should use. For performance reasons it is strongly suggested to do parsing and filtering on the Fluent Bit side and avoid pipelines.

HTTP_User

Optional username credential for Elastic X-Pack access

HTTP_Passwd

Password for user defined in HTTP_User

Index

Index name

fluentbit

Type

Type name

flb_type

Logstash_Format

Enable Logstash format compatibility. This option takes a boolean value: True/False, On/Off

Off

Logstash_Prefix

When Logstash_Format is enabled, the Index name is composed using a prefix and the date, e.g.: if Logstash_Prefix is equal to 'mydata' your index will become 'mydata-YYYY.MM.DD'. The last string appended belongs to the date when the data is being generated.

logstash

Logstash_DateFormat

%Y.%m.%d

Time_Key

When Logstash_Format is enabled, each record will get a new timestamp field. The Time_Key property defines the name of that field.

@timestamp

Time_Key_Format

When Logstash_Format is enabled, this property defines the format of the timestamp.

%Y-%m-%dT%H:%M:%S

Include_Tag_Key

When enabled, it appends the Tag name to the record.

Off

Tag_Key

When Include_Tag_Key is enabled, this property defines the key name for the tag.

_flb-key

Generate_ID

When enabled, generate _id for outgoing records. This prevents duplicate records when retrying ES.

Off

Replace_Dots

When enabled, replace field name dots with underscore, required by Elasticsearch 2.0-2.3.

Off

Trace_Output

When enabled, print the Elasticsearch API calls to stdout (for diagnostics only)

Off

Trace_Error

When enabled, print the Elasticsearch API calls to stdout when Elasticsearch returns an error

Off

Current_Time_Index

Use current time for index generation instead of message record

Off

Logstash_Prefix_Key

Prefix keys with this string
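A minimal sketch enabling Logstash-style index generation with the options above; the host, port and prefix values are placeholders:

[OUTPUT]
    Name            es
    Match           *
    Host            192.168.2.3
    Port            9200
    Logstash_Format On
    Logstash_Prefix mydata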

Key

Description

default

Match

Pattern to match which tags of logs are to be output by this plugin

Host

IP address or hostname of the target Graylog server

127.0.0.1

Port

The port that your Graylog GELF input is listening on

12201

Mode

The protocol to use (tls, tcp or udp)

udp

Gelf_Short_Message_Key

A short descriptive message (MUST be set in GELF)

short_message

Gelf_Timestamp_Key

Your log timestamp (SHOULD be set in GELF)

timestamp

Gelf_Host_Key

Key whose value is used as the name of the host, source or application that sent this message. (MUST be set in GELF)

host

Gelf_Full_Message_Key

Key to use as the long message that can, for example, contain a backtrace. (Optional in GELF)

full_message

Gelf_Level_Key

level

Packet_Size

If transport protocol is udp, you can set the size of packets to be sent.

1420

Compress

If transport protocol is udp, you can set this if you want your UDP packets to be compressed.

true
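A minimal sketch of a UDP-mode GELF output using the defaults listed above; the Graylog host is a placeholder:

[OUTPUT]
    Name         gelf
    Match        *
    Host         graylog.example.com
    Port         12201
    Mode         udp
    Packet_Size  1420
    Compress     true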

Key

Description

Default

Host

Required - The Datadog server where you are sending your logs.

http-intake.logs.datadoghq.com

TLS

Required - End-to-end communications security protocol. Datadog recommends setting this to on.

off

compress

Recommended - Compresses the payload in GZIP format; Datadog supports and recommends setting this to gzip.

apikey

dd_service

Recommended - The human readable name for your service generating the logs - the name of your application or database.

dd_source

Recommended - A human readable name for the underlying technology of your service. For example, postgres or nginx.

dd_tags

Key

Description

default

Host

IP address or hostname of the target HTTP Server

127.0.0.1

HTTP_User

Basic Auth Username

HTTP_Passwd

Basic Auth Password. Requires HTTP_User to be set

Port

TCP port of the target HTTP Server

80

Proxy

URI

Specify an optional HTTP URI for the target web server, e.g: /something

/

Format

Specify the data format to be used in the HTTP request body; by default it uses msgpack. Other supported formats are json, json_stream, json_lines and gelf.

msgpack

header_tag

Specify an optional HTTP header field for the original message tag.

Header

Add an HTTP header key/value pair. Multiple headers can be set.

json_date_key

Specify the name of the date field in output

date

json_date_format

Specify the format of the date. Supported formats are double and iso8601 (eg: 2018-05-30T09:39:52.000681Z)

double

gelf_timestamp_key

Specify the key to use for timestamp in gelf format

gelf_host_key

Specify the key to use for the host in gelf format

gelf_short_message_key

Specify the key to use as the short message in gelf format

gelf_full_message_key

Specify the key to use for the full message in gelf format

gelf_level_key

Specify the key to use for the level in gelf format
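
For reference, a sketch of an HTTP output section posting JSON records to a local web service (the host, port, URI and header values are placeholders):

[OUTPUT]
    Name             http
    Match            *
    Host             127.0.0.1
    Port             80
    URI              /something
    Format           json
    Header           X-Key-A Value_A
    json_date_key    date
    json_date_format iso8601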

Key

Description

Default

Host

Target host where Fluent-Bit or Fluentd are listening for Forward messages.

127.0.0.1

Port

TCP Port of the target service.

24224

Time_as_Integer

Set timestamps in integer format; it enables compatibility mode for the Fluentd v0.12 series.

False

Upstream

If Forward will connect to an Upstream instead of a simple host, this property defines the absolute path for the Upstream configuration file; for more details about this refer to the Upstream Servers documentation section.

Send_options

Always send options (with "size"=count of messages)

False

Require_ack_response

Send "chunk"-option and wait for "ack" response from server. Enables at-least-once and receiving server can control rate of traffic. (Requires Fluentd v0.14.0+ server)

False
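
A minimal sketch of a Forward output section pointing at a Fluentd or Fluent Bit instance (the address is a placeholder):

[OUTPUT]
    Name                 forward
    Match                *
    Host                 127.0.0.1
    Port                 24224
    Require_ack_response False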

Stackdriver

Stackdriver output plugin allows to ingest your records into the Google Cloud Stackdriver Logging service.

Before getting started with the plugin configuration, make sure to obtain the proper credentials to access the service. We strongly recommend using a common JSON credentials file; reference link: Creating a Google Service Account for Stackdriver.

Your goal is to obtain a credentials JSON file that will be used later by Fluent Bit Stackdriver output plugin.

Configuration Parameters

Key

Description

default

google_service_credentials

Absolute path to a Google Cloud credentials JSON file

Value of environment variable $GOOGLE_SERVICE_CREDENTIALS

service_account_email

Account email associated to the service. Only available if no credentials file has been provided.

Value of environment variable $SERVICE_ACCOUNT_EMAIL

service_account_secret

Private key content associated with the service account. Only available if no credentials file has been provided.

Value of environment variable $SERVICE_ACCOUNT_SECRET

resource

Set resource type of data. Only global and gce_instance are supported.

global, gce_instance

Configuration File

If you are using a Google Cloud Credentials File, the following configuration is enough to get started:

[INPUT]
    Name  cpu
    Tag   cpu

[OUTPUT]
    Name        stackdriver
    Match       *
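
If a credentials file is not an option, a sketch using the service account properties instead (the email, secret and resource values below are placeholders):

[OUTPUT]
    Name                   stackdriver
    Match                  *
    service_account_email  fluent-bit@my-project.iam.gserviceaccount.com
    service_account_secret ${SERVICE_ACCOUNT_SECRET}
    resource               gce_instance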

Troubleshooting Notes

Upstream connection error

An upstream connection error means Fluent Bit was not able to reach Google services; the error looks like this:

[2019/01/07 23:24:09] [error] [oauth2] could not get an upstream connection

This usually indicates a network issue in the environment where Fluent Bit is running; make sure that from the Host, Container or Pod you can reach the following Google endpoints:

  • https://www.googleapis.com

  • https://logging.googleapis.com

Github reference: #761

Other implementations

Stackdriver officially supports a logging agent based on Fluentd.

Kafka

Kafka output plugin allows to ingest your records into an Apache Kafka service. This plugin uses the official librdkafka C library (built-in dependency).

Configuration Parameters

Key

Description

default

Format

Specify data format, options available: json, msgpack.

json

Message_Key

Optional key to store the message

Message_Key_Field

If set, the value of Message_Key_Field in the record will be used as the message key. If it is not set or not found in the record, Message_Key will be used (if set).

Timestamp_Key

Set the key to store the record timestamp

@timestamp

Timestamp_Format

'iso8601' or 'double'

double

Brokers

Single entry or list of Kafka Brokers, e.g: 192.168.1.3:9092, 192.168.1.4:9092.

Topics

Single entry or list of topics separated by comma (,) that Fluent Bit will use to send messages to Kafka. If only one topic is set, that one will be used for all records. If multiple topics exist, the one set in the record by Topic_Key will be used.

fluent-bit

Topic_Key

If multiple Topics exist, the value of Topic_Key in the record will indicate the topic to use. E.g: if Topic_Key is router and the record is {"key1": 123, "router": "route_2"}, Fluent Bit will use the topic route_2. Note that if the value of Topic_Key is not present in Topics, then by default the first topic in the Topics list will be used.

rdkafka.{property}

{property} can be any librdkafka property.

Setting rdkafka.log.connection.close to false and rdkafka.request.required.acks to 1 are examples of recommended settings for librdkafka properties.
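
For example, a sketch combining Topic_Key routing with those recommended rdkafka settings (the broker address and topic names are placeholders):

[OUTPUT]
    Name                          kafka
    Match                         *
    Brokers                       192.168.1.3:9092
    Topics                        route_1,route_2
    Topic_Key                     router
    rdkafka.log.connection.close  false
    rdkafka.request.required.acks 1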

Getting Started

In order to insert records into Apache Kafka, you can run the plugin from the command line or through the configuration file:

Command Line

The kafka plugin can read the parameters from the command line through the -p argument (property), e.g:

$ fluent-bit -i cpu -o kafka -p brokers=192.168.1.3:9092 -p topics=test

Configuration File

In your main configuration file append the following Input & Output sections:

[INPUT]
    Name  cpu

[OUTPUT]
    Name        kafka
    Match       *
    Brokers     192.168.1.3:9092
    Topics      test

Kafka REST Proxy

The kafka-rest output plugin allows to flush your records into a Kafka REST Proxy server. The following instructions assume that you have fully operational Kafka REST Proxy and Kafka services running in your environment.

Configuration Parameters

Key

Description

default

Host

IP address or hostname of the target Kafka REST Proxy server

127.0.0.1

Port

TCP port of the target Kafka REST Proxy server

8082

Topic

Set the Kafka topic

fluent-bit

Partition

Set the partition number (optional)

Message_Key

Set a message key (optional)

Time_Key

The Time_Key property defines the name of the field that holds the record timestamp.

@timestamp

Time_Key_Format

Defines the format of the timestamp.

%Y-%m-%dT%H:%M:%S

Include_Tag_Key

Append the Tag name to the final record.

Off

Tag_Key

If Include_Tag_Key is enabled, this property defines the key name for the tag.

_flb-key

TLS / SSL

Kafka REST Proxy output plugin supports TLS/SSL; for more details about the properties available and general configuration, please refer to the TLS/SSL section.

Getting Started

In order to insert records into a Kafka REST Proxy service, you can run the plugin from the command line or through the configuration file:

Command Line

The kafka-rest plugin can read the parameters from the command line through the -p argument (property), e.g:

$ fluent-bit -i cpu -t cpu -o kafka-rest -p host=127.0.0.1 -p port=8082 -m '*'

Configuration File

In your main configuration file append the following Input & Output sections:

[INPUT]
    Name  cpu
    Tag   cpu

[OUTPUT]
    Name        kafka-rest
    Match       *
    Host        127.0.0.1
    Port        8082
    Topic       fluent-bit
    Message_Key my_key

NATS

The nats output plugin allows to flush your records into a NATS Server end point. The following instructions assume that you have a fully operational NATS Server in place.

In order to flush records, the nats plugin needs to know two parameters:

parameter

description

default

host

IP address or hostname of the NATS Server

127.0.0.1

port

TCP port of the target NATS Server

4222

In order to override the default configuration values, the plugin uses the optional Fluent Bit network address format, e.g:

nats://host:port

Running

Fluent Bit only needs to know that it must use the nats output plugin; if no extra information is given, it will use the default values specified in the table above.

$ bin/fluent-bit -i cpu -o nats -V -f 5
Fluent-Bit v0.7.0
Copyright (C) Treasure Data

[2016/03/04 10:17:33] [ info] Configuration
flush time     : 5 seconds
input plugins  : cpu
collectors     :
[2016/03/04 10:17:33] [ info] starting engine
cpu[all] all=3.250000 user=2.500000 system=0.750000
cpu[i=0] all=3.000000 user=1.000000 system=2.000000
cpu[i=1] all=3.000000 user=2.000000 system=1.000000
cpu[i=2] all=2.000000 user=2.000000 system=0.000000
cpu[i=3] all=6.000000 user=5.000000 system=1.000000
[2016/03/04 10:17:33] [debug] [in_cpu] CPU 3.25%
...

As described above, the target service and storage point can be changed through the nats://host:port address format.
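
For instance, a sketch pointing the same pipeline at a NATS server on 192.168.1.4 (placeholder address):

$ bin/fluent-bit -i cpu -o nats://192.168.1.4:4222 -V -f 5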

Data format

For every set of records flushed to a NATS Server, Fluent Bit uses the following JSON format:

[
  [UNIX_TIMESTAMP, JSON_MAP_1],
  [UNIX_TIMESTAMP, JSON_MAP_2],
  [UNIX_TIMESTAMP, JSON_MAP_N],
]

Each record is an individual entity represented in a JSON array that contains a UNIX_TIMESTAMP and a JSON map with a set of key/values. A summarized output of the CPU input plugin will look like this:

[
  [1457108504,{"tag":"fluentbit","cpu_p":1.500000,"user_p":1,"system_p":0.500000}],
  [1457108505,{"tag":"fluentbit","cpu_p":4.500000,"user_p":3,"system_p":1.500000}],
  [1457108506,{"tag":"fluentbit","cpu_p":6.500000,"user_p":4.500000,"system_p":2}]
]

Grep

The Grep Filter plugin allows to match or exclude specific records based on regular expression patterns.

Configuration Parameters

The plugin supports the following configuration parameters:

Key

Value Format

Description

Regex

FIELD REGEX

Keep records whose field matches the regular expression.

Exclude

FIELD REGEX

Exclude records whose field matches the regular expression.

Getting Started

In order to start filtering records, you can run the filter from the command line or through the configuration file. The following example assumes that you have a file called lines.txt with the following content

aaa
aab
bbb
ccc
ddd
eee
fff
ggg

Command Line

Note: using the command line mode requires special attention to quote the regular expressions properly. It's suggested to use a configuration file instead.

The following command will load the tail plugin and read the content of lines.txt file. Then the grep filter will apply a regular expression rule over the log field (created by tail plugin) and only pass the records which field value starts with aa:

$ bin/fluent-bit -i tail -p 'path=lines.txt' -F grep -p 'regex=log aa' -m '*' -o stdout

Configuration File

[INPUT]
    Name   tail
    Path   lines.txt

[FILTER]
    Name   grep
    Match  *
    Regex  log aa

[OUTPUT]
    Name   stdout
    Match  *

The filter allows to use multiple rules which are applied in order; you can have as many Regex and Exclude entries as required.

Nested fields example

Currently nested fields are not supported. If you have records in the following format

{
    "kubernetes": {
        "pod_name": "myapp-0",
        "namespace_name": "default",
        "pod_id": "216cd7ae-1c7e-11e8-bb40-000c298df552",
        "labels": {
            "app": "myapp"
        },
        "host": "minikube",
        "container_name": "myapp",
        "docker_id": "370face382c7603fdd309d8c6aaaf434fd98b92421ce7c7c8aafe7697d4aa362"
    }
}

and if you want to exclude records that match a given nested field (for example kubernetes.labels.app), you could use a combination of nest and grep filters. Here is an example that will exclude records that match kubernetes.labels.app: myapp:
[FILTER]
    Name         nest
    Match        *
    Operation    lift
    Nested_under kubernetes

[FILTER]
    Name         nest
    Match        *
    Operation    lift
    Nested_under labels

[FILTER]
    Name    grep
    Match   *
    Exclude app myapp

Getting Started

name

title

description

Grep

Match or exclude specific records by patterns.

Kubernetes

Enrich logs with Kubernetes Metadata.

Lua

Filter records using Lua Scripts.

Parser

Parse record.

Record Modifier

Modify record.

Stdout

Print records to the standard output interface.

Throttle

Apply rate limit to event flow.

Nest

Nest records under a specified key

Modify

Modifications to record.

Set the initial buffer size to read file data. This value is also used to increase the buffer size. The value must be according to the Unit Size specification.

Set the limit of the buffer size per monitored file. When a buffer needs to be increased (e.g: very long lines), this value is used to restrict how much the memory buffer can grow. If reading a file exceeds this limit, the file is removed from the monitored file list. The value must be according to the Unit Size specification.

Set a default synchronization (I/O) method. Values: Extra, Full, Normal, Off. This flag affects how the internal SQLite engine does synchronization to disk; for more details about each option please refer to this section.

Flush records to the Treasure Data cloud service for analytics.

Set the buffer size for the HTTP client when reading responses from the Kubernetes API server. The value must be according to the Unit Size specification.

Set an alternative Parser to process the record Tag and extract pod_name, namespace_name, container_name and docker_id. The parser must be registered in a parsers file (refer to parser filter-kube-test as an example).

Specify the buffer size used to read the response from the Elasticsearch HTTP service. This option is useful for debugging purposes where it is required to read full responses; note that the response size grows depending on the number of records inserted. To set an unlimited amount of memory set this value to False, otherwise the value must be according to the Unit Size specification.


MQTT

The MQTT input plugin allows to retrieve messages/data from MQTT control packets over a TCP connection. The incoming data must be a JSON map.

Configuration Parameters

The plugin supports the following configuration parameters:

Key

Description

Listen

Listener network interface, default: 0.0.0.0

Port

TCP port where listening for connections, default: 1883

Getting Started

In order to start listening for MQTT messages, you can run the plugin from the command line or through the configuration file:

Command Line

Since the MQTT input plugin lets Fluent Bit behave as a server, we need to dispatch some messages using an MQTT client; in the following example the mosquitto tool is used for this purpose:

$ fluent-bit -i mqtt -t data -o stdout -m '*'
Fluent-Bit v0.8.0
Copyright (C) Treasure Data

[2016/05/20 14:22:52] [ info] starting engine
[0] data: [1463775773, {"topic"=>"some/topic", "key1"=>123, "key2"=>456}]

The following command line will send a message to the MQTT input plugin:

$ mosquitto_pub  -m '{"key1": 123, "key2": 456}' -t some/topic

Configuration File

In your main configuration file append the following Input & Output sections:

[INPUT]
    Name   mqtt
    Tag    data
    Listen 0.0.0.0
    Port   1883

[OUTPUT]
    Name   stdout
    Match  *

Fluent Bit Workflow

Fluent Bit is a straightforward tool and to get started with it we need to understand its basic workflow. Consider the following diagram a global overview of it:

Interface

Description

Input

Entry point of data. Implemented through Input Plugins, this interface allows to gather or receive data, e.g: log file content, data over TCP, built-in metrics, etc.

Parser

Parsers allow to convert unstructured data gathered from the Input interface into a structured one. Parsers are optional and depend on the Input plugins.

Filter

The filtering mechanism allows to alter the data ingested by the Input plugins. Filters are implemented as plugins.

Buffer

By default, the data ingested by the Input plugins resides in memory until it is routed and delivered to an Output interface.

Routing

Data ingested by an Input interface is tagged, meaning that a Tag is assigned and this one is used to determine where the data should be routed based on a match rule.

Output

An output defines a destination for the data. Destinations are handled by output plugins. Note that thanks to the Routing interface, the data can be delivered to multiple destinations.

Collectd

The collectd input plugin allows you to receive datagrams from collectd.

Content:

  • Configuration Parameters

  • Configuration Examples

Configuration Parameters

The plugin supports the following configuration parameters:

Key

Description

Default

Address

Set the address to listen to

0.0.0.0

Port

Set the port to listen to

25826

TypesDB

Set the data specification file

/usr/share/collectd/types.db

Configuration Examples

Here is a basic configuration example.

[INPUT]
    Name         collectd
    Address      0.0.0.0
    Port         25826
    TypesDB      /usr/share/collectd/types.db,/etc/collectd/custom.db

[OUTPUT]
    Name   stdout
    Match  *

With this configuration, fluent-bit listens to 0.0.0.0:25826, and outputs incoming datagram packets to stdout.

You must set the same types.db files that your collectd server uses. Otherwise, fluent-bit may not be able to interpret the payload properly.

Configuration Commands

Configuration files must be flexible enough for any deployment need, but they must keep a clean and readable format.

Fluent Bit Commands extend a configuration file with specific built-in features. The list of commands available as of the Fluent Bit 0.12 series is:

Command

Prototype

Description

@INCLUDE

@INCLUDE FILE

Include a configuration file

@SET

@SET KEY=VAL

Set a configuration variable

@INCLUDE Command

Configuring a logging pipeline might lead to an extensive configuration file. In order to maintain a human-readable configuration, it's suggested to split the configuration in multiple files.

The @INCLUDE command allows the configuration reader to include an external configuration file, e.g:

[SERVICE]
    Flush 1

@INCLUDE inputs.conf
@INCLUDE outputs.conf

The above example defines the main service configuration file and also includes two files to continue the configuration:

inputs.conf

[INPUT]
    Name cpu
    Tag  mycpu

[INPUT]
    Name tail
    Path /var/log/*.log
    Tag  varlog.*

outputs.conf

[OUTPUT]
    Name   stdout
    Match  mycpu

[OUTPUT]
    Name            es
    Match           varlog.*
    Host            127.0.0.1
    Port            9200
    Logstash_Format On

Note that despite the order of inclusion, Fluent Bit will ALWAYS respect the following order:

  • Service

  • Inputs

  • Filters

  • Outputs

@SET Command

The @SET command can only be used at root level of each line, meaning it cannot be used inside a section, e.g:

@SET my_input=cpu
@SET my_output=stdout

[SERVICE]
    Flush 1

[INPUT]
    Name ${my_input}

[OUTPUT]
    Name ${my_output}

Fluent Bit supports configuration variables. One way to expose these variables to Fluent Bit is by setting a Shell environment variable; the other is through the @SET command.
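
As a sketch of the shell alternative, the variables from the example above could be exported before starting Fluent Bit (the configuration file name is a placeholder):

$ export my_input=cpu
$ export my_output=stdout
$ fluent-bit -c fluent-bit.conf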


Standard Output

The Standard Output Filter plugin allows to print to the standard output the data received through the input plugin.

Configuration Parameters

There are no parameters.

Getting Started

In order to start filtering records, you can run the filter from the command line or through the configuration file.

Command Line

$ fluent-bit -i cpu -t cpu.local -F stdout -m '*' -o null -m '*'

Configuration File

In your main configuration file append the following FILTER sections:

[INPUT]
    Name cpu
    Tag  cpu.local

[FILTER]
    Name  stdout
    Match *

[OUTPUT]
    Name  null
    Match *

Regular Expression Parser

The regex parser allows to define a custom Ruby Regular Expression that will use a named capture feature to define which content belongs to which key name.

Note: understanding how regular expressions works is out of the scope of this content.

From a configuration perspective, when the format is set to regex, it is mandatory and expected that a Regex configuration key exists.

The following parser configuration example aims to provide rules that can be applied to an Apache HTTP Server log entry:

[PARSER]
    Name   apache
    Format regex
    Regex  ^(?<host>[^ ]*) [^ ]* (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*?)(?: +\S*)?)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$
    Time_Key time
    Time_Format %d/%b/%Y:%H:%M:%S %z

As an example, take the following Apache HTTP Server log entry:

192.168.2.20 - - [29/Jul/2015:10:27:10 -0300] "GET /cgi-bin/try/ HTTP/1.0" 200 3395

The above content does not provide a defined structure for Fluent Bit, but by enabling the proper parser we can obtain a structured representation of it:

[1154104030, {"host"=>"192.168.2.20",
              "user"=>"-",
              "method"=>"GET",
              "path"=>"/cgi-bin/try/",
              "code"=>"200",
              "size"=>"3395",
              "referer"=>"",
              "agent"=>""
              }
]

A common pitfall is that you cannot use characters other than letters, numbers and underscores in group names. For example, a group name like (?<user-name>.*) will cause an error because it contains an invalid character (-).
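
For example, writing the capture group with an underscore instead of a dash keeps the name valid:

(?<user_name>.*)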

Fluent Bit uses the Onigmo regular expression library in Ruby mode. For testing purposes you can use the following web editor to test your expressions: http://rubular.com/

Important: do not attempt to add multiline support in your regular expressions if you are using the Tail input plugin, since each line is handled as a separate entity. Instead use the Tail Multiline support configuration feature.

In order to understand, learn and test regular expressions like the example above, we suggest you try the following Ruby Regular Expression Editor: http://rubular.com/r/X7BH0M4Ivm
