Learn how to monitor your Fluent Bit data pipelines
Fluent Bit comes with built-in features that allow you to monitor the internals of your pipeline, integrate with Prometheus and Grafana, run health checks, and connect to external services for these purposes:
Fluent Bit comes with a built-in HTTP Server that can be used to query internal information and monitor metrics of each running plugin.
The monitoring interface can be easily integrated with Prometheus, since we support its native text format.
To get started, the first step is to enable the HTTP server in the configuration file:
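A minimal example in the classic configuration format (the `cpu` input and `stdout` output below are only for illustration):

```
[SERVICE]
    HTTP_Server  On
    HTTP_Listen  0.0.0.0
    HTTP_Port    2020

[INPUT]
    Name  cpu

[OUTPUT]
    Name   stdout
    Match  *
```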
The above configuration snippet instructs Fluent Bit to start its HTTP server on TCP port 2020, listening on all network interfaces.
Now a simple curl command is enough to gather some information:
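```bash
# assumes the HTTP server enabled above, reachable locally on port 2020
curl -s http://127.0.0.1:2020 | jq
```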
Note that the curl command output is piped to the jq program, which makes the JSON data easier to read in the terminal. Fluent Bit doesn't aim to do JSON pretty-printing.
Fluent Bit aims to expose useful interfaces for monitoring. As of Fluent Bit v0.14, several endpoints are available; they are summarized in the endpoint table at the end of this page.
Query the service uptime with the following command:
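```bash
curl -s http://127.0.0.1:2020/api/v1/uptime | jq
```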
It should print output similar to this (values are illustrative):
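```json
{
  "uptime_sec": 8950000,
  "uptime_hr": "Fluent Bit has been running: 103 days, 14 hours, 6 minutes and 40 seconds"
}
```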
Query internal metrics in JSON format with the following command:
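```bash
curl -s http://127.0.0.1:2020/api/v1/metrics | jq
```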
It should print output similar to this (the values below are illustrative, for the `cpu` → `stdout` pipeline configured above):
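```json
{
  "input": {
    "cpu.0": {
      "records": 8,
      "bytes": 2536
    }
  },
  "output": {
    "stdout.0": {
      "proc_records": 5,
      "proc_bytes": 1585,
      "errors": 0,
      "retries": 0,
      "retries_failed": 0
    }
  }
}
```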
Query internal metrics in Prometheus Text 0.0.4 format:
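```bash
curl -s http://127.0.0.1:2020/api/v1/metrics/prometheus
```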
This time the same metrics are returned in Prometheus format instead of JSON; the output will look similar to this (values and timestamps are illustrative):
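```text
fluentbit_input_records_total{name="cpu.0"} 57 1509150350542
fluentbit_input_bytes_total{name="cpu.0"} 18069 1509150350542
fluentbit_output_proc_records_total{name="stdout.0"} 54 1509150350542
fluentbit_output_proc_bytes_total{name="stdout.0"} 17118 1509150350542
fluentbit_output_errors_total{name="stdout.0"} 0 1509150350542
fluentbit_output_retries_total{name="stdout.0"} 0 1509150350542
fluentbit_output_retries_failed_total{name="stdout.0"} 0 1509150350542
```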
By default, configured plugins get an internal name at runtime in the format `plugin_name.ID`. For monitoring purposes, this can be confusing if many plugins of the same type are configured. To distinguish them, each configured input or output section can be given an alias that will be used as the parent name for the metric.
The following example sets an alias on the INPUT section, which uses the CPU input plugin (the alias names below are only illustrative):
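```
[SERVICE]
    HTTP_Server  On
    HTTP_Listen  0.0.0.0
    HTTP_Port    2020

[INPUT]
    Name   cpu
    # illustrative alias name
    Alias  server1_cpu

[OUTPUT]
    Name   stdout
    # illustrative alias name
    Alias  raw_output
    Match  *
```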
Now when querying the metrics, the aliases appear in place of the plugin names (illustrative output):
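```json
{
  "input": {
    "server1_cpu": {
      "records": 8,
      "bytes": 2536
    }
  },
  "output": {
    "raw_output": {
      "proc_records": 5,
      "proc_bytes": 1585,
      "errors": 0,
      "retries": 0,
      "retries_failed": 0
    }
  }
}
```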
Fluent Bit's exposed Prometheus-style metrics can be leveraged to create dashboards and alerts.
The provided example dashboard is heavily inspired by Banzai Cloud's logging operator dashboard, but with a few key differences such as the use of the `instance` label (see why here), stacked graphs and a focus on Fluent Bit metrics.
Sample alerts are available here.
Fluent Bit supports four configuration properties to set up the health check; they are described in the configuration table at the end of this page.
Note: not every error log line counts as an error. Only specific errors and retry failures are counted, as shown by the example log lines in the configuration table descriptions.
The feature works as follows: within the configured HC_Period, if the number of errors exceeds HC_Errors_Count or the number of retry failures exceeds HC_Retry_Failure_Count, Fluent Bit is considered unhealthy and the health endpoint returns HTTP status 500 with the string `error`. Otherwise it is healthy and the endpoint returns HTTP status 200 with the string `ok`.
The equation is:
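```text
unhealthy = (error count in HC_Period > HC_Errors_Count) OR (retry failure count in HC_Period > HC_Retry_Failure_Count)
healthy   = NOT unhealthy
```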
Note: HC_Errors_Count and HC_Retry_Failure_Count apply only to output plugins, and each is a sum of the errors or retry failures across all running output plugins.
See the config example:
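A sketch of a SERVICE section that enables the HTTP server and the health check, using the thresholds from the example below:

```
[SERVICE]
    HTTP_Server  On
    HTTP_Listen  0.0.0.0
    HTTP_Port    2020
    # health check thresholds used in the example below
    Health_Check On
    HC_Errors_Count 5
    HC_Retry_Failure_Count 5
    HC_Period 5
```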
The command to call the health endpoint:
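```bash
curl -s http://127.0.0.1:2020/api/v1/health
```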
Based on the Fluent Bit status, the result will be:

* HTTP status 200 and the string `ok` for a healthy status
* HTTP status 500 and the string `error` for an unhealthy status
With the example configuration, the health status is determined by the following equation:
If (HC_Errors_Count > 5) OR (HC_Retry_Failure_Count > 5) IN 5 seconds is TRUE, then it's unhealthy.
If (HC_Errors_Count > 5) OR (HC_Retry_Failure_Count > 5) IN 5 seconds is FALSE, then it's healthy.
Calyptia Cloud is a hosted service that allows you to monitor your Fluent Bit agents including data flow, metrics and configurations.
Registering your Fluent Bit agent takes less than one minute:

1. Go to cloud.calyptia.com and sign in.
2. In the left menu, click Settings and generate/copy your API key.
3. In your Fluent Bit configuration file, append the following configuration section:
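A sketch of the section to append, assuming the Calyptia custom plugin shipped with recent Fluent Bit versions; the placeholder must be replaced with the API key generated in step 2:

```
[CUSTOM]
    # replace the placeholder below with your Calyptia Cloud API key
    Name     calyptia
    api_key  <YOUR_API_KEY>
```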
Make sure to replace the API key in the configuration. A few seconds after restarting your Fluent Bit agent, the Calyptia Cloud dashboard will list your agent. Metrics take around 30 seconds to show up.
If you want to get in touch with the Calyptia team, send an email to hello@calyptia.com.
Endpoints exposed by the built-in HTTP server:

| URI | Description | Data Format |
| :--- | :--- | :--- |
| / | Fluent Bit build information | JSON |
| /api/v1/uptime | Get uptime information in seconds and human-readable format | JSON |
| /api/v1/metrics | Internal metrics per loaded plugin | JSON |
| /api/v1/metrics/prometheus | Internal metrics per loaded plugin, ready to be consumed by a Prometheus server | Prometheus Text 0.0.4 |
| /api/v1/storage | Get internal metrics of the storage layer / buffered data. This option is enabled only if the `storage.metrics` property has been enabled in the `SERVICE` section | JSON |
| /api/v1/health | Fluent Bit health check result | String |

Health check configuration properties:

| Config Name | Description | Default Value |
| :--- | :--- | :--- |
| Health_Check | Enable the health check feature | Off |
| HC_Errors_Count | The error count needed to meet the unhealthy condition, summed across all running output plugins within a defined HC_Period. Example of a counted output error: `[2022/02/16 10:44:10] [ warn] [engine] failed to flush chunk '1-1645008245.491540684.flb', retry in 7 seconds: task_id=0, input=forward.1 > output=cloudwatch_logs.3 (out_id=3)` | 5 |
| HC_Retry_Failure_Count | The retry failure count needed to meet the unhealthy condition, summed across all running output plugins within a defined HC_Period. Example of a counted retry failure: `[2022/02/16 20:11:36] [ warn] [engine] chunk '1-1645042288.260516436.flb' cannot be retried: task_id=0, input=tcp.3 > output=cloudwatch_logs.1` | 5 |
| HC_Period | The time period, in seconds, over which errors and retry failures are counted | 60 |