InfluxDB

The influxdb output plugin allows you to flush your records into an InfluxDB time series database. The following instructions assume that you have a fully operational InfluxDB service running in your system.

Configuration Parameters

| Key | Description | Default |
| --- | --- | --- |
| Host | IP address or hostname of the target InfluxDB service | 127.0.0.1 |
| Port | TCP port of the target InfluxDB service | 8086 |
| Database | InfluxDB database name where records will be inserted | fluentbit |
| Bucket | InfluxDB bucket name where records will be inserted. If specified, Database is ignored and the v2 API is used | |
| Org | InfluxDB organization name that the bucket belongs to (v2 only) | fluent |
| Sequence_Tag | The name of the tag whose value is incremented for consecutive simultaneous events | _seq |
| HTTP_User | Optional username for HTTP Basic Authentication | |
| HTTP_Passwd | Password for the user defined in HTTP_User | |
| HTTP_Token | Authentication token used with InfluxDB v2. If specified, both HTTP_User and HTTP_Passwd are ignored | |
| HTTP_Header | Add a HTTP header key/value pair. Multiple headers can be set | |
| Tag_Keys | Space-separated list of keys that need to be tagged | |
| Auto_Tags | Automatically tag keys whose value is a string. This option takes a boolean value: True/False, On/Off | Off |
| Uri | Custom URI endpoint | |
| Workers | The number of workers to perform flush operations for this output | 0 |

TLS / SSL

The InfluxDB output plugin supports TLS/SSL. For more details about the available properties and general configuration, please refer to the TLS/SSL section.
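
For example, a minimal sketch of an output section with TLS enabled might look like the following; the hostname influxdb.example.com and the CA bundle path are placeholders for your own environment:

[OUTPUT]
    Name         influxdb
    Match        *
    Host         influxdb.example.com
    Port         8086
    Database     fluentbit
    # enable TLS and verify the server certificate
    tls          On
    tls.verify   On
    # CA bundle used to validate the server certificate (placeholder path)
    tls.ca_file  /etc/ssl/certs/ca-certificates.crt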

Getting Started

In order to start inserting records into an InfluxDB service, you can run the plugin from the command line or through the configuration file:

Command Line

The influxdb plugin can read the parameters from the command line in two ways: through the -p argument (property) or by setting them directly in the service URI. The URI format is the following:

influxdb://host:port

Using the format specified, you could start Fluent Bit through:

$ fluent-bit -i cpu -t cpu -o influxdb://127.0.0.1:8086 -m '*'
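
For reference, the same connection could also be expressed with -p properties instead of the URI, for example:

$ fluent-bit -i cpu -t cpu -o influxdb -p host=127.0.0.1 -p port=8086 -p database=fluentbit -m '*'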

Configuration File

In your main configuration file append the following Input & Output sections:

[INPUT]
    Name  cpu
    Tag   cpu

[OUTPUT]
    Name          influxdb
    Match         *
    Host          127.0.0.1
    Port          8086
    Database      fluentbit
    Sequence_Tag  _seq

Tagging

Basic example of Tag_Keys usage:

[INPUT]
    Name            tail
    Tag             apache.access
    Parser          apache2
    Path            /var/log/apache2/access.log

[OUTPUT]
    Name          influxdb
    Match         *
    Host          127.0.0.1
    Port          8086
    Database      fluentbit
    Sequence_Tag  _seq
    # make tags from method and path fields
    Tag_Keys      method path

Setting Auto_Tags=On in this example would cause an error, because every parsed field value is a string. This option is best used with metric-like records where one or more field values are not strings. See the sketch below.
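
As an illustration of where Auto_Tags fits better, consider a record mixing string and numeric values (the dummy message and field names below are hypothetical); only the string fields would be turned into tags:

[INPUT]
    Name       dummy
    Tag        app.metrics
    # string fields (host, region) become tags, numeric fields stay as values
    Dummy      {"host": "web-01", "region": "eu", "latency_ms": 12.5, "requests": 42}

[OUTPUT]
    Name       influxdb
    Match      *
    Host       127.0.0.1
    Port       8086
    Database   fluentbit
    Auto_Tags  On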

Basic example of Tags_List_Key usage:

[INPUT]
    Name              dummy
    # tagged fields: level, ID, businessObjectID, status
    Dummy             {"msg": "Transfer completed", "level": "info", "ID": "1234", "businessObjectID": "qwerty", "status": "OK", "tags": ["ID", "businessObjectID"]}

[OUTPUT]
    Name          influxdb
    Match         *
    Host          127.0.0.1
    Port          8086
    Bucket        My_Bucket
    Org           My_Org
    Sequence_Tag  _seq
    HTTP_Token    My_Token
    # tag all fields inside tags string array
    Tags_List_Enabled True
    Tags_List_Key tags
    # tag level, status fields
    Tag_Keys level status

Testing

Before starting Fluent Bit, make sure the target database exists on InfluxDB. Using the above example, we will insert the data into a fluentbit database.

1. Create database

Log into InfluxDB console:

$ influx
Visit https://enterprise.influxdata.com to register for updates, InfluxDB server management, and monitoring.
Connected to http://localhost:8086 version 1.1.0
InfluxDB shell version: 1.1.0
>

Create the database:

> create database fluentbit
>

Check the database exists:

> show databases
name: databases
name
----
_internal
fluentbit

>

2. Run Fluent Bit

The following command will gather CPU metrics from the system and send the data to the InfluxDB database every five seconds:

$ bin/fluent-bit -i cpu -t cpu -o influxdb -m '*'

Note that all records coming from the cpu input plugin have the tag cpu; this tag is used to generate the measurement name in InfluxDB.
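
As an illustration, running the same pipeline with a different tag (the name server_metrics below is arbitrary) would store the records in a measurement of that name:

$ bin/fluent-bit -i cpu -t server_metrics -o influxdb -m '*'

The records could then be queried with SELECT * FROM server_metrics from the InfluxDB console.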

3. Query the data

From InfluxDB console, choose your database:

> use fluentbit
Using database fluentbit

Now query some specific fields:

> SELECT cpu_p, system_p, user_p FROM cpu
name: cpu
time                  cpu_p   system_p    user_p
----                  -----   --------    ------
1481132860000000000   2.75        0.5      2.25
1481132861000000000   2           0.5      1.5
1481132862000000000   4.75        1.5      3.25
1481132863000000000   6.75        1.25     5.5
1481132864000000000   11.25       3.75     7.5

The CPU input plugin gathers more metrics per CPU core; in the above example we just selected three specific metrics. The following query will give the full result:

> SELECT * FROM cpu

4. View tags

Query tagged keys:

> SHOW TAG KEYS ON fluentbit FROM "apache.access"
name: apache.access
tagKey
------
_seq
method
path

And now query method key values:

> SHOW TAG VALUES ON fluentbit FROM "apache.access" WITH KEY = "method"
name: apache.access
key    value
---    -----
method "MATCH"
method "POST"
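
Since tagged fields are indexed, they can also be used to filter queries; for example, a sketch of selecting only POST requests from the apache.access measurement:

> SELECT * FROM "apache.access" WHERE "method" = 'POST'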
