Splunk

The Splunk output plugin allows you to ingest your records into a Splunk Enterprise service through the HTTP Event Collector (HEC) interface.

To get more details about how to set up the HEC in Splunk, please refer to the following documentation: Splunk / Use the HTTP Event Collector (http://docs.splunk.com/Documentation/Splunk/latest/Data/AboutHEC).
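One quick way to confirm that the HEC endpoint is reachable and the token is accepted is a manual request with curl. The sketch below assumes the default HEC port (8088), a self-signed certificate (hence -k), and a placeholder token; adjust host, port and token for your deployment:

$ curl -k https://127.0.0.1:8088/services/collector/event \
    -H "Authorization: Splunk <your-hec-token>" \
    -d '{"event": "Fluent Bit connectivity test"}'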

Configuration Parameters

| Key             | Description                                                                                                   | Default   |
| --------------- | ------------------------------------------------------------------------------------------------------------- | --------- |
| Host            | IP address or hostname of the target Splunk service.                                                            | 127.0.0.1 |
| Port            | TCP port of the target Splunk service.                                                                          | 8088      |
| Splunk_Token    | Specify the Authentication Token for the HTTP Event Collector interface.                                         |           |
| Splunk_Send_Raw | When enabled, the record keys and values are set in the top level of the map instead of under the event key.    | Off       |
| HTTP_User       | Optional username for Basic Authentication on HEC.                                                              |           |
| HTTP_Passwd     | Password for the user defined in HTTP_User.                                                                      |           |
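As a minimal illustration of these keys, the [OUTPUT] section below authenticates with a token. The host, token, and Basic Authentication credentials are placeholders; the HTTP_User/HTTP_Passwd pair is only needed if your HEC deployment requires it:

[OUTPUT]
    Name         splunk
    Match        *
    # Placeholder host and token, replace with your own values
    Host         splunk.example.com
    Port         8088
    Splunk_Token xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx
    # Optional: enable only if the HEC endpoint also requires Basic Authentication
    # HTTP_User    hec-user
    # HTTP_Passwd  hec-password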

TLS / SSL

The Splunk output plugin supports TLS/SSL. For more details about the available properties and general configuration, please refer to the TLS/SSL section.
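For instance, TLS can be enabled directly on the output. The sketch below uses the standard Fluent Bit TLS keys; the CA bundle path is a placeholder:

[OUTPUT]
    Name        splunk
    Match       *
    Host        127.0.0.1
    Port        8088
    TLS         On
    TLS.Verify  On
    # Path to the CA bundle used to verify the Splunk certificate (placeholder)
    TLS.CA_File /etc/ssl/certs/splunk-ca.pem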

Getting Started

In order to insert records into a Splunk service, you can run the plugin from the command line or through the configuration file:

Command Line

The splunk plugin can read its parameters from the command line through the -p argument (property), for example:

$ fluent-bit -i cpu -t cpu -o splunk -p host=127.0.0.1 -p port=8088 \
  -p tls=on -p tls.verify=off -m '*'

Configuration File

In your main configuration file append the following Input & Output sections:

[INPUT]
    Name  cpu
    Tag   cpu

[OUTPUT]
    Name        splunk
    Match       *
    Host        127.0.0.1
    Port        8088
    TLS         On
    TLS.Verify  Off
    Message_Key my_key

Data format

By default, the Splunk output plugin nests the record under the event key in the payload sent to the HEC. It will also append the time of the record to a top level time key.
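For example, a CPU metrics record delivered with the default settings would produce a payload roughly like the following (timestamp and values are illustrative):

{
    "time": "1535995058.003385189",
    "event": {
        "cpu_p": 0.000000,
        "user_p": 0.000000,
        "system_p": 0.000000
    }
}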

If you would like to customize any of the Splunk event metadata, such as the host or target index, you can set Splunk_Send_Raw On in the plugin configuration, and add the metadata as keys/values in the record. Note: with Splunk_Send_Raw enabled, you are responsible for creating and populating the event section of the payload.

For example, to add a custom index and hostname:

[INPUT]
    Name  cpu
    Tag   cpu

# nest the record under the 'event' key
[FILTER]
    Name nest
    Match *
    Operation nest
    Wildcard *
    Nest_under event

# add event metadata
[FILTER]
    Name      modify
    Match     *
    Add index my-splunk-index
    Add host  my-host

[OUTPUT]
    Name        splunk
    Match       *
    Host        127.0.0.1
    Splunk_Token xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx
    Splunk_Send_Raw On

This will create a payload that looks like:

{
    "time": "1535995058.003385189",
    "index": "my-splunk-index",
    "host": "my-host",
    "event": {
        "cpu_p":0.000000,
        "user_p":0.000000,
        "system_p":0.000000
    }
}

For more information on the Splunk HEC payload format and all the event metadata Splunk accepts, see the Splunk HEC documentation (http://docs.splunk.com/Documentation/Splunk/latest/Data/AboutHEC).