Kafka
The Kafka output plugin allows you to ingest your records into an Apache Kafka service. This plugin uses the official librdkafka C library (built-in dependency).
Setting rdkafka.log.connection.close to false and rdkafka.request.required.acks to 1 are examples of recommended settings for librdkafka properties.
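As a sketch, these properties can be set directly in the output section of the configuration file (the broker address and topic name below are placeholders):

```
[OUTPUT]
    Name                           kafka
    Match                          *
    Brokers                        192.168.1.3:9092
    Topics                         test
    rdkafka.log.connection.close   false
    rdkafka.request.required.acks  1
```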
In order to insert records into Apache Kafka, you can run the plugin from the command line or through the configuration file:
The kafka plugin can read its parameters from the command line through the -p argument (property), e.g:
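A minimal sketch, assuming the cpu input plugin and a single broker reachable at 192.168.1.3:9092 with a topic named test (adjust to your environment):

```
fluent-bit -i cpu -o kafka -p brokers=192.168.1.3:9092 -p topics=test
```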
In your main configuration file append the following Input & Output sections:
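For example, a minimal sketch equivalent to the command line above (broker address and topic name are placeholders):

```
[INPUT]
    Name  cpu

[OUTPUT]
    Name     kafka
    Match    *
    Brokers  192.168.1.3:9092
    Topics   test
```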
| Key | Description | Default |
| :--- | :--- | :--- |
| Format | Specify data format, options available: json, msgpack. | json |
| Message_Key | Optional key to store the message. | |
| Timestamp_Key | Set the key to store the record timestamp. | @timestamp |
| Timestamp_Format | 'iso8601' or 'double'. | double |
| Brokers | Single or multiple list of Kafka Brokers, e.g: 192.168.1.3:9092, 192.168.1.4:9092. | |
| Topics | Single entry or list of topics separated by comma (,) that Fluent Bit will use to send messages to Kafka. If only one topic is set, that one will be used for all records. If multiple topics exist, the one set in the record by Topic_Key will be used. | fluent-bit |
| Topic_Key | If multiple Topics exist, the value of Topic_Key in the record will indicate the topic to use. E.g: if Topic_Key is router and the record is {"key1": 123, "router": "route_2"}, Fluent Bit will use the topic route_2. Note that the topic must be registered in the Topics list. | |
| rdkafka.{property} | {property} can be any librdkafka property. | |
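To illustrate topic routing with Topic_Key, a sketch with two registered topics (topic names and the router key are illustrative):

```
[OUTPUT]
    Name       kafka
    Match      *
    Brokers    192.168.1.3:9092
    Topics     route_1,route_2
    Topic_Key  router
```

With this configuration, a record such as {"key1": 123, "router": "route_2"} would be delivered to the route_2 topic.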