OpenTelemetry

An input plugin to ingest OTLP Logs, Metrics, and Traces

The OpenTelemetry input plugin allows you to receive data conforming to the OTLP specification from various OpenTelemetry exporters, the OpenTelemetry Collector, or Fluent Bit's OpenTelemetry output plugin.

Fluent Bit's compliant implementation fully supports OTLP/HTTP and OTLP/GRPC. Note that a single configured port, which defaults to 4318, serves both transports.

Configuration

| Key | Description | Default |
| --- | ----------- | ------- |
| listen | The network address to listen on. | 0.0.0.0 |
| port | The port for Fluent Bit to listen on for incoming connections. As of Fluent Bit v3.0.2, this port serves both the OTLP/HTTP and OTLP/GRPC transports. | 4318 |
| tag_key | Specify the key name to overwrite a tag. If set, the tag will be overwritten by a value of the key. | |
| raw_traces | Route trace data as a log message. | false |
| buffer_max_size | Specify the maximum buffer size in KB, MB, or GB for the HTTP payload. | 4M |
| buffer_chunk_size | Initial size and allocation strategy to store the payload (advanced users only). | 512K |
| successful_response_code | Allows setting the successful response code. 200, 201, and 204 are supported. | 201 |
| tag_from_uri | If true, the tag will be created from the URI, e.g. v1_metrics from /v1/metrics. | true |

Important note: raw traces means that any data forwarded to the traces endpoint (/v1/traces) will be packed and forwarded as a log message and will NOT be processed by Fluent Bit. By default, the traces endpoint expects a valid protobuf-encoded payload; set the raw_traces option if you want to pass trace telemetry data through to any of Fluent Bit's supported outputs.
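For example, a minimal sketch of a pipeline that enables raw_traces, so payloads posted to /v1/traces are forwarded verbatim as log records to stdout (the listen address and port are simply the documented defaults):

```yaml
pipeline:
    inputs:
        - name: opentelemetry
          listen: 0.0.0.0
          port: 4318
          raw_traces: true    # pack trace payloads as log messages instead of parsing them
    outputs:
        - name: stdout
          match: '*'
```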

OTLP Transport Protocol Endpoints

Depending on the desired OTLP protocol, Fluent Bit exposes the following endpoints for data ingestion:

OTLP/HTTP

  • Logs

    • /v1/logs

  • Metrics

    • /v1/metrics

  • Traces

    • /v1/traces

OTLP/GRPC

  • Logs

    • /opentelemetry.proto.collector.log.v1.LogService/Export

    • /opentelemetry.proto.collector.logs.v1.LogsService/Export

  • Metrics

    • /opentelemetry.proto.collector.metric.v1.MetricService/Export

    • /opentelemetry.proto.collector.metrics.v1.MetricsService/Export

  • Traces

    • /opentelemetry.proto.collector.trace.v1.TraceService/Export

    • /opentelemetry.proto.collector.traces.v1.TracesService/Export
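To illustrate how a client targets these endpoints, here is a hedged sketch of an OpenTelemetry Collector exporters section. This configuration belongs to the Collector, not Fluent Bit, and assumes Fluent Bit is reachable locally on the default port 4318:

```yaml
# OpenTelemetry Collector exporters pointing at a local Fluent Bit
# opentelemetry input (localhost:4318 assumed).
exporters:
  otlphttp:
    endpoint: http://localhost:4318   # Collector appends /v1/logs, /v1/metrics, /v1/traces
  otlp:                               # OTLP/GRPC variant, using the Export services above
    endpoint: localhost:4318
    tls:
      insecure: true                  # plaintext, for local testing only
```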

Getting started

The OpenTelemetry plugin currently supports the following telemetry data types:

| Type | HTTP/JSON | HTTP/Protobuf |
| ------- | ------------- | ------------- |
| Logs | Stable | Stable |
| Metrics | Unimplemented | Stable |
| Traces | Unimplemented | Stable |
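Since HTTP/JSON support for metrics and traces is unimplemented, those signals must be delivered as protobuf payloads over OTLP/HTTP or via OTLP/GRPC; only logs can currently be posted as JSON.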

A sample configuration file to get started looks like the following, in YAML and classic formats:

In YAML format:

```yaml
pipeline:
    inputs:
        - name: opentelemetry
          listen: 127.0.0.1
          port: 4318
    outputs:
        - name: stdout
          match: '*'
```

In classic format:

```text
[INPUT]
    name opentelemetry
    listen 127.0.0.1
    port 4318

[OUTPUT]
    name stdout
    match *
```

With the above configuration, Fluent Bit will listen on port 4318 for data. You can now send telemetry data to the endpoints /v1/metrics, /v1/traces, and /v1/logs for metrics, traces, and logs respectively.

A sample curl request to POST JSON-encoded log data would be:

```shell
curl --header "Content-Type: application/json" \
     --request POST \
     --data '{"resourceLogs":[{"resource":{},"scopeLogs":[{"scope":{},"logRecords":[{"timeUnixNano":"1660296023390371588","body":{"stringValue":"{\"message\":\"dummy\"}"},"traceId":"","spanId":""}]}]}]}' \
     http://0.0.0.0:4318/v1/logs
```
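On success, Fluent Bit replies with the configured successful_response_code (201 by default), and with the sample configuration above the ingested record is printed by the stdout output.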
