Azure Data Explorer

Send logs to Azure Data Explorer (Kusto)


The Kusto output plugin lets you ingest your logs into an Azure Data Explorer cluster via the Queued Ingestion mechanism. This output plugin can also be used to ingest logs into an Eventhouse cluster in Microsoft Fabric Real Time Analytics.

For ingesting into Azure Data Explorer: Creating a Kusto Cluster and Database

You can create an Azure Data Explorer cluster in one of the following ways:

  • Create a free-tier cluster
  • Create a fully-featured cluster

For ingesting into Microsoft Fabric Real Time Analytics: Creating an Eventhouse Cluster and KQL Database

You can create an Eventhouse cluster and a KQL database by following these steps:

  • Create an Eventhouse cluster
  • Create a KQL database

Creating an Azure Registered Application

  • Register an Application
  • Add a client secret
  • Authorize the app in your database

Fluent Bit will use the application's credentials to ingest data into your cluster.
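If you prefer the command line to the Azure portal, the registration and client-secret steps can be sketched with the Azure CLI; this is a minimal sketch, and the display name fluentbit-ingest is just a placeholder:

# Registers an application and creates a client secret in one step.
# In the JSON output, appId, password, and tenant correspond to the
# plugin's client_id, client_secret, and tenant_id parameters.
az ad sp create-for-rbac --name fluentbit-ingest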

Creating a Table

Fluent Bit ingests the event data into Kusto in JSON format, which by default includes three properties:

  • log - the actual event payload.

  • tag - the event tag.

  • timestamp - the event timestamp.
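For example, a single ingested record could look like the following (all values are illustrative):

{"log": {"message": "hello world"}, "tag": "app.logs", "timestamp": "2024-05-01T12:00:00.000Z"}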

A table with the expected schema must exist in order for data to be ingested properly.

.create table FluentBit (log:dynamic, tag:string, timestamp:datetime)

Optional - Creating an Ingestion Mapping

By default, Kusto will insert incoming ingestions into a table by inferring the mapped table columns from the payload properties. However, this mapping can be customized by creating a JSON ingestion mapping. The plugin can be configured to use an ingestion mapping via the ingestion_mapping_reference configuration key.
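As a sketch, a JSON ingestion mapping matching the table above could be created as follows; the mapping name log_mapping is a placeholder:

.create table FluentBit ingestion json mapping "log_mapping" '[{"column":"log","path":"$.log"},{"column":"tag","path":"$.tag"},{"column":"timestamp","path":"$.timestamp"}]'

You would then reference it from the plugin with Ingestion_Mapping_Reference log_mapping.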

Configuration Parameters

| Key | Description | Default |
| --- | --- | --- |
| tenant_id | Required - The tenant/domain ID of the AAD registered application. | |
| client_id | Required - The client ID of the AAD registered application. | |
| client_secret | Required - The client secret of the AAD registered application (App Secret). | |
| ingestion_endpoint | Required - The cluster's ingestion endpoint, usually in the form https://ingest-cluster_name.region.kusto.windows.net. | |
| database_name | Required - The database name. | |
| table_name | Required - The table name. | |
| ingestion_mapping_reference | Optional - The name of a JSON ingestion mapping that will be used to map the ingested payload into the table columns. | |
| log_key | Key name of the log content. | log |
| include_tag_key | If enabled, a tag is appended to the output. The key name is set by the tag_key property. | On |
| tag_key | The key name of the tag. Ignored if include_tag_key is false. | tag |
| include_time_key | If enabled, a timestamp is appended to the output. The key name is set by the time_key property. | On |
| time_key | The key name of the timestamp. Ignored if include_time_key is false. | timestamp |
| workers | The number of workers to perform flush operations for this output. | 0 |

Configuration File

Get started quickly with this configuration file:

[OUTPUT]
    Match *
    Name azure_kusto
    Tenant_Id <app_tenant_id>
    Client_Id <app_client_id>
    Client_Secret <app_secret>
    Ingestion_Endpoint https://ingest-<cluster>.<region>.kusto.windows.net
    Database_Name <database_name>
    Table_Name <table_name>
    Ingestion_Mapping_Reference <mapping_name>
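
Assuming the file is saved as fluent-bit.conf (the file name is a placeholder), you can then start Fluent Bit with it:

fluent-bit -c fluent-bit.conf

Once records are flowing, a quick way to verify ingestion from the Kusto side is a simple query against the target table, for example:

FluentBit | take 10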

Troubleshooting

403 Forbidden

If you get a 403 Forbidden error response, make sure that:

  • You provided the correct AAD registered application credentials.

  • You authorized the application to ingest into your database or table (see the sketch below).
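If the application has not been granted ingest permissions yet, you can authorize it from the database side; a minimal sketch, with the app and tenant IDs as placeholders:

.add database <database_name> ingestors ('aadapp=<app_client_id>;<app_tenant_id>') 'Fluent Bit ingestion app'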

