JSON

The JSON parser is the simplest option: if the original log source is a JSON map string, it will take its structure and convert it directly to the internal binary representation.

A simple configuration, found in the default parsers configuration file, is the entry used to parse Docker log files (when the Tail input plugin is used):

[PARSER]
    Name        docker
    Format      json
    Time_Key    time
    Time_Format %Y-%m-%dT%H:%M:%S %z
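
To apply this parser, reference it by name from an input plugin. As a minimal sketch, the Tail input accepts a Parser option; the file path and tag below are illustrative, and the parsers file must be loaded through the Parsers_File setting in the main configuration:

[SERVICE]
    Parsers_File parsers.conf

[INPUT]
    Name    tail
    Path    /var/lib/docker/containers/*/*.log
    Tag     docker.*
    Parser  docker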

The following log entry is valid content for the parser defined above:

{"key1": 12345, "key2": "abc", "time": "2006-07-28T13:22:04Z"}

After processing, its internal representation will be:

[1154103724, {"key1"=>12345, "key2"=>"abc"}]

The time has been converted to a Unix timestamp (UTC), and the map has been reduced to the remaining components of the original message.
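
Note that by default the parser drops the original time field (time, in this case) once it has been parsed. If the raw value should be preserved in the record, parsers support a Time_Keep option; a minimal variant of the parser above (the name docker_keep_time is illustrative):

[PARSER]
    Name        docker_keep_time
    Format      json
    Time_Key    time
    Time_Format %Y-%m-%dT%H:%M:%S %z
    Time_Keep   On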
