Google Chronicle

Chronicle

The Chronicle output plugin allows ingesting security logs into the Google Chronicle service. This connector is designed to send unstructured security logs.

Google Cloud Configuration

Fluent Bit streams data into an existing Google Chronicle tenant using a service account that you specify. Therefore, before using the Chronicle output plugin, you must create a service account, create a Google Chronicle tenant, authorize the service account to write to the tenant, and provide the service account credentials to Fluent Bit.

Creating a Service Account

To stream security logs into Google Chronicle, the first step is to create a Google Cloud service account for Fluent Bit; see Creating a Google Cloud Service Account in Google's documentation.
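As a minimal sketch, assuming you manage the project with the gcloud CLI (the account name and project ID below are illustrative placeholders):

# Create a dedicated service account for Fluent Bit.
# "fluent-bit-chronicle" and "my-gcp-project" are placeholder names.
gcloud iam service-accounts create fluent-bit-chronicle \
    --project=my-gcp-project \
    --display-name="Fluent Bit Chronicle writer"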

Creating a Tenant of Google Chronicle

Fluent Bit does not create a Google Chronicle tenant for your security logs, so you must create one ahead of time.

Retrieving Service Account Credentials

Fluent Bit's Chronicle output plugin uses a JSON credentials file for authentication. Download the credentials file by following the instructions in Creating and Managing Service Account Keys.
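For example, again assuming the gcloud CLI and the service account sketched above (the key path and account address are illustrative), you can generate a key and point Fluent Bit at it through the environment variable the plugin reads by default:

# Generate and download a JSON key for the service account.
gcloud iam service-accounts keys create /etc/fluent-bit/chronicle-creds.json \
    --iam-account=fluent-bit-chronicle@my-gcp-project.iam.gserviceaccount.com

# The plugin falls back to this variable when google_service_credentials is not set.
export GOOGLE_SERVICE_CREDENTIALS=/etc/fluent-bit/chronicle-creds.json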

Configuration Parameters

| Key | Description | Default |
| --- | ----------- | ------- |
| google_service_credentials | Absolute path to a Google Cloud credentials JSON file. | Value of the environment variable $GOOGLE_SERVICE_CREDENTIALS |
| service_account_email | Account email associated with the service. Only available if no credentials file has been provided. | Value of the environment variable $SERVICE_ACCOUNT_EMAIL |
| service_account_secret | Private key content associated with the service account. Only available if no credentials file has been provided. | Value of the environment variable $SERVICE_ACCOUNT_SECRET |
| project_id | The project ID containing the Google Chronicle tenant to stream into. | The value of project_id in the credentials file |
| customer_id | The customer ID identifying the Google Chronicle tenant to stream into. This value must be specified in the configuration file. | |
| log_type | The log type to parse logs as. Google Chronicle supports parsing for specific log types only; see Google's official documentation for further details. This value must be specified in the configuration file. | |
| region | The GCP region in which to store security logs. Supported regions: US, EU, UK, ASIA. Blank is handled as US. | |
| log_key | By default, the whole log record is sent to Google Chronicle. If you specify a key name with this option, only the value of that key is sent to Google Chronicle. | |

Configuration File

If you are using a Google Cloud Credentials File, the following configuration is enough to get you started:

[INPUT]
    Name        dummy
    Tag         dummy

[OUTPUT]
    Name        chronicle
    Match       *
    customer_id my_customer_id
    log_type    my_super_awesome_type
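If you prefer to pass the credentials path explicitly and set the optional keys, a configuration along these lines should also work; the file path, region, and key name below are illustrative:

[OUTPUT]
    # Explicit credentials path instead of $GOOGLE_SERVICE_CREDENTIALS.
    Name                       chronicle
    Match                      *
    google_service_credentials /etc/fluent-bit/chronicle-creds.json
    customer_id                my_customer_id
    log_type                   my_super_awesome_type
    region                     EU
    log_key                    log

You can then test the pipeline locally by running fluent-bit -c fluent-bit.conf and watching the process output for delivery errors.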

