# Data pipeline

The Fluent Bit data pipeline incorporates several specific concepts. Data flows through the pipeline stages in the order shown in the following diagram.

{% @mermaid/diagram content="graph LR
accTitle: Fluent Bit data pipeline
accDescr: A diagram of the Fluent Bit data pipeline, which includes input, processors, a parser, a filter, a buffer, routing, and various outputs.
A\[Input] --> B\[Processors]
B --> C\[Parser]
C --> D\[Filter]
D --> E\[Buffer]
E --> F((Routing))
F --> G\[Output 1]
F --> H\[Output 2]
F --> I\[Output 3]" %}

## Inputs

[Input plugins](https://docs.fluentbit.io/manual/data-pipeline/inputs) gather information from different sources. Some plugins collect data from log files, and others gather metrics information from the operating system. There are many plugins to suit different needs.
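For example, a minimal YAML configuration that collects data with the `tail` input plugin might look like the following sketch (the file path and tag are illustrative):

```yaml
pipeline:
  inputs:
    - name: tail            # read new lines appended to a file
      path: /var/log/app.log
      tag: app.log
  outputs:
    - name: stdout          # print collected records to standard output
      match: '*'
```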

## Processors

[Processors](https://docs.fluentbit.io/manual/data-pipeline/processors) are components that modify, transform, or enhance data as it flows through the pipeline. Processors are attached directly to individual input or output plugins rather than defined globally, and they don't use tag matching.

Because processors run in the same thread as their associated plugin, they can reduce performance overhead compared to filters—especially when [multithreading](https://docs.fluentbit.io/manual/administration/multithreading) is enabled.

Processors are configured in [YAML configuration files](https://docs.fluentbit.io/manual/administration/configuring-fluent-bit/yaml) only.
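As a sketch, a processor is attached directly to an input in a YAML configuration like this; the `content_modifier` processor and the key/value shown here are illustrative:

```yaml
pipeline:
  inputs:
    - name: dummy
      tag: app.log
      processors:
        logs:
          - name: content_modifier   # runs in the input's thread; no tag matching
            action: insert
            key: hostname
            value: web-01             # illustrative value
  outputs:
    - name: stdout
      match: '*'
```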

## Parser

[Parsers](https://docs.fluentbit.io/manual/data-pipeline/parsers) convert unstructured data to structured data. Use a parser to apply a structure to incoming data as input plugins collect it.
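For instance, a regex parser that structures a plain-text line such as `2024-01-01T12:00:00 error disk full` into `time`, `level`, and `message` fields could be sketched as follows (the parser name, field names, and time format are illustrative):

```yaml
parsers:
  - name: simple_line
    format: regex
    regex: '^(?<time>[^ ]+) (?<level>[^ ]+) (?<message>.*)$'
    time_key: time
    time_format: '%Y-%m-%dT%H:%M:%S'

pipeline:
  inputs:
    - name: tail
      path: /var/log/app.log
      tag: app.log
      parser: simple_line    # apply the parser as data is collected
```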

## Filter

[Filters](https://docs.fluentbit.io/manual/data-pipeline/filters) let you alter the collected data before delivering it to a destination. In production environments, where you need full control of the data you're collecting, filters let you modify, enrich, or drop records before they're delivered.
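As an example, the `grep` filter can keep only the records that match a pattern; the tag and field pattern below are illustrative:

```yaml
pipeline:
  filters:
    - name: grep
      match: 'app.*'       # apply only to records whose tag matches app.*
      regex: level ERROR   # keep records whose "level" field contains ERROR
```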

## Buffer

The [buffering](https://docs.fluentbit.io/manual/data-pipeline/buffering) phase in the pipeline provides a unified and persistent mechanism to store your data, using either the primary in-memory model or the filesystem-based mode.
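A sketch of enabling filesystem-based buffering for an input, assuming a writable storage path:

```yaml
service:
  storage.path: /var/lib/fluent-bit/buffer   # illustrative path for on-disk chunks
  storage.sync: normal
pipeline:
  inputs:
    - name: tail
      path: /var/log/app.log
      tag: app.log
      storage.type: filesystem   # persist chunks to disk instead of memory only
```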

## Routing

[Routing](https://docs.fluentbit.io/manual/data-pipeline/router) is a core feature that lets you route your data through filters, and then to one or multiple destinations. The router relies on the concept of [tags](https://docs.fluentbit.io/manual/key-concepts#tag) and [matching](https://docs.fluentbit.io/manual/key-concepts#match) rules.
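The tag and match mechanism can be sketched with two inputs routed to different outputs; the tags and destinations shown are illustrative:

```yaml
pipeline:
  inputs:
    - name: cpu
      tag: metrics.cpu
    - name: tail
      path: /var/log/app.log
      tag: logs.app
  outputs:
    - name: stdout
      match: 'metrics.*'   # only records tagged metrics.* are routed here
    - name: file
      match: 'logs.*'      # only records tagged logs.* are routed here
      path: /tmp
```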

## Output

[Output plugins](https://docs.fluentbit.io/manual/data-pipeline/outputs) let you define destinations for your data. Common destinations are remote services, local file systems, or other standard interfaces.
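For example, data can be delivered both to standard output and to a remote HTTP endpoint; the host below is a placeholder, not a real service:

```yaml
pipeline:
  outputs:
    - name: stdout
      match: '*'
    - name: http
      match: 'app.*'
      host: collector.example.com   # placeholder destination
      port: 443
      tls: on
```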
