The labels processor lets you manipulate the labels of metrics.
Similar to filters, this processor provides an enriching/modifying mechanism for manipulating labels. The most significant difference is that processors perform better than filters, and chaining them incurs no encoding or decoding performance penalties.
Note: processors, including this specific component, can be enabled only by using the YAML configuration format. The classic configuration format doesn't support processors.
Change the value of `name` to `fluentbit`:
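A sketch of the matching configuration; the `fluentbit_metrics` input is used here only as an illustrative metrics source:

```yaml
pipeline:
  inputs:
    - name: fluentbit_metrics
      processors:
        metrics:
          - name: labels
            update: name fluentbit
  outputs:
    - name: stdout
      match: '*'
```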
The following example appends the key `agent` with the value `fluentbit` as a label of the metrics:
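A sketch of this configuration, again assuming the `fluentbit_metrics` input as the metrics source:

```yaml
pipeline:
  inputs:
    - name: fluentbit_metrics
      processors:
        metrics:
          - name: labels
            insert: agent fluentbit
  outputs:
    - name: stdout
      match: '*'
```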
Upsert the value of `name` to `fluentbit` (updating the key if it exists, inserting it otherwise):
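A sketch of the upsert configuration (the input plugin is illustrative):

```yaml
pipeline:
  inputs:
    - name: fluentbit_metrics
      processors:
        metrics:
          - name: labels
            upsert: name fluentbit
  outputs:
    - name: stdout
      match: '*'
```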
Delete the `name` key from the metric labels:
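A sketch of the delete configuration (the input plugin is illustrative):

```yaml
pipeline:
  inputs:
    - name: fluentbit_metrics
      processors:
        metrics:
          - name: labels
            delete: name
  outputs:
    - name: stdout
      match: '*'
```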
Apply the SHA-256 algorithm to the value of the key `hostname`:
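A sketch of the hash configuration (the input plugin is illustrative):

```yaml
pipeline:
  inputs:
    - name: fluentbit_metrics
      processors:
        metrics:
          - name: labels
            hash: hostname
  outputs:
    - name: stdout
      match: '*'
```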
The labels processor supports the following configuration keys:

Key | Description
---|---
`update` | Update an existing key with a value. The key/value pair is required. If the specified key doesn't exist, the operation silently fails and has no effect.
`insert` | Insert a new key with a value into the metrics. The key/value pair is required.
`upsert` | Try to update the value of the key; if the key doesn't exist, create it. The key/value pair is required.
`delete` | Delete a key from the labels of the metrics. The key/value pair is required. If the specified key doesn't exist, the operation silently fails and has no effect.
`hash` | Replace the key value with a hash generated by the SHA-256 algorithm from the specified label value. The generated binary value is set as a hex string.
The content_modifier processor allows you to manipulate the messages, metadata/attributes and content of Logs and Traces.
Similar to the functionality exposed by filters, this processor presents a unified mechanism to perform such operations for data manipulation. The most significant difference is that processors perform better than filters, and when chaining them, there are no encoding/decoding performance penalties.
Note that processors, including this specific component, can only be enabled using the new YAML configuration format. The classic configuration format doesn't support processors.
The processor works on top of what we call a context: the place where the content modification will happen. Different contexts are provided to manipulate the desired information; they are listed in the context reference tables later in this document. In addition, special contexts are available to operate on data that follows an OpenTelemetry Log schema; all of them operate on data shared across a group of records.

TIP: if your data doesn't follow the OpenTelemetry Log schema but your backend or destination for your logs expects it, take a look at the OpenTelemetry Envelope processor, which you can use in conjunction with this processor to transform your data to be compatible with the OpenTelemetry Log schema.
The actions specify the type of operation to run on top of a specific key or content from a Log or a Trace. The available actions are listed in the actions reference table later in this document.
The following example appends the key `color` with the value `blue` to the log stream:
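A sketch of the configuration, using the Dummy input as an illustrative log source:

```yaml
pipeline:
  inputs:
    - name: dummy
      dummy: '{"message": "a simple log"}'
      processors:
        logs:
          - name: content_modifier
            action: insert
            key: color
            value: blue
  outputs:
    - name: stdout
      match: '*'
```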
Update the value of `key1` and insert `key2`:
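A sketch of the configuration; the Dummy input payload and values are illustrative:

```yaml
pipeline:
  inputs:
    - name: dummy
      dummy: '{"key1": "123.4"}'
      processors:
        logs:
          - name: content_modifier
            action: upsert
            key: key1
            value: 555
          - name: content_modifier
            action: upsert
            key: key2
            value: complete
  outputs:
    - name: stdout
      match: '*'
```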
Delete `key2` from the stream:
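A sketch of the configuration; the Dummy input payload is illustrative:

```yaml
pipeline:
  inputs:
    - name: dummy
      dummy: '{"key1": "a", "key2": "b"}'
      processors:
        logs:
          - name: content_modifier
            action: delete
            key: key2
  outputs:
    - name: stdout
      match: '*'
```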
Change the name of `key2` to `test`:
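A sketch of the rename configuration; the Dummy input payload is illustrative:

```yaml
pipeline:
  inputs:
    - name: dummy
      dummy: '{"key1": "a", "key2": "b"}'
      processors:
        logs:
          - name: content_modifier
            action: rename
            key: key2
            value: test
  outputs:
    - name: stdout
      match: '*'
```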
Apply the SHA-256 algorithm to the value of the key `password`:
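A sketch of the hash configuration; the Dummy input payload is illustrative:

```yaml
pipeline:
  inputs:
    - name: dummy
      dummy: '{"username": "bob", "password": "12345"}'
      processors:
        logs:
          - name: content_modifier
            action: hash
            key: password
  outputs:
    - name: stdout
      match: '*'
```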
Given a key whose value is a domain address, extract its components as a list of key/value pairs:
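A sketch using a hypothetical `address` key and an illustrative named-capture pattern; each capture group becomes a new key in the record:

```yaml
pipeline:
  inputs:
    - name: dummy
      dummy: '{"address": "www.fluentbit.io"}'
      processors:
        logs:
          - name: content_modifier
            action: extract
            key: address
            pattern: '^(?<hostname>[^.]+)\.(?<domain>.+)$'
  outputs:
    - name: stdout
      match: '*'
```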
Both keys in the example are strings. Convert `key1` to a double/float type and `key2` to a boolean:
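A sketch of the conversion configuration; the Dummy input payload is illustrative:

```yaml
pipeline:
  inputs:
    - name: dummy
      dummy: '{"key1": "123.4", "key2": "true"}'
      processors:
        logs:
          - name: content_modifier
            action: convert
            key: key1
            converted_type: double
          - name: content_modifier
            action: convert
            key: key2
            converted_type: boolean
  outputs:
    - name: stdout
      match: '*'
```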
The sql processor provides a simple interface to select content from Logs, with support for conditional expressions.
Our SQL processor does not depend on a database or indexing; it runs the query on the fly. There is no concept of tables: you run the query against the STREAM.
Note that this processor differs from the "stream processor interface" that runs after the filters; this one can only be used in the processors section of the input plugins when using the YAML configuration mode.
The following example generates a sample message with two keys called `key` and `http.url`. Using a simple SQL statement, we select only the key `http.url`:
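A sketch of the pipeline; the dummy payload and URL are illustrative:

```yaml
pipeline:
  inputs:
    - name: dummy
      dummy: '{"key": "value", "http.url": "https://fluentbit.io/search?q=docs"}'
      processors:
        logs:
          - name: sql
            query: "SELECT http.url FROM STREAM;"
  outputs:
    - name: stdout
      match: '*'
```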
Similar to the example above, we now extract the parts of `http.url` and select only the domain from the value. For that, we use the content_modifier and sql processors together:
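A sketch of the combined pipeline; the dummy payload and extraction pattern (with its capture-group names) are illustrative:

```yaml
pipeline:
  inputs:
    - name: dummy
      dummy: '{"key": "value", "http.url": "https://fluentbit.io/search?q=docs"}'
      processors:
        logs:
          - name: content_modifier
            action: extract
            key: "http.url"
            pattern: '^(?<http_protocol>https?):\/\/(?<http_domain>[^\/\?]+)(?<http_path>\/[^?]*)?(?:\?(?<http_query_params>.*))?'
          - name: sql
            query: "SELECT http_domain FROM STREAM;"
  outputs:
    - name: stdout
      match: '*'
```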
The expected output of this pipeline will be something like this:
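A sketch of the expected record, assuming the domain was extracted into a hypothetical `http_domain` key (the exact key name follows the capture group used in the extraction pattern):

```json
{"http_domain": "fluentbit.io"}
```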
The OpenTelemetry Envelope processor is used to transform your data to be compatible with the OpenTelemetry Log schema. Use it when your data was not generated by an OpenTelemetry input but your logs backend or destination expects the OpenTelemetry schema.
The processor does not provide any extra configuration parameters; it can be used directly in your `processors` YAML directive.
In this example, we use the Dummy input plugin to generate a sample message per second. Right after each record is created, the opentelemetry_envelope processor transforms the data to be compatible with the OpenTelemetry Log schema. The output is sent to the standard output and also to an OpenTelemetry collector receiving data on port 4318.
fluent-bit.yaml
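A sketch of what `fluent-bit.yaml` could look like; the dummy payload and collector address are illustrative:

```yaml
service:
  log_level: info

pipeline:
  inputs:
    - name: dummy
      dummy: '{"message": "simple log"}'
      processors:
        logs:
          - name: opentelemetry_envelope
  outputs:
    - name: stdout
      match: '*'
    - name: opentelemetry
      match: '*'
      host: localhost
      port: 4318
```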
otel-collector.yaml
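A sketch of a matching `otel-collector.yaml`, receiving OTLP over HTTP on port 4318 and writing to `out.json`:

```yaml
receivers:
  otlp:
    protocols:
      http:
        endpoint: 0.0.0.0:4318

exporters:
  file:
    path: out.json

service:
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [file]
```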
You will notice that the standard output of Fluent Bit prints the raw representation of the schema; however, the OpenTelemetry collector receives the data in the OpenTelemetry Log schema.
Inspecting the output file `out.json`, you will see the data in the OpenTelemetry Log schema:
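An abbreviated sketch of the OTLP/JSON layout you can expect in `out.json`; the timestamps and body values here are illustrative and depend on your input:

```json
{
  "resourceLogs": [
    {
      "resource": {},
      "scopeLogs": [
        {
          "scope": {},
          "logRecords": [
            {
              "timeUnixNano": "1700000000000000000",
              "body": {
                "kvlistValue": {
                  "values": [
                    { "key": "message", "value": { "stringValue": "simple log" } }
                  ]
                }
              }
            }
          ]
        }
      ]
    }
  ]
}
```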
While OpenTelemetry Envelope enriches your logs with the schema, you might want to take a step further and use the Content Modifier processor to modify the content of your logs. Here is a quick example that adds some resource and scope attributes to your logs:
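A sketch that chains opentelemetry_envelope with content_modifier; the attribute names and values below are hypothetical:

```yaml
pipeline:
  inputs:
    - name: dummy
      dummy: '{"message": "simple log"}'
      processors:
        logs:
          - name: opentelemetry_envelope
          - name: content_modifier
            context: otel_resource_attributes
            action: upsert
            key: service.name
            value: my-service
          - name: content_modifier
            context: otel_scope_attributes
            action: upsert
            key: deployment
            value: production
  outputs:
    - name: opentelemetry
      match: '*'
      host: localhost
      port: 4318
```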
The collector JSON output will then include the resource and scope attributes you added.
For more details about further processing, read the Content Modifier processor documentation.
The metric_selector processor allows you to select metrics to include or exclude (similar to the `grep` filter for logs).
The native processor plugin supports the configuration parameters listed in the reference table later in this document.
Here is a basic configuration example.
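A sketch of a basic configuration; the `/storage/` pattern and the `fluentbit_metrics` input are illustrative:

```yaml
pipeline:
  inputs:
    - name: fluentbit_metrics
      processors:
        metrics:
          - name: metric_selector
            metric_name: /storage/
            action: include
  outputs:
    - name: stdout
      match: '*'
```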
All processors are only valid with the YAML configuration format. Processor configuration should be located under the relevant input or output plugin configuration.
The `Metric_Name` parameter treats strings quoted with slashes (`/.../`) as regular expressions. Without them, users need to specify `Operation_Type` to choose prefix matching or substring matching; the default operation is prefix matching. For example, `/chunks/` will be translated as a regular expression.
The content_modifier processor supports the following configuration parameters:

Key | Description
---|---
`context` | Specify the context where the modifications will happen (more details above). The following contexts are available: `attributes`, `body`, `span_name`, `span_kind`, `span_status`, `span_attributes`, `otel_resource_attributes`, `otel_scope_name`, `otel_scope_version`, `otel_scope_attributes`.
`key` | Specify the name of the key that will be used to apply the modification.
`value` | Based on the action type, `value` might be required and represent different things. Check the detailed information for the specific actions.
`pattern` | Defines a regular expression pattern. This property is only used by the `extract` action.
`converted_type` | Define the data type to convert to. The available options are: `string`, `boolean`, `int`, and `double`.
The following actions are available for the content_modifier processor:

Action | Description
---|---
`insert` | Insert a new key with a value into the target context. The `key` and `value` parameters are required.
`upsert` | Try to update the value of an existing key; if the key does not exist, create it. The `key` and `value` parameters are required.
`delete` | Delete a key from the target context. The `key` parameter is required.
`rename` | Change the name of a key. The `value` set in the configuration represents the new name. The `key` and `value` parameters are required.
`hash` | Replace the key value with a hash generated by the SHA-256 algorithm; the generated binary value is set as a hex string representation. The `key` parameter is required.
`extract` | Extract the value of a single key as a list of key/value pairs. This action requires a regular expression in the `pattern` property. The `key` and `pattern` parameters are required. For more details, check the examples above.
`convert` | Convert the data type of a key value. The `key` and `converted_type` parameters are required.
The sql processor supports the following configuration parameter:

Key | Description
---|---
`query` | Define the SQL statement to run on top of the Logs stream; it must end with `;`.
The metric_selector processor supports the following configuration parameters:

Key | Description
---|---
`Metric_Name` | Keep metrics whose name matches the given string or regular expression.
`Context` | Specify the matching context. Currently, only `metric_name` and `delete_label_value` are supported.
`Action` | Specify the action for matched metrics. `INCLUDE` and `EXCLUDE` are allowed.
`Operation_Type` | Specify the operation type applied to metrics payloads. `PREFIX` and `SUBSTRING` are allowed.
`Label` | Specify a label key and value pair.
The following contexts are available for the content_modifier processor:

Context Name | Signal | Description
---|---|---
`attributes` | Logs | Modify the attributes or metadata of a Log record.
`body` | Logs | Modify the content of a Log record.
`span_name` | Traces | Modify the name of a Span.
`span_kind` | Traces | Modify the kind of a Span.
`span_status` | Traces | Modify the status of a Span.
`span_attributes` | Traces | Modify the attributes of a Span.
The following special contexts operate on data that follows an OpenTelemetry Log schema:

Context Name | Signal | Description
---|---|---
`otel_resource_attributes` | Logs | Modify the attributes of the Log Resource.
`otel_scope_name` | Logs | Modify the name of a Log Scope.
`otel_scope_version` | Logs | Modify the version of a Log Scope.
`otel_scope_attributes` | Logs | Modify the attributes of a Log Scope.