OTEL integration with Fluent Bit
OpenTelemetry provides an open source standard for logs, metrics & traces. Fluent Bit and the OpenTelemetry collector are both powerful telemetry collectors within the CNCF ecosystem.
Both aim to collect, process, and route telemetry data and support all telemetry types. They each emerged from different projects with different strengths: Fluent Bit started with logs and OpenTelemetry started with traces. The common narrative suggests you must choose one or the other, but these projects can and should coexist. Many teams successfully use both, leveraging each for what it does best or for other non-functional requirements like experience with Golang versus C, ease of maintenance, and so on.
The OpenTelemetry collector also has a Receiver and Exporter that enable you to ingest telemetry via the Fluent Forward protocol.
I will show you various examples of using Fluent Bit in different deployment scenarios. We demonstrate full working stacks using simple containers to make it easy to reuse the examples and pick out the bits you want to test/modify for your own use.
A repo is provided here with all examples: https://github.com/FluentDo/fluent-bit-examples
These examples are deliberately simple, primarily to walk you through the basic scenarios and explain what is going on. There are also examples provided by others, such as https://github.com/isItObservable/fluentbit-vs-collector, which may be useful to look at too.
Fluent Bit YAML config
In each case I will be using the new (since v2.0 anyway!) YAML format rather than the old “classic” format, both to hopefully future proof this article a bit and to let you start using the processors functionality, which is only available with YAML configuration. The official documentation provides full details on this configuration format as well: https://docs.fluentbit.io/manual/administration/configuring-fluent-bit/yaml
Fluent Bit processors
A processor is essentially a filter that runs bound specifically to the input or output plugin it is associated with: this means that it only runs for the relevant data routed from that input or to that output.
Previously, filters were part of the overall pipeline, so they could match data from any inputs and other filters, but they ran on the main thread to process their data. Processors therefore bring two primary benefits:
- Processors run on the thread(s) associated with their input or output plugin. This helps prevent the “noisy neighbours” problem of certain input data starving out more important processing.
- Processors do not have to pay the usual cost of unpacking and repacking data into the generic internal msgpack format.
All existing filters can be used as processors, but there are also some new processors that cannot be used as filters. Processors are provided that work across log, metric, and trace data types, whereas filters are only provided for log type data. A simple example of an existing filter used as a processor is sketched below.
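To make that concrete, here is a minimal sketch of an existing filter (the modify filter, attached to a dummy input; both are just assumptions for illustration) running as a processor bound to that input:
pipeline:
  inputs:
    - name: dummy
      tag: app
      processors:
        logs:
          # the existing 'modify' filter, running only for records from this input
          - name: modify
            add: source dummy
  outputs:
    - name: stdout
      match: '*'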
Simple usage
We will run the OTEL collector as a simple container with a Fluent Bit container feeding it dummy OTEL data. This basic test lets us show everything working and walk through the configuration before moving on to more interesting and complex deployments. The OTEL collector here acts as a simple OTEL receiver that is trivial to run and proves Fluent Bit is feeding it OTEL data.
OTEL collector
Start up the OTEL collector to receive OTEL data and print it out. We use the following configuration to do that:
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
exporters:
  debug:
    verbosity: detailed
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [debug]
    metrics:
      receivers: [otlp]
      exporters: [debug]
    logs:
      receivers: [otlp]
      exporters: [debug]
Run up the container using the configuration above:
docker run -p 127.0.0.1:4317:4317 -p 127.0.0.1:4318:4318 -v $PWD/otel-config.yaml:/etc/otelcol-contrib/config.yaml otel/opentelemetry-collector-contrib:0.128.0
We publish the relevant ports to receive OTEL data and mount the configuration file into the container's default location.
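If you want to sanity check the collector is listening before involving Fluent Bit, you can post a minimal hand-written OTLP/HTTP log payload to it (the payload below is just an illustrative example):
curl -s -X POST http://127.0.0.1:4318/v1/logs \
  -H 'Content-Type: application/json' \
  -d '{"resourceLogs":[{"scopeLogs":[{"logRecords":[{"body":{"stringValue":"hello"}}]}]}]}'
The debug exporter in the collector should then print a log record with Body: Str(hello).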
Fluent Bit sending OTEL
Now we can run the Fluent Bit container, which generates some dummy log data to show it all working together. We will use the following configuration to do that:
service:
  log_level: info
pipeline:
  inputs:
    - name: dummy
      tag: test
      processors:
        logs:
          - name: opentelemetry_envelope
  outputs:
    - name: opentelemetry
      match: "*"
      host: 127.0.0.1
      port: 4318
      tls: off
      metrics_uri: /v1/metrics
      logs_uri: /v1/logs
      traces_uri: /v1/traces
      log_response_payload: true
    - name: stdout
      match: "*"
To run it up with YAML we have to mount the configuration file in and override the default command to use the YAML file rather than the classic configuration:
docker run --rm -it --network=host -v $PWD/fluent-bit.yaml:/fluent-bit/etc/fluent-bit.yaml:ro fluent/fluent-bit -c /fluent-bit/etc/fluent-bit.yaml
We’re using host networking here just to simplify sending from our container to the already open localhost ports - really you should connect the ports properly using dedicated networks or host/IP addresses.
There is a full compose stack here as well to simplify things for you: https://github.com/FluentDo/fluent-bit-examples/tree/main/otel-collector
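If you prefer not to use host networking, a compose file along these lines wires the two containers onto a shared network (a minimal sketch with assumed service names rather than the repo's exact stack); the Fluent Bit output host then becomes otel-collector instead of 127.0.0.1:
services:
  otel-collector:
    image: otel/opentelemetry-collector-contrib:0.128.0
    volumes:
      - ./otel-config.yaml:/etc/otelcol-contrib/config.yaml:ro
  fluent-bit:
    image: fluent/fluent-bit
    command: ["-c", "/fluent-bit/etc/fluent-bit.yaml"]
    volumes:
      - ./fluent-bit.yaml:/fluent-bit/etc/fluent-bit.yaml:ro
    depends_on:
      - otel-collector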
Let us walk through the Fluent Bit configuration to explain the various components:
service:
  log_level: info
This just sets up the top-level Fluent Bit service configuration; I added it specifically as an example of where to increase the log level if you need help with debugging.
pipeline:
  inputs:
    - name: dummy
      tag: test
      processors:
        logs:
          - name: opentelemetry_envelope
  outputs:
    - name: opentelemetry
      match: "*"
      host: 127.0.0.1
      port: 4318
      tls: off
      metrics_uri: /v1/metrics
      logs_uri: /v1/logs
      traces_uri: /v1/traces
      log_response_payload: true
    - name: stdout
      match: "*"
Here we show a simple telemetry pipeline using the dummy input to generate sample log messages, which are then routed both to an opentelemetry output (using the appropriate localhost address and port along with the URIs the collector wants) and to a local stdout output. This allows us to see the generated data on the Fluent Bit side as well as it being sent to the OTEL collector we started previously.
Opentelemetry-envelope processor
The opentelemetry_envelope processor is used to ensure that the OTEL metadata is properly set up - this should be done for non-OTEL inputs that are going to OTEL outputs: https://docs.fluentbit.io/manual/pipeline/processors/opentelemetry-envelope
Essentially it provides the OTLP relevant information in the schema at the metadata level as attributes (rather than within the actual log data in the record) which other filters can then work with or the output plugin can use, e.g.
processors:
  logs:
    - name: opentelemetry_envelope
    - name: content_modifier
      context: otel_resource_attributes
      action: upsert
      key: service.name
      value: YOUR_SERVICE_NAME
It is usable for metrics or log type data as well.
Output
You should see the Fluent Bit container reporting the generated dummy data like so:
[0] test: [[1749725103.415777685, {}], {"message"=>"dummy"}]
[0] test: [[1749725104.415246054, {}], {"message"=>"dummy"}]
The output from stdout shows first the tag we are matching, test, followed by the timestamp (in UNIX epoch format) and any other metadata, and finally the actual log payload, which in this case is the message key with the value dummy.
The [0] shows we are reporting the first event in a batch - if there were multiple events for stdout to print then it would increment for each one until the next output.
Now, on the OTEL collector side we should see the log messages coming in like so:
2025-06-12T10:45:04.958Z info ResourceLog #0
Resource SchemaURL:
ScopeLogs #0
ScopeLogs SchemaURL:
InstrumentationScope
LogRecord #0
ObservedTimestamp: 1970-01-01 00:00:00 +0000 UTC
Timestamp: 2025-06-12 10:45:04.415246054 +0000 UTC
SeverityText:
SeverityNumber: Unspecified(0)
Body: Str(dummy)
Trace ID:
Span ID:
Flags: 0
You can see the body is reported as just dummy, i.e. the message key, because we only have a single top-level key. If you look at the documentation you can see that by default the opentelemetry output looks to send the message key, which is convenient for demoing with dummy.
We can tweak Fluent Bit to generate a multi-key input and then pick the relevant key to send via a configuration like so:
service:
  log_level: info
pipeline:
  inputs:
    - name: dummy
      tag: test
      dummy: '{"key1": "value1", "key2": "value2"}'
      processors:
        logs:
          - name: opentelemetry_envelope
  outputs:
    - name: opentelemetry
      match: "*"
      logs_body_key: key2
      host: 127.0.0.1
      port: 4318
      tls: off
      metrics_uri: /v1/metrics
      logs_uri: /v1/logs
      traces_uri: /v1/traces
      log_response_payload: true
Using this configuration you can see Fluent Bit reporting output like this:
[0] test: [[1749726031.415125943, {}], {"key1"=>"value1", "key2"=>"value2"}]
With the OTEL collector then receiving the key2 value:
2025-06-12T11:00:31.846Z info ResourceLog #0
Resource SchemaURL:
ScopeLogs #0
ScopeLogs SchemaURL:
InstrumentationScope
LogRecord #0
ObservedTimestamp: 1970-01-01 00:00:00 +0000 UTC
Timestamp: 2025-06-12 11:00:31.415125943 +0000 UTC
SeverityText:
SeverityNumber: Unspecified(0)
Body: Str(value2)
Trace ID:
Span ID:
Flags: 0
The documentation shows how to configure some of the other OTEL fields appropriately: https://docs.fluentbit.io/manual/pipeline/outputs/opentelemetry
In the basic example we are not populating other useful information like SeverityText, but these fields can be set from the data using the various configuration options available to you in the documentation.
Note that the configuration options let you distinguish between data in the actual log message body and data found in the metadata:
- xxx_metadata_key: looks for the key in the record metadata and not in the log message body.
- xxx_message_key: looks for the key in the log message body/record content.
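For example, assuming your records carry a level key in the body (an assumption made for illustration), a sketch like the following would populate SeverityText from it - check the output plugin documentation for the exact option names in your version:
pipeline:
  inputs:
    - name: dummy
      tag: test
      dummy: '{"msg": "something broke", "level": "ERROR"}'
      processors:
        logs:
          - name: opentelemetry_envelope
  outputs:
    - name: opentelemetry
      match: "*"
      host: 127.0.0.1
      port: 4318
      tls: off
      # send the 'msg' key as the body and take SeverityText from the 'level' key
      logs_body_key: msg
      logs_severity_text_message_key: level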
Fluent Bit with gRPC
Fluent Bit also supports gRPC (and HTTP/2), but these need to be explicitly enabled via the grpc or http2 configuration options.
Using our previous OTEL collector listening on 4317 for gRPC data, we can therefore do the following:
service:
  log_level: warn
pipeline:
  inputs:
    - name: dummy
      tag: test
      processors:
        logs:
          - name: opentelemetry_envelope
  outputs:
    - name: opentelemetry
      match: "*"
      host: 127.0.0.1
      port: 4317
      grpc: on
      tls: off
      metrics_uri: /v1/metrics
      logs_uri: /v1/logs
      traces_uri: /v1/traces
Now it should send data over gRPC to the OTEL collector, which reports similar output to before. I increased the log level because the current version of Fluent Bit was very “chatty” about success reporting for gRPC.
Metrics and traces
Fluent Bit can handle metric and trace style data now. It can scrape metrics from Prometheus endpoints, handle the Prometheus remote write protocol, or handle OTLP metric data directly.
For a simple demonstration, we can use the fluentbit_metrics input which actually provides metrics about Fluent Bit itself: https://docs.fluentbit.io/manual/pipeline/inputs/fluentbit-metrics
service:
  log_level: info
pipeline:
  inputs:
    - name: fluentbit_metrics
      tag: metrics
  outputs:
    - name: opentelemetry
      match: "*"
      host: 127.0.0.1
      port: 4318
      tls: off
      metrics_uri: /v1/metrics
      logs_uri: /v1/logs
      traces_uri: /v1/traces
      log_response_payload: true
    - name: stdout
      match: "*"
    - name: prometheus_exporter
      match: metrics
      host: 0.0.0.0
      port: 2021
We provide a stdout output which will report the data in the log, but also an endpoint on port 2021 that you can scrape for Prometheus format data via the prometheus_exporter.
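Assuming the Fluent Bit container is still run with host networking (or with port 2021 published), you can sanity check the exporter with a quick scrape:
curl -s http://127.0.0.1:2021/metrics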
The metrics are then also sent to the OTEL collector we are running which should report output like so:
StartTimestamp: 1970-01-01 00:00:00 +0000 UTC
Timestamp: 2025-06-12 12:51:52.418887897 +0000 UTC
Value: 0.000000
NumberDataPoints #1
Data point attributes:
-> name: Str(stdout.1)
StartTimestamp: 1970-01-01 00:00:00 +0000 UTC
Timestamp: 2025-06-12 12:51:48.339675316 +0000 UTC
Value: 0.000000
Metric #30
Descriptor:
-> Name: fluentbit_output_chunk_available_capacity_percent
-> Description: Available chunk capacity (percent)
-> Unit:
-> DataType: Gauge
NumberDataPoints #0
Data point attributes:
-> name: Str(opentelemetry.0)
StartTimestamp: 1970-01-01 00:00:00 +0000 UTC
Timestamp: 2025-06-12 12:51:52.418913562 +0000 UTC
Value: 100.000000
NumberDataPoints #1
Data point attributes:
-> name: Str(stdout.1)
StartTimestamp: 1970-01-01 00:00:00 +0000 UTC
Timestamp: 2025-06-12 12:51:52.416704619 +0000 UTC
Value: 100.000000
Traces require something that generates OpenTelemetry format traces - the only supported trace input for the moment is OTEL. The opentelemetry input plugin (not output) shows you how to configure this, including converting traces to log style data via the raw_traces option (e.g. to send to an endpoint that only supports log data, like S3, rather than OTLP trace data): https://docs.fluentbit.io/manual/pipeline/inputs/opentelemetry
There is also a useful log_to_metrics filter which can be used to convert log messages into metrics - a common pattern in a lot of existing applications is to log various buffer sizes, etc., which are better exposed as metrics: https://docs.fluentbit.io/manual/pipeline/filters/log_to_metrics
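As a rough sketch of that pattern (the dummy record and field names are assumptions, and the option names should be checked against the filter documentation), this turns a numeric field from a log record into a gauge:
pipeline:
  inputs:
    - name: dummy
      tag: app.log
      dummy: '{"message": "flush complete", "buffer_size": 128}'
  filters:
    - name: log_to_metrics
      match: app.log
      tag: app.metrics
      metric_mode: gauge
      metric_name: buffer_size
      metric_description: Buffer size reported in the application logs
      value_field: buffer_size
  outputs:
    # the generated metrics could equally be routed to the opentelemetry output shown earlier
    - name: stdout
      match: app.metrics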
Using Fluent Forward with OTEL collector
The OTEL collector can actually directly talk to Fluent Bit with the Fluent Forward protocol via a receiver (if sending from Fluent Bit) or an exporter (if sending to Fluent Bit). This may be a better option in some cases and is easy to configure.
The Fluent Forward protocol is also implemented by Fluentd and is essentially a msgpack based protocol that includes the tag of the data. It is an optimal way to transfer data between Fluentd/Fluent Bit instances as it uses their internal data structure directly.
Sending from OTEL collector to Fluent Bit
Fluent Bit needs to receive data using a forward input plugin.
pipeline:
  inputs:
    - name: forward
      listen: 0.0.0.0
      port: 24224
  outputs:
    - name: stdout
      match: '*'
We configure the OTEL collector to have a Fluent Forward exporter to send this data.
exporters:
  fluentforward:
    endpoint:
      tcp_addr: 127.0.0.1:24224
    tag: otelcollector
Remember that the Fluent Forward protocol carries the tag with the data, so it is set on the exporter rather than on the Fluent Bit input plugin.
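For completeness, the exporter still needs to be wired into a collector pipeline, for example like this (a sketch that reuses the otlp receiver from earlier; use whichever receivers you actually have):
service:
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [fluentforward]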
Sending to OTEL collector from Fluent Bit
We configure the OTEL collector to have a Fluent Forward receiver to get this data.
receivers:
  fluentforward:
    endpoint: 0.0.0.0:24224
Fluent Bit needs to send data using a forward output plugin.
pipeline:
  inputs:
    - name: dummy
      tag: test
  outputs:
    - name: forward
      match: '*'
      port: 24224
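Again, the receiver needs wiring into a collector pipeline; a minimal sketch reusing the debug exporter from the earlier collector configuration would be:
receivers:
  fluentforward:
    endpoint: 0.0.0.0:24224
exporters:
  debug:
    verbosity: detailed
service:
  pipelines:
    logs:
      receivers: [fluentforward]
      exporters: [debug]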