Custom Export
A short guide on setting up telemetry export to a custom endpoint.
Overview
Prover nodes can export to their own OTLP endpoints. This is done with an OpenTelemetry Collector (otelcol) Docker container combined with one or more receivers of your choice: either a fully integrated one such as Grafana Alloy, or individual receivers per signal, such as Prometheus for metrics.
Here's how the example services will look:

| Service | Role | Port |
| --- | --- | --- |
| Otel Collector | gRPC OTLP receiver | 4317 |
| Datadog Agent | OtelCollector export endpoint | 4320 |
| Grafana Alloy | OtelCollector export endpoint | 4321 |
Previously, the prover node binary exported directly to the DD agent on port 4317. With this setup, the binary instead exports to the Otel Collector, which in turn dual-exports to DD and Alloy.
Currently, we only support metrics, traces and logs through gRPC OTLP.
Datadog Agent
First, we'll set up the DD agent, which is what Fermah uses to get full visibility into prover nodes. Usually, this will already be running as part of the Installation, but we'll remap its port so it listens on 4320 on the host instead. Here are just the modified ports:
ports:
- "4320:4317"
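For context, a fuller service definition for the agent might look like the sketch below. The image tag and environment variables are illustrative; keep whatever your existing installation already uses, and only change the port mapping.

```yaml
services:
  datadog-agent:
    image: gcr.io/datadoghq/agent:7
    restart: always
    environment:
      - DD_API_KEY=${DATADOG_API_KEY}
    ports:
      # Host port 4320 -> the agent's OTLP gRPC port 4317
      - "4320:4317"
```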
Grafana Alloy
Setting up Alloy requires two files. The config file depends heavily on each prover node's setup, because it defines the final export endpoint for each signal (metrics, traces, logs); for Grafana these are Prometheus, Tempo, and Loki respectively.
First, the relevant docker compose file:
services:
  alloy:
    restart: always
    image: grafana/alloy:latest
    ports:
      - "4321:4317"
      - "12345:12345"
    volumes:
      - ./config.alloy:/etc/alloy/config.alloy
    command: run --server.http.listen-addr=0.0.0.0:12345 --storage.path=/var/lib/alloy/data /etc/alloy/config.alloy
    environment:
      - GRAFANA_CLOUD_API_KEY=${GRAFANA_CLOUD_API_KEY}
And here's the configuration file:
Each endpoint must be updated to match your use case, whether through Grafana Cloud or a local instance.
The configuration file assumes the prover node is using Grafana Cloud: the endpoint URLs for Prometheus, Loki, and Tempo are known, and the API key used for authentication is available in the GRAFANA_CLOUD_API_KEY environment variable when running docker compose.
If using local receivers for each endpoint, please refer to the official Alloy configuration reference. Usually, though, the only changes needed are removing the authentication blocks and updating the endpoint URLs.
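As a starting point, a minimal config.alloy for the Grafana Cloud case might look like the sketch below. The component labels, `<stack>` hostnames, and usernames are placeholders, not real values; replace them with the endpoints and credentials from your own Grafana Cloud stack.

```alloy
// Receive OTLP over gRPC from the Otel Collector.
otelcol.receiver.otlp "default" {
  grpc {
    endpoint = "0.0.0.0:4317"
  }
  output {
    metrics = [otelcol.exporter.prometheus.default.input]
    logs    = [otelcol.exporter.loki.default.input]
    traces  = [otelcol.exporter.otlp.tempo.input]
  }
}

// Metrics -> Grafana Cloud Prometheus (remote_write).
otelcol.exporter.prometheus "default" {
  forward_to = [prometheus.remote_write.grafana_cloud.receiver]
}

prometheus.remote_write "grafana_cloud" {
  endpoint {
    url = "https://prometheus-<stack>.grafana.net/api/prom/push"
    basic_auth {
      username = "<prometheus-username>"
      password = sys.env("GRAFANA_CLOUD_API_KEY")
    }
  }
}

// Logs -> Grafana Cloud Loki.
otelcol.exporter.loki "default" {
  forward_to = [loki.write.grafana_cloud.receiver]
}

loki.write "grafana_cloud" {
  endpoint {
    url = "https://logs-<stack>.grafana.net/loki/api/v1/push"
    basic_auth {
      username = "<loki-username>"
      password = sys.env("GRAFANA_CLOUD_API_KEY")
    }
  }
}

// Traces -> Grafana Cloud Tempo (OTLP over gRPC with basic auth).
otelcol.exporter.otlp "tempo" {
  client {
    endpoint = "tempo-<stack>.grafana.net:443"
    auth     = otelcol.auth.basic.grafana_cloud.handler
  }
}

otelcol.auth.basic "grafana_cloud" {
  username = "<tempo-username>"
  password = sys.env("GRAFANA_CLOUD_API_KEY")
}
```

For a local setup, drop the `basic_auth`/`otelcol.auth.basic` blocks and point the URLs at your local Prometheus, Loki, and Tempo instances.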
Otel Collector
This container is the glue that ties everything together. The otelcol endpoint becomes the export target of the prover node binary, as it will listen on port 4317.
The otelcol container needs both a Docker Compose service declaration and a YAML configuration file. Both are declared in the section below.
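As an illustration, an otel-config.yml that receives OTLP over gRPC and dual-exports every signal to both agents might look like this. The `datadog-agent` and `alloy` hostnames are assumptions that must match the service names declared in the compose file:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

exporters:
  # Forward to the Datadog Agent's internal OTLP port.
  otlp/datadog:
    endpoint: datadog-agent:4317
    tls:
      insecure: true
  # Forward to Alloy's internal OTLP port.
  otlp/alloy:
    endpoint: alloy:4317
    tls:
      insecure: true

service:
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [otlp/datadog, otlp/alloy]
    traces:
      receivers: [otlp]
      exporters: [otlp/datadog, otlp/alloy]
    logs:
      receivers: [otlp]
      exporters: [otlp/datadog, otlp/alloy]
```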
Final Docker Compose File
We need to run everything in the same compose file. Conveniently, Docker Compose creates a network per compose file, letting us refer to each service by its DNS name, which is the service name declared in the file.
With this setup, you'll end up with the following files:
docker-compose.yml
config.alloy
otel-config.yml
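Putting it all together, the combined docker-compose.yml might look like the following sketch. The otel-collector service wiring (image, config mount path) is illustrative and assumes the contrib collector image; adjust to your installation:

```yaml
services:
  otel-collector:
    image: otel/opentelemetry-collector-contrib:latest
    restart: always
    ports:
      # The prover node binary exports here.
      - "4317:4317"
    volumes:
      - ./otel-config.yml:/etc/otelcol-contrib/config.yaml
    depends_on:
      - datadog-agent
      - alloy

  datadog-agent:
    image: gcr.io/datadoghq/agent:7
    restart: always
    environment:
      - DD_API_KEY=${DATADOG_API_KEY}
    ports:
      - "4320:4317"

  alloy:
    image: grafana/alloy:latest
    restart: always
    ports:
      - "4321:4317"
      - "12345:12345"
    volumes:
      - ./config.alloy:/etc/alloy/config.alloy
    command: run --server.http.listen-addr=0.0.0.0:12345 --storage.path=/var/lib/alloy/data /etc/alloy/config.alloy
    environment:
      - GRAFANA_CLOUD_API_KEY=${GRAFANA_CLOUD_API_KEY}
```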
When running docker compose up -d, make sure you also specify the Datadog API key assigned to your prover node (prefix the command with DATADOG_API_KEY=${API_KEY}) along with any other API keys, such as the one for Grafana Cloud.
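For example, passing both keys inline (the placeholder values are yours to fill in; the variable names match those used in the compose file):

```shell
DATADOG_API_KEY=<your-dd-key> GRAFANA_CLOUD_API_KEY=<your-grafana-key> docker compose up -d
```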