
Custom Export

Prover Nodes can export to their own OTLP endpoints using a combination of an OtelCol Docker container and an additional receiver of your choice, such as Grafana Alloy or individual options like Prometheus.

Currently, we only support metrics, traces and logs through gRPC OTLP.

The architecture routes prover node exports through OtelCol, which then dual-exports to both Datadog and Grafana Alloy:

| Service | Purpose | Port |
| --- | --- | --- |
| Otel Collector | gRPC OTLP receiver | 4317 |
| Datadog Agent | OtelCollector export endpoint | 4320 |
| Grafana Alloy | OtelCollector export endpoint | 4321 |

Datadog Agent

The Datadog Agent still listens on 4317 inside its container, but its host port must be remapped to 4320 so it does not clash with the collector. Update the `ports` mapping in `docker-compose.yml` from the default 4317:

```yaml
ports:
  - "4320:4317"
```

Grafana Alloy

Alloy setup requires two files: a docker-compose service declaration and a configuration file.

docker-compose service

```yaml
services:
  alloy:
    restart: always
    image: grafana/alloy:latest
    ports:
      - "4321:4317"
      - "12345:12345"
    volumes:
      - ./config.alloy:/etc/alloy/config.alloy
    command: run --server.http.listen-addr=0.0.0.0:12345 --storage.path=/var/lib/alloy/data /etc/alloy/config.alloy
    environment:
      - GRAFANA_CLOUD_API_KEY=${GRAFANA_CLOUD_API_KEY}
```

config.alloy

The config must be customized based on each prover node’s setup, specifying export endpoints for Prometheus (metrics), Tempo (traces), and Loki (logs).

If you run local receivers for each endpoint, refer to the official Alloy configuration reference. In most cases, the only changes needed are removing the authentication blocks and updating the endpoint URLs.
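As a sketch of that local-receiver case (the hostnames and ports below are assumptions; substitute your own Prometheus and Loki addresses), the write components reduce to endpoints without `basic_auth`:

```alloy
// Hypothetical local endpoints -- no basic_auth blocks needed.
// Prometheus must be started with --web.enable-remote-write-receiver
// for its /api/v1/write endpoint to accept remote-write data.
prometheus.remote_write "default" {
  endpoint {
    url = "http://localhost:9090/api/v1/write"
  }
}

loki.write "default" {
  endpoint {
    url = "http://localhost:3100/loki/api/v1/push"
  }
}
```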

```alloy
logging {
  level  = "info"
  format = "logfmt"
}

otelcol.receiver.otlp "default" {
  grpc { }

  output {
    metrics = [otelcol.processor.resourcedetection.default.input]
    logs    = [otelcol.processor.resourcedetection.default.input]
    traces  = [otelcol.processor.resourcedetection.default.input]
  }
}

otelcol.processor.resourcedetection "default" {
  detectors = ["env", "system", "ec2", "gcp", "docker"]

  system {
    hostname_sources = ["os"]
  }

  output {
    metrics = [otelcol.processor.transform.add_resource_attributes_as_metric_attributes.input]
    logs    = [otelcol.processor.batch.default.input]
    traces  = [
      otelcol.processor.batch.default.input,
      otelcol.connector.host_info.default.input,
    ]
  }
}

otelcol.connector.host_info "default" {
  host_identifiers = ["host.name"]

  output {
    metrics = [otelcol.processor.batch.default.input]
  }
}

otelcol.processor.transform "add_resource_attributes_as_metric_attributes" {
  error_mode = "ignore"

  metric_statements {
    context = "datapoint"
    statements = [
      "set(attributes[\"deployment.environment\"], resource.attributes[\"deployment.environment\"])",
      "set(attributes[\"service.version\"], resource.attributes[\"service.version\"])",
    ]
  }

  output {
    metrics = [otelcol.processor.batch.default.input]
  }
}

otelcol.processor.batch "default" {
  output {
    metrics = [otelcol.exporter.prometheus.metrics_service.input]
    logs    = [otelcol.exporter.loki.logs_service.input]
    traces  = [otelcol.exporter.otlp.grafana_cloud_tempo.input]
  }
}

otelcol.exporter.prometheus "metrics_service" {
  add_metric_suffixes = false
  forward_to          = [prometheus.remote_write.default.receiver]
}

otelcol.exporter.loki "logs_service" {
  forward_to = [loki.write.default.receiver]
}

otelcol.exporter.otlp "grafana_cloud_tempo" {
  client {
    endpoint = "tempo-prod-04-prod-us-east-0.grafana.net:443"
    auth     = otelcol.auth.basic.grafana_cloud_tempo.handler
  }
}

otelcol.auth.basic "grafana_cloud_tempo" {
  username = "863733"
  password = env("GRAFANA_CLOUD_API_KEY")
}

loki.write "default" {
  endpoint {
    url = "https://logs-prod-006.grafana.net/loki/api/v1/push"

    basic_auth {
      username = "869417"
      password = env("GRAFANA_CLOUD_API_KEY")
    }
  }
}

prometheus.remote_write "default" {
  endpoint {
    url = "https://prometheus-prod-13-prod-us-east-0.grafana.net/api/prom/push"

    basic_auth {
      username = "1537843"
      password = env("GRAFANA_CLOUD_API_KEY")
    }
  }
}
```

Otel Collector

This container is the glue that ties everything together. The otelcol endpoint becomes the export target for the prover node binary, since the collector listens on port 4317.
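Assuming the prover node uses the standard OpenTelemetry SDK environment variables (check your node's documentation for its actual flag or config key), pointing it at the collector would look like:

```shell
# Standard OTLP SDK environment variable; your prover node may expose
# its own flag instead. "localhost" assumes the node runs on the same
# host as the collector container.
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
```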

The otelcol container needs both a Docker Compose service declaration and a YAML configuration file. Both are declared in the sections below.

We need to run everything in the same Compose file: Docker Compose conveniently creates a network per file, which lets each service reach the others by DNS using the service name declared in the file.
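For example, because the Datadog Agent is declared as the service `datadog`, the collector configuration can address it by that name instead of an IP:

```yaml
# "datadog" resolves to the datadog service on the Compose network.
exporters:
  otlp/dd:
    endpoint: datadog:4317
```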

Final Docker Compose File

```yaml
configs:
  datadog_config:
    content: |
      apm_config:
        apm_non_local_traffic: true
      # Use java container support
      jmx_use_container_support: true
      otlp_config:
        metrics:
          # Add all resource attributes as tags for metrics
          resource_attributes_as_tags: true

services:
  datadog:
    image: datadog/agent:7.60.1
    environment:
      - DD_API_KEY=${DATADOG_API_KEY}
      - DD_SITE=us5.datadoghq.com
      - DD_OTLP_CONFIG_RECEIVER_PROTOCOLS_GRPC_ENDPOINT=0.0.0.0:4317
      - DD_LOGS_ENABLED=true
      - DD_OTLP_CONFIG_LOGS_ENABLED=true
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /proc/:/host/proc/:ro
      - /sys/fs/cgroup:/host/sys/fs/cgroup:ro
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
    configs:
      - source: datadog_config
        target: /etc/datadog-agent/datadog.yaml
    ports:
      - "4320:4317"
    restart: unless-stopped

  alloy:
    restart: unless-stopped
    image: grafana/alloy:latest
    ports:
      - "4321:4317"
      - "12345:12345"
    volumes:
      - ./config.alloy:/etc/alloy/config.alloy
    command: run --server.http.listen-addr=0.0.0.0:12345 --storage.path=/var/lib/alloy/data /etc/alloy/config.alloy
    environment:
      - GRAFANA_CLOUD_API_KEY=${GRAFANA_CLOUD_API_KEY}

  otelcol:
    image: otel/opentelemetry-collector-contrib
    volumes:
      - ./otel-config.yaml:/etc/otelcol-contrib/config.yaml
    ports:
      - 13133:13133 # health_check extension
      - 4317:4317   # OTLP gRPC receiver
```

otel-config.yaml

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

exporters:
  otlp/dd:
    endpoint: datadog:4317
    tls:
      insecure: true
  otlp/alloy:
    endpoint: alloy:4317
    tls:
      insecure: true

service:
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [otlp/dd, otlp/alloy]
    traces:
      receivers: [otlp]
      exporters: [otlp/dd, otlp/alloy]
    logs:
      receivers: [otlp]
      exporters: [otlp/dd, otlp/alloy]
```

Deployment

With this setup, you’ll end up with the following files:

  • docker-compose.yml
  • config.alloy
  • otel-config.yaml

When running docker compose up -d, make sure you also pass the Datadog API key assigned to your prover node (prefix the command with DATADOG_API_KEY=${API_KEY}), along with any other API keys, such as the Grafana Cloud key:

```shell
DATADOG_API_KEY=${API_KEY} GRAFANA_CLOUD_API_KEY=${API_KEY} docker compose up -d
```
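Once the stack is up, a quick sanity check (assuming you run these from the directory containing the Compose file) is to confirm all three services are running and the collector started cleanly:

```shell
# All three services (datadog, alloy, otelcol) should show as running.
docker compose ps

# The collector log should show the OTLP receiver listening on 4317
# and no exporter errors.
docker compose logs otelcol
```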