I am trying to make the OpenTelemetry exporter work with the OpenTelemetry collector.
I found this OpenTelemetry collector demo, so I copied these four config files to my app.
Based on these two demos in the open-telemetry/opentelemetry-js repo, I came up with my version (sorry it is a bit long; it was really hard to set up a minimal working version due to the lack of docs):
.env
OTELCOL_IMG=otel/opentelemetry-collector-dev:latest
OTELCOL_ARGS=
docker-compose.yml
version: '3.7'
services:
  # Jaeger
  jaeger-all-in-one:
    image: jaegertracing/all-in-one:latest
    ports:
      - "16686:16686"
      - "14268"
      - "14250"
  # Zipkin
  zipkin-all-in-one:
    image: openzipkin/zipkin:latest
    ports:
      - "9411:9411"
  # Collector
  otel-collector:
    image: ${OTELCOL_IMG}
    command: ["--config=/etc/otel-collector-config.yaml", "${OTELCOL_ARGS}"]
    volumes:
      - ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
    ports:
      - "1888:1888"   # pprof extension
      - "8888:8888"   # Prometheus metrics exposed by the collector
      - "8889:8889"   # Prometheus exporter metrics
      - "13133:13133" # health_check extension
      - "55678"       # OpenCensus receiver
      - "55680:55679" # zpages extension
    depends_on:
      - jaeger-all-in-one
      - zipkin-all-in-one
  # Agent
  otel-agent:
    image: ${OTELCOL_IMG}
    command: ["--config=/etc/otel-agent-config.yaml", "${OTELCOL_ARGS}"]
    volumes:
      - ./otel-agent-config.yaml:/etc/otel-agent-config.yaml
    ports:
      - "1777:1777"   # pprof extension
      - "8887:8888"   # Prometheus metrics exposed by the agent
      - "14268"       # Jaeger receiver
      - "55678"       # OpenCensus receiver
      - "55679:55679" # zpages extension
      - "13133"       # health_check
    depends_on:
      - otel-collector
otel-agent-config.yaml
receivers:
  opencensus:
  zipkin:
    endpoint: :9411
  jaeger:
    protocols:
      thrift_http:
exporters:
  opencensus:
    endpoint: "otel-collector:55678"
    insecure: true
  logging:
    loglevel: debug
processors:
  batch:
  queued_retry:
extensions:
  pprof:
    endpoint: :1777
  zpages:
    endpoint: :55679
  health_check:
service:
  extensions: [health_check, pprof, zpages]
  pipelines:
    traces:
      receivers: [opencensus, jaeger, zipkin]
      processors: [batch, queued_retry]
      exporters: [opencensus, logging]
    metrics:
      receivers: [opencensus]
      processors: [batch]
      exporters: [logging, opencensus]
otel-collector-config.yaml
receivers:
  opencensus:
exporters:
  prometheus:
    endpoint: "0.0.0.0:8889"
    namespace: promexample
    const_labels:
      label1: value1
  logging:
  zipkin:
    endpoint: "http://zipkin-all-in-one:9411/api/v2/spans"
    format: proto
  jaeger:
    endpoint: jaeger-all-in-one:14250
    insecure: true
processors:
  batch:
  queued_retry:
extensions:
  health_check:
  pprof:
    endpoint: :1888
  zpages:
    endpoint: :55679
service:
  extensions: [pprof, zpages, health_check]
  pipelines:
    traces:
      receivers: [opencensus]
      processors: [batch, queued_retry]
      exporters: [logging, zipkin, jaeger]
    metrics:
      receivers: [opencensus]
      processors: [batch]
      exporters: [logging]
After running docker-compose up -d, I can open the Jaeger UI (http://localhost:16686) and the Zipkin UI (http://localhost:9411).
And my ConsoleSpanExporter works in both the web client and the Express.js server.
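For context, my tracer provider setup is roughly the following (a minimal sketch of the Node side, assuming @opentelemetry/node 0.10.x; the web client does the same with a web provider):

import { NodeTracerProvider } from '@opentelemetry/node';
import { SimpleSpanProcessor, ConsoleSpanExporter } from '@opentelemetry/tracing';

// Create a provider and print every finished span to the console.
const tracerProvider = new NodeTracerProvider();
tracerProvider.addSpanProcessor(new SimpleSpanProcessor(new ConsoleSpanExporter()));
tracerProvider.register();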
However, when I try this OpenTelemetry exporter code in both the client and the server, I still cannot connect to the OpenTelemetry collector. Please see my comments about the URLs inside the code:
import { CollectorTraceExporter } from '@opentelemetry/exporter-collector';
import { SimpleSpanProcessor, ConsoleSpanExporter } from '@opentelemetry/tracing';
// ...
tracerProvider.addSpanProcessor(new SimpleSpanProcessor(new ConsoleSpanExporter()));
tracerProvider.addSpanProcessor(
  new SimpleSpanProcessor(
    new CollectorTraceExporter({
      serviceName: 'my-service',
      // url: 'http://localhost:55680/v1/trace', // Returns a 404 error.
      // url: 'http://localhost:55681/v1/trace', // No response; does not exist.
      // url: 'http://localhost:14268/v1/trace', // No response; does not exist.
    })
  )
);
Any ideas? Thanks!
The demo you tried uses an older configuration and the OpenCensus receiver, which should be replaced with the OTLP receiver. That said, here is a working example: https://github.com/open-telemetry/opentelemetry-js/tree/master/examples/collector-exporter-node/docker So I'm copying the files from there:
docker-compose.yaml
version: "3"
services:
# Collector
collector:
image: otel/opentelemetry-collector:latest
command: ["--config=/conf/collector-config.yaml", "--log-level=DEBUG"]
volumes:
- ./collector-config.yaml:/conf/collector-config.yaml
ports:
- "9464:9464"
- "55680:55680"
- "55681:55681"
depends_on:
- zipkin-all-in-one
# Zipkin
zipkin-all-in-one:
image: openzipkin/zipkin:latest
ports:
- "9411:9411"
# Prometheus
prometheus:
container_name: prometheus
image: prom/prometheus:latest
volumes:
- ./prometheus.yaml:/etc/prometheus/prometheus.yml
ports:
- "9090:9090"
collector-config.yaml
receivers:
  otlp:
    protocols:
      grpc:
      http:
exporters:
  zipkin:
    endpoint: "http://zipkin-all-in-one:9411/api/v2/spans"
  prometheus:
    endpoint: "0.0.0.0:9464"
processors:
  batch:
  queued_retry:
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [zipkin]
      processors: [batch, queued_retry]
    metrics:
      receivers: [otlp]
      exporters: [prometheus]
      processors: [batch, queued_retry]
prometheus.yaml
global:
  scrape_interval: 15s # Default is every 1 minute.
scrape_configs:
  - job_name: 'collector'
    # metrics_path defaults to '/metrics'.
    # scheme defaults to 'http'.
    static_configs:
      - targets: ['collector:9464']
This should work fine with opentelemetry-js v0.10.2.
The default port for traces is 55680 and for metrics 55681.
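With no url set, the exporter falls back to those defaults. Setting it explicitly would look roughly like this (a hedged sketch; the exact default protocol depends on which 0.10.x package variant you use):

import { CollectorTraceExporter } from '@opentelemetry/exporter-collector';

const exporter = new CollectorTraceExporter({
  serviceName: 'my-service',
  // Assumption: the Node variant speaks gRPC to port 55680, so no scheme or path;
  // the web variant instead posts JSON to http://localhost:55680/v1/trace.
  url: 'localhost:55680',
});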
At the link I posted previously you will always find the latest working example: https://github.com/open-telemetry/opentelemetry-js/tree/master/examples/collector-exporter-node And for a web example you can use the same Docker setup and see all the working examples here: https://github.com/open-telemetry/opentelemetry-js/tree/master/examples/tracer-web/
Thank you so much, @BObecny, for the help! This is a complement to @BObecny's answer.
Since I am more interested in integrating with Jaeger, here is the config to set it up with Jaeger, Zipkin, and Prometheus all together. It now works on both the front end and the back end.
First, both the front end and the back end use the same exporter code:
import { CollectorTraceExporter } from '@opentelemetry/exporter-collector';
// ...
tracerProvider.addSpanProcessor(
  new SimpleSpanProcessor(
    new CollectorTraceExporter({
      serviceName: 'my-service',
    })
  )
);
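On the front end this exporter plugs into a web tracer provider; here is a minimal sketch (assuming @opentelemetry/web 0.10.x, not verbatim from my app):

import { WebTracerProvider } from '@opentelemetry/web';
import { SimpleSpanProcessor } from '@opentelemetry/tracing';
import { CollectorTraceExporter } from '@opentelemetry/exporter-collector';

// Same exporter as above, registered on the browser-side provider.
const tracerProvider = new WebTracerProvider();
tracerProvider.addSpanProcessor(
  new SimpleSpanProcessor(new CollectorTraceExporter({ serviceName: 'my-service' }))
);
tracerProvider.register();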
docker-compose.yaml
version: "3"
services:
# Collector
collector:
image: otel/opentelemetry-collector:latest
command: ["--config=/conf/collector-config.yaml", "--log-level=DEBUG"]
volumes:
- ./collector-config.yaml:/conf/collector-config.yaml
ports:
- "9464:9464"
- "55680:55680"
- "55681:55681"
depends_on:
- jaeger-all-in-one
- zipkin-all-in-one
# Jaeger
jaeger-all-in-one:
image: jaegertracing/all-in-one:latest
ports:
- "16686:16686"
- "14268"
- "14250"
# Zipkin
zipkin-all-in-one:
image: openzipkin/zipkin:latest
ports:
- "9411:9411"
# Prometheus
prometheus:
container_name: prometheus
image: prom/prometheus:latest
volumes:
- ./prometheus.yaml:/etc/prometheus/prometheus.yml
ports:
- "9090:9090"
collector-config.yaml
receivers:
  otlp:
    protocols:
      grpc:
      http:
exporters:
  jaeger:
    endpoint: jaeger-all-in-one:14250
    insecure: true
  zipkin:
    endpoint: "http://zipkin-all-in-one:9411/api/v2/spans"
  prometheus:
    endpoint: "0.0.0.0:9464"
processors:
  batch:
  queued_retry:
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [zipkin, jaeger] # jaeger must be listed here, or it receives nothing
      processors: [batch, queued_retry]
    metrics:
      receivers: [otlp]
      exporters: [prometheus]
      processors: [batch, queued_retry]
prometheus.yaml
global:
  scrape_interval: 15s # Default is every 1 minute.
scrape_configs:
  - job_name: 'collector'
    # metrics_path defaults to '/metrics'.
    # scheme defaults to 'http'.
    static_configs:
      - targets: ['collector:9464']
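To smoke-test the whole stack, you can emit a single span by hand and check the UIs. A minimal sketch for the Node side (the smoke-test names are mine, assuming @opentelemetry/node 0.10.x):

import { NodeTracerProvider } from '@opentelemetry/node';
import { SimpleSpanProcessor } from '@opentelemetry/tracing';
import { CollectorTraceExporter } from '@opentelemetry/exporter-collector';

const tracerProvider = new NodeTracerProvider();
// SimpleSpanProcessor exports each span as soon as it ends, which is handy for testing.
tracerProvider.addSpanProcessor(
  new SimpleSpanProcessor(new CollectorTraceExporter({ serviceName: 'my-service' }))
);
tracerProvider.register();

const span = tracerProvider.getTracer('smoke-test').startSpan('test-span');
span.end();
// The span should show up in Jaeger (http://localhost:16686) and Zipkin (http://localhost:9411).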