The OpenTelemetry Collector provides a vendor-agnostic solution for receiving, processing, and exporting telemetry data. We won't cover the collector in depth here, but we encourage you to review the collector documentation to learn more about the features and capabilities it provides. For the purposes of this guide, the collector receives the telemetry data from Istio, batches it, and exports it to Cloud Observability.
Create a file named `otel-collector.yaml` and add the following `ConfigMap` to the file:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: opentelemetry-collector-conf
  namespace: istio-system
  labels:
    app: opentelemetry-collector
data:
  opentelemetry-collector-config: |
    receivers:
      otlp:
        protocols:
          grpc:
          http:
    processors:
      batch:
    exporters:
      otlp/lightstep:
        endpoint: ingest.lightstep.com:443
        headers:
          "lightstep-access-token": "<YOUR_TOKEN>"
      logging:
        loglevel: debug
    extensions:
      health_check:
    service:
      extensions:
      - health_check
      pipelines:
        logs:
          receivers: [otlp]
          processors: [batch]
          exporters: [logging]
        traces:
          receivers: [otlp]
          processors: [batch]
          exporters: [logging, otlp/lightstep]
        metrics:
          receivers: [otlp]
          processors: [batch]
          exporters: [logging, otlp/lightstep]
```
Replace `<YOUR_TOKEN>` with a Lightstep access token for your project in Cloud Observability. See the instructions here on how to create and manage access tokens in Cloud Observability.
If you encounter errors after applying this configuration, or your telemetry data fails to export from the collector to Cloud Observability, remove the quotation marks around your access token. Some environments require quotation marks around tokens with special characters, while other environments may treat the quotation marks as part of the token.
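Another way to sidestep quoting problems is to keep the token out of the file entirely. The sketch below is a suggestion, not part of the original guide: it assumes a Kubernetes Secret named `lightstep-access-token` with a key `token` (both hypothetical names), and relies on the collector's environment-variable expansion in its configuration (the exact syntax can vary by collector version):

```yaml
# Hypothetical alternative: read the token from an environment variable.
# In the ConfigMap above, the collector expands the reference at startup:
#
#   headers:
#     "lightstep-access-token": "${LIGHTSTEP_ACCESS_TOKEN}"
#
# In the Deployment's container spec (shown later), populate that variable
# from a Secret; the Secret name and key here are assumptions:
env:
- name: LIGHTSTEP_ACCESS_TOKEN
  valueFrom:
    secretKeyRef:
      name: lightstep-access-token
      key: token
```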
Add the following service definition to the file:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: opentelemetry-collector
  namespace: istio-system
  labels:
    app: opentelemetry-collector
spec:
  ports:
  - name: grpc-otlp # Default endpoint for OpenTelemetry receiver.
    port: 4317
    protocol: TCP
    targetPort: 4317
  selector:
    app: opentelemetry-collector
```
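Within the cluster, this service exposes the collector's OTLP/gRPC endpoint at `opentelemetry-collector.istio-system.svc.cluster.local:4317`. If you want to reach the endpoint from your workstation for ad-hoc testing, one option (our suggestion, not part of the original guide) is to port-forward the service:

```sh
# Forward the collector's OTLP/gRPC port to localhost for ad-hoc testing.
kubectl port-forward -n istio-system svc/opentelemetry-collector 4317:4317
```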
Add the following deployment to the file:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: opentelemetry-collector
  namespace: istio-system
spec:
  selector:
    matchLabels:
      app: opentelemetry-collector
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: opentelemetry-collector
        sidecar.istio.io/inject: "false" # do not inject the Istio sidecar into the collector pod
    spec:
      containers:
      - command:
        - "/otelcol"
        - "--config=/conf/opentelemetry-collector-config.yaml" # config mounted from the ConfigMap below
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        image: otel/opentelemetry-collector:0.71.0
        imagePullPolicy: IfNotPresent
        name: opentelemetry-collector
        ports:
        - containerPort: 4317
          protocol: TCP
        resources:
          limits:
            cpu: "2"
            memory: 4Gi
          requests:
            cpu: 200m
            memory: 400Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - name: opentelemetry-collector-config-vol
          mountPath: /conf
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      terminationGracePeriodSeconds: 30
      volumes:
      - configMap:
          defaultMode: 420
          items:
          - key: opentelemetry-collector-config
            path: opentelemetry-collector-config.yaml
          name: opentelemetry-collector-conf
        name: opentelemetry-collector-config-vol
```
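The configuration above enables the collector's `health_check` extension, which by default serves an HTTP health endpoint on port 13133. If you want Kubernetes to act on that signal, one possible addition (ours, not part of the original manifest) is to point liveness and readiness probes at it in the container spec:

```yaml
# Sketch: probe the health_check extension's default endpoint (port 13133).
# Add under the opentelemetry-collector container in the Deployment above.
livenessProbe:
  httpGet:
    path: /
    port: 13133
readinessProbe:
  httpGet:
    path: /
    port: 13133
```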
Apply the configuration to the `istio-system` namespace in the cluster:

```sh
kubectl apply -f otel-collector.yaml -n istio-system
```
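To confirm the rollout succeeded, you can check the pod status and watch the collector's logs; because the `logging` exporter above is set to `debug`, received telemetry shows up there as well. These verification commands are our suggestion, not part of the original guide:

```sh
# Check that the collector pod is running.
kubectl get pods -n istio-system -l app=opentelemetry-collector

# Tail the collector logs and look for startup or export errors.
kubectl logs -n istio-system deployment/opentelemetry-collector --follow
```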
This example uses a specific version of the collector. To use the latest version or another preferred version, update the `image` field with the desired version of `opentelemetry-collector`. See here for the list of collector releases.
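For example, one way to move an already-deployed collector to a different release is to update the image tag in place; alternatively, edit the `image` field in `otel-collector.yaml` and re-apply. The command below is a sketch, with `<VERSION>` standing in for your chosen release tag:

```sh
# Roll the deployment to a different collector release.
kubectl set image -n istio-system deployment/opentelemetry-collector \
  opentelemetry-collector=otel/opentelemetry-collector:<VERSION>
```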
The OpenTelemetry Collector is now deployed and running in the cluster. We now need to configure Istio to send its distributed tracing data to the collector.
Next: Configure Istio Telemetry