In this step, we will configure the Docker containers to expose their logs, then use the OpenTelemetry Collector to pull the logs and export them to ServiceNow Cloud Observability.
Edit the docker-compose.yaml file
Add the following code after the version: 3 line
x-default-logging: &logging
  driver: "json-file"
  options:
    max-size: "5m"
    max-file: "2"
    tag: "{{.Name}}|{{.ImageName}}|{{.ID}}"
This defines our logging defaults, which will then be applied to each of our services
For the users: service, add the following line to the list of volumes:
  - /var/lib/docker/containers:/var/lib/docker/containers:ro
This exposes the logs so that they can be pulled by the collector
Also for the users: service, add this line:
  logging: *logging
This configures the service to use the default logging configuration you defined in step 2
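After these edits, the users: service entry might look something like the sketch below (the image name and any other settings shown are illustrative placeholders; keep your existing values and only add the volumes entry and the logging key):

  users:
    image: workshop/users:latest    # illustrative; keep your existing image
    volumes:
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
    logging: *logging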
Repeat steps 2 and 3 for the web and otel-collector services
Add the following user configuration to the otel-collector: service:
  user: 0:0
This provides the service with the necessary permissions to pull the container logs
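Putting the edits together, the otel-collector: service might look something like this sketch (the image and the config-file volume mount are illustrative assumptions; only user, the containers volume, and logging come from this workshop):

  otel-collector:
    image: otel/opentelemetry-collector-contrib:latest    # illustrative
    user: 0:0
    volumes:
      - ./opentelemetry/conf/config.yaml:/etc/otelcol-contrib/config.yaml    # illustrative mount
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
    logging: *logging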
Open the collector configuration file (/opentelemetry/conf/config.yaml)
In the receivers section, add the following:
# Receiver to pull Docker logs
filelog:
  include:
    - /var/lib/docker/containers/*/*-json.log
  encoding: utf-8
  fingerprint_size: 1kb
  force_flush_period: "0"
  include_file_name: false
  include_file_path: true
  max_concurrent_files: 1024
  max_log_size: 1MiB
  operators:
    - type: json_parser
      timestamp:
        layout: "%Y-%m-%dT%H:%M:%S.%LZ"
        parse_from: attributes.time
    - from: attributes.stream
      to: resource["log.io.stream"]
      type: move
    - field: attributes.attrs.tag
      type: remove
      if: "attributes?.attrs?.tag != nil"
    - from: attributes.log
      to: body
      type: move
  poll_interval: 200ms
  start_at: beginning
This configures the filelog receiver to pull the logs from the container volumes. It includes settings for polling, limits, and other options, and it performs some parsing of the log data: the operators format the timestamp and set attributes based on values in the log.
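To make the operators concrete, here is a hypothetical line from one of the *-json.log files, in the shape Docker's json-file driver produces (all values are made up for illustration):

{"log":"Server listening on http://0.0.0.0:80\n","stream":"stdout","time":"2024-05-01T12:34:56.789Z","attrs":{"tag":"users|workshop/users:latest|abc123"}}

The json_parser operator parses the JSON and reads the timestamp from attributes.time, the move operators promote attributes.stream to the log.io.stream resource attribute and attributes.log to the log body, and the remove operator drops the tag attribute when it is present.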
In the pipelines section, add the following logs pipeline:
logs:
  receivers: [filelog]
  processors: [batch]
  exporters: [otlp/lightstep]
This defines the logs pipeline, which receives logs using the filelog receiver, batches them, and then exports them to Cloud Observability.
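Note that pipelines sit under the collector's service: section, alongside any pipelines your configuration already defines. For orientation, the surrounding structure might look something like this (the traces pipeline shown is an illustrative example; keep whatever pipelines you already have):

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/lightstep]
    logs:
      receivers: [filelog]
      processors: [batch]
      exporters: [otlp/lightstep]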
If you are continuing from a previous workshop and/or have already updated the access token in your collector configuration, you can skip this step
Cloud Observability uses access tokens to determine which project to associate your telemetry data with when the data is ingested. When you create a project in Cloud Observability, an access token is created automatically. To get your access token, follow these steps:
Log in to your Cloud Observability account
Click on the Settings icon in the navigation bar
Click on Access tokens under TOKENS AND KEYS
Click the icon button next to your access token to copy the token to your clipboard
In your config.yaml file, replace {LIGHTSTEP_ACCESS_TOKEN} with the access token you just copied and save the file
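After the replacement, the exporter definition in config.yaml should look something like this sketch (the endpoint shown is Cloud Observability's public OTLP ingest address at the time of writing; verify it against what is already in your file):

exporters:
  otlp/lightstep:
    endpoint: ingest.lightstep.com:443
    headers:
      "lightstep-access-token": "<your access token>"

Treat the token like a password and avoid committing it to source control.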
We’re now ready to test our configuration and view our container logs in Cloud Observability.
If your containers are still running, press Ctrl+C in the terminal and wait for them to gracefully stop
Run the following command to restart the containers:
docker-compose up
Once the services are up and running, the logs should be exporting to Cloud Observability. Let’s check it out.
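If you want to confirm the collector itself came up cleanly before heading to the UI, you can tail its output in a second terminal (assuming the service is named otel-collector, as above):

docker-compose logs -f otel-collector

Any errors reading the log files or exporting to Cloud Observability will show up here.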
Let’s take a look at the logs coming from Docker.
It can sometimes take a few minutes for the logs to make it to Cloud Observability. If you don’t see any logs initially, wait a few moments and try again. If you still don’t see any logs after that, double check your collector configuration and review the previous steps.
If you are not still logged into Cloud Observability, log in again
Click on the Logs icon in the side navigation
You should see a list of logs coming from your containers. Click on one to view the details
Try searching for logs. In the search bar at the top, type http://0.0.0.0:80 and hit Enter. This will return any log entries containing that search term
Now we have our container logs being ingested into Cloud Observability. Next we will work on sending custom logs and correlating those logs with traces.
next: Custom Logs