Since the last update, we have added support for tracing for all of the supported frameworks (Flask, Django, FastAPI and Go)! Tracing is a way to see how a request traversed your application, and we support it via Charmed Tempo HA. I took this as an opportunity to write in more detail about observability, with an example of how to integrate a Django application with the Canonical Observability Stack as well as tracing. The story is similar for our other supported frameworks.
Observing and Tracing a Django App
In this post, we’ll take a look at how to make use of the observability features of the native support for web applications in Charmcraft and Rockcraft. At the end, we’ll extend this to include tracing, which we have recently added support for. Observability and tracing are supported for all of our supported frameworks (Flask, Django, FastAPI and Go). Today, we will use a Django application as an example.
Observability is supported via an integration with the Canonical Observability Stack. Let’s explore what this looks like using an example Django application. We will use Multipass, so let’s create and prepare a new VM:
multipass launch --cpus 4 --disk 100G --memory 16G --name charm-dev 24.04
multipass shell charm-dev
sudo snap install rockcraft --classic
lxd --version
lxd init --auto
sudo snap install charmcraft --classic
sudo snap install microk8s --channel 1.31-strict/stable
sudo adduser $USER snap_microk8s
newgrp snap_microk8s
sudo microk8s enable hostpath-storage
sudo microk8s enable registry
sudo microk8s enable ingress
IPADDR=$(ip -4 -j route get 2.2.2.2 | jq -r '.[] | .prefsrc')
microk8s enable metallb:$IPADDR-$IPADDR
sudo snap install juju --channel 3.6/stable
mkdir -p ~/.local/share
juju bootstrap microk8s dev-controller
mkdir django-observability
cd django-observability
sudo apt-get update && sudo apt-get install python3-venv -y
python3 -m venv .venv
source .venv/bin/activate
The next step is to create a sample Django app. We’ll use the Django CLI to create a new project and add a greeting app. To start, create the following requirements.txt file:
Django
psycopg2-binary
Install Django and create the project and greetings app:
pip install -r requirements.txt
django-admin startproject django_observability
cd django_observability
django-admin startapp greeting
Replace the contents of greeting/views.py with:
from django.http import HttpResponse
def index(request):
return HttpResponse("Hello, world!\n")
And create greeting/urls.py with the following contents:
from django.urls import path
from . import views
urlpatterns = [
path("", views.index, name="index"),
]
Open the django_observability/urls.py file and update the contents to:
from django.contrib import admin
from django.urls import include, path
urlpatterns = [
path("", include("greeting.urls")),
path('admin/', admin.site.urls),
]
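If you would like a quick sanity check before building the rock, the sketch below uses Django’s test client to hit the greeting view. It is optional and assumes you run it from the project directory that contains manage.py; the file name quick_check.py is just an example, and at this point the default settings created by startproject are still in place.
# quick_check.py - optional, illustrative sanity check for the greeting view
import os

import django
from django.test import Client
from django.test.utils import setup_test_environment

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "django_observability.settings")
django.setup()
setup_test_environment()  # allows the test client's 'testserver' host

client = Client()
response = client.get("/")
print(response.status_code, response.content)  # expect: 200 b'Hello, world!\n'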
We need to make a few changes to the Django settings file to connect to a database and to make the app production ready (a short sketch after this list shows how the environment-driven settings resolve). Open django_observability/settings.py and:
- Add a few imports at the top:
import json
import os
import secrets
- Update the SECRET_KEY, DEBUG and ALLOWED_HOSTS settings to:
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = os.environ.get('DJANGO_SECRET_KEY', secrets.token_hex(32))
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = os.environ.get('DJANGO_DEBUG', 'false') == 'true'
ALLOWED_HOSTS = json.loads(os.environ.get('DJANGO_ALLOWED_HOSTS', '[]'))
- Change the DATABASES configuration to:
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql',
'NAME': os.environ.get('POSTGRESQL_DB_NAME'),
'USER': os.environ.get('POSTGRESQL_DB_USERNAME'),
'PASSWORD': os.environ.get('POSTGRESQL_DB_PASSWORD'),
'HOST': os.environ.get('POSTGRESQL_DB_HOSTNAME'),
'PORT': os.environ.get('POSTGRESQL_DB_PORT'),
}
}
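Since all of these settings are driven by environment variables, it can help to see how they resolve. The short sketch below uses illustrative values only: in a real deployment the charm supplies the POSTGRESQL_DB_* variables, and DJANGO_ALLOWED_HOSTS is expected to be a JSON list.
import json
import os

# Illustrative values only; the charm supplies these in a real deployment.
os.environ.setdefault("DJANGO_ALLOWED_HOSTS", '["django-observability", "localhost"]')

# DJANGO_ALLOWED_HOSTS must contain a JSON list, not a bare hostname.
print(json.loads(os.environ["DJANGO_ALLOWED_HOSTS"]))  # ['django-observability', 'localhost']

# DEBUG is only enabled when DJANGO_DEBUG is exactly the string 'true'.
print(os.environ.get("DJANGO_DEBUG", "false") == "true")  # False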
Next, we need to create the OCI image and the charm for the application and deploy it using Juju:
cd ..
rockcraft init --profile django-framework
rockcraft pack
rockcraft.skopeo --insecure-policy copy --dest-tls-verify=false \
oci-archive:django-observability_0.1_$(dpkg --print-architecture).rock \
docker://localhost:32000/django-observability:0.1
mkdir charm
cd charm
charmcraft init --profile django-framework --name django-observability
Update charmcraft.yaml with the following section at the end:
requires:
postgresql:
interface: postgresql_client
optional: false
Then keep going with the deployment:
charmcraft fetch-lib charms.tempo_coordinator_k8s.v0.tracing
charmcraft pack
juju add-model django-observability
juju deploy \
./django-observability_ubuntu-22.04-$(dpkg --print-architecture).charm \
django-observability --resource \
django-app-image=localhost:32000/django-observability:0.1
juju deploy postgresql-k8s --trust
juju integrate django-observability postgresql-k8s
juju deploy nginx-ingress-integrator --channel=latest/stable --trust
juju integrate nginx-ingress-integrator django-observability
juju config nginx-ingress-integrator \
service-hostname=django-observability path-routes=/
At this point we have the app deployed. Let’s check that everything works using curl http://django-observability --resolve django-observability:80:127.0.0.1. The app should respond with Hello, world!
The next step is to deploy observability and integrate it with the Django app:
juju deploy cos-lite --trust
juju integrate django-observability grafana
juju integrate django-observability prometheus
juju integrate django-observability loki
We have just asked Juju to do a lot, so let’s wait for it to finish using juju status --watch 2s. Once that is all done, let’s take a look at the dashboards that have been created. Run the following commands to get the endpoints and the default admin password:
juju show-unit catalogue/0 | grep url
juju run grafana/leader get-admin-password
This will show a few URLs and the default admin password. The URL we are interested in ends in grafana and should look something like http://10.18.66.154/django-observability-grafana (your IP address will be different). Append /dashboards to the URL to go to the dashboards overview, click General and then click the Django Operator dashboard. You should see something like this:
It is reporting no requests since we haven’t made any yet! Let’s run curl http://django-observability --resolve django-observability:80:127.0.0.1 a few more times, and over the next few minutes the data will update to something like:
So far, we have deployed our app and the observability stack and got dashboards without needing to define them ourselves! We can also see the application logs. Go to the explore section (something like http://10.18.66.154/django-observability-grafana/explore; your IP address will be different). At the top of the page, set the label filter to juju_application, pick django-observability from the dropdown menu on the right and click run query. When you scroll down, you will see all the access logs of the application:
The next step is to do something a bit more advanced: we will trace a request! We need to create S3 storage to store the traces and deploy Tempo. Do the following:
pip install minio
curl https://raw.githubusercontent.com/canonical/tempo-coordinator-k8s-operator/main/scripts/deploy_minio.py | python
juju deploy tempo-coordinator-k8s --channel edge --trust tempo
juju deploy tempo-worker-k8s --channel edge --trust tempo-worker
juju integrate tempo tempo-worker
juju integrate tempo s3
juju integrate tempo:ingress traefik:traefik-route
juju integrate tempo:grafana-source grafana:grafana-source
juju deploy grafana-agent-k8s --channel latest/edge
juju integrate grafana-agent-k8s:tracing tempo:tracing
juju config grafana-agent-k8s tracing_sample_rate_workload=100
The next step is to add tracing to the Django application. Update the contents of django_observability/greeting/views.py to:
from time import sleep
from django.http import HttpResponse
from opentelemetry import trace
tracer = trace.get_tracer(__name__)
def index(request):
with tracer.start_as_current_span("greeting-span"):
simulate_io()
simulate_db()
return HttpResponse("Hello, world!\n")
def simulate_io():
with tracer.start_as_current_span("io-span"):
sleep(0.1)
def simulate_db():
with tracer.start_as_current_span("db-span"):
sleep(0.2)
This includes the definition of several spans, including a simulated IO call and a simulated database request. Update requirements.txt to the following:
Django
psycopg2-binary
opentelemetry-api
opentelemetry-exporter-otlp
opentelemetry-exporter-otlp-proto-http
opentelemetry-instrumentation
opentelemetry-instrumentation-wsgi
opentelemetry-sdk
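The charm takes care of exporting the spans once the tracing integration is in place (set up below), so the spans above are the only application change needed. If you want to see what the spans contain before rebuilding the rock, a minimal local sketch using the SDK packages we just added, printing spans to the console instead of sending them to Tempo, looks like this:
# Local experiment only: prints spans to the console instead of exporting
# them to Tempo. None of this set-up code goes into the deployed app.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("greeting-local-test")
with tracer.start_as_current_span("greeting-span"):
    with tracer.start_as_current_span("io-span"):
        pass  # each span is printed as JSON when it ends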
In rockcraft.yaml, update the version to 0.2 and run:
rockcraft pack
rockcraft.skopeo --insecure-policy copy --dest-tls-verify=false \
oci-archive:django-observability_0.2_$(dpkg --print-architecture).rock \
docker://localhost:32000/django-observability:0.2
Go into the charm directory and update the requires section at the end of charmcraft.yaml to:
requires:
postgresql:
interface: postgresql_client
optional: false
tracing:
interface: tracing
optional: true
limit: 1
Pack the charm, refresh the deployment and integrate it with tracing:
charmcraft pack
juju refresh django-observability \
--path=./django-observability_ubuntu-22.04-$(dpkg --print-architecture).charm \
--resource django-app-image=localhost:32000/django-observability:0.2
juju integrate grafana-agent-k8s:tracing-provider django-observability:tracing
Now we can send a few requests to the Django app using curl http://django-observability --resolve django-observability:80:127.0.0.1 and the requests should get traced. The next step is to go back to Grafana at http://10.18.66.154/django-observability-grafana/explore (your IP address will be different again). Change the data source to Tempo, switch to the search query type and select django-observability as the service name:
Scroll down and select one of the listed requests which will open a window on the right showing the details of the trace:
We can see the outer greeting span and the inner IO and database spans. And that is it! We created a Django application and integrated it with observability, including tracing. We saw the default dashboard giving us metrics for how the application is being used, we viewed the application logs, and we traced a request showing how long the simulated IO and database interactions took.
Thank you to @0x12b and @ppasotti and the rest of the observability team for creating great observability charms! And thank you to @alithethird for developing this feature!