Charmed Temporal K8s Tutorial - Deploy Temporal Worker

Deploy Temporal worker

Requirements

  • rockcraft installed.
  • A local OCI image registry to push images to, or access to a public one.

This is part of the Charmed Temporal Tutorial. Please refer to this page for more information and the overview of the content.

The Temporal worker is the entity that polls a specific task queue and executes code in response to each task it receives.
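Conceptually, the loop a worker runs looks roughly like this. This is a plain-Python illustration only, not Temporal SDK code: a real worker uses the SDK's Worker class, which handles long-polling the server, heartbeats, and retries.

```python
import queue

# Conceptual sketch of a worker loop (illustration only -- a real
# Temporal worker uses the SDK, which polls the server for tasks).
def run_worker(task_queue, handlers):
    """Drain task_queue, dispatching each task to its registered handler."""
    results = []
    while True:
        try:
            task = task_queue.get_nowait()
        except queue.Empty:
            break  # a real worker would keep polling instead of stopping
        handler = handlers[task["type"]]
        results.append(handler(task["payload"]))
    return results

tasks = queue.Queue()
tasks.put({"type": "greet", "payload": "Temporal"})
handlers = {"greet": lambda name: f"Hello, {name}!"}
print(run_worker(tasks, handlers))  # ['Hello, Temporal!']
```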

The Temporal worker charm allows users to upload and automatically run custom worker scripts, independent of the chosen SDK. This is achieved by creating a rock containing all runtime dependencies, worker scripts, and workflows, which is then used at deployment time.

Because of this, deploying the worker involves two steps:

  1. Creating a custom container image for the worker using rocks.
  2. Deploying the worker charm using a custom container image.

In a production setting, a Temporal worker can be deployed in a separate environment from the Temporal server. For simplicity, this guide assumes the server and worker belong to the same network, and can be connected directly.

If this is not the case, consider using an ingress. See Configure Ingress with Nginx Ingress Integrator for more details.

Custom container image for the worker

To create a custom container image for the worker, you need to build and publish it using Rockcraft. The steps below guide you through creating a rock-based image that includes your worker script and associated workflows.

  1. Create a rockcraft project. You can use the rockcraft.yaml as a template.

  2. Ensure the command of the rock runs the worker script directly. For example, if command: "./app/scripts/start-worker.sh":

$ cat start-worker.sh

python3 app/resource_sample/worker.py

  3. Ensure your activities and workflows are also included in the rock, as the worker script needs access to them.

  4. Build the rock with rockcraft pack.

  5. Make your rock available in a local or public registry. See Publish a rock to a registry for details.
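A rockcraft.yaml for such a rock might look roughly like the sketch below. This is illustrative only: the part names, paths, and package list are assumptions, not taken from the tutorial's template, so adapt them to your own project layout.

```yaml
# Sketch only -- names, paths, and packages here are assumptions.
name: temporal-worker
summary: Temporal worker rock
description: OCI image bundling the worker script, workflows, and activities.
version: "1.0"
base: ubuntu@22.04
license: Apache-2.0
platforms:
  amd64:

# The container's entry point; matches the start-worker.sh example above.
services:
  temporal-worker:
    override: replace
    command: ./app/scripts/start-worker.sh
    startup: enabled

parts:
  worker:
    plugin: dump
    source: .
    # Copy the worker script plus its workflows and activities into the image.
    organize:
      scripts/: app/scripts/
      resource_sample/: app/resource_sample/
    stage-packages:
      - python3
```

After rockcraft pack, this would produce an OCI archive you can push to your registry before deploying the charm.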

Deploy and configure Temporal worker

Once the rock is built and published, deploy the worker charm.

  1. Add a model where worker charms will be deployed:

Deploying the worker charms in a dedicated Juju model allows a logical separation between the Charmed Temporal server components and the workers.

juju add-model temporal-workers-model

  2. Deploy the worker charm using the recently created image:

juju deploy temporal-worker-k8s --resource temporal-worker-image=<your-registry>/<your-rock-name>:<tag>

  3. Create a configuration file with information about the server hostname, the task queue to poll, and the namespace to connect to:

$ cat config.yaml

temporal-worker-k8s:
  host: "temporal-server-hostname:7233"
  queue: "your-queue"
  namespace: "your-namespace"

  4. Configure the worker charm with the configuration file from the previous step:

juju config temporal-worker-k8s --file=path/to/config.yaml

See next: Run Your First Workflow

This is a great tutorial. Thanks for putting it together.

The command: kubectl -n temporal-model logs temporal-worker-k8s-0 -c temporal-worker -f

should be: kubectl -n temporal-worker logs temporal-worker-k8s-0 -c temporal-worker -f

I don’t know if you want to get into details about proxies but if you do, I had to use the standard microk8s proxy settings documented at MicroK8s - Installing behind a proxy with ‘temporal’ added to no_proxy:

HTTPS_PROXY=http://squid.internal:3128
HTTP_PROXY=http://squid.internal:3128
NO_PROXY=10.0.0.0/8,192.168.0.0/16,127.0.0.1,172.16.0.0/16,temporal
https_proxy=http://squid.internal:3128
http_proxy=http://squid.internal:3128
no_proxy=10.0.0.0/8,192.168.0.0/16,127.0.0.1,172.16.0.0/16,temporal

otherwise, kubectl -n temporal-worker logs temporal-worker-k8s-0 -c temporal-worker -f would fail with 403 Forbidden.

Thanks for the feedback! I’ve updated the instructions to include the correct model name. Since writing this tutorial, we have also added proxy configuration parameters for the worker charm, which are injected as environment variables into the workload container. Have you tried using these?
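For reference, those proxy parameters would be set the same way as the rest of the worker configuration — something along these lines. This is a sketch only: the option names http-proxy, https-proxy, and no-proxy are assumptions, so check the charm's configuration reference on Charmhub for the exact names.

```yaml
# Sketch only -- option names are assumed; verify against the charm's
# configuration reference before applying.
temporal-worker-k8s:
  http-proxy: "http://squid.internal:3128"
  https-proxy: "http://squid.internal:3128"
  no-proxy: "10.0.0.0/8,192.168.0.0/16,127.0.0.1,172.16.0.0/16,temporal"
```

Applied with juju config temporal-worker-k8s --file=proxy-config.yaml, this would avoid having to set the MicroK8s-level proxy variables for the workload container.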