How-to: Connect to Kafka using Hydra as OIDC provider

In this article, we show how Kafka clients (producers and consumers) can authenticate with Kafka using OAuth and Hydra as an OIDC provider.

Juju environment

The instructions provided in this article are based on the following components and versions:

  • Juju controller: 3.4.2
  • MicroK8s: v1.25.1
  • Terraform: v1.7.2 (Juju provider: v0.10.1)

Please make sure the metallb add-on is enabled on MicroK8s:

microk8s enable metallb:10.1.81.152-10.1.81.160

Charm environment (ecosystem)

The following figure shows the K8S charms required for this guide, their relations, and the client processes.

This ecosystem is composed of the identity charms (green), the event-bus charms (blue), the certificate charm (yellow), and the clients, which are simple Python scripts running on the host machine.

Setting up the environment

In order to bring up this ecosystem, we are going to use the following repository, which contains various examples of usage of the event-bus, including an OAuth client example:

git clone https://github.com/canonical/eventbus-client-samples 
cd eventbus-client-samples && make setup

In the oauth folder we locate the Terraform plan and apply it:

# before running the terraform plan, let's create a model
juju add-model kafka-oauth
cd oauth/terraform/environments/k8s/hydra
terraform init
terraform apply

Creating a Hydra OAuth client ID

Once Hydra is up and running, we can use a Juju action to create an OAuth client ID for our Kafka clients.

# in the base folder: eventbus-client-samples
juju run hydra/0 create-oauth-client --quiet scope="[profile,email,phone,offline]" grant-types="[client_credentials]" audience="[kafka]" > auth/hydra.yaml
# note that the client is configured to read the OAuth details from auth/hydra.yaml file

The client ID can be retrieved as follows (we need it below when setting ACLs):

cat auth/hydra.yaml | yq eval '."client-id"'
# example: 90d433b1-fe84-42d8-a3dd-ed9b6fd47c08
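
For reference, a Python client could load these details with a few lines like the following. This is a minimal sketch: only the client-id key is shown above, so the client-secret key name is an assumption to be verified against the generated auth/hydra.yaml.

# sketch: load the OAuth client details produced by the Juju action
# (only "client-id" is confirmed above; "client-secret" is an assumed key name)
import yaml

with open("auth/hydra.yaml") as f:
    oauth = yaml.safe_load(f)

client_id = oauth["client-id"]
client_secret = oauth["client-secret"]  # assumption: verify the key in auth/hydra.yaml
print(client_id)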

Configuring permissions (ACLs)

In order for our client to be able to produce to and consume from a topic, we need to set up some permissions. This is done using the Event-bus Governor. This charm allows us to specify topics and ACLs as configuration and synchronizes them with Kafka. For the ACLs, we can use the client ID of the service account as the identifier of the user (principal), and we specify them in the policies file oauth/terraform/environments/k8s/hydra/config/policies.yaml, which looks as follows:

policies:
  topics: 
    - name: test
      partitions: 1
      replication: 1
      acls:
        allow:
        - principal: 90d433b1-fe84-42d8-a3dd-ed9b6fd47c08
          operations: [read, write, describe]
  groups:
    - name: test-group
      acls:
        allow:
        - principal: 90d433b1-fe84-42d8-a3dd-ed9b6fd47c08
          operations: [read]

These ACLs can be applied by running:

juju config governor policies=@oauth/terraform/environments/k8s/hydra/config/policies.yaml

Configuring the client

At this point, we have configured the service account and the permissions. An additional requirement for the client is to access the CA certificate (used to sign Kafka certificates), since we are using self-signed ones. We retrieve it using the following action:

juju run tls-operator/0 get-ca-certificate --format yaml --quiet | yq eval '."tls-operator/0".results."ca-certificate"' > auth/ca.crt
# the client is configured to read the CA from the auth/ca.crt file
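
For illustration, this is roughly how a Python client could make use of that CA file; a hedged sketch, not the exact code from the sample repository:

# sketch: build an SSL context that trusts the self-signed CA retrieved above
import ssl

ssl_context = ssl.create_default_context(cafile="auth/ca.crt")
# a Kafka client library would then be pointed at this context, or directly at the
# auth/ca.crt path (e.g. an ssl_cafile / ssl.ca.location style setting)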

Additionally, we need to configure DNS so that our client reaches Kafka using its advertised domain name. Since this domain name is internal to K8s and our client is running on the host, we can do this by adding a mapping between the pod IP and the advertised domain name in the /etc/hosts file. To get the IP, we run:

# to get Kafka IP 
juju show-unit kafka/0 | yq eval '."kafka/0".address'
# example: 10.1.68.239

Using this IP, we can add the following entry in our /etc/hosts:

10.1.68.239 kafka-0.kafka-endpoints
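
A quick way to confirm the mapping works from the host is to resolve the name from Python (a small sanity check, not part of the sample client):

# sketch: verify that the advertised Kafka hostname resolves from the host
import socket

print(socket.gethostbyname("kafka-0.kafka-endpoints"))  # expected: the pod IP added to /etc/hosts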

Finally, in the .env file in the oauth folder, we just need to specify a few more details such as the OIDC provider and the Hydra token URL (and verify the other parameters are set correctly):

KAFKA_TOPIC=test
KAFKA_CONSUMER_GROUP=test-group
KAFKA_SERVERS=kafka-0.kafka-endpoints:9096 
KAFKA_SECURITY_PROTOCOL=SASL_SSL
KAFKA_SECURITY_MECHANISM=OAUTHBEARER
OAUTH_IDP=HYDRA
HYDRA_TOKEN_URL=https://10.1.81.153/kafka-oauth-hydra/oauth2/token

To get the Hydra URL:

juju run traefik-public/0 --quiet show-proxied-endpoints
# example: https://10.1.81.153/kafka-oauth-hydra
# note that we need to add `/oauth2/token` as a suffix
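
Under the hood, the client obtains an access token from this endpoint using the OAuth 2.0 client_credentials grant and presents it to Kafka over SASL/OAUTHBEARER. The following is a minimal, hedged sketch of that token request; the sample client's actual implementation may differ, and both the client-secret key name and the certificate served by the ingress are assumptions.

# sketch: fetch an access token from Hydra with the client_credentials grant
import requests
import yaml

oauth = yaml.safe_load(open("auth/hydra.yaml"))
resp = requests.post(
    "https://10.1.81.153/kafka-oauth-hydra/oauth2/token",
    data={"grant_type": "client_credentials", "audience": "kafka"},
    auth=(oauth["client-id"], oauth["client-secret"]),  # key names: see note above
    verify="auth/ca.crt",  # assumption: the ingress certificate chains to this CA
)
resp.raise_for_status()
print(resp.json()["access_token"])  # this token is presented to Kafka via SASL/OAUTHBEARER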

Now we are ready.

Running the client

In two different terminals, from the base directory eventbus-client-samples, run:

# terminal 1
venv/bin/python oauth/main.py --producer
# Write something: hello 
# Write something: world

# terminal 2
venv/bin/python oauth/main.py --consumer
# Got: hello
# Got: world

If everything is configured correctly, the text sent in the producer terminal appears in the consumer one.
