In this article, we show how Kafka clients (producers and consumers) can authenticate with Kafka using OAuth and Google as an OIDC provider.
Juju environment
The instructions provided in this article are based on the following components and versions:
- Juju controller: 3.4.2
- MicroK8s: v1.25.1
- Terraform: v1.7.2 (Juju provider: v0.10.1)
Charm environment (ecosystem)
The following figure shows the K8s charms required for this guide, their relations, and the client processes.
This ecosystem is composed of the identity charms (green), the event-bus charms (blue), the certificate charm (yellow), and the clients, which are simple Python scripts running on the host machine.
Setting up the environment
In order to bring up this ecosystem we are going to use the following repository, which contains various examples of event-bus usage, including an OAuth client example:
git clone https://github.com/canonical/eventbus-client-samples
cd eventbus-client-samples && make setup
In the oauth folder, we locate the Terraform plans and apply them:
# before running the terraform plan, let's create a model
juju add-model kafka-oauth
cd oauth/terraform/environments/k8s/google
terraform init
terraform apply
Creating a Google service account
Our Kafka client requires a Google service account in order to create an identity token (used to authenticate with Kafka). To create one, please follow the instructions provided in the GCP documentation. Once the account is created, we need to create a public/private key pair associated with this account. To do so:
- Double-click on the newly created service account and go to the Keys tab
- Click “Add Key” and select “Create new key” (leave JSON as the default type)
- Click “Create”
This will download the keys (a JSON file) locally. Please copy this file to the auth folder and rename it to service_account.json. Let’s assume the email of this service account is test-svc@test-project.iam.gserviceaccount.com (used in the ACLs below).
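Rather than copying the email by hand, it can be read straight from the downloaded key file. A minimal sketch (the "client_email" field is standard in Google service-account key files):

```python
import json

def principal_from_key_file(path: str) -> str:
    """Return the service-account email, used as the ACL principal below."""
    with open(path) as f:
        key = json.load(f)
    # Google service-account key files carry the account email in "client_email"
    return key["client_email"]
```

For example, `principal_from_key_file("auth/service_account.json")` would return the email to use as the principal in the policies file.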
Configuring permissions (ACLs)
In order for our client to be able to produce to and consume from a topic, we need to set up some permissions. This is done using the Event-bus Governor. This charm allows us to specify topics and ACLs as configuration, and synchronizes them with Kafka. For the ACLs, we can use the email of the service account as the identifier of the user (the principal), and we can specify them in the policies file at oauth/terraform/environments/k8s/google/config/policies.yaml, which looks as follows:
policies:
  topics:
    - name: test
      partitions: 1
      replication: 1
      acls:
        allow:
          - principal: test-svc@test-project.iam.gserviceaccount.com
            operations: [read, write, describe]
  groups:
    - name: test-group
      acls:
        allow:
          - principal: test-svc@test-project.iam.gserviceaccount.com
            operations: [read]
These ACLs can be applied by running:
juju config governor policies=@oauth/terraform/environments/k8s/google/config/policies.yaml
Configuring the client
At this point, we have configured the service account and the permissions. An additional requirement for the client is access to the CA certificate (used to sign the Kafka certificates), since we are using self-signed ones. We retrieve it using the following action:
juju run tls-operator/0 get-ca-certificate --format yaml --quiet | yq eval '."tls-operator/0".results."ca-certificate"' > auth/ca.crt
# the client is configured to read the CA from the auth/ca.crt file
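Before wiring the client, it can be worth sanity-checking that the retrieved file really is a PEM certificate (an empty or malformed auth/ca.crt would only surface later as a TLS handshake error). A simple sketch:

```python
def looks_like_pem_cert(path: str) -> bool:
    """Cheap sanity check: a PEM certificate is delimited by these markers."""
    with open(path) as f:
        data = f.read()
    return ("-----BEGIN CERTIFICATE-----" in data
            and "-----END CERTIFICATE-----" in data)
```

For instance, `looks_like_pem_cert("auth/ca.crt")` should return True after running the action above.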
Additionally, we need to configure DNS so that our client reaches Kafka using its advertised domain name. Since this domain name is internal to K8s and our client is running on the host, we can do this by adding a mapping between the pod IP and the advertised domain name in the /etc/hosts file. In order to get the IP, we run:
# to get Kafka IP
juju show-unit kafka/0 | yq eval '."kafka/0".address'
# example: 10.1.68.239
Using this IP, we can add the following entry to our /etc/hosts file:
10.1.68.239 kafka-0.kafka-endpoints
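This edit can also be scripted; a hedged sketch is shown below (editing /etc/hosts itself requires root, so the hosts-file path is passed explicitly rather than hard-coded):

```python
def ensure_mapping(hosts_path: str, ip: str, hostname: str) -> None:
    """Append an 'IP hostname' line to a hosts file unless the hostname is already mapped."""
    with open(hosts_path) as f:
        content = f.read()
    if hostname in content:
        return  # already present; leave the file untouched
    # Keep the file newline-terminated before appending our entry
    prefix = "" if (not content or content.endswith("\n")) else "\n"
    with open(hosts_path, "a") as f:
        f.write(f"{prefix}{ip} {hostname}\n")
```

Calling `ensure_mapping("/etc/hosts", "10.1.68.239", "kafka-0.kafka-endpoints")` (as root) is equivalent to the manual edit above, and is idempotent.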
Finally, in the .env file in the oauth folder, we just need to specify that our OIDC provider is Google, and verify the rest of the parameters:
KAFKA_TOPIC=test
KAFKA_CONSUMER_GROUP=test-group
KAFKA_SERVERS=kafka-0.kafka-endpoints:9096
KAFKA_SECURITY_PROTOCOL=SASL_SSL
KAFKA_SECURITY_MECHANISM=OAUTHBEARER
OAUTH_IDP=GOOGLE
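For reference, such a .env file is just KEY=VALUE lines; a minimal sketch of how it can be parsed into a configuration dictionary (this parser is a simplified stand-in, not necessarily what the sample scripts use internally):

```python
def parse_env(text: str) -> dict:
    """Parse simple KEY=VALUE lines, skipping blanks and # comments."""
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        config[key.strip()] = value.strip()
    return config

env = parse_env("""\
KAFKA_TOPIC=test
KAFKA_CONSUMER_GROUP=test-group
KAFKA_SERVERS=kafka-0.kafka-endpoints:9096
KAFKA_SECURITY_PROTOCOL=SASL_SSL
KAFKA_SECURITY_MECHANISM=OAUTHBEARER
OAUTH_IDP=GOOGLE
""")
```

Note that KAFKA_SERVERS points at the hostname we mapped in /etc/hosts, and port 9096 is the SASL_SSL listener.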
Now we are ready.
Running the client
In two different terminals, from the base directory eventbus-client-samples, run:
# terminal 1
venv/bin/python oauth/main.py --producer
# Write something: hello
# Write something: world
# terminal 2
venv/bin/python oauth/main.py --consumer
# Got: hello
# Got: world
If everything is configured correctly, the text sent in the producer terminal appears in the consumer one.