When deploying the MicroK8s charm, DNS support is provided out of the box with standard defaults (using the domain cluster.local and the upstream nameservers of the individual hosts). This guide explains how to use the CoreDNS charm to have better control over the DNS configuration of the cluster.
Requirements
- A running Charmed MicroK8s cluster
- You have followed Deploy Kubernetes charms to configure the MicroK8s cluster as a Juju cloud. For this guide, we will assume that this cloud is named microk8s-cloud.
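If you want to double-check that the cloud is in place before continuing, you can list the clouds known to your controller (the name microk8s-cloud is carried over from that guide; substitute your own if it differs):
juju clouds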
Deploy CoreDNS
Create a new model called dns-system, then deploy the coredns charm:
juju add-model dns-system microk8s-cloud
juju deploy coredns --channel 1.28/stable
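While the deployment settles, you may also want to review the configuration options that the coredns charm exposes, since these are what give you the finer control over DNS mentioned above. The available option names (for example, an option for upstream resolvers, often called forward) vary between channels and revisions, so treat any specific name as an assumption and list them first:
juju config coredns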
Wait for CoreDNS to deploy. You can check the progress using juju status:
Model Controller Cloud/Region Version SLA Timestamp
dns-system zs microk8s-cloud/localhost 3.1.5 unsupported 16:41:26+03:00
App Version Status Scale Charm Channel Rev Address Exposed Message
coredns active 1 coredns 1.28/stable 101 10.152.183.247 no
Unit Workload Agent Address Ports Message
coredns/0* active idle 10.1.208.70
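Because microk8s-cloud is the MicroK8s cluster itself, the dns-system model corresponds to a Kubernetes namespace of the same name, and the CoreDNS workload runs there as a pod. As an optional sanity check, you can list the pods in that namespace from one of the MicroK8s units (the machine model name microk8s is taken from the status output further down; adjust it if yours differs):
juju exec -m microk8s --unit microk8s/0 -- microk8s kubectl get pods -n dns-system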
The next step is to create an offer for the dns-provider endpoint, so that we can relate MicroK8s with CoreDNS:
juju offer coredns:dns-provider coredns
Note the offer URL from the output, e.g.:
Application "coredns" endpoints [dns-provider] available at "admin/dns-system.coredns"
We will need this URL to configure the cross-model relation in the next step.
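You can also list the offers published in the dns-system model at any time to confirm the endpoint is available:
juju offers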
Integrate MicroK8s with CoreDNS
We will now switch back to our main model where MicroK8s is running:
juju switch microk8s
Next, consume the CoreDNS endpoint and integrate with MicroK8s:
juju consume admin/dns-system.coredns coredns
juju integrate microk8s coredns
Wait for the microk8s units to settle. You can validate that everything is ready using juju status --relations:
Model Controller Cloud/Region Version SLA Timestamp
microk8s zs zerostack/KHY 3.1.5 unsupported 16:42:55+03:00
SAAS Status Store URL
coredns active zs admin/dns-system.coredns
App Version Status Scale Charm Channel Rev Exposed Message
microk8s 1.28.2 active 1 microk8s 1 yes node is ready
Unit Workload Agent Machine Public address Ports Message
microk8s/0* active idle 0 172.16.100.49 16443/tcp node is ready
Machine State Address Inst id Base AZ Message
0 started 172.16.100.49 9ddce17d-2cba-4a18-b3c4-2e10336b1673 ubuntu@20.04 nova ACTIVE
Relation provider Requirer Interface Type Message
coredns:dns-provider microk8s:dns kube-dns regular
microk8s:peer microk8s:peer microk8s-peer peer
In the output, we can also see the relation between coredns:dns-provider and microk8s:dns.
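The application address shown for coredns in the earlier status output is the ClusterIP of the Kubernetes Service that Juju creates for the application. If you are curious, you can inspect it from the cluster side by listing the services in the dns-system namespace:
juju exec --unit microk8s/0 -- microk8s kubectl get svc -n dns-system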
Test DNS resolution
The easiest way to validate that the integration is complete is to start a small alpine pod and resolve an external hostname, e.g. canonical.com:
juju exec --unit microk8s/0 -- microk8s kubectl run --rm -it --image alpine --restart=Never test-dns -- nslookup canonical.com
The output should look like this:
Server: 10.152.183.247
Address: 10.152.183.247:53
Non-authoritative answer:
Name: canonical.com
Address: 185.125.190.20
Name: canonical.com
Address: 185.125.190.29
Name: canonical.com
Address: 185.125.190.21
Non-authoritative answer:
Name: canonical.com
Address: 2620:2d:4000:1::27
Name: canonical.com
Address: 2620:2d:4000:1::28
Name: canonical.com
Address: 2620:2d:4000:1::26
pod "test-dns" deleted
Notice how the server address (10.152.183.247) matches the application address of the CoreDNS application we deployed earlier.
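In-cluster names should now resolve through the same server. As an additional check, resolving the API server's service name with the same approach should return the ClusterIP of the kubernetes service (this assumes the default cluster.local domain mentioned at the start of this guide):
juju exec --unit microk8s/0 -- microk8s kubectl run --rm -it --image alpine --restart=Never test-dns-internal -- nslookup kubernetes.default.svc.cluster.local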