I’ve been working on a Kubernetes charm for BIND, in order to deploy DNS resolvers within Kubernetes itself.
You can deploy it locally in MicroK8s and then use the local service IP to resolve DNS queries like this, assuming you’ve already set up a single-node MicroK8s Juju controller:
$ juju add-model bind-test
$ juju deploy cs:~bind-charmers/bind
$ juju status
Model      Controller       Cloud/Region     Version  SLA          Timestamp
bind-test  myk8s-localhost  myk8s/localhost  2.8.6    unsupported  16:13:23+07:00

App   Version    Status  Scale  Charm  Store       Rev  OS          Address         Notes
bind  bind:edge  active      1  bind   jujucharms    1  kubernetes  10.152.183.246

Unit     Workload  Agent  Address     Ports          Message
bind/0*  active    idle   10.1.25.13  53/TCP,53/UDP  Pod configured
However, we don’t yet have a user-friendly method to expose such a service to the internet, e.g. from a Juju-deployed Kubernetes cluster (see Launchpad bug LP#1889703 to track that feature request).
Deployed with the default settings, the charm will install and configure BIND with the default Ubuntu package configuration: DNSSEC enabled, and no forwarders explicitly defined, so it will use the root nameservers and will therefore need to be able to reach them.
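If you’d rather forward queries to upstream resolvers than recurse from the roots, a custom configuration can set that in named.conf.options. This is a sketch based on the stock Ubuntu layout; the forwarder addresses below are placeholders from the documentation range, not real resolvers:

```
options {
        directory "/var/cache/bind";

        // hypothetical upstream resolvers; replace with ones you actually use
        forwarders {
                192.0.2.1;
                192.0.2.2;
        };

        dnssec-validation auto;
        listen-on-v6 { any; };
};
```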
Any manner of custom configuration is supported: simply configure the charm with the location of a git repository whose contents will populate /etc/bind/, for example:
$ juju config bind custom_config_repo="https://github.com/example/custom-bind-config"
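To build such a repository, you just need the files you want laid down in /etc/bind/ committed to git. Here’s a hypothetical sketch — the repository name and the minimal named.conf are illustrative stand-ins, not requirements of the charm:

```shell
set -e

# Create a repository to hold the files that will populate /etc/bind/
mkdir -p custom-bind-config
git init -q custom-bind-config

# A minimal named.conf mirroring the stock Ubuntu layout (stand-in content;
# in practice you'd copy in your real configuration files)
cat > custom-bind-config/named.conf <<'EOF'
include "/etc/bind/named.conf.options";
include "/etc/bind/named.conf.local";
include "/etc/bind/named.conf.default-zones";
EOF

git -C custom-bind-config add named.conf
git -C custom-bind-config -c user.name=example -c user.email=example@example.com \
    commit -q -m "Initial BIND configuration"
```

Push the repository somewhere the charm’s pod can reach over the network, then point custom_config_repo at that URL as above.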
The charm currently neither requires nor supports any relations.
Because BIND’s default settings don’t allow recursive queries from non-local networks, you’ll need a custom config if you want to run test queries from your local machine. So in a MicroK8s setup on a laptop or workstation, once you’ve deployed the charm as above, feel free to use this public repository: it contains the stock BIND config from the Ubuntu 20.04 package, with a one-line change to allow all RFC1918 networks to query the application. This obviously isn’t designed for a production environment, just for simple testing:
$ juju config bind custom_config_repo="https://git.launchpad.net/~barryprice/junk/+git/etc-bind-stock-config-with-all-rfc1918-nets-allowed"
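The change in that repository amounts to permitting queries from the private RFC1918 source networks. In named.conf.options terms it looks something like this — a sketch of the general approach, not the exact diff from the repository:

```
options {
        // (rest of the stock Ubuntu options unchanged)

        allow-query {
                localhost;
                10.0.0.0/8;
                172.16.0.0/12;
                192.168.0.0/16;
        };
};
```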
And once that’s settled:
$ juju status
Model      Controller       Cloud/Region     Version  SLA          Timestamp
bind-test  myk8s-localhost  myk8s/localhost  2.8.6    unsupported  16:55:23+07:00

App   Version    Status  Scale  Charm  Store       Rev  OS          Address         Notes
bind  bind:edge  active      1  bind   jujucharms    2  kubernetes  10.152.183.246

Unit     Workload  Agent  Address     Ports          Message
bind/1*  active    idle   10.1.25.36  53/TCP,53/UDP

$ dig +short example.com @10.152.183.246
220.127.116.11

$ dig +short example.com @10.152.183.246 AAAA
2606:2800:220:1:248:1893:25c8:1946

$ dig +short +dnssec example.com @10.152.183.246
18.104.22.168
A 8 2 86400 20201115051118 20201025154454 62811 example.com. P8BE247EZ54+DZ1aZOVDYv3MxnxT+XAmd1W41PyBCB0QopMxAe7l6brV VXQtfDwsY6wL71BKZL7eTsyWYP9x4JQTYeY6UIwXeuOQ+uS8A+fGlQBa aPCIZCw0JQQTCCmCmmrrwpkIDAiunF0UOeRZl3CzE5QOX0lw4db/3M6n IKg=
As you can see, the application IP now resolves both IPv4 and IPv6 lookups, with DNSSEC too.
Thanks go to Stuart Bishop for his work on Kubernetes charm structure that provided several useful pointers for this charm, as well as to John Meinel, John Lenton and Facundo Batista for their help and feedback.