Set up remote access

Key Value
Summary Learn an approach to setting up remote access to your Charmed Kubeflow deployment
Categories docs, kubeflow
Difficulty 3
Author Rob Gibbon


Duration: 2:00

Charmed Kubeflow delivers a powerful, sophisticated end-to-end MLOps platform which you can deploy in half an hour or less, using MicroK8s or another conformant Kubernetes distribution.

Exposing Charmed Kubeflow over the network so that users can access it is straightforward once you know how, and this tutorial will guide you through the steps needed to deploy and configure it correctly for your users.

What you’ll learn

  • How to configure MicroK8s to expose Charmed Kubeflow to the outside world
  • How to configure your Charmed Kubeflow for round-robin DNS to simplify accessing the platform, with Dnsmasq and Systemd-Resolved

What you’ll need

  • A MicroK8s cluster running on Ubuntu 20.04 with one or more nodes running Charmed Kubeflow and MetalLB - see the install guide to get up and running
  • Another computer running a modern web browser from which to test remote access to Charmed Kubeflow
  • Some command line knowledge

Enable MicroK8s ingress addon

Duration: 3:00

The first step is to enable an ingress on our MicroK8s cluster. On one of the MicroK8s nodes, run the following command:

microk8s enable ingress

Next, we’ll configure the ingress add-on so that it exposes our Charmed Kubeflow MLOps platform. Just make sure that you’ve followed the steps in the install guide and have MetalLB enabled on MicroK8s for your cluster first. Run the following commands to expose your Charmed Kubeflow deployment for remote access:

cat << EOF > ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubeflow-ingress
spec:
  ingressClassName: public
  rules:
  - host: kubeflow.ubuntu.local
    http:
      paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: istio-ingressgateway
              port:
                number: 80
EOF

microk8s.kubectl apply -f ingress.yaml -nkubeflow

That wasn’t so bad! Wait a few moments until MicroK8s makes the configuration change:

microk8s.kubectl get ingress -nkubeflow
NAME               CLASS    HOSTS                   ADDRESS   PORTS   AGE
kubeflow-ingress   public   kubeflow.ubuntu.local             80      29s

Now we’ll make remote access for your clients highly available with round-robin DNS so that if any node of the MicroK8s cluster goes offline, your users will still be able to reach Charmed Kubeflow. Read on…

Setting up dnsmasq

Duration: 10:00

The next step is to set up Dnsmasq on two of the MicroK8s nodes. We are going to use Dnsmasq as a lightweight caching DNS service that also resolves all of our local MicroK8s hosts to the domain name kubeflow.ubuntu.local, in order to set up round-robin DNS.

Round-robin DNS
Round-robin DNS improves service availability and load distribution by answering name lookups with multiple 'A' records that share the same name but point to different IP addresses, so client requests are spread across the nodes.
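To make the mechanism concrete, here is a small sketch (with hypothetical IP addresses) of how successive lookups cycle through a pool of 'A' records:

```shell
# Hypothetical pool of A-record addresses for kubeflow.ubuntu.local.
ips=("10.64.140.43" "10.64.140.44" "10.64.140.45")

# Successive requests are answered by the next address in the pool,
# wrapping around once the end of the pool is reached.
for request in 0 1 2 3 4 5; do
  echo "request $request -> ${ips[$((request % ${#ips[@]}))]}"
done
```

If one node goes offline, clients simply retry and land on one of the remaining addresses.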

First, let’s install the Dnsmasq package. On each node in the MicroK8s cluster, run the following command:

sudo apt install dnsmasq -y

Next we need to configure the Dnsmasq daemon. We want Dnsmasq to listen for DNS lookup requests on our MicroK8s nodes’ loopback interface as well as on the public interface. Note that your server may have more than one network interface - make sure you choose the right one!

You can find a lot of information about your MicroK8s node’s network configuration by running the following command:

ip addr
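If you’re unsure which address to use, you can extract the IPv4 address attached to an interface from the `ip addr` output. Here is a sketch run against a captured sample (the interface name and address are hypothetical; yours will differ):

```shell
# A captured fragment of `ip addr` output (hypothetical values).
sample='2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
    inet 10.64.140.43/24 brd 10.64.140.255 scope global eth0'

# Grab the IPv4 address that follows "inet " on the interface line.
PUBLIC_IP=$(echo "$sample" | grep -oP '(?<=inet )[0-9.]+' | head -1)
echo "$PUBLIC_IP"   # 10.64.140.43
```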

Ok, let’s go ahead and configure Dnsmasq. Note that you should replace the IP address assigned to the variable PUBLIC_IP with your own MicroK8s cluster node’s public IP address, which will likely be different to the one shown below.

Run these commands on each MicroK8s cluster node to set up Dnsmasq as a caching DNS server that recursively forwards unknown lookups to Google’s public DNS:


PUBLIC_IP=10.64.140.43   # replace with your own node's public IP address

cat << EOF > dnsmasq.local.conf
# Listen for DNS lookups on the loopback and public interfaces
listen-address=127.0.0.1
listen-address=${PUBLIC_IP}
# Recursively forward unknown lookups to Google's public DNS
server=8.8.8.8
server=8.8.4.4
EOF

cat dnsmasq.local.conf | sudo tee -a /etc/dnsmasq.d/local.conf

The next step is to set up static records for Dnsmasq on each of the nodes where Dnsmasq is running. We want to create a static record for each node in the MicroK8s cluster. On each node in the MicroK8s cluster, run the following commands:

declare -a microk8s_cluster=("10.64.140.43" "10.64.140.44" "10.64.140.45")   # replace with your nodes' IP addresses

for node in "${microk8s_cluster[@]}"; do
  echo "$node kubeflow.ubuntu.local" | sudo tee -a /etc/hosts
done
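If you’d like to check what will be written before touching /etc/hosts, the same records can be generated into a scratch file first (hypothetical node IPs):

```shell
# Dry run: generate the static records into a temporary file and inspect them.
hostsfile=$(mktemp)
for node in 10.64.140.43 10.64.140.44 10.64.140.45; do
  echo "$node kubeflow.ubuntu.local" >> "$hostsfile"
done
cat "$hostsfile"
```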

Alright, let’s bounce Dnsmasq so that it picks up our configuration changes:

sudo systemctl enable dnsmasq
sudo systemctl restart dnsmasq

There are a couple of extra steps needed to ensure that servers and clients using systemd-resolved can make use of Dnsmasq. On each node of the cluster, and on the systems that will access it, run the following commands:

echo "DNS=10.64.140.43 10.64.140.44" | sudo tee -a /etc/systemd/resolved.conf   # replace with your Dnsmasq nodes' IPs
echo "FallbackDNS=8.8.8.8" | sudo tee -a /etc/systemd/resolved.conf
sudo systemctl restart systemd-resolved
sudo systemd-resolve --flush-caches

Configure CoreDNS for Dnsmasq

Duration: 3:00

In order for internal services like the authentication system OIDC-gatekeeper to be able to resolve our Charmed Kubeflow service’s hostname correctly, we need to configure CoreDNS to recursively lookup external addresses via Dnsmasq. Let’s do that now:

microk8s disable dns
microk8s enable dns:10.64.140.43,10.64.140.44   # replace with your Dnsmasq nodes' IPs

Reconfigure the Charmed Kubeflow public URL

Duration: 4:00

The final step is to reconfigure the Charmed Kubeflow public URL so that the OIDC Gatekeeper and Dex operators know how users will access Charmed Kubeflow. On the computer running Juju, execute the following two commands to complete the setup:

juju config oidc-gatekeeper public-url=http://kubeflow.ubuntu.local
juju config dex-auth public-url=http://kubeflow.ubuntu.local

Next Steps

Duration: 1:00

Congratulations on setting up highly available remote access to your Charmed Kubeflow MLOps platform running on MicroK8s! But if you’re having difficulties, don’t worry - head over to the forum to ask a question and get in touch with the community.

Further reading

Duration: 1:00

Have questions? Contact our sales team now.

@robgibbon are the steps to set up and configure Dnsmasq necessary if you only have one node running MicroK8s for learning/testing purposes? I’m wondering if I can just follow the steps to enable ingress and reconfigure the Kubeflow public URL if all I want to do is access the central dashboard from another device on my network.

Dnsmasq is optional


@robgibbon Hello Rob, thanks for your tutorial. But it seems a bit complicated for us. If we just want to access the Kubeflow dashboard remotely through its ip:port, is there any way we could skip the DNS setting part?

Also, for remote machine, does it need to set as its DNS server address?

Hi @robgibbon, as you mentioned that “Dnsmasq is optional”, I tried to just follow Enable MicroK8s ingress addon and Reconfigure the Charmed Kubeflow public URL, skipping Setting up dnsmasq and Configure CoreDNS for Dnsmasq. However, when I try to access the dashboard via http://IP:80, I get an error (screenshot omitted).

I’m not so sure why this happens, do you have any idea? Thank you.

I followed the guide verbatim and arrived at the same error. No solution in sight. Were you able to find anything re. this?

I spent far too long chasing this issue and finally figured out how to get my kubeflow dashboard exposed, in my case from our domain:

kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: kubeflow-dashboard-ingress
  labels:
    app: istio-ingressgateway
spec:
  # this is microk8s specific, likely nginx in other situations?
  ingressClassName: public
  rules:
  - host:
    http:
      paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: istio-ingressgateway-workload
              port:
                number: 80

Then I pointed dex and oidc at it:

juju config dex-auth public-url=
juju config oidc-gatekeeper public-url=

Seems like istio-ingressgateway-workload is required as the backend service, as hinted at over here: Accessing the Kubeflow dashboard from other devices locally - getting started - Charmhub

I’m still getting my head around how this works, so am excited when it does.

That’s interesting Joel. I haven’t tested it out, but based on Andrew’s comment, you might be onto something.

It’s possible then that the doc has a mistake:

name: istio-ingressgateway

Perhaps should be

name: istio-ingressgateway-workload