Set up remote access

Key Value
Summary Learn an approach to setting up remote access to your Charmed Kubeflow deployment
Categories docs, kubeflow
Difficulty 3
Author Rob Gibbon

Overview

Duration: 2:00

Charmed Kubeflow delivers a powerful, sophisticated end-to-end MLOps platform which you can deploy in half an hour or less, using MicroK8s or another conformant Kubernetes distribution.

Exposing Charmed Kubeflow over the network so that users can access it is straightforward once you know how. This tutorial guides you through the steps needed to get everything deployed and configured correctly for your users.

What you'll learn

  • How to configure MicroK8s to expose Charmed Kubeflow to the outside world
  • How to configure round-robin DNS with Dnsmasq and systemd-resolved to simplify access to the platform

What you'll need

  • A MicroK8s cluster running on Ubuntu 20.04 with one or more nodes running Charmed Kubeflow and MetalLB - see the install guide to get up and running
  • Another computer running a modern web browser from which to test remote access to Charmed Kubeflow
  • Some command line knowledge

Enable MicroK8s ingress addon

Duration: 3:00

The first step is to enable ingress on our MicroK8s cluster. On one of the MicroK8s nodes, run the following command:

microk8s enable ingress

Next, we'll configure the ingress add-on so that it exposes our Charmed Kubeflow MLOps platform. Just make sure that you've followed the steps in the install guide and have MetalLB enabled on MicroK8s for your cluster first. Run the following commands to expose your Charmed Kubeflow deployment for remote access:

cat << EOF > ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubeflow-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: public
  rules:
  - host: kubeflow.ubuntu.local
    http:
      paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: istio-ingressgateway-workload
              port:
                number: 80
EOF

microk8s.kubectl apply -f ingress.yaml -n kubeflow

That wasn't so bad! Wait a few moments while MicroK8s applies the configuration change, then check the ingress:

microk8s.kubectl get ingress -n kubeflow
NAME               CLASS    HOSTS                   ADDRESS     PORTS   AGE
kubeflow-ingress   public   kubeflow.ubuntu.local   127.0.0.1   80      29s
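
To check that the ingress is routing traffic, you can send a test request from the node itself, passing the expected hostname in the Host header. The exact response depends on your deployment, but typically you would see a 302 redirect to the Dex login page:

curl -I -H "Host: kubeflow.ubuntu.local" http://127.0.0.1/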

Now we'll make remote access for your clients highly available with round-robin DNS, so that if any node of the MicroK8s cluster goes offline, your users will still be able to reach Charmed Kubeflow. Read on…

Set up Dnsmasq

Duration: 10:00

The next step is to set up Dnsmasq on the MicroK8s nodes. We are going to use Dnsmasq as a lightweight caching DNS service that also resolves the domain name kubeflow.ubuntu.local to all of our local MicroK8s hosts, in order to set up round-robin DNS.

ⓘ Round-robin DNS Round-robin DNS is a way to improve service availability and balance load via DNS, by publishing multiple 'A' records with the same hostname but different IP addresses.

First let's install the Dnsmasq package. On each node in the MicroK8s cluster, run the following command:

sudo apt install dnsmasq -y

Next we need to configure the Dnsmasq daemon. We want Dnsmasq to listen for DNS lookup requests on each MicroK8s node's public interface, leaving the loopback interface to systemd-resolved. Note that your server may have more than one network interface - make sure you choose the right one!

You can find a lot of information about your MicroK8s node's network configuration by running the following command:

ip addr
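
For example, assuming your node's public interface is named eth0 (yours may well be named differently - check the ip addr output), you could extract its IPv4 address like this:

ip -4 -o addr show dev eth0 | awk '{print $4}' | cut -d/ -f1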

Ok, let's go ahead and configure Dnsmasq. Note that you should replace the IP address assigned to the variable PUBLIC_IP with your own MicroK8s cluster node's public IP address, which will likely be different to the one shown below.

Run these commands on each MicroK8s cluster node to set up Dnsmasq as a caching DNS server that forwards unknown lookups to Google's public DNS:

PUBLIC_IP=192.168.100.10

cat << EOF | sudo tee /etc/dnsmasq.d/local.conf
domain-needed
bogus-priv
no-poll
server=8.8.8.8
server=8.8.4.4
listen-address=$PUBLIC_IP
except-interface=lo
bind-interfaces
EOF
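
Before moving on, you can ask Dnsmasq to validate the configuration syntax - it should print "dnsmasq: syntax check OK.":

sudo dnsmasq --test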

The next step is to set up static records for Dnsmasq, mapping kubeflow.ubuntu.local to each node in the MicroK8s cluster. On each node where Dnsmasq is running, run the following commands:

declare -a microk8s_cluster=("192.168.100.10" "192.168.100.11" "192.168.100.12")

for node in "${microk8s_cluster[@]}"; do
  echo "$node kubeflow.ubuntu.local" | sudo tee -a /etc/hosts
done
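
To confirm that the records were written, inspect /etc/hosts:

grep kubeflow.ubuntu.local /etc/hosts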

Next, let's restart Dnsmasq so that it picks up our configuration changes:

sudo systemctl enable dnsmasq
sudo systemctl restart dnsmasq
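
To check that round-robin DNS is working, query one of the Dnsmasq servers directly - substitute one of your own node IP addresses, and note that you may need to install the dnsutils package to get the dig command. You should see the addresses of all of your cluster nodes returned:

dig @192.168.100.10 kubeflow.ubuntu.local +short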

There are a couple of extra steps to ensure that servers and clients using systemd-resolved can make use of Dnsmasq. On each node of the cluster, and on each system that will access it, run the following commands:

echo "DNS=192.168.100.10" | sudo tee -a /etc/systemd/resolved.conf
echo "FallbackDNS=192.168.100.11" | sudo tee -a /etc/systemd/resolved.conf
sudo systemctl restart systemd-resolved
sudo systemd-resolve --flush-caches
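
You can confirm that systemd-resolved has picked up the new servers by checking its status and looking for your Dnsmasq server under "DNS Servers" (on some releases the equivalent command is systemd-resolve --status):

resolvectl status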

Configure CoreDNS for Dnsmasq

Duration: 3:00

In order for internal services like the authentication component OIDC Gatekeeper to resolve our Charmed Kubeflow service's hostname correctly, we need to configure CoreDNS to forward external lookups to Dnsmasq. Let's do that now:

microk8s disable dns
microk8s enable dns:192.168.100.10,192.168.100.11
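
To verify that pods inside the cluster can now resolve the hostname, you can run a one-off lookup pod - a minimal check, assuming the public busybox image is acceptable in your environment:

microk8s.kubectl run --rm -it dns-test --image=busybox:1.36 --restart=Never -- nslookup kubeflow.ubuntu.local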

Reconfigure the Charmed Kubeflow public URL

Duration: 4:00

The final step is to reconfigure the Charmed Kubeflow public URL so that the OIDC Gatekeeper and Dex operators know how users will access Charmed Kubeflow. On the computer running Juju, execute the following two commands to complete the setup:

juju config oidc-gatekeeper public-url=http://kubeflow.ubuntu.local
juju config dex-auth public-url=http://kubeflow.ubuntu.local
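
You can read the settings back to confirm they took effect:

juju config dex-auth public-url
juju config oidc-gatekeeper public-url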

Next Steps

Duration: 1:00

Congratulations on setting up highly available remote access to your Charmed Kubeflow MLOps platform running on MicroK8s! But if you're having difficulties, don't worry - head over to the forum to ask a question and get in touch with the community.

Further reading

Duration: 1:00

Have questions? Contact our sales team now.

@robgibbon are the steps to set up and configure Dnsmasq necessary if you only have one node you're running MicroK8s on for learning/testing purposes? I'm wondering if I can just follow the steps to enable ingress and reconfigure the Kubeflow public URL, if all I want to do is access the central dashboard from another device on my network.

Dnsmasq is optional

Rob

@robgibbon Hello Rob, thanks for your tutorial. But it seems a bit complicated for us. If we just want to access the Kubeflow dashboard remotely through its ip:port, is there any way we could skip the DNS setup part?

Also, does the remote machine need to set 192.168.100.10 as its DNS server address?

Hi @robgibbon, as you mentioned that "Dnsmasq is optional", I tried to just follow Enable MicroK8s ingress addon and Reconfigure the Charmed Kubeflow public URL, skipping Set up Dnsmasq and Configure CoreDNS for Dnsmasq. However, when I try to access the dashboard via http://IP:80, I get an error.

I'm not so sure why this happens - do you have any idea? Thank you.

I followed the guide verbatim and arrived at the same error. No solution in sight. Were you able to find anything re. this?

I spent far too long chasing this issue and finally figured out how to get my kubeflow dashboard exposed, in my case from our domain:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubeflow-dashboard-ingress
  labels:
    app: istio-ingressgateway
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  # this is microk8s specific, likely nginx in other situations?
  ingressClassName: public
  rules:
  - host: example.com
    http:
      paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: istio-ingressgateway-workload
              port:
                number: 80

Then I pointed dex and oidc at it:

juju config dex-auth public-url=http://example.com
juju config oidc-gatekeeper public-url=http://example.com

Seems like the istio-ingressgateway-workload is required as the backend service as hinted at over here: Accessing the Kubeflow dashboard from other devices locally - getting started - Charmhub

I'm still getting my head around how this works, so I'm excited when it does.

That's interesting, Joel. I haven't tested it out, but based on Andrew's comment, you might be onto something.

It's possible then that the doc had a mistake:

name: istio-ingressgateway

Perhaps should be

name: istio-ingressgateway-workload
