How to install Kubeflow with MicroK8s on AWS for testing and development

Deploying Kubeflow to MicroK8s on an AWS instance is easily achieved using Juju. Juju integrates directly with AWS, enabling you to provision an EC2 instance, install MicroK8s on it, and then install Charmed Kubeflow on top.

Requirements

  • The latest version of the Juju OLM. On Ubuntu, or other snap-enabled operating systems, just run
    sudo snap install juju --classic
  • An active AWS account, with the credentials already stored in Juju. See the Juju documentation, or the example below, if you need to add credentials now.
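
If you still need to register AWS credentials with Juju, one way is the interactive prompt (this assumes you have your AWS access key ID and secret access key to hand):

juju add-credential aws      # prompts for a credential name, access-key and secret-key
juju credentials aws         # verify the credential is now known to Juju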

Bootstrap AWS instance

Juju requires a controller instance to manage the deployment. If you already have a controller for AWS, you can skip ahead to deploying the host instance. To bootstrap a controller on AWS, run the following command on your local machine:

juju bootstrap \
          --bootstrap-constraints "cores=2 mem=4G" \
          --config mongo-memory-profile=low \
          aws/us-east-1 \
          mk8s-controller

You can substitute a different region for aws/us-east-1, though bear in mind that not all AWS instance types are available in every region. You can also use a different name for the controller (mk8s-controller is used here).
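
If you are unsure which regions are available, Juju can list the regions it knows about for the aws cloud, and you can confirm the new controller afterwards:

juju regions aws      # list the regions Juju knows for the aws cloud
juju controllers      # confirm mk8s-controller was created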

Deploy a host instance

The host instance will run MicroK8s and will require enough resources to deploy the Kubeflow components. If you want to run GPU workloads, you can also choose a GPU-enabled instance. For example:

  • Standard deployment
    juju deploy ubuntu demo --constraints="root-disk=100G instance-type=t2.2xlarge" --series focal && juju wait -wv
  • GPU deployment
    juju deploy ubuntu demo --constraints="root-disk=100G instance-type=g4dn.xlarge" --series focal && juju wait -wv

Enable access to the instance

Run these commands so that you can connect to the running instance from your local machine (or elsewhere):

juju run --unit demo/0 "open-port 80"
juju expose demo
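
To confirm that the port is open and the application is exposed, you can check the Juju status output (the Exposed column for demo should read true):

juju status demo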

Connect to AWS instance

You can now use Juju to connect to the AWS instance using SSH:

juju ssh demo/0

or if you prefer, directly from another terminal on the local machine:

ssh ubuntu@$(juju status --format yaml | yq e '.applications.demo.units["demo/0"].public-address' -)

Note:

The rest of these steps should be performed on the AWS instance, through the SSH connection you have just made.

Install requirements on AWS instance

To deploy and run Kubeflow, the remote instance will need some software installed. In addition to MicroK8s itself and Juju, a few additional applications are needed to use the instance as a development system.

Kubernetes support:

Kubeflow 1.6 supports Kubernetes version 1.22, which you can install with the command

sudo snap install microk8s --classic --channel=1.22/stable

The previous release, Kubeflow 1.4, requires Kubernetes version 1.21.

See the MicroK8s install page for more instructions on setting up MicroK8s, or follow the documentation for your K8s deployment.

If you are using MicroK8s, the following additional steps should be performed:

  • add the user to the MicroK8s group:
    sudo usermod -a -G microk8s $USER
    newgrp microk8s
    
  • copy the Kubernetes configuration to the default location:
    mkdir -p ~/.kube
    microk8s config > ~/.kube/config
    sudo chown -f -R $USER ~/.kube
    sudo snap alias microk8s.kubectl kubectl
    
  • configure MicroK8s addons:
    microk8s enable dns storage rbac metallb:"10.64.140.43-10.64.140.49,192.168.0.105-192.168.0.111"
    

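Before continuing, it can be worth checking that MicroK8s and the newly enabled addons are ready. For example:

microk8s status --wait-ready      # block until MicroK8s reports it is running
kubectl get nodes                 # the node should be Ready
kubectl get pods -A               # addon pods (dns, storage, metallb) should come up
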
For a GPU-enabled system, you should also install and initialise the required drivers:

sudo apt install -y nvidia-headless-460
sudo modprobe nvidia

The next step is to install the Juju components:

for snap in juju juju-wait juju-kubectl juju-bundle; do
    sudo snap install $snap --classic;
done
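
You can verify that all four snaps installed correctly:

snap list juju juju-wait juju-kubectl juju-bundle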

Now you can connect Juju to Kubernetes:

  • add the cluster:
    juju add-k8s myk8s
    
  • create a controller:
    juju bootstrap myk8s my-controller
    
  • create a model and namespace named kubeflow:
    juju add-model kubeflow
    
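At this point you can check that the model and its backing namespace exist (the kubeflow model created above is backed by a kubeflow namespace in the cluster):

juju models                               # should list the kubeflow model
microk8s kubectl get namespace kubeflow   # the matching Kubernetes namespace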

Deploy the Kubeflow bundle

Once the model is created, you can deploy one of the kubeflow bundles. To deploy the full bundle, run:

juju deploy kubeflow --trust --channel=1.6/stable

You can observe your Kubeflow deployment being spun up with the command:

watch -c juju status --color
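
Deployment can take some time. If you prefer to block until everything has settled rather than watching, the juju-wait snap installed earlier can be used (assuming the kubeflow model is still the current model):

juju wait -wv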

Create an Ingress resource

Substitute the public IP address of the instance in place of x.x.x.x in the code below:

echo "apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: istio-ingressgateway
spec:
  rules:
  - host: x.x.x.x.nip.io
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: istio-ingressgateway
            port:
              number: 80
" | microk8s kubectl apply -n kubeflow -f -

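If you need to look up the instance's public IP used above, it can be queried from inside the instance via the EC2 metadata service (assuming IMDSv1 is permitted), and you can then confirm that the Ingress resource was created:

curl -s http://169.254.169.254/latest/meta-data/public-ipv4   # this instance's public IPv4
microk8s kubectl get ingress -n kubeflow                      # the istio-ingressgateway Ingress should be listed
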
Set URL in authentication methods

Patch role:

For the previous 1.4 release of Kubeflow, in order to set up Istio correctly when RBAC is enabled, you need to give the istio-ingressgateway operator access to Kubernetes resources. The following command creates the appropriate role:

microk8s kubectl patch role -n kubeflow istio-ingressgateway-operator -p '{"apiVersion":"rbac.authorization.k8s.io/v1","kind":"Role","metadata":{"name":"istio-ingressgateway-operator"},"rules":[{"apiGroups":["*"],"resources":["*"],"verbs":["*"]}]}'

This step is not required for the current 1.6 release.

A final step to enable access to your Kubeflow dashboard is to provide the istio-ingressgateway public URL to dex-auth and oidc-gatekeeper via the following commands:

juju config dex-auth public-url=http://<URL>
juju config oidc-gatekeeper public-url=http://<URL>

Where <URL> is the hostname that the Kubeflow dashboard responds to. For example, in a typical MicroK8s installation with MetalLB enabled, this URL is http://10.64.140.43.nip.io. Once you have set up DNS, you should use the resolvable address served by istio-ingressgateway instead.
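
For example, with the default MetalLB address shown above, the commands would be (substitute your own URL as appropriate):

juju config dex-auth public-url=http://10.64.140.43.nip.io
juju config oidc-gatekeeper public-url=http://10.64.140.43.nip.io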

Finally, you can connect to your Kubeflow dashboard following the dashboard documentation.
