Unable to get NFS Charm provisioner working with Charmed Kubernetes on Ubuntu 18.04

Has anyone been able to get the NFS provisioner working with conjure-up on 18.04? I have found a number of related links/issues, such as:

I followed verbatim the steps outlined here:

Note that I am indeed using LXD 4.1. However, I do not see any file at /etc/apparmor.d/lxc/lxc-default.

Has anyone been able to document a working Juju NFS provisioner on a localhost deploy of CDK?

The provisioner pod is stuck in ContainerCreating:
Warning FailedMount 24s (x12 over 20m) kubelet, juju-1d799a-4 (combined from similar events): MountVolume.SetUp failed for volume "nfs-client-root" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/dd5b9819-60e5-4129-b443-9f109a50f686/volumes/kubernetes.io~nfs/nfs-client-root --scope -- mount -t nfs 10.13.177.185:/srv/data/kubernetes-worker /var/lib/kubelet/pods/dd5b9819-60e5-4129-b443-9f109a50f686/volumes/kubernetes.io~nfs/nfs-client-root
Output: Running scope as unit: run-r154244b7397c4808a0be8733146509b1.scope
mount.nfs: requested NFS version or transport protocol is not supported
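For context on this error: inside an LXD container, mount.nfs failing with "version or transport protocol is not supported" is commonly the container's apparmor policy blocking NFS mounts rather than an actual NFS version mismatch. A sketch of the kind of LXD profile settings that permit this (key names follow standard LXD configuration; the exact values shipped in the worker charm's lxd-profile.yaml may differ):

```yaml
# Hypothetical sketch only: an LXD profile fragment allowing NFS mounts
# inside the container. raw.apparmor appends rules to the generated
# apparmor profile; each apparmor mount rule ends with a comma.
config:
  linux.kernel_modules: ip_tables,ip6_tables,netlink_diag,nf_nat,overlay
  raw.apparmor: |
    mount fstype=nfs,
    mount fstype=nfs4,
    mount fstype=rpc_pipefs,
  security.nesting: "true"
devices: {}
```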

Any help is appreciated. Ideally we should be able to use the StorageClass and dynamically provision storage.

Welcome @ahsanshah1234, sorry that it’s taken a long time to get back to you.

Conjure-up is no longer the recommended channel to deploy Charmed Kubernetes. I believe the most current documentation is available from ubuntu.com:

Has anyone from @k8s-charmers got any experience with using NFS as the backing store?

Thanks @timClicks for the reply. I was following this: Charmed Kubernetes | Juju. Since I was using LXD on localhost, I followed this:

“Note: If you’re deploying on lxd, use conjure-up instead, as described
above. It configures your lxd profile to support running Kubernetes on lxd.”

Is it still recommended that I reinstall using Juju directly? Note that this is a clean build. I know @routhinator has some threads on this as well. If anyone has gotten this to work, let me know; otherwise I will try a reinstall. It seems the NFS mount doesn't work with LXD on localhost. I also don't see any LXC apparmor profile.

The conjure-up approach has been deprecated in favor of using Juju directly. The docs have all been updated but are not published yet. Sorry for the confusion there. The best guide for deploying Charmed Kubernetes on local lxd is here. The charms themselves embed the lxd profile that they need. Hope this helps.

It’s also worth pointing out that using NFS with Charmed Kubernetes is covered in the docs as well.
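For anyone landing here, the documented approach boils down to deploying the nfs charm and relating it to the workers. Roughly (a sketch based on the Charmed Kubernetes docs; charm and endpoint names assumed):

```shell
# Deploy the NFS charm into an existing Charmed Kubernetes model,
# then relate it to the workers so the provisioner can mount exports.
juju deploy nfs
juju add-relation nfs kubernetes-worker

# Once the model settles, an NFS-backed StorageClass should appear:
kubectl get storageclass
```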

Hi. As recommended, I went ahead and destroyed the conjure-up based install and reinstalled per the new method. I still get the following error on the NFS provisioner (the same one I had with conjure-up):

Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/d3f27e94-0aa3-47a9-a726-a43ecbb1eefe/volumes/kubernetes.io~nfs/nfs-client-root --scope -- mount -t nfs 10.13.177.211:/srv/data/kubernetes-worker /var/lib/kubelet/pods/d3f27e94-0aa3-47a9-a726-a43ecbb1eefe/volumes/kubernetes.io~nfs/nfs-client-root
Output: Running scope as unit: run-r046d008bc8764a6e915806b13c69f14e.scope
mount.nfs: requested NFS version or transport protocol is not supported

It just seems that the NFS mount does not work with LXD, at least not as advertised in the steps.

Could this be a bug? Any other configuration needed?

Pretty sure I am running into the exact same issue as defined here: NFS Charm + 1.15 Charmed Kube = NFS Version not supported

The new method of deploying via Juju does not properly set up a working LXD container, and as such the NFS provisioner does not work on a base install. Is this something that can be fixed without hacking around the configs? Any help is appreciated.

A reinstall with conjure-up and reapplication of a custom profile as defined here https://github.com/charmed-kubernetes/charm-kubernetes-worker/blob/master/lxd-profile.yaml, followed by a juju redeploy, finally got this working.
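For reference, the workaround amounted to something like the following (a sketch; the profile name and container name are examples, and the profiles Juju creates vary by model):

```shell
# Fetch the worker charm's LXD profile and fold it into the profile
# applied to the worker containers, then restart them to pick it up.
wget https://raw.githubusercontent.com/charmed-kubernetes/charm-kubernetes-worker/master/lxd-profile.yaml
lxc profile edit default < lxd-profile.yaml   # or the juju-<model>-... profile
lxc restart <worker-container-name>
```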

I would recommend updating the docs, as others may struggle with a basic localhost setup for dynamic provisioning. Closing this issue.