I created a manual cloud from 4 Hetzner instances and then used the LXD charmed operator to create a 3-node cluster. That worked out fine: I can add the cluster as a remote to my local LXD, launch instances, and it also shows up as a cloud when running juju clouds. However, when running juju bootstrap <cloud-name> <controller-name>, Juju attempts to connect to the IP address of the container in which it launched the controller. Since this IP belongs to the lxdfan0 network and is only reachable from inside the LXD cluster, the connection fails.

I read the post about configuring the LXD nodes' network interfaces and I'm sure that would work, but it seems I would then have to set up the LXD cluster manually without using the LXD charm. The other drawback (apart from the manual configuration required) of that network setup is, as I understand it, that all containers created would be accessible from the outside.
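For context, the rough sequence I'm following looks like this (the remote, cloud and controller names, and the address, are just examples):

```
# add the LXD cluster as a remote and as a Juju cloud, then bootstrap
lxc remote add hetzner-cluster https://<cluster-member-public-ip>:8443
juju add-cloud          # interactive; cloud type "lxd", endpoint as above
juju clouds             # the new cloud shows up here
juju bootstrap hetzner-cluster hetzner-controller
# bootstrap then tries to SSH to the controller container's Fan address
# (a 240.x.x.x address on lxdfan0), which is unreachable from my machine
```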
In LXD there is the possibility of adding a proxy device to a container. Perhaps that configuration could be applied to the controller container when it is created, so that SSH connections from Juju to the controller would go to the public-facing IP and be proxied through to the controller instance. The SSH port would also need to be configurable, since the underlying machine on which the LXD cluster node is provisioned already uses port 22 on that same IP. There looks to be some kind of setup for this with k8s, but I cannot find any documentation on whether it is also implemented for LXD and how I would set it up.
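Roughly what I have in mind on the LXD side, assuming a controller container named juju-controller (the container name, address and ports here are just placeholders):

```
# proxy the controller's SSH and the Juju API port through the cluster
# member's public address, using a non-22 SSH port so it does not clash
# with the host's own sshd
lxc config device add juju-controller ssh-proxy proxy \
    listen=tcp:<member-public-ip>:2222 connect=tcp:127.0.0.1:22
lxc config device add juju-controller api-proxy proxy \
    listen=tcp:<member-public-ip>:17070 connect=tcp:127.0.0.1:17070
```

The missing piece would then be a way to tell Juju to dial <member-public-ip>:2222 (and 17070) instead of the container's Fan address.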
So to my question: does anybody know of another way to bootstrap a remote LXD cluster cloud that does not involve LXD node network configuration and still allows the LXD charm to be used?
Some k8s clusters (e.g. Kind) have a similar problem: the connection from the Juju CLI to the controller service (which is not directly routable externally) is proxied via the kube API server. The API server proxy allows a client outside the k8s cluster to route traffic to cluster resources that would otherwise be unreachable; the API server acts as a bastion between the user and the in-cluster endpoint.
This scenario is managed by Juju; it stores the proxy config (IP address of the kube API server, CA cert, etc.) alongside the standard controller config in the controllers.yaml file used by the Juju CLI.
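For reference, on an existing k8s controller this can be seen in the client's controller store (the path below is the default Juju data directory on Linux):

```
# the controller entry in this file carries the stored proxy details
# (API server address, CA cert, service info) that the CLI uses to
# tunnel to the controller service
cat ~/.local/share/juju/controllers.yaml
```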
Maybe we could explore what options are available to do something similar for LXD clusters.
@manadart or @stgraber do you have any thoughts? Is this type of proxying available for LXD clusters or is it something worth considering?
Having a look at the charm code, it appears there is no way to “bring your own network”: you either get the default LXD bridge in stand-alone mode, or the Fan when clustered.
The charm authors might be convinced to allow specification of the bridge that LXD uses, but this would require configuration of the cluster hosts prior to installation, ensuring a bridged NIC with the same name on each.
Be aware this would expose each container publicly unless further firewalling is applied.
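As a rough illustration of the per-host preparation that would imply (interface and bridge names are assumptions; persistent addressing via netplan or similar is left out):

```
# on every cluster host, before installing the charm: create a bridge with
# a common name and attach the host's external NIC to it
ip link add name br0 type bridge
ip link set eth0 master br0
ip link set br0 up
```

Containers attached to br0 would then sit directly on the hosts' network, which is why the firewalling caveat above matters.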
Thank you for your responses. I'm not sure exactly what the charm code looks like, but I did see that when initialising the controller, Juju tried to SSH from my local client into the internal IP of the controller instance on the remote LXD cluster. If an option were provided to override the controller IP and SSH port that Juju uses, that could allow communication with the controller on the remote LXD host. For this to work, the remote LXD would need a proxy device listening on the configured IP and port; this could be added via a profile when launching the controller instance, and other required ports could be proxied the same way. I'm not certain, but I believe the LXD proxy device is available with both Fan and bridged LXD networks.

Something like ufw inside the controller container could also limit ingress traffic, with a configuration option to allow specific IPs. Just thoughts, and perhaps not something that should be done; there are certainly other ways of doing it.
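Something along these lines is what I'm picturing (the profile name, address and ports are placeholders; this is only a sketch of the idea, not something Juju can consume today):

```
# hypothetical profile carrying the proxy device(s), applied to the
# controller container at launch time
lxc profile create juju-proxy
lxc profile device add juju-proxy ssh-proxy proxy \
    listen=tcp:<member-public-ip>:2222 connect=tcp:127.0.0.1:22
# other required ports (e.g. the Juju API port 17070) could be proxied
# the same way
```

For filtering ingress by client IP inside the container with something like ufw, the proxy device would probably need its NAT mode so the original source address is preserved; otherwise the filtering would have to happen on the host instead.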