Setting up LXD Cluster as a Juju Cloud

These instructions represent a “shortest path” to setting up a LXD cluster on MAAS nodes, adding the cluster to Juju as a new cloud, and setting up Juju credentials to access it. The following uses the juju --edge snap; install it with:

$ sudo snap install juju --edge --classic

For a very good introduction to setting up a LXD cluster, Stéphane Graber from the LXD team has a YouTube video, Setting up a 3 nodes cluster, that is well worth a watch.

LXD Cluster

The following assumes that you have SSH access to the MAAS nodes in order to set up LXD.

It is recommended that you read information on securing your environment for production use.

MAAS Nodes

  • Before provisioning the MAAS nodes that will be cluster members, ensure that the physical network device to be used is bridged and configured to get an automatically assigned IP from the 10.0.0.0/24 subnet (a rough sketch of the resulting configuration follows this list).
  • Provision the node with Bionic (LXD 3.x installed by default, supporting clustering).
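
As a rough illustration, the rendered network configuration on such a node ends up looking something like the netplan sketch below. MAAS normally writes this for you when the bridge is configured in its UI; the interface name (eno1) and bridge name (br-ex) are assumptions, so adjust for your hardware.

network:
  version: 2
  ethernets:
    eno1:
      dhcp4: false
  bridges:
    br-ex:
      interfaces: [eno1]
      dhcp4: true   # picks up an automatically assigned address from 10.0.0.0/24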

LXD First Node

For simplicity, we use a trust password to add new nodes to the cluster and to add Juju credential certificates.

SSH to the machine, run “lxd init” and answer as shown below.

$ lxd init
Would you like to use LXD clustering? (yes/no) [default=no]: yes
What name should be used to identify this node in the cluster? [default=xxx]: 
What IP address or DNS name should be used to reach this node? [default=10.0.0.x]:
Are you joining an existing cluster? (yes/no) [default=no]:
Setup password authentication on the cluster? (yes/no) [default=yes]:
Trust password for new clients:
Again: 
Do you want to configure a new local storage pool? (yes/no) [default=yes]:
Name of the storage backend to use (btrfs, dir, lvm) [default=btrfs]:
Create a new BTRFS pool? (yes/no) [default=yes]:
Would you like to use an existing block device? (yes/no) [default=no]:
Size in GB of the new loop device (1GB minimum) [default=21GB]:
Do you want to configure a new remote storage pool? (yes/no) [default=no]:
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to configure LXD to use an existing bridge or host interface? (yes/no) [default=no]: yes
Name of the existing bridge or host interface: $MAAS_BRIDGE_NAME
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]:
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: 
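
Once the first node is initialised, it is worth confirming that clustering is enabled and noting the cluster certificate fingerprint, which you will be asked to verify when joining the remaining nodes. A quick sanity check from the node itself:

$ lxc cluster list                 # should list this node as the only member
$ lxc info | grep -i fingerprint   # certificate fingerprint of the cluster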

LXD Remaining Cluster Nodes

For each machine to be added to the cluster, run “lxd init”, but this time opt to join the existing cluster.

$ lxd init
Would you like to use LXD clustering? (yes/no) [default=no]: yes
What name should be used to identify this node in the cluster? [default=xxx]:
What IP address or DNS name should be used to reach this node? [default=10.0.0.x]:
Are you joining an existing cluster? (yes/no) [default=no]: yes
IP address or FQDN of an existing cluster node: 10.0.0.x
Cluster fingerprint: 68e985c3e1d840ee6d2ecca72ae033356450ebdd4e297e07a42a9b6c48255128  
You can validate this fingerprint by running "lxc info" locally on an existing node.
Is this the correct fingerprint? (yes/no) [default=no]: yes
Cluster trust password: 
…
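
After the last node has joined, a quick check from any member should show every node as ONLINE:

$ lxc cluster list   # all cluster members should be listed with state ONLINE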

Setup Juju Cloud and Credentials

Adding the Cluster as a Juju Cloud

Save the following cloud definition to a file, replacing the endpoint. The scheme and port for the endpoint are optional: if either is missing, Juju will prefix the endpoint with the HTTPS scheme and append port 8443 by default.

clouds:
  lxd-remote:
    type: lxd
    auth-types: [interactive, certificate]
    endpoint: $URL_TO_REMOTE_LXD

Add the cloud to Juju by sourcing the data in the file.

$ juju add-cloud lxd-remote $PATH_TO_CLOUDS_YAML
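
Assuming the cloud was named lxd-remote as above, it should now show up in the cloud list:

$ juju clouds   # lxd-remote should appear with type lxd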

Adding Credentials for the Cloud

Save the following credentials file, substituting your trust password.

credentials:
    lxd-remote:
        admin:
            auth-type: interactive
            trust-password: $TRUST_PASSWORD

Add the credentials in the file to Juju.

$ juju add-credential lxd-remote -f $PATH_TO_CREDENTIALS_YAML
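
As a quick check, the new credential should now be listed:

$ juju credentials   # the admin credential for lxd-remote should appear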

Bootstrap to the New Cloud

$ juju bootstrap lxd-remote
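
Once bootstrap completes, the controller runs as a container on one of the cluster members; something like the following confirms it (container names are generated by Juju):

$ juju status -m controller   # the controller model should show machine 0 started
$ lxc list                    # a juju-* container appears on one of the nodes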

Recent changes to the juju code (on the edge snap) remove the need to manually locate the certificates on the local machine. Instead, juju will look for potential client certificates and, if none are found, generate a set for you. So in the end, when adding a credential, only a trust-password is required (see above).

If you don’t want to use a trust-password and have all the certificates to hand, then you can provide them to juju in the following format:

credentials:
    lxd-remote:
        admin:
            auth-type: certificate
            server-cert: |
                <server-cert>
            client-cert: |
                <client-cert>
            client-key: |
                <client-key>

$ juju add-credential lxd-remote -f $PATH_TO_CREDENTIALS_YAML

Edge should now also have the ability to target specific nodes of the cluster with the same availability zone syntax used by the EC2 provider.

So with the bootstrap, deploy and add-unit commands, you can use:

--to zone={node name}

or just

--to {node name}
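
For example, with cluster members named node1 and node2 (placeholder names), placement might look like:

$ juju bootstrap lxd-remote --to zone=node1
$ juju deploy ubuntu --to zone=node2
$ juju add-unit ubuntu --to node1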

If bootstrap fails with:

ERROR failed to initialize state: validating initialization args: validating cloud credentials: credential "lxd-remote/admin/ok" with auth-type "certificate" is not supported (expected one of ["interactive"])

You’re hitting bug 1783407

The workaround is to edit ~/.local/share/juju/credentials.yaml and change:
auth-type: certificate
to:
auth-type: interactive

I’ve updated the main post to highlight that the LXD provider supports both interactive and certificate auth-types.

I had to use sudo with lxd init when adding a node to the cluster.

This is incorrect; it should read $ snap install --edge juju

Updated TY much …and I need more characters to hit the limit required to post…

Do I need to enable any flags here? Or am I doing it wrong?:

ubuntu@celery:~$ juju add-unit kubernetes-worker --to celery
ERROR adding new machine to host unit "kubernetes-worker/2": cannot add a new machine: unknown placement directive: celery

Or…

ubuntu@celery:~$ juju add-unit kubernetes-worker --to zone=celery
ERROR adding new machine to host unit "kubernetes-worker/1": cannot add a new machine: unknown placement directive: zone=celery

What’s celery here? Is that a deployed application? I’m not following what you’re up to and asking. In theory I’d not worry about placement with the LXD cluster because they’re all containers already anyway.

Thanks for the reply @rick_h - ‘celery’ is an LXD cluster member, and that cluster is bootstrapped into Juju. Juju appears to have enumerated the cluster members, as certain deployment commands result in an error along the lines of "unable to find availability zone 'celery', 'member2', 'member3', 'member4', etc.", so I had assumed I was using object names familiar to Juju in this context.

Agreed on this one, that’s the goal for me here, but I’m having trouble standing up CDK on the cluster, so I was hoping to use constraints to eliminate any host-specific variables whilst debugging.

The zones should be the names of the cluster members. If you add some machines and look at the output of lxc list for those machines, it should have a column showing the names that LXD is using for them. I seem to recall getting a chance to name them when you add the machines to the cluster. Those are the names that Juju is using for the zones.
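
For example, from any cluster member:

$ lxc cluster list   # the NAME column holds the member names Juju uses as zones
$ lxc list           # in a cluster, the LOCATION column shows which member hosts each container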

Is there a juju charm that can be deployed on a MAAS that spawns a LXD cluster?

… and can anyone point to how the FAN network should be set up properly in this case?

I know @mskalka had looked into it for a while, but I don’t think it progressed to something officially supported. It is available as:

$ juju deploy cs:~mskalka/lxd-cluster

I realize this post is quite dated, but I’m looking to automate my juju cloud build. Can you elaborate on the sources of the certs in question? For the LXD cluster build itself I have a cluster.cert and a cluster.key, but I’m not seeing a “server.crt”. Is there a particular location this server.crt lives on the LXD master? Thanks for your feedback.

From looking at the source code, cluster.crt can be used instead of server.crt. The cluster.crt is shared between all nodes, whereas the server.crt is used just for that one server.

When this feature was developed, the server.crt was created; this might no longer be the case. If you find a problem around this, then please file a bug.
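
As a sketch of where those files typically live (the paths assume the snap-packaged LXD on the cluster member and a non-snap lxc client locally, so adjust for your install):

# On a cluster member: the shared cluster certificate, usable as server-cert
$ sudo cat /var/snap/lxd/common/lxd/cluster.crt
# On the client: an existing lxc client certificate and key, usable as
# client-cert / client-key; the client cert must already be trusted by the
# cluster (e.g. added with "lxc config trust add")
$ cat ~/.config/lxc/client.crt
$ cat ~/.config/lxc/client.key

Paste these into the corresponding fields of the certificate credentials YAML shown earlier, then run juju add-credential as before.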