Best Practices for Sharing Resources Between Models

Hello folks!

TL;DR:

What I want to do is: enable-ha --to lxd:0, where machine 0 is already deployed to another model in the MAAS cloud.

Long, rambling stream-of-consciousness begins:

I had a wonderful IRC chat with @wallyworld, blahdeblah and QthePirate about this last night, and I wanted to write it down here, partly to straighten out my own thoughts and partly to understand what I should and shouldn’t be doing, as I’m almost certainly straying off-track.

DISCLAIMER: My cluster has several ‘pet’ resources which are unavoidable because they’re specialized hardware. While they’re able (and required) to carry regular workloads thanks to their very high capacity, they’re frequently, though not constantly, needed for their own specialized tasks. A future goal is to elastically migrate units to and from these resources on a schedule, but that isn’t the focus of this question; it requires me to get a few other pieces of the puzzle straight first, and would likely be handled by a second-tier orchestrator like K8s or clustered LXD.

Scenario background:
I have a MAAS cloud Juju controller. It’s on a KVM that was manually deployed to a host I’m trying to decommission. The host has several other workloads with connected dependencies I’ll be manually unpicking as I go, and rebuilding within the awesome Juju platform with as little ‘pet’ configuration as possible.

Migrating to Juju-managed resources:
Based on the docs and discussions, it seems the clean way to begin is by migrating the Juju controller to the target MAAS cluster: use enable-ha to scale out to the target, then remove the controller unit from the old node.

The difficulty I’m experiencing is this: the controller would preferably live in an LXD container, and any scaling would be to new containers on existing machines, or to existing containers within the MAAS cloud. A safe assumption is that all machines within the MAAS cloud are currently checked out to other models, but available to carry more units.

What I want to do is: enable-ha --to lxd:0 where 0 is already deployed to another model in the MAAS cloud.
Options I’m aware of are:

  1. Do a manual add-machine ssh:happy-turtle.maas, then add-machine lxd:1 and enable-ha --to 1/lxd/0. This doesn’t work: adding an ssh: machine fails with “ERROR machine is already provisioned”. Even if it worked, this method would come at the expense of the MAAS cloud functionality, which I’d be reluctant to give up.

  2. Use a KVM pod instead. I know this would work, but the KVM layer is something I need to minimize use of, as it’s frequently used by ‘pet’ workloads that require a lot of resources, and some of the machines with the highest available compute capacity simply can’t run KVM at all.

If you’re still reading, thank you for your time and patience - a beverage of your choice is at my expense if we meet.

You folks are awesome and are building what is unquestionably the future of SDI automation.

enable-ha --to lxd:X is not currently supported.

With your point 1, you need to add-machine on the actual LXD container itself, assuming you’ve used MAAS DHCP to get an address for the container or otherwise ensured the container is addressable, i.e. juju add-machine ssh:<ip-address>. Then enable-ha --to X, where X is the machine id resulting from add-machine.
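Putting that together, the container-based workaround looks roughly like this (the IP address and resulting machine id are placeholders; the container must already exist and be reachable over SSH with a key Juju can use):

```shell
# Enlist the existing, addressable LXD container as a manual machine
# (10.0.0.42 is a placeholder for the address MAAS DHCP gave the container)
juju add-machine ssh:ubuntu@10.0.0.42
# Note the machine id Juju reports, e.g. "created machine 3"

# Scale the controllers onto that machine
juju enable-ha --to 3
```

This sidesteps the unsupported enable-ha --to lxd:X form by making the container a first-class machine in the controller model before targeting it.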

Many folks doing HA controllers on MAAS do actually use the KVM approach. One way, pre-MAAS 2.5, is to manually deploy KVM instances and re-enlist them back into MAAS with tags. Then use the --constraints option to enable-ha to ensure those KVM instances get used when the new controllers are created. MAAS 2.5 supports KVM pods natively, so there’s no need to enlist existing instances.
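As a sketch of the pre-2.5 flow, assuming the re-enlisted KVM instances carry a tag such as controller-kvm (the tag name, profile name and system id here are just examples):

```shell
# In MAAS: create a tag and attach it to the re-enlisted KVM nodes
maas admin tags create name=controller-kvm
maas admin tag update-nodes controller-kvm add=<system-id>

# In Juju: constrain enable-ha so the new controllers land on tagged machines
juju enable-ha -n 3 --constraints tags=controller-kvm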

We have future plans to allow controllers to support multiple clouds which would make what you want easier I think. You would be able to bootstrap Juju onto the LXD cluster, and enable-ha, thereby having the controllers provisioned on top of the LXD cloud. You’d then be able to register the MAAS cloud with those controllers and add a model to that MAAS cloud; workloads added to the model would deploy to MAAS. We support this scenario now for Kubernetes workloads, where the Kubernetes cluster is added post-bootstrap to an existing controller. But we don’t (yet) support it for VM based clouds.


I love where you’re going with this, and I’ve been thinking about it for some time.

Begin whimsical ramble here :

I wish there could be a ‘Juju teleport charm’ where you could add a teleport on one model and link it to related objects on another model. This would be amazing because you could still build different projects in their own dedicated models, but if you wanted to create a relation from an object on model A to, say, a postgres DB on model B, model B would modify its charms’ config relative to its external relations, as opposed to having models that rely on each other with no immediate visibility that they do.


Thanks for that @dvnt. I’m toying with running several controllers for the same cluster to provide more agility and harness the functionality provided by each provider - a manual, an LXD cluster and a MAAS cluster controller all running, giving three insertion points for deployments while working around gaps in functionality or integration with my own glue scripts (or possibly Mojo, which I just started a few test envs with). I dream of a future state where MAAS and Juju are symbiotic, and LXD is a first-class deployment target for MAAS.

I think what you want here is the Juju cross model relation feature. This allows charms to be deployed to different models and then related. The models don’t even have to be in the same controller. It uses the offer/consume paradigm. You deploy a charm, create an offer, and add permissions to that offer. Then on the consuming side, you deploy your app, and then relate it to the offer in the other model.

Here’s some preliminary documentation to get started. There’s also some doc from an external third party who uses cross model relations in their own setup. And here’s some doc on using cross model relations with postgresql, which is illustrative of the type of thing you can do.

Other scenarios not in the above doc include having nagios in one model monitoring other models, and prometheus collecting stats from other models. The examples below are illustrative - I haven’t triple checked them for typos etc, so hopefully I haven’t fat-fingered anything.

Simple deployment example using models in a single controller:

juju bootstrap aws
juju switch controller
juju deploy nagios
juju expose nagios
juju offer nagios:monitors
juju switch default
juju deploy ubuntu
juju deploy nrpe
juju add-relation ubuntu nrpe
juju add-relation nrpe:monitors controller.nagios

Then

  • get the public IP of nagios and load it in the browser
  • username is nagiosadmin
  • password can be found by this command on the nagios model:
    juju ssh nagios/0 sudo cat /var/lib/juju/nagios.passwd

And here’s an example of cross model relations with prometheus using multiple controllers:

juju bootstrap google bigbrother
juju bootstrap aws monitorme
juju switch bigbrother
juju deploy prometheus
juju expose prometheus
juju offer prometheus:target offerprom
juju switch monitorme
juju deploy ubuntu
juju deploy telegraf
juju relate ubuntu:juju-info telegraf
juju consume bigbrother:/admin/default.offerprom promed
juju relate telegraf:prometheus-client promed:target

Then

  • get the public IP of prometheus and load it in the browser on port 9090
  • open the status->targets list and make sure you see the relation with the public IP address of the telegraf unit in AWS

Oh. My. God. I’ll definitely be playing with this :hushed: thanks for that
I guess PEBKAC + RTFM
