[feature request] LXD profile as a deploy constraint

When deploying charms on the AWS public cloud, there is a constraint which allows us to deploy onto named instance types.

It works something like this (some details here):

juju deploy tiny-bash --constraints 'instance-type=t4g.nano'

Now this is very useful, as it gives us control over how resources are spent and how they are composed at deploy time.
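For completeness, the same constraint can be set per deploy or model-wide (standard Juju commands; the instance type is just an example):

```shell
# Per-application constraint at deploy time, as above
juju deploy tiny-bash --constraints 'instance-type=t4g.nano'

# Or set the constraint as a default for every new machine in the model
juju set-model-constraints instance-type=t4g.nano
```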

My feature request is that this feature be carried over to LXD clouds, making use of the LXD profile feature.

If implemented well, this would allow us to define custom or named LXD profiles to be used very much like AWS instance types.

We could imagine that this would look something like this:

juju deploy tiny-bash --constraints 'lxd-profile=small'

This would open the door to supplying good default profiles for things like CUDA, USB support, X11, multi-NIC, InfiniBand, etc. But custom profiles are what I personally would love to have access to for my LXD clouds.
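To make the idea concrete, here is a sketch of how such named profiles could be defined on the LXD side today (standard lxc commands; the profile names and values are just examples):

```shell
# A "small" profile with modest resource limits
lxc profile create small
lxc profile set small limits.cpu 1
lxc profile set small limits.memory 512MiB

# A "cuda"-style profile could pass a GPU device through to the container
lxc profile create cuda
lxc profile device add cuda gpu0 gpu
```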

It is possible today to include an lxd-profile.yaml in a charm codebase (like in this lxd-profile charm). This has the drawback that it is very static and doesn’t allow a charmer to deploy the same charm with different profiles. Making changes to the profile requires changes to the charm, which is not as dynamic.
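For reference, the static mechanism mentioned here is a lxd-profile.yaml file placed at the root of the charm; a minimal sketch (the keys follow LXD’s profile format, the values are illustrative):

```shell
# lxd-profile.yaml at the charm root, applied by Juju when the charm deploys;
# this bakes the profile into the charm, which is the static part complained
# about above
cat > lxd-profile.yaml <<'EOF'
config:
  security.nesting: "true"
devices:
  tmp:
    path: /tmp
    source: /tmp
    type: disk
EOF
```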

Let me know your thoughts on this.

@jamesbeedy @hallback

Oh, and for the record, this instance-type feature for AWS is not well documented, and especially not in “juju deploy --help” @tmihoc, which makes it a bit of a secret killer feature nobody knows about. =)

There has been an idea floated around about making lxd-profiles more flexible per charm. Unfortunately, it’s not currently on the roadmap for this cycle.

Essentially you could potentially do:

profiles:
  small:
    ......
  large:
    ......

Then to deploy it, you could potentially do juju deploy --config "lxd-profile-name: small". It doesn’t make sense as a constraint, because the profile would need to be applied before the charm exists on the machine, and you have a bit of a chicken-and-egg situation.

I don’t understand this, since this is exactly what is happening on AWS - or am I missing details here?

Unfortunately not. Instance types are known before a machine is created; lxd-profiles are part of a charm and applied after a machine is created. Therefore we can’t constrain on something that isn’t ready yet. We could probably paper over this with a lot of complicated trickery, but I’m not sure it’s worth it, and it’s probably the wrong abstraction.

But I still don’t get it.

It is perfectly possible to launch a lxc container using a supplied profile as:

lxc launch ubuntu:20.04 --profile my-profile

So, I can’t understand the “catch 22” problem — unless you are referring to the lxd profile that may be supplied as the “lxd-profile.yaml” in a charm, which I understand is applied after the container has been created.

But I’m talking about the lxd profile which can (as I showed above) be applied as part of the creation of the instance itself — before any charm, or even Juju, has been installed in the container.
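In other words, the profile can be created and attached entirely before Juju is involved; something like this (the profile name and settings are examples):

```shell
# Create a profile and attach it at container creation time, alongside the
# default profile so the container still gets its root disk and networking
lxc profile create my-profile
lxc profile set my-profile limits.cpu 2
lxc launch ubuntu:20.04 c1 --profile default --profile my-profile
```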

Applying profiles outside the purview of Juju will probably not be supported, as Juju would want to model that profile within a charm directly.

Maybe there could be a way to tag a container so that the instance-type constraint would make sense for LXD?

I think that, when the Juju team was implementing profile support for LXD, some of the requirements ruled out handling the profiles at the Juju layer, in favor of applying them from the charm.

That may or may not have been the right abstraction. But it would be a non-trivial and breaking change to implement it as a constraint at this point.


I still believe that it is the wrong abstraction and should probably be changed into a set of container options/fields that could then be written out as an LXD profile. That way it could be exposed to more providers.

It should be possible to tag a container via the metadata within the properties object — adding a tag or instance-tag — so that we could look for it when applying an instance-type constraint that matches that machine.
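A sketch of that tagging idea using LXD’s free-form user.* config namespace (the key name is hypothetical — nothing in Juju reads it today):

```shell
# Tag a container with a user key that a provider could later match
# against an instance-type constraint (hypothetical key name)
lxc config set c1 user.instance-type small

# Read the tag back
lxc config get c1 user.instance-type
```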


This is all strange to me, as I would see this as a fairly “1:1” mapping to the AWS feature. Why would this be different?

Named instance types would be as simple as defining a profile and having Juju — exactly as for AWS — be able to target it.

My use case currently is that I would like to be able to create lxc instances which have network interfaces in two spaces. For example, haproxy would benefit from this: I could then automatically deploy haproxy to a named instance type which gets IP addresses from two different LXD networks (spaces). I have no means to do this today without modifying the lxc instance afterwards and then having to SSH into the container to set up the networking manually.
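For this use case, the profile itself is straightforward on the LXD side; a sketch assuming two LXD networks named private-br and internet-br (the network, profile, and container names are examples):

```shell
# A profile that gives containers a NIC in each of two LXD networks (spaces)
lxc profile create two-spaces
lxc profile device add two-spaces eth0 nic network=private-br name=eth0
lxc profile device add two-spaces eth1 nic network=internet-br name=eth1

# Launched with this profile, the container comes up with both interfaces
lxc launch ubuntu:20.04 haproxy-0 --profile default --profile two-spaces
```

The missing piece is exactly what this thread asks for: a way to tell Juju to use such a profile at machine-creation time.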

If I had an LXD profile, I could have all this automated. Maybe there is already a way to do this for my use case, but I know too little of Juju to know how.

There just isn’t anything to do this yet, hence the feature request I guess.

Yes, totally.

More specifically, I have a network with internet-facing addresses which I normally wouldn’t use (space: internet) and a private network (space: private).

Maybe I can set up LXD so that containers “normally” get addresses only from space: private, and if I instead specify my “special” profile, they get addresses on both space: private and space: internet.

Perhaps there is a different way to achieve this with lxd+juju…

Regardless of my use case, I can’t see why AWS is different from LXD in this respect of using named instance types.