Here’s my specific office setup:
- MacBook Air and Mac Mini, both M1-based.
- Multipass installed on the Macs so that I can easily spin up Ubuntu VMs.
- Ubuntu Server installed on a small form factor amd64-based computer, which I access over my local network (or via a secure tunnel that I can use while traveling).
- An LXD cluster on the amd64 box, which I can access from the lxc client on my Arm boxen. (See the sketch after this list for how the pieces connect.)
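For concreteness, here's roughly how those pieces fit together. The hostname `devbox` and the VM name `dev` are placeholders of mine, and this assumes the LXD cluster has already been configured to listen on the network:

```bash
# On one of the M1 Macs: spin up a local Ubuntu VM with Multipass.
multipass launch --name dev
multipass shell dev

# Point the local lxc client at the LXD cluster on the amd64 server.
# "devbox.local" stands in for the server's address on my LAN.
lxc remote add devbox devbox.local
lxc remote switch devbox
lxc list  # now shows instances running on the amd64 cluster
```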
I’ve found myself working more from a remote session on the amd64 server than locally on one of the Macs. This is a pretty typical DevOps setup, and part of why I like it is simply the convenience of working from a byobu session on the server as I switch between my laptop and desktop. But there are some papercuts that would keep me from switching over to this workflow entirely, even if I wanted to.
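The session hopping is as simple as it sounds; `devbox` is again a stand-in for the server's hostname:

```bash
# From either Mac: attach to the same byobu session on the server,
# picking up exactly where I left off on the other machine.
ssh -t devbox byobu
```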
What works well:
- The juju client generally works well, both when run natively from Homebrew and when run in a Multipass VM. I do have to pass `--constraints arch=amd64` when bootstrapping an AWS controller, but that's a one-time inconvenience. After that, I can deploy and operate a cloud without issue. (See the example after this list.)
- Registering, publishing, and releasing charms with `charmcraft` works well. (Again, this applies to both the Homebrew version and the snap running in a VM.)
- Multipass and LXD work well, just in general! I think that these are two of Canonical's best projects, and I really enjoy using them.
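For reference, the two flows above look something like this. The controller and charm names are illustrative, and the exact `.charm` filename depends on how the charm was built:

```bash
# One-time workaround: force amd64 instances when bootstrapping on AWS.
juju bootstrap aws aws-controller --constraints arch=amd64

# Registering, uploading, and releasing a charm ("my-charm" is a placeholder).
charmcraft register my-charm
charmcraft upload my-charm_ubuntu-20.04-amd64.charm
charmcraft release my-charm --revision 1 --channel edge
```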
What doesn’t work well:
- `charmcraft pack`. It builds the arm64 version of the charm, which I can't then deploy to AWS or to my LXD cluster.
- `juju deploy` to the amd64 LXD cluster from a client on an Arm box. The Juju CLI sees a mismatch between the architecture of the Ubuntu server and that of the Arm box running the client, and I wasn't able to fix this with constraints. (I think that's because the LXD cluster looks like a local cloud to Juju, which invokes a separate check.) Both issues are illustrated below.
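To make those papercuts concrete, here's roughly what they look like in practice. The charm filename is illustrative, and the `--constraints` attempt is the one I describe above as not working:

```bash
# On the M1: pack builds for the host architecture, so the artifact
# comes out as an arm64 charm (filename is illustrative).
charmcraft pack
# -> my-charm_ubuntu-20.04-arm64.charm

# Deploying to the amd64 LXD cluster from the Arm client trips Juju's
# architecture check, and constraints didn't get me past it:
juju deploy ./my-charm_ubuntu-20.04-arm64.charm --constraints arch=amd64
```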