I’ve recently been struggling to get LXD bootstrapping to happen in a snappy fashion, with the times getting slower and slower. On investigating the cloud-init-output.log of the bootstrapped controller, it turns out it spends a LOT of time downloading the sources from apt.
Knowing that I’m pretty much downloading the same thing over and over again, @jameinel suggested using an apt cache. The following is an outline of doing this using a docker image, so that I don’t have to install anything else onto my host system (assuming you’ve already got docker installed!).
The following runs apt-cacher-ng version 3.1-2 in daemon mode.
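Something along these lines should do it (a sketch: the sameersbn/apt-cacher-ng image and the /srv/docker cache path are assumptions on my part, so adjust for your setup):

$ docker run --name apt-cacher-ng --init --detach \
    --restart=always \
    --publish 3142:3142 \
    --volume /srv/docker/apt-cacher-ng:/var/cache/apt-cacher-ng \
    sameersbn/apt-cacher-ng:3.1-2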
Then, to get Juju to use the cache, copy the URL from the index page of the cacher (IPs can vary), e.g. http://0.0.0.0:3142/, and note it down for later. Next, locate your LXC server address from lxc info; generally speaking it’ll be the local URL (starting with 192.168.) or your bridge address (starting with 10.). Copy that for use in the template below.
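One quick way to pull that address out (just a sketch; the exact output layout varies between LXD versions):

$ lxc info | grep -A 4 'addresses:'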
Use the following clouds.yaml template to update your ~/.local/share/juju/clouds.yaml:
clouds:
  lxd:
    type: lxd
    auth-types: [certificate]
    endpoint: ${URL FROM LXC}
    regions:
      default:
        endpoint: ${URL FROM LXC}
    config:
      apt-http-proxy: ${URL FROM APT-CACHER}
Then bootstrap the lxd cloud and enjoy the fruits of your labour.
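For example (the controller name here is arbitrary):

$ juju bootstrap lxd lxd-controller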
I’ll mention that if you have a general-purpose device on your network hosting the cache, you can also configure:
$ cat /etc/apt/apt.conf.d/95-juju-proxy-settings
Acquire::http::Proxy "http://${IP OF APT-CACHER}:3142";
Acquire::https::Proxy "http://${IP OF APT-CACHER}:3142";
I personally have a NUC sitting on the network and point all my containers and my Linux boxes at that cache. That means that day-to-day updates seed the cache, so everything gets happier.
(edited because I missed the --publish 3142:3142 line which you’re already doing)
I think one of the reasons why the apt update part of bootstrap gets slow is that the LXD images get stale. There’s an auto-update setting which can be enabled to ensure the images are kept current. Even with an apt cache, processing the new packages can still take a bit of time.
You can use lxc image edit juju/bionic/amd64 (and the same for trusty, xenial, zesty, etc.) to set the auto-update property of the images Juju uses to true.
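For example (lxc image edit opens the image definition as YAML in your editor):

$ lxc image edit juju/bionic/amd64
# in the YAML that opens, change auto_update: false to auto_update: true,
# then save and quit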
I’m not sure why Juju doesn’t ensure this is set true by default. Perhaps we should change that?
I’ve written up my approach and code in my juju-lxd-tweaks repo.
It effectively auto-caches the LXD image used by Juju, but it also installs the agent binary for the installed version of Juju and installs the dependencies for the reactive charm framework.
@simonrichardson is there a way to update a cloud already sitting on the controller with this via some juju command? Or how do I apply this setting to clouds that already exist on the controller?
Also, in the model-config there are a few related config options: apt-mirror, apt-http-proxy, and apt-https-proxy.
The documentation doesn’t say when to use an “apt-mirror” vs an “apt-https-proxy”, especially if they are used together…
I know that it’s possible to set these on the controller, but since I’d like to set this at a region/cloud level, I don’t know how to set it for controller-level clouds.
apt-mirror means “I’m hosting a completely different site at https://my-mirror-of-host”, with a different base URL, whereas apt-proxy means “make your requests to the same URL but go via an HTTP proxy”. The only difference between Juju’s http-proxy and apt-http-proxy is that the latter is configured only for apt, not for all HTTP content. Think apt-cacher-ng vs a squid proxy. (They both functionally do something very similar, though.)
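To make that concrete, both are model config options set via juju model-config (the URLs here are placeholders):

$ juju model-config apt-mirror=http://my-mirror-of-host/ubuntu
$ juju model-config apt-http-proxy=http://10.0.0.5:3142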
Mirrors are more like the sites listed at Mirrors : Ubuntu, where you change the root URL so that all the content you download comes from a site closer to you.
I doubt you would set up your own mirror; most people would just set up a local caching proxy.
So I was actually referring to your own personal machine: if you’re going to be running a caching proxy, you should have the host machine also use the proxy, as then it always gets seeded with new updates, both for your host and for the containers that you run.
Yeah - makes sense. We did deploy one on a separate host+container which now serves the whole network, and we’re rolling that out model by model. Setting the “model-defaults” isn’t having an effect on already existing models.
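For what it’s worth, my understanding (worth double-checking) is that model-defaults only applies to models created after it is set; existing models need the value applied directly (the proxy address here is a placeholder):

$ juju model-defaults apt-http-proxy=http://10.0.0.5:3142
$ juju model-config -m mymodel apt-http-proxy=http://10.0.0.5:3142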
Just noting here to save other people some time: I’ve recently had issues where apt-cacher was failing to resolve the hostnames for the archive URLs. Updating to a newer version of the docker image fixed it. The hardcoded version in the original post is quite old now.
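If you’re running the same image as above (an assumption on my part), bumping it looks something like:

$ docker pull sameersbn/apt-cacher-ng:latest
$ docker stop apt-cacher-ng && docker rm apt-cacher-ng
$ docker run --name apt-cacher-ng --init --detach --restart=always \
    --publish 3142:3142 \
    --volume /srv/docker/apt-cacher-ng:/var/cache/apt-cacher-ng \
    sameersbn/apt-cacher-ng:latest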