This page provides an overview of various types of services that can be implemented in order for Juju to live happily in an internet-deprived environment. Common tools that can be used to achieve such services are also listed.
- HTTP/S proxy
- APT proxy
- FTP proxy
- Juju-specific proxy
- Snap-specific proxy
- APT mirror
- HTTP/S server
- Juju agent mirror
- Cloud images mirror
The purpose of a forward HTTP/S proxy is to act as an intermediary for a client making any HTTP or HTTPS request. For our purposes, the client is a Juju machine.
Most such proxies include a caching ability. That is, the proxy will store the resulting data locally so that any subsequent request can be quickly satisfied.
The de-facto forward proxy solution on Ubuntu is Squid. Juju uses the options `http-proxy` and `https-proxy` to denote these proxies.
When an HTTP/S proxy is configured there may be destinations that should be excluded from using it. Juju uses the `no-proxy` option to represent these destinations.
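For example, these settings can be applied at the model level with `juju model-config`. A minimal sketch, assuming a proxy reachable at the hypothetical address http://squid.internal:3128 (substitute your own):

```shell
# Point HTTP and HTTPS traffic from machines in the current model at an
# internal proxy (address is an assumption), and exclude local addresses.
juju model-config \
    http-proxy=http://squid.internal:3128 \
    https-proxy=http://squid.internal:3128 \
    no-proxy=localhost,127.0.0.1
```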
In general, all instances within a cloud should be able to communicate directly with one another. It may or may not be necessary to express this via the `no-proxy` variable, since that variable is limited to the HTTP and HTTPS protocols, which may not apply in every case. Nevertheless, it is often done for simplicity.
This “no proxy” idea is often used extensively with the localhost cloud at both the (shell) environment level and at the Juju level. The purpose is to ensure the client can connect with the local controller and that the local machines can inter-communicate, all without going through a proxy.
Hosts to exclude from the proxy include:
- localhost & 127.0.0.1 (standard ‘no proxy’ settings)
- the address assigned to the regular Ethernet device
- the address assigned to the host’s LXD bridge (to talk to the controller via its API)
- the entire subnet dedicated to the particular LXD installation
The page Using the localhost cloud offline demonstrates how this is done.
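As a concrete sketch, the exclusion list above might be assembled at the shell level as follows. The bridge address is an assumption; check yours with `lxc network get lxdbr0 ipv4.address`. Note that the legacy `no-proxy` model key does not accept CIDR subnets, so the subnet exclusion is handled separately (e.g. via `juju-no-proxy`):

```shell
# Build a 'no proxy' value covering the standard local addresses and the
# host's LXD bridge (example value 10.44.127.1 is an assumption).
NO_PROXY="localhost,127.0.0.1,10.44.127.1"
export NO_PROXY
echo "$NO_PROXY"
```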
An HTTP/S or FTP proxy may not be configured to accept requests for APT packages. An APT proxy is identical to an HTTP/S or FTP proxy except that it applies specifically to APT package requests.
Some common implementations include apt-cacher-ng and squid-deb-proxy.
Juju uses the options `apt-ftp-proxy`, `apt-http-proxy`, and `apt-https-proxy` to set these proxies, depending on the protocol involved (i.e. FTP, HTTP, or HTTPS).
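For instance, APT traffic can be routed through a dedicated caching proxy like so (the address is an assumption; port 3142 is the conventional apt-cacher-ng port):

```shell
# Route APT package requests through an internal APT proxy
# (hypothetical address; substitute your own).
juju model-config \
    apt-http-proxy=http://apt-cache.internal:3142 \
    apt-https-proxy=http://apt-cache.internal:3142
```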
A standard FTP proxy. Juju uses the `ftp-proxy` option for this type of proxy.
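A minimal sketch, again with a hypothetical proxy address:

```shell
# Route FTP requests through the internal proxy (address is an assumption).
juju model-config ftp-proxy=http://squid.internal:3128
```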
The following suite of proxy settings has a Juju-only scope (i.e. they are not system-wide):
- juju-ftp-proxy
- juju-http-proxy
- juju-https-proxy
- juju-no-proxy
They are passed to charm hook contexts as the following environment variables, respective to the above list:
- JUJU_CHARM_FTP_PROXY
- JUJU_CHARM_HTTP_PROXY
- JUJU_CHARM_HTTPS_PROXY
- JUJU_CHARM_NO_PROXY
These Juju-specific proxy settings are incompatible with the four corresponding legacy proxy settings described in earlier sections. Data validation is enabled to prevent collisions from occurring.
One big benefit of using these finely-scoped settings is that `juju-no-proxy` can contain subnets (in CIDR notation) whereas its legacy counterpart cannot.
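Because `juju-no-proxy` accepts CIDR notation, an entire internal network can be excluded with a single entry. A sketch, with hypothetical addresses:

```shell
# Juju-scoped proxy settings; these reach charm hooks as JUJU_CHARM_*
# environment variables and are not applied system-wide.
# The 10.0.0.0/8 subnet exclusion is only possible with juju-no-proxy.
juju model-config \
    juju-http-proxy=http://squid.internal:3128 \
    juju-https-proxy=http://squid.internal:3128 \
    juju-no-proxy=localhost,127.0.0.1,10.0.0.0/8
```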
Work is underway to introduce further granularity that will allow specific libraries (e.g. charm-helpers) to enable a proxy setting on a per-call basis.
The following suite of proxy settings has a Snap-only scope:
- snap-http-proxy
- snap-https-proxy
- snap-store-proxy
- snap-store-assertions
The first two provide the standard HTTP and HTTPS proxy values for `snapd` running on a machine.
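For example (hypothetical proxy address):

```shell
# Snap-scoped proxy settings, consumed by snapd on each machine.
juju model-config \
    snap-http-proxy=http://squid.internal:3128 \
    snap-https-proxy=http://squid.internal:3128
```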
The second two are for a local snap store. The value of `snap-store-proxy` is a 32-character alphanumeric key that identifies the store (i.e. its ID). The value of `snap-store-assertions` is a collection of digitally signed documents that express a fact or policy about an object in the snap universe.
Here is a sample assertion:

```
type: store
authority-id: canonical
store: YDBQvAwC2CfJElRq2XGkqcjR4bA9yNr2
operator-id: eJ8VwwkInXdLo5nIgoSKH8j95qs6BQ7D
timestamp: 2017-11-24T12:10:19.881852Z
url: http://firestorm.local
sign-key-sha3-384: BWDEoaqyr25nF5SNCvEv2v7QnM9QsfCc0PBMYD_i2NGSQ32EF2d4D0hqUel3m8ul

AcLBUgQAAQoABgUCWhgMKwAA3gYQAK68FSpGO3MQTOHuXar15Te7nf7RKa/5gJR2jIDf45XSVhYt
fsWdX5yEaRwoXWor84Tesm1XtYodyNRbBAKmz7a/1/tT105UxtnflO1Y42Yb4AliFtvW7Sc1eHO3
pg/ZAhx/2LmchBFJURon+vWi/scCr6GkUoQ+xNvCQpA0hWPfD4BnS5TJjhiA8PyGQWTLmyms5jbK
5AhIdogFKpPfmeaSCgSjz2OsMMJYQO639A2gmoT2zSHqJs4+/bTb2Oq4j08Am7Wv28vyVglWdedc
QKZuBJ/sepmZzHcWHNb65z3+KT+VC12LLQd/I+SxUkTsBNKC1mwpY39PrAsJDMCltxCepKmti0T6
hwYCYrrA6vBXjqoSRyW/YzDKRB0VpN3GwCE/1DmuxNFN2CUn4SM+q+SYmCuIaoDCmMyk6P9jrHyv
JO8V/ctnZ0FvdrwnXFQDH6HY5rojjyEyjlZo6M8H2SunLX0u/goVh38D8o0bEmX/cZEKtTZx7ml+
lxDMSobdfIYPBl4FjVGHY+Zkdso0xQjctG1nNhkeYJQswLqfHEEdwaeCyGBh42cQfFLqxd0qK36M
M49U7JumoWH6aclbo0RXGKDI9vsBRnmOOaCUus9gbbrNUs6MTst+RCPXqXPi4tzbTtRAY5jd8LWv
9/ZUS/A2VSNUaiKvfdzG6cnzr72R
```
Note that the assertion contains the store ID (the 'store' field). See the Snapcraft documentation for details.
Assign such data to the `snap-store-assertions` key by first placing it into a YAML-formatted file, say `assertion.yaml`, and then proceeding as follows:

```
juju model-config assertion.yaml
```
The file contents look like this:

```
snap-store-assertions: |-
  type: store
  authority-id: canonical
  store: YDBQvAwC2CfJElRq2XGkqcjR4bA9yNr2
  operator-id: eJ8VwwkInXdLo5nIgoSKH8j95qs6BQ7D
  timestamp: 2017-11-24T12:10:19.881852Z
  url: http://firestorm.local
  sign-key-sha3-384: BWDEoaqyr25nF5SNCvEv2v7QnM9QsfCc0PBMYD_i2NGSQ32EF2d4D0hqUel3m8ul

  AcLBUgQAAQoABgUCWhgMKwAA3gYQAK68FSpGO3MQTOHuXar15Te7nf7RKa/5gJR2jIDf45XSVhYt
  fsWdX5yEaRwoXWor84Tesm1XtYodyNRbBAKmz7a/1/tT105UxtnflO1Y42Yb4AliFtvW7Sc1eHO3
  pg/ZAhx/2LmchBFJURon+vWi/scCr6GkUoQ+xNvCQpA0hWPfD4BnS5TJjhiA8PyGQWTLmyms5jbK
  5AhIdogFKpPfmeaSCgSjz2OsMMJYQO639A2gmoT2zSHqJs4+/bTb2Oq4j08Am7Wv28vyVglWdedc
  QKZuBJ/sepmZzHcWHNb65z3+KT+VC12LLQd/I+SxUkTsBNKC1mwpY39PrAsJDMCltxCepKmti0T6
  hwYCYrrA6vBXjqoSRyW/YzDKRB0VpN3GwCE/1DmuxNFN2CUn4SM+q+SYmCuIaoDCmMyk6P9jrHyv
  JO8V/ctnZ0FvdrwnXFQDH6HY5rojjyEyjlZo6M8H2SunLX0u/goVh38D8o0bEmX/cZEKtTZx7ml+
  lxDMSobdfIYPBl4FjVGHY+Zkdso0xQjctG1nNhkeYJQswLqfHEEdwaeCyGBh42cQfFLqxd0qK36M
  M49U7JumoWH6aclbo0RXGKDI9vsBRnmOOaCUus9gbbrNUs6MTst+RCPXqXPi4tzbTtRAY5jd8LWv
  9/ZUS/A2VSNUaiKvfdzG6cnzr72R
```
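The store ID from the assertion can then be assigned to `snap-store-proxy` so that machines use the local store:

```shell
# The store ID below is the one appearing in the sample assertion's
# 'store' field; substitute your own store's ID.
juju model-config snap-store-proxy=YDBQvAwC2CfJElRq2XGkqcjR4bA9yNr2
```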
Instead of proxying client requests to an internet-based repository, it is possible to maintain the repository internally. That is, you can have a copy or mirror of an Ubuntu package repository. This option has a large storage requirement and the initial setup/download time is considerable. Regular mirror synchronization will also be needed.
Some popular mirroring solutions are apt-mirror, debmirror, and aptly.
Juju uses the `apt-mirror` option for this.
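For example, assuming an internal mirror served at a hypothetical URL:

```shell
# Direct machines' APT sources at an internal package mirror
# (URL is an assumption; substitute your mirror's address).
juju model-config apt-mirror=http://archive.internal/ubuntu
```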
For any sort of mirror, an HTTP/S server (i.e. a web server) will be required to respond to the actual client requests. The most common choices are Apache and nginx.
The download of an agent destined for a Juju machine is proxied through the controller. If the controller cannot satisfy the request via its cache, it will download the agent from the official agent site and then pass it on to the machine. It is possible, however, to set up an agent mirror so that the remote site is not solicited by the controller: download the latest few agents and configure one of the aforementioned web servers accordingly. The mirrored agents will need updating as new versions of Juju become available.
Agent downloads can be performed manually from the official site where, for example, the file `juju-2.3.1-ubuntu-amd64.tgz` is the 2.3.1 agent for all Ubuntu releases on the AMD64 architecture.
Downloads can also be made with the `juju sync-agent-binaries` command. Note that this method results in a larger download, as only the major and minor version numbers can be specified (e.g. 2.3, not 2.3.1) and all architectures are retrieved. Additionally, there will be a file for every Ubuntu release even though they are all identical (e.g. `juju-2.3.1-bionic-amd64.tgz`). Each of these files is approximately 27 MiB in size.
Below, all 2.3 agents are retrieved and placed into a local directory:

```
mkdir -p /home/ubuntu/tmp/agents-2.3
juju sync-agent-binaries --version 2.3 --local-dir=/home/ubuntu/tmp/agents-2.3
```
Once the agents are downloaded, the `juju sync-agent-binaries` command can also be used to push them to a model, thereby foregoing the need for a mirror.
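If a mirror is used instead, one way to point a new controller at it is the `agent-metadata-url` configuration key at bootstrap time. A hedged sketch, assuming the mirror (including its simplestreams metadata) is served at a hypothetical internal URL:

```shell
# Bootstrap against a locally mirrored agent tree rather than the
# official agent site (URL is an assumption).
juju bootstrap localhost \
    --config agent-metadata-url=http://archive.internal/agents
```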
A mirror can be made of the official cloud images (http://cloud-images.ubuntu.com). This will primarily be useful for a localhost cloud (LXD), but a local OpenStack installation can also make use of such a mirror if LXD containers are put on its instances.
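For Juju itself, such a mirror can be advertised via the `image-metadata-url` configuration key at bootstrap time, assuming the mirror exposes simplestreams image metadata (the URL below is an assumption):

```shell
# Use locally mirrored cloud image metadata instead of
# cloud-images.ubuntu.com (URL is an assumption).
juju bootstrap localhost \
    --config image-metadata-url=http://archive.internal/images
```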