The combination of Juju and the ecosystem of Charmed Operators enables anyone from individuals to enterprises to operate applications anywhere. Whether you need PostgreSQL on bare metal, Kubernetes on AWS, or Grafana on OpenStack, Juju has you covered. Juju provides a declarative way to install, provision, maintain, update, upgrade, and connect applications on and across Kubernetes clusters, containers, virtual machines, and bare-metal machines, on public or private clouds.
The focus of Juju has always been on the reliable and secure operation of applications. It does not provide generic abstractions for creating and managing arbitrary cloud resources outside the scope of the applications it’s driving. Cloud-specific resources can be created by charms, but often when deploying Charmed solutions into the enterprise, there is existing infrastructure they must “plug into” or sit alongside, like an org-wide Amazon VPC or a particular Virtual Network on Azure.
HashiCorp Terraform is a hugely successful Swiss Army knife of infrastructure provisioning. It has a large community and over 2,000 providers for different cloud platforms in the form of IaaS, PaaS and SaaS vendors, and even for pizza delivery…
Today the Juju team is proud to announce a Terraform provider for Juju!
Why Terraform?
You may already know that Juju has the ability to provision the infrastructure it requires at charm deploy time, and it will continue to handle that part itself. The Terraform provider assumes the presence of a Juju controller that has been bootstrapped on one of its supported compute substrates (Azure, Kubernetes, VMware, OpenStack, etc.). Once there is a controller in place, the Terraform provider can be used to create models, deploy and configure charms, and create relations both within a model and across models.
The benefit here is two-fold:
- If you or your organisation are already familiar with Terraform, you can now integrate Juju deployments seamlessly into your infrastructure delivery pipeline.
- You can cross-reference attributes from other resources managed with Terraform from your Juju deployments. For example: if you deploy an ingress charm on Kubernetes that needs to be configured with the hostname of an external load-balancer on AWS, you can grab the hostname for the load-balancer at deploy time and pipe it straight into your charm configuration, as sketched after this list!
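Here’s a minimal sketch of that pattern. It assumes the AWS provider is configured, that a load balancer named external already exists, and that the ingress charm takes a (hypothetical) external-hostname config key:

# Look up an existing load balancer managed outside this configuration
data "aws_lb" "external" {
  name = "external"
}

# Pipe its DNS name straight into the charm's configuration
resource "juju_application" "ingress" {
  name  = "ingress"
  model = juju_model.dev.name  # assumes a juju_model resource labelled "dev"

  charm {
    name = "nginx-ingress-integrator"  # charm name used for illustration
  }

  config = {
    "external-hostname" = data.aws_lb.external.dns_name  # hypothetical config key
  }
}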
It’s early days for the provider, so we encourage you to give it a try and let us know how you get on!
Quick start!
If you’d like to give the provider a spin, first ensure that you have a Linux machine available that already has MicroK8s, LXD and Juju installed. If you’re unfamiliar with this process, you can check out the dev setup docs, or the broader Juju docs.
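If you’re starting from scratch, all three prerequisites are available as snaps; a minimal sketch (channels and post-install configuration may vary for your setup):

sudo snap install microk8s --classic
sudo snap install lxd
sudo snap install juju --classic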
You can install Terraform from the Snap Store:
sudo snap install --classic terraform
MicroK8s example
Make sure you’ve bootstrapped Juju onto your MicroK8s install. If you haven’t, run:
juju bootstrap microk8s micro
In this example, we will create a new model named dev, and deploy the following applications:

- hello-kubecon
- zinc-k8s
- prometheus-k8s
- grafana-k8s
- parca-k8s

We’ll also create some relations between those applications. To get started, let’s grab the Terraform file from GitHub:
# Grab the Terraform file
wget -qO k8s-example.tf jnsgr.uk/tf-k8s-example
# Initialise the provider
terraform init
# Run Terraform Plan, check the output
terraform plan
# Deploy the resources, skip the approval
terraform apply -auto-approve
This command will return relatively quickly, but if you watch the Juju status output, you should soon see the model settle into the following state:
❯ juju status --relations
Model  Controller  Cloud/Region        Version  SLA          Timestamp
dev    micro       microk8s/localhost  2.9.33   unsupported  21:15:36+01:00

App            Version         Status  Scale  Charm           Channel  Rev  Address         Exposed  Message
grafana                        active      1  grafana-k8s     beta      18  10.152.183.66   no
hello-kubecon                  active      1  hello-kubecon   stable    14  10.152.183.145  no
parca          v0.0.1-dcec8f2  active      1  parca-k8s       edge      16  10.152.183.37   no
prometheus     2.33.5          active      1  prometheus-k8s  beta      55  10.152.183.239  no
zinc           0.3.0           active      1  zinc-k8s        stable    28  10.152.183.85   no

Unit              Workload  Agent  Address      Ports  Message
grafana/0*        active    idle   10.1.90.135
hello-kubecon/0*  active    idle   10.1.90.139
parca/0*          active    idle   10.1.90.129
prometheus/0*     active    idle   10.1.90.140
zinc/0*           active    idle   10.1.90.144

Relation provider            Requirer                     Interface           Type     Message
grafana:grafana-peers        grafana:grafana-peers        grafana_peers       peer
parca:metrics-endpoint       prometheus:metrics-endpoint  prometheus_scrape   regular
prometheus:grafana-source    grafana:grafana-source       grafana_datasource  regular
prometheus:prometheus-peers  prometheus:prometheus-peers  prometheus_peers    peer
zinc:grafana-dashboard       grafana:grafana-dashboard    grafana_dashboard   regular
zinc:metrics-endpoint        prometheus:metrics-endpoint  prometheus_scrape   regular
zinc:profiling-endpoint      parca:profiling-endpoint     parca_scrape        regular
Success! If you’d like, you can now visit the Prometheus web endpoint, or log in to the Zinc frontend after grabbing the admin password using the included action, as sketched below.
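A minimal sketch of fetching that password, assuming the action is named get-admin-password (juju actions zinc will list what’s actually available):

juju run-action zinc/0 get-admin-password --wait

To clean up, you can run: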
terraform destroy -auto-approve
Details for nerds!
If we take a look at the contents of the file, things should feel very familiar to existing Terraform users. First, we configure the provider:
terraform {
  required_providers {
    juju = {
      source  = "juju/juju"
      version = "0.4.0"
    }
  }
}
provider "juju" {}
By default, the provider will use whichever controller is currently selected in the Juju client; running juju controllers will show which that is. If you need to target a specific controller, you can pass its connection details in the provider configuration, as sketched below.
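A minimal sketch of an explicit provider configuration (all values are placeholders; the provider accepts the controller’s API address, credentials, and CA certificate):

provider "juju" {
  controller_addresses = "10.0.0.1:17070"
  username             = "admin"
  password             = "changeme"
  ca_certificate       = file("~/ca-cert.pem")
}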
The provider only supports four resource types at the moment:
- juju_model - for deploying new models
- juju_application - for deploying charms
- juju_offer - for creating cross-model relation offers
- juju_integration - for creating integrations (formerly known as relations!)
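A model is about as small as a resource gets; here’s a minimal sketch that would create the dev model used earlier (the resource label, development, is what other resources reference):

resource "juju_model" "development" {
  name = "dev"
}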
Application specifications can look fairly simple, too:
resource "juju_application" "zinc" {
name = "zinc"
model = juju_model.development.name
trust = true
charm {
name = "zinc-k8s"
}
}
More information can be found in the provider documentation.
LXD example with cross-model relation
Now we’ll bootstrap a controller on LXD, and show how to create a cross-model relation:
# Bootstrap a LXD controller if you haven't already
juju bootstrap localhost lxd
# Grab the Terraform file
wget -qO lxd-example.tf jnsgr.uk/tf-lxd-example
# Initialise the provider
terraform init
# Run Terraform Plan, check the output
terraform plan
# Deploy the resources, skip the approval
terraform apply -auto-approve
The key piece in this example is how we establish the cross-model relation (CMR); first we create an offer:
resource "juju_offer" "postgresql_offer" {
model = juju_model.db.name
application_name = juju_application.postgresql.name
endpoint = "db"
}
resource "juju_integration" "postgresql_hello_juju" {
model = juju_model.app.name
application {
name = juju_application.hello_juju.name
endpoint = "db"
}
application {
offer_url = juju_offer.postgresql_offer.url
}
}
You can see the status once things have settled with juju status --relations:
❯ juju status --relations
Model  Controller  Cloud/Region         Version  SLA          Timestamp
app    lxd         localhost/localhost  2.9.33   unsupported  21:22:25+01:00

SAAS      Status  Store  URL
database  active  local  admin/db.database

App    Version  Status  Scale  Charm       Channel  Rev  Exposed  Message
myapp           active      1  hello-juju  stable     8  no

Unit      Workload  Agent  Machine  Public address  Ports   Message
myapp/0*  active    idle   0        10.235.95.180   80/tcp

Machine  State    Address        Inst id        Series  AZ  Message
0        started  10.235.95.180  juju-6717c9-0  focal       Running

Relation provider  Requirer  Interface  Type     Message
database:db        myapp:db  pgsql      regular
And that’s it! You should have postgresql deployed into a model named database, and hello-juju deployed into a model named app, with a functioning cross-model relation.
Get involved!
A reminder that you can join the charming community on our Charmhub Discourse instance, and even chat with us and other members of the community over on the Charmhub Mattermost. Feel free to reach out and tell us about the cool things you’re building with Juju and Terraform, and share any feedback you might have!
You can find the source code for the provider on GitHub, where you can file issues or make contributions.