Introducing the Juju Terraform provider

The combination of Juju and the ecosystem of Charmed Operators enables anyone from individuals to enterprises to operate applications anywhere. Whether you need PostgreSQL on bare metal, Kubernetes on AWS, or Grafana on OpenStack, Juju has you covered. Juju provides a declarative way to install, provision, maintain, update, upgrade, and connect applications on and across Kubernetes clusters, containers, virtual machines, and bare-metal machines, on public or private clouds.

The focus of Juju has always been on the reliable and secure operation of applications. It does not provide generic abstractions for creating and managing arbitrary cloud resources outside the scope of the applications it’s driving. Cloud-specific resources can be created by charms, but when deploying Charmed solutions into the enterprise, there is often existing infrastructure they must “plug into” or sit alongside - like an org-wide Amazon VPC, or a particular Virtual Network on Azure.

HashiCorp Terraform is a hugely successful Swiss Army knife of infrastructure provisioning. It has a large community and over 2,000 providers covering IaaS, PaaS and SaaS vendors - and even pizza delivery…

Today the Juju team is proud to announce a Terraform provider for Juju!

Why Terraform?

You may already know that Juju has the ability to provision the infrastructure it requires at charm deploy time, and it will continue to handle that part itself. The Terraform provider assumes the presence of a Juju controller that has been bootstrapped on one of its supported compute substrates (Azure, Kubernetes, VMware, OpenStack, etc.). Once there is a controller in place, the Terraform provider can be used to create models, deploy and configure charms, and create relations both within a model and across models.

The benefit here is two-fold:

  • If you or your organisation are already familiar with Terraform, you can now integrate Juju deployments seamlessly into your infrastructure delivery pipeline

  • You can reference attributes of other Terraform-managed resources from your Juju deployments. For example: if you deploy an ingress charm on Kubernetes that needs to be configured with the hostname of an external load-balancer on AWS, you can grab the hostname for the load-balancer at deploy time and pipe it straight into your charm configuration - see the sketch below!
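
As a minimal sketch of that second point - the aws_lb.ingress resource, the juju_model.dev reference and the external_hostname config key are all hypothetical here, so check your charm's actual config options:

resource "juju_application" "ingress" {
  name  = "ingress"
  model = juju_model.dev.name

  charm {
    name = "traefik-k8s"
  }

  config = {
    # Pipe the AWS load balancer's hostname straight into the charm config
    external_hostname = aws_lb.ingress.dns_name
  }
}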

It’s early days for the provider, so we encourage you to give it a try and let us know how you got on!

Quick start!

If you’d like to give the provider a spin, first ensure that you have a Linux machine available that already has MicroK8s, LXD and Juju installed. If you’re unfamiliar with this process, you can check out the dev setup docs, or the broader Juju docs.

You can install Terraform from the Snap Store:

sudo snap install --classic terraform

MicroK8s example

Make sure you’ve bootstrapped Juju onto your MicroK8s install. If you haven’t, run:

juju bootstrap microk8s micro

In this example, we will create a new model named dev, and deploy the following applications:

  • grafana-k8s (beta channel)
  • hello-kubecon (stable)
  • parca-k8s (edge)
  • prometheus-k8s (beta)
  • zinc-k8s (stable)

We’ll also create some relations between those applications. To get started, let’s grab the Terraform file from GitHub:

# Grab the Terraform file
wget -qO k8s-example.tf jnsgr.uk/tf-k8s-example

# Initialise the provider
terraform init

# Run Terraform Plan, check the output
terraform plan

# Deploy the resources, skip the approval
terraform apply -auto-approve

This command will return relatively quickly, but if you watch the Juju status output, you should soon see the model settle into the following state:

❯ juju status --relations
Model  Controller  Cloud/Region        Version  SLA          Timestamp
dev    micro       microk8s/localhost  2.9.33   unsupported  21:15:36+01:00

App            Version         Status  Scale  Charm           Channel  Rev  Address         Exposed  Message
grafana                        active      1  grafana-k8s     beta      18  10.152.183.66   no       
hello-kubecon                  active      1  hello-kubecon   stable    14  10.152.183.145  no       
parca          v0.0.1-dcec8f2  active      1  parca-k8s       edge      16  10.152.183.37   no       
prometheus     2.33.5          active      1  prometheus-k8s  beta      55  10.152.183.239  no       
zinc           0.3.0           active      1  zinc-k8s        stable    28  10.152.183.85   no       

Unit              Workload  Agent  Address      Ports  Message
grafana/0*        active    idle   10.1.90.135         
hello-kubecon/0*  active    idle   10.1.90.139         
parca/0*          active    idle   10.1.90.129         
prometheus/0*     active    idle   10.1.90.140         
zinc/0*           active    idle   10.1.90.144         

Relation provider            Requirer                     Interface           Type     Message
grafana:grafana-peers        grafana:grafana-peers        grafana_peers       peer     
parca:metrics-endpoint       prometheus:metrics-endpoint  prometheus_scrape   regular  
prometheus:grafana-source    grafana:grafana-source       grafana_datasource  regular  
prometheus:prometheus-peers  prometheus:prometheus-peers  prometheus_peers    peer     
zinc:grafana-dashboard       grafana:grafana-dashboard    grafana_dashboard   regular  
zinc:metrics-endpoint        prometheus:metrics-endpoint  prometheus_scrape   regular  
zinc:profiling-endpoint      parca:profiling-endpoint     parca_scrape        regular  

Success! If you’d like, you can now visit the Prometheus web endpoint, or log in to the Zinc frontend after grabbing the admin password using the included action. To clean up, you can run:

terraform destroy -auto-approve

Details for nerds!

If we take a look at the contents of the file, things should feel very familiar to existing Terraform users. First, we configure the provider:

terraform {
  required_providers {
    juju = {
      source  = "juju/juju"
      version = "0.4.0"
    }
  }
}

provider "juju" {}

By default, the provider will use whichever controller is currently selected in the Juju client; run juju controllers to see which that is. The provider currently supports four resource types: juju_model, juju_application, juju_integration and juju_offer. If you need to target a specific controller explicitly, you can set its details in the provider configuration.
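
For example, a hedged sketch - the addresses and credentials below are placeholders, and the exact attribute names should be checked against the provider documentation for your version:

provider "juju" {
  # Placeholder values - point these at your own controller
  controller_addresses = "10.0.0.1:17070"
  username             = "admin"
  password             = "change-me"
  ca_certificate       = file("~/ca-cert.pem")
}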

Application specifications can look fairly simple, too:

resource "juju_application" "zinc" {
  name  = "zinc"
  model = juju_model.development.name
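  # Equivalent to 'juju deploy --trust': grants the charm access to cloud credentials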
  trust = true

  charm {
    name = "zinc-k8s"
  }
}
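
The juju_model.development reference above assumes a model resource declared elsewhere in the same file; a minimal sketch:

resource "juju_model" "development" {
  name = "dev"
}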

More information can be found in the provider documentation.

LXD example with cross-model relation

Now we’ll bootstrap a controller on LXD, and show how to create a cross-model relation:

# Bootstrap an LXD controller if you haven't already
juju bootstrap localhost lxd

# Fetch the Terraform file
wget -qO lxd-example.tf jnsgr.uk/tf-lxd-example

# Initialise the provider
terraform init

# Run Terraform Plan, check the output
terraform plan

# Deploy the resources, skip the approval
terraform apply -auto-approve

The key piece in this example is how we establish the cross-model relation (CMR); first, we create an offer:

resource "juju_offer" "postgresql_offer" {
  model            = juju_model.db.name
  application_name = juju_application.postgresql.name
  endpoint         = "db"
}

Then, from the application's model, we consume that offer with a juju_integration resource:

resource "juju_integration" "postgresql_hello_juju" {
  model = juju_model.app.name

  application {
    name     = juju_application.hello_juju.name
    endpoint = "db"
  }

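  # The remote application is referenced by its offer URL rather than by name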
  application {
    offer_url = juju_offer.postgresql_offer.url
  }
}

Once things have settled, you can see the status with juju status --relations:

❯ juju status --relations
Model  Controller  Cloud/Region         Version  SLA          Timestamp
app    lxd         localhost/localhost  2.9.33   unsupported  21:22:25+01:00

SAAS      Status  Store  URL
database  active  local  admin/db.database

App    Version  Status  Scale  Charm       Channel  Rev  Exposed  Message
myapp           active      1  hello-juju  stable     8  no       

Unit      Workload  Agent  Machine  Public address  Ports   Message
myapp/0*  active    idle   0        10.235.95.180   80/tcp  

Machine  State    Address        Inst id        Series  AZ  Message
0        started  10.235.95.180  juju-6717c9-0  focal       Running

Relation provider  Requirer  Interface  Type     Message
database:db        myapp:db  pgsql      regular 

And that’s it! You should have postgresql deployed into a model named db, and hello-juju deployed into a model named app with a functioning cross-model relation.

Get involved!

A reminder that you can join the charming community on our Charmhub Discourse instance, and even chat with us and other members of the community over on the Charmhub Mattermost. Feel free to reach out and tell us about the cool things you’re building with Juju and Terraform, and give any feedback you might have!

You can find the source code for the provider on GitHub, where you can file issues or make contributions.


I think the procedure here may need some additional nuance, perhaps due to recent changes in its dependencies.

  1. This does not appear to work with Juju 2.9/stable. I get errors of the form Error: internal error, Client(5) claimed to return *client.Client but returned *client.ClientV5.

  2. Juju 3.0/stable (3.0.3) and 3.1/stable (3.1.2) both “only work with strictly confined microk8s”, according to the error I get when I try to bootstrap them. However, this isn’t called out, and the microk8s latest/stable channel uses classic confinement.

  3. If I use --channel=1.27-strict/stable for microk8s, I hit a different error, which is kind of strange. Using the 3.1/stable Juju snap (specifically, version 3.1.2), after confirming that the controller shows up as 3.1.2 via juju controllers, and after clearing out my ~/.terraform.d as well as the .terraform* files/dirs in my working directory, I still run into issues at the “terraform apply -auto-approve” step. It tells me:

    Error: juju client with major version 2 used with a controller having major version 3 not supported
    update your juju client to match the version running on the controller
    

Any guidance here would be appreciated, as I think this may be difficult for people to reproduce at present.

@vultaire There are a couple of separate points in your items. To answer them individually:

  1. I just tried the steps above with 2.9 and it seems to be working fine. The only difference is that I used the latest tip of the provider, while the steps above use 0.4.0 (that might be something to update), so you might want to try it with version = "0.8.0" (see the snippet after this list); it seems to be working OK on my end. This is against a 2.9 controller bootstrapped on MicroK8s, which brings me to the second point:

  2. The Terraform provider currently only supports Juju 2.9, so I wouldn’t expect it to work with anything >=3.0. That support is coming soon (probably shortly after the Juju 3.3 release). Maybe we can edit the steps to call out the Juju version - though if you look at the juju status output above, you can see the agent is on version 2.9.33.

  3. The error makes sense, because the Terraform Juju provider will create a 2.9 client against your 3.1.2 controller. This is a consequence of the second item above.
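
To pin the newer provider version, update the required_providers block from the example above - same structure, only the version changes:

terraform {
  required_providers {
    juju = {
      source  = "juju/juju"
      version = "0.8.0"
    }
  }
}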

So again, I suggest bootstrapping a 2.9 controller on MicroK8s and using version 0.8.0 of the Terraform Juju provider. Keep in mind that if you’re building your own Juju from source and bootstrapping with that, then after building you might need to run make microk8s-operator-update to update the Docker image that goes into the controller pod on MicroK8s (you shouldn’t need to worry about that if you’re using Juju from the snap).

Hope this helps, cheers!

Hi @cderici - thanks for getting back to me on this.

I was able to re-try on 2.9 after modifying Jon’s k8s-example.tf to use 0.8.0 for the provider, and I was able to get further. parca-k8s appears to have been updated to require Juju 3.1.0, so it is necessary to drop it and its relations from the k8s-example.tf file to make this work at present.

Thanks for the pointer!