In this episode of “what kind of cool stuff we can do with the Terraform Juju provider,” we explore how to manually provision machines from our existing setup into our Juju model using the Terraform Juju provider. For a primer on the Terraform provider itself, check out the latest posts by some handsome fellas here and there.
When we run juju deploy, Juju provisions whatever machines and resources are necessary to deploy and run the applications we need on the cloud we’re working with. When it comes to machines, though, we may already have an environment of our own set up, especially if we’re using something like Terraform to provision a whole infrastructure on a platform like AWS or vSphere. Or you may be on the IS team or the Solutions QA team in a big company, with awesome environments already set up running all the projects and QA-ing them, and you wish you could just pick the actual machines you want to deploy particular workloads on. Well, technically you can do that with juju add-machine ssh:user@host, but there is a better way: provision everything with a single push of a button from a Terraform plan, integrated with the existing infrastructure setup sitting right there inside that same plan.
Let’s see how you can actually do that with our recently added support for manual provisioning in the terraform-provider-juju, with a bit of a tutorial-style demo.
First we need a machine
For our little demo, we’ll create a tiny infrastructure on AWS using Terraform, and put in it a machine ready to be provisioned by the Terraform Juju provider. We’ll work on AWS to get a little closer to a real-world scenario; however, this works on essentially any substrate that can give you some sort of compute instance with an IP and ssh access. For example, the PR that adds this functionality uses LXD in its QA steps.
Let’s start working on our Terraform plan to create our environment. We’ll use both the Terraform AWS provider and the Terraform Juju provider in the same plan: the former for setting up and creating the machine, and the latter for actually provisioning the machine into our Juju model.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "5.4.0"
    }
    juju = {
      version = "0.4.1"
      source  = "juju/juju"
    }
  }
}

provider "aws" {
  region     = "us-west-2"
  access_key = <redacted>
  secret_key = <redacted>
}

provider "juju" {}

variable "pub_key_path" {
  type    = string
  default = "$PATH_TO_THIS_PLAN/aws-machine.pub"
}

variable "priv_key_path" {
  type    = string
  default = "$PATH_TO_THIS_PLAN/aws-machine"
}
Essentially what we need here is a small AWS instance that we can ssh into, because that’s how the Terraform Juju provider will provision it into the model. To do that, it’ll need the ssh key pair (somewhat annoyingly, the paths to the key files rather than the keys themselves, here’s why). So, just for convenience and to be explicit here, we’ll ssh-keygen a pair and note down the paths to use. We’ll instruct the Terraform AWS provider to create the instance with this pair as access keys, and have the Terraform Juju provider use the same pair to access and provision the machine.
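If you don’t already have a pair lying around, generating one could look something like this (the filenames below are just what the variable defaults above point at; adjust the path to wherever your plan lives):
$ ssh-keygen -t rsa -b 4096 -N "" -C "aws-machine-test" -f ./aws-machine
This creates aws-machine and aws-machine.pub next to the plan, which is exactly what priv_key_path and pub_key_path refer to.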
I don’t see much point in diving too deep into the machine creation part, as it’s particular to the substrate we’re using. The important part is to pick out the essential information the Terraform Juju provider needs to access the machine from the outside: the IP address and the key pair.
Let’s put the resources for the AWS instance creation into our plan:
data "aws_ami" "ubuntu" {
most_recent = true
filter {
name = "name"
values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
owners = ["099720109477"] # Canonical
}
resource "aws_instance" "test-instance" {
ami = data.aws_ami.ubuntu.id
key_name = aws_key_pair.my-keys.key_name
instance_type = "t3.micro"
associate_public_ip_address = true
vpc_security_group_ids = [
"sg-28f2de4c"
]
tags = {
Name = "Terraform Test"
}
}
resource "aws_key_pair" "my-keys" {
key_name = "aws-machine-test-pair"
public_key = file(var.pub_key_path)
}
output "ec2instance" {
value = aws_instance.test-instance.public_ip
}
Details for AWS nerds!
- You can use any AMI; I just wanted to be a little explicit here.
- If you don’t have a VPC security group, then you’ll need an aws_security_group resource; here’s an example:
resource "aws_security_group" "project-iac-sg" {
name = "secgroupname"
description = "secgroupname"
vpc_id = "vpc-id"
// To Allow SSH Transport
ingress {
from_port = 22
protocol = "tcp"
to_port = 22
cidr_blocks = ["0.0.0.0/0"]
}
}
The same goes for the VPC: if you don’t have one, then you’ll need the aws_vpc resource, something like this:
resource "aws_vpc" "main" {
cidr_block = var.base_cidr_block
}
The VPC I’m using is already defined in our AWS setup for Juju, so I’m sort of cheating there a little bit for the sake of simplicity in the demo.
The important part is that the security group you create needs ingress access on port 22 so the instance can accept incoming ssh connections.
Let’s give some notes for the folks who aren’t super familiar with Terraform. Our instance is created with resource "aws_instance" "test-instance", and it uses the key pair defined with resource "aws_key_pair" "my-keys". In the next step, we’ll pass the IP address of the test-instance and the key pair to the resources that will be created for Juju to provision that instance. The output at the end is just for our own convenience; the Juju resources will refer to aws_instance.test-instance.public_ip directly from the resource.
Finally, we apply this plan and see the happy output saying that the instance we need is created and ready to rock:
aws_key_pair.my-keys: Creating...
aws_instance.test-instance: Creating...
aws_key_pair.my-keys: Creation complete after 1s [id=aws-machine-test-pair]
aws_instance.test-instance: Still creating... [10s elapsed]
aws_instance.test-instance: Creation complete after 13s [id=i-060834790f1337756]
Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
Outputs:
ec2instance = "18.237.38.94"
Roll in the Terraform Juju provider!
Now we have a machine ready to go, and we know how to access it. As we stated earlier, this machine can be created anywhere; all the Terraform Juju provider cares about is that we have an IP and a key pair to access it. Note that the Terraform Juju provider needs a bootstrapped Juju controller, so if we don’t have one already we’ll need to bootstrap one on AWS, e.g. juju bootstrap aws skynet.
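By the way, the empty provider "juju" {} block from the beginning of the plan should make the provider pick up the connection details of whatever controller your local Juju client is currently pointing at (you can also pass the controller address and credentials explicitly in that block). A quick way to check which controller that is:
$ juju whoami    # shows the controller, model, and user your client is currently using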
For our scenario, let’s say that you already have a Terraform plan that uses the Terraform AWS provider, along with a bunch of other complicated setup for your big operation environment. All you need to do to use that setup with Juju is to add Juju’s resources to the same plan, referring to whichever resources you want to use. If that clear explanation is not super obvious, let’s see it in action: we’ll provision the aws_instance.test-instance in our model and deploy a charm on it. Let’s add the following to our plan:
resource "juju_model" "machinetest" {
name = "machinetest"
}
resource "juju_application" "placement_example" {
name = "placement-example"
model = juju_model.machinetest.name
charm {
name = "ubuntu"
series = "focal"
}
units = 1
placement = juju_machine.my-machine.machine_id
depends_on = [juju_machine.my-machine]
}
resource "juju_machine" "my-awesome-machine" {
model = juju_model.machinetest.name
name = "machine-manually-provisioned-at-aws"
ssh_address = format("ubuntu@%s", aws_instance.test-instance.public_ip)
public_key_file = var.pub_key_path
private_key_file = var.priv_key_path
depends_on = [aws_instance.test-instance]
}
So the plan here is to create a model called machinetest, provision a machine using the ssh_address and key pair, and deploy an application on that machine. Note that the machine resource receives the IP address directly from the aws_instance resource and uses the same key pair that the Terraform AWS provider used to create the instance.
After running terraform plan && terraform apply --auto-approve on our plan, we see yet another happy Terraform output:
Plan: 3 to add, 0 to change, 0 to destroy.
juju_model.machinetest: Creating...
juju_model.machinetest: Creation complete after 1s [id=e3c8ae4c-dccd-4ce3-852d-8e6ce2fa66b7]
juju_machine.my-awesome-machine: Creating...
juju_machine.my-awesome-machine: Still creating... [10s elapsed]
juju_machine.my-awesome-machine: Still creating... [20s elapsed]
juju_machine.my-awesome-machine: Still creating... [30s elapsed]
juju_machine.my-awesome-machine: Still creating... [40s elapsed]
juju_machine.my-awesome-machine: Still creating... [50s elapsed]
juju_machine.my-awesome-machine: Still creating... [1m0s elapsed]
juju_machine.my-awesome-machine: Still creating... [1m10s elapsed]
juju_machine.my-awesome-machine: Still creating... [1m20s elapsed]
juju_machine.my-awesome-machine: Still creating... [1m30s elapsed]
juju_machine.my-awesome-machine: Still creating... [1m40s elapsed]
juju_machine.my-awesome-machine: Creation complete after 1m47s [id=machinetest:0:machine-manually-provisioned-at-aws]
juju_application.placement_example: Creating...
juju_application.placement_example: Creation complete after 4s [id=machinetest:placement-example]
Apply complete! Resources: 3 added, 0 changed, 0 destroyed.
Outputs:
ec2instance = "18.237.38.94"
This time the 3 added resources are the Juju resources! What happens is that the Terraform Juju provider creates a model called machinetest, then it creates the machine resource juju_machine.my-awesome-machine, which provisions the machine into the model by accessing it over ssh and using real actual magic. Only then does it create the application resource juju_application.placement_example (i.e., deploy the ubuntu charm) and get it running on our machine.
So finally, after we let it steep for a bit, we see the happy Juju model:
$ juju status -m machinetest
Model Controller Cloud/Region Version SLA Timestamp
machinetest terraform-test-controller aws/us-east-1 2.9.44.1 unsupported 10:59:05-06:00
App Version Status Scale Charm Channel Rev Exposed Message
placement-example 20.04 active 1 ubuntu stable 24 no
Unit Workload Agent Machine Public address Ports Message
placement-example/0* active idle 0 18.237.38.94
Machine State Address Inst id Series AZ Message
0 started 18.237.38.94 manual:18.237.38.94 focal Manually provisioned machine
Here’s the kicker!
So far, in the same Terraform plan, we used the Terraform AWS provider to create an instance on AWS, and we used the Terraform Juju provider to access that instance and provision it in our Juju model, which, I believe, is super cool. If I go a step further, however, just as an experiment, and follow the QA steps in this PR to create a machine on LXD, then add another machine and application resource to our running Terraform plan to provision and deploy onto it, we get magic:
Model Controller Cloud/Region Version SLA Timestamp
machinetest terraform-test-controller aws/us-east-1 2.9.44.1 unsupported 11:35:00-06:00
App Version Status Scale Charm Channel Rev Exposed Message
placement-example 20.04 active 1 ubuntu stable 24 no
placement-example2 22.04 active 1 ubuntu stable 24 no
Unit Workload Agent Machine Public address Ports Message
placement-example2/0* active idle 1 10.137.32.63
placement-example/0* active idle 0 18.237.38.94
Machine State Address Inst id Series AZ Message
0 started 18.237.38.94 manual:18.237.38.94 focal Manually provisioned machine
1 started 10.137.32.63 manual:10.137.32.63 jammy Manually provisioned machine
Machine number 0 is running on AWS, just like the controller that governs this model. Machine number 1, however, is an lxc instance on my local machine that I just created with lxc launch ubuntu:22.04 mtest. This is very cool because I’m pretty sure the controller on AWS doesn’t have any access to the lxc instance running on my local machine.
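For reference, the extra resources for the LXD machine look roughly like the following sketch, assuming the container’s IP is 10.137.32.63 (as in the status output above), that it’s reachable from where Terraform runs, and that the public half of a key pair has been added to the container’s ~/.ssh/authorized_keys (the resource and file names here are just illustrative):

resource "juju_machine" "my-lxd-machine" {
  model = juju_model.machinetest.name
  name  = "machine-manually-provisioned-at-lxd"
  # IP of the lxc container; the "ubuntu" user there must accept the key pair below.
  ssh_address      = "ubuntu@10.137.32.63"
  public_key_file  = var.pub_key_path
  private_key_file = var.priv_key_path
}

resource "juju_application" "placement_example2" {
  name  = "placement-example2"
  model = juju_model.machinetest.name
  charm {
    name   = "ubuntu"
    series = "jammy"
  }
  units      = 1
  placement  = juju_machine.my-lxd-machine.machine_id
  depends_on = [juju_machine.my-lxd-machine]
}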
So what this means is that I can have multiple machines on different substrates on the same model.
Now, this is all Juju dark magic stuff, and who knows what sorts of different wonders are waiting
for you if you want to go down that path.
However, what this means for our topic here with the Terraform Juju provider is that, with this manual provisioning support on the terraform-provider-juju, you can provision an astonishingly large number of machines, possibly created on different substrates (e.g. LXC, Azure, GCE, OCI), into a single Juju model, all in the same plan, with a single push of a button.
Well, maybe two pushes of a button
There are some unfortunate caveats to this methodology that I’m sure we’ll work on in the near future. For example, I masterfully concealed the fact that running both the AWS and Juju parts of a single plan simultaneously presents an issue. Although the resources that both providers use can be combined in the same plan without any problem, executing a single terraform apply will fail. That’s because on AWS, when an instance is created and assigned a public IP, it takes a little while for it to bring up sshd and actually accept ssh connections. Consequently, if we create all the resources at once, the Juju provisioner runs into issues as it tries to access the machine. Currently, there is no straightforward way to instruct the Terraform Juju provider to wait until the AWS instance is accessible.
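One possible workaround (hence the two pushes of a button) is to apply the plan in two passes, giving the instance time to bring up sshd before the Juju resources try to reach it; something like:
$ terraform apply -target=aws_instance.test-instance --auto-approve
$ terraform apply --auto-approve
The first pass creates only the AWS instance (and whatever it depends on); the second pass, run once the instance accepts ssh, creates the Juju model, machine, and application resources.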
Regardless, I’m sure this will enable numerous scenarios that I currently can’t imagine, and as we and others continue to use the Terraform Juju provider in practice, we will keep introducing new features and experimenting with cool new ways of using it. Stay tuned!