Question on Spaces, Deploying to an Elastic IP

Background

Hey, I am deploying three applications (a Node.js app, nginx, and certbot) via Juju!

I created an AWS VPC with two subnets (one public, one private), a NAT gateway, and an internet gateway.

About three months back I successfully used this VPC, via a JAAS controller and model, to deploy a webapp and give it TLS certs via certbot, and everything was seemingly fine. I had DNS too, via Route 53.

My issue now: I wiped out the applications in preparation to test some new configs. Same VPC, same networking, same spaces, same subnets, but when I deploy I am not reusing the same machines nor the same elastic IP address.

The problem

Why is it that when I deploy I get some auto-assigned public IPv4 address, but then need to attach an elastic IP so it can be associated with the Route 53 setup? Or am I wrong and no elastic IP is needed? If I attempt to hit the public IP shown in juju status, nothing happens or gets shown in the browser.
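
My understanding is that a Route 53 A record can point at any public IPv4 address, but the auto-assigned public IP changes whenever the instance is stopped or replaced, so an elastic IP is the only stable target. Something like this is the CLI shape of that record update (the hosted zone ID is a placeholder, and domain1.com / 12.34.56.78 are just illustrative):

# Hypothetical sketch: point the A record at whatever public IP the machine currently has
aws route53 change-resource-record-sets \
  --hosted-zone-id Z123EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "domain1.com",
        "Type": "A",
        "TTL": 300,
        "ResourceRecords": [{"Value": "12.34.56.78"}]
      }
    }]
  }'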

When I associate an elastic IP, the Juju agent never picks it up in the status, and in fact I can no longer SSH into the machine at all, even though I could just fine before the elastic IP was associated.
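
As far as I know, associating an elastic IP releases the auto-assigned public address on the spot, which would explain the old IP going dead immediately. The AWS side can be double-checked with something like this (the instance ID is a placeholder):

# Confirm what AWS currently reports for the instance
aws ec2 describe-addresses --filters Name=instance-id,Values=i-12345a
aws ec2 describe-instances --instance-ids i-12345a \
  --query 'Reservations[].Instances[].PublicIpAddress'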

Model                Controller  Cloud/Region   Version  SLA          Timestamp
my-first-mahrio-dev  JAAS        aws/us-west-1  2.9.18   unsupported  11:23:58+01:00

App      Version  Status  Scale  Charm    Store  Channel  Rev  OS      Message
certbot           active      1  certbot  local             8  ubuntu  (update) certbot is associated with mahrio
mahrio            active      1  mahrio   local            13  ubuntu  (website-relation-changed) finished change 10:21
nginx             active      1  nginx    local             9  ubuntu  (update) NGINX is running 10:22

Unit        Workload  Agent  Machine  Public address  Ports           Message
certbot/8*  active    idle   34       12.34.56.78                     (update) certbot is associated with mahrio
mahrio/14*  active    idle   35       10.0.2.188      3000/tcp        (website-relation-changed) finished change 10:21
nginx/9*    active    idle   34       12.34.56.78     80/tcp,443/tcp  (update) NGINX is running 10:22

Machine  State    DNS          Inst id   Series  AZ          Message
34       started  12.34.56.78  i-12345a  bionic  us-west-1b  running
35       started  10.0.2.188   i-12345b  bionic  us-west-1b  Start instance attempt 1

Here’s my bundle, exported from the working env, with a few defaults pared back, like channel (which was all stable and not needed):

series: bionic
applications:
  certbot:
    charm: /home/charms/certbot-charm
    num_units: 1
    to:
    - "1"
    bindings:
      "": mahrio-dev-public
  mahrio:
    charm: /home/charms/mahrio-charm
    num_units: 1
    to:
    - "2"
    options:
      git_deploy_private_key: |
        -----BEGIN OPENSSH PRIVATE KEY-----
        -----END OPENSSH PRIVATE KEY-----
      mahrio_config: |
        domain1.com
        domain2.com
      mongodb_connection_str: SOME_MONGO_STRING
    bindings:
      "": mahrio-dev-backend
  nginx:
    charm: /home/charms/nginx-charm
    num_units: 1
    to:
    - "1"
    bindings:
      "": mahrio-dev-public
machines:
  "1":
    constraints: instance-type=t3.micro
  "2":
    constraints: instance-type=t3.small
relations:
- - mahrio:website
  - nginx:website
- - certbot:website
  - mahrio:website

And some other metadata

$ juju spaces
Name                Space ID  Subnets     
alpha               0                     
mahrio-dev-backend  1         10.0.2.0/24 
                              252.2.0.0/16
mahrio-dev-public   2         10.0.1.0/24 
                              252.1.0.0/16
                                         
$ juju subnets
subnets:
  10.0.1.0/24:
    type: ipv4
    provider-id: subnet-123
    provider-network-id: vpc-00001
    status: in-use
    space: mahrio-dev-public
    zones:
    - us-west-1b
  10.0.2.0/24:
    type: ipv4
    provider-id: subnet-456
    provider-network-id: vpc-00001
    status: in-use
    space: mahrio-dev-backend
    zones:
    - us-west-1b
  252.1.0.0/16:
    type: ipv4
    provider-id: subnet-123-INFAN-10-0-1-0-24
    provider-network-id: vpc-00001
    status: in-use
    space: mahrio-dev-public
    zones:
    - us-west-1b
  252.2.0.0/16:
    type: ipv4
    provider-id: subnet-456-INFAN-10-0-2-0-24
    provider-network-id: vpc-00001
    status: in-use
    space: mahrio-dev-backend
    zones:
    - us-west-1b

Thanks for taking a look

UPDATE:

I can now see my webapp. I manually went into the EC2 console for the instance in the public subnet and edited the routing table to allow HTTP traffic; that was missing. I guess if I had already had the elastic IP and security group set up on that IP, it wouldn't have been necessary, but my question still remains:
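
For the record, if it was actually the instance's security group that needed the rule rather than the route table (I'm assuming a group ID here), the CLI equivalent would be roughly:

# Hypothetical: open HTTP/HTTPS on the instance's security group (sg-0abc123 is a placeholder)
aws ec2 authorize-security-group-ingress --group-id sg-0abc123 \
  --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0abc123 \
  --protocol tcp --port 443 --cidr 0.0.0.0/0

Though it may be that simply running juju expose nginx would have had Juju open those ports in its own security group.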

How would I assign a new elastic IP after the juju deploy, AND still get to SSH into my instance if Juju is stuck on the old IP?
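
What I have been doing by hand is roughly the following (the allocation and instance IDs are placeholders), and my guess is that SSH then has to target the new address directly, e.g. ssh ubuntu@<the-new-eip>, until Juju's view of the machine catches up:

# Allocate an EIP (returns an allocation ID) and attach it to the machine Juju created
aws ec2 allocate-address --domain vpc
aws ec2 associate-address --instance-id i-12345a --allocation-id eipalloc-0abc123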

And now I am wondering aloud: how do I codify the VPC, the basic internet gateway, etc. in AWS (or in any major cloud)? It seems Juju stops short of that underlying networking setup, and I just want to devise a plan for my public cloud usage to create the basic VPC and networking it needs. Seems I'd need to learn from articles like this.
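
To make that concrete, this is the kind of pre-Juju scaffolding I mean, sketched with the plain AWS CLI (all IDs and CIDRs are illustrative; Terraform or CloudFormation would be the more repeatable way to pin it down):

# Rough sketch of the VPC plumbing Juju expects to already exist
aws ec2 create-vpc --cidr-block 10.0.0.0/16
aws ec2 create-subnet --vpc-id vpc-00001 --cidr-block 10.0.1.0/24 --availability-zone us-west-1b
aws ec2 create-subnet --vpc-id vpc-00001 --cidr-block 10.0.2.0/24 --availability-zone us-west-1b

# Internet gateway plus a route table sending 0.0.0.0/0 through it for the public subnet
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --vpc-id vpc-00001 --internet-gateway-id igw-0abc123
aws ec2 create-route-table --vpc-id vpc-00001
aws ec2 create-route --route-table-id rtb-0abc123 --destination-cidr-block 0.0.0.0/0 --gateway-id igw-0abc123
aws ec2 associate-route-table --route-table-id rtb-0abc123 --subnet-id subnet-123

# NAT gateway in the public subnet for the private subnet's outbound traffic (needs its own EIP),
# plus a second route table for the private subnet pointing at it
aws ec2 allocate-address --domain vpc
aws ec2 create-nat-gateway --subnet-id subnet-123 --allocation-id eipalloc-0def456
aws ec2 create-route --route-table-id rtb-0private --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-0abc123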

If you run juju show-machine <ID>, can you see the EIP in the addresses?


I do not. I set up a webapp successfully, fully working with TLS, THEN tried setting the elastic IP. The IP in the AWS console updated, but juju shows nothing but the “old” public IP, which I have changed in the output below to 1.2.3.4.

I don’t recognize this other 252.10.7.1 address (it sits on the fan-252 interface below), and 10.0.1.8 looks like the instance's private address in the 10.0.1.0/24 subnet.

$ juju show-machine 37
model: my-first-mahrio-dev
machines:
  "37":
    juju-status:
      current: started
      since: 30 Nov 2021 21:55:59+01:00
      version: 2.9.18
    hostname: ip-10-0-1-8
    dns-name: 1.2.3.4
    ip-addresses:
    - 1.2.3.4
    - 10.0.1.8
    - 252.10.7.1
    instance-id: i-64f
    machine-status:
      current: running
      message: running
      since: 30 Nov 2021 21:54:46+01:00
    modification-status:
      current: idle
      since: 30 Nov 2021 21:54:20+01:00
    series: bionic
    network-interfaces:
      ens5:
        ip-addresses:
        - 10.0.1.8
        mac-address: bb:cc:dd:ee:ff:gg
        gateway: 10.0.1.1
        space: mahrio-dev-public
        is-up: true
      fan-252:
        ip-addresses:
        - 252.10.7.1
        mac-address: ab:cd:ef:gh:ik:lm
        space: mahrio-dev-public
        is-up: true
    constraints: instance-type=t3.micro
    hardware: arch=amd64 cores=2 cpu-power=700 mem=1024M root-disk=8192M availability-zone=us-west-1b

EDIT:

The good news is the site is up and running on the elastic IP, so everything “works” except juju status and show-machine.

@manadart good news…

I left the setup alone; the webapp was working, so I figured I just needed to let Juju catch up. And indeed, when I came back to it today, juju status shows the elastic IP.

I am now wondering how I can:

  1. Codify this elastic IP into my YAML file? Or, how do I work with a given instance-id or elastic IP safely, without needing to manually re-associate my elastic IP to new instances? (See the sketch after this list.)

  2. Use AWS reserved instances with Juju? Do I call out some constraint in my YAML file, I wonder? (Also addressed in the sketch below.)
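
For (1), as far as I can tell a Juju bundle has no field for an elastic IP, so the pragmatic route seems to be re-associating it right after deploy by pulling the instance ID out of Juju. A hypothetical bit of glue (assuming the JSON layout matches the YAML above; the allocation ID is a placeholder):

# Re-attach a known EIP to whatever instance Juju just provisioned for machine 1
INSTANCE_ID=$(juju show-machine 1 --format json | jq -r '.machines["1"]["instance-id"]')
aws ec2 associate-address --instance-id "$INSTANCE_ID" --allocation-id eipalloc-0abc123

For (2), my understanding is that reserved instances are purely a billing construct: as long as the instances Juju launches match the reservation's type and region, the discount applies on its own, so the instance-type constraints already in the bundle should be enough. The same can also be set model-wide:

juju set-model-constraints instance-type=t3.micro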
