Background: Hey, I am deploying 3 applications — a Node.js app, nginx, and certbot — via Juju!
I created an AWS VPC with 2 subnets (1 public, 1 private), a NAT gateway, and an internet gateway.
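In case it helps, this is roughly the layout, sketched with the aws CLI. The VPC/subnet IDs below match the `juju subnets` output further down; the CIDR block, gateway IDs, and allocation ID are placeholders, not my real values.

```shell
# Assumed VPC CIDR; subnet CIDRs are the ones Juju sees (see `juju subnets` below).
aws ec2 create-vpc --cidr-block 10.0.0.0/16

# Public and private subnets, both in us-west-1b.
aws ec2 create-subnet --vpc-id vpc-00001 --cidr-block 10.0.1.0/24 \
  --availability-zone us-west-1b   # public  -> subnet-123
aws ec2 create-subnet --vpc-id vpc-00001 --cidr-block 10.0.2.0/24 \
  --availability-zone us-west-1b   # private -> subnet-456

# Internet gateway for the public subnet's route table.
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-0abc123 --vpc-id vpc-00001

# NAT gateway (lives in the public subnet) for outbound traffic from the private subnet.
aws ec2 create-nat-gateway --subnet-id subnet-123 --allocation-id eipalloc-0abc123
```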
About 3 months back I successfully used this VPC, via a JAAS controller and model, to deploy a webapp and give it TLS certs via certbot, and everything was seemingly fine. I had DNS set up via Route 53 too.
My issue now: I wiped out the applications in preparation to test some new configs. Same VPC, same networking, same spaces, same subnets — but when I redeploy, I am not reusing the same machines or the same Elastic IP address.
When I deploy, the instances get auto-assigned public IPv4 addresses, but I believe I now need to attach an Elastic IP so it can be associated with the Route 53 setup — or am I wrong and no Elastic IP is needed? If I attempt to hit the public IP shown in `juju status`, nothing loads in the browser.
When I associate an Elastic IP, the Juju agent never picks it up in `juju status`, and in fact I can no longer SSH into the machine at all — even though I could just fine before the Elastic IP was associated.
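For reference, this is roughly how I do the association (the instance ID matches machine 34 in the status output below; the allocation ID is a placeholder). Note that AWS releases the auto-assigned public IPv4 when an Elastic IP is associated, which may be why the address Juju recorded stops responding:

```shell
# Allocate an Elastic IP in the VPC and associate it with the nginx/certbot machine.
aws ec2 allocate-address --domain vpc
aws ec2 associate-address \
  --instance-id i-12345a \
  --allocation-id eipalloc-0abc123   # placeholder allocation ID

# Juju still reports the old (now-released) public address here, and
# `juju ssh` tries that stale address, which would explain the SSH failure.
juju status
juju ssh 34
```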
```
Model                Controller  Cloud/Region   Version  SLA          Timestamp
my-first-mahrio-dev  JAAS        aws/us-west-1  2.9.18   unsupported  11:23:58+01:00

App      Version  Status  Scale  Charm    Store  Channel  Rev  OS      Message
certbot           active      1  certbot  local             8  ubuntu  (update) certbot is associated with mahrio
mahrio            active      1  mahrio   local            13  ubuntu  (website-relation-changed) finished change 10:21
nginx             active      1  nginx    local             9  ubuntu  (update) NGINX is running 10:22

Unit        Workload  Agent  Machine  Public address  Ports           Message
certbot/8*  active    idle   34       188.8.131.52                    (update) certbot is associated with mahrio
mahrio/14*  active    idle   35       10.0.2.188      3000/tcp        (website-relation-changed) finished change 10:21
nginx/9*    active    idle   34       184.108.40.206  80/tcp,443/tcp  (update) NGINX is running 10:22

Machine  State    DNS             Inst id   Series  AZ          Message
34       started  220.127.116.11  i-12345a  bionic  us-west-1b  running
35       started  10.0.2.188      i-12345b  bionic  us-west-1b  Start instance attempt 1
```
Here’s the bundle I exported from the working environment, with a few defaults pared back — like channel (which was all stable and not needed):
```yaml
series: bionic
applications:
  certbot:
    charm: /home/charms/certbot-charm
    num_units: 1
    to:
    - "1"
    bindings:
      "": mahrio-dev-public
  mahrio:
    charm: /home/charms/mahrio-charm
    num_units: 1
    to:
    - "2"
    options:
      git_deploy_private_key: |
        -----BEGIN OPENSSH PRIVATE KEY-----
        -----END OPENSSH PRIVATE KEY-----
      mahrio_config: |
        domain1.com
        domain2.com
      mongodb_connection_str: SOME_MONGO_STRING
    bindings:
      "": mahrio-dev-backend
  nginx:
    charm: /home/charms/nginx-charm
    num_units: 1
    to:
    - "1"
    bindings:
      "": mahrio-dev-public
machines:
  "1":
    constraints: instance-type=t3.micro
  "2":
    constraints: instance-type=t3.small
relations:
- - mahrio:website
  - nginx:website
- - certbot:website
  - mahrio:website
```
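For completeness, I deploy the bundle into the existing model like this (the filename is just what I saved the export as):

```shell
# Deploy the exported bundle into the currently selected JAAS model.
juju deploy ./mahrio-bundle.yaml
```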
And some other metadata:
```
$ juju spaces
Name                Space ID  Subnets
alpha               0
mahrio-dev-backend  1         10.0.2.0/24
                              252.2.0.0/16
mahrio-dev-public   2         10.0.1.0/24
                              252.1.0.0/16

$ juju subnets
subnets:
  10.0.1.0/24:
    type: ipv4
    provider-id: subnet-123
    provider-network-id: vpc-00001
    status: in-use
    space: mahrio-dev-public
    zones:
    - us-west-1b
  10.0.2.0/24:
    type: ipv4
    provider-id: subnet-456
    provider-network-id: vpc-00001
    status: in-use
    space: mahrio-dev-backend
    zones:
    - us-west-1b
  252.1.0.0/16:
    type: ipv4
    provider-id: subnet-123-INFAN-10-0-1-0-24
    provider-network-id: vpc-00001
    status: in-use
    space: mahrio-dev-public
    zones:
    - us-west-1b
  252.2.0.0/16:
    type: ipv4
    provider-id: subnet-456-INFAN-10-0-2-0-24
    provider-network-id: vpc-00001
    status: in-use
    space: mahrio-dev-backend
    zones:
    - us-west-1b
```
Thanks for taking a look!