Hi all, I tried to deploy the OpenStack bundle but I get this error:
admin@maasserver:~/openstack-bundles/stable/openstack-base$ juju deploy ./bundle.yaml
Located charm "mysql-innodb-cluster" in charm-store, revision 24
Located charm "neutron-api" in charm-store, revision 302
Located charm "neutron-api-plugin-ovn" in charm-store, revision 10
Located charm "mysql-router" in charm-store, revision 29
Located charm "nova-cloud-controller" in charm-store, revision 361
Located charm "nova-compute" in charm-store, revision 337
Located charm "mysql-router" in charm-store, revision 29
Located charm "ntp" in charm-store, revision 47
Located charm "openstack-dashboard" in charm-store, revision 318
Located charm "ovn-central" in charm-store, revision 15
Located charm "ovn-chassis" in charm-store, revision 21
Located charm "placement" in charm-store, revision 31
Located charm "mysql-router" in charm-store, revision 29
Located charm "rabbitmq-server" in charm-store, revision 117
Located charm "vault" in charm-store, revision 68
Located charm "mysql-router" in charm-store, revision 29
Executing changes:
upload charm mysql-innodb-cluster from charm-store for series focal with architecture=amd64
deploy application mysql-innodb-cluster from charm-store on focal
ERROR cannot deploy bundle: cs:mysql-innodb-cluster-24 resource "mysql-shell": bad metadata: resource missing filename
I ran this same bundle file about 5 days ago without any errors, but now that I'm redeploying I get this error…
Since this problem occurs only when using the cs: prefix during deployment, the most obvious workaround is to switch to the mysql-innodb-cluster charm that uses channels:
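For example, in bundle.yaml the application entry would change from the cs: form to the channel form. A minimal sketch (the 8.0/stable channel and num_units value here are illustrative; use whatever your bundle specifies):

```yaml
applications:
  mysql-innodb-cluster:
    # before: charm: cs:mysql-innodb-cluster-24
    charm: ch:mysql-innodb-cluster
    channel: 8.0/stable
    num_units: 3
```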
This is the same bundle and overlay I used about a year ago, and now I need to re-deploy it (Wallaby). I went down the path of switching mysql from cs: to ch:
in about 10 places, but that led to a whole new set of issues. Do I need to go back to an even older version of Juju? I really need this testbed back to the way it was the last time I deployed it. I thought that by archiving the bundle and overlay I would have no problems re-deploying, so I didn't think twice about tearing down this OpenStack install so I could use the machines for another project. Is there some guidance on everything that needs to be captured? I did not touch the MAAS server, and I didn't think we had touched the Juju server, but it looks like snap may have updated Juju for me.
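For what it's worth, the bulk edit I did was essentially a sed pass over the two files (file names here are just examples; any channel: lines still have to be added by hand per application, and the first expression only strips revisions that are pinned like cs:name-NN):

```shell
# Rewrite cs:<charm>-<rev> and bare cs:<charm> references to ch:<charm>.
# NOTE: bundle.yaml / overlay.yaml are example names; point at your own files.
sed -i -E \
  -e 's/cs:([a-z0-9-]+)-[0-9]+/ch:\1/g' \
  -e 's/cs:/ch:/g' \
  bundle.yaml overlay.yaml
```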
I don't understand the reference to the relation "database". When I include --relations, I don't see anything labeled "database".
root@juju2:~/devops/bin/openstack/engstack4# juju status mysql-innodb-cluster keystone-mysql-router
Model Controller Cloud/Region Version SLA Timestamp
engstack4 maas3-default maas3/default 2.9.31 unsupported 21:30:45Z
App Version Status Scale Charm Channel Rev Exposed Message
keystone 19.0.0 waiting 1 keystone stable 323 no Allowed_units list provided but this unit not present
keystone-mysql-router 8.0.29 waiting 1 mysql-router 8.0/stable 30 no 'db-router' incomplete, MySQL Router not yet bootstrapped
mysql-innodb-cluster 8.0.29 active 3 mysql-innodb-cluster 8.0/stable 26 no Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
Unit Workload Agent Machine Public address Ports Message
keystone/0* blocked idle 0/lxd/1 10.23.0.16 5000/tcp Incomplete relations: database, Allowed_units list provided but this unit not present
keystone-mysql-router/0* waiting executing 10.23.0.16 'db-router' incomplete, MySQL Router not yet bootstrapped
mysql-innodb-cluster/0 active idle 1/lxd/1 10.23.0.26 Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
mysql-innodb-cluster/1 active idle 2/lxd/2 10.23.0.28 Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
mysql-innodb-cluster/2* active idle 3/lxd/2 10.23.0.21 Unit is ready: Mode: R/W, Cluster is ONLINE and can tolerate up to ONE failure.
Machine State DNS Inst id Series AZ Message
0 started 10.22.0.1 grunt107 focal default Deployed
0/lxd/1 started 10.23.0.16 juju-b5cd12-0-lxd-1 focal default Container started
1 started 10.22.0.5 grunt10 focal default Deployed
1/lxd/1 started 10.23.0.26 juju-b5cd12-1-lxd-1 focal default Container started
2 started 10.22.0.4 grunt8 focal default Deployed
2/lxd/2 started 10.23.0.28 juju-b5cd12-2-lxd-2 focal default Container started
3 started 10.22.0.6 grunt9 focal default Deployed
3/lxd/2 started 10.23.0.21 juju-b5cd12-3-lxd-2 focal default Container started
Regarding the code snippets, I think the way you did it is fine. Regarding your issue: juju status --relations shows the relations that are currently active between the charms. Since you're missing the database relation, you won't see it there. To see the list of possible relations for a specific charm, you can run juju info keystone, and you'll see a section like this:
So the keystone charm has a shared-db relation endpoint that can be attached to the mysql-innodb-cluster application. If you add that relation, I think it'll solve your current issue.
Shouldn't the shared-db relation be from keystone to keystone-mysql-router, and then a db-router relation from keystone-mysql-router to mysql-innodb-cluster? Here are the relevant sections from the Wallaby bundle:
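As I understand the topology, the relations in question look like this (sketched from the standard openstack-base layout; endpoint names as published by the mysql-router and mysql-innodb-cluster charms):

```yaml
relations:
  - ["keystone:shared-db", "keystone-mysql-router:shared-db"]
  - ["keystone-mysql-router:db-router", "mysql-innodb-cluster:db-router"]
```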
Keystone started crashing and got stuck in maintenance mode.
I'm going to try a new path: setting all charms to use latest/stable instead. If that blows up, I'll come back to this, but I am frustrated that the bundle and overlay I saved last year no longer work. I know there are a lot of moving parts, but I still feel like that shouldn't happen.
Back to the suggestion that I use mysql-router 8.0/latest. This seems to require a new relation, "juju-info", and I can't find a version of keystone that provides juju-info. Any suggestions?