Also, following a suggestion from another forum thread on radosgw issues, I ran the following and got this output:
juju ssh ceph-radosgw/0 sudo systemctl status ceph-radosgw@rgw.juju-e94665-0-lxd-1
● ceph-radosgw@rgw.juju-e94665-0-lxd-1.service - Ceph rados gateway
Loaded: loaded (/lib/systemd/system/ceph-radosgw@.service; disabled; vendor preset: enabled)
Active: inactive (dead)
Connection to 10.1.7.48 closed.
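Since the unit file shows as both disabled and inactive, the next things I'd try (a sketch; the instance name is taken from the status output above, and I haven't verified this fixes anything) are enabling/starting the service and reading the journal to see why it last exited:

```shell
# Enable and start the rgw instance on the unit, then pull the
# last 50 journal lines for it to see any startup errors.
juju ssh ceph-radosgw/0 'sudo systemctl enable --now ceph-radosgw@rgw.juju-e94665-0-lxd-1'
juju ssh ceph-radosgw/0 'sudo journalctl -u ceph-radosgw@rgw.juju-e94665-0-lxd-1 --no-pager -n 50'
```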
I don't know how helpful that is. I looked into Cinder as well:
unit-cinder-0: 16:17:09 WARNING unit.cinder/0.update-status ERROR no relation id specified
unit-cinder-0: 16:17:10 INFO unit.cinder/0.juju-log Installing crontab: /etc/cron.d/cinder-volume-usage-audit
unit-cinder-0: 16:17:11 INFO unit.cinder/0.juju-log get_network_addresses: [('10.1.7.55', '10.1.7.55')]
unit-cinder-0: 16:17:11 INFO unit.cinder/0.juju-log Unit is ready
unit-cinder-0: 16:17:12 INFO juju.worker.uniter.operation ran "update-status" hook (via explicit, bespoke hook script)
unit-cinder-0: 16:21:37 INFO unit.cinder/0.juju-log Registered config file: /etc/cinder/cinder.conf
unit-cinder-0: 16:21:37 INFO unit.cinder/0.juju-log Registered config file: /etc/cinder/api-paste.ini
unit-cinder-0: 16:21:37 INFO unit.cinder/0.juju-log Registered config file: /etc/cinder/policy.json
unit-cinder-0: 16:21:37 INFO unit.cinder/0.juju-log Registered config file: /etc/haproxy/haproxy.cfg
unit-cinder-0: 16:21:37 INFO unit.cinder/0.juju-log Registered config file: /etc/apache2/sites-available/openstack_https_frontend.conf
unit-cinder-0: 16:21:37 INFO unit.cinder/0.juju-log Registered config file: /etc/apache2/ports.conf
unit-cinder-0: 16:21:37 INFO unit.cinder/0.juju-log Registered config file: /etc/memcached.conf
unit-cinder-0: 16:21:37 INFO unit.cinder/0.juju-log Registered config file: /etc/apache2/sites-enabled/wsgi-openstack-api.conf
unit-cinder-0: 16:21:37 INFO unit.cinder/0.juju-log Updating status.
unit-cinder-0: 16:21:38 WARNING unit.cinder/0.update-status ERROR no relation id specified
unit-cinder-0: 16:21:38 INFO unit.cinder/0.juju-log Installing crontab: /etc/cron.d/cinder-volume-usage-audit
unit-cinder-0: 16:21:39 INFO unit.cinder/0.juju-log get_network_addresses: [('10.1.7.55', '10.1.7.55')]
unit-cinder-0: 16:21:39 INFO unit.cinder/0.juju-log Unit is ready
unit-cinder-0: 16:21:40 INFO juju.worker.uniter.operation ran "update-status" hook (via explicit, bespoke hook script)
And Glance:
unit-glance-0: 16:16:33 INFO unit.glance/0.juju-log get_network_addresses: [('10.1.7.38', '10.1.7.38')]
unit-glance-0: 16:16:34 INFO unit.glance/0.juju-log Unit is ready
unit-glance-0: 16:16:34 INFO juju.worker.uniter.operation ran "update-status" hook (via explicit, bespoke hook script)
unit-glance-0: 16:21:39 INFO unit.glance/0.juju-log Updating status.
unit-glance-0: 16:21:39 INFO unit.glance/0.juju-log Making dir /var/lib/charm/glance root:root 555
unit-glance-0: 16:21:39 INFO unit.glance/0.juju-log Making dir /etc/ceph root:root 555
unit-glance-0: 16:21:39 INFO unit.glance/0.juju-log Registered config file: /etc/glance/glance-api.conf
unit-glance-0: 16:21:39 INFO unit.glance/0.juju-log Registered config file: /etc/haproxy/haproxy.cfg
unit-glance-0: 16:21:39 INFO unit.glance/0.juju-log Registered config file: /var/lib/charm/glance/ceph.conf
unit-glance-0: 16:21:39 INFO unit.glance/0.juju-log Registered config file: /etc/apache2/sites-available/openstack_https_frontend.conf
unit-glance-0: 16:21:39 INFO unit.glance/0.juju-log Registered config file: /etc/memcached.conf
unit-glance-0: 16:21:39 INFO unit.glance/0.juju-log Registered config file: /etc/glance/glance-swift.conf
unit-glance-0: 16:21:39 INFO unit.glance/0.juju-log Registered config file: /etc/glance/policy.yaml
unit-glance-0: 16:21:39 WARNING unit.glance/0.update-status ERROR no relation id specified
unit-glance-0: 16:21:41 INFO unit.glance/0.juju-log get_network_addresses: [('10.1.7.38', '10.1.7.38')]
unit-glance-0: 16:21:41 INFO unit.glance/0.juju-log Unit is ready
unit-glance-0: 16:21:42 INFO juju.worker.uniter.operation ran "update-status" hook (via explicit, bespoke hook script)
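The same "no relation id specified" warning shows up in both the cinder and glance logs. A quick way to pull just those lines out of a captured debug-log is a plain grep; a minimal sketch, assuming the log lines above were saved to a hypothetical file `sample.log` (in a live model, `juju debug-log --replay` piped into grep does the same thing):

```shell
# Save a small sample of the log lines from above into a file,
# then filter for the recurring update-status error.
cat > sample.log <<'EOF'
unit-cinder-0: 16:21:38 WARNING unit.cinder/0.update-status ERROR no relation id specified
unit-cinder-0: 16:21:38 INFO unit.cinder/0.juju-log Installing crontab: /etc/cron.d/cinder-volume-usage-audit
unit-glance-0: 16:21:39 WARNING unit.glance/0.update-status ERROR no relation id specified
unit-glance-0: 16:21:41 INFO unit.glance/0.juju-log Unit is ready
EOF
grep 'update-status ERROR' sample.log
```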
I attempted to remove and re-install ceph-radosgw, and it didn't work. I tried to clean it up, including its relations, but it won't remove (or even see) the swift-ha relation:
mwsadminprod@spk-r1-maas-prod-1:~$ juju status --relations | grep ceph-radosgw
ceph-radosgw unknown 0 ceph-radosgw stable 499 no
ceph-radosgw:cluster ceph-radosgw:cluster swift-ha peer
mwsadminprod@spk-r1-maas-prod-1:~$ juju remove-relation ceph-radosgw:cluster ceph-radosgw:cluster
ERROR no relations found
mwsadminprod@spk-r1-maas-prod-1:~$ juju status --relations | grep ceph-radosgw
ceph-radosgw unknown 0 ceph-radosgw stable 499 no
ceph-radosgw:cluster ceph-radosgw:cluster swift-ha peer
mwsadminprod@spk-r1-maas-prod-1:~$ juju remove-relation ceph-radosgw:cluster ceph-radosgw:cluster
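For anyone hitting the same wall: `cluster` is a peer relation, so `juju remove-relation` won't touch it; peer relations only go away with the application itself. A sketch of the usual escape hatch (not verified against this exact Juju version):

```shell
# Peer relations are created and destroyed with the application,
# so removing the application is the way to clear them.
juju remove-application ceph-radosgw
# If units are stuck in an error state and block the removal:
juju remove-application --force ceph-radosgw
```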
Full syslog from the radosgw that is stuck after a fresh deploy: Ubuntu Pastebin