Trouble with Glance Simplestreams sync

Does anybody know if Glance Simplestreams Sync is supported and working on Ussuri builds?

Just deployed it and related it to Keystone.

root@maas-01:~# juju deploy cs:glance-simplestreams-sync --to lxd:3
Located charm "cs:glance-simplestreams-sync-34".
Deploying charm "cs:glance-simplestreams-sync-34".
root@maas-01:~# juju add-relation glance-simplestreams-sync:identity-service keystone:identity-service

Keystone creates the Product Streams endpoint, but then the app just goes into an unknown state and there's nothing really interesting in the log.
Wondering if I should submit a bug?

Unit                          Workload  Agent  Machine  Public address  Ports     Message
glance-simplestreams-sync/2*  unknown   idle   3/lxd/2  192.168.51.36
keystone/0*                   active    idle   0/lxd/2  192.168.51.12   5000/tcp  Unit is ready
  keystone-mysql-router/0*    active    idle            192.168.51.12             Unit is ready

Machine  State    DNS            Inst id              Series  AZ    Message
0        started  192.168.51.5   vocal-calf           focal   AZ-1  Deployed
0/lxd/2  started  192.168.51.12  juju-d365d2-0-lxd-2  focal   AZ-1  Container started
3        started  192.168.51.8   eager-sponge         focal   AZ-2  Deployed
3/lxd/2  started  192.168.51.36  juju-d365d2-3-lxd-2  bionic  AZ-2  Container started



2020-09-07 18:03:33 INFO juju.worker.apicaller connect.go:158 [8e376c] "unit-glance-simplestreams-sync-2" successfully connected to "192.168.50.113:17070"
2020-09-07 18:03:33 DEBUG juju.worker.dependency engine.go:564 "api-caller" manifold worker started at 2020-09-07 18:03:33.891842215 +0000 UTC
2020-09-07 18:03:33 DEBUG juju.worker.dependency engine.go:564 "log-sender" manifold worker started at 2020-09-07 18:03:33.90223912 +0000 UTC
2020-09-07 18:03:33 DEBUG juju.worker.dependency engine.go:564 "upgrader" manifold worker started at 2020-09-07 18:03:33.90335146 +0000 UTC
2020-09-07 18:03:33 DEBUG juju.worker.dependency engine.go:564 "migration-inactive-flag" manifold worker started at 2020-09-07 18:03:33.905202206 +0000 UTC
2020-09-07 18:03:33 DEBUG juju.worker.dependency engine.go:564 "upgrade-steps-runner" manifold worker started at 2020-09-07 18:03:33.932027081 +0000 UTC
2020-09-07 18:03:33 DEBUG juju.worker.dependency engine.go:584 "upgrade-steps-runner" manifold worker completed successfully
2020-09-07 18:03:34 INFO juju.worker.upgrader upgrader.go:149 abort check blocked until version event received
2020-09-07 18:03:34 INFO juju.worker.upgrader upgrader.go:155 unblocking abort check
2020-09-07 18:03:34 INFO juju.worker.upgrader upgrader.go:188 desired agent binary version: 2.8.1
2020-09-07 18:03:34 DEBUG juju.worker.dependency engine.go:598 "upgrade-check-flag" manifold worker stopped: gate unlocked
2020-09-07 18:03:34 DEBUG juju.worker.dependency engine.go:564 "upgrade-check-flag" manifold worker started at 2020-09-07 18:03:34.035773731 +0000 UTC
2020-09-07 18:03:34 DEBUG juju.worker.dependency engine.go:564 "migration-fortress" manifold worker started at 2020-09-07 18:03:34.047104906 +0000 UTC
2020-09-07 18:03:34 DEBUG juju.worker.dependency engine.go:564 "migration-minion" manifold worker started at 2020-09-07 18:03:34.057504172 +0000 UTC
2020-09-07 18:03:34 INFO juju.worker.migrationminion worker.go:140 migration phase is now: NONE
2020-09-07 18:03:34 DEBUG juju.worker.dependency engine.go:564 "proxy-config-updater" manifold worker started at 2020-09-07 18:03:34.07576425 +0000 UTC
2020-09-07 18:03:34 DEBUG juju.worker.dependency engine.go:564 "api-address-updater" manifold worker started at 2020-09-07 18:03:34.075818551 +0000 UTC
2020-09-07 18:03:34 DEBUG juju.worker.dependency engine.go:564 "charm-dir" manifold worker started at 2020-09-07 18:03:34.075854085 +0000 UTC
2020-09-07 18:03:34 DEBUG juju.worker.dependency engine.go:564 "metric-spool" manifold worker started at 2020-09-07 18:03:34.075890575 +0000 UTC
2020-09-07 18:03:34 DEBUG juju.worker.logger logger.go:64 initial log config: "<root>=DEBUG"
2020-09-07 18:03:34 DEBUG juju.worker.leadership tracker.go:125 glance-simplestreams-sync/2 making initial claim for glance-simplestreams-sync leadership
2020-09-07 18:03:34 INFO juju.worker.logger logger.go:118 logger worker started
2020-09-07 18:03:34 DEBUG juju.worker.dependency engine.go:564 "meter-status" manifold worker started at 2020-09-07 18:03:34.075974901 +0000 UTC
2020-09-07 18:03:34 DEBUG juju.worker.dependency engine.go:564 "leadership-tracker" manifold worker started at 2020-09-07 18:03:34.076139178 +0000 UTC
2020-09-07 18:03:34 DEBUG juju.worker.dependency engine.go:564 "logging-config-updater" manifold worker started at 2020-09-07 18:03:34.076184082 +0000 UTC
2020-09-07 18:03:34 DEBUG juju.worker.logger logger.go:92 reconfiguring logging from "<root>=DEBUG" to "<root>=WARNING"

The latest revision of the glance-simplestreams-sync charm does not enable synchronisation by default. You will need to enable that post-deploy:

juju config glance-simplestreams-sync run=true

You can also kick off a one-time sync with an action:

juju run-action --wait glance-simplestreams-sync/0 sync-images
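
You can confirm the current value of the option afterwards with:

juju config glance-simplestreams-sync run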

Thanks Peter, I was unaware of the new config option.

I tried running the action; it just returns an error.

root@maas-01:~# juju run-action --wait glance-simplestreams-sync/3 sync-images
unit-glance-simplestreams-sync-3:
  UnitId: glance-simplestreams-sync/3
  id: "10"
  message: exit status 1
  results:
    ReturnCode: 1
    Stderr: |
      /usr/share/glance-simplestreams-sync/glance_simplestreams_sync.py:109: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
        confobj = yaml.load(f)
  status: failed
  timing:
    completed: 2020-09-07 21:03:44 +0000 UTC
    enqueued: 2020-09-07 21:03:43 +0000 UTC
    started: 2020-09-07 21:03:43 +0000 UTC

This is all that's in the unit log.

@pmatulis you think I should submit a bug?

What object storage solution are you using?

Whatever object storage is supplied as part of the OpenStack Base bundle.

No Swift, but a functioning Ceph pool.

Thanks @pmatulis! Now that you’ve pointed me in the direction of object storage, I see I actually get a policy error when trying to browse the object store containers.

Makes sense to investigate this rabbit hole, but I don’t really know what I’m looking for :innocent:

Hi, @dvnt.

The openstack-base bundle comes with ceph-radosgw as the object storage solution [0].

Based on the YAML warning on stderr, I wonder if you have a custom setting for mirror_list [1]? It is unfortunately easy to get this wrong; we need to work on making that easier.
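
If you have customised it, the value needs to be valid YAML: a list of mirror dictionaries. Roughly (from memory, so double-check against the charm's config.yaml [1]), the default looks like:

[{url: 'http://cloud-images.ubuntu.com/releases/',
  name_prefix: 'ubuntu:released',
  path: 'streams/v1/index.sjson', max: 1,
  item_filters: ['release~(bionic|focal)', 'arch~(x86_64|amd64)', 'ftype~(disk1.img|disk.img)']}]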

If that is not the case, my next recommendation is that you run the sync script manually and paste the output.

On the glance-simplestreams-sync unit:

/usr/share/glance-simplestreams-sync/glance-simplestreams-sync.sh

[0] https://github.com/openstack-charmers/openstack-bundles/blob/master/stable/openstack-base/bundle.yaml#L165
[1] https://github.com/openstack/charm-glance-simplestreams-sync/blob/master/config.yaml#L2

–
David Ames

Hi @thedac

To answer your question, nothing special on the mirror_list setting.

:heart: I’m always grateful when the inner workings of the charms are shared.

Looks like this is blowing up because of the Vault implementation and self-issued certificates.
I guess this won’t be a problem with provider-issued certificates, but that said, are there Vault relations/integrations that I’m missing, or is this a feature request?

root@juju-294f71-5-lxd-3:~# /usr/share/glance-simplestreams-sync/glance-simplestreams-sync.sh
Traceback (most recent call last):
  File "/usr/share/glance-simplestreams-sync/glance_simplestreams_sync.py", line 458, in <module>
    main()
  File "/usr/share/glance-simplestreams-sync/glance_simplestreams_sync.py", line 405, in main
    ksc = get_keystone_client(id_conf['api_version'])
  File "/usr/share/glance-simplestreams-sync/glance_simplestreams_sync.py", line 192, in get_keystone_client
    return ksc_class(**ksc_vars)
  File "/usr/lib/python2.7/dist-packages/keystoneclient/v3/client.py", line 250, in __init__
    self.authenticate()
  File "/usr/lib/python2.7/dist-packages/keystoneclient/httpclient.py", line 578, in authenticate
    resp = self.get_raw_token_from_identity_service(**kwargs)
  File "/usr/lib/python2.7/dist-packages/keystoneclient/v3/client.py", line 336, in get_raw_token_from_identity_service
    _('Authorization failed: %s') % e)
keystoneauth1.exceptions.auth.AuthorizationFailure: Authorization failed: SSL exception connecting to https://keystone.lab.devzero.home:5000/v3/auth/tokens: HTTPSConnectionPool(host='keystone.lab.devzero.home', port=5000): Max retries exceeded with url: /v3/auth/tokens (Caused by SSLError(SSLError(1, u'[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:727)'),))
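
For anyone hitting the same traceback, a quick way to see which certificate and issuer Keystone is actually presenting is something like:

openssl s_client -connect keystone.lab.devzero.home:5000 -showcerts </dev/null | openssl x509 -noout -issuer -subject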

I often have these aha moments right after I’ve posted here. My apologies.

I generated base64 output of the certificate value displayed after running:

juju run-action --wait vault/0 authorize-charm token=x.XXXXXXXXXXX

I then ran:

juju config glance-simplestreams-sync ssl_ca=%base64_value_here%
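
For anyone following along, the same thing works as a one-liner, assuming the CA certificate has been saved locally as vault-ca.crt (my file name, not the charm's):

juju config glance-simplestreams-sync ssl_ca="$(base64 -w0 vault-ca.crt)"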

I can see the images sync into glance now.

@thedac you legend you! :metal:

@dvnt, glad you got things working.

For completeness, I’ll add that you can use Vault to handle certificates by relating all the charms (including glance-simplestreams-sync) over the certificates relation. Vault can then either create its own CA, or you can generate an intermediate CA signed by your organization’s CA. Vault then hands out certificates to all of the services, and they are all validated by the same intermediate cert [0].
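
For example, something along these lines (adjust for the applications in your deployment):

juju add-relation vault:certificates keystone:certificates
juju add-relation vault:certificates glance-simplestreams-sync:certificates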

[0] https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/latest/app-certificate-management.html
