Hi Martin
I’m new to OpenStack and I’ve been testing an OpenStack installation in a virtual test lab, using 6 nodes. I’ve installed MAAS, a Juju controller, and 4 nodes according to the OpenStack documentation. Everything runs OK: I can access the dashboard, create instances, etc. All looking good. I’m running everything on focal.
Nicely done for getting everything installed and running! It’s not ‘trivial’, to say the least, although we do try to have the charms make the process easier. Configuring OpenStack and associated services is complicated. I’ll try to answer your questions, but please do come back if anything doesn’t seem clear; there are lots of people more expert than me here (although it is the holiday season!)
So how do I manage Ceph storage? I can’t see it in the OpenStack dashboard. Is this all command-line based? I’m running a Ceph cluster myself and it has a dashboard. Is it possible to add that as well? Or does this go through Cinder?
So Charmed OpenStack can work with Ceph clusters in two ways. The first is to use Charmed Ceph (i.e. using the ceph-osd and ceph-mon charms to install and manage the Ceph cluster). This is the simplest approach, particularly if the Ceph cluster is only going to be used for the OpenStack installation.
The second is to use the ceph-proxy charm, which you configure to point at an existing Ceph cluster. ceph-proxy “looks” like ceph-mon to the other charms.
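As a rough sketch of both approaches with the Juju CLI (the device path, fsid, monitor addresses, and key below are placeholders you’d replace with your own values):

```shell
# Option 1: Charmed Ceph -- Juju installs and manages the cluster itself.
juju deploy -n 3 ceph-mon
juju deploy -n 3 ceph-osd --config osd-devices=/dev/sdb
juju add-relation ceph-osd ceph-mon

# Option 2: ceph-proxy -- point the charms at an existing external cluster.
# The fsid, monitor hosts, and admin key here are placeholder values.
juju deploy ceph-proxy
juju config ceph-proxy \
    fsid=11111111-2222-3333-4444-555555555555 \
    monitor-hosts="10.0.0.1:6789 10.0.0.2:6789 10.0.0.3:6789" \
    admin-key=EXAMPLEKEYFROMCEPHAUTH

# Consumers (e.g. cinder-ceph, glance) then relate to ceph-proxy
# just as they would to ceph-mon.
juju add-relation cinder-ceph ceph-proxy
```

In both cases the rest of the cloud is unaware of the difference, which is the point of ceph-proxy presenting the same interface as ceph-mon.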
Another question: when I go for a production deployment I will obviously go for dedicated nodes for Ceph and Nova. But all the other stuff (MAAS, Vault, Neutron, Keystone, etc.), I assume that can be run virtualised as well? Is it recommended to run this on a separate, private cloud (e.g. on VMware/Proxmox)?
So MAAS wants to take care of the bare metal. The core API service charms (neutron-api, keystone, cinder, glance, etc.) will all happily live in containers (e.g. LXD), while nova-compute and Ceph run on the bare-metal nodes. It is possible to co-locate Ceph and nova-compute (which we do in hyper-converged systems); however, it’s not mandatory.
In the bundle provided to Juju to deploy the cloud you can specify the machines and containers (we recommend this) so that everything ends up in the right place. Juju knows how to talk to MaaS to provision machines that MaaS knows about, and Juju can also allocate LXD containers if they are specified.
We have several bundles that explore how to build up an OpenStack system; the machine placements in those bundles show, for example, units being directed into containers. It’s possible to force a unit into a specific container on a machine using --to <m>/lxd/<n>, where m is the machine and n is the container. See the Juju documentation on deploying to specific machines for further details.
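To make that concrete, here is a minimal, illustrative bundle fragment (the machine numbers, constraints, and charm list are placeholders, not a complete working bundle):

```yaml
machines:
  '0':
    constraints: tags=compute
  '1':
    constraints: tags=storage
applications:
  keystone:
    charm: keystone
    num_units: 1
    to:
    - lxd:0        # an LXD container on machine 0
  nova-compute:
    charm: nova-compute
    num_units: 1
    to:
    - '0'          # directly on the bare metal of machine 0
```

Juju asks MAAS for the machines, then creates the LXD containers itself on top of them.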
I’m experienced with networking (VLANs, routing, etc.) and with hypervisors (VMware, Proxmox, etc.), and I’m pretty advanced with Linux, but right now OpenStack seems like a lot to take in. When I run a production environment I need to be able to troubleshoot it as well and keep it running, so any pointers on where to begin would be much appreciated.
No kidding! OpenStack networking is a bit obscure and complex because it’s trying to pack in so many concepts! The good news is that it’s all based on what Linux networking already does, e.g. VLANs, tunnels, iptables, and more recently OpenFlow (with OVN).
There are two parts to networking in a Charmed OpenStack system. The first is the config options in the relevant charms (neutron-api, neutron-gateway (if used), and more recently the OVN support in ovn-chassis and ovn-central). The options in these charms set up the topology of the networking and how to connect the OpenStack cloud to the outside world; but they are just setting things up in the OpenStack system using the various OpenStack client libraries.
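For example, with the OVN charms the provider-network plumbing is set through charm options like these (the physical network name "physnet1" and the NIC "eth2" are illustrative, not defaults):

```shell
# Map a physical network name to an OVS bridge, and attach a host NIC
# to that bridge on each chassis. 'physnet1' and 'eth2' are placeholders
# for your own provider network name and external interface.
juju config ovn-chassis \
    ovn-bridge-mappings="physnet1:br-ex" \
    bridge-interface-mappings="br-ex:eth2"

# Tell neutron-api which physical networks may back flat provider networks.
juju config neutron-api flat-network-providers=physnet1
```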
The second part, actually setting up networks for tenants, etc., is done using normal OpenStack commands.
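For instance, creating an external network and a routed tenant network with the openstack client might look like this (the names, CIDRs, and the "physnet1" provider network are placeholders):

```shell
# External (provider) network, created as an admin:
openstack network create --external \
    --provider-network-type flat \
    --provider-physical-network physnet1 ext-net
openstack subnet create --network ext-net \
    --subnet-range 10.0.8.0/24 --no-dhcp ext-subnet

# A tenant network, subnet, and a router out to the external network:
openstack network create private
openstack subnet create --network private \
    --subnet-range 192.168.0.0/24 private-subnet
openstack router create router1
openstack router set router1 --external-gateway ext-net
openstack router add subnet router1 private-subnet
```

Under the hood these end up as the same Linux constructs you already know: bridges, VLANs, network namespaces, and flow rules.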
I hope that sheds a little light on some of your questions. It’s a big topic, so please pop back if you have issues or questions that we may be able to answer.