Hi Sir,
I need to attach larger storage in order to create instances in OpenStack.
Alright, I understand that you want VM images to reside on this storage. Are you using Ceph as a storage backend for your cloud? Are you using Cinder volumes, possibly also backed by Ceph?
Can you share what deployment instructions you have followed up to now?
It would also be helpful if you could share the current state of your cloud by providing (via a pastebin) the output of:
juju status --relations
Dear Sir
Please find the status of the juju relations below:
root@maassrv:~# juju status --relations
Model Controller Cloud/Region Version SLA Timestamp
openstack juju-controller maassrv/default 2.9.16 unsupported 15:14:56+05:30
App Version Status Scale Charm Store Channel Rev OS Message
ceph-mon 13.2.8 active 4 ceph-mon charmstore stable 482 ubuntu Unit is ready and clustered
ceph-osd 13.2.8 blocked 5 ceph-osd charmstore stable 502 ubuntu No block devices detected using current configuration
ceph-radosgw 13.2.8 active 1 ceph-radosgw charmstore stable 412 ubuntu Unit is ready
cinder 13.0.9 active 1 cinder charmstore stable 448 ubuntu Unit is ready
cinder-ceph 13.0.9 active 1 cinder-ceph charmstore stable 360 ubuntu Unit is ready
glance 17.0.0 active 1 glance charmstore stable 473 ubuntu Unit is ready
keystone 14.2.0 active 1 keystone charmstore stable 567 ubuntu Application Ready
mysql 5.7.20 active 1 percona-cluster charmstore stable 402 ubuntu Unit is ready
neutron-api 13.0.7 active 1 neutron-api charmstore stable 485 ubuntu Unit is ready
neutron-gateway 13.0.7 active 1 neutron-gateway charmstore stable 467 ubuntu Unit is ready
neutron-openvswitch 13.0.7 active 3 neutron-openvswitch charmstore stable 452 ubuntu Unit is ready
nova-cloud-controller 18.3.0 active 1 nova-cloud-controller charmstore stable 552 ubuntu Unit is ready
nova-compute 18.3.0 active 3 nova-compute charmstore stable 577 ubuntu Unit is ready
ntp 3.2 active 4 ntp charmstore stable 47 ubuntu chrony: Ready
openstack-dashboard 14.1.0 active 1 openstack-dashboard charmstore stable 513 ubuntu Unit is ready
rabbitmq-server 3.6.10 active 1 rabbitmq-server charmstore stable 441 ubuntu Unit is ready
Unit Workload Agent Machine Public address Ports Message
ceph-mon/0* active idle 1/lxd/0 192.168.0.160 Unit is ready and clustered
ceph-mon/1 active idle 2/lxd/0 192.168.0.165 Unit is ready and clustered
ceph-mon/2 active idle 3/lxd/0 192.168.0.167 Unit is ready and clustered
ceph-mon/3 active idle 0 192.168.0.152 Unit is ready and clustered
ceph-osd/0* active idle 1 192.168.0.153 Unit is ready (1 OSD)
ceph-osd/1 active idle 2 192.168.0.154 Unit is ready (1 OSD)
ceph-osd/2 active idle 3 192.168.0.155 Unit is ready (1 OSD)
ceph-osd/3 active idle 0 192.168.0.152 Unit is ready (1 OSD)
ceph-osd/6 blocked idle 0/lxd/5 192.168.0.170 No block devices detected using current configuration
ceph-radosgw/0* active idle 0/lxd/0 192.168.0.158 80/tcp Unit is ready
cinder/0* active idle 1/lxd/1 192.168.0.159 8776/tcp Unit is ready
cinder-ceph/0* active idle 192.168.0.159 Unit is ready
glance/0* active idle 2/lxd/1 192.168.0.164 9292/tcp Unit is ready
keystone/0* active idle 3/lxd/1 192.168.0.156 5000/tcp Unit is ready
mysql/0* active idle 0/lxd/1 192.168.0.161 3306/tcp Unit is ready
neutron-api/0* active idle 1/lxd/2 192.168.0.162 9696/tcp Unit is ready
neutron-gateway/0* active idle 0 192.168.0.152 Unit is ready
ntp/0* active idle 192.168.0.152 123/udp chrony: Ready
nova-cloud-controller/0* active idle 2/lxd/2 192.168.0.166 8774/tcp,8775/tcp,8778/tcp Unit is ready
nova-compute/0* active idle 1 192.168.0.153 Unit is ready
neutron-openvswitch/0* active idle 192.168.0.153 Unit is ready
ntp/1 active idle 192.168.0.153 123/udp chrony: Ready
nova-compute/1 active idle 2 192.168.0.154 Unit is ready
neutron-openvswitch/1 active idle 192.168.0.154 Unit is ready
ntp/2 active idle 192.168.0.154 123/udp chrony: Ready
nova-compute/2 active idle 3 192.168.0.155 Unit is ready
neutron-openvswitch/2 active idle 192.168.0.155 Unit is ready
ntp/3 active idle 192.168.0.155 123/udp chrony: Ready
openstack-dashboard/0* active idle 3/lxd/2 192.168.0.163 80/tcp,443/tcp Unit is ready
rabbitmq-server/0* active idle 0/lxd/2 192.168.0.157 5672/tcp Unit is ready
Machine State DNS Inst id Series AZ Message
0 started 192.168.0.152 HP-storage bionic default Deployed
0/lxd/0 started 192.168.0.158 juju-43669e-0-lxd-0 bionic default Container started
0/lxd/1 started 192.168.0.161 juju-43669e-0-lxd-1 bionic default Container started
0/lxd/2 started 192.168.0.157 juju-43669e-0-lxd-2 bionic default Container started
0/lxd/5 started 192.168.0.170 juju-43669e-0-lxd-5 bionic default Container started
1 started 192.168.0.153 HP-01 bionic default Deployed
1/lxd/0 started 192.168.0.160 juju-43669e-1-lxd-0 bionic default Container started
1/lxd/1 started 192.168.0.159 juju-43669e-1-lxd-1 bionic default Container started
1/lxd/2 started 192.168.0.162 juju-43669e-1-lxd-2 bionic default Container started
2 started 192.168.0.154 HP-02 bionic default Deployed
2/lxd/0 started 192.168.0.165 juju-43669e-2-lxd-0 bionic default Container started
2/lxd/1 started 192.168.0.164 juju-43669e-2-lxd-1 bionic default Container started
2/lxd/2 started 192.168.0.166 juju-43669e-2-lxd-2 bionic default Container started
3 started 192.168.0.155 HP-03 bionic default Deployed
3/lxd/0 started 192.168.0.167 juju-43669e-3-lxd-0 bionic default Container started
3/lxd/1 started 192.168.0.156 juju-43669e-3-lxd-1 bionic default Container started
3/lxd/2 started 192.168.0.163 juju-43669e-3-lxd-2 bionic default Container started
Relation provider Requirer Interface Type Message
ceph-mon:client cinder-ceph:ceph ceph-client regular
ceph-mon:client glance:ceph ceph-client regular
ceph-mon:client nova-compute:ceph ceph-client regular
ceph-mon:mon ceph-mon:mon ceph peer
ceph-mon:osd ceph-osd:mon ceph-osd regular
ceph-mon:radosgw ceph-radosgw:mon ceph-radosgw regular
ceph-radosgw:cluster ceph-radosgw:cluster swift-ha peer
cinder-ceph:ceph-access nova-compute:ceph-access cinder-ceph-key regular
cinder-ceph:storage-backend cinder:storage-backend cinder-backend subordinate
cinder:cinder-volume-service nova-cloud-controller:cinder-volume-service cinder regular
cinder:cluster cinder:cluster cinder-ha peer
glance:cluster glance:cluster glance-ha peer
glance:image-service cinder:image-service glance regular
glance:image-service nova-cloud-controller:image-service glance regular
glance:image-service nova-compute:image-service glance regular
keystone:cluster keystone:cluster keystone-ha peer
keystone:identity-service ceph-radosgw:identity-service keystone regular
keystone:identity-service cinder:identity-service keystone regular
keystone:identity-service glance:identity-service keystone regular
keystone:identity-service neutron-api:identity-service keystone regular
keystone:identity-service nova-cloud-controller:identity-service keystone regular
keystone:identity-service openstack-dashboard:identity-service keystone regular
mysql:cluster mysql:cluster percona-cluster peer
mysql:shared-db cinder:shared-db mysql-shared regular
mysql:shared-db glance:shared-db mysql-shared regular
mysql:shared-db keystone:shared-db mysql-shared regular
mysql:shared-db neutron-api:shared-db mysql-shared regular
mysql:shared-db nova-cloud-controller:shared-db mysql-shared regular
mysql:shared-db openstack-dashboard:shared-db mysql-shared regular
neutron-api:cluster neutron-api:cluster neutron-api-ha peer
neutron-api:neutron-api nova-cloud-controller:neutron-api neutron-api regular
neutron-api:neutron-plugin-api neutron-gateway:neutron-plugin-api neutron-plugin-api regular
neutron-api:neutron-plugin-api neutron-openvswitch:neutron-plugin-api neutron-plugin-api regular
neutron-gateway:cluster neutron-gateway:cluster quantum-gateway-ha peer
neutron-gateway:juju-info ntp:juju-info juju-info subordinate
neutron-gateway:quantum-network-service nova-cloud-controller:quantum-network-service quantum regular
neutron-openvswitch:neutron-plugin nova-compute:neutron-plugin neutron-plugin subordinate
nova-cloud-controller:cluster nova-cloud-controller:cluster nova-ha peer
nova-compute:cloud-compute nova-cloud-controller:cloud-compute nova-compute regular
nova-compute:compute-peer nova-compute:compute-peer nova peer
nova-compute:juju-info ntp:juju-info juju-info subordinate
ntp:ntp-peers ntp:ntp-peers ntp peer
openstack-dashboard:cluster openstack-dashboard:cluster openstack-dashboard-ha peer
rabbitmq-server:amqp cinder:amqp rabbitmq regular
rabbitmq-server:amqp glance:amqp rabbitmq regular
rabbitmq-server:amqp neutron-api:amqp rabbitmq regular
rabbitmq-server:amqp neutron-gateway:amqp rabbitmq regular
rabbitmq-server:amqp neutron-openvswitch:amqp rabbitmq regular
rabbitmq-server:amqp nova-cloud-controller:amqp rabbitmq regular
rabbitmq-server:amqp nova-compute:amqp rabbitmq regular
rabbitmq-server:cluster rabbitmq-server:cluster rabbitmq-ha peer
You are indeed using Ceph and are also using it to back Cinder.
I see that you have one ceph-osd unit (ceph-osd/6) but it is containerised. Generally, for Charmed OpenStack, almost every application can be containerised with the exception of nova-compute and ceph-osd. You should therefore remove that unit (juju remove-unit ceph-osd/6) and redeploy it onto an available physical machine.
Please specify the block device on the machine you are adding. This device needs to be known to the ceph-osd application (via the osd-devices option). The option can be configured post-deploy.
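For example (a sketch only; /dev/sdb is an assumed device name, substitute your actual disk), the option can be set after deployment with:
juju config ceph-osd osd-devices='/dev/sdb'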
How do I add it, sir?
juju add-unit --1 ceph-osd ???
If you have but one machine available then Juju will “grab it” by default. Otherwise you’ll need to target it via the --to <machine>.maas option or a constraint such as a MAAS tag (--constraints tags=<blah>). The tag is added to the machine in MAAS.
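For example, to target a specific MAAS node by hostname (HP-storage.maas is just an assumed name here):
juju add-unit --to HP-storage.maas ceph-osd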
So assuming the single-machine-available scenario you would add it like this:
juju add-unit ceph-osd
What block device (e.g. /dev/sdb) on the new machine do you want to use for storage?
Sir,
The server has already been deployed and I am not able to attach the huge disk; the block device is /dev/sda.
What Juju machine id does it have (e.g. 0, 1, etc.)?
When you determine that, let’s look at its devices:
juju ssh <id> sudo lsblk
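If you are unsure of the id, listing the model’s machines will show them alongside their MAAS hostnames:
juju machines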
Sir, please find the details below. I destroyed the deployment and am re-installing OpenStack using the manual instructions, and I have not connected the storage server yet; after it completes I will try to add it with juju add-unit Hp-storage ceph-osd.
root@maassrv:~# juju ssh 0 sudo lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 61.8M 1 loop /snap/core20/1081
loop1 7:1 0 32.3M 1 loop /snap/snapd/13170
loop2 7:2 0 67.3M 1 loop /snap/lxd/21545
loop3 7:3 0 72.6M 1 loop /snap/lxd/21723
sda 8:0 0 111.8G 0 disk
sdb 8:16 0 111.8G 0 disk
├─sdb1 8:17 0 512M 0 part /boot/efi
└─sdb2 8:18 0 111.3G 0 part /
Connection to 192.168.0.198 closed.
It looks like this server has device /dev/sda available with a capacity of 112 GiB. However, you mentioned 70 TiB in your original post.
Sir,
Those are the details of another server. I am re-deploying again using the manual installation now. Do I only need to use this command for that storage, juju add-unit Hp-storage ceph-osd, or something else?
Thanks. So what you want to do is deploy a ceph-osd unit onto the target machine. However, you said that the output (screenshot) is for “another server”. We need to know precisely which server we are deploying onto. Let’s assume it is the MAAS node named HP-storage.maas.
In the manual install steps I would install the first ceph-osd unit in this way:
juju deploy --config ceph-osd.yaml --to HP-storage.maas ceph-osd
Where file ceph-osd.yaml contains the configuration:
ceph-osd:
  osd-devices: /dev/sda
  source: cloud:focal-wallaby
Alternatively, you can create a “tag” in the MAAS web UI like ceph-osd-hp, assign it to the server, and then:
juju deploy --config ceph-osd.yaml --constraints tags=ceph-osd-hp ceph-osd
Normally, however, we want more than one OSD storage node.
If you had three identical HP servers tagged in this way then you could add multiple with one invocation:
juju deploy -n 3 --config ceph-osd.yaml --constraints tags=ceph-osd-hp ceph-osd
To scale out manually using other available servers:
juju add-unit ceph-osd
If two are available:
juju add-unit -n 2 ceph-osd
If these other servers have different block devices (e.g. /dev/sdb) then just alter the ceph-osd application’s configuration by changing this line in the ceph-osd.yaml file:
  osd-devices: /dev/sda /dev/sdb
Note that the ceph-osd charm will not overwrite a disk currently in use.
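If a disk is being skipped because it still carries data from a previous installation, the ceph-osd charm provides a zap-disk action you could use (check the action parameters for your charm revision first, and only run it on a disk you are certain can be wiped); a sketch:
juju run-action --wait ceph-osd/0 zap-disk devices=/dev/sdb i-really-mean-it=true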
Then apply the configuration change (the new units will be notified automatically):
juju config --file ceph-osd.yaml ceph-osd
When it comes time to deploy ceph-mon, make sure you set option expected-osd-count appropriately, where the count is the sum of all block devices used as OSD disks in the Ceph cluster (you can have more than one OSD per server).
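For instance, a minimal ceph-mon.yaml sketch (the figures are only an illustration: three monitors, and three servers with two OSD disks each giving a count of six):
ceph-mon:
  monitor-count: 3
  expected-osd-count: 6
  source: cloud:focal-wallaby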
Hi
I tried to deploy and it is giving this error:
juju deploy --config ceph-osd.yaml --constraints tags=ceph-osd-hp ceph-osd
Located charm "ceph-osd" in charm-hub, revision 509
Deploying "ceph-osd" from charm-hub charm "ceph-osd", revision 509 in channel stable
ERROR cannot add application "ceph-osd": application already exists: deploy application using an alias name, or use remove-application to remove the existing one and try again
root@maassrv:~# juju deploy ./ --config ceph-osd.yaml --constraints tags=ceph-osd-hp ceph-osd
ERROR options provided but not supported when deploying a bundle: --config, --constraints
root@maassrv:~# juju deploy --config ceph-osd.yaml --constraints tags=ceph-osd-hp ceph-osd
Located charm "ceph-osd" in charm-hub, revision 509
Deploying "ceph-osd" from charm-hub charm "ceph-osd", revision 509 in channel stable
ERROR cannot add application "ceph-osd": application already exists: deploy application using an alias name, or use remove-application to remove the existing one and try again
root@maassrv:~# juju deploy --config ceph-osd.yaml ceph-osd
Located charm "ceph-osd" in charm-hub, revision 509
Deploying "ceph-osd" from charm-hub charm "ceph-osd", revision 509 in channel stable
ERROR cannot add application "ceph-osd": application already exists: deploy application using an alias name, or use remove-application to remove the existing one and try again
Sir, I tried to add the HP-storage server but it is taking another hard disk. I just used this command: juju add-unit ceph-osd. After that I added nova-compute, and now I am not able to see the 61 TB storage. Please find the screenshot below.
ceph-osd is already deployed to your model as “ceph-osd”.
run:
juju add-unit ceph-osd --to HP-storage
juju config ceph-osd osd-devices="/dev/sdb"
Where HP-storage is the name of the storage machine and /dev/sdb is the 68 TB storage device.
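Once the new unit settles, you can verify that the OSD joined the cluster from a monitor unit (the unit name below is just an example):
juju ssh ceph-mon/0 sudo ceph osd tree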
I have some concern over the deployment though. Assuming this is the only object storage device, if the HP-storage server were to reboot or crash, all storage requests would be halted, crashing all instances in the cloud and potentially corrupting data.
What is not immediately clear is whether there are other osd-devices smaller than 68 TB. If so, you will probably run into an issue where those OSDs become full before the 68 TB volume does, which will put the whole pool into read-only mode anyway.
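You can keep an eye on how evenly the OSDs fill up with, for example:
juju ssh ceph-mon/0 sudo ceph osd df tree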
Hi Boss
Thank you for your mail, boss.
I ran the commands below, but I am not able to see the 68 TB volume in the hypervisor list on the OpenStack dashboard, and it is taking another disk (sdb):
juju add-unit ceph-osd --to HP-storage
juju config ceph-osd osd-devices="/dev/sdb"
You won’t necessarily see the storage on the Openstack Dashboard.
Try this trick so that Nova writes ephemeral instances into Ceph too; the storage should then be reported on the OpenStack dashboard:
juju config nova-compute libvirt-image-backend=rbd
This is what my yaml looks like:
nova-compute:
  annotations:
    gui-x: '190'
    gui-y: '890'
  charm: cs:nova-compute
  num_units: 6
  options:
    customize-failure-domain: true
    libvirt-image-backend: rbd
    rbd-pool: cinder-ceph
    config-flags: default_ephemeral_format=ext4
    enable-live-migration: true
    enable-resize: true
    migration-auth-type: ssh
    openstack-origin: *openstack-origin
  bindings:
    "": admin-network
  to:
  - '0'
  - '1'
  - '2'
  - '3'
  - '4'
  - '5'
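If you want to confirm that new instance disks are actually landing in Ceph, listing the pool from a monitor should show them (the pool name comes from the rbd-pool option above, so adjust it to your own setting):
juju ssh ceph-mon/0 sudo rbd ls cinder-ceph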
In your previous post, your juju status showed that ceph-osd has no block devices configured.
Is this still the case?
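You can check the current setting and unit state with, for example:
juju config ceph-osd osd-devices
juju status ceph-osd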