Questions regarding my OpenStack setup/Are my hardware resources enough?

Hi,

So last year I had a project to build OpenStack backed by MAAS and Juju in a test environment. This forum was of great help during that time, so thank you. And thanks to that, I've landed a small internship at my school to try to deploy that project.

So I've been asked to check whether this could be deployed at my school for students in my course. It's for the Cybersecurity major, so our labs mainly involve things like CTFs (Capture the Flag) or using Docker containers on a VM (usually a Linux OS) to demonstrate various kinds of attacks.

However, I'm not sure if the resources I have would be adequate, so I have a few questions about that.

To start, this would be used by around 30-35 users. Basically, a professor might use it in class, so it would need to run ~30 VMs at the same time. To be safe, I'd like capacity for around 50 VMs.

The resources I have are a bit outdated, though. I'm using an old Supermicro server; you can find the details of the processor and storage at this link ( https://www.supermicro.com/products/SuperBlade/module/SBI-7226T-T2.cfm ). I'm also unsure whether the chipset provided is enough for what I need.

I'll be using the charmed bundle deployment method this time. So for the hard disks it'll be:

MAAS node: 500GB SATA

Juju controller: 500GB SATA

3 OpenStack nodes: 2x 500GB SATA each (one disk per node for a Ceph OSD)

As for RAM, I'd be using what's recommended: 8GB on the MAAS and OpenStack nodes and 4GB on the Juju controller.

So my questions are:

  1. Would these resources be enough for the number of users I want to accommodate? And do you think my setup can handle the environment I want to deploy?

  2. Since it's an old server, only SATA hard disks are compatible. Will this be a big problem? I assume it would slow down deployment and the running of VMs, but I'm wondering if it would still run smoothly enough for a class lab.

Hi. You will need more memory for the three cloud nodes. Assuming you will be running one hypervisor per node and that the node itself will require a minimum of 2GiB, you are left with 6GiB per node for VMs. Across three nodes that's 18GiB for a total of 48 VMs, or only 384MiB per VM. Aim for 32GiB on each cloud node.
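A quick back-of-envelope sketch of that arithmetic (the 2GiB host reservation and the even 16-VMs-per-node split are the assumptions stated above, not measured figures):

```python
# Rough per-VM memory budget for the three cloud nodes described above.
NODES = 3
RAM_PER_NODE_GIB = 8   # current plan
HOST_RESERVE_GIB = 2   # assumed minimum for the node itself
TOTAL_VMS = 48         # 16 VMs per node

usable_gib = NODES * (RAM_PER_NODE_GIB - HOST_RESERVE_GIB)
per_vm_mib = usable_gib * 1024 / TOTAL_VMS
print(f"{per_vm_mib:.0f} MiB per VM")  # 384 MiB: far too little

# Same calculation with the recommended 32GiB per node:
usable_gib_32 = NODES * (32 - HOST_RESERVE_GIB)
print(f"{usable_gib_32 * 1024 / TOTAL_VMS:.0f} MiB per VM")  # 1920 MiB
```

Even at 32GiB per node, ~1.9GiB per VM assumes all 48 VMs run at once; fewer concurrent VMs leaves proportionally more headroom.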

The SATA disks are SSDs, right?

And it’s not clear how many cores you actually have on your motherboard.

Consider having a spare disk at hand in case of a failure.


Hi!

I need to double-check, but I think each motherboard has two quad-core processors, so I'm assuming 8 cores per node. Would that be enough?

I believe the disks are SATA HDDs. Attaching the model name in case that helps: Toshiba Internal Hard Drive MQ01ABD050.

Do you think it would work, or would I need to upgrade to SSDs? (SAS disks are unfortunately not compatible with my server.)

I'll upgrade the RAM to 32GB per node and get a spare disk to be safe, thanks!

Yeah, I would definitely go for SSD drives. Both memory and SSDs are cheap these days, so it's worth trying it all out.

I tried HDDs and they're painfully slow for most things, to the point that they can even cause connection issues or make pods fail to start if you have startup probes enabled in Kubernetes. To give some perspective, I had an app that should have started within 5 minutes take over an hour.

SSDs will give you a huge boost in IOPS. That drive will net you at most 1K IOPS for reads and roughly 200 for writes. Plus, it's a Toshiba, which in my experience means it has a foot in the grave out of the factory; I've replaced tons of them compared to any other drive. A Samsung 870 EVO SSD will get you around 98K random-read and 88K random-write IOPS.
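To put those figures in per-VM terms, here's a rough split using the vendor numbers above. It assumes the 16-VMs-per-node plan from earlier and that VMs contend equally for one data disk per node; real contention is messier, but the order of magnitude holds:

```python
# Rough per-VM random-read IOPS share on one data disk per node,
# assuming 16 VMs per node contend for it equally.
VMS_PER_NODE = 16
HDD_READ_IOPS = 1_000    # generous estimate for the MQ01ABD050
SSD_READ_IOPS = 98_000   # Samsung 870 EVO rated random read

print(HDD_READ_IOPS / VMS_PER_NODE)  # 62.5 IOPS per VM on the HDD
print(SSD_READ_IOPS / VMS_PER_NODE)  # 6125.0 IOPS per VM on the SSD
```

Around 60 read IOPS per VM is below what even a single idle Linux guest can generate during boot, which is why whole-class deployments on HDDs crawl.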

You should also plan for the total amount of storage you need. On a budget with little kept on disk, the 500GB EVO drives are pretty cheap and are rated for a reasonable amount of writes, though the PRO drives are rated for twice the write endurance. I'm not familiar with other SSD manufacturers, but I like Samsung's, and they usually top the specs across the industry.