Key | Value |
---|---|
Summary | An introduction to Open Source MANO deployments and VNF workloads orchestration with Charmed OSM |
Categories | osm |
Difficulty | 1 |
Author | Wajeeha Hamid wajeeha.hamid@canonical.com |
Introduction
Duration: 5:00
This tutorial provides a walkthrough of Charmed OSM deployment and onboarding of virtual and cloud-native network functions.
Charmed OSM is an upstream Open Source MANO (OSM) deployment based on Juju charms. By using charm operators for OSM deployments, telcos can benefit from a model-driven, declarative approach and simplified operations. Once you have MicroK8s installed, you can deploy Charmed OSM on top of it. Although we use Charmed OSM for testing and development purposes in this tutorial, it is production-grade and can be successfully used in live environments.
Canonical’s Development Stack for OSM is a toolset that you can use to onboard and orchestrate real VNF workloads directly on your workstation. It consists of the following components:
- MicroK8s - for deploying a local Kubernetes cluster that will host your OSM installation
- MicroStack - for deploying a local OpenStack cluster and using it as a VIM (Virtual Infrastructure Manager) for OSM
- Charmed OSM - simple OSM installation fully aligned with the upstream project.
What you’ll need
If you are using Windows or macOS, install Multipass first to launch an Ubuntu VM.
If you are using Ubuntu 18.04 LTS, which we highly recommend, you can move on. Otherwise, you may need to install snapd first.
The minimum recommended system requirements are as follows:
- Ubuntu 18.04 LTS Operating System
- 16 GB of RAM
- 4 CPUs
- 50 GB of free storage space
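If you are on Windows or macOS, a Multipass VM sized to the requirements above can be launched as follows (the VM name is arbitrary, and exact flag spellings vary slightly between Multipass releases):

```shell
# Launch an Ubuntu 18.04 VM matching the recommended specs, then open a shell in it
multipass launch 18.04 --name osm-dev --cpus 4 --mem 16G --disk 50G
multipass shell osm-dev
```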
The following section describes how you can get Canonical’s Development Stack for OSM up and running in just a few minutes.
Deploy Charmed OSM
This tutorial illustrates the installation of Charmed OSM Release 10. OSM has an upstream installer in the ETSI repository, which needs to be downloaded. Pass the --charmed option to the installer to enable the charmed installation of OSM. For the development stack, also pass the --microstack option to deploy MicroStack as a VIM for OSM.
wget https://osm-download.etsi.org/ftp/osm-11.0-eleven/install_osm.sh
chmod +x install_osm.sh
./install_osm.sh --charmed --microstack
That’s it, you are done!
Once the installation is complete, you will have the OSM ready and you can start onboarding the VNFs.
Understanding Installation Steps
Duration: 0:00
If you want to know more about what happened during the installation, read on. If not, you can skip to the next section.
The steps followed during installation are as follows:
- Installation of snaps for MicroK8s and Juju
- Setting up MicroK8s
- Bootstrapping the Juju controller on MicroK8s and LXD clusters
- Deploying the charmed OSM bundle
- Installation of snap for OSM client
- Integration of charmed OSM with MicroStack VIM
- Onboarding VNFs and KNFs
Snaps installation
The first thing the script does is install the snaps of MicroK8s and Juju using the following commands:
sudo snap install microk8s --classic
sudo snap install juju --classic
Charmed OSM installation uses multiple charmed operators that are managed by Juju for day-to-day operations such as the backup and restoration of the platform. It runs on top of a Kubernetes cluster cloud, and we use MicroK8s for this.
To see all the snap packages installed on your system, use this command:
sudo snap list
Setup MicroK8s
MicroK8s is an upstream Kubernetes deployment that can run directly on your workstation. It is Kubernetes in a snap, which means that all Kubernetes services and supporting libraries are packaged together in a single image that can be installed on over 40 Linux distributions, Windows, and macOS. MicroK8s is fully compliant with upstream Kubernetes, so you can use it to host your OSM installation.
The MicroK8s cluster must be configured to enable the following addons:
- dns, ingress, storage, and metallb
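The installer enables these addons for you, but if you ever need to turn them on by hand, the commands look like this (the MetalLB address range shown is only an example; pick a free range on your local network):

```shell
# Enable the addons Charmed OSM relies on
microk8s.enable dns ingress storage
# MetalLB needs a pool of addresses it can assign to LoadBalancer services
microk8s.enable metallb:10.64.140.43-10.64.140.49
```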
Your local Kubernetes cluster should now be up and running. You can interact with it by using the microk8s.kubectl command. To verify the MicroK8s installation, use microk8s.status.
$ microk8s.status
microk8s is running
high-availability: no
datastore master nodes: 127.0.0.1:19001
datastore standby nodes: none
addons:
enabled:
dns # CoreDNS
ha-cluster # Configure high availability on the current node
ingress # Ingress controller for external access
metallb # Loadbalancer for your Kubernetes cluster
prometheus # Prometheus operator for monitoring and logging
storage # Storage class; allocates storage from host directory
...
For more information on MicroK8s visit https://microk8s.io/.
Found a bug? Report it here.
Setup Juju Controller
The installation script bootstraps a Juju controller on MicroK8s to deploy OSM, sets up an LXD cluster on the host system, and adds it to the controller. If you wish to know more about the procedure, you can find it in this guide.
Check if the controller has been added (use the --refresh option with this command to see the latest information):
$ juju controllers
Controller Model User Access Cloud/Region Models Nodes HA Version
osm-vca* osm admin superuser microk8s/localhost 2 1 - 2.8.6
To see the configuration of controller use the following command:
$ juju show-controller osm-vca
osm-vca:
details:
uuid: 4fe941a9-cf2a-4a0a-87c5-578078ea2074
controller-uuid: 4fe941a9-cf2a-4a0a-87c5-578078ea2074
api-endpoints: ['...:17070']
cloud: microk8s
region: localhost
agent-version: 2.8.6
agent-git-commit: 5d0442d3e15952bfc0ce059cb43ef7949ca71aaa
controller-model-version: 2.8.6
mongo-version: 4.0.18
ca-fingerprint: 49:58:C7:46:1E:2A...
ca-cert: |
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
models:
controller:
uuid: 8b3a8329-1d40-4034-8abd-730f6186c51f
model-uuid: 8b3a8329-1d40-4034-8abd-730f6186c51f
osm:
uuid: 060b8889-6789-4afc-8943-80b7e9c89bc8
model-uuid: 060b8889-6789-4afc-8943-80b7e9c89bc8
unit-count: 14
current-model: admin/osm
account:
user: admin
access: superuser
You can also verify the clouds managed by Juju controller:
$ juju clouds --controller osm-vca
Clouds available on the controller:
Cloud Regions Default Type
2e14cc8e-e427-4605-ab79-3e7b687e957e 1 default k8s
lxd-cloud 1 default lxd
microk8s 1 localhost k8s
Deploy Charmed OSM bundle
To deploy the Charmed OSM Juju bundle on the controller, the script creates a model named osm. On completion, some OSM services are exposed: NBI, NG-UI, Grafana, and Prometheus.
To verify the deployment with services enabled, check the Juju status:
$ juju status
Model Controller Cloud/Region Version SLA Timestamp
osm osm-vca microk8s/localhost 2.8.6 unsupported 10:42:50Z
App Version Status Scale Charm Store Rev OS Address Notes
grafana-k8s rocks.canonical.com:443/gra... active 1 grafana-k8s jujucharms 32 kubernetes 10.152.183.201 exposed
kafka-k8s rocks.canonical.com:443/wur... active 1 kafka-k8s jujucharms 21 kubernetes 10.152.183.178
...
Unit Workload Agent Address Ports Message
grafana-k8s/2* active idle 10.1.170.79 3000/TCP ready
kafka-k8s/2* active idle 10.1.170.103 9092/TCP ready
...
The following command allows you to check what has been created on MicroK8s:
$ microk8s.kubectl get all -n osm
NAME READY STATUS RESTARTS AGE
pod/ro-k8s-0 1/1 Running 3 19d
...
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/modeloperator ClusterIP 10.152.183.222 <none> 17071/TCP 20d
...
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/modeloperator 1/1 1 1 20d
...
NAME DESIRED CURRENT READY AGE
replicaset.apps/modeloperator-6797c74575 1 1 1 20d
...
NAME READY AGE
statefulset.apps/ro-k8s 1/1 20d
...
To understand each component of the bundle, see the OSM Architecture tutorial.
OSM client
Once the bundle has been deployed, a snap for the OSM client is installed. Charmed OSM is now deployed and ready to use.
To verify the client installation use:
$ osm version
Server version: 10.0.0+gc47d045 2020-04-17
Client version: 10.0.0+g42e87fa
You can also access Charmed OSM web GUI using the IP address returned by the following command:
juju status ui-k8s | grep kubernetes | awk '{print $8}'
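To see why the eighth field is selected, here is the same awk filter applied to a sample application row (the values below are placeholders modeled on the juju status table shown earlier):

```shell
# A sample `juju status` application row; column 8 is the Address column
line="ui-k8s  active  1  ui-k8s  jujucharms  32  kubernetes  10.152.183.188  exposed"
echo "$line" | awk '{print $8}'
```

awk splits on any run of whitespace, so the column positions are stable even though the table is space-aligned.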
MicroStack installation
MicroStack is an upstream OpenStack deployment that can also run directly on your workstation. It includes all the key components of OpenStack: Keystone, Nova, Neutron, Glance and is evolving extremely fast. Similar to MicroK8s, it is an OpenStack in snap.
In this case, as we have specified the --microstack option during deployment, the installer automatically adds MicroStack to the stack and configures it so that it can be used by OSM.
The script installs a snap of MicroStack on the host machine. To configure MicroStack, a new network is created and a router is set up. Upon configuration, MicroStack is added to OSM.
You can check if MicroStack is properly configured with the following commands:
$ microstack.openstack network list
$ microstack.openstack router list
To verify its integration with OSM, do the following:
$ osm vim-list
+-------------+--------------------------------------+-------------------+
| vim name | uuid | operational state |
+-------------+--------------------------------------+-------------------+
| microstack | 8dcb3b1d-a758-498d-b157-c6eb916d2747 | ENABLED |
+-------------+--------------------------------------+-------------------+
For more information on MicroStack visit https://opendev.org/x/microstack.
Found a bug? Report it here.
Onboarding VNFs
Duration: 05:00
At this point you have MicroK8s installed, Charmed OSM deployed on MicroK8s and MicroStack configured as a VIM. You can now deploy a basic network service to exercise the environment.
Network Service
To onboard a network service, you need to clone the git repo from ETSI that contains different examples of Network Services:
git clone https://osm.etsi.org/gitlab/vnf-onboarding/osm-packages.git
cd osm-packages/charm-packages/
git submodule update --init
Then, upload the following packages to OSM:
osm nfpkg-create ha_proxy_charm_vnf
osm nspkg-create ha_proxy_charm_ns
You can list the existing NSDs and VNFDs with the following commands:
osm nsd-list
osm vnfd-list
As the packages have been uploaded, you can create a basic Network Service:
osm ns-create --ns_name basic-06-proxy-charm \
--nsd_name ha_proxy_charm-ns \
--vim_account microstack \
--config '{
vld: [ { name: mgmtnet, vim-network-name: osm-ext } ]
}'
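The inline --config string can get unwieldy for larger deployments. The osm client also accepts the same instantiation parameters from a YAML file via the --config_file option; a minimal sketch is shown below (the file name params.yaml is arbitrary, and osm-ext is the external network created by the MicroStack setup):

```yaml
# params.yaml - instantiation parameters for the network service
vld:
- name: mgmtnet
  vim-network-name: osm-ext
```

You would then run osm ns-create with --config_file params.yaml instead of --config.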
The Network Service is being deployed. You can check its status by executing the following command. Wait until the config status turns to configured.
watch osm ns-list
Press Ctrl+C to terminate the watch command when you see the following output:
+----------------------+--------------------------------------+---------------------+----------+-------------------+-------+
| ns instance name | id | date | ns state | current operation | error |
+----------------------+--------------------------------------+---------------------+----------+-------------------+-------+
| basic-06-proxy-charm | 3aab7c7b-d8d7-4442-bcf3-139c214a8110 | 2021-02-18T12:45:58 | READY | IDLE (None) | N/A |
+----------------------+--------------------------------------+---------------------+----------+-------------------+-------+
To get the history of all operations over an NS, run osm ns-op-list NS_ID.
For more details on the current operation, run osm ns-op-show OPERATION_ID.
You can now SSH to the machine that was launched by executing the following command:
IP_VNF1=`microstack.openstack server list -c Networks -f value --name basic-06-proxy-c-1-mgmtVM-0 \
| awk '{ print $2 }'`
ssh ubuntu@$IP_VNF1 # Password: osm4u
IP_VNF2=`microstack.openstack server list -c Networks -f value --name basic-06-proxy-c-2-mgmtVM-0 \
| awk '{ print $2 }'`
ssh ubuntu@$IP_VNF2 # Password: osm4u
Day-1/Day-2 configuration
Now that the NS is up and running, a Day-2 configuration is onboarded on the VNF using Proxy Charm operators.
The action available in this Network Service consists of creating a file inside the VNF at the path specified by the filename parameter.
Execute the action on VNF 1:
osm ns-action --action_name touch \
--vnf_name 1 \
--params '{filename: /home/ubuntu/touched}' \
basic-06-proxy-charm
And check whether the file has been created:
IP=`microstack.openstack server list -c Networks -f value --name basic-06-proxy-c-1-mgmtVM-0 \
| awk '{ print $2 }'`
ssh ubuntu@$IP -i .ssh/id_microstack
ls
For more information on VNF onboarding, visit the upstream documentation.
Onboarding KNFs
OSM also allows us to deploy workloads on K8s clusters, but for development purposes, you can use a MicroK8s cluster. The following steps configure OSM to use MicroK8s as a K8s cluster and deploy a basic network service to exercise the environment created.
Setting up a MicroK8s cluster in OSM
The installer has set up a MicroK8s cluster for you to deploy a KNF. For this, you need the Kubernetes configuration. The following command shows the MicroK8s configuration:
$ microk8s.config
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRU...
server: https://...:16443
name: microk8s-cluster
contexts:
- context:
cluster: microk8s-cluster
user: admin
name: microk8s
current-context: microk8s
kind: Config
preferences: {}
users:
- name: admin
user:
token: Q05CRGh5Z...
We will save this configuration in a file:
$ microk8s.config > microk8s-kubeconfig.yaml
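If you only need the API server endpoint out of that kubeconfig, grep and awk will extract it. The sketch below feeds in a minimal sample file with a placeholder address; on your machine you would point the pipeline at microk8s-kubeconfig.yaml instead:

```shell
# Create a minimal sample kubeconfig (placeholder address, for illustration only)
cat > sample-kubeconfig.yaml <<'EOF'
clusters:
- cluster:
    server: https://10.0.0.1:16443
  name: microk8s-cluster
EOF
# The second field of the `server:` line is the endpoint URL
grep 'server:' sample-kubeconfig.yaml | awk '{print $2}'
```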
Then, add MicroK8s to OSM:
$ osm k8scluster-add --creds microk8s-kubeconfig.yaml \
 --version v1 --vim microstack \
 --k8s-nets '{"net1": osm-ext}' \
 --description "K8s cluster" osm-microk8s-cluster
Now, check the status of the cluster:
$ osm k8scluster-list
+----------------------+-------------------------+------------+-------------------+-------------------+
| Name | Id | VIM | Operational State | Op. state details |
+----------------------+-------------------------+------------+-------------------+-------------------+
| osm-microk8s-cluster | f879ba44-017-4377-a... | microstack | ENABLED | Helm: ENABLED |
| | | | | Juju: ENABLED |
+----------------------+-------------------------+------------+-------------------+-------------------+
OSM can use Juju bundles or Helm charts to deploy KNFs. For this tutorial, a Juju bundle is used because it allows us to execute Day-1 and Day-2 actions on the KNF.
In case you need more information about the K8s cluster:
$ osm k8scluster-show osm-microk8s-cluster
Network Service
Now you can start onboarding the packages to OSM using the same ETSI repo.
cd ~/osm-packages/magma
osm nfpkg-create squid_cnf
osm nspkg-create squid_cnf_ns
List the existing NSDs and VNFDs with the following commands:
osm nsd-list
osm vnfd-list
Now, as the packages have been uploaded, you can create a basic Network Service:
osm ns-create --ns_name squid_cnf \
--nsd_name squid_cnf_ns \
--vim_account microstack \
--config '{
vld: [ { name: mgmtnet, vim-network-name: osm-ext } ]
}'
The Network Service is being deployed. You can check its status by executing the following command. Wait until the config status turns to configured.
watch osm ns-list
Press Ctrl+C to terminate the watch command when you see the following output:
+----------------------+--------------------------------------+---------------------+----------+-------------------+-------+
| ns instance name | id | date | ns state | current operation | error |
+----------------------+--------------------------------------+---------------------+----------+-------------------+-------+
| squid_cnf | 257f0c3a-c279-4a7d-b970-c668341b95a3 | 2021-02-18T12:45:58 | READY | IDLE (None) | N/A |
+----------------------+--------------------------------------+---------------------+----------+-------------------+-------+
To get the history of all operations over an NS, run osm ns-op-list NS_ID.
For more details on the current operation, run osm ns-op-show OPERATION_ID.
Day-1/Day-2 configuration
Now that the NS is up and running, a Day-2 configuration is onboarded on the CNF using charms.
The squid proxy by default does not allow internet traffic, so if you try to access any web page through squid, the proxy will deny it.
$ curl https://www.canonical.com --proxy 10.1.170.119:3128 -I
HTTP/1.1 403 Forbidden
Server: squid/3.5.27
...
curl: (56) Received HTTP code 403 from proxy after CONNECT
The following command allows the proxy to access www.canonical.com:
osm ns-action squid_cnf --vnf_name squid_cnf \
--kdu_name squid-kdu \
--action_name addurl \
--params '{application-name: squid, url: canonical.com}'
To check if the operation was successful, use osm ns-op-list:
+--------------------------------------+-------------+-------------+-----------+---------+------------+
| id | operation | action_name | status | date | detail |
+--------------------------------------+-------------+-------------+-----------+---------+------------+
| ba7cd1ed-38a8-4afd-bbab-750b8491cdcb | instantiate | N/A | COMPLETED | 2021... | - |
| 5c1fbef0-7075-4f35-8385-752772858b4b | action | addurl | COMPLETED | 2021... | - |
+--------------------------------------+-------------+-------------+-----------+---------+------------+
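If you would rather script this check than read the table, the status column can be cut out of the osm ns-op-list output. The row below is a pasted sample matching the table above:

```shell
# One data row from `osm ns-op-list`; splitting on '|' puts the status in field 5
row="| 5c1fbef0-7075-4f35-8385-752772858b4b | action      | addurl      | COMPLETED | 2021... | - |"
echo "$row" | awk -F'|' '{gsub(/ /, "", $5); print $5}'
```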
The website should be accessible now.
$ curl https://www.canonical.com --proxy 10.1.170.119:3128 -I
HTTP/1.1 200 Connection established
HTTP/2 301
...
To learn more about Canonical Development Stack for OSM, watch a video.
That’s all folks!
Duration: 2:00
Congratulations! You have made it! In this tutorial, you have learned how to:
- Deploy Charmed OSM on top of MicroK8s
- Deploy MicroStack and add it as a VIM for Charmed OSM
- Onboard sample VNF and KNF workloads through Charmed OSM
You can now use your newly learned skills to accelerate your transition to NFV with OSM.
Installation video
To see the results of each step, refer to this installation tutorial.
Where to go from here?
- Visit Charmed OSM website
- Read more about Canonical’s solutions for telcos
- Explore MicroK8s and MicroStack
- Read upstream OSM documentation
- Tell us your NFV story!