First steps with Charmed OSM

Welcome to Charmed OSM!

This page gives an overview of the first steps for getting started with Charmed OSM.

Requirements

  • OS: Ubuntu 18.04 LTS
  • MINIMUM:
    • 4 CPUs
    • 8 GB RAM
    • 50 GB disk
    • Single interface with Internet access
  • RECOMMENDED:
    • 8 CPUs
    • 16 GB RAM
    • 120 GB disk
    • Single interface with Internet access

User Guide

Installing OSM has never been easier: with one command, you can deploy Charmed OSM in an empty environment using microk8s.

First of all, download the installation script and give it executable permissions:

wget https://osm-download.etsi.org/ftp/osm-8.0-eight/install_osm.sh
chmod +x install_osm.sh

Install

To install Charmed OSM locally, execute the following command:

./install_osm.sh --charmed

Checking the status

While the installer is running, you will see the number of active services grow; the installation can take several minutes depending on your Internet connection speed. To follow the status of the deployment, execute:

watch -c juju status --color

You can also check the status of the Kubernetes pods with:

watch kubectl -n osm get pods

When the deployment is finished, the juju status command will show output similar to this:

$ juju status
Model  Controller  Cloud/Region        Version  SLA          Timestamp
osm    osm-vca     microk8s/localhost  2.8.6    unsupported  11:59:11+01:00

App             Version                         Status  Scale  Charm           Store       Rev  OS          Address         Notes
grafana-k8s     rocks.canonical.com:443/gra...  active      1  grafana-k8s     jujucharms   32  kubernetes  10.152.183.21   exposed
kafka-k8s       rocks.canonical.com:443/wur...  active      1  kafka-k8s       jujucharms   21  kubernetes  10.152.183.65   
keystone        keystone:8                      active      1  keystone        jujucharms    1  kubernetes  10.152.183.174  
lcm-k8s         lcm:8                           active      1  lcm-k8s         jujucharms   45  kubernetes  10.152.183.145  
mariadb-k8s     rocks.canonical.com:443/mar...  active      1  mariadb-k8s     jujucharms   34  kubernetes  10.152.183.124  
mon-k8s         mon:8                           active      1  mon-k8s         jujucharms   36  kubernetes  10.152.183.85   
mongodb-k8s     mongo:latest                    active      1  mongodb-k8s     jujucharms   29  kubernetes  10.152.183.215  
nbi-k8s         nbi:8                           active      1  nbi-k8s         jujucharms   44  kubernetes  10.152.183.8    exposed
ng-ui           ng-ui:8                         active      1  ng-ui           jujucharms    9  kubernetes  10.152.183.143  exposed
pla             pla:8                           active      1  pla             jujucharms    4  kubernetes  10.152.183.44   
pol-k8s         pol:8                           active      1  pol-k8s         jujucharms   34  kubernetes  10.152.183.48   
prometheus-k8s  alpine:latest                   active      1  prometheus-k8s  jujucharms   32  kubernetes  10.152.183.101  exposed
ro-k8s          ro:8                            active      1  ro-k8s          jujucharms   40  kubernetes  10.152.183.239  
ui-k8s          light-ui:8                      active      1  ui-k8s          jujucharms   51  kubernetes  10.152.183.195  exposed
zookeeper-k8s   rocks.canonical.com:443/k8s...  active      1  zookeeper-k8s   jujucharms   37  kubernetes  10.152.183.27   

Unit               Workload  Agent  Address       Ports                       Message
grafana-k8s/0*     active    idle   10.1.245.88   3000/TCP                    ready
kafka-k8s/0*       active    idle   10.1.245.126  9092/TCP                    ready
keystone/0*        active    idle   10.1.245.82   5000/TCP                    
lcm-k8s/0*         active    idle   10.1.245.81   9999/TCP                    ready
mariadb-k8s/0*     active    idle   10.1.245.73   3306/TCP                    ready
mon-k8s/0*         active    idle   10.1.245.114  8000/TCP                    ready
mongodb-k8s/0*     active    idle   10.1.245.105  27017/TCP                   ready
nbi-k8s/0*         active    idle   10.1.245.117  9999/TCP                    ready
ng-ui/0*           active    idle   10.1.245.78   80/TCP                      ready
pla/0*             active    idle   10.1.245.104  8080/TCP                    ready
pol-k8s/0*         active    idle   10.1.245.96   80/TCP                      ready
prometheus-k8s/0*  active    idle   10.1.245.127  9090/TCP                    ready
ro-k8s/0*          active    idle   10.1.245.85   9090/TCP                    ready
ui-k8s/0*          active    idle   10.1.245.86   80/TCP                      ready
zookeeper-k8s/0*   active    idle   10.1.245.123  2181/TCP,2888/TCP,3888/TCP  ready

Start playing with OSM

If you have installed OSM on an external machine or in a VM, you can access it through the Kubernetes ingress controller enabled in microk8s. These are the exposed services:

  • OSM UI: https://ui.<ip>.xip.io
  • NBI: https://nbi.<ip>.xip.io
  • Prometheus: https://prometheus.<ip>.xip.io
  • Grafana: https://grafana.<ip>.xip.io

Note: <ip> should be replaced by the IP of the default network interface of your OSM machine.
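
As an illustrative sketch, you can derive that IP on the OSM machine itself and print the resulting URLs. This assumes the iproute2 `ip` tool is available; 8.8.8.8 is only used as a probe address to look up the default route, no packet is sent:

```shell
# Find the source IP the kernel would use for the default route
OSM_IP=$(ip route get 8.8.8.8 | awk '{for (i = 1; i < NF; i++) if ($i == "src") { print $(i + 1); exit }}')

# Build the xip.io URLs for the exposed services
echo "OSM UI:     https://ui.${OSM_IP}.xip.io"
echo "NBI:        https://nbi.${OSM_IP}.xip.io"
echo "Prometheus: https://prometheus.${OSM_IP}.xip.io"
echo "Grafana:    https://grafana.${OSM_IP}.xip.io"
```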

OSM User Interface

Access it from outside the machine at http://ui.<ip>.xip.io, or at http://10.152.183.143 (the ng-ui cluster IP shown in juju status) from inside the OSM machine.

  • Username: admin
  • Password: admin

OSM Client

The OSM client is automatically installed with Charmed OSM, but if you want to install it on a separate machine, you can easily do so by installing the osmclient snap:

sudo snap install osmclient

The OSM client needs the OSM_HOSTNAME environment variable pointing to the NBI.

export OSM_HOSTNAME=nbi.<ip>.xip.io:443
osm --help  # print all the commands
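
Once OSM_HOSTNAME is set, a few read-only commands are a good way to verify connectivity. These are standard osmclient subcommands; they assume a reachable NBI, and on a fresh install they simply return empty lists:

```shell
osm vim-list     # VIM accounts registered in OSM
osm vnfd-list    # onboarded VNF packages
osm nsd-list     # onboarded NS packages
osm ns-list      # instantiated network services
```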

Prometheus

Access the Prometheus user interface from outside the machine at http://prometheus.<ip>.xip.io, or at http://10.152.183.101:9090 (the prometheus-k8s cluster IP shown in juju status) from inside the OSM machine.

Grafana

Access the Grafana user interface from outside the machine at http://grafana.<ip>.xip.io, or at http://10.152.183.21:3000 (the grafana-k8s cluster IP shown in juju status) from inside the OSM machine.

  • Username: admin
  • Password: admin

Uninstall

To uninstall Charmed OSM, execute the following command:

./install_osm.sh --charmed --uninstall

Note: this will not remove microk8s or juju, even if they were installed by the installer. Once you have confirmed that no other services require them, you can remove these components manually with the following commands:

sudo snap remove --purge juju
sudo snap remove --purge microk8s

Troubleshooting

If you have any trouble with the installation, please contact us; we will be glad to answer your questions.


In environments with restricted network access, you may encounter an error similar to this:

ERROR cannot deploy bundle: cannot add charm "cs:~charmed-osm/grafana-k8s-13": cannot retrieve charm "cs:~charmed-osm/grafana-k8s-13": cannot get archive: Get https://api.jujucharms.com/charmstore/v5/~charmed-osm/grafana-k8s-13/archive?channel=edge: dial tcp: lookup api.jujucharms.com on 10.152.183.10:53: read udp 10.1.1.12:55949->10.152.183.10:53: i/o timeout

To solve this, edit the kube-dns configuration so that it points to your DNS servers, updating both sets of DNS addresses accordingly:

microk8s.kubectl -n kube-system edit configmap/kube-dns
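
In the ConfigMap, the relevant key is upstreamNameservers under data (the kubectl.kubernetes.io/last-applied-configuration annotation can be left untouched). As a non-interactive sketch of the same change, assuming kube-dns is the DNS add-on in use; the resolver addresses below are placeholders for your own DNS servers:

```shell
# Point kube-dns at specific upstream resolvers (8.8.8.8/1.1.1.1 are examples; use your own)
# Note: the value of upstreamNameservers is a string containing a JSON array
microk8s.kubectl -n kube-system patch configmap/kube-dns \
  --type merge \
  -p '{"data":{"upstreamNameservers":"[\"8.8.8.8\", \"1.1.1.1\"]"}}'
```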

kube-dns will automatically reload the configuration, so re-run the juju deploy command and verify that the error is resolved.

Get the name of the kube-dns pod:

$ kubectl -n kube-system get pods
NAME                                              READY   STATUS    RESTARTS   AGE
heapster-v1.5.2-6b5d7b57f9-c9vln                  4/4     Running   0          67m
hostpath-provisioner-6d744c4f7c-cr9br             1/1     Running   0          71m
kube-dns-6bfbdd666c-xrnnb                         3/3     Running   3          71m
kubernetes-dashboard-6fd7f9c494-zx6s9             1/1     Running   0          71m
monitoring-influxdb-grafana-v4-78777c64c8-lsh8l   2/2     Running   2          71m

Check the logs of the dnsmasq container in the pod:

$ kubectl -n kube-system logs kube-dns-6bfbdd666c-xrnnb dnsmasq

Once dnsmasq is able to resolve hostnames, you can continue with the installation.

hello,

Thanks for the info… but can you please elaborate a bit on the needed changes? E.g. do we edit the upstreamNameservers only, or also the kubectl.kubernetes.io/last-applied-configuration? What are the correct values, and how should we get them? E.g. I tried an nslookup and got the addresses from there to put in the DNS config… saved and exited… but it didn’t work. I still have the same error.

Can you please explain in a bit more detail how to fix this error? Thank you and sorry… forgot to mention… newbie alert :slight_smile:

hello again,
Whatever attempt I did with editing that file didn’t work.

I managed to solve this with:
sudo iptables -P FORWARD ACCEPT
(ubuntu 18.04 fresh install on a VM)

After exposing the services, I am receiving a Disallowed Host error from Django. Is there a juju or kubernetes configuration setting to allow access from the VM’s host? I am using multipass to run ubuntu 18.04 and the OSM environment.

Found the local_settings.py override file: /var/snap/microstack/common/etc/horizon/local_settings.d/_05_snap_tweaks.py

Reference article: https://opendev.org/x/microstack/commit/3230120f6201a13b00a9c9c479571bfbc3949cee

Excellent. Does this mean that your problem is solved?

Yes and no. I have been trying to follow the Canonical tutorial here. As I wrote above, I am installing on a multipass VM and want to access the GUI from my laptop. Following this article, enabling ingress and exposing the services, the snap install of microstack --classic --beta installed a local_settings.py with ALLOWED_HOSTS set to internal VM addresses; overriding using the local_settings.d file worked. But I found that enabling the ingress module causes nginx.service to no longer restart. I rebuilt a brand new VM today and installed microstack --classic --edge, and the ALLOWED_HOSTS setting has been updated to *.

Yes, I can access the Web GUI at osm.<ip>.xip.io. However, this is showing me the openstack web page, not the OSM web page that I have seen in the ETSI OSM presentation videos. Also, after exposing the osm/prometheus/grafana services per the article, all addresses are resolving to the same openstack gui rather than 3 different services.

The instructions to expose the services above are identical to several other references, so should I be expecting the OSM Web UI or the Openstack Web UI? If the former, do you have any pointers on how to properly expose it?

Hello @aahamlin!

You’re having that issue because ingress and the microstack dashboard are using basically the same port.

I’m going to find out the way of changing the microstack dashboard port, and will come back to you.

In the meantime, you can disable ingress, and use sshuttle to access the OSM services.

sshuttle -r ubuntu@<ip> -N -v
# Access the IPs of the services shown by the `juju status` command

Regarding the ALLOWED_HOSTS, I’m going to find out the way to add an IP to that, but in the meantime you can map it to your localhost by using the -L flag:

ssh ubuntu@<ip> -L 8080:localhost:80

Thanks @davigar15

Your reply confirms my deduction of the issue. Thanks for suggestions, I will give them a try tomorrow. If you do find a way to alter the microstack dashboard port that would be appreciated, if I figure it out I’ll post back.

This is the first time I have used multipass. I had hoped the tutorial would provide a simple way to get all the way to a running OSM installation, including accessing the web gui. :slight_smile:

The edge release of microstack (or the use of the local_setting.d directory overrides) have gotten me past the ALLOWED_HOSTS issue for the moment.

Hello @aahamlin,

I have spoken with the main developer of microstack, and we sorted out a list of steps that should solve the issue you’re having. When installing microstack, do this:

sudo snap install microstack --edge --classic
sudo snap set microstack config.network.ports=8080
sudo snap set microstack config.network.dashboard-allowed-hosts=*
sudo microstack.init --auto

Then try to access the openstack dashboard at http://<VM_IP>:8080
That should also unblock you from not being able to use the microk8s ingress, since the openstack dashboard will be running in another port.

PS: I will test those steps, but if you’re able to test them before me, please ping me and let me know how it went.

Thanks!
Ping me if you need anything.


Hello!
I don’t see the setup_lxd.sh file. Can anyone please help to point me where I can find it?
Thanks!

Like Alex rightly asked, I can’t find the following files in the git repo:

setup_lxd.sh
install_nfs.sh
vca_config.sh

Also, how can I deploy this using a generic Kubernetes cluster (i.e. if I don’t want to use microk8s)?

Those of you who ran into issues might want to check out the current set of instructions which have been updated by @davigar15, thanks David!
