Deploy OSM-HA on AWS

The Charmed Distribution of OSM (AWS)

Welcome to The Charmed Distribution of OSM!

The objective of this page is to give an overview of the first steps to get up and running with the HA version of OSM.

User Guide

The installation process is straightforward; this guide walks you through installing a highly available OSM on AWS.

Steps

  • Bootstrap AWS Cloud
  • Deploy CDK and OSM-VCA on AWS
  • Bootstrap CDK Cloud
  • Deploy OSM

Bootstrap AWS Cloud

First of all, you need Juju installed on your local machine.

sudo snap install juju --classic
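You can verify the installation with juju version (the exact version depends on the snap channel; the outputs in this guide come from the 2.6 series):

juju version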

AWS Credentials

The next step is to add the credentials of your AWS account.

juju add-credential aws
# Output
Enter credential name: osm-ha-credential

Using auth-type "access-key".

Enter access-key: XXXXXXXXXXXXXXXXX

Enter secret-key:

Credential "osm-ha-credential" added locally for cloud "aws".

Bootstrap controller

Once the credentials are added, it’s time to bootstrap a Juju controller on AWS.

juju bootstrap aws aws-osm-ha --credential osm-ha-credential
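If the bootstrap succeeds, the new controller appears as the current one (marked with an asterisk):

juju controllers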

Deploy CDK and OSM-VCA on AWS

This section presents the steps to deploy CDK on AWS, which will later be used to deploy OSM on top of it. We also need to create an overlay.yaml file in order to include several things:

  • osm-vca: Charm needed by the LCM (an OSM component) to host the proxy charms.
  • kubernetes-worker: Four workers are needed.
  • aws-integrator: Charm needed for CDK to work on AWS.

cat << EOF > overlay.yaml
applications:
  osm-vca:
    charm: cs:~charmed-osm/vca
    num_units: 1
    constraints: mem=4G cores=2 root-disk=40G
  kubernetes-worker:
    num_units: 4
  aws-integrator:
    charm: cs:~containers/aws-integrator
    num_units: 1
relations:
  - ['aws-integrator', 'kubernetes-master']
  - ['aws-integrator', 'kubernetes-worker']
EOF

Deploy CDK and OSM-VCA with the following commands:

juju add-model cdk
juju deploy charmed-kubernetes --overlay overlay.yaml
juju trust aws-integrator
juju offer osm-vca:osm-vca # Offer osm-vca for a Cross-Model Relation

The command juju find-offers shows the URL of the offered interface (admin/cdk.osm-vca).

Store       URL                Access  Interfaces
aws-osm-ha  admin/cdk.osm-vca  admin   osm-vca:osm-vca

Bootstrap CDK Cloud

Before bootstrapping the CDK cloud, it’s important to wait until CDK is up and running: you will know it is when the kubernetes-master units show the message “Kubernetes master running”. This can take around 20 minutes.

watch -c juju status --color
Model  Controller  Cloud/Region   Version  SLA          Timestamp
cdk    aws-osm-ha  aws/us-east-1  2.6.5    unsupported  11:16:01+02:00

App                    Version   Status  Scale  Charm                  Store       Rev  OS      Notes
aws-integrator         1.16.148  active      1  aws-integrator         jujucharms   10  ubuntu
containerd                       active      6  containerd             jujucharms    2  ubuntu
easyrsa                3.0.1     active      1  easyrsa                jujucharms  254  ubuntu  
etcd                   3.2.10    active      3  etcd                   jujucharms  434  ubuntu
flannel                0.10.0    active      6  flannel                jujucharms  425  ubuntu
kubeapi-load-balancer  1.14.0    active      1  kubeapi-load-balancer  jujucharms  649  ubuntu  exposed
kubernetes-master      1.15.0    active      2  kubernetes-master      jujucharms  700  ubuntu
kubernetes-worker      1.15.0    active      4  kubernetes-worker      jujucharms  552  ubuntu  exposed
osm-vca                          active      1  vca                    jujucharms    0  ubuntu

Unit                      Workload  Agent  Machine  Public address  Ports           Message
aws-integrator/0*         active    idle   0        3.88.26.246                     Ready
easyrsa/0*                active    idle   1        34.207.79.236                   Certificate Authority connected.
etcd/0                    active    idle   2        3.95.158.127    2379/tcp        Healthy with 3 known peers
etcd/1                    active    idle   3        18.208.142.176  2379/tcp        Healthy with 3 known peers
etcd/2*                   active    idle   4        18.209.101.141  2379/tcp        Healthy with 3 known peers
kubeapi-load-balancer/0*  active    idle   5        3.219.35.137    443/tcp         Loadbalancer ready.
kubernetes-master/0       active    idle   6        35.174.154.229  6443/tcp        Kubernetes master running.
  containerd/5            active    idle            35.174.154.229                  Container runtime available.
  flannel/5               active    idle            35.174.154.229                  Flannel subnet 10.1.62.1/24
kubernetes-master/1*      active    idle   7        3.89.29.87      6443/tcp        Kubernetes master running.
  containerd/4            active    idle            3.89.29.87                      Container runtime available.
  flannel/4               active    idle            3.89.29.87                      Flannel subnet 10.1.11.1/24
kubernetes-worker/0*      active    idle   8        18.215.124.61   80/tcp,443/tcp  Kubernetes worker running.
  containerd/0*           active    idle            18.215.124.61                   Container runtime available.
  flannel/0*              active    idle            18.215.124.61                   Flannel subnet 10.1.81.1/24
kubernetes-worker/1       active    idle   9        54.211.52.28    80/tcp,443/tcp  Kubernetes worker running.
  containerd/2            active    idle            54.211.52.28                    Container runtime available.
  flannel/2               active    idle            54.211.52.28                    Flannel subnet 10.1.23.1/24
kubernetes-worker/2       active    idle   10       3.93.171.170    80/tcp,443/tcp  Kubernetes worker running.
  containerd/1            active    idle            3.93.171.170                    Container runtime available.
  flannel/1               active    idle            3.93.171.170                    Flannel subnet 10.1.66.1/24
kubernetes-worker/3       active    idle   11       3.94.78.179     80/tcp,443/tcp  Kubernetes worker running.
  containerd/3            active    idle            3.94.78.179                     Container runtime available.
  flannel/3               active    idle            3.94.78.179                     Flannel subnet 10.1.83.1/24
osm-vca/0*                active    idle   12       18.207.243.174                  configured

Machine  State    DNS             Inst id              Series  AZ          Message
0        started  3.88.26.246     i-060581800c9b3de9e  bionic  us-east-1a  running
1        started  34.207.79.236   i-0e5be760554ea0b16  bionic  us-east-1b  running
2        started  3.95.158.127    i-0c723a5c9330a17e3  bionic  us-east-1a  running
3        started  18.208.142.176  i-0ccdc065640112f5d  bionic  us-east-1b  running
4        started  18.209.101.141  i-0431891ab2dcc004b  bionic  us-east-1c  running
5        started  3.219.35.137    i-053071bbc1f012ae1  bionic  us-east-1d  running
6        started  35.174.154.229  i-091a0b6e8dadcfa6c  bionic  us-east-1a  running
7        started  3.89.29.87      i-08826546e130c1515  bionic  us-east-1b  running
8        started  18.215.124.61   i-0f73acd5c5eeef2e6  bionic  us-east-1d  running
9        started  54.211.52.28    i-09933015cbd3cd922  bionic  us-east-1c  running
10       started  3.93.171.170    i-031171240a1a70b5b  bionic  us-east-1b  running
11       started  3.94.78.179     i-029902110200145cb  bionic  us-east-1a  running
12       started  18.207.243.174  i-05592992d699d3d2f  bionic  us-east-1f  running

Offer    Application  Charm  Rev  Connected  Endpoint  Interface  Role
osm-vca  osm-vca      vca    0    0/0        osm-vca   osm-vca    provider

Get CDK credentials

The Kubernetes credentials need to be stored in ~/.kube/config. To copy the credentials and install the kubectl client, execute the following commands:

mkdir -p ~/.kube
juju scp kubernetes-master/0:config ~/.kube/config
sudo snap install kubectl --classic
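As a quick sanity check, kubectl should now be able to reach the cluster; the masters and workers from the CDK deployment should report a Ready status:

kubectl get nodes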

Create Storage

These additional commands create the storage class and a test persistent volume claim that CDK needs to provision storage on AWS.

# Create a storage class using the `kubernetes.io/aws-ebs` provisioner
kubectl create -f - <<EOY
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-1
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
EOY

# Create a persistent volume claim using that storage class
kubectl create -f - <<EOY
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: testclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
  storageClassName: ebs-1
EOY
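You can verify that dynamic provisioning works: the testclaim created above should reach the Bound status once an EBS volume has been provisioned for it.

kubectl get storageclass ebs-1
kubectl get pvc testclaim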

Bootstrap Controller

Use the cloud/region you got earlier from the juju status output (aws/us-east-1 in this example).

cat ~/.kube/config | juju add-k8s k8s-cloud --local --region=aws/us-east-1
juju bootstrap k8s-cloud
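You can check that the cloud was registered and the controller bootstrapped; the controller is named after the cloud and region (k8s-cloud-us-east-1 in this example):

juju clouds
juju controllers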

Deploy OSM

This section covers the deployment of OSM itself. We will use an osm model for it, which Juju maps to a Kubernetes namespace of the same name.

juju add-model osm

Create storage pools

Create the needed storage pools for OSM:

juju create-storage-pool operator-storage kubernetes
juju create-storage-pool osm-pv kubernetes
juju create-storage-pool packages-pv kubernetes
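You can list the pools to confirm they were created with the kubernetes provider:

juju storage-pools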

Deploy

Deploy OSM simply by executing the following:

juju deploy osm-ha
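The deployment takes a while; you can monitor it with the same watch command used earlier, until every unit reports an active workload:

watch -c juju status --color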

Cross-model relation

Add the cross-model relation between osm-vca and lcm-k8s. The offer is referenced as <controller_name>:<URL>, which in this case is aws-osm-ha:admin/cdk.osm-vca.

juju add-relation lcm-k8s aws-osm-ha:admin/cdk.osm-vca
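To confirm that the relation has been established, check the Offer section of the cdk model status on the AWS controller: the Connected column should go from 0/0 to 1/1.

juju status -m aws-osm-ha:cdk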

HOW TO

Access OSM UI

Take the IP of one of the kubernetes-workers (e.g. 18.215.124.61), and execute the following commands:

juju config ui-k8s juju-external-hostname=osm.18.215.124.61.xip.io
juju expose ui-k8s
juju config prometheus-k8s juju-external-hostname=prometheus.18.215.124.61.xip.io
juju expose prometheus-k8s
juju config grafana-k8s juju-external-hostname=grafana.18.215.124.61.xip.io
juju expose grafana-k8s

The ingress module uses nginx, whose proxy-body-size option defaults to 1m. This is a problem when uploading a VNF package larger than 1 MB. To solve it, we only have to add an annotation to the ingress:

kubectl -n osm edit ingress ui-k8s

# Add the following line in the annotations
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
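Alternatively, the same annotation can be applied non-interactively, which is equivalent to the interactive edit above:

kubectl -n osm annotate ingress ui-k8s nginx.ingress.kubernetes.io/proxy-body-size=0 --overwrite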

You can now access these services at the hostnames configured above.
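For example, a quick reachability check from your machine (assuming the xip.io hostnames configured above resolve; -k skips verification of the self-signed certificate):

curl -k https://osm.18.215.124.61.xip.io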

Check OSM status

juju status
Model  Controller           Cloud/Region         Version  SLA          Timestamp
osm    k8s-cloud-us-east-1  k8s-cloud/us-east-1  2.6.5    unsupported  21:18:23+02:00

App             Version  Status  Scale  Charm           Store       Rev  OS          Address         Notes
grafana-k8s              active      3  grafana-k8s     jujucharms   15  kubernetes  10.152.183.161
kafka-k8s                active      3  kafka-k8s       jujucharms    1  kubernetes  10.152.183.48
lcm-k8s                  active      3  lcm-k8s         jujucharms   20  kubernetes  10.152.183.75
mariadb-k8s              active      3  mariadb-k8s     jujucharms   13  kubernetes  10.152.183.31
mon-k8s                  active      3  mon-k8s         jujucharms   14  kubernetes  10.152.183.171
mongodb-k8s              active      3  mongodb-k8s     jujucharms   14  kubernetes  10.152.183.109
nbi-k8s                  active      3  nbi-k8s         jujucharms   19  kubernetes  10.152.183.252
pol-k8s                  active      3  pol-k8s         jujucharms   14  kubernetes  10.152.183.194
prometheus-k8s           active      3  prometheus-k8s  jujucharms   12  kubernetes  10.152.183.55
ro-k8s                   active      3  ro-k8s          jujucharms   14  kubernetes  10.152.183.168
ui-k8s                   active      3  ui-k8s          jujucharms   23  kubernetes  10.152.183.230  exposed
zookeeper-k8s            active      3  zookeeper-k8s   jujucharms   16  kubernetes  10.152.183.199

Unit               Workload  Agent  Address     Ports                                Message
grafana-k8s/0*     active    idle   10.1.38.43  3000/TCP                             configured
grafana-k8s/1      active    idle   10.1.31.47  3000/TCP                             configured
grafana-k8s/2      active    idle   10.1.42.79  3000/TCP                             configured
kafka-k8s/0*       active    idle   10.1.38.38  9092/TCP                             configured
kafka-k8s/1        active    idle   10.1.42.73  9092/TCP                             configured
kafka-k8s/2        active    idle   10.1.31.41  9092/TCP                             configured
lcm-k8s/0*         active    idle   10.1.38.41  80/TCP                               configured
lcm-k8s/1          active    idle   10.1.42.76  80/TCP                               configured
lcm-k8s/2          active    idle   10.1.31.44  80/TCP                               configured
mariadb-k8s/0*     active    idle   10.1.38.33  3306/TCP,4444/TCP,4567/TCP,4568/TCP  configured
mariadb-k8s/1      active    idle   10.1.42.80  3306/TCP,4444/TCP,4567/TCP,4568/TCP  configured
mariadb-k8s/2      active    idle   10.1.31.48  3306/TCP,4444/TCP,4567/TCP,4568/TCP  configured
mon-k8s/0*         active    idle   10.1.38.42  8000/TCP                             configured
mon-k8s/1          active    idle   10.1.42.77  8000/TCP                             configured
mon-k8s/2          active    idle   10.1.31.45  8000/TCP                             configured
mongodb-k8s/0      active    idle   10.1.38.35  27017/TCP                            configured
mongodb-k8s/1*     active    idle   10.1.42.70  27017/TCP                            configured
mongodb-k8s/2      active    idle   10.1.31.39  27017/TCP                            configured
nbi-k8s/0*         active    idle   10.1.38.40  9999/TCP                             configured
nbi-k8s/1          active    idle   10.1.42.75  9999/TCP                             configured
nbi-k8s/2          active    idle   10.1.31.43  9999/TCP                             configured
pol-k8s/0*         active    idle   10.1.38.39  80/TCP                               configured
pol-k8s/1          active    idle   10.1.42.74  80/TCP                               configured
pol-k8s/2          active    idle   10.1.31.42  80/TCP                               configured
prometheus-k8s/0*  active    idle   10.1.38.44  9090/TCP                             configured
prometheus-k8s/1   active    idle   10.1.42.78  9090/TCP                             configured
prometheus-k8s/2   active    idle   10.1.31.46  9090/TCP                             configured
ro-k8s/0*          active    idle   10.1.38.37  9090/TCP                             configured
ro-k8s/1           active    idle   10.1.42.72  9090/TCP                             configured
ro-k8s/2           active    idle   10.1.31.40  9090/TCP                             configured
ui-k8s/0*          active    idle   10.1.31.37  80/TCP                               configured
ui-k8s/1           active    idle   10.1.38.34  80/TCP                               configured
ui-k8s/2           active    idle   10.1.42.69  80/TCP                               configured
zookeeper-k8s/0*   active    idle   10.1.38.36  2181/TCP,2888/TCP,3888/TCP           configured
zookeeper-k8s/1    active    idle   10.1.42.71  2181/TCP,2888/TCP,3888/TCP           configured
zookeeper-k8s/2    active    idle   10.1.31.38  2181/TCP,2888/TCP,3888/TCP           configured

Scale applications

Some applications, such as MariaDB and RO, need to be scaled after the deployment has finished. We are working on improving this, but in the meantime, execute the following commands (a quick check follows the list):

  • Scale MariaDB cluster
juju scale-application mariadb-k8s 3
  • Scale RO
juju scale-application ro-k8s 3
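After scaling, juju status should show Scale 3 for both applications, as in the output above:

juju status mariadb-k8s ro-k8s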

How to clean up

The easiest way to clean everything up is to execute the following commands:

juju kill-controller aws-osm-ha -t 0 -y
juju unregister k8s-cloud-us-east-1
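To confirm the cleanup, neither controller should be listed any more (dynamically provisioned EBS volumes may still need to be removed from the AWS console):

juju controllers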
