This guide will walk you through installing the Charmed Distribution of OSM on AWS.
Current version: OSM v7.0.1
Upstream documentation
Requirements
- AWS account
Getting started
Install the basic prerequisites.
sudo snap install juju --classic
sudo snap install kubectl --classic
AWS credentials
Add the credentials of your AWS account.
juju add-credential aws
You will be asked for the auth-type (choose access-key), the access-key, and the secret-key.
Enter credential name: osm-ha-credential
Using auth-type "access-key".
Enter access-key: XXXXXXXXXXXXXXXXX
Enter secret-key:
Credential "osm-ha-credential" added locally for cloud "aws".
Bootstrap Juju on AWS
Bootstrap the Juju controller on AWS, which will be used to deploy Charmed Kubernetes.
juju bootstrap aws aws-osm-ha --credential osm-ha-credential
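Once the bootstrap finishes, you can verify that the new controller is registered, for example with:
juju controllers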
Deploy OSM VCA
The VNF Configuration and Abstraction (VCA) is a component required by Life Cycle Management (LCM), another OSM component. The VCA is an LXD-bootstrapped Juju controller that OSM uses to deploy proxy charms.
juju add-model vca
juju deploy cs:~charmed-osm/vca \
--constraints "mem=4G cores=2 root-disk=40G" \
osm-vca
juju offer osm-vca:osm-vca
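Optionally, confirm that the cross-model offer was created (run from the vca model):
juju offers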
Deploy LMA stack (optional)
Create graylog-vhost.tmpl
<Location "/">
RequestHeader set X-Graylog-Server-URL "http://{{ servername }}/api/"
ProxyPass http://{{ graylog_web }}/
ProxyPassReverse http://{{ graylog_web }}/
</Location>
<Location "/api/">
ProxyPass http://{{ graylog_api }}/api/
ProxyPassReverse http://{{ graylog_api }}/api/
</Location>
Create osm-lma-bundle.yml
series: bionic
variables:
nagios-context: &nagios-context osm-lma
applications:
apache2:
charm: cs:apache2
constraints: cores=1 mem=4G
num_units: 1
expose: true
options:
enable_modules: "headers proxy_html proxy_http"
vhost_http_template: include-base64://graylog-vhost.tmpl
elasticsearch:
charm: cs:elasticsearch
num_units: 1
constraints: cores=8 mem=32G
options:
firewall_enabled: False
es-heap-size: 16
graylog:
charm: cs:graylog
num_units: 1
constraints: cores=4 mem=8G
mongodb:
charm: cs:mongodb
constraints: cores=2 mem=4G
num_units: 1
grafana:
charm: cs:grafana
constraints: cores=2 mem=4G
num_units: 1
prometheus:
charm: cs:prometheus2
constraints: cores=2 mem=4G
num_units: 1
nagios:
num_units: 1
constraints: cores=2 mem=4G
charm: cs:nagios
nrpe-host:
charm: cs:nrpe
options:
nagios_hostname_type: "host"
nagios_host_context: *nagios-context
xfs_errors: "30"
swap: ''
filebeat:
charm: cs:filebeat
options:
logpath: "/var/log/*.log /var/log/*/*.log /var/log/syslog"
telegraf:
charm: cs:telegraf
relations:
- [ "apache2:reverseproxy", "graylog:website" ]
- [ "graylog:elasticsearch", "elasticsearch:client" ]
- [ "graylog:mongodb", "mongodb:database" ]
- [ "filebeat:logstash", "graylog:beats" ]
- [ "prometheus:grafana-source", "grafana:grafana-source" ]
- [ "telegraf:prometheus-client", "prometheus:target" ]
- [ "apache2:nrpe-external-master", "nrpe-host:nrpe-external-master" ]
- [ "elasticsearch:nrpe-external-master", "nrpe-host:nrpe-external-master" ]
- [ "graylog:nrpe-external-master", "nrpe-host:nrpe-external-master" ]
- [ "mongodb:nrpe-external-master", "nrpe-host:nrpe-external-master" ]
- [ "grafana:nrpe-external-master", "nrpe-host:nrpe-external-master" ]
- [ "prometheus:nrpe-external-master", "nrpe-host:nrpe-external-master" ]
- [ "nagios:monitors", "nrpe-host:monitors" ]
- [ "nagios:juju-info", "telegraf:juju-info" ]
- [ "nagios:juju-info", "filebeat:beats-host" ]
Deploy the LMA stack
juju add-model lma
juju deploy osm-lma-bundle.yml lma-stack
juju offer lma.graylog:beats graylog-beats
juju offer lma.prometheus:target prometheus-target
juju offer lma.nagios:monitors nagios-monitors
Deploy Charmed Kubernetes (without LMA)
Charmed Kubernetes provides a well-integrated, turn-key Kubernetes® platform that is open, extensible, and secure.
Create overlay.yaml
applications:
kubernetes-worker:
num_units: 3
constraints: cores=4 mem=8G root-disk=100G
aws-integrator:
charm: cs:~containers/aws-integrator
num_units: 1
relations:
- ['aws-integrator', 'kubernetes-master']
- ['aws-integrator', 'kubernetes-worker']
Deploy Charmed Kubernetes
juju add-model cdk
juju deploy charmed-kubernetes --overlay overlay.yaml
juju trust aws-integrator
Deploy Charmed Kubernetes (with LMA)
Charmed Kubernetes provides a well-integrated, turn-key Kubernetes® platform that is open, extensible, and secure.
Create overlay.yaml
variables:
nagios-context: &nagios-context osm-cdk
applications:
aws-integrator:
charm: cs:~containers/aws-integrator
num_units: 1
kubernetes-master:
options:
nagios_context: *nagios-context
kubernetes-worker:
num_units: 3
constraints: cores=4 mem=8G root-disk=100G
options:
nagios_context: *nagios-context
filebeat:
charm: cs:filebeat
options:
logpath: '/var/log/*.log /var/log/*/*.log /var/log/syslog'
filebeat-kube:
charm: cs:filebeat
options:
logpath: '/var/log/*.log /var/log/*/*.log /var/log/syslog /var/log/pods/*/*/*.log'
kube_logs: true
telegraf:
charm: cs:telegraf
nrpe-host:
charm: cs:nrpe
options:
nagios_hostname_type: host
nagios_host_context: *nagios-context
xfs_errors: "30"
swap: ''
relations:
- ['aws-integrator', 'kubernetes-master']
- ['aws-integrator', 'kubernetes-worker']
- [ "filebeat-kube:beats-host", "kubernetes-master:juju-info" ]
- [ "filebeat-kube:beats-host", "kubernetes-worker:juju-info" ]
- [ "filebeat:beats-host", "easyrsa:juju-info" ]
- [ "filebeat:beats-host", "etcd:juju-info" ]
- [ "kubernetes-master:juju-info", "telegraf:juju-info" ]
- [ "kubernetes-worker:juju-info", "telegraf:juju-info" ]
- [ "easyrsa:juju-info", "nrpe-host:general-info" ]
- [ "etcd:nrpe-external-master", "nrpe-host:nrpe-external-master" ]
- [ "kubeapi-load-balancer:nrpe-external-master", "nrpe-host:nrpe-external-master" ]
- [ "kubernetes-master:nrpe-external-master", "nrpe-host:nrpe-external-master" ]
- [ "kubernetes-worker:nrpe-external-master", "nrpe-host:nrpe-external-master" ]
- [ "aws-integrator:juju-info", "nrpe-host:general-info" ]
Deploy Charmed Kubernetes
juju add-model cdk
juju deploy charmed-kubernetes --overlay overlay.yaml
juju trust aws-integrator
juju relate filebeat:logstash lma.graylog-beats
juju relate filebeat-kube:logstash lma.graylog-beats
juju relate nrpe-host:monitors lma.nagios-monitors
juju relate telegraf:prometheus-client lma.prometheus-target
Bootstrap Juju on Charmed Kubernetes
Before bootstrapping Juju on the CDK cloud, it is important to wait until CDK is up and running. When the kubernetes-master units show the message "Kubernetes master running", the cluster is ready. This can take around 20 minutes.
watch -c juju status kubernetes-master --color
Model Controller Cloud/Region Version SLA Timestamp
cdk aws-osm-ha aws/us-east-1 2.5.8 unsupported 10:58:10+02:00
App Version Status Scale Charm Store Rev OS Notes
containerd active 2 containerd jujucharms 20 ubuntu
flannel 0.10.0 active 2 flannel jujucharms 438 ubuntu
kubernetes-master 1.15.3 active 2 kubernetes-master jujucharms 724 ubuntu
Unit Workload Agent Machine Public address Ports Message
kubernetes-master/0 active idle 6 18.233.154.213 6443/tcp Kubernetes master running.
containerd/5 active idle 18.233.154.213 Container runtime available.
flannel/5 active idle 18.233.154.213 Flannel subnet 10.1.80.1/24
kubernetes-master/1* active idle 7 54.172.90.177 6443/tcp Kubernetes master running.
containerd/4 active idle 54.172.90.177 Container runtime available.
flannel/4 active idle 54.172.90.177 Flannel subnet 10.1.20.1/24
Machine State DNS Inst id Series AZ Message
6 started 18.233.154.213 i-038627dbcea0975ce bionic us-east-1a running
7 started 54.172.90.177 i-01702fdcc78a78ee6 bionic us-east-1b running
Charmed Kubernetes Credentials
Get the Kubernetes credentials with the following commands.
mkdir ~/.kube
juju scp kubernetes-master/0:config ~/.kube/config
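At this point kubectl should be able to reach the cluster. As a quick sanity check, assuming the config was copied correctly:
kubectl get nodes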
Configure storage
These additional commands create a storage class so that CDK can provision storage in AWS.
# Create a storage class using the `kubernetes.io/aws-ebs` provisioner
kubectl create -f - <<EOY
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: ebs-1
provisioner: kubernetes.io/aws-ebs
volumeBindingMode: WaitForFirstConsumer
parameters:
type: gp2
EOY
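You can verify that the storage class was created:
kubectl get storageclass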
Bootstrap
Make sure to specify the right region. See the output of the previous juju status command to double-check it.
# Bootstrap k8s cloud
REGION=aws/us-east-1
cat ~/.kube/config | juju add-k8s k8s-cloud --local --region=$REGION
juju bootstrap k8s-cloud
# Add a new model for OSM
juju add-model osm
Install OSM
Choose how you would like Charmed OSM to be deployed.
Standalone
The standalone version is perfect for evaluation and development purposes.
juju deploy osm
juju add-relation lcm-k8s aws-osm-ha:admin/vca.osm-vca
juju add-relation mon-k8s aws-osm-ha:admin/vca.osm-vca
High-Availability
For production use, we offer a high-availability version of Charmed OSM. Each component is deployed as a cluster of three units set up for failover, which requires significantly more resources to operate.
juju deploy osm-ha
juju add-relation lcm-k8s aws-osm-ha:admin/vca.osm-vca
juju add-relation mon-k8s aws-osm-ha:admin/vca.osm-vca
Status
The installation can take several minutes or longer. To monitor its progress, you can watch the output of juju status:
$ watch -c juju status --color
Every 2.0s: juju status --color
Model Controller Cloud/Region Version SLA Timestamp
osm k8s-cloud-us-east-1 k8s-cloud/us-east-1 2.6.6 unsupported 12:26:14+02:00
SAAS Status Store URL
osm-vca active aws-osm-ha admin/vca.osm-vca
App Version Status Scale Charm Store Rev OS Address Notes
grafana-k8s active 3 grafana-k8s jujucharms 15 kubernetes 10.152.183.94
kafka-k8s active 3 kafka-k8s jujucharms 1 kubernetes 10.152.183.31
lcm-k8s active 3 lcm-k8s jujucharms 21 kubernetes 10.152.183.2
mariadb-k8s active 1 mariadb-k8s jujucharms 16 kubernetes 10.152.183.246
mon-k8s active 3 mon-k8s jujucharms 14 kubernetes 10.152.183.134
mongodb-k8s active 3 mongodb-k8s jujucharms 15 kubernetes 10.152.183.93
nbi-k8s active 3 nbi-k8s jujucharms 24 kubernetes 10.152.183.229
osmclient-k8s active 1 osmclient-k8s jujucharms 0 kubernetes 10.152.183.125
pol-k8s active 3 pol-k8s jujucharms 14 kubernetes 10.152.183.130
prometheus-k8s active 3 prometheus-k8s jujucharms 14 kubernetes 10.152.183.113
ro-k8s active 1 ro-k8s jujucharms 20 kubernetes 10.152.183.34
ui-k8s active 3 ui-k8s jujucharms 28 kubernetes 10.152.183.232
zookeeper-k8s active 3 zookeeper-k8s jujucharms 16 kubernetes 10.152.183.106
Unit Workload Agent Address Ports Message
grafana-k8s/0* active idle 10.1.94.8 3000/TCP configured
grafana-k8s/1 active idle 10.1.71.10 3000/TCP configured
grafana-k8s/2 active idle 10.1.33.14 3000/TCP configured
kafka-k8s/0* active idle 10.1.71.7 9092/TCP configured
kafka-k8s/1 active idle 10.1.94.5 9092/TCP configured
kafka-k8s/2 active idle 10.1.100.32 9092/TCP configured
lcm-k8s/0* active idle 10.1.71.15 80/TCP configured
lcm-k8s/1 active idle 10.1.94.11 80/TCP configured
lcm-k8s/2 active idle 10.1.100.39 80/TCP configured
mariadb-k8s/0* active idle 10.1.71.14 3306/TCP,4444/TCP,4567/TCP,4568/TCP ready
mon-k8s/0* active idle 10.1.71.11 8000/TCP configured
mon-k8s/1 active idle 10.1.94.9 8000/TCP configured
mon-k8s/2 active idle 10.1.100.34 8000/TCP configured
mongodb-k8s/0* active idle 10.1.71.5 27017/TCP configured
mongodb-k8s/1 active idle 10.1.94.3 27017/TCP configured
mongodb-k8s/2 active idle 10.1.100.30 27017/TCP configured
nbi-k8s/0* active idle 10.1.71.9 9999/TCP configured
nbi-k8s/1 active idle 10.1.94.7 9999/TCP configured
nbi-k8s/2 active idle 10.1.100.35 9999/TCP configured
osmclient-k8s/0* active idle 10.1.33.15 22/TCP ready
pol-k8s/0* active idle 10.1.71.8 80/TCP configured
pol-k8s/1 active idle 10.1.94.6 80/TCP configured
pol-k8s/2 active idle 10.1.100.33 80/TCP configured
prometheus-k8s/0* active idle 10.1.71.12 9090/TCP configured
prometheus-k8s/1 active idle 10.1.94.10 9090/TCP configured
prometheus-k8s/2 active idle 10.1.100.36 9090/TCP configured
ro-k8s/0* waiting idle 10.1.71.16 9090/TCP configured
ui-k8s/0* active idle 10.1.33.16 80/TCP configured
ui-k8s/1 active idle 10.1.71.17 80/TCP configured
ui-k8s/2 active idle 10.1.94.12 80/TCP configured
zookeeper-k8s/0* active idle 10.1.71.6 2181/TCP,2888/TCP,3888/TCP configured
zookeeper-k8s/1 active idle 10.1.94.4 2181/TCP,2888/TCP,3888/TCP configured
zookeeper-k8s/2 active idle 10.1.100.31 2181/TCP,2888/TCP,3888/TCP configured
HOW TO
Install OSM Client
Install the OSM client snap:
sudo snap install osmclient
sudo snap alias osmclient.osm osm
Add a load balancer (LB) for the NBI:
cat << EOF > nbi-lb.yaml
apiVersion: v1
kind: Service
metadata:
name: nbi-k8s-lb
spec:
ports:
- port: 9999
protocol: TCP
targetPort: 9999
selector:
juju-app: nbi-k8s
sessionAffinity: None
type: LoadBalancer
EOF
kubectl apply -f nbi-lb.yaml -n osm
Get load balancer URL:
$ kubectl get services -n osm --field-selector metadata.name=nbi-k8s-lb
#NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
#nbi-k8s-lb LoadBalancer 10.152.183.22 a52179eda27564ecc8bf75ef59b89b1a-266844128.us-east-1.elb.amazonaws.com 9999:31414/TCP 6m24s
export OSM_HOSTNAME=a52179eda27564ecc8bf75ef59b89b1a-266844128.us-east-1.elb.amazonaws.com
It can take several minutes for the load balancer to become active.
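Once the load balancer is active and OSM_HOSTNAME is exported, you can check that the client reaches the NBI, for example by listing network services (the list will be empty on a fresh install):
osm ns-list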
Switch between Juju controllers
Charmed Kubernetes:
juju switch aws-osm-ha:admin/cdk
LMA Stack:
juju switch aws-osm-ha:admin/lma
VCA:
juju switch aws-osm-ha:admin/vca
Charmed OSM:
juju switch k8s-cloud-us-east-1:admin/osm
Expose UI
Take the IP of one of the kubernetes-worker units (e.g. 3.84.169.68) and execute the following commands:
juju config ui-k8s juju-external-hostname=osm.3.84.169.68.xip.io
juju expose ui-k8s
The ingress module uses nginx. By default, its proxy-body-size option is set to 1m, which is a problem if a VNF package larger than 1m is uploaded. To solve it, we only have to add an annotation to the ingress.
kubectl -n osm edit ingress ui-k8s
# Add the following line in the annotations
nginx.ingress.kubernetes.io/proxy-body-size: "0"
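Alternatively, as a sketch of a non-interactive way to set the same annotation (assuming the ingress resource is named ui-k8s, as above):
kubectl -n osm annotate ingress ui-k8s nginx.ingress.kubernetes.io/proxy-body-size="0" --overwrite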
You can now access the OSM UI: http://osm.3.84.169.68.xip.io
Expose Prometheus
Take the IP of one of the kubernetes-worker units (e.g. 3.84.169.68) and execute the following commands:
juju config prometheus-k8s juju-external-hostname=prometheus.3.84.169.68.xip.io
juju expose prometheus-k8s
You can now access the Prometheus UI: http://prometheus.3.84.169.68.xip.io
Expose Grafana
Take the IP of one of the kubernetes-worker units (e.g. 3.84.169.68) and execute the following commands:
juju config grafana-k8s juju-external-hostname=grafana.3.84.169.68.xip.io
juju expose grafana-k8s
You can now access the Grafana UI: http://grafana.3.84.169.68.xip.io
Clean up
The easiest way to clean up everything is to execute the following commands:
juju kill-controller aws-osm-ha -t 0 -y
juju unregister k8s-cloud-us-east-1