This charm deploys a container runtime and the Kubernetes worker applications: kubelet and kube-proxy.
For this charm to be useful, it should be deployed alongside its companion charm, kubernetes-control-plane, and related to an SDN plugin and a container runtime such as containerd.
This charm is part of the Charmed Kubernetes bundle, which can be deployed with a single command:
juju deploy charmed-kubernetes
For more information about Charmed Kubernetes, see the overview documentation.
Scale out
To add additional compute capacity to your Kubernetes workers, you may run
juju add-unit kubernetes-worker
to scale the cluster. The new units will automatically join any related kubernetes-control-plane and enlist themselves as ready once the deployment is complete.
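Several units can also be added in one step; for example, to add three more workers:
juju add-unit kubernetes-worker -n 3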
Snap Configuration
The Kubernetes resources used by this charm are snap packages. When not specified during deployment, these resources come from the public store. By default, the snapd daemon will refresh all snaps installed from the store four (4) times per day. A charm configuration option is provided for operators to control this refresh frequency.
Examples:
Refresh kubernetes-worker snaps every Tuesday:
juju config kubernetes-worker snapd_refresh="tue"
Refresh snaps at 11pm on the last (5th) Friday of the month:
juju config kubernetes-worker snapd_refresh="fri5,23:00"
Delay the refresh as long as possible:
juju config kubernetes-worker snapd_refresh="max"
Use the system default refresh timer:
juju config kubernetes-worker snapd_refresh=""
Configuration
This charm supports some configuration options to set up a Kubernetes cluster that works in your environment.
For some specific Kubernetes service configuration tasks, please see the section on configuring K8s services below. See also the [kubernetes-master charm configuration][charm-kubernetes-master] for other settings relating to Kubernetes services.
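The options currently supported by this charm, along with their current values, can be listed with:
juju config kubernetes-worker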
Configuring K8s services
IPVS (IP Virtual Server)
This requires configuration of both the kubernetes-control-plane and kubernetes-worker charms. Please see the configuration section on the kubernetes-control-plane page.
Configuring kubelet
Each worker runs the node agent, kubelet, with a set of arguments and configuration set by this charm. In some cases it may be desirable to add options or arguments, for which the charm provides two mechanisms: kubelet-extra-args for command-line options, and kubelet-extra-config for configuration file settings. The definitive reference for kubelet is the upstream documentation.
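As an illustration (the verbosity flag and eviction threshold shown here are arbitrary example values, not recommendations), extra command-line options are passed as space-separated key=value pairs, and extra configuration as a YAML snippet:
juju config kubernetes-worker kubelet-extra-args="v=4"
juju config kubernetes-worker kubelet-extra-config="{evictionHard: {memory.available: 200Mi}}"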
HugePages
HugePages are a standard memory management feature of the Linux kernel to decrease overhead for processes which consume large amounts of memory.
Kubernetes includes support for using HugePages with pods (see the upstream documentation).
To use HugePages in your pods with Charmed Kubernetes, it is necessary to update the configuration for the workers:
- Fetch the current 'sysctl' configuration from the worker:
juju config kubernetes-worker sysctl
This should return a string of config options, e.g.:
{ net.ipv4.conf.all.forwarding : 1, net.ipv4.neigh.default.gc_thresh1 : 128, net.ipv4.neigh.default.gc_thresh2 : 28672, net.ipv4.neigh.default.gc_thresh3 : 32768, net.ipv6.neigh.default.gc_thresh1 : 128, net.ipv6.neigh.default.gc_thresh2 : 28672, net.ipv6.neigh.default.gc_thresh3 : 32768, fs.inotify.max_user_instances : 8192, fs.inotify.max_user_watches: 1048576 }
- The config option for HugePages is vm.nr_hugepages. To add this configuration, you should append it to the string and set the whole configuration. For example, for 100 2Mi pages:
juju config kubernetes-worker sysctl="{ net.ipv4.conf.all.forwarding : 1, net.ipv4.neigh.default.gc_thresh1 : 128, net.ipv4.neigh.default.gc_thresh2 : 28672, net.ipv4.neigh.default.gc_thresh3 : 32768, net.ipv6.neigh.default.gc_thresh1 : 128, net.ipv6.neigh.default.gc_thresh2 : 28672, net.ipv6.neigh.default.gc_thresh3 : 32768, fs.inotify.max_user_instances : 8192, fs.inotify.max_user_watches: 1048576, vm.nr_hugepages: 100 }"
- HugePages can now be consumed via container-level resource requirements using the resource name hugepages-<size>. For example:
apiVersion: v1
kind: Pod
metadata:
  name: hugepages-test
spec:
  containers:
  - image: ubuntu:latest
    command:
    - sleep
    - inf
    name: example
    volumeMounts:
    - mountPath: /hugepages
      name: hugepage
    resources:
      limits:
        hugepages-2Mi: 100Mi
        memory: 100Mi
      requests:
        memory: 100Mi
  volumes:
  - name: hugepage
    emptyDir:
      medium: HugePages
Huge page usage in a namespace can be managed with ResourceQuota, similar to other compute resources.
- To verify, you can exec into the pod and check /proc/meminfo:
kubectl exec hugepages-test -- cat /proc/meminfo | grep HugePages_
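As an additional check (assuming the default 2Mi page size used in the example above), the worker node should now advertise the huge pages in its capacity and allocatable resources:
kubectl describe nodes | grep hugepages-2Mi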