Kubeflow edge for inference and distributed training
To shrink the memory footprint even further, we created the Kubeflow edge bundle.
Kubeflow edge takes the inference and distributed training pieces of Kubeflow (including the TFJob operator, the PyTorch operator, Seldon Core and Kubeflow Pipelines) and packages them to run on devices with as little as 4GB of RAM.
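As a sketch of what those training operators actually run, here is a minimal TFJob manifest sized for a small device. The job name, namespace and container image are illustrative assumptions, not part of the bundle itself:

```yaml
# Illustrative TFJob manifest; name, namespace and image are placeholders.
apiVersion: kubeflow.org/v1
kind: TFJob
metadata:
  name: mnist-train          # hypothetical job name
  namespace: kubeflow        # assumed deployment namespace
spec:
  tfReplicaSpecs:
    Worker:
      replicas: 1            # a single worker fits edge hardware
      restartPolicy: OnFailure
      template:
        spec:
          containers:
          - name: tensorflow  # TFJob expects this container name
            image: your-registry/mnist-train:latest  # placeholder image
            resources:
              limits:
                memory: 1Gi   # keep workloads small on a 4GB device
```

The same pattern applies to the PyTorch operator with `kind: PyTorchJob`; capping the resource limits is what keeps these workloads viable within the bundle's memory budget.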
So far, we have seen this option generate the most impact in industries that leverage an IoT mesh, such as manufacturing, mobility, retail, or ag-tech.
To install Kubeflow edge, deploy MicroK8s and then run:
$ KUBEFLOW_BUNDLE=edge microk8s enable kubeflow
Check out what’s inside Kubeflow edge