Running OVN-Kubernetes on a preexisting kind cluster
OVN-Kubernetes is a CNI for Kubernetes based on the Open Virtual Network (OVN) project.
A kind (Kubernetes in Docker) deployment of OVN-Kubernetes is a fast and easy way to install and test Kubernetes with the OVN-Kubernetes CNI. The value proposition is mainly for developers who want to reproduce an issue or test a fix in an environment that can be brought up locally within a few minutes.
By default, the kind.sh script creates a new kind cluster and then deploys OVN-Kubernetes on top of it. Thanks to the new --deploy option introduced into the kind.sh script by this PR, it is now possible to use the script to deploy OVN-Kubernetes on a preexisting kind cluster.
First, let’s define the kind cluster configuration and store it in a kind-ovn.yaml file:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  # kube proxy will be disabled
  kubeProxyMode: "none"
  # the default CNI will not be installed
  disableDefaultCNI: true
  podSubnet: "10.244.0.0/16"
  serviceSubnet: "10.96.0.0/16"
kubeadmConfigPatches:
- |
  kind: ClusterConfiguration
  metadata:
    name: config
  apiServer:
    extraArgs:
      "v": "4"
  controllerManager:
    extraArgs:
      "v": "4"
  scheduler:
    extraArgs:
      "v": "4"
  networking:
    dnsDomain: cluster.local
  ---
  kind: InitConfiguration
  nodeRegistration:
    kubeletExtraArgs:
      "v": "4"
  ---
  kind: JoinConfiguration
  nodeRegistration:
    kubeletExtraArgs:
      "v": "4"
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
        authorization-mode: "AlwaysAllow"
- role: worker
- role: worker
Create the cluster:
$ kind create cluster --name ovn --image kindest/node:v1.24.0 --config=kind-ovn.yaml
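If you want to see the Docker containers backing the kind nodes, you can, for instance, filter docker ps by the cluster name (the node containers are named after the cluster, e.g. ovn-control-plane):
$ docker ps --filter "name=ovn" --format "table {{.Names}}\t{{.Status}}"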
Export the kubeconfig file:
$ kind get kubeconfig --name ovn > kubeconfig
$ export KUBECONFIG=$(pwd)/kubeconfig
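To make sure kubectl now talks to the new cluster, you can, for example, query the cluster endpoints (kind names the context kind-<cluster-name>, so kind-ovn here):
$ kubectl cluster-info --context kind-ovn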
As you can see, the nodes are in the NotReady state (because no CNI is deployed yet) and the CNI-dependent pods are stuck in the Pending state:
$ kubectl get node -o wide
NAME                STATUS     ROLES           AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE       KERNEL-VERSION           CONTAINER-RUNTIME
ovn-control-plane   NotReady   control-plane   59s   v1.24.0   172.18.0.3     <none>        Ubuntu 21.10   5.17.5-300.fc36.x86_64   containerd://1.6.4
ovn-worker          NotReady   <none>          35s   v1.24.0   172.18.0.2     <none>        Ubuntu 21.10   5.17.5-300.fc36.x86_64   containerd://1.6.4
ovn-worker2         NotReady   <none>          22s   v1.24.0   172.18.0.4     <none>        Ubuntu 21.10   5.17.5-300.fc36.x86_64   containerd://1.6.4
$ kubectl get po -A
NAMESPACE            NAME                                            READY   STATUS    RESTARTS   AGE
kube-system          coredns-6d4b75cb6d-4nbkk                        0/1     Pending   0          107s
kube-system          coredns-6d4b75cb6d-wkmzk                        0/1     Pending   0          107s
kube-system          etcd-ovn-control-plane                          1/1     Running   0          2m3s
kube-system          kube-apiserver-ovn-control-plane                1/1     Running   0          2m
kube-system          kube-controller-manager-ovn-control-plane       1/1     Running   0          2m
kube-system          kube-scheduler-ovn-control-plane                1/1     Running   0          2m3s
local-path-storage   local-path-provisioner-9cd9bd544-4dt8d          0/1     Pending   0          107s
Enable IPv6 on the kind containers:
$ KIND_NODES=$(kind get nodes --name ovn)
$ for n in $KIND_NODES; do
    docker exec "$n" sysctl --ignore net.ipv6.conf.all.disable_ipv6=0
    docker exec "$n" sysctl --ignore net.ipv6.conf.all.forwarding=1
  done
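Optionally, you can read the values back to confirm the settings were applied on every node:
$ for n in $KIND_NODES; do
    docker exec "$n" sysctl net.ipv6.conf.all.disable_ipv6 net.ipv6.conf.all.forwarding
  done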
Clone the ovn-kubernetes repo:
$ git clone https://github.com/ovn-org/ovn-kubernetes.git
Run the kind.sh script with the --deploy option so that it skips creating a new kind cluster and only deploys OVN-Kubernetes on the existing one:
$ cd ovn-kubernetes/contrib/
$ ./kind.sh -kc $KUBECONFIG --deploy
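The deployment takes a few minutes. If you want to follow the rollout, one way is to watch the pods coming up in the ovn-kubernetes namespace:
$ kubectl -n ovn-kubernetes get pods -w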
Check that the nodes have transitioned to the Ready state, that the CNI-dependent pods are now Running, and that the OVN-Kubernetes pods are present:
$ kubectl get node
NAME                STATUS   ROLES           AGE   VERSION
ovn-control-plane   Ready    control-plane   11m   v1.24.0
ovn-worker          Ready    <none>          10m   v1.24.0
ovn-worker2         Ready    <none>          10m   v1.24.0
$ kubectl get po -A
NAMESPACE            NAME                                            READY   STATUS    RESTARTS   AGE
kube-system          coredns-6d4b75cb6d-9r8lh                        1/1     Running   0          10m
kube-system          coredns-6d4b75cb6d-kvhf6                        1/1     Running   0          10m
kube-system          etcd-ovn-control-plane                          1/1     Running   0          11m
kube-system          kube-apiserver-ovn-control-plane                1/1     Running   0          11m
kube-system          kube-controller-manager-ovn-control-plane       1/1     Running   0          11m
kube-system          kube-scheduler-ovn-control-plane                1/1     Running   0          11m
local-path-storage   local-path-provisioner-9cd9bd544-fm7vm          1/1     Running   0          10m
ovn-kubernetes       ovnkube-db-5fdf4c4986-t2hp7                     2/2     Running   0          3m12s
ovn-kubernetes       ovnkube-master-5b5ddf8879-7vqcd                 2/2     Running   0          3m10s
ovn-kubernetes       ovnkube-node-8mmjm                              3/3     Running   0          3m3s
ovn-kubernetes       ovnkube-node-ggcsd                              3/3     Running   0          3m3s
ovn-kubernetes       ovnkube-node-xtdxn                              3/3     Running   0          3m3s
ovn-kubernetes       ovs-node-4qvfz                                  1/1     Running   0          3m11s
ovn-kubernetes       ovs-node-99vl7                                  1/1     Running   0          3m11s
ovn-kubernetes       ovs-node-dx2rk                                  1/1     Running   0          3m11s
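As an optional smoke test of pod networking and service load balancing (recall that kube-proxy is disabled, so Services are handled by OVN), you can create a throwaway deployment and reach it through a Service; the nginx and client names below are purely illustrative:
$ kubectl create deployment nginx --image=nginx --replicas=2
$ kubectl expose deployment nginx --port=80
$ kubectl run client --rm -it --image=busybox --restart=Never -- wget -qO- http://nginx
$ kubectl delete service nginx && kubectl delete deployment nginx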
To tear down the kind cluster when you are finished, simply run:
$ ./kind.sh --delete
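Afterwards, the cluster should no longer show up in the list of kind clusters:
$ kind get clusters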
Alternatively, you can use this script to automate the whole deployment.
Tested on Fedora release 36 and Ubuntu 22.04 with kind version 0.17.0.