Let’s take a look at the list of containers managed by containerd
on the Kubernetes worker node:
$ juju ssh kubernetes-worker/0 sudo ctr --namespace=k8s.io containers ls
CONTAINER IMAGE RUNTIME
066a57c0eb19e2f13a53da0d161bff05abba6834fc823bd835c656654e8516c6 sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e io.containerd.runtime.v1.linux
082dc1663483d2ad66900ff282374cf791e512e260cbe1f399c8e358eb7a9bc3 sha256:5d092e4b984acc2661bb4d2752ddd98ac87004d58c56ff41f768b1136a63a1f2 io.containerd.runtime.v1.linux
13a9464d5c1ff0b822ca6edc98fb55e359bdf9d6b16932e25b70f142ae83716a sha256:b5af743e598496e8ebd7a6eb3fea76a6464041581520d1c2315c95f993287303 io.containerd.runtime.v1.linux
1e0110721c0962ce28d830544cbe79da8604c671814b71a931b73df939a38620 sha256:577260d221dbb1be2d83447402d0d7c5e15501a89b0e2cc1961f0b24ed56c77c io.containerd.runtime.v1.linux
248f628e85552d209ba985ee427b6766ec3b6c4c1ec98c0ac9e170c5d8113acd sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e io.containerd.runtime.v1.linux
2731f5c54b7197a7b00e3a09636e7a6c5b61cafef0e037534670fc835e3b7b31 sha256:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b io.containerd.runtime.v1.linux
3af887cc9d5fdb8329a03b7fadf35819cdef5683e15642a98fb3388e930c895c sha256:71b13764bb0827aae7ff634592e32ed2fa9ee4ebe7573ad518ee940faae19402 io.containerd.runtime.v1.linux
54c8c99d48902a047aa2f69064b691c63b3287e32e01bea1de17538cc822eb84 sha256:6802d83967b995c2c2499645ede60aa234727afc06079e635285f54c82acbceb io.containerd.runtime.v1.linux
5c81130a739f265077f13ea76dae183e9ec54baa766a11a5731a050388be36e1 sha256:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b io.containerd.runtime.v1.linux
617808e88222ecf22685fa97c9d9d3332126a423fe26ccd44090cc4619c7c88e sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e io.containerd.runtime.v1.linux
61e23851f589863bc152a5e7436a3088ef4fa71d02d6181a341b9ff52a42f904 sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e io.containerd.runtime.v1.linux
66f2edb64ba85da66b5ff4861ac8d02cb8d02651ad84069c76c41250106e9647 sha256:71b13764bb0827aae7ff634592e32ed2fa9ee4ebe7573ad518ee940faae19402 io.containerd.runtime.v1.linux
6f03ab0f3f7489b46376e4f24861cfc7220166805a6e4744aeb92dc343f6ec0f sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e io.containerd.runtime.v1.linux
8e9d9644102a8a22c2628a510a2165fc922c36a8341c2b8b9ed58ddc819e2311 sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e io.containerd.runtime.v1.linux
94d040ca432ece2217f6bbe00510e053c56ed8fbe6188639425f96e26eeb3bc3 sha256:f60a10c24e075bad56bac7a14559bb7fe6e601f2aad4982b2b8c2ed9c79ccfb6 io.containerd.runtime.v1.linux
a0a5dc5e53e724cd3b44fc7a7c24fae1d6a1c29d8c52bb08b483178fd20d8886 sha256:8cb3de219af7bdf0b3ae66439aecccf94cebabb230171fa4b24d66d4a786f4f7 io.containerd.runtime.v1.linux
a39f221bfd1436bc40879eecc286cf62f8fdac0799845f599ebe634133157ef3 sha256:709901356c11546f46256dd2028c390d2b4400fe439c678b87f050977672ae8e io.containerd.runtime.v1.linux
a54d4a154a231ec1759261f6da7fb988af809f874d1e2632a999e4d072dc3d68 sha256:f60a10c24e075bad56bac7a14559bb7fe6e601f2aad4982b2b8c2ed9c79ccfb6 io.containerd.runtime.v1.linux
af21885d19060770835b0f47d31484fdc4ff918d5285b11d8aaff9f8c7cc9e65 sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e io.containerd.runtime.v1.linux
c54e3585e186497617842a976470358ececcdb17fa37ae8b9d89485eea16e61f sha256:f60a10c24e075bad56bac7a14559bb7fe6e601f2aad4982b2b8c2ed9c79ccfb6 io.containerd.runtime.v1.linux
cf843e76d3489e84db44d7d891c0236155e13567a1bd1a0628e891f6b7c31fec sha256:0439eb3e11f1927af2e0ef5f38271079cb74cacfe82d3f58b76b92fb03c644fc io.containerd.runtime.v1.linux
f512ee1d9d274d1638b7895b338d0e4e3739c48a58bd54fa75d86d278b3b9126 sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e io.containerd.runtime.v1.linux
fa688b4faf7b422a1077065e977c84aeba8fee4bd6f6008b6f481050d6dd9ca7 sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e io.containerd.runtime.v1.linux
Connection to 3.227.255.168 closed.
As you can see, there are already many containers running; these host Kubernetes cluster services. The default output is hard to digest, though, so let’s just count how many containers are running there:
$ juju ssh kubernetes-worker/0 sudo ctr --namespace=k8s.io containers ls | grep containerd | wc -l
Connection to 3.227.255.168 closed.
23
There are 23 containers running. Let’s create a sample pod based on the nginx image:
$ kubectl run nginx --image nginx --restart Never
pod/nginx created
We can check the pod’s status by executing the following command:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 12s
Let’s have another look at the container list:
$ juju ssh kubernetes-worker/0 sudo ctr --namespace=k8s.io containers ls
CONTAINER IMAGE RUNTIME
066a57c0eb19e2f13a53da0d161bff05abba6834fc823bd835c656654e8516c6 sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e io.containerd.runtime.v1.linux
082da21e6840dbbf94c940a24ed42e6e26f636582d864792c9cf7d1b33ebeca4 rocks.canonical.com:443/cdk/pause-amd64:3.1 io.containerd.runtime.v1.linux
082dc1663483d2ad66900ff282374cf791e512e260cbe1f399c8e358eb7a9bc3 sha256:5d092e4b984acc2661bb4d2752ddd98ac87004d58c56ff41f768b1136a63a1f2 io.containerd.runtime.v1.linux
13a9464d5c1ff0b822ca6edc98fb55e359bdf9d6b16932e25b70f142ae83716a sha256:b5af743e598496e8ebd7a6eb3fea76a6464041581520d1c2315c95f993287303 io.containerd.runtime.v1.linux
1e0110721c0962ce28d830544cbe79da8604c671814b71a931b73df939a38620 sha256:577260d221dbb1be2d83447402d0d7c5e15501a89b0e2cc1961f0b24ed56c77c io.containerd.runtime.v1.linux
2102015643e86832fbbcdcb62f82e3b2895e7287cc18e2d4276030d3a4935c2c sha256:540a289bab6cb1bf880086a9b803cf0c4cefe38cbb5cdefa199b69614525199f io.containerd.runtime.v1.linux
248f628e85552d209ba985ee427b6766ec3b6c4c1ec98c0ac9e170c5d8113acd sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e io.containerd.runtime.v1.linux
2731f5c54b7197a7b00e3a09636e7a6c5b61cafef0e037534670fc835e3b7b31 sha256:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b io.containerd.runtime.v1.linux
3af887cc9d5fdb8329a03b7fadf35819cdef5683e15642a98fb3388e930c895c sha256:71b13764bb0827aae7ff634592e32ed2fa9ee4ebe7573ad518ee940faae19402 io.containerd.runtime.v1.linux
54c8c99d48902a047aa2f69064b691c63b3287e32e01bea1de17538cc822eb84 sha256:6802d83967b995c2c2499645ede60aa234727afc06079e635285f54c82acbceb io.containerd.runtime.v1.linux
5c81130a739f265077f13ea76dae183e9ec54baa766a11a5731a050388be36e1 sha256:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b io.containerd.runtime.v1.linux
617808e88222ecf22685fa97c9d9d3332126a423fe26ccd44090cc4619c7c88e sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e io.containerd.runtime.v1.linux
61e23851f589863bc152a5e7436a3088ef4fa71d02d6181a341b9ff52a42f904 sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e io.containerd.runtime.v1.linux
66f2edb64ba85da66b5ff4861ac8d02cb8d02651ad84069c76c41250106e9647 sha256:71b13764bb0827aae7ff634592e32ed2fa9ee4ebe7573ad518ee940faae19402 io.containerd.runtime.v1.linux
6f03ab0f3f7489b46376e4f24861cfc7220166805a6e4744aeb92dc343f6ec0f sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e io.containerd.runtime.v1.linux
8e9d9644102a8a22c2628a510a2165fc922c36a8341c2b8b9ed58ddc819e2311 sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e io.containerd.runtime.v1.linux
94d040ca432ece2217f6bbe00510e053c56ed8fbe6188639425f96e26eeb3bc3 sha256:f60a10c24e075bad56bac7a14559bb7fe6e601f2aad4982b2b8c2ed9c79ccfb6 io.containerd.runtime.v1.linux
a0a5dc5e53e724cd3b44fc7a7c24fae1d6a1c29d8c52bb08b483178fd20d8886 sha256:8cb3de219af7bdf0b3ae66439aecccf94cebabb230171fa4b24d66d4a786f4f7 io.containerd.runtime.v1.linux
a39f221bfd1436bc40879eecc286cf62f8fdac0799845f599ebe634133157ef3 sha256:709901356c11546f46256dd2028c390d2b4400fe439c678b87f050977672ae8e io.containerd.runtime.v1.linux
a54d4a154a231ec1759261f6da7fb988af809f874d1e2632a999e4d072dc3d68 sha256:f60a10c24e075bad56bac7a14559bb7fe6e601f2aad4982b2b8c2ed9c79ccfb6 io.containerd.runtime.v1.linux
af21885d19060770835b0f47d31484fdc4ff918d5285b11d8aaff9f8c7cc9e65 sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e io.containerd.runtime.v1.linux
c54e3585e186497617842a976470358ececcdb17fa37ae8b9d89485eea16e61f sha256:f60a10c24e075bad56bac7a14559bb7fe6e601f2aad4982b2b8c2ed9c79ccfb6 io.containerd.runtime.v1.linux
cf843e76d3489e84db44d7d891c0236155e13567a1bd1a0628e891f6b7c31fec sha256:0439eb3e11f1927af2e0ef5f38271079cb74cacfe82d3f58b76b92fb03c644fc io.containerd.runtime.v1.linux
f512ee1d9d274d1638b7895b338d0e4e3739c48a58bd54fa75d86d278b3b9126 sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e io.containerd.runtime.v1.linux
fa688b4faf7b422a1077065e977c84aeba8fee4bd6f6008b6f481050d6dd9ca7 sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e io.containerd.runtime.v1.linux
Connection to 3.227.255.168 closed.
$ juju ssh kubernetes-worker/0 sudo ctr --namespace=k8s.io containers ls | grep containerd | wc -l
Connection to 3.227.255.168 closed.
25
The containers backing the nginx pod have been created. Note that the count grew by two: each pod gets a pause (sandbox) container in addition to its application container. Let’s see whether there are any qemu processes running there:
$ juju ssh kubernetes-worker/0 sudo "ps -ef | grep qemu"
ubuntu 68878 68877 0 10:51 pts/0 00:00:00 bash -c sudo ps -ef | grep qemu
ubuntu 68880 68878 0 10:51 pts/0 00:00:00 grep qemu
Connection to 3.227.255.168 closed.
There is nothing! The reason is that you have to explicitly request the Kata runtime when launching container workloads.
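Under the hood, containerd maps RuntimeClass handler names to runtime shims in its configuration. Assuming the charm wrote the default configuration path (both the path and the section names can vary between containerd versions), the mapping can be inspected on the worker node:
$ juju ssh kubernetes-worker/0 sudo grep -A 2 'runtimes.kata' /etc/containerd/config.toml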
Creating a “kata” class
For this purpose, we have to create a RuntimeClass named kata:
$ cat <<EOF > kata.yaml
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: kata
handler: kata
EOF
$ kubectl create -f kata.yaml
runtimeclass.node.k8s.io/kata created
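As a quick sanity check, we can read the class back from the API server; the command should list the kata class we have just created:
$ kubectl get runtimeclass kata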
Creating a pod via the Kata runtime
Now that the kata class has been created, we can create another pod using the Kata runtime. We start by creating a YAML file for the pod:
$ kubectl run nginx-kata --image nginx --restart Never --dry-run --output yaml > nginx-kata.yaml
Then we add the runtimeClassName: kata line to the nginx-kata.yaml file, under the spec section:
$ cat nginx-kata.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx-kata
  name: nginx-kata
spec:
  runtimeClassName: kata
  containers:
  - image: nginx
    name: nginx-kata
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
And finally, we create the pod:
$ kubectl create -f nginx-kata.yaml
pod/nginx-kata created
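To double-check that the API server accepted the runtime class, we can read the field back from the live pod object; this should print kata:
$ kubectl get pod nginx-kata -o jsonpath='{.spec.runtimeClassName}'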
Let’s now check the container list again:
$ juju ssh kubernetes-worker/0 sudo ctr --namespace=k8s.io containers ls
CONTAINER IMAGE RUNTIME
066a57c0eb19e2f13a53da0d161bff05abba6834fc823bd835c656654e8516c6 sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e io.containerd.runtime.v1.linux
082da21e6840dbbf94c940a24ed42e6e26f636582d864792c9cf7d1b33ebeca4 rocks.canonical.com:443/cdk/pause-amd64:3.1 io.containerd.runtime.v1.linux
082dc1663483d2ad66900ff282374cf791e512e260cbe1f399c8e358eb7a9bc3 sha256:5d092e4b984acc2661bb4d2752ddd98ac87004d58c56ff41f768b1136a63a1f2 io.containerd.runtime.v1.linux
13a9464d5c1ff0b822ca6edc98fb55e359bdf9d6b16932e25b70f142ae83716a sha256:b5af743e598496e8ebd7a6eb3fea76a6464041581520d1c2315c95f993287303 io.containerd.runtime.v1.linux
1e0110721c0962ce28d830544cbe79da8604c671814b71a931b73df939a38620 sha256:577260d221dbb1be2d83447402d0d7c5e15501a89b0e2cc1961f0b24ed56c77c io.containerd.runtime.v1.linux
2102015643e86832fbbcdcb62f82e3b2895e7287cc18e2d4276030d3a4935c2c sha256:540a289bab6cb1bf880086a9b803cf0c4cefe38cbb5cdefa199b69614525199f io.containerd.runtime.v1.linux
248f628e85552d209ba985ee427b6766ec3b6c4c1ec98c0ac9e170c5d8113acd sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e io.containerd.runtime.v1.linux
2731f5c54b7197a7b00e3a09636e7a6c5b61cafef0e037534670fc835e3b7b31 sha256:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b io.containerd.runtime.v1.linux
3741ee057f93943a5e9ff1502a92a876bc8339a0d50ff887e899b37aef7a5357 sha256:540a289bab6cb1bf880086a9b803cf0c4cefe38cbb5cdefa199b69614525199f io.containerd.kata.v2
3af887cc9d5fdb8329a03b7fadf35819cdef5683e15642a98fb3388e930c895c sha256:71b13764bb0827aae7ff634592e32ed2fa9ee4ebe7573ad518ee940faae19402 io.containerd.runtime.v1.linux
54c8c99d48902a047aa2f69064b691c63b3287e32e01bea1de17538cc822eb84 sha256:6802d83967b995c2c2499645ede60aa234727afc06079e635285f54c82acbceb io.containerd.runtime.v1.linux
5c81130a739f265077f13ea76dae183e9ec54baa766a11a5731a050388be36e1 sha256:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b io.containerd.runtime.v1.linux
617808e88222ecf22685fa97c9d9d3332126a423fe26ccd44090cc4619c7c88e sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e io.containerd.runtime.v1.linux
61e23851f589863bc152a5e7436a3088ef4fa71d02d6181a341b9ff52a42f904 sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e io.containerd.runtime.v1.linux
66f2edb64ba85da66b5ff4861ac8d02cb8d02651ad84069c76c41250106e9647 sha256:71b13764bb0827aae7ff634592e32ed2fa9ee4ebe7573ad518ee940faae19402 io.containerd.runtime.v1.linux
6f03ab0f3f7489b46376e4f24861cfc7220166805a6e4744aeb92dc343f6ec0f sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e io.containerd.runtime.v1.linux
8e9d9644102a8a22c2628a510a2165fc922c36a8341c2b8b9ed58ddc819e2311 sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e io.containerd.runtime.v1.linux
94d040ca432ece2217f6bbe00510e053c56ed8fbe6188639425f96e26eeb3bc3 sha256:f60a10c24e075bad56bac7a14559bb7fe6e601f2aad4982b2b8c2ed9c79ccfb6 io.containerd.runtime.v1.linux
a0a5dc5e53e724cd3b44fc7a7c24fae1d6a1c29d8c52bb08b483178fd20d8886 sha256:8cb3de219af7bdf0b3ae66439aecccf94cebabb230171fa4b24d66d4a786f4f7 io.containerd.runtime.v1.linux
a39f221bfd1436bc40879eecc286cf62f8fdac0799845f599ebe634133157ef3 sha256:709901356c11546f46256dd2028c390d2b4400fe439c678b87f050977672ae8e io.containerd.runtime.v1.linux
a54d4a154a231ec1759261f6da7fb988af809f874d1e2632a999e4d072dc3d68 sha256:f60a10c24e075bad56bac7a14559bb7fe6e601f2aad4982b2b8c2ed9c79ccfb6 io.containerd.runtime.v1.linux
af21885d19060770835b0f47d31484fdc4ff918d5285b11d8aaff9f8c7cc9e65 sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e io.containerd.runtime.v1.linux
c54e3585e186497617842a976470358ececcdb17fa37ae8b9d89485eea16e61f sha256:f60a10c24e075bad56bac7a14559bb7fe6e601f2aad4982b2b8c2ed9c79ccfb6 io.containerd.runtime.v1.linux
cf843e76d3489e84db44d7d891c0236155e13567a1bd1a0628e891f6b7c31fec sha256:0439eb3e11f1927af2e0ef5f38271079cb74cacfe82d3f58b76b92fb03c644fc io.containerd.runtime.v1.linux
d53282f1cd5e37c2adbf8a76851ee942733965aa2732760700e5ad7a2f6e53cd rocks.canonical.com:443/cdk/pause-amd64:3.1 io.containerd.kata.v2
f512ee1d9d274d1638b7895b338d0e4e3739c48a58bd54fa75d86d278b3b9126 sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e io.containerd.runtime.v1.linux
fa688b4faf7b422a1077065e977c84aeba8fee4bd6f6008b6f481050d6dd9ca7 sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e io.containerd.runtime.v1.linux
Connection to 3.227.255.168 closed.
$ juju ssh kubernetes-worker/0 sudo ctr --namespace=k8s.io containers ls | grep containerd | wc -l
Connection to 3.227.255.168 closed.
27
The number has grown by two again, and this time we can see that the io.containerd.kata.v2 runtime is being used for the new entries.
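To pick out just the Kata-backed containers from that long listing, we can filter on the runtime column; based on the output above, this should count the pause container plus the nginx-kata container:
$ juju ssh kubernetes-worker/0 sudo ctr --namespace=k8s.io containers ls | grep kata.v2 | wc -l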
We can also check if there are any qemu processes running on the Kubernetes worker node:
$ juju ssh kubernetes-worker/0 sudo "ps -ef | grep qemu"
root 13702 1 1 11:07 ? 00:00:01 /usr/bin/qemu-vanilla-system-x86_64 -name sandbox-d53282f1cd5e37c2adbf8a76851ee942733965aa2732760700e5ad7a2f6e53cd -uuid 4ad3b074-b802-41f0-ae42-73a53cf27f50 -machine pc,accel=kvm,kernel_irqchip,nvdimm -cpu host -qmp unix:/run/vc/vm/d53282f1cd5e37c2adbf8a76851ee942733965aa2732760700e5ad7a2f6e53cd/qmp.sock,server,nowait -m 2048M,slots=10,maxmem=516891M -device pci-bridge,bus=pci.0,id=pci-bridge-0,chassis_nr=1,shpc=on,addr=2,romfile= -device virtio-serial-pci,disable-modern=false,id=serial0,romfile= -device virtconsole,chardev=charconsole0,id=console0 -chardev socket,id=charconsole0,path=/run/vc/vm/d53282f1cd5e37c2adbf8a76851ee942733965aa2732760700e5ad7a2f6e53cd/console.sock,server,nowait -device nvdimm,id=nv0,memdev=mem0 -object memory-backend-file,id=mem0,mem-path=/usr/share/kata-containers/kata-containers-image_clearlinux_1.9.0-rc0_agent_ba6ab83c16.img,size=134217728 -device virtio-scsi-pci,id=scsi0,disable-modern=false,romfile= -object rng-random,id=rng0,filename=/dev/urandom -device virtio-rng,rng=rng0,romfile= -device virtserialport,chardev=charch0,id=channel0,name=agent.channel.0 -chardev socket,id=charch0,path=/run/vc/vm/d53282f1cd5e37c2adbf8a76851ee942733965aa2732760700e5ad7a2f6e53cd/kata.sock,server,nowait -device virtio-9p-pci,disable-modern=false,fsdev=extra-9p-kataShared,mount_tag=kataShared,romfile= -fsdev local,id=extra-9p-kataShared,path=/run/kata-containers/shared/sandboxes/d53282f1cd5e37c2adbf8a76851ee942733965aa2732760700e5ad7a2f6e53cd,security_model=none -netdev tap,id=network-0,vhost=on,vhostfds=3,fds=4 -device driver=virtio-net-pci,netdev=network-0,mac=36:62:ea:9e:37:7f,disable-modern=false,mq=on,vectors=4,romfile= -global kvm-pit.lost_tick_policy=discard -vga none -no-user-config -nodefaults -nographic -daemonize -object memory-backend-ram,id=dimm1,size=2048M -numa node,memdev=dimm1 -kernel /usr/share/kata-containers/vmlinuz-4.19.75.54-42.container -append tsc=reliable no_timer_check rcupdate.rcu_expedited=1 i8042.direct=1 i8042.dumbkbd=1 i8042.nopnp=1 i8042.noaux=1 noreplace-smp reboot=k console=hvc0 console=hvc1 iommu=off cryptomgr.notests net.ifnames=0 pci=lastbus=0 root=/dev/pmem0p1 rootflags=dax,data=ordered,errors=remount-ro ro rootfstype=ext4 quiet systemd.show_status=false panic=1 nr_cpus=72 agent.use_vsock=false systemd.unit=kata-containers.target systemd.mask=systemd-networkd.service systemd.mask=systemd-networkd.socket -pidfile /run/vc/vm/d53282f1cd5e37c2adbf8a76851ee942733965aa2732760700e5ad7a2f6e53cd/pid -smp 1,cores=1,threads=1,sockets=72,maxcpus=72
ubuntu 14549 14548 0 11:09 pts/0 00:00:00 bash -c sudo ps -ef | grep qemu
ubuntu 14551 14549 0 11:09 pts/0 00:00:00 grep qemu
Connection to 3.227.255.168 closed.
$ juju ssh kubernetes-worker/0 sudo "ps -ef | grep qemu | grep root | wc -l"
Connection to 3.227.255.168 closed.
1
As you can see, this time a qemu process has been created: the nginx-kata pod is running as a container inside a lightweight VM on the Kubernetes worker node.
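A simple way to convince yourself that the pod really runs inside a guest VM is to compare kernel versions: the kernel reported from inside the pod should match the guest kernel passed to qemu above (vmlinuz-4.19.75…), not the one installed on the worker node itself:
$ kubectl exec nginx-kata -- uname -r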
Creating a deployment via the Kata runtime
Creating deployments via the Kata runtime does not differ much from creating pods. Let’s start by creating a YAML file for an nginx-based deployment with 3 replicas:
$ kubectl run nginx-deployment-kata --image nginx --replicas 3 --restart Always --dry-run --output yaml > nginx-kata-deployment.yaml
Then we add the runtimeClassName: kata line again, this time under the pod template’s spec section:
$ cat nginx-kata-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    run: nginx-deployment-kata
  name: nginx-deployment-kata
spec:
  replicas: 3
  selector:
    matchLabels:
      run: nginx-deployment-kata
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: nginx-deployment-kata
    spec:
      runtimeClassName: kata
      containers:
      - image: nginx
        name: nginx-deployment-kata
        resources: {}
status: {}
And create the deployment:
$ kubectl create -f nginx-kata-deployment.yaml
deployment.apps/nginx-deployment-kata created
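Before counting VMs, we can confirm that all three replicas are up; the deployment’s pods carry the run=nginx-deployment-kata label:
$ kubectl get pods -l run=nginx-deployment-kata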
If we now check the number of qemu processes running on the Kubernetes worker node, we will notice that there are 4 of them:
$ juju ssh kubernetes-worker/0 sudo "ps -ef | grep qemu | grep root | wc -l"
Connection to 3.227.255.168 closed.
4
This means that, apart from the VM created for the nginx-kata pod, a separate VM has been created for each replica of the nginx-deployment-kata deployment.
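Once you are done experimenting, the sample workloads and the runtime class can be removed:
$ kubectl delete deployment nginx-deployment-kata
$ kubectl delete pod nginx nginx-kata
$ kubectl delete runtimeclass kata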