2.4.7. Run container using KubeVirt¶
KubeVirt is a virtualization add-on to Kubernetes. The aim is to provide a common ground for virtualization solutions on top of Kubernetes. KubeVirt enables virtual machines to be deployed, consumed, and managed by Kubernetes just like containers.
This section describes how to install KubeVirt on Kubernetes. We assume that a Kubernetes cluster is already installed.
It then illustrates the deployment of Virtual Service Router as a virtual machine managed by KubeVirt on this cluster. The procedure has been tested with Kubernetes release v1.26, containerd as the container runtime, and KubeVirt release v1.5.2.
If you are already familiar with KubeVirt or if you already have KubeVirt installed, you may want to skip the KubeVirt installation procedure and focus on Deploy VSR Virtual Machine.
Note
To simplify the documentation, we assume that all commands are run by the root user.
Note
At the time of writing, the latest KubeVirt version is 1.5.2.
Installing KubeVirt on Kubernetes¶
Requirements¶
A Kubernetes cluster based on one of the three latest Kubernetes releases available at the time the KubeVirt release is made.
Kubernetes apiserver must have --allow-privileged=true in order to run KubeVirt’s privileged DaemonSet.
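On a kubeadm-based control plane, you can quickly check that this flag is set by inspecting the apiserver static pod manifest (a minimal check; the manifest path assumes the default kubeadm layout):
# grep allow-privileged /etc/kubernetes/manifests/kube-apiserver.yaml
    - --allow-privileged=true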
Note
KubeVirt is currently supported on the following container runtimes:
containerd
crio
Other container runtimes, which do not use virtualization features, should work too. However, the mentioned ones are the main target.
See also
The requirements from the KubeVirt documentation.
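In addition, the worker nodes should support hardware virtualization. Assuming the libvirt client tools are available on a node, one way to verify this is:
# virt-host-validate qemu
The checks related to hardware virtualization should report PASS.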
KubeVirt Installation¶
Grab the latest version of KubeVirt:
# KUBEVIRT_VERSION=$(curl -s https://api.github.com/repos/kubevirt/kubevirt/releases/latest | awk -F '[ \t":]+' '/tag_name/ {print $3}')
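Alternatively, to deploy the exact release tested in this document instead of the latest one, set the variable explicitly:
# KUBEVIRT_VERSION=v1.5.2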
Install the KUBEVIRT_VERSION release of the KubeVirt operator:
# kubectl create -f https://github.com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/kubevirt-operator.yaml
namespace/kubevirt created
customresourcedefinition.apiextensions.k8s.io/kubevirts.kubevirt.io created
priorityclass.scheduling.k8s.io/kubevirt-cluster-critical created
clusterrole.rbac.authorization.k8s.io/kubevirt.io:operator created
serviceaccount/kubevirt-operator created
role.rbac.authorization.k8s.io/kubevirt-operator created
rolebinding.rbac.authorization.k8s.io/kubevirt-operator-rolebinding created
clusterrole.rbac.authorization.k8s.io/kubevirt-operator created
clusterrolebinding.rbac.authorization.k8s.io/kubevirt-operator created
deployment.apps/virt-operator created
Create the KubeVirt Custom Resource (instance deployment request) which triggers the actual installation:
# kubectl create -f https://github.com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/kubevirt-cr.yaml
kubevirt.kubevirt.io/kubevirt created
Wait until all KubeVirt components are up:
# kubectl -n kubevirt wait kv kubevirt --for condition=Available
kubevirt.kubevirt.io/kubevirt condition met
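The deployment can take a few minutes; you may add a timeout so that the command does not block indefinitely:
# kubectl -n kubevirt wait kv kubevirt --for condition=Available --timeout=10m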
Check that KubeVirt is fully deployed: the PHASE column of the KubeVirt Custom Resource should be set to “Deployed”:
# kubectl -n kubevirt get kubevirt
NAME AGE PHASE
kubevirt 4m41s Deployed
List the pods in the kubevirt namespace and check that they are in “Running” status:
# kubectl get pods -n kubevirt
NAME READY STATUS RESTARTS AGE
virt-api-d64b75fb-bpzkm 1/1 Running 0 10m
virt-api-d64b75fb-wt6mc 1/1 Running 0 10m
virt-controller-78585b67fc-b86pl 1/1 Running 0 10m
virt-controller-78585b67fc-c7dm8 1/1 Running 0 10m
virt-handler-cjqh8 1/1 Running 0 10m
virt-handler-j7867 1/1 Running 0 10m
virt-operator-6f5cd77cdc-gqchf 1/1 Running 0 15m
virt-operator-6f5cd77cdc-sfj2p 1/1 Running 0 15m
Install the KubeVirt client, virtctl:
# curl -Lo virtctl https://github.com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/virtctl-${KUBEVIRT_VERSION}-linux-amd64
# chmod +x virtctl
# mv virtctl $HOME/.local/bin
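Assuming $HOME/.local/bin is in your PATH, check that the client works and can reach the cluster; virtctl version prints both the client and the server versions:
# virtctl version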
See also
More information can be found in the KubeVirt documentation.
Deploy VSR Virtual Machine¶
In KubeVirt, every VirtualMachineInstance object represents a single running VM instance.
We use containerDisk as an ephemeral disk for the VM. containerDisk is a VM image that is stored as a container image in a container image registry. containerDisks are ephemeral storage devices that can be assigned to any number of active VirtualMachineInstances.
The Virtual Service Router containerDisk image is available at: download.6wind.com/vsr/x86_64/3.11/3.11.0.ga
To deploy the VirtualMachine object, create the following
vsr-vmi-deploy.yaml file with this content:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
name: "vsr"
spec:
runStrategy: Always
template:
metadata:
labels:
kubevirt.io/domain: vsr
role: vsr
spec:
domain:
devices:
disks:
- name: cdisk
disk: {}
- name: cloudinitdisk
disk: {}
interfaces:
- name: default
masquerade: {}
- name: sriov
sriov: {}
resources:
requests:
memory: "8Gi"
cpu: "4"
limits:
memory: "8Gi"
cpu: "4"
networks:
- name: default
pod: {}
- name: sriov
multus:
networkName: multus-intel-sriov-nic-vsr
default: false
accessCredentials:
- sshPublicKey:
source:
secret:
secretName: my-pub-key
propagationMethod:
noCloud: {}
volumes:
- name: cdisk
containerDisk:
image: download.6wind.com/vsr/x86_64-ce/3.11:3.11.0.ga
- name: cloudinitdisk
cloudInitNoCloud:
userData: |-
#cloud-config
This configuration file declares a containerDisk with the Virtual Service Router image. It also connects the VirtualMachine to a secondary network using Multus. This assumes that Multus is installed across the Kubernetes cluster and that a corresponding NetworkAttachmentDefinition was created. It also assumes that SR-IOV is installed and configured across the cluster. A sketch of such a NetworkAttachmentDefinition is shown after the listings below.
# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
[...]
kube-multus-ds-gscvc 1/1 Running 0 24h
kube-multus-ds-zjbtq 1/1 Running 0 24h
kube-sriov-device-plugin-amd64-64g76 1/1 Running 0 22h
kube-sriov-device-plugin-amd64-bljxd 1/1 Running 0 22h
# kubectl get network-attachment-definitions.k8s.cni.cncf.io
NAME AGE
multus-intel-sriov-nic-vsr 22h
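For reference, a minimal sketch of such a NetworkAttachmentDefinition using the sriov CNI plugin is shown below; the resourceName annotation is an assumption and must match the resource actually exposed by your SR-IOV device plugin configuration:
# cat <<'EOF' | kubectl apply -f -
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: multus-intel-sriov-nic-vsr
  annotations:
    k8s.v1.cni.cncf.io/resourceName: intel.com/intel_sriov_netdevice
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "sriov",
      "name": "multus-intel-sriov-nic-vsr",
      "ipam": {}
    }'
EOF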
See also
You can refer to the KubeVirt User Guide for detailed explanations.
Now deploy this Virtual Service Router VM into the Kubernetes cluster:
# kubectl apply -f vsr-vmi-deploy.yaml
virtualmachine.kubevirt.io/vsr created
Check that the Virtual Machine Instance is running:
# kubectl get vm
NAME AGE STATUS READY
vsr 13m Running True
# kubectl get vmi
NAME AGE PHASE IP NODENAME READY
vsr 14m Running 10.229.0.22 sd-135617 True
You can access the VM through the serial console.
# virtctl console vsr
Successfully connected to vsr console. The escape sequence is ^]
[...]
vsr login:
You can also access the VM through SSH. We assume an SSH key pair has been generated and is available.
Place the SSH public key into a Kubernetes Secret:
# kubectl create secret generic my-pub-key --from-file=key1=id_rsa.pub
secret/my-pub-key created
The Secret containing the public key is then assigned to the VM using the access credentials API with the noCloud propagation method. KubeVirt injects the SSH public key into the virtual machine by using the generated cloud-init metadata instead of the user data.
In the configuration file described previously (vsr-vmi-deploy.yaml), ensure
the accessCredentials section and cloudinitdisk volume are present.
Access the VM through SSH:
# virtctl ssh -i .ssh/id_rsa root@vsr
See also
You can refer to the KubeVirt User Guide for detailed explanations of the different ways to access the VM.
Applying this manifest to the cluster creates a virt-launcher pod running libvirt and QEMU. One pod is created for every VirtualMachine object. This pod’s primary container runs the virt-launcher KubeVirt component. The main purpose of the virt-launcher pod is to provide the cgroups and namespaces used to host the VM process. virt-launcher uses libvirtd to manage the life cycle of the VM process.
# kubectl get pod
NAME READY STATUS RESTARTS AGE
virt-launcher-vsr-wvmfv 3/3 Running 0 2d17h
# kubectl exec -it virt-launcher-vsr-wvmfv -- virsh list
Id Name State
-----------------------------
1 default_vsr running
# kubectl exec -it virt-launcher-vsr-wvmfv -- virsh dumpxml default_vsr
<domain type='kvm' id='1'>
<name>default_vsr</name>
<uuid>043a43c8-c94b-53d0-b1be-44448a0d4a8d</uuid>
<metadata>
<kubevirt xmlns="http://kubevirt.io">
<uid/>
</kubevirt>
</metadata>
[...]
Access the Virtual Service Router VM:
# virtctl console vsr
Apply the following basic configuration in nc-cli command mode:
# edit running
# system fast-path port pci-b6s0
# commit
Note
In the command above, replace the PCI port with the one suitable for your configuration.
Check that the Fast Path is running:
# show summary
Service Status
======= ======
[...]
fast-path enabled, 1 port, core-mask 2-3
Note
You can refer to the Virtual Service Router User Guide to implement the configuration.
Deploy Virtual Service Router VirtualMachineInstanceReplicaSet¶
A VirtualMachineInstanceReplicaSet tries to ensure that a specified number of VirtualMachineInstance replicas are running at any time. It is very similar to a Kubernetes ReplicaSet.
To deploy the VirtualMachineInstanceReplicaSet object, create the following
vsr-vmireplset-deploy.yaml file with this
content:
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceReplicaSet
metadata:
name: vsr-replicaset
spec:
replicas: 2
selector:
matchLabels:
myvmi: myvmi
template:
metadata:
name: vsr
labels:
myvmi: myvmi
spec:
domain:
devices:
disks:
- name: mydisk
disk: {}
- name: cloudinitdisk
disk: {}
interfaces:
- name: default
bridge: {}
- name: sriov
sriov: {}
resources:
requests:
memory: "8Gi"
cpu: "4"
limits:
memory: "8Gi"
cpu: "4"
networks:
- name: default
pod: {}
- name: sriov
multus:
networkName: multus-intel-sriov-nic-vsr
accessCredentials:
- sshPublicKey:
source:
secret:
secretName: my-pub-key
propagationMethod:
noCloud: {}
volumes:
- name: mydisk
containerDisk:
image: download.6wind.com/vsr/x86_64-ce/3.11:3.11.0.ga
- name: cloudinitdisk
cloudInitNoCloud:
userData: |-
#cloud-config
Now deploy this Virtual Service Router VirtualMachineInstanceReplicaSet into the Kubernetes cluster:
# kubectl apply -f vsr-vmireplset-deploy.yaml
virtualmachineinstancereplicaset.kubevirt.io/vsr-replicaset created
Applying this manifest to the cluster creates 2 VirtualMachineInstance replicas based on the VirtualMachineInstance configuration set previously.
# kubectl get vmirs
NAME DESIRED CURRENT READY AGE
vsr-replicaset 2 2 2 97s
# kubectl get vmi
NAME AGE PHASE IP NODENAME READY
vsr2fwsc 97s Running 10.229.0.24 sd-135617 True
vsrkfqbv 97s Running 10.229.0.25 sd-135617 True
# kubectl get pod
NAME READY STATUS RESTARTS AGE
virt-launcher-vsr2fwsc-wbvsm 3/3 Running 0 87s
virt-launcher-vsrkfqbv-dwqhh 3/3 Running 0 87s
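The number of replicas can be adjusted afterwards through the scale subresource, for example:
# kubectl scale vmirs vsr-replicaset --replicas=3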
Live Migration¶
Live Migration is a common virtualization feature supported by KubeVirt, whereby a VM running on one cluster node is moved to another cluster node without shutting down the guest OS or its applications.
Check the status of nodes in Kubernetes Cluster:
# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
sd-135617 Ready control-plane 6d3h v1.26.1 195.154.27.62 <none> Ubuntu 20.04.5 LTS 5.4.0-172-generic containerd://1.6.28
sd-135618 Ready <none> 6d2h v1.26.1 195.154.27.63 <none> Ubuntu 20.04.5 LTS 5.4.0-171-generic containerd://1.6.28
Note
For testing purposes, the cluster is configured so that pods can be scheduled on both nodes, including the control-plane node.
Live migration is enabled by default in this version of KubeVirt.
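If you need to confirm which feature gates are explicitly enabled in your deployment, you can inspect the KubeVirt custom resource (the list is empty when only the defaults are in use):
# kubectl -n kubevirt get kubevirt kubevirt -o jsonpath='{.spec.configuration.developerConfiguration.featureGates}'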
Deploy a Virtual Service Router VM by creating the following testvsr.yaml file with this content:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
name: "vsr"
spec:
runStrategy: Always
template:
metadata:
labels:
kubevirt.io/domain: vsr
role: vsr
spec:
domain:
devices:
disks:
- name: cdisk
disk: {}
- name: cloudinitdisk
disk: {}
interfaces:
- name: default
masquerade: {}
- name: sriov
sriov: {}
resources:
requests:
memory: "8Gi"
cpu: "4"
limits:
memory: "8Gi"
cpu: "4"
networks:
- name: default
pod: {}
- name: sriov
multus:
networkName: multus-intel-sriov-nic-vsr
default: false
accessCredentials:
- sshPublicKey:
source:
secret:
secretName: my-pub-key
propagationMethod:
noCloud: {}
volumes:
- name: cdisk
containerDisk:
image: download.6wind.com/vsr/x86_64-ce/3.11:3.11.0.ga
- name: cloudinitdisk
cloudInitNoCloud:
userData: |-
#cloud-config
Now apply this manifest to the Kubernetes cluster:
# kubectl apply -f testvsr.yaml
virtualmachine.kubevirt.io/vsr created
Warning
At the time of writing, live migration is not allowed with a pod network binding of bridge interface type. That is why the masquerade interface type is configured in the YAML manifest.
Notice the pod shows as running on node sd-135617:
# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
virt-launcher-vsr-z27cg 3/3 Running 0 51s 10.229.0.43 sd-135617 <none> 1/1
# kubectl get vmi
NAME AGE PHASE IP NODENAME READY
vsr 56s Running 10.229.0.43 sd-135617 True
For testing purposes and to demonstrate live migration, we set the hostname in the Virtual Service Router VM we just deployed into the cluster:
# virtctl console vsr
[...]
myvsr login: admin
Password:
[...]
myvsr>
Note
Here, we set the hostname to “myvsr”. You can refer to the Virtual Service Router product documentation for details on this configuration.
Then migrate the VM from one node, sd-135617, to the other, sd-135618:
# virtctl migrate vsr
VM vsr was scheduled to migrate
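Equivalently, the migration can be requested declaratively by creating a VirtualMachineInstanceMigration object (the object name below is arbitrary):
# cat <<'EOF' | kubectl apply -f -
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceMigration
metadata:
  name: migration-vsr
spec:
  vmiName: vsr
EOF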
Notice that the original virt-launcher pod has entered the Completed state and that the virtual machine is now running on node sd-135618:
# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
virt-launcher-vsr-m699n 3/3 Running 0 48s 10.229.1.25 sd-135618 <none> 1/1
virt-launcher-vsr-z27cg 0/3 Completed 0 4m12s 10.229.0.43 sd-135617 <none> 1/1
# kubectl get vmi
NAME AGE PHASE IP NODENAME READY
vsr 4m41s Running 10.229.1.25 sd-135618 True
Also notice that the migration was successful:
# kubectl get VirtualMachineInstanceMigration
NAME PHASE VMI
kubevirt-migrate-vm-jz4q8 Succeeded vsr
Access the Virtual Service Router VM again; it is now hosted on the other node, sd-135618:
# virtctl console vsr
[...]
myvsr login: admin
Password:
[...]
myvsr>
Note that access to the Virtual Service Router VM still works after the live migration and that the hostname configuration remains unchanged.