2.3.7. Run container using KubeVirt¶
KubeVirt is a virtualization add-on to Kubernetes. The aim is to provide a common ground for virtualization solutions on top of Kubernetes. KubeVirt enables virtual machines to be deployed, consumed, and managed by Kubernetes just like containers.
This section describes how to install KubeVirt on Kubernetes. We assume that a Kubernetes cluster is already installed.
It then illustrates the deployment of a Virtual Service Router VM with KubeVirt on this cluster. It has been tested with Kubernetes release v1.26 and containerd as the Container Runtime. The KubeVirt release tested is v1.1.1.
If you are already familiar with KubeVirt or if you already have KubeVirt installed, you may want to skip the KubeVirt installation procedure and focus on Deploy VSR Virtual Machine.
Note
To simplify the documentation, we assume that all commands are run by the root user.
Note
At the time of writing, the latest KubeVirt version is 1.1.1.
Installing KubeVirt on Kubernetes¶
Requirements¶
A Kubernetes cluster based on one of the three latest Kubernetes releases available at the time the KubeVirt release is made.
The Kubernetes API server must be started with --allow-privileged=true in order to run KubeVirt's privileged DaemonSet (a quick way to check this is shown below).
Note
KubeVirt is currently supported on the following container runtimes:
containerd
crio
Other container runtimes, which do not use virtualization features, should work too. However, the mentioned ones are the main target.
See also
The requirements from the KubeVirt documentation.
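To verify that the API server allows privileged pods, you can check its command line. The following is a sketch that assumes a kubeadm-based cluster, where the API server runs as a static pod labeled component=kube-apiserver in the kube-system namespace; adapt it to your own deployment. The output should contain --allow-privileged=true:
# kubectl -n kube-system get pod -l component=kube-apiserver -o yaml | grep allow-privileged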
KubeVirt Installation¶
Grab the latest version of KubeVirt:
# KUBEVIRT_VERSION=$(curl -s https://api.github.com/repos/kubevirt/kubevirt/releases/latest | awk -F '[ \t":]+' '/tag_name/ {print $3}')
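You can check the value that was retrieved; at the time of writing, it is:
# echo ${KUBEVIRT_VERSION}
v1.1.1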
Install the KUBEVIRT_VERSION release of the KubeVirt operator:
# kubectl create -f https://github.com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/kubevirt-operator.yaml
namespace/kubevirt created
customresourcedefinition.apiextensions.k8s.io/kubevirts.kubevirt.io created
priorityclass.scheduling.k8s.io/kubevirt-cluster-critical created
clusterrole.rbac.authorization.k8s.io/kubevirt.io:operator created
serviceaccount/kubevirt-operator created
role.rbac.authorization.k8s.io/kubevirt-operator created
rolebinding.rbac.authorization.k8s.io/kubevirt-operator-rolebinding created
clusterrole.rbac.authorization.k8s.io/kubevirt-operator created
clusterrolebinding.rbac.authorization.k8s.io/kubevirt-operator created
deployment.apps/virt-operator created
Create the KubeVirt Custom Resource (instance deployment request) which triggers the actual installation:
# kubectl create -f https://github.com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/kubevirt-cr.yaml
kubevirt.kubevirt.io/kubevirt created
Wait until all KubeVirt components are up:
# kubectl -n kubevirt wait kv kubevirt --for condition=Available
kubevirt.kubevirt.io/kubevirt condition met
Check that KubeVirt is fully deployed: the PHASE column of the KubeVirt Custom Resource should be set to “Deployed”:
# kubectl -n kubevirt get kubevirt
NAME AGE PHASE
kubevirt 4m41s Deployed
List the pods of the kubevirt namespace and check that they are in "Running" status:
# kubectl get pods -n kubevirt
NAME READY STATUS RESTARTS AGE
virt-api-d64b75fb-bpzkm 1/1 Running 0 10m
virt-api-d64b75fb-wt6mc 1/1 Running 0 10m
virt-controller-78585b67fc-b86pl 1/1 Running 0 10m
virt-controller-78585b67fc-c7dm8 1/1 Running 0 10m
virt-handler-cjqh8 1/1 Running 0 10m
virt-handler-j7867 1/1 Running 0 10m
virt-operator-6f5cd77cdc-gqchf 1/1 Running 0 15m
virt-operator-6f5cd77cdc-sfj2p 1/1 Running 0 15m
Install the KubeVirt client, virtctl:
# curl -Lo virtctl https://github.com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/virtctl-${KUBEVIRT_VERSION}-linux-amd64
# chmod +x virtctl
# mv virtctl $HOME/.local/bin
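You can check that virtctl works and matches the installed KubeVirt release (make sure $HOME/.local/bin is in your PATH); virtctl version prints the client and server versions:
# virtctl version
Client Version: [...]
Server Version: [...]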
See also
More information can be found in the KubeVirt documentation.
Deploy VSR Virtual Machine¶
In KubeVirt, every VirtualMachineInstance object represents a single running VM instance.
We use containerDisk as an ephemeral disk for the VM. containerDisk is a VM image that is stored as a container image in a container image registry. containerDisks are ephemeral storage devices that can be assigned to any number of active VirtualMachineInstances.
Inject a local Virtual Service Router VM image into a container image. We assume you have already downloaded the Virtual Service Router qcow2 image.
The qcow2 image is available at: https://download.6wind.com/products/vsr/x86_64/3.9/3.9.1/qcow2
# ls
6wind-vsr-x86_64-v3.9.1.qcow2
From the same directory where the qcow2 image is present:
# cat << EOF > Dockerfile
FROM scratch
ADD --chown=107:107 6wind-vsr-x86_64-v3.9.1.qcow2 /disk/
EOF
# docker build -t 195.154.27.62:5000/vsr:3.9.1 .
# docker images -a
REPOSITORY TAG IMAGE ID CREATED SIZE
195.154.27.62:5000/vsr 3.9.1 0d7677ba25ba 4 minutes ago 1.29GB
# docker push 195.154.27.62:5000/vsr:3.9.1
Note
In the Dockerfile above, 107 is the UID of the qemu user.
Note
For testing purposes, we use a local registry here to store the container image.
Note
Here, 195.154.27.62:5000 is the address:port of our local registry.
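If you do not already have a local registry, one can be started with the standard registry container image, for example:
# docker run -d -p 5000:5000 --restart=always --name registry registry:2
Note that if the registry is not secured with TLS, the container runtime on each node must be configured to trust this insecure registry.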
Now, we need to deploy the VirtualMachine object.
To do so, create the following vsr-vmi-deploy.yaml file with this content:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: "vsr"
spec:
  running: true
  template:
    metadata:
      labels:
        kubevirt.io/domain: vsr
    spec:
      domain:
        devices:
          disks:
            - name: cdisk
              disk: {}
            - name: cloudinitdisk
              disk: {}
          interfaces:
            - name: default
              bridge: {}
            - name: sriov
              sriov: {}
        resources:
          requests:
            memory: "8Gi"
            cpu: "4"
          limits:
            memory: "8Gi"
            cpu: "4"
      networks:
        - name: default
          pod: {}
        - name: sriov
          multus:
            networkName: multus-intel-sriov-nic-vsr
      accessCredentials:
        - sshPublicKey:
            source:
              secret:
                secretName: my-pub-key
            propagationMethod:
              noCloud: {}
      volumes:
        - name: cdisk
          containerDisk:
            image: 195.154.27.62:5000/vsr:3.9.1
        - name: cloudinitdisk
          cloudInitNoCloud:
            userData: |-
              #cloud-config
This configuration file declares a containerDisk with the Virtual Service Router image injected previously. It also connects the VirtualMachine to a secondary network using Multus. This assumes that Multus is installed across the Kubernetes cluster and that a corresponding NetworkAttachmentDefinition CRD was created. It also assumes that SR-IOV is installed and configured across the cluster.
# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
[...]
kube-multus-ds-gscvc 1/1 Running 0 24h
kube-multus-ds-zjbtq 1/1 Running 0 24h
kube-sriov-device-plugin-amd64-64g76 1/1 Running 0 22h
kube-sriov-device-plugin-amd64-bljxd 1/1 Running 0 22h
# kubectl get network-attachment-definitions.k8s.cni.cncf.io
NAME AGE
multus-intel-sriov-nic-vsr 22h
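For reference, such a NetworkAttachmentDefinition typically looks like the following sketch; the resource name annotation and the CNI configuration are assumptions that depend on your SR-IOV device plugin setup:
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: multus-intel-sriov-nic-vsr
  annotations:
    k8s.v1.cni.cncf.io/resourceName: intel.com/intel_sriov_netdevice
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "sriov",
      "name": "sriov-network",
      "ipam": {}
    }'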
See also
You can refer to the KubeVirt User Guide for detailed explanations.
Now deploy this Virtual Service Router VM into the Kubernetes cluster:
# kubectl apply -f vsr-vmi-deploy.yaml
virtualmachine.kubevirt.io/vsr created
Check that the Virtual Machine Instance is running:
# kubectl get vm
NAME AGE STATUS READY
vsr 13m Running True
# kubectl get vmi
NAME AGE PHASE IP NODENAME READY
vsr 14m Running 10.229.0.22 sd-135617 True
You can access the VM through the serial console.
# virtctl console vsr
Successfully connected to vsr console. The escape sequence is ^]
[...]
vsr login:
You can also access the VM through SSH. We assume an SSH key pair is generated and available.
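If you do not have a key pair yet, you can generate one, for example:
# ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ''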
Place the SSH public key into a Kubernetes Secret:
# kubectl create secret generic my-pub-key --from-file=key1=id_rsa.pub
secret/my-pub-key created
The Secret containing the public key is then assigned to the VM using the access credentials API with the noCloud propagation method. KubeVirt injects the SSH public key into the virtual machine by using the generated cloud-init metadata instead of the user data.
In the configuration file described previously (vsr-vmi-deploy.yaml), ensure the following lines are present:
      accessCredentials:
        - sshPublicKey:
            source:
              secret:
                secretName: my-pub-key
            propagationMethod:
              noCloud: {}
        - name: cloudinitdisk
          cloudInitNoCloud:
            userData: |-
              #cloud-config
Access the VM through ssh:
# virtctl ssh -i .ssh/id_rsa root@vsr
See also
You can refer to the KubeVirt User Guide for detailed explanations about the different ways to access the VM.
Applying this manifest to the cluster creates a virt-launcher container running libvirt and qemu. For every VirtualMachine object one pod is created. This pod’s primary container runs the virt-launcher KubeVirt component. The main purpose of the virt-launcher Pod is to provide the cgroups and namespaces which will be used to host the VM process. virt-launcher uses libvirtd to manage the life-cycle of the VM process.
# kubectl get pod
NAME READY STATUS RESTARTS AGE
virt-launcher-vsr-wvmfv 3/3 Running 0 2d17h
# kubectl exec -it virt-launcher-vsr-wvmfv -- virsh list
Id Name State
-----------------------------
1 default_vsr running
# kubectl exec -it virt-launcher-vsr-wvmfv -- virsh dumpxml default_vsr
<domain type='kvm' id='1'>
<name>default_vsr</name>
<uuid>043a43c8-c94b-53d0-b1be-44448a0d4a8d</uuid>
<metadata>
<kubevirt xmlns="http://kubevirt.io">
<uid/>
</kubevirt>
</metadata>
[...]
Access the Virtual Service Router VM:
# virtctl console vsr
Apply the following basic configuration in nc-cli command mode:
# edit running
# system fast-path port pci-b6s0
# commit
Note
In the command above, replace the pci port with the one suitable for your configuration.
Check that the Fast Path is running:
# show summary
Service Status
======= ======
[...]
fast-path enabled, 1 port, core-mask 2-3
Note
You can refer to the Virtual Service Router User Guide to implement the configuration.
Deploy Virtual Service Router VirtualMachineInstanceReplicaSet¶
A VirtualMachineInstanceReplicaSet ensures that a specified number of VirtualMachineInstance replicas are running at any time. It is very similar to a Kubernetes ReplicaSet.
To deploy the VirtualMachineInstanceReplicaSet object, create the following vsr-vmireplset-deploy.yaml file with this content:
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceReplicaSet
metadata:
  name: vsr-replicaset
spec:
  replicas: 2
  selector:
    matchLabels:
      myvmi: myvmi
  template:
    metadata:
      name: vsr
      labels:
        myvmi: myvmi
    spec:
      domain:
        devices:
          disks:
            - name: mydisk
              disk: {}
            - name: cloudinitdisk
              disk: {}
          interfaces:
            - name: default
              bridge: {}
            - name: sriov
              sriov: {}
        resources:
          requests:
            memory: "8Gi"
            cpu: "4"
          limits:
            memory: "8Gi"
            cpu: "4"
      networks:
        - name: default
          pod: {}
        - name: sriov
          multus:
            networkName: multus-intel-sriov-nic-vsr
      accessCredentials:
        - sshPublicKey:
            source:
              secret:
                secretName: my-pub-key
            propagationMethod:
              noCloud: {}
      volumes:
        - name: mydisk
          containerDisk:
            image: 195.154.27.62:5000/vsr:3.9.1
        - name: cloudinitdisk
          cloudInitNoCloud:
            userData: |-
              #cloud-config
Now deploy this Virtual Service Router VirtualMachineInstanceReplicaSet into the Kubernetes cluster:
# kubectl apply -f vsr-vmireplset-deploy.yaml
virtualmachineinstancereplicaset.kubevirt.io/vsr-replicaset created
Applying this manifest to the cluster creates 2 VirtualMachineInstance replicas based on the VirtualMachineInstance configuration set previously.
# kubectl get vmirs
NAME DESIRED CURRENT READY AGE
vsr-replicaset 2 2 2 97s
# kubectl get vmi
NAME AGE PHASE IP NODENAME READY
vsr2fwsc 97s Running 10.229.0.24 sd-135617 True
vsrkfqbv 97s Running 10.229.0.25 sd-135617 True
# kubectl get pod
NAME READY STATUS RESTARTS AGE
virt-launcher-vsr2fwsc-wbvsm 3/3 Running 0 87s
virt-launcher-vsrkfqbv-dwqhh 3/3 Running 0 87s
Live Migration¶
Live Migration is a common virtualization feature supported by KubeVirt where VMs running on one cluster node move to another cluster node without shutting down the guest OS or its applications.
Check the status of nodes in Kubernetes Cluster:
# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
sd-135617 Ready control-plane 6d3h v1.26.1 195.154.27.62 <none> Ubuntu 20.04.5 LTS 5.4.0-172-generic containerd://1.6.28
sd-135618 Ready <none> 6d2h v1.26.1 195.154.27.63 <none> Ubuntu 20.04.5 LTS 5.4.0-171-generic containerd://1.6.28
Note
For testing purposes, the cluster is configured so that pods can be scheduled on both nodes, including the control-plane node.
Live migration is enabled by default in this version of KubeVirt.
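If you want to verify which feature gates are explicitly enabled in the KubeVirt custom resource (the list may be empty, as live migration does not require a feature gate in this release), you can run, for example:
# kubectl -n kubevirt get kubevirt kubevirt -o jsonpath='{.spec.configuration.developerConfiguration.featureGates}'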
Deploy a Virtual Service Router VM by creating the following testvsr.yaml file with this content:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: "vsr"
spec:
  running: true
  template:
    metadata:
      labels:
        kubevirt.io/domain: vsr
    spec:
      domain:
        devices:
          disks:
            - name: cdisk
              disk: {}
            - name: cloudinitdisk
              disk: {}
          interfaces:
            - name: default
              masquerade: {}
            - name: sriov
              sriov: {}
        resources:
          requests:
            memory: "8Gi"
            cpu: "4"
          limits:
            memory: "8Gi"
            cpu: "4"
      networks:
        - name: default
          pod: {}
        - name: sriov
          multus:
            networkName: multus-intel-sriov-nic-vsr
      accessCredentials:
        - sshPublicKey:
            source:
              secret:
                secretName: my-pub-key
            propagationMethod:
              noCloud: {}
      volumes:
        - name: cdisk
          containerDisk:
            image: 195.154.27.62:5000/vsr:3.9.1
        - name: cloudinitdisk
          cloudInitNoCloud:
            userData: |-
              #cloud-config
Now apply this manifest to the Kubernetes cluster:
# kubectl apply -f testvsr.yaml
virtualmachine.kubevirt.io/vsr created
Warning
At the time of writing, live migration is not allowed with a pod network binding of bridge interface type. This is why the masquerade interface type is configured in the YAML manifest above.
Notice the pod shows as running on node sd-135617:
# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
virt-launcher-vsr-z27cg 3/3 Running 0 51s 10.229.0.43 sd-135617 <none> 1/1
# kubectl get vmi
NAME AGE PHASE IP NODENAME READY
vsr 56s Running 10.229.0.43 sd-135617 True
For testing purposes and to demonstrate Live Migration, we set the hostname in the Virtual Service Router VM we just deployed into the cluster:
# virtctl console vsr
[...]
myvsr login: admin
Password:
[...]
myvsr>
Note
Here, we set the hostname to "myvsr". You can refer to the Virtual Service Router product documentation for the hostname configuration.
Then migrate the VM from one node, sd-135617 to the other sd-135618:
# virtctl migrate vsr
VM vsr was scheduled to migrate
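The migration progress can also be followed in the VirtualMachineInstance status, for example:
# kubectl get vmi vsr -o jsonpath='{.status.migrationState}'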
Notice that the original virt-launcher pod has entered the Completed state and that the virtual machine is now running on the node sd-135618:
# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
virt-launcher-vsr-m699n 3/3 Running 0 48s 10.229.1.25 sd-135618 <none> 1/1
virt-launcher-vsr-z27cg 0/3 Completed 0 4m12s 10.229.0.43 sd-135617 <none> 1/1
# kubectl get vmi
NAME AGE PHASE IP NODENAME READY
vsr 4m41s Running 10.229.1.25 sd-135618 True
Also notice the migration is successful:
# kubectl get VirtualMachineInstanceMigration
NAME PHASE VMI
kubevirt-migrate-vm-jz4q8 Succeeded vsr
Access the Virtual Service Router VM again, now hosted on the other node, sd-135618:
# virtctl console vsr
[...]
myvsr login: admin
Password:
[...]
myvsr>
Note that access to the Virtual Service Router VM keeps working after the Live Migration and that the hostname configuration remains unchanged.
Containerized Data Importer¶
The Containerized Data Importer (CDI) provides facilities for enabling Persistent Volume Claims (PVCs) to be used as disks for KubeVirt VMs by way of DataVolumes.
Here, we use CDI to import the qcow2 Virtual Service Router VM disk image from the 6WIND download server and use it through a DataVolume during the Virtual Service Router VM deployment into the Kubernetes cluster.
Install the latest CDI release:
# export TAG=$(curl -s -w %{redirect_url} https://github.com/kubevirt/containerized-data-importer/releases/latest)
# export VERSION=$(echo ${TAG##*/})
# kubectl create -f https://github.com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-operator.yaml
namespace/cdi created
customresourcedefinition.apiextensions.k8s.io/cdis.cdi.kubevirt.io created
clusterrole.rbac.authorization.k8s.io/cdi-operator-cluster created
clusterrolebinding.rbac.authorization.k8s.io/cdi-operator created
serviceaccount/cdi-operator created
role.rbac.authorization.k8s.io/cdi-operator created
rolebinding.rbac.authorization.k8s.io/cdi-operator created
deployment.apps/cdi-operator created
# kubectl create -f https://github.com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-cr.yaml
cdi.cdi.kubevirt.io/cdi created
# kubectl get pod -n cdi
NAME READY STATUS RESTARTS AGE
cdi-apiserver-6879bb9874-dkvj2 1/1 Running 0 32s
cdi-deployment-68959d6746-cjhx9 1/1 Running 0 44s
cdi-operator-6c68847f4b-dz27d 1/1 Running 0 2m43s
cdi-uploadproxy-58479cc494-t2d78 1/1 Running 0 40s
# kubectl get cdi -n cdi
NAME AGE PHASE
cdi 3m36s Deployed
Import the Virtual Service Router disk image from the 6WIND download server. A DataVolume is used both to perform the import and to store the imported image.
To do so, create the following dv_vsr.yaml manifest with this content:
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: vsr-image
  namespace: default
spec:
  pvc:
    accessModes:
      - ReadWriteMany
    resources:
      requests:
        storage: 32Gi
    storageClassName: nfs-client
  source:
    http:
      url: https://download.6wind.com/products/vsr/x86_64/3.9/3.9.1/qcow2/6wind-vsr-x86_64-v3.9.1.qcow2
      extraHeaders:
        - "Authorization: Basic xxx"
Use this manifest to import a Virtual Service Router disk image into a PersistentVolumeClaim (PVC). In our cluster, dynamic PVC provisioning is enabled using an NFS StorageClass.
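You can list the storage classes available in your cluster; the nfs-client class used above should appear in the output:
# kubectl get storageclass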
In this manifest, in the source section and more precisely in the extraHeaders field, replace "xxx" with "login:password" encoded in base64:
  source:
    http:
      url: https://download.6wind.com/products/vsr/x86_64/3.9/3.9.1/qcow2/6wind-vsr-x86_64-v3.9.1.qcow2
      extraHeaders:
        - "Authorization: Basic xxx"
Warning
Replace login and password with the credentials provided by 6WIND support.
Note
You can refer to the Kubernetes documentation regarding PersistentVolume, PersistentVolumeClaim and StorageClass.
See also
You can refer to the KubeVirt User Guide for detailed explanations about DataVolumes.
Apply the manifest:
# kubectl apply -f dv_vsr.yaml
datavolume.cdi.kubevirt.io/vsr-image created
The status of the import can be followed:
# kubectl get --watch dv
NAME PHASE PROGRESS RESTARTS AGE
vsr-image ImportInProgress 9.01% 1 35s
vsr-image ImportInProgress 12.01% 1 36s
vsr-image ImportInProgress 15.02% 1 38s
[...]
vsr-image ImportInProgress 99.21% 1 89s
vsr-image ImportInProgress 99.21% 1 90s
vsr-image Succeeded 100.0% 1 92s
# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
vsr-image Bound pvc-7ed557cc-a7f4-4063-b6fe-67bf277b58e3 32Gi RWX nfs-client 5m17s
Note
The dynamically provisioned PVC has the same name as the related DataVolume.
Deploy a Virtual Service Router VirtualMachineInstanceReplicaSet using the DataVolume just deployed.
To do so, create the following vsr-vmireplset-pvc.yaml manifest with this content:
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceReplicaSet
metadata:
  name: vsr-replicaset-bis
spec:
  replicas: 2
  selector:
    matchLabels:
      myrsvmi: myrsvmi
  template:
    metadata:
      name: vsr-bis
      labels:
        myrsvmi: myrsvmi
    spec:
      domain:
        devices:
          disks:
            - name: mydisk
              disk: {}
            - name: cloudinitdisk
              disk: {}
          interfaces:
            - name: default
              bridge: {}
            - name: sriov
              sriov: {}
        resources:
          requests:
            memory: "8Gi"
            cpu: "4"
          limits:
            memory: "8Gi"
            cpu: "4"
      networks:
        - name: default
          pod: {}
        - name: sriov
          multus:
            networkName: multus-intel-sriov-nic-vsr
      accessCredentials:
        - sshPublicKey:
            source:
              secret:
                secretName: my-pub-key
            propagationMethod:
              noCloud: {}
      volumes:
        - name: mydisk
          ephemeral:
            persistentVolumeClaim:
              claimName: vsr-image
        - name: cloudinitdisk
          cloudInitNoCloud:
            userData: |-
              #cloud-config
The main point to note in this configuration file is the use of the DataVolume deployed previously: the disk image is provided through the PVC, used as an ephemeral volume.
      volumes:
        - name: mydisk
          ephemeral:
            persistentVolumeClaim:
              claimName: vsr-image
Now deploy this Virtual Service Router VirtualMachineInstanceReplicaSet into the Kubernetes cluster:
# kubectl apply -f vsr-vmireplset-pvc.yaml
virtualmachineinstancereplicaset.kubevirt.io/vsr-replicaset-bis created
Applying this manifest to the cluster creates 2 VirtualMachineInstance replicas.
# kubectl get vmirs
NAME DESIRED CURRENT READY AGE
vsr-replicaset-bis 2 2 2 61s
# kubectl get vmi
NAME AGE PHASE IP NODENAME READY
vsr-bish9ljn 118s Running 10.229.1.37 sd-135618 True
vsr-bisql9sr 118s Running 10.229.0.73 sd-135617 True
# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
virt-launcher-vsr-bish9ljn-dpgdk 2/2 Running 0 3m28s 10.229.1.37 sd-135618 <none> 1/1
virt-launcher-vsr-bisql9sr-rtm82 2/2 Running 0 3m28s 10.229.0.73 sd-135617 <none> 1/1
You can scale out to have more replicas:
# kubectl scale vmirs/vsr-replicaset-bis --replicas=3
virtualmachineinstancereplicaset.kubevirt.io/vsr-replicaset-bis scaled
Notice that we now have one more replica, for a total of 3:
# kubectl get vmirs
NAME DESIRED CURRENT READY AGE
vsr-replicaset-bis 3 3 3 9m42s
# kubectl get vmi
NAME AGE PHASE IP NODENAME READY
vsr-bish9ljn 13m Running 10.229.1.37 sd-135618 True
vsr-bisql9sr 13m Running 10.229.0.73 sd-135617 True
vsr-bisqvx6p 7m5s Running 10.229.0.74 sd-135617 True
# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
virt-launcher-vsr-bish9ljn-dpgdk 2/2 Running 0 14m 10.229.1.37 sd-135618 <none> 1/1
virt-launcher-vsr-bisql9sr-rtm82 2/2 Running 0 14m 10.229.0.73 sd-135617 <none> 1/1
virt-launcher-vsr-bisqvx6p-hk6tf 2/2 Running 0 8m27s 10.229.0.74 sd-135617 <none> 1/1
In the same way, you can scale down to have fewer replicas:
# kubectl scale vmirs/vsr-replicaset-bis --replicas=1
virtualmachineinstancereplicaset.kubevirt.io/vsr-replicaset-bis scaled
Notice that we are now down to a single replica:
# kubectl get vmirs
NAME DESIRED CURRENT READY AGE
vsr-replicaset-bis 1 1 1 18m
# kubectl get vmi
NAME AGE PHASE IP NODENAME READY
vsr-bisqvx6p 12m Running 10.229.0.74 sd-135617 True
# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
virt-launcher-vsr-bisqvx6p-hk6tf 2/2 Running 0 13m 10.229.0.74 sd-135617 <none> 1/1