1.6.5. VM Creation

Libvirt
If you don’t have a VM ready, you can use a cloud image. The next steps explain how to use an Ubuntu Focal cloud image; you can skip them if you already have one.
Write cloud-init user-data and meta-data files to set the password and the hostname (change vm1 according to your needs).
root@host:~# cat << EOF > user-data
#cloud-config
ssh_pwauth: True
chpasswd:
  list: |
    ubuntu:ubuntu
  expire: False
EOF
root@host:~# cat << EOF > meta-data
instance-id: vm1
local-hostname: vm1
EOF
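If the cloud-init package happens to be installed on the host, you can optionally check the user-data syntax with its schema validator before going further (the exact invocation depends on your cloud-init version; older releases use cloud-init devel schema instead):

root@host:~# cloud-init schema --config-file user-data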
Build an ISO image containing the meta-data and user-data files and put it in the libvirt images directory.

root@host:~# yum install -y genisoimage
root@host:~# genisoimage -output seed.iso -volid cidata \
    -joliet -rock user-data meta-data
root@host:~# cp seed.iso /var/lib/libvirt/images/
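As a sanity check, you can verify that the ISO carries the cidata volume label that cloud-init looks for; isoinfo is shipped with the genisoimage package:

root@host:~# isoinfo -d -i /var/lib/libvirt/images/seed.iso | grep -i 'volume id'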
Download the latest Ubuntu cloud image into the libvirt images directory.
root@host:~# url=https://cloud-images.ubuntu.com/focal/current
root@host:~# curl $url/focal-server-cloudimg-amd64.img \
    -o /var/lib/libvirt/images/ubuntu-20.04.img
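Optionally, inspect the downloaded image with qemu-img, and grow its virtual disk before the first boot if you need more space (the 10G below is only an example value):

root@host:~# qemu-img info /var/lib/libvirt/images/ubuntu-20.04.img
root@host:~# qemu-img resize /var/lib/libvirt/images/ubuntu-20.04.img 10G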
Now that you have a VM template, you can start the virtual machine using libvirt.
Create an XML domain file named <your_vm_hostname>.xml (according to your VM hostname), containing at least:

- a libvirt domain name (<name>) adapted to the VM hostname
- a Virtio interface for management (<interface type='user'>)
- as many vhost-user interfaces (<interface type='vhostuser'>) as you have sockets displayed in the fast path configuration wizard
- two disks, one for the cloud-init ISO and one for the cloud image (<disk type='file' device='disk'>)
- serial port forwarding to port 10000 (<serial type='tcp'> and <console type='tcp'>)
- hugepages (<numa> and <hugepages>)
See also

The libvirt XML documentation: https://libvirt.org/formatdomain.html
The template for this file is the following:
<domain type='kvm'>
  <name>vm1</name> <!-- change the name according to your needs -->
  <memory>1048576</memory> <!-- adapt to the desired memory size -->
  <os>
    <type>hvm</type>
    <boot dev="hd"/>
  </os>
  <vcpu placement='static'>1</vcpu>
  <vcpupin vcpu='0' cpuset='0'/>
  <cpu>
    <numa>
      <!-- adapt to the desired memory size -->
      <cell id="0" cpus="0" memory="1048576" memAccess="shared"/>
    </numa>
  </cpu>
  <!--
  IN ORDER TO USE MULTIQUEUE, MANY CORES ARE NECESSARY
  REPLACE THE <vcpu>/<vcpupin>/<cpu> section by the following template where:
  - N is the desired number of vcpus
  - A, B, C are the desired host core IDs on which vcpus are pinned
  - E, F, G are other host core IDs on which emulator threads are pinned
  - N = X * Y * Z, for example <topology sockets='1' cores='2' threads='2'/> for 4 vcpus

  <vcpu placement='static'>N</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='A'/>
    <vcpupin vcpu='1' cpuset='B'/>
    ...
    <vcpupin vcpu='N' cpuset='C'/>
    <emulatorpin cpuset='E, F, G, ...'/>
  </cputune>
  <cpu>
    <topology sockets='X' cores='Y' threads='Z'/>
    <numa>
      <cell id="0" cpus="0-N" memory="1048576" memAccess="shared"/>
    </numa>
  </cpu>
  -->
  <numatune>
    <!-- adapt to set the host node where hugepages are taken -->
    <memory mode='strict' nodeset='0'/>
  </numatune>
  <memoryBacking>
    <hugepages>
      <page size="2048" unit="KiB"/>
    </hugepages>
  </memoryBacking>
  <features>
    <acpi/>
  </features>
  <devices>
    <disk type='file' device='disk'>
      <source file='/var/lib/libvirt/images/seed.iso'/>
      <target dev='vdb' bus='virtio'/>
    </disk>
    <disk type='file' device='disk'>
      <driver type='qcow2' cache='none'/>
      <source file='/var/lib/libvirt/images/ubuntu-20.04.img'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='user'>
      <model type='virtio'/>
    </interface>
    <!--
    INSERT HERE YOUR VHOST USER INTERFACES
    These interfaces are declared with the following template where:
    - N is the desired number of queues (optional, only in multiqueue cases)

    <interface type='vhostuser'>
      <source type='unix' path='/path/to/the/vhostuser/socket' mode='server'/>
      <model type='virtio'/>
      <driver queues='N'/>
    </interface>
    -->
    <serial type='tcp'>
      <source mode='bind' host='0.0.0.0' service='10000'/>
      <protocol type='raw'/>
      <target port='0'/>
      <alias name='serial0'/>
    </serial>
    <serial type='pty'>
      <source path='/dev/pts/4'/>
      <target port='1'/>
      <alias name='serial1'/>
    </serial>
    <console type='tcp'>
      <source mode='bind' host='0.0.0.0' service='10000'/>
      <protocol type='raw'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
  </devices>
</domain>
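Before starting the VM, you can check the file against the libvirt schemas with virt-xml-validate, which is shipped with libvirt (the exact package providing it varies by distribution):

root@host:~# virt-xml-validate <your_vm_hostname>.xml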
Start the virtual machine using the previously created XML file.
root@host:~# virsh create <your_vm_hostname>.xml
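You can verify that the domain is up with the usual virsh commands:

root@host:~# virsh list
root@host:~# virsh dominfo <your_vm_hostname>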
Forward the SSH port to local port 2222.
root@host:~# virsh qemu-monitor-command --hmp <your_vm_hostname> \
    'hostfwd_add ::2222-:22'
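To check that the redirection was taken into account, you can display the user-mode networking table from the QEMU monitor:

root@host:~# virsh qemu-monitor-command --hmp <your_vm_hostname> 'info usernet'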
OpenStack
You can use your own VM image or download a cloud image as described in the previous section.
See also
The Libvirt section of this guide
For this example, a cloud image is used.
Download the cloud image:
root@host:~# url=https://cloud-images.ubuntu.com/bionic/current
root@host:~# curl $url/bionic-server-cloudimg-amd64.img \
    -o /tmp/ubuntu-cloud.img
Create the glance image to be able to boot this VM in an OpenStack setup:
root@host:~# source /root/admin-openrc.sh
root@host:~# openstack image create "ubuntu-virtio" \
    --file /tmp/ubuntu-cloud.img \
    --disk-format qcow2 --container-format bare \
    --property hw_vif_multiqueue_enabled=false \
    --public
+---------------------------+--------------------------------------+
| Property                  | Value                                |
+---------------------------+--------------------------------------+
| checksum                  | bf0d07acee853f95ff676c95f28aec6e     |
| container_format          | bare                                 |
| created_at                | 2016-04-06T08:41:03Z                 |
| disk_format               | qcow2                                |
| hw_vif_multiqueue_enabled | false                                |
| id                        | f97506ae-7d64-465d-ac74-6effa039b3ec |
| min_disk                  | 0                                    |
| min_ram                   | 0                                    |
| name                      | ubuntu-virtio                        |
| owner                     | 3e3b085aa3ea445f8beabc369963d63b     |
| protected                 | False                                |
| size                      | 320471040                            |
| status                    | active                               |
| tags                      | []                                   |
| updated_at                | 2016-04-06T08:41:24Z                 |
| virtual_size              | None                                 |
| visibility                | public                               |
+---------------------------+--------------------------------------+
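You can confirm that the image is active and that the property was stored:

root@host:~# openstack image show ubuntu-virtio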
Note

hw_vif_multiqueue_enabled must be set to false if multiqueue is not enabled.
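Conversely, if you do want multiqueue, the property can be updated on an existing image, for example:

root@host:~# openstack image set --property hw_vif_multiqueue_enabled=true ubuntu-virtio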
Use a cloud-init file when booting a VM with nova
When booting the VM with nova, the --user-data option can be used to provide a local file. It is mainly used to pass a configuration file to cloud-init. The cloud-init service allows customizing a VM at boot time (for example, setting a password).

Example of cloud.cfg:

---
disable_root: false
ssh_pwauth: True
chpasswd:
  list: |
    root:root
  expire: False
runcmd:
  - sed -i -e '/^PermitRootLogin/s/^.*$/PermitRootLogin yes/' /etc/ssh/sshd_config
  - sed -i -e '/^#UseDNS/s/^.*$/UseDNS no/' /etc/ssh/sshd_config
  - sed -i -e '/^GSSAPIAuthentication/s/^.*$/GSSAPIAuthentication no/' /etc/ssh/sshd_config
  - restart ssh
  - systemctl restart ssh
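A malformed cloud.cfg can make cloud-init skip the configuration, so a quick way to catch indentation mistakes is to check that the file parses as YAML, for example with Python (this assumes python3 and PyYAML are available on the host):

root@host:~# python3 -c "import yaml; yaml.safe_load(open('cloud.cfg'))"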
Run the nova VM:
root@host:~# openstack network create private
root@host:~# openstack subnet create --network private \
    --subnet-range 11.0.0.0/24 private_subnet
root@host:~# openstack router create router
root@host:~# openstack router add subnet router private_subnet
root@host:~# openstack server create --flavor m1.vm_huge \
    --image ubuntu-virtio \
    --nic net-id=$(openstack network list --name private -c ID -f value) \
    --user-data cloud.cfg \
    vm1
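You can then follow the boot; the instance should reach the ACTIVE status, and the console log shows the cloud-init output:

root@host:~# openstack server list
root@host:~# openstack console log show vm1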
Connection to the VM
A serial console can be obtained using telnet on port 10000.

root@host:~# telnet 127.0.0.1 10000
Note
To find the serial console port of a running VM, use the following command:
root@host:~# ps ax | grep -e qemu -e port=
An SSH console is also available.
root@host:~# ssh -p 2222 ubuntu@127.0.0.1
Note
According to some reports, the cloud-init configuration that happens at first boot may sometimes fail to enforce the user and password set in the user-data file. In that case, you won’t be able to log in to your VM. You might need to patch the image file with an additional script: https://trickycloud.wordpress.com/2013/11/09/default-user-and-password-in-ubuntu-cloud-images/
Installation of iperf and iperf3
Once connected to the VM, install the iperf and iperf3 packages:
root@vm:~# apt-get install -y iperf iperf3
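As a quick functional test, you can start an iperf3 server inside the VM and connect to it from the host or from another VM (replace <vm_ip> with an address of the VM that is reachable from the client side):

root@vm:~# iperf3 -s

root@host:~# iperf3 -c <vm_ip> -t 10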