1.4.7. Starting a VM¶
This section describes how to run and manage VMs.
Hotplug a virtual port¶
VMs need virtual ports to communicate with the outside world. To create
such ports dynamically in the fast path, use the fp-vdev command. When the
VM starts, each virtual ring is polled by one fast path logical core
located on the same socket as the VM. If there are fewer fast path logical cores
than virtual rings, some fast path logical cores poll several virtual rings. The
number of virtual rings is configured in the VM using ethtool -L.
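For example, assuming the virtio interface appears as eth0 in the guest, the following command run inside the VM would configure 4 virtual rings:
# ethtool -L eth0 combined 4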
A hotplug port is removed when the fast path is restarted. To restore it, the port must be recreated, any configuration made on the associated FPVI interface must be reapplied, and VMs using this interface must be restarted as well.
Note
Hotplugging ports is the preferred method, but it is also possible to create virtual ports at fast path startup using the configuration wizard.
For instance, the command below creates a virtual port named tap0 that will
connect to the /tmp/pmd-vhost0 vhost-user socket.
# fp-vdev add tap0 --sockpath /tmp/pmd-vhost0
devargs:
sockmode: client
sockname: /tmp/pmd-vhost0
driver: pmd-vhost
ifname: tap0
rx_cores: all
Note
OpenStack Nova calls fp-vdev itself to create the needed ports, so they
should not be created manually.
Note
Make sure that the fast path has been started before you create hotplug
ports with fp-vdev commands.
See also
The 6WINDGate Fast Path Managing virtual devices documentation for more information about the fp-vdev command.
Libvirt configuration¶
It is recommended to use libvirt to start VMs with Virtual Accelerator.
Note
Please make sure that the fast path CPU isolation feature is disabled (see Configuration files).
XML domain file¶
Interfaces
The fast path virtual ports are compatible with the vhost-user virtio backend of QEMU.
Example of an interface compatible with a fast path virtual port:
<interface type='vhostuser'>
  <source type='unix' path='/tmp/pmd-vhost0' mode='server'/>
  <model type='virtio'/>
</interface>
Example of a 4-queue interface compatible with a fast path virtual port:
<interface type='vhostuser'>
  <source type='unix' path='/tmp/pmd-vhost0' mode='server'/>
  <model type='virtio'/>
  <driver queues='4'/>
</interface>
Hugepages
The VM should be started with memory allocated from hugepages, in shared mode.
Example of a VM with 1GB of memory using 2MB hugepages taken from NUMA node 0 of the host:
<cpu>
  <numa>
    <cell id="0" cpus="0" memory="1048576" memAccess="shared"/>
  </numa>
</cpu>
<numatune>
  <memory mode='strict' nodeset='0'/>
</numatune>
<memoryBacking>
  <hugepages>
    <page size="2048" unit="KiB"/>
  </hugepages>
</memoryBacking>
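For this example to work, enough 2MB hugepages must be available on NUMA node 0 of the host. A minimal sketch, reserving 512 pages (1GB) at runtime; the count should be adapted to your deployment:
# echo 512 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages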
See also
The libvirt XML documentation: https://libvirt.org/formatdomain.html
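Once the domain XML is complete, the VM can be defined and started with virsh in the usual way; mydomain.xml and myvm are placeholder names used here for illustration:
# virsh define mydomain.xml
# virsh start myvm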
Guest multiqueue configuration¶
The multiqueue feature allows the guest network performance to scale with the number of CPUs. To enable multiqueue in the guest configuration, add a script that automatically configures the right number of queues when an interface is created.
For CentOS-like distributions
Copy the file below to /sbin/ifup-pre-local:
#!/bin/bash
. /etc/init.d/functions

cd /etc/sysconfig/network-scripts
. ./network-functions
[ -f ../network ] && . ../network

CONFIG=${1}
need_config "${CONFIG}"
source_config

if [ -n "$DEVICE" ]; then
    # enable as many combined queues as the driver supports
    nb_queues=`ethtool -l $DEVICE | grep Combined: | awk '{print $2}' | head -n1`
    ethtool -L $DEVICE combined $nb_queues

    # configure tx queues: map one CPU to each tx queue via XPS
    nb_processor=`cat /proc/cpuinfo | grep processor | wc -l`
    nb_xps=$nb_processor
    if [ "$nb_queues" -lt $nb_xps ]; then
        nb_xps=$nb_queues
    fi
    last_xps=$(($nb_xps-1))
    for i in `seq 0 $last_xps`; do
        # xps_cpus expects a hexadecimal CPU bitmask
        mask_cpus=`printf '%x' $((1 << $i))`
        echo $mask_cpus > /sys/class/net/$DEVICE/queues/tx-$i/xps_cpus
    done
fi
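The network scripts only run /sbin/ifup-pre-local if it is executable; assuming it was copied to the path above:
# chmod +x /sbin/ifup-pre-local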
For Ubuntu-like distributions
Copy the file below to /etc/network/if-pre-up.d/enable_multiqueue:
#!/bin/sh
ETHTOOL=/sbin/ethtool
test -x $ETHTOOL || exit 0
[ "$IFACE" != "lo" ] || exit 0

# enable as many combined queues as the driver supports
nb_queues=`$ETHTOOL -l $IFACE | grep Combined: | awk '{print $2}' | head -n1`
$ETHTOOL -L $IFACE combined $nb_queues

# configure tx queues: map one CPU to each tx queue via XPS
nb_processor=`cat /proc/cpuinfo | grep processor | wc -l`
nb_xps=$nb_processor
if [ "$nb_queues" -lt $nb_xps ]; then
    nb_xps=$nb_queues
fi
last_xps=$(($nb_xps-1))
for i in `seq 0 $last_xps`; do
    # xps_cpus expects a hexadecimal CPU bitmask; POSIX arithmetic avoids bash-only 'let'
    mask_cpus=`printf '%x' $((1 << $i))`
    echo $mask_cpus > /sys/class/net/$IFACE/queues/tx-$i/xps_cpus
done
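Scripts in /etc/network/if-pre-up.d/ are only executed if they have the executable bit set; assuming the path above:
# chmod +x /etc/network/if-pre-up.d/enable_multiqueue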
Note
This script will be called for interfaces configured in the
/etc/network/interfaces file. It will be called when the networking service
is started, or when ifup <ifname> is called. Please check man interfaces
for more information.
OpenStack¶
Hugepages¶
Enable hugepages in the flavor of the VM:
# openstack flavor set --property hw:mem_page_size=large myflavor
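The property can be checked afterwards with the standard flavor show command; myflavor is the flavor name from the example above:
# openstack flavor show myflavor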
Note
If a VM is spawned without hugepages, its virtual ports will be created, but will not be functional.
Note
Hugepages must be enabled on the compute node.
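For instance, hugepage availability on the compute node can be checked as follows; allocating 2MB hugepages at runtime with sysctl is shown only as a sketch, the page size and count depend on the deployment:
# grep Huge /proc/meminfo
# sysctl -w vm.nr_hugepages=2048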
Multiqueue¶
This section explains how to spawn VMs with multiple queues. The multiqueue feature allows the guest network performance to scale with the number of CPUs.
Enable multiqueue in the VM’s image template:
# openstack image set --property hw_vif_multiqueue_enabled=true myimage
Note
The image template must have been prepared according to Guest multiqueue configuration.
Warning
hw_vif_multiqueue_enabled must be set to false if a single-queue VM is needed.
The VMs will be booted with a number of queues equal to the number of vCPUs.
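To check from inside the guest that the expected number of queues is available, ethtool can be used; eth0 is assumed here to be the virtio interface name:
# ethtool -l eth0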
CPU isolation¶
To make sure that Nova does not spawn VMs on the CPUs dedicated to the fast path, set the
vcpu_pin_set attribute in the DEFAULT section of /etc/nova/nova.conf.
# crudini --set /etc/nova/nova.conf DEFAULT vcpu_pin_set 0-1
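The Nova compute service must be restarted for the change to take effect. The service name varies with the distribution; openstack-nova-compute is assumed here (it is typically nova-compute on Ubuntu):
# systemctl restart openstack-nova-compute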