1.6.3. Two VMs, 4 logical cores, Virtio multiqueue NICs, offloading enabled, with Open vSwitch¶
Two VMs on different hypervisors exchange traffic on a private subnet. Each hypervisor uses Open vSwitch bridges to connect its physical interface and the VM virtual interface. Each VM runs on two physical cores (four logical cores). Virtual Accelerator runs on two physical cores too (logical cores 10, 11, 22, 23).
An iperf TCP session is started between the two spawned VMs.
We use the following prompts in this section:
root@hosts:~# #=> all hosts
root@vms:~# #=> all vms
root@host1:~# #=> host1
root@host2:~# #=> host2
root@vm-host1:~# #=> VM on host1
root@vm-host2:~# #=> VM on host2
In this use case, both physical machines are identical.
Virtual Accelerator configuration¶
Show the default fast path configuration:
root@hosts:~# fast-path.sh config
Info: no configuration file /etc/fast-path.env for fast-path.sh, using defaults
Configuring Fast Path...
Fast path configuration info
============================

Selected ethernet card
----------------------
Intel Corporation Ethernet 10G 2P X520 Adapter (rev 01) PCI card (not mounted on any eth) with cores 11,23
Intel Corporation Ethernet 10G 2P X520 Adapter (rev 01) PCI card mounted on eth4 with cores 11,23
Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 (rev 01) PCI card mounted on eth5 with cores 11,23
Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 (rev 01) PCI card mounted on eth6 with cores 11,23
The logical cores 11 and 23 are located on the first socket; they poll the ports located on the first socket.
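To check which logical cores belong to which socket (and which ones are hyperthread siblings), you can inspect the CPU topology with the standard lscpu tool; the core numbering used in this section is specific to our machines:

root@hosts:~# lscpu -e=CPU,CORE,SOCKET

Logical cores 11 and 23 should report the same socket (and, on this hardware, the same physical core).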
If you are connected to the machine through a network interface supported by the fast path (for instance over SSH), you should prevent the fast path from taking control of that port; it would delete the port's network configuration and break the connection.
root@hosts:~# ethtool -i mgmt0
[...]
bus-info: 0000:01:00.0
[...]
root@hosts:~# FP_PORTS="all -0000:01:00.0" fast-path.sh config --update
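As a quick sanity check, you can display the configuration again: with the exclusion above, no card should be reported as mounted on mgmt0 anymore.

root@hosts:~# fast-path.sh config | grep mgmt0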
Update /etc/fast-path.env in order to use 4 cores:

root@hosts:~# FP_MASK="10,11,22,23" fast-path.sh config --update
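To verify that the new core mask was taken into account, you can look it up in the generated file (assuming the default /etc/fast-path.env location):

root@hosts:~# grep FP_MASK /etc/fast-path.env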
libvirt does not support the cpuset isolation feature; it has to be disabled in /etc/cpuset.env:

-#: ${CPUSET_ENABLE:=1}
+: ${CPUSET_ENABLE:=0}
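One way to apply this change is a sed one-liner (a sketch; adjust the pattern if your cpuset.env differs):

root@hosts:~# sed -i 's/^#\?: ${CPUSET_ENABLE:=1}/: ${CPUSET_ENABLE:=0}/' /etc/cpuset.env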
Start Virtual Accelerator:
root@hosts:~# systemctl start virtual-accelerator.target
Restart the Open vSwitch control plane:
root@hosts:~# systemctl restart openvswitch
The hugepages are allocated by Virtual Accelerator at startup and libvirt cannot detect them dynamically. libvirt must be restarted to take the hugepages into account.
root@hosts:~# systemctl restart libvirtd.service
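You can check that the hugepages allocated by Virtual Accelerator are visible to the system (page counts depend on your configuration):

root@hosts:~# grep -i huge /proc/meminfo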
Warning
If you restart Virtual Accelerator, you must restart openvswitch and libvirt (and its VMs) as well.
Linux configuration¶
VM configuration
Add a new virtio interface on each host to communicate with the VM:
See also
The Hotplug a virtual port section of the guide.
The sockpath argument will be used in the libvirt XML file later.

These new ports will be polled by fast path logical cores located on the same socket as the VM. The number of fast path logical cores polling a port depends on the number of virtual rings: each virtual ring is polled by one fast path logical core, and if there are fewer fast path logical cores than virtual rings, some logical cores poll several virtual rings. The number of virtual rings is configured in the VM using ethtool -L.

root@hosts:~# fp-vdev add tap0 --sockpath=/tmp/pmd-vhost0
devargs:
  profile: endpoint
  sockmode: client
  sockname: /tmp/pmd-vhost0
  txhash: l3l4
  verbose: 0
driver: pmd-vhost
ifname: tap0
rx_cores: all
root@hosts:~# ip link set dev tap0 up
Note

Make sure that the fast path has been started before you create hotplug ports with fp-vdev commands.

See also

The 6WINDGate Fast Path Managing virtual devices documentation for more information about the fp-vdev command.

Create the VMs configuration files using libvirt:
Create the vm-host1.xml file on host1 and the vm-host2.xml file on host2 using the VM Creation section of this guide.

Launch the VMs:
On host1:

root@host1:~# virsh create vm-host1.xml

On host2:

root@host2:~# virsh create vm-host2.xml
Then, connect to the VMs:
See also
The Connection to the VM section of this guide.
root@hosts:~# telnet 127.0.0.1 10000
Enable multiqueue on VMs:
root@vms:~# ethtool -L eth0 combined 4
root@vms:~# ethtool -l eth0
Channel parameters for eth0:
Pre-set maximums:
RX:             0
TX:             0
Other:          0
Combined:       4
Current hardware settings:
RX:             0
TX:             0
Other:          0
Combined:       4
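You can also confirm from within the VMs that the virtio driver exposes four queue pairs; sysfs should list rx-0 to rx-3 and tx-0 to tx-3:

root@vms:~# ls /sys/class/net/eth0/queues/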
Then check, on the hosts, the number of queues and their mapping to fast path cores:
root@hosts:~# fp-cli dpdk-core-port-mapping
port 0: (rte_ixgbe_pmd) nb_rxq=4 nb_txq=4 rxq_shared=0 txq_shared=0
  rxq0=c10 rxq1=c11 rxq2=c22 rxq3=c23
  txq0=c10 txq1=c11 txq2=c22 txq3=c23
port 1: eth4 (rte_ixgbe_pmd) nb_rxq=4 nb_txq=4 rxq_shared=0 txq_shared=0
  rxq0=c10 rxq1=c11 rxq2=c22 rxq3=c23
  txq0=c10 txq1=c11 txq2=c22 txq3=c23
port 2: eth5 (rte_ixgbe_pmd) nb_rxq=4 nb_txq=4 rxq_shared=0 txq_shared=0
  rxq0=c10 rxq1=c11 rxq2=c22 rxq3=c23
  txq0=c10 txq1=c11 txq2=c22 txq3=c23
port 3: eth6 (rte_ixgbe_pmd) nb_rxq=4 nb_txq=4 rxq_shared=0 txq_shared=0
  rxq0=c10 rxq1=c11 rxq2=c22 rxq3=c23
  txq0=c10 txq1=c11 txq2=c22 txq3=c23
Networking setup:
Specific networking setup on host1:

root@host1:~# ip link set dev eth4 up
root@host1:~# ip addr add 192.168.0.1/24 dev eth4

Specific networking setup on host2:

root@host2:~# ip link set dev eth4 up
root@host2:~# ip addr add 192.168.0.2/24 dev eth4
Open vSwitch setup on all hosts:
root@hosts:~# ovs-vsctl add-br br0
root@hosts:~# ovs-vsctl add-port br0 tap0
root@hosts:~# ovs-vsctl add-port br0 eth4
root@hosts:~# ip link set dev br0 up
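To verify the resulting bridge layout, dump the Open vSwitch configuration; both tap0 and eth4 should appear as ports of br0 (UUIDs will differ):

root@hosts:~# ovs-vsctl show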
Specific networking setup on vm-host1:

root@vm-host1:~# ip link set dev eth0 up
root@vm-host1:~# ip addr add 10.23.0.1/24 dev eth0

Specific networking setup on vm-host2:

root@vm-host2:~# ip link set dev eth0 up
root@vm-host2:~# ip addr add 10.23.0.2/24 dev eth0
Testing¶
Check the connectivity between vm-host1 and vm-host2:

root@vm-host1:~# ping 10.23.0.2 -i 0.01 -c 1000
[...]
64 bytes from 10.23.0.2: icmp_seq=998 ttl=64 time=0.095 ms
64 bytes from 10.23.0.2: icmp_seq=999 ttl=64 time=0.095 ms
64 bytes from 10.23.0.2: icmp_seq=1000 ttl=64 time=0.096 ms

--- 10.23.0.2 ping statistics ---
1000 packets transmitted, 1000 received, 0% packet loss, time 9989ms
rtt min/avg/max/mdev = 0.090/0.095/0.873/0.026 ms

root@host1:~# fp-cli stats
==== interface stats:
lo-vr0 port:254
mgmt0-vr0 port:254
eth1-vr0 port:254
eth2-vr0 port:254
eth3-vr0 port:254
fpn0-vr0 port:254
eth4-vr0 port:1
eth5-vr0 port:2
eth6-vr0 port:3
eth0-vr0 port:0
tap0-vr0 port:4
ovs-system-vr0 port:254
br0-vr0 port:254
==== global stats:
==== exception stats:
  LocalBasicExceptions:4
  LocalExceptionClass:
    FPTUN_EXC_SP_FUNC:4
  LocalExceptionType:
    FPTUN_BASIC_EXCEPT:4
==== IPv4 stats:
==== arp stats:
==== IPv6 stats:
==== L2 stats:
==== fp-vswitch stats:
  flow_not_found:4
  output_ok:1998
Note
The flow_not_found statistic is incremented for the first packets of each flow, which are sent to Linux because they do not match any known flow in the fast path. Linux receives the packets and delivers them to the ovs-vswitchd daemon (this is the standard Linux processing). The daemon creates a flow in the OVS kernel data plane. The flow is automatically synchronized to the fast path, and the next packets of the flow are processed by the fast path.
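You can observe this behavior by combining commands already used in this section: reset the statistics, send a short burst of traffic, and check that flow_not_found accounts only for the first packets of the flow:

root@host1:~# fp-cli stats-reset
root@vm-host1:~# ping 10.23.0.2 -c 100
root@host1:~# fp-cli stats | grep flow_not_found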
During traffic, you can check that the flows are available in the kernel on each host:
root@host1:~# ovs-dpctl dump-flows
in_port(2),eth(src=52:54:00:50:87:e5,dst=52:54:00:d5:75:6e),eth_type(0x0800),ipv4(frag=no), packets:191535, bytes:12341830191, used:0.000s, actions:3
in_port(3),eth(src=52:54:00:d5:75:6e,dst=52:54:00:50:87:e5),eth_type(0x0800),ipv4(frag=no), packets:240937, bytes:15983383, used:0.000s, actions:2
root@host2:~# ovs-dpctl dump-flows
in_port(2),eth(src=52:54:00:d5:75:6e,dst=52:54:00:50:87:e5),eth_type(0x0800),ipv4(frag=no), packets:253909, bytes:16840441, used:0.000s, actions:3
in_port(3),eth(src=52:54:00:50:87:e5,dst=52:54:00:d5:75:6e),eth_type(0x0800),ipv4(frag=no), packets:421568, bytes:12983559073, used:0.000s, actions:2
Reset the fast path statistics first on each host:
root@hosts:~# fp-cli stats-reset
Start the iperf server on vm-host2:

root@vm-host2:~# iperf3 -s
Start the iperf TCP client on vm-host1:

root@vm-host1:~# iperf3 -c 10.23.0.2
Connecting to host 10.23.0.2, port 5201
[  4] local 10.23.0.1 port 60743 connected to 10.23.0.2 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec  1.10 GBytes  9.44 Gbits/sec  155   1.12 MBytes
[  4]   1.00-2.00   sec  1.10 GBytes  9.41 Gbits/sec   90   1.14 MBytes
[  4]   2.00-3.00   sec  1.09 GBytes  9.41 Gbits/sec   90   1.16 MBytes
[  4]   3.00-4.00   sec  1.10 GBytes  9.42 Gbits/sec  122   1.20 MBytes
[  4]   4.00-5.00   sec  1.10 GBytes  9.42 Gbits/sec  113   1.07 MBytes
[  4]   5.00-6.00   sec  1.10 GBytes  9.42 Gbits/sec   90   1.12 MBytes
[  4]   6.00-7.00   sec  1.10 GBytes  9.41 Gbits/sec   90   1.14 MBytes
[  4]   7.00-8.00   sec  1.10 GBytes  9.41 Gbits/sec  112   1020 KBytes
[  4]   8.00-9.00   sec  1.09 GBytes  9.37 Gbits/sec  737    885 KBytes
[  4]   9.00-10.00  sec  1.10 GBytes  9.42 Gbits/sec  127    901 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  11.0 GBytes  9.41 Gbits/sec  1726             sender
[  4]   0.00-10.00  sec  11.0 GBytes  9.41 Gbits/sec                 receiver

iperf Done.
During traffic, you can check the fast path CPU usage on each host:
root@host1:~# fp-cpu-usage
Fast path CPU usage:
cpu: %busy     cycles   cycles/packet
 10:    9%   50198333           13794
 11:    1%    6972454            3309
 22:    8%   46654144           20855
 23:    9%   51496212            9958
root@host2:~# fp-cpu-usage
Fast path CPU usage:
cpu: %busy     cycles   cycles/packet
 10:   11%   60073363            2517
 11:   12%   64039855            2635
 22:   24%  128062956            2264
 23:   30%  157335765            2344
The important point is that all the cores are busy: the traffic is spread across all of them, even though the distribution is uneven in our case.
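This unevenness is expected: queue selection depends on flow hashing (txhash: l3l4 in the fp-vdev output above), so a single TCP flow cannot be balanced perfectly. Running several parallel streams, for instance with the standard iperf3 -P option, usually evens out the distribution:

root@vm-host1:~# iperf3 -c 10.23.0.2 -P 4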
After traffic, you can check the fast path statistics:
root@hosts:~# fp-cli stats
==== interface stats:
lo-vr0 port:254
mgmt0-vr0 port:254
eth1-vr0 port:254
eth2-vr0 port:254
eth3-vr0 port:254
fpn0-vr0 port:254
eth4-vr0 port:1
eth5-vr0 port:2
eth6-vr0 port:3
eth0-vr0 port:0
tap0-vr0 port:4
ovs-system-vr0 port:254
br0-vr0 port:254
  ifs_ipackets:22
  ifs_ibytes:2013
==== global stats:
==== exception stats:
  LocalBasicExceptions:51
  LocalFPTunExceptions:22
  LocalExceptionClass:
    FPTUN_EXC_SP_FUNC:29
    FPTUN_EXC_ETHER_DST:20
    FPTUN_EXC_IP_DST:2
  LocalExceptionType:
    FPTUN_BASIC_EXCEPT:29
    FPTUN_IPV6_INPUT_EXCEPT:2
    FPTUN_ETH_INPUT_EXCEPT:20
==== IPv4 stats:
==== arp stats:
==== IPv6 stats:
==== L2 stats:
==== fp-vswitch stats:
  flow_not_found:28
  flow_pullup_too_small:1
  output_ok:6731979
Packets are correctly forwarded by the fast path.