1.6.1. One VM, one core, Virtio monoqueue NICs, forwarding traffic, with Open vSwitch¶
A single VM forwards traffic from tester1 to tester2. The hypervisor uses Open vSwitch bridges to connect its physical interfaces to the VM virtual interfaces. The VM runs on one core and has two Virtio interfaces, each using one virtual ring. Virtual Accelerator runs on two cores (cores 1 and 2).
Virtual Accelerator configuration¶
In this first use case, the default fast path configuration is used:
- one physical core per socket is dedicated to the fast path
- all supported physical ports are polled by the logical cores located on the same socket
- 4GB of hugepages memory is reserved on each socket for the VMs.
Show the default fast path configuration
root@host:~# fast-path.sh config
Info: no configuration file /etc/fast-path.env for fast-path.sh, using defaults
Configuring Fast Path...
Fast path configuration info
============================
Selected ethernet card
----------------------
Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
  PCI card mounted on ens1f0 with cores 7,23
Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
  PCI card mounted on ens1f1 with cores 7,23
Intel Corporation I350 Gigabit Network Connection (rev 01)
  PCI card mounted on mgmt0 with cores 7,23
Intel Corporation I350 Gigabit Network Connection (rev 01)
  PCI card mounted on enp6s0f1 with cores 7,23
Intel Corporation Ethernet Controller XL710 for 40GbE QSFP+ (rev 01)
  PCI card mounted on ens7f0 with cores 15,31
Intel Corporation Ethernet Controller XL710 for 40GbE QSFP+ (rev 01)
  PCI card mounted on ens7f1 with cores 15,31
Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
  PCI card mounted on ens5f0 with cores 15,31
Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
  PCI card mounted on ens5f1 with cores 15,31

The logical cores 7 and 23 are located on the first socket; they poll the ports located on the first socket. The logical cores 15 and 31 are located on the second socket; they poll the ports located on the second socket.
If you are connected to this machine through a network interface supported by the fast path (for instance via SSH), you should prevent the fast path from taking control of that port and deleting its network configuration, which would break the connection.
root@host:~# ethtool -i mgmt0
[...]
bus-info: 0000:06:00.0
[...]
root@host:~# FP_PORTS="all -0000:06:00.0" fast-path.sh config --update
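If other interfaces must also stay under Linux control, their PCI bus addresses can be retrieved the same way; a small sketch using standard tools (the interface names are taken from the configuration output above and may differ on your system):

root@host:~# for itf in mgmt0 enp6s0f1; do echo -n "$itf: "; ethtool -i $itf | awk '/bus-info/ {print $2}'; done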
As we will not terminate traffic in the VMs, it is recommended to disable offloads to get better performance. See Configuration files for more information.
root@host:~# FP_OFFLOAD="off" fast-path.sh config --update
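These --update invocations are assumed to record their settings in /etc/fast-path.env (the configuration file mentioned in the output above); if you want to double-check what was written, a simple grep is enough (no output shown here):

root@host:~# grep -E 'FP_PORTS|FP_OFFLOAD' /etc/fast-path.env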
libvirt does not support the cpuset isolation feature; it has to be disabled in /etc/cpuset.env.

-#: ${CPUSET_ENABLE:=1}
+: ${CPUSET_ENABLE:=0}
Start Virtual Accelerator.
root@host:~# systemctl start virtual-accelerator.target
Restart the Open vSwitch control plane.
root@host:~# systemctl restart openvswitch
The hugepages are allocated by Virtual Accelerator at startup and libvirt cannot detect them dynamically. libvirt must be restarted to take the hugepages into account.
root@host:~# systemctl restart libvirtd.service
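To confirm that the hugepages reserved by Virtual Accelerator are visible to the system, you can look at the standard kernel counters (the expected totals depend on your hugepage size and fast path memory settings):

root@host:~# grep Huge /proc/meminfo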
Warning
If you restart Virtual Accelerator, you must restart openvswitch and libvirt (and its VMs) as well.
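For reference, a full restart sequence might look like the following sketch, assuming the services restart cleanly in this order (vm1 is the VM created later in this document; adjust to your own VM names):

root@host:~# systemctl restart virtual-accelerator.target
root@host:~# systemctl restart openvswitch
root@host:~# systemctl restart libvirtd.service
root@host:~# virsh start vm1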
Create two virtio interfaces to communicate with the VM. The sockpath argument will be used in the libvirt XML file later.

These new ports will be polled by fast path logical cores located on the same socket as the VM. The number of fast path logical cores polling a port depends on the number of virtual rings. Each virtual ring is polled by one fast path logical core. If there are fewer fast path logical cores than virtual rings, some fast path logical cores poll several virtual rings. The number of virtual rings is configured in the VM using ethtool -L.

root@host:~# fp-vdev add tap0 --sockpath=/tmp/pmd-vhost0 --profile=nfv
devargs:
  profile: nfv
  sockmode: client
  sockname: /tmp/pmd-vhost0
  txhash: l3l4
  verbose: 0
driver: pmd-vhost
ifname: tap0
rx_cores: all
root@host:~# fp-vdev add tap1 --sockpath=/tmp/pmd-vhost1 --profile=nfv
devargs:
  profile: nfv
  sockmode: client
  sockname: /tmp/pmd-vhost1
  txhash: l3l4
  verbose: 0
driver: pmd-vhost
ifname: tap1
rx_cores: all
Note
Make sure that the fast path has been started before you create hotplug ports with fp-vdev commands.

See also

The 6WINDGate Fast Path Managing virtual devices documentation for more information about the fp-vdev command.
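As noted above, the number of virtual rings is chosen from inside the VM; once the guest is running, it can be inspected and changed with standard ethtool commands, for example (eth1 being the guest name of the first Virtio interface in this setup):

root@vm1:~# ethtool -l eth1
root@vm1:~# ethtool -L eth1 combined 1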
Linux configuration¶
VM Creation (if needed)¶
If you don’t have a VM ready, you can use a cloud image. See VM Creation to create one VM with the following libvirt configuration sections:
hostname vm1
<name>vm1</name>
two vhost-user interfaces:
<interface type='vhostuser'>
  <source type='unix' path='/tmp/pmd-vhost0' mode='server'/>
  <model type='virtio'/>
</interface>
<interface type='vhostuser'>
  <source type='unix' path='/tmp/pmd-vhost1' mode='server'/>
  <model type='virtio'/>
</interface>
1048576 KiB (1GB) of memory:
<memory>1048576</memory>
and one NUMA cell with shared memory access:
<numa>
  <cell id="0" cpus="0" memory="1048576" memAccess="shared"/>
</numa>
Configuration of the host and of VM1¶
Now that we have access to the VM, we can set up Linux with the configuration needed for forwarding.
Inside the VM, enable forwarding, set interfaces up and add addresses on both the 1.1.1.0 and 2.2.2.0 subnets.

root@vm1:~# echo 1 > /proc/sys/net/ipv4/ip_forward
root@vm1:~# ip link set eth1 up
root@vm1:~# ip link set eth2 up
root@vm1:~# ip addr add 1.1.1.2/24 dev eth1
root@vm1:~# ip addr add 2.2.2.2/24 dev eth2
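Writing to /proc/sys only enables forwarding until the VM reboots; for a persistent setting, the usual approach is a sysctl drop-in file (the file name below is just a suggestion):

root@vm1:~# echo "net.ipv4.ip_forward = 1" > /etc/sysctl.d/90-ip-forward.conf
root@vm1:~# sysctl -p /etc/sysctl.d/90-ip-forward.conf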
On the host, check the names of the fast path interfaces at the end of the port description in the fp-shmem-ports -d output.

root@host:~# fp-shmem-ports -d
core freq : 2693512037
offload : disabled
vxlan ports :
  port 4789 (set by user)
  port 8472 (set by user)
port 0: eth1
  mac 00:1b:21:74:59:2c
  driver rte_ixgbe_pmd
  RX queues: 2 (max: 128)
  TX queues: 4 (max: 64)
  RX vlan strip off
  RX IPv4 checksum on
  RX TCP checksum on
  RX UDP checksum on
  LRO off
port 1: eth2
  mac 00:1b:21:74:59:2d
  driver rte_ixgbe_pmd
  RX queues: 2 (max: 128)
  TX queues: 4 (max: 64)
  RX vlan strip off
  RX IPv4 checksum on
  RX TCP checksum on
  RX UDP checksum on
  LRO off
(...)
port 7: tap0
  mac 02:09:c0:8a:8a:94
  driver pmd-vhost (args sockmode=client,sockname=/tmp/pmd-vhost0)
  RX queues: 1 (max: 1)
  TX queues: 4 (max: 64)
  RX TCP checksum off
  RX UDP checksum off
  LRO off
port 8: tap1
  mac 02:09:c0:f4:f5:fd
  driver pmd-vhost (args sockmode=client,sockname=/tmp/pmd-vhost1)
  RX queues: 1 (max: 1)
  TX queues: 4 (max: 64)
  RX TCP checksum off
  RX UDP checksum off
  LRO off
On the host, set interfaces up.
root@host:~# ip link set eth1 up
root@host:~# ip link set eth2 up
root@host:~# ip link set tap0 up
root@host:~# ip link set tap1 up
On the host, create two OVS bridges, each containing a pair of physical/virtual interfaces.
root@host:~# ovs-vsctl add-br br1
root@host:~# ovs-vsctl add-br br2
root@host:~# ovs-vsctl add-port br1 tap0
root@host:~# ovs-vsctl add-port br1 eth1
root@host:~# ovs-vsctl add-port br2 tap1
root@host:~# ovs-vsctl add-port br2 eth2
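You can verify the resulting layout at any time; ovs-vsctl show lists the bridges and the ports attached to them (output not reproduced here):

root@host:~# ovs-vsctl show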
Configuration of tester1 and tester2¶
On tester1, configure the 1.1.1.0 subnet and add a route to the 2.2.2.0 subnet.

root@tester1:~# ip link set eth0 up
root@tester1:~# ip add add 1.1.1.1/24 dev eth0
root@tester1:~# ip route add 2.2.2.0/24 via 1.1.1.2
On tester2, configure the 2.2.2.0 subnet and add a route to the 1.1.1.0 subnet.

root@tester2:~# ip link set eth0 up
root@tester2:~# ip add add 2.2.2.1/24 dev eth0
root@tester2:~# ip route add 1.1.1.0/24 via 2.2.2.2
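Before testing the full path, each leg can be checked individually from the testers to the VM addresses configured earlier, for example:

root@tester1:~# ping -c3 1.1.1.2
root@tester2:~# ping -c3 2.2.2.2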
The Linux configuration is finished.
Testing¶
We can send traffic from tester1 to tester2 and check that the fast path switches it to and from the VM. First, let’s do a ping to check the setup.
Reset the fast path statistics first.
root@host:~# fp-cli stats-reset
Ping the tester2 address.

root@tester1:~# ping -c60 2.2.2.1
During traffic, you can check that the flows are available in the kernel.
root@host:~# ovs-dpctl dump-flows
recirc_id(0),in_port(6),eth(src=90:e2:ba:0e:4e:45,dst=52:54:00:fe:13:6f),eth_type(0x0800),ipv4(frag=no), packets:15, bytes:1470, used:0.000s, actions:5
recirc_id(0),in_port(5),eth(src=52:54:00:fe:13:6f,dst=90:e2:ba:0e:4e:45),eth_type(0x0800),ipv4(frag=no), packets:15, bytes:1470, used:0.000s, actions:6
recirc_id(0),in_port(4),eth(src=90:e2:ba:0e:4e:44,dst=52:54:00:11:91:7d),eth_type(0x0800),ipv4(frag=no), packets:15, bytes:1470, used:0.000s, actions:3
recirc_id(0),in_port(3),eth(src=52:54:00:11:91:7d,dst=90:e2:ba:0e:4e:44),eth_type(0x0800),ipv4(frag=no), packets:15, bytes:1470, used:0.000s, actions:4
The fast path statistics have increased, showing that the fast path processed the packets.
root@host:~# fp-cli fp-vswitch-stats
  flow_not_found:4
  output_ok:244
Note
The flow_not_found statistic is incremented for the first packets of each flow, which are sent to Linux because they do not match any known flow in the fast path. Linux receives the packets and sends them to the ovs-vswitchd daemon (this is the standard Linux processing). The daemon creates a flow in the OVS kernel data plane. The flow is automatically synchronized to the fast path, and the next packets of the flow are processed by the fast path.
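To observe this behavior, you can send another ping while the flow is still present and display the statistics again; flow_not_found should stay roughly constant while output_ok keeps increasing (outputs omitted here):

root@tester1:~# ping -c10 2.2.2.1
root@host:~# fp-cli fp-vswitch-stats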
Now that we have checked the setup, we can try iperf.
Reset the fast path statistics first.
root@host:~# fp-cli stats-reset
Install iperf on both servers and start the server on tester2.

root@tester1:~# yum install -y iperf
root@tester2:~# yum install -y iperf
root@tester2:~# iperf -s
Start iperf on tester1.

root@tester1:~# iperf -c 2.2.2.1
------------------------------------------------------------
Client connecting to 2.2.2.1, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  4] local 1.1.1.1 port 40338 connected with 2.2.2.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  6.18 GBytes  5.31 Gbits/sec
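A single TCP stream may be limited by the single VM core; to generate more load you can run several parallel streams with iperf's -P option, for example (results will vary with your hardware):

root@tester1:~# iperf -c 2.2.2.1 -P 4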
During traffic, on the host, you can check the fast path usage.

root@host:~# fp-cpu-usage
Fast path CPU usage:
  cpu: %busy     cycles   cycles/packet
    7:    6%   34111856            5430
   15:   <1%    4730352               0
   23:   79%  429230848            2112
   31:   <1%    4616412               0
average cycles/packets received from NIC: 2256 (472689468/209463)
After traffic, you can check the fast path statistics.
root@host:~# fp-cli fp-vswitch-stats
  flow_not_found:8
  output_ok:19509389