1.3.3. Configuration tuning
Configuration wizard
A configuration wizard is provided to customize the fast path configuration. To launch the wizard, do:
# fast-path.sh config -i
Fast path configuration
=======================
1 - Select fast path ports and polling cores
2 - Select a hardware crypto accelerator
3 - Advanced configuration
4 - Advanced plugin configuration
5 - Display configuration
S - Save configuration and exit
Q - Quit
Enter selection [S]:
The 1 - Select fast path ports and polling cores option takes care of the mandatory fast path configuration, which comprises:
- Core allocation
The fast path needs dedicated cores that are isolated from other Linux tasks.
- Physical port assignment
The fast path must have full control over a network port to provide acceleration on that port. At fast path start, a DPVI replaces each Linux interface associated with a fast path port. The new interface keeps the name of the old interface, but the configuration that was done on the old interface (IP addresses, MTU, routes, etc.) is lost.
- Virtual port creation
To handle traffic to and from virtual machines, the fast path must create virtual ports. As for physical interfaces, a DPVI is created for each virtual port. The virtual ports can be created through the wizard; however, it is usually better to allocate the ports at VM start.
See also
the Hotplug a virtual port section for more information.
To make the guest network performance scale with the number of cores, you must use the multiqueue feature. Virtual rings are used to exchange packets between the host and its guests. Each virtual ring is polled by one fast path logical core located on the same socket as the VM. If there are fewer fast path logical cores than virtual rings, some fast path logical cores poll several virtual rings. The number of virtual rings is configured in the VM using ethtool -L (see the example after this list).
See also
- the Guest multiqueue configuration section below for more information about configuring a guest with several CPUs,
- the Virtio Host PMD documentation for detailed information about multiqueue configuration.
In OpenStack deployments, this step is not needed: Nova automatically creates the needed ports on demand, using the fp-vdev command.
- Core to port mapping
The fast path cores’ main task is to check if packets are available on a port, and process these packets. In most use cases, good performance is obtained with the default configuration: all cores poll all ports of the same socket.
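As an example of the guest-side multiqueue setup mentioned above, assuming a guest interface named eth0 and a guest with 4 vCPUs (both illustrative), the number of virtual rings can be checked and set from inside the guest:

# ethtool -l eth0
# ethtool -L eth0 combined 4

The first command displays the current and maximum queue counts; the second configures 4 combined queues, so that each vCPU can be served by its own virtual ring.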
The 2 - Select a hardware crypto accelerator option lets you select the crypto acceleration type (Virtual Accelerator IPsec only).
- Crypto acceleration selection
In order to benefit from crypto acceleration, an acceleration engine must be selected. The supported engines are:
- Intel Multi-Buffer for software crypto acceleration using AES-NI
- Intel Coleto Creek for hardware crypto acceleration using Intel Communications Chipset 895x Series, 8925 or 8926 (Coleto Creek)
If no acceleration engine is selected, crypto operations are performed in software.
In 3 - Advanced configuration, the following parameters can be customized:
- fast path memory allocation (FP_MEMORY)
The fast path needs dedicated memory, which is allocated in hugepages.
Note
A hugepage is a memory page that is larger than the usual 4KB page. Accessing a hugepage is more efficient than accessing a regular page. The default hugepage size is 2MB.
- VM memory allocation (VM_MEMORY)
For performance reasons, the memory used by the VMs is reserved in hugepages. By default, the Virtual Accelerator allocates 4GB per socket.
- Mbuf pool preallocation (NB_MBUF)
The network packets manipulated by the fast path are stored in buffers named mbufs. An mbuf pool is allocated at fast path start.
- Offloads configuration (FP_OFFLOAD)
TCP/UDP offloads are used to maximize performance. They must be:
- enabled if the traffic is mostly composed of TCP/UDP sessions terminated on the guest (default value on a Virtual Accelerator)
- disabled if the guests are mostly forwarding traffic
By default, offloads are enabled.
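These parameters map to variables in /etc/fast-path.env. As a rough sketch (the values and exact syntax are illustrative assumptions; refer to the 6WINDGate Fast Path Baseline documentation for the authoritative format), the corresponding part of the file could look like:

FP_MEMORY=1024    # hugepage memory dedicated to the fast path, in MB (assumed unit)
VM_MEMORY=4096    # hugepage memory reserved for the VMs of one socket, in MB (assumed unit)
NB_MBUF=16384     # illustrative size of the preallocated mbuf pool
FP_OFFLOAD=on     # enable TCP/UDP offloads (assumed on/off syntax)

On the host, the resulting hugepage reservation can be checked with grep Huge /proc/meminfo.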
The S - Save configuration and exit option writes the configuration file to /etc/fast-path.env.
See also
The 6WINDGate Fast Path Baseline documentation for more information about fast-path.env.
Configuration files
After running the wizard, you can fine tune the fast path configuration by editing its configuration files.
The configuration option typically modified at this step is CPU isolation. It improves fast path performance by ensuring that fast path cores are isolated from other Linux tasks. It is enabled by default and relies on the Linux cpuset feature.
libvirt does not support this feature; therefore, if libvirt is used to start VMs, CPU isolation must be disabled in /etc/cpuset.env by setting CPUSET_ENABLE to 0.
With OpenStack, it can be left enabled, as Nova takes care of CPU isolation.
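In the libvirt case, the change is a one-line edit of /etc/cpuset.env:

CPUSET_ENABLE=0

With this setting, the fast path no longer uses the Linux cpuset feature to isolate its cores.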
The table below shows which configuration file and variable are relevant for each configuration step.
Step | Variable | Configuration File
---|---|---
core allocation | FP_MASK | /etc/fast-path.env
memory allocation | FP_MEMORY and VM_MEMORY | /etc/fast-path.env
physical port assignment | FP_PORTS | /etc/fast-path.env
core / port mapping | CORE_PORT_MAPPING | /etc/fast-path.env
mbuf preallocation | NB_MBUF | /etc/fast-path.env
offloads activation | FP_OFFLOAD | /etc/fast-path.env
cpuset activation | CPUSET_ENABLE | /etc/cpuset.env
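As an illustration of the remaining variables, a hypothetical /etc/fast-path.env fragment for core allocation and physical port assignment could look as follows (the core mask, PCI addresses and exact syntax are assumptions for the example; the 6WINDGate Fast Path Baseline documentation gives the authoritative format):

FP_MASK=0x1e                          # illustrative: dedicate cores 1 to 4 to the fast path
FP_PORTS="0000:03:00.0 0000:03:00.1"  # illustrative: PCI addresses of the fast path ports
# CORE_PORT_MAPPING left unset: by default, all cores poll all ports of the same socket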
See also
- The 6WINDGate Fast Path Baseline documentation for more information about fast-path.env and about CPU isolation using cpuset
- The 6WINDGate FPN-SDK documentation for more information about offloads