1.4.3. Configuration tuning¶
Configure the BIOS¶
The Intel VMD domain is longer than the 32-bit length of the Linux design, so it must be disabled in the BIOS.
Note
This BIOS setting applies only to Intel CPUs.
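Before rebooting into the BIOS, you can check from Linux whether VMD controllers are currently exposed. This is a generic lspci check, not a 6WIND command, and the device name may differ on your platform:
# lspci | grep -i "volume management device"
If this prints any device, Intel VMD is still enabled and should be disabled in the BIOS before starting the fast path.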
Configuration wizard¶
A configuration wizard is provided to customize the fast path configuration. To launch the wizard, do:
# fast-path.sh config -i
Fast path configuration
=======================
1 - Select fast path ports and polling cores
2 - Select a hardware crypto accelerator
3 - Advanced configuration
4 - Advanced plugin configuration
5 - Display configuration
S - Save configuration and exit
Q - Quit
Enter selection [S]:
The 1 - Select fast path ports and polling cores option takes care of the mandatory fast path configuration, which comprises:
- Core allocation
The fast path needs dedicated cores that are isolated from other Linux tasks.
- Physical port assignation
The fast path must have full control over a network port to provide acceleration on this port. At fast path start, a DPVI replaces each Linux interface associated with a fast path port. The new interface has the same name as the old interface. The configuration that was done on the old interface is lost (IP addresses, MTU, routes, etc.).
- Virtual port creation
To handle traffic to and from virtual machines, the fast path must create virtual ports. As for physical interfaces, a DPVI is created for each virtual port. The virtual ports can be created through the wizard. However, it is usually better to allocate the ports at VM start.
See also
the Hotplug a virtual port section for more information.
To make the guest network performance scale with the number of cores, you must use the multiqueue feature. To exchange packets between the host and its guests, virtual rings are used. Each virtual ring is polled by one fast path logical core located on the same socket as the VM. If there are fewer fast path logical cores than virtual rings, some fast path logical cores poll several virtual rings. The number of virtual rings is configured in the VM using ethtool -L (see the example after this list).
See also
the Guest multiqueue configuration section below for more information about configuring a guest with several CPUs,
the Virtio Host PMD documentation for detailed information about multiqueue configuration.
In OpenStack deployments, this step is not needed: Nova automatically creates the needed ports on demand using the fp-vdev command.
- Core to port mapping
The fast path cores’ main task is to check if packets are available on a port, and process these packets. In most use cases, good performance is obtained with the default configuration: all cores poll all ports of the same socket.
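As noted in the virtual port item above, the number of virtual rings is set inside the guest with ethtool -L. A minimal sketch, assuming the guest interface is named eth0 and that four queue pairs are wanted:
# ethtool -l eth0
# ethtool -L eth0 combined 4
The first command (lowercase -l) displays the current and maximum channel counts; the second sets the number of combined queue pairs. The interface name and the value 4 are examples only; the queue count is usually aligned with the number of vCPUs in the guest.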
The 2 - Select a hardware crypto accelerator option allows you to select the crypto acceleration type [1].
- Crypto acceleration selection
To benefit from crypto acceleration, an acceleration engine must be selected. The supported engines are:
- Intel Multi-Buffer for software crypto acceleration using AES-NI
- Intel Coleto Creek for hardware crypto acceleration using the Intel Communications Chipset 895x Series, 8925 or 8926 (Coleto Creek)
If no acceleration engine is selected, plain software crypto is used.
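Intel Multi-Buffer only brings a benefit on CPUs that expose the AES-NI instruction set. A quick, generic way to verify this on the host (plain Linux, not specific to this product):
# grep -m1 -ow aes /proc/cpuinfo
If the command prints aes, the CPU supports AES-NI.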
In 3 - Advanced configuration, the following parameters can be customized:
- fast path memory allocation (FP_MEMORY)
The fast path needs dedicated memory, which is allocated in hugepages.
Note
A hugepage is a memory page that is larger than the usual 4 KB page. Accessing a hugepage is more efficient than accessing a regular memory page. Its default size is 2 MB.
- VM memory allocation (VM_MEMORY)
For performance reasons, the memory used by the VMs is reserved in hugepages. By default, Virtual Accelerator allocates 4GB per socket.
- Mbuf pool preallocation (NB_MBUF)
The network packets manipulated by the fast path are stored in buffers named mbufs. An mbuf pool is allocated at fast path start.
- Offloads configuration (FP_OFFLOAD)
TCP/UDP offloads are used to maximize performance. They must be:
- enabled if the traffic is mostly composed of TCP/UDP sessions terminated on the guest (default value on a Virtual Accelerator)
- disabled if the guests are mostly forwarding traffic
By default, offloads are enabled.
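These advanced parameters end up as shell variables in /etc/fast-path.env. The sketch below only illustrates the shape of such a file: the variable names match the wizard options above, but the values are placeholders to adapt to your platform (see the Fast Path Baseline documentation for the exact syntax and accepted values):
# Fast path dedicated memory, allocated in hugepages (placeholder value)
FP_MEMORY=4096
# Hugepage memory reserved for virtual machines per socket (placeholder value)
VM_MEMORY=4096
# Number of mbufs preallocated at fast path start (placeholder value)
NB_MBUF=16384
# TCP/UDP offloads (example value; enabled by default on Virtual Accelerator)
FP_OFFLOAD=on
After the fast path has been restarted with such settings, grep Huge /proc/meminfo is a quick way to confirm that the hugepages were actually reserved.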
The S - Save configuration and exit option writes the configuration file to /etc/fast-path.env.
See also
The 6WINDGate Fast Path Baseline documentation for more information about fast-path.env.
After running the wizard, if you don’t need additional fine tuning of the fast path configuration, you’ll want to jump to the Optimizing performance section.
[1] Requires an IPsec Application License.
Configuration files¶
Expert users can fine-tune the fast path configuration even further by editing its configuration files.
The table below shows which configuration file and variable are relevant for each configuration step.
Step | Variable | Configuration File
---|---|---
core allocation | FP_MASK | /etc/fast-path.env
memory allocation | FP_MEMORY, VM_MEMORY | /etc/fast-path.env
physical ports assignation | FP_PORTS | /etc/fast-path.env
core / port mapping | CORE_PORT_MAPPING | /etc/fast-path.env
mbuf preallocation | NB_MBUF | /etc/fast-path.env
offloads activation | FP_OFFLOAD | /etc/fast-path.env
cpuset activation | CPUSET_ENABLE | /etc/cpuset.env
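To review which of these variables are explicitly set on a system, a plain grep on the configuration file is enough (generic shell; commented-out defaults are not matched):
# grep -E '^(FP_|VM_|NB_|CORE_)' /etc/fast-path.env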
See also
The 6WINDGate Fast Path Baseline documentation for more information about fast-path.env and about CPU isolation using cpuset.
The 6WINDGate FPN-SDK documentation for more information about offloads.