1.1. Introduction

1.1.1. Overview

Virtual Accelerator provides packet processing acceleration for virtual network infrastructures.

Virtual Accelerator runs inside the hypervisor and removes performance bottlenecks by offloading virtual switching from the networking stack. The CPU resources required for packet processing are drastically reduced, so that fewer cores are needed to process network traffic at higher rates and Linux stability is improved.

In addition to simple virtual switching (using OVS or the Linux bridge), Virtual Accelerator supports an extensive set of networking protocols to provide a complete virtual networking infrastructure.

Virtual Accelerator is fully integrated with Linux and its environment, so that existing Linux applications do not need to be modified to benefit from packet processing acceleration.

Virtual Accelerator is available for Intel x86 servers. It supports the Red Hat Enterprise Linux, CentOS and Ubuntu distributions, as well as OpenStack.

1.1.2. Features

Figure: Virtual Accelerator Features

High performance I/Os leveraging DPDK, with multi-vendor NIC support from Intel, Mellanox and Broadcom

  • Intel 1G 82575, 82576, 82580, I210, I211, I350, I354 (igb)

  • Intel 10G 82598, 82599, X520, X540 (ixgbe)

  • Intel 10G/40G X710, XL710, XXV710 (i40e)

  • Mellanox 10G/40G ConnectX-3 (mlx4)

  • Mellanox 10G/25G/40G/50G/100G ConnectX-4/ConnectX-5 (mlx5)

  • Broadcom NetExtreme E-Series (bnxt)

High performance virtual switching

  • OVS acceleration

  • Linux Bridge

High performance virtual networking

In addition to virtual switching, Virtual Accelerator supports a complete set of networking protocols, based on the 6WINDGate technology, that can be used to design innovative virtual networking infrastructures (a short configuration example is given after the list below).

  • Layer 2 and Encapsulations

    • GRE

    • VLAN (802.1Q, QinQ)

    • VXLAN

    • LAG (802.3ad, LACP)

    • MPLS

  • IP networking

    • IPv4 and IPv6

    • IPv6 Autoconfiguration

    • VRF

    • IPv4 and IPv6 Tunneling

    • NAT

  • Routing

    • BGP, BGP4+

    • OSPFv2, OSPFv3

    • RIP, RIPng

    • Cross-VRF

    • Static Routes

    • Path monitoring

    • ECMP

    • PBR

    • BFD

    • MPLS LDP (beta)

    • BGP L3VPN (beta)

    • VXLAN EVPN (beta)

    • Point to Multipoint GRE interfaces

    • NHRP

    • DMVPN with IPsec

  • IPsec [1]

    • IKEv1, IKEv2 Pre-shared Keys or X509 Certificates

    • MOBIKE

    • Encryption: 3DES, AES-CBC/GCM (128, 192, 256)

    • Hash: MD-5, SHA-1, SHA-2 (256, 384, 512), AES-XCBC (128)

    • Key Management: RSA, DH MODP groups 1 (768 bits), 2 (1024 bits), 5 (1536 bits) and 14 (2048 bits), DH PFS

    • High performance (AES-NI, QAT)

    • Tunnel, Transport or BEET mode

    • SVTI, DVTI

  • QoS

    • Rate limiting per interface, per VRF

    • Class-based QoS

      • Classification: ToS / IP / DSCP / CoS

      • Shaping and Policing

      • Scheduling: PQ, PB-DWRR

  • Security

    • Access Control Lists

    • Unicast Reverse Path Forwarding

    • Control Plane Protection

    • BGP Flowspec

  • High Availability

    • VRRP
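As a simple illustration of the encapsulations listed above, the following standard iproute2 commands create a VXLAN and a GRE interface. The interface names, VNI and IP addresses are illustrative placeholders, and the commands are plain Linux syntax rather than product-specific configuration.

  # VXLAN interface with VNI 100 over eth1 (values are examples)
  ip link add vxlan100 type vxlan id 100 dstport 4789 local 192.0.2.1 dev eth1
  ip link set vxlan100 up
  # Point-to-point GRE tunnel
  ip tunnel add gre1 mode gre local 192.0.2.1 remote 203.0.113.2
  ip link set gre1 up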

Support of standard Virtio drivers

Virtual Accelerator comes with a high performance Virtio back-end driver for communication with any guest running Virtio front-end drivers, whether they are based on DPDK, Linux or another OS.

Management/Monitoring

  • SSHv2

  • SNMP

  • KPIs / Telemetry (YANG-based)

  • Role-Based Access Control with AAA (TACACS)

  • Syslog

  • 802.1ab LLDP

  • sFlow

  • Seamless integration with management and orchestration tools

    Virtual Accelerator is fully integrated with Linux and its environment: standard Linux APIs are preserved, including iproute2, iptables, brctl, ovs-ofctl, ovs-vsctl, etc., so existing management and orchestration tools keep working without modification.
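    For example, a bridge and a firewall rule configured with the usual Linux commands are accelerated as-is; the interface names and addresses below are illustrative placeholders.

      # Create a Linux bridge and attach a physical port (names are examples)
      brctl addbr br0
      brctl addif br0 eth1
      ip link set dev br0 up
      ip address add 192.0.2.1/24 dev br0
      # Standard netfilter rules keep working unchanged
      iptables -A FORWARD -i br0 -p tcp --dport 22 -j ACCEPT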

1.1.3. Supported platforms

  • Virtual Accelerator is provided as a set of binary packages and is validated on the following distributions:

    • Ubuntu 18.04 for x86

    • Red Hat Enterprise Linux 8 for x86

    • CentOS 8 for x86

See also

Refer to the Virtual Accelerator Release Notes for detailed information about the latest validated versions of the Linux distributions.

  • Supported processors

    • Intel Xeon E5-1600/2600/4600 v2 family (Ivy Bridge EP)

    • Intel Xeon E5-1600/2600/4600 v3 family (Haswell EP)

    • Intel Xeon E5-1600/2600/4600 v4 family (Broadwell EP)

    • Intel Xeon E7-2800/4800 v2 family (Ivy Bridge EX)

    • Intel Xeon E7-2800/4800 v3 family (Haswell EX)

    • Intel Xeon E7-4800/8800 v4 family (Broadwell)

    • Intel Xeon Platinum/Gold/Silver/Bronze family (Skylake)

    • Intel Atom C3000 family (Denverton)

    • Intel Xeon D family

  • Supported OpenStack Distributions: Ubuntu Cloud and RDO.

1.1.4. Delivery contents

Virtual Accelerator is provided as a .tgz archive file organized as follows:

software/

Virtual Accelerator binary software for your Linux distribution (.deb or .rpm packages)

doc/
  • Virtual Accelerator Getting Started Guide

    This document

  • Virtual Accelerator Publicly Available Software List

    List of Publicly Available Software included in the Virtual Accelerator delivery

  • 6WINDGate documentation

    A .tgz package containing the detailed documentation of the 6WINDGate modules that compose the Virtual Accelerator product. Extract the package and open index.html in your browser (a short sketch follows this list).

    The mapping between Virtual Accelerator features and the corresponding 6WINDGate modules is detailed in the Technology section of this document.
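    A minimal sketch of that extraction step, assuming a hypothetical archive and directory name (the actual file name is the one shipped in the doc/ directory):

      # Extract the documentation archive and open its entry point
      # (file and directory names are placeholders)
      tar xzf 6windgate-doc.tgz
      xdg-open 6windgate-doc/index.html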

An OpenStack application note is delivered separately, as a file named 6wind-app-note-openstack-support-vX.Y.Z-ubuntu-18.04.tar.gz.

1.1.5. Technology

Virtual Accelerator sits inside a KVM hypervisor and offloads packet processing from the Linux networking stack, thanks to the 6WINDGate technology.

This section introduces the 6WINDGate architecture concepts that are necessary to understand how Virtual Accelerator works. More details about 6WINDGate can be found in the detailed documentation of each 6WINDGate module.

Data Plane

The Virtual Accelerator data plane is responsible for the actual virtual switching and networking functions. It runs on a dedicated set of cores, isolated from the rest of the Linux operating system, and processes all incoming packets from NICs or vNICs. The Virtual Accelerator data plane relies on the DPDK, the 6WINDGate FPN-SDK and the 6WINDGate fast path.

Figure: Virtual Accelerator data plane architecture

The DPDK, provided by the 6WINDGate DPDK module, ensures high performance I/Os. It includes drivers for a wide range of NICs, called PMDs (Poll Mode Drivers); NIC support is included either in 6WINDGate DPDK itself or as separate add-ons.

On top of the DPDK, the FPN-SDK provides a hardware abstraction layer implementing low level system features, such as offloads (checksum, LRO/TSO), RSS, handling of communication between Linux and the fast path, management of memory pools and rings, etc.

Finally, the fast path provides the high performance networking protocols for virtual switching and networking, including Linux bridge, OVS, IP forwarding, filtering, etc.

The Virtual Accelerator data plane does not require any specific configuration: it is transparently synchronized with Linux, as described in the next section.
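To give a concrete idea of what running on a dedicated, isolated set of cores involves, the lines below show generic Linux/DPDK prerequisites (hugepage reservation and CPU isolation). The core list and page count are arbitrary examples; the actual fast path core and port assignment is done through the fast path configuration file documented in FPN-SDK Add-on for DPDK.

  # Reserve 2 MB hugepages for the data plane (the count is an example)
  echo 4096 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
  # Isolate cores 2-7 from the general-purpose Linux scheduler, e.g. by adding
  # isolcpus=2-7 to the kernel command line, then verify after reboot:
  cat /sys/devices/system/cpu/isolated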

For more information regarding:

  • Usage of the DPDK drivers for physical NICs, usage of the DPDK crypto libraries [1], and usage of system monitoring, configuration and debug tools: refer to the FPN-SDK Baseline documentation.

  • The Fast Path configuration file reference, including fast path core allocation, port/core binding, etc.: refer to the FPN-SDK Add-on for DPDK documentation.

  • Fast Path management: refer to the Fast Path Baseline documentation.

  • How to configure virtual switching and networking protocols: refer to the corresponding fast path modules (protocols are implemented in fast path modules, with one document per module).

Linux - Fast Path Synchronization

To provide transparency with Linux, the 6WINDGate technology implements a continuous synchronization mechanism, so that all Linux configuration is automatically reflected into the fast path.

This is provided by the 6WINDGate Linux - Fast Path Synchronization module.

Figure: Virtual Accelerator transparency
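For instance, a VRF created with standard iproute2 commands is picked up by this synchronization mechanism without any extra step; the VRF name, routing table id, interface and addresses below are illustrative placeholders.

  # Create a VRF bound to routing table 10 and move an interface into it (names are examples)
  ip link add vrf-red type vrf table 10
  ip link set dev vrf-red up
  ip link set dev eth2 master vrf-red
  ip address add 10.0.0.2/24 dev eth2
  # Routes added in that VRF are synchronized into the fast path as well
  ip route add 198.51.100.0/24 via 10.0.0.1 table 10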

For more information regarding:

  • The management of transparency with Linux: refer to the Linux - Fast Path Synchronization documentation.

  • How to configure VRF in Linux: refer to the Linux - Fast Path Synchronization - VRF documentation.

Control Plane

Virtual Accelerator provides its own control plane components, such as the Open vSwitch and IKE control planes described below; these components are maintained and supported by 6WIND. The rest of the Virtual Accelerator data plane is configured with standard Linux tools, such as iproute2, iptables, brctl, etc., which are supported and maintained by the Linux distribution vendor.
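For example, the Open vSwitch control plane is driven with the usual Open vSwitch tools; the bridge and port names below are illustrative placeholders, not a mandated configuration.

  # Create an OVS bridge and attach ports (names are examples)
  ovs-vsctl add-br br0
  ovs-vsctl add-port br0 eth1
  ovs-vsctl add-port br0 tap0
  # Inspect the OpenFlow flows programmed on the bridge
  ovs-ofctl dump-flows br0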

For more information regarding:

  • How to configure the Open vSwitch control plane: refer to the Control Plane OVS documentation.

  • How to configure the IKE control plane [1]: refer to the Control Plane Security - IKEv1 and IKEv2 documentation.

vNICs

Communication with guests is provided through the Virtio Host PMD module, which is a DPDK add-on module providing a Virtio backend for guests.

The following offloads are advertised through the Virtio Host PMD so that Virtio guests can leverage them:

  • Checksum offload (IP and TCP/UDP)

  • LRO (based on GRO)

  • TSO (based on GSO)

  • All of the above offloads can also be leveraged inside tunnels (VLAN, VXLAN, GRE, IPinIP).

Offloads leverage hardware when possible (both the hardware and the PMDs must support them); otherwise they are performed in software.
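As an illustration, a KVM guest can be connected to such a Virtio back-end through a standard QEMU vhost-user network device. The socket path, memory size and disk image below are illustrative placeholders; how the back-end port itself is created on the host is described in the Virtio Host PMD documentation.

  # Attach a guest to a vhost-user socket exposed by the host (paths and sizes are examples)
  qemu-system-x86_64 -enable-kvm -cpu host -m 2048 \
      -object memory-backend-file,id=mem0,size=2048M,mem-path=/dev/hugepages,share=on \
      -numa node,memdev=mem0 \
      -chardev socket,id=char0,path=/tmp/vhost-user0.sock \
      -netdev type=vhost-user,id=net0,chardev=char0 \
      -device virtio-net-pci,netdev=net0 \
      -drive file=guest.qcow2,if=virtio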

For more information regarding:

  • Usage of the DPDK drivers for vNICs: refer to the 6WINDGate DPDK Virtio Host PMD documentation.

  • The Fast Path configuration file reference, including vNIC options and offloads: refer to the FPN-SDK Add-on for DPDK documentation.

[1] Requires a Virtual Accelerator IPsec Application License.