2.2.5. Install as a VM using OpenStack

This chapter explains how to start a Turbo Router VM using OpenStack.

It assumes that you have already installed an OpenStack cloud in which you are able to spawn VMs.

You have two choices: start the VM with virtual NICs (see VM with virtual NICs), or with dedicated physical NICs (see VM with physical NICs).

Note

The following commands may change depending on your OpenStack version. The important points are that the image must be imported into Glance, that a flavor of the correct size is created, and that this image and flavor are used to start the VM. This procedure was tested with an Ubuntu 16.04 hypervisor running the Ocata OpenStack release.

VM with virtual NICs

This simple configuration imports a Turbo Router qcow2 image into OpenStack, creates the right flavor, and starts a Turbo Router VM.

  1. [Controller] Export the Turbo Router qcow2 file path:

    # TURBO_QCOW2=/path/to/6wind-turbo-*-<arch>-<version>.qcow2
    
  2. [Controller] Use glance to create a VM image with the Turbo Router qcow2 file:

    # openstack image create --disk-format qcow2 --container-format bare \
                             --file $TURBO_QCOW2 turbo-router
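
    Optionally, check that the image was imported correctly; this is a
    standard OpenStack client command:

    # openstack image show turbo-router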
    
  3. [Controller] Create a flavor with 8192 MB of memory and 4 virtual CPUs:

    # openstack flavor create --ram 8192 \
                              --vcpus 4 turbo-router
    
  4. [Controller] Create two networks:

    # neutron net-create private1
    # neutron subnet-create --name private_subnet1 private1 11.0.0.0/24
    # net1=$(neutron net-show private1 | grep "\ id\ " | awk '{ print $4 }')
    # neutron net-create private2
    # neutron subnet-create --name private_subnet2 private2 12.0.0.0/24
    # net2=$(neutron net-show private2 | grep "\ id\ " | awk '{ print $4 }')
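
    Optionally, list the networks and subnets, and check that the $net1
    and $net2 variables contain valid UUIDs:

    # neutron net-list
    # neutron subnet-list
    # echo $net1 $net2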
    
  5. [Controller] Boot the Turbo Router VM with one interface on each network:

    # openstack server create --flavor turbo-router \
                              --image turbo-router \
                              --nic net-id=$net1 --nic net-id=$net2 \
                              turbo-router_vm
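
    Optionally, follow the boot progress with standard OpenStack client
    commands:

    # openstack server show turbo-router_vm
    # openstack console log show turbo-router_vm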
    
  6. Connect to the VM. This step depends on your OpenStack installation (see the example below). You should get:

    (...)
      ____        _____ _   _ ____          ____             _
     / /\ \      / /_ _| \ | |  _ \  __   _|  _ \ ___  _   _| |_ ___ _ __
    | '_ \ \ /\ / / | ||  \| | | | | \ \ / / |_) / _ \| | | | __/ _ \ '__|
    | (_) \ V  V /  | || |\  | |_| |  \ V /|  _ < (_) | |_| | ||  __/ |
     \___/ \_/\_/  |___|_| \_|____/    \_/ |_| \_\___/ \__,_|\__\___|_|
    
    vrouter login: admin
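
    For example, if your deployment exposes a VNC console, the following
    standard command returns a URL that can be opened in a browser; other
    deployments may rely on SSH or a serial console instead:

    # openstack console url show turbo-router_vm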
    

The next step is to perform your first configuration.

VM with physical NICs

This section details how to start Turbo Router with dedicated physical NICs within OpenStack.

Using dedicated NICs requires some work on your compute node, which is detailed in Hypervisor mandatory prerequisites.

Once the hypervisor is configured properly, two technologies are available:

  • whole NICs are dedicated to Turbo Router (see Passthrough mode): a simpler configuration, but only one VM can use each NIC

  • portions of NICs are dedicated to Turbo Router (see SR-IOV mode): allows more VMs to run on the hypervisor

For production setups, see Optimize performance in virtual environment to get the best performance (except the section about CPU pinning).

The crudini utility, used in the following steps to edit configuration files, has to be installed on the controller and compute nodes.
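
For example, on an Ubuntu hypervisor like the one used for this guide, it is
typically available from the distribution repositories:

    # apt-get install crudini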

Passthrough mode

With this configuration, the Turbo Router VM will get dedicated interfaces.

The passthrough mode is only available if the compute node hardware supports Intel VT-d, and if it is enabled (see enable Intel VT-d).

  1. [Compute] Get the vendor and product id of the dedicated interface that you want to give to the Turbo Router VM. In this example, for the eno1 interface, we have 8086 as vendor id and 1583 as product id. Please replace the interface name, PCI id, vendor id and product id with your own values:

    # IFACE=eno1
    # ethtool -i $IFACE | grep bus-info | awk '{print $2}'
    0000:81:00.1
    # PCI=0000:81:00.1
    # lspci -n -s $PCI | awk '{print $3}'
    8086:1583
    # VENDOR_ID=8086
    # PRODUCT_ID=1583
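
    As a convenience, the two ids can also be extracted directly into the
    variables; this small shell sketch is equivalent to copying the values
    by hand:

    # VENDOR_ID=$(lspci -n -s $PCI | awk '{print $3}' | cut -d: -f1)
    # PRODUCT_ID=$(lspci -n -s $PCI | awk '{print $3}' | cut -d: -f2)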
    
  2. [Compute] Configure a PCI device alias. It maps the vendor_id and product_id found in the first step to the alias a1, which is used in the next steps:

    # crudini --set /etc/nova/nova.conf pci alias \
                       '{ "vendor_id":"'$VENDOR_ID'", "product_id":"'$PRODUCT_ID'", "device_type":"type-PF", "name":"a1" }'
    
  3. [Compute] Tell Nova which PCI devices can be given to VMs. Here we whitelist the PCI device 0000:81:00.1:

    # crudini --set /etc/nova/nova.conf pci passthrough_whitelist \
                       '{ "address": "'$PCI'" }'
    # service nova-compute restart
    

    Note

    It is possible to add more PCI devices here by giving a list to crudini (e.g. '[{ "address": "pci1" }, { "address": "pci2" }]') in the previous command.
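
    Optionally, read back the values that were just written:

    # crudini --get /etc/nova/nova.conf pci alias
    # crudini --get /etc/nova/nova.conf pci passthrough_whitelist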

  4. [Controller] Export the previously configured variables, as well as the Turbo Router qcow2 file path:

    # TURBO_QCOW2=/path/to/6wind-turbo-*-<arch>-<version>.qcow2
    # IFACE=eno1
    # PCI=0000:81:00.1
    # VENDOR_ID=8086
    # PRODUCT_ID=1583
    
  5. [Controller] Configure nova-scheduler to activate the PciPassthroughFilter. If you have already configured enabled_filters, just add PciPassthroughFilter to your existing list:

    # crudini --set /etc/nova/nova.conf DEFAULT enabled_filters \
                       'RetryFilter,AvailabilityZoneFilter,RamFilter,DiskFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,PciPassthroughFilter'
    # crudini --set /etc/nova/nova.conf DEFAULT available_filters \
                       'nova.scheduler.filters.all_filters'
    # service nova-scheduler restart
    
  6. [Controller] Configure a PCI device alias. It maps the vendor_id and product_id found in the first step to the alias a1:

    # crudini --set /etc/nova/nova.conf pci alias \
                       '{ "vendor_id":"'$VENDOR_ID'", "product_id":"'$PRODUCT_ID'", "device_type":"type-PF", "name":"a1" }'
    # service nova-api restart
    
  7. [Controller] Use glance to create a VM image with the Turbo Router qcow2 file:

    # openstack image create --disk-format qcow2 --container-format bare \
                             --file $TURBO_QCOW2 turbo-router
    
  8. [Controller] Create a flavor with 8192 MB of memory and 4 virtual CPUs:

    # openstack flavor create --ram 8192 \
                              --vcpus 4 turbo-router-passthrough
    
  9. [Controller] Configure the flavor to request one PCI device from alias a1:

    # openstack flavor set turbo-router-passthrough \
                           --property "pci_passthrough:alias"="a1:1"
    

    Note

    To request X devices, use "a1:X" instead of "a1:1" in the previous command.

  10. [Controller] Configure the flavor to use a single NUMA node, dedicated CPUs placed on sibling hyperthreads, and hugepages, in order to get deterministic performance. OpenStack will choose CPUs and memory on the same NUMA node as the NICs:

    # openstack flavor set turbo-router-passthrough \
                           --property hw:numa_nodes=1 \
                           --property hw:cpu_policy=dedicated \
                           --property hw:cpu_thread_policy=require \
                           --property hw:mem_page_size=large
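
    Optionally, display the flavor to check that all properties were set:

    # openstack flavor show turbo-router-passthrough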
    
  11. [Controller] Boot the Turbo Router VM:

    # openstack server create --flavor turbo-router-passthrough \
                              --image turbo-router \
                              turbo-router_vm
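
    Optionally, check that the VM reaches the ACTIVE state; once booted,
    the dedicated NIC should be visible as a PCI Ethernet device inside
    the guest (for example with lspci):

    # openstack server show turbo-router_vm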
    

The next step is to perform your first configuration.

SR-IOV mode

SR-IOV enables an Ethernet port to appear as multiple, separate, physical devices called Virtual Functions (VFs). You will need compatible hardware, and Intel VT-d configured. The traffic coming from each VF cannot be seen by the other VFs. Performance is almost as good as in passthrough mode.

Being able to split an Ethernet port can increase the VM density on the hypervisor compared to passthrough mode.

In this configuration, the Turbo Router VM will get Virtual Functions (VFs).

See also

For more information about SR-IOV, more advanced configurations, or interconnecting physical and virtual networks, please check the OpenStack documentation: https://docs.openstack.org/ocata/networking-guide/config-sriov.html

  1. [Compute] First check whether the network interface that you want to use supports SR-IOV and how many VFs can be configured. Here we check the eno1 interface. Please export your own interface name instead of eno1:

    # IFACE=eno1
    # lspci -vvv -s $(ethtool -i $IFACE | grep bus-info | awk -F': ' '{print $2}') | grep SR-IOV
             Capabilities: [160 v1] Single Root I/O Virtualization (SR-IOV)
    # lspci -vvv -s $(ethtool -i $IFACE | grep bus-info | awk -F': ' '{print $2}') | grep VFs
                 Initial VFs: 64, Total VFs: 64, Number of VFs: 0, Function Dependency Link: 00
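
    The maximum number of VFs is also exposed through sysfs, which is
    easier to parse in scripts:

    # cat /sys/class/net/$IFACE/device/sriov_totalvfs
    64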
    
  2. [Compute] Add VFs, and check that those VFs were created. You should add this command to a custom startup script to make it persistent (a sketch follows the commands below). Please export your own VF PCI id instead of 0000:81:0a.0:

    # echo 2 > /sys/class/net/$IFACE/device/sriov_numvfs
    # lspci | grep Ethernet | grep Virtual
    81:0a.0 Ethernet controller: Intel Corporation XL710/X710 Virtual Function (rev 02)
    81:0a.1 Ethernet controller: Intel Corporation XL710/X710 Virtual Function (rev 02)
    # VF_PCI=0000:81:0a.0
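
    A minimal persistence sketch for Ubuntu 16.04: insert the echo command
    into /etc/rc.local before its final "exit 0" line, so that the VFs are
    recreated at each boot (adapt this to your distribution's startup
    mechanism):

    # sed -i '/^exit 0/i echo 2 > /sys/class/net/eno1/device/sriov_numvfs' /etc/rc.local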
    
  3. [Compute] Get the vendor and product id of the dedicated VF that you want to give to the Turbo Router VM. In this example, for the 81:0a.0 VF, we have 8086 as vendor id and 154c as product id. Let's export the two variables VENDOR_ID and PRODUCT_ID for further use:

    # lspci -n -s $VF_PCI | awk '{print $3}'
    8086:154c
    # VENDOR_ID=8086
    # PRODUCT_ID=154c
    
  4. [Compute] Set the interface up so that VFs are properly detected in the guest VM:

    # ip link set $IFACE up
    
  5. [Compute] Install and configure the SR-IOV agent:

    # apt-get install neutron-sriov-agent
    # crudini --set /etc/neutron/plugins/ml2/sriov_agent.ini securitygroup \
                       firewall_driver neutron.agent.firewall.NoopFirewallDriver
    # crudini --set /etc/neutron/plugins/ml2/sriov_agent.ini sriov_nic \
                       physical_device_mappings physnet2:$IFACE
    # service neutron-sriov-agent restart
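
    Optionally, check that the agent is running:

    # service neutron-sriov-agent status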
    
  6. [Compute] Configure a PCI device alias. It maps the vendor_id and product_id found in step 3 to the alias a1:

    # crudini --set /etc/nova/nova.conf pci alias \
                       '{ "vendor_id":"'$VENDOR_ID'", "product_id":"'$PRODUCT_ID'", "device_type":"type-VF", "name":"a1" }'
    
  7. [Compute] Tell Nova which PCI devices can be given to VMs. Here we whitelist all the VFs configured on eno1:

    # crudini --set /etc/nova/nova.conf pci passthrough_whitelist \
                       '{ "devname": "'$IFACE'", "physical_network": "physnet2" }'
    # service nova-compute restart
    
  8. [Controller] Export the previously configured variables, as well as the Turbo Router qcow2 file path:

    # TURBO_QCOW2=/path/to/6wind-turbo-*-<arch>-<version>.qcow2
    # IFACE=eno1
    # VENDOR_ID=8086
    # PRODUCT_ID=154c
    
  9. [Controller] Configure nova-scheduler to activate the PciPassthroughFilter. If you have already configured enabled_filters, just add PciPassthroughFilter to your existing list:

    # crudini --set /etc/nova/nova.conf DEFAULT enabled_filters \
                       'RetryFilter,AvailabilityZoneFilter,RamFilter,DiskFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,PciPassthroughFilter'
    # crudini --set /etc/nova/nova.conf DEFAULT available_filters \
                       'nova.scheduler.filters.all_filters'
    # service nova-scheduler restart
    
  10. [Controller] Configure a PCI device alias. It maps the vendor_id and product_id found in step 3 to the alias a1:

    # crudini --set /etc/nova/nova.conf pci alias \
                       '{ "vendor_id":"'$VENDOR_ID'", "product_id":"'$PRODUCT_ID'", "device_type":"type-VF", "name":"a1" }'
    # service nova-api restart
    
  11. [Controller] Use glance to create a VM image with the Turbo Router qcow2 file:

    # openstack image create --disk-format qcow2 --container-format bare \
                             --file $TURBO_QCOW2 turbo-router
    
  12. [Controller] Create a flavor with 8192 MB of memory and 4 virtual CPUs:

    # openstack flavor create --ram 8192 \
                              --vcpus 4 turbo-router-sriov
    
  13. [Controller] Configure the flavor to request one PCI device from alias a1:

    # openstack flavor set turbo-router-sriov \
                           --property "pci_passthrough:alias"="a1:1"
    
  14. [Controller] Configure the flavor to use a single NUMA node, dedicated CPUs placed on sibling hyperthreads, and hugepages, in order to get deterministic performance. OpenStack will choose CPUs and memory on the same NUMA node as the NICs:

    # openstack flavor set turbo-router-sriov \
                           --property hw:numa_nodes=1 \
                           --property hw:cpu_policy=dedicated \
                           --property hw:cpu_thread_policy=require \
                           --property hw:mem_page_size=large
    
  15. [Controller] Boot the Turbo Router VM:

    # openstack server create --flavor turbo-router-sriov \
                              --image turbo-router \
                              turbo-router_vm
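
    Optionally, check that the VM reaches the ACTIVE state; once booted,
    the VF should be visible inside the guest as a Virtual Function
    Ethernet device (for example with lspci):

    # openstack server show turbo-router_vm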
    

The next step is to perform your first configuration.