2.2.5. Install as a VM using OpenStack

This chapter explains how to start a Virtual Service Router VM using OpenStack.

It assumes that you have already installed an OpenStack cloud in which you are able to spawn VMs.

You have two choices: a VM with virtual NICs, or a VM with physical NICs (passthrough or SR-IOV).

Note

The following commands may change depending on your OpenStack version. The important points are that the image must be imported into glance, that a flavor with the correct size is created, and that this image and flavor are used to start the VM. This procedure was tested with an Ubuntu 20.04 hypervisor running the Train OpenStack version.

VM with virtual NICs

This simple configuration imports a Virtual Service Router qcow2 in OpenStack, creates the right flavor, and starts a Virtual Service Router VM.

  1. [Controller] Export the Virtual Service Router qcow2 file path:

    # VSR_QCOW2=/path/to/6wind-vsr-<arch>-<version>.qcow2
    
  2. [Controller] Use glance to create a VM image with the Virtual Service Router qcow2 file:

    # openstack image create --disk-format qcow2 --container-format bare \
                             --file $VSR_QCOW2 vsr
    
  3. [Controller] Create a flavor with 8192 MB of memory and 4 virtual CPUs.

    # openstack flavor create --ram 8192 \
                              --vcpus 4 vsr
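
    Note

    Optionally, you can display the image and the flavor to check them before booting the VM. If your deployment requires flavors to define a root disk size, also pass --disk when creating the flavor (the required size depends on your image):

    # openstack image show vsr
    # openstack flavor show vsr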
    
  4. [Controller] Create two networks:

    # neutron net-create private1
    # neutron subnet-create --name private_subnet1 private1 11.0.0.0/24
    # net1=$(neutron net-show private1 | grep "\ id\ " | awk '{ print $4 }')
    # neutron net-create private2
    # neutron subnet-create --name private_subnet2 private2 12.0.0.0/24
    # net2=$(neutron net-show private2 | grep "\ id\ " | awk '{ print $4 }')
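
    Note

    The standalone neutron client used above is deprecated and may not be available on recent OpenStack releases. A minimal equivalent sketch using the unified openstack client (same network names and subnet ranges as above):

    # openstack network create private1
    # openstack subnet create --network private1 --subnet-range 11.0.0.0/24 private_subnet1
    # net1=$(openstack network show private1 -f value -c id)
    # openstack network create private2
    # openstack subnet create --network private2 --subnet-range 12.0.0.0/24 private_subnet2
    # net2=$(openstack network show private2 -f value -c id)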
    
  5. [Controller] Boot the Virtual Service Router VM with one interface on each network:

    # openstack server create --flavor vsr \
                              --image vsr \
                              --nic net-id=$net1 --nic net-id=$net2 \
                              vsr_vm
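
    Note

    Before connecting, you can check that the VM reached the ACTIVE state:

    # openstack server show vsr_vm -c status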
    
  6. Connect to the VM. This step depends on your OpenStack installation. You should get:

    (...)
      ____        _____ _   _ ____   __     ______  ____
     / /\ \      / /_ _| \ | |  _ \  \ \   / / ___||  _ \
    | '_ \ \ /\ / / | ||  \| | | | |  \ \ / /\___ \| |_) |
    | (_) \ V  V /  | || |\  | |_| |   \ V /  ___) |  _ <
     \___/ \_/\_/  |___|_| \_|____/     \_/  |____/|_| \_\
    
    vsr login: admin
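
    Note

    How you reach the console depends on your deployment. One common way, if your installation exposes a VNC console, is to retrieve the console URL from nova:

    # openstack console url show vsr_vm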
    

The next step is to perform your first configuration.

VM with physical NICs

This section details how to start Virtual Service Router with dedicated physical NICs within OpenStack.

Using dedicated NICs requires some work on your compute node which is detailed in Hypervisor mandatory prerequisites.

Once the hypervisor is configured properly, two technologies are available:

  • whole NICs are dedicated to the Virtual Service Router (see Passthrough mode): this is the simpler configuration, but each NIC can be used by only one VM

  • portions of NICs are dedicated to the Virtual Service Router (see SR-IOV mode): this allows more VMs to run on the same hypervisor

For production setups, consider following Optimize performance in virtual environment to get the best performance (except the section about CPU pinning).

The crudini package must be installed on the controller and compute nodes, since the following steps use it to edit configuration files.
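
On an Ubuntu-based node, it can typically be installed with apt (a sketch, assuming the same Ubuntu 20.04 setup as in the note above; use your distribution's package manager otherwise):

    # apt-get install crudini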

Passthrough mode

With this configuration, the Virtual Service Router VM will get dedicated interfaces.

The passthrough mode is only available if the compute node hardware supports Intel VT-d, and if it is enabled (see enable Intel VT-d).

  1. [Compute] Get the vendor and product ID of the dedicated interface that you want to give to the Virtual Service Router VM. In this example, for the eno1 interface, we have 8086 as vendor ID and 1583 as product ID. Please replace the interface name, PCI ID, vendor ID and product ID with your own values:

    # IFACE=eno1
    # ethtool -i $IFACE | grep bus-info | awk '{print $2}'
    0000:81:00.1
    # PCI=0000:81:00.1
    # lspci -n -s $PCI | awk '{print $3}'
    8086:1583
    # VENDOR_ID=8086
    # PRODUCT_ID=1583
    
  2. [Compute] Configure a PCI device alias. It maps the vendor_id and product_id found in the first step to the a1 alias used in the next steps.

    # crudini --set /etc/nova/nova.conf pci alias \
                       '{ "vendor_id":"'$VENDOR_ID'", "product_id":"'$PRODUCT_ID'", "device_type":"type-PF", "name":"a1" }'
    
  3. [Compute] Declare which PCI devices can be given to VMs. Here we give the PCI device 0000:81:00.1:

    # crudini --set /etc/nova/nova.conf pci passthrough_whitelist \
                       '{ "address": "'$PCI'" }'
    # service nova-compute restart
    

    Note

    It is possible to add more PCI devices here, by giving a list to crudini (e.g. '[{ "address": "pci1" }, { "address": "pci2" }]') in the previous command.
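
    For example, to whitelist two devices at once (the PCI addresses below are placeholders, replace them with your own):

    # crudini --set /etc/nova/nova.conf pci passthrough_whitelist \
                       '[{ "address": "0000:81:00.0" }, { "address": "0000:81:00.1" }]'
    # service nova-compute restart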

  4. [Controller] Export the previously configured variables, as well as the Virtual Service Router qcow2 file path:

    # VSR_QCOW2=/path/to/6wind-vsr-<arch>-<version>.qcow2
    # IFACE=eno1
    # PCI=0000:81:00.1
    # VENDOR_ID=8086
    # PRODUCT_ID=1583
    
  5. [Controller] Configure nova-scheduler to activate the PciPassthroughFilter. Note that if you have enabled filters already, you should just add PciPassthroughFilter to your list:

    # crudini --set /etc/nova/nova.conf DEFAULT enabled_filters \
                       'RetryFilter,AvailabilityZoneFilter,RamFilter,DiskFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,PciPassthroughFilter'
    # crudini --set /etc/nova/nova.conf DEFAULT available_filters \
                       'nova.scheduler.filters.all_filters'
    # service nova-scheduler restart
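
    Note

    You can check the resulting values with crudini before moving on:

    # crudini --get /etc/nova/nova.conf DEFAULT enabled_filters
    # crudini --get /etc/nova/nova.conf DEFAULT available_filters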
    
  6. [Controller] Configure a PCI device alias. It maps the vendor_id and product_id found in the first step to the a1 alias:

    # crudini --set /etc/nova/nova.conf pci alias \
                       '{ "vendor_id":"'$VENDOR_ID'", "product_id":"'$PRODUCT_ID'", "device_type":"type-PF", "name":"a1" }'
    # service nova-api restart
    
  7. [Controller] Use glance to create a VM image with the Virtual Service Router qcow2 file:

    # openstack image create --disk-format qcow2 --container-format bare \
                             --file $VSR_QCOW2 vsr
    
  8. [Controller] Create a flavor with 8192 MB of memory and 4 virtual CPUs.

    # openstack flavor create --ram 8192 \
                              --vcpus 4 vsr-passthrough
    
  9. [Controller] Configure the flavor to request one PCI device from the a1 alias:

    # openstack flavor set vsr-passthrough \
                           --property "pci_passthrough:alias"="a1:1"
    

    Note

    To request X devices, change "a1:1" into "a1:X" in the previous command, as in the example below.
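
    For instance, to request two devices:

    # openstack flavor set vsr-passthrough \
                           --property "pci_passthrough:alias"="a1:2"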

  10. [Controller] Configure the flavor to use a single NUMA node, dedicated CPUs placed on sibling hyperthreads, and hugepages, in order to get deterministic performance. OpenStack will choose CPUs and memory on the same NUMA node as the NICs:

    # openstack flavor set vsr-passthrough \
                           --property hw:numa_nodes=1 \
                           --property hw:cpu_policy=dedicated \
                           --property hw:cpu_thread_policy=require \
                           --property hw:mem_page_size=large
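
    Note

    You can display the flavor to check that all the properties are set as expected:

    # openstack flavor show vsr-passthrough -c properties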
    
  11. [Controller] Boot the Virtual Service Router VM:

    # openstack server create --flavor vsr-passthrough \
                              --image vsr \
                              vsr_vm
    

The next step is to perform your first configuration.

SR-IOV mode

SR-IOV enables an Ethernet port to appear as multiple, separate physical devices called Virtual Functions (VFs). You need compatible hardware, and Intel VT-d must be configured. The traffic coming from a VF cannot be seen by the other VFs. Performance is almost as good as in passthrough mode.

Being able to split an Ethernet port can increase the VM density on the hypervisor compared to passthrough mode.

In this configuration, the Virtual Service Router VM will get Virtual Functions (VFs).

See also

For more information about SR-IOV, more advanced configurations, and interconnecting physical and virtual networks, please check your OpenStack documentation: https://docs.openstack.org/nova/train/admin/networking.html#sr-iov

  1. [Compute] First check whether the network interface that you want to use supports SR-IOV and how many VFs can be configured. Here we check the eno1 interface. Please export your own interface name instead of eno1.

    # IFACE=eno1
    # lspci -vvv -s $(ethtool -i $IFACE | grep bus-info | awk -F': ' '{print $2}') | grep SR-IOV
             Capabilities: [160 v1] Single Root I/O Virtualization (SR-IOV)
    # lspci -vvv -s $(ethtool -i $IFACE | grep bus-info | awk -F': ' '{print $2}') | grep VFs
                 Initial VFs: 64, Total VFs: 64, Number of VFs: 0, Function Dependency Link: 00
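
    Note

    The maximum number of VFs is also exposed through sysfs, which may be easier to use in scripts than parsing the lspci output:

    # cat /sys/class/net/$IFACE/device/sriov_totalvfs
    64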
    
  2. [Compute] Add VFs, and check that those VFs were created. You should add this command to a custom startup script to make it persistent (see the sketch after this step). Please export your own VF PCI ID instead of 0000:81:0a.0.

    # echo 2 > /sys/class/net/$IFACE/device/sriov_numvfs
    # lspci | grep Ethernet | grep Virtual
    81:0a.0 Ethernet controller: Intel Corporation XL710/X710 Virtual Function (rev 02)
    81:0a.1 Ethernet controller: Intel Corporation XL710/X710 Virtual Function (rev 02)
    # VF_PCI=0000:81:0a.0
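
    Note

    A minimal sketch of a systemd unit that recreates the VFs at boot, assuming a systemd-based compute node (the unit name, interface and VF count are examples to adapt). Create /etc/systemd/system/sriov-vfs.service with the content shown below, then enable it:

    # cat /etc/systemd/system/sriov-vfs.service
    [Unit]
    Description=Create SR-IOV VFs on eno1
    [Service]
    Type=oneshot
    ExecStart=/bin/sh -c 'echo 2 > /sys/class/net/eno1/device/sriov_numvfs'
    [Install]
    WantedBy=multi-user.target
    # systemctl enable sriov-vfs.service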
    
  3. [Compute] Get the vendor and product id of the dedicated VF that you want to give to the Virtual Service Router VM. In this example, for the 81:0a.0 VF, we have 8086 as vendor id and 154c as product id. Let’s export the two variables VENDOR_ID and PRODUCT_ID for further use:

    # lspci -n -s $VF_PCI | awk '{print $3}'
    8086:154c
    # VENDOR_ID=8086
    # PRODUCT_ID=154c
    
  4. [Compute] You need to set eno1 up so that VFs are properly detected in the guest VM.

    # ip link set $IFACE up
    
  5. [Compute] Install and configure the SR-IOV agent:

    # apt-get install neutron-sriov-agent
    # crudini --set /etc/neutron/plugins/ml2/sriov_agent.ini securitygroup \
                       firewall_driver neutron.agent.firewall.NoopFirewallDriver
    # crudini --set /etc/neutron/plugins/ml2/sriov_agent.ini sriov_nic \
                       physical_device_mappings physnet2:$IFACE
    # service neutron-sriov-agent restart
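
    Note

    You can check that the agent is up and registered in neutron (run from a node with admin credentials):

    # openstack network agent list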
    
  6. [Compute] Configure a PCI device alias. It maps the vendor_id and product_id found in step 3 to the a1 alias:

    # crudini --set /etc/nova/nova.conf pci alias \
                       '{ "vendor_id":"'$VENDOR_ID'", "product_id":"'$PRODUCT_ID'", "device_type":"type-VF", "name":"a1" }'
    
  7. [Compute] Declare which PCI devices can be given to VMs. Here we give all the VFs configured on eno1:

    # crudini --set /etc/nova/nova.conf pci passthrough_whitelist \
                       '{ "devname": "'$IFACE'", "physical_network": "physnet2" }'
    # service nova-compute restart
    
  8. [Controller] Export the previously configured variables, as well as the Virtual Service Router qcow2 file path:

    # VSR_QCOW2=/path/to/6wind-vsr-<arch>-<version>.qcow2
    # IFACE=eno1
    # VENDOR_ID=8086
    # PRODUCT_ID=154c
    
  9. [Controller] Configure nova-scheduler to activate the PciPassthroughFilter. Note that if you have enabled filters already, you should just add PciPassthroughFilter to your list:

    # crudini --set /etc/nova/nova.conf DEFAULT enabled_filters \
                       'RetryFilter,AvailabilityZoneFilter,RamFilter,DiskFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,PciPassthroughFilter'
    # crudini --set /etc/nova/nova.conf DEFAULT available_filters \
                       'nova.scheduler.filters.all_filters'
    # service nova-scheduler restart
    
  10. [Controller] Configure a PCI device alias. It maps the vendor_id and product_id found in step 3 to the a1 alias:

    # crudini --set /etc/nova/nova.conf pci alias \
                       '{ "vendor_id":"'$VENDOR_ID'", "product_id":"'$PRODUCT_ID'", "device_type":"type-VF", "name":"a1" }'
    # service nova-api restart
    
  11. [Controller] Use glance to create a VM image with the Virtual Service Router qcow2 file:

    # openstack image create --disk-format qcow2 --container-format bare \
                             --file $VSR_QCOW2 vsr
    
  12. [Controller] Create a flavor with 8192 MB of memory and 4 virtual CPUs.

    # openstack flavor create --ram 8192 \
                              --vcpus 4 vsr-sriov
    
  13. [Controller] Configure the flavor to request one PCI device from the a1 alias:

    # openstack flavor set vsr-sriov \
                           --property "pci_passthrough:alias"="a1:1"
    
  14. [Controller] Configure the flavor to use a single NUMA node, dedicated CPUs placed on sibling hyperthreads, and hugepages, in order to get deterministic performance. OpenStack will choose CPUs and memory on the same NUMA node as the NICs:

    # openstack flavor set vsr-sriov \
                           --property hw:numa_nodes=1 \
                           --property hw:cpu_policy=dedicated \
                           --property hw:cpu_thread_policy=require \
                           --property hw:mem_page_size=large
    
  15. [Controller] Boot the Virtual Service Router VM:

    # openstack server create --flavor vsr-sriov \
                              --image vsr \
                              vsr_vm
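
    Note

    One way to confirm on the compute node that a VF was actually given to the VM is to look at the kernel driver bound to it; once assigned to a running VM, it is typically bound to vfio-pci (VF_PCI as exported in step 2):

    # lspci -k -s $VF_PCI | grep "driver in use"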
    

The next step is to perform your first configuration.

Troubleshooting

This section gathers issues that happen with Virtual Service Router started in an OpenStack environment.

VM start errors

Symptoms
  • My VM can’t start, or is in a bad state (NOSTATE):

    $ nova list
    +--------------------------------------+------+--------+------------+-------------+------------------------------+
    | ID                                   | Name | Status | Task State | Power State | Networks                     |
    +--------------------------------------+------+--------+------------+-------------+------------------------------+
    | 52ad953d-19dd-47a9-b03d-dfe565e655e1 | vm3  | ERROR  | -          | NOSTATE     |                              |
    | b28e5aa1-05c9-494b-8f0e-0247d95bde87 | vm2  | ACTIVE | -          | Running     | private2=12.0.0.3            |
    | c4a52ed6-775d-45b3-96c2-8c2a6a1530ac | vm1  | ACTIVE | -          | Running     | private=11.0.0.6, 172.24.4.3 |
    +--------------------------------------+------+--------+------------+-------------+------------------------------+
    
Hints
  • Check the /var/log/nova/nova-compute.log file for ERROR entries. Depending on the output, check the following issues.
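
  • To get the error reported by nova for an instance in ERROR state, and the latest related messages on the compute node, you can use something like the following (instance name and log path taken from the examples above):

    $ openstack server show vm3 -c fault
    # grep -i error /var/log/nova/nova-compute.log | tail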

Not enough memory

Symptoms

My VM can’t start, or is in a bad state (NOSTATE). On the compute node hosting the VM, /var/log/nova/nova-compute.log shows ERRORs and TRACEs like these:

Error launching a defined domain with XML: <domain type='kvm'>
[instance: 52ad953d-19dd-47a9-b03d-dfe565e655e1] Instance failed to spawn
...
...: unable to map backing store for hugepages: Cannot allocate memory
Hints
  • Add more memory to your compute node.
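
  • You can also check how much hugepage memory is still available on the compute node; if HugePages_Free is too low for the flavor, the VM cannot be spawned there:

    # grep Huge /proc/meminfo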

Cannot use hugepages of 1GB

Symptoms

Nova displays an error “Unable to find any usable hugetlbfs mount”.

On the controller node, /var/log/nova/nova-conductor.log shows ERRORs and TRACEs like this one:

error: Unable to find any usable hugetlbfs mount for 1048576 KiB
Hints
  • Hugepages cannot be allocated for the VM. It may be due to the size of the hugepages: try allocating more, but smaller, hugepages (for example 2 MB pages instead of 1 GB), as sketched below.
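
  • For example, allocate 2 MB hugepages at runtime on the compute node, and request that page size in the flavor on the controller instead of hw:mem_page_size=large (the number of pages below is an example sized for an 8192 MB flavor; adjust it to your needs):

    # echo 4096 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
    # openstack flavor set vsr-passthrough --property hw:mem_page_size=2048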