PMD in the guest

To load the Virtio Host PMD in a 6WINDGate DPDK application, pass the 6WINDGate DPDK add-on library to the application with the -d option. To instantiate a virtual device, use the --vdev EAL option (one per device).
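
As a minimal sketch of how the two options combine (the library path and device arguments are placeholders taken from the full example below):

    # ./app/testpmd <EAL options> \
          -d /path/to/librte_pmd_vhost.so \
          --vdev pmd-vhost0,sockname=/tmp/vhost_sock0 \
          -- -i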

  • A 6WINDGate DPDK application runs on one core on the host

  • A 6WINDGate DPDK application runs on two cores on the guest

  • Packets are sent/received using 2 rings for each direction

  • There are no notifications (they are not needed with PMDs)

[Figure: host testpmd (one core, vhost PMD) connected to guest testpmd (two cores) through two rings per direction]
  1. On the host, reserve huge pages and mount a hugetlbfs file system.

    # echo 4096 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
    # mkdir -p /mnt/huge-2M && mount -t hugetlbfs none /mnt/huge-2M
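
    The reservation can be checked against the kernel's counters (a quick sanity check; the number actually reserved may be lower if memory is fragmented):

    # cat /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
    4096
    # grep HugePages /proc/meminfo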
    
  2. Start the 6WINDGate DPDK application on the host.

    # QUEUE_CONFIG="rxqmap=manual:rr/1:3,txqmap=manual:rr/0:2"
    # cd /path/to/dpdk/x86_64-native-linuxapp-gcc
    # ./app/testpmd -c 0x3000 -n 3 --socket-mem=0,512 --huge-dir=/mnt/huge-2M \
          -d /path/to/librte_pmd_vhost.so \
          --vdev pmd-vhost0,${QUEUE_CONFIG},sockname=/tmp/vhost_sock0 \
          -- --socket-num=1 --port-numa-config=0,1 --port-topology=chained -i
    

    We use testpmd, provided in the 6WINDGate DPDK package. In this example, it uses two cores (mask 0x3000, i.e. cores 12 and 13), but only one runs the data plane; the other handles the command line.

    In this example, two RX queues are created, mapped to rings 1 and 3, and two TX queues are created, mapped to rings 0 and 2. RX ring indexes must be odd; TX ring indexes must be even.
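
    Reading the QUEUE_CONFIG value from the command above piece by piece (an informal breakdown of this example, not a complete syntax reference):

    # QUEUE_CONFIG="rxqmap=manual:rr/1:3,txqmap=manual:rr/0:2"
    #   rxqmap=manual:rr/1:3  -> two RX queues, on (odd) rings 1 and 3
    #   txqmap=manual:rr/0:2  -> two TX queues, on (even) rings 0 and 2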

    Note

    • The argument sockname=/tmp/vhost_sock0 specifies the path of the vhost-user Unix socket to create. QEMU connects to this socket to negotiate features and configure the vhost PMD driver.

    • To optimize performance, control where memory is allocated: start the 6WINDGate DPDK application under numactl --cpunodebind=X --membind=X, pass --socket-mem=X, and reserve huge pages on the proper socket. In this example, everything is allocated on the second socket (socket 1), as shown in the sketch below.
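
    For example, the testpmd command from this step can be bound to socket 1 by wrapping it in numactl (same arguments as above):

    # numactl --cpunodebind=1 --membind=1 \
          ./app/testpmd -c 0x3000 -n 3 --socket-mem=0,512 --huge-dir=/mnt/huge-2M \
          -d /path/to/librte_pmd_vhost.so \
          --vdev pmd-vhost0,${QUEUE_CONFIG},sockname=/tmp/vhost_sock0 \
          -- --socket-num=1 --port-numa-config=0,1 --port-topology=chained -i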

  3. Start QEMU, emulating a 4-core machine, with a vhost-user virtio device providing two queue pairs:

    # numactl --cpunodebind=1 --membind=1 \
       qemu-system-x86_64 --enable-kvm -k fr -m 2G \
       -cpu host -smp cores=4,threads=1,sockets=1 \
       -serial telnet::4445,server,nowait -monitor telnet::5556,server,nowait \
       -hda vm.qcow2 \
       -object memory-backend-file,id=mem,size=2G,mem-path=/mnt/huge-2M,share=on \
       -numa node,memdev=mem \
       -chardev socket,path=/tmp/vhost_sock0,id=chr0,server \
       -netdev type=vhost-user,id=net0,chardev=chr0,queues=2,vhostforce=on \
       -device virtio-net-pci,netdev=net0,vectors=5,mq=on,ioeventfd=on
    

    Note

    • To manage more than one pair of queues, you must patch QEMU.

    • For maximum performance, each pthread of the QEMU process (except management pthreads) must be pinned to its own CPU, with no other application running on that CPU. This can be done with several taskset invocations. The thread IDs associated with the vCPUs can be retrieved from the QEMU monitor using the info cpus command, as shown below.
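
    For example, assuming info cpus reports that vCPU #0 runs as thread 12345 (a hypothetical thread ID), it can be pinned to CPU 9 like this:

    (qemu) info cpus
    * CPU #0: thread_id=12345
    [...]
    # taskset -pc 9 12345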

  4. In the guest, the virtio PCI device should be listed:

    # lspci -nn
    [...]
    00:03.0 Ethernet controller [0200]: Red Hat, Inc Virtio network device [1af4:1000]
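
    Before it is bound to igb_uio in the next step, the device is normally claimed by the guest kernel's virtio-pci driver; this can be checked with lspci -k (output varies with the guest kernel):

    # lspci -nnk -s 00:03.0
    00:03.0 Ethernet controller [0200]: Red Hat, Inc Virtio network device [1af4:1000]
            Kernel driver in use: virtio-pci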
    
  5. In the guest, reserve huge pages and start the 6WINDGate DPDK application:

    # modprobe uio
    # cd /path/to/dpdk/x86_64-native-linuxapp-gcc
    # insmod kmod/igb_uio.ko
    # python ../tools/dpdk_nic_bind.py -b igb_uio 0000:00:03.0
    # echo 512 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
    # mkdir -p /mnt/huge-2M && mount -t hugetlbfs none /mnt/huge-2M
    # ./app/testpmd -c 0x7 -n 3 --huge-dir=/mnt/huge-2M \
         --pci-whitelist 0000:00:03.0 \
         -- --rxd=256 --txd=256 --txqflags=0xf00 -i --port-topology=chained \
         --rxq=2 --txq=2
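
    Once testpmd is running, you can confirm that the virtio port was probed with both queue pairs using a standard testpmd command:

    testpmd> show port info 0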
    
  6. Transmit packets between host and guest. For instance:

    1. In the guest, enter:

      testpmd> set fwd rxonly
      testpmd> set corelist 1,2
      testpmd> start
      
    2. On the host, enter:

      testpmd> set fwd txonly
      testpmd> start
      [wait 10 seconds]
      testpmd> stop
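
    3. Check the result. On both sides, the port counters can be displayed with a standard testpmd command (the actual numbers depend on how long the generator ran):

      testpmd> show port stats all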