2. IPoE multiple instances / kubevirt

Like the earlier IPoE multiple instances section, this section shows how to set up multiple IPoE server instances to increase the number of supported sessions.

This time, however, the instances are deployed as VMs in a kubevirt environment on top of a kubernetes cluster.

2.1. Prerequisites

See the Prerequisites section.

2.2. Deploy BNG instances into the cluster

Several YAML configuration files will be applied.

First of all, create the following files:

  • pci-config.yml:
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: pci-config
    data:
      bus_addr_to_name.py: |
        #!/usr/bin/env python3
        import sys
        import re
        BUS_ADDR_RE = re.compile(r'''
        ^
        (?P<domain>([\da-f]+)):
        (?P<bus>([\da-f]+)):
        (?P<slot>([\da-f]+))\.
        (?P<func>(\d+))
        $
        ''', re.VERBOSE | re.IGNORECASE)
    
        def bus_addr_to_name(bus_addr):
            """
            Convert a PCI bus address into a port name as used in nc-cli.
            """
            match = BUS_ADDR_RE.match(bus_addr)
            if not match:
                raise ValueError('pci bus address %s does not match regexp' % bus_addr)
    
            d = match.groupdict()
            domain = int(d['domain'], 16)
            bus = int(d['bus'], 16)
            slot = int(d['slot'], 16)
            func = int(d['func'], 10)
    
            name = 'pci-'
            if domain != 0:
                name += 'd%d' % domain
            name += 'b%ds%d' % (bus, slot)
            if func != 0:
                name += 'f%d' % func
            print(name)
    
        if __name__ == "__main__":
            bus_addr_to_name(sys.argv[1])
    
  • bng-ipoe-config.yml:
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: bng-config
    data:
      bng.cli: |
        netconf connect
        edit running
        / system license online serial <serialnumber>
        / system hostname BNG-_INDEX_
        / system fast-path port _PCI_
        / vrf main interface physical eth1 port _PCI_
        / vrf main interface veth veth1.1 link-interface veth1.2
        / vrf main interface veth veth1.1 link-vrf vrf1
        / vrf main interface veth veth1.1 ipv4 address 10.10.0.1/24
        / vrf vrf1 interface veth veth1.2 link-interface veth1.1
        / vrf vrf1 interface veth veth1.2 link-vrf main
        / vrf vrf1 interface veth veth1.2 ipv4 address 10.10.0.2/24
        / vrf vrf1 dhcp server enabled true
        / vrf vrf1 dhcp server subnet 172._SUBNET_.0.0/16 interface veth1.2
        / vrf vrf1 dhcp server subnet 172._SUBNET_.0.0/16 range 172._SUBNET_.0.4 172._SUBNET_.255.254
        / vrf main interface physical eth1 ipv4 address 172._SUBNET_.0.2/24
        / vrf main ipoe-server enabled true
        / vrf main ipoe-server limits max-session 10000
        / vrf main ipoe-server dhcp-relay interface eth1 agent-information relay-address 10.10.0.1
        / vrf main ipoe-server dhcp-relay interface eth1 agent-information trusted-circuit false
        / vrf main ipoe-server dhcp-relay interface eth1 agent-information link-selection 172._SUBNET_.0.2
        / vrf main ipoe-server dhcp-relay interface eth1 router 172._SUBNET_.0.2
        / vrf main ipoe-server dhcp-relay server 10.10.0.2
        commit
    

The bng-ipoe-config.yml file contains an IPoE server configuration in the main VRF and a DHCP server configuration in a separate VRF. Both are applied at BNG instance startup.

You can update the content of this file as needed, including the serial number of your license and the DHCP pool.
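
For reference, the bus_addr_to_name.py script shipped in pci-config.yml converts a PCI bus address into the port name expected by nc-cli (inside the VMs, it is mounted under /mnt/script, as shown later in the cloud-init configuration). For example, a virtual function at the hypothetical address 0000:07:00.1 maps as follows:

# python3 /mnt/script/bus_addr_to_name.py 0000:07:00.1
pci-b7s0f1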

See also

See the User’s Guide for more information.

Download those files and copy them to the host.

# ls -l
 [...]
-rw-r--r-- 1 root root 1346 Nov 13 14:55 bng-ipoe-config.yml
-rw-r--r-- 1 root root 1047 Nov 13 11:15 pci-config.yml
 [...]

From the directory where you stored the files on the host, execute the following commands:

# kubectl create -f pci-config.yml
configmap/pci-config created

# kubectl create -f bng-ipoe-config.yml
configmap/bng-config created
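
You can check that both ConfigMaps are present; the output should resemble:

# kubectl get configmap bng-config pci-config
NAME         DATA   AGE
bng-config   1      8s
pci-config   1      16s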

Note

Those files can be downloaded from the 6WIND deployment guide repository.

Then, create the following file:

  • kubevirt-bng-pool-deployment.yml:
    apiVersion: pool.kubevirt.io/v1alpha1
    kind: VirtualMachinePool
    metadata:
      name: vrouter-dut
    spec:
      replicas: 3
      selector:
        matchLabels:
          kubevirt.io/vmpool: vrouter-dut
      virtualMachineTemplate:
        metadata:
          labels:
            kubevirt.io/vmpool: vrouter-dut
        spec:
          runStrategy: Always
          template:
            metadata:
              labels:
                kubevirt.io/vmpool: vrouter-dut
            spec:
              domain:
                cpu:
                  sockets: 1
                  cores: 2
                  threads: 2
                  dedicatedCpuPlacement: true
                devices:
                  filesystems:
                    - name: scripts-vol
                      virtiofs: {}
                    - name: py-vol
                      virtiofs: {}
                  disks:
                    - name: ctdisk
                      disk: {}
                    - name: cloudinitdisk
                      disk: {}
                    - name: conf-vol
                      disk: {}
                  interfaces:
                    - name: default
                      masquerade: {}
                    - name: multus-1
                      sriov: {}
                resources:
                  requests:
                    memory: 8Gi
                memory:
                  hugepages:
                    pageSize: "1Gi"
              networks:
                - name: default
                  pod: {}
                - name: multus-1
                  multus:
                    networkName: multus-intel-sriov-nic-vsr1
                    default: false
              volumes:
              - name: ctdisk
                containerDisk:
                  image: localhost:5000/dut:current
              - cloudInitNoCloud:
                  userData: |-
                    #cloud-config
                    bootcmd:
                      - mkdir /mnt/template
                      - mkdir /mnt/script
                      - mkdir /mnt/config
                      - mount -t virtiofs scripts-vol /mnt/template
                      - mount -t virtiofs py-vol /mnt/script
                      - mount -t virtiofs conf-vol /mnt/config
                    runcmd:
                      - export INDEX=$(hostname|awk -F "-" '{ print $3 }')
                      - export PCI_BUS=0000:$(lspci|grep -i "virtual function"|awk '{print $1}')
                      - export PCI_NAME=$(python3 /mnt/script/bus_addr_to_name.py $PCI_BUS)
                      - sed -e "s/_INDEX_/$INDEX/g" -e "s/_SUBNET_/$((INDEX+16))/g" -e "s/_PCI_/${PCI_NAME}/g" /mnt/template/bng.cli > /mnt/config/config-target.cli
                      - pidof nc-cli && nc-cli -n -f /mnt/config/config-target.cli
                name: cloudinitdisk
              - configMap:
                  name: bng-config
                name: scripts-vol
              - configMap:
                  name: pci-config
                name: py-vol
              - name: conf-vol
                emptyDisk:
                  capacity: 2Gi
    

By using filesystem devices, the ConfigMaps are shared with the guests through virtiofs. Unlike disks, filesystems dynamically propagate ConfigMap changes to the VMIs. Filesystem devices must be mounted inside the VM; this is done here through cloudInitNoCloud.
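
For example, once the VMs are running, editing the bng-config ConfigMap on the host (kubectl edit configmap bng-config) is reflected inside each guest under the /mnt/template mount point set up by cloud-init, typically within the kubelet sync period and without restarting the VMIs:

# cat /mnt/template/bng.cli
netconf connect
edit running
[...]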

See also

For more details, refer to the kubevirt documentation.

This deployment file instantiates 3 BNG VMs. You can update its content as needed, including the number of replicas (VMs). Note that some parameters in this file must match the configuration set in the Prerequisites section, such as the network attachment (networkName: multus-intel-sriov-nic-vsr1).
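
Concretely, each replica derives its identity from its hostname (vrouter-dut-<n>): the cloud-init runcmd extracts the trailing index and shifts it by 16 to build the subnet, so vrouter-dut-0 becomes BNG-0 serving 172.16.0.0/16, vrouter-dut-1 becomes BNG-1 serving 172.17.0.0/16, and so on. A quick sketch of the substitution for the first replica:

# hostname
vrouter-dut-0
# hostname | awk -F "-" '{ print $3 }'
0
# echo 172.$((0+16)).0.0/16
172.16.0.0/16

If you scale the pool later, for example with kubectl scale vmpool vrouter-dut --replicas=5 (assuming your kubevirt version exposes the scale subresource for VirtualMachinePool), the new replicas continue this numbering scheme.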

See also

See the Getting Started guide for more information.

Before deploying this file, the 6WIND BNG qcow2 image, packaged as a container image, must be available from a container registry. Update the image field of the containerDisk section in this file accordingly.

...
volumes:
- name: ctdisk
  containerDisk:
    image: localhost:5000/dut:current  # << update here
...
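
If the image is not yet available from a registry, a typical sequence looks as follows (a sketch: the 6wind-bng:current source tag is hypothetical, and localhost:5000 matches the registry used in the file):

# docker tag 6wind-bng:current localhost:5000/dut:current
# docker push localhost:5000/dut:current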

Note

This file can be downloaded from the 6WIND deployment guide repository.

Download this file and copy it to the host.

# ls -l
[...]
-rw-r--r--  1 root root 2877 Nov 14 11:08 kubevirt-bng-pool-deployment.yml
[...]

From the directory where you stored the file on the host, execute the following command:

# kubectl create -f kubevirt-bng-pool-deployment.yml
virtualmachinepool.pool.kubevirt.io/vrouter-dut created

Check that the 3 VMs just deployed are correctly created, running and ready:

# kubectl get vms
NAME            AGE   STATUS    READY
vrouter-dut-0   74s   Running   True
vrouter-dut-1   74s   Running   True
vrouter-dut-2   74s   Running   True
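
You can also check the corresponding VM instances; IP addresses and node names will differ in your environment:

# kubectl get vmis
NAME            AGE   PHASE     IP            NODENAME   READY
vrouter-dut-0   75s   Running   10.244.1.10   node1      True
vrouter-dut-1   75s   Running   10.244.1.11   node1      True
vrouter-dut-2   75s   Running   10.244.1.12   node1      True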

2.3. Start sessions

For testing purposes, we will use the bngblaster tool to create sessions. For the 3 vBNGs, we will create 30 000 sessions, that is 10 000 sessions per instance.

  1. Get an Ubuntu 22.04 image and spawn a new virtual machine. This time, use a PF interface (passthrough mode).

    # cp ubuntu-22.04.qcow2 /var/lib/libvirt/images/blaster.qcow2
    
    # virt-install --name blaster --vcpus=6,sockets=1,cores=3,threads=2 \
                   --os-variant ubuntu22.04 --cpu host --network=default,model=e1000 \
                   --ram 8192 --noautoconsole --import \
                   --memorybacking hugepages=yes \
                   --disk /var/lib/libvirt/images/blaster.qcow2,device=disk,bus=virtio \
                   --host-device 17:00.1
    
  2. Pin vCPUs to host CPUs.

    # virsh vcpupin blaster 0 7
    # virsh vcpupin blaster 1 31
    # virsh vcpupin blaster 2 8
    # virsh vcpupin blaster 3 32
    # virsh vcpupin blaster 4 9
    # virsh vcpupin blaster 5 33
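
    Optionally, check the result; when no vCPU argument is given, virsh prints the affinity of every vCPU:

    # virsh vcpupin blaster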
    
  3. Log in to the blaster instance.

    # virsh console blaster
    
  4. Configure the interfaces.

    # ip link set dev ens1 up
    # ip link set dev eth0 up
    # dhclient ens1
    
  5. Install bngblaster.

    # dpkg-query -W bngblaster || { wget https://github.com/rtbrick/bngblaster/releases/download/0.9.7/bngblaster-0.9.7-ubuntu-22.04_amd64.deb; dpkg -i bngblaster-0.9.7-ubuntu-22.04_amd64.deb; }
    
  6. Configure and start bngblaster.

    # cat bngblaster_ipoe.json
    {
        "interfaces": {
            "access": [
                {
                    "interface": "eth0",
                    "type": "ipoe",
                    "vlan-mode": "N:1"
                }
            ]
        },
        "dhcp": {
            "enable": true
        },
        "dhcpv6": {
            "enable": false
        },
        "access-line": {
        },
        "sessions": {
            "max-outstanding": 1000,
            "reconnect": true,
            "count": 30000
        },
        "ipoe": {
            "ipv4": true,
            "ipv6": false
        }
    }
    
    # bngblaster -C bngblaster_ipoe.json -J /tmp/report.json -S /tmp/bngblaster.sock -L /tmp/bng.log -I
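
    While the test runs, you can query live counters over the control socket from another terminal (a sketch: bngblaster-cli is the helper shipped with the bngblaster package; available command names may vary between versions):

    # bngblaster-cli /tmp/bngblaster.sock session-counters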
    
  7. Check active sessions on each BNG VM. For example:

    Connect to one of the BNG VMs with admin credentials:

    # virtctl console vrouter-dut-2
    Successfully connected to vrouter-dut-2 console. The escape sequence is ^]
    
    BNG-2 login: admin
    Password:
    [...]
    

Then, check the session statistics; each of the 3 BNG VMs should hold about one third of the 30 000 sessions:

BNG-2> show ipoe-server-stats
Sessions counters
 active    : 10000
 starting  : 0
 delayed   : 0
BNG-2>