2. IPoE multiple instances / CNF

As in the previous section, IPoE multiple instances, this section shows how to set up multiple IPoE server instances to increase the number of sessions.

This time, however, the instances are deployed as CNFs in a Kubernetes environment.

2.1. Prerequisites

See the Prerequisites section.

Note

There is no need to deploy and configure the smarter-device-manager plugin, since PPP is not used inside the container.

2.2. Deploy BNG instances into the cluster

Several configuration files (YAML files) will be applied.

First of all, create the following files:

  • pci-config.yml:
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: pci-config
    data:
      bus_addr_to_name.py: |
        #!/usr/bin/env python3
        import sys
        import re
        BUS_ADDR_RE = re.compile(r'''
        ^
        (?P<domain>([\da-f]+)):
        (?P<bus>([\da-f]+)):
        (?P<slot>([\da-f]+))\.
        (?P<func>(\d+))
        $
        ''', re.VERBOSE | re.IGNORECASE)
    
        def bus_addr_to_name(bus_addr):
            """
            Convert a PCI bus address into a port name as used in nc-cli.
            """
            match = BUS_ADDR_RE.match(bus_addr)
            if not match:
                raise ValueError('pci bus address %s does not match regexp' % bus_addr)
    
            d = match.groupdict()
            domain = int(d['domain'], 16)
            bus = int(d['bus'], 16)
            slot = int(d['slot'], 16)
            func = int(d['func'], 10)
    
            name = 'pci-'
            if domain != 0:
                name += 'd%d' % domain
            name += 'b%ds%d' % (bus, slot)
            if func != 0:
                name += 'f%d' % func
            print (name)
    
        if __name__ == "__main__":
            bus_addr_to_name(sys.argv[1])
    
  • run-after-ready-probe-config.yml:
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: run-after-ready-probe
    data:
      run-after-ready.sh: |
        #!/bin/sh
        pidof nc-cli && nc-cli -n -f /mnt/config/config-target.cli
    
  • bng-ipoe-config.yml:
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: bng-config
    data:
      bng.cli: |
        netconf connect
        edit running
        / system license online serial <serialnumber>
        / system hostname BNG-_INDEX_
        / system fast-path port _PCI_
        / vrf main interface physical eth1 port _PCI_
        / vrf main interface veth veth1.1 link-interface veth1.2
        / vrf main interface veth veth1.1 link-vrf vrf1
        / vrf main interface veth veth1.1 ipv4 address 10.10.0.1/24
        / vrf vrf1 interface veth veth1.2 link-interface veth1.1
        / vrf vrf1 interface veth veth1.2 link-vrf main
        / vrf vrf1 interface veth veth1.2 ipv4 address 10.10.0.2/24
        / vrf vrf1 dhcp server enabled true
        / vrf vrf1 dhcp server subnet 172._SUBNET_.0.0/16 interface veth1.2
        / vrf vrf1 dhcp server subnet 172._SUBNET_.0.0/16 range 172._SUBNET_.0.4 172._SUBNET_.255.254
        / vrf main interface physical eth1 ipv4 address 172._SUBNET_.0.2/24
        / vrf main ipoe-server enabled true
        / vrf main ipoe-server limits max-session 10000
        / vrf main ipoe-server dhcp-relay interface eth1 agent-information relay-address 10.10.0.1
        / vrf main ipoe-server dhcp-relay interface eth1 agent-information trusted-circuit false
        / vrf main ipoe-server dhcp-relay interface eth1 agent-information link-selection 172._SUBNET_.0.2
        / vrf main ipoe-server dhcp-relay interface eth1 router 172._SUBNET_.0.2
        / vrf main ipoe-server dhcp-relay server 10.10.0.2
        commit
    

The file bng-ipoe-config.yml contains an IPoE server configuration in the main VRF and a DHCP server configuration in a separate VRF. Both are applied during the bng container startup.

You can update the content of this file as needed, including the serial number of your license and the DHCP pool.
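
The _INDEX_, _SUBNET_ and _PCI_ placeholders are replaced at container startup by the deployment described later in this section: _INDEX_ becomes the pod index, _SUBNET_ the pod index plus 16, and _PCI_ the port name derived from the PCI address of the VF assigned to the pod. The helper script in pci-config.yml performs that last conversion; as an illustration, assuming the script is extracted locally and a VF sits at the hypothetical address 0000:17:02.1:

# python3 bus_addr_to_name.py 0000:17:02.1
pci-b23s2f1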

See also

See the User’s Guide for more information.

Download those files and copy them to the host.

# ls -l
[...]
-rw-r--r-- 1 root root 1707 Oct 18 15:34 bng-ipoe-config.yml
-rw-r--r-- 1 root root 1175 Oct 18 15:31 pci-config.yml
-rw-r--r-- 1 root root  255 Oct 18 15:32 run-after-ready-probe-config.yml
[...]

From the path where you stored the files on the host, execute the following commands:

# kubectl create -f pci-config.yml
configmap/pci-config created

# kubectl create -f bng-ipoe-config.yml
configmap/bng-config created

# kubectl create -f run-after-ready-probe-config.yml
configmap/run-after-ready-probe created
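
You can check that the three ConfigMaps were created (output trimmed to the objects created above; ages are illustrative):

# kubectl get configmaps
NAME                    DATA   AGE
bng-config              1      20s
pci-config              1      30s
run-after-ready-probe   1      10s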

Note

Those files can be downloaded from the 6WIND deployment guide repository.

Then, create the following file:

  • bng-ipoe-statefulset-deployment.yml:
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      labels:
        app: bng
      name: bng
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: bng
      template:
        metadata:
          annotations:
            k8s.v1.cni.cncf.io/networks: multus-intel-sriov-nic-vsr1
            container.apparmor.security.beta.kubernetes.io/bng: unconfined
          labels:
            app: bng
        spec:
          containers:
          - image: download.6wind.com/vsr/x86_64-ce/3.10:3.10.0.ga
            imagePullPolicy: IfNotPresent
            name: bng
            lifecycle:
              postStart:
                exec:
                  command:
                  - sh
                  - "-c"
                  - |
                    /bin/sh <<'EOF'
                    export PCI_BUS=$(printenv | grep -i pcidevice | awk -F '=' '{print $2}')
                    export PCI_NAME=$(python3 /mnt/script/bus_addr_to_name.py $PCI_BUS)
                    sed -e "s/_INDEX_/$INDEX/g" \
                    -e "s/_SUBNET_/$((INDEX+16))/g" \
                    -e "s/_PCI_/${PCI_NAME}/g" \
                    /mnt/template/bng.cli > /mnt/config/config-target.cli
                    EOF
            startupProbe:
              exec:
                command:
                  - sh
                  - -c
                  - /mnt/probe/run-after-ready.sh
              failureThreshold: 5
            resources:
              limits:
                cpu: "4"
                memory: 2Gi
                hugepages-1Gi: 8Gi
                intel/sriov_vfio: '1'
              requests:
                cpu: "4"
                memory: 2Gi
                hugepages-1Gi: 8Gi
                intel/sriov_vfio: '1'
            env:
            - name: K8S_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: K8S_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: K8S_POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: K8S_POD_CPU_REQUEST
              valueFrom:
                resourceFieldRef:
                  resource: requests.cpu
            - name: K8S_POD_MEM_REQUEST
              valueFrom:
                resourceFieldRef:
                  resource: requests.memory
            - name: INDEX
              valueFrom:
                fieldRef:
                  fieldPath: metadata.labels['apps.kubernetes.io/pod-index']
            securityContext:
              capabilities:
                add: ["NET_ADMIN", "NET_RAW", "SYS_ADMIN", "SYS_NICE", "IPC_LOCK", "NET_BROADCAST", "SYSLOG", "SYS_TIME"
                     ]
            volumeMounts:
            - mountPath: /dev/hugepages
              name: hugepage
            - mountPath: /dev/shm
              name: shm
            - mountPath: /run
              name: run
            - mountPath: /run/lock
              name: run-lock
            - mountPath: /mnt/config
              name: conf-vol
              subPath: init-config.cli
            - mountPath: /mnt/script
              name: py-vol
            - mountPath: /mnt/template
              name: scripts-vol
            - mountPath: /mnt/probe
              name: probe-vol
          imagePullSecrets:
          - name: regcred
          securityContext:
            sysctls:
            - name: net.ipv4.conf.default.disable_policy
              value: "1"
            - name: net.ipv4.ip_local_port_range
              value: "30000 40000"
            - name: net.ipv4.ip_forward
              value: "1"
            - name: net.ipv6.conf.all.forwarding
              value: "1"
            - name: net.netfilter.nf_conntrack_events
              value: "1"
          volumes:
          - emptyDir:
              medium: HugePages
              sizeLimit: 8Gi
            name: hugepage
          - emptyDir:
              medium: Memory
              sizeLimit: 2Gi
            name: shm
          - emptyDir:
              medium: Memory
              sizeLimit: 500Mi
            name: tmp
          - emptyDir:
              medium: Memory
              sizeLimit: 200Mi
            name: run
          - emptyDir:
              medium: Memory
              sizeLimit: 200Mi
            name: run-lock
          - configMap:
              defaultMode: 365
              name: bng-config
            name: scripts-vol
          - emptyDir: {}
            name: conf-vol
          - configMap:
              defaultMode: 365
              name: pci-config
            name: py-vol
          - configMap:
              defaultMode: 365
              name: run-after-ready-probe
            name: probe-vol
    

This deployment file instantiates 3 bng containers. You can update its content as needed, including the number of replicas (instances). Note that some parameters in this file must match the configuration set in the prerequisites section, for example the network attachment definition and the SR-IOV resource name.
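
As a quick check of the per-pod values computed by the postStart hook (the pod index plus 16 gives the _SUBNET_ value), the three replicas are rendered as follows:

# for INDEX in 0 1 2; do echo "bng-$INDEX -> hostname BNG-$INDEX, subnet 172.$((INDEX+16)).0.0/16"; done
bng-0 -> hostname BNG-0, subnet 172.16.0.0/16
bng-1 -> hostname BNG-1, subnet 172.17.0.0/16
bng-2 -> hostname BNG-2, subnet 172.18.0.0/16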

See also

See the Getting Started guide for more information.

Before deploying this file, you should create a Kubernetes secret to authenticate to the 6WIND registry, using the credentials provided by 6WIND.
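
A minimal sketch of the secret creation, assuming the secret is named regcred (as referenced by imagePullSecrets in the deployment file) and that <login> and <password> are the credentials provided by 6WIND:

# kubectl create secret docker-registry regcred \
          --docker-server=download.6wind.com \
          --docker-username=<login> \
          --docker-password=<password>
secret/regcred created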

See also

See the Getting Started guide for more information.

Note

This file can be downloaded from the 6WIND deployment guide repository.

Download this file and copy it to the host.

# ls -l
[...]
-rw-r--r-- 1 root root 4260 Oct 18 15:56 bng-ipoe-statefulset-deployment.yml
[...]

From the path where you stored the file on the host, execute the following command:

# kubectl create -f bng-ipoe-statefulset-deployment.yml
statefulset.apps/bng created
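
Optionally, wait for the rollout to complete before inspecting the pods:

# kubectl rollout status statefulset bng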

Check that the 3 pods just deployed are correctly created, running and ready:

# kubectl get pods
NAME    READY   STATUS    RESTARTS   AGE
bng-0   1/1     Running   0          4m39s
bng-1   1/1     Running   0          3m46s
bng-2   1/1     Running   0          3m7s
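
You can also verify that the postStart hook rendered the per-pod configuration, for instance by checking the hostname line generated for bng-0 (assuming standard shell utilities are available in the container image):

# kubectl exec bng-0 -- grep hostname /mnt/config/config-target.cli
/ system hostname BNG-0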

2.3. Start sessions

For testing purposes, we will use the bngblaster tool to create sessions. For 3 vBNGs, we will create 30,000 sessions (10,000 per instance).

  1. Get an Ubuntu 22.04 image and spawn a new virtual machine. This time, use a PF interface (passthrough mode).

    # cp ubuntu-22.04.qcow2 /var/lib/libvirt/images/blaster.qcow2
    
    # virt-install --name blaster --vcpus=6,sockets=1,cores=3,threads=2 \
                   --os-variant ubuntu22.04 --cpu host --network=default,model=e1000 \
                   --ram 8192 --noautoconsole --import \
                   --memorybacking hugepages=yes \
                   --disk /var/lib/libvirt/images/blaster.qcow2,device=disk,bus=virtio \
                   --host-device 17:00.1
    
  2. Pin vCPUs to host CPUs.

    # virsh vcpupin blaster 0 7
    # virsh vcpupin blaster 1 31
    # virsh vcpupin blaster 2 8
    # virsh vcpupin blaster 3 32
    # virsh vcpupin blaster 4 9
    # virsh vcpupin blaster 5 33
    
  3. Log on the blaster instance.

    # virsh console blaster
    
  4. Configure the interfaces.

    # ip link set dev ens1 up
    # ip link set dev eth0 up
    # dhclient ens1
    
  5. Install bngblaster.

    # dpkg-query -W bngblaster || { wget https://github.com/rtbrick/bngblaster/releases/download/0.9.7/bngblaster-0.9.7-ubuntu-22.04_amd64.deb; dpkg -i bngblaster-0.9.7-ubuntu-22.04_amd64.deb; }
    
  6. Configure and start bngblaster.

    # cat bngblaster_ipoe.json
    {
        "interfaces": {
            "access": [
            {
                "interface": "eth0",
                "type": "ipoe",
                "vlan-mode": "N:1"
            }
        ]
        },
        "dhcp": {
            "enable": true
        },
        "dhcpv6": {
            "enable": false
        },
        "access-line": {
        },
        "sessions": {
            "max-outstanding": 1000,
            "reconnect": true,
            "count": 30000
        },
        "ipoe": {
            "ipv4": true,
            "ipv6": false
        }
    }
    
    # bngblaster -C bngblaster_ipoe.json -J /tmp/report.json -S /tmp/bngblaster.sock -L /tmp/bng.log -I
    
  7. Check active sessions on each vBNG. For example:

    Connect to one of the bng instances with admin credentials:

    # kubectl exec -it bng-2 -- login
    
    BNG-2> show ipoe-server-stats
    Sessions counters
     active    : 10000
     starting  : 0
     delayed   : 0
    BNG-2>
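
    The blaster side can be used to cross-check the totals: the JSON report written to /tmp/report.json summarizes the established sessions at the end of the run, and the control socket passed with -S can be queried while the test is running. A sketch, assuming the bngblaster-cli helper shipped with the bngblaster package:

    # bngblaster-cli /tmp/bngblaster.sock session-counters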