1. PPPoE multiple instances / CNF

As already described in the previous PPPoE multiple instances section, this section shows how to set up multiple PPPoE server instances to increase the number of supported sessions.

However, this time, the instances are deployed as CNFs (cloud-native network functions) in a Kubernetes environment.

1.1. Prerequisites

Ensure a Kubernetes cluster is fully deployed.

# kubectl get nodes
NAME            STATUS   ROLES           AGE   VERSION
node1   Ready    control-plane   24m   v1.29.9
# kubectl get pod -A
NAMESPACE                NAME                                    READY   STATUS    RESTARTS   AGE
kube-flannel             kube-flannel-ds-gr2l8                   1/1     Running   0          75m
kube-system              coredns-76f75df574-mtdzp                1/1     Running   0          75m
kube-system              coredns-76f75df574-zk2b7                1/1     Running   0          75m
kube-system              etcd-node1                              1/1     Running   0          75m
kube-system              kube-apiserver-node1                    1/1     Running   0          75m
kube-system              kube-controller-manager-node1           1/1     Running   0          75m
kube-system              kube-multus-ds-8zghd                    1/1     Running   0          75m
kube-system              kube-proxy-zcggh                        1/1     Running   0          75m
kube-system              kube-scheduler-node1                    1/1     Running   0          75m
kube-system              kube-sriov-device-plugin-amd64-ghx9b    1/1     Running   0          74m
smarter-device-manager   smarter-device-manager-7vplw            1/1     Running   0          74m

As PPP will be used inside the containers, the /dev/ppp device must be passed to each container. Ensure the smarter-device-manager plugin is deployed and configured.

# kubectl get ds -n smarter-device-manager
NAME                     DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
smarter-device-manager   1         1         1       1            1           <none>          89m
# kubectl get pod -n smarter-device-manager
NAME                           READY   STATUS    RESTARTS   AGE
smarter-device-manager-7vplw   1/1     Running   0          91m
# kubectl describe cm smarter-device-manager -n smarter-device-manager
Name:         smarter-device-manager
Namespace:    smarter-device-manager
Labels:       <none>
Annotations:  <none>

Data
====
conf.yaml:
----
- devicematch: ^ppp$
  nummaxdevices: 100


BinaryData
====

Events:  <none>
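
The devicematch pattern above exposes the host's /dev/ppp character device (created by the ppp_generic kernel module, char device 108:0) to requesting pods as the smarter-devices/ppp resource. As a quick sanity check, you can confirm the device exists on the node; expect an output similar to:

# modprobe ppp_generic
# ls -l /dev/ppp
crw------- 1 root root 108, 0 Oct 17 14:30 /dev/ppp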

Also, ensure the Multus plugin and the SR-IOV device plugin are installed and configured.

# kubectl get pod -A |grep -i sriov
kube-system              kube-sriov-device-plugin-amd64-pfsk5    1/1     Running   0          26m
# kubectl get pod -A |grep -i multus
kube-system              kube-multus-ds-kwmvt                    1/1     Running   0          28m

# kubectl get net-attach-def -A
NAMESPACE   NAME                          AGE
default     multus-intel-sriov-nic-vsr1   71s
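
Assuming the network attachment definition was created as described in the Getting Started guide, you can inspect it to check that its resourceName annotation matches the SR-IOV resource requested by the pods (intel/sriov_vfio in the deployment below):

# kubectl get net-attach-def multus-intel-sriov-nic-vsr1 -o yaml | grep resourceName
    k8s.v1.cni.cncf.io/resourceName: intel/sriov_vfio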

Each PPPoE server instance deployed as a CNF needs an SR-IOV interface in its container. For N instances, N VFs must therefore be created on the host physical interface; each container is given one VF through SR-IOV.

# lspci|grep -i eth
[...]
32:00.0 Ethernet controller: Intel Corporation Ethernet Controller E810-XXV for SFP (rev 02)
32:00.1 Ethernet controller: Intel Corporation Ethernet Controller E810-XXV for SFP (rev 02)
32:01.0 Ethernet controller: Intel Corporation Ethernet Adaptive Virtual Function (rev 02)
32:01.1 Ethernet controller: Intel Corporation Ethernet Adaptive Virtual Function (rev 02)
32:01.2 Ethernet controller: Intel Corporation Ethernet Adaptive Virtual Function (rev 02)
32:01.3 Ethernet controller: Intel Corporation Ethernet Adaptive Virtual Function (rev 02)
32:11.0 Ethernet controller: Intel Corporation Ethernet Adaptive Virtual Function (rev 02)
[...]
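
The VFs listed above are typically created through the PF's sriov_numvfs sysfs entry (a sketch; replace <pf-interface> with the host interface name matching PCI device 32:00.0 and <N> with the number of instances):

# echo <N> > /sys/class/net/<pf-interface>/device/sriov_numvfs
# cat /sys/class/net/<pf-interface>/device/sriov_numvfs
<N>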

To fulfill all the prerequisites, you can follow the steps described in the corresponding sections of the Getting Started guide.

1.2. Deploy BNG instances into the cluster

Several YAML configuration files will be applied.

First of all, create the following files:

  • pci-config.yml:
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: pci-config
    data:
      bus_addr_to_name.py: |
        #!/usr/bin/env python3
        import sys
        import re
        BUS_ADDR_RE = re.compile(r'''
        ^
        (?P<domain>([\da-f]+)):
        (?P<bus>([\da-f]+)):
        (?P<slot>([\da-f]+))\.
        (?P<func>(\d+))
        $
        ''', re.VERBOSE | re.IGNORECASE)
    
        def bus_addr_to_name(bus_addr):
            """
            Convert a PCI bus address into a port name as used in nc-cli.
            """
            match = BUS_ADDR_RE.match(bus_addr)
            if not match:
                raise ValueError('pci bus address %s does not match regexp' % bus_addr)
    
            d = match.groupdict()
            domain = int(d['domain'], 16)
            bus = int(d['bus'], 16)
            slot = int(d['slot'], 16)
            func = int(d['func'], 10)
    
            name = 'pci-'
            if domain != 0:
                name += 'd%d' % domain
            name += 'b%ds%d' % (bus, slot)
            if func != 0:
                name += 'f%d' % func
            print (name)
    
        if __name__ == "__main__":
            bus_addr_to_name(sys.argv[1])
    
  • run-after-ready-probe-config.yml:
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: run-after-ready-probe
    data:
      run-after-ready.sh: |
        #!/bin/sh
        pidof nc-cli && nc-cli -n -f /mnt/config/config-target.cli
    
  • bng-pppoe-config.yml:
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: bng-config
    data:
      bng.cli: |
        netconf connect
        edit running
        / system license online serial <serialnumber>
        / system hostname BNG-_INDEX_
        / system fast-path port _PCI_
        / vrf main interface physical eth1 port _PCI_
        / system fast-path limits fp-max-if 10300
        / system fast-path limits pppoe-max-channel 10300
        / system fast-path limits ip4-max-addr 10300
        / system fast-path limits ip6-max-addr 10300
        / vrf main ppp-server instance p enabled true
        / vrf main ppp-server instance p ppp ipcp require
        / vrf main ppp-server instance p ppp lcp echo-interval 180 echo-failure 1
        / vrf main ppp-server instance p ip-pool default-local-ip 172._SUBNET_.255.254
        / vrf main ppp-server instance p ip-pool pool pppoe peer-pool 172._SUBNET_.0.0/16
        / vrf main ppp-server instance p max-sessions 10000
        / vrf main ppp-server instance p pppoe enabled true
        / vrf main ppp-server instance p pppoe ip-pool pppoe
        / vrf main ppp-server instance p pppoe interface eth1
        / vrf main ppp-server instance p auth radius enabled false
        / vrf main ppp-server instance p auth peer-secrets enabled true
        / vrf main ppp-server instance p auth peer-secrets secrets test password test
        / vrf main ppp-server instance p auth peer-auth-mode pap
        commit
    

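For reference, the bus_addr_to_name.py script from pci-config.yml converts the PCI bus address of the VF allocated to a pod into the fast-path port name used in the bng.cli template. For example, for a VF at address 0000:32:01.0 (bus 0x32 = 50, slot 0x01 = 1, function 0):

# python3 bus_addr_to_name.py 0000:32:01.0
pci-b50s1
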
The bng-pppoe-config.yml file contains a PPPoE server configuration that is applied during the bng container startup.

You can update the content of this file as needed, including the serial number of your license and the IP pool.

See also

See the User’s Guide for more information.

Download those files and copy them to the host.

# ls -l
[...]
-rw-r--r-- 1 root root 1344 Oct 17 15:05 bng-pppoe-config.yml
-rw-r--r-- 1 root root 1174 Oct 17 14:57 pci-config.yml
-rw-r--r-- 1 root root  241 Oct 17 15:03 run-after-ready-probe-config.yml
[...]

From the path where you stored the files on the host, execute the following commands:

# kubectl create -f pci-config.yml
configmap/pci-config created

# kubectl create -f bng-pppoe-config.yml
configmap/bng-config created

# kubectl create -f run-after-ready-probe-config.yml
configmap/run-after-ready-probe created
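
You can check that the three ConfigMaps are available (the AGE column will differ):

# kubectl get cm pci-config bng-config run-after-ready-probe
NAME                    DATA   AGE
pci-config              1      43s
bng-config              1      31s
run-after-ready-probe   1      12s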

Note

Those files can be downloaded from the 6WIND deployment guide repository.

Then, create the following file:

  • bng-pppoe-statefulset-deployment.yml:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: bng
  name: bng
spec:
  replicas: 3
  selector:
    matchLabels:
      app: bng
  template:
    metadata:
      annotations:
        k8s.v1.cni.cncf.io/networks: multus-intel-sriov-nic-vsr1
        container.apparmor.security.beta.kubernetes.io/bng: unconfined
      labels:
        app: bng
    spec:
      containers:
      - image: download.6wind.com/vsr/x86_64-ce/3.10:3.10.0.ga
        imagePullPolicy: IfNotPresent
        name: bng
        lifecycle:
          postStart:
            exec:
              command:
              - sh
              - "-c"
              - |
                /bin/sh <<'EOF'
                export PCI_BUS=$(printenv | grep -i pcidevice | awk -F '=' '{print $2}')
                export PCI_NAME=$(python3 /mnt/script/bus_addr_to_name.py $PCI_BUS)
                sed -e "s/_INDEX_/$INDEX/g" \
                -e "s/_SUBNET_/$((INDEX+16))/g" \
                -e "s/_PCI_/${PCI_NAME}/g" \
                /mnt/template/bng.cli > /mnt/config/config-target.cli
                EOF
        startupProbe:
          exec:
            command:
              - sh
              - -c
              - /mnt/probe/run-after-ready.sh
          failureThreshold: 5
        resources:
          limits:
            cpu: "4"
            memory: 2Gi
            hugepages-1Gi: 8Gi
            intel/sriov_vfio: '1'
            smarter-devices/ppp: 1
          requests:
            cpu: "4"
            memory: 2Gi
            hugepages-1Gi: 8Gi
            intel/sriov_vfio: '1'
            smarter-devices/ppp: 1
        env:
        - name: K8S_NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: K8S_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: K8S_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: K8S_POD_CPU_REQUEST
          valueFrom:
            resourceFieldRef:
              resource: requests.cpu
        - name: K8S_POD_MEM_REQUEST
          valueFrom:
            resourceFieldRef:
              resource: requests.memory
        - name: INDEX
          valueFrom:
            fieldRef:
              fieldPath: metadata.labels['apps.kubernetes.io/pod-index']
        securityContext:
          capabilities:
            add: ["NET_ADMIN", "NET_RAW", "SYS_ADMIN", "SYS_NICE", "IPC_LOCK",
                  "NET_BROADCAST", "SYSLOG", "SYS_TIME"]
        volumeMounts:
        - mountPath: /dev/hugepages
          name: hugepage
        - mountPath: /dev/shm
          name: shm
        - mountPath: /run
          name: run
        - mountPath: /run/lock
          name: run-lock
        - mountPath: /tmp
          name: tmp
        - mountPath: /mnt/config
          name: conf-vol
          subPath: init-config.cli
        - mountPath: /mnt/script
          name: py-vol
        - mountPath: /mnt/template
          name: scripts-vol
        - mountPath: /mnt/probe
          name: probe-vol
      imagePullSecrets:
      - name: regcred
      securityContext:
        sysctls:
        - name: net.ipv4.conf.default.disable_policy
          value: "1"
        - name: net.ipv4.ip_local_port_range
          value: "30000 40000"
        - name: net.ipv4.ip_forward
          value: "1"
        - name: net.ipv6.conf.all.forwarding
          value: "1"
        - name: net.netfilter.nf_conntrack_events
          value: "1"
      volumes:
      - emptyDir:
          medium: HugePages
          sizeLimit: 8Gi
        name: hugepage
      - emptyDir:
          medium: Memory
          sizeLimit: 2Gi
        name: shm
      - emptyDir:
          medium: Memory
          sizeLimit: 500Mi
        name: tmp
      - emptyDir:
          medium: Memory
          sizeLimit: 200Mi
        name: run
      - emptyDir:
          medium: Memory
          sizeLimit: 200Mi
        name: run-lock
      - configMap:
          defaultMode: 365
          name: bng-config
        name: scripts-vol
      - emptyDir: {}
        name: conf-vol
      - configMap:
          defaultMode: 365
          name: pci-config
        name: py-vol
      - configMap:
          defaultMode: 365
          name: run-after-ready-probe
        name: probe-vol

Through this YAML deployment file, 3 bng containers will be instantiated. You can update its content as needed, including the number of replicas (instances). Some parameters in this file must match the configuration set in the prerequisites section, such as the network attachment name (multus-intel-sriov-nic-vsr1) and the SR-IOV resource name (intel/sriov_vfio). At startup, the postStart hook renders the bng.cli template for each pod: _INDEX_ is replaced with the pod index (giving the hostnames BNG-0, BNG-1, ...), _SUBNET_ with the pod index plus 16 (so bng-0 serves the pool 172.16.0.0/16, bng-1 172.17.0.0/16, and bng-2 172.18.0.0/16), and _PCI_ with the fast-path port name derived from the PCI address of the VF allocated to the pod.
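
For example, the number of instances can be changed after deployment by scaling the StatefulSet, provided a free VF and a /dev/ppp device allocation are available for each additional replica:

# kubectl scale statefulset bng --replicas=5
statefulset.apps/bng scaled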

See also

See the Getting Started guide for more information.

Before deploying this file, you must create a Kubernetes secret to authenticate to the 6WIND registry, using the credentials provided by 6WIND.
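
For example (a sketch; regcred is the secret name referenced by imagePullSecrets in the deployment file, and the credentials are those provided by 6WIND):

# kubectl create secret docker-registry regcred \
        --docker-server=download.6wind.com \
        --docker-username=<login> \
        --docker-password=<password>
secret/regcred created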

See also

See the Getting Started guide for more information.

Note

This file can be downloaded from the 6WIND deployment guide repository.

Download this file and copy it to the host.

# ls -l
[...]
-rw-r--r--  1 root root 4497 Oct 17 16:52 bng-pppoe-statefulset-deployment.yml
[...]

From the path where you stored the file on the host, execute the following command:

# kubectl create -f bng-pppoe-statefulset-deployment.yml
statefulset.apps/bng created

Check that the 3 pods just deployed are correctly created, running and ready:

# kubectl get pods
NAME    READY   STATUS    RESTARTS   AGE
bng-0   1/1     Running   0          4m35s
bng-1   1/1     Running   0          3m42s
bng-2   1/1     Running   0          3m4s
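
Each pod is assigned its index through the apps.kubernetes.io/pod-index label, which the deployment file uses to derive the hostname and IP pool. You can display it to confirm the mapping (this label is set by default on StatefulSet pods since Kubernetes 1.28):

# kubectl get pods -L apps.kubernetes.io/pod-index
NAME    READY   STATUS    RESTARTS   AGE     POD-INDEX
bng-0   1/1     Running   0          4m35s   0
bng-1   1/1     Running   0          3m42s   1
bng-2   1/1     Running   0          3m4s    2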

1.3. Start sessions

For testing purposes, we will use the bngblaster tool to create sessions. For the 3 vBNG instances, we will create 30,000 sessions in total (10,000 per instance, matching the max-sessions setting).

  1. Get an Ubuntu 22.04 image and spawn a new virtual machine. This time, use a PF interface (passthrough mode).

    # cp ubuntu-22.04.qcow2 /var/lib/libvirt/images/blaster.qcow2
    
    # virt-install --name blaster --vcpus=6,sockets=1,cores=3,threads=2 \
                   --os-variant ubuntu22.04 --cpu host --network=default,model=e1000 \
                   --ram 8192 --noautoconsole --import \
                   --memorybacking hugepages=yes \
                   --disk /var/lib/libvirt/images/blaster.qcow2,device=disk,bus=virtio \
                   --host-device 17:00.1
    
  2. Pin vCPUs to host CPUs.

    # virsh vcpupin blaster 0 7
    # virsh vcpupin blaster 1 31
    # virsh vcpupin blaster 2 8
    # virsh vcpupin blaster 3 32
    # virsh vcpupin blaster 4 9
    # virsh vcpupin blaster 5 33
    
  3. Log in to the blaster instance.

    # virsh console blaster
    
  4. Configure the interfaces.

    # ip link set dev ens1 up
    # ip link set dev eth0 up
    # dhclient ens1
    
  5. Install bngblaster.

    # dpkg-query -W bngblaster || { wget https://github.com/rtbrick/bngblaster/releases/download/0.9.7/bngblaster-0.9.7-ubuntu-22.04_amd64.deb; dpkg -i bngblaster-0.9.7-ubuntu-22.04_amd64.deb; }
    
  6. Configure and start bngblaster.

    # cat bngblaster.json
    {
        "interfaces": {
            "access": [
                {
                    "__comment__": "PPPoE Client",
                    "interface": "eth0",
                    "type": "pppoe",
                    "vlan-mode": "N:1",
                    "stream-group-id": 0
                }
            ]
        },
        "sessions": {
            "max-outstanding": 800,
            "reconnect": true,
            "count": 30000
        },
        "pppoe": {
            "reconnect": true
        },
        "ppp": {
            "authentication": {
                "username": "test",
                "password": "test",
                "timeout": 5,
                "retry": 30,
                "protocol": "PAP"
            },
            "lcp": {
                "conf-request-timeout": 10,
                "conf-request-retry": 5,
                "keepalive-interval": 0,
                "keepalive-retry": 3
            },
            "ipcp": {
                "enable": true,
                "request-ip": true,
                "request-dns1": false,
                "request-dns2": false,
                "conf-request-timeout": 1,
                "conf-request-retry": 10
            },
            "ip6cp": {
                "enable": false
            }
        },
        "dhcpv6": {
            "enable": false
        },
        "session-traffic": {
            "ipv4-pps": 1
        }
    }
    
    # bngblaster -C bngblaster.json -J /tmp/report.json -S /tmp/bngblaster.sock -L /tmp/bng.log -I
    
  7. Check active sessions on each BNG container.

    Connect to one of the bng instances with admin credentials:

    # kubectl exec -it bng-2 -- login
    

Then, display the PPP server statistics:

BNG-2> show ppp-server statistics instance p
Sessions counters
 active    : 10001
 starting  : 0
 finishing : 0

PPPoE counters
 active        : 10001
 starting      : 0
 PADI received : 30314
 PADI dropped  : 0
 PADO sent     : 28725
 PADR received : 10308
 PADS sent     : 10038
BNG-2>
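
From the blaster VM, session statistics can also be queried at runtime through the control socket opened with the -S option (a sketch; session-counters is one of the commands exposed by bngblaster-cli):

# bngblaster-cli /tmp/bngblaster.sock session-counters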