1. PPPoE multiple instances / kubevirt
As described in the previous PPPoE multiple instances section, several PPPoE server instances can be deployed to increase the number of sessions. This section shows the same setup, except that this time the instances are deployed as VMs in a KubeVirt environment on top of a Kubernetes cluster.
1.1. Prerequisites
Ensure that KubeVirt is fully deployed on top of a Kubernetes cluster:
# kubectl get nodes
NAME    STATUS   ROLES           AGE   VERSION
node1   Ready    control-plane   30m   v1.29.10
# kubectl -n kubevirt get kubevirt
NAME       AGE   PHASE
kubevirt   30m   Deployed
# kubectl get pods -n kubevirt
NAME                              READY   STATUS    RESTARTS   AGE
virt-api-54666f869-nv5jx          1/1     Running   0          29m
virt-controller-c67776ccb-kp7d5   1/1     Running   0          29m
virt-controller-c67776ccb-zzwsz   1/1     Running   0          29m
virt-handler-r9r7q                1/1     Running   0          29m
virt-operator-6776f5689d-z8zwh    1/1     Running   0          30m
virt-operator-6776f5689d-zn7cz    1/1     Running   0          30m
# kubectl get pod -A
NAMESPACE      NAME                                   READY   STATUS    RESTARTS   AGE
kube-flannel   kube-flannel-ds-h5ghb                  1/1     Running   0          32m
kube-system    coredns-76f75df574-bkvpd               1/1     Running   0          32m
kube-system    coredns-76f75df574-lbqnz               1/1     Running   0          32m
kube-system    etcd-node1                             1/1     Running   0          33m
kube-system    kube-apiserver-node1                   1/1     Running   0          33m
kube-system    kube-controller-manager-node1          1/1     Running   0          33m
kube-system    kube-multus-ds-xrtph                   1/1     Running   0          32m
kube-system    kube-proxy-k2gt6                       1/1     Running   0          32m
kube-system    kube-scheduler-node1                   1/1     Running   0          33m
kube-system    kube-sriov-device-plugin-amd64-k5hdp   1/1     Running   0          3m41s
kubevirt       virt-api-54666f869-nv5jx               1/1     Running   0          31m
kubevirt       virt-controller-c67776ccb-kp7d5        1/1     Running   0          31m
kubevirt       virt-controller-c67776ccb-zzwsz        1/1     Running   0          31m
kubevirt       virt-handler-r9r7q                     1/1     Running   0          31m
kubevirt       virt-operator-6776f5689d-z8zwh         1/1     Running   0          32m
kubevirt       virt-operator-6776f5689d-zn7cz         1/1     Running   0          32m
Also, ensure that the Multus plugin and the SR-IOV device plugin are installed and configured:
# kubectl get pod -A | grep -i sriov
kube-system kube-sriov-device-plugin-amd64-k5hdp 1/1 Running 0 7m39s
# kubectl get pod -A | grep -i multus
kube-system kube-multus-ds-xrtph 1/1 Running 0 37m
# kubectl get net-attach-def -A
NAMESPACE   NAME                          AGE
default     multus-intel-sriov-nic-vsr1   8m9s
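For reference, such a network attachment definition for an SR-IOV network typically looks like the following sketch. The resourceName annotation and the CNI parameters below are assumptions; they must match the resource pool exposed by your SR-IOV device plugin configuration:

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: multus-intel-sriov-nic-vsr1
  annotations:
    k8s.v1.cni.cncf.io/resourceName: intel.com/intel_sriov_nic_vsr1  # assumed resource pool name
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "sriov",
      "name": "multus-intel-sriov-nic-vsr1"
    }'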
For each PPPoE server instance deployed as a VM, an SR-IOV interface must be provided. So, for N instances, N VFs must be created on the host physical interface; each VF is then passed to one VM through SR-IOV.
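For example, the VFs can be created through sysfs on the host (a minimal sketch; ens1f0 is an assumed PF netdev name, adapt it and the VF count to your setup). The VFs then show up in the lspci output below:

# echo 4 > /sys/class/net/ens1f0/device/sriov_numvfs
# cat /sys/class/net/ens1f0/device/sriov_numvfs
4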
# lspci | grep -i eth
[...]
32:00.0 Ethernet controller: Intel Corporation Ethernet Controller E810-XXV for SFP (rev 02)
32:00.1 Ethernet controller: Intel Corporation Ethernet Controller E810-XXV for SFP (rev 02)
32:01.0 Ethernet controller: Intel Corporation Ethernet Adaptive Virtual Function (rev 02)
32:01.1 Ethernet controller: Intel Corporation Ethernet Adaptive Virtual Function (rev 02)
32:01.2 Ethernet controller: Intel Corporation Ethernet Adaptive Virtual Function (rev 02)
32:01.3 Ethernet controller: Intel Corporation Ethernet Adaptive Virtual Function (rev 02)
32:11.0 Ethernet controller: Intel Corporation Ethernet Adaptive Virtual Function (rev 02)
[...]
Note
KubeVirt relies on the VFIO userspace driver to pass PCI devices into VM guests. Because of that, when configuring SR-IOV operator policies, make sure you define a pool of VF resources that uses deviceType: vfio-pci.
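As an illustration, such a policy could look like the following sketch when using the SR-IOV network operator (the policy name, namespace, resource name and PF name are assumptions to adapt to your cluster):

apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: policy-vsr1                   # assumed policy name
  namespace: sriov-network-operator   # assumed operator namespace
spec:
  resourceName: intel_sriov_nic_vsr1  # assumed resource pool name
  nodeSelector:
    kubernetes.io/hostname: node1
  numVfs: 4
  nicSelector:
    pfNames: ["ens1f0"]               # assumed PF name
  deviceType: vfio-pci                # required for KubeVirt PCI passthrough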
In order to fulfill all the prerequisites, you can follow the steps described in the following sections of the Getting Started guide:
1.2. Deploy BNG instances into the cluster¶
Several YAML configuration files will be applied.
First of all, create the following files:
pci-config.yml:

apiVersion: v1
kind: ConfigMap
metadata:
  name: pci-config
data:
  bus_addr_to_name.py: |
    #!/usr/bin/env python3

    import sys
    import re

    BUS_ADDR_RE = re.compile(r'''
        ^
        (?P<domain>([\da-f]+)):
        (?P<bus>([\da-f]+)):
        (?P<slot>([\da-f]+))\.
        (?P<func>(\d+))
        $
        ''', re.VERBOSE | re.IGNORECASE)

    def bus_addr_to_name(bus_addr):
        """
        Convert a PCI bus address into a port name as used in nc-cli.
        """
        match = BUS_ADDR_RE.match(bus_addr)
        if not match:
            raise ValueError('pci bus address %s does not match regexp' % bus_addr)
        d = match.groupdict()
        domain = int(d['domain'], 16)
        bus = int(d['bus'], 16)
        slot = int(d['slot'], 16)
        func = int(d['func'], 10)
        name = 'pci-'
        if domain != 0:
            name += 'd%d' % domain
        name += 'b%ds%d' % (bus, slot)
        if func != 0:
            name += 'f%d' % func
        print(name)

    if __name__ == "__main__":
        bus_addr_to_name(sys.argv[1])
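For example, running this script manually against the first VF address listed earlier gives the port name used in the fast-path configuration (hypothetical manual invocation; inside the VM, the script is called automatically by cloud-init, as shown later):

# python3 bus_addr_to_name.py 0000:32:01.0
pci-b50s1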
bng-pppoe-config.yml:

apiVersion: v1
kind: ConfigMap
metadata:
  name: bng-config
data:
  bng.cli: |
    netconf connect
    edit running
    / system license online serial <serialnumber>
    / system hostname BNG-_INDEX_
    / system fast-path port _PCI_
    / vrf main interface physical eth1 port _PCI_
    / system fast-path limits fp-max-if 10300
    / system fast-path limits pppoe-max-channel 10300
    / system fast-path limits ip4-max-addr 10300
    / system fast-path limits ip6-max-addr 10300
    / vrf main ppp-server instance p enabled true
    / vrf main ppp-server instance p ppp ipcp require
    / vrf main ppp-server instance p ppp lcp echo-interval 180 echo-failure 1
    / vrf main ppp-server instance p ip-pool default-local-ip 172._SUBNET_.255.254
    / vrf main ppp-server instance p ip-pool pool pppoe peer-pool 172._SUBNET_.0.0/16
    / vrf main ppp-server instance p max-sessions 10000
    / vrf main ppp-server instance p pppoe enabled true
    / vrf main ppp-server instance p pppoe ip-pool pppoe
    / vrf main ppp-server instance p pppoe interface eth1
    / vrf main ppp-server instance p auth radius enabled false
    / vrf main ppp-server instance p auth peer-secrets enabled true
    / vrf main ppp-server instance p auth peer-secrets secrets test password test
    / vrf main ppp-server instance p auth peer-auth-mode pap
    commit
The file named bng-pppoe-config.yml contains the PPPoE server configuration that is applied at BNG instance startup.
You can update the content of this file as needed, including the serial number related to your license and the IP pool.
See also
See the User’s Guide for more information regarding:
Download those files and copy them to the host.
# ls -l
[...]
-rw-r--r-- 1 root root 1346 Nov 13 14:55 bng-pppoe-config.yml
-rw-r--r-- 1 root root 1047 Nov 13 11:15 pci-config.yml
[...]
From the path where you stored the files on the host, execute the following commands:
# kubectl create -f pci-config.yml
configmap/pci-config created
# kubectl create -f bng-pppoe-config.yml
configmap/bng-config created
Note
Those files can be downloaded from the 6WIND deployment guide repository.
Then, create the following file:
kubevirt-bng-pool-deployment.yml:
apiVersion: pool.kubevirt.io/v1alpha1
kind: VirtualMachinePool
metadata:
  name: vrouter-dut
spec:
  replicas: 3
  selector:
    matchLabels:
      kubevirt.io/vmpool: vrouter-dut
  virtualMachineTemplate:
    metadata:
      labels:
        kubevirt.io/vmpool: vrouter-dut
    spec:
      runStrategy: Always
      template:
        metadata:
          labels:
            kubevirt.io/vmpool: vrouter-dut
        spec:
          domain:
            cpu:
              sockets: 1
              cores: 2
              threads: 2
              dedicatedCpuPlacement: true
            devices:
              filesystems:
              - name: scripts-vol
                virtiofs: {}
              - name: py-vol
                virtiofs: {}
              disks:
              - name: ctdisk
                disk: {}
              - name: cloudinitdisk
                disk: {}
              - name: conf-vol
                disk: {}
              interfaces:
              - name: default
                masquerade: {}
              - name: multus-1
                sriov: {}
            resources:
              requests:
                memory: 8Gi
            memory:
              hugepages:
                pageSize: "1Gi"
          networks:
          - name: default
            pod: {}
          - name: multus-1
            multus:
              networkName: multus-intel-sriov-nic-vsr1
              default: false
          volumes:
          - name: ctdisk
            containerDisk:
              image: localhost:5000/dut:current
          - cloudInitNoCloud:
              userData: |-
                #cloud-config
                bootcmd:
                  - mkdir /mnt/template
                  - mkdir /mnt/script
                  - mkdir /mnt/probe
                  - mkdir /mnt/config
                  - mount -t virtiofs scripts-vol /mnt/template
                  - mount -t virtiofs py-vol /mnt/script
                  - mount -t virtiofs probe-vol /mnt/probe
                  - mount -t virtiofs conf-vol /mnt/config
                runcmd:
                  - export INDEX=$(hostname|awk -F "-" '{ print $3 }')
                  - export PCI_BUS=0000:$(lspci|grep -i "virtual function"|awk '{print $1}')
                  - export PCI_NAME=$(python3 /mnt/script/bus_addr_to_name.py $PCI_BUS)
                  - sed -e "s/_INDEX_/$INDEX/g" -e "s/_SUBNET_/$((INDEX+16))/g" -e "s/_PCI_/${PCI_NAME}/g" /mnt/template/bng.cli > /mnt/config/config-target.cli
                  - pidof nc-cli && nc-cli -n -f /mnt/config/config-target.cli
            name: cloudinitdisk
          - configMap:
              name: bng-config
            name: scripts-vol
          - configMap:
              name: pci-config
            name: py-vol
          - name: conf-vol
            emptyDisk:
              capacity: 2Gi
By using filesystem devices, the configMaps are shared through virtiofs, which allows changes to the configMaps to be dynamically propagated to the VMIs. Filesystem devices must be mounted inside the VM; this is done here through cloudInitNoCloud (see the bootcmd section above).
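Once a VM is up, you can check from its console that the virtiofs shares are mounted, for example (the output shown is indicative):

# mount -t virtiofs
scripts-vol on /mnt/template type virtiofs (rw,relatime)
py-vol on /mnt/script type virtiofs (rw,relatime)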
Through this YAML deployment file, 3 BNG VMs are instantiated. You can update the content of this file as needed, including the number of replicas (VMs). Also, some parameters in this file must match the configuration set in the prerequisites section, for example the network resource attachment name (networkName).
Before deploying this file, the 6WIND BNG qcow2 image, packaged as a container image, must be available from a container registry. Update the image field in the containerDisk part of this file accordingly:
...
volumes:
- name: ctdisk
  containerDisk:
    image: localhost:5000/dut:current   # << update here
...
Note
This file can be downloaded from the 6WIND deployment guide repository.
Download this file and copy it to the host.
# ls -l
[...]
-rw-r--r-- 1 root root 2877 Nov 14 11:08 kubevirt-bng-pool-deployment.yml
[...]
From the path where you stored the file on the host, execute the following command:
# kubectl create -f kubevirt-bng-pool-deployment.yml
virtualmachinepool.pool.kubevirt.io/vrouter-dut created
Check that the 3 VMs just deployed are correctly created, running and ready:
# kubectl get vms
NAME            AGE   STATUS    READY
vrouter-dut-0   74s   Running   True
vrouter-dut-1   74s   Running   True
vrouter-dut-2   74s   Running   True
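You can also list the corresponding VM instances; the output reports, for each replica, its phase, its IP address on the default pod network and the node hosting it:

# kubectl get vmi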
1.3. Start sessions
For testing purposes, we will use the bngblaster tool to create sessions. For 3 vBNGs, we will create 30,000 sessions.
Get an Ubuntu 22.04 image and spawn a new virtual machine. This time, use a PF interface (passthrough mode).
# cp ubuntu-22.04.qcow2 /var/lib/libvirt/images/blaster.qcow2
# virt-install --name blaster --vcpus=6,sockets=1,cores=3,threads=2 \
    --os-variant ubuntu22.04 --cpu host --network=default,model=e1000 \
    --ram 8192 --noautoconsole --import \
    --memorybacking hugepages=yes \
    --disk /var/lib/libvirt/images/blaster.qcow2,device=disk,bus=virtio \
    --host-device 17:00.1
Pin vCPUs to host CPUs.
# virsh vcpupin blaster 0 7
# virsh vcpupin blaster 1 31
# virsh vcpupin blaster 2 8
# virsh vcpupin blaster 3 32
# virsh vcpupin blaster 4 9
# virsh vcpupin blaster 5 33
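Optionally, verify the resulting affinity; invoking virsh vcpupin without a vCPU argument lists the current pinning of all vCPUs:

# virsh vcpupin blaster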
Log in to the blaster instance.
# virsh console blaster
Configure the interfaces.
# ip link set dev ens1 up
# ip link set dev eth0 up
# dhclient ens1
Install bngblaster.
# dpkg-query -W bngblaster || \
  { wget https://github.com/rtbrick/bngblaster/releases/download/0.9.7/bngblaster-0.9.7-ubuntu-22.04_amd64.deb; \
    dpkg -i bngblaster-0.9.7-ubuntu-22.04_amd64.deb; }
Configure and start bngblaster.
# cat bngblaster.json
{
    "interfaces": {
        "access": [
            {
                "__comment__": "PPPoE Client",
                "interface": "eth0",
                "type": "pppoe",
                "vlan-mode": "N:1",
                "stream-group-id": 0
            }
        ]
    },
    "sessions": {
        "max-outstanding": 800,
        "reconnect": true,
        "count": 30000
    },
    "pppoe": {
        "reconnect": true
    },
    "ppp": {
        "authentication": {
            "username": "test",
            "password": "test",
            "timeout": 5,
            "retry": 30,
            "protocol": "PAP"
        },
        "lcp": {
            "conf-request-timeout": 10,
            "conf-request-retry": 5,
            "keepalive-interval": 0,
            "keepalive-retry": 3
        },
        "ipcp": {
            "enable": true,
            "request-ip": true,
            "request-dns1": false,
            "request-dns2": false,
            "conf-request-timeout": 1,
            "conf-request-retry": 10
        },
        "ip6cp": {
            "enable": false
        }
    },
    "dhcpv6": {
        "enable": false
    },
    "session-traffic": {
        "ipv4-pps": 1
    }
}
# bngblaster -C bngblaster.json -J /tmp/report.json -S /tmp/bngblaster.sock -L /tmp/bng.log -I
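While the test is running, the control socket passed with -S can also be queried with bngblaster-cli to follow session establishment, for example (a sketch; the exact set of available commands depends on your bngblaster version):

# bngblaster-cli /tmp/bngblaster.sock session-counters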
Check active sessions on each BNG VM.
Connect to one of the BNG VMs with admin credentials:
# virtctl console vrouter-dut-2
Successfully connected to vrouter-dut-2 console. The escape sequence is ^]

BNG-2 login: admin
Password:
[...]
Then, display the PPP server statistics:
BNG-2> show ppp-server statistics instance p
Sessions counters
  active        : 10001
  starting      : 0
  finishing     : 0
PPPoE counters
  active        : 10001
  starting      : 0
  PADI received : 30314
  PADI dropped  : 0
  PADO sent     : 28725
  PADR received : 10308
  PADS sent     : 10038
BNG-2>