1. PPPoE multiple instances

This section shows how to set up multiple PPPoE server instances to increase the number of supported sessions.

1.1. Start instances

Multiple vBNG instances are started with KVM. Each instance has three interfaces: the first is a management interface, the second hosts the PPPoE tunnels, and the third connects to a RADIUS server. For N instances, create N VFs on each of the host's two physical interfaces (the management interface uses the libvirt default network rather than a VF).

  1. Install qemu-kvm

    # sudo apt-get install -y qemu-kvm
    # sudo apt-get install -y virtinst libvirt-daemon-system libvirt-clients
    
  2. Define the needed VFs. For example, 3 VFs on each PF:

    # echo 3 > /sys/class/net/eth1/device/sriov_numvfs
    # echo 3 > /sys/class/net/eth2/device/sriov_numvfs
    
    # lspci | grep Ethernet | grep Mellanox
    17:00.0 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5]
    17:00.1 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5]
    17:00.2 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5 Virtual Function]
    17:00.3 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5 Virtual Function]
    17:00.4 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5 Virtual Function]
    17:10.2 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5 Virtual Function]
    17:10.3 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5 Virtual Function]
    17:10.4 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5 Virtual Function]
    

The devices with PCI IDs 17:00.2, 17:00.3 and 17:00.4 are VFs of the PF with PCI ID 17:00.0. The devices with PCI IDs 17:10.2, 17:10.3 and 17:10.4 are VFs of the PF with PCI ID 17:00.1.

Both PFs (17:00.0 and 17:00.1) should be connected through a switch.
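
You can double-check this mapping from sysfs: each virtfn symlink under a PF's device directory points to the PCI address of one of its VFs. For example, the following should resolve to ../0000:17:00.2:

    # readlink /sys/class/net/eth1/device/virtfn0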

  3. Define hugepages to provide enough memory for all vBNG instances, in the appropriate NUMA node. For example, 60 1GB hugepages:

    # echo 60 > /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages
    # hugeadm --pool-list
           Size   Minimum  Current  Maximum  Default
      2097152           0        0        0
      1073741824       60       60       60        *
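
Note that runtime reservation of 1GB hugepages can fail or be incomplete if host memory is already fragmented; the kernel silently caps nr_hugepages at what it could actually allocate, so read the value back (or use hugeadm as above) to confirm, and reserve the pages at boot time instead (for example with the kernel parameters default_hugepagesz=1G hugepagesz=1G hugepages=60) if needed:

    # cat /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages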
    
  4. Set eth1 up so that the VFs are properly detected in the guest VMs.

    # ip link set eth1 up
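
You can confirm the link state with:

    # ip -br link show eth1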
    
  5. Then use virt-install to spawn the VMs, specifying one --host-device argument for each VF to assign. In this example, we give the VFs with PCI IDs 17:00.2 and 17:10.2 to the first vBNG.

    # cp 6wind-vsr-<arch>-<version>.qcow2 /var/lib/libvirt/images/vm1.qcow2
    # cp 6wind-vsr-<arch>-<version>.qcow2 /var/lib/libvirt/images/vm2.qcow2
    # cp 6wind-vsr-<arch>-<version>.qcow2 /var/lib/libvirt/images/vm3.qcow2
    
    # virt-install --name vbng1 --vcpus=6,sockets=1,cores=3,threads=2 \
                   --os-variant ubuntu22.04 --cpu host --network=default,model=e1000 \
                   --ram 16384 --noautoconsole --import \
                   --memorybacking hugepages=yes \
                   --disk /var/lib/libvirt/images/vm1.qcow2,device=disk,bus=virtio \
                   --host-device 17:00.2 --host-device 17:10.2
    
    # virt-install --name vbng2 --vcpus=6,sockets=1,cores=3,threads=2 \
                   --os-variant ubuntu22.04 --cpu host --network=default,model=e1000 \
                   --ram 16384 --noautoconsole --import \
                   --memorybacking hugepages=yes \
                   --disk /var/lib/libvirt/images/vm2.qcow2,device=disk,bus=virtio \
                   --host-device 17:00.3 --host-device 17:10.3
    
    # virt-install --name vbng3 --vcpus=6,sockets=1,cores=3,threads=2 \
                   --os-variant ubuntu22.04 --cpu host --network=default,model=e1000 \
                   --ram 16384 --noautoconsole --import \
                   --memorybacking hugepages=yes \
                   --disk /var/lib/libvirt/images/vm3.qcow2,device=disk,bus=virtio \
                   --host-device 17:00.4 --host-device 17:10.4
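
At this point, the three VMs should be listed as running:

    # virsh list --all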
    
  6. Pin vCPUs to host CPUs.

    # virsh vcpupin vbng1 0 1
    # virsh vcpupin vbng1 1 25
    # virsh vcpupin vbng1 2 2
    # virsh vcpupin vbng1 3 26
    # virsh vcpupin vbng1 4 3
    # virsh vcpupin vbng1 5 27
    
    # virsh vcpupin vbng2 0 4
    # virsh vcpupin vbng2 1 28
    # virsh vcpupin vbng2 2 5
    # virsh vcpupin vbng2 3 29
    # virsh vcpupin vbng2 4 6
    # virsh vcpupin vbng2 5 30
    
    # virsh vcpupin vbng3 0 10
    # virsh vcpupin vbng3 1 34
    # virsh vcpupin vbng3 2 11
    # virsh vcpupin vbng3 3 35
    # virsh vcpupin vbng3 4 12
    # virsh vcpupin vbng3 5 36
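
The CPU pairs used above (1/25, 2/26, and so on) assume that CPU n and CPU n+24 are hyperthread siblings on this host; check your own topology with lscpu -e, and verify the resulting pinning by calling virsh vcpupin with only the domain name:

    # lscpu -e
    # virsh vcpupin vbng1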
    
  7. Log in to the instance as an admin user with:

    # virsh console vbng1
    
  8. Configure the hostname, the management interface and the network interfaces.

    vsr>
    vsr> show state / network-port
    network-port pci-b6s0
        bus-addr 0000:06:00.0
        vendor "Mellanox Technologies"
        model "MT27800 Family [ConnectX-5 Virtual Function]"
        mac-address 7a:a8:88:ef:42:9d
        interface eth0
        ..
    network-port pci-b7s0
        bus-addr 0000:07:00.0
        vendor "Mellanox Technologies"
        model "MT27800 Family [ConnectX-5 Virtual Function]"
        mac-address 7a:a8:88:ef:60:aa
        interface eth1
        ..
    network-port pci-b2s1
        bus-addr 0000:02:01.0
        vendor "Intel Corporation"
        model "82540EM Gigabit Ethernet Controller"
        mac-address 52:54:00:b1:dc:5b
        interface ens1
        ..
    vsr> edit running
    vsr running config# system hostname vbng-1
    vsr running config# commit
    Configuration committed.
    
    vsr running config#
    vbng-1 running config# vrf main interface physical ens1 port pci-b2s1
    vbng-1 running config# vrf main interface physical ens1 ipv4 dhcp enabled true
    vbng-1 running config# vrf main interface physical eth0 port pci-b6s0
    vbng-1 running config# vrf main interface physical to-radius port pci-b7s0
    vbng-1 running config# vrf main interface physical eth0 ipv4 address 172.16.0.1/16
    vbng-1 running config# vrf main interface physical to-radius ipv4 address 172.20.1.254/16
    vbng-1 running config# commit
    Configuration committed.
    
  9. Add a license.

    See section Installing your license

  10. Configure the fast path. Some limits must be raised to accept 10K sessions; first check that your limits are at least the values below.

    vbng-1> edit running
    vbng-1 running config# / system fast-path enabled true
    vbng-1 running config#! / system fast-path port pci-b6s0
    vbng-1 running config# / system fast-path port pci-b7s0
    vbng-1 running config# / system fast-path limits fp-max-if 10300
    vbng-1 running config# / system fast-path limits pppoe-max-channel 10300
    vbng-1 running config# / system fast-path limits ip4-max-addr 10300
    vbng-1 running config# / system fast-path limits ip6-max-addr 10300
    vbng-1 running config# commit
    Configuration committed.
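
Assuming the state tree mirrors the configuration tree (as with network-port above), the applied fast path settings can be read back with something like:

    vbng-1> show state / system fast-path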
    
  11. Do the same for the other vBNG instances.

See also

See the User’s Guide for more information.

1.2. Configuration

  1. Configure a PPPoE server in the first vBNG on interface eth0.

Define a maximum of 10000 sessions and an IP pool of 172.16.0.0/16. We also limit the number of sessions being established at the same time to 500.

vbng-1> edit running
vbng-1 running config# / vrf main ppp-server instance p enabled true
vbng-1 running config# / vrf main ppp-server instance p ppp ipcp require
vbng-1 running config# / vrf main ppp-server instance p ip-pool default-local-ip 172.16.255.254
vbng-1 running config#! / vrf main ppp-server instance p ip-pool pool pppoe peer-pool 172.16.0.0/16
vbng-1 running config# / vrf main ppp-server instance p max-sessions 10000
vbng-1 running config# / vrf main ppp-server instance p max-starting 500
  2. We also bind the PPP server to interface eth0:

vbng-1 running config# / vrf main ppp-server instance p pppoe enabled true
vbng-1 running config#! / vrf main ppp-server instance p pppoe ip-pool pppoe
vbng-1 running config#! / vrf main ppp-server instance p pppoe interface eth0
  3. Now we configure authentication against a RADIUS server, which is directly connected through the to-radius interface configured previously:

vbng-1 running config# / vrf main ppp-server instance p auth radius
vbng-1 running config# / vrf main ppp-server instance p auth radius enabled true
vbng-1 running config# / vrf main ppp-server instance p auth radius server address 172.20.1.1 auth-port 1812 acct-port 1813 secret 5ecret123
vbng-1 running config# / vrf main ppp-server instance p auth radius nas
vbng-1 running config# / vrf main ppp-server instance p auth radius nas ip-address 172.20.1.254
vbng-1 running config# / vrf main ppp-server instance p auth radius nas identifier 172.20.1.254
vbng-1 running config# / vrf main ppp-server instance p auth radius change-of-authorization-server
vbng-1 running config# / vrf main ppp-server instance p auth radius change-of-authorization-server ip-address 172.20.1.1
vbng-1 running config# / vrf main ppp-server instance p auth radius change-of-authorization-server secret 5ecret123
vbng-1 running config# / vrf main ppp-server instance p auth radius accounting
vbng-1 running config# / vrf main ppp-server instance p auth radius accounting interim-interval 15
vbng-1 running config# / vrf main ppp-server instance p auth radius accounting session-id-in-authentication true
  4. We also need to load-balance sessions across all vBNG instances. Each instance delays its PADO replies by an amount that grows with its number of active sessions, and the delay steps are offset between instances, so the least loaded instance answers first and attracts new sessions:

vbng-1 running config# / vrf main ppp-server instance p pppoe pado-delay session-count 1000 delay 100
vbng-1 running config# / vrf main ppp-server instance p pppoe pado-delay session-count 3000 delay 300
vbng-1 running config# / vrf main ppp-server instance p pppoe pado-delay session-count 5000 delay 500
vbng-1 running config# / vrf main ppp-server instance p pppoe pado-delay session-count 7000 delay 700

Second instance:

vbng-2 running config# / vrf main ppp-server instance p pppoe pado-delay session-count 1000 delay 200
vbng-2 running config# / vrf main ppp-server instance p pppoe pado-delay session-count 3000 delay 400
vbng-2 running config# / vrf main ppp-server instance p pppoe pado-delay session-count 5000 delay 600
vbng-2 running config# / vrf main ppp-server instance p pppoe pado-delay session-count 7000 delay 800

Third instance:

vbng-3 running config# / vrf main ppp-server instance p pppoe pado-delay session-count 1000 delay 300
vbng-3 running config# / vrf main ppp-server instance p pppoe pado-delay session-count 3000 delay 500
vbng-3 running config# / vrf main ppp-server instance p pppoe pado-delay session-count 5000 delay 700
vbng-3 running config# / vrf main ppp-server instance p pppoe pado-delay session-count 7000 delay 900
  5. Optionally, you can also configure KPIs.

vbng-1> edit running
vbng-1 running config# / vrf mgt kpi telegraf metrics monitored-interface vrf main name eth0
vbng-1 running config# / vrf mgt kpi telegraf metrics monitored-interface vrf main name to-radius
vbng-1 running config# / vrf mgt kpi telegraf metrics template all
vbng-1 running config# / vrf mgt kpi telegraf influxdb-output url http://<server-ip>:8086 database telegraf
vbng-1 running config# commit
  6. For the other instances, we change the IP pool and the RADIUS NAS address and identifier.

For example:

vbng-2 running config# / vrf main ppp-server instance p ip-pool default-local-ip 172.17.255.254
vbng-2 running config#! / vrf main ppp-server instance p ip-pool pool pppoe peer-pool 172.17.0.0/18
vbng-2 running config# / vrf main ppp-server instance p auth radius nas ip-address 172.20.1.253
vbng-2 running config# / vrf main ppp-server instance p auth radius nas identifier 172.20.1.253

vbng-3 running config# / vrf main ppp-server instance p ip-pool default-local-ip 172.18.255.254
vbng-3 running config#! / vrf main ppp-server instance p ip-pool pool pppoe peer-pool 172.18.0.0/16
vbng-3 running config# / vrf main ppp-server instance p auth radius nas ip-address 172.20.1.252
vbng-3 running config# / vrf main ppp-server instance p auth radius nas identifier 172.20.1.252

See also

See the User’s Guide for more information.

1.3. Configuring RADIUS

See also

See the RADIUS configuration chapter in this deployment guide.

As we have 3 instances, the FreeRADIUS configuration file /etc/freeradius/3.0/clients.conf should contain:

client BNG1 {
    ipaddr = 172.20.1.254
    secret = "5ecret123"
}

client BNG2 {
    ipaddr = 172.20.1.253
    secret = "5ecret123"
}

client BNG3 {
    ipaddr = 172.20.1.252
    secret = "5ecret123"
}
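
bngblaster, used in the next section, authenticates with PAP as user test with password test, so the RADIUS server must also know that user. Assuming the default files module is enabled, a minimal sketch of an entry in /etc/freeradius/3.0/mods-config/files/authorize (the file behind the classic users symlink) would be:

test Cleartext-Password := "test"
     Service-Type = Framed-User,
     Framed-Protocol = PPP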

1.4. Start sessions

For testing purposes, we will use the bngblaster tool to create sessions. For 3 vBNGs, we will create 30,000 sessions (10,000 per instance).

  1. Get an Ubuntu 22.04 image and spawn a new virtual machine. This time, use a PF interface (passthrough mode).

    cp ubuntu-22.04.qcow2 /var/lib/libvirt/images/blaster.qcow2
    
    virt-install --name blaster --vcpus=6,sockets=1,cores=3,threads=2 \
                   --os-variant ubuntu22.04 --cpu host --network=default,model=e1000 \
                   --ram 8192 --noautoconsole --import \
                   --memorybacking hugepages=yes \
                   --disk /var/lib/libvirt/images/blaster.qcow2,device=disk,bus=virtio \
                   --host-device 17:00.1
    
  2. Pin vCPUs to host CPUs.

    virsh vcpupin blaster 0 7
    virsh vcpupin blaster 1 31
    virsh vcpupin blaster 2 8
    virsh vcpupin blaster 3 32
    virsh vcpupin blaster 4 9
    virsh vcpupin blaster 5 33
    
  3. Log in to the blaster instance.

    virsh console blaster
    
  4. Configure the interfaces.

    ip link set dev ens1 up
    ip link set dev eth0 up
    dhclient ens1
    
  5. Install bngblaster.

    dpkg-query -W bngblaster || { wget https://github.com/rtbrick/bngblaster/releases/download/0.9.15/bngblaster-0.9.15-ubuntu-22.04_amd64.deb; dpkg -i bngblaster-0.9.15-ubuntu-22.04_amd64.deb; }
    
  6. Configure and start bngblaster.

    cat bngblaster.json
    {
        "interfaces": {
            "access": [
                {
                    "__comment__": "PPPoE Client",
                    "interface": "eth0",
                    "type": "pppoe",
                    "vlan-mode": "N:1",
                    "stream-group-id": 0
                }
            ]
        },
        "sessions": {
            "max-outstanding": 800,
            "reconnect": true,
            "count": 30000
        },
        "pppoe": {
            "reconnect": true
        },
        "ppp": {
            "authentication": {
                "username": "test",
                "password": "test",
                "timeout": 5,
                "retry": 30,
                "protocol": "PAP"
            },
            "lcp": {
                "conf-request-timeout": 10,
                "conf-request-retry": 5,
                "keepalive-interval": 0,
                "keepalive-retry": 3
            },
            "ipcp": {
                "enable": true,
                "request-ip": true,
                "request-dns1": false,
                "request-dns2": false,
                "conf-request-timeout": 1,
                "conf-request-retry": 10
            },
            "ip6cp": {
                "enable": false
            }
        },
        "dhcpv6": {
            "enable": false
        },
        "session-traffic": {
            "ipv4-pps": 1
        }
    }
    
    bngblaster -C bngblaster.json -J /tmp/report.json -S /tmp/bngblaster.sock -L /tmp/bng.log -I
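
While the test is running, the control socket passed with -S can be queried from another shell; for example, assuming the bngblaster-cli helper shipped with the package:

    bngblaster-cli /tmp/bngblaster.sock session-counters

The -I flag used above already provides an interactive view of the same counters.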
    
  7. Check active sessions on each vBNG. For example:

    vbng-3> show ppp-server statistics instance p
    Sessions counters
     active    : 10013
     starting  : 0
     finishing : 0
    
    PPPoE counters
     active        : 10013
     starting      : 0
     PADI received : 69615
     PADI dropped  : 0
     PADO sent     : 38885
     PADR received : 20727
     PADS sent     : 20045
    vbng-3>