3. PPPoE Dual Stack Configuration¶
3.1. License¶
For each VSR node of this setup, you must follow the Getting Started guide to provide a minimal Day-1 configuration and install a valid and relevant license.
A valid Virtual Service Router Network License is also required. Using show license, make sure it is properly activated; otherwise, features such as the fast path, the PPP Server and the IPoE Server won't function.
vsr> show license
Active perpetual license for Virtual Service Router
License tokens 10
Current activations 1/10
Connected to license server (last contact 2024-06-04 16:49:23)
Lease is valid until 2024-06-25 12:49:23
Serial number is XXXXXXXXXX
Computer ID is nFtc6ebng2xuRTg6Sa/M
License was activated online
Support is valid until 2026-06-03 05:00:00 (standard mode)
Max throughput 100.0G (moving average 0.0G)
BNG IPoE activated for 100000 sessions (currently used 0)
BNG PPPoE activated for 100000 sessions (currently used 0)
CG-NAT activated for 30000000 conntracks (currently used 0)
DDoS protection activated
FP firewall activated for 30000000 conntracks (currently used 0)
GTP activated for 1000000 tunnels (currently used 0)
IPsec activated for 100000 tunnels (currently used 0)
vsr>
Repeat this step for all the routers in this setup.
For the PPPoE BNG use case, you must make sure that your license shows “BNG PPPoE Activated”.
3.2. Hostname¶
Using the VSR CLI, let us start by setting the hostname and then configure the interfaces.
To set the VSR hostname, proceed as follows:
vsr> edit running
vsr running config# system hostname bng-pppoe
vsr running config# commit
bng-pppoe running config#
Repeat this step for all the routers in this setup.
The following configurations are more specific to the BNG-PPPoE router or PPPoE functionality.
3.3. Interfaces¶
Allocate the ports that will be involved in data plane processing into the fast path:
bng-pppoe running config# / system fast-path
bng-pppoe running fast-path#! port pci-b0s4
bng-pppoe running fast-path# port pci-b0s5
bng-pppoe running fast-path# port pci-b0s6
bng-pppoe running fast-path# port pci-b0s7
All physical and logical interfaces are configured under the ‘main’ VRF in this example.
bng-pppoe running fast-path# / vrf main
Create Ethernet interfaces and attach them to a port of a NIC:
bng-pppoe running vrf main# interface physical radius
bng-pppoe running physical radius#! port pci-b0s6
bng-pppoe running physical radius# description "bng-pppoe_to-Radius"
bng-pppoe running physical radius# ipv4 address 172.20.1.254/24
bng-pppoe running physical radius# ..
bng-pppoe running interface# physical access
bng-pppoe running physical access#! port pci-b0s4
bng-pppoe running physical access# description "bng-pppoe_to-CPEs"
bng-pppoe running physical access# ..
bng-pppoe running interface# physical internet
bng-pppoe running physical internet#! port pci-b0s5
bng-pppoe running physical internet# description "bng-pppoe_to-Internet"
bng-pppoe running physical internet# ipv4 address 109.254.1.1/24
bng-pppoe running physical internet# ipv6 address 2001:db8::1/64
bng-pppoe running physical internet# ..
Add VLANs towards the CPE networks:
bng-pppoe running interface# vlan vlan10
bng-pppoe running vlan vlan10# description "To-CPE"
bng-pppoe running vlan vlan10# vlan-id 10
bng-pppoe running vlan vlan10# link-interface access
bng-pppoe running vlan vlan10# ..
Add a loopback interface to be used for the local DNS definition:
bng-pppoe running interface# loopback dns
bng-pppoe running loopback dns# ipv4 address 1.1.1.1/32
bng-pppoe running loopback dns# ipv6 address 1::1/128
bng-pppoe running loopback dns# ..
Add the DNS configuration and enable the DNS records for both IPv4 and IPv6:
bng-pppoe running vrf main# dns-server use-system-servers false
bng-pppoe running dns-server# record bng.com 8.8.8.8
bng-pppoe running dns-server# record bngv6.com 8888::8888
bng-pppoe running dns-server# ..
Repeat this step for all the routers in this setup, using only the relevant interfaces on each.
On the CPEs specifically, a PPPoE interface that connects towards the BNG-PPPoE router shall be configured; the main wan interface shall only bind the physical port:
CPE1 running interface# physical wan
CPE1 running physical wan#! port pci-b0s4
CPE1 running physical wan# ..
[....]
CPE1 running interface# pppoe pppoe-wan
CPE1 running pppoe pppoe-wan#! link-interface wan
CPE1 running pppoe pppoe-wan# auth user cpe1
CPE1 running pppoe pppoe-wan# auth secret cpe1
CPE1 running pppoe pppoe-wan# request domain-name-servers
CPE1 running pppoe pppoe-wan# lcp echo-interval 3
CPE1 running pppoe pppoe-wan# lcp echo-failure 3
CPE1 running pppoe pppoe-wan# ..
Review the respective configuration on each router and commit it:
bng-pppoe running config# show config nodefault
interface
physical access
port pci-b0s4
[...]
bng-pppoe running config# commit
Configuration committed.
See also
See the VSR User’s Guide for more information.
At this point of the implementation, connectivity is still not established. The next step is to configure the BNG-PPPoE router with the PPP Server functionality and the required RADIUS parameters described in the PPPoE use case description section.
Note
An option without a RADIUS server is also supported, using the IPCP or IP6CP functions. See https://doc.6wind.com/new/vsr-3/latest/vsr-guide/user-guide/cli/services/ppp-server.html#example-1-without-a-radius-server for the complete configuration of these items.
The configuration for the PPP Server used in this setup is listed hereafter.
First, we need to configure the physical or virtual interface connected to the RADIUS server. The latter can be co-located in the network or reside in a different domain. For simplicity, we have attached it directly to the BNG-PPPoE router; if the RADIUS server sits in a different subnet or network domain, make sure reachability and routing are properly configured.
bng-pppoe running config# vrf main interface physical radius
bng-pppoe running physical radius#! port pci-b0s6
bng-pppoe running physical radius# description "bng-pppoe_to-Radius"
bng-pppoe running physical radius# ipv4 address 172.20.1.254/24
Then we enable a PPP server over an Ethernet connection using the vlan10 interface, which is connected to the CPEs:
bng-pppoe running config# vrf main ppp-server instance pppoe-server
bng-pppoe running instance pppoe-server# pppoe interface vlan10
bng-pppoe running instance pppoe-server# ..
Then we configure the RADIUS information used for Authentication, Authorization, and Accounting (AAA):
bng-pppoe running config# vrf main ppp-server instance pppoe-server
bng-pppoe running instance pppoe-server# auth radius
bng-pppoe running radius# server address 172.20.1.10 auth-port 1812 acct-port 1813 secret 5ecret123
bng-pppoe running radius# default-local-ip 100.64.0.1
bng-pppoe running radius# change-of-authorization-server secret 5ecret123
bng-pppoe running radius# nas ip-address 172.20.1.254
bng-pppoe running radius# nas identifier 172.20.1.254
bng-pppoe running radius# accounting interim-interval 2
bng-pppoe running radius# accounting session-id-in-authentication true
bng-pppoe running radius# commit
3.4. RADIUS configuration¶
Following this configuration, you should set up your RADIUS server. In our setup, the RADIUS server is directly connected to the BNG-PPPoE router as previously mentioned; the content of the two configuration files required for the authentication process is listed hereafter.
Note
For reference, we used FreeRADIUS version 3.0 installed on Ubuntu 22.04, but you can use any RADIUS server you are familiar with.
In the clients.conf file (found under /etc/freeradius/3.0/), configure the secret for the BNG-PPPoE router to allow proper authentication:
client BNG {
ipaddr = 172.20.1.254
secret = "5ecret123"
}
Next, you need to configure the 6WIND RADIUS dictionary file with its vendor-specific attributes so that they can be used properly in this implementation. The following vendor-specific attribute definitions need to be created in the “dictionary.6WIND” file under the /usr/share/freeradius/ path:
#
# dictionary.6WIND
#
VENDOR 6WIND 7336
BEGIN-VENDOR 6WIND
ATTRIBUTE 6WIND-AVPair 1 string
ATTRIBUTE 6WIND-limit 7 string
ATTRIBUTE 6WIND-iface-rpf 23 integer
ATTRIBUTE 6WIND-qos-template-name 24 string
END-VENDOR 6WIND
Then, in the users file (found at /etc/freeradius/3.0/users), define the parameters for the CPEs’ PPPoE session establishment:
cpe1 Cleartext-Password := 'cpe1'
Acct-Interim-Interval = 15,
Service-Type = Framed-User,
Framed-Protocol = PPP,
Framed-Routing = Broadcast-Listen,
Framed-Filter-Id = "std.ppp",
Framed-MTU = 1500,
Framed-Interface-Id = a00:20ff:fe99:a998,
Framed-IP-Address = 100.64.10.1,
Framed-IP-Netmask = 255.255.255.255,
MS-Primary-DNS-Server = 1.1.1.1,
Framed-IPv6-Prefix = "2000:0:0:106::/64",
Framed-IPv6-Route = "2000:0:0:106::1/80 :: 1",
6WIND-qos-template-name = premium-subscribers
cpe2 Cleartext-Password := 'cpe2'
Acct-Interim-Interval = 15,
Service-Type = Framed-User,
Framed-Protocol = PPP,
Framed-Routing = Broadcast-Listen,
Framed-Filter-Id = "std.ppp",
Framed-MTU = 1500,
Framed-Interface-Id = a00:20ff:fe99:a999,
Framed-IP-Address = 100.64.10.2,
Framed-IP-Netmask = 255.255.255.255,
MS-Primary-DNS-Server = 1.1.1.1,
Framed-IPv6-Prefix = "2000:0:0:107::/64",
Framed-IPv6-Route = "2000:0:0:107::1/80 :: 1",
6WIND-qos-template-name = premium-subscribers
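Before moving on, you can optionally check a subscriber entry directly against FreeRADIUS. The following is only a sketch, assuming the pyrad Python package, a pyrad-compatible dictionary file named dictionary that defines at least User-Name and User-Password, and that the host running it is also declared in clients.conf; it sends an Access-Request for cpe1 and prints the result:
#!/usr/bin/env python3
# Minimal RADIUS Access-Request test for the cpe1 entry defined above.
from pyrad.client import Client
from pyrad.dictionary import Dictionary
from pyrad import packet

# Secret and address are taken from the setup above; "dictionary" is a local
# pyrad-format dictionary file (an assumption of this sketch).
srv = Client(server="172.20.1.10", secret=b"5ecret123", dict=Dictionary("dictionary"))

req = srv.CreateAuthPacket(code=packet.AccessRequest, User_Name="cpe1")
req["User-Password"] = req.PwCrypt("cpe1")

reply = srv.SendPacket(req)
if reply.code == packet.AccessAccept:
    print("Access-Accept received for cpe1")
else:
    print("Authentication failed, reply code:", reply.code)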
After this step, you should be able to verify that the PPPoE sessions are established and active. See the Troubleshoot PPPoE Sessions section below.
3.5. DHCP Services for end hosts¶
For simplicity, we have opted to configure a DHCP server on the CPEs in order to allocate private host addresses.
The following configuration can be used, given we want to allocate the 192.168.1.0/24 subnet to our end users.
CPE1 running config# vrf main dhcp server
CPE1 running server# subnet 192.168.1.0/24
CPE1 running subnet 192.168.1.0/24# interface lan
CPE1 running subnet 192.168.1.0/24# default-gateway 192.168.1.1
CPE1 running subnet 192.168.1.0/24# range 192.168.1.10 192.168.1.100
CPE1 running subnet 192.168.1.0/24# dhcp-options domain-name-server 192.168.1.1
Remember that the DNS has already been included in the PPPoE request using the request domain-name-servers command option under the PPPoE interface configuration.
3.6. NAT Services for end hosts¶
Further, source NAT will be used in order to simplify the routing on the CPEs and the BNG-PPPoE router.
On the CPE device the configuration would look like this:
CPE1 running config# vrf main nat
CPE1 running nat# source-rule 1 outbound-interface pppoe-wan translate-to output-address
Similarly, on the BNG-PPPoE side we would configure the following:
bng-pppoe running config# vrf main nat
bng-pppoe running nat# source-rule 1 outbound-interface internet translate-to output-address
Note
For large-scale deployments, we recommend using the 6WIND CG-NAT capabilities (an additional license is required) to leverage their high performance and scalability.
See also
More details about 6WIND’s CG-NAT capabilities can be found here: CG-NAT basics
3.7. eBGP¶
Finally, we configure eBGP as our exterior routing protocol. It is used to peer with the internet router, which acts as the gateway for our DNS services. The configuration is straightforward: both the IPv4 and IPv6 address families are enabled, allowing the end hosts to reach the simulated internet addresses.
bng-pppoe running vrf main# routing bgp
bng-pppoe running bgp# as 65222
bng-pppoe running bgp# ebgp-requires-policy false
bng-pppoe running bgp# address-family ipv4-unicast redistribute static
bng-pppoe running bgp# address-family ipv6-unicast redistribute connected
bng-pppoe running bgp# neighbor 109.254.1.2
bng-pppoe running neighbor 109.254.1.2# remote-as 65123
bng-pppoe running neighbor 109.254.1.2# address-family ipv4-unicast enabled true
bng-pppoe running neighbor 109.254.1.2# address-family ipv6-unicast enabled false
bng-pppoe running neighbor 109.254.1.2# ..
bng-pppoe running bgp# neighbor 2001:db8::2
bng-pppoe running neighbor 2001:db8::2# remote-as 65123
bng-pppoe running neighbor 2001:db8::2# address-family ipv4-unicast enabled false
bng-pppoe running neighbor 2001:db8::2# address-family ipv6-unicast enabled true
bng-pppoe running neighbor 2001:db8::2# ..
bng-pppoe running bgp#
At this point, it is a good idea to check that the eBGP adjacencies are up and that routes are advertised, then ping the defined internet addresses from the host routers.
See the Troubleshoot PPPoE Sessions section below.
3.8. HTB PPPoE QoS Configuration¶
The following configuration defines the QoS templates referenced through the 6WIND-qos-template-name attribute in our setup.
3.8.1. Configure a base static scheduler¶
bng-pppoe running config# qos
bng-pppoe running qos# scheduler scheduler-1
bng-pppoe running scheduler scheduler-1# htb
bng-pppoe running htb# queue 1
bng-pppoe running queue1#! bandwidth 40G
bng-pppoe running queue1# ceiling 40G
bng-pppoe running queue1#! child-queue 2
bng-pppoe running queue1#! child-queue 3
bng-pppoe running queue1#! child-queue 4
bng-pppoe running queue1#! ..
bng-pppoe running htb# queue 2
bng-pppoe running queue2#! description "This is the static parent queue for premium subscribers queues"
bng-pppoe running queue2#! bandwidth 30G
bng-pppoe running queue2#! ceiling 40G
bng-pppoe running queue2#! ..
bng-pppoe running htb#! queue 3
bng-pppoe running queue3#! description "This is the static parent queue for non-premium subscribers queues"
bng-pppoe running queue3#! bandwidth 10G
bng-pppoe running queue3#! ceiling 40G
bng-pppoe running queue3#! ..
bng-pppoe running htb#! queue 4
bng-pppoe running queue4#! description "This is the default queue"
bng-pppoe running queue4#! bandwidth 10K
bng-pppoe running queue4# ceiling 40G
bng-pppoe running queue4# ceiling-priority 9
bng-pppoe running queue4# ..
bng-pppoe running htb# default-queue 4
3.8.2. Add the base-scheduler to the PPP server interface¶
bng-pppoe running config# vrf main interface vlan vlan10 qos egress scheduler scheduler-1
3.8.3. Configure the Templates locally¶
Note
By default, queues have the highest priority value, that is 0. Explicitly configure the ceiling-priority for queues that should have a lower priority.
Note
The ceiling-priority attribute should be set on QoS template queues in order to be applied.
bng-pppoe running config# vrf main ppp-server instance pppoe-server qos
bng-pppoe running qos# template premium-subscribers scheduler-interface vlan10
bng-pppoe running qos# template premium-subscribers queue prem static-parent 2
bng-pppoe running qos# template premium-subscribers queue prem bandwidth 7M
bng-pppoe running qos# template premium-subscribers queue prem ceiling 2G
bng-pppoe running qos# template premium-subscribers queue prem-voip dynamic-parent prem
bng-pppoe running qos# template premium-subscribers queue prem-voip bandwidth 5M
bng-pppoe running qos# template premium-subscribers queue prem-voip ceiling 2G
bng-pppoe running qos# template premium-subscribers queue prem-voip mark 0x1
bng-pppoe running qos# template premium-subscribers queue prem-data dynamic-parent prem
bng-pppoe running qos# template premium-subscribers queue prem-data bandwidth 2M
bng-pppoe running qos# template premium-subscribers queue prem-data ceiling 2G
bng-pppoe running qos# template premium-subscribers queue prem-data mark 0x0
bng-pppoe running qos# template non-premium-subscribers scheduler-interface vlan10
bng-pppoe running qos# template non-premium-subscribers queue non-prem static-parent 3
bng-pppoe running qos# template non-premium-subscribers queue non-prem bandwidth 4M
bng-pppoe running qos# template non-premium-subscribers queue non-prem ceiling 1G
bng-pppoe running qos# template non-premium-subscribers queue non-prem-voip dynamic-parent non-prem
bng-pppoe running qos# template non-premium-subscribers queue non-prem-voip bandwidth 3M
bng-pppoe running qos# template non-premium-subscribers queue non-prem-voip ceiling 1G
bng-pppoe running qos# template non-premium-subscribers queue non-prem-voip ceiling-priority 1
bng-pppoe running qos# template non-premium-subscribers queue non-prem-voip mark 0x1
bng-pppoe running qos# template non-premium-subscribers queue non-prem-data dynamic-parent non-prem
bng-pppoe running qos# template non-premium-subscribers queue non-prem-data bandwidth 1M
bng-pppoe running qos# template non-premium-subscribers queue non-prem-data ceiling 1G
bng-pppoe running qos# template non-premium-subscribers queue non-prem-data ceiling-priority 1
bng-pppoe running qos# template non-premium-subscribers queue non-prem-data mark 0x0
bng-pppoe running qos# default-template non-premium-subscribers
Once the configuration is in place, the RADIUS entry of a user should include its QoS template name. For instance, for a premium user, the following attribute is set in /etc/freeradius/3.0/users:
6WIND-qos-template-name = premium-subscribers
If no attribute can be retrieved from the RADIUS server, the default template is used (non-premium-subscribers).
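To visualize how the static scheduler and the per-subscriber template combine, here is a small illustrative sketch (plain Python, not VSR configuration) that prints the queue tree a premium subscriber session ends up with, using the bandwidth/ceiling values configured above:
# Illustrative only: the HTB hierarchy resulting from scheduler-1 plus the
# premium-subscribers template, for one subscriber session.
hierarchy = {
    "queue 1 (root, 40G/40G)": {
        "queue 2 (premium static parent, 30G/40G)": {
            "prem (per-session, 7M/2G)": {
                "prem-voip (5M/2G, mark 0x1)": {},
                "prem-data (2M/2G, mark 0x0)": {},
            },
        },
        "queue 3 (non-premium static parent, 10G/40G)": {},
        "queue 4 (default, 10K/40G)": {},
    },
}

def show(tree, depth=0):
    for name, children in tree.items():
        print("  " * depth + name)
        show(children, depth + 1)

show(hierarchy)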
3.8.4. Configure QoS marking¶
In this implementation, the VOIP traffic is marked with 0x1. The other traffic has the mark 0x0 (equivalent to no mark). The marking can be done using the IP Packet Filtering context.
Below is an example of traffic marking using the standard Virtual Service Router firewall. Keep in mind that this mark is purely local to the Virtual Service Router, carried as packet metadata, and is not preserved once the packet has left the system.
First, let's assume you have standard SIP VoIP traffic on TCP ports 5060/5061, coming from your customers without any DSCP marking. We need to mark packets as soon as they arrive on the interface so that they are handled correctly. Consequently, we use the PREROUTING chain of the mangle table, which is the dedicated table for altering packets with such marks.
bng-pppoe running qos# / vrf main firewall ipv4 mangle prerouting
bng-pppoe running prerouting# rule 1 protocol tcp destination port-range 5060-5061 action mark 0x1
bng-pppoe running prerouting# commit
The mark 0x1 will be caught by the QoS mechanism and packets will be sent to the right queue according to your template.
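To exercise this rule in a lab, you can generate a little matching traffic from a host behind a CPE and check that the rule counters increase and that the prem-voip queue receives packets. This is a sketch only, assuming the scapy Python package and using 109.254.1.2 as an arbitrary destination reached through the BNG:
# Send a few TCP SYNs to the SIP signaling ports so they should be marked 0x1
# by the mangle/prerouting rule on the BNG (requires root privileges).
from scapy.all import IP, TCP, send

for dport in (5060, 5061):
    send(IP(dst="109.254.1.2") / TCP(dport=dport, flags="S"), count=5, verbose=False)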
3.8.5. Protecting control plane packets¶
By default, control plane traffic is not treated differently from data plane traffic by the QoS, so nothing prevents control packets from being dropped at QoS enqueue time. To protect them, you can configure a queue dedicated to control plane packets with a guaranteed bandwidth.
bng-pppoe running config# / qos class cp-traffic cp true
bng-pppoe running config# / qos scheduler scheduler-1 htb queue 5 bandwidth 1M
bng-pppoe running config# / qos scheduler scheduler-1 htb queue 5 class cp-traffic
Now you are sure that a bandwidth of 1 Mbps is reserved for control plane packets only.
3.9. Detailed PPPoE server configuration¶
Now that we have a base configuration that works for the most common cases with RADIUS, we will tune it to avoid issues when scaling to 10,000 CPEs.
3.9.1. Limit the maximum number of sessions¶
This setting limits the maximum number of sessions to 10000. It applies per BNG instance.
Note
The default maximum is set to 10300
bng-pppoe running config# / vrf main ppp-server instance pppoe-server max-sessions 10000
3.9.2. Limiting accepted sessions per second¶
Next, we limit the number of sessions accepted per second, since accepting too many at once can overload the PPP server. This setting applies per BNG instance.
A constantly increasing number of “starting” sessions is a sign that the control plane daemon is overloaded. This setting has a direct impact on the “starting” counter, as shown below.
The right value depends on your BNG capacity (the number of CPUs dedicated to the control plane); here we take the minimal case of 4 vCPUs for the control plane. We recommend setting this value between 300 and 600.
bng-pppoe running config# / vrf main ppp-server instance pppoe-server max-starting 500
bng-pppoe running instance pppoe-server# show ppp-server statistics instance pppoe-server
Sessions counters
active : 6153
--> starting : 321 <--
finishing : 0
PPPoE counters
active : 6474
starting : 0
PADI received : 53394
PADI dropped : 0
PADO sent : 14119
PADR received : 73298
PADS sent : 6474
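As a quick way to spot such overload outside the CLI, the “starting” counter can be extracted from successive captures of this output and watched over time. This is only a sketch, assuming the statistics output has been captured to text (for example over SSH); it is not a VSR tool:
# Extract the sessions "starting" counter from successive captures of
# "show ppp-server statistics" and flag a steadily growing value.
import re

def starting_sessions(cli_output):
    # the first "starting :" line belongs to the "Sessions counters" block
    match = re.search(r"starting\s*:\s*(\d+)", cli_output)
    return int(match.group(1)) if match else 0

# Captures would normally come from the CLI; shortened samples shown here.
captures = [
    "Sessions counters\n active : 6000\n starting : 120\n",
    "Sessions counters\n active : 6100\n starting : 250\n",
    "Sessions counters\n active : 6153\n starting : 321\n",
]
values = [starting_sessions(c) for c in captures]
if all(b > a for a, b in zip(values, values[1:])):
    print("'starting' keeps increasing:", values, "- the control plane may be overloaded")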
A low max-starting value may lead you to adjust the LCP Echo/Reply settings.
3.9.3. Increasing LCP Echo/Reply¶
Since we limit the number of sessions accepted per second, and since 10K users can be very control plane intensive on a system with few resources, you may encounter connection drops when LCP Echo requests are not answered in time.
The Virtual Service Router's default behavior is to send an LCP echo every 15 seconds and to consider the connection down after 4 echo failures, so 60 seconds without any answer leads to an authentication failure, resetting the whole PPP session establishment.
Under heavy load, it may be useful to increase these settings; the example below allows 90 seconds for a Reply to be sent.
bng-pppoe running config# / vrf main ppp-server instance pppoe-server ppp lcp echo-interval 30
bng-pppoe running config# / vrf main ppp-server instance pppoe-server ppp lcp echo-failure 3
3.9.4. Load-balance sessions¶
When multiple instances serve the same service, it is worth making sure that sessions are load-balanced across those instances. In a PPPoE session, the first instance answering a PADI with a PADO is the “winning” instance that will establish and host the session.
In order to balance sessions across multiple instances, the PADO delay can be increased based on the number of sessions managed by the instance. This way you ensure a fair distribution of the load among your different instances. You can set multiple delays (in ms), according to multiple session count thresholds.
bng-pppoe running config# / vrf main ppp-server instance pppoe-server pppoe pado-delay session-count 1000 delay 100
bng-pppoe running config# / vrf main ppp-server instance pppoe-server pppoe pado-delay session-count 3000 delay 300
bng-pppoe running config# / vrf main ppp-server instance pppoe-server pppoe pado-delay session-count 5000 delay 500
bng-pppoe running config# / vrf main ppp-server instance pppoe-server pppoe pado-delay session-count 7000 delay 700
The above configuration delays the PADO by 100 ms once 1000 sessions are reached, then by 300 ms once 3000 sessions are reached, and so on. The idea is to configure different (higher) delays on the other instances, so that new sessions are balanced onto the second instance first, then onto the remaining ones.
Example for the second instance:
bng-pppoe2 running config# / vrf main ppp-server instance pppoe-server pppoe pado-delay session-count 1000 delay 200
bng-pppoe2 running config# / vrf main ppp-server instance pppoe-server pppoe pado-delay session-count 3000 delay 400
bng-pppoe2 running config# / vrf main ppp-server instance pppoe-server pppoe pado-delay session-count 5000 delay 600
bng-pppoe2 running config# / vrf main ppp-server instance pppoe-server pppoe pado-delay session-count 7000 delay 800
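To make the selection effect concrete, here is a small illustrative sketch (plain Python, not VSR code) of how such per-instance delay tables map the current session count to a PADO delay: with equal load, the first instance always answers earlier and therefore attracts the next CPEs until the load evens out. Whether the delay applies exactly at or above the threshold is an assumption of this sketch:
# Per-instance (session-count threshold, delay in ms) tables from the example above.
PADO_DELAYS = {
    "bng-pppoe":  [(1000, 100), (3000, 300), (5000, 500), (7000, 700)],
    "bng-pppoe2": [(1000, 200), (3000, 400), (5000, 600), (7000, 800)],
}

def pado_delay_ms(instance, active_sessions):
    delay = 0  # below the first threshold, answer immediately
    for threshold, value in PADO_DELAYS[instance]:
        if active_sessions >= threshold:
            delay = value
    return delay

# With ~3500 sessions each, the first instance answers 100 ms earlier.
print(pado_delay_ms("bng-pppoe", 3500), pado_delay_ms("bng-pppoe2", 3500))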
3.9.5. Fast-Path required settings¶
In order to handle 10000 sessions, it is mandatory to update some Fast-Path parameters:
Since each PPP session is terminated on a dedicated PPP interface hosted by the fast path, increasing the maximum number of interfaces and PPPoE sessions accepted by the fast path is mandatory:
bng-pppoe running config# / system fast-path limits fp-max-if 10300
bng-pppoe running config# / system fast-path limits pppoe-max-channel 10300
In most cases a QoS profile will be applied to each session. Let us take the example of two customer typologies (premium and non-premium), each with 3 classes of traffic shaped at different bandwidths. In this case you need to size the following limits:
Schedulers are in charge of applying the QoS profile: there is one scheduler per subscriber, plus some margin in case the number of sessions effectively reached is slightly higher than 10000, which can happen when sessions are still being negotiated while 10000 are already established.
Policies determine which action must be taken depending on the policy selector match. Here we have two possible policies per user; including some margin: 21000.
Classes correspond to the queues to which each customer class of traffic is sent. Three queues are created in the above example; adding margin once again: 62000.
bng-pppoe running config# / system fast-path limits qos-max-schedulers 10300
bng-pppoe running config# / system fast-path limits qos-max-policies 21000
bng-pppoe running config# / system fast-path limits qos-max-classes 62000
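As a rough sizing aid, the limits above can be derived from the target session count. The ratios below (two policies per subscriber and three classes per policy) are assumptions inferred from the rounded figures in this example; adjust them to your own QoS templates:
# Rough sizing sketch for the fast path QoS limits used above.
max_sessions = 10000
margin = 300                        # sessions still negotiating around the limit

interfaces = max_sessions + margin  # fp-max-if / pppoe-max-channel: 10300
schedulers = max_sessions + margin  # qos-max-schedulers: 10300
policies = 2 * schedulers           # 20600, rounded up to 21000 above
classes = 3 * policies              # 61800, rounded up to 62000 above

print(interfaces, schedulers, policies, classes)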