Usage¶
In this section, it is assumed that Virtual Accelerator has been properly installed and configured. See Getting Started for more details.
You can configure LAG interfaces via standard iproute2 shell commands.
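For example, a minimal iproute2 sequence to create a bonding interface and enslave a port could look like this (interface names are illustrative; the sections below give complete walkthroughs):
$ sudo ip link add bond0 type bond
$ sudo ip link set eth1 down
$ sudo ip link set eth1 master bond0
$ sudo ip link set dev bond0 up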
Managing LAG interfaces¶
Simple LAG¶
This example is relevant to a CentOS 7 machine.
We will assume the network topology is the following:
Configure IP addresses and routes:
$ sudo ip link set eth3 up
$ sudo ip addr add 10.23.1.1/24 dev eth3
$ sudo ip link set eth1 down
$ sudo ip link set eth2 down
Create a bonding interface:
$ sudo modprobe bonding
$ sudo ip addr add 10.10.10.1/24 dev bond0
$ sudo ip link set dev bond0 up
$ sudo ip link set eth1 master bond0
$ sudo ip link set eth2 master bond0
[Optional] For the active-backup mode instead of the default balance-rr mode, the first command must be replaced by:
$ sudo modprobe bonding mode=active-backup miimon=100
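You can check which bonding mode the kernel applied, for example (the exact wording depends on the kernel version):
$ grep "Bonding Mode" /proc/net/bonding/bond0
Bonding Mode: fault-tolerance (active-backup)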
Create a new route:
$ sudo ip route add 10.22.1.0/24 dev bond0
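To check that the route was installed, you can list the routes attached to the bonding interface:
$ ip route show dev bond0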
Display the characteristics of LAG interfaces on the fast path:
$ ip -d address show
11: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000
    link/ether 52:54:00:00:01:00 brd ff:ff:ff:ff:ff:ff promiscuity 0
12: eth2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000
    link/ether 52:54:00:00:01:00 brd ff:ff:ff:ff:ff:ff promiscuity 0
14: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 52:54:00:00:01:00 brd ff:ff:ff:ff:ff:ff promiscuity 0
    bond
    inet 10.10.10.1/24 scope global bond0
       valid_lft forever preferred_lft forever
$ cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 0
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 52:54:00:00:01:00
Slave queue ID: 0

Slave Interface: eth2
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 52:54:00:00:02:00
Slave queue ID: 0
$ fpcmd lag
bond0-vr0
mode: round-robin
slaves (2):
eth1-vr0 can_tx=yes
state=active link=up queue_id=0 link_failure_count=0
perm_hwaddr=52:54:00:00:01:00
eth2-vr0 can_tx=no
state=backup link=up queue_id=0 link_failure_count=0
perm_hwaddr=52:54:00:00:02:00
LAG with XOR mode¶
This example is relevant to a CentOS 7 machine.
We will assume the network topology is the following:
Configure IP addresses and routes:
$ sudo ip link set eth3 up
$ sudo ip addr add 10.23.1.1/24 dev eth3
$ sudo ip link set eth1 down
$ sudo ip link set eth2 down
Create a bonding interface:
$ sudo modprobe bonding mode=balance-xor
$ sudo ip addr add 10.10.10.1/24 dev bond0
$ sudo ip link set dev bond0 up
$ sudo ip link set eth1 master bond0
$ sudo ip link set eth2 master bond0
[Optional] If your distribution does not support mode synchronization, set the fast path LAG mode:
$ fp-cli lag-mode-set bond0 xor
Create a new route:
$ sudo ip route add 10.22.1.0/24 dev bond0
Display the characteristics of LAG interfaces on the fast path:
$ ip -d address show
11: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000
    link/ether 52:54:00:00:01:00 brd ff:ff:ff:ff:ff:ff promiscuity 0
12: eth2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000
    link/ether 52:54:00:00:01:00 brd ff:ff:ff:ff:ff:ff promiscuity 0
14: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 52:54:00:00:01:00 brd ff:ff:ff:ff:ff:ff promiscuity 0
    bond
    inet 10.10.10.1/24 scope global bond0
       valid_lft forever preferred_lft forever
$ cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: load balancing (xor)
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 0
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 52:54:00:00:01:00
Slave queue ID: 0

Slave Interface: eth2
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 52:54:00:00:02:00
Slave queue ID: 0
$ fpcmd lag
bond0-vr0
mode: xor
xmit_hash_policy: layer2
slaves (2):
eth1-vr0 can_tx=yes
state=active link=up queue_id=0 link_failure_count=0
perm_hwaddr=52:54:00:00:01:00
eth2-vr0 can_tx=no
state=backup link=up queue_id=0 link_failure_count=0
perm_hwaddr=52:54:00:00:02:00
LAG with XOR mode and specific hash policy¶
This example is relevant to a CentOS 7 machine.
We will assume the network topology is the following:
Configure IP addresses and routes:
$ sudo ip link set eth3 up
$ sudo ip addr add 10.23.1.1/24 dev eth3
$ sudo ip link set eth1 down
$ sudo ip link set eth2 down
Create a bonding interface with a specific hash policy:
$ sudo modprobe bonding mode=balance-xor xmit-hash-policy=layer2+3
$ sudo ip addr add 10.10.10.1/24 dev bond0
$ sudo ip link set dev bond0 up
$ sudo ip link set eth1 master bond0
$ sudo ip link set eth2 master bond0
[Optional] If your distribution does not support mode synchronization, set the fast path LAG mode:
$ fp-cli lag-mode-set bond0 xor
[Optional] If your distribution does not support hash policy synchronization, set the fast path LAG xmit hash policy:
$ fp-cli lag-hash-policy-set bond0 layer2+3
Create a new route:
$ sudo ip route add 10.22.1.0/24 dev bond0
Display the characteristics of LAG interfaces on the fast path:
$ ip -d address show
11: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000
    link/ether 52:54:00:00:01:00 brd ff:ff:ff:ff:ff:ff promiscuity 0
12: eth2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000
    link/ether 52:54:00:00:01:00 brd ff:ff:ff:ff:ff:ff promiscuity 0
14: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 52:54:00:00:01:00 brd ff:ff:ff:ff:ff:ff promiscuity 0
    bond
    inet 10.10.10.1/24 scope global bond0
       valid_lft forever preferred_lft forever
$ cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: load balancing (xor)
Transmit Hash Policy: layer2+3 (2)
MII Status: up
MII Polling Interval (ms): 0
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 52:54:00:00:01:00
Slave queue ID: 0

Slave Interface: eth2
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 52:54:00:00:02:00
Slave queue ID: 0
$ fpcmd lag
bond0-vr0
mode: xor
xmit_hash_policy: layer2+3
slaves (2):
eth1-vr0 can_tx=yes
state=active link=up queue_id=0 link_failure_count=0
perm_hwaddr=52:54:00:00:01:00
eth2-vr0 can_tx=no
state=backup link=up queue_id=0 link_failure_count=0
perm_hwaddr=52:54:00:00:02:00
LAG with 802.3ad mode¶
This example is relevant to a CentOS 7 machine.
We will assume the network topology is the following:
Configure IP addresses and routes:
$ sudo ip link set eth3 up
$ sudo ip addr add 10.23.1.1/24 dev eth3
$ sudo ip link set eth1 down
$ sudo ip link set eth2 down
Remove the Linux bonding module so that it can be reloaded with 802.3ad parameters in the next step:
$ sudo modprobe -r bonding
Create a bonding interface:
$ sudo modprobe bonding mode=802.3ad lacp_rate=1 miimon=100 max_bonds=0
$ sudo ip link add bond0 type bond
$ sudo ip link set dev bond0 down
$ sudo ip addr add 10.10.10.1/24 dev bond0
$ sudo ip link set dev bond0 up
$ sudo ip link set eth1 master bond0
$ sudo ip link set eth2 master bond0
Create a new route:
$ sudo ip route add 10.22.1.0/24 dev bond0
Display the characteristics of LAG interfaces on the fast path:
$ ip -d address show
11: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000
    link/ether 52:54:00:00:01:00 brd ff:ff:ff:ff:ff:ff promiscuity 0
12: eth2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000
    link/ether 52:54:00:00:01:00 brd ff:ff:ff:ff:ff:ff promiscuity 0
14: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 52:54:00:00:01:00 brd ff:ff:ff:ff:ff:ff promiscuity 0
    bond
    inet 10.10.10.1/24 scope global bond0
       valid_lft forever preferred_lft forever
$ cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

802.3ad info
LACP rate: fast
Min links: 0
Aggregator selection policy (ad_select): stable
Active Aggregator Info:
    Aggregator ID: 1
    Number of ports: 1
    Actor Key: 9
    Partner Key: 1
    Partner Mac Address: 00:00:00:00:00:00

Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 52:54:00:00:01:00
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: monitoring
Partner Churn State: monitoring
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
    system priority: 65535
    port key: 9
    port priority: 255
    port number: 1
    port state: 79
details partner lacp pdu:
    system priority: 65535
    oper key: 1
    port priority: 255
    port number: 1
    port state: 1

Slave Interface: eth2
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 52:54:00:00:02:00
Slave queue ID: 0
Aggregator ID: 2
Actor Churn State: monitoring
Partner Churn State: monitoring
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
    system priority: 65535
    port key: 9
    port priority: 255
    port number: 2
    port state: 71
details partner lacp pdu:
    system priority: 65535
    oper key: 1
    port priority: 255
    port number: 1
    port state: 1
$ fpcmd lag
bond0-vr0
mode: 802.3ad
xmit_hash_policy: layer2
info_aggregator: 2, info_num_ports: 1
slaves (2):
eth1-vr0 can_tx=yes
state=active link=up queue_id=0 link_failure_count=0
perm_hwaddr=52:54:00:00:01:00
aggregator_id=1
eth2-vr0 can_tx=no
state=backup link=up queue_id=0 link_failure_count=0
perm_hwaddr=52:54:00:00:02:00
aggregator_id=2
Managing LAG interfaces with fp-cli¶
Starting fp-cli¶
The fp-cli commands below allow you to manage LAG interfaces:
$ fp-cli
Displaying LAG interfaces and their slaves¶
Synopsis
lag
Example
<fp-0> lag
bond0-vr0
mode: round-robin
slaves (2):
eth1-vr0 can_tx=yes
state=active link=up queue_id=0 link_failure_count=0
perm_hwaddr=00:21:85:c1:82:58
eth4-vr0 can_tx=yes
state=active link=up queue_id=0 link_failure_count=0
perm_hwaddr=00:00:46:50:4e:00
bond1-vr0
mode: 802.3ad
xmit_hash_policy: layer2
info_aggregator: 1, info_num_ports: 1
slaves (1):
eth3-vr0 can_tx=no
state=active link=up queue_id=0 link_failure_count=0
perm_hwaddr=00:30:1b:b4:df:94
aggregator_id=1
bond2-vr0
mode: xor
xmit_hash_policy: layer2+3
slaves (1):
eth5-vr0 can_tx=yes
state=active link=up queue_id=0 link_failure_count=0
perm_hwaddr=00:00:46:50:c2:04
Setting the LAG policies¶
The default is balance-rr (round robin).
Synopsis
lag-mode-set <master_ifname> round-robin|xor
- <master_ifname>
Name of the bonding interface.
- round-robin
Transmit packets in sequential order from the first available slave through the last. This mode provides load balancing and fault tolerance.
- xor
Transmit according to the hash policy matching the xmit-hash-policy value.
The default applied policy depends on whether the distribution provides netlink notification support for the xmit-hash-policy setting:
if support is provided, the default applied policy is layer2;
otherwise, it is layer2+3.
This mode provides load balancing and fault tolerance.
See also
Setting the LAG xmit-hash-policy
Example
<fp-0> lag-mode-set bond0 round-robin
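You can then check the applied mode with the lag command described above, for example (output truncated):
<fp-0> lag
bond0-vr0
mode: round-robin
...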
Note
You can use the 802.3ad mode only if the Linux - Fast Path Synchronization supports this feature.
Setting the LAG xmit-hash-policy¶
Select the transmit hash policy to use for slave selection in balance-xor and 802.3ad modes.
Synopsis
lag-hash-policy-set <master_ifname> layer2|layer2+3|layer3+4|encap2+3|encap3+4
- <master_ifname>
Name of the bonding interface.
- layer2
The hash is computed from the source and destination MAC addresses modulo the number of slaves.
- layer2+3
This policy uses a combination of layer2 and layer3 protocol information to generate the hash.
The hash is computed from the IP/IPv6 source and destination addresses modulo the number of slaves. If the packet contains a VLAN header, its ID is used as well.
This policy is intended to provide a more balanced distribution of traffic than layer2 alone, especially in environments where a layer3 gateway device is required to reach most destinations.
- layer3+4
This policy uses upper layer protocol information, when available, to compute the hash. This allows for traffic to a particular network peer to span multiple slaves, although a single connection will not span multiple slaves.
- encap2+3
This policy uses the same method as layer2+3, but it also dissects all recognized encapsulation layers (refer to the fpn-flow API for more details), if any, and uses the innermost L3 information to generate the hash.
- encap3+4
This policy uses the same method as layer3+4, but it also dissects all recognized encapsulation layers (refer to the fpn-flow API for more details), if any, and uses the innermost L3/L4 information to generate the hash.
Example
<fp-0> lag-hash-policy-set bond0 layer3+4
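The lag command should then report the new policy; for example, for a bond in xor mode (output truncated):
<fp-0> lag
bond0-vr0
mode: xor
xmit_hash_policy: layer3+4
...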
Configuring LAG NUMA awareness¶
By default, the LAG policy is socket independent, i.e. it may forward a packet allocated on one NUMA socket through an interface attached to another socket. Cross-socket forwarding reduces performance (see Performance optimization of FPN-SDK Add-on for DPDK for more details).
NUMA awareness for LAG can be enabled via a dedicated option; please refer to LAG options. It can also be tuned at runtime via fp-cli.
Synopsis
numa-aware-set lag on|off
For more details on the CLI commands to manage NUMA awareness, see NUMA awareness configuration.
When LAG is socket aware, the LAG policy is applied only to the subset of slave interfaces belonging to the same socket as the considered packet.
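Following the synopsis above, enabling LAG NUMA awareness at runtime from fp-cli would look like:
<fp-0> numa-aware-set lag on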
Providing options¶
Some capabilities can be tuned for this module.
- --iface-max¶
Maximum number of LAG interfaces.
- Default value
32
- Memory footprint per LAG interface
136 B
Example
FP_OPTIONS="--mod-opt=lag:--iface-max=16"
The fast path can then manage up to 16 LAG interfaces.
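As a rough estimate based on the footprint above, the default of 32 interfaces uses about 32 × 136 B ≈ 4.3 KB for this table, and limiting it to 16 interfaces halves that.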
- --max-slaves¶
Maximum number of slaves across all LAG interfaces.
- Default value
64
- Memory footprint per LAG slave interface
128 B
Example
FP_OPTIONS="--mod-opt=lag:--max-slaves=16"
- --max-slaves-per-iface¶
Maximum number of slaves per LAG interface.
- Default value
- Range
0 .. max-slaves
Example
FP_OPTIONS="--mod-opt=lag:--max-slaves-per-iface=16"
- --slaves-hash-order¶
Size order of the LAG slave interfaces hash table. This value is automatically updated if --max-slaves is changed.
- Default value
- Range
1 .. 31
Example
FP_OPTIONS="--mod-opt=lag:--slaves-hash-order=10"
Tip
To get optimal performance, apply the following ratios to the two parameters:
Parameter | Value
---|---
--slaves-hash-order |
--max-slaves |
Note
See the Fast Path Capabilities documentation for the impact of available memory on the default values of configurable capabilities.
- --numa-aware¶
Enable LAG NUMA awareness capability.
Example
FP_OPTIONS="--mod-opt=lag:--numa-aware"