Usage

The classification of packets for QoS uses the netfilter framework, which is configured via iptables and is synchronized from Linux to the Fast Path Filtering IPv4 and Fast Path Filtering IPv6 modules.

All other QoS operations are implemented in the fast path only and configured via fp-cli.

Fast Path startup options

Enabling QoS on a physical interface is done by configuring a scheduler on this interface. The scheduler is attached to a single fast path core.

By default, in addition to scheduling output packets, fast path cores may poll network port rx queues, perform crypto offloading for other cores, process packets originating from Linux, etc. These cores are named worker cores.

However, it is possible to dedicate some of the fast path cores to only perform scheduling. These cores are named scheduler cores. This is optional, but provides more accurate QoS guarantees.

The assignment of cores dedicated to QoS scheduling can be performed by editing the fast path configuration file.

QoS cores are a subset of the fast path cores (FP_MASK variable). Their list can be specified either by adding the -Q option to FPNSDK_OPTIONS, or by setting the QOS_SCHEDULER_MASK variable in the configuration file with the syntax : ${QOS_SCHEDULER_MASK:=<value>}.

Example

For example, the fast path runs on cores 1 to 6, and cores 2 and 4 are dedicated to QoS scheduling:

# fast-path.sh stop
# vi /etc/fast-path.env
  [...]
  FP_MASK="1-6"
  QOS_SCHEDULER_MASK="2,4"
  [...]
# fast-path.sh start

Module initialization

The QoS module initialization supports the following options.

--sched-max

set maximum number of QoS schedulers (interfaces with QoS enabled)

--filter-max

set maximum number of QoS filter rules

--class-max

set maximum number of QoS classes

--meter-max

set maximum number of QoS meters (at most one per class plus one per scheduler)

The default values for each maximum are set in the configuration. For example, CONFIG_MCORE_QOS_SCHED_MAX is the default value for --sched-max.
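For instance, to support 64 schedulers with up to 16 classes each, and one meter per class plus one per scheduler, the module options could look like the following (illustrative values; the exact option-passing mechanism depends on your fast path configuration):

```
--sched-max=64 --class-max=1024 --filter-max=4096 --meter-max=1088
```

Note that --meter-max is sized as class-max + sched-max (1024 + 64), since at most one meter can be attached per class and one per scheduler.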

Display QoS global parameters

Display worker and scheduler cores, cores with schedulers attached, and free QoS objects.

<fp-0> qos-global
worker cores: 1 3 5 6
scheduler cores: 2 4
cores with configured schedulers: 2:1
allocated objects (current/max):
- sched: 1/32
- class: 4/512
- filter: 0/2048
- meter: 0/512

worker cores lists cores that poll network interfaces. They also perform packet classification (netfilter rules, QoS filters), and metering. They can optionally perform scheduling if schedulers are configured on them.

scheduler cores lists cores that only perform scheduling.

cores with configured schedulers lists worker and scheduler cores on which schedulers are configured. The first number is the core id, the second one after : is the number of schedulers.

allocated objects counts the number of allocated QoS objects (schedulers, classes, filters, meters). These objects are allocated from a pool. The first number counts the allocated objects. The second one, after /, is the pool size.

Memory considerations

These objects are pre-allocated during fast path initialization, so the memory footprint depends only on the configured maximum for each pool.

Below are the unitary sizes for the objects allocated for each pool. For the total memory footprint, multiply the unitary size by the maximum object number.

Unitary sizes for the objects allocated in shared memory

  • sched 76 bytes (72 bytes on 32-bit architectures)

  • class 68 bytes

  • filter 32 bytes

  • meter 60 bytes

Unitary sizes for the objects allocated in the fast path memory

Without CONFIG_MCORE_FPN_GC:

  • sched 32 bytes (16 bytes on 32-bit architectures)

  • class 8 bytes (4 bytes on 32-bit architectures)

  • filter 0 bytes

  • meter 8 bytes (4 bytes on 32-bit architectures)

With CONFIG_MCORE_FPN_GC:

  • sched 48 bytes (24 bytes on 32-bit architectures)

  • class 24 bytes (12 bytes on 32-bit architectures)

  • filter 16 bytes (8 bytes on 32-bit architectures)

  • meter 24 bytes (12 bytes on 32-bit architectures)
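As a back-of-the-envelope check, the shared memory footprint for the default pool sizes shown in the qos-global example (32 schedulers, 512 classes, 2048 filters, 512 meters, 64-bit architecture) can be computed by multiplying each unitary size by its pool maximum:

```python
# Unitary object sizes in shared memory (64-bit architecture), in bytes.
SHM_SIZES = {"sched": 76, "class": 68, "filter": 32, "meter": 60}

# Default pool maxima, as reported by "qos-global" in the example above.
POOL_MAX = {"sched": 32, "class": 512, "filter": 2048, "meter": 512}

def shm_footprint(sizes, maxima):
    """Total pre-allocated shared memory, in bytes."""
    return sum(sizes[name] * maxima[name] for name in sizes)

total = shm_footprint(SHM_SIZES, POOL_MAX)
print(total)  # 133504 bytes, i.e. about 130 KB
```

With the default maxima, the shared memory cost of the QoS pools is thus modest (around 130 KB); it grows linearly with the configured maxima.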

Configure packet classification and marking

The netfilter framework is used to classify packets, optionally mark their DSCP, then set a netfilter mark. The mark will be used by the QoS filter stage to enqueue packets in the right class queue.

All netfilter match methods and targets supported by the Fast Path Filtering IPv4 and Fast Path Filtering IPv6 modules can be used. The mangle table is typically used for QoS packet classification.

Netfilter rules are configured with the iptables Linux command.

There are basically two models when configuring packet classification for QoS:

  • trusted: we trust the DSCP marking performed by packet originators. The typical match method used for QoS is then dscp.

  • untrusted: we do not trust the DSCP marking performed by packet originators. The DSCP of packets must then be reset, and any match method may be used.

The typical targets used for QoS are DSCP and MARK.

Example

All packets with the EF DSCP must have their netfilter mark set to 0x1. This mark will then be used by the QoS filter stage to steer packets to the right class:

# iptables -t mangle -I POSTROUTING -m dscp --dscp-class EF -j MARK --set-mark 0x1

All UDP packets destined to 10.99.0.1 must have their DSCP set to EF:

# iptables -t mangle -I POSTROUTING -p udp -d 10.99.0.1 -j DSCP --set-dscp-class EF

Display netfilter rules in the Linux kernel:

# iptables -t mangle -L -v
Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 1 packets, 76 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain POSTROUTING (policy ACCEPT 1 packets, 76 bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DSCP       udp  --  any    any     anywhere             10.99.0.1            DSCP set 0x2e
    1    76 MARK       all  --  any    any     anywhere             anywhere             DSCP match 0x2e MARK set 0x1

Display netfilter rules in the fast path:

<fp-0> nf4-rules mangle
Chain PREROUTING (policy ACCEPT 0 packets 0 bytes)
    pkts      bytes target    prot opt  in     out    source              destination

Chain INPUT (policy ACCEPT 0 packets 0 bytes)
    pkts      bytes target    prot opt  in     out    source              destination

Chain FORWARD (policy ACCEPT 0 packets 0 bytes)
    pkts      bytes target    prot opt  in     out    source              destination

Chain OUTPUT (policy ACCEPT 0 packets 0 bytes)
    pkts      bytes target    prot opt  in     out    source              destination

Chain POSTROUTING (policy ACCEPT 0 packets 0 bytes)
    pkts      bytes target    prot opt  in     out    source              destination
       0          0 DSCP      udp  --   any    any    anywhere            10.99.0.1            DSCP set 0xb8
       0          0 MARK      all  --   any    any    anywhere            anywhere             MARK set 0x1 DSCP match 0x2e

Configure QoS schedulers

A scheduler may be configured for each physical network interface. Each scheduler runs on a single fast path core.

Enable and configure scheduling

Enable QoS on a physical interface and configure its scheduler.

qos-sched-add <ifname> (prio|dwrr) <classes> [rate <rate>] [burst <burst>] \
              [l2overhead <l2overhead>] [qsize <pkts>[,<pkts>[,...]]] \
              [priority <priority>[,<priority>[,...]]] \
              [weight <weight>[,<weight>[,...]]] [cpu <coreid>]
qos-sched-add <ifname> fuzz <classes> [rate <rate>] [burst <burst>] \
              [l2overhead <l2overhead>] [qsize <pkts>[,<pkts>[,...]]] \
              [qrate <rate>[,<rate>[,...]]] [qburst <burst>[,<burst>[,...]]] \
              [drop <x/y>[,<x/y>[,...]]] delay <delay>[,<us>[,...]] \
              [lsize <nb_pkts>[,<nb_pkts>[,...]]] [cpu <coreid>]

Parameters

<ifname>

physical interface on which the scheduler is configured.

prio

selects the strict priority scheduling algorithm. Class identifiers and priorities range from 1 (highest priority) to <classes> (lowest priority).

dwrr

selects deficit weighted round robin (DWRR) scheduling algorithm. Unlike prio, all classes have the same priority by default, but are given more or less scheduling attention based on their weight.

Since weight is used directly as a quantum, it represents the number of packet bytes of a class to process during each scheduling turn.

Specifying weight values lower than the mean packet size provides higher accuracy, but may degrade performance, as more scheduling turns are needed to achieve the desired distribution. Conversely, very large weights cause rough scheduling patterns at small time scales, unfairness, and possibly weight/priority inversion due to packet drops on the remaining classes.

When unsure about the mean packet size, an MTU-sized weight should guarantee that at least one packet is processed during each scheduling turn.

The default weight (quantum) for DWRR classes is 1500 bytes.

A higher priority level can be given to at most one class through priority. Such a class is not affected by weight since its traffic is always processed first. This class should be limited to low volume traffic where latency matters (such as voice or control data) as it can monopolize all the available bandwidth.

Note: quantum only reflects packet data; l2overhead is separately taken into account during transmission and has no impact on class deficit.

To put this into perspective, 15 packets of 100 bytes each are considered equal to a single packet of 1500 bytes, although more bandwidth will typically be consumed by a higher number of packets due to l2overhead.
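The quantum mechanics described above can be illustrated with a minimal deficit round robin sketch (a simplified model, not the fast path implementation):

```python
from collections import deque

def dwrr_schedule(queues, quantums, rounds):
    """Simplified deficit round robin: each turn, a backlogged class
    earns its quantum (weight) and dequeues packets while its deficit
    covers the next packet size."""
    deficits = [0] * len(queues)
    sent = []
    for _ in range(rounds):
        for i, q in enumerate(queues):
            if not q:
                continue          # an empty class accumulates no deficit
            deficits[i] += quantums[i]
            while q and q[0] <= deficits[i]:
                size = q.popleft()
                deficits[i] -= size
                sent.append((i, size))
    return sent

# Two classes with equal 1500-byte quantums: one sends 100-byte packets,
# the other MTU-sized ones; both get the same byte share per round.
order = dwrr_schedule([deque([100] * 30), deque([1500] * 2)],
                      [1500, 1500], rounds=2)
```

In this model, class 0 dequeues fifteen 100-byte packets per turn while class 1 dequeues a single 1500-byte packet, so both receive 1500 bytes of attention per round, as described above.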

fuzz

selects the fuzzer scheduling algorithm. All classes have equal priority and a fixed weight that cannot be configured; the priority and weight options are therefore not supported by the fuzzer scheduler. The classes are scheduled in a round-robin manner, each being served until its fixed weight is consumed.

<classes>

number of classes (input queues) to allocate for the scheduler. While possible values depend on the algorithm specified by the previous parameter, the minimum is typically 1 and there is usually no upper bound other than memory constraints (see CONFIG_MCORE_QOS_CLASS_MAX).

rate <rate>

(optional) link rate in bps, with an optional multiplier prefix (K/M/G). If set, the scheduler will implement a token bucket algorithm with parameters (rate, burst) to limit the sending of packets on the wire. If unset, the scheduler will try to submit as many packets as possible up to the burst size, regardless of the link rate.

burst <burst>

(optional) maximum bytes sent in a scheduler round, with an optional multiplier prefix (K/M/G). Default 48K.

l2overhead <l2overhead>

(optional) layer 2 overhead in bytes. Bytes added to the frame size to enforce rate and burst. Default 24 (ethernet CRC + IFG + preamble).

qsize <pkts>[,<pkts>[,...]]

(optional) size of each class queue, in packets. Default 256.

qrate <rate>[,<rate>[,...]]

(optional) configures traffic shaping on a list of classes, that is, the maximum rate in bps at which each of them can be dequeued by the scheduler.

A zero <rate> disables traffic shaping. This is the default.

See rate parameter for more information.

qburst <burst>[,<burst>[,...]]

(optional) maximum number of bytes dequeued from each class during a scheduler round. Only relevant to classes whose qrate is nonzero.

If zero or unspecified, a default value is inherited from burst.

priority <priority>[,<priority>[,...]]

(optional) force priority level of each class instead of relying on the default behavior of the chosen scheduler.

weight <weight>[,<weight>[,...]]

(optional) weight of each class relative to others for a given priority level. Some scheduling algorithms such as DWRR rely on that to balance bandwidth in case of congestion.

drop <x/y>[,<x/y>[,...]]

(optional) for each class, the fuzz scheduler forcefully drops x packets out of every y packets.

delay <delay>[,<us>[,...]]

(optional) per-class delay, in microseconds, introduced by the fuzz scheduler before packets are sent.

lsize <nb_pkts>[,<nb_pkts>[,...]]

(optional) per-class size of the list used by the fuzz scheduler to cache packets when the class queue is full because of the configured delay.

cpu <coreid>

(optional) core that will schedule packets for this interface. Default: scheduler core with the least schedulers attached, or, in the absence of scheduler core, the worker core with the least schedulers attached.
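The rate/burst limiting described for the rate parameter is a token bucket; the idea can be sketched as follows (a simplified model, not the fast path implementation):

```python
def token_bucket_send(rate_bps, burst_bytes, arrivals):
    """Simplified token bucket: tokens accrue at rate_bps / 8 bytes per
    second, capped at burst_bytes; a packet is sent only when enough
    tokens are available. arrivals is a list of (time_s, size_bytes)."""
    tokens = burst_bytes   # bucket starts full
    last = 0.0
    sent = []
    for t, size in arrivals:
        tokens = min(burst_bytes, tokens + (t - last) * rate_bps / 8)
        last = t
        if size <= tokens:
            tokens -= size
            sent.append((t, size))
    return sent

# 8 Kbps link (1000 bytes/s) with a 1500-byte burst: the second
# back-to-back packet finds an empty bucket and is dropped; by t=2s
# enough tokens have accrued to send the third packet.
sent = token_bucket_send(8000, 1500, [(0.0, 1500), (0.0, 1500), (2.0, 1500)])
```

In the fast path, packets exceeding the bucket are queued rather than dropped at this stage; drops only occur when a class queue overflows.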

Example

Configure a strict priority scheduler on interface ntfp2, with 4 classes, and a rate of 1 Gbps.

qos-sched-add ntfp2 prio 4 rate 1g
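Similarly, a DWRR scheduler with three classes and per-class weights could be configured as follows (the weight values are illustrative):

```
qos-sched-add ntfp2 dwrr 3 rate 1g weight 9000,3000,1500
```

With these weights, under congestion the three classes receive roughly 6:2:1 shares of the available bandwidth.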

Display scheduling configuration

Display all or a subset of QoS schedulers.

qos-sched [iface <ifname>|cpu <cpu>|index <index>|per-iface|raw]

By default, display all schedulers by scanning fast path cores.

Parameters

iface <ifname>

display the scheduler attached to the specified interface.

cpu <cpu>

display schedulers handled by the specified fast path core.

index <index>

display the scheduler with this index.

per-iface

display all schedulers by scanning interfaces.

raw

display all schedulers by scanning the scheduler table.

Example

<fp-0> qos-sched
=== core 2:
[1] ntfp2-vrf0 core=2 algo=prio classes=4 def-classid=0x4 rate=1G burst=48K l2overhead=24 status=active

Disable scheduling on a physical interface

Disable QoS on a physical interface and release all attached resources (classes, filters, meters…).

qos-sched-del iface <ifname>

or

qos-sched-del index <index>

Parameters

<ifname>

physical interface on which the scheduler is configured.

<index>

index in the table of scheduler objects.

Example

Disable QoS on interface ntfp2:

qos-sched-del iface ntfp2

Delete scheduler 1, and disable QoS on the interface to which it was attached.

qos-sched-del index 1

Configure QoS classes

QoS classes are automatically created when a scheduler is added. They cannot be individually added or deleted.

Display classes

Display all or a subset of QoS classes.

qos-class [iface <ifname>|cpu <cpu>|index <index>|per-iface|raw]

By default, display all classes by scanning fast path cores.

Parameters

iface <ifname>

display the classes attached to the specified interface.

cpu <cpu>

display classes handled by the specified fast path core.

index <index>

display the class with this index.

per-iface

display all classes by scanning interfaces.

raw

display all classes by scanning the class table.

Example

<fp-0> qos-class
=== core 2:
--- ntfp2-vrf0:
[1] classid=0x1 prio=1 weight=1 qsize=256 rate=0 burst=48K status=active
[2] classid=0x2 prio=2 weight=1 qsize=256 rate=0 burst=48K status=active
[3] classid=0x3 prio=3 weight=1 qsize=256 rate=0 burst=48K status=active
[4] classid=0x4 prio=4 weight=1 qsize=256 rate=0 burst=48K status=active

Configure QoS filters

An ordered table of QoS filter rules may be configured for each physical network interface with scheduling enabled. This table maps netfilter marks to class IDs.

A scheduler must be configured on the interface.

The default behavior when a packet matches no filter rule is to use the low order 16 bits of the mark as the packet classid. If this value does not match an existing class, then the packet is sent to the scheduler default class (lowest priority in the case of a strict priority scheduler).

The reserved class ID 0xffff references the direct queue: traffic sent to it bypasses QoS metering, scheduling and shaping, and is sent directly on the wire. The direct queue should be reserved for sporadic traffic that is unlikely to disturb the scheduled traffic.

Add a QoS filter rule

qos-filter-add <ifname> <prio> <mark>[/<mask>] [flag <flag>] <mark-to-classid>
where <flag> is:
    cp        (critical control plane traffic)
    nocp      (any packet except critical control plane traffic)
where <mark-to-classid> is:
   set-classid <val>
   and-classid <val>
   or-classid <val>
   xor-classid <val>
   xset-classid <val>[/<mask>]

Parameters

<ifname>

interface on which the scheduler is configured.

<prio>

priority of the filtering rule.

<mark>[/<mask>]

netfilter mark filter. The rule applies to all packets matching this mark filter.

<flag>

optional flag. cp matches critical control plane traffic (ARP, ICMP, routing protocols, IKE…). nocp matches data traffic and non-critical control plane traffic.

If a flag option is set, the filter matches traffic that satisfies both mark and flag conditions.

The filtering action <mark-to-classid> is similar to the netfilter MARK target. It can be one of the following, where <val> is an integer between 0 and 255. First the classid is initialized to the low-order 16 bits of the mark; then it is computed as follows:

set-classid <val>

the classid is set to <val>.

and-classid <val>

a logical AND is done between the classid and <val>.

or-classid <val>

a logical OR is done between the classid and <val>.

xor-classid <val>

a logical XOR is done between the classid and <val>.

xset-classid <val>[/<mask>]

first the bits of the classid in <mask> are reset, then a logical XOR is done between the classid and <val>.
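The classid computation described above can be modeled as follows (a sketch of the documented semantics, not the fast path implementation):

```python
def compute_classid(mark, action, val, mask=0xffff):
    """Model of the <mark-to-classid> actions: the classid starts as the
    low-order 16 bits of the netfilter mark, then the action applies."""
    classid = mark & 0xffff
    if action == "set":
        return val
    if action == "and":
        return classid & val
    if action == "or":
        return classid | val
    if action == "xor":
        return classid ^ val
    if action == "xset":
        # reset the bits covered by <mask>, then XOR with <val>
        return (classid & ~mask & 0xffff) ^ val
    raise ValueError(action)

# and-classid 0xff on mark 0x80000012 keeps the low-order byte: 0x12.
cid = compute_classid(0x80000012, "and", 0xff)
```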

Example

Send all packets whose netfilter mark's low-order 16 bits equal 0x0001 to class 0x2:

<fp-0> qos-filter-add ntfp2 10 1/0xffff set-classid 0x2

Send all packets whose netfilter mark's four high-order bits equal 0x8 to the class whose classid is the low-order byte of the mark:

<fp-0> qos-filter-add ntfp2 10 0x80000000/0xf0000000 and-classid 0xff
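A filter can also steer traffic to the direct queue (classid 0xffff), bypassing metering, scheduling and shaping entirely; assuming, for illustration, that mark 0x7 identifies sporadic management traffic:

```
<fp-0> qos-filter-add ntfp2 5 0x7/0xffff set-classid 0xffff
```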

Delete a QoS filter rule

qos-filter-del iface <ifname> [prio <prio>]

or:

qos-filter-del index <index>

Parameters

<ifname>

interface on which the QoS filter rule is configured.

<prio>

priority of the filter rule.

<index>

index in the table of filter rule objects.

Example

Delete the first QoS filter rule on interface ntfp2:

<fp-0> qos-filter-del iface ntfp2

Delete the QoS filter entry with index 2:

<fp-0> qos-filter-del index 2

Display QoS filter rules

qos-filter [iface <ifname>|cpu <cpu>|index <index>|per-iface|raw] [detail]

Parameters

iface <ifname>

display the QoS filter rules attached to the specified interface.

cpu <cpu>

display the QoS filter rules handled by the specified fast path core.

index <index>

display the QoS filter rule with this index.

per-iface

display all QoS filter rules by scanning interfaces.

raw

display all QoS filter rules by scanning the QoS filter rule table.

detail

display more details

Example

<fp-0> qos-filter iface ntfp2
=== core 2:
--- ntfp2-vrf0:
[1]    10: mark=0x1/0xff set-classid=0x1 status=active
[2]    10: mark=0x80000000/0xf0000000 and-classid=0xff status=active

Configure QoS metering and policing

A meter may be attached to each QoS scheduler and to each QoS class.

A meter implements a color-blind trTCM conformant to RFC 4115, in order to evaluate the conformance of traffic to a configured profile.

Packets that must be enqueued in a QoS class queue are submitted to the meter attached to the scheduler (if any), then to the meter attached to the class (if any). Depending on the color assigned by the meter (green, yellow or red), the packet may be accepted (with an optional TOS remarking) or dropped.
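The two-rate three-color marking can be sketched as follows (a simplified color-blind marker in the spirit of RFC 4115; token overflow subtleties are omitted, so this is not the exact fast path implementation):

```python
class TwoRateMeter:
    """Simplified color-blind two-rate three-color marker: a committed
    bucket (CIR/CBS) and an excess bucket (EIR/EBS), in bytes."""

    def __init__(self, cir, cbs, eir, ebs):
        self.cir, self.cbs = cir, cbs      # committed rate (B/s) / burst (B)
        self.eir, self.ebs = eir, ebs      # excess rate (B/s) / burst (B)
        self.tc, self.te = cbs, ebs        # buckets start full
        self.last = 0.0

    def color(self, t, size):
        dt = t - self.last
        self.last = t
        self.tc = min(self.cbs, self.tc + self.cir * dt)
        self.te = min(self.ebs, self.te + self.eir * dt)
        if size <= self.tc:
            self.tc -= size
            return "green"
        if size <= self.te:
            self.te -= size
            return "yellow"
        return "red"

# CIR 2000 B/s, CBS 1500 B, EIR 1000 B/s, EBS 1500 B: a burst of three
# 1500-byte packets at t=0 is colored green, yellow, then red.
m = TwoRateMeter(cir=2000, cbs=1500, eir=1000, ebs=1500)
colors = [m.color(0.0, 1500) for _ in range(3)]
```

With the default actions (green accept, yellow accept, red drop), only the third packet of this burst would be dropped.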

Configure a meter

Attach a meter to a scheduler or to a class.

qos-meter-add <ifname> <classid> <CIR> <CBS> <EIR> <EBS> pps|bps
              [green <action>] [yellow <action>] [red <action>]
where <action> is:
   set-tos <tos>
   and-tos <tos>
   or-tos <tos>
   xor-tos <tos>
   xset-tos <tos>[/<mask>]
   accept
   drop

Parameters

The metering action <action> for each color consists of dropping or accepting the packet, with an optional TOS remarking.

<ifname>

interface on which the meter is defined.

<classid>

classid of the class to which the meter is attached. If 0, then the meter is attached to the scheduler itself (common to all classes).

<CIR>

CIR, expressed in bps or pps, with an optional multiplier (K/M/G).

<CBS>

CBS, expressed in bytes or packets, with an optional multiplier (K/M/G).

<EIR>

EIR, expressed in bps or pps, with an optional multiplier (K/M/G).

<EBS>

EBS, expressed in bytes or packets, with an optional multiplier (K/M/G).

pps|bps

Unit used for CIR, CBS, EIR and EBS.

  • pps means that values are expressed in terms of packets:

    • rates are in pps (CIR and EIR)

    • burst sizes are in packets (CBS and EBS)

  • bps means that values are expressed in terms of bits.

    • rates are in bps (CIR and EIR)

    • burst sizes are in bytes (CBS and EBS)

By default, the actions are:

green accept yellow accept red drop

Example

Attach a simple policer for all traffic sent to the ntfp2 scheduler: drop traffic that exceeds 500 Mbps with a burst size of 48 Kbytes:

<fp-0> qos-meter-add ntfp2 0 500m 48k 0 0 bps

Attach a 2-level policer on class 2 of interface ntfp2, that remarks the TOS of yellow packets and drops red packets:

<fp-0> qos-meter-add ntfp2 2 200m 48k 20m 15k bps yellow set-tos 0x48

Delete a meter

Delete a meter from a scheduler or from a class.

qos-meter-del iface <ifname> <classid>

or

qos-meter-del index <index>

Parameters

<ifname>

interface on which the meter is defined.

<classid>

classid of the class to which the meter is attached. If 0, then the meter is attached to the scheduler itself (common to all classes).

<index>

index in the table of meter objects.

Example


Delete the meter attached to interface ntfp2:

<fp-0> qos-meter-del iface ntfp2 0

Delete the meter entry with index 2:

<fp-0> qos-meter-del index 2

Display meters

Display all or a subset of QoS meters.

qos-meter [iface <ifname>|cpu <cpu>|index <index>|per-iface|raw] [detail]

By default, display all meters by scanning fast path cores.

Parameters

iface <ifname>

display the meters attached to the specified interface.

cpu <cpu>

display the meters handled by the specified fast path core.

index <index>

display the meter with this index.

per-iface

display all meters by scanning interfaces.

raw

display all meters by scanning the meter table.

detail

display more details

Example

<fp-0> qos-meter
=== core 2:
--- ntfp2-vrf0:
[1] class=0 cir='500M bps' cbs='48K B' eir='0 bps' ebs='0 B' green='accept' yellow='accept' red='drop' status=active
[2] class=2 cir='200M bps' cbs='48K B' eir='20M bps' ebs='15K B' green='accept' yellow='set-tos 0x48' red='drop' status=active

QoS statistics

Display QoS statistics

Display all QoS statistics in a human readable form, except QoS filter statistics.

qos-stats [percore] [all]

Parameters

percore

display statistics per-core, when applicable. Note that metering and transmit statistics are not maintained per core.

all

Display all statistics (even those that are null).

Example

<fp-0> qos-stats all
sched iface=ntfp2-vrf0 [1]:
| enq_ok_pkts:199214
| enq_drop_noclass_pkts:0
| enq_drop_meter_pkts:0
| enq_drop_qfull_pkts:0
| xmit_ok_pkts:199214
| xmit_drop_pkts:0
| meter [1]:
| | green packets:199214 bytes:19522076
| | yellow packets:0 bytes:0
| | red packets:0 bytes:0
| class classid=0x1 [1]:
| | enq_ok_pkts:11
| | enq_drop_meter_pkts:0
| | enq_drop_qfull_pkts:0
| | xmit_ok_pkts:11
| | xmit_drop_pkts:0
| class classid=0x2 [2]:
| | enq_ok_pkts:141008
| | enq_drop_meter_pkts:0
| | enq_drop_qfull_pkts:0
| | xmit_ok_pkts:141008
| | xmit_drop_pkts:0
| | meter [2]:
| | | green packets:141008 bytes:13818784
| | | yellow packets:0 bytes:0
| | | red packets:0 bytes:0
| class classid=0x3 [3]:
| | enq_ok_pkts:0
| | enq_drop_meter_pkts:0
| | enq_drop_qfull_pkts:0
| | xmit_ok_pkts:0
| | xmit_drop_pkts:0
| class classid=0x4 [4]:
| | enq_ok_pkts:58195
| | enq_drop_meter_pkts:0
| | enq_drop_qfull_pkts:0
| | xmit_ok_pkts:58195
| | xmit_drop_pkts:0

This command navigates through all schedulers and for each scheduler dumps:

  • the scheduler statistics (packet enqueuing and transmission counters),

  • the optional scheduler meter statistics (green/yellow/red packet counters)

  • the class statistics (packet enqueuing and transmission counters)

  • the optional class meter statistics (green/yellow/red packet counters)

Display QoS statistics in json format

Display all QoS statistics in JSON format, except QoS filter statistics.

qos-stats-json

Example

<fp-0> qos-stats-json
[
  {
    "ifname": "ntfp2",
    "vrfid": 0,
    "index": 1,
    "enq_ok_pkts": 199214,
    "enq_drop_noclass_pkts": 0,
    "enq_drop_meter_pkts": 0,
    "enq_drop_qfull_pkts": 0,
    "xmit_ok_pkts": 199214,
    "xmit_drop_pkts": 0,
    "meter": {
      "green": {
        "packets": 199214,
        "bytes": 19522076
      },
      "yellow": {
        "packets": 0,
        "bytes": 0
      },
      "red": {
        "packets": 0,
        "bytes": 0
      }
    },
    "classes": [
      {
        "classid": 1,
        "index": 1,
        "enq_ok_pkts": 11,
        "enq_drop_meter_pkts": 0,
        "enq_drop_qfull_pkts": 0,
        "xmit_ok_pkts": 11,
        "xmit_drop_pkts": 0
      },
      {
        "classid": 2,
        "index": 2,
        "enq_ok_pkts": 141008,
        "enq_drop_meter_pkts": 0,
        "enq_drop_qfull_pkts": 0,
        "xmit_ok_pkts": 141008,
        "xmit_drop_pkts": 0,
        "meter": {
          "green": {
            "packets": 141008,
            "bytes": 13818784
          },
          "yellow": {
            "packets": 0,
            "bytes": 0
          },
          "red": {
            "packets": 0,
            "bytes": 0
          }
        }
      },
      {
        "classid": 3,
        "index": 3,
        "enq_ok_pkts": 0,
        "enq_drop_meter_pkts": 0,
        "enq_drop_qfull_pkts": 0,
        "xmit_ok_pkts": 0,
        "xmit_drop_pkts": 0
      },
      {
        "classid": 4,
        "index": 4,
        "enq_ok_pkts": 58195,
        "enq_drop_meter_pkts": 0,
        "enq_drop_qfull_pkts": 0,
        "xmit_ok_pkts": 58195,
        "xmit_drop_pkts": 0
      }
    ]
  }
]

This command displays the same information as qos-stats, in JSON format, so that the output can easily be parsed by other applications.
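For example, a monitoring script could parse this output to compute the total drops per scheduler. A minimal sketch, assuming the qos-stats-json output was captured into a string (the counter values below are illustrative and abbreviated from the example above):

```python
import json

# Abbreviated, illustrative sample of qos-stats-json output.
output = '''
[
  {
    "ifname": "ntfp2",
    "vrfid": 0,
    "enq_drop_noclass_pkts": 0,
    "enq_drop_meter_pkts": 0,
    "enq_drop_qfull_pkts": 0,
    "xmit_drop_pkts": 0,
    "classes": [
      {"classid": 1, "enq_drop_meter_pkts": 0,
       "enq_drop_qfull_pkts": 3, "xmit_drop_pkts": 0}
    ]
  }
]
'''

def total_drops(sched):
    """Sum every *_drop_* counter of a scheduler and its classes."""
    keys = ("enq_drop_noclass_pkts", "enq_drop_meter_pkts",
            "enq_drop_qfull_pkts", "xmit_drop_pkts")
    total = sum(sched.get(k, 0) for k in keys)
    for cls in sched.get("classes", []):
        total += sum(cls.get(k, 0) for k in keys)
    return total

stats = json.loads(output)
drops = {s["ifname"]: total_drops(s) for s in stats}
```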

Display QoS filter rules statistics

Display QoS filter statistics in a human readable form.

qos-filter-stats [iface <ifname>|cpu <cpu>|index <index>|per-iface|raw]
                 [detail] [percore] [all]

Parameters

iface <ifname>

display the QoS filter rules attached to the specified interface.

cpu <cpu>

display the QoS filter rules handled by the specified fast path core.

index <index>

display the QoS filter rule with this index.

per-iface

display all QoS filter rules by scanning interfaces.

raw

display all QoS filter rules by scanning the QoS filter rule table.

detail

display more details

percore

display statistics per-core.

all

display all statistics (even those that are null).

Example

<fp-0> qos-filter-stats iface ntfp2
=== core 2:
--- ntfp2-vrf0:
[1]    10: mark=0x1/0xff set-classid=0x1 status=active
  match_pkts:10
[2]    10: mark=0x80000000/0xf0000000 and-classid=0xff status=active
  match_pkts:8
<fp-0> qos-filter-stats index 2 percore all
[2] sched=1 prio=10 mark=0x80000000/0xf0000000 and-classid=0xff status=active
  match_pkts:
    match_pkts[1]:5
    match_pkts[2]:2
    match_pkts[3]:1
    Total:8

Display QoS and filters configuration and statistics in json format

Display all QoS configuration and statistics, including QoS filter statistics, in JSON format.

qos-dump-json

Example

<fp-0> qos-dump-json
{
  "ntfp2-vrf0": {
    "core": 1,
    "nb-classes": 3,
    "def-classid": 3,
    "rate": 8000000000,
    "burst": 12000,
    "l2overhead": 24,
    "algo": "dwrr",
    "status": "active",
    "enq_ok_pkts": 0,
    "enq_drop_noclass_pkts": 0,
    "enq_drop_meter_pkts": 0,
    "enq_drop_qfull_pkts": 0,
    "xmit_ok_pkts": 0,
    "xmit_drop_pkts": 0,
    "classes": {
      "1": {
        "prio": 1,
        "weight": 1500,
        "qsize": 512,
        "rate": 0,
        "burst": 0,
        "enq_ok_pkts": 0,
        "enq_drop_noclass_pkts": 0,
        "enq_drop_meter_pkts": 0,
        "enq_drop_qfull_pkts": 0,
        "xmit_ok_pkts": 0,
        "xmit_drop_pkts": 0,
        "set-filters": {
          "0": {
            "mark": "0x00000004",
            "mask": "0xffffffff",
            "classid_val": "0x0001",
            "classid_mask": "0xffff",
            "match_pkts": 0
          },
          "1": {
            "mark": "0x00000002",
            "mask": "0xffffffff",
            "classid_val": "0x0001",
            "classid_mask": "0xffff",
            "match_pkts": 0
          }
        }
      },
      "2": {
        "prio": 2,
        "weight": 1500,
        "qsize": 256,
        "rate": 4000000000,
        "burst": 48000,
        "enq_ok_pkts": 0,
        "enq_drop_noclass_pkts": 0,
        "enq_drop_meter_pkts": 0,
        "enq_drop_qfull_pkts": 0,
        "xmit_ok_pkts": 0,
        "xmit_drop_pkts": 0,
        "set-filters": {
          "2": {
            "mark": "0x00000010",
            "mask": "0xffffffff",
            "classid_val": "0x0002",
            "classid_mask": "0xffff",
            "match_pkts": 0
          }
        }
      },
      "3": {
        "prio": 2,
        "weight": 2000,
        "qsize": 256,
        "rate": 0,
        "burst": 0,
        "enq_ok_pkts": 0,
        "enq_drop_noclass_pkts": 0,
        "enq_drop_meter_pkts": 0,
        "enq_drop_qfull_pkts": 0,
        "xmit_ok_pkts": 0,
        "xmit_drop_pkts": 0,
        "meter": {
          "cir": 6000000000,
          "cbs": 1500,
          "eir": 500000000,
          "ebs": 1500,
          "unit": "bps",
          "green-packets": 0,
          "green-bytes": 0,
          "yellow-packets": 0,
          "yellow-bytes": 0,
          "red-packets": 0,
          "red-bytes": 0
        },
        "set-filters": {
          "-1": {
            "mark": "0x00000000",
            "mask": "0x00000000",
            "classid_val": "0x0003",
            "classid_mask": "0xffff",
            "match_pkts": 0
          }
        }
      }
    },
    "other-filters": {
      "5": {
        "mark": "0x80000000",
        "mask": "0xf0000000",
        "classid_val": "0x0000",
        "classid_mask": "0xff00",
        "match_pkts": 0
      }
    }
  }
}

Reset QoS statistics

Reset all QoS statistics.

qos-stats-reset

Example of simple shaping with control plane protection

Here is a simple example where the output traffic on interface eth2 is shaped at 100 Mbps, while protecting critical control plane traffic.

A Priority Queueing scheduler is configured with 2 classes, one for critical control plane traffic, one for other traffic. All traffic with the cp flag is directed to class 1, with the highest priority:

qos-sched-add eth2 prio 2 rate 100M
qos-filter-add eth2 10 0/0 flag cp set-classid 1

By default, all other traffic is sent to class 2, the class with the lowest priority. An explicit filter may however be added if marks are used by other fast path modules, in order to send all other traffic to class 2 regardless of its mark:

qos-filter-add eth2 10 0/0 set-classid 2