
Special Topic - QoS

Issue 07

Date 2018-06-18

HUAWEI TECHNOLOGIES CO., LTD.


Copyright © Huawei Technologies Co., Ltd. 2018. All rights reserved.
No part of this document may be reproduced or transmitted in any form or by any means without prior
written consent of Huawei Technologies Co., Ltd.

Trademarks and Permissions


Huawei and other Huawei trademarks are trademarks of Huawei Technologies Co., Ltd.

All other trademarks and trade names mentioned in this document are the property of their respective
holders.

Notice
The purchased products, services and features are stipulated by the contract made between Huawei and
the customer. All or part of the products, services and features described in this document may not be
within the purchase scope or the usage scope. Unless otherwise specified in the contract, all statements,
information, and recommendations in this document are provided "AS IS" without warranties, guarantees
or representations of any kind, either express or implied.
The information in this document is subject to change without notice. Every effort has been made in the
preparation of this document to ensure accuracy of the contents, but all statements, information, and
recommendations in this document do not constitute a warranty of any kind, express or implied.

Huawei Technologies Co., Ltd.


Address: Huawei Industrial Base
Bantian, Longgang
Shenzhen 518129
People's Republic of China

Website: http://www.huawei.com

Email: support@huawei.com

Issue 07 (2018-06-18) Huawei Proprietary and Confidential i


Copyright © Huawei Technologies Co., Ltd.

About This Document

Overview
This document describes QoS implementations on the Huawei NE40E, NE80E, NE5000E,
CX600, and ME60, and introduces hardware-based QoS implementations on the forwarding
plane.

Product Version

This document does not cover implementation differences between product versions or
detailed parameters. The board implementation differences listed in this document are based
on the VRPV5 version.


Change History


Version  Release Date  Change History

07       2018-06-18    The table "Trusted Priority Fields" is modified.
                       The section "3.5.1 Implementation Differences of BA Classification" is modified.

06       2017-10-25    In the chapter "3.3.3 BA and PHB":
                        "BA-symbol" is renamed "Remark-symbol".
                        The description of the trusted priority field of packets that are neither IP nor MPLS is modified.
                        Both the "Remark-symbol" and the "PHB-symbol" remain unchanged if the diffserv-mode { pipe | short-pipe } command is configured.
                        The description of "Rules for PHB Action" is modified.
                       The description of "Which Priority Field of the Inbound Packet Is Reset in PHB Action" is modified.
                       The description of "Rules for Marking the EXP Field of a Newly Added MPLS Header" is modified.
                       The section "3.5.1 Implementation Differences of BA Classification" is modified.
                       The section "6.7.1 Implementation Differences of MPLS DiffServ" is modified.
                       The section "DSCP Remarking Rules in MPLS VPN Scenarios" is added in the chapter "6.3 MPLS DiffServ Configuration".
                       The figures in the chapter "5.3 Congestion Avoidance" are modified.

05       2016-03-18    New chapter "4.5 Capabilities for Policing and Shaping" is added.
                       New chapter "9 QoS and Network Control Packets" is added.
                       New chapter "3.3.3 BA and PHB" is added.
                       New chapter "6.3 MPLS DiffServ Configuration" is added.
                       New chapter "3.6 FAQ about Classification and Marking" is added.
                       New chapter "4.6 FAQ about Policing and Shaping" is added.
                       The section "Impact of Queue Buffer on Jitter" in the chapter "5.4 Impact of Queue Buffer on Delay and Jitter" is modified.
                       The description of the processing order of the remark, service-class, and qos car commands is added in the chapter "8 Overall QoS Process on Routers".

04       2015-08-27    New chapter "3.5.1 Implementation Differences of BA Classification" is added.
                       New chapter "3.5.2 Implementation Differences of MF Classification" is added.
                       New chapter "4.4.1 Implementation Differences of Policing and Shaping" is added.
                       New chapter "5.6 QoS Implementations on Different Boards" is added.
                       New chapter "6.7.1 Implementation Differences of MPLS DiffServ" is added.

03       2015-08-21    The sentence "Therefore, Penultimate Hop Popping (PHP) is not supported in Pipe mode." is deleted from the chapter "6.2 MPLS DiffServ".
Special Topic - QoS Contents

Contents

About This Document....................................................................................................................ii


1 QoS Overview................................................................................................................................1
1.1 What Is QoS...................................................................................................................................................................1
1.2 QoS Specifications.........................................................................................................................................................2
1.3 Common QoS Specifications.........................................................................................................................................5
1.4 End-to-End QoS Service Models...................................................................................................................................6

2 DiffServ Overview........................................................................................................................9
2.1 DiffServ Model...............................................................................................................................................................9
2.2 DSCP and PHB.............................................................................................................................................................10
2.3 Four Components in the DiffServ Model.....................................................................................................................13

3 Classification and Marking.......................................................................................................15


3.1 Traffic Classifiers and Traffic Behaviors......................................................................................................................15
3.2 QoS Priority Fields.......................................................................................................................................................17
3.3 BA Classification..........................................................................................................................................................20
3.3.1 What Is BA Classification.........................................................................................................................................20
3.3.2 QoS Priority Mapping...............................................................................................................................................21
3.3.3 BA and PHB..............................................................................................................................................................28
3.4 MF Classification.........................................................................................................................................................36
3.4.1 What Is MF Classification.........................................................................................................................................36
3.4.2 Traffic Policy Based on MF Classification................................................................................................................38
3.4.3 ACL Rules in MF Classification...............................................................................................................................42
3.4.4 QPPB.........................................................................................................................................................................48
3.5 QoS Implementations on Different Boards..................................................................................................................55
3.5.1 Implementation Differences of BA Classification.....................................................................................................55
3.5.2 Implementation Differences of MF Classification....................................................................................................56
3.6 FAQ about Classification and Marking........................................................................................................................63
3.6.1 Is the Default-mapping Defined by Huawei or by RFC Standard?...........................................................................63
3.6.2 Is It Possible to Remark Several Fields Together?....................................................................................................63
3.6.3 When There Are Multiple Classifiers and Multiple Behaviors in One Traffic Policy, How Are They Evaluated?...63
3.6.4 What Is the Default Behavior If BA Classification Is Not Configured on the Inbound Interface?...........................64

4 Traffic Policing and Traffic Shaping.......................................................................................65


4.1 Traffic Policing.............................................................................................................................................................65


4.1.1 Overview...................................................................................................................................................................65
4.1.2 Token Bucket.............................................................................................................................................................66
4.1.3 CAR...........................................................................................................................................................................70
4.1.4 Traffic Policing Applications.....................................................................................................................................76
4.2 Traffic Shaping.............................................................................................................................................................79
4.3 Comparison Between Traffic Policing and Traffic Shaping.........................................................................................88
4.4 QoS Implementations on Different Boards..................................................................................................................89
4.4.1 Implementation Differences of Policing and Shaping...............................................................................................89
4.5 Capabilities for Policing and Shaping..........................................................................................................................91
4.6 FAQ about Policing and Shaping.................................................................................................................................95
4.6.1 When is Traffic Shaped? When Is Traffic Policed?...................................................................................................95
4.6.2 What Are the Differences Between Port-based Traffic Shaping and Queue-based Traffic Shaping?................96
4.6.3 Do Port-based Traffic Shaping and Queue-based Traffic Shaping Affect Other Functions?...........................96
4.6.4 How Long Is the Delay in the Worst Case When Traffic Shaping Is Used?......................................................97
4.6.5 What Is the Default Size of A Queue Buffer?...........................................................................................................97
4.6.6 What Is Default Behavior on Outbound Interface?...................................................................................................97

5 Congestion Management and Avoidance...............................................................................99


5.1 Traffic Congestion and Solutions.................................................................................................................................99
5.2 Queues and Congestion Management........................................................................................................................102
5.3 Congestion Avoidance................................................................................................................................................115
5.4 Impact of Queue Buffer on Delay and Jitter...............................................................................................................119
5.5 HQoS..........................................................................................................................................................................120
5.6 QoS Implementations on Different Boards................................................................................................................147

6 MPLS QoS...................................................................................................................................149
6.1 MPLS QoS Overview.................................................................................................................................................149
6.2 MPLS DiffServ...........................................................................................................................................................150
6.3 MPLS DiffServ Configuration...................................................................................................................................156
6.4 MPLS-TE...................................................................................................................................................................159
6.5 MPLS DiffServ-Aware TE.........................................................................................................................................167
6.6 MPLS VPN QoS.........................................................................................................................................................182
6.7 QoS Implementations on Different Boards................................................................................................................188
6.7.1 Implementation Differences of MPLS DiffServ.....................................................................................................188

7 ATM QoS....................................................................................................................................189
7.1 Basic Concepts of ATM..............................................................................................................................................189
7.2 QoS of ATMoPSN and PSNoATM.............................................................................................................................198

8 Overall QoS Process on Routers.............................................................................................205


9 QoS and Network Control Packets........................................................................................218
10 Description Agreement of Board Type...............................................................................225


11 References.................................................................................................................................227
11.1 IETF RFCs................................................................................................................................................................227
11.2 Broadband Forum Technical Specifications.............................................................................................................228
11.3 MEF Technical Specifications..................................................................................................................................229

12 Abbreviations...........................................................................................................................230


1 QoS Overview

About This Chapter


1.1 What Is QoS
1.2 QoS Specifications
1.3 Common QoS Specifications
1.4 End-to-End QoS Service Models

1.1 What Is QoS


As networks rapidly develop, services on the Internet become increasingly diversified. Apart
from traditional applications such as WWW, email, and File Transfer Protocol (FTP), the
Internet has expanded to encompass other services such as IP phones, e-commerce,
multimedia games, e-learning, telemedicine, videophones, videoconferencing, video on
demand (VoD), and online movies. In addition, enterprise users use Virtual Private Network
(VPN) technologies to connect their branches in different areas so that they can access each
other's corporate databases or manage remote devices through Telnet.

Figure 1.1 Internet services


Diversified services enrich users' lives but also increase the risk of traffic congestion on the
Internet. In the case of traffic congestion, services can encounter long delays or even packet
loss. As a result, services deteriorate or even become unavailable. Therefore, a solution to
resolve traffic congestion on the IP network is urgently needed.
The most straightforward way to resolve traffic congestion is to increase network
bandwidth. However, doing so is often impractical because of construction, operation, and
maintenance costs.
Quality of service (QoS), which uses policies to manage traffic congestion at low cost, was
therefore introduced. QoS aims to provide end-to-end service guarantees for differentiated
services and plays a critically important role on the Internet. Without QoS, service quality
cannot be guaranteed.

1.2 QoS Specifications


QoS provides customized service guarantees based on the following specifications:
 Bandwidth/throughput
 Delay
 Delay variations (Jitter)
 Packet loss rate

Bandwidth/Throughput
Bandwidth, also called throughput, refers to the maximum number of bits that can be
transmitted between two ends within a specified period (typically 1 second), or the average
rate at which specific data flows are transmitted between two network nodes. Bandwidth is
expressed in bit/s.
As services become increasingly diversified, Internet users expect higher bandwidths so they
can not only browse the Internet for news but also enjoy any number of popular applications.
The continuing information evolution delivers new and attractive applications, such as
new-generation multimedia, video transmission, databases, and IPTV, all of which demand
extremely high bandwidths. Therefore, bandwidth is always a major focus of network
planning and provides an important basis for network analysis.

Figure 1.2 Insufficient bandwidth


Two concepts, upstream rate and downstream rate, are closely related to bandwidth. The upstream rate
refers to the rate at which users can send or upload information to the network, and the downstream rate
refers to the rate at which the network sends data to users. For example, the rate at which users upload
files to the network is determined by the upstream rate, and the rate at which users download files is
determined by the downstream rate.
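As a back-of-the-envelope illustration of how bandwidth relates to transfer time (the function name is illustrative; real transfers are slower because of protocol overhead and congestion):

```python
def transfer_time_seconds(file_size_bytes: int, bandwidth_bps: int) -> float:
    """Ideal transfer time for a file over a link of the given bandwidth.

    Bandwidth is expressed in bit/s, as in the text above, so the file size
    in bytes must be converted to bits first.
    """
    return (file_size_bytes * 8) / bandwidth_bps

# Downloading a 100 MB file over a 20 Mbit/s downstream link:
print(transfer_time_seconds(100 * 10**6, 20 * 10**6))  # 40.0 seconds
```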

Delay
A delay refers to the period of time during which a packet is transmitted from a source to its
destination.
Use voice transmission as an example. A delay refers to the period during which words are
spoken and then heard. If a long delay occurs, voices become unclear or interrupted.
Most users are insensitive to a delay of less than 100 ms. If a delay ranging from 100 ms to
300 ms occurs, the speaker can sense slight pauses in the responder's reply, which can seem
annoying to both. If a delay greater than 300 ms occurs, both the speaker and responder
obviously sense the delay and have to wait for responses. If the speaker cannot wait but
repeats what has been said, voices overlap, and the quality of the conversation deteriorates
severely.

Figure 1.3 Long delay

Delay Variations (Jitter)


Jitter refers to the variation in the delays of packets in the same flow. If the interval between
a packet's arrival at a device and its departure from that device differs from packet to packet
in a flow, jitter occurs, and service quality is negatively affected.
Specific services, especially voice and video services, have zero tolerance for jitter, which
causes interruptions in voice or video playback.


Figure 1.4 High jitter

Jitter also affects protocol packet transmission. Specific protocol packets are transmitted at a
fixed interval. If high jitter occurs, the corresponding protocols alternate between Up and
Down states, adversely affecting service quality.
Jitter is unavoidable on networks, but service quality is not affected as long as jitter stays
within a specific tolerance. Buffers can absorb excess jitter but prolong delays.
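For intuition, jitter can be estimated from per-packet send and receive timestamps. The sketch below uses the smoothed interarrival-jitter estimator defined for RTP in RFC 3550; the function and variable names are illustrative:

```python
def interarrival_jitter(send_times, recv_times):
    """Running jitter estimate in the style of RTP (RFC 3550):
    a smoothed average of the absolute difference between the transit
    times of consecutive packets.  All times are in milliseconds.
    """
    jitter = 0.0
    prev_transit = None
    for s, r in zip(send_times, recv_times):
        transit = r - s  # one-way transit time of this packet
        if prev_transit is not None:
            d = abs(transit - prev_transit)
            jitter += (d - jitter) / 16  # RFC 3550 smoothing factor
        prev_transit = transit
    return jitter

# Packets sent every 20 ms; the third packet is delayed by an extra 5 ms.
print(interarrival_jitter([0, 20, 40, 60], [10, 30, 55, 70]))
```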

Packet Loss Rate


Packet loss occurs when one or more packets traveling across a network fail to reach their
destination. Slight packet loss does not affect services. For example, users are unaware of the
loss of a single bit or packet in voice transmission, and if a bit or packet is lost in video
transmission, the on-screen image may become momentarily garbled but recovers very
quickly. Even when TCP is used to transmit data, slight packet loss is not a problem because
TCP instantly retransmits lost packets. Severe packet loss, however, reduces transmission
efficiency. The packet loss rate indicates the severity of service interruptions on a network
and is a major concern for users.
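The packet loss rate itself is straightforward to compute from counters of sent and received packets; the helper name below is illustrative:

```python
def packet_loss_rate(sent: int, received: int) -> float:
    """Percentage of packets lost on the path, from end-point counters."""
    if sent == 0:
        return 0.0
    return (sent - received) / sent * 100

# 13 of 10,000 probe packets never arrived:
print(packet_loss_rate(10000, 9987))  # about 0.13 (%)
```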

Figure 1.5 High packet loss rate


1.3 Common QoS Specifications


Internet users have different requirements for the bandwidth, delay, jitter, and packet loss rate
for different services on IP networks. The following table lists QoS specifications for different
services.

Table 1.1 QoS specifications for different services


Service Type                           Bandwidth/Throughput  Delay                Jitter         Packet Loss Rate
Emails, file transmission, and Telnet  Low                   Not important        Not important  Not important
HTML web browsing                      Not specific          Medium               Medium         Not important
E-commerce                             Medium                Low                  Low            Low
VoIP and IPTV                          Low                   Low and predictable  Low            Low and predictable
Streaming media                        High                  Low and predictable  Low            Low and predictable

Common QoS Specifications at the MEF


As defined by the Metro Ethernet Forum (MEF), QoS specifications include availability,
delay, jitter, packet loss rate, and mean time to repair.

Service    Service Characteristics                          Service Performance
Class

Premium    Real-time IP telephony or IP video applications  Availability > 99.99%
                                                            Delay < 40 ms
                                                            Jitter < 1 ms
                                                            Loss < 0.1%
                                                            Restoration time: 50 ms

Silver     Bursty mission-critical data applications        Availability > 99.99%
           requiring low loss and delay (e.g., storage)     Delay < 50 ms
                                                            Jitter: N/A
                                                            Loss < 0.1%
                                                            Restoration time: 200 ms

Bronze     Bursty data applications requiring bandwidth     Availability > 99.90%
           assurances                                       Delay < 500 ms
                                                            Jitter: N/A
                                                            Loss: N/A
                                                            Restoration time: 2 s

Standard   Best-effort service                              Availability > 97.00%
                                                            Delay: N/A
                                                            Jitter: N/A
                                                            Loss: N/A
                                                            Restoration time: 5 s

Reference Values of QoS Specifications in the Industry


Service Type               Delay      Jitter    Loss Rate

Voice    Media             ≤ 50 ms    ≤ 10 ms   ≤ 1%
         Signaling         ≤ 100 ms   ≤ 10 ms   ≤ 0.1%
IPTV     Multicast         ≤ 1 s      ≤ 200 ms  ≤ 0.1%
         VoD               ≤ 10 s     ≤ 200 ms  ≤ 0.1%
FTP download               N/A        N/A       N/A
HTTP download              N/A        N/A       N/A
HTTP web browsing          ≤ 10 s     N/A       N/A
Games    UDP               ≤ 1000 ms  ≤ 50 ms   ≤ 5%
         TCP               ≤ 500 ms   ≤ 50 ms   ≤ 5%

1.4 End-to-End QoS Service Models


Network applications require successful end-to-end communication. Traffic may traverse
multiple routers on one network or even multiple networks before reaching the destination
host. Therefore, to provide an end-to-end QoS guarantee, an overall network deployment is
required. Service models are used to provide an end-to-end QoS guarantee based on specific
requirements.
QoS provides the following types of service models:
 Best-Effort
 Integrated service (IntServ)
 Differentiated service (DiffServ)


Best-Effort
Best-Effort is the default service model on the Internet and applies to various network
applications, such as FTP and email. It is the simplest service model. Without network
approval or notification, an application can send any number of packets at any time. The
network then makes its best attempt to send the packets but does not provide any guarantee
for performance.
The Best-Effort model applies to services that have low requirements for delay and reliability.

IntServ
In the IntServ model, an application uses signaling to apply for a specific level of service
from the network before sending packets. The application first notifies the network of its
traffic parameters and required service quality, such as bandwidth and delay. After receiving
a confirmation that sufficient resources have been reserved, the application sends its packets,
which must stay within the range described by the traffic parameters. The network maintains
a state for each packet flow and executes QoS behaviors based on this state to fulfill its
promise to the application.
IntServ uses the Resource Reservation Protocol (RSVP) as its signaling, which is similar to
an Asynchronous Transfer Mode Switched Virtual Circuit (ATM SVC) in that it adopts
connection-oriented transmission. RSVP is a transport-layer protocol but does not transmit
application-layer data. Like ICMP, RSVP functions as a network control protocol and
transmits resource reservation messages between nodes.
When RSVP is used for end-to-end communication, all routers on the path, including core
routers, maintain a soft state for each data flow. A soft state is a temporary state that is
refreshed periodically by RSVP messages. Routers check whether sufficient resources can be
reserved based on these RSVP messages. The path is available only when all involved
routers can provide sufficient resources.

Figure 1.6 IntServ model

IntServ uses RSVP to apply for resources across the entire network, requiring that all nodes
on the end-to-end path support RSVP. In addition, each node periodically exchanges state
information with its neighbors, consuming considerable resources. More importantly, every
node on the network maintains a state for each data flow, and a backbone network carries
millions of data flows. Therefore, the IntServ model suits edge networks but is not widely
applied on backbone networks.


DiffServ
DiffServ classifies packets on the network into multiple classes for differentiated processing.
When traffic congestion occurs, classes with a higher priority are given preference. Packets
can thus be differentiated to receive different packet loss rates, delays, and jitters. Packets of
the same class are aggregated and forwarded as a whole to ensure a consistent delay, jitter,
and packet loss rate.
In the DiffServ model, edge routers classify and aggregate traffic. Edge routers classify
packets based on a combination of fields, such as the source and destination addresses, the
precedence in the ToS field, and the protocol type. Edge routers also re-mark packets with
different priorities, which other routers can identify for resource allocation and traffic
control. Therefore, DiffServ is a class-based QoS model.
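As a rough illustration of this edge classification, the sketch below evaluates an ordered rule table over header fields, first match wins. The match fields, rule set, and class names are invented for the example, not taken from any device configuration:

```python
import ipaddress

# Ordered classifier rules: the first matching rule wins, mirroring how an
# edge router evaluates classification rules in order.  Field names and
# class names here are purely illustrative, not a device CLI.
RULES = [
    ({"protocol": "udp", "dst_port": 5060}, "voice-signaling"),
    ({"protocol": "udp"}, "voice-media"),
    ({"dst_ip": "10.1.1.0/24"}, "business-critical"),
]
DEFAULT_CLASS = "best-effort"

def classify(packet: dict) -> str:
    """Return the service class for a packet described by header fields."""
    for match, service_class in RULES:
        for field, wanted in match.items():
            if field == "dst_ip":
                if ipaddress.ip_address(packet.get("dst_ip", "0.0.0.0")) \
                        not in ipaddress.ip_network(wanted):
                    break
            elif packet.get(field) != wanted:
                break
        else:  # every field of the rule matched
            return service_class
    return DEFAULT_CLASS

print(classify({"protocol": "udp", "dst_port": 5060}))      # voice-signaling
print(classify({"protocol": "tcp", "dst_ip": "10.1.1.7"}))  # business-critical
print(classify({"protocol": "tcp", "dst_ip": "192.0.2.9"})) # best-effort
```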

Figure 1.7 DiffServ model

Compared with IntServ, DiffServ requires no signaling. In the DiffServ model, an application
does not need to apply for network resources before transmitting packets. Instead, the
application notifies the network nodes of its QoS requirements by setting QoS parameters in
IP packet headers. The network does not maintain a state for each data flow but provides
differentiated services based on the QoS parameters carried in each packet.
DiffServ takes full advantage of network flexibility and extensibility and transforms
information in packets into per-hop behaviors, greatly reducing signaling operations.
Therefore, DiffServ not only adapts to Internet service provider (ISP) networks but also
accelerates IP QoS applications on live networks.

Combination of IntServ and DiffServ


DiffServ defines only a few service classes, keeps little state information, and is easy to
implement and extend. Therefore, DiffServ is widely applied to IP backbone networks. However,
DiffServ provisions resources on each node independently and cannot provide flow-based
end-to-end QoS guarantees.
IntServ provides end-to-end QoS guarantees on IP networks but is inapplicable to IP backbone
networks due to low extensibility.
MPLS DS-TE, integrating the advantages of IntServ and DiffServ, optimizes network
resources and provides an end-to-end QoS guarantee for different services.
For detailed implementation of MPLS DS-TE, see 6.5 MPLS DiffServ-Aware TE.

2 DiffServ Overview

About This Chapter


2.1 DiffServ Model
2.2 DSCP and PHB
2.3 Four Components in the DiffServ Model

2.1 DiffServ Model


The DiffServ model is the most commonly used QoS model on IP networks. Technologies
described in this document are based on the DiffServ model.
DiffServ classifies incoming packets on the network edge and manages packets of the same
class as a whole to ensure the same transmission rate, delay, and jitter.
Network edge nodes mark packets with a specific service class in packet headers, and then
apply traffic management policies to the packets based on the service class. Interior nodes
perform specific behaviors for packets based on packet information.

Figure 1.1 DiffServ model

 DiffServ (DS) node: a network node that implements the DiffServ function.
 DS boundary node: connects to another DS domain or a non-DS-aware domain. The DS
boundary node classifies and manages incoming traffic.
 DS interior node: connects to DS boundary nodes and other interior nodes in one DS
domain. DS interior nodes implement simple traffic classification based on DSCP values,
and manage traffic.
 DS domain: a contiguous set of DS nodes that adopt the same service policy and per-hop
behavior (PHB). One DS domain covers one or more networks under the same
administration. For example, a DS domain can be an ISP's networks or an organization's
intranet. For an introduction to PHB, see the next section.
 DS region: consists of one or more adjacent DS domains. Different DS domains in one
DS region may use different PHBs to provide differentiated services. The service level
agreement (SLA) and traffic conditioning agreement (TCA) are used to allow for
differences between PHBs in different DS domains. The SLA or TCA specifies how to
maintain consistent processing of the data flow from one DS domain to another.
 SLA: The SLA refers to the services that the ISP promises to provide for individual
users, enterprise users, or adjacent ISPs that need intercommunication. The SLA covers
multiple dimensions, including the accounting protocol. The service level specification
(SLS) provides a technical description of the SLA. The SLS focuses on the traffic
conditioning specification (TCS) and provides detailed performance parameters, such as the
committed information rate (CIR), peak information rate (PIR), committed burst size
(CBS), and peak burst size (PBS).

2.2 DSCP and PHB


Per-hop behavior (PHB) is an important concept in the DiffServ model. The Internet
Engineering Task Force (IETF) redefined the type of service (ToS) for IPv4 packets and
Traffic Class (TC) for IPv6 packets as the Differentiated Service (DS) field for the DiffServ
model. The value of the DS field is the DiffServ code point (DSCP) value. Different DSCP
values correspond to different PHBs, as described in this section.

DSCP
RFC 1349 redefined the ToS field for IPv4 packets and added a C bit to the ToS field to
indicate the monetary cost. Then, RFC 2474 defined bits 0 to 5 in the ToS field of IPv4 packet
headers as the DSCP value and renamed the ToS field the DS field.

Figure 1.1 DSCP

In an IPv4 packet, the six left-most bits (0 to 5) in the DS field are defined as the DSCP value,
and the two right-most bits (6 and 7) are reserved bits. Bits 0 to 2 are the Class Selector Code
Point (CSCP) value, indicating a class of DSCP. Devices that support the DiffServ function
perform forwarding behaviors for packets based on the DSCP value.
In IPv6 packet headers, two fields are related to QoS: TC and Flow Label (FL). The TC field
contains eight bits and functions the same as the ToS field in IPv4 packets to identify the
service type. The FL field contains 20 bits and identifies packets in the same data flow. The
FL field, together with the source and destination addresses, uniquely identifies a data flow.
All packets in one data flow share the same FL field, and devices can rapidly process packets
in the same data flow.
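The field layouts above reduce to simple bit arithmetic. The following Python sketch (the helper names are ours, not any Huawei API) extracts the DSCP and CSCP values from a ToS byte:

```python
def dscp_from_tos(tos: int) -> int:
    """Extract the 6-bit DSCP: the six left-most bits (0 to 5) of the ToS byte."""
    return (tos >> 2) & 0x3F

def cscp_from_dscp(dscp: int) -> int:
    """The three left-most DSCP bits form the Class Selector Code Point (CSCP)."""
    return (dscp >> 3) & 0x07
```

For example, a ToS byte of 0xB8 yields DSCP 46 (the EF code point) and CSCP 5. The same arithmetic applies to the 8-bit TC field of an IPv6 header.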

PHB
Per-hop Behavior (PHB) is a description of the externally observable forwarding treatment
applied at a differentiated services-compliant node to a behavior aggregate. A DS node
performs the same PHB for packets with the same DSCP value. The PHB defines some
forwarding behaviors but does not specify the implementation mode.
At present, the IETF defines four types of PHBs: Class Selector (CS), Expedited Forwarding
(EF), Assured Forwarding (AF), and best-effort (BE). BE PHB is the default.

Table 1.1 Mapping of PHBs and DSCP values

 CS (RFC 2474)
DSCP value: XXX000, where each X is 0 or 1. When all Xs are 0s, this PHB is the default PHB.
For the CS PHB, the DSCP value equals the IP precedence value multiplied by 8. For example,
CS6 = 6 x 8 = 48 and CS7 = 7 x 8 = 56.
Description: The CS PHB indicates the same service class as the IP precedence value.
NOTE: RFC 2474 reserves all values of the XXX000 format so that DiffServ-incapable devices,
which parse only the three left-most bits in the ToS field, remain compatible with other
devices.
 EF (RFC 2598)
DSCP value: 101110.
Description: The EF PHB defines that the rate at which packets are sent from any DS node must
be higher than or equal to the specified rate. The EF PHB cannot be re-marked in the DS
domain but can be re-marked on the edge nodes. The EF PHB functions the same as a virtual
leased line to provide services with a low packet loss rate, delay, and jitter and a specific
bandwidth. The EF PHB applies to real-time services that require a short delay, low jitter,
and low packet loss rate, such as video, voice, and video conferencing.
 AF (RFC 2597)
DSCP value: XXXYY0, where each X and Y is 0 or 1. XXX indicates the IP precedence, and YY
indicates the drop precedence. The larger the YY value, the higher the drop precedence.
Currently, four AF classes with three levels of drop precedence in each class are defined for
general use. An IP packet that belongs to AF class i and has drop precedence j is marked with
the AF codepoint AFij, where i ranges from 1 to 4 and j ranges from 1 to 3.
Description: The AF PHB defines that traffic that exceeds the bandwidth specification (as
agreed to by users and an ISP) can still be forwarded. Traffic within the bandwidth
specification is forwarded as required, and traffic that exceeds the specification is
forwarded at a lower priority. Carriers provide differentiated bandwidth resources for the AF
PHB. After the AF PHB is allocated sufficient bandwidth, other data can consume the remaining
bandwidth. The AF PHB applies to services that require a short delay, low packet loss rate,
and high reliability, such as e-commerce and VPN services.
 BE (RFC 2474)
DSCP value: 000000.
Description: The BE PHB focuses only on whether packets can reach the destination, regardless
of the transmission performance. Traditional IP packets are transmitted in BE mode. Every
router must support the BE PHB.
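The DSCP patterns in the table reduce to simple formulas. The following sketch (our own helper names, not a standard API) encodes them:

```python
def cs_dscp(n: int) -> int:
    """CS PHB: DSCP = IP precedence x 8 (the XXX000 bit pattern)."""
    return n << 3

def af_dscp(i: int, j: int) -> int:
    """AFij (class i = 1..4, drop precedence j = 1..3): DSCP = 8*i + 2*j (XXXYY0)."""
    return (i << 3) | (j << 1)

EF_DSCP = 0b101110  # 46
BE_DSCP = 0b000000  # 0
```

For example, cs_dscp(6) returns 48 (CS6), and af_dscp(1, 1) returns 10 (AF11).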

Table 1.2 Common PHB applications

 CS6 and CS7: used for protocol packets, such as OSPF and BGP packets, by default. If these
packets are not forwarded, protocol services are interrupted.
 EF: used for voice services. Voice services require a short delay, low jitter, and low
packet loss rate, and are second only to protocol packets in terms of importance.
NOTE: The bandwidth dedicated to the EF PHB must be restricted so that other services can
also use bandwidth.
 AF4: used for signaling of voice services.
NOTE: Signaling is used for call control, during which a seconds-long delay is tolerable,
but no delay is allowed during a conversation. Therefore, the processing priority of voice
services is higher than that of signaling.
 AF3: used for BTV services of IPTV. Live programs are real-time services, requiring
continuous bandwidth and a large throughput guarantee.
 AF2: used for VoD services of IPTV. VoD services require lower real-time performance than
BTV services and allow delays or buffering.
 AF1: used for leased-line services, which are second to IPTV and voice services in terms
of importance. Bank-related premium services, one type of leased-line service, can use the
AF4 or even the EF PHB.
 BE: applies to best-effort services on the Internet, such as email and telnet services.

2.3 Four Components in the DiffServ Model


The DiffServ model consists of four QoS components. Traffic classification and re-marking
provide a basis for differentiated services. Traffic policing and shaping, congestion
management, and congestion avoidance control network traffic and resource allocation in
different ways and allow the system to provide differentiated services.
 Classification and marking: traffic classification groups packets into classes without
modifying them. Traffic marking sets different priorities for packets and therefore
modifies them.

Traffic marking refers to external re-marking, which is implemented on outgoing packets. Re-marking
modifies the priority field of packets to relay QoS information to the next-hop device.
Internal marking is used for internal processing and does not modify packets. Internal marking is
implemented on incoming packets so that the device can process the packets based on the marks before
forwarding them. The concept of internal marking is discussed later in this document.
 Policing and Shaping: restricts the traffic rate to a specific value. When traffic exceeds
the specified rate, traffic policing drops excess traffic, and traffic shaping buffers excess
traffic.
 Congestion management: places packets in queues for buffering when traffic
congestion occurs and determines the forwarding order based on a specific scheduling
algorithm.
 Congestion avoidance: monitors network resources. When network congestion
intensifies, the device proactively drops packets to regulate traffic so that the network is
not overloaded.
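The policing behavior described above can be sketched as a token bucket: packets that find enough tokens are forwarded, and excess packets are dropped (a shaper would buffer them instead). This is a simplified illustration; the class name and parameters (cir_bps, cbs_bytes) are ours, not a Huawei API.

```python
class TokenBucketPolicer:
    """Single-rate token-bucket policer: conforming packets pass, excess is dropped."""

    def __init__(self, cir_bps: float, cbs_bytes: float):
        self.rate = cir_bps / 8.0   # token refill rate in bytes per second
        self.burst = cbs_bytes      # bucket depth (committed burst size)
        self.tokens = cbs_bytes     # the bucket starts full
        self.last = 0.0             # time of the previous check

    def allow(self, pkt_len: int, now: float) -> bool:
        # Refill tokens for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= pkt_len:
            self.tokens -= pkt_len
            return True             # conforming: forward
        return False                # excess: drop (a shaper would queue it)
```

With a CIR of 8000 bit/s (1000 bytes/s) and a CBS of 1000 bytes, two back-to-back 600-byte packets at t = 0 yield one forward and one drop; after one second of refill the next packet conforms again.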
The four QoS components are performed in a specific order, as shown in the following figure.

Figure 1.2 QoS implementation

The QoS components are performed at different locations on the network, as shown in the
following figure. In principle, traffic classification, traffic re-marking, and traffic policing are
implemented on the inbound user-side interface, and traffic shaping is implemented on the
outbound user-side interface (if packets of various levels are involved, queue scheduling and a
packet drop policy must be configured on the outbound user-side interface). Congestion
management and congestion avoidance are configured on the outbound network-side
interface.

Figure 1.3 Four QoS Components

3 Classification and Marking

About This Chapter


3.1 Traffic Classifiers and Traffic Behaviors
3.2 QoS Priority Fields
3.3 BA Classification
3.4 MF Classification
3.5 QoS Implementations on Different Boards
3.6 FAQ about Classification and Marking

3.1 Traffic Classifiers and Traffic Behaviors


Traffic Classifiers
Traffic classification allows a device to classify packets entering a DiffServ domain so that
the device can identify the service type of each packet and apply the appropriate actions to
it.

Traffic Classification Techniques


Packets can be classified based on QoS priorities (for details, see section 3.2 QoS Priority
Fields), packet information such as the source IP address, destination IP address, MAC
address, IP protocol, and port number, or specifications in an SLA. Accordingly, traffic
classification falls into behavior aggregate classification and multi-field
classification. For details, see sections 3.3 BA Classification and 3.4 MF Classification.
After packets are classified at the DiffServ domain edge, internal nodes provide differentiated
services for classified packets. A downstream node can accept and continue the upstream
classification or classify packets based on its own criteria.

Traffic Behaviors
A traffic classifier is configured to provide differentiated services and must be associated with
a certain traffic control or resource allocation behavior, which is called a traffic behavior.
The following traffic behaviors can be implemented individually or jointly for classified
packets on a Huawei router.

 Marking
− External marking (re-marking): sets or modifies the priority of packets to relay QoS
information to the next device.
− Internal marking: sets the class of service (CoS) and drop precedence of packets for
internal processing on a device so that packets can be placed directly in specific queues.
Setting the drop precedence of packets is also called coloring packets. When traffic
congestion occurs, packets in the same queue are provided with differentiated buffer
services based on their colors.
 Traffic policing: restricts the traffic rate to a specific value. When traffic exceeds
the specified rate, excess traffic is dropped.
 Congestion management: places packets in queues for buffering. When traffic congestion
occurs, the device determines the forwarding order based on a specific scheduling algorithm
and performs traffic shaping for outgoing traffic to meet users' requirements on network
performance.
 Congestion avoidance: monitors network resources. When network congestion intensifies,
the device drops packets to prevent the network from being overloaded.
 Packet filtering: functions as the basic traffic control method. The device determines
whether to drop or forward packets based on traffic classification results.
 Policy-based routing (PBR, also called redirection): determines whether packets are
dropped or forwarded based on the following policies:
− Drop PBR: a specific IP address is matched against the forwarding table. If an outbound
interface is matched, packets are forwarded; otherwise, packets are dropped.
− Forward PBR: a specific IP address is matched against the forwarding table. If an
outbound interface is matched, packets are forwarded; otherwise, packets are forwarded
based on their destination IP addresses.
 Load balancing: works session by session or packet by packet, as configured. Load
balancing applies only to packets that have multiple forwarding paths available. There are
two possible scenarios:
− Multiple forwarding entries exist.
− Only one forwarding entry exists, but a trunk interface that has multiple member
interfaces functions as the outbound interface in the forwarding entry.
 Packet fragmentation: modifies the Don't Fragment (DF) field of packets.
NOTE: Some packets sent from user terminals are 1500 bytes long, and PCs generally set the
DF bit to 1 in these packets. When such packets traverse network devices at the access,
aggregation, or core layer, additional information is added so that the packet length
exceeds the maximum transmission unit (MTU) of 1500 bytes. Because a DF value of 1 specifies
that a datagram must not be fragmented in transit, such packets are dropped. To prevent this
packet loss and keep users unaware of any change, the device involved is allowed to reset
the DF field in the IP header.
 URPF (Unicast Reverse Path Forwarding): prevents source address spoofing attacks. URPF
obtains the source IP address and the inbound interface of a packet and checks them against
the forwarding table. If the source IP address is not found, URPF considers it a spoofed
address and drops the packet.
 Flow mirroring: allows a device to copy an original packet from a mirrored port and send
the copy to an observing port.
 Flow sampling: collects information about a specific data flow, such as timestamps, the
source address, destination address, source port number, destination port number, ToS
value, protocol number, packet length, and inbound interface, for example, to intercept
specific users.
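The URPF behavior above can be sketched as a strict-mode check: the route back to the packet's source must point out the interface on which the packet arrived. The fib structure and helper below are hypothetical simplifications (a real FIB uses a longest-prefix-match trie, not a linear scan):

```python
import ipaddress

def urpf_check(fib: dict, src_ip: str, in_iface: str) -> bool:
    """Strict URPF sketch: `fib` maps prefix strings to outbound interfaces.
    Returns True only when the longest-prefix match for the source address
    points back out the interface the packet arrived on."""
    addr = ipaddress.ip_address(src_ip)
    best = None
    for prefix, iface in fib.items():
        net = ipaddress.ip_network(prefix)
        # Keep the most specific (longest) matching prefix.
        if addr in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, iface)
    return best is not None and best[1] == in_iface
```

For example, with routes 10.0.0.0/8 via eth0 and 10.1.0.0/16 via eth1, a packet from 10.1.2.3 arriving on eth1 passes the check, while the same source arriving on eth0 (or an unrouted source) is dropped.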

3.2 QoS Priority Fields


DiffServ provides differentiated services for packets that carry different QoS information in
specific fields. This section describes these fields.

ToS Field in an IPv4 Packet Header


In an IPv4 packet header, the three left-most bits (IP precedence) in the ToS field or the six
left-most bits (DSCP field) in the ToS field are used to identify the QoS priority. The IP
precedence classifies packets into a maximum of eight classes, and the DSCP field classifies
packets into a maximum of 64 classes.

Figure 1.1 ToS field in an IPv4 packet header

RFC 1349 defines bits in the ToS field as follows:


 Bits 0 to 2 refer to the precedence. The value ranges from 0 to 7. The larger the value, the
higher the precedence. The largest values (7 and 6) are reserved for routing and updating
network control communications. User-level applications can use only the precedence
levels from 0 to 5.
 The D bit refers to the delay. The value 0 indicates no specific requirements for delay
and the value 1 indicates that the network is required to minimize delay.
 The T bit refers to the throughput. The value 0 indicates no specific requirements for
throughput and the value 1 indicates that the network is required to maximize
throughput.
 The R bit refers to the reliability. The value 0 indicates no specific requirements for
reliability and the value 1 indicates that the network is required to maximize reliability.
 The C bit refers to the monetary cost. The value 0 indicates no specific requirements for
monetary cost and the value 1 indicates that the network is required to minimize
monetary cost.
 Bits 6 and 7 are reserved.
RFC 2474 defines bits 0 to 5 as the DSCP field, in which the three left-most bits indicate
the class selector code point (CSCP) value, which identifies a class of DSCP. Devices that
support DiffServ apply PHBs to packets based on the DSCP value in the packets. For details
about DSCP and PHB, see 2.2 DSCP and PHB.

TC Field in an IPv6 Header


Two fields in an IPv6 header are related to QoS, Traffic Class (TC), and Flow Label (FL). The
TC field has eight bits and functions the same as the ToS field in an IPv4 packet header to
identify the service type. The FL field has 20 bits and is used to identify packets in the same
data flow. The FL, together with the source and destination addresses, identifies a data flow.
All packets in one data flow share the same FL so that a device can process the packets that
have the same QoS requirement as a whole.

Figure 1.1 TC field in an IPv6 header

EXP Field in an MPLS Header


Multiprotocol Label Switching (MPLS) packets are classified based on the EXP field value.
The EXP field in MPLS packets is similar in function to the ToS field or DSCP field in IP
packets.

Figure 1.1 EXP field in an MPLS header

The EXP field is 3 bits long and indicates precedence. The value ranges from 0 to 7 with a
larger value reflecting a higher precedence.
The precedence field in an IP header also has three bits, so one precedence value in an IP
header corresponds exactly to one precedence value in an MPLS header. The DSCP field in an IP
header, however, has 6 bits, unlike the 3-bit EXP field, so multiple DSCP values correspond to
one EXP value. By convention, the three left-most bits in the DSCP field (the CSCP value) map
to the EXP value, regardless of the three right-most bits.
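This many-to-one mapping can be sketched as follows (the helper name is ours, not a standard API):

```python
def exp_from_dscp(dscp: int) -> int:
    """Map a 6-bit DSCP to a 3-bit EXP by keeping the three left-most bits (the CSCP)."""
    return (dscp >> 3) & 0x7
```

For example, the AF3 code points 26, 28, and 30 all map to EXP 3, while EF (46) maps to EXP 5.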

802.1p Value in a VLAN Packet


VLAN packets are classified based on the 802.1p value. The PRI field (802.1p value) in a
VLAN packet header identifies the QoS requirement, as shown in the following figure.

Figure 1.1 PRI field in a VLAN packet header

The PRI field is 3 bits long and indicates precedence. The value ranges from 0 to 7 with a
larger value reflecting a higher precedence.

Table 1.1 Mapping between the 802.1p/IP Precedence value and applications

802.1p/IP Precedence   Typical Applications
7, 6                   Reserved for network control packets (such as routing protocol packets)
5                      Voice streams
4                      Video conferencing
3                      Call signaling
2                      High-priority data streams
1                      Medium-priority data streams
0                      Best effort (BE) data streams

CLP Field in an ATM Header


The CLP field in an ATM header is one bit long and identifies the cell loss priority for
congestion management. Cells with the CLP value of 1 are dropped preferentially when traffic
congestion occurs.

Figure 1.1 CLP field in an ATM header

3.3 BA Classification
3.3.1 What Is BA Classification
Behavior Aggregate (BA) classification allows the device to classify packets based on related
values as follows:
 DSCP value of IPv4 packets
 TC value of IPv6 packets
 EXP value of MPLS packets
 802.1p value of VLAN packets
 CLP value of ATM packets
BA classification simply identifies traffic that carries a specific priority or class of
service (CoS) so that external priorities can be mapped to internal ones.
With BA classification, the device trusts the priority carried in incoming packets and maps
it to a service-class and color based on a priority mapping table. Before packets are sent
out, the service-class and color are mapped back to a priority. For details about priority
mapping, see section 3.3.2 QoS Priority Mapping.
To configure BA classification on a Huawei device, configure a DiffServ (DS) domain, define
a priority mapping table for the DS domain, and bind the DS domain to a trusted interface.
BA classification applies to DS interior nodes.

3.3.2 QoS Priority Mapping


The priority field in a packet varies with network type. For example, a packet carries the
802.1p field on a VLAN, the DSCP field on an IP network, and the EXP field on an MPLS
network. To provide differentiated services for different packets, the device maps the QoS
priority of incoming packets to the scheduling precedence (also called service-class) and drop
precedence (also called color), and then performs congestion management based on the
service-class and congestion avoidance based on the color. Before forwarding packets out, the
device maps the service-class and color of the packets back to the QoS priority, which
provides a basis for other devices to process the packets.
A device maps the QoS priority to the service-class and color for incoming packets and maps
the service-class and color back to the QoS priority for outgoing packets, as shown in the
following figure.

Figure 1.2 QoS priority mapping

Service-class
Service-class refers to the internal service class of packets. Eight service-class values are
available: class selector 7 (CS7), CS6, expedited forwarding (EF), assured forwarding 4
(AF4), AF3, AF2, AF1, and best effort (BE). The service-class determines the type of queue to
which packets belong.
The priority of a queue with a specific service-class depends on the scheduling algorithm:
 If the queues of all eight service-classes use priority queuing (PQ) scheduling, the
queues are served in descending order of priority: CS7 > CS6 > EF > AF4 > AF3 > AF2 >
AF1 > BE.
 If the BE queue uses PQ scheduling (rare on live networks) but the other seven queues use
weighted fair queuing (WFQ) scheduling, the BE queue has the highest priority.
 If the queues of all eight service-classes use WFQ scheduling, priority is irrelevant to
the scheduling result.
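The PQ behavior described in the first bullet can be sketched as follows; the function and queue structure are illustrative, not a Huawei implementation:

```python
from typing import Optional

# Strict-priority service order of the eight service-classes.
PQ_ORDER = ["CS7", "CS6", "EF", "AF4", "AF3", "AF2", "AF1", "BE"]

def pq_dequeue(queues: dict) -> Optional[str]:
    """Strict priority (PQ): always serve the head of the highest-priority
    non-empty queue; a lower class is served only when all higher ones are empty."""
    for cls in PQ_ORDER:
        if queues.get(cls):
            return queues[cls].pop(0)
    return None
```

For example, if both the EF and BE queues hold packets, every call serves EF first; BE packets leave only once EF (and all classes above BE) are drained.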

More details about queue scheduling are provided later in this document.

Color
Color, referring to the drop precedence of packets on a device, determines the order in which
packets in one queue are dropped when traffic congestion occurs. As defined by the IETF (in
RFC 2697 and RFC 2698), the color of a packet can be green, yellow, or red.
Drop precedences are compared based on the configured parameters. For example, if at most 50%
of the buffer is configured to store green packets whereas at most 100% of the buffer is
configured to store red packets, the drop precedence of green packets is higher than that of
red packets.

Trusting the Priority of Received Packets


As described in section 3.1 Traffic Classifiers and Traffic Behaviors, after packets are
classified on the DiffServ domain edge, interior nodes provide differentiated services for
the classified packets. A downstream node can either accept the classification result
calculated on an upstream node or reclassify packets based on its own criteria. If the
downstream node accepts the upstream classification result, the downstream node trusts the
QoS priority (DSCP, IP precedence, 802.1p, or EXP) of packets received on the interface
connecting to the upstream node. This is called trusting the interface.
A Huawei router does not trust the interface by default. After receiving a packet, a Huawei
router re-marks the service-class of the packet as BE and the color of the packet as green,
regardless of the QoS priority that the packet carries.

DS Domain and Priority Mapping Table


A Huawei router can perform QoS priority mapping based on the priority mapping table.
Different DiffServ (DS) domains can have their own mapping tables. Administrators of a
device can define DS domains and specify differentiated mappings for the DS domains.
A Huawei router allows administrators to define a DS domain and has predefined the
following domains:
 Default domain: describes the default mappings between the external priority, service-
class, and color of IP, VLAN, and MPLS packets.
 5p3d domain: describes the mappings between the 802.1p value, service-class, and color
of VLAN packets. This domain applies to 802.1ad-compliant local area networks (LANs)
that support five scheduling precedences and three drop precedences.

The IETF defines eight PHB classes (CS7, CS6, EF, AF4, AF3, AF2, AF1, and BE), four of which
(AF1 through AF4) each support three drop precedences. Therefore, the total number of PHBs is 16
(4 + 4 x 3 = 16).
There are 64 DSCP values, enough for each PHB to correspond to a DSCP value. However, there are
only eight 802.1p values, so some PHBs have no corresponding 802.1p value. Generally, the eight
802.1p values correspond to the eight scheduling precedences. IEEE 802.1ad defines the STAG and
CTAG formats, with the STAG supporting a Drop Eligible Indicator (DEI) while the CTAG does not.
IEEE 802.1ad provides a 3-bit Priority Code Point (PCP) field that applies to both the CTAG and
STAG to specify the scheduling and drop precedences. PCP allows an 802.1p value to indicate both
the scheduling and drop precedences, and introduces the concepts of 8p0d, 7p1d, 6p2d, and 5p3d,
where p indicates the number of scheduling precedences and d indicates the number of drop
precedences. For example, 5p3d supports five scheduling precedences and three drop precedences.

The default and 5p3d domains exist by default and cannot be deleted, and only the default
domain can be modified.

Priority Mapping Table for the Default Domain


The mapping between the external priority, service-class, and color on a Huawei router is
described as follows:

Table 1.1 Default mapping from the DSCP value to the service-class and color

DSCP    Service-class  Color    |  DSCP    Service-class  Color
0~7     BE             Green    |  28      AF3            Yellow
8       AF1            Green    |  29      BE             Green
9       BE             Green    |  30      AF3            Red
10      AF1            Green    |  31      BE             Green
11      BE             Green    |  32      AF4            Green
12      AF1            Yellow   |  33      BE             Green
13      BE             Green    |  34      AF4            Green
14      AF1            Red      |  35      BE             Green
15      BE             Green    |  36      AF4            Yellow
16      AF2            Green    |  37      BE             Green
17      BE             Green    |  38      AF4            Red
18      AF2            Green    |  39      BE             Green
19      BE             Green    |  40      EF             Green
20      AF2            Yellow   |  41~45   BE             Green
21      BE             Green    |  46      EF             Green
22      AF2            Red      |  47      BE             Green
23      BE             Green    |  48      CS6            Green
24      AF3            Green    |  49~55   BE             Green
25      BE             Green    |  56      CS7            Green
26      AF3            Green    |  57~63   BE             Green
27      BE             Green    |
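The default-domain mapping above can be expressed as a small lookup. This is a sketch of the table, not a device API; every DSCP value not listed explicitly falls back to BE/Green:

```python
# Non-BE entries of the default-domain DSCP -> (service-class, color) mapping.
_SPECIAL = {
    8:  ("AF1", "Green"), 10: ("AF1", "Green"), 12: ("AF1", "Yellow"), 14: ("AF1", "Red"),
    16: ("AF2", "Green"), 18: ("AF2", "Green"), 20: ("AF2", "Yellow"), 22: ("AF2", "Red"),
    24: ("AF3", "Green"), 26: ("AF3", "Green"), 28: ("AF3", "Yellow"), 30: ("AF3", "Red"),
    32: ("AF4", "Green"), 34: ("AF4", "Green"), 36: ("AF4", "Yellow"), 38: ("AF4", "Red"),
    40: ("EF",  "Green"), 46: ("EF",  "Green"), 48: ("CS6", "Green"), 56: ("CS7", "Green"),
}

def default_domain_map(dscp: int):
    """Any DSCP value not listed above maps to (BE, Green) by default."""
    return _SPECIAL.get(dscp, ("BE", "Green"))
```

For example, DSCP 46 yields (EF, Green) and DSCP 28 yields (AF3, Yellow), matching the table.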

Table 1.2 Default mapping from the service-class and color to the DSCP value
Service-class Color DSCP

BE Green 0
AF1 Green 10
AF1 Yellow 12
AF1 Red 14
AF2 Green 18
AF2 Yellow 20
AF2 Red 22
AF3 Green 26
AF3 Yellow 28
AF3 Red 30
AF4 Green 34
AF4 Yellow 36
AF4 Red 38
EF Green 46
CS6 Green 48
CS7 Green 56

Table 1.3 Default mapping from the IP Precedence/MPLS EXP/802.1p to the service-class and
color
IP Precedence/MPLS Service-class Color
EXP/802.1p

0 BE Green
1 AF1 Green
2 AF2 Green
3 AF3 Green
4 AF4 Green
5 EF Green
6 CS6 Green
7 CS7 Green

Table 1.4 Default mapping from the service-class and color to IP Precedence/MPLS EXP/802.1p
Service-class Color IP Precedence/MPLS
EXP/802.1p

BE Green 0
AF1 Green, Yellow, Red 1
AF2 Green, Yellow, Red 2
AF3 Green, Yellow, Red 3
AF4 Green, Yellow, Red 4
EF Green 5
CS6 Green 6
CS7 Green 7

Priority Mapping Table for the 5p3d Domain


IEEE 802.1ad provides the PCP definition, as shown in the following figure.


Figure 1.1 PCP encoding/decoding

As shown in Figure 1.1, a number in the range of 0 to 7 indicates the 802.1p value. A value in
the format of a number x plus the letters DE indicates that the 802.1p priority is x and that the
drop_eligible value is true. If the drop_eligible value is false, the drop precedence can be
ignored. If the drop_eligible value is true, the drop precedence cannot be ignored.
The 5p3d domain on a Huawei router uses an IEEE 802.1ad-compliant priority mapping table
by default. The following table shows the mapping table that is designed to match IEEE 802.1ad.

Table 1.1 IEEE802.1ad-compliant mapping table for the 5p3d domain


802.1p Value to Color                          Color to 802.1p Value

Drop_eligible Defined    Color Defined in      Color Defined in      Drop_eligible Defined
in IEEE 802.1ad          a Huawei Router       a Huawei Router       in IEEE 802.1ad

false                    Green                 Green                 false
true                     Yellow                Yellow, Red           true

The default mapping between the 802.1p value, service-class, and color for the 5p3d domain
on a Huawei router is shown in Table 1.2 and Table 1.3.


Table 1.2 Mapping from the 802.1p value to the service-class and color
802.1p Service-class Color

0 BE Yellow
1 BE Green
2 AF2 Yellow
3 AF2 Green
4 AF4 Yellow
5 AF4 Green
6 CS6 Green
7 CS7 Green

The mapping from the 802.1p value to the service-class may apply to an inbound interface that belongs
to a non-5p3d domain, which is why Table 1.2 lists eight 802.1p values. The outbound interface belongs
to a 5p3d domain, which is why Table 1.2 lists only five service-classes: BE, AF2, AF4, CS6, and CS7.

Table 1.3 Mapping from the Service-class and Color to the 802.1p Value
Service-class Color 802.1p

BE Green 1
AF1 Green 1
AF1 Yellow 0
AF1 Red 0
AF2 Green 3
AF2 Yellow 2
AF2 Red 2
AF3 Green 3
AF3 Yellow 2
AF3 Red 2
AF4 Green 5
AF4 Yellow 4
AF4 Red 4
EF Green 5
CS6 Green 6
CS7 Green 7
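
The two 5p3d tables above can be read as an encode/decode pair in which the 802.1p value carries both the scheduling precedence and the drop precedence. The following Python sketch (illustrative only; the dictionary names are hypothetical) transcribes them and shows that Yellow and Red collapse to the same egress value, that is, both are sent as drop_eligible = true.

```python
# Illustrative model of the 5p3d-domain mapping (the two tables above).

# Ingress: 802.1p -> (service-class, color). Each even value is the
# drop_eligible ("Yellow") variant of the odd value above it.
P_TO_SC = {
    0: ("BE", "Yellow"),  1: ("BE", "Green"),
    2: ("AF2", "Yellow"), 3: ("AF2", "Green"),
    4: ("AF4", "Yellow"), 5: ("AF4", "Green"),
    6: ("CS6", "Green"),  7: ("CS7", "Green"),
}

# Egress: (service-class, color) -> 802.1p.
SC_TO_P = {
    ("BE", "Green"): 1,
    ("AF1", "Green"): 1, ("AF1", "Yellow"): 0, ("AF1", "Red"): 0,
    ("AF2", "Green"): 3, ("AF2", "Yellow"): 2, ("AF2", "Red"): 2,
    ("AF3", "Green"): 3, ("AF3", "Yellow"): 2, ("AF3", "Red"): 2,
    ("AF4", "Green"): 5, ("AF4", "Yellow"): 4, ("AF4", "Red"): 4,
    ("EF", "Green"): 5, ("CS6", "Green"): 6, ("CS7", "Green"): 7,
}

# Yellow and Red packets of one class leave with the same 802.1p value,
# so three drop precedences are compressed to one drop_eligible bit:
assert SC_TO_P[("AF2", "Yellow")] == SC_TO_P[("AF2", "Red")] == 2

# Eight service-classes are squeezed into five scheduling levels, so a
# round trip is not always identity (for example, AF1 comes back as BE):
assert P_TO_SC[SC_TO_P[("AF1", "Green")]] == ("BE", "Green")
```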


In Table 1.3, the mapping from the service-class and color to the 802.1p value may apply to an inbound
interface that uses a 5p3d domain or that uses the DSCP, EXP, or IP precedence as a basis for mapping,
leading to eight service-classes. The outbound interface may use a non-5p3d domain, leading to eight
802.1p values.

3.3.3 BA and PHB


BA and PHB Actions
The QoS actions taken by the device always depend on the service-class and color of the
packet.
1. The service-class and color of a packet are initialized to <BE, Green>.
2. If the trust upstream command is configured on the inbound interface, the upstream
(inbound) board of the device resets the service-class and color of a received packet based
on the priority field(s) (such as DSCP, 802.1p, or EXP) of the packet. This procedure is
called the BA action.
3. If remark, or remark within CAR, is configured for the packet, the inbound board resets
the service-class and color of the packet.
4. The device then takes other QoS actions based on the service-class and color of the
packet.
5. After the above actions are taken, the downstream (outbound) board decides whether to
modify the priority field(s) (such as DSCP, 802.1p, or EXP) of the packet. In some
scenarios, the priority field of the packet is not expected to change. The procedure in
which the outbound board modifies the priority field(s) of the packet based on the
service-class and color of the packet is called the PHB action.
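
The five steps above can be sketched as a small pipeline. This is an illustrative Python model, not device code; the dictionary keys (trust_upstream, remark, phb_enable) are hypothetical stand-ins for the corresponding configuration.

```python
# Illustrative model of the BA/PHB processing order described above.

def process(packet, inbound, outbound, ba_map, phb_map):
    """Return the packet and its final <service-class, color>."""
    # Step 1: the service-class and color are initialized to <BE, Green>.
    sc_color = ("BE", "Green")
    # Step 2: BA action - the inbound board maps the priority field.
    if inbound.get("trust_upstream"):
        sc_color = ba_map(packet["priority"])
    # Step 3: remark (or remark within CAR) overrides the BA result.
    if "remark" in inbound:
        sc_color = inbound["remark"]
    # Step 4: other QoS actions (queueing, WRED, ...) would use sc_color.
    # Step 5: PHB action - the outbound board rewrites the priority field.
    if outbound.get("phb_enable"):
        packet["priority"] = phb_map(*sc_color)
    return packet, sc_color

# Toy mapping tables for the demonstration:
ba = lambda prio: ("EF", "Green") if prio == 46 else ("BE", "Green")
phb = lambda sc, color: {("EF", "Green"): 46, ("BE", "Green"): 0}[(sc, color)]

pkt, sc = process({"priority": 46}, {"trust_upstream": True},
                  {"phb_enable": True}, ba, phb)
assert sc == ("EF", "Green") and pkt["priority"] == 46
```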

Remark and PHB Symbols


The device sets two symbols, a "Remark-symbol" for the inbound board and a "PHB-symbol"
for the outbound board, to decide whether to take the PHB action. Both the Remark and PHB
symbols can be set to "Y" or "N".


The device takes the PHB action only when both symbols are set to "Y". There is one
exception: if the outbound board is Type-C, the device takes the PHB action whenever the
"PHB-symbol" is set to "Y", regardless of the "Remark-symbol".
By default, the Remark-symbol is set to "N". The PHB-symbol is set to "Y" in V600R002 and
earlier versions, and to "N" in V600R003 and later versions.
Both the Remark and PHB symbols can be changed by commands (Table 1.1).

On Type-C boards, the "Remark-symbol" is always set to "Y" and cannot be changed by commands.

Table 1.1 Commands for Remark and PHB Symbol Setting

 trust upstream
QoS action: The service-class and color of the packet are reset based on the priority field(s)
(such as DSCP, 802.1p, or EXP) of the packet (the BA action is taken).
Symbols: Both the Remark and PHB symbols are set to "Y".

 diffserv-mode { pipe | short-pipe }
QoS action: The service-class and color of the packet are reset according to the diffserv-mode
service-class color command.
Symbols: Both the "Remark-symbol" and "PHB-symbol" remain unchanged.

 diffserv-mode uniform
QoS action: This is the default configuration and does not affect the actions of the inbound
and outbound boards.
Symbols: Both the "Remark-symbol" and "PHB-symbol" remain unchanged.

 service-class
QoS action: The service-class and color of the flow are reset according to the service-class
service-class color color command.
Symbols: The "Remark-symbol" is set to "Y" if the no-remark parameter is not configured in
the command, and to "N" if the no-remark parameter is configured.

 qos default-service-class
QoS action: The service-class is reset according to the qos default-service-class service-class
command.
Symbols: Both the "Remark-symbol" and "PHB-symbol" remain unchanged.

 remark (inbound)
QoS action: The service-class and color of the packet are reset. On a Type-B board, whether
the packet is remarked depends on the "PHB-symbol" set by the outbound board. On other
boards, the inbound board takes the BA action and remarks the inbound packets, regardless of
the "Remark-symbol" and "PHB-symbol".
Symbols: The "Remark-symbol" is set to "Y", and the "PHB-symbol" remains unchanged.

 remark (outbound)
QoS action: The service-class and color of the packet are reset. The outbound board remarks
the outbound packets, regardless of the "Remark-symbol" and "PHB-symbol". For example,
assume that the remark dscp 11 command is configured on the outbound interface and the
service-class and color of the packet are <ef, green>. The DSCP value of the packet is set to
11 directly, rather than to the value mapped from <ef, green> based on the downstream PHB
mapping table. If the outbound packet has a VLAN tag, the 802.1p value in the VLAN tag is
set based on <ef, green> and the downstream PHB mapping table. If both the remark dscp 11
and remark 8021p commands are configured on the outbound interface, both the DSCP and
802.1p values of the packet are modified directly according to the remark commands.
Symbols: Both the "Remark-symbol" and "PHB-symbol" remain unchanged.

 qos phb enable
Symbols: The "PHB-symbol" is set to "Y".

 qos phb disable
Symbols: The "PHB-symbol" is set to "N".

 qos car { green | yellow | red } pass service-class color
QoS action: The service-class and color of the packet are reset.
Symbols: Both the "Remark-symbol" and "PHB-symbol" remain unchanged.

Rules for PHB Action


As stated above, to control the PHB action, the device sets two symbols: a "Remark-symbol"
for the inbound board and a "PHB-symbol" for the outbound board. Both can be changed by
commands.
 To set the Remark-symbol to "Y", configure the trust upstream, remark, or service-class
command on the inbound interface.
 To set the Remark-symbol to "N", configure the service-class class-value color color-value
no-remark command, or do not configure any of the commands listed above on the
inbound interface.
 To set the PHB-symbol to "Y", configure the trust upstream or qos phb enable command
on the outbound interface.
 To set the PHB-symbol to "N", configure the qos phb disable command, or do not
configure the trust upstream command on the outbound interface.
Whether the PHB action is taken depends on the two symbols (Remark-symbol and
PHB-symbol) and the board types of the inbound and outbound interfaces, as shown in Table 1.1.


Table 1.1 Rules for PHB Action


Inbound Board Type        Outbound Board Type       Remark    PHB    Mapping Back

Any                       Any                       N         N      No
Any                       Any                       Y         N      No
Any                       Any                       Y         Y      Yes
Any                       Type-C                    N         Y      Yes
Type-C                    Any                       N         Y      Yes
Any except Type-C         Any except Type-C         N         Y      No
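
The decision table above condenses to a single predicate. The Python sketch below is illustrative only: board types are plain strings and the two symbols are booleans (True standing for "Y").

```python
# Illustrative encoding of the "Rules for PHB Action" table above.

def phb_action(remark_symbol, phb_symbol, inbound_board, outbound_board):
    """Return True if the outbound board performs the PHB action."""
    if not phb_symbol:
        return False        # PHB-symbol "N": never map back
    if remark_symbol:
        return True         # both symbols "Y": always map back
    # Remark "N", PHB "Y": only if the inbound or outbound board is Type-C.
    return "Type-C" in (inbound_board, outbound_board)

assert phb_action(True, True, "Type-A", "Type-B") is True
assert phb_action(True, False, "Type-A", "Type-B") is False
assert phb_action(False, True, "Type-A", "Type-B") is False
assert phb_action(False, True, "Type-A", "Type-C") is True
assert phb_action(False, True, "Type-C", "Type-B") is True
```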

Which Priority Field Is Trusted


The trusted QoS priority field depends on the configuration on the inbound interface, as
shown in Table 1.1 (for ATM interfaces, see section 7.2 QoS of ATMoPSN and PSNoATM).

Table 1.1 Trusted Priority Fields


Inbound interface configuration (● indicates configured, ○ indicates not configured),
forwarding type, and trusted field:

○ trust upstream, ○ trust 8021p (any forwarding type)
 No field is trusted; the packet is mapped to <BE, Green>.

○ trust upstream, ● trust 8021p (any forwarding type)
 No field is trusted; the packet is mapped to <BE, Green>.

● trust upstream, ○ trust 8021p
 L2 (Ethernet) forwarding: If an IP header follows the Ethernet header, the DSCP value is
trusted. If a non-IP header follows the Ethernet header, the 802.1p value is trusted (if there
is no 802.1p field, the frame is mapped to <BE, Green>).
 IP forwarding (including IP -> IP and IP -> MPLS): If an IP header follows the L2 header,
the DSCP value is trusted. If an MPLS + IP header follows the L2 header, the outer EXP
value is trusted. If an MPLS + Ethernet + IP header follows the L2 header, the outer EXP
value is trusted.
 MPLS POP -> IP: In MPLS DiffServ Uniform or Pipe mode, the outer EXP value is
trusted. In MPLS DiffServ Short-Pipe mode, the DSCP value is trusted.
 MPLS forwarding: The outer EXP value is trusted.


●trust upstream command L2 (Ethernet)  If VLAN tagged frame: 802.1p is trusted.


●trust 8021p command forwarding If VLAN untagged frame: the frame is mapped to
<BE, Green>.
IP forwarding  For main interface: the trust 802.1p command
does not take effect, and the rule is the same as that
only the trust upstream command is configured in
inbound interface.
 For sub-interface: the trust 802.1p command takes
effect. If VLAN tagged frame: 802.1p is trusted. If
VLAN untagged frame: the frame is mapped to <BE,
Green>. And the configurations of the main interface
and sub-interface take effect independently, without
affecting each other.
MPLS POP->IP  MPLS Diffserv Uniform/Pipe mode: outer EXP is
trusted.
 MPLS Diffserv Short-Pipe mode: DSCP is trusted.
MPLS Outer EXP is trusted.
forwarding
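
The common cases in the table above can be compressed into one selection function. The Python sketch below is illustrative and deliberately simplified: ATM interfaces, the main/sub-interface distinction, and the non-IP-after-Ethernet case are omitted, and all names are hypothetical.

```python
# Illustrative, simplified selection of the trusted priority field.

def trusted_field(trust_upstream, trust_8021p, forwarding,
                  ds_mode="uniform", vlan_tagged=True):
    """Return the field BA classification reads, or None for <BE, Green>."""
    if not trust_upstream:
        return None                     # packet mapped to <BE, Green>
    if forwarding == "l2" and trust_8021p:
        return "802.1p" if vlan_tagged else None
    if forwarding == "mpls":
        return "outer EXP"
    if forwarding == "mpls-pop-ip":
        # Uniform/Pipe trust the outer EXP; Short-Pipe trusts the DSCP.
        return "DSCP" if ds_mode == "short-pipe" else "outer EXP"
    return "DSCP"                       # plain IP (or IP-after-Ethernet) case

assert trusted_field(False, False, "l2") is None
assert trusted_field(True, False, "mpls") == "outer EXP"
assert trusted_field(True, True, "l2", vlan_tagged=False) is None
assert trusted_field(True, False, "mpls-pop-ip", ds_mode="short-pipe") == "DSCP"
```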

Which Priority Field of the Inbound Packet Is Reset in PHB Action


If a packet carries several QoS priority fields, the priority field that is reset in the PHB action
depends on the configuration and the board type of the outbound interface, as shown in
Table 1.1 through Table 1.3.

Table 1.1 Rules for setting DSCP and 802.1p in L2 forwarding (including VLL/VPLS) scenarios

Outbound configuration (● indicates configured, ○ indicates not configured):

 ○ trust upstream, ○ qos phb enable
DSCP: remains unchanged. EXP: remains unchanged.
802.1p: remains unchanged; set to 0 for a newly added VLAN tag.

 ● trust upstream with ○ trust 8021p, or ● qos phb enable with ○ trust 8021p
DSCP: remains unchanged. EXP: remains unchanged.
802.1p: remains unchanged (set to 0 for a newly added VLAN tag) if the outbound board is
Type-A. For other board types, set according to the <service-class, color> of the packet and
the downstream priority mapping table if the PHB action is performed; otherwise, remains
unchanged (set to 0 for a newly added VLAN tag).

 ● trust upstream with ● trust 8021p, or ● qos phb enable with ● trust 8021p
DSCP: remains unchanged. EXP: remains unchanged.
802.1p: set according to the <service-class, color> of the packet and the downstream priority
mapping table if the PHB action is performed; otherwise, remains unchanged (set to 0 for a
newly added VLAN tag).

Note
For the rules on when the PHB action is performed, see the sections "Remark and PHB Symbols" and "Rules for PHB Action".

Table 1.2 Rules for setting DSCP and 802.1p in L3 forwarding (including MPLS L3VPN) scenarios

Outbound configuration (● indicates configured, ○ indicates not configured):

 ○ trust upstream, ○ qos phb enable
DSCP: remains unchanged.
802.1p: set according to the <service-class, color> of the packet and the downstream DS
domain (default domain by default) mapping table if the outbound board is Type-B; remains
unchanged (or is set to 0 for a newly added VLAN tag) if the outbound board is of another
type.

 ● trust upstream with ○ trust 8021p, or ● qos phb enable with ○ trust 8021p
DSCP: On an MPLS L3VPN ingress PE node, set according to the <service-class, color> of
the packet and the downstream priority mapping table if the outbound board is Type-B or
Type-C, and remains unchanged if the outbound board is of another type. On an L3VPN
egress PE node in Pipe or Short-Pipe mode, remains unchanged. In IP forwarding or on an
MPLS L3VPN egress PE node in Uniform mode, set according to the <service-class, color>
of the packet and the downstream priority mapping table.
802.1p: remains unchanged (or is set to 0 for a newly added VLAN tag) if the outbound board
is Type-A; set according to the <service-class, color> of the packet and the downstream
priority mapping table if the outbound board is of another type.

 ● trust upstream, ● trust 8021p
DSCP: If the outbound board is Type-B, remains unchanged on an MPLS L3VPN egress PE
node in Pipe or Short-Pipe mode, and is set according to the <service-class, color> of the
packet in other scenarios. If the outbound board is of another type, remains unchanged.
802.1p: set according to the <service-class, color> of the packet and the downstream priority
mapping table.

 ● qos phb enable, ● trust 8021p
DSCP: On an MPLS L3VPN ingress PE node, set according to the <service-class, color> of
the packet and the downstream priority mapping table if the outbound board is Type-B or
Type-C, and remains unchanged if the outbound board is of another type. On an L3VPN
egress PE node in Pipe or Short-Pipe mode, remains unchanged. In IP forwarding or on an
MPLS L3VPN egress PE node in Uniform mode, remains unchanged if the outbound board is
Type-A, and is set according to the <service-class, color> of the packet and the downstream
priority mapping table if the outbound board is of another type.
802.1p: set according to the <service-class, color> of the packet and the downstream priority
mapping table.

Note:
In this table, the precondition for "set according to the <service-class, color> of the packet and the
downstream priority mapping table" is that the outbound board performs the PHB action. If the PHB
action is not performed, the setting rules are the same as the default rules (see the rule for the outbound
configuration without the trust upstream and qos phb enable commands).
For the rules on when the PHB action is performed, see the sections "Remark and PHB Symbols" and
"Rules for PHB Action".
For the rule for setting the EXP value in a newly added label in the IP-to-MPLS scenario, see the
section Rules for Marking the EXP Field of New-added MPLS Header.

Table 1.3 Rules for setting DSCP, EXP, and 802.1p in MPLS forwarding scenarios (on P nodes)

Outbound configuration (● indicates configured, ○ indicates not configured):

 ○ trust upstream, ○ qos phb enable
DSCP: remains unchanged. EXP: remains unchanged.
802.1p: set according to the <service-class, color> of the packet and the downstream DS
domain (default domain by default) mapping table if the outbound board is Type-B; remains
unchanged (or is set to 0 for a newly added VLAN tag) if the outbound board is of another
type.

 ● trust upstream with ○ trust 8021p, or ● qos phb enable with ○ trust 8021p
DSCP: remains unchanged.
EXP: set according to the <service-class, color> of the packet and the downstream priority
mapping table (the default mapping table for Type-B).
802.1p: remains unchanged (or is set to 0 for a newly added VLAN tag) if the outbound board
is Type-A; set according to the <service-class, color> of the packet and the downstream
priority mapping table if the outbound board is of another type.

 ● trust upstream with ● trust 8021p, or ● qos phb enable with ● trust 8021p
DSCP: remains unchanged.
EXP and 802.1p: set according to the <service-class, color> of the packet and the downstream
priority mapping table.

Note:
In this table, the precondition for "set according to the <service-class, color> of the packet and the
downstream priority mapping table" is that the outbound board performs the PHB action. If the PHB
action is not performed, the setting rules are the same as the default rules (see the rule for the outbound
configuration without the trust upstream and qos phb enable commands).
For the rules on when the PHB action is performed, see the sections "Remark and PHB Symbols" and
"Rules for PHB Action".

Rules for Marking the EXP Field of New-added MPLS Header


Table 1.1 lists the rules for marking the EXP field of the new-added MPLS header.

Table 1.1 Rules for Marking the EXP Field of New-added MPLS Header

 Outer EXP
Type-A: set according to the <service-class, color> of the packet and the downstream priority
mapping table if the PHB action is performed; set to 0 if the PHB action is not performed.
Type-B: set according to the <service-class, color> of the packet and the mapping table of the
DS domain specified downstream if the PHB action is performed; set according to the
<service-class, color> of the packet and the default domain if the PHB action is not performed.
Type-C and Type-D: set according to the <service-class, color> of the packet and the mapping
table of the DS domain specified downstream if the PHB action is performed; set according to
the service-class (0 to 7 for BE to CS7) if the PHB action is not performed.

 Inner EXP in the VPLS scenario (any board type)
If the PHB action is performed: set according to the <service-class, color> of the packet and
the mapping table of the DS domain specified by the mpls-inner-exp phb domain command
(VSI instance view); by default, the DS domain specified on the NNI interface is used.
If the PHB action is not performed: the same as the rule for setting the outer EXP when the
PHB action is not performed.

 Inner EXP in VLL and L3VPN scenarios
Type-A or Type-D: set according to the <service-class, color> of the packet and the
downstream priority mapping table if the PHB action is performed; inherits the EXP value set
by the upstream board if the mpls-inner-exp phb disable command (slot view) is configured
for the downstream board. Inherits the EXP value set by the upstream board if the PHB action
is not performed.
Type-B or Type-C: inherits the EXP value set by the upstream board.

Note:
For the rules on when the PHB action is performed, see the sections "Remark and PHB Symbols" and "Rules for PHB Action".
The rules for setting the inner EXP of VLL/L3VPN on the upstream board are as follows:
 Type-A: according to the service-class (0 to 7 for BE to CS7).
 Other boards: the inner EXP of VLL is set according to the <service-class, color> of the packet and the mapping table of the DS
domain specified by the mpls l2vc diffserv domain command (interface view; default domain by default), and the inner EXP of
L3VPN is set according to the <service-class, color> of the packet and the mapping table of the DS domain specified by the
mpls-inner-exp phb domain command (VPN instance view). By default, the inner EXP is set according to the service-class (0
to 7 for BE to CS7) if the upstream board is Type-A or Type-D, and according to the <service-class, color> of the packet
and the default domain mapping table if the upstream board is Type-B or Type-C.

3.4 MF Classification
3.4.1 What Is MF Classification
Multi-Field Classification
As networks rapidly develop, services on the Internet become increasingly diversified.
Various services share limited network resources, especially when multiple services use port
number 80. Because of this increasing demand, network devices are required to possess a high
degree of sensitivity for services, including an in-depth parsing of packets and a
comprehensive understanding of any packet field at any layer. This level of sensitivity rises


far beyond what behavior aggregate (BA) classification can offer. Multi-field (MF)
classification can be deployed to help address this sensitivity deficit.
MF classification allows a device to elaborately classify packets based on certain conditions,
such as 5-tuple (source IP address, source port number, protocol number, destination address,
and destination port number). To simplify configurations and facilitate batch modification,
MF classification commands are designed based on a template. For details, see section
3.4.2 Traffic Policy Based on MF Classification.
MF classification is implemented at the network edge. The following table shows three modes
of MF classification on a Huawei router.

 Layer 2 (link layer) MF classification
Items: 802.1p value in the outer VLAN tag, 802.1p value in the inner VLAN tag, source
MAC address, destination MAC address, and protocol field encapsulated in Layer 2 headers.
Remarks: Items can be jointly used as required.

 IP MF classification (IPv4)
Items: DSCP value, IP precedence, source IPv4 address, destination IPv4 address, IPv4
fragments, TCP/UDP source port number, TCP/UDP destination port number, protocol
number, and TCP synchronization flag.
Remarks: Items can be jointly used as required.

 IP MF classification (IPv6)
Items: DSCP value, protocol number, source IPv6 address, destination IPv6 address,
TCP/UDP source port number, and TCP/UDP destination port number.
Remarks: Items can be jointly used as required. Type-A and Type-B boards support matching
at most 96 bits of the source IPv6 address: for a 128-bit IPv6 address, these boards support
matching bits 0 to 31 and 64 to 127.

 MPLS MF classification
Items: EXP, label, and TTL.
Remarks: A maximum of four labels can be identified. The three fields can be jointly used for
each label as needed.

In addition to the preceding items, a Huawei router can perform MF classification based on VLAN IDs,
but it does not use the VLAN ID alone. Instead, an MF classification policy is bound to a VLAN ID (in
the same way as it is bound to an interface). All three MF classification modes listed above support MF
classification based on VLAN IDs.
For example:
Restrict the bandwidth to 100 Mbit/s for packets with the VLAN ID 100 and the 802.1p value 5
Restrict the bandwidth to 100 Mbit/s for packets with the VLAN ID 200 and the 802.1p value 5
Restrict the bandwidth to 100 Mbit/s for packets with the VLAN ID 300 and the 802.1p value 5
The configurations are as follows:
traffic classifier test
if-match 8021p 5

traffic behavior test


car cir 100000

traffic policy test


classifier test behavior test

interface xxx
traffic-policy test inbound vlan 100 link-layer  //link-layer indicates Layer 2 MF classification
traffic-policy test inbound vlan 200 link-layer
traffic-policy test inbound vlan 300 link-layer

In addition, a Huawei router supports MF classification based on time periods for traffic control. MF
classification based on time periods allows carriers to configure a policy for each time period so that
network resources are optimized. For example, analysis on the usage habits of subscribers shows that the
network traffic peaks from 20:00 to 22:00, during which large volumes of P2P and download services
affect the normal use of other data services. Carriers can lower the bandwidths for P2P and download
services during this time period to prevent network congestion.
Configuration example:
time-range test 20:00 to 22:00 daily

acl 2000
rule permit source 129.9.0.0 0.0.255.255 time-range test  //The time-range parameter specifies the period during which the rule takes effect.
traffic classifier test
if-match acl 2000

traffic behavior test


car cir 100000

traffic policy test


classifier test behavior test


interface xxx
traffic-policy test inbound

3.4.2 Traffic Policy Based on MF Classification


Multiple traffic classifiers and behaviors can be configured on a Huawei router. To implement
Multi-Field (MF) classification, a traffic policy in which classifiers are associated with
specific traffic behaviors is bound to an interface. This traffic policy based on MF
classification is also called class-based QoS.
A traffic policy based on MF classification is configured using a profile that allows batch
configuration or modification.
The QoS profile covers the following concepts:
 Traffic classifier: defines the service type. The if-match clauses are used to set traffic
classification rules.
 Traffic behavior: defines actions that can be applied to a traffic classifier. For details
about traffic behaviors, see section 3.1Traffic Classifiers and Traffic Behaviors.
 Traffic policy: associates traffic classifiers and behaviors. After a traffic policy is
configured, it is applied to an interface.
The following figure shows relationships between an interface, traffic policy, traffic behavior,
traffic classifier, and ACL.

Figure 1.1 Relationships between an interface, traffic policy, traffic behavior, traffic classifier, and
ACL.

(1) A traffic policy can be applied to different interfaces.


(2) One or more classifier and behavior pairs can be configured in a traffic policy. One
classifier and behavior pair can be configured in different traffic policies.


(3) One or more if-match clauses can be configured for a traffic classifier, and each if-match
clause can specify an ACL. An ACL can be applied to different traffic classifiers and contains
one or more rules.
(4) One or more actions can be configured in a traffic behavior.

And/Or Logic in Traffic Classifiers


If a traffic classifier has multiple matching rules, the And/Or logic relationships between rules
are described as follows:
 And: Packets that match all the if-match clauses configured in a traffic classifier belong
to this traffic classifier.
 Or: Packets that match any of the if-match clauses configured in a traffic classifier
belong to this traffic classifier.
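
The two operators map directly onto all() and any(). The sketch below is illustrative Python, not device code; the clause predicates are hypothetical stand-ins for if-match clauses.

```python
# Illustrative evaluation of a traffic classifier's if-match clauses.

def classifier_matches(packet, clauses, operator="or"):
    """clauses: predicates over the packet; operator: "and" or "or"."""
    results = (clause(packet) for clause in clauses)
    return all(results) if operator == "and" else any(results)

pkt = {"8021p": 5, "vlan": 100}
clauses = [lambda p: p["8021p"] == 5, lambda p: p["vlan"] == 200]

assert classifier_matches(pkt, clauses, "or") is True    # one clause matches
assert classifier_matches(pkt, clauses, "and") is False  # not all clauses match
```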

Shared and Unshared Modes of a Traffic Policy


A traffic policy works in either shared or unshared mode. For example, a traffic policy defines
that the bandwidths of TCP and UDP traffic are restricted to 100 Mbit/s and 200 Mbit/s,
respectively, and that the bandwidth of other traffic is restricted to 300 Mbit/s. If the traffic
policy is applied to two interfaces, there are two possible scenarios:
 If the traffic policy is in unshared mode, the two interfaces to which the traffic policy
applies are restricted individually. On each interface, the bandwidths of TCP traffic, UDP
traffic, and other traffic are restricted to 100 Mbit/s, 200 Mbit/s, and 300 Mbit/s,
respectively.
 If the traffic policy is in shared mode, the two interfaces to which the traffic policy
applies are restricted as a whole. The total bandwidths of TCP traffic, UDP traffic, and
other traffic on the two interfaces are restricted to 100 Mbit/s, 200 Mbit/s, and 300
Mbit/s, respectively.

If a traffic policy works in shared mode, the interfaces to which the traffic policy applies must
be located on the same board.
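
The difference between the two modes comes down to where the CAR counters live: one set per interface in unshared mode, one set for all bound interfaces in shared mode. The sketch below is a minimal illustration with hypothetical names; it uses a fixed byte budget rather than a real token-bucket rate for brevity.

```python
# Illustrative contrast of shared vs. unshared CAR accounting.

class Car:
    """A trivial quota stand-in for a CAR meter (bytes, not a real rate)."""
    def __init__(self, budget):
        self.budget, self.used = budget, 0

    def admit(self, size):
        """Admit the packet if it fits in the remaining budget."""
        if self.used + size <= self.budget:
            self.used += size
            return True
        return False

# Unshared mode: each interface gets its own meter, so each interface
# can pass the full budget independently.
unshared = {"if1": Car(100), "if2": Car(100)}
assert unshared["if1"].admit(100) and unshared["if2"].admit(100)

# Shared mode: both interfaces feed one meter, so they compete for
# the same budget.
shared = Car(100)
assert shared.admit(100) is True   # traffic on if1 consumes the budget
assert shared.admit(1) is False    # traffic on if2 is now rate-limited
```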


Traffic Policy Implementation

Figure 1.1 Traffic policy implementation

As shown in the figure, a packet is matched against traffic classifiers in the order in which
those classifiers are configured. If the packet matches a traffic classifier, no further match
operation is performed. If not, the packet is matched against the following traffic classifiers
one by one. If the packet matches no traffic classifier at all, the packet is forwarded with no
traffic policy executed.
If multiple if-match clauses are configured for a traffic classifier, the packet is matched
against them in the order in which they are configured. If an ACL or UCL is specified in an if-
match clause, the packet is matched against the multiple rules in the ACL or UCL. The system
first checks whether the ACL or UCL exists. (A non-existent ACL or UCL can be applied to a
traffic classifier.) If the packet matches a rule in the ACL or UCL, no further match operation
is performed.
A permit or deny action can be specified in an ACL for a traffic classifier to work with
specific traffic behaviors as follows:
 If the deny action is specified in an ACL, the packet that matches the ACL is denied,
regardless of what the traffic behavior defines.


 If the permit action is specified in an ACL, the traffic behavior applies to the packet that
matches the ACL.
For example, the following configuration leads to this result: the IP precedence of packets
with source IP addresses in 50.0.0.0/8 is re-marked as 7; packets with source IP addresses in
60.0.0.0/8 are dropped; packets with source IP addresses in 70.0.0.0/8 are forwarded with the
IP precedence unchanged.
acl 3999
rule 5 permit ip source 50.0.0.0 0.0.0.255
rule 10 deny ip source 60.0.0.0 0.0.0.255
traffic classifier acl
if-match acl 3999
traffic behavior test
remark ip-precedence 7
traffic policy test
classifier acl behavior test
interface GigabitEthernet1/0/1
traffic-policy test inbound

For traffic behavior mirroring or sampling, even if a packet matches a rule that defines a deny action, the
traffic behavior takes effect for the packet.
For details about the order in which a packet is matched against multiple rules in an ACL or UCL, see
section 3.4.3 ACL Rules in MF Classification.

3.4.3 ACL Rules in MF Classification


ACL
An access control list (ACL) is a collection of sequential rules used by a device to filter
network traffic. Each rule contains a filter element that is based on criteria such as the source
address, destination address, and port number of a packet. An ACL classifies packets by using
these rules. When the rules are applied to a router, the router determines whether packets are
permitted or denied.
Table 1.1 shows ACLs that are supported on a Huawei router. Table 1.2 shows the filter
elements that are used in ACLs on a Huawei router.

Table 1.1 ACLs supported on a Huawei router

- Basic ACL (numbered, 2000-2999): filters packets based on the source IP address.
- Advanced ACL (numbered, 3000-3999): filters packets based on the source IP address,
destination IP address, protocol number, TCP/UDP source port number, and TCP/UDP
destination port number.
- Ethernet frame header-based ACL (numbered, 4000-4999): filters packets based on the
Ethernet frame header.
- User control list (UCL, 6000-9999): filters packets based on the user group.
- MPLS ACL (numbered, 10000-10999): filters packets based on the label, EXP, or TTL
value of MPLS packets.
- Named ACL: the functions of named ACLs can be easily understood from their names and
are therefore not listed here. The rules supported by named ACLs are the same as those
supported by numbered ACLs.

Table 1.2 Filter elements supported by ACLs

Basic ACL:
- Source IP address.
- time-range: the period during which the ACL takes effect. If no time-range is set, the ACL
takes effect immediately after being configured.

Advanced ACL:
- IP protocol number. A protocol name can also be specified because each protocol name
corresponds to a protocol number, for example, GRE (protocol number 47), ICMP (1),
IGMP (2), IP (any IP protocol), IPinIP (4), OSPF (89), TCP (6), and UDP (17).
- Source IP address.
- Destination IP address.
- UDP/TCP source or destination port number.
- DSCP value.
- Information about IP fragments. Five fragment types are supported: fragments, non-
fragments, first fragment, non-first fragments, and first fragment or non-fragments.
- Type of Service (ToS) field in IP packets, which is 4 bits long as defined in RFC 1349.
NOTE: RFC 791 defines an 8-bit ToS field in an IP packet, whereas RFC 1349 divides the
8-bit field into a 3-bit precedence field, a 4-bit ToS field, and a 1-bit reserved bit.
- IP precedence, which indicates the three left-most bits of the 8-bit field as defined in
RFC 1349.
- Flags in a TCP packet, including URG, ACK, PSH, RST, SYN, and FIN.
- Name, type, or code of ICMP messages.
- time-range: the period during which the ACL takes effect. If no time-range is set, the ACL
takes effect immediately after being configured.

Ethernet frame header-based ACL:
- 802.1p value. In a QinQ scenario, this indicates the 802.1p value in the outer VLAN tag.
- 802.1p value in the inner VLAN tag (used in a QinQ scenario).
- Destination MAC address.
- Source MAC address.
- Protocol type.

UCL: the filter elements supported by the UCL are the same as those supported by the
advanced ACL, with the user group element replacing the source IP address element. For
details about the UCL, see the later part of this section.

MPLS ACL:
- MPLS EXP value.
- MPLS label.
- MPLS TTL (Time To Live).
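
To illustrate how several advanced-ACL filter elements can be combined in one rule set, the
following is a minimal sketch; the addresses, port number, and time-range name are
hypothetical and not taken from this document, and command forms may vary by product
version:
time-range work-hours 08:00 to 18:00 working-day
acl 3100
rule 5 permit tcp source 10.1.0.0 0.0.255.255 destination-port eq 443
rule 10 deny ip dscp af21
rule 15 permit ip time-range work-hours
Rule 15 takes effect only during the configured time-range; outside that period, packets fall
through to the default handling.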

ACL Rule Matching Implementation


Each ACL rule has an ID. Rules in an ACL are displayed in ascending order of IDs. Users can
set rule IDs or allow a device to automatically generate them.
After traffic classifiers are configured, the system matches packets against an ACL as follows:
- Checks whether the ACL exists (a non-existent ACL can be applied to a traffic classifier).
- Matches packets against the rules in the order in which the rules are displayed. When a
packet matches a rule, the match operation is complete, and no further rules are matched.

ACL Rule Matching Mode


There are two ACL rule matching modes: config and auto. The config mode is used by
default.
- If the config mode is used, users can set rule IDs or allow the device to automatically
allocate rule IDs based on the step.
If rule IDs are specified when rules are configured, the rules are inserted at the positions
specified by the rule IDs. For example, if three rules with IDs 5, 10, and 15 exist on a
device and a new rule with ID 3 is configured, the rules are displayed in ascending order:
3, 5, 10, and 15. This is equivalent to inserting a rule before rule 5.
If users do not set rule IDs, the device automatically allocates rule IDs based on the step.
For example, if the ACL step is 5, the interval between two adjacent rule IDs is 5 (5, 10,
15, and so on); if the step is 2, the device allocates rule IDs 2, 4, 6, and so on. The step
allows users to insert new rules between existing ones, which facilitates rule
maintenance. For example, with the default step of 5, if a user does not configure a rule
ID, the system generates rule ID 5 for the first rule. To add a new rule before rule 5, the
user only needs to specify a rule ID smaller than 5. After the automatic realignment, the
new rule becomes the first rule.


In the config mode, the system matches rules in ascending order of rule IDs. As a result,
a rule configured later may be matched earlier.
- If the auto mode is used, the system automatically allocates rule IDs and places the most
precise rule at the front of the ACL based on the depth-first principle. This is
implemented by comparing address wildcards: the smaller the wildcard, the narrower
the specified range.
For example, 129.102.1.1 0.0.0.0 specifies the single host 129.102.1.1, whereas
129.102.1.1 0.0.0.255 specifies the network segment ranging from 129.102.1.0 to
129.102.1.255. The former specifies a narrower range and is therefore placed before the
latter.
The detailed operations are as follows:
− For basic ACL rules, the source address wildcards are compared. If the source
address wildcards are the same, the system matches packets against the ACL rules
based on the configuration order.
− For advanced ACL rules, the protocol ranges are compared first, and then the source
address wildcards. If both the protocol ranges and the source address wildcards are
the same, the destination address wildcards are compared. If the destination address
wildcards are also the same, the ranges of source port numbers are compared, with
the smaller range taking precedence. If the ranges of source port numbers are still
the same, the ranges of destination port numbers are compared, with the smaller
range taking precedence. If the ranges of destination port numbers are also the
same, the system matches packets against the ACL rules in the order in which the
rules are configured.
For example, suppose a wide range of packets is specified for packet filtering, and it is
later required that packets matching a specific feature within that range be allowed to
pass. If the auto mode is used, the administrator only needs to define a more specific
rule and does not need to re-order the rules, because a narrower range is given higher
precedence in the auto mode.
For example, the following commands are configured one after another:
rule deny ip dscp 30 destination 1.1.0.0 0.0.255.255
rule permit ip dscp 30 destination 1.1.1.0 0.0.0.255

If the config mode is used, the rules in the ACL are displayed as follows:
acl 3000
rule 5 deny ip dscp 30 destination 1.1.0.0 0.0.255.255
rule 10 permit ip dscp 30 destination 1.1.1.0 0.0.0.255

If the auto mode is used, the rules in the ACL are displayed as follows:
acl 3000
rule 1 permit ip dscp 30 destination 1.1.1.0 0.0.0.255
rule 2 deny ip dscp 30 destination 1.1.0.0 0.0.255.255

If the device receives a packet with DSCP value 30 and destination IP address 1.1.1.1, the
packet is dropped when the config mode is used, but the packet is allowed to pass when the
auto mode is used.
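
The config-mode ID allocation and rule insertion described above can also be sketched as
follows; the ACL number and addresses are examples only, and command forms may vary
slightly by product version:
acl 3500
step 5
rule permit ip source 10.1.1.0 0.0.0.255
rule permit ip source 10.1.2.0 0.0.0.255
rule 3 deny ip source 10.1.1.100 0
The first two rules are automatically allocated IDs 5 and 10. The last rule is explicitly given
ID 3, so it is inserted before rule 5 and, in the config mode, is matched first.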


ACL Rules Applied to Traffic Classifiers


Table 1.1 shows ACL rules applied to traffic classifiers on various Huawei routers.

Table 1.1 ACL rules applied to traffic classifiers

NE40E/80E/5000E, CX600, and ME60
- When the traffic behavior is deny:
− If a packet matches a permit rule in the ACL, the system performs the traffic behavior
and the packet is dropped.
− If a packet matches a deny rule in the ACL, the system drops the packet but does not
perform the traffic behavior.
− If a packet has not matched any rule in the ACL, the system does not perform any
traffic behavior but forwards the packet.
- When the traffic behavior is not deny:
− If a packet matches a permit rule in the ACL, the system performs the traffic behavior.
− If a packet matches a deny rule in the ACL, the system does not perform the traffic
behavior but drops the packet.
− If a packet has not matched any rule in the ACL, the system does not perform any
traffic behavior but forwards the packet.

Versions earlier than NE20&NE20E V200R005C03SPC100
- When the traffic behavior is deny:
− If a packet matches a permit rule in the ACL, the system forwards the packet.
− If a packet matches a deny rule in the ACL, the system drops the packet.
− If a packet has not matched any rule in the ACL, the system does not perform any
traffic behavior but forwards the packet.
- When the traffic behavior is not deny:
− If a packet matches a permit rule in the ACL, the system performs the traffic behavior.
− If a packet matches a deny rule in the ACL, the system does not perform the traffic
behavior but forwards the packet.
− If a packet has not matched any rule in the ACL, the system does not perform any
traffic behavior but forwards the packet.

NE20&NE20E V200R005C03SPC100 and later versions
- When the traffic behavior is deny:
− If a packet matches a permit rule in the ACL, the system drops the packet.
− If a packet matches a deny rule in the ACL, the system does not perform the traffic
behavior but forwards the packet.
− If a packet has not matched any rule in the ACL, the system does not perform any
traffic behavior but forwards the packet.
- When the traffic behavior is not deny: the behavior is the same as in versions earlier than
V200R005C03SPC100.

NE40/80 (regardless of whether the traffic behavior is deny)
- Only filter elements, not permit or deny rules, can be specified in the ACL applied to
traffic classifiers.
- If a packet matches a filter element in the ACL, the system performs the corresponding
traffic behavior.
- If a packet has not matched any filter element in the ACL, the system does not perform
any traffic behavior but forwards the packet.

NE05/08E/16E (regardless of whether the traffic behavior is deny)
- The permit or deny action cannot be configured in traffic behaviors. To implement the
firewall function, configure the firewall command instead of configuring traffic behaviors.
- If a packet matches a rule in the ACL, the system performs the corresponding traffic
behavior. If a packet has not matched any rule in the ACL, the system does not perform
any traffic behavior but forwards the packet.

UCL
The order in which packets are matched against rules in a UCL is the same as that in other
types of ACLs. Unlike other types of ACLs, the UCL has an additional user-group field,
which identifies the source or destination user group. The following describes four types of
UCLs.

Figure 1.1 UCL network


 User-to-network UCL: specifies the filter element as the source user group and applies
to upstream traffic on the user side and downstream traffic on the network side.
 User-to-user UCL: specifies the filter element as the destination user group and applies
to upstream and downstream traffic on the user side.
 Network-to-network UCL: specifies the filter element as neither the source user group
nor the destination user group and applies to upstream and downstream traffic on the
network side.
 Network-to-user UCL: specifies the filter element as both the source user group and
the destination user group and applies to upstream traffic on the network side and
downstream traffic on the user side.
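
As a minimal sketch of a user-to-network UCL, the user-group field replaces the source IP
address element; the group name, classifier names, and CIR value below are assumptions for
illustration and are not taken from this document:
acl 6000
rule 5 permit ip source user-group staff
traffic classifier c-staff
if-match acl 6000
traffic behavior b-staff
car cir 10000
traffic policy p-staff
classifier c-staff behavior b-staff
Applied in the inbound direction on the user side, this policy rate-limits the upstream traffic
of all users in the assumed group "staff".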

3.4.4 QPPB
QoS Policy Propagation on BGP (QPPB) is a special Multi-Field (MF) classification
application.

Background
The following example uses the network shown in Figure 1.1 to illustrate why QPPB was
introduced. In this network, AS 400 is a high-priority network: all packets transmitted across
AS 400 must be re-marked with an IP precedence for preferential transmission. To meet this
requirement, the edge nodes (Node-A, Node-B, and Node-C) in AS 100 must be configured to
re-mark the IP precedence of packets destined for or sent from AS 400. The edge interface
connecting to AS 400 on Node-C must be configured to re-mark packets, and Node-A and
Node-B must be configured to perform traffic classification for packets destined for IP
addresses in AS 400. If a large number of IP addresses or address segments are configured in
AS 400, Node-A and Node-B must perform an excessive number of traffic classification
operations. In addition, if the network topology changes frequently, a large number of
configuration modifications are required.

Figure 1.1 Inter-AS network

To simplify configuration on Node-A and Node-B, QPPB is introduced. QPPB allows packets
to be classified based on AS information or community attributes.
QPPB, as the name implies, applies QoS policies using the Border Gateway Protocol (BGP).
The primary advantage of QPPB is that the route sender can set BGP route attributes for
traffic classification, while the route receiver only needs to configure an appropriate policy
for receiving routes. The route receiver sets QoS parameters for packets matching the BGP
route attributes and then implements the corresponding traffic behaviors before data
forwarding. When the network topology changes, the BGP route receiver does not need to
modify local configurations as long as the route attributes of the advertised BGP routes do
not change.

Implementation
As shown in Figure 1.1, Node-A and Node-C are IBGP peers in AS 100. Node-A is
configured to re-mark IP precedence for packets destined for or sent from AS 400. The QPPB
implementation is as follows:

Figure 1.1 QPPB Implementation

1. The BGP route sender (Node-C) sets specific attributes for BGP routes (such as the
AS_Path, community attributes, and extended community attributes).
2. Node-C advertises these BGP routes.
3. The BGP route receiver (Node-A) presets attribute entries. After receiving BGP routes
matching the attribute entries, the BGP route receiver sets a behavior ID identifying a
traffic behavior in the forwarding information base (FIB) table.
4. Before transmitting packets, Node-A obtains the behavior IDs of the routes from the FIB
for these packets and performs the corresponding traffic behaviors for these packets.
The preceding process demonstrates that QPPB does not transmit the QoS policy along with
the BGP route information. The route sender sets route attributes for routes to be advertised,
and the route receiver sets the QoS policy based on the route attributes of the destination
network segment.
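
On the route sender (Node-C), step 1 might be sketched as follows; the AS-path filter
number, community value, and peer address are assumptions for illustration only:
ip as-path-filter 1 permit _400$
route-policy mark-as400 permit node 10
if-match as-path-filter 1
apply community 100:400
bgp 100
peer 1.1.1.1 route-policy mark-as400 export
peer 1.1.1.1 advertise-community
The advertise-community command is required so that the community attribute set by the
route-policy is actually sent to the IBGP peer.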


Typical Application 1: Inter-AS Traffic Classification

Figure 1.1 Inter-AS Traffic Classification

As shown in Figure 1.1, QPPB allows the edge devices in AS 100 to classify inter-AS
packets. For example, to configure rate limiting on Node-C for packets transmitted between
AS 200 and AS 400, perform the following operations:
 For packets from AS 200 to AS 400, apply destination address-based QPPB on all of
Node-C's interfaces that belong to AS 100.
 For packets from AS 400 to AS 200, apply source address-based QPPB on Node-C's
interface connecting to AS 400.

FIB-based packet forwarding applies to upstream traffic but not downstream traffic.
Therefore, QPPB is enabled on the upstream interface of traffic.
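
Under these rules, the two directions might be configured on Node-C as follows; the
interface and policy names are hypothetical:
# Interface facing AS 100, which receives traffic from AS 200 to AS 400:
# destination address-based QPPB.
interface GigabitEthernet1/0/0
qppb-policy policyC destination inbound
# Interface facing AS 400, which receives traffic from AS 400 to AS 200:
# source address-based QPPB.
interface GigabitEthernet2/0/0
qppb-policy policyC source inbound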

Typical Application 2: L3VPN Traffic Classification

Figure 1.1 QPPB application to L3VPN traffic


As shown in Figure 1.1, PEs connect to multiple VPNs. A PE can set route attributes, such as
community, for a specified VPN instance before advertising any route. After receiving the
routing information, the remote peer imports the route and the associated QoS parameters to
the FIB table. This enables the traffic from CEs to be forwarded based on the corresponding
traffic behaviors. In this manner, different VPNs can be provided with different QoS
guarantees.

Typical Application 3: User-to-ISP Traffic Accounting

Figure 1.1 QPPB application for user-to-ISP traffic accounting

As shown in Figure 1.1, QPPB is implemented as follows for user-to-ISP traffic accounting:
 BGP routes are advertised with community attributes.
 BGP routes are imported and the community attributes of the BGP routes are matched
against attribute entries. Behavior IDs are set in the FIB table for the routes matching the
attribute entries.
 A QPPB policy is configured. A corresponding traffic behavior (such as statistics
collection, CAR, and re-marking) is configured for the qos-local-id (Behavior ID).
 Destination address-based QPPB is enabled for incoming traffic.
 The QPPB policy is applied to incoming traffic on the user-side interface.
 During packet forwarding, the Behavior ID (qos-local-id) is obtained for packets based
on the destination IP address, and the corresponding traffic behavior is performed.
Configuration example on a Type-A or Type-C board:
# Define rules for community attributes.
ip community-filter 10 permit 1000:10
ip community-filter 11 permit 1000:11
ip community-filter 12 permit 1000:12


ip community-filter 13 permit 1000:13

# Define a route-policy.
route-policy policyA permit node 10
if-match community-filter 10
apply qos-local-id 10
route-policy policyA permit node 11
if-match community-filter 11
apply qos-local-id 11
route-policy policyA permit node 12
if-match community-filter 12
apply qos-local-id 12
route-policy policyA permit node 13
if-match community-filter 13
apply qos-local-id 13

# Apply the route-policy to BGP routes.


bgp 100
peer 1.1.1.10 as-number 100
peer 1.1.1.11 as-number 100
peer 1.1.1.12 as-number 100
peer 1.1.1.13 as-number 100
peer 1.1.1.10 route-policy policyA import
peer 1.1.1.11 route-policy policyA import
peer 1.1.1.12 route-policy policyA import
peer 1.1.1.13 route-policy policyA import

# Define traffic behaviors.


traffic behavior b10
car cir 2000
traffic behavior b11
car cir 3000
traffic behavior b12
car cir 4000
traffic behavior b13
car cir 5000

# Define a QPPB policy.


qppb local-policy policyA
qos-local-id 10 behavior b10
qos-local-id 11 behavior b11
qos-local-id 12 behavior b12
qos-local-id 13 behavior b13

# Apply the QPPB policy to incoming traffic on the user-side interface.


qppb-policy policyA destination inbound

# Check statistics.


display qppb local-policy statistics interface X/X/X inbound

Typical Application 4: ISP-to-User Traffic Accounting

Figure 1.1 Application for ISP-to-user traffic accounting

As shown in Figure 1.1, QPPB is implemented as follows for ISP-to-user traffic accounting:
 BGP routes are advertised with community attributes.
 BGP routes are imported and the community attributes of the BGP routes are matched
against attribute entries. Behavior IDs are set in the FIB table for the routes matching the
attribute entries.
 A QPPB policy is configured. A corresponding traffic behavior (such as statistics
collection, CAR, and re-marking) is configured for the qos-local-id (behavior ID).
 Source address-based QPPB is enabled for incoming traffic.
 The QPPB policy is applied to outgoing traffic on the user-side interface.
 During packet forwarding, the Behavior ID (qos-local-id) is obtained for packets based
on the source IP address, and the corresponding traffic behavior is performed.
Configuration example:
# Define rules for community attributes.
ip community-filter 10 permit 1000:10
ip community-filter 11 permit 1000:11
ip community-filter 12 permit 1000:12
ip community-filter 13 permit 1000:13

# Define a route-policy.


route-policy policyA permit node 10
if-match community-filter 10
apply qos-local-id 10
route-policy policyA permit node 11
if-match community-filter 11
apply qos-local-id 11
route-policy policyA permit node 12
if-match community-filter 12
apply qos-local-id 12
route-policy policyA permit node 13
if-match community-filter 13
apply qos-local-id 13

# Apply the route-policy to BGP routes.


bgp 100
peer 1.1.1.10 as-number 100
peer 1.1.1.11 as-number 100
peer 1.1.1.12 as-number 100
peer 1.1.1.13 as-number 100
peer 1.1.1.10 route-policy policyA import
peer 1.1.1.11 route-policy policyA import
peer 1.1.1.12 route-policy policyA import
peer 1.1.1.13 route-policy policyA import

# Define traffic behaviors.


traffic behavior b10
car cir 2000
traffic behavior b11
car cir 3000
traffic behavior b12
car cir 4000
traffic behavior b13
car cir 5000

# Define a QPPB policy.


qppb local-policy policyA
qos-local-id 10 behavior b10
qos-local-id 11 behavior b11
qos-local-id 12 behavior b12
qos-local-id 13 behavior b13

# Enable source address-based QPPB for incoming traffic.


qppb-policy qos-local-id source inbound

# Apply the QPPB policy to outgoing traffic on the user-side interface.


qppb-policy policyA outbound

# Check statistics.
display qppb local-policy statistics interface X/X/X outbound


3.5 QoS Implementations on Different Boards


3.5.1 Implementation Differences of BA Classification
Upstream Mapping

Item: BA classification based on the Drop Eligible Indicator (DEI) bit in a VLAN tag
- Type-B: not supported.
- Type-C: not supported.
- Type-A: supported.
- Type-D: if DEI = 1, the packet is colored yellow; otherwise, it is colored green.

Example:
Scenario: the trust upstream command is configured on an inbound interface.
Inbound packet type: VLAN packet or QinQ packet.
Outbound packet type: any type.

Note:
For other implementation differences in BA upstream mapping, see the sections "Which Priority Field is Trusted"
and "Remark and PHB Symbols".

Downstream Mapping

For downstream mapping and remarking principles, see the embedded spreadsheet
"QoS mapping and remarking principles.xlsx".


MPLS Diffserv
For implementation differences in MPLS Diffserv, see the following sections:
 Which Priority Field is Trusted
 Remark and PHB Symbols
 Rules for PHB Action
 Which Priority Field of the Inbound Packet Is Reset in PHB Action
 Rules for Marking the EXP Field of a Newly Added MPLS Header
 DSCP Remarking Rules in MPLS VPN Scenarios

3.5.2 Implementation Differences of MF Classification


Remark Behavior

Item: result of the remark dscp command
- Type-B: the re-marked DSCP value may differ from the value specified in the command;
the DSCP value is reset according to the BA downstream mapping table.
- Type-A/Type-C/Type-D: the re-marked DSCP value is the same as the value specified in
the command.
Example:
Scenario: the remark dscp command is configured for inbound or outbound packets.
Item: the remark dscp command configured on an inbound interface
- Type-B: the inbound board does not re-mark the DSCP value directly.
- Type-A/Type-C/Type-D: the inbound board re-marks the DSCP value directly.
Example:
Scenario 1:
 Inbound: the trust upstream and remark dscp commands are configured.
 Outbound: the trust upstream or qos phb enable command is configured.
Scenario 2:
 Inbound: the trust upstream and remark dscp commands are configured.
 Outbound: neither the trust upstream nor the qos phb enable command is configured.

Item: the remark dscp command on the outbound interface of an egress PE in a VPLS scenario
- Type-B: supported. Command: remark payload-dscp.
- Type-A/Type-C/Type-D: not supported.


Item: the remark mpls-exp command
- Type-B: not supported.
- Type-C: supported (only for outbound traffic).
- Type-A: supported (only for inbound traffic).
- Type-D: supported (for both inbound and outbound traffic).
Example:
Scenario 1: on a transit LSR in an MPLS domain, the trust upstream command is configured
on both the inbound and outbound interfaces, and the remark mpls-exp command is
configured on the inbound interface.
Scenario 2: on a transit LSR in an MPLS domain, the trust upstream command is configured
on both the inbound and outbound interfaces, and the remark mpls-exp command is
configured on the outbound interface.


ACL/UCL Matching Rule

Item: ACL/UCL matching on the ToS value
- Type-B: ACL/UCL matching on the ToS value does not take the last bit of the ToS field
into account, so the matching result may be incorrect. For example, if the ACL is
configured to match packets with tos = 1, both packets with tos = 0 and packets with
tos = 1 are matched.
- Type-A/Type-C/Type-D: this problem does not occur.
Example:
Scenario:
#
acl 3000
rule 5 permit tcp tos 0
#
traffic classifier a
if-match acl 3000
#
traffic behavior a
deny
#
traffic policy a
classifier a behavior a
#
interface GigabitEthernet1/0/1
traffic-policy a inbound
#
The result is:
 If interface GigabitEthernet1/0/1 is on a Type-B board, both packets with tos = 0 and
packets with tos = 1 are discarded.
 If interface GigabitEthernet1/0/1 is on a Type-A, Type-C, or Type-D board, only packets
with tos = 0 are discarded.
Item: all-layer mode MF classification
- Type-B: not supported.
- Type-C: not supported.
- Type-A: supported.
- Type-D: supported. Use the traffic-policy policy-name { inbound | outbound } all-layer
command. The device first performs Layer 2 MF classification. If a packet does not match
the Layer 2 MF classification, the device continues to perform Layer 3 MF classification.
NOTE
In all-layer mode, the device does not perform MPLS MF classification.
By default, the device only performs Layer 3 MF classification.
Example:
Scenario: deny packets with 802.1p = 0 or DSCP = 40.
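
A sketch of this scenario on a board that supports all-layer mode could look as follows; the
classifier, behavior, policy, and interface names are illustrative only:
traffic classifier c-l2
if-match 8021p 0
traffic classifier c-l3
if-match dscp 40
traffic behavior b-deny
deny
traffic policy p-all
classifier c-l2 behavior b-deny
classifier c-l3 behavior b-deny
interface GigabitEthernet1/0/1
traffic-policy p-all inbound all-layer
With the all-layer keyword, a packet is first matched against the Layer 2 classifier (802.1p)
and, if no match is found, against the Layer 3 classifier (DSCP).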

Item: MF classification matching the 802.1p value of the inner VLAN tag
- Type-B: not supported.
- Type-C: in V600R005 and earlier versions, supported only for inbound traffic; in
V600R006 and later versions, supported for both inbound and outbound traffic.
- Type-A: not supported.
- Type-D: supported (for both inbound and outbound traffic).
Example:
Scenario: discard inbound traffic with inner 802.1p = 5 (taking V600R006 or a later version
as an example).

Item: MF classification matching the MPLS packet header
- Type-B: not supported.
- Type-C: supported.
- Type-A/Type-D: supported. Use the traffic-policy policy-name { inbound | outbound }
mpls-layer command.
Example:
Scenario: discard outbound packets in which the EXP value of the inner MPLS label is 2.
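
A sketch of this scenario could look as follows; the classifier, behavior, policy, and interface
names are illustrative, and which label (inner or outer) is matched depends on the board and
command options:
traffic classifier c-exp
if-match mpls-exp 2
traffic behavior b-deny
deny
traffic policy p-mpls
classifier c-exp behavior b-deny
interface GigabitEthernet1/0/1
traffic-policy p-mpls outbound mpls-layer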


QinQ Sub-interface Scenarios

Item: MF classification on an inbound QinQ termination sub-interface
- Type-B: supports only Layer 3 MF classification if the QinQ termination sub-interface
works in Layer 3 mode, and only Layer 2 (link layer) MF classification if the sub-interface
works in Layer 2 mode.
- Type-A/Type-C/Type-D: support all kinds of MF classification.

Item: MF classification on an outbound QinQ termination main interface
- Type-B: MF classification takes effect on the traffic of sub-interfaces.
- Type-A/Type-C/Type-D: MF classification does not take effect on the traffic of
sub-interfaces.

Item: traffic behaviors supported by MF classification on an outbound QinQ termination
sub-interface
- Type-B: supports only the permit, deny, remark, and traffic-statistics behaviors.
- Type-A/Type-C/Type-D: support all traffic behaviors described in chapter 3.1 Traffic
Classifiers and Traffic Behaviors.

MPLS L2VPN Scenarios

Item: MF classification in a VLL or PWE3 scenario
- Type-B: not supported.
- Type-A/Type-C/Type-D: supported.
Example:
Scenario: MF classification is configured on the UNI interface of a PE in a VLL or PWE3
scenario.

IPv6 Scenarios

Item: CAR for IPv6 packets matching MF classification
- Type-B: not supported.
- Type-A/Type-C/Type-D: supported.

3.6 FAQ about Classification and Marking


3.6.1 Is the Default-mapping Defined by Huawei or by RFC
Standard?
Question
Is the default mapping shown in section Default Priority Mapping Table for the Default
Domain defined by Huawei or by RFC standard?


Answer
It is defined by Huawei, not by an RFC standard. However, RFC 5127 recommends a mapping
between EXP and DSCP, and the mapping on Huawei routers is consistent with the RFC 5127
recommendation.

3.6.2 Is It Possible to Remark Several Fields Together?


Question
Is it possible to remark several fields at the same time? For example, can the 802.1p,
EXP, and DSCP fields be remarked together?

Answer
It is possible to remark several fields. To remark the 802.1p, EXP, and DSCP fields together,
configure the remark 8021p, remark mpls-exp, and remark dscp commands together in the
traffic behavior view.

3.6.3 When There Are Multiple Classifiers and Multiple Behaviors in One Traffic Policy, How Are They Evaluated?
Question
When there are multiple classifiers and multiple behaviors in one traffic policy, how are they
evaluated? In order?

Answer
Yes, in order. For more information, see the section "Traffic Policy Implementation".

3.6.4 What Is the Default Behavior If BA Classification Is Not Configured on the Inbound Interface?
Question
What is the default behavior if BA classification is not configured on the inbound interface?

Answer
If none of the BA classification commands (trust upstream, diffserv-mode pipe, and
diffserv-mode short-pipe) is configured on the inbound interface, the packets are mapped
to <BE, Green>; that is, all the packets from the inbound interface are put into the BE queue.


4 Traffic Policing and Traffic Shaping

About This Chapter


4.1 Traffic Policing
4.2 Traffic Shaping
4.3 Comparison Between Traffic Policing and Traffic Shaping
4.4 QoS Implementations on Different Boards
4.5 Capabilities for Policing and Shaping
4.6 FAQ about Policing and Shaping

4.1 Traffic Policing


4.1.1 Overview
Traffic policing controls the rate of incoming packets to ensure that network resources are
properly allocated. If the traffic rate of a connection exceeds the specifications on an interface,
traffic policing allows the interface to drop excess packets or re-mark the packet priority to
maximize network resource usage and protect operators' profits. An example of this process is
restricting the rate of HTTP packets to 50% of the network bandwidth.
Traffic policing implements the QoS requirements defined in the service level agreement
(SLA). The SLA contains parameters, such as the Committed Information Rate (CIR), Peak
Information Rate (PIR), Committed Burst Size (CBS), and Peak Burst Size (PBS) to monitor
and control incoming traffic. The device performs Pass, Drop, or Markdown actions for the
traffic exceeding the specified limit. Markdown means that packets are marked with a lower
service class or a higher drop precedence so that these packets are preferentially dropped
when traffic congestion occurs. This measure ensures that the packets conforming to the SLA
can have the services specified in the SLA.
Traffic policing uses committed access rate (CAR) to control traffic. CAR uses token buckets
to meter the traffic rate. Then preset actions are implemented based on the metering result.
These actions include:
 Pass: forwards the packets conforming to the SLA.


 Discard: drops the packets exceeding the specified limit.


 Re-mark: re-marks the packets whose traffic rate is between the CIR and PIR with a
lower priority and allows these packets to be forwarded.

4.1.2 Token Bucket


What Is a Token Bucket
A token bucket is a commonly used mechanism that measures traffic passing through a
device.
A token bucket can be considered a container of tokens, which has a pre-defined capacity.
Tokens are put into the token bucket at a preset rate. When the token bucket is full of tokens,
no more tokens can be added. Figure 1.1 shows a token bucket.

Figure 1.1 Token bucket

A token bucket measures traffic but does not filter packets or perform any action, such as dropping
packets.

As shown in Figure 1.2, when a packet arrives, the device takes enough tokens from the
token bucket to transmit the packet. If the token bucket does not have enough tokens to send
the packet, the packet either waits for enough tokens or is discarded. This mechanism limits
packets to a transmission rate less than or equal to the rate at which tokens are generated.


Figure 1.2 Processing packets using token buckets

The token bucket mechanism widely applies to QoS technologies, such as the committed
access rate (CAR), traffic shaping, and Line Rate (LR).

This section only describes how to meter and mark packets using token buckets.
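The metering behavior described above can be sketched as a minimal token bucket model. This is an illustrative sketch (the class and parameter names are invented here, and time is tracked in milliseconds for simplicity), not Huawei's internal implementation:

```python
class TokenBucket:
    """Minimal token bucket meter: tokens accumulate at the CIR up to the CBS."""

    def __init__(self, cir_bps, cbs_bytes):
        self.rate = cir_bps / 8000.0   # token fill rate in bytes per millisecond
        self.cbs = cbs_bytes           # bucket depth in bytes
        self.tokens = cbs_bytes        # the bucket starts full
        self.last_ms = 0               # arrival time of the previous packet, in ms

    def conforms(self, packet_bytes, now_ms):
        """Return True if enough tokens exist to send the packet (tokens are consumed)."""
        # Add tokens for the elapsed time; tokens over the CBS are dropped.
        self.tokens = min(self.cbs, self.tokens + (now_ms - self.last_ms) * self.rate)
        self.last_ms = now_ms
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True
        return False   # not enough tokens: the packet must wait or be dropped
```

With a CIR of 1 Mbit/s and a CBS of 2000 bytes, a 1500-byte packet at t = 0 conforms, but a second 1500-byte packet 1 ms later does not, because only 125 bytes of tokens have accumulated by then.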

Two Token Bucket Markers


RFC 2697 and RFC 2698 define two token bucket markers: the single rate three color marker
(srTCM) and the two rate three color marker (trTCM). Both markers mark packets green,
yellow, or red. Note that the colors used by token bucket markers are irrelevant to those
indicating drop precedence. The srTCM focuses on the burst packet size, whereas the trTCM
focuses on the burst traffic rate. The srTCM, which is simpler than the trTCM, is widely used
for traffic metering.
Both token bucket markers operate in Color-Blind or Color-Aware mode. The widely used
Color-Blind mode is the default.

Parameters for srTCM


The following parameters are involved in srTCM:
 Committed Information Rate (CIR): the rate at which tokens are put into a token bucket.
The CIR is expressed in bit/s.
 Committed Burst Size (CBS): the committed volume of traffic that an interface allows to
pass through, also the depth of a token bucket. The CBS is expressed in bytes. The CBS
must be greater than or equal to the size of the largest possible packet in the stream. Note
that sometimes a single packet can consume all the tokens in the token bucket. The larger
the CBS is, the greater the traffic burst can be.
 Excess Burst Size (EBS): the maximum size of burst traffic beyond the CBS that an
interface allows to pass through, also the depth of bucket E. The EBS is expressed in bytes.
A packet is marked green if it does not exceed the CBS, yellow if it exceeds the CBS but does
not exceed the EBS, and red if it exceeds the EBS.


Mechanism for srTCM


A Huawei router uses two token buckets for srTCM.

Figure 1.1 Mechanism for srTCM

The srTCM uses two token buckets, C and E, which both share the common rate CIR. The
maximum size of bucket C is the CBS, and the maximum size of bucket E is the EBS.
When the EBS is 0, no tokens are added to bucket E, so only bucket C is used for srTCM and
packets are marked either green or red. When the EBS is not 0, both token buckets are used
and packets are marked green, yellow, or red.

Method of Adding Tokens for srTCM


In srTCM, tokens are put into bucket C and then bucket E after bucket C is full of tokens.
After both buckets C and E are filled with tokens, subsequent tokens are dropped.
Both buckets C and E are initially full.

Rules for srTCM


Tc and Te refer to the number of tokens in buckets C and E, respectively. The initial values of
Tc and Te are respectively the CBS and EBS.
In Color-Blind mode, the following rules apply when a packet of size B arrives at time t:
 When one token bucket is used:
− If Tc(t) – B ≥ 0, the packet is marked green, and Tc is decremented by B.
− If Tc(t) – B < 0, the packet is marked red, and Tc remains unchanged.
 When two token buckets are used:
− If Tc(t) – B ≥ 0, the packet is marked green, and Tc is decremented by B.
− If Tc(t) – B < 0 but Te(t) - B ≥ 0, the packet is marked yellow, and Te is
decremented by B.
− If Te(t) – B < 0, the packet is marked red, and neither Tc nor Te is decremented.
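These Color-Blind rules, combined with the token-adding method above (bucket C fills first and overflow spills into bucket E), can be sketched as follows. The class is an illustrative model of RFC 2697 with time in milliseconds, not Huawei's internal implementation:

```python
class SrTcm:
    """Single rate three color marker (RFC 2697), Color-Blind mode.

    Tokens arrive at the CIR and fill bucket C up to the CBS; tokens that
    overflow bucket C spill into bucket E up to the EBS. Both buckets start full.
    """

    def __init__(self, cir_bps, cbs, ebs):
        self.rate = cir_bps / 8000.0  # bytes per millisecond
        self.cbs, self.ebs = cbs, ebs
        self.tc, self.te = cbs, ebs   # Tc and Te start at the CBS and EBS
        self.last_ms = 0

    def mark(self, b, now_ms):
        new = (now_ms - self.last_ms) * self.rate
        self.last_ms = now_ms
        spill = max(0.0, self.tc + new - self.cbs)  # overflow from bucket C
        self.tc = min(self.cbs, self.tc + new)
        self.te = min(self.ebs, self.te + spill)
        if self.tc - b >= 0:
            self.tc -= b
            return "green"
        if self.te - b >= 0:
            self.te -= b
            return "yellow"
        return "red"  # neither bucket is decremented
```

With EBS = 0 the yellow branch can never match, so the marker degenerates to the single-bucket green/red case described earlier.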
In Color-Aware mode, the following rules apply when a packet of size B arrives at time t:
 When one token bucket is used:
− If the packet has been pre-colored as green and Tc(t) - B ≥ 0, the packet is re-
marked green, and Tc is decremented by B.


− If the packet has been pre-colored as green and Tc(t) – B < 0, the packet is re-
marked red, and Tc remains unchanged.
− If the packet has been pre-colored as yellow or red, the packet is re-marked red
regardless of the packet length. The Tc value remains unchanged.
 When two token buckets are used:
− If the packet has been pre-colored as green and Tc(t) - B ≥ 0, the packet is re-
marked green, and Tc is decremented by B.
− If the packet has been pre-colored as green and Tc(t) – B < 0 but Te(t) - B ≥ 0, the
packet is marked yellow, and Te is decremented by B.
− If the packet has been pre-colored as yellow and Te(t) – B ≥ 0, the packet is re-
marked yellow, and Te is decremented by B.
− If the packet has been pre-colored as yellow and Te(t) – B < 0, the packet is re-
marked red, and Te remains unchanged.
− If the packet has been pre-colored as red, the packet is re-marked red regardless of
the packet length. The Tc and Te values remain unchanged.

Parameters for trTCM


trTCM covers the following parameters:
 CIR: the rate at which tokens are put into a token bucket. The CIR is expressed in bit/s.
 CBS: the committed volume of traffic that an interface allows to pass through, also the
depth of a token bucket. The CBS is expressed in bytes. The CBS must be greater than or
equal to the size of the largest possible packet entering a device.
 PIR: the maximum rate at which an interface allows packets to pass and is expressed in
bit/s. The PIR must be greater than or equal to the CIR.
 PBS: the maximum volume of traffic that an interface allows to pass through in a traffic
burst, also the depth of bucket P. The PBS is expressed in bytes.

Mechanism for trTCM


The trTCM uses two token buckets and focuses on the burst traffic rate. The trTCM uses two
token buckets, C and P, with rates CIR and PIR, respectively. The maximum size of bucket C
is the CBS, and the maximum size of bucket P is the PBS.

Figure 1.1 Mechanism for trTCM


Method of Adding Tokens for trTCM


Tokens are put into buckets C and P at the rate of CIR and PIR, respectively. When one bucket
is full of tokens, any subsequent tokens for the bucket are dropped, but tokens continue being
put into the other bucket if it is not full.
Buckets C and P are initially full.

Rules for trTCM


The trTCM focuses on the traffic burst rate and checks whether the traffic rate is conforming
to the specifications. Therefore, traffic is measured based on bucket P and then bucket C.
Tc and Tp refer to the numbers of tokens in buckets C and P, respectively. The initial values of
Tc and Tp are respectively the CBS and PBS.
In Color-Blind mode, the following rules apply when a packet of size B arrives at time t:
 If Tp(t) – B < 0, the packet is marked red, and the Tc and Tp values remain unchanged.
 If Tp(t) – B ≥ 0 but Tc(t) – B < 0, the packet is marked yellow, and Tp is decremented by
B.
 If Tc(t) – B ≥ 0, the packet is marked green and both Tp and Tc are decremented by B.
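The Color-Blind trTCM rules above can be sketched in the same style. This is an illustrative model of RFC 2698 with time in milliseconds, not Huawei's internal implementation:

```python
class TrTcm:
    """Two rate three color marker (RFC 2698), Color-Blind mode.

    Bucket C fills at the CIR up to the CBS; bucket P fills at the PIR up to
    the PBS. Bucket P is checked first, then bucket C. Both buckets start full.
    """

    def __init__(self, cir_bps, pir_bps, cbs, pbs):
        self.c_rate = cir_bps / 8000.0  # bytes per millisecond
        self.p_rate = pir_bps / 8000.0
        self.cbs, self.pbs = cbs, pbs
        self.tc, self.tp = cbs, pbs     # Tc and Tp start at the CBS and PBS
        self.last_ms = 0

    def mark(self, b, now_ms):
        elapsed = now_ms - self.last_ms
        self.last_ms = now_ms
        self.tc = min(self.cbs, self.tc + elapsed * self.c_rate)
        self.tp = min(self.pbs, self.tp + elapsed * self.p_rate)
        if self.tp - b < 0:
            return "red"     # exceeds the PIR; neither bucket is decremented
        if self.tc - b < 0:
            self.tp -= b
            return "yellow"  # between the CIR and PIR; only bucket P pays
        self.tc -= b
        self.tp -= b
        return "green"       # within the CIR; both buckets pay
```

Note the check order: bucket P first, then bucket C, matching the rule that traffic is measured against the PIR before the CIR.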
In Color-Aware mode, the following rules apply when a packet of size B arrives at time t:
 If the packet has been pre-colored as green, and Tp(t) – B < 0, the packet is re-marked
red, and neither Tp nor Tc is decremented.
 If the packet has been pre-colored as green and Tp(t) – B ≥ 0 but Tc(t) – B < 0, the
packet is re-marked yellow, and Tp is decremented by B, and Tc remains unchanged.
 If the packet has been pre-colored as green and Tc(t) – B ≥ 0, the packet is re-marked
green, and both Tp and Tc are decremented by B.
 If the packet has been pre-colored as yellow and Tp(t) – B < 0, the packet is re-marked
red, and neither Tp nor Tc is decremented.
 If the packet has been pre-colored as yellow and Tp(t) – B ≥ 0, the packet is re-marked
yellow, and Tp is decremented by B and Tc remains unchanged.
 If the packet has been pre-colored as red, the packet is re-marked red regardless of what
the packet length is. The Tp and Tc values remain unchanged.

4.1.3 CAR
What Is CAR
In traffic policing, committed access rate (CAR) is used to control traffic. CAR uses token
buckets to measure traffic and determines whether a packet is conforming to the specification.
CAR has the following two functions:
 Rate limit: Only packets allocated enough tokens are allowed to pass so that the traffic
rate is restricted.
 Traffic classification: Packets are marked with internal priorities, such as the scheduling
precedence and drop precedence, based on the measurement performed by token
buckets.


CAR Process

Figure 1.1 CAR process

 When a packet arrives, the device matches the packet against matching rules. If the
packet matches a rule, the router uses token buckets to meter the traffic rate.
 The router marks the packet red, yellow, or green based on the metering result. Red
indicates that the traffic rate exceeds the specifications. Yellow indicates that the traffic
rate exceeds the specifications but is within an allowed range. Green indicates that the
traffic rate is conforming to the specifications.
 The device drops packets marked red, re-marks and forwards packets marked yellow,
and forwards packets marked green.

Marking Process of CAR


Huawei routers conform to RFC 2697 and RFC 2698 to implement CAR.
CAR supports srTCM with single bucket, srTCM with two buckets, and trTCM. This section
provides examples of the three marking methods in Color-Blind mode. The implementation in
Color-Aware mode is similar to that in Color-Blind mode.
 SrTCM with Single Bucket
This example uses the CIR 1 Mbit/s, the committed burst size (CBS) 2000 bytes, and the
excess burst size (EBS) 0. The EBS 0 indicates that only bucket C is used. Bucket C is
initially full of tokens.
− If the first arriving packet is 1500 bytes long, the packet is marked green because
the number of tokens in bucket C is greater than the packet length. The number of
tokens in bucket C then decreases by 1500 bytes, with 500 bytes remaining.
− Assume that the second packet arriving at the interface after a delay of 1 ms is 1500
bytes long. Additional 125-byte tokens are put into bucket C (CIR x time period = 1
Mbit/s x 1 ms = 1000 bits = 125 bytes). Bucket C now has 625-byte tokens, which
are not enough for the 1500-byte second packet. Therefore, the second packet is
marked red.
− Assume that the third packet arriving at the interface after a delay of 1 ms is 1000
bytes long. Additional 125-byte tokens are put into bucket C (CIR x time period = 1
Mbit/s x 1 ms = 1000 bits = 125 bytes). Bucket C now has 750-byte tokens, which


are not enough for the 1000-byte third packet. Therefore, the third packet is marked
red.
− Assume that the fourth packet arriving at the interface after a delay of 20 ms is 1500
bytes long. Additional 2500-byte tokens are put into bucket C (CIR x time period =
1 Mbit/s x 20 ms = 20000 bits = 2500 bytes). This time 3250-byte tokens are
destined for bucket C, but the excess 1250-byte tokens over the CBS (2000 bytes)
are dropped. Therefore, bucket C has 2000-byte tokens, which are enough for the
1500-byte fourth packet. The fourth packet is marked green, and the number of
tokens in bucket C decreases by 1500 bytes to 500 bytes.
The following table illustrates this process:
No.  Time (ms)  Packet Length (bytes)  Delay (ms)  Tokens Added (bytes)  Bucket C Before (bytes)  Bucket C After (bytes)  Marking
-    -          -                      -           -                     2000                     2000                    -
1    0          1500                   0           0                     2000                     500                     Green
2    1          1500                   1           125                   625                      625                     Red
3    2          1000                   1           125                   750                      750                     Red
4    22         1500                   20          2500                  2000                     500                     Green

 SrTCM with Two Buckets


This example uses the CIR 1 Mbit/s and the CBS and EBS both 2000 bytes. Buckets C
and E are initially full of tokens.
− If the first packet arriving at the interface is 1500 bytes long, the packet is marked
green because the number of tokens in bucket C is greater than the packet length.
The number of tokens in bucket C then decreases by 1500 bytes, with 500 bytes
remaining. The number of tokens in bucket E remains unchanged.
− Assume that the second packet arriving at the interface after a delay of 1 ms is 1500
bytes long. Additional 125-byte tokens are put into bucket C (CIR x time period = 1
Mbit/s x 1 ms = 1000 bits = 125 bytes). Bucket C now has 625-byte tokens, which
are not enough for the 1500-byte second packet. Bucket E has 2000-byte tokens,
which are enough for the second packet. Therefore, the second packet is marked
yellow, and the number of tokens in bucket E decreases by 1500 bytes, with 500
bytes remaining. The number of tokens in bucket C remains unchanged.
− Assume that the third packet arriving at the interface after a delay of 1 ms is 1000
bytes long. Additional 125-byte tokens are put into bucket C (CIR x time period = 1
Mbit/s x 1 ms = 1000 bits = 125 bytes). Bucket C now has 750-byte tokens and
bucket E has 500-byte tokens, neither of which is enough for the 1000-byte third
packet. Therefore, the third packet is marked red. The number of tokens in buckets
C and E remain unchanged.
− Assume that the fourth packet arriving at the interface after a delay of 20 ms is 1500
bytes long. Additional 2500-byte tokens are put into bucket C (CIR x time period =
1 Mbit/s x 20 ms = 20000 bits = 2500 bytes). This time 3250-byte tokens are
destined for bucket C, but the excess 1250-byte tokens over the CBS (2000 bytes)
are put into bucket E instead. Therefore, bucket C has 2000-byte tokens, and bucket


E has 1750-byte tokens. Tokens in bucket C are enough for the 1500-byte fourth
packet. Therefore, the fourth packet is marked green, and the number of tokens in
bucket C decreases by 1500 bytes, with 500 bytes remaining. The number of tokens
in bucket E remains unchanged.
The following table illustrates the preceding process:
No.  Time (ms)  Packet Length (bytes)  Delay (ms)  Tokens Added (bytes)  Bucket C Before  Bucket E Before  Bucket C After  Bucket E After  Marking
-    -          -                      -           -                     2000             2000             2000            2000            -
1    0          1500                   0           0                     2000             2000             500             2000            Green
2    1          1500                   1           125                   625              2000             625             500             Yellow
3    2          1000                   1           125                   750              500              750             500             Red
4    22         1500                   20          2500                  2000             1750             500             1750            Green

 TrTCM
This example uses the CIR 1 Mbit/s, the PIR 2 Mbit/s, and the CBS and PBS both 2000
bytes. Buckets C and P are initially full of tokens.
− If the first packet arriving at the interface is 1500 bytes long, the packet is marked
green because the number of tokens in both buckets P and C is greater than the
packet length. Then the number of tokens in both buckets P and C decreases by
1500 bytes, with 500 bytes remaining.
− Assume that the second packet arriving at the interface after a delay of 1 ms is 1500
bytes long. Additional 250-byte tokens are put into bucket P (PIR x time period = 2
Mbit/s x 1 ms = 2000 bits = 250 bytes) and 125-byte tokens are put into bucket C
(CIR x time period = 1 Mbit/s x 1 ms = 1000 bits = 125 bytes). Bucket P now has
750-byte tokens, which are not enough for the 1500-byte second packet. Therefore,
the second packet is marked red, and the number of tokens in buckets P and C
remain unchanged.
− Assume that the third packet arriving at the interface after a delay of 1 ms is 1000
bytes long. Additional 250-byte tokens are put into bucket P (PIR x time period = 2
Mbit/s x 1 ms = 2000 bits = 250 bytes) and 125-byte tokens are put into bucket C
(CIR x time period = 1 Mbit/s x 1 ms = 1000 bits = 125 bytes). Bucket P now has
1000-byte tokens, which equals the third packet length. Bucket C has only 750-byte
tokens, which are not enough for the 1000-byte third packet. Therefore, the third
packet is marked yellow. The number of tokens in bucket P decreases by 1000
bytes, with 0 bytes remaining. The number of tokens in bucket C remains
unchanged.
− Assume that the fourth packet arriving at the interface after a delay of 20 ms is 1500
bytes long. Additional 5000-byte tokens are put into bucket P (PIR x time period =
2 Mbit/s x 20 ms = 40000 bits = 5000 bytes), but excess tokens over the PBS (2000


bytes) are dropped. Bucket P has 2000-byte tokens, which are enough for the 1500-
byte fourth packet. Bucket C has 750-byte tokens left, and additional 2500-byte
tokens are put into bucket C (CIR x time period = 1 Mbit/s x 20 ms = 20000 bits =
2500 bytes). This time 3250-byte tokens are destined for bucket C, but excess tokens
over the CBS (2000 bytes) are dropped. Bucket C then has 2000-byte tokens, which
are enough for the 1500-byte fourth packet. Therefore, the fourth packet is marked
green. The number of tokens in both buckets P and C decreases by 1500 bytes, with
500 bytes remaining.
The following table illustrates this process:
No.  Time (ms)  Packet Length (bytes)  Delay (ms)  Added to C (bytes)  Added to P (bytes)  Bucket C Before  Bucket P Before  Bucket C After  Bucket P After  Marking
-    -          -                      -           -                   -                   2000             2000             2000            2000            -
1    0          1500                   0           0                   0                   2000             2000             500             500             Green
2    1          1500                   1           125                 250                 625              750              625             750             Red
3    2          1000                   1           125                 250                 750              1000             750             0               Yellow
4    22         1500                   20          2500                5000                2000             2000             500             500             Green

Usage Scenarios for the Three Marking Methods


The srTCM focuses on the traffic burst size and has a simple token-adding method and packet
processing mechanism. The trTCM focuses on the traffic burst rate and has a more complex
token-adding method and packet processing mechanism.
The srTCM and trTCM have their own advantages and disadvantages. They differ in
performance aspects such as the packet loss rate, burst traffic processing capability, hybrid
packet forwarding capability, and data forwarding smoothing capability. The three marking
methods fit traffic with different characteristics as follows:
 To control the traffic rate, use srTCM with single bucket.
 To control the traffic rate and distinguish traffic marked differently and process them
differently, use srTCM with two buckets. Note that traffic marked yellow must be
processed differently from traffic marked green. Otherwise, the implementation outcome
of srTCM with two buckets is the same as that of the srTCM with single bucket.


 To control the traffic rate and check whether the traffic rate exceeds the CIR or PIR, use
trTCM. Note that traffic marked yellow must be processed differently from traffic
marked green. Otherwise, the implementation outcome of trTCM is the same as that of
srTCM with single bucket.

CAR Parameter Setting


The CIR is the key to determine the volume of traffic allowed to pass through a network. The
larger the CIR is, the higher the rate at which tokens are generated. The more the tokens
allocated to packets, the greater the volume of traffic allowed to pass through. The CBS is
also an important parameter. A larger CBS results in more accumulated tokens in bucket C
and a greater volume of traffic allowed to pass through.
 The CBS must be greater than or equal to the maximum packet length. For example, the
CIR is 100 Mbit/s, and the CBS is 200 bytes. If a device receives 1500-byte packets, the
packet length always exceeds the CBS, causing the packets to be marked red or yellow
even if the traffic rate is lower than 100 Mbit/s. This leads to an inaccurate CAR
implementation.
 When the CBS, EBS, or PBS is greater than 0, it is recommended that the value be
greater than or equal to the size of the largest possible IP packet in the stream.
The bucket depth (CBS, EBS, or PBS) is set based on actual rate limit requirements. In
principle, the bucket depth is calculated based on the following conditions:
1. The bucket depth must be greater than or equal to the MTU.
2. The bucket depth must be greater than or equal to the allowed burst traffic volume.
Condition 1 is easy to meet. Condition 2 is harder to quantify, so the following formula is
introduced:
Bucket depth (bytes) = Bandwidth (kbit/s) x RTT (ms) / 8. Note that RTT refers to the round
trip time and is generally set to 200 ms.
The following formulas are generally used for Huawei routers:
 When the bandwidth is lower than or equal to 100 Mbit/s: Bucket depth (bytes) =
Bandwidth (kbit/s) x 1500 (ms) / 8.
 When the bandwidth is higher than 100 Mbit/s: Bucket depth (bytes) = 100,000 (kbit/s) x
1500 (ms) / 8.
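The two Huawei formulas above can be combined into a single helper; the function name is invented here for illustration:

```python
def bucket_depth_bytes(bandwidth_kbps):
    """Recommended bucket depth: min(bandwidth, 100 Mbit/s) in kbit/s x 1500 ms / 8."""
    capped_kbps = min(bandwidth_kbps, 100_000)  # bandwidth is capped at 100 Mbit/s
    return capped_kbps * 1500 // 8
```

For a 2 Mbit/s limit this yields 2000 x 1500 / 8 = 375,000 bytes; any bandwidth above 100 Mbit/s yields the same depth as 100 Mbit/s.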

CAR calculates the bandwidth of packets based on the entire packet. For example, for an
Ethernet frame, CAR counts the length of the frame header and CRC field in the bandwidth,
but not the preamble, SFD, or inter-frame gap. The following figure illustrates a complete
Ethernet frame (bytes):
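For a standard untagged Ethernet frame, the per-packet byte counts work out as follows. This sketch uses the usual IEEE 802.3 overhead sizes and applies the CAR accounting rule stated above:

```python
# Fields counted by CAR for an untagged Ethernet frame.
ETH_HEADER = 14   # destination MAC + source MAC + EtherType
ETH_CRC = 4       # frame check sequence
# Fields present on the wire that CAR does not count.
PREAMBLE, SFD, IFG = 7, 1, 12

def car_bytes(payload_len):
    """Bytes CAR charges for one frame carrying payload_len bytes of payload."""
    return ETH_HEADER + payload_len + ETH_CRC

def wire_bytes(payload_len):
    """Bytes the same frame actually occupies on the physical medium."""
    return car_bytes(payload_len) + PREAMBLE + SFD + IFG
```

For a 1500-byte IP packet, CAR accounts for 1518 bytes while the wire carries 1538, so a CAR limit slightly undercounts the physical line usage.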


4.1.4 Traffic Policing Applications


Traffic policing mainly applies to the ingress node of a DS domain. Packets that exceed the
SLA are dropped or re-marked so that packets conforming to the SLA are provided with
guaranteed services. Figure 1.1 shows typical networking.

Figure 1.1 Application 1

As shown in Figure 1.2, a router connects a wide area network (WAN) and a local area
network (LAN). The LAN bandwidth (100 Mbit/s) is higher than the WAN bandwidth (2
Mbit/s). When a LAN user attempts to send a large amount of data to a WAN, the router at the
network edge is prone to traffic congestion. Traffic policing can be configured on the router at
the network edge to restrict the traffic rate, preventing traffic congestion.

Figure 1.2 Application 2

Interface-based Traffic Policing


Interface-based traffic policing controls all traffic that enters an interface and does not identify
the packet types. As shown in Figure 1.1, a router at an ISP network edge connects to three
user networks. The SLA defines that each user can send traffic at a maximum rate of 256
kbit/s. However, burst traffic is sometimes transmitted. Traffic policing can be configured on
the ingress router at the ISP network edge to restrict the traffic rate to a maximum of 256
kbit/s. All excess traffic over 256 kbit/s will be dropped.


Figure 1.1 Interface-based traffic policing

Class-based Traffic Policing


Class-based traffic policing controls the rate of one or more specific types of packets that
enter an interface, rather than all packets.
As shown in Figure 1.1, traffic from the three users at 1.1.1.1, 1.1.1.2, and 1.1.1.3 is
converged to a router. The SLA defines that each user can send traffic at a maximum rate of
256 kbit/s. However, burst traffic is sometimes transmitted. When a user sends a large amount
of data, services of other users may be affected even if they send traffic at a rate lower than
256 kbit/s. To resolve this problem, configure traffic classification and traffic policing based
on source IP addresses on the inbound interface of the device to control the rate of traffic sent
from different users. The device drops excess traffic when the traffic rate of a certain user
exceeds 256 kbit/s.

Figure 1.1 Class-based traffic policing

Multiple traffic policies must be configured on the inbound interface to implement different rate limits
for data flows sent from different source hosts. The traffic policies take effect in the configuration
order: the traffic policy configured first takes effect first after data traffic reaches the interface.


Combination of Traffic Policing and Other QoS Policies


Traffic policing and other QoS components can be implemented together to guarantee QoS
network-wide.
Figure 1.1 shows how traffic policing works with congestion avoidance to control traffic. In
this networking, four user networks connect to a router at the ISP network edge. The SLA
defines that each user can send FTP traffic at a maximum rate of 256 kbit/s. However, burst
traffic is sometimes transmitted at a rate even higher than 1 Mbit/s. When a user sends a large
amount of FTP data, FTP services of other users may be affected even if they send traffic at a
rate lower than 256 kbit/s. To resolve this problem, configure class-based traffic policing on
each inbound interface of the router to monitor the FTP traffic and re-mark the DSCP values
of packets. The traffic at a rate lower than or equal to 256 kbit/s is re-marked AF11. The
traffic at a rate ranging from 256 kbit/s to 1 Mbit/s is re-marked AF12. The traffic at a rate
higher than 1 Mbit/s is re-marked AF13. Weighted Random Early Detection (WRED) is
configured as a drop policy for these types of traffic on outbound interfaces to prevent traffic
congestion. WRED drops packets based on the DSCP values. Packets in AF13 are first
dropped, and then AF12 and AF11 in sequence.

Figure 1.1 Combination of traffic policing and congestion avoidance

Statistics Collection of Traffic Policing


Traffic that enters a network must be controlled, and traffic statistics must be collected.
Traditional statistics collection has the following defects:
 For upstream traffic, only statistics about packets that have undergone a CAR operation
can be collected. Statistics about the actual offered traffic and the packets dropped
during CAR are not provided.
 For downstream traffic, only statistics about packets that have undergone a CAR
operation can be collected. Statistics about the forwarded and dropped packets are
not provided.
Carriers require statistics about traffic on which CAR has been performed to analyze user
traffic beyond the specifications, which provides a basis for persuading users to purchase higher
bandwidth. Using the interface-based CAR statistics collection function, Huawei routers can
collect and record statistics about the upstream traffic after a CAR operation (the actual access
traffic of an enterprise user or an Internet bar), as well as statistics about the forwarded and
dropped downstream packets after a CAR operation.


4.2 Traffic Shaping


What Is Traffic Shaping
Traffic shaping controls the rate of outgoing packets to allow the traffic rate to match that on
the downstream device. When traffic is transmitted from a high-speed link to a low-speed link
or a traffic burst occurs, the inbound interface of the low-speed link is prone to severe data
loss. To prevent this problem, traffic shaping must be configured on the outbound interface of
the device connecting to the low-speed link, as shown in Figure 1.1.

Figure 1.1 Data transmission from the high-speed link to the low-speed link

As shown in Figure 1.2, traffic shaping can be configured on the outbound interface of an upstream device to smooth bursty traffic so that it is transmitted at an even rate, preventing traffic congestion on the downstream device.

Figure 1.2 Effect of traffic shaping

Traffic Shaping Implementation


Traffic shaping buffers packets that exceed the rate limit and uses token buckets to transmit them later at an even rate.
On Huawei routers, tokens are added to the bucket at an interval of CBS/CIR, and the number of tokens added each time equals the CBS.
Traffic shaping uses srTCM with a single bucket, so the metering result is either green or red.
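This refill rule can be sketched as a single-bucket shaper. In the simplified Python model below (the CIR and CBS values are illustrative assumptions), packets that find enough tokens leave immediately (green); otherwise they wait in the buffer for the next refill (red):

```python
# Single-bucket shaper sketch: tokens are added every CBS/CIR seconds,
# CBS bytes at a time; packets leave only when enough tokens exist,
# otherwise they wait in the buffer until the next refill.
from collections import deque

CIR = 125_000         # shaping rate in bytes/s (1 Mbit/s)
CBS = 5_000           # committed burst size in bytes (assumed)
INTERVAL = CBS / CIR  # refill interval: 0.04 s

def shape(packets):
    """packets: list of sizes, all queued at t=0.
    Returns a list of (size, departure_time) tuples."""
    tokens, t, out = CBS, 0.0, []
    q = deque(packets)
    while q:
        if q[0] <= tokens:       # "green": enough tokens, send now
            size = q.popleft()
            tokens -= size
            out.append((size, round(t, 3)))
        else:                    # "red": wait for the next refill
            t += INTERVAL
            tokens = CBS         # a full CBS of tokens is added
    return out

sent = shape([1500] * 10)
print(sent)  # departures are spread over successive refill intervals
```

With these values, three 1500-byte packets leave per 0.04 s interval, so a burst of ten packets is smoothed over four intervals.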


On Huawei routers, the lengths of the frame header and CRC field are counted in the bandwidth for packets to which CAR applies, but are not counted in the bandwidth for packets to which traffic shaping applies. For example, if the traffic shaping value is set to 23 Mbit/s for IPoE packets, the IP packets are transmitted at a rate of 23 Mbit/s, with the lengths of the frame header and CRC field not counted.
In addition, whether the CBS can be modified in traffic shaping depends on the product model, product version, and board type.

Traffic shaping applies to packets that have undergone queue scheduling and are leaving the queues. For details about queues and queue scheduling, see Chapter 5 Congestion Management and Avoidance.
There are two traffic shaping modes: queue-based traffic shaping and interface-based traffic
shaping.
 Queue-based traffic shaping applies to each queue on an outbound interface.
− When packets leave queues after queue scheduling, the packets that do not need traffic shaping are forwarded directly; the packets that need traffic shaping are measured against token buckets.
− After the measurement, if packets in a queue are transmitted at a rate conforming to the specifications, they are marked green and forwarded. If packets in a queue are transmitted at a rate exceeding the specifications, the packet that is leaving the queue is still forwarded, but the queue is marked unscheduled and can be scheduled again only after new tokens are added to the token bucket. While the queue is marked unscheduled, more packets can still be put into the queue, but packets exceeding the queue capacity are dropped. Therefore, traffic shaping transmits traffic at an even rate but does not guarantee zero packet loss.

Figure 1.1 Queue-based traffic shaping

 Interface-based traffic shaping, also called line rate (LR), is used to restrict the rate at
which all packets (including burst packets) are transmitted. Interface-based traffic
shaping takes effect on the entire outbound interface, regardless of packet priorities.
Figure 1.2 shows how interface-based traffic shaping is implemented:
− When packets leave queues after queue scheduling, all queues are measured together against token buckets.
− After the measurement, if the total packet rate conforms to the specifications, the packets are forwarded. If the packet rate on the interface exceeds the specification, the interface stops scheduling packets and resumes scheduling when enough tokens are available.

Figure 1.2 Interface-based traffic shaping

Traffic Shaping Applications


Traffic shaping controls the traffic output to minimize packet loss.

Figure 1.1 Traffic shaping application

 Interface-based traffic shaping


As shown in Figure 1.2, enterprise headquarters are connected to branches through
leased lines on an ISP network in Hub-Spoke mode. The bandwidth of each leased line is
1 Gbit/s. If all branches send data to headquarters, traffic congestion occurs on the nodes
connecting to headquarters at the ISP network edge. To prevent packet loss, configure
traffic shaping on outbound interfaces of the nodes at the branch network edge.


Figure 1.2 Interface-based traffic shaping

 Queue-based traffic shaping


As shown in Figure 1.3, enterprise headquarters are connected to branches through
leased lines on an ISP network in Hub-Spoke mode. The bandwidth of each leased line is
1 Gbit/s. Branches access the Internet through headquarters, but the link bandwidth
between headquarters and the Internet is only 100 Mbit/s. If all branches access the
Internet at a high rate, the rate of web traffic sent from headquarters to the Internet may
exceed 100 Mbit/s, causing web packet loss on the ISP network.
To prevent web packet loss, configure queue-based traffic shaping for web traffic on
outbound interfaces of branches and outbound interfaces connecting to the Internet on
headquarters.

Figure 1.3 Queue-based traffic shaping


Shaped Rate Adjustment: Last-Mile QoS


The last mile is the link between the user and the access device (such as an Ethernet/ATM DSLAM), as shown in Figure 1.1. Residential and enterprise users generally access the Ethernet/ATM DSLAM using IPoE, PPPoE, IPoA, PPPoA, IPoEoA, or PPPoEoA, and the DSLAM is connected to the BRAS or SR, an edge device on the backbone network, through a metropolitan area network (MAN).

Figure 1.1 Last mile

In a broadband service access scenario, an Ethernet link connects a BRAS or SR and a DSLAM. The BRAS or SR encapsulates Ethernet packets, and traffic shaping is implemented based on the Ethernet packets.
If an ATM link connects a user and a DSLAM, ATM encapsulation applies to packets on the user side of the DSLAM. The ATM encapsulation cost may be higher than the Ethernet encapsulation cost, so the rate of the shaped traffic may exceed the capability of the ATM link. As a result, traffic is congested on the DSLAM, and packet loss occurs.
As shown in Figure 1.2, packets are transmitted using PPPoEoA between a user and a
DSLAM and using PPPoE between the DSLAM and a BRAS or SR.


Figure 1.2 Packet Field changes in last mile (PPPoE->PPPoEoA)

The entire PPPoE packet is encapsulated into an RFC 1483 bridged (1483B) frame (for example, with LLC encapsulation) on the DSLAM, and the frame is then encapsulated in AAL5 mode (a CPCS-PDU trailer is added). After that, the packet is sliced into 48-byte ATM payloads, which are finally carried in 53-byte ATM cells.
Figure 1.3 shows how the header length is calculated. If LLC encapsulation is used for the 1483B frame, as shown in Figure 1.2, a 20-byte (14+6) PPPoE header or a 36-byte (6+14+8+8) PPPoEoA header is added.
If the PPPoE payload is L bytes, the PPPoEoA payload is L+36 bytes. When the PPPoEoA payload (L+36) is a multiple of 48, the number of ATM cells required to transmit the packet is (L+36)/48; otherwise, it is ⌊(L+36)/48⌋+1. Therefore, after ATM cell headers are added, the packet length becomes ((L+36)/48) x 53 or (⌊(L+36)/48⌋+1) x 53 bytes.
If the PPPoE payload in the packets sent from the BRAS or SR to the DSLAM is 100 bytes and the packet rate is 2 Mbit/s, the rate of the packets sent from the DSLAM to the CPE is calculated as follows: ( ⌊(100+36)/48⌋ + 1 ) x 53 / (100+20) x 2 Mbit/s = 159/120 x 2 Mbit/s ≈ 2.65 Mbit/s.
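The cell arithmetic can be checked with a short calculation. The helper below is only a sketch; the 100-byte payload and 2 Mbit/s rate come from the example, and the integer cell count (⌈136/48⌉ = 3 cells of 53 bytes) is what drives the rate expansion:

```python
import math

def atm_line_bytes(pppoe_payload_len):
    """Bytes on the ATM link for one PPPoE packet (LLC encapsulation):
    36 bytes of PPPoEoA overhead (6+14+8+8), AAL5 padding to a multiple
    of 48 bytes, and one 5-byte header per 53-byte cell."""
    cells = math.ceil((pppoe_payload_len + 36) / 48)
    return cells * 53

L = 100                         # PPPoE payload in bytes
eth_side = L + 20               # PPPoE packet on the Ethernet side (14+6 header)
atm_side = atm_line_bytes(L)    # 3 cells x 53 bytes = 159 bytes
rate = atm_side / eth_side * 2  # ATM-side rate for 2 Mbit/s of PPPoE traffic
print(atm_side, round(rate, 2))  # 159 bytes per packet, about 2.65 Mbit/s
```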


Figure 1.3 Packet encapsulation

Even if the link connecting the user and the DSLAM is also an Ethernet link, the encapsulation cost of the packets sent between the user and the DSLAM may exceed that on the user side of the BRAS or SR. For example, the Ethernet packet encapsulated on the BRAS or SR does not carry a VLAN tag, but the packet sent between the user and the DSLAM carries a single VLAN tag or double VLAN tags due to VLAN or QinQ encapsulation.
To resolve this problem, last-mile QoS can be configured on the BRAS or SR. Last-mile QoS
allows a device to calculate the length of headers to be added to packets based on the
bandwidth purchased by users and the bandwidth of the downstream interface on the DSLAM
for traffic shaping.


The BRAS or SR cannot parse packets that are encapsulated using multiple protocols. For example, the BRAS or SR parses only the Ethernet header of PPPoE packets, not the other headers involved in the encapsulation. In addition, if a DSLAM connects to a CPE through an ATM link (as shown in Figure 1.3):
 During AAL5 encapsulation, the DSLAM adds a PAD field so that the CPCS-PDU length is a multiple of 48 bytes.
 During AAL5 encapsulation, the DSLAM implements LLC or VC encapsulation, which have different header costs. Even if LLC encapsulation is used on the DSLAM, the LLC encapsulation cost differs between PPPoA packets and IPoA/PPPoEoA/IPoEoA packets.
Therefore, the BRAS or SR cannot automatically infer the total length of a packet after it is encapsulated on the DSLAM, and compensation bytes must be configured.
After compensation bytes are configured, if the DSLAM connects to the CPE through an Ethernet link, the BRAS or SR can automatically infer the total length of the packet encapsulated on the DSLAM based on the length of the forwarded packet and the configured compensation bytes, and determine how to adjust the shaped rate.
After compensation bytes are configured, if the DSLAM connects to the CPE through an ATM link, the BRAS or SR can automatically infer the length of the headers to be added (including the PAD field length for AAL5 encapsulation) based on the length of the forwarded packet and the configured compensation bytes, and then infer the total length of the packet encapsulated on the DSLAM. Therefore, the PAD cost and ATM cell header cost do not need to be considered when the compensation bytes are configured.
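These inference rules can be sketched as follows. The function names and the sample values are illustrative; the +16 compensation corresponds to the LLC-encapsulation case in Table 3.2:

```python
import math

def dslam_frame_len(forwarded_len, compensation):
    """Ethernet link on the DSLAM's user side: the inferred frame length
    is simply the forwarded packet length plus the configured
    compensation bytes (which may be negative)."""
    return forwarded_len + compensation

def dslam_atm_len(forwarded_len, compensation):
    """ATM link on the DSLAM's user side: pad the compensated length to
    a 48-byte multiple (the AAL5 PAD) and count 53-byte cells, so the
    PAD and cell-header costs need not be part of the compensation."""
    cells = math.ceil((forwarded_len + compensation) / 48)
    return cells * 53

print(dslam_frame_len(120, 16))  # 136 bytes inferred on the Ethernet link
print(dslam_atm_len(120, 16))    # 3 cells -> 159 bytes on the ATM link
```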
The following tables provide common encapsulation costs and compensation bytes.

Table 3.1 Packet encapsulation cost

Encapsulation Type | Encapsulation Cost (Bytes)
PPP header | 2
Eth header | 14
VLAN header | 4
QinQ header | 8
AAL5 encapsulation, VC | AAL5 header + AAL5 tail = 0 + 8 = 8
AAL5 encapsulation, LLC Type1 (connectionless mode, such as IPoE, PPPoE, IPoA, IPoEoA, and PPPoEoA) | AAL5 header + AAL5 tail = 8 + 8 = 16
AAL5 encapsulation, LLC Type2 (connection mode, such as PPPoA) | AAL5 header + AAL5 tail = 4 + 8 = 12


Table 3.2 Common compensation bytes for last-mile QoS

Scenario | Compensation Bytes

 For LLC encapsulation:
= AAL5 header + AAL5 tail
= 8 + 8
= 16
 For VC encapsulation:
= 0 + 8
= 8

 For LLC encapsulation:
= AAL5 header + AAL5 tail - PPPoE header - VLAN header - Eth header
= 4 + 8 - 6 - 4 - 14
= -12
 For VC encapsulation:
= 0 + 8 - 6 - 4 - 14
= -16

 For LLC encapsulation:
= AAL5 header + AAL5 tail - PPP header - PPPoE header
= 8 + 8 - 2 - 6
= 8
 For VC encapsulation:
= 0 + 8 - 2 - 6
= 0

 For LLC encapsulation:
= AAL5 header + AAL5 tail - PPP header - PPPoE header - Eth header
= 8 + 8 - 2 - 6 - 14
= -6
 For VC encapsulation:
= 0 + 8 - 2 - 6 - 14
= -14

 For LLC encapsulation:
= AAL5 header + AAL5 tail - QinQ header - Eth header
= 8 + 8 - 8 - 14
= -6
 For VC encapsulation:
= 0 + 8 - 8 - 14
= -14

= VLAN header - QinQ header
= -4

= 0 - QinQ header
= -8

4.3 Comparison Between Traffic Policing and Traffic Shaping
Similarity
Traffic policing and traffic shaping share the following features:
 Both are used to limit the network traffic rate.
 Both use token buckets to measure the traffic rate.
 Both apply to DS boundary nodes.

Difference
The following table lists the differences between traffic policing and traffic shaping.

Traffic Policing | Traffic Shaping
Drops excess traffic or re-marks it with a lower priority. | Buffers excess traffic.
Consumes no additional memory resources and introduces no delay or jitter. | Consumes memory resources to buffer excess traffic and introduces delay and jitter.
Packet loss may result in packet retransmission. | Packet loss, and therefore retransmission, rarely occurs.
Supports traffic re-marking. | Does not support traffic re-marking.

4.4 QoS Implementations on Different Boards


4.4.1 Implementation Differences of Policing and Shaping
Implementation Differences of CAR
Item Board Difference description

Action based on the CAR token bucket metering result:
Type-B:
 If the metering result is green, the action for the packet can only be pass; it cannot be discard or remark.
 If the metering result is yellow, the action can be pass or remark, but not discard.
 If the metering result is red, the action can be pass, remark, or discard.
Command:
qos car { cir cir-value [ pir pir-value ] } [ cbs cbs-value pbs pbs-value ] [ green pass | yellow pass [ service-class class color color ] | red { discard | pass [ service-class class color color ] } ]
Type-C  If the metering result is green, the action of the packet can
be pass or remark, and cannot be discard.
 If the metering result is yellow or red, the action of the
packet can be pass or remark and discard.
Command:
qos car { cir cir-value [ pir pir-value] } [ cbs cbs-value
pbs pbs-value ] [ green pass [ service-class class color
color ] | yellow { discard | pass [ service-class class color
color ] } | red { discard | pass [ service-class class color
color ] } ]


Type-A  For upstream, no matter what the metering result is, the
action of the packet can be pass or remark and discard.
Command for upstream:
qos car { cir cir-value [ pir pir-value ] } [ cbs cbs-value
pbs pbs-value ] [ green { discard | pass [ service-class
class color color ] } | yellow { discard | pass [ service-
class class color color ] } | red { discard | pass [
service-class class color color ] } ]
 For downstream, no matter what the metering result is,
the action of the packet can be pass or discard, and cannot
be remark.
Command for downstream:
qos car { cir cir-value [ pir pir-value ] } [ cbs cbs-value
pbs pbs-value ] [ green { discard | pass } | yellow {
discard | pass } | red { discard | pass } ]
Type-D:
The action for a packet can be pass, remark, or discard, regardless of the metering result.
Command:
qos car { cir cir-value [ pir pir-value ] } [ cbs cbs-value pbs pbs-value ] [ green { discard | pass [ service-class class color color ] } | yellow { discard | pass [ service-class class color color ] } | red { discard | pass [ service-class class color color ] } ]
Limiting Layer 2 traffic:
Type-B: Supported only on inbound interfaces.
Type-C, Type-A, and Type-D: Supported on both inbound and outbound interfaces.
Configuring both CAR and HQoS qos-profile CAR on the same inbound interface:
Type-B: Not supported.
Type-C, Type-A, and Type-D: Supported.
Configuring both port-based CAR and flow-based CAR on the same interface:
Type-B and Type-C: Flow-based CAR takes effect only for traffic that matches the rule in the MF classification; port-based CAR takes effect for the unmatched traffic.
Type-A and Type-D: Flow-based CAR takes effect only for traffic that matches the rule in the MF classification; port-based CAR takes effect for all traffic.
CAR mode:
Type-B: Supports only color-blind mode.
Type-C, Type-A, and Type-D: Support both color-blind and color-aware modes.

Implementation Differences of Shaping


See Section 5.6 QoS Implementations on Different Boards.

4.5 Capabilities for Policing and Shaping


Table 1.1 Capabilities for Policing and Shaping
("Y" indicates "supported" and "N" indicates "Not supported")

Type | Sub-Type | CAR In | CAR Out | Shaping In | Shaping Out | Note
Interface-based | L3 main interface (includes GE, POS, IP-Trunk, Eth-Trunk) | Y | N | Y | Y | -
Interface-based | VE interface (includes Layer 2 and Layer 3 Virtual-Ethernet) | N | N | N | N | -
Interface-based | L2 interface (includes GE, Eth-Trunk) | Y | Y | Y | Y | L2 interfaces do not support the user-queue command
Interface-based | VLANIF interface | N | N | N | N | The member physical interfaces of a VLANIF support CAR and shaping; see the L2 interface row above
Interface-based | Dot1Q sub-interface (includes GE, Eth-Trunk) | Y | Y | Y | Y | -
Interface-based | Dot1Q termination sub-interface (includes GE, Eth-Trunk) | Y | Y | Y | Y | -
Interface-based | QinQ termination sub-interface (includes GE, Eth-Trunk) | Y | Y | Y | Y | -
Interface-based | QinQ/Dot1Q termination main-interface (includes GE, Eth-Trunk) | Y | Y | Y | Y | On Type-B boards, the main-interface configuration also takes effect on sub-interfaces
flow-based | L2 Ethernet frame information: Ethernet type/length field | Y | Y | Y | N | -
flow-based | L2 Ethernet frame information: outer 802.1p | Y | Y | Y | N | Type-A boards support only inbound; Type-B boards support only outbound; other board types support both inbound and outbound
flow-based | L2 Ethernet frame information: inner 802.1p | Y | Y | Y | N | Type-A and Type-B boards support neither inbound nor outbound; Type-C boards do not support outbound; other board types support both inbound and outbound
flow-based | L2 Ethernet frame information: destination MAC | Y | Y | Y | N | -
flow-based | L2 Ethernet frame information: source MAC | Y | Y | Y | N | -
flow-based | MPLS information: outer mpls-exp | Y | N | Y | N | Not supported on Type-B boards
flow-based | MPLS information: 2nd-layer mpls-exp | Y | N | Y | N | Not supported on Type-B boards
flow-based | MPLS information: 3rd-layer mpls-exp | Y | N | Y | N | Not supported on Type-B boards
flow-based | MPLS information: 4th-layer mpls-exp | Y | N | Y | N | Not supported on Type-A and Type-B boards
flow-based | MPLS information: outer mpls label | Y | N | Y | N | Type-C boards do not support inbound
flow-based | MPLS information: 2nd-layer mpls label | Y | N | Y | N | Type-C boards do not support inbound
flow-based | MPLS information: 3rd-layer mpls label | Y | N | Y | N | Type-C boards do not support inbound


flow-based | MPLS information: 4th-layer mpls label | Y | N | Y | N | -
flow-based | MPLS information: outer mpls ttl | Y | N | Y | N | -
flow-based | MPLS information: 2nd-layer mpls ttl | Y | N | Y | N | -
flow-based | MPLS information: 3rd-layer mpls ttl | Y | N | Y | N | -
flow-based | MPLS information: 4th-layer mpls ttl | N | N | N | N | -
flow-based | IP information: DSCP | Y | Y | Y | N | -
flow-based | IP information: ip-precedence | Y | Y | Y | N | -
flow-based | IP information: ToS | Y | Y | Y | N | -
flow-based | IP information: protocol number | Y | Y | Y | N | -
flow-based | IP information: all ipv4 | Y | Y | Y | N | -
flow-based | IP information: fragment-type | Y | Y | Y | N | -
flow-based | IP information: source IPv4 address | Y | Y | Y | N | -
flow-based | IP information: destination IPv4 address | Y | Y | Y | N | -
flow-based | IP information: TCP-flag | Y | Y | Y | N | -
flow-based | IP information: TCP/UDP source port | Y | Y | Y | N | -
flow-based | IP information: TCP/UDP destination port | Y | Y | Y | N | -
flow-based | IP information: ICMP type | Y | Y | Y | N | -
flow-based | IP information: time-range | Y | Y | Y | N | -


DSCP (6 left-most
IPv6
flow-based bits of Traffic Class Y Y Y N
information
(TC) field)
Precedence (3 left-
IPv6
flow-based most bits of Traffic Y Y Y N
information Type-B boards do not
Class (TC) field)
support outbound CAR.
ToS (left-most 4-7
IPv6
flow-based bits of Traffic Class Y Y Y N
information
(TC) field)
IPv6
flow-based Protocol number Y Y Y N
information
The Type-A and Type-B
boards support matching 96
bits length at most for the
source IPv6 address. For an
IPv6 address with the
IPv6 Source IPv6 length of 128 bits, the
flow-based Y Y Y N
information address boards support matching
the bits 0 to 31 and 64 to
127.
Type-B boards do not
support outbound CAR.
IPv6 Destination IPv6
flow-based Y Y Y N
information address
IPv6
flow-based fragment Y Y Y N
information
IPv6
flow-based icmpv6-type Y Y Y N
information Type-B boards do not
IPv6 support outbound CAR.
flow-based time-range Y Y Y N
information
IPv6
flow-based next-header Y Y Y N
information
IPv6
flow-based all ipv6 Y Y Y N
information
 Supported only in
V600R002 and the later
versions.
User-based UCL-based Y Y Y Y  Type-B only support on
inbound.
 Other boards support both
inbound and outbound.


User-based | PPPoE/IPoE user | Y | Y | Y | Y | Supported only in V600R002 and later versions
User-based | Family users | Y | Y | Y | Y | Supported only in V600R002 and later versions
User-based | Lease-line user | Y | Y | Y | Y | Supported only in V600R002 and later versions
MPLS L3VPN-based | VPN-instance-based | N | N | Y | Y | -
MPLS L3VPN-based | BGP-peer-based | N | N | Y | Y | -
VPLS-based | VSI-based | N | N | Y | Y | -
VPLS-based | Peer-based | N | N | Y | Y | -
VLL/PWE3-based | VC-based | N | N | Y | Y | -
Queue-based | CQ (port-queue) | N | N | N | Y | -
Queue-based | FQ (flow-queue) | N | N | Y | Y | -
Queue-based | Multiple FQs (share-shaping) | N | N | Y | Y | Supported only in V600R002 and later versions
Queue-based | SQ (user-queue) | N | N | Y | Y | -
Queue-based | GQ (user-group-queue) | N | N | Y | Y | -
Queue-based | VI (parent GQ) | N | N | N | Y | Supported only in V600R007 and later versions, and only when an eTM card is available

4.6 FAQ about Policing and Shaping


4.6.1 When is Traffic Shaped? When Is Traffic Policed?
Question
When is Traffic Shaped? When Is Traffic Policed?


Answer
CAR is used to police traffic, and the port shaping or port-queue shaping command is used to shape traffic.
In the upstream direction, CAR is performed before shaping: CAR is performed before a packet is put into a queue, and shaping is performed when the packet leaves the queue.
In the downstream direction, if there is no eTM subcard on the outbound board, CAR is performed after shaping: shaping is performed when a packet leaves the queue, and CAR is performed after the packet has left the queue.
In the downstream direction, if there is an eTM subcard on the outbound board, CAR is performed before shaping: CAR is performed before a packet is put into a queue, and shaping is performed when the packet leaves the queue.
For details, see Chapter 8 Overall QoS Process on Routers.

4.6.2 What Are the Differences Between Port-based Traffic Shaping and Queue-based Traffic Shaping?

Question
What are the differences between port-based traffic shaping and queue-based traffic shaping? Which one is recommended, and why?

Answer
Both port-based traffic shaping and queue-based traffic shaping use token buckets to measure traffic and determine whether a packet conforms to the specification, and both use queue buffers and may increase delay when traffic congestion occurs.
The differences between them are as follows:
 Port-based traffic shaping restricts the rate at which all packets (including burst packets) are sent from an outbound interface. It takes effect on the entire outbound interface, regardless of packet priorities.
 Queue-based traffic shaping restricts the rate at which packets are sent from a specified queue on an outbound interface.
To limit the bandwidth of an outbound interface, port-based traffic shaping is recommended; to limit the bandwidth of a queue (that is, the bandwidth of packets of a specified priority), queue-based traffic shaping is recommended.
For example, to limit the bandwidth of GE1/0/0, configure the port shaping command in the GE1/0/0 interface view; to limit the bandwidth of EF traffic on GE1/0/0, configure the port-queue ef shaping command in the GE1/0/0 interface view.

4.6.3 Do Port-based and Queue-based Traffic Shaping Affect Other Functions?

Question
Does port-based or queue-based traffic shaping affect other functions when configured?


Answer
Port-based and queue-based traffic shaping do not affect other functions, because the buffer is carefully planned and allocated.

4.6.4 How Long Is the Delay in the Worst Case When Traffic Shaping Is Used?
Question
Since traffic shaping may increase delay when traffic congestion occurs, how long is the delay in the worst case?

Answer
The delay is determined by the buffer size assigned to a queue and the output bandwidth allocated to the queue.
The formula is as follows:
Delay of a queue = (Buffer size for the queue x 8) / Traffic shaping rate for the queue
Therefore, when the buffer is full, the delay reaches its maximum.
For detailed information about the queue buffer and delay, see Section 5.4 Impact of Queue Buffer on Delay and Jitter.
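The formula can be applied directly; in the sketch below, the buffer size and shaping rate are illustrative values, not defaults of any board:

```python
def worst_case_delay(buffer_bytes, shaping_rate_bps):
    """Delay of a queue = (buffer size in bytes x 8) / shaping rate in bit/s."""
    return buffer_bytes * 8 / shaping_rate_bps

# A 4 MB queue buffer drained at 100 Mbit/s holds about 0.34 s of traffic,
# so that is the worst-case delay when the buffer is full.
print(worst_case_delay(4 * 1024 * 1024, 100_000_000))
```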

4.6.5 What Is the Default Size of A Queue Buffer?


Question
What is the default size of a queue buffer?

Answer
On the Huawei routers mentioned in this document, the default size of a queue buffer depends on the type of the Physical Interface Card (PIC) on the LPU.

4.6.6 What Is Default Behavior on Outbound Interface?


Question
What is the default behavior if port-based traffic shaping and queue-based traffic shaping are
not configured on the outbound interface?

Answer
If port-based traffic shaping, queue-based traffic shaping, and HQoS are not configured on the outbound interface:
 By default, there are eight CQ queues (BE, AF1, AF2, AF3, AF4, EF, CS6, and CS7) on the downstream boards and no FQ queues. Traffic is put into the CQ queues when traffic congestion occurs.
 By default, the size of a queue buffer depends on the type of the Physical Interface Card (PIC) on the LPU.
 For the default scheduling order of the eight CQ queues, see the chapter "Scheduling Order". By default, CS7, CS6, and EF are PQ queues, and AF4, AF3, AF2, AF1, and BE are WFQ queues with weights of 10:10:10:15:15.
 By default, the drop policy of all eight CQ queues is tail drop: when the buffer of a CQ queue is full, newly arriving packets are discarded.
 By default, queue-based traffic shaping is not applied to the eight CQ queues, and port-based traffic shaping is not applied to the outbound interface. The bandwidth of the packets sent from a queue is not limited and can reach the bandwidth of the outbound interface; likewise, the total traffic sent from the outbound interface is limited only by the interface bandwidth.
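Under congestion, the default weights divide whatever bandwidth remains after the PQ queues (CS7, CS6, EF) are served. A quick sketch of that proportional split (the 1 Gbit/s leftover figure is illustrative):

```python
# Default WFQ weights for the AF4, AF3, AF2, AF1, and BE queues.
weights = {"AF4": 10, "AF3": 10, "AF2": 10, "AF1": 15, "BE": 15}
leftover = 1000  # Mbit/s left after the PQ queues are served (illustrative)

total = sum(weights.values())  # 60
shares = {q: leftover * w / total for q, w in weights.items()}
print(shares)  # AF1 and BE each get 250 Mbit/s; AF4/AF3/AF2 about 166.7 each
```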


5 Congestion Management and Avoidance

About This Chapter


5.1 Traffic Congestion and Solutions
5.2 Queues and Congestion Management
5.3 Congestion Avoidance
5.4 Impact of Queue Buffer on Delay and Jitter
5.5 HQoS
5.6 QoS Implementations on Different Boards

5.1 Traffic Congestion and Solutions


Background
Traffic congestion occurs when multiple users compete for the same resources (such as bandwidth and buffers) on a shared network. For example, a user on a local area network (LAN) sends data to a user on another LAN through a wide area network (WAN). The WAN bandwidth is generally lower than the LAN bandwidth. Therefore, data cannot be transmitted on the WAN at the same rate as on the LAN, and traffic congestion occurs on the router connecting the LAN and the WAN, as shown in Figure 1.1.


Figure 1.1 Traffic congestion

Figure 1.2 shows the common traffic congestion causes.


 Traffic rate mismatch: Packets are transmitted to a device through a high-speed link and
are forwarded out through a low-speed link.
 Traffic aggregation: Packets are transmitted from multiple interfaces to a device and are
forwarded out through a single interface without enough bandwidth.

Figure 1.2 Link bandwidth restriction

Traffic congestion results not only from link bandwidth restriction but also from any resource shortage, such as a shortage of processing time, buffer, or memory resources. In addition, traffic that is not properly controlled and exceeds the capacity of available network resources also leads to congestion.

Location
As shown in Figure 1.1, traffic can be classified into the following based on the device
location and traffic forwarding direction:
 Upstream traffic on the user side
 Downstream traffic on the user side
 Upstream traffic on the network side
 Downstream traffic on the network side


Figure 1.1 Upstream and downstream traffic on the user and network sides

Generally, upstream traffic is not congested because it is not subject to traffic rate mismatch, traffic aggregation, or forwarding resource shortage. Downstream traffic, by contrast, is prone to traffic congestion.

Impacts
Traffic congestion has the following adverse impacts on network traffic:
 Traffic congestion intensifies delay and jitter.
 Excessively long delays lead to packet retransmission.
 Traffic congestion reduces the network throughput.
 Intensified traffic congestion consumes a large amount of network resources (especially
storage resources). Improper resource allocation may cause resources to be locked
and the system to go down.
Therefore, traffic congestion is a main cause of service deterioration. Because traffic
congestion prevails on packet-switched networks (PSNs), it must be prevented or effectively
controlled.

Solutions
A solution to traffic congestion is a must on every carrier network. A balance between limited
network resources and user requirements is required so that user requirements are satisfied
and network resources are fully used.


Congestion management and avoidance are commonly used to relieve traffic congestion.
 Congestion management provides means to manage and control traffic when traffic
congestion occurs. Packets sent from one interface are placed into multiple queues that
are marked with different priorities. The packets are sent based on the priorities.
Different queue scheduling mechanisms are designed for different situations and lead to
different results.
 Congestion avoidance is a flow control technique used to relieve network overload. By
monitoring the usage of network resources in queues or memory buffer, a device
automatically drops packets on the interface that shows a sign of traffic congestion.

5.2 Queues and Congestion Management


Congestion management defines a policy that determines the order in which packets are
forwarded and specifies drop principles for packets. The queuing technology is generally
used.
The queuing technology orders packets in the buffer. When the packet rate exceeds the
interface bandwidth or the bandwidth allocated to the queue that buffers packets, the packets
are buffered in queues and wait to be forwarded. The queue scheduling algorithm determines
the order in which packets are leaving a queue and the relationships between queues.

The Traffic Manager (TM) on the forwarding plane houses high-speed buffers for which all interfaces
compete. To prevent traffic interruptions caused by an interface failing to obtain buffer resources for a
long time, the system allocates a small buffer to each interface and ensures that each queue on each
interface can use the buffer.
The TM puts received packets into the buffer and forwards them promptly when traffic is not congested.
In this case, packets stay in the buffer only for microseconds, and the delay can be ignored.
When traffic is congested, packets accumulate in the buffer and wait to be forwarded, and the delay
increases greatly. The delay is determined by the buffer size for a queue and the output bandwidth
allocated to the queue. The formula is as follows:
Delay of a queue = Buffer size for the queue / Output bandwidth for the queue
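The formula can be checked with a quick sketch (the function name and figures are illustrative, not from the product):

```python
def queue_delay_ms(buffer_bytes, bandwidth_bps):
    # Delay of a queue = buffer size for the queue / output bandwidth for the queue
    return buffer_bytes * 8 * 1000 / bandwidth_bps

# A 1 MB buffer drained at 100 Mbit/s adds 80 ms of queuing delay.
print(queue_delay_ms(1_000_000, 100_000_000))  # 80.0
```

This also shows why congested queues dominate the delay budget: shrinking the buffer or raising the queue's output bandwidth reduces the worst-case delay proportionally.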

Each interface on a Huawei device provides eight downstream queues, which are called class
queues (CQs) or port queues. The eight queues are BE, AF1, AF2, AF3, AF4, EF, CS6, and
CS7.
The first in first out (FIFO) mechanism is used to transfer packets in a queue. Resources used
to forward packets are allocated based on the arrival order of packets.

Figure 1.1 Entering and leaving a queue


Scheduling Algorithms
The commonly used scheduling algorithms are as follows:
 First In First Out (FIFO)
 Strict Priority (SP)
 Round Robin (RR)
 Weighted Round Robin (WRR)
 Deficit Round Robin (DRR)
 Deficit Weighted Round Robin (DWRR)
 Weighted Fair Queuing (WFQ)

FIFO
FIFO does not need traffic classification. As shown in Figure 1.1, FIFO allows the packets
that come earlier to enter the queue first. On the exit of a queue, FIFO allows the packets to
leave the queue in the same order as that in which the packets enter the queue.

SP
SP schedules packets strictly based on queue priorities. Packets in queues with a low priority
can be scheduled only after all packets in queues with a high priority have been scheduled.
As shown in Figure 1.1, three queues with a high, medium, and low priority respectively are
configured with SP scheduling. The number indicates the order in which packets arrive.

Figure 1.1 SP scheduling

When packets leave the queues, the device forwards them in descending order of priority.
Packets in a higher-priority queue are forwarded preferentially. If packets arrive in a
higher-priority queue while a lower-priority queue is being scheduled, the packets in the
higher-priority queue are still scheduled preferentially. This implementation ensures that
packets in higher-priority queues are always forwarded first: as long as packets remain in a
higher-priority queue, no other queue is served.
The disadvantage of SP is that packets in lower-priority queues are not processed until all
higher-priority queues are empty. As a result, a congested higher-priority queue causes all
lower-priority queues to starve.
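The SP decision can be sketched in a few lines (a hypothetical helper, not product code): the scheduler always serves the first non-empty queue in priority order.

```python
from collections import deque

def sp_dequeue(queues):
    """Return the next packet under strict priority.
    queues: deques ordered from highest to lowest priority."""
    for q in queues:
        if q:                      # serve the first non-empty queue
            return q.popleft()
    return None                    # nothing to send

high, low = deque(["voip1", "voip2"]), deque(["web1"])
print(sp_dequeue([high, low]))     # "voip1": low is served only once high is empty
```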

RR
RR schedules multiple queues in ring mode. If the queue on which RR is performed is not
empty, the scheduler takes one packet away from the queue. If the queue is empty, the queue
is skipped, and the scheduler does not wait.


Figure 1.1 RR scheduling

WRR
Compared with RR, WRR can set queue weights. During WRR scheduling, the scheduling
chance a queue obtains is in direct proportion to its weight. RR scheduling functions the same
as WRR scheduling in which each queue has a weight of 1.
WRR configures a counter for each queue and initializes the counters based on the weights.
Each time a queue is scheduled, a packet is taken from the queue and transmitted, and the
counter decreases by 1. When the counter reaches 0, the device stops scheduling the queue
and starts to schedule other queues with nonzero counters. When the counters of all queues
reach 0, all counters are initialized again based on the weights, and a new round of WRR
scheduling starts. In a round of WRR scheduling, queues with larger weights are scheduled
more times.

Figure 1.1 WRR scheduling

For example, three queues with weights of 50%, 25%, and 25% are configured with WRR
scheduling.
The counters are initialized first: Count[1] = 2, Count[2] = 1, and Count[3] = 1.
 First round of WRR scheduling:
Packet 1 is taken from queue 1, with Count[1] = 1. Packet 5 is taken from queue 2, with
Count[2] = 0. Packet 8 is taken from queue 3, with Count[3] = 0.
 Second round of WRR scheduling:
Packet 2 is taken from queue 1, with Count[1] = 0. Queues 2 and 3 do not participate in
this round of WRR scheduling since Count[2] = 0 and Count[3] = 0.
Then, Count[1] = 0; Count[2] = 0; Count[3] = 0. The counters are initialized again:
Count[1] = 2; Count[2] = 1; Count[3] = 1.
 Third round of WRR scheduling:
Packet 3 is taken from queue 1, with Count[1] = 1. Packet 6 is taken from queue 2, with
Count[2] = 0. Packet 9 is taken from queue 3, with Count[3] = 0.


 Fourth round of WRR scheduling:


Packet 4 is taken from queue 1, with Count[1] = 0. Queues 2 and 3 do not participate in
this round of WRR scheduling since Count[2] = 0 and Count[3] = 0.
Then, Count[1] = 0; Count[2] = 0; Count[3] = 0. The counters are initialized again:
Count[1] = 2; Count[2] = 1; Count[3] = 1.
Statistically, the number of times packets are scheduled from each queue is in direct
proportion to the weight of the queue: the higher the weight, the more often the queue is
scheduled. If the interface bandwidth is 100 Mbit/s, the queue with the lowest weight obtains
a minimum bandwidth of 25 Mbit/s, preventing packets in lower-priority queues from being
starved as they can be under SP scheduling.
During the WRR scheduling, the empty queue is directly skipped. Therefore, when the rate at
which packets arrive at a queue is low, the remaining bandwidth of the queue is used by other
queues based on a certain proportion.
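One counter cycle of this mechanism (the first two rounds in the example above) can be simulated as follows; the function name is illustrative, not product code:

```python
from collections import deque

def wrr_cycle(queues, weights):
    """Run one WRR counter cycle: counters start at the weights, and each
    visit to a queue with a nonzero counter takes one packet."""
    count = list(weights)                # initialize counters from the weights
    out = []
    while any(c > 0 and q for c, q in zip(count, queues)):
        for i, q in enumerate(queues):
            if count[i] > 0 and q:
                out.append(q.popleft())  # take one packet, decrease the counter
                count[i] -= 1
    return out

q1, q2, q3 = deque([1, 2, 3, 4]), deque([5, 6, 7]), deque([8, 9, 10])
print(wrr_cycle([q1, q2, q3], [2, 1, 1]))  # [1, 5, 8, 2]
```

The output reproduces the example's first and second rounds: packets 1, 5, and 8 leave first, then packet 2 from the higher-weight queue; a second call yields packets 3, 6, 9, and 4.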
WRR scheduling has two disadvantages:
 WRR schedules packets based on the number of packets. Therefore, each queue has no
fixed bandwidth. With the same scheduling chance, a long packet obtains higher
bandwidth than a short packet. Users are generally sensitive to the bandwidth. When the
average lengths of the packets in the queues are the same or known, users can obtain
expected bandwidth by configuring WRR weights of the queues; however, when the
average packet length of the queues changes, users cannot obtain expected bandwidth by
configuring WRR weights of the queues.
 Services that require a short delay cannot be scheduled in time.

DRR
The scheduling principle of DRR is similar to that of RR.
RR schedules packets based on the number of packets, whereas DRR schedules packets based
on the packet length.
DRR maintains a deficit counter for each queue, which records the number of bytes by which
the queue overdrew its quota in the previous round. The counters are initialized to the
maximum number of bytes (generally the MTU of the interface) allowed in a round of DRR
scheduling. Each time a queue is scheduled, a packet is taken from the queue, and the counter
decreases by the packet length in bytes. If a packet is longer than the queue's remaining
scheduling capacity, DRR allows the deficit counter to become negative, which ensures that
long packets can be scheduled; in the next round, however, the queue is not scheduled. When
the counter reaches 0 or becomes negative, the device stops scheduling the queue and starts to
schedule other queues with positive counters. When the counters of all queues are 0 or
negative, all counters are initialized again, and a new round of DRR scheduling starts.
For example, the MTU of an interface is 150 bytes. Two queues, Q1 and Q2, use DRR
scheduling. Multiple 200-byte packets are buffered in Q1, and multiple 100-byte packets are
buffered in Q2. Figure 1.1 shows how DRR schedules packets in these two queues.


Figure 1.1 DRR scheduling

As shown in Figure 1.1, after six rounds of DRR scheduling, three 200-byte packets in Q1
and six 100-byte packets in Q2 are scheduled. The output bandwidth ratio of Q1 to Q2 is
actually 1:1.
Unlike SP scheduling, DRR scheduling prevents packets in low-priority queues from being
starved. However, DRR scheduling cannot set queue weights and cannot schedule services
that require a low delay (such as voice services) in time.
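The example above can be replayed with a short simulation (an illustrative sketch of the behavior described in the text, not product code; packet values are byte lengths, and queues are assumed to stay backlogged):

```python
from collections import deque

def drr_schedule(queues, quantum, rounds):
    """Each queue whose deficit counter is positive sends one packet per
    round (the counter may go negative); when no counter is positive,
    all counters are reset to the quantum (generally the MTU)."""
    deficit = [quantum] * len(queues)
    sent = [0] * len(queues)                # packets sent per queue
    for _ in range(rounds):
        for i, q in enumerate(queues):
            if q and deficit[i] > 0:
                deficit[i] -= q.popleft()   # decrease by the packet length
                sent[i] += 1
        if all(d <= 0 for d in deficit):    # re-initialize for the next cycle
            deficit = [quantum] * len(queues)
    return sent

q1, q2 = deque([200] * 10), deque([100] * 10)
print(drr_schedule([q1, q2], 150, 6))  # [3, 6]: 600 bytes each, a 1:1 ratio
```

After six rounds, three 200-byte packets leave Q1 and six 100-byte packets leave Q2, matching the 1:1 output bandwidth ratio in the text.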

DWRR
Compared with DRR, Deficit Weighted Round Robin (DWRR) can set queue weights. DRR
scheduling functions the same as DWRR scheduling in which each queue has a weight of 1.
DWRR maintains a deficit counter for each queue, which records the number of bytes by
which the queue overdrew its quota in the previous round. The counters are initialized to
Weight x MTU. Each time a queue is scheduled, a packet is taken from the queue, and the
counter decreases by the packet length in bytes. When the counter reaches 0 or becomes
negative, the device stops scheduling the queue and starts to schedule other queues with
positive counters. When the counters of all queues are 0 or negative, each counter is increased
by Weight x MTU, and a new round of DWRR scheduling starts.
For example, the MTU of an interface is 150 bytes. Two queues, Q1 and Q2, use DWRR
scheduling. Multiple 200-byte packets are buffered in Q1, and multiple 100-byte packets are
buffered in Q2. The weight ratio of Q1 to Q2 is 2:1. Figure 1.1 shows how DWRR schedules
packets.


Figure 1.1 DWRR scheduling

 First round of DWRR scheduling:


The counters are initialized as follows: Deficit[1] = weight1 x MTU = 300 and Deficit[2]
= weight2 x MTU=150. A 200-byte packet is taken from Q1, and a 100-byte packet is
taken from Q2. Then, Deficit[1] = 100 and Deficit[2] = 50.
 Second round of DWRR scheduling:
A 200-byte packet is taken from Q1, and a 100-byte packet is taken from Q2. Then,
Deficit[1] = -100 and Deficit[2] = -50.
 Third round of DWRR scheduling:
The counters of both queues are negatives. Therefore, Deficit[1] = Deficit[1] + weight1 x
MTU = -100 + 2 x 150 = 200 and Deficit[2] = Deficit[2] + weight2 x MTU = -50 + 1 x
150 = 100.
A 200-byte packet is taken from Q1, and a 100-byte packet is taken from Q2. Then,
Deficit[1] = 0 and Deficit[2] = 0.
As shown in Figure 1.1, after three rounds of DWRR scheduling, three 200-byte packets in
Q1 and three 100-byte packets in Q2 are scheduled. The output bandwidth ratio of Q1 to Q2
is actually 2:1, which conforms to the weight ratio.
DWRR scheduling prevents packets in low-priority queues from being starved out and allows
bandwidths to be allocated to packets based on the weight ratio when the lengths of packets in
different queues vary or change greatly.
However, DWRR scheduling cannot schedule services that require a low delay (such as voice
services) in time.
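The three DWRR rounds above can be simulated the same way; as in the worked example, the replenish step adds Weight x MTU to every counter once no counter is positive (an illustrative sketch, assuming backlogged queues):

```python
from collections import deque

def dwrr_schedule(queues, weights, mtu, rounds):
    """Deficits start at weight x MTU; a queue with a positive deficit sends
    one packet per round; when no deficit is positive, each deficit is
    increased by weight x MTU."""
    deficit = [w * mtu for w in weights]
    sent = [0] * len(queues)                # packets sent per queue
    for _ in range(rounds):
        if all(d <= 0 for d in deficit):    # replenish, carrying the deficit
            deficit = [d + w * mtu for d, w in zip(deficit, weights)]
        for i, q in enumerate(queues):
            if q and deficit[i] > 0:
                deficit[i] -= q.popleft()   # decrease by the packet length
                sent[i] += 1
    return sent

q1, q2 = deque([200] * 10), deque([100] * 10)
print(dwrr_schedule([q1, q2], [2, 1], 150, 3))  # [3, 3]: 600 vs 300 bytes, 2:1
```

Three packets leave each queue in three rounds, but Q1's packets are twice as long, so the byte-level output ratio is 2:1, matching the weight ratio.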

WFQ
WFQ allocates bandwidth to flows based on their weights. In addition, to allocate bandwidth
fairly among flows, WFQ schedules packets bit by bit. Figure 1.1 shows how bit-by-bit
scheduling works.


Figure 1.1 Bit-by-bit scheduling

The bit-by-bit scheduling mode shown in Figure 1.1 allows the device to allocate bandwidths
to flows based on the weight. This prevents long packets from preempting bandwidths of
short packets and reduces the delay and jitter when both short and long packets wait to be
forwarded.
The bit-by-bit scheduling mode, however, is an ideal model. A Huawei router performs WFQ
scheduling at a certain granularity, such as 256 bytes or 1 KB. Different boards support
different granularities.
Advantages of WFQ:
 Different queues obtain the scheduling chances fairly, balancing delays of flows.
 Short and long packets obtain the scheduling chances fairly. If both short and long
packets wait in queues to be forwarded, short packets are scheduled preferentially,
reducing jitters of flows.
 The lower the weight of a flow is, the lower the bandwidth the flow obtains.

Port Queue Scheduling


You can configure SP scheduling or weight-based scheduling for the eight queues on each
interface of a Huawei router. Based on the scheduling algorithm, the eight queues are
classified into three groups: priority queuing (PQ) queues, WFQ queues, and low priority
queuing (LPQ) queues.
 PQ queue
SP scheduling applies to PQ queues. Packets in high-priority queues are scheduled
preferentially. Therefore, services that are sensitive to delays (such as VoIP) can be
configured with high priorities.
In PQ queues, however, if the bandwidth of high-priority packets is not restricted, low-
priority packets cannot obtain bandwidth and are starved out.
Configuring eight queues on an interface to be PQ queues is allowed but not
recommended. Generally, services that are sensitive to delays are put into PQ queues.
 WFQ queue
Weight-based scheduling, such as WRR, DWRR, and WFQ, applies to WFQ queues. The
P40-E subcard uses DWRR or DRR, and other boards use WFQ or WRR.
 LPQ queue
SP scheduling applies to LPQ queues. The difference is that when congestion occurs, the
PQ queue can preempt the bandwidth of the WFQ queue whereas the LPQ queue cannot.
After packets in the PQ and WFQ queues are all scheduled, the remaining bandwidth can
be assigned to packets in the LPQ queue.
In the actual application, best effort (BE) flows can be put into the LPQ queue. When the
network is overloaded, BE flows can be limited so that other services can be processed
preferentially.


LPQ is implemented on high-speed interfaces (such as Ethernet interfaces) but is not
supported on low-speed interfaces (such as serial or MP-group interfaces).
PQ, WFQ, and LPQ can be used separately or jointly for the eight queues on an interface.

Scheduling Order
SP scheduling is implemented between PQ, WFQ, and LPQ queues. PQ queues are scheduled
preferentially, and then WFQ queues and LPQ queues are scheduled in sequence, as shown in
Figure 1.1. Figure 1.2 shows the detailed process.

Figure 1.1 Port queue scheduling order

Figure 1.2 Port queue scheduling process

 Packets in PQ queues are preferentially scheduled, and packets in WFQ queues are
scheduled only when no packets are buffered in PQ queues.


 When all PQ queues are empty, WFQ queues start to be scheduled. If packets are added
to PQ queues afterward, packets in PQ queues are still scheduled preferentially.
 Packets in LPQ queues start to be scheduled only after all PQ and WFQ queues are
empty.
Bandwidths are preferentially allocated to PQ queues to guarantee the peak information rate
(PIR) of packets in PQ queues. The remaining bandwidth is allocated to WFQ queues based
on the weight. If the bandwidth is not fully used, the remaining bandwidth is allocated to
WFQ queues whose PIRs are higher than the obtained bandwidth until the PIRs of all WFQ
queues are guaranteed. If any bandwidth is remaining at this time, the bandwidth resources
are allocated to LPQ queues.

PIR here refers to the traffic shaping rate configured in the port-queue command.

Bandwidth Allocation Example 1


In this example, the traffic shaping rate is set to 100 Mbit/s on an interface (by default, the
traffic shaping rate is the interface bandwidth). The input bandwidth and PIR of each service
are configured as follows.

Service Class   Queue                   Input Bandwidth (bit/s)   PIR (bit/s)
CS7             PQ                      65M                       55M
CS6             PQ                      30M                       30M
EF              WFQ with the weight 5   10M                       5M
AF4             WFQ with the weight 4   10M                       10M
AF3             WFQ with the weight 3   10M                       15M
AF2             WFQ with the weight 2   20M                       25M
AF1             WFQ with the weight 1   20M                       20M
BE              LPQ                     100M                      Not configured

The bandwidth is allocated as follows:


 PQ scheduling is performed first. The 100 Mbit/s bandwidth is allocated to the CS7
queue first. The output bandwidth of CS7 equals the minimum of the traffic shaping
rate (100 Mbit/s), the input bandwidth of CS7 (65 Mbit/s), and the PIR of CS7 (55 Mbit/s),
that is, 55 Mbit/s. The remaining 45 Mbit/s is allocated to the CS6 queue. The
output bandwidth of CS6 equals the minimum of the remaining bandwidth (45 Mbit/s),
the input bandwidth of CS6 (30 Mbit/s), and the PIR of CS6 (30 Mbit/s), that is, 30 Mbit/s.
After PQ scheduling, the remaining bandwidth is 15 Mbit/s (100 Mbit/s - 55 Mbit/s - 30
Mbit/s).
 Then the first round of WFQ scheduling starts. The remaining bandwidth after PQ
scheduling is allocated to WFQ queues. The bandwidth allocated to a WFQ queue is


calculated using the following formula: Bandwidth allocated to a WFQ queue = Remaining
bandwidth x Weight of this queue / Sum of weights = 15 Mbit/s x Weight / 15.
− Bandwidth allocated to the EF queue = 15 Mbit/s x 5 / 15 = 5 Mbit/s = PIR. The
bandwidth allocated to the EF queue is fully used.
− Bandwidth allocated to the AF4 queue = 15 Mbit/s x 4 / 15 = 4 Mbit/s < PIR. The
bandwidth allocated to the AF4 queue is exhausted.
− Bandwidth allocated to the AF3 queue = 15 Mbit/s x 3 / 15 = 3 Mbit/s < PIR. The
bandwidth allocated to the AF3 queue is exhausted.
− Bandwidth allocated to the AF2 queue = 15 Mbit/s x 2 / 15 = 2 Mbit/s < PIR. The
bandwidth allocated to the AF2 queue is exhausted.
− Bandwidth allocated to the AF1 queue = 15 Mbit/s x 1 / 15 = 1 Mbit/s < PIR. The
bandwidth allocated to the AF1 queue is exhausted.
 The bandwidth is exhausted, and BE packets are not scheduled. The output BE
bandwidth is 0.
The output bandwidth of each queue is as follows:

Service Class   Queue                   Input Bandwidth (bit/s)   PIR (bit/s)      Output Bandwidth (bit/s)
CS7             PQ                      65M                       55M              55M
CS6             PQ                      30M                       30M              30M
EF              WFQ with the weight 5   10M                       5M               5M
AF4             WFQ with the weight 4   10M                       10M              4M
AF3             WFQ with the weight 3   10M                       15M              3M
AF2             WFQ with the weight 2   20M                       25M              2M
AF1             WFQ with the weight 1   20M                       20M              1M
BE              LPQ                     100M                      Not configured   0

Bandwidth Allocation Example 2


In this example, the traffic shaping rate is set to 100 Mbit/s on an interface. The input
bandwidth and PIR of each service are configured as follows.

Service Class   Queue                   Input Bandwidth (bit/s)   PIR (bit/s)
CS7             PQ                      15M                       25M
CS6             PQ                      30M                       10M
EF              WFQ with the weight 5   90M                       100M
AF4             WFQ with the weight 4   10M                       10M
AF3             WFQ with the weight 3   10M                       15M
AF2             WFQ with the weight 2   20M                       25M
AF1             WFQ with the weight 1   20M                       20M
BE              LPQ                     100M                      Not configured

The bandwidth is allocated as follows:


 Packets in the PQ queue are scheduled preferentially to ensure the PIR of the PQ queue.
After PQ scheduling, the remaining bandwidth is 75 Mbit/s (100 Mbit/s - 15 Mbit/s - 10
Mbit/s).
 Then the first round of WFQ scheduling starts. The remaining bandwidth after PQ
scheduling is allocated to WFQ queues. The bandwidth allocated to a WFQ queue is
calculated using the following formula: Bandwidth allocated to a WFQ queue = Remaining
bandwidth x Weight of this queue / Sum of weights = 75 Mbit/s x Weight / 15.
− Bandwidth allocated to the EF queue = 75 Mbit/s x 5 / 15 = 25 Mbit/s < PIR. The
bandwidth allocated to the EF queue is fully used.
− Bandwidth allocated to the AF4 queue = 75 Mbit/s x 4 / 15 = 20 Mbit/s > PIR. The
AF4 queue actually obtains the bandwidth 10 Mbit/s (PIR). The remaining
bandwidth is 10 Mbit/s.
− Bandwidth allocated to the AF3 queue = 75 Mbit/s x 3 / 15 = 15 Mbit/s = PIR. The
AF3 queue actually obtains the bandwidth 10 Mbit/s (PIR). The remaining
bandwidth is 5 Mbit/s.
− Bandwidth allocated to the AF2 queue = 75 Mbit/s x 2 / 15 = 10 Mbit/s < PIR. The
bandwidth allocated to the AF2 queue is exhausted.
− Bandwidth allocated to the AF1 queue = 75 Mbit/s x 1 / 15 = 5 Mbit/s < PIR. The
bandwidth allocated to the AF1 queue is exhausted.
 The remaining bandwidth is 15 Mbit/s, which is allocated, based on the weights, to the
queues whose PIRs are higher than the bandwidth they have obtained.
− Bandwidth allocated to the EF queue = 15 Mbit/s x 5 / 8 = 9.375 Mbit/s. The sum of
bandwidths allocated to the EF queue is 34.375 Mbit/s, which is also lower than the
PIR. Therefore, the bandwidth allocated to the EF queue is exhausted.
− Bandwidth allocated to the AF2 queue = 15 Mbit/s x 2 / 8 = 3.75 Mbit/s. The sum
of bandwidths allocated to the AF2 queue is 13.75 Mbit/s, which is also lower than
the PIR. Therefore, the bandwidth allocated to the AF2 queue is exhausted.
− Bandwidth allocated to the AF1 queue = 15 Mbit/s x 1 / 8 = 1.875 Mbit/s. The sum
of bandwidths allocated to the AF1 queue is 6.875 Mbit/s, which is also lower than
the PIR. Therefore, the bandwidth allocated to the AF1 queue is exhausted.
 The bandwidth is exhausted, and the BE queue is not scheduled. The output BE
bandwidth is 0.


The output bandwidth of each queue is as follows:

Service Class   Queue                   Input Bandwidth (bit/s)   PIR (bit/s)      Output Bandwidth (bit/s)
CS7             PQ                      15M                       25M              15M
CS6             PQ                      30M                       10M              10M
EF              WFQ with the weight 5   90M                       100M             34.375M
AF4             WFQ with the weight 4   10M                       10M              10M
AF3             WFQ with the weight 3   10M                       15M              10M
AF2             WFQ with the weight 2   20M                       25M              13.75M
AF1             WFQ with the weight 1   20M                       20M              6.875M
BE              LPQ                     100M                      Not configured   0

Bandwidth Allocation Example 3


In this example, the traffic shaping rate is set to 100 Mbit/s on an interface. The input
bandwidth and PIR of each service are configured as follows.

Service Class   Queue                   Input Bandwidth (bit/s)   PIR (bit/s)
CS7             PQ                      15M                       25M
CS6             PQ                      30M                       10M
EF              WFQ with the weight 5   90M                       10M
AF4             WFQ with the weight 4   10M                       10M
AF3             WFQ with the weight 3   10M                       15M
AF2             WFQ with the weight 2   20M                       10M
AF1             WFQ with the weight 1   20M                       10M
BE              LPQ                     100M                      Not configured

The bandwidth is allocated as follows:


 Packets in the PQ queue are scheduled preferentially to ensure the PIR of the PQ queue.
After PQ scheduling, the remaining bandwidth is 75 Mbit/s (100 Mbit/s - 15 Mbit/s - 10
Mbit/s).
 Then the first round of WFQ scheduling starts. The remaining bandwidth after PQ
scheduling is allocated to WFQ queues. The bandwidth allocated to a WFQ queue is
calculated using the following formula: Bandwidth allocated to a WFQ queue = Remaining
bandwidth x Weight of this queue / Sum of weights = 75 Mbit/s x Weight / 15.
− Bandwidth allocated to the EF queue = 75 Mbit/s x 5 / 15 = 25 Mbit/s > PIR. The
EF queue actually obtains the bandwidth 10 Mbit/s (PIR). The remaining bandwidth
is 15 Mbit/s.
− Bandwidth allocated to the AF4 queue = 75 Mbit/s x 4 / 15 = 20 Mbit/s > PIR. The
AF4 queue actually obtains the bandwidth 10 Mbit/s (PIR). The remaining
bandwidth is 10 Mbit/s.
− Bandwidth allocated to the AF3 queue = 75 Mbit/s x 3 / 15 = 15 Mbit/s = PIR. The
AF3 queue actually obtains the bandwidth 10 Mbit/s. The remaining bandwidth is 5
Mbit/s.
− Bandwidth allocated to the AF2 queue = 75 Mbit/s x 2 / 15 = 10 Mbit/s = PIR. The
bandwidth allocated to the AF2 queue is exhausted.
− Bandwidth allocated to the AF1 queue = 75 Mbit/s x 1 / 15 = 5 Mbit/s < PIR. The
bandwidth allocated to the AF1 queue is exhausted.
 The remaining bandwidth is 30 Mbit/s and is offered, based on the weights, to queues
whose PIRs are higher than the bandwidth they have obtained. Only the AF1 queue
qualifies, and it needs only 5 Mbit/s more to reach its PIR, so an additional 5 Mbit/s is
allocated to the AF1 queue.
 The remaining bandwidth is 25 Mbit/s, which is allocated to the BE queue.
The output bandwidth of each queue is as follows:

Service Class   Queue                   Input Bandwidth (bit/s)   PIR (bit/s)      Output Bandwidth (bit/s)
CS7             PQ                      15M                       25M              15M
CS6             PQ                      30M                       10M              10M
EF              WFQ with the weight 5   90M                       10M              10M
AF4             WFQ with the weight 4   10M                       10M              10M
AF3             WFQ with the weight 3   10M                       15M              10M
AF2             WFQ with the weight 2   20M                       10M              10M
AF1             WFQ with the weight 1   20M                       10M              10M
BE              LPQ                     100M                      Not configured   25M
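The allocation procedure used in these three examples (PQ first, then weight-proportional WFQ shares, then redistribution to WFQ queues still below their PIR, then LPQ) can be sketched in Python. The function name and the flow-tuple layout are illustrative, not a product API; an unconfigured PIR is passed as float("inf"):

```python
def allocate(shaping_rate, flows):
    """flows: (name, group, weight, input_bw, pir) with group in PQ/WFQ/LPQ.
    Returns the output bandwidth of each queue."""
    out, rest = {}, shaping_rate
    for name, group, w, inp, pir in flows:          # 1. PQ queues first, in order
        if group == "PQ":
            out[name] = min(rest, inp, pir)
            rest -= out[name]
    wfq = [f for f in flows if f[1] == "WFQ"]
    total_w = sum(f[2] for f in wfq)
    spare = 0.0
    for name, _, w, inp, pir in wfq:                # 2. Weight-based WFQ shares
        share = rest * w / total_w
        out[name] = min(share, inp, pir)
        spare += share - out[name]
    while spare > 1e-9:                             # 3. Re-offer spare bandwidth
        under = [f for f in wfq if out[f[0]] < min(f[3], f[4])]
        if not under:
            break
        tw = sum(f[2] for f in under)
        nxt = 0.0
        for name, _, w, inp, pir in under:
            offered = out[name] + spare * w / tw
            out[name] = min(offered, inp, pir)
            nxt += offered - out[name]
        spare = nxt
    for name, group, w, inp, pir in flows:          # 4. Leftover goes to LPQ
        if group == "LPQ":
            out[name] = min(spare, inp)
    return out
```

Called with Example 3's figures, it reproduces the table above: the EF queue is capped at its 10 Mbit/s PIR, the freed bandwidth tops AF1 up to its PIR, and the final 25 Mbit/s reaches the BE queue.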


5.3 Congestion Avoidance


Congestion avoidance is a flow control technique used to relieve network overload. By
monitoring the usage of network resources such as queues and memory buffers, a device
automatically drops packets when signs of traffic congestion appear.
Huawei routers support two drop policies:
 Tail drop
 Weighted Random Early Detection (WRED)

Tail Drop
Tail drop is the traditional congestion avoidance mechanism: when congestion occurs, all
newly arrived packets are dropped.
Tail drop causes TCP global synchronization. When TCP detects packet loss, it enters the
slow-start state and probes the network by sending packets at a lower rate, which increases
until packet loss is detected again. With tail drop, all newly arrived packets are dropped when
congestion occurs, causing all TCP sessions to enter the slow-start state simultaneously and
packet transmission to slow down. All TCP sessions then restart transmission at roughly the
same time, congestion occurs again, another burst of packets is dropped, and all TCP sessions
enter the slow-start state again. This cycle repeats constantly, severely reducing network
resource usage.

WRED
WRED is a congestion avoidance mechanism used to drop packets before the queue
overflows. WRED resolves TCP global synchronization by randomly dropping packets to
prevent a burst of TCP retransmission. If a TCP connection reduces the transmission rate
when packet loss occurs, other TCP connections still keep a high rate for sending packets. The
WRED mechanism improves the bandwidth resource usage.
WRED sets lower and upper thresholds for each queue and defines the following rules:
 When the length of a queue is lower than the lower threshold, no packet is dropped.
 When the length of a queue exceeds the upper threshold, all newly arrived packets are
tail dropped.
 When the length of a queue ranges from the lower threshold to the upper threshold,
newly arrived packets are randomly dropped, but a maximum drop probability is set. The
maximum drop probability refers to the drop probability when the queue length reaches
the upper threshold. Figure 1.1 is a drop probability graph. The longer the queue, the
larger the drop probability.
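The three rules translate directly into a per-packet drop decision. The following is an illustrative sketch (names and thresholds are hypothetical, not product parameters):

```python
import random

def wred_drop(queue_len, low, high, max_p):
    """Decide whether to drop an arriving packet under WRED."""
    if queue_len < low:
        return False                 # below the lower threshold: never drop
    if queue_len >= high:
        return True                  # at or above the upper threshold: tail drop
    # In between, the drop probability rises linearly up to max_p
    # at the upper threshold.
    p = max_p * (queue_len - low) / (high - low)
    return random.random() < p
```

With low=20, high=40, and max_p=0.5, a queue length of 30 gives a drop probability of 0.25, matching the linear ramp in Figure 1.1.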


Figure 1.1 WRED drop probability

As shown in Figure 1.2, the maximum drop probability is a%, the length of the current queue
is m, and the drop probability at this length is x%. For each arriving packet, WRED generates
a random number i (0 < i < 100) and compares i% with the current drop probability. If i%
falls in the range 0 to x%, the newly arrived packet is dropped; if i% falls in the range x% to
100%, the packet is not dropped.

Figure 1.2 WRED implementation

As shown in Figure 1.3, the drop probability of the queue with the length m (lower threshold
< m < upper threshold) is x%. If the random value ranges from 0 to x, the newly arrived
packet is dropped. The drop probability of the queue with the length n (m < n < upper
threshold) is y%. If the random value ranges from 0 to y, the newly arrived packet is dropped.
The range of 0 to y is wider than the range of 0 to x. There is a higher probability that the
random value falls into the range of 0 to y. Therefore, the longer the queue, the higher the
drop probability.
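The drop decision just described can be sketched in a few lines. This is an illustrative model, not Huawei's implementation; the linear rise of the drop probability between the two thresholds is an assumption based on the curve in Figure 1.1.

```python
import random

def wred_drop_probability(queue_len, low, high, max_prob):
    """Return the drop probability (0.0-1.0) for the current queue length."""
    if queue_len < low:          # below the lower threshold: never drop
        return 0.0
    if queue_len >= high:        # at or above the upper threshold: tail drop
        return 1.0
    # Between the thresholds, the probability grows linearly up to max_prob.
    return max_prob * (queue_len - low) / (high - low)

def wred_should_drop(queue_len, low, high, max_prob):
    """Compare a random value with the current drop probability (Figure 1.2)."""
    return random.random() < wred_drop_probability(queue_len, low, high, max_prob)
```

For example, with a lower threshold of 40 packets, an upper threshold of 80, and a maximum drop probability of 20%, a queue length of 60 gives a 10% drop probability.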


Figure 1.3 Drop probability change with the queue length

As shown in Figure 1.4, the maximum drop probabilities of two queues Q1 and Q2 are a%
and b%, respectively. When the length of both Q1 and Q2 is m, the drop probabilities of Q1
and Q2 are x% and y%, respectively. If the random value ranges from 0 to x, the newly
arrived packet in Q1 is dropped; if the random value ranges from 0 to y, the newly arrived
packet in Q2 is dropped. The range of 0 to y is wider than the range of 0 to x, so there is a
higher probability that the random value falls into the range of 0 to y. Therefore, when the
queue lengths are the same, the higher the maximum drop probability, the higher the drop
probability.

Figure 1.4 Drop probability change with the maximum drop probability

You can configure WRED for each flow queue (FQ) and class queue (CQ) on Huawei routers.
WRED allows the configuration of lower and upper thresholds and drop probability for each
drop precedence. Therefore, WRED can allocate different drop probabilities to service flows
or even packets with different drop precedences in a service flow.

Drop Policy Selection


Tail drop applies to PQ queues for services that have high requirements for real-time
performance. Tail drop drops packets only when the queue overflows. In addition, PQ queues


preempt the bandwidth of other queues. Therefore, when traffic congestion occurs, the
highest bandwidth can be provided for real-time services.
WRED applies to WFQ queues. WFQ queues share bandwidth based on the weight and are
prone to traffic congestion. Using WRED for WFQ queues effectively resolves TCP global
synchronization when traffic congestion occurs.

WRED Lower and Upper Thresholds and Drop Probability Configuration


In actual applications, it is recommended that the WRED lower threshold start at 50% of the
queue length and vary with the drop precedence. As shown in Figure 1.1, the lowest drop
probability and the highest lower and upper thresholds are recommended for green packets; a
medium drop probability and medium thresholds are recommended for yellow packets; the
highest drop probability and the lowest thresholds are recommended for red packets. When
traffic congestion intensifies, red packets are dropped first because of their low thresholds
and high drop probability. As the queue length increases, green packets are dropped last.
When the queue length reaches the upper threshold of a drop precedence, packets of that
precedence start to be tail dropped.

Figure 1.1 WRED drop probability for three drop precedences
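The recommendation above amounts to one WRED profile per drop precedence. The sketch below uses illustrative threshold and probability values (expressed as percentages of the maximum queue length), not mandated defaults:

```python
# Illustrative WRED profiles per drop precedence. Green gets the highest
# thresholds and the lowest maximum drop probability; red the opposite.
WRED_PROFILES = {
    "green":  {"low": 80, "high": 100, "max_prob": 0.10},
    "yellow": {"low": 65, "high": 85,  "max_prob": 0.20},
    "red":    {"low": 50, "high": 70,  "max_prob": 0.30},
}

def drop_probability(color, fill_percent):
    """Linear WRED drop probability for a packet of the given drop precedence."""
    p = WRED_PROFILES[color]
    if fill_percent < p["low"]:      # below the lower threshold: no drop
        return 0.0
    if fill_percent >= p["high"]:    # at or above the upper threshold: tail drop
        return 1.0
    return p["max_prob"] * (fill_percent - p["low"]) / (p["high"] - p["low"])
```

With these example values, at a 70% queue fill red packets are already tail dropped, yellow packets are dropped with probability 0.05, and green packets are not dropped at all.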

Maximum Queue Length Configuration


The maximum queue length can be set using the queue-depth command on Huawei routers.
As 5.2 Queues and Congestion Management describes, when traffic congestion occurs,
packets accumulate in the buffer and are delayed. The delay is determined by the queue buffer
size and the output bandwidth allocated to the queue. When the output bandwidths are the
same, the shorter the queue, the lower the delay.
The queue length cannot be set too small. If the length of a queue is too small, the buffer is
insufficient even at low traffic rates, and packet loss occurs. The shorter the queue, the lower
its tolerance of burst traffic.
The queue length cannot be set too large either. If the length of a queue is too large, the delay
increases along with it. This matters especially when a TCP connection is set up: one end
sends a packet to the peer end and waits for a response. If no response is received within the
retransmission timeout period, the TCP sender retransmits the packet. A packet buffered for
too long is therefore no different from a dropped one.


Therefore, setting the queue length to 10 ms x output queue bandwidth is recommended for
high-priority queues (CS7, CS6, and EF); setting the queue length to 100 ms x output queue
bandwidth is recommended for low-priority queues.

5.4 Impact of Queue Buffer on Delay and Jitter


Queue Buffer
The Traffic Manager (TM) on the forwarding plane houses high-speed buffers for which all
interfaces compete. To prevent traffic interruptions caused by an interface being starved of
buffer resources for a long time, the system allocates a small dedicated buffer to each
interface and ensures that each queue on each interface can use it.

Impact of Queue Buffer on Delay


The TM puts received packets into the buffer and forwards them promptly when traffic is not
congested. In this case, the period during which packets stay in the buffer is on the order of
microseconds, and the delay can be ignored.
When traffic is congested, packets accumulate in the buffer and wait to be forwarded, and the
delay increases greatly. The interval from the time when a packet enters the buffer to the time
when the packet is forwarded is called the buffer delay or queue delay.
The buffer delay is determined by the buffer size for a queue and the output bandwidth
allocated to the queue. The formula is as follows:
Buffer delay = Buffer size for the queue / Output bandwidth for the queue
The buffer size is expressed in bytes, and the output bandwidth (also called the traffic shaping
rate) is expressed in bit/s. Therefore, the preceding formula can also be expressed as follows:
Buffer delay = (Buffer size for the queue x 8) / Traffic shaping rate for the queue
As the formula indicates, the larger the buffer size, the longer the buffer delay.
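A quick numeric check of this formula (the buffer size and shaping rate below are chosen only for illustration):

```python
def buffer_delay_ms(buffer_bytes, shaping_rate_bps):
    """Buffer delay in ms = (buffer size in bytes x 8) / shaping rate in bit/s."""
    return buffer_bytes * 8 / shaping_rate_bps * 1000

# A 1.25 MB buffer drained at 100 Mbit/s adds up to 100 ms of queuing delay.
delay = buffer_delay_ms(1_250_000, 100_000_000)
```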

Impact of Queue Buffer on Jitter


Jitter refers to the delay difference between packets in the same flow. Typically, packets are
sent at evenly spaced intervals. However, these intervals may fluctuate, causing irregular
delay differences between packets. This irregularity is known as jitter. Jitter negatively
affects real-time services, such as voice and video services, by creating noticeable
intermittence. A voice or video receiving terminal generally uses a buffer mechanism to
minimize jitter. However, if jitter is too severe for the buffer mechanism to mitigate, the
receiving terminal may experience voice or video distortion and intermittence.
Severe jitter is mainly caused by the following two scenarios: 1. The route status on IP
networks changes frequently, causing packets to be transmitted along different routes. 2.
Packets are buffered on various nodes during traffic congestion, resulting in different delays.
Scenario 2 is commonly seen on live networks.
Jitter increases as packet delays become more varied; if packet delays are kept low, jitter is
kept low as well. Therefore, you can control jitter by controlling delays. For example, if
delays are controlled below 5 us, delay variations (jitter) are definitely below 5 us.


As described in Impact of Queue Buffer on Delay, large buffer sizes increase buffer delays.
Controlling buffer sizes means control over packet delays.

Queue Buffer Settings


The maximum buffer size can be set using the queue-depth command on Huawei routers.
The buffer size cannot be set too small. If the buffer of a queue is too small, it is insufficient
even at low traffic rates, and packet loss occurs.
A large buffer size is not recommended for services that have high real-time requirements. If
the buffer of a queue is too large, the delay increases along with it. As described in Impact of
Queue Buffer on Delay, Buffer delay = (Buffer size for the queue x 8) / Traffic shaping rate
for the queue.
The following formula can be inferred:
Buffer size for a queue (bytes) = Traffic shaping rate (bit/s) x Maximum allowed delay (s) / 8
High-priority services require that the delay be shorter than 10 ms, and low-priority services
generally require that the delay be shorter than or equal to 100 ms. Therefore, setting the
buffer size to 10 ms x traffic shaping rate is recommended for high-priority queues (CS7,
CS6, and EF), and setting the buffer size to 100 ms x traffic shaping rate is recommended for
low-priority queues.
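The sizing rule can be checked numerically; the 1 Gbit/s shaping rate below is chosen only for illustration:

```python
def buffer_size_bytes(shaping_rate_bps, max_delay_s):
    """Buffer size (bytes) = shaping rate (bit/s) x maximum allowed delay (s) / 8."""
    return shaping_rate_bps * max_delay_s / 8

# High-priority queue (10 ms) vs. low-priority queue (100 ms) at 1 Gbit/s:
hi = buffer_size_bytes(1_000_000_000, 0.010)   # 1.25 MB
lo = buffer_size_bytes(1_000_000_000, 0.100)   # 12.5 MB
```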

5.5 HQoS
Hierarchical Quality of Service (HQoS) is a technology that uses a queue scheduling
mechanism to guarantee the bandwidth of multiple services of multiple users in the DiffServ
model.
Traditional QoS performs single-level traffic scheduling. The device can distinguish services
on an interface but cannot identify users. Packets of the same priority are placed into the same
queue on an interface and compete for the same queue resources.
HQoS uses multi-level scheduling to distinguish user-specific or service-specific traffic and
provide differentiated bandwidth management.

Basic Scheduling Model


The scheduling model consists of two components: scheduler and scheduled object.


 Scheduler: schedules multiple queues. The scheduler performs a specific scheduling
algorithm to determine the order in which packets are forwarded. The scheduling
algorithm can be Strict Priority (SP) or weight-based scheduling. The weight-based
scheduling algorithms include Deficit Round Robin (DRR), Weighted Round Robin
(WRR), Deficit Weighted Round Robin (DWRR), and Weighted Fair Queuing (WFQ).
For details about scheduling algorithms, see 5.2 Queues and Congestion Management.
The scheduler performs one action: selecting a queue. After a queue is selected by a
scheduler, the packets in the front of the queue are forwarded.
 Scheduled object: refers to a queue. Packets are sequenced in queues in the buffer.
Three configurable attributes are delivered to a queue:
(1) Priority or weight
(2) PIR
(3) Drop policy, including tail drop and Weighted Random Early Detection (WRED)
Packets may enter or leave a queue:
(1) Entering a queue: The device determines whether to drop a received packet based on
the drop policy. If the packet is not dropped, it enters the tail of the queue.
(2) Leaving a queue: After a queue is selected by a scheduler, the packets in the front of
the queue are shaped and then forwarded out of the queue.
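The two components can be sketched as a minimal model (illustrative names; SP is used as the scheduling algorithm and tail drop as the drop policy):

```python
from collections import deque

class Queue:
    """Scheduled object: a queue with a priority attribute and a tail-drop policy."""
    def __init__(self, priority, max_len):
        self.priority = priority
        self.max_len = max_len
        self.packets = deque()

    def enqueue(self, pkt):
        if len(self.packets) >= self.max_len:   # drop policy: tail drop on overflow
            return False
        self.packets.append(pkt)                # packet enters the tail of the queue
        return True

class SpScheduler:
    """Scheduler: Strict Priority (SP) - always select the highest-priority
    non-empty queue, then forward the packet at the front of that queue."""
    def __init__(self, queues):
        self.queues = queues

    def dequeue(self):
        for qu in sorted(self.queues, key=lambda qu: -qu.priority):
            if qu.packets:
                return qu.packets.popleft()     # packet leaves the front of the queue
        return None
```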

Hierarchical Scheduling Model


HQoS uses a tree-shaped hierarchical scheduling model. As shown in Figure 1.1, the
hierarchical scheduling model consists of three types of nodes:
 Leaf node: is located at the bottom layer and identifies a queue. The leaf node is a
scheduled object and can only be scheduled.
 Transit node: is located at the middle layer and is both a scheduler and a
scheduled object. When a transit node functions as a scheduled object, it can be
considered a virtual queue, which is only a layer in the scheduling architecture, not
an actual queue that consumes buffers.
 Root node: is located at the top layer and identifies the top-level scheduler. The root node
is only a scheduler but not a scheduled object. The PIR is generally delivered to the root
node to restrict the output bandwidth.


Figure 1.1 Hierarchical scheduling model

A scheduler can schedule multiple queues or schedulers. The scheduler can be considered a
parent node, and the scheduled queue or scheduler can be considered a child node. The parent
node is the traffic aggregation point of multiple child nodes.
Traffic classification rules and control parameters can be specified on each node to classify
and control traffic. Traffic classification rules based on different user or service requirements
can be configured on nodes at different layers. In addition, different control actions can be
performed for traffic on different nodes. This ensures multi-layer/user/service traffic
management.

HQoS Hierarchies
In HQoS scheduling, one layer of transit nodes can be used to implement a three-level
scheduling architecture, or multiple layers of transit nodes can be used to implement a
multi-level scheduling architecture. In addition, two or more hierarchical scheduling models
can be used together by mapping the output of one scheduling model to a leaf node of
another scheduling model, as shown in Figure 1.1. This provides flexible scheduling options.


Figure 1.1 Flexible HQoS hierarchies

HQoS hierarchies supported by devices of different vendors may differ. HQoS hierarchies
supported by different chips of the same vendor may also differ.

Scheduling Architecture of Huawei Routers


Figure 1.1 shows class queues (CQs) and port schedulers. Huawei routers not configured with
HQoS have only CQs and port schedulers.


Figure 1.1 Scheduling architecture without HQoS

A CQ has the following configurable attributes:
 Queue priority and weight
 PIR
 Drop policy, including tail drop and WRED
As shown in Figure 1.2, when HQoS is configured, a router allocates a buffer for flow queues
that require hierarchical scheduling and performs multi-level scheduling for these flow
queues first. After that, the router puts HQoS traffic and non-HQoS traffic together into the
CQs for unified scheduling.


Figure 1.2 HQoS scheduling

 Leaf node: flow queue (FQ)
A leaf node is used to buffer data flows of one priority for a user. Data flows of each user
can be classified into one to eight priorities. Each user can use one to eight FQs.
Different users cannot share FQs. A traffic shaping value can be configured for each FQ
to restrict the maximum bandwidth.
FQs and CQs share the following configurable attributes:
− Queue priority and weight
− PIR
− Drop policy, including tail drop and WRED
 Transit node: subscriber queue (SQ)
An SQ represents a user (for example, a VLAN, LSP, or PVC). You can configure the CIR
and PIR for each SQ.
Each SQ corresponds to eight FQ priorities, and one to eight FQs can be configured. If
an FQ is idle, other FQs can consume its bandwidth, but the bandwidth used by an FQ
cannot exceed the PIR of that FQ.
An SQ functions as both a scheduler and a virtual queue to be scheduled.


− As a scheduler: schedules multiple FQs. Priority queuing (PQ), Weighted Fair
Queuing (WFQ), or low priority queuing (LPQ) applies to an FQ. The FQs with the
service classes EF, CS6, and CS7 use SP scheduling by default. The flow queues
with the service classes BE, AF1, AF2, AF3, and AF4 use WFQ scheduling by
default, with weights of 10:10:10:15:15.
− As a virtual queue to be scheduled: is allocated two attributes, the CIR and PIR.
Using metering, the SQ traffic is divided into two parts: the part within the CIR,
which users pay for, and the burst part between the CIR and the PIR, which is also
called the excess information rate (EIR). The EIR can be calculated using this
formula: EIR = PIR - CIR. The EIR refers to the burst traffic rate; with it, the total
rate can reach a maximum of the PIR.
 Root node: group queue (GQ)
To simplify operation, you can define multiple users as a GQ, which is similar to a
BGP peer group that comprises multiple BGP peers. For example, all users that
require the same bandwidth or all premium users can be configured as a GQ.
A GQ can be bound to multiple SQs, but an SQ can be bound only to one GQ.
A GQ schedules SQs. DRR is used to schedule the traffic within the CIR between
SQs. If any bandwidth remains after the first round, DRR is used to schedule the
EIR traffic. The bandwidth within the CIR is preferentially provided, and burst
traffic exceeding the PIR is dropped. Therefore, if a GQ obtains the bandwidth of its
PIR, each SQ in the GQ can obtain a minimum bandwidth of its CIR or even a
maximum bandwidth of its PIR.
In addition, a GQ, as a root node, can be configured with a PIR attribute to restrict
the sum rate of its member users. All users in the GQ are restricted by the PIR. The
PIR of a GQ is used for rate limiting but does not provide bandwidth guarantee. It is
recommended that the PIR of a GQ be greater than the sum of the CIRs of all its
member SQs. Otherwise, a user (SQ) cannot obtain sufficient bandwidth.
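Under the default FQ weights of 10:10:10:15:15, the share of bandwidth each WFQ-scheduled flow queue receives can be computed as follows (a sketch that assumes all five queues are backlogged and 100 Mbit/s remains after the SP queues are served):

```python
def wfq_shares(weights, bandwidth_bps):
    """Split bandwidth among backlogged queues in proportion to their weights."""
    total = sum(weights.values())
    return {name: bandwidth_bps * w / total for name, w in weights.items()}

# Default FQ weights on Huawei routers for the WFQ-scheduled service classes.
shares = wfq_shares({"BE": 10, "AF1": 10, "AF2": 10, "AF3": 15, "AF4": 15},
                    100_000_000)   # assumed leftover bandwidth: 100 Mbit/s
# BE, AF1, and AF2 each get about 16.7 Mbit/s; AF3 and AF4 each get 25 Mbit/s.
```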

When a device supports dynamic SQ scheduling, note the following:
 If the sum of the CIRs of all SQs exceeds the GQ bandwidth, the bandwidth is allocated based on the
CIR ratio. For example, the CIRs of SQ1 and SQ2 are 100 Mbit/s and 50 Mbit/s, respectively, but the
GQ bandwidth is only 100 Mbit/s. The output bandwidths of SQ1 and SQ2 are then allocated in the
ratio of 100 Mbit/s to 50 Mbit/s, that is, 2:1.
 If the sum of the PIRs of all SQs exceeds the GQ bandwidth, the CIR of each SQ is guaranteed first,
and the remaining bandwidth is allocated based on the EIR ratio. For example, the CIR and PIR of
SQ1 are 100 Mbit/s and 150 Mbit/s, respectively; the CIR and PIR of SQ2 are 0 Mbit/s and 100
Mbit/s, respectively; the GQ bandwidth is 200 Mbit/s. SQ1 first obtains its CIR of 100 Mbit/s. The
remaining 100 Mbit/s is allocated to SQ1 and SQ2 in the EIR ratio of 50 Mbit/s to 100 Mbit/s. As a
result, SQ1 obtains a bandwidth of about 133 Mbit/s, and SQ2 obtains a bandwidth of about 67
Mbit/s.
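Both notes can be reproduced with a short calculation (a sketch of the described allocation behavior, not the scheduler itself; rates are in Mbit/s):

```python
def allocate_gq_bandwidth(sqs, gq_bw):
    """Allocate GQ bandwidth to SQs: CIRs first (shared in the CIR ratio if they
    exceed the GQ bandwidth), then the remainder in proportion to the EIRs.
    sqs maps an SQ name to a (cir, pir) pair."""
    total_cir = sum(cir for cir, _ in sqs.values())
    if total_cir >= gq_bw:
        # CIRs alone exceed the GQ bandwidth: share it in the CIR ratio.
        return {n: gq_bw * cir / total_cir for n, (cir, _) in sqs.items()}
    alloc = {n: cir for n, (cir, _) in sqs.items()}     # guarantee every CIR
    remaining = gq_bw - total_cir
    total_eir = sum(pir - cir for cir, pir in sqs.values())
    if total_eir == 0:
        return alloc
    for n, (cir, pir) in sqs.items():                   # split leftover by EIR ratio
        alloc[n] += remaining * (pir - cir) / total_eir
    return alloc

# First note: CIRs of 100 and 50 against a 100 Mbit/s GQ -> a 2:1 split.
a = allocate_gq_bandwidth({"SQ1": (100, 100), "SQ2": (50, 50)}, 100)
# Second note: CIR/PIR of (100, 150) and (0, 100) against a 200 Mbit/s GQ
# -> roughly 133 and 67 Mbit/s.
b = allocate_gq_bandwidth({"SQ1": (100, 150), "SQ2": (0, 100)}, 200)
```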
The following example illustrates the relationship between an FQ, SQ, and GQ.
In this example, 20 residential users live in a building, and each residential user purchases
a bandwidth of 20 Mbit/s. To guarantee the bandwidth, an SQ with both the CIR and PIR
of 20 Mbit/s is created for each residential user. The PIR here also restricts the maximum
bandwidth of each residential user. As VoIP and IPTV services become popular alongside
high-speed Internet (HSI) services, carriers promote new bandwidth packages that add the
value-added services (VoIP and IPTV) while keeping the bandwidth of 20 Mbit/s
unchanged. Each residential user can use VoIP, IPTV, and HSI services.
To meet such bandwidth requirements, HQoS is configured as follows:
− Three FQs are configured for the three services (VoIP, IPTV, and HSI).
− Altogether 20 SQs are configured for the 20 residential users. The CIR and PIR are
configured for each SQ.


− One GQ is configured for the whole building and corresponds to the 20 residential
users. The sum bandwidth of the 20 residential users is the PIR of the GQ. Each of
the 20 residential users uses services individually, but their sum bandwidth is
restricted by the PIR of the GQ.
The hierarchy model is as follows:
− FQs are used to distinguish the services of a user and control bandwidth allocation
among services.
− SQs are used to distinguish users and restrict the bandwidth of each user.
− GQs are used to distinguish user groups and control the total traffic rate of the
member SQs.
FQs enable bandwidth allocation among services. SQs distinguish users. GQs enable
the CIR of each user to be guaranteed and all member users to share the bandwidth.
The bandwidth exceeding the CIR is not guaranteed because users do not pay for it,
whereas the CIR must be guaranteed because users have purchased it. As shown in
Figure 1.2, the CIR of each user is marked, and bandwidth is preferentially allocated to
guarantee the CIR. Therefore, the CIR bandwidth is not preempted by burst traffic
exceeding the committed rates.
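The building example can be written down as a small hierarchy description. The names and the Python structure are illustrative; the 400 Mbit/s GQ PIR is simply 20 users x 20 Mbit/s.

```python
# One GQ for the building, 20 SQs (one per household), three FQs per SQ.
SERVICES = ("VoIP", "IPTV", "HSI")   # one FQ per service

building_gq = {
    "pir_mbps": 400,                 # caps the sum rate of all 20 households
    "sqs": [
        {
            "name": f"household-{i}",
            "cir_mbps": 20,          # purchased, guaranteed bandwidth
            "pir_mbps": 20,          # also the per-household maximum
            "fqs": list(SERVICES),
        }
        for i in range(1, 21)
    ],
}
```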
On Huawei routers, HQoS uses different architectures to schedule upstream or
downstream queues.


HQoS Scheduling for Upstream Queues

Figure 1.1 Scheduling architecture for upstream queues


The scheduling path of upstream HQoS traffic is FQ -> SQ -> GQ, after which the traffic
joins non-HQoS traffic for the following two levels of scheduling:
 Target Blade (TB) scheduling
TB scheduling is also called Virtual Output Queue (VOQ) scheduling.
At the intersection shown in the following figure, three vehicles (a car, a pallet truck, and
a carriage truck) arrive at crossing A and are bound for crossings B, C, and D,
respectively. If crossing B is jammed at this time, the car cannot move ahead and blocks
the pallet truck and carriage truck, although crossings C and D are clear.

If three lanes destined for crossings B, C, and D are set up at crossing A, the problem is
resolved.

Boards and SFUs on a router are similar to crossings A, B, C, and D. If each board
allocates queues to packets destined for different boards, such queues are called VOQs,
which allow packets destined for other boards to pass when one board is congested.
A device automatically creates VOQs. Users cannot modify the attributes of VOQs.

Upstream multicast traffic has not yet been replicated and has no single destination board.
Therefore, multicast traffic is put into a separate VOQ.
For unicast traffic, one VOQ is configured per destination board.


DRR is implemented between unicast VOQs and then between unicast and multicast
VOQs.
 Class queue (CQ) scheduling
Four CQs are used for upstream traffic: COS0 for CS7, CS6, and EF services; COS1 for
AF4 and AF3 services; COS2 for AF2 and AF1 services; and COS3 for BE services.
SP scheduling applies to COS0, which is preferentially scheduled. WFQ scheduling
applies to COS1, COS2, and COS3, with WFQ weights of 1, 2, and 4, respectively.
Users cannot modify the attributes of upstream CQs and schedulers.
Non-HQoS traffic directly enters the four upstream CQs, without passing through FQs.
HQoS traffic passes through FQs and then CQs.
The process of upstream HQoS scheduling is as follows:
1. Entering a queue: An HQoS packet enters an FQ. When a packet enters an FQ, the
system checks the FQ status and determines whether to drop the packet. If the packet is
not dropped, it enters the tail of the FQ.
2. Applying for scheduling: After the packet enters the FQ, the FQ reports the queue status
change to the SQ scheduler and applies for scheduling. The SQ scheduler reports the
queue status change to the GQ scheduler and applies for scheduling. Therefore, the
scheduling request path is FQ -> SQ -> GQ.
3. Hierarchical scheduling: After receiving a scheduling request, the GQ scheduler selects
an SQ, and the SQ selects an FQ. Therefore, the scheduling path is GQ -> SQ -> FQ.
4. Leaving a queue: After an FQ is selected, packets in the front of the FQ leave the queue
and enter the tail of a VOQ. The VOQ reports the queue status change to the CQ
scheduler and applies for scheduling. After receiving the request, the CQ scheduler
selects a VOQ. Packets in the front of the VOQ leave the queue and are sent to an SFU.
Therefore, the scheduling process is (FQ -> SQ -> GQ) + (VOQ -> CQ).

Table 1.1 Parameters for upstream scheduling

FQ
Queue attributes:
 Queue priority and weight, which can be configured.
 PIR, which can be configured. The PIR is not configured by default.
 Drop policy, which can be configured as WRED. The drop policy is tail drop by default.
Scheduler attributes: -

SQ
Queue attributes:
 CIR, which can be configured.
 PIR, which can be configured.
Scheduler attributes: to be configured.
 PQ and WFQ apply to FQs. By default, PQ applies to EF, CS6, and CS7 services; WFQ
applies to AF2, AF3, and AF4 services; LPQ applies to BE and AF1 services.

GQ
Queue attributes:
 PIR, which can be configured. The PIR is used to restrict the bandwidth but does not
provide any bandwidth guarantee.
Scheduler attributes: not to be configured.
 DRR is used to schedule the traffic within the CIR between SQs. If any bandwidth
remains after the first round, DRR is used to schedule the EIR traffic. The bandwidth
within the CIR is preferentially provided, and burst traffic exceeding the PIR is dropped.

VOQ
Queue attributes: not to be configured.
Scheduler attributes: not to be configured.
 DRR is implemented between unicast VOQs and then between unicast and multicast
VOQs.

CQ
Queue attributes: not to be configured.
Scheduler attributes: not to be configured.
 SP scheduling applies to COS0, which is preferentially scheduled. Weight-based
scheduling applies to COS1, COS2, and COS3, with WFQ weights of 1, 2, and 4,
respectively.

HQoS Scheduling for Downstream Queues


On Huawei routers, some Physical Interface Cards (PICs) are equipped with a Traffic
Manager (TM) chip; such a card is called an egress Traffic Manager (eTM) subcard.
If the PIC is equipped with an eTM subcard, downstream scheduling is implemented on the
eTM subcard. If the PIC is not equipped with an eTM subcard, downstream scheduling is
implemented on the downstream TM chip. The scheduling processes differ in the two cases.

The eTM subcard is supported in V600R002 and later versions.


To check whether a PIC is an eTM subcard, run the display device pic-status command in any view. If
the PIC is an eTM subcard, the Type field in the command output contains "_T_CARD".
<HUAWEI> display device pic-status
Pic-status information in Chassis 1:
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

SLOT PIC Status Type Port_count Init_result Logic_down

1 1 Registered ETH_20XGF_NB_CARD 20 SUCCESS SUCCESS

2 0 Registered LAN_WAN_2x10GF_T_CARD 2 SUCCESS SUCCESS

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

 Downstream TM scheduling


Figure 1.1 TM Scheduling architecture for downstream queues

Downstream TM scheduling includes the scheduling paths FQ -> SQ -> GQ and CQ ->
port. There are eight CQs for downstream traffic, CS7, CS6, EF, AF4, AF3, AF2, AF1,
and BE. Users can modify the queue parameters and scheduling parameters.
The process of downstream TM scheduling is as follows:
a. Entering a queue: An HQoS packet enters an FQ.
b. Applying for scheduling: The downstream scheduling application path is (FQ -> SQ
-> GQ) + (GQ -> destination port).


c. Hierarchical scheduling: The downstream scheduling path is (destination port ->
CQ) + (GQ -> SQ -> FQ).
d. Leaving a queue: After an FQ is selected, packets in the front of the FQ leave the
queue and enter the tail of the CQ. The CQ forwards the packets to the destination
port.
Non-HQoS traffic directly enters the eight downstream CQs, without passing through FQs.

Table 1.1 Parameters for downstream TM scheduling

FQ
Queue attributes:
 Queue priority and weight, which can be configured.
 PIR, which can be configured. The PIR is not configured by default.
 Drop policy, which can be configured as WRED. The drop policy is tail drop by default.
Scheduler attributes: -

SQ
Queue attributes:
 CIR, which can be configured.
 PIR, which can be configured.
Scheduler attributes: to be configured.
 PQ and WFQ apply to FQs. By default, PQ applies to EF, CS6, and CS7 services; WFQ
applies to AF2, AF3, and AF4 services; LPQ applies to BE and AF1 services.

GQ
Queue attributes:
 PIR, which can be configured. The PIR is used to restrict the bandwidth but does not
provide any bandwidth guarantee.
Scheduler attributes: not to be configured.
 DRR is used to schedule the traffic within the CIR between SQs. If any bandwidth
remains after the first round, DRR is used to schedule the EIR traffic. The bandwidth
within the CIR is preferentially provided, and burst traffic exceeding the PIR is dropped.

CQ
Queue attributes:
 Queue priority and weight, which can be configured.
 PIR, which can be configured. The PIR is not configured by default.
 Drop policy, which can be configured as WRED. The drop policy is tail drop by default.
Scheduler attributes: -

Port
Queue attributes:
 PIR, which can be configured.
Scheduler attributes: to be configured.
 PQ and WFQ apply to FQs. By default, PQ applies to EF, CS6, and CS7 services; WFQ
applies to AF2, AF3, and AF4 services; LPQ applies to BE and AF1 services.

 Downstream eTM scheduling


Figure 1.2 eTM scheduling architecture for downstream queues

Unlike downstream TM scheduling, downstream eTM scheduling is a five-level scheduling
architecture. Downstream eTM scheduling uses only FQs but not CQs. In addition to the
scheduling path FQ -> SQ -> GQ, parent GQ scheduling, also called virtual interface (VI)
scheduling, is implemented.

The VI is only the name of a scheduler, not a real virtual interface. In actual applications, a VI
corresponds to a sub-interface or a physical interface; the VI refers to different objects in different
applications.
The differences between downstream TM scheduling and downstream eTM scheduling
are as follows:


− Downstream TM scheduling splits the five-level scheduling into two parts,
(FQ -> SQ -> GQ) + (CQ -> port). HQoS traffic is scheduled along the path
FQ -> SQ -> GQ and then enters a CQ together with non-HQoS traffic for the
CQ -> port scheduling.
− Downstream eTM scheduling, an entity queue scheduling method, uses the single
scheduling path FQ -> SQ -> GQ -> VI -> port. The system sets a default SQ for
the eight CQs configured for non-HQoS traffic, and this default SQ directly
participates in port scheduling.

Table 2.1 Parameters for downstream eTM scheduling
FQ
 Queue attributes: Queue priority and weight, which can be configured. PIR, which can be configured (not configured by default). Drop policy, which can be configured as WRED (tail drop by default).
 Scheduler attribute: -
SQ
 Queue attributes: CIR, which can be configured. PIR, which can be configured.
 Scheduler attributes: To be configured. PQ and WFQ apply to FQs. By default, PQ applies to EF, CS6, and CS7 services; WFQ applies to AF2, AF3, and AF4 services; LPQ applies to BE and AF1 services.
GQ
 Queue attribute: PIR, which can be configured. The PIR is used to restrict the bandwidth but does not provide any bandwidth guarantee.
 Scheduler attributes: Not to be configured. DRR is used to schedule the traffic within the CIR between SQs. If any bandwidth remains after the first round, DRR is used to schedule the EIR traffic. The CIR bandwidth is preferentially provided, and burst traffic exceeding the PIR is dropped.
Parent GQ/VI
 Queue attribute: PIR, which can be configured. The PIR is used to restrict the bandwidth but does not provide any bandwidth guarantee.
 Scheduler attributes: Not to be configured. DRR is used between GQs.
Port
 Queue attribute: PIR, which can be configured.
 Scheduler attributes: To be configured. PQ and WFQ apply to FQs. By default, PQ applies to EF, CS6, and CS7 services; WFQ applies to AF2, AF3, and AF4 services; LPQ applies to BE and AF1 services.

 Difference Between Downstream TM Scheduling and eTM Scheduling
Table 2.2 Difference between Downstream TM Scheduling and eTM Scheduling
Difference: port-queue command
− Downstream TM scheduling: The command takes effect for traffic of a specific priority on an interface.
− Downstream eTM scheduling: The command takes effect for traffic of a specific priority in a non-flow queue.
Difference: GQ bandwidth share
− Downstream TM scheduling: The GQ bandwidth can be shared by its member SQs on a TM chip if the same GQ profile is used.
− Downstream eTM scheduling: The GQ bandwidth can be shared by its member SQs only on the same interface, even if the same GQ profile is used.
Difference: SQ bandwidth share
− Downstream TM scheduling: Multiple SQs in the same GQ on different physical interfaces share the bandwidth.
− Downstream eTM scheduling: Multiple SQs in the same GQ on different sub-interfaces, but not on different physical interfaces, share the bandwidth.
Difference: Trunk and member interfaces
− Downstream TM scheduling: The port-queue command can be configured on a trunk interface or its member interfaces. The configuration on a member interface takes effect preferentially and schedules all traffic on the member interface.
− Downstream eTM scheduling: The port-queue command can be configured on a trunk interface or its member interfaces, but the configuration takes effect only for non-flow queues.

Use traffic shaping as an example to illustrate the difference between downstream TM scheduling and eTM scheduling. Assume that traffic shaping is configured for port queues on an interface and for flow queues or user queues on its sub-interfaces:
flow-queue FQ
queue ef shaping 10M

interface gigabitethernet1/0/0
port-queue ef shaping 100M

interface gigabitethernet1/0/0.1
user-queue cir 50m pir 50m flow-queue FQ

interface gigabitethernet1/0/0.2
//Note: user-queue and qos-profile are not configured on
gigabitethernet1/0/0.2.

− For downstream TM scheduling, the traffic shaping rate configured using the port-
queue command determines the sum bandwidth of both HQoS and non-HQoS
traffic. Based on the preceding configuration:
− The rate of EF traffic sent from GE 1/0/0.1 does not exceed 10 Mbit/s.
− The rate of EF traffic sent from GE 1/0/0 (including GE 1/0/0, GE 1/0/0.1, and GE 1/0/0.2) does not exceed 100 Mbit/s.
− For downstream eTM scheduling, the traffic shaping rate configured using the port-
queue command determines the sum bandwidth of non-HQoS traffic (default SQ
bandwidth). Based on the preceding configuration:
− The rate of EF traffic sent from GE 1/0/0 and GE 1/0/0.2 (non-HQoS traffic) does not exceed 100 Mbit/s.
− The rate of EF traffic sent from GE 1/0/0.1 does not exceed 10 Mbit/s.
− The rate of EF traffic sent from GE 1/0/0 can reach a maximum of 110 Mbit/s.
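The arithmetic above can be reduced to a small sketch. This is only an illustration of the bookkeeping described in the example; the function names and the simplified model are ours, not router behavior:

```python
# Hedged sketch: under TM, the port-queue shaping rate caps HQoS and
# non-HQoS traffic together; under eTM it caps only the default SQ
# (non-HQoS traffic), so HQoS sub-interface SQs add on top of it.

def max_ef_rate_tm(port_shaping, hqos_sq_caps):
    # TM: everything merges into the port queue, so the port rate bounds all.
    return port_shaping

def max_ef_rate_etm(port_shaping, hqos_sq_caps):
    # eTM: HQoS SQs are scheduled beside the default SQ at the port level.
    return port_shaping + sum(hqos_sq_caps)

# GE 1/0/0: port-queue ef shaping 100M; GE 1/0/0.1: flow-queue ef 10M
print(max_ef_rate_tm(100, [10]))   # 100 Mbit/s in total under TM
print(max_ef_rate_etm(100, [10]))  # up to 110 Mbit/s in total under eTM
```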

HQoS Priority Mapping

Both upstream and downstream HQoS scheduling use two types of entity queues: eight FQs and four CQs for upstream scheduling, and eight FQs and eight CQs for downstream scheduling. Packets enter an FQ based on the service class. Packets at the front of the FQ then leave the queue and enter a CQ based on the mapping.
The mapping from FQs to CQs can be in Uniform or Pipe mode.
 Uniform: The system defines a fixed mapping. Upstream scheduling uses the uniform mode.
 Pipe: Users can modify the mapping. The original priorities carried in packets are not modified in pipe mode.

Share Shaping
Share shaping, also called flow group queue (FGQ) shaping, implements traffic shaping for a group consisting of two or more flow queues (FQs) in a subscriber queue (SQ). This ensures that other services in the SQ can obtain bandwidth.
For example, a user has HSI, IPTV, and VoIP services, and the IPTV services include IPTV unicast and multicast services. To ensure the CIR of IPTV services and prevent IPTV services from preempting the bandwidth reserved for HSI and VoIP services, you can configure four FQs, dedicated to HSI, IPTV unicast, IPTV multicast, and VoIP services, respectively. As shown in Figure 1.1, share shaping is implemented for IPTV unicast and multicast services, and then HQoS is implemented for all services.

Figure 1.1 Share shaping

Currently, share shaping applies only to one group of FQs from eight FQs in one SQ. The
share shaping modes shown in Figure 1.2 are not supported on a Huawei router.
Figure 1.2 Share shaping modes not yet supported on a Huawei router

Share shaping can be implemented in either of the following modes on a Huawei router:
 Mode A: Share shaping only shapes traffic but does not schedule it. Queues to which share shaping applies can use different scheduling algorithms.
 Mode B: Queues to which share shaping applies must use the same scheduling algorithm. These share-shaping-capable queues are scheduled in advance and then scheduled with the other queues in the SQ as a whole. In this case, the highest priority among the share-shaping-capable queues becomes the priority of the group as a whole, and the sum of their weights becomes the weight of the group as a whole.

Currently, share shaping applies only to the P20-E sub-card and P40-E sub-card. The P20-E uses mode
A, whereas the P40-E uses mode B.
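The mode B aggregation rule (highest member priority, summed member weights) can be sketched as follows; the priority and weight numbers are invented for illustration, with a larger number meaning a higher priority:

```python
# Sketch of the mode B rule: a share-shaping group is scheduled as one
# queue whose priority is the highest member priority and whose weight
# is the sum of member weights.

def group_attrs(members):
    """members: list of (priority, weight) tuples of the grouped queues."""
    return max(p for p, _ in members), sum(w for _, w in members)

# e.g. AF3 (priority 3, weight 1) grouped with AF1 (priority 1, weight 2)
print(group_attrs([(3, 1), (1, 2)]))  # (3, 3)
```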

Example 1: As shown in Figure 1.3, the traffic shaping rate of the EF, AF1, AF2, and AF3 queues is 90 Mbit/s, and the sum bandwidth of the SQ is 100 Mbit/s. All queues use strict priority (SP) scheduling. After share shaping is configured, the sum bandwidth of the AF1 and AF3 queues is set to 80 Mbit/s.
Figure 1.3 SP scheduling for share shaping

Assume that the PIR is ensured for the SQ. The input rate of the EF queue is 10 Mbit/s, and
that of each other queue is 70 Mbit/s. Share shaping allocates bandwidths to the queues in
either of the following modes:
 Mode A: SP scheduling applies to all queues.
− The EF queue obtains the 10 Mbit/s bandwidth, and the remaining bandwidth is 90 Mbit/s.
− The bandwidth allocated to the AF3 queue is calculated as follows: Min {AF3 PIR, share-shaping PIR, SQ PIR, remaining bandwidth} = Min {90 Mbit/s, 80 Mbit/s, 100 Mbit/s, 90 Mbit/s} = 80 Mbit/s. The traffic rate of the AF3 queue, however, is only 70 Mbit/s. Therefore, the AF3 queue actually obtains the 70 Mbit/s bandwidth, leaving 20 Mbit/s available for other queues.
− The bandwidth allocated to the AF2 queue is calculated as follows: Min {AF2 PIR, SQ PIR, remaining bandwidth} = Min {90 Mbit/s, 100 Mbit/s, 20 Mbit/s} = 20 Mbit/s. Therefore, the AF2 queue obtains the 20 Mbit/s bandwidth, and no bandwidth is remaining.
− The AF1 and BE queues obtain no bandwidth.
 Mode B: The EF queue is scheduled first, then the AF3 and AF1 queues as a whole, then the AF2 queue, and finally the BE queue.
− The EF queue obtains the 10 Mbit/s bandwidth, and the remaining bandwidth is 90 Mbit/s.
− The bandwidth allocated to the AF3 queue is calculated as follows: Min {AF3 PIR, share-shaping PIR, SQ PIR, remaining bandwidth} = Min {90 Mbit/s, 80 Mbit/s, 100 Mbit/s, 90 Mbit/s} = 80 Mbit/s. The input rate of the AF3 queue, however, is only 70 Mbit/s. Therefore, the AF3 queue actually obtains the 70 Mbit/s bandwidth, leaving 20 Mbit/s available for other queues.
− The bandwidth allocated to the AF1 queue is calculated as follows: Min {AF1 PIR, share-shaping PIR - AF3 bandwidth, SQ PIR, remaining bandwidth} = Min {90 Mbit/s, 10 Mbit/s, 100 Mbit/s, 20 Mbit/s} = 10 Mbit/s. Therefore, the AF1 queue obtains the 10 Mbit/s bandwidth, and the remaining bandwidth becomes 10 Mbit/s.
− The bandwidth allocated to the AF2 queue is calculated as follows: Min {AF2 PIR, SQ PIR, remaining bandwidth} = Min {90 Mbit/s, 100 Mbit/s, 10 Mbit/s} = 10 Mbit/s. Therefore, the AF2 queue obtains the 10 Mbit/s bandwidth, and no bandwidth is remaining.
− The BE queue obtains no bandwidth.


The following table shows the bandwidth allocation results.
Queue  Scheduling  Input Rate  PIR             Output Bandwidth (Mbit/s)
       Algorithm   (Mbit/s)    (Mbit/s)        Mode A    Mode B
EF     SP          10          90              10        10
AF3    SP          70          90              70        70
AF2    SP          70          90              20        10
AF1    SP          70          90              0         10
BE     SP          70          Not configured  0         0
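The strict-priority walk-through above can be checked with a small calculation. The sketch below is a simplified model of this example only; queue order, rates, and the 80 Mbit/s group cap are taken from the text, and the function is not router code:

```python
# Hedged sketch of Example 1: strict-priority allocation with a
# share-shaping cap on the AF3+AF1 group. Rates are in Mbit/s.

def sp_allocate(queues, sq_pir, group, group_pir):
    """queues: (name, input_rate, pir) tuples in strict-priority order;
    group: queue names under share shaping, jointly capped at group_pir."""
    remaining, group_left, out = sq_pir, group_pir, {}
    for name, rate, pir in queues:
        give = min(rate, pir, remaining)
        if name in group:
            give = min(give, group_left)
            group_left -= give
        out[name] = give
        remaining -= give
    return out

# Mode A: plain SP order EF > AF3 > AF2 > AF1 > BE.
mode_a = sp_allocate(
    [("EF", 10, 90), ("AF3", 70, 90), ("AF2", 70, 90),
     ("AF1", 70, 90), ("BE", 70, float("inf"))],
    sq_pir=100, group={"AF3", "AF1"}, group_pir=80)

# Mode B: the AF3+AF1 group takes AF3's (higher) priority, so AF1 is
# served right after AF3, before AF2.
mode_b = sp_allocate(
    [("EF", 10, 90), ("AF3", 70, 90), ("AF1", 70, 90),
     ("AF2", 70, 90), ("BE", 70, float("inf"))],
    sq_pir=100, group={"AF3", "AF1"}, group_pir=80)

print(mode_a)  # EF 10, AF3 70, AF2 20, AF1 0, BE 0
print(mode_b)  # EF 10, AF3 70, AF1 10, AF2 10, BE 0
```

Both results match the Mode A and Mode B columns of the table above.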

Example 2: Assume that WFQ scheduling applies to the EF, AF1, AF2, and AF3 queues in Example 1, with a weight ratio of 1:1:1:2 (EF:AF3:AF2:AF1), and that LPQ scheduling applies to the BE queue. The PIR of 100 Mbit/s is ensured for the SQ. The input rate of the EF and AF3 queues is 10 Mbit/s each, and that of each other queue is 70 Mbit/s. Share shaping allocates bandwidth to the queues in either of the following modes:
 Mode A: The WFQ scheduling applies to all queues.
First-round WFQ scheduling:
− The bandwidth allocated to the EF queue is calculated as follows: 1 / (1 + 1 + 1 + 2)
x 100 Mbit/s=20 Mbit/s. The input rate of the EF queue, however, is only 10 Mbit/s.
Therefore, the remaining bandwidth is 90 Mbit/s.
− The bandwidth allocated to the AF3 queue is calculated as follows: 1 / (1 + 1 + 1 +
2) x 100 Mbit/s=20 Mbit/s. The input rate of the AF3 queue, however, is only 10
Mbit/s. Therefore, the remaining bandwidth is 80 Mbit/s.
− The bandwidth allocated to the AF2 queue is calculated as follows: 1 / (1 + 1 + 1 +
2) x 100 Mbit/s=20 Mbit/s. Therefore, the AF2 queue obtains the 20 Mbit/s
bandwidth, and the remaining bandwidth becomes 60 Mbit/s.
− The bandwidth allocated to the AF1 queue is calculated as follows: 2 / (1 + 1 + 1 + 2) x 100 Mbit/s = 40 Mbit/s. Therefore, the AF1 queue obtains the 40 Mbit/s bandwidth, and the remaining bandwidth becomes 20 Mbit/s.
Second-round WFQ scheduling:
− The bandwidth allocated to the AF2 queue is calculated as follows: 1 / (1 + 2) x 20
Mbit/s = 6.7 Mbit/s.
− The bandwidth allocated to the AF1 queue is calculated as follows: 2 / (1 + 2) x 20
Mbit/s = 13.3 Mbit/s.
No bandwidth is remaining, and the BE queue obtains no bandwidth.
 Mode B: The AF3 and AF1 queues, as a whole, are scheduled with the EF and AF2
queues using the WFQ scheduling. The weight ratio is calculated as follows: EF:
(AF3+AF1):AF2 = 1:(1+2):1 = 1:3:1.
First-round WFQ scheduling:
− The bandwidth allocated to the EF queue is calculated as follows: 1 / (1 +3 + 1) x
100 Mbit/s=20 Mbit/s. The input rate of the EF queue, however, is only 10 Mbit/s.
Therefore, the EF queue actually obtains the 10 Mbit/s bandwidth, and the
remaining bandwidth is 90 Mbit/s.
− The bandwidth allocated to the AF3 and AF1 queues, as a whole, is calculated as
follows: 3 / (1 + 3 + 1) x 100 Mbit/s = 60 Mbit/s. Therefore, the remaining
bandwidth becomes 30 Mbit/s. The 60 Mbit/s bandwidth allocated to the AF3 and
AF1 queues as a whole are further allocated to each in the ratio of 1:2. The 20
Mbit/s bandwidth is allocated to the AF3 queue. The input rate of the AF3 queue,
however, is only 10 Mbit/s. Therefore, the AF3 queue actually obtains the 10 Mbit/s
bandwidth, and the remaining 50 Mbit/s bandwidth is allocated to the AF1 queue.
− The bandwidth allocated to the AF2 queue is calculated as follows: 1 / (1 +3 + 1) x
100 Mbit/s=20 Mbit/s. Therefore, the AF2 queue obtains the 20 Mbit/s bandwidth,
and the remaining bandwidth becomes 10 Mbit/s.
Second-round WFQ scheduling:
− The bandwidth allocated to the AF3 and AF1 queues as a whole is calculated as
follows: 3 / (3 + 1) x 10 Mbit/s=7.5 Mbit/s. The 7.5 Mbit/s bandwidth, not
exceeding the share shaping bandwidth, can be all allocated to the AF3 and AF1
queues as a whole. The PIR of the AF3 queue has been ensured. Therefore, the 7.5
Mbit/s bandwidth is allocated to the AF1 queue.
− The bandwidth allocated to the AF2 queue is calculated as follows: 1 / (3 + 1 ) x 10
Mbit/s = 2.5 Mbit/s.
No bandwidth is remaining, and the BE queue obtains no bandwidth.
The following table shows the bandwidth allocation results.
Queue  Scheduling  Input Rate  PIR             Output Bandwidth (Mbit/s)
       Algorithm   (Mbit/s)    (Mbit/s)        Mode A    Mode B
EF     WFQ         10          90              10        10
AF3    WFQ         10          90              10        10
AF2    WFQ         70          90              26.7      22.5
AF1    WFQ         70          90              53.3      57.5
BE     LPQ         70          Not configured  0         0
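The two-round WFQ arithmetic above is an instance of weighted water-filling: each unsatisfied queue repeatedly receives its weighted share of the leftover bandwidth. A minimal sketch reproducing mode A of this example (illustrative only, not router code):

```python
# Weighted water-filling sketch of Example 2, mode A (WFQ, weights
# EF:AF3:AF2:AF1 = 1:1:1:2, SQ bandwidth 100 Mbit/s).

def wfq_allocate(demands, weights, capacity):
    """Repeatedly give each unsatisfied queue its weighted share of the
    leftover bandwidth, capped by its remaining demand."""
    alloc = {q: 0.0 for q in demands}
    active = set(demands)
    while active:
        left = capacity - sum(alloc.values())
        if left <= 1e-9:
            break
        total_w = sum(weights[q] for q in active)
        done = set()
        for q in active:
            give = min(left * weights[q] / total_w, demands[q] - alloc[q])
            alloc[q] += give
            if demands[q] - alloc[q] <= 1e-9:
                done.add(q)
        if not done:      # nobody hit their demand: bandwidth is used up
            break
        active -= done
    return alloc

demands = {"EF": 10, "AF3": 10, "AF2": 70, "AF1": 70}
weights = {"EF": 1, "AF3": 1, "AF2": 1, "AF1": 2}
alloc = wfq_allocate(demands, weights, capacity=100)
print({q: round(v, 1) for q, v in alloc.items()})
# EF and AF3 get their 10 Mbit/s; AF2 ends at 20 + 20/3 = 26.7 and
# AF1 at 40 + 40/3 = 53.3, matching the mode A column above.
```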

HQoS Applications and Classification


Use the following HQoS applications as examples:
 No QoS configuration: If QoS is not configured on an outbound interface and the default configuration is used, the scheduling path is 1 port queue -> 1 VI queue -> 1 GQ -> 1 SQ -> 1 FQ. Only one queue is scheduled at each layer. Therefore, the scheduling path can be considered a FIFO queue.
 Distinguishing only service priorities: If an outbound interface is configured only to distinguish service priorities, the scheduling path is 1 port queue -> 1 VI queue -> 1 GQ -> 1 SQ -> 8 FQs. Therefore, the scheduling path can be considered port -> FQ.
 Distinguishing service priorities + users: As shown in the following figure, an L3
gateway is connected to an RTN that has three base stations attached. To distinguish the
three base stations on GE 1/0/0 of the L3 gateway and services of different priorities
from the three base stations, configure the hierarchy architecture as port -> base station
-> base station service, corresponding to the scheduling path: port -> 1 VI queue -> 1 GQ
-> 3 SQs -> 8 FQs.

 Distinguishing priorities + users + aggregation devices: As shown in the following figure, an L3 gateway is connected to two RTNs (aggregation devices), each of which has three base stations attached. To distinguish the two RTNs on GE 1/0/0 of the L3 gateway, the three base stations on each RTN, and services of different priorities on the three base stations, configure the hierarchy architecture as port -> RTN -> base station -> base station services, corresponding to the scheduling path: 1 port -> 1 VI queue -> 2 GQs -> 3 SQs -> 8 FQs.

HQoS can be classified as interface-based HQoS, class-based HQoS, or profile-based HQoS.


 Interface-based HQoS
Interface-based HQoS allows an interface or a sub-interface to function as a user. All packets on an interface or sub-interface belong to an SQ that is configured on the interface. Interface-based HQoS supports upstream and downstream scheduling.
As shown in Figure 1.1, GE 1/0/0.1 and GE 1/0/0.2 on a router access VLAN 1 and
VLAN 2, respectively. For VLAN 1, ensure that the EF and AF traffic rates are
respectively 10 Mbit/s and 30 Mbit/s, the sum bandwidth is 100 Mbit/s, and a maximum
burst traffic rate is 120 Mbit/s. For VLAN 2, ensure that the EF traffic rate is 50 Mbit/s,
the sum bandwidth is 200 Mbit/s, and a maximum burst traffic rate is 220 Mbit/s. The
maximum sum bandwidth of VLAN 1 and VLAN 2 is 320 Mbit/s.
Figure 1.1 Interface-based HQoS

You can configure interface-based HQoS to meet the preceding requirements. An interface has only one VLAN user that has multiple services. The scheduling architecture can be configured as port -> VLAN -> service type, corresponding to the scheduling path: port -> SQ -> FQ.
 Class-based HQoS
Class-based HQoS uses multi-field (MF) classification to classify packets that require
HQoS scheduling and treats all packets that match the classification rule as a user.
An SQ is configured in the traffic behavior view, then a traffic policy that defines the
traffic behavior is applied to an interface.
Class-based HQoS takes effect only for upstream queues.
As shown in Figure 1.2, two interfaces on a router connect to RTNs, and each RTN has
three base stations attached, each of which runs services of different priorities. The
router, as an L3 gateway, is required to distinguish the three base stations and services of
different priorities on the three base stations.
Figure 1.2 Class-based HQoS

You can configure class-based HQoS to meet the preceding requirements. An interface has two user groups (two RTNs), each user group has three users (three base stations), and each base station runs multiple services. The hierarchical architecture is configured as port -> RTN -> base station -> base station services, corresponding to the scheduling path: port -> GQ -> SQ -> FQ.
 Profile-based HQoS
Traffic that enters different interfaces can be scheduled in an SQ.
Profile-based HQoS implements QoS scheduling management for access users by
defining various QoS profiles and applying the QoS profiles to interfaces. A QoS profile
is a set of QoS parameters (such as the queue bandwidth and flow queues) for a specific
user queue.
Profile-based HQoS supports upstream and downstream scheduling.
As shown in Figure 1.3, the router, as an edge device on an ISP network, accesses a local area network (LAN) through Eth-Trunk 1. The LAN houses 1000 users that have VoIP, IPTV, and common Internet services. Eth-Trunk 1.1000 accesses VoIP services; Eth-Trunk 1.2000 accesses IPTV services; Eth-Trunk 1.3000 accesses other services. The 802.1p value in the outer VLAN tag identifies the service type (802.1p value 5 for VoIP services and 802.1p value 4 for IPTV services). The VID in the inner VLAN tag, which ranges from 1 to 1000, identifies the user. The VIDs of Eth-Trunk 1.1000 and Eth-Trunk 1.2000 are 1000 and 2000, respectively. It is required that the sum bandwidth of each user be restricted to 120 Mbit/s, that the CIR be 100 Mbit/s, and that the bandwidth allocated to VoIP and IPTV services of each user be 60 Mbit/s and 40 Mbit/s, respectively. Other services are not provided with any bandwidth guarantee.
Figure 1.3 Profile-based HQoS

You can configure profile-based HQoS to meet the preceding requirements. Only traffic with the same inner VLAN ID enters the same SQ. Therefore, 1000 SQs are created. Traffic with the same inner VLAN ID but different outer VLAN IDs enters different FQs in the same SQ.
Configuration example:
flow-queue test
queue ef pq shaping 60000
queue af4 wfq shaping 40000

qos-profile qp1
user-queue cir 100 pir 120 flow-queue test

interface Eth-Trunk1.1000
trust 8021p
control-vid 1000 qinq-termination
vlan-group 1
qinq termination pe-vid 1000 ce-vid 1 to 1000 vlan-group 1
ip binding vpn-instance voip
ip address 168.0.1.1 255.255.255.0
qos-profile qp1 inbound pe-vid 1000 ce-vid 1 to 1000 identifier ce-vid group
group1
qos-profile qp1 outbound pe-vid 1000 ce-vid 1 to 1000 identifier ce-vid group
group1

interface Eth-Trunk1.2000
control-vid 2000 qinq-termination
vlan-group 1
qinq termination pe-vid 2000 ce-vid 1 to 1000 vlan-group 1
ip binding vpn-instance iptv
ip address 168.0.2.1 255.255.255.0
qos-profile qp1 inbound pe-vid 2000 ce-vid 1 to 1000 identifier ce-vid group
group1
qos-profile qp1 outbound pe-vid 2000 ce-vid 1 to 1000 identifier ce-vid group
group1
interface Eth-Trunk1.3000
control-vid 3000 qinq-termination
vlan-group 1
qinq termination pe-vid 3000 ce-vid 1 to 1000 vlan-group 1
ip binding vpn-instance others
ip address 168.0.1.1 255.255.255.0
qos-profile qp1 inbound pe-vid 3000 ce-vid 1 to 1000 identifier ce-vid group
group1
qos-profile qp1 outbound pe-vid 3000 ce-vid 1 to 1000 identifier ce-vid group
group1

identifier ce-vid: indicates that traffic with the same CE-VID shares the SQ bandwidth.
group group1: by default, the system allocates SQ bandwidth based on interfaces, and SQs on different interfaces cannot share the bandwidth. Profile-based HQoS allows traffic from one user to be distributed across different interfaces. Therefore, a group must be specified so that SQs on different interfaces can share the same board resource. In this example, traffic of one SQ is distributed across different interfaces, so a group must be specified.
In addition to the preceding configuration, the trust 8021p command must be configured on the inbound interface for return traffic.
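The queue selection described above can be sketched as a lookup: the inner VLAN ID (CE-VID) selects the user's SQ, and the outer 802.1p value selects the FQ. Everything below (the names and the 802.1p-to-FQ map) is a hypothetical illustration based only on this example's text:

```python
# Hypothetical sketch of per-user queue selection in the example:
# CE-VID 1-1000 picks the SQ; outer 802.1p picks the FQ
# (5 -> VoIP/ef, 4 -> IPTV/af4, anything else -> be).

P8021P_TO_FQ = {5: "ef", 4: "af4"}

def select_queues(ce_vid, outer_8021p):
    assert 1 <= ce_vid <= 1000          # one SQ per user in this example
    sq = f"sq-user-{ce_vid}"
    fq = P8021P_TO_FQ.get(outer_8021p, "be")
    return sq, fq

print(select_queues(7, 5))  # ('sq-user-7', 'ef'): VoIP of user 7
print(select_queues(7, 4))  # ('sq-user-7', 'af4'): IPTV of user 7
print(select_queues(7, 0))  # ('sq-user-7', 'be'): other services
```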

5.6 QoS Implementations on Different Boards


Item: eTM sub-card (for detailed differences between downstream TM scheduling and eTM scheduling, see Table 2.2)
 Type-A: Partially supported
 Type-B: Partially supported
 Type-C: Not supported
 Type-D: Partially supported
To check whether a PIC is an eTM sub-card, run the display device pic-status command in any view. If the PIC is an eTM card, the Type field in the command output contains "_T_CARD".
<HUAWEI> display device pic-status
Pic-status information in Chassis 1:
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
SLOT PIC Status     Type                   Port_count Init_result Logic_down
1    1   Registered ETH_20XGF_NB_CARD      20         SUCCESS    SUCCESS
2    0   Registered LAN_WAN_2x10GF_T_CARD  2          SUCCESS    SUCCESS
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
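Scripted, the check above reduces to scanning the Type column for the "_T_CARD" substring. A hedged sketch that assumes the column layout of the sample output (not an official tool):

```python
# Find (slot, pic) pairs whose Type field marks an eTM sub-card.

SAMPLE = """\
SLOT PIC Status     Type                   Port_count Init_result Logic_down
1    1   Registered ETH_20XGF_NB_CARD      20         SUCCESS    SUCCESS
2    0   Registered LAN_WAN_2x10GF_T_CARD  2          SUCCESS    SUCCESS
"""

def etm_pics(text):
    found = []
    for line in text.splitlines()[1:]:      # skip the header row
        fields = line.split()
        if len(fields) >= 4 and "_T_CARD" in fields[3]:
            found.append((fields[0], fields[1]))
    return found

print(etm_pics(SAMPLE))  # [('2', '0')]: the LAN_WAN_2x10GF_T_CARD PIC
```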
Item: Depth of flow queue
 Type-B: The maximum configurable value is 1000 Kbytes. Example:
 [HUAWEI] flow-wred test
 [HUAWEI-flow-wred-test] queue-depth 1000
 Type-A, Type-C, and Type-D: The maximum configurable value is 32768 Kbytes. Example:
 [HUAWEI] flow-wred test
 [HUAWEI-flow-wred-test] queue-depth 32768
Item: Depth of port queue
 Type-B: The maximum configurable value is 8000 Kbytes. Example:
 [HUAWEI] port-wred test
 [HUAWEI-port-wred-test] queue-depth 8000
 Type-A, Type-C, and Type-D: The maximum configurable value is 131072 Kbytes. Example:
 [HUAWEI] port-wred test
 [HUAWEI-port-wred-test] queue-depth 131072
Item: CQ congestion avoidance
 Type-A: Tail drop by default.
 Type-B: Tail drop by default.
 Type-C: WRED by default. Both low-limit and high-limit are 64 Mbit, and discard-percentage is 98%, by default.
 Type-D: WRED by default. Both low-limit and high-limit are 1 Mbit, and discard-percentage is 100%, by default.
6 MPLS QoS

About This Chapter


6.1 MPLS QoS Overview
6.2 MPLS DiffServ
6.3 MPLS DiffServ Configuration
6.4 MPLS-TE
6.5 MPLS DiffServ-Aware TE
6.6 MPLS VPN QoS
6.7 QoS Implementations on Different Boards

6.1 MPLS QoS Overview


Multiprotocol Label Switching (MPLS) uses label-based forwarding to replace traditional route-based forwarding. MPLS has a powerful and flexible routing function and can meet the requirements of various applications for the network. MPLS can be implemented on various physical media, such as Ethernet, PPP, ATM, and Frame Relay (FR).
Currently, MPLS is widely used on large-scale networks. Therefore, quality of service (QoS) for MPLS networks must be deliberately deployed.
MPLS establishes label switched paths (LSPs) to implement connection-oriented forwarding. QoS for LSPs provides QoS guarantee for data flows transmitted over LSPs. Therefore, the DiffServ and IntServ models are applied to MPLS networks. The combination of MPLS and IntServ forms Multiprotocol Label Switching Traffic Engineering (MPLS TE), and the combination of MPLS and DiffServ forms MPLS DiffServ.

MPLS TE - Combination of MPLS and IntServ


IntServ uses the Resource Reservation Protocol (RSVP) to apply for resources across the entire network and maintains a forwarding state for each data flow, which hinders extensibility. Therefore, IntServ is not widely deployed. RFC 3209, however, extends RSVP by allowing RSVP PATH messages to carry label requests and RSVP RESV messages to support
label allocation. The extended RSVP is called Resource Reservation Protocol-Traffic Engineering (RSVP-TE). RSVP-TE allows MPLS to control the path that traffic traverses and to reserve resources during LSP establishment so that traffic can bypass congested nodes. This method of balancing network traffic is called MPLS TE.
MPLS TE controls the path that traffic traverses but cannot identify services. Traffic is transmitted along LSPs regardless of service priorities. Therefore, if the actual traffic rate exceeds the specification, the requirements of QoS-sensitive services are not met. MPLS TE alone therefore cannot provide QoS guarantee.

MPLS DiffServ - Combination of MPLS and DiffServ


The DiffServ model can distinguish services based on packet contents and allows packets with high priorities to be forwarded preferentially. Therefore, DiffServ is widely used on MPLS networks.
However, DiffServ reserves resources only on a single node and cannot specify the bandwidth for each service in advance. When the traffic rate exceeds the allowed bandwidth, high-priority services are forwarded preferentially at the cost of increased delay and packet loss for low-priority services. In the case of severe traffic congestion, even high-priority services are delayed or dropped. Therefore, MPLS DiffServ alone can hardly provide end-to-end QoS guarantee or allow services to comply with the Service Level Agreement (SLA).

MPLS DS-TE - Combination of MPLS TE and MPLS DiffServ


MPLS DiffServ-aware Traffic Engineering (DS-TE) combines MPLS TE and MPLS DiffServ
to allow services to be distinguished and resources to be preferentially allocated to high-
priority services. MPLS DS-TE can establish customized LSPs for various services to ensure
sufficient bandwidth for high-priority services.

VPN QoS - MPLS QoS Application on MPLS VPNs


VPN QoS combines MPLS QoS and MPLS VPN to serve networks that bear services of various priorities. VPN QoS distinguishes services of different priorities and ensures that high-priority services are forwarded preferentially. This guarantees QoS for important services on VPNs.
DiffServ, RSVP-TE, and MPLS VPN can be jointly used based on actual requirements to isolate services, distinguish services of different priorities, ensure bandwidth resources for important services or important VPNs, and forward packets on VPNs or MPLS TE tunnels based on packet priorities. This provides a solid technical basis for carriers to develop voice, video, and SLA-compliant VPN services.

6.2 MPLS DiffServ


MPLS DiffServ Traffic Classification
In the DiffServ model, traffic classification is implemented at network edges to classify packets into multiple priorities or service classes. If the IP precedence fields in IP headers are used to identify packets, the packets can be classified into eight (2^3) classes. If the DSCP fields in IP headers are used, the packets can be classified into 64 (2^6) classes. On each node through which packets pass, the DSCP or IP precedence fields in IP headers are checked to determine the per-hop behavior (PHB) of the packets.
On an MPLS network, however, the label switching router (LSR) does not check the IP
header information. Therefore, traffic classification cannot be implemented based on the TOS
or DSCP fields of packets. RFC 3270 defines two schemes for traffic classification on an
MPLS network.

Scheme 1: E-LSP
The EXP-Inferred-PSC LSP (E-LSP) scheme uses the 3-bit EXP value in an MPLS header to
determine the PHB of the packets. Figure 1.1 shows an MPLS header.

Figure 1.1 MPLS header

The EXP value can be copied from the DSCP or IP precedence in an IP packet or be set by
MPLS network carriers.
The label determines the forwarding path, and the EXP determines the PHB.
The E-LSP scheme is applicable to networks that support no more than eight PHBs. The precedence field in an IP header also has three bits, the same length as the EXP field, so one IP precedence value corresponds exactly to one EXP value. The DSCP field in an IP header, however, has six bits, so multiple DSCP values correspond to one EXP value. By default, the three left-most bits of the DSCP field (the class selector bits) map to the EXP value, regardless of the three right-most bits.
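The default copy behavior described above keeps only the three most significant DSCP bits. A minimal sketch of that bit arithmetic (illustrative only):

```python
# Map a 6-bit DSCP to a 3-bit EXP by keeping the three left-most bits,
# so every block of 8 consecutive DSCP values shares one EXP value.

def dscp_to_exp(dscp):
    assert 0 <= dscp <= 63      # DSCP is 6 bits
    return dscp >> 3            # drop the three right-most bits

print(dscp_to_exp(46))  # EF (101110b) -> 5
print(dscp_to_exp(10))  # AF11 (001010b) -> 1
print(dscp_to_exp(0))   # BE -> 0
```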
During traffic classification, the EXP value in an MPLS packet is mapped to the scheduling precedence and drop precedence. Except for traffic classification, QoS operations on an MPLS network, such as traffic shaping, traffic policing, and congestion avoidance, are implemented in the same manner as on an IP network.

Figure 1.2 E-LSP

When the MPLS packet is leaving the LSR, the scheduling precedence and drop precedence
are mapped back to the EXP value for further EXP-based operations on the network.

For more details about the default mapping between the EXP value, service class, and color on Huawei
routers, see 3.3.2 QoS Priority Mapping.


Scheme 2: L-LSP
The Label-Only-Inferred-PSC LSP (L-LSP) scheme uses labels to transmit PHB information.
The EXP field has only three bits, and therefore cannot be used alone to identify more than
eight PHBs. Instead, only the 20-bit label in an MPLS header can be used to identify more
than eight PHBs. The L-LSP is applicable to networks that support more than eight PHBs.
During packet forwarding, the label determines the forwarding path and scheduling behaviors
of the packets; the EXP carries the drop precedence. Therefore, the label and EXP both
determine the PHB. PHB information needs to be transmitted during LSP establishment. The
L-LSPs can transmit single-PHB flow, and also multi-PHB flow that has packets of the same
scheduling behavior but different drop precedences.

Comparison Between the Two Schemes


In the L-LSP scheme, an LSP must be established for each type of services from the ingress
LSR to the egress LSRs. An L-LSP supports only one type of service.
In the E-LSP scheme, only one LSP needs to be established between the ingress and egress
LSRs to support up to eight PHBs.

Table 1.1 Comparison between E-LSPs and L-LSPs


E-LSP:
 The EXP determines the PHB (including the drop precedence).
 No additional signaling is needed.
 Not supported on ATM links.
 Each LSP supports up to eight behavior aggregates (BAs).

L-LSP:
 The label and EXP determine the PHB.
 Signaling is needed to establish LSPs.
 Supported on ATM links.
 Each LSP supports only one BA.

The principles for selecting L-LSPs or E-LSPs are as follows:


 Link layer: On PPP networks or LANs, E-LSPs, L-LSPs, or both can be used. On ATM
networks, the EXP value is invisible, and therefore only L-LSPs can be used.
 Service type: Up to eight PHBs are supported if only E-LSPs are used. To support more
than eight PHBs, you must use L-LSPs or use both E-LSPs and L-LSPs.
 Network load: Using E-LSPs reduces the LSP quantity, label resource consumption, and
signaling. Using L-LSPs is more resource-consuming.
Generally, a network provides a maximum of four types of services, which can be transmitted
using E-LSPs. L-LSPs are used on ATM networks or on networks that require QoS guarantees
for various services with different drop precedences. Huawei routers support E-LSPs only.

CoS Processing in MPLS DiffServ


The DiffServ model allows transit nodes in a DS domain to check and modify the IP
precedence, DSCP, or EXP value, which is called the class of service (CoS). Therefore, the
CoS value may vary during packet transmission.


Carriers need to determine whether to trust the CoS information in an IP or MPLS packet that
is entering an MPLS network or is leaving an MPLS network for an IP network. RFC 3270
defines three modes for processing the CoS: Uniform, Pipe, and Short Pipe.

Uniform Mode
When carriers determine to trust the CoS value (IP precedence or DSCP) in a packet from an
IP network, the Uniform mode can be used. The MPLS ingress LSR copies the CoS value in
the packet to the EXP field in the MPLS outer header to ensure the same QoS on the MPLS
network. When the packet is leaving the MPLS network, the egress LSR copies the EXP value
back to the IP precedence or DSCP in the IP packet.

Figure 1.1 Uniform mode

As its name implies, Uniform mode ensures the same priority of packets on the IP and MPLS
networks. Priority mapping is performed for packets when they are entering or leaving an
MPLS network. Uniform mode has disadvantages. If the EXP value in a packet changes on an
MPLS network, the PHB for the packet that is leaving the MPLS network changes
accordingly. In this case, the original CoS of the packet does not take effect.


Figure 1.2 CoS change in Uniform mode

Pipe Mode
When carriers determine not to trust the CoS value in a packet from an IP network, the Pipe
mode can be used. The MPLS ingress delivers a new EXP value to the MPLS outer header,
and the QoS guarantee is provided based on the newly-set EXP value from the MPLS ingress
to the egress. The CoS value is used only after the packet leaves the MPLS network.

Figure 1.1 Pipe mode

In Pipe mode, the MPLS ingress does not copy the IP precedence or DSCP to the EXP field
for a packet that enters an MPLS network. Similarly, the egress does not copy the EXP value
to the IP precedence or DSCP for a packet that leaves an MPLS network. If the EXP value in
a packet changes on an MPLS network, the change takes effect only on the MPLS network.
When a packet leaves an MPLS network, the original CoS continues to take effect.

Short Pipe Mode


The Short Pipe mode is an enhancement of the Pipe mode. Packet processing on the MPLS
ingress in Short Pipe mode is the same as that in Pipe mode. The egress, however, pops the
label before implementing QoS scheduling. Packets are therefore scheduled based on the
carrier-defined CoS value from the MPLS ingress to the penultimate hop, and based on their
original CoS value on the MPLS egress.

Figure 1.1 Short Pipe mode

In Pipe or Short Pipe mode, carriers can define a desired CoS value for QoS implementation
on the carriers' own network, without changing the original CoS value of packets.
The difference between Pipe mode and Short Pipe mode lies in the QoS marking for the
outgoing traffic from a PE to a CE. In Pipe mode, outgoing traffic is scheduled based on a
CoS value defined by carriers, whereas outgoing traffic uses the original CoS value in Short
Pipe mode, as shown in Figure 1.2.
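The three tunnel modes can be summarized in a simplified model. The sketch below is illustrative, not actual device behavior: it tracks the 3-bit IP precedence of a packet crossing an MPLS domain, with the hypothetical `remarked_exp` parameter simulating an EXP change inside the MPLS network, and returns the final precedence after the domain plus the CoS the egress schedules on:

```python
def forward(ip_prec, mode, carrier_exp, remarked_exp=None):
    """Simplified model of the RFC 3270 tunnel modes for a packet
    crossing an MPLS domain. Returns (final_prec, egress_sched):
    the IP precedence after the domain and the CoS value the
    egress uses for scheduling."""
    # Ingress: Uniform trusts the packet's CoS; Pipe/Short Pipe
    # use a carrier-defined EXP instead.
    exp = ip_prec if mode == "uniform" else carrier_exp
    if remarked_exp is not None:        # EXP changed inside the domain
        exp = remarked_exp
    if mode == "uniform":
        return exp, exp                 # EXP copied back: change leaks out
    if mode == "pipe":
        return ip_prec, exp             # egress schedules on carrier CoS
    return ip_prec, ip_prec             # short pipe: original CoS on egress

# A remark inside the domain changes the packet's final CoS only
# in Uniform mode; Pipe and Short Pipe preserve the original CoS.
```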


Figure 1.2 Difference between Pipe and Short Pipe

6.3 MPLS DiffServ Configuration


BA and PHB
This section describes MPLS DiffServ configurations, which involve BA and PHB
definitions. For details about BA and PHB definitions, see 3.3.3 BA and PHB.

BA and PHB Implementations in an MPLS DiffServ Scenario


Three modes are defined in MPLS DiffServ scenarios: Uniform, Pipe, and Short Pipe.
 In Uniform mode, the original priorities of packets are used for QoS implementation on
an MPLS network. The original priorities of packets are reset when the packets are
leaving the MPLS network.
 In Pipe or Short Pipe mode, packets are allocated a desired priority on the MPLS
network, regardless of their original priorities. After leaving the backbone network, the
original priorities of packets remain unchanged.
− In Pipe mode, the egress on the MPLS network processes a packet based on its re-
allocated priorities.
− In Short Pipe mode, the egress on the MPLS network processes a packet based on
its original priorities.


Figure 1.1 BA and PHB implementation on a PE

Figure 1.2 BA and PHB implementation on a P

 Both BA and PHB implementations are required on the network-side (NNI) interface of
a PE. Therefore, run the trust upstream command on the network-side interface.
 Both BA and PHB implementations are required on the network-side (NNI) and user-
side (UNI) interfaces of a P. Therefore, run the trust upstream command on both the
network-side and user-side interfaces.
 On the UNI interface of a PE, implement BA for upstream traffic and one of the
following operations for downstream traffic:
− In Uniform mode, PHB is required. To be specific, in Uniform mode, both BA and
PHB implementations are required on the PE. Therefore, run the trust upstream
command on the UNI interface too.
− In Pipe or Short Pipe mode, the UNI interface of the PE requires BA but not PHB.
Therefore, run the diffserv-mode { pipe service-class color | short-pipe service-
class color } command on the UNI interface. Note that the diffserv-mode
command is mutually exclusive with the trust upstream command.

 In V600R002 or earlier versions, PHB is enabled by default. In Pipe or Short Pipe mode, run the qos
phb disable command on the user-side interface of the PE to disable PHB.
 In V600R003 or later versions, PHB is disabled by default.

DSCP Remarking Rules in MPLS VPN Scenarios


The DSCP value remains unchanged in L2 forwarding (including on VPLS/VLL ingress and
egress PE nodes).
The DSCP value remains unchanged in MPLS forwarding (on P nodes).


The DSCP value may be changed on an L3VPN ingress or egress PE node. For details, see
Table 1.1.

Table 1.1 DSCP remarking rules on an MPLS L3VPN PE node (● indicates configured, ○
indicates not configured)

○ trust upstream, ○ qos phb enable:
The DSCP remains unchanged.

● trust upstream with ○ trust 8021p, or ● qos phb enable with ○ trust 8021p:
 On the ingress PE node: the DSCP is set according to the <service-class, color> of the
packet and the downstream priority mapping table if the outbound board is Type-B or
Type-C, and remains unchanged if the outbound board is of another type.
 On the egress PE node in Pipe or Short Pipe mode: the DSCP remains unchanged.
 On the egress PE node in Uniform mode: the DSCP is set according to the <service-class,
color> of the packet and the downstream priority mapping table.

● trust upstream, ● trust 8021p:
 If the outbound board is Type-B: the DSCP remains unchanged on the egress PE node in
Pipe or Short Pipe mode; in other scenarios, the DSCP is set according to the
<service-class, color> of the packet.
 If the outbound board is of another type: the DSCP remains unchanged.

● qos phb enable, ● trust 8021p:
 On the ingress PE node: the DSCP is set according to the <service-class, color> of the
packet and the downstream priority mapping table if the outbound board is Type-B or
Type-C, and remains unchanged if the outbound board is of another type.
 On the egress PE node in Pipe or Short Pipe mode: the DSCP remains unchanged.
 On the egress PE node in Uniform mode: the DSCP remains unchanged if the outbound
board is Type-A, and is set according to the <service-class, color> of the packet and the
downstream priority mapping table if the outbound board is of another type.

Notes:
 In this table, setting the DSCP "according to the <service-class, color> of the packet and
the downstream priority mapping table" applies only when the PHB action is performed
on the outbound board. If the PHB action is not performed, the DSCP remains
unchanged. For the rules on when the PHB action is performed, see the sections "Remark
and PHB Symbols" and "Rules for PHB Action".
 On an MPLS L3VPN ingress PE node in Pipe or Short Pipe mode:
− Configure the trust upstream command with the vpn-mode parameter on the
outbound interface to keep the DSCP unchanged if the NNI board is Type-C.
− Configure the qos phb disable command on the NNI interface to keep the DSCP
unchanged if the NNI board is Type-B. (After this command is run, common IP
packets transmitted from the NNI interface to other UNI interfaces also skip the
PHB action, so this method applies only to MPLS packets. There is no perfect
solution for an NNI interface on which MPLS and IP packets are transmitted in a
mixed manner.)


The preceding configurations do not take the 802.1p value into account. If the 802.1p value needs to be
considered, run the qos phb { disable | enable } vlan command based on actual requirements and board
differences.

For More Information


 Which Priority Field is Trusted
 Which Priority Field of the Inbound Packet Is Reset in PHB Action
 Rules for Marking the EXP Field of New-added MPLS Header

6.4 MPLS-TE
Overview
Multiprotocol label switching traffic engineering (MPLS TE) integrates the MPLS technology
with traffic engineering. It can reserve resources by setting up label switched paths (LSPs) for
a specified path in an attempt to avoid network congestion and balance network traffic.
MPLS TE provides a wide range of attributes, including assured bandwidth, explicit path,
affinity attribute, priority and preemption, and fast reroute (FRR).

Introduction
On traditional IP networks, devices select the shortest path as the route regardless of other
factors, such as the bandwidth. The shortest path may be congested with traffic, whereas other
available paths are idle.

Figure 1.1 Traditional routing

In the example shown in Figure 1.1, links have the same metric. The shortest path from R1
(R8) to R5 is R1 (R8) → R2 → R3 → R4 → R5. Data is forwarded along this shortest path
though other paths exist. In this manner, the path R1 (R8) → R2 → R3 → R4 → R5 may be
congested because of overload, and the path R1 (R8) → R2 → R6 → R7 → R4 → R5 may be
idle.
Conventional TE solutions to congestion are as follows:


 Controlling the network traffic by adjusting the link metric


To solve the preceding problem, you can increase the link metric of the path R2 → R3 →
R4 to be larger than that of the path R2 → R6 → R7 → R4. In this manner, the traffic
can be led to the path R1 (R8) → R2 → R6 → R7 → R4 → R5.

Figure 1.2 Routing after a metric change

This method eliminates the congestion on the link R1 (R8) → R2 → R3 → R4 → R5;


however, the other link R1 (R8) → R2 → R6 → R7 → R4 → R5 may be congested. In
addition, the metric is difficult to adjust on a network with complex topology because the
change of a link affects multiple routes.
 Redirecting some traffic by setting up virtual circuits (VCs) in an overlay model
The current Interior Gateway Protocols (IGPs) are topology driven and consider only the
static network connection, regardless of the dynamic factors, such as bandwidth
availability and traffic characteristics.
The overlay model, such as IP over Asynchronous Transfer Mode (ATM) or IP over
Frame Relay (FR) can complement IGP disadvantages. The overlay model provides a
virtual topology over a physical topology for a network. This implementation helps
reasonably adjust traffic and implement QoS features. The overlay model, however, is of
high cost and low expansibility.
To implement TE on a large-scale network, a scalable and simple solution is required. As an
overlay model, MPLS can easily establish a virtual topology on a physical network and map
traffic to the virtual topology. Therefore, the MPLS TE technology that combines MPLS and
TE is developed.
MPLS TE has its own strength in dealing with network congestion. With MPLS TE, carriers
can exactly control the path through which traffic passes to allow traffic to bypass congested
nodes. This addresses the problem that some paths are overloaded and others are idle, making
full use of bandwidth resources. In addition, MPLS TE reserves resources in the establishment
of LSP tunnels to ensure the QoS.

Principles
 Information Advertisement
MPLS TE resolves the congestion due to unbalanced load. In addition to network
topology information, TE must be aware of the load information of a network. To
advertise link status including the maximum link bandwidth, the maximum reservable
bandwidth, the current reserved bandwidth, and the link color, MPLS TE extends the
current IGP. For example, MPLS TE introduces new TLVs into IS-IS or new LSAs into
OSPF.
By extending the IGP, MPLS TE maintains link attributes and topology attributes on
each LSR to form a TE DataBase (TEDB).
 Path Calculation
After the TEDB is created through information advertisement, each MPLS TE
ingress obtains the network load information and selects paths that meet the specified
constraints. MPLS TE allows carriers to specify a path on each ingress. For example,
carriers can define a node that traffic must pass through or a node that traffic must
bypass. The nodes can be specified hop by hop, or only some hops can be specified. In
addition, constraints such as bandwidth can be specified.
MPLS TE uses the Constrained Shortest Path First (CSPF) algorithm to calculate the
shortest path that meets the specified constraints based on the TEDB information.
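The prune-then-shortest-path idea behind CSPF can be illustrated with a toy implementation. The `cspf` function name and the link-tuple format are assumptions for this sketch; real CSPF also evaluates affinities, explicit hops, and priorities. Links whose reservable bandwidth cannot satisfy the constraint are removed before an ordinary Dijkstra run:

```python
import heapq

def cspf(links, src, dst, bw_needed):
    """Toy CSPF: prune links whose reservable bandwidth is below the
    constraint, then run a plain shortest-path search on the rest.
    links: iterable of (node_u, node_v, metric, reservable_bw)."""
    adj = {}
    for u, v, metric, bw in links:
        if bw >= bw_needed:                      # constraint pruning
            adj.setdefault(u, []).append((v, metric))
            adj.setdefault(v, []).append((u, metric))
    best = {src: 0}
    heap = [(0, src, [src])]
    while heap:
        d, node, path = heapq.heappop(heap)
        if node == dst:
            return path                          # first pop of dst is shortest
        if d > best.get(node, float("inf")):
            continue                             # stale entry
        for nxt, metric in adj.get(node, ()):
            nd = d + metric
            if nd < best.get(nxt, float("inf")):
                best[nxt] = nd
                heapq.heappush(heap, (nd, nxt, path + [nxt]))
    return None                                  # no path meets the constraint
```

With the figure's topology (equal metrics), a small bandwidth request follows R1 → R2 → R3 → R4 → R5, while a request exceeding the reservable bandwidth on R2 → R3 is routed via R6 and R7.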
 RSVP-TE
After calculating the shortest path from the ingress to the egress, MPLS TE uses the
extended Resource Reservation Protocol (RSVP) signaling, RSVP-TE, to dynamically
set up constraint-based routed LSPs (CR-LSPs).
RSVP-TE extends RSVP as follows:
− RSVP-TE adds the label request object to the Path message to support label requests
and the label object to the Resv message to support label allocation.
− The extended RSVP messages can carry label binding information and information
about constraints during path selection. This information helps set up RSVP-TE
CR-LSPs.
− In addition, RSVP-TE maintains resource reservation using the summary refresh
(Srefresh) and Hello mechanism, reducing information to be transmitted and
processed for maintaining RSVP soft state and lightening the device workload.
 LSP Establishment
Figure 1.1 shows how a CR-LSP is set up.

Figure 1.1 LSP establishment

1. The ingress uses the CSPF algorithm to calculate the shortest path that meets
specific constraints, such as the bandwidth, explicit path, and color, and generates a
Path message. The Path message carries the constraints.
2. The Path message is sent from the ingress to the egress along the shortest path
calculated using CSPF. Path State Blocks (PSBs) are created on the transit LSRs.
3. After receiving the Path message, the egress returns a Resv message. The Resv
message, carrying resource reservation information, is sent to the ingress hop by
hop. Each transit LSR creates and maintains a Reservation State Block (RSB) and
allocates a label.
4. When the Resv message reaches the ingress, the LSP is set up.


MPLS TE Features
 Explicit Path
MPLS TE creates an LSP based on a specific path, and this path is called an explicit
path. Explicit paths are classified as strict explicit paths or loose explicit paths.
− Strict explicit path
A strict explicit path means that a hop is directly connected to its next hop. By
specifying a strict explicit path, the most accurate path is provided for a CR-LSP.

Figure 1.1 Strict explicit path

As shown in Figure 1.1, an LSP is set up over a strict explicit path from the ingress
LSRA to the egress LSRF. "B strict" indicates that the LSP must pass through
LSRB and the previous hop of LSRB is LSRA. "C strict" indicates that the LSP
must pass through LSRC and the previous hop of LSRC is LSRB. In this manner,
an accurate path is provided for the LSP.
− Loose explicit path
A loose explicit path contains the specified nodes through which an LSP must pass;
however, other routers can exist between nodes.

Figure 1.2 Loose explicit path

As shown in Figure 1.2, an LSP is set up over a loose explicit path from the ingress
LSRA to the egress LSRF. "D loose" indicates that the LSP must pass through
LSRD and nodes can exist between LSRD and LSRA.


MPLS TE can set up an LSP over a strict or loose explicit path.


 LSP Priority Preemption
If no path meeting the bandwidth requirement of a desired CR-LSP is available, a device
tears down an established CR-LSP and uses the bandwidth assigned to that CR-LSP to
establish a desired CR-LSP. This is called preemption. On an RSVP-TE tunnel, one end
initiates bandwidth preemption by sending Resv messages.
CR-LSPs use two priority attributes, namely, setup and holding priorities, to determine
whether to preempt resources. The priority ranges from 0 to 7. The lowest priority is 7.
The priority and the preemption attributes are used in conjunction to determine resource
preemption among tunnels. When multiple CR-LSPs are to be set up, the LSP with a
higher setup priority preempts resources and is set up preferentially. When the bandwidth
is insufficient, resources for an established LSP with a low holding priority may be
preempted by an LSP with a high setup priority.
For example, when a new path Path 1 is set up and must compete with the established
Path 2 for resources, Path 1 can succeed in preemption only when its setup priority is
higher than the holding priority of Path 2.
Therefore, to ensure the establishment of a CR-LSP, its setup priority cannot be higher
than its holding priority; otherwise, endless preemption occurs among LSPs, causing
route flapping.
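The setup/holding comparison above can be captured in two hypothetical helper functions (0 is the highest priority, 7 the lowest):

```python
def can_preempt(new_setup_prio: int, existing_holding_prio: int) -> bool:
    """A new CR-LSP may take bandwidth from an established one only
    when its setup priority is strictly higher (numerically lower)
    than the established CR-LSP's holding priority."""
    for p in (new_setup_prio, existing_holding_prio):
        if not 0 <= p <= 7:
            raise ValueError("priority must be in the range 0-7")
    return new_setup_prio < existing_holding_prio

def valid_config(setup_prio: int, holding_prio: int) -> bool:
    """A CR-LSP's setup priority must not be higher than its own
    holding priority; otherwise LSPs could endlessly preempt one
    another, causing route flapping."""
    return setup_prio >= holding_prio
```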
 Administrative Group and Affinity Attributes
An administrative group is a 32-bit vector representing a set of link attributes. In RFC
3209, administrative groups are called link-attributes.
The affinity attribute is a 32-bit vector representing the color of the TE link. After a
tunnel is configured with an affinity property, a device compares the affinity property
with the administrative group attribute during link selection to determine whether a link
with specified attributes is selected or not. An MPLS TE tunnel uses a 32-bit mask to
represent the affinity property bits to be compared. The comparison rules are as follows:
− For the bits that are 1 in the mask: if a bit in the affinity attribute is 1, at least one
of the corresponding bits in the administrative group must be 1; if a bit in the
affinity attribute is 0, the corresponding bit in the administrative group cannot be 1.
Assume that the affinity attribute is 0x0000FFFF and its mask is 0xFFFFFFFF. The
16 left-most bits of the administrative group attribute can only be 0, and at least one
of the 16 right-most bits must be 1. This means that the administrative group
attribute ranges from 0x00000001 to 0x0000FFFF.
− If some bits in a mask are 0, the corresponding bits in an administrative group are
not compared with the affinity property bits.
Assume that an affinity property is 0xFFFFFFFF and its mask is 0xFFFF0000. At
least one of the 16 left-most bits in an administrative group attribute is 1 and the 16
right-most bits can be 0 or 1. This means that the administrative group attribute
ranges from 0x00010000 to 0xFFFFFFFF.
After the affinity attribute is configured on the ingress, it is passed to every node using
the RSVP-TE protocol.
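One possible reading of these comparison rules as bit operations is sketched below; `link_matches` is an illustrative name, not a device API, and the interpretation of the rules is an assumption based on the two examples above:

```python
def link_matches(admin_group: int, affinity: int, mask: int) -> bool:
    """TE link selection: compare a link's 32-bit administrative
    group against a tunnel's affinity attribute and mask. Within the
    masked bits, at least one bit that is 1 in the affinity must be 1
    in the administrative group, and no bit that is 0 in the affinity
    may be 1 in the administrative group. Unmasked bits are ignored."""
    considered = admin_group & mask
    if considered & ~affinity & 0xFFFFFFFF:
        return False        # a masked bit set in the group is 0 in the affinity
    if affinity & mask and not (considered & affinity):
        return False        # none of the masked affinity 1-bits is set
    return True
```

Checked against the document's examples: with affinity 0x0000FFFF and mask 0xFFFFFFFF, administrative groups 0x00000001 through 0x0000FFFF match; with affinity 0xFFFFFFFF and mask 0xFFFF0000, groups 0x00010000 through 0xFFFFFFFF match.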
 Make-before-break
Make-before-break is a mechanism for changing the attributes of an MPLS TE tunnel
under the following conditions: a new path is established before the original path is
deleted; no data is lost when traffic is switched; and no additional bandwidth is
consumed. Make-before-break is implemented through the shared explicit (SE) resource
reservation style.
The new CR-LSP may compete with the primary CR-LSP on some shared links for
bandwidth. The new CR-LSP cannot be established if it fails the competition. Using the
make-before-break mechanism, the system does not need to calculate the bandwidth
reserved for the new path. The new path uses the bandwidth of the original path. The
overlapped path consumes no additional bandwidth, but the path not overlapped still
consumes additional bandwidth.

Figure 1.3 Make-before-break

In the example shown in Figure 1.3, the maximum reservable bandwidth is 60 Mbit/s. A
CR-LSP along the path R1 -> R2 -> R3 -> R4 is set up with the bandwidth 40 Mbit/s.
The path is expected to change to R1 -> R5 -> R3 -> R4 to forward data because R5 has
a light load. The available bandwidth of the R3 → R4 link is only 20 Mbit/s, which
cannot meet the requirement of 40 Mbit/s. To resolve this problem, the make-before-
break mechanism is used to allow the newly established path R1 → R5 → R3 → R4 to
reuse the bandwidth reserved by the original CR-LSP on the shared R3 → R4 link. After
the new CR-LSP is established, traffic is switched to the new path, and the original
CR-LSP is deleted.
Increasing the bandwidth of a tunnel works similarly: if the reservable bandwidth of the
shared link increases sufficiently, a new path can be established.
Use the case in Figure 1.3 as an example. The maximum reservable bandwidth is 60
Mbit/s, and a CR-LSP along the path R1 → R2 → R3 → R4 is set up with a bandwidth
of 30 Mbit/s. The path is expected to change to R1 → R5 → R3 → R4 because R5 has a
light load, and the new CR-LSP requires 40 Mbit/s. The available bandwidth of the
R3 → R4 link is only 30 Mbit/s, which cannot meet the requirement of 40 Mbit/s. The
make-before-break mechanism allows the newly established path R1 → R5 → R3 → R4
to reuse the bandwidth of the original CR-LSP on the shared R3 → R4 link: of the 40
Mbit/s required, 30 Mbit/s is taken over from the original path. After the new CR-LSP is
established, traffic is switched to it, and the original CR-LSP is deleted.
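The bandwidth accounting behind make-before-break can be sketched as follows. The function name and link representation are assumptions for illustration; the test numbers follow the figure (60 Mbit/s reservable per link, a 40 Mbit/s CR-LSP re-routed over the shared R3 → R4 link):

```python
def mbb_feasible(new_path, new_bw, old_path, old_bw, available):
    """Shared-explicit-style check for make-before-break: on links
    shared with the original CR-LSP, the old reservation is reused,
    so only the extra bandwidth (if any) must still be available.
    Paths are node lists; 'available' maps (node, node) links to
    their currently unreserved bandwidth."""
    old_links = set(zip(old_path, old_path[1:]))
    for link in zip(new_path, new_path[1:]):
        reused = old_bw if link in old_links else 0   # shared link reuses it
        extra = max(0, new_bw - reused)
        if extra > available.get(link, 0):
            return False
    return True
```

In the figure's scenario, the new path succeeds only because the shared R3 → R4 link reuses the original 40 Mbit/s reservation; a plain reservation of 40 Mbit/s on that link would fail, since only 20 Mbit/s remains unreserved.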
 Automatic Bandwidth Adjustment
When establishing a CR-LSP, a device assigns bandwidth to the CR-LSP according to
the service level specification (SLS) or traffic conditioning specification (TCS). Based on
traffic statistics collected on this CR-LSP, automatic bandwidth adjustment allows the
bandwidth consumed by this CR-LSP to be adjusted. The adjustment does not affect
the current traffic on the CR-LSP.
Through regular sampling, such as sampling every five minutes, you can obtain the
average bandwidth of this CR-LSP in a sampling period. After sampling many times in a
period, such as 24 hours, you can calculate a new bandwidth based on the average value
of the sampled values to establish a CR-LSP with different bandwidth. After the CR-LSP
is established, the traffic is switched to the new CR-LSP, and the original CR-LSP is
deleted. If the CR-LSP fails to be established, the traffic is still in the original CR-LSP.
The bandwidth is adjusted after the next sampling period.
To avoid unnecessary adjustment, you can configure an adjustment threshold. The
bandwidth is adjusted only when the ratio of the current maximum average bandwidth to
the previous maximum average bandwidth reaches the threshold. You can also set the
maximum and minimum bandwidth; the adjusted bandwidth must be within this range.
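A simplified version of this adjustment logic might look like the following. The function signature and the use of the maximum sampled average are assumptions for illustration; a real implementation samples continuously and re-signals the CR-LSP via make-before-break:

```python
def adjusted_bandwidth(samples, current_bw, threshold_pct, min_bw, max_bw):
    """Derive a new CR-LSP bandwidth from periodic average-rate
    samples (e.g. one per 5-minute window over 24 hours). The
    adjustment is applied only when the change relative to the
    current bandwidth reaches threshold_pct percent, and the result
    is clamped to the configured [min_bw, max_bw] range."""
    peak = max(samples)                              # maximum average rate seen
    change_pct = abs(peak - current_bw) / current_bw * 100
    if change_pct < threshold_pct:
        return current_bw                            # below threshold: keep as is
    return min(max(peak, min_bw), max_bw)            # clamp to configured range
```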
 MPLS TE FRR
To ensure continuity of services, MPLS TE introduces the CR-LSP backup mechanism
and FRR mechanism. If a link fault occurs, traffic can be switched in time.
In TE FRR, bypass tunnels are pre-established, not using failed links or nodes, to protect
the primary CR-LSP. When a link or node fails, traffic is transmitted over the bypass CR-
LSP and the ingress can simultaneously initiate the setup of the primary CR-LSP without
interrupting data transmission.
MPLS TE FRR has two protection modes, link protection and node protection, as shown
in Figure 1.4 and Figure 1.5.

Figure 1.4 MPLS TE FRR link protection

Figure 1.5 MPLS TE FRR node protection

− Link protection: A Point of Local Repair (PLR) and a Merge Point (MP) are
connected through a link, over which a primary CR-LSP is set up. When this link
fails, traffic is switched to the bypass CR-LSP.
− Node protection: A PLR and an MP are connected through a node, and a primary
CR-LSP passes through the node. When the node fails or the link between the PLR
and the node fails, traffic is switched to the bypass CR-LSP.
 CR-LSP Backup


Important LSPs need to be backed up. If a primary CR-LSP fails, traffic will be switched
to the backup CR-LSP.
A backup CR-LSP protects a primary CR-LSP in the same tunnel. If the ingress detects
that the primary CR-LSP is unavailable, the ingress switches traffic to a backup CR-LSP.
After the primary CR-LSP recovers, traffic is switched back.

Figure 1.6 CR-LSP backup

CR-LSP backup modes are as follows:


− Hot standby: The backup CR-LSP is set up immediately after the primary CR-LSP
is set up. If the primary CR-LSP fails, traffic is immediately switched to the backup
CR-LSP. If the primary CR-LSP recovers, traffic switches back to the primary
CR-LSP.
− Ordinary backup: The backup CR-LSP is set up only after the primary CR-LSP
fails, and then takes over traffic from the primary CR-LSP. If the primary CR-LSP
recovers, traffic switches back to the primary CR-LSP.
− Best-effort path: When both primary and backup CR-LSPs fail, a temporary CR-
LSP, also called a best-effort path, is set up and traffic is switched to it.
CR-LSP backup is an end-to-end tunnel protection technology.
 Tunnel Protection Group
A tunnel protection group uses a backup tunnel (protection tunnel) to protect a primary
tunnel (working tunnel).
A tunnel protection group has the following protection modes:
− 1:1 protection mode: Each working tunnel has its own protection tunnel. In normal
cases, traffic is transmitted along the working tunnel. When the working tunnel
fails, traffic is switched to the protection tunnel.

Figure 1.7 1:1 protection mode


− N:1 protection mode: A tunnel functions as the protection tunnel for multiple
working tunnels. When any of the working tunnels fails, the data is switched to the
shared protection tunnel.

Figure 1.8 N:1 protection mode

Like CR-LSP backup, the tunnel protection group is also an end-to-end tunnel protection
technology. Differences between CR-LSP backup and the tunnel protection group are as
follows:
− The attributes of tunnels in a tunnel protection group are irrelevant to each other.
For example, the protection tunnel with the bandwidth 10 Mbit/s can protect the
working tunnel with the bandwidth 100 Mbit/s. In CR-LSP backup, however,
except the TE FRR attribute, the attributes such as bandwidth, setup priority, and
holding priority of the primary CR-LSP are the same as those of the backup CR-
LSP.
− A tunnel protection group uses one tunnel to protect another tunnel, whereas CR-
LSP backup allows the primary and backup CR-LSPs to be created over one tunnel.

6.5 MPLS DiffServ-Aware TE


Background
 Advantages and Disadvantages of MPLS TE
Multiprotocol label switching traffic engineering (MPLS TE) uses available resources to
establish a label switched path (LSP), and therefore provides guaranteed bandwidth for
traffic. MPLS TE can also precisely control traffic paths so that current bandwidth can be
fully used.
MPLS TE, however, cannot provide differentiated QoS guarantees for traffic of different
types. When both voice and video traffic is transmitted, video frames may be
retransmitted over a long period of time, so it may be required that video traffic be of a
higher drop precedence than voice traffic. MPLS TE, however, does not classify traffic
and processes voice and video traffic with the same drop precedence.


Figure 1.1 MPLS TE

 Advantages and Disadvantages of the MPLS DiffServ model


The DiffServ model classifies user services and performs differentiated traffic
forwarding behaviors based on the service class, meeting different QoS requirements.
The DiffServ model offers excellent scalability. Data streams of multiple services are
mapped to a limited number of service classes, so the amount of state to be maintained is
proportional to the number of service classes rather than to the volume of data streams.
The DiffServ model, however, can reserve resources only on a single node. End-to-end
QoS cannot be guaranteed.
 Disadvantages of Using Both MPLS DiffServ and MPLS TE
In some application scenarios, using MPLS DiffServ or MPLS TE alone cannot meet
requirements.
For example, a link carries both voice and data services. To ensure the quality of voice
services, you must lower voice traffic delays. The sum delay is calculated based on this
formula: Sum delay = Delay in processing packets + Delay in transmitting packets. The
delay in processing packets is calculated based on this formula: Delay in processing
packets = Forwarding delay + Queuing delay. When the path is specified, the delay in
transmitting packets remains unchanged. To shorten the sum delay for voice traffic,
reduce the delay in processing voice packets on each hop. When traffic congestion
occurs, the more packets, the longer the queue, and the higher the delay in processing
packets. Therefore, you must restrict the voice traffic on each link.
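The two delay formulas above can be put together in a short sketch (illustrative Python with hypothetical per-hop values in milliseconds, not measured figures):

```python
# Sum delay = delay in processing packets + delay in transmitting packets,
# where per-hop processing delay = forwarding delay + queuing delay.
# All values below are illustrative (milliseconds), not measurements.

def sum_delay_ms(per_hop_forwarding, per_hop_queuing, transmission):
    """Total one-way delay across a fixed path."""
    processing = sum(f + q for f, q in zip(per_hop_forwarding, per_hop_queuing))
    return processing + transmission

# Three hops; congestion mainly inflates the queuing term, which is why
# voice traffic must be restricted on each link to keep queues short.
print(round(sum_delay_ms([0.1, 0.1, 0.1], [0.5, 2.0, 0.5], 4.0), 1))  # 7.3
```

Because the transmission delay is fixed once the path is chosen, only the queuing term is controllable, which motivates the per-link traffic restriction described above.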
If the MPLS DiffServ model is used together with MPLS TE in this case, services are
distinguished, and a specific MPLS TE LSP is configured for each type of service.
However, when a link or node fails on the network, the network topology changes, or an
LSP is preempted, the voice traffic rate on a link may still exceed the specification, and
end-to-end QoS cannot be guaranteed.


Figure 1.2 Using both MPLS TE and MPLS DiffServ

As shown in Figure 1.2, the bandwidth of each link is 100 Mbit/s, and all links share the
same metric. Voice traffic is transmitted from R1 to R4 and from R2 to R4 at the rate of
60 Mbit/s and 40 Mbit/s, respectively. Traffic from R1 to R4 is transmitted along the LSP
over the path R1 → R3 → R4, with the ratio of voice traffic being 60% between R3 and
R4. Traffic from R2 to R4 is transmitted along the LSP over the path R2 → R3 → R7 →
R4, with the ratio of voice traffic being 40% between R7 and R4.
When the link between R3 and R4 fails, as shown in Figure 1.3, the LSP between R1 and
R4 switches to the path R1 → R3 → R7 → R4 because this path is the shortest path with
sufficient bandwidth. At this time, the ratio of voice traffic from R7 to R4 reaches 100%,
causing the sum delay of voice traffic to increase.

Figure 1.3 Networking after a link fails

MPLS DiffServ-Aware Traffic Engineering (DS-TE) can resolve this problem.


What Is MPLS DS-TE


MPLS DS-TE combines MPLS TE and MPLS DiffServ to provide QoS guarantee.
The class type (CT) is used in DS-TE to allocate resources based on the service class. To
provide differentiated services, DS-TE divides the LSP bandwidth into one to eight parts, each
part corresponding to a service class. Such a collection of bandwidths of an LSP or a group of
LSPs with the same service class are called a CT. DS-TE maps traffic with the same per-hop
behavior (PHB) to one CT and allocates resources to each CT.
If one LSP corresponds to multiple CTs and carries traffic with various service classes, this
LSP is called a multi-CT LSP. The IETF defines that DS-TE can support up to eight CTs,
marked as CTi, in which i ranges from 0 to 7.
If an LSP corresponds to one CT, this LSP is called a single-CT LSP.
Multi-CT LSPs can be used for the scenario shown in Figure 1.2. VoIP and HSI services from
R1 to R4 use different CTs on one MPLS TE tunnel. VoIP and HSI services from R2 to R4
use different CTs on another MPLS TE tunnel. Then the ratios of different services remain
balanced, and the ratio of voice services stays within a proper range, as shown in Figure 1.1.

Figure 1.1 MPLS DS-TE

When the link from R3 to R4 fails, VoIP and HSI services from R1 to R4 are switched to the
path R1 → R3 → R5 → R6 → R4, as shown in Figure 1.2. Voice services from R1 to R4 can
also be controlled within a proper range.


Figure 1.2 After the link fails

MPLS DS-TE Principles


 Extension of DS-TE
A DS-TE LSP is set up based on the CT. When DS-TE calculates an LSP, it needs to take
CTs and available bandwidth of each CT as constraints. When DS-TE reserves resources
for the label switching routers (LSRs) along an LSP, it needs to consider CTs and their
bandwidth requirements. Therefore, the IETF extends the Interior Gateway Protocol
(IGP) and the Resource Reservation Protocol (RSVP).
− RFC 4124 extends the IGP by introducing the optional bandwidth constraints sub-
TLVs and redefining the original unreserved bandwidth sub-TLVs. These sub-TLVs
are used to report and collect information about reservable bandwidths of CTs with
different priorities on a link.
− In addition, the IETF extends RSVP by defining a CLASSTYPE object for the Path
message in RFC 4124 and defining an EXTENDED_CLASSTYPE object in draft-
minei-diffserv-te-multi-class.
 LSP Preemption and TE-class Mapping
DS-TE uses the same preemption mode as MPLS TE. If no path meeting the bandwidth
requirement of a desired LSP is available, a device can tear down an established LSP and
use the bandwidth assigned to that LSP to establish a desired LSP.
DS-TE also uses two priority attributes, setup and holding priorities, to determine
whether to preempt resources. The two priorities can be called a preemption priority. The
value of the priority is an integer ranging from 0 to 7. The smaller the value, the higher
the priority.
DS-TE allocates a preemption priority to each CT in the format of <CT, priority>, which
is called a TE-class. A TE-class can be used to manage LSP preemption relationships in a
unified manner.
CTs and preemption priorities can be used in any combination. There are 64 (8 x 8) TE-
classes theoretically. Huawei routers support a maximum of eight TE classes. The TE
classes can be named as TE-Class[i], with i ranging from 0 to 7. TE classes are grouped
into a TE-class mapping table. Huawei routers are preconfigured with a default TE-class
mapping table. Carriers can modify the TE-class mapping table.


TE-Class       CT     Priority

TE-Class[0]    CT0    0
TE-Class[1]    CT1    0
TE-Class[2]    CT2    0
TE-Class[3]    CT3    0
TE-Class[4]    CT0    7
TE-Class[5]    CT1    7
TE-Class[6]    CT2    7
TE-Class[7]    CT3    7

An LSP can be set up only when both <CT, setup-priority> and <CT, holding-priority>
exist in the TE-class mapping table.
If the TE-class mapping table of a node contains only TE-Class [0] = <CT0, 6> and TE-
Class [1] = <CT0, 7>, only three types of LSPs can be set up:
− Class-Type = CT0, setup-priority = 6, holding-priority = 6
− Class-Type = CT0, setup-priority = 7, holding-priority = 6
− Class-Type = CT0, setup-priority = 7, holding-priority = 7
In the establishment of a CR-LSP, when configuring the ingress or reserving resources
along the nodes of the LSP, you need to take the TE-class mapping table into account.
Otherwise, the CR-LSP cannot be set up. It is recommended that all LSRs on the MPLS
network be configured with the same TE-class mapping table.
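The two-entry example above can be reproduced with a short check. Besides requiring both <CT, setup-priority> and <CT, holding-priority> to exist in the table, MPLS TE conventionally requires that the setup priority not be higher than the holding priority, which is why <CT0, setup 6, holding 7> is absent from the list of three permitted LSP types. A minimal Python sketch (the function and table names are illustrative, not a device API):

```python
# An LSP can be set up only if both <CT, setup-priority> and <CT, holding-priority>
# exist in the TE-class mapping table. MPLS TE additionally requires that the setup
# priority not be higher than the holding priority (numerically setup >= holding,
# since 0 is the highest priority).
TE_CLASS_TABLE = {("CT0", 6), ("CT0", 7)}  # the two-entry example from the text

def lsp_allowed(ct, setup_prio, holding_prio):
    return ((ct, setup_prio) in TE_CLASS_TABLE
            and (ct, holding_prio) in TE_CLASS_TABLE
            and setup_prio >= holding_prio)

allowed = [(s, h) for s in range(8) for h in range(8) if lsp_allowed("CT0", s, h)]
print(allowed)  # [(6, 6), (7, 6), (7, 7)] — the three permitted LSP types
```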
 DS-TE LSP Establishment
The DS-TE LSP establishment is similar to the MPLS TE LSP establishment except for
the following differences:
− The Path message carries CT information.
− After the LSRs receive Path messages carrying CT information, the LSRs check
whether the <CT, priority> exists in the local TE-class mapping table and whether
the bandwidth requirements in the CT information are met. If both conditions are
met, a new DS-TE LSP can be set up.
− After an LSP is set up, each LSR starts to calculate the available bandwidth for each
CT. The reserved information is sent to the IGP module to advertise to other LSRs
on the network.


Figure 1.1 DS-TE LSP establishment

Bandwidth Constraints Model


The following concepts are related to the MPLS DS-TE tunnel:
 Actual bandwidth of a link: the actual bandwidth (interface rate) of a link over which the
tunnel is set up. For example, the default rate of a GE electrical interface is 1000 Mbit/s,
which means that the actual bandwidth of the link connected to the interface is 1000 Mbit/s.
 Maximum reservable bandwidth of a link: the bandwidth reserved by a link for all MPLS
TE tunnels that pass through the link. The maximum reservable bandwidth of a link
cannot be higher than its actual bandwidth.
 CT bandwidth: the bandwidth of a specific service on a DS-TE tunnel.
 BCi bandwidth: the bandwidth reserved for all CTi (i ranges from 0 to 7) on a link. The
Bandwidth Constraint (BC) bandwidth refers to the bandwidth constraints on a link,
whereas the CT bandwidth refers to the bandwidth constraints on a DS-TE tunnel.

Figure 1.1 Different types of bandwidth

The Bandwidth Constraints Model (BCM) is used to define the maximum number of BCs,
which CTs can use the bandwidth of each BC, and how to use BC bandwidth.
Currently, the IETF defines the following BCMs:
 MAM


The Maximum Allocation Model (MAM) maps a BC to a CT, and CTs do not share
bandwidth resources.

Figure 1.2 MAM model

In the MAM, the sum of CTi LSP bandwidths does not exceed BCi (i ranges from 0 to
7); the sum of bandwidths of all LSPs of all CTs does not exceed the maximum
reservable bandwidth of the link.
For example, a link with the bandwidth of 100 Mbit/s adopts the MAM and supports 3
CTs (CT0, CT1, and CT2). BC0, which is 20 Mbit/s, carries CT0 (BE flows); BC1,
which is 50 Mbit/s, carries CT1 (AF flows); BC2, which is 30 Mbit/s, carries CT2 (EF
flows). In this case, the total LSP bandwidths that are used to transmit BE flows cannot
exceed 20 Mbit/s; the total LSP bandwidths that are used to transmit AF flows cannot
exceed 50 Mbit/s; the total LSP bandwidths that are used to transmit EF flows cannot
exceed 30 Mbit/s.

Figure 1.3 MAM example

In the MAM, bandwidth preemption between CTs does not occur, but certain bandwidth
resources may be wasted.
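The MAM admission rule in this example can be sketched as follows (a simplified illustration using the example's numbers, not how the router implements admission control):

```python
# MAM: each CTi draws only from its own BCi, and the total across all CTs must
# also stay within the link's maximum reservable bandwidth. Values mirror the
# 100 Mbit/s example in the text (BC0=20, BC1=50, BC2=30).
MAX_RESERVABLE = 100
BC = {0: 20, 1: 50, 2: 30}

def mam_admit(reserved, ct, request):
    """reserved: dict ct -> Mbit/s already reserved on the link."""
    if reserved.get(ct, 0) + request > BC[ct]:
        return False  # would exceed this CT's own bandwidth constraint
    return sum(reserved.values()) + request <= MAX_RESERVABLE

reserved = {0: 15, 1: 40, 2: 0}
print(mam_admit(reserved, 0, 10))  # False: CT0 would reach 25 > BC0 (20 Mbit/s)
print(mam_admit(reserved, 2, 30))  # True: CT2 stays within BC2 and the link total
```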
 RDM
The Russian Dolls Model (RDM) permits bandwidth sharing between CTs.
The bandwidth of BC0 cannot exceed the maximum reservable bandwidth of the link.
Nesting relationships exist among BCs. As shown in Figure 1.4, the bandwidth of BC2 is
fixed; the bandwidth of BC1 nests the bandwidth of BC2; the bandwidth of BC0 nests
the bandwidth of all BCs. This model is similar to a Russian doll. A large doll nests a
smaller doll and then this smaller doll nests a much smaller doll, and so on.


Figure 1.4 RDM model

For example, a link with the bandwidth of 100 Mbit/s adopts the RDM and supports 3
CTs (CT0, CT1, and CT2). CT0, CT1, and CT2 are used to transmit BE flows, AF flows,
and EF flows, respectively. BC0 is 100 Mbit/s; BC1 is 50 Mbit/s; BC2 is 20 Mbit/s. In
this case, the total LSP bandwidths that are used to transmit EF flows cannot exceed 20
Mbit/s; the total LSP bandwidths that are used to transmit EF flows and AF flows cannot
exceed 50 Mbit/s; the total LSP bandwidths that are used to transmit BE, AF, and EF
flows cannot exceed 100 Mbit/s.

Figure 1.5 RDM example

The RDM allows bandwidth preemption among CTs. If 0 ≤ m < n ≤7 and 0 ≤ i < j ≤ 7,
the preemption relationship among CTs is: CTi with the priority m can preempt the
bandwidth of CTi with the priority n and the bandwidth of CTj with the priority n. The
total LSP bandwidth of CTi, however, cannot exceed the bandwidth of BCi.
In the RDM, the bandwidth is efficiently used, but CTs cannot be isolated and the
bandwidth of each CT is ensured using the preemption mechanism.
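The RDM nesting rule can be sketched similarly (admission check only; the preemption mechanism described above is omitted, and the numbers mirror the example):

```python
# RDM: BCs nest — for each i, the total bandwidth of all CTj with j >= i must
# not exceed BCi. Values mirror the example (BC0=100, BC1=50, BC2=20).
BC = {0: 100, 1: 50, 2: 20}
NUM_CTS = 3

def rdm_admit(reserved, ct, request):
    """reserved: dict ct -> Mbit/s already reserved on the link."""
    for i in range(ct + 1):  # every constraint that nests this CT
        nested = sum(reserved.get(j, 0) for j in range(i, NUM_CTS))
        if nested + request > BC[i]:
            return False
    return True

reserved = {0: 40, 1: 20, 2: 15}
print(rdm_admit(reserved, 2, 10))  # False: CT2 would reach 25 > BC2 (20 Mbit/s)
print(rdm_admit(reserved, 1, 5))   # True: CT1+CT2 = 40 <= BC1, total 80 <= BC0
```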
 Extended-MAM
The Extended-MAM is a bandwidth allocation mode that supports E-LSP.
The extended-MAM supports eight implicit TE-classes, formed by combining CT0 with
priorities 0 through 7. The eight implicit CTs are flooded through the Unreserved
Bandwidth TLV by the IGP module.


Comparison Between BCMs


 RDM
− One BC is mapped to one or more CTs.
− CTs cannot be isolated; the CT bandwidth is ensured using preemption.
− The bandwidth is used efficiently.
− Not recommended on networks that do not allow preemption.
 MAM/Extended-MAM
− One BC is mapped to one CT, which is easy to understand and manage.
− Different CTs are isolated; the CT bandwidth is ensured without preemption.
− The bandwidth may be wasted.
− Applicable to networks that do not allow preemption.

IETF Mode and Non-IETF Mode


Before the IETF defined DS-TE (called IETF DS-TE), Huawei had developed its own DS-TE
(called non-IETF DS-TE).
 IETF mode: The IETF mode is defined by the IETF and supports 64 TE-classes by
combining eight CTs and eight priorities. Huawei routers support up to eight TE-classes.
 Non-IETF mode: The non-IETF mode is defined by Huawei and supports 16 TE-classes
by combining two CTs and eight priorities. The MPLS TE tunnel in non-IETF mode is
called a common MPLS TE tunnel, and the device that supports the non-IETF mode is
called a non-DS-TE device.

Intercommunication Between Devices in Different DS-TE Modes


In actual network deployment or device upgrade, devices in non-IETF mode may need to
communicate with devices in IETF mode. Currently the following interworking operations
are supported:
 Interworking between a DS-TE device and a non-DS-TE device
− Setup of a non-DS-TE tunnel from a non-DS-TE device to a DS-TE device
− Setup of a non-DS-TE tunnel from a DS-TE device to a non-DS-TE device.
 When a Huawei device is communicating with a non-Huawei DS-TE device that does
not support the CLASSTYPE object, the following CT information in the Path message
can be parsed:
− CT information about the L-LSP carried in the EXTENDED_CLASSTYPE object
− CT0 information carried in the EXTENDED_CLASSTYPE object.

Switching Between DS-TE Modes


The IETF mode and the non-IETF mode can be switched. The switching behaviors are as
follows:
 Changes of bandwidth constraint models
− Non-IETF mode to IETF mode: The bandwidth constraints model is unchanged.
− IETF mode to non-IETF mode: The extended-MAM is switched to the MAM.
 Changes of the TE-class mapping table
− Non-IETF mode to IETF mode: If a TE-class mapping table is configured, the
configured table is used. Otherwise, the default one is used.
− IETF mode to non-IETF mode: No TE-class mapping table is used. If a TE-class
mapping table is configured, it is not deleted. If no TE-class mapping table is
configured, the default TE-class mapping table is deleted.
 Deletion of LSPs
− Non-IETF mode to IETF mode: LSPs whose <CT, setup-priority> or <CT,
holding-priority> combinations are not in the TE-class mapping table are deleted
from the ingress and transit nodes.
− IETF mode to non-IETF mode: Multi-CT LSPs, and single-CT LSPs with a CT
ranging from CT2 to CT7, are deleted from the ingress and transit nodes.

Differences Between the IETF Mode and Non-IETF Mode


 Bandwidth model
− Non-IETF mode: Supports the MAM and RDM.
− IETF mode: Supports the RDM, MAM, and extended-MAM.
 CT type
− Non-IETF mode: Supports CT0 and CT1 in single-CT mode. CT0 and CT1 cannot
both be configured.
− IETF mode: Supports CT0 to CT7 in multi-CT mode. Eight CTs can be configured
together.
 BC type
− Non-IETF mode: Supports BC0 and BC1.
− IETF mode: Supports BC0 to BC7.
 TE-class mapping table
− Non-IETF mode: The TE-class mapping table can be configured but does not take
effect. It is used only to house established LSPs and prevent them from being
deleted.
− IETF mode: The TE-class mapping table can be configured and takes effect.
 IGP message
− Non-IETF mode: The priority-based reservable bandwidth is carried in the
Unreserved Bandwidth sub-TLV.
− IETF mode: The CT information is carried in the Unreserved Bandwidth sub-TLV
and the Bandwidth Constraints sub-TLV.
 RSVP message
− Non-IETF mode: The CT information is carried in the ADSPEC object.
− IETF mode: Single-CT information is carried in the CLASSTYPE object; multi-CT
information is carried in the EXTENDED_CLASSTYPE object.

DS-TE in Tunnel Protection


DS-TE is a combination of MPLS TE and MPLS DiffServ and therefore inherits the MPLS
TE tunnel protection mechanism. MPLS TE supports the following tunnel protection
mechanisms:
 TE FRR
 CR-LSP backup
 Tunnel protection group
For details about the three tunnel protection mechanisms, see section 6.4 MPLS TE. DS-TE
features in tunnel protection networking are as follows.

Table 1.1 DS-TE enabled with tunnel protection


 TE FRR
− When bandwidth protection is required, manual FRR supports 1:1 protection and
N:1 protection and guarantees QoS through manually configured CTs and
bandwidths for bypass tunnels; auto FRR supports only 1:1 protection and
guarantees QoS by having the bypass tunnel inherit the CTs and bandwidths of the
primary tunnel.
− When bandwidth protection is not required, the CTs and bandwidths of the bypass
tunnel are not taken into consideration. Both manual FRR and auto FRR support
1:1 protection and N:1 protection.
 CR-LSP backup
The bypass tunnel inherits the CT types and bandwidths of the primary tunnel. The
best-effort path does not need to guarantee QoS. Therefore, the best-effort path does
not inherit the CT types and bandwidths of the primary tunnel and only guarantees
that traffic can be forwarded.
 Tunnel protection group
A tunnel protection group is formed by binding two independently configured tunnels
that back up each other. Therefore, the DS-TE feature of the backup tunnel is
determined by its configuration. To guarantee QoS, the backup tunnel has the same
CT and bandwidth as the primary tunnel. In addition, MPLS OAM detection packets
are sent through the queue with the highest priority on a TE tunnel.

Like MPLS TE, DS-TE tunnel protection mechanisms can be used individually or jointly.

QoS Mechanism on Each Hop


DS-TE combines MPLS TE and MPLS DiffServ and does not define a new field to carry
DiffServ information for traffic classification. Instead, DS-TE inherits the traffic classification
mechanisms used in MPLS DiffServ: E-LSPs and L-LSPs. Huawei routers support only
E-LSPs. Therefore, MPLS DS-TE uses E-LSPs, on which the scheduling type and drop
precedence are determined by the EXP field in the MPLS label.
DS-TE maps traffic with the same PHB to one CT and allocates resources to each CT. The
PHB, however, is presented as the queue scheduling mode and drop precedence in internal
processing. Therefore, the CT must be associated with queues.
Huawei routers support two entity queues: eight CQs (or port queues) and eight FQs, which
can be in a one-to-one mapping with the eight CTs. Huawei routers are preconfigured with a
default mapping between the CT and CQ/FQ, and allow users to create eight mapping tables.

Table 1.1 Default mapping between the CT and CQ/FQ


CT     Service Class    Queue Scheduling Mode

CT7    CS7              PQ
CT6    CS6              PQ
CT5    EF               WFQ
CT4    AF4              WFQ
CT3    AF3              WFQ
CT2    AF2              WFQ
CT1    AF1              WFQ
CT0    BE               LPQ

Except for the mapping between the CT and CQ/FQ, DS-TE implements traffic classification,
traffic shaping, traffic policing, and congestion avoidance in the same manner as MPLS
DiffServ.
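For reference, the default CT-to-queue mapping can be expressed as a simple lookup (an illustrative data structure reproducing the table above, not a device data model):

```python
# Default mapping between CTs, service classes, and queue scheduling modes,
# taken from the preconfigured table described in the text.
CT_QUEUE_MAP = {
    7: ("CS7", "PQ"),
    6: ("CS6", "PQ"),
    5: ("EF",  "WFQ"),
    4: ("AF4", "WFQ"),
    3: ("AF3", "WFQ"),
    2: ("AF2", "WFQ"),
    1: ("AF1", "WFQ"),
    0: ("BE",  "LPQ"),
}

service_class, scheduling = CT_QUEUE_MAP[5]
print(service_class, scheduling)  # EF WFQ
```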

MPLS DS-TE Typical Applications


 Application Scenario 1: Access of Different Services of a VPN


On VPNs using the MPLS TE tunnel, EF, AF, and BE services can be transmitted over
the same VPN. This means that different services may be transmitted over the same
tunnel at the same time.
To prevent mutual interference between different services in a TE tunnel, you can create
multiple VPNs and TE tunnels so that different tunnels transmit different services. If
multiple VPNs transmit services concurrently on the network, a large number of VPNs
and TE tunnels need to be created, which is a waste of resources.
An alternative is to deploy DS-TE. A multi-CT LSP is used to transmit different services
of a VPN. A multi-CT LSP can reserve bandwidth for up to eight CTs, each
corresponding to one service of the VPN. These services are mutually independent.
As shown in Figure 1.1, BE, AF, and EF services access VPN1 at the same time. You
need to set up only one TE tunnel, configure CT0 (100 Mbit/s), CT1 (50 Mbit/s), and
CT2 (10 Mbit/s) for the tunnel, and bind VPN1 to the tunnel on the ingress. After the
configurations are complete, all the traffic of VPN1 is classified and then enters the
corresponding queue.

Figure 1.1 Different services of a VPN over an LSP

 Application Scenario 2: Access of Services of Different VPNs


On VPNs using the MPLS TE tunnel, multiple VPNs may use the same TE tunnel. These
VPNs have different requirements for QoS, which may cause VPNs to compete for
resources and QoS for each type of service is not guaranteed. This scenario can be
classified into the following situations, for each of which a solution is provided:
− Multiple VPNs with totally different services
If no more than eight service types exist across all the VPNs, a single TE tunnel can
be used to transmit these services.
For example, VPN1 and VPN2 both access the MPLS TE network. VPN1 has EF
and BE traffic, while VPN2 has AF traffic. In this example, only one TE tunnel
needs to be set up, with different CTs for different services of each VPN. A total of
three CTs must be configured, which is the same as the number of service types in
VPN1 and VPN2.
− Multiple VPNs with identical services
The required number of TE tunnels equals the number of VPNs. The required
number of CTs of each TE tunnel equals the number of service types of each VPN.
For example, VPN1 and VPN2 both access the MPLS TE network. Both VPN1 and
VPN2 have EF and BE traffic. In this example, two TE tunnels need to be set up so


that the services of different VPNs use different TE tunnels. On each TE tunnel, two
CTs are configured for the two types of services.
− Multiple VPNs with partially same services:
A TE tunnel needs to be set up for each VPN. The required number of CTs of each
TE tunnel equals the number of service types of each VPN.
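The counting rules for the three situations above can be summarized in a small sketch (an illustrative helper; the disjoint-service test and the return convention are our simplification of the rules, not device behavior):

```python
# Tunnel/CT planning rules from the text:
# - All VPN services totally different: one tunnel, one CT per service type.
# - Identical or partially overlapping services: one tunnel per VPN, each tunnel
#   with as many CTs as that VPN has service types (the max is shown here).
def plan(vpn_services):
    """vpn_services: dict vpn_name -> set of service types, e.g. {"EF", "BE"}."""
    all_services = set().union(*vpn_services.values())
    disjoint = sum(len(s) for s in vpn_services.values()) == len(all_services)
    if disjoint and len(all_services) <= 8:
        return 1, len(all_services)          # (tunnels, CTs on that tunnel)
    return len(vpn_services), max(len(s) for s in vpn_services.values())

print(plan({"VPN1": {"EF", "BE"}, "VPN2": {"AF"}}))        # (1, 3)
print(plan({"VPN1": {"EF", "BE"}, "VPN2": {"EF", "BE"}}))  # (2, 2)
```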
 Application Scenario 3: Access of VPN traffic and Non-VPN Traffic
Both VPN traffic and non-VPN traffic coexist and have different QoS requirements. If
all traffic is transmitted over a TE tunnel, VPN traffic and non-VPN traffic may compete
for resources and QoS for each type of service cannot be guaranteed.
This scenario can be classified into the following situations, for each of which a solution
is provided:
− VPNs and non-VPNs with totally different services:
A single TE tunnel can be used to transmit the traffic. Different CTs are configured
for the service classes of VPN traffic and non-VPN traffic. The number of CTs equals
the number of VPN service types plus the number of non-VPN service types, as
shown in Figure 1.2.

Figure 1.2 VPN traffic and non-VPN traffic over a TE tunnel

− VPNs and non-VPNs with identical services


Two TE tunnels need to be set up respectively for VPN traffic and non-VPN traffic.
The required number of CTs of each TE tunnel equals the number of service classes
on the tunnel, as shown in Figure 1.3.


Figure 1.3 VPN traffic and non-VPN traffic over different TE tunnels

− VPNs and non-VPNs with partially same services


Two TE tunnels need to be set up respectively for VPN traffic and non-VPN traffic.
The required number of CTs of each TE tunnel equals the number of service classes
on the tunnel, as shown in Figure 1.3.

6.6 MPLS VPN QoS


QoS Requirements of MPLS VPNs
Because of its flexibility, scalability, and QoS advantages, MPLS VPN has gradually become
the most important VPN technology. It is widely applied to carrier networks and enterprise
networks.
 Enterprise interworking: Secure and reliable intercommunication is provided for
enterprise headquarters, branches, and employees on a business trip.
 Service isolation: Multiple services (such as interworking, 3G, and NGN services) on a
network are isolated using VPNs to ensure that each service runs individually.
Therefore, VPN users require an end-to-end QoS guarantee. Users expect that MPLS VPN
QoS can function as well as QoS for physical links or ATM PVC QoS. QoS between the PE
and CE is guaranteed by user networks. MPLS VPN carriers guarantee QoS between PEs.


Figure 1.1 QoS requirements of MPLS VPNs

MPLS QoS
An MPLS VPN network is shared by Internet users and multiple VPN users. Therefore,
VPNs compete for resources with one another and with Internet users. The DiffServ model
reserves resources only on a single node and can hardly meet the end-to-end QoS
requirements of VPNs.
To resolve this problem, the following schemes are provided:
 Scheme 1: LDP LSP/Static LSP + MPLS HQoS
 Scheme 2: MPLS TE + MPLS HQoS
 Scheme 3: Dedicated MPLS TE tunnel (bound to VPNs) between CEs + MPLS HQoS
 Scheme 4: MPLS DiffServ + dedicated MPLS TE + RRVPN
 Scheme 5: MPLS DS-TE

Scheme 1: LDP LSP/Static LSP + MPLS HQoS


Application Scenario: Static LSPs, LDP LSPs, or TE tunnels without bandwidth guarantee
are used on an MPLS VPN, on which PEs support MPLS HQoS. This scheme is applicable
only to MPLS HQoS-capable devices.
Static LSPs, LDP LSPs, or TE tunnels without bandwidth guarantee do not reserve resources
on the forwarding path, not meeting end-to-end QoS requirements. If these tunnels are used
on an MPLS VPN, MPLS HQoS can be used to ensure end-to-end QoS requirements.
LDP LSPs or static LSPs are set up between PEs. Priorities of packets entering a PE have
been predefined or are set by the PE. After packets enter an MPLS domain, packets are
processed on each hop based on priorities.


Figure 1.1 MPLS DiffServ + MPLS HQoS

Figure 1.2 HQoS scheduling mode for scheme 1

Scheme 1 is easy to implement but has the following problems:


 MPLS HQoS distinguishes VPNs and various services in each VPN only on the ingress
PE, not on the P or egress PE. VPN-based HQoS is not supported. Services are
distinguished only by priority.
 Resources reserved for each priority on a hop are shared by multiple service flows,
including VPN service flows and non-VPN service flows (such as Internet service flows
on the public network). They have to compete for resources. Internet traffic is prone to a
burst. If the bandwidth is allocated based on the peak hour usage, resources are mostly
wasted. If the bandwidth is allocated based on the committed bandwidth usage, packet
loss occurs in the case of traffic congestion.
 Service flows traverse multiple links. Although these links reserve bandwidths for
packets of different priorities, the available bandwidth of certain links may be
insufficient and cannot provide end-to-end QoS guarantee.


 If LDP FRR is configured for LDP LSPs and the primary LDP LSP fails, traffic of
multiple VPNs is switched from the primary LDP LSP to the backup LDP LSP. Then
VPN-based HQoS does not take effect on the ingress PE. After the primary LDP LSP
recovers, traffic is switched back to the primary LDP LSP, and VPN-based HQoS
resumes its effect on the ingress PE.

Scheme 2: MPLS TE + MPLS HQoS


Application Scenario: MPLS TE tunnels are used on an MPLS VPN, on which PEs support
MPLS HQoS. This scheme is applicable only to MPLS HQoS-capable devices.
In scheme 1, VPN services and non-VPN services compete for resources on each link. To
resolve this problem, MPLS TE tunnels are used to replace VPN tunnels in scheme 2.
An MPLS TE tunnel can ensure the sum bandwidth for all VPN traffic destined for a certain
peer PE. The ingress PE uses MPLS HQoS to distinguish VPNs and various services in each
VPN.
Unlike scheme 1, scheme 2 uses an MPLS TE tunnel, which carries the traffic of a user group
by default. In addition, the peer PE has already been identified by the TE tunnel and needs no
further identification. Figure 1.1 shows the HQoS scheduling mode for scheme 2.

Figure 1.1 HQoS scheduling mode for scheme 2

You can choose the HQoS scheduling mode in scheme 1 as required.

Scheme 2 restricts the sum bandwidth of all VPNs on the ingress within the maximum
reservable bandwidth so that downstream nodes do not compete for resources.
Supplement to Tunnel Protection
The make-before-break mechanism is used to create CR-LSPs when attributes of MPLS TE
tunnels are changed.
 When the bandwidth of an MPLS TE tunnel is changed, a CR-LSP is set up. If the new
bandwidth is lower than the VPN bandwidth, another MPLS TE tunnel that meets the
bandwidth requirements is used for VPN forwarding. If the changed bandwidth is higher


than or equal to the VPN bandwidth, the newly established CR-LSP is used for VPN
forwarding.
 When the explicit path of an MPLS TE tunnel is changed, a CR-LSP is reestablished. If
the new explicit path does not meet the MPLS TE requirements, the original explicit path
is still used for MPLS TE forwarding. If the new explicit path meets the MPLS TE
requirements, the new explicit path is used for MPLS TE forwarding. The original MPLS
TE tunnel is still used for VPN forwarding, regardless of whether traffic is switched to
the new explicit path.
If the MPLS TE tunnel is configured with CR-LSP hot standby and the primary CR-LSP fails,
the CR-LSP is reestablished, and VPN traffic is switched to the backup CR-LSP. VPN-based
QoS operations, such as bandwidth guarantee, traffic shaping, and queue scheduling, can be
implemented for the VPN traffic.
If the MPLS TE tunnel is configured with hot standby and a best-effort path and both the
primary and backup CR-LSPs fail, VPN traffic is switched to the best-effort path, and the
VPN-based bandwidth guarantee and rate limit configurations do not take effect on the public
network side. After the MPLS TE tunnel recovers, the VPN-based bandwidth guarantee and
rate limit configurations take effect again on the public network side.
If the MPLS TE tunnel is configured with CR-LSP backup (not hot standby), TE FRR, and
tunnel protection groups, VPN-based QoS operations, such as bandwidth guarantee, traffic
shaping, and queue scheduling, cannot be implemented.

Scheme 3: Dedicated MPLS TE Tunnel (Bound to VPNs) Between CEs + MPLS HQoS
Application Scenario: Dedicated MPLS TE tunnels are used on an MPLS VPN, on which
PEs support MPLS HQoS. This scheme is applicable only to MPLS HQoS-capable devices.
A dedicated MPLS TE tunnel is established between two CEs and bound to a specific VPN so
that traffic from the VPN to the peer is transmitted along the dedicated tunnel. The tunnel
carries only services of the specific VPN. End-to-end bandwidth guarantee is provided for the
MPLS TE tunnel.

Figure 1.1 Dedicated MPLS TE tunnel between CEs


MPLS HQoS is implemented on the ingress PE to distinguish VPNs and services in each
VPN. For the HQoS scheduling mode, see Figure 1.1.
On the tunnel, however, services are transmitted regardless of priorities. Therefore, if the
actual traffic rate exceeds the specification, QoS-sensitive services are affected. In addition,
this scheme scales poorly: because tunnel multiplexing is not applied to the networking, the
number of tunnels is proportional to the square of the number of CEs, and a large number of
tunnels are required on the backbone network. MPLS TE also requires the signaling protocol
to periodically refresh the resource reservation status on each tunnel, which consumes a large
amount of resources.

Scheme 4: MPLS DiffServ + Dedicated MPLS TE Tunnel + RRVPN


Application Scenario: MPLS TE tunnels are used on an MPLS VPN, on which services are
distinguished. This scheme is applicable only to devices that support resource reserved VPN
(RRVPN), such as NE40E V300R003.
If only dedicated MPLS TE tunnels are used on an MPLS VPN, tunnel multiplexing is not
applied to the networking, so the number of tunnels is proportional to the square of the
number of CEs and a large number of tunnels are required on the backbone network. If a
non-dedicated MPLS TE tunnel is used, one TE tunnel may be shared by multiple VPNs. The
VPNs then compete for resources, degrading QoS performance. For example, if a VPN is
attacked, communications of the other VPNs are affected.
To resolve this problem, the scheme of MPLS DiffServ + dedicated MPLS TE tunnel +
RRVPN is used.

Figure 1.1 Dedicated MPLS TE tunnel + RRVPN

 Behavior aggregate (BA) or multi-field (MF) traffic classification is implemented on the
ingress PE, or the MPLS DiffServ model is configured to work in Uniform, Pipe, or
Short Pipe mode.
 A dedicated MPLS TE tunnel is set up and bound to a specific VPN so that traffic from
the VPN to the peer PE is transmitted along the dedicated tunnel.
 RRVPN is applied to the dedicated MPLS TE tunnel. RRVPN is implemented based on
the tunnel multiplexing technology and provides CAR and HQoS. Therefore, traffic
policing can be implemented for the incoming traffic of each VPN or the traffic of each
service of a VPN.
Scheme 4 restricts the total bandwidth of all VPNs on the ingress to within the maximum
reservable bandwidth so that downstream nodes do not compete for resources.

Scheme 5: MPLS DS-TE


Application Scenario: This scheme is applicable only to MPLS DS-TE-capable devices.
For more details, see MPLS DS-TE Typical Applications.

6.7 QoS Implementations on Different Boards


6.7.1 Implementation Differences of MPLS DiffServ
The board differences described in the preceding chapters are applicable to MPLS VPN scenarios.

Chapter                                 Link
MPLS QoS                                6.3 MPLS DiffServ Configuration
Classification and marking              3.5.1 Implementation Differences of BA Classification
                                        3.5.2 Implementation Differences of MF Classification
Traffic policing and traffic shaping    4.4.1 Implementation Differences of Policing and Shaping
Congestion management and avoidance     5.6 QoS Implementations on Different Boards


7 ATM QoS

About This Chapter


7.1 Basic Concepts of ATM
7.2 QoS of ATMoPSN and PSNoATM

7.1 Basic Concepts of ATM


What Is ATM
Asynchronous transfer mode (ATM) is a cell-based data transfer technique in which
bandwidth is allocated to channels on demand.
ATM integrates the features of circuit switching and packet switching. On one hand, ATM is
connection-oriented and any ATM user needs to communicate with another ATM user over an
established connection. On the other hand, ATM sends data in fixed-sized cells and multiple
ATM connections can share bandwidth resources.

Statistical Multiplexing of ATM


ATM uses statistical multiplexing to maximize the utilization of network resources.
Statistical multiplexing dynamically allocates network resources to different services based on
the statistical features of these services. An ATM network can transmit multiple types of
services, such as data, voice, and video services, at different rates and provides QoS
guarantee for real-time services.


Figure 1.1 Statistical multiplexing of ATM

As shown in Figure 1.1, the data of users D, C, and A is allocated to transmission lines based
on their arrival sequence. Because user B does not send any data, user B does not occupy
bandwidth resources. In this sense, an ATM connection is a virtual connection.

VCC
On an ATM network, the source and destination must communicate over an established
connection. The establishment of an ATM connection is similar to the establishment of a
telephone call connection. An ATM connection is a virtual channel connection (VCC)
uniquely identified by a virtual path identifier (VPI)/virtual channel identifier (VCI) pair.
From the perspective of routing, a VPI/VCI pair functions in a similar way as an IP address.
Multiple VPI/VCI pairs uniquely identify a multi-segment connection. When a switching
node receives an ATM cell, the switching node searches its local VPI/VCI mapping table and
replaces the incoming VPI/VCI pair carried in the cell with the corresponding outgoing
VPI/VCI pair.

Figure 1.1 VCC
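The label swapping described above can be sketched as a table lookup. The following Python snippet is purely illustrative (the table entries, port numbers, and function name are invented for this example, not taken from any switch implementation):

```python
# Sketch of VPI/VCI label swapping at an ATM switching node (illustrative
# values only). Each VPI/VCI pair is significant on a single link, so the
# mapping table is keyed by (incoming port, VPI, VCI).

def swap_vpi_vci(mapping_table, in_port, vpi, vci):
    """Look up an arriving cell's (in_port, VPI, VCI) and return the
    outgoing port and the rewritten VPI/VCI pair."""
    try:
        out_port, out_vpi, out_vci = mapping_table[(in_port, vpi, vci)]
    except KeyError:
        return None  # unknown connection: the cell is discarded
    return out_port, out_vpi, out_vci

# Example mapping table on one node: the same outgoing VPI/VCI pair can
# be reused on different outgoing ports.
table = {
    (1, 10, 100): (2, 20, 200),
    (1, 10, 101): (3, 20, 200),
}

print(swap_vpi_vci(table, 1, 10, 100))  # (2, 20, 200)
print(swap_vpi_vci(table, 1, 99, 99))   # None
```

A VP switching node would rewrite only the VPI field in the same way, leaving the VCI untouched.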

ATM switching includes VP switching and VC switching. A VP switching node changes only
the VPI value of the VPI/VCI pair, whereas a VC switching node changes both the VPI and
VCI values of the VPI/VCI pair. A VP can be viewed as a large pipe with VCs as its small
pipes. Figure 1.2 shows the relationships between VPs and VCs.

Figure 1.2 Relationships between VPs and VCs

The multiplexing, switching, and transmission of ATM cells are performed on VCs.

PVC and SVC


ATM supports two types of VCs:
 Switched virtual circuits (SVCs): are established by ATM user terminals using signaling.
SVCs function in a similar way as user lines on a telephone network. An ATM network
establishes an SVC for two users to communicate only after one of the two users initiates
a communication request. After the communication is complete, the SVC is released by
the signaling. SVCs can appropriately utilize network resources to reduce
communication costs.
 Permanent virtual circuits (PVCs): are statically configured by administrators. A PVC
cannot be automatically released after the communication is complete. PVCs function in
a similar way as leased lines on a telephone network. Users connected over PVCs can
communicate even if network resources are insufficient. PVCs apply to scenarios with
high communication requirements.
Nowadays, most ATM networks use PVCs to transmit data.

Importance of Congestion Management for an ATM Network


Compared with congestion management on a circuit or packet switched network, congestion
management on an ATM network has the following characteristics:
 Important: ATM is an asynchronous data transfer technique that uses statistical
multiplexing to dynamically allocate network resources to different services. Statistical
multiplexing improves network resource usage and allocation flexibility, but also
increases network congestion risks.
 Difficult:
− An ATM network transmits a variety of services. The data traffic features of these
services are hard to control.
− An ATM network transmits services at high rates. If traffic congestion occurs on a
certain connection, the congestion rapidly spreads to other connections.
Congestion management is a daunting task faced by ATM networks.


Basic Principles of Congestion Management


The International Telecommunication Union-Telecommunication Standardization Sector
(ITU-T) has developed a set of congestion control mechanisms to satisfy the congestion
management requirements of ATM networks. The basic principle of these mechanisms is to
prevent traffic congestion by appropriately managing network resources.
Congestion management measures can be classified into two types:
 Preventive measures (traffic control): are designed to prevent traffic congestion from
occurring. These measures include traffic contract parameter determination, call
admission control, and traffic parameter control.
 Responsive measures (congestion control): are designed to minimize the impact of traffic
congestion after traffic congestion occurs. These measures include selectively dropping
ATM cells and reporting congestion indications.

Traffic Control
On an ATM network, traffic control is implemented based on service types and quality.
The ATM Forum classifies service types as constant bit rate (CBR), real-time variable bit rate
(rt-VBR), non-real-time variable bit rate (nrt-VBR), available bit rate (ABR), and unspecified
bit rate (UBR) based on service rates. These service types are described in detail in the
section ATM Service Types.
Besides determining service types based on service rates, the ATM Forum defines QoS and
traffic parameters to measure ATM service quality. Before an ATM connection is established,
QoS and traffic parameters are negotiated between an ATM user and the ATM network or
between two ATM networks. These negotiated parameters form a traffic contract.

Traffic Contract Parameters


The traffic contract used by ATM networks is similar to the service level agreement (SLA)
used by IP networks.

Figure 1.1 ATM traffic contract

An ATM traffic contract includes:
 Traffic parameters


Some traffic parameters describe the traffic characteristics of services. These parameters
are called source traffic parameters. These parameters include:
− Peak cell rate (PCR): indicates the maximum allowable rate at which cells can be
transmitted along an ATM connection. Cells exceeding the PCR will be dropped by
the ATM ingress or marked as droppable. Cells marked as droppable will be
dropped by any node that encounters traffic congestion. For CBR services, the PCR
represents the constant bandwidth provided by a VC.
− Sustainable cell rate (SCR): indicates the average allowable, long-term cell transfer
rate on a specific ATM connection. The SCR is specific to VBR services.
− Minimum cell rate (MCR): indicates the minimum allowable rate at which cells can
be transmitted along an ATM connection.
− Maximum burst size (MBS): indicates the maximum allowable burst size of cells
that can be transmitted contiguously on a particular ATM connection. The MBS is
specific to VBR services. The burst size indicates the ratio of the peak bit rate to the
average bit rate. The larger the burst size, the larger the rate variation of the service.
Some traffic parameters describe the characteristics of services in relation to time. For
example, the cell delay variance tolerance (CDVT) indicates the maximum cell delay
variance (CDV) allowed between two terminals. These parameters apply to real-time
services.
 QoS parameters
− Peak-to-peak cell delay variance (peak-to-peak CDV): indicates the difference
between the maximum and minimum cell transfer delay (CTD) experienced during
the connection.
− Maximum cell transfer delay (MCTD): indicates the maximum CTD. The CTD is the
time elapsed from when the source transmits the first bit of a cell to when the
destination receives the last bit of the cell. The MCTD is an important parameter for
CBR services. If the transmission of some cells takes too long, the destination regards
these cells as lost or delayed, and drops delayed cells even if these cells have been
reassembled into packets. A large CTD affects the quality of voice services.
− Cell loss rate (CLR): indicates the percentage of cells that are lost on the network
due to errors or congestion and are not received by the destination. Cells may fail to
reach the destination in the following situations:
1. The destination is incorrect.
2. ATM nodes experience severe congestion.
3. The burst size of the traffic sent by the source exceeds the specifications in the
traffic contract.
4. The CTD of cells exceeds the MCTD.

CAC
When a terminal initiates a call request, the terminal includes the characteristics of the traffic
to be sent to the network and service quality requirements in the call request.
After the network receives a call request, the network starts the call admission control (CAC)
function to detect the distribution of network resources. Then, the network determines
whether the available network resources can meet the service quality requirements.
If the available network resources can meet requirements, the network accepts the call request
and establishes a new VC. In this situation, the service quality of existing connections remains
unchanged. Otherwise, the network rejects the call request.
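The admission decision can be sketched as a bandwidth check. This is a minimal illustration under assumed parameters (real CAC also weighs QoS targets such as delay and loss, which this sketch ignores):

```python
# Minimal sketch of call admission control (CAC): admit a new VC only if
# the remaining link bandwidth can satisfy the requested rate, so that
# the service quality of existing connections remains unchanged.

def admit_call(link_capacity_kbps, admitted_rates_kbps, requested_kbps):
    available = link_capacity_kbps - sum(admitted_rates_kbps)
    return requested_kbps <= available

admitted = [40_000, 30_000]                   # rates reserved by existing VCs
print(admit_call(100_000, admitted, 25_000))  # True: 30 Mbit/s still free
print(admit_call(100_000, admitted, 35_000))  # False: request rejected
```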


Traffic Parameter Control


After a VC is established, the characteristics of traffic transmitted over the VC may go beyond
the characteristics determined by the network using the CAC function. In this situation,
network service quality may be affected if traffic parameters are not appropriately controlled.
To solve the preceding problem, a traffic monitoring and control mechanism is deployed on
user-to-network interfaces (UNIs) and network-to-network interfaces (NNIs) to ensure that
the characteristics of the incoming traffic bound for each VC conform to the negotiated
characteristics specified in the traffic contract. The traffic monitoring and control process is
called traffic parameter control.
A main traffic parameter control measure is to mark the traffic that exceeds negotiated
specifications. Marked traffic will be dropped first if congestion occurs. The service quality of
marked traffic cannot be ensured.
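The marking step can be sketched with the ATM Forum's generic cell rate algorithm (GCRA, virtual-scheduling form). The parameter values below are invented for illustration; a real UPC/NPC instance uses the increment and tolerance negotiated in the traffic contract:

```python
# Hedged sketch of traffic parameter control (UPC/NPC) using the GCRA:
# cells arriving faster than the negotiated rate are not dropped here but
# marked with CLP = 1, so they are dropped first under congestion.

def police(arrivals, increment, limit):
    """arrivals: cell arrival times; increment = 1/PCR; limit = tolerance.
    Returns the CLP bit assigned to each cell (0 conforming, 1 marked)."""
    tat = 0.0                      # theoretical arrival time
    clp_bits = []
    for t in arrivals:
        if t >= tat - limit:       # conforming cell
            clp_bits.append(0)
            tat = max(tat, t) + increment
        else:                      # too early: mark instead of drop
            clp_bits.append(1)
    return clp_bits

# Contract: one cell per 10 time units, tolerance 2. The burst at t=0..3
# exceeds the contract, so the later cells of the burst are marked.
print(police([0, 1, 2, 3, 40], increment=10, limit=2))  # [0, 1, 1, 1, 0]
```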

Congestion Control
When an ATM network detects congestion, the network starts congestion control by
selectively dropping cells of minor importance and reporting forward and feedback
congestion indications. If the preceding measures cannot put congestion under control, the
ATM network releases the congested connection or reroutes traffic.
 Selectively Dropping Cells
The cell loss priority (CLP) bit of cells transmitted over an ATM network indicates the
drop priority of cells. The CLP bit has two values: 0 and 1. If congestion occurs, the
ATM network drops cells with the CLP bit as 1 first.

Figure 1.1 ATM cell structure

The cells with the CLP bit as 1 may be cells of minor importance sent by users or cells
whose CLP bits are changed from 0 to 1 by usage parameter control (UPC) or network
parameter control (NPC) due to inconsistency with negotiated specifications. The ATM
network drops cells with the CLP bit as 1 to ensure the transmission quality of
high-priority cells.
 Reporting Congestion Indications
The measure of selectively dropping cells is taken by the ATM node where congestion
occurs. In some situations, the entire network needs to work in coordination to deal with
congestion. The nodes that experience congestion must be able to spread congestion
information to other parts of the network for other nodes to take responsive measures to
control congestion. Congestion indication types are classified as explicit forward
congestion indication (EFCI) and feedback congestion indication (FCI).
The EFCI process is as follows:
1. The source sends a cell with the EFCI bit as 0.
2. The congested ATM node sets the EFCI bit of the cell to 1 before forwarding the
cell.
3. After the destination receives a cell with the EFCI bit as 1, the destination sets the
EFCI bit of a cell to 1 before sending the cell to the source.
4. After the source receives a cell with the EFCI bit as 1 from the destination, the
source determines that congestion has occurred on the connection between itself
and the destination and lowers the traffic sending rate to relieve traffic congestion.
FCI applies to ABR services and is implemented using resource management (RM)
cells. The FCI process is as follows:
1. The source injects an RM cell into the cell flow sent to the destination. The header
of the RM cell carries the CLP bit (0) and PTI field (110) to detect available
bandwidth.
2. The destination returns the received RM cell to the source. The returned RM cell
contains the feedback information added by ATM nodes along the transmission
path. The feedback information reflects bandwidth availability.
3. The source takes responsive measures based on actual situations:
− If the source does not receive the returned RM cell, the source continuously
reduces the traffic sending rate.
− If the source receives the RM cell and the feedback information indicates that
the available bandwidth has increased, the source can increase the traffic
sending rate.
− If the source receives the RM cell and the feedback information indicates that
the available bandwidth has decreased, the source should rapidly reduce the
traffic sending rate.

ATM Service Types


 CBR
The traffic of constant bit rate (CBR) services is transmitted at a constant bit rate over a
VC. CBR services apply to interactive digital audio and video applications that require
continuous digital information streams, such as video conference, telephony, and distant
education.


Figure 1.1 CBR service characteristics

The amount of bandwidth allocated to CBR services is characterized by the PCR. CBR
services are tailored for any type of data for which the terminals require predictable
responsive time and a static amount of bandwidth continuously available for the lifetime
of the connection.
 VBR
The source of a variable bit rate (VBR) service sends cells at a variable rate, and the
traffic can be regarded as burst traffic. VBR services include:
− Real-time variable bit rate (rt-VBR) services have strict delay and jitter
requirements and are suited for applications with high requirements for real-time
communication, such as IP-based voice and video services.
− Non-real-time variable bit rate (nrt-VBR) services are suited for applications that
have low requirements for real-time communication and allow burst traffic, such as
ticket booking systems and bank transaction systems. nrt-VBR services can
guarantee low cell loss for traffic that conforms to the traffic contract but has no
restrictions on cell transfer delay.

Figure 1.2 VBR service characteristics

 ABR


Available bit rate (ABR) services do not have restrictions on delay or jitter. As a result,
ABR services cannot be used for applications that require real-time communication.
ABR services use some flow control measures to control network congestion and cell
loss, reducing the cell loss rate and guaranteeing bandwidth availability. Typical ABR
service applications include LAN emulation services and LAN interworking services.

Figure 1.3 ABR service characteristics

 UBR
Unspecified bit rate (UBR) services do not guarantee bandwidth availability. UBR
services are best-effort services generally used for applications that are tolerant of delay
and jitter, such as email and FTP services.
UBR services do not provide QoS guarantee. The cell loss rate and cell transmission
delay of connections cannot be guaranteed. The PCR is optional in CAC and UPC. If a
network does not have strict requirements for the PCR, you do not need to use the PCR.
ABR and UBR services differ in that, when a VC is congested, the ABR service reduces
the traffic sending rate whereas the UBR service drops cells.

Figure 1.4 UBR service characteristics


7.2 QoS of ATMoPSN and PSNoATM


ATM and PSN Integration
The ATM technology was designed to resolve all network communication problems.
However, its highly developed, complex architecture causes difficulties in ATM system
development, configuration, management, and fault location. As a result, the ATM
technology never had the chance to display its full performance on a pure ATM network.
In the late 1990s, the Internet and IP technologies gained an overwhelming competitive edge
over ATM with their simplicity and flexibility. Because ATM has a great advantage in
providing guaranteed service transmission quality, technologies that integrate ATM with the
packet switched network (PSN) appeared. These technologies include ATM over PSN
(ATMoPSN), IP over ATM (IPoA), IP over Ethernet over ATM (IPoEoA), PPP over ATM
(PPPoA), and PPP over Ethernet over ATM (PPPoEoA).

ATMoPSN
ATMoPSN uses the pseudo wire emulation edge-to-edge (PWE3) technology to transparently
transmit ATM services over a PSN.

Figure 1.1 ATMoPSN networking model

Figure 1.2 shows the ATMoPSN packet structure.

Figure 1.2 ATM PWE3 encapsulation

One ATMoPSN packet contains only one ATM cell. To improve bandwidth efficiency, you
can use cell concatenation to pack multiple cells into one PW packet for transmission.
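The benefit of concatenation can be illustrated with a small efficiency calculation. The per-packet overhead value below is an assumption for illustration, not the exact PSN/MPLS/PW header size:

```python
# Sketch of why cell concatenation helps ATM PWE3: several 53-byte cells
# are packed into one PW packet, so the encapsulation overhead is paid
# once per packet instead of once per cell. OVERHEAD is illustrative.

CELL = 53          # one ATM cell, in bytes
OVERHEAD = 30      # assumed PSN + MPLS + PW encapsulation per packet

def efficiency(cells_per_packet):
    payload = cells_per_packet * CELL
    return payload / (payload + OVERHEAD)

print(round(efficiency(1), 2))   # 0.64 -> one cell per PW packet
print(round(efficiency(8), 2))   # 0.93 -> eight concatenated cells
```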


PSNoATM

For the convenience of description, the IPoA, IPoEoA, PPPoA, and PPPoEoA technologies are all called
PSNoATM technologies.
 IPoA and IPoEoA
With the IP-oriented development trend of core networks and the popularity of the
Ethernet technology among access layer devices, ATM networks are widely used to carry
IP and Ethernet services. As bearer networks for IP and Ethernet services, ATM networks
provide high-speed PPP connections, high network performance, and good QoS
guarantee.
When an IP packet is transmitted over an ATM network, the IP packet must be adapted at
the ATM adaptation layer (AAL) so that the packet can be fragmented into cells at the source
and reassembled at the destination. RFC 1483 provides the adaptation standards.

Figure 1.1 IPoA and IPoEoA

− IPoA encapsulates IP packets as specified in RFC 1483R before transmitting IP
packets over an ATM network.
− IPoEoA encapsulates IP packets as specified in RFC 1483B before transmitting
IPoE packets over an ATM network.

Figure 1.2 Encapsulation of IP and Ethernet services

 PPPoA and PPPoEoA


PPPoA encapsulates PPP packets in which IP or other protocol packets have been
encapsulated into ATM cells before transmitting the PPP packets over an ATM network.


PPPoEoA encapsulates PPPoE packets into ATM cells before transmitting these packets
over an ATM network.

To enable an ATM network to transmit Ethernet packets (including IPoEoA and PPPoEoA packets),
Huawei routers provide a special type of interface: virtual Ethernet (VE) interface. A VE interface has
Ethernet features and can be dynamically created. A VE interface sends or receives packets over an ATM
PVC at the bottom protocol layer. At the link layer, a VE interface uses the Ethernet protocol. At the
network layer or upper layers, a VE interface uses the same protocols as a common Ethernet interface.

ATMoPSN QoS
Figure 1.1 shows a typical ATMoPSN networking model. Figure 1.2 shows the ATM PWE3
packet structure.

Figure 1.1 ATMoPSN networking model

Figure 1.2 ATM PWE3 packet structure

ATM cells are encapsulated with Multiprotocol Label Switching (MPLS) labels when being
transmitted over a PSN. MPLS QoS can ensure the end-to-end QoS of services transmitted
over a PW.
To retain ATM QoS during the transmission of ATM cells over a PSN, ATM QoS parameters
must be mapped to MPLS EXP values. Huawei routers implement this function using ATM
traffic classification:
 When ATM cells enter a PSN, the ingress node maps CLP values carried in ATM cells to
routers' internal service classes and drop priorities (colors) based on the ATM service
type.


 When ATM cells leave a PSN, the egress node maps the internal service classes and
colors of ATM cells to the original CLP values carried in ATM cells.

Figure 1.3 Priority mapping for ATMoPSN QoS

ATM traffic classification used in ATMoPSN QoS consists of behavior aggregate (BA)
classification and forced traffic classification.
 ATM BA Classification
In ATMoPSN, ATM BA classification is used to map the CLP values carried in ATM
cells entering a PSN to the ingress router's internal service classes and drop priorities
(colors) based on the ATM service type. When ATM cells leave a PSN, the internal
service class and drop priority of ATM cells are mapped to the original CLP values
carried in ATM cells.
You can enable ATM BA classification on the ingress node, configure the ingress node to
map the ATM priorities to PSN priorities, and configure the egress node to map the PSN
priorities to ATM priorities. The following tables describe the default mapping
relationships between ATM priorities and PSN priorities.

Table 3.1 ATM -> PSN priority default mapping relationships

ATM Service Type   ATM CLP   Service-Class   Color
CBR                0         EF              Green
CBR                1         EF              Green
ABR                0         AF1             Green
ABR                1         AF1             Yellow
NRT-VBR            0         AF2             Green
NRT-VBR            1         AF2             Yellow
RT-VBR             0         AF4             Green
RT-VBR             1         AF4             Yellow
UBR                0         BE              Green
UBR                1         BE              Green
OAM-Cell           -         EF              Green

Table 3.2 PSN -> ATM priority default mapping relationships

Service-Class   Color    ATM CLP
BE              Green    1
AF1             Green    1
AF1             Yellow   1
AF1             Red      1
AF2             Green    0
AF2             Yellow   1
AF2             Red      1
AF3             Green    1
AF3             Yellow   1
AF3             Red      1
AF4             Green    0
AF4             Yellow   1
AF4             Red      1
EF              Green    0
CS6             Green    0
CS7             Green    0
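The ATM -> PSN direction of BA classification can be sketched as a dictionary lookup over the default mapping values given above. The function name and the fallback for unknown inputs are invented for this example (OAM cells, which carry no CLP, are omitted from the sketch):

```python
# Sketch of BA classification on the ingress node: map the ATM service
# type and CLP bit of an arriving cell to the router's internal service
# class and drop priority (color). Values follow the default mapping
# table above; the fallback to (BE, green) is an assumption.

ATM_TO_PSN = {
    ("CBR", 0): ("EF", "green"),      ("CBR", 1): ("EF", "green"),
    ("ABR", 0): ("AF1", "green"),     ("ABR", 1): ("AF1", "yellow"),
    ("NRT-VBR", 0): ("AF2", "green"), ("NRT-VBR", 1): ("AF2", "yellow"),
    ("RT-VBR", 0): ("AF4", "green"),  ("RT-VBR", 1): ("AF4", "yellow"),
    ("UBR", 0): ("BE", "green"),      ("UBR", 1): ("BE", "green"),
}

def classify(service_type, clp):
    """Return (internal service class, color) for a cell."""
    return ATM_TO_PSN.get((service_type, clp), ("BE", "green"))

print(classify("RT-VBR", 1))   # ('AF4', 'yellow')
```

The egress direction is the reverse lookup against the PSN -> ATM table.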

 ATM Forced Traffic Classification


Although ATM cells carry priority information, it is difficult to implement simple traffic
classification using the priority information carried in ATM cells if multiple ATM cells
are concatenated in a PW packet for transmission. In this situation, you can use ATM
forced traffic classification.
Forced traffic classification is implemented on the upstream interfaces of edge routers on
an ATM network. Forced traffic classification forcibly specifies the service classes and
colors of IP packets transmitted over a PVC, through an interface (main interface or sub-
interface), or over a PVP, irrespective of the ATM service type and CLP values. Then, the
ATM network applies QoS policies on the downstream interface of edge routers based on
the specified service classes and colors.


PSNoATM QoS

For the convenience of description, the IPoA, IPoEoA, PPPoA, and PPPoEoA technologies are all called
PSNoATM technologies.

Besides the traffic control and congestion control mechanisms offered by ATM, Huawei
routers provide the following ATM QoS mechanisms:
 ATM Traffic Classification
Before transmitting IP, Ethernet, or PPP services, an ATM network must use ATM traffic
classification to map the IP priorities to ATM priorities.
PSNoATM supports the following types of ATM traffic classification:
− ATM BA Classification
If PSNoATM uses ATM BA classification, the ingress node of the ATM network
trusts the priorities (such as DSCP values, IP precedence, 802.1p values, or MPLS
EXP values) carried in upstream packets, maps these priorities to the router's
internal service classes and colors on the upstream board, and maps the internal
values back to ATM CLP values on the downstream board. The egress node of the
ATM network maps the ATM CLP values to internal service classes and colors on
the upstream board, and maps them back to packet priorities on the downstream board.
You can enable ATM BA classification on the ingress node, configure the
ingress node to map the ATM priorities to PSN priorities, and configure the egress
node to map the PSN priorities to ATM priorities.
The following lists priority mapping references:
− For the mapping from PSN priorities to internal service classes and colors,
see 3.3.2 QoS Priority Mapping.
− For the mapping from ATM CLP values to internal service classes and
colors, see Table 3.1.
− For the mapping from internal service classes and colors to ATM CLP
values, see Table 3.2.
− For the mapping from internal service classes and colors to PSN priorities,
see 3.3.2 QoS Priority Mapping.
− ATM Forced Traffic Classification
ATM forced traffic classification in PSNoATM is similar to that in ATMoPSN.
− ATM MF Classification
ATM multi-field (MF) classification in PSNoATM is similar to MF classification
in IP QoS. The only difference is that the traffic policies used in ATM MF
classification must be configured on ATM interfaces (including ATM
sub-interfaces) or VE interfaces.
For more information about MF classification in IP QoS, see the section
3.4 MF Classification.
 ATM Traffic Shaping
ATM traffic shaping enables cells to be sent at a relatively even rate by adjusting the
traffic characteristics of cells transmitted over a VCC or VPC.
ATM traffic shaping uses the following methods:
− Reducing the peak cell rate
− Limiting the burst traffic size
− Adjusting the cell transmission interval


− Queuing cells
ATM traffic shaping is similar to IP QoS traffic shaping. The differences are as follows:
− IP QoS traffic shaping applies to IP packets, whereas ATM traffic shaping applies to
ATM cells.
− IP QoS traffic shaping uses token bucket algorithms, whereas ATM traffic shaping
uses leaky bucket algorithms.
Leaky bucket algorithms forcibly limit traffic rates, whereas token bucket
algorithms allow burst traffic transmission while enabling existing traffic to be
evenly transmitted.
Although ATM traffic shaping can be implemented on any part of an ATM network, it is
usually used on the egress of an ATM network.
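The behavioral difference between the two algorithms can be illustrated with a minimal sketch (the rate, depth, interval, and tolerance values are hypothetical, and the leaky bucket is written in the GCRA style; real TM hardware implements these in silicon):

```python
class TokenBucket:
    """Allows bursts up to the bucket depth while limiting the average rate."""
    def __init__(self, rate, depth):
        self.rate, self.depth, self.tokens, self.t = rate, depth, depth, 0.0

    def conforms(self, size, now):
        # Refill tokens for the elapsed interval, capped at the bucket depth.
        self.tokens = min(self.depth, self.tokens + (now - self.t) * self.rate)
        self.t = now
        if size <= self.tokens:
            self.tokens -= size
            return True
        return False

class LeakyBucket:
    """Forces a strictly even output rate: a cell conforms only if it does not
    arrive earlier than the theoretical arrival time allows."""
    def __init__(self, interval, tolerance):
        self.interval, self.tolerance, self.tat = interval, tolerance, 0.0

    def conforms(self, now):
        if now < self.tat - self.tolerance:
            return False                      # cell arrived too early
        self.tat = max(now, self.tat) + self.interval
        return True

# A burst of five back-to-back arrivals at t = 0:
tb = TokenBucket(rate=1000, depth=5000)        # bytes/s, bytes
lb = LeakyBucket(interval=1.0, tolerance=0.5)  # one cell per second
burst_tb = [tb.conforms(1000, 0.0) for _ in range(5)]
burst_lb = [lb.conforms(0.0) for _ in range(5)]
print(burst_tb)  # token bucket absorbs the whole burst
print(burst_lb)  # leaky bucket admits only the first cell
```

The token bucket admits the entire burst because the bucket was full, while the leaky bucket rejects everything after the first cell, which is exactly the "even rate" property the text describes.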
 ATM PVC Congestion Management
In ATM PVC congestion management, packets exceeding the PVC bandwidth are not
dropped. Instead, these packets are cached, and transmitted when the PVC is idle, using
queuing mechanisms.
A PVC supports eight queues. The first queue uses the strict priority (SP) scheduling
algorithm and is called the priority queuing (PQ) queue. The other queues use the
weighted fair queuing (WFQ) algorithm and are called WFQ queues. Currently, an ATM
PVC supports only one-level queue scheduling. The queue scheduling mechanism used
by a PQ queue on an ATM PVC is similar to that used by a PQ queue in IP QoS. For
more information, see 5.2 Queues and Congestion Management.
By default, an ATM PVC is not configured with queue scheduling. If PQ or WFQ
scheduling is configured for a queue on an ATM PVC, the other queues on the ATM
PVC use WFQ scheduling by default, each with a weight of 20. It is recommended that
the weight be greater than 10.
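The PQ-then-WFQ behavior described above can be sketched as follows (a weighted round robin approximation of WFQ, with hypothetical queue contents and weights; the actual scheduler in the TM works on bytes, not packets):

```python
from collections import deque

def schedule(pq, wfq_queues, weights, rounds):
    """One-level SP + WFQ sketch: drain the PQ queue first, then serve the
    WFQ queues in proportion to their configured weights."""
    out = []
    while pq:                      # SP: the PQ queue is always served first
        out.append(pq.popleft())
    for _ in range(rounds):        # WRR approximation of WFQ
        for q, w in zip(wfq_queues, weights):
            for _ in range(w):
                if q:
                    out.append(q.popleft())
    return out

pq = deque(["voice1", "voice2"])       # high-priority traffic
q1 = deque(["a1", "a2", "a3"])         # WFQ queue, weight 2
q2 = deque(["b1"])                     # WFQ queue, weight 1
order = schedule(pq, [q1, q2], weights=[2, 1], rounds=2)
print(order)
```

The PQ packets always leave first; the remaining queues then share the link in a 2:1 ratio, which is the role the per-queue weight (default 20) plays on a real PVC.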
Huawei routers allow you to adjust the internal service classes of UBR services
transmitted over a PVC or PVP to guarantee the quality of high-priority services.


8 Overall QoS Process on Routers

Concept of Upstream and Downstream Traffic


Traffic that a router forwards is classified into upstream and downstream traffic. Traffic sent
to the SFU is called upstream traffic and traffic forwarded from the SFU is called downstream
traffic, as shown in Figure 1.1.

Figure 1.1 Upstream and downstream traffic


Board Architecture on the Forwarding Plane


QoS Implementation on Board Components

The number of passing packets in the display port-queue statistics command output may
be greater than the number of packets actually sent from the interface, because certain
packets are dropped during CAR, filtering, and multicast pruning on the downstream
packet forwarding engine (PFE).

On Huawei routers, some Physical Interface Cards (PICs) are equipped with a Traffic
Manager (TM) chip; such a PIC is called an egress Traffic Manager (eTM) subcard.
To check whether a PIC is an eTM subcard, run the display device pic-status command in
any view. If the PIC is an eTM subcard, the Type field in the command output contains
"_T_CARD".
<HUAWEI> display device pic-status
Pic-status information in Chassis 1:
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
SLOT PIC Status Type Port_count Init_result Logic_down


1 1 Registered ETH_20XGF_NB_CARD 20 SUCCESS SUCCESS


2 0 Registered LAN_WAN_2x10GF_T_CARD 2 SUCCESS SUCCESS
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

If the downstream PIC is not equipped with an eTM subcard, downstream packet-based queue
management is implemented on the downstream TM. If the downstream PIC is equipped with
an eTM subcard, downstream packet-based queue management is implemented on the eTM
subcard.

Packet Forwarding Process


Figure 1.1 shows how an Ethernet packet is forwarded when the PIC is not equipped with an
eTM subcard.

Figure 1.1 Packet forwarding process when the PIC is not equipped with an eTM subcard

 Packet forwarding process for upstream traffic


1. The PIC converts optical/electrical signals from the physical link into an Ethernet
packet and sends it to the upstream PFE, which can be a network processor (NP) or an
application-specific integrated circuit (ASIC).
2. The inbound interface processing module on the upstream PFE parses the link layer
protocol and identifies the packet type.
3. The traffic classification module on the upstream PFE implements BA and MF
traffic classification in sequence based on the configuration on the inbound
interface.
4. The upstream PFE searches the forwarding table for an outbound interface and next
hop based on packet information (such as the MAC address, destination IP address, and
MPLS label). The upstream PFE drops packets with the forwarding behavior
drop, and CAR is not implemented for these packets.
5. The upstream PFE implements rate limiting for upstream traffic based on the CAR
configuration on the inbound interface or in the MF traffic classification profile.

CAR does not apply to CPU packets, to prevent packet loss in the case of traffic congestion.
6. The upstream PFE sends packets to the upstream TM.
7. The upstream TM processes flow queues (optional) based on the user-queue
configuration on the inbound interface or in the MF classification profile, and then
implements VOQ processing. After that, the upstream TM sends packets to the
upstream Flexible Interface Card (FIC).
8. The upstream FIC fragments packets and encapsulates them into micro cells before
sending them to the switch fabric unit (SFU).

Similar to an ATM module, the SFU forwards packets based on a fixed cell length. Therefore, packets
are fragmented before being sent to the SFU.
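The fragmentation step can be sketched as follows (the 64-byte cell payload and the (sequence, last-cell, payload) tuple are illustrative assumptions, not the actual micro-cell format used by the SFU):

```python
CELL_PAYLOAD = 64  # hypothetical micro-cell payload size in bytes

def fragment(packet: bytes, seq: int):
    """Split a variable-length packet into fixed-size micro cells; the last
    cell is padded so the switch fabric always handles a fixed cell length."""
    cells = []
    for off in range(0, len(packet), CELL_PAYLOAD):
        chunk = packet[off:off + CELL_PAYLOAD]
        last = off + CELL_PAYLOAD >= len(packet)   # marks end of packet
        cells.append((seq, last, chunk.ljust(CELL_PAYLOAD, b"\x00")))
    return cells

cells = fragment(b"\xab" * 150, seq=7)
print(len(cells))    # a 150-byte packet becomes 3 fixed-length cells
print(cells[-1][1])  # last-cell flag, used for reassembly on the far-side FIC
```

The last-cell flag is what lets the downstream FIC know where one packet ends and the next begins when it re-encapsulates the cells into packets.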
 Packet forwarding process for downstream traffic
1. Micro cells are sent from the SFU to the downstream TM.
2. The downstream FIC encapsulates the micro cells into packets again.
3. The downstream TM duplicates multicast packets.
4. The downstream TM processes flow queues based on the user-queue configuration
on the outbound interface (including the VLANIF interface) if needed, and
processes class queues (CQs) before sending packets to the downstream PFE.
5. The downstream PFE searches the forwarding table for packet encapsulation
information. For example, for an IPv4 packet, the PFE searches the forwarding table
based on the next hop; for an MPLS packet, the PFE searches the MPLS forwarding
table.
6. The downstream PFE implements MF classification based on the outbound
interface configuration and then BA traffic classification (only the mapping from the
service class and drop precedence to the external priority).
7. The downstream PFE implements rate limiting for downstream traffic based on the
CAR configuration on the outbound interface or in the MF traffic classification
profile.
8. For packets to be sent to the CPU, the downstream PFE implements CP-CAR
before sending them to the CPU. For other packets, the downstream PFE sends them
to the outbound interface processing module, which adds a Layer 2 header (a Layer 2
header and an MPLS header for an MPLS packet). After that, these packets are sent
to the PIC.
9. The PIC converts packets to optical/electrical signals and sends them to the physical
link.


Figure 1.2 shows how a packet is forwarded when the PIC is equipped with an eTM subcard.
The operations for upstream traffic are the same as those when the PIC is not equipped with
an eTM subcard. For downstream traffic, the difference is that the downstream flow queues
are processed on the eTM subcard when the PIC is equipped with one, and on the
downstream TM when it is not. In addition, five-level scheduling (FQ -> SQ ->
GQ -> VI -> port) is implemented for downstream flow queues when the PIC is equipped
with an eTM subcard, whereas three-level scheduling plus two-level scheduling is
implemented for downstream flow queues when the PIC is not equipped with an eTM subcard.

Figure 1.2 Packet forwarding process when the PIC is equipped with an eTM subcard

QoS Implementation During Packet Forwarding


As shown in Figure 1.1, the QoS implementation during packet forwarding is as follows:


Figure 1.1 QoS implementation during packet forwarding

 On the upstream PFE:


1. The upstream PFE initializes the internal priority of packets (service class BE
and color green).
2. The upstream PFE implements BA traffic classification based on the inbound
interface configuration. BA traffic classification requires the upstream PFE to
obtain the priority field value (802.1p, DSCP, MPLS EXP, or ATM CLP) for traffic
classification and modify the internal priority of packets (service class and color)
according to the upstream mapping table.
3. The upstream PFE implements MF traffic classification based on the inbound
interface configuration. MF traffic classification requires the upstream PFE to
obtain multiple field information for traffic classification. After that, the upstream
PFE implements related behaviors (such as filter, re-mark, or redirect). If the
behavior is re-mark, the upstream PFE modifies the internal priority of packets
(service class and color).

The remark command, the qos default-service-class command, the service-class command, the car
command, and the qos car command with the service-class parameter may reset the service class and
color. If these commands are configured together on the upstream board, they take effect in the
following order, regardless of the configuration order: remark -> qos default-service-class (interface
view) -> service-class (traffic behavior view) -> qos car (interface view) -> car (traffic behavior view).
4. The upstream PFE searches the routing table for an outbound interface of a packet
based on its destination IP address.
5. The upstream PFE implements CAR for packets based on the inbound interface
configuration or MF traffic classification profile. If both interface-based CAR and
MF traffic classification-based CAR are configured, the MF traffic classification-based
CAR takes effect. In a CAR operation, a pass, drop, or pass+re-mark behavior can
be performed for incoming traffic. If the behavior is pass+re-mark, the upstream
PFE modifies the internal priority of packets (service class and color).
6. Then, packets are sent to the upstream TM.
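A CAR operation with pass, drop, and pass+re-mark outcomes can be sketched as a single-rate meter with committed and excess buckets, in the spirit of srTCM (RFC 2697); all rate and bucket values here are hypothetical:

```python
def make_car(cir, cbs, ebs):
    """Single-rate three-color meter: green packets pass, yellow packets pass
    re-marked to a higher drop precedence (color change), red packets drop."""
    state = {"tc": cbs, "te": ebs, "t": 0.0}

    def meter(size, now):
        state["tc"] += (now - state["t"]) * cir   # refill committed bucket
        state["t"] = now
        if state["tc"] > cbs:                     # overflow into excess bucket
            state["te"] = min(ebs, state["te"] + state["tc"] - cbs)
            state["tc"] = cbs
        if size <= state["tc"]:
            state["tc"] -= size
            return "pass"                         # green: within CIR burst
        if size <= state["te"]:
            state["te"] -= size
            return "pass+re-mark"                 # yellow: color is changed
        return "drop"                             # red: exceeds both buckets
    return meter

car = make_car(cir=1000, cbs=2000, ebs=1000)      # bytes/s, bytes, bytes
results = [car(1000, 0.0) for _ in range(4)]      # burst of four 1000-byte packets
print(results)
```

The first two packets fit the committed burst and pass unchanged; the third consumes the excess bucket and is re-marked (its color changes while the service class is kept); the fourth is dropped.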
 On the upstream TM:


1. The upstream TM processes flow queues based on the inbound interface
configuration or MF traffic classification configuration. If both interface-based
user-queue and MF traffic classification-based user-queue are configured, the MF
traffic classification-based user-queue takes effect. Packets are put into different
flow queues based on the service class, and a WRED drop policy is implemented for
flow queues based on the color if needed.
2. The upstream TM processes VOQs. VOQs are classified based on the destination
board, which is determined by the outbound interface of packets. Packets are then
put into different VOQs based on the service class.
3. After being scheduled in VOQs, packets are sent to the SFU and then forwarded to
the destination board on which the outbound interface is located.
4. Then, packets are sent to the downstream TM.
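The color-based WRED drop policy applied to these queues can be sketched as follows (the thresholds and maximum drop probabilities are hypothetical; actual WRED profiles are configurable per queue and per color):

```python
import random

def wred_drop_probability(avg_qlen, min_th, max_th, max_p):
    """WRED drop curve: no drops below min_th, a linear ramp between the
    thresholds, and tail drop above max_th."""
    if avg_qlen < min_th:
        return 0.0
    if avg_qlen >= max_th:
        return 1.0
    return max_p * (avg_qlen - min_th) / (max_th - min_th)

# Hypothetical per-color profiles (min_th, max_th, max_p): yellow packets are
# dropped earlier and more aggressively than green ones in the same queue.
profiles = {"green": (60, 90, 0.1), "yellow": (30, 70, 0.2)}

def should_drop(color, avg_qlen, rand=random.random):
    return rand() < wred_drop_probability(avg_qlen, *profiles[color])

print(wred_drop_probability(50, *profiles["green"]))   # below green min_th
print(wred_drop_probability(50, *profiles["yellow"]))  # halfway up yellow ramp
```

At the same average queue length of 50, green packets are never dropped while yellow packets already see a 10% drop probability, which is how the color assigned by CAR or BA classification translates into drop precedence.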
 On the downstream TM:
1. (This step is skipped when the downstream PIC is equipped with an eTM subcard.)
The downstream TM processes flow queues based on the user-queue configuration
on the outbound interface. Packets are put into different flow queues based on the
service class, and a WRED drop policy is implemented for flow queues based on the
color if needed.
2. (This step is skipped when the downstream PIC is equipped with an eTM subcard.)
The downstream TM processes port queues (CQs). Packets are put into different
CQs based on the service class, and a WRED drop policy is implemented for CQs
based on the color if WRED is configured.
3. Then, packets are sent to the downstream PFE.
 On the downstream PFE:
1. The downstream PFE implements MF traffic classification based on the outbound
interface configuration. MF traffic classification requires the downstream PFE to
obtain multiple field information for traffic classification. Behaviors, such as filter
and re-mark, are performed based on the traffic classification results. If the behavior is
re-mark, the downstream PFE modifies the internal priority of packets (service class
and color).
2. The downstream PFE implements CAR for packets based on the outbound interface
configuration or MF traffic classification configuration. If both interface-based
CAR and MF traffic classification-based CAR are configured, the MF traffic
classification-based CAR takes effect. In a CAR operation, a pass, drop, or pass+re-
mark behavior can be performed. If the behavior is pass+re-mark, the downstream
PFE modifies the internal priority of packets (service class and color).
3. The downstream PFE executes the downstream PHB action: the priorities of
outgoing packets are set in newly added packet headers and modified in existing
packet headers, based on the service class and color.

The remark command, the service-class command, and the qos car command with the service-class
parameter may reset the service class and color. If these commands are configured together on the
downstream board, they take effect in the following order, regardless of the configuration order:
service-class (traffic behavior view) -> qos car (traffic behavior view) -> car (interface view) ->
remark (traffic behavior view).
4. Then, packets are sent to the downstream PIC.
− When the PIC is not equipped with an eTM subcard, the PIC adds the link-layer
CRC to the packets before sending them to the physical link.


− When the PIC is equipped with an eTM subcard, the PIC adds the link-layer
CRC to the packets and performs a round of flow queue scheduling before
sending the packets to the physical link. Downstream flow queues are
processed based on the user-queue configuration on the outbound interface.
Packets are put into different FQs based on the service class, and a WRED drop
policy is implemented for FQs based on the color if WRED is configured.
When the PIC is equipped with an eTM subcard, downstream packets are not
scheduled on the downstream TM.

Packet Field Changes During Packet Forwarding


Bandwidth calculation during CAR and traffic shaping is closely related to the packet
length. Therefore, the packet field changes during packet forwarding require attention.
Packet field changes in some common scenarios are described below.

Figure 1.1 Incoming packet in sub-interface accessing L3VPN networking

 CAR calculates the bandwidth of packets based on the entire packet. For example, for an Ethernet
frame, CAR counts the length of the frame header and CRC field, but not the preamble, inter-frame
gap (IFG), or SFD, in the bandwidth. The following figure illustrates a complete Ethernet frame (bytes).

The bandwidth covers the CRC field but not the IFG field.
 The upstream PFE adds a Frame Header, which is removed by the downstream PFE. The Frame
Header is used to transfer information between chips. The NPtoTM and TMtoNP fields are used to
transfer information between the NP and TM.


 When the PIC is not equipped with an eTM subcard, the length of a packet scheduled on the
downstream TM is different from that of the packet sent to the link. To ensure more accurate traffic
shaping, it is recommended that you run the network-header-length command to compensate the
packet with a specific length.
On the downstream interface on the network side:
 For a Type-A or Type-B board, the packet scheduled on the downstream TM does not contain the
IFG, L2 Header, two MPLS Labels, or CRC fields but contains an additional Frame Header,
compared with the packet sent to the link. The Frame Header length is equal to the L2 Header
length. Therefore, a +12-byte compensation (not including the IFG field) or a +32-byte
compensation (including the IFG field) can be performed for the packet.
 For a Type-C board, the packet scheduled on the downstream TM does not contain a Frame Header.
Therefore, a +26-byte compensation (not including the IFG field) or a +46-byte compensation
(including the IFG field) can be performed for the packet.
 When the PIC is equipped with an eTM subcard, no packet length compensation is required.
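The +12/+32 and +26/+46 figures above can be reproduced from the field lengths (assuming a 14-byte L2 header, 4-byte MPLS labels, a 4-byte CRC, and 20 bytes for the IFG together with the preamble and SFD; these byte counts are standard Ethernet/MPLS values, not stated explicitly in the text):

```python
# Field lengths in bytes behind the compensation figures.
L2_HEADER, MPLS_LABEL, CRC, IFG = 14, 4, 4, 20  # IFG incl. preamble and SFD
FRAME_HEADER = L2_HEADER                        # equal length, per the text

def compensation(link_only_fields, tm_only_fields, include_ifg):
    """Length compensation = bytes present on the link but not seen by the TM,
    minus bytes seen by the TM that never reach the link."""
    comp = sum(link_only_fields) - sum(tm_only_fields)
    return comp + IFG if include_ifg else comp

# Network side, L3VPN scenario: the link-side packet carries the L2 header,
# two MPLS labels, and the CRC; the TM-scheduled packet carries none of these.
link = [L2_HEADER, 2 * MPLS_LABEL, CRC]
print(compensation(link, [FRAME_HEADER], False))  # Type-A/B, no IFG:  12
print(compensation(link, [FRAME_HEADER], True))   # Type-A/B, with IFG: 32
print(compensation(link, [], False))              # Type-C, no IFG:    26
print(compensation(link, [], True))               # Type-C, with IFG:  46
```

The same arithmetic with a 4-byte VLAN tag instead of the MPLS labels reproduces the user-side figures given later in this section.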

Figure 1.2 Outgoing packet in sub-interface accessing L3VPN networking

On the downstream interface on the user side:


 For a Type-A or Type-B board, the packet scheduled on the downstream TM does not contain the
IFG, L2 Header, VLAN tag, or CRC fields but contains an additional Frame Header, compared with
the packet sent to the link. The Frame Header length is equal to the L2 Header length. Therefore, a
+8-byte compensation (not including the IFG field) or a +28-byte compensation (including the IFG
field) can be performed for the packet.
 For a Type-C board, the packet scheduled on the downstream TM does not contain a Frame Header.
Therefore, a +22-byte compensation (not including the IFG field) or a +42-byte compensation
(including the IFG field) can be performed for the packet.
For more details, see Incoming packet in sub-interface accessing L3VPN networking.


Figure 1.3 Incoming packet in sub-interface accessing VPLS networking

On the downstream interface on the network side:


 For a Type-A or Type-B board, the packet scheduled on the downstream TM has only one L2
Header, does not contain the IFG, two MPLS Labels, or CRC fields but contains an additional Frame
Header, compared with the packet sent to the link. The Frame Header length is equal to the L2
Header length. Therefore, a +12-byte compensation (not including the IFG field) or a +32-byte
compensation (including the IFG field) can be performed for the packet.
 For a Type-C board, the packet scheduled on the downstream TM does not contain a Frame Header.
Therefore, a +26-byte compensation (not including the IFG field) or a +46-byte compensation
(including the IFG field) can be performed for the packet.
For more details, see Incoming packet in sub-interface accessing L3VPN networking.


Figure 1.4 Outgoing packet in sub-interface accessing VPLS networking

On the downstream interface on the user side:


 For a Type-A or Type-B board, the packet scheduled on the downstream TM does not contain the
IFG, VLAN tag, or CRC fields but contains an additional Frame Header, compared with the packet
sent to the link. Therefore, a -6-byte compensation (not including the IFG field) or a +14-byte
compensation (including the IFG field) can be performed for the packet.
 For a Type-C board, the packet scheduled on the downstream TM does not contain a Frame Header.
Therefore, a +8-byte compensation (not including the IFG field) or a +28-byte compensation
(including the IFG field) can be performed for the packet.
For more details, see Incoming packet in sub-interface accessing L3VPN networking.

Supplement to Packet Length Compensation


The network-header-length command used to configure packet length compensation is
configured in the service template. Certain service templates are predefined on Huawei
routers. When you use these service templates, you do not need to calculate the length
required for compensation.
To manually configure the compensation length, you can run the display interface command
to view statistics on the outbound interface and calculate the length of the packet sent to the
link (L1), and run the display port-queue command to view statistics about the queues and
calculate the length of the packet scheduled on the downstream TM (L2). The compensation
length is obtained using this formula: Length compensation = L1 - L2.
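As a worked example of the formula (the counter values are hypothetical; in practice L1 and L2 come from the byte and packet counters shown by the two display commands):

```python
def compensation_length(intf_bytes, intf_packets, queue_bytes, queue_packets):
    """Length compensation = L1 - L2, where L1 is the average packet length on
    the link (display interface) and L2 the average length scheduled on the
    downstream TM (display port-queue)."""
    l1 = intf_bytes / intf_packets
    l2 = queue_bytes / queue_packets
    return l1 - l2

# 1000 packets: 512 000 bytes on the link vs 486 000 bytes on the TM.
print(compensation_length(512_000, 1000, 486_000, 1000))  # 26.0 bytes
```

A result of 26 bytes matches the Type-C network-side case described earlier (L2 header + two MPLS labels + CRC, no IFG).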

Template Name          Conversion Type
bridge-outbound        Bridge packet conversion in the outbound direction of the tunnel
ip-outbound            IP packet conversion or IP-to-802.1Q packet conversion on the outbound interface
ip-outbound1           IP-to-QinQ packet conversion on the outbound interface
ip-outbound2           IP packet conversion on the outbound POS interface
ip-outbound3           IP packet conversion on the outbound FR interface
l3vpn-outbound         L3VPN packet conversion or L3VPN-to-802.1Q packet conversion on the outbound FR interface
l3vpn-outbound1        L3VPN-to-QinQ packet conversion on the outbound interface
l3vpn-outbound2        L3VPN packet conversion on the outbound POS or FR interface
pbt-outbound           PBT packet conversion on the outbound interface
vlan-mapping-outbound  VLAN mapping packet conversion on the outbound interface
vll-outbound           VLL packet conversion on heterogeneous media on the outbound POS interface or VLL-to-QinQ packet conversion in the outbound direction of the tunnel
vll-outbound1          VLL packet conversion on homogeneous media on the outbound POS interface or VLL-to-QinQ packet conversion in the outbound direction of the tunnel
vpls-outbound          VPLS-to-802.1Q packet conversion in the outbound direction of the tunnel
vpls-outbound1         VPLS-to-QinQ packet conversion in the outbound direction of the tunnel


9 QoS and Network Control Packets

QoS and Network Control Packets To Local CPU


The following packets are sent to the local CPU for processing:
 Packets that the PFE identifies, from the protocol field in the Layer 2 frame header, as
network control packets to be sent to the CPU for processing, such as ARP, RARP,
IS-IS, LLDP, STP/RSTP/MSTP, Eth-OAM, LACP, or PPP control packets. If the
destination IP address of a protocol packet is a reserved multicast IP address (ranging
from 224.0.0.1 to 224.0.0.255), the upstream PFE does not search the forwarding table
for packet forwarding.
 Network control packets encapsulated in TCP or UDP and then in IP (such as BFD,
BGP, RIP, DHCP, DNS, LDP, NTP, and SNMP packets) or encapsulated in IP directly
(such as ICMP, RSVP, VRRP, IGMP, and PIM packets). Although these network
control packets cannot be identified from the protocol field in the Layer 2 frame
header, their destination addresses are local IP addresses of the router, such as the IP
addresses of directly connected interfaces, loopback interfaces, or virtual-template
interfaces, or IP multicast addresses (such as those in DHCP Request messages).
Packets matching these destination addresses are considered packets sent to the router
itself and are therefore sent to the local CPU for processing.
The QoS process for network control packets sent to the local CPU of the router is almost the
same as that for other flows, as shown in Figure 1.1.


Figure 1.1 QoS Process for Network Control Packets To Local CPU

The differences are as follows:


 The committed access rate (CAR) action is not performed on the network control packets
that are sent to the CPU, preventing packet loss in the case of traffic bursts.
 Because the network control packets are sent to the CPU on the downstream boards,
traffic classification and re-marking are meaningless for these packets. Therefore, BA
traffic classification and MF traffic classification are not performed on network control
packets on the downstream boards.
 If a large number of packets are sent to the CPU for processing, the CPU will be
overloaded. To prevent this problem, CP-CAR is performed on the packets before they
are sent to the CPU. The mechanism of CP-CAR is similar to that of flow-based CAR.
Packets are separated into different channels based on the protocol type, VLAN, or user.
Each channel uses a token bucket to limit the packet rate. If the bandwidth of the packets
that are sent to the CPU exceeds a specified rate, the packets are randomly discarded.
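The per-channel token-bucket behavior of CP-CAR can be sketched as follows (the protocol names, rates, and bucket depths are hypothetical; the point is that each channel is rate-limited independently, so a flood on one protocol cannot starve the others):

```python
class Channel:
    """Per-protocol CP-CAR channel: one token bucket each."""
    def __init__(self, rate, depth):
        self.rate, self.depth, self.tokens, self.t = rate, depth, depth, 0.0

    def admit(self, size, now):
        # Refill tokens for the elapsed interval, capped at the bucket depth.
        self.tokens = min(self.depth, self.tokens + (now - self.t) * self.rate)
        self.t = now
        if size <= self.tokens:
            self.tokens -= size
            return True
        return False          # over the channel rate: packet is discarded

channels = {"bgp": Channel(50_000, 10_000), "arp": Channel(5_000, 2_000)}

def to_cpu(proto, size, now):
    return channels[proto].admit(size, now)

# An ARP flood exhausts only the ARP channel; BGP packets still reach the CPU.
arp = [to_cpu("arp", 1000, 0.0) for _ in range(4)]
print(arp)                       # first two pass, the rest are discarded
print(to_cpu("bgp", 1000, 0.0))  # unaffected by the ARP flood
```

This channelization is what protects routing protocol sessions when an attack or misconfiguration floods the router with one type of control packet.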


Therefore, the CP-CAR action, rather than the CAR action, is performed on the network
control packets on the downstream board.

For more details about QoS Process, see Packet Forwarding Process.

Priority Mapping Rules for Network Control Packets To Local CPU


As described in Which Priority Field is Trusted:
 If the inbound interface is not configured with the trust upstream command, all packets,
including the network control packets sent to the local CPU, are mapped to <BE, Green>
and enter the BE queue.
 If the inbound interface is configured with the trust upstream command, all packets,
including the network control packets sent to the local CPU, are mapped according to the
priority fields (802.1p, MPLS EXP, DSCP, ATM CLP, and so on) carried in the packets,
and enter the corresponding queues.
 If the inbound interface is configured with the trust upstream command but a packet
does not carry any priority field (802.1p, MPLS EXP, DSCP, ATM CLP, or the like), a
packet that can be identified as a network control packet from the protocol field in the
Layer 2 frame header, such as an ARP, RARP, IS-IS, LLDP, STP/RSTP/MSTP, Eth-
OAM, LACP, or PPP control packet, is mapped to <CS6, Green> and enters the CS6
queue; otherwise, the packet is considered an error packet and mapped to <BE, Green>.
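The three mapping rules above can be sketched as follows (the DSCP-to-class table and the protocol set are illustrative placeholders; the actual upstream mapping tables are configurable):

```python
# Illustrative DSCP -> (service class, color) table; real tables are larger
# and configurable per trust-upstream profile.
DSCP_MAP = {46: ("EF", "green"), 48: ("CS6", "green"), 0: ("BE", "green")}
L2_CONTROL = {"ARP", "RARP", "IS-IS", "LLDP", "STP", "LACP"}

def upstream_mapping(trust_upstream, dscp, l2_protocol):
    if not trust_upstream:
        return ("BE", "green")                      # rule 1: everything to BE
    if dscp is not None:
        return DSCP_MAP.get(dscp, ("BE", "green"))  # rule 2: trust the field
    if l2_protocol in L2_CONTROL:
        return ("CS6", "green")                     # rule 3: L2 control packet
    return ("BE", "green")                          # otherwise: error packet

print(upstream_mapping(False, 48, None))    # no trust: BE even for CS6 DSCP
print(upstream_mapping(True, 48, None))     # trusted DSCP 48 -> CS6
print(upstream_mapping(True, None, "ARP"))  # no priority field, L2 control
```

Note how rule 1 overrides everything: without trust upstream, even a DSCP 48 routing packet lands in the BE queue, which is why enabling trust upstream matters for protocol stability under congestion.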

QoS and Locally Generated Network Control Packets


As shown in Figure 1.1, locally generated network control packets are directly delivered to
the PFE, without being processed by the PIC. Because most locally generated packets
carry destination board and outbound interface information, the PFE does not need to search
the forwarding table, and the packets directly enter a queue. For a small number of special
packets, such as ping packets with a specified source interface (triggered by the ping
destination-ip si interface-name command), the PFE needs to search the forwarding table
because the IP address of the source interface is unknown. Then, the special packets are sent
to the TM without going through CAR limitation. The subsequent processing of locally
generated network control packets is similar to that of other packets, except that CAR and
traffic classification are not performed on the locally generated network control packets on
the downstream board.


Figure 1.1 QoS Process for Locally Generated Network Control Packets

Priority Mapping for Locally Generated Network Control Packets


To ensure that the packets are scheduled preferentially, a locally generated network
control packet is mapped to <CS6, Green> by default and enters the CS6 queue. You can use
the host-packet dscp dscp-value map local-service service-class [ color color ] command to
modify the priority mapping rule.

Priority Setting for Locally Generated Network Control Packets


You can use the host-packet type { management-protocol | control-protocol } dscp dscp-
value command to modify the DSCP values for some management protocol packets, including
SSH, FTP, TFTP, Telnet, SNMP, and Syslog, and some network control packets,
including BFD, BGP, LDP, OSPF, and IS-IS.
Table 1.1 lists the default priorities of locally generated network control packets.


Table 1.1 Default Priorities of Locally Generated Network Control Packets


Protocol Default Priority

IP Precedence = 7 (P2P over UDP)


1588v2 / Precision Time Protocol (PTP)
802.1p = 7 (P2P over Ethernet)
802.1ag 802.1p = 7
802.3ah No priority field
Address Resolution Protocol (ARP) /
Reverse Address Resolution Protocol No priority field
(RARP)
IP Precedence = 6
Border Gateway Protocol (BGP)
TOS = 0xc0
IP Precedence = 7
Bidirectional Forwarding Detection (BFD)
TOS = 0xe0
Ethernet Ring Protection Switching
802.1p = 7
(ERPS) / G.8032
IP Precedence = 6
File Transfer Protocol (FTP) Control
TOS = 0xc0
IP Precedence = 0
Internet Control Message Protocol (ICMP)
TOS = 0x00
ICMPv6 Echo (type = 128/129, code = 0) IPv6 Traffic Class = 0x00
ICMPv6 Router Solicitation (RS) / Router
Advertisement (RA) / Neighbor
Solicitation (NS) / Neighbor Advertisement IPv6 Traffic Class = 0xC0
(NA) / Redirect messages
(type = 133/134/135/136/137, code = 0)
ICMPv6 other messages IPv6 Traffic Class = 0x00

Internet Group Management Protocol IP Precedence = 6


(IGMP) TOS = 0xc0
IP Precedence = 6
Internet Key Exchange (IKE)
TOS = 0xc0

IP Flow Performance Measurement (IP IP Precedence = 6


FPM) TOS = 0xc0
Intermediate System to Intermediate
No priority field
System (IS-IS)

Layer Two Tunneling Protocol (L2TP) IP Precedence = 6


Control TOS = 0xc0
Link Aggregation Control Protocol
No priority field
(LACP)

Issue 07 (2018-06-18) Huawei Proprietary and Confidential 222


Copyright © Huawei
Technologies Co., Ltd.
QoS and Network Control PacketsQoS and Network
Special Topic - QoS Control Packets

Protocol Default Priority

Label Distribution Protocol (LDP) IP Precedence = 6


Link Layer Discovery Protocol (LLDP) No priority field
IP Precedence = 6
Multicast Listener Discovery (MLD)
TOS = 0xc0

Multicast Source Discovery Protocol IP Precedence = 0


(MSDP) TOS = 0x00
Multi-Spanning Tree Protocol (MSTP) No priority field
IP Precedence = 0
Network Quality Analysis (NQA)
TOS = 0x00
IP Precedence = 6
Network Time Protocol (NTP)
TOS = 0xc0
IP Precedence = 6
Open Shortest Path First (OSPF)
TOS = 0xc0
Open Shortest Path First version 3
IPv6 Traffic Class = 0xC0
(OSPFv3)
IP Precedence = 6
Protocol Independent Multicast (PIM)
TOS = 0xc0
IP Precedence = 0
Routing Information Protocol (RIP)
TOS = 0x00
RIP next generation (RIPng) IPv6 Traffic Class = 0xC0
Rapid Ring Protection Protocol (RRPP) 802.1p = 7
Rapid Spanning Tree Protocol (RSTP) No priority field
IP Precedence = 6
Resource Reservation Protocol (RSVP)
TOS = 0xc0

Simple Network Management Protocol IP Precedence = 6


(SNMP) TOS = 0xc0
IP Precedence = 6
Secure Shell (SSH)
TOS = 0xc0
Spanning Tree Protocol (STP) No priority field

Telecommunication Network Protocol IP Precedence = 6


(Telnet) TOS = 0xc0
IP Precedence = 6
Trivial File Transfer Protocol (TFTP)
TOS = 0xc0
Two-Way Active Measurement Protocol IP Precedence = 0

Issue 07 (2018-06-18) Huawei Proprietary and Confidential 223


Copyright © Huawei
Technologies Co., Ltd.
QoS and Network Control PacketsQoS and Network
Special Topic - QoS Control Packets

Protocol Default Priority

(TWAMP) TOS = 0x00

Virtual Router Redundancy Protocol IP Precedence = 6


(VRRP) TOS = 0xc0
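The IP Precedence and TOS values in Table 1.1 are not independent settings: IP Precedence occupies the top three bits of the IPv4 TOS byte, so precedence 6 corresponds to TOS 0xc0 and precedence 7 to TOS 0xe0. The relationship can be sketched as follows; the helper names are illustrative only and are not part of any product CLI or API.

```python
def precedence_to_tos(precedence: int) -> int:
    """IP Precedence occupies bits 7-5 of the IPv4 TOS byte."""
    if not 0 <= precedence <= 7:
        raise ValueError("IP Precedence is a 3-bit field")
    return precedence << 5

def precedence_to_dscp(precedence: int) -> int:
    """Equivalent Class Selector code point (RFC 2474): CSx = x << 3."""
    return precedence << 3

# Values from Table 1.1:
assert precedence_to_tos(6) == 0xC0   # BGP, OSPF, SNMP, ...
assert precedence_to_tos(7) == 0xE0   # BFD
assert precedence_to_tos(0) == 0x00   # ICMP, RIP, TWAMP
assert precedence_to_dscp(6) == 48    # CS6
```

The same shift explains the IPv6 entries: Traffic Class 0xC0 carries CS6 in its upper six bits.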

10 Board Type Conventions

Type Name: Boards List (NE40E/NE80E/NE5000E/CX600) / Boards List (ME60)

Type-A
  NE40E/NE80E/NE5000E/CX600: LPUA; LPUG; LPUH; LPUF-10
  ME60: BSUF-10; MSUF-10

Type-B
  NE40E/NE80E/NE5000E/CX600: LPUB; LPUF-20, LPUF-21, LPUF-21-A, LPUF-21-B, LPUF-21-E; LPUS-20; LPUF-40, LPUF-40-A, LPUF-40-B, LPUF-40-E; LPUI-40; NPUI-20
  ME60: BSUF-21; MSUF-21; BSUF-40; MSUF-40; BSUI-40; MSUI-40

Type-C
  NE40E/NE80E/NE5000E/CX600: LPUI-41; LPUS-41; LPUF-100; LPUI-100; LPUS-100
  ME60: BSUI-41; BSUF-100; BSUI-100, BSUI-100-E; MSUF-100; MSUI-100

Type-D
  NE40E/NE80E/NE5000E/CX600: LPUI-21-L; LPUF-50, LPUF-50-L; LPUF-51, LPUF-51-B, LPUF-51-E, LPUF-51-L; LPUF-51 with P51-H; LPUI-51, LPUI-51-B, LPUI-51-E, LPUI-51-L, LPUI-51-S; LPUS-51; LPUF-101, LPUF-101-B; LPUI-101, LPUI-101-B; LPUS-101; LPUF-102, LPUF-102-E; LPUI-102, LPUI-102-E; LPUF-120, LPUF-120-B, LPUF-120-E; LPUF-120 with P120-H; LPUI-120, LPUI-120-B, LPUI-120-E, LPUI-120-L; LPUS-120; LPUF-200; LPUI-200, LPUI-200-L; LPUF-400; LPUF-240, LPUF-240-B, LPUF-240-E; LPUF-240 with P240-H; LPUI-240, LPUI-240-B, LPUI-240-L; LPUF-480, LPUI-480; LPUI-1T, LPUI-1T-B; NPU-50, NPU-50-E; NPU-120, NPU-120-E; Integrated NPU (80G) in NE40E-M2E/CX600-M2E; Integrated NPU (160G) in NE40E-M2F/CX600-M2F
  ME60: BSUF-51; BSUI-51, BSUI-51-E; MSUF-51; MSUI-51; BSUF-240

11 References

About This Chapter


11.1 IETF RFCs
11.2 Broadband Forum Technical Specifications
11.3 MEF Technical Specifications

11.1 IETF RFCs


RFCs Title Link

RFC 791 Internet Protocol Specification http://www.ietf.org/rfc/rfc791


RFC 1349 Type of Service in the Internet Protocol http://www.ietf.org/rfc/rfc1349
Suite
RFC 1483 Multiprotocol Encapsulation over ATM http://www.ietf.org/rfc/rfc1483
Adaptation Layer 5
RFC 1633 Integrated Services in the Internet http://www.ietf.org/rfc/rfc1633
Architecture: an Overview
RFC 2474 Definition of the Differentiated http://www.ietf.org/rfc/rfc2474
Services Field (DS Field) in the IPv4
and IPv6 Headers
RFC 2475 An Architecture for Differentiated http://www.ietf.org/rfc/rfc2475
Services
RFC 2597 Assured Forwarding PHB Group http://www.ietf.org/rfc/rfc2597
RFC 2598 An Expedited Forwarding PHB http://www.ietf.org/rfc/rfc2598
RFC 2697 A Single Rate Three Color Marker http://www.ietf.org/rfc/rfc2697
RFC 2698 A Two Rate Three Color Marker http://www.ietf.org/rfc/rfc2698
RFC 3168 The Addition of Explicit Congestion http://www.ietf.org/rfc/rfc3168
Notification (ECN) to IP
RFC 3209 RSVP-TE: Extensions to RSVP for LSP http://www.ietf.org/rfc/rfc3209
Tunnels
RFC 3246 An Expedited Forwarding PHB (Per- http://www.ietf.org/rfc/rfc3246
Hop Behavior)
RFC 3260 New Terminology and Clarifications for http://www.ietf.org/rfc/rfc3260
Diffserv
RFC 3270 Multi-Protocol Label Switching http://www.ietf.org/rfc/rfc3270
(MPLS) Support of Differentiated
Services
RFC 3545 Enhanced Compressed RTP (CRTP) for http://www.ietf.org/rfc/rfc3545
Links with High Delay, Packet Loss
and Reordering
RFC 3564 Requirements for Support of http://www.ietf.org/rfc/rfc3564
Differentiated Services-aware MPLS
Traffic Engineering
RFC 4124 Protocol Extensions for Support of http://www.ietf.org/rfc/rfc4124
Diffserv-aware MPLS Traffic
Engineering
RFC 4125 Maximum Allocation Bandwidth http://www.ietf.org/rfc/rfc4125
Constraints Model for Diffserv-aware
MPLS Traffic Engineering
RFC 4127 Russian Dolls Bandwidth Constraints http://www.ietf.org/rfc/rfc4127
Model for Diffserv-aware MPLS
Traffic Engineering
RFC 4717 Encapsulation Methods for Transport of http://www.ietf.org/rfc/rfc4717
Asynchronous Transfer Mode (ATM)
over MPLS Networks
RFC 4816 Pseudowire Emulation Edge-to-Edge http://www.ietf.org/rfc/rfc4816
(PWE3) Asynchronous Transfer Mode
(ATM) Transparent Cell Transport
Service
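RFC 2697 and RFC 2698 above define the token-bucket markers referenced throughout this document's traffic policing (CAR) material. As an illustration of what RFC 2697 specifies, a color-blind single rate Three Color Marker can be sketched as follows; this is a minimal teaching sketch, and the class and method names are this illustration's own, not an API of any product described here.

```python
class SingleRateThreeColorMarker:
    """Color-blind srTCM (RFC 2697): one token rate (CIR) feeds a
    committed bucket (capacity CBS) and an excess bucket (capacity EBS)."""

    def __init__(self, cir_bps: float, cbs: int, ebs: int):
        self.cir = cir_bps / 8.0          # token rate in bytes/second
        self.cbs, self.ebs = cbs, ebs
        self.tc, self.te = cbs, ebs       # both buckets start full
        self.last = 0.0                   # timestamp of last update

    def mark(self, size: int, now: float) -> str:
        # Accrue tokens since the last packet; fill the C bucket first,
        # then spill the remainder into the E bucket (RFC 2697, section 3.1).
        tokens = (now - self.last) * self.cir
        self.last = now
        room_c = self.cbs - self.tc
        if tokens <= room_c:
            self.tc += tokens
        else:
            self.tc = self.cbs
            self.te = min(self.ebs, self.te + tokens - room_c)
        # Color the packet and debit the bucket it fits in.
        if size <= self.tc:
            self.tc -= size
            return "green"
        if size <= self.te:
            self.te -= size
            return "yellow"
        return "red"

m = SingleRateThreeColorMarker(cir_bps=8000, cbs=1500, ebs=1500)
print(m.mark(1500, now=0.0))  # "green"  (committed bucket starts full)
print(m.mark(1500, now=0.0))  # "yellow" (drawn from the excess bucket)
print(m.mark(1500, now=0.0))  # "red"    (both buckets exhausted)
```

The two-rate marker of RFC 2698 differs in that the second bucket is refilled at its own rate (PIR) rather than by overflow from the committed bucket.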


11.2 Broadband Forum Technical Specifications


No. Title Link

TR-059 DSL Evolution - Architecture Requirements for the Support of QoS-Enabled IP Services
http://www.broadband-forum.org/technical/download/TR-059.pdf
TR-101 Migration to Ethernet Based DSL Aggregation
http://www.broadband-forum.org/technical/download/TR-101.pdf
TR-126 Triple-Play Services Quality of Experience (QoE) Requirements
http://www.broadband-forum.org/technical/download/TR-126.pdf

11.3 MEF Technical Specifications


No. Title

MEF 6.1 Ethernet Services Definitions, Phase 2


MEF 10.1 Ethernet Services Attributes, Phase 2


12 Abbreviations

Abbreviation Full Spelling

3
3G 3rd Generation
A
AAL ATM Adaptation Layer
ABR Available Bit Rate
ACL Access Control List
AF Assured Forwarding
ATM Asynchronous Transfer Mode
B
BA Behavior Aggregate
BC Bandwidth Constraint
BE Best-Effort
BGP Border Gateway Protocol
BRAS Broadband Remote Access Server
C
CAC Call Admission Control
CAR Committed Access Rate
CBR Constant Bit Rate
CBS Committed Burst Size
CDVT Cell Delay Variation Tolerance
CE Customer Edge

CIR Committed Information Rate
CLP Cell Loss Priority
CLR Cell Loss Rate
COS Class Of Service
CQ Class Queue
CR Core Router
CR-LSP Constraint-based Routed LSP
CRC Cyclic Redundancy Check
CS Class Selector
CSCP Class Selector Code Point
CSPF Constraint Shortest Path First
CT Class Type
CTD Cell Transfer Delay
D
DF Don't Fragment
DiffServ Differentiated Service
DRR Deficit Round Robin
DS Differentiated Service
DS-TE DiffServ-aware Traffic Engineering
DSCP Differentiated Services Code Point
DSLAM Digital Subscriber Line Access Multiplexer
DWRR Deficit Weighted Round Robin
E
E-LSP EXP-Inferred-PSC (PHB Scheduling Class)
LSP
EBS Excess Burst Size
EF Expedited Forwarding
EFCI Explicit Forward Congestion Indication
eTM egress Traffic Manager
EXP Experimental Bits
F

FIC Fabric Interface Controller
FIFO First In First Out
FL Flow Label
FQ Flow Queue
FR Frame Relay
FRR Fast ReRoute
FTP File Transfer Protocol
G
GQ Group Queue
GRE Generic Routing Encapsulation
H
HG Home Gateway
HQoS Hierarchical Quality of Service
HSI High Speed Internet
HTML Hypertext Markup Language
HTTP Hypertext Transfer Protocol
I
ICMP Internet Control Message Protocol
IEEE Institute of Electrical and Electronics
Engineers
IETF Internet Engineering Task Force
IGMP Internet Group Management Protocol
IGW Internet Gateway
IntServ Integrated Service
IP Internet Protocol
IPinIP IP in IP Encapsulation
IPoA IP over ATM
IPoEoA IP over Ethernet over ATM
IPoE Internet Protocol over Ethernet
IPTV Internet Protocol Television
IPv4 Internet Protocol version 4

IPv6 Internet Protocol version 6
ISP Internet Service Provider
L
L-LSP Label-Only-Inferred-PSC (PHB Scheduling
Class) LSP
L3VPN Layer 3 Virtual Private Network
LAN Local Area Network
LDP Label Distribution Protocol
LPQ Low Priority Queue
LPU Line Processing Unit
LR Line Rate
LSP Label Switched Path
LSR Label Switching Router
LSW LAN Switch
M
MAC Medium Access Control
MAM Maximum Allocation Model
MBS Maximum Burst Size
MCDT Maximum Cell Transfer Delay
MCR Minimum Cell Rate
MF Multi-Field
MP Merge Point
MPLS Multiprotocol Label Switching
MTU Maximum Transmission Unit
N
NGN Next Generation Network
NPC Network Parameter Control
NRT-VBR Non Real Time-Variable Bit Rate
O
OLT Optical Line Terminal
ONT Optical Network Terminal

OSPF Open Shortest Path First
P
PBS Peak Burst Size
PC Personal Computer
PCP Priority Code Point
PCR Peak Cell Rate
PE Provider Edge
PFE Packet Forward Engine
PHB Per-Hop Behavior
PIC Physical Interface Card
PIR Peak Information Rate
PLR Point of Local Repair
PPP Point-to-Point Protocol
PPPoA PPP over ATM
PPPoEoA PPP over Ethernet over ATM
PQ Priority Queuing
PSN Packet Switched Network
PSNoATM PSN over ATM
PWE3 Pseudowire Emulation Edge-to-Edge
PVC Permanent Virtual Circuit
PVP Permanent Virtual Path
Q
QinQ 802.1Q in 802.1Q
QoS Quality of Service
QPPB QoS Policy Propagation on BGP
R
RDM Russian Dolls Model
RED Random Early Detection
RFC Request For Comments
RR Round Robin
RSVP Resource Reservation Protocol

RT-VBR Real Time-Variable Bit Rate
RTN Radio Transmission Node
RTT Round Trip Time
S
SCR Sustainable Cell Rate
SFD Start-of-Frame Delimiter
SLA Service Level Agreement
SLS Service Level Specification
SP Strict Priority
SQ Subscriber Queue
SR Service Router
STB Set Top Box
SVC Switched Virtual Circuit
T
TB Target Blade
TC Traffic Class
TCA Traffic Conditioning Agreement
TCP Transmission Control Protocol
TCS Traffic Conditioning Specification
TE Traffic Engineering
TLV Type-Length-Value
TM Traffic Manager
ToS Type of Service
TP Traffic Policing
TTL Time To Live
U
UBR Unspecified Bit Rate
UCL User Control List
UDP User Datagram Protocol
UPC Usage Parameter Control
URPF Unicast Reverse Path Forwarding


V
VBR Variable Bit Rate
VC Virtual Circuit
VCC Virtual Circuit Connection
VCI Virtual Channel Identifier
VE Virtual Ethernet
VI Virtual Interface
VLAN Virtual Local Area Network
VLL Virtual Leased Line
VoD Video on Demand
VoIP Voice over IP
VOQ Virtual Output Queue
VP Virtual Path
VPI Virtual Path Identifier
VPN Virtual Private Network
W
WFQ Weighted Fair Queuing
WRED Weighted Random Early Detection
WRR Weighted Round Robin
WWW World Wide Web
