In this part we will get to know congestion management, queuing mechanisms based on fair queuing such as Weighted Fair Queuing and Class-Based Weighted Fair Queuing, and how to configure them. We will also look at how to avoid congestion.
Congestion Management is a set of QoS features that handle the queuing and scheduling of traffic
A queue is a memory structure that holds incoming packets (prior to the forwarding lookup) and egress packets (after the lookup). In QoS we refer to "interface queues" for received packets (ingress queues) and forwarded packets (egress queues). By default the queues are configured as FIFO (First In, First Out), so an incoming burst may cause congestion. FIFO does not control which packet will be held and which will be transmitted. Congestion management provides some control over the order of packet transmission.
On an IOS router the following things can be selected/modified in order to manage congestion:
– Queuing: determination of which packet goes into which queue
– Drop Policy: what happens if a queue starts to get too full (e.g. if HTTP traffic is allotted 40% of the queue and exceeds that 40%, its packets will be randomly dropped)
– Scheduling: how packets are sent from particular queues to the interface
– Maximum numbers of queues
– Maximum queue length, i.e. the maximum number of packets allowed in the queue (only on routers)
On routers a policy-map that contains queuing commands can only be applied to interfaces in the outbound direction; on switches it can be configured inbound, but rarely is.
The queuing mechanisms:
1. FIFO – the default queuing mechanism on switches and most router interfaces. The packet that arrived first is sent first.
no fair-queue – enables FIFO manually (by disabling fair queuing)
hold-queue <0-240000> [in | out] – number of packets allowed in the queue / depth of the queue
show ip int fa0/1
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
Queueing strategy: fifo
Output queue: 0/40 (size/max)
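As a sketch, tuning the FIFO queue depth on a router interface could look like this (the interface name and queue depth are example values, not recommendations):

interface Serial0/0
 no fair-queue
 hold-queue 100 out
! "no fair-queue" reverts the interface to plain FIFO,
! "hold-queue 100 out" allows up to 100 packets in the output queue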
2. FAIR QUEUING
Places different types of traffic into different queues so "important" traffic can be scheduled more frequently than other traffic.
What is a flow?
A flow is a group of packets belonging to the same conversation, with the same L3 source/destination addresses, L4 protocol and ports, IP Precedence and DSCP. There are 2 categories of flows: high bandwidth and low bandwidth.
Types of Fair Queuing:
Generic Fair Queuing – sorts traffic into flows, and each flow is given its own queue, so for instance we have 3 different queues for HTTP, Telnet and voice traffic. Depending on the IP Precedence set for a given flow, a bigger or smaller portion of each packet is taken and sent. If the IP Precedence is higher, a smaller portion of the packet is sent but more often; if the IP Precedence is lower, the fair queuing mechanism waits longer until it collects the required portion of the packet and sends it.
Flow-Based WFQ – Weighted Fair Queuing
Each flow is given a dynamic weight/priority (the lower the weight, the greater the bandwidth allocated to that flow). Depending on the platform there may be up to 256 unique flows. WFQ is the default on low-speed serial interfaces. WFQ is not supported with tunneling and encryption, and switches don't support WFQ. WFQ is similar to generic fair queuing, but instead of sending a portion of each packet it sends the entire packet, so packets from a queue with higher precedence are sent before those from a lower one.
Configuration of WFQ
bandwidth percent <1-99> (we can't assign 100%; the remaining bandwidth is left for class-default)
fair-queue [number of queues]
The queuing strategy of WFQ will be visible under "show interface" as class-based queuing, but under "show policy-map interface" as fair queue: per-flow queue.
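A minimal sketch of enabling WFQ on a serial interface (the interface name is an example; with no arguments the command uses the platform defaults):

interface Serial0/0/0
 fair-queue
! enables WFQ with the default congestive discard threshold and
! default number of dynamic queues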
CBWFQ – Class Based Weighted Fair Queuing
Extends WFQ by allowing the user to define traffic classes (WFQ does this based only on flow criteria). In this way multiple flows of traffic, which in WFQ would be in different queues, are grouped together. The main benefit is that we now have control over the minimum bandwidth allocated to each class-based flow.
The features in CBWFQ:
Queuing – queues traffic based on anything supported by MQC
Dropping – utilizes Tail Drop or WRED (configurable per queue)
Scheduling – the default is FIFO; otherwise shared round robin based on the bandwidth settings
Details of queues
We may have up to 64 queues per interface and may allocate bandwidth to a particular class (queue). When the queue for a particular class reaches its maximum size, tail drop will ensue (traffic will be dropped); that size is defined by "queue-limit [number of packets]".
class-map match-all Flow1
 match access-group 102
class-map match-all Flow2
 match ip dscp af31
policy-map CBWFQ
 class Flow1
  bandwidth percent 50
 class Flow2
  bandwidth percent 10
interface serial 0/1/0
 service-policy output CBWFQ
When we run "show policy-map interface" we will see the bandwidth and traffic statistics for the particular classes.
LLQ – Low Latency Queuing, also called PQ (Priority Queuing)/CBWFQ
This is an add-on feature to CBWFQ. It allows us to convert one or more of the defined classes into a priority queue, and is recognized by IOS by the "priority" command within a policy-map class.
LLQ was originally designed for voice traffic: it is serviced before any other traffic, which also prevents jitter. With CBWFQ/LLQ we can direct any kind of traffic into the LLQ. During configuration we have to specify the bandwidth of the LLQ (the maximum bandwidth of this queue during congestion).
priority percent 20
If we have 2 or more class maps with a fixed priority, they will be bundled into one single queue with the reserved bandwidth.
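Putting the pieces together, a hedged LLQ sketch (the class names VOICE and DATA are hypothetical and would be defined in class-maps matching e.g. DSCP EF and an access-group):

policy-map LLQ-POLICY
 class VOICE
  priority percent 20
! strict priority queue, capped at 20% of bandwidth during congestion
 class DATA
  bandwidth percent 30
! ordinary CBWFQ class with a minimum bandwidth guarantee
 class class-default
  fair-queue
interface serial 0/1/0
 service-policy output LLQ-POLICY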
CONGESTION AVOIDANCE
Congestion Avoidance is a set of features that attempt to prevent queues from becoming congested. A single queue becomes congested when it is 100% full, which causes tail drop (dropping packets) and leads to global synchronization. Global synchronization occurs when a few streams reach 100% of a queue's capacity and drop to zero immediately, then again climb from zero to 100%, back to 0%, and so on. The task of congestion avoidance is to prevent these streams from reaching 100% of the queue capacity.
Congestion Avoidance can be done in 3 places
– ingress interface queue
– at the forwarding engine (policing)
– egress queue (drop thresholds)
Congestion Avoidance mechanisms within queues:
WTD – Weighted Tail Drop (Switches)
WRED Weighted Random Early Discard (Routers & switches)
WTD and WRED
– both use minimum and maximum thresholds
What descriptors can be matched against a threshold?
WTD – internal DSCP
WRED – anything matched in class-map
We configure the minimum and maximum thresholds and decide at which level traffic will be dropped. We may, for example, set up that at 60% of the capacity of a given queue, frames with CoS 0 to 3 will be dropped, while frames with higher CoS values will still be forwarded.
WTD example of configuration
mls qos queue-set output [qset-id] threshold [queue-id] … – command that sets up the thresholds
mls qos srr-queue output dscp-map queue [queue-id] threshold [threshold-id] dscp1 … dscp8 – command that maps DSCP values to a threshold
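A filled-in sketch of the two commands above (Catalyst 3560/3750 syntax; the numeric values are example thresholds, not recommendations):

mls qos queue-set output 1 threshold 2 40 100 100 100
! queue-set 1, queue 2: drop threshold 1 = 40%, drop threshold 2 = 100%,
! reserved and maximum thresholds = 100%
mls qos srr-queue output dscp-map queue 2 threshold 1 0 1 2 3
! DSCP values 0-3 are mapped to queue 2, threshold 1,
! so they are dropped first once the queue is 40% full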
WRED – random packet drops start at the minimum threshold (not everything at once, unlike WTD) and increase linearly until the maximum threshold is reached. After the max threshold is reached, WRED drops 100% of all subsequently received packets.
WRED example of configuration
1. Select a class from within Policy-map
2. Apply the "random-detect" command to the class
3. Select the following
– hit the carriage return and accept WRED defaults
– (optional) choose what characteristics WRED will look for when dropping packets (e.g. DSCP, IP Precedence, CoS, etc.)
– (optional) configure the minimum and maximum thresholds
– (optional) change the mark probability denominator
WRED drop probability denominator
The rate of packet drop between the min and max thresholds can be tuned using the "mark probability denominator" (MPD). If the denominator is set to 100, one in every 100 packets will be dropped just prior to the max threshold being reached. The MPD is not configurable on Cisco switches, only on routers!
random-detect [precedence | dscp] 1 1000 2000 100 – minimum threshold 1000 and maximum threshold 2000 (in packets, <1-65535>), mark probability denominator 100
If the queue of packets with IP precedence/DSCP 1 reaches 1000 packets, then packets will be randomly dropped (up to one in every 100 just before the max threshold) until we reach 2000 packets in the queue; after that tail drop occurs and all arriving traffic will be dropped.
WRED by default applies against IP Precedence values; we change this default behaviour with a command under the policy-map: random-detect dscp-based
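A hedged WRED sketch under a policy-map (the policy name and threshold values are examples):

policy-map WRED-POLICY
 class class-default
  fair-queue
  random-detect dscp-based
! match against DSCP instead of the default IP Precedence
  random-detect dscp af21 1000 2000 100
! AF21: min threshold 1000 packets, max threshold 2000 packets, MPD 100
interface Serial0/0/1
 service-policy output WRED-POLICY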
show policy-map int
POLICING AND SHAPING
The CIR (Committed Information Rate) is a download/upload speed agreed with the ISP. Policing and shaping measure the rate of traffic against the configured CIR and:
Shapers buffer excess traffic – used on the customer side to prevent the traffic from being dropped by the ISP's policer
Policers typically drop excess traffic – used on the ISP side to enforce the contract
RATES and COLORS
The router may work as:
Single rate – policer monitoring traffic at one CIR
Dual rate – policer monitoring CIR and PIR (Peak Information Rate)
If the CIR is 5 Mb/s and the PIR is set to 10 Mb/s, then the ISP will forward traffic up to 10 Mb/s, but the traffic above the CIR is marked with IP precedence or DSCP 0, which means that it may be dropped further along if congestion occurs in the ISP network.
Even if the device is a dual-rate policer, we don't have to specify the PIR! If the PIR is not specified, then PIR = CIR.
If router is dual rate policer that means that we may apply colors as well.
A “COLOR” is another term for what we can do with traffic that is beneath or above our CIR
– Two color policer – “Color 1” conform-action , “Color 2” exceed-action
– Three color policer – “Color 1” conform-action, “Color 2” exceed-action, “Color 3” violate-action
Color 1 is up to CIR – and usually action is “don’t touch this traffic”
Color 2 is up to PIR – and usually action is “send the traffic but change IP precedence or DSCP to 0 ”
Color 3 is above PIR – action is drop the traffic
Configuration under policy-map with “police” command
police cir ….
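As a sketch, a three-color policer matching the rates discussed above (the policy name, interface and rates are example values):

policy-map POLICE-CUSTOMER
 class class-default
  police cir 5000000 pir 10000000 conform-action transmit exceed-action set-dscp-transmit 0 violate-action drop
! up to 5 Mb/s: transmit unchanged; 5-10 Mb/s: re-mark to DSCP 0
! and transmit; above 10 Mb/s: drop
interface GigabitEthernet0/1
 service-policy input POLICE-CUSTOMER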
Shaping is the same idea as policing (rate-limiting of traffic), but we can't drop the traffic; we may only buffer it (unless the queue is full, in which case packets will be dropped). It is typically done on the customer's egress interface when the ISP has configured some kind of policing on the other side.
Configuration is done under a policy-map with the "shape" command.
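A minimal shaping sketch on the customer egress interface (the 5 Mb/s value is an example matching the CIR discussed above):

policy-map SHAPE-TO-CIR
 class class-default
  shape average 5000000
! shape all outbound traffic to an average of 5 Mb/s;
! excess traffic is buffered rather than dropped
interface Serial0/0/1
 service-policy output SHAPE-TO-CIR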