Quick Notes on the 3560 Egress Queuing

The goal of this article is to discuss how the following configuration would work on the 3560 series switches:
interface FastEthernet0/13
 switchport mode access
 load-interval 30
 speed 10
 srr-queue bandwidth shape 50 0 0 0
 srr-queue bandwidth share 33 33 33 1
 srr-queue bandwidth limit 20
Before we begin, let’s recap what we know so far about the 3560 egress queuing:
1) When the SRR scheduler is configured in shared mode, the bandwidth allocated to each queue is based on its relative weight. E.g. when configuring “srr-queue bandwidth share 30 20 25 25” we obtain the weight sum 30+20+25+25 = 100 (it could be a different number, but it’s convenient to reference “100” as a representation of 100%). The relative weights are therefore “30/100”, “20/100”, “25/100”, “25/100”, and you can calculate the effective bandwidth *guaranteed* to a queue by multiplying this weight by the interface bandwidth: e.g. 30/100*100Mbps = 30Mbps for a 100Mbps interface and 30/100*10Mbps = 3Mbps for a 10Mbps interface. Of course, the weights are only taken into consideration when the interface is oversubscribed, i.e. experiences congestion.
2) When configured in shaped mode, the bandwidth restriction (policing) for each queue is based on the inverse absolute weight. E.g. for “srr-queue bandwidth shape 30 0 0 0” we effectively restrict the first queue to “1/30” of the interface bandwidth (approximately 3.3Mbps for a 100Mbps interface and approximately 330Kbps for a 10Mbps interface). Setting the SRR shape weight to zero means no shaping is applied to that queue. When shaping is enabled for a queue, the SRR scheduler does not use the shared weight corresponding to this queue when calculating the relative bandwidth for the shared queues.
3) You can mix shaped and shared settings on the same interface. For example, two queues may be configured for shaping and the others for sharing:
interface FastEthernet0/13
 srr-queue bandwidth share 100 100 40 20
 srr-queue bandwidth shape  50  50  0  0
Suppose the interface rate is 100Mbps; then queues 1 and 2 will each get 2Mbps (1/50 of the interface rate), and queues 3 and 4 will share the remaining bandwidth (100-2-2=96Mbps) in the proportion “2:1”. Note that queues 1 and 2 are both guaranteed and limited to 2Mbps at the same time.
4) The default “shape” and “share” weight settings are “25 0 0 0” and “25 25 25 25” respectively. This means queue 1 is policed down to 4Mbps on a 100Mbps interface by default (400Kbps on a 10Mbps interface) and the remaining bandwidth is shared equally among the other queues (2-4). So take care when you enable “mls qos” on a switch.
5) When you enable “priority-queue out” on an interface, it turns queue 1 into a priority queue, and the scheduler effectively does not account for this queue’s weight in its calculations. Note that the PQ ignores shaped mode settings as well, and this may starve the other queues.
6) You can apply “aggregate” egress rate-limiting to a port by using the command “srr-queue bandwidth limit xx” at the interface level. Effectively, this command limits the interface sending rate down to xx% of the interface capacity. Note that the range starts at 10%, so if you need rates lower than 10Mbps, consider changing the port speed down to 10Mbps.
How will this setting affect SRR scheduling? Remember that SRR shared weights are relative, and therefore the shared queues will split the new, reduced bandwidth among themselves. However, shaped queue rates are based on absolute weights calculated off the interface bandwidth (e.g. 10Mbps or 100Mbps) and are subtracted from the interface’s “available” bandwidth. Consider the example below:
interface FastEthernet0/13
 switchport mode access
 speed 10
 srr-queue bandwidth shape 50 0 0 0
 srr-queue bandwidth share 20 20 20 20
 srr-queue bandwidth limit 20
The interface sending rate is limited to 2Mbps. Queue 1 is shaped to 1/50 of 10Mbps, which is 200Kbps of bandwidth. The remaining bandwidth, 2000-200=1800Kbps, is divided among the other queues in the proportion 20:20:20 = 1:1:1. That is, in case of congestion with all queues filled up, queue 1 will get 200Kbps and queues 2-4 will get 600Kbps each. A small Python sketch of this arithmetic (covering points 1, 2 and 6 above) follows below.
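The helper below is a minimal illustration of the math only; the function name srr_egress_bandwidth and its structure are my own assumptions, not the actual hardware scheduler (which works per-packet in the ASIC). It simply applies the rules from points 1, 2 and 6: the “limit” percentage caps the interface rate, each shaped queue gets 1/weight of the raw port speed taken off the top, and whatever is left is split among the shared queues in proportion to their weights.

# Illustrative sketch only (assumed helper, not the real 3560 algorithm).
# Returns per-queue egress bandwidth in Kbps under full congestion.
def srr_egress_bandwidth(port_kbps, limit_pct, shape, share):
    total = port_kbps * limit_pct / 100.0           # "srr-queue bandwidth limit"
    result = [0.0] * 4
    shared_queues = []
    for q in range(4):
        if shape[q] > 0:                            # shaped: 1/weight of the raw port speed
            result[q] = port_kbps / float(shape[q])
            total -= result[q]                      # shaped rate is taken off the top
        else:
            shared_queues.append(q)                 # shared: competes for what is left
    if shared_queues:
        weight_sum = float(sum(share[q] for q in shared_queues))
        for q in shared_queues:
            result[q] = total * share[q] / weight_sum
    return result

# The example above: speed 10, limit 20, shape 50 0 0 0, share 20 20 20 20
print(srr_egress_bandwidth(10000, 20, [50, 0, 0, 0], [20, 20, 20, 20]))
# -> [200.0, 600.0, 600.0, 600.0] Kbps, matching the text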

Quick Questions and Answers

Q: How do I determine which queue a packet will go to? What if my packet has both a CoS and a DSCP value set at the same time?
A: That depends on what you trust at the classification stage. If you trust the CoS value, then the CoS to Output Queue map will be used. Likewise, if you trust the DSCP value, then the DSCP to Output Queue map will determine the outgoing queue. Use the “show mls qos maps” commands to find out the current mappings.
Q: What if I’ve configured “shared” and “shaped” srr-queue settings for the same queue?
A: The shaped queue settings will override the shared weight. Effectively, the shared weight is also exempted from the SRR calculations. Note that in shaped mode a queue is still guaranteed its bandwidth, but at the same time it is not allowed to send above the limit.
Q: What if priority-queue is enabled on the interface? Can I restrict the PQ sending rate using “shaped” weight?
A: No, you can’t. The priority queue will take all the bandwidth if it needs to, so take care with traffic admission.
Q: How will a shaped queue compete with shared queues on the same interface?
A: Shared queues share the bandwidth remaining after the shaped queues take their portion. At the same time, shaped queues are guaranteed the amount of bandwidth allowed by their absolute weights.
Q: How is SRR shared mode different from WRR found in the Catalyst 3550?
A: SRR in shared mode essentially behaves similarly to WRR, but is designed to be more efficient. Where WRR would empty a queue up to its credit in a single run, SRR takes a series of quick runs across all the queues, providing a “fairer” share and smoother traffic behavior. The toy simulation below illustrates the difference.
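The following is an illustration of the idea only, not the actual 3560 hardware algorithm, and the function names are made up. The WRR-like pass empties each queue up to its full credit before moving on, while the SRR-like pass makes many quick rounds, sending at most one packet per queue per round, so the same 4:2 ratio comes out interleaved rather than in bursts.

# Toy illustration (assumed code): dequeue order for two queues with weights 4:2.
def wrr_like(weights):
    # Send each queue's full credit in one burst before moving on.
    order = []
    for q, w in enumerate(weights):
        order += [q] * w
    return order

def srr_like(weights):
    # Take many quick passes, sending at most one packet per queue per pass.
    order, credits, quantum = [], [0] * len(weights), max(weights)
    for _ in range(quantum):                 # one full scheduling cycle
        for q, w in enumerate(weights):
            credits[q] += w
            if credits[q] >= quantum:        # earned enough credit for one packet
                credits[q] -= quantum
                order.append(q)
    return order

print(wrr_like([4, 2]))   # [0, 0, 0, 0, 1, 1] - queue 0 sent as one burst
print(srr_like([4, 2]))   # [0, 0, 1, 0, 0, 1] - same ratio, smoother interleaving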

Verification scenario diagram and configs

[Diagram: 3560 Egress Queuing lab topology]
For this lab scenario, we configure R1, R3 and R5 to send traffic down to R2 across two 3560 switches, saturating the link between them. All routers share one subnet, 172.16.0.X/24, where X is the router number. SW1 assigns CoS/IP Precedence values of 1, 3 and 5 respectively to the traffic originated by R1, R3 and R5. At the same time, SW1 applies egress scheduling on its connection to SW2. R2’s function is to meter the incoming traffic by matching the IP precedence values in the packets. Note that SW2 has mls qos disabled by default.
We will use the default CoS to Output Queue mappings, with CoS 1 mapped to Queue 2, CoS 3 mapped to Queue 3 and CoS 5 mapped to Queue 1. Note that by virtue of the default mapping tables, CoS values 0-7 map to IP Precedence values 0-7 (the packets’ IP precedence is rewritten accordingly), so we can match on IP precedence at R2.
SW1#show mls qos maps cos-output-q
   Cos-outputq-threshold map:
              cos:  0   1   2   3   4   5   6   7
              ------------------------------------
  queue-threshold: 2-1 2-1 3-1 3-1 4-1 1-1 4-1 4-1
SW1’s connection to SW2 is set to a 10Mbps port rate and further limited down to 2Mbps by the use of the “srr-queue bandwidth limit” command. We will apply different scenarios and see how SRR behaves. Here are the configurations for SW1 and R2:
SW1:
interface FastEthernet0/1
 switchport mode access
 load-interval 30
 mls qos cos 1
 mls qos trust cos
 spanning-tree portfast
!
interface FastEthernet0/3
 switchport mode access
 load-interval 30
 mls qos cos 3
 mls qos trust cos
 spanning-tree portfast
!
interface FastEthernet0/5
 load-interval 30
 mls qos cos 5
 mls qos trust cos
 spanning-tree portfast

R2:
class-map match-all PREC5
 match ip precedence 5
class-map match-all PREC1
 match ip precedence 1
class-map match-all PREC3
 match ip precedence 3
!
!
policy-map TEST
 class PREC5
 class PREC3
 class PREC1
!
access-list 100 deny   icmp any any
access-list 100 permit ip any any
!
interface FastEthernet0/0
 ip address 172.16.0.2 255.255.255.0
 ip access-group 100 in
 load-interval 30
 duplex auto
 speed auto
 service-policy input TEST
To simulate traffic flow we execute the following command on R1, R3 and R5:
R1#ping 172.16.0.2 repeat 100000000 size 1500 timeout 0
In the following scenarios the port speed is locked at 10Mbps and the port is additionally limited to 20% of that bandwidth, for an effective sending rate of 2Mbps.
First scenario: Queue 1 (prec 5) is limited to 200Kbps while Queue 2 (prec 1) and Queue 3 (prec 3) share the remaining bandwidth in equal proportions:
SW1:
interface FastEthernet0/13
 switchport mode access
 load-interval 30
 speed 10
 srr-queue bandwidth share 33 33 33 1
 srr-queue bandwidth shape  50  0  0  0
 srr-queue bandwidth limit 20

R2#sh policy-map interface fastEthernet 0/0 | inc bps|Class
    Class-map: PREC5 (match-all)
      30 second offered rate 199000 bps
    Class-map: PREC3 (match-all)
      30 second offered rate 886000 bps
    Class-map: PREC1 (match-all)
      30 second offered rate 887000 bps
    Class-map: class-default (match-any)
      30 second offered rate 0 bps, drop rate 0 bps
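As a rough sanity check, feeding the first scenario’s parameters into the illustrative srr_egress_bandwidth() sketch from earlier predicts rates very close to what R2 measured; note that queue 4 keeps its share weight of 1 even though it carries no traffic in this lab.

# First scenario: speed 10, limit 20, shape 50 0 0 0, share 33 33 33 1
print([round(x) for x in srr_egress_bandwidth(10000, 20, [50, 0, 0, 0], [33, 33, 33, 1])])
# -> [200, 887, 887, 27] Kbps for queues 1-4 (approximately)
# Observed on R2: ~199Kbps for prec 5, ~886/887Kbps for prec 3 and prec 1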
Second Scenario: Queue 1 (prec 5) is configured as priority and we see it leaves other queues starving for bandwidth:
SW1:
interface FastEthernet0/13
 switchport mode access
 load-interval 30
 speed 10
 srr-queue bandwidth share 33 33 33 1
 srr-queue bandwidth shape  50  0  0  0
 srr-queue bandwidth limit 20
 priority-queue out

R2#sh policy-map interface fastEthernet 0/0 | inc bps|Class
    Class-map: PREC5 (match-all)
      30 second offered rate 1943000 bps
    Class-map: PREC3 (match-all)
      30 second offered rate 11000 bps
    Class-map: PREC1 (match-all)
      30 second offered rate 15000 bps
    Class-map: class-default (match-any)
      30 second offered rate 0 bps, drop rate 0 bps
Third Scenario: Queues 1 (prec 5) and 2 (prec 1) are shaped to 200Kbps, while Queue 3 (prec 3) takes all the remaining bandwidth:
SW1:
interface FastEthernet0/13
 switchport mode access
 load-interval 30
 speed 10
 srr-queue bandwidth share 33 33 33 1
 srr-queue bandwidth shape  50  50  0  0
 srr-queue bandwidth limit 20

R2#sh policy-map interface fastEthernet 0/0 | inc bps|Class
    Class-map: PREC5 (match-all)
      30 second offered rate 203000 bps
    Class-map: PREC3 (match-all)
      30 second offered rate 1569000 bps
    Class-map: PREC1 (match-all)
      30 second offered rate 199000 bps
    Class-map: class-default (match-any)
      30 second offered rate 0 bps, drop rate 0 bps
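The same illustrative sketch gives a reasonable prediction for the third scenario as well: queues 1 and 2 are each shaped to 200Kbps, and queue 3 takes most of the remaining 1600Kbps (queue 4 still holds its share weight of 1).

# Third scenario: speed 10, limit 20, shape 50 50 0 0, share 33 33 33 1
print([round(x) for x in srr_egress_bandwidth(10000, 20, [50, 50, 0, 0], [33, 33, 33, 1])])
# -> [200, 200, 1553, 47] Kbps for queues 1-4 (approximately)
# Observed on R2: ~203Kbps for prec 5, ~199Kbps for prec 1, ~1569Kbps for prec 3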
