Scaling MPLS Networks

In this blog post we are going to review a number of MPLS scaling techniques. Theoretically, the main factors that limit MPLS network growth are:
  1. IGP Scaling. Route summarization, the core procedure for scaling all commonly used IGPs, does not work well with MPLS LSPs. We’ll discuss the reasons for this and see what solutions are available for deploying MPLS in the presence of IGP route summarization.
  2. Forwarding State growth. Deploying MPLS TE may be challenging in large networks, as the number of tunnels grows like O(N^2), where N is the number of TE endpoints (typically the number of PE routers). While most networks are nowhere near the breaking point, we are still going to review techniques that allow MPLS-TE to scale to very large networks (tens of thousands of routers).
  3. Management Overhead. MPLS requires additional control plane components and is therefore more difficult to manage compared to classic IP networks. This becomes more pronounced as the network grows.
This blog post summarizes some recently developed approaches that address the first two of the above-mentioned issues. Before we begin, I would like to thank Daniel Ginsburg for introducing me to this topic back in 2007.

IGP Scalability and Route Summarization

IP networks were built around the concept of hierarchical addressing and routing, first introduced by Kleinrock [KLEIN]. Scaling for hierarchically addressed networks is achieved by using topologically-aware addresses that allow for network clustering and routing information hiding by summarizing contiguous address blocks. While other approaches to route scaling have been developed since, the hierarchical approach is the prevalent one in modern networks. Modern IGPs used in SP networks are link-state (IS-IS and OSPF) and hence maintain network topology information in addition to network reachability information. Route summarization creates topological domain boundaries and condenses network reachability information, thus having the following important effects on link-state routing protocols:
  1. Topological database size is decreased. This reduces the impact on router memory and CPU, as a smaller database requires less maintenance effort and consumes less memory.
  2. Convergence time within every area is improved as a result of faster SPF, smaller flooding scope and decreased FIB size.
  3. Flooding is reduced, since events in one routing domain do not propagate into another as a result of information hiding. This improves routing process stability.
The above positive effects were very important during the earlier days of the Internet, when hardware did not have enough power to easily run link-state routing protocols. Modern advances in router hardware allow link-state protocols to scale to single-area networks consisting of thousands of routers. Certain optimization procedures mentioned, for instance, in [OSPF-FAST] allow such large single-area networks to remain stable and converge on a sub-second time scale. Many ISP networks indeed use a single-area design for their IGPs, enjoying simplified operational procedures. However, scaling IGPs to tens of thousands of nodes will most likely require the use of routing areas while still allowing for end-to-end MPLS LSPs. Another trend that may call for support of route summarization in MPLS networks is the fact that many enterprises, which typically employ area-based designs, are starting their own MPLS deployments.
Before we go into the details of the effect route summarization has on MPLS LSPs, it is worth recalling what negative effects route summarization has in pure IP networks. Firstly, summarization hides network topology and detailed routing information and thus results in suboptimal routing, and even routing loops, in the presence of route redistribution between different protocols. This problem is especially visible at larger scale, e.g. for the Internet as a whole. The other serious problem is that summarization hides reachability information. For example, if a single host goes down, the summarized prefix encompassing this host’s IP address will remain unchanged and the failure will not propagate beyond the summarization domain. While this is a positive effect of summarization, it has a negative impact on inter-area network convergence. For example, it affects the BGP next-hop tracking process (see [BGP-NEXTHOP]).

MPLS LSPs and Route Summarization

An MPLS LSP is a unidirectional tunnel built by switching locally significant labels. There are multiple ways of constructing MPLS LSPs, but we are concerned with classic LDP signaling at the moment. In order for an LDP-signaled LSP to destination X (FEC, forwarding equivalence class) to be contiguous, every hop on the path to X must have forwarding state for this FEC. Essentially, the MPLS forwarding component treats X, the FEC, as an endpoint identifier. If two nodes cannot match information about X, they cannot “stitch” the LSP in a contiguous manner. In the case of MPLS deployment in IP networks, the FEC is typically the PE’s /32 IP address. An IP address has the overloaded functions of both locator and endpoint identifier. Summarizing the /32 addresses hides the endpoint identity and prevents LDP nodes from consistently mapping labels for a given FEC. Look at the diagram below: ABR2 summarizes the PE1-PE3 Loopback prefixes, and thus PE1 cannot construct an end-to-end LSP for 10.0.0.1/32, 10.0.0.2/32 or 10.0.0.3/32.
[Figure: mpls-summarization-1]
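To make the exact-match rule concrete, here is a minimal Python sketch (the summary prefix value and label numbers are hypothetical) of how a summary route in the RIB prevents a router from installing LDP label mappings for the PE /32 FECs, breaking the LSP at that hop:

```python
def rib_has_exact_match(rib, fec):
    """Classic LDP check: the FEC prefix must appear in the RIB verbatim."""
    return fec in rib

# RIB on a router in an area behind ABR2, which advertises only the summary.
rib_behind_abr2 = {"10.0.0.0/24"}          # the summary hides the PE /32s
ldp_mappings = {"10.0.0.1/32": 201,        # label bindings learned from a neighbor
                "10.0.0.2/32": 202,
                "10.0.0.3/32": 203}

for fec, label in ldp_mappings.items():
    ok = rib_has_exact_match(rib_behind_abr2, fec)
    print(f"FEC {fec}: label {label} "
          f"{'installed' if ok else 'NOT installed - LSP broken here'}")
```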
One obvious solution to this problem would be changing the underlying MPLS transport to IP-based tunnels, such as mGRE or L2TPv3. Such solutions are available for deployment with L3 and L2 VPNs (see [L3VPN-MGRE] for example) and work perfectly with route summarization. However, by relying on conventional routing, these solutions lose the powerful feature of MPLS Traffic Engineering. While some Traffic Engineering solutions are available for IP networks, they are not as flexible and feature-rich as MPLS-TE. Therefore, we are not going to look into IP-based tunneling in this publication.

Route Leaking

Route leaking is the most widely deployed technique to allow for end-to-end LSP construction in the presence of route summarization. This technique is based on the fact that LSPs typically need to be constructed only for the PE addresses. Provided that a separate subnet is selected for the PE Loopback interfaces, it is easy to apply fine-tuned control to the PE prefixes in the network. Referring to the figure below, all transit link prefixes (10.0.0.0/16) could be summarized (and even suppressed) while only the necessary PE prefixes (subnet 20.0.0.0/24 on the diagram below) are leaked. It is also possible to fine-tune LDP to propagate only the prefixes from the selected subnet, reducing LDP signaling overhead.
[Figure: mpls-summarization-2]
Leaking /32’s also allows for perfect next-hop reachability tracking, as a route for every PE router is individually present in all routing tables. Of course, such granularity significantly reduces the positive effect of multi-area design and summarization. If the number of PE routers grows to tens of thousands, network stability would be at risk. However, this approach is by far the most commonly used with multi-area designs nowadays.
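The leaking logic itself is trivial; the sketch below (prefix values taken from the diagram, everything else hypothetical) shows the selection an ABR would effectively perform, summarizing the transit block while leaking the PE loopback /32s, and the same match could drive an LDP advertisement filter:

```python
import ipaddress

PE_LOOPBACK_BLOCK = ipaddress.ip_network("20.0.0.0/24")   # leaked as /32s
TRANSIT_BLOCK     = ipaddress.ip_network("10.0.0.0/16")   # summarized away

area_routes = ["10.0.1.0/30", "10.0.2.0/30",                 # transit links
               "20.0.0.1/32", "20.0.0.2/32", "20.0.0.3/32"]  # PE loopbacks

leaked, summaries = [], set()
for r in area_routes:
    net = ipaddress.ip_network(r)
    if net.subnet_of(PE_LOOPBACK_BLOCK):
        leaked.append(r)                        # advertised individually
    elif net.subnet_of(TRANSIT_BLOCK):
        summaries.add(str(TRANSIT_BLOCK))       # collapsed into the summary

print("Leaked PE /32s     :", leaked)
print("Advertised summary :", sorted(summaries))
# An LDP filter would similarly allow label bindings only for 20.0.0.0/24.
```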

Inter-Area RSVP-TE LSPs

MPLS transport is often deployed for the purpose of traffic engineering. Typically, a full mesh of MPLS TE LSPs is provisioned between the PE routers in the network to accomplish this goal (strategic Traffic Engineering). The MPLS TE LSPs are signaled by means of TE extensions to the RSVP protocol (RSVP-TE), which allows for source-routed LSP construction. A typical RSVP-TE LSP is explicitly routed by the source PE, where the explicit route is either manually specified or computed dynamically by the constrained SPF (cSPF) algorithm. Notice that it is impossible for a router within one area to run cSPF for another area, as the topology information is hidden (there are certain solutions to overcome this limitation, e.g. [PCE-ARCH], but they are out of scope for this publication).
Based on the single-area limitation, it may seem that MPLS-TE is not applicable to multi-area designs. However, a number of extensions to RSVP signaling have been proposed to allow for inter-area LSP construction (see [RSVP-TE-INTERAREA]). In short, such extensions allow for explicitly programming loose inter-area hops (ABRs or ASBRs) and let every ABR expand the loose next-hop (ERO, explicit route object, expansion). The headend router and every ABR run the cSPF algorithm for their own areas, such that every segment of the inter-area LSP is locally optimal.
[Figure: mpls-summarization-3]
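Here is a simplified model of the loose-hop expansion described above; the topology, router names and per-area path results stand in for a real cSPF computation and are purely hypothetical:

```python
# Each border router expands the next loose hop using only the topology it
# can see; the head-end PE only knows its own area plus the loose ABR hops.

# Hypothetical per-area "cSPF" results: best intra-area path between two nodes.
INTRA_AREA_PATHS = {
    ("PE1",  "ABR1"): ["PE1", "P1", "ABR1"],      # area 1
    ("ABR1", "ABR2"): ["ABR1", "P2", "ABR2"],     # backbone area
    ("ABR2", "PE2"):  ["ABR2", "P3", "PE2"],      # area 2
}

def expand_ero(head, loose_hops):
    """Expand a loose ERO hop by hop, the way the head-end and ABRs would."""
    path, current = [head], head
    for hop in loose_hops:
        segment = INTRA_AREA_PATHS[(current, hop)]
        path += segment[1:]           # each node appends its local expansion
        current = hop
    return path

# Head-end PE1 programs the loose hops ABR1 -> ABR2 -> PE2.
print(expand_ero("PE1", ["ABR1", "ABR2", "PE2"]))
# ['PE1', 'P1', 'ABR1', 'P2', 'ABR2', 'P3', 'PE2'] - locally optimal segments
```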
This approach does not necessarily result in a globally optimal path, i.e. the resulting LSP may differ from the shortest IGP path between the two endpoints. This is a serious problem, and it requires considerable modification to RSVP-TE signaling to be resolved. See [PCE-ARCH] for an approach to making MPLS LSPs globally optimal. Next, inter-area TE involves additional management overhead, as the ABR loose next-hops need to be programmed manually, possibly covering backup paths via alternate ABRs. The final issue is less obvious, but may have a significant impact in large networks.
Every MPLS-TE constructed LSP is point-to-point (P2P) by its nature. This means that for N endpoints there are on the order of N^2 LSPs connecting them, which results in a rapidly growing number of forwarding states (e.g. CEF table entries) in the core routers, not to mention the control-plane overhead. This issue becomes serious in very large networks, as thoroughly analyzed in [FARREL-MPLS-TE-SCALE-1] and [FARREL-MPLS-TE-SCALE-2]. It is interesting to compare the MPLS-TE scaling properties to those of MPLS LDP. The former constructs P2P LSPs, while the latter builds Multipoint-to-Point (MP2P) LSPs that merge toward the tail-end router (see [MINEI-MPLS-SCALING]). This is a direct result of the signaling behavior: RSVP is end-to-end, while LDP signaling is local between every pair of routers. As a result, the number of forwarding states with LDP grows as O(N), compared to O(N^2) with MPLS-TE. Look at the diagram below, which illustrates LSPs constructed using RSVP-TE and LDP.
[Figure: mpls-summarization-4]
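The difference is easy to see with simple counting; the sketch below compares the number of LSPs (and hence roughly the forwarding state) for a full mesh of P2P TE tunnels against MP2P/LDP-style LSPs that merge toward each tail-end (the PE counts are arbitrary):

```python
def p2p_lsp_count(n_pe):
    """Full mesh of unidirectional P2P TE LSPs between N PEs: N*(N-1), i.e. O(N^2)."""
    return n_pe * (n_pe - 1)

def mp2p_lsp_count(n_pe):
    """MP2P/LDP-style LSPs merge toward each tail-end: one per destination, i.e. O(N)."""
    return n_pe

for n in (10, 100, 1000):
    print(f"{n:>5} PEs: P2P LSPs = {p2p_lsp_count(n):>7}, "
          f"MP2P LSPs = {mp2p_lsp_count(n):>5}")
```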
One solution to the state growth problem would be shrinking the full mesh of PE-to-PE LSPs, e.g. pushing the MPLS-TE endpoints from the PEs deeper into the network core, say to the aggregation layer, and then deploying LDP on the PEs and over the MPLS-TE tunnels. This would reduce the size of the tunnel full mesh significantly, but prevent the use of peripheral connections for MPLS-TE. In other words, this reduces the effectiveness of MPLS-TE for network resource optimization.
[Figure: mpls-summarization-5]
Besides losing full MPLS-TE functionality, the use of unmodified LDP means that route summarization could not be deployed in such a scenario. Based on these limitations, we are not going to discuss this approach to MPLS-TE scaling in this publication. Notice that replacing LDP with MPLS-TE in the non-core sectors of the network does not solve the global traffic engineering problem, though it allows for the construction of locally optimal LSPs at every level. However, similar to the “LDP at the edge” approach, this solution reduces the overall effectiveness of MPLS-TE in the network. There are other ways to improve MPLS-TE scaling properties, which we are going to discuss further.

Hierarchical RSVP-TE LSPs

As mentioned previously, hierarchies are the key to IP routing scalability. Based on this, it is reasonable to assume that hierarchical LSPs could be used to solve MPLS-TE scaling problems. Hierarchical LSPs have been in the MPLS standards almost since the inception, and their definition was finalized in RFC 4206. The idea behind hierarchical LSPs is the use of layered MPLS-TE tunnels. The first layer of MPLS-TE tunnels is used to establish forwarding adjacencies for the link-state IGP. These forwarding adjacencies are then advertised in link-state advertisements and added to the Traffic Engineering Database (TED), though not the LSDB used for SPF. In fact, the first mesh looks exactly like a set of IGP logical connections, but gets advertised only into the TED. It is important that link-state information is not flooded across this mesh, hence the name “forwarding adjacency”. This first layer of MPLS LSPs typically overlays the network core.
[Figure: mpls-summarization-6]
The second level of MPLS-TE tunnels is then constructed over the first-level mesh added to the TED, using cSPF or manual programming. From the standpoint of the second-level mesh, the physical core topology is hidden and replaced with a full mesh of RSVP-TE signaled “links”. Effectively, the second-level tunnels are nested within the first-level mesh and are never visible to the core network. The second-level mesh normally spans edge-to-edge and connects the PEs. At the data plane, hierarchical LSPs are realized by means of label stacking.
[Figure: mpls-summarization-7]
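At the data plane the hierarchy is nothing more than label stacking. The toy sketch below (all label values are made up) shows a PE-to-PE packet carried inside a first-level core tunnel, with the core swapping only the outer label:

```python
# Two-level hierarchy: the inner label belongs to the PE-to-PE (second-level)
# TE LSP, the outer label belongs to the core (first-level) FA tunnel it is
# nested in. All values are hypothetical.

pe_to_pe_label  = 3001        # second-level LSP, signaled edge to edge
fa_tunnel_label = 16001       # first-level LSP across the core

packet = {"payload": "IP/VPN packet", "label_stack": []}

# The ingress PE pushes the second-level label; the FA head-end pushes the outer one.
packet["label_stack"].insert(0, pe_to_pe_label)
packet["label_stack"].insert(0, fa_tunnel_label)
print("Inside the core:", packet["label_stack"])    # [16001, 3001]

# Core P routers swap only the top (FA) label and never see the inner LSP,
# which is why the second-level full mesh adds no state to the core.
packet["label_stack"][0] = 16077                    # swap at a P router
print("After a core hop:", packet["label_stack"])   # [16077, 3001]
```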
Not every version of IOS code supports the classic RFC 4206 FAs. However, it is possible to add an mGRE tunnel spanning the P-routers that form the first-level mesh and use it to route the PE-PE MPLS TE LSPs (second level). This solution does not require running an additional set of IGP adjacencies over the mGRE tunnel and therefore has acceptable scalability properties. The mGRE tunnel could be overlaid over a classic, non-FA RSVP-TE tunnel mesh used for traffic engineering between the P-routers. This solution creates additional overhead for the hierarchical tunnel mesh, but allows for implementing hierarchical LSPs in IOS code that does not support the RFC 4206 FAs.
Finally, it is worth pointing out that Cisco puts a different meaning into the term Forwarding Adjacency as implemented in IOS software. Instead of advertising the MPLS-TE tunnels into the TED, Cisco advertises them into the LSDB and uses them for shortest-path construction. These virtual connections are not used for LSA/LSP flooding though, just like the “classic” FAs. Such a technique does not allow for hierarchical LSPs, as a second-level mesh cannot be signaled over an FA defined in this manner, but it allows for global visibility of MPLS TE tunnels within a single IGP area, compared to auto-route, which only provides local visibility.

Do Hierarchical RSVP-TE LSPs Solve the Scaling Problem?

It may look like using hierarchical LSPs along with route summarization solves the MPLS scaling problems at once. The network core has to deal with the first-level MPLS-TE LSPs only, and it perfectly allows for route summarization, as illustrated in the diagram below:
[Figure: mpls-summarization-8]
However, a more detailed analysis performed in [FARREL-MPLS-TE-SCALE-2] shows that in popular ISP network topologies, deploying multi-level LSP hierarchies does not yield significant benefits. Firstly, there are still “congestion points” remaining at the level where the first- and second-level meshes are joined. This layer has to struggle with a fast-growing number of LSP states, as it has to support both levels of MPLS TE meshes. Furthermore, the management burden associated with deploying multiple MPLS-TE meshes along with inter-area MPLS TE tunnels is significant, and seriously impacts network growth. The same arguments apply to the “hybrid” solution that uses an mGRE tunnel in the core, as every P router needs to have a CEF entry for every other P router connected to the mGRE tunnel, not to mention that the underlying P-to-P RSVP-TE mesh adds even more forwarding state in the P-routers.

Multipoint-to-Point RSVP-TE LSPs

What is the key issue that makes MPLS-TE hard to scale? The root cause is the point-to-point nature of MPLS-TE LSPs, which results in rapid forwarding state proliferation in transit nodes. An alternative to using LSP hierarchies is the use of Multipoint-to-Point RSVP-TE LSPs. Similar to LDP, it is possible to merge the LSPs that traverse the same egress link and terminate on the same endpoint. With the existing protocol design, RSVP-TE cannot do that, but certain protocol extensions proposed in [YASUKAWA-MP2P-LSP] allow for automatic RSVP-TE merging. Intuitively, it is clear that the number of forwarding states with MP2P LSPs grows like O(N), where N is the number of PE routers, compared to O(N^2) in classic MPLS-TE. Effectively, MP2P RSVP-TE signaled LSPs have the same scaling properties as LDP-signaled LSPs, with the added benefits of MPLS-TE functionality and inter-area signaling. Based on these scaling properties, the use of MP2P inter-area LSPs seems to be a promising direction toward scaling MPLS networks.
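A rough model of the merge behavior (labels and topology are hypothetical): at a transit node, all LSPs heading to the same tail-end collapse onto a single outgoing label and forwarding entry, which is what keeps per-node state proportional to the number of destinations rather than to the number of ingress/egress pairs:

```python
# Incoming LSP setups seen by one transit P router: (ingress PE, tail-end PE).
incoming_lsps = [("PE1", "PE9"), ("PE2", "PE9"), ("PE3", "PE9"),
                 ("PE1", "PE8"), ("PE4", "PE8")]

# P2P RSVP-TE keeps one forwarding entry per LSP (per ingress/egress pair).
p2p_state = {lsp: 1000 + i for i, lsp in enumerate(incoming_lsps)}

# MP2P (or LDP) merges everything heading to the same tail-end onto one
# outgoing label, so state is keyed by the destination FEC only.
mp2p_state, next_label = {}, 2000
for _, tail_end in incoming_lsps:
    if tail_end not in mp2p_state:
        mp2p_state[tail_end] = next_label
        next_label += 1

print("P2P entries on this node :", len(p2p_state))    # 5 (one per LSP)
print("MP2P entries on this node:", len(mp2p_state))   # 2 (one per tail-end)
```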

BGP-Based Hierarchical LSPs

As we have seen, the default operational mode for MPLS-TE does not offer enough scaling even with LSP hierarchies. It is worth asking whether it is possible to create hierarchical LSPs using signaling other than RSVP-TE. As you may remember, BGP extensions could be used to transport MPLS labels. This suggests the idea of creating nested LSPs by overlaying a BGP mesh over the IGP areas. Here is an illustration of this concept:
[Figure: mpls-summarization-9]
In this sample scenario, there are three IGP areas, with the ABRs summarizing their area address ranges and therefore hiding the PE /32 prefixes. Inside every area, LDP or RSVP-TE could be used for constructing intra-area LSPs, for example LSPs from the PEs to the ABRs. At the same time, all PEs establish BGP sessions with their nearest ABRs, and the ABRs connect in a full iBGP mesh, treating the PEs as route-reflector clients. This allows the PEs to propagate their Loopback /32 prefixes via BGP. The iBGP peering should be done using another set of Loopback interfaces (call them IGP-routed) that are used to build the transport LSPs inside every area.
[Figure: mpls-summarization-10]
The only routers that will see the PE loopback prefixes (BGP-routed) are the other PEs and the ABRs. The next step is configuring the ABRs that act as route-reflectors for the PE routers to change the BGP next-hop to self and to activate MPLS label propagation over all iBGP peering sessions. The net result is an overlay label distribution process. Every PE would use two labels in the stack to get to another PE’s Loopback (BGP-propagated) by means of recursive BGP next-hop resolution. The topmost label (LDP or RSVP-TE) is used to steer the packet to the nearest ABR, using the transport LSP built toward the IGP-routed Loopback interface. The bottom label (BGP-propagated) identifies the PE prefix within the context of a given ABR. Every ABR will pop the incoming LDP/RSVP-TE label, swap the PE label to the label understood by the next ABR/PE (as signaled via BGP) and then push a new LDP label that starts a new LSP to reach that ABR/PE. Effectively, this implements a two-level hierarchical LSP end-to-end between any pair of PEs. This behavior is a result of BGP’s ability to propagate label information and the recursive next-hop resolution process.
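The following minimal sketch (all labels, addresses and table contents are invented for illustration) shows the recursive resolution at an ingress PE and the label operations at an ABR as described above:

```python
# Ingress PE tables (hypothetical): BGP carries the remote PE /32 plus a label,
# with the ABR as next-hop; LDP/RSVP-TE provides the transport label to the ABR.
bgp_table = {"20.0.0.9/32": {"next_hop": "1.1.1.1", "bgp_label": 300}}  # remote PE
ldp_table = {"1.1.1.1/32": 17}                  # transport label toward ABR1

def ingress_label_stack(dest_prefix):
    """Recursive resolution: BGP label for the PE, LDP label for the BGP next-hop."""
    bgp = bgp_table[dest_prefix]
    transport = ldp_table[bgp["next_hop"] + "/32"]
    return [transport, bgp["bgp_label"]]        # top: to ABR, bottom: PE context

print("PE imposes :", ingress_label_stack("20.0.0.9/32"))   # [17, 300]

# ABR1 behavior: pop the incoming transport label, swap the BGP label to the
# one signaled by the next ABR/PE, and push a new transport label.
abr1_bgp_swap  = {300: 410}                     # label ABR2 signaled via BGP
abr1_transport = {"ABR2": 23}                   # LDP label toward ABR2

stack = ingress_label_stack("20.0.0.9/32")
stack.pop(0)                                    # pop incoming LDP/RSVP-TE label
stack[0] = abr1_bgp_swap[stack[0]]              # swap the BGP label
stack.insert(0, abr1_transport["ABR2"])         # push a new transport label
print("Leaving ABR1:", stack)                   # [23, 410]
```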
How well does this approach scale? Firstly, using BGP for prefix distribution ensures that we may advertise a truly large number of PE prefixes without any serious problems (though DFZ operators may disagree). At first sight, routing convergence may seem to be a problem, as the loss of any PE router would normally be detected by the iBGP session timing out based on BGP keepalives. However, if BGP next-hop tracking (see [BGP-NEXTHOP]) is used within every area, then the ABRs will be able to detect the loss of a PE at the pace of IGP convergence. Link failures within an area will also be handled by the IGP or, possibly, by intra-area protection mechanisms, such as MPLS/IP Fast Re-Route. Now for the drawbacks of BGP-signaled hierarchical LSPs:
  1. Configuration and Management overhead. In the popular BGP-free core design, P routers (which typically include the ABRs) do not run BGP. Adding an extra mesh of BGP peering sessions requires configuring all PEs and, specifically, the ABRs, which involves non-trivial effort. This slows the initial deployment process and complicates further operations.
  2. ABR failure protection. Hierarchical LSPs constructed using BGP consist of multiple disconnected LDP/RSVP-TE signaled segments, e.g. PE-ABR1, ABR1-ABR2. Within the current set of RSVP-TE FRR features, it is not possible to protect the LSP endpoint nodes, due to the local significance of the exposed label. There is work in progress to implement endpoint node FRR protection, but this feature is not yet available. This might be a problem, as it makes the network core vulnerable to ABR failures.
  3. The amount of forwarding state increases in the PE and (additionally) ABR nodes. However, unlike with MPLS TE LSPs, this growth is proportional to the number of PE routers, which is approximately the same scaling behavior we would have with MP2P MPLS-TE LSPs. Therefore, the growth of forwarding state in the ABRs should not be a big problem, especially since no nodes other than the PEs/ABRs are affected.
To summarize, it is possible to deploy hierarchical LSPs and get LDP/single-area TE working with multi-area IGPs without any updates to existing protocols (LDP, BGP, RSVP-TE). The main drawback is excessive management overhead and lack of MPLS FRR protection features for the ABRs. However, this is the only approach that does not require any changes to the running software, as all functionality is implemented using existing protocol features.

LDP Extensions to work with Route Summarization

In order to make native LDP (RFC 3036) work with IGP route summarization, it is possible to extend the protocol in some way. To date, there are two main approaches: one of them utilizes prefix leaking via LDP and the other implements hierarchical LSPs using LDP.

LDP Prefix Leaking (Interarea LDP)

A very simple extension to LDP allows route summarization to be used. Per the LDP RFC, prior to installing a label mapping for prefix X, the local router needs to ensure there is an exact match for X in the RIB. RFC 5283 suggests changing this verification procedure to a longest match: if there is a prefix Z in the RIB such that X is a subnet of Z, then the label mapping is kept. It is important to notice that the LFIB is not aggregated in any manner – all label mappings received via LDP are maintained and propagated further. Things are different at the RIB level, however. Prefixes are allowed to be summarized and routing protocol operations are simplified. No end-to-end LSPs are broken, because label mappings for the specific prefixes are maintained along the path.
[Figure: mpls-summarization-11]
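A short sketch contrasting the classic exact-match check with the RFC 5283 longest-match relaxation (the prefixes are hypothetical):

```python
import ipaddress

rib = {"10.0.0.0/24"}                       # only the summary is present
ldp_fec = "10.0.0.1/32"                     # specific PE loopback FEC

def exact_match_ok(rib, fec):
    """RFC 3036/5036 behavior: the FEC must match a RIB entry exactly."""
    return fec in rib

def longest_match_ok(rib, fec):
    """RFC 5283 behavior: a covering (less specific) RIB entry is enough."""
    fec_net = ipaddress.ip_network(fec)
    return any(fec_net.subnet_of(ipaddress.ip_network(r)) for r in rib)

print("Classic LDP accepts mapping :", exact_match_ok(rib, ldp_fec))    # False
print("RFC 5283 LDP accepts mapping:", longest_match_ok(rib, ldp_fec))  # True
```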
What are the drawbacks of this approach? Obviously, LFIB size growth (forwarding state) is the first one. It is possible to argue that maintaining the LFIB is less burdensome than maintaining the IGP databases, so this could be acceptable. However, it is well known that IGP convergence is seriously affected by FIB size, not just by the IGP protocol data structures, as updating a large FIB takes considerable time. Based on this, the LDP “leaking” approach does not solve all scalability issues. On the other hand, keeping detailed information in the LFIB allows for end-to-end connectivity tracking, thanks to LDP ordered label propagation. If one area loses a prefix, LDP will signal the loss of the label mapping, even though no specific information ever leaks into the IGP. This is the flip side of having detailed information at the forwarding-plane level. The other problem that could be pointed out is LDP signaling overhead. However, since LDP behaves as an “on-demand”, “distance-vector”-style protocol, it does not pose as many problems as, say, link-state IGP flooding.

Aggregated FEC Approach

This approach has been suggested by G. Swallow of Cisco Systems – see [SWALLOW-AGG-FEC]. It requires modifications to LDP and slight changes to forwarding-plane behavior. Here is how it works. When an ABR aggregates prefixes {X1…Xn} into a new summary prefix X, it generates a label for X (the aggregate FEC) and propagates it to other areas, creating an LSP for the aggregate FEC that terminates at the ABR. The LDP mappings are propagated using an “aggregate FEC” type to signal special processing for packets matching this prefix. The LSP constructed for such a FEC has PHP (penultimate hop popping) disabled, for a reason we’ll explain shortly. All that other routers see in their RIB/FIB is the summary prefix X and the corresponding LFIB element (aggregate FEC/label). In addition to propagating the route/label for X, the same ABR also applies a special hash function to the IP addresses {X1…Xn} (the specific prefixes) and generates local labels based on the result of this function. These new algorithmic labels are stored under the context of the “aggregate” label generated for prefix X. That is, these labels should only be interpreted in association with the “aggregate” label. The algorithmic labels are further stitched with the labels the ABR learns via LDP from the source area for the prefixes {X1…Xn}.
The last piece of the puzzle is how a PE creates the label stack for a specific prefix Xi. When a PE attempts to encapsulate a packet destined to Xi at the IP-to-MPLS edge, it looks up the LFIB for an exact match. If no exact match is found, but there is a matching aggregate FEC X, the PE will apply the same hash function the ABR used on Xi to create the algorithmic label for Xi. The PE then stacks this “algorithmic” label under the label for the aggregate FEC X and sends the packet with two labels – the topmost for the aggregate X and the bottom for Xi. The packet will arrive at the ABR that originated the summary prefix X with the topmost label NOT removed by the PHP mechanism (as mentioned previously). This allows the ABR to correctly determine the context for the bottom label. The topmost label is removed, and the de-aggregation label for Xi is used to look up the real label (stitched with the de-aggregation label for Xi) to be used for further switching.
[Figure: mpls-summarization-12]
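Below is a toy model of the scheme (the hash function, label values and prefixes are hypothetical stand-ins for the mechanism described in [SWALLOW-AGG-FEC]): because the PE and the ABR apply the same globally known hash to the specific prefix, they derive the same de-aggregation label without exchanging any per-prefix state:

```python
import hashlib

LABEL_SPACE = 2**19

def algorithmic_label(prefix):
    """Deterministic label derived from the specific prefix - the globally
    known hash both PE and ABR apply (hypothetical function)."""
    digest = hashlib.sha256(prefix.encode()).digest()
    return 100000 + int.from_bytes(digest[:3], "big") % LABEL_SPACE

# ABR side: it advertises one aggregate FEC/label and keeps a context table
# mapping algorithmic labels to the real labels learned from the source area.
AGGREGATE_LABEL = 500                    # label for the summary FEC 20.0.0.0/24
source_area_labels = {"20.0.0.1/32": 31, "20.0.0.2/32": 32}
abr_context_table = {algorithmic_label(p): lbl
                     for p, lbl in source_area_labels.items()}

# PE side: no exact LFIB match for 20.0.0.1/32, so it stacks the aggregate
# label on top of the algorithmic label it computes itself.
dest = "20.0.0.1/32"
stack = [AGGREGATE_LABEL, algorithmic_label(dest)]
print("PE imposes:", stack)

# ABR side: PHP is disabled, so the aggregate label arrives intact, providing
# the context in which the bottom label is looked up and stitched onward.
top, bottom = stack
assert top == AGGREGATE_LABEL
print("ABR forwards with label:", abr_context_table[bottom])   # 31
```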
This method is backward compatible with classic LDP implementations and will interoperate with other LDP deployments. Notice that there is no control-plane correlation between the ABRs and the PEs, as there is in the case of BGP-signaled hierarchical LSPs. Instead, synchronization is achieved by using the same globally known hash function that produces the de-aggregation labels. This method reduces the control-plane overhead associated with hierarchical LSP construction, but has one drawback – there is no end-to-end reachability signaling, as there was in the RFC 5283 approach. That is, if an area loses the prefix for a PE, there is no way to signal this via LDP, as only the aggregate FEC is propagated. The presentation [SWALLOW-SCALING-MPLS] suggests a generic solution to this problem, by means of an IGP protocol extension. In addition to flooding a summary prefix, the ABR is responsible for flooding a bit vector that corresponds to every possible /32 under the summary. For example, for a /16 prefix there would be a 2^16-bit vector, where a bit set to one means the corresponding /32 prefix is reachable and zero means it is unreachable. This scheme allows for certain optimizations, such as using Bloom filters (see [BLOOM-FILTER]) for information compression. This approach is known as Summarized Route Detailed Reachability (SRDR). The SRDR approach solves the problem of hidden reachability information at the cost of modifications to IGP signaling. An alternative is using aggressively tuned BGP keepalives; this, however, puts high stress on the routers’ control planes. A better alternative is data-plane reachability discovery, such as multi-hop BFD ([BFD-MULTIHOP]). The last two approaches do not require any modifications to the IGPs and therefore interoperate better with existing networks.
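As a rough illustration of the SRDR idea (sizes and encoding are hypothetical), an ABR could flood, along with a /24 summary, a bit vector with one bit per covered host address; a small Bloom filter shows how the same reachability set could be compressed at the cost of false positives:

```python
import hashlib
import ipaddress

SUMMARY = ipaddress.ip_network("20.0.0.0/24")
reachable = {"20.0.0.1", "20.0.0.2", "20.0.0.3"}          # live PE loopbacks

# Plain SRDR-style vector: one bit per possible host under the summary.
bit_vector = [1 if str(h) in reachable else 0 for h in SUMMARY.hosts()]
print("Vector bits set:", sum(bit_vector), "of", len(bit_vector))

# Compressed alternative: a tiny Bloom filter (k hash functions over m bits).
M_BITS, K_HASHES = 64, 3
bloom = 0

def _positions(addr):
    return [int.from_bytes(hashlib.sha256(f"{i}{addr}".encode()).digest()[:4],
                           "big") % M_BITS for i in range(K_HASHES)]

for addr in reachable:
    for pos in _positions(addr):
        bloom |= 1 << pos

def maybe_reachable(addr):
    """Bloom test: 'no' is definite, 'yes' may be a false positive."""
    return all(bloom >> pos & 1 for pos in _positions(addr))

print("20.0.0.1 maybe reachable :", maybe_reachable("20.0.0.1"))   # True
print("20.0.0.77 maybe reachable:", maybe_reachable("20.0.0.77"))  # likely False
```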

Hierarchical LDP

This approach, an extension to RFC 5283, has been proposed by Kireeti Kompella of Juniper, but has never been officially documented and presented for peer review. There were only a few presentations made at various conferences, such as [KOMPELLA-HLDP]. No IETF draft is available, so we can only guess about the protocol details. In a nutshell, it seems that the idea is running an overlay mesh of LDP sessions between the PEs/ABRs, similar to the BGP approach, and using stacked FEC advertisements. The topmost FEC in such an advertisement corresponds to the summarized prefix advertised by the ABR. This FEC is flooded across all areas, and local mappings are used to construct an LSP terminating at the ABR. So far, this looks similar to the aggregated FEC approach. However, instead of using algorithmic label generation, the PE and ABR directly exchange their bindings for the specific prefixes, using a new form of FEC announcement – hierarchical FEC stacking. The ABR advertises the aggregate FEC along with the aggregate label and the nested specific labels. The PE thus knows what labels the ABR is expecting for the specific prefixes, and may construct a two-label stack consisting of the “aggregate” label and the “specific” label learned via the directed LDP session. The specific prefixes are accepted by virtue of the RFC 5283 extension, which allows accepting detailed FEC information if there is a summary prefix in the RIB covering the specific prefix.
[Figure: mpls-summarization-13]
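Since no draft exists, the sketch below is only a guess at the mechanics: the PE receives, over a directed LDP session, the aggregate FEC/label together with the specific labels nested under it, and builds the same kind of two-label stack as in the BGP approach (all values hypothetical):

```python
# Hypothetical stacked-FEC advertisement received by a PE over a targeted LDP
# session with the ABR: one aggregate FEC/label plus the specific labels the
# ABR expects for each PE loopback nested under that aggregate.
stacked_fec_advert = {
    "aggregate_fec": "20.0.0.0/24",
    "aggregate_label": 600,
    "specific_labels": {"20.0.0.1/32": 41, "20.0.0.2/32": 42},
}

def build_stack(advert, dest_prefix):
    """Aggregate label on top (LSP to the ABR), the ABR's specific label below."""
    return [advert["aggregate_label"], advert["specific_labels"][dest_prefix]]

print(build_stack(stacked_fec_advert, "20.0.0.1/32"))   # [600, 41]
```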
The hierarchical LDP approach maintains a control-plane connection between the PEs and the ABRs. Most likely, this means manual configuration of directed LDP sessions, very much like the BGP approach. The benefit is control-plane reachability signaling and better extensibility compared to the Aggregated FEC approach. Another benefit is that the BGP mesh is left intact and only the LDP configuration has to be modified. However, it seems that further work on Hierarchical LDP extensions has been abandoned, as there are no recent publications or discussions on this subject.

Hierarchical LSP Commonalities

So far we have reviewed four different approaches to constructing hierarchical LSPs: the first uses RSVP-TE forwarding adjacencies, the second uses BGP label propagation, and the last two use LDP extensions. All of the approaches result in constructing transport LSP segments terminating at the ABRs. For example, in the RSVP-TE approach there are LSPs connecting the ABRs; in the BGP approach there are LSP segments connecting the PEs to the ABRs. As we mentioned previously, the current set of MPLS FRR features does not protect LSP endpoints. As a direct result, using hierarchical LSPs decreases the effectiveness of MPLS FRR protection. There is work in progress on extending FRR protection to LSP endpoints, but there are no complete working solutions at the moment.

Summary

We have reviewed various aspects of scaling MPLS technology. The two main ones are scaling the IGP by using route summarization/areas and getting MPLS to work with summarization. A number of approaches are available to solve the latter problem, and practically all of them (with the exception of inter-area MPLS TE) are based on hierarchical LSP construction. Some approaches, such as BGP-signaled hierarchical LSPs, are ready to be deployed using existing protocol functionality, at the expense of added management overhead. Others require modifications to control-plane/forwarding-plane behavior.
It looked like there was high interest in MPLS scaling problems about 3-4 years ago (2006-2007), but the topic seems to have been largely abandoned nowadays. There is no active work in progress on the LDP extensions mentioned above; however, the Multipoint-to-Point RSVP-TE LSP draft [YASUKAWA-MP2P-LSP] seems to be making progress through the IETF. Based on this, it looks like using inter-area RSVP-TE with MP2P extensions is going to be the main solution for scaling the MPLS networks of the future.
