Using MPLS and M-LDP Signaling for Multicast VPNs

Introduction

This blog post provides an example M-VPN configuration using out-of-band mapping of C-multicast groups to core M-LSP tunnels signaled via M-LDP. The reader is assumed to have solid knowledge of M-VPNs and an understanding of M-LDP. For people unfamiliar with M-LDP, the following blog post provides some introductory reading: The long road to M-LSPs. The terminology used in this article follows the common convention of using the “P-” prefix for provider-specific objects (e.g. multicast routes or tunnels) and the “C-” prefix for customer objects (e.g. private IP addresses). Before we begin, I would like to thank our reader Hans Verkerk, who pointed out to me that Cisco IOS does support M-LDP in the latest 12.2SR images.

Interworking M-LDP with PIM: In-Band and Out-of-Band

To recap, M-LDP allows for the construction of P2MP (point-to-multipoint) and MP2MP (multipoint-to-multipoint) LSPs. Every LSP is identified by a tuple made of the root node IP address X and an opaque value Y, shared by all leaves. An MP2MP LSP is identified by a shared root IP address and an opaque value and consists of downstream and upstream sections. The downstream part is a P2MP LSP rooted at the shared node, and the upstream part is an MP2P (multipoint-to-point) LSP that allows the leaves to send traffic upstream toward the root. It is important to note that the components of the MP2P LSP are classic unicast LSPs connecting each and every leaf to the root of the MP2MP LSP. Look at the diagram below for an illustration of MP2MP LSP signaling.
[Figure: mld-signaled-mvpns-mp2mp-lsp – MP2MP LSP signaling]
One of the most prominent applications of M-LSPs is efficient multicast traffic encapsulation in MPLS networks. Since M-LSPs are signaled using M-LDP and multicast trees are built using PIM, the problem is mapping multicast trees to M-LSPs. It is intuitively obvious that shortest-path trees map to P2MP LSPs and shared trees correspond to MP2MP LSPs. There are two approaches to implementing such mapping: the first uses in-band signaling, where PIM messages are directly translated into M-LDP FEC bindings, and the second uses out-of-band mapping, e.g. based on manual configuration.

PIM/M-LDP in-band signaling could be useful to transit multicast traffic across MPLS-enabled cores in an optimal manner. There is no need to enable PIM and multicast routing in the MPLS core – the IP2MPLS edge devices translate PIM Join messages into M-LDP FEC bindings and encode the group/source information in opaque values to let the other edge device convert the M-LDP FEC bindings back into PIM Joins. Work is currently in progress to standardize in-band signaling in the following IETF drafts: “draft-wijnands-mpls-mldp-in-band-signaling” and “draft-rekhter-pim-sm-over-mldp”. In-band signaling could also be used for M-VPN implementation by directly translating all customer PIM Joins into core P2MP groups. This approach is also known as “Direct MDT”, and its benefit is that no customer PIM adjacencies need to run across the SP tunnel interface. However, the massive drawback is uncontrolled forwarding state growth in the SP core, which seriously limits the scalability of this approach. The diagram below illustrates the direct MDT model signaling flow: customer PIM messages are translated into core M-LDP bindings.
[Figure: mld-signaled-mvpns-inband-signaling – Direct MDT (in-band) signaling flow]
The out-of-band approach is well known from Rosen’s M-VPN implementation, where M-VPN endpoints are discovered by means of MP-BGP extensions and MDTs are mapped to SP core multicast groups (P-tunnels) by means of manual configuration. With some modifications, this approach could be adapted to use M-LSPs as transport (Provider or P-tunnels) instead of mGRE. The default MDT could be instantiated as an MP2MP LSP connecting all PEs participating in a particular M-VPN. The data MDTs could be dynamically created on demand by mapping selected multicast flows to more optimal P-tunnels instantiated as P2MP LSPs. You may find the detailed description of the updated Rosen M-VPNs in “draft-ietf-l3vpn-2547bis-mcast”. This document outlines the use of different P-tunnel transport and signaling techniques, including native multicast mGRE, RSVP-TE signaled M-LSPs, M-LDP signaled M-LSPs and, finally, a full mesh of unicast tunnels connecting the PE routers. Notice that the newer draft refers to the “default MDT” as an “Inclusive tunnel” and to the “data MDT” as a “Selective tunnel” – these are more generic terms than “MDTs”.
Even though, with respect to M-VPNs, the out-of-band approach is much more scalable than in-band, it still has a limitation rooted in the fact that PIM is required to signal customer multicast groups. This results in a full mesh of overlay C-PIM adjacencies between the PE routers, one mesh for every M-VPN implemented. In addition, PIM, being a soft-state protocol, periodically re-signals all trees, which puts an additional burden on the PE routers. Work is in progress to reduce the amount of PIM refresh messages, known as “refresh reduction”. Meanwhile, M-LDP could be used in a Carrier-Supporting-Carrier (CsC) manner, where customer trees are instantiated as nested MPLS LSPs, which allows the SP to push the C-PIM signaling off the PE routers and down to the customer equipment. A special extension to M-LDP is being standardized for this purpose in the draft “draft-wijnands-mpls-mldp-csc”.

RSVP-TE and M-LDP Signaled M-LSPs

Traffic engineering is the biggest advantage of using MPLS LSPs in place of any other transport. The static, source-signaled RSVP-TE based P2MP LSPs discussed in the previous blog post can use the traffic-engineering database for optimal resource utilization. M-LDP based M-LSPs, in contrast, are dynamically signaled in ordered downstream manner, which rules out resource pre-calculation and reservation. However, the use of M-LSPs instead of mGRE still allows for effective traffic protection. The approach is simple recursive routing: if a given upstream node detects that the best path to a downstream node goes across an MPLS TE tunnel, the corresponding M-LSP branch is re-routed across the tunnel, with an additional tunnel label pushed if needed. Combined with one-hop primary/backup tunnels, this allows for effective multicast traffic protection against link and node failures.
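To make this concrete, the following is a minimal sketch of the global commands used later in the practical example to build the one-hop primary/backup auto-tunnels and let M-LDP recurse its M-LSP branches over them. Interface-level mpls traffic-eng tunnels and ip rsvp bandwidth statements are also assumed on every core link, as shown in the full configurations below.
! Sketch only: global TE and M-LDP commands reused from the practical example
mpls traffic-eng tunnels
mpls traffic-eng auto-tunnel primary onehop
mpls traffic-eng auto-tunnel backup nhop-only
!
mpls mldp
mpls mldp path traffic-eng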

Using M-LSPs as transport for Rosen’s MVPNs

It does not look like Cisco currently implements the “in-band” signaling mechanism in IOS 12.2SR. However, out-of-band mapping could be used to replace mGRE with M-LDP signaled M-LSPs. The configuration is straightforward and requires the following steps (a consolidated configuration sketch follows the list):
  • Disabling core SP multicast if you don’t need it for other applications. M-LDP will be used for M-LSP construction. Of course, you may keep native multicast services if you still need them.
  • Configuring all VRFs with a VPN identifier. The VPN ID is standardized in RFC 2685 and serves the purpose of identifying the same VPN on multiple PEs, possibly spanning multiple ASs. Unlike RDs and RTs, this value is the same among all sites and could be used by protocols such as RADIUS or DHCP to allocate the proper configuration information. M-LDP uses the VPN ID to construct the opaque value shared by all PEs for the given M-VPN. The VPN ID replaces the shared SP core multicast group that was needed for mGRE based P-tunnels. The format for the VPN ID is “OUI:Unique-ID”, where both values are in hex. You may simply choose a value that is unique for every VPN.
  • Specifying the default MDT root IP address. This is necessary in order to create the shared MP2MP LSP known to all PEs for the particular M-VPN. This LSP is used to establish C-PIM adjacencies and to forward all multicast traffic by default. Notice that a single root could be used by multiple M-VPNs as long as they use different VPN IDs. In real-world deployments the root node placement is important, as by default there is only one P-tunnel used for all M-VPN traffic. This results in additional load on the MP2MP root, as all M-VPN traffic is supposed to transit this router. You may compare the MP2MP root to the PIM RP, with the exception that you may manually select it for every M-VPN.
  • Configuring the number of data MDTs available for a given VRF. This value is specific per VRF per router. Just like classic (multicast mGRE) data MDTs, you configure the traffic threshold (by default there is no switchover) that needs to be crossed for the PE to signal a separate P2MP LSP for the exceeding multicast flows. You don’t have to specify an opaque value similar to the multicast-group range needed for mGRE based M-VPNs: the same opaque value base is reused, but the data MDT LSPs will be rooted at the BGP next-hop IP address corresponding to the multicast C-source IP address. Notice that multiple data MDTs built by different PEs toward the same root are effectively merged in the core, since they share the same VPN ID. This significantly improves resource utilization.
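Putting the steps together, here is a minimal sketch of the per-VRF configuration they translate to, using the same sample values (VPN ID 100:1, MP2MP root 10.1.3.3, 10 data MDTs, 1 Kbps threshold) that appear in the practical example below; the comment lines map the commands back to the steps above.
! Step 2: the VPN ID (OUI:Unique-ID, hex) must match on every PE in this M-VPN
! Step 3: the default MDT is an MP2MP LSP rooted at 10.1.3.3
! Step 4: allow up to 10 data MDTs, triggered for flows above 1 Kbps
ip vrf TEST
 rd 100:1
 vpn id 100:1
 route-target export 100:1
 route-target import 100:1
 mdt preference mldp
 mdt default mpls mldp 10.1.3.3
 mdt data mpls mldp 10
 mdt data threshold 1
!
! Step 1: multicast routing is enabled for the VRF only; the SP core stays multicast-free
ip multicast-routing vrf TEST
ip pim vrf TEST ssm default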
The above configuration guidelines imply that there is currently no support for Inter-AS M-LDP based M-VPNs, as M-LDP uses the BGP next-hop IP address to construct P2MP trees and does not support any “proxy” elements. The updated draft “draft-ietf-l3vpn-2547bis-mcast” additionally specifies the use of segmented P-tunnels as an alternative to the “spanning” inter-AS P-tunnels used by mGRE-based M-VPNs. However, this feature does not seem to be implemented in IOS yet, nor are the additional MP-BGP extensions required for the new, transport-independent M-VPNs.

Practical Example

The figure below displays the topology used by the sample scenario. The three routers R1, R2 and R3 form a multicast-free SP core that uses M-LDP and one-hop automatic backup tunnels for traffic protection. The CE1 and CE2 routers in the customer domain use PIM SSM for multicast signaling, and CE2 joins group 232.1.1.1 sourced at CE1 using IGMPv3. We are going to review and verify this scenario:
[Figure: mld-signaled-mvpns-practical-example – sample topology]
The following are the configurations for R1, R2 and R3 – the PE and P routers.
R1:
hostname R1
!
interface Loopback 0
 ip address 10.1.1.1 255.255.255.255
!
interface Serial 2/0
 no shutdown
 encapsulation frame-relay
 no frame-relay inverse-arp
!
interface Serial 2/0.12 point-to-point
 frame-relay interface-dlci 102
 ip address 10.1.12.1 255.255.255.0
 no ip pim sparse-mode
 bandwidth 10000
 mpls traffic-eng tunnels
 ip rsvp bandwidth 10000
!
interface Serial 2/0.13 point-to-point
 frame-relay interface-dlci 103
 ip address 10.1.13.1 255.255.255.0
 no ip pim sparse-mode
 bandwidth 10000
 mpls traffic-eng tunnels
 ip rsvp bandwidth 10000
!
mpls traffic-eng tunnels
mpls traffic-eng auto-tunnel backup nhop-only
mpls traffic-eng auto-tunnel primary onehop
!
ip routing protocol purge interface
!
router ospf 1
 network 0.0.0.0 0.0.0.0 area 0
 mpls ldp autoconfig area 0
 mpls traffic-eng area 0
 mpls traffic-eng router-id Loopback0
 mpls traffic-eng multicast-intact
!
mpls mldp
mpls mldp path traffic-eng
!
ip vrf TEST
 rd 100:1
 vpn id 100:1
 route-target export 100:1
 route-target import 100:1
 mdt preference mldp
 mdt default mpls mldp 10.1.3.3
 mdt data mpls mldp 10
 mdt data threshold 1
!
ip multicast-routing vrf TEST
ip pim vrf TEST ssm default
!
interface FastEthernet 0/0
 ip vrf forwarding TEST
 ip address 172.16.1.1 255.255.255.0
 ip pim sparse-mode
 no shutdown
!
router bgp 100
 bgp router-id 10.1.1.1
 neighbor 10.1.2.2 remote-as 100
 neighbor 10.1.2.2 update-source Loopback0
 address-family ipv4 unicast
  no neighbor 10.1.2.2 activate
 address-family vpnv4 unicast
  neighbor 10.1.2.2 activate
 address-family ipv4 vrf TEST
  redistribute connected
The above configuration enables LDP and RSVP-TE in the core, automatically configuring one-hop primary and backup tunnels. Notice that the OSPF process is set up for LDP auto-configuration. A VRF is created on the router, and R1 is set to peer with R2 using the VPNv4 address family. There is no multicast routing enabled on the P-interfaces; only the VRF is set up for multicast routing using PIM SSM. Pay close attention to the VRF configuration:
mpls mldp
mpls mldp path traffic-eng
!
ip vrf TEST
 rd 100:1
 vpn id 100:1
 route-target export 100:1
 route-target import 100:1
 mdt preference mldp
 mdt default mpls mldp 10.1.3.3
 mdt data mpls mldp 10
 mdt data threshold 1
The M-LDP configuration creates a single default MPLS-based MDT using R3’s Loopback0 as the root node. The opaque value for this MP2MP LSP is constructed based on the VPN ID value of 100:1. Additionally, data MDTs are triggered when a traffic flow exceeds 1 Kbps, with a limit of 10 data MDTs per VRF. Pay attention to the fact that M-LDP is configured to use the traffic-engineering tunnels for M-LSP forwarding via the command mpls mldp path traffic-eng, which allows the upstream nodes to use traffic-engineering tunnels for one-hop M-LSP segments.
The configuration for R2 is almost identical to R1:
R2:
hostname R2
!
interface Loopback 0
 ip address 10.1.2.2 255.255.255.255
!
interface Serial 2/0
 no shutdown
 encapsulation frame-relay
 no frame-relay inverse-arp
!
interface Serial 2/0.12 point-to-point
 frame-relay interface-dlci 201
 ip address 10.1.12.2 255.255.255.0
 no ip pim sparse-mode
 bandwidth 10000
 mpls traffic-eng tunnels
 ip rsvp bandwidth 10000
!
interface Serial 2/0.23 point-to-point
 frame-relay interface-dlci 203
 ip address 10.1.23.1 255.255.255.0
 no ip pim sparse-mode
 bandwidth 10000
 mpls traffic-eng tunnels
 ip rsvp bandwidth 10000
!
mpls traffic-eng tunnels
mpls traffic-eng auto-tunnel backup nhop-only
mpls traffic-eng auto-tunnel primary onehop
!
ip routing protocol purge interface
!
router ospf 1
 network 0.0.0.0 0.0.0.0 area 0
 mpls ldp autoconfig area 0
 mpls traffic-eng area 0
 mpls traffic-eng router-id Loopback0
 mpls traffic-eng multicast-intact
!
mpls mldp
mpls mldp path traffic-eng
!
interface FastEthernet 0/0
 ip vrf forwarding TEST
 ip address 172.16.2.1 255.255.255.0
 ip pim sparse-mode
 no shutdown
!
router bgp 100
 bgp router-id 10.1.2.2
 neighbor 10.1.1.1 remote-as 100
 neighbor 10.1.1.1 update-source Loopback0
 address-family ipv4 unicast
  no neighbor 10.1.1.1 activate
 address-family vpnv4 unicast
  neighbor 10.1.1.1 activate
 address-family ipv4 vrf TEST
  redistribute connected
And lastly, R3 is configured as a pure P-router. All three routers are configured for one-hop primary and backup tunnels, which allows for complete link protection – every single link failure in this topology is protected by Fast-Reroute. MLDP is configured to use the TE tunnels and thus all multicast traffic is protected as well.
R3:
hostname R3
!
interface Loopback 0
 ip address 10.1.3.3 255.255.255.255
!
mpls traffic-eng tunnels
mpls traffic-eng auto-tunnel backup nhop-only
mpls traffic-eng auto-tunnel primary onehop
!
ip routing protocol purge interface
!
router ospf 1
 network 0.0.0.0 0.0.0.0 area 0
 mpls ldp autoconfig area 0
 mpls traffic-eng area 0
 mpls traffic-eng router-id Loopback0
 mpls traffic-eng multicast-intact
!
mpls mldp
mpls mldp path traffic-eng
!
interface Serial 2/0
 no shutdown
 encapsulation frame-relay
 no frame-relay inverse-arp
!
interface Serial 2/0.13 point-to-point
 frame-relay interface-dlci 301
 ip address 10.1.13.3 255.255.255.0
 no ip pim sparse-mode
 bandwidth 10000
 mpls traffic-eng tunnels
 ip rsvp bandwidth 10000
!
interface Serial 2/0.23 point-to-point
 frame-relay interface-dlci 302
 ip address 10.1.23.3 255.255.255.0
 no ip pim sparse-mode
 bandwidth 10000
 mpls traffic-eng tunnels
 ip rsvp bandwidth 10000
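The CE configurations are not included in the original post. The following is a minimal sketch of what CE1 and CE2 might look like, assuming CE1 is the device at 172.16.1.7 and CE2 is at 172.16.2.7 (both addresses appear in the verification output below); the interface names are assumptions.
CE1 (assumed):
hostname CE1
!
ip multicast-routing
ip pim ssm default
!
! Interface facing R1; CE1 runs PIM toward the PE, matching the C-PIM
! neighbor 172.16.1.7 seen on R1 in the verification below
interface FastEthernet 0/0
 ip address 172.16.1.7 255.255.255.0
 ip pim sparse-mode
 no shutdown
CE2 (assumed):
hostname CE2
!
ip multicast-routing
ip pim ssm default
!
! Interface facing R2; CE2 acts as a plain IGMPv3 receiver on this link,
! consistent with R2 showing no C-PIM neighbor on FastEthernet0/0
interface FastEthernet 0/0
 ip address 172.16.2.7 255.255.255.0
 ip igmp version 3
 no shutdown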

Practical Scenario Verification

Start by checking M-LDP peering and ensuring there is no multicast running in the network core:
R3#show ip pim neighbor
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
      P - Proxy Capable, S - State Refresh Capable, G - GenID Capable
Neighbor          Interface                Uptime/Expires    Ver   DR
Address                                                            Prio/Mode

R1#show mpls ldp neighbor 
    Peer LDP Ident: 10.1.3.3:0; Local LDP Ident 10.1.1.1:0
  TCP connection: 10.1.3.3.56698 - 10.1.1.1.646
 State: Oper; Msgs sent/rcvd: 272/270; Downstream
 Up time: 03:49:31
 LDP discovery sources:
   Serial2/0.13, Src IP addr: 10.1.13.3
   Targeted Hello 10.1.1.1 -> 10.1.3.3, active, passive
        Addresses bound to peer LDP Ident:
          10.1.3.3        10.1.13.3       10.1.23.3
    Peer LDP Ident: 10.1.2.2:0; Local LDP Ident 10.1.1.1:0
  TCP connection: 10.1.2.2.31782 - 10.1.1.1.646
 State: Oper; Msgs sent/rcvd: 270/276; Downstream
 Up time: 03:49:26
 LDP discovery sources:
   Serial2/0.12, Src IP addr: 10.1.12.2
   Targeted Hello 10.1.1.1 -> 10.1.2.2, active, passive
        Addresses bound to peer LDP Ident:
          10.1.2.2        10.1.12.2       10.1.23.1       

R1#show mpls mldp neighbor

  MLDP peer ID    : 10.1.3.3:0, uptime 03:49:36 Up,
  Target Adj     : Yes
  Session hndl   : 3
  Upstream count : 1
  Branch count   : 0
  Path count     : 2
  Path(s)        : 10.1.3.3          No LDP Tunnel65337
                 : 10.1.13.3         LDP Serial2/0.13
  Nhop count     : 2
  Nhop list      : 10.1.3.3 10.1.13.3 

  MLDP peer ID    : 10.1.2.2:0, uptime 03:49:30 Up,
  Target Adj     : Yes
  Session hndl   : 4
  Upstream count : 0
  Branch count   : 0
  Path count     : 2
  Path(s)        : 10.1.2.2          No LDP Tunnel65336
                 : 10.1.12.2         LDP Serial2/0.12
  Nhop count     : 0
Notice the MLDP output, which shows additional paths to every peer via the MPLS TE tunnels, even though no LDP is enabled on them. This is due to the fact that we enabled the command mpls mldp path traffic-eng. Check the local traffic-engineering tunnels on R1 to see that the tunnels listed above are the primary one-hop tunnels to R2 and R3 respectively. Pay attention to the fact that these tunnels are locally protected from link failures.
R1#show mpls traffic-eng tunnels summary
Signalling Summary:
    LSP Tunnels Process:            running
    Passive LSP Listener:           running
    RSVP Process:                   running
    Forwarding:                     enabled
    auto-tunnel:
  backup Enabled  (2 ), id-range:65436-65535
 onehop Enabled  (2 ), id-range:65336-65435
 mesh   Disabled (0 ), id-range:64336-65335

    Periodic reoptimization:        every 3600 seconds, next in 44 seconds
    Periodic FRR Promotion:         Not Running
    Periodic auto-tunnel:
        primary establish scan:     every 10 seconds, next in 1 seconds
        primary rm active scan:     disabled
        backup notinuse scan:       every 3600 seconds, next in 151 seconds
    Periodic auto-bw collection:    every 300 seconds, next in 44 seconds
    P2P:
      Head: 4 interfaces,   4 active signalling attempts, 4 established
            8 activations,  4 deactivations
            58 failed activations
            0 SSO recovery attempts, 0 SSO recovered
      Midpoints: 2, Tails: 4

    P2MP:
      Head: 0 interfaces,   0 active signalling attempts, 0 established
            0 sub-LSP activations,  0 sub-LSP deactivations
            0 LSP successful activations,  0 LSP deactivations
            SSO: Unsupported
      Midpoints: 0, Tails: 0

R1#show mpls traffic-eng tunnels tunnel 65336

Name: R1_t65336                           (Tunnel65336) Destination: 10.1.2.2
  Status:
    Admin: up         Oper: up     Path: valid       Signalling: connected
    path option 1, type explicit __dynamic_tunnel65336 (Basis for Setup, path weight 10)

  Config Parameters:
    Bandwidth: 0        kbps (Global)  Priority: 7  7   Affinity: 0x0/0xFFFF
    Metric Type: TE (default)
    AutoRoute announce: enabled  LockDown: disabled Loadshare: 0        bw-based
    auto-bw: disabled
  Active Path Option Parameters:
    State: explicit path option 1 is active
    BandwidthOverride: disabled  LockDown: disabled  Verbatim: disabled

  InLabel  :  -
   OutLabel : Serial2/0.12, implicit-null
  Next Hop : 10.1.12.2
  FRR OutLabel : Tunnel65436, explicit-null
  RSVP Signalling Info:
       Src 10.1.1.1, Dst 10.1.2.2, Tun_Id 65336, Tun_Instance 6558
    RSVP Path Info:
      My Address: 10.1.12.1
      Explicit Route: 10.1.12.2 10.1.2.2
      Record   Route:   NONE
      Tspec: ave rate=0 kbits, burst=1000 bytes, peak rate=0 kbits
    RSVP Resv Info:
      Record   Route:  10.1.2.2(0)
      Fspec: ave rate=0 kbits, burst=1000 bytes, peak rate=0 kbits
  Shortest Unconstrained Path Info:
    Path Weight: 10 (TE)
    Explicit Route: 10.1.12.2 10.1.2.2
  History:
    Tunnel:
      Time since created: 3 hours, 57 minutes
      Time since path change: 3 hours, 57 minutes
      Number of LSP IDs (Tun_Instances) used: 1
    Current LSP: [ID: 6558]
      Uptime: 3 hours, 57 minutes

R1#show mpls traffic-eng tunnels tunnel 65337

Name: R1_t65337                           (Tunnel65337) Destination: 10.1.3.3
  Status:
    Admin: up         Oper: up     Path: valid       Signalling: connected
    path option 1, type explicit __dynamic_tunnel65337 (Basis for Setup, path weight 10)

  Config Parameters:
    Bandwidth: 0        kbps (Global)  Priority: 7  7   Affinity: 0x0/0xFFFF
    Metric Type: TE (default)
    AutoRoute announce: enabled  LockDown: disabled Loadshare: 0        bw-based
    auto-bw: disabled
  Active Path Option Parameters:
    State: explicit path option 1 is active
    BandwidthOverride: disabled  LockDown: disabled  Verbatim: disabled

  InLabel  :  -
   OutLabel : Serial2/0.13, implicit-null
  Next Hop : 10.1.13.3
  FRR OutLabel : Tunnel65437, explicit-null
  RSVP Signalling Info:
       Src 10.1.1.1, Dst 10.1.3.3, Tun_Id 65337, Tun_Instance 2427
    RSVP Path Info:
      My Address: 10.1.13.1
      Explicit Route: 10.1.13.3 10.1.3.3
      Record   Route:   NONE
      Tspec: ave rate=0 kbits, burst=1000 bytes, peak rate=0 kbits
    RSVP Resv Info:
      Record   Route:  10.1.3.3(0)
      Fspec: ave rate=0 kbits, burst=1000 bytes, peak rate=0 kbits
  Shortest Unconstrained Path Info:
    Path Weight: 10 (TE)
    Explicit Route: 10.1.13.3 10.1.3.3
  History:
    Tunnel:
      Time since created: 3 hours, 57 minutes
      Time since path change: 3 hours, 57 minutes
      Number of LSP IDs (Tun_Instances) used: 1
    Current LSP: [ID: 2427]
      Uptime: 3 hours, 57 minutes
Next, validate the existence of the MP2MP LSP for the VPN configured on R1 and R2. This LSP should be rooted at R3, and both R1 and R2 should have upstream and downstream components of this LSP. The output below shows the following interesting information:
  • The upstream component (U) of the MP2MP LSP uses label 17 and Tunnel 65337 to reach the next upstream hop toward the root node. This is how R1 sends traffic to all other MP2MP participants.
  • The downstream component (D) shows the P2MP label 21 and the next hop of 10.1.3.3, meaning R3’s Loopback0 is the root of the respective P2MP tree.
  • Replication clients are the nodes downstream of the P2MP LSP that are to receive the replicated multicast traffic. In our case, the replication client is the multicast-enabled VRF (MVRF), which uses the virtual Lspvif0 interface as the “ingress” multicast interface. This interface is the abstraction used by Cisco IOS to represent the terminating P2MP LSP. To the local multicast routing subsystem, all multicast traffic appears to be sourced from this interface.
R1#show mpls mldp database
   * Indicates MLDP recursive forwarding is enabled

LSM ID : 1 (RNR LSM ID: F1000002) Type: MP2MP   Uptime : 04:04:20
  FEC Root           : 10.1.3.3
  Opaque decoded     : [mdt 100:1 0]
  Opaque length      : 11 bytes
  Opaque value       : 07 000B 0001000000000100000000
  RNR active LSP     : (this entry)
  Upstream client(s) :
    10.1.3.3:0    [Active]
      Expires        : Never         Path Set ID  : B9000001
      Out Label (U)  : 17            Interface    : Tunnel65337
      Local Label (D): 21            Next Hop     : 10.1.3.3
  Replication client(s):
    MDT  (VRF TEST)
      Uptime         : 04:04:20      Path Set ID  : DD000002
      Interface      : Lspvif0
You may get output similar to the one above using the same show command on R2.
R2#show mpls mldp database
   * Indicates MLDP recursive forwarding is enabled

LSM ID : 43000001 (RNR LSM ID: 21000002)    Type: MP2MP   Uptime : 04:29:08
   FEC Root           : 10.1.3.3
   Opaque decoded     : [mdt 100:1 0]
  Opaque length      : 11 bytes
  Opaque value       : 07 000B 0001000000000100000000
  RNR active LSP     : (this entry)
  Upstream client(s) :
    10.1.3.3:0    [Active]
      Expires        : Never         Path Set ID  : 18000001
      Out Label (U)  : 16            Interface    : Tunnel65337*
      Local Label (D): 21            Next Hop     : 10.1.3.3
  Replication client(s):
    MDT  (VRF TEST)
      Uptime         : 04:29:08      Path Set ID  : 2
      Interface      : Lspvif0
If you check the MPLS forwarding table on R1, you will notice that the P2MP downstream component label is mapped to “mdt 100:1” using an aggregate lookup operation. This means that the decapsulated packets are routed using the respective VRF’s MFIB database.
R1#show mpls forwarding-table 
Local      Outgoing   Prefix           Bytes Label   Outgoing   Next Hop
Label      Label      or Tunnel Id     Switched      interface
...
21    [T]  No Label   [mdt 100:1 0][V] 53166         aggregate/TEST
22         Pop Label  10.1.3.3 65436 [1]   
                                       0             Se2/0.12   point2point 

[T] Forwarding through a LSP tunnel.
 View additional labelling info with the 'detail' option
Look at the root of the MP2MP LSP for information on the upstream and downstream labels. The output below illustrates that R3 sees two downstream P2MP LSP clients for the opaque value “mdt 100:1” and uses label 21 for both of them. Both downstream clients are reachable via the TE tunnels. Next, there are two “upstream MP2P” labels that are advertised by R3 to R1 and R2 respectively, to be used for encapsulation of upstream traffic. R1 and R2 will use those labels to reach the root of the MP2MP LSP via the upstream tunnels.
R3#show mpls mldp database 
   * Indicates MLDP recursive forwarding is enabled

LSM ID : 10000002   Type: MP2MP   Uptime : 05:49:04
   FEC Root           : 10.1.3.3 (we are the root)
   Opaque decoded     : [mdt 100:1 0]
  Opaque length      : 11 bytes
  Opaque value       : 07 000B 0001000000000100000000
  Upstream client(s) :
    None
      Expires        : N/A           Path Set ID  : 88000004
  Replication client(s):
    10.1.1.1:0
      Uptime         : 05:49:04      Path Set ID  : 5
      Out label (D)  : 21            Interface    : Tunnel65336*
      Local label (U): 17            Next Hop     : 10.1.1.1
    10.1.2.2:0
      Uptime         : 00:50:43      Path Set ID  : D8000007
      Out label (D)  : 21            Interface    : Tunnel65337*
      Local label (U): 16            Next Hop     : 10.1.2.2
Now that we have confirmed the existence of the MP2MP LSP, check the C-PIM adjacencies that should be established across it. As you can see, adjacencies are established over the Lspvif0 interface, which appears as the actual multicast packet source to every C-PIM instance. There is no mGRE MDT tunnel anymore; it has been replaced by the Lspvif0 interface.
R1#show ip pim vrf TEST neighbor 
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
      P - Proxy Capable, S - State Refresh Capable, G - GenID Capable
Neighbor          Interface                Uptime/Expires    Ver   DR
Address                                                            Prio/Mode
172.16.1.7        FastEthernet0/0          03:53:39/00:01:34 v2    1 / DR S G
10.1.2.2          Lspvif0                  04:30:56/00:01:24 v2    1 / DR S P G

R2#show ip pim vrf TEST neighbor
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
      P - Proxy Capable, S - State Refresh Capable, G - GenID Capable
Neighbor          Interface                Uptime/Expires    Ver   DR
Address                                                            Prio/Mode
10.1.1.1          Lspvif0                  04:31:18/00:01:30 v2    1 / S P G

R1#show ip pim mdt
  * implies mdt is the default MDT
  MDT Group/Num   Interface   Source                   VRF
* 0               Lspvif0     Loopback0                TEST

R2#show ip pim mdt 
  * implies mdt is the default MDT
  MDT Group/Num   Interface   Source                   VRF
* 0               Lspvif0     Loopback0                TEST
The final part of the test is configuring one of the CE routers (CE2 in our case) to join a specific multicast source.
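The CE-side join command itself is not shown in the original post; a minimal sketch, assuming CE2 issues an IGMPv3 source-specific join on its interface facing R2 (the interface name is an assumption):
! On CE2: join 232.1.1.1 restricted to the source 172.16.1.7 (CE1)
interface FastEthernet 0/0
 ip igmp version 3
 ip igmp join-group 232.1.1.1 source 172.16.1.7
With the join in place, R2 registers the group membership learned via IGMPv3: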
R2#show ip igmp vrf TEST groups 
IGMP Connected Group Membership
Group Address    Interface                Uptime    Expires   Last Reporter   Group Accounted
232.1.1.1        FastEthernet0/0          03:37:04  stopped   172.16.2.7
224.0.1.40       Lspvif0                  04:18:01  00:02:24  10.1.2.2
Notice how the C-multicast routing tables look on the PE routers:
R1#show ip mroute vrf TEST
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry, E - Extranet,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group,
       V - RD & Vector, v - Vector
Outgoing interface flags: H - Hardware switched, A - Assert winner
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(172.16.1.7, 232.1.1.1), 01:06:34/00:02:39, flags: sT
  Incoming interface: FastEthernet0/0, RPF nbr 172.16.1.7
  Outgoing interface list:
    Lspvif0, Forward/Sparse, 03:38:23/00:02:39

(*, 224.0.1.40), 04:37:07/00:02:08, RP 0.0.0.0, flags: DPL
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list: Null

R2#show ip mroute vrf TEST
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry, E - Extranet,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group,
       V - RD & Vector, v - Vector
Outgoing interface flags: H - Hardware switched, A - Assert winner
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(172.16.1.7, 232.1.1.1), 03:37:48/00:02:18, flags: sTI
  Incoming interface: Lspvif0, RPF nbr 10.1.1.1
  Outgoing interface list:
    FastEthernet0/0, Forward/Sparse-Dense, 03:37:48/00:02:18

(*, 224.0.1.40), 04:36:29/00:02:44, RP 0.0.0.0, flags: DCL
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Lspvif0, Forward/Sparse, 04:18:45/00:02:44
The control plane looks correct, so now we may test the data-plane connectivity by sourcing ping packets to 232.1.1.1 from CE1. Before we do this, enable packet debugging on R2.
R2#debug mpls packet 
Packet debugging is on

R2#debug ip mfib vrf TEST fs 
MFIB IPv4 fs debugging enabled for vrf TEST
MPLS turbo: Se2/0.23: rx: Len 108 Stack {21 0 253} - ipv4 data
MPLS les: Se2/0.23: rx: Len 108 Stack {21 0 253} - ipv4 data
MFIBv4(0x1): Receive (172.16.1.7,232.1.1.1) from Lspvif0 (FS): hlen 5 prot 1 len 100 ttl 252 frag 0x0
The above output signifies that the received ICMP packets arrive in MPLS encapsulation with the downstream label 21. It is interesting to look at the debugging output on R3, which is the MP2MP root:
R3#
MPLS turbo: Se2/0.13: rx: Len 108 Stack {17 0 254} - ipv4 data
MPLS les: Se2/0.13: rx: Len 108 Stack {17 0 254} - ipv4 data

MPLS les: Tu65337: tx: Len 108 Stack {21 0 253} - ipv4 data
MPLS les: Se2/0.23: tx: Len 108 Stack {21 0 253} - ipv4 data
You can clearly see the MPLS packet received on the upstream LSP with label 17 and transmitted downstream on the tunnel interface connecting R3 to R2 with the downstream label 21. Notice that the outgoing downstream label is not stacked with a tunnel transport label, as the transport label for Tunnel65337 is implicit-null:
R3#show mpls traffic-eng tunnels tunnel 65337

Name: R3_t65337                           (Tunnel65337) Destination: 10.1.2.2
  Status:
    Admin: up         Oper: up     Path: valid       Signalling: connected
    path option 1, type explicit __dynamic_tunnel65337 (Basis for Setup, path weight 10)

  Config Parameters:
    Bandwidth: 0        kbps (Global)  Priority: 7  7   Affinity: 0x0/0xFFFF
    Metric Type: TE (default)
    AutoRoute announce: enabled  LockDown: disabled Loadshare: 0        bw-based
    auto-bw: disabled
  Active Path Option Parameters:
    State: explicit path option 1 is active
    BandwidthOverride: disabled  LockDown: disabled  Verbatim: disabled

  InLabel  :  -
  OutLabel : Serial2/0.23, implicit-null
  Next Hop : 10.1.23.1
This completes the test of basic connectivity and the MP2MP LSP. Next in turn is testing the advanced features: traffic protection and data MDT switchover.

Advanced Features: FRR and MDT Switchover

We will test FRR as it applies to the MP2MP LSP. To accomplish this, disable the link connecting R3 to R2 by removing the DLCI from R3:
R3:
interface Serial 2/0.23
 no frame-relay interface-dlci 302
Now observe the status of the MPLS TE tunnels. Notice that the one-hop tunnel connecting R3 to R2 is now re-routed over the backup tunnel going across R1.
R3#show mpls mldp database 
  * Indicates MLDP recursive forwarding is enabled

LSM ID : 10000002   Type: MP2MP   Uptime : 04:53:19
  FEC Root           : 10.1.3.3 (we are the root)
  Opaque decoded     : [mdt 100:1 0]
  Opaque length      : 11 bytes
  Opaque value       : 07 000B 0001000000000100000000
  Upstream client(s) :
    None
      Expires        : N/A           Path Set ID  : 88000004
  Replication client(s):
    10.1.1.1:0
      Uptime         : 04:53:19      Path Set ID  : 5
        Out label (D)  : 21            Interface    : Tunnel65336*
      Local label (U): 17            Next Hop     : 10.1.1.1
    10.1.2.2:0
      Uptime         : 04:53:11      Path Set ID  : C7000006
       Out label (D)  : 21            Interface    : Tunnel65337*
      Local label (U): 16            Next Hop     : 10.1.2.2

R3#show mpls traffic-eng tunnels Tunnel 65337

Name: R3_t65337                           (Tunnel65337) Destination: 10.1.2.2
  Status:
    Admin: up         Oper: up     Path: valid       Signalling: connected
    path option 1, type explicit __dynamic_tunnel65337 (Basis for Setup, path weight 10)
        Change in required resources detected: reroute pending
        Currently Signalled Parameters:
          Bandwidth: 0        kbps (Global)  Priority: 7  7   Affinity: 0x0/0xFFFF
          Metric Type: TE (default)

  Config Parameters:
    Bandwidth: 0        kbps (Global)  Priority: 7  7   Affinity: 0x0/0xFFFF
    Metric Type: TE (default)
    AutoRoute announce: enabled  LockDown: disabled Loadshare: 0        bw-based
    auto-bw: disabled
  Active Path Option Parameters:
    State: explicit path option 1 is active
    BandwidthOverride: disabled  LockDown: disabled  Verbatim: disabled

  InLabel  :  -
  OutLabel : Serial2/0.23, implicit-null
  Next Hop : 10.1.23.1
    FRR OutLabel : Tunnel65436, explicit-null (in use)
  RSVP Signalling Info:
       Src 10.1.3.3, Dst 10.1.2.2, Tun_Id 65337, Tun_Instance 6593
    RSVP Path Info:
      My Address: 10.1.23.3
      Explicit Route: 10.1.2.2 10.1.2.2
      Record   Route:   NONE
      Tspec: ave rate=0 kbits, burst=1000 bytes, peak rate=0 kbits
    RSVP Resv Info:
      Record   Route:  10.1.2.2(0)
      Fspec: ave rate=0 kbits, burst=1000 bytes, peak rate=0 kbits
  Shortest Unconstrained Path Info:
    Path Weight: 20 (TE)
    Explicit Route: 10.1.13.1 10.1.12.2 10.1.2.2
  History:
    Tunnel:
      Time since created: 1 hours, 27 minutes
      Time since path change: 1 hours, 27 minutes
      Number of LSP IDs (Tun_Instances) used: 4
    Current LSP: [ID: 6593]
      Uptime: 1 hours, 27 minutes

R3#show mpls traffic-eng tunnels Tunnel 65436

Name: R3_t65436                           (Tunnel65436) Destination: 10.1.2.2
  Status:
    Admin: up         Oper: up     Path: valid       Signalling: connected
    path option 1, type explicit __dynamic_tunnel65436 (Basis for Setup, path weight 20)

  Config Parameters:
    Bandwidth: 0        kbps (Global)  Priority: 7  7   Affinity: 0x0/0xFFFF
    Metric Type: TE (default)
    AutoRoute announce: disabled LockDown: disabled Loadshare: 0        bw-based
    auto-bw: disabled
  Active Path Option Parameters:
    State: explicit path option 1 is active
    BandwidthOverride: disabled  LockDown: disabled  Verbatim: disabled

  InLabel  :  -
    OutLabel : Serial2/0.13, 22
    Next Hop : 10.1.13.1
  RSVP Signalling Info:
         Src 10.1.3.3, Dst 10.1.2.2, Tun_Id 65436, Tun_Instance 1
    RSVP Path Info:
      My Address: 10.1.13.3
      Explicit Route: 10.1.13.1 10.1.12.2 10.1.2.2
      Record   Route:   NONE
      Tspec: ave rate=0 kbits, burst=1000 bytes, peak rate=0 kbits
    RSVP Resv Info:
      Record   Route:   NONE
      Fspec: ave rate=0 kbits, burst=1000 bytes, peak rate=0 kbits
  Shortest Unconstrained Path Info:
    Path Weight: 20 (TE)
    Explicit Route: 10.1.13.1 10.1.12.2 10.1.2.2
  History:
    Tunnel:
      Time since created: 1 hours, 27 minutes
      Time since path change: 1 hours, 27 minutes
      Number of LSP IDs (Tun_Instances) used: 1
    Current LSP: [ID: 1]
      Uptime: 1 hours, 27 minutes
Return things to normal on R3 (for simplicity) and attempt a flood ping from CE1 to the group 232.1.1.1 (use a timeout value of zero).
R7#ping 232.1.1.1 repeat 10000 timeout 0

Type escape sequence to abort.
Sending 10000, 100-byte ICMP Echos to 232.1.1.1, timeout is 0 seconds:
......................................................................
......................................................................
Both R1 and R2 now display an additional P2MP LSP, signaled from R2 toward R1. This M-LSP is rooted at R1’s Loopback0, and both R1 and R2 are shown as leaf nodes for it (even though R1 does not need any traffic). R1 forwards packets downstream along the P2MP LSP using the TE tunnel between R1 and R2 for recursive forwarding. This ensures traffic protection for the data MDT multicast flows as well.
R1#show mpls mldp database 
  * Indicates MLDP recursive forwarding is enabled

LSM ID : 8D000006   Type: P2MP   Uptime : 00:00:31
  FEC Root           : 10.1.1.1 (we are the root)
  Opaque decoded     : [mdt 100:1 1]
  Opaque length      : 11 bytes
  Opaque value       : 07 000B 0001000000000100000001
  Upstream client(s) :
    None
      Expires        : N/A           Path Set ID  : C5000007
  Replication client(s):
    MDT  (VRF TEST)
      Uptime         : 00:00:31      Path Set ID  : None
      Interface      : Lspvif0
    10.1.2.2:0
      Uptime         : 00:00:31      Path Set ID  : None
      Out label (D)  : 23            Interface    : Tunnel65336
      Local label (U): None          Next Hop     : 10.1.2.2
...
R2#show mpls mldp database 
  * Indicates MLDP recursive forwarding is enabled

LSM ID : 81000006   Type: P2MP   Uptime : 00:00:24
  FEC Root           : 10.1.1.1
  Opaque decoded     : [mdt 100:1 1]
  Opaque length      : 11 bytes
  Opaque value       : 07 000B 0001000000000100000001
  Upstream client(s) :
    10.1.1.1:0    [Active]
      Expires        : Never         Path Set ID  : 75000006
      Out Label (U)  : None          Interface    : Tunnel65336*
      Local Label (D): 23            Next Hop     : 10.1.1.1
  Replication client(s):
    MDT  (VRF TEST)
      Uptime         : 00:00:24      Path Set ID  : None
      Interface      : Lspvif0

Conclusions

M-LDP has reached the initial deployment phase and could be used for intra-AS M-VPN implementations. The two main advantages of using M-LDP over mGRE transport are the reduction of PIM signaling overhead and the option to protect multicast traffic using MPLS TE FRR. However, unlike source-signaled RSVP-TE based P2MP LSPs, M-LDP signaled LSPs do not support traffic engineering, only recursive routing over TE tunnels. The IOS version used for testing supports M-VPNs based on M-LDP signaled P-tunnels (out-of-band), but does not seem to support in-band signaling of M-LDP LSPs. However, some MLDP commands give an idea that MLDP already supports the opaque types necessary to implement in-band signaling. We may expect to see more extensions added to M-LDP in the near future, including the CsC option to build P2MP LSPs using proxy root identifiers and Inter-AS extensions to M-LDP based P-tunnels (segmented tunnels).
