Chapter 7. Operating and Troubleshooting IP Multicast Networks

This chapter covers the concepts of troubleshooting a multicast environment. We begin with an overview of the logical steps in troubleshooting a multicast environment. Next, we use an example network to explain the steps and commands necessary for troubleshooting in order to help you understand multicast packet flows. This chapter also examines troubleshooting case scenarios that are mapped to the multicast troubleshooting logic and the fundamental usage of that logic while troubleshooting a multicast problem.

Multicast Troubleshooting Logic

There is a wide range of techniques to determine the root cause of a problem. In this chapter, we take a systematic approach to troubleshooting multicast issues by using the following methodology:

1. Receiver check

2. Source check

3. State verification

Multicast Troubleshooting Methodology

The topology presented is a simple multicast network, as shown in Figure 7-1. R3 and R4 are configured as RPs (Anycast RP, using Auto-RP for downstream propagation). OSPF is the routing protocol configured for the unicast domain. R2 is the first-hop router connected to the source, and R5 is the last-hop router connected to the receiver.

Figure 7-1 Multicast Troubleshooting Sample Topology

Example 7-1 provides the configurations for the routers, and you should review these before proceeding with the chapter.

Example 7-1 R3 and R4 Multicast Configurations for the Sample Topology


R3
R3#show running-config
..
hostname R3
!
no ip domain lookup
ip multicast-routing
ip cef
!
interface Loopback0
 ip address 192.168.3.3 255.255.255.255
 ip pim sparse-mode
!
interface Loopback100
 ip address 192.168.100.100 255.255.255.255
 ip pim sparse-mode
!
interface Ethernet2/0
 ip address 10.1.2.2 255.255.255.0
 ip pim sparse-mode
!
interface Ethernet3/0
 ip address 10.1.4.1 255.255.255.0
 ip pim sparse-mode
!
router ospf 1
 router-id 192.168.3.3
 network 0.0.0.0 255.255.255.255 area 0
!
ip forward-protocol nd
!
ip pim autorp listener
ip pim send-rp-announce Loopback100 scope 20 group-list 1 interval 10
ip pim send-rp-discovery Loopback100 scope 20 interval 10
ip msdp peer 192.168.4.4 connect-source Loopback0
ip msdp cache-sa-state
ip msdp default-peer 192.168.4.4
!
access-list 1 permit 239.1.0.0 0.0.255.255


R4
R4#show running-config
Building configuration...
..
hostname R4
!
no ip domain lookup
ip multicast-routing
ip cef
!
interface Loopback0
 ip address 192.168.4.4 255.255.255.255
 ip pim sparse-mode
!
interface Loopback100
 ip address 192.168.100.100 255.255.255.255
 ip pim sparse-mode
!
interface Ethernet2/0
 ip address 10.1.5.1 255.255.255.0
 ip pim sparse-mode
!
!
interface Ethernet3/0
 ip address 10.1.4.2 255.255.255.0
 ip pim sparse-mode
!
router ospf 1
 router-id 192.168.4.4
 network 0.0.0.0 255.255.255.255 area 0
!
ip pim autorp listener
ip pim send-rp-announce Loopback100 scope 20 group-list 1 interval 10
ip pim send-rp-discovery Loopback100 scope 20 interval 10
ip msdp peer 192.168.3.3 connect-source Loopback0
ip msdp cache-sa-state
ip msdp default-peer 192.168.3.3
!
access-list 1 permit 239.1.0.0 0.0.255.255


Example 7-2 provides the downstream router configuration.

Example 7-2 Downstream Router Configurations


R2
R2#show running-config

hostname R2
!
ip multicast-routing
ip cef
!
interface Loopback0
 ip address 192.168.2.2 255.255.255.255
 ip pim sparse-mode
!
interface Ethernet0/0
 ip address 10.1.1.2 255.255.255.0
 ip pim sparse-mode
 load-interval 30
!
interface Ethernet1/0
 ip address 10.1.3.1 255.255.255.0
 ip pim sparse-mode
!
interface Ethernet2/0
 ip address 10.1.2.1 255.255.255.0
 ip pim sparse-mode
!
router ospf 1
 network 0.0.0.0 255.255.255.255 area 0
!
ip pim autorp listener
….


R5
R5#show running-config

hostname R5
!
no ip domain lookup
ip multicast-routing
ip cef
interface Loopback0
 ip address 192.168.5.5 255.255.255.255
 ip pim sparse-mode

!
interface Ethernet0/0
 ip address 10.1.6.1 255.255.255.0
 ip pim sparse-mode
!
interface Ethernet1/0
 ip address 10.1.3.2 255.255.255.0
 ip pim sparse-mode
!
interface Ethernet2/0
 ip address 10.1.5.2 255.255.255.0
 ip pim sparse-mode
!
router ospf 1
 network 0.0.0.0 255.255.255.255 area 0
!
ip pim autorp listener


Baseline Check: Source and Receiver Verification

A baseline check is used to determine unicast connectivity between specific elements in the network. Remember, multicast routing uses the unicast routing infrastructure. If unicast routing is not working, neither will multicast!

Before starting the multicast baseline check, verify that the source and receiver have reachability. This can be accomplished with a ping test.
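
For example, a minimal sketch of such a test from the FHR toward the receiver, using the addressing in Figure 7-1 (the source keyword also validates the return path; the output is illustrative):

R2#ping 10.1.6.2 source Ethernet0/0
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.1.6.2, timeout is 2 seconds:
Packet sent with a source address of 10.1.1.2
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/2 ms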

To perform the baseline check and gather the baseline information, use the following steps:

Step 1. Receiver check: Make sure a receiver is subscribed via IGMP and that (*,G) to the RP exists before trying to troubleshoot:

a. Check the group state on the last-hop router.

b. Check IGMP membership on the last-hop PIM-DR.

c. Verify the (*,G) state at the LHR and check the RP for the (*,G) entry and RPF.

Step 2. Source check: Make sure you have an active source before trying to troubleshoot:

a. Verify that the source is sending the multicast traffic to the first-hop router.

b. Confirm that the FHR has registered the group with the RP.

c. Determine that the RP is receiving the register messages.

d. Confirm that the multicast state is built on the FHR.

Let’s perform each of the steps above on our sample network to ensure that the FHR and LHR are operating as expected and that state for the flow is established between them. To do this, use the following steps:

Step 1. Make sure a receiver is subscribed via IGMP before trying to debug:

a. Check the group state on the last-hop router.

Verify that the last-hop router (LHR) has the appropriate (*,G) entry:

R5#show ip mroute 239.1.1.1
(*, 239.1.1.1), 00:02:35/00:02:50, RP 192.168.100.100, flags: SJCF
  Incoming interface: Ethernet2/0, RPF nbr 10.1.5.1
  Outgoing interface list:
    Ethernet0/0, Forward/Sparse, 00:02:35/00:02:50

Notice that the incoming interface (IIF) is Ethernet2/0, and the outgoing interface is Ethernet0/0. Referring to Figure 7-1, we can see that this flow has been established using the proper interfaces.

b. Check IGMP membership on the last-hop PIM-DR:

R5#show ip igmp groups 239.1.1.1
IGMP Connected Group Membership
Group Address    Interface            Uptime    Expires   Last Reporter   Group Accounted
239.1.1.1        Ethernet0/0          01:14:42  00:02:16  10.1.6.2
R5#

Ensure that the router has the RP information aligned to the scope range of the multicast group (using the show ip pim rp mapping command) and document the outgoing interface to reach the RP (using the show ip rpf RP_address command):

R5# show ip pim rp mapping
PIM Group-to-RP Mappings
Group(s) 239.1.0.0/16
  RP 192.168.100.100 (?), v2v1
    Info source: 192.168.100.100 (?), elected via Auto-RP
         Uptime: 01:13:19, expires: 00:00:20

R5#show ip rpf 192.168.100.100
RPF information for ? (192.168.100.100)
  RPF interface: Ethernet2/0
  RPF neighbor: ? (10.1.5.1)
  RPF route/mask: 192.168.100.100/32
  RPF type: unicast (ospf 1)
  Doing distance-preferred lookups across tables
  RPF topology: ipv4 multicast base, originated from ipv4 unicast base

c. Verify the (*,G) state at the LHR and check the RP for the (*,G) entry and RPF.

The objective is to verify that the receiver's interest has been registered with the RP and, consequently, that the (*,G) entry exists.

Next, confirm that R5 is connected to the receiver, using the show ip mroute command:

R5#show ip mroute 239.1.1.1
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group,
       C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       E - Extranet,
       X - Proxy Join Timer Running, A - Candidate for MSDP
           Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group,
       G - Received BGP C-Mroute, g - Sent BGP C-Mroute,
       N - Received BGP Shared-Tree Prune, n - BGP C-Mroute suppressed,
       Q - Received BGP S-A Route, q - Sent BGP S-A Route,
       V - RD & Vector, v - Vector, p - PIM Joins on route,
       x - VxLAN group
Outgoing interface flags: H - Hardware switched, A - Assert winner,
p - PIM Join
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 239.1.1.1), 1d22h/00:03:02, RP 192.168.100.100, flags: SJC
  Incoming interface: Ethernet2/0, RPF nbr 10.1.5.1
  Outgoing interface list:
    Ethernet0/0, Forward/Sparse, 1d22h/00:03:02

The incoming interface is Ethernet2/0 for the shared tree (*,G) connected to the RP (192.168.100.100), and the outgoing interface shows the receiver connection.

Verify the multicast state at the RP (R4):

R4#show ip mroute 239.1.1.1
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group,
       C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       E - Extranet,
       X - Proxy Join Timer Running, A - Candidate for MSDP
           Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group,
       G - Received BGP C-Mroute, g - Sent BGP C-Mroute,
       N - Received BGP Shared-Tree Prune, n - BGP C-Mroute suppressed,
       Q - Received BGP S-A Route, q - Sent BGP S-A Route,
       V - RD & Vector, v - Vector, p - PIM Joins on route,
       x - VxLAN group
Outgoing interface flags: H - Hardware switched, A - Assert winner,
p - PIM Join
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 239.1.1.1), 1d19h/00:02:43, RP 192.168.100.100, flags: S
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Ethernet2/0, Forward/Sparse, 1d19h/00:02:43

The output for the RP shows the outgoing interface (Ethernet2/0) connected to the LHR; this matches the steady state topology.

The next step is to confirm that the RP address (*,G) entry is installed on the LHR and to check RPF.

The show ip rpf RP command points to the next hop in the (*,G) tree. Use this command in conjunction with the show ip mroute command to verify the appropriate interface:

R5#show ip rpf 192.168.100.100
RPF information for ? (192.168.100.100)
  RPF interface: Ethernet2/0
  RPF neighbor: ? (10.1.5.1)
  RPF route/mask: 192.168.100.100/32
  RPF type: unicast (ospf 1)
  Doing distance-preferred lookups across tables
  RPF topology: ipv4 multicast base, originated from ipv4 unicast base
R5#


R5#show ip mroute 239.1.1.1
(*, 239.1.1.1), 00:02:35/00:02:50, RP 192.168.100.100, flags: SJCF
  Incoming interface: Ethernet2/0, RPF nbr 10.1.5.1
  Outgoing interface list:
    Ethernet0/0, Forward/Sparse, 00:02:35/00:02:50



(10.1.1.1, 239.1.1.1), 00:02:21/00:00:38, flags: T
  Incoming interface: Ethernet1/0, RPF nbr 10.1.3.1
  Outgoing interface list:
    Ethernet0/0, Forward/Sparse, 00:02:21/00:03:28

The matching entries (incoming interface Ethernet2/0 and RPF neighbor 10.1.5.1) indicate symmetry between the mroute state and the RPF state.

Step 2. Make sure you have an active source before trying to debug:

a. Verify multicast traffic on the incoming interface (Ethernet0/0) of the first-hop router (FHR) R2:

R2#show interface Ethernet0/0 | include multicast
     Received 1182 broadcasts (33 IP multicasts)
R2#show interface  Ethernet0/0 | include multicast
     Received 1182 broadcasts (35 IP multicasts)
R2#show interface  Ethernet0/0 | include multicast
     Received 1183 broadcasts (36 IP multicasts)
R2#show interface  Ethernet0/0 | include multicast
     Received 1184 broadcasts (37 IP multicasts)

The output shows the IP multicast packet count increasing on the interface that connects to the source.

b. Confirm that the FHR has registered the group with the RP:

Make sure the FHR is aware of the RP using the show ip pim rp mapping command:

R2#show ip pim rp mapping
PIM Group-to-RP Mappings
Group(s) 239.1.0.0/16
  RP 192.168.100.100 (?), v2v1
    Info source: 192.168.100.100 (?), elected via Auto-RP
         Uptime: 01:25:28, expires: 00:00:24
R2#

Verify that the FHR is sending packets to the RP (unicast register packets):

R2# show interface tunnel 0 | include output
  Last input never, output 00:00:19, output hang never
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
  5 minute output rate 0 bits/sec, 0 packets/sec
     15 packets output, 1920 bytes, 0 underruns
     0 output errors, 0 collisions, 0 interface resets
     0 output buffer failures, 0 output buffers swapped out
R2#clear ip mroute 239.1.1.1

R2# show interface tunnel 0 | include output
  Last input never, output 00:00:04, output hang never
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
  5 minute output rate 0 bits/sec, 0 packets/sec
     16 packets output, 2048 bytes, 0 underruns
     0 output errors, 0 collisions, 0 interface resets
     0 output buffer failures, 0 output buffers swapped out
R2#

You may be wondering where the tunnel interface came from. The Tunnel0 interface is created automatically once PIM is enabled and the RP is learned; the FHR uses it to encapsulate the unicast register packets sent to the RP.
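
On recent IOS releases, the show ip pim tunnel command displays this register tunnel. A trimmed, illustrative sketch from R2 (the field layout varies by release):

R2#show ip pim tunnel
Tunnel0
  Type  : PIM Encap
  RP    : 192.168.100.100
  Source: 10.1.2.1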

c. Determine that the RP is receiving the register messages.

Use the debug ip pim <mcast_group_address> command.

R3 shows the register message received and the Register-Stop sent. The RP sends a Register-Stop once the register is processed: either it has already joined the SPT toward the source and is receiving the traffic natively, or it has no interested receivers for the group:

*Mar 15 20:11:13.030: PIM(0): Received v2 Register on Ethernet2/0 from
10.1.2.1
*Mar 15 20:11:13.030:      for 10.1.1.1, group 239.1.1.1
*Mar 15 20:11:13.030: PIM(0): Send v2 Register-Stop to 10.1.2.1 for
10.1.1.1, group 239.1.1.1

d. Confirm that the multicast state is built on the FHR (R2):

R2#show ip mroute 239.1.1.1
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group,
       C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       E - Extranet,
       X - Proxy Join Timer Running, A - Candidate for MSDP
           Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group,
       G - Received BGP C-Mroute, g - Sent BGP C-Mroute,
       N - Received BGP Shared-Tree Prune, n - BGP C-Mroute suppressed,
       Q - Received BGP S-A Route, q - Sent BGP S-A Route,
       V - RD & Vector, v - Vector, p - PIM Joins on route,
       x - VxLAN group
Outgoing interface flags: H - Hardware switched, A - Assert winner,
p - PIM Join
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 239.1.1.1), 00:00:36/stopped, RP 192.168.100.100, flags: SPF
  Incoming interface: Ethernet2/0, RPF nbr 10.1.2.2
  Outgoing interface list: Null

(10.1.1.1, 239.1.1.1), 00:00:36/00:02:23, flags: FT
  Incoming interface: Ethernet0/0, RPF nbr 10.1.1.1
  Outgoing interface list:
    Ethernet1/0, Forward/Sparse, 00:00:17/00:03:12

The multicast state indicates that the (S,G) entry is built with the F and T flags. The incoming interface is connected to the source, and the outgoing interface shows that packet forwarding takes the best path between the source and receiver, based on the unicast routing protocol.

The results shown in this section represent steady state. It is important to understand these commands and the steady-state condition; during troubleshooting, compare the output you collect against this steady-state baseline to assess the problem.

State Verification

State verification has two primary components: the RP control-plane check and hop-by-hop state validation.

RP Control-Plane Check

Figure 7-2 provides an example of state behavior in which the SPT uses the path Source -> R2 -> R3 -> R4 -> R5 -> Receiver. In this example, the interface between R2 and R5 is shut down, forcing R2 to take the path through R3 and R4. This verifies the switchover of the multicast flow from the (*,G) to the (S,G) tree, following the best path selected by the unicast routing protocol.

Figure 7-2 Multicast State Flow via the RP

This would essentially be step 3 in the baselining process, as shown here:

Step 3. Verify RP and SPT state entries across the path:

a. Check the MSDP summary to verify peering is operational.

b. Verify the group state at each active RP.

c. Verify SPT changes.

This configuration, covered in Chapter 4, uses the hybrid RP design and Auto-RP for downstream propagation of the RP.

Step 3a. Check the MSDP summary to verify peering is operational:

R3# show ip msdp summary
MSDP Peer Status Summary
Peer Address     AS    State    Uptime/  Reset SA    Peer Name
                                Downtime Count Count
*192.168.4.4     ?     Up       00:02:16 1     0     ?

The show ip msdp summary command verifies that the MSDP relationship is operational between the RPs. This relationship is not impacted by the multicast data flow state in the mroute table.
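
For more detail on the peering, such as the connection source and uptime, the show ip msdp peer command can be used. A trimmed, illustrative sketch from R3:

R3#show ip msdp peer 192.168.4.4
MSDP Peer 192.168.4.4 (?), AS ?
  Connection status:
    State: Up, Resets: 1, Connection source: Loopback0 (192.168.3.3)
    Uptime(Downtime): 00:02:16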

Step 3b. Verify the group state at each active RP (R3 and R4):

R3#show ip mroute 239.1.1.1
(*, 239.1.1.1), 00:08:39/stopped, RP 192.168.100.100, flags: SP
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list: Null
(10.1.1.1, 239.1.1.1), 00:00:16/00:02:43, flags: TA
  Incoming interface: Ethernet2/0, RPF nbr 10.1.2.1
  Outgoing interface list:
    Ethernet3/0, Forward/Sparse, 00:00:16/00:03:13

R4# show ip mroute 239.1.1.1
(*, 239.1.1.1), 00:08:45/00:03:02, RP 192.168.100.100, flags: S
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Ethernet2/0, Forward/Sparse, 00:01:08/00:02:38

(10.1.1.1, 239.1.1.1), 00:00:27/00:02:32, flags: MT
  Incoming interface: Ethernet3/0, RPF nbr 10.1.4.1
  Outgoing interface list:
    Ethernet2/0, Forward/Sparse, 00:00:27/00:03:02

The RP (R4) is closest to the receiver and consequently is the RP that receives the join request from the LHR (R5). Comparing the preceding output, notice that the outgoing interface list for the (*,G) entry is Null on R3, whereas R4 uses Ethernet2/0; this shows that R4 is the active RP for the receiver. The "T" flag on both R3 and R4 indicates that the SPT has been established. The "A" flag on R3 indicates that R3 is advertising the (S,G) entry to its MSDP peer (candidate for MSDP advertisement), and the "M" flag on R4 indicates that its entry was created from an MSDP SA message.

Step 3c. Verify SPT changes.

Review the multicast state at the RP after the connection between R2 and R5 is restored; refer to Figure 7-3.

Figure 7-3 Multicast Steady State Topology

With the R2-to-R5 connection restored, the SPT flow no longer traverses the R2, R3, R4, R5 path but rather the direct R2, R5 path. Notice the P flag set in the (S,G) entry; this shows that the multicast stream has been pruned at the RP. The (*,G) state will still have receiver information, with Ethernet2/0 as the outgoing interface:

R4#show ip mroute 239.1.1.1
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group,
       C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       E - Extranet,
       X - Proxy Join Timer Running, A - Candidate for MSDP
           Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group,
       G - Received BGP C-Mroute, g - Sent BGP C-Mroute,
       N - Received BGP Shared-Tree Prune, n - BGP C-Mroute suppressed,
       Q - Received BGP S-A Route, q - Sent BGP S-A Route,
       V - RD & Vector, v - Vector, p - PIM Joins on route,
       x - VxLAN group
Outgoing interface flags: H - Hardware switched, A - Assert winner,
p - PIM Join
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 239.1.1.1), 1d21h/00:03:08, RP 192.168.100.100, flags: S
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Ethernet2/0, Forward/Sparse, 1d21h/00:03:08

(10.1.1.1, 239.1.1.1), 00:07:57/00:02:19, flags: PMT
  Incoming interface: Ethernet2/0, RPF nbr 10.1.5.2
  Outgoing interface list: Null

You should be very familiar with multicast flows traversing through the RP and those that do not. You will undoubtedly encounter both situations as you implement and troubleshoot multicast networks.

It would be terribly inefficient for multicast messages to always traverse the RP. We saw that multicast traffic moved to the SPT in Step 3c. How does this happen? Let’s review the SPT process.

R5 is the LHR and also the location where there is a better (lower-cost) unicast route to the source.

Use the show ip mroute command to monitor the incoming interfaces as the flow shifts from the (*,G) to the (S,G) tree, as demonstrated in Example 7-3.

Example 7-3 show ip mroute Command Output


R5#show ip mroute 239.1.1.1
..
(*, 239.1.1.1), 00:02:31/00:02:55, RP 192.168.100.100, flags: SJC
  Incoming interface: Ethernet2/0, RPF nbr 10.1.5.1
  Outgoing interface list:
    Ethernet0/0, Forward/Sparse, 00:02:31/00:02:55

(10.1.1.1, 239.1.1.1), 00:02:31/00:00:28, flags: T
  Incoming interface: Ethernet1/0, RPF nbr 10.1.3.1
  Outgoing interface list:
    Ethernet0/0, Forward/Sparse, 00:02:31/00:03:14


In this example, R5 shows that Ethernet1/0 is selected as the RPF interface toward the source address 10.1.1.1. This is confirmed using the show ip rpf command, as demonstrated in Example 7-4.

Example 7-4 show ip rpf Command Output


R5#show ip rpf 10.1.1.1
RPF information for ? (10.1.1.1)
  RPF interface: Ethernet1/0
  RPF neighbor: ? (10.1.3.1)
  RPF route/mask: 10.1.1.0/24
  RPF type: unicast (ospf 1)
  Doing distance-preferred lookups across tables
  RPF topology: ipv4 multicast base, originated from ipv4 unicast base
R5#


We can also see additional details using the debug ip pim command, as shown in the output in Example 7-5. Pay particular attention to the join and prune messages sent toward neighbors 10.1.3.1 and 10.1.5.1.

Example 7-5 debug ip pim Output


!  Start the debug and wait for the state to change
R5#
R5#debug ip pim  239.1.1.1
PIM debugging is on
R5#
*Mar 26 18:44:59.351: %OSPF-5-ADJCHG: Process 1, Nbr 192.168.2.2 on Ethernet1/0 from
  LOADING to FULL, Loading Done
R5#
*Mar 26 18:45:08.556: PIM(0): Received v2 Join/Prune on Ethernet0/0 from 10.1.6.2,
  to us
*Mar 26 18:45:08.557: PIM(0): Join-list: (10.1.1.1/32, 239.1.1.1), S-bit set
*Mar 26 18:45:08.557: PIM(0): Update Ethernet0/0/10.1.6.2 to (10.1.1.1, 239.1.1.1),
  Forward state, by PIM SG Join
R5#
*Mar 26 18:45:10.261: PIM(0): Insert (10.1.1.1,239.1.1.1) join in nbr 10.1.3.1's queue
*Mar 26 18:45:10.261: PIM(0): Insert (10.1.1.1,239.1.1.1) prune in nbr 10.1.5.1's
  queue
*Mar 26 18:45:10.261: PIM(0): Building Join/Prune packet for nbr 10.1.5.1
*Mar 26 18:45:10.261: PIM(0):  Adding v2 (10.1.1.1/32, 239.1.1.1), S-bit Prune
*Mar 26 18:45:10.261: PIM(0): Send v2 join/prune to 10.1.5.1 (Ethernet2/0)
*Mar 26 18:45:10.261: PIM(0): Building Join/Prune packet for nbr 10.1.3.1
*Mar 26 18:45:10.261: PIM(0):  Adding v2 (10.1.1.1/32, 239.1.1.1), S-bit Join
*Mar 26 18:45:10.261: PIM(0): Send v2 join/prune to 10.1.3.1 (Ethernet1/0)
R5#
*Mar 26 18:45:10.324: PIM(0): Building Periodic (*,G) Join / (S,G,RP-bit) Prune mes-
  sage for 239.1.1.1
*Mar 26 18:45:10.324: PIM(0): Insert (*,239.1.1.1) join in nbr 10.1.5.1's queue
*Mar 26 18:45:10.324: PIM(0): Insert (10.1.1.1,239.1.1.1) sgr prune in nbr
  10.1.5.1's queue
*Mar 26 18:45:10.324: PIM(0): Building Join/Prune packet for nbr 10.1.5.1
*Mar 26 18:45:10.324: PIM(0):  Adding v2 (192.168.100.100/32, 239.1.1.1), WC-bit,
  RPT-bit, S-bit Join
*Mar 26 18:45:10.324: PIM(0):  Adding v2 (10.1.1.1/32, 239.1.1.1), RPT-bit, S-bit
  Prune
*Mar 26 18:45:10.324: PIM(0): Send v2 join/prune to 10.1.5.1 (Ethernet2/0)
R5#
*Mar 26 18:45:24.124: PIM(0): Insert (10.1.1.1,239.1.1.1) join in nbr 10.1.3.1's
  queue
*Mar 26 18:45:24.125: PIM(0): Building Join/Prune packet for nbr 10.1.3.1
*Mar 26 18:45:24.125: PIM(0):  Adding v2 (10.1.1.1/32, 239.1.1.1), S-bit Join
*Mar 26 18:45:24.125: PIM(0): Send v2 join/prune to 10.1.3.1 (Ethernet1/0)
*Mar 26 18:45:24.256: PIM(0): Received v2 Join/Prune on Ethernet0/0 from 10.1.6.2,
  to us
*Mar 26 18:45:24.256: PIM(0): Join-list: (10.1.1.1/32, 239.1.1.1), S-bit set
*Mar 26 18:45:24.256: PIM(0): Update Ethernet0/0/10.1.6.2 to (10.1.1.1, 239.1.1.1),
  Forward state, by PIM SG Join
R5#sh ip mroute 239.1.1.1
*Mar 26 18:45:41.349: PIM(0): Received v2 Join/Prune on Ethernet0/0 from 10.1.6.2,
  to us
*Mar 26 18:45:41.349: PIM(0): Join-list: (*, 239.1.1.1), RPT-bit set, WC-bit set,
  S-bit set
*Mar 26 18:45:41.349: PIM(0): Update Ethernet0/0/10.1.6.2 to (*, 239.1.1.1), Forward
  state, by PIM *G Join
*Mar 26 18:45:41.349: PIM(0): Update Ethernet0/0/10.1.6.2 to (10.1.1.1, 239.1.1.1),
  Forward state, by PIM *G Join


From the previous debug output, note the change of the (S,G) join from neighbor 10.1.5.1 to 10.1.3.1, and note the "S-bit Join" message that indicates the transition to the SPT. During the triggered (*,G) state, the DR creates a join/prune message with the WC-bit and RPT-bit set to 1. The WC-bit set to 1 indicates that any source may match this entry and that the flow will follow the shared tree. The RPT-bit set to 1 indicates that the join is associated with the shared tree and that the join/prune message is sent along the shared tree toward the RP.
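
As an aside, this switchover behavior is tunable on the LHR. If you need a group to remain on the shared tree (occasionally useful to simplify state while troubleshooting), a minimal sketch follows; access list 2 and its contents are illustrative assumptions:

R5(config)#access-list 2 permit 239.1.1.1
R5(config)#ip pim spt-threshold infinity group-list 2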

Hop-by-Hop State Validation

Up to this point in the troubleshooting process, we have covered the fundamentals: source and receiver verification and RP control-plane confirmation. If the multicast problem is still not solved, troubleshooting proceeds hop by hop from the LHR to the FHR. To accomplish this correctly, you must have a thorough understanding of the entire topology.


Note

When you are under pressure to solve a problem, this is not the time to begin documenting the network. There should already be a comprehensive network diagram available.


The network diagram in Figure 7-4 aids in identifying the path between the source and the receiver. This begins the next phase of the troubleshooting process—understanding the flow.

Figure 7-4 Multicast (S,G) Path Flow

Figure 7-4 is a simple reference topology. Whether you are troubleshooting something very simple or very complex, the process of assessing the health of the multicast network is the same.

We begin by establishing the appropriate (S,G) path. Because this is a network we are familiar with from previous examples, we know that the (S,G) path uses R2 and R5. We begin our troubleshooting effort at the LHR and then work towards the FHR.

High-level steps:

Step 4. Verify the mroute state information for the following elements:

a. Is the incoming interface (IIF) correct?

b. Is the outgoing interface (OIF) correct?

c. Are the flags for the (*,G) and (S,G) entries correct?

d. Is the RP information correct?

If there are anomalies in the previous steps, verify the RPF information for each entry with the show ip rpf ip-address command and move hop by hop up the shortest path toward the source (this reverse-path walk can also be automated, as sketched after these questions). The questions to ask yourself based on the output are:

■ Does this align with the information in the mroute entry?

■ Is this what you would expect when looking at the unicast routing table?
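
Where supported, the mtrace command automates this walk, querying each router from the receiver back toward the source. A hedged, illustrative sketch using the Figure 7-4 addressing:

R5#mtrace 10.1.1.1 10.1.6.2 239.1.1.1
Type escape sequence to abort.
Mtrace from 10.1.1.1 to 10.1.6.2 via group 239.1.1.1
From source (?) to destination (?)
Querying full reverse path...
 0  10.1.6.2
-1  10.1.6.1 PIM  [10.1.1.0/24]
-2  10.1.3.1 PIM  [10.1.1.0/24]
-3  10.1.1.2 PIM  [10.1.1.0/24]
-4  10.1.1.1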

Now let’s review each element shown in Step 4. R5 is the LHR where we begin the process. Using the show ip mroute command, we can verify that the state information for 239.1.1.1 is maintained correctly by examining the elements, as shown in Example 7-6:

Example 7-6 Multicast State at R5 (LHR)


R5#show ip mroute 239.1.1.1
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry, E - Extranet,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group,
       G - Received BGP C-Mroute, g - Sent BGP C-Mroute,
       N - Received BGP Shared-Tree Prune, n - BGP C-Mroute suppressed,
       Q - Received BGP S-A Route, q - Sent BGP S-A Route,
       V - RD & Vector, v - Vector, p - PIM Joins on route,
       x - VxLAN group
Outgoing interface flags: H - Hardware switched, A - Assert winner, p - PIM Join
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 239.1.1.1), 2d01h/00:03:22, RP 192.168.100.100, flags: SJC
  Incoming interface: Ethernet2/0, RPF nbr 10.1.5.1
  Outgoing interface list:
    Ethernet0/0, Forward/Sparse, 2d01h/00:03:22

(10.1.1.1, 239.1.1.1), 00:04:53/00:02:31, flags: T
  Incoming interface: Ethernet1/0, RPF nbr 10.1.3.1
  Outgoing interface list:
    Ethernet0/0, Forward/Sparse, 00:04:53/00:03:22


From the output in Example 7-6, we can answer the following questions and verify the appropriate operation:

Step 4a. Is the incoming interface (IIF) correct?

■ (*,G): Ethernet2/0 points to the RP.

■ (S,G): Ethernet1/0, based on RPF to the source.

Step 4b. Is the outgoing interface (OIF) correct?

■ (*,G): Ethernet0/0 points toward the receiver.

■ (S,G): Ethernet0/0 points toward the receiver.

Step 4c. Are the flags for the (*,G) and (S,G) entries correct?

■ (*,G) state: SJC.

■ (S,G) state: T.

Step 4d. Is the RP information correct?

■ 192.168.100.100 (the Anycast RP address, learned via Auto-RP from R3 and R4).

Taking a systematic approach to determining the root cause, we continue examining each device in the path toward the FHR. For brevity, we jump straight to the FHR and use the show ip mroute command to inspect the multicast state, as demonstrated in Example 7-7.

Example 7-7 Multicast State at R2 (FHR)


R2#show ip mroute  239.1.1.1
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry, E - Extranet,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group,
       G - Received BGP C-Mroute, g - Sent BGP C-Mroute,
       N - Received BGP Shared-Tree Prune, n - BGP C-Mroute suppressed,
       Q - Received BGP S-A Route, q - Sent BGP S-A Route,
       V - RD & Vector, v - Vector, p - PIM Joins on route,
       x - VxLAN group
Outgoing interface flags: H - Hardware switched, A - Assert winner, p - PIM Join
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 239.1.1.1), 00:08:48/stopped, RP 192.168.100.100, flags: SPF
  Incoming interface: Ethernet2/0, RPF nbr 10.1.2.2
  Outgoing interface list: Null

(10.1.1.1, 239.1.1.1), 00:08:48/00:02:41, flags: FT
  Incoming interface: Ethernet0/0, RPF nbr 10.1.1.1
  Outgoing interface list:
    Ethernet1/0, Forward/Sparse, 00:08:48/00:02:36


The “T” flag indicates that multicast traffic is flowing using the (S, G) state entry.

We can use the output of Example 7-7 to validate the correct operation and answer the following troubleshooting questions.

Step 4a. Is the incoming interface (IIF) correct?

■ (*,G): Ethernet2/0 points to the RP.

■ (S,G): Ethernet0/0, based on RPF to the source.

Step 4b. Is the outgoing interface (OIF) correct?

■ (*,G): Null; no receiver state on the shared tree.

■ (S,G): Ethernet1/0 points toward the receiver.

Step 4c. Are the flags for the (*,G) and (S,G) entries correct?

■ (*,G) state: SPF.

■ (S,G) state: FT.

Step 4d. Is the RP information correct?

■ 192.168.100.100.

The (*,G) is in a pruned state after registration. This is a result of the switch from the shared tree to the source tree, following the unicast best path between the source and receiver. The (S,G) entry shows the "FT" flags: the packets flow via the shortest-path tree, and this router performed the registration as the first-hop router.

The fundamental troubleshooting considerations previously covered are applicable for IPv4 and IPv6 multicast environments. In the following section, we review some of the common tools used in IOS for multicast troubleshooting.

Overview of Common Tools for Multicast Troubleshooting

During troubleshooting, we may need to generate synthetic traffic using a traditional tool such as ping, or we may need to track down intermittent problems or network performance issues, which calls for a more sophisticated tool such as IP service level agreements (SLAs). We may also need to move beyond the traditional show commands to gain additional insight into the operation, or lack of correct operation, of the network.

Ping Test

The multicast ping test is a traditional troubleshooting tool used by network engineers to verify the control and data planes. The test does not rule out application-centric problems, but it can be used to verify the network infrastructure.

As shown in Figure 7-5, we begin by configuring a join group on a router interface.

Figure 7-5 Multicast Ping Test Procedure

Step 1. Add an IGMP Join group to simulate the receiver at the LHR:

R3#show run interface  Ethernet0/0
Building configuration...

Current configuration : 114 bytes
!
interface Ethernet0/0
 ip address 10.1.6.2 255.255.255.0
 ip pim sparse-mode
 ip igmp join-group 239.1.1.1
end

Step 2. Use multicast ping to send packets from the FHR:

R1#ping 239.1.1.1
Type escape sequence to abort.
Sending 1, 100-byte ICMP Echos to 239.1.1.1, timeout is 2 seconds:

*Mar 23 21:27:15.209: ICMP: echo reply rcvd, src 10.1.6.2, dst
10.1.1.1, topology BASE, dscp 0 topoid 0
Reply to request 0 from 10.1.6.2, 34 ms
R1#

You can now verify the multicast state in the mroute table. This is a very simple method of troubleshooting a multicast network, and it also serves to verify multicast functionality before implementing a multicast service.
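
To confirm that the test stream is actually being forwarded, the mroute packet counters can be checked on any router in the path. A trimmed, illustrative sketch (the counter layout varies by release):

R3#show ip mroute 239.1.1.1 count
IP Multicast Statistics
..
Group: 239.1.1.1, Source count: 1, Packets forwarded: 5, Packets received: 5
  Source: 10.1.1.1/32, Forwarding: 5/0/100/0, Other: 5/0/0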


Note

Use caution when using the ip igmp join-group command because it causes the device to process-switch all packets sent to the configured multicast address, which can lead to high CPU utilization. This command should be removed immediately after troubleshooting, and it should not be used to test live multicast application traffic.
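
If the goal is only to attract the stream to a point in the network, for example to take a downstream capture, the ip igmp static-group command is a gentler alternative: the router forwards the group onto the interface without punting the packets to the CPU and without answering pings. A minimal sketch:

interface Ethernet0/0
 ip igmp static-group 239.1.1.1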


SLA Test

The IP service level agreement (SLA) feature provides the ability to analyze applications and services through the generation of synthetic traffic. With the IP SLA feature, you can measure one-way latency, jitter, and packet loss.

This is accomplished using two elements: a sender, which generates UDP packets at a given interval to one or more destinations, and one or more receivers, which collect and analyze the synthetic traffic and send responses back to the sender.

At the time of this writing, the following code releases support multicast SLA:

■ 15.2(4)M

■ 15.3(1)S

■ Cisco IOS XE Release 3.8S

■ 15.1(2)SG

■ Cisco IOS XE Release 3.4SG

Example 7-8 and Example 7-9 show the sender and receiver configurations.

Example 7-8 Sender Configuration


ip sla endpoint-list type ip RCVRS
 ip-address 10.1.6.2 port 5000
ip sla 10
 udp-jitter 239.1.1.100 5000 endpoint-list RCVRS source-ip 10.1.1.1 source-port 5000
   num-packets 50 interval 25
 request-data-size 128
 tos 160
 verify-data
 tree-init 1
 control timeout 4
 control retry 2
 dscp 10
 threshold 1000
 timeout 10000
 frequency 10
 history hours-of-statistics-kept 4
 history distributions-of-statistics-kept 5
 history statistics-distribution-interval 10
 history enhanced interval 60 buckets 100
ip sla schedule 10 life forever start-time now


Example 7-9 Receiver Configuration


ip sla responder
ip sla responder udp-echo ipaddress 10.1.1.1 port 5000


Use the show commands in Example 7-10 to verify the condition of the multicast probes.

Example 7-10 Verifying Multicast Probe Condition


R1#show ip sla configuration
IP SLAs Infrastructure Engine-III
Entry number: 10
Owner:
Tag:
Operation timeout (milliseconds): 10000
Type of operation to perform: mcast
Endpoint list: RCVRS
Target address/Source address: 239.1.1.100/10.1.1.1
Target port/Source port: 5000/5001
Type Of Service parameter: 0xA0
Request size (ARR data portion): 128
Packet Interval (milliseconds)/Number of packets: 25/50
Verify data: Yes
Vrf Name:
DSCP: 10
Number of packets to set multicast tree: 1
SSM: disabled
Control message timeout(seconds): 4
Control message retry count: 2
Schedule:
   Operation frequency (seconds): 10  (not considered if randomly scheduled)
   Next Scheduled Start Time: Start Time already passed
   Group Scheduled : FALSE
   Randomly Scheduled : FALSE
   Life (seconds): Forever
   Entry Ageout (seconds): never
   Recurring (Starting Everyday): FALSE
   Status of entry (SNMP RowStatus): Active
Threshold (milliseconds): 1000
Distribution Statistics:
   Number of statistic hours kept: 4
   Number of statistic distribution buckets kept: 5
   Statistic distribution interval (milliseconds): 10
Enhanced History:
   Aggregation Interval:60 Buckets:100

Entry number: 2132304746
Type of operation: mcast
Multicast operation id: 10
Target address/Source address: 10.1.6.2/10.1.1.1
Target port/Source port: 5000/5001
Multicast address: 239.1.1.100

R6#show ip sla endpoint-list type ip
Endpoint-list Name: RCVRS
    Description:
    List of IPV4 Endpoints:
    ip-address 10.1.6.2 port 5000
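
After the operation has run for a few cycles, the results can be reviewed on the sender with the show ip sla statistics command. A trimmed, illustrative sketch for operation 10 (the fields vary by release):

R1#show ip sla statistics 10
IPSLAs Latest Operation Statistics

IPSLA operation id: 10
        Latest operation return code: OK
Number of successes: 12
Number of failures: 0
Operation time to live: Forever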


Common Multicast Debug Commands

It is advisable to enable debug commands during a maintenance window, ideally with Cisco TAC assistance. The following debug commands provide additional insight during troubleshooting.

debug ip mpacket Command

The debug ip mpacket command provides packet details for the multicast packets traversing the Layer 3 device, as shown in Example 7-11.

Example 7-11 debug ip mpacket Output


*Sep 04 14:48:01.651: IP: s=10.1.1.1 (Ethernet1/0) d=239.1.1.1 (Serial0/0) ld
*Sep 04 14:48:02.651: IP: s=10.1.1.1 (Ethernet1/0) d=239.1.1.1 (Serial0/0) ld
*Sep 04 14:48:03.651: IP: s=10.1.1.1 (Ethernet1/0) d=239.1.1.1 (Serial0/0) ld
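
The debug can be scoped to a single group by appending the group address; note that on many platforms, multicast packets appear in this debug only when they are process-switched. A hedged sketch (the confirmation message is illustrative):

R2#debug ip mpacket 239.1.1.1
IP multicast packets debugging is on for group 239.1.1.1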


debug ip pim Command

The debug ip pim command shows all PIM packets flowing through the device and is useful for understanding PIM state.

The debug output is taken from an RP configured in hybrid ASM mode with Auto-RP. The messages show a source register for group 239.1.1.1 and a join for group 239.1.1.2, which has a receiver but no active source. The output also shows the router announcing 192.168.100.100 as a candidate RP. Example 7-12 shows the output of debug ip pim.

Example 7-12 debug ip pim Output


R3#show logging

*Mar 17 21:41:37.637: PIM(0): check pim_rp_announce 1
*Mar 17 21:41:37.637: PIM(0): send rp announce
*Mar 17 21:41:44.657: PIM(0): Received v2 Join/Prune on Ethernet2/0 from 10.1.2.1,
  to us
*Mar 17 21:41:44.658: PIM(0): Join-list: (*, 239.1.1.2), RPT-bit set, WC-bit set,
  S-bit set
*Mar 17 21:41:44.658: PIM(0): Check RP 192.168.100.100 into the (*, 239.1.1.2) entry
*Mar 17 21:41:44.658: PIM(0): Adding register decap tunnel (Tunnel1) as accepting
  interface of (*, 239.1.1.2).
*Mar 17 21:41:44.658: PIM(0): Add Ethernet2/0/10.1.2.1 to (*, 239.1.1.2), Forward
  state, by PIM *G Join
*Mar 17 21:41:45.658: PIM(0): Received v2 Register on Ethernet2/0 from 10.1.2.1
*Mar 17 21:41:45.658:      for 10.1.1.1, group 239.1.1.1
*Mar 17 21:41:45.658: PIM(0): Check RP 192.168.100.100 into the (*, 239.1.1.1) entry
*Mar 17 21:41:45.658: PIM(0): Adding register decap tunnel (Tunnel1) as accepting
  interface of (*, 239.1.1.1).
*Mar 17 21:41:45.658: PIM(0): Adding register decap tunnel (Tunnel1) as accepting
  interface of (10.1.1.1, 239.1.1.1).
*Mar 17 21:41:45.659: PIM(0): Send v2 Register-Stop to 10.1.2.1 for 10.1.1.1, group
  239.1.1.1
*Mar 17 21:41:47.633: PIM(0): check pim_rp_announce 1
*Mar 17 21:41:47.633: PIM(0): send rp announce
*Mar 17 21:41:47.634: PIM(0): Received v2 Assert on Ethernet3/0 from 10.1.4.2
*Mar 17 21:41:47.634: PIM(0): Assert metric to source 192.168.100.100 is [0/0]
*Mar 17 21:41:47.634: PIM(0): We lose, our metric [0/0]
*Mar 17 21:41:47.634: PIM(0): Insert (192.168.100.100,224.0.1.39) prune in nbr
  10.1.4.2's queue
*Mar 17 21:41:47.634: PIM(0): Send (192.168.100.100, 224.0.1.39) PIM-DM prune to oif
  Ethernet3/0 in Prune state
*Mar 17 21:41:47.634: PIM(0): (192.168.100.100/32, 224.0.1.39) oif Ethernet3/0 in Prune state
*Mar 17 21:41:47.634: PIM(0): Building Join/Prune packet for nbr 10.1.4.2
*Mar 17 21:41:47.634: PIM(0):  Adding v2 (192.168.100.100/32, 224.0.1.39) Prune
*Mar 17 21:41:47.634: PIM(0): Send v2 join/prune to 10.1.4.2 (Ethernet3/0)
*Mar 17 21:41:51.022: PIM(0): Received v2 Assert on Ethernet3/0 from 10.1.4.2
*Mar 17 21:41:51.022: PIM(0): Assert metric to source 192.168.100.100 is [0/0]


debug ip igmp Command

The output of the debug ip igmp command at the LHR (R5) provides details regarding the IGMP packets. As shown in Example 7-13, the router receives an IGMP membership report for 239.1.1.1 and then updates the (*,G) entry to add the receiver state.

Example 7-13 debug ip igmp Output


*Mar 17 21:50:45.703: IGMP(0): Received v2 Report on Ethernet0/0 from 10.1.6.2 for
  239.1.1.1
*Mar 17 21:50:45.703: IGMP(0): Received Group record for group 239.1.1.1, mode 2
  from 10.1.6.2 for 0 sources
*Mar 17 21:50:45.703: IGMP(0): Updating EXCLUDE group timer for 239.1.1.1
*Mar 17 21:50:45.703: IGMP(0): MRT Add/Update Ethernet0/0 for (*,239.1.1.1) by 0
*Mar 17 21:50:45.704: IGMP(0): Received v2 Report on Ethernet0/0 from 10.1.6.2 for
  239.1.1.1
*Mar 17 21:50:45.704: IGMP(0): Received Group record for group 239.1.1.1, mode 2
  from 10.1.6.2 for 0 sources
*Mar 17 21:50:45.704: IGMP(0): Updating EXCLUDE group timer for 239.1.1.1
*Mar 17 21:50:45.704: IGMP(0): MRT Add/Update Ethernet0/0 for (*,239.1.1.1) by 0


Multicast Troubleshooting

Figure 7-6 shows the three basic blocks you need to understand to support multicast troubleshooting efforts.

Figure 7-6 Multicast Troubleshooting Blocks

Before you begin troubleshooting, you need to understand each domain of your multicast environment and then tie the appropriate troubleshooting steps to the respective domain.

I. Application Scope Review (the scope of the application that is not working):

■ Are the receivers local to the source?

■ Are the receivers across the WAN?

■ Verify the bandwidth of the multicast group for the application and validate latency-specific parameters.

■ Verify the addressing scheme used in the enterprise.

II. Unicast Domain Scope Review:

■ What is the unicast routing protocol design?

■ What is the path between the source and destination for the multicast flow?

■ Which WAN technologies are in use, such as IPsec or MPLS VPN? Are they provided by a service provider or self-managed?

■ What is the QoS design, and does it align with the class configured for that multicast service?

III. Multicast Domain:

■ What is the PIM configuration (ASM, SSM, or BiDir)?

■ Verify the number of multicast states in the network.

■ Verify that the best practices for PIM control-plane configuration, explained in Chapter 5, have been followed.

■ Verify the configuration of the downstream routers.

These are some high-level steps used to understand the multicast environment. Earlier in the chapter we reviewed a thorough step-by-step approach to identifying a multicast issue. Let's review a case study to understand the troubleshooting process.

Multicast Troubleshooting Case Study

Problem: Multicast is not working for a few multicast groups in the network.

Troubleshooting:

I. Application Scope Review: Items to consider while you review the application:

■ The scope of the application:

1.1 Are any multicast applications in the network currently functioning?

Answer: Multicast groups in other regions are working.

1.2 Are all the nonworking groups part of the same multicast RP scope?

Answer: Yes.

1.3 Are there any working groups within the scope range?

Answer: No.

1.4 Was it working before?

Answer: Yes.

1.5 What changes were made in the environment?

Answer: I don’t know.

■ The bandwidth of the stream for the single multicast group.

Comment: At this point, with a nonworking group, bandwidth does not play a significant role.

■ What are the latency-specific parameters for multicast stream distribution?

Comment: With a nonfunctional group, latency information does not play an important part.

■ What type of multicast address scheme is used by the source?

Answer: Scoped addressing, using the 239.x.x.x range.

II. Unicast Domain Scope Review: Things to consider in this domain:

■ Unicast routing protocol architecture.

■ What is the underlying unicast protocol?

Answer: OSPF and BGP.

■ Are there any unicast routing reachability issues?

Answer: No.

■ What is the path between the source and destination? Identify the source and destination (single out one multicast group).

Answer: Draw the path between the source, RP, and receiver. It is beneficial to understand this path while troubleshooting the multicast control plane.

■ Is there a WAN technology overlay, such as IPsec or MPLS VPN?

Answer: Dedicated point-to-point connections, with no IPsec or MPLS VPNs.


Note

IPsec does not natively support multicast transport. An overlay technology, such as GRE, DMVPN, or GET VPN, should be considered to transport multicast with IPsec encryption.
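
As a hedged illustration only (the interface number and addresses are hypothetical), a GRE tunnel simply needs PIM enabled on it to carry multicast across an IPsec-protected WAN:

interface Tunnel100
 ip address 172.16.0.1 255.255.255.252
 ip pim sparse-mode
 tunnel source Ethernet0/0
 tunnel destination 198.51.100.2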


■ Does the QoS architecture and class align with the multicast transport?

Comment: When there is no latency or sporadic user-experience issue, a review of QoS is generally not warranted.

III. PIM Domain: Things to consider in this domain:

■ What is the PIM configuration (ASM, SSM, or BiDir)?

Answer: ASM.

■ What is the multicast state in the network?

Answer: Not all the best practices are deployed; only ip pim register-limit, boundary, and scoping of the hybrid RP with Auto-RP have been deployed.

■ Are the best practices for the PIM control plane deployed?

Answer: Only ip pim register-limit is configured.

■ Identify the RP location in the network for the application group (ASM).

Answer: The RP is in the data path, and the RP also functions as the campus core.

■ What is the configuration of the downstream router?

Answer: Only ip pim register-limit has been deployed.

Figure 7-7 will be used to explain the troubleshooting methodology.

Figure 7-7 Case Study: Multicast Troubleshooting Topology

Baseline Check: Source and Receiver Verification

Before you start the multicast baseline check, verify that the source and receiver have ping reachability. Example 7-14 shows a unicast ping test from the receiver to the source IP (10.1.1.1).

Example 7-14 Verification of the Source Reachability from the Receiver


Receiver#ping 10.1.1.1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.1.1.1, timeout is 2 seconds:
!!!!!


Step 1: Receiver Check

Table 7-1 outlines the receiver verification steps.

Table 7-1 Receiver Verification Steps

Step 2: Source Check

Table 7-2 outlines the source verification steps before starting the multicast baseline check.

Table 7-2 Source Verification Steps

Make sure you have an active source before trying to troubleshoot.

The outgoing interface list points to Null in Step 2c. In the initial state, the outgoing interface should point toward the RP. This is a problem in the state buildup that needs to be reviewed further. Rather than reviewing Step 2d (debug messages to verify state buildup at the FHR), let's move to the next step, state verification.

Step 3: State Verification

Table 7-3 outlines the baseline RP and control-plane verification steps.

Table 7-3 RP and Control-Plane Check

As you observed earlier, the RP control-plane check in Step 3 showed inconsistencies. There is no point in proceeding to Step 4, hop-by-hop state validation, until those inconsistencies are resolved; this suggests a control-plane problem. To further rule out application-based issues, perform a multicast ping from the FHR to the LHR in the topology, as outlined in Figure 7-8. The multicast ping simulates the application by becoming a new multicast source, illustrating how state is built in Steps 1, 2, and 3 with a network-generated source.

Figure 7-8 Multicast Ping Test Procedure

To create a receiver, configure an IGMP join group at R5 (the LHR, closest to the receiver) using the ip igmp join-group command. Use a multicast group that does not carry production traffic and that falls within the multicast RP scope range. Example 7-15 shows the configuration.

Example 7-15 Create IGMP Join Group


R5#show run interface loopback 0
Building configuration...

Current configuration : 118 bytes
!
interface Loopback0
 ip address 192.168.5.5 255.255.255.255
 ip pim sparse-mode
 ip igmp join-group 239.1.1.10
end

R5#


Verify connectivity to the unicast and multicast control plane from R2 (closest to the source), as demonstrated in Example 7-16.

Example 7-16 Multicast and Unicast Connectivity Check


R2#ping 192.168.5.5
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 192.168.5.5, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/1 ms

R2#ping 239.1.1.10
Type escape sequence to abort.
Sending 1, 100-byte ICMP Echos to 239.1.1.10, timeout is 2 seconds:
.
R2#


The unicast ping test from R2 succeeds, but the multicast ping does not. This test is done merely to rule out application anomalies; its failure confirms a control-plane issue in the network.

Summary of the failed tests:

■ Steps 2b and 2c: FHR registration to the RP failed, and the RP state flags for multicast were incorrect.

■ Step 3c: Failure of the RP state exchanges; the state at R3 did not provide correct results because the (S,G) group was in a pruned state.

Figure 7-9 shows the fundamental overview of the three troubleshooting steps:

Figure 7-9 Troubleshooting Case Study: Focus Area

The multicast control plane is broken somewhere in Step 2 or Step 3. The areas of concern include R2, R3, and R4. It is recommended to work with Cisco TAC during troubleshooting because we will be using debug commands to understand the PIM control-plane flow. It is also recommended to plan a change-control window before executing debug commands, because they might impact other network functions.

To verify the register process, use the debug ip pim 239.1.1.1 command at R3, the output for which is shown in Example 7-17.

Example 7-17 Control-plane Debug Capture at R3


*Mar 21 20:48:06.866: PIM(0): Received v2 Register on Ethernet2/0 from 10.1.2.1
*Mar 21 20:48:06.866:      for 10.1.1.1, group 239.1.1.1
*Mar 21 20:48:06.866: PIM(0): Check RP 192.168.100.100 into the (*, 239.1.1.1) entry
*Mar 21 20:48:06.866: PIM(0): Adding register decap tunnel (Tunnel1) as accepting
  interface of (*, 239.1.1.1).
*Mar 21 20:48:06.866: PIM(0): Adding register decap tunnel (Tunnel1) as accepting
  interface of (10.1.1.1, 239.1.1.1).
*Mar 21 20:48:06.866: PIM(0): Send v2 Register-Stop to 10.1.2.1 for 10.1.1.1, group
  239.1.1.1


PIM register packets are being sent and received. The question arises: why does the state at R3 not show the receiver, and instead show the pruned state for the (S,G) entry? Further investigation is needed to determine the root cause; this moves our attention to Step 3.

We begin troubleshooting this by using the show ip mroute command on R3, as demonstrated in Example 7-18.

Example 7-18 show ip mroute Command Output


R3#show ip mroute 239.1.1.1
 …
(*, 239.1.1.1), 00:04:21/stopped, RP 192.168.100.100, flags: SP
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list: Null

(10.1.1.1, 239.1.1.1), 00:04:21/00:02:38, flags: PA
  Incoming interface: Ethernet2/0, RPF nbr 10.1.2.1
  Outgoing interface list: Null    <-- Problem area


The state flags for the (S,G) entry do not look correct: the entry is in the pruned state, and the outgoing interface list is set to Null. The (*,G) entry also does not show any receiver information.

The MSDP peering between R3 and R4 shows that R4 has learned the source information from R3. This can be seen using the show ip msdp sa-cache command, as demonstrated in Example 7-19.

Example 7-19 show ip msdp sa-cache Command Output


R4#show ip msdp sa-cache
MSDP Source-Active Cache - 1 entries
(10.1.1.1, 239.1.1.1), RP 192.168.100.100, AS ?,00:08:46/00:05:22, Peer 192.168.3.3
R4#


R4 has the receiver information in its multicast state table, shown using the show ip mroute command, as demonstrated in Example 7-20.

Example 7-20 Multicast State Table at R4


(*, 239.1.1.1), 5d22h/stopped, RP 192.168.100.100, flags: S
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Ethernet2/0, Forward/Sparse, 5d22h/00:03:04

(10.1.1.1, 239.1.1.1), 00:00:18/00:02:41, flags: M
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Ethernet2/0, Forward/Sparse, 00:00:18/00:02:41


The control plane on the RPs (R3 and R4) looks fine; however, the state entry at R3 does not show the multicast stream flowing. It cannot be an application issue, because even the multicast ping test did not succeed. The multicast state does not show R3 forwarding the (S,G) traffic toward R4, which has the receiver entry in its (*,G) state table; consequently, the (S,G) entry is in a pruned state at R3 (refer to the "problem area" identified in the show ip mroute output at R3). The exchange of source information between the RPs is based on MSDP.

Next, verify that the data-plane interfaces are congruent with the multicast control plane.

At R3, compare the Layer 3 routing adjacencies with the PIM neighbor relationships. This is accomplished using the show ip ospf neighbor command to determine the state of the routing adjacencies and the show ip pim neighbor command to display the PIM neighbor relationships. Example 7-21 shows the output of both commands.

Example 7-21 show ip ospf neighbor and show ip pim neighbor Command Output for R3


R3#show ip ospf neighbor

Neighbor ID     Pri   State           Dead Time   Address         Interface
192.168.4.4       1   FULL/BDR        00:00:38    10.1.4.2        Ethernet3/0
192.168.2.2       1   FULL/DR         00:00:37    10.1.2.1        Ethernet2/0

R3# show ip pim neighbor
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
      P - Proxy Capable, S - State Refresh Capable, G - GenID Capable
Neighbor          Interface                Uptime/Expires    Ver   DR
Address                                                            Prio/Mode
10.1.2.1          Ethernet2/0              6d02h/00:01:34    v2    1 / S P G
R3#


Notice that Ethernet3/0 does not have a PIM neighbor. This interface connects R3 to R4 and could be the reason the receiver's PIM join did not make it from R4 to R3. To verify the inconsistency, execute the same commands at R4 and compare the Layer 3 routing adjacencies with the PIM neighbor relationships, as demonstrated in Example 7-22.

Example 7-22 show ip ospf neighbor and show ip pim neighbor Command Output for R4


R4#show ip ospf neighbor

Neighbor ID     Pri   State           Dead Time   Address         Interface
192.168.3.3       1   FULL/DR         00:00:32    10.1.4.1        Ethernet3/0
192.168.5.5       1   FULL/DR         00:00:37    10.1.5.2        Ethernet2/0

R4#show ip pim neighbor
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
      P - Proxy Capable, S - State Refresh Capable, G - GenID Capable
Neighbor          Interface                Uptime/Expires    Ver   DR
Address                                                            Prio/Mode
10.1.5.2          Ethernet2/0              6d02h/00:01:24    v2    1 / DR S P G
R4#


The preceding output confirms that the PIM neighbor relationship between R3 and R4 is missing.

Check the configuration of interface Ethernet3/0 on R4 and verify that PIM is enabled, using the show ip pim interface command, as demonstrated in Example 7-23.

Example 7-23 show ip pim interface Command Output for R4


R4#show ip pim interface

Address          Interface           Ver/   Nbr    Query  DR         DR
                                     Mode   Count  Intvl  Prior
192.168.4.4      Loopback0           v2/S   0      30     1          192.168.4.4
192.168.100.100  Loopback100         v2/S   0      30     1          192.168.100.100
10.1.5.1         Ethernet2/0         v2/S   1      30     1          10.1.5.2
10.1.4.2         Ethernet3/0         v2/S   0      30     1          10.1.4.2
R4#


From the output, we see that PIM sparse mode (v2/S) is enabled on Ethernet3/0, so the configuration on R4’s side of the link is not the problem.

Now we check the configuration of R3’s Ethernet3/0 and verify that PIM is enabled, using the show ip pim interface command on R3, as demonstrated in Example 7-24.

Example 7-24 show ip pim interface Command Output for R3


R3#show ip pim interface

Address          Interface           Ver/   Nbr    Query  DR         DR
                                     Mode   Count  Intvl  Prior
10.1.2.2         Ethernet2/0         v2/S   1      30     1          10.1.2.2
192.168.3.3      Loopback0           v2/S   0      30     1          192.168.3.3
192.168.100.100  Loopback100         v2/S   0      30     1          192.168.100.100
R3#


The output shows that Ethernet3/0 is missing entirely, meaning PIM has not been enabled on that interface. This clearly identifies the problem.
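
You can confirm the same finding in the interface configuration itself. The output that follows is reconstructed for illustration; the key point is the absence of an ip pim sparse-mode line:


R3#show running-config interface Ethernet3/0
!
interface Ethernet3/0
 ip address 10.1.4.1 255.255.255.0
end
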

To rectify the problem, we enable PIM sparse-mode on interface Ethernet 3/0 at R3 using the commands shown in Example 7-25.

Example 7-25 Enabling PIM Sparse-Mode on R3’s Ethernet 3/0 Interface


R3#configure terminal
Enter configuration commands, one per line.  End with CNTL/Z.
R3(config)#interface  ethernet 3/0
R3(config-if)#ip pim sparse-mode


Now we verify that the configuration change resolved the problem by using the show ip pim neighbor command, as demonstrated in Example 7-26.

Example 7-26 PIM Neighbor Overview


R3#show ip pim neighbor
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
      P - Proxy Capable, S - State Refresh Capable, G - GenID Capable
Neighbor          Interface                Uptime/Expires    Ver   DR
Address                                                            Prio/Mode
10.1.2.1          Ethernet2/0              6d03h/00:01:34    v2    1 / S P G
10.1.4.2          Ethernet3/0              00:00:55/00:01:18 v2    1 / DR S P G
R3#
R3#sh ip ospf nei

Neighbor ID     Pri   State           Dead Time   Address         Interface
192.168.4.4       1   FULL/BDR        00:00:35    10.1.4.2        Ethernet3/0
192.168.2.2       1   FULL/DR         00:00:33    10.1.2.1        Ethernet2/0


This output shows that both the PIM neighbor relationship and the OSPF adjacency are established.

Now we can test whether the source and receiver for 239.1.1.1 are able to communicate. To verify that the source is sending multicast traffic, we execute show ip mroute to check the multicast state flags on R3, as demonstrated in Example 7-27.

Example 7-27 Multicast State Table at R3


R3#show ip mroute 239.1.1.1
IP Multicast Routing Table
..
Outgoing interface flags: H - Hardware switched, A - Assert winner, p - PIM Join
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 239.1.1.1), 00:04:59/stopped, RP 192.168.100.100, flags: SP
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list: Null

(10.1.1.1, 239.1.1.1), 00:04:59/00:02:17, flags: TA
  Incoming interface: Ethernet2/0, RPF nbr 10.1.2.1
  Outgoing interface list:
    Ethernet3/0, Forward/Sparse, 00:03:49/00:03:25


The T flag indicates that the multicast flow is on the shortest-path tree (SPT) through R3. Now verify with a multicast ping that end-to-end forwarding is working, as demonstrated in Example 7-28.

Example 7-28 Verifying the Multicast Ping


R2# ping 239.1.1.10 source l0 repeat 100
Type escape sequence to abort.
Sending 100, 100-byte ICMP Echos to 239.1.1.10, timeout is 2 seconds:
Packet sent with a source address of 192.168.2.2

Reply to request 0 from 192.168.5.5, 19 ms
Reply to request 0 from 192.168.5.5, 19 ms
Reply to request 1 from 192.168.5.5, 1 ms
Reply to request 1 from 192.168.5.5, 1 ms
Reply to request 2 from 192.168.5.5, 1 ms
Reply to request 2 from 192.168.5.5, 1 ms


The test shows the ping is successful. The multicast state for group 239.1.1.10 on R3 shows the appropriate information, as shown in Example 7-29.

Example 7-29 Multicast State Table at R3


R3#sh ip mroute 239.1.1.10
IP Multicast Routing Table
..

(*, 239.1.1.10), 00:00:24/stopped, RP 192.168.100.100, flags: SP
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list: Null

(192.168.2.2, 239.1.1.10), 00:00:24/00:02:35, flags: TA
  Incoming interface: Ethernet2/0, RPF nbr 10.1.2.1
  Outgoing interface list:
    Ethernet3/0, Forward/Sparse, 00:00:24/00:03:05
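

Incidentally, the replies from 192.168.5.5 in Example 7-28 arrive because a router answers ICMP echoes sent to a group that one of its interfaces has joined. In a lab, you can create such a responder with ip igmp join-group; the following is a minimal sketch, assuming R5’s Loopback0 carries 192.168.5.5, consistent with the loopback addressing in this topology. (Use ip igmp static-group instead if you want forwarding without punting the traffic to the CPU.)


R5(config)#interface Loopback0
R5(config-if)#ip igmp join-group 239.1.1.10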


The problem was caused by PIM not being enabled on one of the links. Another command you can use to verify the flow of multicast traffic is show ip mfib count, as shown in Example 7-30.

Example 7-30 Multicast FIB Table to Check the Data Plane


R3#show ip mfib count | begin  239.1.1.1
Group: 239.1.1.1
  RP-tree,
   SW Forwarding: 0/0/0/0, Other: 0/0/0
  Source: 10.1.1.1,
   SW Forwarding: 9/0/100/0, Other: 0/0/0
  Totals - Source count: 1, Packet count: 9
Group: 239.1.1.2
  RP-tree,
   SW Forwarding: 0/0/0/0, Other: 0/0/0
Group: 239.1.1.10
  RP-tree,
   SW Forwarding: 0/0/0/0, Other: 0/0/0
  Source: 192.168.2.2,
   SW Forwarding: 36/0/100/0, Other: 0/0/0
  Totals - Source count: 1, Packet count: 36


The output of this command varies by platform; however, it is an excellent command to know when you want to verify data-plane traffic.
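
If show ip mfib count is not available on your platform, show ip mroute count reports comparable packet counters from the multicast routing table; it can also be scoped to a single group:


R3#show ip mroute 239.1.1.1 count
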

Important Multicast show Commands

This section explains several IOS, NX-OS, and IOS-XR commands useful in collecting pertinent information on multicast. We recommend that you review Cisco.com command documentation to understand other multicast commands that can be leveraged during troubleshooting.

show ip igmp groups/show igmp groups Commands

This command is used to verify whether a receiver has sent an IGMP membership report (join) for a group; in Example 7-31, the group is 239.1.2.3.

The command syntax for IOS and Nexus is:

show ip igmp groups group-address

The command syntax for IOS-XR is:

show igmp groups group-address

Example 7-31 demonstrates sample output from this command.

Example 7-31 show ip igmp groups Command Output


R1#sh ip igmp group 239.1.2.3
IGMP Connected Group Membership
Group Address    Interface            Uptime    Expires   Last Reporter
239.1.2.3        Ethernet1/0          00:01:07  never     172.16.8.6


show ip igmp interface/show igmp interface Commands

This command is used to verify the IGMP version, the query interval and querier timeout, IGMP activity, the querying router, and the multicast groups that have been joined.

The command syntax for IOS and Nexus is:

show ip igmp interface interface-type

The command syntax for IOS-XR is:

show igmp interface interface-type

Example 7-32 demonstrates sample output from the show ip igmp interface command for IOS. (The NX-OS output will be similar, but it is not shown here.)

Example 7-32 show ip igmp interface Command Output for IOS


R6#sh ip igmp interface Ethernet1/0
Ethernet1/0 is up, line protocol is up
  Internet address is 172.16.8.6/24
  IGMP is enabled on interface
  Current IGMP version is 2
  CGMP is disabled on interface
  IGMP query interval is 60 seconds
  IGMP querier timeout is 120 seconds
  IGMP max query response time is 10 seconds
  Last member query response interval is 1000 ms
  Inbound IGMP access group is not set
  IGMP activity: 1 joins, 0 leaves
  Multicast routing is enabled on interface
  Multicast TTL threshold is 0
  Multicast designated router (DR) is 172.16.8.6 (this system)
  IGMP querying router is 172.16.8.6 (this system)
  Multicast groups joined (number of users):
      239.1.2.3(1)
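

Notice the 60-second query interval and the 120-second querier timeout (twice the interval) in this output. The interval is tunable per interface with ip igmp query-interval, as in this minimal sketch (60 seconds is also the IOS default):


R6(config)#interface Ethernet1/0
R6(config-if)#ip igmp query-interval 60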


Example 7-33 demonstrates sample output from the show igmp interface command for IOS-XR.

Example 7-33 show igmp interface Command Output for IOS-XR


RP/0/0/CPU0:R1#sh igmp  interface loopback 1
Wed Mar 23 07:52:13.139 UTC

Loopback1 is up, line protocol is up
  Internet address is 10.100.1.1/32
  IGMP is enabled on interface
  Current IGMP version is 3
  IGMP query interval is 60 seconds
  IGMP querier timeout is 125 seconds
  IGMP max query response time is 10 seconds
  Last member query response interval is 1 seconds
  IGMP activity: 4 joins, 0 leaves
  IGMP querying router is 10.100.1.1 (this system)
  Time elapsed since last query sent 00:00:48
  Time elapsed since IGMP router enabled 00:02:07
  Time elapsed since last report received 00:00:45


show ip mroute/show mrib route Commands

This is a very useful command for understanding the multicast flow. Its output helps you understand the flags, the RPF neighbor, the RP information, the outgoing interface list (OIL), and the incoming interface (IIF). Note the difference in flag nomenclature between IOS and NX-OS; the information relayed is the same.

The command syntax for IOS and Nexus is:

show ip mroute

The command syntax for IOS-XR is:

show mrib route

Example 7-34 demonstrates sample output from the show ip mroute command for IOS.

Example 7-34 show ip mroute Command Output for IOS


R6> show ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, C - Connected, L - Local, P - Pruned
       R - RP-bit set, F - Register flag, T - SPT-bit set, J - Join SPT
       M - MSDP created entry, X - Proxy Join Timer Running
       A - Advertised via MSDP
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 239.1.1.1), 00:13:28/00:02:59, RP 10.1.5.1, flags: SCJ
  Incoming interface: Ethernet0, RPF nbr 10.1.2.1
  Outgoing interface list:
    Ethernet1, Forward/Sparse, 00:13:28/00:02:32
    Serial0, Forward/Sparse, 00:04:52/00:02:08

(171.68.37.121/32, 239.1.1.1), 00:01:43/00:02:59, flags: CJT
  Incoming interface: Serial0, RPF nbr 192.10.2.1
  Outgoing interface list:
    Ethernet1, Forward/Sparse, 00:01:43/00:02:11
    Ethernet0, Forward/Sparse, 00:01:43/00:02:11


Example 7-35 demonstrates sample output from the show ip mroute command for Nexus.

Example 7-35 show ip mroute Command Output for Nexus


nexusr1# show ip mroute
IP Multicast Routing Table for VRF "default"

(*, 232.0.0.0/8), uptime: 00:08:09, pim ip
  Incoming interface: Null, RPF nbr: 0.0.0.0
  Outgoing interface list: (count: 0)

(*, 239.1.1.1/32), uptime: 00:07:22, pim ip
  Incoming interface: loopback1, RPF nbr: 10.100.1.1
  Outgoing interface list: (count: 1)
    Ethernet2/2, uptime: 00:07:22, pim

(*, 239.1.1.100/32), uptime: 00:07:27, igmp ip pim
  Incoming interface: loopback1, RPF nbr: 10.100.1.1
  Outgoing interface list: (count: 1)
    loopback0, uptime: 00:07:27, igmp


Example 7-36 demonstrates sample output from the show mrib route command for IOS-XR.

Example 7-36 show mrib route Command Output for IOS-XR


RP/0/0/CPU0:R1#show mrib route
Wed Mar 23 07:53:27.604 UTC

IP Multicast Routing Information Base
Entry flags: L - Domain-Local Source, E - External Source to the Domain,
    C - Directly-Connected Check, S - Signal, IA - Inherit Accept,
    IF - Inherit From, D - Drop, ME - MDT Encap, EID - Encap ID,
    MD - MDT Decap, MT - MDT Threshold Crossed, MH - MDT interface handle
    CD - Conditional Decap, MPLS - MPLS Decap, MF - MPLS Encap, EX - Extranet
    MoFE - MoFRR Enabled, MoFS - MoFRR State, MoFP - MoFRR Primary
    MoFB - MoFRR Backup, RPFID - RPF ID Set, X - VXLAN
Interface flags: F - Forward, A - Accept, IC - Internal Copy,
    NS - Negate Signal, DP - Don't Preserve, SP - Signal Present,
    II - Internal Interest, ID - Internal Disinterest, LI - Local Interest,
    LD - Local Disinterest, DI - Decapsulation Interface
    EI - Encapsulation Interface, MI - MDT Interface, LVIF - MPLS Encap,
    EX - Extranet, A2 - Secondary Accept, MT - MDT Threshold Crossed,
    MA - Data MDT Assigned, LMI - mLDP MDT Interface, TMI - P2MP-TE MDT Interface
    IRMI - IR MDT Interface
(*,239.0.0.0/8) RPF nbr: 10.100.1.1 Flags: L C RPF P
  Up: 00:03:23
  Outgoing Interface List
    Decapstunnel0 Flags: NS DI, Up: 00:03:17

(*,239.1.1.100) RPF nbr: 10.100.1.1 Flags: C RPF
  Up: 00:02:15
  Incoming Interface List
    Decapstunnel0 Flags: A NS, Up: 00:02:15
  Outgoing Interface List
    Loopback1 Flags: F IC NS II LI, Up: 00:02:15
RP/0/0/CPU0:R1#


show ip pim interface/show pim interface Commands

This command is useful for verifying which interfaces are configured for PIM. The designated router (DR) for each segment is also shown in the output.

The command syntax for IOS and Nexus is:

show ip pim interface

The command syntax for IOS-XR is:

show pim interface

Example 7-37 demonstrates sample output from the show ip pim interface command for IOS. (The NX-OS output is similar, but it is not shown here.)

Example 7-37 show ip pim interface Command Output for IOS


R6#show ip pim interface
Address          Interface          Version/Mode    Nbr   Query     DR
                                                    Count Intvl
172.16.10.6      Serial0/0          v2/Sparse        1    30     0.0.0.0
172.16.7.6       Ethernet0/1        v2/Sparse        1    30     172.16.7.6
172.16.8.6       Ethernet1/0        v2/Sparse        0    30     172.16.8.6


Example 7-38 demonstrates sample output from the show pim interface command for IOS-XR.

Example 7-38 show pim interface Command Output for IOS-XR


RP/0/0/CPU0:R1#show pim interface
Wed Mar 23 07:55:37.675 UTC

PIM interfaces in VRF default
Address            Interface                     PIM  Nbr   Hello  DR    DR
                                                      Count Intvl  Prior

192.168.0.1        Loopback0                     on   1     30     1     this system
10.100.1.1         Loopback1                     on   1     30     1     this system
192.168.12.1       GigabitEthernet0/0/0/0        on   2     30     1     192.168.12.2
192.168.23.1       GigabitEthernet0/0/0/1        on   2     30     1     192.168.23.2
RP/0/0/CPU0:R1#


show ip pim neighbor/show pim neighbor Commands

After checking the PIM interface, this command helps verify the PIM adjacency between Layer 3 neighbors.

The command syntax for IOS and Nexus is:

show ip pim neighbor

The command syntax for IOS-XR is:

show pim neighbor

Example 7-39 demonstrates sample output from the show ip pim neighbor command for IOS. (The NX-OS output is similar, but it is not shown here.)

Example 7-39 show ip pim neighbor Command Output for IOS


R6#show ip pim neighbor
PIM Neighbor Table
Neighbor Address  Interface          Uptime    Expires   Ver  Mode
172.16.10.3       Serial0/0          7w0d      00:01:26  v2
172.16.7.5        Ethernet0/1        7w0d      00:01:30  v2


Example 7-40 demonstrates sample output from the show pim neighbor command for IOS-XR.

Example 7-40 show pim neighbor Command Output for IOS-XR


RP/0/0/CPU0:R1#show pim neighbor
Wed Mar 23 07:56:42.811 UTC

PIM neighbors in VRF default
Flag: B - Bidir capable, P - Proxy capable, DR - Designated Router,
      E - ECMP Redirect capable
      * indicates the neighbor created for this router

Neighbor Address           Interface              Uptime    Expires  DR pri  Flags

192.168.12.1*              GigabitEthernet0/0/0/0 00:06:36  00:01:36 1       B P E
192.168.12.2               GigabitEthernet0/0/0/0 00:06:32  00:01:35 1 (DR)  P
192.168.23.1*              GigabitEthernet0/0/0/1 00:06:36  00:01:37 1       B P E
192.168.23.2               GigabitEthernet0/0/0/1 00:06:34  00:01:35 1 (DR)  B P
192.168.0.1*               Loopback0              00:06:37  00:01:20 1 (DR)  B P E
10.100.1.1*                Loopback1              00:06:37  00:01:19 1 (DR)  B P E
RP/0/0/CPU0:R1#


show ip pim rp Command

This command checks the RP information on a router. The Nexus version of the command also shows the scoped group ranges and the RP propagation method, similar to the IOS show ip pim rp mapping command.

The command syntax for IOS and Nexus is:

show ip pim rp

Example 7-41 demonstrates sample output from the show ip pim rp command for IOS.

Example 7-41 show ip pim rp Command Output for IOS


R6#sh ip pim rp 239.1.2.3
Group: 239.1.2.3, RP: 111.1.1.1, v2, uptime 00:23:36, expires never


Example 7-42 demonstrates sample output from the show ip pim rp command for NX-OS.

Example 7-42 show ip pim rp Command Output for NX-OS


nexusr1# show ip pim rp
PIM RP Status Information for VRF "default"
BSR disabled
Auto-RP disabled
BSR RP Candidate policy: None
BSR RP policy: None
Auto-RP Announce policy: None
Auto-RP Discovery policy: None

Anycast-RP 192.168.0.1 members:
  10.100.1.1*
Anycast-RP 192.168.0.2 members:
  10.100.1.1*

RP: 10.100.1.1*, (0), uptime: 00:04:42, expires: never,
  priority: 0, RP-source: (local), group ranges:
      239.0.0.0/8
nexusr1#


show ip pim rp mapping/show pim rp mapping Commands

This command checks the RP information on a router, including the group ranges scoped to the RP and the method by which the RP is propagated. NX-OS displays these details using the show ip pim rp command.

The command syntax for IOS is:

show ip pim rp mapping

The command syntax for IOS-XR is:

show pim rp mapping

Example 7-43 demonstrates sample output from the show ip pim rp mapping command for IOS.

Example 7-43 show ip pim rp mapping Command Output for IOS


R5# show ip pim rp mapping
PIM Group-to-RP Mappings

Group(s) 239.1.0.0/16
  RP 192.168.100.100 (?), v2v1
    Info source: 192.168.100.100 (?), elected via Auto-RP
         Uptime: 1w0d, expires: 00:00:24
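

The (?) in this output simply means that the router has no DNS reverse mapping for the RP address. If a group-to-RP mapping is missing entirely, verify how the RP is propagated; with static RP configuration, for example, the mapping would come from a line such as the following (a sketch reusing this topology’s RP address and group-list ACL):


R5(config)#ip pim rp-address 192.168.100.100 1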


Example 7-44 demonstrates sample output from the show pim rp mapping command for IOS-XR.

Example 7-44 show pim rp mapping Command Output for IOS-XR


RP/0/0/CPU0:R1#show pim rp  mapping
Wed Mar 23 07:57:43.757 UTC
PIM Group-to-RP Mappings
Group(s) 239.0.0.0/8
  RP 10.100.1.1 (?), v2
    Info source: 0.0.0.0 (?), elected via config
      Uptime: 00:07:39, expires: never
RP/0/0/CPU0:R1#


Summary

Troubleshooting is a learned skill that requires time and effort. To become proficient at solving multicast problems quickly, you need to understand the infrastructure, establish a baseline, and follow the fundamental troubleshooting methodology of application, unicast domain, and multicast domain. The more time you spend reviewing the output of the show commands explained in this chapter, the better understanding you will have of the operation of multicast. Remember, practice, practice, practice!
