
As part of Premier Support, I was recently on-site at a customer for a few days, doing some "mini consultancy" work, mainly looking at extending Network Device discovery. Here I want to note some of the defects and surprising behaviour we found, and some of the things I was able to help the customer with.

 

Standard Network Device discovery

 

Many customers deploy SNMP credentials to discover Network Devices, and are quite happy with the coverage of supported devices and with the turnaround of adding new ones in monthly TKU updates after submitting a new device capture. Typically, Discovery is used for basic inventory recognition (synced to the CMDB) and, importantly, for discovering the connections between a switch and the Host nodes attached to it. However, the customer I was working with wanted to dig deeper into the data...

 

Problems and Gaps Identified

 

No Linkage between interfaces and IPs

 

In contrast to Host nodes, the interfaces and IPs of a Network Device are not shown in a unified table. Instead they are displayed separately, and by default there is no connection in the UI or data model between an IP and the interface it is associated with. It turns out that if you turn on virtual interface discovery (see Managing network device virtual interface discovery), a side effect is that you do get a link from IP to interface and vice versa. I logged defect DRUD1-25944 for this.

 

 

Incomplete documentation

 

We document how to enable virtual interfaces in Managing network device virtual interface discovery; however, IMHO this document is lacking in several ways. It only mentions how the setting controls virtual interface discovery. It doesn't mention the interface-IP linkage as a side effect. Why does it have to be controlled on the command line rather than being a UI option? Why would you not want it on by default - are there any downsides? If so, what are they? I created docs defect DRUD1-26743 to improve this.

 

Not all network interfaces discovered

 

Turning on virtual interface discovery causes more interfaces to be discovered (see above). However, the core code maintains a whitelist of "interesting" interface types:

 

0   # unknown
6   # ethernet csmacd
7   # iso88023 csmacd
8   # iso88024TokenBus
9   # iso88025TokenRing
15  # fddi
62  # fastEther
69  # fastEtherFX
71  # ieee80211
117 # gigabitEthernet
54  # propMultiplexor
161 # IEEE 802.3ad Link Aggregate

 

and drops any that don't match it. This list was added a long time ago and is, IMHO, no longer appropriate; this was logged as defect DRUD1-26655 with a tentative fix targeted at the end of this year. As part of Premier Support, I was able to provide the customer with a temporary update to remove the filter until then.

 

Cisco firmware image file not discovered

 

A simple custom pattern was written to extract this from OID 1.3.6.1.4.1.9.2.1.73 and populate the Network Device node by calling the discovery.snmpGet() function. RFE DRDC1-13530 was logged to request this OOTB, and this Idea (feel free to vote on it) was raised at the request of Engineering.
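
For illustration, a minimal sketch of that kind of pattern is below. This is not the pattern written on site: the module and pattern names and the custom firmware_image attribute are my own choices, and the handling of the discovery.snmpGet() result (reading a value attribute, and whether the scalar needs a trailing .0 instance) is an assumption to verify against the TPL documentation for your Discovery version.

tpl 1.13 module Custom.NetworkDevice.FirmwareImage;

pattern Custom_Cisco_Firmware_Image 1.0
    """
    Populate a custom firmware image attribute on Network Devices from
    OID 1.3.6.1.4.1.9.2.1.73.
    """
    overview
        tags custom, network_device;
    end overview;

    triggers
        on device := NetworkDevice created, confirmed;
    end triggers;

    body
        // In practice you would probably restrict this to Cisco devices first,
        // e.g. by checking the vendor or sysObjectID.
        // Assumption: the scalar may need a trailing .0 instance, and the
        // result exposes the retrieved string via a 'value' attribute.
        result := discovery.snmpGet(device, "1.3.6.1.4.1.9.2.1.73");
        if result and result.value then
            device.firmware_image := result.value;
        end if;
    end body;
end pattern;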

Interface statuses not discovered

 

A custom pattern was written to extract interface statuses from OID 1.3.6.1.2.1.2.2.1 using the discovery.snmpGetTable() function and populate two new attributes.
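
The attribute names actually used on site are not reproduced here, so the sketch below uses illustrative ones (admin_status and oper_status, taken from ifAdminStatus/ifOperStatus). The way the table result is iterated and its columns addressed, the DeviceWithInterface:DeviceInterface:InterfaceOfDevice traversal, and the ifindex attribute on NetworkInterface are all assumptions to confirm before reusing anything like this.

tpl 1.13 module Custom.NetworkDevice.InterfaceStatus;

pattern Custom_NetworkDevice_Interface_Status 1.0
    """
    Read the MIB-II ifTable (OID 1.3.6.1.2.1.2.2.1) and record admin/oper
    status on the Network Device's interfaces.
    """
    overview
        tags custom, network_device;
    end overview;

    triggers
        on device := NetworkDevice created, confirmed;
    end triggers;

    body
        // Assumption: snmpGetTable(device, oid) returns an iterable of rows,
        // and rows expose MIB columns by name; check the exact signature and
        // row access syntax against the TPL docs for your version.
        if_table := discovery.snmpGetTable(device, "1.3.6.1.2.1.2.2.1");
        if not if_table then
            stop;
        end if;

        // The NetworkInterface nodes inferred for this device.
        interfaces := search(in device
            traverse DeviceWithInterface:DeviceInterface:InterfaceOfDevice:NetworkInterface);

        for row in if_table do
            for iface in interfaces do
                // Assumption: the interface node carries the SNMP ifindex.
                if iface.ifindex = row.ifIndex then
                    // Attribute names are illustrative, not the ones used on site.
                    iface.admin_status := row.ifAdminStatus;
                    iface.oper_status := row.ifOperStatus;
                end if;
            end for;
        end for;
    end body;
end pattern;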

 

Chassis and cards are only in Directly Discovered Data

As part of core discovery, we create DiscoveredCard and DiscoveredChassis nodes, but these are not visible from the main Network Device page. Also, this information will ultimately need to be consumed in the CMDB, and it is not recommended to attempt to write a sync mapping directly from DDD. So I wrote a custom pattern to copy the data from the DDD into two lists of Detail nodes, one for each type, and created links from the cards to their corresponding containing Chassis.
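
For reference, a heavily simplified sketch of that kind of copy pattern follows. It is not the pattern written on site: the module and pattern names, the Detail key and type strings, the attribute names read from the DDD nodes, and especially the search() traversal from the device to its DiscoveredChassis/DiscoveredCard nodes are illustrative choices of mine that would need verifying against the data model, and the card-to-chassis linking is left out.

tpl 1.13 module Custom.NetworkDevice.ChassisAndCards;

pattern Custom_NetworkDevice_Chassis_Card_Details 1.0
    """
    Copy DiscoveredChassis and DiscoveredCard data from the DDD into Detail
    nodes on the Network Device, so it is visible on the device page and can
    be consumed by a CMDB sync mapping.
    """
    overview
        tags custom, network_device;
    end overview;

    triggers
        on device := NetworkDevice created, confirmed;
    end triggers;

    body
        // Schematic traversal from the inferred device to its DDD nodes; the
        // exact path, and filtering to the most recent DiscoveryAccess, should
        // be verified against your datastore before use.
        chassis_nodes := search(in device
            traverse InferredElement:Inference:Associate:DiscoveryAccess
            traverse DiscoveryAccess:DiscoveryAccessResult:DiscoveryResult:DiscoveredChassis);
        card_nodes := search(in device
            traverse InferredElement:Inference:Associate:DiscoveryAccess
            traverse DiscoveryAccess:DiscoveryAccessResult:DiscoveryResult:DiscoveredCard);

        // Attribute names on the DDD nodes (name, etc.) are illustrative.
        for chassis in chassis_nodes do
            detail := model.Detail(key := "%device.key%/chassis/%chassis.name%",
                                   name := chassis.name,
                                   type := "Network Device Chassis");
            model.rel.Detail(ElementWithDetail := device, Detail := detail);
        end for;

        for card in card_nodes do
            detail := model.Detail(key := "%device.key%/card/%card.name%",
                                   name := card.name,
                                   type := "Network Device Card");
            model.rel.Detail(ElementWithDetail := device, Detail := detail);
            // Linking each card Detail to its containing chassis Detail is
            // omitted here; it depends on how containment is represented in
            // the DDD data for the device concerned.
        end for;
    end body;
end pattern;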

The underlying gap has been logged as an improvement, DRUD1-26654, with a tentative fix targeted around 2019-11.

 

DiscoveredCard nodes missing descriptions

 

While looking at the data for the above point, we found that most DiscoveredCard nodes have no description. We think there is more data available in the MIB than we are currently pulling; this was logged as improvement DRUD1-13628.

 

Protocol Data

 

My customer was interested in extracting specific entries for different network protocols that may be configured: BGP, OSPF, and the Cisco-specific EIGRP. It was a fairly simple matter to write a custom pattern to pull entries from the three corresponding SNMP tables and create three lists of Detail nodes for these entries.
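
A sketch of the BGP part is below; OSPF and EIGRP follow the same shape. As with the earlier sketches, the names, the Detail key/type strings and the assumption that snmpGetTable() rows can be addressed by column name are mine, and the table OIDs shown should be confirmed against the relevant MIBs before use.

tpl 1.13 module Custom.NetworkDevice.RoutingProtocols;

pattern Custom_NetworkDevice_Routing_Protocol_Details 1.0
    """
    Pull routing protocol table entries over SNMP and record them as Detail
    nodes on the Network Device.
    """
    overview
        tags custom, network_device;
    end overview;

    triggers
        on device := NetworkDevice created, confirmed;
    end triggers;

    body
        // BGP peer table (bgpPeerTable). The OIDs are illustrative and should
        // be confirmed against the relevant MIBs; OSPF (ospfNbrTable,
        // 1.3.6.1.2.1.14.10) and the Cisco EIGRP tables (CISCO-EIGRP-MIB,
        // under 1.3.6.1.4.1.9.9.449) would follow the same shape.
        bgp_peers := discovery.snmpGetTable(device, "1.3.6.1.2.1.15.3");
        if bgp_peers then
            for peer in bgp_peers do
                // Assumption: rows expose MIB columns by name.
                detail := model.Detail(key := "%device.key%/bgp/%peer.bgpPeerRemoteAddr%",
                                       name := "BGP peer %peer.bgpPeerRemoteAddr%",
                                       type := "BGP Peer");
                model.rel.Detail(ElementWithDetail := device, Detail := detail);
            end for;
        end if;
    end body;
end pattern;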

 

Future Work

 

This additional data that is now in Discovery needs to be populated into the CMDB, so I shall need to write some custom sync mappings.