Discovery


Getting old

Posted by Nick Smith Moderator Jan 17, 2020

I try not to think too much about getting old. It's a terrible thing, but it is better than the alternative. I can no longer pretend I am not at least seriously middle-aged, and although that doesn't stop me going to an occasional club night (see you with Digitalism at Village Underground in February?), hangovers seem to take longer to clear. And my 80s cultural references don't seem to resonate with customers when I am old enough to be their mother.

 

Similarly, software has a lifecycle - such as Discovery's, which we document here. When we released Discovery 11.3 in March 2018, the standard VM platform was based on CentOS 7, so new installations should have used that. However, we maintained an option for an in-place upgrade of existing earlier versions to 11.3 on CentOS 6. Some customers took this route as it was the least-effort option, not involving procuring new CentOS 7 VMs and migrating. It should be noted that this is the last in-place upgrade; the next release will necessitate moving to CentOS 7.

 

Regardless of which OS version you are running, it is highly recommended to keep up to date with our OSUs to maintain the latest stability and security fixes. Patches are obtained from the CentOS feed, which in turn is based on what Red Hat releases. Their policy is that all updates (including security updates) for CentOS 6 will cease this year, 2020-11-21; see here. Discovery appliances are not open to the wider Internet so are not exposed to external attacks, but they still contain sensitive information about an organisation's estate. Sometimes OS updates are made that fix only theoretical vulnerabilities - eg in a library that we don't use in a way that could trigger the problem. But still, it is certainly advisable and simpler to be up to date.

 

You will see that the CentOS date is a bit earlier than the end of "full support" for 11.3 on 2021-03-21. So a reasonable question is whether BMC will offer any kind of OS updates between these dates. I have recently learned from Product Management that all CentOS 6 updates will cease after the earlier date, in November this year. So, to be assured of maintaining the security of the OS, you need to plan to migrate to new CentOS 7 VMs this year (on either 11.3 or the latest on-premise version at the time).

 

I logged Docs defect DRUD1-28495 to try to get clarification on the main support page.


What's your number?

Posted by Nick Smith Moderator Jan 2, 2020

Happy New Year to all! Welcome to the roaring 20s even if we still don't have our flying cars.

 

Numbers are amazing things, but so ubiquitous that most of us probably don't think of them in any detail, unless you happen to be a number theorist. One of the first things we learn as a child is to distinguish one object from another, and then to give labels to different sizes of collections, and the ordering within them: the cardinals and ordinals. Then we learn the rationals, reals and the wonders of complex numbers and vectors at school. That covers what is needed for most of Science and Engineering, although you might use extensions like quaternions and tensors. In more advanced maths and theoretical physics you might encounter esoteric beasts, which I sometimes try to get my head around. My favourites are Cantor's transfinite numbers, the surreals, Grassmann anti-commuting numbers and p-adic numbers. As an aside, all natural numbers are provably interesting.

 

To try and drag myself back to a (more prosaic) point: numbers are also used to version software. This is, of course, a fundamental way of keeping track of what set of features and defects are bundled up in a release, what the support status is - and perhaps what the state of stability or security is. Historically, most software I encountered used a Semantic Versioning scheme, typically three or four groups of digits, like:

 

  • [Major].[Minor].[Patch]

 

We currently use this scheme for Discovery; at the time of writing, the latest on-premise version is 11.3.0.5 (in our scheme the third digit group is always zero; this is patch 5 on major version 11, minor version 3). This scheme has advantages:

 

  • Major version of 0 indicates pre-release software that is not production-ready
  • Major version jumps indicate a large feature set uplift, and/or large architectural changes
  • A Patch release should only contain bug fixes, not new (or retired) features.
  • Compatibility, effort of upgrading and risk can be estimated: Usually it's reasonable to suppose the upgrade from 9.X to 10.0 is going to be longer/harder/more significant than, say, 10.0 to 10.1.

 

But even in this scheme, things are not that simple. The popular PuTTY client has been going for 20 years and is still on version 0.X - and yet it's stable and commonly used in production. In relation to the last bullet point, I have had at least one customer who asked us not to add fixes in a minor release because that would take extra effort to get through change control; they wanted *exactly the same code* to be called a patch release so that fewer testing steps would be needed. The major downside to this scheme I can see is that it is not obvious from the version *when* the release was made, so a table like this is required.

 

Don Knuth's versioning schemes of TeX and Metafont are, let's say... idiosyncratic. They asymptotically approach π and Euler's number, respectively (currently 3.14159265 and 2.7182818). Cute as this might be, thankfully these are exceptional examples.

 

My first professional OS (I don't count writing university reports in GEOS) was SunOS 4 on SPARC (4.1.3 was a classic release). But when Sun moved from BSD-style SunOS 4 to SVR4-based SunOS 5, the latter was rebranded as Solaris 2.X, with the former series retroactively renamed as Solaris 1. But after Solaris 2.6, the "2.X" prefix was dropped, so thereafter we had Solaris 7, 8, 9, 10, 11... and then we started getting point releases again to the current 11.4. Under the covers, though, the SunOS reference is still visible:
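For example, on a Solaris 11 system (illustrative output):

uname -sr
SunOS 5.11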

Java versioning made a similar change; things made a jump after JDK 1.4 to "Java 5", but internally, the "1." prefix still exists for (say) Java 8:
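Illustrative output (the exact build varies):

java -version
java version "1.8.0_231"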

except when it's not. For example, this Java 11 installation:
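Illustrative output:

java -version
openjdk version "11.0.5" 2019-10-15

Note the leading "11", with no "1." prefix.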

Perhaps the most public change was after Windows 8: there was no Windows 9, and Windows 10 was treated as a different product. Its versioning adopted the "modern" trend of the Calendar Versioning scheme.


I can't help thinking that these marketing changes make things more complex than necessary.

 

Although I perceive this to be a "modern" trend, some systems have been using it for many years: notably, Ubuntu Linux's first version in Oct 2004 was "4.10". BMC has moved over to this scheme for Remedy and CMDB, jumping from version 9.1 to 19.05, as can be seen here. It is expected that Discovery will do something similar for the next on-premise release, although the last I heard no decision had been made as to exactly what this will be. The clear advantage is:

 

  • The release year and month should be obvious.

 

Again, reality does not always coincide with the ideal. For example, OpenWrt's 18.06 release was in July 2018. Perhaps you can forgive a one-month difference. But OpenWrt 19.07 is still not released at the time of writing (it was scheduled for Jan 2020). Windows 10 1809 only reached public release in November, and 1703 in April. Moreover, there are some disadvantages:

 

  • It's not clear from a date whether there is a major or minor change, and the associated benefits/risks/effort.

 

My understanding is that this is not supposed to be a problem once all software release cycles move closer to a continuous, "agile", model: many small releases, where the whole concept of a major release goes away. This is fine as a theoretical limiting case, but I am yet to be convinced it can always be achieved in practice. There are some large changes that just can't be broken down into a series of smaller ones.

 

I note that even BMC's version support documentation seems rather confused to me. It was updated recently (Dec 2019) to include the format:

 

  • YY.YY.RR
    • YY.YY= 4-digit year
    • RR=release

 

But what does this mean? Say, "CMDB 19.11" - only the two digits "19" are a year, so does that mean "11" is the minor release? If so, that means that large changes can only take place once a year (with the major version). Moreover, the wording actually asserts exactly one major architectural change per year. That can't be right. And there is no facility for service pack/patch numbers here. I have an open question with management from before Christmas; I'll let you know if I get clarification.

 

All this complexity makes it hard for Discovery to store consistent data too. For SoftwareInstances, we have two attributes:

 

  • version ("full version"): the internal version, with as much detail as possible
  • product_version ("Product Version"): a higher-level "marketing" version

 

An example would be for SQL Server:
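You can compare the two with a search along these lines (standard Discovery query language; the type string is illustrative - SQL Server 2017, for instance, has a product version of simply "2017" but an internal version of 14.0.x):

search SoftwareInstance where type = 'Microsoft SQL Server' show name, product_version, version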

How does Discovery record Windows 10? Not very well, IMHO: we set the version to 10 and don't record the YYMM version at all. I logged defect DRUD1-25673 back in 2019-03, and I am hopeful this will be fixed in the next on-premise release. Related is how this data is pushed to the CMDB. The BMC_OperatingSystem class has fields VersionNumber and MarketVersion, but we currently make no attempt to distinguish these in the mappings:

I think we should, like for Windows:

 

  • MarketVersion : Server 2019
  • VersionNumber : 10.0.17763

 

or for HP-UX:

 

  • MarketVersion : 11i v2
  • VersionNumber : 11.23

 

I have DRUD1-26198 open for this, but so far no sign of a fix schedule. As part of a Premier Support contract, I had to resort to writing a (simple) custom pattern for my customer, who originally could only see the "11i" part of their extensive HP-UX estate.

 

==

I dedicate this desultory post to the biggest number: 32768.


A nice power of 2, of course.

 

Perhaps a nice green (32768 is 0x008000 in hex - the HTML colour "green").

 

For me, $8000 is the start of the Commodore 64 cartridge memory space.

 

But if you have some Hewlett-Packard SAS solid state drives, this number is the time (in hours) that they will live. You can't make this up, but it seems that without a firmware patch, they will die irrecoverably after 32768 hours.

 

I don't seem to have access to any HP drives; does anyone have any that are reported by:

 

search DiskDrive show vendor, model processwith countUnique(0)


Hello Discovery Community,

 

We have recently released some new features in BMC Helix Discovery as a Service (DaaS), as well as in the December TKU, and I am excited to share some of the details with you. Some of these items came directly from you via your interactions in the community, whether through an idea or a general discussion.

 

Below is a brief outline of the new features in 19.11:

 

Available in BMC Helix Discovery as a Service (DaaS) Only

 

Integration with Credential Management Systems

We have added to DaaS the ability to integrate with the following external credential management systems. You can now configure the integration with the providers using the vault providers page in the BMC Helix Discovery Outpost.

 


ServiceNow CMDB Sync

With BMC Helix Discovery 19.11, you can now set up CMDB synchronization with ServiceNow natively within DaaS. The integration will sync your BMC Helix Discovery data to a ServiceNow CMDB with standard data mappings that can be filtered and extended. Note: this feature requires an additional BMC Helix Discovery license. If you are interested in learning more, please reach out to your account manager.

 


Available in the December 2019 TKU

 

Enhancements to Cloud Data Model

Based on feedback from multiple clients, we have introduced a change in how we model cloud data. We now separate the cloud data by the account that it belongs to. This makes it easy to see which cloud services belong to which team (cloud account).

 

If you discover more than one AWS Account, more than one Azure Subscription or more than one GCP Project, all the data from Cloud Region through to individual nodes within services will be clearly separated, where before it was intermingled.

 

As a result, the keys of all CloudRegion and CloudService nodes, and many contained nodes will change, even if you only discover a single account. If you synchronize to a CMDB, the identities of the corresponding CIs will also change.

 

More information can be found here.

 

Enhanced AWS Role-Switching

Based on feedback from clients who are scanning their cloud environments, we are introducing a new method for AWS credential management. You can now configure an AWS account that is given a list of AWS roles within the AWS console. You then need to configure only that single AWS credential in the vault, and Discovery will be able to discover cloud services for all roles that the AWS account has been given access to. This will streamline the setup of AWS credentials and associated scan ranges within Discovery.

 

Note on role-switching:

The new configuration to support role switching cannot be added automatically to existing AWS credentials, and consequently, any existing scheduled AWS scans using those will fail.  The workaround is to simply click *Edit* on the scheduled scan and then click *Apply*.  Using the *Edit/Apply* workaround enables you to continue scanning AWS without interruption.

 

More information can be found here.

 

 

New Offering - BMC Discovery for Data Center - Red Hat Edition

Back at the beginning of November, we announced the extension of the Full Support End Date for BMC Discovery v11.1 to September 15, 2020. In that email, I mentioned the future availability of a new edition of Discovery running on Red Hat Enterprise Linux 7. That edition is now available, with the same functionality as BMC Discovery v11.3. If you are interested in migration and pricing details, please contact your account manager.

 

We are excited to release these new features and offerings and I welcome your feedback as we continue to introduce new features to both DaaS and on-premise.  Be on the lookout for DaaS features showing up in future on-premise releases.

 

Greg DeaKyne

Lead Product Manager


Introduction

Traditionally, we have supported two types of Windows proxies:

  • Credential - Windows credentials are stored in the appliance and passed to the Cred Proxy during scanning as required
  • Active Directory - The proxy's service runs under an AD account, and is able to scan targets that trust that domain/account without any credentials being stored in the appliance

 

Windows scanning has been problematic for some, because of security concerns: in order to have really useful data, you have to assign Administrator permissions to the proxy, and this is considered too risky. Some customer mitigations have included:

  • Running a Credential proxy with the credentials managed by a credential manager (currently CyberArk, other integrations in progress)
  • Running Windows scanning infrequently, in a tightly controlled time window, where the account is otherwise disabled.

 

An alternative which we have not hitherto documented is to use group Managed Service Accounts (gMSAs, introduced in Windows Server 2008 R2). I would be interested to hear your thoughts on how you manage the security implications of Windows scanning; do you think gMSA use would help manage this problem?

 

It is expected that full support for gMSA will be added for proxies/outposts in the next major release, as part of DRUD1-27034.

 

Thanks to Roland Appleby for much of the material in this post.

 

Instructions

These were tested with Discovery 11.3.0.5 and Windows Server 2019. PowerShell commands should be run under Administrator or equivalent.

 

Obtain KDS root key for your domain

On the Domain Controller, run the following PS command:
Get-KdsRootKey

 

If this shows that you have a KDS root key, skip the next step.

 

Run the following PS command to create the root key:
Add-KdsRootKey -EffectiveImmediately

 

You will then need to wait 10 hours before continuing.
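As an aside: in a test lab only, a widely documented shortcut is to backdate the key's effective time so it is usable immediately - do not do this in a production domain:
Add-KdsRootKey -EffectiveTime ((Get-Date).AddHours(-10))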

 

Create a domain security group for the proxy host

On the Domain Controller, run the following PS command:
New-ADGroup "BMC Discovery Proxy" -GroupCategory Security -GroupScope Global -Path "DC=npgs,DC=bmc,DC=com"

 

(Modify the path as required for your domain, and here the Security Group Name has been chosen as "BMC Discovery Proxy")

 

Add your proxy host to this security group with this PS command:
Add-AdGroupMember -Identity "BMC Discovery Proxy" -Members PROXYSERVER$

 

where PROXYSERVER is the proxy host's name (the trailing $ denotes its computer account).

 

Create the gMSA

On a Domain Controller, run the following PS command:
New-ADServiceAccount -Name "bmc-disco-proxy" -DnsHostName "bmc-disco-proxy.bmc.com"  -PrincipalsAllowedToRetrieveManagedPassword "BMC Discovery Proxy"


(note that "BMC Discovery Proxy" here must match the Security Group Name created above)

 

Install the gMSA on the proxy host

Reboot the proxy host to ensure that it is up to date with respect to the group membership.

 

Run this PS command on the proxy host:
Install-AdServiceAccount "bmc-disco-proxy"

 

where the name of the gMSA given here must match the one created above. You should see the new gMSA:
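You can verify with the following PS commands (both from the standard ActiveDirectory module; Test-AdServiceAccount should return True):
Get-AdServiceAccount "bmc-disco-proxy"
Test-AdServiceAccount "bmc-disco-proxy"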

 

Add the gMSA to the local administrators group on the proxy host

Run this PS command on the proxy host:
Add-LocalGroupMember -Group "Administrators" -Member "npgs\bmc-disco-proxy$"

 

 

where "npgs" is the name of the AD domain. Alternatively, use the Windows UI tools.

 

Configure the Discovery Proxy to run as the gMSA account

These steps assume that you already have an Active Directory proxy installed. Stop the proxy service (if running) and change how the service logs on:

 

 

The account name should be of the form "npgs\bmc-disco-proxy$", where "npgs" is the AD domain. The password fields should be left blank. I have found that once set, this tab is greyed out, and I couldn't change the account without deleting and re-creating the proxy service.
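Alternatively, the logon account can be set from the command line with sc.exe, where "discoproxy" below is a placeholder for the actual name of your proxy service (the spaces after obj= and password= are required):
sc.exe config "discoproxy" obj= "npgs\bmc-disco-proxy$" password= ""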

 

Grant the gMSA account permissions to discover hosts in the domain

The gMSA account needs to have the appropriate permissions to allow the Discovery Proxy access to the hosts in the domain that it is scanning. This can be done by either adding the gMSA account to an appropriate Domain Administrators group, or by adding the gMSA account to the local Administrators group on each machine individually.

 

It should now be possible to scan Windows hosts in the domain once a Discovery appliance has been configured to use the proxy.


Customers frequently ask for help with custom patterns in the Community.

 

Here is some general information to help you get started.

 

See this YouTube video:  How to add or modify BMC discovered data and synchronize to CMDB? - YouTube

 

TPL - The Pattern Language.  Some users use the terms "TPL" and "pattern" interchangeably.

A module (contained inside a TPL file) contains one or more pattern or syncmapping statements.

 

 

There are 2 types of TPL customization for the BMC Discovery product:

  1) Discovery patterns typically add and/or modify Discovered information.

         There are many OOTB Discovery patterns (defined by the TKU).

         There may also be custom Discovery patterns.

         The keyword "pattern" is used to define a pattern.

 

  2) syncmappings define the behavior of the CMDB Sync process.

         There are many OOTB syncmappings (defined by the TKU).

         There may also be custom syncmappings.

         The keyword "syncmapping" is used to define a syncmapping.

 

 

Best Practices:

 

Should I edit the OOTB patterns and syncmappings?

NO!!

If you edit the OOTB pattern, the edits will be lost with the next TKU update.

 

How can I change the behavior since I should not edit the OOTB patterns and syncmappings?

1) To modify the behavior of Discovery patterns, there are 2 methods:

 

  • Preferred method: add an additional (custom) Discovery pattern

     Each discovery pattern has a trigger statement.  If the trigger succeeds, then the pattern body is executed.

                 Example of Discovery pattern:
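
                 A minimal sketch of such a pattern (the process name and SI details are illustrative, and the tpl version should match your Discovery release):

                 tpl 1.15 module Custom.ExampleSI;

                 pattern ExampleSI 1.0
                   """Create a SoftwareInstance when an (illustrative) process is seen."""
                   overview
                     tags custom, example;
                   end overview;

                   triggers
                     on process := DiscoveredProcess created, confirmed where
                         cmd matches unix_cmd 'myappd';
                   end triggers;

                   body
                     host := model.host(process);
                     si := model.SoftwareInstance(
                               key  := 'ExampleSI/%host.key%',
                               type := 'Example Application',
                               name := 'Example Application on %host.name%');
                     model.rel.HostedSoftware(Host := host, RunningSoftware := si);
                   end body;
                 end pattern;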

 

  • Discovery Override pattern
    Sometimes, an Override pattern is required to modify the OOTB Discovery pattern behavior.
    Only use this method when absolutely required.

         An Override pattern is only to be used when an Extension can not provide the desired functionality.

         An Override pattern redefines the entire OOTB pattern, with special additional syntax to Override the base pattern.

 

 

2) To modify the behavior of the CMDB syncmappings, you can add one of these 2 types of custom patterns:

 

  • Syncmapping Extension (also called "augment")

                   Syncmapping extensions can add new data to the CMDB, and/or modify data in the CMDB.

                   Example: if you wish to add the age_count information for a Host to the CMDB, use an Extension.

 

                   Features of Syncmapping Extensions:

      • Extensions can add new attributes or modify attribute values in the CMDB.
      • Extensions can add new relationships to the CMDB.

 

                   Limitations of Syncmapping Extensions:

      • Extensions can not delete attributes from the CMDB.
      • Extensions can not delete or modify relationships in the CMDB.

 

                   Examples of Syncmapping Extensions:
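
                   A minimal sketch of one (the module and attribute names are illustrative, and the import version must match the OOTB syncmapping version in your TKU):

                   tpl 1.15 module Custom.CMDB.Extension.Host_ComputerSystem;

                   from CMDB.Host_ComputerSystem import Host_ComputerSystem 2.0;

                   syncmapping Custom_Host_Augment 1.0
                     """Add an extra Host attribute to the synced BMC_ComputerSystem."""
                     overview
                       tags custom, CMDB, extension;
                     end overview;

                     mapping from Host_ComputerSystem.host as host
                     end mapping;

                     body
                       computersystem := Host_ComputerSystem.computersystem;
                       // age_count is the example attribute mentioned above
                       computersystem.ShortDescription := 'age_count: %host.age_count%';
                     end body;
                   end syncmapping;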

 

  • Syncmapping Override

                   Sometimes, a Syncmapping Override is required to modify the OOTB behavior because of limitations of Syncmapping Extensions.

                   Always use a Syncmapping Extension when possible.

                   An Override is only to be used when an Extension can not provide the desired functionality.

                   An Override redefines the entire OOTB syncmapping, with special additional syntax to Override the base pattern.

 

                   Example of CMDB Syncmapping Override pattern:

 

Writing a custom pattern is like writing a small software program.

Here are some best practice steps for writing/testing a pattern:

    1) Write the pattern on your laptop using a text editor.

         Or, try the experimental IDE:  Experimental development IDE for TPL

 

    Hint:  It is less confusing to always edit the pattern from your text editor or IDE instead of editing the pattern directly in the UI.

    Hint:  Add log.info statements while you are debugging your pattern.  Later, you can change those from log.info to log.debug.

    Hint:  Put an identifier (such as your name) in each log.info statement so that you can easily identify that it is your log statement.

    Hint:  It may be helpful to number each log statement so you can easily find it in the pattern.

            Example:

                    log.info("LISA 1: Pattern was triggered.  Here I am in the pattern body.  Host name=%host.name%");

                    log.info("LISA 2: Inside the 'for each fsmount' loop.  fsmount=%fsmount.mount%");

 

    2) Upload the pattern from the Manage->Knowledge page
          The Upload action checks for syntax errors in the pattern, and if there are no syntax errors, then it Activates the pattern.

    3) Fix syntax errors in your text editor or IDE, and then Upload again in the UI until there are no more syntax errors

    4) If you added new attributes (which are needed by your CMDB Syncmapping) to your CMDB,

        then restart the Discovery Services once after adding them:  Restart the Discovery Services
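
         On the appliance command line this is typically done with tw_service_control (check the linked documentation for your version):

              tw_service_control --stop
              tw_service_control --start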

    5) Test the pattern / Fix the pattern / Test / Fix until you are happy with the pattern

 

          2 ways to test Discovery patterns:

               A) Create a Manual Group and the "Run Pattern" feature:

                    Executing patterns manually - Documentation for BMC Discovery 11.3 - BMC Documentation

                   With "Run Pattern", you will see the log.info and log.debug statements on the UI page easily.

                   The "Run Pattern" tells you if any of the nodes in the Manual Group triggered your pattern.

 

                   If your pattern triggers on a SoftwareInstance, then you must have a SoftwareInstance in your Manual Group.

                   If your pattern triggers on a Host, then you must have a Host node in your Manual Group.

                   And so on.

 

               B) Run the Discovery of the Host/Device which should trigger your pattern.

                   Look for your log statements in this log file:  /usr/tideway/log/tw_svc_eca_patterns.log

 

               Make sure your Discovery pattern gets triggered

               The Discovery patterns have a "triggers" statement.  The trigger statement is very important to define correctly.

               If the trigger statement does not succeed, then the pattern body will not be executed.

               Example of a trigger statement:

                 triggers

                       on process := DiscoveredProcess created, confirmed where
                           cmd matches regex "(?i)CouchDB\S+\\w?erl\.exe" or cmd matches unix_cmd 'beam';

                 end triggers;

 

           4 ways to test CMDB Syncmappings:

               A) Pick a pertinent Device and choose Actions->CMDB Sync (from the Consolidator)

               B) With Continuous Sync running, Scan the Device (from the Scanner)

               C) Perform a Resync from the Consolidator (This is long-running.  Only do this when necessary).

               D) Pick a Host/Device and choose Actions->CMDB Sync Preview (from the Consolidator)

                    If your syncmapping syncs to a new class such as BMC_FileSystem, then check the preview visualization for the new class.

                    If your syncmapping only changes/adds attributes, you will not see any change in the preview visualization.

         

                 All of the above actions will log data to this log file:  tw_svc_cmdbsync_transformer.log

 

                 Actions A,B,C will change the data in the CMDB.

                 Action D will not change the data in the CMDB.  It is only a Preview.  But, it logs the messages to the log file.

       

                Hint:  If the CMDB Sync Preview does not work, then there is something very wrong with your pattern.

 

 

Resources to help with custom patterns.

1) The Discovery UI:

 

       The Discovery UI has some information about creating SoftwareInstance patterns magically through the UI:

        Manage->Knowledge->Creating Patterns

 

       At the top, there is specific information about modeling a SoftwareInstance in Discovery.

       If you wish to have a custom pattern to create an SI, be sure to read that section, as well as the documentation that it points to.

 

 

Also, in the Discovery UI are some sample templates:

      Manage->Knowledge->Creating Patterns

 

Sample SI pattern templates:

 

Sample "location" pattern templates:

 

 

Sample pattern template to create a SQL Integration Point:

 

Sample pattern template which adds addition SQL calls:

 

Sample Mainframe pattern templates:

 

Sample External Event pattern template:

 

Sample CMDB Syncmapping templates:

 

To utilize one of these pattern templates, perform the following steps:

A) Download the pattern, and save it to your laptop

B) Use an editor on your laptop to edit and save the pattern.

        Names surrounded by double dollar signs like $$pattern_name$$ should all be replaced with values suitable for the pattern.

C) Upload your pattern on the Manage->Knowledge page to see if there are syntax errors.

      If there are, edit and upload again until the compile errors are gone.

D) Test your Discovery pattern using a Manual Group and the "Run Pattern" feature:  Executing patterns manually - Documentation for BMC Discovery 11.3 - BMC Documentation

    With "Run Pattern", you will see the log.info and log.debug statements for your pattern.

    If you run Discovery without "Run Pattern", you will need to look for your log statements in this log file:  tw_svc_eca_patterns.log

E) To test your CMDB Syncmapping, you can add log.info statements into the pattern, upload the pattern, run CMDB Sync,

      and then check for the log statements in this log file:  tw_svc_cmdbsync_transformer.log

 

 

2) The Community

 

Visit the Discovery community link:  Discovery

Click on "Patterns" as seen below:

 

You will find sample patterns which are made freely available by other customers, and by the BMC Discovery support team and developers.

 

3) Look directly at the OOTB patterns that are found in the TKU

 

    You can look at the patterns in the UI.

    And/Or, you can unzip the TKU zip file, and review the abundance of patterns.

 

    To view the patterns in the UI, you can look on the Manage->Knowledge page.

 

     You will find the Syncmapping patterns here on the page:  (Under BMC Discovery Operation -> CMDB Sync):

 

 

4) Training

    The Advanced Discovery training course has information about Discovery patterns and custom Discovery patterns.

    To date, the course does not teach about the CMDB Syncmappings.

 

5) Customer Support can help with certain questions

       Support will not write custom patterns or custom syncmappings for you.  But, Support may be able to help with specific questions or problems.

       Support has access to some samples that may not be in the community.  Certain sample patterns are attached to internal KA's.

 

         See:   BMC's Customization Policy

 

6) BMC Consulting or BMC Professional Services


October TKU Released!

Posted by Greg DeaKyne Employee Oct 4, 2019

Hello Community!

 

This past Wednesday, if you are subscribed to TKU release announcements, you would have received an email from me announcing the latest TKU availability. As a reminder of the timing changes from this summer: the TKU and OSU releases are made available on EPD on the first Wednesday of the month. SaaS customers on BMC Helix Discovery will have the latest TKU applied to their Development environment on the first Wednesday of the month and their Production environment on the second Wednesday of the month.

 

We have a lot of exciting new content to announce, and the details can be found on the October 2019 TKU Release page. Highlights include several new patterns for software products, enhancements to existing software patterns, and general bug fixes. In this latest release, we also introduced 38 new network devices; details can be found on the TKU October 2019 Network Devices page. In addition, we have introduced 4 new cloud services across Azure and Google Cloud. To find out more about the cloud providers, and the services within those providers that we can discover, visit Supported Cloud Providers.

 

We look forward to your feedback on the content that we are delivering via the monthly TKU releases.  Drop a comment below or reach out to me directly.

 

Have a good weekend everyone!

 

Greg DeaKyne

Product Manager, BMC Discovery


Forward your log

Posted by Nick Smith Moderator Sep 5, 2019

Just a quick note on syslog forwarding from the Discovery appliance.

 

While the Discovery appliance (physical or virtual) is based on a fairly standard CentOS build, we are careful to control the packages and configurations to ensure the OS layer is reliable and predictable for the application. So although it is tempting for an experienced Linux administrator to configure things to their liking, this urge should be resisted: changes should be limited to those that are explicitly documented, to avoid problems in the future and potentially voiding support.

 

One often-requested configuration was to forward OS syslogs to a remote syslog collector. Since we hadn't officially described it in the docs, it wasn't officially supported. I am pleased to say we now have, here.

 

It's very simple to set up, and now, if your organisation's policies require or recommend it, you can do so while staying fully within the appliance support rules.


As part of Premier Support, I was recently on-site at a customer for a few days, doing some "mini consultancy" work, mainly looking at extending Network Device discovery. Here, I want to make some notes to highlight some defects/surprising behaviour, and some of the things I was able to help the customer with.

 

Standard Network Device discovery

 

Many customers deploy SNMP credentials to discover Network Devices, and are quite happy with the coverage of supported devices, and/or the turnaround of adding new ones in monthly TKU updates after submitting a new device capture. Typically, Discovery is used for basic inventory recognition (being synced to CMDB) and importantly the discovery of the connection between a switch and the Host nodes it is connected to. However, the customer I was working with wanted to dig deeper into the data...

 

Problems and Gaps Identified

 

No Linkage between interfaces and IPs

 

In contrast to Host nodes, the interfaces and IPs of a Network Device are not shown in a unified table. Instead they are displayed separately, and by default there is no connection in the UI or data model between an IP and the interface it is connected to. It turns out that if you turn on virtual interface discovery (see Managing network device virtual interface discovery), a side effect is that you do get a link from IP to interface and vice versa. I logged defect DRUD1-25944 for this.

 

Further, my customer wanted a more unified UI for the network interface table, like we provide for Hosts. DRUD1-272124 is logged for this. In the meantime, I was able to provide my own "hotfix" to the core code to get a just-acceptable display.

 

Incomplete documentation

 

We document how to enable virtual interfaces in Managing network device virtual interface discovery; however, IMHO this document is lacking in several ways. It only mentions how the setting controls virtual interface discovery. It doesn't mention interface-IP linkage as a side effect. Why does it have to be controlled on the command line, not a UI option? Why would you not want it on by default - are there any downsides? If yes, what are they? I created docs defect DRUD1-26743 to improve this.

 

Not all network interfaces discovered

 

By turning on virtual interface discovery, more interfaces are discovered (see above). However, core code maintains a whitelist of "interesting" interface types:

 

0   # unknown

6   # ethernet csmacd

7   # iso88023 csmacd

8   # iso88024TokenBus

9   # iso88025TokenRing

15  # fddi

62  # fastEther

69  # fastEtherFX

71  # ieee80211

117 # gigabitEthernet

54  # propMultiplexor

161 # IEEE 802.3ad Link Aggregate

 

and drops any that don't match this list. This list was added a long time ago and is no longer appropriate IMHO; this was logged as defect DRUD1-26655, planned for fix in an FF release, tentatively targeted for 2020-02. As part of Premier Support, I was able to provide the customer a temporary update to remove the filter until then.

 

Cisco firmware image file not discovered

 

A simple custom pattern was written to extract this from OID 1.3.6.1.4.1.9.2.1.73 and populate the Network Device node by calling the discovery.snmpGet() function. RFE DRDC1-13530 was logged to request this OOTB, and this Idea (feel free to vote on it) was raised at the request of Engineering.
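The shape of the pattern was roughly as follows - a sketch from memory rather than the exact customer pattern; the attribute name is arbitrary, and the discovery.snmpGet() result handling should be checked against the TPL documentation for your release:

tpl 1.15 module Custom.CiscoFirmwareImage;

pattern Custom_Cisco_Firmware_Image 1.0
  """Record the running firmware image file on Cisco network devices."""
  overview
    tags custom, networkdevice;
  end overview;

  triggers
    on device := NetworkDevice created, confirmed where vendor matches 'Cisco';
  end triggers;

  body
    // Cisco-specific OID for the running image file name
    image := discovery.snmpGet(device, '1.3.6.1.4.1.9.2.1.73');
    if image then
      device.firmware_image := image.value;
    end if;
  end body;
end pattern;
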

Interface statuses not discovered

 

A custom pattern was written to extract interface status from OID 1.3.6.1.2.1.2.2.1 using the discovery.snmpGetTable() function and populate two new attributes:

 

Chassis and cards are only in Directly Discovered Data

As part of core discovery, we create DiscoveredCard and DiscoveredChassis nodes, but these are not visible from the main Network Device page. Also - ultimately, information will need to be consumed in the CMDB, and it is not recommended to attempt to write a sync mapping directly from DDD. So, I wrote a custom pattern to copy the data from the DDD into a couple of lists of Detail nodes, for each type, and created links from the cards to their corresponding containing Chassis:
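The core of that pattern is a Detail node per card, linked back to the device - a fragment along these lines (the key scheme and type/name values are illustrative):

card_detail := model.Detail(
    key  := '%device.key%/card/%index%',
    type := 'Network Device Card',
    name := card_name);
model.rel.Detail(ElementWithDetail := device, Detail := card_detail);
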

This has been logged as an improvement, DRUD1-26654, with a tentative fix targeted around 2019-11.

 

DiscoveredCard nodes missing descriptions

 

While looking at the data for the above point it was found that most DiscoveredCard nodes have no description. We think there is more data available in the MIB than we are pulling; this was logged as improvement DRDC1-13628.

 

Protocol Data

 

My customer was interested in extracting specific entries for different network protocols that may be configured: BGP, OSPF, and the Cisco-specific EIGRP. It was a fairly simple matter to write a custom pattern to pull entries from the 3 SNMP tables and create 3 lists of Detail nodes corresponding to these entries.

 

Future Work

 

This additional data that is now in Discovery needs to be populated into the CMDB, so I shall need to write some custom sync mappings.


Hi,

 

The BMC Discovery documentation looks a little different today. We have applied a new presentation layer for the content. It provides:

  • A responsive design to handle mobile and desktop users.
  • A version picker on all pages, in a drop-down above the page tree. This replaces the original versions banner that we had for many years in BMC Discovery.
  • A table of contents on the right hand side of each page that stays visible when you scroll.
  • "Next page" and "previous page" navigation.
  • "Was this page helpful" buttons on every page.

 

One of my personal favorites is the improved look of tables.

 

Do please continue to comment on the documentation, like or dislike pages, and let us know how we can improve it.

 

Thanks, Duncan.


July TKU Released

Posted by Greg DeaKyne Employee Jul 3, 2019

Greetings Community!

 

This past Monday, if you are subscribed to TKU release announcements, you saw that July is the first month of our new cadence for TKU releases. As a reminder of the timing changes: the TKU and OSU releases will be made available on EPD on the first Wednesday of the month. SaaS customers on BMC Helix Discovery will have the latest TKU applied to their Development environment on the first Wednesday of the month and their Production environment on the second Wednesday of the month.

 

We have a lot of exciting new content to announce and the details can be found on the July 2019 TKU release page.  Highlights include several new patterns for software products, enhancements to existing software patterns, and general bug fixes.  In this latest TKU release, we have included 29 new network device definitions.

 

If you are discovering cloud resources, we have introduced a new cloud inventory capability with essential information about those cloud services, and in the July TKU we have started with roughly 15 new AWS products. In addition, we are still focused on providing deeper discovery patterns for cloud services, and we are now making that available for Amazon Elasticsearch and Amazon Elastic Container Service for Kubernetes (EKS).

 

The July TKU is certainly packed with a lot of good additions to the Discovery portfolio. We look forward to your feedback on what is advancing the impact of your Discovery data, and on what key content is missing that we can include in future TKU updates.

 

Greg DeaKyne

Product Manager, BMC Discovery


Greetings Community!  Greg here, Product Manager for Discovery (intro post from April).

 

We have recently released a new Distributed Storage model for Discovery via the May TKU (Release Notes). The Distributed Storage model represents a set of multiple physical systems whose storage capacity and computing power are grouped together to behave as a single storage system (cluster). We have this new model available across multiple storage systems from vendors such as EMC, NetApp, & Nimble. For more information on the Distributed Storage model, the new dedicated page on these technologies can be found here.

 


 

 

After you've had a chance to try out the May Storage TKU with some of these storage systems, we'd love to hear your feedback on how it's improving the discovery of your datacenter.  Feel free to post below and let us know your thoughts!


When importing root node keys from a large environment, tw_root_node_key_import may run for many hours before completing. If it is not run using 'screen', the session may time out, and the import may fail without error or notification. As a result, not all of the root node keys may be imported.

 

BMC now recommends running "screen" before running tw_root_node_key_import to prevent this session-timeout failure. See https://docs.bmc.com/docs/display/DISCO113/Using+screen.
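
For example (run the import exactly as you otherwise would, just inside screen):

screen
tw_root_node_key_import ...

If the session drops, log in again and run "screen -r" to reattach (Ctrl-a d detaches manually).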


I would like to take this opportunity to introduce myself to this very active and engaging community.  My name is Greg DeaKyne.  I was born and raised in Indiana and escaped the cold winters for sunny Charleston, South Carolina almost 8 years ago.  When I’m away from the keyboard, I’m staying busy running around town with my two daughters (4 & 6) between ballet practice, gymnastics, guitar practice, soccer practice, and hopefully a few trips to the beach.  Before kids, I used to spend my free time on the golf course, but my game has gotten pretty rusty over the past 6 years. 

 

I started using BMC software as a customer over 10 years ago, but I will be frank with you: I haven't had the opportunity to touch it in a few years. I led the team that rolled out ADDM, CMDB, BladeLogic, and Atrium Orchestrator. We used the toolset during a transformation of our IT teams as we integrated many acquisitions, consolidated datacenters, increased our security posture, and enabled front-line support via tools and automation.

 

I have spent the majority of the past 15 years leading multiple IT teams delivering many global SaaS offerings across multiple colo, managed service, and public cloud environments.  I was managing teams responsible for all aspects of the datacenter including compute, storage, database, operating system, application, and networking. 

 

I have seen the power that Discovery can provide any size organization and I look forward to working with you all (y’all for my Charleston neighbors) as we improve this product through one of the most active communities at BMC.

 

Today is the start of my third week at BMC, so you will see me interacting more in the coming weeks as I continue to learn about what we are currently working on and share with you what the future of BMC Helix Discovery will be.

 

Greg


The December OSU updates many packages, and upgrades the operating system to CentOS 7.6. Included in the upgrade to 7.6 is an update to the sudo package.

 

The change in sudo includes: “PAM account management modules are now run even when no password is required.” [See the Red Hat documentation for more information]

 

For some processes in BMC Discovery, the tideway user uses sudo to run certain system-level programs. Where user passwords expire, the changed processing of the PAM modules would have requested users to change the password, which in turn would have caused automated sudo usage to fail. This would be particularly problematic for customers that have strict expiry policies (STIG) on the appliance command-line users.

 

The update to sudo-1.8.23-3.el7 (and only sudo) has been excluded from the December OSU while we determine the full extent of the changes required.

 

Update: 8th April 2019. The April OSU will now contain any updates to sudo.

From the April 2019 OSU, Discovery will utilise the pam_localuser.so module in the PAM sudo stack against a specific list of users (pam_localuser).
