
Discovery / ADDM


I saw Tool in 2007 at Brixton Academy; great show. Somewhat less enjoyable (depending on your musical taste, I suppose) is having to install VMware Tools on VMware-deployed appliances. We document the procedure here.


So far, so good. Unfortunately, whenever a new kernel is installed (product version upgrade or OSU), VMware Tools needs its kernel modules recompiled. After you have rebooted, you will see a red baseline icon like this:



which you can click through, and observe the warning that it's not running:


However, you are not given any information as to why, and you may miss this completely if you are not in the habit of monitoring and correcting all major baseline events. So for the moment, you will just have to remember to re-run the VMware Tools install script after an OSU upgrade. Note: you can use the "-d" flag to accept all the defaults without being prompted.


For reference, we have DRUD1-22356 to try and make the user experience better in this regard.


[Note: edited, after discussion with Kerryn Wood]


I wish to draw your attention to a problem applying the 2018-01 OSU on an appliance earlier than 11.2: an upstream package naming problem leads to a failure and a non-working appliance *if* the appliance is subsequently upgraded to 11.2.X. This is not good.


To reiterate: the 2018-01 OSU will install fine on 11.1, and you might want to do that for the Meltdown/Spectre mitigations. But you should not attempt an upgrade to 11.2.X on an 11.1.X / 2018-01 OSU appliance. I expect we will issue a new OSU that you will need to install on 11.1.X before upgrading to 11.2.X in this situation. [UPDATE: The new OSU is now available on EPD and replaces the previous one]


It would be fine to take an 11.X appliance on a pre-2018-01 OSU, upgrade it to 11.2.X, and then apply the 2018-01 OSU to it.


While OSUs are normally very easy to apply (UI upload; wait a bit; reboot), this reminds us that no change is completely without risk, and updates should always be applied on UAT first, before considering a production upgrade.


For reference, this is tracked as defect DRUD1-22350.


I admit it: we don't get many issues around OpenVMS.


Although I can't say I have really used it, I get vaguely nostalgic for the dusty old MicroVax that sat next to me for a while at the University of Nottingham. And even earlier I think I remember seeing Jodrell Bank using them for telescope control. So in contrast to the usual periodic Apple, Microsoft or Linux (and now Intel) security vulnerabilities, this problem caught my eye.


If you work in a large organisation, you might still have a few of these machines refusing to be replaced, and/or serving very specialised uses. You might want to check that you are comfortable with the risks, and not assume they are completely impenetrable. Find them with something like:


  • search Host where os_type matches regex "OpenVMS"


Note we also have End-of-Life data for the OS up to 7.3-2 in the EDP module SupportDetail.OS.HP.OpenVMS. I am going to talk to the TKU team about getting this table updated for later versions; it appears that the current version, 8.4-2, is supported into 2021.


If you have any experiences with this OS, I would be interested to hear from you.


For a long time, the TKU Extended Data Pack has been able to mark software end-of-life, such as this for one of my ESX servers:



However, it may have gone unnoticed (I certainly had missed it, until a customer question prompted me to check) that the EDP now supports some hardware end-of-life data, for example:




2018-Jan EDP has some data for different kinds of hardware:


No doubt, this list will expand over time. If you have an EDP license, you might want to check how the current list covers your estate, and log important missing devices with Support as RFEs. For example, to find Hosts that have no EOL data, try this query:


search Host where not virtual and vendor and model and not nodecount(traverse ElementWithDetail:SupportDetail:HardwareDetail:SupportDetail) show vendor, model processwith countUnique(0, 0)


Hello Everyone,


I am pleased to announce that we have expanded our support for deployment of the BMC Discovery virtual appliance to include many popular virtualization and cloud platforms.  Please review the following documentation to get details on what platforms are supported and the steps involved in converting the OVF image to run on the platform of choice:


Supported virtualization platforms - BMC Discovery 11.2 - BMC Documentation


Your friendly product manager,




UPDATE 2/1/2018:


We have further updated the OSU packages, which contain fixes from Red Hat addressing the issues we previously reported with Xen- and AWS-based virtual guests.




This OS Update (OSU) contains fixes and mitigations for the Meltdown and Spectre vulnerabilities. Because BMC Discovery is a single-use, restricted appliance, the risk from these vulnerabilities is low, but to follow security best practices we recommend applying the update in most cases.

This update also contains the Red Hat fix for the issue, noted in the previous OSU (23 January 2018 on CentOS 6 and 22 January 2018 on RHEL 6), where a paravirtualized guest on Xen, including AWS and other cloud platforms, could be rendered unbootable.

For more information on this OSU, please reference the following documentation:

For Discovery appliances on v11.1 or earlier:  25 January 2018 on RHEL 6 - BMC Discovery OS upgrades - BMC Documentation

For Discovery appliances on v11.2:  27 January 2018 on CentOS 6 - BMC Discovery OS upgrades - BMC Documentation




Hello BMC Discovery Community,


We would like to make you aware that the underlying OS used by BMC Discovery is affected by the Spectre and Meltdown vulnerabilities. Red Hat have published a kernel update (version 2.6.32-696.18.7) to mitigate some aspects of these CPU vulnerabilities. We had planned to roll this into an extra OS update.


However, as we prepared this update we learnt that this kernel update had rendered some systems unbootable. This only applies to certain hypervisors and certain VM build types, but given the serious consequences we could no longer publish the update in its current form. We are now working with Red Hat in pursuit of an early resolution to the fault.


Although these CPU vulnerabilities are high profile at present, the risk to the BMC Discovery appliance is low. All variants of the vulnerabilities apply to locally-executing code gaining improper access to memory. As an appliance, the Discovery system should not be running any unapproved software, and perimeter security remains the main line of defense for the BMC Discovery appliance.


We will of course provide updated operating system packages when we believe it is safe to do so.


For more information on the vulnerabilities see the following pages:



CVE-2017-5754 - Red Hat Customer Portal



CVE-2017-5715 - Red Hat Customer Portal

CVE-2017-5753 - Red Hat Customer Portal




As many of you may be aware, in the November 2017 TKU we released the ability to discover OpenStack environments, using the cloud scan capabilities introduced in BMC Discovery 11.2.


OpenStack provides open source software used to create public or private clouds, whether hosted by a cloud provider or in your own data center. In this initial release, BMC Discovery of OpenStack enables you to discover the Compute (nova), Block Storage (cinder), Load Balancer (neutron and octavia), Orchestration (heat), and Shared File Systems (manila) services running in OpenStack. Please visit the OpenStack documentation page for more information.


Discovering OpenStack - BMC Discovery 11.2 - BMC Documentation


We are very excited to be able to offer these types of content updates via TKU, and we look forward to hearing how these new cloud resources are working in your environment, as well as to any feedback on where you think we should go next with cloud discovery!




Your neighborhood product manager,




Dnsmasq is "a lightweight DNS, TFTP, PXE, router advertisement and DHCP server. It is intended to provide coupled DNS and DHCP service to a LAN". A number of vulnerabilities have been found that allow remote code execution and denial of service attacks.


Discovery is not vulnerable. Only one of the vulnerabilities, CVE-2017-14491, applies to RHEL/CentOS 6 and exists in the dnsmasq, dnsmasq-debuginfo and dnsmasq-utils packages - which we do not install on the appliance.


Much better wordsmiths than I have covered this on many sites, for example arstechnica and theregister, and it is covered quite comprehensively by Red Hat.


CVE-2017-14491 (CWE-122): RHEL6 is affected, but we do not ship the dnsmasq packages.

CVE-2017-14492 (CWE-122): RHEL6 not affected.

CVE-2017-14493 (CWE-121): RHEL6 not affected.

CVE-2017-14494 (CWE-125): RHEL6 not affected.

CVE-2017-14495 (CWE-400): RHEL6 not affected.

CVE-2017-14496 (CWE-190->CWE-125): RHEL6 not affected.

CVE-2017-13704 (CWE-190): RHEL6 not affected.


I think we should take the opportunity to come up with a codename too - in the comments section below.


We've had a number of questions about the permissions that BMC Discovery requires to fully discover Storage resources in Microsoft Azure, so I thought I would try and explain the situation.


Like every other system we can discover, BMC Discovery only needs read access to resources in Microsoft Azure. Helpfully, Microsoft provides a Reader role which gives read-only access to all resources. Excellent, that's exactly what we need ...


However, it turns out that some Azure Storage values we would like to collect aren't directly available via the Azure Resource Manager API, specifically the size and encryption (D@RE) parameters for virtual hard disks (VHDs). For VMs which use VHDs (not Managed Disks), we have to query the Blob properties directly from Azure Storage. To do this, we must properly sign the request, and that requires a Storage Key.


The Azure Resource Manager API provides a method to retrieve the keys, but this isn't covered by the Reader role, so Discovery needs the Microsoft.Storage/storageAccounts/listKeys/action permission. These keys provide Full Access to the storage resources so, in theory, these could be used to modify or even delete Storage Blobs. However, like any other credential, Discovery does not expose these keys to the user: there is no way to retrieve and use them in a pattern for example. Internally, Discovery retrieves the key when certain Azure Storage read operations are needed.
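To illustrate why the key matters: Azure Storage authenticates direct Blob requests with an HMAC-SHA256 signature derived from the Storage Key. The sketch below shows the general shape of Shared Key Lite signing; the account name, key, and blob path are invented, and the real canonicalization rules have more cases than shown here.

```python
# Minimal sketch of signing an Azure Storage request with a Storage Key
# (Shared Key Lite scheme, simplified). All names/values are made up.
import base64
import hashlib
import hmac

def sign_request(account, base64_key, string_to_sign):
    """Return the Authorization header value for a Shared Key Lite request."""
    key = base64.b64decode(base64_key)
    digest = hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    signature = base64.b64encode(digest).decode("utf-8")
    return f"SharedKeyLite {account}:{signature}"

# Example: signing a HEAD request for a VHD blob's properties.
string_to_sign = "\n".join([
    "HEAD",   # verb
    "",       # Content-MD5
    "",       # Content-Type
    "",       # Date (empty when the x-ms-date header is used instead)
    "x-ms-date:Mon, 01 Jan 2018 00:00:00 GMT",
    "/myaccount/vhds/myvm-disk0.vhd",   # canonicalized resource
])
auth = sign_request("myaccount",
                    base64.b64encode(b"fake-storage-key").decode(),
                    string_to_sign)
```

Anyone holding the key can produce such a signature for any operation on the account, which is why Discovery treats it like any other credential and never exposes it.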


Microsoft's approach to this problem is to use Shared Access Signatures (SAS). However, for this to work for Discovery, we would need a SAS token for each Storage Account, which is a large configuration burden.


So, to sum up:


We need the Storage Keys to get size and encryption (D@RE) values for VHDs used by VMs. If you don't grant this permission (i.e. just use the Reader role), then those values will be missing. Discovery will still perform all other Azure scanning. You ONLY need to grant the permission if size and D@RE for VHDs are important to you.


If you are using Managed Disks, then BMC Discovery does not need the Microsoft.Storage/storageAccounts/listKeys/action permission - the Managed Disk API allows us to read all the values we want.


I hope this helps clarify the situation.


Hello Everyone,


We have introduced a patch to BMC Discovery 11.2 to address an upgrade issue affecting customers who have integrated CyberArk with Discovery. This issue has been corrected in patch 1, and we encourage all CyberArk users to use this patch when upgrading. Full details can be found here.


Upgrading to Version 11.2 patch 1

It is very important that, for clustered deployments, all the members of the cluster are functioning prior to upgrade.


We strongly recommend that you upgrade to version 11.2 patch 1 if you are on any previous version of BMC Discovery (10.0.x, 10.1.x, 10.2.x, 11.0.x, 11.1.x, or 11.2). For details about the upgrade procedure, see the Upgrading BMC Discovery page.


Downloading BMC Discovery Product Releases

All of the above product releases are available from the BMC Electronic Product Distribution (EPD) site, where they replace their respective current GA files.


Thanks and enjoy!


The BMC Discovery Team


A vulnerability (CVE-2017-9798) dubbed OptionsBleed has [finally] been patched upstream and is starting to gain some press.


The vulnerability can be triggered by badly configured .htaccess files, combined with the way in which affected Apache versions handle memory. The .htaccess files are recursively consumed through the served directory tree. If any of the .htaccess files names a request method in a Limit directive that is either superseded globally or doesn't exist, then Apache exposes itself to a use-after-free vulnerability.


The memory handling bug means that the area of memory in question is freed-up but still used by Apache and potentially then allocated to another part of the running Apache instance. An OPTIONS request can then leak this data.
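As an illustration (not a BMC tool), a few lines of Python can flag the kind of Limit misconfiguration described above. The method name and .htaccess content are hypothetical, and this only catches unknown methods, not globally superseded ones:

```python
# Hypothetical helper that flags <Limit> sections in .htaccess content
# naming methods outside the standard HTTP set -- the misconfiguration
# that can trigger OptionsBleed (CVE-2017-9798) on unpatched httpd.
import re

STANDARD_METHODS = {"GET", "HEAD", "POST", "PUT", "DELETE",
                    "CONNECT", "OPTIONS", "TRACE", "PATCH"}

def risky_limit_methods(htaccess_text):
    """Return methods named in <Limit ...> blocks that are not standard."""
    risky = []
    for match in re.finditer(r"<Limit\s+([^>]+)>", htaccess_text, re.IGNORECASE):
        for method in match.group(1).split():
            if method.upper() not in STANDARD_METHODS:
                risky.append(method)
    return risky

htaccess = """
<Limit GET MYMETHOD>
Require all denied
</Limit>
"""
print(risky_limit_methods(htaccess))  # → ['MYMETHOD']
```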


Often, it's being likened to the OpenSSL Heartbleed vulnerability because it allows information to be leaked, this time from httpd (Apache Web Server). There are detailed explanations of the issue on nakedsecurity and arstechnica UK. Red Hat have given the vulnerability a Moderate rating.


A target would have to be unlucky with timing, and on a busy server, for this to leak anything sensitive - but any leakage is bad.


So - Discovery - affected? No, not by default. We do not ship, or configure, any .htaccess files on an Appliance. That said, some vulnerability assessment tools will show the Appliance as vulnerable because we are running a vulnerable version of httpd. The updated version of httpd will be shipped in the next OSU after the patch is released.


Hello Everyone,


Today we launched our multi-cloud discovery capabilities which will be available in BMC Discovery 11.2.  I would like to thank everyone who provided input and feedback to shape the enhancements we introduced into this release.  This is truly a milestone release where we make it possible for our customers to have the visibility they require as they transform to support multi-cloud environments.


A Discovery webinar on What's New with BMC Discovery 11.2 is scheduled for September 21, 2017, where we will go into detail on the features briefly explained in the next section. Below is a link to register for the webinar; more details on the What's New webinar can be found at the end of this post.


Register now


Below is a brief guide on some of the major features implemented in version 11.2:


BMC Discovery 11.2 - What's New

Cloud Discovery

BMC Discovery provides a single pane of glass to present dependencies and assets that span public cloud, private cloud, and traditional data center environments. We provide distinct and beautiful visualizations that show the cloud context of an application, consistent regardless of the terminology differences between cloud vendors. We worked with both AWS and Azure to design this solution and are now able to represent hybrid application deployments that can span multiple cloud providers as well as infrastructure on-premise. We perform this by using a combination of traditional IP-based agentless scanning and consuming the cloud vendors' rich web APIs. The result is an application model that can represent dependencies which span traditional infrastructure and cloud services.

Cloud Context with Background Shading

It is now possible to see nodes that are in the same Cloud Region or Location, by enabling the color shading feature. In application models, you can use shading to show which nodes were saved by the user, and which were added by BMC Discovery.


Continuous Cloud Service Content Updates

With version 11.2 we will now be able to provide capabilities to discover additional cloud providers and cloud services in our monthly TKU cadence. This is key, as the pace at which new cloud services are being introduced by the major cloud providers is as often as weekly to monthly. While we are initially releasing with AWS and Azure, additional cloud providers will come in future TKUs.


Shared Node Awareness

When Discovery updates application models at scan time, it takes shared software nodes into account, so models do not balloon with unwanted nodes.  The system automatically identifies software nodes that are likely to be shared by multiple applications, for example shared database servers and message queues. Visualizations do not follow relationships out of these nodes, so the view is simpler and less cluttered.
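A toy sketch of the idea (not BMC's actual algorithm): a traversal includes a shared node but does not follow relationships out of it, so the many applications hanging off a shared database stay out of view. Node names are hypothetical.

```python
# Sketch of traversal that stops at shared nodes, so visualizations
# don't balloon: a shared node is shown but its edges are not expanded.
from collections import deque

def visible_nodes(graph, start, shared):
    """graph: node -> list of neighbours; shared: set of shared node names."""
    seen = {start}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node in shared and node != start:
            continue  # include the shared node itself, but don't follow its edges
        for nb in graph.get(node, []):
            if nb not in seen:
                seen.add(nb)
                queue.append(nb)
    return seen

graph = {
    "app1": ["web1", "shared_db"],
    "shared_db": ["app2", "app3"],   # other apps hang off the shared DB
    "web1": [],
}
print(sorted(visible_nodes(graph, "app1", {"shared_db"})))
# → ['app1', 'shared_db', 'web1'] -- app2/app3 are not pulled in
```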


API Access via TPL

The flexibility customers have come to know and love with TPL is now more powerful than ever! Extend the cloud API discovery we perform by invoking additional functions in TPL to gather cloud context that may not be represented in our out-of-the-box patterns.

We have also introduced capabilities to invoke REST APIs via TPL, which will further enhance our ability to provide additional content to our customers and to target devices that have richer information available in their respective REST APIs.


Model changes

The following major changes have been made to the BMC Discovery model:

VirtualMachine nodes

BMC Discovery 11.2 changes the way that virtual machines are modeled. In previous releases, VMs were modeled using a SoftwareInstance node with a vm_type attribute. They are now modeled using a VirtualMachine node, which makes it easier to find VMs and relate them to their containers and Hosts.

Database nodes

Logical databases are now stored in dedicated Database nodes rather than the DatabaseDetail nodes previously used. Dedicated Database nodes simplify the separation of databases from other database details. DatabaseDetail nodes are still used for other information about databases, for example schemas and tablespaces.


Other Enhancements

The following additional enhancements are introduced in version 11.2:

  • CMDB Sync to new Cloud Instance class (requires CMDB 9.1 SP3)
  • Use the CMDB REST API for CMDB Sync (requires CMDB 9.1 SP3)
  • Share and tag favorite queries
  • And more…


More Information

BMC Discovery version 11.2 files are now available for download at the BMC Electronic Product Distribution (EPD) site.

Read more in the Release Notes


Discovery Webinar:  What's New with BMC Discovery 11.2

Please join us as we share our knowledge on the newest features to come to BMC Discovery 11.2.


For this session we will focus on the following topics:


  • Overview of v11.2 Themes
  • Deep Dive into What's New with v11.2
  • Q&A


Register now


Date and Time:

Thursday, September 21, 2017 10:00 am, Central Time Zone (Chicago, GMT-06:00)

Thursday, September 21, 2017 11:00 am, Eastern Time Zone (New York, GMT-05:00)

Thursday, September 21, 2017 8:00 am, Pacific Time Zone (San Francisco, GMT-08:00)

Thursday, September 21, 2017 4:00 pm, Europe Time (Paris, GMT+01:00)

Thursday, September 21, 2017 3:00 pm, UK Time (London, GMT)


Duration: 1 hour


If you cannot attend at that time, a link to watch the recording will be sent after the event.


This course provides information on the key concepts and core functionality needed to Deploy, Administer & Troubleshoot BMC Discovery version 11. The course also covers new features and enhancements made to BMC Discovery 11.



  • Technical Personnel responsible for Deploying & Administering BMC Discovery 11
  • Configuration & Project Managers


  • Familiarity with the Linux Command Line
  • Knowledge of Simple Regular Expressions and Scripting Languages such as Perl, Python or Bash.


Contact your Education Sales Representative for further information or to register.




1.   BMC Discovery Overview
2.   User Administration & Security
3.   Scanning Basics
4.   Discovery Credentials
5.   Taxonomy & Data Model
6.   Discovery Overview
7.   Discovery Scripts
8.   Storage Discovery
9.   Load Balancer Discovery
10. Discovery Investigation & Troubleshooting
11. Patterns Overview
12. Visualizations
13. Dashboards Overview
14. Query Builder
15. Query Language Overview
16. BMC Atrium CMDB Synchronization
17. Appliance Baseline
18. Clustering
19. Consolidation
20. Appliance Backup & Restore
21. Using the BMC Discovery CLI
22. Appliance Support & Upgrade
23. Compliance
24. Disk Configuration
25. Storage Terms
26. Load Balancing Overview


Detailed information on when and where these courses will be held can be found using this link, which is maintained and updated by the BMC Education Services Team.


Upcoming Sessions are listed below (as at Friday 11th August, 2017):


Australia, New Zealand & Asia:


Start Date | End Date | Time Zone | Language

28-Aug-2017 | 01-Sep-2017 | Malaysia Time | English
04-Sep-2017 | 08-Sep-2017 | Malaysia Time | English
18-Sep-2017 | 22-Sep-2017 | Indian Standard Time | English
20-Nov-2017 | 24-Nov-2017 | Malaysia Time | English
04-Dec-2017 | 08-Dec-2017 | Malaysia Time | English
18-Dec-2017 | 22-Dec-2017 | Indian Standard Time | English


North America:



Start Date | End Date | Time Zone | Language

28-Aug-2017 | 01-Sep-2017 | Central Standard Time | English
18-Sep-2017 | 22-Sep-2017 | Central Standard Time | English
02-Oct-2017 | 06-Oct-2017 | Central Standard Time | English
30-Oct-2017 | 03-Nov-2017 | Central Standard Time | English
11-Dec-2017 | 15-Dec-2017 | Central Standard Time | English
18-Dec-2017 | 22-Dec-2017 | Central Standard Time | English




Start Date | End Date | Time Zone | Language

07-Aug-2017 | 11-Aug-2017 | Greenwich Mean Time |
25-Sep-2017 | 29-Sep-2017 | Central European Time | German
02-Oct-2017 | 06-Oct-2017 | Greenwich Mean Time | English
23-Oct-2017 | 27-Oct-2017 | Greenwich Mean Time | English
20-Nov-2017 | 24-Nov-2017 | Greenwich Mean Time | English
18-Dec-2017 | 22-Dec-2017 | Greenwich Mean Time | English

LDAP/Active Directory Server SAN


I have noticed that, while many customers use integration with Active Directory, many use LDAP rather than LDAPS. One customer in particular said LDAPS didn't work. I noticed an internal BMC system was also only LDAP-enabled against our corporate AD. But I knew *some* customer environments were able to use LDAPS against AD.


I had set up my own LDAP server some time ago, based on 389 Directory Server, but had never got round to adding LDAPS for security. In the end it was pretty uneventful: I just created some suitable certificates with my own CA, and loaded them up through the 389 DS Console UI.


Then, the normal configuration of LDAPS in the Discovery UI can be done - again, rather uninterestingly straightforward. So, time to check out AD: I pointed to our AD Prod alias and...




just like the customer who had told me it didn't work. A simple bit of checking revealed that the DNS entry I was using for AD was an alias that resolved to multiple servers - not surprising in a large enterprise. Each of the AD servers supplied a certificate with a CN based on its own DNS A record, not the CNAME of the cluster of machines. There were no Subject Alternative Names, so the client (Discovery Appliance) failed to validate the AD servers' certificates.


Solution: Ensure the certificates have SANs for the hostname you are connecting with.
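To see why validation failed, here is a much-simplified sketch of the hostname check a TLS client performs (wildcard matching and other RFC 6125 details are omitted, and the hostnames are invented examples):

```python
# Simplified hostname verification: when SAN DNS names are present they
# take precedence and the CN is ignored; otherwise clients fall back to
# the CN. Real clients also handle wildcards, IP SANs, etc.
def hostname_matches(hostname, cert_cn, san_dns_names):
    names = san_dns_names if san_dns_names else [cert_cn]
    return hostname.lower() in {n.lower() for n in names}

# Each AD server presented a cert with a CN from its own A record, no SANs:
print(hostname_matches("ad-prod.example.com", "ad-server-01.example.com", []))
# → False: we connected via the cluster alias, but the cert only names the server

# Reissued with a SAN covering the cluster alias, validation succeeds:
print(hostname_matches("ad-prod.example.com", "ad-server-01.example.com",
                       ["ad-server-01.example.com", "ad-prod.example.com"]))
# → True
```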




Now, in the old days, ldapsearch was an invaluable tool for testing/checking LDAP configuration and structure from the appliance. In later versions it is less so, as the UI has been improved. However, for completeness I thought I would check it.


Non-secure LDAP worked as expected:


$ ldapsearch -W -H ldap:// -D "cn=Directory Manager" -s sub -b "ou=Security,dc=tideway,dc=com"  "(description=Staff Members)"


But when I tried to run it over a secure connection, I got a TLS error:


$ ldapsearch -W -H ldaps:// -D "cn=Directory Manager" -s sub -b "ou=Security,dc=tideway,dc=com"  "(description=Staff Members)"

Enter LDAP Password:

ldap_sasl_bind(SIMPLE): Can't contact LDAP server (-1)


because ldapsearch does not know about the LDAPS server's CA certificates. The client configuration file for ldapsearch and the other OpenLDAP client tools is /etc/openldap/ldap.conf, which by default has a pointer to a directory containing CA certificates:


TLS_CACERTDIR   /etc/openldap/certs


I thought initially we could just copy/link the Discovery LDAP CA file (/usr/tideway/etc/ldap_cacert.pem) here, but it's not that simple: this directory doesn't hold simple PEM files, it contains a mini database of NSS certificates. What you have to do is add those PEM-format CA certs into the NSS database, like this:


$ certutil -d /etc/openldap/certs -A -n "LDAPS CA Certificates" -t "C,," -a -i /usr/tideway/etc/ldap_cacert.pem


You can check it loaded thus:


$ certutil -d /etc/openldap/certs -L

Certificate Nickname                                         Trust Attributes



LDAPS CA Certificates                                        C,,


If you need to, you can delete with the -D flag, referencing the same Nickname:


certutil -d /etc/openldap/certs -n "LDAPS CA Certificates" -D


Anyway, now ldapsearch returns results over a secure TLS connection:


$ ldapsearch -W -H ldaps:// -D "cn=Directory Manager" -s sub -b "ou=Security,dc=tideway,dc=com"  "(description=Staff Members)"

Enter LDAP Password:

# extended LDIF


# LDAPv3

# base <ou=Security,dc=tideway,dc=com> with scope subtree

# filter: (description=Staff Members)

# requesting: ALL



# Staff, Groups, Security,

dn: cn=Staff,ou=Groups,ou=Security,dc=tideway,dc=com

description: Staff Members

objectClass: top

objectClass: groupofuniquenames

cn: Staff

uniqueMember: uid=ENoether,ou=People,ou=Security,dc=tideway,dc=com

uniqueMember: uid=ALovelace,ou=People,ou=Security,dc=tideway,dc=com

uniqueMember: uid=JWatson,ou=People,ou=Security,dc=tideway,dc=com

uniqueMember: uid=HPoincare,ou=People,ou=Security,dc=tideway,dc=com


# search result

search: 2

result: 0 Success


# numResponses: 2

# numEntries: 1


An interesting problem came up recently that I thought I would share. A customer had a cluster of version 11 appliances, configured to use SSO and Secure LDAP. As far as they were aware, there were no problems.


They were using the default appliance disk layout, and wanted to move the datastore to a new, larger disk on each appliance, which had been previously provisioned on each appliance VM. All good. So, they started the Disk Configuration UI, which started... shutting down the services (as expected) and then doing nothing more, for several hours (not as expected). NBG.


The immediate priority was, of course, to get the appliances back and usable, which consisted of:

  •     Running "tw_disk_utils --fix-interrupted" on CLI
  •     Restarting the tideway services.


Subsequent investigation showed that while the UI was working for the LDAP-based administrator user that had been used on most appliances, it was NOT working on the last machine to be added to the cluster; CLI authentication failed there too. Importantly, that machine had been provisioned after the other machines had been configured for LDAP.


This sequence of events had triggered defect DRUD1-18597, whereby the LDAP CA bundle is not distributed to a newly added cluster member. This meant that although the local system user was working fine, when the LDAP administrator tried to initiate a disk operation, the coordinator got an error from that machine (because LDAPS authentication could not be made), but it simply retried, ad infinitum.


A simple workaround exists:

  •     Copy the file (/usr/tideway/etc/ldap_cacert.pem) from another appliance
  •     Restart the tideway service.


This should be fixed in 11.2.
