
Hello Everyone,


I am pleased to announce that we have expanded our support for deployment of the BMC Discovery virtual appliance to include many popular virtualization and cloud platforms.  Please review the following documentation to get details on what platforms are supported and the steps involved in converting the OVF image to run on the platform of choice:


Supported virtualization platforms - BMC Discovery 11.2 - BMC Documentation


Your friendly product manager,




UPDATE 2/1/2018:


We have further updated the OSU packages to include fixes from Red Hat that address the issues we previously reported with Xen- and AWS-based virtual guests.




This OS Update (OSU) contains fixes and mitigations for the Meltdown and Spectre vulnerabilities. Because BMC Discovery is a single-use, restricted appliance, the risk these vulnerabilities pose to it is low, but to follow security best practices we recommend applying the update in most cases.

This update also contains the Red Hat fix for the issue, noted in the previous OSU (23 January 2018 on CentOS 6 and 22 January 2018 on RHEL 6), where a paravirtualized guest on Xen, including AWS and other cloud platforms, could be rendered unbootable.

For more information on this OSU, please reference the following documentation:

For Discovery appliances on v11.1 or earlier:  25 January 2018 on RHEL 6 - BMC Discovery OS upgrades - BMC Documentation

For Discovery appliances on v11.2:  27 January 2018 on CentOS 6 - BMC Discovery OS upgrades - BMC Documentation




Hello BMC Discovery Community,


We would like to make you aware that the underlying OS used by BMC Discovery is affected by the Spectre and Meltdown vulnerabilities. Red Hat have published a kernel update (version 2.6.32-696.18.7) to mitigate some aspects of these CPU vulnerabilities. We had planned to roll this into an extra OS update.
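If you want to check whether an appliance is already running a kernel at or above the version mentioned above, a rough sketch is below. This assumes a RHEL/CentOS 6 style kernel version string and is not an official BMC check:

```shell
# Compare the running kernel with the version carrying Red Hat's mitigations
# (2.6.32-696.18.7, from the note above)
fixed="2.6.32-696.18.7"
# e.g. 2.6.32-696.18.7.el6.x86_64 -> 2.6.32-696.18.7
running=$(uname -r | sed 's/\.el6.*//')
if printf '%s\n' "$fixed" "$running" | sort -V | head -n1 | grep -qx "$fixed"; then
    echo "kernel ${running}: at or above ${fixed}"
else
    echo "kernel ${running}: below ${fixed} - consider the OSU"
fi
```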


However, as we prepared this update we learnt that this kernel update had rendered some systems unbootable. This only applies to certain hypervisors and certain VM build types, but given the serious consequences we could no longer publish the update in its current form. We are now working with Red Hat in pursuit of an early resolution to the fault.


Although these CPU vulnerabilities are high profile at present, the risk to the BMC Discovery appliance is low. All variants of the vulnerabilities apply to locally-executing code gaining improper access to memory. As an appliance, the Discovery system should not be running any unapproved software, and perimeter security remains the main line of defense for the BMC Discovery appliance.


We will of course provide updated operating system packages when we believe it is safe to do so.


For more information on the vulnerabilities see the following pages:



CVE-2017-5754 - Red Hat Customer Portal



CVE-2017-5715 - Red Hat Customer Portal

CVE-2017-5753 - Red Hat Customer Portal




As many of you may be aware, the November 2017 TKU added the ability to discover OpenStack environments using the cloud scan capabilities introduced in BMC Discovery 11.2.


OpenStack provides open source cloud software used to create public or private clouds. OpenStack enables you to run virtualized computing platforms as public clouds, as private clouds hosted by a cloud provider, or in your own data center. In this initial release, BMC Discovery of OpenStack enables you to discover the Compute (nova), Block storage (cinder), Load balancer (neutron and octavia), Orchestration (heat), and Shared file system (manila) services running in OpenStack. Please visit the OpenStack documentation page for more information.


Discovering OpenStack - BMC Discovery 11.2 - BMC Documentation


We are very excited to offer these types of content updates via TKU. We look forward to hearing how these new cloud resources are working in your environment, and to any feedback on where you think we should go next with cloud discovery!




Your neighborhood product manager,




Dnsmasq is "a lightweight DNS, TFTP, PXE, router advertisement and DHCP server. It is intended to provide coupled DNS and DHCP service to a LAN". A number of vulnerabilities have been found that allow remote code execution and denial of service attacks.


Discovery is not vulnerable. Only one of the vulnerabilities, CVE-2017-14491, applies to RHEL/CentOS 6, and it exists in the dnsmasq, dnsmasq-debuginfo and dnsmasq-utils packages - which we do not install on the appliance.
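A quick way to confirm this on an appliance is to query the RPM database for the packages. A sketch; on a system without rpm the command simply falls through to the message:

```shell
# Confirm the dnsmasq packages are absent (run on the appliance itself)
rpm -q dnsmasq dnsmasq-utils 2>/dev/null || echo "dnsmasq packages not installed"
```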


Much better wordsmiths than I have covered this on many sites, for example arstechnica and theregister, and it is covered quite comprehensively by Red Hat.


CVE-2017-14491 (CWE-122): RHEL 6 is affected, but we do not ship the dnsmasq packages.

CVE-2017-14492 (CWE-122): RHEL 6 not affected.

CVE-2017-14493 (CWE-121): RHEL 6 not affected.

CVE-2017-14494 (CWE-125): RHEL 6 not affected.

CVE-2017-14495 (CWE-400): RHEL 6 not affected.

CVE-2017-14496 (CWE-190->CWE-125): RHEL 6 not affected.

CVE-2017-13704 (CWE-190): RHEL 6 not affected.


I think we should take the opportunity to come up with a codename too - in the comments section below.


We've had a number of questions about the permissions that BMC Discovery requires to fully discover Storage resources in Microsoft Azure, so I thought I would try and explain the situation.


Like every other system we can discover, BMC Discovery only needs read access to resources in Microsoft Azure. Helpfully, Microsoft provides a Reader role which gives read only access to all resources. Excellent, that's exactly what we need ...


However, it turns out that some Azure Storage values we would like to collect aren't directly available via the Azure Resource Manager API - specifically the size and encryption (D@RE) parameters for virtual hard disks (VHDs). For VMs which use VHDs (not Managed Disks) we have to query the Blob properties directly from Azure Storage. To do this, we must properly sign the request, and that requires a Storage Key.


The Azure Resource Manager API provides a method to retrieve the keys, but this isn't covered by the Reader role, so Discovery needs the Microsoft.Storage/storageAccounts/listKeys/action permission. These keys provide Full Access to the storage resources so, in theory, these could be used to modify or even delete Storage Blobs. However, like any other credential, Discovery does not expose these keys to the user: there is no way to retrieve and use them in a pattern for example. Internally, Discovery retrieves the key when certain Azure Storage read operations are needed.
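If you prefer not to hand out a broader built-in role, one option worth considering is a custom Azure role that combines Reader's */read with only the listKeys action. This is a sketch, not an official BMC recommendation; the role name is made up and the subscription ID is a placeholder:

```json
{
  "Name": "Discovery Reader With Storage Keys (example)",
  "IsCustom": true,
  "Description": "Read all resources, plus list storage account keys for VHD size/D@RE collection",
  "Actions": [
    "*/read",
    "Microsoft.Storage/storageAccounts/listKeys/action"
  ],
  "NotActions": [],
  "AssignableScopes": [
    "/subscriptions/00000000-0000-0000-0000-000000000000"
  ]
}
```

A definition like this can be registered with `az role definition create --role-definition @role.json` and then assigned to the service principal Discovery uses.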


Microsoft's approach to this problem is to use Shared Access Signatures (SAS). However, for this to work for Discovery, we would need a SAS token for each Storage Account, which is a large configuration burden.


So, to sum up:


We need the Storage Keys to get size and encryption (D@RE) values for VHDs used by VMs. If you don't grant this permission (i.e. just use the Reader role), then those values will be missing, but Discovery will still perform all other Azure scanning. You ONLY need to grant the permission if size and D@RE for VHDs are important to you.


If you are using Managed Disks, then BMC Discovery does not need the Microsoft.Storage/storageAccounts/listKeys/action permission - the Managed Disk API allows us to read all the values we want.


I hope this helps clarify the situation.


Hello Everyone,


We have introduced a patch for BMC Discovery 11.2 to address an upgrade issue affecting customers who have integrated CyberArk with Discovery. This issue is corrected in patch 1, and we encourage all CyberArk users to use this patch for upgrade. Full details can be found here.


Upgrading to Version 11.2 patch 1

It is very important that, for clustered deployments, all the members of the cluster are functioning prior to upgrade.


We strongly recommend that you upgrade to version 11.2 patch 1 if you are on any previous versions of BMC Discovery 10.0.x, 10.1.x, 10.2.x, 11.0.x, 11.1.x, or 11.2. For details about the upgrade procedure, see the Upgrading BMC Discovery page.


Downloading BMC Discovery Product Releases

All of the above product releases are available from the BMC Electronic Product Distribution (EPD) site, where they replace their respective current GA files.


Thanks and enjoy!


The BMC Discovery Team


A vulnerability (CVE-2017-9798) dubbed OptionsBleed has [finally] been patched upstream and is starting to gain some press.


The vulnerability is triggered by badly configured .htaccess files combined with the way affected Apache versions handle memory. The .htaccess files are consumed recursively through the served directory tree. If any .htaccess file defines a request method in a <Limit> directive that is either superseded globally or doesn't exist, then Apache exposes itself to a use-after-free vulnerability.


The memory handling bug means that the area of memory in question is freed-up but still used by Apache and potentially then allocated to another part of the running Apache instance. An OPTIONS request can then leak this data.
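For illustration only, an .htaccess of the kind described above might look like this (MYMETHOD is a deliberately unregistered method; this is not something to deploy on an unpatched server):

```apacheconf
# "MYMETHOD" is not a registered HTTP method, so on unpatched httpd this
# kind of <Limit> can corrupt the method list that is later leaked
# via the Allow: header of an OPTIONS response
<Limit MYMETHOD>
    Require valid-user
</Limit>
```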


It is often likened to the OpenSSL HeartBleed vulnerability because it allows information to be leaked, this time from httpd (Apache Web Server). There are detailed explanations of the issue on nakedsecurity and arstechnica UK. Red Hat have given the vulnerability a Moderate rating.


A target would have to be unlucky with timing, and on a busy server, for this to leak anything sensitive - but any leakage is bad.


So - Discovery - affected? No, not by default. We do not ship, or configure, any .htaccess files on an Appliance. That said, some vulnerability assessment tools will show the Appliance as vulnerable because we are running a vulnerable version of httpd. The updated version of httpd will be shipped in the next OSU after the patch is released.


Hello Everyone,


Today we launched our multi-cloud discovery capabilities which will be available in BMC Discovery 11.2.  I would like to thank everyone who provided input and feedback to shape the enhancements we introduced into this release.  This is truly a milestone release where we make it possible for our customers to have the visibility they require as they transform to support multi-cloud environments.


A webinar on What's New with BMC Discovery 11.2 is scheduled for September 21, 2017, where we will go into detail on the features briefly explained in the next section. Below is a link to register for the webinar; more details can be found at the end of this post.


Register now


Below is a brief guide on some of the major features implemented in version 11.2:


BMC Discovery 11.2 - What's New

Cloud Discovery

BMC Discovery provides a single pane of glass to present dependencies and assets that span public cloud, private cloud, and traditional data center environments.  We provide distinct and beautiful visualizations that show the cloud context of an application consistently, regardless of the terminology differences between cloud vendors. We worked with both AWS and Azure to design this solution and are now able to represent hybrid application deployments that span multiple cloud providers as well as on-premise infrastructure.  We do this by combining traditional IP-based agentless scanning with the cloud vendors' rich web APIs.  The result is an application model that can represent dependencies spanning traditional infrastructure and cloud services.

Cloud Context with Background Shading

It is now possible to see nodes that are in the same Cloud Region or Location, by enabling the color shading feature. In application models, you can use shading to show which nodes were saved by the user, and which were added by BMC Discovery.


Continuous Cloud Service Content Updates

With version 11.2 we are now able to deliver discovery of additional cloud providers and cloud services through our monthly TKU cadence.  This is key, as the major cloud providers introduce new cloud services as often as weekly to monthly.  While we are initially releasing with AWS and Azure, additional cloud providers will come in future TKUs.


Shared Node Awareness

When Discovery updates application models at scan time, it takes shared software nodes into account, so models do not balloon with unwanted nodes.  The system automatically identifies software nodes that are likely to be shared by multiple applications, for example shared database servers and message queues. Visualizations do not follow relationships out of these nodes, so the view is simpler and less cluttered.


API Access via TPL

The flexibility customers have come to know and love with TPL is now more powerful than ever!  You can extend the cloud API discovery we perform by invoking additional functions in TPL, gathering cloud context that may not be represented in our out-of-the-box patterns.

We have also introduced the ability to invoke REST APIs from TPL, which will further enhance our ability to deliver additional content and to target devices that expose richer information through their respective REST APIs.


Model changes

The following major changes have been made to the BMC Discovery model:

VirtualMachine nodes

BMC Discovery 11.2 changes the way that virtual machines are modeled. In previous releases, VMs were modeled using a SoftwareInstance node with a vm_type attribute. They are now modeled using a VirtualMachine node, which makes it easier to find VMs and relate them to their containers and Hosts.

Database nodes

Logical databases are now stored in dedicated Database nodes rather than the DatabaseDetail nodes previously used. Dedicated Database nodes simplify the separation of databases from other database details. DatabaseDetail nodes are still used for other information about databases, for example schemas and tablespaces.
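If you have saved reports or queries keyed on the old modeling, the new node kinds can be searched directly. A minimal sketch; only the name attribute is assumed here, so check the taxonomy for the full attribute list:

```
search VirtualMachine show name
search Database show name
```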


Other Enhancements

The following additional enhancements are introduced in version 11.2:

  • CMDB Sync to new Cloud Instance class (requires CMDB 9.1 SP3)
  • Use the CMDB REST API for CMDB Sync (requires CMDB 9.1 SP3)
  • Share and tag favorite queries
  • And more…


More Information

BMC Discovery version 11.2 files are now available for download at the BMC Electronic Product Distribution (EPD) site.

Read more in the Release Notes


Discovery Webinar:  What's New with BMC Discovery 11.2

Please join us as we share our knowledge on the newest features coming to BMC Discovery 11.2


For this session we will focus on the following topics:


  • Overview of v11.2 Themes
  • Deep Dive into What's New with v11.2
  • Q&A


Register now


Date and Time:

Thursday, September 21, 2017 10:00 am, Central Time Zone (Chicago, GMT-06:00)

Thursday, September 21, 2017 11:00 am, Eastern Time Zone (New York, GMT-05:00)

Thursday, September 21, 2017 8:00 am, Pacific Time Zone (San Francisco, GMT-08:00)

Thursday, September 21, 2017 4:00 pm, Europe Time (Paris, GMT+01:00)

Thursday, September 21, 2017 3:00 pm, UK Time (London, GMT)


Duration: 1 hour


If you cannot attend at that time, a link to watch the recording will be sent after the event.


This course provides information on the key concepts and core functionality needed to deploy, administer, and troubleshoot BMC Discovery version 11. The course also covers new features and enhancements made to BMC Discovery 11.



  • Technical Personnel responsible for Deploying & Administering BMC Discovery 11
  • Configuration & Project Managers


  • Familiarity with the Linux Command Line
  • Knowledge of Simple Regular Expressions and Scripting Languages such as Perl, Python or Bash.


Contact your Education Sales Representative for further information or to register.




1.  BMC Discovery Overview
2.  User Administration & Security
3.  Scanning Basics
4.  Discovery Credentials
5.  Taxonomy & Data Model
6.  Discovery Overview
7.  Discovery Scripts
8.  Storage Discovery
9.  Load Balancer Discovery
10. Discovery Investigation & Troubleshooting
11. Patterns Overview
12. Visualizations
13. Dashboards Overview
14. Query Builder
15. Query Language Overview
16. BMC Atrium CMDB Synchronization
17. Appliance Baseline
18. Clustering
19. Consolidation
20. Appliance Backup & Restore
21. Using the BMC Discovery CLI
22. Appliance Support & Upgrade
23. Compliance
24. Disk Configuration
25. Storage Terms
26. Load Balancing Overview


Detailed information on when and where these courses will be held can be found using this link, which is maintained and updated by the BMC Education Services Team.


Upcoming Sessions are listed below (as at Friday 11th August, 2017):


Australia, New Zealand & Asia:


Start Date    End Date      Time Zone              Language
28-Aug-2017   01-Sep-2017   Malaysia Time          English
04-Sep-2017   08-Sep-2017   Malaysia Time          English
18-Sep-2017   22-Sep-2017   Indian Standard Time   English
20-Nov-2017   24-Nov-2017   Malaysia Time          English
04-Dec-2017   08-Dec-2017   Malaysia Time          English
18-Dec-2017   22-Dec-2017   Indian Standard Time   English


North America:



Start Date    End Date      Time Zone               Language
28-Aug-2017   01-Sep-2017   Central Standard Time   English
18-Sep-2017   22-Sep-2017   Central Standard Time   English
02-Oct-2017   06-Oct-2017   Central Standard Time   English
30-Oct-2017   03-Nov-2017   Central Standard Time   English
11-Dec-2017   15-Dec-2017   Central Standard Time   English
18-Dec-2017   22-Dec-2017   Central Standard Time   English




Europe:


Start Date    End Date      Time Zone               Language
07-Aug-2017   11-Aug-2017   Greenwich Mean Time
25-Sep-2017   29-Sep-2017   Central European Time   German
02-Oct-2017   06-Oct-2017   Greenwich Mean Time     English
23-Oct-2017   27-Oct-2017   Greenwich Mean Time     English
20-Nov-2017   24-Nov-2017   Greenwich Mean Time     English
18-Dec-2017   22-Dec-2017   Greenwich Mean Time     English

LDAP/Active Directory Server SAN


I have noticed that while many customers integrate with Active Directory, many use LDAP rather than LDAPS. One customer in particular said it didn't work. I noticed an internal BMC system was only LDAP enabled, and not connected to our corporate AD. But I knew *some* customers' environments were able to use LDAPS against AD.


I had set up my own LDAP server some time ago, based on 389 Directory Server, but had never got round to adding LDAPS for security. In the end it was pretty uneventful: I just created some suitable certificates with my own CA, and loaded them up through the 389 DS Console UI.


Then, the normal configuration of LDAPS in the Discovery UI can be done - again, rather uninterestingly straightforward. So, time to check out AD: I pointed to our AD Prod alias and...




just like the customer who had told me it didn't work. A simple bit of checking revealed that the DNS entry I was using for AD was an alias that resolved to multiple servers - not surprising in a large enterprise. Each of the AD servers supplied a certificate with a CN based on its own DNS A record, not the CNAME of the cluster of machines. There were no Subject Alternative Names, so the client (the Discovery appliance) failed to validate the AD servers' certificates.


Solution: Ensure the certificates have SANs for the hostname you are connecting with.
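You can reproduce and verify this locally. The sketch below creates a throwaway certificate whose SAN covers both the server's own A record and the alias clients connect through, then prints the SAN. Hostnames are illustrative, and -addext needs OpenSSL 1.1.1 or later:

```shell
# Create a short-lived self-signed certificate with SANs for both the
# server's own name and the cluster alias (example hostnames)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -keyout ad.key -out ad.crt \
    -subj "/CN=ad01.example.com" \
    -addext "subjectAltName=DNS:ad01.example.com,DNS:ad.example.com"

# Confirm the SAN entries are present
openssl x509 -in ad.crt -noout -ext subjectAltName
```

The same `openssl x509 -noout -ext subjectAltName` inspection works on any certificate an AD server hands you, which is an easy way to spot the missing-SAN problem described above.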




Now in the old days, ldapsearch was an invaluable tool in testing/checking LDAP configuration and structure from the appliance. In later versions, it is less so, as the UI has been improved. However, for completeness I thought I would check it.


Non-secure LDAP worked as expected:


$ ldapsearch -W -H ldap:// -D "cn=Directory Manager" -s sub -b "ou=Security,dc=tideway,dc=com"  "(description=Staff Members)"


But when I tried to run it over a secure connection, I got a TLS error:


$ ldapsearch -W -H ldaps:// -D "cn=Directory Manager" -s sub -b "ou=Security,dc=tideway,dc=com"  "(description=Staff Members)"

Enter LDAP Password:

ldap_sasl_bind(SIMPLE): Can't contact LDAP server (-1)


because ldapsearch does not know about the LDAPS server's CA certificates. The client configuration file for ldapsearch and the other OpenLDAP client tools is /etc/openldap/ldap.conf, and by default it has a pointer to a directory containing CA certificates:


TLS_CACERTDIR   /etc/openldap/certs


I initially thought we could just copy or link the Discovery LDAP CA file (/usr/tideway/etc/ldap_cacert.pem) here, but it's not that simple: this directory doesn't hold plain PEM files, it contains an NSS certificate database. What you have to do is add the PEM-format CA certs into the NSS database, like this:


$ certutil -d /etc/openldap/certs -A -n "LDAPS CA Certificates" -t "C,," -a -i /usr/tideway/etc/ldap_cacert.pem


You can check it loaded thus:


$ certutil -d /etc/openldap/certs -L

Certificate Nickname                                         Trust Attributes



LDAPS CA Certificates                                        C,,


If you need to, you can delete with the -D flag, referencing the same Nickname:


certutil -d /etc/openldap/certs -n "LDAPS CA Certificates" -D


Anyway, now ldapsearch returns results over a secure TLS connection:


$ ldapsearch -W -H ldaps:// -D "cn=Directory Manager" -s sub -b "ou=Security,dc=tideway,dc=com"  "(description=Staff Members)"

Enter LDAP Password:

# extended LDIF


# LDAPv3

# base <ou=Security,dc=tideway,dc=com> with scope subtree

# filter: (description=Staff Members)

# requesting: ALL



# Staff, Groups, Security,

dn: cn=Staff,ou=Groups,ou=Security,dc=tideway,dc=com

description: Staff Members

objectClass: top

objectClass: groupofuniquenames

cn: Staff

uniqueMember: uid=ENoether,ou=People,ou=Security,dc=tideway,dc=com

uniqueMember: uid=ALovelace,ou=People,ou=Security,dc=tideway,dc=com

uniqueMember: uid=JWatson,ou=People,ou=Security,dc=tideway,dc=com

uniqueMember: uid=HPoincare,ou=People,ou=Security,dc=tideway,dc=com


# search result

search: 2

result: 0 Success


# numResponses: 2

# numEntries: 1


An interesting problem came up recently that I thought I would share. A customer had a cluster of version 11 appliances, configured to use SSO and Secure LDAP. As far as they were aware, there were no problems.


They were using the default appliance disk layout, and wanted to move the datastore to a new, larger disk on each appliance, which had been previously provisioned on each appliance VM. All good. So, they started the Disk Configuration UI, which started... shutting down the services (as expected) and then doing nothing more, for several hours (not as expected). NBG.


The immediate priority was, of course, to get the appliances back and usable, which consisted of:

  • Running "tw_disk_utils --fix-interrupted" on the CLI
  • Restarting the tideway services.


Subsequent investigation showed that while the UI was working for the LDAP-based administrator user on most appliances, it was NOT working on the last machine added to the cluster; CLI authentication failed there too. Importantly, that machine had been provisioned after the other machines had been configured for LDAP.


This sequence of events had triggered defect DRUD1-18597, whereby the LDAP CA bundle is not distributed to a newly added member. This meant that although the local system user was working fine, when the LDAP administrator tried to initiate a disk operation, the coordinator got an error from that machine (because LDAPS authentication could not be made) but it simply retried, ad infinitum.


A simple workaround exists:

  • Copy the file (/usr/tideway/etc/ldap_cacert.pem) from another appliance
  • Restart the tideway service.


This should be fixed in 11.2.


You may have noticed in the 11.1 Enhancements page a note about vCenter appliances now being discovered as Host nodes.

So, if you scan a vCenter you will get something like this:



So, not a vast amount of information, but it's certainly there. Note that, unusually, a Host node is allowed even though network interfaces/MACs are not discovered. But what's going on with the name?


It turns out that up to and including 11.1.05, no attempt is made to extract a hostname via the VMware API. As you can see, a hostname is constructed from the discovered IP address. There is a change (DRUD1-19304) planned for 11.2, whereby the setting "VirtualCenter.InstanceName" is queried over the vCenter API.


You can check your vCenter instance by looking up the VirtualCenter.InstanceName advanced setting.

If you see a hostname there, then 11.2 should be able to capture it. If the vCenter API exposes just an IP address, there is nothing Discovery can do about it. An unfortunate characteristic of the vCenter installation is that this InstanceName has to be chosen at install time, and cannot be changed thereafter. See this VMware KB article.


If you have also seen this in your environment, and it's causing you problems, I invite you to let me know.


Just when you had got to grips with the Windows ransomware vulnerability WannaCry, comes another big one, dubbed SambaCry, a serious vulnerability in the open source Samba package which implements SMB/CIFS protocols. Headlines:


  • CVE-2017-7494
  • Remote code execution as root
  • Affects versions from 3.5.0 (released 2010)
  • Patched in core code at:
    • 4.6.4
    • 4.5.10
    • 4.4.14
  • Patched by Red Hat streams - see here


I would not expect any Internet-exposed machines to have this open, but Shodan indicates there are many thousands that do. So while you may not be exposing it directly, it may just be a matter of time before this exploit is weaponised to exploit indirect channels, get inside intranets, and propagate.


Note that from a Discovery appliance perspective, we don't run a Samba server, so the appliance stack is not vulnerable.


Chrome 58 has recently dropped into Stable, and it happens to be my browser of choice on Windows, Android and Fedora. A change has come in how a certificate's Common Name is being interpreted.


Historically, the CN part of the Subject field was used to specify the DNS address of the server you were connecting to, such as:


Subject: C=GB, ST=England, L=London, O=Smithnet, OU=Smithnet IT,


The x509 v3 extension provided a Subject Alternative Name field, which could be used to specify several different DNS entries, such as:


X509v3 Subject Alternative Name:,,


But if you didn't need alternatives, you didn't need to use this field. Except, apparently (I was not previously aware), this behaviour has been deprecated for some time and we were not supposed to be using the CN/Subject field: the SAN field was to be used even if only one entry was required.


Chrome is now enforcing this: so if you have an otherwise perfectly good certificate, that only has CN but no SAN, you will get a NET::ERR_CERT_COMMON_NAME_INVALID error:




You can of course use the Advanced link to add an exception.


Now, Discovery's Certificate Signing Request creation does not currently have any mechanism to support SANs. I am hoping for a new feature in the next major product release, but of course I can't guarantee that. Regardless, the CA that signs your CSRs should be able to add the appropriate single SAN entry even if the CSR doesn't contain any SAN data - but if it doesn't, you now know why you get the above error.
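If your process allows generating the key pair and CSR off the appliance, openssl can produce a CSR that already carries a SAN. A sketch only; the hostname is illustrative, and -addext needs OpenSSL 1.1.1 or later:

```shell
# Generate a key and a CSR that includes a Subject Alternative Name
openssl req -new -newkey rsa:2048 -nodes \
    -keyout appliance.key -out appliance.csr \
    -subj "/C=GB/O=Smithnet/CN=discovery.example.com" \
    -addext "subjectAltName=DNS:discovery.example.com"

# Confirm the SAN made it into the request
openssl req -in appliance.csr -noout -text | grep -A1 "Subject Alternative Name"
```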


Several versions ago we separated the detection of RAM into two Host attributes:


  • ram: The amount of RAM (MB) installed on the host.
  • logical_ram: The amount of RAM (MB) available to the OS.


The "ram" would typically be the actual physical sum of all the RAM devices on a physical machine, or the RAM allocated to a VM in (say) ESX. The "logical_ram" is what the OS sees and can use - usually a little less, once you remove the overhead for (say) BIOS, video memory and other reservations. That's all well and good.


I have been looking at some customer data (a large estate of > 35k OSIs) and we found some apparent anomalies. We simply reported on the difference:


search Host where ram and logical_ram show name, ram - logical_ram as 'Difference'


and sorting on the difference. Most had a small positive value as expected, but we also found a few:


Very Large Positive Value


By large, I mean many tens of GB or more. These seemed to be physical machines with plenty of RAM, but the OSes installed on them were not able to make use of all of it. We confirmed this for Windows and Red Hat.


So, Discovery was reporting correctly - and you could use a report like this to find installations where you are wasting RAM.


Negative Value


While you may be familiar with negative resistance, temperature or mass - surely not RAM. How could logical_ram be larger than ram? There were not many examples, but a few Windows and Linux VMs on ESX. Discovery correctly reported what the respective OSes reported (via WMI or dmidecode). So far, we don't know the root cause, although my current guess is something in the ESX layer.
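To pull out just these anomalous hosts directly, the earlier report can be inverted; a sketch in the same query language:

```
search Host where ram and logical_ram and logical_ram > ram show name, ram, logical_ram
```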


Have you seen examples like this in your estate? Could you share? I would really like to see if there are other examples out there.
