
Discovery / ADDM


Dnsmasq is "a lightweight DNS, TFTP, PXE, router advertisement and DHCP server. It is intended to provide coupled DNS and DHCP service to a LAN". A number of vulnerabilities have been found that allow remote code execution and denial of service attacks.

 

Discovery is not vulnerable. Only one of the vulnerabilities, CVE-2017-14491, applies to RHEL/CentOS 6 and exists in the dnsmasq, dnsmasq-debuginfo and dnsmasq-utils packages - which we do not install on the appliance.

 

Much better wordsmiths than I have covered this on many sites, for example Ars Technica and The Register, and it has been covered quite comprehensively by Red Hat.

 

CVE-2017-14491 (CWE-122): RHEL6 is affected, but we do not ship the dnsmasq packages.

CVE-2017-14492 (CWE-122): RHEL6 not affected.

CVE-2017-14493 (CWE-121): RHEL6 not affected.

CVE-2017-14494 (CWE-125): RHEL6 not affected.

CVE-2017-14495 (CWE-400): RHEL6 not affected.

CVE-2017-14496 (CWE-190->CWE-125): RHEL6 not affected.

CVE-2017-13704 (CWE-190): RHEL6 not affected.

 

I think we should take the opportunity to come up with a codename too - in the comments section below.


We've had a number of questions about the permissions that BMC Discovery requires to fully discover Storage resources in Microsoft Azure, so I thought I would try and explain the situation.

 

Like every other system we can discover, BMC Discovery only needs read access to resources in Microsoft Azure. Helpfully, Microsoft provides a Reader role which gives read only access to all resources. Excellent, that's exactly what we need ...

 

However, it turns out that some Azure Storage values we would like to collect aren't directly available via the Azure Resource Manager API, specifically the size and encryption (D@RE) parameters for virtual hard disks (VHDs). For VMs which use VHDs (not Managed Disks) we have to query the Blob properties directly from Azure Storage. To do this, we must properly sign the request, and that requires a Storage Key.

 

The Azure Resource Manager API provides a method to retrieve the keys, but this isn't covered by the Reader role, so Discovery needs the Microsoft.Storage/storageAccounts/listKeys/action permission. These keys provide Full Access to the storage resources so, in theory, these could be used to modify or even delete Storage Blobs. However, like any other credential, Discovery does not expose these keys to the user: there is no way to retrieve and use them in a pattern for example. Internally, Discovery retrieves the key when certain Azure Storage read operations are needed.
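If you prefer not to widen a built-in role, one option is a custom role that is just Reader plus this one action. A minimal sketch (the role name, description, and subscription ID are placeholders; check the Azure documentation for the exact assignable-scopes format your tenant needs):

```json
{
  "Name": "Discovery Reader",
  "Description": "Reader plus storage account key listing, for BMC Discovery scans",
  "Actions": [
    "*/read",
    "Microsoft.Storage/storageAccounts/listKeys/action"
  ],
  "NotActions": [],
  "AssignableScopes": [
    "/subscriptions/00000000-0000-0000-0000-000000000000"
  ]
}
```

A role definition like this can then be created with the Azure CLI (`az role definition create --role-definition @role.json`) and assigned to the Discovery service principal in place of the built-in Reader role.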

 

Microsoft's approach to this problem is to use Shared Access Signatures (SAS). However, for this to work for Discovery, we would need a SAS token for each Storage Account, which is a large configuration burden.

 

So, to sum up:

 

We need the Storage Keys to get size and encryption (D@RE) values for VHDs used by VMs. If you don't grant this permission (i.e. just use the Reader role), then those values will be missing. Discovery will still perform all other Azure scanning. You ONLY need to grant the permission if size and D@RE for VHDs are important to you.

 

If you are using Managed Disks, then BMC Discovery does not need the Microsoft.Storage/storageAccounts/listKeys/action permission - the Managed Disk API allows us to read all the values we want.

 

I hope this helps clarify the situation.


Hello Everyone,

 

We have introduced a patch to BMC Discovery 11.2 to address an upgrade issue affecting customers who have integrated CyberArk with Discovery. This issue has been corrected in patch 1 and we encourage all CyberArk users to use this patch for upgrade. Full details can be found here.

 

Upgrading to Version 11.2 patch 1

It is very important that, for clustered deployments, all the members of the cluster are functioning prior to upgrade.

 

We strongly recommend that you upgrade to version 11.2 patch 1 if you are on any previous version of BMC Discovery: 10.0.x, 10.1.x, 10.2.x, 11.0.x, 11.1.x, or 11.2. For details about the upgrade procedure, see the Upgrading BMC Discovery page.

 

Downloading BMC Discovery Product Releases

All of the above product releases are available from the BMC Electronic Product Distribution (EPD) site, where they replace their respective current GA files.

 

Thanks and enjoy!

 

The BMC Discovery Team


A vulnerability (CVE-2017-9798) dubbed OptionsBleed has [finally] been patched upstream and is starting to gain some press.

 

The vulnerability is triggered by badly configured .htaccess files, combined with the way affected Apache versions handle memory. The .htaccess files are recursively consumed through the served directory tree. If any of the .htaccess files defines a request method in a <Limit> directive that is either superseded globally or doesn't exist, then Apache exposes itself to a use-after-free vulnerability.
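For illustration, a contrived .htaccess fragment of the problematic kind - it limits a method name that is not a registered HTTP method. This is a sketch to show the shape of the misconfiguration, not something to deploy:

```apache
# .htaccess - a <Limit> section naming a request method that doesn't exist
<Limit ABCXYZ>
    Require all denied
</Limit>
```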

 

The memory handling bug means that the area of memory in question is freed-up but still used by Apache and potentially then allocated to another part of the running Apache instance. An OPTIONS request can then leak this data.

 

It is often likened to the OpenSSL Heartbleed vulnerability because it, too, allows information to leak, this time from httpd (Apache Web Server). There are detailed explanations of the issue on nakedsecurity and arstechnica UK. Red Hat have given the vulnerability a Moderate rating.

 

An attacker will have to be lucky with timing, and on a busy server, for this to leak anything sensitive - but any leakage is bad.

 

So - Discovery - affected? No, not by default. We do not ship, or configure, any .htaccess files on an Appliance. That said, some vulnerability assessment tools will show the Appliance as vulnerable because we are running a vulnerable version of httpd. The updated version of httpd will be shipped in the next OSU after the patch is released.


Hello Everyone,

 

Today we launched our multi-cloud discovery capabilities which will be available in BMC Discovery 11.2.  I would like to thank everyone who provided input and feedback to shape the enhancements we introduced into this release.  This is truly a milestone release where we make it possible for our customers to have the visibility they require as they transform to support multi-cloud environments.

 

A discovery webinar on What's New with BMC Discovery 11.2 is scheduled for September 21, 2017, where we will go into detail on the features briefly described in the next section. Below is a link to register for the webinar; more details on the What's New webinar can be found at the end of this post.

 

Register now

 

Below is a brief guide on some of the major features implemented in version 11.2:

 

BMC Discovery 11.2 - What's New

Cloud Discovery

BMC Discovery provides a single pane of glass to present dependencies and assets that span public cloud, private cloud, and traditional data center environments. We provide distinct and beautiful visualizations that show the cloud context of an application, consistent regardless of the terminology differences between cloud vendors. We worked with both AWS and Azure to design this solution and can now represent hybrid application deployments that span multiple cloud providers as well as on-premise infrastructure. We do this using a combination of traditional IP-based agentless scanning and the cloud vendors' rich web APIs. The result is an application model that can represent dependencies spanning traditional infrastructure and cloud services.

[screenshot: AWS-Azure-Discovery-Detailed.png]

Cloud Context with Background Shading

It is now possible to see nodes that are in the same Cloud Region or Location, by enabling the color shading feature. In application models, you can use shading to show which nodes were saved by the user, and which were added by BMC Discovery.

[screenshot: Multi-Cloud-Context-Screenshot.PNG]

Continuous Cloud Service Content Updates

With version 11.2 we are now able to deliver discovery of additional cloud providers and cloud services in our monthly TKU cadence. This is key, as the major cloud providers introduce new cloud services as often as weekly. While we are initially releasing with AWS and Azure, additional cloud providers will come in future TKUs.

 

Shared Node Awareness

When Discovery updates application models at scan time, it takes shared software nodes into account, so models do not balloon with unwanted nodes.  The system automatically identifies software nodes that are likely to be shared by multiple applications, for example shared database servers and message queues. Visualizations do not follow relationships out of these nodes, so the view is simpler and less cluttered.

 

API Access via TPL

The flexibility customers have come to know and love with TPL is now more powerful than ever! You can extend the cloud API discovery we perform by invoking additional functions in TPL, gathering cloud context that may not be represented in our out-of-the-box patterns.

We have also introduced capabilities to invoke REST APIs via TPL, which will further enhance our ability to provide additional content to our customers and to target devices that expose richer information through their respective REST APIs.

 

Model changes

The following major changes have been made to the BMC Discovery model:

VirtualMachine nodes

BMC Discovery 11.2 changes the way that virtual machines are modeled. In previous releases VMs were modeled using a SoftwareInstance node with a vm_type attribute. They are now modeled using a Virtual Machine node which makes it easier to find and relate VMs to their containers and Hosts.

Database nodes

Logical databases are now stored in dedicated Database nodes rather than the DatabaseDetail nodes previously used. Dedicated Database nodes simplify the separation of databases from other database details. DatabaseDetail nodes are still used for other information about databases, for example schemas and tablespaces.

 

Other Enhancements

The following additional enhancements are introduced in version 11.2:

  • CMDB Sync to new Cloud Instance class (requires CMDB 9.1 SP3)
  • Use the CMDB REST API for CMDB Sync (requires CMDB 9.1 SP3)
  • Share and tag favorite queries
  • And more…

 

More Information

BMC Discovery version 11.2 files are now available for download at the BMC Electronic Product Distribution (EPD) site.

Read more in the Release Notes

 

Discovery Webinar:  What's New with BMC Discovery 11.2

Please join us as we share our knowledge on the newest features in BMC Discovery 11.2.

 

For this session we will focus on the following topics:

 

  • Overview of v11.2 Themes
  • Deep Dive into What's New with v11.2
  • Q&A

 

Register now

 

Date and Time:

Thursday, September 21, 2017 10:00 am, Central Daylight Time (Chicago, GMT-05:00)

Thursday, September 21, 2017 11:00 am, Eastern Daylight Time (New York, GMT-04:00)

Thursday, September 21, 2017 8:00 am, Pacific Daylight Time (San Francisco, GMT-07:00)

Thursday, September 21, 2017 5:00 pm, Europe Summer Time (Paris, GMT+02:00)

Thursday, September 21, 2017 4:00 pm, UK Time (London, GMT+01:00)

 

Duration: 1 hour

 

If you cannot attend at that time, a link to watch the recording will be sent after the event.


This course provides information on the key concepts and core functionality needed to Deploy, Administer & Troubleshoot BMC Discovery version 11. The course also covers new features and enhancements made to BMC Discovery 11.

 

TARGET AUDIENCE:

  • Technical Personnel responsible for Deploying & Administering BMC Discovery 11
  • Configuration & Project Managers

PREREQUISITES:

  • Familiarity with the Linux Command Line
  • Knowledge of Simple Regular Expressions and Scripting Languages such as Perl, Python or Bash.

 

Contact your Education Sales Representative or education@bmc.com for further information or to register

 

COURSE OVERVIEW:

 

1. BMC Discovery Overview
2. User Administration & Security
3. Scanning Basics
4. Discovery Credentials
5. Taxonomy & Data Model
6. Discovery Overview
7. Discovery Scripts
8. Storage Discovery
9. Load Balancer Discovery
10. Discovery Investigation & Troubleshooting
11. Patterns Overview
12. Visualizations
13. Dashboards Overview
14. Query Builder
15. Query Language Overview
16. BMC Atrium CMDB Synchronization
17. Appliance Baseline
18. Clustering
19. Consolidation
20. Appliance Backup & Restore
21. Using the BMC Discovery CLI
22. Appliance Support & Upgrade
23. Compliance
24. Disk Configuration
25. Storage Terms
26. Load Balancing Overview

 

Detailed information on when and where these courses will be held can be found using this link, which is maintained and updated by the BMC Education Services Team.

 

Upcoming Sessions are listed below (as at Friday 11th August, 2017):

 

Australia, New Zealand & Asia:

 

Start Date    End Date      Timezone              Language

28-Aug-2017   01-Sep-2017   Malaysia Time         English
04-Sep-2017   08-Sep-2017   Malaysia Time         English
18-Sep-2017   22-Sep-2017   Indian Standard Time  English
20-Nov-2017   24-Nov-2017   Malaysia Time         English
04-Dec-2017   08-Dec-2017   Malaysia Time         English
18-Dec-2017   22-Dec-2017   Indian Standard Time  English

 

North America:

 

 

Start Date    End Date      Timezone               Language

28-Aug-2017   01-Sep-2017   Central Standard Time  English
18-Sep-2017   22-Sep-2017   Central Standard Time  English
02-Oct-2017   06-Oct-2017   Central Standard Time  English
30-Oct-2017   03-Nov-2017   Central Standard Time  English
11-Dec-2017   15-Dec-2017   Central Standard Time  English
18-Dec-2017   22-Dec-2017   Central Standard Time  English

 

EMEA:

 

Start Date    End Date      Timezone               Language

07-Aug-2017   11-Aug-2017   Greenwich Mean Time    English
25-Sep-2017   29-Sep-2017   Central European Time  German
02-Oct-2017   06-Oct-2017   Greenwich Mean Time    English
23-Oct-2017   27-Oct-2017   Greenwich Mean Time    English
20-Nov-2017   24-Nov-2017   Greenwich Mean Time    English
18-Dec-2017   22-Dec-2017   Greenwich Mean Time    English

LDAP/Active Directory Server SAN

 

I have noticed that while many customers integrate with Active Directory, many use LDAP rather than LDAPS. One customer in particular said LDAPS didn't work. I noticed an internal BMC system was LDAP enabled only, and not connected to our corporate AD. But I knew *some* customers' environments were able to use LDAPS against AD.

 

I had set up my own LDAP server some time ago, based on 389 Directory Server, but had never got round to adding LDAPS for security. In the end it was pretty uneventful: I just created some suitable certificates with my own CA, and loaded them up through the 389 DS Console UI.

 

Then, the normal configuration of LDAPS in the Discovery UI can be done - again, rather uninterestingly straightforward. So, time to check out AD: I pointed to our AD Prod alias and...

 

[screenshot: TLS certificate verification error]

 

just like the customer who had told me it didn't work. A simple bit of checking revealed that the DNS entry I was using for AD was an alias that resolved to multiple servers - not surprising in a large enterprise. Each of the AD servers supplied a certificate with a CN based on its own DNS A record, not the CNAME of the cluster of machines. There were no Subject Alternative Names, so the client (Discovery Appliance) failed to validate the AD servers' certificates.

 

Solution: Ensure the certificates have SANs for the hostname you are connecting with.
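You can reproduce the failure mode with nothing but openssl (all hostnames and filenames here are hypothetical, and `-addext` needs OpenSSL 1.1.1 or later). The first certificate mimics the AD servers - CN set to the server's own A record, no SANs - and fails validation against the cluster CNAME; the second carries a SAN for the CNAME and passes:

```shell
# Self-signed stand-in for an AD server cert: CN is the server's A record, no SAN
openssl req -x509 -newkey rsa:2048 -nodes -keyout ad1.key -out ad1-nosan.crt \
    -days 1 -subj "/CN=ad1.example.com"
# Same again, but with SANs covering both the A record and the cluster CNAME
openssl req -x509 -newkey rsa:2048 -nodes -keyout ad1b.key -out ad1-san.crt \
    -days 1 -subj "/CN=ad1.example.com" \
    -addext "subjectAltName=DNS:ad1.example.com,DNS:ad.example.com"
# Validate each against the hostname the client actually connects with
openssl verify -CAfile ad1-nosan.crt -verify_hostname ad.example.com ad1-nosan.crt \
    || echo "no SAN covering the CNAME: validation fails"
openssl verify -CAfile ad1-san.crt -verify_hostname ad.example.com ad1-san.crt
```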

 

ldapsearch

 

Now in the old days, ldapsearch was an invaluable tool for testing and checking LDAP configuration and structure from the appliance. In later versions it is less so, as the UI has been improved. However, for completeness I thought I would check it.

 

Non-secure LDAP worked as expected:

 

$ ldapsearch -W -H ldap://npgsldap.tideway.com -D "cn=Directory Manager" -s sub -b "ou=Security,dc=tideway,dc=com"  "(description=Staff Members)"

 

But when I tried to run it over a secure connection, I got a TLS error:

 

$ ldapsearch -W -H ldaps://npgsldap.tideway.com:636 -D "cn=Directory Manager" -s sub -b "ou=Security,dc=tideway,dc=com"  "(description=Staff Members)"

Enter LDAP Password:

ldap_sasl_bind(SIMPLE): Can't contact LDAP server (-1)

 

because ldapsearch does not know about the LDAPS server's CA certificates. The client configuration file for ldapsearch and the other OpenLDAP client tools is /etc/openldap/ldap.conf, and by default it has a pointer to a directory containing CA certificates:

 

TLS_CACERTDIR   /etc/openldap/certs

 

I thought initially we could just copy/link the Discovery LDAP CA file (/usr/tideway/etc/ldap_cacert.pem) here, but it's not that simple: this directory doesn't hold simple PEM files, it contains a mini database of NSS certificates. What you have to do is add the PEM-format CA certs into the NSS database, like this:

 

$ certutil -d /etc/openldap/certs -A -n "LDAPS CA Certificates" -t "C,," -a -i /usr/tideway/etc/ldap_cacert.pem

 

You can check it loaded thus:

 

$ certutil -d /etc/openldap/certs -L

Certificate Nickname                                         Trust Attributes

                                                             SSL,S/MIME,JAR/XPI

 

LDAPS CA Certificates                                        C,,

 

If you need to, you can delete with the -D flag, referencing the same Nickname:

 

certutil -d /etc/openldap/certs -n "LDAPS CA Certificates" -D

 

Anyway, now ldapsearch returns results over a secure TLS connection:

 

$ ldapsearch -W -H ldaps://npgsldap.tideway.com:636 -D "cn=Directory Manager" -s sub -b "ou=Security,dc=tideway,dc=com"  "(description=Staff Members)"

Enter LDAP Password:

# extended LDIF

#

# LDAPv3

# base <ou=Security,dc=tideway,dc=com> with scope subtree

# filter: (description=Staff Members)

# requesting: ALL

#

 

# Staff, Groups, Security, tideway.com

dn: cn=Staff,ou=Groups,ou=Security,dc=tideway,dc=com

description: Staff Members

objectClass: top

objectClass: groupofuniquenames

cn: Staff

uniqueMember: uid=ENoether,ou=People,ou=Security,dc=tideway,dc=com

uniqueMember: uid=ALovelace,ou=People,ou=Security,dc=tideway,dc=com

uniqueMember: uid=JWatson,ou=People,ou=Security,dc=tideway,dc=com

uniqueMember: uid=HPoincare,ou=People,ou=Security,dc=tideway,dc=com

 

# search result

search: 2

result: 0 Success

 

# numResponses: 2

# numEntries: 1


An interesting problem came up recently that I thought I would share. A customer had a cluster of version 11 appliances, configured to use SSO and Secure LDAP. As far as they were aware, there were no problems.

 

They were using the default appliance disk layout, and wanted to move the datastore to a new, larger disk on each appliance, which had been previously provisioned on each appliance VM. All good. So, they started the Disk Configuration UI, which started... shutting down the services (as expected) and then doing nothing more, for several hours (not as expected). NBG.

 

The immediate priority was, of course, to get the appliances back and usable, which consisted of:

  •     Running "tw_disk_utils --fix-interrupted" on CLI
  •     Restarting the tideway services.

 

Subsequent investigation showed that while the UI was working for the LDAP-based administrator user on most appliances, it was NOT working on the last machine to be added to the cluster; CLI authentication failed there too. Importantly, that machine had been provisioned after the other machines had been configured for LDAP.

 

This sequence of events had triggered defect DRUD1-18597, whereby the LDAP CA bundle is not distributed to a newly added member. This meant that although the local system user was working fine, when the LDAP administrator tried to initiate a disk operation, the coordinator got an error from that machine (because LDAPS authentication could not be made) but it simply retried, ad infinitum.

 

A simple workaround exists:

  •     Copy the file (/usr/tideway/etc/ldap_cacert.pem) from another appliance
  •     Restart the tideway service.

 

This should be fixed in 11.2.


You may have noticed in the 11.1 Enhancements page a note about vCenter appliances now being discovered as Host nodes:

[screenshot: vc_docs.png]

So, if you scan a vCenter you will get something like this:

 

[screenshot: vc_info.png]

So, not a vast amount of information, but it's certainly there. Note that, unusually, a Host node is allowed even though network interfaces/MACs are not discovered. But what's going on with the name?

 

It turns out that up to and including 11.1.05, no attempt is made to extract a hostname via the VMware API. As you can see, a hostname is constructed from the discovered IP address. There is a change (DRUD1-19304) planned for 11.2, whereby the setting "VirtualCenter.InstanceName" is queried over the vCenter API.

 

You can check your vCenter instance by looking at the VirtualCenter.InstanceName advanced setting. You should see something like:

[screenshot: vc_in2.png]

If you see a hostname here, then 11.2 should be able to capture the hostname. If, like this example, the vCenter API exposes just an IP address, there is nothing Discovery can do about it. An unfortunate characteristic of the vCenter installation is that this InstanceName has to be chosen at install time and cannot be changed thereafter. See this VMware KB article.

 

If you have also seen this in your environment, and it's causing you problems, I invite you to let me know.


Just when you had got to grips with the Windows ransomware vulnerability WannaCry, comes another big one, dubbed SambaCry, a serious vulnerability in the open source Samba package which implements SMB/CIFS protocols. Headlines:

 

  • CVE-2017-7494
  • Remote code execution as root
  • Affects versions from 3.5.0 (released 2010)
  • Patched in core code at:
    • 4.6.4
    • 4.5.10
    • 4.4.14
  • Patched by Redhat streams - see here
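As a rough triage helper, here is a shell sketch of the affected-version check against the upstream fixed releases above. Upstream versions only - Red Hat and other distros backport fixes to older version strings, so treat any result as a hint, not a verdict (requires GNU sort with -V):

```shell
# samba_vulnerable VERSION - prints "yes" if VERSION is in the upstream
# CVE-2017-7494 range (3.5.0 up to the fixed point releases), else "no"
samba_vulnerable() {
  v=$1
  version_ge() { [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]; }
  version_ge "$v" 3.5.0 || { echo no; return; }      # predates the bug
  case $v in
    4.6.*) version_ge "$v" 4.6.4  && { echo no; return; } ;;
    4.5.*) version_ge "$v" 4.5.10 && { echo no; return; } ;;
    4.4.*) version_ge "$v" 4.4.14 && { echo no; return; } ;;
    4.[7-9]*|[5-9]*) echo no; return ;;              # released after the fix
  esac
  echo yes
}
samba_vulnerable 4.6.3   # yes
samba_vulnerable 4.6.4   # no
```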

 

I would not expect any Internet-exposed machines to have this open, but Shodan indicates there are many thousands that do. So while you may not be exposing it directly, it may just be a matter of time before this exploit is weaponised to use indirect channels to get inside intranets and propagate.

 

Note that from a Discovery appliance perspective, we don't run a Samba server, so the appliance stack is not vulnerable.


Chrome 58 has recently dropped into Stable, and it happens to be my browser of choice on Windows, Android and Fedora. A change has come in how a certificate's Common Name is being interpreted.

 

Historically, the CN part of the Subject field was used to specify the DNS address of the server you were connecting to, such as:

 

Subject: C=GB, ST=England, L=London, O=Smithnet, OU=Smithnet IT, CN=www.smithnet.org.uk/emailAddress=web@smithnet.org.uk

 

The x509 v3 extension provided a Subject Alternative Name field, which could be used to specify several different DNS entries, such as:

 

X509v3 Subject Alternative Name:

    DNS:www.smithnet.org.uk, DNS:smithnet.org.uk, DNS:pooh.smithnet.org.uk

 

But if you didn't need alternatives, you didn't need to use this field. Except, apparently (I was not previously aware), this behaviour has been deprecated for some time and we were not supposed to be using the CN/Subject field: the SAN field was to be used even if only one entry was required.

 

Chrome is now enforcing this: so if you have an otherwise perfectly good certificate, that only has CN but no SAN, you will get a NET::ERR_CERT_COMMON_NAME_INVALID error:

 

[screenshot: NET::ERR_CERT_COMMON_NAME_INVALID error page]

 

You can of course use the Advanced link to add an exception.

 

Now, Discovery's Certificate Signing Request creation does not currently have any mechanism to support SANs. I am hoping for a new feature in the next major product release, but of course I can't guarantee that. Regardless, the CA that signs your CSRs should be able to add the appropriate single-SAN entry even if the CSR doesn't contain any SAN data - but if it doesn't, you should now know why you get the above error.
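For example, a private CA can inject the SAN at signing time even though the CSR contains none. A self-contained sketch with openssl (all filenames and the hostname are hypothetical):

```shell
# A throwaway CA
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
    -days 365 -subj "/CN=Example Lab CA"
# A CSR in the style Discovery produces today: CN only, no SAN data
openssl req -newkey rsa:2048 -nodes -keyout appliance.key -out appliance.csr \
    -subj "/CN=appliance.example.com"
# The CA adds the single-SAN entry when it signs
printf 'subjectAltName=DNS:appliance.example.com\n' > san.ext
openssl x509 -req -in appliance.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
    -out appliance.crt -days 365 -extfile san.ext
# Confirm the SAN made it into the issued certificate
openssl x509 -in appliance.crt -noout -text | grep -A1 'Subject Alternative Name'
```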


Several versions ago we separated the detection of RAM into two Host attributes:

 

  • ram: The amount of RAM (MB) installed on the host.
  • logical_ram: The amount of RAM (MB) available to the OS.

 

The "ram" would typically be the actual physical sum of all the RAM devices on a physical machine, or the RAM allocated to a VM in (say) ESX. The "logical_ram" is what the OS sees and can use - usually a little less, once you remove overhead for (say) BIOS, video memory and other reservations. That's all well and good.

 

I have been looking at some customer data (a large estate of > 35k OSIs) and we found some apparent anomalies. We simply reported on the difference:

 

search Host where ram and logical_ram show name, ram - logical_ram as 'Difference'

 

and sorting on the difference. Most had a small positive value as expected, but we also found a few:

 

Very Large Positive Value

 

By large, I mean many tens of GB or more. These seemed to be physical machines with plenty of RAM, but we found the OSes installed on them were not able to make use of all of it. We confirmed this for Windows and Red Hat.

 

So, Discovery was reporting correctly - and you could use a report like this to find installations where you are wasting RAM.
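For example, a sorted variant of the earlier report puts the biggest gaps at the top (a sketch in the same query language - I am assuming here that ordering on a computed column is supported in your version; adjust if not):

```
search Host where ram and logical_ram
show name, ram, logical_ram, ram - logical_ram as 'Difference'
order by 'Difference' desc
```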

 

Negative Value

 

While you may be familiar with negative resistance, temperature or mass - surely not RAM. How could logical_ram be larger than ram? There were not many examples - just a few Windows and Linux VMs on ESX. Discovery correctly reported what the respective OSes reported (via WMI or dmidecode). So far, we don't know the root cause, although my current guess is something in the ESX layer.

 

Have you seen examples like this in your estate? Could you share? I would really like to see if there are other examples out there.


Companies have implemented many systems that are purpose-built to manage various IT processes. Traditionally those systems are not from the same vendors and do not interact. As a result, companies are left with disparate systems that aren't leveraging the same set of information. BMC Discovery provides an open platform through a REST API that enables other systems to take advantage of its rich discovery data, and to drive discovery configuration.

 

In this session we will go through the BMC Discovery REST API and share some use cases which may encourage development incorporating the REST API.

 

We will host a Q&A session with a panel of experts and encourage the community to be prepared with questions.

 

Register now

 

Once your registration is approved, you will receive a confirmation email message with instructions on how to join the event.

 

Date and Time:

 

Tuesday, April 25, 2017 10:00 am, Central Daylight Time (Chicago, GMT-05:00)

Tuesday, April 25, 2017 11:00 am, Eastern Daylight Time (New York, GMT-04:00)

Tuesday, April 25, 2017 8:00 am, Pacific Daylight Time (San Francisco, GMT-07:00)

Tuesday, April 25, 2017 5:00 pm, Europe Summer Time (Paris, GMT+02:00)

Tuesday, April 25, 2017 4:00 pm, GMT Summer Time (London, GMT+01:00)

 

Duration: 1 hour

 

If you cannot attend at that time, a link to watch the recording will be sent after the event.


The most useful way to view the TPL documentation is to export a PDF of the document.

 

1) find the doc:  https://docs.bmc.com/docs/display/DISCO110/The+Pattern+Language+TPL                      

2) Click the "gear", and choose "Export to PDF"

 

In the 10.0 documentation, the PDFs were already exported: Export PDF Pages - BMC Discovery 10.0 - BMC Documentation

 

See this for the PDF document from 10.0:  TPL Guide PDF - BMC Discovery 10.0 - BMC Documentation

 

But newer versions of the doc do not have the PDFs... you have to export them yourself.

 

This doc explains how to create your own PDFs: Exporting to PDF and other formats - Help for BMC Online Technical Documentation


Security is top of mind for most IT professionals these days.  It seems as though not a day goes by without news of yet another data breach, or hack attempt.  BMC Discovery data is critical in helping you understand where your weak points are and whether business applications are running on vulnerable systems.  Security vulnerabilities pose one threat to the business but it doesn’t stop there.  BMC Discovery data easily helps you identify other key risk areas:

 

  • Understand where out-of-support, or soon to be out-of-support, software or operating systems exist
  • Reduce version sprawl
  • Locate servers that are running out of storage capacity
  • Eliminate gaps in anti-virus or automation-agent coverage by finding systems missing key software
  • Know what you don't know is in your environment

 

In this session we will go through how to gather information from BMC Discovery to enable you to manage security, operational, and business risk.
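As a flavour of the queries involved, here is a sketch for one of the bullets above - hosts with no anti-virus software instance. The traversal and the type match are assumptions on my part; check them against your taxonomy and the software you actually deploy:

```
search Host
where nodecount(traverse Host:HostedSoftware:RunningSoftware:SoftwareInstance
                where type has subword 'Anti-Virus') = 0
show name, os
```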

 

We will host a Q&A session with a panel of experts and encourage the community to be prepared with questions.

 

Register now

 

Once your registration is approved, you will receive a confirmation email message with instructions on how to join the event.

 

Date and Time:

 

Tuesday, March 28, 2017 11:00 am, Eastern Daylight Time (New York, GMT-04:00)

Tuesday, March 28, 2017 8:00 am, Pacific Daylight Time (San Francisco, GMT-07:00)

Tuesday, March 28, 2017 5:00 pm, Europe Summer Time (Paris, GMT+02:00)

Tuesday, March 28, 2017 4:00 pm, GMT Summer Time (London, GMT+01:00)

 

Duration: 1 hour

 

If you cannot attend at that time, a link to watch the recording will be sent after the event.
