3 Replies Latest reply on Oct 9, 2007 5:54 AM by Andrew Knott

    Reporting on Performance Metrics


      Has anyone had to report on server performance metrics with BladeLogic Reports?


      The client would like the ability to report on performance metrics, i.e., report on all server utilization over a 24-hour period each day. They would like something like a perfmon stat or vmstat that would collect the data (CPU utilization, memory usage, disk I/O, and % used) continuously and dump it to a file. Then somehow read the file's contents and add it to a server property (or use another method), run the Populate Reports job, and have BladeLogic Reports display the performance data for each server.

        • 1. Re: Reporting on Performance Metrics

          This is possible in the Reports framework, but will require some custom work. One approach would be along the lines of the following:


          1. Create a script which collects the performance data and dumps it to a file, as you described, one file per server.


          2. Update PropertiesDataDescription.xml and add a multi-valued data group, with several child data items, representing the performance metrics. E.g., the fields might be time, CPU utilization, and memory usage, so each record would encapsulate the CPU utilization and memory usage at some point in time. See the "Modifying the Data Set for Ad Hoc Reports" section of the Reports Users Guide for more details on this.


          3. Modify os_config.nsh to read the files generated in step 1 and output the data in the os_config.nsh-generated server XML files in a format corresponding to the definition in PropertiesDataDescription.xml in step 2 (again, see the users guide for more specifics on this).


          4. Run load_warehouse_schema to generate the dynamic schema in the warehouse which will be used to hold the performance data (users guide again).


          5. Each run of os_config.nsh followed by populate_reports.nsh will then populate the collected performance data into the warehouse, and it will be available as data items in the ad hoc inventory domain. Arbitrary ad hoc inventory reports can then be created to report on this data.
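          The collection script in step 1 might, on a Linux server, look something like the sketch below. The file path, field choices, and /proc-based sampling are illustrative assumptions, not BladeLogic conventions; scheduled via cron, each run appends one timestamped record, giving 24-hour coverage at whatever interval you choose:

```shell
#!/bin/sh
# Hypothetical collector sketch -- names and paths are illustrative only.
# Appends one timestamped CSV record per run; schedule via cron for
# continuous 24-hour coverage (e.g., every 5 minutes).
OUTFILE="${1:-/tmp/perfdata_$(hostname).csv}"

TS=$(date +%Y-%m-%dT%H:%M:%S)

# 1-minute load average as a rough CPU-utilization proxy
LOAD=$(cut -d' ' -f1 /proc/loadavg)

# Memory usage: (MemTotal - MemFree) / MemTotal, as a percentage
MEM=$(awk '/^MemTotal/ {t=$2} /^MemFree/ {f=$2} END {printf "%.1f", (t-f)*100/t}' /proc/meminfo)

# Percent used on the root filesystem
DISK=$(df -P / | awk 'NR==2 {print $5}' | tr -d '%')

echo "$TS,$LOAD,$MEM,$DISK" >> "$OUTFILE"
```

          The modified os_config.nsh in step 3 would then read this per-server CSV and emit each row as a record of the multi-valued data group defined in PropertiesDataDescription.xml in step 2.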


          There are a couple of limitations to this approach though:


          1. If the client desires the performance data to be refreshed daily, then this will require running os_config.nsh every day. Typically, clients run os_config weekly because it does take some time to collect all the config data, and it is not necessary to update all of it on a daily basis.


          2. Historical performance data will not be maintained in the warehouse DB, in the sense that each run of os_config.nsh replaces the data collected by the previous run. But if the client only cares about the previous day's performance data, this may not be an issue.


          The other option is to go with a pure "custom reports" approach which plugs into the current framework. This would involve creating custom DB tables in the warehouse to hold the performance data, writing a script or program which collects performance data and populates it into the DB tables, then writing RDLs which access these tables and produce the desired reports.
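          For the custom-tables route, the warehouse table might be sketched along the following lines. The table and column names are purely illustrative, not an actual BladeLogic schema; because rows are keyed by server and sample time, history accumulates across loads rather than being replaced, which also addresses limitation 2 above:

```sql
-- Hypothetical custom warehouse table (names are illustrative only)
CREATE TABLE perf_metrics (
    server_name   VARCHAR(255) NOT NULL,
    sample_time   TIMESTAMP    NOT NULL,
    cpu_util_pct  DECIMAL(5,2),
    mem_used_pct  DECIMAL(5,2),
    disk_used_pct DECIMAL(5,2),
    PRIMARY KEY (server_name, sample_time)
);
```

          The collection script would insert rows into this table on each run, and the custom RDLs would query it directly to produce the desired reports.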

          • 2. Re: Reporting on Performance Metrics

            Tim, thanks for the info. I'm afraid that they will want daily reporting, and it will be across multiple servers, on 3 different app servers. Also, they will want to keep the data for future reference, on all servers. I've explained to them that this isn't 'live' data, that the report is a capture or snapshot in time from when os_config/populate reports executes. I believe they want to be able to launch BL Reports and see live activity reports, which currently can't be done.

            • 3. Re: Reporting on Performance Metrics

              If you have a performance log file on the app server that is being updated hourly, can the "live" view be a central EO that parses the performance monitoring file?