
TrueSight Capacity Optimization

Posts authored by: Timothy Hill

Update Note:

This article does not apply to TrueSight Capacity Optimization (CO) version 10.0 and later.  One of the many enhancements in the CO 10.0 release is that the communication mechanism used by the Remote Scheduling Supervisor has been changed to increase its stability and avoid the scenarios covered in this article, where it became necessary to manually clean the communication channel.

 

In Capacity Optimization (CO), the Datahub's Remote Scheduling Supervisor service communicates periodically with the CO Schedulers.  If this communication fails, the CO Schedulers are flagged with a red ERROR state.  They may continue to run jobs that have already been scheduled, but they will no longer respond to new scheduling requests from the CO console.

 

The symptoms associated with the Datahub's Remote Scheduling Supervisor being unable to communicate with the CO Scheduler are varied and cover a broad range of possible problems. In this article we consider several of the most common cases, where the symptoms include one or more of the following conditions.

 

  • All CO Schedulers are reporting a red ERROR state.
  • When trying to submit a job to a scheduler (or stop a running task), the CO UI responds with the error, "The system was unable to execute the given request.  Please check scheduler [Scheduler Name] status."
  • When executed, CO Reports never complete, and messages state something like, "The report has been submitted and will be executed immediately.  Please refresh to view the results."
  • The CO Scheduler cpit.log repeats a message like, "[main]- [MIFSchedulerCommander] Performing startup procedure for instance #[ID]" and never reports "[MIFSchedulerCommander] Received startup messages."

 

It is not uncommon for the communication channel between the CO Scheduler and Remote Scheduling Supervisor to become corrupted when the CO file system fills up. Different symptoms may be visible when this occurs on the CO Application Server (AS) or a CO ETL Engine (EE) server.

 

  • If e-mail alerting is enabled, the most obvious symptom will be an e-mail error message from CO reporting a failure of the Local Monitoring task.

 

  • If e-mail alerting is not enabled, there may be other symptoms, which typically include:
    • CO Analysis and Predict reports failing to generate output with unexpected errors (such as the time filter not covering a period that contains data, when a quick analysis shows that it does)
    • Analysis failing with the error, "java.io.IOException - message: No space left on device to write file: /path/file"
    • Caplan Scheduler *** ALERT mail *** Task Local monitoring for Default [50] completed with 1 errors
    • File system full clean up

 

If the CO file system fills up, it is a best practice to always clean the CO Scheduler (and dataccum) and CO Datahub working set files to prevent problems with this communication channel. Generally, the steps needed to fix most of the issues indicated by the messages above are shown below and should be followed in the order written.  However, if you only need to clear up Datahub communication problems, the Scheduler (and dataccum) portion of the clean-up does not need to be followed as described in this article.

 

  • Stop all the CO components on all machines in the CO instance
  • Clean up the working set files on all machines
  • Restart all the components

 

Restoring the communication channel between the CO Datahub and the CO Schedulers is considered a best practice recommendation to resolve the problems associated with the aforementioned symptoms. When the communication channel is down, the remote schedulers queue the status messages that they want to send back to the CO console.  As a result, when the communication channel is re-established, there can be a spike in communication which saturates the channel and takes time to clear (or causes the original communication problem to continue).

 

Cleaning the Datahub component only will frequently resolve this communication problem, and can be a quick path to recovery. If this resolves the problems, then you will not need to execute the Scheduler clean-up steps listed. The Scheduler (and dataccum) clean-up is used when the CO Remote Scheduling Supervisor communication channel has been corrupted (for example, possibly after a file-system-full problem):

 

To clean the Datahub, you must stop all the components (including the scheduler) and clean up the corrupted runtime files:

  • Access the AS via ssh as the CO OS user.
  • Change directory to the CO home folder (usually /opt/cpit):

cd /[CO Installation Directory]

Try to stop the CO DWH in a clean way:

./cpit stop Datahub

  • Wait five minutes or until the countdown ends.
  • Ensure there are no other DWH jboss processes stuck, using these commands:

ps -ef | grep jboss

Issue a kill -9 $PIDNUMBER for every remaining jboss process.

  • Ensure there are no run.sh processes stuck, using these commands:

ps -ef | grep run.sh

Issue a kill -9 $PIDNUMBER for every remaining run.sh process.
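
If several stray processes are left behind, a small loop can save typing. Here is a minimal sketch, assuming a standard Linux/UNIX userland (the square brackets in the grep pattern keep the grep command itself out of the match); always review the ps output before killing anything:

# Kill any leftover jboss and run.sh processes (check the list printed by ps first!)
for pid in $(ps -ef | grep -E '[j]boss|[r]un\.sh' | awk '{print $2}'); do
    kill -9 "$pid"
done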

Execute these commands to clean up the Datahub directories on the AS (paths relative to the base CO Installation Directory):

  • CO 9.5 SP1 and SP2:

./cpit clean datahub

  • CO 9.0:

rm -rf datahub/jboss/server/all/data/kahadb/*

rm -rf datahub/jboss/dlq_messages/*

rm -rf datahub/jboss/server/all/tmp/*

rm -rf datahub/jboss/server/all/data/tx-object-store/*

  • CO 4.5:

rm -rf repository/kahadb/*

rm -rf datahub/jboss/server/all/data/kahadb/*

rm -rf datahub/jboss/not_processed_messages/*

rm -rf datahub/jboss/dlq_messages/*

  • CO 4.0:

rm -rf datahub/jboss/server/default/data/data/*

rm -rf datahub/jboss/server/default/data/tx-object-store/*

rm -rf datahub/jboss/dlq_messages/*

rm -rf datahub/jboss/server/default/tmp/*

rm -rf datahub/jboss/bin/activemq-data/localhost/*
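
Because these are recursive deletes, it is worth guarding against running them from the wrong directory. Below is a cautious sketch for the CO 9.0 file list (the /opt/cpit path is only the usual default mentioned earlier; adjust the directory and file list to your installation and version):

# Verify this really is a CO installation before deleting anything
cd /opt/cpit || exit 1
[ -d datahub/jboss ] || { echo "Not a CO installation directory" >&2; exit 1; }
rm -rf datahub/jboss/server/all/data/kahadb/* \
       datahub/jboss/dlq_messages/* \
       datahub/jboss/server/all/tmp/* \
       datahub/jboss/server/all/data/tx-object-store/*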

 

NOTE ** If you are using these steps to fix the Datahub after a machine migration, or after copying the Datahub, CHECK that these files contain no pointers to other machines' hostnames:

datahub/jboss/server/all/deploy/cluster-service.xml

datahub/jboss/server/all/deploy/activemq-ra.rar/META-INF/ra.xml
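
A quick way to eyeball host references in both files is a case-insensitive grep (a sketch; run it from the CO installation directory and inspect any hostnames it turns up):

# Print any line mentioning a host in either file, with line numbers
grep -inE 'host' \
    datahub/jboss/server/all/deploy/cluster-service.xml \
    datahub/jboss/server/all/deploy/activemq-ra.rar/META-INF/ra.xml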

 

To clean the Scheduler, you will also need to follow these steps:

  • Stop AS Scheduler:

cd /[CO Installation Directory]

./cpit stop scheduler

  • Check that there are no other schedulers stuck, using these commands:

ps -ef | grep scheduler

Issue a kill -9 $PIDNUMBER for every remaining scheduler

  • Clean up the scheduler task configuration directories on the AS

Execute these commands to clean the Scheduler directories on the AS (path relative to the base CO Installation Directory):

  • CO 9.5 SP1 and SP2:

./cpit clean scheduler

  • CO 9.5:

rm -rf scheduler/task/*

rm -rf scheduler/mif/notdelivered/*

rm -rf scheduler/localdb/*

  • CO 9.0, 4.5, 4.0:

rm -rf scheduler/task/*

rm -rf scheduler/mif/notdelivered/*

rm -rf scheduler/localdb/*

Execute these commands to clean the dataccum directories on the AS:

  • Stop AS dataccum:

cd /[CO Installation Directory]

./cpit stop dataccum

  • Ensure there are no other dataccums stuck, using these commands:

ps -ef | grep dataccum

Issue a kill -9 $PIDNUMBER for every remaining dataccum

  • Access the EE via ssh as the CO OS user:

Stop the EE scheduler:

cd /[CO Installation Directory]

./cpit stop scheduler

  • Ensure there are no other schedulers stuck on the EE, using these commands:

ps -ef | grep scheduler

Issue a kill -9 $PIDNUMBER for every remaining scheduler on the EE

  • Clean up the scheduler task configuration directories on the EE

Execute these commands to clean the Scheduler directories on the EE (path relative to the base CO Installation Directory):

  • CO 9.5 SP1 and SP2:

./cpit clean scheduler

  • CO 9.5:

rm -rf scheduler/task/*

rm -rf scheduler/mif/notdelivered/*

rm -rf scheduler/localdb/*

  • CO 9.0, 4.5, 4.0:

rm -rf scheduler/task/*

rm -rf scheduler/mif/notdelivered/*

rm -rf scheduler/localdb/*

Execute these commands to clean the dataccum directories on the EE:

  • Stop EE dataccum:

cd /[CO Installation Directory]

./cpit stop dataccum

  • Ensure there are no other dataccums stuck, using these commands:

ps -ef | grep dataccum

Issue a kill -9 $PIDNUMBER for every remaining dataccum

  • Now, check the ETL and chains status:

In the UI, go to the Administration -> Scheduler -> ETLs page and Administration -> Scheduler -> System tasks, and look for RUNNING tasks that might be stuck. Note their IDs and then force them to be ended using SQL within the CO database:

update task_status set status = 'ENDED' where taskid in (XX, XX2, XX3)
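
For example, the update could be run through SQL*Plus as the CO schema owner (a sketch; the DSN and task IDs are placeholders, and BCO_OWN is only the default schema owner name):

# Run from the AS; supply the schema owner password when prompted
sqlplus BCO_OWN@BCODSN <<'EOF'
update task_status set status = 'ENDED' where taskid in (101, 102);
commit;
EOF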

  • Restart the components you stopped to restore the functionality ON BOTH MACHINES
  • Run the "Component status checker" task
  • Wait at least a minute, then access Administration > System > Status to see the status.

 

 

We hope you found this article useful. This content is also available as: Steps to recover CO functionality when the CO Schedulers are unable to properly communicate with the Datahub Remote Scheduling Supervisor service

 

Knowledge Article ID:    KA350370  https://bmcsites.force.com/casemgmt/sc_KnowledgeArticle?sfdcid=000031595

 

 

 

Miss a blog?  See BMC TrueSight Support Blogs

 


Now, there is a new way to watch Connect with TrueSight Capacity Optimization (BCO) Webinars, via iTunes Podcasts.

 

To get started, click Connect with - TrueSight Capacity Optimization (BCO) Series to subscribe to the podcast.  Once you have subscribed, new episodes will automatically be synced based on your subscription options as the webinar videos are posted.  All previously recorded webinars are also available.


Happy New Year!!

 

You may encounter a situation where you need to force deployment of a TrueSight Capacity Optimization patch.

This can happen when the patch is stuck in READY state in the Maintenance tab, or when the patch is in ERROR state and can't be redeployed via the TrueSight Capacity Optimization Maintenance User Interface (UI). In this article we will cover these two situations and a procedure for resolving each.

 

Situation 1 - Patch is in READY state in the Maintenance tab

The most common cause of a TrueSight Capacity Optimization patch deployment remaining in READY state is that the Scheduler on the target machine for the patch is no longer communicating properly with the Datahub's Remote Scheduler Supervisor.

 

So, the first thing to try is to correct the communication channel between the TrueSight Capacity Optimization Datahub Remote Scheduler Supervisor and the Scheduler by following the steps in the linked article below, Steps to recover BCO functionality when the BCO Schedulers are unable to properly communicate with the Datahub Remote Scheduling Supervisor service, at:


https://kb.bmc.com/infocenter/index?page=content&id=KA350370

 

In most cases, once the steps in the article have been followed, the patch stuck in READY state will be automatically deployed after the TrueSight Capacity Optimization Scheduler is restarted (which is part of the cleanup process).


Situation 2 - Patch is in ERROR state and can't be redeployed via the TrueSight Capacity Optimization Maintenance UI

This situation is less common, but the TrueSight Capacity Optimization patch is in ERROR state and can't be redeployed for some reason (such as subsequent patches have been deployed that are blocking the re-deployment).


In this case, the patch deployment can be forced by copying the patch file to the $CPITBASE/scheduler/autodeploy directory on the target machine and restarting the TrueSight Capacity Optimization Scheduler (or waiting for the Auto Deployment task to execute, which can take up to 10 minutes).
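
A minimal sketch of that procedure (the patch file name is taken from the scenario below; the start subcommand is an assumption, mirroring the stop subcommand shown elsewhere in this blog):

# Copy the patch into the autodeploy directory, then bounce the Scheduler
cp BCO_9.5.02.00_AS.zip "$CPITBASE/scheduler/autodeploy/"
cd "$CPITBASE"
./cpit stop scheduler
./cpit start scheduler   # assumption: start mirrors the documented stop subcommand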

 

Here is an applicable scenario:

 

You have installed TrueSight Capacity Optimization 9.5 with a Datahub AS and a Web AS. The Datahub AS does not have the Web component installed.
Mistakenly, the $CPITBASE/repository directory has not been shared between the two environments.

 

You then deploy the TrueSight Capacity Optimization 9.5 SP2 patch using the Maintenance tab to both AS Schedulers.

 

In this case, the TrueSight Capacity Optimization 9.5.02 patch deployment will fail on the Datahub AS (and thus none of the database update DDL will run) and it will succeed on the Web AS.


This results in a mismatch between the database structure and what the patched Web binaries are expecting. One symptom of this mismatch is that an attempt to redeploy the TrueSight Capacity Optimization 9.5.02 patch on the Datahub AS (even after the $CPITBASE/repository directory has been properly shared) via the GUI can't proceed, because the 'Target Schedulers' screen will remain blank (due to the underlying SQL failing to execute).

 

To fix this, copy the BCO_9.5.02.00_AS.zip file to the $CPITBASE/repository directory on the Datahub AS and then restart the TrueSight Capacity Optimization Scheduler on that machine.

 

 

We hope you found this article useful (actually we hope you never have to use it!). It is also available in our Knowledge Base as: In BCO, how can I force the deployment of a patch if it is in READY state in the Maintenance tab or in ERROR state and can't be redeployed via the BCO UI
https://bmcsites.force.com/casemgmt/sc_KnowledgeArticle?sfdcid=000090605

 

tim

 

Miss a blog?  See BMC TrueSight Support Blogs


In TrueSight Capacity Optimization 9.x, sometimes systems or business driver metrics are not visible, or seem to have disappeared. You can still find the systems using the BCO Search tool, but they are not visible in your workspace domain. How do you get them back?

 

One possibility is that the information has been hidden because there is no current data. To remedy this, try this simple procedure:

 

1.     Log in to BCO with the credentials of the affected user; in this case we're showing the Administrator user. If you are already logged in, clicking the "Home" link in the "Welcome ..." part of the screen from any other screen will also take you to the screen shown below.

[Image: UserHome.jpg]

2.     Clicking "Edit your profile" will present the screen below.

[Image: UrProfile.jpg]

3.     From the profile screen, check the Preferences section "Hide systems, business drivers and series without recent data", where there is an option to hide the data. The default is to hide data older than 30 days.

 

4.     Make sure "Do not hide" is selected. If it's not, click the radio button for "Do not hide" and Save.

 

5.     This is a user setting, so this procedure needs to be repeated for every affected user, logging in with their credentials.

 

 

The above information is also available at the BMC Support site as, BCO systems or business drivers metrics disappeared. Systems can be returned in the search tool in BCO but they are not visible in the workspace domain:

https://bmcsites.force.com/casemgmt/sc_KnowledgeArticle?sfdcid=000097395

 

I hope you have found this article useful. Feel free to comment on it, or to suggest other topics of interest for future articles.

Miss a blog?  See BMC TrueSight Support Blogs

thx, timo


Hi!

Have you been postponing an upgrade to the latest version of BMC TrueSight Capacity Optimization because of time, learning curve concerns, or risk issues? (Note: the BMC TrueSight Capacity Optimization and TrueSight Performance Assurance names reflect a recent product re-branding.)

Worry no more!! The BMC TrueSight Capacity Optimization Assisting MIGration Operations (AMIGO) program is designed specifically to help you upgrade quickly, easily and safely.   Please read on!!

The BMC Assisted MIGration Operation is a program designed to assist customers with the planning of product upgrades to a newer version – "Success through proper planning". Our focus is to help direct and guide customers in the best path for success to reach their goal. We help by identifying pitfalls early, during the planning stage and before the actual upgrade, installation, or migration occurs. This is accomplished by providing documentation, previous knowledge and experience, and published best practices. This program is offered to BMC customers currently subscribed to BMC Customer Support.

BMC will provide collateral containing upgrade best practices compiled by experts in BMC Support and R&D.  You can leverage this information to create your upgrade plan. Upgrade experts in BMC Customer Support will review your plan and offer advice to help ensure your success.

The attachments found within contain the following:

  1. AMIGO_BCO_Datasheet.pdf - A two-page summary describing the benefits of BMC's AMIGO program.
  2. AMIGO_BCO - External.pdf - A presentation which provides an overview of the AMIGO concept, workflow, and its benefits.

This information will help you “jump start” your upgrade planning.

The BMC AMIGO program includes:

» A “Question and Answer” session before you upgrade
» A review of your upgrade plan with Customer Support
» An upgrade checklist
» Helpful tips and tricks for upgrade success from previous customer upgrades
» A follow-up session with Customer Support to let them know how it went. This will help BMC to enhance the process.

 

To get started, review the BMC TrueSight Capacity Optimization Upgrade Checklist (see 3.c below) and then open a BMC Support issue containing your environment information (product, version, OS, etc.) and the planned date of the installation, if known.

We will then contact you promptly, and work with you to ensure a successful and timely outcome.

The following diagram provides an overview of the BMC TrueSight Capacity Optimization AMIGO process:

[Image: AmigoProcess.jpeg]

AMIGO Starter Phase

 

1. Create the starter issue. The issue summary should contain "AMIGO Starter" to identify this as an AMIGO starter issue.  Include the following in the details of the issue:

     a.     Where you are in the upgrade process (Analysis, Planning, Testing, etc.)

     b.     Current patch/Service Pack version along with customization and integration information

     c.     Planned time frame

     d.  If possible, review and respond to the "Request for Information (RFI) for BMC TrueSight Capacity Optimization AMIGO" document to provide BMC Customer Support with a more detailed environment overview: https://bmcsites.force.com/casemgmt/sc_KnowledgeArticle?sfdcid=000011561

 

2. BMC Customer Support will provide AMIGO collateral (this article) and work with you to answer any preliminary questions regarding the upgrade process.

 

3. Review the following items to establish an overview of the BMC Capacity Optimization upgrade.

     a. BMC TrueSight Capacity Optimization 9.5 Product Documentation - https://docs.bmc.com/docs/display/public/bcmco95/Upgrading

     b. BMC TrueSight Performance Assurance 9.5 Product Documentation -   https://docs.bmc.com/docs/display/public/bcmco95/Upgrading+BMC+Performance+Assurance

     c. BMC TrueSight Capacity Optimization Upgrade Checklist: https://bmcsites.force.com/casemgmt/sc_KnowledgeArticle?sfdcid=000011565

 

4. Once reviewed, a call can be scheduled to answer any questions on the upgrade collateral.

 

NOTE: We recommend that you review all product documentation in detail, a consolidated list can be found here: https://bmcsites.force.com/casemgmt/sc_KnowledgeArticle?sfdcid=000031706

 

 

5. After the collateral has been provided and any remaining questions have been addressed, the AMIGO Starter issue will be closed.

     a. Open additional (non-AMIGO) issues for any questions or concerns you may have regarding the upgrade.

     b. You will then prepare and document your upgrade plan.

 

AMIGO Review Phase

 

1. Create the review Issue. The Issue summary should contain "AMIGO Review" to identify this as an AMIGO review Issue.  Provide the BMC Customer Support team with the following in the Issue:

      a. Planned Upgrade Date

      b. Documented Upgrade Plan

      c. Any questions regarding the upgrade plan

 

2. BMC Technical Support will review the upgrade plan.

 

3. Once the upgrade plan is reviewed, a call will be scheduled by BMC Customer Support to discuss any feedback and to answer questions.

 

4. After all feedback has been discussed and questions have been addressed, the AMIGO Review issue will be closed. Open additional (non-AMIGO) issues for any questions or concerns you may have regarding the upgrade.

 

NOTE:  To enable time for a complete review of your upgrade plan, please provide all related details to BMC Customer Support at least two weeks prior to the planned upgrade date.

 

This link provides specific details for the AMIGO program for BMC TrueSight Capacity Optimization and requires a BMC Support Central login: https://bmcsites.force.com/casemgmt/sc_KnowledgeArticle?sfdcid=000011562

 

The AMIGO program is also offered for other BMC products, and the link below provides additional information on how to obtain information specific to those products: https://bmcsites.force.com/casemgmt/sc_KnowledgeArticle?sfdcid=000011566

 

 

Please be aware - the AMIGO Program provides help planning your upgrade. It does not provide expedited resolution for technical issues encountered during the upgrade process, nor is it a substitute for a Professional Services engagement. If you desire assistance with the execution of your upgrade plan, please contact your BMC Account Manager for help engaging BMC Professional Services.

 

- thanks for reading,

timo

 

 

Got a topic of interest, or want to provide feedback on this month's article?  Post it here.

 

Miss a Pulse? Just follow this link --> BMC TrueSight Support Blogs


BCO Data Warehouse - an Introduction

 

The Data Warehouse (DWH) is an essential component in BMC Capacity Optimization (BCO) which is managed by the Data Hub component.

 

The Data Hub uses a backend service, the Near-Real Time Warehouse (NRTWH), to store, organize, and calculate statistics for the collected data. This component is always running inside the Data Hub.

 

The Data Warehouse administration menu lets you inspect and tune the mechanisms that control the warehouse.

To access this menu, go to Administration > Data Warehouse in the Navigation frame.

 

Data Warehouse Activity

The Data Warehouse activity consists of:

 

Data Aggregation

  • ETL Tasks collect data into a stage table
  • The warehousing engine calculates aggregations and splits data into summary tables at different time resolution qualities (detail, hour, day, and month)
  • Data is aggregated based on internal rules (derived rows); this is referred to as hierarchical data aggregation

Data classification

  • The day and hour class definitions are used to categorize data as specified by the Calendar

Data aging

  • Data classified as old by customizable aging parameters is deleted from the warehouse tables

Custom statistics

  • Users can define additional statistics to perform calculations on data series (for example, percentages or baselines)

 

A Data flow report is visible in the Data Warehouse section of the console, or through one of BCO's Diagnostic reports. This provides a way to check BCO import activity and keep it under control, as the number of rows processed per day is a good indicator of the system's health and ability to process records.

 

When performing historical imports (for example, if you are planning to bulk load more than two million rows), it is strongly recommended to split the ETL data in smaller chunks, limiting the amount of information processed at one time. This prevents congestion in the Near-Real-Time Warehouse engine.

 

 

 

Data Warehouse (DWH) Reference Model

The Data Warehouse can host different types of data:

 

  • Time series (TS): Performance or business driver data representing metrics over time, with two subtypes:
    • Sampled: metric samples taken at regular time intervals
    • Delta records
  • Custom structures (CS): Data that represents generic records with custom attributes, with two subtypes:
    • Buffer tables, containing data that is copied into BCO for further processing, but is generally not important for direct analysis
    • Item-level detail tables, containing data that represents the details of an item that are important for analysis purposes (for example, errors for a specific page)
  • Object relationships (OBJREL): Data that represents relationships between entities
  • Events (EVDAT): Data that represents events

 

The primary purpose of the DWH is the collection of historical time series data. A time series is a sequence of samples or statistics for a certain measurement, each corresponding to a point in time. The DWH contains both time series samples and statistics, aggregated at different time resolutions (hour, day, and month).

 

All time series are associated with a measured object, described according to a reference model. In its most general form, the reference model for measured objects is displayed in the figure below:

 

[Image: RefModel.jpg]

The Reference Model comprises the following components:

 

  • An entity represents a single system (for example, a database instance or web server) or a business driver. There are three categories of entities: Domains, Business Drivers, and Systems (https://docs.bmc.com/docs/display/public/bcmco95/Entity+types); each category has its own set of sub-types that can also be assigned to the entity.
  • An object is a metric for a system or business driver for which data is collected (for example, the CPU utilization of a server or the FTP transfer bit rate).
  • The location tracks the physical location from where a metric was observed (for example, the FTP transfer bit rate when a file is downloaded from New York or from San Diego).
  • A subobject represents finer details of a metric (for example, a metric measuring the free space of a disk could have details about the free space of each disk partition as its subobjects).

 

Each available object/metric has a standard name in BCO which adheres to the naming convention defined for BCO datasets. A complete metric listing can be seen in the console under Administration > DATA WAREHOUSE > Datasets & metrics, or in the product documentation (https://docs.bmc.com/docs/display/public/bcmco95/Datasets+and+metrics).

 

 

Each metric has a type, which defines the unit of measure and how statistics about that data should be collected. The metric types in BCO version 9.5 are listed in the table below:

 

Type                  Description
GENERIC               Generic counter, absolute value
PERCENTAGE            Percentage counter
COUNT                 A count of events, absolute number
RATE                  A frequency, in events/sec
POSACCUM              Positive accumulation counter
CONF                  Configuration data (string)
NEGACCUM              Negative accumulation counter
ELAPSED               Elapsed time, in seconds
WEIGHTED_GENERIC      Generic counter, absolute value, weighted
PEAK_PERCENTAGE       Peak percentage counter
PEAK_RATE             Peak frequency, in events/sec
DELTA                 Difference between subsequent samples
WEIGHTED_PERCENTAGE   Percentage counter, weighted

 

Measurement units and formats use common standards:

 

Type                 Format
Timestamps           YYYY-MM-DD HH24:MI:SS
Elapsed times        seconds (duration)
Percentage metrics   from 0 to 1
Rate metrics         events/sec

 

 

In summary, the reference model specifies:

  • The standard for object structure
  • The standard for metric names
  • The standard for measurement units
  • The time granularity, which is automatically adjusted by the data warehouse

 

 

Data Flow Report

The Data flow report page in the BCO console summarizes warehousing operations in terms of imported, derived, reduced, and aged rows, and in terms of duration. This is a daily report, so you can monitor the growth of BCO and check how many rows are stored each day at the detail level. Also see https://docs.bmc.com/docs/display/public/bcmco95/Data+flow+report

 

The page and the report provide the following information:

  • Data Load Capacity Used summary:
    • Loaded daily (last 30 day average):  number of rows in the data flow report that are loaded daily. The value is averaged out for the last 30 days.
    • Estimated daily capacity (8 hour period): estimated maximum daily throughput sustainable by the deployment, considering a processing time of 8 hours.
  • TS: Referring date (timestamp): Current day statistics are updated at regular intervals
  • Derived Rows: number of derived rows (sum)
  • Processing Count: number of rows processed by all threads (sum)
  • Processing Rows: number of rows processed from stage tables (sum)
  • Processing Throughput: total processing count divided by total processing time (aggregation on all DWH threads)
  • Processing Time [S]: number of seconds spent in processing, with at least one active thread (sum)
  • Reduced Rows: number of rows stored in the D and MDCH tables
  • Split Rows: number of rows generated due to a split, having a "TS+duration" that exceeds the hour limit (sum)
  • Stored Rows Conf: number of rows stored in the CONF_VARIATION table (sum)
  • Stored Rows Day: number of rows stored in the DH table (sum)
  • Stored Rows Detail: number of rows stored in the DETAIL table (sum)
  • Thread Processing Throughput: total number of rows processed, divided by the thread processing time

This report can be very useful to Support in helping determine where a problem lies!

 


 

The image below is an example of a Data flow report taken from one of our BCO lab servers:

[Image: DataFlowReport.jpg]

If you completed a sizing exercise with BMC before you installed and started using BCO, you will have a basis for comparing your warehouse to your initial sizing estimates, which relate to the number of rows that the warehouse is able to process. This can be useful in cases where your initial estimates turned out lower than your actual load (due to the popularity of gathering capacity information from various ETLs), and you are now processing more data than the warehouse is able to handle.

 

Do you think BCO queries are taking too long?

Generally, we recommend that the DWH processing time be less than 4 hours, which is set as the default in BCO for your system tasks and ETLs. When processing time exceeds this, you may notice a warning message similar to the one below:

BCO_ETL_WARN301: ETL "task name” [nn] on scheduler [n] is running since yyyy-mm-dd hh-mm-ss. The expected execution time is less than 4 hours.

 

Unless you have a dramatic increase in the number of new entities imported, or are playing catch-up with older historical data, the row Processing Count and Processing Time from the Data flow report should be relatively close each day, and you should not see this message. If they are not, then there could be something else going on, and it should be investigated.

 

 

Additional Reference

If you run into problems with the warehouse or Datahub, these articles may be helpful in resolving the issue, or at least determining where it is:

 

Datahub page is not responsive and takes several minutes to refresh in GUI. Datawarehouse performance issues KA400067,

https://kb.bmc.com/infocenter/index?page=content&id=KA400067

 

The BCO console is taking a long time to complete tasks that at other times were faster. What can be done to determine why the BCO console is running slowly? KA376940, https://kb.bmc.com/infocenter/index?page=content&id=KA376940

 

How to quickly assess the health of a BCO installation - Daily checks over a BCO installation, KA 413301,

https://kb.bmc.com/infocenter/index?page=content&id=KA413301

 

 

I hope you have found this article to be useful. Feel free to comment on it, or to suggest other topics of interest for future articles.

Miss a Pulse? BMC ProactiveNet Pulse Blogs

 

thx, timo


In BMC Performance Assurance, the General Manager Lite utility is designed to monitor the collection, transfer, processing, and population status of nightly Manager runs, and to provide a time series view of the stability of the environment over the last 30 days (by default).  It can also track the progression of that data into the BCO data warehouse, if BCO has been implemented.

 

There are two components to General Manager Lite:

 

  • General Manager Lite Core
    • BPA 9.0 requirements: 9.0.03 (9.0 SP3) and later
    • BPA 7.5.10 requirements: 7.5.10 Cumulative Hot Fix #2 (#202020) and later (recommended)
  • BCO/BPA Status and Recovery script
    • BPA 9.0 requirements: 9.0.03 Cumulative Hot Fix #4 (#30040) and later
    • BPA 7.5.10 requirements: 7.5.10 Cumulative Hot Fix #2 (#202020) and later

 

The BCO/BPA Status and Recovery (BCO_BPAStatusAndRecoveryManager.pl) script is optional, and must be run on a UNIX console. When used, the benefits are:

  • The script prompts you for the information necessary to implement General Manager Lite (rather than requiring a manual setup, which is described below)
  • The script will automate the execution of the General Manager scripts via the BPA pcron facility

Implementation with the BCO_BPAStatusAndRecoveryManager.pl script

The BCO_BPAStatusAndRecoveryManager.pl script must run on a UNIX console, and you will need to enter the requested information as prompted. The section below explains the typical prompts and gives guidance on the responses:

 

>> Enter BPA Console Name[s] (Multiple Consoles comma separated (Example console1, console2)) (Current Value=localhost)

 

The General Manager Lite (GMLite) component must run on a Linux system where the BPA console is installed, but it can communicate with all BPA Unix, Linux, and Windows consoles in your environment in order to build a centralized view of your BPA data processing.  For this prompt, specify a list of BPA consoles for GMLite to contact to obtain BPA data processing information on a nightly basis.

 

>> Enter Daily Script Execution Time (HH:MM)   (Current Value=20:00)

 

This prompt sets the time of day at which the GMLite scripts execute.  By default, the script will execute at 8 PM.  This time should be (a) after your last Manager run has finished processing for the day, (b) after the data import into BCO is complete (if applicable), and (c) at a time when recovery populates of data into BCO could be attempted (if applicable).

 

>> Enter BPA GeneralManager Port  (Current Value=10129)

 

This is by default port 10129, and that port is not commonly customized.

 

>> Enter BPA Output Directory Where the Data will be put  (Current Value=$BEST1_HOME/local/manager/status/GeneralManagerLite)

 

Specify where the GMLite output should be written on this console if you don't want to use the default location.

 

>> Enter gnuplot install directory (Do not specify anything if you wish use one in your path) (Current Value=undefined)

 

General Manager Lite can create a web page that includes charts reporting the number of computers configured, collected, transferred, processed, populated into the BPA database, and imported into BCO.  This functionality requires that the 'gnuplot' utility be installed on your BPA console.  If gnuplot is installed and you would like to enable this functionality, specify the gnuplot location here.  On Linux, the default installation path for gnuplot is /usr/bin.

 

>> Enter are you configuring BCO BPA ETL Status reporting [Y|N] (You will need ORACLE_HOME, DSN, user name and password)  (Current Value=N)

 

If you are importing BPA data into BCO, General Manager Lite can be configured to monitor the success rate of the import of BPA data into BCO.  Answer 'Y' if GMLite should be configured to monitor BCO population success.

 

>> Enter BCO Oracle DSN (must be configured via tnsnames.ora (see http://www.orafaq.com/wiki/Tnsnames.ora))  (Current Value=undefined)

 

When integrated with BCO, supply the TNS Name of your BCO Database (as defined in the $ORACLE_HOME/network/admin/tnsnames.ora file).
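
For reference, a hypothetical tnsnames.ora entry might look like this (all names, hosts, and ports below are placeholders; match them to your BCO database, as in the tnsping output shown later in this article):

# $ORACLE_HOME/network/admin/tnsnames.ora
BCODB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = bco-db.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = BCODB))
  )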

 

>> Enter BCO ORACLE_HOME (Current Value=undefined)

 

When integrated with BCO, supply the path to your Oracle Client installation on the BPA console.  If you are using Unix Populate, this can be the same path specified in the $BEST1_HOME/local/setup/MpopulateOracleHome.loc file.

 

>> Enter BCO Oracle Password (Displayed encrypted)   (Current Value=undefined)

 

When integrated with BCO, supply the password for the BCO_OWN database user (schema owner).

 

>> Enter BCO Oracle User Name  (Current Value=undefined)

 

When integrated with BCO, supply the BCO database account that owns the BCO installation (by default, 'BCO_OWN').  In older BCO installations this may be CPIT_OWN.  You can validate it by checking in the BCO web interface under Administration -> System -> Configuration -> General -> Database Username (Schema Owner).

 

>> Enter Number of Days to recover starting from today  (Current Value=2)

 

When integrated with BCO, this is the number of days that General Manager Lite should look back for recovery of failed BPA data imports into BCO.

 

>> Enter Number per day of top BPA visualizer file errors to recover  (Current Value=10)

 

When integrated with BCO, this is the number of Visualizer files that General Manager Lite should attempt to recover each day.  The reason to specify a limit is to throttle the amount of recovery activity to prevent recovery populates from interfering with the nightly load of BPA data into BCO.

 

>> Enter BPA vis file directory  (Current Value=undefined)

 

When integrated with BCO, this is the archive directory where the BPA Visualizer files are to be copied by General Manager Lite when they need to be recovered by the BPA Recovery ETLs configured in BCO.

 

 

Example Output (UNIX only) from the BCO_BPAStatusAndRecoveryManager.pl script

> $BEST1_HOME/bgs/scripts/BCO_BPAStatusAndRecoveryManager.pl
INFO: Using path /usr/adm/best1_9.0.00/bgs/scripts/BCO_BPAStatusAndRecoveryManager.pl
Info: Using BEST1_HOME=/usr/adm/best1_9.0.00
Info : reading input file /usr/adm/best1_9.0.00/local/setup/BCO_BPAStatusAndRecoveryManager.opt
Please answer the following questions regarding the operation of GeneralManagerLite
Enter BPA Console Name[s] (Multiple Consoles comma separated( Example console1,console2))  (Current Value=localhost)
[ Hit Return to Accept Current Value ]) ?vl-hou-cus-sp55.bmc.com
Enter Daily Script Execution Time (HH:MM)   (Current Value=20:00)
[ Hit Return to Accept Current Value ]) ?08:00
Enter BPA GeneralManager Port  (Current Value=10129)
[ Hit Return to Accept Current Value ]) ?
Current Value=10129 kept
Enter BPA Output Directory Where the Data will be put  (Current Value=$BEST1_HOME/local/manager/status/GeneralManagerLite)
[ Hit Return to Accept Current Value ]) ?
Current Value=$BEST1_HOME/local/manager/status/GeneralManagerLite kept
Please answer the following questions regarding the operation of BCO_BPAtimeAnalysisWebPageCreate
Enter gnuplot install directory (Do not specify anything if you wish use one in your path)  (Current Value=undefined)
[ Hit Return to Accept Current Value ]) ?/usr/bin
Enter are you configuring BCO BPA ETL Status reporting [Y|N] (You will need ORACLE_HOME, DSN, user name and password)  (Current Value=N)
[ Hit Return to Accept Current Value ]) ?Y
Please answer the following questions regarding the operation of BCOStatus
Enter BCO Oracle DSN (must be configured via tnsnames.ora (see http://www.orafaq.com/wiki/Tnsnames.ora))  (Current Value=undefined)
[ Hit Return to Accept Current Value ]) ?ORA112DB_SP71
Enter BCO ORACLE_HOME  (Current Value=undefined)
[ Hit Return to Accept Current Value ]) ?/data1/oracle/product/11.2.0/client_1
Enter BCO Oracle Password (Displayed encrypted)   (Current Value=undefined)

[ Hit Return to Accept Current Value ]) ?BmcCapac1ty_OWN
Enter BCO Oracle User Name  (Current Value=undefined)
[ Hit Return to Accept Current Value ]) ?BCO_OWN
Please answer the following questions regarding the operation of BCORecover
Enter Number of Days to recover starting from today  (Current Value=2)
[ Hit Return to Accept Current Value ]) ?
Current Value=2 kept
Enter  Number per day of top BPA visualizer file errors to recover (Current Value=10)
[ Hit Return to Accept Current Value ]) ?
Current Value=10 kept
Enter BPA vis file directory  (Current Value=undefined)
[ Hit Return to Accept Current Value ]) ?/data1/best1data/bcovisrecover
Warning : The directory does not exist, attempting to create /usr/adm/best1_9.0.00/local/manager/status/GeneralManagerLite
Info : testing BPA console=vl-hou-cus-sp55.bmc.com
OS=Linux
Info : Running /data1/oracle/product/11.2.0/client_1/bin/tnsping ORA112DB_SP71
Info :
TNS Ping Utility for Linux: Version 11.2.0.1.0 - Production on 17-SEP-2013 09:52:16
Copyright (c) 1997, 2009, Oracle.  All rights reserved.

Used parameter files:

Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = vl-sjc-cus-sp71.labs.bmc.com)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = ORA112DB)))
OK (650 msec)

Info : BCO ETL will be queried
pcrontab: can't find task ID in your pcrontab file.
Info : unable run pcrontab to unschedule : /usr/adm/best1_9.0.00/bgs/scripts/pcrontab.sh -unschedule 03 : return 1536
Info : no runs to unschedule
Info : Scheduling : /usr/adm/best1_9.0.00/bgs/scripts/pcrontab.sh -schedule 03 "00 08 * * * /usr/adm/best1_9.0.00/bgs/scripts/BCO_BPAStatusAndRecoveryManager.pl -r >
/usr/adm/best1_9.0.00/local/manager/log/BCO_BPAStatusAndRecoveryManager.log 2>&1"

 

 

General Manager Lite web page output

If gnuplot (www.gnuplot.info) is available on the BPA Linux console where General Manager Lite is scheduled, General Manager Lite will automatically create web reports that summarize the data collection, transfer, processing, populate, and BCO import success of your BPA environment.

The reports are created by default in the $BEST1_HOME/local/manager/status/GeneralManagerLite/BCO_BPAWebReport directory, and can be viewed via a local web browser running on your Linux console or shared out via a web server running on the BPA console.
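
If no web server is running on the console, one quick (unofficial) way to share the report directory is Python's built-in HTTP server, assuming Python 3 is available on the machine:

# Serve the GMLite report directory on port 8080
cd $BEST1_HOME/local/manager/status/GeneralManagerLite/BCO_BPAWebReport
python3 -m http.server 8080   # then browse to http://<console>:8080/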

 

Below is an example of the chart output from three different BPA consoles. You likely will need to zoom in to see it better, but I wanted to show an image that had many nodes in it to give you a better idea of what this looks like.

 

The legend for the data is represented by:

  • Red line -- The number of configured BPA computers.  This is the number of computers included in domain/policy files in an active BPA Manager run
  • Green line -- The number of computers that successfully collected data in the environment
  • Blue line -- The number of computers that successfully transferred data back to the BPA console
  • Purple line -- The number of computers that were successfully processed and included in a Visualizer file by the BPA console
  • Cyan line -- The number of computers that were successfully imported into BCO by the BPA ETLs

[Image: GMlite_output2.jpg]

 

 

Detailed Information about the underlying General Manager Lite scripts

In order to obtain this functionality, the following are required:

1. A BPA console:

    a. UNIX console, 7.5.10 or later
    b. UNIX console 9.0.00 needs 9.0.03 (9.0 SP3) or later
    c. Linux console, for 9.5 and later

 

2. You need a Perl script (GeneralManagerLite.pl) that must run on a UNIX console, and an updated GeneralManagerClient (this enhancement is recorded as QM001745812). These are available as part of the 7.5.10 UNIX console patches, beginning with June 2012.

 

Additional updates have been made since June 2012 (QM001764244), and these are included in the 7.5.10 SP2 console patch from December 2012.  Additional enhancements have been made since December 2012, including support for Windows consoles (QM001781969 and QM001779687).  These were included in 7.5.12 Cumulative Patch 2 (May 2013).

 

The script only needs to be run from one of your consoles (UNIX only). It will gather processing statistics from all of your BPA (UNIX and/or Windows) consoles.

 

 

What the tool does
1. Identifies all the nodes in your environment (this is output as allNodes.csv file)
2. Identifies all manager run/domain mappings (this is output as domain2ManagerMap.csv file)
3. Identifies all duplicate nodes in your environment (this is output as duplicateNodes.csv file)
4. Identifies all failed nodes and categorizes them into collect, transfer and processing errors (this is output as failedNodes.csv and failedNodesNoAgent.csv files)
5. Obtains the remote agent logs for collect, transfer, and processing errors (including proxy collection).

 


How to run the tool manually

$BEST1_HOME/bgs/scripts/GeneralManagerLite.pl -c <Console Name> [-o <Output Directory> -p <General Manager Port> -l -d -i <manager run pattern>]

 

Where:

   Required parameters:

    Console Name            BPA console with GeneralManagerServer running, or a comma separated list of BPA consoles

   Optional parameters:

    Output Directory        Directory where results will be deposited; default is the current directory
    General Manager Port    General Manager port; default is 10129
    -l                      Get the Remote Agent and Proxy logs for detailed analysis
                            NOTE: this can take a lot of disk space and is the default
    -d                      Save results in date-stamped directories for the last 30 days (recommended configuration)
    -i                      Ignore/remove results for manager runs which match the pattern specified;
                            multiple patterns may be specified by using a comma

 

Note that the BCO_BPAStatusAndRecoveryManager implementation method described above is just a semi-automated method for running this script and supplying the necessary input parameters.

 


Instructions for using the manually run script
1. Find a location with a considerable amount of disk space if you are planning to acquire the optional log files (using -l), as they will consume a lot of disk space.
If you have specified that you want the collect logs, the logs are about 7 MB per node.  So if you have 1000 nodes with collect failures, you will need at least 10 GB of free space.

 

2. Run:

$BEST1_HOME/bgs/scripts/GeneralManagerLite.pl -c <Console Name> [-o <Output Directory> -p <General Manager Port> -l -d]


The script will generate a subdirectory for each BPA console which contains all relevant csv and log files.  The files are then zipped. These files can be sent to Customer Support as a summary of the entire BPA console environment.  It can take a while for the script to run (at 10 seconds per node, 1000 failing nodes will take about 3 hours). The more failures there are, the longer the processing takes.

Only "today" is captured when the script is run.  If you want to keep track of "history", you can set this up by specifying the -d flag (automatically keeps the last 30 days of results in date-stamped directories and removes data older than 30 days).  You should schedule the script to run every day, but note that it will overwrite the results from the previous day unless the -d option is used (or you manually archive the files).

 

If you have "special" manager runs, such as ones with no data collection where data is simply being reprocessed, you should remove these from the output by using the -i option.  Otherwise, they will produce incorrect results since they don't have the full complement of activities occurring.  Note that this is implemented by using a pattern match so that you don't have to specify the full names of manager runs.

 

NOTE:  If you are a Windows-console only installation, you can use a Linux VM to do a BPA console install in order to run the script.  You don't need the console to be actually running any Manager runs.

 

 

Interpreting the output

"Nodes" which didn't get successfully put into the database for a particular day are divided between failedNodes.csv and failedNodesNoAgent.csv because the type of follow-up required is likely to be different between the two groups of nodes. The error code associated with each node's status is provided in the .csv file: C means collect failure, T means data transfer failure, P means processing (no data created for input to the CDB) failure.

 

The error code numbers are attached to this article (CollectTransferErrorCodes.xls or KA373639), and are detailed in the associated logs for that node (if requested using -l). This enables a summary-level understanding of how many failures there are for the date, and how many nodes have the same kind of failure.  The purpose is to provide a convenient way to troubleshoot groups of nodes rather than doing them one at a time.  The details for each node are available in the associated log (if requested via -l), so low-level reporting is fully supported as well.
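
As a sketch of that summary-level triage, a one-liner like the following could tally failures by error code (this assumes the error code is the second comma-separated column of failedNodes.csv; check your file's actual layout first):

# Count nodes per error code, most common first
awk -F, '{print $2}' failedNodes.csv | sort | uniq -c | sort -rn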

 

The attached failedNodesNoAgent.csv lists all nodes with Collect errors 91, 92, or 94:

91  Error SD_COMM_BAD_HOST           Service daemon invalid host name provided (cannot find server or DNS error); the agent name is not known by the OS.

92  Error SD_COMM_BAD_PORT           Service daemon not installed on the remote node (connection refused); the product is not installed on the agent computer or the service daemon is not running.

94  Error SD_COMM_CONNECT_TIMEOUT    Service daemon connection timed out; the node is offline or the agent node is off the network.

 

 

Best Practices for Doing a Daily Health Check for BPA Consoles

(1) Review the GeneralManagerLite output as described above.  This gives an overall summary of how many nodes are under management, and the status of each node.  Also comparing results from day-to-day immediately highlights any change in the overall health as well as pinpoints the source of the changes.

 

An unsupported script is available (console_status_email_option.sh) to summarize this daily review and to email you the results.  The script is coded for a 9.0 console and is attached to this article and described in the attached Word document (OptionalStatus_email.docx)

 

(2) Use the General Manager GUI (displayed in Perceiver or BCO 9.0): Console Operations -> "Recover Runs" view.

Alternatively, you can use failedNodes.csv (output from GeneralManagerLite) or export the "Recover Runs" to csv if you prefer.

The idea here is to initiate any Recovery actions first, then work on the data collection problems which typically require more analysis to resolve.

 

(3) Sort by "Populate Status".  For any Manager run which is not "OK", select the run, and then select "Recover".

 

(4) Sort by "Transfer Fail".  For any Manager run which doesn't have a value of 0, select the run, and then select "Recover".

 

(5) Sort by "Collect Fail".  Use the corresponding Console Reports -> "Node History" view to establish the precise problem (using the error code), how many nodes have the same problem, and if the problem is persistent (using 3 or 5 day history setting). Perform remediation as indicated by the error code and cause.  Note that the results of successful remediation may not appear for up to 2 days depending on the problem fixed and how often the Manager run is scheduled for execution.

 

If you've specified the optional log gathering feature, the corresponding logs have already been retrieved from the remote nodes and zipped so that they can be sent to Customer Support.

 

(6) Rerun the GeneralManagerLite script after the recovery actions have been completed in order to assess the "recovered" overall health of the data flow for today.

 

When additional troubleshooting time is available, determining the root cause for repeating Population, Processing, or Transfer errors can avoid the need to "Recover" the run(s) each day.

 

The failedNodesNoAgent.csv file lists all nodes which are listed as under management by BPA but have no collection agent present.  Typically this requires an internal ticket to get the agent software installed (either on a proxy or a local agent).  Note that this condition can occur when a node has its OS upgraded but the corresponding BPA agent wasn't upgraded at the same time.

 

Additional Reference:

 

Tool which summarizes the results for all BPA consoles (using General Manager), and gathers logs for nodes with collection errors https://kb.bmc.com/infocenter/index?page=content&id=KA366377

 

Attachments:

 

console_status_email_option.sh    Script that can be used to summarize daily activity and to email you the results; the script is coded for a 9.0 console

OptionalStatus_email.docx         Example of the email sent by the action script

CollectTransferErrorCodes.xls     Describes the various error codes received from implementation of this methodology

 

I hope you are able to take advantage of this methodology. I think you will find it most useful in a larger environment.

Let me know what you think, or if you have a topic that you think would be useful to delve into here.

 

We're pulsing for you at BMC TrueSight Support Blogs

 

 

 

timo


BCO Data Marts and Materialized Views

 

In BCO, data marts are used to prepare data for use in reports; a BCO data mart view can be the result of a Structured Query Language (SQL) query or a materialized table.

 

Data marts rely on a basic set of views that hide the BCO data model and expose a more report-oriented view of data, referred to in BCO as “public views”. All the data that can be extracted from BCO is available through public views. BMC recommends that you use public views to access BCO data instead of relying on underlying tables that can change across versions of the product.

There are two ways the SQL statements are processed in BCO relative to data marts.

 

  • SQL statements used in the data mart can be executed on-the-fly as the query is executed (default)
  • A Materializer task is executed on a scheduled basis; it creates a table in the BCO database and populates it with the content of the SQL data mart. This is done when a query would require a lot of execution time (as in the case of complex queries), or when multiple reports access the same SQL data mart, so that the generation is performed only once. Materializing a SQL data mart requires storage space, so the process must be handled with care.

 

A SQL data mart allows you to further manipulate the data you have imported into BCO. SQL data marts are used for the preparation of data for advanced reports, for BCO SQL data mart administration, or for an external reporting tool. In particular, a SQL data mart can:

  • Rename columns using more appropriate labels
  • Select only the data series, metrics, and statistics of interest
  • Group results
  • Use grouping objects to apply custom grouping levels and classify measured data
  • Grant access to the data warehouse to an external reporting tool

BCO SQL data marts adopt the naming convention ER_V<SQL data mart name><id> and can be accessed using the database account reserved for reporting purposes, which by default is CPIT2_REP.
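For example, an external reporting tool connected as CPIT2_REP could read a data mart with a query like the one below (a sketch only; the data mart name is the one used in the debugging examples later in this article, and depending on your setup you may need to prefix the owning schema, such as CPIT_OWN, or rely on a synonym):

/* Read a BCO SQL data mart from the reporting account. */
select * from ER_V_FCASTING_0104_2018;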

 

Managing SQL data marts

 

You can see which SQL data marts are present in BCO by going to the SQL data marts page, where you can also edit or delete them, or add new ones.  From the BCO console, navigate to:

Administration > Advanced Reporting > Data marts > SQL data marts.  For a materialized SQL data mart, the Materialize now action (the gear icon) allows you to immediately refresh the materialization.

 

 

In the image above, the Name column of each row is hyperlinked; selecting the data mart of interest takes you to its detail page. The detail page shows the SQL used for the data mart, and from there you can also edit or delete it.

If you are not familiar with these structures and want to learn more, see the product documentation page at this link: https://docs.bmc.com/docs/display/public/bcmco95/SQL+data+marts

 

How to check a materialized view or data mart

 

If things do not seem to be working as you expected with either an out-of-the-box or an in-house developed view, here are some tips on how to determine what might be going wrong. In BCO 9.x, you would go to

Administration > Advanced Reporting > Data marts > SQL data marts

 

First, find the row associated with your view/data mart and note its Physical name and ERID. The ERID is the last group of numbers appended to the end of the Physical name. In the example below, the ERID is 2018. (You can also verify this by clicking the hyperlinked Name of your view and hovering over the named icon at the top left of the frame, which displays the SQL used to generate the view, View - id: 2018.)
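If you prefer to look the ERID up with SQL, a hypothetical sketch along these lines may help (it assumes the er_def definition table referenced in steps 5 and 6 below carries a name column for the data mart; adjust to what your schema actually contains):

/* List data mart ids whose name matches a pattern (assumed columns). */
select erid, name from er_def where name like '%FCASTING%';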

 

 

The examples we will be using here are of an in-house developed view. If you are creating a new view, are having trouble with the view itself, and get an error from the BCO UI like:

 

“The inserted view fails due to SQL errors. Please verify it.”

 

Try creating it from a SQL client like Oracle SQL Developer:

create view mysqltestview as (insert your SQL code here)

 

where you can verify that the syntax is OK.

Things to verify when writing SQL for a BCO SQL view are:

 

  • If the SQL query has an undefined list of columns to extract (select * from XXXXX) and some of them are duplicated or ambiguous. (As an example, you joined two tables and are doing a select * from them; this is correct SQL, but not usable for a materialized view in BCO.) Explicitly list the columns you want as your query output.

So instead of this SQL:
select * from sys_def d, sys_object o where o.sysid=d.sysid

You must write the SQL like this:

select d.*, o.sysobjid from sys_def d , sys_object o where o.sysid=d.sysid

  • If your SQL query contains comments using -- (double dashes):

select tableA.columnA,

--tableA.columnB,

tableA.columnD, tableB.ColumnE from tableA, tableB where tableA.columnA = tableB.columnA

 

replace the -- (dashes) with /* */ comment markers and it will work:

select tableA.columnA,

/* tableA.columnB,*/

tableA.columnD, tableB.ColumnE from tableA, tableB where tableA.columnA = tableB.columnA

 

1) Get the view's DDL definition, using the Physical name from the UI:
select dbms_metadata.get_ddl('VIEW','ER_V_FCASTING_0104_2018','CPIT_OWN') from dual;

From this output you could see something like:

DBMS_METADATA.GET_DDL('VIEW','ER_V_FCASTING_0104_2018','CPIT_OWN')
--------------------------------------------------------------------------------
CREATE OR REPLACE FORCE VIEW "CPIT_OWN"."ER_V_FCASTING_0104_2018" ("SYSID
", "DAYSBACK", "DT", "AVG_80TH_PRCTILE", "MAX_60TH_PRCTILE") AS
SELECT "SYSID","DAYSBACK","DT","AVG_80TH_PRCTILE","MAX_60TH_PRCTILE" FROM ER_V_FCASTING_0104_2018_G

Note that the view definition in the output above references a name ending in "_G". This indicates that the view hasn't been materialized successfully yet.

 

2) You can verify that the "_G" object is still a view:
select dbms_metadata.get_ddl('VIEW','ER_V_FCASTING_0104_2018_G','CPIT_OWN') from dual;

DBMS_METADATA.GET_DDL('VIEW','ER_V_FCASTING_0104_2018_G','CPIT_OWN')
--------------------------------------------------------------------------------
CREATE OR REPLACE FORCE VIEW "CPIT_OWN"."ER_V_FCASTING_0104_2018_G" ("SYS
ID", "DAYSBACK", "DT", "AVG_80TH_PRCTILE", "MAX_60TH_PRCTILE") AS
select sysid,daysback,trunc(ts)as dt,avg(Percentile_Cont_AVG)avg_80th_prctile,
avg(Percentile_Cont_max)max_60th_prctile from(SELECT distinct pm.sysid,sdd.ts AS
ts,daysback,PERCENTILE_CONT(0.80)WITHIN GROUP(ORDER BY sdd.avgvalue ASC)OVER(PA
RTITION BY pm.sysid,trunc(sdd.ts),pm.sysmetricid,daysback)as Percentile_Cont_AVG
,PERCENTILE_CONT(0.60)WITHIN GROUP(ORDER BY sdd.maxvalue ASC)OVER(PARTITION BY p
m.sysid,trunc(sdd.ts),pm.sysmetricid,daysback)as Percentile_Cont_max FROM cpit_o
wn.pv_sys_data_day sdd,cpit_own.pv_sys_metric pm,(select(trunc(sysdate,'MM')-1)-
add_months(trunc(sysdate,'MM'),-3)as daysback from dual)an_period WHERE pm.metri
c='CPU_UTIL' and sdd.sysmetricid=pm.sysmetricid and sdd.ts between add_months(tr
unc(sysdate,'MM'),-3)and trunc(sysdate,'MM')-1)group by sysid,daysback,trunc(ts)

 

3) When the view is materialized successfully, the _G suffix disappears, and the name of the underlying object changes to something similar to ER_MV2018_20130611105002, containing a materialization date timestamp:
select dbms_metadata.get_ddl('VIEW','ER_V_FCASTING_0104_2018','CPIT_OWN') from dual;

From this output you could see something like:

DBMS_METADATA.GET_DDL('VIEW','ER_V_FCASTING_0104_2018','CPIT_OWN')
--------------------------------------------------------------------------------
CREATE OR REPLACE FORCE VIEW "CPIT_OWN"."ER_V_FCASTING_0104_2018" ("SYSID
", "DAYSBACK", "DT", "AVG_80TH_PRCTILE", "MAX_60TH_PRCTILE") AS
SELECT "SYSID","DAYSBACK","DT","AVG_80TH_PRCTILE","MAX_60TH_PRCTILE" FROM ER_MV2018_20130611105002

You can double check this with the query below:
select dbms_metadata.get_ddl('TABLE','ER_MV2018_20130611105002','CPIT_OWN') from dual;

 

4) If you want to force the materialization of a valid view to run again, you can manually delete its last-materialization marker (remember to commit the change):
delete from er_props where erid=2018 and name like '%er.materializer.last.date'

5) You can also check the view properties:
select * from er_props where erid=2018

6) To check the view definition:
select * from er_def where erid=2018

 

7) An index on a view column can be created from Administration > ADVANCED REPORTING > Data marts > SQL data marts > Edit SQL data mart, in Advanced mode, as in the image below:

 


 

 

8) If you want to create an index manually, outside of the UI, on the TS column for a test, try this SQL:

create index x on ER_MV2018_20130611105002(ts)

9) You can verify existing indexes on materialized views:
select * from user_indexes where table_name='ER_MV2018_20130611105002'

 

I hope you will find these tips useful. They are also available as a Knowledge Base article, How to debug Materialized SQL Views / SQL Data Marts, at:

https://bmcsites.force.com/casemgmt/sc_KnowledgeArticle?sfdcid=000106688

 

As always, your comments are appreciated; let me know if there is something you would like to see in an upcoming Pulse.

Pulse atrophy?  Get Pulsed up at BMC TrueSight Support Blogs

Thx, timo



Share:|

BMC Capacity Optimization provides a great many useful features. I think it would be helpful to provide some daily checks you can do to make sure everything is working. This month we'll look at a great way to make sure everything is A-OK!

 

  1. Check Administration > System > Status
    • Check for the alert messages on top of the page
      • with the button on the right you can drill down into details
    • Verify all enabled components are in green, and their "last update date" is current
    • If these are red, check the system task "component status checker" logs for errors


 

 

2. Check database space on Administration > DATA WAREHOUSE > Status

    • Verify the estimated days to crash is reasonably far out (>10 days)
      • If it's low, verify that autoextensible is set to YES (you can also double-check from a SQL client; see the sketch after this list)
      • If neither condition holds, check DB space with your DBA
    • The "System Data Tables" section at the bottom shows the DB tables consuming the most space

 

 

3. Check Administration > DATA WAREHOUSE > Data flow report

    • Verify the benchmark is not RED (a RED benchmark can indicate a DB sizing problem)
    • Check that the values in the "Processing rows" column are near "Loaded daily (last 30 day average)"
      • Check that there are no big holes or spikes in the "Processing rows" values for past days
      • The current day's number of samples is normally lower, since the entire day has not passed
      • Gaps might mean a problem with the ETLs loading data, or with the data warehousing engine
      • Spikes might be the consequence of a previous hole, or of a manual data recovery activity

 

 

4. Check the Data Warehouse queues

(Administration > DATA HUB > Status > Core Services - Near-real-time Warehouse service on BCO <=9.0)

(Administration > COMPONENTS > Backend Services > Core Services - Near-real-time Warehouse service on BCO >=9.5)

    • Verify the "max queue age" column values are under 24 hours
    • Verify there are no yellow or red lines, that could mean data warehouse is slow or hung

 

 

5. Check scheduled tasks execution

(Administration > SCHEDULER > System tasks on BCO <=9.0)
(Administration > ETL & SYSTEM TASKS > System tasks on BCO >=9.5)

    • Verify the "Maintenance activity chain" is scheduled (WAITING) and green (did not have execution errors)
    • Verify the Component Status checker is scheduled (WAITING) or RUNNING, and green/blue (did not have execution errors)
    • Verify other scheduled tasks are not in ERROR, in case verify their logs, or execution history)

 

 

6. Check running tasks

(Administration > SCHEDULER > System tasks on BCO <=9.0)

(Administration > ETL & SYSTEM TASKS > System tasks on BCO >=9.5)

    • Verify the RUNNING tasks have been running for a reasonable time (<24 hours)
    • Verify the "Last Exit" values are not ERROR/WARNING; if they are, check the logs or execution history

 

 

7. Check scheduled ETL execution

(Administration > SCHEDULER > ETL tasks on BCO <=9.0)

(Administration > ETL & SYSTEM TASKS > ETL tasks on BCO >=9.5)

    • Verify your production ETLs are scheduled (WAITING) and "Last Exit" is green (no execution errors); if not, check their logs or execution history

 

 

8. Check running ETLs

(Administration > SCHEDULER > ETL tasks on BCO <=9.0)

(Administration > ETL & SYSTEM TASKS > ETL tasks on BCO >=9.5)

    • Verify the RUNNING ETLs have been running for a reasonable time (<24 hours)
    • Verify the "Last Exit" values are not ERROR/WARNING; if they are, check the logs or execution history

 

 

9. Check data availability with a quick analysis of a sample system in the Workspace

    • Check that the "Last activity" dates are today or yesterday
    • Press the "Quick analysis" button on the right to chart one metric (e.g., CPU_UTIL)

 

 

10. Here is an example of a Quick Analysis chart with data

 

 

I hope you will find this procedure useful. It is also available as the Knowledge Base article How to quickly assess the health of a BCO installation - Daily checks over a BCO installation at:

https://bmcsites.force.com/casemgmt/sc_KnowledgeArticle?sfdcid=000099751

 

As always, your comments are appreciated; let me know if there is something you would like to see in an upcoming Pulse.

Have you missed a Pulse or more? Get back on track at BMC TrueSight Support Blogs

Thx, timo

Share:|

This month’s topic brings to mind an entertaining book I read, written in a parable-like style by Spencer Johnson, entitled “Who Moved My Cheese?”. The book relates the kinds of changes that occur in work and life, and four typical reactions to those changes, through two mice, "Sniff" and "Scurry," and two "little-people," "Hem" and "Haw," during their regular search for cheese. Here’s a little of what the story is about...

 

All the characters live in a maze, which is supposed to be analogous to life, and look for cheese, which represents fulfillment, happiness, and success. Initially without cheese, the mice and the humans each pair off and travel through vast corridors searching for cheese. One day, both groups happen upon a cheese-filled corridor at one of the available cheese stations. Content with their find, the humans establish routines around their daily intake of cheese, gradually becoming complacent and arrogant in the process.

 

One day, Sniff and Scurry (the mice) arrive at their regular cheese station to find no cheese left, but they are not surprised. Noticing the cheese supply dwindling, they have mentally prepared beforehand for the arduous but inevitable task of finding more cheese. Leaving this cheese station behind, they begin their hunt for new cheese together. Later that day, Hem and Haw (the humans) arrive at the cheese station only to find the same thing, no cheese. Angered and annoyed, Hem demands, "Who moved my cheese?" The humans have counted on the cheese supply to be constant, and so are unprepared for this situation.

 

After deciding that the cheese is indeed gone they get angry at the unfairness of the situation and both go home starved. Returning the next day, Hem and Haw find the same cheese-less place. Starting to realize the situation at hand, Haw thinks of a search for new cheese. But Hem is dead set in his victimized mindset and dismisses the proposal.

 

Meanwhile, Sniff and Scurry have found a new and plentiful cheese station with fresh cheese! Back at the original cheese station, Hem and Haw are affected by their lack of cheese and blame each other for their problem. Hoping to change, Haw again proposes a search for new cheese. However, Hem is comforted by his old routine and is frightened of the unknown, so he knocks down the idea again. After a while of being in denial, the humans remain without cheese.

 

One day, having discovered his debilitating fears, Haw begins to chuckle at the situation and stops taking himself so seriously. Realizing he should simply move on, Haw enters the maze, but not before chiseling "If You Do Not Change, You Can Become Extinct" on the wall of the original cheese station for his friend to ponder.

The story goes on in an entertaining way and reminded me that life is about change and my reactions to it. If you have not read this book, I highly recommend it. It’s light reading and got me out of a rut that I had been in.

 

So, how does this relate to Capacity Management software? If I have been able to keep your interest thus far, by now, you certainly are wondering why I have been going on about this stupid story that you don’t have time to read… well…

 

There have been fabulous enhancements made to the look and feel of the BMC Electronic Product Distribution (EPD) site,

ones that should make it much easier to find what you are looking for: https://webapps.bmc.com/epd/

We have heard from many of you about the challenges navigating through this maze of information and BMC has made a significant improvement in this area! Some of the new features in place are:

 

  1. A revised Component View, which is the default view. It contains a flat list of all components contained in suites and standalone products to which you have an entitlement (aka license).
    • When you select your product, it defaults to the latest version.
    • Prior product versions can be selected by using dropdown menus.
    • The files have been organized into tabs to make it easier to select the one desired.
    • The list of products can be modified by using the platform option.
  2. You can revert to the old view by selecting Licensed Products View at the top of the screen.
  3. Selecting a component will open the view in a new frame. Use the “Return to Products” option on the top right of the screen to return to the product list. Do not use the Back button of your browser. Please note that you will need to enable your browser to Allow Pop-Ups when using this site.

Here is a screenshot of the Component View screen after initial login:

[Image: ProdSelection.jpg]

Using the Filter Products selection frame on the left side of the image above, I clicked Unselect All within the Product Category, checked only Capacity Optimization, and clicked the GO button, at which point the frame on the right refreshes to show the products within the selected category, as in the image below:

[Image: SelList.jpg]

 

 

In the example below, BMC Capacity Optimization was selected; the view refreshes and defaults to the current version of the product.

  • Previous versions may be selected by using the Version dropdown button
  • Available Platforms are similarly selected
  • Tabs allow for additional and related options

[Image: VersionSelection.jpg]

To select the products to download:

  • Click the Select check box
  • Make your download selection, click one of the Download buttons, and you’re off!

 

 

I hope you find this information helpful and welcome your comments.

 

Thx, timo

Share:|

There are several different problem symptoms that may be visible when the BCO file system fills up on the BCO Application Server (AS) or a BCO ETL Engine (EE) server.

If e-mail alerting is enabled, the most obvious symptom will be e-mail error messages from BCO reporting a failure of the Local Monitoring task:

  • Caplan Scheduler *** ALERT mail *** Task Local monitoring for Default [50] completed with 1 errors
    Filesystem full clean up

 

If e-mail alerting isn't enabled other symptoms include:

  • BCO Analysis and Predict reports failing to generate output with unexpected errors (such as the time filter not covering a period that contains data when a quick analysis shows that it does)
  • Analysis failing with the error, "java.io.IOException - message: No space left on device to write file: /path/file"

 

I broke this post down into 4 sections with the intent of making it easier to read and perform, if you need to. This is the cleanup procedure after a filesystem full problem, and it requires that cleanup be performed on both the AS and EE servers. The cleanup of these files is not to reduce space usage - it is to fix the BCO Scheduler and/or Datahub if a file system full condition has corrupted their working set files and they are now failing. So, removing these files really won't reduce disk space consumption all that much - but it can fix the problems the file system full condition caused.

 

Section 1: On AS, stop all the components and clean up the corrupted runtime files

 

1)     Access the AS via ssh as the BCO OS user.

2)     Change directory to the BCO home folder (usually /opt/cpit)

                cd /[BCO Installation Directory]
3)     Try to stop the BCO DWH in a clean way
               ./cpit stop datahub
4)     Wait five minutes or until the countdown ends
5)     Check that there are no other DWH jboss processes stuck
             ps -ef | grep jboss
6)     Issue a kill -9 $PIDNUMBER for every remaining jboss
7)     Check that there are no stuck run.sh processes:
               ps -ef | grep run.sh
8)     Issue a kill -9 $PIDNUMBER for every remaining run.sh
9)    Execute these commands (path relative to the base BCO Installation Directory):
  • BCO 9.5
rm  -rf datahub/jboss/server/default/data/kahadb/*
rm -rf datahub/jboss/dlq_messages/*
rm -rf datahub/jboss/server/default/tmp/*
rm -rf datahub/jboss/server/default/data/tx-object-store/*
  • BCO 9.0                   

rm -rf datahub/jboss/server/all/data/kahadb/*

rm -rf datahub/jboss/dlq_messages/*

rm -rf datahub/jboss/server/all/tmp/*

rm -rf datahub/jboss/server/all/data/tx-object-store/*

 

10)      Stop AS scheduler

                    cd /[BCO Installation Directory]
                    ./cpit stop scheduler

11)     Check there are no other Schedulers stuck

                   ps -ef | grep scheduler

12)     Issue a kill -9 $PIDNUMBER for every remaining scheduler

13)     Clean up the Scheduler task folder on the AS

  • BCO 9.5, 9.0

rm -rf scheduler/task/*

rm -rf scheduler/mif/notdelivered/*

rm -rf scheduler/localdb/*

14)      Stop AS datacuum

cd /[BCO Installation Directory]

./cpit stop datacuum

15)     Check that there are no other datacuums stuck:

     ps -ef | grep datacuum

16)      Issue a kill -9 $PIDNUMBER for every remaining datacuum

 

Section 2: Access EE using ssh as BCO user

 

1)     Stop EE Scheduler

cd /[BCO Installation Directory]

./cpit stop scheduler

2)     Check that there are no other schedulers stuck on the EE:

ps -ef | grep scheduler

3)     Issue a kill -9 $PIDNUMBER for every remaining scheduler on the EE

4)     Clean up the scheduler tasks configurations folder on the EE

  • BCO 9.5

rm -rf scheduler/task/*

rm -rf scheduler/mif/notdelivered/*

rm -rf scheduler/localdb/*

  • BCO 9.0

rm -rf scheduler/task/*

rm -rf scheduler/mif/notdelivered/*

rm -rf scheduler/localdb/*

5)      Stop the EE datacuum

                cd /[BCO Installation Directory]

                 ./cpit stop datacuum

6)     Check to see if any other datacuums are stuck

ps -ef | grep datacuum

7)     Issue a kill -9 $PIDNUMBER for every remaining datacuum

 

Section 3: Restart Components

 

1)     Restart the components you stopped to restore functionality ON BOTH MACHINES

2)     Run the "Component status checker" task
3)     Wait a minute and then access Administration > System > Status to check the status

 

Section 4: Check ETL and Chain Status

 

1)     Check the Administration > SCHEDULER > ETL tasks page and the Administration > SCHEDULER > System tasks page for RUNNING tasks that might have been stuck

2)     Take note of their ids and then force them to be ended in the BCO DB (a query sketch for listing candidates follows):
          update task_status set status='ENDED' where taskid in (XX,XX2,XX3);
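If you'd rather list the candidates with SQL first, a quick sketch against the same task_status table used by the update above:

          /* List tasks still flagged as running. */
          select taskid, status from task_status where status = 'RUNNING';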

 

This article can be found in its entirety, including steps for BCO 4.5 and 4.0, at the BMC Support site knowledge base as KA350370, Steps to recover BCO functionality after the AS or EE file system has become 100% full.

 

We hope you find this article informative - and also hope you never have to use it!

 

timo

Share:|

There's nothing worse than looking for something and not being able to find it when you need it.

 

With that in mind, I thought it would be helpful to de-mystify a situation that may occur if you are not seeing data older than 30 days for some of your systems, business drivers, or series data for some periods.

 

In the Capacity Optimization version 9.x console, under Home > Profile, make sure that you have the desired settings. If you are using version 4.5, these settings are also available, but under the logged-in user, and are covered later in this article.

 

You can change the options to hide systems, business drivers, and series that do not have recent data imported into BCO. The choices are:

 

  • Default (30 days): Systems, business drivers, and series that have not imported data for over 30 days will not be shown in BMC Capacity Optimization, nor will they appear in search results.
  • Do not hide: All entities and series will be shown.
  • Hide if data is older than N days: Systems, business drivers, and series that have not imported data in over N days will not be shown in BMC Capacity Optimization, nor will they appear in search results. This option lets you customize the default 30-day threshold.

This information is also available in our knowledge base article KA375029: BCO systems or business driver metrics disappeared:

https://kb.bmc.com/infocenter/index?page=content&id=KA375029 (login required)

 

If you are using BCO 9.X:

  1. Login into BCO with the credentials of the affected user
  2. On the upper right click on the "home" link
  3. On the page that appears click on the "Edit your profile" link

 

[Image: HomeProf.png]

  1. In the "Hide systems, business drivers and series without recent data" section there is an option to hide the data.
  2. Make sure the "Do not hide" is selected. If it's not selected, change the setting to "Do not hide" and save.
  3. This is a user setting, so this procedure needs to be repeated for every user affected, logging in with his credentials.

[Image: DoNotHide.png]

 

 

If you are using BCO 4.x

  1. Login into BCO with the credentials of the affected user
  2. In the upper right, click on the "username" link in the "Logged in as" part of the screen.    [Image: bco4HomeProf.png]
  3. In the "Hide systems, business drivers and series without recent data" section there is an option to hide the data.
  4. Make sure the "Do not hide" is selected. If it's not selected, change the setting to "Do not hide" and save.
  5. This is a user setting, so this procedure needs to be repeated for every user affected, logging in with his credentials.

 

[Image: bco4DoNotHide.png]

 

We hope you find this tip useful and thanks for viewing.

 

Got any ideas for what you would like to see? Let us know!

 

thx, timo

 

 

 

 

 

 

 

    

Share:|

Happy New Year, everyone! I hope your holidays were filled with joy and happiness!

 

I was thinking that a good way to start out the New Year would be to provide some information on how to improve the performance of your Visualizer data base, which is the heart of your BMC Performance Assurance Capacity Planning/Performance Management system. Now, bear in mind, I am not a Data Base Administrator (DBA), so you may have to enlist the help of your friendly DBA to accomplish these recommendations. I hope you will find this useful. This information can also be found in our Knowledge Base as KA362099, Performance Assurance Visualizer Re-indexing data base tables provides performance improvement (requires support login).

Re-Indexing Visualizer Data Base Tables to Improve Performance

Is your Manager/Automator data base population creeping up on your SLA availability window?  To improve the Visualizer data base performance, ask your DBA to plan and schedule a regular re-indexing of the following tables, which are heavily used by Performance Assurance in the Visualizer data base. There are other tables, but these are the ones that will benefit most.

 

CAXINTVL

CAXPSYSD

CAXNODED

CAXVMD

CAXWPARD

CAXPOOLD

CAXCPUD

CAXDISKD

CAXMEMD

CAXFILSD

CAXLVLDD

CAXNETID

CAXINTFD

CAXWCATD

CAXWRESD

CAXPROCD

CAXCOMMD

CAXUSERD

CAXHSTDD

CAXDATSD

CAXCLSRD

CAXLDSKD

CAXPARDD

CAXCPUVD

CAXEMVMD

 

Data population, summarization, and deletion of old performance data perform delete, update, and insert actions in the Visualizer data base. As Oracle indexes are not self-balancing, they will become fragmented after a large number of INSERTs and DELETEs, which may lead to significant performance degradation.

 

 

How can you tell if there will be any benefit?

Ask your DBA to run an Oracle AWR (Automatic Workload Repository) report, which can be generated through Oracle Enterprise Manager. Things to look for are index performance degradation, or very high values for physical read bytes, physical read total bytes, and physical write total bytes in the AWR report (a high magnitude of data alteration leads to fragmentation).
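For a single index, one common way to gauge fragmentation is sketched below (illustrative only: the index name is hypothetical, and ANALYZE ... VALIDATE STRUCTURE briefly locks the index, so run it off-peak):

/* Populate the INDEX_STATS view for one index at a time. */
analyze index CAXINTVL_IX validate structure;

/* A del_lf_rows/lf_rows ratio above roughly 20% is a common rebuild heuristic. */
select name, lf_rows, del_lf_rows,
       round(del_lf_rows / nullif(lf_rows, 0) * 100) as pct_deleted
from   index_stats;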

 

Justification

Indexes are Oracle data base objects that provide a fast, efficient method of retrieving data from data base tables. The physical addresses of required rows can be retrieved from indexes much more efficiently than by reading the entire table. Effective indexing usually results in significant improvements to SQL performance too.

Oracle's default index structure is B*-tree, which stands for "Balanced tree." It has a hierarchical tree structure. At the top is the header. This block contains pointers to the appropriate branch block for any given range of key values. The branch block points either to another branch block, if the index is big, or to an appropriate leaf block. Finally, the leaf block contains a list of key values and physical addresses (ROWIDs) of rows in the data base.

Oracle data bases experience a huge benefit from periodic index rebuilding. Oracle recognized this benefit when the Oracle 9i (and 10g/11g) online index rebuild feature made it possible to rebuild an Oracle index while the index is being updated.

 

Index Rebuild/Coalesce advantage

Performance radically improves after an index rebuild, especially:

1. When an index rebuild is combined with a table reorganization (using the dbms_redefinition package). This is especially useful when the data is accessed via index range scans and when the table is re-sequenced into index-key order (using single-table clusters, or via a CTAS with an order by clause).

2. When a heavily-updated index is rebuilt. In highly volatile data bases in which table and column values change radically, periodic index rebuilds will reclaim index space and improve the performance of index range scans.

"Index rebuilds are low risk - Because the new index is created in temporary segments, Oracle will never destroy the old index until the new index has been created.
"Index rebuilds are unobtrusive - Oracle indexes can be rebuilt online without interruption to availability and DML activity.

"Index rebuilds are cheap - The cost of the duplicate disk to store a new index tree is negligible, as are the computing resources used during a rebuild. Many Oracle professionals forget that unused server resources can never be reclaimed, and servers depreciate so fast that the marginal cost of utilizing extra CPU and RAM are virtually zero.

The image below (ref: http://docs.oracle.com/cd/E18283_01/server.112/e17120/indexes002.htm) illustrates the effect of an ALTER INDEX REBUILD or COALESCE on an index. Before the operation, the first two leaf blocks are 50% full. This means you have an opportunity to reduce fragmentation and completely fill the first block, while freeing up the second.

 

 

[Image: BefAft_AlterIDX.png]

 

When should one perform a rebuild? [Oracle Metalink Document ID 182699.1]
Oracle has 4 main features with regard to its internal maintenance of indexes that make index rebuilds such a rare requirement:

1.  50-50 blocks split and self-balancing mechanism

2.  90-10 block split mechanism for monotonically increasing values

3.  Reusability of deleted row space within an index node

4. Reusability of emptied nodes for subsequent index splits.

 

We hope you have found this article useful. Let us know.

thx, timo

Share:|

Resolving BCO and BPA issues faster through diagnostics

 

In many cases, technical support needs the product log files that provide additional information needed to isolate a particular problem amongst various product components. They are also needed when we escalate an issue to the development team.

 

Including logs and a screen shot of the error that you see (especially if it is in the GUI) when your ticket is opened will make your case move along faster to resolution and will save a lot of back-and-forth communication - before we even get to start working on your issue!

 

We see many cases where we only get a portion of a log in a screen shot. While this may be helpful for describing the initial problem, it does not always give us the complete picture, and we will most likely ask for the complete logs anyway. So, save yourself some time and send the complete logs if you can.

 

My memory is horrible and yours may be better, but either way, the links below will help you obtain the product logs that provide additional information to technical support and get your ticket progressing faster. This is not a complete list of every log, but it certainly covers the most common. Sending the appropriate log(s) will certainly help us serve you better.

 

Capacity Optimization Log Grabber – For problems with system components such as the Scheduler, Datahub, or Web console, technical support will usually ask for the Log Grabber output. Even if the UI version does not work, there is a command-line mechanism to obtain the log information.  See: KA350888 In BCO, how do I gather the log files using Log Grabber? (Requires support login)

 

Capacity Optimization ETL logs – The ETLs that are executed in BCO have their own set of logs that describe their activity. A problem related to your ETL usually means technical support will need the configuration, deployment logs, and possibly Log Grabber output.

See: KA350895 How to gather BCO ETL logs. (Requires support login)

 

Performance Assurance Agent Logs – Agent/proxy logs from the server or console where data collection is having problems are the most useful. Where it is a BPA console issue, a screen shot showing the error message is also helpful.

See: KA321920 Capturing the Perform log files from the remote node (Requires support login)

 

Performance Assurance Installation Logs

On UNIX/Linux:  /var/tmp or /tmp directory

The install_dir/BPA_install_log.txt log is created by the b1config[VVVV].sh script and will contain errors relating to the configuration stage of the installation. This log is most relevant when the install states it completed successfully but the product still does not work.

 

On Windows: %TEMP%\BPA_install_log.txt

 

Perceiver Logs – A screen shot showing what you see versus what you expected to see helps us determine the nature of the issue. If a Visualizer datasource is being used, verify the data for the missing node(s) is actually present in Visualizer. If the data is present in the Visualizer database, then we’ll need the Perceiver logs.

See: KA299353 Initial debugging of BMC Perceive problems (Requires support login)

 

You can use the BMC Performance Perceiver maintenance tool to view the log files.

 

Perceiver Installation logs:

On Windows : %TEMP%  - Log file created when you install Perceiver is perceiver_install_log.txt

On UNIX/Linux: /var/tmp - The log file created when you install BMC Performance Perceiver is perceiver_install_log.txt.

 

Visualizer – As with Perceiver, send a screen shot showing what is seen versus what is expected, and the Visualizer (*.vis) file that contains the system in question. It is also helpful to state which system this is for, if the *.vis file contains many systems. This will let us attempt to reproduce your issue “in house”.  However, if the data never makes it into the database, sending the UDR data obtained from your console, with a reference to the problematic server(s), will usually work.

Automator  - Please send the Automator run log

 

Visualizer Installation Log - The log file is written to <user_profile>\Application Data\BMCInstall and is named <systemname>-<timestamp>.log. If the installer is unable to create the log file, the application exits.

 

If the installer fails to create that directory, the log file is written to %TEMP% and is named <systemname>-<timestamp>.log

 

Getting logs to us - For files larger than 4 GB, we recommend that you send them to the BMC Support FTP site.

See: KA380369 How do I upload files to the BMC FTP site? (Requires support login)

 

We hope you find these tips useful and feel free to comment. 

 

From all of us at BMC – Happy Holidays!

Share:|

BPA and BCO 9.5 have gone to general availability (GA), each with many improvements and fixes to make things easier, more functional, and, we hope, more enjoyable to use.

 

What’s New

If you have not had an opportunity to see what you’re missing – take a look and see what’s new, as well as what’s been fixed in version 9.5! https://docs.bmc.com/docs/display/public/bcmco95/What%27s+new

 

What’s Corrected

If you have been following a particular issue, the link below documents what has been corrected in this version.

https://docs.bmc.com/docs/display/public/bcmco95/Known+and+corrected+issues

 

Service Packs, Cumulative Patches & Hot Fixes

We have discovered a couple of defects in Version 9.5 that Support can provide a solution for, should they occur in your environment. Fixes for these defects will be included in Cumulative Hot Fix #1 (CHF #1), available in mid-November. These issues are listed below:

 

KA404605: The Perform 9.5 Linux Console migrateManagerRuns.pl script gets msg, "INFO: No manager best1homes to migrate in version" during attempted Manager run migration, selecting Perform 9.0 console.

 

KA399092: In BPA 9.0 SP3 and BPA 9.5, when submitting a Manager run, Best1Manager incorrectly attempts to submit (schedule) the manual run twice.

 

There is one other defect you should be aware of and may encounter in BPA 9.0 or 9.5; if you do, we have a workaround for it:

 

KA401975: The Perform version 9.0 b1config9000.main isn't properly upgrading the best1_V.V.VV path in the 7.5.10 $BEST1_HOME/local/setup/.ssh/config file when migrating to 9.0.

 

The installation program for Performance Assurance 9.5 does not migrate the .ssh/config file from 7.5.10 or 9.0 installations. There is also a problem with the 9.0 install, where it does not migrate from an existing 7.5.10 install.  Both releases will properly migrate from an earlier agent version (7.5.00 or earlier).

Understanding Versioning in Service Packs, Cumulative Patches and Hot Fixes

A recommended read is an article entitled “Cumulative Hot Fixes for BMC Capacity Optimization, BMC Performance Assurance, and BMC Performance Perceiver”, which explains the Service Pack, Cumulative Patch, and Hot Fix process, and how and where to obtain them: https://kb.bmc.com/infocenter/index?page=content&id=KA400057

 

Documentation

As before, the general product documentation for BPA and BCO is online and has been updated; it can be found at: https://docs.bmc.com/docs/display/public/bcmco95/Home

 

I will confess that when I first started using the online documentation, I was frustrated. I had known for years (through versions 7.x) where to look for things in the hardcopy or PDF, and my, did I struggle with this. Fortunately, my location paralysis was short lived, and I now find it much easier to locate the things I am looking for. Of course, we are continuously improving the content, which will make it even more useful and friendly.

 

Supportability

Another confession I will make is that as a user of the products, prior to coming to the Technical Support group, I was always confused as to what was supported and for how long... and I never knew where to look for it! If you have ever found yourself in the same situation, this may help.

 

Software improvement is market driven and inevitable, which, taken in a somewhat casual sense, means “out with the old and in with the new”: older product versions will come to be out of support. With this, unfortunately, comes a certain amount of disruption at some point for anyone who uses a computer.

 

From a supportability perspective, it is unlikely Technical Support can provide a new fix for an out-of-support version of a product or component, as the development team is focused on supported and new releases.  Our recommendation will be to get to a supported version. Of course, this may not (does it ever?) fit your planning schedule or business needs. So, it is recommended that you take the appropriate steps as soon as you can to get to a supported version of the product if you are no longer on one.

Support Categories of Concern

Limited Support is the same as full support, with these exceptions:

  • No new patches or fixes will be created.
  • Technical Support will direct customers to existing fixes, patches, and workarounds applicable to the reported issue.
  • Technical Support will direct customers to upgrade to a more current version/release of the product as the solution to their problem in lieu of a patch or fix.
  • Research and Development will be engaged on critical issues only, and on a limited basis for problem identification.

No Support

  • We provide no active technical support.
  • You can find some assistance online in our knowledge base, as we will retain some of that information beyond the product version’s end-of-life.

 

Performance Assurance Support

With the release of BMC Performance Assurance 9.5, all versions 7.5.x are on limited support while version 7.4.10 is out of support. As a consequence, the BMC Capacity Optimization connectors to BMC Performance Assurance 7.5.x are now deprecated as well, and the connector to BMC Performance Assurance 7.4.10 is now dropped.

Capacity Optimization Support

With the release of BMC Capacity Optimization 9.5, Version 4.0 is out of support and Version 4.5 is in limited support.

BPA 7.5.10 and BCO 4.5 Maintenance

You can find out what Service Packs and Cumulative maintenance is available for the BCO 4.5 and BPA 7.5.10 at this link:   https://docs.bmc.com/docs/display/public/bco45/What%27s+new

Capacity Management Support Pages

Each product has a support page that is easily accessed and lists by product, what is supported and the respective retirement dates.  If you have ever had a problem making a case to get to a newer version of one of the Capacity Management products in your organization due to change management restrictions, this is a good link to use to provide the ammunition for this.  Support Information for BCO and BPA Capacity Management products can be located at this link: https://docs.bmc.com/docs/display/public/bcmco95/Support+information

 

I hope this month’s Pulse provides you with some useful information. We are interested in your feedback. Don’t be shy!

Best, timo
