This document contains official content from the BMC Software Knowledge Base. It is automatically updated when the knowledge article is modified.
TrueSight Capacity Optimization
TrueSight Capacity Optimization 11.3, 11.0, 10.7, 10.5, 10.3, 10.0 ; BMC Capacity Optimization 9.5 ; BMC ProactiveNet Performance Management Suite
In TrueSight Capacity Optimization (TSCO), ETL execution is failing with a Program abnormally terminated [java.lang.OutOfMemoryError: GC overhead limit exceeded] error reported in the ETL logs:
ERROR FATAL - Program abnormally terminated [java.lang.OutOfMemoryError: GC overhead limit exceeded]
For example, here is the error generated during a TSCO vCenter ETL execution:
For a Service Extractor ETL the message will generally be reported in the $CPITBASE/scheduler/log/scheduler.out or scheduler.err file.
LP: TrueSight Capacity Optimization 11.0, 10.7, 10.5, 10.3, 10.0
DR: Capacity Optimization 9.5, 9.0, 4.5
This error typically indicates that the Java Heap Size for the JVM in which the ETL is running has been exhausted, and the recommended workaround is to increase the Java Heap Size of the ETL. Note that each ETL runs in its own separate JVM, which is executed by the TrueSight Capacity Optimization (TSCO) Scheduler.
This document covers increasing the Java Heap Size associated with the two different types of ETLs:
- Increasing the Java Heap Size for Regular ETLs
- Increasing the Java Heap Size for Service Extractor ETLs (for example, the vCenter Service Extractor)
Regular ETLs run in their own separate java process -- so each separately executed ETL results in the TSCO Scheduler spawning a separate Java process in which that ETL runs. Service Extractor ETLs run within the BCO Scheduler itself in a separate thread.
NOTE: If the /[TSCO Installation Directory]/customenvpre.sh file doesn't exist, copy the customenvpre.sh.sample file to customenvpre.sh and then make the recommended changes to customenvpre.sh described below.
The steps to increase the ETL JVM's Heap Size are:
(1) Edit the /[TSCO Installation Directory]/customenvpre.sh file on the ETL Engine where the ETL is running:
(2) Change this:
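The exact snippet is not preserved in this copy of the article. A plausible edit, using the ETL_HEAP_SIZE variable referenced later in this article (the value format and the 512 MB default shown here are assumptions based on the surrounding text):

```shell
# customenvpre.sh -- hypothetical before/after edit
# Before (the documented 512 MB default):
# ETL_HEAP_SIZE=512
# After (raise the per-ETL maximum heap to 1024 MB):
ETL_HEAP_SIZE=1024
export ETL_HEAP_SIZE
```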
(3) Restart the Scheduler service on the machine:
cd /[BCO Installation Directory]
./cpit restart scheduler
In general, Technical Support recommends increasing from the default of 512 MB to 1 GB, and only if you continue to experience out-of-memory errors increasing beyond that to either 1536 MB or 2048 MB. If you need to increase beyond 2 GB, we generally recommend contacting Technical Support to discuss the environment and the problem you are seeing. This is simply because 2 GB is a common Java heap size configuration, and Technical Support has less experience with environments using a larger Java heap size.
Note that each running ETL runs within a separate JVM, so when setting the Java Heap Size it is important to consider how many concurrent ETLs will be running in the environment in relation to the memory configuration of the computer. The ETL_HEAP_SIZE setting corresponds to the Java command line 'Xmx' setting, so this is the maximum heap size of the JVM, not the starting heap size of the JVM.
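A quick worked example of that concurrency consideration: because each ETL gets its own JVM, the worst-case heap demand from ETLs alone is (concurrent ETLs) x (ETL_HEAP_SIZE). The counts below are hypothetical, not values from your environment:

```shell
# Hypothetical sizing check: 6 concurrent ETLs, each with a 1024 MB max heap
CONCURRENT_ETLS=6
ETL_HEAP_MB=1024
# Each ETL JVM can grow to ETL_HEAP_MB, so multiply by the concurrency
echo "Worst-case ETL heap demand: $(( CONCURRENT_ETLS * ETL_HEAP_MB )) MB"
# prints: Worst-case ETL heap demand: 6144 MB
```

Remember that this total is in addition to the Scheduler's own JVM and the operating system footprint.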
Q: Do I need to also modify the ETL_PERMGEN_SIZE setting?
Typically no. If the ETL_PERMGEN_SIZE setting is too low, the ETL would fail with a PermGen-related error (for example, java.lang.OutOfMemoryError: PermGen space) rather than this one.
The "java.lang.OutOfMemoryError: GC overhead limit exceeded" error indicates a lack of Java Heap Space, not a lack of Java Perm Gen space.
Q: Do I need to restart the BCO Scheduler after changing the ETL_HEAP_SIZE?
No, regular ETLs run in a separate java process that reads the customenvpre.sh file on startup, so any ETL started after the file has been updated will be executed with the increased Java Heap Size.
Q: Is there any way to validate that the new ETL_HEAP_SIZE parameter has been properly set and has been picked up by the ETL?
If the ETL_HEAP_SIZE has been correctly set, you'll be able to see the increased heap size on the 'java' process command line when the ETL starts.
So, something like this:
> ps -ef | grep java | grep etl
cpit 25369 25365 15 13:40 ? 00:00:01 /data1/bmc/BCO/jre/bin/java -Dsrcid=121 -Dcpit.component=etl -Xmx512m -XX:MaxPermSize=128m -Djava.awt.headless=true -Djava.io.tmpdir=/data1/bmc/BCO/temp -Dfile.encoding=UTF-8 -DETLBASE=/data1/bmc/BCO/etl -DPID=25365 ...
The "-Xmx512m" means that the ETL_HEAP_SIZE in the customenvpre.sh is set to 512 MB.
After changing it to 1024, running 'ps -ef | grep java | grep etl' shows the increased heap size:
So, updated customenvpre.sh:
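The updated file contents are not preserved in this copy of the article. A plausible fragment, assuming ETL_HEAP_SIZE takes the value in MB as the surrounding text suggests (the exact file syntax may differ in your release):

```shell
# customenvpre.sh -- hypothetical fragment after the change; this should
# produce the -Xmx1024m seen in the ps output that follows
ETL_HEAP_SIZE=1024
export ETL_HEAP_SIZE
```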
New 'ps -ef | grep java | grep etl' output:
cpit 25935 25931 96 13:42 ? 00:00:04 /data1/bmc/BCO/jre/bin/java -Dsrcid=121 -Dcpit.component=etl -Xmx1024m -XX:MaxPermSize=128m -Djava.awt.headless=true -Djava.io.tmpdir=/data1/bmc/BCO/temp -Dfile.encoding=UTF-8 -DETLBASE=/data1/bmc/BCO/etl -DPID=25931...
For a Service Extractor, such as the VMware - vCenter Extractor Service:
(1) Edit the /[TSCO Installation Directory]/customenvpre.sh file on the ETL Engine where the ETL is running.
(2) Change this:
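The exact snippet is not preserved in this copy of the article. Because Service Extractor ETLs run inside the Scheduler's own JVM, the change raises the Scheduler Heap Size rather than ETL_HEAP_SIZE. The variable name below is a hypothetical placeholder; check the comments in customenvpre.sh.sample for the setting your release actually uses:

```shell
# customenvpre.sh -- hypothetical edit for the Scheduler JVM heap.
# SCHEDULER_HEAP_SIZE is an assumed name; confirm it against
# customenvpre.sh.sample in your installation before editing.
SCHEDULER_HEAP_SIZE=2048
export SCHEDULER_HEAP_SIZE
```

Then restart the Scheduler (./cpit restart scheduler) so the new heap size takes effect, since the setting applies to the Scheduler's own JVM.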
Q: Is 2048 MB of Scheduler Heap Size sufficient for all Service Extractor ETLs?
No. 2048 MB is a good starting point, but if the BCO Scheduler continues to terminate it may be necessary to increase the Scheduler Heap Size setting to a larger value (such as 3072 MB or 4096 MB, or, for a very large Service Extractor or a machine running multiple Service Extractors, an even higher value).
Some recommended guidelines:
- Reserve at least 1 GB of memory for the OS and other tasks running on the ETL Engine machine
- Contact Technical Support if you plan to increase the Scheduler Heap Size to more than 1/2 of the total physical memory on the machine
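The second guideline can be sketched as a quick Linux-only check before committing to a larger heap (the proposed heap value below is a hypothetical example; /proc/meminfo reports MemTotal in kB):

```shell
# Compare a proposed Scheduler heap (MB) against half of physical memory
PROPOSED_HEAP_MB=4096
TOTAL_MB=$(( $(awk '/^MemTotal:/ {print $2}' /proc/meminfo) / 1024 ))
if [ "$PROPOSED_HEAP_MB" -gt $(( TOTAL_MB / 2 )) ]; then
  echo "Proposed heap exceeds half of physical memory (${TOTAL_MB} MB total); contact Technical Support first"
else
  echo "Proposed heap is within the half-of-memory guideline (${TOTAL_MB} MB total)"
fi
```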