TrueSight Capacity Optimization (TSCO) 11.5.01 AFS log repeating, "BCO_AFS_FAIL102: AFS is taking too long to process data, this could be due to a lock in the database"

Version 1

    This document contains official content from the BMC Software Knowledge Base. It is automatically updated when the knowledge article is modified.


    TrueSight Capacity Optimization
    Capacity Optimization 11.5.01
    TrueSight Capacity Optimization 11.5, 11.3.01

    After upgrading to TSCO 11.5.01, the Automatic Forecast Service (AFS) log starts repeating the following message after about 6 hours:

           FAILED - BCO_AFS_FAIL102: AFS is taking too long to process data, this could be due to a lock in the database - last activity date: Wed Aug 07 17:43:42 CDT 2019
    The message repeats, with the "last activity date" never changing, until the TSCO Datahub where the AFS is running is restarted.

    On AFS startup the log reports:
         INFO  - AFS KILL request. No active threads found.
         WARN  - Could not identify exactly AFS status during startup. Consider it enabled.
         INFO  - AFS START process.
         <-- no further messages until the repeating BCO_AFS_FAIL102 message -->




    Environment recently upgraded from TSCO 11.0 or earlier to TSCO 11.5

    One possible source of this behavior is the cleanup of old Indicators that the TSCO Automatic Forecast Service (AFS) performs on startup.  This cleanup restructures the old AFS records into their new table location in TSCO 11.5.  In some environments this SQL may experience performance issues that cause the AFS to hang on startup and never be able to begin processing INDICATORS.


    The AFS hang problem during the old indicator cleanup is being tracked via Defect DRCOZ-23375, and a fix will be available in a TSCO 11.5.01 CHF and later to avoid the AFS thread hanging when the cleanOldIndicators phase is executed.


    Here is a potential workaround for the AFS cleanOldIndicators workflow hanging: 

    (1) Set the 'afs.cleanOldIndicators' property value to 'false'. 

    -- Check whether the afs.cleanOldIndicators value is currently set:
          SELECT * FROM CONF_PROPS WHERE NAME = 'afs.cleanOldIndicators';
    -- If not set:
          INSERT INTO CONF_PROPS (NAME, VALUE) VALUES ('afs.cleanOldIndicators', 'false');
    -- If set:
          UPDATE CONF_PROPS SET VALUE = 'false' WHERE NAME = 'afs.cleanOldIndicators';
    (2) Restart the TSCO Datahub 

    (3) Wait 30 minutes or so 

    (4) Run 'kill -3 [PID of datahub]'
    Note: kill -3 does not terminate the process. It triggers a thread dump that lists all the Java threads currently active in the Java Virtual Machine (JVM).

    (5) Capture the logGrabber output from the TSCO Datahub AS. 
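    Step (4) can be sketched as a small shell helper. The process-name pattern used to locate the Datahub JVM is an assumption; substitute the PID you identify on your own system. The key point is that SIGQUIT (signal 3) is caught by the JVM, which responds by printing a stack trace of every live Java thread to its stdout (typically the Datahub console log) and then keeps running:

    ```shell
    #!/bin/sh
    # Send SIGQUIT to a JVM to request a thread dump. The JVM intercepts
    # signal 3 and dumps all Java thread stacks to its stdout without
    # terminating the process.
    thread_dump() {
        pid="$1"
        kill -3 "$pid"   # signal 3 = SIGQUIT: thread dump, not termination
    }

    # Example usage (the pgrep pattern is hypothetical; adjust for your install):
    # DATAHUB_PID=$(pgrep -f 'java.*datahub' | head -n 1)
    # thread_dump "$DATAHUB_PID"
    ```

    If the AFS is hung in the cleanOldIndicators phase, the resulting thread dump should show the AFS thread blocked in that work, which is the evidence to attach when contacting support.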

    If this workaround addresses the AFS hang problem, the AFS can be run with the 'afs.cleanOldIndicators = false' configuration as long as necessary.

    There was a change in TSCO 11.3.01 which moved the Days Idle Indicator metric from the DETAIL to the HOUR level.  So, for TSCO 11.3.01 and later it is good to have the afs.cleanOldIndicators cleanup run once successfully to clean up the old data; once that cleanup is complete the task is done and doesn't need to be run again.  If the cleanup is never run, some old indicators simply remain at the DETAIL level, so the impact is just a small amount of extra database space consumption: some old rows that could be cleaned up will continue to live in the database, with no other visible symptom of the afs.cleanOldIndicators functionality being disabled.

    If the errors are not resolved using the above steps, collect the following information and reach out to customer engineering:
    •  Log Grabber output 
    •  Oracle AWR report
    •  Diagnostic Data flow report 


    Article Number:


    Article Type:

    Solutions to a Product Problem
