In TSCO, my custom Datamart fails with Oracle error ORA-01555 when materializing it. Is there any way to avoid this situation?




    PRODUCT:

    TrueSight Capacity Optimization


    COMPONENT:

    Capacity Optimization


    APPLIES TO:

    TrueSight Capacity Optimization 11.5.01, 11.3.01



    QUESTION:

    This issue may occur when accessing a View in TrueSight Presentation Server (TSPS) that is based on a custom Datamart retrieving information for a large number of systems. It may also occur when a Datamart takes a long time to materialize: the data does not refresh and the materialization fails.

    Examples of these errors:

    1 - The coRest.log and coApp.log show that the View cannot even contact the Datahub to retrieve information for Datamart ID 1778:

    WARN  01/16 13:54:11.281 [https-jsse-nio-8043-exec-9] coRest General exception while processing request: [POST HTTPS://[hostname]:8280/dh-services/data-prv/v2/datamarts/1778/aggregatedata returned 500 and response: {
      "status" : "ERROR",
      "errorCode" : "BCO_PRV_ERR011",
      "message" : "[DatamartServiceRestWS] Failed to query datamart 1778"
    }]



    2 - The Datamart Materializer throws this error when materializing the Datamart:
    ------------------------
    [taskid=376]- BCO_DB_FAIL104: Unable to materialize data mart [2360] [Data mart name]
    StackTrace: com.neptuny.cpit.querymanager.QueryException: DYNAMIC ORA-01555: snapshot too old: rollback segment number 50 with name "XXXXXXXXXXX$" too small 

    sql-code : SQL Query
    at com.neptuny.cpit.querymanager.QueryManager.executeDynamicQuery(QueryManager.java:197) 
    at com.neptuny.cpit.querymanager.QueryManager.executeDynamicQuery(QueryManager.java:128) 
    at com.neptuny.cpit.querymanager.QueryManager.executeDynamicQuery(QueryManager.java:120) 
    at com.neptuny.cpit.database.AbstractDatabaseHelper.executeUpdate(AbstractDatabaseHelper.java:222) 
    at com.neptuny.cpit.database.AbstractDatabaseHelper.executeUpdate(AbstractDatabaseHelper.java:218) 
    at com.neptuny.cpit.warehouse.ErViewMaterializer.materializeView(ErViewMaterializer.java:420) 
    at com.neptuny.cpit.warehouse.ErViewMaterializer.materializeView(ErViewMaterializer.java:804) 
    at com.neptuny.cpit.warehouse.ErViewMaterializer.doMaterialize(ErViewMaterializer.java:1411) 
    at com.neptuny.cpit.warehouse.ErViewMaterializer.materialize(ErViewMaterializer.java:1335) 
    at com.neptuny.cpit.warehouse.ErViewMaterializer.materializeByErDefList(ErViewMaterializer.java:1565) 
    at com.neptuny.cpit.warehouse.ErViewMaterializer.materializeByTaskid(ErViewMaterializer.java:1546) 
    at com.neptuny.cpit.task.ErViewMaterializerTask.doExecution(ErViewMaterializerTask.java:119) 
    at com.neptuny.scheduler.task.AbstractTask.synchronousRun(AbstractTask.java:626) 
    at com.neptuny.scheduler.task.AbstractTask$InnerThread.run(AbstractTask.java:896) 
    Caused by: java.sql.SQLException: ORA-01555: snapshot too old: rollback segment number 50 with name "XXXXXXXXXX$" too small 

    ------------------------


    ANSWER:

    The Oracle error ORA-01555 occurs when a long-running query accesses data that are meanwhile being modified by other tasks. Oracle must ensure that the query always sees the data as they were when it started, so if those data are updated, the previous versions also need to be preserved until the query completes. The undo retention and UNDO tablespace settings determine how much space is reserved for these previous versions.
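    As a first check, you can inspect the current undo configuration directly in Oracle. This is a minimal sketch using the standard v$parameter view; the ALTER SYSTEM value is illustrative only, and any change should be agreed with your DBA:

    -- Inspect the current undo configuration
    SELECT name, value
    FROM v$parameter
    WHERE name IN ('undo_management', 'undo_retention', 'undo_tablespace');

    -- Example only: raise the retention window to 3 hours (10800 seconds)
    ALTER SYSTEM SET UNDO_RETENTION = 10800;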

    A related problem is timeouts during the materialization of a custom Datamart. The steps described below also help to avoid such timeouts.

    There are typically five ways to avoid these problems:

    1.  Optimize the Datamart query so that it performs better (see the EXPLAIN PLAN sketch after this list).
    2.  Materialize the Datamart in chunks, so that each chunk completes faster. This is a workaround that has also been used for some out-of-the-box (OOTB) Datamarts.
    3.  Configure Oracle with a higher undo retention or a larger UNDO tablespace, as in the sketch above. See the Oracle documentation for further details.
    4.  Reschedule the Datamart evaluation at a more convenient time.
    5.  Check the validity of the Datamart and extend it to cover a longer period, in case it is set to only a few hours (e.g. 8 hours).
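    Regarding point (1), a quick way to see where the query spends its time is Oracle's EXPLAIN PLAN facility. This is a minimal sketch that reuses the example query shown later in this article; substitute your own Datamart SQL:

    -- Load the execution plan of the Datamart query into the plan table
    EXPLAIN PLAN FOR
    SELECT *
    FROM sys_def
    WHERE systypeid IN (SELECT systypeid FROM sys_type WHERE systypeid IN (1, 2, 3, 4));

    -- Display the plan to spot full table scans or missing indexes
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);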
    To better understand what needs to be done for points (1) and (2) above, you can chunk your Datamart, which is a way of tweaking the related SQL. Instead of providing only the complete SQL, you also provide the property er.materializer.chunk.sql, which determines the list of entities on which the query fragmentation is based.
       

    This is an example, obtained by extrapolating the relevant parts of an ER_PROPS DB align file and using the er.materializer.chunk.sql property:

      
     
    Main query to be chunked:

    SELECT *
    FROM sys_def
    WHERE systypeid IN (
        SELECT systypeid
        FROM sys_type
        WHERE systypeid IN (1, 2, 3, 4)
    );
    This would be the new query with the chunking applied:

    SELECT * FROM sys_def WHERE systypeid IN ${CHUNK_IDS}

    And these are the properties to insert:

    INSERT INTO er_props (erid, name, value)
    VALUES (1570, 'er.materializer.chunk.sql', 'select systypeid from sys_type where systypeid in (1,2,3,4)');

    INSERT INTO er_props (erid, name, value)
    VALUES (1570, 'er.materializer.chunk.size', 1000);

    where 1570 is the Datamart ID.

    What is the value 1000 in the chunked query?
    This setting caps each chunk at a maximum of 1000 IDs taken from the sub-query result. We do not recommend changing this value; however, if you need to change it, specify it through the property "er.materializer.chunk.size". Keep in mind that 1000 is also the maximum number of expressions Oracle allows in an IN list (error ORA-01795).
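    If the property row has already been inserted and you later need to adjust the chunk size, an UPDATE along these lines would do it. This is a sketch only; the value 500 is illustrative, and it assumes the value column of ER_PROPS accepts a string:

    -- Illustrative: lower the chunk size for Datamart 1570 to 500 IDs per chunk
    UPDATE er_props
    SET value = '500'
    WHERE erid = 1570
      AND name = 'er.materializer.chunk.size';
    COMMIT;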

    TSCO fetches the sysids with the query in er.materializer.chunk.sql and splits them into chunks that are passed to the main query (er.view.sql) in place of the ${CHUNK_IDS} placeholder, which must be present for the flow to work. The chunk size is selected by the system and is 1000 by default.
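    For illustration only, suppose the chunk size were set to 3: with the example sub-query above returning the IDs 1, 2, 3 and 4, the materializer would execute the main query once per chunk, roughly as follows:

    -- Chunk 1: the first 3 IDs are substituted for ${CHUNK_IDS}
    SELECT * FROM sys_def WHERE systypeid IN (1, 2, 3);

    -- Chunk 2: the remaining IDs
    SELECT * FROM sys_def WHERE systypeid IN (4);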

    Additional information

    The TSCO documentation recommends allocating a minimum of 3 GB for the TEMP tablespace and 3 GB for the UNDO tablespace.
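    To compare the current allocation against that minimum, a query over the standard Oracle dictionary views can help (tablespace names vary per installation, hence the LIKE pattern):

    -- Size of the UNDO tablespace datafiles, in GB
    SELECT tablespace_name,
           ROUND(SUM(bytes) / 1024 / 1024 / 1024, 2) AS size_gb
    FROM dba_data_files
    WHERE tablespace_name LIKE 'UNDO%'
    GROUP BY tablespace_name
    UNION ALL
    -- Temporary tablespace files are listed in DBA_TEMP_FILES
    SELECT tablespace_name,
           ROUND(SUM(bytes) / 1024 / 1024 / 1024, 2)
    FROM dba_temp_files
    GROUP BY tablespace_name;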
      

     


    Article Number:

    000164294


    Article Type:

    FAQ/Procedural


