8 Replies Latest reply: Nov 28, 2012 4:57 AM by Jarl Groneng

Atrium Integrator - Marking items as Deleted

ryan NameToUpdate

Hello,

 

I am creating a number of import jobs using Atrium Integrator. The one function I have not been able to duplicate from AIE is the "MarkasDeleted" for items no longer found in the import data set.

 

In my example, I am pulling computer system + IP endpoint data for a large number of workstations. In AIE, there is an option to mark CIs not found as Deleted. Has anyone duplicated this feature in AI? If so, what is the most efficient method?

 

Thanks,

Ryan

  • 1. Atrium Integrator - Marking items as Deleted
    Dirk Anderson

    The most efficient way would be to edit the transformation in the spoon interface and include the step (qualification) for the "MarkasDeleted" flag.

     

    Open up the BMC Samples...good info there.

  • 2. Atrium Integrator - Marking items as Deleted
    ryan NameToUpdate

    Thanks...looked at the example and it's the same direction I am heading in. Essentially another transformation that does a lookup of records in the CMDB and compares them to the data source. It works great in small batches, but when expanded out to a class like Product, where there can be 1 million+ CIs, it's a taxing operation.

     

    I was hoping someone (or BMC) had a clever SQL update, Java function, or something similar for a more batch-oriented approach.

  • 3. Re: Atrium Integrator - Marking items as Deleted
    Manish Patel

    Hi Ryan,

    We do ship a sample delete job for reference. You might want to make a copy of it for your use case.

    The job name is BMC_Computer_System_Delete.

     

    Thanks,

    Manish

  • 4. Re: Atrium Integrator - Marking items as Deleted
    ryan NameToUpdate

    For anyone else looking for a more efficient method in AI to mark CIs no longer found in the source data as deleted...

     

    In AI, take a look at the merge rows diff step. This takes two streams of data (the CMDB as the reference, then your data source) and compares keys + values for changes. You need to be sure you are sorting each input stream by the keys. I would also recommend creating a checksum in AI and only comparing the key + checksum. This will speed up the process tremendously.
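    To make the idea concrete, here is a minimal Python sketch of what the merge-rows-diff + checksum approach does. This is not Kettle/AI code; the field names (`key`, `cksum`) and functions are illustrative, and it assumes both streams are already sorted on the key, as the post requires.

    ```python
    import hashlib

    def checksum(values):
        """Mimic an add-a-checksum step: hash the concatenated value fields
        so rows can be compared on one column instead of many."""
        return hashlib.md5("|".join(str(v) for v in values).encode()).hexdigest()

    def merge_diff(reference, source):
        """Two-pointer merge over two key-sorted streams, flagging each row
        the way a merge-rows-diff step does:
        'deleted'   - key only in the reference stream (CMDB)
        'new'       - key only in the source stream
        'changed'   - key in both, checksums differ
        'identical' - key in both, checksums match
        """
        out, i, j = [], 0, 0
        while i < len(reference) or j < len(source):
            if j == len(source) or (i < len(reference)
                                    and reference[i]["key"] < source[j]["key"]):
                out.append((reference[i]["key"], "deleted"))
                i += 1
            elif i == len(reference) or source[j]["key"] < reference[i]["key"]:
                out.append((source[j]["key"], "new"))
                j += 1
            else:
                flag = ("identical" if reference[i]["cksum"] == source[j]["cksum"]
                        else "changed")
                out.append((reference[i]["key"], flag))
                i += 1
                j += 1
        return out
    ```

    The two-pointer walk is why sorting both inputs on the key matters: each stream is read exactly once, so the comparison stays linear even at CMDB scale.
    
    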

     

    This method allows you to both mark as deleted and update/insert in one AI transformation.

     

    Feel free to message me for more details or an example.

  • 5. Re: Atrium Integrator - Marking items as Deleted
    Jarl Groneng

    Hi

     

    I would like more details on this, and an example if you have one.


    Regards,

    Jarl

  • 6. Re: Atrium Integrator - Marking items as Deleted
    Yogesh Somavanshi

    Hello Jarl,

    There are multiple ways to handle this:

     

    1>     Using the Merge Rows step as shown below.

     

    2>     If you are using 8.0, you can generate a delete transformation from the AUI wizard itself.

     

    3>     If it's 7604, we have shipped a sample ComputerSystem delete transformation in the SAMPLE folder of the Spoon repository; take a look at it and design one of your own.

     

     

    To learn more about the Merge Rows step, refer to the transformation Merge rows - mergs 2 streams of data and add a flag.ktr in the data-integration\samples\transformations folder.

     

    -Regards

    Yogesh

     

    [Inline screenshot: Merge Rows transformation setup]

     

     


  • 7. Re: Atrium Integrator - Marking items as Deleted
    ryan NameToUpdate

    I hope v8 does not use the same methodology as v7.6 for deleting CIs. A standalone delete job is not very efficient or practical when working in the tens of thousands of CIs.

     

    Yogesh's pic is essentially the setup. One key point: do not use the CMDB Lookup or CMDB Input connector, as the performance is very slow. Substitute a DB Lookup or DB Input wherever possible and tune your query. This is what I have found in my experience to date. Fingers crossed for v8.

     

    I would also suggest setting the "Nr of Rows in rowset" to something smaller than the default 50k. I generally range between 1000 and 2000. This limits the number of records in the pipe and helps the transformation run more smoothly, as the JVM isn't choked for memory. This is in the transformation settings under Miscellaneous.

     

    1. CMDB is the reference data set

    2. Discovery is to be compared to CMDB

    3. Sort both on the same key

    4. Use a checksum value to look for changes instead of comparing multiple keys

    5. After the merge diff, use a "Switch / Case" to either mark a CI as deleted, insert/update, or do nothing if there is no change.
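    Step 5 can be sketched in plain Python: the switch/case simply routes each flagged row down one of three branches. The flag values and branch names below are illustrative, not actual Kettle/AI identifiers.

    ```python
    def route(flagged_rows):
        """Split (key, flag) pairs from a merge diff into the three actions
        described above: mark as deleted, insert/update, or do nothing."""
        actions = {"mark_deleted": [], "upsert": [], "noop": []}
        for key, flag in flagged_rows:
            if flag == "deleted":
                actions["mark_deleted"].append(key)   # CI no longer in the source
            elif flag in ("new", "changed"):
                actions["upsert"].append(key)         # insert or update the CI
            else:                                     # 'identical'
                actions["noop"].append(key)           # unchanged, drop the row
        return actions
    ```

    Because unchanged rows are dropped immediately, only the small set of new, changed, or deleted CIs ever reaches the expensive CMDB write steps.
    
    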