
BMC AMI Data for Db2

3 posts authored by: Doug Wilson

Load is a fairly simple utility to manage, right?  I suppose it is if your input data is correct and you don’t allow discards; then it either works or fails.  But if you don’t control the input data and you must deal with the occasional discarded record, there are a few things to be concerned about:

  • Did any records get discarded?
  • Did more records get discarded than my limit?
  • Did my upstream processing provide no records to load?

All these conditions produce a warning (RC=4) or an error (RC=8).  But what if you want your own standard for return codes, alerting you to these conditions with numbers other than 4 and 8?


Load return code control using installation parameters or keywords

BMC AMI Utilities have installation parameters called +parms.  These set default values, and they can be overridden in individual utility jobs.  For many Load +parms there is an equivalent Load statement keyword that overrides the +parm, whether that +parm comes from the system configuration or is specified in the job.

Three settings correspond to the bullet points above; each has a +parm, an equivalent keyword override, and a default value:

  • Return code to warn you that a discard occurred.  Specify a value between 0 and 7.
  • Return code to let you know the error was due to exceeding the allowable discards specified with the DISCARDS keyword.  Specify a value between 8 and 15.
  • Return code to warn you that there were no rows to load.  Specify a value between 0 and 7.


Load always finishes with the greatest return code encountered.  Having discards is a warning unless you set a limit, and any error can override the return code to a greater value.
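For example, the DISCARDS keyword mentioned above is how a job sets its discard limit.  A minimal sketch of a Load statement with a limit of 100 discards (the DD and table names are placeholders; see the Load manual for the full syntax):

  LOAD DATA INDDN SYSREC
       DISCARDS 100
       INTO TABLE creator.table_name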

Best Practices

Develop shop standards for these return codes and set them in the utility configuration.  Individual utility jobs will then follow these standards without specifying anything.  The keywords and +parms are still there, however, if the need arises to deviate from the standards.
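Once a shop standard is in place, downstream JCL can key off the standardized codes.  A sketch, assuming a hypothetical standard where the discard warning is configured as return code 3 and the Load runs in a step named LOADSTEP:

//CHKDISC  IF (LOADSTEP.RC = 3) THEN
//ALERT    EXEC PGM=...            your notification step here
//         ENDIF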

Please refer to the Load manual for restrictions and incompatibilities.

Note:  These parameters and keywords were added with PTF BQU1353 in July 2018. 


This time we’ll discuss two recent changes in NGT Unload: the addition of a “less dirty” unload option, which I’ll explain shortly, and the ability to set the default concurrency, or SHRLEVEL.

NGT Unload has always defaulted to “online unload,” creating a point-of-consistency unload while keeping the object read-write.  After all, this requires only a single drain of the writer claims.  This is rarely a bother because transactions that update are pretty good at committing right away.  This table shows the NGT Unload concurrency options.  NGT’s default has always been SHRLEVEL CHANGE CONSISTENT YES; you don’t have to specify any SHRLEVEL to get an online consistent unload.  This blog will primarily discuss the change shown below in bold.























Less Dirty







Prior to BQU2750 in February 2020, NGT would make a clean point-of-consistency unload if you requested CONSISTENT NO QUIESCE YES, just as it does with CONSISTENT YES.  Now NGT makes a “less dirty” unload, which means it drains the write claimers to flush any changes for the table space out of the Db2 buffer pool.  This makes the unload less dirty because only changes that occur during the unload can be missed; any changes Db2 had buffered before the unload are externalized and unloaded.


I know what you’re thinking: this is crazy.  It’s disruptive to get the drain but not disruptive to track the changes and make a consistent unload, so why have the same pain for less gain?


If you haven’t read my previous blog about simultaneous unloads of the same table, please check it out.  There you will see that two point-of-consistency (POC) unloads cannot run at the same time unless they share the POC.  If you are ultra-sensitive to the timeliness of your unloads and many unloads may try to unload the same table at the same time, you might prefer a less dirty, inconsistent unload over the chance of an unload failure, especially in a test environment; this is why the “less dirty” option was added.
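As a sketch, a “less dirty” request combines the keywords discussed above (the object names are placeholders; check the Unload manual for the exact statement syntax):

  UNLOAD SHRLEVEL CHANGE CONSISTENT NO QUIESCE YES
         SELECT * FROM creator.table_name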


Default Unload concurrency

This leads us to the default unload SHRLEVEL keywords.  As I mentioned, the default has always been SHRLEVEL CHANGE CONSISTENT YES, and it still is.  However, with BQU2907 in May 2020 you can now configure the default you want using these Unload parameters (+parms).




Now if you want no disruption to applications, no drain, and you’re OK with inconsistent data, you can configure +CONSISTENT(NO) and +QUIESCE(NO) for your shop default. That’s not a recommendation, but just one option.
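As a sketch, the same pair can also be overridden in an individual job’s parms DD rather than in the shop configuration:

//ULDPARMS  DD *
  +CONSISTENT(NO)
  +QUIESCE(NO)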


Let’s discuss having multiple jobs unload the same table at the same time.  NGT’s support for this is unique and robust.  I’ll break this down into two topics, each using multiple concurrent jobs:

  1. Each job unloads a range of partitions, with all parts unloaded to a common point of consistency.
  2. Each job unloads the same table, possibly the whole table, at the same time, with a point of consistency.


Review of unloading to a Point of Consistency (POC)

This is done by having an asynchronous task (for NGT, that’s the NGT Subsystem) capture before images of any page that changes during the unload; any rows unloaded from one of these pages then come from the before image.  However, once the NGT Subsystem is collecting these before images for a POC, possibly for a subset of partitions, it cannot simultaneously manage a second POC of the same object for another job, or add partitions to an existing object’s POC.  With NGT, though, there is a way.


NGT puts unload parallelism on steroids!

First, are multiple submitted unloads even needed with NGT?  It is common to want to unload a very large partitioned table space quickly and to create a clean point of consistency (POC).  This is accomplished by unloading the partitions in parallel.  Any unload product can multi-task and process some number of partitions in parallel.  NGT supercharges this with its server jobs, each of which multi-tasks several partitions just like the master job you submitted.  The use of NGT servers may eliminate the need for multiple user-submitted jobs to increase parallelism.  But what if you have thousands of partitions and want to submit multiple jobs, each of which can start multiple server jobs, to go nuclear with parallelism?  NGT can do this too.


To run multiple jobs that each unload a subset of partitions of the same table space, you specify the +CONNECTALL(YES) global parameter in your set of jobs.  This tells the NGT Subsystem to connect to and collect before images for all parts of the table space.  This way, whichever of your jobs runs first will have the NGT Subsystem monitor all parts, and all the subsequent jobs will share this existing connection and share the point of consistency.  You get a clean POC unload of many parts using many jobs running concurrently.


The following JCL could be one of many jobs that you run concurrently; this one unloads the first 100 parts.

//ULDPARMS  DD *
  +CONNECTALL(YES)
//SYSIN     DD *
       PART 1:100
                 SELECT * FROM creator.table_name


Multiple unrelated unload jobs processing the same table at the same time

NGT Unload has always allowed multiple unload jobs to process the same table, even when a POC is requested (SHRLEVEL CHANGE CONSISTENT YES).  If a CONSISTENT unload is running and another CONSISTENT unload is submitted for the same table, NGT lets this and any subsequent overlapping unloads share the existing POC.


If you don’t want this sharing of the existing POC, you can specify +POC_SHARE(NO); subsequent jobs will then fail rather than share the existing POC.  Specify +POC_SHARE(NO) when it is important that the unloaded data is consistent as of the time the job is submitted, not as of a few minutes prior.
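As a sketch, the override goes in an individual job’s parms DD like any other +parm:

//ULDPARMS  DD *
  +POC_SHARE(NO)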


Note: +POC_SHARE(YES/NO) was added with PTF BQU2543 in August 2019.  The prior behavior and the new default is YES.
