
Solutions for IMS


If you are still using BMC ISR to download maintenance, be aware that ISR will be withdrawn on December 31, 2020. In December 2019, BMC announced support for IBM SMP/E RECEIVE ORDER for BMC mainframe products. This is our recommended method for obtaining maintenance for your BMC products and solutions.


In that announcement, it was noted that BMC ISR will be withdrawn from service. As 2020 has flown by, we are quickly approaching the end of the year, and now is a good time to switch to IBM SMP/E RECEIVE ORDER for BMC mainframe products for your ongoing maintenance activities.


For information on using RECEIVE ORDER for your BMC products and solutions, please visit Using SMP/E RECEIVE ORDER - Documentation for Installation System - BMC Documentation.


If you require assistance with IBM SMP/E RECEIVE ORDER for BMC mainframe products, please open a case with BMC Support (Support Central - BMC Software) using the product Common Install – z/OS and the component RECEIVE ORDER Maintenance, and a support representative will help you!



by Robert Blackhall


As a long-time IMS customer, you might ask yourself, “Why should I use the new catalog in IMS? What is the reason for me to invest the time to learn, implement, and convert my processes to the catalog? This will be a new way of life.”


Two good reasons:

Reason 1: From an IMS development perspective, the IMS catalog is necessary to give Java and SQL applications coming in from the web access to IMS databases.

Reason 2: It is much easier to secure talent for writing Java applications. COBOL developers are becoming harder to find.


From an IMS legacy standpoint, IBM says that, in the future, use of the IMS catalog will be required. Using IMS-managed ACBs is the strategic direction for IMS 15 and any subsequent continuous-delivery functionality. If you do not already use the IMS catalog, you’ll need to implement it when you enable IMS management of ACBs. This is a pretty strong reason to invest time now to acquaint yourself with the features of the catalog and to develop an understanding of its impact on your IMS community.


The IMS 15 announcement (IBM United States Software Announcement 217-398, dated October 3, 2017) states, “IBM intends to require IMS management of ACBs in the future. IMS and the IMS catalog must be set up to support ACB management. IMS provides a utility for this. After the requirement for IMS-managed ACBs is in place, IBM also intends to remove the generation processes for PSBLIB, DBDLIB, and ACBLIB. At that time, the IMS catalog, SQL, and DDL become the interface to IMS database management.”


The future may be next year, or 3-5 years from now. The actual time of this change is still uncertain. What is certain is that IBM believes in the catalog, and it is a strategic direction. No need to worry, however, because BMC Software will be here to light the path for you to increase your catalog expertise. We can provide you with comprehensive tools, education, and our strong support team.


Learn more about the IMS catalog and Catalog Manager for IMS by BMC by attending the upcoming live webinar.


Webinar: The IMS Catalog – A fireside chat between Deepak Kohli (IBM) and Robert Blackhall (BMC)


Read more about the new no-charge offering - Catalog Manager for IMS.


Robert Blackhall is a Software Architect with 21 years of software development experience at BMC Software. He has been concentrating on the IMS catalog and the Catalog Manager for IMS offering from BMC.


It’s an exciting time to be an IMS customer! IMS 14 is here, and on December 4th, BMC proudly announced new IMS enhancements and support for its IMS solutions portfolio.


In addition to supporting IMS 14, the BMC Database Management Solutions for IMS now offer new enhancements in performance, scalability, usability, and availability. The new BMC Database Solutions for IMS releases provide more online restructure capabilities, additional thresholds in MAXM Database Advisor, zero outage batch image copy, OSAM HALDB 8-gigabyte support, a new BMC Delta repository, a database clone function, and many more new capabilities.

BMC has also consolidated functionality from its NEON solutions to simplify the customer experience. New packaging options are available; for example, the new BMC MainView Extensions for IMS TM supplies additional information to MainView for IMS related to transactions that originate off of the mainframe.

BMC believes in the long-term value that IMS provides, running the world’s banks, transportation services, retailers, and so much more. Be sure to check out the new BMC solutions for IMS.


These postings are my own and do not necessarily represent BMC's position, strategies, or opinion.


IMS™ TM has always been your faithful transaction manager. Does your IMS TM communicate with DB2 databases to complete transactions? Then you know that these two IBM® siblings must sit together on the same LPAR in order to communicate. For years, a technical restriction has existed that requires the DB2 subsystem to reside on the same logical partition (LPAR) as the IMS TM subsystem that receives the requests. Makes sense, or at least it did. Now, however, it means that mainframe shops are maintaining and paying for multiple, duplicate, (and possibly under-utilized) subsystems merely because of this restriction.


My, how times have changed. These two overachieving MLC software powerhouses sing beautifully together – and now they can do it from separate LPARs. Why separate them?


The big reason to separate the siblings involves monthly license charge (MLC) math. You see, there’s a rumor out there that says sub-capacity pricing is equivalent to usage-based pricing. Not true. MLC software is charged by product and aggregated by LPAR – all at the highest peak MSU rate on that LPAR. So, you might be paying a high price for an IMS TM or a DB2 subsystem just so that they can cohabitate on the same LPAR. 
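To see why cohabitation inflates the bill, consider a toy model of sub-capacity aggregation. The per-MSU rates and MSU figures below are invented for illustration; real IBM MLC pricing uses tiered curves and rolling four-hour average (R4HA) peaks, but the LPAR-level aggregation effect is the same:

```python
# Simplified illustration of sub-capacity MLC aggregation (hypothetical
# per-MSU rates; real IBM pricing uses tiered curves and R4HA peaks).

def mlc_cost(lpars, rates):
    """Each product is billed at the peak MSU of every LPAR it runs on."""
    total = 0
    for lpar in lpars:
        for product in lpar["products"]:
            total += rates[product] * lpar["peak_msu"]
    return total

rates = {"IMS TM": 50, "DB2": 60}  # $/MSU/month, invented for the example

# Together: both products billed at the combined LPAR's 800-MSU peak.
together = mlc_cost([{"peak_msu": 800, "products": ["IMS TM", "DB2"]}], rates)

# Separated: each product billed only at its own LPAR's (lower) peak.
separated = mlc_cost(
    [{"peak_msu": 500, "products": ["IMS TM"]},
     {"peak_msu": 400, "products": ["DB2"]}],
    rates,
)

print(together, separated)  # 88000 49000
```

In this sketch, splitting the subsystems cuts the hypothetical bill from 88,000 to 49,000 because each product is now billed only against the peak of the LPAR it actually runs on.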


This is all based on how monthly MLC costs are calculated, which is very complex math, but don’t assume you are stuck. By using BMC Subsystem Optimizer, you can now actively manage down your MLC costs by moving and separating subsystems. BMC just announced V2.0, which provides support for IMS TM to DB2 communication on separate LPARs. Cool stuff, and you’ll want to take a look.


The results can be incredible. We’ve seen customers make small moves in subsystem placement that result in 15-30% reductions in MLC. If you’re paying more than 10K per month for MLC, that adds up!


Have a look at BMC Subsystem Optimizer – and comment here with any thoughts you have about this topic. You too, can make these IMS and DB2 siblings sing together from separate locations.


Learn more about BMC MLC-saving solutions and check out the new infographic on how to save on MLC.



These postings are my own and do not necessarily represent BMC's position, strategies, or opinion.


BMC has announced a new add-on package, MainView Extensions for IMS TM, available under the MainView for IMS offering.


This new package contains both BMC Log Analyzer for IMS and BMC Energizer for IMS Connect, and supplies additional information to MainView for IMS related to transactions that originate off of the mainframe.


Customers have requested that Energizer for IMS Connect be made available as an optional add-on, and BMC responded. BMC also decided to include Log Analyzer for IMS because of its integration with MainView for IMS and the popular benefits it provides.


The package gives customers who use MainView for IMS with IMS Connect visibility into calls that originate from a distributed platform and allows proactive monitoring of the environment. In addition, it provides the ability to perform diagnostics based on the IMS log.


BMC MainView Extensions for IMS TM add-on package includes:

  • BMC Energizer for IMS Connect - providing visibility into the specialized area of cross-platform IMS transactions
  • BMC Log Analyzer for IMS - providing the ability to perform detailed analysis on IMS applications from the IMS Log data


Be sure to consider adding this unique capability to your IMS environment and benefit from the additional insight and efficiency.



Checkpoints in IMS BMPs:

  • You have to have them, but they are expensive.
  • They are 100% overhead.
  • No productive work gets done while the checkpoint is being taken.
  • They elongate elapsed time, consume CPU time, and increase I/O so you certainly would not want to take more of them than you need.

Chances are that a number of your BMPs are taking more checkpoints than they need to take.


How can this be happening? BMPs written a number of years ago included the logic to take checkpoints at intervals that were appropriate for the processors on which they were running. That was likely a number of machine generations ago. The processor speeds today are much faster and the checkpointing interval from those old days is now much too short, resulting in the application taking more checkpoints than are needed.

The faster processors are causing other unexpected problems, too. They are hiding these checkpoint offenders from you. With faster processing, the BMPs finish in the same or less time than they did previously, so there is no indication that excessive overhead is taking place.

You need a way to find BMPs that are taking too many checkpoints. One checkpoint a second is a good pace for most applications. Once you identify the offenders, ideally you would like to fix the problem without changing the application programs. Those changes use developer time, require testing to verify, and ultimately need change control processing to get the modified programs back into production.

Using BMC Log Analyzer for IMS and BMC Application Restart Control for IMS allows you to find and fix BMPs that checkpoint excessively. BMC Log Analyzer provides the APPCHECK utility, which reads the SLDS and analyzes BMP checkpoint offenders. It looks for BMP updaters that have taken more than one checkpoint per second and reports on them.
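APPCHECK's internals aren't public, but the detection idea — flag BMP updaters averaging more than one checkpoint per second — can be sketched as follows. The job names and timestamps are invented; think of the input as (jobname, checkpoint time) pairs distilled from SLDS checkpoint log records:

```python
from collections import defaultdict

def find_offenders(ckpt_records, threshold=1.0):
    """Return jobs whose average checkpoint rate exceeds `threshold` per second."""
    times = defaultdict(list)
    for job, t in ckpt_records:
        times[job].append(t)
    offenders = {}
    for job, ts in times.items():
        duration = max(ts) - min(ts)
        if duration > 0:
            rate = len(ts) / duration
            if rate > threshold:
                offenders[job] = round(rate, 1)
    return offenders

# Hypothetical log data: one job checkpointing every 0.1s, one every 5s.
records = [("BMPJOB1", t / 10) for t in range(0, 100)]   # ~10 ckpts/sec
records += [("BMPJOB2", t * 5.0) for t in range(0, 20)]  # 0.2 ckpts/sec
print(find_offenders(records))  # {'BMPJOB1': 10.1}
```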

[Report screenshot: AAI graphic.png]

The report above was from a BMC customer (hence the hiding of the actual job names, etc.) and shows jobs that took from 3 to over 50 checkpoints per second for the duration of the job.

Now that you know the BMPs to target, create a policy that will be used by the checkpoint pacing functionality in BMC Application Restart Control for IMS (AR/CTL) to bypass unneeded checkpoints. When the application issues a checkpoint, AR/CTL intercepts the call and uses the policy you created to determine if enough time has elapsed since the last checkpoint was taken. If it has, it lets the checkpoint call through. If not enough time has elapsed, AR/CTL immediately returns control to the program, suppressing the unneeded checkpoint.
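The intercept-and-suppress flow described above can be sketched roughly as follows. The class name and policy shape are invented for illustration, not AR/CTL's actual interface:

```python
import time

class CheckpointPacer:
    """Suppress checkpoints issued sooner than `min_interval` seconds
    after the previous one was honored (the pacing idea AR/CTL applies)."""

    def __init__(self, min_interval, clock=time.monotonic):
        self.min_interval = min_interval
        self.clock = clock
        self.last = None

    def request_checkpoint(self):
        now = self.clock()
        if self.last is not None and now - self.last < self.min_interval:
            return False          # suppressed: control returns immediately
        self.last = now
        return True               # honored: real checkpoint call proceeds

# Simulated clock: the BMP issues checkpoints every 0.2s; the policy
# requires at least 1s between honored checkpoints.
fake_now = [0.0]
pacer = CheckpointPacer(min_interval=1.0, clock=lambda: fake_now[0])
taken = 0
for _ in range(50):              # 50 requests over 10 simulated seconds
    if pacer.request_checkpoint():
        taken += 1
    fake_now[0] += 0.2
print(taken)  # 10 honored instead of 50
```

Four out of five checkpoint calls return immediately, which is exactly where the CPU and elapsed-time savings come from.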

With your policy in place, AR/CTL provides an automated way to control the pace of checkpointing without needing to change the application program. By eliminating unnecessary checkpoints from these serious offenders, you can reduce CPU and elapsed times significantly, sometimes by 60% or more.

Get more information by reading the BMC Application Restart Control data sheet, or contact your BMC Account Manager to start a discussion.


I have personally helped large customers implement Fast Path DEDB restructures with near zero IMS application outage.  This same capability is also available for IMS Full-function and HALDB databases.  Please contact me if you have any questions.


The IMS users that BMC works with on a regular basis do not want to take IMS outages for any reason. A common scheduled-outage task is implementing database changes. Change complexity, the size of the databases, and the number of databases needing change dictate the outage duration, which can be many hours. DBAs are constantly being asked to limit both the number of outages per year and their duration.


BMC can now address this painful and costly task through software innovation that restructures Full-function, HALDB, and DEDB databases while they remain online to IMS users. BMC Restructure for IMS contains this industry-unique technology to reduce scheduled IMS application outages for database restructures from hours to minutes. What does this mean to your business? Additional revenue? Additional competitive advantage? Reduced costs?


Database restructuring support includes:


Full-function and HALDB:

•    Convert a database to a different database type

•    Make tuning changes

•    Increase or decrease segment length (such as changing from fixed length to variable length segments)

•    Add and delete data set groups

•    Add and delete indexes

•    Add new segments while maintaining existing segment parentage

•    Restructure segment data

•    Add new search fields

•    Add, change, and delete compression exits


Fast Path DEDB databases:

•    Add and remove areas

•    Resize areas

•    Perform randomizer changes

•    Add segments at the end of a hierarchical path

•    Add a sequential dependent segment (SDEP)

•    Add, change, or remove a compression exit

•    Modify lengths of variable-length segments (decrease the minimum length or increase the maximum length)

•    Modify segment content


For additional information see the following links:


BMC Database Restructure for IMS

BMC Fast Path Online Restructure/EP for IMS


Customers have recently been asking how to request batch reporting within MAXM Database Advisor for IMS. This reporting can be done for all database objects in the IMS RECON from the RECON level in the Advisor navigation tree. Full-function and HALDB database objects use the Database History, Database Space, and Space Usage reports. Fast Path DEDB database objects use the Area Performance and Area Space reports. The information reported for Full-function and HALDB objects differs from that for DEDB objects, so batch reporting for DEDB objects cannot be combined into a single report with Full-function and HALDB objects.


If a requested batch-generated report is larger than 4000 lines, it cannot be viewed in the Advisor console. In that case, you will need to view the results in the generated data set via ISPF. The generated data set name is specified in the BMCMXA340683I message that indicates the batch job completed successfully. You would then use FTP, or another tool of your choice, to download the file to your PC before opening it in Excel. Once the generated report is in Excel, it is very easy to manipulate and summarize the data to get the information you need.


Let’s look at a specific example where you want to know the total database record counts for a specific set of database objects. The following procedure steps you through requesting batch reporting in the Advisor to quickly gain access to this information. Use this procedure to access other batch report information and to do data analysis with an external solution such as Excel:


Requesting Batch Reporting Procedure (Total Database Record Counts Example):


Step 1:  Start by right-clicking the RECON icon and following the drop-down menus to “Request Batch Report.”



Step 1.jpg


Step 2:  Select the Batch Report Type, in this case “Database History” and click “Ok”.


Step 2.jpg


Step 3:  Batch report initiation is indicated with a message in the Advisor Message pane.


Step 3.jpg


Step 3.5:  When the Advisor completes the request, a status message is issued in the Message pane. Note that a report data set is generated and its name is indicated in the status message.


Step 3.5.jpg

Step 4:  If the requested report is less than 4000 lines, it can be viewed in the Advisor. If it is more than 4000 lines, view the data set referenced in Step 3.5 via ISPF.


Step 4.jpg

Step 5:   Select the generated DBHIST report for viewing.


Step 5.jpg


Step 6:  This is the raw database object information for the Database History batch report being viewed from the console.  This means that the generated report is less than or equal to 4000 lines.  Let’s export the data to Excel so that database object filtering and summation can be accomplished.


Step 6.jpg

Step 7:  Save the exported Database History file on your PC.


Step 7.jpg


Step 7.5:  The export completion is indicated via status message in the Advisor Message pane.


Step 7.5.jpg

Step 8 and 8.5:  Open the Database History .csv file in Excel and add heading filters.


Step 8_8.5.jpg


Step 9:  Use the database object filter to select the databases of interest.


Step 9.jpg


Step 10: Use the Excel AutoSum function to quickly determine the number of database records for the database objects of interest.


Step 10.jpg
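If you prefer scripting to Excel, the same filter-and-sum step can be done over the exported .csv directly. The column names below are assumptions for illustration; adjust them to match your Database History report layout:

```python
import csv, io

# Assumed layout: the exported Database History CSV has (at least)
# "Database" and "Record Count" columns; rename to match your report.
sample = """Database,Record Count
CUSTDB,12000
ORDERDB,34000
CUSTDB,8000
"""

def total_records(csv_text, databases):
    """Sum record counts for the database objects of interest."""
    total = 0
    for row in csv.DictReader(io.StringIO(csv_text)):
        if row["Database"] in databases:
            total += int(row["Record Count"])
    return total

print(total_records(sample, {"CUSTDB"}))  # 20000
```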


In mid-August I conducted a well-attended webinar on the topic of Coordinated Recovery for DB2, IMS, and VSAM. I have given the presentation at several conferences, briefings, and customer sites, and I am scheduled to present the topic in New York City on September 26th. Why the resurgent interest in this topic, first addressed in 1999? I think it is because IT organizations recognize the need for a more granular local application recovery capability that is not supported by disk mirroring solutions or disaster recovery procedures.


As with any solution, there are pros and cons to disk mirroring. Typically these solutions are aimed at disaster recovery – a catastrophic outage impacting the entire data center. Disk mirroring solutions can reduce or eliminate data loss and downtime. They are expensive to build and support, but they can serve a useful purpose. However, for most recovery situations, they are not an effective solution.


Consider the following likely events:

• An application program change results in incorrectly updated data

• A user inadvertently updates the wrong data

• A database administrator incorrectly modifies a database structure

• A disgruntled employee updates data maliciously

• A storage controller fails, impacting hundreds of volumes of disk data

• System software maintenance is applied, containing changes that impact database data 

• And many more…


In these cases, one would not declare disaster and move to the remote site mirror. Local application database level recoveries are required. Making the application recoveries even more complex, over the years application relationships have evolved that include DB2, IMS, and CICS/VSAM components. Recovery of any of these components may require recovery of the related components, especially if the recovery action is a recovery to a prior point in time. It is the coordinated recovery to a prior point in time that is the topic of this article.


Suppose an event has occurred that corrupted your DB2 application data. The data corruption is not severe enough to declare disaster. You have decided to recover the local application to a prior point in time. The application is complex and has IMS and CICS/VSAM components that are related to the DB2 application, so those objects must be recovered to the same point in time.


You can exploit the BMC Recovery solutions to support this recovery event.

• For DB2, IMS, and VSAM, all BMC Recovery solutions support a recovery technique called BACKOUT to TIMESTAMP. BACKOUT is very fast and more efficient than normal forward recovery from an image copy.

o This assumes the underlying database datasets are physically accessible – the storage is fine, it is a logical error we are correcting.

o This also assumes that since these applications are related, they are sharing the same system clock – so TIMESTAMP is the same point in time for all applications.

o For BMC DB2 recovery, there may be events that render BACKOUT unusable (for instance, a LOAD LOG NO executed after your recovery point – we cannot jump around that hole in the log, so we automatically fall back to a normal forward recovery for those objects).

o BMC DB2 recovery also supports a process called Recovery Avoidance. Suppose you are doing this point-in-time recovery for an application with 100 objects, and 80 of those objects have not been updated since the designated TIMESTAMP. There is no reason to recover them; they are both physically and logically sound.

o If the DB2 recovery is a subsystem wide event (perhaps the application in play is SAP), then a DB2 conditional restart may be required. The BMC DB2 recovery solution automates the analysis for this requirement and generates the conditional restart process if needed. At the conclusion of the generated BMC Recovery action, you will have consistent data as of the designated TIMESTAMP. All in-flight units of work as of the TIMESTAMP will not be recovered. The BACKOUT technique is very fast, reducing downtime for the recovery. For the DB2 application, Recovery Avoidance further reduces downtime.
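The Recovery Avoidance decision described above can be sketched as a simple classification: objects untouched since the recovery TIMESTAMP are skipped, and the rest are backed out to that point. The data structures and names here are invented for illustration, not the product's actual interface:

```python
from datetime import datetime

def plan_recovery(objects, recovery_point):
    """Classify objects for a point-in-time recovery.

    `objects` maps object name -> last-update timestamp. Objects not
    updated since `recovery_point` need no action (Recovery Avoidance);
    the rest are backed out to the timestamp."""
    plan = {"skip": [], "backout": []}
    for name, last_update in objects.items():
        if last_update <= recovery_point:
            plan["skip"].append(name)      # physically and logically sound
        else:
            plan["backout"].append(name)   # undo updates past the timestamp
    return plan

rp = datetime(2024, 6, 1, 12, 0)
objs = {
    "TS001": datetime(2024, 5, 30, 9, 0),   # untouched since recovery point
    "TS002": datetime(2024, 6, 1, 14, 30),  # updated afterward -> back out
}
print(plan_recovery(objs, rp))
```

In the 100-object example above, this classification is what lets 80 of the objects drop out of the recovery entirely, shrinking the outage window.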


Some companies are interested in generating a Coordinated Recovery in a Disaster Recovery scenario. There are some differences in the procedure to implement this requirement:

• The recovery point in time TIMESTAMP can be specified, or it can be based on an event such as an IMS Log Switch.

• For IMS, a RECON Clean up utility will need to run at the remote site, as the RECONs were backed up while open.

• The IMS and CICS/VSAM recoveries will be to the specified TIMESTAMP.

• For DB2, the ‘recovery’ will not be a TIMESTAMP recovery; it will be a DB2 Conditional Restart to the RBA/LRSN that correlates to the specified TIMESTAMP. The BMC DB2 Recovery solution provides utilities that will translate the TIMESTAMP to RBA/LRSN and generate the appropriate Conditional Restart process to that point. 

• Once the DB2 Conditional Restart is complete, the application recoveries will all be to the new end of log.


Some questions (and answers) that have come up during presentation of this topic:

• Does the DB2 Catalog and Directory copy have to be done by BMC COPY PLUS, or can a normal IBM copy work? (A normal IBM copy will work.)

• Can RECOVER PLUS for IMS recover to an alternate data set? (Yes. RECOVER PLUS for DB2 can do this too.)

• Does the IMS disaster recovery part require the use of DBRC? (Yes, in our example; that is where we obtain the timestamp that drives the entire DB2 Conditional Restart process.)

• Does the Coordinated Disaster Recovery process have to be driven by an IMS Log Switch? (No, any event can be used to kick off the process – it could be a DB2 log switch, the end of the batch cycle, or an arbitrary timestamp.)


The Coordinated Recovery Webinar was recorded and can be seen at this URL:


Please comment with any questions!


Data is always on the move in Information Technology, and as it moves, there’s usually some sort of conversion along the way. If your company uses IMS, moving data to a relational platform for analysis or post processing is almost guaranteed. That’s why, if your company has BMC Extract for IMS, you should consider extending its use for a number of reasons:


  • Finish faster - BMC Extract for IMS processes the data at utility speeds, so your IMS data unloads will process much faster. 2x faster is common, and customers have reported it runs up to 5x faster than COBOL programs.
  • Reduce MIPS - BMC Extract for IMS will save CPU.  Not only because it runs faster, but it also offloads up to 80% of processing to zIIP processors.
  • Reduce or eliminate post processing steps - BMC Extract for IMS supports a rich set of filters, conversion and transformation capabilities that will allow you to format the data as required for the target system during the unload / extract process.
  • Coding a utility job is up to 20x faster than writing a program - BMC Extract for IMS includes powerful selection and transformation capabilities that are much easier and faster than writing a program. Once you understand your IMS data content, you simply define your selection criteria and desired output transformations and formats.

Let’s take a look at this in terms of a common scenario moving data from IMS to DB2. You have a couple of choices that tell BMC Extract for IMS how to format the output file, depending on how you load the data into DB2. These options include a load format to load the data via a load utility, or an SQL format (using an insert statement) to import the data via an SQL process.


For this first example, let’s consider how to format the data for a DB2 load utility. Suppose you have a two-level IMS database containing records of a parent and his children. Assume we have an IMS ANCESTRY database comprised of two segments, PARENT and CHILD. This database contains a single PARENT segment and several CHILD segments. The actual data in the database appears as follows:
   1, Abraham Lincoln

   1, Robert Todd Lincoln
   2, Edward Baker Lincoln
   3, William Wallace Lincoln
   4, Thomas Lincoln


BMC Extract for IMS can be used to pull both segments in a single pass. It can divide the segments into separate extract files. It can also include data from the parent segment with data from the child segment.

Use the following statements to "extract" segment information into two extract files, one for PARENT and one for CHILD. The extracted information for CHILD will include the PARENT information as part of its extracted data.


  WITH (                          
    FIELD ( ( C'PARENT KEY=' ) )  
    FIELD ( ( PARENT.PKEY ) )     
    FIELD ( ( X'6B') )            
    FIELD ( ( C'PARENT=' ) )      
    FIELD ( ( PARENT.PINFO ) )    
    ) ) )                         
  WITH (                          
    FIELD ( ( C'PARENT KEY=' ) )  
    FIELD ( ( PARENT.PKEY ) )     
    FIELD ( ( X'6B') )            
    FIELD ( ( C'CHILD KEY=' ) )   
    FIELD ( ( CHILD.CKEY ) )      
    FIELD ( ( X'6B') )            
    FIELD ( ( C'CHILD=' ) )       
    FIELD ( ( CHILD.CINFO ) )     

The results for the DD PARENTO extract are:

PARENT KEY= 1,PARENT=Abraham Lincoln

The results for the DD CHILDO extract are:

PARENT KEY= 1,CHILD KEY= 1,CHILD=Robert Todd Lincoln
PARENT KEY= 1,CHILD KEY= 2,CHILD=Edward Baker Lincoln
PARENT KEY= 1,CHILD KEY= 3,CHILD=William Wallace Lincoln
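Outside the Extract syntax, the flattening these statements perform can be sketched in a few lines of Python: each child row repeats its parent's key, which is what produces the denormalized CHILDO lines shown above:

```python
# Sketch of the flattening the extract performs: child records carry
# their parent's key, producing one denormalized line per segment.
parent = (1, "Abraham Lincoln")
children = [(1, "Robert Todd Lincoln"),
            (2, "Edward Baker Lincoln"),
            (3, "William Wallace Lincoln"),
            (4, "Thomas Lincoln")]

parent_out = [f"PARENT KEY= {parent[0]},PARENT={parent[1]}"]
child_out = [f"PARENT KEY= {parent[0]},CHILD KEY= {k},CHILD={name}"
             for k, name in children]

print(parent_out[0])  # PARENT KEY= 1,PARENT=Abraham Lincoln
print(child_out[0])   # PARENT KEY= 1,CHILD KEY= 1,CHILD=Robert Todd Lincoln
```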

Options to consider using BMC LOADPLUS for DB2

If you are loading the data using BMC LOADPLUS for DB2, you can choose to resume or replace. If you’re adding operational data to a warehouse, you will probably want to choose the resume option so that you don't have to reload the entire table. The BMC LOADPLUS for DB2 utility also gives you additional options for access to the objects, as well as options for how to handle indexes:


  • Available for application access vs. stopped. BMC LOADPLUS for DB2 has options that can allow access to the data while the utility is executing against the object. And these access options are available for either a load REPLACE or RESUME. For load REPLACE, the option is selected based on the SHRLVL parameter. For load RESUME, the online option is SQLAPPLY.
  • Index UPDATE vs. BUILD on load RESUME (Note: This option is not relevant for the SQLAPPLY option). Using BMC LOADPLUS for DB2, BMC recommends that you limit the use of UPDATE to those cases where you are adding a small percent of the total amount of existing data to the table. If you are adding a large amount of data, using UPDATE can impact optimal performance of the SQL that use the index(es) in processing.

Setting up BMC Extract for IMS output for SQL processing


You can also generate SQL INSERT statements for every record, using BMC Extract for IMS, instead of the CSV format. You could then import the data into DB2 via SQL processing. With Log Master for DB2, you can use the High-speed Apply Engine to perform the inserts much faster than standard SQL processing. You also have the option to use the High-speed Apply Engine to insert the data in Oracle or DB2 UDB for LUW databases. An advantage of using SQL to import the data is that the target table would be available for access while the data is being imported.


For the SQL Insert option, you would format the BMC Extract for IMS as follows:


  WITH (                          
    FIELD ( ( PARENT.PKEY ) )     
    FIELD ( ( X'6B7D') )              /* COMMA, SINGLE QUOTE */
    FIELD ( ( X'7D5D5E') )         /* SINGLE QUOTE, CLOSE PAREN, SEMI-COLON */    
    ) ) )                         
  WITH (                          
    FIELD ( ( PARENT.PKEY ) )     
    FIELD ( ( X'6B') )                  /* COMMA */
    FIELD ( ( CHILD.CKEY ) )      
    FIELD ( ( X'6B7D') )              /* COMMA, SINGLE QUOTE */
    FIELD ( ( X'7D5D5E') )         /* SINGLE QUOTE, CLOSE PAREN, SEMI-COLON */     

The results for the DD PARENTO extract are:


The results for the DD CHILDO extract are:

INSERT INTO CHILD.TABLE VALUES(1, 1, 'Robert Todd Lincoln');
INSERT INTO CHILD.TABLE VALUES(1, 2, 'Edward Baker Lincoln');
INSERT INTO CHILD.TABLE VALUES(1, 3, 'William Wallace Lincoln');
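Generating the INSERT form shown above is straightforward to reason about; a sketch of the equivalent statement generation (reusing the example's CHILD.TABLE name) looks like this:

```python
def child_inserts(parent_key, children, table="CHILD.TABLE"):
    """Emit one INSERT per child segment, matching the CHILDO output above."""
    return [
        f"INSERT INTO {table} VALUES({parent_key}, {ckey}, '{name}');"
        for ckey, name in children
    ]

stmts = child_inserts(1, [(1, "Robert Todd Lincoln"),
                          (2, "Edward Baker Lincoln"),
                          (3, "William Wallace Lincoln")])
print(stmts[0])
# INSERT INTO CHILD.TABLE VALUES(1, 1, 'Robert Todd Lincoln');
```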


Additional Information


For more information, you can go to the following URL:


and select the Documentation Center and Demo Library for Mainframe Products (BMC support ID and password are required). To make it easier to locate the information you are looking for, you may want to create a scope to filter the result for a specific set of products e.g., IMS products.


To find examples related to how to set up BMC Extract for IMS, set your scope to look at only the IMS products and use the following search key:


“Extract locating examples”


To find out how to set up a BMC LOADPLUS for DB2 job, set your scope to look at only the DB2 products and use the following search key:


“Examples of LOADPLUS jobs”


To find out how to use High-speed Apply, set your scope to look at only the DB2 products and use the following search key:


“Examples using generated High-speed Apply JCL”


There is also an overview training video to show how to extract IMS data and load it into DB2 that you can access by clicking the following link:


BMC Extract for IMS - Extracting IMS Data to Load into DB2 (BMC support ID and password are required)


Thanks to Bob Aubrecht, Jim Kurtz, and Ken McDonald from BMC Software for their contributions to this article.


As the wisest sages in IT will attest, a copy is often better than the original.  That maxim is never more true than when a database crashes. At that point, the original database becomes worthless and the copy is your only way to get the application back online.


This is also a valid maxim with the COPY function of MAXM Reorg Online.  The MAXM COPY function allows you to reorganize an index, as well as capture and apply updates while the index is online.  Using a traditional index rebuild option is much more difficult – requiring you to take the index offline while it’s being rebuilt.   COPY makes the process simpler, while offering additional functionality, including the ability to heal "broken" HALDB pointers caused by partial reorganization of partitions.


Follow this link to a Knowledge Base article that provides both a sample JCL and explanations of various options available with the MAXM Reorg Online COPY function.


Thanks to Jim Martin, BMC Software for his contribution to this article.


We’ve introduced a performance bonus to MAXM Database Advisor for IMS in the August 2012 PUT.  The BMC development team revisited the scripting engine in the Database Toolkit and MAXM Database Advisor and made some substantial performance improvements for processing large HALDB and DEDB databases.  The result is a 90% reduction in both elapsed time and CPU time.  So give yourself a bonus and apply the MAXM Database Advisor enhancements today.

Share This:

Make sure you are up to speed with all of the temporary fixes released for your IMS products at each Program Update Tape (PUT) level. (BMC Support login required.) Learn more

Share This:

As you may be aware, BMC Extract for IMS is a fast and flexible IMS data selection and formatting utility that comes with an extensive expression language for quickly extracting and reformatting IMS data. Besides being up to five times faster than an application program, API, or User Exit, BMC Extract for IMS also uses dramatically less CPU, eliminates the need for IMS programming skills, and requires much less time to develop, code, and test.


Many customers use BMC Extract for IMS  to mask sensitive data as they “port” IMS production data over to their test and development environments. A common method is to simply plug sensitive data with constants. However, this approach removes variability for targeted fields and can have a downstream usability impact for key fields and secondary indexes.


BMC Extract for IMS comes with the capability to create “unique” fields during extract. This method can deliver the desired field variability to target environments while protecting sensitive data. The key “expression” variable that provides the basis for uniqueness in the output data is &SEGMENTCOUNT, which is incremented by one for each segment processed. This variable can be used to create either complete uniqueness or uniqueness with a controlled degree of duplication.


Below is an example of a method that can be used to create “unique” Social Security numbers during extract. It assumes that the Social Security number is in “X” format and not numeric.



FIELD ( ( 100000000 - ((&SEGMENTCOUNT-1) ) ) TRANSFORM (1:1 C) )

FIELD ( ( 100000000 - ((&SEGMENTCOUNT-1) / 10) ) TRANSFORM (1:1 C) )

FIELD ( ( 10000000 - ((&SEGMENTCOUNT-1) / 100) ) TRANSFORM (1:1 C) )

FIELD ( ( 1000000 - ((&SEGMENTCOUNT-1) / 1000) ) TRANSFORM (1:1 C) )

FIELD ( ( 100000 - ((&SEGMENTCOUNT-1) / 10000) ) TRANSFORM (1:1 C) )

FIELD ( ( 10000 - ((&SEGMENTCOUNT-1) / 100000) ) TRANSFORM (1:1 C) )

FIELD ( ( 1000 - ((&SEGMENTCOUNT-1) / 1000000) ) TRANSFORM (1:1 C) )

FIELD ( ( 100 - ((&SEGMENTCOUNT-1) / 10000000) ) TRANSFORM (1:1 C) )

FIELD ( ( 10 - ((&SEGMENTCOUNT-1) / 100000000) ) TRANSFORM (1:1 C) )


Below are the results of this expression, with the &SEGMENTCOUNT value included as a reference. Notice that the field positions are flipped: the low-order digits of the counter are placed in the high-order columns of the key. As a result, this approach produces keys whose high-order position is nearly evenly distributed across 0-9.



000000000 1

900000000 2

800000000 3

700000000 4

600000000 5

500000000 6

400000000 7

300000000 8

200000000 9

100000000 10

090000000 11

990000000 12

890000000 13

790000000 14

690000000 15

590000000 16

490000000 17

390000000 18

290000000 19

190000000 20

080000000 21

980000000 22

880000000 23


It is easy to engineer in duplication by reducing the number of columns. For example, reducing the “plug” field to use only four columns (and constant-filling the rest of the columns) would create duplicates for every 10,000 occurrences. If the “plug” were reduced to two columns, every 100 occurrences would create duplicates.
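The digit-flipping scheme above can be sketched in plain Python (an illustrative equivalent only; the function name and the `unique_columns` knob are my own, and this is not BMC Extract expression syntax):

```python
def masked_ssn(segment_count, unique_columns=9):
    """Build a 9-character "SSN" whose digit at position p is derived
    from digit p of (segment_count - 1), so the low-order counter digits
    land in the high-order key columns. Lowering unique_columns
    constant-fills the remaining columns, engineering in duplication
    (e.g. 4 columns -> a duplicate every 10,000 occurrences)."""
    n = segment_count - 1
    out = []
    for pos in range(9):
        if pos < unique_columns:
            # digit 'pos' of the key mirrors digit 'pos' of the counter
            out.append(str((10 - (n // 10 ** pos) % 10) % 10))
        else:
            out.append("0")  # constant-fill the remaining columns
    return "".join(out)

for count in (1, 2, 10, 11, 23):
    print(masked_ssn(count), count)
```

Matching the results table above, segment 1 produces 000000000, segment 11 produces 090000000, and segment 23 produces 880000000; with `unique_columns=4`, segments 1 and 10,001 collide, as described.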



BMC Extract for IMS can also be used to create unique “alpha” values. The method leverages &SEGMENTCOUNT to assign a letter of the alphabet to each character position. Below are the results for one character position.



A 1

B 2

C 3

D 4

E 5

F 6

G 7

H 8

I 9

J 10

K 11

L 12

M 13

N 14

O 15

P 16

Q 17

R 18

S 19

T 20

U 21

V 22

W 23


It is also possible to build multiple character positions using the supporting calculations. One character position supports 26 unique letters of the alphabet. Two character positions yield 676 combinations, or 702 if you include “space/blank” as one of the characters to populate in the second sub-field. (Three character positions yield 17,576 combinations; four yield 456,976; and so on.) Another example is below, where I define a second alphabetic field that is space for the first 26 iterations, then “A” for the next 26, then “B,” and so on.



A 1

B 2

C 3

X 24

Y 25

Z 26

AA 27

BA 28

CA 29

DA 30

EA 31

FA 32

WA 49

XA 50

YA 51

ZA 52

AB 53

BB 54

CB 55

DB 56

EB 57


It is possible to build a third, fourth, or further character position. A third character position would build on the work done for alpha position 2 and would cycle every 17,576 occurrences (more if spaces are used, as we did for the second alpha character).
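Under the same assumptions (plain Python as an illustrative stand-in, not the product's expression language), the two-position scheme and its cycling behavior can be sketched as:

```python
import string

def alpha_key(segment_count):
    # Position 1 cycles A..Z on every segment; position 2 is blank for
    # the first 26 segments, then advances one letter per 26 segments,
    # giving 26 * 27 = 702 distinct combinations (the blank second
    # position occurs only in the first cycle).
    first = string.ascii_uppercase[(segment_count - 1) % 26]
    if segment_count <= 26:
        second = " "
    else:
        second = string.ascii_uppercase[((segment_count - 27) // 26) % 26]
    return first + second

for count in (1, 26, 27, 52, 53):
    print(repr(alpha_key(count)), count)
```

As in the results shown above, segment 27 yields "AA", segment 52 yields "ZA", and segment 53 rolls the second position over to "AB".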


As we have seen, BMC Extract for IMS is a fast and flexible IMS data selection and formatting utility that provides the capability to quickly extract and reformat IMS data. With the methods covered here, it should be relatively easy for the implementer to mask data while maintaining uniqueness when extracting IMS data.


Thanks to Bob Aubrecht, Principal Software Consultant, BMC Software, for his contribution to this article. If you have any questions about the article, please feel free to contact Bob directly.
