
Solutions for IMS

10 Posts authored by: Al deMoya

BMC Application Accelerator for IMS Dynamically Tunes DL/I batch

BMC Software recently introduced BMC Application Accelerator for IMS, a solution that dynamically tunes DL/I batch applications, reducing both the CPU usage and elapsed time of IMS batch jobs.  Rather than requiring traditional manual analysis and tuning, BMC Application Accelerator for IMS automatically monitors DL/I batch jobs during execution.  If it determines that performance can be improved, it dynamically changes job parameters on subsequent executions of the analyzed jobs so that they run more efficiently and, consequently, faster.

 

[Photo: "Programmer Standing Beside Punched Cards," ©1955, The MITRE Corporation. All rights reserved. Courtesy of MITRE Corporate Archives]

 

 

One of the beta customers, Daniel Hirschler, the IMS System Programmer for a large German bank, shared some of his observations on the product.  Like many IT organizations with mainframes, Daniel's bank has an initiative to reduce costs whenever possible.  IMS is also very important to the bank, as much of its customer information resides in IMS.

 

When Mr. Hirschler was asked how the testing went, he replied, “It was great.  You let the product run, you let the jobs run, no intervention required ...  just look at the reports and see the CPU savings.”  He also said that the product installation was easy and that using the product was simple because the bank was already using the browser-based interface for its BMC IMS solutions.

 

Dave Hilbe, the Development Director at BMC Software for BMC Application Accelerator for IMS, stated that Mr. Hirschler's experience was exactly what the development team was aiming to achieve.  “While IMS is still mission-critical to many large organizations, they don't have the people resources or, in some cases, the skills to manually tune performance for thousands of IMS batch jobs that run every day.  BMC Application Accelerator for IMS can dynamically tune the IMS batch jobs, saving money [by reducing CPU utilization] and elapsed time.”

 

The BMC Application Accelerator for IMS is immediately available for purchase. For more information, please visit:

 

Blog: Tuning DL/I Batch without Breaking a Sweat

 

Product Information:  BMC Application Accelerator for IMS

 

Quick Course: Getting Started with BMC Application Accelerator for IMS (requires a BMC Support login)

 

Follow us on Twitter @bmcmainframe and on Facebook


Compare the number of DBAs today with years past, and one can easily say that current staffing is an example of "doing more with less"... a lot more (more data volume, more databases, more transaction volume, higher availability requirements, etc.) with a lot less (smaller DBA teams, shorter outage windows, etc.).  A consequence of this reality is that there is less time for optimizing performance and resource usage, especially when it comes to batch.

 

For IMS DL/I batch, BMC will soon release a new solution that can tune jobs automatically.  The product is called BMC Application Accelerator for IMS and is designed to watch and learn how your DL/I batch jobs run and dynamically tune them without requiring any changes to the JCL or application code.  The product is currently in beta, and some of the results have been pretty impressive:

[Graph: baseline vs. optimized CPU time and elapsed time for two DL/I batch jobs]

In the graph above, comparing the baseline vs. optimized CPU times and elapsed times of two different jobs, the CPU savings ranged from 17% to 49% and the elapsed time savings ranged from 60% to 94%.  Please note that your results will vary and that in some cases, BMC Application Accelerator for IMS will not provide any improvement in elapsed time or CPU time.  Results depend on a number of factors, including the DL/I batch job profile (the highest improvements are for read-only, sequential access), the degree of disorganization of the IMS database, and machine workload.  That said, the effort required to achieve the savings was minimal: simply install the product and point it at the jobs to monitor and optimize.  The product will even self-report the savings, job by job.

 

The product will be Generally Available in March, so watch the IMS Communities for the announcement.  If you have any questions, feel free to post them on the blog or send me an email.


Data is always on the move in Information Technology, and as it moves, there’s usually some sort of conversion along the way. If your company uses IMS, moving data to a relational platform for analysis or post processing is almost guaranteed. That’s why, if your company has BMC Extract for IMS, you should consider extending its use for a number of reasons:

 

  • Finish faster - BMC Extract for IMS processes the data at utility speeds, so your IMS data unloads will process much faster. 2x faster is common, and customers have reported it runs up to 5x faster than COBOL programs.
  • Reduce MIPS - BMC Extract for IMS will save CPU.  Not only does it run faster, it also offloads up to 80% of processing to zIIP processors.
  • Reduce or eliminate post-processing steps - BMC Extract for IMS supports a rich set of filtering, conversion, and transformation capabilities that allow you to format the data as required for the target system during the unload/extract process.
  • Coding a utility job is up to 20x faster than writing a program - BMC Extract for IMS includes powerful selection and transformation capabilities that are much easier and faster than writing a program. Once you understand your IMS data content, you simply define your selection criteria and desired output transformations and formats.

Let’s take a look at this in terms of a common scenario moving data from IMS to DB2. You have a couple of choices that tell BMC Extract for IMS how to format the output file, depending on how you load the data into DB2. These options include a load format to load the data via a load utility, or an SQL format (using an insert statement) to import the data via an SQL process.

 

For this first example, let’s consider how to format the data for a DB2 load utility. Suppose you have a two-level IMS database containing records of a parent and his children. Assume we have an IMS ANCESTRY database comprised of two segments, PARENT and CHILD. This database contains a single PARENT segment and several CHILD segments. The actual data in the database appears as follows:
               
PARENT
   1, Abraham Lincoln

CHILD
   1, Robert Todd Lincoln
   2, Edward Baker Lincoln
   3, William Wallace Lincoln
   4, Thomas Lincoln

 

BMC Extract for IMS can be used to pull both segments in a single pass. It can divide the segments into separate extract files. It can also include data from the parent segment with data from the child segment.

Use the following statements to "extract" segment information into two extract files, one for PARENT and one for CHILD. The extracted information for CHILD will include the PARENT information as part of its extracted data.

 

EXTRACT( DBDNAME ( ANCESTRY )     
  OUTPUTFILE( DDNAME( PARENTO )   
  INCLUDE SEGMENTS ( PARENT )     
  REPLACE SEGMENT ( PARENT        
  WITH (                          
    FIELD ( ( C'PARENT KEY=' ) )  
    FIELD ( ( PARENT.PKEY ) )     
    FIELD ( ( X'6B') )            
    FIELD ( ( C'PARENT=' ) )      
    FIELD ( ( PARENT.PINFO ) )    
    ) ) )                         
  OUTPUTFILE( DDNAME( CHILDO )    
  INCLUDE SEGMENTS ( PARENT CHILD )
  REPLACE SEGMENT ( CHILD         
  WITH (                          
    FIELD ( ( C'PARENT KEY=' ) )  
    FIELD ( ( PARENT.PKEY ) )     
    FIELD ( ( X'6B') )            
    FIELD ( ( C'CHILD KEY=' ) )   
    FIELD ( ( CHILD.CKEY ) )      
    FIELD ( ( X'6B') )            
    FIELD ( ( C'CHILD=' ) )       
    FIELD ( ( CHILD.CINFO ) )
    ) ) ) )

The results for the DD PARENTO extract are:

<<
PARENT KEY= 1,PARENT=Abraham Lincoln
>>

The results for the DD CHILDO extract are:

<<
PARENT KEY= 1,CHILD KEY= 1,CHILD=Robert Todd Lincoln
PARENT KEY= 1,CHILD KEY= 2,CHILD=Edward Baker Lincoln
PARENT KEY= 1,CHILD KEY= 3,CHILD=William Wallace Lincoln
PARENT KEY= 1,CHILD KEY= 4,CHILD=Thomas Lincoln
>>


Options to consider when using BMC LOADPLUS for DB2


If you are loading the data with BMC LOADPLUS for DB2, you can choose to resume or replace. If you're adding operational data to a warehouse, you will probably want to choose the resume option so that you don't have to reload the entire table. BMC LOADPLUS for DB2 also gives you additional options for access to the objects, as well as options for how to handle indexes (a rough control-statement sketch follows the list below):

 

  • Available for application access vs. stopped. BMC LOADPLUS for DB2 has options that allow access to the data while the utility is executing against the object, and these access options are available for either a load REPLACE or RESUME. For load REPLACE, the option is selected with the SHRLVL parameter. For load RESUME, the online option is SQLAPPLY.
  • Index UPDATE vs. BUILD on load RESUME (Note: This option is not relevant for the SQLAPPLY option). With BMC LOADPLUS for DB2, BMC recommends that you limit the use of UPDATE to cases where you are adding a small percentage of the existing data to the table. If you are adding a large amount of data, using UPDATE can degrade the performance of SQL that uses the index(es).
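
To make these choices concrete, here is a rough sketch of what a control statement might look like, patterned on standard DB2 LOAD utility syntax. The table name matches the examples above; the column names, positions, lengths, and the SHRLVL value shown are placeholders, and the exact keyword spellings and allowed combinations (including where SQLAPPLY is specified for an online RESUME) should be taken from the BMC LOADPLUS for DB2 documentation rather than from this sketch:

LOAD DATA REPLACE SHRLVL REFERENCE
  INTO TABLE CHILD.TABLE
  ( PARENT_KEY  POSITION(13)  CHAR(1),
    CHILD_KEY   POSITION(26)  CHAR(1),
    CHILD_NAME  POSITION(34)  CHAR(25) )

For an online RESUME rather than a REPLACE, the first line would instead specify RESUME YES together with the SQLAPPLY option described above.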


Setting up BMC Extract for IMS output for SQL processing

 

You can also use BMC Extract for IMS to generate an SQL INSERT statement for every record instead of the delimited format shown above. You could then import the data into DB2 via SQL processing. With Log Master for DB2, you can use the High-speed Apply Engine to perform the inserts much faster than standard SQL processing. You also have the option to use the High-speed Apply Engine to insert the data into Oracle or DB2 UDB for LUW databases. An advantage of using SQL to import the data is that the target table remains available for access while the data is being imported.

 

For the SQL INSERT option, you would format the BMC Extract for IMS statements as follows:

 

EXTRACT( DBDNAME ( ANCESTRY )     
  OUTPUTFILE( DDNAME( PARENTO )   
  INCLUDE SEGMENTS ( PARENT )     
  REPLACE SEGMENT ( PARENT        
  WITH (                          
    FIELD ( ( C'INSERT INTO PARENT.TABLE VALUES (' ) )
    FIELD ( ( PARENT.PKEY ) )     
    FIELD ( ( X'6B7D') )              /* COMMA, SINGLE QUOTE */
    FIELD ( ( PARENT.PINFO ) )
    FIELD ( ( X'7D5D5E') )         /* SINGLE QUOTE, CLOSE PAREN, SEMI-COLON */    
    ) ) )                         
  OUTPUTFILE( DDNAME( CHILDO )    
  INCLUDE SEGMENTS ( PARENT CHILD )
  REPLACE SEGMENT ( CHILD         
  WITH (                          
    FIELD ( ( C'INSERT INTO CHILD.TABLE VALUES (' ) )
    FIELD ( ( PARENT.PKEY ) )     
    FIELD ( ( X'6B') )                  /* COMMA */
    FIELD ( ( CHILD.CKEY ) )      
    FIELD ( ( X'6B7D') )              /* COMMA, SINGLE QUOTE */
    FIELD ( ( CHILD.CINFO ) )
    FIELD ( ( X'7D5D5E') )         /* SINGLE QUOTE, CLOSE PAREN, SEMI-COLON */
    ) ) ) )

The results for the DD PARENTO extract are:

<<
INSERT INTO PARENT.TABLE VALUES(1, 'Abraham Lincoln');
>>

The results for the DD CHILDO extract are:

<<
INSERT INTO CHILD.TABLE VALUES(1, 1, 'Robert Todd Lincoln');
INSERT INTO CHILD.TABLE VALUES(1, 2, 'Edward Baker Lincoln');
INSERT INTO CHILD.TABLE VALUES(1, 3, 'William Wallace Lincoln');
INSERT INTO CHILD.TABLE VALUES(1, 4, 'Thomas Lincoln');
>>

 

Additional Information

 

For more information, you can go to the following URL:

 

http://www.bmc.com/support/product-documentation

 

and select the Documentation Center and Demo Library for Mainframe Products (BMC support ID and password are required). To make it easier to locate the information you are looking for, you may want to create a scope to filter the results to a specific set of products, e.g., IMS products.

 

To find examples related to how to set up BMC Extract for IMS, set your scope to look at only the IMS products and use the following search key:

 

“Extract locating examples”

 

To find out how to set up a BMC LOADPLUS for DB2 job, set your scope to look at only the DB2 products and use the following search key:

 

“Examples of LOADPLUS jobs”

 

To find out how to use High-speed Apply, set your scope to look at only the DB2 products and use the following search key:

 

“Examples using generated High-speed Apply JCL”

 

There is also an overview training video that shows how to extract IMS data and load it into DB2; you can access it by clicking the following link:

 

BMC Extract for IMS - Extracting IMS Data to Load into DB2 (BMC support ID and password are required)

 

Thanks to Bob Aubrecht, Jim Kurtz, and Ken McDonald from BMC Software for their contributions to this article.


As the wisest sages in IT will attest, a copy is often better than the original.  That maxim is never more true than when a database crashes. At that point, the original database becomes worthless and the copy is your only way to get the application back online.

 

This maxim also applies to the COPY function of MAXM Reorg Online.  The MAXM COPY function allows you to reorganize an index, as well as capture and apply updates, while the index is online.  Using a traditional index rebuild option is much more difficult because it requires you to take the index offline while it is being rebuilt.  COPY makes the process simpler while offering additional functionality, including the ability to heal "broken" HALDB pointers caused by partial reorganization of partitions.

 

Follow this link to a Knowledge Base article that provides both a sample JCL and explanations of various options available with the MAXM Reorg Online COPY function.

 

Thanks to Jim Martin, BMC Software for his contribution to this article.


We’ve introduced a performance bonus to MAXM Database Advisor for IMS in the August 2012 PUT.  The BMC development team revisited the scripting engine in the Database Toolkit and MAXM Database Advisor and made some substantial performance improvements for processing large HALDB and DEDB databases.  The result is a 90% reduction in both elapsed time and CPU time.  So give yourself a bonus and apply the MAXM Database Advisor enhancements today.


Make sure you are up to speed with all of the temporary fixes released for your IMS products at each Program Update Tape (PUT) level. (BMC Support login required.) Learn more


Learn how to add additional page space or adjust parameters when your auxiliary storage becomes insufficient. (BMC Support login required.) Learn more


As you may be aware, BMC Extract for IMS is a fast and flexible IMS data selection and formatting utility that comes with an extensive expression language which provides the capability to quickly extract and reformat IMS data. Besides being up to five times faster than an application program, API, or User Exit, BMC Extract for IMS  also uses dramatically less CPU, eliminates the need for IMS programming skills, and requires much less time to develop, code, and test.

 

Many customers use BMC Extract for IMS  to mask sensitive data as they “port” IMS production data over to their test and development environments. A common method is to simply plug sensitive data with constants. However, this approach removes variability for targeted fields and can have a downstream usability impact for key fields and secondary indexes.

 

BMC Extract for IMS comes with the capability to create “unique” fields during extract. This method can deliver the desired field variability to target environments while protecting sensitive data. The key “expression” variable that provides the basis for uniqueness in the output data is &SEGMENTCOUNT, which is incremented by one for each segment processed. It can be used to create either complete uniqueness or uniqueness with a degree of duplication.

 

Below is an example of a method that can be used to create “unique” Social Security numbers during extract. It assumes that the Social Security number is in “X” format and not numeric.

 

/* */
/* E.G., UNIQUE CONTRIVED SOCIAL SECURITY # */
/* COLUMNS ARE FLIPPED TO CREATE "SEPARATION" */
/* THE "ONES" COLUMN AFFECTS THE "HUNDRED MILLIONS" COLUMN */
/* THE "TENS" COLUMN AFFECTS THE "TEN MILLIONS" COLUMN */
/* THE "HUNDREDS" COLUMN AFFECTS THE "MILLIONS" COLUMN */
/* ETC. */
/* NOTE: YOU CAN SHUFFLE THE SEQUENCE AND STILL HAVE UNIQUENESS */
/* */
FIELD ( ( 100000000 - ((&SEGMENTCOUNT-1) ) ) TRANSFORM (1:1 C) )
FIELD ( ( 100000000 - ((&SEGMENTCOUNT-1) / 10) ) TRANSFORM (1:1 C) )
FIELD ( ( 10000000 - ((&SEGMENTCOUNT-1) / 100) ) TRANSFORM (1:1 C) )
FIELD ( ( 1000000 - ((&SEGMENTCOUNT-1) / 1000) ) TRANSFORM (1:1 C) )
FIELD ( ( 100000 - ((&SEGMENTCOUNT-1) / 10000) ) TRANSFORM (1:1 C) )
FIELD ( ( 10000 - ((&SEGMENTCOUNT-1) / 100000) ) TRANSFORM (1:1 C) )
FIELD ( ( 1000 - ((&SEGMENTCOUNT-1) / 1000000) ) TRANSFORM (1:1 C) )
FIELD ( ( 100 - ((&SEGMENTCOUNT-1) / 10000000) ) TRANSFORM (1:1 C) )
FIELD ( ( 10 - ((&SEGMENTCOUNT-1) / 100000000) ) TRANSFORM (1:1 C) )

 

Below are the results of this expression; the &SEGMENTCOUNT value is included as a reference. Notice that the field positions are flipped, taking the low-order columns and placing them in the high-order columns. As a result, this approach produces a balanced set of keys, with the high-order position distributed nearly evenly across 0-9.

 

New SS#     &SEGMENTCOUNT
000000000   1
900000000   2
800000000   3
700000000   4
600000000   5
500000000   6
400000000   7
300000000   8
200000000   9
100000000   10
090000000   11
990000000   12
890000000   13
790000000   14
690000000   15
590000000   16
490000000   17
390000000   18
290000000   19
190000000   20
080000000   21
980000000   22
880000000   23

 

It is easy to engineer in duplication by reducing the number of columns. For example, reducing the “plug” field to use only four columns (and constant-filling the rest of the columns) would create duplicates every 10,000 occurrences, as sketched below. If the “plug” were reduced to two columns, duplicates would occur every 100 occurrences.
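
For illustration, a variation of the statements shown earlier that lets only the first four columns vary and plugs the remaining five with a constant (the constant value here is just an example) would repeat every 10,000 segments:

/* */
/* E.G., CONTRIVED SOCIAL SECURITY # WITH DUPLICATES EVERY 10,000 OCCURRENCES */
/* ONLY FOUR COLUMNS VARY; THE LAST FIVE COLUMNS ARE A CONSTANT PLUG */
/* */
FIELD ( ( 100000000 - ((&SEGMENTCOUNT-1) ) ) TRANSFORM (1:1 C) )
FIELD ( ( 100000000 - ((&SEGMENTCOUNT-1) / 10) ) TRANSFORM (1:1 C) )
FIELD ( ( 10000000 - ((&SEGMENTCOUNT-1) / 100) ) TRANSFORM (1:1 C) )
FIELD ( ( 1000000 - ((&SEGMENTCOUNT-1) / 1000) ) TRANSFORM (1:1 C) )
FIELD ( ( C'00000' ) )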

 

 

BMC Extract for IMS can also be used to create unique “alpha” values. The method below leverages &SEGMENTCOUNT to assign a letter of the alphabet; here is an example for one character position.

 

/* */
/* E.G., CONTRIVED ALPHABETIC UNIQUENESS FOR ONE COLUMN */
/* */
FIELD ( ( IF ((&REMAINDER (&SEGMENTCOUNT / 26)) EQ 1) THEN (C'A')
ELSE ( IF ((&REMAINDER (&SEGMENTCOUNT / 26)) EQ 2) THEN (C'B')
ELSE ( IF ((&REMAINDER (&SEGMENTCOUNT / 26)) EQ 3) THEN (C'C')
ELSE ( IF ((&REMAINDER (&SEGMENTCOUNT / 26)) EQ 4) THEN (C'D')
ELSE ( IF ((&REMAINDER (&SEGMENTCOUNT / 26)) EQ 5) THEN (C'E')
ELSE ( IF ((&REMAINDER (&SEGMENTCOUNT / 26)) EQ 6) THEN (C'F')
ELSE ( IF ((&REMAINDER (&SEGMENTCOUNT / 26)) EQ 7) THEN (C'G')
/* THE TESTS FOR EQ 8 THROUGH EQ 24 (C'H' THROUGH C'X') FOLLOW THE SAME PATTERN AND ARE OMITTED HERE */
ELSE ( IF ((&REMAINDER (&SEGMENTCOUNT / 26)) EQ 25) THEN (C'Y')
ELSE (C'Z')
/* */
))))))))))))))))))))))))))

 

Results:

 

Alpha   &SEGMENTCOUNT
A       1
B       2
C       3
D       4
E       5
F       6
G       7
H       8
I       9
J       10
K       11
L       12
M       13
N       14
O       15
P       16
Q       17
R       18
S       19
T       20
U       21
V       22
W       23

 

It is also possible to build multiple character positions using supporting calculations. One character position supports 26 unique letters of the alphabet. Two character positions support 676 combinations (or 702 if you include space/blank as one of the characters in the second sub-field); three character positions support 17,576 combinations, four positions support 456,976, and so on. Another example is below, where I define a second alphabetic field that is a space for the first 26 iterations, then “A” for the next 26, then “B”, and so on.

 

/* */
/* E.G., CONTRIVED ALPHABETIC UNIQUENESS FOR A SECOND COLUMN. */
/* FIRST ITERATION OF 26 POPULATES SPACES. */
/* */
FIELD((IF (&SEGMENTCOUNT LE 26) THEN (C' ')
ELSE(IF((&REMAINDER(&REMAINDER((&SEGMENTCOUNT-1)/676))/26)EQ 1) THEN (C'A')
ELSE(IF((&REMAINDER(&REMAINDER((&SEGMENTCOUNT-1)/676))/26)EQ 2) THEN (C'B')
ELSE(IF((&REMAINDER(&REMAINDER((&SEGMENTCOUNT-1)/676))/26)EQ 3) THEN (C'C')
ELSE(IF((&REMAINDER(&REMAINDER((&SEGMENTCOUNT-1)/676))/26)EQ 4) THEN (C'D')
ELSE(IF((&REMAINDER(&REMAINDER((&SEGMENTCOUNT-1)/676))/26)EQ 5) THEN (C'E')
ELSE(IF((&REMAINDER(&REMAINDER((&SEGMENTCOUNT-1)/676))/26)EQ 6) THEN (C'F')
ELSE(IF((&REMAINDER(&REMAINDER((&SEGMENTCOUNT-1)/676))/26)EQ 7) THEN (C'G')
/* THE TESTS FOR EQ 8 THROUGH EQ 24 (C'H' THROUGH C'X') FOLLOW THE SAME PATTERN AND ARE OMITTED HERE */
ELSE(IF((&REMAINDER(&REMAINDER((&SEGMENTCOUNT-1)/676))/26)EQ 25) THEN (C'Y')
ELSE (C'Z')
))))))))))))))))))))))))))
))

 

Results:

 

Alpha   &SEGMENTCOUNT
A       1
B       2
C       3
...
X       24
Y       25
Z       26
AA      27
BA      28
CA      29
DA      30
EA      31
FA      32
...
WA      49
XA      50
YA      51
ZA      52
AB      53
BB      54
CB      55
DB      56
EB      57

 

It is possible to build a third, fourth, or additional character positions. A third character position would build off the work done for alpha position 2 and would cycle every 17,576 occurrences (more if using spaces, as we did for the second alpha character).

 

As we have seen, BMC Extract for IMS is a fast and flexible IMS data selection and formatting utility that provides a capability to quickly extract and reformat IMS data. With these methods covered, it should be relatively easy for the implementer to mask data while maintaining uniqueness when extracting IMS data.

 

Thanks to Bob Aubrecht, Principal Software Consultant, BMC Software, for his contribution to this article.  If you have any questions about the article, please feel free to contact Bob directly at Robert_Aubrecht@bmc.com.


Make sure you are up to date with all of the temporary fixes released for your IMS products at each Program Update Tape (PUT) level. View this Knowledge Article.  (BMC Support login required.)


You don’t have to expose sensitive production data when you use BMC Message Advisor for IMS to “mine” data for testing purposes. Watch the video. (BMC Support login required.)
