
Solutions for IMS


It’s an exciting time to be an IMS customer! IMS 14 is here, and on December 4th, BMC proudly announced new IMS enhancements and support for its IMS solutions portfolio.

 

In addition to supporting IMS 14, the BMC Database Management Solutions for IMS now offer new enhancements in performance, scalability, usability, and availability. The new BMC Database Solutions for IMS releases provide more online restructure capabilities, additional thresholds in MAXM Database Advisor, zero outage batch image copy, OSAM HALDB 8-gigabyte support, a new BMC Delta repository, a database clone function, and many more new capabilities.


BMC has also consolidated functionality from its NEON solutions to simplify the customer experience. New packaging options are available; for example, the new BMC MainView Extensions for IMS TM supplies additional information to MainView for IMS related to transactions that originate off of the mainframe.


BMC believes in the long-term value that IMS provides, running the world’s banks, transportation services, retailers, and so much more. Be sure to check out the new BMC solutions for IMS.

 

These postings are my own and do not necessarily represent BMC's position, strategies, or opinion.


IMS™ TM has always been your faithful transaction manager. Does your IMS TM communicate with DB2 databases to complete transactions? Then you know that these two IBM® siblings must sit together on the same LPAR in order to communicate. For years, a technical restriction has existed that requires the DB2 subsystem to reside on the same logical partition (LPAR) as the IMS TM subsystem that receives the requests. Makes sense, or at least it did. Now, however, it means that mainframe shops are maintaining and paying for multiple, duplicate, (and possibly under-utilized) subsystems merely because of this restriction.

 

My, how times have changed. These two overachieving MLC software powerhouses sing beautifully together – and now they can do it from separate LPARs. Why separate them?

 

The big reason to separate the siblings involves monthly license charge (MLC) math. You see, there’s a rumor out there that says sub-capacity pricing is equivalent to usage-based pricing. Not true. MLC software is charged by product and aggregated by LPAR – all at the highest peak MSU rate on that LPAR. So, you might be paying a high price for an IMS TM or a DB2 subsystem just so that they can cohabitate on the same LPAR. 
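The aggregation effect is easier to see with numbers. Here is a deliberately simplified sketch of the MLC math, with hypothetical $/MSU rates and MSU peaks (actual IBM sub-capacity pricing curves are tiered and more complex):

```python
# Illustrative sketch of sub-capacity MLC aggregation. The rates and MSU
# figures below are hypothetical; real IBM pricing is tiered and more complex.

def mlc_charge(products_on_lpar, peak_msu, rate_per_msu):
    """Each MLC product on an LPAR is billed at that LPAR's peak MSU."""
    return sum(rate_per_msu[p] * peak_msu for p in products_on_lpar)

rate_per_msu = {"IMS TM": 30.0, "DB2": 35.0}  # hypothetical $/MSU

# Cohabiting: both products are billed at the combined peak of 900 MSU.
together = mlc_charge(["IMS TM", "DB2"], peak_msu=900, rate_per_msu=rate_per_msu)

# Separated: each product is billed only at its own LPAR's lower peak.
separated = (mlc_charge(["IMS TM"], peak_msu=500, rate_per_msu=rate_per_msu)
             + mlc_charge(["DB2"], peak_msu=550, rate_per_msu=rate_per_msu))

print(together, separated)  # 58500.0 34250.0
```

Even in this toy example, moving one subsystem off the shared LPAR cuts the bill substantially, because neither product is billed at the other's peak anymore.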

 

This is all based on how monthly MLC costs are calculated – very complex math – but don’t assume you are stuck. By using BMC Subsystem Optimizer, you can now actively manage down your MLC costs by moving and separating subsystems. BMC just announced V2.0, which provides support for IMS TM to DB2 communication across separate LPARs. Cool stuff, and you’ll want to take a look.

 

The results can be incredible. We’ve seen customers make small moves in subsystem placement that result in 15-30% reductions in MLC. If you’re paying more than $10,000 per month for MLC, that adds up!

 

Have a look at BMC Subsystem Optimizer – and comment here with any thoughts you have about this topic. You, too, can make these IMS and DB2 siblings sing together from separate locations.

 

Learn more about BMC MLC-saving solutions and check out the new infographic on how to save on MLC.

 

 

These postings are my own and do not necessarily represent BMC's position, strategies, or opinion.


BMC has announced a new add-on package, MainView Extensions for IMS TM, available under the MainView for IMS offering.

 

This new package contains both BMC Log Analyzer for IMS and BMC Energizer for IMS Connect, and it supplies additional information to MainView for IMS about transactions that originate off of the mainframe.

 

Customers have requested that Energizer for IMS Connect be made available as an optional add-on, and BMC responded. BMC also decided to include Log Analyzer for IMS because of its integration with MainView for IMS and the popular benefits it provides.

 

The package gives customers who use MainView for IMS with IMS Connect visibility into calls that originate from a distributed platform and allows proactive monitoring of the environment. In addition, it provides the ability to perform diagnostics based on the IMS Log.

 

The BMC MainView Extensions for IMS TM add-on package includes:

  • BMC Energizer for IMS Connect - providing visibility into the specialized area of cross-platform IMS transactions
  • BMC Log Analyzer for IMS - providing the ability to perform detailed analysis on IMS applications from the IMS Log data

 

Be sure to consider adding this unique capability to your IMS environment and benefit from the additional insight and efficiency.

 


Checkpoints in IMS BMPs:

  • You have to have them, but they are expensive.
  • They are 100% overhead.
  • No productive work gets done while the checkpoint is being taken.
  • They elongate elapsed time, consume CPU time, and increase I/O so you certainly would not want to take more of them than you need.

Chances are that a number of your BMPs are taking more checkpoints than they need to take.

 

How can this be happening? BMPs written a number of years ago included the logic to take checkpoints at intervals that were appropriate for the processors on which they were running. That was likely a number of machine generations ago. The processor speeds today are much faster and the checkpointing interval from those old days is now much too short, resulting in the application taking more checkpoints than are needed.


The faster processors are causing other unexpected problems, too. They are hiding these checkpoint offenders from you. With faster processing, the BMPs finish in the same or less time than they did previously, so there is no indication that excessive overhead is taking place.


You need a way to find BMPs that are taking too many checkpoints. One checkpoint a second is a good pace for most applications. Once you identify the offenders, ideally you would like to fix the problem without changing the application programs. Those changes use developer time, require testing to verify, and ultimately need change control processing to get the modified programs back into production.


Using BMC Log Analyzer for IMS and BMC Application Restart Control for IMS allows you to find and fix BMPs that checkpoint excessively. BMC Log Analyzer provides the APPCHECK utility, which reads the SLDS and provides an analysis of BMP checkpoint offenders. It looks for BMP updaters that have taken more than one checkpoint per second and reports on them.

AAI graphic.png

The report above is from a BMC customer (hence the actual job names are hidden) and shows jobs that took from 3 to more than 50 checkpoints per second for the duration of the job.


Now that you know the BMPs to target, create a policy that will be used by the checkpoint pacing functionality in BMC Application Restart Control for IMS (AR/CTL) to bypass unneeded checkpoints. When the application issues a checkpoint, AR/CTL intercepts the call and uses the policy you created to determine if enough time has elapsed since the last checkpoint was taken. If it has, it lets the checkpoint call through. If not enough time has elapsed, AR/CTL immediately returns control to the program, suppressing the unneeded checkpoint.

With your policy in place, AR/CTL provides an automated way to control the pace of checkpointing without needing to change the application program. By eliminating unnecessary checkpoints from these serious offenders, you can reduce CPU and elapsed times significantly, sometimes by 60% or more.
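The pacing logic described above can be sketched in a few lines. This is an illustration of the time-based suppression idea, not AR/CTL's actual internals; the class name and interval are made up for the example:

```python
# Minimal sketch of time-based checkpoint pacing, in the spirit of what
# AR/CTL does when it intercepts CHKP calls. Names and the one-second
# interval are illustrative, not the product's actual API.

class CheckpointPacer:
    def __init__(self, min_interval_seconds):
        self.min_interval = min_interval_seconds
        self.last_checkpoint = None

    def should_take(self, now):
        """Allow the checkpoint only if enough time has elapsed since the last one."""
        if self.last_checkpoint is None or now - self.last_checkpoint >= self.min_interval:
            self.last_checkpoint = now
            return True
        return False  # suppress: return control to the program immediately

pacer = CheckpointPacer(min_interval_seconds=1.0)
# A BMP issuing a checkpoint every 0.1 seconds has 9 of every 10 suppressed.
taken = sum(pacer.should_take(t / 10) for t in range(100))
print(taken)  # 10
```

The program still issues its checkpoint calls on every loop; the pacer simply lets only one per second through, which is exactly why no application change is needed.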


Get more information by reading the BMC Application Restart Control data sheet, or contact your BMC Account Manager to start a discussion.


Your IMS environment is critical to your business—and you’ve probably optimized it the best you can. But these can be troubled times in IMS environments, with batch processes that consume resources and require long windows of processing time—like bandits walking the streets of your Gotham City. Are you fighting the good fight every day? Do you have the resources to protect this arena? Better question: does your team have the time to optimize individual buffer allocations for those IMS jobs? Or, are you interested in getting some help from a familiar friend in a neighboring city?

 

When it comes to managing IMS batch, has your team been able to fully optimize its impact on your MSU peak? Are the BMP, DL/I, or DBB applications causing you pain by requiring windows that are longer than you would like them to be?


Never fear! A new superhero is in town, and its name is AAI—Application Accelerator for IMS from BMC. If you haven’t heard of AAI yet, you’ll want to learn more about it.



 

AAI is really innovative—a “set it up and let it do its thing” type of solution. AAI will monitor your IMS batch jobs, recommend jobs to optimize, and automatically take action to reduce CPU usage and elapsed time, ultimately lowering the cost of running your IMS. AAI requires no changes to applications or JCL, and can automatically identify those jobs that need optimizing without your constant oversight. It has a nice graphical user interface and reporting so that your team can track those jobs that are optimized. And—AAI can calculate your savings and report them to you. Sometimes, these savings are as high as 50% (your mileage may vary, of course).

 

So what’s stopping you? Call your IMS superhero AAI today and ask for an assessment of your batch efficiency. Of course, you can also call your trusty BMC Account Manager, who can arrange for the assessment. Hey—it can never hurt to connect with a superhero, right? 


It is scary to think about modifying old IMS batch jobs that have been functioning perfectly well for years. The knowledge about the application may no longer be available, and the risk of creating business disruption is a strong argument for leaving well enough alone. But many times, old applications use more resources and take more time than they should, driving up costs and stretching service levels. It would be nice to have a foolproof way to optimize that IMS batch without the risks associated with making internal changes.

This is the problem BMC experts have solved with the new BMC Application Accelerator for IMS, which helps customers lower their IMS costs by reducing CPU utilization and elapsed time dynamically and automatically. The product is easily installed; customers simply identify the jobs they want the solution to monitor and optimize.

 

The latest update for BMC’s innovative Application Accelerator for IMS adds the ability to optimize BMP batch, joining the DL/I batch support delivered in the initial release. Application Accelerator automatically and dynamically tunes IMS batch control parameters to reduce CPU consumption and the elapsed time of batch jobs. In customer beta testing, Application Accelerator has reduced CPU use by 20% to 50% and elapsed time by 20% to 80%.

 

For more information on BMC Application Accelerator for IMS, check out this video: http://www.bmc.com/support/support-videos/msmdemo-ims-app-acc-overview.html

Or the BMC Application Accelerator for IMS Datasheet:  http://www.bmc.com/products/product-listing/application-accelerator-for-ims.html


BMC Application Accelerator for IMS Dynamically Tunes DL/I batch

BMC Software recently introduced BMC Application Accelerator for IMS, a solution that dynamically tunes DL/I batch applications, reducing both the CPU usage and elapsed time of IMS batch jobs. Rather than requiring traditional manual analysis and tuning, BMC Application Accelerator for IMS automatically monitors DL/I batch jobs during execution. If it determines that it can improve performance, it dynamically changes job parameters on subsequent executions so that the job runs more efficiently and, consequently, faster.

 

then_now.jpg

"Programmer Standing Beside Punched Cards" Photograph ©1955, The MITRE Corporation. All rights reserved. Courtesy of MITRE Corporate Archives

 

 

One of the beta customers, Daniel Hirschler, the IMS System Programmer for a large German bank, shared some of his observations on the product. Like many IT organizations with mainframes, Daniel’s bank has an initiative to reduce costs whenever possible. IMS is also very important to the bank, as much of its customer information resides in IMS.

 

When Mr. Hirschler was asked how the testing went, he replied, “It was great. You let the product run, you let the jobs run, no intervention required ... just look at the reports and see the CPU savings.” He also said that the product installation was easy and that using the product was simple because the bank was already using the browser-based interface for its BMC IMS solutions.

 

Dave Hilbe, the Development Director at BMC Software for BMC Application Accelerator for IMS stated that Mr. Hirschler’s experience was exactly what the development team was aiming to achieve.  “While IMS is still mission-critical to many large organizations, they don’t have the people resources or in some cases, the skills to manually tune performance for thousands of IMS batch jobs that run every day.  BMC Application Accelerator for IMS can dynamically tune the IMS batch jobs, saving money [by reducing CPU utilization] and elapsed time."

 

The BMC Application Accelerator for IMS is immediately available for purchase. For more information, please visit:

 

Blog: Tuning DL/I Batch without Breaking a Sweat

 

Product Information:  BMC Application Accelerator for IMS

 

Quick Course: Getting Started with BMC Application Accelerator for IMS (requires a BMC Support login)

 

Follow us on Twitter @bmcmainframe and on Facebook


I have personally helped large customers implement Fast Path DEDB restructures with near-zero IMS application outage. This same capability is also available for IMS Full-function and HALDB databases. Please contact me if you have any questions.

 

The IMS users that BMC works with on a regular basis do not want to take IMS outages for any reason. A common scheduled-outage task is implementing database changes; change complexity, database size, and the number of databases needing change dictate the outage duration, which can run many hours. DBAs are constantly being asked to limit both the number of outages per year and their duration.

 

BMC can now address this painful and costly task through software innovation that restructures Full-function, HALDB, and DEDB databases while those databases remain available online to IMS users. BMC Restructure for IMS contains this industry-unique technology to reduce scheduled IMS application outages for database restructures from hours to minutes. What does this mean to your business? Additional revenue? Additional competitive advantage? Reduced costs?

 

Database restructuring support includes:

 

Full-function and HALDB:

•    Convert a database to a different database type

•    Make tuning changes

•    Increase or decrease segment length (such as changing from fixed length to variable length segments)

•    Add and delete data set groups

•    Add and delete indexes

•    Add new segments while maintaining existing segment parentage

•    Restructure segment data

•    Add new search fields

•    Add, change, and delete compression exits

 

Fast Path DEDB databases:

•    Add and remove areas

•    Resize areas

•    Perform randomizer changes

•    Add segments at the end of a hierarchical path

•    Add a sequential dependent segment (SDEP)

•    Add, change, or remove a compression exit

•    Modify lengths of variable-length segments (decrease the minimum length or increase the maximum length)

•    Modify segment content

 

For additional information see the following links:

 

BMC Database Restructure for IMS

BMC Fast Path Online Restructure/EP for IMS


Compare the number of DBAs today to years past and one can easily say that current staffing is an example of "doing more with less"... a lot more (more data volume, more databases, more transaction volume, higher availability requirements, etc.) with a lot less (smaller DBA teams, shorter outage windows, etc.)  A consequence of the current reality is that there is less time for optimizing performance and resources used, especially when it comes to batch.

 

For IMS DL/I batch, BMC will soon release a new solution that can tune jobs automatically. The product is called BMC Application Accelerator for IMS; it is designed to watch and learn how your DL/I batch jobs run, then dynamically tune each job without requiring any changes to the JCL or application code. The product is currently in beta, and some of the results have been pretty impressive:

BAAI Graph 2.jpg

In the graph above, comparing the baseline vs. optimized CPU and elapsed times of two different jobs, we can see that the CPU savings ranged from 17% to 49% and the elapsed time savings ranged from 60% to 94%. Please note that your results will vary, and in some cases BMC Application Accelerator for IMS will not improve elapsed time or CPU time at all. Results depend on a number of factors, including the DL/I batch job profile (the highest improvements are for read-only, sequential access), the degree of disorganization of the IMS database, and machine workload. That said, the effort required to achieve the savings was minimal: simply install the product and point it at the jobs to monitor and optimize. The product will even self-report the savings, job by job.
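For readers who want to reproduce the percentages from their own before/after measurements, the savings figures are just (baseline − optimized) / baseline. The job names and raw timings below are hypothetical stand-ins, chosen only so the computed percentages match the ranges quoted above:

```python
# How savings percentages are derived from baseline vs. optimized timings.
# The job names and raw numbers are hypothetical; real figures come from
# the product's self-reported metrics.

def pct_saved(baseline, optimized):
    return round(100 * (baseline - optimized) / baseline, 1)

jobs = {
    "JOB1": {"cpu": (120.0, 61.2), "elapsed": (3600.0, 216.0)},
    "JOB2": {"cpu": (80.0, 66.4),  "elapsed": (1800.0, 720.0)},
}
for name, m in jobs.items():
    print(name, pct_saved(*m["cpu"]), pct_saved(*m["elapsed"]))
# JOB1 49.0 94.0
# JOB2 17.0 60.0
```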

 

The product will be Generally Available in March, so watch the IMS Communities for the announcement. If you have any questions, feel free to post them on the blog or send me an email.


Customers have recently been asking how to request batch reporting within MAXM Database Advisor for IMS. Batch reports can be generated for all database objects in the IMS RECON from the RECON level of the Advisor navigation tree. Full-function and HALDB database objects use the Database History, Database Space, and Space Usage report types; Fast Path DEDB database objects use the Area Performance and Area Space reports. Because the information reported for Full-function and HALDB objects differs from that for DEDB objects, the two cannot be combined into a single report.

 

If a requested batch-generated report is larger than 4000 lines, it cannot be viewed in the Advisor console; instead, view the results in the generated data set via ISPF. The generated data set name is specified in the BMCMXA340683I message that indicates the batch job completed successfully. In this case, use FTP (or another transfer product you prefer) to download the file to your PC before opening it in Excel. Once the generated report is in Excel, it is very easy to filter and summarize the data to get the information you need.

 

Let’s look at a specific example where you want to know the total database record counts for a specific set of database objects. The following procedure steps you through requesting batch reporting in the Advisor to quickly gain access to this information. Use the same procedure to access other batch report information and to analyze the data with an external tool such as Excel:

 

Requesting Batch Reporting Procedure (Total Database Record Counts Example):

 

Step 1:  Start by right-clicking the RECON icon and following the drop-down menus to “Request Batch Report”.

 

 

Step 1.jpg

 

Step 2:  Select the Batch Report Type, in this case “Database History”, and click “OK”.

 

Step 2.jpg

 

Step 3:  Batch report initiation is indicated with a message in the Advisor Message pane.

 

Step 3.jpg

 

Step 3.5:  When the Advisor completes the request, a status message is issued in the Message pane. Note that a report data set is generated and its name is indicated in the status message.

 

Step 3.5.jpg

Step 4:  If the requested report is fewer than 4000 lines, it can be viewed in the Advisor. If it is more than 4000 lines, view the data set referenced in Step 3.5 via ISPF.

 

Step 4.jpg

Step 5:   Select the generated DBHIST report for viewing.

 

Step 5.jpg

 

Step 6:  This is the raw database object information for the Database History batch report, viewed from the console (which means the generated report is 4000 lines or fewer). Let’s export the data to Excel so that database objects can be filtered and summed.

 

Step 6.jpg

Step 7:  Save the exported Database History file on your PC.

 

Step 7.jpg

 

Step 7.5:  The export completion is indicated via status message in the Advisor Message pane.

 

Step 7.5.jpg

Step 8 and 8.5:  Open the Database History .csv file in Excel and add heading filters.

 

Step 8_8.5.jpg

 

Step 9:  Use the database object filter to select the databases of interest.

 

Step 9.jpg

 

Step 10: Use the Excel AutoSum function to quickly determine the number of database records for the database objects of interest.

 

Step 10.jpg
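If you prefer scripting over spreadsheets, Steps 8 through 10 (filter and AutoSum in Excel) can be reproduced directly against the exported .csv file. This sketch assumes the Database History export has "DBD Name" and "DB Records" columns; the actual column headings in your export may differ:

```python
# Scripted equivalent of the Excel filter + AutoSum steps, run against the
# exported Database History CSV. The column names "DBD Name" and
# "DB Records" are assumptions; check your own export's headings.
import csv
import io

sample = """DBD Name,DB Records
CUSTDB01,150000
CUSTDB02,275000
ORDERS01,900000
"""

def total_records(csv_text, dbds_of_interest):
    """Sum record counts for just the database objects of interest."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return sum(int(row["DB Records"])
               for row in reader
               if row["DBD Name"] in dbds_of_interest)

print(total_records(sample, {"CUSTDB01", "CUSTDB02"}))  # 425000
```

For a real export, replace the inline `sample` string with `open("dbhist.csv").read()` (or pass the file object straight to `csv.DictReader`).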


In mid-August I conducted a well-attended webinar on the topic of Coordinated Recovery for DB2, IMS, and VSAM. I have given the presentation at several conferences, briefings, and customer sites, and I am scheduled to present it in New York City on September 26th. Why the resurgent interest in this topic, first addressed in 1999? I think it is because IT organizations recognize the need for a more granular local application recovery capability that is not supported by disk mirroring solutions or disaster recovery procedures.

 

As with any solution, there are pros and cons to disk mirroring. Typically these solutions are aimed at disaster recovery – a catastrophic outage impacting the entire data center. Disk mirroring solutions can reduce or eliminate data loss and downtime. They are expensive to build and support, but they can serve a useful purpose. However, for most recovery situations, they are not an effective solution.

 

Consider the following likely events:

• An application program change results in incorrectly updated data

• A user inadvertently updates the wrong data

• A database administrator incorrectly modifies a database structure

• A disgruntled employee updates data maliciously

• A storage controller fails, impacting hundreds of volumes of disk data

• System software maintenance is applied, containing changes that impact database data 

• And many more…

 

In these cases, one would not declare disaster and move to the remote site mirror. Local application database level recoveries are required. Making the application recoveries even more complex, over the years application relationships have evolved that include DB2, IMS, and CICS/VSAM components. Recovery of any of these components may require recovery of the related components, especially if the recovery action is a recovery to a prior point in time. It is the coordinated recovery to a prior point in time that is the topic of this article.

 

Suppose an event has occurred that corrupted your DB2 application data. The data corruption is not severe enough to declare disaster. You have decided to recover the local application to a prior point in time. The application is complex and has IMS and CICS/VSAM components that are related to the DB2 application, so those objects must be recovered to the same point in time.

 

You can exploit the BMC Recovery solutions to support this recovery event.

• For DB2, IMS, and VSAM, all BMC Recovery solutions support a recovery technique called BACKOUT to TIMESTAMP. BACKOUT is very fast and more efficient than normal forward recovery from an image copy.

o This assumes the underlying database datasets are physically accessible – the storage is fine, it is a logical error we are correcting.

o This also assumes that since these applications are related, they are sharing the same system clock – so TIMESTAMP is the same point in time for all applications.

o For BMC DB2 recovery, there may be events that would render BACKOUT unusable (for instance, a LOAD LOG NO executed after your recovery point – we cannot jump around that hole in the log; we would automatically fall back to a normal forward recovery for those objects).

o BMC DB2 recovery also supports a process called Recovery Avoidance. Suppose you are doing this point-in-time recovery for an application with 100 objects, and assume 80 of those objects have not been updated since the designated TIMESTAMP. There is no reason to recover them; they are both physically and logically sound.

o If the DB2 recovery is a subsystem wide event (perhaps the application in play is SAP), then a DB2 conditional restart may be required. The BMC DB2 recovery solution automates the analysis for this requirement and generates the conditional restart process if needed. At the conclusion of the generated BMC Recovery action, you will have consistent data as of the designated TIMESTAMP. All in-flight units of work as of the TIMESTAMP will not be recovered. The BACKOUT technique is very fast, reducing downtime for the recovery. For the DB2 application, Recovery Avoidance further reduces downtime.
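The Recovery Avoidance idea above reduces to a simple filter: only objects updated after the recovery TIMESTAMP need recovery. The object names and the `last_updated` bookkeeping here are illustrative, not the product's internals:

```python
# Sketch of the Recovery Avoidance filter: recover only objects that were
# updated after the designated recovery TIMESTAMP. Object names and update
# times are hypothetical examples.
from datetime import datetime

recovery_point = datetime(2024, 6, 1, 2, 0, 0)

objects = {
    "TS_ORDERS":  datetime(2024, 6, 1, 3, 15, 0),   # updated after -> recover
    "TS_CUSTMR":  datetime(2024, 5, 30, 9, 0, 0),   # untouched since -> skip
    "TS_INVNTRY": datetime(2024, 5, 28, 1, 0, 0),   # untouched since -> skip
}

to_recover = [name for name, last_updated in objects.items()
              if last_updated > recovery_point]
print(to_recover)  # ['TS_ORDERS']
```

In the 100-object example from the text, this filter is what lets 80 of the objects skip recovery entirely, since they are already consistent as of the TIMESTAMP.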

 

Some companies are interested in generating a Coordinated Recovery in a disaster recovery scenario. There are some differences in the procedure to implement this requirement:

• The recovery point in time TIMESTAMP can be specified, or it can be based on an event such as an IMS Log Switch.

• For IMS, a RECON Clean up utility will need to run at the remote site, as the RECONs were backed up while open.

• The IMS and CICS/VSAM recoveries will be to the specified TIMESTAMP.

• For DB2, the ‘recovery’ will not be a TIMESTAMP recovery; it will be a DB2 Conditional Restart to the RBA/LRSN that correlates to the specified TIMESTAMP. The BMC DB2 Recovery solution provides utilities that will translate the TIMESTAMP to RBA/LRSN and generate the appropriate Conditional Restart process to that point. 

• Once the DB2 Conditional Restart is complete, the application recoveries will all be to the new end of log.

 

Some questions (and answers) that have come up during presentation of this topic:

• Does the DB2 Catalog and Directory copy have to be done by BMC COPY PLUS, or will a normal IBM copy work? (A normal IBM copy will work.)

• Can RECOVERY PLUS for IMS recover to an alternate data set? (Yes; RECOVER PLUS for DB2 can do this too.)

• Does the IMS disaster recovery part require the use of DBRC? (Yes; in our example, that is where we obtain the timestamp that drives the entire DB2 Conditional Restart process.)

• Does the Coordinated Disaster Recovery process have to be driven by an IMS Log Switch? (No; any event can be used to kick off the process – it could be a DB2 log switch, the end of the batch cycle, or an arbitrary timestamp.)

 

The Coordinated Recovery Webinar was recorded and can be seen at this URL: http://go.bmc.com/forms/WBNR_MSM_DPM_CoordRecovAug15_BMCcom_EN_Aug2012

 

Please comment with any questions!


Data is always on the move in Information Technology, and as it moves, there’s usually some sort of conversion along the way. If your company uses IMS, moving data to a relational platform for analysis or post processing is almost guaranteed. That’s why, if your company has BMC Extract for IMS, you should consider extending its use for a number of reasons:

 

  • Finish faster - BMC Extract for IMS processes the data at utility speeds, so your IMS data unloads will process much faster. 2x faster is common, and customers have reported it runs up to 5x faster than COBOL programs.
  • Reduce MIPS - BMC Extract for IMS saves CPU, not only because it runs faster but also because it offloads up to 80% of processing to zIIP processors.
  • Reduce or eliminate post-processing steps - BMC Extract for IMS supports a rich set of filtering, conversion, and transformation capabilities that allow you to format the data as required for the target system during the unload/extract process.
  • Coding a utility job is up to 20x faster than writing a program - BMC Extract for IMS includes powerful selection and transformation capabilities that are much easier and faster than writing a program. Once you understand your IMS data content, you simply define your selection criteria and desired output transformations and formats.

Let’s take a look at a common scenario: moving data from IMS to DB2. You have a couple of choices that tell BMC Extract for IMS how to format the output file, depending on how you load the data into DB2. The options include a load format to load the data via a load utility, or an SQL format (using insert statements) to import the data via an SQL process.

 

For this first example, let’s consider how to format the data for a DB2 load utility. Suppose you have a two-level IMS database containing records of a parent and his children. Assume we have an IMS ANCESTRY database composed of two segments, PARENT and CHILD. This database contains a single PARENT segment and several CHILD segments. The actual data in the database appears as follows:
               
PARENT
   1, Abraham Lincoln

CHILD
   1, Robert Todd Lincoln
   2, Edward Baker Lincoln
   3, William Wallace Lincoln
   4, Thomas Lincoln

 

BMC Extract for IMS can be used to pull both segments in a single pass. It can divide the segments into separate extract files. It can also include data from the parent segment with data from the child segment.

Use the following statements to "extract" segment information into two extract files, one for PARENT and one for CHILD. The extracted information for CHILD will include the PARENT information as part of its extracted data.

 

EXTRACT( DBDNAME ( ANCESTRY )     
  OUTPUTFILE( DDNAME( PARENTO )   
  INCLUDE SEGMENTS ( PARENT )     
  REPLACE SEGMENT ( PARENT        
  WITH (                          
    FIELD ( ( C'PARENT KEY=' ) )  
    FIELD ( ( PARENT.PKEY ) )     
    FIELD ( ( X'6B') )            
    FIELD ( ( C'PARENT=' ) )      
    FIELD ( ( PARENT.PINFO ) )    
    ) ) )                         
  OUTPUTFILE( DDNAME( CHILDO )    
  INCLUDE SEGMENTS ( PARENT CHILD )
  REPLACE SEGMENT ( CHILD         
  WITH (                          
    FIELD ( ( C'PARENT KEY=' ) )  
    FIELD ( ( PARENT.PKEY ) )     
    FIELD ( ( X'6B') )            
    FIELD ( ( C'CHILD KEY=' ) )   
    FIELD ( ( CHILD.CKEY ) )      
    FIELD ( ( X'6B') )            
    FIELD ( ( C'CHILD=' ) )       
    FIELD ( ( CHILD.CINFO ) )     
    ) ) )                         
  )

The results for the DD PARENTO extract are:

<<
PARENT KEY= 1,PARENT=Abraham Lincoln
>>

The results for the DD CHILDO extract are:

<<
PARENT KEY= 1,CHILD KEY= 1,CHILD=Robert Todd Lincoln
PARENT KEY= 1,CHILD KEY= 2,CHILD=Edward Baker Lincoln
PARENT KEY= 1,CHILD KEY= 3,CHILD=William Wallace Lincoln
PARENT KEY= 1,CHILD KEY= 4,CHILD=Thomas Lincoln
>>


Options to consider when using BMC LOADPLUS for DB2


If you are loading the data with BMC LOADPLUS for DB2, you can choose to resume or replace. If you’re adding operational data to a warehouse, you will probably want the resume option so that you don't have to reload the entire table. The BMC LOADPLUS for DB2 utility also gives you options for access to the objects while loading, as well as options for how to handle indexes:

 

  • Available for application access vs. stopped. BMC LOADPLUS for DB2 has options that allow access to the data while the utility is executing against the object, and these access options are available for either a load REPLACE or a load RESUME. For load REPLACE, the option is selected with the SHRLVL parameter; for load RESUME, the online option is SQLAPPLY.
  • Index UPDATE vs. BUILD on load RESUME (Note: This option is not relevant for the SQLAPPLY option). With BMC LOADPLUS for DB2, BMC recommends limiting the use of UPDATE to cases where you are adding a small percentage of the existing data to the table. If you are adding a large amount of data, UPDATE can degrade the performance of SQL statements that use the index(es).
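The UPDATE-vs.-BUILD guidance above amounts to a simple rule of thumb. The sketch below is a hypothetical illustration only; the 10% threshold is an assumption chosen for the example, not a BMC-documented figure:

```python
# Hypothetical rule of thumb (not a BMC-documented formula): favor index
# UPDATE only when the rows being added are a small fraction of the table.
def suggest_index_option(existing_rows, new_rows, threshold=0.10):
    """Return 'UPDATE' for small incremental loads, otherwise 'BUILD'."""
    if existing_rows == 0:
        return "BUILD"  # empty table: nothing to update incrementally
    return "UPDATE" if new_rows / existing_rows <= threshold else "BUILD"

print(suggest_index_option(1_000_000, 20_000))   # small add → UPDATE
print(suggest_index_option(1_000_000, 400_000))  # large add → BUILD
```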


Setting up BMC Extract for IMS output for SQL processing

 

You can also use BMC Extract for IMS to generate SQL INSERT statements for every record instead of the CSV format, and then import the data into DB2 via SQL processing. With Log Master for DB2, you can use the High-speed Apply Engine to perform the inserts much faster than standard SQL processing. You also have the option of using the High-speed Apply Engine to insert the data into Oracle or DB2 UDB for LUW databases. An advantage of using SQL to import the data is that the target table remains available for access while the data is being imported.
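To show the general idea of applying generated INSERT statements programmatically, here is a minimal sketch that uses Python's standard sqlite3 module as a stand-in target. This is a hypothetical illustration, not the High-speed Apply Engine, and the table layout is assumed from the examples in this article:

```python
# Hypothetical sketch: apply a list of generated INSERT statements through a
# generic database connection (sqlite3 stands in for the real target here).
import sqlite3

def apply_sql_file(conn, lines):
    """Execute each INSERT statement, then commit once at the end."""
    cur = conn.cursor()
    for stmt in lines:
        cur.execute(stmt.rstrip().rstrip(";"))
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE parent(pkey INTEGER, pinfo TEXT)")
apply_sql_file(conn, ["INSERT INTO parent VALUES(1, 'Abraham Lincoln');"])
print(conn.execute("SELECT pinfo FROM parent").fetchone()[0])  # → Abraham Lincoln
```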

 

For the SQL INSERT option, you would format the BMC Extract for IMS statements as follows:

 

EXTRACT( DBDNAME ( ANCESTRY )     
  OUTPUTFILE( DDNAME( PARENTO )   
  INCLUDE SEGMENTS ( PARENT )     
  REPLACE SEGMENT ( PARENT        
  WITH (                          
    FIELD ( ( C'INSERT INTO PARENT.TABLE VALUES (' ) )
    FIELD ( ( PARENT.PKEY ) )     
    FIELD ( ( X'6B7D') )              /* COMMA, SINGLE QUOTE */
    FIELD ( ( PARENT.PINFO ) )
    FIELD ( ( X'7D5D5E') )         /* SINGLE QUOTE, CLOSE PAREN, SEMI-COLON */    
    ) ) )                         
  OUTPUTFILE( DDNAME( CHILDO )    
  INCLUDE SEGMENTS ( PARENT CHILD )
  REPLACE SEGMENT ( CHILD         
  WITH (                          
    FIELD ( ( C'INSERT INTO CHILD.TABLE VALUES (' ) )
    FIELD ( ( PARENT.PKEY ) )     
    FIELD ( ( X'6B') )                  /* COMMA */
    FIELD ( ( CHILD.CKEY ) )      
    FIELD ( ( X'6B7D') )              /* COMMA, SINGLE QUOTE */
    FIELD ( ( CHILD.CINFO ) )
    FIELD ( ( X'7D5D5E') )         /* SINGLE QUOTE, CLOSE PAREN, SEMI-COLON */     
    ) ) ) )                       

The results for the DD PARENTO extract are:

<<
INSERT INTO PARENT.TABLE VALUES(1, 'Abraham Lincoln');
>>

The results for the DD CHILDO extract are:

<<
INSERT INTO CHILD.TABLE VALUES(1, 1, 'Robert Todd Lincoln');
INSERT INTO CHILD.TABLE VALUES(1, 2, 'Edward Baker Lincoln');
INSERT INTO CHILD.TABLE VALUES(1, 3, 'William Wallace Lincoln');
INSERT INTO CHILD.TABLE VALUES(1, 4, 'Thomas Lincoln');
>>
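If you build similar INSERT statements off-host, remember to double any single quotes embedded in character data so the generated SQL stays valid. The following Python sketch is a hypothetical illustration that reproduces the CHILD.TABLE statement format shown above:

```python
# Hypothetical illustration: building the same INSERT statements off-host.
# The table and column layout are assumed from the sample output above.
def make_child_insert(parent_key, child_key, child_info):
    """Format one INSERT statement, doubling embedded single quotes."""
    safe = child_info.replace("'", "''")
    return (f"INSERT INTO CHILD.TABLE VALUES({parent_key}, "
            f"{child_key}, '{safe}');")

print(make_child_insert(1, 1, "Robert Todd Lincoln"))
# → INSERT INTO CHILD.TABLE VALUES(1, 1, 'Robert Todd Lincoln');
```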

 

Additional Information

 

For more information, go to the following URL:

 

http://www.bmc.com/support/product-documentation

 

and select the Documentation Center and Demo Library for Mainframe Products (BMC support ID and password are required). To make it easier to locate the information you are looking for, you may want to create a scope that filters the results to a specific set of products (for example, IMS products).

 

To find examples related to how to set up BMC Extract for IMS, set your scope to look at only the IMS products and use the following search key:

 

“Extract locating examples”

 

To find out how to set up a BMC LOADPLUS for DB2 job, set your scope to look at only the DB2 products and use the following search key:

 

“Examples of LOADPLUS jobs”

 

To find out how to use High-speed Apply, set your scope to look at only the DB2 products and use the following search key:

 

“Examples using generated High-speed Apply JCL”

 

There is also an overview training video that shows how to extract IMS data and load it into DB2; you can access it by clicking the following link:

 

BMC Extract for IMS - Extracting IMS Data to Load into DB2 (BMC support ID and password are required)

 

Thanks to Bob Aubrecht, Jim Kurtz, and Ken McDonald from BMC Software for their contributions to this article.

Share:|

As the wisest sages in IT will attest, a copy is often better than the original.  That maxim is never more true than when a database crashes. At that point, the original database becomes worthless and the copy is your only way to get the application back online.

 

This is also a valid maxim for the COPY function of MAXM Reorg Online. The MAXM COPY function allows you to reorganize an index, as well as capture and apply updates, while the index remains online. A traditional index rebuild is far more disruptive, requiring you to take the index offline while it is being rebuilt. COPY makes the process simpler while offering additional functionality, including the ability to heal "broken" HALDB pointers caused by partial reorganization of partitions.

 

Follow this link to a Knowledge Base article that provides both a sample JCL and explanations of various options available with the MAXM Reorg Online COPY function.

 

Thanks to Jim Martin, BMC Software for his contribution to this article.

Share:|

We’ve introduced a performance bonus to MAXM Database Advisor for IMS in the August 2012 PUT.  The BMC development team revisited the scripting engine in the Database Toolkit and MAXM Database Advisor and made some substantial performance improvements for processing large HALDB and DEDB databases.  The result is a 90% reduction in both elapsed time and CPU time.  So give yourself a bonus and apply the MAXM Database Advisor enhancements today.

Share:|

Make sure you are up to speed with all of the temporary fixes released for your IMS products at each Program Update Tape (PUT) level. (BMC Support login required.) Learn more
