
Solutions for DB2


We can divide DB2 DBAs into three categories: those who have come to DB2 later in the life of our favourite mainframe DBMS, those who worked with DB2 back in the heady days of versions 1 and 2, and some who (like me) fall into both camps. We remember the old days, but we also see how our world of databases has changed.

 

Not only has DB2 itself changed, but the infrastructure within which DB2 resides has also changed – and in some ways, it’s changed dramatically. For example, the way DASD storage is provided and how it is recognized and managed by the operating system.

 

When I started my career as a DB2 DBA, you could actually go into the machine room (if you were allowed) and look at boxes of DASD. We could literally SEE the volumes that we were used to looking at with LISTCAT or with ISPF 3.4. Because this allowed us to visualize how our DB2 data was laid out, we could foresee problems that could be caused by unthinking placement of data sets.

 

Do you remember the “rules” that suggested NEVER putting a table space AND its indexes on the same volume? This was to prevent “head contention” as DB2 tried to read index leaf pages and then table space data pages all at the same time. We tried very hard to ensure that tables and indexes never coexisted on the same DASD volume.

We also had to worry about the number of extents in which a data set existed. The maximum number of extents was fairly limited  in those days, and it was startlingly easy to run out of extents and get a DB2 extend error as DB2 tried, and failed, to find more space to store data. As the company I worked for was growing at an alarming rate, I found myself being called out to fix data set extend errors far too often (and why did they always happen in the middle of the night?). So I developed a trick to keep myself ahead of DB2. If a data set needs to be extended, DB2 tries to extend BEFORE the space is actually needed; if it can’t be extended, a warning message is issued. I used our automated operations tool to look for those messages, extract the jobname from the message (it always seemed to be a batch job that was responsible), and change the job into a performance class that was permanently swapped out. This stopped the job in its tracks before it could fail and gave me the time to move other things around on the volume to make space so that the extend could complete. Then I’d swap the job back in and no one was any the wiser.

 

If you are relatively new to DB2, you are probably wondering if I am making this stuff up. But I suspect that those of you who have been working with DB2 for years are gradually realising where some of our more esoteric DASD management rules have come from.

 

DASD doesn’t work like this anymore. From a z/OS (or DB2) perspective, things don’t look any different. We still give DASD volumes “meaningful” names, and we can still look at data sets with LISTCAT and ISPF 3.4. But what we are looking at is an illusion.

 

What is sitting in the machine room now doesn’t look like a collection of 3390 volumes. It’s now an intelligent array of inexpensive (relative term) disks that are pretending to be the DASD that we are used to seeing. These arrays use advanced management techniques (like RAID) to ensure that they can recover from most hardware failures.  (How many readers have experienced a head crash?) The arrays provide significant amounts of cache memory to aid in the speed and responsiveness of the devices. I used to aim for around a 20 millisecond response time from my DASD – now people should be aiming for a fraction of that. And these virtual devices can have significantly higher capacity than the 3390s of old – in excess of 50 times the capacity for some models!

 

As z/OS has evolved, the maximum number of extents a data set is allowed to have has also been creeping forever upwards. I used to be limited to 15 extents, but now you can have hundreds of extents (and data sets that can happily go multi-volume as well).

 

But – wait a minute! Didn’t we just say that the physical DASD doesn’t exist in the way that we are viewing it from z/OS? What does that mean to extents and sizes? And what about contention? Does all that caching relieve us from the need to spread our data sets around?

 

What we see from a z/OS view today is actually a figment of the operating system’s imagination. A DASD administrator will define “z/OS volumes” and how those volumes map to his storage array devices. Any one volume will, in all likelihood, be spread across a large number of those inexpensive disks. And, for redundancy, multiple z/OS volumes may share space on the same disks. So our first problem would be to determine exactly which z/OS volumes are sharing space in the disk array. It would make no sense to separate our table spaces and indexes if the VIRTUAL 3390s we put them on ultimately mapped to the same disks in the storage array. In reality, it would be impossible to do that; from the z/OS side, there is no control of where a specific data set goes, and it may move over time. Of course, because of the caching that happens, we probably don’t have a performance problem to worry about anyway.

 

Some of you are probably thinking, “So, do extents matter anymore?”

 

As far as performance and DASD management goes, they probably don’t. The extents you see in LISTCAT or ISPF 3.4 are virtual. They are a description of how z/OS thinks the data sets are laid out on disks that are only pretending to be 3390s. From a physical perspective, it is not possible to see from z/OS exactly HOW the data sets are laid out – and it hardly matters. Granted, if you have a data set in hundreds of extents, you may see a small performance degradation, but for normal numbers of extents it would be hardly noticeable. What does matter, though, is that the total number of extents for a data set is still limited – even though they are only virtual extents. In other words, it’s still possible for a data set to fail to extend because it has reached the limit of how many extents it can grow to, and this is something that you do need to keep an eye on.

Both z/OS and DB2 have been trying to make the management and sizing of data sets more of a science than an art. I remember using my SuperCalc (remember that?) spreadsheet to calculate DB2 table space sizes based on column sizes, nullability attributes, and the like (with the usual healthy guesswork on the expected number of rows). Even then, I found it irritating that I had to create data sets of a certain size when work I was doing on my PC just needed me to create a file and it would grow as required.

z/OS has also been making the allocation of new extents work more intelligently. Who has looked at a LISTCAT of a table space and discovered that the first extent is not the same size as the primary quantity defined, and that the secondary extents are all different sizes? When z/OS needs to allocate a new extent for a data set and that extent is EXACTLY adjacent to the prior extent, the current highest extent is just made larger. This works with both the primary and secondary allocations. (But z/OS is still playing with VIRTUAL extents that just look as if they are adjacent.) Because of this extent consolidation, it is possible for a data set to grow to a larger size than a simple calculation of (primary quantity + (n times secondary quantity)) would suggest. It also makes it possible to copy a data set from one volume to another and have the target data set run out of extents, even though the source data set is just fine.

 

So…just how many extents can a data set extend to these days? Well, in the best traditions of DB2 “it depends” – on how you create the table space/index space in DB2 and what SMS attributes are defined for the classes that the pageset is created in.
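To make that concrete, here is a minimal sketch of the DB2 side of the equation. Everything below is illustrative (the database, storage group and quantities are invented), and whether DB2 honours the fixed SECQTY or applies its sliding secondary allocation scale depends on the MGEXTSZ subsystem parameter:

     -- A sketch, assuming storage group SGEXAMP exists; names and sizes are illustrative
     CREATE TABLESPACE TSEXAMP IN DBEXAMP
       USING STOGROUP SGEXAMP
       PRIQTY 7200     -- primary allocation, in KB
       SECQTY 720      -- secondary allocation, in KB; may be overridden by the
                       -- sliding scale when MGEXTSZ is YES
       SEGSIZE 64;

On top of that, the SMS data class controls things like extent constraint relief and multi-volume behaviour, which is why the real answer varies from shop to shop.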

 

DASD has changed, and we need to change along with it. Don’t try to manage DASD the same way we did back in the old days. Instead, understand the implications of the way disk storage is created and managed now and revise your DB2 physical management rules accordingly.

 

I'd be interested in hearing from you if you have modified your DB2 pageset management rules based on the new world of virtual DASD


I'm pleased to announce another few feature additions to BMC DB2 tools delivered as SPEs in our maintenance stream

 

The following SPE supports new features in Load Plus for DB2:

PTFs BPU6680 and BPU6692:

Introduces native support for loading inline LOB columns; support for loading LOB and XML data from, and discarding data to, VBS data sets that are in spanned-record format; and continued performance improvements for LOB and XML processing generally, including additional task parallelism

 

The following SPE supports new features in Unload Plus for DB2:

PTF BPU6778:

Provides DIRECT YES support for unloading inline LOB columns, as well as for unloading LOB and XML data to VBS data sets that are in spanned-record format

 

The following SPE supports changes in Reorg Plus for DB2:

PTFs BPJ0839 and BPU7042:

Provides native support for altering partition limit keys when rebalancing partition-by-range objects in DB2 11 New Function Mode, where altering a limit key became a deferred alter
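For context, here is a hedged sketch of the kind of statement involved (the table name and limit key value are hypothetical). In DB2 11 NFM this alter is left pending until the affected partitions are reorganized, which is what Reorg Plus now handles natively while rebalancing:

  -- Illustrative only; PRODSCHEMA.SALES stands in for a partition-by-range table
  ALTER TABLE PRODSCHEMA.SALES
    ALTER PARTITION 3 ENDING AT ('2014-12-31');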

 

And one I may have missed from earlier in the year

 

The following SPE supports changes in APPTUNE for DB2, CATALOG MANAGER for DB2, MainView for DB2, SQL Explorer for DB2, SQL Performance for DB2 and Workbench for DB2:

PTFs BPU6612 and BPU6617:

Introduces an Explain option enabling display of information about whether SQL is eligible to run on an IBM DB2 Analytics Accelerator
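For comparison, base DB2 EXPLAIN records accelerator eligibility in the DSN_QUERYINFO_TABLE explain table. A sketch, assuming your explain tables have been created (the query and the table it references are illustrative):

  EXPLAIN ALL SET QUERYNO = 100 FOR
    SELECT ACCOUNT_ID, SUM(AMOUNT)
      FROM PRODSCHEMA.TRANSACTIONS     -- hypothetical table
     GROUP BY ACCOUNT_ID;

  SELECT QUERYNO, REASON_CODE, QI_DATA -- why the query is, or is not, eligible
    FROM DSN_QUERYINFO_TABLE
   WHERE QUERYNO = 100;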

 

If you require more information about any of these, please feel free to contact me off-list (details below)

 


Now that I am getting back into the rhythm of working after my vacation, I'd like to point you at a recorded webcast recently hosted by BMC, showcasing yet another new product from BMC - Subsystem Optimiser for z/Enterprise (aka SubZero)

 

Learn how you can break free of traditional technical restrictions on subsystem placement which require certain subsystems to be co-located on the same LPAR

 

Break the data access barrier across LPARs and redirect workloads for better efficiency, without having to modify applications

 

Also, look to lower business risk by redirecting workloads when a system fails

 

Break free of technical restraints on workload consolidation and allocation. Find out how you can optimize subsystem placement to reduce mainframe costs by 20% or more, automatically

 

Spend less on your mainframe so you can invest in innovation

 

Let BMC show you the way

 

This webcast features David Hilbe, Area Vice President, Research and Development, ZSolutions, and Tom Vogel, Lead Solutions Marketing Manager

 

Register to view this webcast at http://www.bmc.com/forms/MCO-R4-SubZero-SaveMillionsSep23-Webex.html?cid=em834404807611ew&Email_Source=BMCEvents

 

After viewing the webcast, I'm sure you'll have questions - feel free to contact me at the usual place, or take a look at the SubZero web page at http://www.bmc.com/it-solutions/subsystem-optimizer.html

 

Phil Grainger

 


Share in Pittsburgh

Posted by Phil Grainger, Jul 30, 2014

Getting ready to head to Pittsburgh for the second SHARE of the year

 

This time I'll be speaking - at 10am on Thursday 7th August. Come and hear how to "Save Real Dollars with SQL Tuning"

 

The full BMC speaking schedule looks like this:

 

16065: zNextGen Working and Planning Session

Sunday, August 3, 2014: 3:00 PM-4:00 PM

Room 302 (David L. Lawrence Convention Center)

Speakers: Linda Mooney <https://share.confex.com/share/123/webprogram/Person2965.html> (County of Sacramento), Vit Gottwald <https://share.confex.com/share/123/webprogram/Person4999.html> (CA Technologies), Reg Harbeck <https://share.confex.com/share/123/webprogram/Person1784.html> (Mainframe Analytics Ltd.), Warren T. Harper <https://share.confex.com/share/123/webprogram/Person5886.html> (BMC Software) and Troy Crutcher <https://share.confex.com/share/123/webprogram/Person5798.html> (IBM Corporation)

 

 

15446: zSeries Scalability and Availability Improvements

Monday, August 4, 2014: 10:00 AM-11:00 AM

Room 311 (David L. Lawrence Convention Center)

Speaker: Donald Zeunert <https://share.confex.com/share/123/webprogram/Person5988.html> (BMC Software)

 

 

16094: Automation for IMS: Why It's Needed, Who benefits, and What the Impact Is

Monday, August 4, 2014: 11:15 AM-12:15 PM

Room 402 (David L. Lawrence Convention Center)

Speaker: Duane Wente <https://share.confex.com/share/123/webprogram/Person5913.html> (BMC Software)

 

 

16064: zNextGen Project Opening and Keynote

Monday, August 4, 2014: 3:00 PM-4:00 PM

Room 319 (David L. Lawrence Convention Center)

Speakers: Linda Mooney <https://share.confex.com/share/123/webprogram/Person2965.html> (County of Sacramento), Vit Gottwald <https://share.confex.com/share/123/webprogram/Person4999.html> (CA Technologies), Reg Harbeck <https://share.confex.com/share/123/webprogram/Person1784.html> (Mainframe Analytics Ltd.) and Warren T. Harper <https://share.confex.com/share/123/webprogram/Person5886.html> (BMC Software)

 

 

15582: z/OS 2.1 Unix Systems Services Latest Status and New Features

Tuesday, August 5, 2014: 11:15 AM-12:15 PM

Room 406 (David L. Lawrence Convention Center)

Speakers: Patricia Nolan <https://share.confex.com/share/123/webprogram/Person6059.html> (BMC Software) and Janet Sun <https://share.confex.com/share/123/webprogram/Person4253.html> (Rocket Software, Inc.)

 

 

16061: Planning and Implementing Digital Services

Tuesday, August 5, 2014: 12:25 PM-1:15 PM

Room 316 (David L. Lawrence Convention Center)

Speaker: Dave Roberts <https://share.confex.com/share/123/webprogram/Person6042.html> (BMC)

 

 

16253: MF Economics: What the Past 50 Years Can Tell Us About the Future - A Lively Discussion

Wednesday, August 6, 2014: 7:20 AM-8:20 AM

Room 304 (David L. Lawrence Convention Center)

Speaker: Jonathan Adams <https://share.confex.com/share/123/webprogram/Person5985.html> (BMC)

 

 

16066: zNextGen Project Networking Dinner Event

Wednesday, August 6, 2014: 7:00 PM-8:00 PM

Meet in the Westin Lobby (Westin Pittsburgh Hotel)

Speakers: Vit Gottwald <https://share.confex.com/share/123/webprogram/Person4999.html> (CA Technologies), Linda Mooney <https://share.confex.com/share/123/webprogram/Person2965.html> (County of Sacramento), Reg Harbeck <https://share.confex.com/share/123/webprogram/Person1784.html> (Mainframe Analytics Ltd.), Warren T. Harper <https://share.confex.com/share/123/webprogram/Person5886.html> (BMC Software) and Troy Crutcher <https://share.confex.com/share/123/webprogram/Person5798.html> (IBM Corporation)

 

 

15492: How to Save REAL Dollars with SQL Tuning

Thursday, August 7, 2014: 10:00 AM-11:00 AM

Room 402 (David L. Lawrence Convention Center)

Speaker: Phil Grainger <https://share.confex.com/share/123/webprogram/Person5993.html> (BMC Software)

 

 

 

Phil Grainger

Lead Product Manager

BMC Software


BMC has just released a new white paper entitled "Five Levers to Lower MLC"

 

You can get a copy from http://bit.ly/MCO5Levers

 

Don't forget that if you are paying IBM MLC charges for your z/OS software, ANYTHING you can do to lower the CPU consumption during the monthly rolling four-hour average peak will have a positive effect on those charges - and for EVERYTHING running on that LPAR

 

So, tune some SQL and not only save money on DB2 license charges, but also save on z/OS and CICS and/or IMS and/or MQ charges if they are also running on the same LPAR

 

Phil Grainger

Lead Product Manager

BMC Software

 


Just making my travel plans to attend the next z/OS DB2 and IMS briefing in Houston on August 12th

 

This is our opportunity to share with customers what's been going on recently with our database tools, and also to hear from users how those tools are being used

 

This is something that we do on a regular basis in 1:1 meetings, but the chance to get a group of customers together often brings new ideas into the light

 

The briefing usually starts with a pleasant dinner on the prior evening - a chance for the briefing attendees to get to know each other, as well as an opportunity for the BMC folks to meet the attendees in a more informal setting

 

The briefing itself starts bright and early and moves from high-level overviews of our strategy into specifics on the different database management product areas

 

The day ends with transport back to the airport for your flights home

 

If you'd like more information on the August briefing, then please don't hesitate to contact me or your local BMC account team

 

Phil Grainger

Lead Product Manager

BMC Software

 


It occurred to me the other day that I have become so comfortable speaking at DB2 conferences, I forget that there are many people out there who feel they could never do what I do

 

Well, do you know what, that used to be me too!

 

Yes, when I first started attending DB2 GSE meetings in the UK, I was too embarrassed to even raise my hand in the crowd when they asked who were new members. Mind you, that ages me as well - who else can remember the crowds we used to get at local DB2 events? The first UK GSE meeting I attended had around 120 attendees!

 

Anyway, I digress

 

It occurred to me after a few meetings that if I wanted to make DB2 my career (which I did) then there was a good chance that my future boss might also be attending these events. Wouldn't it be great if he/she knew who I was before I even applied for a new job? What better way to do that than to actually speak

 

Now, my wife used to take public speaking classes, and even now I couldn't do what she did - speaking about a random topic selected by the teacher. We are extremely lucky - we are talking about something we not only love, but also (usually) understand inside-out. This is MUCH easier, believe me

 

So, I started small and presented short sessions (they were usually 30 minutes, presented VERY fast) on aspects of DB2 that we were exploiting. Keep within my comfort zone, I thought

 

Don't underestimate the nerves - I was sure the audience could see how nervous I was

 

But then something odd happened

 

People used to come up at the end of the sessions and not just ask questions, but also to comment on "how useful" they found the talk and "how comfortable" I seemed

 

Then I realised something important

 

What the audience sees and hears is important - the way the speaker feels hardly ever translates into anything that the audience picks up on!

 

From there it was a small step to speaking in front of larger audiences

 

BMC came along with an offer (I was a big customer of theirs in those days) and invited me to speak at a UK event (maybe 30-50 attendees) followed by a European event in Oslo (200 or so attendees). That seemed to be a good progression, so I accepted their invitation. Only for them to switch the two dates around so the big one came first

 

YIKES!

 

You know something? I survived the experience

 

It was a small step from there to IDUG, and the rest (as they say) was history

 

So, to close this blog post, I have a few small pieces of advice

 

 

1. Don't underestimate your own ability

 

2. Don't think that you are doing nothing special and nobody is interested - many of us really are doing special things with DB2

 

3. Start with your local user groups, and get comfortable with a small audience

 

4. Don't hesitate to ask for help. Existing speakers will be only too happy to help you along

 

Oh, and I'll finish by saying - DON'T get stressed by who might be in your audience. I'll never forget the first time I noticed Roger Miller at the back of one of MY audiences. THAT was a shock. But do you know something? Even the stars of DB2 at IBM are interested in hearing what users are actually doing with DB2

 

Phil Grainger

Lead Product Manager

BMC Software

 


In a previous article, we introduced the repositories that the BMC products use. In that article we spoke about the most important and most used of those repositories: the BMC Common utility tables.

 

In this new article on the same topic we will briefly describe the BMC Common DB2 repository.

 

The BMC Common DB2 repository was added in December 2009 to the 9.2 version of the Backup & Recovery set of products, in order to support Dynamic Grouping.

 

In this version, a new way of creating groups of objects (called Object Sets) was introduced. An object set is a set of table spaces and indexes (usually belonging to the same application) that are grouped together in order to perform common backup and recovery procedures. Grouping simplifies the backup and recovery process because you can use the generated JCL to back up or recover the entire group at once. Grouping also simplifies the specification of backup and recovery options, because you can specify the options just once for the entire group.

 

These object sets can be built by using the Recovery Manager for DB2 online interface or by using the Recovery Manager for DB2 batch utilities. Using the batch program is faster for creating groups that contain more than a few hundred objects.

 

These object sets can use several patterns to identify objects to INCLUDE or EXCLUDE from the group:

 

• TS (table space name pattern)
• IX (index name pattern)
• TB (table name pattern)
• IS (index space name pattern)
• PL (plan name pattern)
• PG (package name pattern)
• SG (stogroup name pattern)
• OS (object set name pattern)
• SQ (dynamic SQL pattern)
• RP (repository plan)

 

 

Probably the most powerful way of creating groups is by SQL pattern. With this option, you can include complex logic to select the objects to process; that logic could include, for example, a JOIN with user-defined control tables to identify objects that should never be processed.
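A minimal sketch of the idea (everything here is hypothetical, especially the user-maintained exclusion table USERID.NEVER_PROCESS): the SQL returns the table spaces to include, skipping anything registered in the control table:

  SELECT TS.DBNAME, TS.NAME
    FROM SYSIBM.SYSTABLESPACE TS
   WHERE TS.DBNAME = 'PRODDB'                        -- hypothetical application database
     AND NOT EXISTS (SELECT 1
                       FROM USERID.NEVER_PROCESS NP  -- hypothetical control table
                      WHERE NP.DBNAME = TS.DBNAME
                        AND NP.TSNAME = TS.NAME);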

 

In order to support the new object sets, new OBJECTSET syntax was added to Recover Plus for DB2, Copy Plus for DB2 and Log Master for DB2. This syntax allowed these products to read the new BMC Common DB2 repository and extract the list of objects to process. Of course, that made existing syntax such as RMGROUP obsolete.

 

The new BMC Common DB2 repository included the following new tables:

 

• object set (OBJSETS)
• object set definition (OBJSET_DEF)
• object set SQL (OBJSET_SQL)
• group options (GRPOPTS)
• product registration (PRODREG)
• group authorizations (GROUPAUTH)

 

The repository is documented in detail in an appendix of each supporting product's User Guide.

 

These tables replaced the existing RECOVERY MANAGER tables used for group definition (in that first version, a utility was provided to migrate the content of the previous tables to the new ones).

 

Since then, several additional features have been added to the BMC Common DB2 repository. For example, in version 10.1, RECOVERY MANAGER added OBJECTSET support for ARMBGEN, which allows creating recovery groups using the OBJECTSET syntax. Also, a new batch utility, ARMBSET, was created to allow common DB2 commands and utilities, like STOP, to be executed at the OBJECTSET level.

 

But the main change occurred last June, when in version 11.1 a shared repository with DASD MANAGER PLUS was announced. From version 11.1, DASD MANAGER PLUS object sets are also stored in the BMC Common DB2 repository, enabling object sets to be shared across products. The BMC Common DB2 repository replaces the previous three tables used in the BMCASUnn database for object set definitions.

 

The repository is also documented in Chapter 2 of the DASD MANAGER PLUS for DB2 Reference, “DASD MANAGER PLUS database”. No data conversion is necessary when you upgrade from previous versions of DASD MANAGER PLUS to version 11.1.00 or later. The Installation System migrates the data for you.

 

This support also adds additional syntax and allows Actions defined in DASD MANAGER PLUS to be executed at the object set level.

 

I hope this article has been of interest. In future articles we will develop this topic further and include other BMC repositories.

 

Thanks.


One of my favorite BMC Catalog Manager for DB2 commands is APPLY. The idea behind this command is to apply a SQL statement model or template to a list of objects. With this command, you can generate and execute ad-hoc SQL statements without the need to edit or generate statements one by one.

 

This command will substitute the host variables in the SQL template for the corresponding columns in the list entries and create one statement for each object in the list. The host variables must be the DB2 catalog column names.

 

Let's do a step by step example:

 

Let's suppose I want to add a RESTRICT ON DROP attribute to all my tables.

 

First, I generate a list with all my tables, from the LIST panel or just from any panel with TB ISPRXM.%. Now, I could generate an ALTER statement for each of them, change this attribute and execute, but that could be really time consuming, depending on the number of tables.

 

I will do it in just one execution, using APPLY. From any row in the list, I use 's' to see the full catalog row information. There, I can see that the columns I would need for the statement template are named CREATOR and NAME in the catalog (SYSTABLES).

 

Then, I need to create the SQL template. I could do it by heart or by checking the right syntax in the manual, but it is much easier to let BMC Catalog Manager for DB2 do it for you. From any row in the list, I use ALTER to generate an ALTER statement; in the "Create/Alter Table" panel, I change the WITH RESTRICT ON DROP attribute and generate the SQL:

 

     ALTER TABLE                      
       ISPRXM.MYTABLE       
          ADD RESTRICT ON DROP        
     ;  

 

Then, I edit the SQL and replace the column values with host variables named after the catalog columns:

 

     ALTER TABLE                      
       :CREATOR.:NAME       
          ADD RESTRICT ON DROP        
     ;                                                              

and save it in the SQL Table as MYAPPLYSQL.

 

Then, I come back to my list and just use the command APPLY MYAPPLYSQL ALL. If I want to exclude any particular object in the list, I can use the X designator in the Cmd column. BMC Catalog Manager for DB2 will process the command against all objects except those excluded with the X designator.

 

In the "Confirm APPLY SQL MODEL" panel, I can review, edit and change the template, and select Apply to list objects 'Y'. In the "Confirm SQL" panel, I can see it has generated an ALTER statement for each table in the list, substituting the correct creator and name from the list. There, I can edit, save and execute the list of commands.

 

Of course, this could be done with SQL, using a combination of literals and column names from SYSIBM.SYSTABLES, but this method is much easier.
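For comparison, here is a sketch of that pure-SQL alternative (the CREATOR filter matches the list above; STRIP assumes DB2 9 or later). You would run the query, then feed the generated statements through SPUFI or similar:

  SELECT 'ALTER TABLE ' CONCAT STRIP(CREATOR) CONCAT '.' CONCAT STRIP(NAME)
         CONCAT ' ADD RESTRICT ON DROP ;'
    FROM SYSIBM.SYSTABLES
   WHERE CREATOR = 'ISPRXM'   -- illustrative filter, as in the list above
     AND TYPE = 'T';          -- base tables only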

 

Another not so well known BMC Catalog Manager for DB2 command related to this is IMPORT (not to be confused with the IMPORT function in BMC Change Manager for DB2). With this command, you can import SQL statements from a PDS data set (for example, the PDS you use as input for SPUFI) to the BMC Catalog Manager for DB2 SQL table. From the SQL table, it is very easy to browse, edit or execute SQL statements or, of course, use them as SQL templates for the APPLY command.

 

I hope this entry is useful.

Enjoy


Pool Advisor for DB2 was the first of many BMC Software advisory products that combine the intelligence to advise with the ability to act. Available since October 2000, Pool Advisor for DB2 provides a variety of DB2 pool tuning options that address the health of all DB2 pools (buffer, EDM, RID and sort pools) within the context of the total virtual storage environment and user-supplied controls. It can also manage the efficiency of the global dynamic statement PREPARE caching, potentially providing large savings in heavy dynamic SQL environments. With the addition of the Configuration Advisor component, this product will also predict the effectiveness of its proposed buffer pool layout.

 

With the “z” hardware, z/OS, and DB2 releases since DB2 V8, there has been a steady march towards support and exploitation of ever larger amounts of virtual and real memory. Because of this, there is a commonly expressed opinion that memory-related tuning is no longer necessary; that there is so much memory now that one doesn’t really need to worry about tuning its use. Like most old wives’ tales, there is a kernel of truth there that tempts many to believe it. But also like the tales of old, things are never that simple, and believing that it is that simple will get you into trouble more often than not. One thing is certainly true in most cases: data growth is even faster than the hardware and software growth in storage capacity. Gigabytes of memory may be common, but terabytes of data are even more common. And all the evidence indicates this relationship will not be changing. If there is any rule of thumb that is more likely to keep you out of trouble it is this: there is never enough memory.

 

Pool Advisor for DB2 works by capturing and performing periodic analysis on all the standard performance metrics, including GETPAGE activity (without tracing!), hit ratios and I/O rates. These statistics are retained internally for a 48-hour rolling interval, providing quick and effective short-term performance tuning of workloads. With the unpredictability in today’s applications due to ERPs, e-business and global demands, it is critical to be able to adjust storage resources dynamically. Since Pool Advisor constantly monitors and analyzes DB2 resources, performance problems can be corrected before they become critical. Pool sizes can be increased and decreased and thresholds adjusted for best overall storage performance (taking MVS paging rates and pool access page faulting into consideration). Online assistance provides detailed and summary explanations of the analysis, benefiting novice and advanced users. Pool Advisor takes the difficult work out of understanding extensive performance statistics.

 

Besides virtual storage management, the primary tuning methodology that Pool Advisor uses is based on standard industry techniques advocated by IBM and tuning professionals for years. This methodology involves

 

  • classifying the access characteristics of all your database objects
  • grouping them into pools so that objects with similar characteristics are together (and separated from objects with very different access characteristics),  and
  • configuring those pools to optimize their performance for the kinds of access techniques their assigned objects use the most.
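As a hedged illustration of where that methodology ends up (every object name, pool number and size below is made up for the example, not product output), objects with similar access patterns are assigned to a common pool, and each pool is then configured for that pattern:

  -- Group objects with similar access characteristics into the same pool
  ALTER INDEX PRODSCHEMA.CUST_IX1 BUFFERPOOL BP3 CLOSE NO;  -- mostly random getpages
  ALTER TABLESPACE PRODDB.HISTTS BUFFERPOOL BP4;            -- mostly sequential scans
  -- Then tune each pool for its dominant access technique, for example with
  -- the DB2 commands:
  --   -ALTER BUFFERPOOL(BP3) VPSIZE(40000) VPSEQT(20)   (favour random pages)
  --   -ALTER BUFFERPOOL(BP4) VPSIZE(20000) VPSEQT(90)   (favour prefetch)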

 

Traditionally, a dedicated tuning professional or consultant has been required to initiate performance traces, perform complex data analysis and implement configuration changes in order to tune DB2 subsystems. Today, there is usually only an overworked administrator or DBA attempting to manage many DB2 subsystems with many applications and literally many thousands of objects. As a result, tuning is usually performed at only the most basic level. If detailed analysis is undertaken, even to the point of simulating the effect of different configurations, the effort is substantial and the result is far from foolproof. Changing workloads and global MVS effects, such as paging, cannot be predicted, and pools are periodically over- or under-utilized anyway. Not to mention all the other DB2 subsystems for which there was no time or budget for the same level of analysis. Clearly, automation is desperately needed. One administrator can manage many DB2 subsystems, and do it well, if there is a tool to automatically collect and analyze data, suggest changes, perform those changes automatically when appropriate, and call for help when a problem arises (before users begin to notice application response problems and pick up the telephone).

 

Since Pool Advisor for DB2 takes a pro-active approach to tuning, it provides an optional alternative when needed. You have complete control and choice over the actions taken and over the ranges within which the various pools are allowed to vary. You can use Pool Advisor initially as a super DB2 storage usage monitor. As you build confidence in the actions that it recommends, you can choose to enable specific automations. This allows you to get to the point where those overnight performance problems can be handled without you!

 

Pool Advisor for DB2 provides the ability to analyze object access patterns and calculate optimal buffer pool configurations for improved application response times and overall throughput. You have the option to control the complexity of the buffer pool configuration, including specification of the maximum number of pools and the identification of critical objects. There is also the capability to perform “what-if” scenarios, resulting in predictions of the effectiveness of the resulting configuration. Unlike other tools in the marketplace, our configuration analysis will not only recommend better configurations, it is capable of creating DDL for scheduled execution without leaving the product. It provides long-term history correlated with change activity, providing valuable application trending and resource usage analysis.

 

There are two significant advantages with our approach. First, the collection agent can be left on, giving a longer and more accurate picture of system activity. It is important to realize that traditional tuning based on short data collection "snapshots" results in configuration decisions that address only the conditions existing at the time of the snapshot.  If your tuning is based on longer collection samples, the peaks and valleys of resource use will be averaged, and that can result in performance problems during times of peak demand.  Real-time analysis is the only way to detect and handle performance issues before they become serious performance problems. Second, Pool Advisor lets you easily access analysis and recommendations, and take corrective action directly, without having to arduously review many separate reports.  Manual effort is dramatically reduced and you don't have to spend all your time becoming an expert on esoteric DB2 internals! With our integrated solution, adjustments to DB2 can be made immediately, without ever leaving Pool Advisor for DB2.

 

If you have BMC Software’s APPTUNE for DB2 or MainView for DB2, data collection resources are shared with minimal additional overhead. Data collection has been made zIIP eligible, offloading 50% to 80% of Data Collector overhead to zIIP engines and reducing its cost. The data collector started task requires only one address space per LPAR, no matter how many subsystems you have. Pool Advisor can be started and stopped at will, but it is important to monitor DB2 at all times so that it can detect problems, issue alerts, and/or take corrective actions. Pool Advisor has a sophisticated method for collecting data that minimizes overhead to DB2 applications dramatically compared to techniques depending on DB2 performance traces. In addition, Pool Advisor has integrated anti-thrashing logic and a periodic analysis/change cycle specifically designed to react to true adverse performance trends without over-reacting to transient workload spikes.

 

 

In summary, Pool Advisor for DB2 provides unique data and functionality that cannot be found in any monitoring tool, and analysis and recommendations that cannot be found without significant effort in any other tuning tool. Pool Advisor uses “turnkey” technology for DB2 and has the ability to work with other BMC products to detect problems and make corrections before they cause a crisis.


Every time you install a BMC product for DB2, it creates a set of DB2 tables, usually referred to as a “repository”. Most of those tables are for internal use, but sometimes the information contained in those tables can be quite useful.

 

Also, even though those tables are not designed to be maintained from outside the product, sometimes knowing their structure can be very useful for batch generation, exporting definitions, cloning, etc.

 

This article describes in detail some of the repositories the DB2 products create, the information contained there, their usage, the maintenance needed, recommendations, tips, techniques, and several customer examples of exploiting the information contained there (for example, queries against the tables).

 

These repositories are created during installation, and I strongly recommend not changing the default names. If you are changing the version of any product, the installation will take care of creating a new repository and migrating previous data if needed, or of reusing the existing tables. If new tables are created, $CxxMIG jobs will be created during installation; if the tables are reused but the structure has changed, $CxxALTR jobs with the needed ALTER statements will be generated. For some products, it is optional to reuse the repository or create a new one, but that could change in future versions.

 

The repositories are documented in detail in the corresponding manuals, either in the Reference Guides or in the Customization Guide.

 

Some of those repositories are used by several products, for example the Common utility tables or the BMC Common DB2 repository.

 

In this article we will briefly describe the Common utility tables: these tables are used mainly by the BMC DB2 utilities.

 

The most important tables are BMCUTIL and BMCSYNC. These are functionally equivalent to the SYSUTIL table in DB2: BMCUTIL contains one row per running utility, and BMCSYNC one row per utility and object. These tables control execution and restarting of BMC utilities (including BMCSTATS, which is considered another utility, like RUNSTATS), and they also control concurrency between BMC utilities.
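A quick way to see what is currently registered is a simple query (a sketch; I am assuming the tables follow the same BMCUTIL qualifier and CMN_ prefix as the BMCHIST queries below, so check the names in your installation):

  SELECT * FROM BMCUTIL.CMN_BMCUTIL;   -- one row per running BMC utility (assumed name)
  SELECT * FROM BMCUTIL.CMN_BMCSYNC;   -- one row per utility and object (assumed name)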

 

There are functions equivalent to the corresponding IBM TERM and DISPLAY commands, which can be issued, for example, from BMC Catalog Manager for DB2 or from a REXX sample program provided by BMC.

 

Another important table is BMCHIST. This is a history of BMC utility executions. Some utilities write to this table unconditionally, while others control whether to write with a parameter (HISTORY=Y/N in COPY PLUS, Unload Plus and RECOVER PLUS; BMCHIST=Y/N in Reorg Plus).

 

This table can be browsed with BMC Catalog Manager for DB2 or with SQL queries.

 

Several interesting queries against this table are possible. For example:

 

Executions by day and utility in the last month:

  SELECT CHAR(DATE,EUR), UTILNAME,
         COUNT(*) AS EJECUCIONES
    FROM BMCUTIL.CMN_BMCHIST
   WHERE DATE > CURRENT DATE - 1 MONTH
   GROUP BY DATE, UTILNAME;

 

Executions and total elapsed time by utility in the last month:

  SELECT UTILNAME, COUNT(*) AS EJECUCIONES,
         SUM(HOUR(ELAPSED)*3600 + MINUTE(ELAPSED)*60 +
             SECOND(ELAPSED)) AS TOTAL
    FROM BMCUTIL.CMN_BMCHIST
   WHERE DATE > CURRENT DATE - 1 MONTH
   GROUP BY UTILNAME;

 

Executions with an elapsed time greater than 30 minutes:

  SELECT DBNAME, SPNAME, UTILNAME,
         CHAR(ELAPSED,ISO) AS DURACION, CHAR(DATE,ISO) AS FECHA
    FROM BMCUTIL.CMN_BMCHIST
   WHERE HOUR(ELAPSED)*60 + MINUTE(ELAPSED) > 30
   ORDER BY ELAPSED DESC;

 

Executions with the highest elapsed time, by utility:

  SELECT DBNAME, SPNAME, UTILNAME,
         CHAR(ELAPSED,ISO) AS DURACION, CHAR(DATE,ISO) AS FECHA
    FROM BMCUTIL.CMN_BMCHIST
   WHERE UTILNAME CONCAT CHAR(ELAPSED,ISO) IN
         (SELECT UTILNAME CONCAT CHAR(MAX(ELAPSED),ISO)
            FROM BMCUTIL.CMN_BMCHIST
           GROUP BY UTILNAME);

 

Other tables in this Database are:

 

BMCLGRNX is a shadow of SYSLGRNX.

 

BMCXCOPY is the equivalent of SYSCOPY for cases when a BMC event or copy cannot be registered in SYSCOPY: for example, nonstandard copies such as Instant Snapshot, Cabinet, encrypted or compressed copies; indexes with COPY NO that were copied or reorganized by BMC; partial RECOVER/REBUILD of indexes with BMC; and so on.

 

This table is used by BMC products to identify and exploit nonstandard copies; for example, UNLOAD PLUS uses this table for unloading from an Instant Copy.

 

The MODIFY command from COPY PLUS will delete from this table as the IBM MODIFY utility does from SYSCOPY.

 

BMCTRANS: Used by RECOVERY MANAGER and Log Master for Transaction Recovery

 

BMCDICT: Temporary table used for storing compression dictionaries for LOADPLUS and REORG PLUS.

 

Any other tables that may still reside in this database were created to support functions that the evolution of the products has made obsolete.

 

In future articles we will develop this topic, including other BMC repositories.

 

Many Thanks.


Much has been written in the last little while about the aging of the mainframe workforce and the shortage of skills. Pundits and industry analysts are calling for a drastic overhaul and modernization of the environment. At BMC, we don’t believe the situation is that dire, although we do recognize that the landscape has changed somewhat, hence the subtitle for this short article. While it’s true that the mainframe workforce is declining and that perhaps the newer entrants lack the knowledge and experience, it’s really the complexity of today’s environment that is creating the biggest challenge in being able to effectively manage the mainframe and all its moving parts. I’ve been working with DB2 on z/OS for a long time now, so writing this article was somewhat nostalgic for me. I’m not sure if that isn’t a sign of old age setting in though.

 

 

So, back in the day, writing DB2 applications was relatively easy. You had some vague idea of what the users wanted or needed, you had your trusty SQL reference and developer’s guide, and you had access to a TSO terminal. You wrote the SQL and tested it with the rest of the code, most likely COBOL; depending on change control and testing procedures, users may or may not have been involved with testing to any significant degree. Ultimately the code found its way to the production environment and the user community had their results. Life was good. Pretty simple stuff, some would say.

 

We also had some rules that were developed to keep things simple and manageable. These were all based on what we knew back then, but probably more importantly, what was available at that time.

 

Keep SQL simple. The optimizer was nowhere near as sophisticated as it is now, so the thinking at the time was to keep SQL simple and handle a lot of the business logic and rules in the application code, not in the database or the SQL request.

 

Minimum number of rows. CPU was very expensive and not as fast. The same went for the number of columns.

 

Join predicates. Common sense, but a Cartesian product could take days to generate back then, so the result of that mistake was a lot less forgiving.

 

Table space scans were to be avoided at all costs, but could actually be a good thing as we learned a little later.

 

Sadly, the “good old days” are long since gone, as the technology has gotten a lot more sophisticated and the choices more varied. The flip side is that applications can now be developed much more quickly and can handle complex business logic, allowing the business to react to changes in less time and go to market sooner with new services and offerings.

 

The other good news is that there is help out there to assist you with navigating through this maze, and we at BMC believe we have, or are developing, all the tools necessary to restore that sense of simplicity we had way back then.

  

For example, it gives us great pleasure to announce that the next release of BMC’s CHANGE MANAGER for DB2 will have full support for external and native stored procedures. ALL Change Manager functionality will be available for stored procedures: anything you can do to any object type, you can now do to stored procedures, and packages are rebound when necessary.

For example, if you alter a table which has a stored procedure dependent on it, Change Manager will rebind the stored procedure package. If you modify a stored procedure, Change Manager will rebind the packages which are dependent on the procedure.

 

 

And we have plenty more new tricks in store, such as performance advisors for DB2, dynamic threshold management, and a quick and very efficient copy migration feature for managing test data environments.

 

 

But perhaps our biggest news is the “old tricks” for the new pups in the mainframe space, and that is our Workbench tool for DB2 on z/OS. These old tricks for new dogs will be released at the same time as the newest version of our DB2 solutions, but will be delivered with the specific purpose of addressing the mainframe workforce dilemma: how to make people who are experienced with the mainframe even more productive, and how to make people who have no clue what TSO and ISPF are immediately productive in that environment.

 

This new browser-based interface will make it easier for everyone to communicate, share information and accomplish complex tasks in less time. The “workspace” is a new concept in the workbench and is the key to this sharing and reuse of information. It’s what allows you to start an activity and, since we all know how many times we get interrupted during our hectic day, to suspend that task, put out the fire at hand, and then return to what you had started without missing a beat.

 

The product is intuitive to use and, for the novice user, guides you along the way. You can start with a blank slate (or workspace, to use the new terminology) or open an existing one that you may have started and now need to get back to after being interrupted. The workbench will allow you to query the DB2 catalog, take actions, analyse and execute SQL, submit jobs, view the job output, edit z/OS data sets, and more. In short, just about everything you can do in a TSO session, but now with the power of a GUI.

 

Words alone cannot do justice to the workbench, so I highly encourage you to get in touch with your BMC DB2 Software consultant or account manager to take a test drive of the workbench and discuss licensing options. You’ll probably never go back to TSO!!


As an experienced DBA and long time user of BMC’s DB2 products, you’re probably well versed in the many features and functions of BMC’s utility, COPY PLUS for DB2.  It is the linchpin of your backup strategy and continues to meet your SLA requirements while minimizing CPU and elapsed times.

 

But did you know that COPY PLUS also contains the BMC MODIFY utility?

 

If you didn’t, you are not alone and that’s the purpose of this post: to tell you about BMC MODIFY and why you want to use it instead of the IBM MODIFY utility.

 

BACKGROUND

Generally, DBAs use a MODIFY utility with the RECOVERY option to remove outdated information from SYSCOPY and SYSLGRNX.  These tables can become very large and take up a considerable amount of space. By deleting outdated information from these tables, you can help improve performance for processes that access data from these tables. 

 

In practice, you should regularly run a MODIFY RECOVERY to clean up ‘old’ image copy information. That ‘old’ point will vary based on your image copy frequency and recovery requirements. For example, if you make daily Full Image Copies (FICs) of your data, there comes a point at which the SYSCOPY entries about those FICs become ‘old’ and of no use in your recovery strategy.

 

Both BMC MODIFY RECOVERY and IBM MODIFY RECOVERY remove this ‘old’ information by allowing you to delete:

  • all records older than a specific date,

MODIFY RECOVERY TABLESPACE DSN8D10A.DSN8S10E DELETE DATE(20020910)

  • all records older than a specific age (days),

MODIFY RECOVERY TABLESPACE DSN8D10A.DSN8S10E DELETE AGE(90)

 

BMC MODIFY

As mentioned earlier, BMC MODIFY is a built-in ‘feature’ of COPY PLUS.  In addition to running faster than IBM MODIFY, the BMC MODIFY utility provides the following functions not available in IBM MODIFY:

 

  • More flexibility in specifying what ‘old’ information to DELETE:
    • DELETE based on the maximum number of copies you want to retain (full or any type copy).

  MODIFY RECOVERY TABLESPACE DSN8D10A.DSN8S10E DELETE MAXFULLCOPIES(3)

  MODIFY RECOVERY TABLESPACE DSN8D10A.DSN8S10E DELETE MAXCOPIES(3)

 

    • DELETE based on when the image copy dataset is not found in the ICF catalog

  MODIFY RECOVERY TABLESPACE DSN8D10A.DSN8S10E DELETE DSNOTFOUND

 

    • DELETE based on specific values of the SYSCOPY columns

  MODIFY RECOVERY TABLESPACE DSN8D10A.DSN8S10E
  DELETE WHERE ICDATE=130321 AND ICTYPE=F

 

NOTE: After any DELETE operation, COPY-pending status is set if:

    • the space is not recoverable to the current state
    • no copy entries remain in SYSCOPY and at least one row was deleted
    • no copy entries exist after an unrecoverable event, such as a REORG LOG NO or a LOAD LOG NO

 

However, BMC MODIFY allows you to specify NOCOPYPEND to indicate that you do not want to have COPY-pending status set after the deletions are processed.  This is especially useful when table spaces are loaded, but never copied.

MODIFY RECOVERY TABLESPACE DSN8D10A.DSN8S10E DELETE AGE(90) NOCOPYPEND

 

  • Ability to verify recoverability of the object based on 3 criteria – minimum copies, maximum logs since last copy or maximum days since last copy.  If any criteria are true, you have the option to be warned (ON NOTRECOVERABLE WARN) or to trigger an image copy (ON NOTRECOVERABLE COPY).

 

    • minimum copies (full or any type)

  MODIFY RECOVERY TABLESPACE DSN8D10A.DSN8S10E VERIFY MINIMUM FULLCOPIES 5
  ON NOTRECOVERABLE WARN

  MODIFY RECOVERY TABLESPACE DSN8D10A.DSN8S10E VERIFY MINIMUM COPIES 5
  ON NOTRECOVERABLE WARN

 

    • maximum DB2 LOG datasets since last image copy

  MODIFY RECOVERY TABLESPACE DSN8D10A.DSN8S10E VERIFY MAXIMUM LOGS 7
  ON NOTRECOVERABLE WARN

 

    • maximum days since last image copy

  MODIFY RECOVERY TABLESPACE DSN8D10A.DSN8S10E VERIFY MAXIMUM DAYS 7
  ON NOTRECOVERABLE COPY USING TEMPLATE COPYDS

  OUTPUT LOCALP UNIT SYSDA
         DSNAME SLSJLK.LP.&DB.&TS.D&DATE.T&TIME

  TEMPLATE COPYDS
           COPY TABLESPACE *.*
           COPYDDN(LOCALP)
           SHRLEVEL REFERENCE
           RESETMOD NO

 

  • Ability to use SQL-like syntax to INSERT SYSCOPY rows

  MODIFY RECOVERY TABLESPACE DSN8D10A.DSN8S10E
  INSERT
  START_RBA = X'0003632123F6'
  ICTYPE = F
  SHRLEVEL = R
  DSNAME = BMCCOPY.DSN8D10A.DSN8S10E.D970110.T180000
  DEVTYPE = VCRT
  ICUNIT = T
  ICDATE = 050331
  ICTIME = 180000

 

  • Ability to use SQL-like syntax to UPDATE SYSCOPY rows

  MODIFY RECOVERY TABLESPACE DSN8D10A.DSN8S10E
  UPDATE
  SET DEVTYPE = RCART
  WHERE ICBACKUP = RP
  AND DEVTYPE = 3490

 

Hopefully this post was a helpful reminder to COPY PLUS users that you also have access to the BMC MODIFY utility.  If you have additional questions on using COPY PLUS or BMC MODIFY, please contact your local BMC DB2 Software consultant or BMC Support (800-537-1813).


One of the best kept secrets of BMC CHANGE MANAGER for DB2 is the IMPORT function.

 

The idea behind IMPORT is that sometimes it is easier to specify changes with one of these languages than by using the product panels. This function also helps you create automated procedures to implement changes, for example coming from a design tool in DDL format or from a COMPARE process in CDL format. You can IMPORT into CHANGE MANAGER for DB2 text files containing change specifications written in DDL, CDL or DML.

 

DDL (Data Definition Language) is standard DB2 DDL, which can be created manually, extracted from any design tool or any product generating DDL (for example, the HDDL function of BMC Catalog Manager for DB2), or taken from a BASELINE report created with BMC CHANGE MANAGER for DB2.

 

CDL (Change Definition Language) is a proprietary language for change specifications used by BMC CHANGE MANAGER for DB2. This language is the output from the COMPARE process, but you can also manually create text files containing CDL, as in the following example:

 

-CDL  nnnn ALTER DATABASE old_name NAME new_name.

 

DML (Data Manipulation Language) is another proprietary language from BMC, used as a change definition language within the CM/PILOT component of BMC CHANGE MANAGER for DB2. You can also use two CM/PILOT scripts to help you create SQL-like Data Manipulation Language (DML) statements to update, delete, and migrate data structures. This language can be generated from the CM/PILOT panels, or manually, and imported into a WORKID. An example could be:

 

       UPDATE DATABASES
       SET NAME = 'new_name'
       WHERE NAME = 'old_name'

 

The content of these files is validated by BMC CHANGE MANAGER for DB2 and converted into an ALTER WORKID. After the statements are imported, BMC CHANGE MANAGER for DB2 creates entries in the CD tables as if they had been requested in the Specification component. This WORKID can be browsed, modified, analyzed and executed.

 

In the “ALTER and CHANGE MANAGER for DB2® User Guide” there is a full section dedicated to “Importing files”, in the "Altering data structures in a database environment” chapter.

 

In the “ALTER and CHANGE MANAGER for DB2® Reference Manual” there are several sections with examples, either of CDL or DML.

In the HLQ.DBCNTL data set there are several examples (ACMDMLxx) of pregenerated DML files that can be edited and adapted to the user needs.

 

For DDL or CDL the Import component can also use Inbound Migrate Profiles to make input from another system compatible with the receiving subsystem’s version of the application. Inbound migrate profiles contain only change rules that modify imported data structure definitions. The change rules are applied as the CDL file or DDL file is imported.

 

However, DML does not use Migrate Profiles and you will get a message BMC70372W - Migrate profile invalid for DML

 

The Import process can be run either in foreground or in batch. Here are examples of different IMPORT processes:

 

For DDL:

 

//IMPORTIN DD DISP=SHR,
//         DSN=ISPRXM.BMCPROD.JCL(DDL)
//ALUIN    DD *
  SSID DJJ1
  WORKID ISPRXM.TESTDDL
  SOURCETYPE DDL
  MIGPROFILE ISPRXM.PROF1

 

For CDL:

 

//IMPORTIN DD DISP=SHR,
//         DSN=ISPRXM.BMCPROD.JCL(CDL)
//ALUIN    DD *
  SSID DJJ1
  WORKID ISPRXM.TESTCDL
  SOURCETYPE CDL
  MIGPROFILE ISPRXM.PROF2

 

For DML:   

 

//AUCIN    DD DISP=SHR,
//         DSN=ISPRXM.BMCPROD.JCL(DML)
//ALUIN    DD *
  SSID DJJ1
  CMPILOT REUSEWORKID
  WORKID ISPRXM.TESTDML
  IMPORT

 

 

To facilitate the IMPORT process, there is additional syntax for managing the target WORKID:

 

NEWWORKID creates the WORKID as part of the IMPORT; you can use a template for generating the WORKID name.

 

REPLACEWORKID imports into an existing WORKID and replaces its previous content, or creates a new one if it does not exist

 

WORKIDMERGE combines the imported specifications with any other specifications that could exist in the WORKID.

 

I hope you will find this function useful for your advanced change management needs in DB2.

 


Are you on DB2 9 or beyond, and do you still have a bunch of simple table spaces? Are you worried about accidentally dropping one of them and not being able to recreate it?

 

Of course, the first thing to do is to ALTER all tables residing in simple table spaces to ADD RESTRICT ON DROP, to avoid an accidental drop.
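As a sketch of how you might find the candidates in the catalog (this assumes the usual meaning of the SYSTABLESPACE columns; verify against your DB2 version):

  SELECT TB.CREATOR, TB.NAME, TS.DBNAME, TS.NAME AS TSNAME
    FROM SYSIBM.SYSTABLES TB
    JOIN SYSIBM.SYSTABLESPACE TS
      ON TB.DBNAME = TS.DBNAME AND TB.TSNAME = TS.NAME
   WHERE TS.SEGSIZE = 0        -- not segmented
     AND TS.PARTITIONS = 0     -- not partitioned
     AND TS.TYPE = ' '         -- not LOB, XML or universal
     AND TB.TYPE = 'T';        -- base tables only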

 

But what happens if, even so, a simple table space is dropped? Even if you have the DDL, you will not be able to recreate it, because DB2 9 removed the ability to create simple table spaces. You could recreate it as segmented, but then any attempt to recover the data will fail.

 

How could you recover from that? I have developed and tested the procedure below using BMC Log Master for DB2 and BMC Unload Plus for DB2, but you can try similar products from another vendor: you only need a log analysis product capable of extracting all the information from the dropped table space and an unload utility capable of unloading from an unregistered copy.

 

First, use BMC Log Master for DB2 with the DROPRECOVERY syntax to get the DDL, plus details of the last FIC and the RBAs, in the report:

                                                                       

  OPTION
     FILTERREL AND

  SORTOPTS
     FILSZ NONE
     HISTORY WRITE

  DROPRECOVERY
     TABLESPACE NAME ISPRXMDB.ISPRXMAX OUTCOPY NO
     RECREATE DATASET ISPRXM.DROPREC.RECREATE.DDL NEW
        CYLINDERS SPACE(1,1) UNIT(SYSDA) RELEASE
     REPORT DATASET ISPRXM.DROPREC.REPORT NEW
        CYLINDERS SPACE(1,1) UNIT(SYSDA) RELEASE
     FROM DATE(yyyy-mm-dd) TIME(03.44.00.000000)
     TO DATE(yyyy-mm-dd) TIME(03.46.00.000000)

 

Then, use BMC Log Master for DB2, in OVERTIME mode, to populate the repository with information about the old object, because it is not in the catalog anymore:

 

OPTION
    FILTERREL AND
    EXECUTION MODE OVERTIME

SORTOPTS
    FILSZ NONE
    HISTORY WRITE

LOGSCAN
    REPOS UPDATE
    INCLUDE DICTIONARY
    DB2CATALOG YES
    FROM DATE(yyyy-mm-dd) TIME(03.44.00.000000)
    TO DATE(yyyy-mm-dd) TIME(03.46.00.000000)
    WHERE TABLESPACE NAME = ISPRXMDB.ISPRXMAX

 

Then, use BMC Log Master for DB2 to extract any SQL activity after the copy, also in OVERTIME mode. Include the image copy information extracted in the first step, for completion, and set the scan range from the RBA information (between the copy point and the drop point):

                                                        

  OPTION
     FILTERREL AND
     EXECUTION MODE OVERTIME
          ATTEMPT COMPLETION

  IMAGECOPY TABLESPACE NAME ISPRXMDB.ISPRXMAX
       DATASET ISPRXM.ISPRXMDB.ISPRXMAX.P0.C011
       IMRBA X'C92C2B68DDAF'

  SORTOPTS
     FILSZ NONE
     HISTORY WRITE

  LOGSCAN
    REPORT
       TYPE SUMMARY
       SYSOUT
          CLASS(*) NOHOLD
    SQL
     MIGRATE
       DATASET &SYSUID..D&DATE..T&TIME..MIGRATE.SQL NEW
          CYLINDERS SPACE(1,1) UNIT(SYSDA) RELEASE
       TEMPLATE &SYSUID..D&DATE..T&TIME..MIGRATE.TEMPLATE NEW
          CYLINDERS SPACE(1,1) UNIT(SYSDA) RELEASE
       INCLUDE TRIGGER YES
    DB2CATALOG NO
    FROM RBA X'C92C2B68DDAF'
    TO   RBA X'C92C2E51B5CF'
    WHERE TABLESPACE NAME = ISPRXMDB.ISPRXMAX

 

Then, using BMC UNLOAD PLUS for DB2, UNLOAD data from the unregistered copy:

 

     UNLOAD
         INFILE IN001 FULL
         DDLDDN DDL0001

 

Now you have all the information you need. Recreate the object and its dependents using the extracted DDL, but add the SEGSIZE clause. Load the data extracted in the previous step, which brings the data back to the time of the last copy, and then apply the generated MIGRATE SQL to include all activity after the copy.
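For instance, a minimal sketch of the recreate step (the database and table space names come from the examples above; the storage group and SEGSIZE value are illustrative):

  CREATE TABLESPACE ISPRXMAX IN ISPRXMDB
    USING STOGROUP SYSDEFLT    -- illustrative; match the storage clause in your extracted DDL
    SEGSIZE 64;                -- the added clause: recreate the space as segmented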

 

Of course, this is an emergency process. I recommend using the BMC Recovery Management for DB2 solution and the TRANSFORM process to change all your existing table spaces to UTS without impact.

 

Good luck!
