
BMC AMI Data for Db2

7 Posts authored by: Phil Grainger

We can divide DB2 DBAs into three categories: those who have come to DB2 later in the life of our favourite mainframe DBMS, those who worked with DB2 back in the heady days of versions 1 and 2, and some who (like me) fall into both camps. We remember the old days, but we also see how our world of databases has changed.


Not only has DB2 itself changed, but the infrastructure within which DB2 resides has also changed – and in some ways, it’s changed dramatically. For example, the way DASD storage is provided and how it is recognized and managed by the operating system.


When I started my career as a DB2 DBA, you could actually go into the machine room (if you were allowed) and look at boxes of DASD. We could literally SEE the volumes that we were used to looking at with LISTCAT or with ISPF 3.4. Because this allowed us to visualize how our DB2 data was laid out, we could foresee problems that could be caused by unthinking placement of data sets.


Do you remember the “rules” that suggested NEVER putting a table space AND its indexes on the same volume? This was to prevent “head contention” as DB2 tried to read index leaf pages and then table space data pages all at the same time. We tried very hard to ensure that tables and indexes never coexisted on the same DASD volume.

We also had to worry about the number of extents in which a data set existed. The maximum number of extents was fairly limited in those days, and it was startlingly easy to run out of extents and get a DB2 extend error as DB2 tried, and failed, to find more space to store data. As the company I worked for was growing at an alarming rate, I found myself being called out to fix data set extend errors far too often (and why did they always happen in the middle of the night?). So I developed a trick to keep myself ahead of DB2. If a data set needs to be extended, DB2 tries to extend BEFORE the space is actually needed; if it can’t be extended, a warning message is issued. I used our automated operations tool to look for those messages, extract the jobname from the message (it always seemed to be a batch job that was responsible), and change the job into a performance class that was permanently swapped out. This stopped the job in its tracks before it could fail and gave me the time to move other things around on the volume to make space so that the extend could complete. Then I’d swap the job back in and no one was any the wiser.
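The message-watching part of that trick can be sketched in a few lines. This is purely illustrative Python: the message ID, its layout, and the job names below are invented stand-ins, not real z/OS console messages, and the actual swap-out action would be whatever your automation product provides.

```python
import re

# Hypothetical extend-warning message layout. The real message ID and
# format depend on your z/OS level and automation tooling.
EXTEND_WARNING = re.compile(r"^EXTWARN\s+JOB=(?P<job>\S+)\s+DSN=(?P<dsn>\S+)")

def jobs_to_swap_out(log_lines):
    """Return job names mentioned in extend-warning messages, in order."""
    jobs = []
    for line in log_lines:
        m = EXTEND_WARNING.match(line)
        if m:
            jobs.append(m.group("job"))
    return jobs

log = [
    "EXTWARN JOB=NIGHTBAT DSN=DB2P.DSNDBC.PAYROLL.TS01.I0001.A001",
    "IEF403I NIGHTBAT - STARTED",
]
print(jobs_to_swap_out(log))  # ['NIGHTBAT']
```

Each job name returned would then be fed to the automation tool to move the job into a swapped-out performance class until space had been found.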


If you are relatively new to DB2, you are probably wondering if I am making this stuff up. But I suspect that those of you who have been working with DB2 for years are gradually realising where some of our more esoteric DASD management rules have come from.


DASD doesn’t work like this anymore. From a z/OS (or DB2) perspective, things don’t look any different. We still give DASD volumes “meaningful” names, and we can still look at data sets with LISTCAT and ISPF 3.4. But what we are looking at is an illusion.


What is sitting in the machine room now doesn’t look like a collection of 3390 volumes. It’s now an intelligent array of inexpensive (relative term) disks that are pretending to be the DASD that we are used to seeing. These arrays use advanced management techniques (like RAID) to ensure that they can recover from most hardware failures.  (How many readers have experienced a head crash?) The arrays provide significant amounts of cache memory to aid in the speed and responsiveness of the devices. I used to aim for around a 20 millisecond response time from my DASD – now people should be aiming for a fraction of that. And these virtual devices can have significantly higher capacity than the 3390s of old – in excess of 50 times the capacity for some models!


As z/OS has evolved, the maximum number of extents a data set is allowed to have has also been creeping forever upwards. I used to be limited to 15 extents, but now you can have hundreds of extents (and data sets that can happily go multi-volume as well).


But – wait a minute! Didn’t we just say that the physical DASD doesn’t exist in the way that we are viewing it from z/OS? What does that mean to extents and sizes? And what about contention? Does all that caching relieve us from the need to spread our data sets around?


What we see from a z/OS view today is actually a figment of the operating system’s imagination. A DASD administrator will define “z/OS volumes” and how those volumes map to his storage array devices. Any one volume will, in all likelihood, be spread across a large number of those inexpensive disks. And, for redundancy, multiple z/OS volumes may share space on the same disks. So our first problem would be to determine exactly which z/OS volumes are sharing space in the disk array. It would make no sense to separate our table spaces and indexes if the VIRTUAL 3390s we put them on ultimately mapped to the same disks in the storage array. In reality, it would be impossible to do that; from the z/OS side, there is no control of where a specific data set goes, and it may move over time. Of course, because of the caching that happens, we probably don’t have a performance problem to worry about anyway.


Some of you are probably thinking, “So, do extents matter anymore?”


As far as performance and DASD management go, they probably don’t. The extents you see in LISTCAT or ISPF 3.4 are virtual. They are a description of how z/OS thinks the data sets are laid out on disks that are only pretending to be 3390s. From a physical perspective, it is not possible to see from z/OS exactly HOW the data sets are laid out – and it hardly matters. Granted, if you have a data set in hundreds of extents, you may see a small performance degradation, but for normal numbers of extents it would be hardly noticeable. What does matter, though, is that the total number of extents for a data set is still limited – even though they are only virtual extents. In other words, it’s still possible for a data set to fail to extend because it has reached the limit of how many extents it can grow to, and this is something that you do need to keep an eye on.

Both z/OS and DB2 have been trying to make the management and sizing of data sets more of a science than an art. I remember using my Supercalc (remember that?) spreadsheet to calculate DB2 table space sizes based on column sizes, nullability attributes, and the like (with the usual healthy guesswork on the expected number of rows). Even then, I found it irritating that I had to create data sets of a certain size when work I was doing on my PC just needed me to create a file and it would just grow as needed.

z/OS is also making the allocation of new extents work more intelligently. Who hasn’t looked at a LISTCAT of a table space and discovered that the first extent is not the same size as the primary quantity defined, and that the secondary extents are all different sizes? When z/OS needs to allocate a new extent for a data set and that extent is EXACTLY adjacent to the prior extent, the current highest extent is just made larger. This works with both the primary and secondary allocations. (But z/OS is still playing with VIRTUAL extents that just look as if they are adjacent.) Because of this extent consolidation, it is possible for a data set to grow to a larger size than a simple calculation of (primary quantity + (n times secondary quantity)) would suggest. It also makes it possible to copy a data set from one volume to another and have the target data set run out of extents, even though the source data set is just fine.
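The consolidation behaviour can be illustrated with a small simulation – plain Python, with track numbers as stand-ins for real volume geometry, not an implementation of the actual DADSM logic. Three allocations end up occupying only two extent slots, which is exactly why a consolidated source data set can need more extents than the limit allows when it is copied to a volume where the free space is not contiguous.

```python
def allocate(extents, start, size):
    """Record an extent (start, size in tracks), merging when exactly adjacent.

    `extents` is a list of [start, size] pairs in allocation order.
    """
    if extents:
        last_start, last_size = extents[-1]
        if last_start + last_size == start:  # new extent begins exactly where
            extents[-1][1] += size           # the last one ends: just enlarge
            return                           # the last extent instead
    extents.append([start, size])

extents = []
allocate(extents, 0, 100)    # primary quantity
allocate(extents, 100, 50)   # adjacent secondary: merged into the first slot
allocate(extents, 500, 50)   # non-adjacent secondary: occupies a new slot
print(extents)               # [[0, 150], [500, 50]]
```

Three allocations, two extent slots used – so a data set nearing its extent limit may hold far more space than (primary + n × secondary) for that many extents would suggest.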


So…just how many extents can a data set extend to these days? Well, in the best traditions of DB2, “it depends” – on how you create the table space/index space in DB2 and what SMS attributes are defined for the classes that the pageset is created in.


DASD has changed, and we need to change along with it. Don’t try to manage DASD the same way we did back in the old days. Instead, understand the implications of the way disk storage is created and managed now and revise your DB2 physical management rules accordingly.


I'd be interested in hearing from you if you have modified your DB2 pageset management rules based on the new world of virtual DASD


I'm pleased to announce a few more feature additions to BMC DB2 tools, delivered as SPEs in our maintenance stream


The following SPE supports new features in Load Plus for DB2:

PTFs BPU6680 and BPU6692:

Introduces native support for loading inline LOB columns; support for loading LOB and XML data from, and discarding data to, VBS data sets that are in spanned-record format; and continued performance improvements for LOB and XML processing generally, including additional task parallelism


The following SPE supports new features in Unload Plus for DB2:

PTF BPU6778:

Provides DIRECT YES support for unloading of inline LOB columns, as well as for unloading LOB and XML data to VBS data sets that are in spanned-record format


The following SPE supports changes in Reorg Plus for DB2:

PTFs BPJ0839 and BPU7042:

Provides native support for altering partition limit keys when rebalancing partition-by-range objects in DB2 11 New Function Mode, where ALTER LIMITKEY became a deferred alter


And one I may have missed from earlier in the year


The following SPE supports changes in APPTUNE for DB2, CATALOG MANAGER for DB2, MainView for DB2, SQL Explorer for DB2, SQL Performance for DB2 and Workbench for DB2:

PTFs BPU6612 and BPU6617:

Introduces an Explain option that displays information about whether SQL is eligible to run on an IBM DB2 Analytics Accelerator


If you require more information about any of these, please feel free to contact me off-list (details below)



Now that I am getting back into the rhythm of working after my vacation, I'd like to point you at a recorded webcast recently hosted by BMC, showcasing yet another new product from BMC - Subsystem Optimiser for z/Enterprise (aka SubZero)


Learn how you can break free of traditional technical restrictions on subsystem placement which require certain subsystems to be co-located on the same LPAR


Break the data access barrier across LPARs and redirect workloads for better efficiency, without having to modify applications


Also, look to lower business risk by redirecting workloads when a system fails


Break free of technical restraints on workload consolidation and allocation. Find out how you can optimize subsystem placement to reduce mainframe costs by 20% or more, automatically


Spend less on your mainframe so you can invest in innovation


Let BMC show you the way


This webcast features David Hilbe, Area Vice President, Research and Development, ZSolutions, and Tom Vogel, Lead Solutions Marketing Manager


Register to view this webcast at


After viewing the webcast, I'm sure you'll have questions - feel free to contact me at the usual place, or take a look at the SubZero web page at


Phil Grainger



Share in Pittsburgh

Posted by Phil Grainger Jul 30, 2014

Getting ready to head to Pittsburgh for the second SHARE of the year


This time I'll be speaking - at 10am on Thursday 7th August. Come and hear how to "Save Real Dollars with SQL Tuning"


The full BMC speaking schedule looks like this:


16065: zNextGen Working and Planning Session

Sunday, August 3, 2014: 3:00 PM-4:00 PM

Room 302 (David L. Lawrence Convention Center)

Speakers: Linda Mooney (County of Sacramento), Vit Gottwald (CA Technologies), Reg Harbeck (Mainframe Analytics Ltd.), Warren T. Harper (BMC Software) and Troy Crutcher (IBM Corporation)



15446: zSeries Scalability and Availability Improvements

Monday, August 4, 2014: 10:00 AM-11:00 AM

Room 311 (David L. Lawrence Convention Center)

Speaker: Donald Zeunert (BMC Software)



16094: Automation for IMS: Why It's Needed, Who benefits, and What the Impact Is

Monday, August 4, 2014: 11:15 AM-12:15 PM

Room 402 (David L. Lawrence Convention Center)

Speaker: Duane Wente (BMC Software)



16064: zNextGen Project Opening and Keynote

Monday, August 4, 2014: 3:00 PM-4:00 PM

Room 319 (David L. Lawrence Convention Center)

Speakers: Linda Mooney (County of Sacramento), Vit Gottwald (CA Technologies), Reg Harbeck (Mainframe Analytics Ltd.) and Warren T. Harper (BMC Software)



15582: z/OS 2.1 Unix Systems Services Latest Status and New Features

Tuesday, August 5, 2014: 11:15 AM-12:15 PM

Room 406 (David L. Lawrence Convention Center)

Speakers: Patricia Nolan (BMC Software) and Janet Sun (Rocket Software, Inc.)



16061: Planning and Implementing Digital Services

Tuesday, August 5, 2014: 12:25 PM-1:15 PM

Room 316 (David L. Lawrence Convention Center)

Speaker: Dave Roberts (BMC)



16253: MF Economics: What the Past 50 Years Can Tell Us About the Future - A Lively Discussion

Wednesday, August 6, 2014: 7:20 AM-8:20 AM

Room 304 (David L. Lawrence Convention Center)

Speaker: Jonathan Adams (BMC)



16066: zNextGen Project Networking Dinner Event

Wednesday, August 6, 2014: 7:00 PM-8:00 PM

Meet in the Westin Lobby (Westin Pittsburgh Hotel)

Speakers: Vit Gottwald (CA Technologies), Linda Mooney (County of Sacramento), Reg Harbeck (Mainframe Analytics Ltd.), Warren T. Harper (BMC Software) and Troy Crutcher (IBM Corporation)



15492: How to Save REAL Dollars with SQL Tuning

Thursday, August 7, 2014: 10:00 AM-11:00 AM

Room 402 (David L. Lawrence Convention Center)

Speaker: Phil Grainger (BMC Software)




Phil Grainger

Lead Product Manager

BMC Software


BMC has just released a new white paper entitled "Five Levers to Lower MLC"


You can get a copy from


Don't forget that if you are paying IBM MLC charges for your z/OS software, ANYTHING you can do to lower CPU consumption during the monthly rolling four-hour average peak will have a positive effect on those charges - and for EVERYTHING running on that LPAR


So, tune some SQL and not only save money on DB2 license charges, but also on z/OS, and on CICS and/or IMS and/or MQ if they are also running on the same LPAR
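To see why the rolling four-hour average (R4HA) peak is the number that matters, here is a small sketch that computes it from hourly MSU samples. The figures are entirely made up for illustration; real sub-capacity reporting works from SCRT data at finer granularity.

```python
def r4ha_peak(msu_samples):
    """Peak of the rolling four-hour average over hourly MSU samples."""
    window = 4
    averages = [
        sum(msu_samples[i:i + window]) / window
        for i in range(len(msu_samples) - window + 1)
    ]
    return max(averages)

# Illustrative hourly MSU consumption for one day (made-up numbers):
# an overnight batch spike dominates the bill even though most of the
# day runs far below it.
day = [120, 130, 150, 400, 420, 410, 380, 200, 150, 140, 130, 120,
       110, 115, 120, 125, 130, 135, 140, 150, 160, 150, 140, 130]
print(r4ha_peak(day))  # 402.5
```

Shaving CPU inside that peak window lowers the charge for everything on the LPAR; the same saving made at 2pm, outside the peak, changes the bill not at all.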


Phil Grainger

Lead Product Manager

BMC Software



Just making my travel plans to attend the next z/OS DB2 and IMS briefing in Houston on August 12th


This is our opportunity to share with customers what's been going on recently with our database tools, but also hear from users how those tools are being used


This is something that we do on a regular basis in 1:1 meetings, but the chance to get a group of customers together often brings new ideas into the light


The briefing usually starts with a pleasant dinner on the prior evening - a chance for the briefing attendees to get to know each other, as well as an opportunity for the BMC folks to meet the attendees in a more informal setting


The briefing itself starts bright and early, opening with high-level overviews of our strategy before diving into specifics on the different database management product areas


The day ends with transport back to the airport for your flights home


If you'd like more information on the August briefing, then please don't hesitate to contact me or your local BMC account team


Phil Grainger

Lead Product Manager

BMC Software



It occurred to me the other day that I have become so comfortable speaking at DB2 conferences, I forget that there are many people out there who feel they could never do what I do


Well, do you know what, that used to be me too!


Yes, when I first started attending DB2 GSE meetings in the UK, I was too embarrassed to even raise my hand in the crowd when they asked who were new members. Mind you, that ages me as well - who else can remember the crowds we used to get at local DB2 events? The first UK GSE meeting I attended had around 120 attendees!


Anyway, I digress


It occurred to me after a few meetings, that if I wanted to make DB2 my career (which I did) then there was a good chance that my future boss might also be attending these events. Wouldn't it be great if he/she knew who I was before I even applied for a new job? What better way to do that than to actually speak


Now, my wife used to take public speaking classes, and even now I couldn't do what she did - speaking about a random topic selected by the teacher. We are extremely lucky - we are talking about something we not only love, but also (usually) understand inside-out. This is MUCH easier, believe me


So, I started small and presented short sessions (they were usually 30 minutes, presented VERY fast) on aspects of DB2 that we were exploiting. Keep in my comfort zone, I thought


Don't underestimate the nerves - I was sure the audience could see how nervous I was


But then something odd happened


People used to come up at the end of the sessions and not just ask questions, but also to comment on "how useful" they found the talk and "how comfortable" I seemed


Then I realised something important


What the audience sees and hears is important - the way the speaker feels hardly ever translates into anything that the audience picks up on!


From there it was a small step to speaking in front of larger audiences


BMC came along with an offer (I was a big customer of theirs in those days) and invited me to speak at a UK event (maybe 30-50 attendees) followed by a European event in Oslo (200 or so attendees). That seemed to be a good progression, so I accepted their invitation. Only for them to switch the two dates around so the big one came first




You know something? I survived the experience


It was a small step from there to IDUG, and the rest (as they say) was history


So, to close this blog post, I have a few small pieces of advice



1. Don't underestimate your own ability

2. Don't think that you are doing nothing special and nobody is interested - many of us really are doing special things with DB2

3. Start with your local user groups, and get comfortable with a small audience

4. Don't hesitate to ask for help. Existing speakers will be only too happy to help you along


Oh, and I'll finish by saying - DON'T get stressed by who might be in your audience. I'll never forget the first time I noticed Roger Miller at the back of one of MY audiences. THAT was a shock. But do you know something? Even the stars of DB2 at IBM are interested in hearing what users are actually doing with DB2


Phil Grainger

Lead Product Manager

BMC Software

