
Solutions for Db2

Michael Cotignola

BMC Db2 Coffee Talk

Posted by Michael Cotignola Employee Sep 16, 2020

Join us on September 30th for the next BMC Db2 Coffee Talk, where we will review some of the recent enhancements to Next Generation Technology Reorg for Db2 that have been introduced in the maintenance stream. Register and enter your name to win a BMC coffee mug.

 

https://bmcsoftware.webex.com/bmcsoftware/j.php?RGID=r67ab64cecf16f823ccb85d17cf005234



If you are still utilizing BMC ISR to download maintenance, you need to be aware that ISR will be withdrawn on December 31, 2020.  In December 2019, BMC announced support for IBM SMP/E RECEIVE ORDER for BMC mainframe products. This is our recommended method to obtain maintenance for your BMC products and solutions. 

 

In this announcement, it was noted that BMC ISR will be withdrawn from service. As 2020 has flown by, we are quickly approaching the end of the year, and now is a good time to switch to IBM SMP/E RECEIVE ORDER for BMC mainframe products for your ongoing maintenance activities.

 

For information on using RECEIVE ORDER for your BMC Products and solutions, please visit Using SMP/E RECEIVE ORDER - Documentation for Installation System - BMC Documentation.
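For orientation, a RECEIVE ORDER job follows the standard SMP/E GIMSMP pattern sketched below. This is only an illustration: the CSI name, NTS path, and the order server/client definitions are placeholders, and the BMC documentation linked above describes the exact setup for BMC products.

//RCVORDER EXEC PGM=GIMSMP
//SMPCSI   DD DISP=SHR,DSN=BMC.GLOBAL.CSI
//SMPNTS   DD PATH='/u/smpe/smpnts/',PATHDISP=KEEP
//ORDSRVR  DD DISP=SHR,DSN=BMC.ORDER.SERVER.XML
//MYCLIENT DD DISP=SHR,DSN=BMC.ORDER.CLIENT.XML
//SMPOUT   DD SYSOUT=*
//SMPRPT   DD SYSOUT=*
//SMPCNTL  DD *
  SET BOUNDARY(GLOBAL).
  RECEIVE SYSMODS HOLDDATA
    ORDER(ORDERSERVER(ORDSRVR)
          CLIENT(MYCLIENT)
          CONTENT(ALL)).
/*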

 

If you require assistance with IBM SMP/E RECEIVE ORDER for BMC mainframe products, please open a case with BMC Support (Support Central - BMC Software) using the product Common Install – z/OS and the component RECEIVE ORDER Maintenance, and a support representative will help you!


This time we’ll discuss two recent changes in NGT Unload: the addition of a “less dirty” unload option, which I’ll explain shortly, and the ability to set the default concurrency, or SHRLEVEL.

NGT Unload has always defaulted to an “online unload”, creating a point-of-consistency unload while keeping the object read-write. After all, this only requires a single drain of the writer claims, which is rarely a bother because transactions that update are pretty good at committing right away. The table below shows the NGT Unload concurrency options. NGT’s default has always been SHRLEVEL CHANGE CONSISTENT YES; you don’t have to specify any SHRLEVEL to get an online consistent unload. This blog will primarily discuss the change to the SHRLEVEL CHANGE CONSISTENT NO QUIESCE YES row shown below.

 

Syntax                                          Concurrency

SHRLEVEL REFERENCE                              RW POC
SHRLEVEL CHANGE                                 RW POC
SHRLEVEL CHANGE CONSISTENT YES                  RW POC
SHRLEVEL CHANGE CONSISTENT NO                   RW DIRTY
SHRLEVEL CHANGE CONSISTENT NO QUIESCE YES       Less Dirty
SHRLEVEL CHANGE CONSISTENT NO QUIESCE NO        RW DIRTY

SHRLEVEL CHANGE CONSISTENT NO QUIESCE YES

Prior to BQU2750 in February 2020, NGT would make a clean point-of-consistency unload if you requested CONSISTENT NO QUIESCE YES, just as it does with CONSISTENT YES. Now NGT makes a “less dirty” unload, which means it drains the write claimers to flush any changes for the table space out of the Db2 buffer pool. This makes the unload less dirty because only changes that occur during the unload can be missed; any changes Db2 had buffered before the unload are externalized and unloaded.
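For illustration, a SYSIN requesting this less dirty unload might look like the following sketch, modeled on the NGT Unload syntax shown later on this page; the data set and table names are placeholders:

//SYSIN    DD *
  OUTPUT SYSREC DSNAME hlq.UNLOAD.T&TIME
  UNLOAD
       SHRLEVEL CHANGE CONSISTENT NO QUIESCE YES
       UNLOADDN SYSREC
       SELECT * FROM creator.table_name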

 

I know what you’re thinking: this is crazy. Getting the drain is the disruptive part, while tracking the changes to make a consistent unload is not, so why accept the same pain for less gain?

 

If you haven’t read my previous blog about simultaneous unloads of the same table, please check it out. There you will see that two point-of-consistency (POC) unloads cannot run at the same time unless they share the POC. If you are ultra-sensitive to the timeliness of your unloads and many unloads may try to unload the same table at the same time, you might opt for a less dirty, inconsistent unload rather than risk an unload failure, especially in a test environment; this is why the “less dirty” option was added.

 

Default Unload concurrency

This leads us to the default unload SHRLEVEL keywords. As I mentioned, the default has always been SHRLEVEL CHANGE CONSISTENT YES, and it still is. However, with BQU2907 in May 2020, you can now configure the default you want using these Unload parameters (+parms).

 

  • +CONSISTENT(YES/NO)
  • +QUIESCE(YES/NO)

 

Now if you want no disruption to applications (no drain) and you’re OK with inconsistent data, you can configure +CONSISTENT(NO) and +QUIESCE(NO) as your shop default, as sketched below. That’s not a recommendation, just one option.
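As a sketch, those shop defaults would be supplied through the Unload parameters, for example in a ULDPARMS DD like the one shown later on this page (values chosen purely for illustration, not as a recommendation):

//ULDPARMS DD *
  +CONSISTENT(NO)
  +QUIESCE(NO)

Any UNLOAD statement that does not specify its own SHRLEVEL keywords would then pick up these defaults.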


I hope everyone got some value from our Coffee Talk on 8/19.

Catch the recording using the link below:

 

Catalog Manager Tips and Tricks Coffee Talk

Wednesday, August 19, 2020

Play recording

 

Click on the Cheat Sheet doc to get the summary of all 31 tips covered in our session.


Welcome to the next BMC Db2 Coffee Talk. This session will be 30-45 minutes packed with Catalog Manager time-saving tips and tricks. New features, navigation shortcuts, power commands, and innovative feature use will have you saying ‘That’s so cool, I didn’t know that!’ (TSCIDKT). There will be a little something for everyone. My goal is to get 5 TSCIDKTs from each attendee.

 

Join us on August 19th at 10 AM Eastern time.

Please register below. (Audio through your computer or phone.)

 

 

 

 

 

BMC Db2 Coffee Talk

 

 

Catalog Manager Tips and Tricks

Host: Todd Mollenhauer

Wednesday, August 19, 2020

10:00 am  |  (UTC-04:00) Eastern Time (US & Canada)  |  1 hr

2 lucky winners to get a BMC Coffee Mug

Register at:

https://bmcsoftware.webex.com/bmcsoftware/j.php?RGID=r83ef76e29dffabba1d4b621e2d42739d

 

 


Let’s discuss having multiple jobs unloading the same table at the same time. NGT’s support for this is unique and robust. I’ll break this down into two topics, using multiple concurrent jobs so that:

  1. Each job unloads a range of partitions, with all parts unloaded to a common point of consistency.
  2. Each job unloads the same table, possibly the whole table, at the same time with a point of consistency.

 

Review of unloading to a Point of Consistency (POC)

This is done by having an asynchronous task (for NGT, that’s the NGT Subsystem) capture before-images of any page that changes during the unload; any rows unloaded from one of these pages then come from that before-image. However, once the NGT Subsystem is collecting these before-images for a POC, possibly for a subset of partitions, it cannot simultaneously manage a second POC of the same object for another job, nor add partitions to an existing object’s POC. With NGT, though, there is a way.

 

NGT puts unload parallelism on steroids!

First, are multiple submitted unloads even needed with NGT? It is common to want to unload a very large partitioned table space quickly and to create a clean point of consistency. This is accomplished by unloading the partitions in parallel. Any unload product can multi-task and process some number of partitions in parallel. NGT supercharges this with its server jobs, each of which multi-tasks several partitions just like the master job you submitted. The use of NGT servers may eliminate the need for multiple user-submitted jobs to increase parallelism. But what if you have thousands of partitions and want to submit multiple jobs, each of which can submit multiple server jobs, to go nuclear with parallelism? NGT can do this too.

 

To run multiple jobs that each unload a subset of partitions of the same table space, specify the +CONNECTALL(YES) global parameter in your set of jobs. This tells the NGT Subsystem to connect to and collect before-images for all parts of the table space. This way, whichever of your jobs runs first has the NGT Subsystem monitor all parts, and all subsequent jobs share this existing connection and its point of consistency. You get a clean POC unload of many parts using many jobs running concurrently.

 

The following JCL could be one of many jobs that you run concurrently; this one unloads the first 100 parts.

 

//ULDPARMS  DD *
  +CONNECTALL(YES)
//SYSIN     DD *
  OUTPUT SYSREC DSNAME hlq.UNLOAD.P&PART..T&TIME
  UNLOAD
       SHRLEVEL CHANGE CONSISTENT YES
       UNLOADDN SYSREC UNLOADDNPFX
       PART 1:100
       SELECT * FROM creator.table_name
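A companion job in the same set would be identical apart from the partition range; for example, a second job (a sketch, assuming the same table space) might unload the next 100 parts:

//ULDPARMS  DD *
  +CONNECTALL(YES)
//SYSIN     DD *
  OUTPUT SYSREC DSNAME hlq.UNLOAD.P&PART..T&TIME
  UNLOAD
       SHRLEVEL CHANGE CONSISTENT YES
       UNLOADDN SYSREC UNLOADDNPFX
       PART 101:200
       SELECT * FROM creator.table_name

Because both jobs specify +CONNECTALL(YES), whichever starts first establishes the POC for all parts, and the other simply shares it.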

 

Multiple unrelated unload jobs processing the same table at the same time

NGT Unload has always allowed multiple unload jobs to process the same table, even when a POC is requested (SHRLEVEL CHANGE CONSISTENT YES). If a CONSISTENT unload is running and another CONSISTENT unload is submitted for that same table, NGT lets this and any subsequent overlapping unloads share the existing POC.

 

If you don’t want this sharing of the existing POC, you can specify +POC_SHARE(NO), and subsequent jobs will fail rather than share the existing POC. Specify +POC_SHARE(NO) when it is important that the data unloaded is consistent as of the time the job is submitted and cannot be from a few minutes prior.

 

Note: +POC_SHARE(YES/NO) was added with PTF BQU2543 in August 2019. The prior behavior, and the new default, is YES.
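For example, to force a failure rather than share an in-flight POC, the parameter goes in the same place as the other +parms (a sketch, reusing the ULDPARMS DD shown above):

//ULDPARMS  DD *
  +POC_SHARE(NO)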

Chad Reiber

BMC Db2 Coffee Talk

Posted by Chad Reiber Employee May 21, 2020

Spend your morning coffee or first coffee break with BMC Db2 professionals as they discuss how to help you administer, support, and maintain your Db2 z/OS environment with innovative software from BMC.

 

On June 3rd at 10 AM Eastern time, spend 30-45 minutes understanding how Db2 Copies are taken, BMC Style. This will be our first talk, with many more to follow.

 

Register @ https://bmcsoftware.webex.com/bmcsoftware/j.php?RGID=r1f268aad9821326dc2624edfaf8b7dea


BMC Db2 Coffee Talk

 

 

Db2 Copies – BMC Style

 

Host: BMC Db2 Technical Software Consultants

Wednesday, June 3, 2020

10:00 am  |  (UTC-04:00) Eastern Time (US & Canada)  |  45 mins

Two Lucky Attendees will be shipped a BMC Coffee Mug

Chad Reiber

The Vanishing Db2 DBA

Posted by Chad Reiber Employee Feb 21, 2020

We can divide DB2 DBAs into three categories: those who have come to DB2 later in the life of our favourite mainframe DBMS, those who worked with DB2 back in the heady days of versions 1 and 2, and some who (like me) fall into both camps. We remember the old days, but we also see how our world of databases has changed.

 

Not only has DB2 itself changed, but the infrastructure within which DB2 resides has also changed – and in some ways, it’s changed dramatically. For example, the way DASD storage is provided and how it is recognized and managed by the operating system.

 

When I started my career as a DB2 DBA, you could actually go into the machine room (if you were allowed) and look at boxes of DASD. We could literally SEE the volumes that we were used to looking at with LISTCAT or with ISPF 3.4. Because this allowed us to visualize how our DB2 data was laid out, we could foresee problems that could be caused by unthinking placement of data sets.

 

Do you remember the “rules” that suggested NEVER putting a table space AND its indexes on the same volume? This was to prevent “head contention” as DB2 tried to read index leaf pages and then table space data pages all at the same time. We tried very hard to ensure that tables and indexes never coexisted on the same DASD volume.

We also had to worry about the number of extents in which a data set existed. The maximum number of extents was fairly limited in those days, and it was startlingly easy to run out of extents and get a DB2 extend error as DB2 tried, and failed, to find more space to store data.

As the company I worked for was growing at an alarming rate, I found myself being called out to fix data set extend errors far too often (and why did they always happen in the middle of the night?). So I developed a trick to keep myself ahead of DB2. If a data set needs to be extended, DB2 tries to extend BEFORE the space is actually needed; if it can’t be extended, a warning message is issued. I used our automated operations tool to look for those messages, extract the jobname from the message (it always seemed to be a batch job that was responsible), and change the job into a performance class that was permanently swapped out. This stopped the job in its tracks before it could fail and gave me the time to move other things around on the volume to make space so that the extend could complete. Then I’d swap the job back in and no one was any the wiser.

 

If you are relatively new to DB2, you are probably wondering if I am making this stuff up. But I suspect that those of you who have been working with DB2 for years are gradually realising where some of our more esoteric DASD management rules have come from.

 

DASD doesn’t work like this anymore. From a z/OS (or DB2) perspective, things don’t look any different. We still give DASD volumes “meaningful” names, and we can still look at data sets with LISTCAT and ISPF 3.4. But what we are looking at is an illusion.

 

What is sitting in the machine room now doesn’t look like a collection of 3390 volumes. It’s now an intelligent array of inexpensive (relative term) disks that are pretending to be the DASD that we are used to seeing. These arrays use advanced management techniques (like RAID) to ensure that they can recover from most hardware failures.  (How many readers have experienced a head crash?) The arrays provide significant amounts of cache memory to aid in the speed and responsiveness of the devices. I used to aim for around a 20 millisecond response time from my DASD – now people should be aiming for a fraction of that. And these virtual devices can have significantly higher capacity than the 3390s of old – in excess of 50 times the capacity for some models!

 

As z/OS has evolved, the maximum number of extents a data set is allowed to have has also been creeping forever upwards. I used to be limited to 15 extents, but now you can have hundreds of extents (and data sets that can happily go multi-volume as well).

 

But – wait a minute! Didn’t we just say that the physical DASD doesn’t exist in the way that we are viewing it from z/OS? What does that mean to extents and sizes? And what about contention? Does all that caching relieve us from the need to spread our data sets around?

 

What we see from a z/OS view today is actually a figment of the operating system’s imagination. A DASD administrator will define “z/OS volumes” and how those volumes map to his storage array devices. Any one volume will, in all likelihood, be spread across a large number of those inexpensive disks. And, for redundancy, multiple z/OS volumes may share space on the same disks. So our first problem would be to determine exactly which z/OS volumes are sharing space in the disk array. It would make no sense to separate our table spaces and indexes if the VIRTUAL 3390s we put them on ultimately mapped to the same disks in the storage array. In reality, it would be impossible to do that; from the z/OS side, there is no control of where a specific data set goes, and it may move over time. Of course, because of the caching that happens, we probably don’t have a performance problem to worry about anyway.

 

Some of you are probably thinking, “So, do extents matter anymore?”

 

As far as performance and DASD management go, they probably don’t. The extents you see in LISTCAT or ISPF 3.4 are virtual. They are a description of how z/OS thinks the data sets are laid out on disks that are only pretending to be 3390s. From a physical perspective, it is not possible to see from z/OS exactly HOW the data sets are laid out - and it hardly matters. Granted, if you have a data set in hundreds of extents, you may see a small performance degradation, but for normal numbers of extents it would be hardly noticeable. What does matter, though, is that the total number of extents for a data set is still limited - even though they are only virtual extents. In other words, it’s still possible for a data set to fail to extend because it has reached the limit of how many extents it can grow to, and this is something you do need to keep an eye on.

Both z/OS and DB2 have been trying to make the management and sizing of data sets more of a science than an art. I remember using my Supercalc (remember that?) spreadsheet to calculate DB2 table space sizes based on column sizes, nullability attributes, and the like (with the usual healthy guesswork on the expected number of rows). Even then, I found it irritating that I had to create data sets of a certain size when work I was doing on my PC just needed me to create a file and it would simply grow as needed.

z/OS is making allocation of new extents work more intelligently. Who has looked at a LISTCAT of a table space and discovered that the first extent is not the same size as the primary quantity defined, and that the secondary extents are all different sizes? When z/OS needs to allocate a new extent for a data set and that extent is EXACTLY adjacent to the prior extent, the current highest extent is just made larger. This works with both the primary and secondary allocations. (But z/OS is still playing with VIRTUAL extents that just look as if they are adjacent.) Because of this extent consolidation, it is possible for a data set to grow to a larger size than a simple calculation of (Primary Quantity + (n times Secondary Quantity)) would suggest. It also makes it possible to copy a data set from one volume to another and have the target data set run out of extents, even though the source data set is just fine.
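To make that concrete with assumed numbers (purely illustrative): suppose a primary quantity of 100 cylinders, a secondary quantity of 10 cylinders, and a limit of 16 extents. The simple calculation caps the data set at 100 + (15 x 10) = 250 cylinders. If, however, several of those secondary allocations happen to land adjacent to one another and are consolidated, they occupy fewer extent slots, so the data set can grow well beyond 250 cylinders before reaching the 16-extent limit. Copy that larger data set to a volume where no extents happen to land adjacent, and the same data may no longer fit within 16 extents - which is exactly how the target of a copy can run out of extents while the source is fine.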

 

So…just how many extents can a data set extend to these days? Well, in the best traditions of DB2, “it depends” - on how you create the table space/index space in DB2 and what SMS attributes are defined for the classes in which the pageset is created.

 

DASD has changed, and we need to change along with it. Don’t try to manage DASD the same way we did back in the old days. Instead, understand the implications of the way disk storage is created and managed now and revise your DB2 physical management rules accordingly.

 

I'd be interested in hearing from you if you have modified your DB2 pageset management rules based on this new world of virtual DASD.


I'm pleased to announce another few feature additions to BMC DB2 tools delivered as SPEs in our maintenance stream

 

The following SPE supports new features in Load Plus for DB2:

PTFs BPU6680 and BPU6692:

Introduces native support for loading inline LOB columns; support for loading LOB and XML data from, and discarding data to, VBS data sets that are in spanned-record format; and continued performance improvements for LOB and XML processing, including additional task parallelism

 

The following SPE supports new features in Unload Plus for DB2:

PTF BPU6778:

Provides DIRECT YES support for unloading inline LOB columns, as well as unloading LOB and XML data to VBS data sets that are in spanned-record format

 

The following SPE supports changes in Reorg Plus for DB2:

PTFs BPJ0839 and BPU7042:

Provides native support for altering partition limit keys when rebalancing partition-by-range objects in DB2 11 New Function Mode, where ALTER LIMITKEY became a deferred alter

 

And one I may have missed from earlier in the year

 

The following SPE supports changes in APPTUNE for DB2, CATALOG MANAGER for DB2, MainView for DB2, SQL Explorer for DB2, SQL Performance for DB2, and Workbench for DB2:

PTFs BPU6612 and BPU6617:

Introduces an Explain option that displays information about whether SQL is eligible to run on an IBM DB2 Analytics Accelerator

 

If you require more information about any of these, please feel free to contact me off-list (details below)

 


Now that I am getting back into the rhythm of working after my vacation, I'd like to point you at a recorded webcast recently hosted by BMC, showcasing yet another new product from BMC - Subsystem Optimiser for z/Enterprise (aka SubZero)

 

Learn how you can break free of traditional technical restrictions on subsystem placement which require certain subsystems to be co-located on the same LPAR

 

Break the data access barrier across LPARs and redirect workloads for better efficiency, without having to modify applications

 

Also, look to lower business risk by redirecting workloads when a system fails

 

Break free of technical restraints on workload consolidation and allocation. Find out how you can optimize subsystem placement to reduce mainframe costs by 20% or more, automatically

 

Spend less on your mainframe so you can invest in innovation

 

Let BMC show you the way

 

This webcast features David Hilbe, Area Vice President, Research and Development, ZSolutions, and Tom Vogel, Lead Solutions Marketing Manager

 

Register to view this webcast at http://www.bmc.com/forms/MCO-R4-SubZero-SaveMillionsSep23-Webex.html?cid=em834404807611ew&Email_Source=BMCEvents

 

After viewing the webcast, I'm sure you'll have questions - feel free to contact me at the usual place, or take a look at the SubZero web page at http://www.bmc.com/it-solutions/subsystem-optimizer.html

 

Phil Grainger

 

Phil Grainger

SHARE in Pittsburgh

Posted by Phil Grainger Jul 30, 2014

Getting ready to head to Pittsburgh for the second SHARE of the year

 

This time I'll be speaking - at 10am on Thursday 7th August. Come and hear how to "Save Real Dollars with SQL Tuning"

 

The full BMC speaking schedule looks like this:

 

16065: zNextGen Working and Planning Session

Sunday, August 3, 2014: 3:00 PM-4:00 PM

Room 302 (David L. Lawrence Convention Center)

Speakers: Linda Mooney (County of Sacramento), Vit Gottwald (CA Technologies), Reg Harbeck (Mainframe Analytics Ltd.), Warren T. Harper (BMC Software) and Troy Crutcher (IBM Corporation)

 

 

15446: zSeries Scalability and Availability Improvements

Monday, August 4, 2014: 10:00 AM-11:00 AM

Room 311 (David L. Lawrence Convention Center)

Speaker: Donald Zeunert (BMC Software)

 

 

16094: Automation for IMS: Why It's Needed, Who Benefits, and What the Impact Is

Monday, August 4, 2014: 11:15 AM-12:15 PM

Room 402 (David L. Lawrence Convention Center)

Speaker: Duane Wente (BMC Software)

 

 

16064: zNextGen Project Opening and Keynote

Monday, August 4, 2014: 3:00 PM-4:00 PM

Room 319 (David L. Lawrence Convention Center)

Speakers: Linda Mooney (County of Sacramento), Vit Gottwald (CA Technologies), Reg Harbeck (Mainframe Analytics Ltd.) and Warren T. Harper (BMC Software)

 

 

15582: z/OS 2.1 Unix Systems Services Latest Status and New Features

Tuesday, August 5, 2014: 11:15 AM-12:15 PM

Room 406 (David L. Lawrence Convention Center)

Speakers: Patricia Nolan (BMC Software) and Janet Sun (Rocket Software, Inc.)

 

 

16061: Planning and Implementing Digital Services

Tuesday, August 5, 2014: 12:25 PM-1:15 PM

Room 316 (David L. Lawrence Convention Center)

Speaker: Dave Roberts (BMC)

 

 

16253: MF Economics: What the Past 50 Years Can Tell Us About the Future - A Lively Discussion

Wednesday, August 6, 2014: 7:20 AM-8:20 AM

Room 304 (David L. Lawrence Convention Center)

Speaker: Jonathan Adams (BMC)

 

 

16066: zNextGen Project Networking Dinner Event

Wednesday, August 6, 2014: 7:00 PM-8:00 PM

Meet in the Westin Lobby (Westin Pittsburgh Hotel)

Speakers: Vit Gottwald (CA Technologies), Linda Mooney (County of Sacramento), Reg Harbeck (Mainframe Analytics Ltd.), Warren T. Harper (BMC Software) and Troy Crutcher (IBM Corporation)

 

 

15492: How to Save REAL Dollars with SQL Tuning

Thursday, August 7, 2014: 10:00 AM-11:00 AM

Room 402 (David L. Lawrence Convention Center)

Speaker: Phil Grainger (BMC Software)

 

 

 

Phil Grainger

Lead Product Manager

BMC Software


BMC has just released a new white paper entitled "Five Levers to Lower MLC"

 

You can get a copy from http://bit.ly/MCO5Levers

 

Don't forget that if you are paying IBM MLC charges for your z/OS software, ANYTHING you can do to lower the CPU consumption during the monthly rolling four-hour average peak will have a positive effect on those charges - and for EVERYTHING running on that LPAR

 

So, tune some SQL and not only save money on DB2 license charges, but also save on z/OS and CICS and/or IMS and/or MQ if they are also running on the same LPAR

 

Phil Grainger

Lead Product Manager

BMC Software

 


Just making my travel plans to attend the next z/OS DB2 and IMS briefing in Houston on August 12th

 

This is our opportunity to share with customers what's been going on recently with our database tools, but also hear from users how those tools are being used

 

This is something that we do on a regular basis in 1:1 meetings, but the chance to get a group of customers together often brings new ideas into the light

 

The briefing usually starts with a pleasant dinner on the prior evening - a chance for the briefing attendees to get to know each other, as well as an opportunity for the BMC folks to meet the attendees in a more informal setting

 

The briefing itself starts bright and early and moves from high-level overviews of our strategy into specifics on the different database management product areas

 

The day ends with transport back to the airport for your flights home

 

If you'd like more information on the August briefing, then please don't hesitate to contact me or your local BMC account team

 

Phil Grainger

Lead Product Manager

BMC Software

 


It occurred to me the other day that I have become so comfortable speaking at DB2 conferences that I forget there are many people out there who feel they could never do what I do

 

Well, do you know what, that used to be me too!

 

Yes, when I first started attending DB2 GSE meetings in the UK, I was too embarrassed to even raise my hand in the crowd when they asked who were new members. Mind you, that ages me as well - who else can remember the crowds we used to get at local DB2 events? The first UK GSE meeting I attended had around 120 attendees!

 

Anyway, I digress

 

It occurred to me after a few meetings that if I wanted to make DB2 my career (which I did), then there was a good chance that my future boss might also be attending these events. Wouldn't it be great if he/she knew who I was before I even applied for a new job? What better way to do that than to actually speak

 

Now, my wife used to take public speaking classes, and even now I couldn't do what she did - speaking about a random topic selected by the teacher. We are extremely lucky - we are talking about something we not only love, but also (usually) understand inside-out. This is MUCH easier, believe me

 

So, I started small and presented short sessions (they were usually 30 minutes, presented VERY fast) on aspects of DB2 that we were exploiting. Keep it in my comfort zone, I thought

 

Don't underestimate the nerves - I was sure the audience could see how nervous I was

 

But then something odd happened

 

People used to come up at the end of the sessions not just to ask questions, but also to comment on "how useful" they found the talk and "how comfortable" I seemed

 

Then I realised something important

 

What the audience sees and hears is important - the way the speaker feels hardly ever translates into anything that the audience picks up on!

 

From there it was a small step to speaking in front of larger audiences

 

BMC came along with an offer (I was a big customer of theirs in those days) and invited me to speak at a UK event (maybe 30-50 attendees) followed by a European event in Oslo (200 or so attendees). That seemed to be a good progression, so I accepted their invitation. Only for them to switch the two dates around so the big one came first

 

YIKES!

 

You know something? I survived the experience

 

It was a small step from there to IDUG, and the rest (as they say) was history

 

So, to close this blog post, I have a few small pieces of advice

 

 

1. Don't underestimate your own ability

2. Don't think that you are doing nothing special and nobody is interested - many of us really are doing special things with DB2

3. Start with your local user groups, and get comfortable with a small audience

4. Don't hesitate to ask for help. Existing speakers will be only too happy to help you along

 

Oh, and I'll finish by saying - DON'T get stressed by who might be in your audience. I'll never forget the first time I noticed Roger Miller at the back of one of MY audiences. THAT was a shock. But do you know something? Even the stars of DB2 at IBM are interested in hearing what users are actually doing with DB2

 

Phil Grainger

Lead Product Manager

BMC Software

 
