
Performance Test Environment – Migrating the Data


Your company has decided to set up a Performance Testing Environment, referred to as a PTE from now on, with production-sized data volumes, allowing teams to do some real testing and get accurate SQL optimizer access paths through your data.  Imagine the costs involved: the hardware to run the environment, the duplicated DASD to hold the data (this must have been a storage vendor's idea), the operational costs to keep it running smoothly, the people costs to update it, and the people costs when teams of programmers are waiting for the environment to be ready for another test run.  Of course the teams are not literally standing around waiting, but you get the idea: when someone is waiting for something to be ready to use, it preoccupies them and they mentally wait on it.  At least I do!  And of course you will have a management layer on top of all those people costs.


Well, let's get to the real gist of this blog entry, or actually the first of three blog entries.  This entry discusses populating the PTE.  The next entry will discuss resetting the data so your application teams can run their next test or series of tests.  And the final entry will delve into those costs to make sure you consider them all.


Populating the PTE initially is really a data migration exercise, and BMC has addressed this requirement over the years in several white papers.  In papers like this one, you will see many flavors of migration, depending on outage reduction, structure modifications along the way, the ability to tolerate fuzzy source data, or the need for crystal-clear, intact relational data.  As we are always striving for speed and outage reduction at BMC, my personal favorites are:


      • Change Manager with Resource Maximizer – build a worklist that reads production image copies with our Recover Plus product, performs ID translation, and then lays the data down in your PTE subsystem.
        • Benefit: Automates the build and maintenance of the data migration process, reducing the demand on DBA time, and runs at Recover Plus page-recovery speed.
      • Online Consistent Copy (OCC) on the source, followed by BMC Recover Plus using the OCC as input with OBIDXLAT on the target.
        • Benefit: No outage on the source system to create the OCC, and little CPU or I/O is consumed (the copy is driven by storage data set snap technology).  Log apply technology then renders the OCC consistent.  The OCC is input to the Recover Plus OBIDXLAT process on the target, where the replicated data sets are created.
        • Consideration: OCC requires the appropriate storage technology.
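To make the OBIDXLAT step concrete: pages copied from the source subsystem carry the source's internal object identifiers, which must be rewritten to the identifiers the target subsystem assigned when the objects were created there.  Here is a minimal conceptual sketch of that translation idea in Python.  This is only an illustration of the concept; the dictionary-based "page" model and every name in it are hypothetical, and none of this is real Recover Plus syntax or the actual page format.

```python
# Conceptual sketch of object-ID translation (the idea behind OBIDXLAT).
# A "page" is modeled as a dict; real pages are binary structures.

# Hypothetical mapping: (source dbid, source obid) -> (target dbid, target obid)
obid_map = {("DB1", 12): ("DB7", 45)}

def translate_page(page, obid_map):
    """Rewrite a copied page's source object IDs to the target's IDs."""
    new_dbid, new_obid = obid_map[(page["dbid"], page["obid"])]
    return {**page, "dbid": new_dbid, "obid": new_obid}

source_page = {"dbid": "DB1", "obid": 12, "rows": ["r1", "r2"]}
target_page = translate_page(source_page, obid_map)
# The data rows are untouched; only the identifying IDs are rewritten
# so the target subsystem recognizes the pages as its own objects.
```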


You might have noticed in your shop that a recovery product can be difficult to justify; but one that runs every day and saves resources should be an easy justification (hint, hint: we are also going to use recovery in blog entry #2).  Check back next week when we go into restoring/recovering the data so you can re-run your tests.  Would turning many hours into minutes get your LOBs' and managers' attention?  Remember, those people are waiting to run the next test.


Change Manager allows for unattended operation when used to manage the objects in the PTE; all Change Manager processes can run unattended in batch.  For example, let's say the PTE looks different than production, and you want it that way, but you also want the page-moving capability of Recover Plus rather than doing unloads/reloads.  You can take a baseline (a snapshot) of the PTE structures, drop them, copy the objects from production with Change Manager, and then run a Change Manager compare against the original baseline to put the PTE back to its original structures, now holding production data.  All of this can run automatically and unattended.
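The four-step sequence above (baseline, drop, copy, compare) can be sketched as simple orchestration pseudocode.  This Python sketch only illustrates the ordering of the steps and why the baseline matters; every function and data structure in it is a hypothetical stand-in, not actual Change Manager or Recover Plus syntax.

```python
# Illustrative sketch of the unattended PTE refresh sequence.
# Objects are modeled as {"structure": ..., "data": ...} dicts;
# all names here are hypothetical, not real product APIs.

def take_baseline(objects):
    # Step 1: snapshot the PTE-specific structure definitions.
    return {name: obj["structure"] for name, obj in objects.items()}

def drop_all(objects):
    # Step 2: drop the PTE objects so they can be rebuilt.
    objects.clear()

def copy_from_production(objects, production):
    # Step 3: rebuild from production, bringing production structures
    # AND production data (a page-level copy in the real workflow).
    for name, obj in production.items():
        objects[name] = {"structure": obj["structure"], "data": obj["data"]}

def compare_to_baseline(objects, snapshot):
    # Step 4: compare against the baseline and re-apply the PTE-specific
    # structures, keeping the freshly copied production data.
    for name, structure in snapshot.items():
        if name in objects:
            objects[name]["structure"] = structure

pte = {"ORDERS": {"structure": "PTE-specific DDL", "data": "stale test data"}}
prod = {"ORDERS": {"structure": "production DDL", "data": "production data"}}

snap = take_baseline(pte)
drop_all(pte)
copy_from_production(pte, prod)
compare_to_baseline(pte, snap)
# pte now has its original PTE-specific structures holding production data
```

The design point the sketch captures is that the baseline is taken before the drop, which is what lets the compare step restore the PTE's intentional structural differences after production data has been laid down.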


Please contact me or your local Software Consultant/Account Manager for more information on any of these great techniques for populating your target environment.