
While athletes from around the globe compete for the gold in Vancouver, let’s take a look at the medalists in the Data Center Olympics.


If your Data Center participated in the Olympics, which operating systems would win the gold medal? Which operating system performs better and provides more security: z/OS or Windows XP/Vista/7? If your operating system were a hockey team, which goalie would you want protecting your net? z/OS would definitely be the biggest, burliest, and most effective goalie on the ice.


Which databases provide the best response time, the highest security, and the best return on investment? I’m guessing that DB2 on z/OS outperforms DB2 LUW by a wide margin, and IMS runs circles around distributed systems databases. In a downhill ski race, the mainframe databases would win every time.


CICS and IMS/TM manage transactions at lightning speeds. Can the distributed systems transaction managers match their speed and integrity, or do they leave you skating on thin ice?


I’m sure you see the pattern here: mainframe infrastructure, databases, and transaction managers provide the winning performance you need to meet your SLAs. Compare them to their non-mainframe counterparts and see which ones win the gold medal.


Share your Data Center Olympics stories with us.

The postings in this blog are my own and don't necessarily represent BMC's opinion or position.

Guest post by Peter Plevka, Consulting Brand Manager

We’ve all been there…working late into the night, adding and modifying DB2 data structures to cope with changes demanded by the business. To understand how difficult it can be to change the structure of a large DB2 database, imagine trying to change the layout/structure of a city. A large number of objects with complex relational dependencies must be managed. It takes a brave person to manage that degree of change.


Imagine we’ve created a completely new city - similar to a complex DB2 database - and each of the 100,000 houses built within it is represented by one DB2 table. The citizens of the city are the users. This city, like the DB2 database, is an immovable, inanimate object. Any attempt to make changes to the city’s shape, structure, or even to individual houses is risky, and is costly in terms of time, resources and availability.


But change is inevitable. The town is growing; it’s attracting more people, more houses, and more resources—in exactly the same way an organization evolves and demands changes to the DB2 database environment, such as the integration of a new ERP system. In our analogy, that new application may be a new school for the city, the widening of the roads, or just additional and larger rooms in some of the 100,000 houses. These are tough changes to make to the infrastructure: after all, the houses (DB2 tables) are already built/defined. But City Hall says that the new school or the new road is required, so the houses and the people (DB2 data and users) within them must be moved. Of course, these people have lives to live, so the change also needs to be non-disruptive. The people and all their furniture must be temporarily placed elsewhere.


With the decision made to make the changes, the houses are torn down, just like you would drop the DB2 table. But when you perform structural changes to DB2 objects, you must rebuild them exactly as they were before including the new or changed parts. That means rebuilding the houses (DB2 tables) brick-by-brick and putting the people and their furniture back exactly as they were before, but also considering the changes.
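The drop-and-rebuild pattern above can be sketched in code. This is a minimal illustration using Python's built-in SQLite in place of DB2 (the table and column names are hypothetical); it shows the same sequence: unload the rows, drop the table, redefine it with the change, and reload the data.

```python
import sqlite3

# Stand-in database: each "house" is a row in a table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE houses (id INTEGER PRIMARY KEY, rooms INTEGER)")
conn.executemany("INSERT INTO houses VALUES (?, ?)", [(1, 3), (2, 5)])

# 1. Unload: temporarily place the "furniture" (rows) elsewhere.
saved_rows = conn.execute("SELECT id, rooms FROM houses").fetchall()

# 2. Tear the house down: drop the old structure.
conn.execute("DROP TABLE houses")

# 3. Rebuild it exactly as before, plus the new "room" (column).
conn.execute(
    "CREATE TABLE houses "
    "(id INTEGER PRIMARY KEY, rooms INTEGER, extension INTEGER DEFAULT 0)"
)

# 4. Reload the rows; the new column takes its default value.
conn.executemany("INSERT INTO houses (id, rooms) VALUES (?, ?)", saved_rows)

result = conn.execute(
    "SELECT id, rooms, extension FROM houses ORDER BY id"
).fetchall()
print(result)
```

In a real DB2 environment this unload/drop/define/reload cycle also has to preserve authorizations, indexes, views, and referential constraints, which is exactly why the work is risky and time-consuming at scale.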


As any town planner will explain, it’s imperative not only to adopt quality assurance during any change—but also for the change to be made quickly. For large applications with thousands of objects, complex application changes can significantly lengthen the time needed to apply them. The utility maintenance work required when making complex schema changes to thousands of DB2 objects consumes the largest part of the change maintenance window.


If we did everything as a serial process, the reconstruction would take way too long. Parallel processing is a must. As the roads are torn up to be widened or the houses moved and re-built, multiple people must work in a synchronized manner simultaneously to move the houses (the tables) and the furniture (the data).


Finally, we must compare structures. DBAs need to test changes in a non-production environment and apply them, once fully tested, without losing any local customizations. It’s the equivalent of having a test city and a production city. Every street, every house, every piece of furniture that’s been changed needs to look identical in the production city.
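The structure comparison can be sketched the same way. This minimal example, again using SQLite in place of DB2, dumps each environment's object definitions and reports which objects have drifted, so the production "city" can be brought in line with the tested one.

```python
import sqlite3

def schema(conn):
    """Return {object_name: DDL} for every table in this environment."""
    rows = conn.execute(
        "SELECT name, sql FROM sqlite_master WHERE type = 'table' ORDER BY name"
    ).fetchall()
    return dict(rows)

# Hypothetical test and production environments.
test_env = sqlite3.connect(":memory:")
prod_env = sqlite3.connect(":memory:")
test_env.execute(
    "CREATE TABLE houses (id INTEGER PRIMARY KEY, rooms INTEGER, extension INTEGER)"
)
prod_env.execute("CREATE TABLE houses (id INTEGER PRIMARY KEY, rooms INTEGER)")

# Objects whose definition differs between the two environments.
test_schema, prod_schema = schema(test_env), schema(prod_env)
drift = {name for name, ddl in test_schema.items() if prod_schema.get(name) != ddl}
print(drift)
```

A real comparison tool would also diff indexes, views, and authorizations, and generate the change statements rather than just flagging the drift, but the principle is the same: compare definitions object by object.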


Before you make these changes, be sure that you have a plan and that you are using the tools you need to ensure that you make the changes quickly and accurately.




Guest post by Rick Weaver, Product Manager

The massive snowstorms on the US East Coast this week serve as a reminder of the importance of disaster recovery planning. When disaster strikes, you need a plan for recovering the IT infrastructure and data. When the IT infrastructure is available and performing well, the business is served. But if the data is lost, having a robust infrastructure does not help.


Most IT shops have a disaster recovery plan that includes a provision for protecting and accessing the application database after a site-wide disaster. Disasters, while rare, can be devastating. To protect your databases, you may take periodic volume dumps or backups, execute frequent image copies and log accumulations, and/or use remote site replication.


When determining which technology to use for disaster recovery, consider the cost, acceptable data loss (if any), and availability requirements.


Host/Platform-based replication solutions
With host- or platform-based solutions, a task running on the production computer captures changes and transmits them to a remote computer. The changes can be applied to a shadow database immediately, or they can be retained for use in a later restore/recovery process.


Host-to-host solutions are platform-based, homogeneous, and asynchronous. The source and target platform must be the same; for instance, mainframe z/OS to mainframe z/OS.


Storage-based replication solutions
In storage-based solutions, the storage controller automatically replicates data changes to a remote mirror. The data can be replicated in a synchronous or asynchronous mode.


The benefit of synchronous mode is that the data change is immediately reflected on the remote site mirror. This results in zero data loss, but it is highly distance-sensitive. In synchronous mode, the remote site must be physically near – typically no more than about 60 miles (100 km). Many companies do not consider this a valid solution for disaster protection.


The benefit of asynchronous mode is that the remote site can be thousands of miles away. However, there is some data loss, typically a few seconds to a few minutes.
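The trade-off between the two modes can be sketched in a few lines. In this minimal illustration (not any vendor's actual replication product), the primary site is a list, the remote mirror is another list fed by a background thread, and the only difference between the modes is whether a write waits for the mirror to acknowledge before returning.

```python
import queue
import threading

primary, mirror = [], []
pending = queue.Queue()  # changes in flight to the remote site

def apply_to_mirror():
    # Remote site: apply each replicated change as it arrives.
    while True:
        change = pending.get()
        mirror.append(change)
        pending.task_done()

threading.Thread(target=apply_to_mirror, daemon=True).start()

def write(change, synchronous):
    primary.append(change)
    pending.put(change)
    if synchronous:
        # Synchronous mode: wait until the mirror has the change.
        # Zero data loss, but every write pays the round-trip latency.
        pending.join()
    # Asynchronous mode: return immediately. A crash now would lose
    # whatever is still queued -- typically seconds to minutes of changes.

write("txn-1", synchronous=True)   # guaranteed on the mirror here
write("txn-2", synchronous=False)  # may still be in flight

pending.join()  # drain the queue before inspecting
print(primary, mirror)
```

The reason synchronous mode is distance-limited falls out of the sketch: every write blocks on the round trip to the remote site, and at hundreds of miles the speed-of-light latency alone makes that wait unacceptable for transaction workloads.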


Choose a strategy that works for your data, but above all – be prepared!


Guest post by Michael J. Jones, Technical Marketing Manager


This is the fourth in a series of posts about cloud computing.


So…where are the mainframes in the cloud?


New and evolving technologies often suffer from too much hype. This was never more apparent than in the late 1990s, when the internet was expected to change the foundations of business, economics, and culture overnight. The mania around the internet quickly created vast fortunes and, in many cases, wiped them out even faster. Of course, there was some fundamental truth to the transformational power of the internet, and today it is delivering quite a bit of the value that was originally expected.


Cloud computing is currently in a phase of adoption where there is a bit too much hype. For example, some pundits have predicted the eventual demise of the corporate data center in favor of ever larger shared data centers running publicly available cloud services. A more likely outcome is that both of these approaches will continue, each finding their own best usage scenarios. By co-existing, cloud computing and corporate data centers should actually provide more value than either approach alone.


The early adopters of cloud computing, and the ones creating the hype, are not focused on technology; they are technology agnostic. They want what they want when they want it, and they want it cheap. For this reason, most of the cloud computing and dynamic infrastructure marketing from IBM rarely mentions the Z platform in the headline. The zSeries is a natural platform for addressing the challenges of cloud computing implementations, but many early adopters are not merely technology agnostic - they are prejudiced against the mainframe platform.


Because the market is technology-agnostic, it will not accept a solely distributed or solely mainframe solution for cloud computing. Vendors must provide solutions that will solve business problems on all platforms.


For an ongoing discussion of the cloud, see Cloud Computing.


Jonathan Adams

Get into the zone

Posted by Jonathan Adams Feb 2, 2010

When mainframes were invented, the Internet was not yet a gleam in anyone's eye.


The mainframe community was a small, tightly knit group of professionals.


The mainframe community is still a relatively small, tightly knit group of professionals, but now we have more ways than ever to communicate with each other and share ideas.


A great place to start is the mainframe community site. Check out the trivia contests and the latest mainframe news.


And let us know what you think about mainframes in today's world.


