Jonathan Adams

Happy holidays

Posted by Jonathan Adams Dec 23, 2009

Guest post by Kathy Klimpel, Solutions Marketing Manager


It's a busy time of year for shoppers, and that means it's a busy time of year for the mainframes that support credit card transactions, checking accounts, retail inventories, and deliveries. We depend on technology to do a lot of shopping.


Can you imagine what the brick-and-mortar shopping experience would be like if you had to wait minutes instead of a split second for a credit card approval? The people behind you in line might wonder what was taking so long. Are you at the credit card limit? Did you not pay your bill?


How safe would you feel making an online purchase and waiting 3 minutes before you get confirmation that your order was accepted and your credit card was approved? What if you could not track your package online to see where it was and when it would arrive?


The Internet has changed the way we do business, but the dependable mainframe still stores and processes most of the data. The mainframe enables us to make so many transactions - so easily.


So here's to you, Mr. / Ms. Mainframe: thanks for making it easier to do our holiday shopping.


And thanks to everyone who supports the mainframe. Have a wonderful 2010!

The postings in this blog are my own and don't necessarily represent BMC's opinion or position.

Guest post by Bronna Shapiro, Solutions Marketing Manager

Storage has changed: in the early mainframe days, storage was expensive and bulky. Over the years, storage devices have gotten smaller and more efficient – while the types and amounts of data we store have gotten bigger.


Now, the conventional wisdom is that storage is cheap. But is it really? Recent analyst data puts the cost of managing storage at 3-7 times the hardware cost. And we often estimate that we will need 75-100% more storage this year than we did last year. Storage accounts for about 15-20% of the typical IT budget. How cheap does storage seem now?


In a typical z/OS shop, an estimated 30-50% of storage capacity consists of idle cylinders consuming power, generating heat, and making the data center less green. This waste is growing 70-100% a year. But the people who could actually manage storage resources are disappearing and not being replaced, which often leaves application programmers guessing how much storage they need. They usually guess high, because guessing low is what causes most storage-related abends.
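
Putting those percentages together makes the point concrete. Here is a rough back-of-the-envelope Python sketch using the figures cited above; the midpoint management multiplier and idle fraction are illustrative assumptions, not measured data:

```python
# Illustrative model built from the figures cited above -- not measured data.

def true_storage_cost(hardware_cost, mgmt_multiplier=5.0, idle_fraction=0.4):
    """Estimate total storage cost and the share tied up in idle capacity.

    mgmt_multiplier: management cost as a multiple of hardware cost
                     (the post cites 3-7x; 5x is the midpoint).
    idle_fraction:   share of capacity sitting idle (the post cites 30-50%).
    """
    total = hardware_cost * (1 + mgmt_multiplier)   # hardware + management
    wasted = total * idle_fraction                  # cost of the idle cylinders
    return total, wasted

total, wasted = true_storage_cost(1_000_000)  # $1M of "cheap" disk
print(f"Total cost: ${total:,.0f}, tied up in idle capacity: ${wasted:,.0f}")
```

Even before the 70-100% annual growth is factored in, the "cheap" hardware is the smallest part of the bill.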


How can you reclaim the costs associated with wasted storage?


Choose a tool that helps technicians with a limited background in storage management. Make sure that the tool helps novices assign SMS constructs, test ACS routines against a production environment, and make sensible decisions about storage. The tool should ensure that you can access the data you need, when you need it, so that you can meet SLAs. And make sure you get a return on investment by choosing a tool that significantly reduces cooling and floor space requirements and provides operational savings.

Jonathan Adams

Play time in I.T.?

Posted by Jonathan Adams Dec 16, 2009

How many Sony PlayStation 3 consoles does it take to equal the power of a zSeries mainframe?


The US Department of Defense recently purchased 2,200 PS3 consoles to add to its existing 336 PS3s, creating a supercomputer cluster. The DoD believes that buying lots of PS3s is cheaper than investing in traditional servers. The consoles will run Linux, and a mix of home-grown and purchased applications will manage the cluster.


Perhaps the DoD should look into using a mainframe. z/OS provides mature, tested infrastructure and database management systems. When you purchase an operating system and database management system, you get a set of utilities and management tools. Do toys have this kind of infrastructure? If something goes wrong, who can you call for support?


Should you have to build an infrastructure and management tools for a network of toys? Of course not! Depend on hardware and software that has been used in production for years. Invest in infrastructure, operating systems, and software that you can depend on.


Guest post by Ross Cochran, Software Consultant


We all know WebSphere MQ never breaks. MQ is the PVC pipe of the IT world.


(For the non-home repair types, PVC is that long white tubing dangling off the back of the plumber’s truck).


Like PVC pipe, MQ connects everything, is always available, and never breaks. So how do you convince your manager to spend resources to manage something that never breaks? The answer is simple: you don't. You spend the resources to defend MQ. In other words, everyone will blame MQ for every problem, so you need something with which to defend it. MQ is presumed guilty (because it touches everything) until proven innocent.


MQ, referred to as middleware, is the glue that holds together the critical applications running the business. MQ is the connectivity backbone that allows data transmission between pieces of software. Queue managers send messages back and forth across a logical connection called a channel. Sounds easy: a queue manager sends a message across a channel to another queue manager, the message arrives "on time," and a reply is sent back to the originating queue manager. Messages are placed on a queue while they wait for transmission. Life is good.


But wait, this is not the movies; things never work exactly the way they are designed to work. Queue managers are software (we all know that software breaks). MQ channels are logical definitions (we all know things are not always defined properly). Plus all these queue managers and channels run on systems with operating systems (enough said – this stuff does break).


So the trick is knowing how long a queue manager will keep a message in a queue. Messages build up in the queue, and somebody needs to come get them. Queue managers all over the place (on z/OS and distributed systems) babysit messages until another queue manager comes and gets them, or a channel picks up a message and sends it to another queue manager. With all these moving parts, it sounds like a message could get lost (loosely meaning it has not arrived at its destination). MQ never loses a message; however, that doesn't necessarily mean the message gets where you think it's supposed to go, or that it gets there as quickly as you think it should. How do you know when this is happening? MQ generates events, and the most efficient way to expose these events is through MQ monitoring tools.
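
The moving parts described above can be sketched as a toy model. This is not the WebSphere MQ API; it is a plain-Python stand-in showing messages building up on a queue while a channel is down, with a simple depth threshold playing the role of MQ's queue-depth-high event:

```python
# Toy model of the flow described above -- NOT the WebSphere MQ API.
# A "queue manager" babysits messages until a "channel" drains them to a
# partner; a depth threshold stands in for MQ's queue-depth-high event.
from collections import deque

class ToyQueueManager:
    def __init__(self, depth_high=3):
        self.queue = deque()
        self.depth_high = depth_high
        self.events = []                            # what a monitor would watch

    def put(self, msg):
        self.queue.append(msg)                      # message waits for transmission
        if len(self.queue) >= self.depth_high:      # channel down or slow?
            self.events.append(f"QDEPTH_HIGH: {len(self.queue)} messages waiting")

    def drain(self, other):
        """Channel comes up: send everything to the partner queue manager."""
        while self.queue:
            other.queue.append(self.queue.popleft())

qm1, qm2 = ToyQueueManager(), ToyQueueManager()
for i in range(5):                 # channel is down; messages build up
    qm1.put(f"order-{i}")
print(qm1.events)                  # a monitor would surface these events
qm1.drain(qm2)                     # nothing is lost, just delayed
print(len(qm1.queue), len(qm2.queue))
```

The point of the sketch: no message disappears, but without something watching the events, nobody knows the messages sat there in the first place.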


It is apparent that you need to spend resources to keep MQ running smoothly. MQ tends to connect diverse platforms, the largest of which is the mainframe (a.k.a. MVS, OS/390, zSeries), to MQ on distributed systems. The biggest bone of contention is the interface for monitoring MQ: do you go with the traditional 3270 green screen or a graphical user interface (GUI)? We all know which has the sexier interface, but you can't pry some people's 3270 terminals from their hands. If the mainframe group brought MQ to your shop, the 3270 interface tends to dominate.


By the way, have you ever wondered if there are any real 3270 devices left, versus 3270 emulation?


The best solution for monitoring MQ in a mixed mainframe and distributed systems environment is an interface that is integrated and supports both platforms. This interface provides a tool set that is comfortable to your end-users, while providing an enterprise view of real-time activities, alerts, and historical reporting.




In the classic movie “This is Spinal Tap,” band member Nigel Tufnel explains how their amplifier’s volume goes to 11 (at a time when volume dials stopped at 10). His point: they were louder than any other band.


BMC is proud to announce support for IMS 11. The products support and exploit the features that IBM implemented in this release, including 64-bit support for Fast Path.


And speaking of support, the BMC support teams are committed to providing the best technical support in the industry. Our experts are on call and ready to help.


Guest post by Mike Sniezek, Product Manager


What does downtime mean in 2009? Does downtime mean that an application is completely down (offline, broken, or otherwise unusable)?

The term downtime refers to when an application fails to provide a service. Applications are ruled by service level agreements that measure reliability, availability, and recovery.

The cost of downtime is measured by how long the application is unavailable, and we have traditionally measured this cost per hour. These costs are usually based on the entire system being unavailable for some period. Downtime costs vary by industry. For example, a healthcare application might measure downtime at $600,000 per hour, while an hour of downtime for a brokerage house could cost more than $6,000,000.

These measurements have been around for a long time, but the concept seems antiquated now because the Internet enables users to conduct business 24x7. And with availability comes competition. Web application downtime costs should be measured in seconds, not hours. Losing business to poorly performing applications is costly, and that, too, should be considered downtime. Consider this scenario: your web page is up, your network is intact, your middleware is just fine, and your databases are up and running. But if a transaction takes more than 15 seconds, you will probably lose a customer – and their business. Web customers demand instant gratification.
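
Restating the hourly figures above per second makes the shift in scale concrete. A minimal sketch (the industry numbers are the illustrative ones from this post, not measured data):

```python
# The hourly downtime figures cited above, restated per second.
# Illustrative numbers from the post, not measured data.

def cost_per_second(cost_per_hour):
    """Convert an hourly downtime cost into a per-second cost."""
    return cost_per_hour / 3600

for industry, hourly in [("healthcare", 600_000), ("brokerage", 6_000_000)]:
    print(f"{industry}: ${cost_per_second(hourly):,.2f} per second of downtime")
```

At these rates, a single 15-second transaction stall is no longer a rounding error – it is a measurable loss even before you count the customer who never comes back.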

To accurately measure the cost of downtime, factor in the possibility of lost business, the dollar amount lost when that one transaction fails to meet the needs of the new consumer, and the long-term cost of that customer never returning. In other words, downtime does not mean down anymore; it means that reliability and performance are essential 24 hours a day, 7 days a week if you want to keep and grow business.

Jonathan Adams

zIIP it

Posted by Jonathan Adams Dec 4, 2009

Guest post by Bronna Shapiro, Solutions Marketing Manager


Fully loaded mainframe processing can be expensive. IT organizations rarely have extra money to spend, and our economy has every shop tightening its belt.


So when a problem comes along, you must zIIP it.


Offloading work to zIIP engines can result in savings of up to $7,000 per MIPS because software running on specialty engines is exempt from related charges such as license costs, upgrades, and maintenance fees.


The choice seems obvious: maximize the use of specialty engines to reduce cost and free up general purpose processors for other work. However, because only certain types of processing are eligible to execute on these specialty engines, you must identify which workloads are eligible for zIIP processing and then move those workloads. BMC products can help guide you in identifying the best candidates for using zIIPs.


Offloading CPU cycles to zIIPs during peak hours will likely give you the best bang for your buck. Offloading work overnight, when you have processing capacity available, may not provide significant payback.


BMC products automatically exploit zIIPs. In one case, a customer reduced the MIPS required for monitoring z/OS by 50%.
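
A rough savings estimate can be sketched from the figures above: up to $7,000 saved per MIPS offloaded, applied to a hypothetical monitoring workload. The MIPS counts and the zIIP engine price below are illustrative assumptions, not quoted prices:

```python
# Rough estimate from the figures cited above: up to $7,000 saved per MIPS
# offloaded. The workload size and zIIP engine price are assumptions.

def ziip_savings(total_mips, offload_fraction, savings_per_mips=7_000,
                 ziip_cost=0):
    """Net savings after subtracting the cost of the zIIP engine itself."""
    offloaded = total_mips * offload_fraction
    return offloaded * savings_per_mips - ziip_cost

# e.g. 200 MIPS of monitoring, half offloaded (as in the case above),
# against an assumed $125,000 zIIP engine purchase
print(f"${ziip_savings(200, 0.5, ziip_cost=125_000):,.0f}")
```

Running the same arithmetic with your own workload sizes and vendor pricing is exactly the weighing of costs against savings the next paragraph describes.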


Remember that everything comes at a cost. Look at your heaviest workloads and capacity needs. Consider the costs of purchasing the zIIP engines and rewriting applications to take advantage of them. Each zIIP you install still requires maintenance and power. Work with your vendors to understand what products are most suited for zIIPs. Weigh the costs against the potential savings.


And when appropriate, zIIP it. zIIP it good.


Guest post by Nick Griffin, IMS Product Manager

Benjamin Franklin said, “In this world nothing can be said to be certain, except death and taxes.” If Mr. Franklin had lived long enough, he surely would have amended his quote to include the phrase “IMS changes.” Change is inevitable, but how do you implement changes quickly, without errors, and without affecting other areas?

The simple answer: use an automated tool.

By using an automated IMS change management tool, you can ensure that:

  • Objects are in the desired state before and after the changes are completed. Getting the objects into an appropriate state manually is time-consuming, and it can lengthen the outage associated with a change.
  • Changes are made with integrity. When applications are intertwined, changes must be coordinated with all applications involved. You can save time, money, and frustration by doing a “dry-run” for groups of changes across one or more IMS images before committing the changes, and finding errors before you implement them in your production environment.
  • Changes can be implemented at any time – not just when IMS is up and running.  Make dynamic changes immediately, regardless of whether IMS is active, propagate those changes throughout those systems, and report on any inconsistencies and exceptions.
  • You have an audit trail to see who did what – and when.
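
The “dry-run” idea above can be sketched generically: apply a group of changes to a copy of the current state, report errors before anything is committed, and leave the real state untouched. The object model here is hypothetical Python, not IMS syntax:

```python
# Generic sketch of the "dry-run" pattern described above. The object
# names and state model are hypothetical, not IMS definitions.
import copy

def dry_run(state, changes):
    """Apply changes to a copy; return (ok, errors) without touching state."""
    trial, errors = copy.deepcopy(state), []
    for obj, new_value in changes:
        if obj not in trial:
            errors.append(f"unknown object: {obj}")
        else:
            trial[obj] = new_value
    return (not errors), errors

state = {"DBD1": "v1", "PSB1": "v1"}
ok, errors = dry_run(state, [("DBD1", "v2"), ("PSBX", "v2")])
print(ok, errors)        # the bad change is caught before anything commits
```

The payoff is the one the bullet describes: the error surfaces in the rehearsal, not in the production outage window.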

Keep it simple by using a mature, stable tool that can manage all of your IMS system changes from a single interface with a single communications address space.


Be a hero. As a technical leader, choose the best tool for implementing IMS changes so that your organization can be successful.

