How much analysis is too much? Well, when it comes to analyzing what’s happening on the mainframe, you could say "The more the better!" – and you’d be right: the growing number and complexity of systems, applications, and transactions running on the mainframe means that careful analysis of mainframe systems is essential to maintain performance and availability.
However, traditional mainframe analytics techniques often rely on constant manual monitoring, which reduces staff productivity and drives up the cost of mainframe operations. Performance alerts, which may be based on out-of-date threshold information, must be analyzed to determine whether they signal genuine problems or are ‘false alarms’. Corrective actions usually end up being reactive rather than proactive. Worse still, these techniques can fail to predict a problem, resulting in a slowdown or even unplanned downtime – now, that’s paralysis! To avoid these problems, teams either perform more and more manual analysis or reduce the number of metrics they monitor.
What many of our customers are looking for is a way to put more automation and intelligence around mainframe analytics – so that all the detailed, time-consuming analytics work goes on in the background and they receive timely, accurate thresholds based on what is "normal" for the time of day, or normal for that point in their business processing lifecycle. The result is more intelligent alerts about genuine system issues that can be addressed – or even corrected automatically – before service levels are impacted. Intelligent analytics means ensuring availability without IT staff having to spend time manually monitoring systems, determining whether there is a problem, and working out what corrective action to take.
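The time-of-day baselining idea above can be sketched with simple statistics: learn what "normal" looks like for each hour from historical readings, then alert only on significant deviations from that hour's baseline. This is a minimal illustration, not a description of any particular product; the metric (CPU utilization), the sample data, and the three-sigma rule are all assumptions chosen for the example.

```python
from statistics import mean, stdev
from collections import defaultdict

def build_baselines(samples):
    """Group historical (hour, value) samples by hour of day and
    compute a (mean, stdev) baseline for each hour."""
    by_hour = defaultdict(list)
    for hour, value in samples:
        by_hour[hour].append(value)
    return {h: (mean(v), stdev(v)) for h, v in by_hour.items() if len(v) > 1}

def is_anomalous(baselines, hour, value, k=3.0):
    """Flag a reading as a genuine alert only if it falls more than
    k standard deviations from the baseline for that hour of day."""
    if hour not in baselines:
        return False  # no history for this hour: don't raise a false alarm
    mu, sigma = baselines[hour]
    return sigma > 0 and abs(value - mu) > k * sigma

# Hypothetical historical CPU-utilization readings: (hour_of_day, percent_busy)
history = [(9, 70), (9, 72), (9, 68), (9, 71),   # busy mornings are normal
           (3, 10), (3, 12), (3, 9), (3, 11)]    # quiet overnight is normal
baselines = build_baselines(history)

print(is_anomalous(baselines, 9, 71))   # 71% at 9 a.m. is normal -> False
print(is_anomalous(baselines, 3, 70))   # 70% at 3 a.m. is not    -> True
```

The key point is that a fixed threshold (say, "alert above 60%") would either fire every morning or miss the overnight spike; a per-period baseline catches only the genuine anomaly.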
This is the direction mainframe analytics needs to take – towards a more predictive, automated, and intelligent model.