
Today’s mainframe systems run the world’s top industries’ mission-critical workloads. This includes large financial institutions like banks, insurance companies, healthcare organizations, utilities, government, military, and a multitude of other public and private enterprises. 

The beauty of the mainframe lies in its flexibility, stability, and security.  Mainframe systems provide the ability to divide the resources of a single machine into multiple logical partitions (LPARs), each capable of running its own independent operating system.  These LPARs are, in practice, equivalent to separate mainframe systems themselves, and can function independently or work together as a collection – called a sysplex – that cooperates to process computing workloads.  This sysplex design can run large commercial mission-critical workloads continuously and very efficiently.

Each mainframe machine has a maximum central processing complex (CPC) capacity – the LPARs running on a given machine collectively use that CPC capacity.  And capacity is the key word – a mainframe system has the processing power of multiple rows of commodity servers, with tremendous transaction throughput capacity.  That is far more capacity than those banks of servers can deliver – which is why large organizations continue to use this platform – the mighty and majestic mainframe.

Controlling the Majestic Mainframe


 

Unlike other computing platforms, a single mainframe system can be configured for a variety of different business needs – performance and cost can be balanced as required.  Whether a business needs to use the full capacity of the system, limit the capacity of a test LPAR, or control the cost of usage-based software, the mainframe can accommodate it.

The mainframe offers “capping” controls that can limit the CPU resource usage of one or more LPARs. Capping is managed either by the Workload Manager (WLM) or the Processor Resource/System Manager (PR/SM). There are seven effective techniques available to control resource usage – below, we go through each of them.

Initial Capping (Hard Cap)

The usage of CPC capacity for a given LPAR is based on the weight assigned to it in the Hardware Management Console (HMC) – each LPAR has its share of CPC capacity. However, if an LPAR needs more than its share of CPC capacity, and other LPARs are using less than their shares, the PR/SM can then allocate additional capacity for the capacity-hungry LPAR.

The Initial Capping (IC) setting prevents PR/SM from giving an LPAR more than its share even when there is capacity available in the CPC, meaning the LPAR can never exceed its share. The IC limit is defined in the HMC as a relative weight.

Scope is a single LPAR. This capping is managed by the PR/SM.
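
To make the weight arithmetic concrete, here is a minimal sketch in Python, using hypothetical weights and capacity figures (not from any real configuration), of how a relative weight translates into an LPAR’s share of CPC capacity and what a hard cap means for that share:

```python
# Hypothetical weights and capacity, for illustration only (not from any real configuration).
CPC_CAPACITY_MSU = 1000                                     # total CPC capacity in MSUs
LPAR_WEIGHTS = {"PROD1": 600, "PROD2": 300, "TEST1": 100}   # relative weights from the HMC

TOTAL_WEIGHT = sum(LPAR_WEIGHTS.values())

def lpar_share(lpar):
    """An LPAR's guaranteed share of CPC capacity, based on its relative weight."""
    return CPC_CAPACITY_MSU * LPAR_WEIGHTS[lpar] / TOTAL_WEIGHT

def allowed_usage(lpar, demand_msu, spare_msu, initial_capping):
    """With Initial Capping, the LPAR can never use more than its weight-based share,
    even when other LPARs leave capacity unused; without it, PR/SM may hand out spare capacity."""
    share = lpar_share(lpar)
    if initial_capping:
        return min(demand_msu, share)
    return min(demand_msu, share + spare_msu)

print(lpar_share("TEST1"))                      # 100.0 MSUs guaranteed by weight
print(allowed_usage("TEST1", 250, 150, True))   # 100.0 -> hard-capped at its share
print(allowed_usage("TEST1", 250, 150, False))  # 250.0 -> may borrow spare capacity
```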

LPAR Absolute Capping

LPAR absolute capping is a PR/SM-controlled capping limit that applies to a single LPAR. Its limit is defined in the HMC as a fraction of the total number of processors.  This capping is enforced independently of the 4-hour rolling average (4HRA) – the measure of the LPAR’s resource usage – and applies to both z/OS and non-z/OS LPARs.

Scope is a single LPAR.

Group Absolute Capping

Group absolute capping is similar to LPAR absolute capping but applies to a defined group of LPARs. This cap limit is defined in the HMC as a fraction of the total number of processors and is controlled by PR/SM. The combined CPU usage of the group of LPARs can never exceed the group absolute capping limit at any time.

Scope is a group of LPARs.

Defined Capacity (Soft Capping)

An LPAR’s Defined Capacity (DC) – its defined maximum capacity – is set in the HMC.  The WLM tracks the 4HRA of the LPAR and compares it to the LPAR’s DC value. If the LPAR’s 4HRA exceeds its DC value, then the LPAR is capped.  The WLM triggers the capping, but it is enforced by the PR/SM. When capping occurs, the workload currently running on the LPAR is delayed.  If the WLM policy is set appropriately, the WLM will run the most critical work and delay the low-importance work.  Capping remains in effect until the 4HRA drops below the DC value, at which point the LPAR will process workloads without delay.  Since the LPAR’s DC value is compared to the 4HRA (an averaged value), the current CPU usage of the LPAR can exceed the DC value as long as the 4-hour rolling average does not exceed the DC limit.

DC only applies to LPARs with shared central processors (CPs); LPARs with dedicated CPs cannot be controlled by DC.  DC (soft capping) cannot be used with Initial Capping control (and vice-versa).

Scope is a single LPAR. This means that the DC value limits the CPU capacity for a single LPAR.
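
As a rough illustration of the mechanics, the sketch below (Python, with invented hourly usage numbers and a hypothetical DC of 100 MSUs) shows how a 4-hour rolling average compares against a Defined Capacity value; the real WLM/PR/SM algorithms are more involved, but the principle is the same:

```python
from collections import deque

DEFINED_CAPACITY_MSU = 100   # hypothetical DC value, as it would be set in the HMC

def rolling_4hr_average(hourly_usage_msu):
    """Yield the 4-hour rolling average (4HRA) after each hourly sample."""
    window = deque(maxlen=4)
    for usage in hourly_usage_msu:
        window.append(usage)
        yield sum(window) / len(window)

hourly_msu = [60, 90, 130, 140, 120, 70, 50]   # invented hourly consumption for one LPAR

for hour, avg in enumerate(rolling_4hr_average(hourly_msu)):
    print(f"hour {hour}: 4HRA={avg:6.1f}  capped={avg > DEFINED_CAPACITY_MSU}")

# Note that the instantaneous usage (130 or 140 MSUs here) may exceed the DC as long
# as the rolling average stays at or below it; capping ends once the 4HRA drops back.
```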

Group Capacity Limit

Similar to DC, the group capacity limit (GCL) controls the CPU usage for a group of LPARs on a single CPC. z/OS LPARs can be grouped together in what are called capacity groups.  These LPARs must reside on the same CPC, but not necessarily within the same sysplex. GCL controls all LPARs in the capacity group, and the GCL value is set in the HMC. The total 4HRA of all LPARs in a given capacity group cannot exceed the GCL value.  If the group 4HRA exceeds the capacity group limit, then the capacity group is capped by the PR/SM, and each member of the group gets its share of resources based on its assigned weight.  Once the group 4HRA drops below the GCL, capping is terminated, and the group will process workloads without delay.  While the 4HRA of the group is below the GCL, each LPAR in the group can use the capacity it needs.

Within the capacity group, in addition to its group share, each LPAR in the group can be assigned a DC. In such cases, either the calculated group share or the DC is used to cap, whichever is less. 

Scope is a group of LPARs belonging to the same CPC.
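
To illustrate how the limit is shared out once group capping kicks in, here is a minimal sketch (Python, with invented weights and limits) that distributes a GCL across members by weight and honours any lower per-LPAR DC, as described above. Real behaviour is more dynamic than this – for example, capacity that capped members do not use can be redistributed – so treat it only as an approximation:

```python
# Hypothetical capacity group: a GCL of 400 MSUs shared by three LPARs.
GROUP_CAPACITY_LIMIT = 400
MEMBERS = {                      # weight plus an optional per-LPAR DC (None = no DC set)
    "PRDA": {"weight": 50, "dc": None},
    "PRDB": {"weight": 30, "dc": 90},
    "PRDC": {"weight": 20, "dc": None},
}

TOTAL_WEIGHT = sum(m["weight"] for m in MEMBERS.values())

def cap_when_group_is_capped(name):
    """Effective limit for one member once the group 4HRA reaches the GCL:
    its weight-based share of the group limit, or its own DC if that is lower."""
    share = GROUP_CAPACITY_LIMIT * MEMBERS[name]["weight"] / TOTAL_WEIGHT
    dc = MEMBERS[name]["dc"]
    return share if dc is None else min(share, dc)

for name in MEMBERS:
    print(name, cap_when_group_is_capped(name))
# PRDA 200.0, PRDB 90 (its DC is below its 120 MSU share), PRDC 80.0
```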

Resource Group Capping

Resource Group (RG) capping provides the ability to control the maximum and minimum CPU capacity given to the service classes (workloads) that are connected to a given RG. If the workloads of an RG exceed the maximum limit, then WLM caps the CPU usage of the RG.

WLM manages the workloads using the goals defined in the service definition.  If a workload is not meeting its defined goal, the WLM assigns more resources to it when needed.  In that process, the WLM may remove resources from other workloads that are meeting their goals.  The RG minimum and maximum values set the limits on this CPU goal management.  A minimum value prevents the WLM from taking away resources, while a maximum value prevents the WLM from providing additional resources even if the workload is not meeting its goal.

Scope is sysplex-wide. The service classes that are in an RG should belong to LPARs that are in the same sysplex.  However, the LPARs themselves can span multiple CPCs.
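
A compact way to state the min/max semantics is sketched below (Python, hypothetical values expressed in MSUs); it only captures the decision boundaries described above, not WLM’s actual goal-management algorithm:

```python
def wlm_resource_group_decision(rg_usage_msu, rg_min_msu, rg_max_msu, missing_goal):
    """A very simplified view of resource group min/max semantics:
    - below the minimum, WLM should not take capacity away from the group's work;
    - at or above the maximum, WLM will not give the group more capacity,
      even if its service classes are missing their goals."""
    may_take_away = rg_usage_msu > rg_min_msu
    may_give_more = missing_goal and rg_usage_msu < rg_max_msu
    return may_take_away, may_give_more

print(wlm_resource_group_decision(40, 50, 200, missing_goal=True))    # (False, True)
print(wlm_resource_group_decision(200, 50, 200, missing_goal=True))   # (True, False)
```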

Absolute MSU Capping

WLM-controlled absolute MSU capping is similar to PR/SM-controlled initial (hard) capping – it is permanent capping, but controlled by the WLM.  The difference is that absolute MSU capping is specified in MSUs (as the DC of the LPAR), whereas hard capping is specified as a relative weight.

The limit value is derived from that DC and the LPAR group capacity.

Scope is a single LPAR.

Intelligent capping (iCap)

If you are into capping for cost control, then BMC offers a dynamic solution – Intelligent Capping (iCap).

Intelligent Capping for zEnterprise (iCap) is a mainframe software solution that dynamically automates and optimizes defined capacity settings to lower IBM Monthly License Charge (MLC) costs by 2%-5% or more, saving customers millions of dollars while mitigating risk to the business. After analyzing CPU usage and WLM workloads, iCap automatically manages changes to defined capacity settings based on workload profiles, enabling customers to lower costs. BMC Intelligent Capping for zEnterprise removes the manual effort from managing capping limits, while optimizing capacity usage across LPARs or groups of LPARs. The solution dynamically aligns workload allocations based on utilization needs, workload importance, and customer policy profiles.

For more information (including a short 2-minute video on how iCap works), please visit http://www.bmc.com/it-solutions/intelligent-capping-zenterprise.html

 

Summary

Mainframe systems provide tremendous capabilities and run the world’s largest and most complex commercial workloads.  Yet the platform remains highly flexible, able to handle the unique business needs and workloads of a wide range of businesses.  To support this flexibility, there are seven different techniques available to control the mighty and majestic mainframe. If you want to automate capping to control cost, BMC offers a unique solution – Intelligent Capping (iCap).

 

Author: Hemanth Rama is a senior software engineer at BMC Software. He has 11+ years of working experience in IT. He holds 1 patent and has 2 pending patent applications. He works on the BMC Mainview for z/OS, CMF Monitor, and Sysprog Services product lines and has led several projects. More recently he has been working on the Intelligent Capping for zEnterprise (iCap) product, which optimizes MLC cost. He holds a master’s degree in computer science from Northern Illinois University. He writes regularly on LinkedIn Pulse, BMC Communities, and his personal blog: https://path2siliconvalley.wordpress.com/

 


Yes my friends. Innovation in mainframes is happening right here in Silicon Valley at BMC Software. While everyone is chasing the next big, disruptive unicorn like Uber or Airbnb, BMC engineers are quietly disrupting the mainframe industry with new customer-driven innovation.

Curiously, most non-mainframe technology professionals are unaware of opportunities to innovate on mainframe software.  They think mainframes are age-old dinosaurs that occupy entire rooms, use punch-card input, and require bits-and-bytes programming.  Things have changed a lot over the years.  The same mainframes, now the size of your refrigerator, run the most critical work of our top industries today. This includes large financial institutions like banks, insurance companies, healthcare organizations, utilities, government, military, and a multitude of other public and private enterprises.  Mainframes are a multi-billion dollar industry, and innovations there have huge positive impacts on the industry and the global economy. BMC engineers have done precisely that – created innovation that makes waves in the mainframe industry.

The Challenge

First, let’s take a look at one key challenge that the mainframe industry faces today. Many surveys have revealed that the biggest challenge for companies with mainframes today is increasing cost. Mainframe costs include several components, but IBM monthly license charge (MLC) costs stand out as the largest portion (see Figure 1). What’s more, MLC costs increase 4-7% annually. This would give any CFO a nightmare!


For decades, IBM mainframe customers have manually managed mainframe workload capping strategies, constantly shifting workload allocations to manage costs and meet business demand. But these techniques are manual, risky and error prone and often don’t provide desired results.

For starters, mainframe computers are large, multi-processor computing devices able to perform thousands of tasks every second. Work on mainframe computers is often measured in millions of service units (MSUs), a measure of the processor (CPU) capacity used to execute the work. Mainframe customers are often charged for the software that runs on a mainframe based on peak MSU usage, through a Monthly License Charge (MLC). To determine the MLC, the mainframe operating system generates monthly reports that record the customer’s system usage (in MSUs) during every hour of the previous month using a rolling average (e.g., a 4-hour rolling average) recorded by each LPAR or capacity group for the customer. The hourly usage metrics are then aggregated to derive the monthly hourly peak utilization for the customer, which is used to calculate the customer’s bill.

To control costs, staff might assign each LPAR or capacity group a consumption limit (Defined Capacity or Group Capacity Limit), so that it cannot use more MSUs than its respective limit allows. But this may result in some work not receiving the CPU resources it needs, in effect slowing down the execution and completion of that work. This may have very undesirable effects on important workloads. Since meeting the performance objectives of high-importance work is essential, customers tend to raise capacity limits to meet demand and avoid outages for their clients. But raising the capacity limit for as little as an hour can increase MLC costs substantially.

Today’s Solution

When this challenge was presented, five bright engineers from BMC Software (Hemanth Rama, Edward Williams, Phat Tran, Robert Perini and Steven DeGrange) proposed a dynamic solution that automates and reduces the MLC cost while mitigating the risk to critical business workloads.

The solution they proposed was so innovative that a patent was granted for their outstanding work.

DYNAMIC WORKLOAD CAPPING
PATENT ISSUANCE #9342372
GRANTED ON  MAY 17, 2016.
http://patft1.uspto.gov/netacgi/nph-Parser?patentnumber=9342372

Let me give you an overview of this innovation first before I explain the details for the inquisitive minds.

An overview of Dynamic workload capping architecture


The innovation solves the problem by dynamically changing LPAR Defined Capacity and Group Capacity Limit values, taking into account changes in workload importance. This is done by interacting with the Workload Manager (WLM) component of the operating system that runs on each LPAR to obtain the breakdown of MSU use by WLM service class, period, and importance class, then grouping by importance class and aggregating this information across the multiple operating LPARs and across the multiple sysplex groupings whose CPU capacity is being managed.

The result is a dynamic capping solution that reduces MLC costs while mitigating risk to critical business workloads.

Let’s walk through an example of how this innovation can make a big difference in saving cost while also mitigating risk.


In this example,

  • there are three LPARs – LPAR1, LPAR2 and LPAR3
  • the red line represents the Defined Capacity (DC) for each LPAR
  • the orange fill is (low) importance 5 workload and
  • the red-ish fill is (high) importance 1 workload.

Problem:

Each LPAR has its own static DC. LPAR1 has a lot of high-importance work and is being capped. But LPAR2 and LPAR3 have free capacity, are not capped, and are running a lot of low-importance work. The max 4HRA for this scenario is 811 MSUs.

Today, someone would have to watch these LPARs and manually adjust the DCs to allow the high-importance work to run, and make decisions about how to balance with other LPARs – 24×7 – impractical!

With innovation:

Now let’s bring in the patented solution to manage these same 3 LPARs with the same amount of work.  You can see that the configuration is monitored constantly and the DCs are no longer static straight lines but are adjusted dynamically to maximize capacity for important workloads. The result – no more capping of high-importance work! Capacity is automatically transferred from LPAR2 and LPAR3 to LPAR1 to allow the high-importance work to process, at the expense of some low-importance work on LPAR2 and LPAR3.

Not only is there no capping of high-importance work, but the overall MSU usage was lowered to 650 (from 811) by dynamically sharing capacity across the LPARs.
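
The spirit of that dynamic adjustment can be sketched in a few lines. The code below is only an illustrative approximation in Python with made-up numbers and a naive policy – it is not the patented algorithm – but it shows the core idea: defined capacity is moved away from LPARs whose capacity is going to low-importance work, toward an LPAR that is capping high-importance work, while the total across the policy stays constant:

```python
def rebalance_defined_capacity(lpars, step_msu=10):
    """One illustrative adjustment cycle: move DC away from LPARs with headroom or
    low-importance work, toward LPARs that are capped while running high-importance work.
    'lpars' maps a name to {"dc", "4hra", "hi_msu", "lo_msu"} (all values in MSUs)."""
    needers = [n for n, s in lpars.items() if s["4hra"] >= s["dc"] and s["hi_msu"] > 0]
    donors = [n for n, s in lpars.items()
              if n not in needers and (s["dc"] - s["4hra"] >= step_msu or s["lo_msu"] >= step_msu)]
    for needer in needers:
        for donor in donors:
            lpars[donor]["dc"] -= step_msu    # some low-importance work on the donor may slow down
            lpars[needer]["dc"] += step_msu   # the needer gains headroom for high-importance work
    return lpars

# Hypothetical snapshot, loosely mirroring the three-LPAR example above (values in MSUs).
state = {
    "LPAR1": {"dc": 300, "4hra": 300, "hi_msu": 250, "lo_msu": 50},
    "LPAR2": {"dc": 250, "4hra": 180, "hi_msu": 30,  "lo_msu": 150},
    "LPAR3": {"dc": 250, "4hra": 170, "hi_msu": 20,  "lo_msu": 150},
}
print(rebalance_defined_capacity(state))
# The total DC across the policy stays at 800 MSUs, but LPAR1 gains capacity
# at the expense of LPAR2 and LPAR3, so its high-importance work is no longer capped.
```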

This innovative solution led to a new product – BMC Intelligent Capping (iCap).

Intelligent Capping for zEnterprise (iCap) is a mainframe software solution that dynamically automates and optimizes defined capacity settings to lower IBM Monthly License Charge (MLC) costs by 2%-5% or more, saving customers millions of dollars while mitigating risk to the business. After analyzing CPU usage and WLM workloads, iCap automatically manages changes to defined capacity settings based on workload profiles, enabling customers to lower costs. BMC Intelligent Capping for zEnterprise removes the manual effort from managing capping limits, while optimizing capacity usage across LPARs or groups of LPARs. The solution dynamically aligns workload allocations based on utilization needs, workload importance, and customer policy profiles.

iCap architecture


For more information (including a short 2-minute video on how iCap works), please visit http://www.bmc.com/it-solutions/intelligent-capping-zenterprise.html

Now let’s see what industry and customers are saying about it.


There are many more customers who saw significant savings with iCap.

Stay tuned!  BMC engineers say there is more to come – disruptive innovation, that is!


Many customers are asking us what the difference is between Group Capping and BMC Intelligent Capping. In this blog, we'll make this clear.

 

The key difference between BMC Intelligent Capping and group capacity capping is their behavior when the MSULIMIT/Group Limit is reached.  If the limit is not being reached, you are not saving money, so what occurs at the limit is extremely important.

 

With Group Capping:

  • Capping decisions are based on LPAR weights, not workload importance.  This means high-importance work can be delayed, while low-importance work still runs.
  • LPARs make the decision to cap themselves based on their weight, and there is no consideration of what and how much is running on other LPARS. This leads to an inefficient use of capacity.
  • When capping occurs, no further LPAR weight changes are made until capping ends. This can take hours and result in high-importance work being delayed.

 

With BMC Intelligent Capping:

  • Capping decisions are made based on workload importance to protect critical work by ensuring that capacity is available in the right place at the right time.
  • LPARs in a policy are truly managed as a group, and decisions are made with full visibility to all LPARs, allowing it to make intelligent decisions about allocation of MSUs.
  • When capping occurs, Intelligent Capping continues to dynamically adjust the MSU allocations to best meet the needs of the LPARs and the high-importance work.

 

BMC Intelligent Capping is designed from the ground up to save monthly license charge (MLC) costs and reduce business risk.

 

The importance of importance

 

With group capping, as long as the LPARs are collectively below the Group Capacity Limit (GCL), it works reasonably well. However, when you begin to approach the group limit (GCL), things don’t work as expected – and that’s a big issue. If you don’t usually approach the limit, there’s no real purpose in using a group, because it is not saving money. In a well-configured system, the limit should be reached often – that’s how you save money.  This is where Intelligent Capping excels over group capping.

 

In a capacity group, when the GCL is reached, one or more of the LPARs will be capped. Which LPARs are capped will depend on the weights of the LPARs in the group.  LPARs in a group are allocated a share of the group’s limit based on their weight relative to other LPARs in the group. Capacity groups don’t care what is running on an LPAR (they are not importance-aware).  If it is using more than its share, an LPAR will be capped – even if it’s running all high-importance work! Meanwhile, you might have an LPAR running an unimportant batch job that continues to run.  This is not what you want to have happen.

 

Another problem with capacity groups: each LPAR in the group independently decides if it needs to cap itself. LPARs are not aware of other LPARs in a group and whether they are using all of their share or what they are running.  The decisions made are pretty limited and are not very efficient.

 

Some customers use IBM Intelligent Resource Director (IRD) to automatically adjust LPAR weights in response to WLM SLA goals to try and make Group Capacity more efficient, but this doesn’t help as much as you might think. IRD only manages weight adjustments between LPARs in a single Sysplex on a single CEC. This is referred to as a Cluster. So, if there are LPARs in the group that are not in the Sysplex, the effectiveness of IRD is reduced. Even worse, as soon as an LPAR in the group is capped, IRD will leave the cap alone or reset it to the initial value. In either case, no further weight changes will occur while it is capped, which might span many hours.  During that time, drastic changes in workload Importance can occur and will go un-managed.

 

The Road to Intelligent Capping

With Intelligent Capping, the LPARs in a policy are truly managed as a group and decisions are made with full visibility to all LPARs, allowing intelligent decisions about MSU allocations. Intelligent Capping monitors each LPAR to determine how much work is running, as well as the 4-hour rolling average, the importance of the workloads, and the overall MSU target for the policy.  Decisions are made based on workload importance, and LPARs with higher importance workloads will receive MSUs from those running lower importance work. This protects critical work by making sure that capacity is available in the right place at the right time, regardless of whether the LPARs are all in the same Sysplex or not.

 

As with Group Capacity, a well-configured system should reach the Intelligent Capping MSULIMIT, saving you the most money.  When Intelligent Capping caps LPARs, it continues to actively manage the LPARs and their workloads and continues to dynamically adjust the MSU allocations to best meet the needs of those LPARs and the high-importance work. This allows a lower MSU limit to be set, which reduces MLC costs. In contrast, lowering MSUs by reducing a group’s GCL means that the same percentage reduction will be applied evenly across all LPARs in the GCL. There is no awareness of workload importance, and it assumes that every LPAR can afford to donate an equal percentage of its MSUs. This will likely result in even more high-importance work being capped, and is therefore not an acceptable cost-savings option.

 

Intelligent Capping policies also enable you to provide advanced criteria to further maximize savings and reduce risk.  This includes the ability to consider the cost of MSUs – a patent pending technology from BMC Software.

 

In summary, Intelligent Capping is designed from the ground up to reduce your MLC costs and reduce risk.  This is not the case with WLM capacity groups.  For a test drive, download the BMC MLC Savings Estimator utility and enter your data to see how much Intelligent Capping can save you.

 

Content contributed by Paul Spicer, Lead Product Manager, BMC


BMC Software has announced two exciting new technology patents related to its most innovative and recent mainframe solution, Subsystem Optimizer for zEnterprise®.

 

One of the patents provides for remote access to IMS. Tony Lubrano is the sole inventor. The other provides for remote access to DB2, and salutes Tony Lubrano, Jim Dee, and Ray Cole as co-inventors.

 

Here are the details for these two extraordinary achievements:

  • Patent Issuance #9,218,226 (IDF - 12-041-US) – System and Methods for Remote Access to IMS Databases – Anthony Lubrano – Inventor
  • Patent Issuance #9,218,401 (IDF - 12-051-US) – Systems and Methods for Remote Access to DB2 Databases – James Dee; Anthony Lubrano; Ray Cole – Inventors

 

The U.S. Patent Office granted the two patents on December 22, 2015.

These patents lock in the unique intellectual property that BMC developed in crafting Subsystem Optimizer.

 

Further details follow:

 

Patent Issuance #9,218,226

Systems and methods are provided that allow client programs using IMS database access interfaces to access IMS database data available from IMS systems on remote logical partitions and remote zSeries mainframes rather than from a local IMS system. For example, a method may include intercepting an IMS request having a documented IMS request format from a client program executing on a source mainframe system. The method may also include selecting a destination mainframe system and sending a buffer including information from the request from the source mainframe system to the destination mainframe system and establishing, at the destination mainframe system, an IMS DRA connection with the IMS system from the request. The method may further include receiving a response from the IMS system, sending a buffer having information from the response from the destination mainframe system to the source mainframe system, and providing the information to the client program.

 

Patent Issuance #9,218,401

Systems and methods are provided that allow client programs using APIs for accessing local DB2 databases to access DB2 systems on remote logical partitions and remote zSeries mainframes rather than from a local DB2 system. For example, a method may include intercepting a DB2 request using a documented API for accessing local DB2 databases from a client program executing on a source mainframe system. The method may also include selecting a destination mainframe system and sending a buffer including information from the request from the source mainframe system to the destination mainframe system and establishing, at the destination mainframe system, a DB2 connection with the DB2 system from the request. The method may further include receiving a response from the DB2 system, sending a buffer having information from the response from the destination mainframe system to the source mainframe system, and providing the information to the client program.


For those who are not familiar with the mainframe rolling 4-hour average, this blog helps describe its importance.


Why is the Rolling 4-Hour Average (R4HA) important? Why should you and I care?  Developers, testers, system administrators, and other technical professionals are actually very affected by this important metric.


Take a look at the following figure.

Figure 1 – Hourly Peak MSU utilization, R4HA and Soft Capping on LPAR A


The blue columns are the peak utilization per hour, the orange line is the rolling 4-hour average for the last 4 hours, and the red line is the Defined Capacity or Soft Capping limit.


Irrespective of whether you are a DBA, System Programmer, Capacity Manager, or Developer, it is important to understand that the type of workload running, the code that you write, the tables that you support, and the system parameters that you set all contribute to the 4-hour rolling average.


Here’s the critical part: If you are able to lower this metric for your customer, you are directly helping that customer save money—perhaps significant sums of money.


IBM uses its Sub-Capacity pricing/licensing model to charge its customers on the basis of the peak rolling 4-hour average that a customer runs each month. This is also referred to as the Monthly License Charge, or MLC. You can read more about it here - https://communities.bmc.com/community/bmcdn/bmc_for_mainframes_and_middleware/mlc-software-cost-optimization/blog/2015/11/20/seven-high-potential-roi-strategies-for-reducing-mainframe-costs

 

Every Sub-capacity product is charged based on the peak 4-hour rolling average on the logical partition (LPAR) that it runs on.

 

Figure 2 – LPARs Running DB2 with their Hourly MSU peaks, R4HA and combined R4HA of both LPAR A and B

Figure 3 – LPARs Running IMS with their Hourly MSU peaks, R4HA and combined R4HA of both LPAR B and C

Figure 4 – LPARs Running CICS and z/OS with their Hourly MSU peaks, R4HA and combined R4HA of both LPAR A, B & C


Understanding the Complicated Math

In the first graph (Figure 2), the peak R4HA for LPARs A and B (which run DB2) is not 140 + 160; it is 293.75.

Similarly, for the LPARs running IMS, the peak R4HA is not 170 but 166.25, and for all 3 LPARs together it is 322.5, rather than simply the sum of the individual peaks on the 3 LPARs.

So, DB2 will be priced based on 293.75 MSUs, CICS will be charged at 322.5 MSUs, IMS will be charged at 166.25 MSUs, and z/OS will be charged at 322.5 MSUs. This is much more expensive than most think.
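
The reason the numbers don’t simply add up is that the billable figure is the peak of the hour-by-hour sum of the R4HAs, and each LPAR tends to peak at a different hour. Here is a small sketch (Python, with invented hourly figures rather than the ones in the charts above) that shows the difference:

```python
from collections import deque

def r4ha_series(hourly_msu):
    """Rolling 4-hour average after each hourly interval."""
    window, series = deque(maxlen=4), []
    for usage in hourly_msu:
        window.append(usage)
        series.append(sum(window) / len(window))
    return series

# Invented hourly MSUs for two LPARs whose individual peaks occur at different hours.
lpar_a = [100, 140, 160, 120, 80, 60]
lpar_b = [60, 80, 120, 160, 180, 140]

a, b = r4ha_series(lpar_a), r4ha_series(lpar_b)
combined = [x + y for x, y in zip(a, b)]

print(round(max(a), 2), round(max(b), 2))   # each LPAR's own R4HA peak
print(round(max(a) + max(b), 2))            # NOT the billable figure
print(round(max(combined), 2))              # billable peak: the combined R4HA, hour by hour
```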

 

All of the LPARs seem quiet between 1 AM and 4 AM, so from a developer’s perspective, running a poorly designed SQL application during this time is not much of a concern compared to running it when all 3 LPARs are peaking. In the above charts, that peak period starts at around 6 PM and runs until midnight.


Hence, it is important to understand whether the code that you are writing will run during the peak processing period or the lean period. This will not only push you to write better code, but it will also help the mainframe consume only those MSUs that are needed to process the workload and not one bit more.


Similarly, if you have a set of jobs that are currently scheduled to run during the peak processing period and consume a lot of MSUs themselves, you might consider moving these jobs to a different time. If such a set of jobs is moved by about 30 minutes or so, it can greatly reduce the peak R4HA. If you cannot move the jobs at all, then the current peak R4HA is justified.


Takeaway – Being R4HA-sensitive will always help you. Every MSU saved during the peak period can significantly reduce the cost of running the same workload, bringing down the overall cost of the mainframe.


The law of gravity says that what goes up must come down. Too bad that doesn’t readily seem to apply to mainframe Monthly License Charge (MLC) software costs. Fortunately, however, you can reduce MLC costs by identifying the cost reduction projects that have the best chance of success and leveraging mainframe cost optimization solutions. Meeting this financial challenge is critical because the costs for mainframe MLC products – like DB2, IMS, CICS, to name a few – are increasing annually by 4 to 7 percent. In fact, IBM has announced a 4% increase this coming January.


The biggest part of the mainframe budget – about 30 percent – is spent on MLC software. MLC cost increases can occur without any correlation to business revenue or usage. They can also be triggered by customer-facing web applications that cause unplanned peaks on your mainframe.

 

Here are seven strategies to help your organization get started with mainframe MLC cost reductions and identify projects with the most potential for optimal ROI.

1. Understand Sub-Capacity Pricing
There’s a big misconception about MLC pricing that can impact cost reduction efforts. Did you know that MLC software is generally charged based on peak MSU usage for the month, and not on the full machine capacity? MLC software products are priced monthly according to a peak four-hour rolling average (4HRA) of resource usage measured in millions of service units (MSUs) of machine capacity. Yet IT funding is typically determined by the amount of resources consumed, which is measured in CPU seconds.

So, the most effective approach is to find ways to manage MLC costs and reduce them. But how?

 

You can adopt chargeback and tuning strategies tied to the impact of MLC costs and identify and focus on projects that impact the 4HRA. Modeling technology can provide this insight to help you save while also avoiding unnecessary changes to applications.

 

2. Use a Cost Analyzer Solution to Provide Insight and Identify Projects with the Highest ROI

Be sure to model proposed changes to the peak 4HRA to determine whether they will have a positive impact and to predict the amount of this impact on MLC costs. BMC Cost Analyzer can provide this information and help you to determine which projects offer the highest returns. You can model the next peak, which can prevent investing time and money for changes that may not turn out to be worthwhile.

 

3. Tune for Profitability

If your tuning is only focused on reducing CPU usage, you may not get the results you want because not all CPU seconds or all MSUs are equally expensive. Instead, identify the workloads running in the 4HRA billing peak and determine if they can be reduced, slowed, or scheduled for another time.

 

4. Examine your Chargeback Systems
The rules governing software license charges and the proportion of IT expenses devoted to these charges have both changed, and you need to adapt your chargeback systems accordingly. If your chargeback system is based on a fixed-cost CPU second, for example, you may attempt to reduce peak usage during online windows to delay hardware upgrades. However, these actions may not be aligned with the timeframe for reducing MLC costs.

 

5. Do Your Homework before Signing New Agreements
Do everything possible to lower your current peak 4HRA before signing a renewal agreement for your MLC software. You need to implement these reductions before your negotiations begin.

6. Identify Workloads Running in the 4HRA Billing Peak and Make Adjustments When Needed

If workload resource consumption is not driving hardware upgrades or the peak 4HRA, then reducing consumption won’t save money. However, if you tune the work that impacts the peak 4HRA, the bill is lowered and MLC costs are reduced. This action can meet the budget needs of the business units you support and even lead to increases in IT funding.

 

7. Avoid Risks by Making Sure that You’re Tuning the Right Workloads
Sometimes tuning efforts can backfire. If you eliminate a bottleneck in the critical path of a batch stream that causes significantly more work to be scheduled sooner, this action could contribute to the 4HRA. If the workload is now running much faster, IT can use capping or other scheduling methods to slow the processing. However, you still need to have the ability to view the impact to see trend data for peaks and the workloads that contribute to them. BMC Cost Analyzer can model changes that occur from capping and identify any cost reduction.

 

You can eliminate budget surprises and have the funds you need for new initiatives by actively managing mainframe MLC costs. Read this white paper, Deliver Better Results for Mainframe Cost Reduction Initiatives, to help identify projects with the most potential to deliver optimal ROI.

 

Tom Vogel

Small Mainframe, Big Savings

Posted by Tom Vogel, Nov 13, 2015

You may have seen Tide’s “Small but Powerful” laundry detergent commercials. Tide says there’s a lot of power concentrated in smaller football players like Darren Sproles and Cole Beasley, as well as their own products. BMC has found that, despite what people might think, the same holds true for companies that run smaller mainframe environments.

 

If you’re running a small to medium-sized mainframe environment (less than 200 MSUs at monthly peak), you’re still paying IBM a significant amount for your peak monthly mainframe usage in the form of Monthly License Charges (MLC) on software like DB2, IMS, CICS, MQ, and z/OS. Surprisingly, experience indicates that these small and medium-sized mainframe shops have a huge potential for savings.

 

For these rapidly growing businesses, there are hidden costs that can be removed. These companies actually pay more for the same mainframe usage than larger mainframe shops, which exploit the higher MSU pricing discounts for bulk usage. So, although your mainframe environment is smaller, you can still utilize mainframe cost optimization products to help you save just as much or more than larger mainframe shops--sometimes up to 20 percent or more off your current MLC bill.

 

Even if your company is trying to move away from the mainframe, this transition is probably not occurring overnight. In the meantime, the cost of your MLC software is rising each year, driving higher expense for your business. In fact, IBM is raising the price on several MLC products this coming January, by 4%. You’ll want to do something about that now.

 

BMC clients whose peak mainframe usage is as small as 7 MSUs are experiencing savings using BMC MLC Cost Management solutions. One company that runs a peak of 50 MSUs is going to save nearly $250,000 over the next year; another has achieved 5-7% peak savings, eliminated 100% of its annual true-up expense, and seen a 9% reduction in its total MLC costs in just the first six months.

 

BMC is committed to partnering with mainframe users of all sizes to make sure that they’re saving as much as possible on the mainframe. And businesses of all sizes need to optimize their mainframes to make sure they thrive in the digital age.

 

To learn more, see the BMC Mainframe Solutions for Mid-Market information page.

 



The number one thing holding back any growing business is obtaining adequate resources to actually fund growth. Chances are, you’re looking for any way to free up funds. Well, as it turns out, there’s extra money hidden in your mainframe.

 

You pay a lot just for using your mainframe every month, but you don’t need to. Thirty percent of the total cost of your mainframe is in the mainframe Monthly License Charges (MLC). These charges aren’t based on your total mainframe usage but on a rolling four-hour peak and, since you’re paying indefinitely, it costs you more than any other part of your mainframe budget--as high as half of the total, from what we hear from customers. With BMC solutions, you can lower the peaks and drive down your MLC by ten to thirty percent - today.

BMC is the only company innovating with new mainframe management solutions - products like BMC Cost Analyzer and BMC Intelligent Capping. These affordable solutions can help you identify exactly what is causing your mainframe processing peaks, identify the impact of moving workloads or tuning applications, and help you dynamically and automatically manage peaks through capping.

As an example, assume that at your peak you’re running 120 Million Service Units (MSUs) of processing. This would mean you are spending approximately $100,000 per month on MLC alone. But using Cost Analyzer, you can identify which processes are driving that peak and then use Intelligent Capping to bring the 120 MSUs down to, say, 100, giving you nearly 17 percent savings. Further tuning and moving of applications and workloads could save you another 10 MSUs, down to 90.

That’s a 25 percent savings on your MLC costs, and at around $25,000 each and every month, that's a huge amount that you can put to work on your next critical growth initiative. If you’re interested in saving that kind of money on a monthly basis, contact BMC for more information. Also check out the web page for MLC cost management solutions and be sure to watch this video. Go get that gold!
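
The arithmetic behind those percentages is simple enough to sketch (Python); the dollar figures are the approximate ones quoted above, and a linear cost-per-MSU is an assumption – actual sub-capacity pricing is tiered:

```python
# Hypothetical cost basis taken from the example above: a 120-MSU peak costing roughly
# $100,000 per month in MLC. Real sub-capacity pricing is tiered, so treating cost as
# linear in peak MSUs is a simplification.
BASELINE_PEAK_MSU = 120
BASELINE_MONTHLY_COST = 100_000
COST_PER_MSU = BASELINE_MONTHLY_COST / BASELINE_PEAK_MSU   # ~ $833 per MSU per month

def monthly_savings(new_peak_msu):
    """Approximate monthly saving (dollars) and percentage for a lower peak."""
    saving = (BASELINE_PEAK_MSU - new_peak_msu) * COST_PER_MSU
    return round(saving), round(100 * (1 - new_peak_msu / BASELINE_PEAK_MSU))

print(monthly_savings(100))   # capping the peak at 100 MSUs  -> (~$16,667, 17%)
print(monthly_savings(90))    # tuning further down to 90 MSUs -> (~$25,000, 25%)
```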


Many companies are understandably cautious about mainframe workload capping. Most capping solutions are difficult to use. They tend to be manual facilities that, even if successful, present a serious risk of service interruption. Improper capping and delaying important application requests, like credit card transactions, is never good for business. Fortunately, BMC’s innovative Intelligent Capping solution avoids these traditional issues. It can be very suitable for your business today, particularly if you run a small to medium mainframe environment.

 

BMC Intelligent Capping is aware of workload importance, which means it won’t slow down your business-critical applications. It also uses patent-pending cost awareness technology, meaning it will focus only on capping those workloads that are actually driving up your mainframe monthly license charges (MLC). Your MLC costs are based on the peak 4-hour rolling average of mainframe usage, and Intelligent Capping helps you to target those applications that increase your peaks. You can even model your savings ahead of time (using BMC's Cost Analyzer), so you know the savings and business impact of your capping plan before you implement it.

 

Even if you’ve already committed to paying an annual fixed amount for your mainframe MLC and aren’t concerned about reducing your costs, Intelligent Capping can help you manage your workloads to make sure all of your applications run in a cost-effective manner and avoid business service disruptions. And if you’re looking to upgrade or expand your mainframe because your business is growing, Intelligent Capping can help you run more workloads and squeeze more out of your existing mainframe, giving you the ability to defer expensive capacity upgrades and save money.

 

Whether you’ve been burned by other capping solutions you’ve tried or thought that capping wasn’t right for your business, Intelligent Capping offers you real savings and optimization potential. Read more about Intelligent Capping, or contact your BMC representative to find out how Intelligent Capping can "suit" your business. 

 


Here’s a common challenge for many IT organizations. There’s a rush to invest in technologies that support the growth in digital engagement. This need is often driven by consumers who increasing rely on their phones to conduct all types of transactions such as depositing checks, looking at balances and statements, making purchases, and managing their portfolios.

 

The mainframe represents such a significant part of the IT budget that many organizations explore cutting mainframe costs to help fund business transformation. Keep in mind, however, that mainframes play a critical role as the back end for most digital environments. So, can you really have it all – fund transformative digital projects and have the mainframe enable digital business success? Yes, you can – by lowering your mainframe Monthly License Charge (MLC) costs, which can represent 30 percent or more of the overall mainframe budget.

 

Think about it. What if you could cut MLC costs by 20 percent and save thousands or even millions each year, without impacting service delivery? While the capability to reduce MLC costs to this extent and meet budget and service delivery needs might not have been possible a few years ago, recent innovations and new automation have made this option a reality.  You can now meet changing data demands and instantaneous performance expectations, and deal with explosive data volume growth. How? The key is to think differently about how to manage MLC costs.


Get Your House and MLC Costs in Order
Analyze and Identify contributors to costs
In many ways, cutting MLC is like lowering the costs to maintain your house. If you want to reduce your annual household expenses to save money or invest in other purchases, you first have to analyze and identify what’s contributing to those costs. Is your gas and electric bill too high? Is your sprinkler system inefficient and increasing your water bill? Are you paying too much for subscription services?

 

When you apply this same analytical approach to evaluating mainframe expenses, consider using cost analyzer technology to understand on a daily basis what’s driving MLC costs. This approach can provide a granular view of batch jobs and workloads to manage them more efficiently. You can also model changes to see how they impact your MLC costs.


Improve efficiencies
When you think of reducing household costs, explore new ways to take action and improve resource efficiencies. For a house, this could include purchasing energy-efficient appliances. With MLC, this involves using the most efficient system management solutions to reduce the ongoing overhead associated with the mainframe. For example, you can enable CICS, IMS, and DB2 to talk to each other without running on the same logical partition (LPAR). This separation can lead to savings without even changing the application.

Explore capping techniques
For your home, you also might also reduce expenses by setting limits to optimize when certain resources are used. For example, you can set your sprinklers to work only during times that are most efficient and have sensors that detect when they are not needed. With the mainframe, you can explore workload capping techniques to optimize capacity and reduce the peak 4-hour rolling average that drives the monthly bill, without risking performance. You can do this by setting up an automated capping strategy to monitor workload activity.  The solution can adjust caps based on your own workload importance policy to ensure that critical workloads are not delayed. With intelligent capping, there will be no increase to MLC costs due to balancing capacity limits across systems.


Exploit new options for optimizing licenses
As you continue to look at your household expenses, do you have subscriptions for home-related services that you rarely use? Maybe it’s time to cut them out of your budget, consider less-expensive options or use them more efficiently.

With MLC costs, it’s also important to explore new options for optimizing subsystem placement. This can include isolating or reducing subsystem licenses on an LPAR (logical partition). This approach gives you the control and flexibility you need to manage peak Million Service Units (MSUs) for maximum processing and minimum cost. You can balance workloads and control peak usage by selecting alternate systems on which to run the workloads, without modifying the applications. In addition, you can redirect workloads when a subsystem fails, ensuring that the business doesn’t miss a step.


Transform Your Business
MLC costs, like household expenses, can take a big chunk out of your IT budget and make it more challenging to pay for investments in digital engagement--unless you get these costs under control.

 

Watch a replay of this recorded webinar hosted by IBM® Systems Magazine on October 6, 2015 at noon CT with BMC mainframe experts Neil Blagrave and Nick Pachnos. Learn how companies worldwide are reducing MLC costs and transforming mainframe management practices to meet the changing data, infrastructure, and cost challenges for business success. 


Lowering that peak rolling 4-hour average (or as BMC calls it, R4) just became a lot easier. On September 8th, BMC announced V2.0 of each of its MLC-busting products, as a comprehensive suite named BMC Mainframe MLC Cost Management.

 

You might already be familiar with the products – Cost Analyzer, Intelligent Capping, and Subsystem Optimizer. You also probably know that moving the needle lower on MLC can be difficult – but there are answers today. In the IDC Analyst Connection piece “Controlling Mainframe Monthly Licensed Software Costs: What's the Best Approach?” Tim Grieser explains that “Monthly licensed software costs can run as high as one-third of total mainframe operational costs.” That’s a big chunk, but you can take action today with options that you didn’t have just a couple of years ago.

 

Here’s what you want to ultimately achieve:

  • Understand your cost drivers and mechanisms
  • Identify/validate cost-reducing actions
  • Manage MLC budgets (present state/future state/options)
  • Monetize the use of defined capacity (capping)
  • Optimize subsystem license costs

 

Big news with BMC Mainframe MLC Cost Management – now you can cover all these bases with a swing of the bat. The suite brings together a combination of techniques for saving big money on MLC. Really, the key point here is that you now have the flexibility to choose how you want to save. Here’s more information:


Cost Analyzer 2.0

Using Cost Analyzer, companies are achieving some amazing benefits today. Mainframe IT and management can now understand, on a daily basis, what drives their MLC costs. In V2.0, Cost Analyzer has added deeper insight into specific batch jobs and workloads, providing a more granular view of these cost drivers.

 

So now you can track and manage your annual MLC budget like never before. You can also model changes you might be planning to your MLC software – in some cases, projects that were thought to bring MLC costs down turn out to be cost increases. Why spend the cycles on a project that won’t save you money?

 

Finally, we talk with many customers who say their peak 4-hour rolling average does not occur during batch processing. After using Cost Analyzer, some of them realize that this is not the case. Batch workloads are often the key driver of monthly peak. So you can see how Cost Analyzer can shed some needed light on key drivers of peak.

 

 

Intelligent Capping 2.0

Some organizations are using defined capacity (soft) capping to limit MLC costs.  However, many sites are still incurring higher costs due to a lack of confidence in their ability to set effective caps without impacting business work. BMC Intelligent Capping enables you to set your own policy-driven, automated capping strategy that monitors your workload activity and dynamically adjusts your caps to ensure your most critical workloads are not delayed.

 

In V2.0, Intelligent Capping includes patent-pending technology that makes it “cost-aware”, meaning that no increase to MLC costs will result from balancing capacity limits across systems. So now you have a great capping solution – automated, dynamic, flexible, safe, and - most of all - cost aware.

 

Subsystem Optimizer 2.0

Optimizing subsystem placement can have a very significant impact on MLC costs. By enabling the most expensive MLC products, like CICS, IMS, and DB2 to talk to each other without running on the same logical partition, your MLC costs can drop significantly. BMC is working with many customers today on this – and light bulbs are flashing – “Why didn’t we do this sooner” some say. The high returns are driving a lot of interest in this solution.

 

Here’s why. You probably already know about the technical restriction that exists for MLC software. That is, when a transaction request comes in, say for DB2 via CICS, both these subsystems must reside on the same logical partition. BMC Subsystem Optimizer can provide new flexibility by enabling you to place them on separate LPARs and still communicate with each other. The amazing part is that this separation often changes the MLC math and quite often can lead to significant savings – and this happens with no application changes.

 

The solution already supported CICS-to-DB2 and CICS-to-IMS DB. In V2.0, IMS TM-to-DB2, a very common transaction system combination, is available.

 

One thing to keep in mind about Subsystem Optimizer – it provides an important failover capability in case a subsystem temporarily goes down. If a request comes in, and if the DB2 or IMS subsystem is down, you normally need to wait until that subsystem is restarted before the transaction can complete. With Subsystem Optimizer, you can redirect that request immediately to another running subsystem on a different LPAR. This is a big advantage that you might not have today.

 


Learn more about BMC MLC Cost Management in the 2-minute video, or check out the web page. Also be sure to check out the helpful white paper “5 Levers for Lowering Mainframe MLC Costs”, which provides tips on how to go about lowering MLC costs.

 


Cost Analyzer Release 1.2 extends the benefits of MLC cost management to executives.  Management dashboards provide a transparent and insightful view of MLC costs versus budget, along with a historical perspective that can be used to manage down the cost of MLC software.

 

A new software contract reporting tool revolutionizes cost utilization reporting.  This new tool in Cost Analyzer provides a comprehensive look at your cost data from the perspective of both your actual usage and your projected spending for the entire duration of the contract.  Both historical and projected monthly MLC data can also be compared against budgeted allocations.

 

The Software Contract Summary Report is a dashboard that highlights:

  • Current and projected contract cost summary
  • Average MSUs used over time
  • MLC costs by billing month, both historical and projected
  • Cost variance from budget by billing month, for historical and projected months

      

Active linkages enable drilling down from the dashboard reports to see primary cost drivers, such as LPARs, MLC products and workloads.  There is also an MLC cost management effectiveness metric for tracking how you are doing in optimizing your MLC spend.

 

More enhancement details are available in the release notes, and in a quick course video on the management reporting dashboard.


Because MLC costs are determined by the peak rolling 4-hour average (R4HA), and peak cycles are typically regular and known, it is tempting to conclude that ongoing MLC examination and reporting would have little value.  So MLC management may be mistakenly viewed as a once-a-month, or even once-a-year, activity.  But there are situations where regular use of Cost Analyzer will lead to a proactive effort that spots and mitigates MLC cost issues before they create problems.

 

Five ways to provide value through regular cost management reporting:

  1. If the peak R4HA spikes unexpectedly, it may signal unusual activity that would qualify for exclusion from the SCRT report and the subsequent bill.  Any EXCLUDE requests must be made when the SCRT report is submitted, so it is vital to be regularly viewing peak R4HA activity to spot such an occurrence.  If such an excludable event occurs and is identified, this type of effort will result in cost savings the next month.
  2. Unexpected activity that does not qualify for an EXCLUDE exception could drive up peak usage and impact budget.  With regular reporting you will spot, and be in a position to diagnose and treat, the activity to mitigate the cost and budget impacts.
  3. Regular reporting which includes the new MLC Cost Efficiency Index enables you to gauge the effectiveness of MLC cost reduction efforts – and report to management how well you have been doing.
  4. The new management dashboard delivered in CAZE 1.2 forms the basis of regular reporting that IT management will find particularly interesting, including actual versus budget variance reports, and historical views of MLC costs and cost drivers.
  5. Regularly reporting the projected future status of MLC costs and budgets will let you proactively initiate actions to reduce costs and pull cost back in line with budgets.

         

Check out the new capabilities in Cost Analyzer 1.2, and view the new video Quick Course on it.


Capacity planners have been plying their trade for mainframes for so long, and so well, that it may seem like there is nothing new to be added.  But changing IT imperatives, driven by cost optimization, are opening up new opportunities to add a fresh set of activities to capacity planning, and to increase the value of the work capacity planning professionals do for their organizations.  ITIL descriptions of the capacity management process frequently note the need for cost information as part of the process.  MLC software charges are a great example of cost information which, if integrated into capacity plans, could substantially help IT and raise the value of what capacity planning brings to the table. In many cases, the roadblock that constrains such progress is the complexity of the MLC cost data.  Overcome this barrier, and capacity planners can deliver planning information not only for MIPS and utilization, but also for the cost impacts of capacity changes.

 

Here is some help for capacity planners who want to elevate their game. Read the white paper  “MLC Software Costs – A New Opportunity for Capacity Planners” to learn how to get MLC cost information into your planning process.  

http://www.bmc.com/forms/MCO-R4-CAZE-CapacityPlannerWP-Email-Q2FY15.html?em126488951913ew&Email_Source=MoM


(With apologies to the poet...)

Reducing MLC software costs is the number one priority for mainframe users who responded to the BMC 2014 Annual Mainframe Survey.  Over 1,100 respondents said lowering their MLC costs is the top priority for the next year.   As the single largest mainframe budget item, MLC costs are typically 30% to 40% of the total mainframe budget.  This can be an opportunity for IT to reduce total mainframe costs, but reducing MLC costs is challenging.  Some may not believe it is even possible.  The fact is, most mainframe sites can reduce MLC costs by as much as 20% or more.

 

There are five actions IT can take to manage down MLC costs:

  1. Understand and report on the MLC cost base
  2. Model the cost impact of cost-reducing alternatives
  3. Maximize savings with capping
  4. Optimize subsystem placement
  5. Tune workloads running during peak

         

For a detailed description of the five levers, along with examples of what can be achieved, read the white paper “Five Levers to Lower Mainframe MLC Costs While Mitigating Risk” here: http://www.bmc.com/forms/MCO-R4-LowerMFMLCCostswhileMitigatingRisk-WP.html?CID=em315920663303ew&Email_Source=MoM
