A solution architect I met a couple of weeks back told me he didn’t know that mainframes still existed—which made me question the wisdom of writing a blog post about mainframe usage measures. Nevertheless, if we’re going to talk about modernization, we can’t avoid a discussion of MIPS, so here goes.
This illustration references an old tale in which an elephant is described by four blind men. Here, one guy thinks it is made of expensive ivory, symbolizing high license and maintenance fees—total cost of ownership (TCO). The second, touching the elephant’s legs, thinks it’s a big pillar—but not as big as big data. The third person wonders whether the elephant is chained to financial companies (or the other way round), since most financial companies are tied to legacy technology. The guy on the far right thinks the tail may belong to an extinct animal, symbolizing outdated technology.
Mainframe usage fees remain a significant part of many companies’ TCO, linked as they are to any application workloads deployed on the mainframe. Mainframe use costs typically involve two measurements: MIPS and MSUs. Although MIPS (millions of instructions per second) are often used as a rule of thumb for cost estimation, the actual measure is expressed as MSUs (million service units). These two measures constitute important elements for mainframe usage pricing models.
Today, businesses have become so global that meeting the demand for high-performing, low-cost computing services is a real challenge. The good news is that there are still opportunities to achieve significant savings. To use a phrase from cricket, a run saved is a run scored. Since CPU usage is a major component of mainframe costs, it makes sense to explore the role of MIPS.
MIPS is generally considered a measure of computing performance. For large servers and mainframes, MIPS is also a way to measure the cost of computing, as it directly translates into how much it costs to run a transaction.
Mainframe capacity planners use CPU hours, MIPS, and MSUs as indicators for plotting future growth, and reducing MIPS is a priority for companies with mainframes that are looking for ways to save on IT. Unfortunately, there is no simple conversion between MIPS and CPU time: the two measurements are related, but no straightforward algorithm links them.
Since MIPS is a measure of speed, getting a time measure in seconds would also require the number of instructions executed. A common rule of thumb is 1 MSU ≈ 6 MIPS (here both MSU and MIPS represent rates of consumption, not accumulated consumption). By that rule, a job that consistently uses 10 MSUs, as displayed by monitors, consumes roughly 60 MIPS.
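The rule of thumb above can be sketched in a few lines of Python. Note that the 6-MIPS-per-MSU ratio is an assumed heuristic for rough estimation only; the actual relationship varies by machine model and workload, which is exactly why MSUs, not MIPS, appear on the bill.

```python
# Rough MSU <-> MIPS estimation using the 1 MSU ~ 6 MIPS rule of thumb.
# This ratio is an assumption for illustration, not an exact conversion.

MIPS_PER_MSU = 6  # assumed rule-of-thumb ratio


def msu_to_mips(msu: float) -> float:
    """Estimate MIPS consumption from an MSU reading."""
    return msu * MIPS_PER_MSU


def mips_to_msu(mips: float) -> float:
    """Estimate MSU consumption from a MIPS figure."""
    return mips / MIPS_PER_MSU


# The example from the text: a job consistently showing 10 MSUs on a monitor.
print(msu_to_mips(10))  # 60
```

Treat the result as a planning estimate: capacity planners would still validate it against actual SMF/RMF measurements before making sizing decisions.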
Because migrating legacy applications to another environment or technology is always expensive, reducing MIPS usage can become extremely important in any modernization plan. A deeper dive into the world of MIPS, along with a few examples, will follow in my next post.
Post Date: 01.02.2016