Over the past few weeks, our cloud strategists have had a lively email discussion around mainframe and the cloud. We have compiled the discussion in digest form for this blog post, where we take on some common questions and misconceptions around cloud strategy for mainframe.
Jay Keyes (Vice President, Cloud Advisory Services - Practice Lead):
Many of the companies we consult with on cloud strategy are running business-critical workloads on mainframe. How would you respond to these questions from our recent discussions with different CIOs:
- Is there a clear path to the cloud for mainframe?
- Why, in early 2016, does the multi-billion dollar cloud computing industry still not have viable and affordable mainframe IaaS solutions?
- Do organizations need to modernize their mainframe applications to move them to the cloud?
Nathan Aeder (Associate Director, Cloud Advisory Services - Cloud Infrastructure Strategist):
I think this comes down to the economics of hosting. Cloud hosting margins for public or private cloud are typically razor thin, and the acquisition cost of mainframes is orders of magnitude higher than that of the commodity hardware used for VMware, OpenStack, or Hyper-V, which powers most clouds. The price of one mainframe is high enough, but consider that in the cloud provider space you'd be expected to have at least two mainframes in production -- to minimize downtime for maintenance and provide some level of redundancy -- PLUS another mainframe to support disaster recovery. You're looking at millions of dollars invested, and the hardware isn't all a provider would need.
Cloud providers (specifically the hyper-scalers like AWS, Google, and Microsoft) are always looking at a more consistent and shorter hardware refresh cycle than typical organizations -- it is Google's stated objective to beat Moore's Law. That would be impossible to achieve on mainframe without significant cost. Then there's the fact that the number of IT professionals who are proficient in mainframes is low, so by simple supply and demand they come at a hefty price. And I haven't even gotten to the cost of migration yet. Compare all this to the typical mainframe organization: it probably has one mainframe, which may or may not be in support, outsources its DR at a high RPO/RTO, and relies on someone self-taught who possesses historical knowledge of the mainframe but probably isn't considered an Advanced Mainframe Engineer. That organization's comparative cost is probably much lower.
I would love to believe that mainframe as a service is viable, but I'm not sure it is. Add to that the fact that I don't think mainframes could be considered cloud by common standards: to be considered a cloud you typically have to support self-provisioning and on-demand scaling, neither of which is built into these mainframe systems.
CJ Kadakia (Director, Cloud Advisory Services - Cloud Application Strategist):
We also need to ask ourselves if IaaS as a cloud computing model makes sense for mainframe. As Nathan pointed out in a recent blog post, what we currently think of as IaaS is becoming legacy technology.
As a former software exec, I see application modernization as the right solution at least some of the time. Of course, the fact that so much COBOL code is several decades old is evidence that some companies don't see the value in modernization. Policy administration systems in insurance, for example, carry such heavy customization for older policies that rebuilding them into a modern system would take a rather significant effort -- and it is difficult to get an executive to come around on the benefit of thousands of hours of development to support fewer than 100 policies, then another few hundred hours of coding to support 10 more policies, and so on. If there were viable commercial mainframe IaaS or PaaS solutions, they would (presumably) mitigate a great deal of risk at (presumably) much lower cost.
Furthermore, (as I don my tin foil hat) there are likely market forces that have made it inconvenient to migrate code. COBOL and RPG do not translate well to modern object-oriented frameworks to begin with. It's even more frustrating when you bring code into .NET or Java thinking, "surely there is enough demand for this that they would just add a built-in library/method/property to handle it." It turns out that oftentimes there isn't. One wonders why. If there were a simple and elegant option for replicating the feature sets of these legacy languages -- without needing to engineer elaborate programmatic solutions for functions that are simple to use in COBOL -- we'd see more mainframe apps modernized.
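To make the mismatch concrete: COBOL declares fixed-point decimal fields with PIC clauses and gets exact decimal arithmetic and rounding for free, while most modern languages default to binary floating point. The sketch below is a minimal illustration in Python using the standard decimal module; the helper name and the specific PIC layout are my own invented example, not part of any migration toolkit, but they show the kind of shim a ported application typically ends up carrying:

```python
from decimal import Decimal, ROUND_HALF_UP

def cobol_pic_s9_7_v99(value):
    """Emulate a hypothetical COBOL PIC S9(7)V99 field:
    fixed-point, two implied decimal places, half-up rounding,
    and at most seven integer digits."""
    d = Decimal(str(value)).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
    if abs(d) >= Decimal("10000000"):   # 9(7) allows at most 7 integer digits
        raise OverflowError("value exceeds PIC S9(7)V99")
    return d

# In COBOL, COMPUTE PREMIUM ROUNDED = RATE * UNITS is exact decimal math.
# Naive binary floats would give 0.1 * 3 == 0.30000000000000004 instead.
premium = cobol_pic_s9_7_v99(Decimal("0.1") * 3)
print(premium)  # 0.30
```

In COBOL the declaration alone buys all of this behavior; in Java or .NET the equivalent discipline has to be reimposed by hand (BigDecimal, explicit rounding modes, range checks) on every field, which is part of why migrations are so laborious.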
I'm not sure that modernization is always (or even usually) the right answer. There is a misconception that all mainframe technology is old and outdated, but it's not just a matter of keeping old mainframe technology alive. Early last year IBM released its new z13 mainframe and billed it as "a $1 billion investment, five years of development, exploits the innovation of more than 500 new patents," aimed at mobile transactions. I believe they indicated it has enough power to handle 100 Black Fridays' worth of transactions in a single day and has 3000% the on-processor memory of typical commodity servers, so these are REALLY powerful machines we are talking about. On top of that, demand remains heavy -- 71% of Fortune 500 companies are IBM System z customers.
Perhaps the question we should be asking is: how much demand truly exists across the industry for mainframe IaaS? Anecdotally we hear there is demand, but the companies that run mainframes typically have the staff and facilities to host them. I know I mentioned that most Fortune 500 companies are running mainframes, but to be honest we're talking about roughly 350 extremely large companies. With most SMB and even enterprise organizations not using mainframe technology, I'm not sure the demand is truly there, despite the lip service we get from CIOs on this topic. I think everyone agrees it would be amazing for more companies to be able to move to a mainframe managed service, but I just don't think the incentive is there for the providers or the larger organizations yet.
Rich Batka (Associate Director, Cloud Advisory Services - Cloud Infrastructure Strategist):
Pragmatic, measured modernization is the way to go, so pick your battles wisely. I believe modernization will occur at (1) the UI/interface/VDI/application level AND at (2) a (yet to be designed) self-service workload portal level.
The majority of alternatives are not practical when you consider that the typical mainframe OS consists of some 10 million lines of code, compared to the software in a new Boeing 777, which is about 2.6 million lines of code (some reports claim up to 4 million). Some things are too heavy to lift and replace.
That being said, we have a new generation of IT engineers and IT business executives banging on the door of the executive suite who hold zero allegiance to the heavy metal in the basement. Everything is up for grabs and everything is discard eligible (DE -- an old Frame Relay reference). Keep in mind these young executives carry a cell phone that could have replaced the Apollo Guidance Computer many times over.
What about the cloud? In order to be "cloud" optimized, the mainframe industry will need one or both of the advancements mentioned above to act as "force multipliers" and shake things up in an industry with a fifty-year-strong operational record.
Anything is possible as long as we put things in perspective first and proceed with quantifiable, factual data/information.
All good points. I think we can all agree that mainframe is not inherently a problem. It is a heritage system and will likely remain part of the landscape for many years to come. It is the lack of an elastic service-based model that we are mulling over.
We are also frequently asked for our opinions on products such as Micro Focus's Visual COBOL and the open source Hercules project, which allows you to run z/OS on x86 hardware. What are your takes on those products?
Visual COBOL perhaps provides an interim step for those who are looking to modernize their infrastructure. I'd compare it to the thinking organizations have around moving to colocation as a first step in their "cloud" journey. It's not really a cloud technology, but it may be an incremental step forward.
I think the key to any organization's move away from its existing mainframe is to understand the "why" of the move. Is it staffing expertise availability, hardware lifecycles, integration requirements, disaster recovery, or other reasons? In any case, the journey is important: how do they mitigate risk by minimizing the amount of change at any given point of the transition? I see two main migration paths for modernization. From an infrastructure standpoint, one option would be to move the mainframe hardware out of the data center and then focus on rewriting applications. From an application development standpoint, another option would be what Micro Focus has developed: moving to a more modern language helps get over the hurdle of COBOL applications without having to completely rewrite them or acquire new replacement applications. I think this is one of those times when your background can lead to some heated discussions about the correct approach.
When we talk about moving mainframe to the cloud, we should not limit the discussion to programming languages. Today, to communicate with a mainframe running z/OS you interface with TSO/E (Time Sharing Option Extensions) via ISPF (Interactive System Productivity Facility), so any intelligent effort to modernize the mainframe will need to interact with, or provide "superior" offerings compared to, ISPF. Period. That's part of the issue: ISPF IS the standard, and it has significant functionality built in.
Furthermore, modern mainframe logging is vastly superior to common firewall, syslog, and Windows logging. The modern mainframe typically looks to vendors to provide security management, performance, and database management software.
The newest homework assignment for the mainframe will be predicting behavior at transaction speeds with in-line, real-time analytics, and that activity will require a lot of horsepower locally or in the cloud.
Emulator-based solutions will require innovation at the OS (z/OS) level and at the application layer (running software). Think threaded code, more importantly think about how you will unit test that threaded code (but that’s a story for another time).
When it comes to workload management, the common implementation of Workload Manager (WLM) is mature, well designed, and expertly handles large workloads.
The reality is that mainframe manufacturers (i.e., IBM) have spent a lot of money to engineer software that, when it crashes, crashes only in its assigned address space. This ability to contain the "contagion" is not common across the client/server/virtualized industry, and it is one of the reasons mainframes have such a strong operational record.
The mainframe recently celebrated its 50th birthday. (Happy belated birthday!) The System/360, on which Fred Brooks based The Mythical Man-Month, has evolved into today's z/mainframe. We easily forget that it is still possible to run select System/360 applications on a modern z/mainframe implementation. This type of care and feeding is singular in the industry. Top executives will ask you a very simple question: are we ready to trust $23 billion worth of ATM transactions each year to a "new" application stack or an open source emulator?
Meanwhile, mainframe manufacturers are offering highly condensed blade extenders such as the zEnterprise BladeCenter Extension (zBX), which has given management confidence that more business units will bring important "workloads" to the mainframe group/department.
In summary, we have multiple elements in play here: a fifty-year mainframe history, a proven technical stack and application execution capability, and market disruptors looking at the wrong part of the data center/enterprise (the client/server/virtualization environment) to disrupt.
Visual COBOL sounds nice in theory, but we're seeing the COBOL talent pool continue to thin out, and that's going to create future supportability problems. It is difficult to believe that millennials and recent college grads are excited about picking up that skillset. As for Hercules, IBM would first need to allow z/OS licensing on those types of platforms, but if there is a commercially viable model in Hercules, one should assume a deal can be made.
I'm a bit leery of mainframe emulation in concept, even if the licensing could be worked out. First, running any kind of emulator burns CPU cycles and won't deliver optimum speeds. While this might not be an issue for x86-based systems running under hardware emulation, one of the main advantages of mainframes has been their ability to process millions of transactions at exceptional speeds. Second, while I think open source applications are great and in some cases underutilized, most mainframes are the lifeblood of their business. Can your business handle the emulated mainframe going down without a 1-800 number to call to resolve the issue? My thought: if your business can handle the reduced transaction speeds and the lack of enterprise-level support, perhaps this is a potential option. At the same time, if you don't have a compelling reason for a mainframe's MIPS or GIPS transaction speeds, then perhaps the cost and risk of running a mainframe should be re-evaluated, and modernization, private cloud hosting, or a Micro Focus-type solution is more appropriate.
Thank you all for addressing many of the misconceptions and much of the confusion around this topic. When we consult with our enterprise customers around cloud strategy, these are questions we almost always need to answer. As we've seen many times over, there aren't easy one-size-fits-all answers -- mainframe applications, workloads, business criticality, integrations, and organizational size/culture all factor into creating an appropriate cloud strategy for mainframe.
Post Date: 21.01.2016