April 01, 2011
If you take a look back at the commentary that began when the possibilities of clouds were just becoming clear, one of the first alarm bells sounded was the question of what this would mean for mainframes. While there is still no telling what the future holds for the data center, some organizations are trying to put their finger on the pulse of computing to see what IT managers are planning.
AFCOM, an association for data center management professionals, released a report this week entitled “The State of the Data Center” to better understand how data centers are adapting to a number of changes in their industry, including the growing rates of cloud adoption.
In addition to providing some insights about disaster recovery, space, energy and security, the report, which is based on survey results from 358 data center managers, concluded that there are threats on the horizon for the trusty mainframe. While the mainframe isn't likely to go down without a long fight, and for some uses may never be displaced at all, those in the mainframe business might find work a little harder to come by in the next several years if AFCOM's crystal ball is correct.
We asked Jill Yaoz, CEO of AFCOM, how the cloud computing movement is shaping this shift away from mainframes, and to what extent this is really happening versus merely being noted as a possibility. Based on their results, she says that "last year only 14.9 percent of data centers had implemented the technology but today that percentage has grown to 36.6 percent, with another 35.1 percent seriously considering it."
As the AFCOM report indicates, "While historically one of the most critical elements of any data center, today mainframe usage continues to shrink. While we predict mainframes will exist forever in some capacity, their prevalence has been severely diminished."
In the organization's view, "cloud computing will continue on this trajectory for the next five years, with 80 to 90 percent of all data centers adopting some form of the cloud during that period."
In some cases, cloud computing is replacing the mainframe because of price concerns. As Yaoz stated, "companies are starting to move certain applications off the mainframe and onto servers, especially because of server virtualization that can save companies significant money."
She notes, however, that there are "other applications that absolutely require the capability of a mainframe and its high level of processing and computing power. So in that regard, cloud computing is not affecting the decline of mainframe usage because the applications that run on the cloud are more server-based."
In her opinion, in order to move high performance computing applications to the cloud, "the cloud provider would have to have a mainframe with that level of processing power, which is not really possible to do effectively or efficiently."
The AFCOM figures differ from a report released by CA Technologies last year, which suggested that 79 percent of IT organizations considered mainframes to be a key part of their cloud computing strategy. In that survey, 82 percent of respondents said they planned to use their mainframe as much as, or more than, they currently do.
In the CA survey, 55 percent of respondents said they kept mission-critical systems on the mainframe for reliability reasons. Additionally, just under half of those surveyed felt that staying on the legacy product was the most cost-effective option. Remember, however, that this survey was published by CA Technologies, which only a couple of years earlier had set forth a major push for its Mainframe 2.0 strategy to modernize mainframes.
The debate about mainframes and the role of cloud computing extends to questions about what the real difference is and what makes them attractive. Many of those in the mainframe game might contend that there is nothing new about clouds, and really nothing that clouds are capable of that mainframes can't do.
Jon Toigo, CEO of Toigo Partners International, a mainframe consulting company, told Computerworld this week that "a mainframe is a cloud" because it's "allocated and de-allocated on demand and made available within a company with security and management controls…all of that already exists in a mainframe."
However, this brings us back to the question of definitions. If we consider cloud computing's value proposition to lie in dynamic self-service provisioning and easy scaling on and off at the end user's whim, then mainframes really don't have the advantage, at least for users who can make good, quick use of those resources for their particular applications.
Most mainframe systems are kept behind lock and key, with dedicated guardians keeping track of their operations. While self-provisioning is certainly possible with some custom tweaks, it is not something that generally happens.
While some companies are still pushing their mainframe strategies forward to include cloud computing (IBM and its zEnterprise, for instance, which allows for a "hybrid" approach to mainframes and can be configured via Tivoli to allow user self-provisioning), there could be other barriers that go beyond hardware or software functionality.
For instance, the mainframe (and computing in general, until recently) has carried licensing costs that are bound to the physical hardware for the duration. Additionally, costs under distributed software licensing models can be very high, especially for companies whose IT policy is to provision enough capacity to meet peak needs rather than to scale dynamically based on actual demand.
The release of the CA survey caused a stir, reawakening the debate about mainframe health, just as the AFCOM survey has done this week. Surveys like these tend to put folks on both sides on edge and invigorate fresh questions about true capabilities.