
HPC, the Cloud, and Core Competency


What does HPC have to do with cloud computing? Well, given that HPC environments are constantly growing, consume large quantities of fairly generic compute resources, and have both peaks and valleys in workload profiles, it would seem that HPC would be the perfect candidate for cloud computing, if only we could get past the barriers to adoption.
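
To make the peaks-and-valleys argument concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it is a hypothetical assumption for illustration (core counts, utilization, and per-core-hour rates are made up, not data):

```python
# Hypothetical illustration: cost of owning peak capacity vs. paying per use.
# All figures are assumptions for the sake of the argument.

PEAK_CORES = 10_000               # cluster must be sized for the worst week
AVG_UTILIZATION = 0.40            # typical valley-heavy HPC duty cycle
OWNED_COST_PER_CORE_HOUR = 0.05   # amortized hardware + power + admin (assumed)
CLOUD_COST_PER_CORE_HOUR = 0.10   # on-demand premium per core-hour (assumed)

HOURS_PER_YEAR = 24 * 365

# Owning: you pay for every core-hour, whether it is busy or idle.
owned_cost = PEAK_CORES * HOURS_PER_YEAR * OWNED_COST_PER_CORE_HOUR

# Cloud: you pay a higher rate, but only for the core-hours actually used.
used_core_hours = PEAK_CORES * HOURS_PER_YEAR * AVG_UTILIZATION
cloud_cost = used_core_hours * CLOUD_COST_PER_CORE_HOUR

print(f"owned: ${owned_cost:,.0f}/yr   cloud: ${cloud_cost:,.0f}/yr")
# With these numbers, the cloud wins whenever utilization is below
# OWNED / CLOUD = 50% -- which is exactly the peaks-and-valleys case.
```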

What I would like to do is present a series of blogs, intended as a philosophical framing rather than a technical roadmap, showing why HPC is the perfect consumer of cloud computing. The blogs will be broken into distinct topics in a logical progression, so that we build a common frame of reference. The initial set will address the barriers to adoption as follows:
 
1. Ego – IT as a core competency
2. Cost – getting more value for the same money
3. Trust – a historical lesson
4. Control – changes to organizational structure
5. Security – perspectives on internal security
6. Performance – realities of simultaneous optimization theory
 
Once we frame the barriers, we can then discuss incremental steps to get to value:
 
1. Cloud enablement – transforming your environment, internal private cloud
2. Private clouds – external private clouds
3. Hybrid configurations – leveraging public clouds for appropriate workloads
4. Public clouds – where and when they may make sense
 
This is the intended general direction, but I reserve the right to deviate based on input from the forum, any needed clarification, or any recalibration necessary to stay true to the intent of the site.
 
Having said that, let’s move into the first topic of discussion, IT as a core competency.
 
Companies need IT to be executed competently, and they need to control IT direction, but IT is not the primary product of the company (IT companies aside) and therefore should not be considered a core competency. We could debate whether tying into the primary function (core business) of the company is the right criterion for determining core competency, but I believe it comes down to the investment decision process of the company's leadership. The primary drivers for the business revolve around delivering product to customers, developing new markets, and managing customer relationships. When deciding where to invest critical resources and assets, executive management will primarily invest in the direction of the core business and minimize expenses around every other aspect of running the business. Core competency would imply sufficient investment to differentiate the business from the rest of the world.
 
Further reinforcement of these concepts can be seen by looking at where IT is accounted for within the business. Quite commonly, IT is accounted for as an SG&A function. This places it into the "overhead" bucket, where it competes for resources with facilities, HR, accounting, purchasing, and every other group in the company's SG&A bucket. I only point this out to frame the mindset behind the financial decisions. Given that companies are measured by how well they control SG&A expenses (SG&A as a function of revenue), and that many components of the SG&A bucket are fixed or based on headcount, you start to see that IT budgets are scrutinized with a control-oriented mindset, optimized on the cost variable. R&D is usually the "spend money to make money" side of the house, whereas SG&A is driven to control or even cut costs. And I have yet to meet anyone who can flip easily between these two mindsets.
 
In order to control costs as much as possible, and to get as much value as possible out of what is spent on IT, most companies limit change and hire people with breadth of skills rather than depth in a specific area. They limit change to get maximum value out of existing assets, maximize the ability to automate, and minimize the headcount required to manage the environment. Limiting change like this, though, defeats the ability of technology to ultimately deliver maximum value. It also promotes a philosophy of maintenance instead of development, and under that philosophy, symptoms often get addressed (just patch it up) instead of root causes.
 
Additionally, by hiring generalists, the business accomplishes many things: it can solve any problem in the environment while minimizing overhead staff, and it gains fault tolerance in its personnel (people can take vacations, get sick, or leave for another position). The downside is that these generalists are often asked to manage new technologies without the experience to manage them properly (they have not had the opportunity to gain that experience). The solutions they develop or integrate are more prone to configuration or design mistakes (doing it for the first time), are often less efficient than what is possible (not optimized), and are not designed to scale into a future of technologies that are not yet available, solving problems that have yet to surface. And finally, the complexity of the environment is growing faster than the capacity of the organization.
 
This is not to say that internal IT organizations are not excellent, that their personnel are not talented, or that these organizations don't bring great value to the companies they work for. The point is only that there is more value to be had, and that the company does not (and should not) invest in this function the way it invests in its core product(s). How many times have we all sat in meetings listening to vendors explain the "perfect solution," knowing they are right because we thought of it long ago, but have never had the time, funding, resources, or priority to go execute it perfectly? Cloud computing holds the promise of granting us access to that optimized, "perfect" solution, and next time we will talk about getting that solution for the same price we are paying for IT today…

Posted by Scott Clark - April 11, 2010 @ 7:48 AM, Pacific Daylight Time

Discussion

There are 5 discussion items posted.

Impact on the ISV community?
Submitted by peterdenyer on Apr 21, 2010 @ 5:52 PM EDT


Scott

A great start to a very interesting topic!

Another topic area, if not already anticipated in your proposed Barriers to Adoption blog sequence, could explore the impact of Cloud Computing on the application vendors (ISVs) in this market segment.

Application vendors want to run a profitable business, and changes in business model can impact that in interesting ways. We saw what happened when the business model (at least in the Electronic Design Automation segment) morphed from perpetual licenses to the current time-based licensing scenario: revenues were somewhat chaotic for a few years until time-based revenue became the norm in the industry.

Are we liable to see another big inflection point as companies move to a Cloud Computing business model, or add Cloud Computing as an offering in their product line-up?

I see some signs of early Cloud Computing experimentation by some significant EDA vendors. We certainly see Cloud Computing as being a major part of the business strategy for some of the latest entrants in the EDA space. Are other HPC-focused vendors doing the same? Is Cloud Computing in this commercial HPC market a viable, long-term direction?

Without presupposing the details of the rest of your blog topics, I'd guess that business (revenue) issues on the part of the application vendors will turn out to be one of the bigger barriers to adoption for HPC in the cloud. I'd like to hear your thoughts on this in a future blog.

Post #1

ISV part 1
Submitted by ScottClark on Apr 22, 2010 @ 8:16 PM EDT


Pete,
Great to hear from you, and glad you enjoyed what you read. Let me break this into two responses to get the entire thought out there.

I had not intended to cover the ISV situation in the barriers section (and there is definitely an issue there, as you indicated). I was going to map a potential solution to that issue in the value section, since that is where I think it is going, and it will take a bit of time, as you noted about the last change that happened in this space.

If I had to lay out a roadmap for this, I would propose that semiconductor companies look to leverage private clouds from an external facility. This gives all the security they have today (it is a private network, just housed somewhere else) plus the efficiency of best-of-breed datacenter design. I would also expect those datacenters to begin to specialize, so that EDA cloud datacenters will emerge, where many semiconductor companies will have their private clouds (hosted datacenters) and, at those same facilities, the major EDA companies will have their hosted services (IaaS from each vendor). The ISVs are already offering services this way, so proposing another facility instead of on-premise hosting at the ISV should not present a huge barrier. What this solves is the bandwidth and latency issues of moving data between the customer infrastructure and the ISV infrastructure, allowing multiple clouds to seamlessly participate in the customer's workload.
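
To see why co-location matters for the bandwidth and latency issue, here is a rough sketch of the data-movement arithmetic. The dataset size and link speeds are hypothetical assumptions, not measurements:

```python
# Rough illustration of why co-locating the private cloud with the ISV's
# hosted tools matters. All figures are hypothetical, for illustration only.

DATASET_GB = 500    # e.g., a design database shared with the ISV flow (assumed)
WAN_GBPS = 1.0      # customer site -> remote ISV over the WAN (assumed)
LAN_GBPS = 40.0     # cross-connect inside a shared facility (assumed)

def transfer_hours(size_gb: float, link_gbps: float) -> float:
    """Hours to move size_gb over a link_gbps link, ignoring protocol overhead."""
    return (size_gb * 8) / (link_gbps * 3600)

print(f"over the WAN:        {transfer_hours(DATASET_GB, WAN_GBPS):.2f} h")
print(f"inside the facility: {transfer_hours(DATASET_GB, LAN_GBPS):.3f} h")
# 500 GB takes over an hour on a 1 Gb/s WAN but under two minutes on a
# 40 Gb/s cross-connect -- which is what lets multiple clouds participate
# seamlessly in one customer's workload.
```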

Post #2

ISV part 2
Submitted by ScottClark on Apr 22, 2010 @ 8:20 PM EDT


What this does is allow the ISVs to get used to the idea of leveraging cloud resources, even though it will just be their own cloud to start. I believe that over time they will realize that IT is not their core competency (no self-promotion intended), and they will then partner with a cloud vendor to provide the infrastructure while the ISV supplies the tools (licenses). By that time they will have some history with pricing, usage models, etc., and will feel comfortable just providing the tools, in a manner consistent with the interests of their shareholders…

I would really enjoy hearing your thoughts on this direction; let me know what you think…

Post #3

I agree with Scott
Submitted by Anonymous on Jul 23, 2010 @ 8:53 PM EDT


The ISV market is a huge cloud HPC confidence builder for large design houses with mission-critical HPC needs that are growing at a rate beyond their internal financial and technical deployment capabilities.

We're not talking YouTube startups; we're talking public, high-tech Fortune 500 companies that must achieve YouTube-like capacity growth with "proven" deployment strategies.

Something only a handful of people like Scott know how to do!

Post #4


Submitted by Scott on Aug 4, 2010 @ 12:23 AM EDT


Well, I'm not sure where the compliment came from, but thank you... and I think this is correctly stated: growth is outpacing internal capabilities, and it really has to be done right...

-swc

Post #5


Scott Clark

Scott Clark has been an infrastructure solution provider in the EDA/Semiconductor industry for almost 20 years.

