September 06, 2011
When you hear the name vCompute, your first thought might be that it refers to another new product from VMware, which tends to preface new product names with a lowercase "v". Chances are, unless you've worked with them directly, vCompute, the private cloud company, is not on your radar. According to the company's CEO, Edward Hawes, this "fly low" approach is by design.
The company admits that it has always been secretive about its infrastructure and customer base. Hawes told us today that this is because of competitive interest in how vCompute has managed to remain profitable since its start in 2003.
While he was unwilling to discuss any specific customers in detail (beyond name-dropping the NSA), or the infrastructure backbone behind the high performance computing service his company offers, he says that both secrecy and security are keys to vCompute's success. According to Hawes, following a recent across-the-board upgrade of its infrastructure to serve climate modeling and other traditional HPC workloads, the company expects its emphasis on security, matched with compute capability, to drive it ahead.
The company has worked with a long list of customers, says Hawes, but he notes that he does not comment in any detail about how or to what extent customers use vCompute’s infrastructure. Recent publicly available projects include working with a Japanese group on a climate modeling project, an IBM/vCompute grid initiative, performing seismic runs for undisclosed customers, and running a number of Compute Against Cancer jobs. They are also working with the Large Synoptic Survey Telescope group, but again, details about the level of involvement are slim here as well.
In a rare display of media outreach, today vCompute went public about a new partnership with the Quasar data center to provide cloud and high performance computing services for customers in the government and defense sector, as well as for software performance and scalability testing. This might sound like an HPC on-demand service, but Hawes says his company takes an old-school approach that isn't likely to change, since customers are comfortable with it.
vCompute just finished updating its distributed infrastructure with GPU and advanced cloud technologies, including a QLogic InfiniBand upgrade. According to the company, the resulting performance boost will help it serve customers in the medical research, geosciences, public sector, and chemical industries. While the exact details of the infrastructure overhaul are, of course, not being made available, Hawes says that the company's use of ROCKS and InfiniBand should give a sense of the kind of hardware backing this cloud.
As Hawes describes it, this is not a public cloud service; it falls more into the category of a private cloud. He contends that public clouds will see only limited growth, since far too many organizations will always be reticent to put their data lifeblood into the hands of a third-party provider that cannot guarantee security. He says vCompute offers "private cloud services and server farms to clients locally and abroad" for clients that need a boost in capacity but do not want to extend their current infrastructure.
He claims that customers are not asking for pay-as-you-go, HPC on-demand pricing structures or services, suggesting that what has worked since 2003 will continue to do so. Customers rent the infrastructure at a set rate based on their needs. We asked whether this was a rather inexact way to bill and predict usage when customers have other choices offering pay-for-use models, but he pointed again to the fact that vCompute has been profitable since inception, in part because of this steadfast model for service delivery, billing, and management.
Customers often ship their data physically rather than bother with network data movement, and pay a flat rate for their application. This raises an important question about on-demand or public cloud services that offer pay-as-you-go pricing and a more web-based approach to data movement: is pay-as-you-go all it's cracked up to be? If a customer is confident in the provider and their service, and feels that for their particular application they are best off rolling with an estimate, is billing that reflects true usage all that important?
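The trade-off between the two billing models comes down to simple arithmetic: a flat rate is usage-independent and predictable, while a metered rate tracks whatever is actually consumed. A minimal sketch, using entirely illustrative numbers (none of these rates reflect vCompute's actual pricing, which was not disclosed):

```python
# Hypothetical comparison of flat-rate vs. pay-as-you-go billing.
# All rates and usage figures below are made up for illustration.

def flat_rate_cost(monthly_rate: float, months: int) -> float:
    """Flat rate: a fixed fee per month, regardless of actual usage."""
    return monthly_rate * months

def metered_cost(rate_per_core_hour: float, core_hours_used: float) -> float:
    """Pay-as-you-go: billed only for measured usage."""
    return rate_per_core_hour * core_hours_used

# A customer estimating 100,000 core-hours/month over a 6-month project:
flat = flat_rate_cost(monthly_rate=10_000, months=6)
metered = metered_cost(rate_per_core_hour=0.12, core_hours_used=100_000 * 6)

print(f"flat rate:     ${flat:,.2f}")     # fixed, known up front
print(f"pay-as-you-go: ${metered:,.2f}")  # varies with actual consumption
```

Under these assumed numbers the flat rate comes out cheaper, but if the customer ends up using only half the estimated core-hours, the metered bill drops accordingly while the flat rate does not; which model wins depends entirely on how accurate the usage estimate is.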
Hawes feels that the big customers, those in government or at the Fortune 500 level, will never look to public clouds due to security and privacy concerns. He also says that customers are looking for a provider with whom they have a trusting relationship. It seems to be working for his company, he says, and he has no plans to change anytime soon.