July 03, 2013
Researchers can now buy dedicated nodes on Triton, the San Diego Supercomputer Center (SDSC) system that was re-launched last month with new hardware. The new "condo" use model is said to benefit researchers with bigger HPC workloads that struggled to run under the previous "hotel" use model.
The new Triton Shared Computing Cluster (TSCC) is the upgraded version of Triton Resource, which the University of California, San Diego's SDSC program ran from 2009 to 2012.
In early June, the SDSC took the wraps off TSCC, a bulked-up version of the original Triton. In addition to upgraded hardware, the new cluster sports a hybrid "condo/hotel" business model for participating researchers.
Under Triton's "hotel" model, researchers could purchase blocks of computing time at an hourly rate. "While this model works very well for users with intermittent or lower levels of computing needs, it is less than ideal for users with long-term and/or relatively steady computing needs," the SDSC said in its TSCC announcement.
TSCC will continue to offer the hotel model, and will augment it with the new "condo" model that allows researchers to buy computing nodes and have them installed on Triton. Researchers who purchase Triton nodes under the condo model have full access to 100 percent of the node's computing power for a four-year period.
Each computing node on TSCC contains two 8-core Intel Xeon E5-2670 "Sandy Bridge" processors and 64 GB of main memory. The basic interconnect is 10 Gigabit Ethernet. Condo users have the option to purchase larger-memory nodes and to use a faster InfiniBand interconnect; hotel users can also opt for InfiniBand. Both hotel and condo users have the option to purchase nodes with NVIDIA co-processors, providing a bump up in HPC capacity.
Researchers who buy under the condo model can also buy blocks of time on the larger cluster under the hotel model, sharing Triton's resources with other users. If a node purchased under the condo model sits idle, computing time may be "scavenged" from it to support other users, the SDSC says.
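The scavenging arrangement resembles preemptible scheduling: condo owners always get their own nodes back, while idle condo nodes can run hotel jobs that may be preempted. The following is a minimal illustrative sketch of such a policy, not SDSC's actual scheduler; all class and function names here are hypothetical.

```python
# Illustrative sketch only -- not SDSC's scheduler. Condo owners reclaim
# their own nodes on demand; idle condo nodes run preemptible ("scavenged")
# hotel jobs; hotel jobs prefer unowned hotel nodes first.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Node:
    name: str
    owner: Optional[str] = None       # condo owner, or None for a hotel node
    running: Optional[str] = None     # job id currently on the node
    preemptible: bool = False         # True while a hotel job is scavenging


class CondoHotelScheduler:
    def __init__(self, nodes):
        self.nodes = nodes

    def submit(self, job_id, user, is_condo_owner):
        """Place a job; return the node name, or None if nothing fits."""
        if is_condo_owner:
            # Condo jobs reclaim the owner's node, preempting any scavenger.
            for n in self.nodes:
                if n.owner == user and (n.running is None or n.preemptible):
                    n.running, n.preemptible = job_id, False
                    return n.name
            return None  # owner's nodes are busy with their own jobs
        # Hotel jobs: prefer free hotel nodes...
        for n in self.nodes:
            if n.owner is None and n.running is None:
                n.running, n.preemptible = job_id, False
                return n.name
        # ...then scavenge idle condo nodes, marked preemptible.
        for n in self.nodes:
            if n.owner is not None and n.running is None:
                n.running, n.preemptible = job_id, True
                return n.name
        return None
```

For example, with one condo node owned by "lab_a" and one hotel node, a second hotel job lands on the idle condo node as a preemptible scavenger, and a later job from "lab_a" immediately reclaims that node.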
"The primary benefits to participants include gaining access to a much larger resource than they could afford solely for their labs, and having a system that is professionally maintained by full-time staff instead of being maintained part-time by lab personnel," Ron Hawkins, SDSC's director of industry relations, says in the announcement.
The cluster is designed and operated by the SDSC's Research Cyberinfrastructure (RCI) program. The RCI covers operating costs, including administrators, user support, and software licensing, so condo users pay only a "modest" fee beyond the cost of the node hardware itself.
"The condo model has proved to be one of the most successful, sustainable business models for research computing," says Richard Moore, deputy director of SDSC and project manager for the RCI program, citing how other UC campuses distribute HPC resources.