November 19, 2009
NVIDIA Tesla GPUs integrated into Penguin on Demand system, delivering major performance boosts for users
SAN FRANCISCO, Nov. 17 -- Penguin Computing, experts in high performance computing solutions, today announced that Tesla GPU compute nodes are available in its Penguin on Demand (POD) system. Tesla-equipped PODs will now provide a pay-as-you-go environment for researchers, scientists and engineers to explore the benefits of GPU computing in a hosted environment.
The POD system makes available on demand a computing infrastructure of highly optimized Linux clusters, with specialized hardware interconnects and software configurations tuned specifically for HPC workloads. The addition of NVIDIA Tesla GPU compute systems to POD now allows users to port their applications to CUDA or OpenCL and test them quickly, without capital expense.
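As an illustration of the kind of port a user might validate on a Tesla-equipped POD node, the following is a minimal CUDA sketch of a SAXPY computation (a hypothetical example, not taken from Penguin's or NVIDIA's documentation):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical example: a SAXPY kernel (y = a*x + y) of the kind a
// researcher might port to CUDA and test on a Tesla compute node.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    // Host buffers
    float *hx = (float *)malloc(bytes), *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Device buffers: copy in, launch, copy out
    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);

    printf("y[0] = %f\n", hy[0]);  // 2.0*1.0 + 2.0 = 4.0

    cudaFree(dx); cudaFree(dy);
    free(hx); free(hy);
    return 0;
}
```

Compiled with `nvcc`, a kernel like this is the sort of small porting experiment POD's pay-as-you-go model is aimed at: users can verify correctness and measure GPU speedup without purchasing Tesla hardware up front.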
POD provides high-density Xeon-based compute nodes coupled with high-speed storage in a persistent, secure compute environment: users work from a head node, and jobs execute directly on the compute nodes' physical cores. Jobs run over a localized 10 Gigabit network topology to maximize I/O bandwidth to the user's storage and minimize latency between processes. Penguin Computing also offers a full range of expert support and services for POD customers, including application set-up, creation of the HPC computing environment, ongoing maintenance, data exchange services and application tuning.
"We are very excited about the addition of Tesla GPU compute nodes to our Penguin on Demand service," says Tom Coull, general manager of products and engineering at Penguin Computing. "Providing a GPU compute capability further differentiates POD from other, more general-purpose offerings, and continues to demonstrate our commitment to giving users a state-of-the-art, HPC-focused compute capability in the cloud."
"Penguin's on-demand Tesla-based GPU computing environment is a great step forward in providing high-performance computing on demand. Our GPU computing customers now have an on-demand platform for developing and delivering their CUDA and OpenCL applications to a wide audience -- basically anyone with an Internet connection," said Sumit Gupta, senior product manager of GPU Computing at NVIDIA. Penguin will be demonstrating the capabilities of the Tesla-enabled POD at Supercomputing 2009 (SC09) at Booth #911.
About Penguin Computing
Penguin Computing, headquartered in San Francisco, Calif., specializes in complete, integrated HPC clustering solutions. Penguin has been a successful innovator for over a decade, providing Linux HPC solutions to a variety of industries. Penguin's staff, including the originator of the Beowulf Cluster architecture, has unsurpassed experience in delivering a powerful combination of fully integrated HPC clusters, comprehensive cluster management software, and services. For more information about Penguin Computing and Penguin products, go to http://www.penguincomputing.com.
Source: Penguin Computing, Inc.