July 15, 2010
Here is a collection of highlights from this week's news stream as reported by HPCwire.
Clemson Gets $1.4M to Improve Cyberinfrastructure for SC Researchers
EM Photonics Announces Partnership with PSSC Labs
Purdue's Coates Cluster Achieves First Ever TOP500 Ranking for 10Gb Ethernet
Wilson Unlocks the Secret of Swerving Soccer Balls Using STAR-CCM+
AccelerEyes' Jacket Product Family Supports the Latest NVIDIA Fermi GPUs
Internet2, NOAA Partner to Provide New Research Network
Portland Group Releases PGI Visual Fortran for Visual Studio 2010
University of Wales Supercomputing Project to Benefit Welsh Economy
Microsoft Research Illuminates Night Sky and Mars in 3D
RMSC, ESRI Collaborate on HPC Cloud Applications
Intersect360 Research Says HPC Market Will Rebound to $21.8 Billion by 2014
EM Photonics Releases CULA 2.0 to Support Latest Fermi-based NVIDIA GPUs
Graph Theory Predicts Clear Favorite for the FIFA World Cup
NASA Center for Climate Simulation Expands Computational Power
It was just last month that the NASA Center for Climate Simulation (NCCS) launched under the wing of NASA's Goddard Space Flight Center in Greenbelt, Md. With a mandate to provide "an integrated set of supercomputing, visualization, and data interaction technologies that will enhance agency capabilities in weather and climate prediction research," the project would need some high-end HPC equipment to do all that heavy lifting. Today Dell revealed its role in expanding the computational capabilities of the climate project. For starters, under an estimated $5.1 million contract with Dell, NCCS is doubling its computing power to more than 300 trillion calculations per second. If every person on the planet grabbed a calculator at the same time, they could not match that kind of speed.
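The calculator comparison holds up to a rough back-of-the-envelope check. A minimal sketch, assuming a 2010 world population of about 6.8 billion and a generous one calculation per second per person (both figures are my own assumptions, not from the announcement):

```python
# Back-of-the-envelope check of the calculator comparison.
# Assumed inputs (not from the announcement): ~6.8 billion people in 2010,
# each punching one calculation per second into a hand calculator.
system_rate = 300e12        # NCCS capacity: 300 trillion calculations/second
world_population = 6.8e9    # rough 2010 world population estimate
human_rate = world_population * 1.0  # one calculation/second per person

# The machine outpaces all of humanity-with-calculators by a wide margin.
print(system_rate / human_rate)  # roughly 44,000x faster
```

Even with everyone on Earth participating, humanity's combined rate would fall more than four orders of magnitude short.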
The new Dell PowerEdge C6100 servers are customized for high-performance computing environments and will enable NCCS users to fine-tune their climate models. As always, better input equals better output, and greater computing power allows scientists to include more data in their models. When they can add smaller-scale features of the atmosphere and oceans, the models and associated forecasts will be more accurate. The increase in data analysis capacity will serve not only NASA's earth and space science user community, but the community at large who will receive the benefits of more advanced climate models and climate predictions.
Following a much-needed trend, this increase in system performance is matched with a reduction in energy expenditure compared with previous iterations -- improvements of 69 percent in performance and 47 percent in energy efficiency are anticipated.
According to the announcement, the Dell PowerEdge C6100 servers, which debuted this spring, serve a niche in both the public and private research sectors among groups seeking a balance between performance, initial expenditure and energy efficiency.
Phil Webster, chief of Goddard's Computational and Information Sciences and Technology Office, explained that Dell's PowerEdge servers were selected based upon both the commitment of Dell to the HPC community and the ability of their systems to scale over time.
Amazon Opens up Cloud to HPC Apps
The big story this week sits at the intersection of HPC and cloud computing, and has already received significant coverage both at HPCwire, here, and in our sister publication, HPC in the Cloud, here. That's right: I'm talking about Amazon's just-announced Cluster Compute Instances (CCI) for Amazon EC2.
Released as part of Amazon Web Services, CCI is a new cloud computing instance type that specifically addresses the performance needs of HPC applications. The HPC instance is similar to the already-established EC2 instances, but has been custom-engineered to provide high-end computing power and low-latency networking, opening up the Web services to users with more discerning computational requirements.
Berkeley Lab collaborated with Amazon to test-drive its HPC applications on the Cluster Compute Instances, reporting a speedup of 8.5 times compared with the same applications run on the general EC2 instance types. As Michael Feldman points out in his feature article, this speedup isn't really surprising considering the HPC instance's increased computing power and increased network throughput.
It is worth noting, however, that when Amazon ran the Linpack benchmark on 880 of their Cluster Compute instances (7,040 cores), the performance was measured at 41.82 teraflops. That's good enough to put the system at the 146th position on the June TOP500 list.
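For a sense of scale, the reported figures can be broken down per instance and per core. Only the 41.82 teraflops, 880 instances, and 7,040 cores come from the announcement; the derived rates below are simple division, not Amazon's numbers:

```python
# Derive per-instance and per-core Linpack rates from the reported figures.
# Source figures: 41.82 teraflops measured across 880 Cluster Compute
# instances totaling 7,040 cores. The per-unit rates are my own derivation.
rmax = 41.82e12      # measured Linpack performance, flops
instances = 880
cores = 7040

per_instance = rmax / instances   # ~47.5 gigaflops per instance
per_core = rmax / cores           # ~5.9 gigaflops per core
print(per_instance / 1e9, per_core / 1e9)
```

That per-core figure is in the ballpark of contemporary x86 cores running Linpack, which underscores that the cloud cluster is delivering real, not just nominal, HPC performance.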
At any rate, this announcement shows a rising confidence level in the market for HPC apps to be run in the cloud. In the four years since Amazon Web Services was launched, it's progressed from offering run-of-the-mill computing on demand to targeting the more specific needs of complex workloads and network-bound apps. And there's still plenty of room at the top.
Carnegie Mellon Promotes Computer Science Majors
Kudos to Carnegie Mellon for launching a $7 million initiative to get young students interested in technology. Sponsored by the Defense Advanced Research Projects Agency (DARPA), Fostering Innovation through Robotics Exploration (FIRE) is designed to promote interest in computer science by making it fun. And what could be more fun for kids and teens than getting to build their own robots?
From the announcement:
FIRE will develop new tools that enable middle and high school students to expand upon their interest in robots, leading them from one CS-STEM activity to the next. Examples are programming tools that create game-like virtual worlds where robot programs can be tested, as well as computerized tutors that teach mathematics and computer science in the context of robotics.
The number of US college students majoring in computer science, science, technology, engineering and mathematics (CS-STEM) and those with CS-STEM degrees is declining. The statistics for the computer science field are especially troubling -- the number of graduates dropped 43 percent from 2004 to 2007, and women and minorities remain underrepresented. Trends like these raise concerns about national competitiveness.
Robin Shoop, director of FIRE and of Carnegie Mellon's Robotics Academy and an international leader in the development of K-12 robotic education curriculum, commented:
"Tens of thousands of students nationwide participate in robotic activities every year, but these activities do not always translate into increases in academic preparation or sustained engagement with CS-STEM. FIRE will provide the infrastructure, the tools, and the resources to significantly engage students for the long term."
Getting kids in K-12 involved in science and technology and steering them into technology-related majors should be a key priority for every school in this country. It's heartening to read any news items on this essential topic. More information is available at www.fire.cs.cmu.edu.