September 26, 2011
MADISON, Wis., September 26 -- The largest, most powerful computer on the University of Wisconsin-Madison campus began operations in June 2011. The system is named S4 - Supercomputer for Satellite Simulations and Data Assimilation Studies. S4 is housed at the Space Science and Engineering Center (SSEC) and contains 3,072 CPU cores, 8 terabytes of RAM, and over 450 terabytes of disk space. The system is connected via a 40-gigabit-per-second InfiniBand network. S4 is more than half again as large as any previous UW-Madison campus high-performance computing cluster.
Funded by the National Oceanic and Atmospheric Administration (NOAA), S4 is used by NOAA and UW researchers to run data assimilation experiments with the goal of improving the NOAA operational weather models used to generate weather forecasts for the United States.
The system was designed, installed, and is maintained by the UW SSEC Technical Computing Group.
About the S4 Project
The NOAA Center for Satellite Applications and Research (STAR) approached Principal Investigator Liam Gumley and his team with a proposal to provide a million-dollar grant to design a large, extremely capable system and, upon approval of the design, purchase the hardware. The system had to be able to run NOAA's operational, simulation, and prediction models.
SSEC Technical Computing staff including Jesse Stroik, John Lalande, and Technical Lead Scott Nolin created the design proposal using NOAA's parameters and critical guidance from Science Lead Brad Pierce and other SSEC scientists. When the design was approved by NOAA in April 2011, the team procured the equipment. The system was installed in the SSEC Data Center, which provides its working environment, in the first week of May, and has been undergoing set-up and testing since that time.
Principal Investigator Liam Gumley said about the system, "The design our Technical Computing crew put together was based on our ten years of experience with building Linux clusters. Previous builds topped out at 250 CPU cores. The amount of memory in a system is vital as it directly maps to the resolution of the model you can run. Every grid point in a model has to be represented in memory. And for every grid point there are perhaps 40 or 50 parameters to track."
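The relationship Gumley describes between memory and resolution can be sketched with some back-of-the-envelope arithmetic. The grid dimensions, the 45-parameter count, and the use of 8-byte double-precision values below are all illustrative assumptions, not the configuration of any actual NOAA model run on S4:

```python
# Rough memory estimate for holding one copy of a gridded model state
# in RAM, per Gumley's point that memory maps to model resolution.
# All numbers here are hypothetical, chosen only for illustration.

def estimate_model_memory_gb(nx, ny, nz, n_params, bytes_per_value=8):
    """Bytes needed for one copy of the model state, in gigabytes."""
    grid_points = nx * ny * nz
    total_bytes = grid_points * n_params * bytes_per_value
    return total_bytes / 1e9

# Example: a hypothetical global grid at 0.25-degree spacing
# (1440 x 721 horizontal points), 60 vertical levels, and 45
# parameters per grid point stored as 8-byte doubles.
mem_gb = estimate_model_memory_gb(1440, 721, 60, 45)
print(f"~{mem_gb:.0f} GB per copy of the model state")
```

Note that halving the horizontal grid spacing quadruples the number of horizontal points, so memory demand grows quickly with resolution, and data assimilation experiments typically hold several such copies at once, which is why a large shared memory pool like S4's 8 terabytes matters.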
"We will be running to NOAA's specifications," Gumley said. "NOAA wants this system to improve their operational, simulation, and prediction models. NOAA has at least two other systems similar to this. One is at a NOAA facility and one at NASA; both of them out east at federal sites. The NOAA system is heavily overused. It is tough for researchers to get enough cycles. And the NASA system is significantly smaller than the one we have put into operation."
NOAA's desire was for a top-flight system that was also accessible and easy to use, and SSEC and the University of Wisconsin helped meet these goals.
Source: University of Wisconsin Space Science and Engineering Center