August 19, 2010
The Army Research Laboratory (ARL) Defense Department Supercomputing Resource Center (DSRC) in Maryland conducts research in weapons development and simulation-based testing, chemistry, nanoscience, and beyond, with processing power of 350 trillion calculations per second. If the center’s director, Charles Nietubicz, has his wish, that capability will one day be delivered through a web portal, and a very secure one, of course.
In an interview with SIGNAL magazine, Nietubicz noted that the center’s computers could help the department further its research and development goals if access were granted to researchers outside the center. As it stands now, however, any group wanting to make use of the ARL DSRC must spend a great deal of time and effort before getting started, owing to paperwork, security requirements, and other hurdles, not to mention the significant time it takes researchers to adapt to the system itself.
It is Nietubicz’s goal to follow in the footsteps of companies like Amazon, with its HPC-geared provisioning of resources, and of Microsoft’s Technical Computing initiative, which seeks to bring supercomputing to the masses. He realizes, however, that such a goal is some time away; it is not difficult to imagine the rigorous security apparatus that would be required to realize it.
Nietubicz told George I. Seffers at SIGNAL, “The average engineer doesn’t use high-performance computing. Part of the reason is that it’s too hard. I’m working to develop a tipping point for high-performance.” In his terms, that tipping point is reached when supercomputing access becomes part of everyday life rather than an elusive privilege. While the tipping point is far from imminent, the on-demand model of HPC is gaining traction, if only in theory, with security and data protection perhaps the critical factors.
Full story at SIGNAL