October 21, 2010
Here is a collection of highlights from this week's news stream as reported by HPCwire.
UC Santa Barbara-Led Team Developing Next-Generation Ethernet
Sony Equips PCs with World Community Grid Software
Allinea Software Signs Further Collaboration Agreement with CEA
Netlist Demonstrates 100 VMs on a Single Standard Server Using HyperCloud Memory at Interop
IBM Reports 12 Percent Increase in Net Income for Q3
Voltaire Grows adVantage Partner Program to More Than 50 Members
Fusion-io Creates New Technology Alliance Program to Drive Innovation Through Collaboration
HyperWorks Partner Alliance Adds RAMSIS by Human Solutions
Coalition of High Performance Computing Leaders Form Community-Based Open-Source File System Alliance
SDSC Celebrates Its 25th Year
Supermicro Showcases HPC Servers at SEG 2010
AMAX Introduces Petabyte-Scale NAS Clustered Storage Solutions for Oil and Gas Exploration
Rocky Mountain Supercomputing Centers Introduces M.O.R.E. POWER Service
CANARIE, Ciena Demo 100G Network
Mellanox InfiniBand Switch Systems Selected by IBM
Philip E. Bourne Wins Microsoft's 2010 Jim Gray eScience Award
XtreemOS Consortium Announces Public Access to Open Test Bed
NSF Grant to Study National Energy Policy and Technology Impacts
New Algorithm Reduces Linear Equation Runtimes
Computer scientists at Carnegie Mellon University have developed a groundbreaking algorithm that can solve systems of linear equations used in important applications, including image processing, logistics and scheduling problems, and recommendation systems. The new algorithm is dramatically more efficient than its predecessors and may make it possible for a desktop workstation to solve systems with a billion variables in just a few seconds.
Linear systems are used to model real-world systems, such as transportation, energy, telecommunications and manufacturing, which often involve millions, or even billions, of variables. Solving such complex systems is time-consuming on even the fastest computers and has long confounded computer scientists and stymied research goals. In fact, solving simultaneous equations quickly and accurately is an age-old mathematical problem. One of the classic algorithms for solving linear systems, known today as Gaussian elimination, was first published by Chinese mathematicians 2,000 years ago.
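For readers unfamiliar with the classical method, here is a minimal sketch of Gaussian elimination in plain Python (an illustration, not the researchers' code). Its roughly cubic running time in the number of variables is exactly why billion-variable systems are out of reach for direct solvers.

```python
def solve_linear_system(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting.
    A is a list of n rows (each a list of n floats); b is a list of n floats.
    Runs in O(n^3) time, which is why huge systems are impractical to solve this way."""
    n = len(b)
    # Work on a copy in augmented-matrix form [A | b].
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        # Partial pivoting: bring the largest remaining pivot up to row k.
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        # Eliminate column k from all rows below the pivot.
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= m * M[k][j]
    # Back-substitution on the resulting upper-triangular system.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x
```

For example, `solve_linear_system([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0])` returns approximately `[0.8, 1.4]`, the solution of 2x + y = 3, x + 3y = 5.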
Researchers from Carnegie Mellon's Computer Science Department have achieved a breakthrough, one with great practical potential. The algorithm they've devised draws on new tools from graph theory, randomized algorithms and linear algebra to greatly speed the time to completion for these linear system problems, with runtimes up to a billion times faster than with Gaussian elimination.
The algorithm applies to a class of problems known as symmetric diagonally dominant (SDD) systems, which have gained prominence in recent years. Recommendation systems, like the one used by Netflix, use SDD systems to compare the preferences of an individual to those of millions of other customers. Image processing, logistics, and engineering are other key use cases for SDD systems.
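The full Koutis-Miller-Peng solver relies on graph sparsifiers as preconditioners and is well beyond a short sketch, but the class of problems it targets is easy to illustrate: a graph Laplacian (plus the identity, to make it nonsingular) is a textbook SDD matrix, and iterative methods like conjugate gradient solve such systems using only cheap matrix-vector products. The helper names below are illustrative, not from the paper.

```python
def laplacian_plus_identity(edges, n):
    """Build the graph Laplacian of an n-node graph, plus the identity:
    a textbook SDD matrix. Each diagonal entry (degree + 1) dominates
    the absolute sum of the off-diagonal entries in its row."""
    A = [[0.0] * n for _ in range(n)]
    for u, v in edges:
        A[u][v] -= 1.0
        A[v][u] -= 1.0
        A[u][u] += 1.0
        A[v][v] += 1.0
    for i in range(n):
        A[i][i] += 1.0
    return A

def conjugate_gradient(A, b, tol=1e-12, max_iter=1000):
    """Solve Ax = b for a symmetric positive-definite A by conjugate
    gradient. Each iteration costs one matrix-vector product, which is
    what makes iterative methods attractive for huge sparse systems."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual b - Ax (x starts at zero)
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x
```

The breakthrough in the CMU work is, roughly, a way to precondition this kind of iteration so the number of iterations stays tiny even for enormous graphs; plain conjugate gradient, as sketched here, can need many iterations on ill-conditioned systems.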
The press release highlights the importance of this achievement:
"The new linear system solver of Koutis, Miller and Peng is wonderful both for its speed and its simplicity," said Spielman, a professor of applied mathematics and computer science at Yale. "There is no other algorithm that runs at even close to this speed. In fact, it's impossible to design an algorithm that will be too much faster."
The work will be presented at the annual IEEE Symposium on Foundations of Computer Science (FOCS 2010), Oct. 23-26 in Las Vegas, and the group's research paper, "Approaching Optimality for Solving SDD Linear Systems," can be downloaded at http://www.cs.cmu.edu/~glmiller/Publications/Papers/KoutisApproaching-2010.pdf.
University of Queensland Deploys SGI Supercomputer
This week the University of Queensland increased its technical computing prowess with a high performance computing (HPC) solution from SGI. The SGI Rackable half-depth servers will be used to support a broad range of research from the fields of bioinformatics, computational chemistry, finite element analysis, computational fluid dynamics, earth sciences, market economics and image processing.
According to Professor Max Lu, deputy vice-chancellor of research at the University of Queensland, "These computers will strengthen an important part of the University's research capacity. Tasks such as processing enormous amounts of biological data generated through techniques such as genome-sequencing, micro-arrays and imaging cannot be done on standard desktop computers."
This will be one of the biggest deployments in Australia. The new SGI system boasts 3,144 processor cores, specifically Intel Xeon 5500 and 7500 series processors, with 11.52 TB memory and 249 TB of disk storage. Other specifications include InfiniBand QDR interconnect with Voltaire Grid Director 4700 switches and Unified Fabric Manager switching technology, and a Panasas file system. DC-based racks and innovative cooling techniques were selected for their energy-efficiency. The design offers flexible configurations to suit the university's current and future requirements. The university opted for SGI Professional Services to provide project management, installation services, datacenter services, training, as well as ongoing consultation and maintenance.
The new machine will be put to work handling the complex research and data needs of universities in Queensland and partner organizations, such as the Queensland Cyber Infrastructure Foundation (QCIF), the Commonwealth Scientific and Industrial Research Organisation (CSIRO) and Bioplatforms Australia. Additionally, the infrastructure will host several projects, including the National Computational Infrastructure (NCI) Specialised Facility in Bioinformatics and the European Molecular Biology Laboratory (EMBL) Australia / European Bioinformatics Institute (EBI) Mirror project.