June 03, 2008
With HP's rollout of the new ProLiant BL2x220c G5 today, the company has an answer to IBM's recently announced iDataPlex server. Both are extra-dense server architectures designed for scaled-out datacenters, which means these boxes are aimed at cloud computing, Web 2.0 and high performance computing -- the current hot markets in the IT industry. In this ultra-scale arena, compute density, energy efficiency and price-performance all seem to be converging, while hardware reliability takes a back seat to being able to quickly swap out fried parts.
In our coverage today John West specs out the new HP gear:
HP is announcing the Xeon-based HP ProLiant BL2x220c G5 blade. This blade allows HP to cram over 12 TFLOPS (256 quad-core sockets, or 1,024 cores) into a single 42U rack -- a very dense solution. This density comes with a lot of engineering and at the price of some functionality. HP has, of course, included its custom fan kit, and the BladeSystem enclosure that holds the new c-Class G5 blade is engineered for effective (air) cooling and does some smart power supply management to keep operations at the knee of the efficiency curve.
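The rack-density figure works out with some back-of-the-envelope arithmetic. The sketch below assumes a 3.0 GHz Xeon 5400 part and 4 double-precision flops per core per cycle; neither number is in the announcement, and actual clock speeds vary by SKU:

```python
# Back-of-the-envelope peak-FLOPS estimate for a 42U rack of BL2x220c G5 blades.
# Assumed (not from HP's announcement): 3.0 GHz clock, 4 DP flops/core/cycle.
enclosures_per_rack = 4    # the 10U c7000 enclosure fits four times in 42U
blades_per_enclosure = 16
servers_per_blade = 2      # the "2x" in BL2x220c: two servers per blade
sockets_per_server = 2
cores_per_socket = 4       # quad-core Xeon 5400

cores = (enclosures_per_rack * blades_per_enclosure *
         servers_per_blade * sockets_per_server * cores_per_socket)
ghz = 3.0
flops_per_cycle = 4.0
peak_tflops = cores * ghz * flops_per_cycle / 1000.0

print(cores)        # 1024 cores per rack
print(peak_tflops)  # 12.288 -- just over 12 TFLOPS peak
```

At these assumed clocks, "over 12 TFLOPS" only pencils out with 1,024 cores (256 sockets) per rack, which is why the parenthetical above reads in cores rather than sockets.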
HP gets two dual-socket servers on a single blade by carefully selecting what makes it into the design and what gets left out. You can choose from the dual-core Xeon 5200 or quad-core Xeon 5400, each server with 4 DDR2 DIMMs. HP saves power by using DDR2 instead of the FBDIMMs more commonly seen in Intel-based servers and by having 4 DIMM slots instead of 8. Each server also has only one PCI Express mezzanine socket and one disk drive. These design tradeoffs obviously mean that the new blade isn't the right compute foundation for every task -- in particular, a maxed-out quad-core configuration would be light on memory per core -- but HP is very specifically focusing this product on the customer that wants a lot of compute in a small space.
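The memory-per-core tradeoff is easy to quantify. The sketch below assumes hypothetical 4 GB DDR2 DIMMs in all four slots; supported DIMM sizes for this blade may differ:

```python
# Rough memory-per-core comparison for the two CPU options.
# Assumed (hypothetical): 4 GB DDR2 DIMMs populating all four slots.
dimm_slots_per_server = 4
gb_per_dimm = 4
mem_gb = dimm_slots_per_server * gb_per_dimm   # 16 GB per dual-socket server

for label, cores_per_socket in [("dual-core Xeon 5200", 2),
                                ("quad-core Xeon 5400", 4)]:
    cores = 2 * cores_per_socket               # two sockets per server
    print(label, mem_gb / cores, "GB/core")
# dual-core Xeon 5200 4.0 GB/core
# quad-core Xeon 5400 2.0 GB/core
```

Halving the DIMM slots while quadrupling the cores per socket is exactly the squeeze the article alludes to: the quad-core option ends up with half the memory per core of the dual-core option at any given DIMM size.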
And from last week, John extols the virtues of iDataPlex:
An iDataPlex setup holds the nodes sideways from the usual orientation, and combines two racks worth of them in a package that's wider than deep. Fans are in the back, but with a shorter distance to pull the cold air across, they use less power and keep components cooler. The iDataPlex is designed for a crowd that values price and quantity over reliability and other such fancy features. The system has fewer redundant components and a simpler design that favors a "pull and replace" approach to node failure over the traditional "predict and manage" approach.
IBM will offer up to 22 different chips and motherboard combinations for the nodes, allowing customers to tailor systems precisely for their needs. You want low power Xeons with slow memory? No problem. Or, for a no-frills-added HPC machine, you can max out on computing power and memory in a full rack of these, and keep it cool with optional water-cooled rear doors.
The final details on the iDataPlex will be forthcoming when Big Blue rolls out their new offering in June. The new ProLiant blade is available now.
Posted by Michael Feldman - June 02, 2008 @ 9:00 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.