April 14, 2010
Cray has never made a big deal about the custom Linux operating system it packages with its XT supercomputing line. In general, companies don't like to tout proprietary OS environments since they tend to lock custom codes in and third-party ISV applications out. But the third generation Cray Linux Environment (CLE3) that the company announced on Wednesday is designed to make elite supercomputing an ISV-friendly experience.
Besides adding compatibility with off-the-shelf ISV codes, which we'll get to in a moment, the newly-minted Cray OS contains a number of other enhancements. In the performance realm, CLE3 increases overall scalability to greater than 500,000 cores (up from 200,000 in CLE2), adds Lustre 1.8 support, and includes some advanced scheduler features. Cray also added a feature called "core specialization," which allows the user to pin a single core on the node to the OS and devote the remainder to application code. According to Cray, on some types of codes, this can bump performance 10 to 20 percent. CLE3 also brings with it some additional reliability features, including NodeKARE, a diagnostic capability that makes sure jobs are running on healthy nodes.
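The core-specialization idea -- one core absorbs OS noise, the rest run the application undisturbed -- can be sketched in a few lines. This is an illustrative Python sketch, not Cray's implementation; the function name and core counts are assumptions.

```python
def core_specialization(total_cores, os_cores=1):
    """Split a node's cores: reserve `os_cores` for the OS and its
    daemons, leaving the rest for the application -- the idea behind
    CLE3's core-specialization feature (names and numbers here are
    illustrative, not Cray's implementation)."""
    reserved = set(range(os_cores))
    app_cores = set(range(os_cores, total_cores))
    return reserved, app_cores

reserved, app_cores = core_specialization(12)  # e.g. a 12-core node
print(sorted(reserved), sorted(app_cores))
# On Linux, a process could then be pinned to the application cores with
# os.sched_setaffinity(0, app_cores), keeping OS jitter on core 0.
```

The payoff Cray cites (10 to 20 percent on some codes) comes from tightly-coupled applications no longer stalling when one rank gets interrupted by an OS daemon.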
But the biggest new feature added to CLE3 is compatibility with standard HPC codes from independent software vendors (ISVs). This new capability has the potential to open up a much broader market for Cray's flagship XT product line, and further blur the line between proprietary supercomputers and traditional HPC clusters.
Cray has had an on-again, off-again relationship with HPC software vendors. Many of the established ISVs in this space grew up alongside Cray Research, and software from companies like CEI, LSTC, SIMULIA, and CD-adapco actually ran on the original Cray Research machines. Over time, these vendors migrated to standard x86 Linux and Windows systems, which became their primary platforms, and dropped products that required customized solutions for supercomputers. Cray, for its part, left most of the commercial ISVs behind as it focused on high-end HPC and custom applications.
But a couple of years ago, Cray decided it was going to bring the ISVs back into its top-of-the-line supers. The company already had the major pieces in place -- an x86 platform in the Opteron-based XT architecture and a SUSE Linux-based OS in CLE. The pieces didn't quite fit, though, because Cray used an MPI implementation targeted at its proprietary SeaStar system interconnect, while the ISVs employ MPI libraries built atop a standard communication protocol -- either TCP/IP or the OpenFabrics Enterprise Distribution (OFED). The only way commercial software (or any software, for that matter) could run on an XT machine was by compiling the application code against the Cray libraries. In fact, CD-adapco and LSTC went to the trouble of doing exactly that and ported some of their codes to run on Cray supercomputers. In general, though, ISVs would rather not be bothered to maintain and support multiple distributions of their software for low-volume platforms.
In the new Linux distribution, Cray has added a TCP/IP layer on top of its SeaStar library to form a bridge to standard Linux codes. That means vanilla ISV applications should work out of the box, assuming the software licensing is set up properly. According to Barry Bolding, vice president of Cray's Scalable Systems division, they have been busy testing codes from all the major vendors -- ANSYS, The MathWorks, SIMULIA, CEI, CD-adapco, LSTC, Metacomp Technologies, Accelrys -- and have yet to uncover incompatibilities. He says that from the application's point of view, the Cray system software environment now looks like any standard x86 Linux cluster.
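The reason unmodified binaries can work is that application code written against the standard sockets API has no knowledge of the underlying fabric; whether the packets traverse Ethernet or a TCP/IP layer bridged onto SeaStar is invisible to it. A minimal sketch of such fabric-agnostic code (loopback is used here purely so the example is self-contained):

```python
import socket
import threading

# A vanilla TCP client/server pair. Code like this needs no knowledge of
# the interconnect underneath -- which is why a TCP/IP layer over SeaStar
# lets off-the-shelf Linux binaries run unmodified.

def serve(listener):
    conn, _ = listener.accept()
    with conn:
        conn.sendall(conn.recv(1024).upper())  # echo back, uppercased

server = socket.socket()
server.bind(("127.0.0.1", 0))        # let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=serve, args=(server,))
t.start()

with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"cle3")
    reply = client.recv(1024)
t.join()
server.close()
print(reply.decode())  # -> CLE3
```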
Access to the TCP/IP interface is only available in what Cray calls "Cluster Compatibility Mode" (CCM), which represents the ISV-friendly part of CLE3. The default environment is Cray's "standard" runtime, which the company now refers to as "Extreme Scalability Mode" (ESM). The idea is that as ISV-derived jobs are queued up for execution, the appropriate nodes are loaded with CCM, and then reprovisioned with ESM after the application completes. The OS footprint for the two modes is nearly identical, with the CCM version about 45 MB larger than its ESM sibling.
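From the user's side, the mode switch is driven by the batch system: the job requests CCM, the scheduler provisions the nodes accordingly, and the ISV binary is launched. The script below is a hypothetical sketch -- the resource directives, module name, and launcher shown here are assumptions that vary by site and scheduler, and `isv_solver` is a placeholder binary.

```shell
#!/bin/bash
# Hypothetical batch script for running an ISV code under CCM.
# Directives and commands are illustrative; consult site documentation.
#PBS -l mppwidth=64        # request 64 cores, well under the 2,048-core CCM cap
#PBS -l ccm=1              # ask the scheduler to provision the nodes in CCM

module load ccm            # load the CCM launch environment (site-dependent)
cd "$PBS_O_WORKDIR"
ccmrun ./isv_solver input.dat   # launch the unmodified ISV binary on CCM nodes
```

When the job finishes, the batch system hands the nodes back and they are reprovisioned into ESM for the next native workload.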
In the initial version of CLE3, the size of a CCM job is limited to 2,048 cores. Bolding says that's because they don't think they'll be able to achieve any more scalability than that with the TCP/IP implementation. Of course, multiple CCM apps could be running simultaneously. So, for example, an Abaqus CAE job could be running on 100 nodes, a CEI EnSight one on another 50, MATLAB on 20 more, and so on.
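A point worth making explicit: the 2,048-core cap applies per CCM job, not across the machine, so the mix described above is fine as long as each individual job stays under the limit. A quick check, assuming 12-core XT6 nodes (the per-node core count is an assumption, not stated in the article):

```python
CCM_CORE_CAP = 2048  # per-job limit in the initial CLE3 release

def fits_in_ccm(nodes, cores_per_node, cap=CCM_CORE_CAP):
    """True if a single CCM job of nodes x cores_per_node fits the cap."""
    return nodes * cores_per_node <= cap

# The article's example mix on hypothetical 12-core nodes; each job is
# checked independently, since the cap is per job, not machine-wide.
for name, nodes in [("Abaqus", 100), ("EnSight", 50), ("MATLAB", 20)]:
    print(name, nodes * 12, fits_in_ccm(nodes, 12))
```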
Bolding claims that the performance they've achieved from TCP/IP on top of SeaStar is close to what you could get out of an InfiniBand-based cluster. The upcoming "Baker" system will incorporate the faster "Gemini" interconnect, so they expect a significant performance gain just from the new hardware. In addition, next year Cray plans to offer an OFED communication stack on top of its interconnect, which should boost performance even further. Bolding is confident the Gemini-OFED combo will outrun InfiniBand in any benchmark.
With the initial CLE3 release, the company can now target customers who need the XT for their own scalable custom codes, but who would not have purchased a system before because they also wanted to run ISV codes alongside them. How big a market that represents is anyone's guess, but Cray will soon find out. Next year, with the optimized Gemini/OFED communication stack, the company can sell Bakers to customers who only have ISV apps to run but will pay a premium for better performance.
CLE3 will be released on the various XT platforms in stages. The initial version will be included with the currently-shipping XT6 and XT6m machines, with plans to make it available for the XT5 and XT5m systems sometime later in the year. CLE3 will also be packaged with the Baker supers from the start. Those systems are expected to start shipping in the third quarter of 2010.