October 21, 2010
Some of the most prominent organizations in the HPC community have joined together to bootstrap a non-profit corporation devoted to scalable file system technologies. On Tuesday, Cray, DataDirect Networks, Lawrence Livermore National Laboratory (LLNL) and Oak Ridge National Laboratory (ORNL) announced the incorporation of Open Scalable File Systems, Inc. (OpenSFS). The newly hatched group has cast itself as the focal point for development of Lustre and other open source file system technologies aimed at high performance computing.
According to OpenSFS CEO Norman Morse, the organization's mission is to bring together the stakeholders for high-end scalable file systems and provide a formal structure for moving the associated software forward. Today that effort will focus on Lustre, the open source parallel file system that grew up in HPC. The Lustre source repository is currently in the hands of Oracle, which inherited the technology when it acquired Sun Microsystems (which had itself acquired Lustre a year before being swallowed up). Since Oracle will focus post-Lustre 2.0 development on OpenSolaris and its own database products, Linux-based Lustre for HPC has been left to a disparate group of vendors, research labs, and academic institutions that share a common need to see the technology move forward.
OpenSFS' role will be to gather requirements from HPC stakeholders, prioritize them, and then fund the efforts to implement them. "We'll develop feature sets that are important for the entire community, within the context of OpenSFS, and then over time those feature sets will make their way back into the canonical Lustre release," explains Galen Shipman, group leader of technology integration at Oak Ridge National Laboratory and OpenSFS board member.
That model is essentially the one that prevailed before Oracle took control of the Lustre code. The rationale is to fold all software fixes and enhancements back into the official Lustre source repository, in order to avoid the prospect of multiple (and incompatible) implementations roaming around the ecosystem. "We absolutely refuse to fork the system," declares Morse. "We intend for Oracle to be the canonical definition of Lustre."
The initial focus for OpenSFS will be to support and stabilize the current Linux-based Lustre storage systems in production at HPC installations around the world. This is especially critical for the array of US Department of Energy labs, who have very large Lustre storage systems deployed, and even larger ones on the drawing board. The longer term goal for OpenSFS is to morph Lustre and related parallel file technologies into something that supports the transition to exascale systems several years down the road.
Requirements for new features will come out of technical working groups organized by OpenSFS, and those enhancements deemed most important will be brought forward as RFPs to the community. As a non-profit entity, OpenSFS won't be doing the development itself, but vendors that have accumulated Lustre expertise -- Whamcloud, Terascala, Xyratex, SGI, Cray, DataDirect Networks, and others -- are likely to bid on these contracts.
Funding for this work will come from OpenSFS membership dues, which, depending on your organization's commitment to the effort, can be quite expensive. There are three levels: promoter membership costs $500K per year and buys a seat on the OpenSFS board; contributor/adopter membership runs $50K per year and lets you manage a working group; and for $5K per year you can become a supporter member, which allows you to participate in a working group. As you might imagine, the further up the membership food chain you go, the more influence you have over which work gets funded.
Since Lustre development and testing requires large-scale computing and storage, support for this OpenSFS-initiated development will be provided by national labs, such as Lawrence Livermore and Oak Ridge, which already have resources in place for this type of work. At LLNL, the Hyperion system is available on the lab's unclassified network as a test bed for scaling different types of Linux cluster technologies. For the past year, Sun Microsystems (and then Oracle) used the machine for its Lustre 2.0 development. Likewise, Oak Ridge has its own test bed of storage systems from various vendors for developer access. Much of the SMP scalability work for Lustre was developed and tested at ORNL. Other research labs, both in the US and elsewhere, may end up donating their own HPC resources for Lustre development, especially if they're looking to drive specific file system development for their own programs.
Morse says members are already lining up to join the alliance. According to him, more than 20 organizations -- vendors, universities, and government labs -- are ready to sign on (although he wouldn't say at what membership levels). As soon as certain legalities of the OpenSFS incorporation are finalized, the organization will begin bringing those members aboard. Morse expects to attract in the neighborhood of 50 to 60 organizations.
To help that process along, next month OpenSFS will host an introductory meeting about the organization in conjunction with the Supercomputing Conference (SC10) in New Orleans. Although the group was too late to reserve a session at SC10 proper, the meeting will take place alongside the conference festivities. It is tentatively scheduled for Tuesday, November 16 at the Ritz-Carlton. Registration information will soon be available on the OpenSFS website.