November 24, 2006
The OSCAR working group has released a new version of the Open Source Cluster Application Resources (OSCAR) toolkit, OSCAR 5.0.
OSCAR is a software package that supports the use of high-performance computing by reducing the work of cluster configuration, installation, operation, and management. The infrastructure underlying OSCAR 5.0 has been completely reworked to include smart package managers, yum-based image building and package installation, easier client updating through a repository-based approach, and optimized start-up to reduce build time. Another long-anticipated feature added in 5.0 is the ability to support multiple Linux distributions and architectures on the same cluster.
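The repository-based update approach means cluster nodes can pull packages with standard yum tooling rather than waiting for a full re-image. As a generic illustration only (the repository name, URL, and host below are hypothetical and not part of OSCAR's actual configuration), a node-side repository definition might look like:

```ini
# /etc/yum.repos.d/oscar-cluster.repo  (hypothetical example)
[oscar-cluster]
name=Local OSCAR cluster repository (example)
baseurl=http://headnode.example.com/oscar/repo/
enabled=1
gpgcheck=0
```

With such a repository in place, a client updates itself with an ordinary `yum update`, so the head node only needs to refresh the repository rather than touch each node individually.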
Many other improvements to the OSCAR infrastructure, including better prerequisite handling and pre-installation system configuration checking, should make installation smoother than ever. Thanks to a shift to modular distribution tarballs, less time is wasted downloading packages for installation types that are not of interest.
A new utility called netbootmgr reduces the amount of time spent mucking about in the BIOS by centrally managing a node's behavior when a network boot is detected. You can easily switch nodes from accepting new images to booting off their local hard drive and back again from an easy-to-use interface.
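Switching a node between "accept a new image" and "boot from local disk" corresponds to changing what the node is served at PXE time. As a generic illustration of the idea (this is standard PXELINUX syntax with hypothetical file names, not netbootmgr's own configuration format), the two behaviors look like:

```
# pxelinux.cfg/default -- serve the imaging kernel over the network
DEFAULT image
LABEL image
  KERNEL vmlinuz-imaging
  APPEND initrd=initrd-imaging.img

# ...or, once the node has been imaged, hand off to the local disk
DEFAULT localboot
LABEL localboot
  LOCALBOOT 0
```

Centralizing this switch on the head node is what removes the need to toggle boot order in each machine's BIOS.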
Node image deployment is also faster and easier to keep track of, with new, scalable BitTorrent deployment options and an improved deployment monitor.
A new deployment kernel for SystemImager means improved hardware support, and the newly redesigned "Use Your Own Kernel" functionality means better support for bleeding-edge equipment and customized kernels.
The packages included with OSCAR 5.0 have been updated, including Maui 3.2.6p14, Torque 2.0.0p8, LAM MPI 7.1.2, MPICH 1.2.7, and Ganglia 3.0.3. OpenMPI 1.1.1 and Sun Grid Engine (SGE) 6.0u8 are also now included as core packages.
Looking toward future releases, a new package and database structure was designed to prepare OSCAR for Debian support.
OSCAR 5.0 has been tested for use with both IA32 and x86_64 processors under Fedora Core 4 & 5, Red Hat Enterprise Linux 4.0, Scientific Linux 4, and CentOS 4. Mandriva 2006 and SuSE Linux 10.0 (OpenSuSE) were also tested for IA32.
OSCAR 5.0, previous OSCAR versions, and additional information about the OSCAR project are available from the OSCAR web site at http://oscar.openclustergroup.org/.
The OSCAR working group is a consortium of industry, academic, and research participants. Organizations that contributed to OSCAR 5.0 include Revolution Linux, Bald Guy Software, Michael Smith Genome Sciences Centre, NEC HPC Europe, IBM, Intel, Indiana University, Louisiana Tech University, Oak Ridge National Laboratory (ORNL), and the University of Texas Health Science Center San Antonio. OSCAR is the product of the OSCAR working group of the Open Cluster Group (OCG). OCG is dedicated to making cluster computing practical. The OCG and its subgroups are open to all.
OSCAR Working Group Homepage
OSCAR Project Homepage
Open Cluster Group Homepage
HA-OSCAR (high-availability) Working Group Homepage