November 14, 2011
TORQUE 4.0 beta offers scalability and enterprise-ready speed and reliability
PROVO, Utah, Nov. 14 -- Adaptive Computing, manager of the world's largest supercomputing workloads and expert in HPC workload management, today announced the availability of the TORQUE 4.0 beta, which offers petaflop scalability and enterprise-ready speed and reliability for high-performance computing job and resource management. TORQUE is an open-source job and resource manager that provides control over batch jobs and distributed compute nodes and continually reports on node state and workload status. TORQUE 4.0 extends scalability to petaflop systems and beyond, adds parallel multi-threading to improve responsiveness, and strengthens reliability. The release also enhances control and security over users. Adaptive Computing's Ken Nielson, godfather of TORQUE, will present the features of 4.0 at a special TORQUE event at Supercomputing '11 on Wednesday, November 16, from 3:00 to 3:30 p.m. in Booth 927. The beta version will be available for community evaluation beginning Wednesday, November 16, and throughout December, with general availability of the free TORQUE 4.0 download following in January.
TORQUE 4.0 beta offers new capabilities and value for the user community. It extends scalability to petaflop systems and beyond with a new job radix, which allows jobs to span tens of thousands or even hundreds of thousands of nodes, not just processor cores, and a new manager-of-managers (MOM) hierarchy, which increases the number and manageability of supported nodes. The MOM hierarchy significantly reduces the overhead of node updates by distributing the node communication load efficiently across the network, freeing customers from processor and node communication bottlenecks so they can run the large numbers of jobs they need with high reliability and fast response.
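In TORQUE 4.0, the MOM hierarchy is described in a configuration file (conventionally `mom_hierarchy` in the server's `server_priv` directory). The fragment below is an illustrative sketch with hypothetical hostnames, showing how node-update traffic can be fanned out through intermediate aggregation levels rather than flowing directly from every compute node to pbs_server:

```xml
<!-- Hypothetical mom_hierarchy sketch: each <path> defines a reporting
     chain. MOMs in a lower <level> send their status updates to MOMs in
     the level above, which aggregate them before forwarding to pbs_server. -->
<path>
  <level>rack1-head,rack2-head</level>           <!-- first-tier aggregators -->
  <level>node001,node002,node003,node004</level> <!-- leaf compute nodes -->
</path>
<path>
  <level>rack3-head</level>
  <level>node101,node102,node103</level>
</path>
```

With aggregation at each level, the server handles a handful of consolidated status streams instead of one connection per compute node, which is what allows the supported node count to scale.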
Other features and benefits include parallel multi-threading, which improves responsiveness and reliability by providing instant response to user requests and submissions and allowing work to continue at a rapid pace even when individual processes linger: slow transactions no longer slow down the system. TORQUE 4.0 delivers faster job throughput by starting submitted jobs and ending completed jobs faster through optimized internal algorithms that reduce job start and completion overhead. TORQUE 4.0 also strengthens reliability for jobs and data transfers by replacing all UDP-based network communication with TCP, resulting in fewer job failures due to data loss on node-to-node transfers. Furthermore, TORQUE 4.0 offers enhanced control and security over users with a new authorization daemon that prevents users from running jobs as other users.
"Adaptive Computing is honored to be the custodian of the TORQUE open-source project. We actively develop the code base in cooperation with the TORQUE community, as TORQUE is an integrated part of the Moab product line," said Ken Nielson, TORQUE development manager at Adaptive Computing. "Adaptive Computing is committed to providing state-of-the-art resource and job management to support the HPC and open-source communities."
About Adaptive Computing
Adaptive Computing manages the world's largest supercomputing environments with its self-optimizing dynamic cloud management solutions and HPC workload management systems driven by Moab, a patented multi-dimensional decision engine. Moab delivers policy-based governance, allowing customers to consolidate and virtualize resources, allocate and manage applications, optimize service levels and reduce operational costs. Adaptive Computing is the preferred dynamic cloud and workload management solution for the leading global HPC and datacenter vendors. For more information, call (801) 717-3700 or visit www.adaptivecomputing.com.
Source: Adaptive Computing