March 26, 2013
March 26 — Silicon Mechanics, Inc., a leading manufacturer of rackmount servers, storage, and high-performance computing hardware, announces that it now offers the Intel Xeon Phi coprocessor, based on Intel Many Integrated Core (MIC) architecture. Intel Xeon Phi coprocessors deliver higher aggregate performance than ever for highly parallel applications and eliminate much of the need to maintain a separate programming architecture for accelerators.
“We’re proud to be offering the Intel Xeon Phi coprocessor in our hybrid computing server line,” said Ken Hostetler, Director of Product Management at Silicon Mechanics. “We know that the programming model supported by Intel Many Integrated Core architecture will be right for many implementations. Our customers are already excited about it.”
The Intel Xeon Phi coprocessor brings the programmability of the Intel Xeon processor architecture to an emerging class of highly parallel applications that benefit from processors with large numbers of cores and threads. While Intel Xeon processors remain the preferred choice for a broad range of applications, Intel Xeon Phi coprocessors are designed to deliver efficient performance for highly parallel applications that perform many mathematical calculations at once, for example, accurately simulating the path of a storm system.
Intel Xeon Phi coprocessors have more (and smaller) cores, more hardware threads, and wider vector units than multi-core Intel Xeon processors. This high degree of parallelism compensates for the lower speed of each individual core, delivering higher aggregate performance for highly parallel applications.
Building on established CPU architecture and programming concepts, Intel Xeon Phi coprocessors provide developers of highly parallel applications with the benefits of code re-use. Common programming models for Intel Xeon processors extend to Intel Xeon Phi coprocessors, so developers do not need to rethink an entire problem as they embrace high degrees of parallelism.
The same techniques that deliver optimal performance on Intel Xeon processors, such as scaling applications across cores and threads, blocking data for the cache and memory hierarchy, and making effective use of SIMD (single instruction, multiple data) vectorization, also apply to maximizing performance on Intel Xeon Phi coprocessors.
With greater reuse of parallel CPU code, software companies and IT departments can create and maintain a single code base, without retraining developers on the proprietary programming models associated with accelerators.
About Silicon Mechanics
Silicon Mechanics, Inc. is a leading manufacturer of rackmount servers, storage, and high-performance computing clusters, with one of the industry’s most comprehensive product offerings. Working collaboratively with customers to create best-fit solutions at competitive prices, we support our products with superior warranty offerings and a team of sales and service engineers dedicated to service and solutions excellence. Based in Bothell, WA, Silicon Mechanics is an Intel Premier Provider and a Premier member of the AMD Fusion Partner Program.
Source: Silicon Mechanics