October 26, 2012
BOTHELL, WA, Oct. 26 – Silicon Mechanics, Inc., a leading manufacturer of rackmount servers, storage, and high-performance computing hardware, will highlight and demonstrate one of its zStax open-architecture storage solutions based on NexentaStor at the SC12 Conference & Exhibition, to be held November 10-16, 2012 in Salt Lake City, Utah, Booth #2423. Silicon Mechanics will also launch its 2nd Annual Research Cluster Grant competition, in which a high-performance computing cluster will be donated to an educational or research institution.
Using “Let’s Talk” as its theme for SC12, the Silicon Mechanics sales and engineering team will engage in active conversations on high-performance computing, hybrid CPU-GPU computing, storage, and various approaches to solving computing problems. NVIDIA Tesla GPUs and the Intel Xeon Phi coprocessor, along with high-density server offerings typically used in high-performance environments, will be on display at Silicon Mechanics’ booth to serve as focal points for discussion.
A highlight will be a real-time demonstration of a Silicon Mechanics zStax unified storage appliance, powered by NexentaStor, a fully featured NAS/SAN software platform that provides enterprise storage features at mid-tier prices. During the demonstration, Silicon Mechanics will create a complex job on a diskless cluster booted with Bright Cluster Manager, and then show how NexentaStor seamlessly handles load failover. The demonstration will be conducted by industry expert Tommy Scherer, who has joined Silicon Mechanics and will help expand the company’s storage product offerings.
The zStax unified storage appliance powered by NexentaStor provides a storage solution that can be deployed on commodity storage hardware. As a Nexenta Certified Reseller, Silicon Mechanics can help customers save 70 to 80 percent compared to expensive, vertically integrated, proprietary storage technologies.
SC12 will also be the launch pad for Silicon Mechanics’ 2nd Annual Research Cluster Grant competition, a highly competitive research grant program through which the company will donate a high-performance cluster.
“Enabled by many generous donations from its partners, Silicon Mechanics is excited to build on last year’s highly successful event,” said Eva Cherry, CEO and President of Silicon Mechanics. “This year’s cluster features even more of the latest state-of-the-art hardware and software, including next-generation AMD Opteron processors and NVIDIA Tesla GPUs.”
About Silicon Mechanics
Silicon Mechanics, Inc. is a leading manufacturer of rackmount servers, storage, and high-performance computing clusters, with one of the industry’s most comprehensive product offerings. Working collaboratively with customers to create best-fit solutions at competitive prices, we support our products with superior warranty offerings and a team of sales and service engineers dedicated to service and solutions excellence. Based in Bothell, WA, Silicon Mechanics is an Intel® Premier Provider and a Premier member of the AMD Fusion Partner Program.
Source: Silicon Mechanics