November 19, 2010
During SC10 in New Orleans, we had a chance to drop by a number of exhibits to check in on vendors who are improving the HPC ecosystem and, by extension, the prospects for cloud computing.
Interconnects are, as you can imagine, a rather big piece of the ecosystem that supports HPC and cloud, yet we often don't spend enough time talking about them—and when we do, we tend to focus on the one vendor with the vast majority of the market share, Mellanox.
On Wednesday I dropped by the QLogic booth to have a chat with Joe Yaworski about the interconnects market as a whole and what elements of differentiation there are with such a market share imbalance.
To be more direct, I flat-out asked Yaworski how QLogic was different and what case studies there were to demonstrate that there are variations in performance or other factors.
His response was that QLogic's point of differentiation is that it did not retrofit its products with MPI on top, as others did. In the beginning, InfiniBand was designed to be the datacenter backbone replacement for Ethernet and Fibre Channel; in other words, it carried a rich set of features and capabilities that had nothing to do with HPC. Once InfiniBand found its niche in HPC, however, QLogic stepped up to design InfiniBand products that were MPI-targeted from the start, eliminating the hitches that retrofitting left behind. His argument is that QLogic's messaging rate is thereby superior, and that this is why the company was chosen for a large-scale implementation at Lawrence Livermore.
Here we have Mr. Yaworski providing more details on the above points…
While on the surface this conversation might seem to have little to do directly with clouds, it is worth noting that genuine differentiation exists in this market—and the more interconnect improvements emerge, the more finely tuned cloud computing capabilities become possible. Mellanox, for instance, often sees this connection and builds news releases around it, whereas QLogic tends to steer clear of cloud tie-ins, at least relative to its much larger and more pervasive competitor.
More from Joe on the Livermore connection...
This is an interesting market to watch, especially since the latency problems it needs to solve have a significant bearing not only on HPC in general, but also on cloud computing capabilities for high-performance computing applications.
Posted by Nicole Hemsoth - November 19, 2010 @ 3:08 AM, Pacific Standard Time
Nicole Hemsoth is the managing editor of HPC in the Cloud and will discuss a range of overarching issues related to HPC-specific cloud topics in posts.