Blog: From the Editor


It's All Done With Mirrors


If I ever develop an optical interconnect, I'm going to call it LotsaLux. Too cute? I should ask the folks at Lightfleet Corporation. On Monday the company unveiled its own optical interconnect technology called Corowave, whose name is derived from the verb coruscate, which means to sparkle or reflect brilliantly.

Lightfleet's Corowave interconnect uses laser transmitters and opto-electric receivers to support inter-processor communication in a highly parallel fashion. Each compute or storage node contains a transmitter and a receiver. Mirrors and lenses are used to direct the light transmissions to receivers in an all-to-all topology. The all-to-all nature of the Corowave interconnect is the key to the technology.

Chris Kruell, Lightfleet's VP of marketing, says the interconnect can be applied to a range of computer environments -- data center servers, telco equipment, and embedded devices -- anywhere that multiple nodes talk to each other incessantly. The all-to-all interconnect is designed to avoid the congestion and saturation of a traditional interconnect.

By getting rid of internal crossbar switches and cables, Lightfleet claims the design reduces the number of communication components by a factor of 40. That allows the interconnect to fit into a relatively compact space -- one third of a cubic foot for a 32-way server. In addition, the all-to-all connectivity keeps latency flat as the number of nodes scales up.

Kruell says any type of technical or commercial application that uses multicast or broadcast communication would benefit -- that is, just about any highly parallel workload on a multiprocessor system. For example, if someone wanted to combine data mining with video streaming to do real-time intelligent ad insertion, this type of data communication would be ideal. Another candidate would be a drug interaction simulation that introduces molecular dynamics into a static mesh simulation. Because a node doesn't know in advance where the next piece of data is coming from, all-to-all network communication has a tremendous advantage.

The degree of speedup will certainly depend upon the nature of the program. Existing MPI applications that make heavy use of the all-to-all or broadcast functions would be prime targets. But new applications that were specifically designed to take advantage of highly parallelized communication could be the real beneficiaries of Corowave.

"A true all-to-all architecture has not been available before," says Kruell. "So there's going to be a huge speedup potential by optimizing for that."

According to Kruell, another benefit of the Corowave technology is that transmitting one-to-all incurs zero incremental overhead compared to transmitting one-to-one.

"In a typical cluster today, the approach to multicast is to establish, usually in software, a set of serial point-to-point messages all containing the same thing, which can needlessly consume bandwidth of the I/O processors. The inherent parallel nature of the Corowave interconnect can eliminate these extra data sends and can free up the I/O processors to handle incremental data communications."

The announcement this week was intended mainly to get potential customers buzzing about the technology. Lightfleet is planning to incorporate the Corowave interconnect into its own high performance server, scheduled for release in July 2007. The company is also looking to license the technology to other OEMs, as yet unnamed.

It'll be interesting to see side-by-side performance comparisons of systems and applications when this technology gets put into real boxes.

-----

As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at editor@hpcwire.com.

Posted by Michael Feldman - March 08, 2007 @ 9:00 PM, Pacific Standard Time

