February 03, 2010
Facebook engineer Donn Lee probably feels a little bit like Amity Police Chief Martin Brody did in the first Jaws movie. After seeing the monster shark for the first time, Brody tells Captain Quint with a deadpan delivery: "You're gonna need a bigger boat."
Substitute social networking demand for the shark and the network for the boat, and you basically have Facebook's own horror story. The iconic social networking site is trying to cope with bandwidth requirements that double every year as it tries to support its 300 million (and growing!) user base.
In an article this week in Computerworld, Facebook's Lee explains the company's dilemma. According to him, their applications already require 100 Gigabit Ethernet and in the not-too-distant future will need 1 Terabit Ethernet. That means a single Facebook datacenter will need 64 Terabit pipes in the backbone. This would necessitate thousands of 10 GbE ports and more than 100 of the largest switches available -- not really a feasible solution. In a nutshell, bandwidth-hungry social networking is tied to an Ethernet anchor.
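The switch-count claim is easy to sanity-check with back-of-envelope arithmetic. A minimal sketch, assuming a hypothetical 48-port 10 GbE switch (the per-switch port count is my assumption, not a figure from the article):

```python
import math

# 64 terabit backbone, carved into 10 GbE ports
backbone_bps = 64e12   # 64 Tbps
port_bps = 10e9        # one 10 GbE port

ports_needed = backbone_bps / port_bps
print(int(ports_needed))        # 6400 ports -- "thousands" indeed

# Assumed switch size; real 2010-era chassis varied widely
ports_per_switch = 48
switches = math.ceil(ports_needed / ports_per_switch)
print(switches)                 # 134 switches, before any redundancy
                                # or oversubscription is factored in
```

Even under these generous assumptions (every port fully utilized, no redundancy), the numbers line up with Lee's "thousands of ports, more than 100 switches" estimate.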
OK, so Facebook is not a traditional HPC app, but it sure acts like one. From the article:
Facebook is different from many enterprises in that it throws many servers at a single application rather than dividing up each server into multiple virtual machines. That means it faces a special challenge of knitting the many servers together. But its bandwidth challenge is rooted in fundamental advances in technology. All server motherboards come with Gigabit Ethernet built in, and today's multicore processors can easily fill those pipes.
Sound familiar? HPC apps can at least take advantage of 40 Gbps InfiniBand for server-to-server communication today. But ultimately everything must go through an Ethernet pipe once you exit the LAN (ignoring the few InfiniBand-based WAN solutions).
In the Ethernet realm, 10 GbE is the fattest pipe available today and those products are just hitting the market en masse. The IEEE 802.3ba proposal, which specifies 100 GbE for the WAN backbone and 40 GbE for server-to-server communication, has been making its way through the IEEE standards process. 802.3ba is expected to be ratified later this year, and will presumably be followed by vendor offerings that support it.
Unfortunately, there doesn't seem to be any happy ending to this story. Given the sluggish speed at which the Ethernet industry moves, all of this seems like too little too late. Lee says the lack of bandwidth constrains innovation at Facebook, and ultimately the customer experience. For the time being, it looks like the disparity between the pace of social networking and Ethernet technology will continue to widen.
Posted by Michael Feldman - February 03, 2010 @ 6:12 PM, Pacific Standard Time
Michael Feldman is the editor of HPCwire.