March 18, 2012
How many servers does it take to power Amazon's massive cloud infrastructure? A fair question, considering it is one of the biggest cloud providers out there. Amazon, for obvious reasons, has not been forthcoming with the information, so Huang Liu, a research manager at Accenture Technology Labs, set out to find the answer. According to his calculations, which he writes about on his blog, the Amazon Elastic Compute Cloud (EC2) is home to nearly half a million servers.
Liu's findings are based on a combination of internal and external IP addresses, which he uses to come up with an estimate of the number of server racks in each region. He then extrapolates: if each rack holds four 10U chassis, and each chassis holds 16 blades, that gives a total of 64 blade servers per rack.
In table form, Liu shows the number of servers contained in each of Amazon's seven regions, for a grand total of 454,400. It's worth noting that the US East hub, Amazon's first, has the lion's share with 321,920. Based on this, Liu infers that "it is hard to compete with Amazon on scale in the US, but in other regions, the entry barrier is lower. For example, Sao Paulo has only 25 racks of servers."
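The rack-to-server arithmetic behind these figures can be sketched in a few lines. This is an illustrative back-calculation from the article's numbers, not Liu's actual code; the only rack count the article cites directly is São Paulo's 25.

```python
# Liu's per-rack assumption: 4 x 10U chassis per rack, 16 blades per chassis.
SERVERS_PER_RACK = 4 * 16  # 64 blade servers per rack

def servers_from_racks(racks: int) -> int:
    """Convert an estimated rack count to an estimated server count."""
    return racks * SERVERS_PER_RACK

# Sao Paulo: the article cites 25 racks.
print(servers_from_racks(25))        # 1,600 servers

# US East: 321,920 servers implies 321,920 / 64 = 5,030 racks.
print(321_920 // SERVERS_PER_RACK)   # 5,030 racks
```

Working backward the same way, the 454,400-server grand total corresponds to roughly 7,100 racks across all seven regions.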
Liu has also charted the expansion of Amazon's US-based infrastructure over the past six months, from August 23, 2011, to February 23, 2012, remarking on the impressive growth rate. According to his work, the US East region has been adding an average of 110 server racks per month. Liu points out that although the growth rate is roughly linear, it has slowed somewhat over the past couple of months.
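Combining that growth figure with the per-rack assumption gives a sense of scale. A minimal sketch, assuming Liu's 64-servers-per-rack figure applies to the newly added racks:

```python
# Rack growth reported for US East, converted to servers per month
# under the 64-servers-per-rack assumption.
RACKS_PER_MONTH = 110
SERVERS_PER_RACK = 64

print(RACKS_PER_MONTH * SERVERS_PER_RACK)  # 7,040 servers added per month
```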
How did he do it? Liu writes:
Figuring out EC2's size is not trivial. Part of the reason is that EC2 provides you with virtual machines and it is difficult to know how many virtual machines are active on a physical host. Thus, even if we can determine how many virtual machines there are, we still cannot figure out the number of physical servers. Instead of focusing on how many servers are there, our methodology probes for the number of server racks out there.
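The core idea of counting racks rather than servers can be sketched as follows. This is a simplified illustration, assuming each rack corresponds to one internal subnet of a fixed prefix length; Liu's post documents the actual mapping he derived from EC2's internal addressing, which is more involved than this.

```python
from ipaddress import ip_address

def estimate_racks(internal_ips, prefix_len=24):
    """Count distinct subnets among internal IPs as a proxy for racks.

    Assumption (illustrative only): each rack maps to one /24 subnet.
    """
    subnets = {int(ip_address(ip)) >> (32 - prefix_len) for ip in internal_ips}
    return len(subnets)

# Three IPs, two distinct /24 subnets -> two "racks".
ips = ["10.1.2.5", "10.1.2.200", "10.1.3.7"]
print(estimate_racks(ips))  # 2
```

The attraction of this approach is that subnets are observable from the outside via address probing, while the VM-to-host packing is not.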
There's a lot more to it than that; Mr. Liu lays out his methodology in detail in his blog post, along with a summary of the process.
Mr. Liu is quick to point out that these figures are estimates, based on a number of educated assumptions, but they are the best figures we have so far, and they are helping to inform the larger cloud conversation. Besides, as Liu notes, "the methodology is fully documented." He invites "inquisitive minds" to read over his findings and to point out flaws in his process. For its part, the community has done just that; the story has already been picked up by a number of news outlets in the last few days.