July 16, 2009
It was inevitable, with all the hype and marketing dollars directed at cloud computing these days, that someone would eventually start trying to use clouds for real work. Of course, this puts a nasty wrinkle into marketing plans, because once people start using clouds for real work there are actual performance results. The results themselves aren't too troubling because they are usually point cases, and negative messages are easily explained away by calling on the vagaries of a particular software stack and the giving away of snazzy memory sticks. But then the results lead the engineering-minded to wonder whether all of the available cloud computing alternatives behave in the same way, and if not, which of them might be best suited for a particular task. This leads to standardized testing and then, before you know it, we have full-fledged benchmarking on our hands.
Not great for marketing departments, but wonderful for customers. And the good news for customers -- and potential customers -- of cloud computing is that the community is starting to think seriously about benchmarking the performance of clouds.
There is a long history of benchmarking computer hardware and software with component tests that isolate, as much as possible, a single feature of the system under test to facilitate comparisons. In order to make sure that the comparisons are valid, benchmarks like those from the TPC require that testing be done in a managed environment with a fixed configuration that can be completely described and replicated for future testing. Further, since the TPC benchmarks focus on transactional database systems, they also require adherence to the ACID properties, notably (as we'll see in a moment) the consistency property.
But, when you think about it, this benchmarking model isn't really a good match for clouds, where the service model is designed to be dynamic, distributed and robust. One of the key selling points of cloud infrastructure is that it can grow and shrink with a particular user's demand, and workload can be shifted to wherever it is most advantageously served. In this environment the hardware may change over time, as may parts of the system software stack. Furthermore, creating a reliable distributed processing environment usually means replicating parts of the data, and making those data available in the presence of communications failures means relaxing some of the traditional guarantees on data consistency (Amazon's cloud storage offering only guarantees eventual consistency, for example).
So, while traditional approaches to benchmarking, and traditional benchmarks for that matter, will provide some useful information about the performance of clouds, the testing philosophy behind most benchmarks today doesn't lend itself to a test of merit that compares two clouds in a way that takes into account the very features that make them interesting solutions for certain classes of problems in the first place.
The general topic is dealt with ably in an interesting paper [PDF] from DBTest '09. In that paper the authors outline what they're looking for in a cloud benchmark: something that doesn't require a static system configuration, reflects the ability of the cloud to adapt to changing load, assesses robustness to failures of various components, and includes the full cloud software stack rather than just one component.
That's a pretty tall order, and amounts to something akin to demonstrating not just that your 1996 Porsche 993 is faster than a Corvette, but that it's cooler. Speed you can measure; "cool" is sufficiently general that it's pretty hard to quantify. Still, you have to have a goal, and working on the problem is certainly worthwhile (not least because then my friends Steve and John could avoid a lot of pointless bar fights). The authors do manage a pretty reasonable suggestion for a benchmark in the paper, which I commend to your summer beach reading lists.
There are some cloud benchmarking efforts already well past the paper stage. Cloudstone, for example, is a benchmark out of UC Berkeley designed to measure the performance of clouds that run Web 2.0 applications. And there is also MalStone, a benchmark of more direct interest to the HPC crowd, since it is designed specifically to allow the comparison of clouds built for data intensive computing.
As described by Robert Grossman, the director of the National Center for Data Mining at the University of Illinois at Chicago and chair of the Open Cloud Consortium, MalStone is a "stylized analytic computation of a type that is common in data intensive computing." The MalStone computation starts with a very large set of distributed files that document the date and time that users visited Web pages (including a user id), and also specify whether those users' computers later became compromised by malware. The computation then goes through the files trying to identify Web pages that are possible sources of contamination, by cross-referencing the browsing history for each user id with records of whether the user's machine was compromised. Web sites that figure prominently in the average browsing history of a cohort of machines that were subsequently compromised are suspect.
As Grossman points out, the task itself need not be a good way of finding Web sites hosting malware. It only needs to be a task sufficient to measure the performance of clouds for data intensive tasks:
We call MalStone stylized since we do not argue that this is a useful or effective algorithm for finding compromised sites. Rather, we point out that if the log data is so large that it requires large numbers of disks to manage it, then computing something as simple as this ratio can be computationally challenging. For example, if the data spans 100 disks, then the computation cannot be done easily with any of the databases that are common today. On the other hand, if the data fits into a database, then this statistic can be computed easily using a few lines of SQL.
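That "few lines of SQL" observation is easy to illustrate. Here is a minimal sketch in Python with sqlite3, using a hypothetical two-table schema (the real log format produced by the malgen generator differs); for each site it computes the fraction of visits whose user's machine was later compromised:

```python
import sqlite3

# Hypothetical schema for illustration only -- not the actual MalStone format.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE visits (user_id INTEGER, site TEXT, visit_time TEXT);
CREATE TABLE infections (user_id INTEGER, infect_time TEXT);
""")
conn.executemany("INSERT INTO visits VALUES (?, ?, ?)", [
    (1, "a.example", "2009-01-01"),
    (2, "a.example", "2009-01-02"),
    (3, "b.example", "2009-01-03"),
])
conn.executemany("INSERT INTO infections VALUES (?, ?)", [
    (1, "2009-02-01"),   # user 1's machine was compromised after the visit
])

# The MalStone-style statistic, roughly: for each site, the fraction of
# visits by users whose machines were subsequently compromised.
rows = conn.execute("""
SELECT v.site,
       AVG(CASE WHEN i.user_id IS NOT NULL THEN 1.0 ELSE 0.0 END) AS ratio
FROM visits v
LEFT JOIN infections i
  ON i.user_id = v.user_id AND i.infect_time > v.visit_time
GROUP BY v.site
ORDER BY v.site
""").fetchall()
print(rows)  # [('a.example', 0.5), ('b.example', 0.0)]
```

Trivial at this scale, as Grossman says; the benchmark's point is that the same join becomes hard when the logs span a hundred disks.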
There are two benchmarks, MalStone A and MalStone B. MalStone A computes a global figure for each Web site for all times included in the logs; MalStone B computes the figures by Web site by week. The datasets involved are quite large, with up to 100 TB of data.
MalStone A-10 uses 10 billion records so that in total there is 1 TB of data. Similarly, MalStone A-100 requires 100 billion records and MalStone A-1000 requires 1 trillion records. MalStone B-10, B-100 and B-1000 are defined in the same way.
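A quick back-of-the-envelope check on those definitions (assuming the decimal convention 1 TB = 10^12 bytes) puts the average record at about 100 bytes:

```python
# Sizing implied by the benchmark definitions: MalStone A-10 pairs
# 10 billion records with 1 TB of data.
records = 10_000_000_000      # MalStone A-10
total_bytes = 10**12          # 1 TB, decimal convention (an assumption)
bytes_per_record = total_bytes // records
print(bytes_per_record)       # 100
```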
You can read more about the benchmarks and get the actual source code for them at code.google.com/p/malgen/.
Earlier this summer Grossman and his team at the Open Cloud Consortium (OCC) announced results comparing Hadoop (the environment used by Facebook, Yahoo, and others) with the open-source cloud architecture Sector. Grossman describes Sector in a blog post as "an open source cloud written in C++ for storing, sharing and processing large data sets." The OCC uses 10 GbE circuits on the National Lambda Rail (NLR) as the backbone for its testbed, and runs its tests over the NLR between San Diego, Los Angeles, Chicago and Washington, DC.
The preliminary results are interesting. They show significant differences between Hadoop and Sector, but also between Hadoop using its own implementation of MapReduce and Hadoop using Streams with MalStone coded in Python. The most significant differences are for MalStone B, where performance ranges from 841 minutes with Hadoop/MapReduce to 44 minutes with Sector. Even the Hadoop/Streams implementation, which is considerably faster than the MapReduce version, comes in at 143 minutes. The range there is 14 hours to 44 minutes, worst case to best.
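Putting the quoted figures side by side makes the spread concrete, a roughly nineteen-fold gap between worst and best:

```python
# Reported MalStone B times, in minutes, from the OCC results quoted above.
times = {"Hadoop/MapReduce": 841, "Hadoop/Streams+Python": 143, "Sector": 44}
worst, best = max(times.values()), min(times.values())
print(f"worst case: {worst / 60:.1f} hours")          # 14.0 hours
print(f"speedup, best over worst: {worst / best:.0f}x")  # 19x
```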
These results highlight the importance of making sure your cloud is designed to solve the problem at hand. And as MalStone and other cloud benchmarking efforts continue to evolve, users will have even more robust tools for making informed decisions.