August 10, 2010
One might think that cloud would be a perfect fit for weather forecasting, but for some organizations, the connection runs no deeper than the pun.
Complex modeling to predict and track weather patterns demands the compute power and bandwidth typical of many HPC applications. While many of the applications behind this type of modeling and analysis could be a good fit for the cloud after the requisite tweaks, the bandwidth that Amazon and other public cloud vendors can provide just isn't enough to sway an organization like MetService, New Zealand's weather authority (and a state-owned enterprise), to the cloudy side.
MetService, like many other large weather forecasting and analysis centers around the world, relies on a suite of software that performs complex mathematical analysis on huge datasets in order to arrive at long-term predictions of patterns and trends, both globally and locally. This software is based on numerical weather prediction models that not only produce but analyze vast amounts of data in real time.
Numerical weather prediction (NWP) models forecast the weather by feeding real-time weather conditions as input values into mathematical models of the atmosphere. These models produce an enormous amount of data, and the compute required to crunch it is considerable, demanding supercomputer capacity. While there are some variations on this method, it is this capacity above all that makes forecasts accurate further into the future than they were in the past. The problem weather organizations face is clear: when the needs increase, so too do the hardware demands. For centers like MetService, which are already outgrowing their current capacity and space, taking their forecasting to the cloud seems like a viable option in theory but not in practice, according to IT officials from the weather center.
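To see why NWP compute demands track grid resolution so directly, consider a deliberately minimal sketch: a single finite-difference step for one-dimensional advection. This is not MetService's model (real NWP systems solve far richer three-dimensional equations), but it shares the essential property that the work per forecast timestep is proportional to the number of grid points.

```python
# Toy illustration of why NWP compute scales with grid resolution:
# one first-order upwind finite-difference step for 1D advection,
# du/dt + c * du/dx = 0, on a periodic grid (Python's negative
# indexing gives the wraparound at i = 0 for free).

def advection_step(u, c, dx, dt):
    """Advance the field u by one timestep with an upwind scheme."""
    # CFL stability condition: the timestep must shrink with the grid spacing,
    # which is why finer grids cost more in time as well as space.
    assert c * dt / dx <= 1.0, "CFL condition violated"
    return [u[i] - c * dt / dx * (u[i] - u[i - 1]) for i in range(len(u))]

# Doubling the number of grid points doubles the work of every step.
u = [0.0] * 100
u[10] = 1.0  # an initial disturbance
u = advection_step(u, c=1.0, dx=1.0, dt=0.5)
```

The per-step loop over every grid point, combined with the CFL constraint tying the timestep to the grid spacing, is the source of the resolution-versus-horsepower trade-off discussed below.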
This week ComputerWorld reported that MetService did look to the cloud when making plans to increase the resolution of its 3D model of New Zealand weather, but this was merely a glance. At the application level, the move appeared to be a possibility, but upon closer inspection, MetService decided that the bandwidth barriers were too high, so the company looked to investing in new hardware instead. In a statement to the publication, MetService's CIO explained that "Cloud operation is probably not feasible…though it was investigated. Amazon can supply arrays of HPC blade processors, but 10 GbE communication with the cloud is probably not sufficient bandwidth."
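A back-of-the-envelope calculation shows why a 10 GbE pipe can look thin next to NWP data volumes. The dataset size used here (10 TB per model run) and the 70 percent sustained link efficiency are hypothetical figures chosen for illustration, not numbers from MetService.

```python
# Rough transfer-time arithmetic for moving model data over a 10 GbE link.
# The 10 TB dataset size and 70% sustained efficiency are assumed figures.

def transfer_time_hours(data_bytes, link_bits_per_sec, efficiency=0.7):
    """Hours to move data over a link at a given sustained efficiency."""
    return data_bytes * 8 / (link_bits_per_sec * efficiency) / 3600

ten_tb = 10 * 10**12   # 10 TB of model input/output, in bytes
ten_gbe = 10 * 10**9   # 10 GbE line rate, in bits per second
print(f"{transfer_time_hours(ten_tb, ten_gbe):.1f} hours")  # prints "3.2 hours"
```

Hours of transfer time per run is an awkward fit for forecasts that must be produced on a fixed schedule, which is consistent with the CIO's bandwidth objection even if the exact figures differ.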
MetService's CIO also noted, "the upgrade will be almost entirely a hardware matter…it will involve doubling the linear resolution, taking data points at the corners of 4km squares rather than 8km squares and possibly increasing the number of vertical layers in the model…from a software point of view that task will be relatively easy." For this organization, it is the 8x boost in compute horsepower that resolution increase requires which is driving the purchase. While the cloud seems like an ideal fit, insufficient bandwidth is the barrier now forcing higher up-front investment costs on an organization that, while quite profitable, especially for a state-owned enterprise, still will need to scrape together the funds to buy, and find room for, the new infrastructure.
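The 8x figure follows from a standard NWP scaling argument: halving the horizontal grid spacing doubles the point count in each of the two horizontal dimensions (4x), and the CFL stability condition then requires roughly halving the timestep as well (another 2x), assuming the number of vertical layers and the forecast length stay fixed. A quick sketch of that arithmetic:

```python
# Why doubling linear resolution costs ~8x: the standard NWP scaling
# argument, assuming vertical layers and forecast length stay fixed.

def compute_scaling(old_spacing_km, new_spacing_km):
    ratio = old_spacing_km / new_spacing_km   # 8 km -> 4 km gives 2.0
    horizontal = ratio ** 2                   # 2x points in each of x and y
    timesteps = ratio                         # CFL: halve dt along with dx
    return horizontal * timesteps

print(compute_scaling(8, 4))  # prints 8.0
```

Adding vertical layers, as the CIO mentions is also under consideration, would push the multiplier higher still.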
Stories like these are bound to raise some eyebrows as rumors circulate that a handful of industry giants are developing an InfiniBand-enabled public cloud. If bandwidth and latency are the issues holding key segments of the HPC and enterprise markets back from grasping at the public cloud, and these are the customers with the funding to make the investment worthwhile (consider that MetService posted 36 million in operating revenues last year), we have some interesting years ahead, indeed. Like MetService, the entire financial services sector, in addition to grappling with the security concerns still being touted as all-important from nearly all corners, might be the next big adopter of the public cloud. The race is on, however, to build a cloud that balances capability with speed, and while Amazon's new instance type for HPC is a huge leap in the right direction, what will happen when Google or another provider marches forward, holding the InfiniBand banner high?
Posted by Nicole Hemsoth - August 10, 2010 @ 8:57 AM, Pacific Daylight Time
Nicole Hemsoth is the managing editor of HPC in the Cloud and will discuss a range of overarching issues related to HPC-specific cloud topics in posts.