
South Africa's HPC Center Tames Its 'Zoo of Architectures'


The 21st century has seen a plethora of supercomputing centers sprouting up across the globe. While the US, Western Europe, and Japan are still the dominant HPC territories, rapidly developing countries such as China, India, Brazil, Russia, and Saudi Arabia are quickly ramping up their HPC infrastructures. Of all the regions, though, Africa is still mostly an HPC desert. But in Cape Town, South Africa, the three-year-old Center for High Performance Computing (CHPC) aims to change all that.

Funded by South Africa's Department of Science and Technology (DST), CHPC is tasked with providing high-end computational resources for research organizations and businesses in South Africa and throughout Africa. Since it opened its doors in June 2007, CHPC, under the direction of Dr. Happy Sithole, has been busy building up its HPC infrastructure and gathering users. Because it is a regional HPC center -- essentially covering a whole continent -- its resources have to serve a wide array of clients and applications.

The center caters mainly to South African university researchers, some of whom are visiting from other African nations. Typical application domains include astronomy, material science, oceanography, climate modeling, bioinformatics, and computational fluid dynamics. A handful of commercial businesses are just discovering the center. The first 3D animation film developed in South Africa, "Lion of Judah," was rendered at CHPC by Character Matters studio. In addition, a local mining technology company makes use of the center's HPC infrastructure and a South African petrochemical firm is considering running and developing applications on one or more of the CHPC machines.

As it stands today, other African nations wanting to use the center's HPC resources have to come to South Africa for access. "We do want to expand and get users logging in from other African countries," says Dr. Sithole. The problem is one of connectivity, he explains. As with other infrastructure in much of Africa, modern data networks are just starting to be built out. An undersea cable was recently installed to connect East African countries to Europe and a similar cable is being considered for West Africa. Once these networks are in place, Dr. Sithole expects users outside South Africa to start requesting CHPC resources.

Today, the center houses four major HPC systems. The most notable characteristic of the machines is that no two are alike. The current collection includes an AMD Opteron-based IBM e1350 cluster (with eight nodes equipped with ClearSpeed accelerator cards), an IBM Blue Gene/P rack, an older Power-based IBM P690 (along with a spare not in production), and a hybrid Sun machine consisting of a SPARC64 VII M9000 server integrated with a Sun Constellation blade cluster powered by Intel Xeon processors. Operating systems range from AIX on the IBM P690 to Solaris on the M9000, with various flavors of Linux on the other machines. A portion of the e1350 cluster has been set up to boot either Linux or Windows HPC Server 2008. Dr. Sithole has referred to this somewhat eclectic mix of hardware as a "zoo of architectures."

Both the Blue Gene/P and the P690 were donated by IBM, while the others were procured by CHPC as funding allowed. Since the South African DST doesn't have the deep pockets of an agency like the US National Science Foundation (NSF), one of the driving considerations at CHPC is maximizing compute capability for the money spent. There is also the requirement of providing the right mix of architectures for different users.

For example, the newest machine -- the Sun system deployed in 2009 -- has the advantage of encapsulating both SMP capability (the M9000 server) and the more traditional distributed computing architecture (the Sun Constellation cluster). All told, the four CHPC systems add up to less than 50 teraflops of peak performance. The Sun M9000/Constellation system represents the lion's share of that, at 31 teraflops. That machine also holds the title of the most powerful supercomputer on the African continent and managed to break into the latest TOP500 list at number 311.

But the diversity of HPC systems at CHPC also presents a big challenge. With hundreds of users tapping into the systems, how does the center manage the computer systems so as to maximize overall utilization? That's where Adaptive Computing's Moab technology comes in. Starting in 2010, CHPC deployed Moab, in the form of the company's Adaptive HPC Suite, to help bring the center's supercomputers under a unified management scheme.

Rather than having to manually provision and perform job scheduling one machine at a time, Moab sits atop the workload manager on each system and orchestrates them to function as a single entity. It does so by acting as a metascheduler that encapsulates OS provisioning and job submission. The workload managers themselves -- TORQUE, SLURM, Tivoli LoadLeveler, Sun Grid Engine, or Platform LSF -- are doing the actual provisioning and job scheduling. But Moab is pulling the strings, juggling job priorities and service level agreements to make sure the users' applications are served.
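To illustrate the metascheduling idea in the abstract, the sketch below shows how a layer sitting above several native workload managers might route each incoming job to the system with the shortest expected wait. This is a conceptual illustration only, not Moab's actual code or API; the cluster names, submission commands, and backlog metric are assumptions for the example.

```python
"""Conceptual metascheduler sketch (assumptions, not Moab internals):
keep a view of each system's backlog and forward each job to the
underlying workload manager with the shortest expected wait."""

from dataclasses import dataclass


@dataclass
class Cluster:
    name: str
    submit_cmd: str            # native workload manager command, e.g. qsub/llsubmit
    total_cores: int
    queued_core_hours: float = 0.0   # simplistic backlog metric

    def expected_wait(self) -> float:
        # Rough proxy for wait time: backlog divided by capacity.
        return self.queued_core_hours / self.total_cores


@dataclass
class Job:
    name: str
    cores: int
    hours: float


def route(job: Job, clusters: list[Cluster]) -> Cluster:
    # Pick the system with the smallest expected wait that can fit the job,
    # then hand it to that system's own workload manager.
    candidates = [c for c in clusters if c.total_cores >= job.cores]
    target = min(candidates, key=lambda c: c.expected_wait())
    target.queued_core_hours += job.cores * job.hours
    print(f"{job.name}: forwarded to {target.name} via '{target.submit_cmd}'")
    return target


if __name__ == "__main__":
    # Illustrative systems loosely modeled on the CHPC mix; core counts are made up.
    systems = [
        Cluster("e1350",     "qsub",     2048),   # TORQUE
        Cluster("BlueGeneP", "llsubmit", 4096),   # LoadLeveler
        Cluster("Sun-M9000", "qsub",      512),   # Sun Grid Engine
    ]
    for j in [Job("cfd_run", 256, 12), Job("render", 64, 4), Job("climate", 1024, 24)]:
        route(j, systems)
```

In the real deployment the decision also weighs job priorities and service level agreements, as described above, rather than a single backlog number.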

According to Peter ffoulkes, Adaptive Computing's vice president of marketing, Moab isn't just farming out the work. It has built-in intelligence to look ahead and project future demands in order to optimize job execution and provisioning throughout the day. The idea is to improve both performance and throughput so more work can be done in a given time period. "They want to make their pool of resources as flexible as possible," explains ffoulkes. "That's uniform across everything in computing these days."

Thanks to Moab, access time improved and utilization rates soared at the center. According to Dr. Sithole, prior to deploying the Adaptive solution, utilization on the Blue Gene system was around 50 to 60 percent; now it's approaching 97 percent. Likewise, utilization on the Sun cluster is over 80 percent. Not only does this automated scheme improve datacenter performance compared to manual provisioning, but it also eliminates the element of human error.

Moab was also key in supporting the 3D animation work performed by the Character Matters studio at CHPC. In this case, the rendering application required a Windows environment, so Moab dynamically provisioned the e1350 cluster with Microsoft's Windows HPC Server 2008 environment whenever the studio needed those resources. Once the rendering jobs completed, Moab automatically directed the reprovisioning of those nodes for the next workload.
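As a rough sketch of what such a submission might look like from the user's side, the snippet below wraps Moab's msub command and asks for nodes running a particular operating system image. Moab does support OS requests for dynamic provisioning, but the exact flags, image name, and wrapper shown here are assumptions for illustration, not CHPC's actual configuration.

```python
#!/usr/bin/env python3
"""Illustrative sketch only: a site-local wrapper that submits a render job
and asks the scheduler to provision Windows nodes first. The 'msub' command
is Moab's, but the image name and this wrapper are hypothetical."""

import subprocess


def submit_render_job(script_path: str, nodes: int = 8,
                      os_image: str = "windows2008hpc") -> str:
    # Build a Moab submission; the '-l os=<image>' request asks the scheduler
    # to (re)provision the named OS image before the job starts. The image
    # name used here is an assumption, not a real CHPC image.
    cmd = [
        "msub",
        "-l", f"nodes={nodes}",
        "-l", f"os={os_image}",
        script_path,
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    job_id = result.stdout.strip()
    print(f"Submitted job {job_id}: {nodes} nodes, image {os_image}")
    return job_id


if __name__ == "__main__":
    # Hypothetical render script; nodes booted into Linux would be rebooted
    # into the Windows image first, then returned to the pool afterwards.
    submit_render_job("render_frames.bat", nodes=16)
```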

Dr. Sithole says they're already talking with a number of HPC vendors about their next system, but his strategy extends beyond just acquiring more hardware. The broader goal is to grow the HPC user base in Africa and South Africa. To that end, he's been working with the US-based Council on Competitiveness to formulate a plan for encouraging more industry participation. Once African HPC demand increases, he expects local universities and businesses will be interested in deploying small and medium-sized HPC systems of their own. That will leave CHPC to push the high end of HPC and evolve into a world-class supercomputing center.
