May 12, 2006
As chief technology officer and senior vice president of SGI, Dr. Eng Lim Goh has been the driver behind SGI's Project Ultraviolet, the incubator for the company's next-generation computer platforms. He has been evangelizing the Ultraviolet technologies for some time now, most recently at the HPCC conference in March. With the release of the Altix 4700 platform last month, some of these advanced technologies are now commercially available.
One major focus of Project Ultraviolet is the problem of dealing with the enormous databases that are becoming commonplace in both government and industry. Databases in the multi-terabyte and even petabyte range are no longer exceptional. In particular, a growing number of government agencies have a critical need to perform much more intensive knowledge discovery with these rapidly growing datasets. But because of the sheer size of the datasets, current HPC systems have difficulty extracting knowledge from them in an efficient manner.
"Because of that, over the years we have been increasingly investing in what we call accelerated knowledge discovery," said Goh. "We're employing a part of [the Ultraviolet technology] to target this particular problem."
In April of this year, SGI began delivering its new Altix 4700 platform, which is designed to address these knowledge discovery challenges. The company already has customers using older systems for this purpose, and they stand to benefit greatly from the new Altix 4700 technology.
The basis of the new architecture is the inclusion of much larger amounts of memory, enough to accommodate these extremely large datasets. SGI will use its globally shared memory architecture to allow users to store entire databases -- or very large subsets of them -- in memory, enabling the data to be processed much more quickly by the system's processors.
"We are talking on the order of multi-terabyte memory, managed by a single operating system," said Goh.
SGI has already shipped more than a dozen systems with over a terabyte of memory, and about a hundred systems with half a terabyte or more. But the new Altix will have much larger memory capacities. The systems SGI has in mind will scale to tens of terabytes and beyond. In fact, a few SGI customers are already testing systems in the 10-terabyte range. "The largest we have shipped is a 13-terabyte memory system for the Japan Atomic Energy Agency," said Goh.
The new Altix 4700 increases the memory headroom significantly, scaling up to 128 terabytes of memory. According to Goh, the physical addressing capability of the Intel Itanium architecture, used on all Altix platforms, is a good fit for these large globally shared memory systems. The x86 class of processors, although 64-bit capable, has a 40-bit physical address limit, which constrains it to a one-terabyte memory reach.
"These x86 processors are ideal for clusters, because they only have to address memory for a single node, explained Goh. "But increasingly, our customers need to go way beyond that. They require every processor in our entire network to see all of the memory of all nodes. The Itanium is the only processor I know of that has enough physical addressing space to cover more than a terabyte."
But it's not just a matter of big memory; high-performance I/O is required as well. The standard Linux I/O performance of one gigabyte per second is not adequate. And even this performance level can be a stretch for a single instance of the Linux operating system.
"If you just do the math -- ten terabytes at one gigabyte per second -- it will take you about a day to fill that database," said Goh.
To remove the I/O performance barriers in Linux, SGI transferred some of its IRIX OS software technology into the Linux kernel. These changes have been accepted by the open source community and are now included in the Linux 2.6 kernel. With Linux 2.6 running on SGI hardware, the company was able to read and write data to a single file at 10 gigabytes per second. This is just the first step; with the introduction of the Altix 4700, SGI intends to move well beyond this level of performance.
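For readers curious what that kind of throughput implies at the application level, the following is a rough sketch, not SGI's benchmark code, of the streaming pattern such numbers assume: large, aligned, direct reads against a single file. The file name, request size and use of O_DIRECT are all assumptions for illustration.

```c
/* Sketch of a streaming-read pattern: big, aligned requests that
 * bypass the page cache.  The path and sizes below are invented. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define BUF_BYTES (16UL * 1024 * 1024)   /* 16 MB per request */

int main(void)
{
    int fd = open("/bigfs/dataset.bin", O_RDONLY | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }

    void *buf;
    /* O_DIRECT requires sector-aligned buffers; 4096 covers most disks */
    if (posix_memalign(&buf, 4096, BUF_BYTES) != 0) {
        fprintf(stderr, "buffer allocation failed\n");
        return 1;
    }

    ssize_t n;
    unsigned long long total = 0;
    while ((n = read(fd, buf, BUF_BYTES)) > 0)   /* stream sequentially */
        total += n;

    printf("read %llu bytes\n", total);
    free(buf);
    close(fd);
    return 0;
}
```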
Goh explained that the third goal of the new architecture, beyond increased memory and I/O bandwidth, is to keep costs in check. One way to do this is to allow memory and processors to scale independently.
"As we grow memory, the customer should not be forced to increase the number of processors -- which is typically the case," said Goh. "If you think of a cluster, once you max out the memory in a node, in order to get more memory, you have to add more nodes."
For these large database applications, which usually don't require as much computational performance as a typical HPC application, a standard cluster architecture is unbalanced. To make matters worse, since cluster memory is fragmented, not shared, applications can't access the database as a unified object, contributing to software complexity.
"We allow for the ability of the memory to scale independently of the number of processors," explained Goh. "The way we do it is to put the [intelligence] in the chipset, the things between processors and the memory. So you could have nodes with memory below them, but no processors above them."
Once users have this big memory capacity and the ability to feed it fast, the memory itself must be reliable. As memory capacity increases to the order of terabytes, hardware errors become statistically more likely, at least for commercial off-the-shelf (COTS) memory. But to keep costs reasonable, the memory needs to be COTS; multiple terabytes of specialty memory with superior reliability would not be economical.
"The customer will not tolerate COTS-class reliability," said Goh. "If the application has already invested a few minutes or tens of minutes reading the database from disk into memory, the user wants to avoid reloading the data because of memory failures that happen while the application is running."
So what SGI has done is add extra logic to its chipset to improve memory reliability. The key technology used is a proactive memory scrubber, which is implemented on the HUB chip of the Altix chipset. While the application is running, the scrubber stress-tests portions of memory that the processor is not currently using. If a memory cell is close to failure, the stress-test will actually force a failure, causing the system to deallocate that memory page. This shrinks the available memory pool slightly, but the running application is barely inconvenienced. This solution allows SGI to use COTS hardware to scale out memory.
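The scrubber itself lives in silicon, but the idea can be illustrated with a toy user-space simulation. Everything below, including the pool size, the stress patterns and the injected weak cell, is invented for the sketch; it simply shows the walk, stress, verify and retire cycle Goh describes.

```c
/* Toy illustration of proactive scrubbing: walk pages that are not in
 * use, write and verify stress patterns, and retire any page that
 * fails so applications never touch it again.  The real scrubber runs
 * in the Altix HUB chip; this is a conceptual simulation only. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define PAGE_BYTES 4096
#define NPAGES     64

static uint8_t pool[NPAGES][PAGE_BYTES];
static bool retired[NPAGES];             /* pages removed from the pool */

/* Pretend page 13 has a weak cell that sticks at zero under stress. */
static void stress_write(int page, uint8_t pattern)
{
    memset(pool[page], pattern, PAGE_BYTES);
    if (page == 13)
        pool[page][100] = 0;             /* injected fault */
}

static bool verify(int page, uint8_t pattern)
{
    for (int i = 0; i < PAGE_BYTES; i++)
        if (pool[page][i] != pattern)
            return false;
    return true;
}

int main(void)
{
    const uint8_t patterns[] = { 0xFF, 0x00, 0xAA, 0x55 };
    for (int p = 0; p < NPAGES; p++) {
        for (size_t k = 0; k < sizeof patterns; k++) {
            stress_write(p, patterns[k]);
            if (!verify(p, patterns[k])) {
                retired[p] = true;       /* deallocate: pool shrinks slightly */
                printf("page %d failed under stress; retired\n", p);
                break;
            }
        }
    }
    return 0;
}
```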
The increased memory capacity, I/O bandwidth and memory reliability form the basis of the new Altix 4700 platform. In addition, the new system will be implemented with a blade architecture -- instead of the older generation's brick form factor -- and include an updated chipset. The 4700 supports a maximum of 512 processor cores within a single node and is able to link up to 8000 cores with SGI's proprietary NUMAlink interconnect. The blade form factor allows very fine-grained configuration options for compute, memory and I/O resources and also substantially improves system density.
Initially, the Altix 4700s will ship with Intel Itanium Madison 9M processors, but will be socket-upgradable to the dual-core Montecito processor. According to Goh, there is huge pent-up demand for the new platform, and SGI is working to ensure as large a supply as possible through the transition to Montecito.
The Altix 4700 also features FPGA capability via a RASC blade and brings new I/O options, including PCI Express. Other advanced technology features that are part of Project Ultraviolet, such as the processor-in-memory (PIM) technology and vector data movement logic, will not be implemented in the Altix 4700 but will be included in the next-generation platform, scheduled for 2008.