July 21, 2006
After a delay of nearly a year, this week Intel finally launched its dual-core Itanium 2 Processor 9000 series (formerly code-named Montecito). The 9000 series was introduced in five different flavors, with a variety of clock speeds and cache memory sizes. Over the next several weeks all eight OEMs that produce Itanium-based servers are expected to promote systems that incorporate the new dual-core chip.
The Itanium represents Intel's four-year venture into the mainframe microprocessor market. The company promotes the chip as an industry-standard alternative to the proprietary 64-bit RISC architectures, specifically Sun Microsystems' UltraSPARC processor and IBM's Power processor. Itanium's Explicitly Parallel Instruction Computing (EPIC) architecture differs from both CISC and RISC approaches, using instruction-level parallelism (ILP) to achieve high levels of performance.
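To make the EPIC idea concrete: instead of having the hardware discover parallelism at runtime, the compiler statically packs independent operations into fixed-size bundles (three instruction slots per 128-bit bundle on IA-64). The following Python sketch is purely illustrative — it is not Intel's compiler, and it ignores IA-64 details such as template bits and predication — but it shows the core scheduling idea of grouping non-dependent instructions together:

```python
# Illustrative sketch only: EPIC moves the job of finding instruction-level
# parallelism from the hardware to the compiler, which packs independent
# operations into bundles (three slots per bundle on IA-64).

def pack_bundles(instructions, slots_per_bundle=3):
    """Greedily group instructions into bundles of independent operations.

    Each instruction is a (dest, sources) tuple of register names.
    Two instructions conflict if one writes a register the other reads
    or writes (a data dependence), so they cannot share a bundle.
    """
    bundles = []
    current = []
    written = set()  # registers written by ops in the current bundle
    read = set()     # registers read by ops in the current bundle

    for dest, sources in instructions:
        conflict = (dest in written or dest in read or
                    any(src in written for src in sources))
        if conflict or len(current) == slots_per_bundle:
            bundles.append(current)
            current, written, read = [], set(), set()
        current.append((dest, sources))
        written.add(dest)
        read.update(sources)

    if current:
        bundles.append(current)
    return bundles

# Example: the third op depends on r1 and r2, so it starts a new bundle.
prog = [
    ("r1", ("r8", "r9")),    # r1 = r8 + r9
    ("r2", ("r10", "r11")),  # r2 = r10 * r11  -- independent, same bundle
    ("r3", ("r1", "r2")),    # r3 = r1 - r2    -- data-dependent
]
print(pack_bundles(prog))  # two bundles: the first two ops, then the third
```

In a real EPIC compiler this analysis happens at compile time, which is why the architecture leans so heavily on compiler quality — a point that figured in Itanium's early performance reputation.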
Itanium's declared market turf is mission-critical enterprise servers and high-end supercomputers, neither of which is a particularly high-volume segment when compared to the overall server and commodity cluster computing market. But according to IDC, revenue for Itanium-based servers will grow to approximately $6.6 billion by 2009. And over the next five years, the compound annual growth rate for Itanium-based servers is expected to be 35 percent, compared to 3.4 percent for the overall server market.
SGI is particularly enthusiastic about the new chip since it has made a large investment in the architecture in its Altix systems. The company claims that its new Itanium 2 9000-equipped platforms, which are expected to be commercially available at the end of August, are already achieving record performance on applications such as computational structural mechanics, molecular dynamics, weather forecasting and environmental modeling.
HP, which sells the vast majority of Itanium-based servers, is also happy to see the chip. In this issue of HPCwire, Ed Turkel, Manager, HPC Product Marketing at HP, discusses the future of Itanium 2-based systems in the rapidly developing HPC enterprise market. Industrial applications such as seismic modeling, aerospace/aeronautical design, financial forecasting/modeling and automotive CAE represent some of the more prominent HPC enterprise workloads. Turkel makes a case that Itanium is well-suited for this growing market.
Says Turkel: "With vastly superior on-board memory caching and I/O systems designed to deal with larger data volumes, servers based on the Intel Itanium 2 processor can provide faster, more accurate calculations at a lower price point than comparable RISC-based systems."
The Intel chip does appear to be steadily eroding the market share of its RISC competitors. In its first real year of production (2003), Itanium-based systems represented only about a tenth of the market occupied by RISC systems. But as of this year, Itanium-based systems generate almost half as much revenue as either UltraSPARC or Power-based systems.
Even though Itanium chip volumes have grown steadily, both Intel and HP originally envisioned a faster penetration into the IT market. Both analysts and customers expected more from the earlier Itanium versions, so the architecture developed a reputation as an underachiever. But Intel and its Itanium OEM fans are certainly placing a lot of their hopes on the new dual-core offering. The chip doubles the performance of the previous-generation single-core Madison, and accomplishes this with less power.
Although Intel sees the IBM Power and Sun UltraSPARC RISC chips as Itanium's competitors, its biggest threat may be from below -- the AMD Opteron and Intel's own Xeon microprocessor. These dual-core 64-bit x86 chips are being used in systems throughout the high-end enterprise server and HPC markets. Even though the Itanium has certain technological advantages over the x86 chips -- such as greater memory reach and higher levels of instruction parallelism -- for many applications these benefits are outweighed by the price/performance advantages of Opterons and Xeons. In addition, the software momentum that is associated with the x86 architecture creates a formidable barrier for the establishment of competing architectures. All of this pressure tends to push the Itanium- and RISC-based systems towards higher-end and more specialized applications.
Considering that Intel and the other vendors in the Itanium Solutions Alliance have already poured billions of dollars into the architecture, it's hard to imagine that they'll pull the plug anytime soon, even if they achieve only modest success. It would be unfortunate if the chip disappeared entirely. With regard to general-purpose microprocessors, there is not a whole lot of diversity in the IT industry right now. And if Itanium fails, could any new architecture survive?
Elsewhere in the Issue
Speaking of surviving, global warming appears to be in full swing this summer in the Northern Hemisphere. There are still learned people who don't quite believe in the whole concept, but I'm guessing few of them live in southern England. This past week in Wisley, just south of London, the temperature reached 97.7 degrees Fahrenheit (36.5 Celsius), a record for July in the usually temperate British Isles. This was part of an overall heat wave that has affected large areas of Europe. Meanwhile in the U.S., much of the central and western parts of the country are also enduring sizzling temperatures. The mile-high city of Denver hit a record 101 degrees just last week. While daily weather extremes don't necessarily indicate the climate is changing, meteorological data recorded over the past several decades does point to global warming.
But in order to really understand climate changes and their effects, accurate simulations have to be developed. As climatologists have accumulated knowledge of the Earth's weather and as supercomputing power has increased, the models have become increasingly sophisticated. In this week's issue of HPCwire, our feature article, "The Next Generation of Climate Models," talks about one of the more advanced climate models in use today. In this article, Per Nyberg, Earth Sciences Segment Director at Cray, describes the Community Climate System Model (CCSM) and the supercomputing resources behind it. CCSM integrates a variety of component systems such as ocean simulations, atmospheric simulations and ice sheet modeling into a unified picture of the Earth's climate. The next version of CCSM will do even more:
"We expect that, when the next climate model is released, we'll have options for essentially full atmospheric chemistry, dynamic vegetation processes on the land, ocean ecosystems, and more," says ORNL's John Drake, chief computational scientist for the Climate Science End Station effort. "By pulling all of these processes together, we'll be able to create not only a physically coupled model, but a chemically coupled and biologically coupled climate model. That's a big stretch over where we are now."
As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at firstname.lastname@example.org.
Posted by Michael Feldman - July 20, 2006 @ 9:00 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.