February 07, 2012
AMD is plotting a relatively conservative roadmap for its Opteron CPUs over the next year or two, even as it preps its heterogeneous computing technology for the big leap into the server arena. At the company's 2012 Financial Analyst Day last week, AMD execs re-pledged their commitment to the server market and outlined a strategy that puts less emphasis on high-performance cores and design complexity and more on power efficiency and building SoC products tailored to specific datacenter workloads.
In the near term, though, it will be very much business as usual for the Opteron line. The big news (or non-news) is that AMD will not follow the top-of-the-line 16-core Interlagos chip with a 20-core successor -- the so-called “Terramar” CPU. Instead, the company will offer “Abu Dhabi,” which, like its predecessor, tops out at 16 cores. It also uses the same 32nm process technology and offers the same memory support (quad-channel DDR3). There is no support for PCIe Gen 3, which was skipped this go-around with the rationale that the newer, faster bus interface won't be needed “until the market is better positioned for wide adoption of that very high-end technology.”
Abu Dhabi and the other next-generation Opterons (“Seoul” and “Delhi”) will be based on a new core architecture, known as “Piledriver,” and are scheduled to launch in the second half of 2012. Rather than a complete redesign, Piledriver appears to be a tweak of the modular CPU design AMD pioneered with Bulldozer last year. The architectural enhancements include new ISA extensions and improved instructions per clock (IPC). Essentially, AMD is doing two tocks (microarchitecture redesigns) in a row, with no intervening tick (process technology shrink).
The chipmaker's conservative Opteron strategy may reflect some new thinking there. According to AMD's new CTO, Mark Papermaster, in the past the company has put too much effort into squeezing the last ounce of performance out of its cores, using extra design complexity to compensate for second place in semiconductor manufacturing. “We've gone after that last two to three percent performance, and historically it's led to a longer development cycle,” said Papermaster. According to him, the new focus is on time-to-market and using hardware-software codesign to deliver application platforms, rather than just silicon.
Lisa Su, Senior Vice President and General Manager of AMD's Global Business Units, said the decision not to scale up the core count on the next-generation Opterons was the result of customer feedback. According to her, rather than wanting more cores, their server clients were just interested in upgraded Opteron parts that could be plugged into the existing G34 and C32 sockets. In any case, since AMD has no fab partner that is ready to move to a sub-32nm process, there really wouldn't be additional die space available for more cores, caches, and bigger memory controllers without a much more drastic microarchitecture redesign.
According to Su, the new Piledriver cores will deliver more performance at the same TDP, although, at this stage, AMD is not offering any numbers that would shed light on those improvements. If the company can eke out some additional FLOPS from the Piledriver cores, along with some interesting ISA extensions, that's probably its best shot at competing against Intel's Sandy Bridge Xeon CPUs (E5 series), which are built on 32nm technology. The Xeon E5 CPUs are already installed in a number of top supercomputers, although the chips are not officially launched yet.
Delhi, in case you were wondering, is the successor to the not-yet-released Zurich CPU, which is based on the current-generation Bulldozer core. Zurich is slated for release in the first half of 2012. These 1P processors will inhabit a new socket known as AM3+, code-named “Jakarta.” They are being targeted at lightweight web serving and the related microserver space, which Intel has recently made a play for with its low-power Xeons and higher-end Atom chips.
Beyond Piledriver is the Steamroller architecture, another modular CPU design that promises greater parallelism, which could mean more cores, simultaneous multithreading, or both. After Steamroller comes “Excavator,” a microarchitecture that focuses on greater performance. No dates were attached to either of these designs, but 2013 and 2014, respectively, would be likely timeframes.
Of course, the other side of AMD is its GPU portfolio. But it's notable that none of the product talk during the Financial Analyst Day mentioned the company's FireStream offerings, its discrete GPU accelerators aimed at high performance computing. In the face of NVIDIA's nearly complete dominance of this space with its Tesla products, it's likely that AMD has ceded this market to its rival, at least for the time being.
Where AMD has a clear advantage is in its ability to marry its CPU and GPU logic onto integrated heterogeneous chips, which it calls APUs (accelerated processing units). All the APUs the company has developed to date have been targeted at client devices -- desktops, notebooks and, soon, tablets. Not surprisingly, company execs devoted much attention to the APU client roadmap during the Analyst Day. But there was also a fair amount of discussion about migrating APU designs into the server space.
In particular, AMD sees custom datacenter workloads in areas like multimedia web serving, search engine processing, visual rendering, and high performance computing as an opportunity for GPU acceleration on its heterogeneous computing platforms. One aspect of this is that the company intends to build SoC products in a modular way, combining x86 cores with its GPU designs and tailoring different chip designs to different workloads. AMD is even willing to incorporate third-party IP blocks into these SoCs -- for example, fixed-function cores aimed at very specific types of processing, like codec encoding/decoding.
The logic behind this strategy is that because many of these workloads are feeding the boom in web-connected mobile devices, there is a huge and rapidly growing market for such server infrastructure. “You're not just talking about racks and racks of servers that just care about power and performance,” said Su, “you're talking about specialty workloads, and it actually will fragment the server market a bit.”
Citing IDC numbers, AMD points to projected compound annual growth rates over the next three years of 15 percent for cloud-based web applications, 13 percent for virtualized workloads, and 7.3 percent for high performance computing. Whether those numbers pan out as forecast and are enough to support volume production of specialized SoCs remains to be seen, but AMD doesn't want to be left with its one-trick Opteron pony if the server market starts to fragment.
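To put those figures in perspective, a compound annual growth rate multiplies the base each year rather than adding to it, so 15 percent compounded over three years implies roughly 52 percent total growth. Here is a minimal illustrative calculation using the IDC figures cited above; the segment labels and three-year horizon come from the article, and everything else is simple arithmetic rather than AMD or IDC data:

```python
# Illustrative only: compound the IDC CAGR figures cited above over three years.
def growth_factor(cagr: float, years: int) -> float:
    """Cumulative growth factor for a given compound annual growth rate."""
    return (1.0 + cagr) ** years

segments = [
    ("cloud-based web applications", 0.15),
    ("virtualized workloads", 0.13),
    ("high performance computing", 0.073),
]

for name, cagr in segments:
    total = growth_factor(cagr, 3) - 1.0
    print(f"{name}: {cagr:.1%} CAGR -> roughly {total:.0%} total growth over three years")
```

Run as written, this works out to about 52 percent, 44 percent, and 24 percent cumulative growth for the three segments, respectively.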
Executing on that strategy is going to be the principal challenge for AMD. In the short term, it has to find a way to get its server market share above the single-digit level -- 5 to 7 percent in 2011, by most estimates -- on the merits of its Opteron line. But the more difficult task ahead will be moving its heterogeneous technology into the datacenter. Although AMD has more of the pieces in place than its competitors, the heterogeneous waters here are uncharted. Nimbleness will be well-rewarded.