October 30, 2008
How do we move high-performance computing forward? At Intel, we are producing technologies that enable major breakthroughs in science, engineering, medicine, and an array of other fields. At the same time, we are helping to make it simpler and more affordable for organizations to get involved with high-performance computing, from small and medium-sized businesses that need cost-effective systems to large-scale data centers that use HPC to solve new problems. But where do we go from here?
In this new series of articles, HPC@Intel: Moving HPC Forward, we will share new and innovative ideas for solving today's -- and tomorrow's -- key challenges in HPC. In the first few months, we will explore strategies for scaling performance forward, evaluate when to say no to parallelism, and explain why balanced systems can deliver performance that rivals systems with alternative architectures. Over the course of the series, we will show you how Intel is advancing the state of HPC while also bringing HPC to a wider range of users.
Advancing HPC at Intel
Intel® architectures are the choice for more than 75 percent of the world's Top500 HPC systems. But our contribution to HPC extends well beyond the production of high-performance processing architectures.
Creating balanced systems is a top priority at Intel. We know that sustainable HPC performance can be achieved only by balancing processor capacity with memory capacity and I/O bandwidth. We are helping to develop those balanced systems and to produce components that deliver significant performance gains for HPC applications.
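The balance argument can be made concrete with a simple roofline-style estimate (an illustrative sketch with hypothetical figures, not Intel specifications): a kernel's attainable throughput is capped both by peak compute and by memory bandwidth multiplied by the kernel's arithmetic intensity, so adding processor capacity without matching memory bandwidth yields little sustained gain.

```python
def attainable_gflops(peak_gflops, mem_bw_gbs, flops_per_byte):
    """Roofline-style cap: the lesser of the compute peak and the
    bandwidth-limited rate (bandwidth x arithmetic intensity)."""
    return min(peak_gflops, mem_bw_gbs * flops_per_byte)

# Hypothetical node: 100 GFLOP/s peak compute, 25 GB/s memory bandwidth.
# A streaming kernel such as DAXPY performs ~2 flops per 24 bytes moved
# (~0.083 flop/byte), so it is bandwidth-bound far below peak:
print(attainable_gflops(100.0, 25.0, 2 / 24))   # ~2.08 GFLOP/s

# A high-intensity kernel (e.g., a large matrix multiply) can reach peak:
print(attainable_gflops(100.0, 25.0, 16.0))     # 100.0 GFLOP/s
```

The point of the sketch: for the bandwidth-bound kernel, doubling peak compute changes nothing, while doubling memory bandwidth doubles sustained performance, which is why balanced systems matter.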
We are also conducting upstream research on software and hardware technologies to accelerate multi-core and many-core architectures. We are bringing memory capacity closer to the cores, exploring new interconnect strategies, and examining new network fabrics and network packaging technologies. This research has already enabled us to introduce several new technologies into the HPC industry.
We are working to optimize power usage for HPC, not only at the processor and board level but also at the rack and data center level. More than 85 percent of our internal servers are HPC systems. Running those systems has taught us how to optimize power and cooling for large data centers. Now we can achieve very high power density without liquid cooling by employing careful warm and cold air management and other optimizations. We have shared and will continue to share that information with partners and end users.
Meanwhile, we provide a rich portfolio of software tools for HPC. The Intel® Cluster Toolkit includes compilers, performance analysis tools, threading tools, and libraries, such as the Intel® Math Kernel Library and Intel® MPI Library. These tools help developers scale performance forward through focused, surgical changes to code. We also offer deep software expertise. With software engineers specializing in key HPC segments, such as manufacturing, oil and gas, and financial services, we work together with industry players to optimize HPC applications.
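The kind of focused, surgical change these libraries enable can be sketched in a few lines. This is a generic illustration using NumPy's BLAS-backed matrix multiply as a stand-in for an optimized math library such as the Intel Math Kernel Library; the function name and sizes here are our own, not from Intel's tools.

```python
import numpy as np

def matmul_naive(a, b):
    """Hand-rolled triple loop: correct, but slow for large matrices."""
    n, k = a.shape
    _, m = b.shape
    c = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            for p in range(k):
                c[i, j] += a[i, p] * b[p, j]
    return c

rng = np.random.default_rng(0)
a, b = rng.random((40, 30)), rng.random((30, 20))

# The "surgical" change: one line, the same result, with an optimized
# BLAS implementation doing the work under the hood.
c_fast = a @ b

assert np.allclose(matmul_naive(a, b), c_fast)
```

Swapping a hand-written kernel for a tuned library call is often the highest-leverage single edit in an HPC code: the numerical result is unchanged, but the library version is vectorized, cache-blocked, and threaded.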
We are also heavily involved in education. Intel has partnered with 800 universities around the world to develop curricula that will help tomorrow's software engineers write parallel code for HPC. A future article will detail what we are doing to make sure future developers have the skills to write code for thousands of threads running on large multi-node systems.
Working with the HPC ecosystem
We realize the importance of partnering with hardware and software vendors throughout the ecosystem to provide end users with the tools they need to succeed. We work with software developers to help optimize their applications, middleware, and drivers for current and future Intel architectures. For example, we recently released the Intel® Software Development Emulator (Intel® SDE) to support Intel® Advanced Vector Extensions (Intel® AVX), which will be introduced with the forthcoming "Sandy Bridge" processor. Intel compiler and Intel performance library support for Intel AVX will be available in early Q1 2009.
Charting the road ahead
The Intel "tick-tock" model for processor technology innovation provides the predictability that partners and end users need to maximize the return on their HPC investments. On the most recent "tick," we introduced the 45-nm process technology, which helped deliver better performance and energy efficiency in a smaller version of an existing microarchitecture. In 2009, we will start production on the next-generation 32-nm silicon process technology.
This year's "tock" will capitalize on the 45-nm technology to introduce the "Nehalem" microarchitecture, which will deliver important benefits for HPC customers. Going forward, our partners and end users can continue to count on this beat rate for innovation as they plan their HPC investments.
The tempo of these articles will be even quicker. In coming months, we will evaluate how forward scaling can address the challenges of developing software for growing core counts and the inevitable enhancement of the instruction set. We will also examine the transition to parallel architectures: When should software developers say "no" to parallelism? And we will consider offload: As you optimize your code, will investments in offload strategies deliver the long-term ROI you need?
We will also discuss the growing use of HPC by small and medium-sized organizations, and show how Intel and our ecosystem partners are working together to make it easier for these organizations to use HPC. Collaborative programs such as Intel® Cluster Ready are lowering the barriers to HPC by helping to ensure interoperability, simplify procurement, reduce time to productivity, and decrease the total cost of system ownership.
This is an exciting time to be part of the HPC community. We look forward to showing you how we are moving HPC forward.