October 28, 2005
The adoption of field-programmable gate arrays (FPGAs) is increasing, but a fuller understanding and acceptance of their capabilities is needed for the technology to make the next leap into wider adoption. Malachy Devlin, senior vice president and chief technology officer of Nallatech, recently spoke with HPCwire to address some questions about what FPGAs can do and what is needed to foster expanded acceptance.
HPCwire: With wider deployment of FPGA technology around the corner and its growing acceptance within the HPC community, how do you see this technology helping HPC applications?
Devlin: FPGAs may appear to be a new technology, but they are now over 20 years old; Xilinx first invented them in 1984. Nallatech has over 1,500 site installations of FPGA-based processing systems, illustrating that the technology is well on the road to adoption. However, most of this adoption has been in high-performance computing within the embedded marketplace. With proven deployments from this area and the continued growth of device capacity, we have shown the viability of FPGAs for carrying out bit, integer and now floating point algorithms.
We have also shown that FPGAs can deliver from two-times to over 100-times the performance of the fastest microprocessors, such as the Opteron or Itanium 2. Interestingly, this performance does not come at the cost of an increased power budget; in fact, FPGAs run much cooler than a microprocessor. Where a high-end microprocessor consumes over 100W, FPGAs typically consume around 15W when executing high-performance algorithms. The knock-on effect is significant: with this large increase in GFLOPS/Watt, over 10-times, we are able to reduce electricity costs, air conditioning costs and machine room floor space. The last of these comes from the reduced thermal density, which enables us to pack more devices into a given space, thus reducing the floor space required for large installations.
HPCwire: There is a perception that FPGAs are difficult to program. How is this technology being made more accessible?
Devlin: It is true that FPGAs are the younger sibling to the microprocessor and hence the tool flows and methodologies have not reached the same maturity level. But this is changing rapidly. When I first used FPGAs in 1989, the main tools for designing FPGAs were pen, paper and a basic layout tool called XACT. Today, we are able to write programs directly in C, FORTRAN and MATLAB and compile these to FPGAs. The investment in this area continues rapidly.
To get the full performance of FPGAs, we need to take advantage of their ability to run many operations in parallel: whereas an Itanium 2 has five floating point units, we are able to put hundreds of floating point units in an FPGA. This does mean that code refactoring may be necessary, so there is a limit to taking dusty-deck code and having it run instantly on an FPGA.
We shouldn't look at this as entirely a disadvantage. FPGAs are allowing us to break the shackles of Von Neumann and instruction set architectures. This can only be a good thing, as we can now dynamically create processing engines that fit the algorithm, rather than fitting the algorithm to a particular processor architecture. In fact, the need for code refactoring is really a result of having more choice in how the algorithm implementation is constructed. For the first time, software developers can construct their own processor architectures rather than relying on the decisions of an architecture team within a processor company that must cater to a wide range of application areas.
HPCwire: Nallatech recently partnered with SGI to provide FPGA technology to its products. How vital are technology partnerships such as these in the development of the FPGA market?
Devlin: Partnerships are critical to success. Our relationship with SGI brings together best-in-class HPC and FPGA computing technology, and through this blending of capabilities we are developing some great innovations for reconfigurable computing. Partnerships also need to go wider than this: we need to create a complete ecosystem for the technology to survive and prosper. Fortunately, this is taking shape through initiatives such as the FPGA High Performance Computing Alliance (FHPCA) and OpenFPGA.org, which together bring over 26 organizations to bear on standardization and on raising awareness of FPGAs within the high performance computing space.
HPCwire: Which particular industries do you see taking the lead in the wider deployment of FPGA technology?
Devlin: We are seeing a lot of interest from a wide range of industries. This is driven by the demonstration of improved computing performance per watt, per dollar and per cubic foot by FPGAs over traditional processors. These are all key parameters that are getting pushed to their limits with the latest large clusters.
Our FPGA systems have been running applications across a wide range of industries, such as seismic processing, bioinformatics, simulation and encryption, with performance improvements ranging from 17-times in seismic processing to over 250-times in bioinformatics. It's important to note that applications such as seismic processing and simulation involve significant amounts of floating point arithmetic, which is typically considered a no-go for FPGAs. This is not true: we have been doing floating point on FPGAs since 2002, primarily single precision, and we have had double precision floating point capability since 2004. We are also programming these floating point algorithms in C rather than a hardware-oriented language such as VHDL, making FPGAs much more accessible to HPC developers.
HPCwire: What do you see as the main challenges FPGA technology will face in the future?
Devlin: FPGAs have brought a new processing paradigm to the mix and have certainly shown their capabilities in real applications. Moving forward, we need to keep pushing on tools, environments and methodologies so that FPGAs become more manageable and align with today's software techniques. This requires the community to come together and create appropriate standards, ensuring that we do not fragment the market before it goes mainstream. FPGAs are the first commercially successful technology to give us the tools we need to move from the Von Neumann era to a new era in which we are no longer constrained by fixed processor architectures.
Malachy Devlin is senior vice president and chief technology officer of Nallatech. He obtained a Ph.D. in Signal Processing from Strathclyde University and is recognized worldwide as an industry expert on FPGA technologies. He is a software specialist with several years' experience at various companies, including the National Engineering Laboratory, Telia and Hughes Microelectronics (now part of Raytheon). He was part of the team that developed Nallatech's FPGA-based DIME modular technology.