July 06, 2007
Now that we're at the halfway mark for 2007, I thought I'd take a look back at the first six months and try to highlight the most significant HPC stories of the year so far. In chronological order:
1. Less than four weeks into the new year, Sun and Intel became best buddies, announcing that there would be a broad partnership between the two companies. Up until that point, Sun used only AMD technology in their x86-based boxes. Under the new arrangement, Sun will add Intel Xeon processors to their workstations and servers, alongside their Opteron-based offerings. Intel will support and distribute Solaris, Sun's home-grown OS. The Sun-Intel partnership seemed to foreshadow AMD's 2007 woes that accumulated as the year wore on.
2. Also in January, Intel demonstrated its "breakthrough" 45nm process technology, announcing an aggressive schedule to roll out commercial products based on the new process. The first shipments of 45nm chips may come as early as Q4 2007. The new process technology uses hafnium to dramatically reduce electron leakage, an increasingly annoying problem as semiconductor process sizes have shrunk. IBM also announced plans to move to 45nm, but its plans to get the technology into production seem less aggressive than Intel's. The real loser here is AMD, which is still in the process of moving its competing x86 products onto the 65nm process.
3. In February, a Canadian tech startup called D-Wave demonstrated a prototype of a commercial quantum computer years ahead of what most people thought would be possible. The 16-qubit prototype didn't have the power to challenge conventional high-end computing, nor did the demonstration convince skeptics that "true" quantum computing was actually taking place. Larger systems will be needed to settle that question. A 32-qubit D-Wave machine is scheduled to be available at the end of the year.
4. The most interesting high performance interconnect story of the half-year came from Woven Systems. They've developed a 144-port 10 GbE switch designed to create a lossless Ethernet fabric with latency comparable to InfiniBand. According to Woven, this is achieved at one-fifth the cost of other 10 GbE solutions. The switch does dynamic load balancing at the hardware level (in a custom ASIC) to provide high levels of performance. Woven predicted general availability for their switch in Q3 of this year.
5. At the Intel Developer Forum (IDF) in April, Intel finally revealed its intent to develop the much-rumored GPU-like Larrabee product line. No details of the technology were revealed at IDF, but Intel characterized Larrabee as a "highly parallel, IA-based programmable architecture" designed to scale to teraflop-level performance. While not calling this manycore architecture a GPU, Intel appears to be aiming Larrabee products at both visualization applications and vector processing/scientific computing.
6. While multicore/manycore is the current megatrend in computing, researchers at the University of Texas think there's a lot to be gained from instruction-level concurrency. In May, after years of research and development, the Texas team released a prototype of their TRIPS (Tera-op Reliable Intelligently adaptive Processing Systems) microprocessor. The chip is meant to dynamically adapt to the type of application being run, whether or not the particular workload contains inherent parallelism. At a time when everyone is singing the same multicore tune, it's refreshing to hear a different song. The TRIPS prototype demonstration was meant to solicit interest from commercial chipmakers. Anyone out there willing to tackle a new instruction set?
7. In early June, PeakStream was acquired by Google. PeakStream was one of two startup companies that offered a high-level stream computing development platform for multicore architectures. The other one, RapidMind Inc., had just launched its competing offering two weeks prior to the PeakStream acquisition. Both products offered a software development environment for developing stream computing applications for x86, Cell and GPU platforms. There was a wide variety of speculation on what Google intended to do with PeakStream technology (I offered my own two cents).
8. Later in June, NVIDIA launched Tesla, a GPU product line targeted specifically at the high performance technical computing market. The first Tesla products were essentially repackaged Quadro GPUs targeted for HPC workstations and servers. The company's CUDA C compiler environment provides programmers with access to the general purpose computing features of the GPU hardware, giving NVIDIA a complete end-to-end offering for high performance computing. By the end of the year, NVIDIA plans to implement double precision floating point in the new Tesla offerings.
9. June was a busy month. At the International Supercomputing Conference (ISC) in Germany, IBM previewed its Blue Gene/L successor -- Blue Gene/P. This second-generation architecture is designed to scale well into petaflop territory. By using quad-core PowerPC chips, bumping up CPU clock speeds, and generally improving the system interconnect, the new architecture more than doubles the compute power of the Blue Gene/L generation. The first deployment of a Blue Gene/P will be a sub-petaflop system at Argonne National Laboratory this fall.
10. A plethora of other HPC industry news came out at ISC. Maybe the most significant was Sun Microsystems' formal return to the capability supercomputing arena. The company announced its Sun Constellation product line, which, like Blue Gene/P, is capable of petaflop levels of performance. Unlike Cray or IBM, who use proprietary system interconnects to link processors together, Sun is using a souped-up InfiniBand switch and a simplified interconnect topology to connect Sun Blade 6000 servers. The servers themselves can be based on AMD Opterons, Intel Xeons or Sun's T1 processors. Essentially Sun is trying to do what many thought was impractical -- scale a cluster into a petaflop machine. The first Sun Constellation deployment will be at the Texas Advanced Computing Center by the end of this year. At around 500 peak teraflops, that machine will vie for the number one spot on the Top500 list.
By the way, over at Tabor Research, Addison Snell offers his top three picks of the most important HPC announcements leading up to (and during) last week's International Supercomputing Conference. So for an analyst's perspective of what's important in HPC, check out the Tabor Research blog.
As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at firstname.lastname@example.org.
Posted by Michael Feldman - July 05, 2007 @ 9:00 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.