August 27, 2013
IBM's Watson impressed people all over the world in 2011 when the machine beat all contenders in a game of Jeopardy! Since then, IBM has kept the Power7-based technology busy, with gigs in financial analysis, healthcare, and customer service, among others. But now that IBM is gearing up to ship its Power8 processors, we could see a newer and more powerful Watson emerge.
IBM unveiled details of the forthcoming Power8 processor this week at the Hot Chips conference at Stanford University. The 12-core, 4 GHz Power8 chip can execute 96 threads simultaneously, and is expected to be two to three times more powerful than IBM's Power7 chip, which powered the Watson supercomputer that competed on the game show.
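The 96-thread figure falls out of the per-core simultaneous multithreading width. A quick sketch of the arithmetic, assuming eight hardware threads per Power8 core (SMT8) and four per Watson-era Power7 core (SMT4), per IBM's disclosures:

```python
# Hardware thread count = core count x SMT width per core.
power8_threads = 12 * 8   # 12 cores, SMT8
power7_threads = 8 * 4    # 8 cores, SMT4 (Watson-era Power7)

print(power8_threads)                    # 96 simultaneous threads
print(power8_threads / power7_threads)   # 3.0x the thread parallelism
```

Raw thread count alone overstates the gain, of course; the two-to-three-times performance claim also depends on per-thread throughput, caches, and memory bandwidth.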
The Power8's ability to move data is dramatically improved over previous designs, including the interim Power7+ chip that IBM shipped in 2012. According to the Hot Chips presentation on Monday by Jeff Stuecheli, the chief nest architect for Power8, the new processor delivers 230 GB per second of sustained memory bandwidth between the L4 caches and the processors, plus 48 GB per second of total peak I/O through the new on-die integrated PCI-Express 3.0 controllers, making it a very speedy chip.
One of the cool things IBM has done with those integrated PCI-Express 3.0 controllers is to design a new transport layer, called the Coherence Attach Processor Interface (CAPI), that will enable co-processors, including GPUs and field-programmable gate arrays (FPGAs), to be connected directly to the Power8 chip and to share data stored in its memory. Nvidia, which together with Google and Mellanox signed on to the OpenPOWER Consortium that IBM unveiled earlier this month, is expected to build a GPU that plugs into Power8 chips through this CAPI protocol.
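The appeal of coherent attach is that it removes the staging copies a conventional PCIe accelerator needs. The sketch below is purely conceptual; the function and class names are invented for illustration and are not part of any real CAPI programming interface. It contrasts the two offload models on a trivial doubling workload:

```python
# Illustrative only: contrasts copy-based PCIe offload with a
# coherent-attach model in which the accelerator operates directly
# on host memory. All names here are hypothetical, not a real API.

def offload_with_copies(host_data):
    """Conventional model: stage data to the device, compute, copy back."""
    device_buffer = list(host_data)            # explicit copy over PCIe
    result = [x * 2 for x in device_buffer]    # accelerator computes
    return list(result)                        # explicit copy back to host

def offload_coherent(host_data):
    """Coherent model: the accelerator sees the same memory the CPU does,
    so it updates host data in place with no staging buffers."""
    for i, x in enumerate(host_data):
        host_data[i] = x * 2
    return host_data
```

Both paths produce the same answer; the difference coherent attach targets is the elimination of the two bulk transfers, which matters when the data set is large relative to the computation.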
So, what does all this mean for Watson, the 2,880-core supercomputer that stole so many hearts during Jeopardy!? On a purely thread-count and parallel workload basis, the Power8 processor should be a boon for Watson, which, with Power7 technology, was able to process 500 gigabytes of information--the equivalent of a million books--every second.
The truly intriguing part, however, is what a new Power8-based supercomputer combined with Nvidia GPUs and a Mellanox interconnect can do for HPC workloads, including the DeepQA (question answering) technology that Watson is based on.
According to IBM, DeepQA combines more than 100 different techniques across its various workload components, including natural language processing, machine learning, hypothesis generation, evidence gathering, analysis, and scoring. Apache's Hadoop and UIMA (Unstructured Information Management Architecture) frameworks also played a role in Watson's software.
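The generate-hypotheses, gather-evidence, score-and-rank loop at DeepQA's heart can be sketched in miniature. This is a toy illustration of the pattern, not IBM's implementation; the two-entry corpus and the word-overlap scorer are invented for the example:

```python
# Toy sketch of the DeepQA-style pipeline: generate candidate answers,
# gather evidence for each, score the evidence, pick the best candidate.
# The corpus and scoring function are invented for illustration.

CORPUS = {
    "Toronto": ["Toronto is a city in Canada."],
    "Chicago": ["Chicago has two airports named for World War II figures."],
}

def generate_hypotheses(question):
    # A real system would generate candidates from search; here,
    # every corpus key is a candidate answer.
    return list(CORPUS)

def score(hypothesis, question):
    # Crude evidence scoring: count question words found in the evidence.
    words = set(question.lower().replace("?", "").split())
    evidence = " ".join(CORPUS[hypothesis]).lower()
    return sum(1 for w in words if w in evidence)

def answer(question):
    # Rank hypotheses by evidence score and return the winner.
    return max(generate_hypotheses(question),
               key=lambda h: score(h, question))

print(answer("Which US city has airports named for World War II figures?"))
# -> "Chicago"
```

Each stage here is a single function; in Watson, each stage was itself dozens of parallel analytics whose outputs were merged by a trained model, which is why the workload parallelizes so well across many hardware threads.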
The area where Watson struggled the most--correctly assessing contextual clues to determine the true meaning of a statement--is a particularly difficult problem that artificial intelligence and machine learning experts have yet to crack. Perhaps the Power8 technology, combined with developments from the OpenPOWER Consortium, will help find better answers.
"Watson wasn't a traditional workload for us," IBM's Stuecheli said during his Hot Chips presentation, according to a PC World story. "We'd like to find more of these opportunities."