August 19, 2010
There’s Apache’s Hadoop, there’s Google’s MapReduce, and in case it slipped completely off your radar, there is also Microsoft’s solution called Dryad—something that has all but disappeared from conversations over the last couple of years outside of the few academic institutions that were granted access to the code last year.
As Microsoft describes it, “Dryad is an infrastructure which allows a programmer to use the resources of a computer cluster or a data center for running data-parallel programs. A Dryad programmer can use thousands of machines, each of them with multiple processors or cores, without knowing anything about concurrent programming.”
One of the reasons Dryad has fallen off the map, publicly speaking, is that it has been tucked away as an ongoing research project to help refine models for writing distributed and parallel programs that require solid scaling capabilities. As of yesterday, the Dryad stack is moving out of its long-term residence at Microsoft Research and into Microsoft’s Technical Computing Group, where work will continue toward a test build expected by November of this year.
As ZDNet reported this morning, “the plan is to deliver a first Community Technology Preview (CTP) test build of the stack and to release a final version of it running on Windows Server High Performance Computing servers by 2011.”
In addition to providing compelling graphics comparing the Dryad stack as it exists today with what users might see in November, ZDNet’s Mary Jo Foley stated that the company is “continuing to step up its work in the HPC space, hoping to ice out Linux in that arena” and that it seems to be “counting on Dryad to keep up their momentum both on premises, with Windows Server, and in the cloud with Windows Azure in its datacenters.”
Full story at ZDNet