July 14, 2011
General Electric (GE) has been sharing stories about its successes with high performance computing at its Advanced Computing Lab and plans to offer a window into how the company makes use of strategic technologies.
The Advanced Computing Lab, according to Chris McConnell of the Edison Engineering Development Program, is not focused solely on hardware but on gaining a deeper understanding of “how computer architectures work for getting the algorithms and applications to meet performance specs.” He points to CUDA and OpenCL as programming tools that are among those he and his team work with.
GE also focuses on MPI programming, data-intensive simulation, algorithm optimization, and evaluation of new architectures. He says these technologies are assisting GE with its work on radiation detection systems, next-generation computers for aircraft, large-scale simulations of alloys and turbine designs, and biomedical imaging. McConnell points to related work that also goes on in GE's Computational Intelligence Lab, which is geared toward analytics and machine learning.
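The article does not show any of GE's code, but the scatter/compute/reduce pattern at the heart of MPI-style simulation work can be sketched in plain Python with the standard-library `multiprocessing` module. This is purely an illustrative analogy: each worker process plays the role of an MPI rank computing a partial result, and the parent process plays the root rank performing the reduction.

```python
from multiprocessing import Pool

def partial_dot(args):
    # Each worker (analogous to one MPI rank) computes the dot
    # product over its own contiguous slice of the vectors.
    xs, ys = args
    return sum(a * b for a, b in zip(xs, ys))

def parallel_dot(x, y, nworkers=4):
    # "Scatter": split both vectors into contiguous chunks, one per worker.
    size = (len(x) + nworkers - 1) // nworkers
    chunks = [(x[i:i + size], y[i:i + size]) for i in range(0, len(x), size)]
    # Compute partial sums in parallel, then "reduce" at the root by
    # adding them up -- mirroring MPI_Scatter followed by MPI_Reduce.
    with Pool(len(chunks)) as pool:
        return sum(pool.map(partial_dot, chunks))

if __name__ == "__main__":
    x = list(range(1000))
    y = [2.0] * 1000
    print(parallel_dot(x, y))  # 2 * sum(0..999) = 999000.0
```

In a real distributed-memory code the chunks would live on separate nodes and never pass through a single process's memory; the shared-memory version above only captures the decomposition logic.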
McConnell noted that his company has been able to realize tremendous benefits from the use of HPC. He said this week that “the idea of high performance and scientific computing is ever growing with the use of GPUs, commodity clusters and high end supercomputers” and that “the work done using these machines and techniques have enabled GE to make decisions in a fraction of the time that it once took.”
For those with the time, past recorded webinars that share insights about the intersection of supercomputing, engineering, and data-intensive computing are available here, on a site that also hosts sessions on diverse topics relevant to the HPC community, including a detailed exploration of the “Internet of Things.”
The company will hold another series of webinars in August: one on the use of social media and collaboration tools for research, and another on how advanced automation and analytics can create new ways to operate complex energy and other networks.
Full story at General Electric