October 02, 2012
Deep space exploration missions are constantly sending their data back to Earth for processing and analysis. However, as those missions continue and new ones are planned, it may become difficult for space organizations to store and process all of that information over the next 30 years or so.
The solution? Build a supercomputer on the moon. Ouliang Chang, a graduate student at the University of Southern California, presented that idea, which is to be his PhD thesis, at a space conference in Pasadena.
As of right now, the Deep Space Network, which consists of 13 antennas spread across the United States, Spain, and Australia, ably processes the information coming from deep space. However, bandwidth on Earth is limited; the data available from deep space exploration may not be. According to a 2006 NASA report, the agency expects an “order-of-magnitude increase in data to and from spacecraft and at least a doubling of the number of supported spacecraft” over the next three decades.
A lunar supercomputer would offer many advantages over its Earth-bound counterparts. One of the biggest challenges in modern supercomputing is that these machines must be both built and cooled on Earth, and cooling is no small task on our warm little planet.
The temperature at the proposed site would hover between 40 and 60 Kelvin (roughly -233 to -213 °C), since the machine would sit beneath the lunar regolith, perpetually shielded from sunlight. Remembering that 0 Kelvin is absolute zero, cooling suddenly becomes much more manageable.
Further, high-temperature superconducting materials come into play at 40 to 60 Kelvin. On Earth, it takes a tremendous amount of energy to extract a system’s internal energy and hold it near superconducting temperatures. On the far side of the moon, those temperatures are already a reality.
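To make those numbers concrete, here is a minimal sketch in Python. The article names no specific material, so YBCO, a textbook high-temperature superconductor with a critical temperature near 92 K, is assumed purely for illustration:

```python
# Hedged illustration: compare the proposed lunar site's ambient
# temperature range (40-60 K, from the article) against the critical
# temperature of YBCO (~92 K). YBCO is an assumed example here;
# the article itself names no particular superconducting material.

YBCO_TC_K = 92.0             # approximate critical temperature of YBCO
SITE_TEMPS_K = [40.0, 60.0]  # proposed site's ambient range

for t in SITE_TEMPS_K:
    celsius = t - 273.15     # convert kelvins to degrees Celsius
    print(f"{t:.0f} K = {celsius:.2f} °C; "
          f"below YBCO's Tc: {t < YBCO_TC_K}")
# Expected output:
# 40 K = -233.15 °C; below YBCO's Tc: True
# 60 K = -213.15 °C; below YBCO's Tc: True
```

In other words, the ambient temperature of the shielded site would already sit comfortably below the threshold such materials require, with no energy spent on refrigeration.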
The regolith would also protect the supercomputer from radiation, a significant concern on a surface unprotected by a magnetic field. The location would likewise offer some protection from asteroid strikes.
The biggest advantage with respect to its mission is its access to the Deep Space Network. Instead of transmitting through crowded Earth-bound channels, satellites and spacecraft would send their data to the lunar network, strategically placed away from all of the Terran electromagnetic noise.
Not surprisingly, such an undertaking would cost plenty of money. While there are some slightly outlandish ideas for recouping costs after construction, such as hosting a sort of robotic moon Olympics, it is more likely that the cost of shipping materials to space will have to decrease significantly before a moon-based supercomputer becomes feasible.
From excavating and engineering a site to the actual construction of the lunar supercomputer, the monetary commitment would be massive. According to a Wired article, it costs about $50,000 to ship a pound of material into space. The total cost is estimated to exceed $10 billion, which would make it the solar system’s most expensive supercomputer.
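As a back-of-the-envelope check on that figure, here is a minimal sketch in Python; the $50,000-per-pound rate comes from the article, while the payload mass is a purely hypothetical assumption:

```python
# Hedged back-of-envelope arithmetic: launch costs for a lunar
# supercomputer. The per-pound rate is the article's cited figure;
# the payload mass is a hypothetical assumption for illustration.

COST_PER_POUND_USD = 50_000   # cited cost to ship one pound to space
payload_pounds = 100_000      # hypothetical mass: hardware, shielding,
                              # power, and cooling equipment combined

launch_cost_usd = COST_PER_POUND_USD * payload_pounds
print(f"Launch cost alone: ${launch_cost_usd / 1e9:.1f} billion")
# Expected output:
# Launch cost alone: $5.0 billion
# (half the $10 billion estimate, before any excavation,
#  construction, or operations costs)
```

Even a modest payload puts launch costs in the billions, which is why the project’s feasibility hinges on cheaper lift.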
However, Chang estimates that the project would still need ten years or so to become technologically feasible. By that point, carbon nanotube technology may have progressed to the point where a space elevator could be built, drastically reducing shipping costs. Failing that, a more efficient propulsion system could be developed.
A lunar supercomputer could also serve as a backup to Earth systems in a catastrophe, an idea proposed in 2004 by Space Systems Loral. It could also provide data management support for possible future lunar and space missions.
The idea of a lunar supercomputer seems straight out of science fiction. Stanley Kubrick’s science fiction, however, predicted such a supercomputer launching eleven years ago. Perhaps, with a full embrace of Chang’s ideas and advances in space shipping, a supercomputer on the moon could be a reality in another eleven years.