March 19, 2012
After multiple terrorist attacks, including the embassy bombings and the World Trade Center attacks, the NSA's record for keeping the country safe had been tarnished. Looking to rebound, the agency began implementing its most ambitious data-gathering and cryptanalysis program to date, built around one of the world's most powerful supercomputers.
In a Wired article published last Thursday, James Bamford revealed details of a new NSA facility in Utah. More than five times the size of the U.S. Capitol and costing $2 billion, the site, codenamed Stellar Wind, is expected to be fully operational in 2013. Its main job is to collect and analyze data flowing from emails, phone calls, receipts, and other sources. The NSA, aware of the recent explosion in data, plans to hoard as much information as possible in the site's massive storage capacity. According to a 2007 DOD report, “the Pentagon is attempting to expand its worldwide communications network, known as the Global Information Grid, to handle yottabytes (1 million exabytes) of data.” The center also requires a 65 MW electrical substation, which contributes to an estimated $40 million per year in operating costs.
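The unit conversion quoted in the DOD report is easy to sanity-check. A minimal sketch, using standard SI (decimal) definitions of the storage units:

```python
# Back-of-the-envelope check of the storage units quoted in the DOD report.
# SI (decimal) definitions: 1 exabyte = 10**18 bytes, 1 yottabyte = 10**24 bytes.
EXABYTE = 10**18
YOTTABYTE = 10**24

exabytes_per_yottabyte = YOTTABYTE // EXABYTE
print(exabytes_per_yottabyte)  # 1000000 -- i.e., "1 million exabytes"
```

For scale, a yottabyte is roughly a trillion terabyte drives' worth of data.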
Speaking with William Binney, a former NSA crypto-mathematician who quit the agency after it launched its warrantless-wiretapping program, Bamford learned the lengths the NSA was willing to go to collect information on US citizens. According to Binney, since the 9/11 attacks the NSA has intercepted between 15 and 20 trillion “communications,” which could be anything from financial transactions and travel plans to emails and phone calls.
Much of that data is encrypted, though, and that's where the supercomputing comes in. To extract the information, the NSA has to employ brute-force attacks, and that requires a lot of computing power. Bamford reports that the Multiprogram Research Facility was built at Oak Ridge National Laboratory to house a supercomputer for such work. That facility, known as Building 5300, spans 214,000 square feet and cost $41 million to build back in 2006. While the unclassified “Jaguar” supercomputer was being deployed on the other side of the Oak Ridge campus, the NSA was installing an even more powerful system in Building 5300. Writes Bamford:
The NSA’s machine was likely similar to the unclassified Jaguar, but it was much faster out of the gate, modified specifically for cryptanalysis and targeted against one or more specific algorithms, like the AES. In other words, they were moving from the research and development phase to actually attacking extremely difficult encryption systems. The code-breaking effort was up and running.
According to Binney, a lot of foreign government data the agency was never able to break (128-bit encryption) might now be decipherable. And that was the rationale for the data repository in Utah: the facility can hold onto encrypted information until the technology is developed that allows it to be decoded.
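Some quick arithmetic shows why a 128-bit keyspace is considered out of reach for exhaustive search, and why stockpiling ciphertext against future breakthroughs makes sense to the agency. The key-testing rate below is a hypothetical assumption chosen for illustration, not a figure from the article:

```python
# Illustrative only: the scale of brute-forcing a 128-bit key.
# RATE is a hypothetical assumption -- an exascale-class machine testing
# 10**18 keys per second, far beyond any disclosed cryptanalytic capability.
KEYSPACE = 2**128          # number of possible 128-bit (e.g., AES-128) keys
RATE = 10**18              # hypothetical keys tested per second

expected_seconds = KEYSPACE // 2 // RATE   # on average, half the keyspace is searched
expected_years = expected_seconds // (365 * 24 * 3600)
print(expected_years)      # on the order of 5 trillion years
```

Even under that generous assumption, exhaustive search takes trillions of years, which is why practical attacks depend on algorithmic weaknesses or implementation flaws rather than raw speed.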
The NSA apparently is already planning on building an exascale system capable of breaking even more complex encryption schemes. That system, scheduled to be deployed at Oak Ridge in 2018, will require an even larger facility and is expected to consume 200MW of power.
The NSA, by nature, operates behind closed doors, preferring not to reveal the technology it uses to combat national threats. In this case, given concerns over individual privacy and constitutionality, the story is additionally embarrassing for the agency.
For HPC followers, it’s a reminder that some of the most powerful supercomputers in the world never make an appearance on the TOP500 list.
Full story at Wired