July 07, 2013
Ensuring the security of one's information in the cloud has proven to be problematic, especially in light of the recent revelations about the National Security Agency's access to data.
Researchers at MIT sought to combat that security risk by proposing Ascend, a hardware component that could be coupled with cloud servers to guard against two ways attackers can glean information from a running program: observing its memory-access patterns and observing its timing.
“This is the first time that any hardware design has been proposed — it hasn’t been built yet — that would give you this level of security while only having about a factor of three or four overhead in performance,” said Srini Devadas, MIT’s Edwin Sibley Webster Professor of Electrical Engineering and Computer Science, whose group developed the new system. “People would have thought it would be a factor of 100.”
While a performance penalty of a factor of three or four is certainly preferable to a factor of a hundred, those running cloud experiments on non-sensitive data may not be willing to accept such a slowdown. However, for applications handling sensitive data, particularly in genomics and related healthcare fields, Ascend could be well worth the cost.
According to MIT, Ascend arranges memory as a tree of nodes. Every time Ascend traverses a path down the access tree to retrieve information from a node, it swaps that information with another, randomly chosen node in the file system. In this way, it becomes difficult for potential attackers to infer specific data locations from sequences of memory accesses.
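The swap-on-access idea can be illustrated with a toy simulation. This is a minimal sketch, not Ascend's actual hardware design: a real oblivious-RAM scheme also re-encrypts blocks and issues dummy path reads, and the class and method names here (`ObliviousStore`, `read`) are hypothetical.

```python
import random

class ObliviousStore:
    """Toy sketch of access-pattern hiding via random remapping.

    After every read, the accessed block is swapped with the block in a
    randomly chosen slot, so repeated reads of the same logical block
    touch different physical locations over time.
    """

    def __init__(self, num_blocks, seed=0):
        self.rng = random.Random(seed)
        self.slots = list(range(num_blocks))          # slot i holds block id
        self.data = {b: f"data-{b}" for b in range(num_blocks)}
        self.pos = {b: i for i, b in enumerate(self.slots)}  # block -> slot

    def read(self, block_id):
        slot = self.pos[block_id]
        value = self.data[block_id]
        # Swap the accessed block with the occupant of a random slot,
        # obscuring the link between logical blocks and physical slots.
        other = self.rng.randrange(len(self.slots))
        neighbor = self.slots[other]
        self.slots[slot], self.slots[other] = neighbor, block_id
        self.pos[block_id], self.pos[neighbor] = other, slot
        return value
```

An observer who only sees which slots are touched cannot tell whether the same block is being read repeatedly, because each access relocates the block.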
Further, Ascend would reportedly protect against timing attacks by sending requests to memory at regular intervals, so a spyware application would be unable to determine the runtime of any other particular application.
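The fixed-interval idea can be sketched in a few lines: issue one memory request every tick interval regardless of whether the program actually needs one, substituting dummies when nothing is pending. This is a hypothetical simulation to show why the observed traffic reveals nothing about the workload, not Ascend's actual hardware logic.

```python
from collections import deque

def run_with_fixed_interval(real_requests, total_ticks, interval):
    """Emit one memory request every `interval` ticks.

    Real requests are served when pending; otherwise a dummy request is
    sent, so the request trace an observer sees depends only on
    total_ticks and interval, never on the workload itself.
    """
    pending = deque(real_requests)
    trace = []
    for tick in range(total_ticks):
        if tick % interval == 0:
            if pending:
                trace.append(("real", pending.popleft()))
            else:
                trace.append(("dummy", None))
    return trace
```

Running this with a busy workload and an empty one produces traces of identical length and timing, which is exactly what defeats an attacker trying to infer runtime from memory traffic.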
Keeping one's data secure always matters, even if how much it matters varies from application to application. Once Ascend is built, its performance penalty may be a price worth paying to keep data out of curious hands.