June 14, 2013
CAMPBELL, Calif., June 14 -- Samplify, the leading intellectual property company for relieving memory, storage, and I/O bottlenecks in computing, consumer electronics, and mobile devices, announces the availability of its APAX HDF (Hierarchical Data Format) Storage Library for high-performance computing (HPC), Big Data, and cloud computing applications. With APAX HDF, HPC users can increase disk throughput by 3-8X and reduce the storage requirements of their HDF-enabled applications without modifying their application software. The APAX HDF Storage Library works with Samplify's APAX Profiler tool, which analyzes the inherent accuracy of each dataset being stored and applies the recommended encoding rate to maximize acceleration of algorithms with no effect on results.
"Our engagements with government labs, academic institutions, and private data centers reveal a continuous struggle to manage an ever-increasing amount of data," says Al Wegener, Founder and CTO of Samplify. "We have been asked for a simpler way to integrate our APAX encoding technology into Big Data and cloud applications. By using plug-in technology for HDF, we enable any application that currently uses HDF as its storage format to get the benefits of improved disk throughput and reduced storage requirements afforded by APAX."
Next week at the International Supercomputing Conference (ISC'13), a paper co-authored by Deutsches Klimarechenzentrum (DKRZ, the German Climate Computing Centre), the University of Hamburg, and Samplify will be presented. The authors write, "The most easily obtained benefit from lossy compression of climate datasets is a significant reduction in disk file size and a corresponding increase in disk bandwidth." On disk throughput, they observe, "APAX appears to be faster... APAX is a single-pass algorithm which leads to better cache usage." Comparing compression ratios, they note, "APAX averaged 1.6X more compression." The authors conclude, "APAX offers better encoding for most climate variables due to its superior compression or data quality."
The APAX technology is a universal numerical data encoder that operates on any integer or floating-point data type and achieves typical encoding rates of 3:1 to 8:1 without affecting the results of computing applications. Samplify's APAX SDK is a software library that can be linked into any computing application, enabling it to operate natively on APAX-encoded data in memory, on disk, or streaming across network interfaces. The APAX SDK is optimized for SIMD execution on Intel CPUs with SSE/AVX, achieving a throughput of 200 MB/sec per core, and its API is fully compatible with the company's APAX hardware IP core for SoC and FPGA integration, allowing APAX-enabled applications to take advantage of future APAX-enabled hardware in the data center. The web-based APAX Profiler tool analyzes the inherent accuracy of the user's dataset and recommends an encoding rate that maximizes acceleration of their algorithm with no effect on results.
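As a rough back-of-the-envelope illustration (not Samplify's tooling or algorithm), the effect of an N:1 encoding rate in the cited 3:1 to 8:1 range on storage footprint and apparent disk bandwidth can be sketched as:

```python
def encoded_size_tb(raw_tb: float, ratio: float) -> float:
    """Storage footprint of a dataset after N:1 encoding."""
    return raw_tb / ratio

def apparent_bandwidth_mbs(disk_mbs: float, ratio: float) -> float:
    """Effective read throughput when encoded data is streamed off disk
    and decoded on the fly (decode cost ignored for this sketch)."""
    return disk_mbs * ratio

# At the low end of the cited range (3:1), a 1 PB (1,000 TB) archive
# shrinks to roughly 333 TB, and a 500 MB/s disk streams like 1,500 MB/s.
print(round(encoded_size_tb(1000, 3), 1))   # 333.3
print(apparent_bandwidth_mbs(500.0, 3))     # 1500.0
```

This simple model ignores decode overhead; the press release's 3-8X throughput figures presume the decoder keeps pace with the disk, consistent with the stated 200 MB/sec-per-core SDK throughput.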
Availability and Pricing
Samplify's APAX HDF Library is available immediately from the company for Linux platforms, with annual licensing starting at U.S. $50,000 for data centers with storage requirements of one petabyte. For more information, go to www.samplify.com/apax-hdf
Samplify will be exhibiting in Booth 365 at ISC'13, June 17-19, in Leipzig, Germany. A paper entitled "Evaluating Lossy Compression on Climate Data" will be presented on Wednesday, June 19, at 9:40 AM in Session 6--Hall 5.
Samplify is a Silicon Valley startup providing the only software and hardware numerical encoder for solving memory, I/O, and storage bottlenecks in HPC, Big Data, cloud computing, consumer electronics, and mobile devices. Samplify is a privately held company funded by Charles River Ventures and Formative Ventures, with strategic investors including Schlumberger, Mamiya, and IDT.