GemStone Releases GemFire Enterprise 5.1


BEAVERTON, Ore., Oct. 30 -- GemStone Systems, the leading provider of distributed data management and virtualization solutions, today announced GemFire Enterprise 5.1, a core component of its high-performance enterprise data fabric (EDF). The new release serves as a distributed operational data management infrastructure that sits between clustered application processes and back-end data sources to provide very low-latency, predictable, high-throughput data sharing and event distribution. By managing data in memory, GemFire Enterprise 5.1 enables extremely high-speed data sharing that turns a network of machines into a single, logical data management unit, or data fabric.
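
The announcement includes no sample code, but the data fabric pattern it describes can be sketched roughly as follows. This is an illustrative example only, written against the Java API of Apache Geode, the open-source descendant of GemFire; the class names, locator address and region name are assumptions and may not match the GemFire Enterprise 5.1 API of the time.

    import org.apache.geode.cache.Cache;
    import org.apache.geode.cache.CacheFactory;
    import org.apache.geode.cache.Region;
    import org.apache.geode.cache.RegionShortcut;

    public class DataFabricSketch {
        public static void main(String[] args) {
            // Join the distributed system; each peer that runs this code
            // becomes one node of the in-memory data fabric.
            Cache cache = new CacheFactory()
                    .set("locators", "localhost[10334]")  // assumed locator address
                    .create();

            // A replicated region shared by all members of the fabric.
            Region<String, Double> prices = cache
                    .<String, Double>createRegionFactory(RegionShortcut.REPLICATE)
                    .create("prices");

            // Updates are propagated to every other member, so the cluster
            // behaves as a single logical data management unit.
            prices.put("ACME", 42.17);
            System.out.println("ACME = " + prices.get("ACME"));

            cache.close();
        }
    }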

GemFire Enterprise 5.1 introduces an advanced set of technical features to deliver powerful, end-to-end scalability and performance improvements. In addition to augmenting the native C++/C# caching capabilities, GemFire Enterprise 5.1 provides highly available, asynchronous cache update notifications that protect clients against server failures.
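
A minimal client-side subscription sketch illustrates the asynchronous update notifications described above. Again, this is written against the Java API of the modern Apache Geode descendant rather than the native C++/C# clients of release 5.1; the locator address, redundancy level and region name are placeholders.

    import org.apache.geode.cache.EntryEvent;
    import org.apache.geode.cache.Region;
    import org.apache.geode.cache.client.ClientCache;
    import org.apache.geode.cache.client.ClientCacheFactory;
    import org.apache.geode.cache.client.ClientRegionShortcut;
    import org.apache.geode.cache.util.CacheListenerAdapter;

    public class ClientNotificationSketch {
        public static void main(String[] args) {
            // Connect to the server tier; a subscription-enabled pool receives
            // update events pushed asynchronously from the servers, and the
            // redundancy level keeps a backup event queue on a second server.
            ClientCache cache = new ClientCacheFactory()
                    .addPoolLocator("localhost", 10334)   // assumed locator
                    .setPoolSubscriptionEnabled(true)
                    .setPoolSubscriptionRedundancy(1)     // assumed redundancy level
                    .create();

            Region<String, Double> prices = cache
                    .<String, Double>createClientRegionFactory(ClientRegionShortcut.CACHING_PROXY)
                    .addCacheListener(new CacheListenerAdapter<String, Double>() {
                        @Override
                        public void afterUpdate(EntryEvent<String, Double> event) {
                            // Invoked asynchronously when a server-side update arrives.
                            System.out.println(event.getKey() + " -> " + event.getNewValue());
                        }
                    })
                    .create("prices");

            // Register interest so the servers push changes for all keys.
            prices.registerInterestRegex(".*");
        }
    }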

“As enterprises seek to move from a typical disaster recovery scenario to a resilient architecture, companies need a dynamic distributed cache to support next-generation enterprise utilities, especially for compute-intensive, fault-tolerant applications,” said Chris Wolf, senior analyst with Burton Group.

“There are a large number of variables in a distributed system which significantly increase the possibility of an error, such as loss of data consistency, missed event notifications, or failure conditions arising from applications, resource limitations or machine failures,” said Jags Ramnarayan, chief architect at GemStone Systems. “With this release, GemFire Enterprise 5.1 minimizes application risk under such conditions and lets users specify any level of redundancy when partitioning data across the cluster. GemFire Enterprise 5.1 controls how concurrent load is handled on any server through a configurable set of workers and ensures that events enqueued for delivery to clients can survive server failures.”

The combination of distributed data caching with reliable message delivery gives customers the tools to build next-generation, high-performance, real-time applications. For grid users, GemFire Enterprise 5.1 offers predictable, near-linear scalability as additional resources become available to the data fabric.

“As more and more organizations turn to distributed data grids to improve application performance, minimize latency and reduce operating expenses, they must address the growing reliability and scalability challenges,” continued Ramnarayan. “GemFire Enterprise 5.1 will allow users to leverage native client cache enhancements, configure more than one level of redundancy and optimize for high concurrency to guarantee data availability and integrity. This release reinforces our commitment to delivering reliable solutions that improve and simplify our clients’ most critical IT processes and deliver best-in-class scalability for distributed data grids with sub-millisecond latency.”

New features of GemFire Enterprise 5.1 include:

  • Partitioned data regions: Data partitioning in GemFire Enterprise 5.1 offers improved redundancy. When partitioned regions are configured with redundancy, listener invocation automatically fails over to the newly designated primary. Partitioned regions can inter-operate transparently with non-partitioned regions within a distributed system and support eager or lazy recovery. User-defined policies and configurations control the memory management and redundancy of partitioned regions, guaranteeing “total ordering” of all events across the distributed system without requiring transactions or locks. As a result, all updates can be routed through the primary partition while maintaining a balanced memory usage profile (see the configuration sketch after this list).
  • Reliable and highly available event delivery: GemFire Enterprise 5.1 ensures clients are resilient to server failures and enjoy continuous availability and on-demand scalability. The high-speed transport layer, based on TCP and reliable multicast, ensures 100 percent data availability with no downtime. With distributed event notifications, data updates are spread uniformly across the data set to process events. GemFire Enterprise 5.1 also offers distributed query support, executing OQL for "scatter-gather" algorithms (illustrated in the sketch after this list).
  • Concurrent workload management: GemFire Enterprise 5.1 multiplexes hundreds of client connections onto a configurable number of workers, providing better concurrent workload management so that clients experience less buffering. With control over the number of threads, the conserve-sockets setting can be set to false to parallelize data traffic to peer members and provide better overall throughput, especially when nodes are multi-homed. By reducing the number of active client connections and providing a configuration option for the client to connect to only one endpoint, connections can now be acquired more lazily than in the past.
  • New high-performance persistence implementation: GemFire Enterprise 5.1 offers a high-performance persistence implementation in which every operation is appended to disk files. Circular event log files, which can grow to a configured size, automatically roll over to a new file, and a background thread coalesces the logs to reclaim disk space, resulting in almost a 100 percent throughput gain for asynchronous persistence and a 50 percent gain for synchronous persistence.
  • Improved native C++/C# client cache: Several native client cache enhancements have been implemented in the client-server caching model of GemFire Enterprise 5.1 to foster easy data sharing and collaboration across applications. Cache-level heap LRU implementations reduce risks from fragmentation when working with varying object sizes in the cache. By executing queries on the server side, clients can access partitioned regions and receive reliable event notifications through subscriptions. Client-side performance will not bottleneck the cache server or impede its ability to scale to a growing number of clients, ensuring seamless scalability for grid-like environments.
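
As a concrete illustration of the partitioned-region redundancy and distributed OQL query features listed above, the following sketch again relies on the Java API of Apache Geode, GemFire's open-source successor; the redundancy setting, region name and query string are illustrative assumptions rather than documented 5.1 configuration.

    import org.apache.geode.cache.Cache;
    import org.apache.geode.cache.CacheFactory;
    import org.apache.geode.cache.PartitionAttributesFactory;
    import org.apache.geode.cache.Region;
    import org.apache.geode.cache.RegionShortcut;
    import org.apache.geode.cache.query.Query;
    import org.apache.geode.cache.query.SelectResults;

    public class PartitionedRegionSketch {
        public static void main(String[] args) throws Exception {
            Cache cache = new CacheFactory().create();

            // Partition the data across the cluster and keep one redundant copy
            // of every bucket on another member, so reads and listener invocation
            // fail over if the primary host is lost.
            Region<String, Integer> orders = cache
                    .<String, Integer>createRegionFactory(RegionShortcut.PARTITION)
                    .setPartitionAttributes(new PartitionAttributesFactory<String, Integer>()
                            .setRedundantCopies(1)   // assumed redundancy level
                            .create())
                    .create("orders");

            orders.put("o-1", 250);
            orders.put("o-2", 75);

            // A distributed OQL query: each member evaluates its own buckets and
            // the partial results are gathered back ("scatter-gather").
            Query query = cache.getQueryService()
                    .newQuery("SELECT * FROM /orders o WHERE o > 100");
            SelectResults<?> results = (SelectResults<?>) query.execute();
            System.out.println("orders over 100: " + results.size());

            cache.close();
        }
    }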

About GemStone Systems Inc.

GemStone Systems is a privately held infrastructure software company that provides data services solutions for enterprise business architects and data infrastructure managers who are building, enhancing or simplifying access, distribution, integration and management of information within and across the enterprise. Founded in 1982, and with over 200 installed customers, GemStone is recognized worldwide for its unique competency and patented technology in object management, virtual memory architectures, high-performance caching and data distribution technologies.
