XTP Platforms: Ready to Make their Mark

By Derrick Harris, Editor

February 18, 2008

Only one year ago, Gartner analyst Massimo Pezzini described grid-based application platforms as a burgeoning technology that might just be ready to tackle the demands of extreme transaction processing (XTP) environments. My, how times have changed. In an October 2007 report titled “The Birth of the Extreme Transaction Processing Platform: Enabling a Service-Oriented Architecture, Events and More,” Pezzini wrote that many XTP-enabling technologies — including grid-based application platforms — are “technically credible” alternatives to traditional application servers and, although still in use by only the leading edge of companies, have enough of an installation base “to prove they can address real requirements.” Put simply: grid-based application platforms are for real, and true extreme transaction-processing platforms (XTPP) are on the horizon.

According to the report — and easily evidenced by speaking with end-users — traditional online transaction-processing (OLTP) architectures are wearing thin under a bombardment of transactions caused in part by today’s service-oriented and event-driven architectures. In addition, transaction completion times — especially in markets like electronic trading — are expected to shrink significantly, even as associated data volumes grow, often exponentially.

The ultimate platform to address these concerns will be the XTPP, which according to Pezzini will include the following features: a cohesive programming model; event-processing and service containers; a flow management container; a batch container; a common distributed transaction manager; a high-performance computing fabric; tera-architecture support; and development tool, security, administration and management capabilities. The development of robust XTPPs, wrote Pezzini, is evidenced by expanded capabilities of traditional solutions by vendors like Oracle, BEA Systems and Tibco, as well as by the increasing XTP holism of leading-edge application platforms offered by companies like IBM, GigaSpaces and Appistry.

“One Platform to Rule them All”

One of the first to latch onto Pezzini’s research and market itself as a grid-based application platform, as well as a player in the XTP space, Appistry is particularly pleased with the sunny forecast it is seeing for solutions like its Enterprise Application Fabric (EAF). Of particular interest to Sam Charrington, Appistry’s vice president of product management and marketing, is the Gartner report’s prediction that “By 2012, mounting user need for XTP applications and technology innovation will propel at least one new software vendor into leadership in the application platform market with more than 15% market share in the XTP platform segment.”

Injecting a little Middle Earth into the XTP discussion, Charrington commented that Gartner’s prediction for XTPPs seems to be that there will be “one platform to rule them all.” With its technology having been called all of those things — from application server to SOA to grid-based application platform — Charrington believes Appistry is in a good position to be among the market leaders. While vendors selling new technologies always want to talk about how those technologies work with existing ones, Charrington said more and more customers are interested in EAF as a replacement for their current application servers. “I think the reason why there are so many names out there for technologies like ours is because … there is something new that is being born,” he said.

EAF has always been about the application, and in order to ease the process of running applications on it — as well as to allay potential concerns about the steep learning curve that comes with replacing an existing application environment — Appistry has been working hard to make EAF compatible with existing applications and code. In the past, explained Charrington, applications had to be written to EAF’s specific API and had to be aware that they were running in a fabric environment. Now, however, developers can write applications using the Spring Framework, .NET or plain-old Java and deploy them in the fabric without making any changes.
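A hypothetical illustration of what Charrington describes: the class below is plain Java — no fabric-specific imports, interfaces or base classes — so in principle the same code could run standalone or be deployed to a fabric unchanged. The example is invented for illustration and shows nothing of EAF's actual packaging or deployment mechanics.

```java
// A plain-old Java object (POJO): no fabric-specific imports or base
// classes, so the same class can run standalone or inside a platform
// that discovers services reflectively. (Illustrative only; EAF's
// real deployment model is not shown here.)
public class PaymentAuthorizer {

    // Ordinary business logic; nothing here knows about the fabric.
    public boolean authorize(String cardNumber, long amountCents) {
        return luhnValid(cardNumber) && amountCents > 0;
    }

    // Luhn checksum, the standard card-number validity test.
    static boolean luhnValid(String number) {
        int sum = 0;
        boolean doubleIt = false;
        for (int i = number.length() - 1; i >= 0; i--) {
            int d = number.charAt(i) - '0';
            if (d < 0 || d > 9) return false;
            if (doubleIt) {
                d *= 2;
                if (d > 9) d -= 9;
            }
            sum += d;
            doubleIt = !doubleIt;
        }
        return sum % 10 == 0;
    }

    public static void main(String[] args) {
        PaymentAuthorizer auth = new PaymentAuthorizer();
        // "4539578763621486" is a Luhn-valid test number.
        System.out.println(auth.authorize("4539578763621486", 2500)); // true
    }
}
```

Because the class carries no platform dependency, the same jar could, in theory, be handed to an ordinary application server or to a fabric runtime without recompilation — which is the portability point Charrington is making.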

One of Gartner’s concerns about the XTPP market is that it doesn’t yet have formal standards, but Charrington sees application platform vendors creating their own de facto standards by making their products compatible with Spring, POJO, .NET, etc. There is more going on than Gartner might lead you to believe, says Charrington, and while Appistry and others do have a vested interest in providing standards where necessary, this focus on everyday development models means, at the very least, that customers don’t have to be locked into specific platforms.

Overall, said Charrington, the whole ride has been exciting for Appistry: what required a lot of education a couple of years ago, people now seem to get. “The problems are becoming apparent to customers as they try to keep up with their competition [and] the requirements of their customers,” he observed. “Our customers are wanting real-time access to information, ‘anytime, any place.'” This makes for fertile ground, added Charrington, as less time spent educating customers means more time spent improving the solution.

Tackling XTP with EAF

In late 2006, when we first covered XTP, we spoke with Appistry customer Clearent, which was getting set to take its credit card payment-processing service live on EAF. Clearent has been up and running for a year now, and Vice President of Product Development Mark Peck says EAF has allowed the company to keep processing transactions day and night with no interruptions.

In Clearent’s line of business, he explained, the business day lasts from sunup on the East Coast until sundown on the West Coast, and his company has to provide clients with up-to-date Web information on all the transactions that have been processed. And the setting sun doesn’t mean Clearent’s infrastructure can take a break. “When the sun does go down … and the last bars and restaurants close down and our merchants are processing transactions, and batching them up for processing, we’re getting overnight feeds that have to be accommodated,” Peck explained. “The fact that we can perform system upgrades in real time without having to bring the system down is … a big win for us in this industry.”

Clearly, high availability and low latency are essential to any business handling as many transactions as Clearent (which has its sights set on upward of 500 per second), so the company has made Appistry’s fabric a big part of both its customer-Web-request and transaction-processing environments. “The ability to service those Web requests in a timely fashion is key to the end-user’s perception of our service,” said Peck. Latency concerns, of course, come into play on the transactional side, where they are critical. Clearent has a narrow window to get the transactions from the card association, process them, apply the appropriate billing and pricing, and then convey them for clearing and settlement. “Within that window,” said Peck, “even if it’s running in a batch mode, we can’t afford to spend very much time on any one given transaction.”
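The arithmetic behind that narrow window is worth making explicit: at 500 transactions per second, a single sequential pipeline has a budget of just two milliseconds per transaction, and anything slower must be spread across parallel workers — which is precisely what a fabric does. A back-of-the-envelope sketch (the 500/sec target comes from Clearent; the per-transaction cost figure is purely illustrative):

```java
// Back-of-the-envelope latency budget: at a given throughput target,
// how long can one sequential pipeline spend per transaction, and how
// many parallel workers does a given per-transaction cost require?
// (Illustrative arithmetic only; not Clearent's actual figures.)
public class TxBudget {

    // Per-transaction budget in microseconds for one sequential pipeline.
    static long budgetMicros(long txPerSecond) {
        return 1_000_000L / txPerSecond;
    }

    // Workers needed if each transaction actually costs costMicros,
    // rounding up to whole workers.
    static long workersNeeded(long txPerSecond, long costMicros) {
        long budget = budgetMicros(txPerSecond);
        return (costMicros + budget - 1) / budget;
    }

    public static void main(String[] args) {
        // 500 tx/sec leaves 2,000 microseconds (2 ms) per transaction.
        System.out.println(budgetMicros(500));          // 2000
        // If billing + pricing took ~10 ms each, five workers would keep up.
        System.out.println(workersNeeded(500, 10_000)); // 5
    }
}
```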

Even given the success Clearent has seen with EAF as it moves toward becoming a true XTP shop, Peck still believes his company is in the minority when it comes to leveraging this type of solution, and thus has a “tremendous advantage” over competitors — but that is bound to change. “I think it’s going to be impossible for somebody to come into this space and compete effectively at the scales and demands — in terms of the real-time transaction processing, the robustness, the ease of use … without having a platform like Appistry’s or some of these extreme transaction platforms,” predicted Peck. “It’s going to be, if not impossible, certainly extremely difficult.”

Big Blue’s XTP Play

For its part, IBM is selling WebSphere Extended Deployment (XD), a software solution designed to virtualize and optimize existing application infrastructures. Although it provides the richest experience with WebSphere, XD supports a slew of application servers, including BEA WebLogic, SAP NetWeaver, JBoss and Apache Tomcat. XD consists of three components — operations optimization, data grid and compute grid — that are available as an integrated package or as individual pieces. Operations optimization provides capabilities like application virtualization, dynamic resource allocation and policy-based management. The compute grid component allows developers to write high-performance, parallelizable Java applications. In many ways, however, IBM considers XD’s real differentiator to be its data grid component, which is powered by IBM’s ObjectGrid technology.

Matt Haynos, project manager for WebSphere XD, says data growth is crippling application performance in traditional infrastructures, which has led customers to products like ObjectGrid, Oracle Coherence or GigaSpaces eXtreme Application Platform, among others. These products allow users to design their data infrastructures first, and then move the business logic or code to the data. “The real cool thing about ObjectGrid,” said Haynos, “is that you can maintain consistent application response even if your data doubles.” And XD handles the scalability challenges brought on by data growth, as well. Just plug in additional servers as data volumes grow, says Haynos, and XD scales linearly, automatically rebalancing the data.

Billy Newport, a distinguished engineer at IBM, believes there are two markets for data grid and distributed caching technologies like ObjectGrid: (1) trading/investment banking, where users understand in detail their architectural needs and develop very specialized infrastructures and applications to deliver near-millisecond response times; and (2) everyone else. There are many normal customers that are just running into walls in terms of data growth, datacenter power woes, etc., who want a solution that will fit in with what they’re already doing as unobtrusively as possible, said Newport. He added that the latter group is probably the bigger market opportunity.

Among the non-financial applications best suited to XTPPs, said Newport, are those involving complex-event processing (CEP) and those needing to deliver low latency times while mining large in-memory datasets. An example of the latter would be predictive analysis medical applications that sift through gigabytes of data in order to discover trends, similarities, etc. “The argument for [data caching] is you think you will be touching the data often enough that the cost of routing the logic over there is cheaper than touching the data remotely,” he explained.
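Newport's routing argument reduces to a simple break-even: ship the logic to the data when the expected number of data touches, multiplied by remote-access latency, exceeds the one-time cost of routing the logic plus the much cheaper local touches. A sketch of that comparison with invented figures (none of these costs are measurements of ObjectGrid or any real grid):

```java
// Newport's cost argument in miniature: moving the code to the data
// wins when (touches * remote latency) exceeds (one-time routing cost
// + touches * local latency). All figures here are illustrative
// assumptions, not measurements of any product.
public class CodeToData {

    // True when shipping the logic to the data partition is cheaper.
    static boolean shipLogic(long touches, double remoteTouchMicros,
                             double routeLogicMicros, double localTouchMicros) {
        double remotePlan = touches * remoteTouchMicros;
        double shipPlan   = routeLogicMicros + touches * localTouchMicros;
        return shipPlan < remotePlan;
    }

    public static void main(String[] args) {
        // 3 touches: 1,500us remote vs 2,000 + 15 = 2,015us shipped.
        System.out.println(shipLogic(3, 500, 2_000, 5));     // false
        // 1,000 touches: 500,000us remote vs 2,000 + 5,000 = 7,000us shipped.
        System.out.println(shipLogic(1_000, 500, 2_000, 5)); // true
    }
}
```

The crossover is exactly the "touching the data often enough" condition Newport describes: data-mining workloads that sweep an in-memory set thousands of times sit far past it, while a one-shot lookup sits well before it.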

Expanding a little on Gartner’s definition of what an XTPP includes, Newport says the first thing an XTPP needs is a replicated memory storage system, so that when a virtual machine comes up, it can automatically start storing data to the grid, and the grid knows to replicate that data somewhere for the sake of fault tolerance. As opposed to writing to disk, “With this memory-based approach,” explained Newport, “the only things you’re really limited on are CPU, network and memory — and all of those three variables scale linearly as you add boxes.” With ObjectGrid’s state-of-the-art memory-based infrastructure, he added, the only problem with scale is finding enough boxes.
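The replicated memory store Newport describes can be reduced to a toy: each write lands on a hash-chosen primary node and is copied to a neighboring backup, so the loss of any single node loses no data. This is a deliberately simplified illustration; ObjectGrid's actual placement, replication and failover machinery is far more sophisticated.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy sketch of a replicated in-memory grid: every put goes to a
// primary node and a backup node, so any single node can fail without
// data loss. (Illustration only; not how ObjectGrid is implemented.)
public class ToyGrid {
    private final List<Map<String, String>> nodes;

    ToyGrid(int nodeCount) {
        nodes = new ArrayList<>();
        for (int i = 0; i < nodeCount; i++) nodes.add(new HashMap<>());
    }

    // Hash the key to a primary node; replicate to the next node over.
    public void put(String key, String value) {
        int primary = Math.floorMod(key.hashCode(), nodes.size());
        int backup  = (primary + 1) % nodes.size();
        nodes.get(primary).put(key, value);
        nodes.get(backup).put(key, value);
    }

    // Read from the first live node holding the key, skipping a failed one.
    public String get(String key, int failedNode) {
        for (int i = 0; i < nodes.size(); i++) {
            if (i == failedNode) continue;       // this node is down
            String v = nodes.get(i).get(key);
            if (v != null) return v;
        }
        return null;
    }

    public static void main(String[] args) {
        ToyGrid grid = new ToyGrid(4);
        grid.put("txn-42", "approved");
        int primary = Math.floorMod("txn-42".hashCode(), 4);
        // Even with the primary node "failed", the replica answers.
        System.out.println(grid.get("txn-42", primary)); // approved
    }
}
```

Because every node holds only keys and values in memory, the capacity and throughput ceilings are CPU, network and RAM — the three resources Newport notes scale linearly as boxes are added.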

In terms of how ObjectGrid, and WebSphere XD as a whole, compare with competitive solutions, Newport said that ObjectGrid and Oracle Coherence are both embeddable data grids. He believes IBM’s product definitely works better in WebSphere environments and also works better on enterprise networks thanks to its ability to asynchronously replicate data across datacenters and topologies. When compared to GigaSpaces, Newport sees the big difference to be GigaSpaces’ goal of replacing the current application platform, as well as its tight alignment with the Spring framework. XD, he explained, is working with Spring and is capable of serving as the application platform, but its general purpose is to optimize customers’ current setups.

Interestingly, however, IBM and Appistry actually have been working to exploit the complementary natures of ObjectGrid and Appistry’s EAF. Appistry has its Fabric-Accessible Memory feature, acknowledged Newport, but it’s not a coherent, robust data grid. “There are two ways of looking at XTP,” he said, “but … Appistry … doesn’t address scaling of the backend; it addresses scaling of the application.”

Appistry’s Charrington takes a similar view, noting that Appistry and IBM have been partners for years and that, “Particularly around XTP, I think IBM and Appistry both believe that grid-based application platforms, like what we’ve got with EAF, and distributed caches, like what they have with ObjectGrid, are both important pieces of the overall XTP puzzle from a customer perspective.” The companies have integrated the solutions in the field, he added, and are continuing to assess their options around integrating the two solutions out of the box.

Aside from working with Appistry, IBM is continuing its quest to improve ObjectGrid and to find ways to optimize other technologies using data grid technologies. The company just increased its investment in ObjectGrid by 50 percent, noted Newport, and it is looking for ways to build a CEP platform on top of ObjectGrid, a task made easier with IBM’s recent acquisition of AptSoft and its CEP engine. Support for alternate languages, such as C++ and .NET, also should be on the way, he said.

By refusing to rest on its laurels and advancing these types of efforts, IBM hopes to make the technology easier to consume and tap into the vast market of users who aren’t as advanced as their financial counterparts. “The perception right now is that caching is an expensive technology to deploy because of development costs … and things like that,” said Newport. “The more ‘de-skilled,’ if you want, we can make deploying cache technology, the better off we’ll be and the better off the customers are going to be.”
