July 30, 2007
Each trading day is a perfect storm. Every month, every quarter, the volume of data increases, the sophistication of algorithms and business processes grows, and the competitive pressure to get things done as quickly and efficiently as possible mounts. In the past, Moore’s Law has rescued us from drowning in computing demand, but the pace of progress alone can no longer stem the tide. While we previously had been able to recompile applications with new processor optimizations and deploy them on bigger, faster systems to keep ourselves afloat, new systems offer greater concurrency instead of greater speed, and simple recompilation and deployment cannot take advantage of them. Our appetite for computing power isn’t satisfied with lone, uncoordinated machines. For financial services, distributed computing isn’t a luxury: it puts food on the table.
The ongoing adoption of distributed computing within financial services has not been easy, though. Retrofitting applications to benefit from distributed architectures has required significant knowledge, resources and effort -- often more than we have had on hand. For example, many financial applications begin as prototypes developed in Microsoft Excel by quantitative analysts or other business managers. Typically, those prototypes have not lent themselves to concurrent or distributed implementations. Software engineers, who in the past could afford to do relatively literal translations of spreadsheet logic into production code, have since needed to transform solutions into something more amenable to running in parallel. This transformation generally requires developers with greater technical skill and a finer ability to engage with and understand the underlying business problem. While an individual developer or a small team of average developers can implement monolithic or standard n-tier applications with very little impact on IT, implementing distributed applications generally requires more sophisticated developers with specialized knowledge of networking, security, concurrency and performance.
Furthermore, these projects tend to require significantly more IT involvement, coordination and production management in order to support the physical infrastructure of distributed applications. Before the current crop of distributed computing tools came to market, many engineering teams rolled their own distributed application infrastructures. Even when relying on message-passing libraries such as MPI, teams had to invest heavily in provisioning, deployment and data distribution. Such teams spent an inordinate amount of time developing distributed computing infrastructures instead of the custom business logic that generates unique value.
Fortunately, the past several years have brought many enabling developments in distributed computing, including a number of high-quality vendor products that significantly reduce the complexity and cost of delivering distributed applications. It is now easier than ever to develop distributed applications across a range of platforms. Nonetheless, distributed application development still requires a fair amount of architectural skill and understanding, IT involvement, and nontrivial transformation of business logic.
Distributed Computing Today
The key aspect of distributed computing today is that it is no longer just theoretical. You actually can write certain types of distributed applications (such as those that are embarrassingly parallel) with off-the-shelf products, and with minimal time, effort and cost. The range of stable, usable distributed computing platforms -- such as those from Platform Computing, GigaSpaces and Digipede Technologies -- is impressive, as are the other supporting technologies -- such as distributed data frameworks from GemStone, Tangosol and ScaleOut Software, and event processing systems from Progress Apama and BEA -- that enable more sophisticated distributed designs and architectures. Thus, it is becoming much rarer to find software development teams in financial services working on this type of plumbing.
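To make the "embarrassingly parallel" class of application concrete, here is a minimal sketch in Python of the kind of workload these platforms handle well. The pricing model is a hypothetical stand-in: what matters is that each task is fully independent, so a grid scheduler can farm the work units out to any number of nodes with no coordination between them (local processes stand in for grid nodes here).

```python
# Sketch of an embarrassingly parallel workload. Each task is independent,
# so work units can be dispatched to any number of workers with no
# inter-task communication. The payoff model is illustrative only.
import random
from multiprocessing import Pool

def price_scenario(seed):
    """One self-contained work unit: a toy Monte Carlo payoff estimate."""
    rng = random.Random(seed)
    n = 10_000
    payoff = sum(max(rng.gauss(100.0, 20.0) - 100.0, 0.0) for _ in range(n))
    return payoff / n

def run_grid(num_tasks, workers=4):
    # Each seed defines one task; a grid platform would dispatch these to
    # remote compute nodes instead of a local process pool.
    with Pool(workers) as pool:
        results = pool.map(price_scenario, range(num_tasks))
    return sum(results) / len(results)

if __name__ == "__main__":
    print(run_grid(8))
```

The essential property is that `price_scenario` touches no shared state, which is precisely what lets off-the-shelf schedulers distribute it with minimal effort.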
Additionally, there has been a significant rise in conferences, articles and blog entries on distributed computing in financial services and in enterprises at large. While there have been several notable distributed computing projects in the past -- everything from key cracking and searches for Mersenne primes to genome/proteome mapping and signal analysis for SETI -- few were structured in a way that represented how financial services needed to use distributed computing. There was a dearth of information and dialogue about the unique demands of distributed computing in finance, and a lack of live projects from which the community could learn. Now, we are seeing a growing number of financial institutions, from global investment banks to hedge funds, not only piloting distributed computing projects, but also talking about them in public and semi-public forums.
On the other hand, the current state of the world offers a number of serious obstacles. For example, while it is positive that there is a wave of vendor products that solve different parts of the distributed computing puzzle, few of them treat distributed application development as a holistic endeavor that encompasses many problems (e.g., job scheduling, event processing, data distribution and caching, security, deployment, APIs, IDEs) at once. With few exceptions, such as GigaSpaces, most distributed computing architectures require the assembly of infrastructure from several different vendors. While this does permit architectures built from best-of-breed solutions, it can be challenging to stitch the various pieces together into a coherent developer framework.
Another obstacle is that the organizations designing business logic have not been thinking of business logic in a form amenable to distributed computing. Most algorithms, prototypes and problem descriptions exhibit a serial bias and usually require significant transformation to adapt the design to a distributed model. For example, many designs assume a canonical database, a master process and reliable determinism, and these assumptions get subtly baked into the requirements. That means the software engineering process must reach back into the business to search for equivalent, distributed solutions. This, unfortunately, puts significant pressure on perhaps the weakest interaction in many financial organizations: the interaction between subject matter expert and software engineer.
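A small sketch can illustrate the serial bias described above. The "business spec" version below assumes a single master process walking positions in order and mutating a running total; the restated version asks for the same answer as an order-independent map step plus an associative reduce, which a grid can evaluate over any partitioning of the data. The portfolio shape and risk measure are hypothetical placeholders, not a real model.

```python
# Illustration of "serial bias" in a business spec, and its restatement
# in a distribution-friendly form. Position fields are illustrative.
from functools import reduce

def serial_risk(positions):
    # As specified: one process, one pass, one mutable accumulator.
    total = 0.0
    for p in positions:
        total += p["qty"] * p["price"]
    return total

def exposure(p):
    # Map step: pure and side-effect free, safe to run anywhere.
    return p["qty"] * p["price"]

def distributed_risk(partitions):
    # Each partition can be summed on a different node; the partial
    # results combine with an associative reduce in any order.
    partials = [sum(map(exposure, part)) for part in partitions]
    return reduce(lambda a, b: a + b, partials, 0.0)
```

Finding the map/reduce restatement is exactly the step that requires reaching back into the business: the engineer must confirm that the measure really is order-independent and decomposable, which the serial spec never states.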
Things are also somewhat bleak on the developer side. From a programming language perspective, we are still in the assembly language era of distributed computing. Most distributed programs are intimately involved on a line-by-line basis in concurrency, synchronization, coherency and other plumbing. Design patterns and language concepts have not sufficiently formed and stabilized to migrate into our mainstay programming languages, although there are some interesting indications of the things to come in technologies such as Erlang and Microsoft’s CCR/DSS.
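The "assembly language era" point can be seen in miniature in any hand-rolled concurrent aggregator, where synchronization plumbing dominates the single line of business logic. This is a deliberately small, hypothetical example (the class and field names are illustrative); in real distributed code the ratio of plumbing to logic is typically far worse.

```python
# Nearly every line here is concurrency plumbing; the business logic is
# the single accumulation statement. Names are illustrative.
import threading

class NotionalAggregator:
    def __init__(self):
        self._lock = threading.Lock()        # plumbing
        self._total = 0.0

    def add_trade(self, qty, price):
        with self._lock:                     # plumbing
            self._total += qty * price       # the actual business logic

    def total(self):
        with self._lock:                     # plumbing
            return self._total

def worker(agg, trades):
    # Each concurrent worker funnels its trades through the shared lock.
    for qty, price in trades:
        agg.add_trade(qty, price)
```

Until such synchronization concerns migrate into language-level patterns, as Erlang-style message passing hints they might, developers keep paying this line-by-line tax.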
While distributed computing vendors are eager to address some of these developer concerns, they have lately been torn in two different directions: (1) satisfying the needs of the IT organization by providing better tools to manage, interoperate, provision, secure and monitor large grids; and (2) satisfying the needs of the developer by making distributed applications easier to code, test, debug, package and deploy. Unfortunately, few vendors have been able to make progress on both fronts. Some products, such as those from Digipede and GigaSpaces, are clearly more developer-friendly, while others, like those from Platform Computing, have grown sophisticated management capabilities without the developer richness of the former.
Distributed Computing Tomorrow
Developing distributed computing applications today is both practical and profitable, if perhaps a bit quirky and fussy. But several trends are developing out of the current state of affairs that look to have promising effects on distributed computing. In particular, there are three that we can expect to make a splash in financial services:
Many of the current distributed applications in financial services are based in the front office and are purely embarrassingly parallel. In the coming years, however, we should see this type of application extend into the middle and back office. What is still unclear, though, is whether financial institutions will run all of these applications on consolidated, single-grid architectures or continue to run smaller ad-hoc grids for individual applications. As the number of applications in an institution grows, the pressure on IT to centrally manage the physical infrastructure will grow and incite IT departments to consolidate grids. However, if it doesn’t become significantly easier for application developers to reserve, configure, debug, test and deploy resources within a shared grid computing infrastructure, many line-of-business departments will continue to defect from shared infrastructure and deploy private grids.
Of course, it is reasonable to expect in the coming years that various vendors of cooperating technologies, such as job scheduling and distributed caching, will find partners and either package their technologies together or merge entirely. The packaging of these technologies as platforms will make it much easier for IT and engineering to work together on shared infrastructure. The new platforms will likely incorporate other technologies, as well, such as complex event processing, security entitlement and virtualization. And job scheduling and resource allocation are expected to get much more sophisticated, perhaps offering teams and departments the ability to bid on grid resources in a trading-type market.
With more pervasive adoption and more comprehensive platforms in place, we can expect to see a maturation of design patterns and best practices for developing distributed applications. A number of products and technologies, such as Erlang, Microsoft CCR/DSS, Progress Apama and BEA Event Server, have driven application design in a promising direction toward event-driven and message-oriented design patterns that, at least at a logical level, are much more accessible to non-technical contributors, yet highly amenable to distributed computing. While these particular technologies do not solve the distributed computing problem directly, they offer a way to model business applications that might allow for more automatic distribution of computation over a grid. A business process modeled against these design patterns has a distributed implementation that looks very similar to a naïve serial implementation. Event-driven/message-oriented designs, coupled with efficient message bus infrastructures and distributed caching, offer a way to avoid the difficult translation of business requirements into code.
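The event-driven idea above can be sketched very simply: business logic is written once as a pure handler over messages, and the same handler runs unchanged whether events are replayed serially from a queue or consumed from a distributed message bus. The event shape and handler below are hypothetical illustrations, not any vendor's API.

```python
# Minimal event-driven sketch: the handler is a pure function of
# (state, event), so the serial fold below and a distributed consumer
# on a message bus execute identical business logic.
from queue import Queue

def on_fill(state, event):
    """Pure handler: returns new state from old state plus one event."""
    sym = event["symbol"]
    state = dict(state)  # no in-place mutation of shared state
    state[sym] = state.get(sym, 0) + event["qty"]
    return state

def run(handler, events):
    # Serially this is just a fold over a queue; on a grid, each handler
    # invocation is an independently schedulable unit of work consuming
    # from a bus, with state held in a distributed cache.
    state = {}
    q = Queue()
    for e in events:
        q.put(e)
    while not q.empty():
        state = handler(state, q.get())
    return state
```

Because the handler never reaches outside its arguments, the distributed version really does look like the naïve serial one, which is the accessibility-plus-scalability property these patterns promise.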
The future of distributed application development in financial services is bright and tantalizing, and we can be hopeful that vendors will address today’s complexities in management, interoperability, deployment, infrastructure sharing, testing, debugging, development and more. But, even with the current crop of tools, financial developers can be very effective in scaling applications to meet the performance demands of trading in world markets. Developing grid computing applications today has progressed beyond its Ford Model T days, even if we haven’t yet gotten to such niceties as disc brakes, seat belts, airbags and windshield wipers.