July 01, 2010
HPC has been used in the capital markets space for a number of years. Is there now a need for a data-aware cloud, one that offers improved utilization and SLA compliance coupled with lower latency, to move HPC to the next level?
We can begin to answer this question by looking at how HPC is used from a market risk perspective: first providing some background on how market risk is calculated for a portfolio, and then discussing the need for a data-aware cloud that leverages the Windows HPC Server 2008 R2 product.
Both the buy-side (institutions, such as asset managers, that buy securities and investment services) and the sell-side (firms that sell investment services to asset management firms, such as broking/dealing, investment banking, advisory functions, and investment research) have had various HPC grid deployments for many years. Platform and DataSynapse are the two vendors that I believe have the largest install bases within the London and New York financial communities.
Initially these HPC grid installations were not used in the most intelligent way; in some cases they were simply used to run Microsoft Excel spreadsheet calculations in parallel. Unfortunately, in certain places this is still the case today, which effectively means non-optimal grid usage, often leading to missed Service Level Agreements (SLAs) coupled with higher running costs and longer job/task compute times.
Another issue that has been common in sell-side organizations is individual asset classes (rates, foreign exchange, commodities, etc.) owning their own HPC grids, partly due to the inability of a shared HPC grid to satisfy their SLAs appropriately.
One of the main uses of financial HPC grids has been to calculate risk, specifically market risk. Market risk is the risk that the value of a portfolio, whether an investment portfolio or a trading portfolio, will decrease due to changes in the value of the market risk factors. The four standard market risk factors are stock prices, interest rates, foreign exchange rates, and commodity prices.
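To make this concrete, here is a minimal sketch, with purely illustrative numbers and hypothetical sensitivity values, of a first-order estimate of how a portfolio's value moves when those four risk factors change.

```python
# Minimal sketch: first-order estimate of a portfolio value change driven by
# moves in the four standard market risk factor classes (illustrative numbers).

sensitivities = {                   # value change per unit move in each factor
    "equity_price": 5_000.0,
    "interest_rate": -120_000.0,    # per 1.00 (i.e. 100%) rate move
    "fx_rate": 8_000.0,
    "commodity_price": 1_500.0,
}
factor_moves = {"equity_price": -1.2, "interest_rate": 0.0025,
                "fx_rate": 0.5, "commodity_price": 2.0}

pnl_estimate = sum(sensitivities[f] * factor_moves[f] for f in sensitivities)
print(f"estimated portfolio value change: {pnl_estimate:,.2f}")
```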
Before we look at how market risk is calculated using HPC grids, let's look at how an HPC grid is used within the context of the trade life cycle. Starting with the trader: they submit a buy or sell order to a market (e.g. an exchange). When the order is accepted, a trade is created that becomes part of a portfolio (sometimes known as the trader's book). For the lifetime of the trade, its market risk needs to be calculated so that the holder of the trade can understand their overall profit/loss. Often the market risk can be calculated with a closed-form solution, which is what we will discuss for the rest of this article.
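As an illustration of a closed-form valuation, the sketch below prices a European call option with the standard Black-Scholes formula. The article does not prescribe a particular model, so treat this as a representative example of the kind of per-trade calculation a grid node would run.

```python
# Minimal sketch: Black-Scholes closed-form price for a European call option,
# a typical example of a closed-form valuation run per trade on the grid.
from math import log, sqrt, exp
from statistics import NormalDist

def black_scholes_call(spot, strike, rate, vol, expiry_years):
    """Closed-form price of a European call option."""
    n = NormalDist()  # standard normal distribution
    d1 = (log(spot / strike) + (rate + 0.5 * vol ** 2) * expiry_years) / (vol * sqrt(expiry_years))
    d2 = d1 - vol * sqrt(expiry_years)
    return spot * n.cdf(d1) - strike * exp(-rate * expiry_years) * n.cdf(d2)

# Example: re-price a trade whenever a market risk factor (spot, rate, vol) changes.
print(black_scholes_call(spot=100.0, strike=105.0, rate=0.02, vol=0.25, expiry_years=1.0))
```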
An HPC solution is ideal given the number of books that require market risk calculations, especially with the ever-increasing need to recalculate a trade's market risk whenever one or more of the market risk factors changes. For certain trades there is no closed-form solution, forcing a Monte Carlo approach, which is compute-intensive and again calls for an HPC solution.
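For comparison, here is a minimal sketch of the Monte Carlo route for the same option under geometric Brownian motion. The path count is arbitrary, but it shows why simulation-based valuation is far more compute-intensive than a closed-form price.

```python
# Minimal sketch: Monte Carlo pricing of a European call under geometric
# Brownian motion; illustrates why path simulation is compute-intensive.
import random
from math import exp, sqrt

def monte_carlo_call(spot, strike, rate, vol, expiry_years, paths=100_000):
    payoff_sum = 0.0
    for _ in range(paths):
        z = random.gauss(0.0, 1.0)  # one standard normal draw per terminal price
        terminal = spot * exp((rate - 0.5 * vol ** 2) * expiry_years + vol * sqrt(expiry_years) * z)
        payoff_sum += max(terminal - strike, 0.0)
    return exp(-rate * expiry_years) * payoff_sum / paths

print(monte_carlo_call(100.0, 105.0, 0.02, 0.25, 1.0))
```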
In the simplest and possibly most common form, market risk is calculated as follows for a portfolio:
1. Snap the appropriate market data, yield curves, and forecast curves (the model) required for the calculations
2. Submit all trades to the HPC grid, either sending the required calculation data (from step 1) with each trade submitted, or allowing each node used within the HPC grid to access the required calculation data (possibly in a database or other repository)
3. Store calculated market risk values as required in a repository for later use
4. Iterate this process as many times per day as necessary to manage your market risk
The above four-step process works, but there are a number of downsides, including the need to pass a lot of data around the network, and the creation of hot spots where a large number of nodes hit the same repositories for certain data, leading to bottlenecks.
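To make that data-movement downside concrete, here is a hypothetical sketch of the naive workflow in which the snapped market data is bundled with every trade submitted to the grid. The functions snap_market_data and submit_task are placeholder names, not any vendor's API.

```python
# Hypothetical sketch of the naive four-step workflow: the full snapped market
# data is copied into every task, so network traffic grows with portfolio size.

def snap_market_data():
    # Step 1: snap yield/forecast curves (placeholder values).
    return {"YEN_discount_factors": [0.9998, 0.9950, 0.9890]}

def submit_task(trade, market_data):
    # Stand-in for a vendor scheduler call: in the naive approach the
    # market data payload travels with every single trade.
    print(f"sending trade {trade['id']} with {len(market_data)} curve set(s)")

def calculate_portfolio_risk(portfolio):
    market_data = snap_market_data()          # step 1
    for trade in portfolio:                   # step 2: one task per trade
        submit_task(trade, market_data)
    # steps 3/4: results would be stored in a repository and the cycle repeated

calculate_portfolio_risk([{"id": 1, "ccy": "YEN"}, {"id": 2, "ccy": "USD"}])
```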
If we move the above market risk calculation to the cloud (such as Amazon EC2, Google App Engine, or Microsoft Azure), we end up with the same issues, minus the need to host the hardware ourselves. There has to be a more intelligent way of calculating market risk. What we need is a more intelligent HPC grid/cloud that is data aware.
The Data-Aware Cloud
Given the aforementioned market risk problem, there are a number of factors we can leverage to improve the utilization of the HPC grid while calculating market risk:
Portfolio composition and size could influence how we leverage the HPC grid for calculating discount factors. Discount factors are currency-specific, so, for example, every YEN-denominated trade in the portfolio can reuse the same set of YEN discount factors.
When we submit the portfolio job to the HPC scheduler/broker for processing, we don’t want random distribution of the trades (as tasks) to arbitrary nodes; we need a more targeted approach. Windows HPC Server, DataSynapse and Platform all support the concept of a “group”, which allows restricting the nodes a job can run on within the HPC grid. However, we want to take this a step further and ensure that, continuing the YEN example, all YEN trades are submitted to the same group of nodes where possible. Further, we want to pre-load the discount factors for the YEN market risk calculations onto these nodes, to avoid sending the discount factors with each trade dispatched to a node for processing.
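A minimal sketch of that targeted approach follows: trades are bucketed by currency and each bucket is submitted to a node group assumed to hold (or pre-load) that currency's discount factors. The group names and submit_job_to_group are hypothetical stand-ins, not the Windows HPC Server, DataSynapse or Platform APIs.

```python
# Hypothetical sketch: bucket trades by currency and submit each bucket to a
# node group that already holds that currency's discount factors.
from collections import defaultdict

NODE_GROUPS = {"YEN": "grp-yen", "USD": "grp-usd", "EUR": "grp-eur"}

def submit_job_to_group(group, tasks):
    # Stand-in for a scheduler call that pins a job to a node group.
    print(f"job -> {group}: {len(tasks)} task(s)")

def submit_portfolio(portfolio):
    by_ccy = defaultdict(list)
    for trade in portfolio:
        by_ccy[trade["ccy"]].append(trade)   # keep same-currency trades together
    for ccy, trades in by_ccy.items():
        submit_job_to_group(NODE_GROUPS[ccy], trades)

submit_portfolio([{"id": 1, "ccy": "YEN"}, {"id": 2, "ccy": "YEN"}, {"id": 3, "ccy": "USD"}])
```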
We essentially want both stateless and stateful execution concepts within the HPC grid. One might think about holding this state within the operating system process the node uses to run its task(s). Unfortunately this isn’t ideal, as that process often has to be torn down after processing a task due to the nature of certain legacy libraries.
Additionally, there is an SLA problem to be solved, which is why the “groups” mentioned above need to be dynamic. If the HPC scheduler determines that a job will not complete within its SLA, it needs to bring more nodes into the YEN currency group, and those new nodes need to be pre-loaded with the appropriate state. Likewise, on completion of the job, the state on the nodes needs to be erased.
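Here is a hedged sketch of that dynamic-group idea: a crude completion-time estimate is compared against the SLA, and the currency group is grown (with state pre-loaded onto each new node) until the estimate fits. The function names and the estimation model are assumptions for illustration only.

```python
# Hypothetical sketch of an SLA check that grows a currency group on demand
# and pre-loads state onto any node added to the group.

def preload_state(group, node_id):
    print(f"pre-loading {group} discount factors onto node {node_id}")

def estimate_completion_seconds(tasks_remaining, nodes, seconds_per_task):
    # Crude estimate: tasks are spread evenly over the nodes in the group.
    return (tasks_remaining / max(nodes, 1)) * seconds_per_task

def rebalance_group(group, tasks_remaining, nodes, seconds_per_task, sla_seconds):
    while estimate_completion_seconds(tasks_remaining, nodes, seconds_per_task) > sla_seconds:
        nodes += 1                               # grow the group
        preload_state(group, node_id=nodes)      # new node gets the discount factors
    return nodes

print(rebalance_group("grp-yen", tasks_remaining=10_000, nodes=8,
                      seconds_per_task=0.5, sla_seconds=300))
```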
Using Windows HPC Server 2008 R2 as a Data-Aware Cloud
If we look at building a data-aware market risk HPC grid using Windows HPC Server 2008 R2, we end up with a possible architecture as seen in the diagram below. For brevity, let’s assume the HPC grid is fed by a Complex Event Processing (CEP) engine such as Microsoft StreamInsight, which determines when a portfolio needs to be recalculated and submits jobs accordingly. The benefit of CEP is that the HPC grid is event-fed rather than batch-fed, which is often the case today.
Let’s also assume that we have a Windows Server AppFabric Caching cluster that contains the discount factors. AppFabric effectively offers us a data fabric solution that we can locate in close proximity to our HPC grid to reduce latency on the data access required by HPC jobs. Although it’s possible to include all HPC nodes in the AppFabric cluster for a small HPC grid, for a grid of tens of thousands of nodes this would probably be impractical. Additionally, this path wouldn’t necessarily reduce the network latency of data access, since there is still only a single primary copy of any piece of data in a distributed data fabric.
The HPC scheduler receives the jobs and distributes each job’s constituent tasks to the appropriate nodes in the grid. Before a node receives a YEN task, the node manager will have instructed the node’s “State Manager” to pre-load the appropriate set of discount factors, thereby limiting the data that has to travel with each task and reducing node-to-repository traffic.
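The sketch below illustrates the node-side idea, assuming a hypothetical State Manager that outlives individual task executions: the node manager pre-loads a currency's discount factors once, and subsequent tasks on that node read them locally rather than receiving them with every trade.

```python
# Hypothetical sketch of the node-side "State Manager": discount factors are
# pre-loaded once per currency and reused by every task that runs on the node.

class StateManager:
    def __init__(self):
        self._state = {}                       # survives across task executions

    def preload(self, currency, discount_factors):
        self._state[currency] = discount_factors

    def discount_factors(self, currency):
        return self._state[currency]

node_state = StateManager()
node_state.preload("YEN", [0.9998, 0.9950, 0.9890])   # done before YEN tasks arrive

def run_task(trade):
    dfs = node_state.discount_factors(trade["ccy"])    # no per-task data transfer
    return sum(dfs)                                    # placeholder for the real valuation

print(run_task({"id": 42, "ccy": "YEN"}))
```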
Where Are We?
This article has briefly touched on a few possibilities for improving the computational time of calculating market risk. What is clear from my research and proof of concepts (POCs) is that much can be done to improve the usage of an HPC grid from a software perspective, both in how the business logic is applied and in what the various vendors of HPC products can do to support improved orchestration of resources.
Today, HPC grids are leveraged in a batch fashion, possibly calculating market risk at certain pre-defined times, or when certain pre-defined key market events occur - for example, the daily 11am release of the London Interbank Offered Rate (LIBOR). Changes in the operating environment driven by regulatory frameworks, specifically around capital adequacy, together with increased internalization, are effectively forcing dynamic market risk calculations on an intra-day basis. As a result, the business is able to maximize the cash it puts to work in the market while minimizing its regulatory reserve requirements. Going forward, as the business needs to become ever more dynamic, it is clear that HPC solutions will increasingly need to leverage event-driven software engineering techniques and CEP to meet these challenges.
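As a closing illustration of the event-driven approach (StreamInsight itself is a .NET CEP engine; the subscription table and function names below are hypothetical), a market event such as the daily LIBOR fixing triggers risk jobs only for the portfolios it affects, rather than waiting for a batch window.

```python
# Hypothetical sketch of event-driven recalculation: a market event (e.g. the
# daily LIBOR fixing) triggers risk jobs only for the portfolios it affects.

SUBSCRIPTIONS = {
    "LIBOR_GBP_3M": ["rates_book_1", "swaps_book_7"],
    "USDJPY_SPOT": ["fx_book_2"],
}

def submit_risk_job(portfolio_id):
    print(f"submitting market risk job for {portfolio_id}")

def on_market_event(event_name):
    # CEP engine callback: fan the event out to every affected portfolio.
    for portfolio_id in SUBSCRIPTIONS.get(event_name, []):
        submit_risk_job(portfolio_id)

on_market_event("LIBOR_GBP_3M")
```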
Matt Davey is a Director at Lab49, a strategy and technology consulting firm that builds advanced business solutions for the financial services industry. You can read more from Lab49 at http://blog.lab49.com/, or Matt's blog at http://mdavey.wordpress.com.