
The advantages of GPU acceleration in computational finance

Posted in Insights

The field of computational finance is defined by speed. With a vast number of calculations to run in order to perform pre-trade risk analytics - a number that will only continue to grow as more data becomes available - it’s critical for asset managers to be able to perform these calculations as quickly as possible.

In recent years, this has led to one question in particular: How many simultaneous calculations can you run on a graphics processing unit (GPU)-based architecture compared to running those same calculations on a central processing unit (CPU)?

Understanding the difference between a GPU and CPU

A GPU is a special-purpose processor built specifically for performing large volumes of calculations. Modern GPUs contain thousands of cores capable of trillions of operations per second. When programmed correctly, a GPU can apply the same operation to every element of a large dataset in parallel, at a much faster rate than a CPU alone.
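To make that pattern concrete, here is a minimal CUDA sketch: each element of a large array gets its own GPU thread, and all threads apply the same operation at once. The array contents, the scaling operation, and the launch parameters are arbitrary choices for illustration only; this is not code from the Elsen nPlatform.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// Each thread applies the same operation to exactly one element of the array.
__global__ void scale(const float *in, float *out, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = in[i] * factor;
}

int main()
{
    const int n = 1 << 20;                       // ~1 million elements
    float *in, *out;
    cudaMallocManaged(&in, n * sizeof(float));   // unified memory visible to CPU and GPU
    cudaMallocManaged(&out, n * sizeof(float));
    for (int i = 0; i < n; ++i)
        in[i] = static_cast<float>(i);

    // Launch enough 256-thread blocks to cover all n elements at once.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    scale<<<blocks, threads>>>(in, out, 2.0f, n);
    cudaDeviceSynchronize();

    std::printf("out[42] = %.1f\n", out[42]);    // expect 84.0
    cudaFree(in);
    cudaFree(out);
    return 0;
}
```

A CPU would walk through those million elements a few at a time; the GPU dispatches them across thousands of cores in one launch.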

A CPU is a general-purpose processor. It can do just about any type of calculation, but that doesn’t mean it does so in the most optimal way: it has only a handful of cores, each backed by a large cache, and can work on only a few threads at a time.

For this reason, GPUs are now used well beyond their original purpose, which, as the name implies, was graphics rendering. They have become standard in data-intensive industries that need powerful tools for massive problems, such as training and applying machine learning models at scale.

The gains in speed from GPU-accelerated processing are well known, but developing for GPUs requires a particular programming skill set and can be time-consuming to do correctly. Most financial institutions don’t have this kind of engineering expertise in-house, or the resources to direct toward a multi-year development program. Today, this can be overcome by using secure, third-party cloud services that give institutions the GPU processing power they need, when they need it.

As you’ll see in the examples below, the advantage of using GPU processing to make financial calculations becomes much more apparent as the volume increases. At a small number of calculations, the performance of a GPU is fairly comparable to that of a CPU. But as the number of calculations increases, so too does the difference in speed between the processors.

This means that as your applications demand more calculations, the GPU’s speed advantage over the CPU grows larger and larger, making the benefit of parallel processing even more apparent at higher volumes.

Bond calculations done in seconds

To test a client’s high-performance computing requirements for a generic bond portfolio, Elsen developed an application on the Elsen nPlatform that calculated the portfolio value for 100,000 bonds with equal portfolio weights. Each bond had its own floating spread, coupon payment frequency, and time to maturity.

The application also calculated 1 million reference and discount rate curves, where each curve was defined as a base rate plus an adjustment. The adjustments were provided by the client; in production, they would be generated through a Monte Carlo process.
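As a rough illustration of how such a workload maps onto a GPU, the sketch below assigns one thread to each scenario curve and prices every bond off that curve. The Bond fields, the flat curve (base rate plus adjustment), the unit notional, and the continuous-discounting coupon convention are all simplifying assumptions made for this example; they are not Elsen’s or the client’s actual pricing model. The host-side setup and launch would follow the same pattern as the earlier sketch.

```cuda
#include <cuda_runtime.h>
#include <math.h>

// Hypothetical bond description; field names are illustrative only.
struct Bond {
    float spread;     // floating spread over the reference rate
    float frequency;  // coupon payments per year
    float maturity;   // time to maturity in years
};

// One thread per scenario curve: price all bonds off that curve (flat rate =
// base rate + adjustment, unit notional, continuous discounting) and store the
// equally weighted portfolio value for that curve.
__global__ void portfolio_value(const Bond *bonds, int n_bonds,
                                const float *adjustments, int n_curves,
                                float base_rate, float *values)
{
    int c = blockIdx.x * blockDim.x + threadIdx.x;
    if (c >= n_curves)
        return;

    float rate = base_rate + adjustments[c];     // curve = base rate + adjustment
    float total = 0.0f;

    for (int b = 0; b < n_bonds; ++b) {
        Bond bond = bonds[b];
        float coupon = (rate + bond.spread) / bond.frequency;  // per-period coupon
        int periods = (int)(bond.maturity * bond.frequency);
        float pv = 0.0f;
        for (int p = 1; p <= periods; ++p) {
            float t = (float)p / bond.frequency;
            pv += coupon * expf(-rate * t);                     // discounted coupons
        }
        pv += expf(-rate * bond.maturity);                      // discounted principal
        total += pv;
    }
    values[c] = total / n_bonds;                 // equal portfolio weights
}
```

The key point is that the 1 million curves are independent of one another, so each can be handed to its own thread with no coordination between them.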

The GPU completed the 1 million curve calculations more than 250x faster than a CPU, averaging 14 ms per curve and finishing the full run in about 3.6 hours.

VWAP calculations made easy and quick

We worked with a financial institution that was running volume-weighted average price (VWAP) computations over nights and weekends to populate an in-memory database for quick retrieval throughout the week. They wanted to be able to run VWAP calculations on demand and eliminate the need to store pre-computed values.

We developed an application on the Elsen nPlatform that, given a security and two timestamps, retrieves the aggregated trade data between those timestamps. For example, given the ticker IBM and two times, the application computes the total traded volume and the VWAP over that interval.
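VWAP over an interval is simply the sum of price times volume divided by the sum of volume for the trades in that interval, which makes it a natural fit for one-thread-per-query parallelism. The sketch below assumes, purely for illustration, that trades are stored in flat arrays grouped by security with an offsets index; the struct and array names are hypothetical, not the client’s or Elsen’s actual data layout.

```cuda
#include <cuda_runtime.h>

// Hypothetical query: a security index plus an inclusive time window.
struct Query {
    int  security;
    long t_start;
    long t_end;
};

// One thread per query. Trades are assumed to be stored in flat arrays sorted
// by security, with offsets[s]..offsets[s+1] marking security s's trades.
__global__ void vwap(const long *times, const float *prices, const float *volumes,
                     const int *offsets, const Query *queries, int n_queries,
                     float *total_volume, float *vwap_out)
{
    int q = blockIdx.x * blockDim.x + threadIdx.x;
    if (q >= n_queries)
        return;

    Query query = queries[q];
    float vol = 0.0f;
    float notional = 0.0f;

    for (int i = offsets[query.security]; i < offsets[query.security + 1]; ++i) {
        if (times[i] >= query.t_start && times[i] <= query.t_end) {
            vol      += volumes[i];
            notional += prices[i] * volumes[i];   // price weighted by traded volume
        }
    }
    total_volume[q] = vol;
    vwap_out[q]     = (vol > 0.0f) ? (notional / vol) : 0.0f;
}
```

With a layout like this, 50,000 or 100,000 simultaneous queries map directly onto 50,000 or 100,000 GPU threads, which is what drives the speedups reported below.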

The client’s request: “If we make 50,000 calls for such data, we want to implement a process where each core on a GPU would compute this linear algebra in a parallel fashion.”

The client’s metric for success was at least a 30x speedup for the GPU over a CPU for 50,000 simultaneous VWAP calculations with random timestamp pairs across 500 securities. At 100,000 simultaneous calculations, the GPU was 42x faster than the CPU.

Again, as the number of calculations increases, the GPU’s advantage over the CPU only grows.

These use cases illustrate the power that GPU technology - a key component of the Elsen nPlatform - can have when applied to financial calculation problems. By helping asset managers iterate on pre-trade risk analytics more rapidly, Elsen is making it possible for technical and non-technical users alike to test ideas and gain insights in a matter of minutes, not days or weeks.

Justin ensures Elsen only creates state-of-the-art technology, and leads the development team in creating the best solutions for Elsen customers. Justin’s interest in high-performance computational modeling applications led him to positions at IBM and the U.S. Government. He previously ran a research lab centered around analytical modeling, which is similar to financial modeling and gives him an extremely valuable skill set when creating solutions for financial services. Outside of Elsen, he enjoys mentoring students at Northeastern, boxing, networking, and building things that help people be more efficient. Justin holds a B.S. in Computer Engineering from Northeastern University.