Rafael Nicolas Fermin Cota, Yi King, and Chris Chung — written May 27, 2016

Recent research by Marcos Lopez de Prado aims to improve on Markowitz’s Critical Line Algorithm (CLA). The (currently patent-pending) methodology, called Hierarchical Risk Parity (HRP), tackles several issues with the original CLA: instability, concentration, and underperformance.

HRP applies modern mathematics (graph theory and machine learning techniques) to build a diversified portfolio based on the information contained in the covariance matrix. However, unlike quadratic optimizers, HRP does not require the invertibility of the covariance matrix. In fact, HRP can compute a portfolio on an ill-conditioned or even a singular covariance matrix, an impossible feat for quadratic optimizers. Monte Carlo experiments show that HRP delivers lower out-of-sample variance than CLA, even though minimum variance is CLA’s optimization objective. HRP also produces less risky portfolios out-of-sample compared to traditional risk parity methods.

The main idea of HRP is to allocate weights to a portfolio of securities based on

- the clusters formed by the securities (determined by how each security correlates with the rest of the portfolio)
- the volatility of each cluster (more volatile clusters receive a smaller weighting, and vice versa)

This post demonstrates an Rcpp + OpenMP implementation of the HRP methodology suggested by the paper. It uses security returns as input and produces a weighting vector that applies to all securities involved.

The computation is split into four stages:

- Compute Distance Matrix
- Clusterize Securities
- Quasi-Diagonalize (QD) the Covariance Matrix
- Generate Security Weighting

In the HRP paper, a cluster is defined as a group of securities that correlate similarly with the other securities in the portfolio.

First, we compute an \(n \times n\) distance matrix based on the correlation matrix of the \(n\) assets. The distance is defined as \(d_{i,j} = \sqrt{\frac{1}{2}\left(1-\rho_{i,j}\right)}\), which produces the distance between **each pair of assets**. The lower the “distance”, the more correlated two assets are. This step is implemented in the `distanceMatrix_elementwise` function.
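As a rough illustration, the element-wise step could look like the following plain-C++ sketch; the function name and container types are illustrative stand-ins, not the post’s actual Rcpp code:

```cpp
#include <cmath>
#include <vector>

// Illustrative stand-in for distanceMatrix_elementwise: turn a correlation
// matrix into the distance matrix d_ij = sqrt(0.5 * (1 - rho_ij)).
std::vector<std::vector<double>>
distance_elementwise(const std::vector<std::vector<double>>& corr) {
    const std::size_t n = corr.size();
    std::vector<std::vector<double>> d(n, std::vector<double>(n, 0.0));
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < n; ++j)
            d[i][j] = std::sqrt(0.5 * (1.0 - corr[i][j]));
    return d;
}
```

Perfectly correlated assets (\(\rho = 1\)) get distance 0, uncorrelated assets get \(\sqrt{0.5} \approx 0.707\), and perfectly anti-correlated assets (\(\rho = -1\)) get distance 1.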

Secondly, we compute the Euclidean distance between the column vectors of the distance matrix: \(D_{i,j} = \sqrt{\sum_{k=1}^n \left(d_{k,i} - d_{k,j}\right)^2}\). This measures the similarity between two assets based on how they correlate **with the portfolio**. The lower the distance, the more similar the two assets’ correlations with the portfolio are. This step is implemented in the `distanceMatrix_rowwise` function.
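A corresponding sketch for the column-wise Euclidean step, again in plain C++ with illustrative names:

```cpp
#include <cmath>
#include <vector>

// Illustrative stand-in for distanceMatrix_rowwise: Euclidean distance
// between columns i and j of the element-wise distance matrix d.
std::vector<std::vector<double>>
distance_rowwise(const std::vector<std::vector<double>>& d) {
    const std::size_t n = d.size();
    std::vector<std::vector<double>> D(n, std::vector<double>(n, 0.0));
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < n; ++j) {
            double sum = 0.0;
            for (std::size_t k = 0; k < n; ++k) {
                const double diff = d[k][i] - d[k][j];
                sum += diff * diff;
            }
            D[i][j] = std::sqrt(sum);
        }
    return D;
}
```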

Given the matrix of similarities between assets, we proceed to the clustering step to group securities into a hierarchy of clusters.

During each iteration, we pick the two most similar securities based on the distance matrix generated in the previous step, group them together as a cluster, and replace them with a generalizing branch. In this implementation, the generalizing branch is created using the nearest point algorithm: for a branch \(u\) consisting of securities \(i\) and \(j\), the similarity with each remaining security in the portfolio is calculated as \(D_{\cdot,u} = \min\{D_{\cdot,i}, D_{\cdot,j}\}\).

At the end of the clustering step, we have an \(N \times 4\) matrix, where \(N\) stands for the number of clusters. The first two elements are branch indices (each can be either a security or a generalizing branch), the third element is the similarity/distance between the two branches, and the last element indicates the number of securities in the cluster.
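The merge loop described above can be sketched as a plain-C++ single-linkage agglomeration that records one `{left, right, distance, size}` row per merge; all names here are illustrative, not the authors’ Rcpp API:

```cpp
#include <algorithm>
#include <limits>
#include <vector>

// One row of the cluster matrix: the two merged branch ids, their
// distance, and the number of securities in the resulting cluster.
struct MergeRow { int left, right; double dist; int size; };

// Repeatedly merge the closest pair using the nearest-point rule
// D(u, k) = min(D(i, k), D(j, k)).  Securities are ids 0..n-1;
// generalizing branches get ids n, n+1, ... in merge order.
std::vector<MergeRow>
cluster_securities(std::vector<std::vector<double>> D) {
    const int n = static_cast<int>(D.size());
    std::vector<int> id(n), size(n, 1), alive(n);
    for (int i = 0; i < n; ++i) { id[i] = i; alive[i] = i; }
    std::vector<MergeRow> merges;
    int next_id = n;

    while (alive.size() > 1) {
        // find the closest pair among the still-active rows of D
        std::size_t bi = 0, bj = 1;
        double best = std::numeric_limits<double>::infinity();
        for (std::size_t a = 0; a < alive.size(); ++a)
            for (std::size_t b = a + 1; b < alive.size(); ++b)
                if (D[alive[a]][alive[b]] < best) {
                    best = D[alive[a]][alive[b]];
                    bi = a; bj = b;
                }
        const int i = alive[bi], j = alive[bj];
        merges.push_back({id[i], id[j], best, size[i] + size[j]});
        // fold j into i with the nearest-point rule, then retire j
        for (int k : alive)
            if (k != i && k != j)
                D[i][k] = D[k][i] = std::min(D[i][k], D[j][k]);
        id[i] = next_id++;
        size[i] += size[j];
        alive.erase(alive.begin() + bj);
    }
    return merges;
}
```

For \(n\) securities this produces \(n - 1\) merges, each introducing one new generalizing branch.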

Given the clusterization from the last step, we want to re-organize the covariance matrix so the indexing follows the clusters. To achieve this, we first “flatten the clusters” based on the \(N \times 4\) matrix generated in the last step. This step is implemented in the `clusterIndex` function with a recursive call on `flatCluster`; the result is a security index based on the clusters generated in the previous step.

In the `flatCluster` function, we start with the last cluster generated and trace back to its components based on the first two elements in the cluster matrix. If a component is a generalizing branch, the function calls itself recursively to trace back the components of that branch, until the index of a security is returned. The `task` construct from OpenMP is used to speed up the process by treating each “trace back” as a task, and a `taskwait` construct is used to ensure the security index is generated in a bottom-up fashion.
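A minimal sequential sketch of this trace-back (the OpenMP `task`/`taskwait` parallelism is omitted, names are illustrative, and the merge-row struct is defined here so the fragment stands alone):

```cpp
#include <vector>

// One row of the cluster matrix (redefined so this sketch is self-contained).
struct MergeRow { int left, right; double dist; int size; };

// Recursively expand a node: securities (id < n) are emitted directly,
// while generalizing branches (id >= n) are traced back through the
// cluster matrix, where branch n + k corresponds to merge row k.
void flat_cluster(const std::vector<MergeRow>& merges, int n, int node,
                  std::vector<int>& out) {
    if (node < n) { out.push_back(node); return; }
    const MergeRow& row = merges[node - n];
    flat_cluster(merges, n, row.left, out);
    flat_cluster(merges, n, row.right, out);
}

// Flatten the whole hierarchy, starting from the last merge (the root).
std::vector<int> cluster_index(const std::vector<MergeRow>& merges, int n) {
    std::vector<int> out;
    flat_cluster(merges, n, n + static_cast<int>(merges.size()) - 1, out);
    return out;
}
```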

With the cluster-based security index generated, the `quasiDiag` function re-arranges the covariance matrix based on the new index into a quasi-diagonal covariance matrix. In this way, “similar” securities are grouped together for the weight-allocation step. To re-iterate, the HRP approach first divides securities into clusters and then allocates weightings based on each cluster’s risk level.
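The re-ordering itself is just a symmetric permutation of the covariance matrix; an illustrative sketch:

```cpp
#include <vector>

// Permute the covariance matrix rows and columns with the cluster-based
// ordering so that correlated securities sit adjacent to the diagonal.
std::vector<std::vector<double>>
quasi_diag(const std::vector<std::vector<double>>& cov,
           const std::vector<int>& order) {
    const std::size_t n = order.size();
    std::vector<std::vector<double>> out(n, std::vector<double>(n, 0.0));
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < n; ++j)
            out[i][j] = cov[order[i]][order[j]];
    return out;
}
```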

With the re-organized quasi-diagonal covariance matrix and the asset index of clustered securities, we proceed into weight allocation.

As stated in the paper, inverse-variance allocation is optimal for a diagonal covariance matrix. This step takes advantage of that fact by

- defining the variance of a subset as its variance under inverse-variance allocation
- splitting the allocation between adjacent subsets in inverse proportion to their aggregated variances

We initialize the weighting of each security to 1: \(w_i = 1,\ i = 1,\dots,n\).

The allocation algorithm is as follows:

- bisect the portfolio into two sets, \(s_1\) and \(s_2\)
- let \(V_i\) be the covariance matrix for set \(s_i\)
- let \(W_i = diag\left(V_i\right)^{-1} * \frac{1}{tr\left(diag\left(V_i\right)^{-1}\right)}\)
- let \(V_{adj,i} = W_i^{'} * V_i * W_i\)
- let \(a_1 = 1 - \frac{V_{adj,1}}{V_{adj,1} + V_{adj,2}}, a_2 = 1-a_1\)
- adjust weightings for each set as \(w_{s_i} = w_{s_i} * a_i\)

The implementation is done in the `weightAllocation` function with recursive calls on `bisectWeightAllocation`. Similar to the cluster-flattening step, the `task` construct from OpenMP is used to speed up the process by treating each bisection step as a task. A `taskwait` construct is not required in this step, as the update of the weight vector is top-down and child tasks (further bisection steps) are not generated until the parent task (the current bisection step) has finished.

With Rcpp and OpenMP, the computation is fast enough to make backtesting practical. The benchmark below uses a return matrix of 30 securities with 2500 data points.

| replications | elapsed |
|---|---|
| 1000 | 7.924 |

**tags:**
openmp
recursion
modeling
