Dmitriy Selivanov — written Jan 24, 2016 — source
After Tomas Mikolov et al. released the word2vec tool, there was a boom of articles about word vector representations. One of the best of these is GloVe, which did a great job of explaining how such algorithms work. It also reformulates the word2vec optimization as a special kind of factorization of the word co-occurrence matrix.
This post is divided into two main parts:

1. An overview of the GloVe algorithm.
2. Implementation details.
The GloVe algorithm consists of the following steps:

1. Collect word co-occurrence statistics in the form of a word co-occurrence matrix \(X\), where each element \(X_{ij}\) counts how often word \(j\) appears in the context of word \(i\).
2. Define a weighted least-squares cost function over the non-zero entries of \(X\) (given explicitly below).
3. Minimize this cost with stochastic gradient descent, learning a vector and a bias for each word.
The main challenges I faced during implementation:

1. Efficient creation of the term co-occurrence matrix.
2. An efficient parallel SGD for minimizing the cost function.
There are a few main issues with the term co-occurrence matrix (tcm):

1. It must be sparse: a dense vocabulary × vocabulary matrix would not fit in memory for any realistic vocabulary.
2. It must support very fast lookups and inserts, since we update it for every word pair while scanning the corpus.
To meet the sparsity requirement we need to store the data in an associative array. unordered_map is a good candidate because of its \(O(1)\) lookup/insert complexity. I ended up with std::unordered_map< std::pair<uint32_t, uint32_t>, T > as the container for the sparse matrix in triplet form. The performance of unordered_map heavily depends on the underlying hash function. Fortunately, we can pack a pair<uint32_t, uint32_t> into a single uint64_t in a deterministic way without any collisions.
A hash function for std::pair<uint32_t, uint32_t> will then look roughly like this (a minimal sketch; the actual code may differ in details):
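```cpp
#include <cstdint>
#include <functional>
#include <utility>

// Specialize std::hash for pair<uint32_t, uint32_t>: pack both 32-bit
// keys into a single uint64_t, which is collision-free by construction.
namespace std {
template <>
struct hash<std::pair<uint32_t, uint32_t>> {
  size_t operator()(const std::pair<uint32_t, uint32_t> &k) const {
    uint64_t packed = (static_cast<uint64_t>(k.first) << 32) | k.second;
    return std::hash<uint64_t>()(packed);
  }
};
}
```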
For details, see this and this Stack Overflow question.
Also note that our co-occurrence matrix is symmetric, so internally we will store only the elements above the main diagonal.
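For illustration, a small hypothetical helper (relying on the hash specialization above) can canonicalize the indices so that only the upper triangle is ever touched:

```cpp
#include <cstdint>
#include <unordered_map>
#include <utility>

// Hypothetical insert helper: swap (i, j) so that i <= j always holds,
// which keeps all entries of the symmetric tcm in the upper triangle.
template <typename T>
void tcm_insert(std::unordered_map<std::pair<uint32_t, uint32_t>, T> &tcm,
                uint32_t i, uint32_t j, T weight) {
  if (i > j) std::swap(i, j);
  tcm[std::make_pair(i, j)] += weight;
}
```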
Now we need to implement the efficient parallel asynchronous stochastic gradient descent for word co-occurrence matrix factorization that is proposed in the GloVe paper. An interesting point: SGD is an inherently serial algorithm, but when your problem is sparse, you can perform asynchronous updates without any locks and achieve a speedup roughly proportional to the number of cores on your machine! If you haven't read HOGWILD!, I recommend doing so.
Let me recall the formulation of SGD. We move the parameters \(x_t\) in a minimizing direction, given by \(-g_t\), with a learning rate \(\alpha\):
\[x_{t+1} = x_t - \alpha g_t\]

So we have to calculate the gradients of our cost function \(J = \sum_{i=1}^V \sum_{j=1}^V f(X_{ij}) ( w_i^T w_j + b_i + b_j - \log X_{ij} )^2\):
\[\frac{\partial J}{\partial w_i} = f(X_{ij}) w_j ( w_i^T w_j + b_i + b_j - \log X_{ij})\]
\[\frac{\partial J}{\partial w_j} = f(X_{ij}) w_i ( w_i^T w_j + b_i + b_j - \log X_{ij})\]
\[\frac{\partial J}{\partial b_i} = f(X_{ij}) (w_i^T w_j + b_i + b_j - \log X_{ij})\]
\[\frac{\partial J}{\partial b_j} = f(X_{ij}) (w_i^T w_j + b_i + b_j - \log X_{ij})\]

(Here the constant factor of 2 from differentiating the square is dropped; it can be absorbed into the learning rate.) We will use a modification of SGD, the AdaGrad algorithm. It automatically determines a per-feature learning rate by tracking historical gradients, so that frequently occurring features get small learning rates and infrequent features get higher ones. For AdaGrad implementation details, see the excellent Notes on AdaGrad by Chris Dyer.
The formulation of AdaGrad for step \(t\) and feature \(i\) is the following:
\[x_{t+1, i} = x_{t, i} - \frac{\alpha}{\sqrt{\sum_{\tau=1}^{t-1} g_{\tau,i}^2}} g_{t,i}\]

As we can see, at each iteration \(t\) we need to keep track of the sum of squares of all historical gradients.
Actually, we will use a modification of AdaGrad: HOGWILD!-style asynchronous AdaGrad. :-) The main idea of the HOGWILD! algorithm is very simple: don't use any synchronization. If your problem is sparse, allow threads to overwrite each other's updates! This works, and works well. Again, see the HOGWILD! paper for details and the theoretical proof.
Now let's put it all into code.
As seen from the analysis above, the GloveFit class should consist of the following parameters:

1. word vectors w_i, w_j (for main and context words);
2. biases b_i, b_j;
3. squared gradients grad_sq_w_i, grad_sq_w_j for the adaptive word-vector learning rates;
4. squared gradients grad_sq_b_i, grad_sq_b_j for the adaptive bias learning rates;
5. learning_rate, max_cost and other scalar model parameters.

Now we should initialize the parameters and perform an iteration of SGD.
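Below is a minimal single-threaded sketch of one such update for a single non-zero entry \(X_{ij}\). The names mirror the GloveFit members listed above; the x_max parameter and the 0.75 weighting exponent come from the GloVe paper, the grad_sq_* accumulators are assumed to be initialized to 1.0, and the factor of 2 in the gradients is absorbed into the learning rate. The actual implementation differs in details:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// One AdaGrad step for a single non-zero co-occurrence entry X_ij.
double adagrad_step(std::vector<std::vector<double>> &w_i,
                    std::vector<std::vector<double>> &w_j,
                    std::vector<double> &b_i, std::vector<double> &b_j,
                    std::vector<std::vector<double>> &grad_sq_w_i,
                    std::vector<std::vector<double>> &grad_sq_w_j,
                    std::vector<double> &grad_sq_b_i,
                    std::vector<double> &grad_sq_b_j,
                    std::size_t i, std::size_t j, double x_ij,
                    double learning_rate, double x_max, double max_cost) {
  std::size_t dim = w_i[i].size();
  // weighting function f(X_ij) from the GloVe paper
  double weight = (x_ij < x_max) ? std::pow(x_ij / x_max, 0.75) : 1.0;
  // inner term: w_i^T w_j + b_i + b_j - log(X_ij)
  double inner = b_i[i] + b_j[j] - std::log(x_ij);
  for (std::size_t k = 0; k < dim; k++) inner += w_i[i][k] * w_j[j][k];
  // clip to stabilize updates on rare, extreme co-occurrence counts
  double f_inner = std::fmax(std::fmin(weight * inner, max_cost), -max_cost);
  for (std::size_t k = 0; k < dim; k++) {
    double g_i = f_inner * w_j[j][k];  // dJ/dw_i[k] (up to the factor 2)
    double g_j = f_inner * w_i[i][k];  // dJ/dw_j[k]
    w_i[i][k] -= learning_rate * g_i / std::sqrt(grad_sq_w_i[i][k]);
    w_j[j][k] -= learning_rate * g_j / std::sqrt(grad_sq_w_j[j][k]);
    grad_sq_w_i[i][k] += g_i * g_i;  // accumulate squared gradients
    grad_sq_w_j[j][k] += g_j * g_j;
  }
  b_i[i] -= learning_rate * f_inner / std::sqrt(grad_sq_b_i[i]);
  b_j[j] -= learning_rate * f_inner / std::sqrt(grad_sq_b_j[j]);
  grad_sq_b_i[i] += f_inner * f_inner;
  grad_sq_b_j[j] += f_inner * f_inner;
  return weight * inner * inner;  // this entry's contribution to J
}
```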
As discussed above, all these steps can be performed in a parallel loop over all non-zero word co-occurrence scores. This can easily be done via OpenMP parallel for and a reduction: #pragma omp parallel for reduction(+:global_cost). But there is one significant issue with this approach: it is very hard to make a portable R package with OpenMP support. By default it will work only on Linux distributions, because clang on OS X doesn't support OpenMP (of course you can install clang-omp or gcc with brew, but this can also be tricky). For more details, see the OpenMP support section of the Writing R Extensions manual.
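For reference, here is roughly how that OpenMP variant would look; Triplet and process_triplet are hypothetical stand-ins for a tcm entry and the AdaGrad step sketched above:

```cpp
#include <cstdint>
#include <vector>

struct Triplet { uint32_t i, j; double x; };  // one non-zero tcm entry
double process_triplet(const Triplet &t);     // one AdaGrad step (as above)

// Each thread processes a slice of the triplets; costs are combined by
// the reduction, and parameter updates happen lock-free, HOGWILD!-style.
double fit_epoch(const std::vector<Triplet> &tcm) {
  double global_cost = 0;
  #pragma omp parallel for reduction(+:global_cost)
  for (long n = 0; n < (long)tcm.size(); n++)
    global_cost += process_triplet(tcm[n]);
  return global_cost;
}
```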
Luckily, we have a better alternative: the Intel Threading Building Blocks library and the RcppParallel package, which provides the RVector and RMatrix wrapper classes for safe and convenient access to R data structures in a multi-threaded environment. Moreover, it just works on all the main platforms: OS X, Windows, and Linux.
Using TBB is a little trickier than writing simple OpenMP #pragma directives. You have to implement a functor which operates on a chunk of data, and call parallelReduce or parallelFor over the entire data collection. You can find useful (and simple) examples in the RcppParallel examples section.
For now, suppose we have a partial_fit method in the GloveFit class with the following signature (see the actual code here; the sketch below is only an approximation):
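```cpp
// An approximate sketch of the signature (a member of GloveFit): the
// x_irow, x_icol, x_val vectors hold the tcm triplets, and [begin, end)
// is the range of triplet indices this call should process.
double partial_fit(std::size_t begin,
                   std::size_t end,
                   const RcppParallel::RVector<int> &x_irow,
                   const RcppParallel::RVector<int> &x_icol,
                   const RcppParallel::RVector<double> &x_val);
```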
It takes <x_irow, x_icol, x_val> triplets together with begin and end pointers for the range on which we want to perform our SGD. It then performs SGD steps over this range: it updates word vectors, gradients, etc., and returns the value of the accumulated cost function. Note that internally this method modifies the input object.
Also note that the signature of partial_fit is very similar to what we have to implement in our TBB functor. Now we are ready to write it.
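Here is a sketch of such a functor, loosely modeled on the RcppParallel parallelReduce examples and assuming the GloveFit class above (the actual text2vec code differs in details):

```cpp
#include <Rcpp.h>
#include <RcppParallel.h>
using namespace RcppParallel;

// Sketch of the TBB functor; GloveFit is the class described above.
struct AdaGradIter : public Worker {
  RVector<int> x_irow, x_icol;  // tcm triplets, wrapped for thread safety
  RVector<double> x_val;
  GloveFit &fit;                // shared model, updated HOGWILD!-style
  double global_cost;           // cost accumulated by this worker

  AdaGradIter(Rcpp::IntegerVector x_irow, Rcpp::IntegerVector x_icol,
              Rcpp::NumericVector x_val, GloveFit &fit)
      : x_irow(x_irow), x_icol(x_icol), x_val(x_val),
        fit(fit), global_cost(0) {}

  // "split" constructor required by parallelReduce
  AdaGradIter(const AdaGradIter &other, Split)
      : x_irow(other.x_irow), x_icol(other.x_icol), x_val(other.x_val),
        fit(other.fit), global_cost(0) {}

  // process one chunk of triplets; note the side effect on the shared fit
  void operator()(std::size_t begin, std::size_t end) {
    global_cost += fit.partial_fit(begin, end, x_irow, x_icol, x_val);
  }

  // combine the costs of two halves after a split
  void join(const AdaGradIter &other) { global_cost += other.global_cost; }
};
```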
As you can see, it is very similar to the examples from the RcppParallel site. One difference: it has side effects. Calling partial_fit modifies the internal state of the input GloveFit instance (which actually contains our GloVe model).
Now let's write the GloveFitter class, which will be callable from R via Rcpp-modules. It will act as an interface for fitting our model and will take all the input model parameters, such as the vocabulary size, the desired word vector size, the initial AdaGrad learning rate, etc. Also, we want to track the cost between iterations and be able to apply an early-stopping strategy between SGD iterations. For that purpose we keep our model in a C++ class, so we can modify it “in place” at each SGD iteration (which can be problematic in R).
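A sketch of what GloveFitter might look like, assuming the GloveFit and AdaGradIter types above (constructor arguments and method names are illustrative):

```cpp
#include <Rcpp.h>
#include <RcppParallel.h>

// Illustrative interface; GloveFit is assumed to expose a matching
// constructor and a get_word_vectors() accessor.
class GloveFitter {
public:
  GloveFitter(size_t vocab_size, size_t word_vec_size, double learning_rate,
              double x_max, double max_cost)
      : gloveFit(vocab_size, word_vec_size, learning_rate, x_max, max_cost) {}

  // one parallel asynchronous AdaGrad pass over all non-zero triplets;
  // returns the accumulated cost so the R side can do early stopping
  double fit_chunk(Rcpp::IntegerVector x_irow, Rcpp::IntegerVector x_icol,
                   Rcpp::NumericVector x_val) {
    AdaGradIter adaGradIter(x_irow, x_icol, x_val, gloveFit);
    RcppParallel::parallelReduce(0, (std::size_t)x_val.size(), adaGradIter);
    return adaGradIter.global_cost;
  }

  Rcpp::NumericMatrix get_word_vectors() { return gloveFit.get_word_vectors(); }

private:
  GloveFit gloveFit;  // the model lives in C++ and is modified in place
};
```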
And create a wrapper with Rcpp-Modules.
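An illustrative module definition (the exact names may differ in the real code):

```cpp
// Expose GloveFitter to R; an object can then be created from R with
// new(GloveFitter, ...) once the module is loaded.
RCPP_MODULE(GloveModule) {
  Rcpp::class_<GloveFitter>("GloveFitter")
      .constructor<size_t, size_t, double, double, double>()
      .method("fit_chunk", &GloveFitter::fit_chunk,
              "one parallel SGD pass over the co-occurrence triplets")
      .method("get_word_vectors", &GloveFitter::get_word_vectors,
              "retrieve the learned word vectors");
}
```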
Now we can use the GloveFitter class from R.
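For example (a hypothetical usage sketch; argument order follows the illustrative constructor above, not the actual text2vec API):

```r
# Hypothetical usage; tcm is assumed to be the co-occurrence matrix in
# triplet form (e.g. a dgTMatrix with slots i, j, x).
fitter <- new(GloveFitter, 10000, 50, 0.05, 10, 10)
for (epoch in 1:10) {
  cost <- fitter$fit_chunk(tcm@i, tcm@j, tcm@x)
  cat(sprintf("epoch %d, cost %.4f\n", epoch, cost))
  # an early-stopping check on the cost could go here
}
word_vectors <- fitter$get_word_vectors()
```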
tags: parallel