A Simple And Efficient Method For Random Fourier Features
Anonymous authors Paper under double-blind review
Abstract
Random Fourier features and random projection involve matrix multiplication with a k × D random matrix, where D is the original dimensionality and k is the dimensionality in the projected space. Large values of k ∼ 10^5 required for high accuracies, together with large sample sizes n, lead to substantial computational demands. In this paper, we propose a simple and efficient method for random Fourier features and random projection. Our simple method is motivated by the fact that the order of features does not change distances or similarities between feature vectors as long as the same order is maintained for all feature vectors. The proposed method allows much reduced computation with improved complexity O(max{k, D}n), where n represents the sample size, compared to the complexity O(kDn) traditionally associated with random projection and random Fourier features. The proposed method is also simple to implement without the need for the platform-dependent libraries of the popularly used fast Walsh-Hadamard transform that Fastfood and much other previous work rely on. It is demonstrated in our experiments that the proposed method achieves significant speed improvements, i.e. a 10,000x speed-up over Random Kitchen Sinks and a 15x speed-up over Fastfood on real-world datasets when both D and k are large. As a general framework, no Gaussian assumption is made on the random entries of the projection matrix and, thus, it is a unified approach to efficient random projections and random Fourier features with any shift-invariant kernel. The bias, the variance and error bounds are given in our analysis. We show that our estimators for kernel approximations and random projection are unbiased with variance inversely proportional to k. Our code is made available at https://anonymous.
1 Introduction
Both random Fourier features and random projection are popular methods in classification and regression tasks Bingham & Mannila (2001); Ailon & Chazelle (2006); Anand et al. (2012); Paul et al. (2013); Zhang et al. (2014). Random projection is an efficient and distance-preserving technique, while random Fourier features allow non-linear feature mapping through randomization. Random Fourier features, which are closely related to random projection, became popular for good approximations to shift-invariant kernels and can be considered as non-linear random projection Rahimi & Recht (2008). In large-scale real-world problems, the original dimensionality D, the dimensionality in the projected space k, and the sample size n can be very large. With k ∼ 10^5 for high accuracies, D from 10^5 to 10^7 in Zhai et al. (2014) and n from 10^6 to 10^7 in Deng et al. (2009), the required matrix multiplication can be prohibitively expensive with the complexity O(kDn).
For random projections, we have n data points {u_i}_{i=1}^n ∈ R^D in a data matrix A ∈ R^{D×n} with D dimensions and a random matrix R ∈ R^{k×D} for projection. For the projected data points RA, each point {v_i}_{i=1}^n ∈ R^k is in k dimensions. The computational complexity of traditional random projection is O(kDn), which is computationally expensive for large-scale problems. It can be easily shown that, as in (Vempala, 2004) and (Li et al., 2006a), we have the expectation for the squared L2-norm of the projected vector v in terms of the original vector u before random projection:

$$E\big[\|\mathbf{v}\|^{2}\big]=E\Big[\big\|\tfrac{1}{\sqrt{k}}\mathbf{R}\mathbf{u}\big\|^{2}\Big]=\|\mathbf{u}\|^{2}.$$
Similarly, we get

$$E\big[\|\mathbf{v}_{1}-\mathbf{v}_{2}\|^{2}\big]=E\Big[\big\|\tfrac{1}{\sqrt{k}}\mathbf{R}(\mathbf{u}_{1}-\mathbf{u}_{2})\big\|^{2}\Big]=\|\mathbf{u}_{1}-\mathbf{u}_{2}\|^{2}.$$
In addition, ‖v_1‖²/(‖u_1‖²/k) and ‖v_1 − v_2‖²/(‖u_1 − u_2‖²/k) both follow the χ² distribution:
$$\frac{(\mathbf{v}_{1})_{i}-(\mathbf{v}_{2})_{i}}{\sqrt{\|\mathbf{u}_{1}-\mathbf{u}_{2}\|^{2}/k}}\sim\mathcal{N}(0,1).\tag{1}$$

Thus, when we take the sum over all i of (v_1)²_i, we can see that ∑_i (v_1)²_i follows the χ²-distribution:
And, similarly for ∑_i ((v_1)_i − (v_2)_i)²,
With one of the tightest bounds for the Johnson and Lindenstrauss (JL) lemma in (Achlioptas, 2003b), it is shown that
$$(1-\epsilon)\|\mathbf{u}_{1}-\mathbf{u}_{2}\|^{2}\leq\|\mathbf{v}_{1}-\mathbf{v}_{2}\|^{2}\leq(1+\epsilon)\|\mathbf{u}_{1}-\mathbf{u}_{2}\|^{2}.$$

In other words, with probability 1 − n^{−γ} given that
For kernel methods with dot products, it can also be shown that, as in (Li et al., 2006a) and (Li et al., 2006b),
1.1 Kernel Approximation
In this section, we describe how the dot products of vectors with random Fourier features can approximate kernels. For a properly scaled shift-invariant kernel K(δ), Bochner's theorem guarantees that its Fourier transform p(ω) is a probability density function (Rahimi & Recht, 2008). It can be shown that
$$=E_{p}[\langle(\cos(w^{T}x),\sin(w^{T}x)),(\cos(w^{T}y),\sin(w^{T}y))\rangle].$$
For x ∈ R^d, K(·) can be approximated with the inner product ⟨φ(x), φ(y)⟩. Thus,
or, alternatively,
as the dot product φ(x)^T φ(y) gives us the same value in the alternative form where w_1, . . . , w_k are drawn according to p(w), i.e.
1.2 Contributions
The computation of random Fourier features and random projections relies on random matrices of size k × D, with k the dimensionality after projection and D the original dimensionality. For good kernel approximations with random Fourier features, k has to be very large. As n is also very large with large-scale datasets, the computation can be very expensive. The complexity of Random Kitchen Sinks (RKS) Rahimi & Recht (2008) and that of a more recent state-of-the-art log-linear time method, Fastfood (Le et al., 2013), are respectively O(kDn) and O(k log(D)n). The proposed linear-time method in this paper is 1.) easy to implement, 2.) of complexity O(max{k, D}n) and 3.) an unbiased estimator with variance inversely proportional to k.
More specifically, as it is possible to arrange non-zero elements in a sparse random matrix such that the computational complexity is independent of k, the complexity becomes O(Dn) for k ≤ D. By considering a random permutation of the order of the features and feature normalization, the number of non-zero elements in the random matrix is reduced to D. The proposed method speeds up traditional random projection and Random Kitchen Sinks from O(kDn) to O(max{k, D}n) because the complexity of the proposed method is O(kn) when the dimensionality after projection k_multi is larger than D. The computational complexities of Random Kitchen Sinks (Rahimi & Recht, 2008) and Fastfood (Le et al., 2013), two state-of-the-art efficient methods for kernel approximations, are O(kDn) and O(k log(D)n) respectively. Fastfood is a log-linear time algorithm with O(k log(D)n). Comparatively, there is a bigger computational advantage with the complexity of our linear-time method when both D and k are large, as shown in Table 3, i.e. there is a 15x speed-up with our method over Fastfood on real-world datasets when both D and k are large.
Our method is motivated by the fact that the order of features does not change distances or similarities between feature vectors as long as the same order is maintained for all feature vectors. Instead of generating evenly spread random non-zero entries as in previous methods like Fastfood, which uses the Walsh-Hadamard Transform (WHT), a random order of features in the data matrix is chosen before projection in our method. The proposed method is easy to implement without the need for libraries for sparse matrix computation or the fast WHT, whose speed depends on its implementation and the software and hardware architecture. Moreover, as no Gaussian assumption has been made on the random entries of the projection matrix, the proposed method is a unified approach to efficient random projections and random Fourier features with any shift-invariant kernel.
2 Related Work

2.1 Previous Approaches To Fast Random Projection And Fast Random Fourier Features
Speeding up computation with a sparse random projection matrix with one-third non-zero entries was proposed by Achlioptas Achlioptas (2003a). However, with a sparse projection matrix, some features in feature vectors can be totally ignored in the computation. Another idea is to spread the sparse entries more evenly. To do this, one can use the Fast Johnson-Lindenstrauss Transform (FJLT) Φ = PHD, where P is the sparse projection matrix and D is a diagonal matrix with D_{i,i} ∈ {+1, −1}. Each entry D_{i,i} is an i.i.d. random variable and H is the D × D Walsh-Hadamard Transform matrix. The complexity to multiply A by the "mixing matrix" preconditioner HD with the fast Walsh-Hadamard transform is O(D log D). It can be shown that HD is L2-norm preserving, which makes it a reasonable mixing matrix. In a more efficient method called the Improved Subsampled Randomized Hadamard Transform (SRHT) in Boutsidis & Gittens (2012), a subsampling matrix S is considered instead of P, i.e. Φ = SHD with complexity O(D log k), original dimensionality D and reduced dimensionality k where k < D. For random Fourier features, Fastfood Le et al. (2013) computes random Fourier features by extending previous work with SRHT, sparse JLT and FJLT. Our method can theoretically achieve O(nnz(A)) with a sparse data matrix and O(Dn) with a dense data matrix.
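For concreteness, the following is a minimal NumPy/SciPy sketch of the SHD construction described above, shown only as an illustration of this baseline and not as part of the proposed method. It assumes D is a power of two, as required by the Hadamard transform, and applies H with a dense matrix rather than a fast O(D log D) transform; the function name is our own.

```python
import numpy as np
from scipy.linalg import hadamard

def srht(A, k, rng=np.random.default_rng(0)):
    """Illustrative SRHT sketch: Phi A = S H D A for a (D, n) data matrix A.

    D must be a power of two here; a fast Walsh-Hadamard transform would
    replace the dense H @ ... product to reach O(n D log D).
    """
    D = A.shape[0]
    H = hadamard(D) / np.sqrt(D)                  # orthogonal Walsh-Hadamard matrix
    signs = rng.choice([-1.0, 1.0], size=D)       # the diagonal matrix D with +/-1 entries
    mixed = H @ (signs[:, None] * A)              # the "mixing" preconditioner H D applied to A
    rows = rng.choice(D, size=k, replace=False)   # the subsampling matrix S picks k rows
    return np.sqrt(D / k) * mixed[rows]           # rescaled k x n sketch
```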
2.2 Platform-Specific Implementations Of Previous Methods
All implementations of the fast WHT and Fastfood that we have found for HD rely on the library SPIRAL¹ introduced in Püschel et al. (2004). In addition, the speed in practice very much depends on the implementation of the fast WHT and the computation with sparse matrices. For example, in MATLAB, the simplest way to implement the fast WHT or Fastfood is to treat the transform as matrix computation with dense matrices, which does not take advantage of efficient computation on entries with zeros. This easy-to-implement approach, however, is not very efficient, with complexity O(D²) to compute HDA, and it requires Ω(D²) memory. Fortunately, with the fast WHT formulated as an FFT, there are efficient methods for the transform that take only O(D log D) instead of direct multiplication with dense matrices.
Although MATLAB comes with a native implementation of the fast WHT, it has been shown empirically in many previous studies that the time required is in reality longer than direct multiplication with the Hadamard matrix. That means there is no speed-up with the fast WHT in MATLAB, with the WHT as the bottleneck in the overall computation. This is the reason why many implementations, if not all, rely on SPIRAL to speed up the WHT. SPIRAL, written in C as a signal processing package, provides an efficient implementation of the WHT that takes advantage of specific machine architectures. In many implementations of the WHT, SRHT or Fastfood, SPIRAL with mex in MATLAB is used for the fast WHT and for fast multiplication with sparse P or S. However, efficient multiplications for sparse matrices are platform-dependent Kunchum et al. (2017), Dalton et al. (2015), Liu & Vinter (2014) and Yang et al. (2011).
2.3 Other Approaches
Although random projection is computationally more efficient than many other dimensionality reduction methods such as principal component analysis (PCA), it is still computationally expensive for very large-scale problems. Methods with sparse random matrices have been proposed to speed up traditional random projection. The method in (Achlioptas, 2003b) with sparse random projection can achieve about a three-fold speed-up compared to vanilla random projection with a small loss of accuracy, and (Li et al., 2006a) obtains a more efficient √D-fold speed-up, where D is the dimensionality of the input space.
More recently, a closely related technique called random Fourier features for speeding up kernel methods has attracted a lot of attention. Although the performance of non-linear kernel methods is almost always better than that of linear kernel methods, non-linear kernel methods on large-scale problems are known to be prohibitively expensive as they do not scale well with the sample sizes of the training sets. Approximations for non-linear kernel methods aim to reduce time complexity so that large-scale non-linear kernel methods become practical. There are two popular methods for these approximations: 1.) the Nystrom approximation method for Gram matrices (Williams & Seeger, 2001) can be used to speed up general non-linear methods to O(nD), where D is the dimensionality of the input space and n is the number of training examples (Drineas & Mahoney, 2005; Li et al., 2015; Jin et al., 2011). 2.) alternatively, a method called random Fourier features (Rahimi & Recht, 2008) was proposed to approximate non-linear kernels. In this method, the original high-dimensional data is projected to another feature space, as in random projection. Experiments show that random Fourier features can perform very well with non-linear kernel methods in large-scale classification and regression tasks. Random Fourier features can be used to speed up non-linear kernel methods, but the generation of random Fourier features itself can also be made more efficient with a recent method called Fastfood (Le et al., 2013). Experiments for Fastfood show that the classification performance of the Nystrom method, the original random Fourier features and Fastfood is close, while Fastfood is faster than the other two methods.
¹https://github.com/jeffeverett/spiral-wht

Although the computational efficiency of recent methods for both RP and kernel approximations has been improved, they are still prohibitively expensive when the projected feature space is very large. This is the case especially for random Fourier features. In this paper, an efficient method is proposed for random projections and random Fourier features with the computational complexity independent of k.
3 Our Method
In this work, we take sparsity to the extreme, leaving only D non-zero elements in the k × D projection matrix with sparsity s = 1/k, the fraction of non-zero random numbers generated in the projection matrix. Using normalization and the shuffling of the order of features, we found that random projections and the computation of random Fourier features can be made very efficient. Theoretical analysis is provided for the error, with encouraging experimental support.
For the random matrix R of random projection, we create a deterministically sparse matrix S ∈ R^{k×D} with D(k − 1) zeros, i.e. only D Gaussian random numbers need to be generated, and use v_i = S u_i instead of v_i = (1/√k) R u_i from standard random projections. We define

$$\mathbf{S}=\begin{bmatrix}\mathbf{r}_{1}^{T}&&&\\&\mathbf{r}_{2}^{T}&&\\&&\ddots&\\&&&\mathbf{r}_{k}^{T}\end{bmatrix}\tag{5}$$

with each row vector {r_i}_{i=1}^k ∈ R^{D/k}.
3.1 The Algorithms
An equivalent formulation to find v_i is to calculate the diagonal elements of the matrix CU_i, where the vector u_i is reshaped as U_i ∈ R^{(D/k)×k} and C = [r_1; r_2; . . . ; r_k] ∈ R^{k×(D/k)}. Now, for each data point i with U_i, we calculate the diagonal elements diag(CU_i), which give the k features in the subspace after random projection, i.e. ∑_l (r_m)_l × (U_i)_{l,m} gives element m of the diagonal.
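As a concrete illustration, here is a minimal NumPy sketch of this diag(CU_i) computation for a single point; the function name and the exact reshape order of u_i into U_i are our own assumptions for the sketch, not taken from the paper's code.

```python
import numpy as np

def fast_rp_single(u, C):
    """Project one point u (length D) with C of shape (k, D/k) via diag(C U).

    u is reshaped into U of shape (D/k, k); element m of the output is
    sum_l (r_m)_l * (U)_{l,m}, so the k outputs cost O(D) in total.
    """
    k, block = C.shape                    # block = D / k
    U = u.reshape(block, k)               # assumed reshape order for the illustration
    return np.einsum("ij,ji->i", C, U)    # diag(C U) without forming the full product
```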
In this section, there are two algorithms. As described in Sub-section 3.2, we first pre-process the data by randomly shuffling the order of the features in each feature vector and normalizing the feature vectors.
Our method to speed up the computation of random projections is given in Algorithm 1, and that for random Fourier features with the Gaussian kernel in Algorithm 2, with each element of C = C_G generated from the standard normal distribution. For other kernels, other distributions are required for a general C, as described in Sub-section 4.1.
Algorithm 1 Fast Random Projection to compute diag(CU) with C = C_G
Input: k and all data points {u_i}_{i=1}^n ∈ R^D
Output: {v_i}_{i=1}^n ∈ R^k
for i := 1 to n do
    v_i := diag(C U_i)
end for

Notice that, with Algorithm 2, the number of random Fourier features generated cannot be more than the original dimensionality, i.e. k ≤ D. For a larger number of random Fourier features than D, Algorithm 2 is invoked multiple times, i.e. N_multi times, to obtain the projected vector in the desired dimensionality after projection k_multi = kN_multi. The sparsity as described previously in Section 3 is s = 1/k. With Algorithm 2 invoked multiple times, the sparsity is still s_multi = 1/k, not s_multi = 1/k_multi.
Algorithm 2 Fast Fourier Features for Kernel Approximations
Input: k and all data points {u_i}_{i=1}^n ∈ R^D
Output: {v_i}_{i=1}^n
for i := 1 to n do
    v_i := diag(σ C_G U_i) for the Gaussian kernel, or v_i := diag(C U_i) for other kernels
    v_i := √k v_i
    v_i := [cos(v_i); sin(v_i)]
    v_i := (1/√k) v_i
end for
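Below is a minimal NumPy sketch of Algorithm 2 for the Gaussian kernel, again under the assumptions that D is a multiple of k and that a single bandwidth parameter sigma scales C_G; the reshape order and the function name are illustrative.

```python
import numpy as np

def fast_fourier_features(X, k, sigma=1.0, rng=np.random.default_rng(0)):
    """Approximate Gaussian-kernel random Fourier features for the rows of X (n x D).

    Only D Gaussian numbers are drawn (C_G is k x (D/k)), so one pass over the
    data costs O(D n) rather than the O(k D n) of a dense k x D projection.
    """
    n, D = X.shape
    assert D % k == 0, "this sketch assumes D is a multiple of k"
    block = D // k
    C_G = rng.standard_normal((k, block))          # the only random numbers needed
    U = X.reshape(n, block, k)                     # each row reshaped to (D/k, k)
    V = sigma * np.einsum("mj,njm->nm", C_G, U)    # diag(sigma C_G U_i) for every i
    V = np.sqrt(k) * V
    Z = np.concatenate([np.cos(V), np.sin(V)], axis=1)
    return Z / np.sqrt(k)                          # dot products of rows approximate the kernel
```

For k_multi > D, the same routine would simply be invoked N_multi times with fresh C_G matrices and the outputs concatenated, as noted above.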
3.2 Random Permutation Of Features And Normalization
Motivated by the fact that the chi-squared random variable can be asymptotically approximated by the normal random variable, we consider random permutations of features and feature normalization to speed up random Fourier features.
As shown in Lemma 4.6 in the next section, the bounds for ‖v_1 − v_2‖² depend on the data points {u_i}_{i=1}^n. When M/m = 1 with m = min{√(∑_l (U)²_{l,1}), √(∑_l (U)²_{l,2}), . . . , √(∑_l (U)²_{l,k})} and M = max{√(∑_l (U)²_{l,1}), √(∑_l (U)²_{l,2}), . . . , √(∑_l (U)²_{l,k})}, we have the tightest bounds, i.e. the inequality reduces back to the original JL lemma, but obviously there is no way that we can change the data. We use two techniques, feature shuffling and normalization, to obtain diag(CU) so that M/m is small. We will demonstrate that, with feature shuffling and feature normalization, M/m is not far from 1.
For data point i, we obtain a random permutation of features (α((u_i)_1), α((u_i)_2), . . . , α((u_i)_D)). If we permute the order of the features, ‖u_1 − u_2‖² gives us the same Euclidean distance regardless of the permutation.
However, the permutation brings M/m much closer to 1 for ‖diag(C(U_1 − U_2))‖² because there is less correlation among the features in {(U_i)_{l,m}}_{l=1}^{D/k} after shuffling.
It is very common to scale features for various methods to perform well. We normalize features using mean normalization, i.e. (u_i)_j := ((u_i)_j − ∑_i (u_i)_j / n) / (max_i (u_i)_j − min_i (u_i)_j) ∀ i, j.
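A minimal sketch of this pre-processing is given below, assuming one shared random permutation for all examples and per-feature mean normalization; the guard against constant features is our own addition.

```python
import numpy as np

def preprocess(X, rng=np.random.default_rng(0)):
    """Shuffle the feature order (same permutation alpha for every example) and mean-normalize.

    X has shape (n, D). Pairwise distances are unchanged by the shared permutation,
    while correlations between neighbouring features are broken up.
    """
    perm = rng.permutation(X.shape[1])        # one random feature order alpha for all vectors
    X = X[:, perm]
    span = X.max(axis=0) - X.min(axis=0)      # max_i (u_i)_j - min_i (u_i)_j
    span[span == 0] = 1.0                     # avoid division by zero for constant features
    return (X - X.mean(axis=0)) / span
```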
4 Results
The expectation of the approximate kernel in Sutherland & Schneider (2015) with feature vectors u1 and u2 is
where Δu = u_1 − u_2.
In our analysis, intuitively two cases can be considered. First, for fixed Δu and Gaussian ω, with the normal random variable X = ω^T Δu ∼ N(0, σ²_x), it can easily be found that the expectation is
which is the approximate Gaussian kernel using random Fourier features.
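As a quick sanity check of this identity, the following Monte Carlo snippet compares E_p[cos(ω^T Δu)] with the Gaussian kernel; it assumes a unit-bandwidth kernel K(Δu) = exp(−‖Δu‖²/2), whose spectral density p(ω) is the standard normal.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 20
delta_u = 0.3 * rng.standard_normal(d)                # a fixed displacement Delta u
omega = rng.standard_normal((200_000, d))             # omega drawn from p(omega) = N(0, I)
mc = np.cos(omega @ delta_u).mean()                   # Monte Carlo estimate of E_p[cos(omega^T Delta u)]
exact = np.exp(-0.5 * delta_u @ delta_u)              # the Gaussian kernel value K(Delta u)
print(f"Monte Carlo {mc:.4f} vs kernel {exact:.4f}")  # the two agree to roughly 1e-3
```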
With our method and Δ_i = (U_1)_i − (U_2)_i,
$$E_{\omega,\Delta}\phi(\mathbf{u}_{1})^{T}\phi(\mathbf{u}_{2})=E_{\omega,\Delta}\Big[\frac{1}{k/2}\sum_{i=1}^{k/2}\cos(\omega_{i}^{T}\Delta_{i})\Big]=\frac{1}{k/2}\sum_{i=1}^{k/2}E_{\omega_{i},\Delta_{i}}[\cos(\omega_{i}^{T}\Delta_{i})]\tag{7}$$
For the second case, with fixed ω and the pre-processing techniques used in Section 3.2, E_{Δ_i}‖√k Δ_i‖₂² = ‖Δu‖₂², i.e. ‖Δ_i‖₂² is asymptotically normal. We therefore consider the approximation
and let ‖Δ‖₂² = ‖√k Δ_i‖₂². For each i ∈ [1, k/2], ‖Δ_i‖₂² is asymptotically normal due to the central limit theorem and also to the techniques used in Section 3.2, which make each chi-square normalized and independent. The only assumption here is normality, with our justifications given in Section 4.2. Then E_{‖Δ‖₂² ∼ N(µ_{Δ²}, σ²_{Δ²})}[cos(ω^T Δ)] = E[K(‖Δ‖₂²)], where K(‖Δ‖₂²) becomes exp(−γX) with the Gaussian kernel, for example.
For the rest of the analysis, we formally bound errors with both ω and Δ as random variables, using the total expectation and the total variance. We have E‖ω_i^T(√k Δ_i)‖₂² = ‖Δu‖₂² because √k Δ_i / √(‖Δu‖₂²) ∼ N(0, 1).
We found, in the analysis, that the expectation of the approximate kernel with our new method using Equation 12 is
where the average ∑_{i=1}^{k/2} K(Δ_i)/(k/2) has expectation E_Δ[K(Δ)] by definition.
Theorem 4.1. With ‖Δ‖₂², ‖√k Δ_i‖₂² ∼ N(µ_{Δ²}, σ²_{Δ²}) following the normal distribution for any i, the expectation and the variance for random Fourier features with our method are
$$E_{\omega,\Delta}\bigg[\frac{1}{k/2}\sum_{i=1}^{k/2}\cos(\omega_{i}^{T}(\sqrt{k}\Delta_{i}))\bigg]=E_{\Delta}[K(\Delta)]$$
$$Var_{\omega,\Delta}\bigg[\frac{1}{k/2}\sum_{i=1}^{k/2}\cos(\omega_{i}^{T}(\sqrt{k}\Delta_{i}))\bigg]=\frac{1}{k/2}\bigg(\frac{1}{2}+\frac{1}{2}E_{\Delta}[K(2\Delta)]+E_{\Delta}[K(\Delta)]^{2}\bigg)$$
where the density function p(ω_i) is the Fourier transform of the kernel K(δ).
Proposition 4.2. The unbiased estimator for the Gaussian kernel approximation is

$$\phi^{T}(x)\phi(y)\big[\exp(-\gamma\mu_{\Delta^{2}})/\exp(-\gamma\mu_{\Delta^{2}}+(\gamma\sigma_{\Delta^{2}})^{2}/2)\big]$$
with
if µ_{Δ²}/σ_{Δ²} ≫ 1.
Proposition 4.3. For the Gaussian kernel K(Δ) = exp(−c‖Δ‖₂²) and the exponential kernel, using Theorem 4.1 with ‖Δ‖₂², ‖√k Δ_i‖₂² ∼ N(µ_{Δ²}, σ²_{Δ²}) following the normal distribution for any i and with the density function p(ω_i) being the Fourier transform of the kernel K(δ),
where both the expectation and the variance are functions of µ_{cΔ²} and σ²_{cΔ²}.
Proposition 4.4. For the spherical kernel,
if ‖Δ‖ < θ, and 0 otherwise. With ‖Δ‖₂², ‖√k Δ_i‖₂² ∼ N(µ_{Δ²}, σ²_{Δ²}) following the normal distribution for any i, the expectation and the variance for the kernel are respectively
where the density function p(ω_i) is the Fourier transform of the kernel K(δ).

Lemma 4.5. With C ∈ R^{k×(D/k)}, a random matrix with k × (D/k) = D elements, each element following the normal distribution N(0, 1), where D is the original dimensionality and k is the dimensionality after projection, the expectation of ‖v‖² is E{∑_i^k [diag(CU)]²_i} = ‖u‖², and, for each element of v,

$$\frac{(\mathbf{v})_{j}}{\sqrt{\sum_{l}(\mathbf{U})_{l,j}^{2}}}\sim\mathcal{N}(0,1).$$
Note that E{∑_i^k [diag(CU)]²_i} = ‖u‖², while we have E(‖(1/√k)Ru‖²) = ‖u‖² for traditional random projection.
However, now ‖v‖₂² follows the generalized chi-squared distribution with non-unit variances.
Lemma 4.6. With probability 1 − 2e^{−(ε²−ε³)k/4},

$$(1-\epsilon)(m/M)\|\mathbf{u}_{1}-\mathbf{u}_{2}\|^{2}\leq\|\mathbf{v}_{1}-\mathbf{v}_{2}\|^{2}\leq(1+\epsilon)(M/m)\|\mathbf{u}_{1}-\mathbf{u}_{2}\|^{2}$$

where ‖u_1‖² = ∑_l ∑_m (U_1)²_{l,m} and m, M are the minimum and maximum of {√(∑_l (U)²_{l,1}), . . . , √(∑_l (U)²_{l,k})} as in Section 3.2.
Theorem 4.7. The expectation and the variance for fast random projection with our method are

$$E_{\omega,\Delta}\Big[\sum_{i}(\omega_{i}^{T}\Delta_{i})^{2}\Big]=\mu_{\Delta^{2}},\quad\text{and}\quad Var_{\omega,\Delta}\Big[\sum_{i}(\omega_{i}^{T}\Delta_{i})^{2}\Big]=(2\mu_{\Delta^{2}}+\sigma_{\Delta^{2}}^{2})/k$$

where ‖Δ_i‖₂² ∼ N(µ_{Δ²}, σ²_{Δ²}) and ω_i ∼ N(0, 1).
4.1 Other Kernels
w^T x is exactly what we compute for random projections. Thus, we can use the same method to compute w^T x with diag(CU), because all elements in [w_1; w_2; . . . ; w_D] follow the normal distribution if we use the Gaussian kernel. Otherwise, other distributions can be used to generate w for other kernels.
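As a sketch of what this means in practice, the snippet below draws the entries of C from the spectral density matching the kernel; the Gaussian/Laplacian pairings are the standard ones from Rahimi & Recht (2008), and the bandwidth handling is an assumption for illustration.

```python
import numpy as np

def sample_C(kernel, k, block, sigma=1.0, rng=np.random.default_rng(0)):
    """Draw the (k, D/k) matrix C from the Fourier transform of the chosen kernel."""
    if kernel == "gaussian":        # K(d) = exp(-||d||^2 / (2 sigma^2))  ->  Gaussian omega
        return rng.standard_normal((k, block)) / sigma
    if kernel == "laplacian":       # K(d) = exp(-||d||_1 / sigma)        ->  Cauchy omega
        return rng.standard_cauchy((k, block)) / sigma
    raise ValueError("kernel not covered in this sketch")
```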
4.2 The Central Limit Theorem For Weakly Dependent Random Variables
In this sub-section, we first study the effect of the shuffling operation on reducing the correlations between features, the actual speed-ups and the approximation quality using the three datasets which are used for all
(a) AR Face Dataset (b) Natural Dataset (c) Gene Dataset
Figure 1: Comparison of feature correlation matrices before and after shuffling. Darker colors denote higher correlations. In each sub-figure, the matrix on the left is obtained before shuffling. Shuffling can significantly reduce the feature correlations.
Table 1: Results of the Shapiro-Wilk test on three datasets. This test verifies whether the values of Δ_i are normally distributed or not.
| Shuff. | Norm. | k_multi | AR Dataset W | AR Dataset p-value | Natural Dataset W | Natural Dataset p-value | Gene Dataset W | Gene Dataset p-value |
|---|---|---|---|---|---|---|---|---|
| - | - | 200 | 0.91 | 1.3E-07 | 0.98 | 0.003 | 0.98 | 0.003 |
| - | - | 1000 | 0.87 | 2.2E-16 | 0.99 | 1.93E-06 | 0.95 | 2.2E-16 |
| ✓ | - | 200 | 0.99 | 0.552 | 0.99 | 0.571 | 0.98 | 0.010 |
| ✓ | - | 1000 | 0.99 | 0.590 | 0.99 | 0.800 | 0.95 | 2.2E-16 |
| ✓ | ✓ | 200 | 0.99 | 0.582 | 0.99 | 0.820 | 0.99 | 0.783 |
| ✓ | ✓ | 1000 | 0.99 | 0.544 | 0.99 | 5.12E-01 | 0.99 | 1.89E-06 |
| - | ✓ | 200 | 0.96 | 1.09E-05 | 0.96 | 0.008 | 0.98 | 0.037 |
| - | ✓ | 1000 | 0.93 | 2.2E-16 | 0.99 | 1.93E-06 | 0.98 | 2.82E-11 |
other experiments as well. Moreover, we investigate whether the values of Δ_i are normally distributed or close to normality Fleermann & Kirsch (2022), Ermakov & Ostrovskii (1986), Serfling (1968). The comparison of feature correlations is shown in Figure 1. For all datasets, we randomly pick two examples and evaluate their feature correlation matrices before and after shuffling. To visualize correlations, the correlation matrices with the first 50 features of the examples are shown. Darker colors denote higher correlations. It can be observed that the shuffling operation can significantly reduce correlations between features, with the feature correlation matrices on the right in Sub-figures 1(a) and 1(b) much lighter than those on the left.
In Sub-figure 1(c), both matrices are light since the feature correlations for gene expression are relatively low.
The Shapiro-Wilk test is used to examine whether the values of Δ_i are normally distributed or not. The results from the Shapiro-Wilk test are shown in Table 1 with the dimensionality of projected features k_multi (see Section 3.1). On the AR dataset and the natural-image dataset with shuffling, the test suggests that Δ_i is normally distributed. As the dimensionality of the gene data is only 17,000, when k_multi is equal to 1,000 there are only 17 elements for the calculation of Δ_i. Hence, in this experiment, we set the values of k_multi to 200 and 1,000. In Table 1, with shuffling and normalization, the values of Δ_i on all three datasets are normally distributed when k_multi = 200, i.e. the p-values are higher than 0.05 and the W values are close to 1.
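A minimal sketch of this normality check is shown below; it assumes the values entering the test are collected into a one-dimensional array and uses scipy.stats.shapiro, which returns the W statistic and p-value reported in Table 1.

```python
import numpy as np
from scipy.stats import shapiro

def normality_check(values):
    """Shapiro-Wilk test on the collected values of Delta_i from one pair of examples."""
    W, p = shapiro(np.asarray(values))
    return W, p   # W close to 1 and p > 0.05: normality is not rejected
```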
5 Experiments
In this section, we first demonstrate the speed improvements of the proposed kernel approximation and random projection method. In addition, we conduct a comparative analysis against state-of-the-art techniques to highlight the fact that our method not only speeds up traditional approaches but also preserves comparable approximation quality by assessing the quality of our method with classification and regression tasks. The empirical evidence supports that our approach to kernel approximation allows the linear SVM to reach classification and regression performance on par with that of the non-linear SVM using the radial basis function (RBF) kernel. We implement our method and RKS. They are trained with the same protocol.
For Fastfood, we use the code provided on the scikit-learn-extra website².
²https://scikit-learn-extra.readthedocs.io/en/stable/index.html.
5.1 The Actual Speed-Up
The real-world time efficiency of the proposed method is evaluated on synthesized datasets and public datasets³ including the AR face image dataset, a natural image dataset, and a gene dataset. The AR face dataset Martinez & Benavente (1998) contains 3,276 images of 126 people, and the resolution of the images is 576×768. Following Le et al. (2013); Li et al. (2006a), each image with all pixel values is flattened into a vector. For a grey image of the AR dataset, the dimensionality of its vector is 442,368. Face images in the AR dataset are different from general images because there is always a completely white background in the image. Therefore, a popular natural image dataset Weber (2018) is also used for our evaluation. There are images in three different resolutions in this dataset. To compare fairly with the results on face images, only images in the 512×512 resolution are chosen in our experiments with this dataset. The images are first converted into gray-scale images, meaning that the vectors obtained for the images are 262,144-dimensional.
Finally, a biomedical dataset with genes for breast cancer called TCGA (BC-TCGA) Xie et al. (2016) is used to evaluate our method using gene expression bio-sequences. This dataset contains 590 examples with 17,814 genes. All the 590 examples are used in the experiments.
The proposed method is assessed with both random projection and kernel approximation in terms of computational efficiency. The runtime improvement of our method relative to vanilla random projection is shown in Table 2. We make synthesized datasets with various dimensionalities to evaluate the speed improvement of the proposed method. Here, the reduced dimensionality kmulti is equal to k which is set from 1,000 to 5,000.
When k = 1, 000, the real-world runtime of our method is 0.31, 0.05, and 0.079 seconds on the three public datasets, while it is 0.023, 0.031, and 0.1 seconds on the synthesized dataset. As the value of kmulti = k increases, the running time of vanilla random projection increases quickly, because its complexity is O(kDn).
The value of k_multi = k is not an important factor affecting the running time of our method with O(Dn). Hence, the proposed method is faster than vanilla random projection, and the actual speed-up of our method is up to 1226 times.
Table 2: Speed-up of the proposed method for random projection. Our method speeds up traditional random projection from O(kDn) to O(Dn) when D is larger than the dimensionality after projection k_multi = k. The actual speed-up of our method is up to 1226 times, with D the dimensionality of the input data. k is a hyper-parameter of Algorithm 1.
| Datasets | D | k = 1,000 | k = 2,000 | k = 3,000 | k = 4,000 | k = 5,000 |
|---|---|---|---|---|---|---|
| Synth. Dataset | D = 5,000 | 23.9x | 40.0x | 65.2x | 74.1x | 100.0x |
| Synth. Dataset | D = 10,000 | 32.3x | 58.8x | 93.9x | 120.6x | 150.0x |
| Synth. Dataset | D = 100,000 | 106.0x | 193.6x | 316.0x | 405.0x | 462.7x |
| AR dataset | D = 440,000 | 142.9x | 265.9x | 389.1x | 559.7x | 672.0x |
| Nat. dataset | D = 260,000 | 264.0x | 408.3x | 768.0x | 941.6x | 1226.6x |
| Gene dataset | D = 17,500 | 67.1x | 123.3x | 203.7x | 241.6x | 331.0x |
For kernel approximation, the proposed method is compared with two other state-of-the-art kernel approximation approaches, RKS (Rahimi & Recht, 2008) and Fastfood (Le et al., 2013). We vary the parameter k_multi = k from 1,000 to 200,000 to assess performance disparities. Both our method and Fastfood outperform RKS in speed (see Table 3 detailing the speed-up factors of our method and Fastfood in comparison to RKS). Specifically, when k = 1,000, the real-world runtime of our method is respectively 4.6, 0.49, and 0.88 seconds on the three public datasets, while it is 0.062, 0.11, and 1.18 seconds on the synthesized dataset (with three different feature dimensionalities D). It is encouraging to see that, when k_multi = k = 200,000, the speed improvement of our method increases to up to 11,716 times compared to RKS, and it also achieves a 14.7 times speed advantage over Fastfood. These results show that our method is moderately more efficient than Fastfood and significantly more efficient than RKS.
Results in Table 2 and Table 3 demonstrate that our method can significantly improve real-world time efficiency for random projection and kernel approximation. Calculating diag(AB) can be very expensive in R or Matlab, for example, as the product matrix AB is computed first and only the diagonal elements are then taken.
³Available at http://www2.ece.ohio-state.edu/aleix/ARdatabase.html/, http://sipi.usc.edu/database/ and https://data.mendeley.com/datasets/ respectively.
Table 3: For kernel approximation, the proposed method and Fastfood are faster than RKS. The runtime improvements of the two approaches relative to RKS are listed.

| Datasets | D | Methods | k = 10³ | k = 5·10³ | k = 10⁴ | k = 5·10⁴ | k = 10⁵ | k = 2·10⁵ |
|---|---|---|---|---|---|---|---|---|
| Synthesized Dataset | D = 5,000 | Ours | 8.9x | 45x | 44.6x | 48.3x | 48.8x | 52.8x |
| | | Fastfood | 2.6x | 12x | 13.3x | 17.9x | 20.5x | 21.7x |
| | D = 10,000 | Ours | 9.5x | 50.6x | 95.5x | 109.3x | 109.4x | 109.4x |
| | | Fastfood | 2.4x | 10x | 21.8x | 32.6x | 41.3x | 40.3x |
| | D = 100,000 | Ours | 9.2x | 50.6x | 95.2x | 602x | 1300x | 1139x |
| | | Fastfood | 2.7x | 13.6x | 29.3x | 162.1x | 352.8x | 355.2x |
| AR Dataset | D = 440,000 | Ours | 13.4x | 72.1x | 153.2x | 767.8x | 1585x | 3361x |
| | | Fastfood | 2.6x | 12.6x | 28.9x | 141.9x | 267.7x | 554.7x |
| Nat. Dataset | D = 260,000 | Ours | 32.3x | 168.5x | 432.6x | 2800x | 4943x | 11716x |
| | | Fastfood | 3.2x | 16.4x | 32.7x | 175.7x | 345.6x | 794.5x |
| Gene Dataset | D = 17,500 | Ours | 6.5x | 31.3x | 65.1x | 63.5x | 66.5x | 66.1x |
| | | Fastfood | 0.6x | 3.0x | 6.1x | 15.7x | 16.1x | 16.0x |
In our implementation, it is much more efficient to compute diag(A·B) with sum(A.*B',2) in Matlab or R.
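The same trick is a one-liner in NumPy; a sketch (with A of shape k × m and B of shape m × k) is shown below.

```python
import numpy as np

def diag_of_product(A, B):
    """diag(A @ B) without forming A @ B: O(k m) work instead of O(k^2 m)."""
    return np.einsum("ij,ji->i", A, B)   # same idea as sum(A.*B', 2) in Matlab or R
```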
5.2 Approximation Quality
The approximation quality of the proposed method is very close to that of RKS, while our method can significantly improve the runtime (see Section 5.1).
In random projection, approximation quality is measured by how well the pairwise Euclidean distances among projected vectors approximate the corresponding distances between the original vectors. The averaged absolute difference between pairwise Euclidean distances before and after projection is used to quantify the approximation quality. These pairwise distances are computed over all example pairs, and the averaged absolute difference gives us the error. For a dataset with n examples, the average absolute error over all $\binom{n}{2}$ pairs is obtained with
$$Err_{rp}=\frac{1}{\binom{n}{2}}\sum_{i=1}^{n-1}\sum_{j=i+1}^{n}\left|\left\|\mathbf{u}_{i}-\mathbf{u}_{j}\right\|^{2}-\left\|\mathbf{v}_{i}-\mathbf{v}_{j}\right\|^{2}\right|,\tag{8}$$
where u_i, u_j are two data points and v_i, v_j are their projected points. In kernel approximation, approximation quality is quantified using the average absolute error between the approximate kernel values, obtained as dot products with the proposed method, and the original kernel values over all data point pairs.
For a dataset with n examples, the average absolute error over all $\binom{n}{2}$ pairs is given by

$$Err_{ka}=\frac{1}{\binom{n}{2}}\sum_{i=1}^{n-1}\sum_{j=i+1}^{n}\left|K(\mathbf{u}_{i},\mathbf{u}_{j})-\langle\mathbf{v}_{i},\mathbf{v}_{j}\rangle\right|,\tag{9}$$
where ⟨v_i, v_j⟩ = v_i · v_j.
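A minimal sketch of the two error measures of Equations 8 and 9 is given below; it assumes a Gaussian kernel exp(−γ‖u_i − u_j‖²) for the exact kernel values and loops over pairs directly for clarity rather than speed.

```python
import numpy as np
from itertools import combinations

def err_rp(U, V):
    """Equation 8: mean | ||u_i - u_j||^2 - ||v_i - v_j||^2 | over all pairs."""
    diffs = [abs(np.sum((U[i] - U[j]) ** 2) - np.sum((V[i] - V[j]) ** 2))
             for i, j in combinations(range(len(U)), 2)]
    return float(np.mean(diffs))

def err_ka(U, Z, gamma=1.0):
    """Equation 9: mean | K(u_i, u_j) - <z_i, z_j> | with a Gaussian kernel."""
    diffs = [abs(np.exp(-gamma * np.sum((U[i] - U[j]) ** 2)) - Z[i] @ Z[j])
             for i, j in combinations(range(len(U)), 2)]
    return float(np.mean(diffs))
```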
For random projection, the left sub-figure of Figure 2 shows approximation error (see Equation 8) against the dimensionality after projection. Comparing with the vanilla method (red), which shows the state-of-the-art approximation quality, the error (y-axis) from our method (green) is very close on all datasets. It indicates that the approximation quality of these two methods is indistinguishable.
For kernel approximation, the right sub-figure of Figure 2 shows the error calculated using Equation 9 against the dimensionality of Fourier features. On the AR dataset and the natural dataset, the error from the proposed method (blue) is on par with that of RKS Rahimi & Recht (2008) and Fastfood (Le et al., 2013). The approximation quality of our method is close to that of RKS. On the gene dataset, the errors of our method and Fastfood are higher than that of RKS. This is due to the relatively low dimensionality of the gene dataset. Table 1 shows that Δ_i is not normally distributed when k_multi = 1000. This affects
Figure 2: Comparison of approximation error in random projection and kernel approximation. The error is obtained with Equation 8 and Equation 9. The error from the proposed method on the y-axis is close to that from previous methods in all cases. The approximation quality of our method is close to that of previous methods, while the proposed method can speed them up significantly.

Table 4: Comparison with four SVM variants. Experimental results show that the classification accuracies and regression performance of the linear SVM with our method are very close to those of the SVM with the RBF kernel.
| Method (Reduced Dim. 1000 / 2000 / 3000) | ADULT (Accuracy) | CIFAR-10 (Accuracy) | CENSUS (RMSE) |
|---|---|---|---|
| Linear SVM (Ours) | 59.4% / 62.2% / 64.5% | 75.9% / 75.8% / 76.3% | 3.1% / 2.9% / 2.8% |
| Linear SVM (RKS) | 58.6% / 62.0% / 63.7% | 75.1% / 76.2% / 76.2% | 2.9% / 2.8% / 2.8% |
| Linear SVM (Fastfood) | 59.1% / 62.2% / 64.3% | 75.1% / 75.6% / 75.7% | 3.1% / 2.7% / 2.8% |
| SVM with RBF | 64.7% | 76.3% | 1.1% |
the approximation quality of our method. In Section 5.3, we further investigate the effect of these errors on SVMs for classification and regression.
5.3 Performance With The SVM
In this subsection, we further evaluate the actual performance of SVMs with the proposed method. It is found that our approach not only significantly accelerates traditional methods but also achieves approximation quality comparable to conventional methods. The primary objective of kernel approximation is to enhance the efficiency of kernel method computations without compromising quality. To this end, we compare the non-linear SVM with the RBF kernel against the linear SVM using three different kernel approximation techniques: the proposed method, RKS (Rahimi & Recht, 2008), and Fastfood (Le et al., 2013). We follow Rahimi & Recht (2008) to apply SVMs to classification and regression on the adult dataset, the census dataset, and the CIFAR-10 dataset⁴. Moreover, the datasets are pre-processed with the same techniques as in Rahimi & Recht (2008).
The classification accuracies and the root-mean-square errors (RMSE) are given for the different methods in Table 4. As shown in Table 4, the performance of the linear SVM with the proposed method is on par with that of the non-linear SVM with the RBF kernel. The approximation quality of our proposed method for classification is also found to be encouraging. Our method can significantly speed up the previous methods.
6 Conclusion
In this work, a simple and efficient approach to sparse random projection and the efficient computation of random Fourier features is proposed with complexity O(max{k, D}n). The novel method does not rely on specialized libraries for sparse matrix computation or the fast WHT and can be easily implemented. In addition, no Gaussian assumption has been made on the random entries of the projection matrix for random projections and random Fourier features with any shift-invariant kernel, and the bias, the variance and error bounds are provided. It is shown that the speed-up of our method is up to 10,000 times on real-world datasets compared to RKS and up to 15 times compared to Fastfood (Le et al., 2013).
⁴Available at https://archive.ics.uci.edu/ml/datasets/Adult and http://www.cs.toronto.edu/delve/data/census-house/desc.html
References
Dimitris Achlioptas. Database-friendly random projections: Johnson-Lindenstrauss with binary coins. Journal of Computer and System Sciences, 66(4):671-687, 2003a. ISSN 0022-0000. doi: https://doi.org/10.1016/S0022-0000(03)00025-4. URL http://www.sciencedirect.com/science/article/pii/S0022000003000254. Special Issue on PODS 2001.
Dimitris Achlioptas. Database-friendly random projections: Johnson-lindenstrauss with binary coins. Journal of computer and System Sciences, 66(4):671β687, 2003b.
Nir Ailon and Bernard Chazelle. Approximate nearest neighbors and the fast Johnson-Lindenstrauss transform. In Proceedings of the thirty-eighth annual ACM symposium on Theory of computing, pp. 557-563. ACM, 2006.
Anushka Anand, Leland Wilkinson, and Tuan Nhon Dang. Visual pattern discovery using random projections. In 2012 IEEE Conference on Visual Analytics Science and Technology (VAST), pp. 43β52. IEEE, 2012.
Ella Bingham and Heikki Mannila. Random projection in dimensionality reduction: applications to image and text data. In Proceedings of the seventh ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 245β250. ACM, 2001.
Christos Boutsidis and Alex Gittens. Improved matrix algorithms via the subsampled randomized hadamard transform. CoRR, abs/1204.0062, 2012. URL http://arxiv.org/abs/1204.0062.
Steven Dalton, Luke Olson, and Nathan Bell. Optimizing sparse matrix-matrix multiplication for the GPU. ACM Trans. Math. Softw., 41(4), October 2015. ISSN 0098-3500. doi: 10.1145/2699470. URL https://doi.org/10.1145/2699470.
J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. In CVPR09, 2009.
Petros Drineas and Michael W Mahoney. On the Nyström method for approximating a Gram matrix for improved kernel-based learning. Journal of Machine Learning Research, 6(Dec):2153-2175, 2005.
S. V. Ermakov and E. I. Ostrovskii. The central limit theorem for weakly dependent Banach-valued variables. Theory of Probability & Its Applications, 30(2):391-394, 1986. doi: 10.1137/1130045. URL https://doi.org/10.1137/1130045.
Michael Fleermann and Werner Kirsch. The central limit theorem for weakly dependent random variables by the moment method, 2022.
Rong Jin, Tianbao Yang, Mehrdad Mahdavi, Yu-Feng Li, and Zhi-Hua Zhou. Improved bound for the nystrom's method and its application to kernel classification. arXiv preprint arXiv:1111.2262, 2011.
Rakshith Kunchum, Ankur Chaudhry, Aravind Sukumaran-Rajam, Qingpeng Niu, Israt Nisa, and P. Sadayappan. On improving performance of sparse matrix-matrix multiplication on gpus. In Proceedings of the International Conference on Supercomputing, ICS '17, New York, NY, USA, 2017. Association for Computing Machinery. ISBN 9781450350204. doi: 10.1145/3079079.3079106. URL https://doi.org/10.1145/3079079.3079106.
Quoc Le, Tamás Sarlós, and Alex Smola. Fastfood: Approximating kernel expansions in loglinear time. In Proceedings of the 30th International Conference on International Conference on Machine Learning - Volume 28, ICML'13, pp. III-244-III-252. JMLR.org, 2013.
Mu Li, Wei Bi, James T Kwok, and Bao-Liang Lu. Large-scale Nyström kernel matrix approximation using randomized SVD. IEEE Transactions on Neural Networks and Learning Systems, 26(1):152-164, 2015.
Ping Li, Trevor J Hastie, and Kenneth W Church. Very sparse random projections. In Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 287β296. ACM, 2006a.
Ping Li, Trevor J Hastie, and Kenneth W Church. Improving random projections using marginal information. In International Conference on Computational Learning Theory, pp. 635-649. Springer, 2006b.
W. Liu and B. Vinter. An efficient gpu general sparse matrix-matrix multiplication for irregular data. In 2014 IEEE 28th International Parallel and Distributed Processing Symposium, pp. 370β381, 2014. doi: 10.1109/IPDPS.2014.47.
AM Martinez and R Benavente. The AR face database, Computer Vision Center, Barcelona. Technical Report 24, Spain, 1998.
Saurabh Paul, Christos Boutsidis, Malik Magdon-Ismail, and Petros Drineas. Random projections for support vector machines. In Artificial intelligence and statistics, pp. 498β506, 2013.
Markus Püschel, José M. F. Moura, Bryan Singer, Jianxin Xiong, Jeremy Johnson, David Padua, Manuela Veloso, and Robert W. Johnson. SPIRAL: A generator for platform-adapted libraries of signal processing algorithms. Int. J. High Perform. Comput. Appl., 18(1):21-45, February 2004. ISSN 1094-3420. doi: 10.1177/1094342004041291. URL https://doi.org/10.1177/1094342004041291.
Ali Rahimi and Benjamin Recht. Random features for large-scale kernel machines. In Advances in neural information processing systems, pp. 1177β1184, 2008.
R. J. Serfling. Contributions to central limit theory for dependent variables. The Annals of Mathematical Statistics, 39(4):1158-1175, 1968. doi: 10.1214/aoms/1177698240. URL https://doi.org/10.1214/aoms/1177698240.
Dougal J. Sutherland and Jeff Schneider. On the error of random Fourier features. In Proceedings of the Thirty-First Conference on Uncertainty in Artificial Intelligence, UAI'15, pp. 862-871, Arlington, Virginia, United States, 2015. AUAI Press. ISBN 978-0-9966431-0-8. URL http://dl.acm.org/citation.cfm?id=3020847.3020936.
Santosh Srinivas Vempala. The Random Projection Method, volume 65 of DIMACS Series in Discrete Mathematics and Theoretical Computer Science. DIMACS/AMS, 2004. ISBN 0-8218-3793-1. URL http://dimacs.rutgers.edu/Volumes/Vol65.html.
Allan G. Weber. The USC-SIPI image database: Version 6. In USC-SIPI Report, 2018. URL http://sipi.usc.edu/database/.
Christopher KI Williams and Matthias Seeger. Using the Nyström method to speed up kernel machines. In Advances in Neural Information Processing Systems, pp. 682-688, 2001.
Haozhe Xie, Jie Li, Qiaosheng Zhang, and Yadong Wang. Comparison among dimensionality reduction techniques based on random projection for cancer classification. Computational biology and chemistry, 65: 165β172, 2016.
Xintian Yang, Srinivasan Parthasarathy, and P. Sadayappan. Fast sparse matrix-vector multiplication on GPUs: Implications for graph mining. Proc. VLDB Endow., 4(4):231-242, January 2011. ISSN 2150-8097. doi: 10.14778/1938545.1938548. URL https://doi.org/10.14778/1938545.1938548.
Y. Zhai, Y. Ong, and I. W. Tsang. The emerging "big dimensionality". IEEE Computational Intelligence Magazine, 9(3):14β26, 2014.
Kaihua Zhang, Lei Zhang, and Ming-Hsuan Yang. Fast compressive tracking. IEEE transactions on pattern analysis and machine intelligence, 36(10):2002β2015, 2014.
A Appendix

A.1 Analysis For Fast Random Features With Our Method
Theorem 4.1. With ‖Δ‖₂², ‖√k Δ_i‖₂² ∼ N(µ_{Δ²}, σ²_{Δ²}) following the normal distribution for any i, the expectation and the variance for random Fourier features with our method are
$$E_{\omega,\Delta}\bigg[\frac{1}{k/2}\sum_{i=1}^{k/2}\cos(\omega_{i}^{T}(\sqrt{k}\Delta_{i}))\bigg]=E_{\Delta}[K(\Delta)]$$
$$Var_{\omega,\Delta}\bigg[\frac{1}{k/2}\sum_{i=1}^{k/2}\cos(\omega_{i}^{T}(\sqrt{k}\Delta_{i}))\bigg]=\frac{1}{k/2}\bigg(\frac{1}{2}+\frac{1}{2}E_{\Delta}[K(2\Delta)]+E_{\Delta}[K(\Delta)]^{2}\bigg)$$
where the density function p(ω_i) is the Fourier transform of the kernel K(δ).
Proof. One can obtain, for each pair of data points, µ̂ and σ̂ using cos(ω^T Δ) (Sutherland & Schneider (2015))
for any given Δ:

$$Var[\cos(\omega^{T}\Delta)]=\frac{1}{2}+\frac{1}{2}K(2\Delta)-K(\Delta)^{2}$$
For the total expectation or the unconditional expectation, E_Y[E_X[X|Y]] = E_X[X] for any two random variables X and Y. We have
$$E_{\omega,\Delta}[\cos(\omega^{T}\Delta)]=E_{\Delta}[E_{\omega}[\cos(\omega^{T}\Delta)|\Delta]]=E_{\Delta}[K(\Delta)]\tag{12}$$

For the total variance, with the law of total variance Var_Y(Y) = E_X(Var_Y(Y|X)) + Var_X(E_Y(Y|X)), it is the sum of the expected value of the conditional variance and the variance of the conditional means.
Using Equation 11 and Var(X) = E[X²] − (E[X])²,
As the expectation and the variance of the average are E[X̄] = µ_X and Var[X̄] = (1/n)σ²_X for random variables X_1, X_2, . . . , X_n, using Equation 12, we have
and, using Equation 13,
Proposition 4.2. The unbiased estimator for the Gaussian kernel approximation is φ^T(x)φ(y)[exp(−γµ_{Δ²})/exp(−γµ_{Δ²} + (γσ_{Δ²})²/2)]
with

Proof. The bias can be found with
from Theorem 4.1 and
exp(−γ‖Δ_i‖₂²), which follows the log-normal distribution, can be approximated by the normal distribution when µ_{Δ²}/σ_{Δ²} ≫ 1, and the summation ∑_i exp(−γ‖Δ_i‖₂²) makes the sum of log-normal random variables follow the normal distribution more closely due to the central limit theorem. As the expectation of the summation ∑_i exp(−γ‖Δ_i‖₂²) is just the expectation of the log-normal distribution, we have
The unbiased estimator for our kernel approximation becomes
φ^T(x)φ(y)[exp(−γµ_{Δ²})/exp(−γµ_{Δ²} + (γσ_{Δ²})²/2)].
Proposition 4.3. For the Gaussian kernel K(Δ) = exp(−c‖Δ‖₂²) and the exponential kernel, using Theorem 4.1 with ‖Δ‖₂², ‖√k Δ_i‖₂² ∼ N(µ_{Δ²}, σ²_{Δ²}) following the normal distribution for any i and with the density function p(ω_i) being the Fourier transform of the kernel K(δ),
where both the expectation and the variance are functions of µ_{cΔ²} and σ²_{cΔ²}.
Proof. E_Δ[K(Δ)] and Var_Δ[K(Δ)] in Theorem 4.1 are the expectation and the variance of the log-normal distribution. With ‖Δ‖₂² ∼ N, exp(−c‖Δ_i‖₂²) follows the log-normal distribution, giving us
$$E[\exp(-X)]=\exp(-\mu_{x}+\sigma_{x}^{2}/2),\tag{14}$$

and

$$Var[\exp(-X)]=(\exp(\sigma_{x}^{2})-1)\exp(-2\mu_{x}+\sigma_{x}^{2})\tag{15}$$

for any normal random variable X ∼ N(µ_x, σ²_x). Thus, with Equation 12,
and the variance of the log-normal distribution is
We have
Proposition 4.4. For the spherical kernel,
if ‖Δ‖ < θ, and 0 otherwise. With ‖Δ‖₂², ‖√k Δ_i‖₂² ∼ N(µ_{Δ²}, σ²_{Δ²}) following the normal distribution for any i, the expectation and the variance for the kernel are respectively
$$E_{\omega,\Delta}\bigg[\frac{1}{k/2}\sum_{i=1}^{k/2}\cos(\omega_{i}^{T}(\sqrt{k}\Delta_{i}))\bigg]=E_{\Delta}[K(\Delta)]=1-\frac{3\mu_{\Delta}}{2\theta}+\frac{\mu_{\Delta}^{3}+3\mu_{\Delta}\sigma_{\Delta}^{2}}{2\theta^{3}},$$
$$Var_{\omega,\Delta}\bigg[\frac{1}{k/2}\sum_{i=1}^{k/2}\cos(\omega_{i}^{T}(\sqrt{k}\Delta_{i}))\bigg]=\frac{1}{k/2}\bigg(1-\frac{9\mu_{\Delta}}{2\theta}+\frac{9\mu_{\Delta}^{2}}{4\theta^{2}}+\frac{3\mu_{\Delta}^{3}+9\mu_{\Delta}\sigma_{\Delta}^{2}}{\theta^{3}}-\frac{3\mu_{\Delta}^{4}+9\mu_{\Delta}^{2}\sigma_{\Delta}^{2}}{2\theta^{4}}+\frac{6\mu_{\Delta}^{4}\sigma_{\Delta}^{2}+\mu_{\Delta}^{6}+9\mu_{\Delta}^{2}\sigma_{\Delta}^{4}}{4\theta^{6}}\bigg)$$

where the density function p(ω_i) is the Fourier transform of the kernel K(δ).

Proof. To use Theorem 4.1, we need to obtain E[K(Δ)] and E[K(2Δ)].
$$E_{\Delta}[K(2\Delta)]=1-\frac{3E[|\Delta|]}{\theta}+\frac{4E[|\Delta|^{3}]}{\theta^{3}}$$
We let µ_Δ = E[‖Δ‖] and σ_Δ = √(Var[‖Δ‖]). The third non-central moment of a Gaussian is
$$E[\|\Delta\|^{3}]=\mu_{\Delta}^{3}+3\mu_{\Delta}\sigma_{\Delta}^{2}\tag{18}$$

where µ_Δ = ∫_θ^∞ x f_N(x) dx = ∫_θ^∞ x f_{TN}(x) dx × ∫_θ^∞ f_N(x) dx and σ_Δ = ∫_θ^∞ (x − µ_Δ)² f_N(x) dx = ∫_θ^∞ (x − µ_Δ)² f_{TN}(x) dx × ∫_θ^∞ f_N(x) dx. f_N(·) is the density function of the normal distribution and f_{TN}(·) is the density function of the truncated normal distribution.
With a little bit of algebra using Equations 16 and 18,
And, with Equations 17 and 19,
$$Var_{\omega,\Delta}[\cos(\omega^{T}\Delta)]=1-\frac{9\mu_{\Delta}}{2\theta}+\frac{9\mu_{\Delta}^{2}}{4\theta^{2}}+\frac{3\mu_{\Delta}^{3}+9\mu_{\Delta}\sigma_{\Delta}^{2}}{\theta^{3}}-\frac{3\mu_{\Delta}^{4}+9\mu_{\Delta}^{2}\sigma_{\Delta}^{2}}{2\theta^{4}}+\frac{6\mu_{\Delta}^{4}\sigma_{\Delta}^{2}+\mu_{\Delta}^{6}+9\mu_{\Delta}^{2}\sigma_{\Delta}^{4}}{4\theta^{6}}$$
A.2 Analysis For The Proposed Fast Random Projection
Lemma 4.5. With C ∈ R^{k×(D/k)}, a random matrix with k × (D/k) = D elements, each element following the normal distribution N(0, 1), where D is the original dimensionality and k is the dimensionality after projection, the expectation of ‖v‖² is E{∑_i^k [diag(CU)]²_i} = ‖u‖², and, for each element of v,

$$\frac{(\mathbf{v})_{j}}{\sqrt{\sum_{l}(\mathbf{U})_{l,j}^{2}}}\sim\mathcal{N}(0,1).$$
Proof.
$$E\Big\{\sum_{i}^{k}[diag(\mathbf{C}\mathbf{U})]_{i}^{2}\Big\}=E\Big\{\sum_{i}^{k}\Big[\sum_{j=1}^{D/k}(\mathbf{C})_{i,j}(\mathbf{U})_{j,i}\Big]^{2}\Big\}=E\Big\{\sum_{i}^{k}\sum_{j,j^{\prime}}(\mathbf{C})_{i,j}(\mathbf{C})_{i,j^{\prime}}(\mathbf{U})_{j,i}(\mathbf{U})_{j^{\prime},i}\Big\}$$
$$=E\Big\{\sum_{i}^{k}\sum_{j}(\mathbf{C})_{i,j}^{2}(\mathbf{U})_{j,i}^{2}\Big\}=\sum_{i}\sum_{j}(\mathbf{U})_{j,i}^{2}=\|\mathbf{u}\|^{2}$$

□

As there are only D non-zero elements in C and E{∑_i^k [diag(CU)]²_i} = ‖u‖², we have a normally distributed projection v = diag(CU) instead of the traditional random projection (1/√k)Ru, with
□

Lemma 4.6. With probability 1 − 2e^{−(ε²−ε³)k/4},
$$(1-\epsilon)(m/M)\|\mathbf{u}_{1}-\mathbf{u}_{2}\|^{2}\leq\|\mathbf{v}_{1}-\mathbf{v}_{2}\|^{2}\leq(1+\epsilon)(M/m)\|\mathbf{u}_{1}-\mathbf{u}_{2}\|^{2}$$

where ‖u_1‖² = ∑_l ∑_m (U_1)²_{l,m},

$$m=\min\Big\{\sqrt{\sum_{l}(\mathbf{U})_{l,1}^{2}},\sqrt{\sum_{l}(\mathbf{U})_{l,2}^{2}},\ldots,\sqrt{\sum_{l}(\mathbf{U})_{l,k}^{2}}\Big\}\quad\text{and}\quad M=\max\Big\{\sqrt{\sum_{l}(\mathbf{U})_{l,1}^{2}},\sqrt{\sum_{l}(\mathbf{U})_{l,2}^{2}},\ldots,\sqrt{\sum_{l}(\mathbf{U})_{l,k}^{2}}\Big\}.$$

Proof. The main difference from the proof of the JL lemma (Vempala, 2004) is that, now in our formulation, we have a generalized chi-square distribution for v_i with
instead of the χ²-distribution ‖v_1‖²/(‖u_1‖²/k) ∼ χ²_k in the JL lemma, because the denominator now depends on i.
Let us consider the following two inequalities for the i-th term of ‖diag(CU)‖:
With Inequality 21, one can obtain
It is shown in (Vempala, 2004) that
With the union bound, the probability that Inequality 4.6 is satisfied is

$$1-2e^{-(\epsilon^{2}-\epsilon^{3})k/4}.$$

Theorem 4.7. The expectation and the variance for fast random projection with our method are

$$E_{\omega,\Delta}\Big[\sum_{i}(\omega_{i}^{T}\Delta_{i})^{2}\Big]=\mu_{\Delta^{2}},\quad\text{and}\quad Var_{\omega,\Delta}\Big[\sum_{i}(\omega_{i}^{T}\Delta_{i})^{2}\Big]=(2\mu_{\Delta^{2}}+\sigma_{\Delta^{2}}^{2})/k$$

where σ²_{Δ²} is the variance of the random projection, ‖Δ_i‖₂² ∼ N(µ_{Δ²}, σ²_{Δ²}) and ω_i ∼ N(0, 1).
Proof. From Li et al. (2006a), for fixed Δ, we have
Thus, again with the law of total expectation E_Y[E_X[X|Y]] = E_X[X],

$$E_{\omega,\Delta}\Big[\sum_{i}(\omega_{i}^{T}\Delta_{i})^{2}\Big]=E_{\Delta}\Big[E_{\omega}\Big[\sum_{i}(\omega_{i}^{T}\Delta_{i})^{2}\Big|\Delta_{i}\Big]\Big]=\mu_{\Delta^{2}}.$$

Using the law of total variance Var_Y(Y) = E_X(Var_Y(Y|X)) + Var_X(E_Y(Y|X)), we have

$$Var_{\omega,\Delta}\Big[\sum_{i}(\omega_{i}^{T}\Delta_{i})^{2}\Big]=E_{\Delta}\Big[Var_{\omega}\Big[\sum_{i=1}^{k}(\omega_{i}^{T}\Delta_{i})^{2}\Big|\Delta_{i}\Big]\Big]+Var_{\Delta}\Big[E_{\omega}\Big[\sum_{i=1}^{k}(\omega_{i}^{T}\Delta_{i})^{2}\Big|\Delta_{i}\Big]\Big]$$
$$=E_{\Delta}\Big[Var_{\omega}\Big[\sum_{i=1}^{k}(\omega_{i}^{T}\Delta_{i})^{2}\Big|\Delta_{i}\Big]\Big]+Var_{\Delta}\Big[\sum_{i=1}^{k}E_{\omega_{i}}[(\omega_{i}^{T}\Delta_{i})^{2}|\Delta_{i}]\Big]$$
$$=E\Big[\frac{2}{k}\|\Delta_{i}\|_{2}^{2}\Big]+Var\Big[\sum_{i=1}^{k}\|\Delta_{i}\|_{2}^{2}\Big]=\frac{2\mu_{\Delta^{2}}}{k}+\frac{\sigma_{\Delta^{2}}^{2}}{k}$$