# A Simple And Efficient Method For Random Fourier Features

Anonymous authors
Paper under double-blind review

## Abstract

Random Fourier features and random projection involve matrix multiplication with a k × D random matrix, where D is the original dimensionality and k the dimensionality in the projected space. Large values of k ∼ 10^5, required for high accuracies, together with large sample sizes n lead to substantial computational demands. In this paper, we propose a simple and efficient method for random Fourier features and random projection. Our simple method is motivated by the fact that the order of the features does not change distances or similarities between feature vectors as long as the same order is maintained for all feature vectors. The proposed method allows much reduced computation with improved complexity O(max{k, D}n), where n represents the sample size, compared to the complexity O(kDn) traditionally associated with random projection and random Fourier features. The proposed method is also simple to implement, without the need for the platform-dependent libraries for the fast Walsh-Hadamard transform on which Fastfood and much other previous work rely. Our experiments demonstrate that the proposed method achieves significant speed improvements, i.e. a 10,000x speed-up over Random Kitchen Sinks and a 15x speed-up over Fastfood on real-world datasets when both D and k are large. As a general framework, no Gaussian assumption is made on the random entries of the projection matrix and, thus, the method is a unified approach to efficient random projections and random Fourier features with any shift-invariant kernel. The bias, the variance and error bounds are given in our analysis. We show that our estimators for kernel approximations and random projection are unbiased, with variance inversely proportional to k. Our code is made available at https://anonymous.
## 1 Introduction

Both random Fourier features and random projection are popular methods in classification and regression tasks (Bingham & Mannila, 2001; Ailon & Chazelle, 2006; Anand et al., 2012; Paul et al., 2013; Zhang et al., 2014). Random projection is an efficient, distance-preserving technique, while random Fourier features allow non-linear feature mapping through randomization. Random Fourier features, which are closely related to random projection, became popular for providing good approximations to shift-invariant kernels and can be considered as non-linear random projection (Rahimi & Recht, 2008). In large-scale real-world problems, the original dimensionality D, the dimensionality in the projected space k, and the sample size n can all be very large. With k ∼ 10^5 for high accuracies, D from 10^5 to 10^7 in Zhai et al. (2014) and n from 10^6 to 10^7 in Deng et al. (2009), the required matrix multiplication can be prohibitively expensive with complexity O(kDn).

For random projections, we have n data points $\{\mathbf{u}_i\}_{i=1}^n \in \mathbb{R}^D$ in a data matrix $\mathbf{A} \in \mathbb{R}^{D\times n}$ with D dimensions and a random matrix $\mathbf{R} \in \mathbb{R}^{k\times D}$ for projection. For the projected data points $\mathbf{RA}$, each point $\{\mathbf{v}_i\}_{i=1}^n \in \mathbb{R}^k$ is in k dimensions. The computational complexity of traditional random projection is O(kDn), which is expensive for large-scale problems. It can easily be shown, as in (Vempala, 2004) and (Li et al., 2006a), that the expectation of the squared L2-norm of a projected vector v equals that of the original vector u before random projection:

$$\mathbb{E}(\left\|\mathbf{v}_{1}\right\|^{2})=\left\|\mathbf{u}_{1}\right\|^{2}=\sum_{j=1}^{D}(\mathbf{u}_{1})_{j}^{2}.\tag{1}$$

Similarly, we get

$$\mathbb{E}(\|\mathbf{v}_{1}-\mathbf{v}_{2}\|^{2})=\|\mathbf{u}_{1}-\mathbf{u}_{2}\|^{2}.$$

In addition, $\|\mathbf{v}_1\|^2/(\|\mathbf{u}_1\|^2/k)$ and $\|\mathbf{v}_1-\mathbf{v}_2\|^2/(\|\mathbf{u}_1-\mathbf{u}_2\|^2/k)$ both follow the $\chi^2$ distribution, since

$$\frac{(\mathbf{v}_{1})_{i}}{\sqrt{\|\mathbf{u}_{1}\|^{2}/k}}\sim{\mathcal{N}}(0,1),\qquad\frac{(\mathbf{v}_{1})_{i}-(\mathbf{v}_{2})_{i}}{\sqrt{\|\mathbf{u}_{1}-\mathbf{u}_{2}\|^{2}/k}}\sim{\mathcal{N}}(0,1).$$

Thus, when we take the sum of $(\mathbf{v}_1)_i^2$ over all i, we see that $\sum_i (\mathbf{v}_1)_i^2$ follows the $\chi^2$-distribution:

$$\frac{\|\mathbf{v}_{1}\|^{2}}{\|\mathbf{u}_{1}\|^{2}/k}\sim\chi_{k}^{2},\tag{2}$$

and, similarly, for $\sum_i ((\mathbf{v}_1)_i-(\mathbf{v}_2)_i)^2$,

$$\frac{\|\mathbf{v}_{1}-\mathbf{v}_{2}\|^{2}}{\|\mathbf{u}_{1}-\mathbf{u}_{2}\|^{2}/k}\sim\chi_{k}^{2}.$$

With one of the tightest bounds for the Johnson and Lindenstrauss (JL) lemma in (Achlioptas, 2003b), it is shown that

$$(1-\epsilon)\|\mathbf{u}_{1}-\mathbf{u}_{2}\|^{2}\leq\|\mathbf{v}_{1}-\mathbf{v}_{2}\|^{2}\leq(1+\epsilon)\|\mathbf{u}_{1}-\mathbf{u}_{2}\|^{2}$$

with probability $1-n^{-\gamma}$, given that

$$k\geq k_{0}=\frac{4+2\gamma}{\epsilon^{2}/2-\epsilon^{3}/3}\log(n).$$

For kernel methods with dot products, it can also be shown, as in (Li et al., 2006a) and (Li et al., 2006b), that

$$\mathbb{E}(\mathbf{v}_{1}^{T}\mathbf{v}_{2})=\mathbf{u}_{1}^{T}\mathbf{u}_{2}=\sum_{j=1}^{D}(\mathbf{u}_{1})_{j}(\mathbf{u}_{2})_{j}.$$
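These distance-preserving properties are easy to check numerically. The short sketch below is only a minimal illustration (the dimensionality, the number of projections and the Gaussian entries of R are assumptions chosen for the example, not settings from our experiments); it verifies that the traditionally scaled projection approximately preserves squared Euclidean distances:

```python
import numpy as np

rng = np.random.default_rng(0)
D, k = 10_000, 2_000          # original and projected dimensionality (illustrative)
u1, u2 = rng.standard_normal(D), rng.standard_normal(D)

# Traditional random projection: v = (1/sqrt(k)) R u with R having i.i.d. N(0, 1) entries.
R = rng.standard_normal((k, D))
v1, v2 = R @ u1 / np.sqrt(k), R @ u2 / np.sqrt(k)

# The squared distance after projection concentrates around the original squared distance.
print(np.sum((u1 - u2) ** 2), np.sum((v1 - v2) ** 2))
```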
## 1.1 Kernel Approximation

In this section, we describe how dot products of vectors of random Fourier features can approximate kernels. For a properly scaled shift-invariant kernel K(δ), Bochner's theorem guarantees that its Fourier transform p(ω) is a probability density function (Rahimi & Recht, 2008). It can be shown that

$$K(x-y)=\int_{\mathbb{R}^{d}}p(w)\big(\cos(w^{T}x)\cos(w^{T}y)+\sin(w^{T}x)\sin(w^{T}y)\big)\,dw=E_{p}\big[\langle(\cos(w^{T}x),\sin(w^{T}x)),(\cos(w^{T}y),\sin(w^{T}y))\rangle\big].\tag{3}$$

For $x \in \mathbb{R}^d$, K(·) can therefore be approximated with the inner product ⟨ϕ(x), ϕ(y)⟩. Thus,

$$\phi(x)=\sqrt{\frac{2}{k}}\big(\cos(w_{1}^{T}x),\sin(w_{1}^{T}x),\cos(w_{2}^{T}x),\sin(w_{2}^{T}x),\ldots,\cos(w_{k/2}^{T}x),\sin(w_{k/2}^{T}x)\big)$$

or, alternatively,

$$\phi(x)=\sqrt{\frac{2}{k}}\big(\cos(w_{1}^{T}x),\cos(w_{2}^{T}x),\ldots,\cos(w_{k/2}^{T}x),\sin(w_{1}^{T}x),\sin(w_{2}^{T}x),\ldots,\sin(w_{k/2}^{T}x)\big),$$

as the dot product ϕ(x)ᵀϕ(y) gives the same value in either form, where $w_1,\ldots,w_{k/2}$ are drawn according to p(w), i.e.

$$\phi(x)^{T}\phi(y)=\frac{1}{k/2}\sum_{i=1}^{k/2}\cos(\omega_{i}^{T}(x-y)).\tag{4}$$
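To make the feature map concrete, the following sketch builds the standard random Fourier feature map of Equation 4 for the Gaussian kernel $K(x-y)=\exp(-\gamma\|x-y\|^2)$ and compares the dot product of the features with the exact kernel value. It is an illustration only: the bandwidth γ, the dimensions and the sampling of the frequencies from $\mathcal{N}(0, 2\gamma I)$ are assumptions made for this example rather than settings taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, gamma = 50, 4_096, 0.1   # input dim, number of features, kernel bandwidth (illustrative)
x, y = rng.standard_normal(d), rng.standard_normal(d)

# For K(x - y) = exp(-gamma * ||x - y||^2), the spectral density p(w) is N(0, 2*gamma*I),
# so each frequency is drawn as sqrt(2*gamma) times a standard normal vector.
W = np.sqrt(2 * gamma) * rng.standard_normal((k // 2, d))

def phi(z):
    # Feature map with k/2 cosine and k/2 sine features, scaled by sqrt(2/k).
    proj = W @ z
    return np.sqrt(2.0 / k) * np.concatenate([np.cos(proj), np.sin(proj)])

approx = phi(x) @ phi(y)
exact = np.exp(-gamma * np.sum((x - y) ** 2))
print(approx, exact)   # the two values should be close for large k
```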
## 1.2 Contributions

The computation of random Fourier features and random projections relies on random matrices of size k × D, where k is the dimensionality after projection and D the original dimensionality. For good kernel approximations with random Fourier features, k has to be very large, and since n is also very large for large-scale datasets, the computation can be very expensive. The complexities of Random Kitchen Sinks (RKS) (Rahimi & Recht, 2008) and of a more recent state-of-the-art log-linear time method, Fastfood (Le et al., 2013), are respectively O(kDn) and O(k log(D)n). The linear-time method proposed in this paper is 1.) easy to implement, 2.) of complexity O(max{k, D}n) and 3.) an unbiased estimator with variance inversely proportional to k.

More specifically, as it is possible to arrange the non-zero elements in a sparse random matrix such that the computational complexity is independent of k, the complexity becomes O(Dn) for k ≤ D. By considering a random permutation of the order of the features together with feature normalization, the number of non-zero elements in the random matrix is reduced to D. The proposed method thus speeds up traditional random projection and Random Kitchen Sinks from O(kDn) to O(max{k, D}n), since its complexity is O(kn) when the dimensionality after projection $k_{multi}$ is larger than D. Compared with RKS at O(kDn) and Fastfood at O(k log(D)n), our linear-time method has a larger computational advantage when both D and k are large, as shown in Table 3, i.e. a 15x speed-up over Fastfood on real-world datasets.

Our method is motivated by the fact that the order of the features does not change distances or similarities between feature vectors as long as the same order is maintained for all feature vectors. Instead of generating evenly spread random non-zero entries as in previous methods such as Fastfood, which uses the Walsh-Hadamard transform (WHT), a random order of the features in the data matrix is chosen before projection. The proposed method is easy to implement without the need for libraries for sparse matrix computation or the fast WHT, whose speed depends on the implementation and on the software and hardware architecture. Moreover, as no Gaussian assumption is made on the random entries of the projection matrix, the proposed method is a unified approach to efficient random projections and random Fourier features with any shift-invariant kernel.
## 2 Related Work

## 2.1 Previous Approaches To Fast Random Projection And Fast Random Fourier Features

Speeding up the computation with a sparse random projection matrix, in which one third of the entries are non-zero, was proposed by Achlioptas (2003a). However, with a sparse projection matrix, some features in the feature vectors can be ignored entirely in the computation. Another idea is to spread the sparse entries more evenly. To do this, one can use the Fast Johnson-Lindenstrauss Transform (FJLT) Π = PHD, where P is the sparse projection matrix, D is a diagonal matrix with $D_{i,i} \in \{+1, -1\}$ and each entry $D_{i,i}$ an i.i.d. random variable, and H is the D × D Walsh-Hadamard transform matrix. The complexity of multiplying A by the "mixing matrix" preconditioner HD with the fast Walsh-Hadamard transform is O(D log D), and it can be shown that HD is L2-norm preserving, which makes it a reasonable mixing matrix. In a more efficient method, the Improved Subsampled Randomized Hadamard Transform (SRHT) of Boutsidis & Gittens (2012), a subsampling matrix S is considered instead of P, i.e. Π = SHD with complexity O(D log k), where D is the original dimensionality and k < D the reduced dimensionality. For random Fourier features, Fastfood (Le et al., 2013) computes random Fourier features by extending previous work on the SRHT, sparse JLT and FJLT. Our method can theoretically achieve O(nnz(A)) with a sparse data matrix and O(Dn) with a dense data matrix.
## 2.2 Platform-Specific Implementations Of Previous Methods

All implementations of the fast WHT and Fastfood that we have found rely on the SPIRAL library¹ introduced in Püschel et al. (2004). In addition, the speed in practice depends very much on the implementation of the fast WHT and of the sparse matrix computation. For example, in MATLAB, the simplest way to implement the fast WHT or Fastfood is to treat the transform as matrix multiplication with dense matrices, which does not take advantage of efficient computation on zero entries. This easy-to-implement approach, however, is not very efficient: it takes O(D²) operations to compute HDA and requires Ω(D²) memory. Fortunately, with the fast WHT formulated like the FFT, there are efficient methods that perform the transform in O(D log D) instead of direct multiplication with dense matrices.

¹ https://github.com/jeffeverett/spiral-wht

Although MATLAB comes with a native implementation of the fast WHT, it has been shown empirically in many previous studies that the time required is in practice longer than direct multiplication with the Hadamard matrix. This means there is no speed-up from the fast WHT in MATLAB, and the WHT becomes the bottleneck of the overall computation. This is why many implementations, if not all, rely on SPIRAL to speed up the WHT. SPIRAL, a signal processing package written in C, provides an efficient implementation of the WHT that takes advantage of specific machine architectures. In many implementations of the WHT, SRHT or Fastfood, SPIRAL with mex in MATLAB is used for the fast WHT and for fast multiplication with the sparse P or S. However, efficient multiplications for sparse matrices are platform-dependent (Kunchum et al., 2017; Dalton et al., 2015; Liu & Vinter, 2014; Yang et al., 2011).
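For reference, the fast Walsh-Hadamard transform underlying these preconditioners can be written in a few lines. The sketch below is our own minimal Python illustration (it is not SPIRAL and is not tuned to any architecture): it applies the unnormalized transform in O(D log D) operations and shows the HD mixing step of the FJLT, with the random sign diagonal drawn here purely for the example.

```python
import numpy as np

def fwht(a):
    """Unnormalized fast Walsh-Hadamard transform of a length-2^m vector, in O(D log D)."""
    a = np.asarray(a, dtype=float).copy()
    h = 1
    while h < len(a):
        for start in range(0, len(a), 2 * h):
            left = a[start:start + h].copy()
            right = a[start + h:start + 2 * h].copy()
            a[start:start + h] = left + right
            a[start + h:start + 2 * h] = left - right
        h *= 2
    return a

rng = np.random.default_rng(0)
D = 1 << 12                                   # D must be a power of two for H to exist
u = rng.standard_normal(D)
signs = rng.choice([-1.0, 1.0], size=D)       # the diagonal matrix D of random signs
mixed = fwht(signs * u)                       # the HD "mixing" preconditioner applied to u
print(mixed[:4])
```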
## 2.3 Other Approaches

Although random projection is computationally more efficient than many other dimensionality reduction methods such as principal component analysis (PCA), it is still computationally expensive for very large-scale problems. Methods with sparse random matrices have been proposed to speed up traditional random projection. The method in (Achlioptas, 2003b) with sparse random projection can achieve about a three-fold speed-up compared to vanilla random projection with a small loss of accuracy, and (Li et al., 2006a) obtains a more efficient √D-fold speed-up, where D is the dimensionality of the input space.

More recently, a closely related technique called random Fourier features, used to speed up kernel methods, has attracted a lot of attention. Although the performance of non-linear kernel methods is almost always better than that of linear kernel methods, non-linear kernel methods on large-scale problems are known to be prohibitively expensive, as they do not scale well with the sample size of the training set. Approximations for non-linear kernel methods aim to reduce the time complexity so that large-scale non-linear kernel methods become practical. There are two popular families of approximations: 1.) the Nystrom approximation of Gram matrices (Williams & Seeger, 2001) can be used to speed up general non-linear methods to O(nD), where D is the dimensionality of the input space and n is the number of training examples (Drineas & Mahoney, 2005; Li et al., 2015; Jin et al., 2011); 2.) alternatively, random Fourier features (Rahimi & Recht, 2008) can be used to approximate non-linear kernels. In this method, the original high-dimensional data is projected to another feature space, as in random projection. Experiments show that random Fourier features can perform very well with non-linear kernel methods in large-scale classification and regression tasks. Random Fourier features can be used to speed up non-linear kernel methods, and the generation of the features themselves can be made more efficient with a recent method called Fastfood (Le et al., 2013). Experiments for Fastfood show that the classification performance of the Nystrom method, the original random Fourier features and Fastfood are close, while Fastfood is faster than the other two methods.

Although the computational efficiency of recent methods for both random projection and kernel approximations has been improved, they are still prohibitively expensive when the projected feature space is very large. This is the case especially for random Fourier features. In this paper, an efficient method is proposed for random projections and random Fourier features with computational complexity independent of k.
## 3 Our Method

In this work, we take sparsity to the extreme, leaving only D non-zero elements in the k × D projection matrix, with sparsity s = 1/k defined as the fraction of non-zero random numbers generated in the projection matrix. Using normalization and shuffling of the order of the features, we find that random projections and the computation of random Fourier features can be made very efficient. Theoretical analysis is provided for the error, with encouraging experimental support.

For the random matrix R of random projection, we create a deterministically sparse matrix $\mathbf{S} \in \mathbb{R}^{k\times D}$ with D(k − 1) zeros, i.e. only D Gaussian random numbers need to be generated. We show that $E(\mathbf{v}_i) = \mathbf{S}\mathbf{u}_i$ instead of $E(\mathbf{v}_i) = \frac{1}{\sqrt{k}}\mathbf{R}\mathbf{u}_i$ from standard random projections. We define

$$\mathbf{S}=\left(\begin{array}{cccc}{{r_{1}}}&{{0\ldots0}}&{{0\ldots0}}&{{0\ldots0}}\\ {{0\ldots0}}&{{r_{2}}}&{{0\ldots0}}&{{0\ldots0}}\\ {{0\ldots0}}&{{0\ldots0}}&{{\ddots}}&{{0\ldots0}}\\ {{0\ldots0}}&{{0\ldots0}}&{{0\ldots0}}&{{r_{k}}}\end{array}\right)\tag{5}$$

with each row vector $\{r_i\}_{i=1}^k \in \mathbb{R}^{D/k}$.
## 3.1 The Algorithms

An equivalent formulation for finding $\mathbf{v}_i$ is to calculate the diagonal elements of the matrix $\mathbf{C}\mathbf{U}_i$, where the vector $\mathbf{u}_i$ is reshaped as $\mathbf{U}_i \in \mathbb{R}^{(D/k)\times k}$ and $\mathbf{C} = [r_1; r_2; \ldots; r_k] \in \mathbb{R}^{k\times(D/k)}$. For each data point i with $\mathbf{U}_i$, we calculate the diagonal elements diag($\mathbf{C}\mathbf{U}_i$), which gives the k features in the subspace after random projection, i.e. $\sum_l (r_m)_l \times (\mathbf{U}_i)_{l,m}$ gives element m of the diagonal.

In this section, there are two algorithms. As described in Sub-section 3.2, we first pre-process the data by randomly shuffling the order of the features in each feature vector and normalizing the feature vectors. Our method to speed up the computation of random projections is given in Algorithm 1, and that for random Fourier features with the Gaussian kernel in Algorithm 2, with $\mathbf{C} = \mathbf{C}_G$ generated element-wise from the standard normal distribution. For other kernels, other distributions are required for a general $\mathbf{C}$, as described in Sub-section 4.1.

Algorithm 1: Fast Random Projection to compute diag(CU) with C = C_G
Input: k and all data points $\{\mathbf{u}_i\}_{i=1}^n \in \mathbb{R}^D$
Output: $\{\mathbf{v}_i\}_{i=1}^n \in \mathbb{R}^k$
for i := 1 to n do
  $\mathbf{v}_i$ := diag($\mathbf{C}\mathbf{U}_i$)
end for

Notice that, with Algorithm 2, the number of random Fourier features generated cannot be more than the original dimensionality, i.e. k ≤ D. For a larger number of random Fourier features than D, Algorithm 2 is invoked multiple times, i.e. $N_{multi}$ times, to obtain the projected vector in the desired dimensionality after projection $k_{multi} = kN_{multi}$. The sparsity, as described in Section 3, is s = 1/k. With Algorithm 2 invoked multiple times, the sparsity is still $s_{multi} = 1/k$, not $s_{multi} = 1/k_{multi}$.

Algorithm 2: Fast Fourier Features for Kernel Approximations
Input: k and all data points $\{\mathbf{u}_i\}_{i=1}^n \in \mathbb{R}^D$
Output: $\{\mathbf{v}_i\}_{i=1}^n \in \mathbb{R}^{2k}$
for i := 1 to n do
  $\mathbf{v}_i$ := diag($\sigma\mathbf{C}_G\mathbf{U}_i$) for the Gaussian kernel, or $\mathbf{v}_i$ := diag($\mathbf{C}\mathbf{U}_i$) for other kernels
  $\mathbf{v}_i$ := $\sqrt{k}\,\mathbf{v}_i$
  $\mathbf{v}_i$ := $[\cos(\mathbf{v}_i); \sin(\mathbf{v}_i)]$
  $\mathbf{v}_i$ := $\frac{1}{\sqrt{k}}\mathbf{v}_i$
end for
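A compact sketch of Algorithms 1 and 2 is given below. It is an illustrative re-implementation only: it assumes that D is an exact multiple of k, uses a row-major reshape of $\mathbf{u}_i$ into $\mathbf{U}_i$, and fixes the Gaussian-kernel scale σ and the data arbitrarily for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
D, k, n = 8_192, 64, 100        # D is assumed to be a multiple of k in this sketch
sigma = 1.0                     # Gaussian-kernel scale used in Algorithm 2 (assumption)
U_data = rng.standard_normal((n, D))
C = rng.standard_normal((k, D // k))   # only D Gaussian numbers, instead of k*D

def fast_random_projection(u):
    # Algorithm 1: v = diag(C U) with u reshaped to a (D/k) x k matrix U.
    U = u.reshape(D // k, k)
    # diag(C @ U) without forming the k x k product: sum_l C[m, l] * U[l, m]
    return np.einsum('ml,lm->m', C, U)

def fast_fourier_features(u):
    # Algorithm 2 (Gaussian kernel): scale, take cos/sin, renormalize.
    v = sigma * fast_random_projection(u)
    v = np.sqrt(k) * v
    v = np.concatenate([np.cos(v), np.sin(v)])
    return v / np.sqrt(k)

V = np.stack([fast_fourier_features(u) for u in U_data])   # n x 2k feature matrix
print(V.shape)
```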
## 3.2 Random Permutation Of Features And Normalization

Motivated by the fact that a chi-squared random variable can be asymptotically approximated by a normal random variable, we consider random permutations of the features and feature normalization to speed up random Fourier features.

As shown in Lemma 4.6 in the next section, the bounds for ∥v1 − v2∥² depend on the data points $\{\mathbf{u}_i\}_{i=1}^n$. When M/m = 1, with

$$m=\operatorname*{min}\Big\{\sqrt{\sum_{l}(\mathbf{U})_{l,1}^{2}},\sqrt{\sum_{l}(\mathbf{U})_{l,2}^{2}},\ldots,\sqrt{\sum_{l}(\mathbf{U})_{l,k}^{2}}\Big\}\quad\text{and}\quad M=\operatorname*{max}\Big\{\sqrt{\sum_{l}(\mathbf{U})_{l,1}^{2}},\sqrt{\sum_{l}(\mathbf{U})_{l,2}^{2}},\ldots,\sqrt{\sum_{l}(\mathbf{U})_{l,k}^{2}}\Big\},$$

we have the tightest bounds, i.e. the inequality reduces back to the original JL lemma; but obviously there is no way we can change the data. We therefore use two techniques, feature shuffling and feature normalization, to obtain diag(CU) such that M/m is small. We will demonstrate that with feature shuffling and feature normalization, M/m is not far from 1.

For data point i, we obtain a random permutation of the features $(\alpha((\mathbf{u}_i)_1), \alpha((\mathbf{u}_i)_2), \ldots, \alpha((\mathbf{u}_i)_D))$. If we permute the order of the features, ∥u1 − u2∥² gives the same Euclidean distance regardless of the permutation. However, the permutation brings M/m much closer to 1 for ∥diag(C(U1 − U2))∥², because there is less correlation among the features in $\{(\mathbf{U}_i)_{l,m}\}_{l=1}^{D/k}$ after shuffling.

It is also very common to scale features so that various methods perform well. We normalize features using mean normalization, i.e.

$$(\mathbf{u}_{i})_{j}:=\frac{(\mathbf{u}_{i})_{j}-\sum_{i}(\mathbf{u}_{i})_{j}/n}{\max_{i}(\mathbf{u}_{i})_{j}-\min_{i}(\mathbf{u}_{i})_{j}}\quad\forall i,j.$$
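The pre-processing can be sketched as follows (a minimal illustration with random stand-in data; the single shared permutation and the per-feature mean normalization follow Section 3.2, while the data, seed and matrix layout are assumptions made for the example):

```python
import numpy as np

rng = np.random.default_rng(0)
n, D = 500, 4_096
A = rng.random((n, D))                       # n examples with D features (illustrative data)

# One random permutation of the feature order, shared by all examples,
# leaves all pairwise Euclidean distances unchanged.
perm = rng.permutation(D)
A = A[:, perm]

# Mean normalization per feature: (x - mean) / (max - min).
col_range = A.max(axis=0) - A.min(axis=0)
col_range[col_range == 0] = 1.0              # guard against constant features
A = (A - A.mean(axis=0)) / col_range
```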
## 4 Results

The expectation of the approximate kernel in Sutherland & Schneider (2015) with feature vectors $\mathbf{u}_1$ and $\mathbf{u}_2$ is

$$E_{\omega}\phi(\mathbf{u}_{1})^{T}\phi(\mathbf{u}_{2})=E_{\omega}\Big[\frac{1}{k/2}\sum_{i=1}^{k/2}\cos(\omega_{i}^{T}(\mathbf{u}_{1}-\mathbf{u}_{2}))\Big]=E_{\omega}\Big[\frac{1}{k/2}\sum_{i=1}^{k/2}\cos(\omega_{i}^{T}\Delta_{\mathbf{u}})\Big],\tag{6}$$

where $\Delta_{\mathbf{u}} = \mathbf{u}_1 - \mathbf{u}_2$.

In our analysis, two cases can intuitively be considered. First, for fixed $\Delta_{\mathbf{u}}$ and Gaussian ω, with the normal random variable $X = \omega^T\Delta_{\mathbf{u}} \sim \mathcal{N}(0, \sigma_x^2)$, it can easily be found that the expectation is

$$E[\cos(X)]=e^{-\sigma_{x}^{2}/2},$$

which is the approximate Gaussian kernel using random Fourier features.

With our method and $\Delta_i = (\mathbf{U}_1)_i - (\mathbf{U}_2)_i$,

$$E_{\omega,\Delta}\phi(\mathbf{u}_{1})^{T}\phi(\mathbf{u}_{2})=E_{\omega,\Delta}\Big[\frac{1}{k/2}\sum_{i=1}^{k/2}\cos(\omega_{i}^{T}\Delta_{i})\Big]=\frac{1}{k/2}\sum_{i=1}^{k/2}E_{\omega_{i},\Delta_{i}}[\cos(\omega_{i}^{T}\Delta_{i})].\tag{7}$$

For the second case, with fixed ω and the pre-processing techniques used in Section 3.2, $E_{\Delta_i}\|\sqrt{k}\Delta_i\|_2^2 = \|\Delta_{\mathbf{u}}\|_2^2$, i.e. $\|\Delta_i\|_2^2$ is asymptotically normal. We therefore consider the approximation

$$\|\Delta_{\mathbf{u}}\|_{2}^{2}\approx\|{\sqrt{k}}\Delta_{i\in[1,k/2]}\|_{2}^{2}\sim{\mathcal{N}}(\mu_{\Delta^{2}},\sigma_{\Delta^{2}}^{2})$$

and let $\|\Delta\|_2^2 = \|\sqrt{k}\Delta_i\|_2^2$. For each $i \in [1, k/2]$, $\|\Delta_i\|_2^2$ is asymptotically normal due to the central limit theorem and to the techniques used in Section 3.2, which make each chi-square normalized and independent. The only assumption here is normality, with our justifications given in Section 4.2. Then

$$E_{\|\Delta\|_{2}^{2}\sim\mathcal{N}(\mu_{\Delta^{2}},\sigma_{\Delta^{2}}^{2})}[\cos(\omega^{T}\Delta)]=E[K(\|\Delta\|_{2}^{2})],$$

where $K(\|\Delta\|_2^2)$ becomes, for example, $\exp(-\gamma X)$ with the Gaussian kernel.

For the rest of the analysis, we formally bound the errors with both ω and ∆ as random variables, using the total expectation and the total variance. We have $E\|\omega_i^T(\sqrt{k}\Delta_i)\|_2^2 = \|\Delta_{\mathbf{u}}\|_2^2$ because

$$\frac{\omega_{i}^{T}(\sqrt{k}\Delta_{i})}{\sqrt{\|\Delta_{\mathbf{u}}\|_{2}^{2}}}\sim\mathcal{N}(0,1).$$

We found, in the analysis, that the expectation of the approximate kernel with our new method, using Equation 12, is

$$E_{\omega,\Delta}[\cos(\omega^{T}\Delta)]=E_{\Delta}[K(\Delta)]=\sum_{i}K(\Delta_{i}),$$

where $\sum_{i=1}^{k/2}K(\Delta_i)$ is the expectation $E_{\Delta}[K(\Delta)]$ by definition.
**Theorem 4.1.** With $\|\Delta\|_2^2$, $\|\sqrt{k}\Delta_i\|_2^2 \sim \mathcal{N}(\mu_{\Delta^2}, \sigma_{\Delta^2}^2)$ following the normal distribution for any i, the expectation and the variance for random Fourier features with our method are

$$E_{\omega,\Delta}\bigg[\frac{1}{k/2}\sum_{i=1}^{k/2}\cos(\omega_{i}^{T}(\sqrt{k}\Delta_{i}))\bigg]=E_{\Delta}[K(\Delta)],$$
$$Var_{\omega,\Delta}\bigg[\frac{1}{k/2}\sum_{i=1}^{k/2}\cos(\omega_{i}^{T}(\sqrt{k}\Delta_{i}))\bigg]=\frac{1}{k/2}\bigg(\frac{1}{2}+\frac{1}{2}E_{\Delta}[K(2\Delta)]+E_{\Delta}[K(\Delta)]^{2}\bigg),$$

where the density function $p(\omega_i)$ is the Fourier transform of the kernel $K(\delta)$.
**Proposition 4.2.** The unbiased estimator for the Gaussian kernel approximation is

$$\phi^{T}(x)\phi(y)\,\big[exp(-\gamma\mu_{\Delta^{2}})/exp(-\gamma\mu_{\Delta^{2}}+(\gamma\sigma_{\Delta^{2}})^{2}/2)\big]$$

with

$$exp(-\gamma\mu_{\Delta^{2}})/exp(-\gamma\mu_{\Delta^{2}}+(\gamma\sigma_{\Delta^{2}})^{2}/2)\approx1$$

if $\mu_{\Delta^2}/\sigma_{\Delta^2} \gg 1$.
**Proposition 4.3.** For the Gaussian kernel $K(\Delta) = exp(-c\|\Delta\|_2^2)$ and the exponential kernel, using Theorem 4.1 with $\|\Delta\|_2^2$, $\|\sqrt{k}\Delta_i\|_2^2 \sim \mathcal{N}(\mu_{\Delta^2}, \sigma_{\Delta^2}^2)$ following the normal distribution for any i and with the density function $p(\omega_i)$ being the Fourier transform of the kernel $K(\delta)$,

$$E_{\omega,\Delta}\bigg[\frac{1}{k/2}\sum_{i=1}^{k/2}\cos(\omega_{i}^{T}(\sqrt{k}\Delta_{i}))\bigg]=exp(-\mu_{c\Delta^{2}}+\sigma_{c\Delta^{2}}^{2}/2),$$
$$Var_{\omega,\Delta}\bigg[\frac{1}{k/2}\sum_{i=1}^{k/2}\cos(\omega_{i}^{T}(\sqrt{k}\Delta_{i}))\bigg]=\frac{1}{k/2}\bigg(\frac{1}{2}+\frac{1}{2}exp(-2\mu_{c\Delta^{2}}+2\sigma_{c\Delta^{2}}^{2})+exp(-\mu_{c\Delta^{2}}+\sigma_{c\Delta^{2}}^{2}/2)^{2}\bigg),$$

where both the expectation and the variance are functions of $\mu_{c\Delta^2}$ and $\sigma_{c\Delta^2}^2$.
**Proposition 4.4.** For the spherical kernel,

$$K(\Delta)=1-\frac{3}{2}\frac{\|\Delta\|}{\theta}+\frac{1}{2}\Big(\frac{\|\Delta\|}{\theta}\Big)^{3}$$

if ∥∆∥ < θ, and K(∆) = 0 otherwise. With $\|\Delta\|_2^2$, $\|\sqrt{k}\Delta_i\|_2^2 \sim \mathcal{N}(\mu_{\Delta^2}, \sigma_{\Delta^2}^2)$ following the normal distribution for any i, the expectation and the variance for the kernel are respectively

$$E_{\omega,\Delta}\bigg[\frac{1}{k/2}\sum_{i=1}^{k/2}\cos(\omega_{i}^{T}(\sqrt{k}\Delta_{i}))\bigg]=E_{\Delta}[K(\Delta)]=1-\frac{3\mu_{\Delta}}{2\theta}+\frac{\mu_{\Delta}^{3}+3\mu_{\Delta}\sigma_{\Delta}^{2}}{2\theta^{3}},$$
$$Var_{\omega,\Delta}\bigg[\frac{1}{k/2}\sum_{i=1}^{k/2}\cos(\omega_i^T(\sqrt{k}\Delta_i))\bigg]=\frac{1}{k/2}\Big(1-\frac{9\mu_\Delta}{2\theta}+\frac{9\mu_\Delta^2}{4\theta^2}+\frac{3\mu_\Delta^3+9\mu_\Delta\sigma_\Delta^2}{\theta^3}-\frac{3\mu_\Delta^4+9\mu_\Delta^2\sigma_\Delta^2}{2\theta^4}+\frac{6\mu_\Delta^4\sigma_\Delta^2+\mu_\Delta^5+9\mu_\Delta^2\sigma_\Delta^4}{4\theta^6}\Big),$$

where the density function $p(\omega_i)$ is the Fourier transform of the kernel $K(\delta)$.
**Lemma 4.5.** With $\mathbf{C} \in \mathbb{R}^{k\times(D/k)}$ a random matrix with k × (D/k) = D elements, each element following the normal distribution $\mathcal{N}(0, 1)$, where D is the original dimensionality and k the dimensionality after projection, the expectation of ∥v∥² is $E\{\sum_{i}^{k}[diag(\mathbf{CU})]_{i}^{2}\}=\|\mathbf{u}\|^{2}$, and, for each element of v,

$$\frac{(\mathbf{v})_{j}}{\sqrt{\sum_{l}(\mathbf{U})_{l,j}^{2}}}\sim\mathcal{N}(0,1).$$

Note that $E\{\sum_{i}^{k}[diag(\mathbf{CU})]_{i}^{2}\}=\|\mathbf{u}\|^{2}$, while we have $E(\|\frac{1}{\sqrt{k}}\mathbf{Ru}\|^{2})=\|\mathbf{u}\|^{2}$ for traditional random projection. However, ∥v∥²₂ now follows the generalized chi-squared distribution with non-unit variances.
**Lemma 4.6.** With probability $1-2e^{-(\epsilon^{2}-\epsilon^{3})k/4}$,

$$(1-\epsilon)(m/M)\|\mathbf{u}_{1}-\mathbf{u}_{2}\|^{2}\leq\|\mathbf{v}_{1}-\mathbf{v}_{2}\|^{2}\leq(1+\epsilon)(M/m)\|\mathbf{u}_{1}-\mathbf{u}_{2}\|^{2},$$

where $\|\mathbf{u}_1\|^2 = \sum_{l}\sum_{m}(\mathbf{U}_1)_{l,m}^2$ and

$$m=\operatorname*{min}\Big\{{\sqrt{\sum_{l}(\mathbf{U})_{l,1}^{2}}},{\sqrt{\sum_{l}(\mathbf{U})_{l,2}^{2}}},\ldots,{\sqrt{\sum_{l}(\mathbf{U})_{l,k}^{2}}}\Big\},$$
$$M=\operatorname*{max}\Big\{{\sqrt{\sum_{l}(\mathbf{U})_{l,1}^{2}}},{\sqrt{\sum_{l}(\mathbf{U})_{l,2}^{2}}},\ldots,{\sqrt{\sum_{l}(\mathbf{U})_{l,k}^{2}}}\Big\}.$$
**Theorem 4.7.** The expectation and the variance for fast random projection with our method are

$$E_{\omega,\Delta}\Big[\sum_{i}(\omega_{i}^{T}\Delta_{i})^{2}\Big]=\mu_{\Delta^{2}}\quad and\quad Var_{\omega,\Delta}\Big[\sum_{i}(\omega_{i}^{T}\Delta_{i})^{2}\Big]=(2\mu_{\Delta^{2}}+\sigma_{\Delta^{2}}^{2})/k,$$

where $\|\Delta_i\|_2^2 \sim \mathcal{N}(\mu_{\Delta^2}, \sigma_{\Delta^2}^2)$ and $\omega_i \sim \mathcal{N}(0, 1)$.
## 4.1 Other Kernels

$w^T x$ is exactly what we compute for random projections. Thus, we can use the same method to compute $w^T x$ with diag(CU), because all elements of $[w_1; w_2; \ldots; w_D]$ follow the normal distribution if we use the Gaussian kernel. Otherwise, other distributions can be used to generate w for other kernels.
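As a small illustration of this point, the entries of C can be drawn from the spectral density of the chosen kernel; the correspondences used below (normal frequencies for the Gaussian kernel and Cauchy frequencies for the Laplacian kernel) are the standard ones from Rahimi & Recht (2008), and the scale parameters are assumptions chosen for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
k, cols = 64, 128   # shape of C, illustrative

# Gaussian kernel exp(-||delta||^2 / (2 sigma^2)): frequencies are normal with scale 1/sigma.
sigma = 1.0
C_gaussian = rng.standard_normal((k, cols)) / sigma

# Laplacian kernel exp(-gamma * ||delta||_1): frequencies follow a Cauchy distribution.
gamma = 0.5
C_laplacian = gamma * rng.standard_cauchy((k, cols))
```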
363
+ ## 4.2 The Central Limit Theorem For Weakly Dependent Random Variables
364
+
365
+ In this sub-section, we first study the effect of the shuffling operation on reducing the correlations between features, the actual speed-ups and the approximation quality using the three datasets which are used for all
366
+
367
+ ![8_image_0.png](8_image_0.png)
368
+
369
+ (a) AR Face Dateset (b) Natural Dataset (c) Gene Dataset
370
+
371
+ ![8_image_1.png](8_image_1.png)
372
+
373
+ Figure 1: Comparison of feature correlations matrices before and after shuffling. Darker colors denote higher correlations. In a sub-figure, the matrix on the left is obtained before shuffling. Shuffling can significantly reduce the features correlation.
374
+
375
+ Table 1: Results of Shapiro-Wilk test on three datasets. This test is to verify whether the value of ∆i is normal distribution or not.
376
+
377
+ | Operation | kmulti | AR Dataset | Natural Dataset | Gene Dataset | | | | |
378
+ |-------------|----------|--------------|-------------------|----------------|---------|----------|---------|-------|
379
+ | Shuff. | Norm. | W | p-value | W | p-value | W | p-value | |
380
+ | - | - | 200 | 0.91 | 1.3E-07 | 0.98 | 0.003 | 0.98 | 0.003 |
381
+ | 1000 | 0.87 | 2.2E-16 | 0.99 | 1.93E-06 | 0.95 | 2.2E-16 | | |
382
+ | √ | - | 200 | 0.99 | 0.552 | 0.99 | 0.571 | 0.98 | 0.010 |
383
+ | 1000 | 0.99 | 0.590 | 0.99 | 0.800 | 0.95 | 2.2E-16 | | |
384
+ | √ | √ | 200 | 0.99 | 0.582 | 0.99 | 0.820 | 0.99 | 0.783 |
385
+ | 1000 | 0.99 | 0.544 | 0.99 | 5.12E-01 | 0.99 | 1.89E-06 | | |
386
+ | - | √ | 200 | 0.96 | 1.09E-05 | 0.96 | 0.008 | 0.98 | 0.037 |
387
+ | 1000 | 0.93 | 2.2E-16 | 0.99 | 1.93E-06 | 0.98 | 2.82E-11 | | |
388
+
389
+ other experiments as well. Moreover, we investigate whether the value of ∆iis normally distributed or close to normality Fleermann & Kirsch (2022), Ermakov & Ostrovskii (1986), Serfling (1968). The comparison of feature corrections is shown in Figure 1. For all datasets, we randomly pick two examples and evaluate their feature correlation matrices before and after shuffling. To visualize correlations, the correlation matrices with the first 50 features of the examples are shown. Darker colors denote higher correlations. It can be observed the shuffling operation can significantly reduce correlations between features with feature correlation matrices on the left in Sub-figures 1(a) and 1(b) much lighter than those on the right.
390
+
391
+ In Sub-figure 1(c), both matrices are light since the feature correlations for gene expression are relatively low.
392
+
393
+ The Shapiro-Wilk test is used to examine whether the value of ∆iis normal distribution or not. The results from the Shapiro-Wilk test are shown in Table 1 with the dimensionality of projected features k*multi* (see Section 3.1). On the AR dataset and the natural-image dataset with shuffling, the test suggusts that ∆iis normal distributed. As the dimensionality of the gene data is only 17,000, when k*multi* equal to 1,000, there are only 17 elements for the calculation of ∆i. Hence, in this experiment, we set the values of k*multi* to 200 and 1,000. In Table 1, with shuffling and normalization, the values of ∆i on all three datasets are normally distributed when k*multi* = 200, i.e. the p-value higher than 0.05 and the W value close to 1.
394
+
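The normality check itself is straightforward to reproduce. The sketch below runs scipy's implementation of the Shapiro-Wilk test on synthetic stand-in values of $\Delta_i$ (random numbers chosen for the example, not the datasets above) and returns the W statistic and p-value of the kind reported in Table 1:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
delta_vals = rng.standard_normal(200) * 0.1 + 1.0   # stand-in for the tested values of Delta_i

W, p_value = stats.shapiro(delta_vals)
print(W, p_value)   # p > 0.05 suggests no evidence against normality
```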
## 5 Experiments

In this section, we first demonstrate the speed improvements of the proposed kernel approximation and random projection method. In addition, we conduct a comparative analysis against state-of-the-art techniques to show that our method not only speeds up traditional approaches but also preserves comparable approximation quality, assessed on classification and regression tasks. The empirical evidence supports that our approach to kernel approximation allows the linear SVM to reach classification and regression performance on par with that of the non-linear SVM using the radial basis function (RBF) kernel. We implement our method and RKS, and both are trained with the same protocol. For Fastfood, we use the code provided on the scikit-learn-extra website².

² https://scikit-learn-extra.readthedocs.io/en/stable/index.html
## 5.1 The Actual Speed-Up

The real-world time efficiency of the proposed method is evaluated on synthesized datasets and on public datasets³ including the AR face image dataset, a natural image dataset, and a gene dataset. The AR face dataset (Martinez & Benavente, 1998) contains 3,276 images of 126 people, with a resolution of 576×768. Following Le et al. (2013) and Li et al. (2006a), each image is flattened into a vector of all its pixel values. For a grey image of the AR dataset, the dimensionality of its vector is 442,368. Face images in the AR dataset differ from general images because there is always a completely white background in the image. Therefore, a popular natural image dataset (Weber, 2018) is also used in our evaluation. This dataset contains images at three different resolutions. To compare fairly with the results on face images, only images at the 512×512 resolution are chosen in our experiments with this dataset. The images are first converted into gray-scale images, so the vectors obtained for the images are 262,144-dimensional. Finally, a biomedical breast cancer gene dataset, BC-TCGA (Xie et al., 2016), is used to evaluate our method on gene expression bio-sequences. This dataset contains 590 examples with 17,814 genes, and all 590 examples are used in the experiments.

³ Available at http://www2.ece.ohio-state.edu/aleix/ARdatabase.html/, http://sipi.usc.edu/database/ and https://data.mendeley.com/datasets/ respectively.

The proposed method is assessed on both random projection and kernel approximation in terms of computational efficiency. The runtime improvement of our method relative to vanilla random projection is shown in Table 2. We create synthesized datasets with various dimensionalities to evaluate the speed improvement of the proposed method. Here, the reduced dimensionality $k_{multi}$ is equal to k, which is set from 1,000 to 5,000. When k = 1,000, the real-world runtime of our method is 0.31, 0.05, and 0.079 seconds on the three public datasets, and 0.023, 0.031, and 0.1 seconds on the synthesized datasets. As the value of $k_{multi} = k$ increases, the running time of vanilla random projection increases quickly, because its complexity is O(kDn). In contrast, $k_{multi} = k$ is not an important factor in the running time of our method with O(Dn). Hence, the proposed method is faster than vanilla random projection, and the actual speed-up of our method is up to 1226 times.

Table 2: Speed-up of the proposed method for random projection. Our method speeds up traditional random projection from O(kDn) to O(Dn) when D is larger than the dimensionality after projection $k_{multi} = k$. The actual speed-up of our method is up to 1226 times, with D the dimensionality of the input data. k is a hyper-parameter of Algorithm 1.

| Datasets | D | k = 1,000 | k = 2,000 | k = 3,000 | k = 4,000 | k = 5,000 |
|---|---|---|---|---|---|---|
| Synth. dataset | D = 5,000 | 23.9x | 40.0x | 65.2x | 74.1x | 100.0x |
| Synth. dataset | D = 10,000 | 32.3x | 58.8x | 93.9x | 120.6x | 150.0x |
| Synth. dataset | D = 100,000 | 106.0x | 193.6x | 316.0x | 405.0x | 462.7x |
| AR dataset | D = 440,000 | 142.9x | 265.9x | 389.1x | 559.7x | 672.0x |
| Nat. dataset | D = 260,000 | 264.0x | 408.3x | 768.0x | 941.6x | 1226.6x |
| Gene dataset | D = 17,500 | 67.1x | 123.3x | 203.7x | 241.6x | 331.0x |

For kernel approximation, the proposed method is compared with two other state-of-the-art kernel approximation approaches, RKS (Rahimi & Recht, 2008) and Fastfood (Le et al., 2013). We vary the parameter $k_{multi} = k$ from 1,000 to 200,000 to assess the performance disparities. Both our method and Fastfood outperform RKS in speed (see Table 3, which details the speed-up factors of our method and Fastfood in comparison to RKS). Specifically, when k = 1,000, the real-world runtime of our method is respectively 4.6, 0.49, and 0.88 seconds on the three public datasets, and 0.062, 0.11, and 1.18 seconds on the synthesized datasets (with three different feature dimensionalities D). It is encouraging to see that, when $k_{multi} = k = 200{,}000$, the speed improvement of our method increases to up to 11,716 times compared to RKS, and it also achieves a 14.7 times speed advantage over Fastfood. These results show that our method is moderately more efficient than Fastfood and significantly more efficient than RKS.

Table 3: For kernel approximation, the proposed method and Fastfood are faster than RKS. The runtime improvements of the two approaches relative to RKS are listed.

| Datasets | D | Methods | k = 10^3 | k = 5×10^3 | k = 10^4 | k = 5×10^4 | k = 10^5 | k = 2×10^5 |
|---|---|---|---|---|---|---|---|---|
| Synthesized dataset | D = 5,000 | Ours | 8.9x | 45x | 44.6x | 48.3x | 48.8x | 52.8x |
| Synthesized dataset | D = 5,000 | Fastfood | 2.6x | 12x | 13.3x | 17.9x | 20.5x | 21.7x |
| Synthesized dataset | D = 10,000 | Ours | 9.5x | 50.6x | 95.5x | 109.3x | 109.4x | 109.4x |
| Synthesized dataset | D = 10,000 | Fastfood | 2.4x | 10x | 21.8x | 32.6x | 41.3x | 40.3x |
| Synthesized dataset | D = 100,000 | Ours | 9.2x | 50.6x | 95.2x | 602x | 1300x | 1139x |
| Synthesized dataset | D = 100,000 | Fastfood | 2.7x | 13.6x | 29.3x | 162.1x | 352.8x | 355.2x |
| AR dataset | D = 440,000 | Ours | 13.4x | 72.1x | 153.2x | 767.8x | 1585x | 3361x |
| AR dataset | D = 440,000 | Fastfood | 2.6x | 12.6x | 28.9x | 141.9x | 267.7x | 554.7x |
| Nat. dataset | D = 260,000 | Ours | 32.3x | 168.5x | 432.6x | 2800x | 4943x | 11716x |
| Nat. dataset | D = 260,000 | Fastfood | 3.2x | 16.4x | 32.7x | 175.7x | 345.6x | 794.5x |
| Gene dataset | D = 17,500 | Ours | 6.5x | 31.3x | 65.1x | 63.5x | 66.5x | 66.1x |
| Gene dataset | D = 17,500 | Fastfood | 0.6x | 3.0x | 6.1x | 15.7x | 16.1x | 16.0x |

Results in Table 2 and Table 3 demonstrate that our method can significantly improve the real-world time efficiency of random projection and kernel approximation. Calculating diag(AB) can be very expensive, in R or MATLAB for example, if the product matrix AB is computed first and the diagonal elements are then taken. In our implementation, it is much more efficient to compute diag(A*B) with sum(A.*B',2) in MATLAB or R.
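The same trick reads as follows in Python/NumPy (a small illustration of this design choice, with arbitrary matrix sizes):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 128))
B = rng.standard_normal((128, 64))

slow = np.diag(A @ B)                 # forms the full 64 x 64 product first
fast = np.einsum('ij,ji->i', A, B)    # row-of-A dot column-of-B, the analogue of sum(A.*B', 2)
print(np.allclose(slow, fast))        # True
```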
## 5.2 Approximation Quality

The approximation quality of the proposed method is very close to that of RKS, while our method significantly improves the runtime (see Section 5.1).

For random projection, approximation quality measures how well the pairwise Euclidean distances among the projected vectors approximate the corresponding distances between the original vectors. The averaged absolute difference between pairwise Euclidean distances before and after projection is used to quantify the approximation quality. These pairwise distances are computed over all example pairs, and the averaged absolute difference gives the error. For a dataset with n examples, the average absolute error over all $\binom{n}{2}$ pairs is

$$Err_{rp}=\frac{1}{\binom{n}{2}}\sum_{i=1}^{n-1}\sum_{j=i+1}^{n}\Big|\,\|\mathbf{u}_{i}-\mathbf{u}_{j}\|^{2}-\|\mathbf{v}_{i}-\mathbf{v}_{j}\|^{2}\,\Big|,\tag{8}$$

where $\mathbf{u}_i$, $\mathbf{u}_j$ are two data points and $\mathbf{v}_i$, $\mathbf{v}_j$ are their projected points. For kernel approximation, the quality is quantified using the average absolute error between the approximated kernel values, obtained as dot products with the proposed method, and the original kernel values over all pairs of data points. For a dataset with n examples, the average absolute error over all $\binom{n}{2}$ pairs is given by

$$Err_{rff}=\frac{1}{\binom{n}{2}}\sum_{i=1}^{n-1}\sum_{j=i+1}^{n}\big|\langle\mathbf{v}_{i},\mathbf{v}_{j}\rangle-K(\mathbf{u}_{i},\mathbf{u}_{j})\big|,\tag{9}$$

where $\langle\mathbf{v}_i,\mathbf{v}_j\rangle = \mathbf{v}_i\cdot\mathbf{v}_j$.
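Both error measures are simple to compute directly from their definitions. The sketch below is an illustrative implementation of Equations 8 and 9 (the Gaussian kernel, its bandwidth and the random stand-in data are assumptions made for the example):

```python
import numpy as np
from itertools import combinations

def err_rp(U, V):
    # Equation 8: mean absolute gap between squared distances before and after projection.
    pairs = list(combinations(range(len(U)), 2))
    gaps = [abs(np.sum((U[i] - U[j]) ** 2) - np.sum((V[i] - V[j]) ** 2)) for i, j in pairs]
    return np.mean(gaps)

def err_rff(U, V, gamma=0.1):
    # Equation 9: mean absolute gap between the feature dot product and the exact kernel.
    pairs = list(combinations(range(len(U)), 2))
    gaps = [abs(V[i] @ V[j] - np.exp(-gamma * np.sum((U[i] - U[j]) ** 2))) for i, j in pairs]
    return np.mean(gaps)

rng = np.random.default_rng(0)
U, V = rng.standard_normal((20, 100)), rng.standard_normal((20, 40))
print(err_rp(U, V), err_rff(U, V))
```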
For random projection, the left sub-figure of Figure 2 shows the approximation error (see Equation 8) against the dimensionality after projection. Compared with the vanilla method (red), which represents the state-of-the-art approximation quality, the error (y-axis) of our method (green) is very close on all datasets. This indicates that the approximation quality of the two methods is indistinguishable.

For kernel approximation, the right sub-figure of Figure 2 shows the error calculated using Equation 9 against the dimensionality of the Fourier features. On the AR dataset and the natural dataset, the error of the proposed method (blue) is on par with that of RKS (Rahimi & Recht, 2008) and Fastfood (Le et al., 2013), and the approximation quality of our method is close to that of RKS. On the gene dataset, the errors of our method and Fastfood are higher than that of RKS. This is due to the relatively low dimensionality of the gene dataset: Table 1 shows that $\Delta_i$ is not normally distributed when $k_{multi} = 1000$, which affects the approximation quality of our method. In Section 5.3, we further investigate the effect of these errors on SVMs for classification and regression.

![11_image_0.png](11_image_0.png)

Figure 2: Comparison of approximation error in random projection and kernel approximation. The error is obtained with Equation 8 and Equation 9. The error of the proposed method (y-axis) is close to that of previous methods in all cases. The approximation quality of our method is thus close to that of previous methods, while the proposed method can significantly speed them up.

Table 4: Comparison with four SVM variants. Experimental results show that the classification accuracies and regression performance of the linear SVM with our method are very close to those of the SVM with the RBF kernel.

| Method | Reduced dim. | ADULT (Accuracy) | CIFAR-10 (Accuracy) | CENSUS (RMSE) |
|---|---|---|---|---|
| Linear SVM (Ours) | 1000 | 59.4% | 75.9% | 3.1% |
| Linear SVM (Ours) | 2000 | 62.2% | 75.8% | 2.9% |
| Linear SVM (Ours) | 3000 | 64.5% | 76.3% | 2.8% |
| Linear SVM (RKS) | 1000 | 58.6% | 75.1% | 2.9% |
| Linear SVM (RKS) | 2000 | 62.0% | 76.2% | 2.8% |
| Linear SVM (RKS) | 3000 | 63.7% | 76.2% | 2.8% |
| Linear SVM (Fastfood) | 1000 | 59.1% | 75.1% | 3.1% |
| Linear SVM (Fastfood) | 2000 | 62.2% | 75.6% | 2.7% |
| Linear SVM (Fastfood) | 3000 | 64.3% | 75.7% | 2.8% |
| SVM with RBF | - | 64.7% | 76.3% | 1.1% |
## 5.3 Performance With The SVM

In this subsection, we further evaluate the actual performance of SVMs with the proposed method. We find that our approach not only significantly accelerates traditional methods but also achieves approximation quality comparable to conventional methods. The primary objective of kernel approximation is to enhance the efficiency of kernel method computations without compromising quality. To this end, we compare the non-linear SVM with the RBF kernel against the linear SVM using three different kernel approximation techniques: the proposed method, RKS (Rahimi & Recht, 2008), and Fastfood (Le et al., 2013). We follow Rahimi & Recht (2008) in applying SVMs to classification and regression on the adult dataset, the census dataset, and the CIFAR-10 dataset⁴, and the datasets are pre-processed with the same techniques as in Rahimi & Recht (2008).

⁴ Available at https://archive.ics.uci.edu/ml/datasets/Adult and http://www.cs.toronto.edu/~delve/data/census-house/desc.html

The classification accuracies and the root-mean-square errors (RMSE) of the different methods are given in Table 4. As shown in Table 4, the performance of the linear SVM with the proposed method is on par with that of the non-linear SVM with the RBF kernel. The approximation quality of our proposed method for classification is also found to be encouraging, while our method can significantly speed up the previous methods.
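The evaluation protocol can be sketched with scikit-learn as follows. This is an illustrative setup on synthetic data, not the exact pipeline or hyper-parameters behind Table 4; the RKS-style `RBFSampler` map is used as a placeholder for the feature map, and any approximation (including ours) could be plugged in at that point.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC, LinearSVC
from sklearn.kernel_approximation import RBFSampler

X, y = make_classification(n_samples=2000, n_features=100, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Non-linear baseline: SVM with the exact RBF kernel.
rbf_acc = SVC(kernel='rbf', gamma=0.01).fit(X_tr, y_tr).score(X_te, y_te)

# Linear SVM on approximate random Fourier features (RKS-style map shown here).
feature_map = RBFSampler(gamma=0.01, n_components=2000, random_state=0)
Z_tr, Z_te = feature_map.fit_transform(X_tr), feature_map.transform(X_te)
lin_acc = LinearSVC(dual=False).fit(Z_tr, y_tr).score(Z_te, y_te)

print(rbf_acc, lin_acc)
```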
## 6 Conclusion

In this work, a simple and efficient approach to sparse random projection and to the efficient computation of random Fourier features is proposed, with complexity O(max{k, D}n). The method does not rely on specialized libraries for sparse matrix computation or the fast WHT and can be easily implemented. In addition, no Gaussian assumption is made on the random entries of the projection matrix for random projections and random Fourier features with any shift-invariant kernel, and the bias, the variance and error bounds are provided. It is shown that the speed-up of our method is up to 10,000 times on real-world datasets compared to RKS and up to 15 times compared to Fastfood (Le et al., 2013).
## References

Dimitris Achlioptas. Database-friendly random projections: Johnson-Lindenstrauss with binary coins. Journal of Computer and System Sciences, 66(4):671–687, 2003a. ISSN 0022-0000. doi: 10.1016/S0022-0000(03)00025-4. URL http://www.sciencedirect.com/science/article/pii/S0022000003000254. Special Issue on PODS 2001.

Dimitris Achlioptas. Database-friendly random projections: Johnson-Lindenstrauss with binary coins. Journal of Computer and System Sciences, 66(4):671–687, 2003b.

Nir Ailon and Bernard Chazelle. Approximate nearest neighbors and the fast Johnson-Lindenstrauss transform. In Proceedings of the thirty-eighth annual ACM symposium on Theory of computing, pp. 557–563. ACM, 2006.

Anushka Anand, Leland Wilkinson, and Tuan Nhon Dang. Visual pattern discovery using random projections. In 2012 IEEE Conference on Visual Analytics Science and Technology (VAST), pp. 43–52. IEEE, 2012.

Ella Bingham and Heikki Mannila. Random projection in dimensionality reduction: applications to image and text data. In Proceedings of the seventh ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 245–250. ACM, 2001.

Christos Boutsidis and Alex Gittens. Improved matrix algorithms via the subsampled randomized Hadamard transform. CoRR, abs/1204.0062, 2012. URL http://arxiv.org/abs/1204.0062.

Steven Dalton, Luke Olson, and Nathan Bell. Optimizing sparse matrix-matrix multiplication for the GPU. ACM Transactions on Mathematical Software, 41(4), October 2015. ISSN 0098-3500. doi: 10.1145/2699470. URL https://doi.org/10.1145/2699470.

J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, 2009.

Petros Drineas and Michael W. Mahoney. On the Nyström method for approximating a Gram matrix for improved kernel-based learning. Journal of Machine Learning Research, 6(Dec):2153–2175, 2005.

S. V. Ermakov and E. I. Ostrovskii. The central limit theorem for weakly dependent Banach-valued variables. Theory of Probability & Its Applications, 30(2):391–394, 1986. doi: 10.1137/1130045. URL https://doi.org/10.1137/1130045.

Michael Fleermann and Werner Kirsch. The central limit theorem for weakly dependent random variables by the moment method, 2022.

Rong Jin, Tianbao Yang, Mehrdad Mahdavi, Yu-Feng Li, and Zhi-Hua Zhou. Improved bound for the Nyström's method and its application to kernel classification. arXiv preprint arXiv:1111.2262, 2011.

Rakshith Kunchum, Ankur Chaudhry, Aravind Sukumaran-Rajam, Qingpeng Niu, Israt Nisa, and P. Sadayappan. On improving performance of sparse matrix-matrix multiplication on GPUs. In Proceedings of the International Conference on Supercomputing, ICS '17, New York, NY, USA, 2017. Association for Computing Machinery. ISBN 9781450350204. doi: 10.1145/3079079.3079106. URL https://doi.org/10.1145/3079079.3079106.

Quoc Le, Tamás Sarlós, and Alex Smola. Fastfood: Approximating kernel expansions in loglinear time. In Proceedings of the 30th International Conference on Machine Learning, ICML'13, pp. III-244–III-252. JMLR.org, 2013.

Mu Li, Wei Bi, James T. Kwok, and Bao-Liang Lu. Large-scale Nyström kernel matrix approximation using randomized SVD. IEEE Transactions on Neural Networks and Learning Systems, 26(1):152–164, 2015.

Ping Li, Trevor J. Hastie, and Kenneth W. Church. Very sparse random projections. In Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 287–296. ACM, 2006a.

Ping Li, Trevor J. Hastie, and Kenneth W. Church. Improving random projections using marginal information. In International Conference on Computational Learning Theory, pp. 635–649. Springer, 2006b.

W. Liu and B. Vinter. An efficient GPU general sparse matrix-matrix multiplication for irregular data. In 2014 IEEE 28th International Parallel and Distributed Processing Symposium, pp. 370–381, 2014. doi: 10.1109/IPDPS.2014.47.

A. M. Martinez and R. Benavente. The AR face database. Technical Report 24, Computer Vision Center, Barcelona, Spain, 1998.

Saurabh Paul, Christos Boutsidis, Malik Magdon-Ismail, and Petros Drineas. Random projections for support vector machines. In Artificial Intelligence and Statistics, pp. 498–506, 2013.

Markus Püschel, José M. F. Moura, Bryan Singer, Jianxin Xiong, Jeremy Johnson, David Padua, Manuela Veloso, and Robert W. Johnson. SPIRAL: A generator for platform-adapted libraries of signal processing algorithms. International Journal of High Performance Computing Applications, 18(1):21–45, February 2004. ISSN 1094-3420. doi: 10.1177/1094342004041291. URL https://doi.org/10.1177/1094342004041291.

Ali Rahimi and Benjamin Recht. Random features for large-scale kernel machines. In Advances in Neural Information Processing Systems, pp. 1177–1184, 2008.

R. J. Serfling. Contributions to central limit theory for dependent variables. The Annals of Mathematical Statistics, 39(4):1158–1175, 1968. doi: 10.1214/aoms/1177698240. URL https://doi.org/10.1214/aoms/1177698240.

Dougal J. Sutherland and Jeff Schneider. On the error of random Fourier features. In Proceedings of the Thirty-First Conference on Uncertainty in Artificial Intelligence, UAI'15, pp. 862–871, Arlington, Virginia, United States, 2015. AUAI Press. ISBN 978-0-9966431-0-8. URL http://dl.acm.org/citation.cfm?id=3020847.3020936.

Santosh Srinivas Vempala. The Random Projection Method, volume 65 of DIMACS Series in Discrete Mathematics and Theoretical Computer Science. DIMACS/AMS, 2004. ISBN 0-8218-3793-1. URL http://dimacs.rutgers.edu/Volumes/Vol65.html.

Allan G. Weber. The USC-SIPI image database: Version 6. USC-SIPI Report, 2018. URL http://sipi.usc.edu/database/.

Christopher K. I. Williams and Matthias Seeger. Using the Nyström method to speed up kernel machines. In Advances in Neural Information Processing Systems, pp. 682–688, 2001.

Haozhe Xie, Jie Li, Qiaosheng Zhang, and Yadong Wang. Comparison among dimensionality reduction techniques based on random projection for cancer classification. Computational Biology and Chemistry, 65:165–172, 2016.

Xintian Yang, Srinivasan Parthasarathy, and P. Sadayappan. Fast sparse matrix-vector multiplication on GPUs: Implications for graph mining. Proceedings of the VLDB Endowment, 4(4):231–242, January 2011. ISSN 2150-8097. doi: 10.14778/1938545.1938548. URL https://doi.org/10.14778/1938545.1938548.

Y. Zhai, Y. Ong, and I. W. Tsang. The emerging "big dimensionality". IEEE Computational Intelligence Magazine, 9(3):14–26, 2014.

Kaihua Zhang, Lei Zhang, and Ming-Hsuan Yang. Fast compressive tracking. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(10):2002–2015, 2014.
596
+ ## A Appendix A.1 Analysis For Fast Random Features With Our Method
597
+
598
+ Theorem 4.1. *With* ∥∆∥
599
+ 2 2
600
+ , ∥
601
+ √k∆i∥
602
+ 2 2 ∼ N (µ∆2 , σ2∆2 ) following the normal distribution for any i*, the expectation and the variance for random Fourier features with our method are*
603
+
604
+ $$E_{\omega,\Delta}\Biggl{[}\frac{1}{k/2}\sum_{i=1}^{k/2}\cos(\omega_{i}^{T}(\sqrt{k}\Delta_{i}))\Biggr{]}=E_{\Delta}[K(\Delta)]$$ $$Var_{\omega,\Delta}\Biggl{[}\frac{1}{k/2}\sum_{i=1}^{k/2}\cos(\omega_{i}^{T}(\sqrt{k}\Delta_{i}))\Biggr{]}=\frac{1}{k/2}\Biggl{(}\frac{1}{2}+\frac{1}{2}E_{\Delta}[K(2\Delta)]+E_{\Delta}[K(\Delta)]^{2}\Biggr{)}$$
605
+
606
+ where the density function p(ωi) *is the Fourier transform of the kernel* K(δ).
607
+
608
+ Proof. One can obtain, for each pair of data points, µˆ and σˆ using cos(ω T ∆) (Sutherland & Schneider (2015))
609
+
610
+ for any given ∆:
611
+ $$E[\cos(\omega^{T}\Delta)]=K(\Delta)$$ $$Var[\cos(\omega^{T}\Delta)]=\frac{1}{2}+\frac{1}{2}K(2\Delta)-K(\Delta)^{2}$$
612
+
613
+ $$(12)$$
614
+ For the total expectation or the unconditional expectation, EY [EX[X|Y ]] = EX[X] for any two random variables X and Y . We have
615
+
616
+ $$E_{\omega,\Delta}[\cos(\omega^{T}\Delta)]=E_{\Delta}[E_{\omega}[\cos(\omega^{T}\Delta)|\Delta]]=E_{\Delta}[K(\Delta)]$$
617
+ T ∆)|∆]] = E∆[K(∆)] (12)
618
+ For the total variance with the law of total variance *V ar*Y (Y ) = EX(*V ar*Y (Y |X)) + *V ar*X(EY (Y |X)), it is the sum of the expected value of the conditional variance and the variance of the conditional means.
619
+
620
+ $$Var_{\omega,\Delta}[\cos(\omega^{T}\Delta)]=E_{\Delta}[Var_{\omega}[\cos(\omega^{T}\Delta)|\Delta]]+Var_{\Delta}[E_{\omega}[\cos(\omega^{T}\Delta)|\Delta]]$$
621
+
622
+ Using Equation 11 and $Var(X) = E[X^{2}] - (E[X])^{2}$,
623
+
624
+ $$\begin{array}{l}{{Var_{\omega,\Delta}[\cos(\omega^{T}\Delta)]}}\\ {{\quad=E_{\Delta}[\frac{1}{2}+\frac{1}{2}K(2\Delta)-K(\Delta)^{2}]+Var_{\Delta}[K(\Delta)]}}\\ {{\quad=\frac{1}{2}+\frac{1}{2}E_{\Delta}[K(2\Delta)]-(Var_{\Delta}[K(\Delta)]-E_{\Delta}[K(\Delta)]^{2})+Var_{\Delta}[K(\Delta)]}}\\ {{\quad=\frac{1}{2}+\frac{1}{2}E_{\Delta}[K(2\Delta)]+E_{\Delta}[K(\Delta)]^{2}}}\end{array}\tag{13}$$
625
+
626
+ As the expectation and the variance of the average of random variables $X_1, X_2, \ldots, X_n$ are $E[\bar{X}] = \mu_{X}$ and $Var[\bar{X}] = \frac{1}{n}\sigma_{X}^{2}$, using Equation 12, we have
627
+
628
+ $$E_{\omega,\Delta}\biggl[\frac{1}{k/2}\sum_{i=1}^{k/2}\cos(\omega_{i}^{T}\Delta_{i})\biggr]=E_{\omega,\Delta}[\cos(\omega^{T}\Delta)]=E_{\Delta}[K(\Delta)]$$
629
+
630
+ and, using Equation 13,
631
+
632
+ $$Var_{\omega,\Delta}\bigg[\frac{1}{k/2}\sum_{i=1}^{k/2}\cos(\omega_{i}^{T}\Delta_{i})\bigg]=\frac{1}{k/2}Var_{\omega,\Delta}[\cos(\omega^{T}\Delta)]$$
+
+ $$=\frac{1}{k/2}\biggl(\frac{1}{2}+\frac{1}{2}E_{\Delta}[K(2\Delta)]+E_{\Delta}[K(\Delta)]^{2}\biggr)$$
635
+
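+ The $1/(k/2)$ scaling of the variance can also be observed numerically. The sketch below is an illustration under the same Gaussian-kernel assumption as the check above, with an independent $\omega_i$ and $\Delta_i$ drawn for each term; all constants are illustrative.
+
+ ```python
+ # Illustrative check that the variance of the averaged random feature shrinks
+ # roughly like 1/(k/2); the Gaussian kernel and all sizes are arbitrary choices.
+ import numpy as np
+
+ rng = np.random.default_rng(1)
+ gamma, D, n_rep = 0.5, 8, 2_000
+
+ for k in (32, 128, 512):
+     omega = rng.normal(scale=np.sqrt(2 * gamma), size=(n_rep, k // 2, D))
+     Delta = 0.3 * rng.normal(size=(n_rep, k // 2, D))   # an independent Delta_i per term
+     est = np.cos(np.sum(omega * Delta, axis=2)).mean(axis=1)
+     print(k, est.var())                                 # decreases roughly as 1/(k/2)
+ ```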
636
+ Proposition 4.2. *The unbiased estimator for the Gaussian kernel approximation is* $\phi^{T}(x)\phi(y)\left[exp(-\gamma\mu_{\Delta^{2}})/exp(-\gamma\mu_{\Delta^{2}}+(\gamma\sigma_{\Delta^{2}})^{2}/2)\right]$ *with*
+
+ $$exp(-\gamma\mu_{\Delta^{2}})/exp(-\gamma\mu_{\Delta^{2}}+(\gamma\sigma_{\Delta^{2}})^{2}/2)\approx1$$
+
+ $$\text{if}\ \mu_{\Delta^{2}}/\sigma_{\Delta^{2}}\gg1.$$
642
+ Proof. The bias can be found with
643
+
644
+ $$E_{\omega,\Delta}[\cos(\omega^{T}\Delta)]=E_{\Delta}[K(\Delta)]$$
645
+ from Theorem 4.1 and
646
+ $$E_{X\sim{\mathcal{N}}}[exp(-\gamma X)]=\sum_{i}exp(-\gamma\|\Delta_{i}\|_{2}^{2}).$$
647
+
648
+ $exp(-\gamma\|\Delta_{i}\|_{2}^{2})$, following the log-normal distribution, can be approximated by the normal distribution when $\mu_{\Delta^2}/\sigma_{\Delta^2} \gg 1$, and the summation $\sum_{i} exp(-\gamma\|\Delta_{i}\|_{2}^{2})$ makes the sum of log-normal random variables follow the normal distribution more closely due to the central limit theorem. As the expectation of the summation $\sum_{i} exp(-\gamma\|\Delta_{i}\|_{2}^{2})$ is just the expectation of the log-normal distribution, we have
655
+
656
+ $$\sum_{i}exp(-\gamma\|\Delta_{i}\|_{2}^{2})=exp(-\gamma\mu_{\Delta^{2}}+(\gamma\sigma_{\Delta^{2}})^{2}/2).$$
657
+
658
+ The unbiased estimator for our kernel approximation becomes
+
+ $$\phi^{T}(x)\phi(y)\left[exp(-\gamma\mu_{\Delta^{2}})/exp(-\gamma\mu_{\Delta^{2}}+(\gamma\sigma_{\Delta^{2}})^{2}/2)\right].$$
+
+ $\square$
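+
+ As a small illustration of Proposition 4.2, the correction factor can be computed directly from estimates of $\mu_{\Delta^2}$ and $\sigma_{\Delta^2}$; the values below are hypothetical and only meant to show that the factor is close to 1 when $\mu_{\Delta^2}/\sigma_{\Delta^2} \gg 1$.
+
+ ```python
+ # Hypothetical statistics for ||sqrt(k) * Delta_i||_2^2, chosen so that mu/sigma >> 1.
+ import numpy as np
+
+ gamma, mu_d2, sigma_d2 = 0.5, 4.0, 0.2
+ factor = np.exp(-gamma * mu_d2) / np.exp(-gamma * mu_d2 + (gamma * sigma_d2) ** 2 / 2)
+ print(factor)   # close to 1, so the debiasing barely changes phi(x)^T phi(y)
+ ```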
664
+
665
+ Proposition 4.3. *For the Gaussian kernel $K(\Delta) = exp(-c\|\Delta\|_2^2)$ and the exponential kernel, using Theorem 4.1 with $\|\Delta\|_2^2, \|\sqrt{k}\Delta_i\|_2^2 \sim \mathcal{N}(\mu_{\Delta^2}, \sigma_{\Delta^2}^2)$ following the normal distribution for any $i$ and with the density function $p(\omega_i)$ being the Fourier transform of the kernel $K(\delta)$,*
+
+ $$E_{\omega,\Delta}\bigg[\frac{1}{k/2}\sum_{i=1}^{k/2}\cos(\omega_{i}^{T}(\sqrt{k}\Delta_{i}))\bigg]=exp(-\mu_{c\Delta^{2}}+\sigma_{c\Delta^{2}}^{2}/2),$$
+
+ $$Var_{\omega,\Delta}\left[\frac{1}{k/2}\sum_{i=1}^{k/2}\cos(\omega_{i}^{T}(\sqrt{k}\Delta_{i}))\right]=\frac{1}{k/2}\left(\frac{1}{2}+\frac{1}{2}exp(-2\mu_{c\Delta^{2}}+2\sigma_{c\Delta^{2}}^{2})+exp(-\mu_{c\Delta^{2}}+\sigma_{c\Delta^{2}}^{2}/2)^{2}\right).$$
676
+
677
+ *where both the expectation and the variance are functions of $\mu_{c\Delta^2}$ and $\sigma_{c\Delta^2}^2$.*
678
+
679
+ Proof. $E_{\Delta}[K(\Delta)]$ and $Var_{\Delta}[K(\Delta)]$ in Theorem 4.1 are the expectation and the variance of the log-normal distribution. With $\|\Delta\|_2^2 \sim \mathcal{N}$, $exp(-c\|\Delta_i\|_2^2)$ follows the log-normal distribution, giving us
683
+
684
+ $$E[exp(X)]=exp(\mu_{x}+\sigma_{x}^{2}/2),\tag{14}$$
+
+ and
+
+ $$Var[exp(X)]=[exp(\sigma_{x}^{2})-1]exp(2\mu_{x}+\sigma_{x}^{2})\tag{15}$$
691
+ for any normal random variable $X \sim \mathcal{N}(\mu_{x}, \sigma_{x}^{2})$. Thus, with Equation 12,
693
+
694
+ $$E_{\omega,\Delta}[\cos(\omega^{T}\Delta)]=E_{\Delta}[K(\Delta)]$$
695
+ $$=E_{\Delta}[exp(-c\|\Delta\|_{2}^{2})]=exp(-\mu_{c\Delta^{2}}+\sigma_{c\Delta^{2}}^{2}/2)$$
696
+
697
+ and the variance of the log-normal distribution is
698
+
699
+ $$Var_{\Delta}[exp(-c\|\Delta\|_{2}^{2})]=[exp(\sigma_{c\Delta^{2}}^{2})-1]exp(-2\mu_{c\Delta^{2}}+\sigma_{c\Delta^{2}}^{2}).$$
+
+ We have
+
+ $$Var_{\omega,\Delta}[\cos(\omega^{T}\Delta)]=\frac{1}{2}+\frac{1}{2}exp(-2\mu_{c\Delta^{2}}+2\sigma_{c\Delta^{2}}^{2})+exp(-\mu_{c\Delta^{2}}+\sigma_{c\Delta^{2}}^{2}/2)^{2}.$$
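+
+ The log-normal moment identities (14) and (15) used above are standard and easy to confirm numerically; the following sketch uses arbitrary illustrative values of $\mu_x$ and $\sigma_x$.
+
+ ```python
+ # Monte Carlo confirmation of Equations 14 and 15 for an arbitrary normal X.
+ import numpy as np
+
+ rng = np.random.default_rng(2)
+ mu_x, sigma_x = -1.0, 0.4
+ x = rng.normal(mu_x, sigma_x, size=1_000_000)
+ print(np.mean(np.exp(x)), np.exp(mu_x + sigma_x**2 / 2))                            # Equation 14
+ print(np.var(np.exp(x)), (np.exp(sigma_x**2) - 1) * np.exp(2 * mu_x + sigma_x**2))  # Equation 15
+ ```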
702
+
703
+ Proposition 4.4. *For the spherical kernel,*
704
+
705
+ $$K(\Delta)=1-\frac{3}{2}\frac{\|\Delta\|}{\theta}+\frac{1}{2}(\frac{\|\Delta\|}{\theta})^{3}$$
706
+
707
+ *if $\|\Delta\| < \theta$, and $0$ otherwise. With $\|\Delta\|_2^2, \|\sqrt{k}\Delta_i\|_2^2 \sim \mathcal{N}(\mu_{\Delta^2}, \sigma_{\Delta^2}^2)$ following the normal distribution for any $i$, the expectation and the variance for the kernel are respectively*
712
+
713
+ $$E_{\omega,\Delta}\bigg[\frac{1}{k/2}\sum_{i=1}^{k/2}\cos(\omega_{i}^{T}(\sqrt{k}\Delta_{i}))\bigg]=E_{\Delta}[K(\Delta)]=1-\frac{3\mu_{\Delta}}{2\theta}+\frac{\mu_{\Delta}^{3}+3\mu_{\Delta}\sigma_{\Delta}^{2}}{2\theta^{3}},$$
+
+ $$Var_{\omega,\Delta}\bigg[\frac{1}{k/2}\sum_{i=1}^{k/2}\cos(\omega_{i}^{T}(\sqrt{k}\Delta_{i}))\bigg]=\frac{1}{k/2}\bigg(1-\frac{9\mu_{\Delta}}{2\theta}+\frac{9\mu_{\Delta}^{2}}{4\theta^{2}}+\frac{3\mu_{\Delta}^{3}+9\mu_{\Delta}\sigma_{\Delta}^{2}}{\theta^{3}}-\frac{3\mu_{\Delta}^{4}+9\mu_{\Delta}^{2}\sigma_{\Delta}^{2}}{2\theta^{4}}+\frac{6\mu_{\Delta}^{4}\sigma_{\Delta}^{2}+\mu_{\Delta}^{6}+9\mu_{\Delta}^{2}\sigma_{\Delta}^{4}}{4\theta^{6}}\bigg)$$
+
+ *where the density function $p(\omega_i)$ is the Fourier transform of the kernel $K(\delta)$.*
714
+ Proof. To use Theorem 4.1, we need to obtain E[K(∆)] and E[K(2∆)].
715
+
716
+ $$E_{\Delta}[K(\Delta)]=1-\frac{3E[\|\Delta\|]}{2\theta}+\frac{E[\|\Delta\|^{3}]}{2\theta^{3}}\tag{16}$$
+
+ $$E_{\Delta}[K(2\Delta)]=1-\frac{3E[\|\Delta\|]}{\theta}+\frac{4E[\|\Delta\|^{3}]}{\theta^{3}}\tag{17}$$
717
+
718
+ We let $\mu_{\Delta} = E[\|\Delta\|]$ and $\sigma_{\Delta} = \sqrt{Var[\|\Delta\|]}$. The third non-central moment of a Gaussian is
+
+ $$E[\|\Delta\|^{3}]=\mu_{\Delta}^{3}+3\mu_{\Delta}\sigma_{\Delta}^{2}\tag{18}$$
728
+ where $\mu_{\Delta}=\int_{\theta}^{\infty}x f_{N}(x)dx=\int_{\theta}^{\infty}x f_{TN}(x)dx\times\int_{\theta}^{\infty}f_{N}(x)dx$ and $\sigma_{\Delta}^{2}=\int_{\theta}^{\infty}(x-\mu_{\Delta})^{2}f_{N}(x)dx=\int_{\theta}^{\infty}(x-\mu_{\Delta})^{2}f_{TN}(x)dx\times\int_{\theta}^{\infty}f_{N}(x)dx$. $f_{N}(.)$ is the density function of the normal distribution and $f_{TN}(.)$ is the density function of the truncated normal distribution.
740
+
741
+ $$E_{\omega,\Delta}[\cos(\omega^{T}\Delta)]=E_{\Delta}[K(\Delta)]=1-\frac{3\mu_{\Delta}}{2\theta}+\frac{\mu_{\Delta}^{3}+3\mu_{\Delta}\sigma_{\Delta}^{2}}{2\theta^{3}}$$
742
+ $$Var_{\omega,\Delta}[\cos(\omega^{T}\Delta)]=\frac{1}{2}+\frac{1}{2}E_{\Delta}[K(2\Delta)]+E_{\Delta}[K(\Delta)]^{2}\tag{19}$$
743
+
744
+ With a little bit of algebra using Equations 16 and 18,
745
+
746
+ $$E_{\Delta}[K(\Delta)]^{2}=1-\frac{3\mu_{\Delta}}{\theta}+\frac{9\mu_{\Delta}^{2}}{4\theta^{2}}+\frac{\mu_{\Delta}^{3}+3\mu_{\Delta}\sigma_{\Delta}^{2}}{\theta^{3}}-\frac{3\mu_{\Delta}^{4}+9\mu_{\Delta}^{2}\sigma_{\Delta}^{2}}{2\theta^{4}}+\frac{6\mu_{\Delta}^{4}\sigma_{\Delta}^{2}+\mu_{\Delta}^{6}+9\mu_{\Delta}^{2}\sigma_{\Delta}^{4}}{4\theta^{6}}$$
747
+
748
+ And, with Equations 17 and 19,
749
+
750
+ $$Var_{\omega,\Delta}[\cos(\omega^{T}\Delta)]=1-\frac{9\mu_{\Delta}}{2\theta}+\frac{9\mu_{\Delta}^{2}}{4\theta^{2}}+\frac{3\mu_{\Delta}^{3}+9\mu_{\Delta}\sigma_{\Delta}^{2}}{\theta^{3}}-\frac{3\mu_{\Delta}^{4}+9\mu_{\Delta}^{2}\sigma_{\Delta}^{2}}{2\theta^{4}}+\frac{6\mu_{\Delta}^{4}\sigma_{\Delta}^{2}+\mu_{\Delta}^{6}+9\mu_{\Delta}^{2}\sigma_{\Delta}^{4}}{4\theta^{6}}$$
751
+
752
+ ## A.2 Analysis For The Proposed Fast Random Projection
753
+
754
+ Lemma 4.5. *With $\mathbf{C} \in \mathbb{R}^{k\times(D/k)}$, a random matrix with $k \times (D/k) = D$ elements, and each element following the normal distribution $\mathcal{N}(0, 1)$, where $D$ is the original dimensionality and $k$ is the dimensionality after projection, the expectation of $\|\mathbf{v}\|^2$ is $E\{\sum_{i}^{k}[diag(\mathbf{C}\mathbf{U})]_{i}^{2}\} = \|\mathbf{u}\|^2$, and, for each element of $\mathbf{v}$,*
+
+ $$\frac{(\mathbf{v})_{j}}{\sqrt{\sum_{l}(\mathbf{U})_{l,j}^{2}}}\sim{\mathcal{N}}(0,1).$$
771
+
772
+ Proof.
773
+
774
+ $\mathbb{E}\{\sum_{i}^{k}[diag(\mathbf{C}\mathbf{U})]_{i}^{2}\}$ $=\mathbb{E}\{\sum_{i}^{k}[\sum_{j=1}^{D}(\mathbf{C})_{i,j}(\mathbf{U})_{j,i}]^{2}\}$ $=\mathbb{E}\{\sum_{i}^{k}\sum_{j,j^{\prime}}(\mathbf{C})_{i,j}(\mathbf{C})_{i,j^{\prime}}(\mathbf{U})_{j,i}(\mathbf{U})_{j^{\prime},i}\}$ $=\mathbb{E}\{\sum_{i}^{k}\sum_{j}(\mathbf{C})_{i,j}^{2}(\mathbf{U})_{j,i}^{2}\}$ $=\sum_{i}\sum_{j}(\mathbf{U})_{j,i}^{2}=\|\mathbf{u}\|^{2}$
775
+ $\square$
776
+ As there are only $D$ non-zero elements in $\mathbf{C}$ and $E\{\sum_{i}^{k}[diag(\mathbf{C}\mathbf{U})]_{i}^{2}\} = \|\mathbf{u}\|^{2}$, we have normally distributed
780
+
781
+ $$\frac{(\mathbf{v})_{i}}{\sqrt{\sum_{l}(\mathbf{U})_{l,i}^{2}}}\sim{\mathcal{N}}(0,1)$$
782
+
783
+ after the projection $\mathbf{v} = diag(\mathbf{C}\mathbf{U})$ instead of the traditional random projection $\frac{1}{\sqrt{k}}\mathbf{R}\mathbf{u}$ with
786
+
787
+ $${\frac{(\mathbf{v})_{i}}{\sqrt{\|\mathbf{u}\|^{2}/k}}}\sim{\mathcal{N}}(0,1).$$
788
+ $\square$
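+
+ To make the construction concrete, the following sketch shows one way to compute $\mathbf{v} = diag(\mathbf{C}\mathbf{U})$ in $O(D)$ time. It is our illustration rather than the released implementation, and it assumes $D$ is divisible by $k$ and that column $i$ of $\mathbf{U}$ holds the $i$-th chunk of $\mathbf{u}$.
+
+ ```python
+ # Sketch (not the authors' released code) of the O(D) projection v = diag(C U),
+ # assuming D is divisible by k and column i of U holds the i-th chunk of u.
+ import numpy as np
+
+ def fast_projection(u: np.ndarray, C: np.ndarray) -> np.ndarray:
+     """Project u (length D) down to k dimensions with the k x (D/k) matrix C."""
+     k, chunk = C.shape
+     U = u.reshape(k, chunk).T              # (D/k) x k, column i = i-th chunk of u
+     return np.einsum('ij,ji->i', C, U)     # i-th entry = row i of C dot column i of U
+
+ rng = np.random.default_rng(3)
+ D, k = 1024, 64
+ u = rng.normal(size=D)
+ C = rng.normal(size=(k, D // k))           # only D Gaussian entries in total
+ v = fast_projection(u, C)
+ print(v.shape, np.sum(v**2), np.sum(u**2)) # E[||v||^2] = ||u||^2 by Lemma 4.5
+ ```
+
+ Because only $D$ random entries are touched, the cost per vector is $O(\max\{k, D\})$ rather than the $O(kD)$ of a dense Gaussian projection.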
789
+ Lemma 4.6. *With probability $1 - 2e^{-(\epsilon^{2}-\epsilon^{3})k/4}$,*
+
+ $$(1-\epsilon)(m/M)\|\mathbf{u}_{1}-\mathbf{u}_{2}\|^{2}\leq\|\mathbf{v}_{1}-\mathbf{v}_{2}\|^{2}\leq(1+\epsilon)(M/m)\|\mathbf{u}_{1}-\mathbf{u}_{2}\|^{2}$$
+
+ *where $\|\mathbf{u}_{1}\|^{2} = \sum_{l}\sum_{m}(\mathbf{U}_{1})_{l,m}^{2}$ and*
795
+ $$m=\operatorname*{min}\{\sqrt{\sum_{l}(\mathbf{U})_{l,1}^{2}},\sqrt{\sum_{l}(\mathbf{U})_{l,2}^{2}},\ldots,\sqrt{\sum_{l}(\mathbf{U})_{l,k}^{2}}\}$$ $$M=\operatorname*{max}\{\sqrt{\sum_{l}(\mathbf{U})_{l,1}^{2}},\sqrt{\sum_{l}(\mathbf{U})_{l,2}^{2}},\ldots,\sqrt{\sum_{l}(\mathbf{U})_{l,k}^{2}}\}$$
796
+ Proof. The main difference from the proof of the JL lemma (Vempala, 2004) is that, now in our formulation, we have a generalized chi-square distribution for $(\mathbf{v})_i$ with
797
+
798
+ $$\sum_{i}^{k}\left(\frac{(\mathbf{v})_{i}}{\sqrt{\sum_{l}(\mathbf{U})_{l,i}^{2}}}\right)^{2}$$
799
+
800
+ instead of the $\chi^{2}$-distribution $\frac{\|\mathbf{v}_{1}\|^{2}}{\|\mathbf{u}_{1}\|^{2}/k}\sim\chi_{k}^{2}$ of the JL lemma, because the denominator now depends on $i$.
803
+
804
+ Let us consider the following two inequalities for the $i$-th term of $\|diag(\mathbf{C}\mathbf{U})\|$:
805
+
806
+ $$(m/M)k\sum_{l}(\mathbf{U})^{2}_{l,i}\leq\|\mathbf{u}\|^{2}\leq(M/m)k\sum_{l}(\mathbf{U})^{2}_{l,i}\tag{20}$$
807
+ $$\frac{(\mathbf{v})_{i}^{2}}{(m/M)\|\mathbf{u}\|^{2}}\leq\frac{(\mathbf{v})_{i}^{2}}{k\sum_{l}(\mathbf{U})_{l,i}^{2}}\leq\frac{(\mathbf{v})_{i}^{2}}{(M/m)\|\mathbf{u}\|^{2}}\tag{21}$$
808
+
809
+ With Inequality 21, one can obtain
810
+
811
+ $$\begin{array}{l}{{Pr(\|diag({\bf C}{\bf U})\|^{2}>(1+\epsilon)(M/m)\|{\bf u}\|^{2})\leq Pr(\chi_{k}^{2}>(1+\epsilon)k)}}\\ {{Pr(\|diag({\bf C}{\bf U})\|^{2}<(1-\epsilon)(m/M)\|{\bf u}\|^{2})\leq Pr(\chi_{k}^{2}<(1-\epsilon)k)}}\end{array}$$
812
+
813
+ It is shown in (Vempala, 2004) that
814
+
815
+ $$Pr(\chi_{k}^{2}>(1+\epsilon)k)=Pr(\chi_{k}^{2}<(1-\epsilon)k)=e^{-(\epsilon^{2}-\epsilon^{3})k/4}$$
+
+ With the union bound, the probability that Inequality 4.6 is satisfied is
+
+ $$1-2e^{-(\epsilon^{2}-\epsilon^{3})k/4}$$
816
+ Theorem 4.7. *The expectation and the variance for fast random projection with our method are*
+
+ $$E_{\omega,\Delta}[\sum_{i}(\omega_{i}^{T}\Delta_{i})^{2}]=\mu_{\Delta^{2}},\quad\text{and}\quad Var_{\omega,\Delta}[\sum_{i}(\omega_{i}^{T}\Delta_{i})^{2}]=(2\mu_{\Delta^{2}}+\sigma_{\Delta^{2}}^{2})/k$$
+
+ *where $\|\Delta_{i}\|_{2}^{2} \sim \mathcal{N}(\mu_{\Delta^{2}}, \sigma_{\Delta^{2}}^{2})$ and $\omega_{i} \sim \mathcal{N}(0, 1)$.*
819
+
820
+ Proof. From Li et al. (2006a) for fixed ∆, we have
821
+
822
+ $$E_{\omega}[\sum_{i}(\omega_{i}^{T}\Delta)^{2}]=\|\Delta\|_{2}^{2}$$
823
+ Thus, again with the law of total expectation $E_{Y}[E_{X}[X|Y]] = E_{X}[X]$,
+
+ $$E_{\omega,\Delta}\Big[\sum_{i}(\omega_{i}^{T}\Delta_{i})^{2}\Big]=E_{\Delta}\Big[E_{\omega}\Big[\sum_{i}(\omega_{i}^{T}\Delta_{i})^{2}\Big|\Delta_{i}\Big]\Big]=\mu_{\Delta^{2}}$$
+
+ Using the law of total variance $Var_{Y}(Y) = E_{X}(Var_{Y}(Y|X)) + Var_{X}(E_{Y}(Y|X))$, we have
+
+ $$\begin{array}{l}{{Var_{\omega,\Delta}\Big[\sum_{i}(\omega_{i}^{T}\Delta_{i})^{2}\Big]=E_{\Delta}\Big[Var_{\omega}\Big[\sum_{i=1}^{k}(\omega_{i}^{T}\Delta_{i})^{2}\Big|\Delta_{i}\Big]\Big]+Var_{\Delta}\Big[E_{\omega}\Big[\sum_{i=1}^{k}(\omega_{i}^{T}\Delta_{i})^{2}\Big|\Delta_{i}\Big]\Big]}}\\ {{\quad=E_{\Delta}\Big[Var_{\omega}\Big[\sum_{i=1}^{k}(\omega_{i}^{T}\Delta_{i})^{2}\Big|\Delta_{i}\Big]\Big]+Var_{\Delta}\Big[\sum_{i=1}^{k}E_{\omega_{i}}[(\omega_{i}^{T}\Delta_{i})^{2}|\Delta_{i}]\Big]}}\\ {{\quad=E\Big[\frac{2}{k}\|\Delta_{i}\|_{2}^{2}\Big]+Var\Big[\sum_{i=1}^{k}\|\Delta_{i}\|_{2}^{2}\Big]}}\\ {{\quad=\frac{2\mu_{\Delta^{2}}}{k}+\frac{\sigma_{\Delta^{2}}^{2}}{k}}}\end{array}$$
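+
+ A chunk-wise analogue of the building-block identity used at the start of this proof, namely $E_{\omega}[\sum_i(\omega_i^T\Delta_i)^2] = \sum_i\|\Delta_i\|^2$ for fixed chunks $\Delta_i$ and $\omega_i$ with i.i.d. standard normal entries, can be checked in a few lines; the sketch below uses illustrative sizes and leaves out the paper's normalization of $\mu_{\Delta^2}$.
+
+ ```python
+ # Check that E_omega[sum_i (omega_i^T Delta_i)^2] = sum_i ||Delta_i||^2 for fixed
+ # chunks Delta_i and omega_i with i.i.d. N(0, 1) entries; sizes are illustrative.
+ import numpy as np
+
+ rng = np.random.default_rng(4)
+ k, d, n_mc = 8, 32, 20_000                        # k chunks of length d
+ Delta = 0.1 * rng.normal(size=(k, d))             # fixed chunks
+ omega = rng.normal(size=(n_mc, k, d))             # fresh omega_i per Monte Carlo draw
+ proj = np.einsum('nkd,kd->nk', omega, Delta)      # omega_i^T Delta_i for each draw
+ print(np.mean(np.sum(proj**2, axis=1)), np.sum(Delta**2))   # the two should match
+ ```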
Mf4muWBo1U/Mf4muWBo1U_meta.json ADDED
@@ -0,0 +1,25 @@
1
+ {
2
+ "languages": null,
3
+ "filetype": "pdf",
4
+ "toc": [],
5
+ "pages": 19,
6
+ "ocr_stats": {
7
+ "ocr_pages": 0,
8
+ "ocr_failed": 0,
9
+ "ocr_success": 0,
10
+ "ocr_engine": "none"
11
+ },
12
+ "block_stats": {
13
+ "header_footer": 19,
14
+ "code": 2,
15
+ "table": 4,
16
+ "equations": {
17
+ "successful_ocr": 86,
18
+ "unsuccessful_ocr": 6,
19
+ "equations": 92
20
+ }
21
+ },
22
+ "postprocess_stats": {
23
+ "edit": {}
24
+ }
25
+ }