|
# Noncommutative C*-Algebra Net: Learning Neural Networks With Powerful Product Structure In C*-Algebra
|
|
|
Anonymous authors Paper under double-blind review |
|
|
|
## Abstract |
|
|
|
We propose a new generalization of neural networks with noncommutative C*-algebras. An important feature of C*-algebras is their noncommutative product structure, but the existing C*-algebra net frameworks have considered only commutative C*-algebras. We show that this noncommutative structure of C*-algebras induces powerful effects in learning neural networks. Our framework has a wide range of applications, such as learning multiple related neural networks simultaneously with interactions and learning features that are invariant with respect to group actions. Numerical experiments illustrate the validity of our framework and its potential power.
|
|
|
## 1 Introduction |
|
|
|
Generalization of the parameter space of neural networks beyond real numbers brings intriguing possibilities. For instance, using complex numbers (Hirose, 1992; Nishikawa et al., 2005; Amin et al., 2008; Yadav et al., 2005; Trabelsi et al., 2018; Lee et al., 2022) or quaternions (Nitta, 1995; Arena et al., 1997; Zhu et al., 2018; Gaudet & Maida, 2018) as neural network parameters is more intuitive and effective, particularly in signal processing, computer vision, and robotics domains. Clifford algebra, the generalization of these number systems, allows more flexible geometrical data processing and has been applied to neural networks to handle rich geometric relationships in data (Pearson & Bisset, 1994; Buchholz, 2005; Buchholz & Sommer, 2008; Rivera-Rovelo et al., 2010; Zang et al., 2022; Brandstetter et al., 2022; Ruhe et al., 2023b;a). Different from these approaches focusing on the geometric perspective of parameter values, an alternative direction of generalization is to use function-valued parameters (Rossi & Conan-Guez, 2005; Thind et al., 2023), broadening the applications of neural networks to functional data. Hashimoto et al. (2022) proposed the C*-algebra net, which generalizes neural network parameters to a (commutative) C*-algebra, a generalization of the complex numbers (Murphy, 1990; Hashimoto et al., 2021). They adopted continuous functions on a compact space as a commutative C*-algebra and presented a new interpretation of function-valued neural networks: infinitely many real-valued or complex-valued neural networks are continuously combined into a single C*-algebra net. For example, networks for the same task with different training datasets or different initial parameters can be combined continuously, which enables efficient learning using information shared among the infinitely many combined networks. Such interaction among networks is similar to learning from related tasks, as in ensemble learning (Dong et al., 2020; Ganaie et al., 2022) and multitask learning (Zhang & Yang, 2022). However, because the product structure of the C*-algebra that Hashimoto et al. (2022) focus on is commutative, such networks cannot take advantage of the rich product structures of C*-algebras and, instead, require specially designed loss functions to induce the necessary interaction.
|
|
|
To fully exploit rich product structures, we propose a new generalization of neural networks with noncommutative C*-algebras. Typical examples of C*-algebras include the space of diagonal matrices, which is commutative with respect to the matrix product, and the space of square matrices, which is noncommutative.

Specifically, in the case of diagonal matrices, the product is simply the multiplication of each diagonal element independently, so the algebra is commutative. On the other hand, each entry of the product of two square nondiagonal matrices is a sum of products between different elements, and each resulting diagonal element depends on the other diagonal elements through the nondiagonal elements, which can be interpreted as interaction among the diagonal elements.
|
|
|
Such product structures derived from noncommutativity are powerful when used for neural network parameters. Keeping the C*-algebra of matrices as an example, a neural network with nondiagonal-matrix parameters can naturally induce interactions among multiple neural networks with real- or complex-valued parameters by regarding each diagonal element as a parameter of such a network. Because the interactions are encoded in the parameters, specially designed loss functions to induce interactions are unnecessary for such noncommutative C*-algebra nets. This property clearly contrasts the proposed framework with the existing commutative C*-algebra nets. Another example is a neural network with group C*-algebra parameters, which is naturally group-equivariant without designing special network architectures.
|
|
|
Our main contributions in this paper are summarized as follows: |
|
- We generalize the commutative C*-algebra net proposed by Hashimoto et al. (2022) to noncommutative C*-algebras, which can take advantage of the noncommutative product structure in the C*-algebra when learning neural networks.
|
|
|
- We present two examples of the general noncommutative C*-algebra net: the C*-algebra net over matrices and the group C*-algebra net. A C*-algebra net over matrices can naturally combine multiple standard neural networks with interactions. Neural networks with group C*-algebra parameters are naturally group-equivariant without modifying the network architecture.
|
|
|
- Numerical experiments illustrate the validity of these noncommutative C*-algebra nets, including the interactions among neural networks.
|
|
|
We emphasize that C*-algebra is a powerful tool for neural networks, and our work provides important new perspectives on its application.
|
|
|
## 2 Background |
|
|
|
In this section, we review the mathematical background of C*-algebras required for this paper and the existing C*-algebra net. For more theoretical details of C*-algebras, see, for example, Murphy (1990).
|
|
|
## 2.1 C*-Algebra
|
|
|
C*-algebra is a generalization of the space of complex numbers. It has structures of product, involution (·)*, and norm.

**Definition 1 (C*-algebra)** A set A is called a C*-algebra if it satisfies the following conditions:

1. A is an algebra over C and is equipped with a bijection (·)*: A → A that satisfies the following conditions for α, β ∈ C and c, d ∈ A:
   - $(\alpha c + \beta d)^* = \bar{\alpha}c^* + \bar{\beta}d^*$,
   - (cd)* = d*c*,
   - (c*)* = c.

2. A is a normed space with norm ∥·∥, and for c, d ∈ A, ∥cd∥ ≤ ∥c∥∥d∥ holds. In addition, A is complete with respect to ∥·∥.

3. For c ∈ A, ∥c*c∥ = ∥c∥² holds (the C*-property).

The product structure in a C*-algebra can be either commutative or noncommutative.
|
|
|
**Example 1 (Commutative C*-algebra)** Let A be the space of continuous functions on a compact Hausdorff space Z. We can regard A as a C*-algebra by setting

- Product: pointwise product of two functions, i.e., for a1, a2 ∈ A, (a1a2)(z) = a1(z)a2(z).

- Involution: pointwise complex conjugate, i.e., for a ∈ A, $a^*(z) = \overline{a(z)}$.

- Norm: sup norm, i.e., for a ∈ A, ∥a∥ = sup_{z∈Z} |a(z)|.

In this case, the product in A is commutative.
|
**Example 2 (Noncommutative C*-algebra)** Let A be the space of bounded linear operators on a Hilbert space H, which is denoted by B(H). We can regard A as a C*-algebra by setting

- Product: composition of two operators,

- Involution: adjoint of an operator,

- Norm: operator norm, i.e., for a ∈ A, ∥a∥ = sup_{v∈H, ∥v∥_H=1} ∥av∥_H.

Here, ∥·∥_H is the norm in H. In this case, the product in A is noncommutative. Note that if H is a d-dimensional space for a finite natural number d, then the elements of A are d by d matrices.
|
**Example 3 (Group C*-algebra)** The group C*-algebra on a group G, which is denoted as C*(G), is the set of maps from G to C equipped with the following product, involution, and norm:

- Product: $(a \cdot b)(g) = \int_{G} a(h)\, b(h^{-1}g)\, \mathrm{d}\lambda(h)$ for g ∈ G,

- Involution: $a^*(g) = \Delta(g^{-1})\, \overline{a(g^{-1})}$ for g ∈ G,

- Norm: $\|a\| = \sup_{[\pi]\in\hat{G}} \|\pi(a)\|$,

where Δ(g) is a positive number satisfying λ(Eg) = Δ(g)λ(E) for the Haar measure λ on G (the modular function), and Ĝ is the set of equivalence classes of irreducible unitary representations of G. Note that if G is discrete, then λ is the counting measure on G. In this paper, we focus mainly on the product structure of C*(G). For details of the Haar measure and representations of groups, see Kirillov (1976). If G = Z/pZ, then C*(G) is C*-isomorphic to the C*-algebra of circulant matrices (Hashimoto et al., 2023). Note also that if G is noncommutative, then C*(G) can also be noncommutative.
|
|
|
## 2.2 C*-Algebra Net
|
|
|
Hashimoto et al. (2022) proposed generalizing real-valued neural network parameters to commutative C*-algebra-valued ones, which enables us to represent multiple real-valued models as a single C*-algebra net.

Here, we briefly review the existing (commutative) C*-algebra net. Let A = C(Z), the commutative C*-algebra of continuous functions on a compact Hausdorff space Z. Let H be the depth of the network and N_0, ..., N_H be the widths of the layers. For i = 1, ..., H, let W_i: A^{N_{i−1}} → A^{N_i} be an affine transformation defined by an N_i × N_{i−1} A-valued matrix and an A-valued bias vector in A^{N_i}. In addition, let σ_i: A^{N_i} → A^{N_i} be a nonlinear activation function. The commutative C*-algebra net f: A^{N_0} → A^{N_H} is defined as

$$f=\sigma_{H}\circ W_{H}\circ\cdots\circ\sigma_{1}\circ W_{1}. \qquad (1)$$

By generalizing neural network parameters to functions, we can combine multiple standard (real-valued) neural networks continuously, which enables them to learn efficiently. In this paper, each real-valued network in a C*-algebra net is called a sub-model. We show an example of commutative C*-algebra nets below. To simplify the notation, we focus on the case where the network does not have biases; however, the same arguments are valid when the network has biases.
|
|
|
## 2.2.1 The Case Of Diagonal Matrices |
|
|
|
If Z is a finite set, then A = {a ∈ C^{d×d} | a is a diagonal matrix}. The C*-algebra net f on A corresponds to d separate real- or complex-valued sub-models. In the case of A = C(Z), we can consider that infinitely many networks are continuously combined, and the C*-algebra net f with diagonal matrices is a discretization of the C*-algebra net over C(Z). Indeed, denote by x^j the vector composed of the jth diagonal elements of x ∈ A^N, i.e., the vector in C^N whose kth element is the jth diagonal element of the A-valued kth element of x. Assume the activation function σ_i: A^N → A^N is defined as σ_i(x)^j = σ̃_i(x^j) for some σ̃_i: C^N → C^N. Since the jth diagonal element of a1a2 for a1, a2 ∈ A is the product of the jth diagonal elements of a1 and a2, we have

$$f(x)^{j}=\tilde{\sigma}_{H}\circ W_{H}^{j}\circ\cdots\circ\tilde{\sigma}_{1}\circ W_{1}^{j}, \qquad (2)$$

where W_i^j ∈ C^{N_i×N_{i−1}} is the matrix whose (k, l)-entry is the jth diagonal element of the (k, l)-entry of W_i ∈ A^{N_i×N_{i−1}}. Figure 1 (a) schematically shows the C*-algebra net over diagonal matrices.

![3_image_1.png](3_image_1.png)

![3_image_0.png](3_image_0.png)

Figure 1: Difference between commutative and noncommutative C*-algebra nets from the perspective of interactions among sub-models. (a) C*-algebra net over diagonal matrices (commutative): the blue parts illustrate the zero (nondiagonal) parts of the diagonal matrices; each diagonal element corresponds to an individual sub-model. (b) C*-algebra net over nondiagonal matrices (noncommutative): unlike the case of diagonal matrices, the nondiagonal parts (colored in orange) are not zero; the nondiagonal elements induce interactions among multiple sub-models.
|
|
|
## 3 Noncommutative C*-Algebra Net
|
|
|
Although the existing C*-algebra net provides a framework for applying C*-algebras to neural networks, it focuses on commutative C*-algebras, whose product structure is simple. Therefore, we generalize the existing commutative C*-algebra net to noncommutative C*-algebras. Since the product structures in noncommutative C*-algebras are more complicated than those in commutative C*-algebras, they enable neural networks to learn features of data more efficiently. For example, if we focus on the C*-algebra of matrices, then the neural network parameters describe interactions between multiple real-valued sub-models (see Section 3.1.1).

Let A be a general C*-algebra and consider the network f in the same form as Equation (1). We emphasize that in our framework, the choice of A is not restricted to a commutative C*-algebra. We list examples of A and their utility for learning neural networks below.
|
|
|
## 3.1 Examples Of C*-Algebras For Neural Networks
|
|
|
By presenting several examples of C*-algebras, we show that the C*-algebra net is a flexible and unified framework that incorporates C*-algebras into neural networks. As mentioned in the previous section, we focus on the case where the network does not have biases for simplicity in this subsection.
|
|
|
## 3.1.1 Nondiagonal Matrices |
|
|
|
Let A = C^{d×d}. Note that A is a noncommutative C*-algebra. It is of course possible to consider matrix-valued data, but here we focus on the perspective of interaction among sub-models, following Section 2.2. In this case, unlike in the network (2), the jth diagonal element of a1a2a3 for a1, a2, a3 ∈ A depends not only on the jth diagonal element of a2 but also on the other diagonal elements of a2. Thus, f(x)^j depends not only on the sub-model corresponding to the jth diagonal element discussed in Section 2.2.1, but also on the other sub-models. The nondiagonal elements in A induce interactions between the d real- or complex-valued sub-models.

Interaction among sub-models is related to decentralized peer-to-peer machine learning (Vanhaesebrouck et al., 2017; Bellet et al., 2018), where each agent learns without sharing data with others while improving its ability by leveraging other agents' information through communication. In our case, an agent corresponds to a sub-model, and communication is achieved by the interaction. We examine the effect of the interaction induced by the nondiagonal elements of A numerically in Section 4.1. Figure 1 (b) schematically shows the C*-algebra net over nondiagonal matrices.
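The dependence described above can be checked directly. The following NumPy sketch (our illustration, not code from the paper) shows that for nondiagonal a1, a2, a3 the (j, j) entry of a1a2a3 is affected by the other diagonal entries of a2, whereas in the diagonal (commutative) case of Section 2.2.1 it is not.

```python
import numpy as np

rng = np.random.default_rng(0)
d, j = 4, 0

a1, a2, a3 = rng.standard_normal((3, d, d))      # nondiagonal elements of A = C^{d x d}

# Perturb a diagonal element of a2 other than the jth one.
a2_pert = a2.copy()
a2_pert[1, 1] += 1.0

prod = a1 @ a2 @ a3
prod_pert = a1 @ a2_pert @ a3
print(prod[j, j] == prod_pert[j, j])             # False: the (j, j) entry changes

# With diagonal matrices (the commutative case of Section 2.2.1), the same
# perturbation leaves the (j, j) entry of the product untouched.
d1, d2, d3 = (np.diag(np.diag(a)) for a in (a1, a2, a3))
d2_pert = d2.copy()
d2_pert[1, 1] += 1.0
print((d1 @ d2 @ d3)[j, j] == (d1 @ d2_pert @ d3)[j, j])   # True
```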
|
|
|
## 3.1.2 Block Diagonal Matrices |
|
|
|
Let A = {a ∈ C^{d×d} | a = diag(a_1, ..., a_m), a_i ∈ C^{d_i×d_i}}. The product of two block diagonal matrices a = diag(a_1, ..., a_m) and b = diag(b_1, ..., b_m) can be written as

$$ab=\mathrm{diag}(a_{1}b_{1},\ldots,a_{m}b_{m}).$$

In a similar manner to Section 2.2.1, we denote by x^j the N by d_j matrix composed of the jth diagonal blocks of x ∈ A^N. Assume the activation function σ_i: A^N → A^N is defined as σ_i(x) = diag(σ̂_i^1(x^1), ..., σ̂_i^m(x^m)) for some σ̂_i^j: C^{N×d_j} → C^{N×d_j}. Then, we have

$$f(x)^{j}={\hat{\sigma}}_{H}^{j}\circ W_{H}^{j}\circ\cdots\circ{\hat{\sigma}}_{1}^{j}\circ W_{1}^{j}, \qquad (3)$$

where W_i^j ∈ (C^{d_j×d_j})^{N_i×N_{i−1}} is the block matrix whose (k, l)-entry is the jth diagonal block of the (k, l)-entry of W_i ∈ A^{N_i×N_{i−1}}. In this case, we have m groups of sub-models, each of which is composed of d_j interacting sub-models as described in Section 3.1.1. Indeed, the block diagonal case generalizes the diagonal and nondiagonal cases stated in Sections 2.2.1 and 3.1.1. If d_j = 1 for all j = 1, ..., m, then the network (3) reduces to the network (2) with diagonal matrices. If m = 1 and d_1 = d, then the network (3) reduces to the network with d by d nondiagonal matrices.
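A quick NumPy check of the block-wise product above (our sketch; the block sizes are arbitrary) illustrates why interactions stay confined to each block.

```python
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(0)
sizes = [2, 3, 1]                      # block sizes d_1, ..., d_m

# Two block diagonal elements a = diag(a_1, ..., a_m), b = diag(b_1, ..., b_m).
a_blocks = [rng.standard_normal((s, s)) for s in sizes]
b_blocks = [rng.standard_normal((s, s)) for s in sizes]
a, b = block_diag(*a_blocks), block_diag(*b_blocks)

# The product acts block-wise: ab = diag(a_1 b_1, ..., a_m b_m), so sub-models
# interact only within their own block.
assert np.allclose(a @ b, block_diag(*[ai @ bi for ai, bi in zip(a_blocks, b_blocks)]))
```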
|
|
|
## 3.1.3 Circulant Matrices |
|
|
|
Let A = {a ∈ C^{d×d} | a is a circulant matrix}. Here, a circulant matrix a is a matrix of the form

$$a=\begin{bmatrix}a_{1}&a_{d}&\cdots&a_{2}\\ a_{2}&a_{1}&\cdots&a_{3}\\ \vdots&\vdots&\ddots&\vdots\\ a_{d}&a_{d-1}&\cdots&a_{1}\end{bmatrix}$$

for a_1, ..., a_d ∈ C. Note that in this case, A is commutative. Circulant matrices are diagonalized by the discrete Fourier transform matrix as follows (Davis, 1979). We denote by F the discrete Fourier transform matrix, whose (i, j)-entry is ω^{(i−1)(j−1)}/√d, where ω = e^{2π√−1/d}.

**Lemma 1** Any circulant matrix a is decomposed as a = FΛ_aF*, where

$$\Lambda_{a}=\mathrm{diag}\left(\sum_{i=1}^{d}a_{i}\omega^{(i-1)\cdot0},\ldots,\sum_{i=1}^{d}a_{i}\omega^{(i-1)(d-1)}\right).$$

Since ab = FΛ_aΛ_bF* for a, b ∈ A, the product of a and b corresponds to the multiplication of each Fourier component of a and b. Assume the activation function σ_i: A^N → A^N is defined such that (F*σ_i(x)F)^j equals $\hat{\hat{\sigma}}_i((F^*xF)^j)$ for some $\hat{\hat{\sigma}}_i: \mathbb{C}^N \to \mathbb{C}^N$. Then, we obtain the network

$$(F^{*}f(x)F)^{j}=\hat{\hat{\sigma}}_{H}\circ\hat{W}_{H}^{j}\circ\cdots\circ\hat{\hat{\sigma}}_{1}\circ\hat{W}_{1}^{j}, \qquad (4)$$

where Ŵ_i^j ∈ C^{N_i×N_{i−1}} is the matrix whose (k, l)-entry is (F*w_{i,k,l}F)^j, with w_{i,k,l} the (k, l)-entry of W_i ∈ A^{N_i×N_{i−1}}. The jth sub-model of the network (4) corresponds to the network of the jth Fourier component.

**Remark 1** The jth sub-model of the network (4) does not interact with the sub-models of Fourier components other than the jth one. This corresponds to the fact that A is commutative in this case. Analogously to Section 3.1.1, if we set A to be noncirculant matrices, then we obtain interactions between sub-models corresponding to different Fourier components.
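The diagonalization behind Lemma 1 and the resulting component-wise product can be checked numerically. The following NumPy sketch (ours, not the authors' code) verifies that conjugating a circulant matrix by the DFT matrix yields a diagonal matrix and that products of circulant matrices act independently on each Fourier component.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
omega = np.exp(2j * np.pi / d)

def circulant(c):
    """Circulant matrix whose first column is c, as in the display above."""
    return np.stack([np.roll(c, k) for k in range(d)], axis=1)

# DFT matrix F with (i, j)-entry omega^{(i-1)(j-1)} / sqrt(d); F is unitary.
idx = np.arange(d)
F = omega ** np.outer(idx, idx) / np.sqrt(d)

a = circulant(rng.standard_normal(d))
b = circulant(rng.standard_normal(d))

# Conjugation by F diagonalizes circulant matrices; the diagonal collects the
# Fourier components of a.
Lam_a = F.conj().T @ a @ F
Lam_b = F.conj().T @ b @ F
assert np.allclose(Lam_a, np.diag(np.diag(Lam_a)))

# The product of two circulant matrices multiplies Fourier components independently,
# so each Fourier component behaves like an independent (commutative) sub-model.
assert np.allclose(F.conj().T @ (a @ b) @ F, Lam_a @ Lam_b)
```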
|
|
|
## 3.1.4 Group C*-Algebra On A Symmetric Group
|
|
|
Let G be the symmetric group on the set {1, ..., d} and let A = C*(G). Note that since G is noncommutative, C*(G) is also noncommutative. The output f(x) ∈ A^{N_H} is then a C^{N_H}-valued map on G. Using the product structure introduced in Example 3, we can construct a network that takes permutations of the data into account. Indeed, an element w ∈ A of a weight matrix W ∈ A^{N_{i−1}×N_i} is a function on G, so w(g) describes the weight corresponding to the permutation g ∈ G. Since the product of x ∈ C*(G) and w is defined as wx(g) = Σ_{h∈G} w(h)x(h⁻¹g), applying W lets all the weights corresponding to the permutations act on the input. For example, let z ∈ R^d and set x ∈ C*(G) as x(g) = g·z, where g·z is the action of g on z, i.e., the permutation of z with respect to g. Then, we can input all the patterns of permutations of z simultaneously, and by virtue of the product structure in C*(G), the network is learned with interaction among these permutations. Regarding the output, if the network is learned so that the outputs y become constant functions on G, i.e., y(g) = c for some constant c, then c is invariant with respect to g, i.e., invariant with respect to permutations. We numerically investigate the application of the group C*-algebra net to permutation-invariant problems in Section 4.2.
|
|
|
**Remark 2** If the activation function σ is defined as σ(x)(g) = σ(x(g)), i.e., applied elementwise to x, then the network f is permutation equivariant. That is, if the input x(g) is replaced by x(gh) for some h ∈ G, then the output f(x)(g) is replaced by f(x)(gh). This is because the product in C*(G) is defined as a convolution. This feature of the convolution has been studied for group equivariant neural networks (Lenssen et al., 2018; Cohen et al., 2019; Sannai et al., 2021; Sonoda et al., 2022). The above setting of the C*-algebra net provides a design of group equivariant networks from the perspective of C*-algebra.

**Remark 3** Since the number of elements of G is d!, elements of C*(G), which are functions on G, are represented as d!-dimensional vectors. For large d, we need a method for efficient computation, which is future work.
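As a concrete sketch (our illustration; a function on G is stored as a length-d! vector, matching Remark 3), the following NumPy code implements the convolution product of C*(G) for the symmetric group S_3 and confirms that, unlike the pointwise product of Example 1, it is noncommutative.

```python
import numpy as np
from itertools import permutations

d = 3
G = list(permutations(range(d)))          # symmetric group S_d, |G| = d! = 6
index = {g: i for i, g in enumerate(G)}

def compose(g, h):
    """Composition of permutations: (g o h)(i) = g(h(i))."""
    return tuple(g[h[i]] for i in range(d))

def inverse(h):
    inv = [0] * d
    for i, hi in enumerate(h):
        inv[hi] = i
    return tuple(inv)

def conv(a, b):
    """Product in C*(G): (a . b)(g) = sum_h a(h) b(h^{-1} g)."""
    out = np.zeros(len(G))
    for gi, g in enumerate(G):
        for h in G:
            out[gi] += a[index[h]] * b[index[compose(inverse(h), g)]]
    return out

rng = np.random.default_rng(0)
a, b = rng.standard_normal((2, len(G)))    # elements of C*(G) as length-d! vectors

# S_d is noncommutative for d >= 3, and so is the convolution product.
print(np.allclose(conv(a, b), conv(b, a)))   # False in general
```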
|
|
|
## 3.1.5 Bounded Linear Operators On A Hilbert Space |
|
|
|
For functional data, we can also set A as an infinite-dimensional space. Using infinite-dimensional C*-algebras for analyzing functional data has been proposed (Hashimoto et al., 2021), and we can adopt this idea for neural networks as well. Let A = B(L²(Ω)) for a measure space Ω. Set A_0 = {a ∈ A | a is a multiplication operator}. Here, a multiplication operator a is a linear operator defined as av = v·u for some u ∈ L^∞(Ω). The space A_0 is a generalization of the space of diagonal matrices to the infinite-dimensional setting. If we restrict the elements of the weight matrices to A_0, then we obtain infinitely many sub-models without interactions. Since the outputs are in A_0^{N_H}, we can obtain functional data as outputs. Similar to the case of matrices (see Section 3.1.1), by setting elements of the weight matrices as elements of A, we can take advantage of interactions among infinitely many sub-models.
|
|
|
## 3.2 Approximation Of Functions With Interactions By C*-Algebra Net
|
|
|
We now observe what kind of functions the C*-algebra net can approximate. In this subsection, we show the universality of C*-algebra nets, which describes the representation power of the model. We focus on the case A = C^{d×d}. Consider a shallow network f: A^{N_0} → A defined as f(x) = W_2^* σ(W_1 x + b), where W_1 ∈ A^{N_1×N_0}, W_2 ∈ A^{N_1}, and b ∈ A^{N_1}. Let f̃: A^{N_0} → A be a function of the form f̃(x) = [Σ_{j=1}^d f_{kj}(x^l)]_{kl}, where f_{kj}: C^{N_0 d} → R. Here, we abuse the notation and denote by x^l ∈ C^{N_0 d} the lth column of x regarded as an N_0 d by d matrix. Assume f_{kj} is represented as

$$f_{kj}(x)=\int_{\mathbb{R}}\int_{\mathbb{R}^{N_{0}d}}T_{kj}(w,b)\,\sigma(w^{*}x+b)\,\mathrm{d}w\,\mathrm{d}b \qquad (5)$$

for some T_{kj}: R^{N_0 d} × R → R. By the theory of the ridgelet transform, such a T_{kj} exists in most realistic settings (Candès, 1999; Sonoda & Murata, 2017). For example, Sonoda & Murata (2017) showed the following result.
|
|
|
**Proposition 1** Let S be the space of rapidly decreasing functions on R and S_0' be the dual space of the Lizorkin distribution space on R. Assume a function f̃ has the form f̃(x) = [Σ_{j=1}^d f_{kj}(x^l)]_{kl}, and f_{kj} and f̂_{kj} are in L¹(R^{N_0 d}), where f̂ denotes the Fourier transform of a function f. Assume in addition that σ is in S_0' and that there exists ρ ∈ S such that ∫_R σ̂(x)ρ̂(x)/|x| dx is nonzero and finite. Then, there exists T_{kj}: R^{N_0 d} × R → R such that f_{kj} admits a representation of the form of Equation (5).
|
|
|
Here, the Lizorkin distribution space is defined as S_0 = {ρ ∈ S | ∫_R x^k ρ(x) dx = 0 for all k ∈ N}. Note that the ReLU is in S_0'. We discretize Equation (5) by replacing the Lebesgue measures with Σ_{i=1}^{N_1} δ_{w_{ij}} and Σ_{i=1}^{N_1} δ_{b_{ij}}, where δ_w is the Dirac measure centered at w. Then, the (k, l)-entry of f̃(x) is written as

$$\sum_{j=1}^{d}\sum_{i=1}^{N_{1}}T_{kj}(w_{ij},b_{ij})\,\sigma(w_{ij}^{*}x^{l}+b_{ij}).$$

Setting the ith element of W_2 ∈ A^{N_1} as [T_{kj}(w_{ij}, b_{ij})]_{kj}, the (i, m)-entry of W_1 ∈ A^{N_1×N_0} as [(w_{ij})_{md+l}]_{jl}, and the ith element of b ∈ A^{N_1} as [b_{ij}]_{jl}, we obtain

$$\sum_{j=1}^{d}\sum_{i=1}^{N_{1}}T_{kj}(w_{ij},b_{ij})\,\sigma(w_{ij}^{*}x^{l}+b_{ij})=(W_{2}^{k})^{*}\sigma(W_{1}x^{l}+b^{l}),$$

which is the (k, l)-entry of f(x). As a result, any function of the form f̃(x) = [Σ_{j=1}^d f_{kj}(x^l)]_{kl} satisfying the assumptions of Proposition 1 can be represented as a C*-algebra net.
|
|
|
**Remark 4** As discussed in Sections 2.2.1 and 3.1.1, a C*-algebra net over matrices can be regarded as d interacting sub-models. The above argument shows that the lth columns of f(x) and f̃(x) depend only on x^l. Thus, in this case, if we input data x^l corresponding to the lth sub-model, then the output is obtained as the lth column of the A-valued output f(x). On the other hand, the weight matrices W_1 and W_2 and the bias b are used in common when producing the outputs for all sub-models, i.e., W_1, W_2, and b are learned using data corresponding to all the sub-models. Therefore, W_1, W_2, and b induce interactions among the sub-models.
|
|
|
## 4 Experiments |
|
|
|
In this section, we numerically demonstrate the abilities of noncommutative C*-algebra nets, using nondiagonal C*-algebra nets over matrices to study interaction and group C*-algebra nets to study their equivariance property. We use C*-algebra-valued multi-layer perceptrons (MLPs) to simplify the experiments; however, they can be naturally extended to other neural networks, such as convolutional neural networks.

The models were implemented with JAX (Bradbury et al., 2018). Experiments were conducted on an AMD EPYC 7543 CPU and an NVIDIA A100 GPU. See Appendix A.1 for additional information on the experiments.
|
|
|
## 4.1 C*-Algebra Nets Over Matrices
|
|
|
In a noncommutative C*-algebra net over matrices consisting of nondiagonal-matrix parameters, each sub-model is expected to interact with the others and thus improve performance compared with its commutative counterpart consisting of diagonal matrices. We demonstrate the effectiveness of such interaction on image classification and neural implicit representation (NIR) tasks, in a setting similar to peer-to-peer learning in which the data are separated across sub-models.
|
|
|
See Section 3.1.1 for the notation. When training the jth sub-model (j = 1, 2, ..., d), an original N_0-dimensional input data point x = [x_1, ..., x_{N_0}] ∈ R^{N_0} is converted to its corresponding representation x ∈ A^{N_0} = R^{N_0×d×d} such that x_{i,j,j} = x_i for i = 1, 2, ..., N_0 and 0 otherwise. The loss between the N_H-dimensional output of a C*-algebra net y ∈ A^{N_H} and the target t ∈ A^{N_H} is computed as

$$\ell(y_{:,j,j},\,t_{:,j,j})+\frac{1}{2}\sum_{k,\,l\neq j}\left(y_{k,j,l}^{2}+y_{k,l,j}^{2}\right),$$

where ℓ is a certain loss function; we use the mean squared error (MSE) for image classification and the Huber loss for NIR. The second and third terms suppress the nondiagonal elements of the outputs towards 0. In both examples, we use leaky ReLU as the activation function and apply it only to the diagonal elements of the pre-activations.
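A minimal NumPy sketch of this objective (our paraphrase, not the authors' implementation; here ℓ is instantiated as the MSE):

```python
import numpy as np

def mse(u, v):
    return np.mean((u - v) ** 2)

def submodel_loss(y, t, j, base_loss=mse):
    """Training loss for the j-th sub-model, sketching the objective above.

    y, t: (N_H, d, d) arrays, the A-valued network output and target.
    The first term fits the j-th diagonal; the remaining terms push the
    off-diagonal entries in row/column j of the output towards zero.
    """
    d = y.shape[-1]
    off = [l for l in range(d) if l != j]
    penalty = 0.5 * (np.sum(y[:, j, off] ** 2) + np.sum(y[:, off, j] ** 2))
    return base_loss(y[:, j, j], t[:, j, j]) + penalty

# Example: random output/target for d = 4 sub-models and N_H = 10 output units.
rng = np.random.default_rng(0)
y, t = rng.standard_normal((2, 10, 4, 4))
print(submodel_loss(y, t, j=0))
```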
|
|
|
## 4.1.1 Image Classification |
|
|
|
We conduct experiments on image classification tasks using MNIST (Le Cun et al., 1998), Kuzushiji-MNIST (Clanuwat et al., 2018), and Fashion-MNIST (Xiao et al., 2017), each composed of 10-class 28 × 28 gray-scale images. Each sub-model is trained on a mutually exclusive subset sampled from the original training data and then evaluated on the entire test data. Each subset is sampled to be class-balanced, i.e., each class has the same number of training samples. As a baseline, we use a commutative C*-algebra net over diagonal matrices, which consists of the same sub-models but without interaction among them.

Both the noncommutative and commutative models share hyperparameters: the number of layers was set to 4, the hidden size was set to 128, and the models were trained for 30 epochs. Table 1 shows the average test accuracy, reported in two ways: the first averages the accuracy across individual sub-models ("Average"), and the second ensembles the logits of the sub-models and then computes the accuracy ("Ensemble"). As can be seen, the noncommutative C*-algebra net consistently outperforms its commutative counterpart, and the gap is significant particularly when the number of sub-models is 40. Note that when the number of sub-models is 40, the training dataset of each sub-model is 40 times smaller than the original one, and thus, the commutative C*-algebra net fails to learn.
|
|
|
Nevertheless, the noncommutative C*-algebra net mostly retains its performance. The commutative C*-algebra net improves performance by ensembling, but it remains inferior to both the Average and Ensemble results of the noncommutative C*-algebra net when the number of sub-models is larger than five. Such a performance improvement could be attributed to the fact that noncommutative models have more trainable parameters than commutative ones. Thus, we additionally compare a commutative C*-algebra net with a width of 128 and a noncommutative C*-algebra net with a width of 8, which have the same number of learnable parameters, when the total number of training data points is set to 5000, ten times smaller than in the experiments of Table 1.
|
|
|
As seen in Table 2, while the commutative C*-algebra net mostly fails to learn, the noncommutative C*-algebra net learns successfully. These results suggest that the performance of the noncommutative C*-algebra net cannot be explained solely by the number of learnable parameters: the interaction among sub-models provides an essential capability.
|
|
|
Furthermore, Table 3 illustrates that related tasks help improve performance through interaction. Specifically, we prepare five sub-models per dataset for MNIST, Kuzushiji-MNIST, and Fashion-MNIST, and train a single (non)commutative C*-algebra net consisting of 15 sub-models simultaneously. In addition to the commutative C*-algebra net, where sub-models have no interaction, and the noncommutative C*-algebra net, where each sub-model can interact with any other sub-model, we use a block-diagonal noncommutative C*-algebra net (see Section 3.1.2), where each sub-model can only interact with the other sub-models trained on the same dataset. Table 3 shows that the fully noncommutative C*-algebra net surpasses the block-diagonal one on Kuzushiji-MNIST and Fashion-MNIST, implying that not only intra-task but also inter-task interaction helps performance. Note that these results are not directly comparable with the values in Tables 1 and 2, because the datasets are subsampled to balance class sizes (matching MNIST's smallest class).
|
|
|
## 4.1.2 Neural Implicit Representation |
|
|
|
In the next experiment, we use a C*-algebra net over matrices to learn implicit representations of 2D images that map each pixel coordinate to its RGB colors (Sitzmann et al., 2020; Xie et al., 2022). Specifically, an input coordinate in [0, 1]² is transformed into a random Fourier feature in [−1, 1]^320 and then converted into its C*-algebraic representation over matrices as the input to a C*-algebra net over matrices. Similar to the image classification task, we compare noncommutative NIRs with commutative NIRs, using the following hyperparameters: the number of layers is 6 and the hidden dimension is 256. These NIRs learn 128 × 128-pixel images of ukiyo-e pictures from The Metropolitan Museum of Art¹ and photographs of cats from the AFHQ dataset (Choi et al., 2020).
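A minimal sketch of such a coordinate encoding is given below (ours; the Gaussian frequency distribution and its scale are assumptions, as the paper does not specify them here).

```python
import numpy as np

rng = np.random.default_rng(0)
n_freq, scale = 160, 10.0            # 2 * 160 = 320 features; the scale is our assumption

# Random frequency matrix B, fixed once before training.
B = rng.normal(0.0, scale, size=(n_freq, 2))

def fourier_features(coords):
    """Map pixel coordinates in [0, 1]^2 to random Fourier features in [-1, 1]^320."""
    proj = 2.0 * np.pi * coords @ B.T          # (num_pixels, n_freq)
    return np.concatenate([np.sin(proj), np.cos(proj)], axis=-1)

coords = np.stack(np.meshgrid(np.linspace(0, 1, 128), np.linspace(0, 1, 128)),
                  axis=-1).reshape(-1, 2)      # 128 x 128 pixel grid
features = fourier_features(coords)            # shape (16384, 320), values in [-1, 1]
print(features.shape, features.min() >= -1.0, features.max() <= 1.0)
```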
|
|
|
Figure 2 (top) shows the curves of the average PSNR (peak signal-to-noise ratio) of the sub-models corresponding to the image below. Both the commutative and noncommutative C*-algebra nets consist of five sub-models trained on five ukiyo-e pictures (see also Figure 6). The PSNR of the noncommutative NIR grows faster, and correspondingly, it learns the details of the ground-truth images faster than its commutative counterpart (Figure 2, bottom). Noticeably, the noncommutative representations reproduce colors even at an early stage of learning, whereas the commutative ones remain monochrome after 500 iterations of training.

Along with the similar trends observed in the pictures of cats (Figure 3), these results further emphasize the effectiveness of the interaction. Longer-term results are presented in Figure 7. This NIR for 2D images can be extended to represent 3D models. Figure 4 shows synthesized views of 3D implicit representations using the same C*-algebra MLPs trained on three 3D chairs from the ShapeNet dataset (Chang et al., 2015). The presented poses are unseen during training. Again, the noncommutative NIR reconstructs the chair models with fewer noisy artifacts, indicating that interaction helps efficient learning. See Appendices A.1 and A.2 for details and results.

¹ https://www.metmuseum.org/art/the-collection

Table 1: Average test accuracy of commutative and noncommutative C*-algebra nets over matrices on the test datasets. "Average" reports the average accuracy of the sub-models, and "Ensemble" ensembles the logits of the sub-models to compute accuracy. The interactions between sub-models in the noncommutative C*-algebra net improve performance significantly when the number of sub-models is 40.

| Dataset | # sub-models | Commutative (Average) | Commutative (Ensemble) | Noncommutative (Average) | Noncommutative (Ensemble) |
|---|---|---|---|---|---|
| MNIST | 5 | 0.963 ± 0.003 | 0.970 ± 0.001 | 0.970 ± 0.002 | 0.976 ± 0.001 |
| | 10 | 0.937 ± 0.004 | 0.950 ± 0.000 | 0.956 ± 0.002 | 0.969 ± 0.000 |
| | 20 | 0.898 ± 0.007 | 0.914 ± 0.002 | 0.937 ± 0.002 | 0.957 ± 0.001 |
| | 40 | 0.605 ± 0.007 | 0.795 ± 0.010 | 0.906 ± 0.004 | 0.938 ± 0.001 |
| Kuzushiji-MNIST | 5 | 0.837 ± 0.003 | 0.871 ± 0.001 | 0.859 ± 0.003 | 0.888 ± 0.002 |
| | 10 | 0.766 ± 0.008 | 0.793 ± 0.004 | 0.815 ± 0.007 | 0.859 ± 0.002 |
| | 20 | 0.674 ± 0.011 | 0.710 ± 0.001 | 0.758 ± 0.007 | 0.817 ± 0.001 |
| | 40 | 0.453 ± 0.026 | 0.532 ± 0.004 | 0.682 ± 0.008 | 0.767 ± 0.001 |
| Fashion-MNIST | 5 | 0.862 ± 0.001 | 0.873 ± 0.001 | 0.868 ± 0.002 | 0.882 ± 0.001 |
| | 10 | 0.839 ± 0.003 | 0.850 ± 0.001 | 0.852 ± 0.004 | 0.871 ± 0.001 |
| | 20 | 0.790 ± 0.010 | 0.796 ± 0.002 | 0.832 ± 0.005 | 0.858 ± 0.000 |
| | 40 | 0.650 ± 0.018 | 0.674 ± 0.001 | 0.810 ± 0.005 | 0.841 ± 0.000 |

Table 2: Average test accuracy of commutative and noncommutative C*-algebra nets over matrices trained on 5000 data points with 20 sub-models. The width of the noncommutative model is set to 8 so that the number of learnable parameters matches that of its commutative counterpart.

| Dataset | Commutative | Noncommutative |
|---|---|---|
| MNIST | 0.155 ± 0.04 | 0.779 ± 0.02 |
| Kuzushiji-MNIST | 0.140 ± 0.03 | 0.486 ± 0.02 |
| Fashion-MNIST | 0.308 ± 0.05 | 0.673 ± 0.02 |
|
|
|
## 4.2 Group C*-Algebra Nets
|
|
|
As another experimental example of C*-algebra nets, we showcase group C*-algebra nets, which we introduced in Section 3.1.4. A group C*-algebra net takes functions on a symmetric group as input and returns functions on the group as output.
|
|
|
Refer to Section 3.1.4 for the notation. A group C*-algebra net is trained on data {(x, y) ∈ A^{N_0} × A^{N_H}}, where x and y are N_0- and N_H-dimensional vector-valued functions. Practically, such functions may be represented as tensors, e.g., x ∈ C^{N_0×#G}, where #G is the size of G. Using the product between functions explained in Section 3.1.4 and element-wise addition, a linear layer, and consequently an MLP, on A can be constructed. Following the C*-algebra nets over matrices, we use leaky ReLU for the activations.
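To make the input encoding concrete, the following sketch (ours; the particular flattening into an (N_0, #G) array is an assumption) builds x(g) = g·z for all g ∈ G from per-digit feature vectors z, as used in the sum-of-digits experiment below.

```python
import numpy as np
from itertools import permutations

d, n_feat = 3, 32                       # number of digits, feature dimension per digit
G = list(permutations(range(d)))        # symmetric group S_d, #G = d! = 6

rng = np.random.default_rng(0)
z = rng.standard_normal((d, n_feat))    # one feature vector per digit (e.g., CNN features)

# Input to the group C*-algebra net: x(g) = g . z, i.e., the features permuted by g,
# flattened per group element and stored as an (N_0, #G) array with N_0 = d * n_feat.
x = np.stack([z[list(g)].reshape(-1) for g in G], axis=-1)
print(x.shape)                          # (96, 6): N_0 = 96 input units, one value per g

# A permutation-invariant target is a constant function on G, e.g., y(g) = sum of digits.
```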
|
|
|
Table 3: Average test accuracy over five sub-models simultaneously trained on the three datasets. The (fully) noncommutative C*-algebra net outperforms the block-diagonal noncommutative C*-algebra net on Kuzushiji-MNIST and Fashion-MNIST, indicating that the interaction can leverage related tasks.

| Dataset | Commutative | Block-diagonal | Noncommutative |
|---|---|---|---|
| MNIST | 0.956 ± 0.002 | 0.969 ± 0.002 | 0.970 ± 0.002 |
| Kuzushiji-MNIST | 0.745 ± 0.004 | 0.778 ± 0.006 | 0.796 ± 0.008 |
| Fashion-MNIST | 0.768 ± 0.007 | 0.807 ± 0.006 | 0.822 ± 0.002 |
|
|
|
![9_image_0.png](9_image_0.png)

![9_image_1.png](9_image_1.png)

Figure 2: Average PSNR of implicit representations of the image below (top) and reconstructions of the ground truth image at every 100 iterations (bottom). The noncommutative C*-algebra net learns the geometry and colors of the image faster than its commutative counterpart.
|
One of the simplest tasks for the group C*-algebra net is to learn permutation-invariant representations, e.g., predicting the sum of d given digits. In this case, x is a function that outputs permutations of the features of d digits, and y(g) is a constant function that returns the sum of these digits for all g ∈ G. In this experiment, we use features of MNIST digits extracted by a pre-trained CNN as 32-dimensional vectors. Digits are selected so that their sum is less than 10 to simplify the problem, and the model is trained to classify the sum of the given digits using the cross-entropy loss. We set the number of layers to 4 and the hidden dimension to 32. For comparison, we prepare permutation-invariant and permutation-equivariant DeepSet models (Zaheer et al., 2017), which adopt special structures to induce permutation invariance or equivariance and contain the same number of floating-point parameters as the group C*-algebra net.

Table 4 displays the results of the task with various training dataset sizes when d = 3. What stands out in the table is that the group C*-algebra net consistently outperforms the DeepSet models by large margins, particularly when the number of training data points is limited. Additionally, as shown in Figure 5, the group C*-algebra net converges in far fewer iterations than the DeepSet models. These results suggest that the inductive biases implanted by the product structure of the group C*-algebra net are effective.
|
|
|
## 5 Related Works |
|
|
|
Applying algebraic structures to neural networks has been studied extensively. Quaternions have been applied to, for example, spatial transformations, multi-dimensional signals, and color images (Nitta, 1995; Arena et al., 1997; Zhu et al., 2018; Gaudet & Maida, 2018). Clifford algebra (or geometric algebra) is a generalization of quaternions, and applying Clifford algebra to neural networks has also been investigated to extract the geometric structure of data (Rivera-Rovelo et al., 2010; Zang et al., 2022; Brandstetter et al., 2022; Ruhe et al., 2023b;a). Hoffmann et al. (2020) considered neural networks with matrix-valued parameters for parameter and computational efficiency and for achieving extensive structures of neural networks. In this section, we discuss the relationships and differences between our framework and these existing studies of applying algebras to neural networks.

![10_image_0.png](10_image_0.png)

Figure 3: Ground truth images and their implicit representations by the commutative and noncommutative C*-algebra nets after 500 iterations of training. The noncommutative C*-algebra net reproduces colors more faithfully.

![10_image_1.png](10_image_1.png)

Figure 4: Synthesized views of 3D implicit representations of the commutative and noncommutative C*-algebra nets after 5000 iterations of training. The noncommutative C*-algebra net can produce finer details. Note that the commutative C*-algebra net could not synthesize the chair on the left.
|
|
|
A quaternion is a generalization of a complex number, expressed as a + bi + cj + dk for a, b, c, d ∈ R. Here, i, j, and k are basis elements satisfying i² = j² = k² = −1, ij = −ji = k, ki = −ik = j, and jk = −kj = i. Nitta (1995) and Arena et al. (1997) introduced and analyzed neural networks with quaternion-valued parameters. Since rotations in three-dimensional space can be represented with quaternions, such networks can be applied to controlling the position of robots (Fortuna et al., 1996). More recently, representing color images using quaternions and analyzing them with a quaternion version of a convolutional neural network has been proposed and investigated (Zhu et al., 2018; Gaudet & Maida, 2018).
|
|
|
Table 4: Average test accuracy of an invariant DeepSet model, an equivariant DeepSet model, and a group C*-algebra net on the test data of the sum-of-digits task after 100 epochs of training. The group C*-algebra net can learn from fewer data.
|
|
|
| Dataset size | Invariant DeepSet | Equivariant DeepSet | Group C*-algebra net |
|----------------|---------------------|-----------------------|--------------------------|
| 1k | 0.408 ± 0.014 | 0.571 ± 0.021 | 0.783 ± 0.016 |
| 5k | 0.731 ± 0.026 | 0.811 ± 0.007 | 0.922 ± 0.003 |
| 10k | 0.867 ± 0.021 | 0.836 ± 0.009 | 0.943 ± 0.005 |
| 50k | 0.933 ± 0.005 | 0.862 ± 0.002 | 0.971 ± 0.001 |
|
|
|
![11_image_0.png](11_image_0.png)

![11_image_1.png](11_image_1.png)

Figure 5: Average test accuracy curves of the invariant DeepSet, the equivariant DeepSet, and a group C*-algebra net trained on 10k data points of the sum-of-digits task. The group C*-algebra net learns more efficiently and effectively.
|
Clifford algebra is a generalization of quaternions and enables us to extract the geometric structure of data. It naturally unifies real numbers, vectors, complex numbers, quaternions, exterior algebras, and so on. For a vector space V equipped with a quadratic form Q and an orthonormal basis {e_1, ..., e_n} of V, the Clifford algebra is constructed from the products e_{i_1}···e_{i_k} for 1 ≤ i_1 < ··· < i_k ≤ n and 0 ≤ k ≤ n. The product structure is defined by e_i e_j = −e_j e_i for i ≠ j and e_i² = Q(e_i). We have not only the vectors e_1, ..., e_n, but also elements of the form e_i e_j (bivectors), e_i e_j e_k (trivectors), and so on. Using these different types of vectors, we can describe data in different fields. Brandstetter et al. (2022) and Ruhe et al. (2023b) proposed applying neural networks with Clifford algebra to solve partial differential equations involving different fields by describing the correlations of these fields using Clifford algebra. Group-equivariant networks with Clifford algebra have also been proposed for extracting features that are equivariant under group actions (Ruhe et al., 2023a). Zang et al. (2022) analyzed traffic data with residual networks with Clifford algebra-valued parameters to capture correlations in both the spatial and temporal domains. Rivera-Rovelo et al. (2010) approximate the surfaces of 2D or 3D objects using a network with Clifford algebra.
|
|
|
Hoffmann et al. (2020) considered generalizing neural network parameters to matrices. Such networks can be effective in terms of parameter size and computational cost, and they also allow flexibility in the design of networks with matrix-valued parameters. On the other hand, C*-algebra is a natural generalization of the space of complex numbers. An advantage of considering C*-algebras over other algebras is the straightforward generalization of notions related to neural networks, by virtue of the structures of involution, norm, and the C*-property. For example, we have a generalization of Hilbert space by means of C*-algebra, which is called a Hilbert C*-module (Lance, 1995). Since the input and output spaces are Hilbert spaces, the theory of Hilbert C*-modules can be used in analyzing C*-algebra nets. We also have a natural generalization of the reproducing kernel Hilbert space (RKHS), called the reproducing kernel Hilbert C*-module (RKHM) (Hashimoto et al., 2021). RKHM enables us to connect C*-algebra nets with kernel methods (Hashimoto et al., 2023).
|
|
|
From the perspective of application to neural networks, both C*-algebra and Clifford algebra enable us to induce interactions. Clifford algebra can describe the relationship among data components by using bivectors and trivectors. C*-algebra can also induce interactions among data components using its product structure. Moreover, it can induce interactions among sub-models, as discussed in Section 3.1.1. Our framework also enables us to construct group equivariant neural networks, as discussed in Section 3.1.4.
|
|
|
## 6 Conclusion And Discussion |
|
|
|
In this paper, we have generalized the space of neural network parameters to noncommutative C*-algebras. Their rich product structures bring powerful properties to neural networks. For example, a C*-algebra net over nondiagonal matrices enables its sub-models to interact, and a group C*-algebra net learns permutation-equivariant features. We have empirically demonstrated the validity of these properties on various tasks: image classification, neural implicit representation, and a sum-of-digits task.
|
|
|
A current practical limitation of noncommutative C*-algebra nets is their computational cost. The noncommutative C*-algebra net over matrices used in the experiments requires complexity quadratic in the number of sub-models for communication, in the same way as the "all-reduce" collective operation in distributed computation. This complexity could be alleviated by, for example, parameter sharing or by introducing structures to the nondiagonal elements, by analogy between self-attention and its efficient variants. The group C*-algebra net even costs factorial time complexity in the size of the set, which soon becomes infeasible as the size of the set increases. Such intensive complexity might be mitigated by representing parameters with invariant/equivariant neural networks, such as DeepSet. Despite this computational burden, introducing noncommutative C*-algebras yields interesting properties that are otherwise impossible.
|
|
|
We leave further investigation of scalability for future research. An important and interesting future direction is the application of infinite-dimensional C*-algebras. In this paper, we focused mainly on finite-dimensional C*-algebras. We showed that the product structure in C*-algebras is a powerful tool for neural networks, for example, for learning with interactions and for group equivariance (or invariance), even in the finite-dimensional case. Infinite-dimensional C*-algebras allow us to analyze functional data, such as time-series data and spatial data. We believe that applying infinite-dimensional C*-algebras can be an efficient way to extract information from data even when observations are partially missing. Practical applications of our framework to functional data with infinite-dimensional C*-algebras are future work. Our framework with noncommutative C*-algebras is general and has a wide range of applications. We believe that it opens up a new approach to learning neural networks.
|
|
|
## References |
|
|
|
Md. Faijul Amin, Md. Monirul Islam, and Kazuyuki Murase. Single-layered complex-valued neural networks and their ensembles for real-valued classification problems. In *IJCNN*, 2008. |
|
|
|
Paolo Arena, Luigi Fortuna, Giovanni Muscato, and Maria Gabriella Xibilia. Multilayer perceptrons to approximate quaternion valued functions. *Neural Networks*, 10(2):335β342, 1997. |
|
|
|
Igor Babuschkin, Kate Baumli, Alison Bell, Surya Bhupatiraju, Jake Bruce, Peter Buchlovsky, David Budden, Trevor Cai, Aidan Clark, Ivo Danihelka, Claudio Fantacci, Jonathan Godwin, Chris Jones, Ross Hemsley, Tom Hennigan, Matteo Hessel, Shaobo Hou, Steven Kapturowski, Thomas Keck, Iurii Kemaev, Michael King, Markus Kunesch, Lena Martens, Hamza Merzic, Vladimir Mikulik, Tamara Norman, John Quan, George Papamakarios, Roman Ring, Francisco Ruiz, Alvaro Sanchez, Rosalia Schneider, Eren Sezener, Stephen Spencer, Srivatsan Srinivasan, Luyu Wang, Wojciech Stokowiec, and Fabio Viola. The DeepMind JAX Ecosystem, 2020. URL http://github.com/deepmind. |
|
|
|
AurΓ©lien Bellet, Rachid Guerraoui, Mahsa Taziki, and Marc Tommasi. Personalized and private peer-to-peer machine learning. In *AISTATS*, 2018. |
|
|
|
James Bradbury, Roy Frostig, Peter Hawkins, Matthew James Johnson, Chris Leary, Dougal Maclaurin, George Necula, Adam Paszke, Jake VanderPlas, Skye Wanderman-Milne, and Qiao Zhang. JAX: composable transformations of Python+NumPy programs, 2018. URL http://github.com/google/jax. |
|
|
|
Johannes Brandstetter, Rianne van den Berg, Max Welling, and Jayesh K Gupta. Clifford neural layers for PDE modeling. arXiv:2209.04934, 2022. |
|
|
|
Sven Buchholz. *A Theory of Neural Computation with Clifford Algebras*. PhD thesis, 2005. |
|
|
|
Sven Buchholz and Gerald Sommer. On Clifford neurons and Clifford multi-layer perceptrons. Neural Networks, 21(7):925β935, 2008. |
|
|
|
Emmanuel J. Candès. Harmonic analysis of neural networks. *Applied and Computational Harmonic Analysis*, |
|
6(2):197β218, 1999. |
|
|
|
Angel X. Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, Jianxiong Xiao, Li Yi, and Fisher Yu. ShapeNet: An Information-Rich 3D Model Repository. Technical report, Stanford University - Princeton University β |
|
Toyota Technological Institute at Chicago, 2015. |
|
|
|
Yunjey Choi, Youngjung Uh, Jaejun Yoo, and Jung-Woo Ha. Stargan v2: Diverse image synthesis for multiple domains. In *CVPR*, 2020. |
|
|
|
Tarin Clanuwat, Mikel Bober-Irizar, Asanobu Kitamoto, Alex Lamb, Kazuaki Yamamoto, and David Ha. Deep learning for classical Japanese literature. *arXiv*, 2018.
|
|
|
Taco S Cohen, Mario Geiger, and Maurice Weiler. A general theory of equivariant CNNs on homogeneous spaces. In *NeurIPS*, 2019. |
|
|
|
Philip J. Davis. *Circulant Matrices*. Wiley, 1979. |
|
|
|
Xibin Dong, Zhiwen Yu, Wenming Cao, Yifan Shi, and Qianli Ma. A survey on ensemble learning. *Frontiers* of Computer Science, 14:241β258, 2020. |
|
|
|
L. Fortuna, G. Muscato, and M.G. Xibilia. An hypercomplex neural network platform for robot positioning. |
|
|
|
In 1996 IEEE International Symposium on Circuits and Systems. Circuits and Systems Connecting the World. (ISCAS 96), 1996. |
|
|
|
Mudasir Ahmad Ganaie, Minghui Hu, Ashwani Kumar Malik, M. Tanveer, and Ponnuthurai N. Suganthan. |
|
|
|
Ensemble deep learning: A review. *Engineering Applications of Artificial Intelligence*, 115:105151, 2022. |
|
|
|
Chase J Gaudet and Anthony S Maida. Deep quaternion networks. In *IJCNN*, pp. 1β8. IEEE, 2018. |
|
|
|
Yuka Hashimoto, Isao Ishikawa, Masahiro Ikeda, Fuyuta Komura, Takeshi Katsura, and Yoshinobu Kawahara. Reproducing kernel Hilbert C*-module and kernel mean embeddings. *Journal of Machine Learning Research*, 22(267):1–56, 2021.
|
|
|
Yuka Hashimoto, Zhao Wang, and Tomoko Matsui. C*-algebra net: a new approach generalizing neural network parameters to C*-algebra. In *ICML*, 2022.
|
|
|
Yuka Hashimoto, Masahiro Ikeda, and Hachem Kadri. Learning in RKHM: a C*-algebraic twist for kernel machines. In *AISTATS*, 2023.
|
|
|
Akira Hirose. Continuous complex-valued back-propagation learning. *Electronics Letters*, 28:1854β1855, 1992. |
|
|
|
Jordan Hoffmann, Simon Schmitt, Simon Osindero, Karen Simonyan, and Erich Elsen. Algebranets. |
|
|
|
arXiv:2006.07360, 2020. |
|
|
|
Patrick Kidger and Cristian Garcia. Equinox: neural networks in JAX via callable PyTrees and filtered transformations. *Differentiable Programming workshop at Neural Information Processing Systems 2021*, |
|
2021. |
|
|
|
Diederik P. Kingma and Jimmy Lei Ba. Adam: a Method for Stochastic Optimization. In *ICLR*, 2015. |
|
|
|
Aleksandr A. Kirillov. *Elements of the Theory of Representations*. Springer, 1976. |
|
|
|
E. Christopher Lance. *Hilbert C\*-modules - a Toolkit for Operator Algebraists*. London Mathematical Society Lecture Note Series, vol. 210. Cambridge University Press, New York, 1995.
|
|
|
Yann Le Cun, Leon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. *Proceedings of the IEEE*, 86(11):2278–2324, 1998.
|
|
|
ChiYan Lee, Hideyuki Hasegawa, and Shangce Gao. Complex-valued neural networks: A comprehensive survey. *IEEE/CAA Journal of Automatica Sinica*, 9(8):1406β1426, 2022. |
|
|
|
Jan Eric Lenssen, Matthias Fey, and Pascal Libuschewski. Group equivariant capsule networks. In *NeurIPS*, |
|
2018. |
|
|
|
Gerard J. Murphy. *C\*-Algebras and Operator Theory*. Academic Press, 1990.
|
|
|
Ikuko Nishikawa, Kazutoshi Sakakibara, Takeshi Iritani, and Yasuaki Kuroe. 2 types of complex-valued hopfield networks and the application to a traffic signal control. In *IJCNN*, 2005. |
|
|
|
Tohru Nitta. A quaternary version of the back-propagation algorithm. In *ICNN*, volume 5, pp. 2753β2756, 1995. |
|
|
|
Justin Pearson and D.L. Bisset. Neural networks in the Clifford domain. In *ICNN*, 1994.

Jorge Rivera-Rovelo, Eduardo Bayro-Corrochano, and Ruediger Dillmann. Geometric neural computing for 2d contour and 3d surface reconstruction. In *Geometric Algebra Computing*, pp. 191–209. Springer, 2010.
|
|
|
Fabrice Rossi and Brieuc Conan-Guez. Functional multi-layer perceptron: a non-linear tool for functional data analysis. *Neural Networks*, 18(1):45β60, 2005. |
|
|
|
David Ruhe, Johannes Brandstetter, and Patrick ForrΓ©. Clifford group equivariant neural networks. |
|
|
|
arXiv:2305.11141, 2023a. |
|
|
|
David Ruhe, Jayesh K. Gupta, Steven de Keninck, Max Welling, and Johannes Brandstetter. Geometric clifford algebra networks. arXiv:2302.06594, 2023b. |
|
|
|
Akiyoshi Sannai, Masaaki Imaizumi, and Makoto Kawano. Improved generalization bounds of group invariant |
|
/ equivariant deep networks via quotient feature spaces. In UAI, 2021. |
|
|
|
Vincent Sitzmann, Julien N.P. Martel, Alexander W. Bergman, David B. Lindell, and Gordon Wetzstein. |
|
|
|
Implicit neural representations with periodic activation functions. In *NeurIPS*, 2020. |
|
|
|
Sho Sonoda and Noboru Murata. Neural network with unbounded activation functions is universal approximator. *Applied and Computational Harmonic Analysis*, 43(2):233β268, 2017. |
|
|
|
Sho Sonoda, Isao Ishikawa, and Masahiro Ikeda. Universality of group convolutional neural networks based on ridgelet analysis on groups. In *NeurIPS*, 2022. |
|
|
|
Matthew Tancik, Ben Mildenhall, Terrance Wang, Divi Schmidt, Pratul P. Srinivasan, Jonathan T. Barron, and Ren Ng. Learned Initializations for Optimizing Coordinate-Based Neural Representations. In *CVPR*, |
|
2021. |
|
|
|
Barinder Thind, Kevin Multani, and Jiguo Cao. Deep learning with functional inputs. *Journal of Computational and Graphical Statistics*, 32:171β180, 2023. |
|
|
|
Chiheb Trabelsi, Olexa Bilaniuk, Ying Zhang, Dmitriy Serdyuk, Sandeep Subramanian, Joao Felipe Santos, Soroush Mehri, Negar Rostamzadeh, Yoshua Bengio, and Christopher J Pal. Deep complex networks. In ICLR, 2018. |
|
|
|
Paul Vanhaesebrouck, AurΓ©lien Bellet, and Marc Tommasi. Decentralized collaborative learning of personalized models over networks. In *AISTATS*, 2017. |
|
|
|
Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms. *arXiv*, 2017. |
|
|
|
Yiheng Xie, Towaki Takikawa, Shunsuke Saito, Or Litany, Shiqin Yan, Numair Khan, Federico Tombari, James Tompkin, Vincent Sitzmann, and Srinath Sridhar. Neural fields in visual computing and beyond. *Computer Graphics Forum*, 2022.
|
|
|
Abhishek Yadav, Deepak Mishra, Sudipta Ray, Ram Narayan Yadav, and Prem Kalra. Representation of complex-valued neural networks: a real-valued approach. In *ICISIP*, 2005. |
|
|
|
Manzil Zaheer, Satwik Kottur, Siamak Ravanbakhsh, Barnabas Poczos, Russ R Salakhutdinov, and Alexander J Smola. Deep sets. In *NeurIPS*, 2017. |
|
|
|
Di Zang, Xihao Chen, Juntao Lei, Zengqiang Wang, Junqi Zhang, Jiujun Cheng, and Keshuang Tang. A multi-channel geometric algebra residual network for traffic data prediction. *IET Intelligent Transport Systems*, 16(11):1549–1560, 2022.
|
|
|
Yu Zhang and Qiang Yang. A survey on multi-task learning. *IEEE Transactions on Knowledge and Data Engineering*, 34(12):5586–5609, 2022.
|
|
|
Xuanyu Zhu, Yi Xu, Hongteng Xu, and Changjian Chen. Quaternion convolutional neural networks. In *ECCV*, 2018.
|
|
|
## A Appendix

## A.1 Implementation Details
|
|
|
We implemented C*-algebra nets using JAX (Bradbury et al., 2018) with equinox (Kidger & Garcia, 2021) and optax (Babuschkin et al., 2020). For the C*-algebra net over matrices, we used the Adam optimizer (Kingma & Ba, 2015) with a learning rate of 1.0 × 10⁻⁴, except for the 3D NIR experiment, where Adam's initial learning rate was set to 1.0 × 10⁻³. For the group C*-algebra net, we adopted the Adam optimizer with a learning rate of 1.0 × 10⁻³. We set the batch size to 32 in all experiments except for the 2D NIR, where each batch consisted of all pixels, and the 3D NIR, where a batch size of 4 was used. Listings 1 and 2 illustrate linear layers of C*-algebra nets using NumPy, equivalent to the implementations used in the experiments in Sections 4.1 and 4.2.
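For reference, the following is a minimal sketch of such an optimizer setup in optax; `loss_fn`, `params`, and `train_step` are illustrative placeholders under our own assumptions, not the models and losses used in the experiments.

```
import jax
import jax.numpy as jnp
import optax


def loss_fn(params, batch):
    # illustrative placeholder: mean-squared error of a linear model
    inputs, targets = batch
    preds = inputs @ params["w"] + params["b"]
    return jnp.mean((preds - targets) ** 2)


# Adam with the learning rate used for the C*-algebra net over matrices
optimizer = optax.adam(learning_rate=1.0e-4)
params = {"w": jnp.zeros((8, 1)), "b": jnp.zeros((1,))}
opt_state = optimizer.init(params)


@jax.jit
def train_step(params, opt_state, batch):
    # one gradient step: compute gradients, transform them with Adam, apply updates
    grads = jax.grad(loss_fn)(params, batch)
    updates, opt_state = optimizer.update(grads, opt_state)
    return optax.apply_updates(params, updates), opt_state
```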
|
|
|
The implementation of the 3D neural implicit representation (Section 4.1.2) is based on a simple NeRF-like model and its renderer from Tancik et al. (2021). For training, 25 views of each 3D chair from the ShapeNet dataset (Chang et al., 2015) are used with their 64 × 64 pixel reference images. The same C*-algebra MLPs as in the 2D experiments were used, except for the hyperparameters: four layers and a hidden dimension of 128. The permutation-invariant DeepSet model used in Section 4.2 processes each data sample with a four-layer MLP with hyperbolic tangent activation, followed by sum-pooling and a linear classifier. The permutation-equivariant DeepSet model consists of four permutation-equivariant layers with hyperbolic tangent activation, max-pooling, and a linear classifier, following the point cloud classification setup in Zaheer et al. (2017). We also tried leaky ReLU activation, as used in the group C*-algebra net, but this setting yielded sub-optimal results for the permutation-invariant DeepSet. The hidden dimension of the DeepSet models was set to 96 so that the number of floating-point parameters matches that of the group C*-algebra net.
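For concreteness, the following is a minimal NumPy sketch of a permutation-invariant DeepSet-style forward pass in the spirit of Zaheer et al. (2017); the function name `deepset_invariant` and the layer parameterization are illustrative assumptions, not the exact model used in Section 4.2.

```
import numpy as np


def deepset_invariant(x: np.ndarray,
                      phi_layers: list,
                      classifier: tuple) -> np.ndarray:
    """
    x: {set_size}x{input_dim} array, one row per set element
    phi_layers: list of (weight, bias) pairs of the element-wise MLP
    classifier: (weight, bias) of the final linear classifier
    """
    h = x
    for w, b in phi_layers:
        # the same MLP is applied independently to every element of the set
        h = np.tanh(h @ w + b)
    # sum-pooling makes the representation invariant to permutations of the set
    pooled = h.sum(axis=0)
    w_c, b_c = classifier
    return pooled @ w_c + b_c
```

Replacing the sum-pooling with max-pooling and the element-wise MLP with permutation-equivariant layers gives the equivariant variant described above.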
|
|
|
## A.2 Additional Results |
|
|
|
Figures 6 and 7 present additional results for the 2D NIRs (Section 4.1.2). Figure 6 is a ukiyo-e counterpart of Figure 3 in the main text. Again, the noncommutative C*-algebra net learns color details faster than the commutative one. Figure 7 shows average PSNR curves over three NIRs of the image, initialized with different random seeds, over 5,000 iterations. Although the gap is not as large as in the early stage of training, the noncommutative C*-algebra net still outperforms the commutative one after convergence.
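Since the curves in Figure 7 report PSNR, a minimal sketch of the metric for images with pixel values in [0, 1] is given below; the helper name `psnr` is ours and may differ from the evaluation code actually used.

```
import numpy as np


def psnr(prediction: np.ndarray, target: np.ndarray, max_value: float = 1.0) -> float:
    # peak signal-to-noise ratio in dB, assuming pixel values in [0, max_value]
    mse = np.mean((prediction - target) ** 2)
    return float(10.0 * np.log10(max_value ** 2 / mse))
```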
|
|
|
```
import numpy as np

Array = np.ndarray  # type alias used for annotations


def matrix_valued_linear(weight: Array,
                         bias: Array,
                         input: Array
                         ) -> Array:
    """
    weight: Array of shape {output_dim}x{input_dim}x{dim_matrix}x{dim_matrix}
    bias: Array of shape {output_dim}x{dim_matrix}x{dim_matrix}
    input: Array of shape {input_dim}x{dim_matrix}x{dim_matrix}
    out: Array of shape {output_dim}x{dim_matrix}x{dim_matrix}
    """
    out = []
    for _weight, b in zip(weight, bias):
        # accumulate matrix products over the input dimension
        tmp = np.zeros(weight.shape[2:])
        for w, i in zip(_weight, input):
            tmp += w @ i
        # add the matrix-valued bias once per output unit
        tmp += b
        out.append(tmp)
    return np.array(out)
```

Listing 1: NumPy implementation of a linear layer of a C*-algebra net over matrices used in Section 4.1.
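As a usage sketch of Listing 1 (continuing from its imports; the shapes below are chosen arbitrarily for illustration):

```
rng = np.random.default_rng(0)
weight = rng.standard_normal((2, 3, 4, 4))  # {output_dim}x{input_dim}x{dim_matrix}x{dim_matrix}
bias = rng.standard_normal((2, 4, 4))       # {output_dim}x{dim_matrix}x{dim_matrix}
input = rng.standard_normal((3, 4, 4))      # {input_dim}x{dim_matrix}x{dim_matrix}

out = matrix_valued_linear(weight, bias, input)
print(out.shape)  # (2, 4, 4)
```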
|
|
|
![16_image_0.png](16_image_0.png) |
|
|
|
Figure 6: Ground truth images and their implicit representations of commutative and noncommutative C*-algebra nets after 500 iterations of training.
|
|
|
Table 5 and Figure 8 show additional results for the 3D NIRs (Section 4.1.2). Table 5 presents the average PSNR of synthesized views. As can be seen from the synthesized views in Figures 4 and 8, the noncommutative C*-algebra net produces less noisy output, resulting in a higher PSNR.
|
|
|
Figure 9 displays test accuracy curves of the group C*-algebra net and the DeepSet models on the sum-of-digits task over different learning rates. As in Figure 5, which shows the case with a learning rate of 0.001, the group C*-algebra net converges in far fewer iterations than the DeepSet models over a wide range of learning rates, although the proposed model shows unstable results for a large learning rate of 0.01.
|
|
|
```
from __future__ import annotations

import dataclasses
from itertools import permutations

import numpy as np

Array = np.ndarray  # type alias used for annotations


@dataclasses.dataclass
class Permutation:
    # helper class to handle permutations of a finite set
    value: np.ndarray

    def inverse(self) -> Permutation:
        return Permutation(self.value.argsort())

    def __mul__(self, perm: Permutation) -> Permutation:
        return Permutation(self.value[perm.value])

    def __eq__(self, other: Permutation) -> bool:
        return np.all(self.value == other.value)

    def __hash__(self) -> int:
        # content-based hash so that Permutation can be used as a dictionary key
        return hash(self.value.tobytes())

    @staticmethod
    def create_hashtable(set_size: int) -> Array:
        # multiplication table of the symmetric group: entry (x, y) stores the index of inverse(x) * y
        perms = [Permutation(np.array(p)) for p in permutations(range(set_size))]
        index = {v: i for i, v in enumerate(perms)}
        out = []
        for perm in perms:
            out.append([index[perm.inverse() * _perm] for _perm in perms])
        return np.array(out)


def group_linear(weight: Array,
                 bias: Array,
                 input: Array,
                 set_size: int
                 ) -> Array:
    """
    weight: {output_dim}x{input_dim}x{group_size}
    bias: {output_dim}x{group_size}
    input: {input_dim}x{group_size}
    out: {output_dim}x{group_size}
    """
    hashtable = Permutation.create_hashtable(set_size)  # {group_size}x{group_size}
    g = np.arange(hashtable.shape[0])
    out = []
    for _weight in weight:
        tmp0 = []
        for y in g:
            tmp1 = []
            for w, f in zip(_weight, input):
                tmp2 = []
                for x in g:
                    # group convolution: f is evaluated at inverse(x) * y
                    tmp2.append(w[x] * f[hashtable[x, y]])
                tmp1.append(sum(tmp2))
            tmp0.append(sum(tmp1))
        out.append(tmp0)
    return np.array(out) + bias
```

Listing 2: NumPy implementation of a group C*-algebra linear layer used in Section 4.2.
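As a usage sketch of Listing 2 with set_size = 3, so that the group is the symmetric group on three elements with 3! = 6 elements (shapes chosen arbitrarily for illustration):

```
rng = np.random.default_rng(0)
set_size, group_size = 3, 6                       # 3! = 6 permutations
weight = rng.standard_normal((2, 5, group_size))  # {output_dim}x{input_dim}x{group_size}
bias = rng.standard_normal((2, group_size))       # {output_dim}x{group_size}
input = rng.standard_normal((5, group_size))      # {input_dim}x{group_size}

out = group_linear(weight, bias, input, set_size)
print(out.shape)  # (2, 6)
```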
|
|
|
Table 5: Average PSNR over synthesized views. The poses of the views are unseen during training.

| Commutative  | Noncommutative |
|--------------|----------------|
| 18.40 ± 4.30 | 25.22 ± 1.45   |
|
|
|
![18_image_0.png](18_image_0.png) |
|
|
|
Figure 7: Average PSNR over implicit representations of the image of commutative and noncommutative C*-algebra nets trained on five cat pictures (top) and reconstructions of the ground truth image at every 500 iterations (bottom).
|
|
|
Figure 8: Synthesized views of implicit representations of a chair by the commutative and noncommutative C*-algebra nets.
|
|
|
![18_image_1.png](18_image_1.png) |
|
|
|
Figure 9: Test accuracy curves of the group C*-algebra net and the DeepSet models on the sum-of-digits task over different learning rates.