RedTachyon committed on
Commit
676849b
•
1 Parent(s): 2b1c036

Upload folder using huggingface_hub

Browse files
qp6OQwLsF7/10_image_0.png ADDED

Git LFS Details

  • SHA256: e721261120e889e4b416bbc366ca2306ddadc7d9002c613802f00ccebd44d9b9
  • Pointer size: 131 Bytes
  • Size of remote file: 140 kB
qp6OQwLsF7/10_image_1.png ADDED

Git LFS Details

  • SHA256: f8004d83309372957e11677c06185c18818ad7830153d09fb6ff162e66bd09aa
  • Pointer size: 130 Bytes
  • Size of remote file: 44.4 kB
qp6OQwLsF7/11_image_0.png ADDED

Git LFS Details

  • SHA256: b4013c7a54b3e13a92ce0b612d764baa8d24f59d9e540c59308b94407827180c
  • Pointer size: 130 Bytes
  • Size of remote file: 16.3 kB
qp6OQwLsF7/11_image_1.png ADDED

Git LFS Details

  • SHA256: e935f66d57b63be12f80a4df94b3dcbb574fa11bd2ed27f5697f08b425f2efbd
  • Pointer size: 128 Bytes
  • Size of remote file: 847 Bytes
qp6OQwLsF7/16_image_0.png ADDED

Git LFS Details

  • SHA256: 881563110041773a8686d7add16256fef470bd99742d8b84f7d9798d8506b6f3
  • Pointer size: 130 Bytes
  • Size of remote file: 96.6 kB
qp6OQwLsF7/18_image_0.png ADDED

Git LFS Details

  • SHA256: 459c8c48dc00f72c1a11352832bf1db8b063b5b084fdaa329bbecc4ace692411
  • Pointer size: 130 Bytes
  • Size of remote file: 92.4 kB
qp6OQwLsF7/18_image_1.png ADDED

Git LFS Details

  • SHA256: e20158a37b41e06f56954e819a0f1ed682e3fad8609e4fb922bfcfdfcc969c0b
  • Pointer size: 130 Bytes
  • Size of remote file: 57.8 kB
qp6OQwLsF7/3_image_0.png ADDED

Git LFS Details

  • SHA256: b9d0dc35732469d63f5aa82c6c4dbb8585a876db2073711f03c9e1f0ff7f84de
  • Pointer size: 130 Bytes
  • Size of remote file: 29.8 kB
qp6OQwLsF7/3_image_1.png ADDED

Git LFS Details

  • SHA256: eb0972098a2951fcdf3e2bbbc5f4881caa857cf2d5c6ff0adcec24080985ff2f
  • Pointer size: 130 Bytes
  • Size of remote file: 35.3 kB
qp6OQwLsF7/9_image_0.png ADDED

Git LFS Details

  • SHA256: b71105e24ee8e3efc314cfd16293b285bd84f17902814b2e8b49a01ca093d29c
  • Pointer size: 130 Bytes
  • Size of remote file: 44.4 kB
qp6OQwLsF7/9_image_1.png ADDED

Git LFS Details

  • SHA256: 4d438e078813a538074e305f7b1586d896ca7f5ace2da2d50eed29002816c0aa
  • Pointer size: 130 Bytes
  • Size of remote file: 25.7 kB
qp6OQwLsF7/qp6OQwLsF7.md ADDED
@@ -0,0 +1,861 @@

# Noncommutative C*-Algebra Net: Learning Neural Networks with Powerful Product Structure in C*-Algebra

Anonymous authors
Paper under double-blind review

## Abstract

We propose a new generalization of neural networks with noncommutative C*-algebra. An important feature of C*-algebras is their noncommutative structure of products, but the existing C*-algebra net frameworks have only considered commutative C*-algebras. We show that this noncommutative structure of C*-algebras induces powerful effects in learning neural networks. Our framework has a wide range of applications, such as learning multiple related neural networks simultaneously with interactions and learning invariant features with respect to group actions. Numerical experiments illustrate the validity of our framework and its potential power.

## 1 Introduction

Generalization of the parameter space of neural networks beyond real numbers brings intriguing possibilities. For instance, using complex numbers (Hirose, 1992; Nishikawa et al., 2005; Amin et al., 2008; Yadav et al., 2005; Trabelsi et al., 2018; Lee et al., 2022) or quaternions (Nitta, 1995; Arena et al., 1997; Zhu et al., 2018; Gaudet & Maida, 2018) as neural network parameters is more intuitive and effective, particularly in signal processing, computer vision, and robotics domains. Clifford algebra, the generalization of these numbers, allows more flexible geometrical data processing and has been applied to neural networks to handle rich geometric relationships in data (Pearson & Bisset, 1994; Buchholz, 2005; Buchholz & Sommer, 2008; Rivera-Rovelo et al., 2010; Zang et al., 2022; Brandstetter et al., 2022; Ruhe et al., 2023b;a). Different from these approaches focusing on the geometric perspective of parameter values, an alternative direction of generalization is to use function-valued parameters (Rossi & Conan-Guez, 2005; Thind et al., 2023), broadening the applications of neural networks to functional data. Hashimoto et al. (2022) proposed the C*-algebra net, which generalizes neural network parameters to a (commutative) C*-algebra, a generalization of the complex numbers (Murphy, 1990; Hashimoto et al., 2021). They adopted continuous functions on a compact space as a commutative C*-algebra and presented a new interpretation of function-valued neural networks: infinitely many real-valued or complex-valued neural networks are continuously combined into a single C*-algebra net. For example, networks for the same task with different training datasets or different initial parameters can be combined continuously, which enables efficient learning using shared information among the infinitely many combined networks. Such interaction among networks is similar to learning from related tasks, as in ensemble learning (Dong et al., 2020; Ganaie et al., 2022) and multitask learning (Zhang & Yang, 2022). However, because the product structure in the C*-algebra that Hashimoto et al. (2022) focus on is commutative, such networks cannot take advantage of the rich product structures of C*-algebras and instead require specially designed loss functions to induce the necessary interaction.

To fully exploit rich product structures, we propose a new generalization of neural networks with noncommutative C*-algebra. Typical examples of C*-algebras include the space of diagonal matrices, which is commutative with respect to the matrix product, and the space of square matrices, which is noncommutative. Specifically, in the case of diagonal matrices, their product is computed simply by multiplying each diagonal element independently, and thus they are commutative. On the other hand, the product of two square nondiagonal matrices is the sum of products between different elements, and each resultant diagonal element depends on the other diagonal elements through the nondiagonal elements, which can be interpreted as interaction among the diagonal elements.

Such product structures derived from noncommutativity are powerful when used as neural network parameters. Keeping the C*-algebra of matrices as an example, a neural network with nondiagonal matrix parameters can naturally induce interactions among multiple neural networks with real- or complex-valued parameters by regarding each diagonal element as a parameter of such a network. Because the interactions are encoded in the parameters, specially designed loss functions to induce interactions are unnecessary for such noncommutative C*-algebra nets. This property clearly contrasts the proposed framework with the existing commutative C*-algebra nets. Another example is a neural network with group C*-algebra parameters, which is naturally group-equivariant without designing special network architectures.

Our main contributions in this paper are summarized as follows:

- We generalize the commutative C*-algebra net proposed by Hashimoto et al. (2022) to noncommutative C*-algebras, which can take advantage of the noncommutative product structure in the C*-algebra when learning neural networks.
- We present two examples of the general noncommutative C*-algebra net: the C*-algebra net over matrices and the group C*-algebra net. A C*-algebra net over matrices can naturally combine multiple standard neural networks with interactions. Neural networks with group C*-algebra parameters are naturally group-equivariant without modifying the network structure.
- Numerical experiments illustrate the validity of these noncommutative C*-algebra nets, including the interactions among neural networks.

We emphasize that C*-algebra is a powerful tool for neural networks, and our work provides many important perspectives on its application.
56
+ ## 2 Background
57
+
58
+ In this section, we review the mathematical background of C
59
+ βˆ—-algebra required for this paper and the existing C
60
+ βˆ—-algebra net. For more theoretical details of the C
61
+ βˆ—-algebra, see, for example, Murphy (1990).
62
+

## 2.1 C*-Algebra

C*-algebra is a generalization of the space of complex values. It has the structures of a product, an involution $(\cdot)^*$, and a norm.

**Definition 1 (C*-algebra)** A set $\mathcal{A}$ is called a C*-algebra if it satisfies the following conditions:

1. $\mathcal{A}$ is an algebra over $\mathbb{C}$ and equipped with a bijection $(\cdot)^*: \mathcal{A} \to \mathcal{A}$ that satisfies the following conditions for $\alpha, \beta \in \mathbb{C}$ and $c, d \in \mathcal{A}$:
   - $(\alpha c + \beta d)^* = \overline{\alpha}\, c^* + \overline{\beta}\, d^*$,
   - $(cd)^* = d^* c^*$,
   - $(c^*)^* = c$.

2. $\mathcal{A}$ is a normed space with norm $\|\cdot\|$, and for $c, d \in \mathcal{A}$, $\|cd\| \le \|c\|\, \|d\|$ holds. In addition, $\mathcal{A}$ is complete with respect to $\|\cdot\|$.

3. For $c \in \mathcal{A}$, $\|c^* c\| = \|c\|^2$ (the C*-property) holds.

The product structure in C*-algebras can be either commutative or noncommutative.

**Example 1 (Commutative C*-algebra)** Let $\mathcal{A}$ be the space of continuous functions on a compact Hausdorff space $\mathcal{Z}$. We can regard $\mathcal{A}$ as a C*-algebra by setting

- Product: pointwise product of two functions, i.e., for $a_1, a_2 \in \mathcal{A}$, $(a_1 a_2)(z) = a_1(z) a_2(z)$.
- Involution: pointwise complex conjugate, i.e., for $a \in \mathcal{A}$, $a^*(z) = \overline{a(z)}$.
- Norm: sup norm, i.e., for $a \in \mathcal{A}$, $\|a\| = \sup_{z \in \mathcal{Z}} |a(z)|$.

In this case, the product in $\mathcal{A}$ is commutative.

**Example 2 (Noncommutative C*-algebra)** Let $\mathcal{A}$ be the space of bounded linear operators on a Hilbert space $\mathcal{H}$, which is denoted by $\mathcal{B}(\mathcal{H})$. We can regard $\mathcal{A}$ as a C*-algebra by setting

- Product: composition of two operators,
- Involution: adjoint of an operator,
- Norm: operator norm, i.e., for $a \in \mathcal{A}$, $\|a\| = \sup_{v \in \mathcal{H},\, \|v\|_{\mathcal{H}}=1} \|av\|_{\mathcal{H}}$.

Here, $\|\cdot\|_{\mathcal{H}}$ is the norm in $\mathcal{H}$. In this case, the product in $\mathcal{A}$ is noncommutative. Note that if $\mathcal{H}$ is a $d$-dimensional space for a finite natural number $d$, then the elements in $\mathcal{A}$ are $d$ by $d$ matrices.

**Example 3 (Group C*-algebra)** The group C*-algebra on a group $G$, which is denoted as $C^*(G)$, is the set of maps from $G$ to $\mathbb{C}$ equipped with the following product, involution, and norm:

- Product: $(a \cdot b)(g) = \int_G a(h)\, b(h^{-1} g)\, \mathrm{d}\lambda(h)$ for $g \in G$,
- Involution: $a^*(g) = \Delta(g^{-1})\, \overline{a(g^{-1})}$ for $g \in G$,
- Norm: $\|a\| = \sup_{[\pi] \in \hat{G}} \|\pi(a)\|$,

where $\Delta(g)$ is the positive number satisfying $\lambda(Eg) = \Delta(g)\lambda(E)$ for the Haar measure $\lambda$ on $G$, and $\hat{G}$ is the set of equivalence classes of irreducible unitary representations of $G$. Note that if $G$ is discrete, then $\lambda$ is the counting measure on $G$. In this paper, we focus mainly on the product structure of $C^*(G)$. For details of the Haar measure and representations of groups, see Kirillov (1976). If $G = \mathbb{Z}/p\mathbb{Z}$, then $C^*(G)$ is C*-isomorphic to the C*-algebra of circulant matrices (Hashimoto et al., 2023). Note also that if $G$ is noncommutative, then $C^*(G)$ can also be noncommutative.
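
The matrix case in Example 2 is easy to check numerically. The following short sketch (ours, not from the paper) verifies the C*-property $\|a^* a\| = \|a\|^2$ for a random complex matrix, where $\|\cdot\|$ is the operator (spectral) norm.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
# A random element of the C*-algebra A = C^{d x d}.
a = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))

op_norm = lambda m: np.linalg.norm(m, ord=2)  # largest singular value

lhs = op_norm(a.conj().T @ a)  # ||a* a||
rhs = op_norm(a) ** 2          # ||a||^2
print(np.isclose(lhs, rhs))    # True: the C*-property holds
```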

## 2.2 C*-Algebra Net

Hashimoto et al. (2022) proposed generalizing real-valued neural network parameters to commutative C*-algebra-valued ones, which enables us to represent multiple real-valued models as a single C*-algebra net. Here, we briefly review the existing (commutative) C*-algebra net. Let $\mathcal{A} = C(\mathcal{Z})$, the commutative C*-algebra of continuous functions on a compact Hausdorff space $\mathcal{Z}$. Let $H$ be the depth of the network and $N_0, \ldots, N_H$ be the widths of the layers. For $i = 1, \ldots, H$, set $W_i: \mathcal{A}^{N_{i-1}} \to \mathcal{A}^{N_i}$ as an affine transformation defined by an $N_i \times N_{i-1}$ $\mathcal{A}$-valued matrix and an $\mathcal{A}$-valued bias vector in $\mathcal{A}^{N_i}$. In addition, set a nonlinear activation function $\sigma_i: \mathcal{A}^{N_i} \to \mathcal{A}^{N_i}$. The commutative C*-algebra net $f: \mathcal{A}^{N_0} \to \mathcal{A}^{N_H}$ is defined as

$$f=\sigma_{H}\circ W_{H}\circ\cdots\circ\sigma_{1}\circ W_{1}.\tag{1}$$

By generalizing neural network parameters to functions, we can combine multiple standard (real-valued) neural networks continuously, which enables them to learn efficiently. In this paper, each real-valued network in a C*-algebra net is called a sub-model. We show an example of commutative C*-algebra nets below. To simplify the notation, we focus on the case where the network does not have biases. However, the same arguments are valid for the case where the network has biases.
151
+ ## 2.2.1 The Case Of Diagonal Matrices
152
+
153
+ If Z is a finite set, then A = {a ∈ C
154
+ dΓ—d| a is a diagonal matrix}. The C
155
+ βˆ—-algebra net f on A corresponds to d separate real or complex-valued sub-models. In the case of A = C(Z), we can consider that infinitely many networks are continuously combined, and the C
156
+ βˆ—-algebra net f with diagonal matrices is a discretization of the C
157
+ βˆ—-algebra net over C(Z). Indeed, denote by x jthe vector composed of the jth diagonal elements of x ∈ AN , which is defined as the vector in C
158
+ N whose kth element is the jth diagonal element of the A-valued kth element of x. Assume the activation function Οƒi: AN β†’ AN is defined as Οƒi(x)
159
+ j = ΛœΟƒi(x j) for some ΟƒΛœi: C
160
+ N β†’ C
161
+ N . Since the jth diagonal element of a1a2 for a1, a2 ∈ A is the product of the jth element of a1
162
+
163
+ ![3_image_1.png](3_image_1.png)
164
+
165
+ (a) Cβˆ—-algebra net over diagonal matrices (commutative).
166
+
167
+ The blue parts illustrate the zero parts (nondiagonal parts)
168
+ of the diagonal matrices. Each diagonal element corresponds to an individual sub-model.
169
+
170
+ $$(2)$$
171
+
172
+ (b) Cβˆ—-algebra net over nondiagonal matrices (noncommutative). Unlike the case of diagonal matrices, nondiagonal parts
173
+
174
+ ![3_image_0.png](3_image_0.png)
175
+
176
+ (colored in orange) are not zero. The nondiagonal elements induce the interactions among multiple sub-models.
177
+ Figure 1: Difference between commutative and noncommutative C
178
+ βˆ—-algebra nets from the perspective of
179
+
180
+ interactions among sub-models.
181
+ and a2, we have
182
+
183
+ $$f(x)^{j}=\tilde{\sigma}_{H}\circ W_{H}^{j}\circ\cdots\circ\tilde{\sigma}_{1}\circ W_{1}^{j},$$
184
+ , (2)
185
+ where W
186
+ j i ∈ C
187
+ NiΓ—Niβˆ’1is the matrix whose (*k, l*)-entry is the jth diagonal of the (*k, l*)-entry of Wi ∈
188
+ ANiΓ—Niβˆ’1. Figure 1 (a) schematically shows the C
189
+ βˆ—-algebra net over diagonal matrices.
190
+

## 3 Noncommutative C*-Algebra Net

Although the existing C*-algebra net provides a framework for applying C*-algebra to neural networks, it focuses on commutative C*-algebras, whose product structure is simple. Therefore, we generalize the existing commutative C*-algebra net to noncommutative C*-algebras. Since the product structures in noncommutative C*-algebras are more complicated than those in commutative C*-algebras, they enable neural networks to learn features of data more efficiently. For example, if we focus on the C*-algebra of matrices, then the neural network parameters describe interactions between multiple real-valued sub-models (see Section 3.1.1).

Let $\mathcal{A}$ be a general C*-algebra and consider the network $f$ of the same form as Equation (1). We emphasize that in our framework, the choice of $\mathcal{A}$ is not restricted to a commutative C*-algebra. We list examples of $\mathcal{A}$ and their validity for learning neural networks below.

## 3.1 Examples of C*-Algebras for Neural Networks

By presenting several examples of C*-algebras, we show that the C*-algebra net is a flexible and unified framework that incorporates C*-algebra into neural networks. As mentioned in the previous section, we focus on the case where the network does not have biases for simplicity in this subsection.

## 3.1.1 Nondiagonal Matrices

Let $\mathcal{A} = \mathbb{C}^{d \times d}$. Note that $\mathcal{A}$ is a noncommutative C*-algebra. It is of course possible to consider matrix-valued data, but here we focus on the perspective of interaction among sub-models, following Section 2.2. In this case, unlike the network (2), the $j$th diagonal element of $a_1 a_2 a_3$ for $a_1, a_2, a_3 \in \mathcal{A}$ depends not only on the $j$th diagonal element of $a_2$, but also on the other diagonal elements of $a_2$. Thus, $f(x)^j$ depends not only on the sub-model corresponding to the $j$th diagonal element discussed in Section 2.2.1, but also on the other sub-models. The nondiagonal elements in $\mathcal{A}$ induce interactions between $d$ real- or complex-valued sub-models.

Interaction among sub-models is related to decentralized peer-to-peer machine learning (Vanhaesebrouck et al., 2017; Bellet et al., 2018), where each agent learns without sharing data with others, while improving its ability by leveraging other agents' information through communication. In our case, an agent corresponds to a sub-model, and communication is achieved by the interaction. We numerically examine the effect of the interaction induced by the nondiagonal elements of $\mathcal{A}$ in Section 4.1, and a small illustration is sketched below. Figure 1 (b) schematically shows the C*-algebra net over nondiagonal matrices.

## 3.1.2 Block Diagonal Matrices

Let $\mathcal{A} = \{a \in \mathbb{C}^{d \times d} \mid a = \mathrm{diag}(a_1, \ldots, a_m),\ a_i \in \mathbb{C}^{d_i \times d_i}\}$. The product of two block diagonal matrices $a = \mathrm{diag}(a_1, \ldots, a_m)$ and $b = \mathrm{diag}(b_1, \ldots, b_m)$ can be written as

$$ab=\mathrm{diag}(a_{1}b_{1},\ldots,a_{m}b_{m}).$$

In a similar manner to Section 2.2.1, we denote by $x^j$ the $N$ by $d_j$ matrix composed of the $j$th diagonal blocks of $x \in \mathcal{A}^N$. Assume the activation function $\sigma_i: \mathcal{A}^N \to \mathcal{A}^N$ is defined as $\sigma_i(x) = \mathrm{diag}(\tilde{\sigma}_i^1(x^1), \ldots, \tilde{\sigma}_i^m(x^m))$ for some $\tilde{\sigma}_i^j: \mathbb{C}^{N \times d_j} \to \mathbb{C}^{N \times d_j}$. Then, we have

$$f(x)^{j}=\tilde{\sigma}_{H}^{j}\circ W_{H}^{j}\circ\cdots\circ\tilde{\sigma}_{1}^{j}\circ W_{1}^{j},\tag{3}$$

where $W_i^j \in (\mathbb{C}^{d_j \times d_j})^{N_i \times N_{i-1}}$ is the block matrix whose $(k,l)$-entry is the $j$th diagonal block of the $(k,l)$-entry of $W_i \in \mathcal{A}^{N_i \times N_{i-1}}$. In this case, we have $m$ groups of sub-models, each of which is composed of $d_j$ interacting sub-models as described in Section 3.1.1. Indeed, the block diagonal case generalizes the diagonal and nondiagonal cases stated in Sections 2.2.1 and 3.1.1. If $d_j = 1$ for all $j = 1, \ldots, m$, then the network (3) reduces to the network (2) with diagonal matrices. If $m = 1$ and $d_1 = d$, then the network (3) reduces to the network with $d$ by $d$ nondiagonal matrices.

## 3.1.3 Circulant Matrices

Let $\mathcal{A} = \{a \in \mathbb{C}^{d \times d} \mid a \text{ is a circulant matrix}\}$. Here, a circulant matrix $a$ is a matrix of the form

$$a={\begin{bmatrix}a_{1}&a_{d}&\cdots&a_{2}\\ a_{2}&a_{1}&\cdots&a_{3}\\ \vdots&\vdots&\ddots&\vdots\\ a_{d}&a_{d-1}&\cdots&a_{1}\end{bmatrix}}$$

for $a_1, \ldots, a_d \in \mathbb{C}$. Note that in this case, $\mathcal{A}$ is commutative. Circulant matrices are diagonalized by the discrete Fourier transform matrix as follows (Davis, 1979). We denote by $F$ the discrete Fourier transform matrix, whose $(i,j)$-entry is $\omega^{(i-1)(j-1)}/\sqrt{d}$, where $\omega = \mathrm{e}^{2\pi\sqrt{-1}/d}$.

**Lemma 1** Any circulant matrix $a$ is decomposed as $a = F\Lambda_a F^*$, where

$$\Lambda_{a}=\mathrm{diag}\left(\sum_{i=1}^{d}a_{i}\omega^{(i-1)\cdot0},\ldots,\sum_{i=1}^{d}a_{i}\omega^{(i-1)(d-1)}\right).$$

Since $ab = F\Lambda_a\Lambda_b F^*$ for $a, b \in \mathcal{A}$, the product of $a$ and $b$ corresponds to the multiplication of each Fourier component of $a$ and $b$. Assume the activation function $\sigma_i: \mathcal{A}^N \to \mathcal{A}^N$ is defined such that $(F^*\sigma_i(x)F)^j$ equals $\hat{\tilde{\sigma}}_i((F^* x F)^j)$ for some $\hat{\tilde{\sigma}}_i: \mathbb{C}^N \to \mathbb{C}^N$. Then, we obtain the network

$$(F^{*}f(x)F)^{j}=\hat{\tilde{\sigma}}_{H}\circ\hat{W}_{H}^{j}\circ\cdots\circ\hat{\tilde{\sigma}}_{1}\circ\hat{W}_{1}^{j},\tag{4}$$

where $\hat{W}_i^j \in \mathbb{C}^{N_i \times N_{i-1}}$ is the matrix whose $(k,l)$-entry is $(F^* w_{i,k,l} F)^j$, where $w_{i,k,l}$ is the $(k,l)$-entry of $W_i \in \mathcal{A}^{N_i \times N_{i-1}}$. The $j$th sub-model of the network (4) corresponds to the network of the $j$th Fourier component.

**Remark 1** The $j$th sub-model of the network (4) does not interact with the sub-models of Fourier components other than the $j$th component. This corresponds to the fact that $\mathcal{A}$ is commutative in this case. Analogous to the case in Section 3.1.1, if we set $\mathcal{A}$ as noncirculant matrices, then we obtain interactions between the sub-models corresponding to different Fourier components.
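
Lemma 1 can be checked numerically: the DFT simultaneously diagonalizes every circulant matrix, so the product of two circulant matrices corresponds to the elementwise product of the DFTs of their first columns. A small sketch of this check (ours), using SciPy's circulant helper:

```python
import numpy as np
from scipy.linalg import circulant

d = 5
rng = np.random.default_rng(0)
a_col, b_col = rng.standard_normal((2, d))

A, B = circulant(a_col), circulant(b_col)

# Fourier components (eigenvalues) of a circulant matrix = DFT of its first column.
lam_a, lam_b = np.fft.fft(a_col), np.fft.fft(b_col)

# The product of circulants is again circulant, with Fourier components lam_a * lam_b.
ab_col = (A @ B)[:, 0]
print(np.allclose(np.fft.fft(ab_col), lam_a * lam_b))  # True
```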
284
+
285
+ ## 3.1.4 Group C βˆ—**-Algebra On A Symmetric Group**
286
+
287
+ Let G be the symmetric group on the set {1*, . . . , d*} and let A = C
288
+ βˆ—(G). Note that since G is noncommutative, C
289
+ βˆ—(G) is also noncommutative. Then, the output f(x) ∈ ANH is the C
290
+ NH -valued map on G. Using the product structure introduced in Example 3, we can construct a network that takes the permutation of data into account. Indeed, an element w ∈ A of a weight matrix W ∈ ANiβˆ’1Γ—Niis a function on G. Thus, w(g)
291
+ describes the weight corresponding to the permutation g ∈ G. Since the product of x ∈ C
292
+ βˆ—(G) and w is defined as wx(g) = Ph∈G w(h)x(h
293
+ βˆ’1g), by applying W, all the weights corresponding to the permutations affect the input. For example, let z ∈ R
294
+ d and set x ∈ C
295
+ βˆ—(G) as x(g) = g Β· z, where g Β· z is the action of g on z, i.e., the permutation of z with respect to g. Then, we can input all the patterns of permutations of z simultaneously, and by virtue of the product structure in C
296
+ βˆ—(G), the network is learned with the interaction among these permutations. Regarding the output, if the network is learned so that the outputs y become constant functions on G, i.e., y(g) = c for some constant c, then it means that c is invariant with respect to g, i.e., invariant with respect to the permutation. We will numerically investigate the application of the group C
297
+ βˆ—-algebra net to permutation invariant problems in Section 4.2.
298
+
299
+ Remark 2 If the activation function Οƒ *is defined as* Οƒ(x)(g) = Οƒ(x(g)), i.e., applied elementwisely to x, then the network f is permutation equivariant. That is, even if the input x(g) is replaced by x(gh) *for some* h ∈ G, the output f(x)(g) is replaced by f(x)(gh)*. This is because the product in* C
300
+ βˆ—(G) *is defined as a* convolution. This feature of the convolution has been studied for group equivariant neural networks (Lenssen et al., 2018; Cohen et al., 2019; Sannai et al., 2021; Sonoda et al., 2022). The above setting of the C
301
+ βˆ—*-algebra* net provides us with a design of group equivariant networks from the perspective of C
302
+ βˆ—*-algebra.*
303
+ Remark 3 Since the number of elements of G is d!*, elements in* C
304
+ βˆ—(G), which are functions on G, are represented as d!-dimensional vectors. For the case where d is large, we need a method for efficient computations, which is future work.
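
For a small symmetric group, the product in $C^*(G)$ is just a finite convolution and can be written out directly. The sketch below (our illustration, not the authors' code) indexes functions on $G = S_3$ by a list of permutations, implements $(wx)(g) = \sum_{h \in G} w(h)\, x(h^{-1} g)$, and confirms that the product is noncommutative.

```python
import itertools
import numpy as np

d = 3
G = list(itertools.permutations(range(d)))  # elements of S_d as tuples
index = {g: i for i, g in enumerate(G)}

def compose(g, h):
    """(g o h)(i) = g(h(i))."""
    return tuple(g[h[i]] for i in range(d))

def inverse(g):
    inv = [0] * d
    for i, gi in enumerate(g):
        inv[gi] = i
    return tuple(inv)

def conv(w, x):
    """Product in C*(G): (w x)(g) = sum_h w(h) x(h^{-1} g)."""
    out = np.zeros(len(G))
    for gi, g in enumerate(G):
        out[gi] = sum(w[index[h]] * x[index[compose(inverse(h), g)]] for h in G)
    return out

rng = np.random.default_rng(0)
w, x = rng.standard_normal((2, len(G)))
print(np.allclose(conv(w, x), conv(x, w)))  # False: S_3, and hence C*(S_3), is noncommutative
```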

## 3.1.5 Bounded Linear Operators on a Hilbert Space

For functional data, we can also set $\mathcal{A}$ as an infinite-dimensional space. Using infinite-dimensional C*-algebras for analyzing functional data has been proposed (Hashimoto et al., 2021). We can also adopt this idea for neural networks. Let $\mathcal{A} = \mathcal{B}(L^2(\Omega))$ for a measure space $\Omega$. Set $\mathcal{A}_0 = \{a \in \mathcal{A} \mid a \text{ is a multiplication operator}\}$. Here, a multiplication operator $a$ is a linear operator defined as $av = v \cdot u$ for some $u \in L^{\infty}(\Omega)$. The space $\mathcal{A}_0$ is a generalization of the space of diagonal matrices to the infinite-dimensional setting. If we restrict the elements of weight matrices to $\mathcal{A}_0$, then we obtain infinitely many sub-models without interactions. Since the outputs are in $\mathcal{A}_0^{N_H}$, we can obtain functional data as outputs. Similar to the case of matrices (see Section 3.1.1), by setting the elements of weight matrices as elements in $\mathcal{A}$, we can take advantage of interactions among infinitely many sub-models.

## 3.2 Approximation of Functions with Interactions by C*-Algebra Net

We now observe what kind of functions the C*-algebra net can approximate. In this subsection, we show the universality of C*-algebra nets, which describes the representation power of the models. We focus on the case of $\mathcal{A} = \mathbb{C}^{d \times d}$. Consider a shallow network $f: \mathcal{A}^{N_0} \to \mathcal{A}$ defined as $f(x) = W_2^* \sigma(W_1 x + b)$, where $W_1 \in \mathcal{A}^{N_1 \times N_0}$, $W_2 \in \mathcal{A}^{N_1}$, and $b \in \mathcal{A}^{N_1}$. Let $\tilde{f}: \mathcal{A}^{N_0} \to \mathcal{A}$ be a function of the form $\tilde{f}(x) = [\sum_{j=1}^{d} f_{kj}(x^l)]_{kl}$, where $f_{kj}: \mathbb{C}^{N_0 d} \to \mathbb{R}$. Here, we abuse the notation and denote by $x^l \in \mathbb{C}^{N_0 d}$ the $l$th column of $x$ regarded as an $N_0 d$ by $d$ matrix. Assume $f_{kj}$ is represented as

$$f_{kj}(x)=\int_{\mathbb{R}}\int_{\mathbb{R}^{N_{0}d}}T_{kj}(w,b)\,\sigma(w^{*}x+b)\,\mathrm{d}w\,\mathrm{d}b\tag{5}$$

for some $T_{kj}: \mathbb{R}^{N_0 d} \times \mathbb{R} \to \mathbb{R}$. By the theory of the ridgelet transform, such a $T_{kj}$ exists for most realistic settings (Candès, 1999; Sonoda & Murata, 2017). For example, Sonoda & Murata (2017) showed the following result.

**Proposition 1** Let $\mathcal{S}$ be the space of rapidly decreasing functions on $\mathbb{R}$ and $\mathcal{S}_0'$ be the dual space of the Lizorkin distribution space on $\mathbb{R}$. Assume a function $\tilde{f}$ has the form $\tilde{f}(x) = [\sum_{j=1}^{d} f_{kj}(x^l)]_{kl}$, and $f_{kj}$ and $\hat{f}_{kj}$ are in $L^1(\mathbb{R}^{N_0 d})$, where $\hat{f}$ denotes the Fourier transform of a function $f$. Assume in addition that $\sigma$ is in $\mathcal{S}_0'$, and that there exists $\psi \in \mathcal{S}$ such that $\int_{\mathbb{R}} \hat{\psi}(x)\hat{\sigma}(x)/|x|\,\mathrm{d}x$ is nonzero and finite. Then, there exists $T_{kj}: \mathbb{R}^{N_0 d} \times \mathbb{R} \to \mathbb{R}$ such that $f_{kj}$ admits the representation of Equation (5).

Here, the Lizorkin distribution space is defined as $\mathcal{S}_0 = \{\phi \in \mathcal{S} \mid \int_{\mathbb{R}} x^k \phi(x)\,\mathrm{d}x = 0,\ k \in \mathbb{N}\}$. Note that the ReLU is in $\mathcal{S}_0'$. We discretize Equation (5) by replacing the Lebesgue measures with $\sum_{i=1}^{N_1} \delta_{w_{ij}}$ and $\sum_{i=1}^{N_1} \delta_{b_{ij}}$, where $\delta_w$ is the Dirac measure centered at $w$. Then, the $(k,l)$-entry of $\tilde{f}(x)$ is written as

$$\sum_{j=1}^{d}\sum_{i=1}^{N_{1}}T_{kj}(w_{ij},b_{ij})\,\sigma(w_{ij}^{*}x^{l}+b_{ij}).$$

Setting the $i$th element of $W_2 \in \mathcal{A}^{N_1}$ as $[T_{kj}(w_{ij}, b_{ij})]_{kj}$, the $(i,m)$-entry of $W_1 \in \mathcal{A}^{N_1 \times N_0}$ as $[(w_{ij})_{md+l}]_{jl}$, and the $i$th element of $b \in \mathcal{A}^{N_1}$ as $[b_{ij}]_{jl}$, we obtain

$$\sum_{j=1}^{d}\sum_{i=1}^{N_{1}}T_{kj}(w_{ij},b_{ij})\,\sigma(w_{ij}^{*}x^{l}+b_{ij})=(W_{2}^{k})^{*}\sigma(W_{1}x^{l}+b^{l}),$$

which is the $(k,l)$-entry of $f(x)$. As a result, any function of the form $\tilde{f}(x) = [\sum_{j=1}^{d} f_{kj}(x^l)]_{kl}$ satisfying the assumptions of Proposition 1 can be represented as a C*-algebra net.

**Remark 4** As we discussed in Sections 2.2.1 and 3.1.1, a C*-algebra net over matrices can be regarded as $d$ interacting sub-models. The above argument shows that the $l$th columns of $f(x)$ and $\tilde{f}(x)$ depend only on $x^l$. Thus, in this case, if we input the data $x^l$ corresponding to the $l$th sub-model, then the output is obtained as the $l$th column of the $\mathcal{A}$-valued output $f(x)$. On the other hand, the weight matrices $W_1$ and $W_2$ and the bias $b$ are used commonly in providing the outputs for every sub-model, i.e., $W_1$, $W_2$, and $b$ are learned using data corresponding to all the sub-models. Therefore, $W_1$, $W_2$, and $b$ induce interactions among the sub-models.

## 4 Experiments

In this section, we numerically demonstrate the abilities of noncommutative C*-algebra nets: we use nondiagonal C*-algebra nets over matrices to study the interaction among sub-models, and group C*-algebra nets to study the equivariance property. We use C*-algebra-valued multi-layer perceptrons (MLPs) to simplify the experiments. However, they can be naturally extended to other neural networks, such as convolutional neural networks.

The models were implemented with JAX (Bradbury et al., 2018). Experiments were conducted on an AMD EPYC 7543 CPU and an NVIDIA A100 GPU. See Appendix A.1 for additional information on the experiments.

## 4.1 C*-Algebra Nets over Matrices

In a noncommutative C*-algebra net over matrices consisting of nondiagonal-matrix parameters, each sub-model is expected to interact with the others and thus improve performance compared with its commutative counterpart consisting of diagonal matrices. We demonstrate the effectiveness of such interaction using image classification and neural implicit representation (NIR) tasks in a setting similar to peer-to-peer learning, in which the data are separated across sub-models.

See Section 3.1.1 for the notation. When training the $j$th sub-model ($j = 1, 2, \ldots, d$), an original $N_0$-dimensional input data point $x = [x_1, \ldots, x_{N_0}] \in \mathbb{R}^{N_0}$ is converted to its corresponding representation $\boldsymbol{x} \in \mathcal{A}^{N_0} = \mathbb{R}^{N_0 \times d \times d}$ such that $\boldsymbol{x}_{i,j,j} = x_i$ for $i = 1, 2, \ldots, N_0$ and $0$ otherwise. The loss between the $N_H$-dimensional output of a C*-algebra net $\boldsymbol{y} \in \mathcal{A}^{N_H}$ and the target $\boldsymbol{t} \in \mathcal{A}^{N_H}$ is computed as $\ell(\boldsymbol{y}_{:,j,j}, \boldsymbol{t}_{:,j,j}) + \frac{1}{2}\sum_{k,\,l \neq j} (\boldsymbol{y}_{k,j,l}^2 + \boldsymbol{y}_{k,l,j}^2)$, where $\ell$ is a certain loss function; we use the mean squared error (MSE) for image classification and the Huber loss for NIR. The second and third terms suppress the nondiagonal elements of the outputs toward 0. In both examples, we use leaky ReLU as the activation function and apply it only to the diagonal elements of the pre-activations.
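
In code, this objective is compact. The sketch below (ours, following the description above) computes the loss for sub-model $j$ in jax.numpy, with the base loss $\ell$ passed in as a function; it is an illustration of the objective, not the authors' implementation.

```python
import jax.numpy as jnp

def submodel_loss(y, t, j, ell=lambda a, b: jnp.mean((a - b) ** 2)):
    """Loss for sub-model j: ell on the j-th diagonal plus the off-diagonal penalty.

    y, t: (N_H, d, d) C*-algebra-valued network output and target
    ell:  base loss on the diagonal slice, e.g. MSE or the Huber loss
    """
    data_term = ell(y[:, j, j], t[:, j, j])
    # sum_{k, l != j} (y_{k,j,l}^2 + y_{k,l,j}^2): push row j and column j
    # of the off-diagonal output entries towards zero.
    row_sq = jnp.sum(y[:, j, :] ** 2, axis=-1) - y[:, j, j] ** 2
    col_sq = jnp.sum(y[:, :, j] ** 2, axis=-1) - y[:, j, j] ** 2
    return data_term + 0.5 * jnp.sum(row_sq + col_sq)
```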

## 4.1.1 Image Classification

We conduct image classification experiments using MNIST (Le Cun et al., 1998), Kuzushiji-MNIST (Clanuwat et al., 2018), and Fashion-MNIST (Xiao et al., 2017), which are composed of 10-class 28 × 28 gray-scale images. Each sub-model is trained on a mutually exclusive subset sampled from the original training data and then evaluated on the entire test data. Each subset is sampled to be balanced, i.e., each class has the same number of training samples. As a baseline, we use a commutative C*-algebra net over diagonal matrices, which consists of the same sub-models but without interaction among them. Both the noncommutative and commutative models share hyperparameters: the number of layers was set to 4, the hidden size was set to 128, and the models were trained for 30 epochs.

Table 1 shows the average test accuracy. Accuracy can be reported in two distinct manners: the first approach averages the accuracy across individual sub-models ("Average"), and the other ensembles the logits of the sub-models and then computes the accuracy ("Ensemble"). As can be seen, the noncommutative C*-algebra net consistently outperforms its commutative counterpart, and the gap is particularly significant when the number of sub-models is 40. Note that when the number of sub-models is 40, the training dataset of each sub-model is 40 times smaller than the original one, and thus the commutative C*-algebra net fails to learn. Nevertheless, the noncommutative C*-algebra net mostly retains its performance. The commutative C*-algebra net improves performance by ensembling, but it remains inferior to both the Average and Ensemble results of the noncommutative C*-algebra net when the number of sub-models is larger than five. Such a performance improvement could be attributed to the fact that noncommutative models have more trainable parameters than commutative ones. Thus, we additionally compare a commutative C*-algebra net with a width of 128 and a noncommutative C*-algebra net with a width of 8, which have the same number of learnable parameters, when the total number of training data points is set to 5000, ten times smaller than in the experiments of Table 1. As seen in Table 2, while the commutative C*-algebra net mostly fails to learn, the noncommutative C*-algebra net learns successfully. These results suggest that the performance of the noncommutative C*-algebra net cannot be explained solely by the number of learnable parameters: the interaction among sub-models provides essential capability.

Furthermore, Table 3 illustrates that related tasks help performance improvement through interaction. Specifically, we prepare five sub-models per dataset for MNIST, Kuzushiji-MNIST, and Fashion-MNIST, and train a single (non)commutative C*-algebra net consisting of 15 sub-models simultaneously. In addition to the commutative C*-algebra net, where sub-models have no interaction, and the noncommutative C*-algebra net, where each sub-model can interact with any other sub-model, we use a block-diagonal noncommutative C*-algebra net (see Section 3.1.2), where each sub-model can only interact with the other sub-models trained on the same dataset. Table 3 shows that the fully noncommutative C*-algebra net surpasses the block-diagonal one on Kuzushiji-MNIST and Fashion-MNIST, implying that not only intra-task interaction but also inter-task interaction helps performance gains. Note that these results are not directly comparable with the values in Tables 1 and 2, due to dataset subsampling to balance class sizes (matching MNIST's smallest class).

Table 1: Average test accuracy of commutative and noncommutative C*-algebra nets over matrices on test datasets. "Average" reports the average accuracy of sub-models, and "Ensemble" ensembles the logits of sub-models to compute accuracy. The interaction between sub-models in the noncommutative C*-algebra net improves performance significantly when the number of sub-models is 40.

| Dataset | # sub-models | Commutative (Average) | Commutative (Ensemble) | Noncommutative (Average) | Noncommutative (Ensemble) |
|---|---|---|---|---|---|
| MNIST | 5 | 0.963 ± 0.003 | 0.970 ± 0.001 | 0.970 ± 0.002 | 0.976 ± 0.001 |
| | 10 | 0.937 ± 0.004 | 0.950 ± 0.000 | 0.956 ± 0.002 | 0.969 ± 0.000 |
| | 20 | 0.898 ± 0.007 | 0.914 ± 0.002 | 0.937 ± 0.002 | 0.957 ± 0.001 |
| | 40 | 0.605 ± 0.007 | 0.795 ± 0.010 | 0.906 ± 0.004 | 0.938 ± 0.001 |
| Kuzushiji-MNIST | 5 | 0.837 ± 0.003 | 0.871 ± 0.001 | 0.859 ± 0.003 | 0.888 ± 0.002 |
| | 10 | 0.766 ± 0.008 | 0.793 ± 0.004 | 0.815 ± 0.007 | 0.859 ± 0.002 |
| | 20 | 0.674 ± 0.011 | 0.710 ± 0.001 | 0.758 ± 0.007 | 0.817 ± 0.001 |
| | 40 | 0.453 ± 0.026 | 0.532 ± 0.004 | 0.682 ± 0.008 | 0.767 ± 0.001 |
| Fashion-MNIST | 5 | 0.862 ± 0.001 | 0.873 ± 0.001 | 0.868 ± 0.002 | 0.882 ± 0.001 |
| | 10 | 0.839 ± 0.003 | 0.850 ± 0.001 | 0.852 ± 0.004 | 0.871 ± 0.001 |
| | 20 | 0.790 ± 0.010 | 0.796 ± 0.002 | 0.832 ± 0.005 | 0.858 ± 0.000 |
| | 40 | 0.650 ± 0.018 | 0.674 ± 0.001 | 0.810 ± 0.005 | 0.841 ± 0.000 |

Table 2: Average test accuracy of commutative and noncommutative C*-algebra nets over matrices trained on 5000 data points with 20 sub-models. The width of the noncommutative model is set to 8 so that the number of learnable parameters is matched with its commutative counterpart.

| Dataset | Commutative | Noncommutative |
|---|---|---|
| MNIST | 0.155 ± 0.04 | 0.779 ± 0.02 |
| Kuzushiji-MNIST | 0.140 ± 0.03 | 0.486 ± 0.02 |
| Fashion-MNIST | 0.308 ± 0.05 | 0.673 ± 0.02 |

## 4.1.2 Neural Implicit Representation

In the next experiment, we use a C*-algebra net over matrices to learn implicit representations of 2D images that map each pixel coordinate to its RGB colors (Sitzmann et al., 2020; Xie et al., 2022). Specifically, an input coordinate in $[0,1]^2$ is transformed into a random Fourier feature in $[-1,1]^{320}$ and then converted into its C*-algebraic representation over matrices as an input to a C*-algebra net over matrices. Similar to the image classification task, we compare noncommutative NIRs with commutative NIRs, using the following hyperparameters: the number of layers is 6 and the hidden dimension is 256. These NIRs learn 128 × 128-pixel images of ukiyo-e pictures from The Metropolitan Museum of Art¹ and photographs of cats from the AFHQ dataset (Choi et al., 2020).

Figure 2 (top) shows the curves of the average PSNR (peak signal-to-noise ratio) of the sub-models corresponding to the image below. Both the commutative and noncommutative C*-algebra nets consist of five sub-models trained on five ukiyo-e pictures (see also Figure 6). The PSNR, a measure of reconstruction quality, of the noncommutative NIR grows faster, and correspondingly, it learns the details of the ground-truth images faster than its commutative version (Figure 2, bottom). Noticeably, the noncommutative representations reproduce colors even at an early stage of learning, whereas the commutative ones remain monochrome after 500 iterations of training. Along with the similar trends observed in the pictures of cats (Figure 3), these results further emphasize the effectiveness of the interaction. Longer-term results are presented in Figure 7.

This NIR for 2D images can be extended to represent 3D models. Figure 4 shows synthesized views of 3D implicit representations using the same C*-algebra MLPs trained on three 3D chairs from the ShapeNet dataset (Chang et al., 2015). The presented poses are unseen during training. Again, the noncommutative NIR reconstructs the chair models with fewer noisy artifacts, indicating that the interaction helps efficient learning. See Appendices A.1 and A.2 for details and results.

¹ https://www.metmuseum.org/art/the-collection

## 4.2 Group C*-Algebra Nets

As another experimental example of C*-algebra nets, we showcase group C*-algebra nets, which we introduced in Section 3.1.4. Group C*-algebra nets take functions on a symmetric group as input and return functions on the group as output.

Refer to Section 3.1.4 for the notation. A group C*-algebra net is trained on data $\{(x, y) \in \mathcal{A}^{N_0} \times \mathcal{A}^{N_H}\}$, where $x$ and $y$ are $N_0$- and $N_H$-dimensional vector-valued functions. Practically, such functions may be represented as real tensors, e.g., $x \in \mathbb{C}^{N_0 \times \#G}$, where $\#G$ is the size of $G$. Using the product between functions explained in Section 3.1.4 and elementwise addition, a linear layer, and consequently an MLP, on $\mathcal{A}$ can be constructed. Following the C*-algebra nets over matrices, we use leaky ReLU for the activations.

Table 3: Average test accuracy over five sub-models simultaneously trained on the three datasets. The (fully) noncommutative C*-algebra net outperforms the block-diagonal noncommutative C*-algebra net on Kuzushiji-MNIST and Fashion-MNIST, indicating that the interaction can leverage related tasks.

| Dataset | Commutative | Block-diagonal | Noncommutative |
|---|---|---|---|
| MNIST | 0.956 ± 0.002 | 0.969 ± 0.002 | 0.970 ± 0.002 |
| Kuzushiji-MNIST | 0.745 ± 0.004 | 0.778 ± 0.006 | 0.796 ± 0.008 |
| Fashion-MNIST | 0.768 ± 0.007 | 0.807 ± 0.006 | 0.822 ± 0.002 |

![9_image_0.png](9_image_0.png)

![9_image_1.png](9_image_1.png)

Figure 2: Average PSNR of implicit representations of the image below (top) and reconstructions of the ground-truth image at every 100 iterations (bottom). The noncommutative C*-algebra net learns the geometry and colors of the image faster than its commutative counterpart.

One of the simplest tasks for group C*-algebra nets is to learn permutation-invariant representations, e.g., predicting the sum of $d$ given digits. In this case, $x$ is a function that outputs permutations of the features of the $d$ digits, and $y(g)$ is a constant function that returns the sum of these digits for all $g \in G$; a sketch of this input construction is given below. In this experiment, we use features of MNIST digits extracted by a pre-trained CNN as 32-dimensional vectors. The digits are selected so that their sum is less than 10 to simplify the problem, and the model is trained to classify the sum of the given digits using the cross-entropy loss. We set the number of layers to 4 and the hidden dimension to 32. For comparison, we prepare permutation-invariant and permutation-equivariant DeepSet models (Zaheer et al., 2017), which adopt special structures to induce permutation invariance or equivariance and contain the same number of parameters as the group C*-algebra net when counted as floating-point numbers.
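
The following sketch (ours, with an assumed feature dimension of 32) shows one way to build such an input: the $d$ digit features are stacked in every possible order, giving a tensor of shape $(N_0, \#G)$ with $N_0 = 32d$ and $\#G = d!$ that represents the function $x(g) = g \cdot z$.

```python
import itertools
import numpy as np

d, feat = 3, 32
G = list(itertools.permutations(range(d)))
rng = np.random.default_rng(0)
z = rng.standard_normal((d, feat))  # stand-in for pre-trained CNN features of d digits

# x(g) = g . z: one flattened copy of the permuted features per group element.
x = np.stack([z[list(g)].reshape(-1) for g in G], axis=-1)
print(x.shape)  # (d * feat, d!) = (96, 6)
```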

Table 4 displays the results of this task with various training dataset sizes when $d = 3$. What stands out in the table is that the group C*-algebra net consistently outperforms the DeepSet models by large margins, particularly when the number of training data points is limited. Additionally, as shown in Figure 5, the group C*-algebra net converges within far fewer iterations than the DeepSet models. These results suggest that the inductive biases implanted by the product structure in the group C*-algebra net are effective.

## 5 Related Works

Applying algebraic structures to neural networks has been studied before. Quaternions have been applied to, for example, spatial transformations, multi-dimensional signals, and color images (Nitta, 1995; Arena et al., 1997; Zhu et al., 2018; Gaudet & Maida, 2018). Clifford algebra (or geometric algebra) is a generalization of quaternions, and applying Clifford algebra to neural networks has also been investigated to extract the geometric structure of data (Rivera-Rovelo et al., 2010; Zang et al., 2022; Brandstetter et al., 2022; Ruhe et al., 2023b;a). Hoffmann et al. (2020) considered neural networks with matrix-valued parameters for parameter and computational efficiency and for achieving extensive structures of neural networks. In this section, we discuss the relationships and differences between our framework and the existing studies applying algebras to neural networks.

![10_image_0.png](10_image_0.png)

Figure 3: Ground-truth images and their implicit representations of commutative and noncommutative C*-algebra nets after 500 iterations of training. The noncommutative C*-algebra net reproduces colors more faithfully.

![10_image_1.png](10_image_1.png)

Figure 4: Synthesized views of 3D implicit representations of commutative and noncommutative C*-algebra nets after 5000 iterations of training. The noncommutative C*-algebra net can produce finer details. Note that the commutative C*-algebra net could not synthesize the chair on the left.

A quaternion is a generalization of a complex number, expressed as $a + b\mathrm{i} + c\mathrm{j} + d\mathrm{k}$ for $a, b, c, d \in \mathbb{R}$. Here, $\mathrm{i}$, $\mathrm{j}$, and $\mathrm{k}$ are basis elements that satisfy $\mathrm{i}^2 = \mathrm{j}^2 = \mathrm{k}^2 = -1$, $\mathrm{ij} = -\mathrm{ji} = \mathrm{k}$, $\mathrm{ki} = -\mathrm{ik} = \mathrm{j}$, and $\mathrm{jk} = -\mathrm{kj} = \mathrm{i}$. Nitta (1995) and Arena et al. (1997) introduced and analyzed neural networks with quaternion-valued parameters. Since rotations in three-dimensional space can be represented with quaternions, they can be applied to control the position of robots (Fortuna et al., 1996). More recently, representing color images using quaternions and analyzing them with a quaternion version of a convolutional neural network was proposed and investigated (Zhu et al., 2018; Gaudet & Maida, 2018).

Table 4: Average test accuracy of an invariant DeepSet model, an equivariant DeepSet model, and a group C*-algebra net on test data of the sum-of-digits task after 100 epochs of training. The group C*-algebra net can learn from fewer data.

| Dataset size | Invariant DeepSet | Equivariant DeepSet | Group C*-algebra net |
|----------------|---------------------|-----------------------|--------------------------|
| 1k | 0.408 ± 0.014 | 0.571 ± 0.021 | 0.783 ± 0.016 |
| 5k | 0.731 ± 0.026 | 0.811 ± 0.007 | 0.922 ± 0.003 |
| 10k | 0.867 ± 0.021 | 0.836 ± 0.009 | 0.943 ± 0.005 |
| 50k | 0.933 ± 0.005 | 0.862 ± 0.002 | 0.971 ± 0.001 |

![11_image_0.png](11_image_0.png)

![11_image_1.png](11_image_1.png)

Figure 5: Average test accuracy curves of an invariant DeepSet, an equivariant DeepSet, and a group C*-algebra net trained on 10k data points of the sum-of-digits task. The group C*-algebra net learns more efficiently and effectively.

Clifford algebra is a generalization of quaternions and enables us to extract the geometric structure of data. It naturally unifies real numbers, vectors, complex numbers, quaternions, exterior algebras, and so on. For a vector space $V$ equipped with a quadratic form $Q$ and an orthonormal basis $\{e_1, \ldots, e_n\}$ of $V$, the Clifford algebra is constructed from the products $e_{i_1} \cdots e_{i_k}$ for $1 \le i_1 < \cdots < i_k \le n$ and $0 \le k \le n$. The product structure is defined by $e_i e_j = -e_j e_i$ and $e_i^2 = Q(e_i)$. We have not only the vectors $e_1, \ldots, e_n$, but also elements of the form $e_i e_j$ (bivectors), $e_i e_j e_k$ (trivectors), and so on. Using these different types of vectors, we can describe data in different fields. Brandstetter et al. (2022) and Ruhe et al. (2023b) proposed applying neural networks with Clifford algebra to solve partial differential equations that involve different fields by describing the correlation of these fields using Clifford algebra. Group-equivariant networks with Clifford algebra have also been proposed for extracting features that are equivariant under group actions (Ruhe et al., 2023a). Zang et al. (2022) analyzed traffic data with residual networks with Clifford algebra-valued parameters to capture the correlation between the data in both the spatial and temporal domains. Rivera-Rovelo et al. (2010) approximated the surface of 2D or 3D objects using a network with Clifford algebra.

Hoffmann et al. (2020) considered generalizing neural network parameters to matrices. Such networks can be effective regarding parameter size and computational cost, and they also allow flexibility in the design of networks with matrix-valued parameters. On the other hand, C*-algebra is a natural generalization of the space of complex numbers. An advantage of considering C*-algebra over other algebras is the straightforward generalization of notions related to neural networks, by virtue of the structures of involution, norm, and the C*-property. For example, we have a generalization of Hilbert space by means of C*-algebra, which is called a Hilbert C*-module (Lance, 1995). Since the input and output spaces are Hilbert spaces, the theory of Hilbert C*-modules can be used in analyzing C*-algebra nets. We also have a natural generalization of the reproducing kernel Hilbert space (RKHS), which is called a reproducing kernel Hilbert C*-module (RKHM) (Hashimoto et al., 2021). RKHM enables us to connect C*-algebra nets with kernel methods (Hashimoto et al., 2023).

From the perspective of the application to neural networks, both C*-algebra and Clifford algebra enable us to induce interactions. Clifford algebra can describe the relationship among data components by using bivectors and trivectors. C*-algebra can also induce interaction among data components using its product structure. Moreover, it can also induce interaction among sub-models, as we discussed in Section 3.1.1. Our framework also enables us to construct group-equivariant neural networks, as we discussed in Section 3.1.4.

## 6 Conclusion and Discussion

In this paper, we have generalized the space of neural network parameters to noncommutative C*-algebras. Their rich product structures bring powerful properties to neural networks. For example, a C*-algebra net over nondiagonal matrices enables its sub-models to interact, and a group C*-algebra net learns permutation-equivariant features. We have empirically demonstrated the validity of these properties in various tasks: image classification, neural implicit representation, and the sum-of-digits task.

A current practical limitation of noncommutative C*-algebra nets is their computational cost. The noncommutative C*-algebra net over matrices used in the experiments requires quadratic complexity in the number of sub-models for communication, in the same way as the "all-reduce" collective operation in distributed computation. This complexity could be alleviated by, for example, parameter sharing or introducing structure into the nondiagonal elements, by analogy with self-attention and its efficient variants. The group C*-algebra net even incurs factorial time complexity in the size of the set, which soon becomes infeasible as the size of the set increases. Such an intensive complexity might be mitigated by representing parameters with permutation-invariant/equivariant neural networks, such as DeepSet. Despite this computational burden, introducing noncommutative C*-algebras yields interesting properties that are otherwise impossible. We leave further investigation of scalability for future research.

An important and interesting future direction is the application of infinite-dimensional C*-algebras. In this paper, we focused mainly on finite-dimensional C*-algebras. We showed that the product structure in C*-algebras is a powerful tool for neural networks, for example, for learning with interactions and for group equivariance (or invariance), even in the finite-dimensional case. Infinite-dimensional C*-algebras allow us to analyze functional data, such as time-series data and spatial data. We believe that applying infinite-dimensional C*-algebras can be an efficient way to extract information from data even when observations are partially missing. Practical applications of our framework to functional data with infinite-dimensional C*-algebras are left for future work. Our framework with noncommutative C*-algebras is general and has a wide range of applications. We believe that it opens up a new approach to learning neural networks.
615
+ ## References
616
+
617
+ Md. Faijul Amin, Md. Monirul Islam, and Kazuyuki Murase. Single-layered complex-valued neural networks and their ensembles for real-valued classification problems. In *IJCNN*, 2008.
618
+
619
+ Paolo Arena, Luigi Fortuna, Giovanni Muscato, and Maria Gabriella Xibilia. Multilayer perceptrons to approximate quaternion valued functions. *Neural Networks*, 10(2):335–342, 1997.
620
+
621
+ Igor Babuschkin, Kate Baumli, Alison Bell, Surya Bhupatiraju, Jake Bruce, Peter Buchlovsky, David Budden, Trevor Cai, Aidan Clark, Ivo Danihelka, Claudio Fantacci, Jonathan Godwin, Chris Jones, Ross Hemsley, Tom Hennigan, Matteo Hessel, Shaobo Hou, Steven Kapturowski, Thomas Keck, Iurii Kemaev, Michael King, Markus Kunesch, Lena Martens, Hamza Merzic, Vladimir Mikulik, Tamara Norman, John Quan, George Papamakarios, Roman Ring, Francisco Ruiz, Alvaro Sanchez, Rosalia Schneider, Eren Sezener, Stephen Spencer, Srivatsan Srinivasan, Luyu Wang, Wojciech Stokowiec, and Fabio Viola. The DeepMind JAX Ecosystem, 2020. URL http://github.com/deepmind.
622
+
623
+ AurΓ©lien Bellet, Rachid Guerraoui, Mahsa Taziki, and Marc Tommasi. Personalized and private peer-to-peer machine learning. In *AISTATS*, 2018.
624
+
625
+ James Bradbury, Roy Frostig, Peter Hawkins, Matthew James Johnson, Chris Leary, Dougal Maclaurin, George Necula, Adam Paszke, Jake VanderPlas, Skye Wanderman-Milne, and Qiao Zhang. JAX: composable transformations of Python+NumPy programs, 2018. URL http://github.com/google/jax.
626
+
627
+ Johannes Brandstetter, Rianne van den Berg, Max Welling, and Jayesh K Gupta. Clifford neural layers for PDE modeling. arXiv:2209.04934, 2022.
628
+
629
+ Sven Buchholz. *A Theory of Neural Computation with Clifford Algebras*. PhD thesis, 2005.
630
+
631
+ Sven Buchholz and Gerald Sommer. On Clifford neurons and Clifford multi-layer perceptrons. Neural Networks, 21(7):925–935, 2008.
632
+
633
+ Emmanuel J. Candès. Harmonic analysis of neural networks. *Applied and Computational Harmonic Analysis*,
634
+ 6(2):197–218, 1999.
635
+
636
+ Angel X. Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, Jianxiong Xiao, Li Yi, and Fisher Yu. ShapeNet: An Information-Rich 3D Model Repository. Technical report, Stanford University - Princeton University β€”
637
+ Toyota Technological Institute at Chicago, 2015.
638
+
639
+ Yunjey Choi, Youngjung Uh, Jaejun Yoo, and Jung-Woo Ha. Stargan v2: Diverse image synthesis for multiple domains. In *CVPR*, 2020.
640
+
641
+ Tarin Clanuwat, Mikel Bober-Irizar, Asanobu Kitamoto, Alex Lamb, Kazuaki Yamamoto, and David Ha. Deep learning for classical Japanese literature. *arXiv*, 2018.
644
+
645
+ Taco S Cohen, Mario Geiger, and Maurice Weiler. A general theory of equivariant CNNs on homogeneous spaces. In *NeurIPS*, 2019.
646
+
647
+ Philip J. Davis. *Circulant Matrices*. Wiley, 1979.
648
+
649
+ Xibin Dong, Zhiwen Yu, Wenming Cao, Yifan Shi, and Qianli Ma. A survey on ensemble learning. *Frontiers of Computer Science*, 14:241–258, 2020.
650
+
651
+ L. Fortuna, G. Muscato, and M.G. Xibilia. An hypercomplex neural network platform for robot positioning. In *1996 IEEE International Symposium on Circuits and Systems. Circuits and Systems Connecting the World (ISCAS 96)*, 1996.
654
+
655
+ Mudasir Ahmad Ganaie, Minghui Hu, Ashwani Kumar Malik, M. Tanveer, and Ponnuthurai N. Suganthan. Ensemble deep learning: A review. *Engineering Applications of Artificial Intelligence*, 115:105151, 2022.
658
+
659
+ Chase J Gaudet and Anthony S Maida. Deep quaternion networks. In *IJCNN*, pp. 1–8. IEEE, 2018.
660
+
661
+ Yuka Hashimoto, Isao Ishikawa, Masahiro Ikeda, Fuyuta Komura, Takeshi Katsura, and Yoshinobu Kawahara. Reproducing kernel Hilbert Cβˆ—-module and kernel mean embeddings. *Journal of Machine Learning Research*, 22(267):1–56, 2021.
663
+
664
+ Yuka Hashimoto, Zhao Wang, and Tomoko Matsui. Cβˆ—-algebra net: a new approach generalizing neural network parameters to Cβˆ—-algebra. In *ICML*, 2022.
667
+
668
+ Yuka Hashimoto, Masahiro Ikeda, and Hachem Kadri. Learning in RKHM: a Cβˆ—-algebraic twist for kernel machines. In *AISTATS*, 2023.
670
+
671
+ Akira Hirose. Continuous complex-valued back-propagation learning. *Electronics Letters*, 28:1854–1855, 1992.
672
+
673
+ Jordan Hoffmann, Simon Schmitt, Simon Osindero, Karen Simonyan, and Erich Elsen. AlgebraNets. arXiv:2006.07360, 2020.
676
+
677
+ Patrick Kidger and Cristian Garcia. Equinox: neural networks in JAX via callable PyTrees and filtered transformations. *Differentiable Programming workshop at Neural Information Processing Systems 2021*,
678
+ 2021.
679
+
680
+ Diederik P. Kingma and Jimmy Lei Ba. Adam: A Method for Stochastic Optimization. In *ICLR*, 2015.
681
+
682
+ Aleksandr A. Kirillov. *Elements of the Theory of Representations*. Springer, 1976.
683
+
684
+ E. Christopher Lance. *Hilbert Cβˆ—-modules - a Toolkit for Operator Algebraists*. London Mathematical Society Lecture Note Series, vol. 210. Cambridge University Press, New York, 1995.
686
+
687
+ Yann Le Cun, Leon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. *Proceedings of the IEEE*, 86(11):2278–2324, 1998.
688
+
689
+ ChiYan Lee, Hideyuki Hasegawa, and Shangce Gao. Complex-valued neural networks: A comprehensive survey. *IEEE/CAA Journal of Automatica Sinica*, 9(8):1406–1426, 2022.
690
+
691
+ Jan Eric Lenssen, Matthias Fey, and Pascal Libuschewski. Group equivariant capsule networks. In *NeurIPS*,
692
+ 2018.
693
+
694
+ Gerard J. Murphy. *Cβˆ—-Algebras and Operator Theory*. Academic Press, 1990.
696
+
697
+ Ikuko Nishikawa, Kazutoshi Sakakibara, Takeshi Iritani, and Yasuaki Kuroe. 2 types of complex-valued Hopfield networks and the application to a traffic signal control. In *IJCNN*, 2005.
698
+
699
+ Tohru Nitta. A quaternary version of the back-propagation algorithm. In *ICNN*, volume 5, pp. 2753–2756, 1995.
700
+
701
+ Justin Pearson and D.L. Bisset. Neural networks in the Clifford domain. In *ICNN*, 1994.
+
+ Jorge Rivera-Rovelo, Eduardo Bayro-Corrochano, and Ruediger Dillmann. Geometric neural computing for 2D contour and 3D surface reconstruction. In *Geometric Algebra Computing*, pp. 191–209. Springer, 2010.
702
+
703
+ Fabrice Rossi and Brieuc Conan-Guez. Functional multi-layer perceptron: a non-linear tool for functional data analysis. *Neural Networks*, 18(1):45–60, 2005.
704
+
705
+ David Ruhe, Johannes Brandstetter, and Patrick Forré. Clifford group equivariant neural networks. arXiv:2305.11141, 2023a.
708
+
709
+ David Ruhe, Jayesh K. Gupta, Steven de Keninck, Max Welling, and Johannes Brandstetter. Geometric Clifford algebra networks. arXiv:2302.06594, 2023b.
710
+
711
+ Akiyoshi Sannai, Masaaki Imaizumi, and Makoto Kawano. Improved generalization bounds of group invariant / equivariant deep networks via quotient feature spaces. In *UAI*, 2021.
713
+
714
+ Vincent Sitzmann, Julien N.P. Martel, Alexander W. Bergman, David B. Lindell, and Gordon Wetzstein. Implicit neural representations with periodic activation functions. In *NeurIPS*, 2020.
717
+
718
+ Sho Sonoda and Noboru Murata. Neural network with unbounded activation functions is universal approximator. *Applied and Computational Harmonic Analysis*, 43(2):233–268, 2017.
719
+
720
+ Sho Sonoda, Isao Ishikawa, and Masahiro Ikeda. Universality of group convolutional neural networks based on ridgelet analysis on groups. In *NeurIPS*, 2022.
721
+
722
+ Matthew Tancik, Ben Mildenhall, Terrance Wang, Divi Schmidt, Pratul P. Srinivasan, Jonathan T. Barron, and Ren Ng. Learned Initializations for Optimizing Coordinate-Based Neural Representations. In *CVPR*,
723
+ 2021.
724
+
725
+ Barinder Thind, Kevin Multani, and Jiguo Cao. Deep learning with functional inputs. *Journal of Computational and Graphical Statistics*, 32:171–180, 2023.
726
+
727
+ Chiheb Trabelsi, Olexa Bilaniuk, Ying Zhang, Dmitriy Serdyuk, Sandeep Subramanian, Joao Felipe Santos, Soroush Mehri, Negar Rostamzadeh, Yoshua Bengio, and Christopher J Pal. Deep complex networks. In ICLR, 2018.
728
+
729
+ Paul Vanhaesebrouck, Aurélien Bellet, and Marc Tommasi. Decentralized collaborative learning of personalized models over networks. In *AISTATS*, 2017.
730
+
731
+ Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. *arXiv*, 2017.
732
+
733
+ Yiheng Xie, Towaki Takikawa, Shunsuke Saito, Or Litany, Shiqin Yan, Numair Khan, Federico Tombari, James Tompkin, Vincent Sitzmann, and Srinath Sridhar. Neural fields in visual computing and beyond. *Computer Graphics Forum*, 2022.
736
+
737
+ Abhishek Yadav, Deepak Mishra, Sudipta Ray, Ram Narayan Yadav, and Prem Kalra. Representation of complex-valued neural networks: a real-valued approach. In *ICISIP*, 2005.
738
+
739
+ Manzil Zaheer, Satwik Kottur, Siamak Ravanbakhsh, Barnabas Poczos, Russ R Salakhutdinov, and Alexander J Smola. Deep sets. In *NeurIPS*, 2017.
740
+
741
+ Di Zang, Xihao Chen, Juntao Lei, Zengqiang Wang, Junqi Zhang, Jiujun Cheng, and Keshuang Tang. A
742
+ multi-channel geometric algebra residual network for traffic data prediction. IET Intelligent Transport Systems, 16(11):1549–1560, 2022.
743
+
744
+ Yu Zhang and Qiang Yang. A survey on multi-task learning. IEEE Transactions on Knowledge and Data Engineering, 34(12):5586–5609, 2022.
745
+
746
+ Xuanyu Zhu, Yi Xu, Hongteng Xu, and Changjian Chen. Quaternion convolutional neural networks. In ECCV, 2018.
747
+
748
+ ## A Appendix
+
+ ## A.1 Implementation Details
749
+
750
+ We implemented Cβˆ—-algebra nets using JAX (Bradbury et al., 2018) with equinox (Kidger & Garcia, 2021) and optax (Babuschkin et al., 2020). For the Cβˆ—-algebra net over matrices, we used the Adam optimizer (Kingma & Ba, 2015) with a learning rate of 1.0 × 10^-4, except for the 3D NIR experiment, where Adam's initial learning rate was set to 1.0 × 10^-3. For the group Cβˆ—-algebra net, we adopted the Adam optimizer with a learning rate of 1.0 × 10^-3. We set the batch size to 32 in all experiments except for the 2D NIR, where each batch consisted of all pixels, and the 3D NIR, where a batch size of 4 was used. Listings 1 and 2 illustrate linear layers of Cβˆ—-algebra nets using NumPy, equivalent to the implementations used in the experiments in Sections 4.1 and 4.2.
757
+
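+ To make the setup above concrete, the following is a minimal sketch of the optimizer configuration with optax's Adam; the parameter pytree, loss function, and batch used here are placeholders (the actual models are equinox modules), not the experiment code itself.
+
+ ```
+ import jax
+ import jax.numpy as jnp
+ import optax
+ 
+ # placeholder parameters; the real models are equinox modules
+ params = {"w": jax.random.normal(jax.random.PRNGKey(0), (4, 4))}
+ optimizer = optax.adam(learning_rate=1e-4)  # 1e-3 for the 3D NIR and group settings
+ opt_state = optimizer.init(params)
+ 
+ def loss_fn(params, batch):
+     # placeholder squared-error loss
+     pred = batch["x"] @ params["w"]
+     return jnp.mean((pred - batch["y"]) ** 2)
+ 
+ @jax.jit
+ def train_step(params, opt_state, batch):
+     loss, grads = jax.value_and_grad(loss_fn)(params, batch)
+     updates, opt_state = optimizer.update(grads, opt_state)
+     params = optax.apply_updates(params, updates)
+     return params, opt_state, loss
+ 
+ batch = {"x": jnp.ones((8, 4)), "y": jnp.zeros((8, 4))}
+ params, opt_state, loss = train_step(params, opt_state, batch)
+ ```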
758
+ The implementation of the 3D neural implicit representation (Section 4.1.2) is based on a simple NeRF-like model and its renderer from Tancik et al. (2021). For training, 25 views of each 3D chair from the ShapeNet dataset (Chang et al., 2015) are used with their 64 × 64 pixel reference images. The same Cβˆ—-algebra MLPs as in the 2D experiments were used, except for the hyperparameters: four layers and a hidden dimension of 128.
+
+ The permutation-invariant DeepSet model used in Section 4.2 processes each data sample with a four-layer MLP with hyperbolic tangent activation, followed by sum-pooling and a linear classifier. The permutation-equivariant DeepSet model consists of four permutation-equivariant layers with hyperbolic tangent activation, followed by max-pooling and a linear classifier, following the point cloud classification setup of Zaheer et al. (2017). Although we also tried leaky ReLU activation, as in the group Cβˆ—-algebra net, this setting yielded sub-optimal results for the permutation-invariant DeepSet. The hidden dimension of the DeepSet models was set to 96 so that the number of floating-point parameters matches that of the group Cβˆ—-algebra net.
762
+
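+ For concreteness, the following NumPy sketch illustrates the permutation-invariant DeepSet forward pass described above (an element-wise MLP with hyperbolic tangent activation, sum-pooling, and a linear classifier); the function and variable names, the omission of biases, and the ten output classes are illustrative choices rather than the exact experiment code.
+
+ ```
+ import numpy as np
+ 
+ def deepset_invariant_forward(x, phi_weights, classifier_weight):
+     # x: (set_size, in_dim) array of set elements
+     # phi_weights: list of weight matrices for the element-wise MLP
+     # classifier_weight: (hidden_dim, num_classes) matrix of the linear classifier
+     h = x
+     for w in phi_weights:
+         h = np.tanh(h @ w)   # applied independently to every set element
+     pooled = h.sum(axis=0)   # sum-pooling removes the dependence on element order
+     return pooled @ classifier_weight
+ 
+ rng = np.random.default_rng(0)
+ x = rng.standard_normal((5, 8))  # a set of five 8-dimensional elements
+ phi_weights = [rng.standard_normal((8, 96))] + [rng.standard_normal((96, 96)) for _ in range(3)]
+ logits = deepset_invariant_forward(x, phi_weights, rng.standard_normal((96, 10)))
+ print(logits.shape)  # (10,)
+ ```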
763
+ ## A.2 Additional Results
764
+
765
+ Figures 6 and 7 present additional figures for the 2D NIRs (Section 4.1.2). Figure 6 is an ukiyo-e counterpart of Figure 3 in the main text. Again, the noncommutative Cβˆ—-algebra net learns color details faster than the commutative one. Figure 7 shows average PSNR curves over three NIRs of the image, initialized with different random states, over 5,000 iterations. Although the gap is not as large as in the early stage of training, the noncommutative Cβˆ—-algebra net still outperforms the commutative one after convergence.
768
+
769
+ ```
+ import numpy as np
+ 
+ Array = np.ndarray  # type alias used in the annotations below
+ 
+ 
+ def matrix_valued_linear(weight: Array,
+                          bias: Array,
+                          input: Array
+                          ) -> Array:
+     """
+     weight: Array of shape {output_dim}x{input_dim}x{dim_matrix}x{dim_matrix}
+     bias: Array of shape {output_dim}x{dim_matrix}x{dim_matrix}
+     input: Array of shape {input_dim}x{dim_matrix}x{dim_matrix}
+     out: Array of shape {output_dim}x{dim_matrix}x{dim_matrix}
+     """
+     out = []
+     for _weight, b in zip(weight, bias):
+         # accumulate the matrix products over the input dimension
+         tmp = np.zeros(weight.shape[2:])
+         for w, i in zip(_weight, input):
+             tmp += w @ i
+         # add the matrix-valued bias once per output entry
+         out.append(tmp + b)
+     return np.array(out)
+ ```
+ 
+ Listing 1: NumPy implementation of a linear layer of a Cβˆ—-algebra net over matrices used in Section 4.1.
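+ As a usage example (with arbitrary illustrative dimensions), the layer above maps a tuple of input matrices to a tuple of output matrices:
+
+ ```
+ rng = np.random.default_rng(0)
+ output_dim, input_dim, dim_matrix = 2, 3, 4
+ weight = rng.standard_normal((output_dim, input_dim, dim_matrix, dim_matrix))
+ bias = rng.standard_normal((output_dim, dim_matrix, dim_matrix))
+ x = rng.standard_normal((input_dim, dim_matrix, dim_matrix))
+ y = matrix_valued_linear(weight, bias, x)
+ print(y.shape)  # (2, 4, 4)
+ ```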
792
+
793
+ ![16_image_0.png](16_image_0.png)
794
+
795
+ Figure 6: Ground truth images and their implicit representations of commutative and noncommutative Cβˆ—-algebra nets after 500 iterations of training.
798
+
799
+ Table 5 and Figure 8 show additional results for the 3D NIRs (Section 4.1.2). Table 5 presents the average PSNR of synthesized views. As can be seen from the synthesized views in Figures 4 and 8, the noncommutative Cβˆ—-algebra net produces less noisy output, resulting in a higher PSNR.
801
+
802
+ Figure 9 displays test accuracy curves of the group Cβˆ—-algebra net and the DeepSet models on the sum-of-digits task over different learning rates. As in Figure 5, which shows the case with a learning rate of 0.001, the group Cβˆ—-algebra net converges in far fewer iterations than the DeepSet models over a wide range of learning rates, although the proposed model shows unstable results for a large learning rate of 0.01.
805
+
806
+ ```
+ from __future__ import annotations
+ 
+ import dataclasses
+ from itertools import permutations
+ 
+ import numpy as np
+ 
+ Array = np.ndarray
+ 
+ 
+ @dataclasses.dataclass
+ class Permutation:
+     # helper class to handle permutations
+     value: np.ndarray
+ 
+     def inverse(self) -> Permutation:
+         return Permutation(self.value.argsort())
+ 
+     def __mul__(self, perm: Permutation) -> Permutation:
+         return Permutation(self.value[perm.value])
+ 
+     def __eq__(self, other: Permutation) -> bool:
+         return bool(np.all(self.value == other.value))
+ 
+     def __hash__(self) -> int:
+         # needed so that Permutation can be used as a dictionary key below
+         return hash(self.value.tobytes())
+ 
+     @staticmethod
+     def create_hashtable(set_size: int) -> Array:
+         perms = [Permutation(np.array(p)) for p in permutations(range(set_size))]
+         index = {v: i for i, v in enumerate(perms)}
+         out = []
+         for perm in perms:
+             # entry (x, y) holds the index of the group element x^{-1} y
+             out.append([index[perm.inverse() * _perm] for _perm in perms])
+         return np.array(out)
+ 
+ 
+ def group_linear(weight: Array,
+                  bias: Array,
+                  input: Array,
+                  set_size: int
+                  ) -> Array:
+     """
+     weight: {output_dim}x{input_dim}x{group_size}
+     bias: {output_dim}x{group_size}
+     input: {input_dim}x{group_size}
+     out: {output_dim}x{group_size}
+     set_size: number of set elements (read from the surrounding scope in the original code)
+     """
+     hashtable = Permutation.create_hashtable(set_size)  # {group_size}x{group_size}
+     g = np.arange(hashtable.shape[0])
+     out = []
+     for _weight in weight:
+         tmp0 = []
+         for y in g:
+             tmp1 = []
+             for w, f in zip(_weight, input):
+                 tmp2 = []
+                 for x in g:
+                     tmp2.append(w[x] * f[hashtable[x, y]])
+                 tmp1.append(sum(tmp2))
+             tmp0.append(sum(tmp1))
+         out.append(tmp0)
+     return np.array(out) + bias
+ ```
+ 
+ Listing 2: NumPy implementation of a group Cβˆ—-algebra linear layer used in Section 4.2.
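+ As a usage example (continuing from Listing 2 with small illustrative dimensions), a set of two elements gives the symmetric group of order two, so every group Cβˆ—-algebra-valued entry is a vector of length two:
+
+ ```
+ rng = np.random.default_rng(0)
+ set_size = 2                  # the symmetric group S_2 has 2! = 2 elements
+ group_size = 2
+ output_dim, input_dim = 2, 3
+ weight = rng.standard_normal((output_dim, input_dim, group_size))
+ bias = rng.standard_normal((output_dim, group_size))
+ x = rng.standard_normal((input_dim, group_size))
+ y = group_linear(weight, bias, x, set_size)
+ print(y.shape)  # (2, 2)
+ ```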
845
+
846
+ Table 5: Average PSNR over synthesized views. The specified poses of the views are unseen during training.
847
+
848
+ | Commutative | Noncommutative |
849
+ |---------------|------------------|
850
+ | 18.40 ± 4.30  | 25.22 ± 1.45     |
851
+
852
+ ![18_image_0.png](18_image_0.png)
853
+
854
+ Figure 7: Average PSNR over implicit representations of the image of commutative and noncommutative Cβˆ—-algebra nets trained on five cat pictures (top) and reconstructions of the ground truth image at every 500 iterations (bottom).
856
+
857
+ Figure 8: Synthesized views of implicit representations of a chair (commutative vs. noncommutative).
858
+
859
+ ![18_image_1.png](18_image_1.png)
860
+
861
+ Figure 9: Test accuracy curves of the group Cβˆ—-algebra net and the DeepSet models on the sum-of-digits task over different learning rates.
qp6OQwLsF7/qp6OQwLsF7_meta.json ADDED
@@ -0,0 +1,25 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "languages": null,
3
+ "filetype": "pdf",
4
+ "toc": [],
5
+ "pages": 19,
6
+ "ocr_stats": {
7
+ "ocr_pages": 0,
8
+ "ocr_failed": 0,
9
+ "ocr_success": 0,
10
+ "ocr_engine": "none"
11
+ },
12
+ "block_stats": {
13
+ "header_footer": 19,
14
+ "code": 2,
15
+ "table": 3,
16
+ "equations": {
17
+ "successful_ocr": 16,
18
+ "unsuccessful_ocr": 3,
19
+ "equations": 19
20
+ }
21
+ },
22
+ "postprocess_stats": {
23
+ "edit": {}
24
+ }
25
+ }