# Noncommutative C∗-Algebra Net: Learning Neural Networks With Powerful Product Structure In C∗-Algebra

Anonymous authors Paper under double-blind review

## Abstract

We propose a new generalization of neural networks with noncommutative C∗-algebra. An important feature of C∗-algebras is their noncommutative structure of products, but the existing C∗-algebra net frameworks have only considered commutative C∗-algebras. We show that this noncommutative structure of C∗-algebras induces powerful effects in learning neural networks. Our framework has a wide range of applications, such as learning multiple related neural networks simultaneously with interactions and learning invariant features with respect to group actions. Numerical experiments illustrate the validity of our framework and its potential power.

## 1 Introduction

Generalization of the parameter space of neural networks beyond real numbers brings intriguing possibilities. For instance, using complex numbers (Hirose, 1992; Nishikawa et al., 2005; Amin et al., 2008; Yadav et al., 2005; Trabelsi et al., 2018; Lee et al., 2022) or quaternion numbers (Nitta, 1995; Arena et al., 1997; Zhu et al., 2018; Gaudet & Maida, 2018) as neural network parameters is more intuitive and effective, particularly in signal processing, computer vision, and robotics domains. Clifford algebra, the generalization of these numbers, allows more flexible geometrical data processing and is applied to neural networks to handle rich geometric relationships in data (Pearson & Bisset, 1994; Buchholz, 2005; Buchholz & Sommer, 2008; Rivera-Rovelo et al., 2010; Zang et al., 2022; Brandstetter et al., 2022; Ruhe et al., 2023b;a). Different from these approaches focusing on the geometric perspective of parameter values, an alternative direction of generalization is to use function-valued parameters (Rossi & Conan-Guez, 2005; Thind et al., 2023), broadening the applications of neural networks to functional data. Hashimoto et al. (2022) proposed the C∗-algebra net, which generalizes neural network parameters to (commutative) C∗-algebra, a generalization of complex numbers (Murphy, 1990; Hashimoto et al., 2021). They adopted continuous functions on a compact space as a commutative C∗-algebra and presented a new interpretation of function-valued neural networks: infinitely many real-valued or complex-valued neural networks are continuously combined into a single C∗-algebra net. For example, networks for the same task with different training datasets or different initial parameters can be combined continuously, which enables efficient learning using shared information among the infinitely many combined networks. Such interaction among networks is similar to learning from related tasks, as in ensemble learning (Dong et al., 2020; Ganaie et al., 2022) and multitask learning (Zhang & Yang, 2022). However, because the product structure in the C∗-algebra that Hashimoto et al. (2022) focus on is commutative, such networks cannot take advantage of the rich product structures of C∗-algebras and, instead, require specially designed loss functions to induce the necessary interaction.

To fully exploit rich product structures, we propose a new generalization of neural networks with noncommutative C∗-algebra. Typical examples of C∗-algebras include the space of diagonal matrices, which is commutative with respect to the matrix product, and the space of square matrices, which is noncommutative.

Specifically, in the case of diagonal matrices, their product is simply computed by multiplying each diagonal element independently, and thus they commute. On the other hand, the product of two square nondiagonal matrices is the sum of products between different elements, and each resulting diagonal element depends on the other diagonal elements through the nondiagonal elements, which can be interpreted as interaction among the diagonal elements.

Such product structures derived from noncommutativity are powerful when used as neural network parameters. Keeping the C∗-algebra of matrices as an example, a neural network with nondiagonal-matrix parameters can naturally induce interactions among multiple neural networks with real- or complex-valued parameters by regarding each diagonal element as a parameter of such a network. Because the interactions are encoded in the parameters, specially designed loss functions to induce interactions are unnecessary for such noncommutative C∗-algebra nets. This property clearly contrasts the proposed framework with the existing commutative C∗-algebra nets. Another example is a neural network with group C∗-algebra parameters, which is naturally group-equivariant without designing special network architectures.

Our main contributions in this paper are summarized as follows:

- We generalize the commutative C∗-algebra net proposed by Hashimoto et al. (2022) to noncommutative C∗-algebras, which can take advantage of the noncommutative product structure in the C∗-algebra when learning neural networks.

- We present two examples of the general noncommutative C∗-algebra net: the C∗-algebra net over matrices and the group C∗-algebra net. A C∗-algebra net over matrices can naturally combine multiple standard neural networks with interactions. Neural networks with group C∗-algebra parameters are naturally group-equivariant without modifying the network structure.

- Numerical experiments illustrate the validity of these noncommutative C∗-algebra nets, including the interactions among neural networks.

We emphasize that C∗-algebra is a powerful tool for neural networks, and our work provides important new perspectives on its application.

## 2 Background

In this section, we review the mathematical background of C∗-algebra required for this paper and the existing C∗-algebra net. For more theoretical details of C∗-algebras, see, for example, Murphy (1990).

## 2.1 C∗-Algebra

C∗-algebra is a generalization of the space of complex values. It has structures of the product, involution ∗, and norm.

**Definition 1 (C∗-algebra)** A set A is called a C∗-algebra if it satisfies the following conditions:

1. A is an algebra over C and equipped with a bijection (·)∗: A → A that satisfies the following conditions for α, β ∈ C and c, d ∈ A:
   - (αc + βd)∗ = ᾱc∗ + β̄d∗,
   - (cd)∗ = d∗c∗,
   - (c∗)∗ = c.

2. A is a normed space with ∥·∥, and for c, d ∈ A, ∥cd∥ ≤ ∥c∥∥d∥ holds. In addition, A is complete with respect to ∥·∥.

3. For c ∈ A, ∥c∗c∥ = ∥c∥² holds (C∗-property).

The product structure in C∗-algebras can be both commutative and noncommutative.

**Example 1 (Commutative C∗-algebra)** Let A be the space of continuous functions on a compact Hausdorff space Z. We can regard A as a C∗-algebra by setting

- Product: Pointwise product of two functions, i.e., for a1, a2 ∈ A, (a1a2)(z) = a1(z)a2(z).
- Involution: Pointwise complex conjugate, i.e., for a ∈ A, $a^*(z) = \overline{a(z)}$.
- Norm: Sup norm, i.e., for a ∈ A, ∥a∥ = sup_{z∈Z} |a(z)|.

In this case, the product in A is commutative.
**Example 2 (Noncommutative C∗-algebra)** Let A be the space of bounded linear operators on a Hilbert space H, which is denoted by B(H). We can regard A as a C∗-algebra by setting

- Product: Composition of two operators,
- Involution: Adjoint of an operator,
- Norm: Operator norm, i.e., for a ∈ A, ∥a∥ = sup_{v∈H, ∥v∥_H=1} ∥av∥_H.

Here, ∥·∥_H is the norm in H. In this case, the product in A is noncommutative. Note that if H is a d-dimensional space for a finite natural number d, then elements in A are d by d matrices.
**Example 3 (Group C∗-algebra)** The group C∗-algebra on a group G, denoted as C∗(G), is the set of maps from G to C equipped with the following product, involution, and norm:

- Product: $(a \cdot b)(g) = \int_G a(h)b(h^{-1}g)\,\mathrm{d}\lambda(h)$ for g ∈ G,
- Involution: $a^*(g) = \Delta(g^{-1})\overline{a(g^{-1})}$ for g ∈ G,
- Norm: ∥a∥ = sup_{[π]∈Ĝ} ∥π(a)∥,

where ∆(g) is a positive number satisfying λ(Eg) = ∆(g)λ(E) for the Haar measure λ on G. In addition, Ĝ is the set of equivalence classes of irreducible unitary representations of G. Note that if G is discrete, then λ is the counting measure on G. In this paper, we focus mainly on the product structure of C∗(G). For details of the Haar measure and representations of groups, see Kirillov (1976). If G = Z/pZ, then C∗(G) is C∗-isomorphic to the C∗-algebra of circulant matrices (Hashimoto et al., 2023). Note also that if G is noncommutative, then C∗(G) can also be noncommutative.

## 2.2 C∗-Algebra Net

Hashimoto et al. (2022) proposed generalizing real-valued neural network parameters to commutative C∗-algebra-valued ones, which enables us to represent multiple real-valued models as a single C∗-algebra net.

Here, we briefly review the existing (commutative) C∗-algebra net. Let A = C(Z), the commutative C∗-algebra of continuous functions on a compact Hausdorff space Z. Let H be the depth of the network and N0, . . . , NH be the width of each layer. For i = 1, . . . , H, set Wi: A^{Ni−1} → A^{Ni} as an affine transformation defined with an Ni × Ni−1 A-valued matrix and an A-valued bias vector in A^{Ni}. In addition, set a nonlinear activation function σi: A^{Ni} → A^{Ni}. The commutative C∗-algebra net f: A^{N0} → A^{NH} is defined as

$$f=\sigma_{H}\circ W_{H}\circ\cdots\circ\sigma_{1}\circ W_{1}.\tag{1}$$

By generalizing neural network parameters to functions, we can combine multiple standard (real-valued) neural networks continuously, which enables them to learn efficiently. In this paper, each real-valued network in a C∗-algebra net is called a sub-model. We show an example of commutative C∗-algebra nets below. To simplify the notation, we focus on the case where the network does not have biases. However, the same arguments are valid for the case where the network has biases.

## 2.2.1 The Case Of Diagonal Matrices

If Z is a finite set, then A = {a ∈ C^{d×d} | a is a diagonal matrix}. The C∗-algebra net f on A corresponds to d separate real or complex-valued sub-models. In the case of A = C(Z), we can consider that infinitely many networks are continuously combined, and the C∗-algebra net f with diagonal matrices is a discretization of the C∗-algebra net over C(Z). Indeed, denote by x^j the vector composed of the jth diagonal elements of x ∈ A^N, which is defined as the vector in C^N whose kth element is the jth diagonal element of the A-valued kth element of x. Assume the activation function σi: A^N → A^N is defined as σi(x)^j = σ̃i(x^j) for some σ̃i: C^N → C^N. Since the jth diagonal element of a1a2 for a1, a2 ∈ A is the product of the jth diagonal elements of a1 and a2, we have

$$f(x)^{j}=\tilde{\sigma}_{H}\circ W_{H}^{j}\circ\cdots\circ\tilde{\sigma}_{1}\circ W_{1}^{j},\tag{2}$$

where W^j_i ∈ C^{Ni×Ni−1} is the matrix whose (k, l)-entry is the jth diagonal of the (k, l)-entry of Wi ∈ A^{Ni×Ni−1}. Figure 1 (a) schematically shows the C∗-algebra net over diagonal matrices.

![3_image_1.png](3_image_1.png)

(a) C∗-algebra net over diagonal matrices (commutative). The blue parts illustrate the zero parts (nondiagonal parts) of the diagonal matrices. Each diagonal element corresponds to an individual sub-model.

![3_image_0.png](3_image_0.png)

(b) C∗-algebra net over nondiagonal matrices (noncommutative). Unlike the case of diagonal matrices, the nondiagonal parts (colored in orange) are not zero. The nondiagonal elements induce the interactions among multiple sub-models.

Figure 1: Difference between commutative and noncommutative C∗-algebra nets from the perspective of interactions among sub-models.
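To make Equation (2) concrete, the following minimal sketch (our own illustration, not the authors' implementation; the array shapes and variable names are assumptions) represents one A-valued linear layer with A the diagonal d × d matrices and checks that it acts as d independent real-valued layers.

```python
import jax.numpy as jnp
from jax import random

d = 3                 # number of sub-models (size of the diagonal matrices)
N_in, N_out = 4, 2    # layer widths

# A-valued weight matrix W in A^{N_out x N_in}: each entry is a diagonal d x d matrix,
# stored here by its d diagonal values. Likewise for the A-valued input vector x.
W_diag = random.normal(random.PRNGKey(0), (N_out, N_in, d))
x_diag = random.normal(random.PRNGKey(1), (N_in, d))

# A-valued matrix-vector product: (Wx)_k = sum_l W_{k,l} x_l, with products taken in A.
# For diagonal matrices the product acts slot-wise, so slot j never sees slot j'.
y_diag = jnp.einsum('kld,ld->kd', W_diag, x_diag)

# Equivalent view: d separate real-valued layers W^j x^j, as in Equation (2).
y_split = jnp.stack([W_diag[:, :, j] @ x_diag[:, j] for j in range(d)], axis=-1)
assert jnp.allclose(y_diag, y_split)
```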

## 3 Noncommutative C∗-Algebra Net

Although the existing C∗-algebra net provides a framework for applying C∗-algebra to neural networks, it focuses on commutative C∗-algebras, whose product structure is simple. Therefore, we generalize the existing commutative C∗-algebra net to noncommutative C∗-algebras. Since the product structures in noncommutative C∗-algebras are more complicated than those in commutative C∗-algebras, they enable neural networks to learn features of data more efficiently. For example, if we focus on the C∗-algebra of matrices, then the neural network parameters describe interactions between multiple real-valued sub-models (see Section 3.1.1).

Let A be a general C∗-algebra and consider the network f in the same form as Equation (1). We emphasize that in our framework, the choice of A is not restricted to a commutative C∗-algebra. We list examples of A and their validity for learning neural networks below.

## 3.1 Examples Of C∗-Algebras For Neural Networks

By presenting several examples of C∗-algebras, we show that the C∗-algebra net is a flexible and unified framework that incorporates C∗-algebra into neural networks. As mentioned in the previous section, we focus on the case where the network does not have biases for simplicity in this subsection.

## 3.1.1 Nondiagonal Matrices

Let A = C^{d×d}. Note that A is a noncommutative C∗-algebra. Of course, it is possible to consider matrix-valued data, but we focus on the perspective of interaction among sub-models following Section 2.2. In this case, unlike the network (2), the jth diagonal element of a1a2a3 for a1, a2, a3 ∈ A depends not only on the jth diagonal element of a2, but also on the other diagonal elements of a2. Thus, f(x)^j depends not only on the sub-model corresponding to the jth diagonal element discussed in Section 2.2.1, but also on other sub-models. The nondiagonal elements in A induce interactions between d real or complex-valued sub-models.

Interaction among sub-models could be related to decentralized peer-to-peer machine learning (Vanhaesebrouck et al., 2017; Bellet et al., 2018), where each agent learns without sharing data with others, while improving its ability by leveraging other agents' information through communication. In our case, an agent corresponds to a sub-model, and communication is achieved by interaction. We will see the effect of the interaction induced by the nondiagonal elements of A numerically in Section 4.1. Figure 1 (b) schematically shows the C∗-algebra net over nondiagonal matrices.
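The following sketch (again our own illustration with assumed shapes and names) contrasts this with the diagonal case: when the entries of the weight matrix are full d × d matrices, the jth diagonal slot of the output depends on all diagonal slots of the input, i.e., the sub-models interact.

```python
import jax.numpy as jnp
from jax import random

d, N_in, N_out = 3, 4, 2

# A-valued weight matrix with full (nondiagonal) d x d entries: shape (N_out, N_in, d, d).
W = random.normal(random.PRNGKey(0), (N_out, N_in, d, d))

# Input whose A-valued entries are diagonal, carrying one sub-model per diagonal slot.
x_diag = random.normal(random.PRNGKey(1), (N_in, d))
x = jnp.einsum('ld,de->lde', x_diag, jnp.eye(d))   # embed as diagonal matrices

# A-valued matrix-vector product; the entrywise product is now a matrix product,
# so off-diagonal entries of W mix the sub-models, unlike Equation (2).
y = jnp.einsum('klde,lef->kdf', W, x)
print(jnp.diagonal(y, axis1=-2, axis2=-1).shape)   # (N_out, d): d interacting sub-models
```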

## 3.1.2 Block Diagonal Matrices

Let A = {a ∈ C^{d×d} | a = diag(a1, . . . , am), ai ∈ C^{di×di}}. The product of two block diagonal matrices a = diag(a1, . . . , am) and b = diag(b1, . . . , bm) can be written as

$$ab=\mathrm{diag}(a_{1}b_{1},\ldots,a_{m}b_{m}).$$

In a similar manner to Section 2.2.1, we denote by x^j the N by dj matrix composed of the jth diagonal blocks of x ∈ A^N. Assume the activation function σi: A^N → A^N is defined as σi(x) = diag(σ̃^1_i(x^1), . . . , σ̃^m_i(x^m)) for some σ̃^j_i: C^{N×dj} → C^{N×dj}. Then, we have

$$f(x)^{j}=\tilde{\sigma}_{H}^{j}\circ W_{H}^{j}\circ\cdots\circ\tilde{\sigma}_{1}^{j}\circ W_{1}^{j},\tag{3}$$

where W^j_i ∈ (C^{dj×dj})^{Ni×Ni−1} is the block matrix whose (k, l)-entry is the jth diagonal block of the (k, l)-entry of Wi ∈ A^{Ni×Ni−1}. In this case, we have m groups of sub-models, each of which is composed of dj interacting sub-models as described in Section 3.1.1. Indeed, the block diagonal case generalizes the diagonal and nondiagonal cases stated in Sections 2.2.1 and 3.1.1. If dj = 1 for all j = 1, . . . , m, then the network (3) reduces to the network (2) with diagonal matrices. If m = 1 and d1 = d, then the network (3) reduces to the network with d by d nondiagonal matrices.
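As a quick numerical check (an illustrative sketch with assumed block sizes, not taken from the paper), the product of two block-diagonal matrices is again block-diagonal with blockwise products, so sub-models never mix across blocks:

```python
import jax.numpy as jnp
from jax import random

def block_diag2(p, q):
    """Assemble diag(p, q) from two square blocks."""
    z1 = jnp.zeros((p.shape[0], q.shape[1]))
    z2 = jnp.zeros((q.shape[0], p.shape[1]))
    return jnp.block([[p, z1], [z2, q]])

keys = random.split(random.PRNGKey(0), 4)
a1, b1 = random.normal(keys[0], (2, 2)), random.normal(keys[1], (2, 2))
a2, b2 = random.normal(keys[2], (3, 3)), random.normal(keys[3], (3, 3))

a, b = block_diag2(a1, a2), block_diag2(b1, b2)
# ab = diag(a1 b1, a2 b2): the blocks multiply independently.
assert jnp.allclose(a @ b, block_diag2(a1 @ b1, a2 @ b2), atol=1e-5)
```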

## 3.1.3 Circulant Matrices

Let A = {a ∈ C^{d×d} | a is a circulant matrix}. Here, a circulant matrix a is the matrix represented as

$$a=\begin{bmatrix}a_{1}&a_{d}&\cdots&a_{2}\\ a_{2}&a_{1}&\cdots&a_{3}\\ \vdots&\vdots&\ddots&\vdots\\ a_{d}&a_{d-1}&\cdots&a_{1}\end{bmatrix}$$

for a1, . . . , ad ∈ C. Note that in this case, A is commutative. Circulant matrices are diagonalized by the discrete Fourier matrix as follows (Davis, 1979). We denote by F the discrete Fourier transform matrix, whose (i, j)-entry is ω^{(i−1)(j−1)}/√d, where ω = e^{2π√−1/d}.

Lemma 1 Any circulant matrix a is decomposed as a = FΛaF∗, where

$$\Lambda_{a}=\mathrm{diag}\left(\sum_{i=1}^{d}a_{i}\omega^{(i-1)\cdot0},\ldots,\sum_{i=1}^{d}a_{i}\omega^{(i-1)(d-1)}\right).$$

Since ab = FΛaΛbF∗ for a, b ∈ A, the product of a and b corresponds to the multiplication of each Fourier component of a and b. Assume the activation function σi: A^N → A^N is defined such that (F∗σi(x)F)^j equals σ̂i((F∗xF)^j) for some σ̂i: C^N → C^N. Then, we obtain the network

$$(F^{*}f(x)F)^{j}=\hat{\tilde{\sigma}}_{H}\circ\hat{W}_{H}^{j}\circ\cdots\circ\hat{\tilde{\sigma}}_{1}\circ\hat{W}_{1}^{j},\tag{4}$$

where Ŵ^j_i ∈ C^{Ni×Ni−1} is the matrix whose (k, l)-entry is (F∗w_{i,k,l}F)^j, and w_{i,k,l} is the (k, l)-entry of Wi ∈ A^{Ni×Ni−1}. The jth sub-model of the network (4) corresponds to the network of the jth Fourier component.

Remark 1 The jth sub-model of the network (4) does not interact with the sub-models of the other Fourier components. This corresponds to the fact that A is commutative in this case. Analogous to Section 3.1.1, if we set A as noncirculant matrices, then we obtain interactions between sub-models corresponding to different Fourier components.
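A small numerical check of Lemma 1 and of the componentwise product (our illustration; we compute Λa numerically as F∗aF rather than from the closed-form expression, and the helper names are assumptions):

```python
import jax.numpy as jnp
from jax import random

d = 4

def circulant(c):
    """Circulant matrix whose first column is c = (a_1, ..., a_d)."""
    idx = (jnp.arange(d)[:, None] - jnp.arange(d)[None, :]) % d
    return c[idx]

omega = jnp.exp(2j * jnp.pi / d)
F = omega ** (jnp.arange(d)[:, None] * jnp.arange(d)[None, :]) / jnp.sqrt(d)

a = circulant(random.normal(random.PRNGKey(0), (d,))).astype(jnp.complex64)
b = circulant(random.normal(random.PRNGKey(1), (d,))).astype(jnp.complex64)

# Fourier components: F* a F is diagonal for every circulant a.
La = jnp.diag(jnp.diagonal(F.conj().T @ a @ F))
Lb = jnp.diag(jnp.diagonal(F.conj().T @ b @ F))

# a = F Λ_a F* (Lemma 1), and ab = F Λ_a Λ_b F*: the product of circulant
# matrices multiplies the Fourier components independently.
assert jnp.allclose(a, F @ La @ F.conj().T, atol=1e-4)
assert jnp.allclose(a @ b, F @ (La @ Lb) @ F.conj().T, atol=1e-4)
```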

## 3.1.4 Group C∗-Algebra On A Symmetric Group

Let G be the symmetric group on the set {1, . . . , d} and let A = C∗(G). Note that since G is noncommutative, C∗(G) is also noncommutative. Then, the output f(x) ∈ A^{NH} is a C^{NH}-valued map on G. Using the product structure introduced in Example 3, we can construct a network that takes the permutation of data into account. Indeed, an element w ∈ A of a weight matrix W ∈ A^{Ni−1×Ni} is a function on G. Thus, w(g) describes the weight corresponding to the permutation g ∈ G. Since the product of x ∈ C∗(G) and w is defined as (wx)(g) = Σ_{h∈G} w(h)x(h⁻¹g), by applying W, all the weights corresponding to the permutations affect the input. For example, let z ∈ R^d and set x ∈ C∗(G) as x(g) = g · z, where g · z is the action of g on z, i.e., the permutation of z with respect to g. Then, we can input all the patterns of permutations of z simultaneously, and by virtue of the product structure in C∗(G), the network is learned with the interaction among these permutations. Regarding the output, if the network is learned so that the outputs y become constant functions on G, i.e., y(g) = c for some constant c, then c is invariant with respect to g, i.e., invariant with respect to the permutation. We will numerically investigate the application of the group C∗-algebra net to permutation-invariant problems in Section 4.2.
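For a finite G, the product in C∗(G) is a discrete group convolution. The following sketch (our own illustration; the permutation encoding and helper names are assumptions) implements this product for the symmetric group on {1, . . . , d} and numerically checks the equivariance property discussed in Remark 2 below.

```python
import itertools
import jax.numpy as jnp
from jax import random

d = 3
G = list(itertools.permutations(range(d)))      # symmetric group S_d, #G = d!
index = {g: i for i, g in enumerate(G)}

def compose(g, h):
    """(g o h)(i) = g(h(i))."""
    return tuple(g[h[i]] for i in range(d))

def inverse(g):
    inv = [0] * d
    for i, gi in enumerate(g):
        inv[gi] = i
    return tuple(inv)

def convolve(w, x):
    """Group-algebra product (wx)(g) = sum_h w(h) x(h^{-1} g); w, x are (#G,) arrays."""
    return jnp.array([
        sum(w[index[h]] * x[index[compose(inverse(h), g)]] for h in G)
        for g in G
    ])

w = random.normal(random.PRNGKey(0), (len(G),))
x = random.normal(random.PRNGKey(1), (len(G),))

# Right translation x(g) -> x(gh) commutes with the convolution (equivariance).
h = G[2]
x_shift = jnp.array([x[index[compose(g, h)]] for g in G])
y_shift = jnp.array([convolve(w, x)[index[compose(g, h)]] for g in G])
assert jnp.allclose(convolve(w, x_shift), y_shift, atol=1e-4)
```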

Remark 2 If the activation function σ is defined as σ(x)(g) = σ(x(g)), i.e., applied elementwise to x, then the network f is permutation equivariant. That is, if the input x(g) is replaced by x(gh) for some h ∈ G, then the output f(x)(g) is replaced by f(x)(gh). This is because the product in C∗(G) is defined as a convolution. This feature of the convolution has been studied for group equivariant neural networks (Lenssen et al., 2018; Cohen et al., 2019; Sannai et al., 2021; Sonoda et al., 2022). The above setting of the C∗-algebra net provides us with a design of group equivariant networks from the perspective of C∗-algebra.

Remark 3 Since the number of elements of G is d!, elements in C∗(G), which are functions on G, are represented as d!-dimensional vectors. For the case where d is large, we need a method for efficient computation, which is future work.
## 3.1.5 Bounded Linear Operators On A Hilbert Space

For functional data, we can also set A as an infinite-dimensional space. Using infinite-dimensional C
∗-algebra for analyzing functional data has been proposed (Hashimoto et al., 2021). We can also adopt this idea for neural networks. Let A = B(L
2(Ω)) for a measure space Ω. Set A0 = {a *∈ A |* a is a multiplication operator}.

Here, a multiplication operator a is a linear operator that is defined as av = v · u for some u ∈ L∞(Ω).

The space A0 is a generalization of the space of diagonal matrices to the infinite-dimensional space. If we restrict elements of weight matrices to A0, then we obtain infinitely many sub-models without interactions.

Since outputs are in A
NH
0, we can obtain functional data as outputs. Similar to the case of matrices (see Section 3.1.1), by setting elements of weight matrices as elements in A, we can take advantage of interactions among infinitely many sub-models.

## 3.2 Approximation Of Functions With Interactions By C∗-Algebra Net

We observe what kind of functions the C∗-algebra net can approximate. In this subsection, we show the universality of C∗-algebra nets, which describes the representation power of the models. We focus on the case of A = C^{d×d}. Consider a shallow network f: A^{N0} → A defined as f(x) = W2^∗σ(W1x + b), where W1 ∈ A^{N1×N0}, W2 ∈ A^{N1}, and b ∈ A^{N1}. Let f̃: A^{N0} → A be a function of the form f̃(x) = [Σ_{j=1}^d f_{kj}(x^l)]_{kl}, where f_{kj}: C^{N0d} → R. Here, we abuse the notation and denote by x^l ∈ C^{N0d} the lth column of x regarded as an N0d by d matrix. Assume f_{kj} is represented as

$$f_{kj}(x)=\int_{\mathbb{R}}\int_{\mathbb{R}^{N_{0}d}}T_{kj}(w,b)\sigma(w^{*}x+b)\,\mathrm{d}w\,\mathrm{d}b\tag{5}$$

for some T_{kj}: R^{N0d} × R → R. By the theory of the ridgelet transform, such a T_{kj} exists for most realistic settings (Candès, 1999; Sonoda & Murata, 2017). For example, Sonoda & Murata (2017) showed the following result.

Proposition 1 Let S be the space of rapidly decreasing functions on R and S′_0 be the dual space of the Lizorkin distribution space on R. Assume a function f̃ has the form f̃(x) = [Σ_{j=1}^d f_{kj}(x^l)]_{kl}, and f_{kj} and f̂_{kj} are in L¹(R^{N0d}), where f̂ represents the Fourier transform of a function f. Assume in addition that σ is in S′_0 and that there exists ψ ∈ S such that ∫_R ψ̂(x)σ̂(x)/|x| dx is nonzero and finite. Then, there exists T_{kj}: R^{N0d} × R → R such that f_{kj} admits a representation of Equation (5).

Here, the Lizorkin distribution space is defined as S_0 = {ϕ ∈ S | ∫_R x^k ϕ(x) dx = 0 for all k ∈ N}. Note that ReLU is in S′_0. We discretize Equation (5) by replacing the Lebesgue measures with Σ_{i=1}^{N1} δ_{wij} and Σ_{i=1}^{N1} δ_{bij}, where δ_w is the Dirac measure centered at w. Then, the (k, l)-entry of f̃(x) is written as

$$\sum_{j=1}^{d}\sum_{i=1}^{N_{1}}T_{kj}(w_{ij},b_{ij})\sigma(w_{ij}^{*}x^{l}+b_{ij}).$$

Setting the ith element of W2 ∈ A^{N1} as [T_{kj}(w_{ij}, b_{ij})]_{kj}, the (i, m)-entry of W1 ∈ A^{N1×N0} as [(w_{ij})_{md+l}]_{jl}, and the ith element of b ∈ A^{N1} as [b_{ij}]_{jl}, we obtain

$$\sum_{j=1}^{d}\sum_{i=1}^{N_{1}}T_{kj}(w_{ij},b_{ij})\sigma(w_{ij}^{*}x^{l}+b_{ij})=(W_{2}^{k})^{*}\sigma(W_{1}x^{l}+b^{l}),$$

which is the (k, l)-entry of f(x). As a result, any function of the form f̃(x) = [Σ_{j=1}^d f_{kj}(x^l)]_{kl} satisfying the assumptions in Proposition 1 can be represented as a C∗-algebra net.

Remark 4 As we discussed in Sections 2.2.1 and 3.1.1, a C∗-algebra net over matrices can be regarded as d interacting sub-models. The above argument shows that the lth columns of f(x) and f̃(x) depend only on x^l. Thus, in this case, if we input data x^l corresponding to the lth sub-model, then the output is obtained as the lth column of the A-valued output f(x). On the other hand, the weight matrices W1 and W2 and the bias b are used commonly in providing the outputs for every sub-model, i.e., W1, W2, and b are learned using data corresponding to all the sub-models. Therefore, W1, W2, and b induce interactions among the sub-models.

## 4 Experiments

In this section, we numerically demonstrate the abilities of noncommutative C∗-algebra nets, using C∗-algebra nets over nondiagonal matrices to highlight interaction and group C∗-algebra nets to highlight their equivariance property.

We use C∗-algebra-valued multi-layer perceptrons (MLPs) to simplify the experiments. However, they can be naturally extended to other neural networks, such as convolutional neural networks.

The models were implemented with JAX (Bradbury et al., 2018). Experiments were conducted on an AMD EPYC 7543 CPU and an NVIDIA A100 GPU. See Appendix A.1 for additional information on the experiments.

## 4.1 C∗-Algebra Nets Over Matrices

In a noncommutative C∗-algebra net over matrices consisting of nondiagonal-matrix parameters, each sub-model is expected to interact with the others and thus improve performance compared with its commutative counterpart consisting of diagonal matrices. We demonstrate the effectiveness of such interaction using image classification and neural implicit representation (NIR) tasks in a setting similar to peer-to-peer learning, where the data are separated across sub-models.

See Section 3.1.1 for the notations. When training the jth sub-model (j = 1, 2, . . . , d), an original N0-dimensional input data point x = [x1, . . . , xN0] ∈ R^{N0} is converted to its corresponding representation x ∈ A^{N0} = R^{N0×d×d} such that x_{i,j,j} = xi for i = 1, 2, . . . , N0 and 0 otherwise. The loss between the NH-dimensional output of a C∗-algebra net y ∈ A^{NH} and the target t ∈ A^{NH} is computed as ℓ(y_{:,j,j}, t_{:,j,j}) + (1/2) Σ_{k, l≠j} (y_{k,j,l}² + y_{k,l,j}²), where ℓ is a certain loss function; we use the mean squared error (MSE) for image classification and the Huber loss for NIR. The second and third terms suppress the nondiagonal elements of the outputs toward 0. In both examples, we use leaky ReLU as an activation function and apply it only to the diagonal elements of the pre-activations.
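A minimal sketch of this data representation and per-sub-model loss (our own illustration of the description above; shapes and function names are assumptions, and ℓ is taken to be the MSE):

```python
import jax.numpy as jnp

def to_cstar_input(x, j, d):
    """Embed an N0-dimensional input of the j-th sub-model into the j-th diagonal slot of A^{N0}."""
    out = jnp.zeros((x.shape[0], d, d))
    return out.at[:, j, j].set(x)

def submodel_loss(y, t, j):
    """MSE on the j-th diagonal plus the penalty pushing nondiagonal outputs to 0."""
    diag_loss = jnp.mean((y[:, j, j] - t[:, j, j]) ** 2)
    row = jnp.sum(y[:, j, :] ** 2) - jnp.sum(y[:, j, j] ** 2)   # sum over l != j of y_{k,j,l}^2
    col = jnp.sum(y[:, :, j] ** 2) - jnp.sum(y[:, j, j] ** 2)   # sum over l != j of y_{k,l,j}^2
    return diag_loss + 0.5 * (row + col)
```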

## 4.1.1 Image Classification

We conduct experiments of image classification tasks using MNIST (Le Cun et al., 1998), KuzushijiMNIST (Clanuwat et al., 2018), and Fashion-MNIST (Xiao et al., 2017), which are composed of 10-class 28 × 28 gray-scale images. Each sub-model is trained on a mutually exclusive subset sampled from the original training data and then evaluated on the entire test data. Each subset is sampled to be balanced, i.e., each class has the same number of training samples. As a baseline, we use a commutative C
∗-algebra net over diagonal matrices, which consists of the same sub-models but cannot interact with other sub-models.

Both noncommutative and commutative models share hyperparameters: the number of layers was set to 4, the hidden size was set to 128, and the models were trained for 30 epochs. Table 1 shows average test accuracy. Accuracy can be reported in two distinct manners: the first approach averages the accuracy across individual sub-models ("Average"), and the other is to ensemble the logits of sub-models and then compute the accuracy ("Ensemble"). As can be seen, the noncommutative C
∗-algebra net consistently outperforms its commutative counterpart, which is significant, particularly when the number of sub-models is 40. Note that when the number of sub-models is 40, the size of the training dataset for each sub-model is 40 times smaller than the original one, and thus, the commutative C
∗-algebra net fails to learn.

Nevertheless, the noncommutative C
∗-algebra net retains performance mostly. Commutative C
∗-algebra net improves performance by ensembling, but it achieves inferior performance to both Average and Ensemble noncommutative C
∗-algebra net when the number of sub-models is larger than five. Such a performance improvement would be attributed to the fact that noncommutative models have more trainable parameters than commutative ones. Thus, we additionally compare commutative C
∗-algebra net with a width of 128 and noncommutative C
∗-algebra net with a width of 8, which have the same number of learnable parameters, when the total number of training data is set to 5000, ten times smaller than the experiments of Table 1.

As seen in Table 2, while the commutative C
∗-algebra net mostly fails to learn, the noncommutative C
∗-
algebra net learns successfully. These results suggest that the performance of noncommutative C
∗-algebra net cannot solely be explained by the number of learnable parameters: the interaction among sub-models provides essential capability.

Furthermore, Table 3 illustrates that related tasks help performance improvement through interaction.

Specifically, we prepare five sub-models per dataset, one of MNIST, Kuzushiji-MNIST, and Fashion-MNIST,
and train a single (non)commutative C
∗-algebra net consisting of 15 sub-models simultaneously. In addition to the commutative C
∗-algebra net, where sub-models have no interaction, and the noncommutative C
∗-algebra net, where each sub-model can interact with any other sub-models, we use a block-diagonal noncommutative C
∗-algebra net (see Section 3.1.2), where each sub-model can only interact with other submodels trained on the same dataset. Table 3 shows that the fully noncommutative C
∗-algebra net surpasses the block-diagonal one on Kuzushiji-MNIST and Fashion-MNIST, implying that not only intra-task interaction but also inter-task interaction helps performance gain. Note these results are not directly comparable with the values of Tables 1 and 3, due to dataset subsampling to balance class sizes (matching MNIST's smallest class).

## 4.1.2 Neural Implicit Representation

In the next experiment, we use a C
∗-algebra net over matrices to learn implicit representations of 2D images that map each pixel coordinate to its RGB colors (Sitzmann et al., 2020; Xie et al., 2022). Specifically, an input coordinate in [0, 1]2is transformed into a random Fourier feature in [−1, 1]320 and then converted into its C
∗-algebraic representation over matrices as an input to a C
∗-algebra net over matrices. Similar to the image classification task, we compare noncommutative NIRs with commutative NIRs, using the following hyperparameters: the number of layers to 6 and the hidden dimension to 256. These NIRs learn 128 × 128pixel images of ukiyo-e pictures from The Metropolitan Museum of Art1 and photographs of cats from the AFHQ dataset (Choi et al., 2020).

Figure 2 (top) shows the curves of the average PSNR (Peak Signal-to-Noise Ratio) of sub-models corresponding to the image below. Both commutative and noncommutative C
∗-algebra nets consist of five sub-models trained on five ukiyo-e pictures (see also Figure 6). PSNR, the quality measure, of the noncommutative NIR
1https://www.metmuseum.org/art/the-collection

datasets. "Average" reports the average accuracy of sub-models, and "Ensemble" ensembles the logits of

sub-models to compute accuracy. Interactions between sub-models that the noncommutative C

∗-algebra net

improves performance significantly when the number of sub-models is 40.

Dataset # sub-models Commutative C

∗-algebra nets Noncommutative C

∗-algebra nets

Average Ensemble Average Ensemble

| improves performance significantly when the number of sub-models is 40. Dataset # sub-models Commutative C ∗ -algebra nets Noncommutative C ∗ -algebra nets Average Ensemble Average Ensemble MNIST 5 0.963 ± 0.003 0.970 ± 0.001 0.970 ± 0.002 0.976 ± 0.001 10 0.937 ± 0.004 0.950 ± 0.000 0.956 ± 0.002 0.969 ± 0.000 20 0.898 ± 0.007 0.914 ± 0.002 0.937 ± 0.002 0.957 ± 0.001 40 0.605 ± 0.007 0.795 ± 0.010 0.906 ± 0.004 0.938 ± 0.001 Kuzushiji-MNIST 5 0.837 ± 0.003 0.871 ± 0.001 0.859 ± 0.003 0.888 ± 0.002 10 0.766 ± 0.008 0.793 ± 0.004 0.815 ± 0.007 0.859 ± 0.002 20 0.674 ± 0.011 0.710 ± 0.001 0.758 ± 0.007 0.817 ± 0.001 40 0.453 ± 0.026 0.532 ± 0.004 0.682 ± 0.008 0.767 ± 0.001 Fashion-MNIST 5 0.862 ± 0.001 0.873 ± 0.001 0.868 ± 0.002 0.882 ± 0.001 10 0.839 ± 0.003 0.850 ± 0.001 0.852 ± 0.004 0.871 ± 0.001 20 0.790 ± 0.010 0.796 ± 0.002 0.832 ± 0.005 0.858 ± 0.000 40 0.650 ± 0.018 0.674 ± 0.001 0.810 ± 0.005 0.841 ± 0.000   |
|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|

Table 1: Average test accuracy of commutative and noncommutative C
∗-algebra nets over matrices on test
Table 2: Average test accuracy commutative and noncommutative C
∗-algebra nets over matrices trained

on 5000 data points with 20 sub-models. The width of the noncommutative model is set to 8 so that the

number of learnable parameters is matched with its commutative counterpart.

Dataset Commutative Noncommutative

MNIST 0.155 ± 0.04 0.779 ± 0.02 Kuzushiji-MNIST 0.140 ± 0.03 0.486 ± 0.02

Fashion-MNIST 0.308 ± 0.05 0.673 ± 0.02

grows faster, and correspondingly, it learns the details of ground truth images faster than its commutative version (Figure 2 bottom). Noticeably, the noncommutative representations reproduce colors even at the early stage of learning, although the commutative ones remain monochrome after 500 iterations of training.

Along with the similar trends observed in the pictures of cats (Figure 3), these results further emphasize the effectiveness of the interaction. Longer-term results are presented in Figure 7. This INR for 2D images can be extended to represent 3D models. Figure 4 shows synthesized views of 3D
implicit representations using the same C
∗-algebra MLPs trained on three 3D chairs from the ShapeNet dataset (Chang et al., 2015). The presented poses are unseen during training. Again, the noncommutative INR reconstructs the chair models with less noisy artifacts, indicating that interaction helps efficient learning. See Appendices A.1 and A.2 for details and results.

## 4.2 Group C∗-Algebra Nets

As another experimental example of C∗-algebra nets, we showcase group C∗-algebra nets, which we introduced in Section 3.1.4. Group C∗-algebra nets take functions on a symmetric group as input and return functions on the group as output.

Refer to Section 3.1.4 for the notations. A group C∗-algebra net is trained on data {(x, y) ∈ A^{N0} × A^{NH}}, where x and y are N0- and NH-dimensional vector-valued functions. Practically, such functions may be represented as real tensors, e.g., x ∈ C^{N0×#G}, where #G is the size of G. Using the product between functions explained in Section 3.1.4 and element-wise addition, a linear layer, and consequently an MLP, on A can be constructed.

Following the C
∗-algebra nets over matrices, we use leaky ReLU for activations.

Table 3: Average test accuracy over five sub-models simultaneously trained on the three datasets. The

(fully) noncommutative C

∗-algebra net outperforms block-diagonal the noncommutative C

∗-algebra net on

Kuzushiji-MNIST and Fashion-MNIST, indicating that the interaction can leverage related tasks.

Dataset Commutative Block-diagonal Noncommutative

MNIST 0.956 ± 0.002 0.969 ± 0.002 0.970 ± 0.002 Kuzushiji-MNIST 0.745 ± 0.004 0.778 ± 0.006 0.796 ± 0.008

Fashion-MNIST 0.768 ± 0.007 0.807 ± 0.006 0.822 ± 0.002

![9_image_0.png](9_image_0.png) 

![9_image_1.png](9_image_1.png)

Figure 2: Average PSNR of implicit representations of the image below (top) and reconstructions of the ground truth image at every 100 iterations (bottom). The noncommutative C
∗-algebra net learns the geometry and colors of the image faster than its commutative counterpart.
One of the simplest tasks for the group C∗-algebra nets is to learn permutation-invariant representations, e.g., predicting the sum of given d digits. In this case, x is a function that outputs permutations of the features of d digits, and y(g) is a constant function that returns the sum of these digits for all g ∈ G. In this experiment, we use features of MNIST digits extracted by a pre-trained CNN as 32-dimensional vectors. Digits are selected so that their sum is less than 10 to simplify the problem, and the model is trained to classify the sum of the given digits using the cross-entropy loss. We set the number of layers to 4 and the hidden dimension to 32. For comparison, we prepare permutation-invariant and permutation-equivariant DeepSet models (Zaheer et al., 2017), which adopt special structures to induce permutation invariance or equivariance and contain the same number of floating-point parameters as the group C∗-algebra net.
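A sketch of how such a permutation-indexed input can be assembled (our illustration; the feature shapes are assumptions):

```python
import itertools
import jax.numpy as jnp
from jax import random

d, feat_dim = 3, 32
G = list(itertools.permutations(range(d)))      # all d! orderings of the digits

# Per-digit features, e.g., from the pre-trained CNN: shape (d, feat_dim).
z = random.normal(random.PRNGKey(0), (d, feat_dim))

# x(g) = g . z: stack the features in every permuted order; shape (#G, d * feat_dim).
x = jnp.stack([z[jnp.array(g)].reshape(-1) for g in G])

# The target y(g) is the same (the digits' sum) for every g, so the network is
# trained toward a permutation-invariant output.
```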

Table 4 displays the results of the task with various training dataset sizes when d = 3. What stands out in the table is that the group C
∗-algebra net consistently outperforms the DeepSet models by large margins, particularly when the number of training data is limited. Additionally, as shown in Figure 5, the group C
∗-algebra net converges with much fewer iterations than the DeepSet models. These results suggest that the inductive biases implanted by the product structure in the group C
∗-algebra net are effective.

## 5 Related Works

Applying algebra structures to neural networks has been studied. Quaternions are applied to, for example, spatial transformations, multi-dimensional signals, color images (Nitta, 1995; Arena et al., 1997; Zhu et al., 2018; Gaudet & Maida, 2018). Clifford algebra (or geometric algebra) is a generalization of quaternions,

![10_image_0.png](10_image_0.png)

Figure 3: Ground truth images and their implicit representations of commutative and noncommutative C
∗-
algebra nets after 500 iterations of training. The noncommutative C
∗-algebra net reproduces colors more faithfully.

![10_image_1.png](10_image_1.png)

Figure 4: Synthesized views of 3D implicit representations of commutative and noncommutative C
∗-algebra nets after 5000 iterations of training. The noncommutative C
∗-algebra net can produce finer details. Note that the commutative C
∗-algebra net could not synthesize the chair on the left.
and applying Clifford algebra to neural networks has also been investigated to extract geometric structure of data (Rivera-Rovelo et al., 2010; Zang et al., 2022; Brandstetter et al., 2022; Ruhe et al., 2023b;a). Hoffmann et al. (2020) considered neural networks with matrix-valued parameters for the parameter and computational efficiencies and for achieving extensive structures of neural networks. In this section, we discuss relationships and differences with the existing studies of applying algebras to neural networks.

Quaternion is a generalization of complex number. A quaternion is expressed as a + bi + cj + dk for a, b, c, d ∈ R. Here, i, j, and k are basis elements that satisfy i² = j² = k² = −1, ij = −ji = k, ki = −ik = j, and jk = −kj = i. Nitta (1995) and Arena et al. (1997) introduced and analyzed neural networks with quaternion-valued parameters. Since rotations in three-dimensional space can be represented with quaternions, they can be applied to control the position of robots (Fortuna et al., 1996). More recently, representing color images using quaternions and analyzing them with a quaternion version of a convolutional neural network was proposed and investigated (Zhu et al., 2018; Gaudet & Maida, 2018).

Table 4: Average test accuracy of an invariant DeepSet model, an equivariant DeepSet model, and a group C
∗-algebra net on test data of the sum-of-digits task after 100 epochs of training. The group C
∗-algebra net can learn from fewer data.

Dataset size Invariant DeepSet Equivariant DeepSet Group C
∗-algebra net

| Dataset size   | Invariant DeepSet   | Equivariant DeepSet   | Group C ∗ -algebra net   |
|----------------|---------------------|-----------------------|--------------------------|
| 1k             | 0.408 ± 0.014       | 0.571 ± 0.021         | 0.783 ± 0.016            |
| 5k             | 0.731 ± 0.026       | 0.811 ± 0.007         | 0.922 ± 0.003            |
| 10k            | 0.867 ± 0.021       | 0.836 ± 0.009         | 0.943 ± 0.005            |
| 50k            | 0.933 ± 0.005       | 0.862 ± 0.002         | 0.971 ± 0.001            |

![11_image_0.png](11_image_0.png)

![11_image_1.png](11_image_1.png) 

Figure 5: Average test accuracy curves of invariant DeepSet, equivariant DeepSet, and a group C
∗-algebra net trained on 10k data of the sum-of-digits task. The group C
∗-algebra net can learn more efficiently and effectively.
Clifford algebra is a generalization of quaternions and enables us to extract the geometric structure of data. It naturally unifies real numbers, vectors, complex numbers, quaternions, exterior algebras, and so on. For a vector space V equipped with a quadratic form Q and an orthonormal basis {e1*, . . . , e*n} of V, the Clifford algebra is constructed by the product ei1
· · · eik for 1 ≤ i1 < *· · ·*ik ≤ n and 0 ≤ k ≤ n. The product structure is defined by eiej = −ej ei and e 2 i = Q(ei). We have not only the vectors e1*, . . . , e*n, but also the elements whose forms are eiej (bivector), eiej ek (trivector), and so on. Using these different types of vectors, we can describe data in different fields. Brandstetter et al. (2022); Ruhe et al. (2023b) proposed to apply neural networks with Clifford algebra to solve a partial differential equation that involves different fields by describing the correlation of these fields using Clifford algebra. Group-equivariant networks with Clifford algebra have also been proposed for extracting features that are equivariant under group actions (Ruhe et al., 2023a). Zang et al. (2022) analyzed traffic data with residual networks with Clifford algebra-valued parameters for considering the correlation between them in both spatial and temporal domains. RiveraRovelo et al. (2010) approximate the surface of 2D or 3D objects using a network with Clifford algebra.

Hoffmann et al. (2020) considered generalizing neural network parameters to matrices. These networks can be effective regarding the parameter size and the computational costs. They also consider the flexibility of the design of the network with matrix-valued parameters. On the other hand, C
∗-algebra is a natural generalization of the space of complex numbers. An advantage of considering C
∗-algebra over other algebras is the straightforward generalization of notions related to neural networks. This is by virtue of the structure of involution, norm, and C
∗-property. For example, we have a generalization of Hilbert space by means of C
∗-algebra, which is called Hilbert C
∗-module (Lance, 1995). Since the input and output spaces are Hilbert spaces, the theory of Hilbert C
∗-module can be used in analyzing C
∗-algebra nets. We also have a natural generalization of reproducing kernel Hilbert space
(RKHS), which is called reproducing kernel Hilbert C
∗-module (RKHM) (Hashimoto et al., 2021). RKHM
enables us to connect C
∗-algebra nets with kernel methods (Hashimoto et al., 2023).

From the perspective of the application to neural networks, both C
∗-algebra and Clifford algebra enable us to induce interactions. Clifford algebra can describe the relationship among data components by using bivectors and trivectors. C
∗-algebra can also induce the interaction among data components using its product structure. Moreover, it can also induce interaction among sub-models, as we discussed in Section 3.1.1. Our framework also enables us to construct group equivariant neural networks as we discussed in Section 3.1.4.

## 6 Conclusion And Discussion

In this paper, we have generalized the space of neural network parameters to noncommutative C
∗-algebras.

Their rich product structures bring powerful properties to neural networks. For example, a C
∗-algebra net over nondiagonal matrices enables its sub-models to interact, and a group C
∗-algebra net learns permutationequivariant features. We have empirically demonstrated the validity of these properties in various tasks, image classification, neural implicit representation, and sum-of-digits tasks.

A current practical limitation of noncommutative C
∗-algebra nets is their computational cost. The noncommutative C
∗-algebra net over matrices used in the experiments requires a quadratic complexity to the number of sub-models for communication, in the same way as the "all-reduce" collective operation in distributed computation. This complexity could be alleviated by, for example, parameter sharing or introducing structures to nondiagonal elements by an analogy between self-attentions and their efficient variants. The group C
∗-algebra net even costs factorial time complexity to the size of the set, which becomes soon infeasible as the size of the set increases. Such an intensive complexity might be mitigated by representing parameters by parameter invariant/equivariant neural networks, such as DeepSet. Despite such computational burden, introducing noncommutative C
∗-algebra derives interesting properties otherwise impossible.

We leave further investigation on scalability for future research. An important and interesting future direction is the application of infinite-dimensional C
∗-algebras. In this paper, we focused mainly on finite-dimensional C
∗-algebras. We showed that the product structure in C
∗-
algebras is a powerful tool for neural networks, for example, learning with interactions and group equivariance (or invariance) even for the finite-dimensional case. Infinite-dimensional C
∗-algebra allows us to analyze functional data, such as time-series data and spatial data. We believe that applying infinite dimensional C
∗-
algebra can be an efficient tool to extract information from the data even when the observation is partially missing. Practical applications of our framework to functional data with infinite-dimensional C
∗-algebras are our future work. Our framework with noncommutative C
∗-algebras is general and has a wide range of applications. We believe that our framework opens up a new approach to learning neural networks.

## References

Md. Faijul Amin, Md. Monirul Islam, and Kazuyuki Murase. Single-layered complex-valued neural networks and their ensembles for real-valued classification problems. In *IJCNN*, 2008.

Paolo Arena, Luigi Fortuna, Giovanni Muscato, and Maria Gabriella Xibilia. Multilayer perceptrons to approximate quaternion valued functions. *Neural Networks*, 10(2):335–342, 1997.

Igor Babuschkin, Kate Baumli, Alison Bell, Surya Bhupatiraju, Jake Bruce, Peter Buchlovsky, David Budden, Trevor Cai, Aidan Clark, Ivo Danihelka, Claudio Fantacci, Jonathan Godwin, Chris Jones, Ross Hemsley, Tom Hennigan, Matteo Hessel, Shaobo Hou, Steven Kapturowski, Thomas Keck, Iurii Kemaev, Michael King, Markus Kunesch, Lena Martens, Hamza Merzic, Vladimir Mikulik, Tamara Norman, John Quan, George Papamakarios, Roman Ring, Francisco Ruiz, Alvaro Sanchez, Rosalia Schneider, Eren Sezener, Stephen Spencer, Srivatsan Srinivasan, Luyu Wang, Wojciech Stokowiec, and Fabio Viola. The DeepMind JAX Ecosystem, 2020. URL http://github.com/deepmind.

Aurélien Bellet, Rachid Guerraoui, Mahsa Taziki, and Marc Tommasi. Personalized and private peer-to-peer machine learning. In *AISTATS*, 2018.

James Bradbury, Roy Frostig, Peter Hawkins, Matthew James Johnson, Chris Leary, Dougal Maclaurin, George Necula, Adam Paszke, Jake VanderPlas, Skye Wanderman-Milne, and Qiao Zhang. JAX: composable transformations of Python+NumPy programs, 2018. URL http://github.com/google/jax.

Johannes Brandstetter, Rianne van den Berg, Max Welling, and Jayesh K Gupta. Clifford neural layers for PDE modeling. arXiv:2209.04934, 2022.

Sven Buchholz. *A Theory of Neural Computation with Clifford Algebras*. PhD thesis, 2005.

Sven Buchholz and Gerald Sommer. On Clifford neurons and Clifford multi-layer perceptrons. Neural Networks, 21(7):925–935, 2008.

Emmanuel J. Candès. Harmonic analysis of neural networks. *Applied and Computational Harmonic Analysis*,
6(2):197–218, 1999.

Angel X. Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, Jianxiong Xiao, Li Yi, and Fisher Yu. ShapeNet: An Information-Rich 3D Model Repository. Technical report, Stanford University - Princeton University —
Toyota Technological Institute at Chicago, 2015.

Yunjey Choi, Youngjung Uh, Jaejun Yoo, and Jung-Woo Ha. Stargan v2: Diverse image synthesis for multiple domains. In *CVPR*, 2020.

Tarin Clanuwat, Mikel Bober-Irizar, Asanobu Kitamoto, Alex Lamb, Kazuaki Yamamoto, and David Ha.

Deep learning for classical japanese literature. *arXiv*, 2018.

Taco S Cohen, Mario Geiger, and Maurice Weiler. A general theory of equivariant CNNs on homogeneous spaces. In *NeurIPS*, 2019.

Philip J. Davis. *Circulant Matrices*. Wiley, 1979.

Xibin Dong, Zhiwen Yu, Wenming Cao, Yifan Shi, and Qianli Ma. A survey on ensemble learning. *Frontiers* of Computer Science, 14:241–258, 2020.

L. Fortuna, G. Muscato, and M.G. Xibilia. An hypercomplex neural network platform for robot positioning.

In 1996 IEEE International Symposium on Circuits and Systems. Circuits and Systems Connecting the World. (ISCAS 96), 1996.

Mudasir Ahmad Ganaie, Minghui Hu, Ashwani Kumar Malik, M. Tanveer, and Ponnuthurai N. Suganthan.

Ensemble deep learning: A review. *Engineering Applications of Artificial Intelligence*, 115:105151, 2022.

Chase J Gaudet and Anthony S Maida. Deep quaternion networks. In *IJCNN*, pp. 1–8. IEEE, 2018.

Yuka Hashimoto, Isao Ishikawa, Masahiro Ikeda, Fuyuta Komura, Takeshi Katsura, and Yoshinobu Kawahara. Reproducing kernel Hilbert C
∗-module and kernel mean embeddings. *Journal of Machine Learning* Research, 22(267):1–56, 2021.

Yuka Hashimoto, Zhao Wang, and Tomoko Matsui. C
∗-algebra net: a new approach generalizing neural network parameters to C
∗-algebra. In *ICML*, 2022.

Yuka Hashimoto, Masahiro Ikeda, and Hachem Kadri. Learning in RKHM: a C
∗-algebraic twist for kernel machines. In *AISTATS*, 2023.

Akira Hirose. Continuous complex-valued back-propagation learning. *Electronics Letters*, 28:1854–1855, 1992.

Jordan Hoffmann, Simon Schmitt, Simon Osindero, Karen Simonyan, and Erich Elsen. Algebranets.

arXiv:2006.07360, 2020.

Patrick Kidger and Cristian Garcia. Equinox: neural networks in JAX via callable PyTrees and filtered transformations. *Differentiable Programming workshop at Neural Information Processing Systems 2021*,
2021.

Diederik P. Kingma and Jimmy Lei Ba. Adam: a Method for Stochastic Optimization. In *ICLR*, 2015.

Aleksandr A. Kirillov. *Elements of the Theory of Representations*. Springer, 1976.

E. Christopher Lance. *Hilbert* C
∗*-modules - a Toolkit for Operator Algebraists*. London Mathematical Society Lecture Note Series, vol. 210. Cambridge University Press, New York, 1995.

Yann Le Cun, Leon Bottou, Yoshua Bengio, and Patrij Haffner. Gradient-based learning applied to document recognition. *Proceedings of the IEEE*, 86(11):2278–2324, 1998.

ChiYan Lee, Hideyuki Hasegawa, and Shangce Gao. Complex-valued neural networks: A comprehensive survey. *IEEE/CAA Journal of Automatica Sinica*, 9(8):1406–1426, 2022.

Jan Eric Lenssen, Matthias Fey, and Pascal Libuschewski. Group equivariant capsule networks. In *NeurIPS*,
2018.

Gerard J. Murphy. C
∗*-Algebras and Operator Theory*. Academic Press, 1990.

Ikuko Nishikawa, Kazutoshi Sakakibara, Takeshi Iritani, and Yasuaki Kuroe. 2 types of complex-valued hopfield networks and the application to a traffic signal control. In *IJCNN*, 2005.

Tohru Nitta. A quaternary version of the back-propagation algorithm. In *ICNN*, volume 5, pp. 2753–2756, 1995.

Justin Pearson and D.L. Bisset. Neural networks in the Clifford domain. In *ICNN*, 1994.

Jorge Rivera-Rovelo, Eduardo Bayro-Corrochano, and Ruediger Dillmann. Geometric neural computing for 2d contour and 3d surface reconstruction. In *Geometric Algebra Computing*, pp. 191–209. Springer, 2010.

Fabrice Rossi and Brieuc Conan-Guez. Functional multi-layer perceptron: a non-linear tool for functional data analysis. *Neural Networks*, 18(1):45–60, 2005.

David Ruhe, Johannes Brandstetter, and Patrick Forré. Clifford group equivariant neural networks. arXiv:2305.11141, 2023a.

David Ruhe, Jayesh K. Gupta, Steven de Keninck, Max Welling, and Johannes Brandstetter. Geometric Clifford algebra networks. arXiv:2302.06594, 2023b.

Akiyoshi Sannai, Masaaki Imaizumi, and Makoto Kawano. Improved generalization bounds of group invariant / equivariant deep networks via quotient feature spaces. In *UAI*, 2021.

Vincent Sitzmann, Julien N.P. Martel, Alexander W. Bergman, David B. Lindell, and Gordon Wetzstein. Implicit neural representations with periodic activation functions. In *NeurIPS*, 2020.

Sho Sonoda and Noboru Murata. Neural network with unbounded activation functions is universal approximator. *Applied and Computational Harmonic Analysis*, 43(2):233–268, 2017.

Sho Sonoda, Isao Ishikawa, and Masahiro Ikeda. Universality of group convolutional neural networks based on ridgelet analysis on groups. In *NeurIPS*, 2022.

Matthew Tancik, Ben Mildenhall, Terrance Wang, Divi Schmidt, Pratul P. Srinivasan, Jonathan T. Barron, and Ren Ng. Learned Initializations for Optimizing Coordinate-Based Neural Representations. In *CVPR*,
2021.

Barinder Thind, Kevin Multani, and Jiguo Cao. Deep learning with functional inputs. *Journal of Computational and Graphical Statistics*, 32:171–180, 2023.

Chiheb Trabelsi, Olexa Bilaniuk, Ying Zhang, Dmitriy Serdyuk, Sandeep Subramanian, Joao Felipe Santos, Soroush Mehri, Negar Rostamzadeh, Yoshua Bengio, and Christopher J Pal. Deep complex networks. In *ICLR*, 2018.

Paul Vanhaesebrouck, Aurélien Bellet, and Marc Tommasi. Decentralized collaborative learning of personalized models over networks. In *AISTATS*, 2017.

Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms. *arXiv*, 2017.

Yiheng Xie, Towaki Takikawa, Shunsuke Saito, Or Litany, Shiqin Yan, Numair Khan, Federico Tombari, James Tompkin, Vincent Sitzmann, and Srinath Sridhar. Neural fields in visual computing and beyond. *Computer Graphics Forum*, 2022.

Abhishek Yadav, Deepak Mishra, Sudipta Ray, Ram Narayan Yadav, and Prem Kalra. Representation of complex-valued neural networks: a real-valued approach. In *ICISIP*, 2005.

Manzil Zaheer, Satwik Kottur, Siamak Ravanbakhsh, Barnabas Poczos, Russ R Salakhutdinov, and Alexander J Smola. Deep sets. In *NeurIPS*, 2017.

Di Zang, Xihao Chen, Juntao Lei, Zengqiang Wang, Junqi Zhang, Jiujun Cheng, and Keshuang Tang. A multi-channel geometric algebra residual network for traffic data prediction. *IET Intelligent Transport Systems*, 16(11):1549–1560, 2022.

Yu Zhang and Qiang Yang. A survey on multi-task learning. *IEEE Transactions on Knowledge and Data Engineering*, 34(12):5586–5609, 2022.

Xuanyu Zhu, Yi Xu, Hongteng Xu, and Changjian Chen. Quaternion convolutional neural networks. In *ECCV*, 2018.

## A Appendix

## A.1 Implementation Details

We implemented C∗-algebra nets using JAX (Bradbury et al., 2018) with equinox (Kidger & Garcia, 2021) and optax (Babuschkin et al., 2020). For the C∗-algebra net over matrices, we used the Adam optimizer (Kingma & Ba, 2015) with a learning rate of 1.0 × 10−4, except for the 3D NIR experiment, where the initial learning rate of Adam was set to 1.0 × 10−3. For the group C∗-algebra net, we adopted the Adam optimizer with a learning rate of 1.0 × 10−3. We set the batch size to 32 in all experiments except for the 2D NIR experiment, where each batch consisted of all pixels, and the 3D NIR experiment, where a batch size of 4 was used. Listings 1 and 2 illustrate linear layers of C∗-algebra nets in NumPy, equivalent to the implementations used in the experiments in Sections 4.1 and 4.2.
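For concreteness, the optimizer configuration above can be expressed with optax roughly as follows; this is a minimal sketch with illustrative variable names, not an excerpt of our training code.

```
import optax

# Sketch of the optimizer settings described above (names are illustrative).
matrix_net_opt = optax.adam(learning_rate=1.0e-4)  # C*-algebra net over matrices
nir3d_opt = optax.adam(learning_rate=1.0e-3)       # 3D NIR experiment
group_net_opt = optax.adam(learning_rate=1.0e-3)   # group C*-algebra net

# Standard optax update cycle (params and grads are assumed to be PyTrees):
# opt_state = matrix_net_opt.init(params)
# updates, opt_state = matrix_net_opt.update(grads, opt_state)
# params = optax.apply_updates(params, updates)
```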

The implementation of the 3D neural implicit representation (Section 4.1.2) is based on a simple NeRF-like model and its renderer from Tancik et al. (2021). For training, 25 views of each 3D chair from the ShapeNet dataset (Chang et al., 2015) are used with their 64 × 64 pixel reference images. The same C∗-algebra MLPs as in the 2D experiments were used, except for the hyperparameters: four layers and a hidden dimension of 128. The permutation-invariant DeepSet model used in Section 4.2 processes each data sample with a four-layer MLP with hyperbolic tangent activation, followed by sum-pooling and a linear classifier. The permutation-equivariant DeepSet model consists of four permutation-equivariant layers with hyperbolic tangent activation, max-pooling, and a linear classifier, following the point cloud classification setup in Zaheer et al. (2017). Although we also tried leaky ReLU activation, as used in the group C∗-algebra net, this setting yielded sub-optimal results for the permutation-invariant DeepSet. The hidden dimension of the DeepSet models was set to 96 so that their number of floating-point parameters matches that of the group C∗-algebra net.
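The structure of the permutation-invariant DeepSet baseline can be sketched in NumPy as below; the parameter layout (`mlp_params`, `clf_params`) is hypothetical and only meant to illustrate the element-wise MLP, sum-pooling, and linear classifier described above.

```
import numpy as np

def deepset_invariant(mlp_params, clf_params, x):
    """Sketch of a permutation-invariant DeepSet forward pass.
    mlp_params: list of (W, b) pairs for the element-wise MLP
    clf_params: (W, b) of the linear classifier
    x: array of shape {set_size}x{input_dim}
    """
    h = x
    for W, b in mlp_params:
        h = np.tanh(h @ W + b)   # applied independently to each set element
    pooled = h.sum(axis=0)       # sum-pooling makes the output permutation invariant
    W, b = clf_params
    return pooled @ W + b        # class logits
```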

## A.2 Additional Results

Figures 6 and 7 present additional figures for the 2D NIR experiments (Section 4.1.2). Figure 6 is the ukiyo-e counterpart of Figure 3 in the main text. Again, the noncommutative C∗-algebra net learns color details faster than the commutative one. Figure 7 shows PSNR curves averaged over three NIRs of the image, each initialized with a different random seed and trained for 5,000 iterations. Although the advantage is smaller than in the early stage of training, the noncommutative C∗-algebra net still outperforms the commutative one after convergence.

```
import numpy as np

Array = np.ndarray


def matrix_valued_linear(weight: Array,
                         bias: Array,
                         input: Array
                         ) -> Array:
    """
    weight: Array of shape {output_dim}x{input_dim}x{dim_matrix}x{dim_matrix}
    bias: Array of shape {output_dim}x{dim_matrix}x{dim_matrix}
    input: Array of shape {input_dim}x{dim_matrix}x{dim_matrix}
    out: Array of shape {output_dim}x{dim_matrix}x{dim_matrix}
    """
    out = []
    for _weight, b in zip(weight, bias):
        # accumulate the matrix products over the input dimension
        tmp = np.zeros(weight.shape[2:])
        for w, i in zip(_weight, input):
            tmp += w @ i
        # add the matrix-valued bias once per output entry
        out.append(tmp + b)
    return np.array(out)
```

Listing 1: NumPy implementation of a linear layer of a C∗-algebra net over matrices used in Section 4.1.
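For reference, the layer above can be exercised with random matrix-valued inputs as follows; the shapes are illustrative, not those used in the experiments.

```
rng = np.random.default_rng(0)
output_dim, input_dim, dim_matrix = 3, 2, 4
weight = rng.standard_normal((output_dim, input_dim, dim_matrix, dim_matrix))
bias = rng.standard_normal((output_dim, dim_matrix, dim_matrix))
x = rng.standard_normal((input_dim, dim_matrix, dim_matrix))
y = matrix_valued_linear(weight, bias, x)
assert y.shape == (output_dim, dim_matrix, dim_matrix)
```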

![16_image_0.png](16_image_0.png)

Figure 6: Ground truth images and their implicit representations of commutative and noncommutative C∗-algebra nets after 500 iterations of training.

Table 5 and Figure 8 show additional results for the 3D NIR experiments (Section 4.1.2). Table 5 presents the average PSNR of synthesized views. As can be seen from the synthesized views in Figures 4 and 8, the noncommutative C∗-algebra net produces less noisy output, resulting in a higher PSNR.
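The reported PSNR values are the standard peak signal-to-noise ratio; a minimal NumPy sketch, assuming pixel values normalized to [0, max_value], is shown below.

```
import numpy as np

def psnr(pred, target, max_value=1.0):
    # peak signal-to-noise ratio in dB, assuming pixel values in [0, max_value]
    mse = np.mean((pred - target) ** 2)
    return 10.0 * np.log10(max_value ** 2 / mse)
```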

Figure 9 displays test accuracy curves of the group C∗-algebra net and the DeepSet models on the sum-of-digits task for different learning rates. As in Figure 5, which shows the case where the learning rate is 0.001, the group C∗-algebra net converges in far fewer iterations than the DeepSet models over a wide range of learning rates, although the proposed model becomes unstable for a large learning rate of 0.01.

```
from __future__ import annotations

import dataclasses
from itertools import permutations

import numpy as np

Array = np.ndarray


@dataclasses.dataclass
class Permutation:
    # helper class to handle permutations of {0, ..., set_size - 1}
    value: np.ndarray

    def inverse(self) -> Permutation:
        return Permutation(self.value.argsort())

    def __mul__(self, perm: Permutation) -> Permutation:
        return Permutation(self.value[perm.value])

    def __eq__(self, other: Permutation) -> bool:
        return bool(np.all(self.value == other.value))

    def __hash__(self) -> int:
        # required so that permutations can serve as dictionary keys below
        return hash(self.value.tobytes())

    @staticmethod
    def create_hashtable(set_size: int) -> Array:
        # hashtable[x, y] is the index of perms[x]^{-1} * perms[y]
        perms = [Permutation(np.array(p)) for p in permutations(range(set_size))]
        index = {v: i for i, v in enumerate(perms)}
        out = []
        for perm in perms:
            out.append([index[perm.inverse() * _perm] for _perm in perms])
        return np.array(out)


def group_linear(weight: Array,
                 bias: Array,
                 input: Array,
                 set_size: int
                 ) -> Array:
    """
    weight: {output_dim}x{input_dim}x{group_size}
    bias: {output_dim}x{group_size}
    input: {input_dim}x{group_size}
    set_size: number of set elements; group_size = set_size!
    out: {output_dim}x{group_size}
    """
    hashtable = Permutation.create_hashtable(set_size)  # {group_size}x{group_size}
    g = np.arange(hashtable.shape[0])
    out = []
    for _weight in weight:
        tmp0 = []
        for y in g:
            tmp1 = []
            for w, f in zip(_weight, input):
                # group convolution: sum over x of w(x) * f(x^{-1} y)
                tmp2 = []
                for x in g:
                    tmp2.append(w[x] * f[hashtable[x, y]])
                tmp1.append(sum(tmp2))
            tmp0.append(sum(tmp1))
        out.append(tmp0)
    return np.array(out) + bias
```

Listing 2: NumPy implementation of a group C∗-algebra linear layer used in Section 4.2.
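As a usage illustration with arbitrary shapes, the layer above can be called as follows.

```
set_size = 3                    # permutations of three elements, so group_size = 3! = 6
group_size = 6
output_dim, input_dim = 4, 5
rng = np.random.default_rng(0)
weight = rng.standard_normal((output_dim, input_dim, group_size))
bias = rng.standard_normal((output_dim, group_size))
x = rng.standard_normal((input_dim, group_size))
y = group_linear(weight, bias, x, set_size)
assert y.shape == (output_dim, group_size)
```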

Table 5: Average PSNR over synthesized views. The poses of these views are unseen during training.

| Commutative   | Noncommutative   |
|---------------|------------------|
| 18.40 ± 4.30  | 25.22 ± 1.45     |

![18_image_0.png](18_image_0.png)

Figure 7: Average PSNR over implicit representations of the image of commutative and noncommutative C∗-algebra nets trained on five cat pictures (top) and reconstructions of the ground truth image at every 500 iterations (bottom).

Figure 8: Synthesized views of implicit representations of a chair (commutative and noncommutative).

![18_image_1.png](18_image_1.png)

Figure 9: Test accuracy curves of the group C∗-algebra net and DeepSet models for different learning rates.